Configure Teflo

This is a mandatory configuration file where you set your credentials; it also offers many optional settings to help you adjust your usage of Teflo. The credentials are the only mandatory part of the configuration file. The defaults for most other settings should be sufficient; however, please read through the available options.

Where it is loaded from (using precedence low to high):

  1. /etc/teflo/teflo.cfg

  2. ./teflo.cfg (current working directory)

  3. TEFLO_SETTINGS environment variable set to the location of the file

Important

It is important to realize that if you have configuration files set through more than one of these options, the files will be combined, and common keys will be overridden by the higher-precedence option, with the TEFLO_SETTINGS environment variable taking the highest precedence.
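This last-wins merge behavior can be illustrated with Python's standard configparser module, which applies sources in order so that a key read later overrides the same key read earlier (the section and keys below mirror the Teflo example; this is an illustrative sketch, not Teflo's actual loader):

```python
import configparser

# configparser merges sources in the order they are read, so a key from a
# later source overrides the same key from an earlier one -- the same
# low-to-high precedence rule described above.
cfg = configparser.ConfigParser()

# lower-precedence source, e.g. /etc/teflo/teflo.cfg
cfg.read_string("[defaults]\nlog_level=info\ndata_folder=/var/local/teflo\n")

# higher-precedence source, e.g. the file named by TEFLO_SETTINGS
cfg.read_string("[defaults]\nlog_level=debug\n")

print(cfg["defaults"]["log_level"])    # common key: higher precedence wins
print(cfg["defaults"]["data_folder"])  # unique key: survives the merge
```

Keys that appear in only one file survive the merge unchanged; only common keys are overridden.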

Configuration example (with all options):

# teflo config file
# ==================

# the config file provides an additional way to define teflo parameters

# the config file is searched for in the order below. a configuration
# setting will be overridden if another source is found later
#   1. /etc/teflo/teflo.cfg
#   2. ./teflo.cfg (current working directory)
#   3. TEFLO_SETTINGS (environment variable)

# default settings

[defaults]
log_level=debug
# Path for teflo's data folder where teflo logs will be stored
data_folder=/var/local/teflo
workspace=.
# Endpoint URL of Cachet Status page
# Cachet status page.
resource_check_endpoint=<endpoint_url_for_dependency_check>
# A teflo run exits when a task in a scenario fails. To continue the run
# in spite of a task failure, set the skip_fail parameter to true in
# teflo.cfg or pass it using the cli.
skip_fail=False
#
# A static inventory path can be used for the ansible inventory file.
# It can be a relative path in the teflo scenario workspace
# inventory_folder=static/inventory
#
# Can be a directory in the user $HOME path
# inventory_folder=~/scenario/static/inventory
#
# Can be an absolute path
# inventory_folder=/test/scenario/static/inventory
#
# Can be a path containing an environment variable
# inventory_folder=$WORKSPACE/scenario/static/inventory
# default value of the inventory folder is 'TEFLO_DATA_FOLDER/.results/inventory'
inventory_folder=<path for ansible inventory files>
# credentials file and Vault password
# A user can set all the credential information in a text file and encrypt it using ansible vault.
# Provide the path to that file under CREDENTIAL_PATH and the vault password under VAULTPASS.
# The vault password can also be exported as an environment variable
CREDENTIAL_PATH=<path for the credentials txt file>
VAULTPASS=<ansible vault password>


# time out for each stage
# you can set the timeout value for each of teflo's stages (validation, provision, orchestrate, execute, report and cleanup)
[timeout]
provision=500
cleanup=1000
orchestrate=300
execute=200
report=100
validate=10


# credentials settings

[credentials:beaker-creds]
hub_url=<hub_url>
keytab=<keytab>
keytab_principal=<keytab_principal>
username=<username>
password=<password>

[credentials:openstack-creds]
auth_url=<auth_url>
tenant_name=<tenant_name>
username=<username>
password=<password>
domain_name=<domain_name>

# orchestrator settings

[orchestrator:ansible]
# remove ansible log
log_remove=False
# set the verbosity
# this option will override the max verbosity when log level is set to debug.
verbosity=vv

[task_concurrency]
# this controls whether tasks (provision, orchestrate, execute, report) are
# executed by Teflo in parallel or sequentially.
# When set to False the task will execute sequentially.
provision=False


# executor settings

[executor:runner]
# set testrun_results to false if you don't want the test run results from the
# xml files collected during execution to be included in the logs
testrun_results=False
# Teflo by default will NOT exit if the artifact collection task fails. To exit the run on an error during
# collection of artifacts, set this field to True; otherwise set it to False or omit the field.
exit_on_error=True

# Teflo Alias
[alias]
dev_run=run -s scenario.yml --log-level debug --iterate-method by_depth
prod_run=show -s scenario.yml --list-labels

Note

Many of the configuration options can be overridden by passing cli options when running teflo. See the options in the running teflo example.

Using Jinja Variable Data

Teflo uses the Jinja2 template engine to template variables within the teflo.cfg file. Teflo allows the template variable data to be set as environment variables.

Here is an example teflo.cfg file using Jinja to template some variable data:

[credentials:openstack]
auth_url=<auth_url>
username={{ OS_USER }}
password={{ OS_PASSWORD }}
tenant_name={{ OS_TENANT }}
domain_name=redhat.com

[task_concurrency]
provision=True
report=False
orchestrate={{ ORC_TASK_CONCURRENCY }}

Prior to running teflo, the templated variables have to be exported:

export OS_USER=user1
export OS_PASSWORD=password
export OS_TENANT=project1
export ORC_TASK_CONCURRENCY=True
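How the exported variables end up in the rendered config can be sketched with the Jinja2 library directly (this is an illustrative sketch of the templating step, not Teflo's internal code, and it assumes the third-party jinja2 package is installed):

```python
import os
from jinja2 import Template  # third-party: pip install Jinja2

# Stand-ins for `export OS_USER=user1` etc. before the run.
os.environ["OS_USER"] = "user1"
os.environ["OS_PASSWORD"] = "password"

raw = (
    "[credentials:openstack]\n"
    "username={{ OS_USER }}\n"
    "password={{ OS_PASSWORD }}\n"
)

# Render the template using the process environment as the variable context,
# so each {{ VAR }} placeholder is replaced by the exported value.
rendered = Template(raw).render(**os.environ)
print(rendered)
```

Any `{{ VAR }}` placeholder without a matching exported variable renders as an empty string by default, so make sure every templated variable is exported before the run.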

inventory_folder

The inventory_folder option is not required, but it is important enough to note its usage. By default, teflo will create an inventory directory containing ansible inventory files in its data directory. These are used during orchestration and execution. Refer to the Teflo Output page.

Sometimes this is not the desired behavior. This option allows a user to specify a static, known directory where Teflo can place the ansible inventory files. If the specified directory does not exist, teflo will create it and place the ansible inventory files there. If it does exist, teflo will only place the ansible files in the directory. Teflo will then use this static directory during orchestration and execution.
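The relative, home, absolute, and environment-variable forms of inventory_folder shown in the config example can all be normalized with Python's standard path helpers; the following sketch (with illustrative paths, not Teflo internals) shows one way such values could be resolved:

```python
import os

# Stand-in for an exported CI variable, e.g. `export WORKSPACE=/tmp/ws`.
os.environ["WORKSPACE"] = "/tmp/ws"

def resolve(path):
    # expand $VARS first, then ~, then make the result absolute
    return os.path.abspath(os.path.expanduser(os.path.expandvars(path)))

print(resolve("$WORKSPACE/scenario/static/inventory"))  # env-var form
print(resolve("/test/scenario/static/inventory"))       # absolute form
```

A relative form like `static/inventory` would resolve against the current working directory, and `~/scenario/static/inventory` against the user's home directory.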

task_concurrency

The task_concurrency option controls whether Teflo executes tasks sequentially or in parallel/concurrently. Below is the default execution type for each of the Teflo tasks:

Key           Concurrent   Type

validate      True         String
provision     True         String
orchestrate   False        String
execute       False        String
report        False        String

There are cases where it makes sense to adjust the execution type. Below are some examples:

There are cases when provisioning assets of different types where there is an inter-dependency, so executing the tasks in parallel will not suffice, e.g. provisioning a virtual network and a VM attached to that network. In that case, set provision=False and arrange the assets in the scenario descriptor file in the proper sequential order.

There are cases when you need to import the same test artifact into separate reporting systems, but one reporting system needs the data in the test artifact to be modified with metadata before it can be imported, e.g. modify and import into Polarion with Polarion metadata, then import that same artifact into Report Portal. In that case, set report=False and arrange the resources defined in the scenario descriptor file in the proper sequential order.

There could be a case where you would like to execute two different test suites concurrently because they have no dependency on, and no effect on, each other. In that case, set execute=True to have them run concurrently.
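The adjustments described above could be combined in a single teflo.cfg section like this (values illustrative, chosen for the dependent-provision and independent-execute cases):

```
[task_concurrency]
# the VM depends on the virtual network, so provision sequentially
provision=False
# the test suites are independent, so run them concurrently
execute=True
```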