Orchestrate¶
Teflo’s orchestrate section declares the configuration to be performed in order to test the systems properly.
First lets go over the basic structure that defines a configuration task.
---
orchestrate:
- name:
description:
orchestrator:
hosts:
The above code snippet is the minimal structure required to create an orchestrate task within teflo. This task is translated into a teflo action object, which is part of the teflo compound. You can learn more about this at the architecture page. Please see the table below to understand the key/values defined.
| Key | Description | Type | Required | Default |
|---|---|---|---|---|
| name | The name of the action you want teflo to execute | String | Yes | n/a |
| description | A description of what the resource is trying to accomplish | String | No | n/a |
| orchestrator | The orchestrator to use to execute the action (name) you defined above | String | No (best practice to define this!) | ansible |
| hosts | The list of hosts teflo will execute the action against | List | Yes | n/a |
| environment_vars | Additional environment variables to be passed during the orchestrate task | Dictionary | No | Environment variables set prior to starting the teflo run are available |
Hosts¶
You can associate hosts with a given orchestrate task in a couple of different ways. The first is to define your hosts in a comma separated string.
---
orchestrate:
- name: register_task
hosts: host01, host02
ansible_playbook:
name: rhsm_register.yml
You can also define your hosts as a list.
---
orchestrate:
- name: register_task
hosts:
- host01
- host02
ansible_playbook:
name: rhsm_register.yml
It can become tedious to declare an orchestrate task that needs to be performed on multiple or all hosts within the scenario when you have many hosts declared. Teflo provides the ability to run against a group of hosts or all hosts. To run against multiple hosts, use the name defined in the groups key for your hosts, or use all to run against all hosts. This eliminates the need to define every host for each task. It can be either in string or list format.
---
orchestrate:
- name: register_task
hosts: all
ansible_playbook:
name: rhsm_register.yml
---
orchestrate:
- name: task1
hosts: clients
ansible_playbook:
name: rhsm_register.yml
Re-running Tasks and Status Code¶
You may notice in your results.yml that each orchestrate block has a new parameter
status: 0
When teflo runs any of the defined orchestration tasks successfully, a status code of 0 is set. If an orchestration task fails, teflo sets the status to 1. The next time you re-run the orchestrate task, teflo checks for any orchestration tasks with a failed status and starts the orchestration process from there.
This is useful when you have a long configuration process and you don’t want to start over from the very beginning. If at some point you would rather have the orchestration process start from the beginning, you can modify the status code back to 0.
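As a sketch, a failed orchestrate block in results.yml might look like the following; everything besides the status field is illustrative, modeled on the examples above:

```yaml
# Hypothetical results.yml fragment. On the next run, teflo resumes from
# the first orchestrate block whose status is 1 (failed).
orchestrate:
- name: register_task
  orchestrator: ansible
  hosts: all
  ansible_playbook:
    name: rhsm_register.yml
  status: 1   # failed; set back to 0 to start orchestration from the beginning
```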
Teflo’s development model is plug and play, which means different orchestrators can be used to execute the declared configuration tasks. Ansible is Teflo’s default orchestrator. Its information can be found below.
Ansible¶
Ansible is teflo’s default orchestrator. As mentioned above, each task has a given name (action). This name is the orchestrate task name.
Teflo uses the keywords ansible_playbook, ansible_script, and ansible_shell to determine whether to run an ansible playbook, script, or shell command, respectively. Please refer here to get an idea of how to use these keys.
In addition to the required orchestrate base keys, there are more you can define based on your selected orchestrator. Let’s dive into them.
| Key | Description | Type | Required | Default |
|---|---|---|---|---|
| ansible_options | Additional options to provide to the ansible orchestrator regarding the task (playbook) to be executed | Dictionary | No | n/a |
| ansible_galaxy_options | Additional options to provide to the ansible orchestrator regarding ansible roles and collections | Dictionary | No | n/a |
| ansible_script | Script to be executed | Dictionary | No; however, one of ansible_shell/ansible_script/ansible_playbook must be defined | False |
| ansible_playbook | Playbook to be run | Dictionary | No; however, one of ansible_shell/ansible_script/ansible_playbook must be defined | False |
| ansible_shell | Shell commands to be run | List of dictionaries | No; however, one of ansible_shell/ansible_script/ansible_playbook must be defined | False |
The table above describes additional key:values you can set within your orchestrate task. Each of those keys can accept additional key:values.
Use Ansible group_vars¶
Ansible can set variables for each host in different ways; one of them is using a group_vars file.
---
ansible_user: fedora
Note
For more information read from Ansible Docs.
Teflo will look for the group_vars dir inside workspace/ansible:
workspace/ansible/group_vars/example
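For instance, assuming a host group named example (as in the path above), the workspace might be laid out like this; the scenario file name is illustrative:

```text
workspace/
├── scenario.yml                # illustrative scenario descriptor name
└── ansible/
    └── group_vars/
        └── example             # e.g. "ansible_user: fedora" for hosts in group "example"
```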
Use Playbook Within A Collection¶
We can use a playbook within a collection via its Fully Qualified Collection Name (FQCN). When running a playbook using an FQCN, Teflo will first check if the collection exists and will try to download it if needed.
Example¶
Let’s first call the playbook in our orchestrate task:
# use FQCN and collection install
- name: Example 1 # action name
description: "use fqcn" # describes what is being performed on the hosts
orchestrator: ansible # orchestrator module to use in this case ansible
hosts: # hosts which the action is executed on
- all # ref above ^^ to all hosts : provision.*
ansible_playbook:
name: namespace.collection1.playbook1 # playbook name(Using FQCN)
ansible_galaxy_options:
role_file: requirements.yml # A .yml file to describe collection(name,type,version)
The requirements.yml should look like:
---
collections:
- name: https://github.com/collection/path
type: git
version: main
Note
For more information read from Ansible Docs.
By default Teflo will install collections under “workspace/collections/”. To change the default, use the ansible.cfg file:
collections_paths = ./wanted_coll_path
Teflo Ansible Configuration¶
In the teflo configuration file, you can set some options related to ansible. These values should be set in the [orchestrator:ansible] section of the teflo.cfg file. The following are the settings.
| Key | Description | Default |
|---|---|---|
| log_remove | Configuration option to delete the ansible log file after configuration is complete. Either way the ansible log will be moved to the user’s output directory. | True (the log file is deleted) |
| verbosity | Configuration option to set the verbosity of ansible. | Teflo sets the ansible verbosity to the value provided by this option. If it is not set, teflo sets the verbosity based on teflo’s logging level: if the logging level is ‘info’ (default), ansible verbosity is set to None; if the logging level is ‘debug’, ansible verbosity is ‘vvvv’. |
Note
Teflo can consume the Ansible verbosity level from Ansible’s built-in environment variable ANSIBLE_VERBOSITY, in addition to it being defined within the teflo.cfg file. If the verbosity value within teflo.cfg is invalid, teflo will default to the verbosity based on teflo’s logging level.
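A minimal teflo.cfg sketch combining the settings described above; the values shown are illustrative:

```ini
[orchestrator:ansible]
# keep the ansible log file (it is still moved to the output directory)
log_remove = False
# run ansible with -vv; if invalid, teflo falls back to its logging level
verbosity = vv
```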
Ansible Configuration¶
It is highly recommended that every scenario that uses Ansible provide their own ansible.cfg file. This can be used for specific connection requirements, logging, and other settings for the scenario. The following is an example of a configuration file that can be used as a base.
[defaults]
# disable strict SSH key host checking
host_key_checking = False
# filter out logs that are not ansible related
log_filter = paramiko,pykwalify,teflo,blaster,urllib3
# set the path to set ansible logs
log_path = ./ansible.log
# set specific privelege escalation if necessary for the scenario
[privilege_escalation]
become=True
become_method=sudo
become_user=test
To see all of the settings that can be set, see Ansible Configuration Settings.
Ansible Logs¶
To get ansible logs, you must set the log_path in the ansible.cfg, and it is recommended to set the log_filter in the ansible.cfg as described to filter out non ansible logs. If you do not set the log path or don’t provide an ansible.cfg, you will not get any ansible logs. The ansible log will be added to the ansible_orchestrate folder under the logs folder of teflo’s output, please see Teflo Output for more details.
Using ansible_script¶
The orchestrate task uses the ansible playbook module to run user provided scripts. The script name can be given within the name key of the ansible_script list of dictionaries.
The script parameters can be provided along with the name of the script, separated by a space.
Note
The script parameters can also be passed using the ansible_options key, but this will be deprecated in future releases. See Example 15.
Extra_args for the script can be provided as part of the ansible_script list of dictionaries or under ansible_options. Please see Extra_args, Example 13 and Example 14.
Using ansible_shell¶
The orchestrate task uses the ansible shell module to run user provided shell commands. ansible_shell takes in a list of dictionaries for the different commands to be run. The shell command can be provided under the command key of the ansible_shell list of dictionaries. Extra_args for the shell command can be provided as part of the ansible_shell list of dictionaries or under ansible_options. Please see Extra_args, Example 12.
When building your shell commands it is important to take into consideration that there are multiple layers the command is being passed through before being executed. The two main things to pay attention to are YAML syntax/escaping and Shell escaping.
When writing the command in the scenario descriptor file it needs to be written in a way that both Teflo and Ansible can parse the YAML properly. From a Teflo perspective this is when the scenario descriptor is first loaded. From an Ansible perspective it’s when we pass the playbook we create, cbn_execute_shell.yml, through to the ansible-playbook CLI.
Then there could be further escapes required to preserve the test command so it can be interpreted by the shell properly. From a Teflo perspective that is when we pass the test command to the ansible-playbook CLI on the local shell using the -e “xcmd=’<test_command>’” parameter. From the Ansible perspective it’s when the shell module executes the actual test command using the shell on the designated system.
Let’s go into a couple of examples.
ansible_shell:
- command: glusto --pytest='-v tests/test_sample.py --junitxml=/tmp/SampleTest.xml'
--log /tmp/glusto_sample.log
On the surface the above command will pass YAML syntax parsing but will fail when actually executing the command on the shell. That is because the command is not preserved properly on the shell when it comes to the --pytest option being passed in. In order to get this to work you could escape it in one of two ways so that the --pytest option is preserved.
ansible_shell:
- command: glusto --pytest=\\\"-v tests/test_sample.py --junitxml=/tmp/SampleTest.xml\\\"
--log /tmp/glusto_sample.log
ansible_shell:
- command: glusto \\\"--pytest=-v tests/test_sample.py --junitxml=/tmp/SampleTest.xml\\\"
--log /tmp/glusto_sample.log
Here is a more complex example
ansible_shell:
- command: if [ `echo \$PRE_GA | tr [:upper:] [:lower:]` == 'true' ];
then sed -i 's/pre_ga:.*/pre_ga: true/' ansible/test_playbook.yml; fi
By default this will fail to be parsed by YAML as improper syntax. The rule of thumb is if your unquoted YAML string has any of the following special characters :-{}[]!#|>&%@ the best practice is to quote the string. You have the option to either use single quote or double quotes. There are pros and cons to which quoting method to use. There are online resources that go further into this topic.
Once the string is quoted, you now need to make sure the command is preserved properly on the shell. Below are a couple of examples of how you could achieve this using either a single quoted or double quoted YAML string
ansible_shell:
- command: 'if [ \`echo \$PRE_GA | tr [:upper:] [:lower:]\` == ''true'' ];
then sed -i \"s/pre_ga:.*/pre_ga: true/\" ansible/test_playbook.yml; fi'
ansible_shell:
- command: "if [ \\`echo \\$PRE_GA | tr [:upper:] [:lower:]\\` == \\'true\\' ];
then sed \\'s/pre_ga:.*/pre_ga: true/\\' ansible/test_playbook.yml; fi"
Note
It is NOT recommended to output verbose logging to standard output for long running tests as there could be issues with teflo parsing the output
Using ansible_playbook¶
Using the ansible_playbook parameter you can provide the playbook to be run. The name of the playbook can be provided via the name key under the ansible_playbook list of dictionaries.
Note
Unlike the shell or script parameter, the test playbook executes locally from where teflo is running, which means the test playbook must be in the workspace.
Note
Only one action type, either ansible_playbook or ansible_script or ansible_shell is supported per orchestrate task
Extra_args for script and shell¶
Teflo supports the following parameters used by the ansible script and shell modules:

| Parameters |
|---|
| chdir |
| creates |
| decrypt |
| executable |
| removes |
| warn |
| stdin |
| stdin_add_newline |

Please look here for more info.
vault-password-file¶
The vault-password-file can be passed using vault-password-file under ansible_options
---
orchestrate:
- name: rhsm_register.yml
description: "register systems under test against rhsm"
orchestrator: ansible
hosts: all
ansible_options:
vault-password-file:
- "./vaultpass"
Extra_vars¶
Extra variables needed by ansible playbooks can be passed using extra_vars key under the ansible_options section
---
orchestrate:
- name: rhsm_register.yml
description: "register systems under test against rhsm"
orchestrator: ansible
hosts: all
ansible_options:
extra_vars:
username: kingbob
password: minions
server_hostname: server01.example.com
auto_attach: true
Use the file key to pass a variable file to the playbook. This file needs to be present in teflo’s workspace. The file key can be a single file given as a string, or a list of variable files present in teflo’s workspace.
---
orchestrate:
- name: rhsm_register.yml
description: "register systems under test against rhsm"
orchestrator: ansible
hosts: all
ansible_options:
extra_vars:
file: variable_file.yml
---
orchestrate:
- name: rhsm_register.yml
description: "register systems under test against rhsm"
orchestrator: ansible
hosts: all
ansible_options:
extra_vars:
file:
- variable_file.yml
- variable_1_file.yml
Note
Teflo can make the variable files declared in the default locations below available to be passed as extra_vars to the ansible playbook in the orchestrate and execute stages:
defaults section of teflo.cfg
var_file.yml under the teflo workspace
yml files under the directory vars under teflo workspace
This can be done by setting the following property to True in the defaults section of the teflo.cfg
[defaults]
ansible_extra_vars_files=true
Example:
Here the default variable file my_default_variable_file.yml is made available as a variable file to be passed as extra_vars to the ansible playbooks being run in the execute and orchestrate stages. If variable file(s) are already being passed to the ansible playbook as part of ansible_options, this setting will append the default variable files to that list. In the example below, for the orchestrate stage, the file my_default_variable_file.yml is passed along with variable.yml as extra_vars.
[defaults]
var_file=./my_default_variable_file.yml
ansible_extra_vars_files=true

---
orchestrate:
- name: playbook_2
  description: "run orchestrate step using file key as extra_vars"
  orchestrator: ansible
  hosts: localhost
  ansible_playbook:
    name: ansible/var_test1.yml
  ansible_options:
    extra_vars:
      file: variable.yml

execute:
- name: playbook_3
  description: "run orchestrate step using file key as extra_vars"
  executor: runner
  hosts: localhost
  playbook:
  - name: ansible/template_host_test_playbook_tasks.yml
Ansible Galaxy¶
Before teflo initiates the ansible-playbook command, it will attempt to download any roles or collections based on what is configured within the ansible_galaxy_options for the given task. Teflo downloads these dependencies using the ansible-galaxy command. In the event the command fails for any reason, teflo will retry the download. A maximum of 2 attempts will be made with a 30 second delay between attempts. Teflo will stop immediately when it is unable to download the roles, reducing potential playbook failures at a later point.
Examples¶
Let’s dive into a couple of different examples.
Example 1¶
You have a playbook which needs to run against x number of hosts and does not require any additional extra variables.
---
orchestrate:
- name: register_task
description: "register systems under test against rhsm"
orchestrator: ansible
ansible_playbook:
name: rhsm_register.yml
hosts:
- host01
- host02
Example 2¶
You have a playbook which needs to run against x number of hosts and requires additional extra variables.
---
orchestrate:
- name: register_task
description: "register systems under test against rhsm"
orchestrator: ansible
hosts:
- host01
- host02
ansible_playbook:
name: rhsm_register.yml
ansible_options:
extra_vars:
username: kingbob
password: minions
server_hostname: server01.example.com
auto_attach: true
Example 3¶
You have a playbook which needs to run against x number of hosts and requires only tasks with a tag set to prod.
---
orchestrate:
- name: custom
description: "running a custom playbook, only running tasks when tag=prod"
orchestrator: ansible
hosts:
- host01
ansible_playbook:
name: custom.yml
ansible_options:
tags:
- prod
Example 4¶
You have a playbook which needs to run against x number of hosts, requires only tasks with a tag set to prod, and requires connection settings that conflict with your ansible.cfg.
---
orchestrate:
- name: custom2
description: "custom playbook, w/ different connection options"
orchestrator: ansible
hosts:
- host07
ansible_playbook:
name: custom2.yml
ansible_options:
forks: 2
become: True
become_method: sudo
become_user: test_user2
remote_user: test_user
connection: paramiko
tags:
- prod
Example 5¶
You have a playbook which needs to run against x number of hosts and requires an ansible role to be downloaded.
Note
Although the option is called role_file, it relates to both roles and collections.
---
orchestrate:
- name: register_task
description: "register systems under test against rhsm"
orchestrator: ansible
ansible_playbook:
name: rhsm_register.yml
hosts:
- host01
- host02
ansible_galaxy_options:
role_file: requirements.yml
Content of requirements.yml as a dictionary, suitable for both roles and collections:
---
roles:
- src: oasis-roles.rhsm
collections:
- name: geerlingguy.php_roles
- geerlingguy.k8s
As you can see we defined the role_file key. This defines the ansible requirements filename. Teflo will consume that file and download all the roles and collections defined within.
Note
We can define roles in the requirements file as a list or as a dictionary; Teflo supports both ways. However, if we choose to set roles as a list, we can’t set collections in the same file.
Content of requirements.yml file as a list, only suitable for roles:
---
- src: oasis-roles.rhsm
- src: https://gitlab.cee.redhat.com/PIT/roles/junit-install.git
scm: git
An alternative to using the requirements file is to define the dependencies directly using the roles or collections key.
---
orchestrate:
- name: register_task
description: "register systems under test against rhsm"
orchestrator: ansible
hosts:
- host01
- host02
ansible_playbook:
name: rhsm_register.yml
ansible_galaxy_options:
roles:
- oasis-roles.rhsm
- git+https://gitlab.cee.redhat.com/oasis-roles/coreos_infra.git,master,oasis_roles.coreos_infra
collections:
- geerlingguy.php_roles
- geerlingguy.k8s
It is possible to define both role_file and direct definitions. Teflo will install the roles and collections first from the role_file and then the roles and collections defined using the keys. It is up to the scenario to ensure no problems may occur if both are defined.
Note
If your scenario directory has roles and collections already defined, you do not need to define them. This is only if you want teflo to download roles or collections from sites such as ansible galaxy, external web servers, etc.
Example 6¶
You have a playbook which needs to run against x number of hosts, requires ansible roles to be downloaded and requires additional extra variables.
---
orchestrate:
- name: register_task
description: "register systems under test against rhsm"
orchestrator: ansible
hosts:
- host01
- host02
ansible_playbook:
name: rhsm_register.yml
ansible_options:
extra_vars:
username: kingbob
password: minions
server_hostname: server01.example.com
auto_attach: true
ansible_galaxy_options:
role_file: roles.yml
Attention
Every scenario processed by teflo should define an ansible configuration file. This provides the scenario with the flexibility to easily control portions of ansible.
If you are using teflo’s ability to download roles or collections, you need to set the roles_path or the collections_paths within your ansible.cfg. If this is not set, the default collections path “<your workspace>/collections/” will be selected.
Here is an example ansible.cfg setting the roles_path and collections_paths to a relative path within the scenario directory.
[defaults]
host_key_checking = False
retry_files_enabled = False
roles_path = ./roles
collections_paths = ./collections
Example 7¶
You have a playbook which needs to run against x number of hosts, and prior to deleting the configured hosts, you want to run a playbook to do some post tasks.
---
orchestrate:
- name: register_task
description: "register systems under test against rhsm"
orchestrator: ansible
hosts: all
ansible_playbook:
name: rhsm_register.yml
ansible_options:
extra_vars:
username: kingbob
password: minions
server_hostname: server01.example.com
auto_attach: true
ansible_galaxy_options:
role_file: roles.yml
cleanup:
name: unregister_task
description: "unregister systems under tests from rhsm"
orchestrator: ansible
hosts: all
ansible_playbook:
name: rhsm_unregister.yml
ansible_galaxy_options:
role_file: roles.yml
Example 8¶
The following is an example of running a script on the localhost. For localhost usage refer to the localhost page.
---
orchestrate:
- name: orc_script
description: create a local dir
ansible_script:
name: scripts/create_dir.sh
hosts: localhost
orchestrator: ansible
Example 9¶
The following builds on the previous example, by showing how a user can add options to the script they are executing (In the example below, the script is run with options as create_dir.sh -c -e 12).
---
orchestrate:
- name: orc_script
description: creates a local dir
ansible_options:
extra_args: -c -e 12
ansible_script:
name: scripts/create_dir.sh
hosts: localhost
orchestrator: ansible
Example 10¶
Again building on the previous example, you can set more options to the script execution. The script is executed using the script ansible module, so you can set options the script module has. The example below uses the chdir option. You can also set other ansible options, and the example below sets the remote_user option.
To see all script options see ansible’s documentation here.
---
orchestrate:
- name: orc_script
description: creates a remote dir
ansible_options:
remote_user: cloud-user
extra_args: -c -e 12 chdir=/home
ansible_script:
name: scripts/create_dir.sh
hosts: host01
orchestrator: ansible
Example 11¶
You have a playbook which needs to run against x number of hosts and requires skipping tasks with a tag set to ssh_auth and requires extra variables.
---
orchestrate:
- name: orc_task_auth
description: "setup key authentication between driver and clients"
orchestrator: ansible
hosts: driver
ansible_playbook:
name: ansible/ssh_connect.yml
ansible_options:
skip_tags:
- ssh_auth
extra_vars:
username: root
password: redhat
ansible_galaxy_options:
role_file: roles.yml
Example 12¶
Example running playbooks, scripts, and shell commands as part of an orchestrate task.
---
- name: orchestrate_1
description: "orchestrate1"
orchestrator: ansible
hosts: localhost
ansible_playbook:
name: ansible/list_block_devices.yml
- name: orchestrate_2
description: "orchestrate2"
orchestrator: ansible
hosts: localhost
ansible_shell:
- chdir: ./test_sample_artifacts
command: ls
- chdir: ./test_sample_artifacts
command: cp a.txt b.txt
- name: orchestrate_3
description: "orchestrate3"
orchestrator: ansible
hosts: localhost
ansible_script:
name: ./scripts/helloworld.py Teflo_user
executable: python
Example 13¶
Example using ansible_script with extra args within the ansible_script list of dictionaries, and the script parameter in the name field.
---
- name: orchestrate_1
description: "orchestrate1"
orchestrator: ansible
hosts: localhost
ansible_script:
name: ./scripts/helloworld.py Teflo_user
executable: python
Example 14¶
Example using ansible_script with extra args as part of ansible_options.
---
- name: orchestrate_1
description: "orchestrate1"
orchestrator: ansible
hosts: localhost
ansible_script:
name: ./scripts/helloworld.py Teflo_user
ansible_options:
extra_args: executable=python
Example 15¶
Example to use ansible_script and using ansible_options: extra_args to provide the script parameters
---
- name: scripta_task
description: ""
orchestrator: ansible
hosts: controller
ansible_script:
name: scripts/add_two_numbers.sh
ansible_options:
extra_args: X=12 Y=18
Example 16¶
Example using environment_vars to pass variables to the ansible playbook/script/command. Variables X and Y are available during the script execution and can be retrieved for additional logic within the script.
---
- name: scripta_task
description: ""
orchestrator: ansible
hosts: controller
ansible_script:
name: scripts/add_two_numbers.sh
environment_vars:
X: 12
Y: 18
Resources¶
For system configuration & product installs use roles from: Oasis Roles