Including Scenarios
Overview
The __*Include*__ section provides a way to include common steps of provisioning, orchestration, execution, or reporting in one or more common scenario files. This reduces the redundancy of putting the same set of steps in every scenario file. Each scenario file is a single node of the whole __*Scenario Graph*__.
When running a scenario that uses the include option, several results files will be generated, one for each of the scenarios. Each included scenario's results file will use that scenario's name as a prefix, e.g. common_scenario_results.yml, where common_scenario is the name of the included scenario file. All of these files are stored in the same location. This allows users to run common.yml(s) once and include their result(s) in other scenario files, saving time on test executions. Also see Teflo Output
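For example, if a parent scenario includes a file named common_scenario.yml, a run might leave results laid out like the sketch below (illustrative only; the exact data folder path and parent results filename depend on your teflo.cfg):

```
.results/
├── results.yml                    # results for the parent scenario
└── common_scenario_results.yml    # results for the included scenario
```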
Note
For any given task, the included scenario is checked and executed first, followed by the parent scenario. For example, for the Orchestrate task, if you have an orchestrate section in both the included and the main scenario, the orchestrate tasks in the included scenario will be performed first, followed by the orchestrate tasks in the main scenario.
Example 1
You want to separate out provision of a set of resources because this is a common resource used in all of your scenarios.
---
name: Include_example1
description: Descriptor file with include section
resource_check:
include:
  - provision.yml
orchestrate:
.
.
.
execute:
.
.
.
report:
.
.
.
The provision.yml could look like below
---
name: common-provision
description: 'common provisioning of resources used by the rest of the scenarios.'
provision:
  - name: ci_test_client_b
    groups:
      - client
      - vnc
    provisioner: beaker-client
Example 2
You want to separate out provision and orchestrate because this is common configuration across all your scenarios.
---
name: Include_example2
description: Descriptor file with include section
include:
  - provision.yml
  - orchestrate.yml
execute:
.
.
.
report:
.
.
.
The orchestrate.yml could look like below
---
orchestrate:
  - name: orc_script
    description: creates a local dir
    ansible_options:
      extra_args: -c -e 12
    ansible_script:
      name: scripts/create_dir.sh
    hosts: localhost
    orchestrator: ansible
Example 3
You’ve already provisioned a resource from a scenario that contained just the provision section, and you want to include its results.yml in another scenario.
---
name: Include_example3
description: Descriptor file with include section
include:
  - /var/lib/workspace/teflo_data/.results/common-provision_results.yml
orchestrate:
.
.
.
execute:
.
.
.
report:
.
.
.
The common-provision_results.yml could look like below
---
name: common-provision
description: 'common provisioning of resources used by the rest of the scenarios.'
provision:
  - name: ci_test_client_a
    description:
    groups:
      - client
      - test_driver
    provisioner: linchpin-wrapper
    provider:
      count: 1
      credential: aws-creds
      name: aws
      region: us-east-2
      hostname: ec2-host.us-east-2.compute.amazonaws.com
      tx_id: 44
      keypair: ci_aws_key_pair
      node_id: i-0f452f3479919d703
      role: aws_ec2
      flavor: t2.nano
      image: ami-0d8f6eb4f641ef691
    ip_address:
      public: 13.59.32.24
      private: 172.31.33.91
    ansible_params:
      ansible_ssh_private_key_file: keys/ci_aws_key_pair.pem
      ansible_user: centos
    metadata: {}
    workspace: /home/dbaez/projects/teflo/e2e-acceptance-tests
    data_folder: /var/lib/workspace/teflo_data/fich6j1ooq
Example 4
You want to separate out provision and orchestrate because this is common configuration across all your scenarios, but in this particular scenario you also want to run a non-common orchestration task.
---
name: Include_example4
description: Descriptor file with include section
include:
  - provision.yml
  - orchestrate.yml
orchestrate:
  - name: ansible/ssh_connect.yml
    description: "setup key authentication between driver and clients"
    orchestrator: ansible
    hosts: driver
    ansible_options:
      skip_tags:
        - ssh_auth
      extra_vars:
        username: root
        password: redhat
    ansible_galaxy_options:
      role_file: roles.yml
execute:
.
.
.
report:
.
.
.
Example 5
You can use Jinja templating in the include section, as shown below
---
name: Include_example5
description: Descriptor file with include section
include:
  - teflo/stack/provision_localhost.yml
  - teflo/stack/provision_libvirt.yml
{% if true %}  - teflo/stack/orchestrate-123.yml{% endif %}
  - teflo/stack/orchestrate.yml
orchestrate:
  - name: ansible/ssh_connect.yml
    description: "setup key authentication between driver and clients"
    orchestrator: ansible
    hosts: driver
    ansible_options:
      skip_tags:
        - ssh_auth
      extra_vars:
        username: root
        password: redhat
    ansible_galaxy_options:
      role_file: roles.yml
execute:
.
.
.
report:
.
.
.
Scenario Graph Explanation
There are two ways of executing teflo scenarios: __*by_level*__ and __*by_depth*__. Users can modify how the scenarios are executed by changing the setting __*included_sdf_iterate_method*__ in the teflo.cfg, as shown below. by_level is the default if you don’t specify this parameter.
[defaults]
log_level=info
workspace=.
included_sdf_iterate_method = by_depth
All blocks (provision, orchestrate, execute, report) in a scenario descriptor file are executed together for each scenario when there are included scenarios.
Note
Scenarios should be designed such that the dependent scenario (the one you want to run first) is at the child level. In the example below, if sdf13 has the provisioning information, then the orchestrate block that uses these provisioned assets can be in a scenario at a higher level, but not the other way round.
Example
                      sdf
              /        |        \
           sdf1       sdf7       sdf2
          /  |  \     /  \      /  |  \
      sdf3 sdf8 sdf5 sdf10 sdf11 sdf4 sdf9 sdf6
      /  \
  sdf12  sdf13
The above is a complex include usage. Consider sdf1-sdf13 to be different included scenarios and sdf to be the main scenario.
by_level
The execution order will be sdf12, sdf13, sdf3, sdf8, sdf5, sdf10, sdf11, sdf4, sdf9, sdf6, sdf1, sdf7, sdf2, and finally the main scenario sdf.
by_depth
The execution order will be sdf12, sdf13, sdf3, sdf8, sdf5, sdf1, sdf10, sdf11, sdf7, sdf4, sdf9, sdf6, sdf2, and finally the main scenario sdf.
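The two orders can be reproduced with a short traversal sketch. The following is illustrative Python, not Teflo's implementation; the dict below encodes the example graph above (sdf's children are sdf1, sdf7, sdf2, and so on):

```python
from collections import defaultdict, deque

# Example graph from above, as parent -> children (leaf nodes omitted).
tree = {
    "sdf":  ["sdf1", "sdf7", "sdf2"],
    "sdf1": ["sdf3", "sdf8", "sdf5"],
    "sdf7": ["sdf10", "sdf11"],
    "sdf2": ["sdf4", "sdf9", "sdf6"],
    "sdf3": ["sdf12", "sdf13"],
}

def by_level(root):
    """Group nodes by depth, then execute the deepest level first."""
    levels = defaultdict(list)
    queue = deque([(root, 0)])
    while queue:
        node, depth = queue.popleft()
        levels[depth].append(node)
        queue.extend((child, depth + 1) for child in tree.get(node, []))
    order = []
    for depth in sorted(levels, reverse=True):
        order.extend(levels[depth])
    return order

def by_depth(root):
    """Post-order depth-first: all of a node's descendants, then the node."""
    order = []
    for child in tree.get(root, []):
        order.extend(by_depth(child))
    order.append(root)
    return order

print(by_level("sdf"))  # sdf12, sdf13, sdf3, sdf8, sdf5, sdf10, sdf11, sdf4, sdf9, sdf6, sdf1, sdf7, sdf2, sdf
print(by_depth("sdf"))  # sdf12, sdf13, sdf3, sdf8, sdf5, sdf1, sdf10, sdf11, sdf7, sdf4, sdf9, sdf6, sdf2, sdf
```

Either way, every child runs before its parent; by_level finishes an entire depth level across the whole graph before moving up, while by_depth finishes one branch completely before starting the next.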
Remote Include
You can include a teflo workspace from a remote server (currently only git is supported).
Example SDF
---
name: remote_include_example
description: include remote sdf from git server
remote_workspace:
  - workspace_url: git@github.com:dno-github/remote-teflo-lib1.git
    alias_name: remote
    # the alias_name should not be the same as a local folder, it will collide
  - workspace_url: https://github.com/dno-github/remote-teflo-lib1.git
    alias_name: remote2
include:
  - "remote/sdf_remote.yml"

---
name: sdf using remote include
description: "Provision step"
provision:
  - name: from_local_parent
    groups: localhost
    ip_address: 127.0.0.1
    ansible_params:
      ansible_connection: local
execute:
  .
  .
  .
report:
  .
  .
Note
When using ssh to clone (the “remote” example above), users need to set GIT_SSH_COMMAND in teflo.cfg.
Example teflo.cfg
[defaults]
log_level=debug
workspace=.
data_folder=.teflo
# This is the directory for all downloaded remote workspaces
remote_workspace_download_location=remote_dir
# when using SSH instead of HTTPS to clone a remote workspace,
# the private key path can be set from here
GIT_SSH_COMMAND=/home/user/keys/private_k
# if you set this to False, the downloaded remote workspace
# will not be removed after the teflo job is done. And teflo
# will automatically reuse the downloaded workspace if you run
# the same job again (skipping the download process, which could
# potentially save you time)
CLEAN_CACHED_WORKSPACE_AFTER_EACH_RUN = False
workspace_url is the URL of the git repository (your teflo workspace); alias_name is the name you want to use in the include section.
Note
The alias_name should not be the same as a local folder name, as it will collide.