You can integrate Ansible Automation Platform, formerly Ansible Tower, with Automation Assembler to support configuration management of deployed resources. After you configure the integration, you can add Ansible Automation Platform virtual components to new or existing deployments from the cloud template editor.
Prerequisites
- Grant non-administrator users the appropriate permissions to access Ansible Automation Platform. There are two options that work for most configurations. Choose the one that is most appropriate for your configuration.
- Grant users Inventory Administrator and Job Template Administrator roles at the organization level.
- Grant users Administrator permission for a particular inventory and the Execute role for all job templates used for provisioning.
- You must configure the appropriate credentials and templates in Ansible Automation Platform for use with your deployments. Templates can be job templates or workflow templates. Job templates define the inventory and playbook for use with a deployment. There is a 1:1 mapping between a job template and a playbook. Playbooks are written in YAML and define the tasks that are associated with the template; a minimal playbook example appears below. For most typical deployments, use machine credentials for authentication.
Workflow templates enable users to create sequences consisting of any combination of job templates, project syncs, and inventory syncs that are linked together so that you can execute them as a single unit. The Ansible Automation Platform Workflow Visualizer helps users to design workflow templates. For most typical deployments, you can use machine credentials for authentication.
If you are working with Ansible Automation Platform, you must define an execution environment on the Ansible Controller to satisfy ansible-runner dependencies. See the Ansible documentation for more information about execution environments and container images. In particular, refer to https://docs.ansible.com/automation-controller/4.2.0/html/userguide/execution_environments.html.
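For reference, the playbook that a job template runs is ordinary Ansible YAML. The following minimal sketch is illustrative only; the play name, host pattern, and package are placeholder assumptions, not values required by the integration.
- name: Example provisioning playbook   # placeholder play name
  hosts: all                            # the job template's Limit restricts the actual target
  become: true
  tasks:
    - name: Install the NGINX package   # illustrative task only
      ansible.builtin.package:
        name: nginx
        state: present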
- Log in to Ansible Automation Platform and navigate to the Templates section.
- Add a new job template.
- Select the credential that you already created. This is the credential for the machine to be managed by Ansible Automation Platform. Each job template can have one credential object.
- For the Limit selection, select Prompt on Launch. This ensures that the job template runs against the node being provisioned or de-provisioned from Automation Assembler. If this option is not selected, a Limit is not set error appears when the blueprint that contains the job template is deployed.
- Add a new workflow template.
- Select the credentials that you already created and then define the inventory. Use the Workflow Visualizer to design the workflow template.
For the Limit box of workflow or job templates, generally you can select Prompt on Launch. This selection ensures that the job or workflow template runs against the node being provisioned or de-provisioned from Automation Assembler.
- You can view the execution of the Job templates or workflow templates invoked from Automation Assembler on the Ansible Tower Jobs tab.
Procedure
Results
Ansible Tower is available for use in cloud templates.
What to do next
Add Ansible Automation Platform components to the desired cloud templates. You must specify an applicable job template for which the user specified in the integration account has Execute permission.
- On the cloud template canvas page, select Ansible under the Configuration Management heading on the blueprint options menu and drag the Ansible Automation Platform component to the canvas.
- Use the panel on the right to configure the appropriate Ansible Automation Platform properties such as job templates.
When you add an Ansible Automation Platform tile to a cloud template, VMware Aria Automation creates the host entry for the attached virtual machine in Ansible Automation Platform. By default, VMware Aria Automation uses the virtual machine's resource name to create the host entry, but you can specify any name by using the hostName property in the blueprint YAML. To communicate with the machine, VMware Aria Automation creates the host variable ansible_host: IP Address for the host entry. You can override the default behavior and configure communication using the FQDN by specifying the keyword ansible_host under hostVariables and providing the FQDN as its value. The following YAML code snippet shows an example of how hostname and FQDN communication can be configured:
Cloud_Ansible_Tower_1:
  type: Cloud.Ansible.Tower
  properties:
    host: name of host
    account: name of account
    hostName: resource name
    hostVariables:
      ansible_host: Host FQDN
In this example, you override the default ansible_host value by providing the FQDN. This may be useful if you want Ansible Tower to connect to the host machine by using the FQDN. The default value of hostVariables in the YAML is ansible_host: IP_address, and the IP address is used to communicate with the server.
If the count property in the YAML is greater than 1 for Ansible Automation Platform, the hostname can be mapped to any of the corresponding virtual machine's properties. The following example shows the mapping for a virtual machine resource named Ubuntu-VM when you want its address property to be mapped to the hostname.
hostname: '${resource.Ubuntu-VM.address[count.index]}'
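In context, such a mapping might look like the following sketch. The count value, host binding, account, and template name are illustrative assumptions for a cloud template that also defines an Ubuntu-VM machine resource with a matching count; adapt the expressions to your own template.
Cloud_Ansible_Tower_1:
  type: Cloud.Ansible.Tower
  properties:
    count: 2                                          # assumed to match the machine resource count
    host: '${resource.Ubuntu-VM[count.index]}'        # illustrative per-index binding
    account: name of account
    hostname: '${resource.Ubuntu-VM.address[count.index]}'
    templates:
      provision:
        - name: My job template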
When you add an Ansible Automation Platform component to a cloud template, you can specify the job template to call in the cloud template YAML. You can also specify workflow templates or a combination of job templates and workflow templates. If you don't specify the template type, VMware Aria Automation assumes by default that you are calling a job template.
The following YAML snippet shows an example of how a combination of job and workflow templates can be called in an Ansible Tower cloud template.
Cloud_Ansible_1:
  type: Cloud.Ansible.Tower
  properties:
    host: '${resource.CentOS_Machine.*}'
    account:
    maxConnectionRetries: 2
    maxJobRetries: 2
    templates:
      provision:
        - name: My workflow
          type: workflow
        - name: My job template
We added the maxConnectionRetries and maxJobRetries properties to handle Ansible-related failures. The cloud template accepts a custom value and, if no value is provided, uses the default value. For maxConnectionRetries, the default value is 10, and for maxJobRetries the default value is 3.
Automation Assembler templates for Ansible Automation Platform integrations include the useDefaultLimit property with a true or false value to define where Ansible templates are executed. Ansible templates can be job templates or workflow templates. If this value is set to true, the specified templates are run against the machine specified in the Limit box on the Ansible Templates page. If the value is set to false, the templates are run against the provisioned machine, but users should select the Prompt on Launch checkbox on the Ansible Automation Platform Templates page. By default, the value of this property is false. The following YAML example shows how the useDefaultLimit property appears in cloud templates.
templates:
  provision:
    - name: ping aws_credentials
      type: job
      useDefaultLimit: false
      extraVars: '{"rubiconSurveyJob" : "checkSurvey"}'
In addition, as the preceding example shows, you can use the extraVars property to specify extra variables or survey variables. This capability can be useful for running templates that require input. If a survey variable has been maintained for the template, you must pass the variable in the extraVars section of the cloud template to avoid errors.
Users with cloud administrator privileges can change the project of a deployment that contains Ansible Open Source and Ansible Automation Platform resources. This functionality is available as a day 2 action at the deployment level.
To change the project for an Ansible deployment, select the Change Project option from the deployment's Actions menu on the Automation Assembler Deployments page, then choose the target project and click Submit in the dialog that appears.
While the Ansible Tower integration does not support the groups property, you can implement equivalent functionality by using VM tags and the VMware inventory plug-in, as described in the following article: https://docs.ansible.com/ansible/latest/collections/community/vmware/vmware_vm_inventory_inventory.html
- Use ansible_host (for example, the FQDN) and hostName in the cloud template.
- In AWX, turn on the update on launch flag; that is, sync to vCenter for new hosts before running the playbook. The sync merges the FQDN host entries added by VMware Aria Automation with those imported by the VMware inventory plug-in and assigns hosts to groups. Inventory groups are created from VM tag values by using the sync source variables.
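For example, the source variables on the AWX inventory source might resemble the following sketch. The connection details are assumed to come from the vCenter credential attached to the inventory source, and the hostnames, properties, and keyed_groups values are illustrative assumptions; check the plug-in documentation linked above for the options that apply to your environment.
plugin: community.vmware.vmware_vm_inventory
with_tags: true            # expose vSphere tags as host variables
hostnames:
  - config.name            # use the VM name as the inventory hostname
properties:
  - config.name
  - guest.ipAddress
keyed_groups:
  # Create an inventory group for each vSphere VM tag value
  - key: tags
    separator: ''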
See the following cloud template for an example implementation.
# Created by Quickstart wizard.
name: RHEL 8
version: 0.0.1
formatVersion: 1
inputs:
  image:
    type: string
    description: Select an OS Version
    default: RHEL 8 Base
    enum:
      - RHEL 8 Base
      - RHEL 7 Base
  AWX:
    type: string
    description: Choose AWX Environment
    enum:
      - LabAWX
      - FA/CC-AWX
  envrionmnetTag:
    type: string
    description: Choose VM Environment
    enum:
      - cel
      - mag
      - wdr
  purposeTag:
    type: string
    description: Choose Server Purpose
    default: ''
    enum:
      - ''
      - mariadb
      - oracle
  authGroupTag:
    type: string
    description: Choose Authentication Group
    default: ''
    enum:
      - ''
      - dbo_linux
      - oracle
      - postgres
  hostname:
    type: string
    description: Desired hostname
    default: changeme
  cpuCount:
    type: integer
    description: Number of virtual processors
    default: 1
  totalMemoryMB:
    type: integer
    description: Machine virtual memory size in Megabytes
    default: 1024
  disk1Size:
    type: integer
    description: A SIZE of 0 will disable the disk and it will not be provisioned.
    default: 0
  disk2Size:
    type: integer
    description: A SIZE of 0 will disable the disk and it will not be provisioned.
    default: 0
  neededip:
    type: string
    description: Enter an available IP Address
    title: Needed-IP-Address
  vlan:
    type: string
    description: Enter in needed vlan
    title: Enter VLAN ID example "vl500"
resources:
  Cloud_Ansible_Tower_1:
    type: Cloud.Ansible.Tower
    metadata:
      layoutPosition:
        - 0
        - 0
    properties:
      host: ${resource.Cloud_vSphere_Machine_1.*}
      account: ${input.AWX}
      hostName: ${input.hostname}
      hostVariables:
        ansible_host: ${input.hostname}.dcl.wdpr.mycompany.com
      templates:
        provision:
          - name: Linux-Role
  Cloud_vSphere_Machine_1:
    type: Cloud.vSphere.Machine
    metadata:
      layoutPosition:
        - 0
        - 1
    properties:
      image: ${input.image}
      Infoblox.IPAM.Network.dnsSuffix: dcl.wdpr.mycompany.com
      Infoblox.IPAM.Network.dnsView: Internal
      customizationSpec: Rhel7Base
      name: ${input.hostname}
      cpuCount: ${input.cpuCount}
      totalMemoryMB: ${input.totalMemoryMB}
      attachedDisks: ${map_to_object(resource.Cloud_Volume_1[*].id + resource.Cloud_Volume_2[*].id, "source")}
      networks:
        - network: ${resource.Cloud_vSphere_Network_1.id}
          assignment: static
          address: ${input.neededip}
      tags:
        - key: Server-Team
          value: ${input.envrionmnetTag}
        - key: Server-Team
          value: ${input.purposeTag}
        - key: Server-Team
          value: ${input.authGroupTag}
  Cloud_Volume_1:
    type: Cloud.Volume
    metadata:
      layoutPosition:
        - 0
        - 2
    properties:
      count: '${input.disk1Size == 0 ? 0 : 1 }'
      capacityGb: ${input.disk1Size}
  Cloud_Volume_2:
    type: Cloud.Volume
    metadata:
      layoutPosition:
        - 0
        - 3
    properties:
      count: '${input.disk2Size == 0 ? 0 : 1}'
      capacityGb: ${input.disk2Size}
  Cloud_vSphere_Network_1:
    type: Cloud.vSphere.Network
    metadata:
      layoutPosition:
        - 1
        - 0
    properties:
      networkType: existing
      constraints:
        - tag: ${input.vlan}