Cloud Assembly supports integration with Ansible Open Source configuration management. After configuring the integration, you can add Ansible components to new or existing deployments.

When you integrate Ansible Open Source with Cloud Assembly, you can configure the integration to run one or more Ansible playbooks, in a specified order, when a new machine is provisioned, which automates configuration management. You specify the desired playbooks in the cloud template for a deployment.

When setting up an Ansible integration, you must specify the Ansible Open Source host machine as well as the path to the inventory file that defines information for managing resources. In addition, you must provide a user name and password to access the Ansible Open Source instance. Later, when you add an Ansible component to a deployment, you can update the connection to use key-based authentication.

By default, Ansible uses SSH to connect to machines. For Windows machines, specified in the cloud template with the osType windows property, the connection_type variable is automatically set to winrm.
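
For illustration, a minimal sketch of an Ansible resource targeting a Windows machine might look like the following. The resource name, account, and playbook path are placeholders, not values from this documentation:

Cloud_Ansible_Win:
  type: Cloud.Ansible
  properties:
    host: '${resource.WindowsVM.*}'   # placeholder machine resource
    osType: windows                   # triggers connection_type = winrm automatically
    account: ansible-oss              # placeholder integration name
    username: ${input.username}
    password: ${input.password}
    playbooks:
      provision:
        - /home/ansible/playbooks/configure_iis.yml   # placeholder playbook path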

Initially, the Ansible integration uses the user/password or user/key credentials provided in the integration to connect to the Ansible Control Machine. Once the connection succeeds, the playbooks specified in the cloud template are validated for syntax.

If the validation is successful, an execution folder is created on the Ansible Control Machine at ~/var/tmp/vmware/provider/user_defined_script/. From this location, scripts run to add the host to the inventory, create the host vars files (including setting up the authentication mode used to connect to the host), and finally run the playbooks. At this point, the credentials provided in the cloud template are used to connect to the host from the Ansible Control Machine.

Ansible integration supports machines that do not have a public IP address. For machines provisioned on public clouds such as AWS, Azure, and GCP, the address property in the created resource is populated with the machine's public IP address only when the machine is connected to a public network. For machines not connected to a public network, the Ansible integration looks for the IP address in the networks attached to the machine. If multiple networks are attached, the integration selects the network with the lowest deviceIndex, that is, the index of the network interface card (NIC) attached to the machine. If the deviceIndex property is not specified in the cloud template, the integration uses the first network attached.
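
As an illustration, a machine with two NICs might pin the NIC that the integration should use by setting deviceIndex in the cloud template. The resource and network names here are placeholders:

Cloud_Machine_1:
  type: Cloud.Machine
  properties:
    image: ubuntu
    flavor: small
    networks:
      - network: '${resource.Cloud_Network_1.id}'
        deviceIndex: 0   # lowest index: the Ansible integration reads this NIC's IP address
      - network: '${resource.Cloud_Network_2.id}'
        deviceIndex: 1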

For more details on configuring Ansible Open Source for integration in Cloud Assembly, see What Is Configuration Management in Cloud Assembly.

Prerequisites

  • The Ansible control machine must use a supported Ansible version. See the vRealize Automation Support Matrix for information about supported versions.
  • Ansible log verbosity must be set to the default of zero.
  • The user must have read/write access to the directory where the Ansible inventory file is located. In addition, the user must have read/write access to the inventory file, if it already exists.
  • If you are using a non-root user with the sudo option, ensure that the following lines are set in the sudoers file:

    Defaults:user_name !requiretty

    and

    user_name ALL=(ALL) NOPASSWD: ALL

  • Ensure that host key checking is deactivated by setting host_key_checking = False at /etc/ansible/ansible.cfg or ~/.ansible.cfg.
  • Ensure that the vault password is set by adding the following line to the /etc/ansible/ansible.cfg or ~/.ansible.cfg file:
    vault_password_file = /path/to/password_file
    The vault password file contains the password in plain text and is used only when cloud templates or deployments provide a user name and password combination to use between the Ansible Control Machine (ACM) and the node, as shown in the following example.
    echo 'myStr0ng9@88w0rd' > ~/.ansible_vault_password.txt
    echo 'export ANSIBLE_VAULT_PASSWORD_FILE=~/.ansible_vault_password.txt' >> ~/.profile
    # Alternatively, set vault_password_file = ~/.ansible_vault_password.txt in either /etc/ansible/ansible.cfg or ~/.ansible.cfg
  • To avoid host key failures when running playbooks, it is recommended that you include the following settings in /etc/ansible/ansible.cfg.
    [paramiko_connection]
    record_host_keys = False
     
    [ssh_connection]
    #ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s
    # If you already have options set for ssh_args, add the following option at the end of them.
    ssh_args = -o UserKnownHostsFile=/dev/null

Procedure

  1. Select Infrastructure > Connections > Integrations and click Add Integration.
  2. Click Ansible.
    The Ansible configuration page appears.
  3. Enter the Hostname, Inventory File Path, and other required information for the Ansible Open Source instance.
  4. Click Validate to check the integration.
  5. Click Add.

Results

Ansible is available for use with cloud templates.

What to do next

Add Ansible components to the desired cloud templates.

  1. On the cloud template canvas page, select Ansible under the Configuration Management heading on the cloud template options menu and drag the Ansible component to the canvas.
  2. Use the panel on the right to configure the appropriate Ansible properties, such as the playbooks to run.

In Ansible, users can assign a variable to a single host, and then use it later in playbooks. Ansible Open Source integration enables you to specify these host variables in cloud templates. The hostVariables property must be in proper YAML format, as expected by the Ansible control machine, and its content is placed at the following location:

parent_directory_of_inventory_file/host_vars/host_ip_address/vra_user_host_vars.yml

The default location of the Ansible inventory file is defined in the Ansible account as added on the Integrations page in Cloud Assembly. The Ansible integration does not validate the hostVariables YAML syntax in the cloud template, but the Ansible Control Machine throws an error at playbook run time if the format or syntax is incorrect.

The following cloud template YAML snippet shows an example usage of the hostVariables property.

Cloud_Ansible_1:
    type: Cloud.Ansible
    properties:
      host: '${resource.AnsibleLinuxVM.*}'
      osType: linux
      account: ansible-CAVA
      username: ${input.username}
      password: ${input.password}
      maxConnectionRetries: 20
      groups:
        - linux_vms
      playbooks:
        provision:
          - /root/ansible-playbooks/install_web_server.yml
      hostVariables: |
        message: Hello ${env.requestedBy}
        project: ${env.projectName}
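
With the snippet above, the hostVariables content is written, with the bindings expanded, to the host vars file described earlier. Assuming the deployment was requested by a user named jdoe in a project named demo-project (illustrative values only), the Ansible Control Machine would receive something like:

# parent_directory_of_inventory_file/host_vars/<host_ip_address>/vra_user_host_vars.yml
message: Hello jdoe
project: demo-project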

Ansible integrations expect authentication credentials to be present in a cloud template in one of the following ways:
  • User name and password in the Ansible resource.
  • User name and privateKeyFile in the Ansible resource.
  • User name in the Ansible resource and a private key in the compute resource, set by specifying remoteAccess with generatedPublicPrivateKey, as sketched below.
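
The following is a minimal sketch of the third option, assuming a Cloud.Machine compute resource and reusing names from the earlier example. Because the key pair is generated on the compute resource, no password or privateKeyFile appears on the Ansible resource:

AnsibleLinuxVM:
  type: Cloud.Machine
  properties:
    image: ubuntu    # placeholder image
    flavor: small    # placeholder flavor
    remoteAccess:
      authentication: generatedPublicPrivateKey
Cloud_Ansible_1:
  type: Cloud.Ansible
  properties:
    host: '${resource.AnsibleLinuxVM.*}'
    osType: linux
    account: ansible-CAVA   # integration name from the earlier example
    username: ubuntu        # user name only; the generated key authenticates the connection
    playbooks:
      provision:
        - /root/ansible-playbooks/install_web_server.yml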

When you create an Ansible Open Source integration, you must provide login information for the integration user to connect with the Ansible control machine using SSH. To run playbooks with an integration, you can specify a different user in the cloud template YAML. The username property is mandatory; it is used to connect to the virtual machine where Ansible makes changes. The playbookRunUsername property is optional and can be provided to run the playbook on the Ansible node. The default value of playbookRunUsername is the Ansible endpoint integration user name.

If you specify a different user, that user must have write access to the Ansible hosts file and permission to create private key files.
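
For example, a sketch of an Ansible resource that connects to the provisioned machine as one user but runs the playbook on the Ansible node as another; ansible-runner is a hypothetical user name, and the other values reuse the earlier example:

Cloud_Ansible_1:
  type: Cloud.Ansible
  properties:
    host: '${resource.AnsibleLinuxVM.*}'
    osType: linux
    account: ansible-CAVA
    username: ${input.username}           # mandatory: connects to the provisioned machine
    password: ${input.password}
    playbookRunUsername: ansible-runner   # optional, hypothetical user that runs the playbook
    playbooks:
      provision:
        - /root/ansible-playbooks/install_web_server.yml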

When you add an Ansible Open Source tile to a cloud template, vRealize Automation creates the host entry for the attached virtual machine. By default, vRealize Automation uses the virtual machine's resource name to create the host entry, but you can specify any name using the hostName property in the cloud template YAML. To communicate with the machine, vRealize Automation creates the host variable ansible_host: IP Address for the host entry. You can override the default behavior and configure communication using FQDN by specifying the keyword ansible_host under hostVariables and providing the FQDN as its value. The following YAML code snippet shows an example of how host name and FQDN communication can be configured:

Cloud_Ansible:
  type: Cloud.Ansible
  properties:
    osType: linux
    username: ubuntu
    groups:
      - sample
    hostName: resource name
    host: name of host
    account: name of account
    hostVariables:
      ansible_host: Host FQDN

In this example, you override the default ansible_host value by providing the FQDN. This may be useful if you want Ansible Open Source to connect to the host machine using the FQDN.

The default value of hostVariables in the YAML is ansible_host: IP_address, and the IP address is used to communicate with the server.

If the YAML count property is greater than 1 for Ansible Open Source, the hostname can be mapped to any of the respective virtual machine's properties. The following example shows the mapping for a virtual machine resource named Ubuntu-VM when you want its address property mapped to the hostname.

 hostname: '${resource.Ubuntu-VM.address[count.index]}' 
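
Expanding that line into context, a sketch of a two-machine deployment might look like the following. The exact host binding for multi-count resources is an assumption here and may differ in your environment:

Ubuntu-VM:
  type: Cloud.Machine
  properties:
    image: ubuntu
    flavor: small
    count: 2
Cloud_Ansible_1:
  type: Cloud.Ansible
  properties:
    count: 2
    host: '${resource.Ubuntu-VM[count.index].*}'             # assumed binding of each Ansible resource to one VM
    hostname: '${resource.Ubuntu-VM.address[count.index]}'   # maps each VM's address property to its hostname
    osType: linux
    username: ubuntu
    account: ansible-CAVA
    playbooks:
      provision:
        - /root/ansible-playbooks/install_web_server.yml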

In cloud templates, ensure that the path to the Ansible playbook is accessible to the user specified in the integration account. You can use an absolute path to specify the playbook location, but it is not necessary. A path relative to the user's home folder is recommended so that the path remains valid even if the Ansible integration credentials change over time.
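
As an illustration, if the integration user's home folder contains an ansible-playbooks directory, the playbook could be referenced with a home-relative path; the directory layout here is an assumption:

Cloud_Ansible_1:
  type: Cloud.Ansible
  properties:
    # ...other properties as in the earlier examples...
    playbooks:
      provision:
        - ansible-playbooks/install_web_server.yml   # assumed to resolve relative to the integration user's home folder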

Users with cloud administrator privileges can change the project of a deployment containing Ansible Open Source and Ansible Tower resources. This functionality is available as a day 2 action at the deployment level.

To change the project for an Ansible deployment, select the Change Project option from the Actions menu of the deployment on the Cloud Assembly Deployments page, then choose the target project and click Submit in the dialog.