Automation Assembler supports integration with Puppet Enterprise, Ansible Open Source, and Ansible Tower so that you can manage deployments for configuration and drift.

Puppet Integration

To integrate Puppet-based configuration management, you must have a valid instance of Puppet Enterprise installed on a public or private cloud with a vSphere workload. You must establish a connection between this external system and your Automation Assembler instance. Then you can make Puppet configuration management available to Automation Assembler by adding it to appropriate blueprints.

The Automation Assembler blueprint service Puppet provider installs, configures, and runs the Puppet agent on a deployed compute resource. The Puppet provider supports both SSH and WinRM connections with the following prerequisites:

  • SSH connections:
    • The user name must be either a super user or a user with sudo permissions to run commands with NOPASSWD.
    • Deactivate requiretty for the given user, as shown in the sample sudoers entry after this list.
    • cURL must be available on the deployment compute resource.
  • WinRM connections:
    • PowerShell 2.0 must be available on the deployment compute resource.
    • Configure the Windows template as described in the VMware Aria Automation Orchestrator documentation.
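
For reference, the SSH prerequisites above can be satisfied with a sudoers entry similar to the following sketch; the user name puppetagent is a placeholder, not a name required by Automation Assembler:

# Hypothetical /etc/sudoers.d entry; "puppetagent" is a placeholder user name
Defaults:puppetagent !requiretty
puppetagent ALL=(ALL) NOPASSWD: ALL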

The DevOps administrator is responsible for managing the connections to a Puppet Master and for applying Puppet roles, or configuration rules, to specific deployments. Following deployment, virtual machines that are configured to support configuration management are registered with the designated Puppet Master.

When virtual machines are deployed, users can add or delete a Puppet Master as an external system or update projects assigned to the Puppet Master. Finally, appropriate users can de-register deployed virtual machines from the Puppet Master when the machines are decommissioned.

Ansible Open Source Integration

When setting up an Ansible integration, install Ansible Open Source according to the Ansible installation instructions. See the Ansible documentation for more information.

Ansible enables host key checking by default. If a host is reinstalled and has a different key in the known_hosts file, an error message appears. If a host is not listed in the known_hosts file, you must supply the key on start-up. You can deactivate host key checking with the following settings in the /etc/ansible/ansible.cfg or ~/.ansible.cfg file:
[defaults]
host_key_checking = False
localhost_warning = False
 
[paramiko_connection]
record_host_keys = False
 
[ssh_connection]
#ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s
ssh_args = -o UserKnownHostsFile=/dev/null

To avoid host key checking errors, set host_key_checking and record_host_keys to False, and add the extra option UserKnownHostsFile=/dev/null to ssh_args. In addition, if the inventory is initially empty, Ansible warns that the host list is empty, which causes the playbook syntax check to fail. The localhost_warning = False setting shown above suppresses this warning.
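
As a quick sanity check, you can confirm that Ansible picked up these settings before wiring up the integration. The commands below are standard Ansible CLI usage; the inventory path and playbook name are placeholders:

# Show settings that differ from the defaults; host key checking should appear as False
ansible-config dump --only-changed | grep -i host_key_checking

# Host key checking can also be turned off for a single run via an environment variable
# (inventory path and playbook name are placeholders)
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook --syntax-check -i /etc/ansible/hosts sample_playbook.yml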

Ansible Vault enables you to store sensitive information, such as passwords or keys, in encrypted files rather than as plain text. Vault is encrypted with a password. In Automation Assembler, Ansible uses Vault to encrypt data such as SSH passwords for host machines. It is assumed that the path to the Vault password file has been set.

You can modify the ansible.cfg file to specify the location of the password file using the following format.

vault_password_file = /path/to/file.txt

You can also set the ANSIBLE_VAULT_PASSWORD_FILE environment variable so that Ansible automatically searches for the password. For example, ANSIBLE_VAULT_PASSWORD_FILE=~/.vault_pass.txt
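
As an illustration, a minimal Vault setup on the Ansible control machine might look like the following sketch; the password, file path, and variable name are placeholders, not values required by Automation Assembler:

# Store the Vault password in a file that only the integration user can read (placeholder path)
echo 'MyVaultPassword' > ~/.vault_pass.txt
chmod 600 ~/.vault_pass.txt

# Point Ansible at the password file and encrypt a sensitive value with ansible-vault
export ANSIBLE_VAULT_PASSWORD_FILE=~/.vault_pass.txt
ansible-vault encrypt_string 'host-ssh-password' --name 'ansible_ssh_pass'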

Automation Assembler manages the Ansible inventory file, so you must ensure that the Automation Assembler user has rwx access on the inventory file.
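
For example, assuming the integration account is named ansibleuser and the managed inventory file is /etc/ansible/hosts (both placeholders), access could be granted as follows:

# Give the integration user read, write, and execute permission on the inventory file
# ("ansibleuser" and the inventory path are placeholders)
sudo chown ansibleuser /etc/ansible/hosts
sudo chmod u+rwx /etc/ansible/hosts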

If you want to use a non-root user with the Automation Assembler open-source integration, the user requires a set of permissions to run the commands used by the Automation Assembler open-source provider. The following entry must be set in the user's sudoers file.
Defaults:myuser !requiretty
If the user is not part of an admin group that has no askpass application specified, also set the following entry in the user's sudoers file.
myuser ALL=(ALL) NOPASSWD: ALL
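
To confirm the entries took effect, you can list the sudo rules that apply to the account (myuser is the same placeholder name used above):

# List the sudo privileges granted to the integration user
sudo -l -U myuser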

If you encounter errors or other problems when setting up the Ansible integration, refer to the log.txt file on the Ansible control machine:

cat ~/var/tmp/vmware/provider/user_defined_script/$(ls -t ~/var/tmp/vmware/provider/user_defined_script/ | head -1)/log.txt

Ansible Tower Integration

Supported Operating System Types
  • Red Hat Enterprise Linux 8.0 or later 64-bit (x86), supported only with Ansible Tower 3.5 and later.
  • Red Hat Enterprise Linux 7.4 or later 64-bit (x86).
  • CentOS 7.4 or later 64-bit (x86).

The following is a sample inventory file, which is generated during an Ansible Tower installation. You might need to modify it for use with the Automation Assembler integration.

[root@cava-env8-dev-001359 ansible-tower-setup-bundle-3.5.2-1.el8]# pwd
/root/ansible-tower-install/ansible-tower-setup-bundle-3.5.2-1.el8
[root@cava-env8-dev-001359 ansible-tower-setup-bundle-3.5.2-1.el8]# cat inventory
[tower]
localhost ansible_connection=local

[database]

[all:vars]
admin_password='VMware1!'

pg_host=''
pg_port=''

pg_database='awx'
pg_username='awx'
pg_password='VMware1!'

rabbitmq_port=5672
rabbitmq_vhost=tower
rabbitmq_username=tower
rabbitmq_password='VMware1!'
rabbitmq_cookie=cookiemonster

# Needs to be true for fqdns and ip addresses
rabbitmq_use_long_name=false

# Isolated Tower nodes automatically generate an RSA key for authentication;
# To deactivate this behavior, set this value to false
# isolated_key_generation=true
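
After you adjust the inventory values, rerun the installer from the setup bundle directory so that the changes take effect; this is standard Ansible Tower installation behavior rather than an Automation Assembler requirement:

# Rerun the Ansible Tower setup script from the bundle directory shown in the sample above
cd /root/ansible-tower-install/ansible-tower-setup-bundle-3.5.2-1.el8
./setup.sh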