Use the VMware resource recommendations as a starting point for vRealize Automation deployment planning.

After initial testing and deployment to production, continue to monitor performance and allocate additional resources if necessary, as described in vRealize Automation Scalability.

Authentication

When configuring vRealize Automation, you can use the default Directories Management connector for user authentication, or you can specify a preexisting SAML-based identity provider to support a single sign-on experience.

If two-factor authentication is required, vRealize Automation supports integration with RSA SecurID. When this integration point is configured, users are prompted for their user ID and passcode.

Load Balancer Considerations

Use the Least Response Time or round-robin method to balance traffic to the vRealize Automation appliances and infrastructure Web servers. Enable session affinity or the sticky session feature to direct subsequent requests from each unique session to the same Web server in the load balancer pool.
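The following Python sketch illustrates how round-robin selection combined with session affinity behaves; the node names and session IDs are hypothetical, and a production load balancer such as NSX or F5 BIG-IP implements this logic internally.

    from itertools import cycle

    nodes = cycle(["web-01", "web-02", "web-03"])
    affinity = {}  # session ID -> pinned Web server

    def route(session_id):
        # The first request from a session takes the next node in
        # round-robin order; later requests stick to that node.
        if session_id not in affinity:
            affinity[session_id] = next(nodes)
        return affinity[session_id]

    print(route("sess-a"))  # web-01
    print(route("sess-b"))  # web-02
    print(route("sess-a"))  # web-01 again (session affinity)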

You can use a load balancer to manage failover for the Manager Service, but do not use a load-balancing algorithm, because only one Manager Service is active at a time. Also, do not use session affinity when managing failover with a load balancer.

Load balance ports 443 and 8444 for the vRealize Automation appliance. For the Infrastructure Website and Infrastructure Manager Service, load balance only port 443.
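As a quick sanity check after configuring the virtual servers, a short script can confirm that the load-balanced ports accept connections. This is a minimal sketch; the VIP host names are hypothetical placeholders for your environment.

    import socket

    # Hypothetical VIP host names; substitute your own.
    checks = [
        ("vra-va-vip.example.com", 443),   # vRealize Automation appliance
        ("vra-va-vip.example.com", 8444),  # appliance, second load-balanced port
        ("vra-web-vip.example.com", 443),  # Infrastructure Website
        ("vra-ms-vip.example.com", 443),   # Infrastructure Manager Service
    ]

    for host, port in checks:
        try:
            with socket.create_connection((host, port), timeout=5):
                print(f"{host}:{port} reachable")
        except OSError as err:
            print(f"{host}:{port} NOT reachable: {err}")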

Although you can use other load balancers, NSX, F5 BIG-IP hardware, and F5 BIG-IP Virtual Edition have been tested and are recommended for use.

See the vRealize Automation documentation for more information on configuring load balancers.

Database Deployment

vRealize Automation automatically clusters the appliance database in 7.0 and later releases. All new 7.0 and later deployments must use the internal appliance database. vRealize Automation 6.2.x instances that are upgrading can continue to use an external appliance database, but it is recommended that you migrate these databases to the internal appliance database. See the vRealize Automation 7.0 product documentation for more information about the upgrade process.

For production deployments of the Infrastructure components, use a dedicated database server to host the Microsoft SQL Server (MSSQL) databases. vRealize Automation requires machines that communicate with the database server to be configured to use Microsoft Distributed Transaction Coordinator (MSDTC). By default, MSDTC requires port 135 and ports 1024 through 65535.

For more information about changing the default MSDTC ports, see the Microsoft Knowledge Base article Configuring Microsoft Distributed Transaction Coordinator (DTC) to work through a firewall, available at https://support.microsoft.com/en-us/kb/250367.
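Before installation, you can verify that the RPC endpoint mapper port is reachable from an infrastructure node to the database server. This minimal sketch assumes a hypothetical SQL Server host name; the dynamic port range must also be open, but ports in that range respond only when a listener is active.

    import socket

    sql_host = "sql01.example.com"  # hypothetical database server name
    try:
        # Port 135 is the RPC endpoint mapper that MSDTC uses to
        # negotiate a port from the dynamic range.
        with socket.create_connection((sql_host, 135), timeout=5):
            print(f"RPC endpoint mapper reachable on {sql_host}:135")
    except OSError as err:
        print(f"Cannot reach {sql_host}:135: {err}")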

vRealize Automation does not support SQL Server AlwaysOn Availability Groups because of its dependency on MSDTC. Where possible, use a SQL Server Failover Cluster Instance with shared disks.

Data Collection Configuration

The default data collection settings provide a good starting point for most implementations. After deploying to production, continue to monitor the performance of data collection to determine whether you must make any adjustments.

Proxy Agents

For maximum performance, deploy agents in the same data center as the endpoint with which they are associated. You can install additional agents to increase system throughput and concurrency. A distributed deployment can include multiple agent servers located around the globe.

When agents are installed in the same data center as their associated endpoint, you can see an increase in data collection performance of 200 percent, on average. The collection time measured includes only the time spent transferring data between the proxy agent and the manager service. It does not include the time it takes for the manager service to process the data.

For example, suppose you deploy the product to a data center in Palo Alto and have vSphere endpoints in Palo Alto, Boston, and London. In this configuration, a vSphere proxy agent is deployed in Palo Alto, Boston, and London for each respective endpoint. If agents are instead deployed only in Palo Alto, you might see a 200 percent increase in data collection time for the Boston and London endpoints.
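As a back-of-the-envelope illustration of that figure, a 200 percent increase means collection takes roughly three times as long; the 10-minute baseline below is a hypothetical value.

    # A 200 percent increase means three times the baseline.
    local_minutes = 10.0     # hypothetical co-located collection time
    increase = 2.0           # 200 percent, expressed as a fraction
    remote_minutes = local_minutes * (1 + increase)
    print(remote_minutes)    # 30.0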

Distributed Execution Manager Configuration

In general, locate distributed execution managers (DEMs) as close as possible to the model manager host. The DEM Orchestrator must have strong network connectivity to the model manager at all times. Create two DEM Orchestrator instances, one for failover, and two DEM Worker instances in your primary data center.

If a DEM Worker instance must run a location-specific workflow, install the instance in that location.

Assign skills to the relevant workflows and DEMs so that those workflows are always run by DEMs in the correct location. For information about assigning skills to workflows and DEMs by using the vRealize Automation designer console, see the vRealize Automation Extensibility documentation. Because this is an advanced configuration, design your solution so that WAN communication is not required between a running DEM and remote services, for example, vRealize Orchestrator.

For the best performance, install DEMs and agents on separate machines. For additional information about installing vRealize Automation agents, see Installing Agents.

vRealize Orchestrator

Use an external vRealize Orchestrator system for each tenant to enforce tenant isolation. If tenant isolation is not a requirement, you can use the internal instance of vRealize Orchestrator.

The internal vRealize Orchestrator instance is a good starting point for deployments. If the internal instance cannot handle the required workload, VMware recommends using an external vRealize Orchestrator cluster.