For a production instance of vRealize Operations to function optimally, your environment must conform to certain configurations. Familiarize yourself with these configurations before you deploy a production instance of vRealize Operations.

Sizing
vRealize Operations supports up to 320,000 monitored resources spread across eight extra-large analytics nodes.
Size your vRealize Operations instance to ensure adequate performance and supportability. For more information about sizing, see the KB article vRealize Operations Manager Sizing Guidelines (KB 2093783).
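For reference, the supported maximum works out to an average of 40,000 monitored resources per extra-large analytics node (320,000 ÷ 8).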
Environment
Deploy analytics nodes in the same vSphere cluster and use identical or similar hosts and storage. If you cannot deploy analytics nodes in the same vSphere cluster, you must deploy them in the same geographical location.
When continuous availability is enabled, deploy analytics nodes in fault domains in the same vSphere cluster and use identical or similar hosts and storage. Fault domains are supported on vSphere stretched clusters.
Analytics nodes must be able to communicate with one another at all times. The following vSphere events might disrupt connectivity:
  • vMotion
  • Storage vMotion
  • High Availability (HA)
  • Distributed Resource Scheduler (DRS)

When continuous availability is not enabled, the high level of traffic between analytics nodes requires that all analytics nodes be on the same VLAN and IP subnet, and that the VLAN not be stretched between data centers.

When continuous availability is enabled, analytics nodes in fault domains should be located on the same VLAN and IP subnet, and communication between fault domains must be available. The witness node might be located in a separate VLAN and IP subnet but must be able to communicate with all analytics nodes.

Latency between analytics nodes cannot exceed 5 milliseconds. When continuous availability is enabled, latency between fault domains cannot exceed 10 milliseconds, but latency between analytics nodes within each fault domain still cannot exceed 5 milliseconds. Network bandwidth must be equal to or faster than 10 GB per second.
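As a minimal spot check of the latency requirement, you can run ping from the console of one analytics node to another; the host name below is a placeholder for a node in your own environment.

  # Send 20 ICMP echo requests from one analytics node to another (placeholder name)
  ping -c 20 vrops-node-02.example.com
  # In the rtt min/avg/max/mdev summary, the avg value should stay below 5 ms
  # (below 10 ms between fault domains when continuous availability is enabled)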

If you deploy analytics nodes into a highly consolidated vSphere cluster, configure resource reservations. A full analytics node, for example a large analytics node that monitors 20,000 resources, requires a 1:1 ratio of virtual CPUs to physical CPUs. If you experience performance issues, review the CPU ready and co-stop metrics to determine whether the virtual-to-physical CPU ratio is the cause. For more information about how to troubleshoot VM performance and interpret CPU performance metrics, see Troubleshooting a virtual machine that has stopped responding: VMM and Guest CPU usage comparison (KB 1017926).
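One hedged way to review those counters is to capture esxtop in batch mode on the ESXi host that runs the analytics node and inspect the %RDY and %CSTP columns for that virtual machine; the output path below is arbitrary.

  # On the ESXi host: capture six 10-second samples to a CSV file for offline review
  esxtop -b -d 10 -n 6 > /tmp/esxtop-sample.csv
  # Sustained high %RDY or %CSTP values for the analytics node VM suggest that the
  # virtual-to-physical CPU ratio is too high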
You can deploy remote collectors and the witness node behind a firewall. You cannot use NAT between remote collectors or the witness node and analytics nodes.
Multiple Data Centers
vRealize Operations can be stretched across data centers only when continuous availability is enabled. The fault domains may reside in separate vSphere clusters; however, all analytics nodes across both fault domains must reside in the same geographical location.

For example, the first data center is located in Palo Alto and spans two buildings in different parts of the city (downtown and mid-town), with latency of less than 5 milliseconds between them. The second data center is located in Santa Clara, and the latency between the two data centers is greater than 5 milliseconds but less than 10 milliseconds. Refer to the KB article vRealize Operations Manager Sizing Guidelines (KB 2093783) for network requirements.

If vRealize Operations is monitoring resources in additional data centers, you must use remote collectors and deploy them in the remote data centers. Depending on latency, you might need to modify the intervals at which the adapters configured on the remote collector collect information.
It is recommended that you monitor collections to validate that they complete in less than 5 minutes. Check the KB article vRealize Operations Manager Sizing Guidelines (KB 2093783) for latency, bandwidth, and sizing requirements. If all requirements are met and collections still do not complete within the default 5-minute time limit, increase the interval to 10 minutes.
Certificates
A valid certificate signed by a trusted Certificate Authority, whether private or public, is an important component when you configure a production instance of vRealize Operations. Configure a Certificate Authority signed certificate on the system before you configure agents.
You must include all analytics nodes, remote collector nodes, witness nodes, and load balancer DNS names in the Subject Alternative Names field of the certificate.
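As an illustration of that requirement (not an official procedure), the following OpenSSL configuration and commands produce a certificate signing request whose Subject Alternative Name field lists every node and the load balancer; all host names, and the number of entries, are placeholders for your own environment.

  # san.cnf - request configuration with placeholder host names
  [ req ]
  distinguished_name = req_distinguished_name
  req_extensions     = v3_req
  prompt             = no

  [ req_distinguished_name ]
  CN = vrops.example.com

  [ v3_req ]
  subjectAltName = @alt_names

  [ alt_names ]
  # load balancer, analytics nodes, remote collector, and witness node
  DNS.1 = vrops.example.com
  DNS.2 = vrops-node-01.example.com
  DNS.3 = vrops-node-02.example.com
  DNS.4 = vrops-rc-01.example.com
  DNS.5 = vrops-witness.example.com

  # Generate a private key and a CSR that carries the Subject Alternative Name entries
  openssl req -new -newkey rsa:2048 -nodes -keyout vrops.key -out vrops.csr -config san.cnf
  # Verify the SAN entries before submitting the CSR to your certificate authority
  openssl req -in vrops.csr -noout -text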
Adapters
For large and extra-large deployment profiles, it is recommended that you configure adapters to remote collectors in the same data center as the analytics cluster. Configuring adapters to remote collectors improves performance by reducing load on the analytics nodes. For example, if the total number of resources on a given analytics node begins to degrade that node's performance, you might configure the adapter to a large remote collector with the appropriate capacity.
Configure adapters to remote collectors when the number of resources the adapters are monitoring exceeds the capacity of the associated analytics node.
Authentication
You can use the Platform Services Controller for user authentication in vRealize Operations. All Platform Services Controller services are consolidated into vCenter Server, which simplifies deployment and administration. For more information about deploying a highly available Platform Services Controller instance, see Deploying the vCenter Server Appliance in the VMware vSphere Documentation.
Load Balancer
For more information about load balancer configuration, see the vRealize Operations Load Balancing Guide.