You place the vRealize Operations Manager nodes in separate application virtual networks for isolation and failover. The networking design also supports public access to the analytics cluster.

For secure access, load balancing, and portability, the vRealize Operations Manager analytics cluster is deployed in the shared cross-region application virtual network Mgmt-xRegion01-VXLAN, and the remote collector group is deployed in the shared local application virtual network Mgmt-RegionA01-VXLAN.

Figure 1. Networking Design of the vRealize Operations Manager Deployment
In the Consolidated SDDC, the vRealize Operations Manager components reside in two networks. The analytics node is in the load-balanced application virtual network that supports potential cross-region failover after you scale out the environment to a two-pod design. The remote collector is in the region-specific application virtual network.

Application Virtual Network for vRealize Operations Manager

The vRealize Operations Manager analytics cluster is installed in the cross-region shared application virtual network and the remote collector group is installed in a region-specific shared application virtual network.

This networking design has the following features:

  • All nodes have routed access to the vSphere management network through the NSX universal distributed logical router.

  • Routing to the vSphere management network and other external networks is dynamic, and is based on the Border Gateway Protocol (BGP).

For more information about the networking configuration of the application isolated network, see Virtualization Network Design for Consolidated SDDC and NSX Design for Consolidated SDDC.
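To see how this dynamic routing is configured in a given environment, you can query the BGP configuration of the universal distributed logical router. The following minimal sketch assumes the NSX for vSphere REST API; the NSX Manager FQDN, credentials, and edge ID are placeholders and the endpoint path follows the 6.x API, so adjust all of them for your deployment.

import requests

# Placeholder values: adjust for your deployment.
NSX_MANAGER = "nsx-manager.rainpole.local"   # NSX Manager FQDN (placeholder)
UDLR_EDGE_ID = "edge-1"                      # edge ID of the universal DLR (placeholder)

# NSX for vSphere exposes edge routing configuration over its REST API;
# the path below follows the 6.x API and may differ in other versions.
response = requests.get(
    f"https://{NSX_MANAGER}/api/4.0/edges/{UDLR_EDGE_ID}/routing/config/bgp",
    auth=("admin", "nsx_admin_password"),    # placeholder credentials
    headers={"Accept": "application/xml"},
    verify=False,                            # typical for self-signed lab certificates
)
response.raise_for_status()
print(response.text)                         # XML with BGP neighbors and AS numbers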

Table 1. Design Decisions about the Application Virtual Network for vRealize Operations Manager

CSDDC-OPS-MON-008
  Design Decision: Use the existing cross-region application virtual network for the vRealize Operations Manager analytics cluster.
  Design Justification: Provides a consistent deployment model for management applications and ensures that growth to a two-pod design is supported.
  Design Implication: You must use an implementation in NSX to support this network configuration.

CSDDC-OPS-MON-009
  Design Decision: Use the existing region-specific application virtual network for the vRealize Operations Manager remote collector group.
  Design Justification: Ensures that metrics are collected locally in the region in the event of a network outage.
  Design Implication: You must use an implementation in NSX to support this network configuration.

IP Subnets for vRealize Operations Manager

You can allocate the following example subnets for the vRealize Operations Manager deployment.

Table 2. IP Subnets in the Application Virtual Networks for vRealize Operations Manager

  • Analytics cluster in consolidated pod: 192.168.11.0/24

  • Remote collector group in consolidated pod: 192.168.31.0/24
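As a quick sanity check on these allocations, the following sketch uses the Python standard library to confirm that the two example subnets do not overlap, in line with the separate-subnet decision in Table 3. The subnet values come from Table 2; nothing else is assumed.

import ipaddress

# Example subnets from Table 2.
analytics_subnet = ipaddress.ip_network("192.168.11.0/24")   # analytics cluster, Mgmt-xRegion01-VXLAN
collector_subnet = ipaddress.ip_network("192.168.31.0/24")   # remote collector group, Mgmt-RegionA01-VXLAN

# The design calls for separate subnets per application virtual network,
# so the two ranges must not overlap.
assert not analytics_subnet.overlaps(collector_subnet), "Subnets overlap"

print(f"Usable analytics addresses: {analytics_subnet.num_addresses - 2}")
print(f"Usable collector addresses: {collector_subnet.num_addresses - 2}")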

Table 3. Design Decision about IP Subnets for vRealize Operations Manager

CSDDC-OPS-MON-010
  Design Decision: Allocate separate subnets for each application virtual network.
  Design Justification: Placing the remote collectors on their own subnet enables them to communicate with the analytics cluster and not be part of a future failover group.
  Design Implication: You must have an allocation of dedicated IP subnets.

DNS Names for vRealize Operations Manager

The FQDNs of the vRealize Operations Manager nodes follow a specific domain name resolution scheme:

  • The analytics cluster node IP addresses and a load balancer virtual IP address (VIP) are associated with names that have the root domain suffix rainpole.local.

    From the public network, users access vRealize Operations Manager by using the VIP address. The NSX Edge services gateway handles the traffic to the VIP.

  • Name resolution for the IP addresses of the remote collector group node uses a region-specific suffix, for example, sfo01.rainpole.local.

Table 4. DNS Names for vRealize Operations Manager Nodes

  • vrops01svr01.rainpole.local: VIP address of the analytics cluster

  • vrops01svr01a.rainpole.local: Master node in the analytics cluster

  • vrops01svr01x.rainpole.local: Additional nodes in the analytics cluster (not deployed)

  • sfo01vropsc01a.sfo01.rainpole.local: Remote collector node in the remote collector group

  • sfo01vropsc01x.sfo01.rainpole.local: Additional collector nodes in the remote collector group (not deployed)

Table 5. Design Decision about DNS Names for vRealize Operations Manager

CSDDC-OPS-MON-011
  Design Decision: Configure forward and reverse DNS records for all deployed vRealize Operations Manager nodes and for the VIP address.
  Design Justification: All nodes are accessible by using fully qualified domain names instead of by using IP addresses only.
  Design Implication: You must manually provide DNS records for all vRealize Operations Manager nodes and the VIP address.
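The following sketch is one way to spot-check the forward and reverse records required by CSDDC-OPS-MON-011. It uses only the Python standard library and the names from Table 4; run it from a host that resolves against the SDDC DNS servers.

import socket

# Names from Table 4 (the "additional node" placeholders are not deployed).
FQDNS = [
    "vrops01svr01.rainpole.local",          # analytics cluster VIP
    "vrops01svr01a.rainpole.local",         # master node
    "sfo01vropsc01a.sfo01.rainpole.local",  # remote collector node
]

for fqdn in FQDNS:
    try:
        ip_address = socket.gethostbyname(fqdn)                # forward (A) record
        reverse_name, _, _ = socket.gethostbyaddr(ip_address)  # reverse (PTR) record
        status = "OK" if reverse_name.lower() == fqdn.lower() else "PTR mismatch"
        print(f"{fqdn} -> {ip_address} -> {reverse_name} [{status}]")
    except (socket.gaierror, socket.herror) as error:
        print(f"{fqdn}: lookup failed ({error})")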

Networking for Failover and Load Balancing in vRealize Operations Manager

By default, vRealize Operations Manager does not load-balance UI user sessions across the nodes in the analytics cluster. You associate vRealize Operations Manager with the shared load balancer in the region.

The lack of load balancing for user sessions results in the following limitations:

  • Users must know the URL of each node to access the UI. As a result, a single node might be overloaded if all users access it at the same time.

  • Each node supports up to four simultaneous user sessions.

  • Taking a node offline for maintenance might cause an outage. Users cannot access the UI of the node when the node is offline.

To avoid such problems, place the analytics cluster behind an NSX load balancer that is configured to allow up to four connections per node. The load balancer must distribute the load evenly to all cluster nodes. In addition, configure the load balancer located in the Mgmt-xRegion01-VXLAN application virtual network to redirect service requests from the UI on port 80 to port 443.
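As a quick way to confirm the port 80 to port 443 redirect on the analytics cluster VIP, the following sketch issues a plain HTTP request to the VIP and checks for a redirect response. The VIP name comes from Table 4; the check itself is an illustrative assumption, not part of the design.

import http.client

VIP_FQDN = "vrops01svr01.rainpole.local"   # analytics cluster VIP from Table 4

# Request the UI over plain HTTP; the NSX Edge load balancer is expected
# to answer with a redirect to HTTPS on port 443.
connection = http.client.HTTPConnection(VIP_FQDN, 80, timeout=10)
connection.request("GET", "/")
response = connection.getresponse()
location = response.getheader("Location", "")

print(f"HTTP {response.status}, Location: {location}")
assert 300 <= response.status < 400 and location.startswith("https://"), \
    "Port 80 to 443 redirect is not configured as designed"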

Load balancing for the remote collector nodes is not required.

Table 6. Design Decisions about Networking Failover and Load Balancing for vRealize Operations Manager

CSDDC-OPS-MON-012
  Design Decision: Use an NSX Edge services gateway as a load balancer for the vRealize Operations Manager analytics cluster located in the Mgmt-xRegion01-VXLAN application virtual network.
  Design Justification: Provides tenants and users with balanced access to the analytics services, with the load spread evenly across the cluster.
  Design Implication: You must manually configure the NSX Edge devices to provide load balancing services.

CSDDC-OPS-MON-013
  Design Decision: Do not use a load balancer for the remote collector group.
  Design Justification:
    • Remote collectors must directly access the systems that they are monitoring.
    • Remote collectors do not require access to and from the public network.
  Design Implication: None.