You place the vRealize Operations Manager nodes in several network segments for isolation and failover. The networking design also supports public access to the analytics cluster nodes.

For secure access, load balancing, and portability, the vRealize Operations Manager analytics cluster is deployed in the shared cross-region application virtual network Mgmt-xRegion01-VXLAN, and the remote collector clusters are deployed in the shared region-specific application virtual networks Mgmt-RegionA01-VXLAN and Mgmt-RegionB01-VXLAN.

Figure 1. Networking Design of the vRealize Operations Manager Deployment


The analytics nodes of vRealize Operations Manager reside in the cross-region application virtual network and use a load balancer to distribute incoming user requests. The remote collector nodes reside in the region-specific application virtual networks.

Application Virtual Network Design for vRealize Operations Manager

The vRealize Operations Manager analytics cluster is installed into the cross-region shared application virtual network and the remote collector nodes are installed in their region-specific shared application virtual networks.

This networking design has the following features:

  • The analytics nodes of vRealize Operations Manager are on the same network because they are failed over between regions. vRealize Automation also shares this network.

  • All nodes have routed access to the vSphere management network through the NSX Universal Distributed Logical Router.

  • Routing to the vSphere management network and other external networks is dynamic, and is based on the Border Gateway Protocol (BGP).

For more information about the networking configuration of the application virtual network, see Virtualization Network Design and NSX Design.
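
As a hedged illustration, the following Python sketch queries NSX Manager for the existing logical switches to confirm that the application virtual networks named above (Mgmt-xRegion01-VXLAN, Mgmt-RegionA01-VXLAN, and Mgmt-RegionB01-VXLAN) are in place before you deploy the nodes. It assumes NSX Data Center for vSphere and its /api/2.0/vdn/virtualwires REST endpoint; the NSX Manager address and credentials are placeholders.

```python
# Sketch: verify that the application virtual networks exist in NSX before deployment.
# Assumes NSX for vSphere; /api/2.0/vdn/virtualwires returns the logical switches
# (virtual wires) as XML. Host name and credentials below are placeholders.
import requests
import xml.etree.ElementTree as ET

NSX_MANAGER = "sfo01m01nsx01.sfo01.rainpole.local"   # placeholder NSX Manager FQDN
EXPECTED = {"Mgmt-xRegion01-VXLAN", "Mgmt-RegionA01-VXLAN", "Mgmt-RegionB01-VXLAN"}

response = requests.get(
    f"https://{NSX_MANAGER}/api/2.0/vdn/virtualwires",
    auth=("admin", "changeme"),      # placeholder credentials
    verify=False,                    # lab only; use CA-signed certificates in production
)
response.raise_for_status()

# Collect the names of all objects returned by NSX Manager.
# Note: the endpoint is paged; a production check would follow the paging information.
names = {element.text for element in ET.fromstring(response.content).iter("name")}

missing = EXPECTED - names
print("All application virtual networks are present." if not missing
      else f"Missing logical switches: {', '.join(sorted(missing))}")
```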

Table 1. Design Decisions about the Application Virtual Network for vRealize Operations Manager

SDDC-OPS-MON-009
Design Decision: Use the existing cross-region application virtual network for the vRealize Operations Manager analytics cluster.
Design Justification: Supports disaster recovery by isolating the vRealize Operations Manager analytics cluster on the application virtual network Mgmt-xRegion01-VXLAN.
Design Implication: You must use an implementation in NSX to support this network configuration.

SDDC-OPS-MON-010
Design Decision: Use the existing region-specific application virtual networks for the vRealize Operations Manager remote collectors.
Design Justification: Ensures that metrics are collected locally in each region in the event of a cross-region network outage. Additionally, metric collection is co-located with the per-region SDDC applications by using the virtual networks Mgmt-RegionA01-VXLAN and Mgmt-RegionB01-VXLAN.
Design Implication: You must use an implementation in NSX to support this network configuration.

IP Subnets for vRealize Operations Manager

You can allocate the following example subnets for each cluster in the vRealize Operations Manager deployment.

Table 2. IP Subnets in the Application Virtual Network of vRealize Operations Manager

  • Analytics cluster in Region A (also valid for Region B for failover): 192.168.11.0/24

  • Remote collectors in Region A: 192.168.31.0/24

  • Remote collectors in Region B: 192.168.32.0/24
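
As a hedged illustration, the following Python sketch uses the standard ipaddress module to confirm that the example subnets in Table 2 do not overlap and to report the usable host capacity of each one. The subnet values are the examples from Table 2; the labels are only for readability.

```python
# Sketch: sanity-check the example subnets from Table 2 with the standard
# library ipaddress module (no external dependencies).
import ipaddress
from itertools import combinations

subnets = {
    "Analytics cluster (Mgmt-xRegion01-VXLAN)": ipaddress.ip_network("192.168.11.0/24"),
    "Remote collectors Region A (Mgmt-RegionA01-VXLAN)": ipaddress.ip_network("192.168.31.0/24"),
    "Remote collectors Region B (Mgmt-RegionB01-VXLAN)": ipaddress.ip_network("192.168.32.0/24"),
}

# Verify that no two application virtual networks share address space.
for (name_a, net_a), (name_b, net_b) in combinations(subnets.items(), 2):
    assert not net_a.overlaps(net_b), f"{name_a} overlaps {name_b}"

# Report the usable host capacity of each subnet.
for name, net in subnets.items():
    print(f"{name}: {net} ({net.num_addresses - 2} usable host addresses)")
```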

Table 3. Design Decision about IP Subnets for vRealize Operations Manager

SDDC-OPS-MON-011
Design Decision: Allocate separate subnets for each application virtual network.
Design Justification: Placing the remote collectors on their own subnet enables them to communicate with the analytics cluster and not be a part of the failover group.
Design Implication: None.

DNS Names for vRealize Operations Manager

The FQDNs of the vRealize Operations Manager nodes follow a specific domain name resolution convention:

  • The analytics cluster node IP addresses and a load balancer virtual IP address (VIP) are associated with names that have the root domain suffix rainpole.local.

    From the public network, users access vRealize Operations Manager by using the VIP address, and the traffic to the VIP is handled by the NSX Edge services gateway.

  • Name resolution for the IP addresses of the remote collector group nodes uses a region-specific suffix, for example, sfo01.rainpole.local or lax01.rainpole.local.

Table 4. DNS Names for the Application Virtual Networks

  • vrops01svr01.rainpole.local: Virtual IP of the analytics cluster (Region A, failover to Region B)

  • vrops01svr01a.rainpole.local: Master node in the analytics cluster (Region A, failover to Region B)

  • vrops01svr01b.rainpole.local: Master replica node in the analytics cluster (Region A, failover to Region B)

  • vrops01svr01c.rainpole.local: First data node in the analytics cluster (Region A, failover to Region B)

  • vrops01svr01x.rainpole.local: Additional data nodes in the analytics cluster (Region A, failover to Region B)

  • sfo01vropsc01a.sfo01.rainpole.local: First remote collector node (Region A)

  • sfo01vropsc01b.sfo01.rainpole.local: Second remote collector node (Region A)

  • lax01vropsc01a.lax01.rainpole.local: First remote collector node (Region B)

  • lax01vropsc01b.lax01.rainpole.local: Second remote collector node (Region B)

Table 5. Design Decision about DNS Names for vRealize Operations Manager

SDDC-OPS-MON-012
Design Decision: Configure forward and reverse DNS records for all vRealize Operations Manager nodes and for the load balancer VIP address.
Design Justification: All nodes are accessible by using fully qualified domain names instead of IP addresses only.
Design Implication: You must manually provide DNS records for all vRealize Operations Manager nodes and the VIP address.
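
As a hedged illustration of verifying decision SDDC-OPS-MON-012, the following Python sketch checks forward and reverse resolution for the FQDNs from Table 4 using only the standard socket module. It assumes the records already exist on the DNS servers that the host you run it on is configured to use.

```python
# Sketch: confirm forward and reverse DNS records for the vRealize Operations
# Manager nodes and the load balancer VIP (see Table 4). Uses the resolver of
# the host it runs on; the FQDN list mirrors the design example.
import socket

FQDNS = [
    "vrops01svr01.rainpole.local",            # analytics cluster VIP
    "vrops01svr01a.rainpole.local",           # master node
    "vrops01svr01b.rainpole.local",           # master replica node
    "vrops01svr01c.rainpole.local",           # first data node
    "sfo01vropsc01a.sfo01.rainpole.local",    # Region A remote collectors
    "sfo01vropsc01b.sfo01.rainpole.local",
    "lax01vropsc01a.lax01.rainpole.local",    # Region B remote collectors
    "lax01vropsc01b.lax01.rainpole.local",
]

for fqdn in FQDNS:
    try:
        ip = socket.gethostbyname(fqdn)                    # forward (A) record
        reverse_name, _, _ = socket.gethostbyaddr(ip)      # reverse (PTR) record
        status = "OK" if reverse_name.lower() == fqdn.lower() else f"PTR mismatch ({reverse_name})"
    except socket.gaierror:
        status = "missing forward (A) record"
    except socket.herror:
        status = "missing reverse (PTR) record"
    print(f"{fqdn}: {status}")
```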

Networking for Failover and Load Balancing

By default, vRealize Operations Manager does not provide a solution for load-balanced UI user sessions across the nodes in the cluster. To provide this capability, you associate vRealize Operations Manager with the shared load balancer in the region.

The lack of load balancing for user sessions results in the following limitations:

  • Users must know the URL of each node to access the UI. As a result, a single node might be overloaded if all users access the same node at the same time.

  • Each node supports up to four simultaneous user sessions.

  • Taking a node offline for maintenance might cause an outage. Users cannot access the UI of the node when the node is offline.

To avoid such problems, place the analytics cluster behind an NSX load balancer located in the Mgmt-xRegion01-VXLAN application virtual network that is configured to allow up to four connections per node. The load balancer must distribute the load evenly to all cluster nodes. In addition, configure the load balancer to redirect service requests from the UI on port 80 to port 443.

Load balancing for the remote collector nodes is not required.
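
As a hedged illustration, the following Python sketch checks the load balancer behavior described above from a client's point of view: requests to the analytics cluster VIP on port 80 are redirected to HTTPS, and the UI answers on port 443. It uses the requests library and the example VIP FQDN from Table 4, and it does not configure the NSX Edge itself.

```python
# Sketch: verify the externally visible behavior of the NSX Edge load balancer
# in front of the analytics cluster: port 80 redirects to 443 and HTTPS responds.
# Uses the example VIP FQDN from Table 4; changes no NSX configuration.
import requests

VIP_FQDN = "vrops01svr01.rainpole.local"   # load balancer virtual IP of the analytics cluster

# 1. Requests to port 80 should be redirected to the HTTPS UI on port 443.
redirect = requests.get(f"http://{VIP_FQDN}/", allow_redirects=False, timeout=10)
assert redirect.status_code in (301, 302, 307, 308), f"expected a redirect, got {redirect.status_code}"
assert redirect.headers.get("Location", "").startswith("https://"), "redirect does not point to HTTPS"

# 2. The HTTPS UI behind the VIP should answer. verify=False only if the cluster
#    still uses self-signed certificates; prefer CA-signed certificates in production.
ui = requests.get(f"https://{VIP_FQDN}/", verify=False, timeout=10)
print(f"Redirect: {redirect.status_code} -> {redirect.headers['Location']}")
print(f"HTTPS UI status: {ui.status_code}")
```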

Table 6. Design Decisions about Networking Failover and Load Balancing for vRealize Operations Manager

SDDC-OPS-MON-013
Design Decision: Use an NSX Edge services gateway as a load balancer for the vRealize Operations Manager analytics cluster located in the Mgmt-xRegion01-VXLAN application virtual network.
Design Justification: Enables balanced access of tenants and users to the analytics services with the load being spread evenly across the cluster.
Design Implication: You must manually configure the NSX Edge devices to provide load balancing services.

SDDC-OPS-MON-014
Design Decision: Do not use a load balancer for the remote collector nodes.
Design Justification: Remote collectors must directly access the systems that they are monitoring, and they do not require access to and from the public network.
Design Implication: None.