You isolate the vRealize Operations Manager nodes by placing them in several network segments. This networking design also supports public access to the analytics cluster nodes.

For secure access, load balancing, and portability, you deploy the vRealize Operations Manager analytics cluster in the shared cross-region application virtual network Mgmt-xRegion01-VXLAN, and the remote collector clusters in the shared region-specific application virtual networks Mgmt-RegionA01-VXLAN and Mgmt-RegionB01-VXLAN.

Figure 1. Networking Design of the vRealize Operations Manager Deployment


The analytics nodes of vRealize Operations Manager reside in the cross-region application virtual network and use the load balancer for distribution of incoming user requests. The remote collectors reside in the region-specific application virtual networks.

Application Virtual Network Design for vRealize Operations Manager

The vRealize Operations Manager analytics cluster is installed in the cross-region shared application virtual network and the remote collector nodes are installed in their region-specific shared application virtual networks.

This networking design has the following features:

  • The analytics nodes of vRealize Operations Manager are on the same cross-region network so that they can be failed over between regions after scaling out to a multi-region design. vRealize Automation and vRealize Business also share this network.

  • All nodes have routed access to the vSphere management network through the NSX Universal Distributed Logical Router.

  • Routing to the vSphere management network and other external networks is dynamic and is based on the Border Gateway Protocol (BGP).

For more information about the networking configuration of the application virtual network, see Virtualization Network Design and NSX Design.
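
Routed access from the application virtual networks to the vSphere management network can be spot-checked from a vRealize Operations Manager node before adapters are configured. The following Python sketch is illustrative only; the management endpoint FQDNs follow the naming convention of this design but are assumptions, not values defined in this section.

```python
# Minimal reachability check from a vRealize Operations Manager node toward the
# vSphere management network. Hostnames below are assumed examples that follow
# the design's naming convention; replace them with the actual management FQDNs.
import socket

MANAGEMENT_ENDPOINTS = [
    ("sfo01m01vc01.sfo01.rainpole.local", 443),  # assumed Region A management vCenter Server
    ("lax01m01vc01.lax01.rainpole.local", 443),  # assumed Region B management vCenter Server
]

def is_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host, port in MANAGEMENT_ENDPOINTS:
        state = "reachable" if is_reachable(host, port) else "NOT reachable"
        print(f"{host}:{port} is {state}")
```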

Table 1. Design Decisions on the Application Virtual Network for vRealize Operations Manager

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| SDDC-OPS-MON-011 | Use the existing cross-region application virtual network for the vRealize Operations Manager analytics cluster. | Supports disaster recovery by isolating the vRealize Operations Manager analytics cluster on the application virtual network Mgmt-xRegion01-VXLAN. | You must use an implementation in NSX to support this network configuration. |
| SDDC-OPS-MON-012 | Use the existing region-specific application virtual networks for the vRealize Operations Manager remote collectors. | Ensures collection of metrics locally per region in the event of a cross-region network outage. Also co-locates metric collection with the region-specific applications using the virtual networks Mgmt-RegionA01-VXLAN and Mgmt-RegionB01-VXLAN. | You must use an implementation in NSX to support this network configuration. |

IP Subnets for vRealize Operations Manager

You can allocate the following example subnets for each cluster in the vRealize Operations Manager deployment.

Table 2. IP Subnets in the Application Virtual Network for vRealize Operations Manager

| vRealize Operations Manager Cluster Type | IP Subnet |
| --- | --- |
| Analytics cluster in Region A | 192.168.11.0/24 |
| Remote collectors in Region A | 192.168.31.0/24 |
| Remote collectors in Region B | 192.168.32.0/24 |
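
Before allocating these subnets, a quick sanity check can confirm that they do not overlap and that each one is large enough for its cluster. The following Python sketch uses only the standard library and the example subnets from Table 2; the minimum host count is an assumed illustrative threshold.

```python
# Validate the example vRealize Operations Manager subnets from Table 2:
# confirm that they do not overlap and that each provides enough usable addresses.
import ipaddress
from itertools import combinations

SUBNETS = {
    "Analytics cluster in Region A": ipaddress.ip_network("192.168.11.0/24"),
    "Remote collectors in Region A": ipaddress.ip_network("192.168.31.0/24"),
    "Remote collectors in Region B": ipaddress.ip_network("192.168.32.0/24"),
}

# Assumed minimum number of usable host addresses per subnet (illustrative only).
MIN_USABLE_HOSTS = 8

for (name_a, net_a), (name_b, net_b) in combinations(SUBNETS.items(), 2):
    if net_a.overlaps(net_b):
        raise ValueError(f"Subnet overlap between {name_a} ({net_a}) and {name_b} ({net_b})")

for name, net in SUBNETS.items():
    usable = net.num_addresses - 2  # exclude network and broadcast addresses
    print(f"{name}: {net} provides {usable} usable addresses")
    assert usable >= MIN_USABLE_HOSTS, f"{name} is too small"
```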

Table 3. Design Decision on the IP Subnets for vRealize Operations Manager

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| SDDC-OPS-MON-013 | Allocate separate subnets for each application virtual network. | Placing the remote collectors on their own subnet enables them to communicate with the analytics cluster and not be a part of the failover group. | None. |

FQDNs for vRealize Operations Manager

The FQDNs of the vRealize Operations Manager nodes follow these domain name resolution rules:

  • The IP addresses of the analytics cluster nodes and the load balancer virtual IP address (VIP) are associated with names whose suffix is set to the root domain rainpole.local.

    From the public network, users access vRealize Operations Manager using the VIP address, the traffic to which is handled by an NSX Edge services gateway providing the load balancer function.

  • The IP addresses of the remote collector group nodes are associated with names whose suffix is set to the region-specific domain, for example, sfo01.rainpole.local or lax01.rainpole.local.

Table 4. FQDNs for the vRealize Operations Manager Nodes

| FQDN | Node Type | Region |
| --- | --- | --- |
| vrops01svr01.rainpole.local | Virtual IP of the analytics cluster | Region A (failover to Region B) |
| vrops01svr01a.rainpole.local | Master node in the analytics cluster | Region A (failover to Region B) |
| vrops01svr01b.rainpole.local | Master replica node in the analytics cluster | Region A (failover to Region B) |
| vrops01svr01c.rainpole.local | First data node in the analytics cluster | Region A (failover to Region B) |
| vrops01svr01x.rainpole.local | Additional data nodes in the analytics cluster | Region A (failover to Region B) |
| sfo01vropsc01a.sfo01.rainpole.local | First remote collector node | Region A |
| sfo01vropsc01b.sfo01.rainpole.local | Second remote collector node | Region A |
| lax01vropsc01a.lax01.rainpole.local | First remote collector node | Region B |
| lax01vropsc01b.lax01.rainpole.local | Second remote collector node | Region B |

Table 5. Design Decision on the DNS Names for vRealize Operations Manager

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| SDDC-OPS-MON-014 | Configure forward and reverse DNS records for all vRealize Operations Manager nodes and the VIP address. | All nodes are accessible by using fully qualified domain names instead of by using IP addresses only. | You must manually provide DNS records for all vRealize Operations Manager nodes and the VIP address. |
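
A lightweight way to validate decision SDDC-OPS-MON-014 is to confirm that each FQDN resolves forward and that the resulting IP address resolves back to the same name. The following sketch is a minimal check using the Python standard library and the FQDNs from Table 4; it assumes the DNS records are already in place and omits the vrops01svr01x placeholder for additional data nodes.

```python
# Verify forward (A) and reverse (PTR) resolution for the vRealize Operations
# Manager FQDNs listed in Table 4. Requires the DNS records to be in place.
import socket

FQDNS = [
    "vrops01svr01.rainpole.local",          # analytics cluster VIP
    "vrops01svr01a.rainpole.local",         # master node
    "vrops01svr01b.rainpole.local",         # master replica node
    "vrops01svr01c.rainpole.local",         # first data node
    "sfo01vropsc01a.sfo01.rainpole.local",  # Region A remote collector
    "sfo01vropsc01b.sfo01.rainpole.local",  # Region A remote collector
    "lax01vropsc01a.lax01.rainpole.local",  # Region B remote collector
    "lax01vropsc01b.lax01.rainpole.local",  # Region B remote collector
]

for fqdn in FQDNS:
    try:
        ip = socket.gethostbyname(fqdn)                # forward lookup
        reverse_name, _, _ = socket.gethostbyaddr(ip)  # reverse lookup
    except OSError as err:
        print(f"{fqdn}: lookup failed ({err})")
        continue
    match = "OK" if reverse_name.lower() == fqdn.lower() else f"mismatch ({reverse_name})"
    print(f"{fqdn} -> {ip} -> {match}")
```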

Networking for Failover and Load Balancing

By default, vRealize Operations Manager does not load-balance UI user sessions across the nodes in the cluster. You associate vRealize Operations Manager with the shared load balancer in the region.

The lack of load balancing for user sessions results in the following limitations:

  • Users must know the URL of each node to access the UI. As a result, a single node might be overloaded if all users access it at the same time.

  • Each node supports up to four simultaneous user sessions.

  • Taking a node offline for maintenance might cause an outage. Users cannot access the UI of the node when the node is offline.

To avoid such problems, place the analytics cluster behind the NSX load balancer located in the Mgmt-xRegion01-VXLAN application virtual network. This load balancer is configured to allow up to four connections per node. The load balancer must distribute the load evenly to all cluster nodes. In addition, configure the load balancer to redirect service requests from the UI on port 80 to port 443.
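
The port 80 to port 443 redirect on the VIP can be verified from any client with network access to the analytics cluster. The following sketch assumes the third-party requests library is available and uses the VIP FQDN from Table 4; the set of accepted redirect status codes is an assumption.

```python
# Check that HTTP requests to the analytics cluster VIP are redirected to HTTPS,
# as required of the NSX Edge load balancer configuration described above.
# Assumes the third-party "requests" library is installed (pip install requests).
import requests

VIP_FQDN = "vrops01svr01.rainpole.local"  # analytics cluster VIP from Table 4

response = requests.get(f"http://{VIP_FQDN}/", allow_redirects=False, timeout=10)

if response.status_code in (301, 302, 307, 308):
    location = response.headers.get("Location", "")
    print(f"Port 80 redirect in place: {response.status_code} -> {location}")
    assert location.startswith("https://"), "Redirect target is not HTTPS"
else:
    print(f"No redirect returned (HTTP {response.status_code}); check the load balancer configuration")
```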

Load balancing for the remote collector nodes is not required.

Table 6. Design Decisions on Networking Failover and Load Balancing for vRealize Operations Manager

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| SDDC-OPS-MON-015 | Use an NSX Edge services gateway as a load balancer for the vRealize Operations Manager analytics cluster located in the Mgmt-xRegion01-VXLAN application virtual network. | Enables balanced access of tenants and users to the analytics services, with the load spread evenly across the cluster. | You must manually configure the NSX Edge devices to provide load balancing services. |
| SDDC-OPS-MON-016 | Do not use a load balancer for the remote collector nodes. | Remote collector nodes must directly access the systems that they are monitoring. Remote collector nodes do not require access to and from the public network. | None. |