You provide isolation of the vRealize Operations Manager nodes by placing them in several network segments. This networking design also supports public access to the analytics cluster nodes.

For secure access, load balancing, and portability, you deploy the vRealize Operations Manager analytics cluster in the shared cross-region application virtual network Mgmt-xRegion01-VXLAN, and the remote collector group in the shared local application virtual network Mgmt-RegionA01-VXLAN.

Figure 1. Networking Design of the vRealize Operations Manager Deployment
In the Consolidated SDDC, the vRealize Operations Manager components are in two networks. The analytics node is in the load-balanced application virtual network that supports potential cross-region failover after you scale the environment out to a two-pod design. The remote collector is in the region-specific application virtual network.

Application Virtual Network Design for vRealize Operations Manager

The vRealize Operations Manager analytics cluster is installed in the cross-region shared application virtual network and the remote collector nodes are installed in their region-specific shared application virtual networks.

This networking design has the following features:

  • The analytics nodes of vRealize Operations Manager are on the same network because they can be failed over between regions after scaling out to a multi-region design. vRealize Automation also shares this network.

  • All nodes have routed access to the vSphere management network through the NSX Universal Distributed Logical Router.

  • Routing to the vSphere management network and other external networks is dynamic, and is based on the Border Gateway Protocol (BGP).

For more information about the networking configuration of the application virtual network, see Virtualization Network Design for Consolidated SDDC and NSX Design for Consolidated SDDC.

Table 1. Design Decisions About the Application Virtual Network for vRealize Operations Manager

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| CSDDC-OPS-MON-008 | Use the existing cross-region application virtual network for the vRealize Operations Manager analytics cluster. | Provides a consistent deployment model for management applications and ensures that growth to a dual-region design is supported. | You must use an implementation in NSX to support this network configuration. |
| CSDDC-OPS-MON-009 | Use the existing region-specific application virtual networks for vRealize Operations Manager remote collectors. | Ensures that metrics are collected locally per region in the event of a cross-region network outage. | You must use an implementation in NSX to support this network configuration. |

IP Subnets for vRealize Operations Manager

You can allocate the following example subnets for each cluster in the vRealize Operations Manager deployment.

Table 2. IP Subnets in the Application Virtual Network for vRealize Operations Manager

| vRealize Operations Manager Cluster Type | IP Subnet |
| --- | --- |
| Analytics cluster in Region A | 192.168.11.0/24 |
| Remote collectors in Region A | 192.168.31.0/24 |
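If you script the subnet allocation, a quick sanity check helps confirm that the analytics and remote collector networks stay separate, as the design decision in Table 3 requires. The following is a minimal sketch in Python, assuming the example subnets from Table 2; substitute your own values.

```python
import ipaddress

# Example subnets from Table 2; replace with your own allocation.
subnets = {
    "Analytics cluster in Region A": ipaddress.ip_network("192.168.11.0/24"),
    "Remote collectors in Region A": ipaddress.ip_network("192.168.31.0/24"),
}

# The application virtual networks must not overlap, so that the remote
# collectors stay on their own subnet, outside the failover group.
names = list(subnets)
for i, first in enumerate(names):
    for second in names[i + 1:]:
        if subnets[first].overlaps(subnets[second]):
            raise ValueError(f"Subnet overlap between {first} and {second}")

for name, net in subnets.items():
    print(f"{name}: {net} ({net.num_addresses - 2} usable host addresses)")
```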

Table 3. Design Decision About IP Subnets for vRealize Operations Manager

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| CSDDC-OPS-MON-010 | Allocate separate subnets for each application virtual network. | Placing the remote collectors on their own subnet enables them to communicate with the analytics cluster while remaining outside the failover group. | None. |

DNS Names for vRealize Operations Manager

The FQDNs of the vRealize Operations Manager nodes follow these domain name resolution conventions:

  • The IP addresses of the analytics cluster node and a load balancer virtual IP address (VIP) are associated with names whose suffix is set to the root domain rainpole.local.

    From the public network, users access vRealize Operations Manager by using the VIP address. An NSX Edge services gateway provides the load balancing function for this traffic.

  • The IP addresses of the remote collector group nodes are associated with names whose suffix is set to the region-specific domain, for example, sfo01.rainpole.local.

Table 4. FQDNs for the vRealize Operations Manager Nodes

| vRealize Operations Manager DNS Name | Node Type |
| --- | --- |
| vrops01svr01.rainpole.local | Virtual IP of the analytics cluster |
| vrops01svr01a.rainpole.local | Master node in the analytics cluster |
| vrops01svr01x.rainpole.local | Additional data nodes in the analytics cluster (not deployed) |
| sfo01vropsc01a.sfo01.rainpole.local | Remote collector node in remote collector group |
| sfo01vropsc01x.sfo01.rainpole.local | Additional collector nodes in remote collector group (not deployed) |
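Decision CSDDC-OPS-MON-011 in Table 5 requires forward and reverse DNS records for all deployed nodes and the VIP address. The following is a minimal verification sketch in Python, assuming the FQDNs from Table 4 and that the host running it uses the same DNS servers as the SDDC; the "x" placeholder nodes are not deployed and are omitted.

```python
import socket

# Deployed node FQDNs and the analytics VIP from Table 4; the "x"
# placeholder nodes are not deployed and are therefore omitted.
fqdns = [
    "vrops01svr01.rainpole.local",
    "vrops01svr01a.rainpole.local",
    "sfo01vropsc01a.sfo01.rainpole.local",
]

for fqdn in fqdns:
    try:
        ip = socket.gethostbyname(fqdn)          # forward (A) lookup
        ptr_name = socket.gethostbyaddr(ip)[0]   # reverse (PTR) lookup
    except OSError as err:                       # gaierror and herror derive from OSError
        print(f"{fqdn}: lookup FAILED ({err})")
        continue
    if ptr_name.rstrip(".").lower() == fqdn.lower():
        print(f"{fqdn}: {ip} -> forward and reverse records match")
    else:
        print(f"{fqdn}: {ip} -> PTR MISMATCH (got {ptr_name})")
```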

Table 5. Design Decision About DNS Names for vRealize Operations Manager

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| CSDDC-OPS-MON-011 | Configure forward and reverse DNS records for all deployed vRealize Operations Manager nodes and the VIP address. | All nodes are accessible by using fully qualified domain names instead of IP addresses only. | You must manually provide DNS records for all vRealize Operations Manager nodes and the VIP address. |

Networking for Failover and Load Balancing

By default, vRealize Operations Manager does not provide a solution for load-balanced UI user sessions across nodes in the cluster. You associate vRealize Operations Manager with the shared load balancer in the region.

The lack of load balancing for user sessions results in the following limitations:

  • Users must know the URL of each node to access the UI. As a result, a single node might be overloaded if all users access it at the same time.

  • Each node supports up to four simultaneous user sessions.

  • Taking a node offline for maintenance might cause an outage. Users cannot access the UI of the node when the node is offline.

To avoid such problems, place the analytics cluster behind an NSX load balancer located in the Mgmt-xRegion01-VXLAN application virtual network. This load balancer is configured to allow up to four connections per node. The load balancer must distribute the load evenly to all cluster nodes. In addition, configure the load balancer to redirect service requests from the UI on port 80 to port 443.

Load balancing for the remote collector nodes is not required.
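To confirm that the NSX Edge load balancer behaves as described, a short check of the port 80 to 443 redirect on the analytics VIP can be useful. The following is a minimal sketch using only the Python standard library, assuming the VIP FQDN from Table 4 and network access to the VIP; it is not a substitute for the health checks configured on the NSX Edge itself.

```python
import http.client

# Analytics cluster VIP from Table 4; adjust for your environment.
VIP_FQDN = "vrops01svr01.rainpole.local"

# The NSX Edge load balancer should answer on port 80 with a redirect
# to HTTPS on port 443.
conn = http.client.HTTPConnection(VIP_FQDN, 80, timeout=10)
try:
    conn.request("GET", "/")
    response = conn.getresponse()
    location = response.getheader("Location", "")
    if response.status in (301, 302, 307, 308) and location.startswith("https://"):
        print(f"Redirect OK: HTTP {response.status} -> {location}")
    else:
        print(f"Unexpected response: HTTP {response.status} {response.reason} "
              f"(Location: {location!r})")
finally:
    conn.close()
```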

Table 6. Design Decisions About Networking Failover and Load Balancing for vRealize Operations Manager

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| CSDDC-OPS-MON-012 | Use an NSX Edge services gateway as a load balancer for the vRealize Operations Manager analytics cluster located in the Mgmt-xRegion01-VXLAN application virtual network. | Enables balanced access of tenants and users to the analytics services with the load being spread evenly across the cluster. | You must manually configure the NSX Edge devices to provide load balancing services. |
| CSDDC-OPS-MON-013 | Do not use a load balancer for the remote collector nodes. | Remote collector nodes must directly access the systems that they are monitoring. Remote collector nodes do not require access to and from the public network. | None. |