For secure access to the UI and API of vRealize Operations Manager, you place the analytics cluster nodes on the cross-instance NSX segment. This configuration also supports public access to the analytics cluster nodes.

Network Segments

The network segment design covers the characteristics and design decisions for placing vRealize Operations Manager in the management domain.

For secure access, load balancing, and multi-instance design, you deploy the vRealize Operations Manager analytics cluster on the cross-instance NSX segment, and you place the remote collector groups on the corresponding local-instance NSX segments.

This validated solution uses an implementation of the VMware Cloud Foundation application virtual networks feature in the management domain provided by NSX-T Data Center. The application virtual networks in the management domain can be either overlay-backed NSX segments or VLAN-backed NSX segments.
Table 1. NSX Segment Types

Type

Description

Overlay-backed NSX segment

The routing to the VLAN-backed management network segment and other networks is dynamic and based on the Border Gateway Protocol (BGP).

Routed access to the VLAN-backed management network segment is provided through an NSX-T Data Center Tier-1 and Tier-0 gateway.

Recommended option to facilitate scale-out to a multi-instance design that supports disaster recovery.

VLAN-backed NSX segment

You must provide two unique VLANs, network subnets, and vCenter Server port group names.

Figure 1. Network Design of the vRealize Operations Manager Deployment on Overlay-Backed NSX Segments
The analytics cluster nodes are connected to the cross-instance NSX segment for secure access to the application UI and API, and the remote collector nodes are connected to the corresponding local-instance NSX segments. The cross-instance NSX segment is connected to the management networks in the VMware Cloud Foundation instances through the cross-instance NSX Tier-0 gateway and the cross-instance Tier-1 gateway. Each local-instance NSX segment is connected to the management network in the corresponding VMware Cloud Foundation instance through the cross-instance NSX Tier-0 gateway and the local-instance Tier-1 gateway.
Table 2. Design Decisions on the Network Segments for vRealize Operations Manager

Decision ID

Design Decision

Design Justification

Design Implication

IOM-VROPS-NET-001

Place the vRealize Operations Manager analytics nodes on the cross-instance NSX network segment.

Provides a consistent deployment model for management applications and a potential to extend to a second VMware Cloud Foundation instance for disaster recovery.

You must use an implementation of NSX-T Data Center to support this network configuration.

IOM-VROPS-NET-002

Place the vRealize Operations Manager remote collector nodes on the local-instance NSX network segment.

Supports collection of metrics locally per VMware Cloud Foundation instance.

You must use an implementation of NSX-T Data Center to support this network configuration.

Network Segments for Multiple VMware Cloud Foundation Instances

In an environment with multiple VMware Cloud Foundation instances, the remote collector nodes in each instance are connected to the corresponding local-instance network segment.

Table 3. Design Decisions on the Network Segments for vRealize Operations Manager for Multiple VMware Cloud Foundation Instances

Decision ID

Design Decision

Design Justification

Design Implication

IOM-VROPS-NET-003

In an environment with multiple VMware Cloud Foundation instances, place the vRealize Operations Manager remote collector nodes in each instance on the local-instance NSX segment.

Supports collection of metrics locally per VMware Cloud Foundation instance.

You must use an implementation of NSX-T Data Center to support this network configuration.

IP Addressing

Allocate statically assigned IP addresses and host names to the vRealize Operations Manager nodes from their corresponding networks.
Table 4. Design Decisions on the IP Addressing for vRealize Operations Manager

Decision ID

Design Decision

Design Justification

Design Implication

IOM-VROPS-NET-004

Allocate statically assigned IP addresses and host names from the cross-instance NSX segment to the vRealize Operations Manager analytics cluster nodes and the NSX-T Data Center load balancer.

Ensures stability across the SDDC and makes the environment simpler to maintain and easier to track.

Requires precise IP address management.

IOM-VROPS-NET-005

Allocate statically assigned IP addresses and host names from the local-instance NSX segment to the vRealize Operations Manager remote collector nodes.

Ensures stability across the SDDC and makes the environment simpler to maintain and easier to track.

Requires precise IP address management.
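The static allocation called out above can be captured in a simple address plan. The sketch below uses Python's standard ipaddress module; the subnet, host names, and node roles are hypothetical examples, not values from this design, and the real addressing comes from your VMware Cloud Foundation network specification.

```python
# Sketch of a static IP plan for the analytics cluster and the NSX-T load
# balancer VIP, carved from a hypothetical cross-instance segment
# (192.168.11.0/24). Host names and roles are illustrative only.
import ipaddress

segment = ipaddress.ip_network("192.168.11.0/24")
hosts = iter(segment.hosts())

plan = {
    "vrops-lb-vip": str(next(hosts)),   # NSX-T load balancer virtual IP
    "vrops01a": str(next(hosts)),       # analytics node 1 (primary)
    "vrops01b": str(next(hosts)),       # analytics node 2 (replica)
    "vrops01c": str(next(hosts)),       # analytics node 3 (data)
}

# A statically assigned plan must be unique and stay inside the segment.
assert len(set(plan.values())) == len(plan)
assert all(ipaddress.ip_address(ip) in segment for ip in plan.values())
```

Recording the plan as data like this is what makes the "precise IP address management" implication tractable: duplicates and out-of-subnet entries can be caught before deployment.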

IP Addressing for Multiple VMware Cloud Foundation Instances

In an environment with multiple VMware Cloud Foundation instances, the remote collector nodes in each instance are assigned IP addresses associated with their corresponding network.

Table 5. Design Decisions on the IP Addressing for vRealize Operations Manager for Multiple VMware Cloud Foundation Instances

Decision ID

Design Decision

Design Justification

Design Implication

IOM-VROPS-NET-006

In an environment with multiple VMware Cloud Foundation instances, allocate statically assigned IP addresses and host names from each local-instance NSX segment to the corresponding vRealize Operations Manager remote collector nodes in the instance.

Ensures stability across the SDDC and makes the environment simpler to maintain and easier to track.

Requires precise IP address management.

Name Resolution

Name resolution provides the translation between an IP address and a fully qualified domain name (FQDN), which makes it easier to remember and connect to components across the SDDC. The IP address of each vRealize Operations Manager node, including the load balancer VIP, must have a valid internal DNS forward (A) and reverse (PTR) record.
Table 6. Design Decisions on Name Resolution for vRealize Operations Manager

Decision ID

Design Decision

Design Justification

Design Implication

IOM-VROPS-NET-007

Configure forward and reverse DNS records for all vRealize Operations Manager nodes and for the NSX-T Data Center load balancer virtual IP address.

All nodes are accessible by using fully qualified domain names instead of by using IP addresses only.

You must provide DNS records for the vRealize Operations Manager nodes.
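The forward (A) and reverse (PTR) requirement above amounts to a round-trip consistency check per node and for the load balancer VIP. The following sketch expresses that check with injectable resolver callables so it can be exercised without live DNS; in practice the callables would wrap socket.gethostbyname and socket.gethostbyaddr. The zone data and names below are hypothetical.

```python
# Minimal sketch of the A/PTR round-trip check each vRealize Operations
# Manager node (and the load balancer VIP) must pass. Resolvers are injected
# so the logic runs without a live DNS server.
def dns_records_consistent(fqdn, forward_lookup, reverse_lookup):
    """Return True when fqdn -> IP -> fqdn round-trips cleanly."""
    ip = forward_lookup(fqdn)           # A record lookup
    if ip is None:
        return False
    resolved_name = reverse_lookup(ip)  # PTR record lookup
    return resolved_name is not None and resolved_name.lower() == fqdn.lower()

# Hypothetical zone data standing in for the internal DNS server.
A_RECORDS = {"vrops01a.rainpole.io": "192.168.11.42"}
PTR_RECORDS = {"192.168.11.42": "vrops01a.rainpole.io"}

ok = dns_records_consistent(
    "vrops01a.rainpole.io",
    forward_lookup=A_RECORDS.get,
    reverse_lookup=PTR_RECORDS.get,
)
```

A node whose PTR record is missing or points at a different name fails the check, which is exactly the misconfiguration that breaks certificate validation and cluster formation later.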

Load Balancing

A vRealize Operations Manager cluster deployment requires a load balancer to manage the connections to vRealize Operations Manager. This validated solution uses load-balancing services provided by NSX-T Data Center in the management domain. The load balancer is automatically configured by vRealize Suite Lifecycle Manager and SDDC Manager during the deployment of vRealize Operations Manager. The load balancer is configured with the following settings.

Table 7. vRealize Operations Manager Load Balancer Configuration

Load Balancer Element

Settings

Service monitor

  • Name: vrops-https-monitor

  • Default intervals and timeouts:

    • Monitoring interval: 5 seconds

    • Idle timeout period: 16 seconds

    • Rise/Fall count: 3

  • HTTP request:

    • HTTP method: GET

    • HTTP request version: 1.1

    • Request URL: /suite-api/api/deployment/node/status?services=api&services=adminui&services=ui

  • HTTP response:

    • HTTP response code: 200, 204, 301

    • HTTP response body: ONLINE

Server pool

  • Name: vrops-server-pool

  • Algorithm: LEAST_CONNECTION

  • SNAT translation mode: Auto Map

  • Static members:

    • Name: host_name

    • IP: IP_address

    • Port: 443

    • Weight: 1

    • State: Enabled

  • Service monitor: vrops-https-monitor

TCP application profile

  • Name: vrops-tcp-app-profile

  • Timeout: 1800 seconds (30 minutes)

Source IP persistence profile

  • Name: vrops-source-ip-persistence-profile

  • Timeout: 1800 seconds (30 minutes)

HTTP redirect application profile

  • Name: vrops-http-app-profile-redirect

  • Timeout: 1800 seconds (30 minutes)

  • Redirection: HTTP to HTTPS Redirect

Virtual server

  • Name: vrops-https

  • Type: L4

  • Port: 443

  • IP: vrealize_operations_manager_cluster_IP_address

  • Persistence: vrops-source-ip-persistence-profile

  • Application profile: vrops-tcp-app-profile

  • Server pool: vrops-server-pool

HTTP redirect virtual server

  • Name: vrops-http-redirect

  • Type: L7

  • Port: 80

  • IP: vrealize_operations_manager_cluster_IP_address

  • Persistence: Disabled

  • Application profile: vrops-http-app-profile-redirect

  • Server pool: None
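Two pieces of the configuration in Table 7 carry the actual decision logic: the service monitor's pass criteria and the server pool's LEAST_CONNECTION algorithm. The sketch below restates both as plain functions so the behavior is unambiguous; it is an illustration of the rules above, not NSX-T code, and a real monitor would issue the GET over HTTPS to the suite API status URL.

```python
# Pass/fail rule of vrops-https-monitor: the health check succeeds only when
# the HTTP response code is 200, 204, or 301 AND the body contains "ONLINE".
HEALTHY_CODES = {200, 204, 301}

def monitor_passes(status_code, body):
    return status_code in HEALTHY_CODES and "ONLINE" in body

# A node reporting its services online passes the check...
assert monitor_passes(200, '{"status": "ONLINE"}')
# ...while a 503, or a healthy code with a non-ONLINE body, marks it down.
assert not monitor_passes(503, '{"status": "ONLINE"}')
assert not monitor_passes(200, '{"status": "OFFLINE"}')

# LEAST_CONNECTION on the server pool: a new session goes to the member with
# the fewest active connections (among members passing the monitor).
def pick_member(active_connections):
    return min(active_connections, key=active_connections.get)

assert pick_member({"vrops01a": 10, "vrops01b": 3, "vrops01c": 7}) == "vrops01b"
```

Checking the body for ONLINE in addition to the status code matters: a node can answer HTTP 200 while its api, adminui, or ui service is still starting, and the body check keeps such a node out of rotation.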

Table 8. Design Decisions on Load Balancing for vRealize Operations Manager

Decision ID

Design Decision

Design Justification

Design Implication

IOM-VROPS-NET-008

Use the small-size NSX-T Data Center load balancer that SDDC Manager configures on a dedicated Tier-1 gateway in the management domain, which already load balances the clustered Workspace ONE Access nodes, to also distribute connections across the vRealize Operations Manager analytics cluster members.

Required for an analytics cluster deployment with user interface access distributed across the cluster members.

You must use the NSX-T Data Center load balancer that is configured by SDDC Manager to support this network configuration.

IOM-VROPS-NET-009

Do not use a load balancer for the vRealize Operations Manager remote collector nodes.

  • vRealize Operations Manager remote collector nodes must directly access the systems that they are monitoring.

  • vRealize Operations Manager remote collector nodes do not require access to and from the public network.

None.

Time Synchronization

Time synchronization provided by the Network Time Protocol (NTP) is important to ensure that all components within the SDDC are synchronized to the same time source.

Table 9. Design Decisions on Time Synchronization for vRealize Operations Manager

Decision ID

Design Decision

Design Justification

Design Implication

IOM-VROPS-NET-010

Configure NTP on each vRealize Operations Manager node.

vRealize Operations Manager depends on time synchronization.

None.

IOM-VROPS-NET-011

Configure the timezone of vRealize Operations Manager to use UTC.

You must use UTC for the integration with vRealize Automation, because vRealize Automation supports only UTC.

If you are in a timezone other than UTC, timestamps appear skewed.
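The skew noted in this implication is purely presentational: a timestamp recorded in UTC identifies one instant, and only its local rendering differs per observer. The short sketch below illustrates this with Python's standard datetime module; the two offsets are illustrative examples, not taken from any real site.

```python
# Why the appliances stay on UTC: timestamps recorded in UTC compare cleanly
# across components regardless of where each observer sits.
from datetime import datetime, timezone, timedelta

event_utc = datetime(2021, 6, 1, 12, 0, tzinfo=timezone.utc)

# The same instant rendered in two illustrative local zones looks "skewed"...
pacific = event_utc.astimezone(timezone(timedelta(hours=-7)))
central_eu = event_utc.astimezone(timezone(timedelta(hours=2)))
assert pacific.hour == 5 and central_eu.hour == 14

# ...but converting both back to UTC shows they are the same event.
assert pacific.astimezone(timezone.utc) == event_utc
assert central_eu.astimezone(timezone.utc) == event_utc
```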