vRealize Operations Manager tracks and analyzes the operation of multiple data sources in the SDDC by using specialized analytic algorithms. These algorithms help vRealize Operations Manager learn and predict the behavior of every object it monitors. Users access this information by using views, reports, and dashboards.
vRealize Operations Manager is available as a pre-configured virtual appliance in OVF format. By using the virtual appliance, you can easily create vRealize Operations Manager nodes with pre-defined identical sizes.
You deploy the OVF file of the virtual appliance once for each node. After node deployment, you access the product to set up cluster nodes according to their role, and log in to configure the installation. You can deploy vRealize Operations Manager in one of the following configurations:
- Standalone node
- Cluster of one primary and at least one data node, and optionally a group of remote collector nodes.
You can establish high availability by using an external load balancer.
The compute and storage resources of the vRealize Operations Manager instances can scale up as growth demands.
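As an illustration of the per-node OVF deployment described above, the following sketch assembles an OVF Tool command line for one node. The file name, node name, datastore, target locator, and the deployment-option value are hypothetical placeholders, not values from this design; verify the flags against your OVF Tool version.

```python
# Illustrative sketch: assemble an ovftool command line for deploying one
# vRealize Operations Manager node from the OVF/OVA virtual appliance.
# All names, paths, and size values below are hypothetical placeholders.

def build_ovftool_command(ova_path, node_name, size, datastore, target):
    """Return the ovftool argument list for a single node deployment."""
    return [
        "ovftool",
        "--acceptAllEulas",
        "--name=%s" % node_name,
        "--deploymentOption=%s" % size,  # pre-defined node size, e.g. "small"
        "--datastore=%s" % datastore,
        ova_path,
        target,  # e.g. vi://user@vcenter/Datacenter/host/Cluster
    ]

cmd = build_ovftool_command(
    "vRealize-Operations-Manager.ova",   # placeholder file name
    "vrops-node-01",
    "small",
    "sfo01-datastore",
    "vi://administrator@vcenter.example.local/DC/host/Cluster",
)
```

Because each node is deployed from the same appliance with a pre-defined size, repeating this command with a different node name yields identically sized cluster nodes.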
vRealize Operations Manager contains functional elements that collaborate for data analysis and storage, and support creating clusters of nodes with different roles.
Types of Nodes
For high availability and scalability, you can deploy several vRealize Operations Manager instances in a cluster to track, analyze, and predict the operation of monitored systems. Cluster nodes can have any of the following roles.
- Primary Node
- Required initial node in the cluster. In large-scale environments, manages all other nodes. In small-scale environments, the primary node is the single standalone vRealize Operations Manager node.
- Primary Replica Node
- Optional. Enables high availability of the primary node.
- Data Node
- Optional. Enables scale-out of vRealize Operations Manager in larger environments. Data nodes have adapters installed to perform collection and analysis. Data nodes also host vRealize Operations Manager management packs.
- Remote Collector Node
- Overcomes data collection issues across the enterprise network, such as limited network performance. You can also use remote collector nodes to offload data collection from the other types of nodes. Remote collector nodes only gather statistics about inventory objects and forward the collected data to the data nodes. They do not store data or perform analysis.
The primary and primary replica nodes are data nodes that have extended capabilities.
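The role capabilities above can be summarized in a small sketch. The class and attribute names are illustrative only, not product APIs: primary and primary replica nodes are data nodes with extended capabilities, while remote collector nodes only gather and forward data.

```python
# Minimal sketch of the node roles described above and which capabilities
# each role provides. Names and attributes are illustrative, not product APIs.

from dataclasses import dataclass

@dataclass(frozen=True)
class NodeRole:
    name: str
    collects: bool   # runs adapters / gathers data
    stores: bool     # persists collected data
    analyzes: bool   # runs the analytics engine

PRIMARY          = NodeRole("primary",          True, True,  True)
PRIMARY_REPLICA  = NodeRole("primary-replica",  True, True,  True)
DATA             = NodeRole("data",             True, True,  True)
REMOTE_COLLECTOR = NodeRole("remote-collector", True, False, False)
```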
Types of Node Groups
- Analytics Cluster
- Tracks, analyzes, and predicts the operation of monitored systems. Consists of a primary node, data nodes, and optionally of a primary replica node.
- Remote Collector Group
- Because it consists of remote collector nodes, a remote collector group only collects diagnostics data without storing or analyzing it. A vRealize Operations Manager deployment can contain several collector groups.
Use collector groups to achieve adapter resiliency in cases where the collector experiences network interruption or becomes unavailable.
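The adapter resiliency behavior can be sketched as follows: when the collector running an adapter becomes unavailable, the adapter instance moves to another collector in the same group. The class, method, and collector names are hypothetical, chosen only to illustrate the failover idea.

```python
# Illustrative sketch of adapter resiliency in a collector group: when the
# collector that runs an adapter becomes unavailable, the adapter instance
# is re-homed to another collector in the same group. Names are hypothetical.

class CollectorGroup:
    def __init__(self, collectors):
        self.up = set(collectors)      # collectors currently reachable
        self.assignment = {}           # adapter name -> collector name

    def assign(self, adapter):
        if not self.up:
            raise RuntimeError("no collector available in group")
        self.assignment[adapter] = sorted(self.up)[0]

    def mark_down(self, collector):
        self.up.discard(collector)
        # re-home adapters that were running on the failed collector
        for adapter, owner in list(self.assignment.items()):
            if owner == collector:
                self.assign(adapter)

group = CollectorGroup(["rc-01", "rc-02"])
group.assign("vcenter-adapter")
group.mark_down("rc-01")   # adapter fails over to the surviving collector
```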
Application Functional Components
The functional components of a vRealize Operations Manager instance interact with each other to analyze diagnostics data from the data center and visualize the result in the Web user interface.
The components of a vRealize Operations Manager node perform the following tasks.
- Product/Admin UI and Suite API
- The UI server is a Web application that serves as both user and administration interface, and hosts the API for accessing collected statistics.
- Collector
- The Collector collects data from all components in the data center.
- Transaction Locator
- The Transaction Locator handles the data flow between the primary, primary replica, and remote collector nodes.
- Transaction Service
- The Transaction Service is responsible for caching, processing, and retrieving metrics for the analytics process.
- Analytics
- The analytics engine creates all associations and correlations between various data sets, handles all super metric calculations, performs all capacity planning functions, and is responsible for triggering alerts.
- Common Databases
- Common databases store the following types of data related to all components of a vRealize Operations Manager deployment:
- Collected metric data
- User content, metric key mappings, licensing, certificates, telemetry data, and role privileges
- Cluster administration data
- Alerts and alarms, including their root cause, and historical object properties and versions
- Replication Database
- The replication database stores all resources, such as metadata, collectors, adapters, and collector groups, and the relationships between them.
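The path a metric takes through the functional components above can be sketched end to end: the Collector gathers a data point, the Transaction Locator routes it to the owning node, and the Transaction Service caches it for the analytics engine before it is persisted. This is an illustrative stand-in only; the function names, the node-selection rule, and the trivial alert threshold are assumptions, not product behavior.

```python
# Rough sketch of the metric path through the functional components:
# Collector -> Transaction Locator -> Transaction Service (cache) ->
# analytics engine -> common databases. Entirely illustrative.

metric_store = []   # stands in for the common databases
cache = {}          # stands in for the Transaction Service cache

def collect(resource, metric, value):
    """Collector: gather one data point from the data center."""
    return {"resource": resource, "metric": metric, "value": value}

def route(point, nodes):
    """Transaction Locator: pick the node that handles this resource."""
    return nodes[hash(point["resource"]) % len(nodes)]

def process(point):
    """Transaction Service + analytics: cache, analyze, then persist."""
    cache[(point["resource"], point["metric"])] = point["value"]
    alert = point["value"] > 90        # trivial stand-in for alert analysis
    metric_store.append({**point, "alert": alert})
    return alert

node = route(collect("vm-01", "cpu|usage", 95.0), ["data-01", "data-02"])
fired = process(collect("vm-01", "cpu|usage", 95.0))
```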
You can configure vRealize Operations Manager user authentication to use one or more of the following authentication sources:
- vCenter Single Sign-On
- VMware Identity Manager
- OpenLDAP via LDAP
- Active Directory via LDAP
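Whichever source you configure, authenticating against the Suite API names that source in a token request. The sketch below builds such a request; the endpoint path and JSON field names follow the documented `/suite-api/api/auth/token/acquire` call, but treat them as assumptions and verify against your product version. The host, user, and source names are placeholders.

```python
# Hedged sketch: build the token-acquisition request for the Suite API.
# The endpoint path and field names are based on the documented
# /suite-api/api/auth/token/acquire call; verify against your version.
# Host, credentials, and the auth source name are placeholders.

import json

def build_token_request(host, username, password, auth_source=None):
    """Return (url, body) for acquiring a Suite API authentication token."""
    body = {"username": username, "password": password}
    if auth_source:                   # e.g. an Active Directory source name
        body["authSource"] = auth_source
    url = "https://%s/suite-api/api/auth/token/acquire" % host
    return url, json.dumps(body)

url, body = build_token_request("vrops.example.local", "ops-user", "secret",
                                auth_source="EXAMPLE-AD")
```

Omitting `auth_source` would fall back to local vRealize Operations Manager accounts; naming a configured source directs the request to that identity store.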
Management packs contain extensions and third-party integration software. They add dashboards, alert definitions, policies, reports, and other content to the inventory of vRealize Operations Manager. You can learn more about management packs and download them from VMware Solutions Exchange.
You back up each vRealize Operations Manager node using traditional virtual machine backup solutions that are compatible with VMware vSphere Storage APIs – Data Protection (VADP).
Multi-Region vRealize Operations Manager Deployment
The scope of this validated design covers both multiple regions and multiple availability zones.
VMware Validated Design for Software-Defined Data Center implements a large-scale vRealize Operations Manager deployment across multiple regions by using the following configuration:
- A load-balanced analytics cluster that runs multiple nodes and is protected by Site Recovery Manager for failover across regions
- Multiple remote collector nodes assigned to a remote collector group in each region to handle data coming from management solutions
In a multi-availability zone implementation, which is a super-set of the multi-region design, vRealize Operations Manager continues to provide monitoring of the solutions in all regions of the SDDC. All components of vRealize Operations Manager reside in Availability Zone 1 in Region A. If this zone becomes compromised, all nodes are brought up in Availability Zone 2.