vRealize Operations Management Pack for Container Monitoring | 30 APR 2019 | Build 13480015
What's in the Release Notes

The release notes cover the following topics:
With the VMware vRealize Operations Management Pack for Container Monitoring, Virtual Infrastructure Administrators get the complete Kubernetes topology of Namespaces, Clusters, Replica Sets, Nodes, Pods, and Containers for monitoring Kubernetes clusters. The out-of-the-box dashboard not only provides an overview of the Kubernetes ecosystem, but also helps in troubleshooting by highlighting key performance indicators and alerts for the objects in the monitored Kubernetes clusters. The management pack extends the monitoring capability of vRealize Operations Manager to provide Virtual Infrastructure Administrators with insights into their Kubernetes clusters.
- Support for Kubernetes 1.13
- Support for VMware PKS 1.4
- Auto-discovery of Kubernetes clusters available in the VMware PKS environment
- Inclusion of HTTP Proxy Server settings as part of adapter configuration
For compatibility between products, please refer to the VMware Product Interoperability Matrices.
- Discovery of the Replication Controller has been disabled.
- API versions are limited to:
- extensions/v1beta1 on Kubernetes v1.5.x – 1.7.x
- apps/v1beta2 on Kubernetes v1.8.x
- apps/v1 on Kubernetes 1.9.x and above
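The version-to-API-group mapping above can be sketched as a small helper. This is a hypothetical illustration of the selection rule stated in these notes; the management pack's own lookup logic is not published, and the function name is an assumption:

```python
def api_group_for(version: str) -> str:
    """Return the API group/version used for workload objects
    (for example, Replica Sets) on a given Kubernetes version.

    Mapping as stated in the release notes:
      1.5.x - 1.7.x -> extensions/v1beta1
      1.8.x         -> apps/v1beta2
      1.9.x and up  -> apps/v1
    """
    major, minor = (int(p) for p in version.split(".")[:2])
    if major != 1:
        raise ValueError("only Kubernetes 1.x is covered here")
    if 5 <= minor <= 7:
        return "extensions/v1beta1"
    if minor == 8:
        return "apps/v1beta2"
    if minor >= 9:
        return "apps/v1"
    raise ValueError(f"unsupported Kubernetes version: {version}")
```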
- For a Pod object to collect metrics, enable the Memory Usage (MB) and CPU Usage (Cores) super metrics. To enable the super metrics, perform the following steps:
- Click Administration.
- In the left pane, click Configuration > Super Metrics.
- Select the active policy from the Policy library and click Edit.
- Select the Collect metrics and properties tab.
- Set Attribute Type to Super Metric and Object type to Kubernetes pod, and enable the Memory Usage (MB) and CPU Usage (Cores) metrics.
- The Disk IO metric for node might be missing in some clusters due to the variations in Kubelet configuration.
- The VMware PKS adapter instance auto-discovers the Kubernetes clusters available in the VMware PKS Environment. It creates an appropriate Kubernetes Cluster Resource and a Kubernetes adapter instance against each cluster.
- Metrics of the Container object might be missing in some clusters if the cAdvisor DaemonSet is not configured or if the port is not reachable.
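A plain TCP check is enough to confirm that a node answers on the cAdvisor port. A minimal sketch, assuming the historically common cAdvisor port 4194; substitute whatever port your DaemonSet actually exposes:

```python
import socket


def cadvisor_port_open(host: str, port: int = 4194, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    Port 4194 is only a conventional default here; your cAdvisor
    DaemonSet may expose a different port.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```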
- If you upgrade from VMware vRealize® Operations Management Pack™ for Container Monitoring 1.0 to 1.2.1, the collection state displays a status called Not Collecting for all the adapter instances.
This occurs because of the addition of new settings and credential types in the 1.1 version of VMware vRealize® Operations Management Pack™ for Container Monitoring.
Note: If you upgrade from VMware vRealize Operations Management Pack for Container Monitoring 1.1 to 1.2.1, you do not have to complete the steps listed below.
Workaround: Delete and recreate all the adapter instances. This leads to the creation of new objects; however, you can retain the old objects to keep historical data intact.
- From the main menu of vRealize Operations Manager, click Administration, and then in the left pane click Solutions.
- From the Solutions page, select VMware vRealize Operations Management Pack for Container Monitoring.
- Click the Configure icon. The Manage Solution dialog box appears.
- Select an adapter instance.
- Click the Delete icon.
- When the Confirmation dialog box appears, deselect the Remove related objects option if you want to retain historical data.
- Recreate the adapter instance by following the steps provided in the User Guide.
- Repeat the above steps for all adapter instances.
- During configuration, VMware vRealize® Operations Management Pack™ for Container Monitoring verifies if the cAdvisor service is accessible on every node. An error message similar to the following may appear: Unable to establish a valid connection to the target system. cAdvisor service on following nodes is either not reachable OR of a lower version than v2.1.
The error occurs if the cAdvisor service is inaccessible or if its API version is earlier than v2.1. You may also receive this error if the cAdvisor service temporarily returns a gateway error at the time of verification.
- Verify that the cAdvisor service is up and running on the affected nodes and responds to API calls.
- Verify that the API version of the cAdvisor service is v2.1 or later. If not, deploy the latest version of the cAdvisor service.
If you have completed the above two steps, you can ignore the error message and continue to save the adapter instance.
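The v2.1 minimum from the error message can be checked programmatically. A minimal sketch, assuming the version string has the "v2.1" shape the error message quotes; the endpoint that reports it and the exact format may differ in your deployment:

```python
def meets_minimum(version: str, minimum: str = "v2.1") -> bool:
    """Compare dotted version strings such as 'v2.1', ignoring a leading 'v'.

    Returns True when version is at or above the required minimum.
    """
    def parts(v: str) -> tuple:
        return tuple(int(p) for p in v.lstrip("v").split("."))

    return parts(version) >= parts(minimum)
```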
- Under recommendations, the Defined by column is displayed as KubernetesAdapter3.
- Deleting the VMware PKS adapter instance does not remove the Kubernetes adapter instances created by the VMware PKS adapter instance
When you delete the VMware PKS adapter instance, the Kubernetes adapter instances that it created are not removed.
Workaround: Manually delete the adapter instances related to the VMware PKS adapter instance.
- Adding the VMware PKS adapter configures the K8s instances but does not create the vCenter adapter instances
Adding the VMware PKS adapter configures the Kubernetes adapter instances, but does not create the vCenter adapter instances or associate them with the vCenters in which the Kubernetes cluster nodes are deployed.
Workaround: Manually configure the vCenter adapter instances and then add their details to the K8s adapters that are auto-configured by VMware PKS.
- The Environment Overview dashboard does not display the relationship between the vCenter Hosts/Virtual Machines and the Kubernetes nodes
If vRealize Operations Manager accesses the Kubernetes clusters through a proxy, the vCenter adapter instance does not provide an option to specify a proxy. As a result, the Environment Overview dashboard may not display the relationship between the vCenter Hosts/Virtual Machines and the Kubernetes nodes.
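For reference, this is how a proxy is typically wired into an HTTP client; the vCenter adapter instance offers no equivalent setting, which is what causes the gap described above. A minimal sketch using Python's standard library; the proxy URL is a placeholder:

```python
import urllib.request


def make_proxied_opener(proxy_url: str) -> urllib.request.OpenerDirector:
    """Build an opener that routes HTTP and HTTPS traffic through proxy_url."""
    handler = urllib.request.ProxyHandler({"http": proxy_url, "https": proxy_url})
    return urllib.request.build_opener(handler)


# Hypothetical usage; proxy.example.com is a placeholder:
# opener = make_proxied_opener("http://proxy.example.com:3128")
# opener.open("https://my-kubernetes-api.example.com/")
```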
- Data collection fails for the K8s adapters that are auto-configured by the VMware PKS adapter
The auto-configured K8s adapter instances that present untrusted SSL certificates have the collection status 'Failed'.
Workaround: Manually accept the untrusted certificate for the auto-configured K8s adapter instances for which data collection has failed.