This topic describes how to manually configure and deploy the Healthwatch Exporter for VMware Tanzu® Kubernetes Grid™ Integrated Edition (TKGI) tile.
To install, configure, and deploy Healthwatch Exporter for TKGI through an automated pipeline, see Installing, Configuring, and Deploying a Tile Through an Automated Pipeline.
When installed on an Ops Manager foundation you want to monitor, Healthwatch Exporter for TKGI deploys metric exporter VMs to generate service level indicators (SLIs) related to the health of your TKGI deployment. The Prometheus instance that exists within your metrics monitoring system then scrapes the Prometheus exposition endpoints on the metric exporter VMs and imports those metrics into your monitoring system. For more information about the architecture of the Healthwatch Exporter for TKGI tile, see Healthwatch Exporter for TKGI in Healthwatch Architecture.
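For example, each metric exporter VM serves plain-text metrics in the Prometheus exposition format, similar to the following sketch. The metric name, label, and value below are illustrative placeholders, not the exact metrics that Healthwatch Exporter for TKGI emits:
# HELP example_cert_expires_seconds Hypothetical gauge for the number of seconds until a certificate expires.
# TYPE example_cert_expires_seconds gauge
example_cert_expires_seconds{deployment="example-deployment"} 2592000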
After installing Healthwatch Exporter for TKGI, you configure the metric exporter VMs deployed by Healthwatch Exporter for TKGI through the tile UI. You can also configure errands and system logging, as well as scale VM instances up or down and configure load balancers for multiple VM instances.
To configure and deploy the Healthwatch Exporter for TKGI tile:
Note: If you want to quickly deploy the Healthwatch Exporter for TKGI tile to ensure that it deploys successfully before you fully configure it, you only need to configure the Assign AZs and Networks and BOSH Health Metric Exporter VM panes.
Navigate to the Healthwatch Exporter for TKGI tile in the Ops Manager Installation Dashboard. For more information, see Navigate to the Healthwatch Exporter for TKGI Tile below.
Assign jobs to your availability zones (AZs) and networks. For more information, see Assign AZs and Networks below.
(Optional) Configure the TKGI Metric Exporter VMs pane. For more information, see (Optional) Configure TKGI and Certificate Expiration Metric Exporter VMs below.
(Optional) Configure the TKGI SLI Exporter VM pane. For more information, see (Optional) Configure TKGI SLI Exporter VMs below.
Configure the BOSH Health Metric Exporter VM pane. For more information, see Configure the BOSH Health Metric Exporter VM below.
(Optional) Configure the BOSH Deployment Metric Exporter VM pane. For more information, see (Optional) Configure the BOSH Deployment Metric Exporter VM below.
(Optional) Configure the Errands pane. For more information, see (Optional) Configure Errands below.
(Optional) Configure the Syslog pane. For more information, see (Optional) Configure Syslog below.
(Optional) Configure the Resource Config pane. For more information, see (Optional) Configure Resources below.
Deploy the Healthwatch Exporter for TKGI tile through the Ops Manager Installation Dashboard. For more information, see Deploy Healthwatch Exporter for TKGI below.
Once you have finished installing, configuring, and deploying Healthwatch Exporter for TKGI, configure a scrape job for Healthwatch Exporter for TKGI in the Prometheus instance that exists within your monitoring system. For more information, see Configure a Scrape Job for Healthwatch Exporter for TKGI below.
Note: You only need to configure a scrape job for installations of Healthwatch Exporter for TKGI that are not on the same Ops Manager foundation as your Healthwatch for VMware Tanzu tile. The Prometheus instance in the Healthwatch tile automatically discovers and scrapes Healthwatch Exporter tiles that are installed on the same Ops Manager foundation as the Healthwatch tile.
To navigate to the Healthwatch Exporter for TKGI tile:
Navigate to the Ops Manager Installation Dashboard.
Click the Healthwatch Exporter for Tanzu Kubernetes Grid - Integrated tile.
In the Assign AZs and Networks pane, you assign jobs to your AZs and networks.
To configure the Assign AZs and Networks pane:
Select Assign AZs and Networks.
Under Place singleton jobs in, select the first AZ. Ops Manager runs any job with a single instance in this AZ.
Under Balance other jobs in, select one or more other AZs. Ops Manager balances instances of jobs with more than one instance across the AZs that you specify.
From the Network dropdown, select the runtime network that you created when configuring the BOSH Director tile. For more information about TKGI networks, see the Ops Manager documentation.
(Optional) If you want to assign jobs to a service network in addition to your runtime network, select it from the Services Network dropdown. For more information about TKGI service networks, see the Ops Manager documentation.
Click Save.
In the TKGI Metric Exporter VMs pane, you configure static IP addresses for the TKGI metric exporter and certificate expiration metric exporter VMs. After generating these metrics, the metric exporter VMs expose them in Prometheus exposition format on a secured endpoint.
To configure the TKGI Metric Exporter VMs pane:
Important: The IP addresses you configure in the TKGI Metric Exporter VMs pane must not be within the reserved IP ranges you configured in the BOSH Director tile.
Select TKGI Metric Exporter VMs.
(Optional) For Static IP address for TKGI metric exporter VM, enter a valid static IP address that you want to reserve for the TKGI metric exporter VM. The TKGI metric exporter VM collects health metrics from the BOSH Director. For more information, see TKGI Metric Exporter VM in Healthwatch Metrics.
(Optional) For Static IP address for certificate expiration metric exporter VM, enter a valid static IP address that you want to reserve for the certificate expiration metric exporter VM. The certificate expiration metric exporter VM collects metrics that show when certificates in your Ops Manager deployment are due to expire. For more information, see Certificate Expiration Metric Exporter VM in Healthwatch Metrics and Monitoring Certificate Expiration.
Note: If you have both Healthwatch Exporter for TKGI and Healthwatch Exporter for Tanzu Application Service for VMs (TAS for VMs) installed on the same Ops Manager foundation, scale the certificate expiration metric exporter VM to zero instances in the Resource Config pane in one of the Healthwatch Exporter tiles. Otherwise, the two certificate expiration metric exporter VMs create redundant sets of metrics.
(Optional) If your Ops Manager deployment uses self-signed certificates, activate the Skip TLS certificate verification checkbox. When this checkbox is activated, the certificate expiration metric exporter VM does not verify the identity of the Ops Manager VM. This checkbox is deactivated by default.
Click Save.
In the TKGI SLI Exporter VM pane, you configure the TKGI SLI exporter VM. The TKGI SLI exporter VM generates SLIs that allow you to monitor whether the core functions of the TKGI Command-Line Interface (TKGI CLI) are working as expected. The TKGI CLI allows developers to create and manage Kubernetes clusters through TKGI. For more information, see TKGI SLI Exporter VM in Healthwatch Metrics.
To configure the TKGI SLI Exporter VM pane:
Select TKGI SLI Exporter VM.
(Optional) For Static IP address for TKGI SLI exporter VM, enter a valid static IP address that you want to reserve for the TKGI SLI exporter VM. This IP address must not be within the reserved IP ranges you configured in the BOSH Director tile.
For SLI test frequency, enter how frequently, in seconds, you want the TKGI SLI exporter VM to run SLI tests.
(Optional) To allow the TKGI SLI exporter VM to communicate with the TKGI API over TLS, configure one of the following options:
Click Save.
In the BOSH Health Metric Exporter VM pane, you configure the AZ and VM type of the BOSH health metric exporter VM. Healthwatch Exporter for TKGI deploys the BOSH health metric exporter VM, which creates a BOSH deployment called bosh-health every ten minutes. The bosh-health deployment deploys another VM, bosh-health-check, that runs a suite of SLI tests to validate the functionality of the BOSH Director. After the SLI tests are complete, the BOSH health metric exporter VM collects the metrics from the bosh-health-check VM, then deletes the bosh-health deployment and the bosh-health-check VM. For more information, see BOSH Health Metric Exporter VM in Healthwatch Metrics.
To configure the BOSH Health Metric Exporter VM pane:
Select BOSH Health Metric Exporter VM.
Under Availability zone, select the AZ on which you want Healthwatch Exporter for TKGI to deploy the BOSH health metric exporter VM.
From the VM type dropdown, select the type of VM you want Healthwatch Exporter for TKGI to deploy.
Click Save.
Note: If you have both Healthwatch Exporter for TKGI and Healthwatch Exporter for TAS for VMs installed on the same Ops Manager foundation, scale the BOSH health metric exporter VM to zero instances in the Resource Config pane in one of the Healthwatch Exporter tiles. Otherwise, the two sets of BOSH health metric exporter VM metrics cause a 401 error in your BOSH Director deployment, and one set of metrics reports that the BOSH Director is down in the Grafana UI. For more information, see BOSH Health Metrics Cause Errors When Two Healthwatch Exporter Tiles Are Installed in Troubleshooting Healthwatch.
In the BOSH Deployment Metric Exporter VM pane, you configure the authentication credentials and a static IP address for the BOSH deployment metric exporter VM. This VM checks every 30 seconds whether any BOSH deployments other than the one created by the BOSH health metric exporter VM are running. For more information, see BOSH Deployment Metric Exporter VM in Healthwatch Metrics.
To configure the BOSH Deployment Metric Exporter VM pane:
Select BOSH Deployment Metric Exporter VM.
(Optional) For UAA client credentials, enter the username and secret for the UAA client that the BOSH deployment metric exporter VM uses to access the BOSH Director VM. For more information, see Create a UAA Client for the BOSH Deployment Metric Exporter VM below.
(Optional) For Static IP address for BOSH deployment metric exporter VM, enter a valid static IP address that you want to reserve for the BOSH deployment metric exporter VM. This IP address must not be within the reserved IP ranges you configured in the BOSH Director tile.
Click Save.
Note: If you have both Healthwatch Exporter for TKGI and Healthwatch Exporter for TAS for VMs installed on the same Ops Manager foundation, scale the BOSH deployment metric exporter VM to zero instances in the Resource Config pane in one of the Healthwatch Exporter tiles. Otherwise, the two BOSH deployment metric exporter VMs create redundant sets of metrics.
To allow the BOSH deployment metric exporter VM to access the BOSH Director VM and view BOSH deployments, you must create a new UAA client for the BOSH deployment metric exporter VM. The procedure to create this UAA client differs depending on the authentication settings of your Ops Manager deployment.
To create a UAA client for the BOSH deployment metric exporter VM:
Return to the Ops Manager Installation Dashboard.
Record the IP address for the BOSH Director VM and the login and administrator credentials for the BOSH Director UAA instance:
In the BOSH Director tile, select the Status tab and record the IP address listed for the BOSH Director VM.
In the Credentials tab of the BOSH Director tile, record the value of password in the Uaa Admin Client Credentials row. This value is the secret for Uaa Admin Client Credentials.
Record the value of password in the Uaa Login Client Credentials row. This value is the secret for Uaa Login Client Credentials.
Record the value of password in the Uaa Bosh Client Credentials row. This value is the secret for Uaa Bosh Client Credentials.
SSH into the Ops Manager VM by following the procedure in the Ops Manager documentation.
Target the UAA instance for the BOSH Director by running:
uaac target https://BOSH-DIRECTOR-IP:8443 --skip-ssl-validation
Where BOSH-DIRECTOR-IP is the IP address for the BOSH Director VM that you recorded from the Status tab in the BOSH Director tile in a previous step.
Log in to the UAA instance:
If your Ops Manager deployment uses internal authentication, log in to the UAA instance by running:
uaac token owner get login -s UAA-LOGIN-CLIENT-SECRET
Where UAA-LOGIN-CLIENT-SECRET is the secret you recorded from the Uaa Login Client Credentials row in the Credentials tab in the BOSH Director tile in a previous step.
If your Ops Manager deployment uses SAML or LDAP, log in to the UAA instance by running:
uaac token client get bosh_admin_client -s BOSH-UAA-CLIENT-SECRET
Where BOSH-UAA-CLIENT-SECRET is the secret you recorded from the Uaa Bosh Client Credentials row in the Credentials tab in the BOSH Director tile in a previous step.
When prompted, enter the UAA administrator client username admin and the secret you recorded from the Uaa Admin Client Credentials row in the Credentials tab in the BOSH Director tile in a previous step.
Create a UAA client for the BOSH deployment metric exporter VM by running:
uaac client add CLIENT-USERNAME \
--secret CLIENT-SECRET \
--authorized_grant_types client_credentials,refresh_token \
--authorities bosh.read \
--scope bosh.read
Where:
CLIENT-USERNAME is the username you want to set for the UAA client.
CLIENT-SECRET is the secret you want to set for the UAA client.
Return to the Ops Manager Installation Dashboard.
Click the Healthwatch Exporter for Tanzu Kubernetes Grid - Integrated tile.
Select BOSH Deployment Metric Exporter VM.
For UAA client credentials, enter the username and secret for the UAA client you just created.
Errands are scripts that Ops Manager runs automatically when it installs or uninstalls a product, such as a new version of Healthwatch Exporter for TKGI. There are two types of errands: post-deploy errands run after the product is installed, and pre-delete errands run before the product is uninstalled. Healthwatch Exporter for TKGI has no pre-delete errands.
By default, Ops Manager always runs all errands.
In the Errands pane, you can select On to always run an errand or Off to never run it.
For more information about how Ops Manager manages errands, see the Ops Manager documentation.
To configure the Errands pane:
Select Errands.
(Optional) Choose whether to always run or never run the Smoke Tests errand. This errand verifies that the metric exporter VMs are running.
Click Save.
In the Syslog pane, you can configure system logging in Healthwatch Exporter for TKGI to forward log messages from tile component VMs to an external destination, such as a remote server or an external syslog aggregation service, for troubleshooting.
To configure the Syslog pane:
Select Syslog.
Under Do you want to configure Syslog forwarding?, select one of the following options:
For Address, enter the IP address or DNS domain name of your external destination.
For Port, enter a port on which your external destination listens.
For Transport Protocol, select TCP or UDP from the dropdown. This determines which transport protocol Healthwatch Exporter for TKGI uses to forward system logs to your external destination.
(Optional) To transmit logs over TLS:
(Optional) For Queue Size, specify the number of log messages Healthwatch Exporter for TKGI can hold in a buffer at a time before sending them to your external destination. The default value is 100000.
(Optional) To forward debug logs to your external destination, activate the Forward Debug Logs checkbox. This checkbox is deactivated by default.
(Optional) To specify a custom syslog rule, enter it in Custom rsyslog configuration in RainerScript syntax. For more information about custom syslog rules, see the TAS for VMs documentation. For more information about RainerScript syntax, see the rsyslog documentation.
Click Save Syslog Settings.
In the Resource Config pane, you can scale the VMs in Healthwatch Exporter for TKGI up or down according to the needs of your deployment, as well as associate load balancers with a group of VMs. For example, you can increase the persistent disk size of a metric exporter VM to allow for longer data retention.
To configure the Resource Config pane:
Select Resource Config.
(Optional) To scale a job, select an option from the dropdown for the resource you want to modify:
(Optional) To add a load balancer to a job:
Click Save.
To complete your installation of the Healthwatch Exporter for TKGI tile:
Return to the Ops Manager Installation Dashboard.
Click Review Pending Changes.
Click Apply Changes.
For more information, see the Ops Manager documentation.
After you have successfully deployed Healthwatch Exporter for TKGI, you must configure a scrape job in the Prometheus instance that exists within your metrics monitoring system, unless you installed Healthwatch Exporter for TKGI on the same Ops Manager foundation as the Healthwatch tile. Follow the procedure in one of the following sections, depending on which monitoring system you use:
If you monitor metrics using the Healthwatch tile on an Ops Manager foundation, see Configure a Scrape Job for Healthwatch Exporter for TKGI in Healthwatch below.
Note: You only need to configure a scrape job for installations of Healthwatch Exporter for TKGI that are not on the same Ops Manager foundation as your Healthwatch tile. The Prometheus instance in the Healthwatch tile automatically discovers and scrapes Healthwatch Exporter tiles that are installed on the same Ops Manager foundation as the Healthwatch tile.
If you monitor metrics using a service or database located outside your Ops Manager foundation, such as an external time series database (TSDB), see Configure a Scrape Job for Healthwatch Exporter for TKGI in an External Monitoring System below.
To configure a scrape job for Healthwatch Exporter for TKGI in the Healthwatch tile on your Ops Manager foundation, see Configure Prometheus in Configuring Healthwatch.
To configure a scrape job for Healthwatch Exporter for TKGI in a service or database that is located outside your Ops Manager foundation:
Open network communication paths from your external service or database to the metric exporter VMs in Healthwatch Exporter for TKGI. The procedure to open these network paths differs depending on your Ops Manager foundation’s IaaS. For a list of TCP ports used by each metric exporter VM, see Required Networking Rules for Healthwatch Exporter for TKGI in Healthwatch Architecture.
In the scrape_config section of the Prometheus configuration file, create a scrape job for your Ops Manager foundation. Under static_config, specify the TCP ports of each metric exporter VM as static targets for the IP address of your external service or database. For example:
- job_name: foundation-1
  metrics_path: /metrics
  scheme: https
  static_configs:
  - targets:
    - "1.2.3.4:8443"
    - "1.2.3.4:25555"
    - "1.2.3.4:443"
    - "1.2.3.4:25595"
    - "1.2.3.4:9021"
For more information about the scrape_config section of the Prometheus configuration file, see the Prometheus documentation. For more information about the static_config section of the Prometheus configuration file, see the Prometheus documentation.
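Because the example scrape job above uses the https scheme, your Prometheus instance also needs to know how to handle the TLS certificates that the metric exporter VMs present. The following sketch shows one way to add a tls_config block to the scrape job; the CA file path is a placeholder, and whether you supply a CA certificate or skip verification depends on how the certificates in your Ops Manager deployment are issued:
- job_name: foundation-1
  metrics_path: /metrics
  scheme: https
  tls_config:
    # Placeholder path to the CA certificate that signed the metric exporter VM certificates.
    ca_file: /etc/prometheus/certs/opsman-root-ca.pem
    # For self-signed certificates, you can skip verification instead:
    # insecure_skip_verify: true
  static_configs:
  - targets:
    - "1.2.3.4:8443"
For more information about the tls_config section of the Prometheus configuration file, see the Prometheus documentation.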