This topic describes how to manually configure and deploy the Healthwatch Exporter for VMware Tanzu® Application Service™ (TAS for VMs) tile.

To install, configure, and deploy Healthwatch Exporter for TAS for VMs through an automated pipeline, see Installing, Configuring, and Deploying a Tile Through an Automated Pipeline.

Overview of Configuring and Deploying Healthwatch Exporter for TAS for VMs

When installed on an Ops Manager foundation you want to monitor, Healthwatch Exporter for TAS for VMs deploys metric exporter VMs to generate each type of metric related to the health of your TAS for VMs deployment. Healthwatch Exporter for TAS for VMs sends metrics through the Loggregator Firehose to Prometheus exposition endpoints on the associated metric exporter VMs. The Prometheus instance that exists within your metrics monitoring system then scrapes those exposition endpoints and imports the metrics into your monitoring system. For more information about the architecture of the Healthwatch Exporter for TAS for VMs tile, see Healthwatch Exporter for TAS for VMs in Healthwatch Architecture.

After installing Healthwatch Exporter for TAS for VMs, you configure its metric exporter VMs through the tile UI. You can also configure errands and system logging, as well as scale VM instances up or down and configure load balancers for multiple VM instances.

To configure and deploy the Healthwatch Exporter for TAS for VMs tile:

Note: If you want to quickly deploy the Healthwatch Exporter for TAS for VMs tile to ensure that it deploys successfully before you fully configure it, you only need to configure the Assign AZs and Networks and BOSH Health Metric Exporter VM panes.

  1. Navigate to the Healthwatch Exporter for TAS for VMs tile in the Ops Manager Installation Dashboard. For more information, see Navigate to the Healthwatch Exporter for TAS for VMs Tile below.

  2. Assign jobs to your availability zones (AZs) and networks. For more information, see Assign AZs and Networks below.

  3. (Optional) Configure the TAS for VMs Metric Exporter VMs pane. For more information, see (Optional) Configure TAS for VMs Metric Exporter VMs below.

  4. Configure the BOSH Health Metric Exporter VM pane. For more information, see Configure the BOSH Health Metric Exporter VM below.

  5. (Optional) Configure the BOSH Deployment Metric Exporter VM pane. For more information, see (Optional) Configure the BOSH Deployment Metric Exporter VM below.

  6. (Optional) Configure the Errands pane. For more information, see (Optional) Configure Errands below.

  7. (Optional) Configure the Syslog pane. For more information, see (Optional) Configure Syslog below.

  8. (Optional) Configure the Resource Config pane. For more information, see (Optional) Configure Resources below.

  9. Deploy the Healthwatch Exporter for TAS for VMs tile through the Ops Manager Installation Dashboard. For more information, see Deploy Healthwatch Exporter for TAS for VMs below.

  10. Once you have finished installing, configuring, and deploying Healthwatch Exporter for TAS for VMs, configure a scrape job for Healthwatch Exporter for TAS for VMs in the Prometheus VM that exists within your monitoring system. For more information, see Configure a Scrape Job for Healthwatch Exporter for TAS for VMs below.

    Note: You only need to configure a scrape job for installations of Healthwatch Exporter for TAS for VMs that are not on the same Ops Manager foundation as your Healthwatch for VMware Tanzu tile. The Prometheus instance in the Healthwatch tile automatically discovers and scrapes Healthwatch Exporter tiles that are installed on the same Ops Manager foundation as the Healthwatch tile.

Navigate to the Healthwatch Exporter for TAS for VMs Tile

To navigate to the Healthwatch Exporter for TAS for VMs tile:

  1. Navigate to the Ops Manager Installation Dashboard.

  2. Click the Healthwatch Exporter for Tanzu Application Service tile.

Assign AZs and Networks

In the Assign AZs and Networks pane, you assign jobs to your AZs and networks.

To configure the Assign AZs and Networks pane:

  1. Select Assign AZs and Networks.

  2. Under Place singleton jobs in, select the first AZ. Ops Manager runs any job with a single instance in this AZ.

  3. Under Balance other jobs in, select one or more other AZs. Ops Manager balances instances of jobs with more than one instance across the AZs that you specify.

  4. From the Network dropdown, select the runtime network that you created when configuring the BOSH Director tile. For more information about TAS for VMs networks, see the Ops Manager documentation.

  5. (Optional) If you want to assign jobs to a service network in addition to your runtime network, select it from the Services Network dropdown. For more information about TAS for VMs service networks, see the Ops Manager documentation.

  6. Click Save.

(Optional) Configure TAS for VMs Metric Exporter VMs

In the TAS for VMs Metric Exporter VMs pane, you configure static IP addresses for the metric exporter VMs that collect metrics from the Loggregator Firehose in TAS for VMs. There are two metric exporter VMs that each collect a single metric type from the Loggregator Firehose: counter or gauge. You can deploy one or both VMs. After collecting these metrics, the metric exporter VMs convert them to Prometheus exposition format and serve them on a secured endpoint.
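
For reference, the secured endpoints serve metrics in the standard Prometheus text exposition format. The following sketch is illustrative only; the metric and label names are hypothetical, not the exact names the metric exporter VMs emit:

    # Hypothetical counter and gauge metrics in Prometheus text exposition format
    # TYPE firehose_counter_event_total counter
    firehose_counter_event_total{source_id="gorouter",deployment="cf"} 12345
    # TYPE system_memory_percent gauge
    system_memory_percent{source_id="system-metrics",deployment="cf"} 37.2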

You can also deploy two other VMs: the TAS for VMs service level indicator (SLI) exporter VM and the certificate expiration metric exporter VM.

To configure the TAS for VMs Metric Exporter VMs pane:

Important: The IP addresses you configure in the TAS for VMs Metric Exporter VMs pane must not be within the reserved IP ranges you configured in the BOSH Director tile.

  1. Select TAS for VMs Metric Exporter VMs.

  2. (Optional) For Static IP address for counter metric exporter VM, enter a valid static IP address that you want to reserve for the counter metric exporter VM.

  3. (Optional) For Static IP address for gauge metric exporter VM, enter a valid static IP address that you want to reserve for the gauge metric exporter VM.

  4. (Optional) For Static IP address for TAS for VMs SLI exporter VM, enter a valid static IP address that you want to reserve for the TAS for VMs SLI exporter VM. The TAS for VMs SLI exporter VM generates SLIs that allow you to monitor whether the core functions of the Cloud Foundry Command-Line Interface (cf CLI) are working as expected. The cf CLI allows developers to create and manage apps through TAS for VMs. For more information, see TAS for VMs SLI Exporter VM in Healthwatch Metrics.

  5. (Optional) For Static IP address for certificate expiration metric exporter VM, enter a valid static IP address that you want to reserve for the certificate expiration metric exporter VM. The certificate expiration metric exporter VM collects metrics that show when certificates in your Ops Manager deployment are due to expire. For more information, see Certificate Expiration Metric Exporter VM in Healthwatch Metrics and Monitoring Certificate Expiration.

    Note: If you have both Healthwatch Exporter for TAS for VMs and Healthwatch Exporter for TKGI installed on the same Ops Manager foundation, scale the certificate expiration metric exporter VM to zero instances in the Resource Config pane in one of the Healthwatch Exporter tiles. Otherwise, the two certificate expiration metric exporter VMs create redundant sets of metrics.

  6. (Optional) If your Ops Manager deployment uses self-signed certificates, activate the Skip TLS certificate verification for certificate metric exporter VM checkbox. When this checkbox is activated, the certificate expiration metric exporter VM does not verify the identity of the Ops Manager VM. This checkbox is deactivated by default.

  7. Under cf CLI version, select from the dropdown the version of the cf CLI that your TAS for VMs or Pivotal Application Service (PAS) deployment uses:

    • If you have TAS for VMs v2.11 or later installed, select one of the following options:
      • 7 (Use with TAS 2.11+): Allows the TAS for VMs SLI exporter VM to run SLI tests for cf CLI v7. This option is selected by default.
      • CF CLI 8: Allows the TAS for VMs SLI exporter VM to run SLI tests for cf CLI v8.
    • If you have PAS v2.7 installed, select 6 (Use with TAS 2.7 only) to allow the TAS for VMs SLI exporter VM to run SLI tests for cf CLI v6.
  8. (Optional) If Metric Registrar is configured in your TAS for VMs tile, and you do not want Healthwatch to scrape custom application metrics, select the Filter out custom application metrics checkbox.

  9. Click Save.

Configure the BOSH Health Metric Exporter VM

In the BOSH Health Metric Exporter VM pane, you configure the AZ and VM type of the BOSH health metric exporter VM. Healthwatch Exporter for TAS for VMs deploys the BOSH health metric exporter VM, which creates a BOSH deployment called bosh-health every ten minutes. The bosh-health deployment deploys another VM, bosh-health-check, that runs a suite of SLI tests to validate the functionality of the BOSH Director. After the SLI tests are complete, the BOSH health metric exporter VM collects the metrics from the bosh-health-check VM, then deletes the bosh-health deployment and the bosh-health-check VM. For more information, see BOSH Health Metric Exporter VM in Healthwatch Metrics.
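
While the SLI test suite runs, you can observe the transient deployment with the BOSH CLI. A minimal sketch, assuming ENV is the alias you configured for your BOSH Director environment:

    # The bosh-health deployment appears in this list only while the SLI tests are running
    bosh -e ENV deployments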

To configure the BOSH Health Metric Exporter VM pane:

  1. Select BOSH Health Metric Exporter VM.

  2. Under Availability zone, select the AZ on which you want Healthwatch Exporter for TAS for VMs to deploy the BOSH health metric exporter VM.

  3. Under VM type, select from the dropdown the type of VM you want Healthwatch Exporter for TAS for VMs to deploy.

  4. Click Save.

Note: If you have both Healthwatch Exporter for TAS for VMs and Healthwatch Exporter for TKGI installed on the same Ops Manager foundation, scale the BOSH health metric exporter VM to zero instances in the Resource Config pane in one of the Healthwatch Exporter tiles. Otherwise, the two sets of BOSH health metric exporter VM metrics cause a 401 error in your BOSH Director deployment, and one set of metrics reports that the BOSH Director is down in the Grafana UI. For more information, see BOSH Health Metrics Cause Errors When Two Healthwatch Exporter Tiles Are Installed in Troubleshooting Healthwatch.

(Optional) Configure the BOSH Deployment Metric Exporter VM

In the BOSH Deployment Metric Exporter VM pane, you configure the authentication credentials and a static IP address for the BOSH deployment metric exporter VM. This VM checks every 30 seconds whether any BOSH deployments other than the one created by the BOSH health metric exporter VM are running. For more information, see BOSH Deployment Metric Exporter VM in Healthwatch Metrics.

To configure the BOSH Deployment Metric Exporter VM pane:

  1. Select BOSH Deployment Metric Exporter VM.

  2. (Optional) For UAA client credentials, enter the username and secret for the UAA client that the BOSH deployment metric exporter VM uses to access the BOSH Director VM. For more information, see Create a UAA Client for the BOSH Deployment Metric Exporter VM below.

  3. (Optional) For Static IP address for BOSH deployment metric exporter VM, enter a valid static IP address that you want to reserve for the BOSH deployment metric exporter VM. This IP address must not be within the reserved IP ranges you configured in the BOSH Director tile.

  4. Click Save.

Note: If you have both Healthwatch Exporter for TAS for VMs and Healthwatch Exporter for TKGI installed on the same Ops Manager foundation, scale the BOSH deployment metric exporter VM to zero instances in the Resource Config pane in one of the Healthwatch Exporter tiles. Otherwise, the two BOSH deployment metric exporter VMs create redundant sets of metrics.

Create a UAA Client for the BOSH Deployment Metric Exporter VM

To allow the BOSH deployment metric exporter VM to access the BOSH Director VM and view BOSH deployments, you must create a new UAA client for the BOSH deployment metric exporter VM. The procedure to create this UAA client differs depending on the authentication settings of your Ops Manager deployment.

To create a UAA client for the BOSH deployment metric exporter VM:

  1. Return to the Ops Manager Installation Dashboard.

  2. Record the IP address for the BOSH Director VM and the login and administrator credentials for the BOSH Director UAA instance:

    • If your Ops Manager deployment uses internal authentication:
      1. Click the BOSH Director tile.
      2. Select the Status tab.
      3. Record the IP address in the IPs column of the BOSH Director row.
      4. Select the Credentials tab.
      5. In the Uaa Admin Client Credentials row of the BOSH Director section, click Link to Credential.
      6. Record the value of password. This value is the secret for Uaa Admin Client Credentials.
      7. Return to the Credentials tab.
      8. In the Uaa Login Client Credentials row of the BOSH Director section, click Link to Credential.
      9. Record the value of password. This value is the secret for Uaa Login Client Credentials.

        For more information about internal authentication settings for your Ops Manager deployment, see the Ops Manager documentation.
    • If your Ops Manager deployment uses SAML authentication:
      1. Click the user account menu in the upper-right corner of the Ops Manager Installation Dashboard.
      2. Click Settings.
      3. Select SAML Settings.
      4. Activate the Provision an Admin Client in the BOSH UAA checkbox.
      5. Click Enable SAML Authentication.
      6. Return to the Ops Manager Installation Dashboard.
      7. Click the BOSH Director tile.
      8. Select the Status tab.
      9. Record the IP address in the IPs column of the BOSH Director row.
      10. Select the Credentials tab.
      11. In the Uaa Bosh Client Credentials row of the BOSH Director section, click Link to Credential.
      12. Record the value of password. This value is the secret for Uaa Bosh Client Credentials.

        For more information about SAML authentication settings for your Ops Manager deployment, see the Ops Manager documentation.
    • If your Ops Manager deployment uses LDAP authentication:
      1. Click the user account menu in the upper-right corner of the Ops Manager Installation Dashboard.
      2. Click Settings.
      3. Select LDAP Settings.
      4. Activate the Provision an Admin Client in the BOSH UAA checkbox.
      5. Click Enable LDAP Authentication.
      6. Return to the Ops Manager Installation Dashboard.
      7. Click the BOSH Director tile.
      8. Select the Status tab.
      9. Record the IP address in the IPs column of the BOSH Director row.
      10. Select the Credentials tab.
      11. In the Uaa Bosh Client Credentials row of the BOSH Director section, click Link to Credential.
      12. Record the value of password. This value is the secret for Uaa Bosh Client Credentials.

        For more information about LDAP authentication settings for your Ops Manager deployment, see the Ops Manager documentation.
  3. SSH into the Ops Manager VM by following the procedure in the Ops Manager documentation.

  4. Target the UAA instance for the BOSH Director by running:

    uaac target https://BOSH-DIRECTOR-IP:8443 --skip-ssl-validation
    

    Where BOSH-DIRECTOR-IP is the IP address for the BOSH Director VM that you recorded from the Status tab in the BOSH Director tile in a previous step.

  5. Log in to the UAA instance:

    • If your Ops Manager deployment uses internal authentication, log in to the UAA instance by running:

      uaac token owner get login -s UAA-LOGIN-CLIENT-SECRET
      

      Where UAA-LOGIN-CLIENT-SECRET is the secret you recorded from the Uaa Login Client Credentials row in the Credentials tab in the BOSH Director tile in a previous step.

    • If your Ops Manager deployment uses SAML or LDAP, log in to the UAA instance by running:

      uaac token client get bosh_admin_client -s BOSH-UAA-CLIENT-SECRET
      

      Where BOSH-UAA-CLIENT-SECRET is the secret you recorded from the Uaa Bosh Client Credentials row in the Credentials tab in the BOSH Director tile in a previous step.

  6. When prompted, enter the UAA administrator client username admin and the secret you recorded from the Uaa Admin Client Credentials row in the Credentials tab in the BOSH Director tile in a previous step.

  7. Create a UAA client for the BOSH deployment metric exporter VM by running:

    uaac client add CLIENT-USERNAME \
     --secret CLIENT-SECRET \
     --authorized_grant_types client_credentials,refresh_token \
     --authorities bosh.read \
     --scope bosh.read
    

    Where:

    • CLIENT-USERNAME is the username you want to set for the UAA client.
    • CLIENT-SECRET is the secret you want to set for the UAA client.
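
    You can optionally confirm that the new client was registered with the bosh.read authority by running:

    uaac client get CLIENT-USERNAME
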
  8. Return to the Ops Manager Installation Dashboard.

  9. Click the Healthwatch Exporter for Tanzu Application Service tile.

  10. Select BOSH Deployment Metric Exporter VM.

  11. For UAA client credentials, enter the username and secret for the UAA client you just created.

(Optional) Configure Errands

Errands are scripts that Ops Manager runs automatically when it installs or uninstalls a product, such as a new version of Healthwatch Exporter for TAS for VMs. There are two types of errands: post-deploy errands run after the product is installed, and pre-delete errands run before the product is uninstalled.

By default, Ops Manager always runs all errands.

In the Errands pane, you can select On to always run an errand or Off to never run it.

For more information about how Ops Manager manages errands, see the Ops Manager documentation.

To configure the Errands pane:

  1. Select Errands.

  2. (Optional) Choose whether to always run or never run the following errands:

    • Smoke Tests: Verifies that the metric exporter VMs are running.
    • Cleanup: Deletes any existing BOSH deployments created by the BOSH health metric exporter VM for running SLI tests.
    • Remove CF SLI User: Deletes the user account that the TAS for VMs SLI exporter VM creates to run the TAS for VMs SLI test suite. For more information, see TAS for VMs SLI Exporter VM in Healthwatch Metrics.
  3. Click Save.
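
If you manage tile configuration with the om CLI, as in the automated-pipeline topic linked above, you can also pin errand state in the product configuration file. This is a minimal sketch; the errand identifiers shown are hypothetical, so confirm the real names with om staged-config before using it:

    # Hypothetical errand identifiers for illustration only
    errand-config:
      smoke-test:
        post-deploy-state: true
      cleanup:
        pre-delete-state: true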

(Optional) Configure Syslog

In the Syslog pane, you can configure system logging in Healthwatch Exporter for TAS for VMs to forward log messages from tile component VMs to an external destination for troubleshooting, such as a remote server or external syslog aggregation service.

To configure the Syslog pane:

  1. Select Syslog.

  2. (Optional) Under Do you want to configure Syslog forwarding?, select one of the following options:

    • No, do not forward Syslog: Disallows syslog forwarding.
    • Yes: Allows syslog forwarding and allows you to edit the configuration fields described below.
  3. For Address, enter the IP address or DNS domain name of your external destination.

  4. For Port, enter a port on which your external destination listens.

  5. For Transport Protocol, select TCP or UDP from the dropdown. This determines which transport protocol Healthwatch Exporter for TAS for VMs uses to forward system logs to your external destination.

  6. (Optional) To transmit logs over TLS:

    1. Activate the Enable TLS checkbox. This checkbox is deactivated by default.
    2. For Permitted Peer, enter either the name or SHA1 fingerprint of the remote peer.
    3. For SSL Certificate, enter the TLS certificate for your external destination.
  7. (Optional) For Queue Size, specify the number of log messages Healthwatch Exporter for TAS for VMs can hold in a buffer at a time before sending them to your external destination. The default value is 100000.

  8. (Optional) To forward debug logs to your external destination, activate the Forward Debug Logs checkbox. This checkbox is deactivated by default.

  9. (Optional) To specify a custom syslog rule, enter it in Custom rsyslog configuration in RainerScript syntax, as in the sketch after this procedure. For more information about custom syslog rules, see the TAS for VMs documentation. For more information about RainerScript syntax, see the rsyslog documentation.

  10. Click Save Syslog Settings.
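
For example, the following custom rule drops debug-level messages before they are forwarded. This is a minimal sketch of RainerScript syntax, not a recommended production rule:

    # Discard any log message whose text contains "DEBUG"
    if $msg contains "DEBUG" then stop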

(Optional) Configure Resources

In the Resource Config pane, you can scale VMs in Healthwatch Exporter for TAS for VMs up or down according to the needs of your deployment, as well as associate load balancers with a group of VMs. For example, you can increase the persistent disk size of a metric exporter VM to allow longer data retention.

To configure the Resource Config pane:

  1. Select Resource Config.

  2. (Optional) To scale a job, select an option from the dropdown for the resource you want to modify:

    • Instances: Configures the number of instances each job has.
    • VM Type: Configures the type of VM used in each instance.
    • Persistent Disk Type: Configures the amount of persistent disk space to allocate to the job.
  3. (Optional) To add a load balancer to a job:

    1. Click the icon next to the job name.
    2. For Load Balancers, enter the name of your load balancer.
    3. Ensure that the Internet Connected checkbox is deactivated. Activating this checkbox gives VMs a public IP address that allows outbound Internet access.
  4. (Optional) The instance count for the SVM Forwarder VM is set to 0 by default. This VM emits Healthwatch-generated super value metrics (SVMs) into the Loggregator Firehose. To deploy the SVM Forwarder VM, increase the instance count by selecting from the Instances dropdown. You do not need to deploy this VM unless you use a third-party nozzle that can export the SVMs to an external system, such as a remote server or a syslog aggregation service. For more information about the SVM Forwarder VM, see SVM Forwarder VM - Platform Metrics and SVM Forwarder VM - Healthwatch Component Metrics in Healthwatch Metrics.

    Note: If you installed the Healthwatch Exporter for TAS for VMs tile before installing the Healthwatch tile, you may need to re-deploy Healthwatch Exporter for TAS for VMs after deploying the SVM Forwarder VM. For more information, see Deploy Healthwatch Exporter for TAS for VMs below.

  5. (Optional) Healthwatch Exporter for TAS for VMs deploys the counter and gauge metric exporter VMs by default. If you do not want to collect both of these metric types, set the instance count for the VMs associated with the metrics you do not want to collect to 0.

  6. Click Save.
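
As with errands, if you use the om CLI, resource settings can be captured in the product configuration file. This is a minimal sketch; the job names shown are hypothetical, so confirm the real ones with om staged-config before using it:

    # Hypothetical job names for illustration only
    resource-config:
      counter-exporter:
        instances: 1
        persistent_disk:
          size_mb: "20480"
      svm-forwarder:
        instances: 1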

Deploy Healthwatch Exporter for TAS for VMs

To complete your installation of the Healthwatch Exporter for TAS for VMs tile:

  1. Return to the Ops Manager Installation Dashboard.

  2. Click Review Pending Changes.

  3. Click Apply Changes.

For more information, see the Ops Manager documentation.

Configure a Scrape Job for Healthwatch Exporter for TAS for VMs

After you have successfully deployed Healthwatch Exporter for TAS for VMs, you must configure a scrape job in the Prometheus instance that exists within your metrics monitoring system. Follow the procedure in one of the following sections, depending on which monitoring system you use:

Configure a Scrape Job for Healthwatch Exporter for TAS for VMs in Healthwatch

To configure a scrape job for Healthwatch Exporter for TAS for VMs in the Healthwatch tile on your Ops Manager foundation, see Configure Prometheus in Configuring Healthwatch.

Configure a Scrape Job for Healthwatch Exporter for TAS for VMs in an External Monitoring System

To configure a scrape job for Healthwatch Exporter for TAS for VMs in a service or database that is located outside your Ops Manager foundation:

  1. Open network communication paths from your external service or database to the metric exporter VMs in Healthwatch Exporter for TAS for VMs. The procedure to open these network paths differs depending on your Ops Manager foundation’s IaaS. For a list of TCP ports used by each metric exporter VM, see Required Networking Rules for Healthwatch Exporter for TAS for VMs in Healthwatch Architecture.

  2. In the scrape_configs section of the Prometheus configuration file, create a scrape job for your Ops Manager foundation. Under static_configs, specify the TCP ports of each metric exporter VM as static targets for the IP address of your external service or database. For example:

    scrape_configs:
    - job_name: foundation-1
      metrics_path: /metrics
      scheme: https
      static_configs:
      - targets:
        - "1.2.3.4:8443"
        - "1.2.3.4:25555"
        - "1.2.3.4:443"
        - "1.2.3.4:8082"
    

    For more information about the scrape_configs and static_configs sections of the Prometheus configuration file, see the Prometheus documentation.
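
    After the scrape job is in place, you can spot-check that an exporter endpoint is reachable from the host where Prometheus runs. A minimal sketch using the first target from the example above; the -k flag skips TLS verification and is appropriate only for self-signed test environments:

    # Fetch the raw metrics payload from one exporter endpoint
    curl -k https://1.2.3.4:8443/metrics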
