This topic describes the components of the Healthwatch for VMware Tanzu tile and the resource requirements for installing it.
For information about the metric exporter VMs that the Healthwatch Exporter for VMware Tanzu Application Service for VMs (TAS for VMs) and Healthwatch Exporter for Tanzu Kubernetes Grid Integrated Edition (TKGI) tiles deploy, see Healthwatch Metrics.
The three main components of the Healthwatch tile are Prometheus, Grafana, and MySQL.
The Prometheus instance scrapes and stores metrics from the Healthwatch Exporter tiles, lets you configure alerts through Alertmanager, and runs canary tests against target URLs through Blackbox Exporter. The Healthwatch tile then exports the collected metrics to dashboards in the Grafana UI, allowing you to visualize the data with charts and graphs and to create customized dashboards for long-term monitoring and troubleshooting. MySQL is used only to store your Grafana settings and does not store any time series data. The Healthwatch tile also deploys a fourth component, MySQL Proxy, to route client connections to healthy MySQL nodes.
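The scrape, canary-test, and alerting flow described above can be illustrated with a minimal Prometheus configuration sketch. Healthwatch generates and manages the real configuration for you; the job names, addresses, and target URL below are hypothetical examples, not the tile's actual generated config.

```yaml
# Illustrative sketch only; Healthwatch manages the real Prometheus config.
global:
  scrape_interval: 15s

scrape_configs:
  # Pull metrics from a Healthwatch Exporter VM (address is an assumption).
  - job_name: healthwatch-exporter
    static_configs:
      - targets: ["10.0.8.5:9090"]

  # Canary test: Blackbox Exporter probes a target URL and exposes the
  # probe result and latency as metrics for Prometheus to scrape.
  - job_name: blackbox-canary
    metrics_path: /probe
    params:
      module: [http_2xx]
    static_configs:
      - targets: ["https://apps.example.com/health"]   # example target URL
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - target_label: __address__
        replacement: 127.0.0.1:9115   # local Blackbox Exporter

alerting:
  alertmanagers:
    - static_configs:
        - targets: ["127.0.0.1:9093"]   # local Alertmanager
```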
By default, the Healthwatch tile deploys two Prometheus VMs, one Grafana VM, one MySQL VM, and one MySQL Proxy VM. In the Resource Config pane of the Healthwatch tile, you can scale and assign load balancers to these resources. For information about making your Healthwatch deployment highly available (HA), see High Availability below.
The table below explains each Healthwatch tile component and which VM deploys it:
Component | VM Name | Description |
---|---|---|
Prometheus | tsdb | Scrapes and stores metrics from the Healthwatch Exporter tiles, configures alerts through Alertmanager, and runs canary tests against target URLs through Blackbox Exporter |
Grafana | grafana | Displays the collected metrics in dashboards, allowing you to visualize the data and create customized dashboards for monitoring and troubleshooting |
MySQL | pxc | Stores the Grafana settings you configure |
MySQL Proxy | pxc-proxy | Routes client connections to healthy MySQL nodes and away from unhealthy MySQL nodes |
The following table provides the default resource and IP requirements for installing the Healthwatch tile:
Resource | Instances | CPUs | RAM | Ephemeral Disk | Persistent Disk |
---|---|---|---|---|---|
Prometheus | 2 | 4 | 16 GB | 5 GB | 512 GB |
Grafana | 1 | 1 | 4 GB | 5 GB | 5 GB |
MySQL | 1 | 1 | 4 GB | 5 GB | 10 GB |
MySQL Proxy | 1 | 1 | 4 GB | 5 GB | 5 GB |
Setting the number of Grafana instances to 0 removes Grafana from your Healthwatch deployment. For more information, see Removing Grafana below.
You can scale these resources in the Resource Config pane of the Healthwatch tile.
The Healthwatch tile automatically selects the instance type that is best suited for each job. The instance types that Healthwatch selects depend on the available resources for your deployment.
To make your Healthwatch deployment HA, you deploy redundant instances of the Healthwatch tile components. This increases the capacity and availability of those components, which decreases the chance of downtime.
You scale Healthwatch tile resources, either vertically or horizontally, in the Resource Config pane of the Healthwatch tile. For more information about vertical and horizontal scaling, see the TAS for VMs documentation.
Healthwatch deploys two Prometheus VMs by default. With two VMs in the Prometheus instance, Prometheus and Alertmanager are made HA by default. You can scale the Prometheus instance vertically, but you should not scale it horizontally.
Healthwatch deploys a single Grafana VM by default. If you need Grafana to be HA, you can scale the Grafana instance horizontally. When making Grafana HA, VMware recommends scaling your MySQL instance to three VMs and your MySQL Proxy instance to two VMs.
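If you manage tiles with the `om` CLI rather than the Ops Manager UI, the HA scaling recommended above can be expressed as a resource-config fragment. This is a sketch under assumptions: the job names (`grafana`, `pxc`, `pxc-proxy`) mirror the VM names in the table above, and the product name is a placeholder; verify both against your staged tile before applying.

```yaml
# Illustrative om CLI config fragment for an HA Grafana setup.
# Apply with: om configure-product --config healthwatch.yml
product-name: p-healthwatch2   # placeholder; check `om staged-products`
resource-config:
  grafana:
    instances: 2      # scale Grafana horizontally for HA
  pxc:
    instances: 3      # recommended MySQL node count for HA Grafana
  pxc-proxy:
    instances: 2      # recommended MySQL Proxy count for HA Grafana
```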
If you do not want to use any Grafana instances in your Healthwatch deployment, you can set the number of Grafana, MySQL, and MySQL Proxy instances for your Healthwatch deployment to 0. For example, you may want to remove Grafana from your Healthwatch deployment if you configure the Prometheus instance to send metrics to an external Grafana instance.
If you want to remove Grafana from your Healthwatch deployment, you should scale Grafana, MySQL, and MySQL Proxy to 0 at the same time. Because MySQL is used only to store Grafana settings, and MySQL Proxy only routes client connections to healthy MySQL nodes in an HA Grafana deployment, neither component is necessary when no Grafana instance is deployed.
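Scaling all three instance groups to 0 together can likewise be sketched as a resource-config fragment for the `om` CLI. As above, the job names follow the VM names in the component table and the product name is a placeholder; confirm them against your staged tile.

```yaml
# Illustrative om CLI fragment that removes Grafana, MySQL, and MySQL Proxy.
# Apply with: om configure-product --config healthwatch.yml
product-name: p-healthwatch2   # placeholder; check `om staged-products`
resource-config:
  grafana:
    instances: 0
  pxc:
    instances: 0
  pxc-proxy:
    instances: 0
```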