This topic describes how to monitor the health status of the NSX ingress load balancer resources for VMware Tanzu Kubernetes Grid Integrated Edition (TKGI).

Note: This feature requires NCP v2.5.1 or later.

Overview

The NSX-T Load Balancer is a logical load balancer that handles a number of functions using virtual servers and pools.

An NSX-T load balancer service is created for each Kubernetes cluster provisioned by Tanzu Kubernetes Grid Integrated Edition with NSX-T. For each load balancer service, NCP, by way of a Kubernetes CustomResourceDefinition (CRD), creates a corresponding NSXLoadBalancerMonitor object.

By default Tanzu Kubernetes Grid Integrated Edition deploys the following NSX-T virtual servers for each Kubernetes cluster:

  • One TCP layer 4 load balancer virtual server for the Kubernetes API server.
  • One TCP layer 4 auto-scaled load balancer virtual server for each Kubernetes service resource of type: LoadBalancer (see the example following this list).
  • Two HTTP/HTTPS layer 7 ingress routing virtual servers. These virtual servers are attached to the Kubernetes Ingress Controller cluster load balancer service and can be manually scaled. Tanzu Kubernetes Grid Integrated Edition uses Kubernetes custom resources to monitor the state of the NSX-T load balancer service and scale the virtual servers created for ingress.
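
For example, a Kubernetes service of type: LoadBalancer is what causes NCP to provision one of the auto-scaled layer 4 virtual servers described above. The following is a minimal sketch; the service name and ports are placeholders for illustration:

    # Hypothetical service name and ports, for illustration only.
    # Creates a Service of type LoadBalancer; NCP provisions a layer 4 virtual server for it.
    kubectl create service loadbalancer example-app --tcp=80:8080

Once NCP processes the service, the resulting virtual server is reflected in the statistics returned by the NSXLoadBalancerMonitor, as described below.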

For information about scaling the TCP layer 4 ingress controller, see Defining Network Profiles for the TCP Layer 4 Load Balancer.

For information about configuring layer 7 ingress routing load balancers, see Scaling the HTTP/S Layer 7 Ingress Load Balancers Using the LoadBalancer CRD. For information about configuring the layer 7 ingress controller, see Defining Network Profiles for the HTTP/S Layer 7 Ingress Controller.

For more information about the NSX-T Load Balancer, see Create an IP Pool in Manager Mode or Add an IP Address Pool in the VMware documentation.

For more information about Kubernetes custom resources, see Custom resources in the Kubernetes documentation.

Monitor the NSX-T Load Balancer Service

You can use the NSXLoadBalancerMonitor CRD to monitor the NSX-T load balancer service, including traffic, usage and health score information.

The NSXLoadBalancerMonitor returns statistics showing the number of connections and throughput of the virtual servers for each type of load balancer.

In addition to connection and throughput statistics, the NSXLoadBalancerMonitor CRD returns two health scores that reflect the current performance of the load balancers:

  • servicePressureIndex, which represents an overall health score for the NSX-T load balancer service.
  • infraPressureIndex, which represents the health score of the NSX-T Edge Node that is running the load balancer and its associated virtual servers.
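
Both scores are surfaced through the NSXLoadBalancerMonitor objects. As a sketch, assuming the scores are exposed under a health field with servicePressureIndex and infraPressureIndex keys (as suggested by the kubectl describe output later in this topic), you can list them for every monitor with a jsonpath query:

    # Assumed field paths (.health.servicePressureIndex, .health.infraPressureIndex);
    # prints one line per monitor: name, service score, infra score.
    kubectl get nsxlbmonitors -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.health.servicePressureIndex}{"\t"}{.health.infraPressureIndex}{"\n"}{end}'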

Based on the health scores, you can decide what action to take. The table below summarizes the recommended actions.

| servicePressureIndex | infraPressureIndex | Cluster Manager | Infrastructure Admin |
|----------------------|--------------------|-----------------|----------------------|
| LOW or WARM | LOW or WARM | NONE | NONE |
| LOW or WARM | HIGH | Alert infra admin. | Move the LBS from the CRITICAL Edge Node to another Edge Node. |
| HIGH | LOW or WARM | Resolve the LBS health score by Scaling the HTTP/S Layer 7 Ingress Load Balancers Using the LoadBalancer CRD and, if necessary, by increasing the size of the LBS using Defining Network Profiles for the TCP Layer 4 Load Balancer. | NONE |
| HIGH | HIGH | Alert infra admin; resolve the LBS health score by Scaling the HTTP/S Layer 7 Ingress Load Balancers Using the LoadBalancer CRD and, if necessary, by increasing the size of the LBS using Defining Network Profiles for the TCP Layer 4 Load Balancer. | Move the LBS from the CRITICAL Edge Node to another Edge Node. |

Monitor Your NSX-T Load Balancer Service Using the NSXLoadBalancerMonitor CRD

To monitor your NSX-T Load Balancer Service using the NSXLoadBalancerMonitor CRD, complete the following procedure.

  1. To view the NSXLoadBalancerMonitor CRD, run the following command:

    kubectl get crd
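
    The NSXLoadBalancerMonitor CRD appears in this list under the vmware.com API group (see the API Version in the sample output in step 3). If the cluster has many CRDs, you can narrow the listing, for example:

    # Filter the CRD list to NSX-related entries.
    kubectl get crd | grep -i nsx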
    
  2. To determine the UUID of the NSX-T load balancer deployed for the cluster, run the following command:

    kubectl get nsxlbmonitors
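
    The monitor object names correspond to the load balancer UUIDs, as shown in the Name field of the sample output in step 3. To print only the names, one option is:

    # List only the monitor object names (the load balancer UUIDs).
    kubectl get nsxlbmonitors -o name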
    
  3. To view statistics, throughput, and health score for all virtual servers deployed by a specific load balancer service, run the following command:

    kubectl describe nsxlbmonitors UUID-OF-LOAD-BALANCER
    

    Where UUID-OF-LOAD-BALANCER is your load balancer’s UUID.

    For example:

    $ kubectl describe nsxlbmonitor f61a8cec-28eb-4b0c-bf4a-906f3ce2d8e6  
    
    Name:         f61a8cec-28eb-4b0c-bf4a-906f3ce2d8e6
    Namespace:
    Labels:       <none>
    Annotations:  <none>
    API Version:  vmware.com/v1alpha1
    Health:
      Metrics:
        Cpu Usage Percentage:         0
        Poolmember Usage Percentage:  1
      Service Pressure Index:         0,LOW
      Infra Pressure Index:           0,LOW
      Metrics:
        Cpu Usage Percentage:         0
        Lb Service Usage Percentage:  0
        Memory Usage Percentage:      0
        Poolmember Usage Percentage:  0
    Kind:                             NSXLoadBalancerMonitor
    Metadata:
      Creation Timestamp:  2019-11-13T19:37:10Z
      Generation:          914
      Resource Version:    17139
      Self Link:           /apis/vmware.com/v1alpha1/nsxlbmonitors/f61a8cec-28eb-4b0c-bf4a-906f3ce2d8e6
      UID:                 f56d3cf5-748d-44c3-8026-c6c569fde954
    Traffic:
      Bytes In Rate:         0
      Bytes Out Rate:        0
      Current Session Rate:  0
      Ip Address:            192.168.160.102
      Max Sessions:          0
      Packets In Rate:       0
      Packets Out Rate:      0
      Protocol:              TCP
      Total Sessions:        0
      Virtual Server Name:   pks-042bccde-2197-4e06-863e-55129bf2e195-http
      Bytes In Rate:         0
      Bytes Out Rate:        0
      Current Session Rate:  0
      Ip Address:            192.168.160.102
      Max Sessions:          0
      Packets In Rate:       0
      Packets Out Rate:      0
      Protocol:              TCP
      Total Sessions:        0
      Virtual Server Name:   pks-042bccde-2197-4e06-863e-55129bf2e195-https_terminated
    Usage:
      Current Server Pool Count:     1
      Current Virtual Server Count:  3
    Events:                          <none>
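
    To extract specific fields rather than the full description, you can also query the monitor object with a jsonpath expression. The following is a sketch that assumes the per-virtual-server statistics are exposed as a traffic list with virtualServerName and totalSessions keys, mirroring the describe output above:

    # Assumed field paths (.traffic[*].virtualServerName, .traffic[*].totalSessions);
    # prints one line per virtual server: name and total session count.
    kubectl get nsxlbmonitor UUID-OF-LOAD-BALANCER -o jsonpath='{range .traffic[*]}{.virtualServerName}{"\t"}{.totalSessions}{"\n"}{end}'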
    