This topic describes how to deploy a load balancer for the NSX Management Cluster for VMware Tanzu Kubernetes Grid Integrated Edition (TKGI).
NSX-T provides a converged management and control plane that is referred to as the NSX-T Management Cluster. The architecture delivers high availability of the NSX-T Manager nodes, reduces the likelihood of operational failures of NSX-T, and provides API and UI clients with multiple endpoints or a single VIP for high availability.
While using a VIP to access the NSX-T Management layer provides high-availability, it does not balance the workload. To avoid overloading a single NSX-T Manager, which might be the case when HA VIP addressing is used, an NSX-T load balancer can be provisioned to allow NCP and other components orchestrated by Tanzu Kubernetes Grid Integrated Edition to distribute load efficiently among NSX-T Manager nodes.
The diagram below shows an external load balancer fronting the NSX-T Manager nodes. The load balancer is deployed within the NSX-T environment and intercepts requests to the NSX-T Management Cluster. The load balancer selects one of the NSX-T Manager nodes to handle the request and rewrites the destination IP address to reflect the selection.
Note: The load balancer VIP load balances traffic to all NSX-T Manager instances in round-robin fashion. A Cluster HA VIP, on the other hand, only sends traffic to the one NSX-T Manager instance that is mapped to the Cluster HA VIP; the other NSX-T Manager instances do not receive any traffic.
For scalability, deploy a load balancer in front of the NSX-T Manager nodes. When provisioning the load balancer, you configure a virtual server on the load balancer and associate a virtual IP address with it. This load balancer VIP can be used as the entry point for TKGI- and NCP-related API requests on the NSX-T Control Plane. The virtual server includes a member pool to which all NSX-T Management Cluster nodes belong. Additionally, health monitoring is enabled for the member pool to quickly and efficiently address potential node failures detected in the NSX-T Management Cluster.
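For reference, you can review the state of the NSX-T Management Cluster, including the Manager nodes that later become members of the load balancer server pool, through the NSX-T API. The following is a minimal sketch; the Manager address nsx-mgr-01.example.com and the admin credentials are placeholders:

# Show the NSX-T Management Cluster status, including the individual Manager nodes.
curl -k -u 'admin:PASSWORD' "https://nsx-mgr-01.example.com/api/v1/cluster/status"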
To provision the load balancer for the NSX-T Management Cluster, complete the following steps.
Note: You can connect to any NSX-T Manager Node in the management cluster to provision the load balancer.
Note: You must use the Advanced Networking and Security tab in NSX-T Manager to create, read, update, and delete all NSX-T networking objects used for Tanzu Kubernetes Grid Integrated Edition.
Add and configure a new logical switch for the load balancer:
- Name: LS-NSX-T-EXTERNAL-LB, for example.
- Transport Zone: TZ-Overlay, for example.
Configure a new Tier-1 Router. Create the Tier-1 Router on the same Edge Cluster where the Tier-0 Router that provides external connectivity to vCenter and NSX-T Manager is located.
- Name: T1-NSX-T-EXTERNAL-LB, for example.
- Tier-0 Router: Shared-T0, for example.
- Edge Cluster: edgecluster1, for example.
- Edge Cluster Members: nsx-edge-1-tn and nsx-edge-2-tn, for example.
Configure Route Advertisement for the Tier-1 Router:
- Status: Enabled.
- Advertise All NSX Connected Routes: Yes.
- Advertise All LB VIP Routes: Yes.
Verify successful creation and configuration of the logical switch and router.
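Besides checking the UI, you can confirm that the new objects exist through the NSX-T Manager API. This is a minimal sketch that filters for the example names used above; the Manager address and admin credentials are placeholders:

# List logical switches and logical routers and filter for the example object names.
NSX_MGR=nsx-mgr-01.example.com
curl -ks -u 'admin:PASSWORD' "https://${NSX_MGR}/api/v1/logical-switches" | grep LS-NSX-T-EXTERNAL-LB
curl -ks -u 'admin:PASSWORD' "https://${NSX_MGR}/api/v1/logical-routers" | grep T1-NSX-T-EXTERNAL-LB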
Create a new small-size load balancer and attach it to the Tier-1 Router previously created.
Note: The small-size load balancer is suitable for the NSX-T Management Cluster load balancer. Make sure you have enough Edge Cluster resources to provision a small load balancer.
Attach the load balancer to the Tier-1 Router previously created:
- Tier-1 Logical Router: T1-NSX-T-EXTERNAL-LB, for example.
Add and configure a virtual server for the load balancer.
Configure General Properties for the virtual server:
- Name: VS-NSX-T-EXTERNAL-LB, for example.
- Application Type: Layer 4 TCP.
- Application Profile: default-tcp-lb-app-profile.
- Access Log: Deactivated.
Configure Virtual Server Identifiers for the virtual server:
- IP Address: 10.40.14.250, for example.
- Port: 443.
Configure the server pool for the virtual server:
Configure General Properties for the server pool:
- Name: NSX-T-MGRS-SRV-POOL, for example.
- Load Balancing Algorithm: ROUND_ROBIN.
Configure SNAT Translation for the server pool:
- IP Address: 10.40.14.250, for example (the same address as the load balancer VIP).
Configure Pool Members for the server pool:
- Membership Type: Static.
- Add each NSX-T Management Cluster node as a pool member.
Configure Health Monitors for the server pool: the Active Health Monitor for the NSX-T Manager nodes is created and attached in a later step.
Back at the Server Pool screen, click Next.
Configure Load Balancing Profiles for the load balancer:
- Persistence Profile: default-source-ip-lb-persistence-profile, for example.
Note: If a proxy is used between the NSX-T Management Cluster and the TKGI Management Plane, do not configure a persistence profile.
Attach the virtual server to the NSX-T load balancer:
- Load Balancer: NSX-T-EXTERNAL-LB, for example.
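You can also confirm the load balancer objects through the NSX-T Manager API. The following is a minimal sketch, not an exhaustive check; the Manager address and admin credentials are placeholders:

# List the load balancer service, virtual servers, and server pools created above.
NSX_MGR=nsx-mgr-01.example.com
curl -ks -u 'admin:PASSWORD' "https://${NSX_MGR}/api/v1/loadbalancer/services"
curl -ks -u 'admin:PASSWORD' "https://${NSX_MGR}/api/v1/loadbalancer/virtual-servers"
curl -ks -u 'admin:PASSWORD' "https://${NSX_MGR}/api/v1/loadbalancer/pools"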
Once the load balancer is configured, verify it by doing the following:
- In a browser, navigate to the load balancer VIP: https://10.40.14.250, for example.
Note: The URL redirects to the same NSX-T Manager each time, because persistence is done on the source IP based on the persistence profile you selected.
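If you prefer the command line, you can perform a similar check against the VIP with curl. This sketch uses the example VIP from this topic; -k skips certificate verification because the NSX-T Manager certificate typically does not include the VIP address:

# Send a GET request to the load balancer VIP and print only the HTTP status code.
curl -k -s -o /dev/null -w '%{http_code}\n' https://10.40.14.250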
Create a new Active Health Monitor (HM) for the NSX-T Management Cluster members using the NSX-T Health Check protocol.
- Server Pool: NSX-T-MGRS-SRV-POOL, for example.
Configure Monitor Properties:
- Name: NSX-T-Mgr-Health-Monitor, for example.
- Health Check Protocol: LbHttpsMonitor.
- Monitoring Port: 443.
Configure Health Check Parameters. Configure the new Active HM with specific HTTP request fields as follows:
Configure the HTTP Request Configuration settings for the health monitor:
- HTTP Method: GET.
- HTTP Request URL: /api/v1/reverse-proxy/node/health.
- HTTP Request Version: HTTP_VERSION_1_1.
Configure the HTTP Request Headers for the health monitor:
- Authorization: Basic YWRtaW46Vk13YXJlMSE=, which is the base64-encoded value of the NSX-T administrator credentials.
- Content-Type: application/json.
- Accept: application/json.
Note: In the example, YWRtaW46Vk13YXJlMSE= is the base64-encoded value of the NSX-T administrator credentials, expressed in the form admin-user:password. You can use the free online service www.base64encode.org to base64-encode your NSX-T administrator credentials, or encode them locally as shown in the sketch after these steps.
Configure the HTTP Response Configuration for the health monitor:
- HTTP Response Code: 200.
At the Health Monitors screen, specify the Active Health Monitor you just created:
- Active Health Monitor: NSX-T-Mgr-Health-Monitor.
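As an alternative to an online encoder, you can generate the Authorization header value locally and exercise the same health check request that the monitor sends. This is a minimal sketch; 10.40.14.11 is a hypothetical NSX-T Manager node address, and admin:VMware1! stands in for your own credentials (this example value encodes to the YWRtaW46Vk13YXJlMSE= string shown above):

# Base64-encode the NSX-T administrator credentials in the form admin-user:password.
AUTH=$(printf 'admin:VMware1!' | base64)
# Issue the same request the Active Health Monitor sends; a healthy node returns HTTP 200.
curl -k -i -X GET "https://10.40.14.11/api/v1/reverse-proxy/node/health" \
  -H "Authorization: Basic ${AUTH}" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json"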
If your Tanzu Kubernetes Grid Integrated Edition deployment uses NAT mode, make sure health monitoring traffic is correctly SNAT-translated when leaving the NSX-T topology. Add a specific SNAT rule that intercepts HM traffic generated by the load balancer and translates it to a globally routable IP address, allocated using the same principle as the load balancer VIP. For example, you can add a SNAT rule to the Tier-0 Router to enable HM SNAT translation, where 100.64.128.0/31 is the subnet for the load balancer Tier-1 uplink interface.
To do this, you need to retrieve the IP address of the Tier-1 uplink (the Tier-1 Router that connects the NSX-T load balancer instance). In the example below, the Tier-1 uplink IP is 100.64.112.37/31.
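One way to find this address is to query the logical router ports of the Tier-1 Router through the NSX-T API. This is a sketch under the assumption that the logical_router_id query parameter is available in your NSX-T version; the Manager address, credentials, and router UUID are placeholders:

# List the ports of the Tier-1 Router that hosts the load balancer and look for
# the uplink/router-link port subnet (100.64.112.37/31 in this example).
NSX_MGR=nsx-mgr-01.example.com
T1_ROUTER_ID=<uuid-of-T1-NSX-T-EXTERNAL-LB>
curl -ks -u 'admin:PASSWORD' "https://${NSX_MGR}/api/v1/logical-router-ports?logical_router_id=${T1_ROUTER_ID}"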
Create the following SNAT rule on the Tier-0 Router:
- Priority: 2000, for example.
- Action: SNAT.
- Source IP: 100.64.112.36/31, for example.
- Destination IP: 10.40.206.0/25, for example.
- Translated IP: 10.40.14.251, for example.
Click Save.
Verify configuration of the SNAT rule and server pool health:
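To check the SNAT rule from the command line, you can list the NAT rules configured on the Tier-0 Router through the NSX-T API. This is a minimal sketch; the Manager address, credentials, and router UUID are placeholders, and the exact path may vary by NSX-T version:

# List NAT rules on the Tier-0 Router and confirm the new SNAT rule is present.
NSX_MGR=nsx-mgr-01.example.com
T0_ROUTER_ID=<uuid-of-the-Tier-0-router>
curl -ks -u 'admin:PASSWORD' "https://${NSX_MGR}/api/v1/logical-routers/${T0_ROUTER_ID}/nat/rules"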
Verify the load balancer and confirm that traffic is load balanced.
Confirm that traffic is load-balanced across different NSX-T Managers:
You can use the NSX-T API to validate that secure HTTP requests against the new VIP address are associated with the load balancer’s Virtual Server. Relying on the SuperUser Principal Identity created as part of TKGI provisioning steps, you can cURL the NSX-T Management Cluster using the standard HA-VIP address or the newly-provisioned virtual server VIP. For example:
curl -k -X GET "https://192.168.6.210/api/v1/trust-management/principal-identities" --cert $(pwd)/pks-nsx-t-superuser.crt --key $(pwd)/pks-nsx-t-superuser.key
curl -k -X GET "https://91.0.0.1/api/v1/trust-management/principal-identities" --cert $(pwd)/pks-nsx-t-superuser.crt --key $(pwd)/pks-nsx-t-superuser.key
The key behavioral differences between the two API calls:
Residual configuration steps: