This topic describes how to deploy a load balancer for the NSX-T Management Cluster for Enterprise PKS.
Note: The instructions provided in this topic are for NSX-T v2.4.
This section describes the NSX-T Management Cluster and the external load balancer for use with Enterprise PKS.
NSX-T v2.4 introduces a converged management and control plane that is referred to as the NSX-T Management Cluster. The new deployment model delivers high availability of the NSX-T Manager node, reduces the likelihood of operation failures of NSX-T, and provides API and UI clients with multiple endpoints or a single VIP for high availability.
While using a VIP to access the NSX-T Management layer provides high-availability, it does not balance the workload. To avoid overloading a single NSX-T Manager, as may be the case when HA VIP addressing is used, an NSX-T load balancer can be provisioned to allow NCP and other components orchestrated by Enterprise PKS to distribute load efficiently among NSX Manager nodes.
The diagram below shows an external load balancer fronting the NSX Manager nodes. The load balancer is deployed within the NSX-T environment and intercepts requests to the NSX-T Management Cluster. The load balancer selects one of the NSX-T Manager nodes to handle the request and rewrites the destination IP address to reflect the selection.
Note: The load balancer VIP load balances traffic to all NSX-T Manager instances in round robin fashion. A Cluster HA VIP, on the other hand, only sends traffic to the one NSX-T Manager instance that is mapped to the Cluster HA VIP; the other NSX-T Manager instances do not receive any traffic.
Various components in an Enterprise PKS deployment interact with the NSX Management Cluster.
PKS Control Plane components:
Kubernetes Cluster components:
The interaction of the PKS Control Plane components and the BOSH jobs with the NSX-T Management Cluster is sporadic. However, the NCP component may demand a high level of scalability for the NSX-T API processing capability of the NSX Management Cluster, and NCP is vital to the networking needs of each Kubernetes cluster. When a high number of Kubernetes clusters are subjected to concurrent activities, such as Kubernetes Pod and Service lifecycle operations, multiple NCP instances may tax the system and push NSX-T API processing to its limits.
For scalability, consider deploying a load balancer in front of the NSX-T Manager nodes. As a general rule of thumb, if you are using Enterprise PKS with NSX-T to deploy more than 25 Kubernetes clusters, you should use a load balancer in front of the NSX-T Management Cluster.
Note: If you do not require scalability, you can configure a Cluster VIP to achieve HA for the NSX-T Management Cluster. See HA VIP addressing.
For general purposes, a small NSX-T load balancer is sufficient. However, refer to Scaling Load Balancer Resources to ensure that the load balancer size you choose is sufficient to meet your needs.
When provisioning the load balancer, you configure a virtual server on the load balancer, and associate a virtual IP address with the virtual server. This load balancer VIP can be used as the entry-point for PKS- and NCP-related API requests on the NSX-T Control Plane. The virtual server includes a member pool where all NSX-T Management Cluster nodes belong. Additionally, health monitoring is enabled for the member pool to quickly and efficiently address potential node failures detected among the NSX-T Management Cluster nodes.
Before you provision a load balancer for the NSX-T Management Cluster, ensure that your environment is configured as follows:
To provision the load balancer for the NSX-T Management Cluster, complete the following steps.
Note: You can connect to any NSX-T Manager node in the management cluster to provision the load balancer.
Note: You must use the Advanced Networking and Security tab in NSX-T Manager to create, read, update, and delete all NSX-T networking objects used for Enterprise PKS.
Add and configure a new logical switch for the load balancer.
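If you prefer to script this step rather than use the UI, the NSX-T management-plane API exposes the same objects shown under Advanced Networking and Security. The following is a minimal sketch, assuming placeholder values for the NSX-T Manager address, admin credentials, overlay transport zone ID, and the display name LS-NSX-EXTERNAL-LB:

# Create a logical switch for the load balancer (placeholders in angle brackets)
curl -k -u admin:'VMware1!' -X POST "https://<nsx-manager>/api/v1/logical-switches" \
  -H "Content-Type: application/json" \
  -d '{
        "display_name": "LS-NSX-EXTERNAL-LB",
        "transport_zone_id": "<overlay-transport-zone-id>",
        "admin_state": "UP",
        "replication_mode": "MTEP"
      }'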
Configure a new Tier-1 Router in Active/StandBy mode. Create the Tier-1 Router on the same Edge Cluster where the Tier-0 Router that provides external connectivity to vCenter and NSX Manager is located.
Configure Route Advertisement for the Tier-1 Router.
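Both the Tier-1 Router creation and its route advertisement can likewise be sketched through the API. The edge cluster ID, router ID, display name, and the _revision value below are placeholders or assumptions for your environment:

# Create the Tier-1 Router in Active/Standby mode on the same Edge Cluster as the Tier-0 Router
curl -k -u admin:'VMware1!' -X POST "https://<nsx-manager>/api/v1/logical-routers" \
  -H "Content-Type: application/json" \
  -d '{
        "display_name": "T1-NSX-EXTERNAL-LB",
        "router_type": "TIER1",
        "high_availability_mode": "ACTIVE_STANDBY",
        "edge_cluster_id": "<edge-cluster-id>"
      }'

# Enable route advertisement of LB VIP and LB SNAT routes on the new Tier-1 Router
# (_revision must match the object's current revision; it is 0 immediately after creation)
curl -k -u admin:'VMware1!' -X PUT "https://<nsx-manager>/api/v1/logical-routers/<t1-router-id>/routing/advertisement" \
  -H "Content-Type: application/json" \
  -d '{
        "resource_type": "AdvertisementConfig",
        "enabled": true,
        "advertise_lb_vip": true,
        "advertise_lb_snat_ip": true,
        "_revision": 0
      }'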
Verify successful creation and configuration of the logical switch and router.
Create a new small-size Load Balancer and attach it to the Tier-1 Router previously created.
Note: The small-size VM is suitable for the NSX Management Cluster load balancer. Make sure you have enough Edge Cluster resources to provision the load balancer.
Attach the load balancer to the Tier-1 Router previously created.
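As a rough API equivalent, the load balancer service can be created and attached to the Tier-1 Router in a single call. The display name and router ID below are placeholders; verify the payload against your NSX-T 2.4 API reference:

curl -k -u admin:'VMware1!' -X POST "https://<nsx-manager>/api/v1/loadbalancer/services" \
  -H "Content-Type: application/json" \
  -d '{
        "display_name": "NSX-MGMT-CLUSTER-LB",
        "enabled": true,
        "size": "SMALL",
        "attachment": {
          "target_type": "LogicalRouter",
          "target_id": "<t1-router-id>"
        }
      }'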
Configure General Properties for the Virtual Server.
Configure Virtual Server Identifiers.
Configure Virtual Server Pool.
Configure General Properties for the Server Pool:
Configure SNAT Translation for the Server Pool:
Configure Pool Members for the Server Pool:
Configure Health Monitors:
Back at the Server Pool screen, click Next.
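The server pool configured above can also be expressed as a single API call. The sketch below creates a round-robin pool with the three NSX-T Manager nodes as members on port 443 and automatic SNAT; the member IP addresses are placeholders, and the Active Health Monitor created later in this procedure can be attached by adding its ID to active_monitor_ids:

curl -k -u admin:'VMware1!' -X POST "https://<nsx-manager>/api/v1/loadbalancer/pools" \
  -H "Content-Type: application/json" \
  -d '{
        "display_name": "NSX-MGMT-CLUSTER-POOL",
        "algorithm": "ROUND_ROBIN",
        "snat_translation": { "type": "LbSnatAutoMap" },
        "members": [
          { "display_name": "nsx-mgr-1", "ip_address": "<nsx-manager-1-ip>", "port": "443" },
          { "display_name": "nsx-mgr-2", "ip_address": "<nsx-manager-2-ip>", "port": "443" },
          { "display_name": "nsx-mgr-3", "ip_address": "<nsx-manager-3-ip>", "port": "443" }
        ]
      }'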
Configure Load Balancing Profiles:
Note: If a proxy is used between the NSX Management Cluster and the PKS Control Plane, do not configure a persistence profile.
Attach the virtual server to the NSX-T load balancer.
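For reference, creating the virtual server and attaching it to the load balancer service might be sketched as follows. The VIP, object IDs, application profile, and the _revision value are placeholders or assumptions; check the field names against your NSX-T 2.4 API schema:

# Create the virtual server that exposes the load balancer VIP on port 443
curl -k -u admin:'VMware1!' -X POST "https://<nsx-manager>/api/v1/loadbalancer/virtual-servers" \
  -H "Content-Type: application/json" \
  -d '{
        "display_name": "NSX-MGMT-CLUSTER-VS",
        "enabled": true,
        "ip_protocol": "TCP",
        "ip_address": "<load-balancer-vip>",
        "port": "443",
        "pool_id": "<pool-id>",
        "application_profile_id": "<fast-tcp-profile-id>"
      }'

# Attach the virtual server to the load balancer service created earlier
curl -k -u admin:'VMware1!' -X PUT "https://<nsx-manager>/api/v1/loadbalancer/services/<lb-service-id>" \
  -H "Content-Type: application/json" \
  -d '{
        "display_name": "NSX-MGMT-CLUSTER-LB",
        "enabled": true,
        "size": "SMALL",
        "attachment": { "target_type": "LogicalRouter", "target_id": "<t1-router-id>" },
        "virtual_server_ids": [ "<virtual-server-id>" ],
        "_revision": 0
      }'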
Once the load balancer is configured, you should be able to do the following:
Note: Because you selected the default-source-ip-lb-persistence-profile, the URL redirects to the same NSX-T Manager. Persistence is based on the source IP.
Create a new Active Health Monitor (HM) for the NSX Management Cluster members. Configure the new Active Health Monitor with the Health Check protocol LbHttpsMonitor. To do this:
Configure Monitor Properties:
Configure Health Check Parameters.
Configure the new Active HM with specific HTTP request fields as follows:
HTTP Request Configuration:
HTTP Request Headers:
Set the Authorization header to Basic YWRtaW46Vk13YXJlMSE=, which is base64 encoded.
Note: In the example, "YWRtaW46Vk13YXJlMSE=" is the base64-encoded value of the NSX-T administrator credentials, expressed in the form 'admin-user:password'. You can use the free online service https://www.base64encode.org/ to base64 encode your values.
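Alternatively, if you prefer not to paste credentials into an online service, you can generate the same base64 value locally. For example, assuming the credentials admin:VMware1!:

echo -n 'admin:VMware1!' | base64
# Output: YWRtaW46Vk13YXJlMSE=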
HTTP Response Configuration:
Lastly, back at the Health Monitors screen, specify the Active Health Monitor you just created:
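If you create the monitor through the API instead of the UI, it is posted to /api/v1/loadbalancer/monitors with resource_type LbHttpsMonitor. The sketch below uses illustrative values: the health-check URL, header field names, and response status code are assumptions to adapt to your environment:

curl -k -u admin:'VMware1!' -X POST "https://<nsx-manager>/api/v1/loadbalancer/monitors" \
  -H "Content-Type: application/json" \
  -d '{
        "resource_type": "LbHttpsMonitor",
        "display_name": "NSX-MGMT-CLUSTER-HM",
        "monitor_port": "443",
        "request_method": "GET",
        "request_url": "/api/v1/reverse-proxy/node/health",
        "request_headers": [
          { "header_name": "Authorization", "header_value": "Basic YWRtaW46Vk13YXJlMSE=" }
        ],
        "response_status_codes": [ 200 ]
      }'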
If your Enterprise PKS deployment uses NAT mode, make sure Health Monitoring traffic is correctly SNAT-translated when leaving the NSX-T topology. Add a specific SNAT rule that intercepts HM traffic generated by the load balancer and translates it to a globally-routable IP address allocated using the same principle as the load balancer VIP. The following example shows an SNAT rule added to the Tier-0 Router to enable HM SNAT translation. In the example, 100.64.128.0/31 is the subnet for the Load Balancer Tier-1 uplink interface.
To do this, you need to retrieve the IP address of the T1 uplink (the Tier-1 Router that connects the NSX-T LB instance). In this example, the T1 uplink IP is taken from the 100.64.128.0/31 subnet.
Create the following SNAT rule on the Tier-0 Router:
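If you add the rule through the API rather than the UI, an SNAT rule on the Tier-0 Router can be sketched as follows. The Tier-0 Router ID and the translated, globally-routable IP address are placeholders for values from your environment:

curl -k -u admin:'VMware1!' -X POST "https://<nsx-manager>/api/v1/logical-routers/<t0-router-id>/nat/rules" \
  -H "Content-Type: application/json" \
  -d '{
        "action": "SNAT",
        "enabled": true,
        "match_source_network": "100.64.128.0/31",
        "translated_network": "<globally-routable-snat-ip>"
      }'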
Verify the load balancer configuration and confirm that traffic is load balanced.
You can use the NSX API to validate that secure HTTP requests against the new VIP address are associated with the load balancer’s Virtual Server. Relying on the SuperUser Principal Identity created as part of PKS provisioning steps, you can cURL the NSX Management Cluster using the standard HA-VIP address or the newly-provisioned virtual server VIP. For example:
Before load balancer provisioning is completed:
curl -k -X GET "https://192.168.6.210/api/v1/trust-management/principal-identities" --cert $(pwd)/pks-nsx-t-superuser.crt --key $(pwd)/pks-nsx-t-superuser.key
After load balancer provisioning is completed:
curl -k -X GET "https://126.96.36.199/api/v1/trust-management/principal-identities" --cert $(pwd)/pks-nsx-t-superuser.crt --key $(pwd)/pks-nsx-t-superuser.key
The key behavioral difference between the two API calls is that the call toward the Virtual Server VIP effectively load balances requests among the NSX-T Server Pool members. On the other hand, the call made toward the HA VIP address always selects the same member (the active member) of the NSX Management Cluster.
The remaining configuration step is to change the PKS tile configuration so that the NSX Manager IP Address uses the newly-provisioned Virtual IP Address. This configuration enables any component internal to PKS (NCP, NSX OSB Proxy, BOSH CPI, and so on) to use the new load balancer functionality.
Generate a new NSX-T Manager CA certificate using the external NSX-T LB VS IP.
There are various configurations for the CSR. Listed below are examples for each.
Using a fully-qualified domain name (FQDN), the commonName is a wildcard FQDN (*.pks.vmware.local, for example) and the subjectAltName (SAN) includes the same wildcard FQDN (*.pks.vmware.local, for example) and the load balancer VIP (192.168.160.100, for example).
[ req ]
default_bits = 2048
distinguished_name = req_distinguished_name
x509_extensions = v3_req
prompt = no
[ req_distinguished_name ]
countryName = US
stateOrProvinceName = California
localityName = CA
organizationName = NSX
commonName = *.pks.vmware.local
[ v3_req ]
subjectAltName = @alt_names
[alt_names]
DNS.1 = *.pks.vmware.local
DNS.2 = 192.168.160.100
If you have previously configured the Cluster HA VIP, an alternative approach is to use the Cluster HA VIP as the commonName (10.196.188.27, for example), and the subjectAltName (SAN) includes the load balancer VIP (192.168.160.100, for example) and all 3 of the NSX Manager IP addresses.
[ req ]
default_bits = 2048
distinguished_name = req_distinguished_name
req_extensions = req_ext
prompt = no
[ req_distinguished_name ]
countryName = US
stateOrProvinceName = California
localityName = CA
organizationName = NSX
commonName = 10.196.188.27
[ req_ext ]
subjectAltName = @alt_names
[alt_names]
DNS.1 = 192.168.160.100
DNS.2 = 10.196.188.21
DNS.3 = 10.196.188.22
DNS.4 = 10.196.188.23
Define environment variables for the NSX_MANAGER_IP_ADDRESS and the NSX_MANAGER_COMMONNAME.
export NSX_MANAGER_IP_ADDRESS=*.pks.vmware.local
export NSX_MANAGER_COMMONNAME=*.pks.vmware.local
Where:
- NSX_MANAGER_IP_ADDRESS is a wildcard FQDN (*.pks.vmware.local, for example) or all three of the NSX-T Manager IP addresses.
- NSX_MANAGER_COMMONNAME is a wildcard FQDN or the Cluster VIP address.
Run the following command to generate the certificate and private key:
openssl req -newkey rsa:2048 -x509 -nodes -keyout nsx.key -new -out nsx.crt -subj /CN=$NSX_MANAGER_COMMONNAME -reqexts SAN -extensions SAN -config <(cat ./nsx-cert.cnf <(printf "[SAN]\nsubjectAltName=DNS:$NSX_MANAGER_COMMONNAME,IP:$NSX_MANAGER_IP_ADDRESS")) -sha256 -days 365
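Before importing the certificate, you can inspect it locally to confirm that the commonName and SAN entries match what you intended:

openssl x509 -in nsx.crt -noout -subject
openssl x509 -in nsx.crt -noout -text | grep -A1 "Subject Alternative Name"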
Add the certificate to one of the 3 NSX-T Managers. Once this is done, the same certificate is replicated to the other NSX-T Manager instances.
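The certificate can be imported through the NSX-T API as well as through the UI. The following is a minimal sketch that builds a JSON body from nsx.crt and nsx.key and posts it to the trust-management import endpoint; the NSX-T Manager address is a placeholder, and the certificate ID returned in the response is the value to use as CERTIFICATE_ID in the next step:

# Build a JSON body containing the PEM certificate and private key (python is used only to escape newlines)
python -c 'import json; print(json.dumps({"pem_encoded": open("nsx.crt").read(), "private_key": open("nsx.key").read()}))' > cert-import.json

# Import the certificate; the response includes the certificate ID
curl -k -u admin:'VMware1!' -X POST "https://<nsx-manager>/api/v1/trust-management/certificates?action=import" \
  -H "Content-Type: application/json" \
  -d @cert-import.json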
Run the following set of commands for each NSX-T Manager instance. The CERTIFICATE_ID should be the same for all 3 NSX-T Manager instances. For example:
export NSX_MANAGER_IP_ADDRESS=10.40.206.2
export CERTIFICATE_ID="ea65ee14-d7d3-49c3-b656-ee0864282654"
curl --insecure -u admin:'VMware1!' -X POST "https://$NSX_MANAGER_IP_ADDRESS/api/v1/node/services/http?action=apply_certificate&certificate_id=$CERTIFICATE_ID"
To verify that the certificate is applied, connect to the NSX-T Manager CLI and run the following command:
nsx-manager-1> get certificate api