NSX Advanced Load Balancer (ALB), formerly known as Avi Vantage, provides L4+L7 load balancing services for deployments on vSphere. Tanzu Kubernetes Grid includes VMware NSX Advanced Load Balancer Essentials Edition.
Management cluster deployments on Amazon EC2 or Microsoft Azure create the native Amazon EC2 or Azure load balancer instances automatically and do not require the load balancing services offered by NSX ALB.
You can configure NSX Advanced Load Balancer (ALB) in Tanzu Kubernetes Grid as a load balancer for workloads, as an L7 ingress provider, and as the control plane endpoint provider, as described in the sections below.
Each workload cluster integrates with NSX ALB by running an Avi Kubernetes Operator (AKO) on one of its nodes. The cluster’s AKO calls the Kubernetes API to manage the lifecycle of load balancing and ingress resources for its workloads.
As a load balancer, NSX ALB provides an L4+L7 load balancing solution for vSphere. It includes a Kubernetes operator that integrates with the Kubernetes API to manage the lifecycle of load balancing and ingress resources for workloads.
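For example, when a user on a workload cluster creates a standard Kubernetes Service of type `LoadBalancer`, AKO requests a corresponding virtual service and VIP from the Avi Controller. The manifest below is a minimal illustrative sketch; the names, namespace, and labels are placeholders rather than values from this documentation.

```yaml
# Minimal sketch of a Service that AKO reconciles into an Avi virtual service.
# All names, labels, and ports are illustrative placeholders.
apiVersion: v1
kind: Service
metadata:
  name: web-lb
  namespace: tkg-demo
spec:
  type: LoadBalancer      # AKO watches Services of this type
  selector:
    app: web              # pods that back the virtual service
  ports:
    - name: http
      port: 80            # port exposed on the VIP by the service engines
      targetPort: 8080    # container port on the pods
```

After the Avi Controller programs the service engines, the Service's external IP is populated with a VIP from the static IP pool that you configure on the Avi Controller.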
Legacy ingress services for Kubernetes include multiple disparate solutions. The services and products contain independent components that are difficult to manage and troubleshoot. The ingress services have reduced observability capabilities with little analytics, and they lack comprehensive visibility into the applications that run on the system. Cloud-native automation is difficult in the legacy ingress services.
In comparison to the legacy Kubernetes ingress services, NSX ALB provides comprehensive load balancing and ingress services features. As a single solution with central control, NSX ALB is easy to manage and troubleshoot. NSX ALB supports real-time telemetry with insight into the applications that run on the system. The elastic auto-scaling and decision automation features highlight the cloud-native automation capabilities of NSX ALB.
To configure the NSX ALB on your workload clusters, see Configure NSX Advanced Load Balancer.
NSX ALB also lets you configure L7 ingress for your workload clusters by using one of the following options:
L7 Ingress in ClusterIP Mode

This option enables NSX Advanced Load Balancer L7 ingress capabilities, including sending traffic directly from the service engines (SEs) to the pods, preventing the multiple hops that other ingress solutions require when sending packets from the load balancer to the right node where the pod runs. This option is fully supported by VMware. However, each workload cluster needs a dedicated SE group for Avi Kubernetes Operator (AKO) to work, which could increase the number of SEs you need for your environment.
L7 Ingress in NodePortLocal Mode
Like the option above, this option avoids the potential extra hop when sending traffic from the NSX Advanced Load Balancer SEs to the pods by targeting the right nodes where the pods run, in this case by leveraging the integration between NSX Advanced Load Balancer and Antrea. With this option, the workload clusters can share SE groups.
L7 Ingress in NodePort Mode

NodePort mode is the default mode when AKO is installed on Tanzu Kubernetes Grid. This option allows your workload clusters to share SE groups and is fully supported by VMware. In this mode, traffic uses standard Kubernetes NodePort behavior, including its limitations, and requires services to be of type NodePort.
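As a rough illustration of that requirement, a backend Service used with AKO in this mode is a plain Kubernetes `NodePort` Service; the names and ports below are placeholders.

```yaml
# Illustrative NodePort Service for AKO in NodePort mode; the service engines
# reach the pods through the node port on the cluster nodes.
apiVersion: v1
kind: Service
metadata:
  name: web-backend
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80            # cluster-internal port
      targetPort: 8080    # container port
      nodePort: 30080     # optional; Kubernetes allocates one if omitted
```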
NSX ALB L4 Ingress with Contour L7 Ingress
This option lets workload clusters share SE groups, is supported by VMware, and requires minimal setup. However, you do not get all the NSX Advanced Load Balancer L7 ingress capabilities.
| | NSX ALB L7 ClusterIP mode | NSX ALB L7 NodePortLocal mode | NSX ALB L7 NodePort mode | NSX ALB L4 with Contour L7 |
|---|---|---|---|---|
| Minimal SE groups required | N | Y | Y | Y |
| VMware supported | Y | Y | Y | Y |
| NSX ALB L7 ingress capabilities | Y | Y | Y | N |
To configure the NSX ALB L7 ingress, see Configuring L7 Ingress with NSX Advanced Load Balancer.
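That topic describes the exact settings. As a rough sketch (assuming your Tanzu Kubernetes Grid version exposes the AKO ingress settings under `extraConfigs.ingress` in `AKODeploymentConfig`), the L7 ingress mode is selected with the `serviceType` field:

```yaml
# Sketch only: choosing the AKO L7 ingress mode in an AKODeploymentConfig.
# Other required spec fields (controller address, cloud name, SE group,
# data network, credentials) are omitted for brevity; confirm field names
# against the linked configuration topic.
apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
kind: AKODeploymentConfig
metadata:
  name: install-ako-for-l7       # placeholder name
spec:
  extraConfigs:
    ingress:
      disableIngressClass: false # let AKO process Ingress resources
      serviceType: NodePortLocal # or NodePort / ClusterIP, per the table above
```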
You can use NSX ALB as the control plane endpoint provider in Tanzu Kubernetes Grid. The following table describes the differences between NSX ALB and Kube-Vip, which is the default control plane endpoint provider in Tanzu Kubernetes Grid.
| | Kube-Vip | NSX ALB |
|---|---|---|
| Sends traffic to | Single control plane node | Multiple control plane nodes |
| Requires configuring endpoint VIP | Yes | No. Assigns VIP from the NSX ALB static IP pool |
To configure NSX Advanced Load Balancer as a cluster’s control plane HA Provider, see NSX Advanced Load Balancer.
To change a cluster’s control plane HA Provider to NSX Advanced Load Balancer, see Configure NSX Advanced Load Balancer.
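In a cluster configuration file, this choice is typically expressed with the `AVI_CONTROL_PLANE_HA_PROVIDER` variable. The snippet below is a sketch with placeholder values; confirm the variable names against the linked topics.

```yaml
# Sketch of cluster configuration variables for using NSX ALB as the
# control plane endpoint (HA) provider instead of Kube-Vip.
AVI_ENABLE: "true"
AVI_CONTROL_PLANE_HA_PROVIDER: "true"
# With Kube-Vip, you would instead configure a static endpoint VIP, for example:
# VSPHERE_CONTROL_PLANE_ENDPOINT: "192.168.14.200"
```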
NSX Advanced Load Balancer includes the following components:
Avi Kubernetes Operator (AKO): Watches Kubernetes `LoadBalancer` objects and interacts with the Avi Controller APIs to create `VirtualService` objects.

Avi Controller: Manages `VirtualService` objects and interacts with the vCenter Server infrastructure to manage the lifecycle of the service engines (SEs). It is the portal for viewing the health of `VirtualServices` and SEs and the associated analytics that NSX Advanced Load Balancer provides. It is also the point of control for monitoring and maintenance operations such as backup and restore.

You can deploy NSX Advanced Load Balancer in the topology illustrated in the figure below.
The topology diagram above shows the following configuration:
Avi Kubernetes Operator is configured in `NodePort` mode only.

For more information on the architecture of NSX ALB on vSphere, see VMware Tanzu for Kubernetes Operations on vSphere Reference Design.
Configure different SE group and VIP network setups in different workload clusters by using `AKODeploymentConfig.spec.clusterSelector.matchLabels`. For more information about how to use `AKODeploymentConfig` to configure the following scenario recommendations, see Create Multiple NSX ALB Configurations for Different Workload Clusters in Tanzu Kubernetes Grid Networking.
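As a sketch of that pattern (resource names, label values, and networks below are placeholders, and required fields such as the controller address and credentials are omitted), each `AKODeploymentConfig` selects its workload clusters through labels and can point them at their own SE group and VIP network:

```yaml
# Sketch: one AKODeploymentConfig per SE group / VIP network combination.
apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
kind: AKODeploymentConfig
metadata:
  name: ako-team-a                     # placeholder
spec:
  clusterSelector:
    matchLabels:
      team: team-a                     # clusters labeled team=team-a use this configuration
  serviceEngineGroup: se-group-team-a  # SE group for these clusters
  dataNetwork:
    name: vip-network-team-a           # VIP network for these clusters
    cidr: 10.10.10.0/24
```

A workload cluster opts in by carrying the matching label on its Cluster object in the management cluster.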
In the topology illustrated above, NSX Advanced Load Balancer provides the following networking, IPAM, isolation, tenancy, and Avi Kubernetes Operator functionalities.
With NSX Advanced Load Balancer Essentials, all workload cluster users are associated with the single admin tenant.
Avi Kubernetes Operator is installed on Tanzu Kubernetes clusters. It is configured with the Avi Controller IP address and the user credentials that Avi Kubernetes Operator uses to communicate with the Avi Controller. A dedicated user per workload cluster is created with the admin tenant and a customized role. This role has limited access, as defined in https://github.com/avinetworks/avi-helm-charts/blob/master/docs/AKO/roles/ako-essential.json.
Share an SE group between multiple clusters in setups with a large number of Tanzu Kubernetes clusters, each of which has a small number of nodes. An SE group can be shared by any number of workload clusters as long as the sum of the number of distinct cluster node networks and the number of distinct cluster VIP networks is not greater than 8. For example, ten clusters whose nodes all attach to the same node network and that all use the same VIP network count as only two distinct networks, so they can share one SE group.
Note: Use the `spec.clusterSelector.matchLabels` field in the `AKODeploymentConfig` file to configure different SE groups and VIP network setups in different workload clusters. For more information, see Create Multiple NSX ALB Configurations for Different Workload Clusters in Tanzu Kubernetes Grid Networking.
You install Avi Controller on vCenter Server by downloading and deploying an OVA template. These instructions provide guidance specific to deploying Avi Controller for Tanzu Kubernetes Grid.
Follow the installer prompts to deploy a VM from the OVA template, referring to the Deploying Avi Controller OVA instructions in the Avi Networks documentation.
Select the following options in the OVA deployment wizard:
A name for the Controller VM, for example `nsx-adv-lb-controller`, and the datacenter in which to deploy it.

It takes some time for the deployment to finish.
When the OVA deployment finishes, power on the resulting VM.
After you power on the VM, it takes some time for it to be ready to use.
In vCenter, create a vSphere account for the Avi Controller, with permissions as described in Roles and Permissions for vCenter and NSX-T Users in the Avi Networks documentation.
Note: See the Tanzu Kubernetes Grid v1.5 Release Notes for which Avi Controller versions are supported in this release. To upgrade the Avi Controller, see Flexible Upgrades for Avi Vantage.
In TKG v1.5.2+, you can set up Avi to use either vSphere or VMware NSX.
If you are using NSX ALB with an NSX overlay network, configure the NSX interface in the Avi Controller UI.
For full details of how to set up the Avi Controller for vCenter Cloud, see Performing the Avi Controller Initial setup in the Avi Controller documentation.
This section provides some information about configuration that has been validated on Tanzu Kubernetes Grid, as well as some tips that are not included in the Avi Controller documentation. The procedure below applies to Avi Controller version 20.1.5+.
Note: If you are using an existing Avi Controller, you must make sure that the VIP Network that is used during Tanzu Kubernetes Grid management cluster deployment has a unique name across all AVI Clouds.
In a browser, go to the IP address of the Controller VM.
Configure a password to create an admin account.
Optionally set DNS Resolvers and NTP server information, set the backup passphrase, and click Next.

Setting the backup passphrase is mandatory; only the DNS and NTP settings are optional.
Select None to skip SMTP configuration, and click Next.
For Permissions, select Write.
This allows the Controller to create and manage SE VMs.
For SDN Integration select None and click Next.
For System IP Address Management Setting, enable the DHCP Enabled checkbox if your data plane networks have DHCP. Otherwise, leave the DHCP Enabled checkbox disabled.
For Virtual Service Placement Settings, leave both checkboxes unchecked and click Next.
Select a virtual switch to use as the management network NIC in the SEs. Select the same network that you used when you deployed the controller.
If you enabled DHCP for your data plane networks, enable the DHCP Enabled checkbox for your management network. Otherwise leave the checkbox disabled and configure IP Subnet and Static IP Address Pool to set the management network address range. Click Next.
For Support Multiple Tenants, select No.
Integrating TKG with NSX and NSX ALB (Avi) is supported in the following versions:
| NSX | Avi Controller | Tanzu Kubernetes Grid |
|---|---|---|
| 3.0+ | 20.1.1+ | 1.5.2+ |
After you have configured vCenter and NSX to use NSX ALB, you can configure the NSX ALB side of the integration from the Avi Controller UI as follows.
For full details of how to set up the Avi Controller for NSX, see Avi Integration with NSX in the Avi Controller documentation.
To enable NSX ALB to authenticate with the vCenter and NSX Manager servers:
Click Connect to authenticate with the NSX Manager.
In the NSX tab, under Management Network, select the required Transport Zone. Note: If Virtual LAN (VLAN)-backed logical segments are used instead of an Overlay transport zone, see NSX VLAN Logical Segment.
Under Data Networks, select the required Transport Zone. Click Add to add more T1 routers and connected segments for VIP placement.
Under vCenter Servers, click Add.
Click Save to create the NSX Cloud integration for NSX ALB.
Configure the IP subnet and static IP pool for any control plane or data network that you will set in the management cluster configuration file.
Configure the AVI Service Engine's static default IP route, `0.0.0.0/0`.
There are additional settings to configure in the Controller UI before you can use NSX Advanced Load Balancer.
In the Controller UI, go to Applications > Templates > Profiles > IPAM/DNS Profiles, click Create and select IPAM Profile.
Enter a name for the profile, for example `tkg-ipam-profile`.

In the IPAM/DNS Profiles view, click Create again and select DNS Profile.
Note: The DNS Profile is optional for using Service type `LoadBalancer`.

Enter a name for the profile, for example `tkg-dns-profile`, and a domain name, for example `tkg.nsxlb.vmware.com`. The DNS Profile is not required for Service type `LoadBalancer`, but it is mostly relevant if you use AVI DNS VS as your Name Server; in that case, services are given FQDNs in a form such as `service.namespace.tkg-lab.vmware.com`.

Click the menu in the top left corner and select Infrastructure > Clouds.
For Default-Cloud, click the edit icon and under IPAM Profile and DNS Profile, select the IPAM and DNS profiles that you created above.
Select the DataCenter tab.
Do not update the Network section yet.
Edit the network to add a pool of IP addresses to be used as VIPs.

Edit the subnet and add an IP Address pool range within the subnet boundaries, for example 192.168.14.210-192.168.14.219.
If the SE Group that you want to use with the management cluster does not have a Virtual Service, there might be no service engines running for that SE group. In that case, the management cluster deployment process has to wait for a service engine to be created. Creating a service engine is time-consuming because it requires deploying a new VM. In poor networking conditions, this can cause an internal timeout that prevents the management cluster deployment process from finishing successfully.
To prevent this issue, create a dummy Virtual Service through the Avi Controller UI to trigger the creation of a service engine before you deploy the management cluster.
To verify that the SE group has a virtual service assigned to it, in the Controller UI, go to Infrastructure > Service Engine Group, and view the details of the SE group. If the SE group does not have a virtual service assigned to it, create a dummy virtual service:
In the Controller UI, go to Applications > Virtual Service.
Click Create Virtual Service and select Basic Setup.
Configure the VIP:
Note: You can delete the dummy virtual service after the management cluster is deployed successfully.
For complete information about creating a virtual service, beyond what is needed to create a dummy service, see Create a Virtual Service in the Avi Networks documentation.
The default NSX Advanced Load Balancer certificate does not contain the Controller's IP address or FQDN in the Subject Alternate Names (SAN); however, valid SANs must be defined in the Avi Controller's certificate. Consequently, you must create a custom certificate to provide when you deploy management clusters.
For Subject Alternate Name (SAN), enter either the IP address or FQDN, or both, of the Controller VM.
If only the IP address or FQDN is used, it must match the value that you use for Controller Host when you configure NSX Advanced Load Balancer settings during management cluster deployment, or specify in the `AVI_CONTROLLER` variable in the management cluster configuration file.
Copy the certificate contents.
You will need the certificate contents when you deploy management clusters.
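For reference, the Controller address and the certificate contents typically feed into the management cluster configuration file along the following lines. This is a sketch with placeholder values; the full set of `AVI_*` variables is documented in the management cluster configuration reference.

```yaml
# Sketch of the Avi-related management cluster configuration variables that
# consume the Controller address and the custom certificate. Values are placeholders.
AVI_ENABLE: "true"
AVI_CONTROLLER: "avi-controller.example.com"   # must match a SAN in the certificate
AVI_USERNAME: "admin"
AVI_PASSWORD: "<controller-admin-password>"
AVI_CA_DATA_B64: "<base64-encoded certificate contents>"
AVI_CLOUD_NAME: "Default-Cloud"
AVI_SERVICE_ENGINE_GROUP: "Default-Group"
AVI_DATA_NETWORK: "nsx-alb-vip-network"        # VIP network configured on the Controller
AVI_DATA_NETWORK_CIDR: "192.168.14.0/24"
```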
Finish setting up the Avi Controller by enabling the Essentials license, if required.
Your NSX Advanced Load Balancer deployment is ready for you to use with management clusters.