In Tanzu Kubernetes Grid, NSX Advanced Load Balancer includes the following components:

- Avi Kubernetes Operator (AKO) watches Kubernetes `LoadBalancer` objects and interacts with the Avi Controller APIs to create `VirtualService` objects.
- The Avi Controller manages `VirtualService` objects and interacts with the vCenter Server infrastructure to manage the lifecycle of the service engines (SEs). It is the portal for viewing the health of VirtualServices and SEs and the associated analytics that NSX Advanced Load Balancer provides. It is also the point of control for monitoring and maintenance operations such as backup and restore.

Tanzu Kubernetes Grid supports NSX Advanced Load Balancer deployed in one-arm and multi-arm network topologies.
Note: If you want to deploy NSX ALB Essentials Edition in a multi-arm network topology, ensure that you have not configured any firewall or network policies in the networks that are used for communication between the Avi SEs and the Kubernetes cluster nodes. To configure firewall or network policies in a multi-arm network topology where NSX ALB is deployed, you need NSX Advanced Load Balancer Enterprise Edition with the auto-gateway feature enabled.
The following diagram represents a one-arm NSX ALB deployment:
The following diagram represents a multi-arm NSX ALB deployment:
With NSX Advanced Load Balancer Essentials, all workload cluster users are associated with the single admin tenant.
Avi Kubernetes Operator is installed on workload clusters. It is configured with the Avi Controller IP address and the user credentials that Avi Kubernetes Operator uses to communicate with the Avi Controller. A dedicated user per workload cluster is created in the admin tenant with a customized role. This role has limited access, as defined in https://github.com/avinetworks/avi-helm-charts/blob/master/docs/AKO/roles/ako-essential.json.
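For reference, the sketch below shows the kind of object that AKO watches: a Service of type `LoadBalancer` in a workload cluster, which AKO reconciles into an Avi `VirtualService`. The namespace, Service name, selector, and ports are hypothetical examples, not values that Tanzu Kubernetes Grid requires.

```sh
# Create a namespace and a minimal Service of type LoadBalancer.
# All names and ports here are hypothetical examples.
kubectl create namespace demo
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: echo-lb
  namespace: demo
spec:
  type: LoadBalancer
  selector:
    app: echo
  ports:
  - port: 80
    targetPort: 8080
EOF

# After AKO reconciles the Service, the VIP that the Avi Controller
# allocated appears in the Service status:
kubectl get service echo-lb -n demo \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```

If the address stays empty, AKO has not yet created the VirtualService or the Controller has no usable VIP pool to allocate from.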
To install Avi on vCenter Server, see Installing Avi Vantage for VMware vCenter in the Avi documentation.
To install Avi on vCenter with VMware NSX, see Installing Avi Vantage for VMware vCenter with NSX in the Avi documentation.
Note: See the Tanzu Kubernetes Grid v2.4 Release Notes for the Avi Controller version that is supported in this release. To upgrade the Avi Controller, see Flexible Upgrades for Avi Vantage.
To enable the Tanzu Kubernetes Grid - NSX Advanced Load Balancer integration, you must deploy and configure the Avi Controller. Currently, Tanzu Kubernetes Grid supports deploying the Avi Controller in vCenter cloud and NSX-T cloud.
To ensure that the Tanzu Kubernetes Grid - NSX Advanced Load Balancer integration works, your environment's network topology must satisfy the following requirements:
Integrating Tanzu Kubernetes Grid with NSX and NSX Advanced Load Balancer (Avi) is supported in the following versions:
| NSX | Avi Controller | Tanzu Kubernetes Grid |
|---|---|---|
| 3.0+ | 20.1.1+ | 1.5.2+ |
There are additional settings to configure in the Controller UI before you can use NSX Advanced Load Balancer.
In the Controller UI, go to Templates > Profiles > IPAM/DNS Profiles, click Create and select IPAM Profile. Give the profile a name, for example `tkg-ipam-profile`.
In the IPAM/DNS Profiles view, click Create again and select DNS Profile.

Note: The DNS Profile is optional for using Service type `LoadBalancer`.

Give the profile a name, for example `tkg-dns-profile`, and enter a Domain Name, for example `tkg.nsxlb.vmware.com`.
Note: The domain name in the DNS Profile applies to Service type `LoadBalancer`, but it is mostly relevant if you use the AVI DNS VS as your Name Server; in that case, a Service's FQDN takes the form `service.namespace.tkg-lab.vmware.com`.
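As an illustration of the note above, assuming a DNS Profile domain of `tkg-lab.vmware.com` and the Avi DNS virtual service acting as the name server, a Service could be resolved by its generated FQDN. The Service name, namespace, and name-server address in this sketch are made up:

```sh
# Hypothetical example: a Service named "echo-lb" in namespace "demo",
# with the DNS Profile domain tkg-lab.vmware.com, would be published as
# echo-lb.demo.tkg-lab.vmware.com. Query the Avi DNS virtual service
# (assumed here to listen at 192.168.14.210) to check the record:
nslookup echo-lb.demo.tkg-lab.vmware.com 192.168.14.210
```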
Click the menu in the top left corner and select Infrastructure > Clouds.
For Default-Cloud, click the edit icon and under IPAM Profile and DNS Profile, select the IPAM and DNS profiles that you created above.
Select the DataCenter tab.
Do not update the Network section yet.
Open Infrastructure > Cloud Resources > Networks and click the edit icon for the network that you are using as the VIP network.
An editable list of IP ranges in the IP address pool appears. Click Add Static IP Address Pool.
Enter an IP address range within the subnet boundaries, for example `192.168.14.210-192.168.14.219`, to define the pool of static IP addresses to use for VIPs in the VIP network. Click Save.
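If you want to confirm the change outside the UI, the Avi Controller also exposes these objects over its REST API. This is only a minimal sketch: the Controller hostname, network name, and credentials below are assumptions to replace with your own values.

```sh
# List the VIP network object and inspect its configured static IP ranges.
# The Controller hostname, network name, and admin credentials are
# assumptions for this sketch; curl prompts for the password.
curl -sk -u admin \
  "https://avi-controller.example.com/api/network?name=VM-Network" \
  | python3 -m json.tool
```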
If the SE Group that you want to use with the management cluster does not have a Virtual Service, there might be no Service Engines running for that SE Group. In that case, the management cluster deployment process must wait for a Service Engine to be created. Creating a Service Engine is time-consuming because it requires deploying a new VM. Under poor networking conditions, this can cause an internal timeout that prevents the management cluster deployment process from finishing successfully.
To prevent this issue, create a dummy Virtual Service through the Avi Controller UI to trigger the creation of a Service Engine before you deploy the management cluster.
To verify that the SE group has a virtual service assigned to it, in the Controller UI, go to Infrastructure > Service Engine Group, and view the details of the SE group. If the SE group does not have a virtual service assigned to it, create a dummy virtual service:
In the Controller UI, go to Applications > Virtual Service.
Click Create Virtual Service and select Basic Setup.
Configure the VIP.
Note: You can delete the dummy virtual service after the management cluster is deployed successfully.
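As an alternative to checking in the UI, you can query the Controller's REST API for the virtual services and service engines it knows about. This is a sketch rather than a required step; the hostname and credentials below are placeholders.

```sh
# Count the virtual services and service engines known to the Controller.
# The Controller hostname and admin credentials are placeholders; Avi API
# collection responses include a "count" field.
curl -sk -u admin "https://avi-controller.example.com/api/virtualservice" \
  | python3 -c 'import json,sys; print(json.load(sys.stdin)["count"], "virtual services")'
curl -sk -u admin "https://avi-controller.example.com/api/serviceengine" \
  | python3 -c 'import json,sys; print(json.load(sys.stdin)["count"], "service engines")'
```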
For complete information about creating a virtual service, beyond what is needed for a dummy service, see Create a Virtual Service in the Avi Networks documentation.
The default NSX Advanced Load Balancer certificate does not contain the Controller's IP address or FQDN in its Subject Alternate Names (SAN), but valid SANs must be defined in the Avi Controller's certificate. Consequently, you must create a custom certificate to provide when you deploy management clusters.
For Subject Alternate Name (SAN), enter either the IP address or FQDN, or both, of the Controller VM.
If you use only the IP address or only the FQDN, it must match the value that you enter for Controller Host when you Configure VMware NSX Advanced Load Balancer, or that you specify in the `AVI_CONTROLLER` variable in the management cluster configuration file.
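If you prefer to generate the custom certificate outside the Controller UI and import it, the following OpenSSL sketch produces a self-signed certificate with both an FQDN and an IP address in the SAN field. The FQDN and IP address are placeholders for your Controller's actual values:

```sh
# Generate a self-signed certificate whose SAN field carries both the
# Controller FQDN and IP address; substitute your own values.
# Requires OpenSSL 1.1.1 or later for the -addext option.
openssl req -x509 -nodes -newkey rsa:4096 -days 365 \
  -keyout avi-controller.key -out avi-controller.crt \
  -subj "/CN=avi-controller.example.com" \
  -addext "subjectAltName=DNS:avi-controller.example.com,IP:192.168.14.200"
```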
Copy the certificate contents. You will need them when you deploy management clusters.
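For example, the certificate contents are typically supplied to the management cluster configuration file in base64 form. The sketch below assumes the standard Tanzu Kubernetes Grid configuration variables (`AVI_ENABLE`, `AVI_CONTROLLER`, `AVI_CA_DATA_B64`) and a placeholder Controller address; verify the exact variable names against the configuration reference for your release:

```sh
# Base64-encode the certificate for the cluster configuration file
# (-w 0 disables line wrapping on GNU base64; omit it on macOS).
AVI_CA_B64="$(base64 -w 0 avi-controller.crt)"

# Append the Avi settings to a cluster configuration file. The variable
# names follow the TKG cluster configuration reference; the Controller
# address is a placeholder.
cat >> mgmt-cluster-config.yaml <<EOF
AVI_ENABLE: "true"
AVI_CONTROLLER: avi-controller.example.com
AVI_CA_DATA_B64: ${AVI_CA_B64}
EOF
```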