Install NSX Advanced Load Balancer

NSX Advanced Load Balancer (ALB), formerly known as Avi Vantage, provides L4+L7 load balancing services to the deployments on vSphere. Tanzu Kubernetes Grid includes VMware NSX Advanced Load Balancer Essentials Edition.

Management cluster deployments on Amazon EC2 or Microsoft Azure create Amazon EC2 or Azure load balancer instances automatically, so they do not require the load balancing services that NSX ALB offers.

NSX ALB in Tanzu Kubernetes Grid

You can configure NSX Advanced Load Balancer (ALB) in Tanzu Kubernetes Grid as:

  • A load balancer for workloads in the clusters that are deployed on vSphere.
  • The VIP endpoint provider for the control plane API server.

Each workload cluster integrates with NSX ALB by running an Avi Kubernetes Operator (AKO) on one of its nodes. The cluster’s AKO calls the Kubernetes API to manage the lifecycle of load balancing and ingress resources for its workloads.
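
For example, a standard Kubernetes Service of type LoadBalancer is all that a workload needs for AKO to program a corresponding VirtualService on the SEs. The following manifest is a minimal sketch; the application name and namespace are hypothetical, and the external IP is allocated by NSX ALB from the configured VIP network.

```yaml
# Minimal sketch of a Service that AKO reconciles into an Avi VirtualService.
# The application name and namespace are hypothetical examples.
apiVersion: v1
kind: Service
metadata:
  name: hello-app          # hypothetical workload
  namespace: demo
spec:
  type: LoadBalancer       # AKO watches this type and programs the SEs
  selector:
    app: hello-app
  ports:
    - port: 80             # port exposed on the VIP
      targetPort: 8080     # container port of the backing pods
```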

NSX ALB as an L4+L7 Ingress Service Provider

As a load balancer, NSX ALB provides an L4+L7 load balancing solution for vSphere. It includes a Kubernetes operator that integrates with the Kubernetes API to manage the lifecycle of load balancing and ingress resources for workloads.

Legacy ingress services for Kubernetes are typically assembled from multiple disparate products whose independent components are difficult to manage and troubleshoot. They offer limited observability and analytics, lack comprehensive visibility into the applications that run on the system, and make cloud-native automation difficult.

In comparison, NSX ALB provides comprehensive load balancing and ingress features in a single solution with central control, which makes it easier to manage and troubleshoot. NSX ALB supports real-time telemetry with insight into the applications that run on the system, and its elastic auto-scaling and decision automation features highlight its cloud-native automation capabilities.

To configure the NSX ALB on your workload clusters, see Configure NSX Advanced Load Balancer.

NSX ALB also lets you configure L7 ingress for your workload clusters by using one of the following options:

L7 Ingress in ClusterIP Mode

This option enables the NSX Advanced Load Balancer L7 ingress capabilities, including sending traffic directly from the service engines (SEs) to the pods. This avoids the multiple hops that other ingress solutions need to forward packets from the load balancer to the node where the pod runs. This option is fully supported by VMware. However, each workload cluster needs a dedicated SE group for Avi Kubernetes Operator (AKO) to work, which can increase the number of SEs that your environment requires.

L7 Ingress in NodePortLocal Mode

Like the previous option, this option avoids the potential extra hop by sending traffic from the NSX Advanced Load Balancer SEs directly to the nodes where the pods run, in this case by leveraging the integration between NSX Advanced Load Balancer and Antrea. With this option, workload clusters can share SE groups. However, VMware support does not assist in configuring or troubleshooting this option.

L7 Ingress in NodePort Mode

NodePort mode is the default mode when AKO is installed on Tanzu Kubernetes Grid. This option allows your workload clusters to share SE groups and is fully supported by VMware. In this mode, traffic will leverage standard Kubernetes NodePort behavior, including its limitations, and will require services to be of type NodePort.

NSX ALB L4 ingress with Contour L7 ingress

This option lets workload clusters share SE groups, is supported by VMware, and requires minimal setup. However, it does not provide the full set of NSX Advanced Load Balancer L7 ingress capabilities.

| | NSX ALB L7 ClusterIP mode | NSX ALB L7 NodePortLocal mode | NSX ALB L7 NodePort mode | NSX ALB L4 with Contour L7 |
|---|---|---|---|---|
| Minimal SE groups required | N | Y | Y | Y |
| VMware supported | Y | N | Y | Y |
| NSX ALB L7 ingress capabilities | Y | Y | Y | N |

To configure the NSX ALB L7 ingress, see Configuring L7 Ingress with NSX Advanced Load Balancer.
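
The L7 ingress mode for a set of workload clusters is selected in their AKODeploymentConfig object on the management cluster. The fragment below is a minimal sketch that assumes the spec.extraConfigs.ingress fields used by the AKO Operator in recent Tanzu Kubernetes Grid releases; verify the exact field names against your version before applying it.

```yaml
# Sketch of the ingress-related settings in an AKODeploymentConfig.
# The object name is hypothetical, and required fields such as the controller
# address, cloud name, and credentials are omitted for brevity.
apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
kind: AKODeploymentConfig
metadata:
  name: ako-l7-ingress
spec:
  extraConfigs:
    ingress:
      disableIngressClass: false   # let AKO act as the ingress controller
      serviceType: NodePortLocal   # or ClusterIP / NodePort, per the modes above
```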

NSX ALB as a Control Plane Endpoint Provider

You can use NSX ALB as the control plane endpoint provider in Tanzu Kubernetes Grid. The following table describes the differences between NSX ALB and Kube-Vip, which is the default control plane endpoint provider in Tanzu Kubernetes Grid.

| | Kube-Vip | NSX ALB |
|---|---|---|
| Sends traffic to | Single control plane node | Multiple control plane nodes |
| Requires configuring endpoint VIP | Yes | No. Assigns a VIP from the NSX ALB static IP pool |

To configure NSX Advanced Load Balancer as a cluster’s control plane HA Provider, see NSX Advanced Load Balancer.

To change a cluster’s control plane HA Provider to NSX Advanced Load Balancer, see Configure NSX Advanced Load Balancer.
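
In practice, the control plane endpoint provider is selected in the cluster configuration file. The excerpt below is a minimal sketch; AVI_CONTROL_PLANE_HA_PROVIDER is the relevant variable, and the accompanying comments describe assumed behavior rather than a definitive configuration.

```yaml
# Cluster configuration file excerpt (sketch): use NSX ALB instead of Kube-Vip
# as the control plane endpoint provider.
AVI_CONTROL_PLANE_HA_PROVIDER: "true"
# With Kube-Vip you must set VSPHERE_CONTROL_PLANE_ENDPOINT to a static VIP yourself;
# with NSX ALB the endpoint VIP is assigned from the NSX ALB static IP pool,
# so the variable can be left unset.
```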

NSX Advanced Load Balancer Deployment Topology

NSX Advanced Load Balancer includes the following components:

  • Avi Controller manages VirtualService objects and interacts with the vCenter Server infrastructure to manage the lifecycle of the service engines (SEs). It is the portal for viewing the health of VirtualServices and SEs and the associated analytics that NSX Advanced Load Balancer provides. It is also the point of control for monitoring and maintenance operations such as backup and restore.
  • Avi Kubernetes Operator (AKO) is a Kubernetes controller that each cluster runs on one of its nodes. Each AKO pod uses its cluster’s Kubernetes API to watch for changes in the cluster’s LoadBalancer and Ingress specifications, or other relevant custom resource definitions. When the AKO detects a change, it calls the Avi Controller API to make the change in the Avi resources, for example create a new load balancer VirtualService object and connect it with pods running in the cluster.
  • AKO Operator on the management cluster manages the lifecycle and configuration of the AKO on each workload cluster, and can make runtime changes to the AKO configuration.
  • Service Engines (SE) implement the data plane in a VM.
  • SE Groups group Service Engines into isolated sets, for example to dedicate them to specific namespaces. This lets you manage SEs collectively and set limits per group, such as the maximum number of SEs and the CPU and memory allocated to them.

You can deploy NSX Advanced Load Balancer in the topology illustrated in the figure below.

VMware NSX Advanced Load Balancer deployment topology

The topology diagram above shows the following configuration:

  • Avi controller is connected to the management port group.
  • The service engines are connected to the management port group and one or more VIP port groups. Service engines run in dual-arm mode.
  • Avi Kubernetes Operator is installed on the Tanzu Kubernetes clusters and should be able to route to the controller’s management IP.
  • Avi Kubernetes Operator is installed in NodePort mode only.

For more information on the architecture of NSX ALB on vSphere, see VMware Tanzu for Kubernetes Operations on vSphere Reference Design.

Recommendations

Configure different SE group and VIP network setups for different workload clusters by using AKODeploymentConfig.spec.clusterSelector.matchLabels; a minimal example follows the list below. For more information about how to use AKODeploymentConfig to configure the following scenario recommendations, see Create Multiple NSX ALB Configurations for Different Workload Clusters in Tanzu Kubernetes Grid Networking.

  • For setups with a small number of Tanzu Kubernetes clusters that each have a large number of nodes, use one dedicated SE group per cluster.
  • For setups with a large number of Tanzu Kubernetes clusters that each have a small number of nodes, share an SE group between multiple clusters.
  • Any number of workload clusters can share an SE group, as long as the sum of the number of distinct cluster node networks and the number of distinct cluster VIP networks is no greater than 8.
  • All clusters can share a single VIP network or each cluster can have a dedicated VIP network.
  • Clusters that share a VIP network should be grouped by AKODeploymentConfig.spec.clusterSelector.matchLabels.
  • For simplicity, in a lab environment all components can be connected to the same port group on which the Tanzu Kubernetes clusters are connected.
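
The following AKODeploymentConfig fragment is a minimal sketch of the clusterSelector mechanism referenced above. The label key and value, SE group, and network names are hypothetical placeholders, and required fields such as the controller address, cloud name, and credentials are omitted for brevity.

```yaml
# Sketch: apply a dedicated SE group and VIP network to the workload clusters
# that carry a matching label. All names and labels are hypothetical.
apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
kind: AKODeploymentConfig
metadata:
  name: ako-team-a
spec:
  clusterSelector:
    matchLabels:
      team: team-a                      # label carried by the matching workload clusters
  serviceEngineGroup: se-group-team-a   # dedicated SE group for these clusters
  dataNetwork:
    name: vip-network-team-a            # VIP port group shared by these clusters
    cidr: 192.168.20.0/24
```

Workload clusters pick up this configuration when they carry the matching label, for example by setting AVI_LABELS in the cluster configuration file or by labeling the Cluster object on the management cluster.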

In the topology illustrated above, NSX Advanced Load Balancer provides the following networking, IPAM, isolation, tenancy, and Avi Kubernetes Operator functionalities.

Networking

  • SEs are deployed in a dual-arm mode in relation to the data path, with connectivity both to the VIP network and to the workload cluster node network.
  • The VIP network and the workload networks must be discoverable in the same vCenter Cloud so that the Avi Controller can create SEs attached to both networks.
  • VIP and SE data interface IP addresses are allocated from the VIP network.
  • There can be only one VIP network per workload cluster. However, different VIP networks can be assigned to different workload clusters, for example in a large Tanzu Kubernetes Grid deployment.

IPAM

  • If DHCP is not available, the Avi Controller manages IPAM for the VIP and SE interface IP addresses.
  • The IPAM profile in Avi Controller is configured with a Cloud and a set of Usable Networks.
  • If DHCP is not configured for the VIP network, at least one static pool must be created for the target network.

Resource Isolation

  • Dataplane isolation across Tanzu Kubernetes clusters can be provided by using SE Groups. The vSphere administrator can create a dedicated SE Group and assign it to a set of Tanzu Kubernetes clusters that need isolation.
  • SE Groups offer the ability to control the resource characteristics of the SEs created by the Avi Controller, for example, CPU, memory, and so on.

Tenancy

With NSX Advanced Load Balancer Essentials, all workload cluster users are associated with the single admin tenant.

Avi Kubernetes Operator

Avi Kubernetes Operator is installed on Tanzu Kubernetes clusters. It is configured with the Avi Controller IP address and the user credentials that Avi Kubernetes Operator uses to communicate with the Avi Controller. A dedicated user is created per workload cluster, in the admin tenant and with a customized role. This role has limited access, as defined in https://github.com/avinetworks/avi-helm-charts/blob/master/docs/AKO/roles/ako-essential.json.

Install Avi Controller on vCenter Server

You install Avi Controller on vCenter Server by downloading and deploying an OVA template. These instructions provide guidance specific to deploying Avi Controller for Tanzu Kubernetes Grid.

  1. Make sure your vCenter environment fulfills the prerequisites described in Installing Avi Vantage for VMware vCenter in the Avi Networks documentation.
  2. Access the Avi Networks portal from the Tanzu Kubernetes Grid downloads page.
  3. In the VMware NSX Advanced Load Balancer row, click Go to Downloads.
  4. Click Download Now to go to the NSX Advanced Load Balancer Customer Portal.
  5. In the customer portal, go to Software > 20.1.6.
    • You can also install version 20.1.3.
  6. Scroll down to VMware, and click the download button for Controller OVA.
  7. Log in to the vSphere Client.
  8. In the vSphere Client, right-click an object in the vCenter Server inventory and select Deploy OVF Template.
  9. Select Local File, click the button to upload files, and navigate to the downloaded OVA file on your local machine.
  10. Follow the installer prompts to deploy a VM from the OVA template, referring to the Deploying Avi Controller OVA instructions in the Avi Networks documentation.

    Select the following options in the OVA deployment wizard:

    • Provide a name for the Controller VM, for example nsx-adv-lb-controller, and select the datacenter in which to deploy it.
    • Select the cluster in which to deploy the Controller VM.
    • Review the OVA details, then select a datastore for the VM files. For the disk format, select Thick Provision Lazy Zeroed.
    • For the network mapping, select a port group for the Controller to use to communicate with vCenter Server. The network must have access to the management network on which vCenter Server is running.
    • If DHCP is available, you can use it for controller management.
    • Specify the management IP address, subnet mask, and default gateway. If you use DHCP, you can leave these fields empty.
    • Leave the key field in the template empty.
    • On the final page of the installer, click Finish to start the deployment.

    It takes some time for the deployment to finish.

  11. When the OVA deployment finishes, power on the resulting VM.

    After you power on the VM, it takes some time for it to be ready to use.

  12. In vCenter, create a vSphere account for the Avi controller, with permissions as described in VMware User Role for Avi Vantage in the Avi Networks documentation.

    NOTE: See the Tanzu Kubernetes Grid v1.4 Release Notes for which Avi Controller versions are supported in this release. To upgrade the Avi Controller, see Flexible Upgrades for Avi Vantage.

Avi Controller Setup: Basics

For full details of how to set up the Controller, see Performing the Avi Controller Initial Setup in the Avi Controller documentation.

This section provides some information about configuration that has been validated on Tanzu Kubernetes Grid, as well as some tips that are not included in the Avi Controller documentation.

NOTE: If you are using an existing Avi Controller, you must make sure that the VIP Network that will be used during Tanzu Kubernetes Grid management cluster deployment has a unique name across all AVI Clouds.

  1. In a browser, go to the IP address of the Controller VM.

  2. Configure a password to create an admin account.

  3. Optionally set DNS Resolvers and NTP server information, set the backup passphrase, and click Next.

    Setting the backup passphrase is mandatory.

  4. Select None to skip SMTP configuration, and click Next.

  5. For Multi-Tenant, leave the default settings. Enable the Setup Cloud After checkbox next to Save, and click Save.
  6. For Orchestrator Integration, select VMware.
  7. Under the Infrastructure tab, enter the vCenter Server credentials and the IP address or FQDN of the vCenter Server instance.
  8. For Permissions, select Write.

    This allows the Controller to create and manage SE VMs.

  9. For SDN Integration select None and click Next.

  10. Select the vSphere Datacenter.
  11. For System IP Address Management Setting, enable the DHCP Enabled checkbox if your data plane networks have DHCP. Otherwise, leave the DHCP Enabled checkbox disabled.

  12. For Virtual Service Placement Settings, leave both checkboxes unchecked and click Next.

  13. Select a virtual switch to use as the management network NIC in the SEs. Select the same network that you used when you deployed the controller.

  14. If you enabled DHCP for your data plane networks, enable the DHCP Enabled checkbox for your management network. Otherwise leave the checkbox disabled and configure IP Subnet and Static IP Address Pool to set the management network address range. Click Next.

  15. For Support Multiple Tenants, select No.

Avi Controller Setup: IPAM and DNS

There are additional settings to configure in the Controller UI before you can use NSX Advanced Load Balancer.

  1. In the Controller UI, go to Applications > Templates > Profiles > IPAM/DNS Profiles, click Create and select IPAM Profile.

    • Enter a name for the profile, for example, tkg-ipam-profile.
    • Leave the Type set to Avi Vantage IPAM.
    • Leave Allocate IP in VRF unchecked.
    • Click Add Usable Network.
    • Select Default-Cloud.
    • For Usable Network, select the network where you want the virtual IPs to be allocated. If you are using a flat network topology, this can be the same network (management network) that you selected in the preceding procedure. For a different network topology, select a separate port group network for the virtual IPs.
    • (Optional) Click Add Usable Network to configure additional VIP networks.
    • Click Save.

    Configure IPAM and DNS Profile

  2. In the IPAM/DNS Profiles view, click Create again and select DNS Profile.

    NOTE: The DNS Profile is optional for using Service type LoadBalancer.

    • Enter a name for the profile, for example, tkg-dns-profile.
    • For Type, select Avi Vantage DNS.
    • Click Add DNS Service Domain and enter at least one Domain Name entry, for example tkg.nsxlb.vmware.com.
      • This should be from a DNS domain that you can manage.
      • This is more important for the L7 Ingress configurations for workload clusters, in which the Controller bases the logic to route traffic on hostnames.
      • Ingress resources that the Controller manages should use host names that belong to the domain name that you select here, as shown in the example manifest after these steps.
      • This domain name is also used for Services of type LoadBalancer, but it is mostly relevant if you use AVI DNS VS as your Name Server.
      • Each Virtual Service will create an entry in the AVI DNS configuration. For example, service.namespace.tkg-lab.vmware.com.
    • Click Save.

    Create the DNS Profile

  3. Click the menu in the top left corner and select Infrastructure > Clouds.

  4. For Default-Cloud, click the edit icon and under IPAM Profile and DNS Profile, select the IPAM and DNS profiles that you created above.

    Add the IPAM and DNS Profiles to the Cloud

  5. Select the DataCenter tab.

    • Leave DHCP enabled. This is set per network.
    • Leave the IPv6… and Static Routes… checkboxes unchecked.
  6. Do not update the Network section yet.

  7. Save the cloud configuration.
  8. Go to Infrastructure > Networks and click the edit icon for the network you are using as the VIP network.
  9. Edit the network to add a pool of IPs to be used as a VIP.

    Edit the subnet and add an IP Address pool range within the boundaries, for example 192.168.14.210-192.168.14.219.
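
For reference, Ingress resources that AKO manages should use host names within the DNS service domain configured above. The manifest below is a minimal sketch that assumes the example domain tkg.nsxlb.vmware.com; the Ingress name, namespace, backing Service, and ingress class name are hypothetical.

```yaml
# Sketch of an Ingress whose host name belongs to the example DNS service domain.
# Names and the backing Service are hypothetical.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress
  namespace: demo
spec:
  ingressClassName: avi-lb                 # ingress class served by AKO (name may vary)
  rules:
    - host: hello.tkg.nsxlb.vmware.com     # must belong to the DNS service domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-app            # hypothetical Service backing the Ingress
                port:
                  number: 80
```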

Avi Controller Setup: Custom Certificate

The default NSX Advanced Load Balancer certificate does not contain the Controller’s IP address or FQDN in the Subject Alternate Names (SAN); however, valid SANs must be defined in the Avi Controller’s certificate. Consequently, you must create a custom certificate to provide when you deploy management clusters.

  1. In the Controller UI, click the menu in the top left corner and select Templates > Security > SSL/TLS Certificates, click Create, and select Controller Certificate.
  2. Enter the same name in the Name and Common Name text boxes.
  3. Select Self-Signed.
  4. For Subject Alternate Name (SAN), enter either the IP address or FQDN, or both, of the Controller VM.

    If only the IP address or FQDN is used, it must match the value that you use for Controller Host when you configure NSX Advanced Load Balancer settings during management cluster deployment, or specify in the AVI_CONTROLLER variable in the management cluster configuration file.

  5. Leave the other fields empty and click Save.
  6. In the menu in the top left corner, select Administration > Settings > Access Settings, and click the edit icon in System Access Settings.
  7. Delete all of the certificates in SSL/TLS Certificate.
  8. Use the SSL/TLS Certificate drop-down menu to add the custom certificate that you created above.
  9. In the menu in the top left corner, select Templates > Security > SSL/TLS Certificates, select the certificate that you created, and click the export icon.
  10. Copy the certificate contents.

    You will need the certificate contents when you deploy management clusters.
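
As an illustration, the exported certificate contents are typically supplied to the management cluster through the AVI_CA_DATA_B64 variable in the cluster configuration file. The excerpt below is a minimal sketch; every value is a hypothetical placeholder, and it assumes the standard AVI_* management cluster configuration variables for this release.

```yaml
# Sketch of the NSX ALB settings in a management cluster configuration file.
# All values are hypothetical placeholders.
AVI_ENABLE: "true"
AVI_CONTROLLER: nsx-alb-controller.example.com   # must match a SAN in the custom certificate
AVI_USERNAME: admin
AVI_PASSWORD: "<controller-password>"
AVI_CA_DATA_B64: <base64-encoded contents of the exported certificate>
AVI_CLOUD_NAME: Default-Cloud
AVI_SERVICE_ENGINE_GROUP: Default-Group
AVI_DATA_NETWORK: VIP-Network                    # VIP network selected in the IPAM profile
AVI_DATA_NETWORK_CIDR: 192.168.14.0/24
```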

Avi Controller Setup: Essentials License

Finish setting up the Avi Controller by enabling the Essentials license, if required.

  1. In the Controller UI, go to Administration > Settings > Licensing. The Licensing screen appears.
  2. In the Licensing screen, click the crank wheel icon that is next to Licensing.
  3. In the list of license types, select Essentials License. Click Save, and then click Next.
  4. In the Licensing screen, verify that the license has been set to Essentials.
  5. To create a default gateway route for the traffic to flow from the service engines to the Pods and then back to the clients, go to Infrastructure > Routing > Static Route in the Controller UI, and click CREATE.
  6. In the Edit Static Route:1 screen, enter the following details:
    • Gateway Subnet: 0.0.0.0/0
    • Next Hop: The gateway IP address of the virtual IP network that you want to use, set as Usable Network above
  7. Click SAVE. After the Essentials Tier is enabled on a Controller that has not already been configured, the default Service Engine group is switched to the Legacy (Active/Standby) HA mode, which is the only mode that the Essentials Tier supports.

What to Do Next

Your NSX Advanced Load Balancer deployment is ready for you to use with management clusters.
