Install NSX Advanced Load Balancer

NSX Advanced Load Balancer (ALB), formerly known as Avi Vantage, provides L4+L7 load balancing services for deployments on vSphere. Tanzu Kubernetes Grid includes VMware NSX Advanced Load Balancer Essentials Edition.

Management cluster deployments on Amazon EC2 or Microsoft Azure create Amazon EC2 or Azure load balancer instances automatically, so they do not require the load balancing services that NSX ALB offers.

NSX ALB in Tanzu Kubernetes Grid

You can configure NSX Advanced Load Balancer (ALB) in Tanzu Kubernetes Grid as:

  • A load balancer for workloads in the clusters that are deployed on vSphere.
  • The VIP endpoint provider for the control plane API server.

Each workload cluster integrates with NSX ALB by running an Avi Kubernetes Operator (AKO) on one of its nodes. The cluster’s AKO watches Kubernetes Ingress and Service resources and calls the Avi Controller APIs to manage the lifecycle of load balancing and ingress resources for its workloads.

NSX ALB as an L4+L7 Ingress Service Provider

As a load balancer, NSX ALB provides an L4+L7 load balancing solution for vSphere. It includes a Kubernetes operator that integrates with the Kubernetes API to manage the lifecycle of load balancing and ingress resources for workloads.
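For example, AKO watches standard Kubernetes Service objects of type LoadBalancer and interacts with the Avi Controller to create a corresponding VirtualService with a VIP. The following is a minimal sketch; the names and ports are placeholders.

```yaml
# Minimal Service of type LoadBalancer (standard Kubernetes schema).
# AKO watches objects of this type and interacts with the Avi Controller
# to create a VirtualService and allocate a VIP for it.
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc        # placeholder name
  namespace: default
spec:
  type: LoadBalancer
  selector:
    app: my-app           # matches the pods that back this service
  ports:
  - port: 80              # port exposed on the VIP
    targetPort: 8080      # container port on the workload pods
```

After the Service is reconciled, its external IP is the VIP that NSX ALB allocates from the configured VIP network.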

Legacy ingress services for Kubernetes typically combine multiple disparate solutions. These services and products contain independent components that are difficult to manage and troubleshoot, offer reduced observability with little analytics, and lack comprehensive visibility into the applications that run on the system. Cloud-native automation is also difficult with legacy ingress services.

In comparison to legacy Kubernetes ingress services, NSX ALB provides comprehensive load balancing and ingress features. As a single solution with central control, NSX ALB is easy to manage and troubleshoot. NSX ALB supports real-time telemetry with insight into the applications that run on the system. Its elastic auto-scaling and decision automation features highlight the cloud-native automation capabilities of NSX ALB.

To configure the NSX ALB on your workload clusters, see Configure NSX Advanced Load Balancer.

NSX ALB also lets you configure L7 ingress for your workload clusters by using one of the following options:

L7 Ingress in ClusterIP Mode

This option enables the NSX Advanced Load Balancer L7 ingress capabilities, including sending traffic directly from the service engines (SEs) to the pods. This avoids the multiple hops that other ingress solutions require when sending packets from the load balancer to the right node where the pod runs. This option is fully supported by VMware. However, each workload cluster needs a dedicated SE group for Avi Kubernetes Operator (AKO) to work, which can increase the number of SEs you need in your environment.

L7 Ingress in NodePortLocal Mode

Like the option above, this option avoids the potential extra hop when sending traffic from the NSX Advanced Load Balancer SEs to the pods by targeting the nodes where the pods run, in this case by leveraging the integration between NSX Advanced Load Balancer and Antrea. With this option, the workload clusters can share SE groups.

L7 Ingress in NodePort Mode

NodePort mode is the default mode when AKO is installed on Tanzu Kubernetes Grid. This option allows your workload clusters to share SE groups and is fully supported by VMware. In this mode, traffic uses standard Kubernetes NodePort behavior, including its limitations, and services must be of type NodePort.

NSX ALB L4 Ingress with Contour L7 Ingress

This option lets workload clusters share SE groups, is supported by VMware, and requires minimal setup. However, you do not get the full set of NSX Advanced Load Balancer L7 ingress capabilities.

|   | NSX ALB L7 ClusterIP mode | NSX ALB L7 NodePortLocal mode | NSX ALB L7 NodePort mode | NSX ALB L4 with Contour L7 |
| --- | --- | --- | --- | --- |
| Minimal SE groups required | N | Y | Y | Y |
| VMware supported | Y | Y | Y | Y |
| NSX ALB L7 ingress capabilities | Y | Y | Y | N |

To configure the NSX ALB L7 ingress, see Configuring L7 Ingress with NSX Advanced Load Balancer.
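As a rough, abbreviated sketch of that configuration (not the full procedure), the L7 ingress mode is selected in the cluster's AKODeploymentConfig under extraConfigs.ingress. The object name and values below are illustrative, and required fields unrelated to ingress are omitted, so verify the exact schema in the linked topic for your Tanzu Kubernetes Grid version.

```yaml
# Abbreviated AKODeploymentConfig sketch: choosing the L7 ingress mode.
# Illustrative only; fields not related to ingress are omitted.
apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
kind: AKODeploymentConfig
metadata:
  name: install-ako-for-l7         # illustrative name
spec:
  extraConfigs:
    ingress:
      disableIngressClass: false   # let AKO process Ingress objects
      serviceType: NodePortLocal   # or ClusterIP / NodePort (see the modes above)
      shardVSSize: MEDIUM
```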

NSX ALB as a Control Plane Endpoint Provider

You can use NSX ALB as the control plane endpoint provider in Tanzu Kubernetes Grid. The following table describes the differences between NSX ALB and Kube-Vip, which is the default control plane endpoint provider in Tanzu Kubernetes Grid.

|   | Kube-Vip | NSX ALB |
| --- | --- | --- |
| Sends traffic to | Single control plane node | Multiple control plane nodes |
| Requires configuring endpoint VIP | Yes | No; assigns VIP from the NSX ALB static IP pool |

To configure NSX Advanced Load Balancer as a cluster’s control plane HA Provider, see NSX Advanced Load Balancer.

To change a cluster’s control plane HA Provider to NSX Advanced Load Balancer, see Configure NSX Advanced Load Balancer.
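In the flat management cluster configuration file, this choice is typically made with a single variable. The following is a minimal sketch that assumes the TKG v1.5 variable names; verify them against the configuration reference for your version.

```yaml
# Management cluster configuration excerpt (sketch): use NSX ALB instead of
# Kube-Vip as the control plane endpoint provider.
AVI_ENABLE: "true"
AVI_CONTROL_PLANE_HA_PROVIDER: "true"   # "false" (the default) keeps Kube-Vip
# With NSX ALB as the provider, you do not configure an endpoint VIP yourself;
# the VIP is assigned from the NSX ALB static IP pool.
```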

NSX Advanced Load Balancer Deployment Topology

NSX Advanced Load Balancer includes the following components:

  • Avi Kubernetes Operator (AKO) provides the load balancer functionality for Kubernetes clusters. It listens to Kubernetes Ingress and Service Type LoadBalancer objects and interacts with the Avi Controller APIs to create VirtualService objects.
  • Service Engines (SE) implement the data plane in a VM form factor.
  • Avi Controller manages VirtualService objects and interacts with the vCenter Server infrastructure to manage the lifecycle of the service engines (SEs). It is the portal for viewing the health of VirtualServices and SEs and the associated analytics that NSX Advanced Load Balancer provides. It is also the point of control for monitoring and maintenance operations such as backup and restore.
  • SE Groups group Service Engines into units of isolation, for example a dedicated SE group for specific important namespaces. An SE group also controls the resource flavor of the SEs that the Avi Controller creates (CPU, memory, and so on) and the maximum number of SEs that are permitted.

You can deploy NSX Advanced Load Balancer in the topology illustrated in the figure below.

VMware NSX Advanced Load Balancer deployment topology

The topology diagram above shows the following configuration:

  • Avi controller is connected to the management port group.
  • The service engines are connected to the management port group and one or more VIP port groups. Service engines run in dual-arm mode.
  • Avi Kubernetes Operator is installed on the Tanzu Kubernetes clusters and should be able to route to the controller’s management IP.
  • Avi Kubernetes Operator is installed in NodePort mode only.

For more information on the architecture of NSX ALB on vSphere, see VMware Tanzu for Kubernetes Operations on vSphere Reference Design.

Configure different SE group and VIP network setups in different workload clusters by using AKODeploymentConfig.spec.clusterSelector.matchLabels. For more information about how to use AKODeploymentConfig to implement the following scenario recommendations, see Create Multiple NSX ALB Configurations for Different Workload Clusters in Tanzu Kubernetes Grid Networking.

  • For setups with a small number of Tanzu Kubernetes clusters that each have a large number of nodes, it is recommended to use one dedicated SE group per cluster.
  • For setups with a large number of Tanzu Kubernetes clusters that each have a small number of nodes, it is recommended to share an SE group between multiple clusters.
  • An SE group can be shared by any number of workload clusters as long as the sum of the number of distinct cluster node networks and the number of distinct cluster VIP networks is no bigger than 8.
  • All clusters can share a single VIP network or each cluster can have a dedicated VIP network.
  • Clusters that share a VIP network should be grouped by AKODeploymentConfig.spec.clusterSelector.matchLabels.
  • For simplicity, in a lab environment all components can be connected to the same port group on which the Tanzu Kubernetes clusters are connected.

In the topology illustrated above, NSX Advanced Load Balancer provides the following networking, IPAM, isolation, tenancy, and Avi Kubernetes Operator functionalities.

Networking

  • SEs are deployed in a dual-arm mode in relation to the data path, with connectivity both to the VIP network and to the workload cluster node network.
  • The VIP network and the workload networks must be discoverable in the same vCenter Cloud so that the Avi Controller can create SEs attached to both networks.
  • VIP and SE data interface IP addresses are allocated from the VIP network.
  • There can only be one VIP network per workload cluster. However, different VIP networks could be assigned to different workload clusters, for example in a large Tanzu Kubernetes Grid deployment.

IPAM

  • If DHCP is not available, IPAM for the VIP and SE Interface IP address is managed by Avi Controller.
  • The IPAM profile in Avi Controller is configured with a Cloud and a set of Usable Networks.
  • If DHCP is not configured for the VIP network, at least one static pool must be created for the target network.

Resource Isolation

  • Dataplane isolation across Tanzu Kubernetes clusters can be provided by using SE Groups. The vSphere admin can create a dedicated SE Group and assign it to a set of Tanzu Kubernetes clusters that need isolation.
  • SE Groups offer the ability to control the resource characteristics of the SEs created by the Avi Controller, for example, CPU, memory, and so on.

Tenancy

With NSX Advanced Load Balancer Essentials, all workload cluster users are associated with the single admin tenant.

Avi Kubernetes Operator

Avi Kubernetes Operator is installed on Tanzu Kubernetes clusters. It is configured with the Avi Controller IP address and the user credentials that Avi Kubernetes Operator uses to communicate with the Avi Controller. A dedicated user per workload cluster is created in the admin tenant with a customized role. This role has limited access, as defined in https://github.com/avinetworks/avi-helm-charts/blob/master/docs/AKO/roles/ako-essential.json.

Recommendations

  • Use a dedicated SE group per cluster in setups with a small number of Tanzu Kubernetes clusters, each of which has a large number of nodes.
  • Share an SE group between multiple clusters in setups with a large number of Tanzu Kubernetes clusters, each of which has a small number of nodes. An SE group can be shared by any number of workload clusters as long as the sum of the number of distinct cluster node networks and the number of distinct cluster VIP networks is not greater than 8.

    Note: Use the spec.clusterSelector.matchLabels field in the AKODeploymentConfig file to configure different SE groups and VIP network setups in different workload clusters. For more information, see Create Multiple NSX ALB Configurations for Different Workload Clusters in Tanzu Kubernetes Grid Networking.

  • Group the clusters that share a VIP network by setting the spec.clusterSelector.matchLabels field in the AKODeploymentConfig file, as shown in the sketch below.
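A minimal, abbreviated sketch of this pattern follows: one AKODeploymentConfig per group of workload clusters, selecting clusters by label and pointing them at a dedicated SE group and VIP network. All names, addresses, and CIDRs are placeholders, and required fields that are not relevant to selection are omitted; see Create Multiple NSX ALB Configurations for Different Workload Clusters for the full schema.

```yaml
# Sketch: an AKODeploymentConfig for one group of workload clusters that
# share an SE group and a VIP network. All values are placeholders.
apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
kind: AKODeploymentConfig
metadata:
  name: ako-team-a                 # illustrative name
spec:
  cloudName: Default-Cloud
  controller: 10.0.0.10            # Avi Controller IP address or FQDN
  serviceEngineGroup: se-group-team-a
  dataNetwork:
    name: vip-network-team-a       # VIP network shared by this group
    cidr: 192.168.14.0/24
  clusterSelector:
    matchLabels:
      team: team-a                 # applies to workload clusters with this label
```

Workload clusters whose labels match clusterSelector.matchLabels are configured by this object; clusters in a different group get a separate AKODeploymentConfig with its own SE group and VIP network.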

Install Avi Controller on vCenter Server

You install Avi Controller on vCenter Server by downloading and deploying an OVA template. These instructions provide guidance specific to deploying Avi Controller for Tanzu Kubernetes Grid.

  1. Make sure your vCenter environment fulfills the prerequisites described in Installing Avi Vantage for VMware vCenter in the Avi Networks documentation.
  2. Access the Avi Networks portal from the Tanzu Kubernetes Grid downloads page.
  3. In the VMware NSX Advanced Load Balancer row, click Go to Downloads.
  4. Click Download Now to go to the NSX Advanced Load Balancer Customer Portal.
  5. In the customer portal, go to Software > 20.1.6.
    • You can also install version 20.1.3.
  6. Scroll down to VMware, and click the download button for Controller OVA.
  7. Log in to the vSphere Client.
  8. In the vSphere Client, right-click an object in the vCenter Server inventory and select Deploy OVF Template.
  9. Select Local File, click the button to upload files, and navigate to the downloaded OVA file on your local machine.
  10. Follow the installer prompts to deploy a VM from the OVA template, referring to the Deploying Avi Controller OVA instructions in the Avi Networks documentation.

    Select the following options in the OVA deployment wizard:

    • Provide a name for the Controller VM, for example nsx-adv-lb-controller, and select the datacenter in which to deploy it.
    • Select the cluster in which to deploy the Controller VM.
    • Review the OVA details, then select a datastore for the VM files. For the disk format, select Thick Provision Lazy Zeroed.
    • For the network mapping, select a port group for the Controller to use to communicate with vCenter Server. The network must have access to the management network on which vCenter Server is running.
    • If DHCP is available, you can use it for controller management.
    • Specify the management IP address, subnet mask, and default gateway. If you use DHCP, you can leave these fields empty.
    • Leave the key field in the template empty.
    • On the final page of the installer, click Finish to start the deployment.

    It takes some time for the deployment to finish.

  11. When the OVA deployment finishes, power on the resulting VM.

    After you power on the VM, it takes some time for it to be ready to use.

  12. In vCenter, create a vSphere account for the Avi controller, with permissions as described in Roles and Permissions for vCenter and NSX-T Users in the Avi Networks documentation.

    Note: See the Tanzu Kubernetes Grid v1.5 Release Notes for which Avi Controller versions are supported in this release. To upgrade the Avi Controller, see Flexible Upgrades for Avi Vantage.

Avi Controller Setup: Infrastructure

In TKG v1.5.2+, you can set up Avi to use either vSphere or VMware NSX.

If you are using NSX ALB with an NSX overlay network, configure the NSX interface in the Avi Controller UI.

vSphere

For full details of how to set up the Avi Controller for vCenter Cloud, see Performing the Avi Controller Initial setup in the Avi Controller documentation.

This section provides some information about configuration that has been validated on Tanzu Kubernetes Grid, as well as some tips that are not included in the Avi Controller documentation. The procedure below applies to Avi Controller version 20.1.5+.

Note: If you are using an existing Avi Controller, you must make sure that the VIP Network that is used during Tanzu Kubernetes Grid management cluster deployment has a unique name across all AVI Clouds.

  1. In a browser, go to the IP address of the Controller VM.

  2. Configure a password to create an admin account.

  3. Optionally set DNS Resolvers and NTP server information, set the backup passphrase, and click Next.

    Setting the backup passphrase is mandatory.

  4. Select None to skip SMTP configuration, and click Next.

  5. For Multi-Tenant, leave the default settings. Enable the Setup Cloud After checkbox next to Save, and click Save.
  6. Select VMware vCenter/vSphere ESX as the Cloud Infrastructure Type.
  7. Under the Infrastructure tab, enter the vCenter Server credentials and the IP address or FQDN of the vCenter Server instance.
  8. For Permissions, select Write.

    This allows the Controller to create and manage SE VMs.

  9. For SDN Integration select None and click Next.

  10. Select the vSphere Datacenter.
  11. For System IP Address Management Setting, enable the DHCP Enabled checkbox if your data plane networks have DHCP. Otherwise, leave the DHCP Enabled checkbox disabled.

  12. For Virtual Service Placement Settings, leave both checkboxes unchecked and click Next.

  13. Select a virtual switch to use as the management network NIC in the SEs. Select the same network that you used when you deployed the controller.

  14. If you enabled DHCP for your data plane networks, enable the DHCP Enabled checkbox for your management network. Otherwise leave the checkbox disabled and configure IP Subnet and Static IP Address Pool to set the management network address range. Click Next.

  15. For Support Multiple Tenants, select No.

NSX (v1.5.2+)

Integrating TKG with NSX and NSX ALB (Avi) is supported in the following versions:

| NSX | Avi Controller | Tanzu Kubernetes Grid |
| --- | --- | --- |
| 3.0+ | 20.1.1+ | 1.5.2+ |

After you have configured vCenter and NSX to use NSX ALB, you can configure the NSX ALB side of the integration from the Avi Controller UI, as follows.

For full details of how to set up the Avi Controller for NSX, see Avi Integration with NSX in the Avi Controller documentation.

Configure NSX ALB to authenticate with the NSX Manager and vCenter

To enable NSX ALB to authenticate with the vCenter and NSX Manager servers:

  1. Log in to the Avi Controller.
  2. Navigate to Administration > User Credentials and select CREATE at top right.
  3. Enter a Name for your NSX Manager admin account and select NSX as the Credentials Type.
  4. Enter the Username and Password for your NSX Manager admin account, and click Save.
  5. From Administration > User Credentials, select CREATE again.
  6. Enter a Name for your vCenter account and select vCenter as the Credentials Type. The account can be your vCenter admin account, or a service account that you create in vCenter for NSX ALB.
  7. Enter the Username and Password for your vCenter account, and click Save.

Create NSX Cloud

  1. Navigate to Infrastructure > Clouds, click Create, and select NSX Cloud.
  2. Enter the Name of the NSX cloud. Note: Type defaults to NSX Cloud.
  3. Enable the DHCP checkbox if the SE management segment has been enabled for DHCP.
  4. Enter the NSX Manager hostname or IP address as the NSX Manager Address and select the NSX Manager Credentials that you created in the section above.
  5. Click Connect to authenticate with the NSX Manager.

    Configure NSX Cloud

  6. In the NSX tab, under Management Network, select the required Transport Zone. Note: If Virtual LAN (VLAN)-backed logical segments are used instead of an overlay transport zone, refer to NSX VLAN Logical Segment.

  7. Select the Tier1 Logical Router and Overlay Segment.
  8. Under Data Networks, select the required Transport Zone. Click Add to add more T1 routers and connected segments for VIP placement.

    Configure NSX Cloud

  9. Under vCenter Servers, click Add.

  10. Enter the Name for your vCenter server, and configure the Credentials using the vCenter credentials entered in the section above.
  11. Click Connect, select the Content Library, and click Done.
  12. (Optional) Configure the IPAM/DNS profiles. You can also configure this later, as described in Avi Controller Setup: IPAM and DNS below.
  13. Click Save to create the NSX Cloud integration for NSX ALB.

    Configure NSX Cloud

Configure NSX Cloud Networks

  1. Navigate to Infrastructure > Networks and select Cloud: with the name of your newly-created NSX Cloud integration.
  2. Configure the IP subnet and static IP pool for any control plane or data network that you will set in the management cluster configuration file. A configuration file sketch follows this procedure.

    Configure NSX Cloud Networks
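As a sketch of how those networks line up with the management cluster configuration file (variable names assumed from the TKG v1.5 configuration reference; network names and CIDRs below are placeholders):

```yaml
# Management cluster configuration excerpt (sketch). The networks referenced
# here must exist in the NSX Cloud with IP subnets and static pools configured.
AVI_CLOUD_NAME: nsx-cloud-01                    # the NSX Cloud created above
AVI_DATA_NETWORK: workload-vip-segment          # VIP network for workload services
AVI_DATA_NETWORK_CIDR: 192.168.60.0/24
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_NAME: mgmt-vip-segment
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_CIDR: 192.168.61.0/24
```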

Set Static Routes

  1. Navigate to Infrastructure > Route and select Cloud: with the name of your newly-created NSX Cloud integration.
  2. Configure the AVI Service Engine’s static IP route:

    • Gateway Subnet: 0.0.0.0/0
    • Next Hop: The gateway IP address of the network that your service engines route traffic to.

    Configure NSX Cloud Static Routes

Avi Controller Setup: IPAM and DNS

There are additional settings to configure in the Controller UI before you can use NSX Advanced Load Balancer.

  1. In the Controller UI, go to Applications > Templates > Profiles > IPAM/DNS Profiles, click Create and select IPAM Profile.

    • Enter a name for the profile, for example, tkg-ipam-profile.
    • Leave the Type set to Avi Vantage IPAM.
    • Leave Allocate IP in VRF unchecked.
    • Click Add Usable Network.
    • Select Default-Cloud.
    • For Usable Network, select the network where you want the virtual IPs to be allocated. If you are using a flat network topology, this can be the same network (management network) that you selected in the preceding procedure. For a different network topology, select a separate port group network for the virtual IPs.
    • (Optional) Click Add Usable Network to configure additional VIP networks.
    • Click Save.

    Configure IPAM and DNS Profile

  2. In the IPAM/DNS Profiles view, click Create again and select DNS Profile.

    Note: The DNS Profile is optional for using Service type LoadBalancer.

    • Enter a name for the profile, for example, tkg-dns-profile.
    • For Type, select AVI Vantage DNS.
    • Click Add DNS Service Domain and enter at least one Domain Name entry, for example tkg.nsxlb.vmware.com.
      • This should be from a DNS domain that you can manage.
      • This is more important for the L7 Ingress configurations for workload clusters, in which the Controller bases the logic to route traffic on hostnames.
      • Ingress resources that the Controller manages should use host names that belong to the domain name that you select here. See the example Ingress after this procedure.
      • This domain name is also used for Services of type LoadBalancer, but it is mostly relevant if you use AVI DNS VS as your Name Server.
      • Each Virtual Service will create an entry in the AVI DNS configuration. For example, service.namespace.tkg-lab.vmware.com.
    • Click Save.

    Create the DNS Profile

  3. Click the menu in the top left corner and select Infrastructure > Clouds.

  4. For Default-Cloud, click the edit icon and under IPAM Profile and DNS Profile, select the IPAM and DNS profiles that you created above.

    Add the IPAM and DNS Profiles to the Cloud

  5. Select the DataCenter tab.

    • Leave DHCP enabled. This is set per network.
    • Leave the IPv6… and Static Routes… checkboxes unchecked.
  6. Do not update the Network section yet.

  7. Save the cloud configuration.
  8. Go to Infrastructure > Networks and click the edit icon for the network you are using as the VIP network.
  9. Edit the network to add a pool of IPs to be used as a VIP.

    Edit the subnet and add an IP Address pool range within the boundaries, for example 192.168.14.210-192.168.14.219.
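To illustrate the DNS Service Domain guidance from step 2, an Ingress in a workload cluster would use a host name under the configured domain. The manifest below uses the standard networking.k8s.io/v1 schema; the ingress class name avi is an assumption about the AKO-managed IngressClass in your environment, and the host and service names are placeholders.

```yaml
# Example Ingress with a host under the DNS Service Domain configured above.
# "avi" as the ingress class is an assumption; adjust to your environment.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  namespace: default
spec:
  ingressClassName: avi                    # assumed AKO-managed ingress class
  rules:
  - host: my-app.tkg.nsxlb.vmware.com      # host under the configured domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-svc               # placeholder backend Service
            port:
              number: 80
```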

(Recommended) Avi Controller Setup: Virtual Service

If the SE Group that you want to use with the management cluster does not have a Virtual Service, it usually means that no service engines are running for that SE Group yet. In that case, the management cluster deployment process has to wait for a service engine to be created. The creation of a Service Engine is time-consuming because it requires a new VM to be deployed. In poor networking conditions, this can cause an internal timeout that prevents the management cluster deployment process from finishing successfully.

To prevent this issue, it is recommended to create a dummy Virtual Service through the Avi Controller UI to trigger the creation of a service engine before deploying the management cluster.

To verify that the SE group has a virtual service assigned to it, in the Controller UI, go to Infrastructure > Service Engine Group, and view the details of the SE group. If the SE group does not have a virtual service assigned to it, create a dummy virtual service:

  1. In the Controller UI, go to Applications > Virtual Service.

  2. Click Create Virtual Service and select Basic Setup.

  3. Configure the VIP:

    1. Select Auto-Allocate.
    2. Select the VIP Network and VIP Subnet configured for your IPAM profile in Avi Controller Setup: IPAM and DNS.
    3. Click Save.

Note: You can delete the dummy virtual service after the management cluster is deployed successfully.

For complete information about creating virtual services, beyond what is needed to create a dummy service, see Create a Virtual Service in the Avi Networks documentation.

Avi Controller Setup: Custom Certificate

The default NSX Advanced Load Balancer certificate does not contain the Controller’s IP address or FQDN in the Subject Alternate Names (SAN) field; however, valid SANs must be defined in the Avi Controller’s certificate. Consequently, you must create a custom certificate to provide when you deploy management clusters.

  1. In the Controller UI, click the menu in the top left corner and select Templates > Security > SSL/TLS Certificates, click Create, and select Controller Certificate.
  2. Enter the same name in the Name and Common Name text boxes.
  3. Select Self-Signed.
  4. For Subject Alternate Name (SAN), enter either the IP address or FQDN, or both, of the Controller VM.

    If only the IP address or FQDN is used, it must match the value that you use for Controller Host when you configure NSX Advanced Load Balancer settings during management cluster deployment, or specify in the AVI_CONTROLLER variable in the management cluster configuration file.

  5. Leave the other fields empty and click Save.
  6. In the menu in the top left corner, select Administration > Settings > Access Settings, and click the edit icon in System Access Settings.
  7. Delete all of the certificates in SSL/TLS Certificate.
  8. Use the SSL/TLS Certificate drop-down menu to add the custom certificate that you created above.
  9. In the menu in the top left corner, select Templates > Security > SSL/TLS Certificates, select the certificate that you created, and click the export icon.
  10. Copy the certificate contents.

    You will need the certificate contents when you deploy management clusters.
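As a sketch of where those contents go, the exported certificate is typically provided base64-encoded in the management cluster configuration file together with the Controller address and credentials. Variable names are assumed from the TKG v1.5 configuration reference; all values are placeholders.

```yaml
# Management cluster configuration excerpt (sketch): Avi Controller access.
AVI_CONTROLLER: avi-controller.example.com    # must match a SAN in the custom certificate
AVI_USERNAME: admin
AVI_PASSWORD: "<password>"
AVI_CA_DATA_B64: <base64-encoded certificate contents exported above>
```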

Avi Controller Setup: Essentials License

Finish setting up the Avi Controller by enabling the Essentials license, if required.

  1. In the Controller UI, go to Administration > Settings > Licensing. The Licensing screen appears.
  2. In the Licensing screen, click the crank wheel icon that is next to Licensing.
  3. In the list of license types, select Essentials License. Click Save, and then click Next.
  4. In the Licensing screen, verify that the license has been set to Essentials.
  5. To create a default gateway route for the traffic to flow from the service engines to the Pods and then back to the clients, go to Infrastructure > Routing > Static Route in the Controller UI, and click CREATE.
  6. In the Edit Static Route:1 screen, enter the following details:
    • Gateway Subnet: 0.0.0.0/0
    • Next Hop: The gateway IP address of the virtual IP network that you want to use, set as Usable Network above
  7. Click SAVE. After the Essentials Tier is enabled on a Controller that has not already been configured, the default Service Engine group switches to the Legacy (Active/Standby) HA mode, which is the only mode that the Essentials Tier supports.

What to Do Next

Your NSX Advanced Load Balancer deployment is ready for you to use with management clusters.
