If you use VMware Tanzu Kubernetes Grid to deploy management clusters to Amazon EC2 or Microsoft Azure, the corresponding Amazon EC2 or Azure load balancer instances are created for you. To provide load balancing services to deployments on vSphere, Tanzu Kubernetes Grid includes VMware NSX Advanced Load Balancer Essentials Edition.

NSX Advanced Load Balancer, formerly known as Avi Vantage, provides an L4 load balancing solution. NSX Advanced Load Balancer includes a Kubernetes operator that integrates with the Kubernetes API to manage the lifecycle of load balancing and ingress resources for workloads.

NSX Advanced Load Balancer Deployment Topology

NSX Advanced Load Balancer includes the following components:

  • Avi Kubernetes Operator (AKO) provides the load balancer functionality for Kubernetes clusters. It listens to Kubernetes Ingress and Service type LoadBalancer objects (see the example after this list) and interacts with the Avi Controller APIs to create corresponding VirtualService objects.
  • Service Engines (SE) implement the data plane in a VM form factor.
  • Avi Controller manages VirtualService objects and interacts with the vCenter Server infrastructure to manage the lifecycle of the service engines (SEs). It is the portal for viewing the health of VirtualServices and SEs and the associated analytics that NSX Advanced Load Balancer provides. It is also the point of control for monitoring and maintenance operations such as backup and restore.
  • SE Groups provide a unit of isolation in the form of a set of Service Engines, for example a dedicated SE group for specific important namespaces. An SE group controls the flavor of the SEs that are created (CPU, memory, and so on) and sets a limit on the maximum number of SEs that are permitted.
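
For reference, a minimal Service of type LoadBalancer that AKO acts on might look like the following sketch. The Service name, labels, and ports are placeholders, not values from this deployment.

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app-lb              # placeholder name
      namespace: default
    spec:
      type: LoadBalancer           # AKO watches Services of this type
      selector:
        app: my-app                # placeholder label selector for the backend Pods
      ports:
      - port: 80                   # port exposed on the VIP
        targetPort: 8080           # container port that receives the traffic

When such a Service is created in a workload cluster, AKO calls the Avi Controller APIs to create a corresponding VirtualService, and the external IP address of the Service is allocated from the VIP network.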

You can deploy NSX Advanced Load Balancer in the topology illustrated in the figure below.

VMware NSX Advanced Load Balancer deployment topology

The topology diagram above shows the following configuration:

  • Avi controller is connected to the management port group.
  • The service engines are connected to the management port group and one or more VIP port groups. Service engines run in dual-arm mode.
  • Avi Kubernetes Operator is installed on the Tanzu Kubernetes clusters and should be able to route to the controller’s management IP.
  • Avi Kubernetes Operator is installed in NodePort mode only.

Recommendations

  • For setups with a small number of Tanzu Kubernetes clusters that each have a large number of nodes, it is recommended to use one dedicated SE group per cluster.
  • For setups with a large number of Tanzu Kubernetes clusters that each have a small number of nodes, it is recommended to share an SE group between multiple clusters.
  • An SE group can be shared by any number of workload clusters as long as the sum of the number of distinct cluster node networks and the number of distinct cluster VIP networks does not exceed 8.
  • All clusters can share a single VIP network or each cluster can have a dedicated VIP network.
  • Clusters that share a VIP network should be grouped by labels, and a dedicated AKODeploymentConfig for that group should be created in the management cluster (see the sketch after this list).
  • For simplicity, in a lab environment all components can be connected to the same port group on which the Tanzu Kubernetes clusters are connected.
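
The following is a sketch of an AKODeploymentConfig that selects a group of clusters by label. The label, network name, CIDR, and controller address are placeholders that you replace with your own values; the credential and certificate references shown are the defaults that Tanzu Kubernetes Grid creates in the management cluster.

    apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
    kind: AKODeploymentConfig
    metadata:
      name: ako-for-team-a                 # placeholder name
    spec:
      clusterSelector:
        matchLabels:
          team: team-a                     # placeholder label; apply it to the matching workload clusters
      cloudName: Default-Cloud
      serviceEngineGroup: Default-Group
      controller: AVI-CONTROLLER-IP-OR-FQDN
      adminCredentialRef:
        name: avi-controller-credentials
        namespace: tkg-system-networking
      certificateAuthorityRef:
        name: avi-controller-ca
        namespace: tkg-system-networking
      dataNetwork:
        name: VIP-NETWORK-NAME             # VIP network shared by this group of clusters
        cidr: VIP-NETWORK-CIDR

After you create the object in the management cluster, label the matching workload clusters from the management cluster context, for example with kubectl label cluster WORKLOAD-CLUSTER-NAME team=team-a.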

In the topology illustrated above, NSX Advanced Load Balancer provides the following networking, IPAM, isolation, tenancy, and Avi Kubernetes Operator functionalities.

Networking

  • SEs are deployed in a dual-arm mode in relation to the data path, with connectivity both to the VIP network and to the workload cluster node network.
  • The VIP network and the workload networks must be discoverable in the same vCenter Cloud so that the Avi Controller can create SEs attached to both networks.
  • VIP and SE data interface IP addresses are allocated from the VIP network.
  • There can be only one VIP network per workload cluster. However, different VIP networks can be assigned to different workload clusters, for example in a large Tanzu Kubernetes Grid deployment.

IPAM

  • If DHCP is not available, IPAM for the VIP and SE data interface IP addresses is managed by the Avi Controller.
  • The IPAM profile in Avi Controller is configured with a Cloud and a set of Usable Networks.
  • If DHCP is not configured for the VIP network, at least one static pool must be created for the target network.

Resource Isolation

  • Dataplane isolation across Tanzu Kubernetes clusters can be provided by using SE Groups. The vSphere administrator can create a dedicated SE Group and assign it to the set of Tanzu Kubernetes clusters that need isolation.
  • SE Groups offer the ability to control the resource characteristics of the SEs created by the Avi Controller, for example, CPU, memory, and so on.

Tenancy

With NSX Advanced Load Balancer Essentials, all workload cluster users are associated with the single admin tenant.

Avi Kubernetes Operator

Avi Kubernetes Operator is installed on Tanzu Kubernetes clusters. It is configured with the Avi Controller IP address and the user credentials that Avi Kubernetes Operator uses to communicate with the Avi Controller. A dedicated user per workload cluster is created in the admin tenant with a customized role. This role has limited access, as defined in https://github.com/avinetworks/avi-helm-charts/blob/master/docs/AKO/roles/ako-essential.json.
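
If you need to confirm which credentials AKO is using on a workload cluster, one way (a sketch, assuming the default avi-secret in the avi-system namespace, as used later in this document) is:

    # Show the AKO secret; the username, password, and certificateAuthorityData values are base64 encoded
    kubectl get secret avi-secret -n avi-system -o yaml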

Install Avi Controller on vCenter Server

You install Avi Controller on vCenter Server by downloading and deploying an OVA template. These instructions provide guidance specific to deploying Avi Controller for Tanzu Kubernetes Grid.

  1. Make sure your vCenter environment fulfills the prerequisites described in Installing Avi Vantage for VMware vCenter in the Avi Networks documentation.
  2. Access the Avi Networks portal from the Tanzu Kubernetes Grid downloads page.
  3. In the VMware NSX Advanced Load Balancer row, click Go to Downloads.
  4. Click Download Now to go to the NSX Advanced Load Balancer Customer Portal.
  5. In the customer portal, go to Software > 20.1.3.
  6. Scroll down to VMware, and click the download button for Controller OVA.
  7. Log in to the vSphere Client.
  8. In the vSphere Client, right-click an object in the vCenter Server inventory and select Deploy OVF Template.
  9. Select Local File, click the button to upload files, and navigate to the downloaded OVA file on your local machine.
  10. Follow the installer prompts to deploy a VM from the OVA template, referring to the Deploying Avi Controller OVA instructions in the Avi Networks documentation.

    Select the following options in the OVA deployment wizard:

    • Provide a name for the Controller VM, for example nsx-adv-lb-controller, and select the datacenter in which to deploy it.
    • Select the cluster in which to deploy the Controller VM.
    • Review the OVA details, then select a datastore for the VM files. For the disk format, select Thick Provision Lazy Zeroed.
    • For the network mapping, select a port group for the Controller to use to communicate with vCenter Server. The network must have access to the management network on which vCenter Server is running.
    • If DHCP is available, you can use it for controller management.
    • Specify the management IP address, subnet mask, and default gateway. If you use DHCP, you can leave these fields empty.
    • Leave the key field in the template empty.
    • On the final page of the installer, click Finish to start the deployment.

    It takes some time for the deployment to finish.
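
    If you prefer to script the OVA deployment instead of using the wizard, a tool such as govc can deploy the same template. This is a minimal sketch, assuming govc is installed and the GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD environment variables point to your vCenter Server; the datastore, resource pool, and OVA file name are placeholders.

      # Deploy the downloaded Controller OVA; network mapping and management IP settings
      # can be supplied through an -options JSON file or configured after deployment.
      govc import.ova -name=nsx-adv-lb-controller -ds=DATASTORE-NAME -pool=RESOURCE-POOL-PATH controller.ova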

  11. When the OVA deployment finishes, power on the resulting VM.

    After you power on the VM, it takes some time for it to be ready to use.

  12. In vCenter, create a vSphere account for the Avi controller, with permissions as described in VMware User Role for Avi Vantage in the Avi Networks documentation.

Avi Controller Setup: Basics

For full details of how to set up the Controller, see Performing the Avi Controller Initial Setup in the Avi Controller documentation.

This section provides some information about configuration that has been validated on Tanzu Kubernetes Grid, as well as some tips that are not included in the Avi Controller documentation.

NOTE: If you are using an existing Avi Controller, you must make sure that the VIP Network that is to be used during Tanzu Kubernetes Grid management cluster deployment has a unique name across all AVI Clouds.

  1. In a browser, go to the IP address of the Controller VM.
  2. Configure a password to create an admin account.
  3. Optionally set DNS Resolvers and NTP server information, set the backup passphrase, and click Next.

    Setting the backup passphrase is mandatory.

  4. Select None to skip SMTP configuration, and click Next.

  5. For Orchestrator Integration, select VMware.
  6. Enter the vCenter Server credentials and the IP address or FQDN of the vCenter Server instance.
  7. For Permissions, select Write.

    This allows the Controller to create and manage SE VMs.

  8. For SDN Integration select None and click Next.

  9. Select the vSphere Datacenter.
  10. For System IP Address Management, select DHCP.
  11. For Virtual Service Placement Settings, leave both check boxes unchecked and click Next.
  12. Select a distributed virtual switch to use as the management network, select DHCP and click Next.

    • The switch is used for the management network NIC in the SEs.
    • Select the same network as you used when you deployed the controller.
  13. For Support Multiple Tenants, select No.

Avi Controller Setup: IPAM and DNS

There are additional settings to configure in the Controller UI before you can use NSX Advanced Load Balancer.

  1. In the Controller UI, go to Applications > Templates > Profiles > IPAM/DNS Profiles, click Create and select IPAM Profile.

    • Enter a name for the profile, for example, tkg-ipam-profile.
    • Leave the Type set to Avi Vantage IPAM.
    • Leave Allocate IP in VRF unchecked.
    • Click Add Usable Network.
    • Select Default-Cloud.
    • For Usable Network, select the distributed virtual switch that you selected in the preceding procedure.
    • (Optional) Click Add Usable Network to configure additional VIP networks.
    • Click Save.

    Configure IPAM and DNS Profile

  2. In the IPAM/DNS Profiles view, click Create again and select DNS Profile.

    NOTE: The DNS Profile is optional for using Service type LoadBalancer.

    • Enter a name for the profile, for example, tkg-dns-profile.
    • For Type, select Avi Vantage DNS.
    • Click Add DNS Service Domain and enter at least one Domain Name entry, for example tkg.nsxlb.vmware.com.
      • This should be from a DNS domain that you can manage.
      • This is more important for L7 Ingress configurations, in which the Controller routes traffic based on hostnames.
      • Ingress resources that the Controller manages should use host names that belong to the domain name that you select here.
      • This domain name is also used for Services of type LoadBalancer, but it is mostly relevant if you use the Avi DNS Virtual Service as your name server.
      • Each Virtual Service will create an entry in the AVI DNS configuration. For example, service.namespace.tkg-lab.vmware.com.
    • Click Save.

    Create the DNS Profile

  3. Click the menu in the top left corner and select Infrastructure > Clouds.

  4. For Default-Cloud, click the edit icon and under IPAM Profile and DNS Profile, select the IPAM and DNS profiles that you created above.

    Add the IPAM and DNS Profiles to the Cloud

  5. Select the DataCenter tab.

    • Leave DHCP enabled. This is set per network.
    • Leave the IPv6... and Static Routes... check boxes unchecked.
  6. Do not update the Network section yet.

  7. Save the cloud configuration.
  8. Go to Infrastructure > Networks and click the edit icon for the network you are using as the VIP network.
  9. Edit the network to add a pool of IPs to be used as a VIP.

    Edit the subnet and add an IP Address pool range within the boundaries, for example 192.168.14.210-192.168.14.219.

Avi Controller Setup: Custom Certificate

The default NSX Advanced Load Balancer certificate does not contain the Controller's IP address or FQDN in its Subject Alternate Names (SAN), but valid SANs must be defined in the Avi Controller's certificate. Consequently, you must create a custom certificate to provide when you deploy management clusters.

  1. In the Controller UI, click the menu in the top left corner and select Templates > Security > SSL/TLS Certificates, click Create, and select Controller Certificate.
  2. Enter the same name in the Name and Common Name text boxes.
  3. Select Self-Signed.
  4. For Subject Alternate Name (SAN), enter either the IP address or FQDN, or both, of the Controller VM.

    If only the IP address or FQDN is used, it must match the value that you use for Controller Host when you configure NSX Advanced Load Balancer settings during management cluster deployment, or specify in the AVI_CONTROLLER variable in the management cluster configuration file.

  5. Leave the other fields empty and click Save.
  6. In the menu in the top left corner, select Administration > Settings > Access Settings, and click the edit icon in System Access Settings.
  7. Delete all of the certificates in SSL/TLS Certificate.
  8. Use the SSL/TLS Certificate drop-down menu to add the custom certificate that you created above.
  9. In the menu in the top left corner, select Templates > Security > SSL/TLS Certificates, select the certificate that you created, and click the export icon.
  10. Copy the certificate contents.

    You will need the certificate contents when you deploy management clusters.
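
    When you deploy the management cluster from a configuration file, the exported certificate is typically provided in base64 encoded form together with the other NSX Advanced Load Balancer settings. The variable names below come from the management cluster configuration; the values shown are placeholders for this environment.

      AVI_ENABLE: "true"
      AVI_CONTROLLER: AVI-CONTROLLER-IP-OR-FQDN
      AVI_USERNAME: admin
      AVI_PASSWORD: "ADMIN-PASSWORD"
      AVI_CA_DATA_B64: BASE64-ENCODED-CERTIFICATE-CONTENTS
      AVI_CLOUD_NAME: Default-Cloud
      AVI_SERVICE_ENGINE_GROUP: Default-Group
      AVI_DATA_NETWORK: VIP-NETWORK-NAME
      AVI_DATA_NETWORK_CIDR: VIP-NETWORK-CIDR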

Avi Controller Setup: Essentials License

Finish setting up the Avi Controller by enabling the Essentials license, if required.

  1. In the Controller UI, go to Administration > Settings > Licensing. The Licensing screen appears.
  2. In the Licensing screen, click the gear icon next to Licensing.
  3. In the list of license types, select Essentials License. Click Save, and then click Next.
  4. In the Licensing screen, verify that the license has been set to Essentials.
  5. To create a default gateway route for the traffic to flow from the service engines to the Pods and then back to the clients, go to Infrastructure > Routing > Static Route in the Controller UI, and click CREATE.
  6. In the Edit Static Route:1 screen, enter the following details:
    • Gateway Subnet: 0.0.0.0/0
    • Next Hop: The gateway IP address of the virtual IP network that you want to use
  7. Click SAVE.

    After the Essentials Tier is enabled on a Controller that has not already been configured, the default Service Engine group switches to the Legacy HA (Active/Standby) mode, which is the only mode that the Essentials Tier supports.

Update the Avi Certificate

Tanzu Kubernetes Grid authenticates to the Avi Controller by using certificates. When these certificates near expiration, update them by using the Tanzu CLI. You can update the certificates in an existing workload cluster, or in a management cluster for use by new workload clusters. Newly-created workload clusters obtain their Avi certificate from their management cluster.
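
The procedures below expect the new certificate in base64 encoded form. Assuming the new Controller certificate is in a local PEM file (the file name here is a placeholder), you can encode it with a command such as:

    base64 -w 0 new-avi-ca.crt        # on macOS, use: base64 -i new-avi-ca.crt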

Update the Avi Certificate in an Existing Workload Cluster

Updating the Avi certificate in an existing workload cluster is performed through the workload cluster context in the Tanzu CLI. Before performing this task, ensure that you have the workload cluster context and the new base64 encoded Avi certificate details. For more information on obtaining the workload cluster context, see Retrieve Tanzu Kubernetes Cluster kubeconfig.

  1. In the Tanzu CLI, run the following command to switch the context to the workload cluster:

    kubectl config use-context WORKLOAD-CLUSTER-CONTEXT
    
  2. Run the following command to update the avi-secret value under avi-system namespace:

    kubectl edit secret avi-secret -n avi-system
    

    Within your default text editor that pops up, update the certificateAuthorityData field with your new base64 encoded certificate data.
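
    For reference, the edited secret might look like the following sketch; only the certificateAuthorityData value changes, and the base64 strings are placeholders.

      apiVersion: v1
      kind: Secret
      metadata:
        name: avi-secret
        namespace: avi-system
      type: Opaque
      data:
        certificateAuthorityData: NEW-BASE64-ENCODED-CERTIFICATE   # replace with the new value
        username: EXISTING-BASE64-USERNAME                         # leave unchanged
        password: EXISTING-BASE64-PASSWORD                         # leave unchanged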

  3. Save the changes.

  4. Run the following command to obtain the number of Avi Kubernetes Operator (AKO) pods in your environment:

    kubectl get pod -n avi-system
    

    Record the number in the AKO pod name. Pod numbering starts from 0, so a single AKO pod in the environment is named ako-0.
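
    For example, the output in a default installation typically resembles the following (the status and age will differ in your environment):

      NAME    READY   STATUS    RESTARTS   AGE
      ako-0   1/1     Running   0          5d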

  5. Run the following command to restart the AKO pods:

    kubectl delete pod ako-NUMBER -n avi-system
    

    Where NUMBER is the number in the AKO pod name that you recorded in the previous step. Deleting the pod restarts AKO so that it picks up the updated certificate.

Update the Avi Certificate in a Management Cluster

Workload clusters obtain their Avi certificates from their management cluster. This procedure updates the Avi certificate in a management cluster. The management cluster then includes the updated certificate in any new workload clusters that it creates.

Before performing this task, ensure that you have the management cluster context and the new base64 encoded Avi certificate details. For more information on obtaining the management cluster context, see Retrieve Tanzu Kubernetes Cluster kubeconfig.

  1. In the Tanzu CLI, run the following command to switch the context to the management cluster:

    kubectl config use-context MANAGEMENT-CLUSTER-CONTEXT
    
  2. Run the following command to update the avi-controller-ca value under tkg-system-networking namespace:

    kubectl edit secret avi-controller-ca -n tkg-system-networking
    

    Within your default text editor that pops up, update the certificateAuthorityData field with your new base64 encoded certificate data.

  3. Save the changes.

  4. Run the following command to obtain the name of the AKO Operator controller manager pod:

    kubectl get pod -n tkg-system-networking
    

    Note down the random string at the end of the ako-operator-controller-manager pod name. You will need this string when restarting the AKO Operator pod.

  5. Run the following command to restart the AKO Operator pod:

    kubectl delete po ako-operator-controller-manager-RANDOM-STRING -n tkg-system-networking
    

    Where RANDOM-STRING is the string that you noted down in the previous step.

Create an Additional Service Engine Group for NSX Advanced Load Balancer

The NSX Advanced Load Balancer Essentials Tier has limited high-availability (HA) capabilities. To distribute the load balancer services to different service engine groups (SEG), create additional SEGs on the Avi Controller, and create a new AKO configuration object (akodeploymentconfig object) in a YAML file in the management cluster. Alternatively, you can update an existing akodeploymentconfig object in the management cluster with the name of the new SEG.

  1. In the Avi Controller UI, go to Infrastructure > Service Engine Groups, and click CREATE to create the new SEG.

    Create Service Engine Group

  2. Create the service engine group as follows:

    Create Service Engine Group - Basic

  3. If you want to create a new akodeploymentconfig object for the new SEG, do the following steps on the command terminal:

    1. Run the following command to open the text editor.

      vi FILE_NAME
      

      Where FILE_NAME is the name of the akodeploymentconfig YAML file that you want to create.

    2. Add the AKO configuration details in the file. The following is an example:

        apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
        kind: AKODeploymentConfig
        metadata:
          name: install-ako-for-all
        spec:
          adminCredentialRef:
            name: avi-controller-credentials
            namespace: tkg-system-networking
          certificateAuthorityRef:
            name: avi-controller-ca
            namespace: tkg-system-networking
          cloudName: Default-Cloud
          controller: 10.184.74.162
          dataNetwork:
            cidr: 10.184.64.0/20
            name: VM Network
          extraConfigs:
            cniPlugin: antrea
            disableStaticRouteSync: true
            image:
              pullPolicy: IfNotPresent
              repository: projects.registry.vmware.com/tkg/ako
              version: v1.4.3_vmware.1
            ingress:
              defaultIngressController: false
              disableIngressClass: true
          serviceEngineGroup: SEG-1
      
    3. Save the file, and exit the text editor.

    4. Run the following command to apply the new configuration:

      kubectl apply -f FILE_NAME
      

      Where FILE_NAME is the name of the YAML file that you created.

  4. If you want to update an existing akodeploymentconfig object for the new SEG, do the following steps on the command terminal:

    1. Run the following command to open the akodeploymentconfig object:

      kubectl edit adc ADC_NAME
      

      Where ADC_NAME is the name of the akodeploymentconfig object that you want to update.

    2. Update the SEG name in the text editor that pops up.
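
      For example, the line to change is the serviceEngineGroup field, where SEG-1 is a placeholder for the name of the new SE group:

        serviceEngineGroup: SEG-1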

    3. Save the file, and exit the text editor.

  5. Run the following command to verify that the new configuration is present in the management cluster:

    kubectl get adc ADC_NAME -o yaml
    

    Where ADC_NAME is the name of the akodeploymentconfig object in the YAML file.

    In the output, verify that the adc.spec.serviceEngineGroup field displays the name of the new service engine group.

  6. Switch the context to the workload cluster by using the kubectl utility.

  7. Run the following command to view the AKO deployment information:

    kubectl get cm avi-k8s-config -n avi-system -o yaml
    

    In the output, verify that the service engine group has been updated.

  8. Run the following command to verify that AKO is running:

    kubectl get pod -n avi-system
    

What to Do Next

Your NSX Advanced Load Balancer deployment is ready for you to use with management clusters.
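
For example, if you deploy from a configuration file that includes the AVI_* variables shown earlier in this topic, the next step is typically a command along these lines, where the file path is a placeholder:

    tanzu management-cluster create --file ~/.config/tanzu/tkg/clusterconfigs/mgmt-cluster-config.yaml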
