If you use the Tanzu Kubernetes Grid Service provided by VMware vSphere with Tanzu, you can create a Service of type LoadBalancer on Tanzu Kubernetes clusters. For more information, see Tanzu Kubernetes Service Load Balancer Example.

Similarly, if you use VMware Tanzu Kubernetes Grid to deploy management clusters to Amazon EC2 or Microsoft Azure, Amazon EC2 or Azure load balancer instances are created. However, Tanzu Kubernetes Grid does not provide a load balancer for deployments on vSphere when the vSphere with Tanzu feature is not available.

To provide load balancing services to Tanzu Kubernetes Grid deployments on vSphere where vSphere with Tanzu is not available, VMware Tanzu Advanced Edition includes VMware NSX Advanced Load Balancer, an L4 and L7 load balancing solution. NSX Advanced Load Balancer includes a Kubernetes operator that integrates with the Kubernetes API to manage the lifecycle of load balancing and ingress resources for workloads.

NOTE: For general information about NSX Advanced Load Balancer components and concepts, see the VMware NSX Advanced Load Balancer (formerly Avi) documentation.

NSX Advanced Load Balancer Deployment Topology

NSX Advanced Load Balancer includes the following components:

  • Avi Kubernetes Operator (AKO) provides the load balancer functionality for Kubernetes clusters. It watches Kubernetes Ingress objects and Services of type LoadBalancer, and interacts with the Avi Controller APIs to create VirtualService objects (see the sample Service manifest after this list). In the context of Tanzu Kubernetes Grid Service, Avi Kubernetes Operator is integrated by using Service v2 API objects (GatewayClass and Gateway) for Layer 4 load balancers and does not watch Services of type LoadBalancer.
  • Service Engines (SE) implement the data plane in a VM form factor.
  • Avi Controller manages VirtualService objects and interacts with the vCenter Server infrastructure to manage the lifecycle of the service engines (SEs). It is the portal for viewing the health of VirtualServices and SEs and the associated analytics that NSX Advanced Load Balancer provides. It is also the point of control for monitoring and maintenance operations such as backup and restore.
  • SE Groups provide a unit of isolation in the form of a set of Service Engines, for example, a dedicated SE group for specific important namespaces. SE Groups also offer control over the flavor of SEs that are created (CPU, memory, and so on) and over the maximum number of SEs that are allowed.
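
For example, the following manifest is a minimal sketch of the kind of Service object that Avi Kubernetes Operator reconciles; the name, namespace, selector, and ports are illustrative only.

    apiVersion: v1
    kind: Service
    metadata:
      name: web-lb            # illustrative name
      namespace: team-a       # illustrative namespace
    spec:
      type: LoadBalancer      # AKO watches Services of this type
      selector:
        app: web              # must match the labels on your workload pods
      ports:
        - port: 80            # port exposed on the VIP
          targetPort: 8080    # port on the workload pods
          protocol: TCP

When AKO sees such a Service, it calls the Avi Controller APIs to create a corresponding VirtualService and allocate a VIP for it.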

You can deploy NSX Advanced Load Balancer in the topology illustrated in the figure below.

VMware NSX Advanced Load Balancer deployment topology

The topology diagram above shows the following configuration:

  • The Avi Controller is connected to the management port group.
  • The service engines are connected to the management port group and one or more VIP port groups. Service engines must run in a single-arm mode only.
  • Avi Kubernetes Operator is installed on the Tanzu Kubernetes clusters and should be able to route to the controller’s management IP.
  • Avi Kubernetes Operator is installed in NodePort mode only.

Recommendations

  • For setups with a small number of Tanzu Kubernetes clusters, it is recommended to use one dedicated SE group per cluster.
  • For setups with a large number of Tanzu Kubernetes clusters, it is recommended to share an SE group between multiple clusters.
  • All clusters can share a single VIP network or each cluster can have a dedicated VIP network.
  • For simplicity, in a lab environment all components can be connected to the same port group on which the Tanzu Kubernetes clusters are connected.

In the topology illustrated above, NSX Advanced Load Balancer provides the following networking, IPAM, isolation, tenancy, and Avi Kubernetes Operator functionalities.

Networking

  • SEs are deployed in single-arm mode in relation to the data path, with connectivity only to the VIP network.
  • The VIP network and the workload networks are routable to each other.
  • VIP and SE data interface IP addresses are allocated from the VIP network.
  • Multiple VIP networks can be specified where needed, for example in a large Tanzu Kubernetes Grid deployment.

IPAM

  • If DHCP is not available, IPAM for the VIP and SE interface IP addresses is managed by the Avi Controller.
  • The Avi Controller is configured with an IPAM profile that specifies the set of usable VIP networks.

Resource Isolation

  • Isolation across Tanzu Kubernetes clusters can be provided by using SE Groups. The vSphere administrator can create a dedicated SE Group and assign it to a set of Tanzu Kubernetes clusters that need isolation.
  • SE Groups offer the ability to control the resource characteristics of the SEs created by the Avi Controller, for example, CPU, memory, and so on.

Tenancy

With NSX Advanced Load Balancer Enterprise edition, tenancy is configured as follows:

  • Each Tanzu Kubernetes cluster is modeled as a tenant in the Avi Controller.
  • User credentials provided to Avi Kubernetes Operator in the Tanzu Kubernetes cluster provide limited access to only that tenant.

Avi Kubernetes Operator

  • Avi Kubernetes Operator is installed on Tanzu Kubernetes clusters. It is configured with the Avi Controller IP address and the user credentials for the service account that Avi Kubernetes Operator uses to communicate with the Avi Controller. This user account has limited access.

Install Avi Controller on vCenter Server

You install Avi Controller on vCenter Server by downloading and deploying an OVA template. These instructions provide guidance specific to deploying Avi Controller for Tanzu Advanced. For full details of how to deploy Avi Controller, see Installing Avi Vantage for VMware vCenter in the Avi Networks documentation.

  1. Access the Avi Networks portal from https://www.vmware.com/go/get-tanzu-adv.
  2. Log in to the NSX-ALB Customer Portal with the credentials that you received from VMware.
  3. In the customer portal, go to Software > 20.1.2.
  4. Scroll down to VMware, and click the download button for Controller OVA.
  5. Log in to the vSphere Client.
  6. In the vSphere Client, right-click an object in the vCenter Server inventory and select Deploy OVF Template.
  7. Select Local file, click the button to upload files, and navigate to the downloaded OVA file on your local machine.
  8. Follow the installer prompts to deploy a VM from the OVA template.

    Select the following options in the OVA deployment wizard:

    • For the disk format, select Thick Provision Lazy Zeroed.
    • For the network mapping, select the management port group. This port group will be used by the Avi Controller to communicate with vCenter Server.
    • If DHCP is available, you can use it for controller management.
    • Specify the management IP address and default gateway. If you are using DHCP, leave these fields empty.
    • You can leave the key field in the template empty.

    It might take a few minutes for the deployment to finish.

  9. When the OVA deployment finishes, power on the resulting VM.
  10. In a browser, go to the IP address of the new Avi Controller VM.

Set Up Avi Controller

For full details of how to set up Avi Controller, see Performing the Avi Controller Initial Setup in the Avi Controller documentation.

This section provides some information about configurations that have been validated on Tanzu Kubernetes Grid, as well as some tips that are not included in the Avi Controller documentation.

  • Create Administrator Account

    • Select your Administrator account credentials.
    • Email is not required.
  • Configure System Settings

    • Set the DNS and NTP server information.
    • Set the Backup passphrase.
  • Email/SMTP is not required.

  • Select VMware Orchestrator integration.
  • Enter the vCenter Server host name or IP address and credentials.
  • Select Write permissions so that the Avi Controller operates in Write Access Mode.

    • This allows Avi Controller to create and manage SE VMs.
  • SDN Integration is not required (the NSX integration here is for NSX-v).

  • Select the vSphere Datacenter.
  • For System IP Address Management, leave DHCP selected (the default).
  • For Virtual Service Placement Settings: No Static Routes, leave both check boxes unchecked.
  • Select the management network.
    • This network is used for the management NIC in the SEs.
    • Select the same network as you used for the controller.
    • Select DHCP.
  • For Support Multiple Tenants, select No.

Create IPAM and DNS Profiles and Add them to the Cloud

You must configure some additional settings in the Avi Controller UI before you can install Avi Kubernetes Operator.

  1. In the Avi Controller UI, go to Templates > Profiles > IPAM/DNS Profiles.
  2. Create an IPAM Profile to define where VIPs are allocated:

    • Leave Allocate IP in VRF unchecked.
    • Click Add Usable Network.
    • Select Default-Cloud.
    • Select the VIP network.
    • (Optional) Click Add Usable Network again to configure additional VIP networks.

    Configure IPAM and DNS Profile

  3. Create the DNS Profile.

    • For Type, select AVI Vantage DNS.
    • Enter at least one Domain Name entry.
      • This should be from a DNS domain that you can manage.
      • This is more important for the L7 Ingress configurations, in which Avi Controller bases the logic to route traffic on hostnames.
      • Ingress resources that Avi Controller manages should use host names that belong to the domain name that you select here.
      • This domain name is also used for Services of type LoadBalancer, but it is mostly relevant if you use AVI DNS VS as your Name Server.
      • Each Virtual Service creates an entry in the AVI DNS configuration, for example, service.namespace.tkg-lab.vmware.com; see the lookup sketch after this step.

    Create the DNS Profile
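
    As a rough sketch of how these DNS entries can be resolved, assume a Service named web in namespace team-a, the domain tkg-lab.vmware.com, and an Avi DNS virtual service listening on 10.79.172.50 (all illustrative values). A lookup against the Avi DNS VS would then look like this:

    # Query the Avi DNS virtual service directly for the FQDN created for the VirtualService
    nslookup web.team-a.tkg-lab.vmware.com 10.79.172.50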

  4. Go to Infrastructure > Cloud.

  5. Edit Default-Cloud and assign the IPAM and DNS profiles that you created above.

    Add the IPAM and DNS Profiles to the Cloud

  6. Update the DataCenter settings.

    • Leave DHCP enabled. This is set per network.
    • Leave the IPv6 and Static Routes check boxes unchecked.
  7. Do not update the Network section yet.

  8. Save the cloud configuration.
  9. Go to Infrastructure > Networks and search for the network you are using as the VIP network.
  10. Edit the network to add a pool of IP addresses to be used as VIPs.

    Edit the subnet and add an IP address pool range within the subnet boundaries, for example 192.168.14.210-192.168.14.219.

Install Avi Kubernetes Operator

You use Helm to install Avi Kubernetes Operator on Tanzu Kubernetes clusters. Full instructions for installing Avi Kubernetes Operator are found in the Avi Kubernetes Operator Helm chart GitHub pages.

  1. Set the context of kubectl to the context of the Tanzu Kubernetes cluster that you want to integrate with AVI.

    For example, if your cluster is named my-cluster, run the following command.

    kubectl config use-context my-cluster-admin@my-cluster
    
  2. Create a namespace on the cluster.

    kubectl create ns avi-system
    
  3. Run the following command to configure the Helm chart repository.

    These steps install Avi Kubernetes Operator v1.2.1.

    helm repo add ako https://avinetworks.github.io/avi-helm-charts/charts/stable/ako 
    
  4. Run the following command to obtain the values.yaml base file.

    curl -JOL https://raw.githubusercontent.com/avinetworks/avi-helm-charts/master/charts/stable/ako/values.yaml
    
  5. Edit the following values in values.yaml.

    For descriptions of each property, see avi-helm-charts.

    AKOSettings:
      disableStaticRouteSync: "true" # Since we will be using NodePort mode, static route sync is not required.
      clusterName: "cluster1" # A unique identifier for the kubernetes cluster, that helps distinguish the objects for this cluster in the avi controller.
      cniPlugin: "" # Set the string if your CNI is calico or openshift. enum: calico|canal|flannel|openshift

    NetworkSettings:
      subnetIP: "10.79.172.0" # Subnet IP of the vip network
      subnetPrefix: "22" # Subnet Prefix of the vip network
      networkName: "vxw-dvs-26-virtualwire-7-sid-2210006-wdc-02-vc21-avi-mgmt" # Network Name of the vip network. Same as configured in IPAM

    L7Settings:
      serviceType: NodePort #enum NodePort|ClusterIP
      shardVSSize: "SMALL" # Use this to control the layer 7 VS numbers. This applies to both secure/insecure VSes but does not apply for passthrough. ENUMs: LARGE, MEDIUM, SMALL

    The ENUMs LARGE, MEDIUM, and SMALL map to 8, 4, and 2 shard virtual services respectively, across which all Ingresses are shared. Services of type LoadBalancer map to a dedicated virtual service with a dedicated VIP.

    ControllerSettings:
      serviceEngineGroupName: "Default-Group" # Name of the ServiceEngine Group.
      controllerVersion: "18.2.10" # The controller API version
      cloudName: "Default-Cloud" # The configured cloud name on the Avi

    When installing Avi Kubernetes Operator on other clusters, you can create a new SE group and refer to it in the ControllerSettings. Multiple clusters can share an SE group or can use it as a dedicated resource.
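
    For example, the following values.yaml fragment is a sketch of how a second cluster could reference its own dedicated SE group; the cluster name and SE group name are illustrative, and the SE group must already exist in the Avi Controller.

    AKOSettings:
      clusterName: "cluster2" # unique identifier for the second cluster
    ControllerSettings:
      serviceEngineGroupName: "cluster2-se-group" # dedicated SE group created in the Avi Controller
      controllerVersion: "18.2.10"
      cloudName: "Default-Cloud"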

  6. Deploy the Avi Kubernetes Operator helm chart.

    helm install ako/ako --generate-name --version 1.2.1 -f values.yaml --set ControllerSettings.controllerIP=10.79.174.254 --set avicredentials.username=admin --set avicredentials.password=VMware1! --namespace=avi-system
    
  7. Check that Avi Kubernetes Operator is running.

    helm list -n avi-system
    
    kubectl get all -n avi-system 
    
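    To verify the end-to-end integration, you can create a test Service of type LoadBalancer and confirm that it receives an external IP address from the VIP pool that you configured earlier. The deployment name and image below are illustrative.

    kubectl create deployment nginx-test --image=nginx
    kubectl expose deployment nginx-test --port=80 --type=LoadBalancer
    kubectl get service nginx-test

    The EXTERNAL-IP column should show an address from the VIP range, and a corresponding VirtualService should appear in the Avi Controller UI.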