If you use the Tanzu Kubernetes Grid Service provided by VMware vSphere with Tanzu, you can create a Service of type LoadBalancer
on Tanzu Kubernetes clusters. For more information, see Tanzu Kubernetes Service Load Balancer Example.
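For reference, a minimal manifest for a Service of type LoadBalancer looks like the following; the Service name, selector, and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb # Illustrative name
spec:
  type: LoadBalancer # Requests a VIP from the platform's load balancer
  selector:
    app: nginx # Pods backing this Service
  ports:
  - port: 80 # Port exposed on the VIP
    targetPort: 80 # Container port that receives the traffic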
Similarly, if you use VMware Tanzu Kubernetes Grid to deploy management clusters to Amazon EC2 or Microsoft Azure, the corresponding Amazon EC2 or Azure load balancer instances are created. However, Tanzu Kubernetes Grid does not provide a load balancer for deployments on vSphere when the vSphere with Tanzu feature is not available.
To provide load balancing services to Tanzu Kubernetes Grid deployments on vSphere where vSphere with Tanzu is not available, VMware Tanzu Advanced Edition includes VMware NSX Advanced Load Balancer, an L4+L7 load balancing solution. NSX Advanced Load Balancer includes a Kubernetes operator that integrates with the Kubernetes API to manage the lifecycle of load balancing and ingress resources for workloads.
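As a sketch of the L7 side, an Ingress such as the following is realized by the operator as load balancing configuration on NSX Advanced Load Balancer; the hostname and backend Service are illustrative, and on Kubernetes 1.19 and later the Ingress API group is networking.k8s.io/v1.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress # Illustrative name
spec:
  rules:
  - host: my-app.tkg-lab.vmware.com # Hostname served through the load balancer
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app # Illustrative backend Service
          servicePort: 80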
NOTE: For general information about NSX Advanced Load Balancer components and concepts, see the VMware NSX Advanced Load Balancer (formerly Avi) documentation.
NSX Advanced Load Balancer includes the following components:
Avi Kubernetes Operator: watches Kubernetes Service type LoadBalancer objects and interacts with the Avi Controller APIs to create VirtualService objects. In the context of Tanzu Kubernetes Grid Service, Avi Kubernetes Operator is integrated using Service v2 API objects (GatewayClass and Gateway) for Layer-4 load balancers and will not look at Service type LoadBalancer objects (see the sketch after this list).
Avi Controller: manages VirtualService objects and interacts with the vCenter Server infrastructure to manage the lifecycle of the service engines (SEs). It is the portal for viewing the health of VirtualServices and SEs and the associated analytics that NSX Advanced Load Balancer provides. It is also the point of control for monitoring and maintenance operations such as backup and restore.
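For illustration, the following is a minimal sketch of those Gateway objects, assuming the early v1alpha1 networking.x-k8s.io API group used by the Service v2 APIs; the GatewayClass name, controller string, and label-based Service selection follow Avi Kubernetes Operator conventions, and all names are illustrative.
apiVersion: networking.x-k8s.io/v1alpha1
kind: GatewayClass
metadata:
  name: avi-lb
spec:
  controller: ako.vmware.com/avi-lb # Marks this class as handled by Avi Kubernetes Operator
---
apiVersion: networking.x-k8s.io/v1alpha1
kind: Gateway
metadata:
  name: my-gateway
  namespace: default
spec:
  gatewayClassName: avi-lb
  listeners:
  - protocol: TCP
    port: 80 # Layer-4 listener exposed on the VIP
    routes:
      selector:
        matchLabels: # Services carrying these labels are attached to this Gateway
          ako.vmware.com/gateway-namespace: default
          ako.vmware.com/gateway-name: my-gateway
      group: v1
      kind: Service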
You can deploy NSX Advanced Load Balancer in the topology illustrated in the figure below.
The topology diagram shows the following configuration:
The Tanzu Kubernetes clusters are integrated with NSX Advanced Load Balancer in NodePort mode only.
Recommendations
In the topology illustrated above, NSX Advanced Load Balancer provides the following networking, IPAM, isolation, tenancy, and Avi Kubernetes Operator functionalities.
Networking
IPAM
Resource Isolation
Tenancy
With NSX Advanced Load Balancer Enterprise edition, tenancy is configured as follows:
Avi Kubernetes Operator
You install Avi Controller on vCenter Server by downloading and deploying an OVA template. These instructions provide guidance specific to deploying Avi Controller for Tanzu Advanced. For full details of how to deploy Avi Controller, see Installing Avi Vantage for VMware vCenter in the Avi Networks documentation.
Follow the installer prompts to deploy a VM from the OVA template.
Select the following options in the OVA deployment wizard:
It might take a few minutes for the deployment to finish.
For full details of how to set up Avi Controller, see Performing the Avi Controller Initial Setup in the Avi Controller documentation.
This section provides some information about configurations that have been validated on Tanzu Kubernetes Grid, as well as some tips that are not included in the Avi Controller documentation.
Create Administrator Account
Configure System Settings
Email/SMTP is not required.
Under Permissions, select Write to set the Access Mode to Write.
SDN Integration is not required (the NSX integration here is for NSX-v).
You must configure some additional settings in the Avi Controller UI before you can install Avi Kubernetes Operator.
Create an IPAM profile to define where VIPs are allocated:
Create the DNS profile. The DNS profile is required for Services of type LoadBalancer, but it is mostly relevant if you use the Avi DNS VS as your name server. Services are assigned FQDNs in the form service.namespace.tkg-lab.vmware.com.
Go to Infrastructure > Cloud.
Edit Default-Cloud and assign the IPAM and DNS profiles that you created above.
Update the DataCenter settings.
Do not update the Network section yet.
Edit the network to add a pool of IP addresses to be used as VIPs.
Edit the subnet and add an IP address pool range within the subnet boundaries, for example 192.168.14.210-192.168.14.219.
You use Helm to install Avi Kubernetes Operator on Tanzu Kubernetes clusters. Full instructions for installing Avi Kubernetes Operator are found in the Avi Kubernetes Operator Helm chart GitHub pages.
Set the context of kubectl to the context of the Tanzu Kubernetes cluster that you want to integrate with Avi.
For example, if your cluster is named my-cluster, run the following command.
kubectl config use-context my-cluster-admin@my-cluster
Create a namespace on the cluster.
kubectl create ns avi-system
Run the following command to configure the Helm chart repository.
The steps below install Avi Kubernetes Operator v1.2.1.
helm repo add ako https://avinetworks.github.io/avi-helm-charts/charts/stable/ako
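If you had already added this repository previously, refresh the local chart index so that the 1.2.1 chart version is available:
helm repo update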
Run the following command to obtain the values.yaml
base file.
curl -JOL https://raw.githubusercontent.com/avinetworks/avi-helm-charts/master/charts/stable/ako/values.yaml
Edit the following values in values.yaml.
For descriptions of each property, see avi-helm-charts.
AKOSettings:
disableStaticRouteSync: "true" # Since we will be using NodePort mode, static route sync is not required.
clusterName: "cluster1" # A unique identifier for the kubernetes cluster, that helps distinguish the objects for this cluster in the avi controller.
cniPlugin: "" # Set the string if your CNI is calico or openshift. enum: calico|canal|flannel|openshift
NetworkSettings:
subnetIP: "10.79.172.0" # Subnet IP of the vip network
subnetPrefix: "22" # Subnet Prefix of the vip network
networkName: "vxw-dvs-26-virtualwire-7-sid-2210006-wdc-02-vc21-avi-mgmt" # Network Name of the vip network. Same as configured in IPAM
L7Settings:
serviceType: NodePort #enum NodePort|ClusterIP
shardVSSize: "SMALL" # Use this to control the layer 7 VS numbers. This applies to both secure/insecure VSes but does not apply for passthrough. ENUMs: LARGE, MEDIUM, SMALL
The enums LARGE, MEDIUM, and SMALL map to 8, 4, and 2 shard virtual services respectively, across which all Ingresses are shared. Services of type LoadBalancer map to a dedicated virtual service with its own VIP.
ControllerSettings:
serviceEngineGroupName: "Default-Group" # Name of the ServiceEngine Group.
controllerVersion: "18.2.10" # The controller API version
cloudName: "Default-Cloud" # The configured cloud name on the Avi
When installing Avi Kubernetes Operator on other clusters, you can create a new SE group and refer to it in ControllerSettings. Multiple clusters can share an SE group, or each cluster can use a dedicated SE group.
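For example, if you create a dedicated SE group for a second cluster in the Avi Controller UI, point the chart at it in values.yaml; the group name below is illustrative.
ControllerSettings:
  serviceEngineGroupName: "cluster2-se-group" # Dedicated SE group created for this cluster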
Deploy the Avi Kubernetes Operator Helm chart.
helm install ako/ako --generate-name --version 1.2.1 -f values.yaml --set ControllerSettings.controllerIP=10.79.174.254 --set avicredentials.username=admin --set avicredentials.password=VMware1! --namespace=avi-system
Check that Avi Kubernetes Operator is running.
helm list -n avi-system
kubectl get all -n avi-system
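To verify the integration end to end, you can create a test Deployment, expose it as a Service of type LoadBalancer, and check that it is assigned an address from the VIP pool; the names below are illustrative.
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --type=LoadBalancer --port=80
kubectl get svc nginx
The EXTERNAL-IP column of the output should show an address from the VIP range that you configured, for example 192.168.14.210.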