NSX ALB can act as the external Load Balancer provider for your Kubernetes clusters in a Tanzu Kubernetes Grid deployment.
To configure the NSX ALB load balancer implementation for all clusters:
Create a management cluster configuration YAML file, and add the following fields in the file:
AVI_ENABLE: true
AVI_CONTROLLER: <avi controller IP address or FQDN>
AVI_USERNAME: <avi admin username>
AVI_PASSWORD: <avi admin password>
AVI_CA_DATA_B64: <base64 encoded certificate>
AVI_CLOUD_NAME: <cloud you configured to deploy virtual services>
AVI_SERVICE_ENGINE_GROUP: <SEG you configured to host virtual services>
AVI_DATA_NETWORK: <VIP Network you want to use for your load balancer external IP>
AVI_DATA_NETWORK_CIDR: <CIDR of the VIP network above>
#### only for NSX-T cloud ####
AVI_NSXT_T1LR: <NSX-T Tier 1 path used for the NSX Advanced Load Balancer backend network>
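For reference, a filled-in configuration might look like the following. The values shown are illustrative only and reuse the controller, cloud, Service Engine Group, and VIP network values from the AKODeploymentConfig example later in this topic; substitute the details from your own NSX ALB deployment:
AVI_ENABLE: true
AVI_CONTROLLER: 10.92.196.20
AVI_USERNAME: admin
AVI_PASSWORD: <avi admin password>
AVI_CA_DATA_B64: <base64 encoded certificate>
AVI_CLOUD_NAME: Default-Cloud
AVI_SERVICE_ENGINE_GROUP: Default-Group
AVI_DATA_NETWORK: VM Network
AVI_DATA_NETWORK_CIDR: 10.92.192.0/19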
For more information on creating a management cluster configuration file, see Create a Management Cluster Configuration File.
Create the management cluster by using the tanzu management-cluster create command.
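For example, assuming the configuration file is saved as mgmt-cluster-config.yaml (a hypothetical file name), the command might look like this:
tanzu management-cluster create --file mgmt-cluster-config.yaml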
NSX ALB is now configured as the load balancer for the management cluster and all the workload clusters that are created by this management cluster.
Optionally, you can configure certain advanced load balancing features of NSX ALB in Tanzu Kubernetes Grid.
To configure NSX ALB as the Load Balancer only on specific workload clusters:
Create a management cluster configuration YAML file, and add the following fields in the file:
AVI_ENABLE: true
AVI_LABELS: '{"enable-nsx-alb":"true"}'
AVI_CONTROLLER: <avi controller IP address or FQDN>
AVI_USERNAME: <avi admin username>
AVI_PASSWORD: <avi admin password>
AVI_CA_DATA_B64: <base64 encoded certificate>
AVI_CLOUD_NAME: <cloud you configured to deploy virtual services>
AVI_SERVICE_ENGINE_GROUP: <SEG you configured to host virtual services>
AVI_DATA_NETWORK: <VIP Network you want to use for your load balancer external IP>
AVI_DATA_NETWORK_CIDR: <CIDR of the VIP network above>
#### only for NSX-T cloud ####
AVI_NSXT_T1LR: <NSX-T Tier 1 path used for the NSX Advanced Load Balancer backend network>
For more information on creating a management cluster configuration file, see Create a Management Cluster Configuration File.
Create the management cluster by using the tanzu management-cluster create command.
In the workload cluster configuration YAML file, add the following field:
AVI_LABELS: '{"enable-nsx-alb":"true"}'
Create the workload cluster by using the tanzu cluster create command.
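For example, assuming the workload cluster configuration is saved as workload-cluster-config.yaml (a hypothetical file name):
tanzu cluster create --file workload-cluster-config.yaml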
NSX ALB is now configured as the load balancer only for the workload clusters that have the corresponding AVI_LABELS value.
This feature leverages the Avi Kubernetes Operator (AKO) application that is deployed in the clusters. For information, see Service of Type Load Balancer with Preferred IP.
Ensure that the IP address that you specify is an unallocated address in the IP pool that is configured in your Avi Controller.
To configure an external static IP address for the load balancer service provided by NSX ALB, add the external IP address to the loadBalancerIP field in the LoadBalancer type Service configuration file, as shown in this example:
apiVersion: v1
kind: Service
metadata:
  name: corgi-test
spec:
  type: LoadBalancer
  selector:
    corgi: test
  ports:
    - nodePort: 30008
      port: 80
      targetPort: 80
  loadBalancerIP: 1.1.1.1
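After you apply this Service to the cluster, the requested address should appear in the EXTERNAL-IP column of a standard kubectl query, for example:
kubectl get service corgi-test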
A Tanzu Kubernetes Grid deployment integrated with NSX ALB supports the Gateway API v1beta1. This feature leverages the AKO application deployed in the clusters and allows you to expose Kubernetes services outside of a cluster. This feature is currently under tech preview. For more information about NSX ALB Gateway API support, see Gateway API - v1beta1 in the NSX ALB documentation.
Note: To use this feature, NSX ALB must have an enterprise license.
AKO currently only supports GatewayClass, Gateway, and HTTPRoute. For more information about the limitations of this technical preview, see Conditions and Caveats in the Gateway API - v1beta1 documentation.
To configure the v1beta1 API Gateways for the load balancer services provided by NSX ALB, you must set the flag spec.extraConfigs.featureGates.GatewayAPI in the AKODeploymentConfig object to true. For information about how to access the AKODeploymentConfig for a cluster, see View the AKODeploymentConfig CR Object for a Cluster.
Create or modify an existing AKODeploymentConfig file to enable the Gateway API on a workload cluster.
Example AKODeploymentConfig file:
spec:
  adminCredentialRef:
    name: avi-controller-credentials
    namespace: tkg-system-networking
  certificateAuthorityRef:
    name: avi-controller-ca
    namespace: tkg-system-networking
  cloudName: Default-Cloud
  clusterSelector:
    matchLabels:
      cluster-type: cluster-gateway-clusterip
  controlPlaneNetwork:
    cidr: 10.92.192.0/19
    name: VM Network
  controller: 10.92.196.20
  controllerVersion: 22.1.3
  dataNetwork:
    cidr: 10.92.192.0/19
    name: VM Network
  extraConfigs:
    cniPlugin: antrea
    disableStaticRouteSync: false
    featureGates:
      GatewayAPI: true
    ingress:
      defaultIngressController: false
      disableIngressClass: false
      nodeNetworkList:
        - networkName: VM Network
      serviceType: ClusterIP
      shardVSSize: MEDIUM
  serviceEngineGroup: Default-Group
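To make the configuration take effect, apply the AKODeploymentConfig object to the management cluster. A minimal sketch, assuming the object is saved in a file named adc-gateway.yaml (a hypothetical name) and your kubectl context points to the management cluster:
kubectl apply -f adc-gateway.yaml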
Create the workload cluster from a configuration file.
To use a custom AKO configuration, set AVI_LABELS in the cluster configuration to match the cluster selector in AKODeploymentConfig.
CLUSTER_NAME: test-cluster
AVI_LABELS: '{"cluster-type": "cluster-gateway-clusterip"}'
After the workload cluster is created, check whether the ako-gateway-api controller container in the AKO StatefulSet has deployed successfully.
kubectl get pods -A
Create a GatewayClass object YAML file.
AKO identifies GatewayClass instances that point to ako.vmware.com/avi-lb as the .spec.controllerName value in the GatewayClass object.
For example, create a file named gateway-class.yaml with the following contents.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
  name: avi-lb
spec:
  controllerName: "ako.vmware.com/avi-lb"
Apply the GatewayClass object to the cluster.
kubectl apply -f gateway-class.yaml
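To confirm that the class was created and accepted, you can query it, for example:
kubectl get gatewayclass avi-lb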
Create a Gateway object YAML file.
For example, create a file named gateway.yaml with the following contents.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
spec:
  gatewayClassName: avi-lb
  listeners:
    - name: foo-http
      protocol: HTTP
      port: 80
      hostname: foo.avilocal.lol
Apply the Gateway object to the cluster.
kubectl apply -f gateway.yaml
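Once AKO has programmed the corresponding virtual service, the Gateway should report an address in its status, which you can check with:
kubectl get gateway my-gateway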
Create an HTTPRoute and backend service YAML file.
For example, create a file named http-route.yaml with the following contents.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: foo-http
spec:
  parentRefs:
    - name: my-gateway
  hostnames:
    - "foo.avilocal.lol"
  rules:
    - backendRefs:
        - name: coffee-svc
          port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: coffee-svc
  labels:
    app: coffee
spec:
  ports:
    - port: 80
  selector:
    app: coffee
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-coffee
spec:
  selector:
    matchLabels:
      app: coffee
  replicas: 1
  template:
    metadata:
      labels:
        app: coffee
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
Apply the HTTPRoute object to the cluster.
kubectl apply -f http-route.yaml
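To verify end-to-end routing, you can send a request to the Gateway's external address with the configured hostname, for example (where GATEWAY-IP is the address reported by kubectl get gateway my-gateway):
curl -H "Host: foo.avilocal.lol" http://GATEWAY-IP/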
For information about how to create dual-stack clusters, see Create Dual-Stack Clusters with NSX Advanced Load Balancer as Cluster Load Balancer Service Provider.
All NSX ALB features that are available through AKO are supported in Tanzu Kubernetes Grid. To use a feature, set the corresponding value in the AKODeploymentConfig.spec.extraConfigs.<FEATURE-KNOB> object. For more information, see Avi Kubernetes Operator Deployment Guide.
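For example, a minimal sketch that turns off static route synchronization, using the disableStaticRouteSync knob that also appears in the AKODeploymentConfig example above, might look like this:
spec:
  extraConfigs:
    disableStaticRouteSync: true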
NSX ALB as the load balancer service provider is automatically enabled in the management cluster if NSX ALB is enabled in your Tanzu Kubernetes Grid deployment.