TKG workload clusters at the edge need load balancing for both control plane high availability and workload applications. NSX Advanced Load Balancer (Avi) can serve both roles but is not recommended at the edge because of its footprint. kube-vip for application workloads is in tech preview and can be evaluated for production use in the future; contact the TKG product team about timing for this feature to reach GA.
There are two aspects to load balancing in TKG workload clusters running at the edge: load balancing for control plane high availability, and workload application load balancing. Although NSX Advanced Load Balancer (Avi) can satisfy both use cases, we don't currently recommend it for edge locations due to its footprint and the other considerations discussed.
For the Enterprise Edge, customers can either bring their own load balancer or use kube-vip, which is the recommended load-balancing solution for providing connectivity to applications in the workload clusters. Note, however, that kube-vip for application workloads is currently in tech preview.
In TKG versions 1.3 and newer, control plane HA already leverages kube-vip, which is deployed as a static pod during bootstrap. For load balancing of workloads running on the worker nodes, kube-vip support for Services of type LoadBalancer was added in TKG release 2.1.0 (in tech preview), leveraging a new component named kube-vip-cloud-provider. kube-vip-cloud-provider runs as a StatefulSet and uses IPAM to allocate an IP address for each LoadBalancer Service deployed by the user; the kube-vip pods, in turn, are responsible for advertising the allocated address. Administrators must choose either kube-vip or NSX ALB for a given workload cluster, as kube-vip-cloud-provider can only be enabled when NSX ALB is disabled.
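To illustrate the flow described above, the following is a minimal sketch of how kube-vip-cloud-provider IPAM is typically configured and consumed. The IP range, namespace, and application names here are hypothetical examples, not values from this architecture; consult the kube-vip-cloud-provider documentation for the exact keys supported by your TKG release.

```yaml
# Hypothetical example: ConfigMap defining the address pool that
# kube-vip-cloud-provider draws from when allocating LoadBalancer IPs.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevip
  namespace: kube-system
data:
  range-global: 10.0.10.100-10.0.10.120   # example edge-site IP range
---
# A user-deployed Service of type LoadBalancer. kube-vip-cloud-provider
# allocates an IP from the pool above, and the kube-vip pods advertise it.
apiVersion: v1
kind: Service
metadata:
  name: edge-app            # hypothetical application name
spec:
  type: LoadBalancer
  selector:
    app: edge-app
  ports:
    - port: 80
      targetPort: 8080
```

Once applied, the allocated address appears in the Service's `status.loadBalancer.ingress` field, which can be checked with `kubectl get svc edge-app`.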
For further design considerations for workload load balancing, see the Ingress and Load Balancing at the Edge section of the Tanzu Edge Solution Reference Architecture.