Kube-VIP Load Balancer (vSphere Technical Preview)

This topic describes using Kube-VIP as an L4 load balancer for workloads hosted on Tanzu Kubernetes Grid (TKG) workload clusters that are deployed by a standalone management cluster on vSphere.

Note

This feature is in the unsupported Technical Preview state; see TKG Feature States.

Background

Kube-VIP provides Kubernetes clusters with a virtual IP and load balancer for both the control plane and Kubernetes services of ServiceType LoadBalancer without relying on external hardware or software.

Previous versions of TKG already use Kube-VIP to provide VIP services for the TKG control plane.

Note

TKG does not support ExternalTrafficPolicy Local mode for Kube-VIP.
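For reference, a Service of ServiceType LoadBalancer that Kube-VIP can serve looks like the following minimal sketch; the Service name, selector, and ports are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app          # placeholder name
spec:
  type: LoadBalancer    # Kube-VIP allocates the external IP for this Service
  selector:
    app: my-app         # placeholder selector
  ports:
  - port: 80            # port exposed on the load-balancer IP
    targetPort: 8080    # port on the backing pods
```

Because TKG does not support ExternalTrafficPolicy Local mode for Kube-VIP, leave the `externalTrafficPolicy` field at its default, Cluster.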

Configuration

Prerequisites

You can configure Kube-VIP as a LoadBalancer service only on:

  • Class-based workload clusters on vSphere.
  • Workload clusters configured with Kube-VIP as the control plane HA provider, AVI_CONTROL_PLANE_HA_PROVIDER = false.
  • Workload clusters whose management cluster also uses Kube-VIP as its control plane HA provider.

You cannot use Kube-VIP as a LoadBalancer service on Windows-based clusters.

Before you can create a workload cluster that uses Kube-VIP as a LoadBalancer service, you must allocate ranges of IP addresses for Kube-VIP to assign to Services of type LoadBalancer. The IP address for each LoadBalancer service must come from these ranges.

Parameters

To configure Kube-VIP as a class-based workload cluster’s load balancer service, set the following in the cluster configuration file:

  • KUBEVIP_LOADBALANCER_ENABLE

    • Set to true to enable Kube-VIP. Defaults to false.
  • KUBEVIP_LOADBALANCER_IP_RANGES

    • A list of non-overlapping IP ranges to allocate for LoadBalancer type service IP. For example: 10.0.0.1-10.0.0.23,10.0.2.1-10.0.2.24.
  • KUBEVIP_LOADBALANCER_CIDRS

    • A list of non-overlapping CIDRs to allocate for LoadBalancer type service IP. For example: 10.0.0.0/24,10.0.2.0/24.

Either KUBEVIP_LOADBALANCER_IP_RANGES or KUBEVIP_LOADBALANCER_CIDRS is required. If you set both, the kube-vip-load-balancer component allocates IP addresses only from KUBEVIP_LOADBALANCER_CIDRS, even if no more addresses are available in those CIDRs.

To avoid conflicts, the IP ranges and CIDRs that you allocate to different clusters must not overlap.
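For example, the relevant settings in a cluster configuration file might look like the following excerpt; the IP ranges are placeholders that you replace with addresses routable in your environment:

```yaml
# Cluster configuration file excerpt (placeholder values)
AVI_CONTROL_PLANE_HA_PROVIDER: false    # Kube-VIP, not NSX ALB, provides control plane HA
KUBEVIP_LOADBALANCER_ENABLE: true       # enable Kube-VIP as the LoadBalancer service provider
KUBEVIP_LOADBALANCER_IP_RANGES: "10.0.0.1-10.0.0.23,10.0.2.1-10.0.2.24"
```

Set KUBEVIP_LOADBALANCER_IP_RANGES or KUBEVIP_LOADBALANCER_CIDRS, but not both, because CIDRs take precedence when both are set.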

Extend IP Range for Kube-VIP Load Balancer

For workload clusters with Kube-VIP as load balancer, you can extend the IP address range that Kube-VIP balances traffic across by changing the loadbalancerCIDRs or loadbalancerIPRanges value in the Kube-VIP CPI configuration.

Note

You can only extend Kube-VIP’s range; you cannot decrease its existing IP range.

  1. Set the context of kubectl to the management cluster.

    kubectl config use-context my-mgmnt-cluster-admin@my-mgmnt-cluster
    
  2. Edit the KubevipCPIConfig configuration for the target cluster:

    kubectl edit kubevipcpiconfig CLUSTER-NAME -n CLUSTER-NAMESPACE
    

    Where CLUSTER-NAME and CLUSTER-NAMESPACE are the name and namespace of the workload cluster that you are extending Kube-VIP’s range for.

  3. In the KubevipCPIConfig spec, change the loadbalancerCIDRs or loadbalancerIPRanges value in a way that only adds IP addresses. For example, you could change loadbalancerCIDRs: 10.0.0.0/24 to either of the following:

    • 10.0.0.0/24,10.0.1.0/24
    • 10.0.0.0/16