When using vSphere with Tanzu with vDS networking, HAProxy provides load balancing for developers accessing the Tanzu Kubernetes cluster control plane, and for Kubernetes Services of type LoadBalancer. Review the possible topologies that you can implement for the HAProxy load balancer.

Workload Networks on the Supervisor Cluster

To configure a Supervisor Cluster with the vSphere networking stack, you must connect all hosts from the cluster to a vSphere Distributed Switch. Depending on the topology that you implement for the Supervisor Cluster Workload Networks, you create one or more distributed port groups. You assign these port groups as Workload Networks to vSphere Namespaces.

Before you add a host to a Supervisor Cluster, you must add it to all the vSphere Distributed Switches that are part of the cluster.

Workload Networks provide connectivity to the nodes of Tanzu Kubernetes clusters and to Supervisor Cluster control plane VMs. The Workload Network that provides connectivity to the Kubernetes control plane VMs is called the Primary Workload Network. Each Supervisor Cluster must have one Primary Workload Network. You must designate one of the distributed port groups as the Primary Workload Network for the Supervisor Cluster.
Note: Workload Networks are only added when you enable the Supervisor Cluster and cannot be added later.

The Kubernetes control plane VMs on the Supervisor Cluster use three IP addresses from the IP address range that is assigned to the Primary Workload Network. Each node of a Tanzu Kubernetes cluster has a separate IP address assigned from the address range of the Workload Network that is configured with the namespace where the Tanzu Kubernetes cluster runs.

Allocation of IP Ranges

When you plan the networking topology of the Supervisor Cluster with the HAProxy load balancer, plan for two types of IP ranges:
  • A range for allocating virtual IPs for HAProxy. The IP range that you configure for the virtual servers of HAProxy is reserved by the load balancer appliance. For example, if the virtual IP range is 192.168.1.0/24, no host in that range is reachable for anything other than virtual IP traffic.
    Note: You must not configure a gateway within the HAProxy virtual IP range, because all routes to that gateway will fail.
  • An IP range for the nodes of the Supervisor Cluster and Tanzu Kubernetes clusters. Each Kubernetes control plane VM in the Supervisor Cluster has an IP address assigned, which makes three IP addresses in total. Each node of a Tanzu Kubernetes cluster also has a separate IP address assigned. You must assign a unique IP range to each Workload Network on the Supervisor Cluster that you configure for a namespace.

An example configuration with one /24 network (a validation sketch follows this example):

  • Network: 192.168.120.0/24
  • HAProxy VIPs: 192.168.120.128/25
  • 1 IP address for the HAProxy workload interface: 192.168.120.5

Depending on the IPs that are free within the first 128 addresses, you can define IP ranges for Workload Networks on the Supervisor Cluster, for example:

  • 192.168.120.31-192.168.120.40 for the Primary Workload Network
  • 192.168.120.51-192.168.120.60 for another Workload Network
Note: The ranges that you define for Workload Networks must not overlap with the HAProxy VIP range.
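The carve-up above can be checked programmatically before you enable Workload Management. The following is a minimal sketch using Python's standard ipaddress module with the example addresses above; the specific ranges are illustrative only and must match what is actually free in your environment.

```python
from ipaddress import ip_address, ip_network

# Example values from the plan above -- adjust to your environment.
workload_subnet = ip_network("192.168.120.0/24")
haproxy_vip_range = ip_network("192.168.120.128/25")  # reserved by the HAProxy appliance
haproxy_workload_ip = ip_address("192.168.120.5")     # HAProxy workload interface

# Candidate Workload Network ranges (start, end) taken from the free lower half.
workload_ranges = [
    (ip_address("192.168.120.31"), ip_address("192.168.120.40")),  # Primary Workload Network
    (ip_address("192.168.120.51"), ip_address("192.168.120.60")),  # another Workload Network
]

# The VIP block must be carved out of the workload subnet...
assert haproxy_vip_range.subnet_of(workload_subnet)
# ...and the HAProxy workload interface IP must sit outside it.
assert haproxy_workload_ip not in haproxy_vip_range

for start, end in workload_ranges:
    # Each contiguous Workload Network range must stay inside the subnet
    # and entirely outside the VIP block, so checking both endpoints is enough.
    assert start in workload_subnet and end in workload_subnet
    assert start not in haproxy_vip_range and end not in haproxy_vip_range

print("Example IP plan is consistent.")
```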

HAProxy Network Topology

There are two network configuration options for deploying HAProxy: Default and Frontend. The Default configuration uses two NICs: one for the Management network and one for the Workload network. The Frontend configuration uses three NICs: one for the Management network, one for the Workload network, and one for the Frontend network that clients connect to. The following list describes the characteristics of each network.

For production installations, it is recommended that you deploy the HAProxy load balancer using the Frontend Network configuration. If you deploy the HAProxy load balancer using the Default configuration, it is recommended that you assign a /24 IP address block size to the Workload network. For both configuration options, DHCP is not recommended.
Management
The Supervisor Cluster uses the Management network to connect to and program the HAProxy load balancer.
  • The HAProxy Data Plane API endpoint is bound to the network interface connected to the Management network (see the connectivity check sketch after this list).
  • The Management IP address assigned to the HAProxy control plane VM must be a static IP on the Management network so that the Supervisor Cluster can reliably connect to the load balancer API.
  • The default gateway for the HAProxy VM should be on this network.
  • DNS queries should occur on this network.
Workload
The HAProxy control plane VM uses the Workload network to access the services on the Supervisor Cluster and Tanzu Kubernetes cluster nodes.
  • The HAProxy control plane VM forwards traffic to the Supervisor and Tanzu Kubernetes cluster nodes on this network.
  • If the HAProxy control plane VM is deployed in Default mode (two NICs), the Workload network must provide the logical networks used to access the load balancer services.
  • In the Default configuration, the load balancer virtual IPs and the Kubernetes cluster node IPs will come from this network. They will be defined as separate, non-overlapping ranges within the network.
Note: The workload network must be on a different subnet than the management network. Refer to the system requirements.
Frontend (optional)

External clients, such as DevOps users or applications accessing cluster workloads, use the Frontend network to reach load-balanced backend services through virtual IP addresses.

  • The Frontend network is only used when the HAProxy control plane VM is deployed with three NICs.
  • Recommended for production installations.
  • The Frontend network is where you expose the virtual IP address (VIP). HAProxy will balance and forward the traffic to the appropriate backend.
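Because the Supervisor Cluster programs HAProxy exclusively over the Management network, it is worth confirming that the Data Plane API is reachable at the static Management IP address before you enable Workload Management. The following is a minimal sketch that assumes hypothetical values: the Management IP, the Data Plane API port (commonly 5556 on the HAProxy appliance for vSphere with Tanzu, but verify yours), the API credentials, and the certificate file are all placeholders for values from your own deployment.

```python
import requests

# Hypothetical values -- substitute the Management IP, port, credentials,
# and certificate that you configured when deploying the HAProxy appliance.
HAPROXY_MGMT_IP = "10.10.10.5"
DATAPLANE_PORT = 5556
DATAPLANE_USER = "admin"
DATAPLANE_PASSWORD = "example-password"
CA_BUNDLE = "haproxy-ca.pem"  # the appliance certificate you also provide to Workload Management

# GET /v2/info is a standard HAProxy Data Plane API call that returns version information.
resp = requests.get(
    f"https://{HAPROXY_MGMT_IP}:{DATAPLANE_PORT}/v2/info",
    auth=(DATAPLANE_USER, DATAPLANE_PASSWORD),
    verify=CA_BUNDLE,
    timeout=5,
)
resp.raise_for_status()
print(resp.json())
```

If this call fails from the management network, the Supervisor Cluster cannot program the load balancer.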

The diagram below illustrates an HAProxy deployment using a Frontend Network topology. The diagram indicates where configuration fields are expected during the installation and configuration process.

""

Supervisor Cluster Topology with One Workload Network and HAProxy with Two Virtual NICs

In this topology, you configure a Supervisor Cluster with one Workload Network for the following components:

  • Kubernetes control plane VMs
  • The nodes of Tanzu Kubernetes clusters.
  • The HAProxy virtual IP range where external services and DevOps users connect.

In this configuration, HAProxy is deployed with two virtual NICs (the Default configuration): one connected to the Management network and a second connected to the Primary Workload Network. You must plan for allocating virtual IPs on a separate subnet from the Primary Workload Network.

You designate one port group as the Primary Workload Network for the Supervisor Cluster and then use the same port group as the Workload Network for vSphere Namespaces. The Supervisor Cluster, Tanzu Kubernetes clusters, HAProxy, DevOps users, and external services all connect to the same distributed port group that is set as the Primary Workload Network.
Figure 1. Supervisor Cluster Backed by One Network

The traffic path for DevOps users or external applications is the following (a client-side sketch follows these steps):
  1. The DevOps user or external service sends traffic to a virtual IP on the Workload Network subnet of the distributed port group.
  2. HAProxy load balances the virtual IP traffic to either a Tanzu Kubernetes cluster node IP or a control plane VM IP. HAProxy claims the virtual IP address so that it can load balance the traffic coming to that IP.
  3. The control plane VM or Tanzu Kubernetes cluster node delivers the traffic to the target pods running inside the Supervisor Cluster or Tanzu Kubernetes cluster respectively.
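The same path can be exercised from a developer workstation that has routed access to the Workload Network, as referenced in the steps above. The following is a minimal client-side sketch; the Service name web, the namespace demo, and the assumption that HAProxy has already assigned it a virtual IP are all hypothetical, and the sketch uses the official kubernetes Python client against a kubeconfig for the cluster.

```python
import requests
from kubernetes import client, config

# Load a kubeconfig obtained from the Supervisor Cluster or a Tanzu Kubernetes cluster.
config.load_kube_config()

# Hypothetical Service of type LoadBalancer -- replace name and namespace with your own.
svc = client.CoreV1Api().read_namespaced_service(name="web", namespace="demo")
vip = svc.status.load_balancer.ingress[0].ip  # virtual IP claimed by HAProxy
port = svc.spec.ports[0].port

# Traffic sent to the VIP is load balanced by HAProxy to a cluster node,
# which delivers it to the target pods.
print(requests.get(f"http://{vip}:{port}", timeout=5).status_code)
```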

Supervisor Cluster Topology with an Isolated Workload Network and HAProxy with Two Virtual NICs

In this topology, you configure networks for the following components:
  • Kubernetes control plane VMs. A Primary Workload Network handles the traffic for the Kubernetes control plane VMs.
  • Tanzu Kubernetes cluster nodes. A Workload Network that you assign to all namespaces on the Supervisor Cluster connects the Tanzu Kubernetes cluster nodes.
  • HAProxy virtual IPs. In this configuration, the HAProxy VM is deployed with two virtual NICs (the Default configuration). You can connect the HAProxy VM either to the Primary Workload Network or to the Workload Network that you use for namespaces. You can also connect HAProxy to a VM network that already exists in vSphere and is routable to the Primary and Workload networks.
The Supervisor Cluster is connected to the distributed port group backing the Primary Workload Network, and Tanzu Kubernetes clusters are connected to a distributed port group backing the Workload Network. The two port groups must be layer 3 routable to each other. You can implement layer 2 isolation through VLANs, and layer 3 traffic filtering through IP firewalls and gateways.
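Because the two port groups back different subnets, it helps to confirm layer 3 reachability between them before you create namespaces and deploy clusters. The following is a minimal sketch using plain TCP connections to the Kubernetes API port (6443); the addresses are hypothetical placeholders for a Supervisor control plane VM on the Primary Workload Network and a Tanzu Kubernetes cluster node on the Workload Network.

```python
import socket

# Hypothetical addresses -- substitute IPs from your Primary Workload Network
# and from the Workload Network that backs your namespaces.
endpoints = [
    ("192.168.130.3", 6443),   # Supervisor control plane VM (Primary Workload Network)
    ("192.168.140.21", 6443),  # Tanzu Kubernetes cluster node (Workload Network)
]

for host, port in endpoints:
    try:
        # A successful TCP connect shows that routing and any firewall rules permit the traffic.
        with socket.create_connection((host, port), timeout=3):
            print(f"{host}:{port} reachable")
    except OSError as err:
        print(f"{host}:{port} NOT reachable: {err}")
```

Run the probe from a machine attached to the network where HAProxy is connected, because that is the path the load-balanced traffic will take.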
Figure 2. Supervisor Cluster with an Isolated Workload Network

The traffic path for DevOps users or external services is the following:
  1. The DevOps user or external service sends traffic to a virtual IP. Traffic is routed to the network where HAProxy is connected.
  2. HAProxy load balances the virtual IP traffic to either a Tanzu Kubernetes cluster node IP or a control plane VM IP. HAProxy claims the virtual IP address so that it can load balance the traffic coming to that IP.
  3. The control plane VM or Tanzu Kubernetes cluster node delivers the traffic to the target pods running inside the Tanzu Kubernetes cluster.

Supervisor Cluster Topology with Multiple Workload Networks and HAProxy with Two Virtual NICs

In this topology, you can configure one port group to act as the Primary Workload Network and a dedicated port group to serve as the Workload Network for each namespace. HAProxy is deployed with two virtual NICs (the Default configuration), and you can connect it either to the Primary Workload Network or to any of the Workload Networks. You can also use an existing VM network that is routable to the Primary and Workload Networks.

The traffic path for the DevOps users and external services in this topology is the same as with the isolated Workload Network topology.
Figure 3. Supervisor Cluster Backed by Multiple Isolated Workload Networks


Supervisor Cluster Topology with Multiple Workload Networks and HAProxy with Three Virtual NICs

In this configuration, you deploy the HAProxy VM with three virtual NICs, thus connecting HAProxy to a Frontend network. DevOps users and external services can access HAProxy through virtual IPs on the Frontend network. Deploying HAProxy with three virtual NICs is recommended for production environments.
Figure 4. HAProxy Deployed with Three Virtual NICs


Selecting Between the Possible Topologies

Before you select one of the possible topologies, assess the needs of your environment (a decision sketch follows this list):

  1. Do you need layer 2 isolation between the Supervisor Cluster and Tanzu Kubernetes clusters?
    1. No: use the simplest topology, with one Workload Network serving all components.
    2. Yes: use the isolated Workload Network topology, with separate Primary and Workload Networks.
  2. Do you need further layer 2 isolation between your Tanzu Kubernetes clusters?
    1. No: use the isolated Workload Network topology, with separate Primary and Workload Networks.
    2. Yes: use the multiple Workload Networks topology, with a separate Workload Network for each namespace and a dedicated Primary Workload Network.
  3. Do you want to prevent your DevOps users and external services from directly routing to the Kubernetes control plane VMs and Tanzu Kubernetes cluster nodes?
    1. No: use the two-NIC (Default) HAProxy configuration.
    2. Yes: use the three-NIC (Frontend) HAProxy configuration. This configuration is recommended for production environments.
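The decision flow above can be condensed into a few lines. The following is an illustrative sketch only; the topology labels are the descriptive names used in this documentation, not values accepted by any API.

```python
def recommend_topology(isolate_tkc_from_supervisor: bool,
                       isolate_tkcs_from_each_other: bool,
                       hide_nodes_from_clients: bool) -> tuple[str, str]:
    """Map the three questions above to a Workload Network layout and an HAProxy NIC layout."""
    if isolate_tkcs_from_each_other:
        network = "Multiple Workload Networks (one per namespace plus a Primary Workload Network)"
    elif isolate_tkc_from_supervisor:
        network = "Isolated Workload Network (separate Primary and Workload Networks)"
    else:
        network = "One Workload Network for all components"

    haproxy = ("Three NICs (Frontend configuration, recommended for production)"
               if hide_nodes_from_clients
               else "Two NICs (Default configuration)")
    return network, haproxy

# Example: isolate clusters from each other and keep clients off the node networks.
print(recommend_topology(True, True, True))
```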

Considerations for Using the HAProxy Load Balancer with vSphere with Tanzu

Keep in mind the following considerations when configuring vSphere with Tanzu with the HAProxy load balancer.

  • A support contract with HAProxy is required for technical support of the HAProxy load balancer. VMware GSS cannot provide support for the HAProxy appliance.
  • The HAProxy appliance is a singleton with no possibility for a highly available topology. For highly available environments, VMware recommends that you use either a full installation of NSX or the NSX Advanced Load Balancer.
  • It is not possible to expand the IP address range used for the Frontend network at a later date, so size the network for all anticipated future growth.