Review the possible topologies that you can implement for the HAProxy load balancer for a Supervisor configured with VDS networking. When using vSphere IaaS control plane with VDS networking, HAProxy provides load balancing for developers accessing the Tanzu Kubernetes Grid control plane, and for Kubernetes Services of Type Load Balancer.

Workload Networks on the Supervisor

To configure a Supervisor with VDS networking, you must connect all hosts from the cluster to a VDS. Depending on the topology that you implement for the Supervisor Workload Networks, you create one or more distributed port groups. You designate the port groups as Workload Networks for vSphere Namespaces.

Workload Networks provide connectivity to the nodes of Tanzu Kubernetes Grid clusters and to Supervisor control plane VMs. The Workload Network that provides connectivity to the Kubernetes control plane VMs is called the Primary Workload Network. Each Supervisor must have one Primary Workload Network. You must designate one of the distributed port groups as the Primary Workload Network for the Supervisor.
Note: You can add Workload Networks only when you enable the Supervisor; you cannot add them afterward.

The Kubernetes control plane VMs on the Supervisor use three IP addresses from the IP address range that is assigned to the Primary Workload Network. Each node of a Tanzu Kubernetes Grid cluster has a separate IP address assigned from the address range of the Workload Network that is configured with the namespace where the Tanzu Kubernetes Grid cluster runs.

Allocation of IP Ranges

When you plan the networking topology of the Supervisor with the HAProxy load balancer, plan for two types of IP ranges:
  • A range for allocating virtual IPs for HAProxy. The IP range that you configure for the virtual servers of HAProxy is reserved by the load balancer appliance. For example, if the virtual IP range is 192.168.1.0/24, no address in that range is accessible for traffic other than virtual IP traffic.
    Note: You must not configure a gateway within the HAProxy virtual IP range, because all routes to that gateway will fail.
  • An IP range for the nodes of the Supervisor and Tanzu Kubernetes Grid clusters. Each Kubernetes control plane VM in the Supervisor has an IP address assigned, which makes three IP addresses in total. Each node of a Tanzu Kubernetes Grid cluster also has a separate IP address assigned. You must assign a unique IP range to each Workload Network on the Supervisor that you configure for a namespace.
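As a rough sizing aid, the rules above can be sketched in Python with only the standard ipaddress module. The function and constant names here are illustrative assumptions, not part of any product API; the counts follow the text: three addresses for the Supervisor control plane VMs, plus one per Tanzu Kubernetes Grid cluster node.

```python
import ipaddress

# Illustrative sketch: the Supervisor always consumes three IPs from the
# Primary Workload Network range (one per control plane VM).
SUPERVISOR_CONTROL_PLANE_VMS = 3

def addresses_needed(tkg_nodes_per_cluster, clusters, primary=False):
    """Return how many workload IPs the given clusters require."""
    needed = tkg_nodes_per_cluster * clusters
    if primary:
        # The Primary Workload Network also hosts the control plane VMs.
        needed += SUPERVISOR_CONTROL_PLANE_VMS
    return needed

def range_size(first_ip, last_ip):
    """Count addresses in an inclusive range such as .31 through .40."""
    return int(ipaddress.ip_address(last_ip)) - int(ipaddress.ip_address(first_ip)) + 1

# Two clusters of three nodes on the Primary Workload Network need 9 IPs,
# which fits in a ten-address range such as 192.168.120.31-192.168.120.40.
print(addresses_needed(tkg_nodes_per_cluster=3, clusters=2, primary=True))  # 9
print(range_size("192.168.120.31", "192.168.120.40"))  # 10
```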

An example configuration with one /24 network:

  • Network: 192.168.120.0/24
  • HAProxy VIPs: 192.168.120.128/25
  • 1 IP address for the HAProxy workload interface: 192.168.120.5

Depending on the IPs that are free within the first 128 addresses, you can define IP ranges for Workload Networks on the Supervisor, for example:

  • 192.168.120.31-192.168.120.40 for the Primary Workload Network
  • 192.168.120.51-192.168.120.60 for another Workload Network
Note: The ranges that you define for Workload Networks must not overlap with the HAProxy VIP range.
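To sanity-check a plan like the example above, a short Python sketch using the standard ipaddress module can confirm that each Workload Network range stays clear of the HAProxy VIP block. The values mirror the example configuration; the function name is an illustrative assumption, not a product API.

```python
import ipaddress

# HAProxy VIP block from the example configuration above.
vip_block = ipaddress.ip_network("192.168.120.128/25")

def range_overlaps_vips(first_ip, last_ip, vips=vip_block):
    """True if any address in the inclusive range falls inside the VIP block."""
    first = ipaddress.ip_address(first_ip)
    last = ipaddress.ip_address(last_ip)
    # The range and the VIP block overlap unless one ends before the other begins.
    return not (last < vips.network_address or first > vips.broadcast_address)

workload_ranges = {
    "primary": ("192.168.120.31", "192.168.120.40"),
    "namespace": ("192.168.120.51", "192.168.120.60"),
}

for name, (lo, hi) in workload_ranges.items():
    assert not range_overlaps_vips(lo, hi), f"{name} range collides with the VIPs"
print("no overlap with the HAProxy VIP range")
```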

HAProxy Network Topology

There are two network configuration options for deploying HAProxy: Default and Frontend. The Default configuration uses two NICs: one for the Management network and one for the Workload network. The Frontend configuration uses three NICs: one each for the Management network, the Workload network, and the Frontend network for clients. The table lists and describes the characteristics of each network.

For production installations, it is recommended that you deploy the HAProxy load balancer using the Frontend Network configuration. If you deploy the HAProxy load balancer using the Default configuration, it is recommended that you assign a /24 IP address block size to the Workload network. For both configuration options, DHCP is not recommended.
Network Characteristics
Management
The Supervisor Cluster uses the Management network to connect to and program the HAProxy load balancer.
  • The HAProxy Data Plane API endpoint is bound to the network interface connected to the Management network.
  • The Management IP address assigned to the HAProxy control plane VM must be a static IP on the Management network so that the Supervisor Cluster can reliably connect to the load balancer API.
  • The default gateway for the HAProxy VM should be on this network.
  • DNS queries should occur on this network.
Workload
The HAProxy control plane VM uses the Workload network to access the services on the Supervisor Cluster and Tanzu Kubernetes cluster nodes.
  • The HAProxy control plane VM forwards traffic to the Supervisor and Tanzu Kubernetes cluster nodes on this network.
  • If the HAProxy control plane VM is deployed in Default mode (two NICs), the Workload network must provide the logical networks used to access the load balancer services.
  • In the Default configuration, the load balancer virtual IPs and the Kubernetes cluster node IPs will come from this network. They will be defined as separate, non-overlapping ranges within the network.
Note: The workload network must be on a different subnet than the management network. Refer to the system requirements.
Frontend (optional)

External clients (such as users or applications) use the Frontend network to reach backend load-balanced services on the cluster through virtual IP addresses.

  • The Frontend network is only used when the HAProxy control plane VM is deployed with three NICs.
  • Recommended for production installations.
  • The Frontend network is where you expose the virtual IP address (VIP). HAProxy will balance and forward the traffic to the appropriate backend.

The diagram below illustrates an HAProxy deployment using a Frontend Network topology. The diagram indicates where configuration fields are expected during the installation and configuration process.


Supervisor Topology with One Workload Network and HAProxy with Two Virtual NICs

In this topology, you configure a Supervisor with one Workload Network for the following components:

  • Kubernetes control plane VMs
  • The nodes of Tanzu Kubernetes Grid clusters.
  • The HAProxy virtual IP range where external services and DevOps users connect. In this configuration, HAProxy is deployed with two virtual NICs (Default configuration): one connected to the Management network and a second one connected to the Primary Workload Network. You must plan for allocating the virtual IPs in a separate, non-overlapping range from the IPs assigned to the nodes on the Primary Workload Network.
You designate one port group as the Primary Workload Network to the Supervisor and then use the same port group as the Workload Network for vSphere Namespaces. Supervisor, Tanzu Kubernetes Grid clusters, HAProxy, DevOps users, and external services all connect to the same distributed port group that is set as the Primary Workload Network.
Figure 1. Supervisor Backed by One Network

The diagram shows a Supervisor having one distributed port group serving for workload and management traffic.
The traffic path for DevOps users or external applications is the following:
  1. The DevOps user or external service sends traffic to a virtual IP on the Workload Network subnet of the distributed port group.
  2. HAProxy load balances the virtual IP traffic to either a Tanzu Kubernetes Grid cluster node IP or a control plane VM IP. HAProxy claims the virtual IP address so that it can load balance the traffic coming on that IP.
  3. The control plane VM or Tanzu Kubernetes Grid cluster node delivers the traffic to the target pods running inside the Supervisor or Tanzu Kubernetes Grid cluster respectively.

Supervisor Topology with an Isolated Workload Network and HAProxy with Two Virtual NICs

In this topology, you configure networks to the following components:
  • Kubernetes control plane VMs. A Primary Workload Network to handle the traffic for Kubernetes control plane VMs.
  • Tanzu Kubernetes Grid cluster nodes. A Workload Network that you assign to all namespaces on the Supervisor. This network connects the Tanzu Kubernetes Grid cluster nodes.
  • HAProxy virtual IPs. In this configuration, the HAProxy VM is deployed with two virtual NICs (Default configuration). You can connect the HAProxy VM either to the Primary Workload Network or to the Workload Network that you use for namespaces. You can also connect HAProxy to a VM network that already exists in vSphere and is routable to the Primary and Workload Networks.
The Supervisor is connected to the distributed port group backing the Primary Workload Network and Tanzu Kubernetes Grid clusters are connected to a distributed port group backing the Workload Network. The two port groups must be layer 3 routable. You can implement layer 2 isolation through VLANs. Layer 3 traffic filtering is possible through IP firewalls and gateways.
Figure 2. Supervisor with an Isolated Workload Network

The traffic path for DevOps users or external service is the following:
  1. The DevOps user or external service sends traffic to a virtual IP. Traffic is routed to the network where HAProxy is connected.
  2. HAProxy load balances the virtual IP traffic to either a Tanzu Kubernetes Grid cluster node IP or a control plane VM IP. HAProxy claims the virtual IP address so that it can load balance the traffic coming on that IP.
  3. The control plane VM or Tanzu Kubernetes Grid cluster node delivers the traffic to the target pods running inside the Tanzu Kubernetes Grid cluster.

Supervisor Topology with Multiple Workload Networks and HAProxy with Two Virtual NICs

In this topology, you configure one port group to act as the Primary Workload Network and a dedicated port group to serve as the Workload Network for each namespace. HAProxy is deployed with two virtual NICs (Default configuration), and you can connect it either to the Primary Workload Network or to any of the Workload Networks. You can also use an existing VM network that is routable to the Primary and Workload Networks.

The traffic path for the DevOps users and external services in this topology is the same as with the isolated Workload Network topology.
Figure 3. Supervisor Backed by Multiple Isolated Workload Networks


Supervisor Topology with Multiple Workload Networks and HAProxy with Three Virtual NICs

In this configuration, you deploy the HAProxy VM with three virtual NICs, which connects HAProxy to a Frontend network. DevOps users and external services can access HAProxy through virtual IPs on the Frontend network. Deploying HAProxy with three virtual NICs is recommended for production environments.
Figure 4. HAProxy Deployed with Three Virtual NICs


Selecting Between the Possible Topologies

Before you select one of the possible topologies, assess the needs of your environment:

  1. Do you need layer 2 isolation between the Supervisor and Tanzu Kubernetes Grid clusters?
    1. No: use the simplest topology, with one Workload Network serving all components.
    2. Yes: use the isolated Workload Network topology, with separate Primary and Workload Networks.
  2. Do you need further layer 2 isolation between your Tanzu Kubernetes Grid clusters?
    1. No: use the isolated Workload Network topology, with separate Primary and Workload Networks.
    2. Yes: use the multiple Workload Networks topology, with a separate Workload Network for each namespace and a dedicated Primary Workload Network.
  3. Do you want to prevent your DevOps users and external services from directly routing to the Kubernetes control plane VMs and Tanzu Kubernetes Grid cluster nodes?
    1. No: use the two-NIC HAProxy configuration.
    2. Yes: use the three-NIC HAProxy configuration. This configuration is recommended for production environments.
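The decision points above can be summarized in a small helper function. This is an illustrative sketch only; the function name, parameters, and returned strings are assumptions made for the example, not part of any product API.

```python
def recommend_topology(isolate_supervisor, isolate_clusters, hide_nodes_from_clients):
    """Map the three assessment questions to a topology and an HAProxy NIC count."""
    if isolate_clusters:
        # Per-cluster isolation implies one Workload Network per namespace.
        topology = "multiple Workload Networks plus a dedicated Primary Workload Network"
    elif isolate_supervisor:
        topology = "isolated Workload Network with separate Primary and Workload Networks"
    else:
        topology = "one Workload Network for all components"
    # A third NIC adds a Frontend network that keeps clients off the node networks.
    nics = 3 if hide_nodes_from_clients else 2
    return topology, nics

# A production-style choice: per-cluster isolation, clients kept off node networks.
print(recommend_topology(True, True, True))
```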

Considerations for Using the HAProxy Load Balancer with vSphere IaaS control plane

Keep in mind the following considerations when planning a vSphere IaaS control plane with the HAProxy load balancer.

  • A support contract is required with HAProxy to get technical support for the HAProxy load balancer. VMware GSS cannot provide support for the HAProxy appliance.
  • The HAProxy appliance is a singleton with no possibility for a highly available topology. For highly available environments, VMware recommends that you use either a full installation of NSX or the NSX Advanced Load Balancer.
  • It is not possible to expand the IP address range used for the Frontend network at a later date, so size the network for all future growth.