HAProxy provides load balancing for developers accessing the Tanzu Kubernetes control plane and for Kubernetes Services of type LoadBalancer. To ensure a successful installation of HAProxy, review and satisfy the following prerequisites before proceeding with the deployment.

Review the System Requirements for vDS Networking with HAProxy

HAProxy requires vSphere Distributed Switch (vDS) networking. Review and satisfy the system requirements for vDS networking with HAProxy. See System Requirements and Topologies for Setting Up a Supervisor Cluster with vSphere Networking and HAProxy Load Balancing.

For a demonstration of how to use vSphere with Tanzu with vDS networking and HAProxy, watch the video Getting Started Using vSphere with Tanzu.

Download the HAProxy OVA

VMware provides an HAProxy OVA file that you deploy in your vSphere environment where you will enable Workload Management. Download the latest version of the VMware HAProxy OVA file from the VMware-HAProxy site.

As a convenience, you can import the HAProxy OVA into a Content Library and deploy it from there. See Import the HAProxy OVA to a Local Content Library.
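If you script the download, the following is a minimal Python sketch that fetches the OVA and verifies its SHA-256 checksum before you import it. The URL, checksum, and file name are hypothetical placeholders; substitute the values published for the release you download from the VMware-HAProxy site.

```python
import hashlib
import urllib.request

# Placeholder values: substitute the OVA URL and the SHA-256 checksum
# published for the release you download from the VMware-HAProxy site.
OVA_URL = "https://example.com/path/to/haproxy.ova"   # hypothetical URL
EXPECTED_SHA256 = "0" * 64                            # hypothetical checksum
LOCAL_PATH = "haproxy.ova"

def download(url: str, dest: str) -> None:
    """Download the OVA to a local file in 1 MiB chunks."""
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        while chunk := resp.read(1024 * 1024):
            out.write(chunk)

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of the downloaded file."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    download(OVA_URL, LOCAL_PATH)
    actual = sha256_of(LOCAL_PATH)
    if actual != EXPECTED_SHA256:
        raise SystemExit(f"Checksum mismatch: expected {EXPECTED_SHA256}, got {actual}")
    print("OVA downloaded and checksum verified; ready to import.")
```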

Plan the HAProxy Network Topology

Plan the network topology for deploying the HAProxy load balancer.

There are two network configuration options: Default and Frontend. The Default configuration uses two NICs: one for the Management network and one for the Workload network. The Frontend configuration uses three NICs: one each for the Management network, the Workload network, and the Frontend network for clients. The following describes the characteristics of each network.

For production installations, deploy the HAProxy load balancer using the Frontend configuration. If you deploy the HAProxy load balancer using the Default configuration, assign a /24 IP address block to the Workload network. DHCP is not recommended for either configuration.
Management network
The Supervisor Cluster uses the Management network to connect to and program the HAProxy load balancer.
  • The HAProxy Data Plane API endpoint is bound to the network interface connected to the Management network (see the connectivity sketch following this list).
  • The Management IP address assigned to the HAProxy control plane VM must be a static IP on the Management network so that the Supervisor Cluster can reliably connect to the load balancer API.
  • The default gateway for the HAProxy VM should be on this network.
  • DNS queries should occur on this network.
Workload network
The HAProxy control plane VM uses the Workload network to access the services on the Supervisor Cluster and Tanzu Kubernetes cluster nodes.
  • The HAProxy control plane VM forwards traffic to the Supervisor and Tanzu Kubernetes cluster nodes on this network.
  • If the HAProxy control plane VM is deployed in Default mode (two NICs), the Workload network must provide the logical networks used to access the load balancer services.
  • In the Default configuration, the load balancer virtual IPs and the Kubernetes cluster node IPs are allocated from this network. Define them as separate, non-overlapping ranges within the network (see the planning sketch following this list).
Frontend network (optional)
External clients (such as users or applications) use the Frontend network to reach load-balanced backend services through virtual IP addresses.
  • The Frontend network is only used when the HAProxy control plane VM is deployed with three NICs.
  • Recommended for production installations.
  • The Frontend network is where you expose the virtual IP addresses (VIPs). HAProxy balances and forwards traffic arriving on these VIPs to the appropriate backend.
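
If you plan a Default (two-NIC) deployment, the Kubernetes node IP range and the load balancer VIP range must be carved out of the same Workload network without overlapping. The following is a minimal Python sketch, using the standard ipaddress module and hypothetical example addresses, that validates a proposed split of a /24 Workload network; the subnet and ranges shown are illustrative only, not defaults.

```python
import ipaddress

# Hypothetical example values for a Default (two-NIC) deployment:
# a /24 Workload network split into a node range and a load balancer VIP range.
WORKLOAD_NETWORK = ipaddress.ip_network("192.168.10.0/24")
NODE_RANGE = (ipaddress.ip_address("192.168.10.2"), ipaddress.ip_address("192.168.10.127"))
VIP_RANGE = (ipaddress.ip_address("192.168.10.128"), ipaddress.ip_address("192.168.10.250"))

def in_network(ip_range, net):
    """Both ends of the range must fall inside the Workload network."""
    return ip_range[0] in net and ip_range[1] in net

def overlaps(a, b):
    """Two inclusive ranges overlap unless one ends before the other starts."""
    return not (a[1] < b[0] or b[1] < a[0])

assert in_network(NODE_RANGE, WORKLOAD_NETWORK), "Node range is outside the Workload network"
assert in_network(VIP_RANGE, WORKLOAD_NETWORK), "VIP range is outside the Workload network"
assert not overlaps(NODE_RANGE, VIP_RANGE), "Node and VIP ranges overlap"
print("Workload network split is valid: node and VIP ranges are separate and non-overlapping.")
```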

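After you deploy the HAProxy control plane VM, you can optionally confirm that the Data Plane API answers on the static Management IP address before you enable Workload Management. The following Python sketch assumes the appliance default Data Plane API port of 5556 and uses a hypothetical address and credentials; substitute the values you provided during OVA deployment. Certificate verification is disabled here because the appliance typically uses a self-signed certificate.

```python
import base64
import json
import ssl
import urllib.request

# Placeholder values: substitute the static Management IP, the Data Plane API
# port, and the credentials you set when deploying the HAProxy OVA.
MGMT_IP = "192.168.0.10"      # hypothetical Management network address
DATAPLANE_PORT = 5556         # appliance default; change if you customized it
USERNAME = "admin"            # hypothetical
PASSWORD = "replace-me"       # hypothetical

def check_dataplane_api():
    """Call GET /v2/info on the Data Plane API and print the response."""
    url = f"https://{MGMT_IP}:{DATAPLANE_PORT}/v2/info"
    token = base64.b64encode(f"{USERNAME}:{PASSWORD}".encode()).decode()
    req = urllib.request.Request(url, headers={"Authorization": f"Basic {token}"})
    # Skip certificate verification for this connectivity check only,
    # because the appliance certificate is usually self-signed.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with urllib.request.urlopen(req, context=ctx, timeout=10) as resp:
        print(resp.status, json.loads(resp.read()))

if __name__ == "__main__":
    check_dataplane_api()
```
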
The following diagram illustrates an HAProxy deployment using the Frontend network topology and indicates where each configuration field is expected during installation and configuration.