The vSphere cluster that you configure as a Supervisor Cluster with the vSphere networking stack must meet certain system requirements. You can also apply different topologies depending on the needs of your Kubernetes workloads and the underlying networking infrastructure.

Workload Networks

To configure a Supervisor Cluster with the vSphere networking stack, you must connect all hosts from the cluster to a vSphere Distributed Switch. Depending on the topology that you implement for the Supervisor Cluster, you create one or more distributed port groups. You then designate these port groups as Workload Networks for Supervisor Namespaces.

Workload Networks provide connectivity to the nodes of Tanzu Kubernetes clusters and to the Kubernetes control plane VMs. The Workload Network that provides connectivity to the Kubernetes control plane VMs is called the Primary Workload Network. Each Supervisor Cluster must have one Primary Workload Network. You must designate one of the distributed port groups as the Primary Workload Network for the Supervisor Cluster.

The Kubernetes control plane VMs on the Supervisor Cluster use three IP addresses from the IP address range that is assigned to the Primary Workload Network. Each node of a Tanzu Kubernetes cluster has a separate IP address assigned from the address range of the Workload Network that is configured with the namespace where the Tanzu Kubernetes cluster runs.

Allocation of IP Ranges

When you plan the networking topology of the Supervisor Cluster, plan for two types of IP ranges:
  • A range for allocating virtual IPs for HAProxy. The IP range that you configure for the virtual servers of HAProxy is reserved by the load balancer appliance. For example, if the virtual IP range is 192.168.1.0/24, no host in that range is accessible for traffic other than virtual IP traffic.
    Note: You must not configure a gateway within the HAProxy virtual IP range, because all routes to that gateway will fail.
  • An IP range for the nodes of the Supervisor Cluster and Tanzu Kubernetes clusters. Each Kubernetes control plane VM in the Supervisor Cluster has an IP address assigned, which makes three IP addresses in total. Each node of a Tanzu Kubernetes cluster also has a separate IP address assigned. You must assign a unique IP range to each Workload Network that you configure for a namespace.

For information about installing HAProxy, see KB 80735.

An example configuration with one /24 network:

  • Network: 192.168.120.0/24
  • HAProxy VIPs: 192.168.120.128/25 (the upper 128 addresses of the subnet)
  • 1 IP address for the HAProxy workload interface: 192.168.120.5

Depending on the IPs that are free within the first 128 addresses, you can define IP ranges for Workload Networks, for example:

  • 192.168.120.31-192.168.120.40 for the Primary Workload Network
  • 192.168.120.51-192.168.120.60 for another Workload Network
Note: The ranges that you define for Workload Networks must not overlap with the HAProxy VIP range.
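
To make the overlap rule concrete, the following is a minimal sketch, not part of the product, that uses the Python ipaddress module to check the example values above against the HAProxy virtual IP range. The range names and the helper function are illustrative assumptions.

# Minimal sketch: verify that Workload Network IP ranges do not overlap
# the HAProxy virtual IP range. Values follow the example configuration above.
import ipaddress

# HAProxy virtual IP range (upper half of 192.168.120.0/24 in this example)
haproxy_vip_range = ipaddress.ip_network("192.168.120.128/25")

# Candidate Workload Network ranges, expressed as (first IP, last IP)
workload_ranges = {
    "primary-workload-network": ("192.168.120.31", "192.168.120.40"),
    "workload-network-1": ("192.168.120.51", "192.168.120.60"),
}

def range_overlaps_network(first, last, network):
    """Return True if any address in [first, last] falls inside network."""
    start = ipaddress.ip_address(first)
    end = ipaddress.ip_address(last)
    return start <= network.broadcast_address and end >= network.network_address

for name, (first, last) in workload_ranges.items():
    if range_overlaps_network(first, last, haproxy_vip_range):
        print(f"{name}: overlaps the HAProxy VIP range, choose different IPs")
    else:
        print(f"{name}: OK, no overlap with {haproxy_vip_range}")

You can repeat the same check whenever you assign a new IP range to a Workload Network for a namespace.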

Topology with One Workload Network

In this topology, you configure a Supervisor Cluster with one network for the following components:

  • Kubernetes control plane VMs
  • The nodes of Tanzu Kubernetes clusters.
  • The HAProxy virtual IP range where external services and DevOps users connect. In this configuration, HAProxy is deployed with two virtual NICs, one connected to the management network, and a second one connected to the Primary Workload Network. You must plan for allocating Virtual IPs on a separate subnet from the Primary Workload Network.
You designate one port group as the Primary Workload Network for the Supervisor Cluster and then use the same port group as the Workload Network for Supervisor Namespaces. The Supervisor Cluster, Tanzu Kubernetes clusters, HAProxy, DevOps users, and external services all connect to the same distributed port group that is set as the Primary Workload Network.
Figure 1. Supervisor Cluster Backed by One Network

The traffic path for DevOps users or external applications is the following:
  1. The DevOps user or external service sends traffic to a virtual IP on the Workload Network subnet of the distributed port group.
  2. HAProxy load balances the virtual IP traffic to either a Tanzu Kubernetes cluster node IP or a control plane VM IP. HAProxy claims the virtual IP address so that it can load balance the traffic coming to that IP.
  3. The control plane VM or Tanzu Kubernetes cluster node delivers the traffic to the target pods running inside the Tanzu Kubernetes cluster.

Topology with an Isolated Workload Network

In this topology, you configure networks for the following components:
  • Kubernetes control plane VMs. A Primary Workload Network handles the traffic for the Kubernetes control plane VMs.
  • Tanzu Kubernetes cluster nodes. A Workload Network that you assign to all namespaces on the Supervisor Cluster connects the Tanzu Kubernetes cluster nodes.
  • HAProxy virtual IPs. In this configuration, the HAProxy VM is deployed with two virtual NICs. You can connect the HAProxy VM either to the Primary Workload Network or to the Workload Network that you use for namespaces. You can also connect HAProxy to a VM network that already exists in vSphere and is routable to the Primary and Workload Networks.
The Supervisor Cluster is connected to the distributed port group backing the Primary Workload Network, and the Tanzu Kubernetes clusters are connected to a distributed port group backing the Workload Network. The two port groups must be layer 3 routable to each other. You can implement layer 2 isolation through VLANs. Layer 3 traffic filtering is possible through IP firewalls and gateways.
Figure 2. Supervisor Cluster with an Isolated Workload Network

The traffic path for DevOps users or external services is the following:
  1. The DevOps user or external service sends traffic to a virtual IP. Traffic is routed to the network where HAProxy is connected.
  2. HAProxy load balances the virtual IP traffic to either a Tanzu Kubernetes cluster node IP or a control plane VM IP. HAProxy claims the virtual IP address so that it can load balance the traffic coming to that IP.
  3. The control plane VM or Tanzu Kubernetes cluster node delivers the traffic to the target pods running inside the Tanzu Kubernetes cluster.

Topology with Multiple Isolated Workload Networks

In this topology, you configure one port group to act as the Primary Workload Network and a dedicated port group to serve as the Workload Network for each namespace. HAProxy is deployed with two virtual NICs, and you can connect it either to the Primary Workload Network or to any of the Workload Networks. You can also use an existing VM network that is routable to the Primary and Workload Networks.

The traffic path for the DevOps users and external services in this topology is the same as with the isolated Workload Network topology.
Figure 3. Supervisor Cluster Backed by Multiple Isolated Workload Networks


HAProxy Configuration with Three Virtual NICs

In this configuration, you deploy the HAProxy VM with three virtual NICs, which connects HAProxy to a Frontend network. DevOps users and external services can access HAProxy through virtual IPs on the Frontend network.
Figure 4. HAProxy Deployed with Three Virtual NICs


Selecting Between the Possible Topologies

Before you select one of the possible topologies, assess the needs of your environment:

  1. Do you need layer 2 isolation between the Supervisor Cluster and Tanzu Kubernetes clusters?
    1. No: Use the simplest topology, with one Workload Network serving all components.
    2. Yes: Use the isolated Workload Network topology, with separate Primary and Workload Networks.
  2. Do you need further layer 2 isolation between your Tanzu Kubernetes clusters?
    1. No: Use the isolated Workload Network topology, with separate Primary and Workload Networks.
    2. Yes: Use the multiple Workload Networks topology, with a separate Workload Network for each namespace and a dedicated Primary Workload Network.
  3. Do you want to prevent your DevOps users and external services from directly routing to the Kubernetes control plane VMs and Tanzu Kubernetes cluster nodes?
    1. No: Use the two-NIC HAProxy configuration.
    2. Yes: Use the three-NIC HAProxy configuration.
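
The decision flow above can also be written down as a small helper for reference. This is a minimal sketch only; the function name and the topology labels are hypothetical shorthand for the configurations described in this section.

# Minimal sketch: map the three questions above to a topology and HAProxy layout.
def choose_topology(isolate_supervisor_from_tkcs: bool,
                    isolate_tkcs_from_each_other: bool,
                    hide_nodes_from_users: bool) -> str:
    """Return a description of the topology and HAProxy configuration to use."""
    if not isolate_supervisor_from_tkcs:
        topology = "one Workload Network serving all components"
    elif not isolate_tkcs_from_each_other:
        topology = "isolated Workload Network (separate Primary and Workload Networks)"
    else:
        topology = ("multiple isolated Workload Networks "
                    "(one Workload Network per namespace and a dedicated Primary Workload Network)")
    haproxy = ("three-NIC HAProxy configuration" if hide_nodes_from_users
               else "two-NIC HAProxy configuration")
    return topology + "; " + haproxy

# Example: isolation between clusters is required, nodes stay directly routable
print(choose_topology(True, True, False))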

Minimum Compute and Networking Requirements

Table 1. Minimum Compute Requirements
  • vCenter Server 7.0
    Minimum Deployment Size: Small
    CPU: 2
    Memory: 16 GB
    Storage: 290 GB
  • ESXi hosts 7.0
    Minimum Deployment Size: 3 ESXi hosts with 1 static IP per host; 4 ESXi hosts for vSAN, with at least 2 physical NICs. The hosts must be joined in a cluster with vSphere DRS and HA enabled. vSphere DRS must be in Fully Automated or Partially Automated mode.
    Note: Make sure that the names of the hosts that join the cluster use lowercase letters. Otherwise, the enablement of the cluster for Workload Management might fail.
    CPU: 8
    Memory: 64 GB per host
    Storage: Not applicable
  • Kubernetes control plane VMs
    Minimum Deployment Size: 3
    CPU: 4
    Memory: 16 GB
    Storage: 16 GB
Table 2. Minimum Networking Requirements
  • Static IPs for Kubernetes control plane VMs
    Minimum Quantity: Block of 5
    Required Configuration: A block of 5 consecutive static IP addresses to be assigned to the Kubernetes control plane VMs in the Supervisor Cluster.
  • Management traffic network
    Minimum Quantity: 1
    Required Configuration: A Management Network that is routable to the ESXi hosts, vCenter Server, and a DHCP server. The network must be able to access an image registry and have Internet connectivity if the image registry is on the external network. The image registry must be resolvable through DNS.
  • vSphere Distributed Switch
    Minimum Quantity: 1
    Required Configuration: All hosts from the cluster must be connected to a vSphere Distributed Switch.
  • HAProxy load balancer
    Minimum Quantity: 1
    Required Configuration: An instance of the HAProxy load balancer configured with the vCenter Server instance.
      • If the same HAProxy instance serves multiple Supervisor Clusters, it must be able to route traffic to and from all Workload Networks across all Supervisor Clusters. IP ranges across Workload Networks in all Supervisor Clusters that the HAProxy instance serves must not overlap.
      • A dedicated IP range for virtual IPs. The HAProxy VM must be the only owner of this virtual IP range. The range must not overlap with any IP range assigned to any Workload Network owned by any Supervisor Cluster.
      • The network that HAProxy uses to allocate virtual IPs must be routable to the Workload Networks used across all Supervisor Clusters to which HAProxy is connected.
  • Workload Networks
    Minimum Quantity: 1
    Required Configuration: At least one distributed port group must be created on the vSphere Distributed Switch and configured as the Primary Workload Network. Depending on the topology of choice, you can use the same distributed port group as the Workload Network of namespaces, or create more port groups and configure them as Workload Networks. Workload Networks must meet the following requirements:
      • Workload Networks that are used for Tanzu Kubernetes cluster traffic must be routable between each other and to the Supervisor Cluster Primary Workload Network.
      • Any Workload Network must be routable to the network that HAProxy uses for virtual IP allocation.
      • IP address ranges must not overlap across the Workload Networks within a Supervisor Cluster.
  • NTP and DNS Server
    Minimum Quantity: 1
    Required Configuration: A DNS server and an NTP server that can be used with vCenter Server.
    Note: Configure NTP on all ESXi hosts and on vCenter Server.
  • Management Network Subnet
    Minimum Quantity: 1
    Required Configuration: The subnet used for management traffic between ESXi hosts, vCenter Server, and the Kubernetes control plane. The size of the subnet must account for the following:
      • One IP address per host VMkernel adapter.
      • One IP address for the vCenter Server Appliance.
      • 5 IP addresses for the Kubernetes control plane: 1 for each of the 3 nodes, 1 for the virtual IP, and 1 for rolling cluster upgrade.
  • Management Network VLAN
    Minimum Quantity: 1
    Required Configuration: The VLAN ID of the Management Network subnet.
  • Physical Network MTU
    Minimum Quantity: 1600
    Required Configuration: The MTU size must be 1600 or greater on any network that carries overlay traffic.
  • Kubernetes services CIDR range
    Minimum Quantity: /16 private IP addresses
    Required Configuration: A private CIDR range to assign IP addresses to Kubernetes services. You must specify a unique Kubernetes services CIDR range for each Supervisor Cluster.
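
As a quick cross-check of the Management Network Subnet row above, the following minimal sketch adds up the listed address requirements. The host counts in the example calls are assumptions for illustration.

# Minimal sketch: estimate the number of management network IP addresses needed,
# following the Management Network Subnet row in Table 2.

def management_ips_needed(esxi_hosts: int) -> int:
    """Return the minimum number of management IPs for a Supervisor Cluster."""
    per_host_vmkernel = esxi_hosts      # one IP per host VMkernel adapter
    vcenter_appliance = 1               # one IP for the vCenter Server Appliance
    control_plane = 5                   # 3 nodes + 1 virtual IP + 1 for rolling upgrade
    return per_host_vmkernel + vcenter_appliance + control_plane

# Example: the minimum supported cluster of 3 ESXi hosts, or 4 hosts with vSAN
print(management_ips_needed(3))   # 9
print(management_ips_needed(4))   # 10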