To configure vSphere with Tanzu with the NSX Advanced Load Balancer, also known as the Avi Load Balancer, your environment must meet certain requirements. vSphere with Tanzu supports multiple topologies for Avi networking: a single vDS network for the Avi Service Engines and the load balancer services, or one vDS for the Avi management plane and a separate vDS for the NSX Advanced Load Balancer data traffic.

Workload Networks

To configure a Supervisor Cluster with the vSphere networking stack, you must connect all hosts from the cluster to a vSphere Distributed Switch. Depending on the topology that you implement for the Supervisor Cluster, you create one or more distributed port groups. You designate the port groups as Workload Networks to vSphere Namespaces.

Before you add a host to a Supervisor Cluster, you must add it to all the vSphere Distributed Switches that are part of the cluster.
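
If you prefer to script the port group creation instead of using the vSphere Client, a minimal pyVmomi sketch such as the following can create a distributed port group on the vDS. This is only an illustration: the vCenter Server address, credentials, switch name, port group name, port count, and VLAN ID are placeholders, not values from this guide.

```python
# Minimal pyVmomi sketch: create a distributed port group to use as a Workload Network.
# All names, credentials, and the VLAN ID below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(
    host="vcenter.example.com",          # placeholder vCenter Server FQDN
    user="administrator@vsphere.local",  # placeholder user
    pwd="********",
    sslContext=ssl._create_unverified_context(),  # lab only; validate certificates in production
)
content = si.RetrieveContent()

# Locate the vSphere Distributed Switch that the Supervisor Cluster hosts are connected to.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True
)
dvs = next(s for s in view.view if s.name == "tanzu-vds")  # placeholder vDS name

# Describe the port group: early binding, VLAN 100 (placeholder).
pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    name="workload-network-1",
    type="earlyBinding",
    numPorts=32,
    defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        vlan=vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(vlanId=100, inherited=False)
    ),
)
task = dvs.AddDVPortgroup_Task([pg_spec])  # returns a vCenter task; monitor it before using the port group
Disconnect(si)
```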

Workload Networks provide connectivity to the nodes of Tanzu Kubernetes clusters and to the Supervisor Cluster control plane VMs. The Workload Network that provides connectivity to the Kubernetes control plane VMs is called the Primary Workload Network. Each Supervisor Cluster must have one Primary Workload Network. You must designate one of the distributed port groups as the Primary Workload Network for the Supervisor Cluster.
Note: You add Workload Networks only when you enable the Supervisor Cluster; you cannot add them later.

The Kubernetes control plane VMs on the Supervisor Cluster use three IP addresses from the IP address range that is assigned to the Primary Workload Network. Each node of a Tanzu Kubernetes cluster has a separate IP address assigned from the address range of the Workload Network that is configured with the namespace where the Tanzu Kubernetes cluster runs.
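
As a planning aid, the following sketch (Python standard library only, with an illustrative address range and node count that are not values from this guide) estimates whether a Workload Network range can cover the three control plane addresses plus the Tanzu Kubernetes cluster nodes you plan to run on it:

```python
# Rough capacity check for a Workload Network IP range (illustrative values only).
import ipaddress

def range_size(first: str, last: str) -> int:
    """Count the addresses in an inclusive range, for example .10 through .59."""
    return int(ipaddress.ip_address(last)) - int(ipaddress.ip_address(first)) + 1

CONTROL_PLANE_IPS = 3       # the Supervisor Cluster control plane VMs draw 3 IPs from the Primary Workload Network
planned_cluster_nodes = 12  # Tanzu Kubernetes cluster nodes planned on this same network (assumption)

available = range_size("192.168.100.10", "192.168.100.59")  # example range assigned to the network
needed = CONTROL_PLANE_IPS + planned_cluster_nodes
print(f"available={available}, needed={needed}: "
      f"{'OK' if available >= needed else 'range is too small'}")
```

The 3 control plane addresses apply only to the Primary Workload Network; if a namespace uses a different Workload Network, size that network for its cluster nodes alone.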

Networking Requirements

The NSX Advanced Load Balancer requires two routable subnets:
  • The Management network. The Management network is where the Avi Controller, also called the Controller, resides. The Management network provides the Controller with connectivity to the vCenter Server, ESXi hosts, and the Supervisor Cluster control plane nodes. This network can use a vSphere Standard Switch (vSS), or a vSphere Distributed Switch (vDS).
  • The Data network. The Avi Service Engines, also called Service Engines, run on this network. This network requires a vSphere Distributed Switch (vDS) and a distributed port group. You must configure the vDS and port groups before installing the load balancer.

Allocation of IP Addresses

The Controller and the Service Engine are connected to the Management network. When you install and configure the NSX Advanced Load Balancer, provide a static, routable IP address for each Controller VM.

The Service Engines can use DHCP. If DHCP is unavailable, you can configure a pool of IP addresses for the Service Engines.
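
For example, a short Python sketch like the following can help you carve a contiguous static pool for the Service Engines out of the Data network subnet; the subnet, offset, and pool size are illustrative placeholders:

```python
# Carve a static IP pool for the Avi Service Engines out of the Data network
# subnet when DHCP is not available. All values below are illustrative.
import ipaddress

data_subnet = ipaddress.ip_network("192.168.200.0/24")  # example Data network subnet
pool_size = 16                                          # Service Engine addresses to reserve
reserved_leading = 9                                    # skip the gateway and other static hosts (example)

hosts = list(data_subnet.hosts())
pool = hosts[reserved_leading:reserved_leading + pool_size]
print(f"Static pool for Service Engines: {pool[0]}-{pool[-1]} ({len(pool)} addresses)")
```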

For more information, see Configure Static Routes.

Minimum Compute Requirements

The table lists the minimum compute requirements for vSphere networking with NSX Advanced Load Balancer.
Table 1. Minimum Compute Requirements
System | Minimum Deployment Size | CPU | Memory | Storage
vCenter Server 7.0.2 | Small | 2 | 16 GB | 290 GB
ESXi hosts 7.0 | 3 ESXi hosts with 1 static IP per host (see the notes below) | 8 | 64 GB per host | Not applicable
Kubernetes control plane VMs | 3 | 4 | 16 GB | 16 GB

Notes on the ESXi hosts:
  • If you are using vSAN: 3 ESXi hosts with at least 2 physical NICs each is the minimum; however, 4 ESXi hosts are recommended for resiliency during patching and upgrading.
  • The hosts must be joined in a cluster with vSphere DRS and HA enabled. vSphere DRS must be in Fully Automated or Partially Automated mode.
  • Make sure that the names of the hosts that join the cluster use lowercase letters. Otherwise, enablement of the cluster for Workload Management might fail.

Minimum Network Requirements

The table lists the minimum network requirements for vSphere networking with NSX Advanced Load Balancer.

Table 2. Minimum Networking Requirements
Component | Minimum Quantity | Required Configuration
Static IPs for Kubernetes control plane VMs | Block of 5 | A block of 5 consecutive static IP addresses to be assigned to the Kubernetes control plane VMs in the Supervisor Cluster.
Management traffic network | 1 | A Management network that is routable to the ESXi hosts, vCenter Server, the Supervisor Cluster, and the load balancer. The network must be able to access an image registry and have Internet connectivity if the image registry is on the external network. The image registry must be resolvable through DNS.
vSphere Distributed Switch | 1 | All hosts from the cluster must be connected to a vSphere Distributed Switch.
Workload Networks | 1 | At least one distributed port group must be created on the vSphere Distributed Switch and configured as the Primary Workload Network. Depending on the topology of choice, you can use the same distributed port group as the Workload Network of namespaces, or create more port groups and configure them as Workload Networks. Workload Networks must meet the following requirements:
  • Workload Networks that are used for Tanzu Kubernetes cluster traffic must be routable between each other and to the Supervisor Cluster Primary Workload Network.
  • Any Workload Network must be routable to the network that the NSX Advanced Load Balancer uses for virtual IP allocation.
  • IP address ranges must not overlap across the Workload Networks within a Supervisor Cluster.
NTP and DNS Server | 1 | A DNS server and an NTP server that can be used with vCenter Server.
Note: Configure NTP on all ESXi hosts and on vCenter Server.
Management Network Subnet | 1 | The Management network is where the Avi Controller, also called the Controller, resides. This subnet carries management traffic between the ESXi hosts, vCenter Server, and the Kubernetes control plane. The size of the subnet must account for the following:
  • One IP address per host VMkernel adapter.
  • One IP address for the vCenter Server Appliance.
  • 5 IP addresses for the Kubernetes control plane: 1 for each of the 3 nodes, 1 for the virtual IP, and 1 for rolling cluster upgrades.
Data Network Subnet | 1 | The Avi Service Engines, also called Service Engines, run on this network. Configure a pool of IP addresses for the Service Engines.
Management Network VLAN | 1 | The VLAN ID of the Management Network subnet.
Physical Network MTU | 1600 | The MTU size must be 1600 or greater on any network that carries overlay traffic.
Kubernetes services CIDR range | /16 private IP addresses | A private CIDR range used to assign IP addresses to Kubernetes services. You must specify a unique Kubernetes services CIDR range for each Supervisor Cluster.
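
Several of the constraints in Table 2 are easy to get wrong when planning address ranges. As a quick sanity check, a Python sketch along these lines (all subnets and addresses are illustrative placeholders, not recommended values) can validate a draft plan before you enable Workload Management:

```python
# Sanity-check an example network plan against the requirements in Table 2.
# All addresses below are illustrative placeholders.
import ipaddress

management_subnet = ipaddress.ip_network("10.10.0.0/24")
control_plane_start = ipaddress.ip_address("10.10.0.50")   # first of the block of 5
workload_networks = [
    ipaddress.ip_network("192.168.100.0/24"),  # Primary Workload Network
    ipaddress.ip_network("192.168.110.0/24"),  # additional namespace Workload Network
]
services_cidr = ipaddress.ip_network("10.96.0.0/16")        # Kubernetes services range

# Block of 5 consecutive static IPs for the control plane VMs, inside the Management subnet.
control_plane_block = [control_plane_start + i for i in range(5)]
assert all(ip in management_subnet for ip in control_plane_block), \
    "control plane block must fit in the Management network subnet"

# Workload Networks must not overlap each other.
for i, a in enumerate(workload_networks):
    for b in workload_networks[i + 1:]:
        assert not a.overlaps(b), f"Workload Networks overlap: {a} and {b}"

# The Kubernetes services CIDR must be a private /16 that does not collide with the other ranges.
assert services_cidr.is_private and services_cidr.prefixlen == 16
assert all(not services_cidr.overlaps(n) for n in workload_networks + [management_subnet])

print("example plan passes the basic checks")
```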