Check out the requirements for enabling a Supervisor with VDS networking and NSX Advanced Load Balancer on three vSphere clusters mapped to three vSphere Zones. To configure vSphere IaaS control plane with the NSX Advanced Load Balancer, also known as the Avi Load Balancer, your environment must meet certain requirements. vSphere IaaS control plane supports multiple topologies: a single VDS network for the Avi Service Engine and load balancer services, or one VDS for the Avi management plane and another VDS for the NSX Advanced Load Balancer.

Workload Networks

To configure a Supervisor with the VDS networking stack, you must connect all hosts from the vSphere clusters to a VDS. Depending on the topology that you implement for the Supervisor, you create one or more distributed port groups. You then assign these port groups as Workload Networks to vSphere Namespaces.

Workload Networks provide connectivity to the nodes of Tanzu Kubernetes Grid clusters, to VMs created through the Virtual Machine Service, and to the Supervisor control plane VMs. The Workload Network that provides connectivity to the Kubernetes control plane VMs is called a Primary Workload Network. Each Supervisor must have one Primary Workload Network. You must designate one of the distributed port groups as the Primary Workload Network for the Supervisor.

The Kubernetes control plane VMs on the Supervisor use three IP addresses from the IP address range that is assigned to the Primary Workload Network. Each node of a Tanzu Kubernetes Grid cluster has a separate IP address assigned from the address range of the Workload Network that is configured with the namespace where the Tanzu Kubernetes Grid cluster runs.
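
To make the address accounting concrete, the following Python sketch uses only the standard ipaddress module to show how the Supervisor control plane VMs draw three addresses from a Primary Workload Network range, while Tanzu Kubernetes Grid cluster nodes draw theirs from the Workload Network of their namespace. The subnet values and counts of nodes are assumed examples, not values prescribed by this guide.

    # Illustrative sketch only: subnets and node counts are hypothetical examples.
    from ipaddress import ip_network

    primary_workload = ip_network("192.168.100.0/24")    # assumed Primary Workload Network
    namespace_workload = ip_network("192.168.110.0/24")  # assumed namespace Workload Network

    # The Supervisor control plane VMs consume three IPs from the Primary Workload Network.
    control_plane_ips = list(primary_workload.hosts())[:3]
    print("Supervisor control plane IPs:", [str(ip) for ip in control_plane_ips])

    # Each Tanzu Kubernetes Grid cluster node gets its own IP from the Workload Network
    # of the vSphere Namespace where the cluster runs, for example 1 control plane + 3 workers.
    tkg_node_ips = list(namespace_workload.hosts())[:4]
    print("Example TKG cluster node IPs:", [str(ip) for ip in tkg_node_ips])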

Networking Requirements

The NSX Advanced Load Balancer requires two routable subnets:
  • The Management Network. The Management Network is where the Avi Controller, also called the Controller, resides. The Management Network provides the Controller with connectivity to the vCenter Server, ESXi hosts, and the Supervisor control plane nodes. This network is also where the management interfaces of the Avi Service Engines are placed. This network requires a VDS and a distributed port group.
  • The Data Network. The data interfaces of the Avi Service Engines, also called Service Engines, connect to this network. The load balancer Virtual IPs (VIPs) are assigned from this network. This network requires a VDS and distributed port groups. You must configure the VDS and port groups before installing the load balancer. A minimal sketch of this two-subnet layout follows the list.
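
The following minimal sketch models the two-subnet layout in Python. The port group names and subnets are assumed examples; the sketch only illustrates that the Management Network and the Data Network are separate routable subnets and that VIPs come out of the Data Network.

    # Minimal sketch of the two-network layout; all names and subnets are
    # illustrative assumptions, not values prescribed by this guide.
    from dataclasses import dataclass
    from ipaddress import ip_network

    @dataclass
    class AviNetwork:
        port_group: str   # distributed port group on the VDS
        subnet: str       # routable subnet backing the port group

    management = AviNetwork("avi-mgmt-pg", "10.10.10.0/24")  # Controller and SE management interfaces
    data = AviNetwork("avi-data-pg", "10.10.20.0/24")        # SE data interfaces and VIPs

    # The two networks must be separate routable subnets.
    assert not ip_network(management.subnet).overlaps(ip_network(data.subnet))

    # Load balancer VIPs are assigned out of the Data Network; here the upper
    # half of the assumed subnet is reserved as an example VIP range.
    vip_range = list(ip_network(data.subnet).hosts())[128:]
    print(f"Example VIP range: {vip_range[0]} - {vip_range[-1]}")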

Allocation of IP Addresses

The Controller and the Service Engine are connected to the Management Network. When you install and configure the NSX Advanced Load Balancer, provide a static, routable IP address for each Controller VM.

The Service Engines can use DHCP. If DHCP is unavailable, you can configure a pool of IP addresses for the Service Engines.
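
If you plan to use a static pool instead of DHCP, the sketch below shows one way to carve a contiguous Service Engine pool out of a subnet while keeping it clear of the Controller addresses. The subnet and all addresses are assumed examples, not prescribed values.

    # Hypothetical example of reserving a static IP pool for Service Engines
    # when DHCP is unavailable; the subnet and addresses are assumptions.
    from ipaddress import ip_address, ip_network

    network = ip_network("10.10.10.0/24")           # assumed Avi management subnet
    controller_ips = {ip_address("10.10.10.10"),    # assumed cluster VIP
                      ip_address("10.10.10.11"),    # assumed Controller VM IPs
                      ip_address("10.10.10.12"),
                      ip_address("10.10.10.13")}

    # Reserve a contiguous range for the Service Engines, kept clear of the Controller addresses.
    se_pool = [ip for ip in network.hosts()
               if ip_address("10.10.10.50") <= ip <= ip_address("10.10.10.80")
               and ip not in controller_ips]
    print(f"Service Engine pool: {se_pool[0]} - {se_pool[-1]} ({len(se_pool)} addresses)")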

Placement of vSphere Zones Across Physical Sites

You can distribute vSphere Zones across different physical sites, provided that the latency between the sites does not exceed 100 ms. For example, you can distribute the vSphere Zones across two physical sites: one vSphere Zone on the first site, and two vSphere Zones on the second site.

Minimum Compute Requirements for Testing Purposes

If you want to test the capabilities of the vSphere IaaS control plane, you can deploy the platform on a minimal testbed. However, be aware that such a testbed is not suitable for running production-scale workloads and does not provide HA at the cluster level.

Table 1. Minimum Compute Requirements for Testing Purposes

System: vCenter Server 8.0
  Minimum Deployment Size: Small
  CPU: 2
  Memory: 21 GB
  Storage: 290 GB

System: vSphere clusters
  Minimum Deployment Size:
  • 3 vSphere clusters
  • vSphere DRS and HA enabled on each vSphere cluster. vSphere DRS must be in Fully Automated or Partially Automated mode.
  • Independent storage and networking configured for each vSphere cluster.
  CPU: Not applicable
  Memory: Not applicable
  Storage: Not applicable

System: ESXi hosts 8.0
  Minimum Deployment Size, for each vSphere cluster:
  • Without vSAN: 1 ESXi host with 1 static IP per host.
  • With vSAN: 2 ESXi hosts per cluster with at least 2 physical NICs.
  Note: Make sure that the names of the hosts that join the cluster use lowercase letters. Otherwise, the activation of the Supervisor might fail.
  CPU: 8 per host
  Memory: 64 GB per host
  Storage: Not applicable

System: Kubernetes control plane VMs
  Minimum Deployment Size: 3
  CPU: 4
  Memory: 16 GB
  Storage: 16 GB

System: NSX Advanced Load Balancer Controller
  Minimum Deployment Size: Enterprise
  CPU: 4 (Small), 8 (Medium), 24 (Large)
  Memory: 12 GB (Small), 24 GB (Medium), 128 GB (Large)
  Storage: 128 GB (Small), 128 GB (Medium), 128 GB (Large)

Minimum Compute Requirements for Production

The table lists the minimum compute requirements for enabling a Supervisor with VDS networking and NSX Advanced Load Balancer on three vSphere Zones. Consider separating the management and workload domain as a best practice. The workload domain hosts the Supervisor where you run workloads. The management domain hosts all the management components such as vCenter Server.
Table 2. Minimum Compute Requirements

System: vCenter Server 8.0
  Minimum Deployment Size: Small
  CPU: 2
  Memory: 21 GB
  Storage: 290 GB

System: vSphere clusters
  Minimum Deployment Size:
  • 3 vSphere clusters
  • vSphere DRS and HA enabled on each vSphere cluster. vSphere DRS must be in Fully Automated or Partially Automated mode.
  • Independent storage and networking configured for each vSphere cluster.
  CPU: Not applicable
  Memory: Not applicable
  Storage: Not applicable

System: ESXi hosts 8.0
  Minimum Deployment Size, for each vSphere cluster:
  • Without vSAN: 3 ESXi hosts with 1 static IP per host.
  • With vSAN: 4 ESXi hosts per cluster with at least 2 physical NICs.
  Note: Make sure that the names of the hosts that join the cluster use lowercase letters. Otherwise, the enablement of the Supervisor might fail.
  CPU: 8 per host
  Memory: 64 GB per host
  Storage: Not applicable

System: Kubernetes control plane VMs
  Minimum Deployment Size: 3
  CPU: 4
  Memory: 16 GB
  Storage: 16 GB

System: NSX Advanced Load Balancer Controller
  Minimum Deployment Size: Enterprise. For production environments, it is recommended to install a cluster of 3 Controller VMs. A minimum of 2 Service Engine VMs is required for HA.
  CPU: 4 (Small), 8 (Medium), 24 (Large)
  Memory: 12 GB (Small), 24 GB (Medium), 128 GB (Large)
  Storage: 128 GB (Small), 128 GB (Medium), 128 GB (Large)
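
To put the production minimum in perspective, the short sketch below tallies the aggregate ESXi footprint for the non-vSAN case. It is simple arithmetic over the values in Table 2 and excludes vCenter Server, the Kubernetes control plane VMs, and the NSX Advanced Load Balancer Controller, which are sized separately.

    # Rough tally of the production ESXi footprint implied by Table 2 (without vSAN).
    clusters = 3
    hosts_per_cluster = 3      # without vSAN; 4 per cluster with vSAN
    cpu_per_host = 8           # minimum CPUs per host
    memory_per_host_gb = 64    # minimum memory per host in GB

    total_hosts = clusters * hosts_per_cluster
    print(f"ESXi hosts: {total_hosts}")
    print(f"Aggregate CPUs: {total_hosts * cpu_per_host}")
    print(f"Aggregate memory: {total_hosts * memory_per_host_gb} GB")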

Minimum Network Requirements

The table lists the minimum network requirements for enabling a Supervisor with VDS networking and NSX Advanced Load Balancer.
Table 3. Physical Network Requirements

Component: Layer 2 device
  Minimum Quantity: 1
  Required Configuration: The management network that handles the traffic of the Supervisor must be on the same Layer 2 device for all clusters that are part of the Supervisor. The Primary Workload Network must also be on the same Layer 2 device.

Component: Physical Network MTU
  Minimum Quantity: 1500
  Required Configuration: The MTU size must be 1500 or greater on any distributed port group.
Table 4. General Networking Requirements

Component: Latency
  Minimum Quantity: 100 ms
  Required Configuration: The maximum recommended latency between the clusters that are part of the vSphere Zones joined together in a Supervisor.

Component: NTP and DNS Server
  Minimum Quantity: 1
  Required Configuration: A DNS server and an NTP server that can be used with vCenter Server.
  Note: Configure NTP on all ESXi hosts and on vCenter Server.

Component: DHCP Server
  Minimum Quantity: 1
  Required Configuration: Optional. Configure a DHCP server to automatically acquire IP addresses for the management and Workload Networks, as well as floating IPs. The DHCP server must support client identifiers and provide compatible DNS servers, DNS search domains, and an NTP server.
  For the management network, all the IP addresses, such as the control plane VM IPs, a floating IP, DNS servers, DNS search domains, and an NTP server, are acquired automatically from the DHCP server.
  The DHCP configuration is used by the Supervisor. Load balancers might require static IP addresses for management. DHCP scopes should not overlap these static IPs. DHCP is not used for virtual IPs (VIPs).

Table 5. Management Network Requirements

Component: Static IPs for Kubernetes control plane VMs
  Minimum Quantity: Block of 5
  Required Configuration: A block of 5 consecutive static IP addresses to be assigned from the Management Network to the Kubernetes control plane VMs in the Supervisor.

Component: Management traffic network
  Minimum Quantity: 1
  Required Configuration: A Management Network that is routable to the ESXi hosts, vCenter Server, the Supervisor, and the load balancer.

Component: Management Network Subnet
  Minimum Quantity: 1
  Required Configuration: The Management Network is where the NSX Advanced Load Balancer Controller, also called the Controller, resides. It is also where the Service Engine management interfaces are connected. The Controller must be connected to the vCenter Server and ESXi management IPs from this network.
  Note: The Management Network and the Workload Network must be on different subnets. Assigning the same subnet to the Management and Workload Networks is not supported and can lead to system errors and problems.
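
The sketch below expresses two of the checks from Table 5 in Python: the Management and Workload Networks must be on different subnets, and the Supervisor needs a block of 5 consecutive static IPs from the Management Network. The subnets and the starting address are assumed examples.

    # Sketch of the checks implied by Table 5; subnets and the starting IP are
    # assumed example values, not prescribed ones.
    from ipaddress import ip_address, ip_network

    management_subnet = ip_network("10.10.10.0/24")    # assumed Management Network
    workload_subnet = ip_network("192.168.100.0/24")   # assumed Workload Network

    # The Management and Workload Networks must be on different subnets.
    assert not management_subnet.overlaps(workload_subnet), "subnets must not overlap"

    # A block of 5 consecutive static IPs from the Management Network for the
    # Kubernetes control plane VMs, for example starting at .30.
    start = ip_address("10.10.10.30")
    control_plane_block = [start + offset for offset in range(5)]
    assert all(ip in management_subnet for ip in control_plane_block)
    print("Control plane IP block:", [str(ip) for ip in control_plane_block])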
Table 6. Workload Network Requirements

Component: vSphere Distributed Switch
  Minimum Quantity: 1
  Required Configuration: All hosts from all three vSphere clusters must be connected to a VDS.

Component: Workload Networks
  Minimum Quantity: 1
  Required Configuration: At least one distributed port group must be created on the VDS and configured as the Primary Workload Network. Depending on the topology of choice, you can use the same distributed port group as the Workload Network of namespaces, or create more port groups and configure them as Workload Networks. Workload Networks must meet the following requirements:
  • Routability between any Workload Network and the network that the NSX Advanced Load Balancer uses for virtual IP allocation.
  • No overlapping of IP address ranges across all Workload Networks within a Supervisor.

Component: Kubernetes Services CIDR range
  Minimum Quantity: /16 private IP addresses
  Required Configuration: A private CIDR range used to assign IP addresses to Kubernetes services. You must specify a unique Kubernetes services CIDR range for each Supervisor.
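
A quick way to sanity-check a planned Workload Network layout against these requirements is sketched below. The CIDR values are assumed examples; replace them with your own ranges.

    # Sketch of the Workload Network constraints from Table 6; the CIDRs are assumed examples.
    from ipaddress import ip_network
    from itertools import combinations

    workload_networks = {
        "primary-workload": ip_network("192.168.100.0/24"),
        "namespace-a": ip_network("192.168.110.0/24"),
        "namespace-b": ip_network("192.168.120.0/24"),
    }
    services_cidr = ip_network("10.96.0.0/16")  # assumed Kubernetes services range

    # No overlapping IP address ranges across Workload Networks within a Supervisor.
    for (name_a, net_a), (name_b, net_b) in combinations(workload_networks.items(), 2):
        assert not net_a.overlaps(net_b), f"{name_a} overlaps {name_b}"

    # The Kubernetes services CIDR is a private /16 range, unique per Supervisor.
    assert services_cidr.is_private and services_cidr.prefixlen == 16
    print("Workload Network layout passes the basic checks.")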
Table 7. Load Balancer Networking Requirements

Component: NTP and DNS Server
  Minimum Quantity: 1
  Required Configuration: A DNS server IP is required for the NSX Advanced Load Balancer Controller to resolve the vCenter Server and ESXi host names correctly. NTP is optional, as public NTP servers are used by default.

Component: Data Network Subnet
  Minimum Quantity: 1
  Required Configuration: The data interfaces of the Avi Service Engines, also called Service Engines, connect to this network. Configure a pool of IP addresses for the Service Engines. The load balancer Virtual IPs (VIPs) are assigned from this network.

Component: NSX Advanced Load Balancer Controller IPs
  Minimum Quantity: 1 or 4
  Required Configuration: If you deploy the NSX Advanced Load Balancer Controller as a single node, one static IP is required for its management interface. For a 3-node cluster, 4 IP addresses are required: one for each Controller VM and one for the cluster VIP. These IPs must be from the Management Network subnet.

Component: VIP IPAM range
  Minimum Quantity: -
  Required Configuration: A private CIDR range used to assign IP addresses to Kubernetes services. The IPs must be from the Data Network Subnet. You must specify a unique Kubernetes services CIDR range for each Supervisor.

Ports and Protocols

This table lists the protocols and ports required for managing IP connectivity between the NSX Advanced Load Balancer, vCenter Server, and other vSphere IaaS control plane components.

Source: NSX Advanced Load Balancer Controller
  Destination: NSX Advanced Load Balancer Controller (in cluster)
  Protocol and Ports: TCP 22 (SSH), TCP 443 (HTTPS), TCP 8443 (HTTPS)

Source: Service Engine
  Destination: Service Engine in HA
  Protocol and Ports: TCP 9001 for VMware, LSC, and NSX-T cloud

Source: Service Engine
  Destination: NSX Advanced Load Balancer Controller
  Protocol and Ports: TCP 22 (SSH), TCP 8443 (HTTPS), UDP 123 (NTP)

Source: NSX Advanced Load Balancer Controller
  Destination: vCenter Server, ESXi, NSX-T Manager
  Protocol and Ports: TCP 443 (HTTPS)

Source: Supervisor control plane nodes (AKO)
  Destination: NSX Advanced Load Balancer Controller
  Protocol and Ports: TCP 443 (HTTPS)
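
Before enabling the Supervisor, you can optionally verify that firewalls allow the TCP flows listed above. The following sketch attempts plain TCP connections from the machine where you run it; the host names are placeholders for your own Controller and vCenter Server addresses, and UDP 123 (NTP) cannot be verified with a TCP probe.

    # Quick, optional reachability probe for the TCP ports listed above.
    # The addresses are placeholders; a failure can also mean the service is not listening yet.
    import socket

    checks = [
        ("avi-controller.example.local", 22),    # SSH to the Controller
        ("avi-controller.example.local", 443),   # HTTPS to the Controller
        ("avi-controller.example.local", 8443),  # HTTPS to the Controller
        ("vcenter.example.local", 443),          # Controller to vCenter Server
    ]

    for host, port in checks:
        try:
            with socket.create_connection((host, port), timeout=3):
                print(f"{host}:{port} reachable")
        except OSError as err:
            print(f"{host}:{port} not reachable ({err})")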

For more information about ports and protocols for the NSX Advanced Load Balancer, see https://ports.esp.vmware.com/home/NSX-Advanced-Load-Balancer.