Review the system requirements for enabling a Supervisor on three vSphere clusters mapped to vSphere Zones, using the NSX networking stack.

Requirements for a Management, Edge, and Workload Domain Cluster

Table 1. Minimum Compute Requirements

vCenter Server 8.0
  Minimum Deployment Size: Small
  CPU: 2
  Memory: 21 GB
  Storage: 290 GB

vSphere clusters
  • 3 vSphere clusters.
  • vSphere DRS and HA enabled on each vSphere cluster. vSphere DRS must be in fully automated or partially automated mode.
  • Independent storage and networking configured for each vSphere cluster.
  CPU, Memory, Storage: not applicable

ESXi hosts 8.0
  For each vSphere cluster:
  • Without vSAN: 3 ESXi hosts with 1 static IP per host.
  • With vSAN: 4 ESXi hosts per cluster, with at least 2 physical NICs.
  CPU: 8 per host
  Memory: 64 GB per host
  Storage: not applicable
  Note: Make sure that the names of the hosts that join the clusters use lowercase letters. Otherwise, enablement of the Supervisor might fail.

NSX Manager
  Minimum Deployment Size: Medium
  CPU: 6
  Memory: 24 GB
  Storage: 300 GB

NSX Edge 1
  Minimum Deployment Size: Large
  CPU: 8
  Memory: 32 GB
  Storage: 200 GB

NSX Edge 2
  Minimum Deployment Size: Large
  CPU: 8
  Memory: 32 GB
  Storage: 200 GB

Kubernetes control plane VMs
  Quantity: 3
  CPU: 4
  Memory: 16 GB
  Storage: 16 GB

Networking Requirements

Note: You cannot create IPv6 clusters with a vSphere 8 Supervisor, or register IPv6 clusters with Tanzu Mission Control.

Check the VMware Product Interoperability Matrix for the supported NSX versions.

Table 2. Physical Network Requirements

Layer 2 device
  Minimum Quantity: 1
  Required Configuration: The management network that handles the traffic of the Supervisor must be on the same Layer 2 device. At least one physical NIC per host handling management traffic must be connected to the same Layer 2 device.

Physical Network MTU
  Minimum: 1500
  Required Configuration: The MTU size must be 1500 or greater on any vSphere Distributed Switch port group.
Table 3. General Networking Requirements

NTP and DNS Server
  Minimum Quantity: 1
  Required Configuration: A DNS server and NTP server that can be used with vCenter Server.
  Note: Configure NTP on all ESXi hosts and on vCenter Server.

DHCP Server
  Minimum Quantity: 1
  Required Configuration: Optional. Configure a DHCP server to automatically acquire IP addresses for the Management and Workload Networks, as well as floating IPs. The DHCP server must support client identifiers and provide compatible DNS servers, DNS search domains, and an NTP server.
  For the management network, all the IP addresses, such as the control plane VM IPs, a floating IP, DNS servers, DNS search domains, and an NTP server, are acquired automatically from the DHCP server.
  The DHCP configuration is used by the Supervisor. Load balancers might require static IP addresses for management, and DHCP scopes must not overlap these static IPs. DHCP is not used for virtual IPs (VIPs).

Image Registry
  Minimum Quantity: 1
  Required Configuration: Access to a registry for the service.
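The split between DHCP-managed and static management addresses can be validated before enablement. The following is a minimal sketch using Python's standard `ipaddress` module; the scope and addresses are hypothetical placeholders, not values from this document:

```python
import ipaddress

# Hypothetical DHCP scope on the management network.
dhcp_scope = ipaddress.ip_network("192.168.10.64/26")

# Hypothetical static IPs reserved for load balancer management interfaces.
static_mgmt_ips = [ipaddress.ip_address("192.168.10.10"),
                   ipaddress.ip_address("192.168.10.11")]

# The DHCP scope must not overlap any statically assigned management IP.
overlaps = [ip for ip in static_mgmt_ips if ip in dhcp_scope]
print("conflicting static IPs:", overlaps)  # expect an empty list
```

Running a check like this against the planned scope catches overlaps before the Supervisor or load balancer is configured.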
Table 4. Management Network Requirements

Static IPs for Kubernetes control plane VMs
  Minimum Quantity: Block of 5
  Required Configuration: A block of 5 consecutive static IP addresses to be assigned from the Management Network to the Kubernetes control plane VMs in the Supervisor.

Management traffic network
  Minimum Quantity: 1
  Required Configuration: A Management Network that is routable to the ESXi hosts, vCenter Server, the Supervisor, and the load balancer.

Management Network Subnet
  Minimum Quantity: 1
  Required Configuration: The subnet used for management traffic between the ESXi hosts and vCenter Server, the NSX appliances, and the Kubernetes control plane. The subnet must be sized for:
  • One IP address per host VMkernel adapter.
  • One IP address for the vCenter Server Appliance.
  • One or four IP addresses for NSX Manager: four when performing NSX Manager clustering of 3 nodes with 1 virtual IP (VIP).
  • 5 IP addresses for the Kubernetes control plane: 1 for each of the 3 nodes, 1 for the virtual IP, and 1 for rolling cluster upgrades.
  Note: The Management Network and the Workload Network must be on different subnets. Assigning the same subnet to the Management and Workload Networks is not supported and can lead to system errors.

Management Network VLAN
  Minimum Quantity: 1
  Required Configuration: The VLAN ID of the Management Network subnet.
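The subnet-sizing items above translate directly into a minimum prefix length. The following is a rough sizing sketch using only the Python standard library; the example inputs (4 hosts per cluster, 3 clusters, clustered NSX Manager) are illustrative, not requirements from this document:

```python
import math

def min_mgmt_subnet_prefix(hosts_per_cluster: int, clusters: int,
                           nsx_manager_clustered: bool) -> int:
    """Return the longest (most specific) IPv4 prefix length whose subnet
    still fits the management-network IP requirements listed above."""
    needed = (
        hosts_per_cluster * clusters           # one IP per host VMkernel adapter
        + 1                                    # vCenter Server Appliance
        + (4 if nsx_manager_clustered else 1)  # NSX Manager: 3 nodes + VIP, or 1
        + 5                                    # Kubernetes control plane block
    )
    usable = needed + 2  # reserve network and broadcast addresses
    return 32 - math.ceil(math.log2(usable))

# Example: 3 clusters of 4 hosts with a clustered NSX Manager.
print(min_mgmt_subnet_prefix(4, 3, True))  # prints 27
```

For this example, 22 addresses are needed, so a /27 (30 usable addresses) is the smallest subnet that fits; a larger deployment shifts the result accordingly.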
Table 5. Workload Network Requirements

vSphere Pod CIDR range
  Minimum Quantity: /23 private IP addresses
  Required Configuration: A private CIDR range that provides IP addresses for vSphere Pods. These addresses are also used for the Tanzu Kubernetes Grid cluster nodes. You must specify a unique vSphere Pod CIDR range for each cluster.
  Note: The vSphere Pod CIDR range and the CIDR range for the Kubernetes service addresses must not overlap.

Kubernetes services CIDR range
  Minimum Quantity: /16 private IP addresses
  Required Configuration: A private CIDR range to assign IP addresses to Kubernetes services. You must specify a unique Kubernetes services CIDR range for each Supervisor.

Egress CIDR range
  Minimum Quantity: /27 static IP addresses
  Required Configuration: A private CIDR range used to determine the egress IP for Kubernetes services. Only one egress IP address is assigned to each namespace in the Supervisor. The egress IP is the address that external entities use to communicate with the services in the namespace. The number of egress IP addresses limits the number of egress policies the Supervisor can have. The minimum is a CIDR of /27 or larger.
  Note: Egress IP addresses and ingress IP addresses must not overlap.

Ingress CIDR
  Minimum Quantity: /27 static IP addresses
  Required Configuration: A private CIDR range to be used for the IP addresses of ingresses. Ingress lets you apply traffic policies to requests entering the Supervisor from external networks. The number of ingress IP addresses limits the number of ingresses the cluster can have. The minimum is a CIDR of /27 or larger.
  Note: Egress IP addresses and ingress IP addresses must not overlap.

Namespace Networks Range
  Minimum Quantity: 1
  Required Configuration: One or more IP CIDRs to create subnets/segments and assign IP addresses to workloads.

Namespace Subnet Prefix
  Minimum Quantity: 1
  Required Configuration: The subnet prefix that specifies the size of the subnet reserved for namespace segments. The default is 28.
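The non-overlap rules for the workload ranges are easy to get wrong when planning several Supervisors. The following is a minimal pre-flight check with Python's `ipaddress` module; the example ranges are hypothetical, not values from this document:

```python
import ipaddress
from itertools import combinations

# Hypothetical workload-network ranges for one Supervisor.
ranges = {
    "vSphere Pod CIDR":         ipaddress.ip_network("10.244.0.0/23"),
    "Kubernetes services CIDR": ipaddress.ip_network("10.96.0.0/16"),
    "Egress CIDR":              ipaddress.ip_network("192.168.100.0/27"),
    "Ingress CIDR":             ipaddress.ip_network("192.168.100.32/27"),
}

# No two ranges may overlap (covers the pod/services and egress/ingress rules).
for (name_a, net_a), (name_b, net_b) in combinations(ranges.items(), 2):
    assert not net_a.overlaps(net_b), f"{name_a} overlaps {name_b}"

# Egress and ingress must be /27 or larger (a smaller prefix length is larger).
assert ranges["Egress CIDR"].prefixlen <= 27
assert ranges["Ingress CIDR"].prefixlen <= 27
print("workload network plan OK")
```

The pairwise check is stricter than the minimum the tables require, but keeping all four ranges disjoint avoids routing ambiguity.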
Table 6. NSX Requirements

VLANs
  Minimum Quantity: 3
  Required Configuration: VLANs and IP addresses for the tunnel endpoints (TEPs). The ESXi host TEPs and the Edge TEPs must be routable to each other.
  VLAN IP addresses are required for the following:
  • ESXi host TEP.
  • Edge TEP using a static IP.
  • Tier-0 gateway and uplink for the Edge transport node.
  Note: The ESXi host TEP and the Edge TEP must have an MTU size greater than 1600.

  ESXi hosts and NSX Edge nodes act as tunnel endpoints, and a tunnel endpoint (TEP) IP address is assigned to each host and Edge node. Because the TEP IPs of the ESXi hosts create an overlay tunnel with the TEP IPs on the Edge nodes, the VLAN IP addresses must be routable.

  An additional VLAN is required to provide North-South connectivity to the Tier-0 gateway.

  IP pools can be shared across clusters. However, the host overlay IP pool/VLAN must not be shared with the Edge overlay IP pool/VLAN.

  Note: If the host TEP and the Edge TEP use different physical NICs, they can use the same VLAN.

Tier-0 Uplink IP
  Minimum Quantity: /24 private IP addresses
  Required Configuration: The IP subnet used for the Tier-0 uplink. The requirements for the Tier-0 uplink IP addresses are as follows:
  • 1 IP address, if you do not use Edge redundancy.
  • 4 IP addresses, if you use BGP and Edge redundancy: 2 IP addresses per Edge.
  • 3 IP addresses, if you use static routes and Edge redundancy.
  The Edge management IP, subnet, and gateway, and the uplink IP, subnet, and gateway must each be unique.
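The Tier-0 uplink IP count depends on the routing mode and whether Edge redundancy is used. The following helper just encodes the three cases listed above; the mode names ("bgp", "static") are illustrative labels for this sketch, not NSX API values:

```python
def tier0_uplink_ips(routing: str, edge_redundancy: bool) -> int:
    """IP count for the Tier-0 uplink, per the requirements listed above."""
    if not edge_redundancy:
        return 1            # 1 IP without Edge redundancy
    if routing == "bgp":
        return 4            # BGP with Edge redundancy: 2 IPs per Edge
    if routing == "static":
        return 3            # static routes with Edge redundancy
    raise ValueError(f"unknown routing mode: {routing}")

print(tier0_uplink_ips("bgp", True))     # prints 4
print(tier0_uplink_ips("static", True))  # prints 3
```

A /24 uplink subnet comfortably covers all three cases; the helper is only useful as a reminder of which case applies to a given design.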