The vSphere cluster that you configure as a Supervisor Cluster with the NSX-T Data Center networking stack must meet certain system requirements. You can also apply different topologies depending on the needs of your Kubernetes workloads and the underlying networking infrastructure.

Configuration Limits for vSphere with Tanzu Clusters

VMware provides configuration limits in the VMware Configuration Maximums tool.

For configuration limits specific to vSphere with Tanzu, including Supervisor Clusters and Tanzu Kubernetes clusters, select vSphere > vSphere 7.0 > vSphere with Kubernetes > VMware Tanzu Kubernetes Grid Service for vSphere and click View Limits.

Topology for a Management, Edge, and Workload Domain Cluster

You can deploy vSphere with Tanzu with combined management, Edge, and workload management functions on a single vSphere cluster.

Figure 1. Management, Edge, and Workload Management Cluster

Table 1. Minimum Compute Requirements for the Management, Edge, and Workload Management Cluster
System | Minimum Deployment Size | CPU | Memory | Storage
vCenter Server 7.0 | Small | 2 | 16 GB | 290 GB
ESXi hosts 7.0 | 3 ESXi hosts with 1 static IP per host, or 4 ESXi hosts for vSAN with at least 2 physical NICs. The hosts must be joined in a cluster with vSphere DRS and HA enabled, and vSphere DRS must be in Fully Automated or Partially Automated mode (a verification sketch follows this table). Note: Make sure that the names of the hosts that join the cluster use lowercase letters. Otherwise, the enablement of the cluster for Workload Management might fail. | 8 | 64 GB per host | Not applicable
NSX Manager | Medium | 6 | 24 GB | 300 GB
NSX Edge 1 | Large | 8 | 32 GB | 200 GB
NSX Edge 2 | Large | 8 | 32 GB | 200 GB
Kubernetes control plane VMs | 3 | 4 | 16 GB | 16 GB
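
The cluster prerequisites in Table 1, namely vSphere DRS and HA enabled, DRS in Fully Automated or Partially Automated mode, and lowercase host names, can be verified before you enable Workload Management. The following is a minimal sketch that assumes the open-source pyvmomi library and placeholder vCenter Server connection details; it only reads the cluster configuration and is not part of the product.

  import ssl
  from pyVim.connect import SmartConnect, Disconnect
  from pyVmomi import vim

  # Placeholder connection details; replace with your environment's values.
  ctx = ssl._create_unverified_context()
  si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                    pwd="password", sslContext=ctx)
  try:
      content = si.RetrieveContent()
      view = content.viewManager.CreateContainerView(
          content.rootFolder, [vim.ClusterComputeResource], True)
      for cluster in view.view:
          cfg = cluster.configurationEx
          # DRS must be enabled and in Fully Automated or Partially Automated mode.
          drs_ok = cfg.drsConfig.enabled and cfg.drsConfig.defaultVmBehavior in (
              "fullyAutomated", "partiallyAutomated")
          # vSphere HA must be enabled on the cluster.
          ha_ok = cfg.dasConfig.enabled
          # Host names must be lowercase, or enabling Workload Management might fail.
          bad_hosts = [h.name for h in cluster.host if h.name != h.name.lower()]
          print(f"{cluster.name}: DRS ok={drs_ok}, HA ok={ha_ok}, "
                f"hosts with uppercase names={bad_hosts or 'none'}")
      view.Destroy()
  finally:
      Disconnect(si)

The same check applies to the Workload Management cluster in Table 3, where vSphere DRS must be in Fully Automated mode.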

Topology with Separate Management and Edge Cluster and Workload Management Cluster

You can deploy vSphere with Tanzu on two clusters: one for the Management and Edge functions, and another dedicated to Workload Management.

Figure 2. Management and Edge and Workload Management Clusters

Table 2. Minimum Compute Requirements for the Management and Edge Cluster
System | Minimum Deployment Size | CPU | Memory | Storage
vCenter Server 7.0 | Small | 2 | 16 GB | 290 GB
ESXi hosts 7.0 | 2 ESXi hosts | 8 | 64 GB per host | Not applicable
NSX Manager | Medium | 6 | 24 GB | 300 GB
NSX Edge 1 | Large | 8 | 32 GB | 200 GB
NSX Edge 2 | Large | 8 | 32 GB | 200 GB
Table 3. Minimum Compute Requirements for the Workload Management Cluster
System | Minimum Deployment Size | CPU | Memory | Storage
vCenter Server 7.0 | Small | 2 | 16 GB | 290 GB
ESXi hosts 7.0 | 3 ESXi hosts with 1 static IP per host, or 4 ESXi hosts for vSAN with at least 2 physical NICs. The hosts must be joined in a cluster with vSphere DRS and HA enabled, and vSphere DRS must be in Fully Automated mode. Note: Make sure that the names of the hosts that join the cluster use lowercase letters. Otherwise, the enablement of the cluster for Workload Management might fail. | 8 | 64 GB per host | Not applicable
Kubernetes control plane VMs | 3 | 4 | 16 GB | 16 GB

Networking Requirements

Regardless of the topology that you implement for Kubernetes workload management in vSphere, your deployment must meet the following networking requirements:

Component | Minimum Quantity | Required Configuration
Static IPs for Kubernetes control plane VMs | Block of 5 | A block of 5 consecutive static IP addresses to be assigned to the Kubernetes control plane VMs in the Supervisor Cluster.
Management traffic network | 1 | A Management Network that is routable to the ESXi hosts, vCenter Server, and a DHCP server. The network must be able to access a container registry and have Internet connectivity if the container registry is on the external network. The container registry must be resolvable through DNS, and the Egress setting described below must be able to reach it.
NTP and DNS Server | 1 | A DNS server and NTP server that can be used by vCenter Server.
Note: Configure NTP on all ESXi hosts, vCenter Server systems, and NSX Manager instances.
Image Registry | Access to a registry for the service.
Management Network Subnet | 1 | The subnet used for management traffic between ESXi hosts, vCenter Server, NSX Appliances, and the Kubernetes control plane. The subnet must be sized to provide the following (see the sizing sketch after this table):
  • One IP address per host VMkernel adapter.
  • One IP address for the vCenter Server Appliance.
  • One or four IP addresses for NSX Manager: four when clustering three NSX Manager nodes behind one virtual IP (VIP).
  • 5 IP addresses for the Kubernetes control plane: 1 for each of the 3 nodes, 1 for the virtual IP, and 1 for rolling cluster upgrade.
Management Network VLAN | 1 | The VLAN ID of the Management Network subnet.
VLANs | 3 | IP addresses on these VLANs are used for the tunnel endpoints (TEPs). The ESXi host TEPs and the Edge TEPs must be routable.
VLAN IP addresses are required for the following:
  • ESXi host VTEP
  • Edge VTEP using the static IP
  • Tier-0 gateway and uplink for the transport node.
Note: The ESXi host VTEP and the Edge VTEP must have an MTU size greater than 1600.

ESXi hosts and NSX-T Edge nodes act as tunnel endpoints, and a Tunnel End Point (TEP) IP is assigned to each host and Edge node.

Because the TEP IPs of the ESXi hosts create an overlay tunnel with the TEP IPs on the Edge nodes, the VLAN IP addresses must be routable.

An additional VLAN is required to provide North-South connectivity to the Tier-0 gateway.

IP pools can be shared across clusters. However, the host overlay IP pool/VLAN must not be shared with the Edge overlay IP pool/VLAN.

Note: If the host TEP and the Edge TEP use different physical NICs, they can use the same VLAN.
Tier-0 Uplink IP | /24 Private IP addresses | The IP subnet used for the Tier-0 uplink. The requirements for the IP addresses of the Tier-0 uplink are as follows:
  • 1 IP, if you do not use Edge redundancy.
  • 4 IPs, if you use BGP and Edge redundancy, 2 IP addresses per Edge.
  • 3 IPs, if you use static routes and Edge redundancy.

The Edge Management IP, subnet, and gateway, and the Uplink IP, subnet, and gateway must be unique.

Physical Network MTU | 1600 | The MTU size must be 1600 or greater on any network that carries overlay traffic.
vSphere Pod CIDR range | /24 Private IP addresses | A private CIDR range that provides IP addresses for vSphere Pods. These addresses are also used for the Tanzu Kubernetes cluster nodes.
You must specify a unique vSphere Pod CIDR range for each cluster.
Note: The vSphere Pod CIDR range and the CIDR range for the Kubernetes service addresses must not overlap.
Kubernetes services CIDR range | /16 Private IP addresses | A private CIDR range to assign IP addresses to Kubernetes services. You must specify a unique Kubernetes services CIDR range for each Supervisor Cluster.
Egress CIDR range | /27 Static IP addresses | A private CIDR range that determines the egress IP addresses for Kubernetes services. Only one egress IP address is assigned for each namespace in the Supervisor Cluster. The egress IP is the address that external entities use to communicate with the services in the namespace. The number of egress IP addresses limits the number of egress policies the Supervisor Cluster can have.
The minimum is a /27 CIDR or larger. For example, 10.174.4.96/27.
Note: Egress IP addresses and ingress IP addresses must not overlap.
Ingress CIDR range | /27 Static IP addresses | A private CIDR range to be used for IP addresses of ingresses. Ingress lets you apply traffic policies to requests entering the Supervisor Cluster from external networks. The number of ingress IP addresses limits the number of ingresses the cluster can have.
The minimum is a /27 CIDR or larger.
Note: Egress IP addresses and ingress IP addresses must not overlap.
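
The address-plan figures in this table can be sanity-checked with a short script before you enable Workload Management. The following is a minimal sketch that uses Python's standard ipaddress module; every network value shown is a placeholder, and the checks cover only the sizing and overlap rules listed above.

  import ipaddress

  # Placeholder values; replace with your planned ranges.
  hosts = 3                 # ESXi hosts (one VMkernel adapter IP each)
  nsx_manager_ips = 4       # 3-node NSX Manager cluster plus 1 VIP; use 1 for a single node
  control_plane_ips = 5     # 3 nodes + 1 virtual IP + 1 for rolling cluster upgrade

  mgmt_subnet   = ipaddress.ip_network("192.168.10.0/27")
  pod_cidr      = ipaddress.ip_network("10.244.0.0/24")
  services_cidr = ipaddress.ip_network("10.96.0.0/16")
  egress_cidr   = ipaddress.ip_network("10.174.4.96/27")
  ingress_cidr  = ipaddress.ip_network("10.174.4.128/27")

  # Management subnet: one IP per host VMkernel adapter, one for vCenter Server,
  # one or four for NSX Manager, and five for the Kubernetes control plane.
  required = hosts + 1 + nsx_manager_ips + control_plane_ips
  usable = mgmt_subnet.num_addresses - 2   # exclude network and broadcast addresses
  assert usable >= required, f"management subnet has {usable} usable IPs, {required} required"

  # Minimum prefix sizes from the table.
  assert pod_cidr.prefixlen <= 24, "vSphere Pod CIDR must be /24 or larger"
  assert services_cidr.prefixlen <= 16, "Kubernetes services CIDR must be /16 or larger"
  assert egress_cidr.prefixlen <= 27 and ingress_cidr.prefixlen <= 27, \
      "egress and ingress CIDR ranges must be /27 or larger"

  # The pod and services ranges must not overlap, and neither must egress and ingress.
  assert not pod_cidr.overlaps(services_cidr), "pod and services CIDR ranges overlap"
  assert not egress_cidr.overlaps(ingress_cidr), "egress and ingress CIDR ranges overlap"

  # The Kubernetes control plane needs a block of 5 consecutive IPs in the management subnet.
  block_start = ipaddress.ip_address("192.168.10.10")
  assert all(block_start + i in mgmt_subnet for i in range(5)), \
      "control plane block must be 5 consecutive IPs inside the management subnet"

  print("Address plan meets the minimum requirements listed above.")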