Review the system requirements for configuring vSphere with Tanzu on a vSphere cluster by using the NSX networking stack.

Configuration Limits for vSphere with Tanzu Clusters

VMware provides configuration limits in the VMware Configuration Maximums tool.

For configuration limits specific to vSphere with Tanzu, including Supervisor Clusters and Tanzu Kubernetes clusters, select vSphere > vSphere 7.0 > vSphere with Kubernetes > VMware Tanzu Kubernetes Grid Service for vSphere and click View Limits.

Minimum Compute Requirements for the Management and Edge Cluster

You can deploy vSphere with Tanzu across two clusters: one cluster for the Management and Edge functions, and another dedicated to Workload Management.

System | Minimum Deployment Size | CPU | Memory | Storage
vCenter Server 7.0 | Small | 2 | 16 GB | 290 GB
ESXi hosts 7.0 | 2 ESXi hosts | 8 | 64 GB per host | Not applicable
NSX Manager | Medium | 6 | 24 GB | 300 GB
NSX Edge 1 | Large | 8 | 32 GB | 200 GB
NSX Edge 2 | Large | 8 | 32 GB | 200 GB
Note: Verify that all ESXi hosts participating in the vSphere Cluster on which you plan to configure vSphere with Tanzu are prepared as NSX transport nodes. For more information, see https://kb.vmware.com/s/article/95820 and Prepare ESXi Cluster Hosts as Transport Nodes in the NSX documentation.
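If you want to spot-check the preparation state from a script, the following minimal sketch queries the NSX Manager API for transport nodes so you can confirm that every ESXi host in the cluster appears in the list. The manager address and credentials are placeholders, and basic authentication with a self-signed certificate is assumed; treat the output only as a starting point for verification, not as an official procedure.

    # Minimal sketch (assumptions: NSX Manager reachable at NSX_MANAGER, basic
    # authentication enabled, self-signed certificate in use).
    import requests

    NSX_MANAGER = "nsx-manager.example.com"   # placeholder address
    AUTH = ("admin", "password")              # replace with real credentials

    resp = requests.get(
        f"https://{NSX_MANAGER}/api/v1/transport-nodes",
        auth=AUTH,
        verify=False,  # lab-only: skip certificate validation
    )
    resp.raise_for_status()

    # Print each transport node so you can check that all ESXi hosts in the
    # vSphere cluster are listed as prepared transport nodes.
    for node in resp.json().get("results", []):
        print(node.get("display_name"), node.get("id"))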

Minimum Compute Requirements for the Workload Domain Cluster

System | Minimum Deployment Size | CPU | Memory | Storage
ESXi hosts 7.0 | 3 ESXi hosts with 1 static IP per host (see the notes below) | 8 | 64 GB per host | Not applicable
Kubernetes control plane VMs | 3 | 4 | 16 GB | 16 GB

If you are using vSAN: 3 ESXi hosts with at least 2 physical NICs per host is the minimum; however, 4 ESXi hosts are recommended for resiliency during patching and upgrading.

The hosts must be joined in a cluster with vSphere DRS and HA enabled, and vSphere DRS must be in Fully Automated mode; a verification sketch follows this table.

Caution: Do not disable vSphere DRS after you configure the Supervisor Cluster. Having DRS enabled at all times is a mandatory prerequisite for running workloads on the Supervisor Cluster. Disabling DRS breaks your Tanzu Kubernetes clusters.
Note: Verify that all ESXi hosts participating in the vSphere Cluster on which you plan to configure vSphere with Tanzu are prepared as NSX transport nodes. For more information, see https://kb.vmware.com/s/article/95820 and Prepare ESXi Cluster Hosts as Transport Nodes in the NSX documentation.
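The check below is a minimal, unofficial sketch using pyVmomi (an assumption; it is not part of the vSphere with Tanzu setup) that reports whether DRS is enabled in Fully Automated mode and whether vSphere HA is enabled on the target cluster. The vCenter address, credentials, and cluster name are placeholders.

    # Minimal sketch (assumptions: pyVmomi installed; placeholder vCenter
    # address, credentials, and cluster name; adjust SSL handling as needed).
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab-only: skip certificate validation
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.ClusterComputeResource], True)
        for cluster in view.view:
            if cluster.name != "workload-cluster":   # placeholder cluster name
                continue
            cfg = cluster.configurationEx
            drs_ok = cfg.drsConfig.enabled and \
                cfg.drsConfig.defaultVmBehavior == "fullyAutomated"
            ha_ok = cfg.dasConfig.enabled
            print(f"{cluster.name}: DRS fully automated={drs_ok}, HA enabled={ha_ok}")
    finally:
        Disconnect(si)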

Networking Requirements

Regardless of the topology that you implement for Kubernetes workload management in vSphere, your deployment must meet the networking requirements listed in the table below.
Note: You cannot create IPv6 clusters with a vSphere 7 Supervisor Cluster, or register IPv6 clusters with Tanzu Mission Control.
Component Minimum Quantity Required Configuration
Physical NIC At least 2 physical NICs per host if vSAN is used To use Antrea CNI and for optimal NSX performance, each physical NIC on each participating ESXi host must support GENEVE encapsulation and have it enabled.
Static IPs for Kubernetes control plane VMs Block of 5 A block of 5 consecutive static IP addresses to be assigned to the Kubernetes control plane VMs in the Supervisor Cluster.
Management traffic network 1 A Management Network that is routable to the ESXi hosts, vCenter Server, and a DHCP server. The network must be able to access a container registry and have Internet connectivity if the container registry is on the external network. The container registry must be resolvable through DNS, and the Egress setting described below must be able to reach it.
NTP and DNS Server 1 A DNS server and NTP server that can be used for the vCenter Server.
Note: Configure NTP on all ESXi hosts, vCenter Server systems, and NSX Manager instances.
DHCP Server 1 Optional. Configure a DHCP server to automatically acquire IP addresses for the Management Network. The DHCP server must support Client Identifiers and provide compatible DNS servers, DNS search domains, and an NTP server.
Image Registry 1 Access to a container registry for the service.
Management Network Subnet 1
The subnet used for management traffic between ESXi hosts, vCenter Server, NSX appliances, and the Kubernetes control plane. Size the subnet to accommodate the following (a sizing sketch follows the note below):
  • One IP address per host VMkernel adapter.
  • One IP address for the vCenter Server Appliance.
  • One or four IP addresses for NSX Manager: four when you deploy a 3-node NSX Manager cluster with 1 virtual IP (VIP).
  • 5 IP addresses for the Kubernetes control plane: 1 for each of the 3 nodes, 1 for the virtual IP, and 1 for rolling cluster upgrades.
Note: The Management Network and the Workload Network must be on different subnets. Assigning the same subnet to the Management and the Workload networks is not supported and can lead to system errors and problems.
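To make the sizing arithmetic concrete, the short sketch below adds up the addresses listed above and derives the smallest subnet prefix that can hold them. The host count and the choice of a 3-node NSX Manager cluster are example assumptions; substitute your own values.

    # Example sizing sketch; the host count and NSX Manager deployment choice
    # below are assumptions for illustration.
    import math

    esxi_hosts = 4              # one VMkernel adapter IP per host
    vcenter = 1                 # vCenter Server Appliance
    nsx_manager = 4             # 3-node NSX Manager cluster + 1 VIP (use 1 for a single node)
    k8s_control_plane = 5       # 3 nodes + 1 VIP + 1 for rolling upgrades

    required = esxi_hosts + vcenter + nsx_manager + k8s_control_plane

    # Add 2 for the network and broadcast addresses, then find the smallest
    # prefix that fits. For this example, a /28 (16 addresses, 14 usable) fits;
    # size up if the cluster grows.
    prefix = 32 - math.ceil(math.log2(required + 2))
    print(f"Need {required} IPs; smallest subnet that fits: /{prefix}")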
Management Network VLAN 1 The VLAN ID of the Management Network subnet.
VLANs 3 IP addresses on these VLANs are used for the tunnel endpoints (TEPs). The ESXi host TEPs and the Edge TEPs must be routable to each other.
VLAN IP addresses are required for the following:
  • ESXi host TEP
  • Edge TEP using a static IP
  • Tier-0 gateway uplink for the Edge transport node.
Note: The networks carrying ESXi host TEP and Edge TEP traffic must have an MTU of at least 1600.

ESXi hosts and NSX-T Edge nodes act as tunnel endpoints, and a Tunnel End Point (TEP) IP is assigned to each host and Edge node.

Because the ESXi host TEP IPs establish overlay tunnels with the TEP IPs on the Edge nodes, the two VLANs must be routable to each other.

An additional VLAN is required to provide North-South connectivity to the Tier-0 gateway.

IP pools can be shared across clusters. However, the host overlay IP pool/VLAN must not be shared with the Edge overlay IP pool/VLAN.

Note: If the host TEPs and the Edge TEPs use different physical NICs, they can use the same VLAN.
Tier-0 Uplink IP /24 Private IP addresses The IP subnet used for the Tier-0 uplink. The requirements for the IP address of the Tier-0 uplink are as follows:
  • 1 IP, if you do not use Edge redundancy.
  • 4 IPs, if you use BGP and Edge redundancy, 2 IP addresses per Edge.
  • 3 IPs, if you use static routes and Edge redundancy.

The Edge management IP, subnet, and gateway, and the uplink IP, subnet, and gateway must all be unique.

Physical Network MTU 1600 The MTU size must be 1600 or greater on any network that carries overlay traffic.
vSphere Pod CIDR range /23 Private IP addresses A private CIDR range that provides IP addresses for vSphere Pods. These addresses are also used for the Tanzu Kubernetes cluster nodes.
You must specify a unique vSphere Pod CIDR range for each cluster.
Note: The vSphere Pod CIDR range and the CIDR range for the Kubernetes service addresses must not overlap.
Kubernetes services CIDR range /16 Private IP addresses A private CIDR range to assign IP addresses to Kubernetes services. You must specify a unique Kubernetes services CIDR range for each Supervisor Cluster.
Egress CIDR range /27 Static IP Addresses A private CIDR range that provides the egress IP addresses for Kubernetes services. Only one egress IP address is assigned for each namespace in the Supervisor Cluster. The egress IP is the address that external entities use to communicate with the services in the namespace. The number of egress IP addresses limits the number of egress policies the Supervisor Cluster can have.
The minimum is a /27 subnet or larger. For example, 10.174.4.96/27.
Note: Egress IP addresses and ingress IP addresses must not overlap.
Ingress CIDR /27 Static IP Addresses A private CIDR range that provides the IP addresses for ingresses. Ingress lets you apply traffic policies to requests entering the Supervisor Cluster from external networks. The number of ingress IP addresses limits the number of ingresses the cluster can have.
The minimum is a /27 subnet or larger.
Note: Egress IP addresses and ingress IP addresses must not overlap.
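The sketch below illustrates the range constraints called out in this table: the vSphere Pod and Kubernetes services CIDRs must not overlap, the egress and ingress CIDRs must not overlap, and the egress and ingress ranges must each be at least a /27. All CIDR values are placeholders; substitute the ranges planned for your Supervisor Cluster.

    # Placeholder ranges for illustration; substitute the values planned for
    # your Supervisor Cluster.
    from ipaddress import ip_network

    pod_cidr      = ip_network("10.244.0.0/23")
    services_cidr = ip_network("10.96.0.0/16")
    egress_cidr   = ip_network("10.174.4.96/27")
    ingress_cidr  = ip_network("10.174.4.128/27")

    # The pod and services ranges must not overlap.
    assert not pod_cidr.overlaps(services_cidr), "Pod and services CIDRs overlap"
    # The egress and ingress ranges must not overlap.
    assert not egress_cidr.overlaps(ingress_cidr), "Egress and ingress CIDRs overlap"
    # Egress and ingress must each be a /27 or larger block.
    assert egress_cidr.prefixlen <= 27, "Egress range smaller than /27"
    assert ingress_cidr.prefixlen <= 27, "Ingress range smaller than /27"
    print("CIDR plan passes the basic checks")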