To use the Tanzu Kubernetes Grid Service v1alpha2 API for provisioning Tanzu Kubernetes clusters, adhere to the full list of requirements.

Requirements for Using the TKGS v1alpha2 API

The Tanzu Kubernetes Grid Service v1alpha2 API provides a robust set of enhancements for provisioning Tanzu Kubernetes clusters. For more information, see TKGS v1alpha2 API for Provisioning Tanzu Kubernetes Clusters.

To take advantage of the new functionality provided by the Tanzu Kubernetes Grid Service v1alpha2 API, your environment must satisfy each of the following requirements.

Each requirement below is paired with the reference topics that describe how to satisfy it.

Requirement: Workload Management is enabled with supported networking, either NSX-T Data Center or native vSphere vDS networking.
Note: A particular feature may require a specific type of networking. If so, this is called out in the topic for that feature.
Reference:
See Prerequisites for Configuring vSphere with Tanzu on a vSphere Cluster.
See Enable Workload Management with NSX Networking.
See Enable Workload Management with vSphere Networking.

Requirement: The vCenter Server hosting Workload Management is updated to version 7 Update 3 or later.
Reference:
Refer to the vCenter Server Update and Patch Releases release notes.
For update instructions, see Upgrading the vCenter Server Appliance.

Requirement: All ESXi hosts supporting the vCenter Server cluster where Workload Management is enabled are updated to version 7 Update 3 or later.
Reference:
Refer to the ESXi Update and Patch Release Notes.
For update instructions, see Upgrading ESXi Hosts.

Requirement: vSphere Namespaces is updated to v0.0.11 or later.
Reference:
For release details, see the vSphere with Tanzu Release Notes.
For update instructions, see Updating the vSphere with Tanzu Environment.

Requirement: The Supervisor Cluster is updated to v1.21.0+vmware.wcp.2 or later.
Reference:
For release details, see the vSphere with Tanzu Release Notes.
For update instructions, see Update the Supervisor Cluster by Performing a vSphere Namespaces Update.

Requirement: Tanzu Kubernetes clusters use Tanzu Kubernetes release v1.21.2---vmware.1-tkg.1.ee25d55 or later.
Reference:
For release details, see Verify Tanzu Kubernetes Cluster Compatibility for Update.
For instructions on provisioning new clusters, see Example YAML for Provisioning Tanzu Kubernetes Clusters Using the TKGS v1alpha2 API.
For instructions on updating an existing cluster, see Updating a Tanzu Kubernetes Release After the Cluster Spec Is Converted to the TKGS v1alpha2 API.
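The references above cover the full provisioning workflow. As a quick orientation, the following is a minimal sketch of a TanzuKubernetesCluster manifest that targets the v1alpha2 API. The namespace, cluster name, node pool name, VM class, and storage class values are placeholders that must match objects in your own Supervisor Cluster, and the Tanzu Kubernetes release shown is the minimum release listed in the requirements above.

```yaml
# Minimal TanzuKubernetesCluster sketch for the TKGS v1alpha2 API.
# All metadata values, VM classes, and storage classes below are
# illustrative placeholders; substitute objects from your environment.
apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster
metadata:
  name: tkgs-cluster-example             # placeholder cluster name
  namespace: tkgs-cluster-ns             # placeholder vSphere Namespace
spec:
  topology:
    controlPlane:
      replicas: 3
      vmClass: guaranteed-medium         # placeholder VM class
      storageClass: vwt-storage-policy   # placeholder storage class
      tkr:
        reference:
          name: v1.21.2---vmware.1-tkg.1.ee25d55   # minimum TKR per the requirements above
    nodePools:
    - name: worker-nodepool-a1           # placeholder node pool name
      replicas: 3
      vmClass: guaranteed-medium         # placeholder VM class
      storageClass: vwt-storage-policy   # placeholder storage class
      tkr:
        reference:
          name: v1.21.2---vmware.1-tkg.1.ee25d55
```

For complete, supported examples, see Example YAML for Provisioning Tanzu Kubernetes Clusters Using the TKGS v1alpha2 API.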

CNI Considerations for Node Limits

The cluster specification setting spec.settings.network.pods.cidrBlocks defaults to 192.168.0.0/16. See TKGS v1alpha2 API for Provisioning Tanzu Kubernetes Clusters.

If you customize this setting, the minimum pods CIDR block size is /24. However, be cautious about restricting the pods.cidrBlocks subnet mask to anything longer than /16: a longer prefix yields a smaller address block and therefore fewer subnets available for nodes.

TKGS allocates a /24 subnet to each cluster node, carved from the pods.cidrBlocks range. This allocation is determined by the Kubernetes Controller Manager > NodeIPAMController parameter NodeCIDRMaskSize, which sets the subnet mask size for the node CIDR in the cluster. The default node subnet mask is /24 for IPv4.

Because each node in a cluster gets a /24 subnet from the pods.cidrBlocks range, you can run out of node subnets, and therefore node capacity, if the subnet mask you choose is too restrictive for the cluster you are provisioning.

The following node limits apply to a Tanzu Kubernetes cluster provisioned with either the Antrea or Calico CNI. In general, a pods CIDR block with prefix length /N provides 2^(24-N) node subnets of size /24; the /16 case is capped at 150 nodes by the published configuration maximums (ConfigMax) rather than by the subnet math. For a spec fragment that shows where to set a custom block, see the sketch after this list.

/16 == 150 nodes max (per ConfigMax)

/17 == 128 nodes max

/18 == 64 nodes max

/19 == 32 nodes max

/20 == 16 nodes max

/21 == 8 nodes max

/22 == 4 nodes max

/23 == 2 nodes max

/24 == 1 node max
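To make the relationship between the pods CIDR block and the node limit concrete, the following fragment shows where the setting lives in a v1alpha2 cluster specification. This is a sketch, not a complete manifest: the /20 block is an illustrative choice that, per the list above, supports at most 16 nodes, and the CNI name and CIDR values are placeholders that must not overlap other networks in your environment.

```yaml
# Fragment of a TanzuKubernetesCluster (v1alpha2) spec with a custom pods CIDR block.
# The CIDR value is illustrative; choose a range that does not overlap your
# Supervisor Cluster or infrastructure networks.
spec:
  settings:
    network:
      cni:
        name: antrea            # or calico; the node limits above apply to both
      pods:
        cidrBlocks:
        - 10.244.0.0/20         # /20 yields 2^(24-20) = 16 node /24 subnets, so 16 nodes max
```

If you expect the cluster to grow, size the block for the largest node count you plan to support, since each node permanently consumes one /24 subnet from this range.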