To use the Tanzu Kubernetes Grid Service v1alpha2 API for provisioning Tanzu Kubernetes clusters, adhere to the full list of requirements.
Requirements for Using the TKGS v1alpha2 API
The Tanzu Kubernetes Grid Service v1alpha2 API provides a robust set of enhancements for provisioning Tanzu Kubernetes clusters. For more information, see TKGS v1alpha2 API for Provisioning Tanzu Kubernetes Clusters.
To take advantage of the new functionality provided by the Tanzu Kubernetes Grid Service v1alpha2 API, your environment must satisfy each of the following requirements.
Requirement | Reference |
---|---|
Workload Management is enabled with supported networking, either NSX-T Data Center or native vSphere vDS. Note: A particular feature may require a specific type of networking; if so, this is called out in the topic for that feature. | See Prerequisites for Configuring vSphere with Tanzu on a vSphere Cluster. |
The vCenter Server hosting Workload Management is updated to version 7 Update 3 or later. | Refer to the vCenter Server Update and Patch Releases release notes. For update instructions, see Upgrading the vCenter Server Appliance. |
All ESXi hosts supporting the vCenter Server cluster where Workload Management is enabled are updated to version 7 Update 3 or later. | Refer to the ESXi Update and Patch Release Notes. For update instructions, see Upgrading ESXi Hosts. |
vSphere Namespaces is updated to v0.0.11 or later. | For release details, see the vSphere with Tanzu Release Notes. For update instructions, see Updating the vSphere with Tanzu Environment. |
The Supervisor Cluster is updated to a version that supports the TKGS v1alpha2 API. | For release details, see the vSphere with Tanzu Release Notes. For update instructions, see Update the Supervisor Cluster by Performing a vSphere Namespaces Update. |
You must use a Tanzu Kubernetes release that is compatible with the TKGS v1alpha2 API. | For release details, see Verify Tanzu Kubernetes Cluster Compatibility for Update. For instructions on provisioning new clusters, see Example YAML for Provisioning Tanzu Kubernetes Clusters Using the TKGS v1alpha2 API. For instructions on updating an existing cluster, see Updating a Tanzu Kubernetes Release After the Cluster Spec Is Converted to the TKGS v1alpha2 API. |
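To illustrate the cluster specification format these topics refer to, the following is a minimal sketch of a TKGS v1alpha2 cluster manifest. All names here (the cluster, namespace, VM class, storage class, and Tanzu Kubernetes release) are placeholders chosen for illustration; consult Example YAML for Provisioning Tanzu Kubernetes Clusters Using the TKGS v1alpha2 API for the authoritative field reference.

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster
metadata:
  name: tkgs-cluster-1        # placeholder cluster name
  namespace: tkgs-ns          # placeholder vSphere Namespace
spec:
  topology:
    controlPlane:
      replicas: 3
      vmClass: guaranteed-medium        # placeholder VM class
      storageClass: tkgs-storage-policy # placeholder storage class
      tkr:
        reference:
          name: v1.21.2---vmware.1-tkg.1.ee25d55  # placeholder Tanzu Kubernetes release
    nodePools:
    - name: worker-pool-1
      replicas: 3
      vmClass: guaranteed-medium
      storageClass: tkgs-storage-policy
```

The v1alpha2 API replaces the v1alpha1 flat worker specification with named node pools, which is why existing cluster specs must be converted before a Tanzu Kubernetes release can be updated through this API.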
CNI Considerations for Node Limits |
The pods CIDR block in the cluster specification can be customized; the minimum size is /24. However, be cautious about restricting the pods CIDR block, because TKGS allocates to each cluster node a /24 subnet carved from it. Because each node in a cluster consumes a /24 subnet from the pods CIDR block, the prefix length of that block caps the number of nodes the cluster can have. The following node limits apply to a Tanzu Kubernetes cluster provisioned with either the Antrea or Calico CNI.

Pods CIDR | Maximum nodes |
---|---|
/16 | 150 (per ConfigMax) |
/17 | 128 |
/18 | 64 |
/19 | 32 |
/20 | 16 |
/21 | 8 |
/22 | 4 |
/23 | 2 |
/24 | 1 |
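The limits above follow directly from the per-node /24 allocation: a pods CIDR of prefix length p contains 2^(24 - p) distinct /24 subnets, capped at the 150-node ConfigMax. A small sketch of this arithmetic (the function name and cap constant are illustrative, not part of any TKGS API):

```python
# Each Tanzu Kubernetes cluster node receives a /24 subnet carved from the
# pods CIDR block, so the node limit is the number of /24 subnets that fit
# inside the block, capped at the 150-node ConfigMax.
CONFIGMAX_NODES = 150

def max_nodes(pods_cidr_prefix: int) -> int:
    """Return the maximum node count for a pods CIDR of the given prefix length."""
    if not 16 <= pods_cidr_prefix <= 24:
        raise ValueError("pods CIDR prefix must be between /16 and /24")
    return min(2 ** (24 - pods_cidr_prefix), CONFIGMAX_NODES)

for prefix in range(16, 25):
    print(f"/{prefix}: {max_nodes(prefix)} nodes max")
```

For example, a /20 pods CIDR yields 2^(24-20) = 16 subnets of size /24, so the cluster tops out at 16 nodes, matching the table above.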