The Tanzu Kubernetes Grid Service API exposes several parameters for provisioning and updating Tanzu Kubernetes clusters.

Parameters for Provisioning Tanzu Kubernetes Clusters

This table lists and describes all YAML parameters and acceptable values for cluster provisioning.

Note: This table is a reference for all cluster provisioning parameters. To view the hierarchy of the parameters in example YAML files, see Minimal YAML for Provisioning a Tanzu Kubernetes Cluster, or the illustrative sketches that follow the table below.
Table 1. Parameters for Provisioning Tanzu Kubernetes Clusters
Name Value Description
apiVersion run.tanzu.vmware.com/v1alpha1 Specifies the version of the Tanzu Kubernetes Grid Service API.
kind TanzuKubernetesCluster Specifies the type of Kubernetes resource to create. The only allowed value is TanzuKubernetesCluster (case-sensitive).
metadata Section for cluster metadata Includes cluster metadata, such as name and namespace. This is standard Kubernetes metadata, so you can use generateName instead of name, add labels and annotations, and so on.
name A user-defined string that accepts alphanumeric characters and dashes, for example: my-tkg-cluster-1 Specifies the name of the cluster to create. Current cluster naming constraints:
  • Name length must be 41 characters or less.
  • Name must begin with a letter.
  • Name may contain letters, numbers, and hyphens.
  • Name must end with a letter or a number.
namespace A user-defined string that accepts alphanumeric characters and dashes, for example: my-sns-1 Identifies the name of the Supervisor Namespace where the cluster will be deployed. This is a reference to a Supervisor Namespace that exists on the Supervisor Cluster.
spec Section for technical specifications for the cluster Includes the declarative specification for the end state of the cluster, including the node topology and Kubernetes software distribution.
topology Section for cluster node topologies Includes fields that describe the number, purpose, and organization of cluster nodes and the resources allocated to each. Cluster nodes are grouped into pools based on their intended purpose: either control-plane or worker. Each pool is homogeneous, having the same resource allocation and using the same storage.
controlPlane Section for control plane settings Specifies the topology of the cluster control plane, including the number of nodes (count), type of VM (class), and the storage resources allocated for each node (storageClass).
count An integer that is either 1 or 3 Specifies the number of control plane nodes. The control plane must have an odd number of nodes.
class A system-defined element in the form of a string from an enumerated set, for example: guaranteed-small or best-effort-large Specifies the name of the VirtualMachineClass that describes the virtual hardware settings to be used for each node in the pool. This controls the hardware available to the node (CPU and memory) as well as the requests and limits on those resources. See Virtual Machine Class Types for Tanzu Kubernetes Clusters.
storageClass node-storage (for example) Identifies the storage class to be used for the disks that store the root file systems of the control plane nodes. Run kubectl describe ns on the namespace to view the available storage classes. The available storage classes for the namespace depend on the storage set by the vSphere administrator. Storage classes associated with the Supervisor Namespace are replicated in the cluster. In other words, the storage class must be available on the Supervisor Namespace to be a valid value for this field.
volumes Optional storage setting, a list of entries each in the form:
  • name: string
  • mountPath: /dir/path
  • capacity:
    • storage: size in GiB
Can specify separate disk and storage parameters for etcd on control plane nodes. See Example YAML for Provisioning a Tanzu Kubernetes Cluster with Separate Disks and Storage Parameters, or the sketch following this table.
workers Section for worker node settings Specifies the topology of the cluster worker nodes, including the number of nodes (count), type of VM (class), and the storage resources allocated for each node (storageClass).
count An integer between 0 and 150, for example: 1 or 2 or 7 Specifies the number of worker nodes in the cluster. A cluster with zero worker nodes can be created, allowing for a cluster with only control plane nodes. There is no hard maximum for the number of worker nodes, but a reasonable limit is 150.
class A system-defined element in the form of a string from an enumerated set, for example: guaranteed-small or best-effort-large Specifies the name of the VirtualMachineClass that describes the virtual hardware settings to be used for each node in the pool. This controls the hardware available to the node (CPU and memory) as well as the requests and limits on those resources. See Virtual Machine Class Types for Tanzu Kubernetes Clusters.
storageClass node-storage (for example) Identifies the storage class to be used for the disks that store the root file systems of the worker nodes. Run kubectl describe ns on the namespace to list available storage classes. The available storage classes for the namespace depend on the storage set by the vSphere administrator. Storage classes associated with the Supervisor Namespace are replicated in the cluster. In other words, the storage class must be available on the Supervisor Namespace to be valid.
volumes Optional storage setting, a list of entries each in the form:
  • name: string
  • mountPath: /dir/path
  • capacity:
    • storage: size in GiB
Can specify separate disk and storage parameters for container images on worker nodes. See Example YAML for Provisioning a Tanzu Kubernetes Cluster with Separate Disks and Storage Parameters, or the sketch following this table.
distribution Section for specifying the Tanzu Kubernetes Release version Indicates the distribution for the cluster: the Tanzu Kubernetes cluster software installed on the control plane and worker nodes, including Kubernetes itself.
version Alphanumeric string with dashes representing the Kubernetes version, for example: v1.18.5+vmware.1-tkg.1 or v1.18.5 or v1.18 Specifies the software version of the Kubernetes distribution to install on cluster nodes using semantic version notation. Can specify the fully qualified version or use version shortcuts, such as "version: v1.18.5", which is resolved to the most recent image matching that patch version, or "version: v1.18", which is resolved to the most recent matching patch version. The resolved version displays as the "fullVersion" on the cluster description after you have created it.
settings Section for cluster-specific settings Identifies optional runtime configuration information for the cluster, including node network details and persistent storage for pods.
storage Section for specifying storage Identifies persistent volume (PV) storage entries for container workloads.
classes Array of one or more user-defined strings, for example: ["gold", "silver"] Specifies named persistent volume (PV) storage classes for container workloads. Storage classes associated with the Supervisor Namespace are replicated in the cluster. In other words, the storage class must be available on the Supervisor Namespace to be a valid value.
defaultClass silver (for example) Specifies a named storage class to be annotated as the default in the cluster. If you do not specify it, there is no default. You do not need to specify the classes array in order to specify a defaultClass.
network Section for networking settings Specifies network-related settings for the cluster.
cni Section for specifying the CNI Identifies the Container Networking Interface (CNI) plug-in for the cluster. The default is Antrea, which does not need to be specified for new clusters.
name antrea or calico Specifies the CNI to use. Antrea and Calico are supported. The system configuration sets Antrea as the default CNI, and the default can be changed. If you use the default, you do not need to specify this field.
services Section for specifying services Identifies network settings for Kubernetes services. Default is 10.96.0.0/12.
cidrBlocks ["198.51.100.0/12"] (for example) Specifies a range of IP addresses to use for Kubernetes services. Default is 10.96.0.0/12. Must not overlap with the settings chosen for the Supervisor Cluster. Although this field is an array, allowing for multiple ranges, currently only a single IP range is allowed.
pods Section for specifying pods Specifies network settings for pods. Default is 192.168.0.0/16. Minimum block size is /24.
cidrBlocks ["192.0.2.0/16"] (for example) Specifies a range of IP addresses to use for Kubernetes pods. Default is 192.168.0.0/16. Must not overlap with the settings chosen for the Supervisor Cluster. Pods subnet size must be equal to or larger than /24. Although this field is an array, allowing for multiple ranges, currently only a single IP range is allowed.
serviceDomain "cluster.local" Specifies the service domain for the cluster. Default is cluster.local.
proxy Section that specifies HTTP(S) proxy configuration for the cluster Provides fields for proxy settings. These fields are auto-populated if a global proxy is configured and an individual cluster proxy is not configured. For an illustrative fragment, see the sketch following this table.
httpProxy http://<user>:<pwd>@<ip>:<port> Specifies a proxy URL to use for creating HTTP connections outside the cluster.
httpsProxy http://<user>:<pwd>@<ip>:<port> Specifies a proxy URL to use for creating HTTPS connections outside the cluster.
noProxy Pod CIDRs, Egress CIDRs, and Ingress CIDRs to exclude from proxying Specifies destinations that must bypass the proxy. You must include the Pod CIDRs, Ingress CIDRs, and Egress CIDRs from the Workload Network. Both localhost and 127.0.0.1 are automatically not proxied, so you do not need to add them.
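
To illustrate the hierarchy of these parameters, the following is a minimal sketch of a provisioning manifest. It uses only values given as examples in Table 1; the cluster name, namespace, virtual machine class, and storage class are placeholders that must correspond to objects in your environment.

  apiVersion: run.tanzu.vmware.com/v1alpha1
  kind: TanzuKubernetesCluster
  metadata:
    name: my-tkg-cluster-1          # placeholder; must satisfy the naming constraints above
    namespace: my-sns-1             # an existing Supervisor Namespace
  spec:
    topology:
      controlPlane:
        count: 3                    # 1 or 3 control plane nodes
        class: guaranteed-small     # a VirtualMachineClass available in the namespace
        storageClass: node-storage  # a storage class assigned to the Supervisor Namespace
      workers:
        count: 3                    # 0 or more worker nodes
        class: guaranteed-small
        storageClass: node-storage
    distribution:
      version: v1.18.5              # version shortcut; resolves to the latest matching image

You apply the manifest with kubectl, for example kubectl apply -f my-tkg-cluster-1.yaml, while connected to the target Supervisor Namespace.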
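
The volumes field is a list nested under controlPlane or workers. The sketch below shows one plausible use; the volume names, mount paths (/var/lib/etcd, /var/lib/containerd), and sizes are assumptions for illustration. See Example YAML for Provisioning a Tanzu Kubernetes Cluster with Separate Disks and Storage Parameters for the authoritative values.

  apiVersion: run.tanzu.vmware.com/v1alpha1
  kind: TanzuKubernetesCluster
  metadata:
    name: my-tkg-cluster-2                # placeholder
    namespace: my-sns-1
  spec:
    topology:
      controlPlane:
        count: 3
        class: guaranteed-small
        storageClass: node-storage
        volumes:
          - name: etcd                    # assumed name; separate disk for etcd data
            mountPath: /var/lib/etcd      # assumed mount path
            capacity:
              storage: 4Gi                # assumed size, in GiB
      workers:
        count: 3
        class: guaranteed-small
        storageClass: node-storage
        volumes:
          - name: containerd              # assumed name; separate disk for container images
            mountPath: /var/lib/containerd  # assumed mount path
            capacity:
              storage: 16Gi               # assumed size, in GiB
    distribution:
      version: v1.18.5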
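
The optional settings section nests under spec alongside topology and distribution. The following sketch uses the documented default network values and assumes that gold and silver storage classes are assigned to the Supervisor Namespace.

  apiVersion: run.tanzu.vmware.com/v1alpha1
  kind: TanzuKubernetesCluster
  metadata:
    name: my-tkg-cluster-3                # placeholder
    namespace: my-sns-1
  spec:
    topology:
      controlPlane:
        count: 1
        class: guaranteed-small
        storageClass: node-storage
      workers:
        count: 1
        class: guaranteed-small
        storageClass: node-storage
    distribution:
      version: v1.18.5
    settings:
      storage:
        classes: ["gold", "silver"]       # PV storage classes replicated from the Supervisor Namespace
        defaultClass: silver              # annotated as the default storage class in the cluster
      network:
        cni:
          name: antrea                    # optional; antrea is the default
        services:
          cidrBlocks: ["10.96.0.0/12"]    # the default; must not overlap the Supervisor Cluster
        pods:
          cidrBlocks: ["192.168.0.0/16"]  # the default; minimum block size is /24
        serviceDomain: "cluster.local"    # the default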
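
For the proxy fields, one plausible rendering of the relevant subtree of spec.settings.network is shown below. The URL components are the placeholders from the table, the CIDR entries are placeholders for your Workload Network values, and the list form of noProxy is an assumption.

  spec:
    settings:
      network:
        proxy:
          httpProxy: http://<user>:<pwd>@<ip>:<port>    # placeholder URL components
          httpsProxy: http://<user>:<pwd>@<ip>:<port>   # placeholder URL components
          noProxy: ["<pod-cidr>", "<egress-cidr>", "<ingress-cidr>"]  # assumed list form; your Workload Network CIDRs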