The Tanzu Kubernetes Grid Service declarative API exposes several parameters for configuring Tanzu Kubernetes clusters. Refer to the following list of parameters, their descriptions, and usage guidelines to provision and customize your clusters.
Annotated YAML for Provisioning a Tanzu Kubernetes Cluster
```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster  #valid config keys must consist of alphanumeric characters, '-', '_', or '.'
metadata:
  name: <tanzu kubernetes cluster name>
  namespace: <vsphere namespace where the cluster will be provisioned>
spec:
  distribution:
    version: <tanzu kubernetes release version string: full, point, short>
  topology:
    controlPlane:
      count: <integer either 1 or 3>
      class: <vm class bound to the target vsphere namespace>
      storageClass: <vsphere storage policy bound to the target vsphere namespace>
      volumes:  #optional setting for high-churn control plane components (such as etcd)
      - name: <user-defined string>
        mountPath: </dir/path>
        capacity:
          storage: <size in GiB>
    workers:
      count: <integer from 0 to 150>
      class: <vm class bound to the target vsphere namespace>
      storageClass: <vsphere storage policy bound to the target vsphere namespace>
      volumes:  #optional setting for high-churn worker node components (such as containerd)
      - name: <user-defined string>
        mountPath: </dir/path>
        capacity:
          storage: <size in GiB>
  settings:  #all spec.settings are optional
    storage:  #optional storage settings
      classes: [<array of kubernetes storage classes for dynamic pvc provisioning>]
      defaultClass: <default kubernetes storage class>
    network:  #optional network settings
      cni:  #override the default CNI set in the TkgServiceConfiguration spec
        name: <antrea or calico>
      pods:  #custom pod network
        cidrBlocks: [<array of pod cidr blocks>]
      services:  #custom service network
        cidrBlocks: [<array of service cidr blocks>]
      serviceDomain: <custom service domain>
      proxy:  #proxy server for outbound connections
        httpProxy: http://<IP:PORT>
        httpsProxy: http://<IP:PORT>
        noProxy: [<array of CIDRs to not proxy>]
      trust:  #trust fields for custom public certs for TLS
        additionalTrustedCAs:
        - name: <first-cert-name>
          data: <base64-encoded string of PEM encoded public cert 1>
        - name: <second-cert-name>
          data: <base64-encoded string of PEM encoded public cert 2>
```
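For orientation, the following is a minimal concrete manifest built from the schema above. It is a sketch: the cluster name, namespace, VM class, and storage class names are placeholder assumptions and must be replaced with objects actually bound to your vSphere Namespace.

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: my-tkg-cluster-1        #hypothetical cluster name
  namespace: my-sns-1           #hypothetical Supervisor Namespace
spec:
  distribution:
    version: v1.20              #version shortcut; resolves to the latest matching patch release
  topology:
    controlPlane:
      count: 3                  #must be 1 or 3
      class: guaranteed-small   #placeholder VM class bound to the namespace
      storageClass: node-storage
    workers:
      count: 3
      class: best-effort-large  #placeholder VM class bound to the namespace
      storageClass: node-storage
```

After logging in to the Supervisor Cluster and switching to the target namespace context, you would typically apply the manifest with `kubectl apply -f my-tkg-cluster-1.yaml` and inspect the resolved version with `kubectl get tanzukubernetescluster my-tkg-cluster-1 -o yaml`.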
Parameters for Provisioning Tanzu Kubernetes Clusters
Name | Value | Description |
---|---|---|
apiVersion | run.tanzu.vmware.com/v1alpha1 | Specifies the version of the Tanzu Kubernetes Grid Service API. |
kind | TanzuKubernetesCluster | Specifies the type of Kubernetes resource to create. The only allowed value is TanzuKubernetesCluster (case-sensitive). |
metadata | Section for cluster metadata | Includes cluster metadata, such as name and namespace. This is standard Kubernetes metadata, so you can use generateName instead of name, add labels and annotations, and so on. |
name | A user-defined string that accepts alphanumeric characters and dashes, for example: my-tkg-cluster-1 | Specifies the name of the cluster to create. The name must be a DNS-compliant label consisting of alphanumeric characters and dashes. |
namespace | A user-defined string that accepts alphanumeric characters and dashes, for example: my-sns-1 | Identifies the name of the Supervisor Namespace where the cluster is deployed. This is a reference to a Supervisor Namespace that exists on the Supervisor Cluster. |
spec | Section for technical specifications for the cluster | Includes the specification, expressed in declarative fashion, for the end state of the cluster, including the node topology and the Kubernetes software distribution. |
distribution | Section for specifying the Tanzu Kubernetes Release version | Indicates the distribution for the cluster: the Tanzu Kubernetes cluster software installed on the control plane and worker nodes, including Kubernetes itself. |
version | Alphanumeric string with dashes representing the Kubernetes version, for example: v1.20.2 or v1.20 | Specifies the software version of the Kubernetes distribution to install on cluster nodes, using semantic version notation. You can specify the fully qualified version or use a version shortcut, such as version: v1.20.2, which resolves to the most recent image matching that patch version, or version: v1.20, which resolves to the most recent matching patch version. The resolved version displays as fullVersion in the cluster description after you create the cluster. |
topology | Section for cluster node topologies | Includes fields that describe the number, purpose, and organization of cluster nodes and the resources allocated to each. Cluster nodes are grouped into pools based on their intended purpose: either control-plane or worker. Each pool is homogeneous, having the same resource allocation and using the same storage. |
controlPlane | Section for control plane settings | Specifies the topology of the cluster control plane, including the number of nodes (count), type of VM (class), and the storage resources allocated for each node (storageClass). |
count | An integer, either 1 or 3 | Specifies the number of control plane nodes. The control plane must have an odd number of nodes. |
class | A system-defined string from an enumerated set, for example: guaranteed-small or best-effort-large | Specifies the name of the VirtualMachineClass that describes the virtual hardware settings to be used for each node in the pool. This controls the hardware available to the node (CPU and memory) as well as the requests and limits on those resources. See Virtual Machine Classes for Tanzu Kubernetes Clusters. |
storageClass | node-storage (for example) | Identifies the storage class to be used for the disks that store the root file systems of the control plane nodes. Run kubectl describe ns on the namespace to view the available storage classes. The available storage classes for the namespace depend on the storage set by the vSphere administrator. Storage classes associated with the Supervisor Namespace are replicated in the cluster; in other words, a storage class must be available on the Supervisor Namespace to be a valid value for this field. See Configuring and Managing vSphere Namespaces. |
volumes | Optional storage setting | Can specify separate disk and storage parameters for etcd on the control plane nodes. See the example Cluster with Separate Disks and Storage Parameters. |
workers | Section for worker node settings | Specifies the topology of the cluster worker nodes, including the number of nodes (count), type of VM (class), and the storage resources allocated for each node (storageClass). |
count | An integer between 0 and 150, for example: 1, 2, or 7 | Specifies the number of worker nodes in the cluster. A cluster with zero worker nodes can be created, allowing a cluster with only control plane nodes. There is no hard maximum for the number of worker nodes, but a reasonable limit is 150. Note: A cluster provisioned with 0 worker nodes is not assigned any load balancer services. |
class | A system-defined string from an enumerated set, for example: guaranteed-small or best-effort-large | Specifies the name of the VirtualMachineClass that describes the virtual hardware settings to be used for each node in the pool. This controls the hardware available to the node (CPU and memory) as well as the requests and limits on those resources. See Virtual Machine Classes for Tanzu Kubernetes Clusters. |
storageClass | node-storage (for example) | Identifies the storage class to be used for the disks that store the root file systems of the worker nodes. Run kubectl describe ns on the namespace to list the available storage classes. The available storage classes for the namespace depend on the storage set by the vSphere administrator. Storage classes associated with the Supervisor Namespace are replicated in the cluster; in other words, a storage class must be available on the Supervisor Namespace to be valid. See Configuring and Managing vSphere Namespaces. |
volumes | Optional storage setting | Can specify separate disk and storage parameters for container images on the worker nodes. See the example Cluster with Separate Disks and Storage Parameters. |
settings | Section for cluster-specific settings; all spec.settings are optional | Identifies optional runtime configuration information for the cluster, including node network details and persistent storage for pods. |
storage | Section for specifying storage | Identifies persistent volume (PV) storage entries for container workloads. |
classes | Array of one or more user-defined strings, for example: ["gold", "silver"] | Specifies named persistent volume (PV) storage classes for container workloads. Storage classes associated with the Supervisor Namespace are replicated in the cluster; in other words, a storage class must be available on the Supervisor Namespace to be a valid value. See the example Cluster with Storage Classes and a Default Class for Persistent Volumes. |
defaultClass | silver (for example) | Specifies a named storage class to be annotated as the default in the cluster. If you do not specify one, there is no default. You do not have to specify classes to specify a defaultClass. Some workloads, such as those deployed with Helm, may require a default storage class. See the example Cluster with Storage Classes and a Default Class for Persistent Volumes. |
network | Section marker for networking settings | Specifies network-related settings for the cluster. |
cni | Section marker for specifying the CNI | Identifies the Container Network Interface (CNI) plug-in for the cluster. The default is Antrea, which does not need to be specified for new clusters. |
name | String: antrea or calico | Specifies the CNI to use. Antrea and Calico are supported. The system configuration sets Antrea as the default CNI; the default CNI can be changed. If you use the default, this field does not need to be specified. |
services | Section marker for specifying Kubernetes services subnets | Identifies network settings for Kubernetes services. Default is 10.96.0.0/12. |
cidrBlocks | Array, for example: ["198.51.100.0/12"] | Specifies a range of IP addresses to use for Kubernetes services. Default is 10.96.0.0/12. Must not overlap with the settings chosen for the Supervisor Cluster. Although this field is an array, allowing for multiple ranges, currently only a single IP range is allowed. See the networking examples at Examples for Provisioning Tanzu Kubernetes Clusters Using the Tanzu Kubernetes Grid Service v1alpha1 API. |
pods | Section marker for specifying Kubernetes pods subnets | Specifies network settings for pods. Default is 192.168.0.0/16. Minimum block size is /24. |
cidrBlocks | Array, for example: ["192.0.2.0/16"] | Specifies a range of IP addresses to use for Kubernetes pods. Default is 192.168.0.0/16. Must not overlap with the settings chosen for the Supervisor Cluster. The pods subnet size must be equal to or larger than /24. Although this field is an array, allowing for multiple ranges, currently only a single IP range is allowed. See the networking examples at Examples for Provisioning Tanzu Kubernetes Clusters Using the Tanzu Kubernetes Grid Service v1alpha1 API. |
serviceDomain | "cluster.local" (for example) | Specifies the service domain for the cluster. Default is cluster.local. |
proxy | Section for HTTP(S) proxy configuration for the cluster; if implemented, all fields are required | Provides fields for proxy server settings. Auto-populated if a global proxy is configured and an individual cluster proxy is not configured. See the example Cluster with a Proxy Server. |
httpProxy | http://<user>:<pwd>@<ip>:<port> | Specifies a proxy URL to use for creating HTTP connections outside the cluster. |
httpsProxy | http://<user>:<pwd>@<ip>:<port> | Specifies a proxy URL to use for creating HTTPS connections outside the cluster. |
noProxy | Array of CIDR blocks to not proxy. Get the required values from the Workload Network on the Supervisor Cluster; refer to the image below for which values to include in the noProxy field. | You must not proxy the subnets used by the Workload Network on the Supervisor Cluster for Pods, Ingress, and Egress. You do not need to include the Services CIDR from the Supervisor Cluster in the noProxy field. The endpoints localhost and 127.0.0.1 are automatically not proxied, as are the Pod and Service CIDRs for Tanzu Kubernetes clusters; you do not need to add any of them to the noProxy field. See the example Cluster with a Proxy Server. |
trust | Section marker for trust parameters | Accepts no data. |
additionalTrustedCAs | Accepts an array of certificates with name and data for each | Specifies custom public certificates for TLS trust, each entry providing a certificate name and its base64-encoded data. |
name | String | The name of the TLS certificate. |
data | String | The base64-encoded string of a PEM-encoded public certificate. |
[Image: Workload Network configuration on the Supervisor Cluster, showing the Pods, Ingress, and Egress network values to use for the noProxy field.]
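To make the volumes fields under controlPlane and workers concrete, here is a hedged sketch of a topology that puts high-churn data on separate disks: etcd on the control plane nodes and container images on the worker nodes. The mount paths, capacities, and class names are illustrative assumptions; see the example Cluster with Separate Disks and Storage Parameters for the authoritative version.

```yaml
spec:
  topology:
    controlPlane:
      count: 3
      class: guaranteed-medium     #placeholder VM class
      storageClass: node-storage   #placeholder storage class
      volumes:
      - name: etcd                 #separate disk for high-churn etcd data
        mountPath: /var/lib/etcd   #assumed mount path
        capacity:
          storage: 4Gi
    workers:
      count: 3
      class: guaranteed-medium
      storageClass: node-storage
      volumes:
      - name: containerd                #separate disk for container images
        mountPath: /var/lib/containerd  #assumed mount path
        capacity:
          storage: 16Gi
```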
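The storage settings compose as follows; "gold" and "silver" are hypothetical storage class names that would have to be bound to the Supervisor Namespace, as the classes and defaultClass rows above describe.

```yaml
spec:
  settings:
    storage:
      classes: ["gold", "silver"]  #hypothetical storage classes bound to the namespace
      defaultClass: silver         #annotated as the default storage class in the cluster
```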
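Next, a sketch of the optional network settings, assuming you want Calico instead of the default Antrea plus custom pod and service ranges. The CIDR values are placeholders taken from the table's examples; they must not overlap the Supervisor Cluster networks, and each cidrBlocks array currently accepts only a single range.

```yaml
spec:
  settings:
    network:
      cni:
        name: calico                     #override the default CNI (Antrea)
      pods:
        cidrBlocks: ["192.0.2.0/16"]     #placeholder; block size must be /24 or larger
      services:
        cidrBlocks: ["198.51.100.0/12"]  #placeholder; must not overlap Supervisor Cluster settings
      serviceDomain: cluster.local
```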
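Finally, a sketch combining the proxy and trust fields. The proxy endpoint, noProxy ranges, and CA name and data are all placeholder assumptions; if you configure a proxy, all three proxy fields are required, and the noProxy values must come from your own Workload Network as described above.

```yaml
spec:
  settings:
    network:
      proxy:
        httpProxy: http://10.0.200.100:3128   #placeholder proxy endpoint
        httpsProxy: http://10.0.200.100:3128
        noProxy: [10.246.0.0/16, 192.168.144.0/20, 192.168.128.0/20]  #placeholder Workload Network CIDRs
      trust:
        additionalTrustedCAs:
        - name: custom-ca             #hypothetical certificate name
          data: LS0tLS1CRUdJTi...     #base64-encoded PEM certificate (truncated placeholder)
```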