The Tanzu Kubernetes Grid Service API provides intelligent defaults and an array of options for customizing Tanzu Kubernetes clusters. Refer to the examples to provision clusters of various types with different configurations and customizations to meet your needs.
Note: The v1alpha1 API is deprecated.
Minimal YAML for Provisioning a Tanzu Kubernetes Cluster
The following example YAML is the minimal configuration required to invoke the Tanzu Kubernetes Grid Service and provision a Tanzu Kubernetes cluster that uses all of the default settings.
- The Tanzu Kubernetes release version, listed as v1.20, is resolved to the most recent distribution matching that minor version, for example v1.20.2+vmware.1-tkg.1.xxxxxx. See Verify Tanzu Kubernetes Cluster Compatibility for Update.
- The VM class best-effort-<size> has no reservations. For more information, see Virtual Machine Classes for Tanzu Kubernetes Clusters.
- The cluster does not include persistent storage for containers. If needed, this is set in spec.settings.storage. See the storage example below.
- Some workloads, such as Helm, may require a spec.settings.storage.defaultClass. See the storage example below.
- The spec.settings.network section is not specified. This means that the cluster uses the following default network settings:
  - Default CNI: antrea
  - Default Pods CIDR: 192.168.0.0/16
  - Default Services CIDR: 10.96.0.0/12
  - Default Service Domain: cluster.local
  Note: The default IP range for Pods is 192.168.0.0/16. If this subnet is already in use, you must specify a different CIDR range. See the custom network examples below.
apiVersion: run.tanzu.vmware.com/v1alpha1   #TKGS API endpoint
kind: TanzuKubernetesCluster                #required parameter
metadata:
  name: tkgs-cluster-1                      #cluster name, user defined
  namespace: tkgs-cluster-ns                #vSphere Namespace
spec:
  distribution:
    version: v1.20                          #Resolves to latest TKR 1.20
  topology:
    controlPlane:
      count: 1                              #number of control plane nodes
      class: best-effort-medium             #vmclass for control plane nodes
      storageClass: vwt-storage-policy      #storageclass for control plane
    workers:
      count: 3                              #number of worker nodes
      class: best-effort-medium             #vmclass for worker nodes
      storageClass: vwt-storage-policy      #storageclass for worker nodes
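To provision the cluster, apply the manifest using kubectl while logged in to the Supervisor Cluster. The following commands are a minimal sketch; the file name is illustrative, and the context name assumes the vSphere Plugin for kubectl created a context named after the vSphere Namespace:

kubectl config use-context tkgs-cluster-ns          #switch to the vSphere Namespace context
kubectl apply -f tkgs-cluster-1.yaml                #submit the cluster manifest
kubectl get tanzukubernetescluster tkgs-cluster-1   #check provisioning status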
Cluster with Separate Disks and Storage Parameters
The following example YAML shows how to provision a cluster with separate disks and storage parameters for cluster control plane and worker nodes.
- Customize storage performance on control plane nodes for the etcd database
- Customize the size of the disk for container images on the worker nodes
The example has the following characteristics:
- The spec.topology.controlPlane.volumes setting specifies a separate volume for the etcd database.
- The spec.topology.workers.volumes setting specifies a separate volume for the container images.
- The mountPath: /var/lib/containerd for container images is supported for Tanzu Kubernetes releases 1.17 and later.
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkgs-cluster-5
  namespace: tkgs-cluster-ns
spec:
  distribution:
    version: v1.20
  topology:
    controlPlane:
      count: 3
      class: best-effort-medium
      storageClass: vwt-storage-policy
      volumes:
        - name: etcd
          mountPath: /var/lib/etcd        #separate volume for the etcd database
          capacity:
            storage: 4Gi
    workers:
      count: 3
      class: best-effort-medium
      storageClass: vwt-storage-policy
      volumes:
        - name: containerd
          mountPath: /var/lib/containerd  #separate volume for container images
          capacity:
            storage: 16Gi
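Node volumes declared this way are provisioned against the node's storage class. Assuming they surface as PersistentVolumeClaims in the vSphere Namespace (standard behavior for TKGS node volumes), you can verify them after provisioning:

kubectl get pvc -n tkgs-cluster-ns    #node volumes appear as PVCs in the vSphere Namespace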
Cluster with a Custom Antrea Network
The following example YAML shows how to provision a cluster with custom network ranges for the default Antrea CNI.

- Because custom network settings are applied, the cni.name parameter is required even though the default Antrea CNI is used.
  - CNI name: antrea
  - Custom Pods CIDR: 193.0.2.0/16
  - Custom Services CIDR: 195.51.100.0/12
  - Custom Service Domain: managedcluster.local
- The custom CIDR blocks cannot overlap with the Supervisor Cluster. For more information, see Configuration Parameters for Tanzu Kubernetes Clusters Using the Tanzu Kubernetes Grid Service v1alpha1 API.
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-3-antrea
  namespace: tkgs-cluster-ns
spec:
  distribution:
    version: v1.20
  topology:
    controlPlane:
      class: guaranteed-medium
      count: 3
      storageClass: vwt-storage-policy
    workers:
      class: guaranteed-medium
      count: 5
      storageClass: vwt-storage-policy
  settings:
    network:
      cni:
        name: antrea              #Use Antrea CNI
      pods:
        cidrBlocks:
          - 193.0.2.0/16          #Must not overlap with SVC
      services:
        cidrBlocks:
          - 195.51.100.0/12       #Must not overlap with SVC
      serviceDomain: managedcluster.local
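Once the cluster is provisioned and you are logged in to it, you can confirm that the Antrea components are running. The app=antrea label is the one stock Antrea manifests apply; verify it against your Tanzu Kubernetes release:

kubectl get pods -n kube-system -l app=antrea    #expect antrea-agent and antrea-controller pods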
Cluster with a Custom Calico Network
The following example YAML shows how to provision a cluster that uses the Calico CNI with custom network ranges.

- Calico is not the default CNI, so it is explicitly named in the manifest. To change the default CNI at the service level, see Examples for Configuring the Tanzu Kubernetes Grid Service v1alpha1 API.
  - CNI name: calico
  - Custom Pods CIDR: 192.0.2.0/16
  - Custom Services CIDR: 198.51.100.0/12
  - Custom Service Domain: managedcluster.local
- The network uses custom CIDR ranges, not the defaults. These ranges must not overlap with the Supervisor Cluster. See Configuration Parameters for Tanzu Kubernetes Clusters Using the Tanzu Kubernetes Grid Service v1alpha1 API.
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkgs-cluster-2
  namespace: tkgs-cluster-ns
spec:
  distribution:
    version: v1.20
  topology:
    controlPlane:
      count: 3
      class: guaranteed-large
      storageClass: vwt-storage-policy
    workers:
      count: 5
      class: guaranteed-xlarge
      storageClass: vwt-storage-policy
  settings:
    network:
      cni:
        name: calico                      #Use Calico CNI for this cluster
      services:
        cidrBlocks: ["198.51.100.0/12"]   #Must not overlap with SVC
      pods:
        cidrBlocks: ["192.0.2.0/16"]      #Must not overlap with SVC
      serviceDomain: managedcluster.local
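As with the Antrea example, you can check that the Calico node agents are up after logging in to the cluster. The k8s-app=calico-node label is used by stock Calico manifests; confirm it matches your release:

kubectl get pods -n kube-system -l k8s-app=calico-node    #expect one calico-node pod per cluster node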
Cluster with Storage Classes and a Default Class for Persistent Volumes
- The spec.settings.storage.classes setting specifies two storage classes for persistent storage for containers in the cluster.
- The spec.settings.storage.defaultClass is specified. Some applications require a default class. For example, if you want to use Helm or Kubeapps, specify a defaultClass, which is referenced by many charts.
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: default-storage-spec
  namespace: tkgs-cluster-ns
spec:
  topology:
    controlPlane:
      count: 3
      class: best-effort-medium
      storageClass: vwt-storage-policy
    workers:
      count: 3
      class: best-effort-medium
      storageClass: vwt-storage-policy
  distribution:
    version: v1.20
  settings:
    network:
      cni:
        name: antrea
      services:
        cidrBlocks: ["198.51.100.0/12"]
      pods:
        cidrBlocks: ["192.0.2.0/16"]
      serviceDomain: "tanzukubernetescluster.local"
    storage:
      classes: ["gold", "silver"]    #Array of named PVC storage classes
      defaultClass: silver           #Default PVC storage class
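Because defaultClass is set, a PersistentVolumeClaim created inside the cluster without an explicit storageClassName binds to silver. A minimal sketch (the claim name and size are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim               #hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi               #illustrative size
  #storageClassName omitted: the cluster's default class (silver) is used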
Cluster with a Proxy Server
You can use a proxy server with an individual Tanzu Kubernetes cluster by applying the proxy server configuration to the cluster manifest.
Note the following characteristics:
- The spec.settings.network.proxy section specifies the HTTP(S) proxy configuration for this Tanzu Kubernetes cluster.
- The syntax for both proxy server values is http://<user>:<pwd>@<ip>:<port>.
- Specific endpoints are automatically not proxied, including localhost and 127.0.0.1, and the Pod and Service CIDRs for Tanzu Kubernetes clusters. You do not need to include these in the noProxy field.
- The noProxy field accepts an array of CIDRs to not proxy. Get the required values from the Workload Network on the Supervisor Cluster. Refer to the image at Configuration Parameters for Tanzu Kubernetes Clusters Using the Tanzu Kubernetes Grid Service v1alpha1 API.
- If a global proxy is configured on the TkgServiceConfiguration, that proxy information is propagated to the cluster manifest after the initial deployment of the cluster. The global proxy configuration is added to the cluster manifest only if there are no proxy configuration fields present when creating the cluster. In other words, per-cluster configuration takes precedence and overwrites the global proxy configuration. A sketch of a global proxy configuration follows the cluster manifest below.
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkgs-cluster-with-proxy
  namespace: tkgs-cluster-ns
spec:
  distribution:
    version: v1.20
  topology:
    controlPlane:
      count: 3
      class: guaranteed-medium
      storageClass: vwt-storage-policy
    workers:
      count: 5
      class: guaranteed-xlarge
      storageClass: vwt-storage-policy
  settings:
    storage:
      classes: ["gold", "silver"]
      defaultClass: silver
    network:
      cni:
        name: antrea
      pods:
        cidrBlocks:
          - 193.0.2.0/16
      services:
        cidrBlocks:
          - 195.51.100.0/12
      serviceDomain: managedcluster.local
      proxy:
        httpProxy: http://10.186.102.224:3128    #Proxy URL for HTTP connections
        httpsProxy: http://10.186.102.224:3128   #Proxy URL for HTTPS connections
        noProxy: [10.246.0.0/16,192.168.144.0/20,192.168.128.0/20]    #SVC Pod, Egress, Ingress CIDRs
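For comparison, a global proxy is configured on the TkgServiceConfiguration as shown in the following sketch. The proxy values are illustrative; per-cluster settings such as those in the manifest above take precedence over this configuration:

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TkgServiceConfiguration
metadata:
  name: tkg-service-configuration
spec:
  defaultCNI: antrea
  proxy:
    httpProxy: http://10.186.102.224:3128     #illustrative global proxy URL
    httpsProxy: http://10.186.102.224:3128
    noProxy: [10.246.0.0/16,192.168.144.0/20,192.168.128.0/20]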
Cluster with Custom Certificates for TLS
In addition to specifying trust.additionalTrustedCAs in the TkgServiceConfiguration (see Configuration Parameters for the Tanzu Kubernetes Grid Service v1alpha1 API), you can include trust.additionalTrustedCAs under spec.settings.network in the TanzuKubernetesCluster spec. For example:
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkgs-cluster-with-custom-certs-tls
  namespace: tkgs-cluster-ns
spec:
  topology:
    controlPlane:
      count: 3
      class: guaranteed-medium
      storageClass: vwt-storage-profile
    workers:
      count: 3
      class: guaranteed-large
      storageClass: vwt-storage-profile
  distribution:
    version: v1.20.2
  settings:
    network:
      cni:
        name: antrea
      services:
        cidrBlocks: ["198.51.100.0/12"]
      pods:
        cidrBlocks: ["192.0.2.0/16"]
      serviceDomain: "managedcluster.local"
      trust:
        additionalTrustedCAs:
          - name: custom-selfsigned-cert-for-tkc
            data: |
              LS0aaaaaaaaaaaaaaabase64...
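Each data value is the base64-encoded PEM certificate. Assuming your CA certificate is in a file named custom-ca.crt (an illustrative name), one way to produce the value on a Linux workstation:

base64 -w 0 custom-ca.crt    #-w 0 disables line wrapping so the output is a single line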
Cluster that Does or Does Not Inherit Global Settings from the TkgServiceConfiguration Spec
To provision a Tanzu Kubernetes cluster that inherits a global setting from the TkgServiceConfiguration, configure the cluster with that setting either not specified or nulled out.
For example, if you want to configure a cluster that inherits the proxy setting, you can use either of the following approaches:

Option 1: Do not include the proxy setting in the cluster specification:
...
  settings:
    network:
      cni:
        name: antrea
      services:
        cidrBlocks: ["198.51.100.0/12"]
      pods:
        cidrBlocks: ["192.0.2.0/16"]
      serviceDomain: "tanzukubernetescluster.local"
Option 2: Include the proxy setting in the specification but explicitly set its value to null:
  settings:
    network:
      proxy: null
To provision a Tanzu Kubernetes cluster that does not inherit the global setting from the TkgServiceConfiguration, configure the cluster specification with all elements of that setting included but with empty values.
For example, if the TkgServiceConfiguration has a global proxy configured and you want to provision a cluster that does not inherit the global proxy settings, include the following syntax in your cluster specification:
...
  settings:
    network:
      proxy:
        httpProxy: ""
        httpsProxy: ""
        noProxy: null
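After provisioning, you can confirm which proxy settings (if any) the cluster actually received. The cluster name below is hypothetical:

kubectl get tanzukubernetescluster my-tkc -n tkgs-cluster-ns -o jsonpath='{.spec.settings.network.proxy}'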
Cluster Using a Local Content Library
To provision a Tanzu Kubernetes cluster in an air-gapped environment, create a cluster using the virtual machine image synchronized from a Local Content Library.
For the distribution.version value, you can enter either the full image name or, if you kept the name format from the image directory, you can shorten it to the Kubernetes version. If you want to use a fully qualified version number, replace the triple dashes (---) with a plus sign (+). For example, if you have an OVA image named photon-3-k8s-v1.20.2---vmware.1-tkg.1.1d4f79a, the following formats are acceptable:
spec:
  distribution:
    version: v1.20

spec:
  distribution:
    version: v1.20.2

spec:
  distribution:
    version: v1.20.2+vmware.1-tkg.1
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkgs-cluster-9
  namespace: tkgs-cluster-ns
spec:
  topology:
    controlPlane:
      count: 3
      class: best-effort-medium
      storageClass: vwt-storage-policy
    workers:
      count: 3
      class: best-effort-medium
      storageClass: vwt-storage-policy
  distribution:
    version: v1.20.2
  settings:
    network:
      cni:
        name: antrea
      services:
        cidrBlocks: ["198.51.100.0/12"]
      pods:
        cidrBlocks: ["192.0.2.0/16"]
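To confirm which Kubernetes versions the synchronized Local Content Library makes available before provisioning, list the virtual machine images from the Supervisor Cluster:

kubectl get virtualmachineimages    #lists synchronized Tanzu Kubernetes release images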