The Tanzu Kubernetes Grid Service API provides intelligent defaults and an array of options for customizing Tanzu Kubernetes clusters. Refer to the examples to provision clusters of various types with different configurations and customizations to meet your needs.

Minimal YAML for Provisioning a Tanzu Kubernetes Cluster

The following example YAML is the minimal configuration required to invoke the Tanzu Kubernetes Grid Service and provision a Tanzu Kubernetes cluster that uses all of the default settings.

Characteristics of the minimal example YAML include:
  • The Tanzu Kubernetes Release version, listed as v1.20, is resolved to the most recent distribution matching that minor version, for example v1.20.2+vmware.1-tkg.1.xxxxxx. See List of Tanzu Kubernetes Releases.
  • The VM class best-effort-<size> has no reservations. For more information, see Virtual Machine Classes for Tanzu Kubernetes Clusters.
  • The cluster does not include persistent storage for containers. If needed, configure it in spec.settings.storage. See the storage example below.
  • Some workloads, such as Helm, may require a spec.settings.storage.defaultClass. See the storage example below.
  • The spec.settings.network section is not specified. This means that the cluster uses the following default network settings:
    • Default CNI: antrea
    • Default Pod CIDR: 192.168.0.0/16
    • Default Services CIDR: 10.96.0.0/12
    • Default Service Domain: cluster.local
apiVersion: run.tanzu.vmware.com/v1alpha1      #TKGS API version
kind: TanzuKubernetesCluster                   #required parameter
metadata:
  name: tkgs-cluster-1                         #cluster name, user defined
  namespace: tkgs-cluster-ns                   #vsphere namespace
spec:
  distribution:
    version: v1.20                             #Resolves to latest TKR 1.20
  topology:
    controlPlane:
      count: 1                                 #number of control plane nodes
      class: best-effort-medium                #vmclass for control plane nodes
      storageClass: vwt-storage-policy         #storageclass for control plane
    workers:
      count: 3                                 #number of worker nodes
      class: best-effort-medium                #vmclass for worker nodes
      storageClass: vwt-storage-policy         #storageclass for worker nodes
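
For reference, the default network settings listed above can be written out explicitly. The following sketch shows the equivalent spec.settings.network section; omitting the section entirely, as the minimal example does, produces the same result.
  settings:
    network:
      cni:
        name: antrea                    #default CNI
      pods:
        cidrBlocks: ["192.168.0.0/16"]  #default Pod CIDR
      services:
        cidrBlocks: ["10.96.0.0/12"]    #default Services CIDR
      serviceDomain: cluster.local      #default Service Domain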

Cluster with Separate Disks and Storage Parameters

The following example YAML shows how to provision a cluster with separate disks and storage parameters for cluster control plane and worker nodes.

Using separate disks and storage parameters for high-churn data helps minimize read-write overhead related to the use of linked clones, among other benefits. There are two primary use cases:
  • Customize storage performance on control plane nodes for the etcd database
  • Customize the size of the disk for container images on the worker nodes

The example has the following characteristics:

  • The spec.topology.controlPlane.volumes settings specify the separate volume for the etcd database.
  • The spec.topology.workers.volumes settings specify the separate volume for the container images.
  • The mountPath: /var/lib/containerd for container images is supported for Tanzu Kubernetes releases 1.17 and later.
apiVersion: run.tanzu.vmware.com/v1alpha1      
kind: TanzuKubernetesCluster                   
metadata:
  name: tkgs-cluster-5                         
  namespace: tkgs-cluster-ns                   
spec:
  distribution:
    version: v1.20                            
  topology:
    controlPlane:
      count: 3                                 
      class: best-effort-medium                 
      storageClass: vwt-storage-policy
      volumes:
        - name: etcd
          mountPath: /var/lib/etcd
          capacity:
            storage: 4Gi       
    workers:
      count: 3                                 
      class: best-effort-medium                 
      storageClass: vwt-storage-policy        
      volumes:
        - name: containerd
          mountPath: /var/lib/containerd
          capacity:
            storage: 16Gi       

Cluster with a Custom Antrea Network

The following YAML demonstrates how to provision a Tanzu Kubernetes cluster with custom network ranges for the Antrea CNI.
  • Because custom network settings are applied, the cni.name parameter is required even though the default Antrea CNI is used. The example uses the following custom values:
    • CNI name: antrea
    • Custom Pods CIDR: 193.0.2.0/16
    • Custom Services CIDR: 195.51.100.0/12
    • Custom Service Domain: managedcluster.local
  • The custom CIDR blocks cannot overlap with the networks used by the Supervisor Cluster. For more information, see Configuration Parameters for Tanzu Kubernetes Clusters.
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-3-antrea
  namespace: tkgs-cluster-ns
spec:
  distribution:
    version: v1.20
  topology:
    controlPlane:
      class: guaranteed-medium
      count: 3
      storageClass: vwt-storage-policy
    workers:
      class: guaranteed-medium
      count: 5
      storageClass: vwt-storage-policy
  settings:
    network:
      cni:
        name: antrea                       #Use Antrea CNI
      pods:
        cidrBlocks:
        - 193.0.2.0/16                     #Must not overlap with SVC
      services:
        cidrBlocks:
        - 195.51.100.0/12                  #Must not overlap with SVC
      serviceDomain: managedcluster.local

Cluster with a Custom Calico Network

The following YAML demonstrates how to provision a Tanzu Kubernetes cluster with a custom Calico network.
apiVersion: run.tanzu.vmware.com/v1alpha1    
kind: TanzuKubernetesCluster                 
metadata:
  name: tkgs-cluster-2                                
  namespace: tkgs-cluster-ns                     
spec:
  distribution:
    version: v1.20                              
  topology:
    controlPlane:
      count: 3                                                        
      class: guaranteed-large                  
      storageClass: vwt-storage-policy        
    workers:
      count: 5                                                      
      class: guaranteed-xlarge                           
      storageClass: vwt-storage-policy      
  settings:
    network:
      cni:
        name: calico                           #Use Calico CNI for this cluster
      services:
        cidrBlocks: ["198.51.100.0/12"]        #Must not overlap with SVC
      pods:
        cidrBlocks: ["192.0.2.0/16"]           #Must not overlap with SVC
      serviceDomain: managedcluster.local

Cluster with Storage Classes and a Default Class for Persistent Volumes

The following example YAML demonstrates how to provision a cluster with storage classes for dynamic PVC provisioning and a default storage class.
  • The spec.settings.storage.classes setting specifies two storage classes for persistent storage for containers in the cluster.
  • The spec.settings.storage.defaultClass is specified. Some applications require a default storage class. For example, Helm and Kubeapps charts commonly reference the defaultClass.
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: default-storage-spec
  namespace: tkgs-cluster-ns
spec:
  topology:
    controlPlane:
      count: 3
      class: best-effort-medium
      storageClass: vwt-storage-policy
    workers:
      count: 3
      class: best-effort-medium
      storageClass: vwt-storage-policy
  distribution:
    version: v1.20
  settings:
    network:
      cni:
        name: antrea
      services:
        cidrBlocks: ["198.51.100.0/12"]
      pods:
        cidrBlocks: ["192.0.2.0/16"]
      serviceDomain: "tanzukubernetescluster.local"
    storage:
      classes: ["gold", "silver"]              #Array of named PVC storage classes
      defaultClass: silver                     #Default PVC storage class
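
To illustrate the effect of defaultClass, the following is a hypothetical PersistentVolumeClaim created inside the provisioned cluster. Because storageClassName is omitted, Kubernetes binds the claim using the cluster's default storage class, silver in this example.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim                       #hypothetical claim name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  #storageClassName omitted: the defaultClass (silver) is used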

Cluster with a Proxy Server

You can use a proxy server with an individual Tanzu Kubernetes cluster by applying the proxy server configuration to the cluster manifest.

Note the following characteristics:

  • The section spec.settings.network.proxy specifies the HTTP(S) proxy configuration for this Tanzu Kubernetes cluster.
  • The syntax for both proxy server values is http://<user>:<pwd>@<ip>:<port>.
  • Certain endpoints are automatically excluded from proxying, including localhost, 127.0.0.1, and the Pod and Service CIDRs for Tanzu Kubernetes clusters. You do not need to include these in the noProxy field.
  • The noProxy field accepts an array of CIDRs that should not be proxied. Get the required values from the Workload Network on the Supervisor Cluster. Refer to the image at Configuration Parameters for Tanzu Kubernetes Clusters.
  • If a global proxy is configured on the TkgServiceConfiguration, that proxy information is propagated to the cluster manifest after the initial deployment of the cluster. The global proxy configuration is added to the cluster manifest only if no proxy configuration fields are present when the cluster is created. In other words, per-cluster configuration takes precedence and overrides the global proxy configuration.
apiVersion: run.tanzu.vmware.com/v1alpha1    
kind: TanzuKubernetesCluster                 
metadata:
  name: tkgs-cluster-with-proxy                                
  namespace: tkgs-cluster-ns                     
spec:
  distribution:
    version: v1.20                              
  topology:
    controlPlane:
      count: 3                                                        
      class: guaranteed-large                  
      storageClass: vwt-storage-policy        
    workers:
      count: 5                                                      
      class: guaranteed-xlarge                           
      storageClass: vwt-storage-policy       
  settings:
    storage:
      classes: ["gold", "silver"]              
      defaultClass: silver                     
    network:
      cni:
        name: antrea 
      pods:
        cidrBlocks:
        - 193.0.2.0/16
      services:
        cidrBlocks:
        - 195.51.100.0/12           
      serviceDomain: managedcluster.local
      proxy:
        httpProxy: http://10.186.102.224:3128  #Proxy URL for HTTP connections
        httpsProxy: http://10.186.102.224:3128 #Proxy URL for HTTPS connections
        noProxy: [10.246.0.0/16,192.168.144.0/20,192.168.128.0/20] #SVC Pod, Egress, Ingress CIDRs
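
The global proxy mentioned above is configured on the TkgServiceConfiguration object rather than on an individual cluster. The following is a minimal sketch; the proxy URL and noProxy ranges are placeholder values, and you should confirm the field layout against the TkgServiceConfiguration reference for your TKGS version.
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TkgServiceConfiguration
metadata:
  name: tkg-service-configuration        #singleton service configuration
spec:
  proxy:
    httpProxy: http://10.186.102.224:3128   #placeholder proxy URL
    httpsProxy: http://10.186.102.224:3128  #placeholder proxy URL
    noProxy: [10.246.0.0/16,192.168.144.0/20,192.168.128.0/20]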

Cluster Using a Local Content Library

To provision a Tanzu Kubernetes cluster in an air-gapped environment, create a cluster using the virtual machine image synchronized from a Local Content Library.

To provision a cluster using a local content library image, you must specify that image in the cluster spec. For the distribution.version value, you can enter either the full image name or, if you kept the name format from the image directory, you can shorten it to the Kubernetes version. If you want to use a fully qualified version number, replace --- with +. For example, if you have an OVA image named photon-3-k8s-v1.20.2---vmware.1-tkg.1.1d4f79a, the following formats are acceptable.
spec:
  distribution:
    version: v1.20
spec:
  distribution:
    version: v1.20.2
spec:
  distribution:
    version: v1.20.2+vmware.1-tkg.1
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkgs-cluster-9
  namespace: tkgs-cluster-ns
spec:
  topology:
    controlPlane:
      count: 3
      class: best-effort-medium
      storageClass: vwt-storage-policy
    workers:
      count: 3
      class: best-effort-medium
      storageClass: vwt-storage-policy
  distribution:
    version: v1.20.2
  settings:
    network:
      cni:
        name: antrea
      services:
        cidrBlocks: ["198.51.100.0/12"]
      pods:
        cidrBlocks: ["192.0.2.0/16"]