You can use the example YAML files as a starting point for provisioning and updating Tanzu Kubernetes clusters using the Tanzu Kubernetes Grid Service.

Minimal YAML for Cluster Provisioning

The minimal YAML configuration required to create a Tanzu Kubernetes cluster has the following characteristics:
  • This YAML provisions a cluster with a single control plane node and three worker nodes.
  • The apiVersion and kind parameter values are constants.
  • The Kubernetes version, listed as v1.17, is resolved to the most recent distribution matching that minor version, which is v1.17.8+vmware.1-tkg.1.5417466. To use Antrea networking, the version must be v1.17.8 or later. See Supported Update Path.
  • The VM class best-effort-<size> has no reservations. For more information, see Virtual Machine Class Types for Tanzu Kubernetes Clusters.
  • If you want to use Helm or Kubeapps, specify a default storage class, which is referenced by many charts: add spec.settings.storage.defaultClass to the minimal YAML, as shown in the fragment that follows the example YAML.
  • Runtime parameters for settings.storage are not specified because the defaults are used.
  • Runtime parameters for settings.network are not needed because the defaults are used. For reference, the following are the default network settings, shown as an equivalent settings.network block after this list. For more information, see Configuration Parameters for Tanzu Kubernetes Clusters.
    • Default Pod CIDR: 192.168.0.0/16
    • Default Services CIDR: 10.96.0.0/12
    • Default Service Domain: cluster.local
  • Because pod security policy is enforced, after creating a Tanzu Kubernetes cluster using the example YAML, you can run pods but not deployments. To create a deployment, define a binding to one of the default pod security policies; see the example binding after the note below. For more information, see Using Pod Security Policies with Tanzu Kubernetes Clusters.
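For reference, the defaults listed above are equivalent to explicitly specifying the following settings.network block. This fragment is a sketch that assumes the v1alpha1 field names (cni, pods, services, serviceDomain); omitting it from the minimal YAML produces the same result.
settings:
  network:
    cni:
      name: antrea                             #default CNI
    pods:
      cidrBlocks: ["192.168.0.0/16"]           #default Pod CIDR
    services:
      cidrBlocks: ["10.96.0.0/12"]             #default Services CIDR
    serviceDomain: cluster.local               #default service domain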
apiVersion: run.tanzu.vmware.com/v1alpha1      #TKG API endpoint
kind: TanzuKubernetesCluster                   #required parameter
metadata:
  name: tkg-cluster-1                          #cluster name, user defined
  namespace: namespace-a                       #supervisor namespace
spec:
  distribution:
    version: v1.17  #Resolves to the latest v1.17 image (v1.17.8+vmware.1-tkg.1.5417466)
  topology:
    controlPlane:
      count: 1                                 #number of control plane nodes
      class: best-effort-small                 #vmclass for control plane nodes
      storageClass: tkc-storage-policy         #storageclass for control plane
    workers:
      count: 3                                 #number of worker nodes
      class: best-effort-small                 #vmclass for worker nodes
      storageClass: tkc-storage-policy         #storageclass for worker nodes
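If you want to set a default storage class, for example for Helm or Kubeapps as noted above, extend the minimal YAML with a settings.storage block at the same level as topology. This fragment is a sketch; tkc-storage-policy reuses the storage class name from the example above.
  settings:
    storage:
      defaultClass: tkc-storage-policy         #default storage class for PVCs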
Note: If you are using a Tanzu Kubernetes version that is not compatible with Antrea (earlier than v1.17.8), you must specify calico in spec.settings.network.cni, or change the default CNI to Calico, to avoid the error antrea cni not supported in current tkc version. See Change the Default CNI for Tanzu Kubernetes Clusters.
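For example, the following ClusterRoleBinding is one way to allow deployments in the provisioned cluster: it binds all authenticated users to the default privileged pod security policy. This is a sketch; the binding name is user defined, and it assumes the psp:vmware-system-privileged ClusterRole that ships with Tanzu Kubernetes clusters.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-tkg-admin-privileged-binding   #binding name, user defined
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:vmware-system-privileged           #default privileged pod security policy
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated                   #all authenticated cluster users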

Complete YAML for Cluster Provisioning

The complete YAML configuration required to create a Tanzu Kubernetes cluster has the following characteristics:

  • This YAML provisions a cluster with three control plane nodes and five worker nodes.
  • The Kubernetes version, listed as v1.17, is resolved to the most recent distribution matching that minor version, which is v1.17.8+vmware.1-tkg.1.5417466. To use Antrea networking, the version must be v1.17.8 or later. See Supported Update Path.
  • The network settings are included because the defaults are not used. The CIDR ranges must not overlap with those of the Supervisor Cluster. For more information, see Configuration Parameters for Tanzu Kubernetes Clusters.
  • Different storage classes are used for the control plane and worker nodes.
  • The VM class guaranteed-<size> has full reservations. For more information, see Virtual Machine Class Types for Tanzu Kubernetes Clusters.
apiVersion: run.tanzu.vmware.com/v1alpha1    
kind: TanzuKubernetesCluster                 
metadata:
  name: tkg-cluster-2                                
  namespace: namespace-b                     
spec:
  distribution:
    version: v1.17  #Resolves to the latest v1.17 image (v1.17.8+vmware.1-tkg.1.5417466)
  topology:
    controlPlane:
      count: 3                                 #3 control plane nodes                       
      class: guaranteed-large                  #large size VM
      storageClass: tkc-storage-policy-yellow  #Specific storage class for control plane       
    workers:
      count: 5                                 #5 worker nodes                     
      class: guaranteed-xlarge                 #extra large size VM          
      storageClass: tkc-storage-policy-green   #Specific storage class for workers     
  settings:
    network:
      cni:
        name: calico 
      services:
        cidrBlocks: ["198.51.100.0/12"]        #Cannot overlap with Supervisor Cluster
      pods:
        cidrBlocks: ["192.0.2.0/16"]           #Cannot overlap with Supervisor Cluster
    storage:
      classes: ["gold", "silver"]              #Named PVC storage classes
      defaultClass: silver                     #Default PVC storage class
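With defaultClass set to silver, a PersistentVolumeClaim that omits storageClassName binds to the silver class. The following claim is a hypothetical sketch; the claim name and requested size are illustrative.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc                            #claim name, illustrative
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi                             #requested size, illustrative
  #storageClassName is omitted, so the default class (silver) is used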