You provision Tanzu Kubernetes clusters by invoking the Tanzu Kubernetes Grid Service declarative API using kubectl and a cluster specification defined in YAML. After you provision a cluster, you operate it and deploy workloads to it using kubectl.

This workflow supports the Tanzu Kubernetes Grid Service v1alpha2 API. If you are using the v1alpha1 API, refer to that workflow.

Prerequisites

Verify that you have completed the required prerequisites before starting the workflow procedure.

Procedure

  1. Download and install the Kubernetes CLI Tools for vSphere.
  2. Authenticate with the Supervisor Cluster using the vSphere Plugin for kubectl.
    kubectl vsphere login --server=IP-ADDRESS --vsphere-username USERNAME
  3. Verify successful login to the Supervisor Cluster.
    You should see a message similar to the following:
    Logged in successfully.
    
    You have access to the following contexts:
       192.197.2.65
       tkgs-ns
    Where 192.197.2.65 is the Supervisor Cluster context and tkgs-ns is the context for the vSphere Namespace where you plan to provision the Tanzu Kubernetes cluster.
  4. Verify that the target vSphere Namespace is the current context.
    kubectl config get-contexts
    CURRENT   NAME            CLUSTER         AUTHINFO                              NAMESPACE
              192.197.2.65    192.197.2.65    wcp:192.197.2.65:[email protected]
    *         tkgs-ns         192.197.2.65    wcp:192.197.2.65:[email protected]   tkgs-ns
    If the target vSphere Namespace is not the current context, switch to it.
    kubectl config use-context tkgs-ns
  5. List the virtual machine class bindings that are available in the target vSphere Namespace.
    kubectl get virtualmachineclassbindings
    You can use only those VM classes that are bound to the target vSphere Namespace. If you do not see any VM classes, check that the default VM classes have been added to the vSphere Namespace.
  6. Get the available persistent volume storage classes.
    kubectl describe storageclasses
  7. List the available Tanzu Kubernetes releases.
    You can use either of the following commands to perform this operation:
    kubectl get tkr
    kubectl get tanzukubernetesreleases
    You can use only those releases that are returned by this command. If you do not see any releases, or the release that you want, verify that the desired OVA files have been synchronized with the content library.
  8. Craft the YAML file for provisioning your Tanzu Kubernetes cluster.
    1. Review the v1alpha2 API specification.
      Note: The TKG v1alpha1 API is deprecated and should not be used to create new TKG clusters. Use the TKG v1alpha2 API.
    2. Start with one of the example YAML files for provisioning a cluster, either the default or the custom example, depending on your requirements.
    3. Save your YAML file as tkgs-cluster-1.yaml, or similar.
    4. Populate your YAML file based on your requirements and using the information you gleaned from the output of the preceding commands, including:
      • The name of the cluster, such as tkgs-cluster-1
      • The target vSphere Namespace, such as tkgs-ns
      • Bound VM classes, such as guaranteed-medium and guaranteed-small
      • Storage classes for cluster nodes and workloads, such as vwt-storage-policy
      • The number of control plane and worker nodes (replicas)
      • The Tanzu Kubernetes release specified by the TKR NAME string, such as v1.23.8---vmware.3-tkg.1
    5. Customize your YAML file as needed. For example:
      • Specify a default persistent storage class for cluster nodes
      • Customize the cluster networking, including the CNI and the pod and service CIDRs (a hedged sketch of the network settings follows the example YAML below)
    The result of this step is a valid YAML for provisioning your TKGS cluster. For example:
    apiVersion: run.tanzu.vmware.com/v1alpha2
    kind: TanzuKubernetesCluster
    metadata:
      name: tkgs-cluster-1
      namespace: tkgs-ns
    spec:
      topology:
        controlPlane:
          replicas: 3
          vmClass: guaranteed-medium
          storageClass: vwt-storage-policy
          tkr:
            reference:
              name: v1.23.8---vmware.3-tkg.1
        nodePools:
        - name: worker-nodepool-a1
          replicas: 3
          vmClass: guaranteed-medium
          storageClass: vwt-storage-policy
          tkr:
            reference:
              name: v1.23.8---vmware.3-tkg.1
        - name: worker-nodepool-a2
          replicas: 2
          vmClass: guaranteed-small
          storageClass: vwt-storage-policy
          tkr:
            reference:
              name: v1.23.8---vmware.3-tkg.1
      settings:
        storage:
          defaultClass: vwt-storage-policy
    Note: The above example uses the default cluster networking, that is, the Antrea CNI and default CIDR ranges for cluster pods and services.
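    If you do customize cluster networking rather than accept the defaults, the configuration is declared under spec.settings.network, alongside spec.settings.storage. The fragment below is a sketch only: the CIDR values and serviceDomain are placeholders, and you should confirm the exact field names against the v1alpha2 API specification you reviewed earlier in this step.
      settings:
        network:
          # CNI provider for the cluster; antrea is the default, calico is the alternative
          cni:
            name: antrea
          # Placeholder CIDR ranges; replace with ranges that do not overlap
          # the Supervisor Cluster workload network or your infrastructure
          services:
            cidrBlocks: ["10.96.0.0/12"]
          pods:
            cidrBlocks: ["192.168.0.0/16"]
          serviceDomain: cluster.local
        storage:
          defaultClass: vwt-storage-policy
    The pod and service ranges must not overlap with each other or with networks already in use by the Supervisor Cluster.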
  9. Provision the cluster by running the following kubectl command.
    kubectl apply -f tkgs-cluster-1.yaml
    Expected result:
    tanzukubernetescluster.run.tanzu.vmware.com/tkgs-cluster-1 created
  10. Monitor the deployment of cluster nodes using kubectl.
    kubectl get tanzukubernetesclusters
    Initially the cluster is not ready because it is being provisioned.
    NAME             CONTROL PLANE   WORKER   TKR NAME                   AGE    READY   TKR COMPATIBLE   UPDATES AVAILABLE
    tkgs-cluster-1   3               5        v1.23.8---vmware.3-tkg.1   2m4s   False   True
    After a few minutes the READY status should be True.
    NAME             CONTROL PLANE   WORKER   TKR NAME                   AGE   READY   TKR COMPATIBLE   UPDATES AVAILABLE
    tkgs-cluster-1   3               5        v1.23.8---vmware.3-tkg.1   13m   True    True
  11. Monitor the deployment of cluster nodes using the vSphere Client.
    In the vSphere Hosts and Clusters inventory, you should see the virtual machine nodes being deployed in the target vSphere Namespace.
  12. Run additional kubectl commands to verify cluster provisioning.
    kubectl get tanzukubernetescluster,cluster-api,virtualmachinesetresourcepolicy,virtualmachineservice,virtualmachine
    For troubleshooting, see Troubleshooting Tanzu Kubernetes Clusters.
  13. Using the vSphere Plugin for kubectl, log in to the cluster.
    kubectl vsphere login --server=IP-ADDRESS --vsphere-username USERNAME \
    --tanzu-kubernetes-cluster-name CLUSTER-NAME \
    --tanzu-kubernetes-cluster-namespace NAMESPACE-NAME
    For example:
    kubectl vsphere login --server=192.197.2.65 --vsphere-username [email protected] \
    --tanzu-kubernetes-cluster-name tkgs-cluster-1 --tanzu-kubernetes-cluster-namespace tkgs-ns
  14. Verify successful login to the Tanzu Kubernetes cluster.
    You should see a message similar to the following.
    Logged in successfully.
    
    You have access to the following contexts:
       192.197.2.65
       tkgs-cluster-1
       tkgs-ns
    Where 192.197.2.65 is the Supervisor Cluster context, tkgs-ns is the vSphere Namespace context, and tkgs-cluster-1 is the Tanzu Kubernetes cluster context.
  15. List available cluster contexts using kubectl.
    kubectl config get-contexts
    For example:
    CURRENT   NAME             CLUSTER        AUTHINFO                                      NAMESPACE
              192.197.2.65     192.197.2.65   wcp:192.197.2.65:[email protected]
    *         tkgs-cluster-1   192.197.2.67   wcp:192.197.2.67:[email protected]
              tkgs-ns          192.197.2.65   wcp:192.197.2.65:[email protected]   tkgs-ns
    If necessary, run kubectl config use-context tkgs-cluster-1 to switch to the Tanzu Kubernetes cluster so that it is the current context.
  16. Verify cluster provisioning using the following kubectl command.
    kubectl get nodes
    For example:
    NAME                                                       STATUS   ROLES                  AGE   VERSION
    tkgs-cluster-1-control-plane-6ln2h                         Ready    control-plane,master   30m   v1.23.8+vmware.3
    tkgs-cluster-1-control-plane-6q67n                         Ready    control-plane,master   33m   v1.23.8+vmware.3
    tkgs-cluster-1-control-plane-jw964                         Ready    control-plane,master   37m   v1.23.8+vmware.3
    tkgs-cluster-1-worker-nodepool-a1-4vvkb-65494d66d8-h5fp8   Ready    <none>                 32m   v1.23.8+vmware.3
    tkgs-cluster-1-worker-nodepool-a1-4vvkb-65494d66d8-q4g24   Ready    <none>                 33m   v1.23.8+vmware.3
    tkgs-cluster-1-worker-nodepool-a1-4vvkb-65494d66d8-vdcn4   Ready    <none>                 33m   v1.23.8+vmware.3
    tkgs-cluster-1-worker-nodepool-a2-2n22f-bd59d7b96-nh4dg    Ready    <none>                 34m   v1.23.8+vmware.3
    tkgs-cluster-1-worker-nodepool-a2-2n22f-bd59d7b96-vvfmf    Ready    <none>                 33m   v1.23.8+vmware.3
  17. Verify cluster provisioning using additional kubectl commands.
    kubectl get namespaces
    kubectl get pods -A
    kubectl cluster-info
    kubectl api-resources
  18. Define an appropriate pod security policy.
    Tanzu Kubernetes clusters have the PodSecurityPolicy Admission Controller enabled by default. For guidance, see Using Pod Security Policies with Tanzu Kubernetes Clusters.
    Depending on the workload and user, you will need to create bindings for a system-provided PodSecurityPolicy or create a custom PodSecurityPolicy. See Example Role Bindings for Pod Security Policy. An illustrative binding follows.
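    As an illustration only, the following sketch binds a system-provided privileged pod security policy to all service accounts cluster-wide. The psp:vmware-system-privileged ClusterRole name is assumed here based on the referenced guidance; verify it in your cluster (for example with kubectl get clusterroles) and consider a more restrictive binding for production.
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      # Illustrative binding name; choose one that matches your conventions
      name: all:psp:privileged
    roleRef:
      kind: ClusterRole
      apiGroup: rbac.authorization.k8s.io
      # Assumed name of the system-provided privileged PSP ClusterRole; verify in your cluster
      name: psp:vmware-system-privileged
    subjects:
    # Grants the PSP to all service accounts in the cluster; broad by design, tighten as needed
    - kind: Group
      apiGroup: rbac.authorization.k8s.io
      name: system:serviceaccounts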
  19. Deploy an example workload and verify cluster creation.
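    One way to perform this verification is to apply a minimal Deployment and confirm that its pods reach the Running state. The manifest below is an illustrative sketch, not part of the product documentation: the example-nginx name and image tag are arbitrary, the image must be reachable from the cluster nodes, and the pod security policy bindings from the previous step must be in place for the pods to be admitted.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      # Hypothetical workload name used only for this check
      name: example-nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: example-nginx
      template:
        metadata:
          labels:
            app: example-nginx
        spec:
          containers:
          - name: nginx
            # Any small public image reachable from the cluster works here
            image: nginx:1.23
            ports:
            - containerPort: 80
    Save the manifest to a file, apply it with kubectl apply -f, and confirm with kubectl get deployments,pods that the replicas are Running. Remove the workload with kubectl delete -f when the check is complete.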
  20. Operationalize the cluster by deploying TKG Extensions.