You provision Tanzu Kubernetes clusters by invoking the Tanzu Kubernetes Grid Service declarative API using kubectl and a cluster specification defined in YAML. After you provision a cluster, you operate it and deploy workloads to it using kubectl.
This workflow supports the Tanzu Kubernetes Grid Service v1alpha2 API. If you are using the v1alpha1 API, refer to that workflow.
Prerequisites
Verify the completion of the following prerequisites before starting the workflow procedure:
- Install or update your environment to support the Tanzu Kubernetes Grid Service v1alpha2 API. Refer to the requirements for details. The minimum Tanzu Kubernetes release that supports the v1alpha2 API is v1.21.2. Refer to the VMware Tanzu Kubernetes releases Release Notes for details.
- Configure a vSphere Namespace for hosting Tanzu Kubernetes clusters. The namespace requires edit permissions for DevOps engineers and shared storage. See Create and Configure a vSphere Namespace.
- Create a content library for Tanzu Kubernetes releases and synchronize the releases you want to use. See Creating and Managing Content Libraries for Tanzu Kubernetes releases.
- Decide which default VM classes you want to use and if you require custom VM classes. See Virtual Machine Classes for Tanzu Kubernetes Clusters.
- Associate the content library and the virtual machine classes with the vSphere Namespace. See Configure a vSphere Namespace for Tanzu Kubernetes releases.
Procedure
- Download and install the Kubernetes CLI Tools for vSphere.
For guidance, see Download and Install the Kubernetes CLI Tools for vSphere.
- Authenticate with the Supervisor Cluster using the vSphere Plugin for kubectl.
```
kubectl vsphere login --server=IP-ADDRESS --vsphere-username USERNAME
```

For guidance, see Connect to the Supervisor Cluster as a vCenter Single Sign-On User.
- Verify successful login to the Supervisor Cluster.
You should see a message similar to the following:

```
Logged in successfully.

You have access to the following contexts:
   192.197.2.65
   tkgs-ns
```

Where 192.197.2.65 is the Supervisor Cluster context and tkgs-ns is the context for the vSphere Namespace where you plan to provision the Tanzu Kubernetes cluster.
- Verify that the target vSphere Namespace is the current context.
```
kubectl config get-contexts
```

```
CURRENT   NAME           CLUSTER        AUTHINFO                                        NAMESPACE
          192.197.2.65   192.197.2.65   wcp:192.197.2.65:[email protected]
*         tkgs-ns        192.197.2.65   wcp:192.197.2.65:[email protected]               tkgs-ns
```

If the target vSphere Namespace is not the current context, switch to it:

```
kubectl config use-context tkgs-ns
```
- List the virtual machine class bindings that are available in the target vSphere Namespace.

```
kubectl get virtualmachineclassbindings
```

You can only use VM classes that are bound to the target namespace. If you do not see any VM classes, check that the vSphere Namespace has the default VM classes added to it.
- Get the available persistent volume storage classes.
```
kubectl describe storageclasses
```
- List the available Tanzu Kubernetes releases.
You can use either of the following commands to perform this operation:

```
kubectl get tkr
```

```
kubectl get tanzukubernetesreleases
```

You can only use releases that are returned by these commands. If you do not see any releases, or do not see the releases that you want, verify that you have synchronized the desired OVA files with the content library.
- Craft the YAML file for provisioning your Tanzu Kubernetes cluster.
- Review the v1alpha2 API specification.
Note: The TKG v1alpha1 API is deprecated and should not be used to create new TKG clusters. Use the TKG v1alpha2 API.
- Start with one of the example YAML files for provisioning a cluster, either the default or custom depending on your requirements.
- Save your YAML file as tkgs-cluster-1.yaml, or similar.
- Populate your YAML file based on your requirements and using the information you gleaned from the output of the preceding commands, including:
- The name of the cluster, such as tkgs-cluster-1
- The target vSphere Namespace, such as tkgs-ns
- Bound VM classes, such as guaranteed-medium and guaranteed-small
- Storage classes for cluster nodes and workloads, such as vwt-storage-policy
- The number of control plane and worker nodes (replicas)
- The Tanzu Kubernetes release specified by the TKR NAME string, such as v1.23.8---vmware.3-tkg.1
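Note that the TKR NAME string uses `---` where the underlying Kubernetes semantic version uses `+` (for example, v1.23.8+vmware.3-tkg.1 becomes v1.23.8---vmware.3-tkg.1), because a `+` is not legal in a Kubernetes object name. As a rough sketch of the conversion (the `tkr_name` helper is hypothetical, not part of kubectl or the TKGS tooling):

```shell
# Hypothetical helper: derive the TKR NAME string from a Tanzu Kubernetes
# release version by replacing the "+" with "---".
tkr_name() {
  printf '%s\n' "$1" | sed 's/+/---/'
}

tkr_name 'v1.23.8+vmware.3-tkg.1'   # prints v1.23.8---vmware.3-tkg.1
```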
- Customize your YAML file as needed. For example:
- Specify a default persistent storage class for cluster nodes
- Customize the cluster networking, including the CNI, pod and service CIDRs
The result of this step is a valid YAML for provisioning your TKGS cluster. For example:

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster
metadata:
  name: tkgs-cluster-1
  namespace: tkgs-ns
spec:
  topology:
    controlPlane:
      replicas: 3
      vmClass: guaranteed-medium
      storageClass: vwt-storage-policy
      tkr:
        reference:
          name: v1.23.8---vmware.3-tkg.1
    nodePools:
    - name: worker-nodepool-a1
      replicas: 3
      vmClass: guaranteed-medium
      storageClass: vwt-storage-policy
      tkr:
        reference:
          name: v1.23.8---vmware.3-tkg.1
    - name: worker-nodepool-a2
      replicas: 2
      vmClass: guaranteed-small
      storageClass: vwt-storage-policy
      tkr:
        reference:
          name: v1.23.8---vmware.3-tkg.1
  settings:
    storage:
      defaultClass: vwt-storage-policy
```

Note: The above example uses the default cluster networking, that is, the Antrea CNI and default CIDR ranges for cluster pods and services.
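If you do customize the networking, the settings go under `spec.settings.network`. The following fragment is a sketch of what that section can look like in the v1alpha2 API, with the commonly documented default values written out explicitly; verify the field names and values against the v1alpha2 API specification for your release, and ensure the CIDR ranges do not overlap networks used by the Supervisor Cluster:

```yaml
  settings:
    network:
      cni:
        name: antrea                     # the default CNI; calico is the alternative
      services:
        cidrBlocks: ["10.96.0.0/12"]     # default service CIDR, shown explicitly
      pods:
        cidrBlocks: ["192.168.0.0/16"]   # default pod CIDR, shown explicitly
      serviceDomain: cluster.local
```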
- Provision the cluster by running the following kubectl command.

```
kubectl apply -f tkgs-cluster-1.yaml
```

Expected result:

```
tanzukubernetescluster.run.tanzu.vmware.com/tkgs-cluster-1 created
```
- Monitor the deployment of cluster nodes using kubectl.

```
kubectl get tanzukubernetesclusters
```

Initially the cluster is not ready because it is being provisioned.

```
NAME             CONTROL PLANE   WORKER   TKR NAME                   AGE    READY   TKR COMPATIBLE   UPDATES AVAILABLE
tkgs-cluster-1   3               5        v1.23.8---vmware.3-tkg.1   2m4s   False   True
```

After a few minutes the READY status should be True.

```
NAME             CONTROL PLANE   WORKER   TKR NAME                   AGE   READY   TKR COMPATIBLE   UPDATES AVAILABLE
tkgs-cluster-1   3               5        v1.23.8---vmware.3-tkg.1   13m   True    True
```

For additional guidance, see Monitor Tanzu Kubernetes Cluster Status Using kubectl.
- Monitor the deployment of cluster nodes using the vSphere Client.
In the vSphere Hosts and Clusters inventory, you should see the virtual machine nodes being deployed in the target vSphere Namespace. For additional guidance, see Monitor Tanzu Kubernetes Cluster Status Using the vSphere Client.
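To avoid re-running `kubectl get tanzukubernetesclusters` by hand while waiting, you can poll until READY flips to True. This is a sketch, not part of the official tooling; it assumes the vSphere Namespace is the current context and parses the READY column, which is the sixth whitespace-separated field of the `--no-headers` output:

```shell
# Sketch: extract the READY column from a
# `kubectl get tanzukubernetesclusters <name> --no-headers` row.
ready_status() {
  awk '{print $6}'
}

# Against a live Supervisor Cluster (tkgs-cluster-1 is an example name):
#   until kubectl get tanzukubernetesclusters tkgs-cluster-1 --no-headers \
#         | ready_status | grep -qx True; do
#     echo "waiting for cluster to become ready..."; sleep 30
#   done
```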
- Run additional kubectl commands to verify cluster provisioning.

```
kubectl get tanzukubernetescluster,cluster-api,virtualmachinesetresourcepolicy,virtualmachineservice,virtualmachine
```

For additional guidance, see Use Tanzu Kubernetes Cluster Operational Commands. For troubleshooting, see Troubleshooting Tanzu Kubernetes Clusters.
- Using the vSphere Plugin for kubectl, log in to the cluster.
```
kubectl vsphere login --server=IP-ADDRESS --vsphere-username USERNAME \
--tanzu-kubernetes-cluster-name CLUSTER-NAME \
--tanzu-kubernetes-cluster-namespace NAMESPACE-NAME
```

For example:

```
kubectl vsphere login --server=192.197.2.65 --vsphere-username [email protected] \
--tanzu-kubernetes-cluster-name tkgs-cluster-1 \
--tanzu-kubernetes-cluster-namespace tkgs-ns
```

For additional guidance, see Connect to a Tanzu Kubernetes Cluster as a vCenter Single Sign-On User.
- Verify successful login to the Tanzu Kubernetes cluster.
You should see a message similar to the following.

```
Logged in successfully.

You have access to the following contexts:
   192.197.2.65
   tkgs-cluster-1
   tkgs-ns
```

Where 192.197.2.65 is the Supervisor Cluster context, tkgs-ns is the vSphere Namespace context, and tkgs-cluster-1 is the Tanzu Kubernetes cluster context.
- List available cluster contexts using kubectl.

```
kubectl config get-contexts
```
For example:

```
CURRENT   NAME             CLUSTER        AUTHINFO                                        NAMESPACE
          192.197.2.65     192.197.2.65   wcp:192.197.2.65:[email protected]
*         tkgs-cluster-1   192.197.2.67   wcp:192.197.2.67:[email protected]               tkgs-ns
          tkgs-ns          192.197.2.65   wcp:192.197.2.65:[email protected]               tkgs-ns
```

If necessary, use kubectl config use-context tkgs-cluster-1 to switch to the Tanzu Kubernetes cluster so that it is the current context.
- Verify cluster provisioning using the following kubectl command.
```
kubectl get nodes
```

For example:

```
NAME                                                       STATUS   ROLES                  AGE   VERSION
tkgs-cluster-1-control-plane-6ln2h                         Ready    control-plane,master   30m   v1.21.6+vmware.1
tkgs-cluster-1-control-plane-6q67n                         Ready    control-plane,master   33m   v1.21.6+vmware.1
tkgs-cluster-1-control-plane-jw964                         Ready    control-plane,master   37m   v1.21.6+vmware.1
tkgs-cluster-1-worker-nodepool-a1-4vvkb-65494d66d8-h5fp8   Ready    <none>                 32m   v1.21.6+vmware.1
tkgs-cluster-1-worker-nodepool-a1-4vvkb-65494d66d8-q4g24   Ready    <none>                 33m   v1.21.6+vmware.1
tkgs-cluster-1-worker-nodepool-a1-4vvkb-65494d66d8-vdcn4   Ready    <none>                 33m   v1.21.6+vmware.1
tkgs-cluster-1-worker-nodepool-a2-2n22f-bd59d7b96-nh4dg    Ready    <none>                 34m   v1.21.6+vmware.1
tkgs-cluster-1-worker-nodepool-a2-2n22f-bd59d7b96-vvfmf    Ready    <none>                 33m   v1.21.6+vmware.1
```
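As a quick sanity check, the number of Ready nodes should equal the replicas in your cluster specification: 3 control plane nodes plus 3 + 2 workers = 8 in the example cluster. A sketch of that check (the `ready_nodes` helper is hypothetical):

```shell
# Hypothetical helper: count rows whose STATUS column (2nd field) is "Ready"
# in `kubectl get nodes --no-headers` output.
ready_nodes() {
  awk '$2 == "Ready" { n++ } END { print n+0 }'
}

# Against the live cluster:
#   test "$(kubectl get nodes --no-headers | ready_nodes)" -eq 8
```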
- Verify cluster provisioning using additional kubectl commands.
```
kubectl get namespaces
kubectl get pods -A
kubectl cluster-info
kubectl api-resources
```
- Define an appropriate pod security policy.
Tanzu Kubernetes clusters have the PodSecurityPolicy Admission Controller enabled by default. For guidance, see Using Pod Security Policies with Tanzu Kubernetes Clusters. Depending on the workload and user, you will need to create bindings for a system-provided PodSecurityPolicy, or create a custom PodSecurityPolicy. See Example Role Bindings for Pod Security Policy.
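For example, a cluster-wide binding that lets all service accounts use the system-provided privileged PodSecurityPolicy is a common starting point in the VMware examples. Treat the following as a permissive sketch and tighten it to match your security requirements:

```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: all:psp:privileged
roleRef:
  kind: ClusterRole
  name: psp:vmware-system-privileged   # ClusterRole provided in TKGS clusters
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: system:serviceaccounts         # grants the PSP to all service accounts
  apiGroup: rbac.authorization.k8s.io
```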
- Deploy an example workload and verify cluster creation.
For guidance, see Deploy Workloads on Tanzu Kubernetes Clusters.
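A minimal test workload can be as simple as the following generic Deployment. The nginx image and the names used here are illustrative examples, not something the documentation requires; if the pods fail to schedule, revisit the pod security policy bindings from the preceding step:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test          # example name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx
        image: nginx        # example image
        ports:
        - containerPort: 80
```

Apply it with `kubectl apply -f nginx-test.yaml` while the Tanzu Kubernetes cluster is the current context, then confirm the pod reaches Running with `kubectl get pods`.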
- Operationalize the cluster by deploying TKG Extensions.
For guidance, see Install Packages on Tanzu Kubernetes Clusters.