Follow this workflow to provision a TKG 2 cluster declaratively using kubectl commands and a cluster specification defined in YAML.
Prerequisites
Verify or complete the following prerequisites before starting the provisioning workflow:
- Install or update your environment to the latest Supervisor version. See Running TKG 2 Clusters on Supervisor.
- Create or update a content library with the latest Tanzu Kubernetes releases. See Administering Tanzu Kubernetes Releases for TKG 2 Clusters on Supervisor.
- Create and configure a vSphere Namespace for hosting TKG 2 clusters. See Configuring vSphere Namespaces for TKG 2 Clusters on Supervisor.
Procedure
- Install the Kubernetes CLI Tools for vSphere.
- Authenticate with Supervisor using kubectl.
kubectl vsphere login --server=IP-ADDRESS --vsphere-username USERNAME
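For example, assuming the Supervisor address and administrator account used elsewhere in this workflow (substitute your own values):
kubectl vsphere login --server=192.197.2.65 --vsphere-username administrator@vsphere.local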
- Verify successful login to the Supervisor.
You should see a message similar to the following:
Logged in successfully.
You have access to the following contexts:
   192.197.2.65
   tkg2-cluster-namespace
Where 192.197.2.65 is the Supervisor context and tkg2-cluster-namespace is the context for the vSphere Namespace where you plan to provision the TKG 2 cluster.
- Verify that the target vSphere Namespace is the current context.
kubectl config get-contexts
CURRENT   NAME                     CLUSTER         AUTHINFO                                        NAMESPACE
          192.197.2.65             192.197.2.65    wcp:10.197.154.65:administrator@vsphere.local
*         tkg2-cluster-namespace   10.197.154.65   wcp:10.197.154.65:administrator@vsphere.local   tkg2-cluster-namespace
If the target vSphere Namespace is not the current context, switch to it.
kubectl config use-context tkg2-cluster-namespace
- List the virtual machine class bindings that are available in the target vSphere Namespace.
kubectl get virtualmachineclassbindings
You can only use those VM classes that are bound to the target namespace. If you do not see any VM classes, check that the vSphere Namespace has the default VM classes added to it.
- Get the available persistent volume storage classes.
kubectl describe namespace VSPHERE-NAMESPACE-NAME
The command returns details about the namespace, including the storage class in the form tkg2-storage-policy.storageclass.storage.k8s.io/requests.storage. The first token of the string is the storage class name, in this example tkg2-storage-policy. The command kubectl describe storageclasses also returns available storage classes, but requires vSphere administrator permissions.
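To pull just the storage class entries out of the kubectl describe namespace output, you can filter it, for example (assuming the vSphere Namespace name used in this workflow):
kubectl describe namespace tkg2-cluster-namespace | grep storageclass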
- List the available Tanzu Kubernetes releases.
You can use either of the following commands to perform this operation:
kubectl get tkr
kubectl get tanzukubernetesreleases
This command returns the TKRs that are available to you in this vSphere Namespace, and shows their compatibility with the Supervisor you are deploying on. You can only use the releases that are returned by this command. If you do not see any releases, or the releases you want, verify that you have done the following: a) Created a TKR content library; b) Synchronized the content library with the desired OVA files; and c) Associated the content library with the vSphere Namespace where you are provisioning the TKG 2 cluster.
- Craft the YAML file for provisioning the TKG cluster.
- Determine the type of cluster you will create and review its API and features:
- TanzuKubernetesCluster: Using the TanzuKubernetesCluster v1alpha3 API
- Cluster: Using the Cluster v1beta1 API
- Start with one of the example YAMLs for provisioning the cluster.
- Save your YAML file as tkg2-cluster-1.yaml, or similar.
- Populate the YAML using the information you gleaned from the output of the preceding commands, including:
- The name of the cluster, such as tkg2-cluster-1
- The target vSphere Namespace
- Bound VM classes, such as guaranteed-medium
- Storage class for cluster nodes and persistent volumes
- The number of control plane and worker nodes (replicas)
- The Tanzu Kubernetes release specified by the TKR NAME string, such as v1.23.8---vmware.2-tkg.2-zshippable
- Customize the TKG 2 cluster YAML as needed.
- Add separate volumes for high-churn components, such as etcd and containerd
- Specify a default persistent storage class for cluster nodes and persistent volumes
- Customize cluster networking, including the CNI, pod and service CIDRs
- Use a YAML syntax checker and verify that the YAML is valid.
The result of this step is a valid YAML for provisioning the TKG cluster, similar to the following example.
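The following is a minimal sketch of a TanzuKubernetesCluster specification that uses the v1alpha3 API and the example values gathered in the preceding steps. The node counts, volume sizes, CNI selection, and CIDR ranges shown here are illustrative assumptions; adjust them to your environment and consult the API reference for the full set of fields.
apiVersion: run.tanzu.vmware.com/v1alpha3
kind: TanzuKubernetesCluster
metadata:
  name: tkg2-cluster-1                 # cluster name
  namespace: tkg2-cluster-namespace    # target vSphere Namespace
spec:
  topology:
    controlPlane:
      replicas: 3                                      # example: 3 control plane nodes
      vmClass: guaranteed-medium                       # must be bound to the vSphere Namespace
      storageClass: tkg2-storage-policy                # storage class for node disks
      tkr:
        reference:
          name: v1.23.8---vmware.2-tkg.2-zshippable    # TKR NAME string
      volumes:
      - name: etcd                     # separate volume for a high-churn component (example size)
        mountPath: /var/lib/etcd
        capacity:
          storage: 4Gi
    nodePools:
    - name: worker-nodepool-a1
      replicas: 6                                      # example: 6 worker nodes
      vmClass: guaranteed-medium
      storageClass: tkg2-storage-policy
      tkr:
        reference:
          name: v1.23.8---vmware.2-tkg.2-zshippable
      volumes:
      - name: containerd               # separate volume for a high-churn component (example size)
        mountPath: /var/lib/containerd
        capacity:
          storage: 16Gi
  settings:
    storage:
      defaultClass: tkg2-storage-policy      # default storage class for persistent volumes
    network:
      cni:
        name: antrea                         # example CNI choice
      services:
        cidrBlocks: ["10.96.0.0/12"]         # example service CIDR
      pods:
        cidrBlocks: ["192.168.0.0/16"]       # example pod CIDR
      serviceDomain: cluster.local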
- Provision the TKG 2 cluster by running the following command.
kubectl apply -f tkg2-cluster-1.yaml
Expected result:
tanzukubernetescluster.run.tanzu.vmware.com/tkg2-cluster-1 created
- Monitor the provisioning of the TKG 2 cluster.
kubectl get tanzukubernetesclusters
kubectl get tkc
Or, if you created a Cluster using the v1beta1 API:
kubectl get cluster
Initially the READY status is False as the cluster is being provisioned. After a few minutes it should be True.
NAME             CONTROL PLANE   WORKER   TKR NAME                              AGE   READY   TKR COMPATIBLE   UPDATES AVAILABLE
tkg2-cluster-1   3               6        v1.23.8---vmware.2-tkg.2-zshippable   49m   True    True
Run additional commands to view details about the cluster.
kubectl get tanzukubernetescluster,cluster,virtualmachinesetresourcepolicy,virtualmachineservice,kubeadmcontrolplane,machinedeployment,machine,virtualmachine
kubectl describe tanzukubernetescluster tkg2-cluster-1
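To follow provisioning progress continuously instead of rerunning the command, you can use the standard kubectl watch flag, for example:
kubectl get tanzukubernetesclusters --watch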
- Monitor the deployment of cluster nodes using the vSphere Client.
In the vSphere inventory for Hosts and Clusters, you should see the cluster node VMs deployed in the target vSphere Namespace.
- Once all TKG 2 cluster nodes are in a READY state, log in to the cluster using the vSphere Plugin for kubectl.
kubectl vsphere login --server=IP-ADDRESS --vsphere-username USERNAME \
--tanzu-kubernetes-cluster-name CLUSTER-NAME \
--tanzu-kubernetes-cluster-namespace NAMESPACE-NAME
For example:
kubectl vsphere login --server=192.197.2.65 --vsphere-username user@vsphere.local \
--tanzu-kubernetes-cluster-name tkg2-cluster-1 --tanzu-kubernetes-cluster-namespace tkg2-cluster-namespace
Note: The login command will only succeed once the control plane nodes are running and the authentication service plugin has started. If worker nodes are still being created, login may be intermittent. It is recommended that you log in once all cluster nodes are READY.
- Switch context to the TKG 2 cluster so that it is the current context.
On successful login to the TKG 2 cluster, you should see a message similar to the following.
Logged in successfully.
You have access to the following contexts:
   192.197.2.65
   tkg2-cluster-namespace
   tkg2-cluster-1
Where 192.197.2.65 is the Supervisor context, tkg2-cluster-namespace is the vSphere Namespace context, and tkg2-cluster-1 is the TKG 2 cluster context.
Switch to the TKG cluster context.
kubectl config use-context tkg2-cluster-1
- Check TKG cluster resources.
kubectl get nodes
kubectl get namespaces
kubectl get pods -A
kubectl cluster-info
kubectl api-resources
- Exercise the TKG 2 cluster by deploying a test pod and verifying that it works as expected.
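For example, a minimal smoke test is to apply a simple pod manifest and confirm that the pod reaches the Running state. The manifest name and image below are illustrative; depending on the TKR version, you may first need to configure pod security (for example, a pod security policy binding) before workloads can be scheduled.
# test-pod.yaml -- illustrative smoke-test manifest
apiVersion: v1
kind: Pod
metadata:
  name: nginx-test
spec:
  containers:
  - name: nginx
    image: nginx          # example public image; assumes the cluster nodes can pull from the internet
    ports:
    - containerPort: 80
Apply the manifest, check the pod status, and clean up when you are done:
kubectl apply -f test-pod.yaml
kubectl get pod nginx-test
kubectl delete pod nginx-test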