You can connect the Tanzu Kubernetes Grid CLI to a vSphere with Kubernetes Supervisor Cluster that is running in a vSphere 7.0 instance. In this way, you can deploy Tanzu Kubernetes clusters to vSphere with Kubernetes and manage their lifecycle directly from the Tanzu Kubernetes Grid CLI.
vSphere with Kubernetes provides a vSphere Plugin for kubectl. The vSphere Plugin for kubectl extends the standard kubectl commands so that you can connect to the Supervisor Cluster from kubectl by using vCenter Single Sign-On credentials. Once you have installed the vSphere Plugin for kubectl, you can connect the Tanzu Kubernetes Grid CLI to the Supervisor Cluster. Then, you can use the Tanzu Kubernetes Grid CLI to deploy and manage Tanzu Kubernetes clusters running in vSphere.
Download and install the kubectl vsphere CLI utility on the bootstrap environment machine on which you run Tanzu Kubernetes Grid CLI commands. For information about how to obtain and install the vSphere Plugin for kubectl, see Download and Install the Kubernetes CLI Tools for vSphere in the vSphere with Kubernetes documentation.
On the bootstrap environment machine, run the kubectl vsphere login command to log in to your vSphere 7.0 instance. Specify a vCenter Single Sign-On user account with the required privileges for Tanzu Kubernetes Grid operation, and the virtual IP (VIP) address for the control plane of the Supervisor Cluster. For example:
kubectl vsphere login --vsphere-username email@example.com --server=control_Plane_VIP --insecure-skip-tls-verify=true
When prompted, enter the password for the vCenter Single Sign-On user account.
When you have successfully logged in, kubectl vsphere displays all of the contexts to which you have access. The list of contexts should include the vSphere with Kubernetes Supervisor Cluster.
Set the context of kubectl to the Supervisor Cluster.
kubectl config use-context <Supervisor_Cluster_context>
Add the Supervisor Cluster to your Tanzu Kubernetes Grid instance.
tkg add management-cluster
Run tkg get management-cluster to see the list of management clusters that your Tanzu Kubernetes Grid CLI can access.
tkg get management-cluster
The output should show the vSphere with Kubernetes Supervisor Cluster in the list.
+-------------------------+-------------------------------+
| MANAGEMENT CLUSTER NAME | CONTEXT NAME                  |
+-------------------------+-------------------------------+
| vsphere-mc              | vsphere-mc-admin@vsphere-mc   |
| aws-mc *                | aws-mc-admin@aws-mc           |
| <Supervisor_Cluster_IP> | <Supervisor_Cluster_context>  |
+-------------------------+-------------------------------+
Set the context of the Tanzu Kubernetes Grid CLI to the Supervisor Cluster.
tkg set management-cluster <Supervisor_Cluster_IP>
Obtain information about the storage classes that are defined in the Supervisor Cluster.
kubectl get storageclasses
Set variables to define the storage classes, VM classes, and service domain with which to create your cluster. For information about all of the configuration parameters that you can set when deploying Tanzu Kubernetes clusters to vSphere with Kubernetes, see Configuration Parameters for Provisioning Tanzu Kubernetes Clusters in the vSphere with Kubernetes documentation.
You can set these variables by doing either of the following:
Set the variables as environment variables by running export <variable>=<value> on the command line. This command sets environment variables on Linux and Mac OS platforms. On Windows platforms, use the SET command instead of export.
In the *_STORAGE_CLASS variables, specify one of the storage classes that you obtained in the previous step.
In the DEFAULT_STORAGE_CLASS variable, specify a storage class. Alternatively, you can specify DEFAULT_STORAGE_CLASS="", in which case no default storage class is set.
In the STORAGE_CLASSES variable, enter a comma-separated list of storage classes for the cluster to use. Alternatively, you can specify STORAGE_CLASSES="" so that all storage classes that are available to the namespace are made available to clusters that you create. For example, set either of the following:
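For instance, on Linux or Mac OS, either of the following could be set. The storage class name wcpglobal-storage-profile is a hypothetical example; substitute a name returned by kubectl get storageclasses.

```shell
# Option 1: restrict the cluster to specific storage classes.
# "wcpglobal-storage-profile" is a hypothetical name; use one that
# kubectl get storageclasses returns in your environment.
export STORAGE_CLASSES="wcpglobal-storage-profile"

# Option 2: leave the value empty so that all storage classes that are
# available to the namespace are made available to the cluster.
export STORAGE_CLASSES=""
```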
In the SERVICE_DOMAIN variable, enter a service domain name for the cluster. If you are going to assign FQDNs to the nodes, DNS lookup is required.
In the *_VM_CLASS variables, specify one of the standard VM classes for vSphere with Kubernetes. For information about the VM classes that vSphere with Kubernetes provides, see Virtual Machine Class Types for Tanzu Kubernetes Clusters in the vSphere with Kubernetes documentation. For example, set the following variables:
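As an illustration, the following could be set, assuming the variable names CONTROL_PLANE_VM_CLASS and WORKER_VM_CLASS from the *_VM_CLASS family and best-effort VM class names; check the Virtual Machine Class Types documentation for the classes available in your environment.

```shell
# Hypothetical example: run control plane nodes on small best-effort VMs
# and worker nodes on large ones. Substitute VM class names that suit
# your workloads.
export CONTROL_PLANE_VM_CLASS="best-effort-small"
export WORKER_VM_CLASS="best-effort-large"
```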
In the SERVICE_CIDR variable, specify the CIDR range to use for Kubernetes services. The recommended range is 100.64.0.0/13. Use a different range if the recommended range is unavailable.
In the CLUSTER_CIDR variable, specify the CIDR range to use for pods. The recommended range is 100.96.0.0/11. Use a different range if the recommended range is unavailable.
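Unless the recommended ranges conflict with networks that are already in use in your environment, the CIDR variables can be set to them directly:

```shell
# Recommended CIDR ranges for Kubernetes services and pods.
export SERVICE_CIDR="100.64.0.0/13"
export CLUSTER_CIDR="100.96.0.0/11"
```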
Set the variables listed above by updating the ~/.tkg/config.yaml file. When setting a variable, use the <variable>: <value> format. For example:
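A hypothetical ~/.tkg/config.yaml fragment might look like the following. The storage class and VM class names are illustrative assumptions; substitute values from your own environment.

```yaml
# Illustrative values only; replace with names from your environment.
CONTROL_PLANE_STORAGE_CLASS: wcpglobal-storage-profile
WORKER_STORAGE_CLASS: wcpglobal-storage-profile
DEFAULT_STORAGE_CLASS: wcpglobal-storage-profile
STORAGE_CLASSES: ""
SERVICE_DOMAIN: cluster.local
CONTROL_PLANE_VM_CLASS: best-effort-small
WORKER_VM_CLASS: best-effort-large
SERVICE_CIDR: 100.64.0.0/13
CLUSTER_CIDR: 100.96.0.0/11
```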
Obtain the list of Kubernetes versions that are available in the Supervisor Cluster.
kubectl get virtualmachineimages
Obtain the list of namespaces that are available in the Supervisor Cluster.
kubectl get namespaces
Run tkg create cluster to create a cluster in vSphere with Kubernetes.
When deploying clusters to vSphere with Kubernetes, you must provide the namespace when you run tkg create cluster. If the available versions of Kubernetes differ from the one that Tanzu Kubernetes Grid expects, you must also specify the Kubernetes version, as well as which of the Tanzu Kubernetes Grid plans to use. For example:
tkg create cluster my-vsphere7-cluster --plan=dev --namespace=<namespace> --kubernetes-version=v1.16.8+vmware.1-tkg.3.60d2ffd
You can now use the Tanzu Kubernetes Grid CLI to deploy more Tanzu Kubernetes clusters to the vSphere with Kubernetes Supervisor Cluster. You can also use the Tanzu Kubernetes Grid CLI to manage the lifecycles of clusters that are already running there. For information about how to manage the lifecycle of clusters, see the other topics in Deploying Tanzu Kubernetes Clusters and Managing their Lifecycle.