The topics in this section describe how to use the Tanzu CLI to deploy and manage workload clusters. The procedures below describe the most basic configuration options.
You can deploy workload clusters with the Tanzu CLI after you have deployed a management cluster to vSphere, Amazon Web Services (AWS), or Azure, or you have connected the Tanzu CLI to a vSphere with Tanzu Supervisor Cluster.
You can use the Tanzu CLI to deploy workload clusters to vSphere, AWS, and Azure.
The Ubuntu 20.04 machine images for cluster nodes on vSphere, AWS, and Azure are hardened to Center for Internet Security (CIS) standards by default, with AppArmor enabled. Photon OS 3 machine images are hardened to Security Technical Implementation Guides (STIG) standards by default.
To deploy a workload cluster, you create a configuration file that specifies the different options with which to deploy the cluster. You then run the `tanzu cluster create` command, specifying the configuration file in the `--file` option.
Note: After you have installed the v1.6 CLI but before a management cluster has been deployed or upgraded, all context-specific CLI command groups (for example, `tanzu cluster` and `tanzu kubernetes-release`), plus all of the `management-cluster` plugin commands except for `tanzu mc upgrade` and `tanzu mc create`, are unavailable and not included in Tanzu CLI help output. Running them returns an error:

```
Error: validation failed: version mismatch between management cluster and cli version. Please upgrade your management cluster to the latest to continue.
```

For instructions on how to upgrade management clusters, see Upgrade Management Clusters.
`CLUSTER-NAME` is the name of the cluster. To use an existing VNet for the cluster, you must manually create these NSGs as described in Create Azure NSGs for Existing VNet.
When you deploy a workload cluster, most of the configuration for the cluster is the same as the configuration of the management cluster that you use to deploy it. Because of this, the easiest way to create a configuration file for a workload cluster is to start with a copy of the management cluster configuration file:
Locate the YAML configuration file for the management cluster.

- If you did not specify the `--file` option when you ran `tanzu mc create --ui`, the configuration file is saved in `~/.config/tanzu/tkg/clusterconfigs/`. The file has a randomly generated name.
- If you did specify the `--file` option, the management cluster configuration is taken from the file that you specified.
- If you deployed the management cluster from the CLI without the installer interface, the configuration is taken from the file that you specified in the `--file` option, or from the default location.

Make a copy of the management cluster configuration file and save it with a new name. For example, save the file as `my-vsphere-tkc.yaml`.
Optionally set a name for the cluster in the `CLUSTER_NAME` variable. For example, if you are deploying the cluster to vSphere, set the name to `my-vsphere-tkc`.

If you do not specify a `CLUSTER_NAME` value in the cluster configuration file or as an environment variable, you must pass it as the first argument to the `tanzu cluster create` command. The `CLUSTER_NAME` value passed to `tanzu cluster create` overrides the name you set in the configuration file.
Workload cluster names must be 42 characters or less, and must comply with DNS hostname requirements as amended in RFC 1123.
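As a sketch, using the example cluster name from later in this topic, you could set the name as an environment variable instead of editing the configuration file:

```shell
# Set CLUSTER_NAME in the environment instead of in the cluster
# configuration file; my-vsphere-tkc is the example name used in this topic.
export CLUSTER_NAME=my-vsphere-tkc
echo "$CLUSTER_NAME"
```

Remember that any name passed directly to `tanzu cluster create` still takes precedence over this value.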
If you are deploying the cluster to vSphere and using the default Kube-Vip load balancer for the cluster's control plane API, you must specify its endpoint by setting `VSPHERE_CONTROL_PLANE_ENDPOINT`. If you are using NSX Advanced Load Balancer (ALB), do not set `VSPHERE_CONTROL_PLANE_ENDPOINT` unless you need the control plane endpoint to be a specific address. If so, use a static address within the NSX ALB IPAM Profile's VIP Network range that you have manually added to the Static IP pool.
No two clusters, including any management cluster and workload cluster, can have the same `VSPHERE_CONTROL_PLANE_ENDPOINT` address.
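For example, a vSphere workload cluster configuration that uses Kube-Vip might include an entry like the following; the address shown is illustrative and must be a static IP that no other cluster uses:

```yaml
# Illustrative static address for the Kube-Vip control plane endpoint.
VSPHERE_CONTROL_PLANE_ENDPOINT: 10.90.110.100
```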
To configure a workload cluster to use an OS other than the default Ubuntu 20.04, you must set the `OS_NAME` and `OS_VERSION` values in the cluster configuration file. The installer interface does not include node VM OS values in the management cluster configuration files that it saves to `~/.config/tanzu/tkg/clusterconfigs/`.
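For example, a sketch of the entries for Photon OS 3 node images; these values are assumptions, so check the OS values that your infrastructure supports:

```yaml
# Assumed values for Photon OS 3 node images; the defaults correspond to
# OS_NAME: ubuntu, OS_VERSION: "20.04".
OS_NAME: photon
OS_VERSION: "3"
```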
Save the configuration file.
Important: VMware recommends using a dedicated configuration file for every cluster that you deploy.
Create or copy a configuration file for the workload cluster, as described in Create a Workload Cluster Configuration File, above.
See the configuration information relevant to your management cluster infrastructure:
Run the `tanzu cluster create` command, specifying the path to the configuration file in the `--file` option.
If you saved the workload cluster configuration file `my-vsphere-tkc.yaml` in the default `clusterconfigs` folder, run the following command to create a cluster with the name that you specified in the configuration file:

```
tanzu cluster create --file ~/.config/tanzu/tkg/clusterconfigs/my-vsphere-tkc.yaml
```
If you did not specify a name in the configuration file, or to create a cluster with a different name from the one that you specified, specify the cluster name in the `tanzu cluster create` command. For example, to create a cluster named `another-vsphere-tkc` from the configuration file `my-vsphere-tkc.yaml`, run the following command:

```
tanzu cluster create another-vsphere-tkc --file ~/.config/tanzu/tkg/clusterconfigs/my-vsphere-tkc.yaml
```
Any name that you specify in the `tanzu cluster create` command overrides the name you set in the configuration file.
To see information about the cluster, run the `tanzu cluster get` command, specifying the cluster name:

```
tanzu cluster get my-vsphere-tkc
```
The output lists information about the status of the control plane and worker nodes, the Kubernetes version that the cluster is running, and the names of the nodes.
```
  NAME            NAMESPACE  STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES   TKR
  my-vsphere-tkc  default    running  1/1           1/1      v1.23.8+vmware.1  <none>  v1.23.8---vmware.1-tkg

Details:

NAME                                                               READY  SEVERITY  REASON  SINCE  MESSAGE
/my-vsphere-tkc                                                    True                     17m
├─ClusterInfrastructure - VSphereCluster/my-vsphere-tkc            True                     19m
├─ControlPlane - KubeadmControlPlane/my-vsphere-tkc-control-plane  True                     17m
│ └─Machine/my-vsphere-tkc-control-plane-ss9rt                     True                     17m
└─Workers
  └─MachineDeployment/my-vsphere-tkc-md-0
    └─Machine/my-vsphere-tkc-md-0-657958d58-mgtpp                  True                     8m33s
```
The cluster runs the default version of Kubernetes for this Tanzu Kubernetes Grid release, which in Tanzu Kubernetes Grid v1.6.0 is v1.23.8.
In the preceding example, because you did not change any of the node settings in the workload cluster configuration file, the resulting workload cluster has the same node settings as the management cluster. You can customize these settings when preparing the configuration file for your workload cluster. For example, if you selected Development in the Management Cluster Settings section of the installer interface or specified `CLUSTER_PLAN: dev` in the configuration file for the management cluster, you can set the `CLUSTER_PLAN` variable in the workload cluster configuration file to `prod`. Similarly, if you used the `prod` plan to create the management cluster, you can set the `CLUSTER_PLAN` variable in the workload cluster configuration file to `dev`.
In this version of Tanzu Kubernetes Grid, the `dev` and `prod` plans for workload clusters deploy the following:

- `dev` plan: one control plane node and one worker node. This configuration is the same as the configuration of the `dev` plan for management clusters.
- `prod` plan: three control plane nodes and three worker nodes. This configuration is the same as the configuration of the `prod` plan for management clusters.
To deploy a workload cluster with more control plane nodes than the `dev` and `prod` plans define by default, specify the `CONTROL_PLANE_MACHINE_COUNT` variable in the cluster configuration file. The number of control plane nodes that you specify in `CONTROL_PLANE_MACHINE_COUNT` must be odd. Specify the number of worker nodes for the cluster in the `WORKER_MACHINE_COUNT` variable.
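For example, a minimal sketch of the relevant settings for a larger cluster; the counts shown are illustrative:

```yaml
# CONTROL_PLANE_MACHINE_COUNT must be an odd number.
CLUSTER_PLAN: prod
CONTROL_PLANE_MACHINE_COUNT: 5
WORKER_MACHINE_COUNT: 10
```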
How you configure the size and resource configurations of the nodes depends on whether you are deploying clusters to vSphere, AWS, or Azure. For information about how to configure the nodes, see the appropriate topic for each provider:
If you have created namespaces in your Tanzu Kubernetes Grid instance, you can deploy workload clusters to those namespaces by specifying the `NAMESPACE` variable. If you do not specify the `NAMESPACE` variable, Tanzu Kubernetes Grid places clusters in the `default` namespace. Any namespace that you identify in the `NAMESPACE` variable must exist in the management cluster before you run the command. For example, you might want to create different types of clusters in dedicated namespaces. For information about creating namespaces in the management cluster, see Create Namespaces in the Management Cluster.
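For example, assuming a namespace named `production` that you have already created in the management cluster:

```yaml
# The namespace must already exist in the management cluster.
NAMESPACE: production
```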
Note: If you have created namespaces, you must provide a unique name for all workload clusters across all namespaces. If you provide a cluster name that is in use in another namespace in the same instance, the deployment fails with an error.
You can create Kubernetes manifest files for clusters as described in Create Workload Cluster Manifest Files.
To deploy a cluster from a saved manifest file, pass it to the `kubectl apply -f` command. For example:

```
kubectl config use-context my-mgmt-context-admin@my-mgmt-context
kubectl apply -f my-cluster-manifest.yaml
```
If you need to deploy a workload cluster with more advanced configuration, rather than copying the management cluster configuration file, see the topics that describe the options that are specific to each infrastructure provider.
Each of the topics on deployment to vSphere, AWS, and Azure includes workload cluster templates that contain all of the options that you can use for each provider.
You can further customize the configuration of your workload clusters by performing the following types of operations:
After you have deployed workload clusters, the Tanzu CLI provides commands and options to perform cluster lifecycle management operations. See Manage Clusters.
To understand how to deploy an application on your workload cluster, expose it publicly, and access it online, see Tutorial: Example Application Deployment.