The topics in this section describe how to use the Tanzu CLI to deploy and manage Tanzu Kubernetes clusters. The procedures below describe the most basic configuration options.

You can deploy Tanzu Kubernetes (workload) clusters with the Tanzu CLI after you have deployed a management cluster to vSphere, Amazon EC2, or Azure, or after you have connected the Tanzu CLI to a vSphere with Tanzu Supervisor Cluster.

You can use the Tanzu CLI to deploy Tanzu Kubernetes clusters to the following platforms:

  • vSphere 6.7u3
  • vSphere 7 (see below)
  • Amazon EC2
  • Microsoft Azure

To deploy a workload cluster, you create a configuration file that specifies the different options with which to deploy the cluster. You then run the tanzu cluster create command, specifying the configuration file in the --file option.

Prerequisites for Cluster Deployment

  • You have followed the procedures in Install the Tanzu CLI and Other Tools and Deploy Management Clusters to deploy a management cluster to vSphere, Amazon EC2, or Azure.
  • You have already upgraded the management cluster to the version that corresponds with the Tanzu CLI version. If you attempt to deploy a Tanzu Kubernetes cluster with an updated CLI without upgrading the management cluster first, the Tanzu CLI returns the error Error: validation failed: version mismatch between management cluster and cli version. Please upgrade your management cluster to the latest to continue. For instructions on how to upgrade management clusters, see Upgrade Management Clusters.
  • Alternatively, you have a vSphere 7 instance on which a vSphere with Tanzu Supervisor Cluster is running. To deploy clusters to a vSphere 7 instance on which the vSphere with Tanzu feature is enabled, you must connect the Tanzu CLI to the vSphere with Tanzu Supervisor Cluster. For information about how to do this, see Add a vSphere 7 Supervisor Cluster as a Management Cluster.
  • vSphere: If you are deploying Tanzu Kubernetes clusters to vSphere, each cluster requires one static virtual IP address to provide a stable endpoint for Kubernetes. Make sure that this IP address is not in the DHCP range, but is in the same subnet as the DHCP range.
  • Azure: If you are deploying Tanzu Kubernetes clusters to Azure, each cluster requires a Network Security Group (NSG) for its worker nodes named CLUSTER-NAME-node-nsg, where CLUSTER-NAME is the name of the cluster. For more information, see Network Security Groups on Azure.
  • Configure Tanzu Kubernetes cluster node size depending on cluster complexity and expected demand. For more information, see Minimum VM Sizes for Cluster Nodes.

Create a Tanzu Kubernetes Cluster Configuration File

When you deploy a workload cluster, most of the configuration for the cluster is the same as the configuration of the management cluster that you use to deploy it. Because of this, the easiest way to create a configuration file for a workload cluster is to start with a copy of the management cluster configuration file:

  1. Locate the YAML configuration file for the management cluster.

    • If you deployed the management cluster from the installer interface and you did not specify the --file option when you ran tanzu management-cluster create --ui, the configuration file is saved in ~/.config/tanzu/tkg/clusterconfigs/. The file has a randomly generated name, for example, bm8xk9bv1v.yaml.
    • If you deployed the management cluster from the installer interface and you did specify the --file option, the management cluster configuration is taken from the file that you specified.
    • If you deployed the management cluster from the Tanzu CLI without using the installer interface, the management cluster configuration is taken from either a file that you specified in the --file option, or from the default location, ~/.config/tanzu/tkg/cluster-config.yaml.
  2. Make a copy of the management cluster configuration file and save it with a new name.

    For example, save the file as my-aws-tkc.yaml, my-azure-tkc.yaml or my-vsphere-tkc.yaml.

  3. Optionally set a name for the cluster in the CLUSTER_NAME variable.

    For example, if you are deploying the cluster to vSphere, set the name to my-vsphere-tkc.

    CLUSTER_NAME: my-vsphere-tkc
    

    If you do not specify a CLUSTER_NAME value in the cluster configuration file or as an environment variable, you must pass it as the first argument to the tanzu cluster create command. The CLUSTER_NAME value passed to tanzu cluster create overrides the name you set in the configuration file.
    Workload cluster names must be 42 characters or less and must comply with the DNS hostname requirements outlined in RFC 1123.
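    These naming constraints can be checked before you run tanzu cluster create with a small shell helper. This is a sketch for illustration only; valid_cluster_name is a hypothetical function, not part of the Tanzu CLI.

```shell
# Hypothetical helper: returns success only if the proposed workload
# cluster name is 42 characters or fewer and is a valid RFC 1123 DNS
# label: lowercase alphanumerics and hyphens, starting and ending with
# an alphanumeric character.
valid_cluster_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]{0,40}[a-z0-9])?$'
}

valid_cluster_name "my-vsphere-tkc" && echo "ok"    # accepted
valid_cluster_name "My_Cluster" || echo "rejected"  # uppercase and underscore
```

    The regular expression encodes the 42-character limit directly: one leading character, up to 40 middle characters, and one trailing character.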

  4. If you are deploying the cluster to vSphere, specify a static virtual IP address or FQDN in the VSPHERE_CONTROL_PLANE_ENDPOINT variable.

    No two clusters, including any management cluster and workload cluster, can have the same VSPHERE_CONTROL_PLANE_ENDPOINT address.

    • Ensure that this IP address is not in the DHCP range, but is in the same subnet as the DHCP range.
    • If you mapped a fully qualified domain name (FQDN) to the VIP address, you can specify the FQDN instead of the VIP address.
    VSPHERE_CONTROL_PLANE_ENDPOINT: 10.90.110.100
    
  5. To configure a workload cluster to use an OS other than the default Ubuntu 20.04, you must set the OS_NAME and OS_VERSION values in the cluster configuration file. The installer interface does not include node VM OS values in the management cluster configuration files that it saves to ~/.config/tanzu/tkg/clusterconfigs.
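    For example, to use Photon OS 3 node images on vSphere instead of the default Ubuntu 20.04, you might add lines like the following. The values here are illustrative; confirm in the provider-specific documentation that a matching base image exists for your platform before using them.

```yaml
OS_NAME: photon
OS_VERSION: "3"
```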

  6. Save the configuration file.
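Putting the steps above together, for a vSphere deployment the copied file might differ from the management cluster configuration only in values like these (the names and address are the examples used in this topic; every other setting is inherited from the copy):

```yaml
# my-vsphere-tkc.yaml -- copied from the management cluster
# configuration, then edited per steps 3 and 4 above.
CLUSTER_NAME: my-vsphere-tkc
VSPHERE_CONTROL_PLANE_ENDPOINT: 10.90.110.100
```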

Deploy a Workload Cluster: Basic Process

IMPORTANT: VMware recommends using a dedicated configuration file for every cluster that you deploy.

  1. Create or copy a configuration file for the workload cluster, as described in Create a Tanzu Kubernetes Cluster Configuration File, above.

  2. Run the tanzu cluster create command, specifying the path to the configuration file in the --file option.

    If you saved the workload configuration file my-vsphere-tkc.yaml in the default clusterconfigs folder, run the following command to create a cluster with a name that you specified in the configuration file:

    tanzu cluster create --file ~/.config/tanzu/tkg/clusterconfigs/my-vsphere-tkc.yaml
    

    If you did not specify a name in the configuration file, or to create a cluster with a different name from the one that you specified, specify the cluster name in the tanzu cluster create command. For example, to create a cluster named another-vsphere-tkc from the configuration file my-vsphere-tkc.yaml, run the following command:

    tanzu cluster create another-vsphere-tkc --file ~/.config/tanzu/tkg/clusterconfigs/my-vsphere-tkc.yaml
    

    Any name that you specify in the tanzu cluster create command will override the name you set in the configuration file.

  3. To see information about the cluster, run the tanzu cluster get command, specifying the cluster name.

    tanzu cluster get my-vsphere-tkc
    

    The output lists information about the status of the control plane and worker nodes, the Kubernetes version that the cluster is running, and the names of the nodes.

    NAME             NAMESPACE  STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES
    my-vsphere-tkc   default    running  1/1           1/1      v1.21.2+vmware.2  <none>
    
    Details:
    
    NAME                                                                READY  SEVERITY  REASON  SINCE  MESSAGE
    /my-vsphere-tkc                                                    True                     17m
    ├─ClusterInfrastructure - VSphereCluster/my-vsphere-tkc            True                     19m
    ├─ControlPlane - KubeadmControlPlane/my-vsphere-tkc-control-plane  True                     17m
    │ └─Machine/my-vsphere-tkc-control-plane-ss9rt                     True                     17m
    └─Workers
      └─MachineDeployment/my-vsphere-tkc-md-0
        └─Machine/my-vsphere-tkc-md-0-657958d58-mgtpp                  True                     8m33s
    
    

The cluster runs the default version of Kubernetes for this Tanzu Kubernetes Grid release, which in Tanzu Kubernetes Grid v1.4.0 is v1.21.2.

Deploy a Workload Cluster with Custom Control Plane and Worker Node Counts

In the preceding example, because you did not change any of the node settings in the Tanzu Kubernetes cluster configuration file, the resulting Tanzu Kubernetes cluster has the same node settings as the management cluster. You can customize these settings when preparing the configuration file for your Tanzu Kubernetes cluster. For example, if you selected Development in the Management Cluster Settings section of the installer interface or specified CLUSTER_PLAN: dev in the configuration file for the management cluster, you can set the CLUSTER_PLAN variable in the Tanzu Kubernetes cluster configuration file to prod.

CLUSTER_PLAN: prod

Similarly, if you used the prod plan to create the management cluster, you can set the CLUSTER_PLAN variable in the Tanzu Kubernetes cluster configuration file to dev.

In this version of Tanzu Kubernetes Grid, the dev and prod plans for Tanzu Kubernetes clusters deploy the following:

  • The dev plan: one control plane node and one worker node. This configuration is the same as the configuration of the dev plan for management clusters.
  • The prod plan: three control plane nodes and three worker nodes. For management clusters, the prod plan deploys three control plane nodes and one worker node.

To deploy a Tanzu Kubernetes cluster with more control plane nodes than the dev and prod plans define by default, specify the CONTROL_PLANE_MACHINE_COUNT variable in the cluster configuration file. The number of control plane nodes that you specify in CONTROL_PLANE_MACHINE_COUNT must be odd.

CONTROL_PLANE_MACHINE_COUNT: 5

Specify the number of worker nodes for the cluster in the WORKER_MACHINE_COUNT variable. For example:

WORKER_MACHINE_COUNT: 10

How you configure the size and resource allocation of the nodes depends on whether you are deploying clusters to vSphere, Amazon EC2, or Azure. For information about how to configure the nodes, see the topic for your infrastructure provider.
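As an illustration of the kind of settings involved, a vSphere cluster configuration sizes its node VMs with variables like the following. The variable names below are the vSphere-specific ones covered in that provider's topic, and the values are examples only, not recommendations:

```yaml
VSPHERE_CONTROL_PLANE_NUM_CPUS: 2
VSPHERE_CONTROL_PLANE_MEM_MIB: 8192
VSPHERE_CONTROL_PLANE_DISK_GIB: 40
VSPHERE_WORKER_NUM_CPUS: 4
VSPHERE_WORKER_MEM_MIB: 16384
VSPHERE_WORKER_DISK_GIB: 40
```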

Deploy a Cluster in a Specific Namespace

If you have created namespaces in your Tanzu Kubernetes Grid instance, you can deploy Tanzu Kubernetes clusters to those namespaces by specifying the NAMESPACE variable. For example, you might want to create different types of clusters in dedicated namespaces. If you do not specify the NAMESPACE variable, Tanzu Kubernetes Grid places clusters in the default namespace. Any namespace that you identify in the NAMESPACE variable must exist in the management cluster before you run the command. For information about creating namespaces in the management cluster, see Create Namespaces in the Management Cluster.

NAMESPACE: production

NOTE: If you have created namespaces, you must provide a unique name for all Tanzu Kubernetes clusters across all namespaces. If you provide a cluster name that is in use in another namespace in the same instance, the deployment fails with an error.

Deploy a Cluster from a Saved Manifest File

You can create Kubernetes manifest files for clusters as described in Create Tanzu Kubernetes Cluster Manifest Files.

To deploy a cluster from a saved manifest file, pass it to the kubectl apply -f command. For example:

kubectl config use-context my-mgmt-context-admin@my-mgmt-context
kubectl apply -f my-cluster-manifest.yaml

Advanced Configuration of Tanzu Kubernetes Clusters

If you need to deploy a Tanzu Kubernetes cluster with more advanced configuration than copying the management cluster configuration file provides, see the topics that describe the options that are specific to each infrastructure provider.

Each of the topics on deployment to vSphere, Amazon EC2, and Azure includes Tanzu Kubernetes cluster templates that contain all of the options that you can use for each provider.

You can also further customize the configuration of your Tanzu Kubernetes clusters by performing additional types of operations, as described in the provider-specific topics.

What to Do Next

After you have deployed Tanzu Kubernetes clusters, the Tanzu CLI provides commands and options to perform cluster lifecycle management operations. See Manage Clusters.
