This topic explains how to create workload clusters with the Tanzu CLI.
The procedures below explain how to deploy a workload cluster from a cluster configuration file. From a cluster configuration file, you can create the following types of clusters:
Class-based clusters: Follow the steps in Create a Class-Based Cluster below if you are deploying your workload cluster to AWS, Azure, or vSphere with a standalone management cluster. If you want to deploy a class-based workload cluster to vSphere 8 with Supervisor, you must deploy it from an object spec, as described in Configure a Supervisor-Deployed Class-Based Cluster. For more information about cluster configuration files and object specs, see Configuration Files.
(Legacy) Plan-based and TKC clusters: Follow the steps in (Legacy) Create a Plan-Based or a TKC Cluster below.
For more information about these cluster types, see Workload Cluster Types in About Tanzu Kubernetes Grid.
The procedure below explains how to deploy a class-based workload cluster from a cluster configuration file. The resulting cluster is represented by a Cluster object in Kubernetes.
Important: VMware recommends using and retaining a dedicated configuration file for every cluster that you deploy.
Locate the configuration file that you prepared as part of Prerequisites above.
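For reference, a cluster configuration file is a flat file of variable settings. The following is a minimal sketch with illustrative values, assuming a vSphere target; the variables that your file actually requires depend on your infrastructure and the choices you made in Prerequisites:
INFRASTRUCTURE_PROVIDER: vsphere
CLUSTER_NAME: my-workload-cluster
CLUSTER_PLAN: dev
CONTROL_PLANE_MACHINE_COUNT: 1
WORKER_MACHINE_COUNT: 1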
Create the cluster. You can create a cluster in either one or two steps, depending on whether you want to examine or edit its object spec before the object is created:
One step: You pass the cluster configuration file to the --file option of tanzu cluster create, and the command automatically applies it.
Two steps: You pass the cluster configuration file to the --file option of tanzu cluster create, and the command converts the file into a Cluster object spec and exits without creating the cluster. After examining or editing the spec, you create the cluster by re-running tanzu cluster create.
Note: In TKG v2.3.1 on AWS and Azure, to create a cluster from an object spec, you need to use the one-step process or explicitly skip AZ validation as described in On AWS and Azure, creating workload cluster with object spec fails with zone/region error in VMware Tanzu Kubernetes Grid v2.3 Release Notes.
One-step process: Set auto-apply-generated-clusterclass-based-configuration to true if it is not already. This configures the Tanzu CLI to always create class-based clusters using the one-step process. For more information about auto-apply-generated-clusterclass-based-configuration, see Features in Tanzu CLI Architecture and Configuration.
tanzu config set features.cluster.auto-apply-generated-clusterclass-based-configuration true
Run tanzu cluster create, specifying the path to the cluster configuration file in the --file option. For example, if you saved the configuration file my-workload-cluster.yaml in the default clusterconfigs folder, run the following command to create a cluster with a name that you specified in the configuration file:
tanzu cluster create --file ~/.config/tanzu/tkg/clusterconfigs/my-workload-cluster.yaml
If you did not specify a name in the cluster configuration file or you want to create a cluster with a different name from the one that you specified, specify the cluster name in the tanzu cluster create command. For example, to create a cluster named another-workload-cluster from the configuration file my-workload-cluster.yaml, run the following command:
tanzu cluster create another-workload-cluster --file ~/.config/tanzu/tkg/clusterconfigs/my-workload-cluster.yaml
Two-step process: Set the auto-apply-generated-clusterclass-based-configuration feature to false if it is not already. This configures the Tanzu CLI to always create class-based clusters using the two-step process. false is the default setting. If you have changed the default setting, set it back to false by running:
tanzu config set features.cluster.auto-apply-generated-clusterclass-based-configuration false
For more information about auto-apply-generated-clusterclass-based-configuration, see Features in Tanzu CLI Architecture and Configuration.
To generate the object spec, run tanzu cluster create, specifying the path to the cluster configuration file in the --file option. The command saves the resulting object spec to the ~/.config/tanzu/tkg/clusterconfigs folder, prints its location, and then exits.
For example, if you saved the cluster configuration file my-workload-cluster.yaml in the default clusterconfigs folder, run the following command to generate the object spec:
tanzu cluster create --file ~/.config/tanzu/tkg/clusterconfigs/my-workload-cluster.yaml
If you did not specify a name for your cluster in the configuration file or you want to create a cluster with a different name from the one that you specified, specify the cluster name in the tanzu cluster create command. For example:
tanzu cluster create another-workload-cluster --file ~/.config/tanzu/tkg/clusterconfigs/my-workload-cluster.yaml
Examine or edit the object spec file generated by tanzu cluster create.
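For orientation, the generated file is a Kubernetes-style manifest whose cluster settings sit under the spec.topology block. The following is an illustrative sketch only; the cluster class name, versions, and network ranges shown here are assumptions, and the spec that the CLI generates for your cluster contains additional fields:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-workload-cluster
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 100.96.0.0/11
    services:
      cidrBlocks:
      - 100.64.0.0/13
  topology:
    class: tkg-vsphere-default-v1.1.0   # illustrative cluster class name; use the one in your generated spec
    version: v1.26.8+vmware.1           # Kubernetes version of the cluster's TKr
    controlPlane:
      replicas: 1
    workers:
      machineDeployments:
      - class: tkg-worker
        name: md-0
        replicas: 1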
Re-run tanzu cluster create, specifying the path to the object spec in the --file option. For example:
tanzu cluster create --file ~/.config/tanzu/tkg/clusterconfigs/my-workload-cluster-spec.yaml
Include all the same flags that you used in the first step, including the --tkr flag if you are creating a cluster that runs a different Kubernetes version than the management cluster. For example:
tanzu cluster create --file ~/.config/tanzu/tkg/clusterconfigs/my-workload-cluster-spec.yaml -v 6 --tkr v1.24.17---vmware.1-tkg.2
To generate the object spec, run tanzu cluster create with the --dry-run option. The --dry-run option overrides the auto-apply-generated-clusterclass-based-configuration setting.
tanzu cluster create CLUSTER-NAME --dry-run --file PATH-TO-CLUSTER-CONFIG-FILE.yaml > PATH-TO-OBJECT-SPEC-FILE.yaml
Where:
CLUSTER-NAME is the name of the cluster. You can omit CLUSTER-NAME if you specified it in the cluster configuration file.
PATH-TO-CLUSTER-CONFIG-FILE is the path to the cluster configuration file that you located in step 1.
PATH-TO-OBJECT-SPEC-FILE is the location to which you want to save the resulting object spec file.
For example, to save the resulting object spec to a file named my-workload-cluster-spec.yaml, run:
tanzu cluster create my-cluster --dry-run --file ~/.config/tanzu/tkg/clusterconfigs/my-workload-cluster.yaml > my-workload-cluster-spec.yaml
Examine or edit the object spec file generated by the --dry-run option in the previous step. In the example above, the name of the spec file is my-workload-cluster-spec.yaml.
After you examine or edit your object spec file, re-run tanzu cluster create without the --dry-run option. In the --file option, specify the path to the object spec file. For example:
tanzu cluster create my-cluster --file my-workload-cluster-spec.yaml
Include all the same flags that you used in the first step, including the --tkr flag if you are creating a cluster that runs a different Kubernetes version than the management cluster. For example:
tanzu cluster create --file my-workload-cluster-spec.yaml -v 6 --tkr v1.24.17---vmware.1-tkg.2
Note: When creating class-based clusters, the Tanzu CLI does not use the ytt customizations described in Legacy Cluster Configuration with ytt. If the CLI detects them on your machine, it outputs the error: It seems like you have done some customizations to the template overlays.
After the cluster has been created, run the tanzu cluster get command to see information about the cluster:
tanzu cluster get CLUSTER-NAME
The output lists information about the status of the control plane and worker nodes, the Kubernetes version that the cluster is running, and the names of the nodes.
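For example, the output resembles the following sketch; the values are illustrative, the exact columns can vary by TKG version, and a details section listing the individual nodes follows the summary line:
NAME                 NAMESPACE  STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES
my-workload-cluster  default    running  1/1           1/1      v1.26.8+vmware.1  <none>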
The procedure below explains how to deploy a plan-based or TKC cluster from a configuration file:
A plan-based cluster is represented by an AWSCluster, AzureCluster, or VSphereCluster object in Kubernetes, depending on the infrastructure platform that you are targeting. A TKC cluster is represented by a TanzuKubernetesCluster object in Kubernetes.
To create the cluster:
Set the allow-legacy-cluster feature to true in the Tanzu CLI:
tanzu config set features.cluster.allow-legacy-cluster true
Create the cluster:
Plan-based cluster (standalone management cluster): Run the tanzu cluster create command, specifying the path to the configuration file in the --file option. For example, if you saved the workload configuration file my-workload-cluster.yaml in the default clusterconfigs folder, run the following command to create a cluster with a name that you specified in the configuration file:
tanzu cluster create --file ~/.config/tanzu/tkg/clusterconfigs/my-workload-cluster.yaml
If you did not specify a name in the configuration file or you want to create a cluster with a different name from the one that you specified, specify the cluster name in the tanzu cluster create command. For example, to create a cluster named another-workload-cluster from the configuration file my-workload-cluster.yaml, run the following command:
tanzu cluster create another-workload-cluster --file ~/.config/tanzu/tkg/clusterconfigs/my-workload-cluster.yaml
After the cluster has been created, run the tanzu cluster get command to see information about the cluster:
tanzu cluster get CLUSTER-NAME
The output lists information about the status of the control plane and worker nodes, the Kubernetes version that the cluster is running, and the names of the nodes.
TKC cluster (vSphere with Tanzu Supervisor): Create or copy a configuration file for the workload cluster as described in Configure a Supervisor-Deployed TKC Cluster (Legacy).
After you have connected the Tanzu CLI to the Supervisor, get the target vSphere namespace:
tanzu namespaces get
Determine the versioned Tanzu Kubernetes release (TKr) for the cluster:
Obtain the list of TKrs that are available in the Supervisor cluster:
tanzu kubernetes-release get
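The output lists the available TKrs, for example (the versions shown are illustrative, and additional columns, such as compatibility information, are omitted here):
NAME                       VERSION
v1.25.13---vmware.1-tkg.1  v1.25.13+vmware.1-tkg.1
v1.26.8---vmware.2-tkg.1   v1.26.8+vmware.2-tkg.1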
From the command output, record the desired value listed under NAME, for example, v1.26.8---vmware.2-tkg.1. The TKr NAME is the same as its VERSION but with + changed to ---.
Deploy the cluster by running tanzu cluster create with the TKR-NAME value and configuration file name:
tanzu cluster create CLUSTER-NAME --file CONFIGURATION-FILE --tkr=TKR-NAME
Where:
CLUSTER-NAME is any name you provide for the cluster. This command-line value overrides any CLUSTER_NAME setting in the configuration file.
CONFIGURATION-FILE is the local path to the cluster configuration file, for example, ~/.config/tanzu/tkg/clusterconfigs/my-workload-cluster.yaml.
TKR-NAME is the name of the TKr obtained above.
After the cluster has been created, run tanzu cluster get to see the current status of the cluster:
tanzu cluster get CLUSTER-NAME
Configure the IP addresses of its control plane nodes and endpoint to be static, as described in Configure Node DHCP Reservations and Endpoint DNS Record (vSphere Only).
The procedures below explain how to create a class-based workload cluster using a Kubernetes-style object spec:
Note: In TKG v2.3.1 on AWS and Azure, to create a cluster from an object spec, you need to use the one-step process or explicitly skip AZ validation as described in On AWS and Azure, creating workload cluster with object spec fails with zone/region error in VMware Tanzu Kubernetes Grid v2.3 Release Notes.
To create a Kubernetes-style object spec file for a class-based workload cluster, follow the steps below.
If you have updated the default configuration of the auto-apply-generated-clusterclass-based-configuration feature, set it back to false and then run tanzu cluster create with the --file flag. To set auto-apply-generated-clusterclass-based-configuration to false:
tanzu config set features.cluster.auto-apply-generated-clusterclass-based-configuration false
When this feature is set to false and you run tanzu cluster create with the --file flag, the command converts your cluster configuration file into an object spec file and exits without creating the cluster. After reviewing the configuration, you re-run tanzu cluster create with the object spec file generated by the Tanzu CLI.
To create the spec file for a single cluster, pass the --dry-run option to tanzu cluster create and save the output to a file. Use the same options and the same --file configuration file that you would use if you were creating the cluster, for example:
tanzu cluster create my-cluster --file my-cluster-config.yaml --dry-run > my-cluster-spec.yaml
The --dry-run option overrides the auto-apply-generated-clusterclass-based-configuration setting.
For an example object spec file, see Example Cluster Object and Its Subordinate Objects.
If you are deploying the cluster to vSphere with Tanzu Supervisor, create the Cluster object spec as described in Configure a Supervisor-Deployed Class-Based Cluster. For example Cluster object specs to work from, see v1beta1 Example: Default Cluster. Configure the cluster settings in the topology block of the spec file. For settings that you cannot configure in the Cluster object itself, for example, one-time container interface settings in the cluster infrastructure, see Configure One-Time Infrastructure Settings.
To deploy a class-based workload cluster from an object spec, pass the object spec to the --file option of tanzu cluster create, for example:
tanzu cluster create my-cluster --file my-cluster-spec.yaml
Note: When creating class-based clusters, the Tanzu CLI does not use the ytt customizations described in Legacy Cluster Configuration with ytt. If the CLI detects them on your machine, it outputs the error: It seems like you have done some customizations to the template overlays.
For workload clusters managed by a management cluster created with tanzu management-cluster create or tanzu mc create, rather than by a vSphere with Tanzu Supervisor Cluster, deploying Harbor or other services to a shared services cluster enables all of the workload clusters to share a single service instance.
Each Tanzu Kubernetes Grid instance can have only one shared services cluster.
Deploying Harbor to a shared services cluster enables all workload clusters that are managed by the same management cluster to share a single Harbor instance. For instructions on deploying Harbor, see Install Harbor for Service Registry.
To create a shared services cluster:
Create a cluster configuration YAML file for the cluster. We recommend using the prod cluster plan rather than the dev plan. For example:
INFRASTRUCTURE_PROVIDER: vsphere
CLUSTER_NAME: YOUR-CLUSTER-NAME
CLUSTER_PLAN: prod
Where YOUR-CLUSTER-NAME is the name you choose for the cluster. For example, tkg-services.
(vSphere only) If you are using the default Kube-Vip load balancer for the cluster’s control plane API, you must specify its endpoint by setting VSPHERE_CONTROL_PLANE_ENDPOINT. Ensure that this VIP address is not in the DHCP range, but is in the same subnet as the DHCP range. If you mapped a fully qualified domain name (FQDN) to the VIP address, you can specify the FQDN instead of the VIP address.
If you are using NSX Advanced Load Balancer (ALB), do not set VSPHERE_CONTROL_PLANE_ENDPOINT unless you need the control plane endpoint to be a specific address. If so, use a static address within the NSX ALB IPAM Profile’s VIP Network range that you have manually added to the Static IP pool, or an FQDN mapped to the static address.
For example:
VSPHERE_CONTROL_PLANE_ENDPOINT: 10.10.10.10
Create the shared services cluster as described in Create a Cluster from a Configuration File, above.
Set the context of kubectl to the context of your management cluster. For example:
kubectl config use-context mgmt-cluster-admin@mgmt-cluster
In this example, mgmt-cluster is the name of the management cluster.
Add the tanzu-services label to the shared services cluster, as its cluster role. This label identifies the shared services cluster to the management cluster and workload clusters. For example:
kubectl label cluster.cluster.x-k8s.io/tkg-services cluster-role.tkg.tanzu.vmware.com/tanzu-services="" --overwrite=true
In this example, tkg-services is the name of the shared services cluster. You should see the confirmation cluster.cluster.x-k8s.io/tkg-services labeled.
Check that the label has been correctly applied by running the following command:
tanzu cluster list --include-management-cluster
You should see that your shared services cluster has the tanzu-services role. For example:
NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES PLAN TKR
another-cluster default running 1/1 1/1 v1.26.8+vmware.1 <none> dev v1.26.8---vmware.2-tkg
tkg-services default running 3/3 3/3 v1.26.8+vmware.1 tanzu-services prod v1.26.8---vmware.2-tkg
mgmt-cluster tkg-system running 1/1 1/1 v1.26.8+vmware.1 management dev v1.26.8---vmware.2-tkg
Get the admin credentials of the shared services cluster. For example:
tanzu cluster kubeconfig get tkg-services --admin
Set the context of kubectl to the shared services cluster. For example:
kubectl config use-context tkg-services-admin@tkg-services