This topic describes the different types of workload clusters created by Tanzu Kubernetes Grid (TKG) and how they are configured and created.
**Important**

From v2.5.x onwards, Tanzu Kubernetes Grid does not support the management of TKG workload clusters on AWS and Azure. For more information, see End of Support for TKG Management and Workload Clusters on AWS and Azure in the *VMware Tanzu Kubernetes Grid v2.5.x Release Notes*.

From v2.5.1 onwards, Tanzu Kubernetes Grid does not support creating workload clusters on vSphere 6.7. For more information, see End of Support for TKG Management and Workload Clusters on vSphere 6.7.
Tanzu Kubernetes Grid hosts three different types of workload clusters:

- **Class-based clusters**: Kubernetes `Cluster` objects with a `spec.topology` block. The `spec.topology.class` value references a `ClusterClass` object; the default `ClusterClass` name is `tkg-INFRASTRUCTURE-default-VERSION`, for example `tkg-vsphere-default-v1.0.0`.
- **Plan-based clusters (legacy)**: Clusters configured from a cluster configuration file and a cluster plan, as described in (Legacy) Create a Plan-Based Cluster below.
- **TKC-based clusters (legacy)**: Kubernetes objects of type `TanzuKubernetesCluster`, used by the TKG Service in vSphere with Tanzu 7.

**Important**: Class-based clusters with `class: tanzukubernetescluster`, all lowercase, are different from TKC-based clusters, which have object type `TanzuKubernetesCluster`. The `TanzuKubernetesCluster` type of cluster is not described in the TKG documentation; see the vSphere IaaS control plane (formerly known as vSphere with Tanzu) docs.
Class-based clusters are designed to replace the other two cluster types by presenting the same API to both types of management cluster: vSphere IaaS control plane Supervisors and standalone TKG management clusters. For descriptions of when to use a standalone TKG management cluster and when to use vSphere IaaS control plane Supervisors, see Management Clusters: Supervisors and Standalone.
To create and manage workload clusters, management clusters run Cluster API software. The following table maps management and workload cluster types to the Cluster API providers that they use:
TKG with... | uses Cluster API Provider... | on... | to create and manage workload clusters of type... | in product versions... |
---|---|---|---|---|
Standalone management cluster | CAPA (OSS) | AWS | Class-based `Cluster` objects | TKG v2.1 to v2.4 |
| | | Plan-based `AWSCluster` objects | TKG v1.x to v2.4 |
| CAPZ (OSS) | Azure | Class-based `Cluster` objects | TKG v2.1 to v2.4 |
| | | Plan-based `AzureCluster` objects | TKG v1.x to v2.4 |
| CAPV (OSS) | vSphere | Class-based `Cluster` objects | All TKG v2.x versions |
| | | Plan-based `VSphereCluster` objects | All TKG v1.x and v2.x versions |
The different versions of the Tanzu CLI that ship with different versions of Tanzu Kubernetes Grid allow you to create different types of cluster depending on whether you are using a standalone management cluster on vSphere 7 and 8 (all TKG versions), or on AWS and Azure (TKG versions up to and including v2.4 only).
**Note**: For the list of CLI versions that are compatible with different Tanzu Kubernetes Grid versions from TKG 2.3 onwards, compatibility with other Tanzu products, and for backwards compatibility information, see the Product Interoperability Matrix.
CLI Version | TKG Version | Create class-based clusters with standalone management cluster | Create plan-based clusters with standalone management cluster | Create TanzuKubernetesClusters with standalone management cluster |
---|---|---|---|---|
v1.3.0 | 2.5.2 | ✓ | ✓ | x |
v1.2.0 | 2.5.1 | ✓ | ✓ | x |
v1.1.x | 2.5.0 | ✓ | ✓ | x |
v1.1.0 | 2.4.1 | ✓ | ✓ | x |
v1.0.0, v0.90.1* | 2.3.0 | ✓ | ✓ | x |
v0.29.0 | 2.2.0 | ✓ | ✓ | x |
v0.28.1 | 2.1.1 | ✓ | ✓ | x |
v0.25.4 | 1.6.1 | x | ✓ | x |
v0.25.0 | 1.6.0 | x | ✓ | x |
v0.11.x | 1.5.x | x | ✓ | x |
Class-based clusters have the following high-level hierarchy of object types. The objects underlying `KubeAdmControlPlane` and `MachineDeployment` have the same types, but are typically different objects:

- `Cluster` - number and type of control plane and worker nodes set by `topology` block in `spec`
  - `KubeAdmControlPlane` - defines the control plane nodes
    - `vSphereMachine`, `AWSMachine`, `DockerMachine` - infrastructure-specific machine objects
    - `Machine` - generic object for node VM
    - `KubeAdmConfig` - Kubernetes configuration, including Kubernetes version, image repository, pre- and post-deploy hooks, etc.
  - `MachineDeployment` - defines the worker nodes
    - `vSphereMachine`, `AWSMachine`, `DockerMachine`
    - `Machine` - generic object for node VM
    - `KubeAdmConfig`
For more information, see CustomResourceDefinitions relationships in The Cluster API Book.
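As an illustration of this hierarchy, a minimal class-based `Cluster` manifest might look like the following sketch. The cluster name, namespace, Kubernetes version, replica counts, and worker class name are hypothetical placeholders; the `class` value follows the `tkg-INFRASTRUCTURE-default-VERSION` naming pattern described above.

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-workload-cluster           # hypothetical cluster name
  namespace: default
spec:
  topology:
    class: tkg-vsphere-default-v1.0.0  # references the ClusterClass object
    version: v1.26.5                   # example Kubernetes version
    controlPlane:
      replicas: 3                      # number of control plane nodes
    workers:
      machineDeployments:
        - class: tkg-worker            # hypothetical worker machine class
          name: md-0
          replicas: 2                  # number of worker nodes
```

The `spec.topology` block is what distinguishes a class-based cluster: the number and type of nodes are declared here, and the referenced `ClusterClass` supplies the underlying object templates.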
Depending on your installed environment, you can create Tanzu Kubernetes Grid workload clusters in multiple ways: with the Tanzu CLI, Tanzu Mission Control, and `kubectl`.
The following chart outlines how users can create different types of workload clusters on different infrastructures:

Interface | Workload cluster type | Configuration from | Configuration source | Instructions |
---|---|---|---|---|
Tanzu CLI: `tanzu cluster create` | Class-based workload cluster (vSphere) | `Cluster` and underlying object specs | User, for example `classycluster.yaml` | Create a Class-based Cluster |
| Plan-based workload cluster (all TKG versions on vSphere; TKG versions up to and including v2.4 only on AWS and Azure) | Cluster configuration file, local environment, (advanced) `ytt` overlays | `infrastructure-vsphere`, `infrastructure-aws`, `infrastructure-azure`* | (Legacy) Create a Plan-Based Cluster |
Tanzu Mission Control (TMC) | `TanzuKubernetesCluster` or plan-based workload cluster | TMC UI | Registered management cluster | Provisioning Workload Clusters |

*Local directories under `.config/tanzu/tkg/providers/`
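For example, creation with the Tanzu CLI follows this general shape for the two cluster types; the cluster name and spec file name are hypothetical:

```
# Class-based cluster: pass a Cluster object spec file
tanzu cluster create --file classycluster.yaml

# Plan-based cluster (legacy): pass a cluster configuration file
tanzu cluster create my-cluster --file ~/.config/tanzu/tkg/cluster-config.yaml
```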
When the Tanzu CLI creates a plan-based workload cluster, it combines configuration values from the following:

- `~/.config/tanzu/tkg/cluster-config.yaml` or other file passed to the CLI `--file` option
- Plan configuration files in `~/.config/tanzu/tkg/providers/infrastructure-tkg-service-vsphere`, as described in Plan Configuration Files below
- Other configuration files and overlays under `~/.config/tanzu/tkg/providers`

Live input applies configuration values that are unique to each invocation, environment variables persist them over a terminal session, and configuration files and overlays persist them indefinitely. You can customize clusters through any of these sources, with recommendations and caveats described below.

See Configuration Value Precedence for how the `tanzu` CLI derives specific cluster configuration values from these multiple sources where they may conflict.
The `~/.config/tanzu/tkg/providers/infrastructure-tkg-service-vsphere` directory contains workload cluster plan configuration files named `cluster-template-definition-PLAN.yaml`. The configuration values for each plan come from these files and from the files that they list under `spec.paths`.

To customize cluster plans via YAML, edit files under `~/.config/tanzu/tkg/providers/infrastructure-tkg-service-vsphere`, but avoid changing other files.
Files to Edit
Workload cluster plan configuration file paths follow the form `~/.config/tanzu/tkg/providers/infrastructure-tkg-service-vsphere/VERSION/cluster-template-definition-PLAN.yaml`, where:

- `VERSION` is the version of the Cluster API Provider module that the configuration uses.
- `PLAN` is `dev`, `prod`, or a custom plan.

Each plan configuration file has a `spec.paths` section that lists source files and `ytt` directories that configure the cluster plan. For example:
```yaml
apiVersion: providers.tanzu.vmware.com/v1alpha1
kind: TemplateDefinition
spec:
  paths:
    - path: providers/infrastructure-tkg-service-vsphere/v1.1.0/ytt
    - path: providers/ytt
    - path: bom
      filemark: text-plain
    - path: providers/config_default.yaml
```
These files are processed in the order listed. If the same configuration field is set in multiple files, the last-processed setting is the one that the `tanzu` CLI uses.
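Because of this ordering rule, you can override factory defaults by appending your own file to the list. A hypothetical sketch, in which `my-overrides.yaml` is a user-created file (an assumption, not a file that ships with TKG):

```yaml
# Hypothetical TemplateDefinition fragment: because my-overrides.yaml
# appears last in spec.paths, any field it sets takes effect over the
# same field set by the earlier entries.
spec:
  paths:
    - path: providers/infrastructure-tkg-service-vsphere/v1.1.0/ytt
    - path: providers/config_default.yaml
    - path: providers/my-overrides.yaml   # processed last, wins conflicts
```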
To customize your cluster configuration, you can:

- Add new configuration files to the `spec.paths` list.
- Edit existing `ytt` overlay files, which requires familiarity with `ytt`.

Files to Leave Alone
VMware discourages changing the following files under `~/.config/tanzu/tkg/providers`, except as directed:

- `base-template.yaml` files, in `ytt` directories: Instead of editing these base templates, use `ytt` to set values in the `overlay.yaml` file in the same `ytt` directory.
- `~/.config/tanzu/tkg/providers/config_default.yaml`: Append only. You can add customizations in the User Customizations section at the end, but it is preferable to set them in a file passed to the `--file` option of `tanzu cluster create`.
- `~/.config/tanzu/tkg/providers/config.yaml`: The `tanzu` CLI uses this file as a reference for all providers present in the `/providers` directory, and their default versions.

When the Tanzu CLI creates a plan-based workload cluster, it combines configuration values from multiple sources. If those sources conflict, it resolves conflicts in the following order of descending precedence:
Processing layers, ordered by descending precedence | Source | Examples |
---|---|---|
1. Cluster configuration variables set in your local environment | Set in shell. | `export WORKER_VM_CLASS=best-effort-large` |
2. Cluster configuration variables set in the Tanzu CLI, with `tanzu config set env.` | Set in shell; saved in the global Tanzu CLI configuration file, `~/.config/tanzu/config.yaml`. | `tanzu config set env.WORKER_VM_CLASS best-effort-large` |
3. Cluster configuration variables set in the cluster configuration file | Set in the file passed to the `--file` option of `tanzu cluster create`. File defaults to `~/.config/tanzu/tkg/cluster-config.yaml`. | `WORKER_VM_CLASS: best-effort-large` |
4. Factory default configuration values | Set in `providers/config_default.yaml`, but some fields are listed without default values. Do not modify this file. | `WORKER_VM_CLASS:` |
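As a sketch of how these layers interact, using the `WORKER_VM_CLASS` variable from the table (the specific values and cluster name are illustrative):

```
# Layer 3: the cluster configuration file sets a value
#   ~/.config/tanzu/tkg/cluster-config.yaml contains:
#   WORKER_VM_CLASS: best-effort-medium

# Layer 1: a shell environment variable takes precedence over the file
export WORKER_VM_CLASS=best-effort-large

# The created cluster uses best-effort-large for its worker VM class,
# because layer 1 outranks layer 3.
tanzu cluster create my-cluster --file ~/.config/tanzu/tkg/cluster-config.yaml
```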