Workload Clusters

This topic describes the different types of workload clusters created by Tanzu Kubernetes Grid (TKG) and how they are configured and created.

Workload Cluster Types: Class-based, TKC, and Plan-based

Tanzu Kubernetes Grid supports three different types of workload clusters:

  • Class-based clusters
    • Are Kubernetes objects of type Cluster
    • Are a new type of cluster introduced in vSphere with Tanzu 8 and TKG 2.x
    • Have basic topology defined in a spec.topology block
      • For example, number and type of worker and control plane nodes
    • Inherit configuration from spec.topology.class value
      • Refers to a ClusterClass object
      • On Supervisor, default class is tanzukubernetescluster
      • On a standalone management cluster, default class is tkg-INFRASTRUCTURE-default-VERSION, for example, tkg-vsphere-default-v1.0.0.
    • Can be created by using Supervisor in vSphere with Tanzu 8 or by using a standalone TKG v2.x management cluster on vSphere 7 and 8 without a Supervisor, or on AWS and Azure (TKG versions up to and including v2.4 only)
  • TKC-based clusters (legacy)
    • Are Kubernetes objects of type TanzuKubernetesCluster
    • Can be created by using a Supervisor Cluster on vSphere 7, or by Supervisor on vSphere 8 for legacy purposes
  • Plan-based clusters (legacy)
    • Are Kubernetes objects of type Cluster
    • Can be created by using a standalone TKG v2.x management cluster on vSphere 7 and 8 without a Supervisor, or on AWS and Azure (TKG versions up to and including v2.4 only)

Note that Class-based clusters with class: tanzukubernetescluster, all lowercase, are different from TKC-based clusters, which have object type TanzuKubernetesCluster.

Class-based clusters are designed to replace the other two cluster types by presenting the same API through both types of management cluster: Supervisor and standalone management clusters.
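For example, a minimal class-based Cluster object spec might look like the following sketch. The cluster name, namespace, Kubernetes version string, and machine deployment class are illustrative; the spec.topology block is what makes the cluster class-based:

apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-classy-cluster            # illustrative name
  namespace: default
spec:
  topology:
    class: tanzukubernetescluster    # default ClusterClass on Supervisor
    version: v1.26.5+vmware.2-tkg.1  # illustrative Kubernetes version
    controlPlane:
      replicas: 3                    # number of control plane nodes
    workers:
      machineDeployments:
      - class: node-pool             # illustrative machine deployment class
        name: md-0
        replicas: 3                  # number of worker nodes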

Cluster Types and Cluster API

To create and manage workload clusters, management clusters run Cluster API software:

  • Cluster API - open-source Kubernetes software for creating and managing Kubernetes clusters.
  • Cluster API Provider - software that runs on a specific cloud or physical infrastructure and serves as an interface between that infrastructure and Cluster API.
    • Most Cluster API Provider software projects are open-source, but some are proprietary.

The following table maps management and workload cluster types to the Cluster API providers that they use:

| TKG with... | uses Cluster API Provider... | on... | to create and manage workload clusters of type... | in product versions... |
|---|---|---|---|---|
| Supervisor | CAPW (proprietary) | vSphere | Class-based Cluster objects | TKG 2.x (all versions) and vSphere with Tanzu 8 |
| Supervisor | CAPW (proprietary) | vSphere | TanzuKubernetesCluster objects | vSphere with Tanzu 7 and 8 |
| Standalone management cluster | CAPA (OSS) | AWS | Class-based Cluster objects | TKG v2.1 to v2.4 |
| Standalone management cluster | CAPA (OSS) | AWS | Plan-based AWSCluster objects | TKG v1.x to v2.4 |
| Standalone management cluster | CAPZ (OSS) | Azure | Class-based Cluster objects | TKG v2.1 to v2.4 |
| Standalone management cluster | CAPZ (OSS) | Azure | Plan-based AzureCluster objects | TKG v1.x to v2.4 |
| Standalone management cluster | CAPV (OSS) | vSphere | Class-based Cluster objects | All TKG v2.x versions |
| Standalone management cluster | CAPV (OSS) | vSphere | Plan-based VSphereCluster objects | All TKG v1.x and v2.x versions |

Cluster Types and Tanzu CLI Compatibility

Each version of the Tanzu CLI that ships with Tanzu Kubernetes Grid lets you create different types of cluster, depending on whether you are using Supervisor on vSphere 8, a Supervisor Cluster on vSphere 7, or a standalone management cluster, which runs on vSphere 7 and 8 without a Supervisor (all TKG versions) or on AWS and Azure (TKG versions up to and including v2.4 only).

Note

For the list of CLI versions that are compatible with different Tanzu Kubernetes Grid versions from TKG 2.3 onwards, compatibility with other Tanzu products, and for backwards compatibility information, see the Product Interoperability Matrix.

| CLI Version | TKG Version | Create class-based clusters with... | Create plan-based clusters with... | Create TanzuKubernetesClusters with... |
|---|---|---|---|---|
| v1.2.0 | 2.5.1 | Standalone management cluster, Supervisor on vSphere 8 | Standalone management cluster | Supervisor on vSphere 8 |
| v1.1.x | 2.5.0 | Standalone management cluster, Supervisor on vSphere 8 | Standalone management cluster | Supervisor on vSphere 8 |
| v1.1.0 | 2.4.1 | Standalone management cluster, Supervisor on vSphere 8 | Standalone management cluster | Supervisor on vSphere 8 |
| v1.0.0, v0.90.1* | 2.3.0 | Standalone management cluster, Supervisor on vSphere 8 | Standalone management cluster | Supervisor on vSphere 8 |
| v0.29.0 | 2.2.0 | Standalone management cluster, Supervisor on vSphere 8 | Standalone management cluster | Supervisor on vSphere 8 |
| v0.28.1 | 2.1.1 | Standalone management cluster, Supervisor on vSphere 8 | Standalone management cluster | Supervisor on vSphere 8 |
| v0.25.4 | 1.6.1 | None | Standalone management cluster | Supervisor Cluster on vSphere 7 |
| v0.25.0 | 1.6.0 | None | Standalone management cluster | Supervisor Cluster on vSphere 7 |
| v0.11.x | 1.5.x | None | Standalone management cluster | Supervisor Cluster on vSphere 7 |

Workload Cluster Object Subcomponents

Class-based clusters have the following high-level hierarchy of object types. The objects underlying KubeadmControlPlane and MachineDeployment have the same types, but are typically different objects:

  • Cluster - number and type of control plane and worker nodes set by topology block in spec
    • KubeadmControlPlane - defines the control plane nodes
      • IaaS-specific machine object - for example, vSphereMachine, AWSMachine, DockerMachine
      • Machine - generic object for node VM
      • KubeadmConfig - Kubernetes configuration, including the Kubernetes version, image repository, pre- and post-deploy hooks, etc.
    • MachineDeployment - defines the worker nodes
      • IaaS-specific machine object
      • Machine
      • KubeadmConfig

For more information, see CustomResourceDefinitions relationships in The Cluster API Book.
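To see this hierarchy on a live management cluster, you can list the underlying objects with kubectl. A minimal sketch, assuming a vSphere workload cluster in the default namespace (the cluster name is illustrative):

# List the Cluster object and its control plane and worker definitions
kubectl get cluster,kubeadmcontrolplane,machinedeployment -n default
# List the generic Machine objects and, on vSphere, the IaaS-specific machine objects
kubectl get machines,vspheremachines -n default
# Filter to a single cluster by its Cluster API label
kubectl get machines -n default -l cluster.x-k8s.io/cluster-name=my-cluster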

Ways of Creating Workload Clusters

Depending on your installed environment, you can create Tanzu Kubernetes Grid workload clusters in multiple ways: with the Tanzu CLI, Tanzu Mission Control, and kubectl.

The following table outlines how users can create different types of workload clusters on different infrastructures:

| Using the... | to create a... | takes config values from... | and config templates from... | Instructions |
|---|---|---|---|---|
| Tanzu CLI: tanzu cluster create | Class-based workload cluster (vSphere) | Cluster and underlying object specs | User, for example classycluster.yaml | Create a Class-based Cluster |
| Tanzu CLI: tanzu cluster create | TanzuKubernetesCluster workload cluster (vSphere) | Cluster configuration file, local environment, (advanced) ytt overlays | infrastructure-tkg-service-vsphere* | (Legacy) Create a Plan-Based or a TKC Cluster |
| Tanzu CLI: tanzu cluster create | Plan-based workload cluster (all TKG versions on vSphere; TKG versions up to and including v2.4 only on AWS and Azure) | Cluster configuration file, local environment, (advanced) ytt overlays | infrastructure-vsphere, infrastructure-aws, infrastructure-azure* | (Legacy) Create a Plan-Based or a TKC Cluster |
| Tanzu Mission Control (TMC) | TanzuKubernetesCluster or plan-based workload cluster | TMC UI | Registered management cluster | Provisioning Workload Clusters |
| kubectl apply | Class-based or TanzuKubernetesCluster workload clusters (vSphere) | Cluster and underlying object specs | User, for example classycluster.yaml, tkc.yaml | Creating Workload Clusters Declaratively |

*Local directories under ~/.config/tanzu/tkg/providers/
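As a sketch of the two tanzu cluster create paths in the table above (file and cluster names are illustrative):

# Class-based: pass a Cluster object spec directly
tanzu cluster create --file classycluster.yaml
# Legacy plan-based or TKC: name the cluster and pass a flat configuration file
tanzu cluster create my-cluster --file ~/.config/tanzu/tkg/cluster-config.yaml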

About Legacy TKC and Plan-Based Cluster Configuration

When the Tanzu CLI creates a TKC-based workload cluster, it combines configuration values from the following:

  • Live input at invocation
    • CLI input
  • Environment variables
  • ~/.config/tanzu/tkg/cluster-config.yaml or other file passed to the CLI --file option
  • Cluster plan YAML configuration files in ~/.config/tanzu/tkg/providers/infrastructure-tkg-service-vsphere, as described in Plan Configuration Files below.
  • Other, non-plan YAML configuration files under ~/.config/tanzu/tkg/providers

Live input applies configuration values that are unique to each invocation, environment variables persist them over a terminal session, and configuration files and overlays persist them indefinitely. You can customize clusters through any of these sources, with recommendations and caveats described below.

See Configuration Value Precedence for how the tanzu CLI derives specific cluster configuration values from these multiple sources where they may conflict.

Plan Configuration Files

The ~/.config/tanzu/tkg/providers/infrastructure-tkg-service-vsphere directory contains TKC workload cluster plan configuration files that are named cluster-template-definition-PLAN.yaml. The configuration values for each plan come from these files and from the files that they list under spec.paths:

  • Config files that ship with the tanzu CLI
  • Custom files that users create and add to the spec.paths list
  • ytt overlays that users create or edit to override values in other configuration files

Files to Edit, Files to Leave Alone

To customize cluster plans via YAML, you edit files under ~/.config/tanzu/tkg/providers/infrastructure-tkg-service-vsphere, but you should avoid changing other files.

Files to Edit

Workload cluster plan configuration file paths follow the form ~/.config/tanzu/tkg/providers/infrastructure-tkg-service-vsphere/VERSION/cluster-template-definition-PLAN.yaml, where:

  • VERSION is the version of the Cluster API Provider module that the configuration uses.
  • PLAN is dev, prod, or a custom plan.
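For example, with the v1.1.0 provider version that appears in the spec.paths example below, the dev plan's configuration file would be:

~/.config/tanzu/tkg/providers/infrastructure-tkg-service-vsphere/v1.1.0/cluster-template-definition-dev.yaml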

Each plan configuration file has a spec.paths section that lists source files and ytt directories that configure the cluster plan. For example:

apiVersion: providers.tanzu.vmware.com/v1alpha1
kind: TemplateDefinition
spec:
  paths:
    - path: providers/infrastructure-tkg-service-vsphere/v1.1.0/ytt
    - path: providers/ytt
    - path: bom
      filemark: text-plain
    - path: providers/config_default.yaml

These files are processed in the order listed. If the same configuration field is set in multiple files, the last-processed setting is the one that the tanzu CLI uses.

To customize your cluster configuration, you can:

  • Create new configuration files and add them to the spec.paths list, as shown in the sketch after this list.
    • This is the easier method.
  • Modify existing ytt overlay files.
    • This is the more powerful method, for people who are comfortable with ytt.
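As a sketch of the first method, the following adds a hypothetical user-created file to the end of the spec.paths list from the example above; because it is processed last, its settings win:

apiVersion: providers.tanzu.vmware.com/v1alpha1
kind: TemplateDefinition
spec:
  paths:
    - path: providers/infrastructure-tkg-service-vsphere/v1.1.0/ytt
    - path: providers/ytt
    - path: bom
      filemark: text-plain
    - path: providers/config_default.yaml
    - path: providers/custom-overrides.yaml  # hypothetical user-created file; listed and processed last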

Files to Leave Alone

VMware discourages changing the following files under ~/.config/tanzu/tkg/providers, except as directed:

  • base-template.yaml files, in ytt directories

    • These configuration files draw their values from upstream, open-source projects, such as the Cluster API provider repositories under Kubernetes SIGs, and are best kept intact.
    • Instead, create new configuration files or see Clusters and Cluster Plans in Advanced TKC Configuration with ytt to set values in the overlay.yaml file in the same ytt directory.
  • ~/.config/tanzu/tkg/providers/config_default.yaml - Append only

    • This file contains system-wide defaults for Tanzu Kubernetes Grid.
    • Do not modify existing values in this file, but you can append a User Customizations section at the end, as sketched after this list.
    • Instead of changing values in this file, customize cluster configurations in files that you pass to the --file option of tanzu cluster create.
  • ~/.config/tanzu/tkg/providers/config.yaml

    • The tanzu CLI uses this file as a reference for all providers present in the /providers directory, and their default versions.
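If you do append to config_default.yaml, a minimal sketch of a User Customizations section, using a variable from the precedence examples below (the value is illustrative):

#! ---------------------------------------------------------------
#! User Customizations
#! ---------------------------------------------------------------
WORKER_VM_CLASS: best-effort-large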

Configuration Value Precedence

When the Tanzu CLI creates a TKC-based workload cluster, it combines configuration values from multiple sources. Where those sources conflict, it resolves the conflicts in the following order of descending precedence:

| Processing layers, ordered by descending precedence | Source | Examples |
|---|---|---|
| 1. Cluster configuration variables set in your local environment | Set in shell. | export WORKER_VM_CLASS=best-effort-large |
| 2. Cluster configuration variables set in the Tanzu CLI, with tanzu config set env. | Set in shell; saved in the global Tanzu CLI configuration file, ~/.config/tanzu/config.yaml. | tanzu config set env.WORKER_VM_CLASS best-effort-large |
| 3. Cluster configuration variables set in the cluster configuration file | Set in the file passed to the --file option of tanzu cluster create. File defaults to ~/.config/tanzu/tkg/cluster-config.yaml. | WORKER_VM_CLASS: best-effort-large |
| 4. Factory default configuration values | Set in providers/config_default.yaml. Some fields are listed without default values. Do not modify this file. | WORKER_VM_CLASS: |
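A sketch of this precedence in practice, reusing the WORKER_VM_CLASS examples above (cluster and file names are illustrative):

# Layer 2: save a value in the global Tanzu CLI configuration
tanzu config set env.WORKER_VM_CLASS best-effort-medium
# Layer 1: a local environment variable takes precedence over layer 2
export WORKER_VM_CLASS=best-effort-large
# The cluster is created with WORKER_VM_CLASS set to best-effort-large,
# overriding layer 2 and anything set in the configuration file (layer 3)
tanzu cluster create my-cluster --file my-cluster-config.yaml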