Tanzu Kubernetes Grid Glossary

This topic defines the key terms and concepts of a Tanzu Kubernetes Grid (TKG) deployment. Other topics in this section provide references for key TKG elements and describe experimental TKG features.


From v2.5 onwards, Tanzu Kubernetes Grid does not support the deployment of TKG management clusters and workload clusters on AWS and Azure. For more information, see End of Support for TKG Management and Workload Clusters on AWS and Azure in the VMware Tanzu Kubernetes Grid v2.5 Release Notes.

Tanzu Kubernetes Grid

Tanzu Kubernetes Grid (TKG) is a high-level, multicloud infrastructure for Kubernetes. Tanzu Kubernetes Grid allows you to make Kubernetes available to developers as a utility, just like an electricity grid. Operators can use this grid to create and manage Kubernetes clusters for hosting containerized applications, and developers can use it to develop, deploy, and manage the applications. For more information, see What Is Tanzu Kubernetes Grid?.

Management Cluster

A management cluster is a Kubernetes cluster that deploys and manages other Kubernetes clusters, called workload clusters, that host containerized apps.

For more information about management clusters, see Management Clusters: Supervisors and Standalone.

Workload Cluster

Workload clusters deployed by Tanzu Kubernetes Grid are CNCF-conformant Kubernetes clusters in which containerized apps and packaged services are deployed and run.

Workload clusters are deployed by the management cluster and run on the same infrastructure.

You can have many workload clusters. To match the needs of the apps that they host, workload clusters can run different Kubernetes versions and have different topologies, including customizable node types and node counts, diverse operating systems, processors, storage, and other resource settings and configurations.

For pod-to-pod networking, workload clusters use Antrea by default, and can also use Calico.

For more information about the different types of workload cluster, see Workload Clusters.

Tanzu Kubernetes Grid Instance

A Tanzu Kubernetes Grid instance is a full deployment of Tanzu Kubernetes Grid, including the management cluster, the deployed workload clusters, and the packaged services that they run. You can operate many instances of Tanzu Kubernetes Grid, for different environments, such as production, staging, and test; for different IaaS providers, such as vSphere, Azure, and AWS; and for different failure domains, for example Datacenter-1, AWS us-east-2, or AWS us-west-2.

Tanzu CLI

The Tanzu CLI enables the tanzu commands that deploy and operate TKG. For example:

  • tanzu cluster commands communicate with a TKG management cluster to create and manage workload clusters that host containerized workloads.
  • tanzu package commands install and manage packaged services that hosted workloads use.
  • tanzu apps commands manage hosted workloads via Tanzu Application Platform running on workload clusters.
  • (Standalone management cluster only) tanzu management-cluster (or tanzu mc) commands deploy TKG by creating a standalone management cluster and then manage the deployment once the standalone management cluster is running.

The Tanzu CLI uses plugins to modularize and extend its capabilities.

With a standalone management cluster, the management cluster runs the default Kubernetes version associated with the version of the Tanzu CLI that deployed it.

See Tanzu CLI Architecture and Configuration for more information.

Bootstrap Machine

The bootstrap machine is a laptop, host, or server on which you download and run the Tanzu CLI.

When you use the Tanzu CLI to deploy a standalone management cluster, the CLI first creates a temporary kind cluster on the bootstrap machine, and then uses that kind cluster to provision the management cluster on the target infrastructure.

How you connect the Tanzu CLI to an existing management cluster depends on its deployment option.

A bootstrap machine can be a local laptop, a jumpbox, or any other physical or virtual machine.

Tanzu Kubernetes Release

To run safely and efficiently, Kubernetes apps typically need to be hosted on nodes with specific patch versions of both Kubernetes and a base OS, along with compatible versions of other components. These component versions change over time.

To facilitate currency, safety, and compatibility, VMware publishes Tanzu Kubernetes releases (TKrs), which package patch versions of Kubernetes with base OS versions that it can run on, along with other versioned components that support that version of Kubernetes and the workloads it hosts.

The management cluster uses TKrs to create workload clusters that run the desired Kubernetes and OS versions.

Each TKr contains everything that a specific patch version of Kubernetes needs to run on various VM types on various cloud infrastructures.

See Tanzu Kubernetes Releases and Custom Node Images for more information.

Packages and Cluster Services

To provide hosted workloads with services such as authentication, ingress control, container registry, observability, service discovery, and logging, you can install Tanzu-packaged services, or packages, to TKG clusters.

As an alternative to running separate instances of the same service in multiple workload clusters, TKG with a standalone management cluster supports installing some services to a shared services cluster, a special workload cluster that can publish its services to other workload clusters.

Tanzu-packaged services are bundled with the Carvel imgpkg tool and tested for TKG by VMware.

Such packages can include:

  • Services for use by hosted applications
  • Platform services for cluster administrators
  • Services for both of the above

For information about how to download and install imgpkg, see Install the Carvel Tools.
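Under the hood, installing a package creates a Carvel PackageInstall resource on the cluster. The following is a hedged sketch of such a resource; the namespace, service account, package version, and values Secret are illustrative, and available packages and versions vary by TKG release:

```yaml
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageInstall
metadata:
  name: cert-manager
  namespace: my-packages              # hypothetical namespace for installed packages
spec:
  serviceAccountName: my-package-sa   # hypothetical service account with install permissions
  packageRef:
    refName: cert-manager.tanzu.vmware.com
    versionSelection:
      constraints: 1.11.1+vmware.1-tkg.1   # example version; list available versions first
  values:
  - secretRef:
      name: cert-manager-values      # optional Secret carrying package configuration values
```

In practice, the tanzu package commands create and manage resources like this for you; the YAML form is mainly useful for GitOps workflows or troubleshooting.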

Cluster Plans (TKG v1.x)

In Tanzu Kubernetes Grid v1.x, and in legacy TKC-based clusters supported by TKG 2.x, a cluster plan is a standardized configuration for workload clusters. The plan configures settings for the number of control plane nodes, worker nodes, VM types, etc.

TKG v1.x provides two default plans: dev clusters have one control plane node and one worker node, and prod clusters have three control plane nodes and three worker nodes.

TKG 2.x supports more fine-grained configuration of cluster topology as described in Configure a Class-Based Workload Cluster.
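As an illustration of class-based topology configuration, a Cluster API Cluster object lets you set control plane and worker node counts directly in its topology block. This is a sketch only: the class and version values below are placeholders, and actual ClusterClass names and Kubernetes versions depend on your TKG release and infrastructure:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-workload-cluster        # hypothetical cluster name
  namespace: default
spec:
  topology:
    class: tkg-vsphere-default     # placeholder ClusterClass name
    version: v1.26.5+vmware.2      # placeholder Kubernetes version
    controlPlane:
      replicas: 3                  # prod-style control plane
    workers:
      machineDeployments:
      - class: tkg-worker          # placeholder worker machine class
        name: md-0
        replicas: 3
```

Compared with the fixed dev and prod plans, this topology block can be tuned field by field, which is what "more fine-grained configuration" refers to.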

ytt Overlays

Configuration settings for TKG clusters and plans come from upstream, open-source sources such as the Cluster API project and its IaaS-specific provider projects. These sources publish Kubernetes object specifications in YAML, with settings pre-configured.

TKG supports Carvel ytt overlays for customizing objects for your own installation non-destructively, retaining the original YAML specifications. This matters because the YAML configuration files reside on the bootstrap machine, and editing them directly would overwrite the local copy of the original, upstream configurations.

Installing the TKG plugins places the cluster and cluster plan configuration files in the ~/.config/tanzu/tkg directory of the bootstrap machine, and ytt overlays are supported for these configurations.

ytt overlays specify how to change target settings in target locations within a source YAML file, to support any possible customization.
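For example, a minimal ytt overlay can pin a value in every matching object without touching the source file. The kind and field below are illustrative, not taken from a specific TKG template:

```yaml
#@ load("@ytt:overlay", "overlay")

#@overlay/match by=overlay.subset({"kind": "VSphereMachineTemplate"})
---
spec:
  template:
    spec:
      #@overlay/match missing_ok=True
      numCPUs: 4
```

The overlay.subset matcher selects which documents in the source YAML to modify, and missing_ok=True lets the overlay add the field even if the original specification omits it.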

See Advanced TKC Configuration with ytt for more information.

For information about how to download and install ytt, see Install the Carvel Tools.

Tanzu Kubernetes Grid Installer (Standalone Management Cluster)

The Tanzu Kubernetes Grid installer is a graphical wizard for deploying TKG with a standalone management cluster. You start it by running the command tanzu mc create --ui. The installer wizard runs on the bootstrap machine and provides a user interface that guides you through the process of deploying a standalone management cluster.

Cluster Configuration File (Legacy Supervisor and Standalone Management Cluster)

The Tanzu CLI uses a cluster configuration file to create clusters under the following circumstances:

  • When connected to a Supervisor and creating a legacy, TKC-based workload cluster
  • When creating a standalone management cluster from a cluster configuration file
  • When connected to a standalone management cluster and creating a workload cluster

When the TKG installer creates a standalone management cluster, it captures user input from the UI and writes the entered values to a cluster configuration file. The installer then uses this cluster configuration file to deploy the standalone management cluster.

Required and optional variables in the cluster configuration file depend on the management cluster deployment option and the target infrastructure.
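For example, a cluster configuration file for a vSphere deployment is a flat YAML file of variable settings. The values below are illustrative only; consult the configuration file variable reference for your target platform for the authoritative list:

```yaml
CLUSTER_NAME: my-mgmt-cluster           # illustrative cluster name
CLUSTER_PLAN: prod
INFRASTRUCTURE_PROVIDER: vsphere
CONTROL_PLANE_MACHINE_COUNT: 3
WORKER_MACHINE_COUNT: 3
VSPHERE_SERVER: vcenter.example.com     # illustrative vCenter address
VSPHERE_DATACENTER: /my-datacenter      # illustrative datacenter path
VSPHERE_NETWORK: VM Network
```

Each variable can also be supplied as an environment variable, with values in the file taking effect when the corresponding environment variable is unset.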

Tanzu Kubernetes Grid and Cluster Upgrades

Upgrading TKG means different things depending on its deployment option:

  • Supervisor: When you update vCenter Server, the Supervisor tab lets you update the Supervisor to the latest Kubernetes version.
  • Standalone management cluster: After upgrading the Tanzu CLI, running tanzu management-cluster upgrade upgrades the standalone management cluster to the CLI’s version of Kubernetes.

Upgrading a workload cluster in Tanzu Kubernetes Grid means migrating its nodes to run on a base VM image with a newer version of Kubernetes. By default, workload clusters upgrade to the management cluster’s default Kubernetes version, but you can specify other, non-default Kubernetes versions to upgrade workload clusters to.

To find out which Kubernetes versions are available in Tanzu Kubernetes Grid, see Tanzu Kubernetes Releases and Custom Node Images.

Integrating Tanzu Mission Control

To integrate your management cluster with Tanzu Mission Control, a Kubernetes management platform with a UI console, see the Tanzu Mission Control documentation.
