The topic below defines the key terms and concepts of a Tanzu Kubernetes Grid (TKG) deployment. Other topics in this section provide references for key TKG elements and describe experimental TKG features.
Tanzu Kubernetes Grid (TKG) is a high-level, multicloud infrastructure for Kubernetes. Tanzu Kubernetes Grid allows you to make Kubernetes available to developers as a utility, just like an electricity grid. Operators can use this grid to create and manage Kubernetes clusters for hosting containerized applications, and developers can use it to develop, deploy, and manage the applications. For more information, see What Is Tanzu Kubernetes Grid?.
A management cluster is a Kubernetes cluster that deploys and manages other Kubernetes clusters, called workload clusters, that host containerized apps.
TKG users log in to the management cluster with the Tanzu CLI and the Kubernetes CLI (kubectl), and issue commands such as tanzu cluster create to create a workload cluster, or tanzu package install to install a packaged service to the cluster for hosted apps to consume.
The management cluster runs Cluster API, Carvel tools, and other software to process these commands.
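For example, a typical session against the management cluster might look like the following sketch; the context, cluster, and package names are all illustrative:

```shell
# Point kubectl and the Tanzu CLI at the management cluster's context
# (the context name is illustrative).
kubectl config use-context tkg-mgmt-admin@tkg-mgmt

# Create a workload cluster from a cluster configuration file.
tanzu cluster create my-workload-cluster --file my-workload-cluster.yaml

# Install a packaged service for hosted apps to consume
# (package name, version, and namespace are examples).
tanzu package install cert-manager \
  --package-name cert-manager.tanzu.vmware.com \
  --version 1.7.2+vmware.1-tkg.1 \
  --namespace my-packages
```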
The management cluster is purpose-built for managing workload clusters and packaged services, and for running container networking and other system-level agents. VMware recommends never deploying workloads to the management cluster itself.
The management cluster has two deployment options that run on different infrastructures using different sets of components:
Workload clusters deployed by Tanzu Kubernetes Grid are CNCF-conformant Kubernetes clusters where containerized apps and packaged services are deployed and run.
Workload clusters are deployed by the management cluster and run on the same private or public cloud infrastructure.
You can have many workload clusters, and to match the needs of the apps that they host, workload clusters can run different Kubernetes versions and have different topologies that include customizable node types and node counts, diverse operating systems, processors, storage, and other resource settings and configurations.
For pod-to-pod networking, workload clusters use Antrea by default, and can also use Calico.
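The CNI is selected in the cluster configuration file when the cluster is created; for example, to use Calico instead of the Antrea default (a configuration-file fragment, not a complete file):

```yaml
# Workload cluster configuration file fragment.
# CNI defaults to antrea if this setting is omitted.
CNI: calico
```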
A Tanzu Kubernetes Grid instance is a full deployment of Tanzu Kubernetes Grid, including the management cluster, the deployed workload clusters, and the packaged services that they run. You can operate many instances of Tanzu Kubernetes Grid, for different environments, such as production, staging, and test; for different IaaS providers, such as vSphere, Azure, and AWS; and for different failure domains, such as the AWS us-east-2 region.
The Tanzu CLI enables the tanzu commands that deploy and operate TKG. For example:
tanzu cluster commands communicate with a TKG management cluster to create and manage workload clusters that host containerized workloads.
tanzu package commands install and manage packaged services that hosted workloads use.
tanzu apps commands manage hosted workloads via Tanzu Application Platform running on workload clusters.
tanzu management-cluster (tanzu mc) commands deploy TKG by creating a standalone management cluster on a target infrastructure, and then manage the deployment once the standalone management cluster is running.
The Tanzu CLI uses plugins to modularize and extend its capabilities.
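For example, you can inspect and manage the installed plugins from the bootstrap machine; the exact commands and output columns vary by CLI version, and the plugin name below is illustrative:

```shell
# List the plugins the Tanzu CLI currently has installed.
tanzu plugin list

# Synchronize plugins with the connected management cluster.
tanzu plugin sync

# Install or update an individual plugin (plugin name is illustrative).
tanzu plugin install cluster
```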
With a standalone management cluster, the version of Kubernetes that the management cluster runs is the same as the Kubernetes version used by the Tanzu CLI.
See Tanzu CLI Architecture and Configuration for more information.
The bootstrap machine is a laptop, host, or server on which you download and run the Tanzu CLI.
When you use the Tanzu CLI to deploy a standalone management cluster, it creates the management cluster as a kind cluster on the bootstrap machine before deploying it to the target infrastructure.
How you connect the Tanzu CLI to an existing management cluster depends on its deployment option:
A bootstrap machine can be a local laptop, a jumpbox, or any other physical or virtual machine.
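While the deployment runs, you can observe the temporary kind cluster on the bootstrap machine; the generated cluster name shown in the comment is illustrative:

```shell
# List kind clusters on the bootstrap machine; during bootstrap you would
# expect a temporary cluster with a generated name such as tkg-kind-abc123
# (the name is illustrative).
kind get clusters

# The temporary cluster runs as a Docker container on the bootstrap machine.
docker ps --filter name=tkg-kind
```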
To run safely and efficiently, Kubernetes apps typically need to be hosted on nodes with specific patch versions of both Kubernetes and a base OS, along with compatible versions of other components. These component versions change over time.
To facilitate currency, safety, and compatibility, VMware publishes Tanzu Kubernetes releases (TKrs), which package patch versions of Kubernetes with base OS versions that it can run on, along with other versioned components that support that version of Kubernetes and the workloads it hosts.
The management cluster uses TKrs to create workload clusters that run the desired Kubernetes and OS versions.
Each TKr contains everything that a specific patch version of Kubernetes needs to run on various VM types on various cloud infrastructures.
See Tanzu Kubernetes Releases and Custom Node Images for more information.
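For example, you can list the TKrs available to the management cluster with the Tanzu CLI's kubernetes-release plugin; the TKr name below is illustrative:

```shell
# List available Tanzu Kubernetes releases and their compatibility status.
tanzu kubernetes-release get

# Show available upgrades for a given TKr (TKr name is illustrative).
tanzu kubernetes-release available-upgrades get v1.23.8---vmware.2-tkg.1
```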
To provide hosted workloads with services such as authentication, ingress control, container registry, observability, service discovery, and logging, you can install Tanzu-packaged services, or packages, to TKG clusters.
As an alternative to running separate instances of the same service in multiple workload clusters, TKG with a standalone management cluster supports installing some services to a shared services cluster, a special workload cluster that can publish its services to other workload clusters.
Tanzu-packaged services are bundled with the Carvel imgpkg tool and tested for TKG by VMware.
Such packages can include:
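For example, you can discover which packages a repository makes available before installing one; the repository URL, version, and namespace are illustrative:

```shell
# Add a repository that advertises Tanzu packages
# (repository URL and namespace are examples).
tanzu package repository add tkg-packages \
  --url projects.registry.vmware.com/tkg/packages/standard/repo:v1.6.0 \
  --namespace tkg-system

# List the packages the repository makes available.
tanzu package available list --namespace tkg-system
```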
In Tanzu Kubernetes Grid v1.x, and in legacy TKC-based clusters supported by TKG 2, a cluster plan is a standardized configuration for workload clusters. The plan configures settings for the number of control plane nodes, worker nodes, VM types, etc.
TKG v1.x provides two default plans: dev clusters have one control plane node and one worker node, and prod clusters have three control plane nodes and three workers.
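In TKG v1.x, the plan and its node counts are set in the cluster configuration file; a fragment, with illustrative values:

```yaml
# Cluster configuration file fragment (TKG v1.x).
CLUSTER_PLAN: prod               # dev = 1 control plane + 1 worker; prod = 3 + 3
CONTROL_PLANE_MACHINE_COUNT: 3   # optional override of the plan default
WORKER_MACHINE_COUNT: 3
```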
TKG 2 supports more fine-grained configuration of cluster topology as described in Configure a Class-Based Workload Cluster.
ytt Overlays (TKG v1.x)
Configuration settings for TKG clusters and plans come from upstream, open-source sources such as the Cluster API project and its IaaS-specific provider projects. These sources publish Kubernetes object specifications in YAML, with settings pre-configured.
TKG supports Carvel ytt overlays for customizing objects for your own installation non-destructively, retaining the original YAML specifications. This is useful when YAML customization files reside on the bootstrap machine, and changing them directly would destroy the local copy of the original, upstream configurations.
In TKG v1.x, installing the Tanzu CLI installs the cluster and cluster plan configuration files in the ~/.config/tanzu/tkg directory of the bootstrap machine, and supports ytt overlays for these configurations.
ytt overlays specify how to change target settings in target locations within a source YAML file, to support any possible customization.
See Advanced TKC Configuration with ytt for more information.
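A minimal overlay sketch, assuming you want to change the CPU count on a vSphere machine template; the match criterion and value are illustrative:

```yaml
#@ load("@ytt:overlay", "overlay")

#@overlay/match by=overlay.subset({"kind": "VSphereMachineTemplate"})
---
spec:
  template:
    spec:
      #@overlay/match missing_ok=True
      numCPUs: 4
```

Because the overlay is applied at processing time, the upstream source YAML on the bootstrap machine is left unchanged.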
To deploy TKG with a standalone management cluster, the Tanzu Kubernetes Grid installer is a graphical wizard that you start by running the command tanzu mc create --ui. The installer wizard runs on the bootstrap machine and provides a user interface to guide you through the process of deploying a standalone management cluster.
The Tanzu CLI uses a cluster configuration file to create clusters under the following circumstances:
When the TKG Installer creates a standalone management cluster, it captures user input from the UI and writes the entered values out to a cluster configuration file. The TKG Installer then uses this cluster configuration file to deploy the standalone management cluster.
Required and optional variables in the cluster configuration file depend on the management cluster deployment option:
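A cluster configuration file is a flat YAML file of variable settings; a fragment for a vSphere deployment, with variable names following the TKG configuration-variable reference and all values illustrative:

```yaml
# Cluster configuration file fragment; values are examples only.
CLUSTER_NAME: my-mgmt-cluster
INFRASTRUCTURE_PROVIDER: vsphere
CLUSTER_PLAN: dev
VSPHERE_SERVER: vcenter.example.com
VSPHERE_DATACENTER: /dc0
VSPHERE_NETWORK: VM Network
CNI: antrea
```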
Upgrading TKG means different things based on its deployment option:
tanzu management-cluster upgrade upgrades the standalone management cluster to the CLI's version of Kubernetes.
Upgrading workload clusters in Tanzu Kubernetes Grid means migrating their nodes to run on a base VM image with a newer version of Kubernetes. By default, workload clusters upgrade to the management cluster's default Kubernetes version, but you can specify other, non-default Kubernetes versions to upgrade workload clusters to.
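For example, a sketch of upgrading a standalone management cluster and then a workload cluster; the cluster and TKr names are illustrative:

```shell
# Upgrade the standalone management cluster to the Tanzu CLI's
# Kubernetes version.
tanzu management-cluster upgrade

# Upgrade a workload cluster to the management cluster's default
# Kubernetes version.
tanzu cluster upgrade my-workload-cluster

# Or target a specific non-default version via its TKr (name illustrative).
tanzu cluster upgrade my-workload-cluster --tkr v1.23.8---vmware.2-tkg.1
```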
To find out which Kubernetes versions are available in Tanzu Kubernetes Grid, see:
To integrate your management cluster with Tanzu Mission Control, a Kubernetes management platform with a UI console, see: