The topic below describes the key elements and concepts of a Tanzu Kubernetes Grid (TKG) deployment. Other topics in this section provide references for key TKG elements and describe experimental TKG features.
The Tanzu CLI enables the tanzu commands that deploy and operate TKG, and Tanzu CLI plugins extend Tanzu CLI capabilities. For example:
tanzu management-cluster (tanzu mc) commands deploy TKG by creating a management cluster on a target infrastructure, and then manage the deployment once the management cluster is running.
tanzu cluster commands communicate with a TKG management cluster to create and manage the Tanzu Kubernetes clusters (workload clusters) that host containerized workloads.
tanzu package commands install and manage packaged services that hosted workloads can use.
tanzu apps commands manage hosted workloads by communicating with Tanzu Application Platform running on workload clusters.
See the Tanzu CLI Reference for more information.
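As a sketch of how these plugins are invoked, the following commands illustrate typical usage. Exact plugin names and flags can vary between TKG versions, so treat this as illustrative rather than definitive:

```sh
# Show the management cluster in the current CLI context
tanzu management-cluster get

# List workload clusters managed by the current management cluster
tanzu cluster list

# List packages installed in the default namespace of the current cluster
tanzu package installed list
```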
The bootstrap machine is a laptop, host, or server on which you download and run the Tanzu CLI. When you use the Tanzu CLI to deploy a management cluster, it first creates a temporary kind cluster on the bootstrap machine, and then uses that cluster to provision the management cluster on the target infrastructure.
With a management cluster running, you can install and run the Tanzu CLI from other bootstrap machines, as described in Add Existing Management Clusters to Your Tanzu CLI.
A bootstrap machine can be a local laptop, a jumpbox, or any other physical or virtual machine.
A management cluster is a Kubernetes cluster that runs and operates Tanzu Kubernetes Grid. “Deploying TKG” means deploying a management cluster to a cloud infrastructure such as vSphere, AWS, or Azure.
The management cluster runs Cluster API to create and manage the workload clusters that host applications, and deploys and manages the shared and in-cluster services that the workloads use.
The management cluster is purpose-built for managing workload clusters and packaged services, and for running container networking and other system-level agents. VMware recommends against deploying workloads to the management cluster.
On vSphere 7, you can use a built-in Supervisor Cluster from vSphere with Tanzu as a Tanzu Kubernetes Grid management cluster, rather than deploying a separate management cluster. Because the Supervisor Cluster manages vSphere infrastructure more directly than a Kubernetes layer does, VMware recommends this approach. For details, see vSphere with Tanzu Provides Management Cluster.
Tanzu Kubernetes clusters are CNCF-conformant Kubernetes clusters deployed by the management cluster to run workloads.
Tanzu Kubernetes clusters (workload clusters) can run different versions of Kubernetes, depending on the needs of the applications they host. For pod-to-pod networking, they use Antrea by default, and can also use Calico.
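In a cluster configuration file, the pod networking provider is selected with the CNI variable. A minimal sketch (the variable name follows the documented TKG cluster configuration reference, but confirm it against your TKG version):

```yaml
# Cluster configuration fragment: choose the pod-to-pod networking provider.
# Antrea is the default; Calico is the documented alternative.
CNI: calico
```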
TKG runs on Kubernetes clusters, which run on Kubernetes nodes, which are VMs that run specific patch versions of both Kubernetes and a base OS.
Tanzu Kubernetes releases (TKrs) package patch versions of Kubernetes with the base OS versions they can run on, and with other versioned components that support that version of Kubernetes and the workloads it hosts. Each TKr contains everything that a specific patch version of Kubernetes needs to run on various types of VMs on various cloud infrastructures.
TKrs center around a Bill of Materials (BoM) file that lists a tested combination of component versions and sources that VMware guarantees work together to support a specific version of Kubernetes.
See Tanzu Kubernetes Releases for more information.
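To see which TKrs a management cluster can deploy, the Tanzu CLI provides a kubernetes-release plugin. A sketch (output columns and compatibility status formatting vary by version):

```sh
# List available Tanzu Kubernetes releases and their compatibility status
tanzu kubernetes-release get
```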
To provide hosted workloads with services such as authentication, ingress control, container registry, observability, service discovery, and logging, you can install Tanzu-packaged services, or packages, to TKG clusters.
As an alternative to running separate instances of the same service in multiple workload clusters, TKG supports installing some services to a shared services cluster.
Tanzu-packaged services are bundled with the Carvel imgpkg tool and tested for TKG by VMware.
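Installing one of these packages follows the tanzu package workflow. A hedged sketch using cert-manager as the example package; the exact package name, available versions, and repository depend on your TKG release:

```sh
# Discover packages available in the configured package repository
tanzu package available list

# Install a specific package version into the cluster
tanzu package install cert-manager \
  --package-name cert-manager.tanzu.vmware.com \
  --version <available-version>
```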
A cluster plan is a standardized configuration for workload clusters deployed by the management cluster. The plan configures settings for the number of control plane nodes, worker nodes, VM types, etc.
Tanzu Kubernetes Grid provides two default plans:
dev clusters have one control plane node and one worker node, and
prod clusters have three control plane nodes and three worker nodes.
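In a cluster configuration file, the plan and node counts are expressed as variables. A sketch (CLUSTER_PLAN, CONTROL_PLANE_MACHINE_COUNT, and WORKER_MACHINE_COUNT are documented TKG configuration variables, but verify them against your version's configuration reference):

```yaml
# Start from the prod plan, then override its default worker count
CLUSTER_PLAN: prod
CONTROL_PLANE_MACHINE_COUNT: 3
WORKER_MACHINE_COUNT: 5
```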
Configuration settings for TKG clusters, plans, and packages come from upstream, open-source sources such as the Cluster API project and its IaaS-specific provider projects. These sources publish Kubernetes object specifications in YAML, with settings pre-configured.
To non-destructively customize these objects for your own installation, while retaining the original specifications, TKG supports Carvel ytt overlays.
ytt overlays specify how to change target settings in target locations within a source YAML file, to support any possible customization.
See Customizing Clusters, Plans, and Packages with ytt Overlays for more information.
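A minimal example of the overlay mechanism; the target kind and field here are illustrative, not taken from a specific TKG template:

```yaml
#@ load("@ytt:overlay", "overlay")

#! Match every document whose kind is VSphereMachineTemplate
#@overlay/match by=overlay.subset({"kind": "VSphereMachineTemplate"}), expects="1+"
---
spec:
  template:
    spec:
      #@overlay/match missing_ok=True
      numCPUs: 4
```

When ytt processes the source YAML together with this overlay, matching documents are emitted with numCPUs set to 4 while everything else in the source is left untouched.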
The Tanzu Kubernetes Grid installer is a graphical wizard that you start up by running the
tanzu management-cluster create --ui command. The installer wizard runs locally on the bootstrap machine, and provides a user interface to guide you through the process of deploying a management cluster.
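Starting the installer looks like the following; the interface is served locally, and the bind and browser flags shown here may differ by version:

```sh
# Launch the installer UI in a local browser
tanzu management-cluster create --ui

# On a headless jumpbox, bind to an address reachable from your workstation
tanzu management-cluster create --ui --bind 0.0.0.0:8080 --browser none
```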
When the Tanzu CLI deploys new clusters, it reads the cluster’s configuration settings from a configuration file. The Tanzu CLI uses these configuration files when creating both management and workload clusters.
When the TKG Installer creates a management cluster, it captures user input from the UI, and writes the entered settings out to a cluster configuration file. The TKG Installer then uses this cluster configuration file to deploy the management cluster.
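A cluster configuration file is a flat YAML file of variables. A minimal, illustrative sketch for a vSphere deployment; variable names follow the TKG configuration reference, and all values are placeholders:

```yaml
CLUSTER_NAME: my-mgmt-cluster
INFRASTRUCTURE_PROVIDER: vsphere
CLUSTER_PLAN: dev
VSPHERE_SERVER: <vcenter-address>
VSPHERE_DATACENTER: <datacenter-path>
VSPHERE_NETWORK: <network-name>
```

You would then pass the file to the CLI with tanzu management-cluster create --file followed by the file path.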
The bootstrap machine writes and maintains the state of the Tanzu CLI in the local
~/.config/tanzu/tkg directory. This directory contains everything that the Tanzu CLI needs to run, including information about running management clusters, BoM files for TKG and TKrs, cluster configurations, CLI settings, and Kubernetes object definitions.
A Tanzu Kubernetes Grid instance is a full deployment of Tanzu Kubernetes Grid, including the management cluster, the deployed Tanzu Kubernetes clusters, and the shared and in-cluster services that you configure. You can operate many instances of Tanzu Kubernetes Grid, for different environments, such as production, staging, and test; for different IaaS providers, such as vSphere, Azure, and AWS; and for different failure domains, for example the AWS us-east-2 region.
Upgrading to a new Tanzu Kubernetes Grid release means using that release's version of the Tanzu CLI to upgrade management clusters created by earlier versions.
Upgrading a management or Tanzu Kubernetes (workload) cluster in Tanzu Kubernetes Grid means migrating its nodes to run on a base VM image with a newer version of Kubernetes.
To find out which Kubernetes versions are available in Tanzu Kubernetes Grid, see List Available Versions.
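A typical upgrade sequence looks like the following; command names follow the TKG CLI, but always check the release notes for version-specific steps:

```sh
# Upgrade the management cluster first
tanzu management-cluster upgrade

# Then upgrade each workload cluster it manages
tanzu cluster list
tanzu cluster upgrade my-workload-cluster
```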