This topic provides conceptual information about upgrading VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) and TKGI-provisioned Kubernetes clusters.
For step-by-step instructions on upgrading TKGI and TKGI-provisioned Kubernetes clusters, see:
A Tanzu Kubernetes Grid Integrated Edition upgrade modifies the TKGI version, for example, upgrading TKGI from v1.15.x to v1.16.0 or from v1.16.0 to v1.16.1.
There are two ways you can upgrade TKGI:
Full Upgrade: By default, TKGI is set to perform a full upgrade, which upgrades both the TKGI control plane and all TKGI-provisioned Kubernetes clusters.
Control Plane Only Upgrade: You can choose to upgrade TKGI in two phases by upgrading the TKGI control plane first and then upgrading your TKGI-provisioned Kubernetes clusters later.
When deciding whether to perform the default full upgrade or to upgrade the TKGI control plane and TKGI-provisioned Kubernetes clusters separately, consider your organization’s needs.
You might prefer to upgrade TKGI in two phases because of the advantages it provides:
If your organization runs TKGI-provisioned Kubernetes clusters in both development and production environments and you want to upgrade only one environment first, you can do so by upgrading the TKGI control plane and TKGI-provisioned Kubernetes clusters separately.
Faster Tanzu Kubernetes Grid Integrated Edition tile upgrades. If you have a large number of clusters in your TKGI deployment, performing a full upgrade can significantly increase the amount of time required to upgrade the Tanzu Kubernetes Grid Integrated Edition tile.
More granular control over cluster upgrades. In addition to enabling you to upgrade subsets of clusters, the TKGI CLI supports upgrading each cluster individually.
Not a monolithic upgrade. This helps isolate the root cause of an error when troubleshooting upgrades. For example, when a cluster-related upgrade error occurs during a full upgrade, the entire Tanzu Kubernetes Grid Integrated Edition tile upgrade might fail.
Warning: If you deactivate the default full upgrade and upgrade only the TKGI control plane, you must upgrade all your TKGI-provisioned Kubernetes clusters before the next Tanzu Kubernetes Grid Integrated Edition tile upgrade. Deactivating the default full upgrade and upgrading only the TKGI control plane causes the TKGI version tagged in your Kubernetes clusters to fall behind the Tanzu Kubernetes Grid Integrated Edition tile version. If your TKGI-provisioned Kubernetes clusters fall more than one version behind the tile, TKGI cannot upgrade the clusters.
You can use either the Tanzu Kubernetes Grid Integrated Edition tile or the TKGI CLI to perform TKGI upgrades:
Upgrade Method | Full TKGI upgrade | TKGI control plane only | Kubernetes clusters only
---|---|---|---
TKGI Tile | ✔ | ✔ | ✔
TKGI CLI | ✖ | ✖ | ✔
Typically, if you choose to upgrade TKGI-provisioned Kubernetes clusters only, you will upgrade them through the TKGI CLI.
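For example, a minimal sketch of a cluster-only upgrade through the TKGI CLI might look like the following. The API hostname, user name, and cluster name are placeholders; substitute values from your own deployment.

```shell
# Log in to the TKGI API (-k skips TLS verification; use a CA cert in production),
# then upgrade a single cluster by name. "my-cluster" is a placeholder.
tkgi login -a TKGI-API-HOSTNAME -u USERNAME -k
tkgi upgrade-cluster my-cluster
```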
After you add a new Tanzu Kubernetes Grid Integrated Edition tile version to your staging area on the Ops Manager Installation Dashboard, Ops Manager automatically migrates your configuration settings into the new tile version.
You can perform a full TKGI upgrade or a TKGI control plane only upgrade:
During a full TKGI upgrade, the Tanzu Kubernetes Grid Integrated Edition tile does the following:
Recreates the Control Plane VMs:
Upgrades Clusters:
During a TKGI control plane only upgrade, the Tanzu Kubernetes Grid Integrated Edition tile does the following:
Recreates the Control Plane VMs:
Does Not Upgrade Clusters:
Note the following when upgrading the TKGI control plane:
When the TKGI control plane is not scaled for high availability (beta), upgrading the control plane temporarily interrupts the TKGI API and `tkgi` CLI commands.

These outages do not affect the Kubernetes clusters themselves. During a TKGI control plane upgrade, you can still interact with clusters and their workloads using the Kubernetes Command Line Interface, `kubectl`.
For more information about the TKGI control plane and high availability (beta), see TKGI Control Plane Overview in Tanzu Kubernetes Grid Integrated Edition Architecture.
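As a quick sanity check during a control plane upgrade, you can confirm that clusters remain reachable. `kubectl` talks directly to the cluster's Kubernetes API, not the TKGI API, so these commands continue to work while the TKGI control plane is being recreated:

```shell
# Cluster workloads stay reachable during a TKGI control plane upgrade,
# because kubectl bypasses the TKGI API entirely.
kubectl get nodes
kubectl get pods --all-namespaces
```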
The Tanzu Kubernetes Grid Integrated Edition tile is a BOSH deployment.
BOSH-deployed products can set a number of canary instances to upgrade first, before the rest of the deployment VMs. BOSH continues the upgrade only if the canary instance upgrade succeeds. If the canary instance encounters an error, the upgrade stops running and other VMs are not affected.
The Tanzu Kubernetes Grid Integrated Edition tile uses one canary instance when deploying or upgrading Tanzu Kubernetes Grid Integrated Edition.
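Because the tile is a BOSH deployment, you can watch the canary instance and the remaining VMs update with the BOSH CLI. `MY-ENV` and `MY-DEPLOYMENT` are placeholders for your BOSH environment alias and the TKGI deployment name:

```shell
# Watch instance health while BOSH upgrades the canary and then the
# remaining VMs. If the canary fails, the other VMs are left untouched.
bosh -e MY-ENV -d MY-DEPLOYMENT instances
```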
TKGI allows an admin to upgrade the TKGI control plane without upgrading the TKGI-provisioned Kubernetes clusters. These clusters continue running the previous TKGI version.
Although the TKGI CLI generally supports these clusters, there are CLI commands that are not supported.
The following tables summarize which TKGI CLI commands are supported on clusters running the previous TKGI version:
Note: VMware recommends you do not run TKGI CLI cluster management commands on clusters running the previous TKGI version.
The following summarizes the TKGI CLI utility commands that are supported for clusters running the previous TKGI version.
Task Status | Notes
---|---
Supported Tasks |
The following summarizes the TKGI CLI cluster management commands that are supported for clusters running the previous TKGI version.
Task Status | Notes
---|---
Supported Tasks |
Partially-Supported Tasks | Supported `tkgi update-cluster` flags: Note: The supported flags require TKGI v1.16.2 or later. Unsupported `tkgi update-cluster` flags: * Clusters running the previous TKGI version and configured with `--tags` do not support any `tkgi update-cluster` operations.
Unsupported Tasks |
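Before running cluster management commands, you can check which TKGI version a cluster is tagged with. The cluster name below is a placeholder:

```shell
# List all clusters, then inspect one to see its TKGI version tag
# before deciding which CLI commands are safe to run against it.
tkgi clusters
tkgi cluster my-cluster
```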
Upgrading a TKGI-provisioned Kubernetes cluster upgrades the cluster to the TKGI version of the TKGI control plane and tags the cluster with the upgrade version.
Upgrading the cluster also upgrades the cluster’s Kubernetes version to the version included with the Tanzu Kubernetes Grid Integrated Edition tile.
During an upgrade of TKGI-provisioned clusters, TKGI recreates your clusters. This includes the following stages for each cluster you upgrade:
Depending on your cluster configuration, these recreations might cause Cluster Control Plane Nodes Outage or Worker Nodes Outage as described below.
Note: When the Upgrade all clusters errand is enabled in the Tanzu Kubernetes Grid Integrated Edition tile, updating the tile with a new Linux or Windows stemcell rolls every Linux or Windows VM in each Kubernetes cluster. This automatic rolling ensures that all your VMs are patched. To avoid workload downtime, use the resource configuration recommended in Control Plane Nodes Outage and Worker Nodes Outage below and in Maintaining Workload Uptime.
You can upgrade TKGI-provisioned Kubernetes clusters either through the Tanzu Kubernetes Grid Integrated Edition tile or the TKGI CLI. See the table below.
This method | Upgrades
---|---
The Upgrade all clusters errand in the Tanzu Kubernetes Grid Integrated Edition tile > Errands | All clusters. Clusters are upgraded serially.
`tkgi upgrade-cluster` | One cluster.
`tkgi upgrade-clusters` | Multiple clusters. Clusters are upgraded serially or in parallel.
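For instance, a sketch of upgrading several clusters with the CLI, assuming `tkgi upgrade-clusters` accepts a comma-separated cluster list via `--clusters` (cluster names are placeholders):

```shell
# Upgrade a set of named clusters; by default they are processed serially.
tkgi upgrade-clusters --clusters cluster-1,cluster-2,cluster-3
```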
When TKGI upgrades a single-control plane node cluster, you cannot interact with your cluster, use `kubectl`, or push new workloads.
To avoid this loss of functionality, VMware recommends using multi-control plane node clusters.
When TKGI upgrades a worker node, the node stops running containers. If your workloads run on a single node, they will experience downtime.
To avoid downtime for stateless workloads, VMware recommends using at least one worker node per availability zone (AZ). For stateful workloads, VMware recommends using a minimum of two worker nodes per AZ.
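To verify how your worker nodes are spread across AZs before an upgrade, you can list each node with its zone label. This assumes the standard `topology.kubernetes.io/zone` label; older clusters may use `failure-domain.beta.kubernetes.io/zone` instead:

```shell
# Show each node with its availability zone so you can confirm workloads
# have somewhere to reschedule while a node is being recreated.
kubectl get nodes -L topology.kubernetes.io/zone
```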
Tanzu Kubernetes Grid Integrated Edition supports Antrea, Flannel, and NSX-T as the Container Network Interfaces (CNIs) for TKGI-provisioned clusters.
VMware recommends the Antrea CNI over Flannel. The Antrea CNI provides Kubernetes Network Policy support for non-NSX-T environments. Antrea CNI-configured clusters are supported on AWS, Azure, and vSphere without NSX-T environments.
For more information about Antrea, see Antrea in the Antrea documentation.
Note: Support for the Flannel Container Networking Interface (CNI) is deprecated.
VMware recommends that you configure Antrea as the default TKGI-provisioned cluster CNI, and that you switch your Flannel CNI-configured clusters to the Antrea CNI.
You can configure TKGI to network newly created TKGI-provisioned clusters with the Antrea CNI.
Configure the TKGI default CNI during TKGI installation and upgrade only.
During TKGI installation:
During TKGI upgrades:
If you initially configured TKGI to use Flannel as the default CNI and switch to Antrea as the default CNI during a TKGI upgrade:
Warning:
Do not change the TKGI default CNI configuration between upgrades.
For information about selecting and configuring a CNI for TKGI, see the Networking section of the installation documentation for your environment: