To upgrade your Tanzu Kubernetes Grid instance, you must first upgrade the management cluster. You cannot upgrade Tanzu Kubernetes clusters until you have upgraded the management cluster that manages them.
IMPORTANT: Management clusters and Tanzu Kubernetes clusters use client certificates to authenticate clients. These certificates are valid for one year. To renew them, upgrade your clusters at least once a year.
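One way to check when the current client certificate expires is to decode the client certificate embedded in your kubeconfig. This is a sketch: it assumes the first user entry in your kubeconfig holds the cluster's client certificate and that openssl is available.
kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' | base64 -d | openssl x509 -noout -enddate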
If you are running Tanzu Kubernetes Grid in an internet-restricted environment, regenerate and rerun the publish-images.sh scripts with the new component image versions before you upgrade.
Run the tanzu login command to see an interactive list of management clusters available for upgrade. Select the management cluster that you want to upgrade. See List Management Clusters and Change Context for more information.
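If you already know the name under which the management cluster is registered with the Tanzu CLI, you can also select it non-interactively. For example, assuming the cluster is registered as my-mgmt-cluster:
tanzu login --server my-mgmt-cluster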
Run the tanzu cluster list command with the --include-management-cluster option. This command shows the versions of Kubernetes running on the management cluster and all of the clusters that it manages:
$ tanzu cluster list --include-management-cluster
  NAME                 NAMESPACE   STATUS   CONTROLPLANE  WORKERS  KUBERNETES         ROLES       PLAN
  k8s-1-17-13-cluster  default     running  1/1           1/1      v1.17.13+vmware.1  <none>      dev
  k8s-1-18-10-cluster  default     running  1/1           1/1      v1.18.10+vmware.1  <none>      dev
  k8s-1-19-3-cluster   default     running  1/1           1/1      v1.19.3+vmware.1   <none>      dev
  mgmt-cluster         tkg-system  running  1/1           1/1      v1.20.4+vmware.1   management  dev
Before you run the upgrade command, remove all unmanaged kapp-controller deployment artifacts from the management cluster. An unmanaged kapp-controller deployment is a deployment that exists outside of the tkg-system namespace. Delete the deployment:
kubectl delete deployment kapp-controller -n kapp-controller
Note: If you receive the following NotFound error message, ignore the error:
Error from server (NotFound): deployments.apps "kapp-controller" not found
Continue with the following deletion steps in case you have any orphaned objects related to a pre-existing kapp-controller deployment.
kubectl delete clusterrole kapp-controller-cluster-role
kubectl delete clusterrolebinding kapp-controller-cluster-role-binding
kubectl delete serviceaccount kapp-controller-sa -n kapp-controller
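Optionally, verify that no unmanaged kapp-controller artifacts remain before you continue. The following checks are a sketch; no output, or only entries in the tkg-system namespace that Tanzu Kubernetes Grid itself manages, means the unmanaged artifacts are gone.
kubectl get deployment -A | grep kapp-controller
kubectl get clusterrole,clusterrolebinding | grep kapp-controller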
If you set up Harbor access by installing a connectivity API on your v1.2 management cluster, follow the Replace Connectivity API with a Load Balancer procedure below. If your workload clusters access Harbor via a load balancer, proceed to the next step.
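If you are not sure which method your v1.2 installation uses, one indicator is whether the connectivity API's namespace exists in the management cluster. This is a sketch based on the tanzu-system-connectivity namespace that the removal procedure below deletes; if the command returns NotFound, your clusters are not using the connectivity API.
kubectl get namespace tanzu-system-connectivity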
Run the tanzu management-cluster upgrade command and enter y to confirm.
The following command upgrades the current management cluster.
tanzu management-cluster upgrade
If multiple base VM images in your IaaS account have the same version of Kubernetes that you are upgrading to, use the --os-name option to specify the OS you want. See Selecting an OS During Cluster Upgrade for more information.
For example, on vSphere, if you have uploaded both Photon and Ubuntu OVA templates with Kubernetes v1.20.5, specify --os-name ubuntu to upgrade your management cluster to run on an Ubuntu VM.
tanzu management-cluster upgrade --os-name ubuntu
To skip the confirmation step when you upgrade a cluster, specify the --yes option.
tanzu management-cluster upgrade --yes
The upgrade process first upgrades the Cluster API providers for vSphere, Amazon EC2, or Azure that are running in the management cluster. Then, it upgrades the version of Kubernetes in all of the control plane and worker nodes of the management cluster.
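If you want to watch the rollout while the upgrade runs, you can monitor the Cluster API resources in the management cluster from a second terminal. This is a sketch using standard Cluster API resource types that are present in the management cluster.
kubectl get kubeadmcontrolplane,machinedeployments -A
kubectl get machines -A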
If the upgrade times out before it completes, run tanzu management-cluster upgrade again and specify the --timeout option with a value greater than the default of 30 minutes.
tanzu management-cluster upgrade --timeout 45m0s
When the upgrade finishes, run the tanzu cluster list command with the --include-management-cluster option again to check that the management cluster has been upgraded.
tanzu cluster list --include-management-cluster
You see that the management cluster is now running the new version of Kubernetes, but that the Tanzu Kubernetes clusters are still running previous versions of Kubernetes.
NAME                 NAMESPACE   STATUS   CONTROLPLANE  WORKERS  KUBERNETES         ROLES       PLAN
k8s-1-17-13-cluster  default     running  1/1           1/1      v1.17.13+vmware.1  <none>      dev
k8s-1-18-10-cluster  default     running  1/1           1/1      v1.18.10+vmware.1  <none>      dev
k8s-1-19-3-cluster   default     running  1/1           1/1      v1.19.3+vmware.1   <none>      dev
mgmt-cluster         tkg-system  running  1/1           1/1      v1.20.5+vmware.2   management  dev
In Tanzu Kubernetes Grid v1.2, workload clusters access the Harbor service via a load balancer or a connectivity API installed in the management cluster. Tanzu Kubernetes Grid v1.3 only supports a load balancer for Harbor access. If your v1.2 installation uses the connectivity API, you need to remove it and set up a load balancer for the Harbor domain name before you upgrade:
Set up a load balancer for your workload clusters. For vSphere, see Install VMware NSX Advanced Load Balancer on a vSphere Distributed Switch.
Remove the tkg-connectivity operator and tanzu-registry webhook from the management cluster:
Set the context of kubectl to the context of your management cluster:
kubectl config use-context MGMT-CLUSTER-admin@MGMT-CLUSTER
MGMT-CLUSTER is the name of your management cluster.
Run the following commands to remove the resources and related objects:
kubectl delete mutatingwebhookconfiguration tanzu-registry-webhook
kubectl delete namespace tanzu-system-connectivity
kubectl delete namespace tanzu-system-registry
kubectl delete clusterrolebinding tanzu-registry-webhook
kubectl delete clusterrole tanzu-registry-webhook
kubectl delete clusterrolebinding tkg-connectivity-operator
kubectl delete clusterrole tkg-connectivity-operator
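To confirm that nothing was left behind, you can check that these objects and namespaces no longer appear. This is a sketch; the namespaces may remain in a Terminating state for a short time after deletion.
kubectl get mutatingwebhookconfiguration,clusterrole,clusterrolebinding | grep -E 'tanzu-registry|tkg-connectivity'
kubectl get namespaces | grep -E 'tanzu-system-connectivity|tanzu-system-registry'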
Undo the effects of the tanzu-registry webhook on the control planes of your existing clusters:
List all cluster control plane resources in the management cluster:
kubectl get kubeadmcontrolplane -A | grep -v tkg-system
These resources correspond to the workload clusters and the shared services cluster.
For each workload cluster control plane listed, run kubectl edit to edit its resource manifest as follows. For example, for a workload cluster named my_cluster_1:
kubectl -n NAMESPACE edit kubeadmcontrolplane my_cluster_1-control-plane
NAMESPACE is the namespace in which the workload cluster is deployed. If the namespace is default, you can omit the -n option.
In the manifest's files: section, delete the entry for /opt/tkg/tanzu-registry-proxy.sh.
In the manifest's preKubeadmCommands: section, delete the two lines that start with the following commands (you can locate them with the preview sketched after this list):
echo, which appends an IP address for Harbor to the /etc/hosts file
/opt/tkg/tanzu-registry-proxy.sh, which runs the tanzu-registry-proxy.sh script
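If you want to see exactly where these entries appear before you open the editor, you can preview the manifest. This is a sketch that reuses the example control plane name from above; adjust the name and namespace to match your cluster.
kubectl -n NAMESPACE get kubeadmcontrolplane my_cluster_1-control-plane -o yaml | grep -n -B 2 -A 2 tanzu-registry-proxy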
In Tanzu Kubernetes Grid v1.3.0, Pinniped used Dex as the endpoint for both OIDC and LDAP providers. In Tanzu Kubernetes Grid v1.3.1 and later, Pinniped no longer requires Dex and uses the Pinniped endpoint for OIDC providers; Dex is only used with an LDAP provider. If you used Tanzu Kubernetes Grid v1.3.0 to deploy management clusters that implement OIDC authentication, when you upgrade those management clusters to v1.3.1, the dexsvc service running in the management cluster is removed and replaced by the pinniped-supervisor service. Consequently, you must update the callback URLs that you specified in your OIDC provider after you deployed the management clusters with Tanzu Kubernetes Grid v1.3.0, so that they point to the pinniped-supervisor service rather than to the dexsvc service.
Before you can update the callback URL, you must obtain the address of the Pinniped service that is running in the upgraded cluster.
Get the admin context of the management cluster:
tanzu management-cluster kubeconfig get --admin
If your management cluster is named id-mgmt-test, you should see the following confirmation:
Credentials of workload cluster 'id-mgmt-test' have been saved.
You can now access the cluster by running 'kubectl config use-context id-mgmt-test-admin@id-mgmt-test'
The admin context of a cluster gives you full access to the cluster without requiring authentication with your IDP.
Set the context of kubectl to the admin context of the management cluster.
kubectl config use-context id-mgmt-test-admin@id-mgmt-test
Get information about the services that are running in the management cluster.
In Tanzu Kubernetes Grid v1.3.1 and later, the identity management service runs in the pinniped-supervisor namespace:
kubectl get all -n pinniped-supervisor
Depending on your infrastructure provider, you see one of the following entries in the output:
vSphere:
NAME                          TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/pinniped-supervisor   NodePort   100.70.70.12   <none>        5556:31234/TCP   84m
Amazon EC2:
NAME                          TYPE           CLUSTER-IP     EXTERNAL-IP                              PORT(S)         AGE
service/pinniped-supervisor   LoadBalancer   100.69.13.66   ab1[...]71.eu-west-1.elb.amazonaws.com   443:30865/TCP   56m
Azure:
NAME                          TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)         AGE
service/pinniped-supervisor   LoadBalancer   100.69.169.220   126.96.36.199   443:30451/TCP   84m
Note the following information (you can also query these values directly, as sketched after this list):
On vSphere, note the port on which the pinniped-supervisor service is running. In the example above, this is the node port listed under PORT(S), in this case 31234.
On Amazon EC2 and Azure, note the external address of the LoadBalancer node on which the pinniped-supervisor service is running, which is listed under EXTERNAL-IP.
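If you prefer to query these values directly rather than reading the table output, the following jsonpath expressions are one way to do it. This is a sketch; it assumes the pinniped-supervisor service exposes a single port, as in the examples above.
On vSphere, the node port:
kubectl get service pinniped-supervisor -n pinniped-supervisor -o jsonpath='{.spec.ports[0].nodePort}'
On Amazon EC2 and Azure, the external address:
kubectl get service pinniped-supervisor -n pinniped-supervisor -o jsonpath='{.status.loadBalancer.ingress[0].hostname}{.status.loadBalancer.ingress[0].ip}'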
Once you have obtained information about the address at which pinniped-supervisor is running, you must update the callback URL for your OIDC provider. For example, if your IDP is Okta, perform the following steps:
Under Login, update Login redirect URIs to include the address of the node on which the pinniped-supervisor is running.
On vSphere, update the callback URL to use the pinniped-supervisor port number that you noted in the previous procedure.
On Amazon EC2 and Azure, update the callback URL to use the external address of the LoadBalancer node on which the pinniped-supervisor is running, which you noted in the previous procedure (see the example forms below).
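For example, the updated redirect URIs typically take the following forms. This is a sketch: the addresses are placeholders, and the /callback path is the Pinniped Supervisor default; confirm the exact URI that your deployment expects.
https://NODE-ADDRESS:NODE-PORT/callback (vSphere, where NODE-PORT is the pinniped-supervisor node port you noted, for example 31234)
https://LOAD-BALANCER-ADDRESS/callback (Amazon EC2 and Azure, where LOAD-BALANCER-ADDRESS is the external address you noted)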
You can now upgrade the Tanzu Kubernetes clusters that this management cluster manages and deploy new Tanzu Kubernetes clusters. By default, any new clusters that you deploy with this management cluster will run the new default version of Kubernetes.
However, if required, you can use the tanzu cluster create command with the --tkr option to deploy new clusters that run different versions of Kubernetes. For more information, see Deploy Tanzu Kubernetes Clusters with Different Kubernetes Versions.
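For example, assuming your cluster configuration is already in place, you could list the available Kubernetes releases and then create a cluster that uses one of them; the release and cluster names below are placeholders.
tanzu kubernetes-release get
tanzu cluster create my-new-cluster --plan dev --tkr TKR-NAME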