Upgrade Standalone Management Clusters

To upgrade Tanzu Kubernetes Grid with a standalone management cluster, you must first upgrade the standalone management cluster. You cannot upgrade workload clusters until you have upgraded the management cluster that manages them.

If you are running TKG with vSphere with Tanzu Supervisor, you do not follow this procedure. Instead, you upgrade the Supervisor as part of vSphere and update the Supervisor’s Kubernetes version by upgrading its TKrs.

Important

Tanzu Kubernetes Grid v2.4.x is the last version of TKG that supports upgrading existing standalone TKG management clusters on AWS and Azure. The ability to upgrade standalone TKG management clusters on AWS and Azure will be removed in the Tanzu Kubernetes Grid v2.5 release.

Going forward, VMware recommends that you use Tanzu Mission Control to create native AWS EKS and Azure AKS clusters. However, upgrading existing standalone TKG management clusters on AWS and Azure remains fully supported for all TKG releases up to and including TKG v2.4.x.

For more information, see Deprecation of TKG Management and Workload Clusters on AWS and Azure in the VMware Tanzu Kubernetes Grid v2.4 Release Notes.

Upgrading the management cluster automatically upgrades the auto-managed packages that it runs.

Note

After you have installed the Tanzu CLI but before a standalone management cluster has been upgraded, all context-specific CLI command groups (tanzu cluster, tanzu kubernetes-release) are unavailable and not included in Tanzu CLI --help output.

Management clusters and workload clusters use client certificates to authenticate clients. These certificates are valid for one year. To renew them, upgrade your clusters at least once a year or rotate them manually as described in Renew Cluster Certificates (Standalone MC) or the VMware Knowledge Base article How to rotate certificates in a Tanzu Kubernetes Grid cluster.
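
To check when the client certificate in your current kubeconfig expires, one approach, assuming your current context is the cluster's admin context, is to decode the certificate with openssl:

    # Extract the client certificate for the current context and print its expiry date.
    kubectl config view --raw --minify -o jsonpath='{.users[0].user.client-certificate-data}' \
      | base64 -d | openssl x509 -noout -enddate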

Prerequisites

(LDAP Only) Update LDAP Settings

Important

The procedure in this section applies only to class-based clusters. For legacy plan-based clusters, see KB 369603.

Starting in Tanzu Kubernetes Grid v2.3, you must set the LDAP_BIND_DN and LDAP_BIND_PASSWORD variables when configuring an LDAP identity provider. Before upgrading a management cluster configured to use an LDAP identity provider to Tanzu Kubernetes Grid v2.3, set these variables if you have not already done so. Upgrading your management cluster without setting LDAP_BIND_DN and LDAP_BIND_PASSWORD causes the Pinniped App to fail to deploy and the App custom resource returns an error. For more information about these configuration variables, see Identity Providers - LDAP in Configuration File Variable Reference.
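
For example, these variables go in the management cluster configuration file; the values here are placeholders for illustration:

    # Placeholder values; substitute your directory's bind DN and password.
    LDAP_BIND_DN: "cn=tkg-bind-user,ou=service-accounts,dc=example,dc=com"
    LDAP_BIND_PASSWORD: "example-bind-password"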

To update LDAP_BIND_DN and LDAP_BIND_PASSWORD, you must use the version of the management-cluster CLI plugin that corresponds to the version of your management cluster. Perform these steps before upgrading the management cluster:

  1. If you no longer have the configuration file for the management cluster, regenerate it.

    1. Set the context of kubectl to your management cluster.

      For example, with a management cluster named id-mgmt-test:

      kubectl config use-context id-mgmt-test-admin@id-mgmt-test
      
    2. Run the following command.

      kubectl -n tkg-system get secret tkg-pkg-tkg-system-values -o jsonpath="{.data.tkgpackagevalues\.yaml}" | base64 -d | yq .configvalues | grep -v CLUSTER_CLASS > YOUR-MANAGEMENT-CLUSTER-CONFIG-FILE.yaml
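
      You can spot-check the regenerated file before continuing; the grep pattern here is illustrative:

      grep -E "^(LDAP_|CLUSTER_NAME)" YOUR-MANAGEMENT-CLUSTER-CONFIG-FILE.yaml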
      
  2. Add LDAP_BIND_DN and LDAP_BIND_PASSWORD to the configuration file for your management cluster.

  3. Confirm that the correct version of the management-cluster plugin is installed on your machine. For Tanzu Kubernetes Grid v2.2.0, the correct version is v0.29.0.

    tanzu plugin list
    

    If the correct version of the management-cluster plugin is not installed, do the following:

    1. In the output of the following command, locate the version of the management-cluster plugin that corresponds to the version of your management cluster:

      tanzu plugin search -n management-cluster --show-details
      
    2. Install the plugin:

      tanzu plugin install management-cluster --version PLUGIN-VERSION
      

      Where PLUGIN-VERSION is the version that you located in the previous step.
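
      For example, for a Tanzu Kubernetes Grid v2.2.0 management cluster, matching the plugin version noted above:

      tanzu plugin install management-cluster --version v0.29.0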

  4. Generate a new secret for the Pinniped package by running:

    FILTER_BY_ADDON_TYPE=authentication/pinniped tanzu management-cluster create --dry-run -f YOUR-MANAGEMENT-CLUSTER-CONFIG-FILE.yaml > PINNIPED-PACKAGE-SECRET.yaml
    

    Where YOUR-MANAGEMENT-CLUSTER-CONFIG-FILE.yaml is the configuration file that you updated in step 2 and PINNIPED-PACKAGE-SECRET.yaml is the new secret for the Pinniped package.

  5. Confirm that the resulting secret includes your updated settings (one way to check is sketched below), set the kubectl context to the management cluster, and apply the secret:

    kubectl apply -f PINNIPED-PACKAGE-SECRET.yaml
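
    A minimal sketch of that confirmation, to run before the apply command above. TKG addon secrets typically store their values as plain stringData, so your bind DN should appear in the generated manifest; the grep pattern is illustrative:

    grep -i "bind" PINNIPED-PACKAGE-SECRET.yaml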
    

Procedure

  1. Run the tanzu context use command to see an interactive list of management clusters available for upgrade.

    tanzu context use
    
  2. Select the management cluster that you want to upgrade. See List Management Clusters and Change Context for more information.

  3. Get the admin credentials of the cluster. The Tanzu CLI alias mc is short for management-cluster.

    tanzu mc kubeconfig get --admin
    
  4. Connect kubectl to the management cluster.

    kubectl config use-context CLUSTER-NAME-admin@CLUSTER-NAME
    
  5. If the management cluster is running on Azure, set the AZURE_CLIENT_SECRET environment variable before upgrading the cluster:

    export AZURE_CLIENT_SECRET=YOUR-AZURE-CLIENT-SECRET
    
  6. Run the tanzu mc upgrade command and enter y to confirm.

    Note

    After you run this command, non-admin users cannot log in to the associated workload clusters until the Pinniped pods finish restarting.

    tanzu mc upgrade
    

    If multiple base VM images in your IaaS account have the same version of Kubernetes that you are upgrading to, you can include --os-name and other options to specify the target OS as described in Select an OS to Upgrade To:

    tanzu mc upgrade --os-name ubuntu
    

    On vSphere, you can use the --vsphere-vm-template-name option to specify a target OVA template for cluster nodes as described in Select an OVA Template to Upgrade To:

    tanzu mc upgrade --vsphere-vm-template-name "/dc0/vm/tanzu/ubuntu-2004-kube-v1.29.9-vmware.1"
    

    To skip the confirmation step when you upgrade a cluster, specify the --yes option.

    tanzu mc upgrade --yes
    

    The upgrade process first upgrades the Cluster API providers for vSphere, Amazon Web Services (AWS), or Azure that are running in the management cluster. Then, it upgrades the version of Kubernetes in all of the control plane and worker nodes of the management cluster.
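
    While the upgrade runs, the Important note below advises against running tanzu commands, but read-only kubectl queries against the management cluster are generally safe. A sketch, assuming your kubectl context is still set to the management cluster:

    # Watch Cluster API Machines roll: old nodes drain as replacement nodes join.
    kubectl get machines -A -w
    # Check the control plane rollout status.
    kubectl get kubeadmcontrolplane -A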

    Important

    While a management cluster is upgrading, do not run tanzu cluster or tanzu mc commands against it or the workload clusters that it manages, for example from another bootstrap machine or shell window.

    If the upgrade times out before it completes, run tanzu mc upgrade again and specify the --timeout option with a value greater than the default of 30 minutes.

    tanzu mc upgrade --timeout 45m0s
    
    Note

    After you have installed the v2.3 CLI but before a management cluster has been upgraded, all context-specific CLI command groups (tanzu cluster, tanzu kubernetes-release) plus all of the management-cluster plugin commands except for tanzu mc upgrade and tanzu mc create are unavailable and not included in Tanzu CLI --help output.

  7. After the upgrade finishes, run the tanzu cluster list command with the --include-management-cluster -A options to check that the management cluster has been upgraded.

    tanzu cluster list --include-management-cluster -A
    

    You see that the management cluster is now running the new version of Kubernetes, but that the workload clusters are still running previous versions of Kubernetes.

     NAME                 NAMESPACE   STATUS    CONTROLPLANE  WORKERS  KUBERNETES         ROLES       PLAN  TKR
     k8s-1-24-14-cluster  default     running   1/1           1/1      v1.24.14+vmware.1  <none>      dev   v1.24.14---vmware.1-tkg.1
     k8s-1-25-10-cluster  default     running   1/1           1/1      v1.25.10+vmware.1  <none>      dev   v1.25.10---vmware.1-tkg.1
     mgmt-cluster         tkg-system  running   1/1           1/1      v1.26.8+vmware.1   management  dev   v1.26.8---vmware.2-tkg.1
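
    You can also verify the new version directly on the nodes, assuming your kubectl context is still set to the management cluster; the VERSION column shows each node's kubelet version:

    kubectl get nodes -o wide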
    
  8. Regenerate the admin kubeconfig:

    tanzu management-cluster kubeconfig get --admin
    

    The following is sample output of the command:

    Credentials of cluster 'mgmt' have been saved
    You can now access the cluster by running 'kubectl config use-context mgmt-admin@mgmt'
    
    Important

    If you do not regenerate the kubeconfig after upgrading, you lose access to the cluster when the client certificate in the old kubeconfig expires.

What to Do Next

You can now:

  • Upgrade the workload clusters that this management cluster manages.

  • Create new workload clusters. By default, any new clusters that you deploy with this management cluster will run the new default version of Kubernetes. However, if required, you can use the tanzu cluster create command with the --tkr option to deploy new clusters that run different versions of Kubernetes. For more information, see Multiple Kubernetes Versions.
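
    For example, a sketch using placeholder cluster and file names (my-cluster, my-cluster-config.yaml), with a TKr name in the format shown in the tanzu cluster list output above:

    tanzu kubernetes-release get    # list available TKrs
    tanzu cluster create my-cluster --file my-cluster-config.yaml --tkr v1.25.10---vmware.1-tkg.1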
