To upgrade your Tanzu Kubernetes Grid instance, you must first upgrade the management cluster. You cannot upgrade Tanzu Kubernetes clusters until you have upgraded the management cluster that manages them.

IMPORTANT: Management clusters and Tanzu Kubernetes clusters use client certificates to authenticate clients. These certificates are valid for one year. To renew them, upgrade your clusters at least once a year.

Prerequisites

  • You performed the steps in Upgrade Tanzu Kubernetes Grid that occur before the step for upgrading management clusters.
  • If you are upgrading a management cluster that you previously upgraded from v1.2.x to v1.3.x, you performed the steps to replace the connectivity API with a load balancer when you upgraded to v1.3.x. For information, see the v1.3 documentation.
  • If you deployed the previous version of Tanzu Kubernetes Grid in an Internet-restricted environment, you have performed the steps in Prepare an Internet-Restricted Environment to recreate and run the gen-publish-images.sh and publish-images.sh scripts with the new component image versions.

Procedure

  1. Run the tanzu login command to see an interactive list of management clusters available for upgrade.

    tanzu login
    
  2. Select the management cluster that you want to upgrade. See List Management Clusters and Change Context for more information.

  3. Run the tanzu cluster list command with the --include-management-cluster option.

    tanzu cluster list --include-management-cluster
    

    This command shows the versions of Kubernetes running on the management cluster and all of the clusters that it manages:

    $ tanzu cluster list --include-management-cluster
     NAME                  NAMESPACE   STATUS    CONTROLPLANE  WORKERS  KUBERNETES          ROLES       PLAN
     k8s-1-19-12-cluster   default     running   1/1           1/1      v1.19.12+vmware.1   <none>      dev
     k8s-1-20-5-cluster    default     running   1/1           1/1      v1.20.5+vmware.1    <none>      dev
     mgmt-cluster          tkg-system  running   1/1           1/1      v1.21.2+vmware.1    management  dev
    
  4. Run the tanzu management-cluster upgrade command and enter y to confirm.

    The following command upgrades the current management cluster.

    tanzu management-cluster upgrade
    

    If multiple base VM images in your IaaS account have the same version of Kubernetes that you are upgrading to, use the --os-name option to specify the OS you want. See Selecting an OS During Cluster Upgrade for more information.

    For example, on vSphere if you have uploaded both Photon and Ubuntu OVA templates with Kubernetes v1.21.2, specify --os-name ubuntu to upgrade your management cluster to run on an Ubuntu VM.

    tanzu management-cluster upgrade --os-name ubuntu
    

    To skip the confirmation step when you upgrade a cluster, specify the --yes option.

    tanzu management-cluster upgrade --yes
    

    The upgrade process first upgrades the Cluster API providers for vSphere, Amazon EC2, or Azure that are running in the management cluster. Then, it upgrades the version of Kubernetes in all of the control plane and worker nodes of the management cluster.
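
    If you want to watch the node rollout while the upgrade runs, one option is to query the Cluster API machine objects from a second terminal. This is a minimal sketch, assuming kubectl is already set to the management cluster's admin context:

      # Watch Cluster API machine objects; old and new machines appear side by side
      # while each node is replaced.
      kubectl get machines --all-namespaces -w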

    If the upgrade times out before it completes, run tanzu management-cluster upgrade again and specify the --timeout option with a value greater than the default of 30 minutes.

    tanzu management-cluster upgrade --timeout 45m0s
    
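    These options can be combined in a single run. For example, the following illustrative command selects an Ubuntu base image, extends the timeout, and skips the confirmation prompt:

      tanzu management-cluster upgrade --os-name ubuntu --timeout 45m0s --yes
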
  5. When the upgrade finishes, run the tanzu cluster list command with the --include-management-cluster option again to check that the management cluster has been upgraded.

    tanzu cluster list --include-management-cluster
    

    You see that the management cluster is now running the new version of Kubernetes, but that the Tanzu Kubernetes clusters are still running previous versions of Kubernetes.

     NAME                  NAMESPACE   STATUS    CONTROLPLANE  WORKERS  KUBERNETES          ROLES       PLAN
     k8s-1-19-12-cluster   default     running   1/1           1/1      v1.19.12+vmware.1   <none>      dev
     k8s-1-20-5-cluster    default     running   1/1           1/1      v1.20.5+vmware.1    <none>      dev
     mgmt-cluster          tkg-system  running   1/1           1/1      v1.21.2+vmware.1    management  dev
    

IMPORTANT: In Tanzu Kubernetes Grid v1.3.x, by default all configuration information for your clusters and Tanzu Kubernetes Grid installation was stored in the folder ~/.tanzu. In Tanzu Kubernetes Grid v1.4.x, configuration information is stored in the folder ~/.config/tanzu. All information that was stored in the ~/.tanzu folder is automatically migrated into the ~/.config/tanzu folder when you run the tanzu management-cluster upgrade command to upgrade to v1.4. You can verify that all of your management cluster configurations have migrated into the ~/.config/tanzu folder by running the tanzu config server list command. After verification, you can delete the ~/.tanzu folder.
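
For example, a minimal verification sequence might look like the following; the removal of the old folder is shown here for illustration:

    # Confirm that the management clusters you expect are listed from ~/.config/tanzu.
    tanzu config server list

    # After verifying, remove the old configuration folder.
    rm -rf ~/.tanzu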

Update Pinniped Settings for Management Clusters with OIDC Authentication

This procedure is only required if you are upgrading from v1.3.0 to v1.4.x and you use OIDC authentication. If you are upgrading a management cluster that you deployed with v1.3.1, or that you have already upgraded to v1.3.1, you do not need to perform this procedure.

In Tanzu Kubernetes Grid v1.3.1 and later, Pinniped with OIDC no longer requires Dex. Follow these steps to change the Dex settings to Pinniped settings.

  1. Set kubectl to the admin context of the management cluster.

    kubectl config use-context MGMT-CLUSTER-admin@MGMT-CLUSTER
    

    Where MGMT-CLUSTER is the name of the management cluster.

  2. Decode the Pinniped configuration settings from the secret object that they are stored in, and save them to a local file values.yaml:

    kubectl get secret MGMT-CLUSTER-pinniped-addon -n tkg-system -o jsonpath="{.data.values\.yaml}" | base64 --decode > values.yaml
    

    The settings are stored in the secret's data.values.yaml property.

  3. Open the Pinniped configuration file values.yaml in a text editor and replace the Dex settings with the Pinniped settings from the table below:

    Dex                                         Pinniped                                    Notes
    dex.config.oidc.CLIENT_ID                   pinniped.upstream_oidc_client_id
    dex.config.oidc.CLIENT_SECRET               pinniped.upstream_oidc_client_secret
    dex.config.oidc.issuer                      pinniped.upstream_oidc_issuer_url           Set the Pinniped value equal to the former Dex value.
    dex.config.oidc.scopes                      pinniped.upstream_oidc_additional_scopes
    dex.config.oidc.claimMapping.userNameKey    pinniped.upstream_oidc_claims.username      If the Dex value is not set, set the Pinniped value to name, which is the Dex default.
    dex.config.oidc.claimMapping.groups         pinniped.upstream_oidc_claims.groups

    For a description of these settings, see Updating Core Package Configuration in Viewing and Updating Configuration Information for Core Packages.
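
    For illustration, after the edit the OIDC section of values.yaml might look like the following. The nesting is inferred from the dotted setting names above, and every value shown is a hypothetical placeholder; use the values from your own Dex configuration:

      pinniped:
        upstream_oidc_client_id: "my-client-id"                # former dex.config.oidc.CLIENT_ID
        upstream_oidc_client_secret: "my-client-secret"        # former dex.config.oidc.CLIENT_SECRET
        upstream_oidc_issuer_url: "https://idp.example.com"    # former dex.config.oidc.issuer
        upstream_oidc_additional_scopes: ["email", "profile", "groups"]
        upstream_oidc_claims:
          username: "name"      # the Dex default, if claimMapping.userNameKey was not set
          groups: "groups"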

  4. Save the values.yaml file and exit the text editor.

  5. Run base64 to re-encode the file without newline characters, and record the base64-encoded string that the command outputs:

    • Linux:

      base64 -w 0 values.yaml
      
    • macOS:

      base64 values.yaml
      
  6. Run kubectl edit to edit the secret object itself:

    kubectl edit secret MGMT-CLUSTER-pinniped-addon -n tkg-system
    

    Where MGMT-CLUSTER is the name of the management cluster.

  7. In the configuration file, replace the data.values.yaml value with the new base64 string containing the configuration values:

    # Please edit the object below. Lines beginning with a '#' will be ignored,
    # and an empty file will abort the edit. If an error occurs while saving this file will be
    # reopened with the relevant failures.
    #
    apiVersion: v1
    data:
     values.yaml: CONFIG-BASE64
    kind: Secret
    ...
    

    Where CONFIG-BASE64 is the new base64-encoded string that you recorded in step 5.

  8. Save and exit to update the secret.
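
    As an alternative to editing the secret interactively in steps 6 through 8, you can patch it in a single command. This is a sketch; BASE64-STRING stands for the encoded output that you recorded in step 5:

      kubectl patch secret MGMT-CLUSTER-pinniped-addon -n tkg-system \
        --type merge -p '{"data": {"values.yaml": "BASE64-STRING"}}'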

  9. Check the status of the Pinniped add-on.

    kubectl get app pinniped -n tkg-system
    

    If the returned status is Reconcile failed, run the following command to get details on the failure.

    kubectl get app pinniped -n tkg-system -o yaml
    

    For more information about troubleshooting the Pinniped add-on, see Troubleshooting Core Add-on Configuration in Viewing and Updating Configuration Information for Core Packages.

  10. Update the callback URL to match the Pinniped service by following the steps in Update the Callback URL for Management Clusters with OIDC Authentication below.

Update the Callback URL for Management Clusters with OIDC Authentication

This step is only required if you are upgrading from v1.3.0 to v1.4.x. If you are upgrading a management cluster that you deployed with v1.3.1 or that you have already upgraded to v1.3.1, you do not need to perform this procedure.

In Tanzu Kubernetes Grid v1.3.0, Pinniped used Dex as the endpoint for both OIDC and LDAP providers. In Tanzu Kubernetes Grid v1.3.1 and later, Pinniped no longer requires Dex and uses its own endpoint for OIDC providers; Dex is used only with LDAP providers. If you used Tanzu Kubernetes Grid v1.3.0 to deploy management clusters that implement OIDC authentication, when you upgrade those management clusters to v1.4.x, the dexsvc service running in the management cluster is removed and replaced by the pinniped-supervisor service. Consequently, you must update the callback URLs that you specified in your OIDC provider when you deployed the management clusters with Tanzu Kubernetes Grid v1.3.0, so that they connect to the pinniped-supervisor service rather than to the dexsvc service.

Obtain the Address of the Pinniped Service

Before you can update the callback URL, you must obtain the address of the Pinniped service that is running in the upgraded cluster.

  1. Get the admin context of the management cluster.

    tanzu management-cluster kubeconfig get --admin
    

    You should see the confirmation Credentials of workload cluster 'MGMT-CLUSTER' have been saved. You can now access the cluster by running 'kubectl config use-context MGMT-CLUSTER-admin@MGMT-CLUSTER', where MGMT-CLUSTER is the name of your management cluster. The admin context of a cluster gives you full access to the cluster without requiring authentication with your IDP.

  2. Set kubectl to the admin context of the management cluster.

    kubectl config use-context MGMT-CLUSTER-admin@MGMT-CLUSTER
    

    Where MGMT-CLUSTER is the name of your management cluster.

  3. Get information about the services that are running in the upgraded management cluster.

    In Tanzu Kubernetes Grid v1.4.x, the identity management service runs in the pinniped-supervisor namespace:

    kubectl get all -n pinniped-supervisor
    

    Depending on your infrastructure provider, you see an entry similar to one of the following:

    vSphere:

    NAME                          TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
    service/pinniped-supervisor   NodePort   100.70.70.12   <none>        5556:31234/TCP   84m
    

    Amazon EC2:

    NAME                          TYPE           CLUSTER-IP     EXTERNAL-IP                              PORT(S)         AGE
    service/pinniped-supervisor   LoadBalancer   100.69.13.66   ab1[...]71.eu-west-1.elb.amazonaws.com   443:30865/TCP   56m
    

    Azure:

    NAME                          TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)         AGE
    service/pinniped-supervisor   LoadBalancer   100.69.169.220   20.54.226.44     443:30451/TCP   84m
    
  4. Note the following information. You can also retrieve these values directly with kubectl, as shown in the sketch after this list.

    • For management clusters that are running on vSphere, note the node port on which the pinniped-supervisor service is exposed. In the example above, this is the second port listed under PORT(S), 31234.
    • For clusters that you deploy to Amazon EC2 and Azure, note the external address of the load balancer on which the pinniped-supervisor service is running, listed under EXTERNAL-IP.
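
    If you prefer to read these values directly from the service object, the following jsonpath queries are one way to do it. They are a sketch that assumes the service layouts shown above:

      # vSphere: the node port on which pinniped-supervisor listens.
      kubectl get service pinniped-supervisor -n pinniped-supervisor -o jsonpath='{.spec.ports[0].nodePort}'

      # Amazon EC2: the load balancer hostname.
      kubectl get service pinniped-supervisor -n pinniped-supervisor -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

      # Azure: the load balancer IP address.
      kubectl get service pinniped-supervisor -n pinniped-supervisor -o jsonpath='{.status.loadBalancer.ingress[0].ip}'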

Update the Callback URL

Once you have obtained information about the address at which pinniped-supervisor is running, you must update the callback URL for your OIDC provider. For example, if your IDP is Okta, perform the following steps:

  1. Log in to your Okta account.
  2. In the main menu, go to Applications.
  3. Select the application that you created for Tanzu Kubernetes Grid.
  4. In the General Settings panel, click Edit.
  5. Under Login, update Login redirect URIs to include the address at which the pinniped-supervisor service is running.

    • On vSphere, use the pinniped-supervisor port number that you noted in the previous procedure.

      https://API-ENDPOINT-IP:31234/callback
      
    • On Amazon EC2 and Azure, use the external address of the load balancer on which the pinniped-supervisor is running, which you noted in the previous procedure.

      https://EXTERNAL-IP/callback
      

      Specify https, not http.

  6. Click Save.

What to Do Next

You can now upgrade the Tanzu Kubernetes clusters that this management cluster manages and deploy new Tanzu Kubernetes clusters. By default, any new clusters that you deploy with this management cluster will run the new default version of Kubernetes.

However, if required, you can use the tanzu cluster create command with the --tkr option to deploy new clusters that run different versions of Kubernetes. For more information, see Deploy Tanzu Kubernetes Clusters with Different Kubernetes Versions.
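
For example, the following commands list the available Tanzu Kubernetes releases and then create a cluster that runs one of them. The cluster name and release name are illustrative placeholders:

    # List the Kubernetes releases that this management cluster can deploy.
    tanzu kubernetes-release get

    # Create a workload cluster that runs a specific release from that list.
    tanzu cluster create my-cluster --tkr v1.20.5---vmware.1-tkg.1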
