Upgrade Tanzu Kubernetes Grid

To upgrade Tanzu Kubernetes Grid, you download and install the new version of the Tanzu CLI on the machine that you use as the bootstrap machine. You must also download and import new base image templates or machine images, depending on whether you are upgrading clusters that you previously deployed to vSphere, Amazon Web Services (AWS), or Azure.

After you have installed the new versions of the components, you use the tanzu mc upgrade and tanzu cluster upgrade CLI commands to upgrade management clusters and workload clusters.
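For example, a minimal command sequence, upgrading the management cluster first and then each workload cluster, looks like the following sketch, where my-workload-cluster is a placeholder cluster name:

    tanzu mc upgrade
    tanzu cluster upgrade my-workload-cluster
    # add --yes to either command to skip its confirmation prompt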

Prerequisites

Before you begin the upgrade to Tanzu Kubernetes Grid v1.6.x, you must ensure that your current deployment is Tanzu Kubernetes Grid v1.5.x. To upgrade to v1.6.x from Tanzu Kubernetes Grid versions earlier than v1.5, you must first upgrade to v1.5.x with the v1.5.x version of the Tanzu CLI.

Procedure

The following sections describe the overall steps required to upgrade Tanzu Kubernetes Grid. This procedure assumes that you are upgrading to Tanzu Kubernetes Grid v1.6.0.

Some steps are only required if you are performing a minor upgrade from Tanzu Kubernetes Grid v1.5.x to v1.6.x and are not required if you are performing a patch upgrade from Tanzu Kubernetes Grid v1.6.x to v1.6.y.

If you deployed the previous version of Tanzu Kubernetes Grid in an Internet-restricted environment, see Upgrading vSphere Deployments in an Internet Restricted Environment.

Download and Install the New Version of the Tanzu CLI

This step is required for both minor v1.5.x to v1.6.x and patch v1.6.x to v1.6.y upgrades.

To download and install the new version of the Tanzu CLI, perform the following steps.

  1. Delete the ~/.config/tanzu/tkg/compatibility/tkg-compatibility.yaml file.

    If you do not delete this file, the new version of the Tanzu CLI will continue to use the Bill of Materials (BOM) for the previous release. Deleting this file causes the Tanzu CLI to pull the updated BOM.

    This step is required for both minor v1.5.x to v1.6.x and patch v1.6.x to v1.6.y upgrades.

  2. Follow the instructions in Install the Tanzu CLI and Other Tools to download and install the Tanzu CLI and kubectl on the machine where you currently run your tanzu commands.

  3. After you install the new version of the Tanzu CLI, run tanzu version to check that the correct version of the Tanzu CLI is properly installed. Tanzu Kubernetes Grid v1.6.0 uses Tanzu CLI v0.25.0, based on Tanzu Framework v0.25.0.
  4. After you install kubectl, run kubectl version to check that the correct version of kubectl is properly installed. The correct version of kubectl for Tanzu Kubernetes Grid v1.6.0 is v1.23.8.
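
For example, to run the checks in steps 3 and 4, assuming both binaries are on your PATH (build metadata in the output will vary by platform and build):

    tanzu version
    # expect the output to report version: v0.25.0

    kubectl version --client
    # expect the reported client version to be v1.23.8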

For information about Tanzu CLI commands and options that are available, see the Tanzu CLI Command Reference.

Prepare to Upgrade Clusters on vSphere

This step is required for both minor v1.5.x to v1.6.x and patch v1.6.x to v1.6.y upgrades.

Before you can upgrade a Tanzu Kubernetes Grid deployment on vSphere, you must import into vSphere new versions of the base image templates that the upgraded management and workload clusters will run. VMware publishes base image templates in OVA format for each supported OS and Kubernetes version. After importing the OVAs, you must convert the resulting VMs into VM templates.

This procedure assumes that you are upgrading to Tanzu Kubernetes Grid v1.6.0.

  1. Go to the Tanzu Kubernetes Grid downloads page and log in with your My VMware credentials.
  2. Download the latest Tanzu Kubernetes Grid OVAs for the OS and Kubernetes version lines that your management and workload clusters are running.

    For example, for Photon v3 images:

    • Kubernetes v1.23.8: Photon v3 Kubernetes v1.23.8 OVA
    • Kubernetes v1.22.11: Photon v3 Kubernetes v1.22.11 OVA
    • Kubernetes v1.21.14: Photon v3 Kubernetes v1.21.14 OVA

    For Ubuntu 20.04 images:

    • Kubernetes v1.23.8: Ubuntu 2004 Kubernetes v1.23.8 OVA
    • Kubernetes v1.22.11: Ubuntu 2004 Kubernetes v1.22.11 OVA
    • Kubernetes v1.21.14: Ubuntu 2004 Kubernetes v1.21.14 OVA

    Important: Make sure that you download the most recent OVA base image templates, in case security patches have been released. You can find updated base image templates that include security patches on the Tanzu Kubernetes Grid product download page.

  3. In the vSphere Client, right-click an object in the vCenter Server inventory and select Deploy OVF template.

  4. Select Local file, click the button to upload files, and navigate to a downloaded OVA file on your local machine.
  5. Follow the installer prompts to deploy a VM from the OVA.

    • Accept or modify the appliance name.
    • Select the destination datacenter or folder.
    • Select the destination host, cluster, or resource pool.
    • Accept the end user license agreements (EULA).
    • Select the disk format and destination datastore.
    • Select the network for the VM to connect to.
  6. Click Finish to deploy the VM.
  7. When the OVA deployment finishes, right-click the VM and select Template > Convert to Template.
  8. In the VMs and Templates view, right-click the new template, select Add Permission, and assign your Tanzu Kubernetes Grid user, for example, tkg-user, to the template with the Tanzu Kubernetes Grid role, for example, TKG. You created this user and role in Prepare to Deploy Management Clusters to vSphere.

Repeat the procedure for each of the Kubernetes versions for which you have downloaded the OVA file.
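
If you prefer to script the OVA import and template conversion instead of using the vSphere Client, the following is a minimal sketch that uses the open-source govc CLI rather than the procedure above. It assumes that govc is installed and configured through the GOVC_URL, GOVC_USERNAME, GOVC_PASSWORD, GOVC_DATASTORE, and GOVC_NETWORK environment variables; the OVA filename and template name are placeholders:

    # Import the downloaded OVA as a VM, then mark the VM as a template.
    govc import.ova -name photon-3-kube-v1.23.8 ./photon-3-kube-v1.23.8.ova
    govc vm.markastemplate photon-3-kube-v1.23.8

You still need to assign your Tanzu Kubernetes Grid user and role to the resulting template, as described in step 8.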

VMware Cloud on AWS SDDC Compatibility

If you are upgrading workload clusters that are deployed on VMware Cloud on AWS, verify that the underlying Software-Defined Datacenter (SDDC) version used by your existing deployment is compatible with the version of Tanzu Kubernetes Grid you are upgrading to.

To view the version of an SDDC, select View Details on the SDDC tile in VMware Cloud Console and click on the Support pane.

To validate compatibility with Tanzu Kubernetes Grid, refer to the VMware Product Interoperability Matrix.

Prepare to Upgrade Clusters on AWS

Before upgrading to TKG v1.6 on AWS, you must install the AWS EBS CSI driver onto all workload clusters that use CSI storage, as described below.

Warning: Failure to follow this procedure before upgrade may result in data loss.

  1. Grant permissions for the AWS EBS CSI driver:

    export AWS_REGION={YOUR_AWS_REGION}
    tanzu mc permissions aws set
    
  2. For each workload cluster that uses CSI storage:

    1. Export the following environment variables:

      export _TKG_CLUSTER_FORCE_ROLE="management"
      export FILTER_BY_ADDON_TYPE="csi/aws-ebs-csi-driver"
      export NAMESPACE="tkg-system"
      

      Set NAMESPACE to the cluster’s namespace, tkg-system in the example above.

    2. Generate the CSI driver manifest:

      tanzu cluster create ${TARGET_CLUSTER_NAME} --dry-run -f ~/.config/tanzu/tkg/clusterconfigs/MANAGEMENT_CLUSTER_CONFIG.yaml > csi-driver-addon-manifest.yaml
      

      Set TARGET_CLUSTER_NAME to the name of the cluster that you are installing the CSI driver into.

    3. Connect kubectl to the management cluster.

      kubectl config use-context CLUSTER-NAME-admin@CLUSTER-NAME
      
    4. Apply the changes in the management cluster’s context:

      kubectl apply -f csi-driver-addon-manifest.yaml
      
  3. Unset the environment variables:

    unset _TKG_CLUSTER_FORCE_ROLE
    unset FILTER_BY_ADDON_TYPE
    unset NAMESPACE
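
After the add-on reconciles, one way to confirm that the driver is present in a workload cluster is to check for its CSIDriver object in that cluster's context; the context name below is a placeholder:

    kubectl config use-context my-workload-cluster-admin@my-workload-cluster
    kubectl get csidrivers
    # the output should include ebs.csi.aws.com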
    

Prepare to Upgrade Clusters on Azure

Before upgrading to TKG v1.6 on Azure, you must do the following, as described in the sections below:

  • Install the Azure Disk CSI Driver to all workload clusters that use CSI storage.
    • Required for both minor v1.5.x to v1.6.x and patch v1.6.x to v1.6.y upgrades.
  • Accept the Image License for the new default VM image and for any other image versions that you plan to use for workload clusters.
    • Required for both minor v1.5.x to v1.6.x and patch v1.6.x to v1.6.y upgrades.

Install Azure Disk CSI Driver

If your clusters are using Kubernetes v1.21.x or v1.22.x, follow this procedure to install the Azure Disk CSI Driver before you upgrade your Tanzu Kubernetes Grid installation to v1.6+, which uses Kubernetes v1.23.x or later.

Warning: Failure to follow this procedure before upgrade may result in data loss.
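
To check which Kubernetes version each workload cluster is currently running, you can list the clusters from the management cluster context, for example:

    tanzu cluster list
    # the KUBERNETES column shows each cluster's current Kubernetes version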

  1. Export the required environment variables:

    export _TKG_CLUSTER_FORCE_ROLE="management"
    export FILTER_BY_ADDON_TYPE="csi/azuredisk-csi-driver"
    export NAMESPACE="tkg-system"
    

    Set NAMESPACE to the cluster’s namespace, tkg-system in the example above.

  2. For each workload cluster that uses CSI storage:

    1. Generate the CSI driver manifest:

      tanzu cluster create ${TARGET_CLUSTER_NAME} --dry-run -f ~/.config/tanzu/tkg/clusterconfigs/MANAGEMENT_CLUSTER_CONFIG.yaml > csi-driver-addon-manifest.yaml
      

      Set TARGET_CLUSTER_NAME to the name of the cluster that you are installing the CSI driver into.

    2. Connect kubectl to the management cluster.

      kubectl config use-context CLUSTER-NAME-admin@CLUSTER-NAME
      
    3. Apply the changes in the management cluster’s context:

      kubectl apply -f csi-driver-addon-manifest.yaml
      
  3. Unset the environment variables:

    unset _TKG_CLUSTER_FORCE_ROLE
    unset FILTER_BY_ADDON_TYPE
    unset NAMESPACE
    

Accept the Image License

Before upgrading to TKG v1.6 on Microsoft Azure, you must accept the license terms for the new default VM image and for each non-default VM image that you plan to use for your cluster VMs. You must accept these terms once per subscription.

To accept the terms:

  1. List all available VM images for Tanzu Kubernetes Grid in the Azure Marketplace:

    az vm image list --publisher vmware-inc --offer tkg-capi --all
    
  2. Accept the terms for the new default VM image:

    az vm image terms accept --urn publisher:offer:sku:version
    

    For example, to accept the terms for the default VM image in Tanzu Kubernetes Grid v1.6.0, k8s-1dot23dot8-ubuntu-2004, run:

    az vm image terms accept --urn vmware-inc:tkg-capi:k8s-1dot23dot8-ubuntu-2004:2021.05.17
    
  3. If you plan to upgrade any of your workload clusters to a non-default Kubernetes version, such as v1.22.11 or v1.21.14, accept the terms for each non-default version that you want to use for your cluster VMs.
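
To find the exact URNs to pass to az vm image terms accept, you can filter the output of the list command from step 1, for example:

    az vm image list --publisher vmware-inc --offer tkg-capi --all --query "[].urn" --output tsv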

Upgrade Management Clusters

This step is required for both minor v1.5.x to v1.6.x and patch v1.6.x to v1.6.y upgrades.

To upgrade Tanzu Kubernetes Grid, you must upgrade all management clusters in your deployment. You cannot upgrade workload clusters until you have upgraded the management clusters that manage them.

Follow the procedure in Upgrade Management Clusters to upgrade your management clusters.

Upgrade Workload Clusters

This step is required for both minor v1.5.x to v1.6.x and patch v1.6.x to v1.6.y upgrades.

After you upgrade the management clusters in your deployment, you can upgrade the workload clusters that are managed by those management clusters.

Follow the procedure in Upgrade Workload Clusters to upgrade the workload clusters that are running your workloads.
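
For example, a minimal sketch of upgrading a single workload cluster, where my-workload-cluster is a placeholder name:

    # Upgrade to the default Kubernetes version for the new release.
    tanzu cluster upgrade my-workload-cluster

    # Or pin a specific Tanzu Kubernetes release; list the available releases first.
    tanzu kubernetes-release get
    tanzu cluster upgrade my-workload-cluster --tkr TKR-NAME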

Sync Package Versions Older Than n-2

Some packages that are installed by default in the management cluster, for example cert-manager, can also be installed as CLI-managed packages in workload clusters and the shared services cluster. When the management cluster is upgraded to the latest Tanzu Kubernetes Grid release, its default packages are automatically updated.

You can run different versions of CLI-managed packages in different workload clusters. In a workload cluster, you can run either the latest supported version of a CLI-managed package or the versions of the package that shipped with your last two previously installed versions of Tanzu Kubernetes Grid. For example, if the latest packaged version of cert-manager is v1.1.0 and your previous two Tanzu Kubernetes Grid installations shipped cert-manager v1.1.0 and v0.16.1, then you can run cert-manager versions v1.1.0 and v0.16.1 in workload clusters.

For any workload clusters that are running package versions from Tanzu Kubernetes Grid installations more than n-2 versions older than the package versions in the management cluster, you must update the package repository (see Update a Package Repository) and then upgrade the packages in those workload clusters (see Update a Package). If you do not upgrade the package version, you will not be able to update the package configuration, because the package repository might no longer include versions of the package that are older than n-2.
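
A minimal sketch of this flow in a workload cluster's context, assuming a package repository named tanzu-standard in the tanzu-package-repo-global namespace and a cert-manager package installed as my-cert-manager in the my-packages namespace; the repository URL and package version are placeholders for the values that ship with your target Tanzu Kubernetes Grid release:

    # Update the package repository to the version that ships with the new release.
    tanzu package repository update tanzu-standard --url PACKAGE-REPOSITORY-URL --namespace tanzu-package-repo-global

    # Upgrade the installed package to a version available in the updated repository.
    tanzu package installed update my-cert-manager --package-name cert-manager.tanzu.vmware.com --version AVAILABLE-PACKAGE-VERSION --namespace my-packages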

Install NSX Advanced Load Balancer After Tanzu Kubernetes Grid Upgrade (vSphere)

If you are using NSX ALB on vSphere, follow this procedure to set it up after upgrading your Tanzu Kubernetes Grid installation to v1.6.

  1. If NSX ALB was not enabled in your Tanzu Kubernetes Grid v1.5 installation, perform the following sub-steps (if NSX ALB was already enabled before you upgraded to v1.6, skip to step 2):

    1. Configure the Avi Controller. For more information, see Avi Controller: Clouds and subsequent sections.
    2. On your local system, open the management cluster configuration file, for example ~/.config/tanzu/tkg/clusterconfigs/MANAGEMENT_CLUSTER_CONFIG.yaml.

      Where MANAGEMENT_CLUSTER_CONFIG.yaml is the filename.

    3. Add the Avi details in the following AVI fields, and save the file:

      AVI_CA_DATA: <AVI CERTIFICATE DATA>
      AVI_CLOUD_NAME: <YOUR CLOUD ENVIRONMENT>
      AVI_CONTROLLER: <CONTROLLER IP ADDRESS>
      AVI_CONTROLLER_VERSION: <CONTROLLER VERSION>
      AVI_DATA_NETWORK: <DATA NETWORK>
      AVI_DATA_NETWORK_CIDR: <CIDR IP ADDRESS>
      AVI_ENABLE: "true"
      AVI_LABELS: ""
      AVI_USERNAME: <YOUR USERNAME>
      AVI_PASSWORD: <PASSWORD>
      AVI_SERVICE_ENGINE_GROUP: <SE GROUP>
      

      Example for the clusterconfig.yaml file:

      AVI_CA_DATA: |-
          -----BEGIN CERTIFICATE-----
          MIICxzCCAa+gAwIBAgIUT+SWtJ1JK4...
          -----END CERTIFICATE-----
      AVI_CLOUD_NAME: Default-Cloud
      AVI_CONTROLLER: 10.83.20.229
      AVI_CONTROLLER_VERSION: 21.1.4
      AVI_DATA_NETWORK: VM Network
      AVI_DATA_NETWORK_CIDR: 10.83.0.0/19
      AVI_ENABLE: "true"
      AVI_LABELS: ""
      AVI_USERNAME: admin
      AVI_PASSWORD: <encoded:QWRtaW4hMjM=>
      AVI_SERVICE_ENGINE_GROUP: Default-Group
      
  2. Export the required environment variables:

    export _TKG_CLUSTER_FORCE_ROLE="management"
    export FILTER_BY_ADDON_TYPE="networking/ako-operator"
    export REMOVE_CRS_FOR_ADDON_TYPE="networking/ako-operator"
    export NAMESPACE="tkg-system"
    
  3. Generate a manifest containing the AKO operator changes by running tanzu cluster create ... --dry-run with the management cluster configuration file:

    tanzu cluster create ${MANAGEMENT_CLUSTER_NAME} --dry-run -f ~/.config/tanzu/tkg/clusterconfigs/MANAGEMENT_CLUSTER_CONFIG.yaml --vsphere-controlplane-endpoint 1.1.1.1 > ako-operator-addon-manifest.yaml
    
  4. Connect kubectl to the management cluster.

    kubectl config use-context CLUSTER-NAME-admin@CLUSTER-NAME
    
  5. Apply the changes in the management cluster’s context:

    kubectl apply -f ako-operator-addon-manifest.yaml
    
  6. Unset the environment variables:

    unset _TKG_CLUSTER_FORCE_ROLE
    unset FILTER_BY_ADDON_TYPE
    unset REMOVE_CRS_FOR_ADDON_TYPE
    unset NAMESPACE
    

    Note: See the Tanzu Kubernetes Grid v1.6 Release Notes for which Avi Controller versions are supported in this release. To upgrade the Avi Controller, see Flexible Upgrades for Avi Vantage.
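
To confirm that the AKO operator add-on was applied, you can check for its deployment in the management cluster; the tkg-system-networking namespace is the default location used by Tanzu Kubernetes Grid and is an assumption for your environment:

    kubectl get deployments -n tkg-system-networking
    # expect an ako-operator controller deployment with ready replicas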

Upgrade Crash Recovery and Diagnostics

This step is required for both minor v1.5.x to v1.6.x and patch v1.6.x to v1.6.y upgrades.

For information about how to upgrade Crash Recovery and Diagnostics, see Install or Upgrade the Crash Recovery and Diagnostics Binary.

What to Do Next

Examine your upgraded management clusters or register them in Tanzu Mission Control. See Examine the Management Cluster Deployment and Register Your Management Cluster with Tanzu Mission Control.
