VMware Tanzu Kubernetes Grid | 25 MAR 2021 | Build 17776681

Check for additions and updates to these release notes.

About VMware Tanzu Kubernetes Grid

VMware Tanzu Kubernetes Grid provides enterprise organizations with a consistent, upstream-compatible, regional Kubernetes substrate across SDDC, public cloud, and edge environments that is ready for end-user workloads and ecosystem integrations. Tanzu Kubernetes Grid builds on trusted upstream and community projects and delivers an engineered and supported Kubernetes platform for end users and partners.

Key features include:

  • The Tanzu Kubernetes Grid installer interface, a graphical installer that walks you through the process of deploying management clusters to vSphere, Amazon EC2, and Microsoft Azure.
  • The Tanzu CLI, providing simple commands that allow you to deploy CNCF conformant Kubernetes clusters to vSphere, Amazon EC2, and Microsoft Azure.
  • Binaries for Kubernetes and all of the components that you need in order to easily stand up an enterprise-class Kubernetes development environment. All binaries are tested and signed by VMware.
  • Extensions for your Tanzu Kubernetes Grid instance that provide authentication and authorization, logging, networking, monitoring, Harbor registry, and ingress control.
  • VMware support for your Tanzu Kubernetes Grid deployments.

New Features in Tanzu Kubernetes Grid v1.3.0

IMPORTANT: To obtain the most recent security fixes, VMware recommends that you install or upgrade to Tanzu Kubernetes Grid v1.3.1 or later instead of v1.3.0. For more information, see the VMware Tanzu Kubernetes Grid 1.3.1 Release Notes.

  • New Kubernetes versions:
    • 1.20.4
    • 1.19.8
    • 1.18.16
    • 1.17.16
  • Support for Ubuntu node OS images on all infrastructures as the default, in addition to Photon OS on vSphere and Amazon Linux on Amazon EC2. In this release, Ubuntu defaults to version 20.04.
  • Inclusion of NSX Advanced Load Balancer Essentials, which provides Tanzu Kubernetes Grid users on vSphere with a Layer 4 load balancing solution for Kubernetes workloads.
  • A new CLI, the Tanzu CLI, which replaces the Tanzu Kubernetes Grid CLI.
  • New CLI commands to:
    • Generate standalone kubeconfig files, with or without embedded credentials, for sharing clusters with other users.
    • Log in to the management cluster or add a new one from a local kubeconfig file, vSphere 7 Supervisor cluster, or server endpoint.
    • (vSphere) Update vSphere credentials used by management and workload clusters. 
  • Cluster configuration parameters and upgrade command flags specify OS name and version, to support multiple OS images for new or upgraded cluster node VMs.
  • Dynamic discovery and downloading of newly available Kubernetes versions for workload clusters. For more information, see Deploy Tanzu Kubernetes Clusters with Different Kubernetes Versions.
  • Image Builder improvements.
  • Networking features. You can now configure:
    • HTTP and HTTPS proxies for management and workload clusters. For more information, see Configure the Kubernetes Network and Proxies in Deploy Management Clusters with the Installer Interface and Proxy Configuration in Tanzu CLI Configuration File Variable Reference.
    • Service discovery with the external-dns extension.
  • Automatic installation and lifecycle management of the Antrea, Pinniped, vSphere CPI, vSphere CSI, and Metrics Server add-ons.
  • OIDC and LDAP identity management with Pinniped and Dex.
  • Support for AWS credential profiles and AWS default credential chain in the installer interface.
  • Disaster recovery of workload clusters with Velero.
  • Observability features include:
    • Audit logging for the Kubernetes API server and node VMs.
    • Syslog output plugin for the Fluent Bit extension. This enables you to forward syslog messages to vRealize Log Insight.
    • Metrics Server pre-installed on management and workload clusters. This enables you to use the kubectl top nodes and kubectl top pods commands.
  • kapp-controller pre-installed on management and workload clusters.
  • Support for registering management clusters with Tanzu Mission Control in the installer interface and CLI. At the time of publication, you can only register Tanzu Kubernetes Grid management clusters that are deployed on specific infrastructure platforms. For more information, see Register Your Management Cluster with Tanzu Mission Control.
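
The new kubeconfig capability listed above can be sketched with the Tanzu CLI. This is an illustrative example, not an exhaustive reference; the cluster name and file path are placeholders:

```sh
# Sketch (names illustrative): generate standalone kubeconfig files.

# Admin kubeconfig with embedded credentials for cluster "my-cluster":
tanzu cluster kubeconfig get my-cluster --admin

# Kubeconfig without embedded credentials, exported to a file for sharing:
tanzu cluster kubeconfig get my-cluster --export-file /tmp/my-cluster-kubeconfig
```

The exported file can be distributed to other users, who then authenticate through the configured identity provider rather than with embedded credentials.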


Supported Kubernetes Versions in Tanzu Kubernetes Grid v1.3.0

Each release of Tanzu Kubernetes Grid adds support for new Kubernetes versions. Tanzu Kubernetes Grid v1.3.0 also supports Kubernetes versions that were provided with previous Tanzu Kubernetes Grid releases, as shown in the following table.

Tanzu Kubernetes Grid Version   Provided Kubernetes Version   Supported in v1.3.0?
1.3.0                           1.20.4                        YES
1.2.1                           1.19.3                        YES
1.2                             1.19.1                        YES
1.1.3                           1.18.6                        YES
1.1.2                           1.18.3                        YES
1.1.0                           1.18.2                        YES
1.0.0                           1.17.3                        NO

* This is the last version update for the 1.17.x version line. The 1.17.x version line will no longer be updated in Tanzu Kubernetes Grid.

Product Snapshot for Tanzu Kubernetes Grid v1.3.0

Tanzu Kubernetes Grid v1.3.0 supports the following infrastructure platforms and operating systems (OSs), as well as cluster creation and management, networking, storage, authentication, backup and migration, and observability components. The component versions listed in parentheses are included in Tanzu Kubernetes Grid v1.3.0. For more information, see Component Versions.

Infrastructure platform
  • vSphere: vSphere 6.7U3, vSphere 7, VMware Cloud on AWS****, Azure VMware Solution
  • Amazon EC2: Native AWS*
  • Azure: Native Azure

Cluster creation and management
  • vSphere: Core Cluster API (v0.3.14), Cluster API Provider vSphere (v0.7.6)
  • Amazon EC2: Core Cluster API (v0.3.14), Cluster API Provider AWS (v0.6.4)
  • Azure: Core Cluster API (v0.3.14), Cluster API Provider Azure (v0.4.8)

Kubernetes node OS distributed with TKG
  • vSphere: Photon OS 3, Ubuntu 20.04
  • Amazon EC2: Amazon Linux 2, Ubuntu 20.04
  • Azure: Ubuntu 18.04, Ubuntu 20.04

Bring your own image
  • vSphere: Photon OS 3, Red Hat Enterprise Linux 7, Ubuntu 18.04, Ubuntu 20.04
  • Amazon EC2: Amazon Linux 2, Ubuntu 18.04, Ubuntu 20.04
  • Azure: Ubuntu 18.04, Ubuntu 20.04

Container runtime (all platforms): Containerd (v1.4.3)

Container networking (all platforms): Antrea (v0.11.3), Calico (v3.11.3)

Container registry (all platforms): Harbor (v2.1.3)

Ingress
  • vSphere: NSX Advanced Load Balancer Essentials (v20.1.3)**, Contour (v1.12.0)
  • Amazon EC2: Contour (v1.12.0)
  • Azure: Contour (v1.12.0)

Storage
  • vSphere: vSphere Container Storage Interface (v2.1.0***) and vSphere Cloud Native Storage
  • Amazon EC2: In-tree cloud providers only
  • Azure: In-tree cloud providers only

Authentication (all platforms): LDAP or OIDC via Pinniped (v0.4.1) and Dex

Observability (all platforms): Fluent Bit (v1.6.9), Prometheus (v2.18.1), Grafana (v7.3.5)

Backup and migration (all platforms): Velero (v1.5.3)


For a full list of Kubernetes versions that ship with Tanzu Kubernetes Grid v1.3.0, see Supported Kubernetes Versions in Tanzu Kubernetes Grid v1.3.0 above.

Component Versions

The Tanzu Kubernetes Grid v1.3.0 release includes the following software component versions:

  • ako-operator: v1.3.0+vmware.1
  • alertmanager: v0.20.0+vmware.1
  • antrea: v0.11.3+vmware.1
  • cadvisor: v0.36.0+vmware.1
  • calico_all: v3.11.3+vmware.1
  • cloud-provider-azure: v0.5.1+vmware.2
  • cloud_provider_vsphere: v1.18.1+vmware.1
  • cluster-api-provider-azure: v0.4.8-47-gfbb2d55b
  • cluster_api: v0.3.14+vmware.2
  • cluster_api_aws: v0.6.4+vmware.1
  • cluster_api_vsphere: v0.7.6+vmware.1
  • configmap-reload: v0.3.0+vmware.1
  • contour: v1.12.0+vmware.1
  • crash-diagnostics: v0.3.2+vmware.2
  • csi_attacher: v3.0.0+vmware.1
  • csi_livenessprobe: v2.1.0+vmware.1
  • csi_node_driver_registrar: v2.0.1+vmware.1
  • csi_provisioner: v2.0.0+vmware.1
  • dex: v2.27.0+vmware.1
  • envoy: v1.17.0+vmware.1
  • external-dns: v0.7.4+vmware.1
  • fluent-bit: v1.6.9+vmware.1
  • gangway: v3.2.0+vmware.2
  • grafana: v7.3.5+vmware.1
  • harbor: v2.1.3+vmware.1
  • imgpkg: v0.2.0+vmware.1
  • jetstack_cert-manager: v0.16.1+vmware.1
  • k8s-sidecar: v0.1.144+vmware.1
  • k14s_kapp: v0.33.0+vmware.1
  • k14s_ytt: v0.30.0+vmware.1
  • kapp-controller: v0.16.0+vmware.1
  • kbld: v0.24.0+vmware.1
  • kube-state-metrics: v1.9.5+vmware.1
  • kube-vip: v0.3.2+vmware.1
  • kube_rbac_proxy: v0.4.1+vmware.2
  • kubernetes-csi_external-resizer: v1.0.0+vmware.1
  • kubernetes-sigs_kind: v1.20.4+vmware.1
  • kubernetes_autoscaler:
    v1.20.0+vmware.1, v1.19.1+vmware.1, v1.18.3+vmware.1, v1.17.4+vmware.1
  • load-balancer-and-ingress-service: v1.3.2+vmware.1
  • metrics-server: v0.4.0+vmware.1
  • pinniped: v0.4.1+vmware.1
  • prometheus: v2.18.1+vmware.1
  • prometheus_node_exporter: v0.18.1+vmware.1
  • pushgateway: v1.2.0+vmware.1
  • sonobuoy: v0.20.0+vmware.1
  • tanzu_core: v1.3.0
  • tkg-bom: v1.3.0
  • tkg_extensions: v1.3.0+vmware.1
  • tkg_telemetry: v1.3.0+vmware.1
  • velero: v1.5.3+vmware.1
  • velero-plugin-for-aws: v1.1.0+vmware.1
  • velero-plugin-for-microsoft-azure: v1.1.0+vmware.1
  • velero-plugin-for-vsphere: v1.1.0+vmware.1
  • vmware-private_tanzu-cli-tkg-plugins: v1.3.0
  • vsphere_csi_driver: v2.1.0+vmware.1

For a complete list of software component versions that ship with Tanzu Kubernetes Grid v1.3.0, see ~/.tanzu/tkg/bom/bom-v1.3.0.yaml and ~/.tanzu/tkg/bom/tkr-bom-v1.20.4+vmware.1-tkg.1.yaml.

Supported AWS Regions

You can use Tanzu Kubernetes Grid v1.3.0 to deploy clusters to the following AWS regions:

  • ap-northeast-1
  • ap-northeast-2
  • ap-south-1
  • ap-southeast-1
  • ap-southeast-2
  • eu-central-1
  • eu-west-1
  • eu-west-2
  • eu-west-3
  • sa-east-1
  • us-east-1
  • us-east-2
  • us-gov-east-1
  • us-gov-west-1
  • us-west-2

Supported Upgrade Paths

You can only upgrade to Tanzu Kubernetes Grid v1.3.0 from v1.2.x. If you want to upgrade to Tanzu Kubernetes Grid v1.3.0 from a version earlier than v1.2.x, you must first upgrade to v1.2.x and then upgrade to v1.3.0.

In addition, you cannot skip minor versions when upgrading Kubernetes versions. For example, you cannot upgrade a Tanzu Kubernetes cluster directly from v1.18.x to v1.20.x. You must upgrade a v1.18.x cluster to v1.19.x before upgrading the cluster to v1.20.x.
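For example, a two-step cluster upgrade from v1.18.x to v1.20.x looks like the following sketch. The cluster name is illustrative, and the Tanzu Kubernetes release (tkr) names are examples in the naming scheme this release uses:

```sh
# Sketch: upgrade a v1.18.x cluster to v1.19.x first, then to v1.20.x.
tanzu cluster upgrade my-cluster --tkr v1.19.8---vmware.1-tkg.1
tanzu cluster upgrade my-cluster --tkr v1.20.4---vmware.1-tkg.1
```

Run the second command only after the first upgrade completes successfully.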

Behavior Changes Between Tanzu Kubernetes Grid v1.2.1 and v1.3.0

Tanzu Kubernetes Grid v1.3.0 introduces the following new behavior compared with v1.2.1.

User Documentation

The Tanzu Kubernetes Grid 1.3 documentation applies to all of the 1.3.x releases. It includes information about the following subjects:

Resolved Issues

  • Cannot log back in to vSphere 7 Supervisor Cluster after connection expires

    When you use kubectl vsphere login to log in to a vSphere 7 Supervisor Cluster, the kubeconfig file that is generated expires after 10 hours. If you attempt to run Tanzu Kubernetes Grid CLI commands against the Supervisor Cluster after 10 hours have passed, you are no longer authorized to do so. If you use kubectl vsphere login to log in to the Supervisor Cluster again, get the new kubeconfig, and attempt to run tkg add management-cluster cluster_name to add the new kubeconfig to .tkg/config, the command fails with the following error:

    Error: : cannot save management cluster context to kubeconfig: management cluster with context already exists
  • Upgrade fails if the location of the management cluster has changed

    If the location of the management cluster changed after initial deployment, for example because the cluster was renamed, upgrading the management cluster fails with an error similar to the following:

    "error"="failed to reconcile VM: unable to get resource pool for management-cluster"
  • Pods using PersistentVolumeClaim do not start or remain in the CrashLoopBackOff status, and Grafana and Harbor extension deployments fail

    This problem occurs if you set the security context for pods in Tanzu Kubernetes clusters running on vSphere. For information about setting the security context for pods, see the Kubernetes 1.19 documentation. If you create a StatefulSet that uses PersistentVolumeClaim and you set securityContext.fsGroup to specify the owner of the mounted volume, the volume owner is actually set to root, rather than to the value that you specify in securityContext.fsGroup. Consequently, containers that are not running as root cannot write to the volume, and fail to start. The Grafana and Harbor extensions both use PersistentVolumeClaim and securityContext.fsGroup, and so will fail to deploy on this Tanzu Kubernetes cluster. You also cannot deploy any other applications that use PersistentVolumeClaim and securityContext.fsGroup on this cluster. 
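
    A minimal sketch of the pattern that triggered this issue, with all names illustrative: a StatefulSet that sets securityContext.fsGroup and mounts a volume from a volumeClaimTemplate.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo                  # illustrative name
spec:
  serviceName: demo
  replicas: 1
  selector:
    matchLabels: {app: demo}
  template:
    metadata:
      labels: {app: demo}
    spec:
      securityContext:
        runAsUser: 1000
        fsGroup: 2000         # on affected clusters, the volume was owned by root instead
      containers:
      - name: app
        image: nginx
        volumeMounts:
        - name: data
          mountPath: /data    # non-root container fails here if ownership is wrong
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```

    Grafana and Harbor use this same fsGroup-plus-PersistentVolumeClaim pattern, which is why their deployments failed on affected clusters.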

  • Cannot increase storage for Harbor shared service after deployment

    You can set the size of the persistent volume for the Harbor shared service in the persistence.persistentVolumeClaim.registry.size setting in the harbor-data-values.yaml file when you deploy Harbor. However, if your Harbor deployment runs out of storage, expansion of the persistent volume after deployment is not enabled by default.  

  • Management cluster deployment fails if vSphere user is specified in domain\user format

    When you deploy a management cluster, if you specify the vSphere user account in the format domain\user, for example, vsphere.local\administrator, the deployment fails with the error Cannot complete login due to an incorrect user name or password.

Known Issues

The known issues are grouped as follows.

vSphere Issues
  • Tanzu Kubernetes Grid 1.3.0 extensions do not function on Tanzu Kubernetes Grid Service clusters when attached to Tanzu Mission Control

    Tanzu Kubernetes Grid extensions (Contour, Fluent Bit, Prometheus, Grafana) that have been previously installed and are functioning correctly on a Tanzu Kubernetes cluster created by using Tanzu Kubernetes Grid Service stop working when you attach that guest cluster to Tanzu Mission Control (TMC).

    Workaround: See VMware KB 83322.

  • On vSphere 7, offline volume expansion for vSphere CSI storage used by workload clusters does not work.

    Cluster storage interface (CSI) lacks the csi-resizer pod needed to resize storage volumes.

    Workaround: Add a csi-resizer sidecar pod to the cluster's CSI processes, as documented in Enable Offline Volume Expansion for vSphere CSI (vSphere 7).

  • Thumbprints for expired vCenter Server certificates cannot be updated

    If your vCenter Server certificate expires, it is not possible to update the certificate thumbprint in cluster node VMs.

    Workaround: Currently under validation.

  • Management clusters that run Photon OS deploy workload clusters that run Ubuntu by default

    If you use a Photon OS OVA image when you deploy a management cluster to vSphere from the installer interface, the OS_NAME setting is not written into the configuration file. Consequently, if you use a copy of the management cluster configuration file to deploy workload clusters, the workload cluster OS defaults to Ubuntu, unless you explicitly set the OS_NAME variable to photon in the configuration file. If the Ubuntu image is not present in your vSphere inventory, deployment of workload clusters will fail.

    Workaround: To use Photon OS as the operating system for workload cluster nodes, always set the OS_NAME setting in the cluster configuration file:

    OS_NAME: photon

  • Management cluster creation fails or performance is slow with older NSX-T versions and Photon 3 or Ubuntu with Linux kernel 5.8 VMs

    Deploying a management cluster with the following infrastructure and configuration may fail or result in restricted traffic between pods:

    • vSphere with any of the following versions of NSX-T:

      • NSX-T v3.1.3 with Enhanced Datapath enabled

      • NSX-T v3.1.x lower than v3.1.3

      • NSX-T v3.0.x lower than v3.0.2 hot patch

      • NSX-T v2.x. This includes Azure VMware Solution (AVS) v2.0, which uses NSX-T v2.5

    • Base image: Photon 3 or Ubuntu with Linux kernel 5.8

    This combination exposes a checksum issue between older versions of NSX-T and the Antrea CNI.

    Workaround: Do one of the following:
    • Upgrade to NSX-T v3.0.2 Hot Patch, v3.1.3, or later, without Enhanced Datapath enabled.
    • Use an Ubuntu base image with Linux kernel 5.9 or later.
    • If the management cluster deploys successfully, run the following on all of its nodes:
      • ethtool -K eth0 tx-udp_tnl-segmentation off && ethtool -K eth0 tx-udp_tnl-csum-segmentation off
  • Cannot delete cluster if AKO agent pod is not running correctly

    If you use NSX Advanced Load Balancer, attempts to use tanzu cluster delete to delete a workload cluster fail if the AVI Kubernetes Operator (AKO) agent pod is in the CreateContainerConfigError status:

    kubectl get po -n avi-system
     NAME  READY STATUS                     RESTARTS AGE
     ako-0 0/1   CreateContainerConfigError 0        94s

    The deletion process waits indefinitely for the AKO agent to clean up its related items.

    Workaround:
    1. Edit the cluster configuration.
      kubectl edit cluster cluster-name
    2. Under finalizers, remove the AKO-related line 18, ako-operator.networking.tkg.tanzu.vmware.com:
       16   finalizers:
       17   - cluster.cluster.x-k8s.io
       18   - ako-operator.networking.tkg.tanzu.vmware.com

    The cluster will be successfully removed after a short time.
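
    If you prefer a non-interactive edit, the same finalizer can be removed with a JSON patch; the cluster name is illustrative, and the finalizer's position in the list must be confirmed first:

```sh
# Confirm the position of the AKO finalizer in the list (indexing is zero-based):
kubectl get cluster my-cluster -o jsonpath='{.metadata.finalizers}'

# Remove it; "1" assumes it is the second entry, as in the listing above:
kubectl patch cluster my-cluster --type=json \
  -p='[{"op": "remove", "path": "/metadata/finalizers/1"}]'
```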

AWS Issues
  • Telemetry for the Customer Experience Improvement Program (CEIP) does not run on AWS.

    Telemetry pods fail with an error like the following:

    "ERROR workspace/main.go:48 the individual labels are formed incorrectly. e.g. --labels=<key1>=<value1>,<key2>=<value2> with no ',' and '=' allowed in keys and values"

    This issue only affects management clusters created with the CEIP Participation enabled in the installer interface, or ENABLE_CEIP_PARTICIPATION absent in the configuration file or set to true (the default).

    Workaround: After deploying a management cluster to AWS, run:

    tanzu management-cluster ceip-participation set true --labels='entitlement-account-number="ACCOUNT-NUMBER",env_type="ENV-TYPE"'


    In this command:
    • ACCOUNT-NUMBER is your alphanumeric entitlement account number.
    • ENV-TYPE is production, development, or test.

Azure Issues
  • Tanzu Kubernetes Grid does not support Azure accelerated networking

    Tanzu Kubernetes Grid is currently incompatible with Azure accelerated networking, which is enabled by default on most VM instances that have 4 vCPUs or more.

    Note: The default node VM size that Tanzu Kubernetes Grid creates in Azure is Standard_D2s_v3, which does not use accelerated networking. This issue only affects larger node sizes.

    Workaround: Modify the azure-overlay.yaml file before you deploy management clusters or workload clusters.

    1. Open the azure-overlay.yaml file in a text editor.
    2. Replace the file contents with the following:
      #@ load("@ytt:overlay", "overlay")
      #@overlay/match expects="1+", by=overlay.subset({"kind": "AzureMachineTemplate"})
      ---
      spec:
        template:
          spec:
            #@overlay/match missing_ok=True
            acceleratedNetworking: false
    3. Save and close the file.

    When you deploy management clusters and workload clusters to Azure, accelerated networking is disabled.

Upgrade Issues
  • List of clusters shows incorrect Kubernetes version after unsuccessful upgrade attempt

    If you attempt to upgrade a Tanzu Kubernetes cluster and the upgrade fails, and if you subsequently run tanzu cluster list or tanzu cluster get to see the list of deployed clusters and their versions, the cluster for which the upgrade failed shows the upgraded version of Kubernetes.

    Workaround: None

Deployment and Extensions Issues
  • May 2021 Linux security patch causes kind clusters to fail during management cluster creation

    If you run Tanzu CLI commands on a machine with a recent Linux kernel, for example Linux 5.11 or 5.12 on Fedora, kind clusters do not operate. This happens because kube-proxy attempts to change the nf_conntrack_max sysctl, which was made read-only in the May 2021 Linux security patch, so kube-proxy enters a CrashLoopBackoff state. The security patch is being backported to all LTS kernels from 4.9 onwards, so as operating system updates ship, including for Docker Machine on Mac OS and Windows Subsystem for Linux, kind clusters will fail, resulting in management cluster deployment failure.

    Workaround: Update your version of kind to at least v0.11.0, and run tanzu management-cluster create with the --use-existing-bootstrap-cluster option. For more information, see Use an Existing Bootstrap Cluster to Deploy Management Clusters.

  • Worker nodes cannot join cluster if cluster name contains period (.)

    If you deploy a Tanzu Kubernetes cluster and specify a name that includes the period character (.), the cluster appears to be created but only the control plane nodes are visible. Worker nodes are unable to join the cluster, and their names are truncated to exclude any text included after the period.

    Workaround: Do not include period characters in cluster names.

  • Deleting shared services cluster without removing registry webhook causes cluster deletion to stop indefinitely

    If you created a shared services cluster and deployed Harbor as a shared service with the Tanzu Kubernetes Grid Connectivity API, and then you created one or more Tanzu Kubernetes clusters, attempting to delete both the shared services cluster and the Tanzu Kubernetes clusters results in machines being deleted but both clusters remaining indefinitely in the deleting status.

    Workaround: Delete the registry admission webhook so that the cluster deletion process can complete. 

  • The tanzu CLI truncates workload cluster names or does not perform cluster operations.

    Workload cluster names must be 42 characters or less.

    Workaround: Avoid workload cluster names longer than this-workload-cluster-name-is-far-too-long.

  • Velero installer pulls incorrect container version

    If you download the Velero binary from the Tanzu Kubernetes Grid 1.3.0 downloads page and run velero install, Velero pulls the main tag of the container image rather than the correct version.

    Workaround: When installing Velero, provide the --image parameter to pull the correct image, projects.registry.vmware.com/tkg/velero/velero:v1.5.3_vmware.1.

    For example, omitting the other installation options, run the following command to install Velero:

    velero install --image projects.registry.vmware.com/tkg/velero/velero:v1.5.3_vmware.1
  • Option to skip TLS verification for private registries is ignored

    If you are deploying Tanzu Kubernetes Grid in an Internet-restricted environment and you set the TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY=true variable to skip TLS verification, this variable is ignored and the deployment of management clusters fails. If you examine one of the running apps in the cluster, for example by running kubectl describe app antrea, you see the following error:

    Error: Syncing directory '0': Syncing directory '.' with image contents: Imgpkg: exit status 1 
    (stderr: Error: Collecting images: Working with 
    Get x509: certificate signed by unknown authority

    Workaround: You must provide a CA certificate in base64-encoded format in the TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE variable.
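
    One common way to produce the base64-encoded value on Linux is shown below; the certificate path and contents are placeholders for illustration:

```shell
# Encode a registry CA certificate as a single base64 line (GNU coreutils base64).
# /tmp/ca.crt stands in for your registry's CA certificate PEM file.
printf 'example-ca-data' > /tmp/ca.crt   # placeholder content for illustration
TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE="$(base64 -w 0 /tmp/ca.crt)"
echo "$TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE"   # → ZXhhbXBsZS1jYS1kYXRh
```

    Set the resulting value in the cluster configuration file before deploying the management cluster.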

  • Cannot use tanzu login selector in Git Bash on Windows

    If you use Git Bash to run the tanzu login command on Windows systems, you see Error: Incorrect function and you cannot use the arrow keys to select a management cluster.

    Workaround: Run the following command in Git Bash before you run any Tanzu CLI commands:

    alias tanzu='winpty -Xallow-non-tty tanzu'