VMware Tanzu Kubernetes Grid | 15 OCT 2020 | Build 17008919

Check for additions and updates to these release notes.

About VMware Tanzu Kubernetes Grid

VMware Tanzu Kubernetes Grid provides enterprise organizations with a consistent, upstream-compatible, regional Kubernetes substrate across SDDC, public cloud, and edge environments that is ready for end-user workloads and ecosystem integrations. Tanzu Kubernetes Grid builds on trusted upstream and community projects and delivers an engineered and supported Kubernetes platform for end users and partners.

Key features include:

  • The Tanzu Kubernetes Grid installer interface, a graphical installer that walks you through the process of deploying management clusters to vSphere, Amazon EC2, and Microsoft Azure.
  • The Tanzu Kubernetes Grid CLI, providing simple commands that allow you to deploy CNCF conformant Kubernetes clusters to vSphere, Amazon EC2, and Microsoft Azure.
  • Binaries for Kubernetes and all of the components that you need in order to easily stand up an enterprise-class Kubernetes development environment. All binaries are tested and signed by VMware.
  • Extensions for your Tanzu Kubernetes Grid instance that provide authentication and authorization, logging, networking, monitoring, Harbor registry, and ingress control.
  • VMware support for your Tanzu Kubernetes Grid deployments.

New Features in Tanzu Kubernetes Grid 1.2

Behavior Changes Between Tanzu Kubernetes Grid 1.1.3 and 1.2

Tanzu Kubernetes Grid v1.2 introduces the following new behavior compared with v1.1.3:

  • Due to changes in implementation between Tanzu Kubernetes Grid v1.1.x and v1.2, after you upgrade the Tanzu Kubernetes Grid CLI, you must also upgrade all management clusters that you connect to with the upgraded CLI. You cannot use v1.2 of the CLI to deploy Tanzu Kubernetes clusters from management clusters that are still on v1.1.x.
  • On vSphere, previous versions of Tanzu Kubernetes Grid required you to deploy an API server load balancer OVA template, named photon-3-haproxy-v1.x.x-vmware.1.ova. New Tanzu Kubernetes clusters that you deploy with v1.2 do not use this template. However, any management clusters that you deployed with a previous version of Tanzu Kubernetes Grid and then upgrade to version 1.2 will continue to use the API server load balancer. Do not delete the template from the vSphere inventory.
  • On Amazon EC2, Tanzu Kubernetes Grid versions before v1.2 used the identity and access management (IAM) resources of the CloudFormation stack that you created by running the clusterawsadm command line utility. This CloudFormation stack must be present in your AWS account when you upgrade your existing clusters to Tanzu Kubernetes Grid v1.2. Do not delete the stack after upgrading the clusters.
  • For Tanzu Kubernetes Grid v1.2 and later, you create the required IAM resources by enabling the Automate creation of AWS CloudFormation Stack checkbox in the installer interface or by running the tkg config permissions aws command from the CLI. This replaces the clusterawsadm command line utility; see the sketch after this list. For more information, see Deploy Management Clusters to Amazon EC2 with the Installer Interface or Deploy Management Clusters to Amazon EC2 with the CLI.
  • In previous releases, management clusters were not labeled with a role. In Tanzu Kubernetes Grid 1.2, the labels management and tanzu-services are applied to clusters when you create them, so that you can easily distinguish between management clusters and the clusters that are created when you deploy the Tanzu Kubernetes Grid extensions. When you upgrade management clusters from a previous version, you must apply the new role labels to the existing clusters manually.
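
For example, to create the IAM resources from the CLI before deploying a management cluster to Amazon EC2, you can run the new command directly. This is a minimal sketch that assumes your AWS credentials and region are already set as environment variables; the placeholder values are hypothetical:

    # Hypothetical credentials; substitute your own values.
    export AWS_ACCESS_KEY_ID=AKIA...
    export AWS_SECRET_ACCESS_KEY=...
    export AWS_REGION=us-west-2

    # Creates the IAM resources that the clusterawsadm utility previously created.
    tkg config permissions aws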

Supported Kubernetes Versions in Tanzu Kubernetes Grid 1.2

Tanzu Kubernetes Grid 1.2 provides support for Kubernetes 1.19.1, 1.18.8, and 1.17.11. It also supports the Kubernetes versions that were provided with previous versions of Tanzu Kubernetes Grid, as listed in the following table.

Tanzu Kubernetes Grid Version    Provided Kubernetes Versions    Supported in v1.2?
1.2                              1.19.1                          YES
                                 1.18.8                          YES
                                 1.17.11                         YES
1.1.3                            1.18.6                          YES
                                 1.17.9                          YES
1.1.2                            1.18.3                          YES
                                 1.17.6                          YES
1.1.0                            1.18.2                          YES
1.0.0                            1.17.3                          YES

Supported Upgrade Paths

You can upgrade Tanzu Kubernetes Grid v1.0.0 and v1.1.x to version 1.2.
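
For example, after you install the v1.2 CLI, you upgrade the management cluster first and then the Tanzu Kubernetes clusters that it manages. A minimal sketch, assuming a management cluster named mgmt-cluster and a workload cluster named my-cluster (both hypothetical names):

    # Upgrade the management cluster first, then its workload clusters.
    tkg upgrade management-cluster mgmt-cluster
    tkg upgrade cluster my-cluster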

Supported AWS Regions

You can use Tanzu Kubernetes Grid 1.2 to deploy clusters to the following AWS regions:

  • ap-northeast-1
  • ap-northeast-2
  • ap-south-1
  • ap-southeast-1
  • ap-southeast-2
  • eu-central-1
  • eu-west-1
  • eu-west-2
  • eu-west-3
  • sa-east-1
  • us-east-1
  • us-east-2
  • us-gov-east-1
  • us-gov-west-1
  • us-west-2
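
You select the region when you deploy the management cluster, either in the installer interface or in the cluster configuration file. A minimal sketch of the relevant config.yaml entry, assuming us-west-2:

    AWS_REGION: us-west-2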

User Documentation

The Tanzu Kubernetes Grid 1.2 documentation applies to all of the 1.2.x releases.

Component Versions

The Tanzu Kubernetes Grid 1.2 release includes the following software component versions:

  • antrea v0.9.3
  • calico v3.11.3
  • containerd v1.3.4
  • contour v1.8.1
  • dex v2.22.0
  • envoy v1.15.0
  • etcd v3.4.13
  • fluent-bit v1.5.3
  • gangway v7.0.3
  • grafana v7.0.3
  • harbor v2.0.2
  • kube-vip v0.1.8
  • kubernetes v1.19.1
  • prometheus v2.18.1
  • sonobuoy v0.19.0
  • velero v1.4.2

For a complete list of software component versions that ship with Tanzu Kubernetes Grid 1.2, see the file ~/.tkg/bom/bom-1.2.0+vmware.1.yaml after you install the Tanzu Kubernetes Grid CLI and run any tkg command.
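
For example, after the CLI has created the ~/.tkg directory, you can inspect the bill of materials locally. A sketch, assuming a Unix-like shell:

    tkg version                                # running any tkg command populates ~/.tkg
    less ~/.tkg/bom/bom-1.2.0+vmware.1.yaml    # lists all component versions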

Resolved Issues

  • Tanzu Mission Control reports Tanzu Kubernetes Grid clusters as unhealthy when they are actually healthy

    If you use Tanzu Kubernetes Grid to deploy clusters with Kubernetes v1.17.9 or v1.18.6, and you register these clusters with Tanzu Mission Control, Tanzu Mission Control reports them as unhealthy even though they are healthy. This happens because these versions of Kubernetes introduced a change that affects the way that Tanzu Mission Control checks cluster health.

    This issue will be addressed in an update of Tanzu Mission Control.

Known Issues

The known issues are grouped as follows.

vSphere Issues
  • Cannot log back in to vSphere 7 Supervisor Cluster after connection expires

    When you use kubectl vsphere login to log in to a vSphere 7 Supervisor Cluster, the kubeconfig file that is generated expires after 10 hours. If you attempt to run Tanzu Kubernetes Grid CLI commands against the Supervisor Cluster after 10 hours have passed, you are no longer authorized to do so. If you use kubectl vsphere login to log in to the Supervisor Cluster again, obtain the new kubeconfig, and attempt to run tkg add management-cluster cluster_name to add the new kubeconfig to .tkg/config, the command fails with the following error:
    Error: : cannot save management cluster context to kubeconfig: management cluster 192.168.123.1 with context 192.168.123.1 already exists

    Workaround:

    1. Set an environment variable for kubeconfig: export KUBECONFIG=$HOME/.kube-tkg/config
    2. Run kubectl vsphere login.

    This updates the Tanzu Kubernetes Grid management cluster kubeconfig for vSphere 7 with Kubernetes without requiring you to update it by using the tkg add management-cluster command.
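
    As a sketch, the workaround amounts to the following two commands, assuming the Supervisor Cluster address from the error above and a hypothetical user name:

    export KUBECONFIG=$HOME/.kube-tkg/config
    kubectl vsphere login --server=192.168.123.1 --vsphere-username administrator@vsphere.local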

  • Upgrade fails if the location of the management cluster has changed

    If the location of the management cluster changed after initial deployment, for example because the cluster was renamed, upgrading the management cluster fails with an error similar to the following:

    "error"="failed to reconcile VM: unable to get resource pool for management-cluster"

    Workaround: Do not change the name of the vSphere cluster on which you deploy management clusters.

  • Management cluster deployment fails if the vCenter Server FQDN includes uppercase characters

    If you set the VSPHERE_SERVER parameter in the config.yaml file with a vCenter Server FQDN that includes upper-case letters, deployment of the management cluster fails with the error Credentials not found.

    Workaround: Use all lower-case letters when you specify a vCenter Server FQDN in the config.yaml file.
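
    For example, in config.yaml (a sketch with a hypothetical FQDN):

    VSPHERE_SERVER: vcenter01.example.com    # works; VCENTER01.Example.com would fail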

Upgrade Issues
  • List of clusters shows incorrect Kubernetes version after unsuccessful upgrade attempt

    If you attempt to upgrade a Tanzu Kubernetes cluster and the upgrade fails, and you subsequently run tkg get cluster to see the list of deployed clusters and their versions, the cluster for which the upgrade failed shows the upgraded version of Kubernetes, even though it is still running the previous version.

    Workaround: None
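
    Because the output of tkg get cluster can be misleading in this state, you can verify the version that the nodes are actually running with kubectl, assuming your kubectl context points at the affected cluster:

    kubectl get nodes    # the VERSION column shows the kubelet version on each node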

Kubernetes Issues
  • Persistent volumes cannot attach to a new node if previous node is deleted

    During an upgrade, if you have stateful workloads that use persistent volumes, the workloads that use those volumes can become stuck in the container creation or init state. This happens only when you use CSI providers. You might see errors similar to the following:

    • failed to find VirtualMachine for node:
    • Multi-Attach error for volume 
    • Unable to attach or mount volumes: unmounted volumes

    Workaround: Remove the finalizer from all the volume attachments that belong to the older nodes. For example:

    kubectl patch volumeattachments.storage.k8s.io csi-11a0b1c040fd7179707f982ea0cc3856bafe5ffef47217d7c059fd91fb1fa9d1 -p '{"metadata":{"finalizers":[]}}' --type=merge 

    NOTE: It might take some time for the control plane components to register the changes before they take effect.
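
    To identify the volume attachments that still reference the deleted nodes before you patch them, you can list them first. A sketch:

    kubectl get volumeattachments.storage.k8s.io    # the NODE column shows which node each attachment belongs to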

Deployment Issues
  • Worker nodes cannot join cluster if cluster name contains period (.)

    If you deploy a Tanzu Kubernetes cluster and specify a name that includes the period character (.), the cluster appears to be created but only the control plane nodes are visible. Worker nodes are unable to join the cluster, and their names are truncated at the period.

    Workaround: Do not include period characters in cluster names.
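
    For example, use a hyphen instead of a period (a sketch, assuming the default dev plan):

    tkg create cluster my-cluster-v1 --plan dev     # valid name
    # tkg create cluster my.cluster.v1 --plan dev   # worker nodes would fail to join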
