VMware Tanzu Kubernetes Grid | 10 DEC 2020 | Build 17274478 

Check for additions and updates to these release notes.

About VMware Tanzu Kubernetes Grid

VMware Tanzu Kubernetes Grid provides enterprise organizations with a consistent, upstream-compatible, regional Kubernetes substrate across SDDC, public cloud, and edge environments that is ready for end-user workloads and ecosystem integrations. Tanzu Kubernetes Grid builds on trusted upstream and community projects and delivers an engineered and supported Kubernetes platform for end users and partners.

Key features include:

  • The Tanzu Kubernetes Grid installer interface, a graphical installer that walks you through the process of deploying management clusters to vSphere, Amazon EC2, and Microsoft Azure.
  • The Tanzu Kubernetes Grid CLI, providing simple commands that allow you to deploy CNCF conformant Kubernetes clusters to vSphere, Amazon EC2, and Microsoft Azure.
  • Binaries for Kubernetes and all of the components that you need to stand up an enterprise-class Kubernetes development environment. All binaries are tested and signed by VMware.
  • Extensions for your Tanzu Kubernetes Grid instance that provide authentication and authorization, logging, networking, monitoring, Harbor registry, and ingress control.
  • VMware support for your Tanzu Kubernetes Grid deployments.

New Features in Tanzu Kubernetes Grid 1.2.1

  • New Kubernetes versions:
    • 1.19.3
    • 1.18.10
    • 1.17.13
  • Enables specifying custom VM images on Azure in the cluster configuration file
  • Adds Cluster Autoscaler
  • Renames the --vsphere-controlplane-endpoint-ip option of the tkg init and tkg create cluster commands to --vsphere-controlplane-endpoint
  • Adds execute permissions to all files in the Tanzu Kubernetes Grid CLI for Linux and Tanzu Kubernetes Grid CLI for Mac downloads
  • The macOS Velero binary is now signed by VMware
  • Adds support for self-signed certificates for external registries for deployments in Internet-restricted environments
  • Upgrades containerd from v1.3.x to v1.4.1

Behavior Changes Between Tanzu Kubernetes Grid 1.2.0 and 1.2.1

Tanzu Kubernetes Grid v1.2.1 introduces the following new behavior compared with v1.2.0:

  • You can enable Cluster Autoscaler for Tanzu Kubernetes clusters that you create with v1.2.1 of the Tanzu Kubernetes Grid CLI. This includes all supported Kubernetes versions in Tanzu Kubernetes Grid v1.2.1. You cannot enable Cluster Autoscaler for existing clusters that you upgrade to Tanzu Kubernetes Grid v1.2.1.
  • Tanzu Kubernetes Grid v1.2.1 renames the --vsphere-controlplane-endpoint-ip option of the tkg init and tkg create cluster commands to --vsphere-controlplane-endpoint. When using the renamed option, you can specify either an IP address or a fully qualified domain name. Previously, these commands accepted only IP addresses for the control plane endpoint. In v1.2.1, --vsphere-controlplane-endpoint-ip is an alias of --vsphere-controlplane-endpoint.
  • All files in the Tanzu Kubernetes Grid CLI for Linux and Tanzu Kubernetes Grid CLI for Mac downloads are now executable by default. Previously, you had to add execute permissions manually by running chmod +x on each file.
  • You can set the TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE variable so that you can use self-signed certificates with external registries in Internet-restricted environments. A configuration sketch covering this and the options above follows this list.
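
The following sketch shows how these options can come together in a cluster configuration and a create command. Only TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE and --vsphere-controlplane-endpoint are named in these release notes; the autoscaler variable names, cluster name, and FQDN below are illustrative assumptions, so check the Tanzu Kubernetes Grid 1.2.1 documentation for the exact autoscaler keys.

    # Cluster configuration excerpt (for example, ~/.tkg/config.yaml).
    # ENABLE_AUTOSCALER and the AUTOSCALER_*_SIZE_0 keys are assumed names.
    ENABLE_AUTOSCALER: "true"
    AUTOSCALER_MIN_SIZE_0: "1"
    AUTOSCALER_MAX_SIZE_0: "5"
    # Base64-encoded CA certificate of a self-signed external registry.
    TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: "LS0tLS1CRUdJTiBDRVJU..."

With the renamed option, the control plane endpoint can now be an FQDN as well as an IP address, for example:

    tkg create cluster my-cluster --plan dev --vsphere-controlplane-endpoint tkg-cp.example.com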

For the major differences in behavior between 1.1.x and 1.2.x, see the VMware Tanzu Kubernetes Grid 1.2 Release Notes.

Supported Kubernetes Versions in Tanzu Kubernetes Grid 1.2.1

Each release of Tanzu Kubernetes Grid adds support for new Kubernetes versions. Tanzu Kubernetes Grid v1.2.1 also supports the Kubernetes versions that shipped with previous releases, as shown in the following table.

Tanzu Kubernetes Grid Version   Provided Kubernetes Versions   Supported in v1.2.1?
1.2.1                           1.19.3, 1.18.10, 1.17.13       YES
1.2                             1.19.1, 1.18.8, 1.17.11        YES
1.1.3                           1.18.6, 1.17.9                 YES
1.1.2                           1.18.3, 1.17.6                 YES
1.1.0                           1.18.2                         YES
1.0.0                           1.17.3                         YES

Supported Upgrade Paths

You can upgrade Tanzu Kubernetes Grid v1.0.0, v1.1.x, and v1.2.0 to v1.2.1.
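
For example, assuming a management cluster named my-mgmt-cluster and a workload cluster named my-cluster (both names illustrative), the upgrade takes the following form. Upgrade the management cluster first, then the Tanzu Kubernetes clusters that it manages.

    tkg upgrade management-cluster my-mgmt-cluster
    tkg upgrade cluster my-cluster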

Supported AWS Regions

You can use Tanzu Kubernetes Grid 1.2.1 to deploy clusters to the following AWS regions:

  • ap-northeast-1
  • ap-northeast-2
  • ap-south-1
  • ap-southeast-1
  • ap-southeast-2
  • eu-central-1
  • eu-west-1
  • eu-west-2
  • eu-west-3
  • sa-east-1
  • us-east-1
  • us-east-2
  • us-gov-east-1
  • us-gov-west-1
  • us-west-2
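
As a minimal sketch, to deploy to one of these regions you set the region in the cluster configuration before you run tkg init. AWS_REGION is assumed here to be the relevant configuration variable; verify the exact key in the Tanzu Kubernetes Grid AWS documentation.

    # Cluster configuration excerpt; AWS_REGION is an assumed variable name.
    AWS_REGION: us-east-1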

User Documentation

The Tanzu Kubernetes Grid 1.2 documentation applies to all of the 1.2.x releases.

Component Versions

The Tanzu Kubernetes Grid 1.2.1 release includes the following software component versions:

  • alertmanager v0.20.0+vmware.1
  • antrea v0.9.3+vmware.1
  • cadvisor v0.36.0+vmware.1
  • calico_all v3.11.3+vmware.1
  • cloud-provider-azure v0.5.1+vmware.2
  • cloud_provider_vsphere v1.2.1+vmware.1
  • cluster-api-provider-azure v0.4.8-47-gfbb2d55b+vmware.1
  • cluster_api v0.3.11-13-ga74685ee9+vmware.1
  • cluster_api_aws v0.6.3+vmware.1
  • cluster_api_vsphere v0.7.1+vmware.1
  • cni_plugins v0.8.7+vmware.3
  • configmap-reload v0.3.0+vmware.1
  • containerd v1.4.1+vmware.1
  • contour v1.8.1+vmware.1
  • coredns v1.7.0+vmware.5
  • crash-diagnostics v0.3.2+vmware.1
  • cri_tools v1.18.0+vmware.3
  • csi_attacher v2.0.0+vmware.2
  • csi_livenessprobe v1.1.0+vmware.8
  • csi_node_driver_registrar v1.2.0+vmware.2
  • csi_provisioner v2.0.0+vmware.1
  • dex v2.22.0+vmware.2
  • envoy v1.15.0+vmware.1
  • etcd v3.4.13+vmware.4
  • fluent-bit v1.5.3+vmware.1
  • gangway v3.2.0+vmware.2
  • grafana v7.0.3+vmware.1
  • harbor v2.0.2+vmware.1
  • jetstack_cert-manager v0.16.1+vmware.1
  • k8s-sidecar v0.1.144+vmware.1
  • kapp-controller v0.9.0+vmware.1
  • kokoni v0.2.0+vmware.3
  • kube-state-metrics v1.9.5+vmware.1
  • kube-vip v0.2.0+vmware.1
  • kube_rbac_proxy v0.4.1+vmware.2
  • kubernetes v1.19.3+vmware.1
  • kubernetes-sigs_kind v0.8.1-1.19.3+vmware.1
  • kubernetes_autoscaler v1.19.1+vmware.1,v1.18.3+vmware.1,v1.17.4+vmware.1
  • node_ova v1.19.3+vmware.1,v1.18.10+vmware.1,v1.17.13+vmware.1
  • prometheus v2.18.1+vmware.1
  • prometheus_node_exporter v0.18.1+vmware.1
  • pushgateway v1.2.0+vmware.1
  • sonobuoy v0.19.0+vmware.1
  • tanzu_tkg-cli v1.2.1+vmware.1
  • tkg-connectivity v1.2.0+vmware.2
  • tkg_extensions v1.2.0+vmware.1
  • tkg_telemetry v1.2.0+vmware.1
  • velero v1.4.3+vmware.1
  • velero-plugin-for-aws v1.1.0+vmware.1
  • velero-plugin-for-microsoft-azure v1.1.0+vmware.1
  • velero-plugin-for-vsphere v1.0.2+vmware.1
  • vsphere_csi_driver v2.0.1+vmware.1

For a complete list of software component versions that ship with Tanzu Kubernetes Grid 1.2.1, see the file ~/.tkg/bom/bom-1.2.1+vmware.1.yaml after you install the Tanzu Kubernetes Grid CLI and run any tkg command.
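
For example, a quick way to check the containerd version recorded in the BOM with standard shell tools:

    grep -A 3 containerd ~/.tkg/bom/bom-1.2.1+vmware.1.yaml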

Resolved Issues

  • Management cluster deployment fails if the vCenter Server FQDN includes uppercase characters

    If you set the VSPHERE_SERVER parameter in the config.yaml file to a vCenter Server FQDN that includes uppercase letters, deployment of the management cluster fails with the error Credentials not found.

  • Persistent volumes cannot attach to a new node if previous node is deleted

    During upgrade, if you have stateful workloads that use persistent volumes, those workloads can become stuck in the container creation or init state. This happens only when using CSI providers. You might see errors similar to the following:

    • failed to find VirtualMachine for node:
    • Multi-Attach error for volume 
    • Unable to attach or mount volumes: unmounted volumes

Known Issues

The known issues are grouped as follows.

vSphere Issues
  • Pods using PersistentVolumeClaim do not start or remain in the CrashLoopBackOff status, and Grafana and Harbor extension deployments fail

    This problem occurs if you set the security context for pods in Tanzu Kubernetes clusters running on vSphere. For information about setting the security context for pods, see the Kubernetes 1.19 documentation. If you create a StatefulSet that uses PersistentVolumeClaim and you set securityContext.fsGroup to specify the owner of the mounted volume, the volume owner is actually set to root, rather than to the value that you specify in securityContext.fsGroup. Consequently, containers that are not running as root cannot write to the volume, and fail to start. The Grafana and Harbor extensions both use PersistentVolumeClaim and securityContext.fsGroup, and so will fail to deploy on this Tanzu Kubernetes cluster. You also cannot deploy any other applications that use PersistentVolumeClaim and securityContext.fsGroup on this cluster. 
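
    For illustration, the following is a minimal sketch of the kind of security context that triggers the problem, using the standard Kubernetes StatefulSet API (the value is hypothetical):

      # Excerpt from a StatefulSet pod template: fsGroup is meant to own
      # mounted volumes, but on affected clusters the owner is set to root.
      spec:
        template:
          spec:
            securityContext:
              fsGroup: 2000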

    Workaround: This workaround assumes that you have installed the Tanzu Kubernetes Grid 1.2.1 CLI and that you have run a tkg command to generate the Tanzu Kubernetes Grid configuration files, for example tkg get management-cluster or tkg init.

    1. Open the file ~/.tkg/providers/infrastructure-vsphere/v0.7.1/ytt/csi.lib.yaml in a text editor.
    2. Insert a new line after line 400.
    3. Add the following entry, making sure to include exactly 12 leading spaces.
            - --default-fstype=ext4

    The modified section of csi.lib.yaml should look like this:

            - args:
                - --v=4
                - --timeout=300s
                - --csi-address=$(ADDRESS)
                - --default-fstype=ext4
                - --leader-election

    New Tanzu Kubernetes clusters that you create after applying the workaround will function correctly and allow you to deploy the Grafana and Harbor extensions.

    To fix any existing clusters that you deployed before applying the workaround, perform the following steps:

    1. Update the vsphere-csi-controller configuration.
      kubectl patch deployment -n kube-system vsphere-csi-controller --type=json -p='[{"op": "add", "path": "/spec/template/spec/containers/4/args/-", "value": "--default-fstype=ext4"}]'
    2. Delete the vsphere-csi-controller pod.
      kubectl delete pod -n kube-system -l app=vsphere-csi-controller

      Deleting the pod causes it to be recreated with the new configuration. A verification sketch follows these steps.

    3. Delete failed Grafana and Harbor extensions and reattempt the deployments.
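
    To verify that the recreated vsphere-csi-controller pod picked up the new argument before you reattempt the extension deployments, you can use standard kubectl commands, for example:

      kubectl get pods -n kube-system -l app=vsphere-csi-controller
      kubectl describe pod -n kube-system -l app=vsphere-csi-controller | grep default-fstype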
  • Management cluster deployment fails if vSphere user is specified in domain\user format

    When you deploy a management cluster, if you specify the vSphere user account in the format domain\user, for example, vsphere.local\administrator, the deployment fails with the error Cannot complete login due to an incorrect user name or password.

    Workaround: Specify vSphere user accounts in the format user@domain, for example administrator@vsphere.local.

  • Cannot log back in to vSphere 7 Supervisor Cluster after connection expires

    When you use kubectl vsphere login to log in to a vSphere 7 Supervisor Cluster, the kubeconfig file that is generated expires after 10 hours. If you attempt to run Tanzu Kubernetes Grid CLI commands against the Supervisor Cluster after 10 hours have passed, you are no longer authorized to do so. If you log in to the Supervisor Cluster again with kubectl vsphere login, get the new kubeconfig, and attempt to run tkg add management-cluster cluster_name to add the new kubeconfig to .tkg/config, the command fails with the following error:
    Error: : cannot save management cluster context to kubeconfig: management cluster 192.168.123.1 with context 192.168.123.1 already exists

    Workaround:

    1. Set an environment variable for kubeconfig: export KUBECONFIG=$HOME/.kube-tkg/config
    2. Run kubectl vsphere login.

    This updates the Tanzu Kubernetes Grid management cluster kubeconfig for vSphere 7 with Kubernetes without requiring you to update it by using the tkg add management-cluster command.
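
    As a minimal sketch, assuming a Supervisor Cluster at 192.168.123.1 and the user administrator@vsphere.local (both illustrative), the workaround looks like this:

      export KUBECONFIG=$HOME/.kube-tkg/config
      kubectl vsphere login --server=192.168.123.1 --vsphere-username administrator@vsphere.local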

  • Upgrade fails if the location of the management cluster has changed

    If the location of the management cluster changed after initial deployment, for example because the cluster was renamed, upgrading the management cluster fails with an error similar to the following:

    "error"="failed to reconcile VM: unable to get resource pool for management-cluster"

    Workaround: Do not change the name of the cluster on which you deploy management clusters.

Upgrade Issues
  • List of clusters shows incorrect Kubernetes version after unsuccessful upgrade attempt

    If you attempt to upgrade a Tanzu Kubernetes cluster and the upgrade fails, running tkg get cluster afterward shows the failed cluster as running the upgraded version of Kubernetes, even though the upgrade did not complete.

    Workaround: None
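
    Although there is no workaround for the incorrect listing, you can check the Kubernetes version that the cluster nodes are actually running with kubectl against the cluster's kubeconfig, for example:

      kubectl get nodes

    The VERSION column shows the kubelet version on each node.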

Deployment and Extensions Issues
  • Worker nodes cannot join cluster if cluster name contains period (.)

    If you deploy a Tanzu Kubernetes cluster and specify a name that includes the period character (.), the cluster appears to be created but only the control plane nodes are visible. Worker nodes are unable to join the cluster, and their names are truncated to exclude any text included after the period.

    Workaround: Do not include period characters in cluster names.

  • Deleting shared services cluster without removing registry webhook causes cluster deletion to stop indefinitely

    If you created a shared services cluster, deployed Harbor as a shared service with the Tanzu Kubernetes Grid Connectivity API, and then created one or more Tanzu Kubernetes clusters, attempting to delete both the shared services cluster and the Tanzu Kubernetes clusters deletes the machines, but both clusters remain indefinitely in the deleting status.

    Workaround: Delete the registry admission webhook so that the cluster deletion process can complete. 
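
    As a sketch of the cleanup, you can list the webhook configurations with standard kubectl commands and then delete the registry webhook. REGISTRY-WEBHOOK-NAME below is a placeholder; substitute the actual name from your deployment.

      kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations
      kubectl delete validatingwebhookconfiguration REGISTRY-WEBHOOK-NAME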

  • Cannot increase storage for Harbor shared service after deployment

    You can set the size of the persistent volume for the Harbor shared service in the persistence.persistentVolumeClaim.registry.size setting in the harbor-data-values.yaml file when you deploy Harbor. However, if your Harbor deployment runs out of storage, expansion of the persistent volume after deployment is not enabled by default.

    Workaround:

    Expansion of the persistent volume after deployment is not currently supported. If you have already deployed the Harbor shared service and you have run out of storage, see https://kb.vmware.com/s/article/82181 for instructions about how to increase the storage.
