VMware Tanzu Kubernetes Grid | 06 AUG 2020 | CLI build 16711757 | Component build 16699823
Check for additions and updates to these release notes.
About VMware Tanzu Kubernetes Grid
VMware Tanzu Kubernetes Grid provides enterprise organizations with a consistent, upstream-compatible, regional Kubernetes substrate across SDDC, public cloud, and edge environments that is ready for end-user workloads and ecosystem integrations. Tanzu Kubernetes Grid builds on trusted upstream and community projects and delivers an engineered and supported Kubernetes platform for end users and partners.
Key features include:
- The Tanzu Kubernetes Grid installer interface, a graphical installer that walks you through the process of deploying management clusters to either vSphere or Amazon EC2.
- The Tanzu Kubernetes Grid CLI, providing simple commands that allow you to deploy CNCF conformant Kubernetes clusters to either vSphere or Amazon EC2.
- Binaries for Kubernetes and all of the components that you need in order to easily stand up an enterprise-class Kubernetes development environment. All binaries are tested and signed by VMware.
- Extensions for your Tanzu Kubernetes Grid instance that provide authentication and authorization, logging, networking, and ingress control.
- VMware support for your Tanzu Kubernetes Grid deployments.
New Features in Tanzu Kubernetes Grid 1.1.3
Tanzu Kubernetes Grid 1.1.3 is a patch release that includes support for new Kubernetes versions and critical bug fixes.
- New Kubernetes versions:
  - Kubernetes v1.18.6
  - Kubernetes v1.17.9
- Cluster API Provider vSphere v0.6.6
- New vSphere container storage interface image that includes NFS Utils
Behavior Changes Between Tanzu Kubernetes Grid 1.1.2 and 1.1.3
Tanzu Kubernetes Grid v1.1.3 introduces no new behavior compared with v1.1.2.
Supported Kubernetes Versions in Tanzu Kubernetes Grid 1.1.3
Tanzu Kubernetes Grid 1.1.3 provides support for Kubernetes 1.18.6 and 1.17.9. This version also supports the versions of Kubernetes from previous versions of Tanzu Kubernetes Grid.
| Tanzu Kubernetes Grid Version | Provided Kubernetes Versions | Supported in v1.1.3? |
|---|---|---|
Supported Upgrade Paths
You can upgrade Tanzu Kubernetes Grid v1.0.0, v1.1.0, and v1.1.2 to version 1.1.3.
Supported AWS Regions
You can use Tanzu Kubernetes Grid 1.1.3 to deploy clusters to the following AWS regions:
The Tanzu Kubernetes Grid 1.1 documentation applies to all of the 1.1.x releases. It includes information about the following subjects:
- Tanzu Kubernetes Grid Concepts introduces the key components of Tanzu Kubernetes Grid and describes how you use them and what they do.
- Installing Tanzu Kubernetes Grid describes how to install the Tanzu Kubernetes Grid CLI, as well as the prerequisites for installing Tanzu Kubernetes Grid on vSphere and on Amazon EC2.
- Deploying and Managing Management Clusters describes how to deploy Tanzu Kubernetes Grid management clusters to both vSphere and Amazon EC2.
- Deploying Tanzu Kubernetes Clusters and Managing their Lifecycle describes how to use the Tanzu Kubernetes Grid CLI to deploy Tanzu Kubernetes clusters from your management cluster, and how to manage the lifecycle of those clusters.
- Configuring and Managing the Tanzu Kubernetes Grid Instance describes how to set up local shared services for your Tanzu Kubernetes clusters, such as authentication and authorization, logging, networking, and ingress control.
- Upgrading Tanzu Kubernetes Grid describes how to upgrade to this version.
- Troubleshooting Tips for Tanzu Kubernetes Grid includes tips to help you to troubleshoot common problems that you might encounter when installing Tanzu Kubernetes Grid and deploying Tanzu Kubernetes clusters.
- Tanzu Kubernetes Grid CLI Reference lists all of the commands and options of the Tanzu Kubernetes Grid CLI, and provides links to the section in which they are documented.
The Tanzu Kubernetes Grid 1.1.3 release ships with the following software components:
After you install the Tanzu Kubernetes Grid CLI and run any `tkg` command, you can see the list of versions for all of the components that ship with Tanzu Kubernetes Grid 1.1.3 in the bill of materials (BoM) file that the CLI downloads.
- Cluster API Provider vSphere stops unexpectedly when the VirtualMachine configuration is unavailable
If Cluster API Provider vSphere cannot read the UUID of a virtual machine it stops unexpectedly because the error is not detected. For more information, see https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/issues/944.
- Installation in Internet-restricted environments fails due to incorrect version for Cluster API Provider AWS
If you are deploying Tanzu Kubernetes Grid in an internet-restricted environment, the deployment fails because the script that populates your local registry includes the incorrect version of the Cluster API Provider AWS.
- NFS Utils is missing from vSphere CSI driver
If you are using vSAN storage on vSphere, when you create a pod, attempting to mount volumes fails with the error `an error occurred during FIP allocation`. This occurs because NFS Utils is missing from the vSphere CSI driver image.
The known issues are grouped as follows.

vSphere Issues
- Cluster networking issues after upgrading virtual hardware in Base OS templates or cluster VMs
Due to a kernel issue in Photon OS, if you are using the Calico CNI, the virtual hardware of the `photon-3-kube-v1.17.9+vmware.1` templates, as well as any cluster VMs that you deploy from the templates, must remain at version 13. Do not upgrade these templates or VMs to a virtual hardware version above version 13.
Workaround: Use SSH to log in to each VM in the cluster, and run the following command to disable TX hardware offload:

ethtool -K eth0 tx off
To make the workaround persist following reboots, create a file at `/etc/udev/rules.d/90-netif-disable-hw-offload.rules` and paste the following contents into it:

ACTION=="add", SUBSYSTEM=="net", KERNEL=="eth*", TAG+="netif_hw_tx_offload_disable"
ACTION=="add", SUBSYSTEM=="net", KERNEL=="en*", TAG+="netif_hw_tx_offload_disable"
TAG=="netif_hw_tx_offload_disable", RUN+="/usr/sbin/ethtool -K $name tx off"
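As a sketch, the three rules can be staged in one step before copying them into place; the snippet below writes to a temporary path (rather than `/etc/udev/rules.d`) so the file can be inspected first, and the staging path is illustrative:

```shell
# Stage the hardware-offload udev rules in /tmp for inspection. In a real
# cluster VM you would write them to
# /etc/udev/rules.d/90-netif-disable-hw-offload.rules and reboot for the
# rules to take effect.
RULES=/tmp/90-netif-disable-hw-offload.rules
cat > "$RULES" <<'EOF'
ACTION=="add", SUBSYSTEM=="net", KERNEL=="eth*", TAG+="netif_hw_tx_offload_disable"
ACTION=="add", SUBSYSTEM=="net", KERNEL=="en*", TAG+="netif_hw_tx_offload_disable"
TAG=="netif_hw_tx_offload_disable", RUN+="/usr/sbin/ethtool -K $name tx off"
EOF
# Each of the three rules carries the tag; confirm before installing.
grep -c 'netif_hw_tx_offload_disable' "$RULES"
```

The quoted heredoc delimiter (`'EOF'`) keeps `$name` literal so udev, not the shell, expands it.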
- Cannot log back in to vSphere 7 Supervisor Cluster after connection expires
When you use `kubectl vsphere login` to log in to a vSphere 7 Supervisor Cluster, the `kubeconfig` file that is generated expires after 10 hours. If you attempt to run Tanzu Kubernetes Grid CLI commands against the Supervisor Cluster after 10 hours have passed, you are no longer authorized to do so. If you use `kubectl vsphere login` to log in to the Supervisor Cluster again, get the new `kubeconfig`, and attempt to run `tkg add management-cluster cluster_name` to add the new management cluster to `.tkg/config`, it throws an error:
Error: : cannot save management cluster context to kubeconfig: management cluster 192.168.123.1 with context 192.168.123.1 already exists
Workaround: Set an environment variable for `kubectl vsphere login`. This updates the Tanzu Kubernetes Grid management cluster `kubeconfig` for vSphere 7 with Kubernetes without requiring you to update it by using the `tkg add management-cluster` command.
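As a sketch of how to spot the stale entry before re-adding the management cluster, the check below greps a throwaway kubeconfig fragment; the file path, fragment contents, and context name (taken from the error message above) are illustrative, not product behavior:

```shell
# Illustrative only: a throwaway kubeconfig fragment containing a context
# named after the management cluster address from the error message.
KUBECONFIG_FILE=/tmp/kubeconfig-demo
cat > "$KUBECONFIG_FILE" <<'EOF'
contexts:
- context:
    cluster: 192.168.123.1
  name: 192.168.123.1
EOF
# If the context already exists, remove it (for example with
# `kubectl config delete-context`) before re-adding the cluster.
if grep -q '^  name: 192.168.123.1$' "$KUBECONFIG_FILE"; then
  echo "stale context found"
fi
```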
- Upgrade to 1.1 fails if the location of the management cluster has changed
If the location of the management cluster changed after initial deployment, for example because the cluster was renamed, upgrading the management cluster to version 1.1 fails with an error similar to the following:
"error"="failed to reconcile VM: unable to get resource pool for management-cluster"
Workaround: Do not change the name of the cluster on which you deploy management clusters.
- Management cluster deployment fails if the vCenter Server FQDN includes uppercase characters
If you set the `VSPHERE_SERVER` parameter in the `config.yaml` file with a vCenter Server FQDN that includes upper-case letters, deployment of the management cluster fails with the error `Credentials not found`.

Workaround: Use all lower-case letters when you specify a vCenter Server FQDN in the `config.yaml` file.
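One quick way to apply the workaround to an existing configuration is to lower-case the value in place; a minimal sketch using GNU sed, where the file path and FQDN are placeholders rather than values from the product:

```shell
# Placeholder config with an upper-case vCenter Server FQDN.
CONFIG=/tmp/tkg-config-demo.yaml
printf 'VSPHERE_SERVER: VCENTER01.Example.COM\n' > "$CONFIG"
# Lower-case only the value of the VSPHERE_SERVER key (GNU sed \L escape).
sed -i 's/^\(VSPHERE_SERVER: \)\(.*\)$/\1\L\2/' "$CONFIG"
cat "$CONFIG"
```

Because DNS names are case-insensitive, lower-casing the FQDN does not change which server the configuration points at.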
- List of clusters shows incorrect Kubernetes version after unsuccessful upgrade attempt
If you attempt to upgrade a Tanzu Kubernetes cluster and the upgrade fails, and if you subsequently run `tkg get cluster` to see the list of deployed clusters and their versions, the cluster for which the upgrade failed shows the upgraded version of Kubernetes.
- Tanzu Mission Control reports Tanzu Kubernetes Grid 1.1.3 clusters as unhealthy when they are actually healthy
If you use Tanzu Kubernetes Grid 1.1.3 to deploy clusters with Kubernetes v1.17.9 and v1.18.6, and if you register these clusters with Tanzu Mission Control, Tanzu Mission Control reports that these clusters are unhealthy. This happens because these versions of Kubernetes introduced a change that affects the way that Tanzu Mission Control checks cluster health.
This issue will be addressed in an update of Tanzu Mission Control.