VMware Tanzu Kubernetes Grid | 25 JUN 2020 | Build 16444468
Check for additions and updates to these release notes.
What's in the Release Notes
The release notes cover the following topics:
- About VMware Tanzu Kubernetes Grid
- New Features in Tanzu Kubernetes Grid 1.1.2
- Behavior Changes Between Tanzu Kubernetes Grid 1.1.0 and 1.1.2
- Supported Kubernetes Versions in Tanzu Kubernetes Grid 1.1.2
- Supported Upgrade Paths
- Supported AWS Regions
- User Documentation
- Component Versions
- Resolved Issues
- Known Issues
About VMware Tanzu Kubernetes Grid
VMware Tanzu Kubernetes Grid provides enterprise organizations with a consistent, upstream-compatible, regional Kubernetes substrate across SDDC, public cloud, and edge environments that is ready for end-user workloads and ecosystem integrations. Tanzu Kubernetes Grid builds on trusted upstream and community projects and delivers an engineered and supported Kubernetes platform for end users and partners.
Key features include:
- The Tanzu Kubernetes Grid installer interface, a graphical installer that walks you through the process of deploying management clusters to either vSphere or Amazon EC2.
- The Tanzu Kubernetes Grid CLI, providing simple commands that allow you to deploy CNCF conformant Kubernetes clusters to either vSphere or Amazon EC2.
- Binaries for Kubernetes and all of the components that you need in order to easily stand up an enterprise-class Kubernetes development environment. All binaries are tested and signed by VMware.
- Extensions for your Tanzu Kubernetes Grid instance that provide authentication and authorization, logging, networking, and ingress control.
- VMware support for your Tanzu Kubernetes Grid deployments.
New Features in Tanzu Kubernetes Grid 1.1.2
- Support for Kubernetes 1.18.3 and 1.17.6 when deploying new management clusters and Tanzu Kubernetes clusters to vSphere and Amazon EC2, and when upgrading your existing clusters.
- The Tanzu Kubernetes Grid installer interface displays the equivalent CLI command for you to copy for future use, for both vSphere and Amazon EC2.
- Adds us-gov-west-1 to the list of supported AWS regions.
- Adds --size and --worker-size options to tkg init (vSphere, Amazon EC2), tkg create cluster, and tkg config cluster, that allow you to configure control plane, worker, and HA proxy nodes of different sizes.
- Adds the tkg create cluster --manifest option, to allow you to create Tanzu Kubernetes clusters from generated YAML files.
- Allows you to either create or reuse an existing bastion host when deploying to Amazon EC2, in both the installer interface and with the CLI.
- Adds the TKG_CUSTOM_IMAGE_REPOSITORY variable to specify a local image registry for use in internet-restricted deployments.
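As a minimal sketch of the internet-restricted option, the variable can be added to the .tkg/config.yaml file that the CLI reads; the registry address below is a placeholder, not a real host:

```shell
# Sketch: point Tanzu Kubernetes Grid at a private image registry for an
# internet-restricted deployment. registry.example.com is a placeholder;
# substitute the address of your own local registry.
mkdir -p ~/.tkg
cat >> ~/.tkg/config.yaml <<'EOF'
TKG_CUSTOM_IMAGE_REPOSITORY: registry.example.com/library/tkg
EOF
```

Subsequent tkg commands then pull component images from that registry instead of the public one.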
Behavior Changes Between Tanzu Kubernetes Grid 1.1.0 and 1.1.2
- The VSPHERE_TEMPLATE setting is detected automatically when using the CLI to deploy management clusters to vSphere.
- The .tkg/config.yaml file is populated at the Review Configuration stage when using the installer interface to deploy management clusters, rather than when you start the deployment, on both vSphere and Amazon EC2.
Supported Kubernetes Versions in Tanzu Kubernetes Grid 1.1.2
Tanzu Kubernetes Grid 1.1.2 adds support for Kubernetes 1.18.3 and 1.17.6. This version also supports the versions of Kubernetes from previous versions of Tanzu Kubernetes Grid.
| Tanzu Kubernetes Grid Version | Provided Kubernetes Versions | Supported in v1.1.2? |
|---|---|---|
| 1.1.2 | 1.18.3, 1.17.6 | Yes |
| 1.1.0 | 1.18.2 | Yes |
| 1.0.0 | 1.17.3 | Yes |
Supported Upgrade Paths
You can upgrade Tanzu Kubernetes Grid v1.0.0 and v1.1.0 to version 1.1.2.
Supported AWS Regions
You can use Tanzu Kubernetes Grid 1.1.2 to deploy clusters to the following AWS regions:
User Documentation
The Tanzu Kubernetes Grid 1.1 documentation covers both the 1.1.0 and 1.1.2 releases. It includes information about the following subjects:
- Tanzu Kubernetes Grid Concepts introduces the key components of Tanzu Kubernetes Grid and describes how you use them and what they do.
- Installing Tanzu Kubernetes Grid describes how to install the Tanzu Kubernetes Grid CLI as well as the prerequisites for installing Tanzu Kubernetes Grid on vSphere and on Amazon EC2.
- Deploying and Managing Management Clusters describes how to deploy Tanzu Kubernetes Grid management clusters to both vSphere and Amazon EC2.
- Deploying Tanzu Kubernetes Clusters and Managing their Lifecycle describes how to use the Tanzu Kubernetes Grid CLI to deploy Tanzu Kubernetes clusters from your management cluster, and how to manage the lifecycle of those clusters.
- Configuring and Managing the Tanzu Kubernetes Grid Instance describes how to set up local shared services for your Tanzu Kubernetes clusters, such as authentication and authorization, logging, networking, and ingress control.
- Upgrading Tanzu Kubernetes Grid describes how to upgrade to this version.
- Troubleshooting Tips for Tanzu Kubernetes Grid includes tips to help you to troubleshoot common problems that you might encounter when installing Tanzu Kubernetes Grid and deploying Tanzu Kubernetes clusters.
- Tanzu Kubernetes Grid CLI Reference lists all of the commands and options of the Tanzu Kubernetes Grid CLI, and provides links to the section in which they are documented.
Component Versions
The Tanzu Kubernetes Grid 1.1.2 release ships with the following software components:
Resolved Issues
The resolved issues are grouped as follows.
Installation Issues
- Creating Kubernetes v1.17.3 clusters fails
If you attempt to create a Tanzu Kubernetes cluster that runs Kubernetes v1.17.3 by running the tkg create cluster command with --kubernetes-version v1.17.3+vmware.2 specified, the creation fails with the error vSphere template kubernetes version validation failed: Kubernetes version (v1.17.3+vmware.2) does not match that of the VM template (v1.18.2+vmware.1). This happens because the VSPHERE_TEMPLATE setting in your .tkg/config.yaml file points to a VM template for a different Kubernetes version, in this case v1.18.2+vmware.1.
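As an illustrative sketch, the fix is to make the VSPHERE_TEMPLATE setting in .tkg/config.yaml name a template that matches the requested Kubernetes version; the template name below is a hypothetical example, not necessarily the name in your inventory:

```shell
# Sketch: make VSPHERE_TEMPLATE match the version passed to
# tkg create cluster --kubernetes-version v1.17.3+vmware.2.
# The template name is a hypothetical example.
mkdir -p ~/.tkg
cat >> ~/.tkg/config.yaml <<'EOF'
VSPHERE_TEMPLATE: photon-3-kube-v1.17.3+vmware.2
EOF
```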
Known Issues
The known issues are grouped as follows.
vSphere Issues
- Cannot log back in to vSphere 7 Supervisor Cluster after connection expires
When you use kubectl vsphere login to log in to a vSphere 7 Supervisor Cluster, the kubeconfig file that is generated expires after 10 hours. If you attempt to run Tanzu Kubernetes Grid CLI commands against the Supervisor Cluster after 10 hours have passed, you are no longer authorized to do so. If you use kubectl vsphere login to log in to the Supervisor Cluster again, get the new kubeconfig, and attempt to run tkg add management-cluster cluster_name to add the new management cluster to your .tkg/config, the command throws an error:
Error: : cannot save management cluster context to kubeconfig: management cluster 192.168.123.1 with context 192.168.123.1 already exists
Workaround: Set an environment variable for kubectl vsphere login. This updates the Tanzu Kubernetes Grid management cluster kubeconfig for vSphere 7 with Kubernetes without requiring you to update it by using the tkg add management-cluster command.
- Upgrade to 1.1 fails if the location of the management cluster has changed
If the location of the management cluster changed after initial deployment, for example because the cluster was renamed, upgrading the management cluster to version 1.1 fails with an error similar to the following:
"error"="failed to reconcile VM: unable to get resource pool for management-cluster"
Workaround: Do not change the name of the cluster on which you deploy management clusters.
- Management cluster deployment fails if the vCenter Server FQDN includes uppercase characters
If you set the VSPHERE_SERVER parameter in the config.yaml file with a vCenter Server FQDN that includes upper-case letters, deployment of the management cluster fails with the error Credentials not found.
Workaround: Use all lower-case letters when you specify a vCenter Server FQDN in the config.yaml file.
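As an illustrative sketch, a shell one-liner can normalize the FQDN before it is written to the configuration; VC01.Example.COM is a made-up hostname:

```shell
# Sketch: lower-case a vCenter Server FQDN before using it as the
# VSPHERE_SERVER value. The hostname is a made-up example.
FQDN="VC01.Example.COM"
VSPHERE_SERVER=$(printf '%s' "$FQDN" | tr '[:upper:]' '[:lower:]')
echo "$VSPHERE_SERVER"
```

This prints vc01.example.com, which is safe to use in config.yaml.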
- Cluster API Provider vSphere stops unexpectedly when the VirtualMachine configuration is unavailable
If Cluster API Provider vSphere cannot read the UUID of a virtual machine, it stops unexpectedly because the error is not detected. For more information, see https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/issues/944.
Workaround: The fix has been made in Cluster API Provider vSphere and will be picked up in a Tanzu Kubernetes Grid patch release.
- May 2021 Linux security patch causes kind clusters to fail during management cluster creation
If you run Tanzu Kubernetes Grid CLI commands on a machine with a recent Linux kernel, for example Linux 5.11 and 5.12 with Fedora, kind clusters do not operate. This happens because kube-proxy attempts to change the nf_conntrack_max sysctl, which was made read-only in the May 2021 Linux security patch, and kube-proxy enters a CrashLoopBackoff state. The security patch is currently being backported to all LTS kernels from 4.9 onwards, so as operating system updates are shipped, including for Docker Machine on Mac OS and Windows Subsystem for Linux, kind clusters will fail, resulting in management cluster deployment failure.
Workaround: Update your version of kind to at least v0.11.0, and run tkg init with the --use-existing-bootstrap-cluster option. For more information, see Use an Existing Bootstrap Cluster to Deploy Management Clusters.
- List of clusters shows incorrect Kubernetes version after unsuccessful upgrade attempt
If you attempt to upgrade a Tanzu Kubernetes cluster and the upgrade fails, and if you subsequently run tkg get cluster to see the list of deployed clusters and their versions, the cluster for which the upgrade failed shows the upgraded version of Kubernetes.