VMware Tanzu Kubernetes Grid 1.4.1 | 06 JAN 2022
Check for additions and updates to these release notes.
Here are the key new features and capabilities specific to Tanzu Kubernetes Grid v1.4.1. See Tanzu Kubernetes Grid v1.4.0 Release Notes for new features and capabilities that apply to all v1.4.x versions.
- The `tanzu management-cluster register` command is removed from the Tanzu CLI.
- The identity management services (Pinniped and Dex) deploy as `ServiceType: LoadBalancer`. Previously, they deployed as `ServiceType: NodePort` and you had to integrate NSX Advanced Load Balancer with your identity provider manually.
- Management clusters deployed on AWS with the `prod` plan have three worker nodes by default. Previously, the default count was one.
- You can change the number of worker nodes that the `dev` and `prod` plans define by default by setting the `WORKER_MACHINE_COUNT` variable in the cluster configuration file.
- A new configuration variable, `TKG_PROXY_CA_CERT`, lets the proxy server use a different self-signed certificate than the one used by the private image registry.
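As a minimal sketch, the new variable sits alongside the standard proxy and registry settings in a cluster configuration file. The hostnames and certificate values below are placeholders, and the base64 encoding is assumed to mirror the existing registry CA variable:

```yaml
# Cluster configuration file fragment; all values are placeholders.
TKG_HTTP_PROXY: "http://proxy.example.com:3128"
TKG_HTTPS_PROXY: "http://proxy.example.com:3128"
TKG_NO_PROXY: "localhost,127.0.0.1,.svc,.svc.cluster.local"
# CA certificate of the proxy server, when it differs from the CA
# used by the private image registry:
TKG_PROXY_CA_CERT: "LS0tLS1CRUdJTi..."
# CA certificate of the private image registry itself:
TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: "LS0tLS1CRUdJTi..."
```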
Tanzu Kubernetes Grid v1.4 supports the following infrastructure platforms and operating systems (OSs), as well as cluster creation and management, networking, storage, authentication, backup and migration, and observability components. The component versions listed in parentheses are included in Tanzu Kubernetes Grid v1.4. For more information, see Component Versions.
| Infrastructure platform | vSphere 6.7U3 and later, vSphere 7, VMware Cloud on AWS****, Azure VMware Solution | Native AWS* | Native Azure* |
| --- | --- | --- | --- |
| Cluster creation and management | Core Cluster API (v0.3.22), Cluster API Provider vSphere (v0.7.10) | Core Cluster API (v0.3.22), Cluster API Provider AWS (v0.6.6) | Core Cluster API (v0.3.22), Cluster API Provider Azure (v0.4.15) |
| Kubernetes node OS distributed with TKG | Photon OS 3, Ubuntu 20.04 | Amazon Linux 2, Ubuntu 20.04 | Ubuntu 18.04, Ubuntu 20.04 |
| Build your own image | Photon OS 3, Red Hat Enterprise Linux 7, Ubuntu 18.04, Ubuntu 20.04 | Amazon Linux 2, Ubuntu 18.04, Ubuntu 20.04 | Ubuntu 18.04, Ubuntu 20.04 |
| Container runtime | Containerd (v1.4.6) + | Containerd (v1.4.6) + | Containerd (v1.4.6) + |
| Container networking | Antrea (v0.13.3), Calico (v3.11.3) | Antrea (v0.13.3), Calico (v3.11.3) | Antrea (v0.13.3), Calico (v3.11.3) |
| Container registry | Harbor (v2.2.3) | Harbor (v2.2.3) | Harbor (v2.2.3) |
| Ingress | NSX Advanced Load Balancer Essentials (v20.1.3)**, Contour (v1.17.2), Avi Kubernetes Operator (AKO) (v1.4.3_vmware.1), Avi Controller (v20.1.3 and v20.1.6) | Contour (v1.17.1) | Contour (v1.17.1) |
| Storage | vSphere Container Storage Interface (v2.3.0***) and vSphere Cloud Native Storage | In-tree cloud providers only | In-tree cloud providers only |
| Authentication | OIDC via Pinniped (v0.4.4), LDAP via Pinniped (v0.4.4) and Dex | OIDC via Pinniped (v0.4.4), LDAP via Pinniped (v0.4.4) and Dex | OIDC via Pinniped (v0.4.4), LDAP via Pinniped (v0.4.4) and Dex |
| Observability | Fluent Bit (v1.7.5), Prometheus (v2.27.0), Grafana (v7.5.7) | Fluent Bit (v1.7.5), Prometheus (v2.27.0), Grafana (v7.5.7) | Fluent Bit (v1.7.5), Prometheus (v2.27.0), Grafana (v7.5.7) |
| Backup and migration | Velero (v1.6.2) | Velero (v1.6.2) | Velero (v1.6.2) |
Each version of Tanzu Kubernetes Grid adds support for new Kubernetes versions. This version also continues to support some Kubernetes versions that shipped with previous versions of Tanzu Kubernetes Grid, as listed below.
| Tanzu Kubernetes Grid Version | Provided Kubernetes Versions | Supported in v1.4? |
| --- | --- | --- |
| 1.4.x | 1.21.2, 1.20.8, 1.19.12 | YES, YES, YES |
| 1.3.1 | 1.20.5, 1.19.9, 1.18.17 | YES, YES, NO |
| 1.3.0 | 1.20.4, 1.19.8, 1.18.16, 1.17.16 | YES, YES, NO, NO |
| 1.2.1 | 1.19.3, 1.18.10, 1.17.13 | YES, NO, NO |
| 1.2 | 1.19.1, 1.18.8, 1.17.11 | YES, NO, NO |
The Tanzu Kubernetes Grid v1.4.1 release includes the following software component versions:
You can only upgrade to Tanzu Kubernetes Grid v1.4.1 from v1.3.x or v1.4.0. To upgrade from a version earlier than v1.3.x, you must first upgrade to v1.3.x, then upgrade to v1.4.1.
When upgrading Kubernetes versions on Tanzu Kubernetes clusters, you cannot skip minor versions. For example, you cannot upgrade a Tanzu Kubernetes cluster directly from v1.19.x to v1.21.x. You must upgrade a v1.19.x cluster to v1.20.x before upgrading the cluster to v1.21.x.
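As an illustrative sketch of that stepwise path, assuming the Tanzu CLI is installed and logged in to the management cluster; the cluster name and TKr strings below are placeholders, not literal values (list the real Tanzu Kubernetes release names with `tanzu kubernetes-release get`):

```sh
# Illustrative upgrade sequence; names in angle brackets are placeholders.

# 1. Upgrade the management cluster first.
tanzu management-cluster upgrade

# 2. List the Tanzu Kubernetes releases (TKrs) available to workload clusters.
tanzu kubernetes-release get

# 3. Upgrade each workload cluster one minor version at a time
#    (v1.19.x -> v1.20.x -> v1.21.x; minor versions cannot be skipped).
tanzu cluster upgrade <my-workload-cluster> --tkr <v1.20.x-tkr-name>
tanzu cluster upgrade <my-workload-cluster> --tkr <v1.21.x-tkr-name>
```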
You must apply the following changes to your environment before upgrading to Tanzu Kubernetes Grid v1.4.1 from v1.4.0 or earlier.
Add the required permissions to the `nodes.tkg.cloud.vmware.com` policy. This enables the Tanzu Mission Control resource retriever on AWS. See the Tanzu Mission Control documentation for the full list of required permissions.
The following issues from Tanzu Kubernetes Grid v1.4.0 have been resolved in v1.4.1.
Tanzu Kubernetes Grid management clusters can be registered with Tanzu Mission Control
You can register Tanzu Kubernetes Grid management clusters with Tanzu Mission Control from the Tanzu Mission Control UI.
The cluster configuration variable `TMC_REGISTRATION_URL` and the Tanzu CLI command `tanzu management-cluster register` are no longer used.
Management cluster and infrastructure can be behind different proxies
In proxied, internet-restricted environments in which the management cluster and infrastructure (such as vCenter) are on different networks and behind different proxies, the local bootstrap kind cluster can now access and pull container images. This resolves an issue that prevented the bootstrap cluster from pulling images in such environments.
Production plan management cluster on AWS has three workers
The default number of worker nodes deployed in a production plan management cluster on AWS has been increased from one to three.
Previous behavior: After you selected Amazon EC2 instance types under AZ1 Worker Node Instance Type, AZ2 Worker Node Instance Type, and AZ3 Worker Node Instance Type in the Production view, the installer deployed the management cluster with only one worker node, of the AZ1 Worker Node Instance Type, instead of three worker nodes.
Disconnecting or powering off the vCenter host no longer causes workload cluster upgrade or deletion to hang.
The following known issues apply specifically to Tanzu Kubernetes Grid v1.4.1. See the Tanzu Kubernetes Grid v1.4.0 Release Notes for known issues that apply to all v1.4.x versions.
Installer ignores test names entered for LDAP check
The Tanzu Kubernetes Grid installer interface ignores the values that you enter in the Test User Name (Optional) and Test Group Name (Optional) fields when verifying the LDAP configuration. Instead, it uses `cn` for the test user name and `ou` for the test group name when running its LDAP check.
Management cluster create fails or performance slow with older NSX-T versions and Photon 3 or Ubuntu with Linux kernel 5.8 VMs
Deploying a management cluster with the following infrastructure and configuration may fail or result in restricted traffic between pods:
This combination exposes a checksum issue between older versions of NSX-T and the Antrea CNI.
TMC: If the management cluster is registered with Tanzu Mission Control (TMC), there is no workaround for this issue. Otherwise, see the workaround below.

Workaround: On each cluster node VM, disable UDP tunnel offloading on the primary network interface:

```sh
ethtool -K eth0 tx-udp_tnl-segmentation off && ethtool -K eth0 tx-udp_tnl-csum-segmentation off
```
The Tanzu Kubernetes Grid 1.4 documentation applies to all of the 1.4.x releases.