VMware Tanzu Kubernetes Grid Service 3

Check for additions and updates to these release notes.
VMware TKG Service provides components for provisioning and managing the lifecycle of Kubernetes clusters in the vSphere IaaS control plane environment.
These release notes provide details for each TKG Service release. Refer to the documentation for usage information.
TKG Service 3.2 is generally available for use with vSphere 8 Update 3.
Name | Release Date
---|---
TKG Service 3.2 | 10/4/2024
Support for TKr 1.31. To upgrade to TKG Service 3.2.0, all TKG Service clusters must be running TKr 1.27 or later. Refer to the interoperability matrix for more details.
Support for Windows container workloads on Microsoft Windows Server 2022 worker nodes. To configure a Windows node pool for a TKG Service cluster, the cluster must be running at least TKr 1.31, must use the Antrea CNI, and must not be managed using the TanzuKubernetesCluster API. OS images for Windows are not distributed by VMware and must be built using Image Builder. See the documentation for more details.
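As a rough sketch, a Windows node pool on a v1beta1 Cluster might be declared as a dedicated worker machine deployment. The class name node-pool-windows, the replica count, and the version value below are all hypothetical; consult the TKG Service documentation for the exact schema.

```yaml
# Sketch only: the Windows worker class name below is hypothetical;
# check the documentation for the schema supported by your ClusterClass.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster            # placeholder
  namespace: my-namespace     # placeholder
spec:
  topology:
    class: builtin-generic-v3.2.0  # TanzuKubernetesCluster API is not supported for Windows
    version: v1.31.1               # must be TKr 1.31 or later (example value)
    workers:
      machineDeployments:
      - class: node-pool-windows   # hypothetical class name
        name: win-pool
        replicas: 2
```

The Windows OS image referenced by such a node pool must first be built with Image Builder and made available in the content library, since VMware does not distribute Windows images.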
Versioned VMware-provided ClusterClasses avoid System Initiated Rolling Updates of existing TKG Service clusters after a TKG Service upgrade from v3.0.0 or later.
TKG Service clusters created using the TanzuKubernetesCluster API will instead use the builtin-generic-v3.1.0 cluster class. Clusters using the TanzuKubernetesCluster API may not move to later cluster classes. (See below for information about the deprecation of the TanzuKubernetesCluster API.)
TKG Service clusters created using Cluster API v1beta1 with the tanzukubernetescluster cluster class will be automatically updated to use a new builtin-generic-v3.1.0 cluster class equivalent to the tanzukubernetescluster cluster class from v3.1.0. These clusters, as well as TKG Service clusters created using the builtin-generic-v3.1.0 cluster class, will be annotated with kubernetes.vmware.com/skip-auto-cc-rebase to protect you from potentially disruptive changes. When your tooling is compatible with the builtin-generic-v3.2.0 cluster class, you can remove this annotation.
TKG Service clusters created using the v1beta1 API will default to builtin-generic-v3.2.0. When a cluster without the kubernetes.vmware.com/skip-auto-cc-rebase annotation is upgraded to a new TKr version, it will be automatically updated to the latest built-in cluster class version, currently builtin-generic-v3.2.0. This allows changes that require a rolling update to be combined with the rolling update that occurs as a part of the TKG Service cluster upgrade, avoiding the need for a System Initiated Rolling Update.
Users with GitOps-style workflows should ensure that the spec.topology.class field in the Cluster v1beta1 object is not reverted after it has been automatically updated to the latest built-in cluster class.
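As a sketch, you can inspect and eventually remove the rebase-protection annotation with kubectl; the cluster name and namespace below are placeholders:

```shell
# Check whether the annotation is present on the Cluster object:
kubectl get cluster my-cluster -n my-namespace \
  -o jsonpath='{.metadata.annotations.kubernetes\.vmware\.com/skip-auto-cc-rebase}'

# Once your tooling supports the builtin-generic-v3.2.0 cluster class,
# remove the annotation (the trailing "-" removes an annotation):
kubectl annotate cluster my-cluster -n my-namespace \
  kubernetes.vmware.com/skip-auto-cc-rebase-
```

After the annotation is removed, the next TKr upgrade will rebase the cluster onto the latest built-in cluster class.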
Deprecation of TanzuKubernetesCluster, TanzuKubernetesRelease, and associated APIs, including TanzuKubernetesAddons and TKGServiceConfiguration, as well as the tanzukubernetescluster cluster class.
You are encouraged to use the Cluster v1beta1 API to create TKG Service clusters, and to use the new KubernetesRelease API to list available TKr versions. New status conditions are available via the Cluster v1beta1 API to aid you in understanding cluster readiness. New functionality may require the use of the Cluster v1beta1 API or the KubernetesRelease API.
When upgrading TanzuKubernetesCluster-based clusters, TKr releases labeled as legacy (often described as "for vSphere 7.x") may no longer be used. Instead, use a non-legacy TKr release (often described as "for vSphere 8.x").
The TanzuKubernetesCluster API will be removed no sooner than June 2025. In a release prior to the removal of the TanzuKubernetesCluster API, a mechanism will be provided to transition from managing a cluster via the TanzuKubernetesCluster API to managing the cluster using the Cluster v1beta1 API.
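For example, listing available TKr versions during the deprecation period might look like the following. The kubernetesreleases resource name for the new API is an assumption; verify the exact name in your environment with kubectl api-resources.

```shell
# New API (resource name assumed; confirm with 'kubectl api-resources'):
kubectl get kubernetesreleases

# Deprecated API, still available during the transition:
kubectl get tanzukubernetesreleases
```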
With the introduction of the builtin-generic-v3.2.0 ClusterClass, CIDR ranges are no longer automatically added to the proxy exceptions.
For TKCs and v1beta1 clusters using the default and builtin-generic-v3.1.0 ClusterClasses, the pod and service CIDR ranges defined on the Cluster object, as well as localhost and 127.0.0.1, are automatically excluded from proxying. This behavior has changed for the builtin-generic-v3.2.0 ClusterClass: the CIDR ranges are no longer automatically added to the proxy exceptions.
Workaround:
To mitigate this issue, add the above CIDR ranges, as well as the localhost and 127.0.0.1 entries, to the noProxy field of the systemProxy section of the osConfiguration ClusterClass variable. For existing clusters using the 3.2.0 ClusterClass, making these changes causes a rollout of the nodes on the cluster.
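A minimal sketch of that variable on a v1beta1 Cluster follows. The CIDR values are illustrative only and must match the pod and service CIDRs defined under spec.clusterNetwork on your own Cluster object; the surrounding structure assumes the osConfiguration/systemProxy schema named in the workaround.

```yaml
# Illustrative values; align the CIDRs with spec.clusterNetwork on your Cluster.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
spec:
  topology:
    class: builtin-generic-v3.2.0
    variables:
    - name: osConfiguration
      value:
        systemProxy:
          noProxy:
          - localhost
          - 127.0.0.1
          - 10.96.0.0/12     # service CIDR (example)
          - 192.168.0.0/16   # pod CIDR (example)
```

Remember that applying this change to an existing cluster on the 3.2.0 ClusterClass triggers a node rollout.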
Unable to add or update node pools for clusters using the builtin-generic-v3.1 cluster class
Tanzu Mission Control (TMC) users cannot add or update node pools from the UI for clusters using the builtin-generic-v3.1 cluster class (automatically updated from the tanzukubernetescluster cluster class).
The Local Consumption Interface (LCI) UI for creating clusters by using the v1beta1 API does not proceed to completion
Creating a Cluster v1beta1 TKG Service Cluster by using the LCI 1.0.0 does not work with the TKG Service 3.2 release.
An upcoming version of LCI that is compatible with TKG Service 3.2 will resolve this issue. Alternatively, you can create a TKG Service cluster based on the v1beta1 API by using kubectl or Tanzu CLI.
Any updates to the TKGServiceConfiguration object are overwritten every 10 minutes
If you set fields such as proxy or trust on the TKGServiceConfiguration resource, your changes are overwritten every 10 minutes.
TKG Service 3.1.1 is generally available for use with vSphere 8.0 Update 3b.
Name | Release Date
---|---
TKG Service 3.1.1 | 9/17/2024
TKG Service 3.1.1 includes the same features as version 3.1 with additional bug fixes. It is available as a synchronous option with vCenter 8.0 Update 3b release. A public asynchronous release is not available at this time. For more information, see Registering New TKG Service Versions with vCenter.
TKG Service 3.1 is generally available for use with vSphere 8 Update 3.
Name | Release Date
---|---
TKG Service 3.1 | 7/8/2024
TKG Service 3.1 includes the following new features:
Updates TKG Service components to support the TKr v1.30 release
Improvements to kubectl and the Tanzu CLI to identify legacy TKrs (refer to the documentation for more information about TKr formats)
After restoring Supervisor to a backup taken with TKG Service 3.0, the TKG Service version displayed in the web interface is 3.1.
After upgrading to TKG Service 3.1.0, restoring a backup taken with TKG Service 3.0.0 is not supported.
Workaround: None
One of the OSImages might be missing from a non-legacy TKr >= v1.28 if vSphere is upgraded from 7.x to 8.x.
Resolved: TKr >= v1.28 now has all required OSImages present when vSphere is upgraded from 7.x to 8.x.
TKr controller should not recommend TKr for vSphere 7.x as upgrade for a cluster using TKr for vSphere 8.x
The Tanzu CLI was listing legacy TKrs as options for upgrading a classy cluster when running 'tanzu cluster upgrade'.
Block upgrades to TKG 1.0 TKr for Classy clusters
You cannot upgrade a classy cluster to a TKr for vSphere 7.x.
TKG Service 3.0 is generally available for use with vSphere 8 Update 3.
Name | Release Date
---|---
TKG Service 3.0 | 6/25/2024
Prior to vSphere 8.0 Update 3, TKG Service components responsible for the lifecycle of TKG clusters were delivered as part of vCenter releases. Starting with the vSphere 8.0 Update 3 release, TKG Service components are decoupled from vCenter and packaged as a Supervisor service which can be updated and managed independent of vCenter releases.
With this flexibility, TKG Service releases are delivered in two ways: independently and bundled with vCenter. When released independently, you can register and upgrade the TKG Service separately from upgrading vCenter. When bundled with vCenter, the TKG Service is automatically registered for upgrade after you upgrade vCenter. Refer to the TKG Service documentation for details.
TKG Service 3.0 is the first release of the decoupled TKG Service. It is available as part of the vSphere 8.0 Update 3 release. The system automatically installs TKG Service 3.0 when you install or upgrade to the vSphere IaaS control plane component versions listed in the table. Once installed, you can upgrade to subsequent TKG Service releases, such as TKG Service 3.1, 3.2, and so on.
Component | Required Version
---|---
vCenter Server | 8.0 Update 3 (8.0.3)
vSphere Namespaces | 0.1.9
Supervisor | 1.28.3, 1.27.5, 1.26.8
TKr | v1.25.7 or later for vSphere 8.x
TKG Service 3.0 has the following known issues.
One of the OSImages might be missing from non-legacy TKr >= v1.28 if vSphere is upgraded from 7.x to 8.x.
Description: If, after upgrading from vSphere 7.x to 8.x, you list available TKrs for vSphere 8.x, you may not see both OSImages. For example, the following command (which must be run from the vSphere Namespace where a TKG cluster is provisioned) returns only one OSImage when you should see two:
kubectl get osimage -A | grep 1.28
Workaround: Delete any non-legacy TKr >= v1.28 and wait for it to be re-created, which happens after approximately 10 minutes. The recreated TKr will have both the Photon and Ubuntu images, and the OSImage Kubernetes resources will also be created for both Photon and Ubuntu.
If you cannot wait for the TKr to be recreated, you can delete the active vmware-system-tkg-controller-manager pod, which will recreate the TKr.
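A sketch of that workaround follows. The namespace hosting the controller pod varies by environment, so look it up first rather than assuming a fixed value.

```shell
# Locate the active controller pod; the namespace differs per environment:
kubectl get pods -A | grep vmware-system-tkg-controller-manager

# Then delete it, substituting the pod name and namespace from the output.
# The pod is recreated automatically and rebuilds the missing TKr:
kubectl delete pod <pod-name> -n <namespace>
```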
Supervisor backup taken via TMC with TKG Service 3.0 installed does not restore after 8 hours due to token expiration.
Description: On a TMC-managed cluster, after restoring a Supervisor from backup taken with TKG Service 3.0, the TMC agent is in a CrashLoopBackoff state.
Workaround: Reregister the cluster with TMC.