VMware TKrs | 3 OCTOBER 2024
Check frequently for additions and updates to these release notes.
VMware TKrs provide you with Kubernetes software distributions signed and supported by VMware for provisioning TKG Service clusters in the vSphere IaaS control plane environment. VMware TKrs follow the upstream Kubernetes versioning.
The availability of TKrs depends on the vCenter content library and its synchronization frequency. Use the command kubectl get tkr to list the TKrs that are available in your environment. If a release is not synchronized with your library, it will not appear in the list returned by the command. Refer to the documentation for more information.
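For example (illustrative output only; names, versions, and readiness vary by environment and library sync state):
kubectl get tkr
NAME                       VERSION                  READY   COMPATIBLE   AGE
v1.30.1---vmware.1-tkg.1   v1.30.1+vmware.1-tkg.1   True    True         10d
v1.31.1---vmware.1-tkg.1   v1.31.1+vmware.1-tkg.1   True    True         2d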
The TKrs for vSphere 8.x are updated to a package-based framework. While you can run vSphere 7.x TKrs on vSphere 8.x, you cannot take advantage of vSphere 8.x features unless you upgrade the cluster to use a TKr for vSphere 8.x.
Use the command kubectl get tkr to verify compatibility of TKrs with the TKG Service. See also the Compatibility Matrix. Upgrading TKG clusters to a TKr for vSphere 8.x is supported starting with the vSphere 8.0a release. You must update existing TKG clusters to a minimum of TKr v1.21.x before upgrading your environment to vSphere 8.0a. After you have upgraded to vSphere 8.0a, update clusters to a TKr in the vSphere 8.x format to take advantage of new features. See the upgrade documentation for details.
Upgrading is only supported when the origin and target TKrs are the same operating system. For example, you cannot upgrade from a Photon OS TKr to an Ubuntu TKr.
Use the command kubectl get tkr TKR_NAME -o yaml to list components and packages. The TKr format that supports provisioning TKG clusters on vSphere 8.x uses a package-based framework to deliver VMware software and Kubernetes components.
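For example, assuming a TKr named v1.31.1---vmware.1-tkg.1 appears in your environment (the name is illustrative):
kubectl get tkr v1.31.1---vmware.1-tkg.1 -o yaml
The returned spec enumerates the OS image and the bundled packages, such as those listed in the tables below.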
VMware provides Security Technical Implementation Guides (STIG) for vSphere with Tanzu, including Supervisor and TKrs. See STIG Hardening for details.
If you have deployed vSphere IaaS control plane 8.0.0 GA, be aware of the following CRITICAL REQUIREMENT.
Before upgrading to vSphere 8 U1, you must create a temporary TKr content library to avoid a known issue that causes TKG Controller pods to go into CrashLoopBackOff when TKrs are pushed to the existing content library. To avoid this issue, complete the following steps.
NOTE: This requirement does not apply to content libraries you use for VMs provisioned through VM Service.
Create a new subscribed content library with a temporary subscription URL pointing to https://wp-content.vmware.com/v2/8.0.0/lib.json.
Synchronize all the items in the temporary content library.
Associate the temporary content library with the TKG Service.
Run the command kubectl get tkr
and verify that all the TKrs are created.
At this point the TKG Controller should be in a running state, which you can verify by listing the pods in the Supervisor namespace (see the verification sketch after this procedure).
If the TKG Controller is in CrashLoopBackOff (CLBO) state, restart the TKG Controller deployment using the following command:
kubectl rollout restart deployment -n vmware-system-tkg vmware-system-tkg-controller-manager
Upgrade to vSphere 8 U1.
Update the TKG Service configuration to use the original subscribed content library at https://wp-content.vmware.com/v2/latest/lib.json. NOTE: The content library is not namespace-scoped, so when you update one TKG Service configuration, the change should be propagated to all others. You should verify this by checking each vSphere Namespace.
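A minimal verification sketch for the steps above, assuming kubectl is pointed at the Supervisor and the TKG Controller runs in the vmware-system-tkg namespace, as in the restart command above:
kubectl get tkr                         # all TKrs should be listed
kubectl get pods -n vmware-system-tkg   # TKG Controller pods should be Running
kubectl rollout restart deployment -n vmware-system-tkg vmware-system-tkg-controller-manager   # only if pods are in CrashLoopBackOff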
TKr 1.31.1 is generally available for use with vSphere 8.x.
| Name | Release Date | vCenter Server 8.0 | TKG Service |
|---|---|---|---|
| | 10/3/2024 | vCenter Server 8.0 U3 and later | TKG Service 3.2 and later |
TKr 1.31.1 includes the following new features:
Support for Kubernetes release v1.31 (previously referred to as Tanzu Kubernetes release, or TKr). To run Kubernetes release v1.31, you must upgrade your TKG Service to v3.2. Refer to the TKG Service 3.2 Release Notes.
| Operating System | OS Version | Kernel Version |
|---|---|---|
| Photon OS | VMware Photon OS 5.0 | 6.1.109-2.ph5 |
| Ubuntu OS | Ubuntu 22.04.4 LTS | 5.15.0-122-generic |
TKr 1.31.1 for use with vSphere 8.x includes the following Kubernetes components.
| Component | Version |
|---|---|
| kubernetes | v1.31.1 |
| coredns | v1.11.3 |
| etcd | v3.5.16 |
| containerd | v1.7.22 |
| cni-plugins | v1.5.1 |
TKr 1.31.1 for use with vSphere 8.x includes the following VMware packages.
| Package | Version |
|---|---|
| antrea | v2.1.0 |
| calico | v3.28.1 |
| guest-cluster-auth-service | v1.4.0 |
| kapp-controller | v0.53.0 |
| metrics-server | v0.7.1 |
| pinniped | v0.32.0 |
| secretgen-controller | v0.18.0 |
| vsphere-cpi | v1.31.0 |
| vsphere-pv-csi | v3.3.1 |
| gateway-api-package | v1.0.0 |
The Standard Package version v2024.9.18 includes the following packages.
URL for Standard Packages: projects.packages.broadcom.com/vsphere/iaas/packages/2024.9.18/tanzu-standard-packages:v2024.9.18
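A minimal sketch of a kapp-controller PackageRepository manifest that consumes this URL, assuming kapp-controller is installed on the cluster; the repository name tanzu-standard and namespace tkg-system are illustrative placeholders:
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageRepository
metadata:
  name: tanzu-standard
  namespace: tkg-system
spec:
  fetch:
    imgpkgBundle:
      # Standard Packages bundle for this TKr (URL from above)
      image: projects.packages.broadcom.com/vsphere/iaas/packages/2024.9.18/tanzu-standard-packages:v2024.9.18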
| Package | Latest Version (recommended for 1.31) | Other included versions |
|---|---|---|
| Cluster Autoscaler | v1.31.0+vmware.1-tkg.1 | v1.30.0+vmware.1-tkg.1, v1.29.0+vmware.1-tkg.1, v1.28.0+vmware.1-tkg.1, v1.27.2+vmware.1-tkg.3 |
| Cert Manager | v1.13.3+vmware.1-tkg.1 | v1.12.10+vmware.2-tkg.1, v1.1.0+vmware.1-tkg.2, v1.1.0+vmware.2-tkg.1, v1.5.3+vmware.2-tkg.1, v1.5.3+vmware.4-tkg.1, v1.5.3+vmware.7-tkg.1, v1.5.3+vmware.7-tkg.3, v1.7.2+vmware.1-tkg.1, v1.7.2+vmware.3-tkg.1, v1.7.2+vmware.3-tkg.3, v1.11.1+vmware.1-tkg.1, v1.12.2+vmware.2-tkg.2 |
| Contour | 1.28.2+vmware.1-tkg.1 | 1.27.1+vmware.1-tkg.1, 1.26.2+vmware.1-tkg.1 |
| External DNS | 0.13.6+vmware.1-tkg.1 | v0.10.0+vmware.1-tkg.1, v0.10.0+vmware.1-tkg.2, v0.10.0+vmware.1-tkg.8, v0.11.0+vmware.1-tkg.2, v0.11.0+vmware.1-tkg.8, v0.12.2+vmware.7-tkg.1 |
| Fluent Bit | v2.2.3+vmware.1-tkg.2 | v1.7.5+vmware.1-tkg.1, v1.7.5+vmware.2-tkg.1, v1.8.15+vmware.1-tkg.1, v1.9.5+vmware.1-tkg.2, v2.1.6+vmware.1-tkg.1, v2.1.6+vmware.1-tkg.2, v2.2.3+vmware.1-tkg.1 |
| FluxCD Helm Controller | 0.36.2+vmware.1-tkg.1 | v0.21.0+vmware.1-tkg.1, v0.21.0+vmware.1-tkg.5, v0.28.1+vmware.1-tkg.4 |
| FluxCD Kustomize Controller | 1.1.1+vmware.1-tkg.1 | v0.24.4+vmware.1-tkg.1, v0.24.4+vmware.1-tkg.5, v0.32.0+vmware.1-tkg.4 |
| FluxCD Source Controller | 1.1.2+vmware.5-tkg.1 | v0.24.4+vmware.2-tkg.3, v0.33.0+vmware.2-tkg.3, 0.36.1+vmware.2-tkg.2, 1.1.2+vmware.1-tkg.1 |
| Grafana | v10.0.1+vmware.1-tkg.3 | v7.5.7+vmware.1-tkg.1, v7.5.7+vmware.2-tkg.1, v7.5.16+vmware.1-tkg.1, v7.5.17+vmware.1-tkg.2, v10.0.1+vmware.1-tkg.2 |
| Harbor | 2.9.1+vmware.1-tkg.1 | 2.8.4+vmware.1-tkg.1 |
| Prometheus | v2.45.0+vmware.1-tkg.3 | v2.37.0+vmware.3-tkg.1, v2.43.0+vmware.2-tkg.1, v2.45.0+vmware.1-tkg.1, v2.45.0+vmware.1-tkg.2 |
| External CSI Snapshot Validation Webhook | 6.1.0+vmware.1-tkg.6 | |
| vSphere PV CSI Webhook | 3.1.0+vmware.1-tkg.3 | |
TKr 1.31.1 for vSphere 8.x has the following fixed issue:
When vCenter Server public keys are renewed, after you authenticate with the Kubernetes cluster successfully, kubectl commands fail with the following error:
$ kubectl get pod -A
error: You must be logged in to the server (Unauthorized)
The issue is fixed in 1.31.1: public keys are now automatically reloaded.
TKr 1.30.1 is generally available for use with vSphere 8.x.
| Name | Release Date | vCenter Server 8.0 | TKG Service |
|---|---|---|---|
| | 7/8/2024 | vCenter Server 8.0 U3 and later | TKG Service 3.1 and later |
TKr 1.30.1 includes the following new features.
Support for Kubernetes version 1.30.1 for Photon and Ubuntu operating systems.
| Operating System (OS) | OS Version | Kernel Version |
|---|---|---|
| Photon | VMware Photon OS 5.0 | 6.1.83-4.ph5 |
| Ubuntu | Ubuntu 22.04.4 LTS | 5.15.0-107-generic |
TKr v1.30.1 for vSphere 8.x includes the following Kubernetes components.
| Component | Version |
|---|---|
| kubernetes | v1.30.1 |
| coredns | v1.11.1 |
| etcd | v3.5.12 |
| containerd | v1.6.31 |
| cri-tools | v1.29.0 |
| cni-plugins | v1.4.0 |
TKr v1.30.1 for vSphere 8.x includes the following VMware packages.
| Package | Version |
|---|---|
| antrea | v1.15.1 |
| calico | v3.27.3 |
| guest-cluster-auth-service | v1.3.3 |
| kapp-controller | v0.50.0 |
| metrics-server | v0.6.2 |
| pinniped | v0.25.0 |
| secretgen-controller | v0.16.1 |
| vsphere-cpi | v1.30.1 |
| vsphere-pv-csi | v3.3.0 |
| capabilities | v0.32.1 |
| gateway-api-package | v1.0.0 |
The Standard Package v2024.7.2 repository includes the following standard packages.
| Package | Version | Upgrades from |
|---|---|---|
| Cluster Autoscaler | v1.30.0+vmware.1-tkg.1-ca | v1.29.0+vmware.1-tkg.1-ca, v1.28.0+vmware.1-tkg.1-ca, v1.27.2+vmware.1-tkg.3-ca |
| Cert Manager | 1.12.10+vmware.1-tkg.1 | v1.12.2+vmware.2-tkg.2-cert-manager, v1.11.1+vmware.1-tkg.1-cert-manager, v1.7.2+vmware.3-tkg.3-cert-manager, v1.5.3+vmware.7-tkg.3-cert-manager, v1.7.2+vmware.3-tkg.1-cert-manager, v1.5.3+vmware.7-tkg.1-cert-manager, v1.7.2+vmware.1-tkg.1-cert-manager, v1.5.3+vmware.4-tkg.1-cert-manager, v1.5.3+vmware.2-tkg.1-cert-manager, v1.1.0+vmware.2-tkg.1-cert-manager, v1.1.0+vmware.1-tkg.2-cert-manager |
| Contour | 1.28.2+vmware.1-tkg.1 | 1.27.1+vmware.1-tkg.1, 1.26.2+vmware.1-tkg.1 |
| External DNS | v0.13.6+vmware.1-tkg.1-external-dns | v0.12.2+vmware.7-tkg.1-external-dns, v0.11.0+vmware.1-tkg.8-external-dns, v0.10.0+vmware.1-tkg.8-external-dns, v0.11.0+vmware.1-tkg.2-external-dns, v0.10.0+vmware.1-tkg.2-external-dns, v0.10.0+vmware.1-tkg.1-external-dns |
| Fluent Bit | v2.2.3+vmware.1-tkg.1-fluent-bit | v2.1.6+vmware.1-tkg.2-fluent-bit, v2.1.6+vmware.1-tkg.1-fluent-bit, v1.9.5+vmware.1-tkg.2-fluent-bit, v1.8.15+vmware.1-tkg.1-fluent-bit, v1.7.5+vmware.2-tkg.1-fluent-bit, v1.7.5+vmware.1-tkg.1-fluent-bit |
| FluxCD Helm Controller | v0.36.2+vmware.1-tkg.1-helm-ctrl | v0.28.1+vmware.1-tkg.4-helm-ctrl, v0.21.0+vmware.1-tkg.5-helm-ctrl, v0.21.0+vmware.1-tkg.1-helm-ctrl |
| FluxCD Kustomize Controller | v1.1.1+vmware.1-tkg.1-kustomize-ctrl | v0.32.0+vmware.1-tkg.4-kustomize-ctrl, v0.24.4+vmware.1-tkg.5-kustomize-ctrl, v0.24.4+vmware.1-tkg.1-kustomize-ctrl |
| FluxCD Source Controller | v1.1.2+vmware.5-tkg.1-source-ctrl | v0.36.1+vmware.2-tkg.2-source-ctrl, v0.33.0+vmware.2-tkg.3-source-ctrl, v0.24.4+vmware.2-tkg.3-source-ctrl |
| Grafana | v10.0.1+vmware.1-tkg.2-grafana | v7.5.17+vmware.1-tkg.2-grafana, v7.5.16+vmware.1-tkg.1-grafana, v7.5.7+vmware.2-tkg.1-grafana, v7.5.7+vmware.1-tkg.1-grafana |
| Harbor | v2.9.1+vmware.1-tkg.1-harbor | v2.8.4+vmware.1-tkg.1-harbor |
| Multus CNI | v4.0.1+vmware.2-tkg.1-multus-cni | v3.8.0+vmware.3-tkg.1-multus-cni, v3.8.0+vmware.2-tkg.2-multus-cni, v3.8.0+vmware.1-tkg.1-multus-cni, v3.7.1+vmware.2-tkg.2-multus-cni, v3.7.1+vmware.2-tkg.1-multus-cni, v3.7.1+vmware.1-tkg.1-multus-cni |
| Prometheus | v2.45.0+vmware.1-tkg.2-prometheus | v2.45.0+vmware.1-tkg.1-prometheus, v2.43.0+vmware.2-tkg.1-prometheus, v2.37.0+vmware.3-tkg.1-prometheus |
| External CSI Snapshot Validation Webhook | v6.1.0+vmware.1-tkg.6-snapshot-webhook | |
| vSphere PV CSI Webhook | v3.1.0+vmware.1-tkg.3-vspherecsiwebhook | |
| Whereabouts | v0.6.3+vmware.1-tkg.2-whereabouts | v0.5.1+vmware.2-tkg.1-whereabouts, v0.5.4+vmware.1-tkg.1-whereabouts, v0.5.4+vmware.2-tkg.1-whereabouts |
Known issues: none.
Fixed issue: After decreasing the node-pool max count, scale-down of worker nodes failed on TKG clusters. This issue is fixed in this release.
TKr 1.29.5 is generally available for use with vSphere 8.x.
Contains a fix for CVE-2024-6387.
Contains an NVIDIA GPU fix.
| Name | Release Date | vCenter Server 8.0 | TKG Service |
|---|---|---|---|
| | 9/6/2024 | vCenter Server 8.0 U3 and later | TKG Service 3.1 and later |
TKr 1.29.5 includes the following new features.
Support for Kubernetes version 1.29.5 for Photon OS and Ubuntu.
| Operating System (OS) | OS Version | Kernel Version |
|---|---|---|
| Photon | VMware Photon OS 5.0 | 6.1.83-4.ph5 |
| Ubuntu | Ubuntu 22.04.4 LTS | 5.15.0-116-generic |
TKr v1.29.5 for vSphere 8.x includes the following Kubernetes components.
| Component | Version |
|---|---|
| kubernetes | v1.29.5 |
| coredns | v1.11.1 |
| etcd | v3.5.12 |
| containerd | v1.6.31 |
| cri-tools | v1.28.0 |
| cni-plugins | v1.3.0 |
TKr 1.29.5 for vSphere 8.x includes the following VMware packages.
| Package | Version |
|---|---|
| antrea | v1.13.3 |
| calico | v3.27.3 |
| guest-cluster-auth-service | v1.3.3 |
| kapp-controller | v0.50.0 |
| metrics-server | v0.6.2 |
| pinniped | v0.25.0 |
| secretgen-controller | v0.16.1 |
| vsphere-cpi | v1.29.0 |
| vsphere-pv-csi | v3.2.0 |
| gateway-api-package | v1.0.0 |
The Standard Package v2024.6.27 repository includes the following standard packages.
| Package | Version |
|---|---|
| Cluster Autoscaler | 1.29.0+vmware.1-tkg.1 |
| Cert Manager | 1.12.10+vmware.1-tkg.1 |
| Contour | 1.26.2+vmware.1-tkg.1, 1.27.1+vmware.1-tkg.1, 1.28.2+vmware.1-tkg.1 |
| External DNS | 0.13.6+vmware.1-tkg.1 |
| Fluent Bit | 2.1.6+vmware.1-tkg.2 |
| FluxCD Helm Controller | 0.36.2+vmware.1-tkg.1 |
| FluxCD Kustomize Controller | 1.1.1+vmware.1-tkg.1 |
| FluxCD Source Controller | 0.36.1+vmware.2-tkg.2, 1.1.2+vmware.1-tkg.1, 1.1.2+vmware.5-tkg.1 |
| Grafana | 10.0.1+vmware.1-tkg.2 |
| Harbor | 2.8.4+vmware.1-tkg.1, 2.9.1+vmware.1-tkg.1 |
| Prometheus | 2.45.0+vmware.1-tkg.2 |
| External CSI Snapshot Validation Webhook | 6.1.0+vmware.1-tkg.6 |
| vSphere PV CSI Webhook | 3.1.0+vmware.1-tkg.3 |
TKr 1.29.4 is generally available for use with vSphere 8.x.
| Name | Release Date | vCenter Server 8.0 |
|---|---|---|
| | 6/27/2024 | vCenter Server 8.0 U3 and later |
TKr 1.29.4 includes the following new features.
Support for Kubernetes version 1.29.4 for Photon and Ubuntu operating systems.
| | Photon OS | Ubuntu OS |
|---|---|---|
| OS Version | VMware Photon Linux 5.0 | Ubuntu 22.04.4 LTS |
| Kernel Version | 6.1.83-4.ph5 | 5.15.0-107-generic |
Container runtime:
runc updated to version 1.1.12
containerd updated to version 1.6.31
TKr v1.29.4 for vSphere 8.x includes the following Kubernetes components.
| Component | Version |
|---|---|
| kubernetes | v1.29.4 |
| coredns | v1.10.1 |
| etcd | v3.5.12 |
| containerd | v1.6.31 |
| cri-tools | v1.28.0 |
| cni-plugins | v1.3.0 |
TKr v1.29.4 for vSphere 8.x includes the following VMware packages.
| Package | Version |
|---|---|
| antrea | v1.13.3 |
| calico | v3.27.3 |
| guest-cluster-auth-service | v1.3.3 |
| kapp-controller | v0.50.0 |
| metrics-server | v0.6.2 |
| pinniped | v0.25.0 |
| secretgen-controller | v0.16.1 |
| vsphere-cpi | v1.29.0 |
| vsphere-pv-csi | v3.2.0 |
The Standard Package v2024.6.27 repository includes the following standard packages.
| Package | Version |
|---|---|
| Cluster Autoscaler | 1.29.0+vmware.1-tkg.1 |
| Cert Manager | 1.12.10+vmware.1-tkg.1 |
| Contour | 1.26.2+vmware.1-tkg.1, 1.27.1+vmware.1-tkg.1, 1.28.2+vmware.1-tkg.1 |
| External DNS | 0.13.6+vmware.1-tkg.1 |
| Fluent Bit | 2.1.6+vmware.1-tkg.2 |
| FluxCD Helm Controller | 0.36.2+vmware.1-tkg.1 |
| FluxCD Kustomize Controller | 1.1.1+vmware.1-tkg.1 |
| FluxCD Source Controller | 0.36.1+vmware.2-tkg.2, 1.1.2+vmware.1-tkg.1, 1.1.2+vmware.5-tkg.1 |
| Grafana | 10.0.1+vmware.1-tkg.2 |
| Harbor | 2.8.4+vmware.1-tkg.1, 2.9.1+vmware.1-tkg.1 |
| Prometheus | 2.45.0+vmware.1-tkg.2 |
| External CSI Snapshot Validation Webhook | 6.1.0+vmware.1-tkg.6 |
| vSphere PV CSI Webhook | 3.1.0+vmware.1-tkg.3 |
TKr 1.28.8 is generally available for use with vSphere 8.x.
| Name | Release Date | vCenter Server 8.0 |
|---|---|---|
| | 5/8/2024 | vCenter Server 8.0 U2c and later |
TKr 1.28.8 includes the following new features.
Support for Kubernetes version 1.28.8 for PhotonOS and Ubuntu.
PhotonOS 5.0: Kernel version: 6.1.81-2.ph5
Ubuntu 22.04.4 LTS: Kernel version: 5.15.0-101-generic
PhotonOS upgraded to PhotonOS 5
runc updated to version 1.1.12
containerd updated to version 1.6.28
TKr 1.28.8 for vSphere 8.x includes the following Kubernetes components.
| Component | Version |
|---|---|
| kubernetes | v1.28.8 |
| coredns | v1.10.1 |
| etcd | v3.5.12 |
| containerd | v1.6.28 |
| cri-tools | v1.27.0 |
| cni-plugins | v1.2.0 |
TKr 1.28.8 for vSphere 8.x includes the following VMware packages.
| Package | Version |
|---|---|
| antrea | v1.13.3 |
| calico | v3.26.3 |
| guest-cluster-auth-service | v1.3.3 |
| kapp-controller | v0.48.2 |
| metrics-server | v0.6.2 |
| pinniped | v0.25.0 |
| secretgen-controller | v0.15.0 |
| vsphere-cpi | v1.28.0 |
| vsphere-pv-csi | v3.1.0 |
| capabilities | v0.32.1 |
TKr 1.27.11 is generally available for use with vSphere 8.x.
| Name | Release Date | vCenter Server 8.0 |
|---|---|---|
| | 4/18/2024 | vCenter Server 8.0 U2c and later |
TKr 1.27.11 includes the following new features.
Support for Kubernetes version 1.27.11 for PhotonOS and Ubuntu.
PhotonOS 3.0: Kernel version: 4.19.306-1.ph3
Ubuntu 22.04.4 LTS: Kernel version: 5.15.0-97-generic
runc has been updated to version 1.1.12
containerd has been updated to version 1.6.28
TKr 1.27.11 for vSphere 8.x includes the following Kubernetes components.
| Component | Version |
|---|---|
| kubernetes | v1.27.11 |
| coredns | v1.10.1 |
| etcd | v3.5.11 |
| containerd | v1.6.28 |
| cri-tools | v1.26.0 |
| cni-plugins | v1.2.0 |
TKr 1.27.11 for vSphere 8.x includes the following VMware packages.
| Package | Version |
|---|---|
| antrea | v1.13.3 |
| calico | v3.26.3 |
| guest-cluster-auth-service | v1.3.0 |
| kapp-controller | v0.48.2 |
| metrics-server | v0.6.2 |
| pinniped | v0.25.0 |
| secretgen-controller | v0.15.0 |
| vsphere-cpi | v1.27.0 |
| vsphere-pv-csi | v3.1.0 |
| capabilities | v0.32.1 |
TKr 1.26.13 is generally available for use with vSphere 8.x.
| Name | Release Date | vCenter Server 8.0 |
|---|---|---|
| | 3/15/2024 | vCenter Server 8.0 U1c and later |
TKr 1.26.13 includes the following new features.
Support for Kubernetes version 1.26.13 for PhotonOS and Ubuntu.
PhotonOS 3.0: Kernel version: 4.19.305-6.ph3
Ubuntu 20.04.6 LTS: Kernel version: 5.4.0-171-generic
runc has been updated to version 1.1.12
containerd has been updated to version 1.6.28
TKr 1.26.13 for vSphere 8.x includes the following Kubernetes components.
| Component | Version |
|---|---|
| kubernetes | v1.26.13 |
| coredns | v1.9.3 |
| etcd | v3.5.11 |
| containerd | v1.6.28 |
| cri-tools | v1.25.0 |
| cni-plugins | v1.1.1 |
TKr 1.26.13 for vSphere 8.x includes the following VMware packages.
| Package | Version |
|---|---|
| antrea | v1.11.3 |
| calico | v3.25.1 |
| guest-cluster-auth-service | v1.3.0 |
| kapp-controller | v0.45.2 |
| metrics-server | v0.6.2 |
| pinniped | v0.24.0 |
| secretgen-controller | v0.14.2 |
| vsphere-cpi | v1.26.2 |
| vsphere-pv-csi | v3.1.0 |
| capabilities | v0.30.0 |
TKG clusters randomly go into a ClusterBootstrapReconciling state.
This issue is fixed with this release of the TKr, which includes the Antrea v1.11.2 package.
TKr 1.26.5 is generally available for use with vSphere 8.x.
| Name | Release Date | vCenter Server 8.0 |
|---|---|---|
| | 8/24/2023 | vCenter Server 8.0 U1c and later |
TKr 1.26.5 includes the following new features.
Support for Kubernetes version 1.26.5 for PhotonOS and Ubuntu.
Photon OS Kernel version: photon-4.19.288-2.ph3
Ubuntu Kernel version: ubuntu-20.04.1
With TKr 1.26.5, TKG on Supervisor will enforce PSA by default on TKG clusters.
Sample command to update pod security on a TKr 1.26 Kubernetes namespace:
kubectl label --overwrite ns NAMESPACE pod-security.kubernetes.io/enforce=privileged
PSA policies on system namespaces cannot be changed.
For more information, refer to the Kubernetes PSA documentation.
See also:
TKr 1.25 Release Notes
TKG on Supervisor documentation
Tanzu Kubernetes Grid Image Builder 0.3.0 for vSphere 8.x is available on Github.
This release adds support for building custom Photon and Ubuntu images for Kubernetes 1.26.5.
TKr 1.26.5 for vSphere 8.x includes the following Kubernetes components.
| Component | Version |
|---|---|
| kubernetes | v1.26.5 |
| coredns | v1.9.3 |
| etcd | v3.5.6 |
TKr 1.26.5 for vSphere 8.x includes the following VMware packages.
| Package | Version |
|---|---|
| antrea | v1.11.1 |
| calico | v3.25.1 |
| guest-cluster-auth-service | v1.3.0 |
| kapp-controller | v0.45.2 |
| metrics-server | v0.6.2 |
| pinniped | v0.24.0 |
| secretgen-controller | v0.14.2 |
| vsphere-cpi | v1.26.2 |
| vsphere-pv-csi | v3.1.0 |
| capabilities | v0.30.0 |
TKG clusters using Antrea CNI have been observed flipping between Ready True/False.
TKG clusters using TKr v1.26.5 with the Antrea package v1.11.1 can randomly enter into a ClusterBootstrapReconciling state.
Workaround: Upgrade to TKr v1.26.13, which includes the Antrea v1.11.2 package that fixes the issue.
TKr 1.26.5 for vSphere 8.x has the following fixed issues.
Iptables fails with exit code 3 (no child processes) due to bpfilter kernel bug
Name length issue caused by the kapp controller is fixed
The name length issue is fixed and the fix is available from v0.45.0 of the kapp controller.
Photon images now consume the generic linux kernel package instead of linux-esx
The linux-esx kernel package lacks some of the features supported by the generic linux kernel package. Because there is no specific requirement to support the linux-esx kernel, moving to the linux kernel package is preferred.
TKr 1.25.7 is generally available for use with vSphere 8.x.
| Name | Release Date | vCenter Server 8.0 |
|---|---|---|
| | 7/27/2023 | vSphere 8.0 U1c and later |
TKr 1.25.7 includes the following new features.
Support for Kubernetes version 1.25.7 for PhotonOS and Ubuntu.
Support for Pod Security Admission (PSA) controller to replace Pod Security Policies (PSP).
With TKr 1.25, TKG on Supervisor supports applying pod security standards of type privileged, baseline, and restricted using the Pod Security Admission (PSA) controller. By default, for a TKG cluster created with 1.25 TKr, all namespaces have their pod security warn and audit modes set to restricted. This is a no-force setting: the PSA controller may generate warnings about pods violating policy, but the pods will continue to run. Some system pods running in kube-system, tkg-system, and vmware-system-cloud-provider require elevated privileges. These namespaces are excluded from pod security.
With the 1.26 TKr, TKG on Supervisor will enforce PSA by default on TKG clusters. TKr 1.25 users should plan to migrate their TKG cluster workloads to PSA in anticipation of upgrading TKG clusters to TKr 1.26.
PSA configuration for TKG on Supervisor:
TKr 1.25.7 - warn mode: restricted, audit mode: restricted, enforce mode: not set
TKr 1.26 and later - enforce mode: restricted
Sample commands to update pod security on a TKR 1.25 Kubernetes namespace:
kubectl label --overwrite ns NAMESPACE pod-security.kubernetes.io/audit=privileged
kubectl label --overwrite ns NAMESPACE pod-security.kubernetes.io/warn=privileged
PSA policies on system namespaces cannot be changed.
For more information, refer to the Kubernetes PSA documentation.
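To confirm the resulting labels, a hedged check using the same NAMESPACE placeholder as the commands above:
kubectl get ns NAMESPACE --show-labels
Look for the pod-security.kubernetes.io/audit and pod-security.kubernetes.io/warn labels in the output.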
Tanzu Kubernetes Grid Image Builder 0.2.0 for vSphere 8.x is available on Github. This release adds support for building Photon and Ubuntu images for Kubernetes 1.25.7.
TKr 1.25.7 for use with vSphere 8.x includes the following Kubernetes components.
| Component | Version |
|---|---|
| kubernetes | v1.25.7 |
| coredns | v1.9.3 |
| etcd | v3.5.6 |
TKr 1.25.7 for use with vSphere 8.x includes the following VMware packages.
| Package | Version |
|---|---|
| antrea | v1.9.0 |
| calico | v3.24.1 |
| guest-cluster-auth-service | v1.3.0 |
| kapp-controller | v0.41.7 |
| metrics-server | v0.6.2 |
| pinniped | v0.12.1 |
| secretgen-controller | v0.11.2 |
| vsphere-cpi | v1.25.1 |
| vsphere-pv-csi | v2.7.1 |
| capabilities | v0.29.0 |
TKr 1.25.7 for use with vSphere 8.x has the following known issues.
For clusters provisioned using the v1beta1 API, upgrading the Kubernetes version from TKr v1.24.9 to TKr v1.25.7 can result in an inconsistent cluster state if you couple the version upgrade with a scale-up of the number of nodes in a node pool.
For clusters provisioned using the v1beta1 API, attempting to perform a version upgrade from TKr v1.24.9+vmware.1-tkg.4 to TKr v1.25.7+vmware.3-fips.1-tkg.1 in the same operation as a scale-up of the number of replicas in a node pool can lead to the cluster upgrade failing. The control plane upgrades, but the node pool scale-up operation fails, causing the cluster upgrade to roll back.
NOTE: This issue has been observed only on v1beta1 Clusters with Supervisor running Cluster API (CAPI) version less than 1.5.0. Supervisor running CAPI 1.5.0 or above will not face this issue.
If this issue does occur, scale in the node pool to 0 and wait for the cluster to complete the upgrade. Once the upgrade is successful, scale out the nodes to the desired value.
NOTE: As a general practice, it is recommended that you perform a cluster version upgrade first and separately. Once the version upgrade completes successfully, you can then perform a second operation and scale in or out the number of nodes in a node pool.
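A hedged sketch of the scale-in workaround above, assuming a v1beta1 Cluster named CLUSTER_NAME in vSphere Namespace NS with a single machine deployment at index 0 (adjust names, index, and replica counts for your topology):
kubectl patch cluster CLUSTER_NAME -n NS --type=json -p '[{"op":"replace","path":"/spec/topology/workers/machineDeployments/0/replicas","value":0}]'
After the version upgrade completes, rerun the command with the desired replica count to scale the node pool back out.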
TKr 1.25.7 for use with vSphere 8.x has the following fixed issues.
TKG clusters on Supervisor do not display pod metrics
The 1.23.15 and 1.24.9 TKG 2.0 TKrs used cgroupfs as the cgroup driver for containerd. With 1.25, containerd uses systemd as the cgroup driver.
High slab memory usage; Photon released with updated kernel parameters
The Photon OS kernel used for TKr 1.23.8 and 1.24.9 had a cgroup memory leak.
Online volume expansion on a restored PVC failed
Fixes an issue with online expansion of persistent volumes in workload clusters.
Regression in PVC creation performance
Fixes a performance issue caused by a regression in the pvcsi driver shipped with TKr 1.24.9.
TKr 1.24.9 is generally available for use with vSphere 8.x.
| Name | Release Date | vCenter Server 8.0 |
|---|---|---|
| | 4/18/2023 | vCenter Server 8.0 U1 and later |
TKr 1.24.9 for vSphere 8.x includes the following new features.
Support for Kubernetes version 1.24.9 for PhotonOS and Ubuntu.
Support for building your own custom TKr image for TKG 2.0 nodes using the vSphere TKG Image Builder.
TKr 1.24.9 for use with vSphere 8.x includes the following components.
| Kubernetes Component | Version |
|---|---|
| kubernetes | v1.24.9 |
| coredns | v1.8.6 |
| etcd | v3.5.6 |

| Package Name | Version |
|---|---|
| antrea | v1.7.2 |
| calico | v3.24.1 |
| capabilities | v0.28.0 |
| guest-cluster-auth-service | v1.1.0 |
| kapp-controller | v0.41.5 |
| metrics-server | v0.6.2 |
| pinniped | v0.12.1 |
| secretgen-controller | v0.11.2 |
| vsphere-cpi | v1.24.3 |
| vsphere-pv-csi | v2.6.1 |
TKr 1.24.9 for use with vSphere 8.x has the following known issues.
TKG 2.0 cluster creation fails if the cluster name contains more than 31 characters.
Due to an issue with the kapp controller, a TKG cluster cannot be created if the cluster name has more than 31 characters. This issue will be fixed in a future release when the underlying kapp controller issue is fixed.
Do not use a cluster name with more than 31 characters.
TKG cluster upgrades to TKr v1.24.9 from TKr v1.23.x may fail with the control plane (CP) node going into a NotReady state.
After upgrading a TKG cluster to TKr v1.24.9+vmware.1-tkg.4 from TKr v1.23.x, the TKG cluster may not be upgraded and the CP node may be stuck in a NotReady state.
The following upgrade paths are affected:
FROM v1.23.15---vmware.1-tkg.4 (TKG 2 TKR Photon and Ubuntu) --TO--> v1.24.9+vmware.1-tkg.4 (TKG 2 TKR Photon and Ubuntu)
FROM v1.23.8+vmware.2-tkg.2-zshippable (TKG 2 TKR Photon and Ubuntu) --TO--> v1.24.9+vmware.1-tkg.4 (TKG 2 TKR Photon and Ubuntu)
FROM v1.23.8+vmware.3-tkg.1 (TKG 1 TKR Photon) --TO--> v1.24.9+vmware.1-tkg.4 (TKG 2 TKR Photon)
On the control plane node, manually delete the kube-proxy pod that is stuck in ImagePullBackOff. This should restore the kube-proxy pod to a running state. The kapp-controller should come up, followed by updated packages, and finally a running upgraded cluster.
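A hedged sketch of this workaround, assuming the stuck pod is the kube-proxy instance in the kube-system namespace (pod names vary per node):
kubectl get pods -n kube-system -o wide | grep kube-proxy
kubectl delete pod KUBE_PROXY_POD_NAME -n kube-system
The kube-proxy DaemonSet recreates the pod, after which the upgrade should continue.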
TKr 1.23.15 is generally available for use with vSphere 8.x.
| Name | Release Date | vCenter Server 8.0 |
|---|---|---|
| | 4/18/2023 | vCenter Server 8.0 U1 and later |
TKr 1.23.15 for vSphere 8.x supports Photon and Ubuntu.
TKr 1.23.15 for vSphere 8.x includes the following Kubernetes components.
| Component | Version |
|---|---|
| kubernetes | v1.23.15 |
| coredns | v1.8.6 |
| etcd | v3.5.6 |
TKr 1.23.15 for vSphere 8.x includes the following VMware packages.
| Package | Version |
|---|---|
| antrea | v1.7.2 |
| calico | v3.24 |
| guest-cluster-auth-service | v1.1.0 |
| kapp-controller | v0.41.5 |
| metrics-server | v0.6.2 |
| pinniped | v0.12.1 |
| secretgen-controller | v0.11.2 |
| vsphere-cpi | v1.23.3 |
| vsphere-pv-csi | v2.6.0 |
| capabilities | v0.28.0 |
TKr 1.23.15 for vSphere 8.x has the following known issues.
TKG 2.0 cluster creation fails if the cluster name contains more than 31 characters.
Due to an issue with the kapp controller, a TKG cluster cannot be created if the cluster name has more than 31 characters. This issue will be fixed in a future release when the underlying kapp controller issue is fixed.
Do not use a cluster name with more than 31 characters.
TKr 1.23.8 is generally available for use with vSphere 8.x.
| Name | Release Date | vCenter Server 8.0 |
|---|---|---|
| | 2/10/2023 | vCenter Server 8.0 and later |
TKr 1.23.8 for vSphere 8.x introduces a new TKr format that provides full TKG 2.0 feature support. TKr 1.23.8 for vSphere 8.x supports both PhotonOS and Ubuntu using annotations. TKr 1.23.8 for vSphere 8.x is incompatible with vSphere 7.
Upgrades from legacy TKrs to TKrs that are compatible with TKG 2.0 are supported starting with the vSphere with Tanzu 8.0a release.
TKr 1.23.8 for vSphere 8.x includes the following Kubernetes components.
| Component | Version |
|---|---|
| kubernetes | v1.23.8 |
| coredns | v1.8.6 |
| etcd | v3.5.4 |
TKr 1.23.8 for vSphere 8.x includes the following VMware packages.
| Package | Version |
|---|---|
| antrea | v1.5.3 |
| calico | v3.22.1 |
| guest-cluster-auth-service | v1.0.0 |
| kapp-controller | v0.41.2 |
| metrics-server | v0.6.1 |
| pinniped | v0.12.1 |
| secretgen-controller | v0.11.0 |
| vsphere-cpi | v1.23.1 |
| vsphere-pv-csi | v2.6.0 |
| capabilities | v0.28.0 |
TKr 1.23.8 for vSphere 8.x has the following known issues.
Known issue with TKr v1.23.8---vmware.2-tkg.1-zshippable
VMware Tanzu Mission Control data protection does not work correctly when you use v1.23.8---vmware.2-tkg.1-zshippable to provision a Tanzu Kubernetes cluster that uses Photon OS for cluster nodes. The issue is not present when using the Ubuntu OS version of v1.23.8---vmware.2-tkg.1-zshippable.
As a result of service account permission issues, the tanzu-capabilities-controller-manager pod on the Tanzu Kubernetes cluster continually restarts and goes into a CrashLoopBackOff (CLBO) state when using TKr version v1.23.8+vmware.2-tkg.2-zshippable on vSphere with Tanzu 8.0.0a. To resolve the issue, add the required permissions to the capabilities service account tanzu-capabilities-manager-sa on the TKC as described below.
To address the known issue with the Photon OS version of v1.23.8---vmware.2-tkg.1-zshippable, complete the following workaround.
1. Pause the reconciliation of the capabilities package on the TKC (replace 'tkc' with the name of the Tanzu Kubernetes Grid cluster).
kubectl patch -n vmware-system-tkg pkgi tkc-capabilities -p '{"spec":{"paused": true}}' --type=merge
2. Create a new file named capabilities-rbac-patch.yaml.
apiVersion: v1
kind: Secret
metadata:
  name: tanzu-capabilities-update-rbac
  namespace: vmware-system-tkg
stringData:
  patch-capabilities-rbac.yaml: |
    #@ load("@ytt:overlay", "overlay")
    #@overlay/match by=overlay.subset({"kind":"ClusterRole", "metadata": {"name": "tanzu-capabilities-manager-clusterrole"}}),expects="1+"
    ---
    rules:
    - apiGroups:
      - core.tanzu.vmware.com
      resources:
      - capabilities
      verbs:
      - create
      - delete
      - get
      - list
      - patch
      - update
      - watch
    - apiGroups:
      - core.tanzu.vmware.com
      resources:
      - capabilities/status
      verbs:
      - get
      - patch
      - update
3. Patch the capabilities cluster role on the TKC by applying the overlay secret to the capabilities package (replace 'tkc' with the name of the Tanzu Kubernetes Grid cluster).
kubectl patch -n vmware-system-tkg pkgi tkc-capabilities -p '{"metadata":{"annotations":{"ext.packaging.carvel.dev/ytt-paths-from-secret-name.0":"tanzu-capabilities-update-rbac"}}}' --type=merge
4. Delete the tanzu-capabilities-controller-manager pod on the TKC so that it restarts with the updated permissions (see the sketch below).
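A hedged sketch of the remaining commands, assuming the secret from step 2 is applied with kubectl and that the capabilities pod can be located by listing pods (pod name and namespace vary by cluster):
kubectl apply -f capabilities-rbac-patch.yaml   # apply the overlay secret from step 2 (assumed step)
kubectl get pods -A | grep tanzu-capabilities   # locate the capabilities controller pod
kubectl delete pod POD_NAME -n POD_NAMESPACE    # step 4: the pod restarts with the updated permissions
When the pod is running again, resume reconciliation by reversing the patch from step 1, for example by setting "paused" to false.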
TKr 1.28.7 is generally available for use with vSphere 7.x.
| Name | Release Date | vCenter Server 7.0 |
|---|---|---|
| | 7/12/2024 | vCenter Server 7.0 Update 3P and later |
| | 7/12/2024 | vCenter Server 7.0 Update 3P and later |
TKr 1.28.7 includes the following new features.
Support for Kubernetes version 1.28.7 for Photon and Ubuntu operating systems.
| Operating System (OS) | OS Version | Kernel Version |
|---|---|---|
| Photon | VMware Photon OS 5.0 | 6.1.83-4.ph5 |
| Ubuntu | Ubuntu 22.04.4 LTS | 5.15.0-107-generic |
TKr 1.28.7 for vSphere 7.x includes the following core components.
| Component | Version |
|---|---|
| kubernetes | v1.28.7 |
| coredns | v1.10.1 |
| etcd | v3.5.12 |
| containerd | v1.6.28 |
| cri-tools | v1.27.0 |
| cni-plugins | v1.2.0 |
TKr 1.28.7 for vSphere 7.x includes the following additional components.
| Package | Version |
|---|---|
| antrea | v1.13.3 |
| calico | v3.26.3 |
| guest-cluster-auth-service | v1.3.3 |
| metrics-server | v0.6.2 |
| vsphere-cpi | v1.28.0 |
| vsphere-pv-csi | v3.0.1 |
| dockerd | v2.7.1 |
Known issues: none.
Fixed issues: none.
TKr 1.27.10 is generally available for use with vSphere 7.x.
| Name | Release Date | vCenter Server 7.0 |
|---|---|---|
| | 4/5/2024 | vCenter Server 7.0 Update 3P and later |
| | 4/5/2024 | vCenter Server 7.0 Update 3P and later |
TKr 1.27.10 for vSphere 7.x includes the following new features.
Support for Kubernetes version 1.27.10 for Photon OS and Ubuntu.
PhotonOS 3.0 | Kernel version: 4.19.306-1.ph3
Ubuntu 20.04.6 LTS | Kernel version: 5.4.0-172-generic
runc has been updated to version 1.1.12
containerd has been updated to version 1.6.28
TKr 1.27.10 for vSphere 7.x includes the following core components.
| Component | Version |
|---|---|
| kubernetes | v1.27.10 |
| coredns | v1.10.1 |
| etcd | v3.5.11 |
| containerd | v1.6.28 |
| cri-tools | v1.26.0 |
| cni-plugins | v1.2.0 |
TKr 1.27.10 for vSphere 7.x includes the following additional components.
| Package | Version |
|---|---|
| antrea | v1.11.3 |
| calico | v3.26.1 |
| guest-cluster-auth-service | v1.3.2 |
| metrics-server | v0.6.2 |
| vsphere-cpi | v1.27.0 |
| vsphere-pv-csi | v3.0.0 |
| dockerd | v2.7.1 |
TKr 1.27.6 is generally available for use with vSphere 7.x.
| Name | Release Date | vCenter Server 7.0 |
|---|---|---|
| | 1/31/2024 | vCenter Server 7.0 Update 3P and later |
| | 1/31/2024 | vCenter Server 7.0 Update 3P and later |
TKr 1.27.6 for vSphere 7.x includes the following new features.
Support for Kubernetes version 1.27.6 for Photon OS and Ubuntu.
Photon OS Kernel version: 4.19.297-1.ph3 (for photon-3.0)
Ubuntu OS Kernel version: 5.4.0-169-generic (for ubuntu-20.04.6)
TKr 1.27.6 for vSphere 7.x includes the following core components.
| Component | Version |
|---|---|
| kubernetes | v1.27.6 |
| coredns | v1.10.1 |
| etcd | v3.5.7 |
| containerd | v1.6.24 |
| cri-tools | v1.26.0 |
| cni-plugins | v1.2.0 |
TKr 1.27.6 for vSphere 7.x includes the following additional components.
| Package | Version |
|---|---|
| antrea | v1.11.3 |
| calico | v3.26.1 |
| guest-cluster-auth-service | v1.3.2 |
| metrics-server | v0.6.2 |
| vsphere-cpi | v1.27.0 |
| vsphere-pv-csi | v3.0.0 |
| dockerd | v2.7.1 |
TKr 1.27.6 for vSphere 7.x does not support vCenter Server 8.0.
This issue is fixed with the vSphere 8.0 U2C release.
TKr 1.26.12 is generally available for use with vSphere 7.x, and for upgrade to vSphere 8.x.
| Name | Release Date | vCenter Server 7.0 |
|---|---|---|
| | 3/15/2024 | vCenter Server 7.0 U3p and later |
| | 3/15/2024 | vCenter Server 7.0 U3p and later |
TKr 1.26.12 for vSphere 7.x includes the following new features.
Support for Kubernetes version 1.26.12 for Photon OS and Ubuntu.
PhotonOS 3.0: Kernel version: 4.19.305-6.ph3
Ubuntu 20.04.6 LTS: Kernel version: 5.4.0-171-generic
runc has been updated to version 1.1.12
containerd has been updated to version 1.6.28
TKr 1.26.12 for vSphere 7.x includes the following Kubernetes components.
| Component | Version |
|---|---|
| kubernetes | v1.26.12 |
| coredns | v1.9.3 |
| etcd | v3.5.11 |
| containerd | v1.6.28 |
| cri-tools | v1.25.0 |
| cni-plugins | v1.1.1 |
TKr 1.26.12 for vSphere 7.x includes the following VMware packages.
| Package | Version |
|---|---|
| antrea | v1.11.1 |
| calico | v3.25.1 |
| guest-cluster-auth-service | v1.3.2 |
| metrics-server | v0.6.2 |
| vsphere-cpi | v1.26.2 |
| vsphere-pv-csi | v3.0.0 |
| dockerd | v2.7.1 |
Large cluster upgrade from v1.26.10 to v1.26.12 may get stuck.
Control plane nodes and some of the worker nodes are upgraded, but one of the worker nodes is stuck in a NotReady state.
Workaround: Delete the worker node stuck in a NotReady state. The upgrade should proceed.
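A hedged example of this workaround, assuming the stuck worker is named NODE_NAME in the output of kubectl get nodes against the workload cluster:
kubectl get nodes
kubectl delete node NODE_NAME
The cluster replaces the deleted worker and the upgrade should proceed.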
TKr 1.26.10 is generally available for use with vSphere 7.x.
| Name | Release Date | vCenter Server 7.0 |
|---|---|---|
| | 12/7/2023 | vCenter Server 7.0 Update 3P and later |
| | 12/7/2023 | vCenter Server 7.0 Update 3P and later |
TKr 1.26.10 for vSphere 7.x includes the following new features.
Support for Kubernetes version 1.26.10 for Photon OS and Ubuntu.
Photon OS Kernel version: 4.19.295-4.ph3 (for photon-3.0)
Ubuntu OS Kernel version: 5.4.0-166-generic (for ubuntu-20.04.1)
With TKr 1.26, TKG on Supervisor will enforce PSA by default on TKG clusters.
Sample command to update pod security on a TKr 1.26 Kubernetes namespace:
kubectl label --overwrite ns NAMESPACE pod-security.kubernetes.io/enforce=privileged
NOTE: PSA policies on system namespaces cannot be changed.
For more information, refer to the Kubernetes PSA documentation.
See also:
TKr 1.25.13 for vSphere 7.x Release Notes
TKG on Supervisor documentation
TKr 1.26.10 for vSphere 7.x includes the following core components.
| Component | Version |
|---|---|
| kubernetes | v1.26.10 |
| coredns | v1.9.3 |
| etcd | v3.5.9 |
| containerd | v1.6.24 |
TKr 1.26.10 for vSphere 7.x includes the following additional components.
| Package | Version |
|---|---|
| antrea | v1.11.1 |
| calico | v3.25.1 |
| guest-cluster-auth-service | v1.3.2 |
| metrics-server | v0.6.2 |
| vsphere-cpi | v1.26.2 |
| vsphere-pv-csi | v3.0.0 |
| dockerd | v2.7.1 |
TKr 1.26.10 for vSphere 7.x has the following fixed issues.
TKr 1.26.10 for vSphere 7.x does not support vCenter Server 8.0.
This issue is fixed. Customers running TKr 1.26.10 on vSphere 7.x can now upgrade to vSphere 8.0 U2B or later.
Disable password expiry for vmware-system-user
Disable unnecessary tdnf timers in TKrs to reduce memory consumption by 750-850 MB
TKr 1.25.13 is generally available for use with vSphere 7.x.
| Name | Release Date | vCenter Server 7.0 |
|---|---|---|
| | 12/7/2023 | vSphere 7.0 Update 3p and later |
| | 12/7/2023 | vSphere 7.0 Update 3p and later |
TKr 1.25.13 for vSphere 7.x includes the following new features.
Support for Kubernetes version 1.25.13 for Photon OS and Ubuntu.
Photon OS Kernel version: 4.19.295-4.ph3 (for photon-3.0)
Ubuntu OS Kernel version: 5.4.0-166-generic (for ubuntu-20.04.1)
Support for Pod Security Admission (PSA) controller to replace Pod Security Policies (PSP).
With TKr 1.25, TKG on Supervisor supports applying pod security standards of type privileged, baseline, and restricted using the Pod Security Admission (PSA) controller. By default, for a TKG cluster created with 1.25 TKr, all namespaces have their pod security warn and audit modes set to restricted. This is a no-force setting: the PSA controller may generate warnings about pods violating policy, but the pods will continue to run. Some system pods running in kube-system, tkg-system, and vmware-system-cloud-provider require elevated privileges. These namespaces are excluded from pod security.
With the 1.26 TKr, TKG on Supervisor will enforce PSA by default on TKG clusters. TKr 1.25 users should plan to migrate their TKG cluster workloads to PSA in anticipation of upgrading TKG clusters to TKr 1.26.
PSA configuration for TKG on Supervisor:
TKr 1.25.13 - warn mode: restricted, audit mode: restricted, enforce mode: not set
TKr 1.26 and later - enforce mode: restricted
Sample commands to update pod security on a TKR 1.25 Kubernetes namespace:
kubectl label --overwrite ns NAMESPACE pod-security.kubernetes.io/audit=privileged
kubectl label --overwrite ns NAMESPACE pod-security.kubernetes.io/warn=privileged
NOTE: PSA policies on system namespaces cannot be changed.
For more information, refer to the Kubernetes PSA documentation.
TKr 1.25.13 for vSphere 7.x includes the following core components.
| Component | Version |
|---|---|
| kubernetes | v1.25.13 |
| coredns | v1.9.3 |
| etcd | v3.5.6 |
| containerd | v1.6.18 |
TKr 1.25.13 for vSphere 7.x includes the following additional components.
| Package | Version |
|---|---|
| antrea | v1.9.0 |
| calico | v3.24.1 |
| guest-cluster-auth-service | v1.3.2 |
| metrics-server | v0.6.2 |
| vsphere-cpi | v1.25.3 |
| vsphere-pv-csi | v2.7.1 |
| dockerd | v2.7.1 |
TKr 1.25.13 for vSphere 7.x has the following fixed issues.
TKr 1.25.13 for vSphere 7.x does not support vCenter Server 8.0.
This issue is fixed. Customers running TKr 1.25.13 on vSphere 7.x can now upgrade to vSphere 8.0 U2B or later.
Disable password expiry for vmware-system-user
Disable unnecessary tdnf timers in TKrs to reduce memory consumption by 750-850 MB
TKr 1.24.11 is generally available for use with vSphere 7.x, and for upgrade to vSphere 8.0.
| Name | Release Date | vCenter Server 7.0 | vCenter Server 8.0 (upgrade) |
|---|---|---|---|
| | 7/13/2023 | vCenter Server 7.0 Update 3N and later | vCenter Server 8.0a |
| | 7/13/2023 | vCenter Server 7.0 Update 3N and later | vCenter Server 8.0a |
TKr 1.24.11 for vSphere 7.x provides separate images for Photon and Ubuntu.
TKr 1.24.11 for vSphere 7.x includes the following Kubernetes components.
| Component | Version |
|---|---|
| kubernetes | v1.24.11 |
| coredns | v1.8.6 |
| etcd | v3.5.6 |
TKr 1.24.11 for vSphere 7.x includes the following additional components.
| Package | Version |
|---|---|
| antrea | v1.7.2 |
| calico | v3.24.1 |
| guest-cluster-auth-service | v1.3.1 |
| metrics-server | v0.6.2 |
| vsphere-cpi | v1.24.3 |
| vsphere-pv-csi | v2.6.0 |
| containerd | v1.6.18 |
| dockerd | v2.7.1 |
TKr 1.24.11 for vSphere 7.x has the following known issues.
Pod imbalance observed on Photon and Ubuntu 1.24.11 TKr
Users may see CPU or memory pressure on a node where all pods are scheduled, even though other nodes in the cluster have sufficient resources (CPU, memory) available to schedule pods.
Workaround: None
TKr 1.23.8 is generally available for use with vSphere 7.x, and for upgrade to vSphere 8.0.
| Name | Release Date | vCenter Server 7.0 | vCenter Server 8.0 (upgrade) |
|---|---|---|---|
| | 1/6/2023 | vCenter Server 7.0 Update 3F and later | vCenter Server 8.0a |
| | 5/18/2023 | vCenter Server 7.0 Update 3L and later | vCenter Server 8.0a |
TKr 1.23.8 for vSphere 7.x provides separate images for Photon and Ubuntu.
TKr 1.23.8 for vSphere 7.x includes the following Kubernetes components.
| Component | Version |
|---|---|
| kubernetes | v1.23.8 |
| coredns | v1.8.6 |
| etcd | v3.5.4 |
TKr 1.23.8 for vSphere 7.x includes the following additional components.
| Package | Version |
|---|---|
| antrea | v1.5.3 |
| calico | v3.19.1 |
| guest-cluster-auth-service | v0.1-72 |
| metrics-server | v0.4.0 |
| vsphere-cpi | v1.23.2 |
| vsphere-pv-csi | v2.5.2 |
| containerd | v1.6.6 |
| dockerd | v2.7.1 |
TKr 1.23.8 for vSphere 7.x has the following known issues.
With TKr v1.23.8 Ubuntu on TKG 1.0 clusters, the system pod guest-cluster-auth-svc is in a CLBO state on freshly deployed large clusters
On a newly deployed TKG 1.0 large cluster using the Ubuntu 1.23.8 TKr with a 7.0.3P07 GA setup, the guest-cluster-auth-svc pod may be in a CrashLoopBackOff state. Otherwise, the cluster operates as expected.
Workaround: None
When upgrading from TKr v1.22.9 Ubuntu to TKr v1.23.8 Ubuntu, some TKG 1.0 clusters get stuck in a False state
When a TKG 1.0 cluster is deployed with v1.22.9 Ubuntu and then upgraded to v1.23.8, new control plane nodes of v1.23.8 get stuck in a False state.
The current workaround is to delete the kube-proxy pod stuck in ImagePullBackOff.
After applying this workaround, the upgrade proceeds as expected and the cluster comes to a running state.
TKr 1.22.9 is generally available for use with vSphere 7.x, and for upgrade to vSphere 8.0.
| Name | Release Date | vCenter Server 7.0 | vCenter Server 8.0 (upgrade) |
|---|---|---|---|
| | 3/28/2022 | vCenter Server 7.0 Update 3E and later | vCenter Server 8.0a |
| | 4/5/2023 | vCenter Server 7.0 Update 3L and later | vCenter Server 8.0a |
TKr 1.22.9 for vSphere 7.x provides separate images for Photon and Ubuntu.
TKr 1.22.9 for vSphere 7.x includes the following components.
| Component | Version |
|---|---|
| kubernetes | v1.22.9 |
| antrea | v1.2.3 |
| calico | v3.19.1 |
| csi | 2.4.0 |
TKr 1.22.9 for vSphere 7.x has the following fixed issues.
Fixed issues related to customers deploying and running Falco on the Tanzu Kubernetes Grid Service.
For other fixed issues, also refer to:
Known Issues for TKR 1.22
TKr 1.21.6 is generally available for use with vSphere 7.x, and for upgrade to vSphere 8.0.
| Name | Release Date | vCenter Server 7.0 | vCenter Server 8.0 (upgrade) |
|---|---|---|---|
| | 2/21/2022 | vCenter Server 7.0 Update 3 and later | vCenter Server 8.0a |
| | 3/4/2022 | vCenter Server 7.0 Update 3 and later | vCenter Server 8.0a |
TKr 1.21.6 for vSphere 7.x provides separate images for Photon and Ubuntu.
TKr 1.21.6 is the minimum TKR Ubuntu version that you can use for upgrading from vSphere 7 to vSphere 8.
TKr 1.21.6 for vSphere 7.x includes the following components.
| Component | Version |
|---|---|
| kubernetes | v1.21.6 |
| antrea | v0.13.5 |
| calico | v3.11.2 (Ubuntu), v3.0.1 (Photon) |
| csi | v2.3.0 |
TKr 1.21.6 (Photon) for vSphere 7.x has the following known issues:
Storage throughput regression caused by new QPS throttles with csi-attacher v3.2.1
TKr 1.21.2 upgraded the csi-attacher to v3.2.1 (in TKr 1.20.7 the csi-attacher is v2.0.0). There are more QPS throttles with csi-attacher v3.2.1 that are causing a drop in throughput.
An expansion of a Supervisor cluster PVC in offline or online mode does not result in an expansion of a corresponding Tanzu Kubernetes cluster PVC
A pod that uses the Tanzu Kubernetes cluster PVC cannot use the expanded capacity of the Supervisor cluster PVC because the filesystem has not been resized.
Workaround:
Resize the Tanzu Kubernetes cluster PVC to a size equal to or greater than the size of the Supervisor cluster PVC.
Size mismatch in statically provisioned TKG PVC when compared to underlying volume
Static provisioning in Kubernetes does not verify if the PV and backing volume sizes are equal. If you statically create a PVC in a Tanzu Kubernetes cluster, and the PVC size is less than the size of the underlying corresponding Supervisor cluster PVC, you might be able to use more space than the space you request in the PV. If the size of the PVC you statically create in the Tanzu Kubernetes cluster is greater than the size of the underlying Supervisor cluster PVC, you might notice No space left on device error even before you exhaust the requested size in the Tanzu Kubernetes cluster PV.
Workaround:
In the Tanzu Kubernetes cluster PV, change the persistentVolumeReclaimPolicy to Retain.
Note the volumeHandle of the Tanzu Kubernetes cluster PV and then delete the PVC and PV in the Tanzu Kubernetes cluster.
Re-create the Tanzu Kubernetes cluster PVC and PV statically using the volumeHandle and set the storage to the same size as the size of the corresponding Supervisor cluster PVC.
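A minimal sketch of the re-creation step, assuming VOLUME_HANDLE was recorded from the deleted PV and the corresponding Supervisor cluster PVC is 2Gi (all names and sizes are illustrative placeholders):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-pv
spec:
  capacity:
    storage: 2Gi            # match the Supervisor cluster PVC size
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: csi.vsphere.vmware.com
    volumeHandle: VOLUME_HANDLE   # recorded from the deleted PV
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi          # same size as the PV above
  volumeName: static-pv     # bind the claim to the static PV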
Attempts to create a PVC from a Supervisor namespace or a TKG cluster fail if the external csi.vsphere.vmware.com provisioner loses its lease for leader election
When you try to create a PVC from a supervisor namespace or a TKG cluster using the kubectl command, your attempts might not succeed. The PVC remains in the Pending state. If you describe the PVC, the Events field displays the following information in a table layout:
Type – Normal
Reason – ExternalProvisioning
Age – 56s (x121 over 30m)
From – persistentvolume-controller
Message – waiting for a volume to be created, either by external provisioner "csi.vsphere.vmware.com" or manually created by system administrator
Workaround:
Verify that all containers in the vsphere-csi-controller pod inside the vmware-system-csi namespace are running.
kubectl describe pod vsphere-csi-controller-pod-name -n vmware-system-csi
Check the external provisioner logs by using the following command.
kubectl logs vsphere-csi-controller-pod-name -n vmware-system-csi -c csi-provisioner
The following entry indicates that the external-provisioner sidecar container lost its leader election:
I0817 14:02:59.582663 1 leaderelection.go:263] failed to renew lease vmware-system-csi/csi-vsphere-vmware-com: failed to tryAcquireOrRenew context deadline exceeded
F0817 14:02:59.685847 1 leader_election.go:169] stopped leading
Delete this instance of vsphere-csi-controller.
kubectl delete pod vsphere-csi-controller-pod-name -n vmware-system-csi
Kubernetes will create a new instance of the CSI controller and all sidecars will be reinitialized.
All PVC operations, such as create, attach, detach, or delete a volume, fail while CSI cannot connect to vCenter Server
In addition to operation failures, the Volume health information and the StoragePool CR cannot be updated in the Supervisor cluster. The CSI and Syncer logs display errors about not being able to connect to vCenter Server.
CSI connects to vCenter Server as a specific solution user. The password for this SSO user is rotated by wcpsvc once every 24 hours, and the new password is transferred into a Secret that the CSI driver reads to connect to vCenter Server. If the new password fails to be delivered to the Secret, the stale password remains in the Supervisor cluster, and the CSI driver fails its operations.
This problem affects vSAN Data Persistence Platform and all CSI volume operations.
Workaround:
Typically, the WCP Service delivers the updated password to the CSI driver that runs in the Kubernetes cluster. Occasionally, the password delivery doesn't happen due to a problem, for example, a connectivity issue or an error in an earlier part of the sync process. The CSI driver continues to use the old password and eventually locks the account due to too many authentication failures.
Ensure that the WCP cluster is in a healthy and running state. No errors should be reported for that cluster on the Workload Management page. After the problems causing the sync to fail are resolved, force a password refresh to unlock the locked account.
To force reset of the password:
Stop wcpsvc:
vmon-cli -k wcp
Edit the time of the last password rotation to a small value, for example 1, by changing the 3rd line in /etc/vmware/wcp/.storageUser to 1.
Start wcpsvc:
vmon-cli -i wcp
The wcpsvc resets the password, which unlocks the account and delivers the new password to the cluster.
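A hedged sketch of the full reset sequence on the vCenter Server appliance shell; the sed command assumes, per the step above, that the rotation timestamp is the third line of the file:
vmon-cli -k wcp                                  # stop wcpsvc
sed -i '3s/.*/1/' /etc/vmware/wcp/.storageUser   # set the last-rotation time (line 3) to 1
vmon-cli -i wcp                                  # start wcpsvc; the password is reset and delivered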
TKr 1.21.6 (Ubuntu) for vSphere 7.x has the following known issues:
Attempts to run the Remove Disk operation on a vSAN Direct Datastore fail with the VimFault - Cannot complete the operation error
Generally, you can observe this error when one of the following scenarios occurs:
As a part of the Remove Disk operation, all persistent volumes placed on the vSAN Direct Datastore are relocated to other vSAN Direct Datastores on the same ESXi host. The relocation can fail if no space is available on the target vSAN Direct Datastores. To avoid this failure, make sure that the target vSAN Direct Datastores have sufficient storage space for running applications.
The Remove Disk operation can also fail when the target vSAN Direct Datastores on the ESXi host have sufficient storage. This might occur when the underlying persistent volume relocation operation, spawned by the Remove Disk parent operation, takes more than 30 minutes due to the size of the volume. In this case, you can observe that reconfiguration of the underlying vSphere Pod remains in progress in the Tasks view.
The in-progress status indicates that even though the Remove Disk operation times out and fails, the underlying persistent volume relocation done by reconfiguration of the vSphere Pod is not interrupted.
Workaround:
After the reconfiguration task for the vSphere Pod completes, run the Remove Disk operation again. The Remove Disk operation should then proceed successfully.
An expansion of a Supervisor cluster PVC in offline or online mode does not result in an expansion of a corresponding Tanzu Kubernetes cluster PVC
A pod that uses the Tanzu Kubernetes cluster PVC cannot use the expanded capacity of the Supervisor cluster PVC because the filesystem has not been resized.
Workaround:
Resize the Tanzu Kubernetes cluster PVC to a size equal to or greater than the size of the Supervisor cluster PVC.
Size mismatch in statically provisioned TKG PVC when compared to underlying volume
Static provisioning in Kubernetes does not verify if the PV and backing volume sizes are equal. If you statically create a PVC in a Tanzu Kubernetes cluster, and the PVC size is less than the size of the underlying corresponding Supervisor cluster PVC, you might be able to use more space than the space you request in the PV. If the size of the PVC you statically create in the Tanzu Kubernetes cluster is greater than the size of the underlying Supervisor cluster PVC, you might notice a No space left on device error even before you exhaust the requested size in the Tanzu Kubernetes cluster PV.
Workaround:
In the Tanzu Kubernetes cluster PV, change the persistentVolumeReclaimPolicy to Retain.
Note the volumeHandle of the Tanzu Kubernetes cluster PV and then delete the PVC and PV in the Tanzu Kubernetes cluster.
Re-create the Tanzu Kubernetes cluster PVC and PV statically using the volumeHandle and set the storage to the same size as the size of the corresponding Supervisor cluster PVC.
Attempts to create a PVC from a supervisor namespace or a TKG cluster fail if the external csi.vsphere.vmware.com provisioner loses its lease for leader election
When you try to create a PVC from a supervisor namespace or a TKG cluster using the kubectl command, your attempts might not succeed. The PVC remains in the Pending state. If you describe the PVC, the Events field displays the following information in a table layout:
Type – Normal
Reason – ExternalProvisioning
Age – 56s (x121 over 30m)
From – persistentvolume-controller
Message – waiting for a volume to be created, either by external provisioner "csi.vsphere.vmware.com" or manually created by system administrator
Workaround:
Verify that all containers in the vsphere-csi-controller pod inside the vmware-system-csi namespace are running.
kubectl describe pod vsphere-csi-controller-pod-name -n vmware-system-csi
Check the external provisioner logs by using the following command.
kubectl logs vsphere-csi-controller-pod-name -n vmware-system-csi -c csi-provisioner
The following entry indicates that the external-provisioner sidecar container lost its leader election:
I0817 14:02:59.582663 1 leaderelection.go:263] failed to renew lease vmware-system-csi/csi-vsphere-vmware-com: failed to tryAcquireOrRenew context deadline exceeded
F0817 14:02:59.685847 1 leader_election.go:169] stopped leading
Delete this instance of vsphere-csi-controller.
kubectl delete pod vsphere-csi-controller-pod-name -n vmware-system-csi
Kubernetes will create a new instance of the CSI controller and all sidecars will be reinitialized.
All PVC operations, such as create, attach, detach, or delete a volume, fail while CSI cannot connect to vCenter Server
In addition to operation failures, the Volume health information and the StoragePool CR cannot be updated in the Supervisor cluster. The CSI and Syncer logs display errors about not being able to connect to vCenter Server.
CSI connects to vCenter Server as a specific solution user. The password for this SSO user is rotated by wcpsvc once every 12 hours, and the new password is transferred into a Secret that the CSI driver reads to connect to vCenter Server. If the new password fails to be delivered to the Secret, the stale password remains in the Supervisor cluster, and the CSI driver fails its operations.
This problem affects vSAN Data Persistence Platform and all CSI volume operations.
Workaround:
Typically, the WCP Service delivers the updated password to the CSI driver that runs in the Kubernetes cluster. Occasionally, the password delivery doesn't happen due to a problem, for example, a connectivity issue or an error in an earlier part of the sync process. The CSI driver continues to use the old password and eventually locks the account due to too many authentication failures.
Ensure that the WCP cluster is in a healthy and running state. No errors should be reported for that cluster on the Workload Management page. After the problems causing the sync to fail are resolved, force a password refresh to unlock the locked account.
To force reset of the password:
Stop wcpsvc:
vmon-cli -k wcp
Edit the time of the last password rotation to a small value, for example 1, by changing the 3rd line in /etc/vmware/wcp/.storageUser to 1.
Start wcpsvc:
vmon-cli -i wcp
The wcpsvc resets the password, which unlocks the account and delivers the new password to the cluster.
Fixed Issues for TKR 1.21.6 (Photon)
TKC creation stuck. Error: Network plugin returns error: cni plugin not initialized.
When creating a new cluster using TKr v1.21.2, the container runtime network is not ready and the cluster is not created. This is a regression in runc. This has been fixed in this release.
Pods are not evenly scheduled across the nodes
This was a known regression that was introduced in upstream Kubernetes. The fix is included in this release.
CVE-2021-35942 has been addressed in this release
Fixed Issues for TKr 1.21.6 (Ubuntu)
Pods are not evenly scheduled across the nodes
This was a known regression that was introduced in upstream Kubernetes. The fix is included in this release.
The following CVEs were fixed in this release: CVE-2022-0185 and CVE-2021-4034
TKr 1.21.2 is generally available for use with vSphere 7.x, and for upgrade to vSphere 8.0.
| Name | Release Date | vCenter Server 7.0 | vCenter Server 8.0 (upgrade) |
|---|---|---|---|
| TKr 1.21.2 for vSphere 7.x (Photon) | 11/16/2021 | vCenter Server 7.0 Update 3 | vCenter Server 8.0a |
TKr 1.21.2 provides a Photon image for provisioning TKG clusters on Supervisor.
TKr 1.21.2 is the minimum Photon version that you can use for upgrading from vSphere 7 to vSphere 8.
TKr 1.21.2 for vSphere 7.x includes the following components.
| Component | Version |
|---|---|
| kubernetes | v1.21.2 |
| antrea | v0.13.5 |
| calico | v0.3.0.1 |
| csi | v2.3.0 |
TKr 1.21.2 for vSphere 7.x has the following known issues:
TKC creation stuck. Error: Network plugin returns error: cni plugin not initialized.
When creating a new cluster using TKr v1.21.2, the container runtime network is not ready and the cluster is not created. This is a regression in runc. The fix will be included in the next TKr 1.21.x patch.
Storage throughput regression caused by new QPS throttles with csi-attacher v3.2.1
TKr 1.21.2 upgrades the csi-attacher to v3.2.1 (TKr 1.20.7 ships csi-attacher v2.0.0). csi-attacher v3.2.1 applies more QPS throttles, which causes a drop in storage throughput.
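To check which csi-attacher version a cluster is running, you can inspect the container images of the CSI controller. This sketch assumes the deployment is named vsphere-csi-controller in the vmware-system-csi namespace, consistent with the pod names used in the workarounds in these notes.
# List the CSI controller containers and their images, one per line
kubectl get deployment vsphere-csi-controller -n vmware-system-csi -o jsonpath='{range .spec.template.spec.containers[*]}{.name}{"\t"}{.image}{"\n"}{end}'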
Pods are not evenly scheduled across the nodes
This is a known regression that was introduced in upstream Kubernetes. VMware will release an updated Tanzu Kubernetes release once a fix is available upstream.
An expansion of a Supervisor cluster PVC in offline or online mode does not result in an expansion of a corresponding Tanzu Kubernetes cluster PVC
A pod that uses the Tanzu Kubernetes cluster PVC cannot use the expanded capacity of the Supervisor cluster PVC because the filesystem has not been resized.
Workaround: Resize the Tanzu Kubernetes cluster PVC to a size equal to or greater than the size of the Supervisor cluster PVC.
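For example, a sketch of the resize with kubectl patch, where my-pvc, my-namespace, and 20Gi are placeholders for your PVC name, namespace, and target size:
# Request a larger size on the Tanzu Kubernetes cluster PVC
kubectl patch pvc my-pvc -n my-namespace -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'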
Size mismatch in statically provisioned TKG PVC when compared to underlying volume
Static provisioning in Kubernetes does not verify if the PV and backing volume sizes are equal. If you statically create a PVC in a Tanzu Kubernetes cluster, and the PVC size is less than the size of the underlying corresponding Supervisor cluster PVC, you might be able to use more space than you request in the PV. If the size of the PVC you statically create in the Tanzu Kubernetes cluster is greater than the size of the underlying Supervisor cluster PVC, you might notice a No space left on device error even before you exhaust the requested size in the Tanzu Kubernetes cluster PV.
Workaround:
In the Tanzu Kubernetes cluster PV, change the persistentVolumeReclaimPolicy to Retain.
Note the volumeHandle of the Tanzu Kubernetes cluster PV and then delete the PVC and PV in the Tanzu Kubernetes cluster.
Re-create the Tanzu Kubernetes cluster PVC and PV statically using the volumeHandle and set the storage to the same size as the corresponding Supervisor cluster PVC.
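The following is a minimal sketch of the re-creation step. The object names, size, and storage class are placeholders; volumeHandle must be the value you noted from the deleted PV, and the size must match the Supervisor cluster PVC.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-pv                      # placeholder name
spec:
  capacity:
    storage: 10Gi                      # match the Supervisor cluster PVC size
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: my-storage-class   # placeholder; use your storage class
  csi:
    driver: csi.vsphere.vmware.com
    volumeHandle: NOTED_VOLUME_HANDLE  # value noted from the deleted PV
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-pvc                     # placeholder name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: my-storage-class
  volumeName: static-pv                # bind to the PV created above
  resources:
    requests:
      storage: 10Gi                    # same size as the Supervisor cluster PVC
EOF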
Attempts to create a PVC from a Supervisor namespace or a TKG cluster fail if the external csi.vsphere.vmware.com provisioner loses its lease for leader election
When you try to create a PVC from a Supervisor namespace or a TKG cluster using the kubectl command, your attempts might not succeed. The PVC remains in the Pending state. If you describe the PVC, the Events field displays the following information in a table layout:
Type – Normal
Reason – ExternalProvisioning
Age – 56s (x121 over 30m)
From – persistentvolume-controller
Message – waiting for a volume to be created, either by external provisioner "csi.vsphere.vmware.com" or manually created by system administrator
Workaround:
Verify that all containers in the vsphere-csi-controller pod inside the vmware-system-csi namespace are running.
kubectl describe pod vsphere-csi-controller-pod-name -n vmware-system-csi
Check the external provisioner logs by using the following command.
kubectl logs vsphere-csi-controller-pod-name -n vmware-system-csi -c csi-provisioner
The following entry indicates that the external-provisioner sidecar container lost its leader election:
I0817 14:02:59.582663 1 leaderelection.go:263] failed to renew lease vmware-system-csi/csi-vsphere-vmware-com: failed to tryAcquireOrRenew context deadline exceeded
F0817 14:02:59.685847 1 leader_election.go:169] stopped leading
Delete this instance of vsphere-csi-controller.
kubectl delete pod vsphere-csi-controller-pod-name -n vmware-system-csi
Kubernetes will create a new instance of the CSI controller and all sidecars will be reinitialized.
All PVC operations, such as create, attach, detach, or delete a volume, fail while CSI cannot connect to vCenter Server
In addition to operation failures, the Volume health information and the StoragePool CR cannot be updated in the Supervisor cluster. The CSI and Syncer logs display errors about not being able to connect to vCenter Server.
CSI connects to vCenter Server as a specific solution user. The password for this SSO user is rotated by wcpsvc once every 24 hours, and the new password is transferred into a Secret that the CSI driver reads to connect to vCenter Server. If the new password fails to be delivered to the Secret, the stale password remains in the Supervisor cluster, and the CSI driver fails its operations.
This problem affects vSAN Data Persistence Platform and all CSI volume operations.
Workaround:
Typically the WCP Service delivers the updated password to the CSI driver that runs in the Kubernetes cluster. Occasionally, the password delivery does not happen, for example, because of a connectivity issue or an error in an earlier part of the sync process. The CSI driver continues to use the old password and eventually locks the account due to too many authentication failures.
Ensure that the WCP cluster is in a healthy and running state, with no errors reported for that cluster on the Workload Management page. After the problems causing the sync to fail are resolved, force a password refresh to unlock the account.
To force a reset of the password:
Stop wcpsvc:
vmon-cli -k wcp
Edit the time of the last password rotation to a small value, for example 1, by changing the third line of /etc/vmware/wcp/.storageUser to 1.
Start wcpsvc:
vmon-cli -i wcp
The wcpsvc resets the password, which unlocks the account and delivers the new password to the cluster.
TKr 1.20.12 is generally available for use with vSphere 7.x.
| Name | vCenter Server 7.0 |
|---|---|
| TKr 1.20.12 for vSphere 7.x (Photon) | vCenter Server 7.0 Update 2 and later |
TKr v1.20.12 for vSphere 7.x provides a Photon image for provisioning TKG clusters on Supervisor.
TKr 1.20.12 for vSphere 7.x includes the following components.
| Component | Version |
|---|---|
| kubernetes | v1.20.12 |
| antrea | v0.11.3 |
| calico | v0.3.0.1 |
| csi | vsphere70u2 |
TKr 1.20.12 for vSphere 7.x has the following known issues:
Attempts to delete a stateful pod and reuse or delete its volume in the Tanzu Kubernetes cluster after an inactive session might cause failures and unpredictable behavior
When you delete a stateful pod after a day or so of being inactive, its volume appears to be successfully detached from the node VM of the Tanzu Kubernetes cluster. However, when you try to create a new stateful pod with that volume or delete the volume, your attempts fail because the volume is still attached to the node VM in vCenter Server.
Workaround:
Use the CNS API to detach the volume from the node VM to synchronize the state of the volume in the Tanzu Kubernetes cluster and vCenter Server. Restart the CSI controller in the Supervisor cluster to renew the session.
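For the restart, a rolling restart of the controller also renews the session. This sketch assumes the Supervisor deployment is named vsphere-csi-controller in the vmware-system-csi namespace, consistent with the pod names used elsewhere in these notes.
# Restart the CSI controller to force a new vCenter Server session
kubectl rollout restart deployment vsphere-csi-controller -n vmware-system-csi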
An expansion of a Supervisor cluster PVC in offline or online mode does not result in an expansion of a corresponding Tanzu Kubernetes cluster PVC
A pod that uses the Tanzu Kubernetes cluster PVC cannot use the expanded capacity of the Supervisor cluster PVC because the filesystem has not been resized.
Workaround:
Resize the Tanzu Kubernetes cluster PVC to a size equal to or greater than the size of the Supervisor cluster PVC.
Size mismatch in statically provisioned TKG PVC when compared to underlying volume
Static provisioning in Kubernetes does not verify if the PV and backing volume sizes are equal. If you statically create a PVC in a Tanzu Kubernetes cluster, and the PVC size is less than the size of the underlying corresponding Supervisor cluster PVC, you might be able to use more space than you request in the PV. If the size of the PVC you statically create in the Tanzu Kubernetes cluster is greater than the size of the underlying Supervisor cluster PVC, you might notice a No space left on device error even before you exhaust the requested size in the Tanzu Kubernetes cluster PV.
Workaround:
In the Tanzu Kubernetes cluster PV, change the persistentVolumeReclaimPolicy to Retain.
Note the volumeHandle of the Tanzu Kubernetes cluster PV and then delete the PVC and PV in the Tanzu Kubernetes cluster.
Re-create the Tanzu Kubernetes cluster PVC and PV statically using the volumeHandle and set the storage to the same size as the corresponding Supervisor cluster PVC.
Attempts to create a PVC from a Supervisor namespace or a TKG cluster fail if the external csi.vsphere.vmware.com provisioner loses its lease for leader election
When you try to create a PVC from a Supervisor namespace or a TKG cluster using the kubectl command, your attempts might not succeed. The PVC remains in the Pending state. If you describe the PVC, the Events field displays the following information in a table layout:
Type – Normal
Reason – ExternalProvisioning
Age – 56s (x121 over 30m)
From – persistentvolume-controller
Message – waiting for a volume to be created, either by external provisioner "csi.vsphere.vmware.com" or manually created by system administrator
Workaround:
Verify that all containers in the vsphere-csi-controller pod inside the vmware-system-csi namespace are running.
kubectl describe pod vsphere-csi-controller-pod-name -n vmware-system-csi
Check the external provisioner logs by using the following command.
kubectl logs vsphere-csi-controller-pod-name -n vmware-system-csi -c csi-provisioner
The following entry indicates that the external-provisioner sidecar container lost its leader election:
I0817 14:02:59.582663 1 leaderelection.go:263] failed to renew lease vmware-system-csi/csi-vsphere-vmware-com: failed to tryAcquireOrRenew context deadline exceeded
F0817 14:02:59.685847 1 leader_election.go:169] stopped leading
Delete this instance of vsphere-csi-controller.
kubectl delete pod vsphere-csi-controller-pod-name -n vmware-system-csi
Kubernetes will create a new instance of the CSI controller and all sidecars will be reinitialized.
All PVC operations, such as create, attach, detach, or delete a volume, fail while CSI cannot connect to vCenter Server
In addition to operation failures, the Volume health information and the StoragePool CR cannot be updated in the Supervisor cluster. The CSI and Syncer logs display errors about not being able to connect to vCenter Server.
CSI connects to vCenter Server as a specific solution user. The password for this SSO user is rotated by wcpsvc once every 24 hours, and the new password is transferred into a Secret that the CSI driver reads to connect to vCenter Server. If the new password fails to be delivered to the Secret, the stale password remains in the Supervisor cluster, and the CSI driver fails its operations.
This problem affects vSAN Data Persistence Platform and all CSI volume operations.
Workaround:
Typically the WCP Service delivers the updated password to the CSI driver that runs in the Kubernetes cluster. Occasionally, the password delivery does not happen, for example, because of a connectivity issue or an error in an earlier part of the sync process. The CSI driver continues to use the old password and eventually locks the account due to too many authentication failures.
Ensure that the WCP cluster is in a healthy and running state, with no errors reported for that cluster on the Workload Management page. After the problems causing the sync to fail are resolved, force a password refresh to unlock the account.
To force a reset of the password:
Stop wcpsvc:
vmon-cli -k wcp
Edit the time of the last password rotation to a small value, for example 1, by changing the third line of /etc/vmware/wcp/.storageUser to 1.
Start wcpsvc:
vmon-cli -i wcp
The wcpsvc resets the password, which unlocks the account and delivers the new password to the cluster.
TKr 1.20.12 for vSphere 7.x fixes the following issues:
Pods are not evenly scheduled across the nodes
This was a known regression that was introduced in upstream Kubernetes. The fix is included in this release.
CVE-2021-35942 has been addressed in this release
TKr 1.20.9 is generally available for use with vSphere 7.x.
| Name | Platform |
|---|---|
| TKr 1.20.9 for vSphere 7.x (Photon) | vSphere 7.0 Update 2 and later |
TKr v1.20.9 provides a Photon image (photon-3-k8s-v1.20.9---vmware.1-tkg.1.a4cee5b) for provisioning TKG clusters on Supervisor in the vSphere 7 with Tanzu environment.
TKr 1.20.9 for vSphere 7.x includes the following components.
| Component | Version |
|---|---|
| kubernetes | v1.20.9 |
| antrea | v0.11.3 |
| calico | v0.3.0.1 |
| csi | vsphere70u2 |
TKr 1.20.9 for vSphere 7.x has the following known issues:
Attempts to delete a stateful pod and reuse or delete its volume in the Tanzu Kubernetes cluster after an inactive session might cause failures and unpredictable behavior
When you delete a stateful pod after a day or so of being inactive, its volume appears to be successfully detached from the node VM of the Tanzu Kubernetes cluster. However, when you try to create a new stateful pod with that volume or delete the volume, your attempts fail because the volume is still attached to the node VM in vCenter Server.
Workaround:
Use the CNS API to detach the volume from the node VM to synchronize the state of the volume in the Tanzu Kubernetes cluster and vCenter Server. Restart the CSI controller in the Supervisor cluster to renew the session.
Pods are not evenly scheduled across the nodes
This is a known regression that was introduced in upstream Kubernetes. VMware will release an updated Tanzu Kubernetes release once a fix is available upstream.
An expansion of a Supervisor cluster PVC in offline or online mode does not result in an expansion of a corresponding Tanzu Kubernetes cluster PVC
A pod that uses the Tanzu Kubernetes cluster PVC cannot use the expanded capacity of the Supervisor cluster PVC because the filesystem has not been resized.
Workaround:
Resize the Tanzu Kubernetes cluster PVC to a size equal to or greater than the size of the Supervisor cluster PVC.
Size mismatch in statically provisioned TKG PVC when compared to underlying volume
Static provisioning in Kubernetes does not verify if the PV and backing volume sizes are equal. If you statically create a PVC in a Tanzu Kubernetes cluster, and the PVC size is less than the size of the underlying corresponding Supervisor cluster PVC, you might be able to use more space than you request in the PV. If the size of the PVC you statically create in the Tanzu Kubernetes cluster is greater than the size of the underlying Supervisor cluster PVC, you might notice a No space left on device error even before you exhaust the requested size in the Tanzu Kubernetes cluster PV.
Workaround:
In the Tanzu Kubernetes cluster PV, change the persistentVolumeReclaimPolicy to Retain.
Note the volumeHandle of the Tanzu Kubernetes cluster PV and then delete the PVC and PV in the Tanzu Kubernetes cluster.
Re-create the Tanzu Kubernetes cluster PVC and PV statically using the volumeHandle and set the storage to the same size as the corresponding Supervisor cluster PVC.
Attempts to create a PVC from a Supervisor namespace or a TKG cluster fail if the external csi.vsphere.vmware.com provisioner loses its lease for leader election
When you try to create a PVC from a Supervisor namespace or a TKG cluster using the kubectl command, your attempts might not succeed. The PVC remains in the Pending state. If you describe the PVC, the Events field displays the following information in a table layout:
Type – Normal
Reason – ExternalProvisioning
Age – 56s (x121 over 30m)
From – persistentvolume-controller
Message – waiting for a volume to be created, either by external provisioner "csi.vsphere.vmware.com" or manually created by system administrator
Workaround:
Verify that all containers in the vsphere-csi-controller pod inside the vmware-system-csi namespace are running.
kubectl describe pod vsphere-csi-controller-pod-name -n vmware-system-csi
Check the external provisioner logs by using the following command.
kubectl logs vsphere-csi-controller-pod-name -n vmware-system-csi -c csi-provisioner
The following entry indicates that the external-provisioner sidecar container lost its leader election:
I0817 14:02:59.582663 1 leaderelection.go:263] failed to renew lease vmware-system-csi/csi-vsphere-vmware-com: failed to tryAcquireOrRenew context deadline exceeded
F0817 14:02:59.685847 1 leader_election.go:169] stopped leading
Delete this instance of vsphere-csi-controller.
kubectl delete pod vsphere-csi-controller-pod-name -n vmware-system-csi
Kubernetes will create a new instance of the CSI controller and all sidecars will be reinitialized.
All PVC operations, such as create, attach, detach, or delete a volume, fail while CSI cannot connect to vCenter Server
In addition to operation failures, the Volume health information and the StoragePool CR cannot be updated in the Supervisor cluster. The CSI and Syncer logs display errors about not being able to connect to vCenter Server.
CSI connects to vCenter Server as a specific solution user. The password for this SSO user is rotated by wcpsvc once every 24 hours, and the new password is transferred into a Secret that the CSI driver reads to connect to vCenter Server. If the new password fails to be delivered to the Secret, the stale password remains in the Supervisor cluster, and the CSI driver fails its operations.
This problem affects vSAN Data Persistence Platform and all CSI volume operations.
Workaround:
Typically the WCP Service delivers the updated password to the CSI driver that runs in the Kubernetes cluster. Occasionally, the password delivery does not happen, for example, because of a connectivity issue or an error in an earlier part of the sync process. The CSI driver continues to use the old password and eventually locks the account due to too many authentication failures.
Ensure that the WCP cluster is in a healthy and running state, with no errors reported for that cluster on the Workload Management page. After the problems causing the sync to fail are resolved, force a password refresh to unlock the account.
To force a reset of the password:
Stop wcpsvc:
vmon-cli -k wcp
Edit the time of the last password rotation to a small value, for example 1, by changing the third line of /etc/vmware/wcp/.storageUser to 1.
Start wcpsvc:
vmon-cli -i wcp
The wcpsvc resets the password, which unlocks the account and delivers the new password to the cluster.
TKr 1.20.8 is generally available for use with vSphere 7.x.
| Name | Platform |
|---|---|
| TKr 1.20.8 for vSphere 7.x (Ubuntu) | vSphere 7.0 Update 2 and later |
TKr v1.20.8 provides an Ubuntu image (ubuntu-2004-v1.20.8---vmware.1-tkg.2) for provisioning TKG clusters on Supervisor in the vSphere 7 with Tanzu environment.
The image is optimized and tested specifically for GPU (AI/ML) workloads. VMware recommends using this node image only for such applications. Future releases of the Tanzu Kubernetes release based on Ubuntu will support more general application use cases.
TKr 1.20.8 for vSphere 7.x includes the following components.
Component |
Version |
---|---|
kubernetes |
v1.20.8 |
antrea |
v0.13.5 |
calico |
v3.11.2 |
csi |
vsphere70u2 |
TKr 1.20.8 for vSphere 7.x has the following known issues:
Attempts to delete a stateful pod and reuse or delete its volume in the Tanzu Kubernetes cluster after an inactive session might cause failures and unpredictable behavior
When you delete a stateful pod after a day or so of being inactive, its volume appears to be successfully detached from the node VM of the Tanzu Kubernetes cluster. However, when you try to create a new stateful pod with that volume or delete the volume, your attempts fail because the volume is still attached to the node VM in vCenter Server.
Workaround:
Use the CNS API to detach the volume from the node VM to synchronize the state of the volume in the Tanzu Kubernetes cluster and vCenter Server. Restart the CSI controller in the Supervisor cluster to renew the session.
Pods are not evenly scheduled across the nodes
This is a known regression that was introduced in upstream Kubernetes. VMware will release an updated Tanzu Kubernetes release once a fix is available upstream.
An expansion of a Supervisor cluster PVC in offline or online mode does not result in an expansion of a corresponding Tanzu Kubernetes cluster PVC
A pod that uses the Tanzu Kubernetes cluster PVC cannot use the expanded capacity of the Supervisor cluster PVC because the filesystem has not been resized.
Workaround:
Resize the Tanzu Kubernetes cluster PVC to a size equal to or greater than the size of the Supervisor cluster PVC.
Size mismatch in statically provisioned TKG PVC when compared to underlying volume
Static provisioning in Kubernetes does not verify if the PV and backing volume sizes are equal. If you statically create a PVC in a Tanzu Kubernetes cluster, and the PVC size is less than the size of the underlying corresponding Supervisor cluster PVC, you might be able to use more space than you request in the PV. If the size of the PVC you statically create in the Tanzu Kubernetes cluster is greater than the size of the underlying Supervisor cluster PVC, you might notice a No space left on device error even before you exhaust the requested size in the Tanzu Kubernetes cluster PV.
Workaround:
In the Tanzu Kubernetes cluster PV, change the persistentVolumeReclaimPolicy to Retain.
Note the volumeHandle of the Tanzu Kubernetes cluster PV and then delete the PVC and PV in the Tanzu Kubernetes cluster.
Re-create the Tanzu Kubernetes cluster PVC and PV statically using the volumeHandle and set the storage to the same size as the corresponding Supervisor cluster PVC.
Attempts to create a PVC from a Supervisor namespace or a TKG cluster fail if the external csi.vsphere.vmware.com provisioner loses its lease for leader election
When you try to create a PVC from a Supervisor namespace or a TKG cluster using the kubectl command, your attempts might not succeed. The PVC remains in the Pending state. If you describe the PVC, the Events field displays the following information in a table layout:
Type – Normal
Reason – ExternalProvisioning
Age – 56s (x121 over 30m)
From – persistentvolume-controller
Message – waiting for a volume to be created, either by external provisioner "csi.vsphere.vmware.com" or manually created by system administrator
Workaround:
Verify that all containers in the vsphere-csi-controller pod inside the vmware-system-csi namespace are running.
kubectl describe pod vsphere-csi-controller-pod-name -n vmware-system-csi
Check the external provisioner logs by using the following command.
kubectl logs vsphere-csi-controller-pod-name -n vmware-system-csi -c csi-provisioner
The following entry indicates that the external-provisioner sidecar container lost its leader election:
I0817 14:02:59.582663 1 leaderelection.go:263] failed to renew lease vmware-system-csi/csi-vsphere-vmware-com: failed to tryAcquireOrRenew context deadline exceeded
F0817 14:02:59.685847 1 leader_election.go:169] stopped leading
Delete this instance of vsphere-csi-controller.
kubectl delete pod vsphere-csi-controller-pod-name -n vmware-system-csi
Kubernetes will create a new instance of the CSI controller and all sidecars will be reinitialized.
All PVC operations, such as create, attach, detach, or delete a volume, fail while CSI cannot connect to vCenter Server
In addition to operation failures, the Volume health information and the StoragePool CR cannot be updated in the Supervisor cluster. The CSI and Syncer logs display errors about not being able to connect to vCenter Server.
CSI connects to vCenter Server as a specific solution user. The password for this SSO user is rotated by wcpsvc once every 24 hours, and the new password is transferred into a Secret that the CSI driver reads to connect to vCenter Server. If the new password fails to be delivered to the Secret, the stale password remains in the Supervisor cluster, and the CSI driver fails its operations.
This problem affects vSAN Data Persistence Platform and all CSI volume operations.
Workaround:
Typically the WCP Service delivers the updated password to the CSI driver that runs in the Kubernetes cluster. Occasionally, the password delivery does not happen, for example, because of a connectivity issue or an error in an earlier part of the sync process. The CSI driver continues to use the old password and eventually locks the account due to too many authentication failures.
Ensure that the WCP cluster is in a healthy and running state, with no errors reported for that cluster on the Workload Management page. After the problems causing the sync to fail are resolved, force a password refresh to unlock the account.
To force a reset of the password:
Stop wcpsvc:
vmon-cli -k wcp
Edit the time of the last password rotation to a small value, for example 1, by changing the third line of /etc/vmware/wcp/.storageUser to 1.
Start wcpsvc:
vmon-cli -i wcp
The wcpsvc resets the password, which unlocks the account and delivers the new password to the cluster.
In addition to reviewing the compatibility matrix, you should independently verify compatibility before using or upgrading to a particular Tanzu Kubernetes release. You must also verify compatibility before upgrading Supervisor. Note that upgrading Supervisor may initiate a rolling update of provisioned TKG clusters.
Compatibility with Supervisor
To verify Tanzu Kubernetes release compatibility with Supervisor, switch context to the target vSphere Namespace and run the command kubectl get tkr. The COMPATIBLE column displays a Boolean value showing the compatibility of each Tanzu Kubernetes release with Supervisor. True means the release is compatible with Supervisor. False means it is not.
For the Tanzu Kubernetes release to be compatible, Supervisor must be upgraded to the latest patch version shipped with the corresponding vCenter release.
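A minimal sketch of the check, assuming your kubeconfig already contains a context for the target vSphere Namespace (my-namespace is a placeholder):
# Switch to the vSphere Namespace context
kubectl config use-context my-namespace
# COMPATIBLE=True marks the releases you can use with this Supervisor
kubectl get tkr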
Compatibility with vSphere 8.x and 7.x
There are two types of TKr formats: non-legacy TKrs and legacy TKrs.
Non-legacy TKrs are purpose-built for vSphere 8.x and are compatible only with vSphere 8.x.
Legacy TKrs use a legacy format and are compatible with vSphere 7.x, and also with vSphere 8.x, but for upgrade purposes only.
To verify TKr compatibility with vSphere, run the command kubectl get tkr TKR_NAME --show-labels (or kubectl get tkr TKR_NAME -o yaml). To list only the TKrs that do not carry the legacy label, run the command kubectl get tkr -l '!run.tanzu.vmware.com/legacy-tkr'. See the examples below.
If the release includes the label run.tanzu.vmware.com/legacy-tkr, the image is based on the vSphere 7.x format. You can run a legacy TKr on vSphere 8.x, but you cannot take advantage of vSphere 8.x features, such as class-based clusters and Carvel packaging, until you upgrade to a non-legacy TKr. For more information, refer to the TKGS documentation.
The following example shows that v1.24.11---vmware.1-fips.1-tkg.1 is a legacy TKr because the legacy-tkr label is present.
kubectl get tkr v1.24.11---vmware.1-fips.1-tkg.1 -o yaml
apiVersion: run.tanzu.vmware.com/v1alpha3
kind: TanzuKubernetesRelease
metadata:
creationTimestamp: "2023-07-14T04:25:04Z"
finalizers:
- tanzukubernetesrelease.run.tanzu.vmware.com
generation: 1
labels:
os-arch: amd64
os-name: photon
os-type: linux
os-version: "3.0"
run.tanzu.vmware.com/legacy-tkr: ""
name: v1.24.11---vmware.1-fips.1-tkg.1
The following example shows that v1.25.7---vmware.3-fips.1-tkg.1 is a non-legacy TKr because the legacy-tkr label is absent.
kubectl get tkr v1.25.7---vmware.3-fips.1-tkg.1 -o yaml
apiVersion: run.tanzu.vmware.com/v1alpha3
kind: TanzuKubernetesRelease
metadata:
creationTimestamp: "2023-07-27T04:20:13Z"
finalizers:
- tanzukubernetesrelease.run.tanzu.vmware.com
generation: 2
labels:
os-arch: amd64
os-name: ubuntu
os-type: linux
os-version: "20.04"
name: v1.25.7---vmware.3-fips.1-tkg.1
The output of the command kubectl get tkr -l '!run.tanzu.vmware.com/legacy-tkr' returns all non-legacy TKrs. These are the TKrs purpose-built for vSphere 8.x and compatible only with vSphere 8.x (current as of April 11, 2024).
NAME VERSION READY COMPATIBLE CREATED
v1.23.15---vmware.1-tkg.4 v1.23.15+vmware.1-tkg.4 True True 317d
v1.23.8---vmware.2-tkg.2-zshippable v1.23.8+vmware.2-tkg.2-zshippable True True 317d
v1.24.9---vmware.1-tkg.4 v1.24.9+vmware.1-tkg.4 True True 317d
v1.25.7---vmware.3-fips.1-tkg.1 v1.25.7+vmware.3-fips.1-tkg.1 True True 259d
v1.26.13---vmware.1-fips.1-tkg.3 v1.26.13+vmware.1-fips.1-tkg.3 True True 14d
v1.26.5---vmware.2-fips.1-tkg.1 v1.26.5+vmware.2-fips.1-tkg.1 True True 231d
The Tanzu Kubernetes releases for vSphere 8.x are updated to a package-based framework for TKG 2.0 components, such as the container storage interface (CSI) and container network interface (CNI). To view the components and packages comprising a release, run the command kubectl get tkr TKR_NAME -o yaml.
For example, the command kubectl get tkr v1.25.7---vmware.3-fips.1-tkg.1 -o yaml returns the following:
spec:
bootstrapPackages:
- name: antrea.tanzu.vmware.com.1.9.0+vmware.2-tkg.1-advanced-vmware
- name: vsphere-pv-csi.tanzu.vmware.com.2.7.1+vmware.1-tkg.2-vmware
- name: vsphere-cpi.tanzu.vmware.com.1.25.1+vmware.2-tkg.2-vmware
- name: kapp-controller.tanzu.vmware.com.0.41.7+vmware.1-tkg.1-vmware
- name: guest-cluster-auth-service.tanzu.vmware.com.1.3.0+tkg.1-vmware
- name: metrics-server.tanzu.vmware.com.0.6.2+vmware.1-tkg.2-vmware
- name: secretgen-controller.tanzu.vmware.com.0.11.2+vmware.1-tkg.3-vmware
- name: pinniped.tanzu.vmware.com.0.12.1+vmware.3-tkg.4-vmware
- name: capabilities.tanzu.vmware.com.0.29.0+vmware.1
- name: calico.tanzu.vmware.com.3.24.1+vmware.1-tkg.2-vmware
kubernetes:
coredns:
imageTag: v1.9.3_vmware.7-fips.1
etcd:
imageTag: v3.5.6_vmware.9-fips.1
imageRepository: localhost:5000/vmware.io
pause:
imageTag: "3.8"
version: v1.25.7+vmware.3-fips.1
osImages:
- name: ob-21961079-ubuntu-2004-amd64-vmi-k8s-v1.25.7---vmware.3-fips.1-tkg.1
- name: ob-21961086-photon-3-amd64-vmi-k8s-v1.25.7---vmware.3-fips.1-tkg.1
version: v1.25.7+vmware.3-fips.1-tkg.1
Refer to the online edition of the compatibility matrix for current information.
The table lists TKrs and their compatibility with vCenter and Supervisor. The leading number is the Kubernetes version delivered with that release. Independently verify release compatibility before installing or upgrading. See Verify TKr Compatibility.
For the TKr to be compatible, Supervisor must be upgraded to the latest patch version shipped with the corresponding vCenter release.
TKr for vSphere 8.x
The table lists available TKrs for vSphere 8.x.
| TKr for vSphere 8.x | Name | Minimum vCenter Version | Supervisor Shipped with the Minimum vCenter Version | Release Date |
|---|---|---|---|---|
| TKr 1.30.1 vSphere 8.x (Photon and Ubuntu) | | vSphere 8.0 U3 | Supervisor 1.26, 1.27, 1.28 | 7/8/2024 |
| TKr 1.29.4 vSphere 8.x (Photon and Ubuntu) | | vSphere 8.0 U3 | Supervisor 1.26, 1.27, 1.28 | 6/27/2024 |
| TKr 1.28.8 vSphere 8.x (Photon and Ubuntu) | | vSphere 8.0 U2c | Supervisor 1.23, 1.24, 1.25 | 5/8/2024 |
| TKr 1.27.11 vSphere 8.x (Photon and Ubuntu) | | vSphere 8.0 U2c | Supervisor 1.23, 1.24, 1.25 | 4/18/2024 |
| TKr 1.26.13 vSphere 8.x (Photon and Ubuntu) | | vSphere 8.0 U1c | Supervisor 1.23, 1.24, 1.25 | 3/15/2024 |
| TKr 1.26.5 vSphere 8.x (Photon and Ubuntu) | | vSphere 8.0 U1c | Supervisor 1.23, 1.24, 1.25 | 8/24/2023 |
| TKr 1.25.7 vSphere 8.x (Photon and Ubuntu) | | vSphere 8.0 U1c | Supervisor 1.23, 1.24, 1.25 | 7/27/2023 |
| TKr 1.24.9 for vSphere 8.x (Photon and Ubuntu) | | vSphere 8.0 U1 | Supervisor 1.22, 1.23, 1.24 | 4/18/2023 |
| TKr 1.23.15 for vSphere 8.x (Photon and Ubuntu) | | vSphere 8.0 U1 | Supervisor 1.22, 1.23, 1.24 | 4/18/2023 |
| TKr 1.23.8 for vSphere 8.x (Photon and Ubuntu) | | vSphere 8.0 | Supervisor 1.22, 1.23, 1.24 | 2/10/2023 |
Legacy TKr for vSphere 7.x
The table lists available TKrs for vSphere 7.x.
| TKr for vSphere 7.x | Name | Minimum vCenter Server Version | Supervisor Version Shipped with the Minimum vCenter Version | Release Date |
|---|---|---|---|---|
| TKr 1.28.7 vSphere 7.x (Photon) | | vCenter Server 7.0 Update 3P; vCenter Server 8.0 U2c (for upgrade to vSphere 8.x TKr) | Supervisor 1.23, 1.24, 1.25 | 7/12/2024 |
| TKr 1.28.7 vSphere 7.x (Ubuntu) | | vCenter Server 7.0 Update 3P; vCenter Server 8.0 U2c (for upgrade to vSphere 8.x TKr) | Supervisor 1.23, 1.24, 1.25 | 7/12/2024 |
| TKr 1.27.10 vSphere 7.x (Photon) | | vCenter Server 7.0 Update 3P; vCenter Server 8.0 U2c (for upgrade to vSphere 8.x TKr) | Supervisor 1.23, 1.24, 1.25 | 4/5/2024 |
| TKr 1.27.10 vSphere 7.x (Ubuntu) | | vCenter Server 7.0 Update 3P; vCenter Server 8.0 U2c (for upgrade to vSphere 8.x TKr) | Supervisor 1.23, 1.24, 1.25 | 4/5/2024 |
| TKr 1.27.6 vSphere 7.x (Photon) | | vCenter Server 7.0 Update 3P; vCenter Server 8.0 U2c (for upgrade to vSphere 8.x TKr) | Supervisor 1.23, 1.24, 1.25 | 1/31/2024 |
| TKr 1.27.6 vSphere 7.x (Ubuntu) | | vCenter Server 7.0 Update 3P; vCenter Server 8.0 U2c (for upgrade to vSphere 8.x TKr) | Supervisor 1.23, 1.24, 1.25 | 1/31/2024 |
| TKr 1.26.12 vSphere 7.x (Photon) | | vCenter Server 7.0 Update 3P; vCenter Server 8.0 U2b (for upgrade to vSphere 8.x TKr) | Supervisor 1.23, 1.24, 1.25 | 3/15/2024 |
| TKr 1.26.12 vSphere 7.x (Ubuntu) | | vCenter Server 7.0 Update 3P; vCenter Server 8.0 U2b (for upgrade to vSphere 8.x TKr) | Supervisor 1.23, 1.24, 1.25 | 3/15/2024 |
| TKr 1.26.10 vSphere 7.x (Photon) | | vCenter Server 7.0 Update 3P; vCenter Server 8.0 U2b (for upgrade to vSphere 8.x TKr) | Supervisor 1.23, 1.24, 1.25 | 12/07/2023 |
| TKr 1.26.10 vSphere 7.x (Ubuntu) | | vCenter Server 7.0 Update 3P; vCenter Server 8.0 U2b (for upgrade to vSphere 8.x TKr) | Supervisor 1.23, 1.24, 1.25 | 12/07/2023 |
| TKr 1.25.13 vSphere 7.x (Photon) | | vCenter Server 7.0 Update 3P; vCenter Server 8.0 U2b (for upgrade to vSphere 8.x TKr) | Supervisor 1.23, 1.24, 1.25 | 12/07/2023 |
| TKr 1.25.13 vSphere 7.x (Ubuntu) | | vCenter Server 7.0 Update 3P; vCenter Server 8.0 U2b (for upgrade to vSphere 8.x TKr) | Supervisor 1.23, 1.24, 1.25 | 12/07/2023 |
| TKr 1.24.11 vSphere 7.x (Photon) | | vCenter Server 7.0 Update 3N; vCenter Server 8.0a (for upgrade to vSphere 8.x TKr) | Supervisor 1.22, 1.23, 1.24 | 7/13/2023 |
| TKr 1.24.11 vSphere 7.x (Ubuntu) | | vCenter Server 7.0 Update 3N; vCenter Server 8.0a (for upgrade to vSphere 8.x TKr) | Supervisor 1.22, 1.23, 1.24 | 7/13/2023 |
| TKr 1.23.8 for vSphere 7.x (Photon) | | vCenter Server 7.0 Update 3F; vCenter Server 8.0a (for upgrade to vSphere 8.x TKr) | Supervisor 1.20, 1.21, 1.22 | 1/6/2023 |
| TKr 1.23.8 for vSphere 7.x (Ubuntu) | | vCenter Server 7.0 Update 3L; vCenter Server 8.0a (for upgrade to vSphere 8.x TKr) | Supervisor 1.21, 1.22, 1.23 | 5/18/2023 |
| TKr 1.22.9 for vSphere 7.x (Photon) | | vCenter Server 7.0 Update 3E; vCenter Server 8.0a (for upgrade to vSphere 8.x TKr) | Supervisor 1.20, 1.21, 1.22 | 3/28/2022 |
| TKr 1.22.9 for vSphere 7.x (Ubuntu) | | vCenter Server 7.0 Update 3L; vCenter Server 8.0a (for upgrade to vSphere 8.x TKr) | Supervisor 1.21, 1.22, 1.23 | 4/5/2023 |
| TKr 1.21.6 for vSphere 7.x (Photon) | | vCenter Server 7.0 Update 3; vCenter Server 8.0a+ (for upgrade to vSphere 8.x TKr) | Supervisor 1.20, 1.21, 1.22 | 2/21/2022 |
| TKr 1.21.6 for vSphere 7.x (Ubuntu) | | vCenter Server 7.0 Update 3; vCenter Server 8.0a (for upgrade to vSphere 8.x TKr) | Supervisor 1.20, 1.21, 1.22 | 3/4/2022 |
| TKr 1.21.2 for vSphere 7.x (Photon) | | vCenter Server 7.0 Update 3; vCenter Server 8.0a (for upgrade to vSphere 8.x TKr) | Supervisor 1.20, 1.21, 1.22 | 11/16/2021 |
Historical TKrs
Refer to the online edition of the compatibility matrix for the most current information.
The table lists historical TKrs. The version number is the Kubernetes version delivered with that release. While these TKrs are compatible with the specified environment, they do not support the latest TKG functionality and are superseded by later releases. In addition, you cannot upgrade to vSphere 8 using these releases.
| VMware Tanzu Kubernetes release | Description |
|---|---|
| | Compatible with vSphere 7.0 Update 2 if you are using Supervisor Cluster version 1.18.10 and above. |
| | Compatible with vSphere 7.0 Update 2 if you are using Supervisor Cluster version 1.18.10 and above. |
| | Compatible with vSphere 7.0 Update 3 if you are using Supervisor Cluster version 1.21.0. |
| | Compatible with vSphere 7.0 Update 2 if you are using Supervisor Cluster version 1.18.10 and above. |
| | Compatible with vSphere 7.0 Update 2 if you are using Supervisor Cluster version 1.18.10 and above. For Kubernetes version 1.20.2, you might also see |
| | Compatible with vSphere 7.0 Update 2 if you are using Supervisor Cluster version 1.18.10 and above. |
| | Compatible with vSphere 7.0 Update 2 if you are using Supervisor Cluster version 1.18.10 and above. |
| | Compatible with vSphere 7.0 Update 2 if you are using Supervisor Cluster version 1.18.10 and above. |
| | Compatible with vSphere 7.0 Update 2 if you are using Supervisor Cluster version 1.18.10 and above. For Kubernetes version 1.19.7, you might also see |
| | Compatible with vSphere 7.0 Update 1 and above. |
| | Compatible with vSphere 7.0 Update 1 and above. For Kubernetes version 1.18.15, you might also see |
| | Compatible with vSphere 7.0 Update 1 and above. |
| | Compatible with vSphere 7.0 Update 1 and above. |
| | Compatible with vSphere 7.0 Update 1 and above. |
| | Compatible with vSphere 7.0 Update 1 and above. |
| | Compatible with vSphere 7.0 Update 1 and above. |
| | Compatible with vSphere 7.0 Update 1 and above. |
| | Compatible with vSphere 7.0 Update 1 and above. Incompatible with Antrea CNI. |
| | Compatible with vSphere 7.0 Update 1, but not recommended for use with vSphere 7.0 Update 2. |
| | Compatible with vSphere 7.0 Update 1, but not recommended for use with vSphere 7.0 Update 2. Incompatible with Antrea CNI. |
| | Incompatible with vSphere 7.0 Update 1 and above. Incompatible with Antrea CNI. |