Except where noted, these release notes apply to all v2.5.x patch versions of Tanzu Kubernetes Grid (TKG).
TKG v2.5.x is distributed as a downloadable Tanzu CLI package that deploys a versioned TKG standalone management cluster. TKG v2.5.x supports creating and managing class-based workload clusters with a standalone management cluster that can run on vSphere.
Tanzu Kubernetes Grid v2.5.x includes the following new features.
New features in Tanzu Kubernetes Grid v2.5.2:
Component version updates listed in Component Versions
Note: If you use NSX Advanced Load Balancer, TKG v2.5.2 installs Avi Kubernetes Operator (AKO) v1.12.2 in management clusters. According to the AKO Compatibility Matrix, AKO v1.12 supports Kubernetes versions 1.25 to 1.29. However, TKG v2.5.2 and AKO v1.12.2 have been fully tested and validated with Kubernetes v1.30 workload clusters. Using TKG v2.5.2 and AKO v1.12.2 with Kubernetes 1.30 clusters is fully supported.
New features in Tanzu Kubernetes Grid v2.5.1:
New features in Tanzu Kubernetes Grid v2.5.0:
KAPP_CONCURRENCY, KAPP_ROLLINGUPDATE_MAXSURGE, KAPP_ROLLINGUPDATE_MAXUNAVAILABLE, and KAPP_UPDATE_STRATEGY configure how to maintain continuity during kapp-controller version update; see Kubernetes and Package Tuning in the Configuration File Variable Reference. KAPP_CONCURRENCY defaults to 10, which overrides the default value of 4 for its underlying upstream object property, kappController.deployment.concurrency.
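A minimal sketch of how these variables might appear in a cluster configuration file; the values shown are illustrative only, and the accepted values for KAPP_UPDATE_STRATEGY are an assumption, so check the Configuration File Variable Reference before using them:
KAPP_CONCURRENCY: 10
KAPP_ROLLINGUPDATE_MAXSURGE: 1
KAPP_ROLLINGUPDATE_MAXUNAVAILABLE: 0
KAPP_UPDATE_STRATEGY: "RollingUpdate"  # assumed to follow Kubernetes Deployment strategy names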
tanzu diagnostics commands troubleshoot standalone management, workload, and bootstrap clusters; see Troubleshooting Clusters with the diagnostics Plugin.
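A hypothetical invocation is shown below; the collect subcommand name is an assumption, so see the linked topic for the actual usage and available flags:
tanzu diagnostics collect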
Each version of Tanzu Kubernetes Grid adds support for new Kubernetes versions, distributed as Tanzu Kubernetes releases (TKrs), and new Tanzu Standard package versions.
VMware supports TKG v2.5.2 with Kubernetes v1.30, v1.29, v1.28, v1.27, and v1.26. TKG v2.5.0 and v2.5.1 support v1.28, v1.27, and v1.26. For information about the Kubernetes versions supported by each release of TKG, see Kubernetes Version Support Policies in About Tanzu Kubernetes Grid.
Note: If you use NSX Advanced Load Balancer, TKG v2.5.2 installs Avi Kubernetes Operator (AKO) v1.12.2 in management clusters. According to the AKO Compatibility Matrix, AKO v1.12 supports Kubernetes versions 1.25 to 1.29. However, TKG v2.5.2 and AKO v1.12.2 have been fully tested and validated with Kubernetes v1.30 workload clusters. Using TKG v2.5.2 and AKO v1.12.2 with Kubernetes 1.30 clusters is fully supported.
For information about the support lifecycle of TKG, see Supported Tanzu Kubernetes Grid Versions in About Tanzu Kubernetes Grid.
Package versions in the Tanzu Standard repositories for TKG v2.5.0 and v2.5.1 are compatible via TKrs with Kubernetes minor versions v1.28, v1.27, and v1.26. Package versions in the Tanzu Standard repositories for TKG v2.5.2 are compatible with Kubernetes minor versions v1.30, v1.29, v1.28, v1.27, and v1.26. See the Tanzu Standard Repository Release Notes for information.
Tanzu Kubernetes Grid v2.5.x supports the following infrastructure platforms and operating systems (OSs), as well as cluster creation and management, networking, storage, authentication, backup and migration, and observability components.
See Component Versions for a full list of component versions included in TKG v2.5.2.
See Tanzu Standard Repository Release Notes for additional package versions compatible with TKG v2.5.2.
Infrastructure platform | |
Tanzu CLI | Tanzu CLI Core v1.0.0, v1.1.x, v1.2.x, v1.3.x** |
TKG API and package infrastructure | Tanzu Framework v0.32.3 |
Default ClusterClass | v0.32.2, tkg-vsphere-default-v1.2.0 |
Cluster creation and management | Core Cluster API (v1.7.3), Cluster API Provider vSphere (v1.10.1) |
Kubernetes node OS distributed with TKG | |
Build your own image | Photon OS 3 and OS 5; Red Hat Enterprise Linux 8; Ubuntu 20.04 and 22.04; Windows 2019 |
Container runtime | Containerd (v1.6.33) |
Container networking | Antrea (v1.13.3), Calico (v3.27.3), Multus CNI via Tanzu Standard Packages |
Container registry | Harbor v2.10.3 via Tanzu Standard Packages |
Ingress | NSX Advanced Load Balancer (Avi Controller) v22.1.3-v22.1.7***, NSX v4.1.2 (vSphere 8.0U2), v3.2.2 (vSphere 7.0U3), Contour via Tanzu Standard Packages |
Storage | vSphere Container Storage Interface (v3.2.0) and vSphere Cloud Native Storage |
Authentication | OIDC and LDAP via Pinniped (v0.25.0) |
Observability | Fluent Bit, Prometheus, and Grafana via Tanzu Standard Packages |
Service Discovery | External DNS via Tanzu Standard Packages |
Backup and migration | Velero (v1.13.2) |
* For a list of VMware Cloud on AWS SDDC versions that are compatible with this release, see the VMware Product Interoperability Matrix.
** For a full list of Tanzu CLI versions that are compatible with this release, see Product Interoperability Matrix.
*** TKG v2.5.x does not support NSX ALB v30.1.1+.
For a full list of Kubernetes versions that ship with Tanzu Kubernetes Grid v2.5.x, see Supported Kubernetes Versions above.
The TKG v2.5.x release includes the following software component versions:
Note: Previous TKG releases included components that are now distributed via the Tanzu Standard repository. For a list of these components, see Tanzu Standard Repository Release Notes.
Component | TKG v2.5.2 | TKG v2.5.1 | TKG v2.5 |
---|---|---|---|
aad-pod-identity | v1.8.15+vmware.2 | | |
addons-manager | v1.6.0+vmware.1 | v1.6.0+vmware.1 | v1.6.0+vmware.1 |
ako-operator | v1.12.2+vmware.1* | v1.11.0+vmware.2 | v1.11.0+vmware.2* |
antrea | v1.13.3+vmware.1-advanced | v1.13.3+vmware.1-advanced* | v1.13.1_vmware.3-advanced* |
antrea-interworking | v0.13.1+vmware.1 | v0.13.1+vmware.1* | v0.13.0* |
calico_all | v3.26.3+vmware.2* | v3.26.3+vmware.1 | v3.26.3+vmware.1* |
capabilities-package | v0.32.3-capabilities* | v0.32.2-capabilities* | v0.32.0-capabilities* |
carvel-secretgen-controller | v0.15.0+vmware.2* | v0.15.0+vmware.1 | v0.15.0+vmware.1* |
cloud_provider_vsphere | v1.30.1+vmware.2* | v1.28.0+vmware.1 | v1.28.0+vmware.1* |
cluster_api | v1.7.3+vmware.0* | v1.5.6+vmware.0* | v1.5.3+vmware.0* |
cluster-api-ipam-provider-in-cluster | v0.1.0+vmware.7 | v0.1.0+vmware.7 | v0.1.0_vmware.7* |
cluster_api_vsphere | v1.10.1+vmware.0* | v1.8.8+vmware.0* | v1.8.4+vmware.0 |
cni_plugins | v1.4.0+vmware.1* | v1.2.0+vmware.13* | v1.2.0+vmware.10* |
containerd | v1.6.33+vmware.2* | v1.6.28+vmware.2* | v1.6.24+vmware.2* |
crash-diagnostics | v0.3.7+vmware.8 | v0.3.7+vmware.8 | v0.3.7+vmware.8* |
cri_tools | v1.28.0+vmware.9* | v1.27.0+vmware.8* | v1.27.0+vmware.4* |
csi_attacher | v4.5.1+vmware.1*, v4.5.0+vmware.2* | v4.5.0+vmware.1*, v4.3.0+vmware.2, v4.2.0+vmware.3 | v4.3.0+vmware.2, v4.2.0+vmware.3 |
csi_livenessprobe | v2.12.0+vmware.2* | v2.12.0+vmware.1*, v2.10.0+vmware.2, v2.9.0+vmware.3 | v2.10.0+vmware.2, v2.9.0+vmware.3 |
csi_node_driver_registrar | v2.10.1+vmware.1*, v2.10.0+vmware.2* | v2.10.0+vmware.1*, v2.8.0+vmware.2, v2.7.0+vmware.3 | v2.8.0+vmware.2, v2.7.0+vmware.3 |
csi_provisioner | v4.0.1+vmware.1*, v4.0.0+vmware.2* | v4.0.0+vmware.1*, v3.5.0+vmware.2, v3.4.1+vmware.3, v3.4.0+vmware.3 | v3.5.0+vmware.2, v3.4.1+vmware.3, v3.4.0+vmware.3 |
dns (coredns) | v1.11.1+vmware.10*, v1.10.1+vmware.21*, v1.10.1+vmware.20*, v1.9.3+vmware.22 | v1.10.1+vmware.17* | v1.10.1_vmware.13* |
etcd | v3.5.12+vmware.5* | v3.5.11+vmware.4* | v3.5.9_vmware.6* |
external-snapshotter | v7.0.2+vmware.1*, v7.0.1+vmware.2*, v6.1.0+vmware.7, v6.1.0+vmware.1 | v7.0.1+vmware.1*, v6.2.2+vmware.2, v6.2.1+vmware.3 | v6.2.2+vmware.2, v6.2.1+vmware.3 |
guest-cluster-auth-service | v1.3.3_vmware.1 | v1.3.3_vmware.1* | v1.3.0_vmware.1* |
image-builder | v0.1.14+vmware.2 | v0.1.14+vmware.2 | v0.1.14+vmware.2* |
image-builder-resource-bundle | v1.29.6+vmware.1* | v1.28.7+vmware.1-tkg.3* | v1.28.4_vmware.1-tkg.1* |
imgpkg | v0.36.0+vmware.2 | v0.36.0+vmware.2 | v0.36.0+vmware.2 |
jetstack_cert-manager (cert-manager) | v1.12.10+vmware.2* | v1.12.2+vmware.2 | v1.12.2+vmware.2* |
k14s_kapp (kapp) | v0.55.0+vmware.2 | v0.55.0+vmware.2 | v0.55.0+vmware.2 |
k14s_ytt (ytt) | v0.45.0+vmware.2 | v0.45.0+vmware.2 | v0.45.0+vmware.2 |
kapp-controller | v0.48.2+vmware.1 | v0.48.2+vmware.1 | v0.48.2+vmware.1* |
kbld | v0.37.0+vmware.2 | v0.37.0+vmware.2 | v0.37.0+vmware.2 |
kube-vip | v0.6.4+vmware.2* | v0.5.12+vmware.2 | v0.5.12+vmware.2* |
kube-vip-cloud-provider | v0.0.5+vmware.3* | v0.0.5+vmware.2 | v0.0.5+vmware.2* |
kubernetes | v1.30.2+vmware.1*, v1.29.6+vmware.1*, v1.28.11+vmware.1*, v1.27.15+vmware.1*, v1.26.14+vmware.1 | v1.28.7+vmware.1*, v1.27.11+vmware.1*, v1.26.14+vmware.1* | v1.28.4+vmware.1*, v1.27.8+vmware.1*, v1.26.11+vmware.1* |
kubernetes-csi_external-resizer | v1.10.1+vmware.1*, v1.10.0+vmware.2* | v1.10.0+vmware.1*, v1.8.0+vmware.2, v1.7.0+vmware.3 | v1.8.0+vmware.2, v1.7.0+vmware.3 |
kubernetes-sigs_kind | v1.30.2+vmware.1*, v1.29.6+vmware.1*, v1.28.11+vmware.2*, v1.27.15+vmware.1*, v1.26.14+vmware.1 | v1.28.7+vmware.1-tkg.1_v0.20.0*, v1.27.11+vmware.1-tkg.1_v0.20.0, v1.26.14+vmware.1-tkg.1_v0.17.0 | v1.28.4+vmware.1-tkg.1_v0.20.0* |
kubernetes_autoscaler | v1.30.0+vmware.2* | v1.28.0+vmware.1 | v1.28.0+vmware.1* |
load-balancer-and-ingress-service (AKO) | v1.11.2+vmware.2* | v1.11.2+vmware.1 | v1.11.2+vmware.1* |
metrics-server | v0.6.2+vmware.8* | v0.6.2+vmware.3 | v0.6.2+vmware.3* |
pinniped | v0.25.0+vmware.3* | v0.25.0+vmware.2 | v0.25.0+vmware.2* |
pinniped-post-deploy | v0.25.0+vmware.2* | v0.25.0+vmware.1 | v0.25.0+vmware.1* |
sonobuoy | v0.57.0+vmware.1 | v0.57.0+vmware.1 | v0.57.0+vmware.1* |
tanzu-framework | v0.32.3* | v0.32.2* | v0.32.0* |
tanzu-framework-addons | v0.32.3* | v0.32.2* | v0.32.0* |
tanzu-framework-management-packages | v0.32.3* | v0.32.2* | v0.32.0* |
tkg-bom | v2.5.2* | v2.5.1* | v2.5.0* |
tkg-core-packages | v1.28.9+vmware.1-tkg.2* | v1.28.7+vmware.1-tkg.3* | v1.28.4+vmware.1-tkg.1* |
tkg-standard-packages | v2024.8.21* | v2024.4.12* | v2024.2.1* |
tkg-storageclass-package | v0.32.3* | v0.32.2* | v0.32.0* |
tkg_telemetry | v2.3.0+vmware.3 | v2.3.0+vmware.3 | v2.3.0+vmware.3 |
velero | 1.13.2_vmware.1* | v1.12.1_vmware.1 | v1.12.1_vmware.1* |
velero-mgmt-cluster-plugin | v0.3.0+vmware.1 | v0.3.0+vmware.1 | v0.3.0+vmware.1* |
velero-plugin-for-aws | v1.9.2+vmware.1* | v1.8.1+vmware.1 | v1.8.1+vmware.1* |
velero-plugin-for-csi | v0.7.1+vmware.1* | v0.6.1+vmware.1 | v0.6.1+vmware.1* |
velero-plugin-for-microsoft-azure | v1.9.2+vmware.1* | v1.8.1+vmware.1 | v1.8.1+vmware.1* |
velero-plugin-for-vsphere | v1.5.2+vmware.1 | v1.5.2+vmware.1 | v1.5.2+vmware.1* |
vendir | v0.33.1+vmware.2 | v0.33.1+vmware.2 | v0.33.1+vmware.2 |
vsphere_csi_driver | v3.2.0+vmware.2* | v3.2.0+vmware.1* | v3.1.2+vmware.1* |
* Indicates a new component or version bump since the previous release. TKG v2.5.1 is previous to v2.5.2, and v2.4.1 is previous to v2.5.0.
For a list of software component versions that ship with TKG v2.5.x, use imgpkg to pull repository bundles and then list their contents. For example, to list the component versions that ship with the Tanzu Standard repository for TKG v2.5.2, run the following command:
imgpkg pull -b projects.registry.vmware.com/tkg/packages/standard/repo:v2024.8.21 -o standard-v2024.8.21
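After the pull completes, one way to inspect what the bundle contains is to list the files in the output directory passed with -o, for example:
ls -R standard-v2024.8.21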
In the TKG upgrade path, v2.5.2 immediately follows v2.5.1.
You can only upgrade to Tanzu Kubernetes Grid v2.5.x from v2.4.x. If you want to upgrade to Tanzu Kubernetes Grid v2.5.x from a version earlier than v2.4.x, you must upgrade to v2.4.x first.
When upgrading Kubernetes versions on workload clusters, you cannot skip minor versions. For example, you cannot upgrade a Tanzu Kubernetes cluster directly from v1.26.x to v1.28.x. You must upgrade a v1.26.x cluster to v1.27.x before upgrading the cluster to v1.28.x.
Tanzu Kubernetes Grid v2.5.x release dates are:
Tanzu Kubernetes Grid v2.5.x introduces the following new behaviors compared with v2.4.x, which is the latest previous release.
Tanzu Kubernetes Grid v2.5.x does not support the creation of standalone TKG management clusters and TKG workload clusters on AWS and Azure. Use Tanzu Mission Control to create native AWS EKS and Azure AKS clusters on AWS and Azure. For information about how to create native AWS EKS and Azure AKS clusters with Tanzu Mission Control, see Managing the Lifecycle of AWS EKS Clusters and Managing the Lifecycle of Azure AKS Clusters in the Tanzu Mission Control documentation.
For information about why VMware no longer supports TKG clusters on AWS and Azure, see VMware Tanzu Aligns to Multi-Cloud Industry Trends on the VMware Tanzu blog.
When upgrading a legacy plan-based workload cluster to Kubernetes v1.28, you must specify the --os-name, --os-version, and --os-arch options in the tanzu cluster upgrade command.
When upgrading a Photon 3 workload cluster to Kubernetes v1.27 and keeping Photon 3 as its node OS, you must specify photon3 for --os-name or edit the Cluster object.
For details, see Additional Steps for Certain OS, Kubernetes, and Cluster Type Combinations in Upgrade Workload Clusters.
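As a sketch only, upgrade commands for these two cases might look like the following; the cluster names are hypothetical, and the OS values must match a node image template that you have imported into vSphere:
tanzu cluster upgrade my-workload-cluster --os-name ubuntu --os-version 20.04 --os-arch amd64
tanzu cluster upgrade my-photon-cluster --os-name photon3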
For security hardening, TKG v2.5 sets the minimum TLS version to v1.3 for all Kubernetes components. Previous versions set this to the Kubernetes default, TLS v1.2.
To create a new TKG cluster that uses TLS version v1.2 instead of v1.3, set APISERVER_EXTRA_ARGS to include tls-min-version=VersionTLS12 in its cluster configuration file as described in Kubernetes and Package Tuning.
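For example, the setting could appear in the cluster configuration file as follows (a minimal sketch):
APISERVER_EXTRA_ARGS: "tls-min-version=VersionTLS12"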
To change the TLS version of an existing TKG workload cluster, see the Knowledge Base article Updating the API Server “tls-min-version” on a Running Workload Cluster.
From v2.5.1 onwards, Tanzu Kubernetes Grid does not support creating management clusters or workload clusters on vSphere 6.7. TKG v2.5.1 includes critical Cloud Native Storage (CNS) updates for Container Storage Interface (CSI) storage functionality that are not compatible with vSphere 6.7. Creating clusters on vSphere 6.7 is supported on TKG versions up to and including v2.5.0 only. General support for vSphere 6.7 ended in October 2022 and VMware recommends that you upgrade to vSphere 7 or 8.
For workload clusters that use Photon 3 as the node OS, if you are upgrading the cluster from Kubernetes v1.26 to v1.27, TKG also by default upgrades the cluster to Photon 5, which is the default OS version for TKG clusters running Kubernetes v1.27 on Photon. If you want the upgraded cluster to continue using Photon 3 as its node OS, follow the additional steps for Kubernetes v1.27 on Photon 3 described in Additional Steps for Certain OS, Kubernetes, and Cluster Type Combinations in Upgrade Workload Clusters. There are different steps for plan-based and class-based workload clusters.
Also as part of the upgrade from Photon 3 to Photon 5, the cgroup version is upgraded from v1 to v2 by default. This change might impact downstream applications such as Java or systemd. For information about how the cgroup upgrade might affect your clusters, see Migrating to cgroup v2 in the Kubernetes documentation.
This section provides advance notice of behavior changes and feature deprecations that will take effect in future releases, after the TKG v2.5.x releases.
The following previously-documented customer-visible issues are resolved in Tanzu Kubernetes Grid v2.5.2.
Before upgrade, you must manually update a changed Avi certificate in the tkg-system package values
Management cluster upgrade fails if you have rotated an Avi Controller certificate, even if you have updated its value in the management cluster’s secret/avi-controller-ca as described in Modify the Avi Controller Credentials.
Failure occurs because updating secret/avi-controller-ca does not copy the new value into the management cluster’s tkg-system package values, and TKG uses the certificate value from those package values during upgrade. For legacy management clusters created in TKG v1.x, the new value is also not copied into the ako-operator-addon secret.
CAPV controller manager stops unexpectedly
In TKG v2.5.1, the CAPV controller manager runs initially but stops after some time.
Scaling ClusterClass clusters manually when autoscaler is enabled causes autoscaler not to function correctly
If you enabled autoscaler on a ClusterClass cluster when you created it, and if you then run tanzu cluster scale to scale that cluster manually, the tanzu cluster scale operation will succeed but afterwards autoscaler will not function correctly.
In TKG v2.5.2, the Tanzu CLI prevents you from running tanzu cluster scale on a cluster when autoscaler is enabled.
No previously documented customer-visible issues are resolved in Tanzu Kubernetes Grid v2.5.1.
The following issues that were documented as Known Issues in earlier Tanzu Kubernetes Grid releases are resolved in Tanzu Kubernetes Grid v2.5.0.
Creating a ClusterClass workload cluster fails on IPv4 primary dualstack in an airgapped environment with proxy mode enabled
Attempts to create a ClusterClass workload cluster on IPv4 primary dualstack in an airgapped environment with proxy mode enabled fail with the error unable to wait for cluster nodes to be available: worker nodes are still being created for MachineDeployment 'wl-antrea-md-0-5zlgc', DesiredReplicas=1 Replicas=0 ReadyReplicas=0 UpdatedReplicas=0
The antrea-agent log shows Error running agent: error initializing agent: K8s Node should have an IPv6 address if IPv6 Pod CIDR is defined.
When multiple vSphere OVA templates are detected, the first one is used
If multiple templates are detected at the same path that have the same os-name, os-arch, and os-version, the first one that matches the requirement is used. This has been fixed so that an error is thrown prompting the user to provide the full path to the desired template.
Deploying management cluster to vSphere 7 fails while waiting for the cluster control plane to become available
If you specify the VM Network when deploying a management cluster to vSphere 7, the deployment fails with the error unable to set up management cluster: unable to wait for cluster control plane available: control plane is not available yet.
The following are known issues in Tanzu Kubernetes Grid v2.5.x. Any known issues that were present in v2.5.0 that have been resolved in a subsequent v2.5.x patch release are listed under the Resolved Issues for the patch release in which they were fixed.
You can find additional solutions to frequently encountered issues in Troubleshooting Management Cluster Issues and Troubleshooting Workload Cluster Issues, or in Broadcom Communities.
Scaling ClusterClass clusters manually when autoscaler is enabled causes autoscaler not to function correctly
If you enabled autoscaler on a ClusterClass cluster when you created it, and if you then run tanzu cluster scale to scale that cluster manually, the tanzu cluster scale operation will succeed but afterwards autoscaler will not function correctly.
Workaround: In TKG v2.5.0 and v2.5.1, do not run tanzu cluster scale on a cluster when autoscaler is enabled. In TKG v2.5.2, the Tanzu CLI prevents you from doing so.
Cannot create management cluster with AVI as control plane HA provider and Pinniped OIDC as identity provider
When creating a management cluster with configuration variables AVI_CONTROL_PLANE_HA_PROVIDER set to true to use AVI as the control plane HA provider and IDENTITY_MANAGEMENT_TYPE set to oidc to use an external OIDC identity provider, CLI output shows Waiting messages for the AKO package and resource mgmt-load-balancer-and-ingress-service and then fails with packageinstalls/mgmt-load-balancer-and-ingress-service ... connect: connection refused.
This issue results from an internal behavior change in AKO.
Workaround: When configuring a management cluster, set VSPHERE_CONTROL_PLANE_ENDPOINT to an IP address that is not the first address in the configured AVI VIP range, AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_CIDR if set, or else AVI_DATA_NETWORK_CIDR.
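A minimal sketch of the relevant management cluster configuration settings; the network CIDR and endpoint address are hypothetical examples, chosen so that the endpoint is not the first address in the VIP range:
AVI_CONTROL_PLANE_HA_PROVIDER: "true"
IDENTITY_MANAGEMENT_TYPE: "oidc"
AVI_DATA_NETWORK_CIDR: "10.10.100.0/24"
VSPHERE_CONTROL_PLANE_ENDPOINT: "10.10.100.50"  # deliberately not the first address in the VIP range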
Cannot create new workload clusters based on non-current TKr versions with Antrea CNI
You cannot create a new workload cluster that uses Antrea CNI and runs Kubernetes versions shipped with prior versions of TKG, such as Kubernetes v1.23.10, which was the default Kubernetes version in TKG v1.6.1 as listed in Supported Kubernetes Versions in Tanzu Kubernetes Grid v2.5.
Workaround: Create a workload cluster that runs Kubernetes 1.28.x, 1.27.x, or 1.26.x. The Kubernetes project recommends that you run components on the most recent patch version of any current minor version.
management-cluster delete and other operations fail with kind error on cloud management-as-a-service platforms
On VMware cloud infrastructure products on AWS, Azure, Oracle Cloud, Google Cloud, and other infrastructures, running tanzu management-cluster delete and other standalone management cluster operations fails with the error failed to create kind cluster.
Workaround: On your bootstrap machine, change your Docker or Docker Desktop cgroup driver default to systemd so that it matches the cgroup setting in the TKG container runtime, as described in the Tanzu CLI installation Prerequisites.
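On a Linux bootstrap machine running Docker Engine, one common way to make this change is through /etc/docker/daemon.json (a sketch; restart the Docker service afterwards, and see the Prerequisites topic for Docker Desktop specifics):
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}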
Orphaned VSphereMachine objects after cluster upgrade or scale
Due to a known issue in the cluster-api-provider-vsphere (CAPV) project, standalone management clusters on vSphere may leave orphaned VSphereMachine objects behind after cluster upgrade or scale operations.
This issue is fixed in newer versions of CAPV, which future patch versions of TKG will incorporate.
Workaround: To find and delete orphaned CAPV VM objects:
1. List all VSphereMachine objects and identify the orphaned ones, which do not have any PROVIDERID value: kubectl get vspheremachines -A
2. For each orphaned VSphereMachine object:
- Retrieve the object: kubectl get vspheremachines VSPHEREMACHINE -n NAMESPACE -o yaml
Where VSPHEREMACHINE is the machine NAME and NAMESPACE is its namespace.
- Check whether the VSphereMachine has an associated Machine object: kubectl get machines -n NAMESPACE | grep MACHINE-ID
- Run kubectl delete machine to delete any Machine object associated with the VSphereMachine.
- Delete the VSphereMachine object: kubectl delete vspheremachines VSPHEREMACHINE -n NAMESPACE
- In vCenter, check whether the VSphereMachine VM still appears; it may be present, but powered off. If so, delete it in vCenter.
- If the VSphereMachine object is not removed, clear its finalizers: kubectl patch vspheremachines VSPHEREMACHINE -n NAMESPACE -p '{"metadata": {"finalizers": null}}' --type=merge
Workload cluster cannot distribute storage across multiple datastores
You cannot enable a workload cluster to distribute storage across multiple datastores as described in Deploy a Cluster that Uses a Datastore Cluster. If you tag multiple datastores in a datastore cluster as the basis for a workload cluster’s storage policy, the workload cluster uses only one of the datastores.
Workaround: None
CAPV controller manager stops unexpectedly
In TKG v2.5.1, the CAPV controller manager runs initially but stops after some time.
Workaround: See KB 370310
Availability zones can be deleted while VMs are assigned to them
If you delete an availability zone that contains VMs, the VMs cannot subsequently be deleted.
Workaround: Remove all VMs from an availability zone before deleting it.
Creating workload clusters fails due to VPXD session exhaust
When creating workload clusters on vSphere, the creation fails with the following error:
vSphere config validation failed: failed to get VC client: failed to create vc client: Post "https://address/sdk": EOF
The vCenter Server vpxd.log reports the error Out of HTTP sessions: Limited to 2000. This happens due to vCenter Server session exhaustion.
Workaround: See VMware KB 50114010.
nfs-common and rpcbind packages are removed from Ubuntu OVAs
Both the rpcbind service and the nfs-common package are removed from Ubuntu OVAs for default CIS benchmark compliance. To enable these packages, follow the BYOI procedure for hardening. For more information, see KB 376737.
With TKG v2.5, the Tanzu Standard package repository is versioned and distributed separately from TKG, and its versioning is based on a date stamp. For TKG v2.5.2, the compatible Tanzu Standard repository version is v2024.8.21 and both are released around the same date.
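As a sketch only, assuming the registry path shown in the imgpkg example above, the v2024.8.21 Tanzu Standard repository could be added to a cluster with:
tanzu package repository add tanzu-standard --url projects.registry.vmware.com/tkg/packages/standard/repo:v2024.8.21 --namespace tkg-system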
Future Tanzu Standard repository versions may publish more frequently than TKG versions, but all patch versions will maintain existing compatibilities between minor versions of TKG and Tanzu Standard.
For more information, see the Tanzu Standard v2024.8.21 release notes.