VMware Tanzu Kubernetes Grid v2.5.x Release Notes

Except where noted, these release notes apply to all v2.5.x patch versions of Tanzu Kubernetes Grid (TKG).

TKG v2.5.x is distributed as a downloadable Tanzu CLI package that deploys a versioned TKG standalone management cluster. TKG v2.5.x supports creating and managing class-based workload clusters with a standalone management cluster that can run on vSphere.

Tanzu Kubernetes Grid v2.5.x and vSphere with Tanzu Supervisor in vSphere 8

Important

The vSphere with Tanzu Supervisor in vSphere 8.0.1c and later runs TKG v2.2. Earlier versions of vSphere 8 run TKG v2.0, which was not released independently of Supervisor. Standalone management clusters that run TKG 2.x are available from TKG 2.1 onwards.

Because of the earlier TKG version that is embedded in Supervisor, some features that are available with a standalone TKG 2.5.x management cluster are not available when you use a vSphere with Tanzu Supervisor to create workload clusters. Later TKG releases will be embedded in Supervisor in future vSphere update releases, so the version of TKG that is embedded in the latest vSphere with Tanzu version at a given time might not be as recent as the latest standalone version of TKG.

However, the versions of the Tanzu CLI that are compatible with all TKG v2.x releases are fully supported for use with Supervisor in all releases of vSphere 8. For example, Tanzu CLI v1.1.x is fully backwards compatible with the TKG 2.2 plugins that Supervisor provides.

What’s New

Tanzu Kubernetes Grid v2.5.x includes the following new features.

Tanzu Kubernetes Grid v2.5.1

New features in Tanzu Kubernetes Grid v2.5.1:

Tanzu Kubernetes Grid v2.5.0

New features in Tanzu Kubernetes Grid v2.5.0:

Supported Kubernetes, TKG, and Package Versions

From TKG v2.2 onwards, VMware’s support policy changed for older patch versions of TKG and Tanzu Kubernetes releases (TKrs), which package Kubernetes versions for TKG. Support policies for TKG v2.1 and older minor versions of TKG do not change.

The first two sections below summarize support for all currently supported versions of TKG and TKrs, under the support policies that apply to each.

The third section below lists the versions of packages in the Tanzu Standard repository that are supported by Kubernetes v1.28, v1.27, and v1.26 TKrs.

Supported Kubernetes Versions

Each version of Tanzu Kubernetes Grid adds support for the Kubernetes version of its management cluster, plus additional Kubernetes versions, distributed as Tanzu Kubernetes releases (TKrs), except where noted as a Known Issue.

Minor versions: VMware supports TKG v2.5.x with Kubernetes v1.28, v1.27, and v1.26 at time of release. Once TKG v2.3 reaches its End of General Support milestone, VMware will no longer support Kubernetes v1.26 with TKG. Once TKG v2.4 reaches its End of General Support milestone, VMware will no longer support Kubernetes v1.27 with TKG.

Patch versions: After VMware publishes a new TKr patch version for a minor line, it retains support for older patch versions for two months. This gives customers a 2-month window to upgrade to new TKr patch versions. From TKG v2.2 onwards, VMware does not support all TKr patch versions from previous minor lines of Kubernetes.

Tanzu Kubernetes Grid patch versions support or have supported the TKr patch versions listed below.

| Tanzu Kubernetes Grid Version | Management Cluster Kubernetes Version | Provided Kubernetes (TKr) Versions |
|---|---|---|
| 2.5.1 | 1.28.7 | 1.28.7, 1.27.11, 1.26.14 |
| 2.5.0 | 1.28.4 | 1.28.4, 1.27.8, 1.26.11 |
| 2.4.1 | 1.27.5 | 1.27.5, 1.26.8, 1.25.13 |
| 2.4.0 | 1.27.5 | 1.27.5, 1.26.8, 1.25.13 |
| 2.3.1 | 1.26.8 | 1.26.8, 1.25.13, 1.24.17 |
| 2.3.0 | 1.26.5 | 1.26.5, 1.25.10, 1.24.14 |
| 2.2.0 | 1.25.7 | 1.25.7, 1.24.11, 1.23.17 |

Supported Tanzu Kubernetes Grid Versions

VMware supports TKG versions as follows:

Minor versions: VMware supports TKG following the N-2 Lifecycle Policy, which applies to the latest and previous two minor versions of TKG. With the release of TKG v2.5.x, TKG v2.2 is no longer supported, because more than one year has elapsed since the v2.2 release. See the VMware Product Lifecycle Matrix for more information.

Patch versions: VMware does not support all previous TKG patch versions. After VMware releases a new patch version of TKG, it retains support for the older patch version for two months. This gives customers a 2-month window to upgrade to new TKG patch versions.

  • For example, support for TKG v2.5.0 would end two months after the general availability of TKG v2.5.1.

Supported Package Versions

Package versions in the Tanzu Standard repository for TKG v2.5.x are compatible via TKrs with Kubernetes minor versions v1.28, v1.27, and v1.26, and are listed in the Tanzu Standard Repository Release Notes.

Product Snapshot for Tanzu Kubernetes Grid v2.5

Tanzu Kubernetes Grid v2.5.x supports the following infrastructure platforms and operating systems (OSs), as well as cluster creation and management, networking, storage, authentication, backup and migration, and observability components.

See Component Versions for a full list of component versions included in TKG v2.5.1.

See Tanzu Standard Repository Release Notes for additional package versions compatible with TKG v2.5.1.

Infrastructure platform:
  • vSphere 7.0, 7.0U1-U3
  • vSphere 8.0, 8.0U1, 8.0U2
  • VMware Cloud on AWS* v1.22, v1.24
  • Azure VMware Solution v2.0
  • Oracle Cloud VMware Solution (OCVS) v1.0
  • Google Cloud VMware Engine (GCVE) v1.0
Tanzu CLI: Tanzu CLI Core v1.0.0, v1.1.x**
TKG API and package infrastructure: Tanzu Framework v0.32.2
Default ClusterClass: v0.32.2, tkg-vsphere-default-v1.2.0
Cluster creation and management: Core Cluster API (v1.5.6), Cluster API Provider vSphere (v1.8.8)
Kubernetes node OS distributed with TKG:
  • Kubernetes v1.28, v1.27: Photon OS 5, Ubuntu 22.04
  • Kubernetes v1.27, v1.26: Photon OS 3, Ubuntu 20.04
Build your own image: Photon OS 3 and OS 5; Red Hat Enterprise Linux 8; Ubuntu 20.04 and 22.04; Windows 2019
Container runtime: Containerd (v1.6.28)
Container networking: Antrea (v1.13.3), Calico (v3.26.3), Multus CNI via Tanzu Standard Packages
Container registry: Harbor v2.9.1 via Tanzu Standard Packages
Ingress: NSX Advanced Load Balancer (Avi Controller) v22.1.3-v22.1.4***, NSX v4.1.2 (vSphere 8.0U2), v3.2.2 (vSphere 7.0U3), Contour via Tanzu Standard Packages
Storage: vSphere Container Storage Interface (v3.0.2) and vSphere Cloud Native Storage
Authentication: OIDC and LDAP via Pinniped (v0.25.0)
Observability: Fluent Bit, Prometheus, and Grafana via Tanzu Standard Packages
Service discovery: External DNS via Tanzu Standard Packages
Backup and migration: Velero (v1.12.1)

* For a list of VMware Cloud on AWS SDDC versions that are compatible with this release, see the VMware Product Interoperability Matrix.

** For a full list of Tanzu CLI versions that are compatible with this release, see the VMware Product Interoperability Matrix.

*** TKG v2.5.x does not support NSX ALB v30.1.1+.

For a full list of Kubernetes versions that ship with Tanzu Kubernetes Grid v2.5.x, see Supported Kubernetes Versions above.

Component Versions

The TKG v2.5.x release includes the following software component versions:

Note

Previous TKG releases included components that are now distributed via the Tanzu Standard repository. For a list of these components, see Tanzu Standard Repository Release Notes.

| Component | TKG v2.5.1 | TKG v2.5.0 |
|---|---|---|
| aad-pod-identity | | v1.8.15+vmware.2 |
| addons-manager | v1.6.0+vmware.1 | v1.6.0+vmware.1 |
| ako-operator | v1.11.0+vmware.2 | v1.11.0+vmware.2* |
| antrea | v1.13.3+vmware.1-advanced* | v1.13.1_vmware.3-advanced* |
| antrea-interworking | v0.13.1+vmware.1* | v0.13.0* |
| calico_all | v3.26.3+vmware.1 | v3.26.3+vmware.1* |
| capabilities-package | v0.32.2-capabilities* | v0.32.0-capabilities* |
| carvel-secretgen-controller | v0.15.0+vmware.1 | v0.15.0+vmware.1* |
| cloud_provider_vsphere | v1.28.0+vmware.1 | v1.28.0+vmware.1* |
| cluster_api | v1.5.6+vmware.0* | v1.5.3+vmware.0* |
| cluster-api-ipam-provider-in-cluster | v0.1.0+vmware.7 | v0.1.0_vmware.7* |
| cluster_api_vsphere | v1.8.8+vmware.0* | v1.8.4+vmware.0 |
| cni_plugins | v1.2.0+vmware.13* | v1.2.0+vmware.10* |
| containerd | v1.6.28+vmware.2* | v1.6.24+vmware.2* |
| crash-diagnostics | v0.3.7+vmware.8 | v0.3.7+vmware.8* |
| cri_tools | v1.27.0+vmware.8* | v1.27.0+vmware.4* |
| csi_attacher | v4.5.0+vmware.1*, v4.3.0+vmware.2, v4.2.0+vmware.3 | v4.3.0+vmware.2, v4.2.0+vmware.3 |
| csi_livenessprobe | v2.12.0+vmware.1*, v2.10.0+vmware.2, v2.9.0+vmware.3 | v2.10.0+vmware.2, v2.9.0+vmware.3 |
| csi_node_driver_registrar | v2.10.0+vmware.1*, v2.8.0+vmware.2, v2.7.0+vmware.3 | v2.8.0+vmware.2, v2.7.0+vmware.3 |
| csi_provisioner | v4.0.0+vmware.1*, v3.5.0+vmware.2, v3.4.1+vmware.3, v3.4.0+vmware.3 | v3.5.0+vmware.2, v3.4.1+vmware.3, v3.4.0+vmware.3 |
| dns (coredns) | v1.10.1+vmware.17* | v1.10.1_vmware.13* |
| etcd | v3.5.11+vmware.4* | v3.5.9_vmware.6* |
| external-snapshotter | v7.0.1+vmware.1*, v6.2.2+vmware.2, v6.2.1+vmware.3 | v6.2.2+vmware.2, v6.2.1+vmware.3 |
| guest-cluster-auth-service | v1.3.3_vmware.1* | v1.3.0_vmware.1* |
| image-builder | v0.1.14+vmware.2 | v0.1.14+vmware.2* |
| image-builder-resource-bundle | v1.28.7+vmware.1-tkg.3* | v1.28.4_vmware.1-tkg.1* |
| imgpkg | v0.36.0+vmware.2 | v0.36.0+vmware.2 |
| jetstack_cert-manager (cert-manager) | v1.12.2+vmware.2 | v1.12.2+vmware.2* |
| k14s_kapp (kapp) | v0.55.0+vmware.2 | v0.55.0+vmware.2 |
| k14s_ytt (ytt) | v0.45.0+vmware.2 | v0.45.0+vmware.2 |
| kapp-controller | v0.48.2+vmware.1 | v0.48.2+vmware.1* |
| kbld | v0.37.0+vmware.2 | v0.37.0+vmware.2 |
| kube-vip | v0.5.12+vmware.2 | v0.5.12+vmware.2* |
| kube-vip-cloud-provider | v0.0.5+vmware.2 | v0.0.5+vmware.2* |
| kubernetes | v1.28.7+vmware.1*, v1.27.11+vmware.1*, v1.26.14+vmware.1* | v1.28.4+vmware.1*, v1.27.8+vmware.1*, v1.26.11+vmware.1* |
| kubernetes-csi_external-resizer | v1.10.0+vmware.1*, v1.8.0+vmware.2, v1.7.0+vmware.3 | v1.8.0+vmware.2, v1.7.0+vmware.3 |
| kubernetes-sigs_kind | v1.28.7+vmware.1-tkg.1_v0.20.0*, v1.27.11+vmware.1-tkg.1_v0.20.0, v1.26.14+vmware.1-tkg.1_v0.17.0 | v1.28.4+vmware.1-tkg.1_v0.20.0* |
| kubernetes_autoscaler | v1.28.0+vmware.1 | v1.28.0+vmware.1* |
| load-balancer-and-ingress-service (AKO) | v1.11.2+vmware.1 | v1.11.2+vmware.1* |
| metrics-server | v0.6.2+vmware.3 | v0.6.2+vmware.3* |
| pinniped | v0.25.0+vmware.2 | v0.25.0+vmware.2* |
| pinniped-post-deploy | v0.25.0+vmware.1 | v0.25.0+vmware.1* |
| sonobuoy | v0.57.0+vmware.1 | v0.57.0+vmware.1* |
| tanzu-framework | v0.32.2* | v0.32.0* |
| tanzu-framework-addons | v0.32.2* | v0.32.0* |
| tanzu-framework-management-packages | v0.32.2* | v0.32.0* |
| tkg-bom | v2.5.1* | v2.5.0* |
| tkg-core-packages | v1.28.7+vmware.1-tkg.3* | v1.28.4+vmware.1-tkg.1* |
| tkg-standard-packages | v2024.4.12* | v2024.2.1* |
| tkg-storageclass-package | v0.32.2* | v0.32.0* |
| tkg_telemetry | v2.3.0+vmware.3 | v2.3.0+vmware.3 |
| velero | v1.12.1_vmware.1 | v1.12.1_vmware.1* |
| velero-mgmt-cluster-plugin | v0.3.0+vmware.1 | v0.3.0+vmware.1* |
| velero-plugin-for-aws | v1.8.1+vmware.1 | v1.8.1+vmware.1* |
| velero-plugin-for-csi | v0.6.1+vmware.1 | v0.6.1+vmware.1* |
| velero-plugin-for-microsoft-azure | v1.8.1+vmware.1 | v1.8.1+vmware.1* |
| velero-plugin-for-vsphere | v1.5.2+vmware.1 | v1.5.2+vmware.1* |
| vendir | v0.33.1+vmware.2 | v0.33.1+vmware.2 |
| vsphere_csi_driver | v3.2.0+vmware.1* | v3.1.2+vmware.1* |

* Indicates a new component or version bump since the previous release. TKG v2.5.0 is previous to v2.5.1, and v2.4.1 is previous to v2.5.0.

For a list of software component versions that ship with TKG v2.5.x, use imgpkg to pull repository bundles and then list their contents. For example, to list the component versions that ship with the Tanzu Standard repository for TKG v2.5.1, run the following command:

imgpkg pull -b projects.registry.vmware.com/tkg/packages/standard/repo:2024.4.12 -o standard-2024.4.12
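
After the pull completes, you can list the bundle's contents to see the package manifests it contains. A minimal sketch (the exact directory layout inside the bundle can vary between repository versions):

# List the files in the pulled bundle
find standard-2024.4.12 -type f | head -20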

Supported Upgrade Paths

In the TKG upgrade path, v2.5.1 immediately follows v2.5.0.

You can only upgrade to Tanzu Kubernetes Grid v2.5.x from v2.4.x. If you want to upgrade to Tanzu Kubernetes Grid v2.5.x from a version earlier than v2.4.x, you must upgrade to v2.4.x first.

When upgrading Kubernetes versions on workload clusters, you cannot skip minor versions. For example, you cannot upgrade a Tanzu Kubernetes cluster directly from v1.26.x to v1.28.x. You must upgrade a v1.26.x cluster to v1.27.x before upgrading the cluster to v1.28.x.
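
For example, to move a workload cluster from Kubernetes v1.26.x to v1.28.x in two steps (the cluster name and TKr names are illustrative; list the TKr names available in your environment with tanzu kubernetes-release get):

tanzu cluster upgrade my-cluster --tkr v1.27.11---vmware.1-tkg.1
# After the first upgrade completes:
tanzu cluster upgrade my-cluster --tkr v1.28.7---vmware.1-tkg.1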

Release Dates

Tanzu Kubernetes Grid v2.5.x release dates are:

  • v2.5.1: April 19, 2024
  • v2.5.0: February 22, 2024

Behavior Changes in Tanzu Kubernetes Grid v2.5.x

Tanzu Kubernetes Grid v2.5.x introduces the following new behaviors compared with v2.4.x, which is the latest previous release.

End of Support for TKG Management and Workload Clusters on AWS and Azure

Tanzu Kubernetes Grid v2.5.x does not support the creation of standalone TKG management clusters and TKG workload clusters on AWS and Azure. Use Tanzu Mission Control to create native AWS EKS and Azure AKS clusters on AWS and Azure. For information about how to create native AWS EKS and Azure AKS clusters with Tanzu Mission Control, see Managing the Lifecycle of AWS EKS Clusters and Managing the Lifecycle of Azure AKS Clusters in the Tanzu Mission Control documentation.

For information about why VMware no longer supports TKG clusters on AWS and Azure, see VMware Tanzu Aligns to Multi-Cloud Industry Trends on the VMware Tanzu blog.

New Node Operating Systems for Kubernetes 1.27 and 1.28 Clusters

When upgrading a legacy plan-based workload cluster to Kubernetes v1.28, you must specify the --os-name, --os-version, and --os-arch options in the tanzu cluster upgrade command.

When upgrading a Photon 3 workload cluster to Kubernetes v1.27 and keeping Photon 3 as its node OS, you must specify photon3 for --os-name or edit the Cluster object.
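
For example, to upgrade a plan-based cluster to Kubernetes v1.28 on Ubuntu 22.04, or to keep Photon 3 when upgrading to v1.27 (cluster names and option values are illustrative; valid values depend on the OVA templates imported into your environment):

tanzu cluster upgrade my-cluster --os-name ubuntu --os-version 22.04 --os-arch amd64

tanzu cluster upgrade my-photon-cluster --os-name photon3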

For details, see Additional Steps for Certain OS, Kubernetes, and Cluster Type Combinations in Upgrade Workload Clusters.

Defaults to TLS v1.3 for Kubernetes Components

For security hardening, TKG v2.5 sets the minimum TLS version to v1.3 for all Kubernetes components. Previous versions set this to the Kubernetes default, TLS v1.2.
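
One way to spot-check the new minimum on a cluster, assuming the setting is applied through the standard kube-apiserver --tls-min-version flag (the label selector follows kubeadm conventions):

kubectl get pods -n kube-system -l component=kube-apiserver -o yaml | grep tls-min-version
# If the hardening is in effect, the output includes: --tls-min-version=VersionTLS13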

End of Support for TKG Management and Workload Clusters on vSphere 6.7

From v2.5.1 onwards, Tanzu Kubernetes Grid does not support creating management clusters or workload clusters on vSphere 6.7. TKG v2.5.1 includes critical Cloud Native Storage (CNS) updates for Container Storage Interface (CSI) storage functionality that are not compatible with vSphere 6.7. Creating clusters on vSphere 6.7 is supported only on TKG versions up to and including v2.5.0. General support for vSphere 6.7 ended in October 2022, and VMware recommends that you upgrade to vSphere 7 or 8.

Future Behavior Change and Deprecation Notices

This section provides advance notice of behavior changes and feature deprecations that will take effect in future releases, after the TKG v2.5.x releases.

  • None

User Documentation

Deploying and Managing TKG 2.5 Standalone Management Clusters on vSphere includes topics specific to standalone management clusters that are not relevant to using TKG with a vSphere with Tanzu Supervisor.

For more information, see Find the Right TKG Docs for Your Deployment on the VMware Tanzu Kubernetes Grid Documentation page.

Resolved Issues

Resolved in v2.5.1

No previously documented customer-visible issues are resolved in Tanzu Kubernetes Grid v2.5.1.

Resolved in v2.5.0

The following issues that were documented as Known Issues in earlier Tanzu Kubernetes Grid releases are resolved in Tanzu Kubernetes Grid v2.5.0.

  • Creating a ClusterClass workload cluster fails on IPv4 primary dualstack in an airgapped environment with proxy mode enabled

    Attempts to create a ClusterClass workload cluster on IPv4 primary dualstack in an airgapped environment with proxy mode enabled fail with the error unable to wait for cluster nodes to be available: worker nodes are still being created for MachineDeployment 'wl-antrea-md-0-5zlgc', DesiredReplicas=1 Replicas=0 ReadyReplicas=0 UpdatedReplicas=0

    The antrea-agent log shows Error running agent: error initializing agent: K8s Node should have an IPv6 address if IPv6 Pod CIDR is defined.

  • When multiple vSphere OVA templates are detected, the first one is used

    If multiple templates with the same os-name, os-arch, and os-version were detected at the same path, the first one that matched the requirement was used. This has been fixed so that an error is thrown, prompting the user to provide the full path to the desired template.
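
    For example, you can disambiguate by setting the full vCenter inventory path in the cluster configuration file (the path is illustrative):

    VSPHERE_TEMPLATE: /dc0/vm/tkg-templates/ubuntu-2204-kube-v1.28.7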

  • Deploying management cluster to vSphere 7 fails while waiting for the cluster control plane to become available

    If you specify the VM Network when deploying a management cluster to vSphere 7, the deployment fails with the error unable to set up management cluster: unable to wait for cluster control plane available: control plane is not available yet.

Known Issues

The following are known issues in Tanzu Kubernetes Grid v2.5.x. Any known issues that were present in v2.5.0 that have been resolved in a subsequent v2.5.x patch release are listed under the Resolved Issues for the patch release in which they were fixed.

You can find additional solutions to frequently encountered issues in Troubleshooting Management Cluster Issues and Troubleshooting Workload Cluster Issues, or in Broadcom Communities.

Upgrade

  • Upgrading Photon 3 workload cluster to Kubernetes v1.27 also upgrades to Photon 5 by default

    For workload clusters that use Photon 3 as the node OS, if you upgrade the cluster from Kubernetes v1.26 to v1.27, TKG by default also upgrades the cluster to Photon 5, which is the default OS version for TKG clusters that run Kubernetes v1.27 on Photon.

    Workaround

    If you want the upgraded cluster to continue using Photon 3 as its node OS, follow the additional steps for Kubernetes v1.27 on Photon 3 described in Additional Steps for Certain OS, Kubernetes, and Cluster Type Combinations in Upgrade Workload Clusters. There are different steps for plan-based and class-based workload clusters.

  • You cannot upgrade multi-OS clusters

    You cannot use the tanzu cluster upgrade command to upgrade clusters with Windows worker nodes as described in Deploy a Multi-OS Workload Cluster.

  • Before upgrade, you must manually update a changed Avi certificate in the tkg-system package values

    Management cluster upgrade fails if you have rotated an Avi Controller certificate, even if you have updated its value in the management cluster’s secret/avi-controller-ca as described in Modify the Avi Controller Credentials.

    Failure occurs because updating secret/avi-controller-ca does not copy the new value into the management cluster’s tkg-system package values, and TKG uses the certificate value from those package values during upgrade. For legacy management clusters created in TKG v1.x, the new value is also not copied into the ako-operator-addon secret.

    Workaround: Before upgrading TKG, check if the Avi certificate in tkg-pkg-tkg-system-values is up-to-date, and patch it if needed:

    1. Get the certificate from avi-controller-ca:
      kubectl get secret avi-controller-ca -n tkg-system-networking -o jsonpath="{.data.certificateAuthorityData}"
      
    2. In the tkg-pkg-tkg-system-values secret, get and decode the package values string:
      kubectl get secret tkg-pkg-tkg-system-values -n tkg-system -o jsonpath="{.data.tkgpackagevalues\.yaml}" | base64 --decode
      
    3. In the decoded package values, check the value for avi_ca_data_b64 under akoOperatorPackage.akoOperator.config. If it differs from the avi-controller-ca value, update tkg-pkg-tkg-system-values with the new value:

      1. In a copy of the decoded package values string, paste in the new certificate from avi-controller-ca as the avi_ca_data_b64 value under akoOperatorPackage.akoOperator.config.
      2. Also paste in the new certificate as the AVI_CA_DATA_B64 value under configvalues.
      3. Run base64 to re-encode the entire package values string, as shown in the sketch after these steps.
      4. Patch the tkg-pkg-tkg-system-values secret with the new, encoded string:
        kubectl patch secret/tkg-pkg-tkg-system-values -n tkg-system -p '{"data": {"tkgpackagevalues.yaml": "BASE64-ENCODED STRING"}}'
        
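    A minimal sketch of steps 3 and 4 combined, assuming the edited, decoded package values are saved in a local file named tkgpackagevalues.yaml and the bootstrap machine runs Linux (on macOS, use base64 without the -w 0 option):

      # Re-encode the edited package values and patch the secret in one pass
      NEW_VALUES=$(base64 -w 0 < tkgpackagevalues.yaml)
      kubectl patch secret/tkg-pkg-tkg-system-values -n tkg-system \
        -p "{\"data\": {\"tkgpackagevalues.yaml\": \"${NEW_VALUES}\"}}"
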

Cluster Operations

  • Cannot create management cluster with AVI as control plane HA provider and Pinniped OIDC as identity provider

    When creating a management cluster with configuration variables AVI_CONTROL_PLANE_HA_PROVIDER set to true to use AVI as the control plane HA provider and IDENTITY_MANAGEMENT_TYPE set to oidc to use an external OIDC identity provider, CLI output shows Waiting messages for AKO package and resource mgmt-load-balancer-and-ingress-service and then fails with packageinstalls/mgmt-load-balancer-and-ingress-service ... connect: connection refused.

    This issue results from an internal behavior change in AKO.

    Workaround: When configuring a management cluster, set VSPHERE_CONTROL_PLANE_ENDPOINT to an IP address that is not the first address in the configured AVI VIP range, AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_CIDR if set, or else AVI_DATA_NETWORK_CIDR.
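
    For example, in the management cluster configuration file (all addresses are illustrative):

      AVI_CONTROL_PLANE_HA_PROVIDER: true
      IDENTITY_MANAGEMENT_TYPE: oidc
      AVI_DATA_NETWORK_CIDR: 10.199.16.0/24
      # If the AVI VIP range starts at 10.199.16.100, choose a later address:
      VSPHERE_CONTROL_PLANE_ENDPOINT: 10.199.16.101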

  • Cannot create new workload clusters based on non-current TKr versions with Antrea CNI

    You cannot create a new workload cluster that uses Antrea CNI and runs a Kubernetes version shipped with prior versions of TKG, such as Kubernetes v1.23.10, which was the default Kubernetes version in TKG v1.6.1, as listed in Supported Kubernetes Versions in Tanzu Kubernetes Grid v2.5.

    Workaround: Create a workload cluster that runs Kubernetes 1.28.x, 1.27.x, or 1.26.x. The Kubernetes project recommends that you run components on the most recent patch version of any current minor version.

  • management-cluster delete and other operations fail with kind error on VMware cloud infrastructure platforms

    On VMware cloud infrastructure products on AWS, Azure, Oracle Cloud, Google Cloud, and other infrastructures, tanzu management-cluster delete and other standalone management cluster operations can fail with the error failed to create kind cluster.

    Workaround: On your bootstrap machine, change your Docker or Docker Desktop cgroup driver default to systemd so that it matches the cgroup setting in the TKG container runtime, as described in the Tanzu CLI installation Prerequisites.
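
    For Docker Engine on Linux, a minimal sketch (Docker Desktop exposes the same setting through its configuration UI):

      # /etc/docker/daemon.json
      {
        "exec-opts": ["native.cgroupdriver=systemd"]
      }

    Restart Docker after the change, for example with sudo systemctl restart docker.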

  • Orphaned VSphereMachine objects after cluster upgrade or scale

    Due to a known issue in the cluster-api-provider-vsphere (CAPV) project, standalone management clusters on vSphere may leave orphaned VSphereMachine objects behind after cluster upgrade or scale operations.

    This issue is fixed in newer versions of CAPV, which future patch versions of TKG will incorporate.

    Workaround: To find and delete orphaned CAPV VM objects:

    1. List all VSphereMachine objects and identify the orphaned ones, which do not have any PROVIDERID value (see the one-liner after these steps):
      kubectl get vspheremachines -A
      
    2. For each orphaned VSphereMachine object:

      1. List the object and retrieve its machine ID:
        kubectl get vspheremachines VSPHEREMACHINE -n NAMESPACE -o yaml
        

        Where VSPHEREMACHINE is the machine NAME and NAMESPACE is its namespace.

      2. Check if the VSphereMachine has an associated Machine object:
        kubectl get machines -n NAMESPACE | grep MACHINE-ID
        

        Run kubectl delete machine to delete any Machine object associated with the VSphereMachine.

      3. Delete the VSphereMachine object:
        kubectl delete vspheremachines VSPHEREMACHINE -n NAMESPACE
        
      4. From vCenter, check if the VSphereMachine VM still appears; it may be present, but powered off. If so, delete it in vCenter.
      5. If the deletion hangs, patch its finalizer:
        kubectl patch vspheremachines VSPHEREMACHINE -n NAMESPACE -p '{"metadata": {"finalizers": null}}' --type=merge
        
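    As an alternative to scanning the output of step 1 by eye, the following lists only the orphaned objects, assuming jq is installed on the bootstrap machine:

      # Print namespace/name for every VSphereMachine with no providerID set
      kubectl get vspheremachines -A -o json \
        | jq -r '.items[] | select(.spec.providerID == null) | "\(.metadata.namespace)/\(.metadata.name)"'
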

Storage

  • Workload cluster cannot distribute storage across multiple datastores

    You cannot enable a workload cluster to distribute storage across multiple datastores as described in Deploy a Cluster that Uses a Datastore Cluster. If you tag multiple datastores in a datastore cluster as the basis for a workload cluster’s storage policy, the workload cluster uses only one of the datastores.

    Workaround: None

vSphere

  • CAPV controller manager stops unexpectedly

    In TKG v2.5.1, the CAPV controller manager starts and runs as expected, but then stops unexpectedly after some time.

    Workaround: See KB 370310

  • Availability zones can be deleted while VMs are assigned to them

    If you delete an availability zone that contains VMs, the VMs cannot subsequently be deleted.

    Workaround: Remove all VMs from an availability zone before deleting it.

  • Creating workload clusters fails due to VPXD session exhaustion

    When creating workload clusters on vSphere, the creation fails with the following error:

    vSphere config validation failed: failed to get VC client: failed to create vc client: Post "https://address/sdk": EOF

    This happens because vCenter Server has run out of HTTP sessions. The vCenter vpxd.log reports the error Out of HTTP sessions: Limited to 2000.

    Workaround: See VMware KB 50114010.

Image-Builder

  • nfs-common and rpcbind packages are removed from Ubuntu OVAs

    The rpcbind service and the nfs-common package are removed from Ubuntu OVAs for default CIS benchmark compliance. To re-enable these packages, follow the Build Your Own Image (BYOI) procedure for hardening.
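
    If a workload needs NFS before a rebuilt image is available, the packages can be reinstalled manually on each affected Ubuntu node as a temporary measure; the change does not survive node recreation, so the BYOI procedure remains the supported path:

      sudo apt-get update
      sudo apt-get install -y nfs-common rpcbind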

Tanzu Standard Repository v2024.4.12

With TKG v2.5, the Tanzu Standard package repository is versioned and distributed separately from TKG, and its versioning is based on a date stamp. For TKG v2.5.1, the latest compatible Tanzu Standard repository version is v2024.4.12; the two were released around the same date.

Future Tanzu Standard repository versions may publish more frequently than TKG versions, but all patch versions will maintain existing compatibilities between minor versions of TKG and Tanzu Standard.
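
To make the repository's packages available in a cluster, you can add it with the Tanzu CLI. A minimal sketch, using the same registry path and tag as the imgpkg example above (the target namespace may differ in your setup):

tanzu package repository add tanzu-standard \
  --url projects.registry.vmware.com/tkg/packages/standard/repo:2024.4.12 \
  --namespace tkg-system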

For more information, see the Tanzu Standard v2024.4.12 release notes.
