Except where noted, these release notes apply to all v2.4.x patch versions of Tanzu Kubernetes Grid (TKG).
TKG v2.4 is distributed as a downloadable Tanzu CLI package that deploys a versioned TKG standalone management cluster. TKG v2.4 supports creating and managing class-based workload clusters with a standalone management cluster that can run on multiple infrastructures, including vSphere, AWS, and Azure.
Important: The vSphere with Tanzu Supervisor in vSphere 8.0.1c and later runs TKG v2.2. Earlier versions of vSphere 8 run TKG v2.0, which was not released independently of Supervisor. Standalone management clusters that run TKG 2.x are available from TKG 2.1 onwards. Because Supervisor embeds an earlier TKG version, some features that are available with a standalone TKG 2.4 management cluster are not available when you use a vSphere with Tanzu Supervisor to create workload clusters. Later TKG releases will be embedded in Supervisor in future vSphere update releases. Consequently, the version of TKG that is embedded in the latest vSphere with Tanzu version at a given time might not be as recent as the latest standalone version of TKG. However, the versions of the Tanzu CLI that are compatible with all TKG v2.x releases are fully supported for use with Supervisor in all releases of vSphere 8. For example, Tanzu CLI v1.0.x is fully backwards compatible with the TKG 2.2 plugins that Supervisor provides.
Tanzu Kubernetes Grid v2.4.x includes the following new features.
New features in Tanzu Kubernetes Grid v2.4.1:
New features in Tanzu Kubernetes Grid v2.4.0:
When upgrading clusters on vSphere, you can select which OVA template to upgrade to by using the --vsphere-vm-template-name option, as described in Select an OVA Template to Upgrade To.
From TKG v2.2 onwards, VMware’s support policy changed for older patch versions of TKG and Tanzu Kubernetes releases (TKrs), which package Kubernetes versions for TKG. Support policies for TKG v2.1 and older minor versions of TKG do not change.
The first two sections below summarize support for all currently-supported versions of TKG and TKrs, under the support policies that apply to each.
The third section below lists the versions of packages in the Tanzu Standard repository that are supported by Kubernetes v1.27, v1.26, and v1.25 TKrs.
Each version of Tanzu Kubernetes Grid adds support for the Kubernetes version of its management cluster, plus additional Kubernetes versions, distributed as Tanzu Kubernetes releases (TKrs), except where noted as a Known Issue.
Minor versions: VMware supports TKG v2.4 with Kubernetes v1.27, v1.26, and v1.25 at time of release. Once TKG v2.2 reaches its End of General Support milestone, VMware will no longer support Kubernetes v1.25 with TKG. Once TKG v2.3 reaches its End of General Support milestone, VMware will no longer support Kubernetes v1.26 with TKG.
Patch versions: After VMware publishes a new TKr patch version for a minor line, it retains support for older patch versions for two months. This gives customers a 2-month window to upgrade to new TKr patch versions. From TKG v2.2 onwards, VMware does not support all TKr patch versions from previous minor lines of Kubernetes.
Tanzu Kubernetes Grid patch versions support, or have supported, the TKr patch versions listed below.
Tanzu Kubernetes Grid Version | Management Cluster Kubernetes Version | Provided Kubernetes (TKr) Versions |
---|---|---|
2.4.1 | 1.27.5 | 1.27.5, 1.26.8, 1.25.13 |
2.4.0 | 1.27.5 | 1.27.5, 1.26.8, 1.25.13 |
2.3.0 | 1.26.5 | 1.26.5, 1.25.10, 1.24.14 |
2.2.0 | 1.25.7 | 1.25.7, 1.24.11, 1.23.17 |
2.1.1 | 1.24.10 | 1.24.10, 1.23.16, 1.22.17 |
2.1.0 | 1.24.9 | 1.24.9, 1.23.15, 1.22.17 |
VMware supports TKG versions as follows:
Minor versions: VMware supports TKG following the N-2 Lifecycle Policy, which applies to the latest and previous two minor versions of TKG. With the release of TKG v2.4.0, and with more than one year having elapsed since the v2.1 release, TKG v2.1 is no longer supported. See the VMware Product Lifecycle Matrix for more information.
Patch versions: VMware does not support all previous TKG patch versions. After VMware releases a new patch version of TKG, it retains support for the older patch version for two months. This gives customers a 2-month window to upgrade to new TKG patch versions.
Package versions in the Tanzu Standard repository for TKG v2.4 are compatible via TKrs with Kubernetes minor versions v1.27, v1.26, and v1.25, and are listed in the Tanzu Standard Repository Release Notes.
Tanzu Kubernetes Grid v2.4.1 supports the following infrastructure platforms and operating systems (OSs), as well as cluster creation and management, networking, storage, authentication, backup and migration, and observability components.
See Component Versions for a full list of component versions included in TKG v2.4.1 and v2.4.0.
See Tanzu Standard Repository Release Notes for additional package versions compatible with TKG v2.4.1.
| | vSphere | AWS | Azure |
|---|---|---|---|
| Infrastructure platform | vSphere 7, vSphere 8, VMware Cloud on AWS*, Azure VMware Solution | Native AWS | Native Azure |
| Tanzu CLI | Tanzu CLI Core v1.0.x** | | |
| TKG API and package infrastructure | Tanzu Framework v0.31.1 | | |
| Cluster creation and management | Core Cluster API (v1.4.5), Cluster API Provider vSphere (v1.7.1) | Core Cluster API (v1.4.5), Cluster API Provider AWS (v2.1.3) | Core Cluster API (v1.4.5), Cluster API Provider Azure (v1.9.2) |
| Kubernetes node OS distributed with TKG | Photon OS 3, Ubuntu 20.04 | Amazon Linux 2, Ubuntu 20.04 | Ubuntu 18.04, Ubuntu 20.04 |
| Build your own image | Photon OS 3, Red Hat Enterprise Linux 7*** and 8, Ubuntu 18.04, Ubuntu 20.04, Windows 2019 | Amazon Linux 2, Ubuntu 18.04, Ubuntu 20.04 | Ubuntu 18.04, Ubuntu 20.04 |
| Container runtime | Containerd (v1.6.18) | | |
| Container networking | Antrea (v1.11.3), Calico (v3.26.1), Multus CNI (v4.0.1, v3.8.0) | | |
| Container registry | Harbor (v2.8.4) | | |
| Ingress | NSX Advanced Load Balancer Essentials and Avi Controller**** (v21.1.5-v21.1.6, v22.1.3-v22.1.4), NSX v4.1.0 (vSphere 8.0.u1), v3.2.2 (vSphere 7), Contour (v1.25.2, v1.24.5) | Contour (v1.25.2, v1.24.5) | Contour (v1.25.2, v1.24.5) |
| Storage | vSphere Container Storage Interface (v3.0.2*****) and vSphere Cloud Native Storage | Amazon EBS CSI driver (v1.18.0) and in-tree cloud providers | Azure Disk CSI driver (v1.28.1), Azure File CSI driver (v1.28.0), and in-tree cloud providers |
| Authentication | OIDC and LDAP via Pinniped (v0.24.0) | | |
| Observability | Fluent Bit (v2.1.2, v1.9.5), Prometheus (v2.45.0, v2.43.0)******, Grafana (v10.0.1) | | |
| Service Discovery | External DNS (v0.13.4, v0.12.2) | | |
| Backup and migration | Velero (v1.11.1) | | |
* For a list of VMware Cloud on AWS SDDC versions that are compatible with this release, see the VMware Product Interoperability Matrix.
** For a full list of Tanzu CLI versions that are compatible with this release, see Product Interoperability Matrix.
*** Tanzu Kubernetes Grid v1.6 is the last release that supports building Red Hat Enterprise Linux 7 images.
**** On vSphere 8, to use NSX Advanced Load Balancer with a TKG standalone management cluster and its workload clusters, you need NSX ALB v22.1.2 or later and TKG v2.1.1 or later.
***** Version of vsphere_csi_driver. For a full list of vSphere Container Storage Interface components included in this release, see Component Versions.
****** If you upgrade a cluster to Kubernetes v1.25, you must upgrade Prometheus to at least version 2.37.0+vmware.3-tkg.1. Earlier versions of the Prometheus package, for example version 2.37.0+vmware.1-tkg.1, are not compatible with Kubernetes v1.25.
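For example, a minimal sketch of the package update, assuming the Prometheus package was installed under the name prometheus in the namespace my-packages (both names are illustrative; adjust them to your installation):
tanzu package installed update prometheus --version 2.37.0+vmware.3-tkg.1 --namespace my-packages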
For a full list of Kubernetes versions that ship with Tanzu Kubernetes Grid v2.4, see Supported Kubernetes Versions above.
The TKG v2.4.x release includes the following software component versions:
Note: Previous TKG releases included components that are now distributed via the Tanzu Standard repository. For a list of these components, see Tanzu Standard Repository Release Notes.
Component | TKG v2.4.1 | TKG v2.4.0 |
---|---|---|
aad-pod-identity | v1.8.15+vmware.2 | v1.8.15+vmware.2 |
addons-manager | v2.2+vmware.1 | v2.2+vmware.1 |
ako-operator | v1.10.0_vmware.2 | v1.10.0_vmware.2* |
antrea | v1.11.3_vmware.2-advanced* | v1.11.2_vmware.1-advanced* |
antrea-internetworking | v1.11.2* | v1.11.1* |
aws-cloud-controller-manager | v1.27.1+vmware.1 | v1.27.1+vmware.1* |
aws-ebs-csi-driver | v1.18.0+vmware.3 | v1.18.0+vmware.3* |
azuredisk-csi-driver | v1.28.1+vmware.3* | v1.28.1+vmware.2* |
azurefile-csi-driver | v1.28.0+vmware.3* | v1.28.0+vmware.2* |
calico_all | v3.26.1+vmware.3* | v3.26.1+vmware.1* |
capabilities-package | v0.31.1-capabilities* | v0.31.0-capabilities* |
carvel-secretgen-controller | v0.14.2+vmware.2 | v0.14.2+vmware.2 |
cloud-provider-azure | v1.1.26+vmware.1, v1.23.23+vmware.1, v1.24.10+vmware.1 | v1.1.26+vmware.1, v1.23.23+vmware.1, v1.24.10+vmware.1 |
cloud_provider_vsphere | v1.27.0+vmware.1 | v1.27.0+vmware.1* |
cluster-api-provider-azure | v1.9.2+vmware.1 | v1.9.2+vmware.1 |
cluster_api | v1.4.5+vmware.1 | v1.4.5+vmware.1* |
cluster_api_aws | v2.1.3+vmware.0 | v2.1.3+vmware.0 |
cluster_api_vsphere | v1.7.1+vmware.0 | v1.7.1+vmware.0* |
cni_plugins | v1.2.0+vmware.7 | v1.2.0+vmware.7* |
containerd | v1.6.18+vmware.1 | v1.6.18+vmware.1 |
coredns | v1.10.1_vmware.7 | v1.10.1_vmware.7* |
crash-diagnostics | v0.3.7+vmware.8* | v0.3.7+vmware.7 |
cri_tools | v1.26.0+vmware.7 | v1.26.0+vmware.7* |
csi_attacher | v4.3.0+vmware.2, v4.2.0+vmware.3 | v4.3.0+vmware.2*, v4.2.0+vmware.3* |
csi_livenessprobe | v2.10.0+vmware.2, v2.9.0+vmware.3 | v2.10.0+vmware.2*, v2.9.0+vmware.3* |
csi_node_driver_registrar | v2.8.0+vmware.2, v2.7.0+vmware.3 | v2.8.0+vmware.2*, v2.7.0+vmware.3* |
csi_provisioner | v3.5.0+vmware.2, v3.4.1+vmware.3, v3.4.0+vmware.3 | v3.5.0+vmware.2*, v3.4.1+vmware.3*, v3.4.0+vmware.3* |
etcd | v3.5.7_vmware.6 | v3.5.7_vmware.6* |
external-snapshotter | v6.2.2+vmware.2, v6.2.1+vmware.3 | v6.2.2+vmware.2*, v6.2.1+vmware.3* |
guest-cluster-auth-service | v1.3.0_tkg.3* | v1.3.0_tkg.2 |
image-builder | v0.1.14+vmware.1 | v0.1.14+vmware.1 |
image-builder-resource-bundle | v1.27.5+vmware.1-tkg.3* | v1.27.5+vmware.1-tkg.1* |
imgpkg | v0.36.0+vmware.2 | v0.36.0+vmware.2 |
jetstack_cert-manager | v1.12.2+vmware.1 | v1.12.2+vmware.1* |
k14s_kapp | v0.55.0+vmware.2 | v0.55.0+vmware.2 |
k14s_ytt | v0.45.0+vmware.2 | v0.45.0+vmware.2 |
kapp-controller | v0.45.2+vmware.1 | v0.45.2+vmware.1 |
kbld | v0.37.0+vmware.2 | v0.37.0+vmware.2 |
kube-vip | v0.5.12+vmware.1 | v0.5.12+vmware.1 |
kube-vip-cloud-provider | v0.0.5+vmware.1, v0.0.4+vmware.4 | v0.0.5+vmware.1, v0.0.4+vmware.4 |
kubernetes | v1.27.5+vmware.1, v1.26.8+vmware.1, v1.25.13+vmware.1 | v1.27.5+vmware.1*, v1.26.8+vmware.1*, v1.25.13+vmware.1* |
kubernetes-csi_external-resizer | v1.8.0+vmware.2, v1.7.0+vmware.3 | v1.8.0+vmware.2*, v1.7.0+vmware.3* |
kubernetes-sigs_kind | v1.27.5+vmware.1-tkg.1_v0.17.0 | v1.27.5+vmware.1-tkg.1_v0.17.0* |
kubernetes_autoscaler | v1.27.2+vmware.1* | v1.27.5+vmware.1* |
load-balancer-and-ingress-service (AKO) | 1.10.2+vmware.1-tkg.1 | 1.10.2+vmware.1-tkg.1 |
metrics-server | v0.6.2+vmware.1 | v0.6.2+vmware.1 |
pinniped | v0.24.0+vmware.1-tkg.1 | v0.24.0+vmware.1-tkg.1 |
pinniped-post-deploy | v0.24.0+vmware.1 | v0.24.0+vmware.1 |
sonobuoy | v0.56.16+vmware.2 | v0.56.16+vmware.2 |
tanzu-framework | v0.31.1* | v0.31.0* |
tanzu-framework-addons | v0.31.1* | v0.31.0* |
tanzu-framework-management-packages | v0.31.1* | v0.31.0* |
tkg-bom | v2.4.1* | v2.4.0* |
tkg-core-packages | v1.27.5+vmware.1-tkg.3* | v1.27.5+vmware.1-tkg.1* |
tkg-standard-packages | v2023.11.21* | v2023.9.27* |
tkg-storageclass-package | v0.31.1* | v0.31.0* |
tkg_telemetry | v2.3.0+vmware.3 | v2.3.0+vmware.3* |
velero | v1.11.1+vmware.1 | v1.11.1+vmware.1* |
velero-mgmt-cluster-plugin | v0.2.1+vmware.1 | v0.2.1+vmware.1* |
velero-plugin-for-aws | v1.7.1+vmware.1 | v1.7.1+vmware.1* |
velero-plugin-for-csi | v0.5.1+vmware.1 | v0.5.1+vmware.1* |
velero-plugin-for-microsoft-azure | v1.7.1+vmware.1 | v1.7.1+vmware.1* |
velero-plugin-for-vsphere | v1.5.1+vmware.1 | v1.5.1+vmware.1 |
vendir | v0.33.1+vmware.2 | v0.33.1+vmware.2 |
vsphere_csi_driver | v3.0.2+vmware.2 | v3.0.2+vmware.2* |
* Indicates a new component or version bump since the previous release. TKG v2.4.0 is previous to v2.4.1, and v2.3.0 is previous to v2.4.0.
For a list of software component versions that ship with TKG v2.4, use imgpkg to pull repository bundles and then list their contents. For example, to list the component versions that ship with the Tanzu Standard repository for TKG v2.4.1, run the following command:
imgpkg pull -b projects.registry.vmware.com/tkg/packages/standard/repo:v2023.11.21 -o standard-2023.11.21
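After the pull completes, you can inspect the downloaded bundle on disk; for example, assuming the output directory shown in the -o value above:
ls -R standard-2023.11.21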
In the TKG upgrade path, v2.4.0 immediately follows v2.3.0, and v2.4.1 immediately follows v2.4.0.
You can only upgrade to Tanzu Kubernetes Grid v2.4.x from v2.3.x. If you want to upgrade to Tanzu Kubernetes Grid v2.4.x from a version earlier than v2.3.x, you must upgrade to v2.3.x first.
When upgrading Kubernetes versions on workload clusters, you cannot skip minor versions. For example, you cannot upgrade a Tanzu Kubernetes cluster directly from v1.25.x to v1.27.x. You must upgrade a v1.25.x cluster to v1.26.x before upgrading the cluster to v1.27.x.
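For example, a minimal sketch of the two-step upgrade, assuming a workload cluster named my-cluster and illustrative TKr names; run tanzu kubernetes-release get to list the TKr names that are actually available in your environment:
tanzu cluster upgrade my-cluster --tkr v1.26.8---vmware.1-tkg.1
tanzu cluster upgrade my-cluster --tkr v1.27.5---vmware.1-tkg.1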
Tanzu Kubernetes Grid v2.4 release dates are:
Tanzu Kubernetes Grid v2.4 introduces the following documentation change compared with v2.3.0, which is the latest previous release.
This section provides advance notice of behavior changes and feature deprecations that will take effect in future releases, after the TKG v2.4.x releases.
ImportantTanzu Kubernetes Grid v2.4 (including patch releases) is the last minor version of TKG that supports the creation of standalone TKG management clusters and TKG workload clusters on AWS and Azure. The ability to create standalone TKG management clusters and TKG workload clusters on AWS and Azure will be removed in the Tanzu Kubernetes Grid v2.5 release.
Going forward, VMware recommends that you use Tanzu Mission Control to create native AWS EKS and Azure AKS clusters instead of creating new standalone TKG management clusters or new TKG workload clusters on AWS and Azure. For information about how to create native AWS EKS and Azure AKS clusters with Tanzu Mission Control, see Managing the Lifecycle of AWS EKS Clusters and Managing the Lifecycle of Azure AKS Clusters in the Tanzu Mission Control documentation.
Although the recommendation is to use Tanzu Mission Control to create native AWS EKS and Azure AKS clusters, creating and using standalone TKG management clusters and TKG workload clusters on AWS and Azure remains fully supported for all TKG releases up to and including TKG v2.4.x.
For information about why VMware is deprecating TKG clusters on AWS and Azure, see VMware Tanzu Aligns to Multi-Cloud Industry Trends on the VMware Tanzu blog.
Deploying and Managing TKG 2.4 Standalone Management Clusters includes topics specific to standalone management clusters that are not relevant to using TKG with a vSphere with Tanzu Supervisor.
For more information, see Find the Right TKG Docs for Your Deployment on the VMware Tanzu Kubernetes Grid Documentation page.
The following customer-visible issues are resolved in Tanzu Kubernetes Grid v2.4.1.
Creating a ClusterClass workload cluster fails on IPv4 primary dualstack in an airgapped environment with proxy mode enabled
Attempts to create a ClusterClass workload cluster on IPv4 primary dualstack in an airgapped environment with proxy mode enabled fail with the error unable to wait for cluster nodes to be available: worker nodes are still being created for MachineDeployment 'wl-antrea-md-0-5zlgc', DesiredReplicas=1 Replicas=0 ReadyReplicas=0 UpdatedReplicas=0. The antrea-agent log shows Error running agent: error initializing agent: K8s Node should have an IPv6 address if IPv6 Pod CIDR is defined.
When multiple vSphere OVA templates are detected, the first one is used
If multiple templates are detected at the same path that have the same os-name, os-arch, and os-version, the first one that matches the requirement is used. This has been fixed so that an error is thrown prompting the user to provide the full path to the desired template.
The following issues that were documented as Known Issues in earlier Tanzu Kubernetes Grid releases are resolved in Tanzu Kubernetes Grid v2.4.0.
Components fail to schedule when using clusters with limited capacity
For management clusters and workload clusters, if you deploy clusters with a single control plane node, a single worker node, or small or medium clusters, you might encounter resource scheduling contention.
The following are known issues in Tanzu Kubernetes Grid v2.4.x. Any known issues that were present in v2.4.0 that have been resolved in a subsequent v2.4.x patch release are listed under the Resolved Issues for the patch release in which they were fixed.
You can find additional solutions to frequently encountered issues in Troubleshooting Management Cluster Issues and Troubleshooting Workload Cluster Issues, or in Broadcom Communities.
You cannot upgrade multi-OS clusters
You cannot use the tanzu cluster upgrade command to upgrade clusters with Windows worker nodes as described in Deploy a Multi-OS Workload Cluster.
Before upgrade, you must manually update a changed Avi certificate in the tkg-system package values
Management cluster upgrade fails if you have rotated an Avi Controller certificate, even if you have updated its value in the management cluster’s secret/avi-controller-ca as described in Modify the Avi Controller Credentials.
Failure occurs because updating secret/avi-controller-ca does not copy the new value into the management cluster’s tkg-system package values, and TKG uses the certificate value from those package values during upgrade.
For legacy management clusters created in TKG v1.x, the new value is also not copied into the ako-operator-addon secret.
Workaround: Before upgrading TKG, check whether the Avi certificate in tkg-pkg-tkg-system-values is up to date, and patch it if needed (a sketch of the decode, edit, re-encode, and patch cycle follows these steps):
1. From the avi-controller-ca secret, get the current certificate value: kubectl get secret avi-controller-ca -n tkg-system-networking -o jsonpath="{.data.certificateAuthorityData}"
2. From the tkg-pkg-tkg-system-values secret, get and decode the package values string: kubectl get secret tkg-pkg-tkg-system-values -n tkg-system -o jsonpath="{.data.tkgpackagevalues\.yaml}" | base64 --decode
3. In the decoded package values, check the value for avi_ca_data_b64 under akoOperatorPackage.akoOperator.config. If it differs from the avi-controller-ca value, update tkg-pkg-tkg-system-values with the new value:
   - Paste in the value from avi-controller-ca as the avi_ca_data_b64 value under akoOperatorPackage.akoOperator.config.
   - Use base64 to re-encode the entire package values string.
   - Patch the tkg-pkg-tkg-system-values secret with the new, encoded string: kubectl patch secret/tkg-pkg-tkg-system-values -n tkg-system -p '{"data": {"tkgpackagevalues.yaml": "BASE64-ENCODED STRING"}}'
For management clusters created before TKG v2.1, if you updated tkg-pkg-tkg-system-values in the previous step, also update the ako-operator-addon secret:
1. Check whether the management cluster has an ako-operator-addon secret: kubectl get secret -n tkg-system ${MANAGEMENT_CLUSTER_NAME}-ako-operator-addon
   If the command outputs an ako-operator-addon object, the management cluster was created in v1.x and you need to update its secret as follows.
2. From the ako-operator-addon secret, get and decode the values string: kubectl get secret ${MANAGEMENT-CLUSTER-NAME}-ako-operator-addon -n tkg-system -o jsonpath="{.data.values\.yaml}" | base64 --decode
3. Paste in the value from avi-controller-ca as the avi_ca_data_b64 value.
4. Use base64 to re-encode the entire ako-operator-addon values string.
5. Patch the ako-operator-addon secret with the new, encoded string: kubectl patch secret/${MANAGEMENT-CLUSTER-NAME}-ako-operator-addon -n tkg-system -p '{"data": {"values.yaml": "BASE64-ENCODED STRING"}}'
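As a minimal sketch of the decode, edit, re-encode, and patch cycle for the tkg-pkg-tkg-system-values secret, assuming the decoded values are saved to a local file named tkgpackagevalues.yaml (an illustrative filename) and that your base64 supports -w 0 for unwrapped output (GNU coreutils; on macOS, omit -w 0):
kubectl get secret tkg-pkg-tkg-system-values -n tkg-system -o jsonpath="{.data.tkgpackagevalues\.yaml}" | base64 --decode > tkgpackagevalues.yaml
# Edit avi_ca_data_b64 under akoOperatorPackage.akoOperator.config in tkgpackagevalues.yaml, then:
NEW_VALUES=$(base64 -w 0 < tkgpackagevalues.yaml)
kubectl patch secret/tkg-pkg-tkg-system-values -n tkg-system -p "{\"data\": {\"tkgpackagevalues.yaml\": \"${NEW_VALUES}\"}}"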
Cannot create new workload clusters based on non-current TKr versions with Antrea CNI
You cannot create a new workload cluster that uses Antrea CNI and runs Kubernetes versions shipped with prior versions of TKG, such as Kubernetes v1.23.10, which was the default Kubernetes version in TKG v1.6.1, as listed in Supported Kubernetes Versions in Tanzu Kubernetes Grid v2.4.
Workaround: Create a workload cluster that runs Kubernetes 1.27.x, 1.26.x, or 1.25.x. The Kubernetes project recommends that you run components on the most recent patch version of any current minor version.
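For example, a minimal sketch of creating a cluster on a specific supported TKr, assuming a cluster configuration file named my-cluster-config.yaml and an illustrative TKr name (run tanzu kubernetes-release get to list the TKrs actually available):
tanzu cluster create --file my-cluster-config.yaml --tkr v1.26.8---vmware.1-tkg.1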
Orphan VSphereMachine objects after cluster upgrade or scale
Due to a known issue in the cluster-api-provider-vsphere (CAPV) project, standalone management clusters on vSphere may leave orphaned VSphereMachine objects behind after cluster upgrade or scale operations.
This issue is fixed in newer versions of CAPV, which future patch versions of TKG will incorporate.
Workaround: To find and delete orphaned CAPV VM objects (a quick way to list the orphans is sketched after these steps):
1. List all VSphereMachine objects and identify the orphaned ones, which do not have any PROVIDERID value: kubectl get vspheremachines -A
2. For each orphaned VSphereMachine object:
   - Get the object’s YAML: kubectl get vspheremachines VSPHEREMACHINE -n NAMESPACE -o yaml
     Where VSPHEREMACHINE is the machine NAME and NAMESPACE is its namespace.
   - Check whether the VSphereMachine has an associated Machine object: kubectl get machines -n NAMESPACE | grep MACHINE-ID
   - Run kubectl delete machine to delete any Machine object associated with the VSphereMachine.
   - Delete the VSphereMachine object: kubectl delete vspheremachines VSPHEREMACHINE -n NAMESPACE
   - Check in vCenter whether the VSphereMachine VM still appears; it may be present, but powered off. If so, delete it in vCenter.
   - If the VSphereMachine object is not removed, clear its finalizers: kubectl patch vspheremachines VSPHEREMACHINE -n NAMESPACE -p '{"metadata": {"finalizers": null}}' --type=merge
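A minimal sketch of step 1 above, assuming the provider ID is exposed at .spec.providerID on VSphereMachine objects (orphans show an empty PROVIDERID column):
kubectl get vspheremachines -A -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,PROVIDERID:.spec.providerID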
Workload cluster cannot distribute storage across multiple datastores
You cannot enable a workload cluster to distribute storage across multiple datastores as described in Deploy a Cluster that Uses a Datastore Cluster. If you tag multiple datastores in a datastore cluster as the basis for a workload cluster’s storage policy, the workload cluster uses only one of the datastores.
Workaround: None
Deploying management cluster to vSphere 7 fails while waiting for the cluster control plane to become available
If you specify the VM Network when deploying a management cluster to vSphere 7, the deployment fails with the error unable to set up management cluster: unable to wait for cluster control plane available: control plane is not available yet.
Workaround: This occurs when the network “VM Network” has multiple configured subnets with static IPs for VsVip and ServiceEngine. Set exclude_discovered_subnets to True on the VM Network to ignore the discovered subnets and allow virtual services to be placed on the service engines.
Availability zones can be deleted while VMs are assigned to them
If you delete an availability zone that contains VMs, the VMs cannot subsequently be deleted.
Workaround: Remove all VMs from an availability zone before deleting it.
Creating workload clusters fails due to VPXD session exhaustion
When creating workload clusters on vSphere, the creation fails with the following error:
vSphere config validation failed: failed to get VC client: failed to create vc client: Post "https://address/sdk": EOF. The vCenter vpxd.log reports the error: Out of HTTP sessions: Limited to 2000.
This happens due to vCenter Server session exhaustion.
Workaround: See VMware KB 50114010.
Node pools created with small nodes may stall at Provisioning
Node pools created with node SIZE configured as small may become stuck in the Provisioning state and never proceed to Running.
Workaround: Configure node pools with at least medium size nodes.
Ignorable goss test failures during image-build process
When you run Kubernetes Image Builder to create a custom Linux machine image, the goss tests python-netifaces, python-requests, and ebtables fail. Command output reports the failures. The errors can be ignored; they do not prevent a successful image build.
With TKG v2.4, the Tanzu Standard package repository is versioned and distributed separately from TKG, and its versioning is based on a date stamp. For TKG v2.4.1, the latest compatible Tanzu Standard repository version is v2023.11.21 and both are released on the same date.
Future Tanzu Standard repository versions may publish more frequently than TKG versions, but all patch versions will maintain existing compatibilities between minor versions of TKG and Tanzu Standard.
For more information, see the Tanzu Standard v2023.11.21 release notes.