VMware Tanzu Kubernetes Grid v2.3 Release Notes

Except where noted, these release notes apply to all v2.3.x patch versions of Tanzu Kubernetes Grid (TKG).

TKG v2.3 is distributed as a downloadable Tanzu CLI package that deploys a versioned TKG standalone management cluster. TKG v2.3 supports creating and managing class-based workload clusters with a standalone management cluster that can run on multiple infrastructures, including vSphere, AWS, and Azure.

Tanzu Kubernetes Grid v2.x and vSphere with Tanzu Supervisor in vSphere 8

Important

The vSphere with Tanzu Supervisor in vSphere 8.0.1c and later runs TKG v2.2. Earlier versions of vSphere 8 run TKG v2.0, which was not released independently of Supervisor. Standalone management clusters that run TKG 2.x are available from TKG 2.1 onwards. Due to the earlier TKG version that is embedded in Supervisor, some of the features that are available if you are using a standalone TKG 2.3 management cluster are not available if you are using a vSphere with Tanzu Supervisor to create workload clusters. Later TKG releases will be embedded in Supervisor in future vSphere update releases. Consequently, the version of TKG that is embedded in the latest vSphere with Tanzu version at a given time might not be as recent as the latest standalone version of TKG. However, the versions of the Tanzu CLI that are compatible with all TKG v2.x releases are fully supported for use with Supervisor in all releases of vSphere 8. For example, Tanzu CLI v1.0.0 is fully backwards compatible with the TKG 2.2 plugins that Supervisor provides.

Tanzu CLI and vSphere with Tanzu in vSphere 7

The versions of the Tanzu CLI that are compatible with TKG 2.x and with the vSphere with Tanzu Supervisor in vSphere 8 are not compatible with the Supervisor Cluster in vSphere 7. To use the Tanzu CLI with a vSphere with Tanzu Supervisor Cluster on vSphere 7, use the Tanzu CLI version from TKG v1.6. To use the versions of the Tanzu CLI that are compatible with TKG 2.x with Supervisor, upgrade to vSphere 8. You can deploy a standalone TKG 2.x management cluster to vSphere 7 if a vSphere with Tanzu Supervisor Cluster is not present. For information about compatibility between the Tanzu CLI and VMware products, see the Tanzu CLI Documentation.

What’s New

Tanzu Kubernetes Grid v2.3.x includes the following new features.

Tanzu Kubernetes Grid v2.3.1

New features in Tanzu Kubernetes Grid v2.3.1:

Tanzu Kubernetes Grid v2.3.0

New features in Tanzu Kubernetes Grid v2.3.0:

  • New distribution mechanism for standalone Tanzu CLI plugins, including standalone Tanzu CLI plugins for Tanzu Kubernetes Grid. Additionally, the Tanzu Core CLI is now distributed separately from Tanzu Kubernetes Grid. For instructions on how to install the Tanzu CLI for use with Tanzu Kubernetes Grid, see Install the Tanzu CLI and Kubernetes CLI for Use with Standalone Management Clusters.
  • The Tanzu Standard package repository is versioned and distributed separately from TKG; see Tanzu Standard Repository v2023.7.13, below.
    • The latest compatible Tanzu Standard repository version for TKG v2.3.0 is Tanzu Standard repository v2023.7.13.
  • You can run workload and standalone management clusters across multiple availability zones (AZs) and change the AZs that their nodes run in.
  • Single-node clusters are upgradeable and are supported for Telco Cloud Automation (TCA); see Single-Node Clusters on vSphere.
    • Streamlined deployment procedure does not require adding tiny TKrs to the management cluster.
    • You cannot use Tanzu Mission Control (TMC) to create and manage single-node clusters, but this capability is planned for a future release of TMC.
  • You can configure a cluster-wide Pod Security Admission (PSA) controller with the new cluster configuration file variables POD_SECURITY_STANDARD_* and Cluster spec podSecurityStandard settings, as described in Pod Security Admission Controller; see the configuration file sketch after this list.
    • PSA support is in the Stable feature state.
  • (vSphere) in-cluster IP Address Management (IPAM) capabilities expanded; see Node IPAM:
    • IPAM for standalone management clusters, in addition to class-based workload clusters.
    • IPv6 support.
    • Workload clusters in different management namespaces can allocate IP addresses from the same global IP pool.
    • IP pools can contain non-contiguous IP ranges.
    • IP pool query kubectl get inclusterippool outputs FREE and USED address counts.
    • Cluster configuration variables, described in Node IPAM in the Configuration File Variable Reference.
    • InClusterIPPool object structure differs from prior TKG versions; cluster upgrade to v2.3 converts IP pools to new structure.
  • Clusters that use the Calico CNI perform IP address detection; see Calico CNI in Pod and Container Networking.
  • Edge workload clusters with isolated storage can use their own locally-stored VM templates; see Specifying a Local VM Template.
  • New cluster configuration file variable VSPHERE_MTU sets the size of the maximum transmission unit (MTU) for management and workload cluster nodes on vSphere; see Configure Cluster Node MTU.
  • You can configure clusters to use alternative machine images by annotating their object specs, and without creating or modifying TKrs; see Use an Alternative Machine Image.
  • (vSphere) CONTROL_PLANE_NODE_NAMESERVERS and WORKER_NODE_NAMESERVERS are now in the Stable state. You can set these variables for nodes running on Ubuntu or Photon; Windows is not supported. For an example use case, see Node IPAM.
  • You can now upgrade clusters that you created from a custom ClusterClass definition in a previous release. For information, see Upgrade Custom Clusters.
  • New flags for machine health checks, --max-unhealthy and --machine-deployment; see Manage Machine Health Checks for Workload Clusters.
  • New tanzu mc credentials update option --vsphere-thumbprint allows you to use the Tanzu CLI to update the TLS thumbprint of your vCenter Server in management clusters and workload clusters on vSphere. See Update Cluster Credentials.
  • Newly-created plan-based clusters do not have the Tanzu Standard package repository added by default.
  • The Pinniped component no longer uses Dex for LDAP identity providers, resulting in the following configuration changes:

    • New configuration variable: LDAP_GROUP_SEARCH_SKIP_ON_REFRESH
    • Updated configuration variables:
      • LDAP_BIND_DN and LDAP_BIND_PASSWORD are now required.
      • LDAP_GROUP_SEARCH_NAME_ATTRIBUTE defaults to dn.
      • LDAP_USER_SEARCH_FILTER and LDAP_GROUP_SEARCH_FILTER must be set in the format used by Pinniped.
    • Removed configuration variables: LDAP_USER_SEARCH_USERNAME, LDAP_USER_SEARCH_EMAIL_ATTRIBUTE, and LDAP_GROUP_SEARCH_GROUP_ATTRIBUTE.

    The removal of Dex means you must change a management cluster’s LDAP settings before upgrading it to TKG v2.3; see (LDAP Only) Update LDAP Settings.
    For more information about the new and updated configuration variables, see Identity Providers - LDAP in Configuration File Variable Reference.

  • Because the Tanzu Standard package repository is distributed separately from TKG, the TKG BoM file no longer lists a grafana container image. Other Tanzu Standard components are planned for removal from the BoM in a future release.
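
As an illustration of the new configuration options above, the following cluster configuration file excerpt is a hedged sketch only: the variable names and values shown (VSPHERE_MTU, POD_SECURITY_STANDARD_AUDIT, POD_SECURITY_STANDARD_WARN, POD_SECURITY_STANDARD_ENFORCE) are assumptions based on the bullets above, so confirm them against the Configuration File Variable Reference before use.

  # Cluster configuration file excerpt (illustrative values only)
  VSPHERE_MTU: 9000                        # node MTU in bytes, for clusters on vSphere
  POD_SECURITY_STANDARD_AUDIT: restricted  # PSA level recorded in audit logs
  POD_SECURITY_STANDARD_WARN: restricted   # PSA level that triggers client warnings
  POD_SECURITY_STANDARD_ENFORCE: baseline  # PSA level that rejects non-compliant pods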

Tanzu Standard Repository

With TKG v2.3, the Tanzu Standard package repository is versioned and distributed separately from TKG, and its versioning is based on a date stamp. See Tanzu Standard Repository v2023.7.13 below for more information.

For each TKG v2.3 patch version, release notes for its latest compatible Tanzu Standard repository version are:

Supported Kubernetes, TKG, and Package Versions

From TKG v2.2 onwards, VMware’s support policy changed for older patch versions of TKG and TKrs, which package Kubernetes versions for TKG. Support policies for TKG v2.1 and older minor versions of TKG do not change.

The first two sections below summarize support for all currently-supported versions of TKG and TKrs, under the support policies that apply to each.

The third section below lists the versions of packages in the Tanzu Standard repository that are supported by Kubernetes v1.26, v1.25, and v1.24 TKrs.

Supported Kubernetes Versions

Each version of Tanzu Kubernetes Grid adds support for the Kubernetes version of its management cluster, plus additional Kubernetes versions, distributed as Tanzu Kubernetes releases (TKrs), except where noted as a Known Issue.

Minor versions: VMware supports TKG v2.3 with Kubernetes v1.26, v1.25, and v1.24 at time of release. Once TKG v2.1 reaches its End of General Support milestone, VMware will no longer support Kubernetes v1.24 with TKG. Once TKG v2.2 reaches its End of General Support milestone, VMware will no longer support Kubernetes v1.25 with TKG.

Patch versions: After VMware publishes a new TKr patch version for a minor line, it retains support for older patch versions for two months. This gives customers a 2-month window to upgrade to new TKr patch versions. From TKG v2.2 onwards, VMware does not support all TKr patch versions from previous minor lines of Kubernetes.

Tanzu Kubernetes Grid patch versions and the TKr patch versions that they support or supported are listed below.

Tanzu Kubernetes Grid Version Management Cluster Kubernetes Version Provided Kubernetes (TKr) Versions
2.3.1 1.26.8 1.26.8, 1.25.13, 1.24.17
2.3.0 1.26.5 1.26.5, 1.25.10, 1.24.14
2.2.0 1.25.7 1.25.7, 1.24.11, 1.23.17
2.1.1 1.24.10 1.24.10, 1.23.16, 1.22.17
2.1.0 1.24.9 1.24.9, 1.23.15, 1.22.17

Supported Tanzu Kubernetes Grid Versions

VMware supports TKG versions as follows:

Minor versions: VMware supports TKG following the N-2 Lifecycle Policy, which applies to the latest and previous two minor versions of TKG. See the VMware Product Lifecycle Matrix for more information.

Patch versions: VMware does not support all previous TKG patch versions. After VMware releases a new patch version of TKG, it retains support for the older patch version for two months. This gives customers a 2-month window to upgrade to new TKG patch versions.

  • For example, support for TKG v2.3.0 would end two months after the general availability of TKG v2.3.1.

Supported Package Versions

For TKG v2.3, package versions in the Tanzu Standard repository are compatible with TKrs for Kubernetes minor versions v1.26, v1.25, and v1.24 as follows:

Package Package Version Kubernetes v1.26 TKrs Kubernetes v1.25 TKrs Kubernetes v1.24 TKrs
Cert Manager
cert-manager.tanzu.vmware.com
1.11.1+vmware.1-tkg.1-20230629
Contour
contour.tanzu.vmware.com
1.24.4+vmware.1-tkg.1-20230629
External DNS
external-dns.tanzu.vmware.com
0.13.4+vmware.2-tkg.1-20230629
0.12.2+vmware.5-tkg.2-20230629
0.11.1+vmware.5-tkg.2-20230629
Fluent Bit
fluent-bit.tanzu.vmware.com
2.1.2+vmware.1-tkg.1-20230629
1.9.5+vmware.1-tkg.3-zshippable
1.8.15+vmware.1-tkg.1
FluxCD Helm Controller
fluxcd-helm-controller.tanzu.vmware.com
0.32.0+vmware.1-tkg.2-20230629
0.21.0+vmware.1-tkg.1-zshippable
FluxCD Source Controller
fluxcd-source-controller.tanzu.vmware.com
0.33.0+vmware.2-tkg.1-20230629
Grafana
grafana.tanzu.vmware.com
9.5.1+vmware.2-tkg.1-20230629
7.5.17+vmware.1-tkg.3-zshippable
Harbor
harbor.tanzu.vmware.com
2.8.2+vmware.2-tkg.1-20230629
Multus CNI
multus-cni.tanzu.vmware.com
3.8.0+vmware.3-tkg.1
4.0.1+vmware.1-tkg.1-20230629
Prometheus
prometheus.tanzu.vmware.com
2.43.0+vmware.2-tkg.1-20230629
2.37.0+vmware.3-tkg.1
2.36.2+vmware.1-tkg.1
Whereabouts
whereabouts.tanzu.vmware.com
0.6.1+vmware.2-tkg.1-20230629
0.5.4+vmware.2-tkg.1

Product Snapshot for Tanzu Kubernetes Grid v2.3

Tanzu Kubernetes Grid v2.3 supports the following infrastructure platforms and operating systems (OSs), as well as cluster creation and management, networking, storage, authentication, backup and migration, and observability components.

See Tanzu Standard Repository v2023.10.16 Release Notes in Tanzu Packages Documentation for additional package versions compatible with TKG v2.3.

See Component Versions for a full list of component versions included in TKG v2.3.

vSphere AWS Azure
Infrastructure platform
  • vSphere 7.0, 7.0U1-U3
  • vSphere 8.0, 8.0U1
  • VMware Cloud on AWS* v1.20, v1.22
  • Azure VMware Solution v2.0
Native AWS Native Azure
Tanzu CLI Tanzu CLI Core v1.0.0**
TKG API and package infrastructure Tanzu Framework v0.30.2
Cluster creation and management Core Cluster API (v1.4.5), Cluster API Provider vSphere (v1.7.1) Core Cluster API (v1.4.5), Cluster API Provider AWS (v2.1.3) Core Cluster API (v1.4.5), Cluster API Provider Azure (v1.9.2)
Kubernetes node OS distributed with TKG Photon OS 3, Ubuntu 20.04 Amazon Linux 2, Ubuntu 20.04 Ubuntu 18.04, Ubuntu 20.04
Build your own image Photon OS 3, Red Hat Enterprise Linux 7*** and 8, Ubuntu 18.04, Ubuntu 20.04, Windows 2019 Amazon Linux 2, Ubuntu 18.04, Ubuntu 20.04 Ubuntu 18.04, Ubuntu 20.04
Container runtime Containerd (v1.6.18)
Container networking Antrea (v1.11.2), Calico (v3.25.1), Multus CNI (v4.0.1, v3.8.0)
Container registry Harbor (v2.8.4)
Ingress NSX Advanced Load Balancer Essentials and Avi Controller **** (v21.1.4-v21.1.6, v22.1.2-v22.1.3), NSX v4.1.0 (vSphere 8.0U1), v3.2.2 (vSphere 7), Contour (v1.25.2) Contour (v1.25.2) Contour (v1.25.2)
Storage vSphere Container Storage Interface (v3.0.1*****) and vSphere Cloud Native Storage Amazon EBS CSI driver (v1.18.0) and in-tree cloud providers Azure Disk CSI driver (v1.27.1), Azure File CSI driver (v1.27.0), and in-tree cloud providers
Authentication OIDC and LDAP via Pinniped (v0.24.0)
Observability Fluent Bit (v2.1.6), Prometheus (v2.45.0, v2.37.0)******, Grafana (v10.0.1)
Service Discovery External DNS (v0.13.4)
Backup and migration Velero (v1.10.3)

* For a list of VMware Cloud on AWS SDDC versions that are compatible with this release, see the VMware Product Interoperability Matrix.

** For a full list of Tanzu CLI versions that are compatible with this release, see Product Interoperability Matrix.

*** Tanzu Kubernetes Grid v1.6 is the last release that supports building Red Hat Enterprise Linux 7 images.

**** On vSphere 8, to use NSX Advanced Load Balancer with a TKG standalone management cluster and its workload clusters, you need NSX ALB v22.1.2 or later and TKG v2.1.1 or later.

***** Version of vsphere_csi_driver. For a full list of vSphere Container Storage Interface components included in this release, see Component Versions.

****** If you upgrade a cluster to Kubernetes v1.25, you must upgrade Prometheus to at least version 2.37.0+vmware.3-tkg.1. Earlier versions of the Prometheus package, for example version 2.37.0+vmware.1-tkg.1, are not compatible with Kubernetes 1.25.
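
For example, you might update an installed Prometheus package to a compatible version with the Tanzu CLI; this is a sketch only, and the installed-package name and namespace below are placeholders that you can confirm with tanzu package installed list -A:

  tanzu package installed update prometheus --version 2.37.0+vmware.3-tkg.1 --namespace PACKAGES-NAMESPACE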

For a full list of Kubernetes versions that ship with Tanzu Kubernetes Grid v2.3, see Supported Kubernetes Versions above.

Component Versions

The TKG v2.3.x release includes the following software component versions:

Note

Previous TKG releases included components that are now distributed via the Tanzu Standard repository. For a list of these components, see Tanzu Standard Repository below.

Component TKG v2.3.1 TKG v2.3.0
aad-pod-identity v1.8.15+vmware.2 v1.8.15+vmware.2*
addons-manager v2.2+vmware.1 v2.2+vmware.1*
ako-operator v1.9.0_vmware.1 v1.9.0_vmware.1*
alertmanager v0.25.0_vmware.4* v0.25.0_vmware.3*
antrea v1.11.2_vmware.1* v1.11.1_vmware.4*
antrea-interworking v0.11.1* v0.11.0*
aws-ebs-csi-driver v1.18.0+vmware.2 v1.18.0+vmware.2*
azuredisk-csi-driver v1.27.1+vmware.3* v1.27.1+vmware.2*
azurefile-csi-driver v1.27.0+vmware.3* v1.27.0+vmware.2*
calico v3.25.1+vmware.2 v3.25.1+vmware.2*
capabilities-package v0.30.2-capabilities* v0.30.2-capabilities*
carvel-secretgen-controller v0.14.2+vmware.2 v0.14.2+vmware.2*
cloud-provider-azure v1.1.26+vmware.1,
v1.23.23+vmware.1,
v1.24.10+vmware.1
v1.1.26+vmware.1,
v1.23.23+vmware.1,
v1.24.10+vmware.1
cloud_provider_vsphere

v1.24.6+vmware.1,
v1.26.2+vmware.1,
v1.25.3+vmware.1

v1.24.6+vmware.1*,
v1.26.2+vmware.1*,
v1.25.3+vmware.1*

cluster-api-provider-azure v1.9.2+vmware.1 v1.9.2+vmware.1*
cluster_api v1.4.5+vmware.1* v1.4.2+vmware.3*
cluster_api_aws v2.1.3+vmware.0 v2.1.3+vmware.0*
cluster_api_vsphere v1.7.1+vmware.0* v1.7.0+vmware.0*
cni_plugins v1.1.1+vmware.28* v1.1.1+vmware.23*
configmap-reload v0.8.0+vmware.3* v0.8.0+vmware.2*
containerd v1.6.18+vmware.1 v1.6.18+vmware.1
coredns v1.9.3+vmware.16* v1.9.3+vmware.11*
crash-diagnostics v0.3.7+vmware.7 v0.3.7+vmware.7*
cri_tools v1.25.0+vmware.10* v1.25.0+vmware.6*
csi_attacher v4.2.0+vmware.4*,
v4.0.0+vmware.1,
v3.5.0+vmware.1,
v3.4.0+vmware.1,
v3.3.0+vmware.1
v4.2.0+vmware.2*,
v4.0.0+vmware.1*,
v3.5.0+vmware.1,
v3.4.0+vmware.1,
v3.3.0+vmware.1
csi_livenessprobe v2.9.0+vmware.4*,
v2.8.0+vmware.1,
v2.7.0+vmware.1,
v2.6.0+vmware.1,
v2.5.0+vmware.1,
v2.4.0+vmware.1
v2.9.0+vmware.2*,
v2.8.0+vmware.1*,
v2.7.0+vmware.1,
v2.6.0+vmware.1,
v2.5.0+vmware.1,
v2.4.0+vmware.1
csi_node_driver_registrar v2.7.0+vmware.4*,
v2.7.0+vmware.2,
v2.6.3+vmware.1,
v2.6.2+vmware.1,
v2.5.1+vmware.1,
v2.5.0+vmware.1,
v2.3.0+vmware.1
v2.7.0+vmware.1*,
v2.7.0+vmware.2*,
v2.6.3+vmware.1*,
v2.6.2+vmware.1*,
v2.5.1+vmware.1,
v2.5.0+vmware.1,
v2.3.0+vmware.1
csi_provisioner v3.4.1+vmware.3*,
v3.4.0+vmware.4*,
v3.3.0+vmware.1,
v3.2.1+vmware.1,
v3.1.0+vmware.2
v3.4.1+vmware.2*,
v3.4.0+vmware.2*,
v3.3.0+vmware.1*,
v3.2.1+vmware.1,
v3.1.0+vmware.2
dex N/A Removed
envoy v1.25.9+vmware.1* v1.25.6+vmware.1*
external-snapshotter v6.2.1+vmware.4*,
v6.1.0+vmware.1,
v6.0.1+vmware.1,
v5.0.1+vmware.1
v6.2.1+vmware.2*,
v6.1.0+vmware.1*,
v6.0.1+vmware.1,
v5.0.1+vmware.1
etcd v3.5.6+vmware.20* v3.5.6+vmware.14*
guest-cluster-auth-service v1.3.0_tkg.2 v1.3.0_tkg.2
image-builder v0.1.14+vmware.1 v0.1.14+vmware.1*
image-builder-resource-bundle v1.26.8+vmware.1-tkg.2* v1.26.5+vmware.2-tkg.1*
imgpkg v0.36.0+vmware.2 v0.36.0+vmware.2*
jetstack_cert-manager (cert-manager) v1.11.1+vmware.1 v1.11.1+vmware.1*
k8s-sidecar v1.22.4+vmware.2* v1.22.0+vmware.2*,
v1.15.6+vmware.5,
v1.12.1+vmware.6
k14s_kapp (kapp) v0.55.0+vmware.2 v0.55.0+vmware.2*
k14s_ytt (ytt) v0.45.0+vmware.2 v0.45.0+vmware.2*
kapp-controller v0.45.2+vmware.1 v0.45.2+vmware.1*
kbld v0.37.0+vmware.2 v0.37.0+vmware.2*
kube-state-metrics v2.8.2+vmware.1 v2.8.2+vmware.1*
kube-vip v0.5.12+vmware.1 v0.5.12+vmware.1
kube-vip-cloud-provider v0.0.5+vmware.1,
v0.0.4+vmware.4
v0.0.5+vmware.1*,
v0.0.4+vmware.4
kubernetes v1.26.8+vmware.1*,
v1.25.13+vmware.1*,
v1.24.17+vmware.1*
v1.26.5+vmware.2*,
v1.25.10+vmware.2*,
v1.24.14+vmware.2
kubernetes-csi_external-resizer v1.7.0+vmware.4*,
v1.6.0+vmware.1,
v1.5.0+vmware.1,
v1.4.0+vmware.1
v1.7.0+vmware.2*,
v1.6.0+vmware.1*,
v1.5.0+vmware.1*,
v1.4.0+vmware.1
kubernetes-sigs_kind v1.26.8+vmware.1-tkg.2_v0.17.0*,
v1.25.13+vmware.2-tkg.1_v0.17.0*,
v1.24.17+vmware.2-tkg.1_v0.17.0*
v1.26.5+vmware.2-tkg.1_v0.17.0*,
v1.25.10+vmware.2-tkg.1_v0.17.0*,
v1.24.14+vmware.2-tkg.1_v0.17.0*
kubernetes_autoscaler v1.26.2+vmware.1 v1.26.2+vmware.1*
load-balancer-and-ingress-service (AKO) v1.9.3+vmware.2-tkg.1 v1.9.3+vmware.2-tkg.1*
metrics-server v0.6.2+vmware.1 v0.6.2+vmware.1
pinniped v0.24.0+vmware.1-tkg.1 v0.24.0+vmware.1-tkg.1*
pinniped-post-deploy v0.24.0+vmware.1 v0.24.0+vmware.1*
prometheus_node_exporter v1.5.0+vmware.3* v1.5.0+vmware.2*
pushgateway v1.5.1+vmware.3* v1.5.1+vmware.2*
sonobuoy v0.56.16+vmware.2 v0.56.16+vmware.2*
tanzu-framework v0.30.2* v0.30.2*
tanzu-framework-addons v0.30.2* v0.30.2*
tanzu-framework-management-packages v0.30.2* v0.30.2*
tkg-bom v2.3.1* v2.3.0*
tkg-core-packages v1.26.8+vmware.1-tkg.2* v1.26.5+vmware.2-tkg.1*
tkg-standard-packages v2023.10.16* v2023.7.13*
tkg-storageclass-package v0.30.2* v0.30.2*
tkg_telemetry v2.3.1+vmware.3* v2.3.0+vmware.2*
velero v1.10.3+vmware.1 v1.10.3+vmware.1*
velero-mgmt-cluster-plugin* v0.2.0+vmware.1 v0.2.0+vmware.1*
velero-plugin-for-aws v1.6.2+vmware.1 v1.6.2+vmware.1*
velero-plugin-for-csi v0.4.3+vmware.1 v0.4.3+vmware.1*
velero-plugin-for-microsoft-azure v1.6.2+vmware.1 v1.6.2+vmware.1*
velero-plugin-for-vsphere v1.5.1+vmware.1 v1.5.1+vmware.1*
vendir v0.33.1+vmware.2 v0.33.1+vmware.2*
vsphere_csi_driver v3.0.1+vmware.4* v3.0.1+vmware.2*

* Indicates a new component or version bump since the previous release. TKG v2.3.0 is previous to v2.3.1 and TKG v2.2.0 is previous to v2.3.0.

For a list of software component versions that ship with TKG v2.3, use imgpkg to pull repository bundles and then list their contents. For example, to list the component versions that ship with the Tanzu Standard repository for TKG v2.3.1, run the following command:

imgpkg pull -b projects.registry.vmware.com/tkg/packages/standard/repo:v2023.10.16 -o standard-2023.10.16
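
After the pull completes, the package names and versions are visible in the downloaded bundle; the directory layout below assumes the typical Carvel package repository bundle structure and may differ:

  ls standard-2023.10.16/packages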

Local BOM files such as the following also list package versions, but may not be current:

  • ~/.config/tanzu/tkg/bom/tkg-bom-v2.3.1.yaml
  • ~/.config/tanzu/tkg/bom/tkr-bom-v1.26.8+vmware.2-tkg.1.yaml

Supported Upgrade Paths

In the TKG upgrade path, v2.3 immediately follows v2.2.0.

You can only upgrade to Tanzu Kubernetes Grid v2.3.x from v2.2.x. If you want to upgrade to Tanzu Kubernetes Grid v2.3.x from a version earlier than v2.2.x, you must upgrade to v2.2.x first.

When upgrading Kubernetes versions on workload clusters, you cannot skip minor versions. For example, you cannot upgrade a Tanzu Kubernetes cluster directly from v1.24.x to v1.26.x. You must upgrade a v1.24.x cluster to v1.25.x before upgrading the cluster to v1.26.x.
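
For example, to move a workload cluster from Kubernetes v1.24.x to v1.26.x, upgrade through v1.25.x first. A minimal sketch, where MY-CLUSTER and the TKr names are placeholders that you look up with tanzu kubernetes-release get:

  tanzu kubernetes-release get                          # list available TKr names
  tanzu cluster upgrade MY-CLUSTER --tkr TKR-1.25-NAME  # first hop: v1.24.x -> v1.25.x
  tanzu cluster upgrade MY-CLUSTER --tkr TKR-1.26-NAME  # second hop: v1.25.x -> v1.26.x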

Release Dates

Tanzu Kubernetes Grid v2.3 release dates are:

  • v2.3.1: November 9, 2023
  • v2.3.0: August 1, 2023

Behavior Changes in Tanzu Kubernetes Grid v2.3

Tanzu Kubernetes Grid v2.3 introduces the following new behaviors compared with v2.2.0, which is the latest previous release.

  • The Carvel tools are shipped in a dedicated download bundle. For information, see Install the Carvel Tools.
  • When using an OIDC identity provider (IDP) for identity and access management via Pinniped, the OIDC IDP must be configured to issue a refresh token.

    • This may require some additional configuration in the OIDC IDP.
    • For an Okta example, see Configure Identity Management.
    • If your OIDC IDP requires additional scopes or parameters to return a refresh token, configure OIDC_IDENTITY_PROVIDER_SCOPES and OIDC_IDENTITY_PROVIDER_ADDITIONAL_AUTHORIZE_PARAMS with the required scopes or parameters.
  • Tanzu Kubernetes Grid no longer uses Dex for LDAP identity providers. Each configuration variable listed in the Identity Providers - LDAP section of Configuration File Variable Reference now corresponds to a configuration setting in the Pinniped LDAPIdentityProvider custom resource. Before upgrading a management cluster configured to use an LDAP identity provider to Tanzu Kubernetes Grid v2.3, update your LDAP settings as described in (LDAP Only) Update LDAP Settings. All existing LDAP settings will be automatically migrated to the new Pinniped format during the upgrade of the management cluster to v2.3.
  • Velero v1.10 introduces Breaking Changes compared to the versions of Velero that shipped with previous versions of TKG. For information about how to mitigate these breaking changes when upgrading from Velero v1.9.x to v1.10, see Upgrade Velero.
  • While upgrading Tanzu Kubernetes Grid to v2.3 on AWS, you must run tanzu mc permissions aws set after upgrading the Tanzu CLI but before upgrading the management cluster. For more information, see the AWS tab in Prepare to Upgrade Clusters.
  • When creating custom ClusterClass object definitions, you no longer download the default manifest from the Tanzu Framework repository. The default manifest is made available when you create a management cluster or install the Tanzu CLI, or you can pull it from the TKG image repository. For information, see Create a Custom ClusterClass.
  • When deploying a workload cluster with a Kubernetes patch version that is older than the latest supported patch for its minor version, you need to create a ConfigMap object for its TKr before you deploy the cluster. See Deploy the Kubernetes Cluster on the Multiple Kubernetes Versions page.
  • The default vSphere CNS StorageClass that TKG on vSphere creates when ENABLE_DEFAULT_STORAGE_CLASS is set to true (the default) has a volumeBindingMode of WaitForFirstConsumer instead of Immediate. For clusters deployed across multiple AZs, this change ensures that persistent volumes are provisioned in the same availability zones as the pods that use them, to avoid unschedulable pods.

    • For more information about the volumeBindingMode field, see Volume binding mode in the Kubernetes documentation.
    • For more information about default storage classes and customizing storage classes in TKG, see Dynamic Storage.
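
For reference, a default storage class that uses the new binding mode looks similar to the following; this is an illustrative sketch, and the name, provisioner, and parameters of the actual TKG-generated class may differ in your environment:

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: default
    annotations:
      storageclass.kubernetes.io/is-default-class: "true"
  provisioner: csi.vsphere.vmware.com
  volumeBindingMode: WaitForFirstConsumer   # was Immediate in earlier TKG releases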

Future Behavior Change Notices

This section provides advance notice of behavior changes that will take effect in future releases, after the TKG v2.3 release.

Deprecation Notices

The tanzu login command will be removed in future releases of TKG. The command will be replaced by the tanzu context command. For more information, see tanzu context in VMware Tanzu CLI Documentation.

User Documentation

Deploying and Managing TKG 2.3 Standalone Management Clusters includes topics specific to standalone management clusters that are not relevant to using TKG with a vSphere with Tanzu Supervisor.

For more information, see Find the Right TKG Docs for Your Deployment on the VMware Tanzu Kubernetes Grid Documentation page.

Resolved Issues

Resolved in v2.3.1

Tanzu Kubernetes Grid v2.3.1 resolves undocumented customer issues and bugs.

Resolved in v2.3.0

The following issues that were documented as Known Issues in earlier Tanzu Kubernetes Grid releases are resolved in Tanzu Kubernetes Grid v2.3.

  • IPv6 networking is not supported on vSphere 8

    TKG v2.2 does not support IPv6 networking on vSphere 8, although it supports single-stack IPv6 networking using Kube-Vip on vSphere 7 as described in IPv6 Networking.

  • Cluster autoscaler does not add the expected keys to the MachineDeployment

    If you create a cluster with cluster autoscaler enabled, the expected keys cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size and cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size are not added to metadata.annotations in the MachineDeployment.

  • Worker node rollout variable is missing from ClusterClass configurations

    The WORKER_ROLLOUT_STRATEGY variable, which you use to set the rollout strategy for workload cluster MachineDeployments, was missing from the ClusterClass configurations for all target platforms. You can now set the WORKER_ROLLOUT_STRATEGY variable on class-based clusters as well as legacy plan-based clusters. For more information, see GPU-Enabled Clusters in the Configuration File Variable Reference.

  • Workload cluster node pools on AWS must be in the same availability zone as the standalone management cluster.

    When creating a node pool configured with an az that is different from where the management cluster is located, the new node pool may remain stuck with status ScalingUp, as listed by tanzu cluster node-pool list, and never reach the Ready state.

  • Re-creating standalone management cluster does not restore Pinniped authentication

    After you re-create a standalone management cluster as described in Back Up and Restore Management and Workload Cluster Infrastructure (Technical Preview), users cannot log in to workload clusters via Pinniped authentication.

  • Harbor CVE export may fail when execution ID exceeds 1000000+

    Harbor v2.7.1, which was the version packaged for TKG v2.2, has a known issue in which CVE report exports fail with the error “404 page not found” when the execution primary key auto-increment ID grows to 1000000+.

  • --default-values-file-output option of tanzu package available get outputs an incomplete configuration template file for the Harbor package

    Running tanzu package available get harbor.tanzu.vmware.com/PACKAGE-VERSION --default-values-file-output FILE-PATH creates an incomplete configuration template file for the Harbor package.

  • Autoscaler for class-based clusters requires manual annotations

    Due to a label propagation issue in Cluster API, AUTOSCALER_MIN_SIZE_* and AUTOSCALER_MAX_SIZE_* settings in the cluster configuration file for class-based workload clusters are not set in the cluster’s MachineDeployment objects, and have to be added manually.

  • Class-based cluster names have 25 character limit with NSX ALB as load balancer service or ingress controller

    When NSX Advanced Load Balancer (ALB) is used as a class-based cluster’s load balancer service or ingress controller with a standalone management cluster, its application names include both the cluster name and load-balancer-and-ingress-service, the internal name for the AKO package. When the combined name exceeds the 64-character limit for Avi Controller apps, the tanzu cluster create command may fail with an error that the avi-system namespace was not found.

    Workaround: Limit class-based cluster name length to 25 characters or less when using NSX ALB as a load balancer or ingress controller.

  • Offline Upgrade Cannot Find Kubernetes v1.25.7 Packages, Fails

    During upgrade to TKG v2.2 in an internet-restricted environment using tanzu isolated-cluster commands, procedure fails with error could not resolve TKR/OSImage [...] 'v1.25.7+vmware.2-tkg.1' or similar as described in Knowledge Base article TKG 2.1 upgrade to TKG 2.2 fails when searching for 1.25.7 Tanzu Kubernetes Release on Air Gapped environment.

    Failure occurs because the tanzu isolated-cluster upload-bundle command does not upload two packages needed for TKr v1.25.7.

    Workaround: Manually upload the tkr-vsphere-nonparavirt-v1.25.7_vmware.2-tkg.1 and tkg-vsphere-repository-nonparavir-1.25.7_vmware.2-tkg.1 packages to your Harbor registry as described in the KB article linked above.

Known Issues

The following are known issues in Tanzu Kubernetes Grid v2.3.x. Any known issues that were present in v2.3.0 that have been resolved in a subsequent v2.3.x patch release are listed under the Resolved Issues for the patch release in which they were fixed.

You can find additional solutions to frequently encountered issues in Troubleshooting Management Cluster Issues and Troubleshooting Workload Cluster Issues, or in Broadcom Communities.

Upgrade

  • You cannot upgrade multi-OS clusters

    You cannot use the tanzu cluster upgrade command to upgrade clusters with Windows worker nodes as described in Deploy a Multi-OS Workload Cluster.

  • Before upgrade, you must manually update a changed Avi certificate in the tkg-system package values

    Management cluster upgrade fails if you have rotated an Avi Controller certificate, even if you have updated its value in the management cluster’s secret/avi-controller-ca as described in Modify the Avi Controller Credentials.

    Failure occurs because updating secret/avi-controller-ca does not copy the new value into the management cluster’s tkg-system package values, and TKG uses the certificate value from those package values during upgrade.

    For legacy management clusters created in TKG v1.x, the new value is also not copied into the ako-operator-addon secret.

    Workaround: Before upgrading TKG, check if the Avi certificate in tkg-pkg-tkg-system-values is up-to-date, and patch it if needed:

    1. In the management cluster context, get the certificate from avi-controller-ca:
      kubectl get secret avi-controller-ca -n tkg-system-networking -o jsonpath="{.data.certificateAuthorityData}"
      
    2. In the tkg-pkg-tkg-system-values secret, get and decode the package values string:
      kubectl get secret tkg-pkg-tkg-system-values -n tkg-system -o jsonpath="{.data.tkgpackagevalues\.yaml}" | base64 --decode
      
    3. In the decoded package values, check the value for avi_ca_data_b64 under akoOperatorPackage.akoOperator.config. If it differs from the avi-controller-ca value, update tkg-pkg-tkg-system-values with the new value:

      1. In a copy of the decoded package values string, paste in the new certificate from avi-controller-ca as the avi_ca_data_b64 value under akoOperatorPackage.akoOperator.config.
      2. Run base64 to re-encode the entire package values string.
      3. Patch the tkg-pkg-tkg-system-values secret with the new, encoded string:
        kubectl patch secret/tkg-pkg-tkg-system-values -n tkg-system -p '{"data": {"tkgpackagevalues.yaml": "BASE64-ENCODED STRING"}}'
        
    4. For management clusters created before TKG v2.1, if you updated tkg-pkg-tkg-system-values in the previous step, also update the ako-operator-addon secret:

      1. If needed, run the following to check whether your management cluster was created in TKG v1.x:
        kubectl get secret -n tkg-system ${MANAGEMENT_CLUSTER_NAME}-ako-operator-addon
        

        If the command outputs an ako-operator-addon object, the management cluster was created in v1.x and you need to update its secret as follows.

      2. In the ako-operator-addon secret, get and decode the values string:
        kubectl get secret ${MANAGEMENT_CLUSTER_NAME}-ako-operator-addon -n tkg-system -o jsonpath="{.data.values\.yaml}" | base64 --decode
        
      3. In a copy of the decoded values string, paste in the new certificate from avi-controller-ca as the avi_ca_data_b64 value.
      4. Run base64 to re-encode the entire ako-operator-addon values string.
      5. Patch the ako-operator-addon secret with the new, encoded string:
        kubectl patch secret/${MANAGEMENT_CLUSTER_NAME}-ako-operator-addon -n tkg-system -p '{"data": {"values.yaml": "BASE64-ENCODED STRING"}}'
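
The decode, edit, re-encode, and patch sequence in the steps above can also be run from the shell; a minimal sketch, where updated-values.yaml is an illustrative file name and -w 0 (GNU base64) disables line wrapping:

  kubectl get secret tkg-pkg-tkg-system-values -n tkg-system -o jsonpath="{.data.tkgpackagevalues\.yaml}" | base64 --decode > updated-values.yaml
  # edit updated-values.yaml: set avi_ca_data_b64 under akoOperatorPackage.akoOperator.config
  NEW_VALUES=$(base64 -w 0 < updated-values.yaml)
  kubectl patch secret/tkg-pkg-tkg-system-values -n tkg-system -p '{"data": {"tkgpackagevalues.yaml": "'"$NEW_VALUES"'"}}'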
        

Packages

  • Adding standard repo fails for single-node clusters

    Running tanzu package repository add to add the tanzu-standard repo to a single-node cluster of the type described in Single-Node Clusters on vSphere may fail.

    This happens because single-node clusters boot up with cert-manager as a core add-on, which conflicts with the different cert-manager package in the tanzu-standard repo.

    Workaround: Before adding the tanzu-standard repo, patch the cert-manager package annotations as described in Install cert-manager.

Cluster Operations

  • On AWS and Azure, creating workload cluster with object spec fails with zone/region error

    By default, on AWS or Azure, running tanzu cluster create with a class-based cluster object spec passed to --file causes the Tanzu CLI to perform region and zone verification that is only relevant to vSphere availability zones.

    Workaround: When creating a class-based cluster on AWS or Azure, do either of the following, based on whether you use the one-step or two-step process described in Create a Class-Based Cluster:

    • One-step: Follow the one-step process as described, by setting features.cluster.auto-apply-generated-clusterclass-based-configuration to true and not passing --dry-run to the tanzu cluster create command.

    • Two-step: Before running tanzu cluster create with the object spec as the second step, set SKIP_MULTI_AZ_VERIFY to true in your local environment:

      export SKIP_MULTI_AZ_VERIFY=true
      
  • Components fail to schedule when using clusters with limited capacity

    For management clusters and workload clusters, if you deploy clusters with a single control plane node, a single worker node, or small or medium clusters, you might encounter resource scheduling contention.

    Workaround: Use either single-node clusters or clusters with a total of three or more nodes.

  • Cannot create new workload clusters based on non-current TKr versions with Antrea CNI

    You cannot create a new workload cluster that uses Antrea CNI and runs Kubernetes versions shipped with prior versions of TKG, such as Kubernetes v1.23.10, which was the default Kubernetes version in TKG v1.6.1 as listed in Supported Kubernetes Versions in Tanzu Kubernetes Grid v2.2.

    Workaround: Create a workload cluster that runs Kubernetes 1.26.8, 1.25.13, or 1.24.17. The Kubernetes project recommends that you run components on the most recent patch version of any current minor version.

  • You cannot scale management cluster control plane nodes to an even number

    If you run tanzu cluster scale on a management cluster and pass an even number to the --controlplane-machine-count option, TKG does not scale the control plane nodes, and the CLI does not output an error. To maintain quorum, control plane node counts should always be odd.

    Workaround: Do not scale control plane node counts to an even number.

  • Orphan vSphereMachine objects after cluster upgrade or scale

    Due to a known issue in the cluster-api-provider-vsphere (CAPV) project, standalone management clusters on vSphere may leave orphaned VSphereMachine objects behind after cluster upgrade or scale operations.

    This issue is fixed in newer versions of CAPV, which future patch versions of TKG will incorporate.

    Workaround: To find and delete orphaned CAPV VM objects:

    1. List all VSphereMachine objects and identify the orphaned ones, which do not have any PROVIDERID value:
      kubectl get vspheremachines -A
      
    2. For each orphaned VSphereMachine object:

      1. List the object and retrieve its machine ID:
        kubectl get vspheremachines VSPHEREMACHINE -n NAMESPACE -o yaml
        

        Where VSPHEREMACHINE is the machine NAME and NAMESPACE is its namespace.

      2. Check if the VSphereMachine has an associated Machine object:
        kubectl get machines -n NAMESPACE |  grep MACHINE-ID
        

        Run kubectl delete machine to delete any Machine object associated with the VSphereMachine.

      3. Delete the VSphereMachine object:
        kubectl delete vspheremachines VSPHEREMACHINE -n NAMESPACE
        
      4. From vCenter, check if the VSphereMachine VM still appears; it may be present, but powered off. If so, delete it in vCenter.
      5. If the deletion hangs, patch its finalizer:
        kubectl patch vspheremachines VSPHEREMACHINE -n NAMESPACE -p '{"metadata": {"finalizers": null}}' --type=merge
        

Networking

Note

For v4.0+, VMware NSX-T Data Center is renamed to “VMware NSX.”

  • Some NSX ALB Configuration Variables Do Not Work

    In TKG v2.3, the management cluster configuration variables AVI_DISABLE_INGRESS_CLASS, AVI_DISABLE_STATIC_ROUTE_SYNC, and AVI_INGRESS_DEFAULT_INGRESS_CONTROLLER do not work.

    To set any of their underlying properties to the non-default value true, you need to manually edit the management cluster’s two AKODeploymentConfig configuration objects as described below after the management cluster has been created.

    Workaround: Edit the install-ako-for-all and install-ako-for-management-cluster objects in the management cluster:

    1. Set the kubectl context to the management cluster:

      kubectl config use-context MGMT-CLUSTER-CONTEXT
      
    2. Edit the install-ako-for-all and install-ako-for-management-cluster configurations:

      kubectl edit adc install-ako-for-all
      
      kubectl edit adc install-ako-for-management-cluster
      
    3. In the configurations, set the following properties as desired:

      • extraConfigs.ingress.disableIngressClass - for config var AVI_DISABLE_INGRESS_CLASS
      • extraConfigs.disableStaticRouteSync - for config var AVI_DISABLE_STATIC_ROUTE_SYNC
      • extraConfigs.ingress.defaultIngressController - for config var AVI_INGRESS_DEFAULT_INGRESS_CONTROLLER
    4. Save and exit.

    These settings will apply to workload clusters that the management cluster subsequently creates.

  • NSX ALB NodePortLocal ingress mode is not supported for management cluster

    In TKG v2.3, you cannot run NSX Advanced Load Balancer (ALB) as a service type with ingress mode NodePortLocal for traffic to the management cluster.

    This issue does not affect support for NodePortLocal ingress to workload clusters, as described in L7 Ingress in NodePortLocal Mode.

    Workaround: Configure management clusters with AVI_INGRESS_SERVICE_TYPE set to either NodePort or ClusterIP. Default is NodePort.

  • Management cluster create fails or performance slow with older NSX-T versions and Photon 3 or Ubuntu with Linux kernel 5.8 VMs

    Deploying a management cluster with the following infrastructure and configuration may fail or result in restricted traffic between pods:

    • vSphere with any of the following versions of NSX-T:
      • NSX-T v3.1.3 with Enhanced Datapath enabled
      • NSX-T v3.1.x lower than v3.1.3
      • NSX-T v3.0.x lower than v3.0.2 hot patch
      • NSX-T v2.x. This includes Azure VMware Solution (AVS) v2.0, which uses NSX-T v2.5
    • Base image: Photon 3 or Ubuntu with Linux kernel 5.8

    This combination exposes a checksum issue between older versions of NSX-T and Antrea CNI.

    TMC: If the management cluster is registered with Tanzu Mission Control (TMC), there is no workaround to this issue. Otherwise, see the workarounds below.

    Workarounds:

    • Deploy workload clusters configured with ANTREA_DISABLE_UDP_TUNNEL_OFFLOAD set to "true". This setting deactivates Antrea’s UDP checksum offloading, which avoids the known issues with some underlay network and physical NIC network drivers.
    • Upgrade to NSX-T v3.0.2 Hot Patch, v3.1.3, or later, without Enhanced Datapath enabled
    • Use an Ubuntu base image with Linux kernel 5.9 or later.

Storage

  • Workload cluster cannot distribute storage across multiple datastores

    You cannot enable a workload cluster to distribute storage across multiple datastores as described in Deploy a Cluster that Uses a Datastore Cluster. If you tag multiple datastores in a datastore cluster as the basis for a workload cluster’s storage policy, the workload cluster uses only one of the datastores.

    Workaround: None

Tanzu CLI

  • Non-alphanumeric characters cannot be used in HTTP/HTTPS proxy passwords

    When deploying management clusters with the CLI, you cannot use the non-alphanumeric characters # ` ^ | / ? % ^ { [ ] } \ " < > in passwords. Also, you cannot use any non-alphanumeric characters in HTTP/HTTPS proxy passwords when deploying a management cluster with the UI.

    Workaround: You can use non-alphanumeric characters other than # ` ^ | / ? % ^ { [ ] } \ " < > in passwords when deploying a management cluster with the CLI.

  • Tanzu CLI does not work on macOS machines with ARM processors

    Tanzu CLI v0.11.6 does not work on macOS machines with ARM (Apple M1) chips, as identified under Finder > About This Mac > Overview.

    Workaround: Use a bootstrap machine with a Linux or Windows OS, or a macOS machine with an Intel processor.

  • Tanzu CLI lists tanzu management-cluster osimage

    The management-cluster command group lists tanzu management-cluster osimage. This feature is currently in development and reserved for future use.

    Workaround: Do not use tanzu management-cluster osimage.

vSphere

  • Deploying management cluster to vSphere 7 fails while waiting for the cluster control plane to become available

    If you specify the VM Network when deploying a management cluster to vSphere 7, the deployment fails with the error unable to set up management cluster: unable to wait for cluster control plane available: control plane is not available yet.

    Workaround: The network “VM Network” has multiple configured subnets with static IPs for VsVip and ServiceEngine. Set exclude_discovered_subnets to True on the VM Network to ignore the discovered subnets and allow virtual services to be placed on the service engines.

  • Availability zones can be deleted while VMs are assigned to them

    If you delete an availability zone that contains VMs, the VMs cannot subsequently be deleted.

    Workaround: Remove all VMs from an availability zone before deleting it.

  • Creating workload clusters fails due to VPXD session exhaust

    When creating workload clusters on vSphere, the creation fails with the following error:

    vSphere config validation failed: failed to get VC client: failed to create vc client: Post "https://address/sdk": EOF ". VCenter vpxd.log report error: Out of HTTP sessions: Limited to 2000
    

    This happens due to vCenter Server session exhaust.

    Workaround: See VMware KB 50114010.

  • Node pools created with small nodes may stall at Provisioning

    Node pools created with node SIZE configured as small may become stuck in the Provisioning state and never proceed to Running.

    Workaround: Configure node pool with at least medium size nodes.

Windows

  • Windows workers not supported in internet-restricted environments

    VMware does not support TKG workload clusters with Windows worker nodes in proxied or air-gapped environments.

    Workaround: Please contact your VMware representative. Some TKG users have built Windows custom images and run workload clusters with Windows workers in offline environments, for example as described in this unofficial repo.

Image-Builder

  • Ignorable goss test failures during image-build process

    When you run Kubernetes Image Builder to create a custom Linux machine image, the goss tests python-netifaces, python-requests, and ebtables fail. Command output reports the failures. The errors can be ignored; they do not prevent a successful image build.

AVS

  • vSphere CSI volume deletion may fail on AVS

    On Azure VMware Solution (AVS), deletion of vSphere CSI Persistent Volumes (PVs) may fail. Deleting a PV requires the cns.searchable permission. The default admin account for AVS, cloudadmin@vsphere.local, is not created with this permission. For more information, see vSphere Roles and Privileges.

    Workaround: To delete a vSphere CSI PV on AVS, contact Azure support.

Tanzu Standard Repository v2023.7.13

With TKG v2.3, the Tanzu Standard package repository is versioned and distributed separately from TKG, and its versioning is based on a date stamp.

For TKG v2.3.0 and v2.3.1, the TKG patch version and its latest compatible Tanzu Standard repository version are released around the same date.

Future Tanzu Standard repository versions may publish more frequently than TKG versions, but all patch versions will maintain existing compatibilities between minor versions of TKG and Tanzu Standard.

For each TKG v2.3 patch version, its latest compatible Tanzu Standard repository version is:

Tanzu Standard Repository Package Support

VMware provides the following support for the optional packages that are provided in the VMware Tanzu Standard Repository:

  • VMware provides installation and upgrade validation for the packages that are included in the optional VMware Tanzu Standard Repository when they are deployed on Tanzu Kubernetes Grid. This validation is limited to the installation and upgrade of the package but includes any available updates required to address CVEs. Any bug fixes, feature enhancements, and security fixes are provided in new package versions when they are available in the upstream package project.
  • VMware does not provide Runtime Level Support for the components provided by the Tanzu Standard Repository. The debugging of configuration, performance related issues, or debugging and fixing the package itself is not provided by VMware.
  • VMware offers Runtime Level Support for the VMware Supported Packages Harbor, Contour, and Velero when they are deployed on Tanzu Kubernetes Grid.

Cert-manager v1.11.1

What’s New

Supported Versions

TKG version jetstack_cert-manager version vmware cert-manager package version Kubernetes version compatibility
2.3 v1.11.1 v1.11.1+vmware.1 1.21-1.27

Component Versions

Cert-manager v1.11.1 contains the following component image versions:

  • quay.io/jetstack/cert-manager-cainjector:v1.11.1
  • quay.io/jetstack/cert-manager-controller:v1.11.1
  • quay.io/jetstack/cert-manager-webhook:v1.11.1
  • quay.io/jetstack/cert-manager-acmesolver:v1.11.1

Deprecations

The following cert manager versions are deprecated in TKG v2.3:

  • v1.5.3
  • v1.7.2
  • v1.10.2

Contour v1.24.4

What’s New

  • See the release notes for Contour v1.24.0-4:
  • You can configure Envoy to install as a Deployment with a specified number of replicas rather than as a DaemonSet (default), using data values like the following:
    envoy: 
      workload: 
        type: Deployment 
        replicas: 3
    
  • You can specify resource requests or limits for each container within the Contour and Envoy workloads, using data values like the following:

    contour:
      resources:
        contour:
          requests:
            cpu: 250m
            memory: 128Mi
          limits:
            cpu: 1
            memory: 512Mi
    envoy:
      resources:
        envoy:
          # include requests and limits as desired
        shutdown-manager:
          # include requests and limits as desired
    
  • Configuration values in the data-values file are now validated; specifying an unsupported value in the data values results in an error.

Supported Kubernetes Versions

Contour v1.24.4 is supported on Kubernetes v1.24-v1.26. See the Contour Compatibility Matrix.

Deprecations

  • All versions of Contour prior to v1.24.4 have been removed from Tanzu Standard repository v2023.7.13.
  • Contour package data-values files no longer accept null values. For any configuration field with a value set to null, you should omit the value entirely.
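
For example, where a data-values file previously set a field explicitly to null, delete the field instead; a before/after sketch with an illustrative field name:

  #! No longer accepted:
  contour:
    logLevel: null

  #! Instead, omit the field so the package default applies:
  contour: {}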

External-csi-snapshot-webhook v6.1.0

What’s New

  • external-csi-snapshot-webhook is a new package for TKG v2.3
  • TKG v2.3 with a vSphere 8.0U2 Supervisor supports CSI snapshots for Supervisor and the workload clusters it deploys, but you first need to explicitly install external-csi-snapshot-webhook v6.1.0 using the Tanzu CLI.
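
A sketch of the installation with the Tanzu CLI follows; the package name, version string, and namespace are assumptions, and the flag may be --package-name rather than --package with older, non-kctrl versions of the package plugin, so confirm the exact values with tanzu package available list first:

  tanzu package available list external-csi-snapshot-webhook.tanzu.vmware.com -n tkg-system
  tanzu package install external-csi-snapshot-webhook \
    --package external-csi-snapshot-webhook.tanzu.vmware.com \
    --version 6.1.0+vmware.1-tkg.1 \
    --namespace tkg-system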

Supported Versions

TKG version external-csi-snapshot-webhook version Expected Kubernetes version compatibility Tested on Kubernetes version
2.3.0 6.1.0 1.18 - latest 1.24

Dependencies

  • external-csi-snapshot-webhook requires cert-manager, for secure X509 communication with the Kubernetes API server.

Component Versions

external-csi-snapshot-webhook contains the following component image version:

  • registry.k8s.io/sig-storage/snapshot-validation-webhook:v6.1.0

External DNS v0.13.4

What’s New

  • See the External DNS v0.13.4 release notes
  • New createNamespace configuration field. Set to true to create the namespace that external-dns components are installed in. If set to false, package components install into an existing namespace.
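
A data-values excerpt; the top-level placement of the field is an assumption based on the description above:

  #! external-dns data values (illustrative)
  createNamespace: true   #! create the namespace that external-dns components install into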

Fluent-bit v2.1.2

What’s New

Limitations / Known Issues

  • Does not support AWS credential environment variables in the fluent-bit package ConfigMap for accessing AWS S3.
    • AWS credentials support is planned for a future release.

Fluxcd controllers

What’s New

See the following Fluxcd controller package release notes:

Grafana v9.5.1

What’s New

Component Versions

Grafana v9.5.1 contains the following component image versions:

  • grafana/grafana:9.5.1
  • kiwigrid/k8s-sidecar:1.22.0

Harbor v2.8.2

What’s New

  • See Harbor v2.8.2 release notes
  • Due to CVEs, TKG v2.3 compatibility with the following Harbor versions has been removed:
    • v2.2.3_vmware.1-tkg.1
    • v2.2.3_vmware.1-tkg.2
    • v2.3.3_vmware.1-tkg.1
    • v2.5.3_vmware.1-tkg.1
    • v2.7.1_vmware.1-tkg.1

Component Versions

Harbor v2.8.2 contains the following component image versions:

  • harbor-core:v2.8.2
  • harbor-db:v2.8.2
  • harbor-exporter:v2.8.2
  • harbor-jobservice:v2.8.2
  • harbor-portal:v2.8.2
  • harbor-registryctl:v2.8.2
  • registry-photon:v2.8.2
  • notary-server-photon:v2.8.2
  • notary-signer-photon:v2.8.2
  • redis-photon:v2.8.2
  • trivy-adapter-photon:v2.8.2

Multus-CNI v4.0.1

What’s New

  • Introduces a thick-plugin deployment and architecture. See Multus-CNI v4.0.1 new feature
  • Changes default values as follows:

    namespace: kube-system 
    #! DaemonSet related configuration 
    daemonset: 
      resources: 
        limits: 
          cpu: 100m 
          memory: 50Mi 
        requests: 
          cpu: 100m 
          memory: 50Mi 
    configmap: 
      cniVersion: 0.3.1 
      multusConfigFile: auto 
    

Prometheus v2.43.0

What’s New

Component Versions

Prometheus v2.43.0 contains the following component image versions:

  • prom/prometheus:v2.43.0
  • prom/alertmanager:v0.25.0
  • prom/pushgateway:v1.5.1
  • jimmidyson/configmap-reload:v0.8.0
  • bitnami/kube-state-metrics:2.8.2
  • quay.io/prometheus/node-exporter:v1.5.0

Velero v1.10.0

What’s New

  • See Velero v1.10.0 release notes
  • Kopia replaces Restic as the uploader. This results in the following breaking changes. For details, see Breaking changes in the Velero v1.10 changelog:
    • restic daemonset renamed to node-agent
    • ResticRepository CR renamed to BackupRepository
    • velero restic repo command renamed to velero repo
    • velero-restic-credentials secret renamed to velero-repo-credentials
    • default-volumes-to-restic parameter renamed to default-volumes-to-fs-backup
    • restic-timeout parameter renamed to fs-backup-timeout
    • default-restic-prune-frequency parameter renamed to default-repo-maintain-frequency
  • Unifies the backup repository and decouples it from data movers as described in Unified Repository & Kopia Integration Design.
  • Refactors filesystem backup by adding a Kopia path alongside the existing Restic path, supporting both through an uploader-type configuration parameter.
  • Moves the BackupItemAction, RestoreItemAction, and VolumeSnapshotterAction plugins to version v1 to allow future plugin changes that may not support backward compatibility, such as complex data movement tasks. See Plugin Versioning.
  • Adds options to save credentials for specific volume snapshot locations; see Backup Storage Locations and Volume Snapshot Locations.
  • Enhances CSI snapshot robustness with protection codes for error handling and ability to skip exclusion checks so that CSI snapshot works with various backup resource filters.
  • Supports backup schedule pause/unpause:
    • Pass the --paused flag to velero schedule create to create a paused schedule.
    • velero schedule pause and velero schedule unpause pause and unpause an existing schedule.
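
A usage sketch for the schedule pause/unpause support; the schedule name and cron expression are illustrative:

  velero schedule create nightly-backup --schedule "0 3 * * *" --paused   # created in a paused state
  velero schedule unpause nightly-backup                                  # start running the schedule
  velero schedule pause nightly-backup                                    # pause it again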

Supported Versions

TKG version Velero version Expected Kubernetes version compatibility Tested on Kubernetes version
2.3(Halifax) 1.10 1.18-latest 1.22.5, 1.23.8, 1.24.6 and 1.25.1

Component Versions

Velero v1.10.3 contains the following component image versions:

  • velero/velero:v1.10.3
  • velero/velero-plugin-for-aws:v1.6.2
  • velero/velero-plugin-for-csi:v0.4.3
  • velero/velero-plugin-for-microsoft-azure:v1.6.2
  • vsphereveleroplugin/velero-plugin-for-vsphere:v1.5.1

To fix CVEs, Velero runtime and dependency versions are updated as follows:

  • Go runtime v1.18.8
  • Compiled Restic v0.13.1 with Go 1.18.8 instead of packaging the official Restic binary
  • Updated versions of core dependent libraries

Upgrades

Previous upgrade procedures do not work due to filesystem backup changes. For the new upgrade steps, see Upgrading to Velero 1.10.

Limitations / Known Issues

  • Kopia backup does not support self-signed certificate for S3 compatible storage. To track this issue, see Velero issue #5123 and Kopia issue #1443.
  • Velero v1.10 does not work with the latest version of the vSphere plugin for Velero, as described in vSphere Plugin for Velero issue #485. Until this issue is fixed, do not upgrade the vSphere plugin for use with Velero v1.10.

Whereabouts v0.6.1

What’s New

  • Supports ipRanges for configuring dual-stack IP assigning; see Example IPv6 Config in the Whereabouts repo README.