
This topic contains release notes for Tanzu Kubernetes Grid Integrated Edition (TKGI) v1.15.



TKGI v1.15.8

Release Date: October 24, 2023


Product Snapshot

Release Details
Version v1.15.8
Release date October 24, 2023
Component Version
Antrea v1.5.0 Release Notes
cAdvisor v0.39.1
Containerd Linux: v1.6.18
Windows: v1.6.18
CoreDNS v1.8.6+vmware.26*
CSI Driver for vSphere v2.7.2 Release Notes
etcd v3.5.6
Harbor v2.7.3* Release Notes
Kubernetes v1.24.17* Release Notes
Metrics Server v0.6.4
NCP v4.0.1.3 Release Notes
Percona XtraDB Cluster (PXC)
(in BOSH pxc-release)
v5.7.38-31.59
pxc-release: v0.44.0
Release Notes:
PXC
pxc-release
UAA v74.5.89*
Velero v1.8.1 Release Notes
VMware Cloud Foundation (VCF) v4.5**, v4.4**, and v4.3.1 Release Notes:
v4.5, v4.4, v4.3.1
Wavefront Wavefront Collector: v1.11.0
Wavefront Proxy: v11.3
Compatibilities Versions
Ops Manager See Retrieve Product Version Compatibilities from the Tanzu API in the Broadcom Support KB.
VMware NSX See VMware Product Interoperability Matrices***.
vSphere
Windows stemcells v2019.65* or later
Xenial stemcells See Retrieve Product Version Compatibilities from the Tanzu API in the Broadcom Support KB.

* Components marked with an asterisk have been updated.
** VCF v4.5 and v4.4 are supported but have not been tested with TKGI v1.15.8.
*** NSX Management Plane API to NSX Policy API Migration requires VMware NSX v4.0.1.1 or later.


Upgrade Path

The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition v1.15.8 are from TKGI v1.15.7 and earlier TKGI v1.15 patches, and TKGI v1.14.7 and earlier TKGI v1.14 patches.


Features and Enhancements

This release of Tanzu Kubernetes Grid Integrated Edition does not include any new features or enhancements.


Resolved Issues

TKGI v1.15.8 resolves the following issues:


Known Issues

Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition v1.15.7 are also in Tanzu Kubernetes Grid Integrated Edition v1.15.8. See the TKGI v1.15.7 Known Issues below.



TKGI Management Console v1.15.8

Release Date: October 24, 2023

Note: Tanzu Kubernetes Grid Integrated Edition Management Console provides an opinionated installation of TKGI. The supported versions might differ from or be more limited than what is generally supported by TKGI.


Product Snapshot

Element Details
Version v1.15.8
Release date October 24, 2023
Installed TKGI version v1.15.8
Installed Ops Manager version v2.10.62* Release Notes
Component Version
Installed Kubernetes version v1.24.17* Release Notes
Installed Harbor Registry version v2.7.3* Release Notes
Linux stemcell v621.699*

* Components marked with an asterisk have been updated.


Upgrade Path

The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition Management Console v1.15.8 are from TKGI MC v1.15.7 and earlier TKGI MC v1.15 patches, and TKGI MC v1.14.7 and earlier TKGI MC v1.14 patches.


Features and Resolved Issues

This release of the Tanzu Kubernetes Grid Integrated Edition Management Console does not include any new features or resolved issues.


Known Issues

Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition Management Console v1.15.7 are also in Tanzu Kubernetes Grid Integrated Edition Management Console v1.15.8. See the TKGI Management Console v1.15.7 Known Issues below.




TKGI v1.15.7

Release Date: August 29, 2023


Product Snapshot

Release Details
Version v1.15.7
Release date August 29, 2023
Component Version
Antrea v1.5.0 Release Notes
cAdvisor v0.39.1
Containerd Linux: v1.6.18
Windows: v1.6.18
CoreDNS v1.8.6+vmware.24*
CSI Driver for vSphere v2.7.2 Release Notes
etcd v3.5.6
Harbor v2.7.2 Release Notes
Kubernetes v1.24.16* Release Notes
Metrics Server v0.6.4*
NCP v4.0.1.3* Release Notes
Percona XtraDB Cluster (PXC)
(in BOSH pxc-release)
v5.7.38-31.59
pxc-release: v0.44.0
Release Notes:
PXC
pxc-release
UAA v74.5.85*
Velero v1.8.1 Release Notes
VMware Cloud Foundation (VCF) v4.5**, v4.4**, and v4.3.1 Release Notes:
v4.5, v4.4, v4.3.1
Wavefront Wavefront Collector: v1.11.0
Wavefront Proxy: v11.3
Compatibilities Versions
Ops Manager See Broadcom Support.
VMware NSX See VMware Product Interoperability Matrices***.
vSphere
Windows stemcells v2019.61 or later
Xenial stemcells See Retrieve Product Version Compatibilities from the Tanzu API in the Broadcom Support KB.

* Components marked with an asterisk have been updated.
** VCF v4.5 and v4.4 are supported but have not been tested with TKGI v1.15.7.
*** NSX Management Plane API to NSX Policy API Migration requires VMware NSX v4.0.1.1 or later.


Upgrade Path

The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition v1.15.7 are from TKGI v1.15.6 and earlier TKGI v1.15 patches, and TKGI v1.14.7 and earlier TKGI v1.14 patches.


Features and Enhancements

This release of Tanzu Kubernetes Grid Integrated Edition does not include any new features.


Resolved Issues

TKGI v1.15.7 resolves the following issues:


Known Issues

Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition v1.15.6 are also in Tanzu Kubernetes Grid Integrated Edition v1.15.7. See the TKGI v1.15.6 Known Issues below.



TKGI Management Console v1.15.7

Release Date: August 29, 2023

Note: Tanzu Kubernetes Grid Integrated Edition Management Console provides an opinionated installation of TKGI. The supported versions might differ from or be more limited than what is generally supported by TKGI.


Product Snapshot

Element Details
Version v1.15.7
Release date August 29, 2023
Installed TKGI version v1.15.7
Installed Ops Manager version v2.10.60* Release Notes
Component Version
Installed Kubernetes version v1.24.16* Release Notes
Installed Harbor Registry version v2.7.2 Release Notes
Linux stemcell v621.644*

* Components marked with an asterisk have been updated.


Upgrade Path

The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition Management Console v1.15.7 are from TKGI MC v1.15.6 and earlier TKGI MC v1.15 patches, and TKGI MC v1.14.7 and earlier TKGI MC v1.14 patches.


Features and Resolved Issues

TKGI Management Console v1.15.7 resolves the following issues:


Known Issues

Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition Management Console v1.15.6 are also in Tanzu Kubernetes Grid Integrated Edition Management Console v1.15.7. See the TKGI Management Console v1.15.6 Known Issues below.




TKGI v1.15.6

Release Date: July 18, 2023


Product Snapshot

Release Details
Version v1.15.6
Release date July 18, 2023
Component Version
Antrea v1.5.0 Release Notes
cAdvisor v0.39.1
Containerd Linux: v1.6.18
Windows: v1.6.18
CoreDNS v1.8.6+vmware.22
CSI Driver for vSphere v2.7.2* Release Notes
etcd v3.5.6
Harbor v2.7.2 Release Notes
Kubernetes v1.24.14 Release Notes
Metrics Server v0.6.1
NCP v4.0.1.2 Release Notes
Percona XtraDB Cluster (PXC)
(in BOSH pxc-release)
v5.7.38-31.59
pxc-release: v0.44.0
Release Notes:
PXC
pxc-release
UAA v74.5.79*
Velero v1.8.1 Release Notes
VMware Cloud Foundation (VCF) v4.5**, v4.4**, and v4.3.1 Release Notes:
v4.5, v4.4, v4.3.1
Wavefront Wavefront Collector: v1.11.0
Wavefront Proxy: v11.3
Compatibilities Versions
Ops Manager See Broadcom Support.
VMware NSX See VMware Product Interoperability Matrices***.
vSphere
Windows stemcells v2019.61 or later
Xenial stemcells See Retrieve Product Version Compatibilities from the Tanzu API in the Broadcom Support KB.

* Components marked with an asterisk have been updated.
** VCF v4.5 and v4.4 are supported but have not been tested with TKGI v1.15.6.
*** NSX Management Plane API to NSX Policy API Migration requires VMware NSX v4.0.1.1 or later.


Upgrade Path

The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition v1.15.6 are from TKGI v1.15.5 and earlier TKGI v1.15 patches, and TKGI v1.14.7 and earlier TKGI v1.14 patches.


Features and Enhancements

TKGI v1.15.6 includes the following enhancements:


Compute Profile Enhancements

TKGI v1.15.6 includes the following compute profile enhancements:

  • Supports configuring compute profiles with a node pool description; a configuration sketch appears after this list. For more information, see node_pools Block in Creating and Managing Compute Profiles with the CLI (vSphere).
  • Supports optionally skipping compute profile validation. For more information, see Assign a Compute Profile in Using Compute Profiles (vSphere).
  • Prevents updating an existing compute profile with unsupported configuration changes. Prevents: Changing the number of control plane nodes, changing the node pool name property, and adding a new node pool while deleting an existing node pool.
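
To picture the node pool description enhancement, the following is a minimal sketch of a compute profile and the CLI call that creates it. The profile name, pool name, and instance count are placeholders; verify the schema against node_pools Block in Creating and Managing Compute Profiles with the CLI (vSphere).

# A minimal sketch, assuming the compute profile schema documented in
# "Creating and Managing Compute Profiles with the CLI (vSphere)";
# profile name, pool name, and instance count are placeholders.
cat > compute-profile.json <<'EOF'
{
  "name": "medium-workers",
  "description": "Medium worker pools for team clusters",
  "parameters": {
    "cluster_customization": {
      "node_pools": [
        {
          "name": "pool-a",
          "description": "General-purpose workloads",
          "instances": 3
        }
      ]
    }
  }
}
EOF
tkgi create-compute-profile compute-profile.json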


Resolved Issues

TKGI v1.15.6 resolves the following issues:


Known Issues

Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition v1.15.5 are also in Tanzu Kubernetes Grid Integrated Edition v1.15.6. See the TKGI v1.15.5 Known Issues below.



TKGI Management Console v1.15.6

Release Date: July 18, 2023

Note: Tanzu Kubernetes Grid Integrated Edition Management Console provides an opinionated installation of TKGI. The supported versions might differ from or be more limited than what is generally supported by TKGI.


Product Snapshot

Element Details
Version v1.15.6
Release date July 18, 2023
Installed TKGI version v1.15.6
Installed Ops Manager version v2.10.59* Release Notes
Component Version
Installed Kubernetes version v1.24.14 Release Notes
Installed Harbor Registry version v2.7.2 Release Notes
Linux stemcell v621.584*

* Components marked with an asterisk have been updated.


Upgrade Path

The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition Management Console v1.15.6 are from TKGI MC v1.15.5 and earlier TKGI MC v1.15 patches, and TKGI MC v1.14.7 and earlier TKGI MC v1.14 patches.


Features and Resolved Issues

TKGI Management Console v1.15.6 resolves the following issues:


Known Issues

Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition Management Console v1.15.5 are also in Tanzu Kubernetes Grid Integrated Edition Management Console v1.15.6. See the TKGI Management Console v1.15.5 Known Issues below.




TKGI v1.15.5

Release Date: June 6, 2023


Product Snapshot

Release Details
Version v1.15.5
Release date June 6, 2023
Component Version
Antrea v1.5.0 Release Notes
cAdvisor v0.39.1
Containerd Linux: v1.6.18
Windows: v1.6.18
CoreDNS v1.8.6+vmware.22*
CSI Driver for vSphere v2.7.0 Release Notes
etcd v3.5.6
Harbor v2.7.2* Release Notes
Kubernetes v1.24.14* Release Notes
Metrics Server v0.6.1
NCP v4.0.1.2 Release Notes
Percona XtraDB Cluster (PXC)
(in BOSH pxc-release)
v5.7.38-31.59
pxc-release: v0.44.0
Release Notes:
PXC
pxc-release
UAA v74.5.74*
Velero v1.8.1 Release Notes
VMware Cloud Foundation (VCF) v4.5**, v4.4**, and v4.3.1 Release Notes:
v4.5, v4.4, v4.3.1
Wavefront Wavefront Collector: v1.11.0
Wavefront Proxy: v11.3
Compatibilities Versions
Ops Manager See Broadcom Support.
VMware NSX See VMware Product Interoperability Matrices***.
vSphere
Windows stemcells v2019.61 or later*
Xenial stemcells See Retrieve Product Version Compatibilities from the Tanzu API in the Broadcom Support KB.

* Components marked with an asterisk have been updated.
** VCF v4.5 and v4.4 are supported but have not been tested with TKGI v1.15.5.
*** NSX Management Plane API to NSX Policy API Migration requires VMware NSX v4.0.1.1 or later.


Upgrade Path

The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition v1.15.5 are from TKGI v1.15.4 and earlier TKGI v1.15 patches, and TKGI v1.14.6 and earlier TKGI v1.14 patches.


Features and Enhancements

TKGI v1.15.5 includes the following enhancements:


Resolved Issues

TKGI v1.15.5 resolves the following issues:


Known Issues

Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition v1.15.4 are also in Tanzu Kubernetes Grid Integrated Edition v1.15.5. See the TKGI v1.15.4 Known Issues below.



TKGI Management Console v1.15.5

Release Date: June 6, 2023

Note: Tanzu Kubernetes Grid Integrated Edition Management Console provides an opinionated installation of TKGI. The supported versions might differ from or be more limited than what is generally supported by TKGI.


Product Snapshot

Element Details
Version v1.15.5
Release date June 6, 2023
Installed TKGI version v1.15.5
Installed Ops Manager version v2.10.58* Release Notes
Component Version
Installed Kubernetes version v1.24.14* Release Notes
Installed Harbor Registry version v2.7.2* Release Notes
Linux stemcell v621.543*

* Components marked with an asterisk have been updated.


Upgrade Path

The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition Management Console v1.15.5 are from TKGI MC v1.15.4 and earlier TKGI MC v1.15 patches, and TKGI MC v1.14.6 and earlier TKGI MC v1.14 patches.


Features and Resolved Issues

This release of the Tanzu Kubernetes Grid Integrated Edition Management Console does not include any new features or resolved issues.


Known Issues

Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition Management Console v1.15.4 are also in Tanzu Kubernetes Grid Integrated Edition Management Console v1.15.5. See the TKGI Management Console v1.15.4 Known Issues below.




TKGI v1.15.4

Release Date: April 27, 2023


Product Snapshot

Release Details
Version v1.15.4
Release date April 27, 2023
Component Version
Antrea v1.5.0 Release Notes
cAdvisor v0.39.1
Containerd Linux: v1.6.18*
Windows: v1.6.18*
CoreDNS v1.8.6+vmware.20*
CSI Driver for vSphere v2.7.0* Release Notes
etcd v3.5.6
Harbor v2.7.1* Release Notes
Kubernetes v1.24.12* Release Notes
Metrics Server v0.6.1
NCP v4.0.1.2 Release Notes
Percona XtraDB Cluster (PXC)
(in BOSH pxc-release)
v5.7.38-31.59
pxc-release: v0.44.0
Release Notes:
PXC
pxc-release
UAA v74.5.69*
Velero v1.8.1 Release Notes
VMware Cloud Foundation (VCF) v4.5**, v4.4**, and v4.3.1 Release Notes:
v4.5, v4.4, v4.3.1
Wavefront Wavefront Collector: v1.11.0
Wavefront Proxy: v11.3
Compatibilities Versions
Ops Manager See Broadcom Support.
VMware NSX See VMware Product Interoperability Matrices***.
vSphere
Windows stemcells v2019.58 or later*
Xenial stemcells See Retrieve Product Version Compatibilities from the Tanzu API in the Broadcom Support KB.

* Components marked with an asterisk have been updated.
** VCF v4.5 and v4.4 are supported but have not been tested with TKGI v1.15.4.
*** NSX Management Plane API to NSX Policy API Migration requires VMware NSX v4.0.1.1 or later.


Upgrade Path

The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition v1.15.4 are from TKGI v1.15.3 and earlier TKGI v1.15 patches, and TKGI v1.14.6 and earlier TKGI v1.14 patches.


Features and Enhancements

TKGI v1.15.4 includes the following enhancements:

vSphere CSI Driver Enhancements

TKGI v1.15.4 includes the following vSphere CSI Driver enhancements:

Network Profiles Enhancements

Supports configuring additional NCP Network Profiles parameters:

  • nsx_v3.cookie_name, nsx_v3.members_per_medium_lbs, nsx_v3.members_per_small_lbs, nsx_v3.natfirewallmatch, nsx_v3.ncp_enforced_pool_member_limit, nsx_v3.relax_scale_validation

For more information, see cni_configurations Extensions Parameters in Creating and Managing Network Profiles.
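
As a rough illustration of where these parameters sit, the sketch below creates a network profile that sets two of them. The profile name and values are placeholders, and the exact nesting of the extensions block should be verified against the cni_configurations documentation cited above.

# A sketch only: assumes the cni_configurations extensions layout from
# "Creating and Managing Network Profiles"; name and values are placeholders.
cat > network-profile.json <<'EOF'
{
  "name": "lb-tuning",
  "description": "Tune NCP load balancer behavior",
  "parameters": {
    "cni_configurations": {
      "type": "nsxt",
      "parameters": {
        "extensions": {
          "ncp": {
            "nsx_v3": {
              "members_per_small_lbs": 20,
              "relax_scale_validation": true
            }
          }
        }
      }
    }
  }
}
EOF
tkgi create-network-profile network-profile.json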


TKGI v1.15.4 also includes the following minor enhancements:

  • Increases the length of the insecure_registries column to 4K characters.
  • Supports configuring the Fluent Bit memory limit.
  • Decreases the Fluentd default refresh interval to 30 seconds from 60 seconds. This ensures that all the logs are forwarded to VMware vRealize Log Insight consistently.


Resolved Issues

TKGI v1.15.4 resolves the following issues:

  • Fixes NullPointerException error when creating a Compute Profile configured with instances: 0 and the max_worker_instances parameter.


Known Issues

Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition v1.15.3 are also in Tanzu Kubernetes Grid Integrated Edition v1.15.4. See the TKGI v1.15.3 Known Issues below.

TKGI v1.15.4 also has the following known issues:


During Delete Cluster the PKS-API Deprovision Remains in In Progress State

Occasionally, a tkgi delete-cluster operation does not exit the drain cluster task, and the PKS-API deprovision remains in the In Progress state. BOSH logging prevents the StatefulSet PersistentVolume from detaching while the node is being drained.

Workaround

To resume the BOSH task:

  1. In vCenter, manually detach the cluster’s PersistentVolumeClaim.
  2. Stop the worker node’s pre-stop process.




TKGI Management Console v1.15.4

Release Date: April 27, 2023

Note: Tanzu Kubernetes Grid Integrated Edition Management Console provides an opinionated installation of TKGI. The supported versions might differ from or be more limited than what is generally supported by TKGI.


Product Snapshot

Element Details
Version v1.15.4
Release date April 27, 2023
Installed TKGI version v1.15.4
Installed Ops Manager version v2.10.56* Release Notes
Component Version
Installed Kubernetes version v1.24.12* Release Notes
Installed Harbor Registry version v2.7.1* Release Notes
Linux stemcell v621.488*

* Components marked with an asterisk have been updated.


Upgrade Path

The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition Management Console v1.15.4 are from TKGI MC v1.15.3 and earlier TKGI MC v1.15 patches, and TKGI MC v1.14.6 and earlier TKGI MC v1.14 patches.


Features

This release of the Tanzu Kubernetes Grid Integrated Edition Management Console includes the following enhancements:

  • Prevents deploying multiple TKGI instances on a vCenter Server. By using the TKGI MC, you can now deploy only one instance of TKGI on a vCenter Server.


Resolved Issues

This release of the Tanzu Kubernetes Grid Integrated Edition Management Console resolves the following issues:


Known Issues

Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition Management Console v1.15.3 are also in Tanzu Kubernetes Grid Integrated Edition Management Console v1.15.4. See the TKGI Management Console v1.15.3 Known Issues below.




TKGI v1.15.3

Release Date: March 16, 2023

Product Snapshot

Release Details
Version v1.15.3
Release date March 16, 2023
Component Version
Antrea v1.5.0 Release Notes
cAdvisor v0.39.1
Containerd Linux: v1.6.6
Windows: v1.6.6
CoreDNS v1.8.6+vmware.17*
CSI Driver for vSphere v2.6.2 Release Notes
etcd v3.5.6
Harbor v2.7.0* Release Notes
Kubernetes v1.24.10* Release Notes
Metrics Server v0.6.1
NCP v4.0.1.2* Release Notes
Percona XtraDB Cluster (PXC)
(in BOSH pxc-release)
v5.7.38-31.59
pxc-release: v0.44.0
Release Notes:
PXC
pxc-release
UAA v74.5.64*
Velero v1.8.1 Release Notes
VMware Cloud Foundation (VCF) v4.5**, v4.4**, and v4.3.1 Release Notes:
v4.5, v4.4, v4.3.1
Wavefront Wavefront Collector: v1.11.0
Wavefront Proxy: v11.3
Compatibilities Versions
Ops Manager See Broadcom Support.
VMware NSX See VMware Product Interoperability Matrices***.
vSphere
Windows stemcells v2019.55 or later
Xenial stemcells See Retrieve Product Version Compatibilities from the Tanzu API in the Broadcom Support KB.

* Components marked with an asterisk have been updated.
** VCF v4.5 and v4.4 are supported but have not been tested with TKGI v1.15.3.
*** NSX Management Plane API to NSX Policy API Migration requires VMware NSX v4.0.1.1 or later.


Upgrade Path

The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition v1.15.3 are from TKGI v1.15.2 and earlier TKGI v1.15 patches, and TKGI v1.14.4 and earlier TKGI v1.14 patches.


Features and Enhancements

This release of Tanzu Kubernetes Grid Integrated Edition includes no new features or enhancements.


Resolved Issues

TKGI v1.15.3 resolves the following issues:


Known Issues

Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition v1.15.2 are also in Tanzu Kubernetes Grid Integrated Edition v1.15.3. See the TKGI v1.15.2 Known Issues below.




TKGI Management Console v1.15.3

Release Date: March 16, 2023

Note: Tanzu Kubernetes Grid Integrated Edition Management Console provides an opinionated installation of TKGI. The supported versions might differ from or be more limited than what is generally supported by TKGI.


Product Snapshot

Element Details
Version v1.15.3
Release date March 16, 2023
Installed TKGI version v1.15.3
Installed Ops Manager version v2.10.53 Release Notes
Component Version
Installed Kubernetes version v1.24.10* Release Notes
Installed Harbor Registry version v2.7.0* Release Notes
Linux stemcell v621.448*

* Components marked with an asterisk have been updated.


Upgrade Path

The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition Management Console v1.15.3 are from TKGI MC v1.15.2 and earlier TKGI MC v1.15 patches, and TKGI MC v1.14.4 and earlier TKGI MC v1.14 patches.


Features and Resolved Issues

This release of the Tanzu Kubernetes Grid Integrated Edition Management Console includes no new features or resolved issues.


Known Issues

Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition Management Console v1.15.2 are also in Tanzu Kubernetes Grid Integrated Edition Management Console v1.15.3. See the TKGI Management Console v1.15.2 Known Issues below.




TKGI v1.15.2

Release Date: January 17, 2023


Product Snapshot

Release Details
Version v1.15.2
Release date January 17, 2023
Component Version
Antrea v1.5.0 Release Notes
cAdvisor v0.39.1
Containerd Linux: v1.6.6
Windows: v1.6.6
CoreDNS v1.8.6+vmware.15*
CSI Driver for vSphere v2.6.2 Release Notes
etcd v3.5.6*
Harbor v2.6.2* Release Notes
Kubernetes v1.24.9* Release Notes
Metrics Server v0.6.1
NCP v4.0.1.1* Release Notes
Percona XtraDB Cluster (PXC)
(in BOSH pxc-release)
v5.7.38-31.59
pxc-release: v0.44.0
Release Notes:
PXC
pxc-release
UAA v74.5.62*
Velero v1.8.1 Release Notes
VMware Cloud Foundation (VCF) v4.5**, v4.4**, and v4.3.1 Release Notes:
v4.5, v4.4, v4.3.1
Wavefront Wavefront Collector: v1.11.0
Wavefront Proxy: v11.3
Compatibilities Versions
Ops Manager See Broadcom Support.
VMware NSX See VMware Product Interoperability Matrices***.
vSphere
Windows stemcells v2019.55 or later
Xenial stemcells See Retrieve Product Version Compatibilities from the Tanzu API in the Broadcom Support KB.

* Components marked with an asterisk have been updated.
** VCF v4.5 and v4.4 are supported but have not been tested with TKGI v1.15.2.
*** NSX Management Plane API to NSX Policy API Migration requires VMware NSX v4.0.1.1 or later.


Upgrade Path

The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition v1.15.2 are from TKGI v1.15.1 and earlier TKGI v1.15 patches, and TKGI v1.14.4 and earlier TKGI v1.14 patches.


Features and Enhancements

TKGI v1.15.2 includes the following features and enhancements:

  • Supports running the vROPs cAdvisor DaemonSet without Privileged permission; cAdvisor no longer requires Privileged permission to run.
  • Enables the VMware vSphere Container Storage Plug-in's ability to suspend a specific datastore for volume provisioning using the Cloud Native Storage Manager. For more information, see VMware vSphere Container Storage Plug-in 2.5 Release Notes.
  • Component bumps:
    • Upgrades the fluent-plugin-vmware-loginsight Fluentd output plugin to v1.3.1. fluent-plugin-vmware-loginsight forwards logs to VMware Log Insight.


Resolved Issues

TKGI v1.15.2 resolves the following issues:


Known Issues

Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition v1.15.1 are also in Tanzu Kubernetes Grid Integrated Edition v1.15.2. See the TKGI v1.15.1 Known Issues below.

TKGI v1.15.2 also has the following known issues:


Updating and Upgrading a Cluster Can Timeout While Stopping containerd

This issue is fixed in TKGI v1.15.3.

Updating and upgrading a TKGI v1.15.2 cluster can fail while stopping Monitored Services because the job has timed out while stopping the containerd service.

Symptom

Updating or upgrading a TKGI v1.15.2 cluster fails while stopping Monitored Services, logging errors similar to the following:

Updating instance worker: worker/... (0) (canary)
L executing pre-stop: worker/... (0) (canary)
L executing drain: worker/... (0) (canary)
L stopping jobs: worker/... (0) (canary) 
                    L Error: Action Failed get_task: Task ... result: Stopping Monitored Services: Stopping services '[containerd]' errored
Error: Action Failed get_task: Task ... result: Stopping Monitored Services: Stopping services '[containerd]' erroredTask ... 
Finished 

Explanation

The containerd job stops existing containerd containers while updating or upgrading a cluster. If the containerd job takes longer than 180 seconds to complete, the stop operation times out.

Workaround

Restart your cluster update or upgrade process. If needed, restart the update or upgrade process several times to modify all of the workers in your cluster.




TKGI Management Console v1.15.2

Release Date: January 17, 2023

Note: Tanzu Kubernetes Grid Integrated Edition Management Console provides an opinionated installation of TKGI. The supported versions might differ from or be more limited than what is generally supported by TKGI.


Product Snapshot

Element Details
Version v1.15.2
Release date January 17, 2023
Installed TKGI version v1.15.2
Installed Ops Manager version v2.10.52* Release Notes
Component Version
Installed Kubernetes version v1.24.9* Release Notes
Installed Harbor Registry version v2.6.2* Release Notes
Linux stemcell v621.376*

* Components marked with an asterisk have been updated.


Upgrade Path

The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition Management Console v1.15.2 are from TKGI MC v1.15.1 and earlier TKGI MC v1.15 patches, and TKGI MC v1.14.4 and earlier TKGI MC v1.14 patches.


Features and Resolved Issues

This release of the Tanzu Kubernetes Grid Integrated Edition Management Console includes no new features or resolved issues.


Known Issues

Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition Management Console v1.15.1 are also in Tanzu Kubernetes Grid Integrated Edition Management Console v1.15.2. See the TKGI Management Console v1.15.1 Known Issues below.




TKGI v1.15.1

Release Date: November 22, 2022


Product Snapshot

Release Details
Version v1.15.1
Release date November 22, 2022
Component Version
Antrea v1.5.0 Release Notes
cAdvisor v0.39.1
Containerd Linux: v1.6.6
Windows: v1.6.6
CoreDNS v1.8.6+vmware.13*
CSI Driver for vSphere v2.6.2* Release Notes
etcd v3.5.4
Harbor v2.6.1* Release Notes
Kubernetes v1.24.7* Release Notes
Metrics Server v0.6.1
NCP v4.0.1.0* Release Notes
Percona XtraDB Cluster (PXC)
(in BOSH pxc-release)
v5.7.38-31.59
pxc-release: v0.44.0
Release Notes:
PXC
pxc-release
UAA v74.5.58*
Velero v1.8.1 Release Notes
VMware Cloud Foundation (VCF) v4.5**, v4.4**, and v4.3.1 Release Notes:
v4.5, v4.4, v4.3.1
Wavefront Wavefront Collector: v1.11.0
Wavefront Proxy: v11.3
Compatibilities Versions
Ops Manager See Broadcom Support.
VMware NSX See VMware Product Interoperability Matrices***.
vSphere
Windows stemcells v2019.51 or later
Xenial stemcells See Retrieve Product Version Compatibilities from the Tanzu API in the Broadcom Support KB.

* Components marked with an asterisk have been updated.
** VCF v4.5 and v4.4 are supported but have not been tested with TKGI v1.15.1.
*** NSX Management Plane API to NSX Policy API Migration requires VMware NSX v4.0.1.1 or later.


Upgrade Path

The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition v1.15.1 are from TKGI v1.15.0, and TKGI v1.14.3 and earlier TKGI v1.14 patches.


Features and Enhancements

TKGI v1.15.1 includes the following features and enhancements:

Supports Migrating TKGI to NSX Policy API

Supports promoting TKGI and TKGI Kubernetes clusters and workloads from NSX Management Plane API to NSX Policy API on vSphere with VMware NSX v4.0.1.1.

For more information, see Migrating the NSX Management Plane API to NSX Policy API - Overview.

Supports the Velero vSphere Plugin

Supports backing up and restoring TKGI and TKGI Kubernetes clusters on vSphere using the Velero vSphere Plugin.

For more information, see Installing Velero vSphere Plugin.
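
As orientation only, a typical flow with the velero CLI looks like the sketch below; the plugin image tag, namespace, and backup name are placeholders, and the authoritative steps are in Installing Velero vSphere Plugin.

# Sketch, with placeholder image tag and names; see "Installing Velero
# vSphere Plugin" for the supported plugin version.
velero plugin add vsphereveleroplugin/velero-plugin-for-vsphere:v1.3.1

# Back up a namespace and restore it:
velero backup create demo-backup --include-namespaces=demo
velero restore create --from-backup demo-backup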

vSphere CSI Supports Multiple Data Centers

Supports using the vSphere Container Storage Interface (CSI) Driver on cluster worker nodes that are distributed across multiple data centers. For more information, see Configure CNS Data Centers in Deploying and Managing Cloud Native Storage (CNS) on vSphere.


Resolved Issues

TKGI v1.15.1 resolves the following issues:


Known Issues

Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition v1.15.0 are also in Tanzu Kubernetes Grid Integrated Edition v1.15.1. See the TKGI v1.15.0 Known Issues below.




TKGI Management Console v1.15.1

Release Date: November 22, 2022

Note: Tanzu Kubernetes Grid Integrated Edition Management Console provides an opinionated installation of TKGI. The supported versions might differ from or be more limited than what is generally supported by TKGI.


Product Snapshot

Element Details
Version v1.15.1
Release date November 22, 2022
Installed TKGI version v1.15.1
Installed Ops Manager version v2.10.49* Release Notes
Component Version
Installed Kubernetes version v1.24.7* Release Notes
Installed Harbor Registry version v2.6.1* Release Notes
Linux stemcell v621.330*

* Components marked with an asterisk have been updated.


Upgrade Path

The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition Management Console v1.15.1 are from TKGI MC v1.15.0, and TKGI MC v1.14.3 and earlier TKGI MC v1.14 patches.


Features and Resolved Issues

This release of the Tanzu Kubernetes Grid Integrated Edition Management Console includes no new features or resolved issues.


Known Issues

Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition Management Console v1.15.0 are also in Tanzu Kubernetes Grid Integrated Edition Management Console v1.15.1. See the TKGI Management Console v1.15.0 Known Issues below.




TKGI v1.15.0

Release Date: September 20, 2022


Product Snapshot

Release Details
Version v1.15.0
Release date September 20, 2022
Component Version
Antrea v1.5.0* Release Notes
cAdvisor v0.39.1
Containerd Linux: v1.6.6*
Windows: v1.6.6*
CoreDNS v1.8.6+vmware.9*
CSI Driver for vSphere v2.6.0* Release Notes
etcd v3.5.4
Harbor v2.5.3 Release Notes
Kubernetes v1.24.3* Release Notes
Metrics Server v0.6.1*
NCP v4.0.0.0* Release Notes
Percona XtraDB Cluster (PXC)
(in BOSH pxc-release)
v5.7.38-31.59
pxc-release: v0.44.0
Release Notes:
PXC
pxc-release
UAA v74.5.48
Velero v1.8.1 Release Notes
VMware Cloud Foundation (VCF) v4.5**, v4.4**, and v4.3.1 Release Notes:
v4.5, v4.4, v4.3.1
Wavefront Wavefront Collector: v1.11.0*
Wavefront Proxy: v11.3*
Compatibilities Versions
Ops Manager See Retrieve Product Version Compatibilities from the Tanzu API in the Broadcom Support KB.
VMware NSX See VMware Product Interoperability Matrices.
vSphere
Windows stemcells v2019.51 or later
Xenial stemcells See Retrieve Product Version Compatibilities from the Tanzu API in the Broadcom Support KB.

* Components marked with an asterisk have been updated.
** VCF v4.5 and v4.4 are supported but have not been tested with TKGI v1.15.0.


Upgrade Path

The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition v1.15.0 are from TKGI v1.14.2 and earlier TKGI v1.14 patches.


Breaking Changes

TKGI v1.15.0 has the following breaking changes:

  • Upgrades Kubernetes to v1.24:

    • Removes support for Dockershim.

      Kubernetes support for the Docker container runtime has been entirely removed. TKGI v1.15 supports the containerd container runtime. All existing TKGI Kubernetes clusters using the Docker container runtime are switched to the containerd container runtime during the upgrade to TKGI v1.15.

      Warning: During the TKGI v1.15 upgrade, cluster workloads on clusters that switch from using the Docker container runtime to containerd will experience downtime.

      Cluster workloads do not experience downtime when tkgi update-cluster is used to switch container runtimes before upgrading TKGI. To avoid workload downtime, VMware recommends that you switch your Docker container runtime clusters to containerd before upgrading to TKGI v1.15. For more information, see Cluster Workloads Experience Downtime While Upgrading and Switching Container Runtimes below.

      The TKGI CLI create-cluster and update-cluster commands no longer support the runtime and lock_container_runtime flags in configuration files. A sketch of the v1.14 runtime-switch config file appears after this list.

    • On public clouds, support for in-tree storage volumes has been removed.

      Migration from public cloud in-tree storage volumes to the CSI driver is required.

      If your TKGI environment is on a public cloud, you must install the required CSI driver on your clusters before upgrading to TKGI v1.15.0. For more information on migrating in-tree storage volumes to the CSI driver, see Install the Public Cloud CSI Driver Before Upgrading in Upgrade Preparation Checklist for Tanzu Kubernetes Grid Integrated Edition.

      For information on using a public cloud CSI Driver with TKGI Kubernetes clusters, see Limitations on Using a Public Cloud CSI Driver below.

    For more information about Kubernetes v1.24, see Kubernetes Removals and Deprecations In 1.24 in the Kubernetes documentation.

  • fluent-bit sink-resources requires a CA certificate with a SAN field:

    • The custom CA certificate used to secure fluent-bit log forwarding connections must include a SAN field. If the certificate does not include a SAN field, fluent-bit will not send logs and will record the following fluent-bit Pod log error:
    build tls conn err: x509: certificate relies on legacy Common Name field, use SANs instead
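
    For reference, OpenSSL 1.1.1 and later can issue a certificate with a SAN entry directly. In this sketch the hostname is a placeholder for your log destination:

    # Sketch: self-signed certificate whose SAN covers the log endpoint.
    # Replace loginsight.example.com with your log destination's hostname.
    openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
      -keyout ca.key -out ca.crt \
      -subj "/CN=loginsight.example.com" \
      -addext "subjectAltName=DNS:loginsight.example.com"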
    

For additional breaking changes, see Deprecations below.
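
For the runtime switch referenced above, the following is a minimal sketch of the TKGI v1.14 config-file mechanism; the cluster name is a placeholder, and the flag is available only before the upgrade to v1.15:

# Sketch, run from TKGI v1.14 before upgrading: switch a cluster from
# the Docker runtime to containerd. "my-cluster" is a placeholder.
cat > runtime.json <<'EOF'
{ "runtime": "containerd" }
EOF
tkgi update-cluster my-cluster --config-file runtime.json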


Features and Enhancements

TKGI v1.15.0 includes the following features and enhancements:

Security Features

Passes additional CIS Kubernetes Benchmarks:

  • 1.3.7: Controller Mgr: Ensure that the --bind-address argument is set to 127.0.0.1.
  • 1.4.2: Scheduler: Ensure that the --bind-address argument is set to 127.0.0.1.

For more information, see CIS Kubernetes Benchmarks.

Kubernetes Profiles Enhancements

Supports the following Kubernetes Profiles enhancements:

Supports Changing the Service Cluster IP Range

Kubernetes Profiles configuration now supports changing the service cluster IP range using customizations. For more information, see Modify the Service Cluster IP Range in Using Kubernetes Profiles.
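
As a rough sketch of the shape of such a customization (the profile name, range, and exact schema should be verified against Using Kubernetes Profiles):

# Sketch only: a Kubernetes profile that overrides the kube-apiserver
# service cluster IP range; all values are placeholders.
cat > kubernetes-profile.json <<'EOF'
{
  "name": "custom-service-range",
  "description": "Widen the service cluster IP range",
  "customizations": [
    {
      "component": "kube-apiserver",
      "arguments": {
        "service-cluster-ip-range": "10.100.0.0/16"
      }
    }
  ]
}
EOF
tkgi create-kubernetes-profile kubernetes-profile.json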

Network Profiles Enhancements

Supports the following Network Profiles enhancements:

Supports Configuring Additional NCP Parameters

Network Profiles configuration now supports configuring the NCP nsx_v3.l4_lb_auto_scaling parameter. For more information, see cni_configurations Extensions Parameters in Creating and Managing Network Profiles.

Tenant Isolation Enhancements

Supports isolating tenants using VRF Tier-0 gateways. For more information, see Using a VRF Tier-0 Gateway Configuration for Tenant Isolation in Isolating Tenants.

Supports Reporting Additional Metrics

Supports the following telegraf metrics reporting enhancements:

  • Supports configuring telegraf to export output using the metric_version=1 format instead of the default metric_version=2 format.
  • Supports reporting Kubernetes Scheduler Metrics.
  • Supports reporting telegraf agent process metrics.

For more information, see Configure Telegraf in the Tile in Configuring Telegraf in TKGI. For more information on Telegraf output formats, see Example Output in the Telegraf GitHub documentation.
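
TKGI drives this setting through the tile, but for orientation, the equivalent option in a raw telegraf.conf looks roughly like the sketch below (not the tile's generated output; the listen port is a placeholder):

# Sketch: Telegraf's Prometheus-format output with metric_version
# pinned to 1; TKGI's default output uses the metric_version=2 format.
cat >> telegraf.conf <<'EOF'
[[outputs.prometheus_client]]
  listen = ":9273"
  metric_version = 1
EOF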

Additional Features

TKGI v1.15.0 includes the following additional features:

  • Supports VMware NSX v4.0.0.1.
  • Supports disabling load balancer SNAT for workloads in a single-tier Policy API topology. For more information, see Deploy Workloads on vSphere with NSX-T in Deploying and Exposing Basic Linux Workloads.
  • Improves create-cluster, update-cluster, and delete-cluster performance at scale on NSX for vSphere.


Resolved Issues

TKGI v1.15.0 resolves the following issues:


Deprecations

The following TKGI features have been deprecated or removed from TKGI v1.15:

  • Docker Support: Kubernetes support for the Docker container runtime has been entirely removed in Kubernetes v1.24. TKGI v1.15 supports the containerd container runtime.

  • Google Cloud Platform: Support for the Google Cloud Platform (GCP) is deprecated. Support for GCP will be entirely removed in TKGI v1.19.

  • The log_dropped_traffic CNI Configuration parameter: In TKGI v1.15.0 and later, the log_dropped_traffic CNI Configuration parameter is ignored.

    To configure logging in a Network Profile, modify the log_firewall_traffic parameter. For more information, see log_settings in the cni_configurations Parameters section in Creating and Managing Network Profiles.

  • Pod Security Policy Support: Kubernetes Pod Security Policy (PSP) support has been deprecated and PSP support will be entirely removed in Kubernetes v1.25. Kubernetes v1.23 and v1.24 provide beta support for Pod Security Admission. For more information, see Pod Security Admission and Enforce Pod Security Standards with Namespace Labels in the Kubernetes documentation.

  • Flannel Support: Support for the Flannel Container Networking Interface (CNI) is deprecated. Support for Flannel will be entirely removed in TKGI v1.19. VMware recommends that you switch your Flannel CNI-configured clusters to the Antrea CNI. For more information about Flannel CNI deprecation, see About Switching from the Flannel CNI to the Antrea CNI in About Tanzu Kubernetes Grid Integrated Edition Upgrades.

  • In-Tree vSphere Storage Volume Support: In-Tree vSphere Storage volume support has been deprecated and will be entirely removed in a future Kubernetes version. For information on how to manually migrate In-Tree vSphere Storage volumes on existing TKGI clusters from In-Tree vSphere Storage to the automatically installed vSphere CSI Driver, see Migrate In-Tree vSphere Storage to the vSphere CSI Driver in Deploying and Managing Cloud Native Storage (CNS) on vSphere.


Known Issues

TKGI v1.15.0 has the following known issues.


TKGI MC Unable to Manage TKGI after Restoring the TKGI Control Plane from Backup

Symptom

After you restore Ops Manager and the TKGI API VM from backup, TKGI functions normally, but your TKGI MC tabs include the following error: “…product ‘pivotal-container service’ is not deployed…”.

Explanation

TKGI MC is associated with an Ops Manager installation by name. If you give Ops Manager a new name while restoring, TKGI MC does not recognize the restored Ops Manager and cannot manage it.


TKGI version upgrade without new stemcell fails for Containerd runtime clusters with Istio CNI

Symptom

On clusters configured to use a containerd registry and Istio CNI, upgrading the TKGI version without also upgrading the stemcell fails with errors kubelet cannot find istio-cni binary and nsx fails to recieve message header.

This error does not occur when you upgrade to a new stemcell along with the new TKGI version.

Explanation

When a TKGI cluster upgrade drains a node, it leaves the node's Istio CNI agent and CNI configuration in a corrupted state.

If the cluster nodes are not automatically re-created by a stemcell change, the corrupted Istio CNI state remains.

Workaround

For clusters that use both Containerd and Istio CNI:

  • If you have already encountered this issue, re-create all worker nodes using the bosh recreate command:

    1. Run the bosh vms command to list the cluster VMs:

      bosh -d service-instance-DEPLOYMENT-ID vms
      

      Where DEPLOYMENT-ID is the BOSH-generated ID of your Kubernetes cluster deployment.

    2. For each VM instance listed as worker/UUID in the output, run bosh recreate VM-NAME:

      bosh -d service-instance-DEPLOYMENT-ID recreate worker/UUID
      
  • In the future, you can avoid this issue by upgrading a cluster’s stemcell whenever you upgrade its TKGI version.


Kubernetes Pods on NSX-T Become Stuck in a Creating State

This issue is fixed by using NSX-T v3.2 or later.

Symptom

The pods in your TKGI Kubernetes clusters on NSX-T become stuck in a creating state. The connections between nsx-node-agent and hyperbus repeatedly close, log Couldn't connect to 'tcp://...' (error: 111-Connection refused), and have a status of COMMUNICATION_ERROR.

Explanation

For information and workaround steps for this Known Issue, see Issue 2795268: Connection between nsx-node-agent and hyperbus flips and Kubernetes pod is stuck at creating state in NSX Container Plugin 3.1.2 Release Notes in the VMware documentation.


Error: Could Not Execute “Apply-Changes” in Azure Environment

Symptom

After clicking Apply Changes on the TKGI tile in an Azure environment, you experience an error ‘…could not execute “apply-changes”…’ with either of the following descriptions:

  • {“errors”:{“base”:[“undefined method ‘location’ for nil:NilClass”]}}
  • FailedError.new(“Resource Groups in region ‘#{location}’ do not support Availability Zones”))

For example:

INFO | 2020-09-21 03:46:49 +0000 | Vessel::Workflows::Installer#run | Install product (apply changes)
2020/09/21 03:47:02 could not execute "apply-changes": installation failed to trigger: request failed: unexpected response from /api/v0/installations:
HTTP/1.1 500 Internal Server Error
Transfer-Encoding: chunked
Cache-Control: no-cache, no-store
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Date: Mon, 21 Sep 2020 17:51:50 GMT
Expires: Fri, 01 Jan 1990 00:00:00 GMT
Pragma: no-cache
Referrer-Policy: strict-origin-when-cross-origin
Server: Ops Manager
Strict-Transport-Security: max-age=31536000; includeSubDomains
X-Content-Type-Options: nosniff
X-Download-Options: noopen
X-Frame-Options: SAMEORIGIN
X-Permitted-Cross-Domain-Policies: none
X-Request-Id: f5fc99c1-21a7-45c3-7f39
X-Runtime: 9.905591
X-Xss-Protection: 1; mode=block

44
{"errors":{"base":["undefined method `location' for nil:NilClass"]}}
0

Explanation

The Azure CPI endpoint used by Ops Manager has been changed and your installed version of Ops Manager is not compatible with the new endpoint.

Workaround

Run the following Ops Manager CLI command:

om --skip-ssl-validation --username USERNAME --password PASSWORD --target https://OPSMAN-API curl --silent --path /api/v0/staged/director/verifiers/install_time/IaasConfigurationVerifier -x PUT -d '{ "enabled": false }'

Where:

  • USERNAME is the account to use to run Ops Manager API commands.
  • PASSWORD is the password for the account.
  • OPSMAN-API is the IP address for the Ops Manager API.

For more information, see Error ‘undefined method location’ is received when running Apply Change on Azure in the VMware Tanzu Knowledge Base.


VMware vRealize Operations Does Not Support Windows Worker-Based Kubernetes Clusters

VMware vRealize Operations (vROPs) does not support Windows worker-based Kubernetes clusters and cannot be used to manage TKGI-provisioned Windows workers.


TKGI Wavefront Requires Manual Installation for Windows Workers

To monitor Windows-based worker node clusters with a Wavefront collector and proxy, you must first install Wavefront on the clusters manually, using Helm. For instructions, see the Wavefront section of the Monitoring Windows Worker Clusters and Nodes topic.


Pinging Windows Worker Kubernetes Clusters Does Not Work

TKGI-provisioned Windows worker-based Kubernetes clusters inherit a Kubernetes limitation that prevents outbound ICMP communication from workers. As a result, pinging Windows workers does not work.

For information about this limitation, see Limitations > Networking in the Windows in Kubernetes documentation.


Velero Does Not Support Backing Up Stateful Windows Workloads

You can use Velero to back up stateless TKGI-provisioned Windows workers only. You cannot use Velero to back up stateful Windows applications. For more information, see Velero on Windows in Basic Install in the Velero documentation.


Tanzu Mission Control Integration Not Supported on GCP

TKGI on Google Cloud Platform (GCP) does not support Tanzu Mission Control (TMC) integration, which is configured in the Tanzu Kubernetes Grid Integrated Edition tile > the Tanzu Mission Control pane.

If you intend to run TKGI on GCP, skip this pane when configuring the Tanzu Kubernetes Grid Integrated Edition tile.


TMC Data Protection Feature Requires Privileged TKGI Containers

The TMC Data Protection feature supports privileged TKGI containers only. For more information, see Plans in the Installing TKGI topic for your IaaS.


Windows Worker Kubernetes Clusters with Group Managed Service Account Do Not Support Compute Profiles

Windows worker-based Kubernetes clusters integrated with group Managed Service Account (gMSA) cannot be managed using compute profiles.


Windows Worker Kubernetes Clusters on Flannel Do Not Support Compute Profiles

On vSphere with NSX-T networking you can use compute profiles with both Linux and Windows worker‑based Kubernetes clusters. On vSphere with Flannel networking, you can apply compute profiles only to Linux clusters.


TKGI CLI Does Not Prevent Reducing the Control Plane Node Count

TKGI CLI does not prevent accidentally reducing a cluster’s control plane node count using a compute profile.

Warning: Reducing a cluster’s control plane node count can destroy the cluster. Do not scale out or scale in existing control plane nodes by reconfiguring the TKGI tile or by using a compute profile. Reducing a cluster’s number of control plane nodes might remove a control plane node and cause the cluster to become inactive.


Windows Cluster Nodes Not Deleted After VM Deleted

Symptom

After you delete a VM using the management console of your infrastructure provider, you notice a Windows worker node that had been on that VM is now in a notReady state.

Solution

  1. To identify the leftover node:

    kubectl get no -o wide
    
  2. Locate nodes on the returned list that are in a notReady state and have the same IP address as another node in the list.
  3. To manually delete a notReady node:

    kubectl delete node NODE-NAME
    

    Where NODE-NAME is the name of the node in the notReady state.


502 Bad Gateway After OIDC Login

Symptom

You experience a “502 Bad Gateway” error from the NSX load balancer after you log in to OIDC.

Explanation

A large response header has exceeded your NSX-T load balancer maximum response header size. The default maximum response header size is 10,240 characters and should be resized to 16,384.

Workaround

If you experience this issue, manually reconfigure your NSX-T request_header_size to 4096 characters and your response_header_size to 16384. For information about configuring NSX default header sizes, see OIDC Response Header Overflow in the Knowledge Base.
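
These parameters live on the load balancer's HTTP application profile. The following is a hypothetical sketch of setting them through the NSX Policy API; the manager address, credentials, and profile ID are placeholders, and the Knowledge Base article cited above is the authoritative procedure:

# Hypothetical sketch: patch the HTTP application profile header sizes.
# NSX-MANAGER, PASSWORD, and PROFILE-ID are placeholders.
curl -k -u admin:PASSWORD -X PATCH \
  "https://NSX-MANAGER/policy/api/v1/infra/lb-app-profiles/PROFILE-ID" \
  -H "Content-Type: application/json" \
  -d '{
        "resource_type": "LBHttpProfile",
        "request_header_size": 4096,
        "response_header_size": 16384
      }'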


Difficulty Changing Proxy for Windows Workers

You must configure a global proxy in the Tanzu Kubernetes Grid Integrated Edition tile > Networking pane before you create any Windows workers that use the proxy.

You cannot change the proxy configuration for Windows workers in an existing cluster.


Character Limitations in HTTP Proxy Password

For vSphere with NSX-T, the HTTP Proxy password field does not support the following special characters: & or ;.


Error After Modifying Your Harbor Storage Configuration

Symptom

You receive the following error after modifying your existing Harbor installation’s storage configuration:

Error response from daemon: manifest for ... not found: manifest unknown: manifest unknown

Explanation

Harbor does not support modifying an existing Harbor installation’s storage configuration.

Workaround

To modify your Harbor storage configuration, re-install Harbor. Before starting Harbor, configure the new Harbor installation with the desired configuration.


Ingress Controller Statefulset Fails to Start After Resizing Worker Nodes

Symptom

Permissions are removed from your cluster’s files and processes after resizing the persistent disk during a cluster upgrade. The ingress controller statefulset fails to start.

Explanation

When resizing a persistent disk, BOSH migrates the data from the old disk to the new disk but does not copy the files’ extended attributes.

Workaround

To resolve the problem, complete the steps in Ingress controller statefulset fails to start after resize of worker nodes with permission denied (https://knowledge.broadcom.com/external/article/298618/ingress-controller-statefulset-fails-to.html?language=en_US) in the VMware Tanzu Knowledge Base.


Azure Default Security Group Is Not Automatically Assigned to Cluster VMs

Symptom

You experience issues when configuring a load balancer for a multi-control plane node Kubernetes cluster or creating a service of type LoadBalancer. Additionally, in the Azure portal, the VM > Networking page does not display any inbound and outbound traffic rules for your cluster VMs.

Explanation

As part of configuring the Tanzu Kubernetes Grid Integrated Edition tile for Azure, you enter Default Security Group in the Kubernetes Cloud Provider pane. When you create a Kubernetes cluster, Tanzu Kubernetes Grid Integrated Edition automatically assigns this security group to each VM in the cluster. However, on Azure the automatic assignment might not occur.

As a result, your inbound and outbound traffic rules defined in the security group are not applied to the cluster VMs.

Workaround

If you experience this issue, manually assign the default security group to each VM NIC in your cluster.
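
For example, a minimal sketch using the Azure CLI with hypothetical resource names:

    # Attach the default security group to one cluster VM NIC; repeat per NIC
    az network nic update \
      --resource-group RESOURCE-GROUP \
      --name NIC-NAME \
      --network-security-group DEFAULT-SECURITY-GROUP

Where RESOURCE-GROUP is your resource group, NIC-NAME is the NIC of a cluster VM, and DEFAULT-SECURITY-GROUP is the security group you entered in the Kubernetes Cloud Provider pane.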


One Plan ID Longer than Other Plan IDs

Symptom

One of your plan IDs is one character longer than your other plan IDs.

Explanation

In TKGI, each plan has a unique plan ID. A plan ID is normally a UUID consisting of 32 alphanumeric characters and 4 hyphens. However, the Plan 4 ID consists of 33 alphanumeric characters and 4 hyphens.

Solution

You can safely configure and use Plan 4. The length of the Plan 4 ID does not affect the functionality of Plan 4 clusters.

If you require all plan IDs to have identical length, do not activate or use Plan 4.


Database Cluster Stops After a Database Instance is Stopped

Symptom

After you stop one instance in a multiple-instance database cluster, the cluster stops, or communication between the remaining databases times out, and the entire cluster becomes unreachable.

The following might be in your UAA log:

WSREP has not yet prepared node for application use

Explanation

The database cluster is unable to recover automatically because a member is no longer available to reconcile quorum.


Velero Back Up Fails for vSphere PVs Attached to Clusters on Kubernetes v1.20 and Later

Symptom

Backing up vSphere persistent volumes using Velero fails and your Velero backup log includes the following error:

rpc error: code = Unknown desc = Failed during IsObjectBlocked check: Could not translate selfLink to CRD name

Explanation

This is a known issue when backing up clusters on Kubernetes v1.20 and later using the Velero Plugin for vSphere v1.1.0 or earlier.

Workaround

To resolve the problem, complete the steps in Velero backups of vSphere persistent volumes fail on Kubernetes clusters version 1.20 or higher (83314) in the VMware Tanzu Knowledge Base.


Creating Two Windows Clusters at the Same Time Fails

Symptom

The first time that you try to create two Windows clusters at the same time, the creation of one of the clusters fails. If you run tkgi cluster CLUSTER-NAME to examine the last action taken on the cluster, you see the following:

Last Action: Create
Last Action State: failed
Last Action Description: Instance provisioning failed: There was a problem completing your request. …
operation: create, error-message: Failed to acquire lock …
locking task id is 111, description: ‘create deployment’

Explanation

This is a known issue that occurs the first time that you create two Windows clusters concurrently.

Workaround

Recreate the failed cluster. This issue only occurs the first time that you create two Windows clusters concurrently.


Deleted Clusters are Listed in Cluster Lists

Symptom

After you run tkgi delete-cluster and cluster deletion completes, the deleted cluster is still listed when you run tkgi clusters.

Workaround

You must manually remove the deleted cluster using a customized version of the ncp_cleanup script. For more information, see Deleting a Tanzu Kubernetes Grid Integrated Edition cluster with “tkgi delete-cluster” stuck “in progress” status in the VMware Tanzu Knowledge Base.


BOSH Director Logs the Error ‘Duplicate vm extension name’

Symptom

After you uninstall TKGI, then reinstall TKGI in the same environment, BOSH Director logs errors similar to the following:

.../gems/bosh-director-0.0.0/lib/bosh/director/deployment_plan/cloud_manifest_parser.rb:120:in `parse_vm_extensions': Duplicate vm extension name 'disk_enable_uuid' (Bosh::Director::DeploymentDuplicateVmExtensionName)

Explanation

The pivotal-container-service cloud-config was not removed when you uninstalled the TKGI tile, and it remained active. When you reinstalled the TKGI tile, an additional pivotal-container-service cloud-config was created, causing the metrics_server to fall into a crash-loop state.

Workaround

You must manually remove the pivotal-container-service cloud-config after removing your TKGI deployment, including after removing the TKGI tile from Ops Manager.

For more information, see “Duplicate vm extension name” error when metrics_server runs on Director VM in Tanzu Kubernetes Grid Integrated Edition in the VMware Tanzu Community Knowledge Base.
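
As a minimal sketch, you can list the active cloud configs and delete the leftover one with the BOSH CLI; replace the config name placeholder with the name shown by bosh configs:

    # List all configs, then delete the stale pivotal-container-service cloud config
    bosh configs
    bosh delete-config --type=cloud --name=CLOUD-CONFIG-NAME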


The TKGI API FQDN Must Not Include Trailing Whitespace

Symptom

Your TKGI logs include the following error:

'uaa'. Errors are:- Error filling in template 'uaa.yml.erb' (line 59: Client redirect-uri is invalid: uaa.clients.pks_cli.redirect-uri Client redirect-uri is invalid: uaa.clients.pks_cluster_client.redirect-uri)

Explanation

The TKGI API fully-qualified domain name (FQDN) for your cluster contains leading or trailing whitespace.

Workaround

Do not include whitespace in the TKGI tile API Hostname (FQDN) field.


TMC Cluster Data Protection Backup Fails After Upgrading TKGI

The TMC Cluster Data Protection Backup fails in TKGI environments upgraded from an earlier version.

Symptom

The TMC Cluster Data Protection Backup fails to back up your existing clusters and logs the following error:

error executing custom action (groupResource=customresourcedefinitions.apiextensions.k8s.io, namespace=, name=ncpconfigs.nsx.vmware.com): rpc error: code = Unknown desc = error fetching v1beta1 version of ncpconfigs.nsx.vmware.com: the server could not find the requested resource

Explanation

Kubernetes v1.22 disallows the spec.preserveUnknownFields: true configuration in your existing clusters, so the creation of a v1 CustomResourceDefinitions configuration fails.


TMC Cluster Data Protection Restore Fails When Using Antrea CNI

The TMC Cluster Data Protection Restore operation can fail when restoring multiple Antrea resources.

Symptom

The TMC Cluster Data Protection Restore fails and logs errors that requests to restore the admission webhook have been denied.

Explanation

Velero has encountered a race condition while processing a resource. For more information, see Allow customizing restore order for Kubernetes controllers and their managed resources in the Velero GitHub repository.


TKGI Does Not Support CVDS / NVDS Mixed Environments

TKGI does not support environments where there are multiple matching networks, such as a mixed CVDS/NVDS environment.

Symptom

TKGI logs errors similar to the following in an environment with multiple matching networks:

LastOperationstatus='failed', description='Instance provisioning failed:
There was a problem completing your request. Please contact your operations team providing the following information:
service: p.pks, service-instance-guid: ..., broker-request-id: ..., task-id: ..., operation: create,
error-message: Unknown CPI error 'Unknown' with message 'undefined method `mob' for <VimSdk::Vim::OpaqueNetwork:' in create_vm' CPI method

Explanation

TKGI cannot identify which of the matching networks you intend to use and has selected the wrong network.


Occasionally update-cluster Does Not Complete for Windows Workers

Occasionally, tkgi update-cluster hangs while updating a Windows worker node instance and the BOSH task cannot finish and exits.

Symptom

The ovsdb-server service has stopped but other processes report that it is running.

Explanation

The ovsdb-server.pid file contains the PID of a process that is not ovsdb-server.

To confirm that this is the root cause of the tkgi update-cluster hang:

  • To verify that the ovsdb-server service has actually stopped, run the PowerShell Get-Service command on the Windows worker node, for example, Get-Service ovsdb-server.
  • To verify that other processes report the ovsdb-server service is still running:

    1. Review the ovsdb-server job-service-wrapper.err.log log file.
      The job-service-wrapper.err.log log file is located at:

      C:\var\vcap\sys\log\openvswitch-windows\ovsdb-server\job-service-wrapper.err.log
      
    2. Confirm that after the flushing processes, the log includes an error similar to the following:

      Pid-Guard : ovsdb-server is already runing, please stop it first
      At C:\var\vcap\jobs\openvswitch-windows\bin\ovsdb-server_ctl.ps1:30 char:5
      +     Pid-Guard $PIDFILE "ovsdb-server"
      +     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
          + CategoryInfo          : NotSpecified: ( [Write-Error], WriteErrorException
          + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,Pid-Guard
      
  • To verify the root cause:

    1. Run the following PowerShell commands on the Windows worker node:

      $RUN_DIR = "C:\var\vcap\sys\run\openvswitch-windows"
      $PIDFILE = "$RUN_DIR\ovsdb-server.pid"
      $pid1 = Get-Content $PidFile -First 1
      echo $pid1
      $rst = Get-Process -Id $pid1 -ErrorAction SilentlyContinue
      echo $rst
      
    2. Confirm the returned ProcessName is not ovsdb-server.

Workaround

To resolve this issue for a single Windows worker:

  1. SSH to the affected worker node.
  2. Run the following:

    rm C:\var\vcap\sys\run\openvswitch-windows\ovsdb-server.pid
    
  3. Wait for the ovsdb-server process to start.
  4. Confirm the dependent services also start.


Harbor Private Projects Are Inaccessible after Upgrading to TKGI v1.13.0

If LDAP is enabled, Harbor private projects are inaccessible after upgrading to TKGI v1.13.0. For more information, see Private projects become inaccessible after upgrading Harbor for TKGI to v2.4.x with LDAP feature enabled in the VMware Tanzu Knowledge Base.


Deployments Fail on TKGI Windows Worker-based Kubernetes Clusters after the January 2022 Microsoft Windows Security Patch

Microsoft changed Windows’ support for tar file commands in the January 2022 Microsoft Windows security patch.

Packaging scripts that use tar commands for Windows worker-based Kubernetes cluster deployments can fail after the Microsoft tar command patch update has been applied.

The BOSH agent used by vSphere stemcells built by stembuild v2019.43 and earlier uses tar commands that are no longer supported and will fail if the Microsoft Windows security patch has been applied.

Workaround

stembuild v2019.44 and later include a version of the BOSH agent that does not use unsupported tar commands.

If you use vSphere stemcells, use stembuild 2019.44 or later to avoid the BOSH agent tar error.


Cluster Workloads Experience Downtime While Upgrading and Switching Container Runtimes

The workloads on a cluster will experience a period of downtime if the cluster runtime is automatically switched from Docker to containerd during a TKGI upgrade.

Explanation

By default, clusters are switched from the Docker container runtime to containerd while upgrading TKGI to a newer TKGI version.

If a TKGI upgrade switches a cluster container runtime, the workloads on that cluster will experience a period of downtime.

Administrators have the option to switch container runtimes manually before upgrading TKGI. Administrators can also tag a cluster to not switch container runtimes automatically during a TKGI upgrade.

Workaround

To avoid workload downtime, run tkgi update-cluster to manually switch the cluster’s container runtime to containerd before upgrading to TKGI v1.15 or configure the cluster so that the TKGI upgrade does not switch the cluster’s container runtime.
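
For example, a hedged sketch of both options; the flag names follow the TKGI runtime-switching documentation, but verify them against your TKGI CLI version:

    # Switch the cluster to containerd before upgrading TKGI:
    tkgi update-cluster CLUSTER-NAME --runtime containerd

    # Or prevent the automatic runtime switch during the TKGI upgrade:
    tkgi update-cluster CLUSTER-NAME --lock-container-runtime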

If you must allow the container runtime switch to occur during a TKGI upgrade, avoid workload downtime by completing the following steps before upgrading to TKGI v1.15:

  1. Pull down the CoreDNS image used in your TKGI v1.14 Kubernetes clusters:

    docker pull projects.registry.vmware.com/tkg/coredns:COREDNS-VERSION
    

    Where COREDNS-VERSION is the version of CoreDNS in your TKGI version as listed in the Release Notes, for example, v1.8.7+vmware.3 for TKGI v1.14.3 through v1.14.7.

  2. Upload the CoreDNS image to your custom image registry.
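
    For example, a minimal sketch assuming a hypothetical registry address REGISTRY-DOMAIN:

    # Re-tag the pulled image for your registry, then push it
    docker tag projects.registry.vmware.com/tkg/coredns:COREDNS-VERSION REGISTRY-DOMAIN/tkg/coredns:COREDNS-VERSION
    docker push REGISTRY-DOMAIN/tkg/coredns:COREDNS-VERSION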

  3. Modify the CoreDNS deployment to use the uploaded image. Repeat this step for all of the TKGI Kubernetes clusters with workloads that should remain up during the upgrade.
  4. Upgrade TKGI from v1.14 to TKGI v1.15.

    During worker upgrades, the CoreDNS image in the custom image registry remains available, avoiding CoreDNS downtime. After upgrading a cluster, the workers use the CoreDNS version installed by the TKGI upgrade.


Fluent Bit Does Not Merge Containerd Runtime Cluster Multi-Line Entries

This issue is fixed in TKGI v1.15.1.

The Fluent Bit Docker, CRI, Go, Java, and Python multi-line parser does not merge containerd runtime cluster log entries belonging to the same context into a single log entry.


Switching Your Default CNI to Antrea is Not Supported

This issue is fixed in TKGI v1.15.1.

You cannot switch your default CNI from Flannel to Antrea during TKGI upgrade if TKGI is running on Ops Manager v2.10.40 or later.


HTTPS Ingress Outage During VMware NSX Certificate Rotation

This issue is fixed in TKGI v1.15.6.

When the TKGI CLI rotates VMware NSX certificates, HTTPS Ingress with customer-defined TLS experiences a brief outage.

Explanation

When the TKGI CLI rotates VMware NSX certificates, it updates the default certificate IDs in the HTTPS Load Balancer (LB) virtual server, and removes the server name indicator (SNI) certificate IDs on the LB Virtual Server. This causes a brief outage of HTTPS Ingress with customer-defined TLS. After the certificate rotation, NSX Container Plugin (NCP) restarts and resets the removed SNI.


Some Telegraf Metric-Sink Pods Crash After TKGI Upgrade

This issue is fixed in TKGI v1.15.6.

After upgrading TKGI to 1.15.x, some Telegraf metric-sink pods crash with this error:

Events:
  Type    Reason   Age                     From     Message
  ----    ------   ----                    ----     -------
  Normal  BackOff  4m18s (x547 over 129m)  kubelet  Back-off pulling image "cnabu-docker-local.artifactory.eng.vmware.com/oratos/telegraf:1a3337bb81890b3ca0848b5dd456 

Explanation

During the TKGI upgrade, the metric-controller uses the Telegraf image in the previous TKGI version to deploy Telegraf in the new version. After upgrading TKGI to version 1.15.x, the old Telegraf image is deleted from some worker nodes due to high disk utilization. The metric-controller is unable to deploy Telegraf on those nodes.

Workaround

Before you perform this procedure, ensure that you have collected the name and the namespace from your metricSink Custom Resource (CR).

To resolve this issue:

  1. Run the following command to find the new tag for Telegraf:

    kubectl describe deployment observability-manager -n pks-system
    

    Note the tag that corresponds to the Image field under observability-manager. For example, efcb96f78984d7731kl99hds564e.

  2. Run the following command to edit the Telegraf deployment details:

    kubectl edit deployment telegraf-METRICSINK-NAME -n METRICSINK-NAMESPACE
    

    Where:

    • METRICSINK-NAME is the name of MetricSink that you collected from the metricSink CR.
    • METRICSINK-NAMESPACE is the namespace of MetricSink that you collected from the metricSink CR.
  3. In the image: cnbu-docker-local.artifactory.eng.vmware.com/oratos/telegraf: field, replace the existing tag with the tag that you noted in Step 1.

  4. Save the changes and exit the editor.


Windows Worker Nodes Are Unresponsive after Update-Cluster and Upgrade-Cluster

This issue is fixed in TKGI v1.15.2.

While updating or upgrading a Windows Worker cluster that uses the containerd container runtime, some of the cluster’s nodes can become unresponsive. During the cluster update or upgrade, the node drain step for the cluster’s nodes can time out and not finish.

Symptom

While updating or upgrading a Windows Worker cluster, the drain step for some nodes hangs for an extended period, and the nodes enter an unresponsive agent state. The update or upgrade process eventually logs the error Error: Timed out sending 'get_task' to instance for the executing drain step.

Workaround

If the upgrade/update node drain step for your Windows Worker cluster is waiting to time out, you can manually stop the drain process and restart your cluster upgrade/update.

To stop the upgrade/update node drain process and re-start your cluster upgrade/update:

  1. SSH to your jump host.
  2. Determine the VM for the cluster being upgraded/updated and the task ID of the node drain step that is running.
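
    For example, a minimal sketch using the BOSH CLI; TKGI cluster deployments are typically named service-instance_UUID:

    # List the VMs in the cluster deployment, then find the running drain task
    bosh -d service-instance_UUID vms
    bosh tasks
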
  3. Confirm BOSH resurrection is enabled.
  4. Run the following command to stop the task:

    bosh cancel-task TASK-ID
    

    Where TASK-ID is the task ID for the node drain step that is running.

  5. Run the following command to delete the VM:

    bosh delete-vm VM-ID
    

    Where VM-ID is the ID of the VM in the cluster being upgraded/updated.

  6. Wait for BOSH Resurrection to recreate the missing VM.

  7. Restart the upgrade or update cluster process for your cluster.


Timeout While Switching Container Runtimes If the Docker Directory Is Too Large

This issue is fixed in TKGI v1.15.1.

Switching a cluster’s container runtime from Docker to containerd can time out and fail if the Docker directory contains many files.

Description

A timeout occurs during the remove Docker step while switching a cluster’s container runtime from Docker to containerd if the Docker directory contains too many files to delete within the 180-second timeout interval.

Workaround

To work around this issue:

  1. Manually remove the /var/vcap/store/docker directory.
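
    For example, a minimal sketch assuming an SSH session on the affected worker VM:

    # Remove the leftover Docker data directory
    rm -rf /var/vcap/store/docker
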
  2. Restart the process that stopped due to the timeout.


Persistent Volumes Fail to Detach from Nodes

This issue is fixed in TKGI v1.15.1.

If a Pod is recreated in a new instance node, the persistent volume might remain attached to the old node.

Symptom

A persistent volume remains attached to an old node, and attachment errors similar to the following are logged:

Warning FailedMount... kubelet Unable to attach or mount volumes: unmounted volumes=..., unattached volumes=...: timed out waiting for the condition
Warning FailedMount... kubelet Unable to attach or mount volumes: unmounted volumes=..., unattached volumes=...: timed out waiting for the condition
Warning FailedAttachVolume... attachdetach-controller AttachVolume.Attach failed for volume...: 
rpc error: code = Internal desc = failed to attach disk:... with node:... err failed to attach cns volume:... to node vm:.... 
fault: "(*types.LocalizedMethodFault)(0xc000c88d80)({\n DynamicData: (types.DynamicData)

For more information, see Persistent volume fails to be detached from a node in VMware vSphere Container Storage Plug-in 2.5 Release Notes.


Pods on Clusters Using the containerd-Runtime Enter a CrashLoopBackOff State

This issue is fixed in TKGI v1.15.1.

Pods in a cluster that has been switched from the Docker container runtime to containerd might enter a CrashLoopBackOff state. If the container runtime switch is part of a cluster upgrade, the upgrade halts.

Symptom

The Pods that have entered the CrashLoopBackOff state log the following:

Warning FailedCreatePodSandBox... Failed to create pod sandbox: rpc error: code = 
Unknown desc = failed to create containerd task: failed to start shim: 
write /var/vcap/sys/run/containerd/io.containerd.runtime.v2.task/.../config.json: 
no space left on device: unknown

The /var/vcap/data/sys/run directory on the instance node with Pods that have entered the CrashLoopBackOff state is full.


Telegraf In-Host Monitoring Does Not Collect kubelet Metrics

This issue is fixed in TKGI v1.15.3.

Symptom

Telegraf in-host monitoring does not include kubelet metrics and instead logs errors similar to the following in worker VM /var/vcap/sys/log/metric-sink/telegraf.stderr.log log files:

E! [inputs.kubernetes::kubelet] Error in plugin: https://localhost:10250/stats/summary returned HTTP status 401 Unauthorized

Explanation

In Kubernetes 1.24 and later, the beta LegacyServiceAccountTokenNoAutoGeneration feature gate is enabled by default. While this feature is enabled, the expected Secret API objects containing service account tokens are no longer auto-generated, and in-host monitoring does not authenticate.
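
In versions without the fix, a common pattern for this situation is to create the service account token Secret manually. The following is a hedged sketch; the telegraf account name and pks-system namespace are assumptions:

    # Create a long-lived token Secret for the (assumed) telegraf service account
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: telegraf-token
      namespace: pks-system
      annotations:
        kubernetes.io/service-account.name: telegraf
    type: kubernetes.io/service-account-token
    EOF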


CSI Driver Image Missing After High Disk Utilization

This issue is fixed in TKGI v1.15.3.

The CSI driver might be missing on a worker node in an air-gapped environment.

Explanation

A garbage collection event is triggered when a worker node experiences low available disk capacity. To increase storage capacity, the garbage collector deletes unused image files, including the CSI driver image.

Usually, a worker node automatically pulls a replacement CSI driver image if the image is missing, but in an air-gapped environment, it cannot.


‘Input not an X.509 certificate’ When Applying Change on the TKGI Tile

This issue is fixed in TKGI v1.15.3.

The TKGI tile might report an error similar to the following when Applying Changes with a correctly formatted certificate:

Setting up key store, trust store and installing certs.
keytool error: java.lang.Exception: Input not an X.509 certificate
pre-start.stdout.log 

Explanation

The certificate contains one or more certificate keywords, for example, BEGIN or END, and does not validate.


TKGI Sets the Maximum Persistent Volumes per Node to 59 Instead of 45

This issue is fixed in TKGI v1.15.6.

In TKGI, the maximum number of persistent volumes for a node is set to 59, instead of 45.

Explanation

The three available SCSI controllers in a TKGI node support 45 persistent volumes in total. However, on vSphere CSI nodes, TKGI sets the maximum number of supported persistent volumes to 59 instead of 45.


Limitations on Using the VMware vSphere CSI Driver

The VMware vSphere CSI Driver supports a limited set of VMware vSphere features. Before enabling the vSphere CSI Driver on a TKGI cluster, confirm the cluster and storage configuration are supported by the driver. For more information, see Unsupported Features and Limitations in Deploying and Managing Cloud Native Storage (CNS) on vSphere.


Limitations on Using a Public Cloud CSI Driver

TKGI supports using a public cloud CSI Driver on a TKGI-provisioned cluster.

Installing a Public Cloud CSI Driver on a TKGI Cluster

If you plan to use a public cloud CSI Driver on a TKGI-provisioned cluster, VMware recommends you take additional steps before installing the CSI Driver:

  • For most public clouds, VMware recommends you follow the CSI Driver installation procedure recommended by the public cloud provider.

  • For installing the Azure CSI Driver on a TKGI cluster, VMware recommends you follow the procedure in the How to install Azure file/disk CSI driver onto TKGI 1.14 cluster knowledge base article in the VMware Tanzu Support Hub.

Managing a TKGI Cluster That Uses a Public Cloud CSI Driver

If you have enabled a public cloud CSI Driver on a TKGI cluster, you must take additional steps when deleting, upgrading, or updating the cluster:

Updating a Cluster on a Public Cloud

When updating a cluster that uses a public cloud CSI Driver:

  • No preparation steps are needed when updating a multi-worker node cluster.
  • To prepare a single-worker node cluster for updating:

    1. Resize the cluster to two or more worker nodes before updating the cluster. For more information, see Scaling Existing Clusters.
    2. Update the cluster.

Upgrading a Cluster on a Public Cloud

When upgrading a cluster that uses a public cloud CSI Driver:

  • No preparation steps are needed when upgrading a multi-worker node cluster.
  • To prepare a single-worker node cluster for upgrading:

    1. Resize the cluster to two or more worker nodes before upgrading the cluster. For more information, see Scaling Existing Clusters.
    2. Upgrade the cluster. For more information on upgrading clusters, see Upgrading Clusters.

Deleting a Cluster on a Public Cloud

When deleting a cluster that uses a public cloud CSI Driver:

  1. Manually delete the workload PVCs and PVs before deleting the cluster.
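
    For example, a minimal sketch of step 1 with hypothetical resource names:

    # Delete the workload PVC, then the backing PV
    kubectl delete pvc PVC-NAME -n NAMESPACE
    kubectl delete pv PV-NAME
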
  2. Delete the cluster. For more information on deleting clusters, see Deleting Clusters.


TKGI Clusters Fail after NSX Upgrade If They Use NSGroup Policy API Resources

TKGI supports clusters that use NSGroup Policy API resources, but Policy API NSGroups created in one NSX version will be empty after upgrading NSX to a newer version. 

Workaround

BOSH reconfigures a deployment’s NSGroup members if the deployment is redeployed.

After upgrading NSX, redeploy affected deployments to reconfigure their NSGroup members:  

  1. Re-Apply Changes on the Ops Manager UI to redeploy TKGI tile deployments.
  2. Re-deploy the affected cluster deployments.
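
For example, a minimal sketch of step 2 using the BOSH CLI, assuming a cluster deployment named service-instance_UUID:

    # Fetch the current manifest and redeploy it unchanged to rebuild NSGroup members
    bosh -d service-instance_UUID manifest > /tmp/manifest.yml
    bosh -d service-instance_UUID deploy /tmp/manifest.yml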


Kubernetes API Server and etcd Daemon Occasionally Fail to Start During BBR Restore

This issue is fixed in TKGI v1.15.5.

The Kubernetes API server or the etcd daemon on a cluster control plane node might not start during a BBR restore, stopping the restore.

Symptom

During a BBR restore, the post-restore-unlock script occasionally times out while starting the etcd daemon or Kubernetes API server.

For example, the post-restore-unlock script shows the following when the etcd daemon fails to start:

Error attempting to run post-restore-unlock for job bbr-etcd on master...
+ NAME=post-restore-unlock
+ LOG_DIR=/var/vcap/sys/log/bbr-etcd
+ exec
++ tee -a /var/vcap/sys/log/bbr-etcd/post-restore-unlock.stdout.log
...
monit has started etcd
+ timeout 1200 /bin/bash
waiting for etcd daemon to start
Process 'etcd'     not monitored - start pending
...
waiting for etcd daemon to start
Process 'etcd'     initializing
etcd daemon was unable to start after 1200 seconds
+ exit 1 - exit code 1 

Workaround

Restart the BBR restore if the Kubernetes API server or the etcd daemon fails to start.
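
For example, a hedged sketch of restarting the restore with the BBR CLI; the flags and credentials depend on your environment:

    # Re-run the restore against the same backup artifact
    bbr deployment --target BOSH-DIRECTOR-ADDRESS --username BOSH-CLIENT \
      --deployment DEPLOYMENT-NAME restore --artifact-path PATH-TO-BACKUP-ARTIFACT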


The ‘kube-state-metrics’ ClusterRole Is Deleted during Cluster Upgrade

This issue is fixed in TKGI v1.15.3.

The wavefront-proxy-errand deletes the kube-state-metrics ClusterRole during cluster upgrade. The deleted ClusterRole must be manually restored after upgrading a cluster.


The Fluent Bit Pod Restarts Due to Out-of-Memory Issue

When the LogSink feature is enabled, the Fluent Bit Pod can experience an out-of-memory issue during high memory utilization. An OOMKilled error (Kubernetes exit code 137) is logged, and the Fluent Bit Pod restarts.

Workaround

In TKGI v1.15.4 and later, increase the Fluent Bit Pod memory limit.
For more information, see Log Sink Resources in the Installing Tanzu Kubernetes Grid Integrated Edition topic for your IaaS.


Rotated TKGI Certificates Remain Listed as Expiring on the Ops Manager Certificates List

This issue is fixed in TKGI v1.15.5.

After rotating certificates, the Ops Manager list of certificates shows that the pks_api_internal_2018 certificate on each cluster remains expiring on the original expiration date.

Explanation

The Ops Manager list of certificates is displaying stale data for pks_api_internal_2018 certificates.


TKGI Certificate Rotation Might Remove NSX Ingress Certificates from TKGI

This issue is fixed in TKGI v1.15.1.

When rotating certificates using the tkgi rotate-certificates --only-nsx command, TKGI certificate rotation might remove the TKGI certificates used for ingress from the NSX virtual server. For example, rotate-certificates --only-nsx might remove the NSX Load Balancer certificate or the NSX Manager Superuser Principal Identity certificate from TKGI when this issue occurs.

Explanation

When tkgi rotate-certificates --only-nsx rotates the NSX ingress certificates, rotate-certificates removes and then replaces the certificates. After removing the ingress certificates, rotate-certificates relies on NCP for a list of the NSX ingress certificates to restore, but NCP v4.0.0 provides an incomplete list of certificates.

This occurs because NCP v4.0.0 does not collate the paginated certificate list returned by the NSX API and provides tkgi rotate-certificates with a maximum of 50 certificates.

Workaround

To manually rotate an admin-defined NSX-T certificate that is not rotated using the tkgi rotate-certificates --only-nsx command:

  1. Delete the TLS secret.
  2. Recreate the TLS secret.
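
For example, a minimal sketch with hypothetical secret and certificate file names:

    # Delete the existing TLS secret, then recreate it from the certificate and key files
    kubectl delete secret INGRESS-TLS-SECRET -n NAMESPACE
    kubectl create secret tls INGRESS-TLS-SECRET --cert=tls.crt --key=tls.key -n NAMESPACE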


The Validator Secret Certificate Is Not Rotated

This issue is fixed in TKGI v1.15.5.

The certificates signed by pks-ca, for example, the pks-system namespace validator, event-controller, and fluent-bit secret certificates, are not rotated by running tkgi rotate-certificates and are not automatically rotated during cluster upgrades.

Workaround

To rotate the certificates signed by pks-ca:

  1. Delete the event-controller, fluent-bit, and pks-system namespace validator secrets.

  2. If you also want to rotate the pks-ca certificate, delete the pks-ca secret.

  3. To generate a new pks-ca certificate and/or leaf certificates, apply the cert-generator job:

    1. Backup the cert-generator job as yaml.
    2. Delete the cert-generator job.
    3. Apply the backup cert-generator yaml.
    4. Restart event-controller, fluent-bit, and the pks-system namespace validator.
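
A minimal sketch of the cert-generator steps, assuming the job runs in the pks-system namespace:

    # Back up the job as yaml, delete it, then re-apply the backup
    kubectl get job cert-generator -n pks-system -o yaml > cert-generator.yml
    kubectl delete job cert-generator -n pks-system
    kubectl apply -f cert-generator.yml

You might need to strip server-populated fields, such as status and the controller-uid selector, from the saved yaml before applying it.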


API Server Audit Logs Leak Tokens

This issue is fixed in TKGI v1.15.6.

The API Server audit logs include clear-text tokens on the control plane nodes of clusters that use the default audit policy.

Description

JSON Web Tokens in the tokenrequests API body are being written to the API Server audit log /var/vcap/sys/log/kube-apiserver/audit/log on TKGI clusters.


TKGI Does Not Support the Antrea Egress Feature on AWS

In AWS environments, TKGI does not support the Antrea CNI Egress feature. For example, the Egress resource egressIP and externalIPPool fields in an antrea-config configuration are ignored on AWS. For more information about the Antrea Egress feature, see What is Egress in the Antrea documentation.


Cluster Might Fail to Send the cluster_name Tag to Logging after Cluster Upgrade

This issue is fixed in TKGI v1.15.7.

Occasionally, a cluster might fail to send the cluster_name tag to logging after being upgraded.

Description

After upgrading a cluster, the Name record_modifier filter will occasionally be missing from the cluster’s fluent-bit ConfigMap, and the cluster_name is not included in log entries. This problem occurs if the sink-controller process configures the cluster before the observability-manager starts, which overwrites the desired configuration.


Cluster Update Operations Fail Due to Duplicate Tag Keys

This issue is fixed in TKGI v1.15.8.

Cluster update operations fail if you reuse the same key in different tags in a cluster.

Description

Tag keys must be unique within a cluster, for example, key1:value1, key2:value2. TKGI does not prevent you from reusing the same key for multiple tags in a cluster, for example, key1:value1, key1:value2. However, the cluster update operations fail.

Workaround

Use different keys for the tags within a cluster, for example, key1:value1, key2:value2. For more information, see Tagging Rules.
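
For example, a hedged sketch using the tkgi CLI; verify the --tags syntax for your CLI version:

    # Apply tags with unique keys to the cluster
    tkgi update-cluster CLUSTER-NAME --tags "key1:value1,key2:value2"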


TKGI Management Console v1.15.0

Release Date: September 20, 2022

Note: Tanzu Kubernetes Grid Integrated Edition Management Console provides an opinionated installation of TKGI. The supported versions might differ from or be more limited than what is generally supported by TKGI.

Product Snapshot

Element Details
Version v1.15.0
Release date September 20, 2022
Installed TKGI version v1.15.0
Installed Ops Manager version v2.10.46* Release Notes
Component Version
Installed Kubernetes version v1.24.3 Release Notes
Installed Harbor Registry version v2.5.3* Release Notes
Linux stemcell v621.265*

* Components marked with an asterisk have been updated.

Upgrade Path

The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition Management Console v1.15.0 are from TKGI MC v1.14.2 and earlier TKGI MC v1.14 patches.

Features and Resolved Issues

TKGI Management Console v1.15.0 includes the following features:

  • TKGI MC OS has been upgraded to Photon OS 4.0. For more information about Photon OS 4.0, see What is New in Photon OS 4 in the Project Photon OS documentation.

Deprecations

The following TKGI features have been deprecated or removed from TKGI Management Console v1.15:

Known Issues

The Tanzu Kubernetes Grid Integrated Edition Management Console v1.15.0 has the following known issues:


vRealize Log Insight Integration Does Not Support HTTPS Connections

Symptom

The Tanzu Kubernetes Grid Integrated Edition Management Console integration to vRealize Log Insight does not support connections to the HTTPS port on the vRealize Log Insight server.

Workaround

  1. Use SSH to log in to the Tanzu Kubernetes Grid Integrated Edition Management Console appliance VM.
  2. Open the file /lib/systemd/system/pks-loginsight.service in a text editor.
  3. Add -e LOG_SERVER_ENABLE_SSL_VERIFY=false.
  4. Set -e LOG_SERVER_USE_SSL=true.

    The resulting file should look like the following example:

    ExecStart=/bin/docker run --privileged --restart=always --network=pks
    -v /var/log/journal:/var/log/journal
    --name=pks-loginsight
    -e TYPE=gear2-vm
    -e LOG_SERVER_HOST=${LOGINSIGHT_HOST}
    -e LOG_SERVER_PORT=${LOGINSIGHT_PORT}
    -e LOG_SERVER_ENABLE_SSL_VERIFY=false
    -e LOG_SERVER_USE_SSL=true
    -e LOG_SERVER_AGENT_ID=${LOGINSIGHT_ID}
    pksoctopus/vrli-journald:v07092019
    
  5. Save the file and run systemctl daemon-reload.

  6. To restart the vRealize Log Insight service, run systemctl restart pks-loginsight.service.

Tanzu Kubernetes Grid Integrated Edition Management Console can now send logs to the HTTPS port on the vRealize Log Insight server.


vSphere HA causes Management Console ovfenv Data Corruption

Symptom

If vSphere HA is enabled on a cluster and the host running the TKGI Management Console appliance VM reboots, vSphere HA recreates the appliance VM on another host in the cluster. Due to an issue with vSphere HA, the ovfenv data for the newly created appliance VM is corrupted and the new appliance VM does not boot up with the correct network configuration.

Workaround

  1. In the vSphere Client, right-click the appliance VM and select Power > Shut Down Guest OS.
  2. Right-click the appliance again and select Edit Settings.
  3. Select VM Options and click OK.
  4. Verify under Recent Tasks that a Reconfigure virtual machine task has run on the appliance VM.
  5. Power on the appliance VM.


Base64 encoded file arguments are not decoded in Kubernetes profiles

Symptom

Some file arguments in Kubernetes profiles are base64 encoded. When the management console displays the Kubernetes profile, some file arguments are not decoded.

Workaround

Decode the affected file arguments manually by running echo "$content" | base64 --decode, where $content is the encoded file argument value.


Network profiles not immediately selectable

Symptom

If you create network profiles and then try to apply them in the Create Cluster page, the new profiles are not available for selection.

Workaround

Log out of the management console and log back in again.


Real-Time IP information not displayed for network profiles

Symptom

In the cluster summary page, only the default IP pool, pod IP block, and node IP block values are displayed, rather than the real-time values from the associated network profile.

Workaround

None


Error After Modifying Your Harbor Storage Configuration

Symptom

You receive the following error after modifying your existing Harbor installation’s storage configuration:

Error response from daemon: manifest for ... not found: manifest unknown: manifest unknown

Explanation

Harbor does not support modifying an existing Harbor installation’s storage configuration.

Workaround

To modify your Harbor storage configuration, re-install Harbor. Before starting Harbor, configure the new Harbor installation with the desired configuration.


Windows Stemcells Must be Re-Imported After Upgrading Ops Manager

Symptom

After upgrading Ops Manager, your Management Console does not recognize a Windows stemcell imported when using the prior version of Ops Manager.

Workaround

If your Management Console does not recognize a Windows stemcell after upgrading Ops Manager:

  1. Re-import your previously imported Windows stemcell.
  2. Apply Changes to TKGI MC.


Your New Clusters Are Not Shown In Tanzu Mission Control

Symptom

After you create a cluster, Tanzu Mission Control does not include the cluster in cluster lists. You have a “Resource not found” error similar to the following in your BOSH logs:

Cluster Name in TMC: cluster-1
Cluster Name Prefix: tkgi-my-prefix-
Group Name in TMC: my-prefix-clusters
Cluster Description in TMC: VMware Enterprise PKS Attaching cluster ''tkgi-my-prefix-cluster-1'' to TMC
Fetching token successful
request POST:/v1alpha1/clusters,
response 404 Not Found:{"error":"Resource not found - clustergroup(my-prefix-clusters)
org id(d859dc9f-g622-426d-8c91-939a9f13dea9)",
"code":5,"message":"Resource not found - clustergroup(my-prefix-clusters)

Explanation

The cluster group you assign a cluster to must be defined in Tanzu Mission Control before you assign your cluster to the cluster group in the TKGI Management Console.

Workaround

To resolve the problem, complete the steps in Attaching a Tanzu Kubernetes Grid Integrated (TKGI) cluster to Tanzu Mission Control (TMC) fails with “Resource not found - clustergroup(cluster-group-name)” in the VMware Tanzu Knowledge Base.


Previous nsx-t-superuser-certificate Is Restored during TKGI MC Upgrade

This issue is fixed in TKGI MC v1.15.4.

Upgrading the TKGI MC after rotating the nsx-t-superuser-certificate certificate restores the previous nsx-t-superuser-certificate certificate. For example, this issue occurs if you upgrade TKGI MC after following the steps in How to renew the nsx-t-superuser-certificate used by Principal Identity user (80355).


TKGI MC Unable to Create a Network Profile Configured with Source IP Ingress Persistence

This issue is fixed in TKGI MC v1.15.6.

The TKGI MC halts and returns the following error when creating a network profile which includes an ingress_persistence_settings.persistence_type configuration:

Failed to save network profile. ingress_persistence_settings.persistence_type in body should be one of [none cookie]

