This topic contains release notes for Tanzu Kubernetes Grid Integrated Edition (TKGI) v1.17.
Release Date: November 16, 2023
Release Details | | |
---|---|---|
Version | v1.17.2 | |
Release date | November 16, 2023 | |
Internal Component Versions | | |
Antrea | v1.7.1* | |
cAdvisor | v0.39.1 | |
Containerd | Linux: v1.6.24*, Windows: v1.6.24* | |
CoreDNS | v1.9.3+vmware.18* | |
CSI Driver for vSphere | v3.0.3* | Release Notes |
etcd | v3.5.9 | |
Harbor | v2.8.4* | Release Notes |
Kubernetes | v1.26.10* | Release Notes |
Metrics Server | v0.6.4 | |
NCP | v4.1.1.1 | Release Notes |
Percona XtraDB Cluster (PXC) (in BOSH pxc-release) | PXC: v8.0.31-23, pxc-release: v1.0.8 | Release Notes: PXC, pxc-release |
UAA | v74.5.92* | |
Velero | v1.10.3 | Release Notes |
Wavefront | Wavefront Collector: v1.13.0, Wavefront Proxy: v12.4 | |
Stemcell Compatibility | | |
Ubuntu Jammy stemcells | See VMware Tanzu Network. | |
Windows stemcells | v2019.65* or later | |
Interoperability | | |
Ops Manager | See VMware Tanzu Network. | |
VMware Aria Operations Management Pack for Kubernetes | v1.10.3 | Release Notes |
VMware Cloud Foundation (VCF) | v5.0**, v4.5.2** | Release Notes: v5.0, v4.5.2 |
VMware NSX | See VMware Product Interoperability Matrices***. | |
vSphere | See VMware Product Interoperability Matrices. | |
* Components marked with an asterisk have been updated.
** VCF v5.0 and VCF v4.5.2 are supported but have not been tested with TKGI v1.17.
*** Migration from NSX Management Plane API to NSX Policy API requires VMware NSX v4.0.1.1 or later. NSX v4.0.1.1 supports only 50% of NSX Management Plane API scale. To use Policy API at 100% of Management Plane API scale, you require NSX v4.1.1.
The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition v1.17.2 are from TKGI v1.17.1, and TKGI v1.16.5 and earlier TKGI v1.16 patches.
TKGI v1.17.2 does not include any new breaking changes.
TKGI v1.17.2 does not include any new features or enhancements.
TKGI v1.17.2 resolves the following issues:
Note: You must grant the AWS Worker Instance Profile additional AWS Identity and Access Management (IAM) permissions before using the Antrea Egress feature with worker nodes on AWS. For more information, see Prepare AWS Worker Instance Profile Permissions in General Troubleshooting.
Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition v1.17.1 are also in Tanzu Kubernetes Grid Integrated Edition v1.17.2. For more information, see TKGI v1.17.1 Known Issues below.
TKGI v1.17.2 does not include any additional known issues.
Release Date: November 16, 2023
Tanzu Kubernetes Grid Integrated Edition Management Console provides an opinionated installation of TKGI.
Note: The component versions supported by TKGI Management Console might differ from or be more limited than the versions supported by TKGI.
Element | Details | |
---|---|---|
Version | v1.17.2 | |
Release date | November 16, 2023 | |
Installed TKGI version | v1.17.2 | |
Installed Ops Manager version | v3.0.18* | Release Notes |
Component | Version | |
Installed Kubernetes version | v1.26.10* | Release Notes |
Installed Harbor Registry version | v2.8.4* | Release Notes |
Ubuntu Jammy stemcell | v1.289* | Release Notes |
* Components marked with an asterisk have been updated.
The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition Management Console v1.17.2 are from TKGI MC v1.17.1, and TKGI MC v1.16.5 and earlier TKGI v1.16 patches.
TKGI Management Console v1.17.2 does not include any new features or resolved issues.
Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition Management Console v1.17.1 are also in Tanzu Kubernetes Grid Integrated Edition Management Console v1.17.2. For more information, see TKGI v1.17.1 Known Issues below.
TKGI MC v1.17.2 does not include any additional known issues.
Release Date: September 14, 2023
Release Details | | |
---|---|---|
Version | v1.17.1 | |
Release date | September 14, 2023 | |
Internal Component Versions | | |
Antrea | v1.7.0 | Release Notes |
cAdvisor | v0.39.1 | |
Containerd | Linux: v1.6.18, Windows: v1.6.18 | |
CoreDNS | v1.9.3+vmware.16* | |
CSI Driver for vSphere | v3.0.2 | Release Notes |
etcd | v3.5.9 | |
Harbor | v2.8.2 | Release Notes |
Kubernetes | v1.26.8* | Release Notes |
Metrics Server | v0.6.4* | |
NCP | v4.1.1.1* | Release Notes |
Percona XtraDB Cluster (PXC) (in BOSH pxc-release) | PXC: v8.0.31-23, pxc-release: v1.0.8 | Release Notes: PXC, pxc-release |
UAA | v74.5.85* | |
Velero | v1.10.3 | Release Notes |
Wavefront | Wavefront Collector: v1.13.0, Wavefront Proxy: v12.4 | |
Stemcell Compatibility | | |
Ubuntu Jammy stemcells | See VMware Tanzu Network. | |
Windows stemcells | v2019.61 or later | |
Interoperability | | |
Ops Manager | See VMware Tanzu Network. | |
VMware Aria Operations Management Pack for Kubernetes | v2.0 | Release Notes |
VMware Cloud Foundation (VCF) | v5.0**, v4.5.2** | Release Notes: v5.0, v4.5.2 |
VMware NSX | See VMware Product Interoperability Matrices***. | |
vSphere | See VMware Product Interoperability Matrices. | |
* Components marked with an asterisk have been updated.
** VCF v5.0 and VCF v4.5.2 are supported but have not been tested with TKGI v1.17.
*** Migration from NSX Management Plane API to NSX Policy API requires VMware NSX v4.0.1.1 or later. NSX v4.0.1.1 supports only 50% of NSX Management Plane API scale. To use Policy API at 100% of Management Plane API scale, you require NSX v4.1.1.
The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition v1.17.1 are from TKGI v1.17.0, and TKGI v1.16.3 and earlier TKGI v1.16 patches.
TKGI v1.17.1 does not include any new breaking changes.
TKGI v1.17.1 does not include any new features or enhancements.
TKGI v1.17.1 resolves the following issues:

- Fixes a `FailedCreatePodSandBox` error during cluster creation.
- Fixes an issue where a cluster occasionally fails to send the `cluster_name` tag to logging after a cluster upgrade.

Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition v1.17.0 are also in Tanzu Kubernetes Grid Integrated Edition v1.17.1. For more information, see TKGI v1.17.0 Known Issues below.
TKGI v1.17.1 does not include any additional known issues.
Release Date: September 14, 2023
Tanzu Kubernetes Grid Integrated Edition Management Console provides an opinionated installation of TKGI.
Note: The component versions supported by TKGI Management Console might differ from or be more limited than the versions supported by TKGI.
Element | Details | |
---|---|---|
Version | v1.17.1 | |
Release date | September 14, 2023 | |
Installed TKGI version | v1.17.1 | |
Installed Ops Manager version | v3.0.14* | Release Notes |
Component | Version | |
Installed Kubernetes version | v1.26.8* | Release Notes |
Installed Harbor Registry version | v2.8.2 | Release Notes |
Ubuntu Jammy stemcell | v1.207* | Release Notes |
* Components marked with an asterisk have been updated.
The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition Management Console v1.17.1 are from TKGI MC v1.17.0, and TKGI MC v1.16.3 and earlier TKGI v1.16 patches.
TKGI Management Console v1.17.1 includes the following enhancements:

- Enhances `tkgi delete-cluster` reliability by increasing the default TKGI Operation Timeout value from 60 seconds to 120 seconds. TKGI MC v1.17.1 also supports configuring the TKGI Operation Timeout. For more information about configuring the TKGI Operation Timeout from the TKGI MC, see Generate Configuration File and Deploy Tanzu Kubernetes Grid Integrated Edition in Deploy Tanzu Kubernetes Grid Integrated Edition by Using the Configuration Wizard.

Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition Management Console v1.17.0 are also in Tanzu Kubernetes Grid Integrated Edition Management Console v1.17.1. For more information, see TKGI v1.17.0 Known Issues below.
TKGI MC v1.17.1 does not include any additional known issues.
Release Date: August 3, 2023
Release Details | | |
---|---|---|
Version | v1.17.0 | |
Release date | August 3, 2023 | |
Internal Component Versions | | |
Antrea | v1.7.0* | Release Notes |
cAdvisor | v0.39.1 | |
Containerd | Linux: v1.6.18*, Windows: v1.6.18* | |
CoreDNS | v1.9.3+vmware.11* | |
CSI Driver for vSphere | v3.0.2* | Release Notes |
etcd | v3.5.9* | |
Harbor | v2.8.2* | Release Notes |
Kubernetes | v1.26.5* | Release Notes |
Metrics Server | v0.6.1 | |
NCP | v4.1.1.0* | |
Percona XtraDB Cluster (PXC) (in BOSH pxc-release) | PXC: v8.0.31-23*, pxc-release: v1.0.8* | Release Notes: PXC, pxc-release |
UAA | v74.5.81* | |
Velero | v1.10.3* | Release Notes |
Wavefront | Wavefront Collector: v1.13.0*, Wavefront Proxy: v12.4* | |
Stemcell Compatibility | | |
Ubuntu Jammy stemcells | See VMware Tanzu Network. | |
Windows stemcells | v2019.61* or later | |
Interoperability | | |
Ops Manager | See VMware Tanzu Network. | |
VMware Aria Operations Management Pack for Kubernetes | v2.0* | Release Notes |
VMware Cloud Foundation (VCF) | v5.0**, v4.5.2** | Release Notes: v5.0, v4.5.2 |
VMware NSX | See VMware Product Interoperability Matrices***. | |
vSphere | See VMware Product Interoperability Matrices. | |
* Components marked with an asterisk have been updated.
** VCF v5.0 and VCF v4.5.2 are supported but have not been tested with TKGI v1.17.
*** Migration from NSX Management Plane API to NSX Policy API requires VMware NSX v4.0.1.1 or later. NSX v4.0.1.1 supports only 50% of NSX Management Plane API scale. To use Policy API at 100% of Management Plane API scale, you require NSX v4.1.1.
The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition v1.17.0 are from TKGI v1.16.2 and earlier TKGI v1.16 patches.
TKGI v1.17.0 has the following breaking changes:
Removals in Kubernetes v1.26: The following APIs are removed in Kubernetes v1.26:

- `v1alpha2` CRI API
- `v1beta1` flow control API group
- `v2beta2` HorizontalPodAutoscaler API

For information on other removals in Kubernetes v1.26, see Kubernetes Removals, Deprecations, and Major Changes in 1.26 in the Kubernetes Blog.
In-Tree vSphere Storage Volume Support: In-Tree vSphere Storage volume support has been entirely removed. The TKGI v1.17 upgrade automatically migrates TKGI clusters from in-tree vSphere storage to the CSI Driver for vSphere. VMware strongly recommends that you migrate your in-tree vSphere storage volumes to vSphere CSI volumes before upgrading to TKGI v1.17. For information on how to manually migrate In-Tree vSphere Storage volumes on existing TKGI clusters from In-Tree vSphere Storage to the automatically installed vSphere CSI Driver, see Migrate an In-Tree vSphere Storage Volume to the vSphere CSI Driver in Deploying and Managing Cloud Native Storage (CNS) on vSphere.
Warning: If you have TKGI-provisioned Windows worker clusters, do not activate the Upgrade all clusters errand before upgrading to the TKGI v1.17 tile. You cannot use the Upgrade all clusters errand because you must manually migrate each individual Windows worker cluster to the CSI Driver for vSphere. For more information, see Configure vSphere CSI for Windows in Deploying and Managing Cloud Native Storage (CNS) on vSphere.
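Before upgrading, you can check whether a cluster still has in-tree vSphere volumes. The following is a minimal sketch only, assuming kubectl is already targeting the cluster; the provisioner and driver identifiers shown are the standard in-tree and vSphere CSI names.

```bash
# List the provisioner behind each StorageClass and the driver behind each PV.
# In-tree vSphere storage reports "kubernetes.io/vsphere-volume";
# the vSphere CSI Driver reports "csi.vsphere.vmware.com".
kubectl get storageclass -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner
kubectl get pv -o custom-columns=NAME:.metadata.name,CSI-DRIVER:.spec.csi.driver,IN-TREE-PATH:.spec.vsphereVolume.volumePath
```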
TKGI v1.17.0 includes the following features:
The `tkgi upgrade-cluster` CLI command includes the following enhancements:

Note: These enhancements apply to `tkgi upgrade-cluster` only. For example, when using `tkgi upgrade-clusters`, the clusters can be upgraded in parallel, but the worker nodes within an upgrading cluster are always upgraded serially.
TKGI v1.17.0 includes the following compute profile enhancements:

- Supports customizing node pools. For more information, see the `node_pools` block in Creating and Managing Compute Profiles with the CLI (vSphere).
- Supports updating the `name` property, and adding a new node pool while deleting an existing node pool.

TKGI v1.17.0 includes the following CSI Driver for vSphere enhancements:
TKGI v1.17.0 includes the following additional features:
Supports VMware vSphere v8.0 and VMware vSAN 8.0. For more information, see Scenario 2: Upgrading to TKGI v1.17 and vSphere v8.0 in Upgrade Order for TKGI Environments on vSphere and VMware Product Interoperability Matrices.
The TKGI API and UAA connections now support TLS v1.3, in addition to TLS v1.2.
Supports configuring cluster-level Pod Security Admission (PSA). For more information, see Pod Security Admission in a TKGI Cluster.
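As a point of reference, the sketch below shows generic Kubernetes Pod Security Admission behavior driven by namespace labels; it is not the TKGI cluster-level configuration itself, and the namespace name and levels are illustrative. See Pod Security Admission in a TKGI Cluster for the settings TKGI exposes.

```bash
# Enforce the "restricted" PSA level and warn at "baseline" for one namespace.
kubectl label --overwrite namespace demo-apps \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/warn=baseline
```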
NSX Only: Enhances `tkgi delete-cluster` reliability by increasing the default TKGI Operation Timeout from 60 seconds to 120 seconds. For more information about configuring the TKGI Operation Timeout field on the TKGI Tile, see Networking in Installing Tanzu Kubernetes Grid Integrated Edition on vSphere with VMware NSX.
Upgrades the TKGI Database from MySQL v5.7 to MySQL v8. For information about the differences between MySQL v5.7 and MySQL v8, see MySQL Server Version Reference in the MySQL documentation.
Supports resizing clusters that have not been upgraded to the current TKGI control plane version. For more information, see Tasks Supported Following a TKGI Control Plane Upgrade in About TKGI Upgrades.
vSphere with VMware NSX only: Supports specifying a network profile for configuring a TKGI upgrade smoke test cluster. For more information, see Errands in Installing Tanzu Kubernetes Grid Integrated Edition on vSphere with VMware NSX.
Improves NSX resource clean-up when deleting a Kubernetes cluster in NSX Policy API mode.
Supports configuring additional NCP Network Profiles parameters: `nsx_v3.cookie_name`, `nsx_v3.members_per_medium_lbs`, `nsx_v3.members_per_small_lbs`, `nsx_v3.natfirewallmatch`, `nsx_v3.ncp_enforced_pool_member_limit`, and `nsx_v3.relax_scale_validation`. For more information, see cni_configurations Extensions Parameters in Creating and Managing Network Profiles.
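A minimal sketch of how such a profile is applied with the TKGI CLI. The profile file, cluster name, and hostname are illustrative, and the exact JSON nesting of the nsx_v3 parameters should be taken from Creating and Managing Network Profiles.

```bash
# Create a network profile from a JSON definition that sets the new nsx_v3
# parameters under its cni_configurations section (nesting per the docs),
# then reference the profile name when creating a cluster.
tkgi create-network-profile ./np-lb-tuning.json
tkgi create-cluster my-cluster \
  --external-hostname my-cluster.example.com \
  --plan small \
  --network-profile np-lb-tuning
```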
Supports configuring the Fluent Bit container memory limit. For more information, see Log Sink Resources in Installing TKGI or The Fluent Bit Pod restarts Due to Out-of-Memory Issue in Troubleshooting.
Increases the default expiration period of the `monitoring-metric-cert` certificate from one to four years.
Decreases the default Fluentd refresh interval from 60 seconds to 30 seconds. This ensures that all logs are forwarded to VMware vRealize Log Insight consistently.
Supports NSX Policy API at 100% of Management Plane API scale with NSX v4.1.1.
TKGI v1.17.0 resolves the following issues:

- Fixes a `NullPointerException` error when creating a Compute Profile configured with `instances: 0` and the `max_worker_instances` parameter.

The following TKGI features have been deprecated or removed from TKGI v1.17:
In-Tree vSphere Storage Volume Support: In-Tree vSphere Storage volume support has been entirely removed. For more information, see Breaking Changes above.
Google Cloud Platform: Support for the Google Cloud Platform (GCP) is deprecated. Support for GCP will be entirely removed in TKGI v1.19.
The `log_dropped_traffic` CNI Configuration parameter: In TKGI v1.17.0 and later, the `log_dropped_traffic` CNI Configuration parameter is ignored. To configure logging in a Network Profile, modify the `log_firewall_traffic` parameter. For more information, see `log_settings` in the `cni_configurations` Parameters section in Creating and Managing Network Profiles.
Flannel Support: Support for the Flannel Container Networking Interface (CNI) is deprecated. Support for Flannel will be entirely removed in TKGI v1.19. VMware recommends that you switch your Flannel CNI-configured clusters to the Antrea CNI. For more information about Flannel CNI deprecation, see About Switching from the Flannel CNI to the Antrea CNI in About Tanzu Kubernetes Grid Integrated Edition Upgrades.
SecurityContextDeny Admission Controller Support: TKGI support for the SecurityContextDeny admission controller will be removed in TKGI v1.18. SecurityContextDeny has been deprecated, and the Kubernetes community recommends the controller not be used. Pod security admission (PSA) is the preferred method for providing a more secure Kubernetes environment. For more information about PSA, see Pod Security Admission in TKGI.
TKGI v1.17.0 has the following known issues.
The VMware vSphere CSI Driver supports a limited set of VMware vSphere features. Before enabling the vSphere CSI Driver on a TKGI cluster, confirm the cluster and storage configuration are supported by the driver. For more information, see Unsupported Features and Limitations in Deploying and Managing Cloud Native Storage (CNS) on vSphere.
TKGI supports using a public cloud CSI Driver on a TKGI-provisioned cluster.
Installing a Public Cloud CSI Driver on a TKGI Cluster
If you plan to use a public cloud CSI Driver on a TKGI-provisioned cluster, VMware recommends you take additional steps before installing the CSI Driver:
For most public clouds, VMware recommends you follow the CSI Driver installation procedure recommended by the public cloud provider.
For installing the Azure CSI Driver on a TKGI cluster, VMware recommends you follow the procedure in the How to install Azure file/disk CSI driver onto TKGI 1.14 cluster knowledge base article in the VMware Tanzu Support Hub.
Managing a TKGI Cluster That Uses a Public Cloud CSI Driver
If you have enabled a public cloud CSI Driver on a TKGI cluster, you must take additional steps when deleting, upgrading, or updating the cluster:
Updating a Cluster on a Public Cloud
When updating a cluster that uses a public cloud CSI Driver:
To prepare a single-worker node cluster for updating:
Upgrading a Cluster on a Public Cloud
When upgrading a cluster that uses a public cloud CSI Driver:
To prepare a single-worker node cluster for upgrading:
Deleting a Cluster on a Public Cloud
When deleting a cluster that uses a public cloud CSI Driver:
This issue is fixed by using the July 21, 2023 or later releases of Tanzu Mission Control.
Tanzu Mission Control (TMC) is not compatible with Kubernetes v1.26 at the time of the TKGI v1.17 release and temporarily cannot manage TKGI v1.17 Kubernetes clusters. Interoperability between TMC and TKGI v1.17 is expected at a later time.
Refer to the VMware Tanzu Mission Control Release Notes for an announcement of compatibility with Kubernetes v1.26.
This issue has been resolved: VMware Aria Operations Management Pack for Kubernetes v2.0 provides interoperability with TKGI v1.17. For more information, see the VMware Aria Operations for Integrations Release Notes.
Description
Interoperability with VMware Aria Operations Management Pack for Kubernetes is temporarily unavailable.
VMware Aria Operations Management Pack for Kubernetes is currently not compatible with TKGI v1.17. Interoperability between VMware Aria Operations Management Pack for Kubernetes and TKGI v1.17 is expected at a later time.
Symptom
After you restore Ops Manager and the TKGI API VM from backup, TKGI functions normally, but your TKGI MC tabs include the following error: “…product ‘pivotal-container service’ is not deployed…”.
Explanation
TKGI MC is associated with an Ops Manager with a specific name. If you rename Ops Manager with a new name while restoring, your TKGI MC will not recognize the restored Ops Manager and cannot manage it.
Symptom
After clicking Apply Changes on the TKGI tile in an Azure environment, you experience an error ‘…could not execute “apply-changes”…’ with either of the following descriptions:
For example:
INFO | 2020-09-21 03:46:49 +0000 | Vessel::Workflows::Installer#run | Install product (apply changes)
2020/09/21 03:47:02 could not execute "apply-changes": installation failed to trigger: request failed: unexpected response from /api/v0/installations:
HTTP/1.1 500 Internal Server Error
Transfer-Encoding: chunked
Cache-Control: no-cache, no-store
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Date: Mon, 21 Sep 2020 17:51:50 GMT
Expires: Fri, 01 Jan 1990 00:00:00 GMT
Pragma: no-cache
Referrer-Policy: strict-origin-when-cross-origin
Server: Ops Manager
Strict-Transport-Security: max-age=31536000; includeSubDomains
X-Content-Type-Options: nosniff
X-Download-Options: noopen
X-Frame-Options: SAMEORIGIN
X-Permitted-Cross-Domain-Policies: none
X-Request-Id: f5fc99c1-21a7-45c3-7f39
X-Runtime: 9.905591
X-Xss-Protection: 1; mode=block
44
{"errors":{"base":["undefined method `location' for nil:NilClass"]}}
0
Explanation
The Azure CPI endpoint used by Ops Manager has been changed and your installed version of Ops Manager is not compatible with the new endpoint.
Workaround
Run the following Ops Manager CLI command:
om --skip-ssl-validation --username USERNAME --password PASSWORD --target https://OPSMAN-API curl --silent --path /api/v0/staged/director/verifiers/install_time/IaasConfigurationVerifier -x PUT -d '{ "enabled": false }'
Where:

- `USERNAME` is the account to use to run Ops Manager API commands.
- `PASSWORD` is the password for the account.
- `OPSMAN-API` is the IP address for the Ops Manager API.

For more information, see Error ‘undefined method location’ is received when running Apply Change on Azure in the VMware Tanzu Knowledge Base.
VMware vRealize Operations (vROPs) does not support Windows worker-based Kubernetes clusters and cannot be used to manage TKGI-provisioned Windows workers.
To monitor Windows-based worker node clusters with a Wavefront collector and proxy, you must first install Wavefront on the clusters manually, using Helm. For instructions, see the Wavefront section of the Monitoring Windows Worker Clusters and Nodes topic.
TKGI-provisioned Windows worker-based Kubernetes clusters inherit a Kubernetes limitation that prevents outbound ICMP communication from workers. As a result, pinging Windows workers does not work.
For information about this limitation, see Limitations > Networking in the Windows in Kubernetes documentation.
You can use Velero to back up stateless TKGI-provisioned Windows workers only. You cannot use Velero to back up stateful Windows applications. For more information, see Velero on Windows in Basic Install in the Velero documentation.
TKGI on Google Cloud Platform (GCP) does not support Tanzu Mission Control (TMC) integration, which is configured in the Tanzu Kubernetes Grid Integrated Edition tile > the Tanzu Mission Control pane.
If you intend to run TKGI on GCP, skip this pane when configuring the Tanzu Kubernetes Grid Integrated Edition tile.
The TMC Data Protection feature supports privileged TKGI containers only. For more information, see Plans in the Installing TKGI topic for your IaaS.
Windows worker-based Kubernetes clusters integrated with group Managed Service Account (gMSA) cannot be managed using compute profiles.
On vSphere with NSX-T networking you can use compute profiles with both Linux and Windows worker‑based Kubernetes clusters. On vSphere with Flannel networking, you can apply compute profiles only to Linux clusters.
TKGI CLI does not prevent accidentally reducing a cluster’s control plane node count using a compute profile.
Warning: Reducing a cluster’s control plane node count can destroy the cluster. Do not scale out or scale in existing control plane nodes by reconfiguring the TKGI tile or by using a compute profile. Reducing a cluster’s number of control plane nodes might remove a control plane node and cause the cluster to become inactive.
Symptom
After you delete a VM using the management console of your infrastructure provider, you notice a Windows worker node that had been on that VM is now in a `notReady` state.

Solution

To identify the leftover node:

1. Run `kubectl get no -o wide`.
2. Identify the nodes that are in a `notReady` state and have the same IP address as another node in the list.

To manually delete a `notReady` node, run `kubectl delete node NODE-NAME`, where `NODE-NAME` is the name of the node in the `notReady` state.
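A compact version of the same check, as a sketch; it assumes kubectl already targets the affected cluster, and the node name is a placeholder.

```bash
# Show node status and IP addresses, filter for NotReady entries,
# then remove the leftover node once its name is confirmed.
kubectl get nodes -o wide | grep -i notready
kubectl delete node LEFTOVER-NODE-NAME
```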
Symptom
You experience a “502 Bad Gateway” error from the NSX load balancer after you log in to OIDC.
Explanation
A large response header has exceeded your NSX-T load balancer maximum response header size. The default maximum response header size is 10,240 characters and should be resized to 50,000.
Workaround
If you experience this issue, manually reconfigure your NSX-T `request_header_size` and `response_header_size` to 50,000 characters. For information about configuring NSX-T default header sizes, see OIDC Response Header Overflow in the Knowledge Base.
You must configure a global proxy in the Tanzu Kubernetes Grid Integrated Edition tile > Networking pane before you create any Windows workers that use the proxy.
You cannot change the proxy configuration for Windows workers in an existing cluster.
For vSphere with NSX-T, the HTTP Proxy password field does not support the following special characters: `&` or `;`.
Symptom
You receive the following error after modifying your existing Harbor installation’s storage configuration:
Error response from daemon: manifest for ... not found: manifest unknown: manifest unknown
Explanation
Harbor does not support modifying an existing Harbor installation’s storage configuration.
Workaround
To modify your Harbor storage configuration, re-install Harbor. Before starting Harbor, configure the new Harbor installation with the desired configuration.
Symptom
Permissions are removed from your cluster’s files and processes after resizing the persistent disk during a cluster upgrade. The ingress controller statefulset fails to start.
Explanation
When resizing a persistent disk, BOSH migrates the data from the old disk to the new disk but does not copy the files’ extended attributes.
Workaround
To resolve the problem, complete the steps in [Ingress controller statefulset fails to start after resize of worker nodes with permission denied](https://community.pivotal.io/s/article/5000e00001nCJxT1603094435795?language=en_US) in the VMware Tanzu Knowledge Base.
Symptom
You experience issues when configuring a load balancer for a multi-control plane node Kubernetes cluster or creating a service of type `LoadBalancer`. Additionally, in the Azure portal, the VM > Networking page does not display any inbound and outbound traffic rules for your cluster VMs.
Explanation
As part of configuring the Tanzu Kubernetes Grid Integrated Edition tile for Azure, you enter Default Security Group in the Kubernetes Cloud Provider pane. When you create a Kubernetes cluster, Tanzu Kubernetes Grid Integrated Edition automatically assigns this security group to each VM in the cluster. However, on Azure the automatic assignment might not occur.
As a result, your inbound and outbound traffic rules defined in the security group are not applied to the cluster VMs.
Workaround
If you experience this issue, manually assign the default security group to each VM NIC in your cluster.
Symptom
One of your plan IDs is one character longer than your other plan IDs.
Explanation
In TKGI, each plan has a unique plan ID. A plan ID is normally a UUID consisting of 32 alphanumeric characters and 4 hyphens. However, the Plan 4 ID consists of 33 alphanumeric characters and 4 hyphens.
Solution
You can safely configure and use Plan 4. The length of the Plan 4 ID does not affect the functionality of Plan 4 clusters.
If you require all plan IDs to have identical length, do not activate or use Plan 4.
Symptom
After you stop one instance in a multiple-instance database cluster, the cluster stops, or communication between the remaining databases times out, and the entire cluster becomes unreachable.
The following might be in your UAA log:
WSREP has not yet prepared node for application use
Explanation
The database cluster is unable to recover automatically because a member is no longer available to reconcile quorum.
Symptom
Backing up vSphere persistent volumes using Velero fails and your Velero backup log includes the following error:
rpc error: code = Unknown desc = Failed during IsObjectBlocked check: Could not translate selfLink to CRD name
Explanation
This is a known issue when backing up clusters on Kubernetes v1.20 and later using the Velero Plugin for vSphere v1.1.0 or earlier.
Workaround
To resolve the problem, complete the steps in Velero backups of vSphere persistent volumes fail on Kubernetes clusters version 1.20 or higher (83314) in the VMware Tanzu Knowledge Base.
Symptom
The first time that you try to create two Windows clusters at the same time, the creation of one of the clusters fails. If you run `pks cluster CLUSTER-NAME` to examine the last action taken on the cluster, you see the following:
Last Action: Create Last Action State: failed Last Action Description: Instance provisioning failed: There was a problem completing your request. … operation: create, error-message: Failed to acquire lock … locking task id is 111, description: ‘create deployment’
Explanation
This is a known issue that occurs the first time that you create two Windows clusters concurrently.
Workaround
Recreate the failed cluster. This issue only occurs the first time that you create two Windows clusters concurrently.
Symptom
After running `tkgi delete-cluster` and cluster deletion has completed, the deleted cluster continues to be listed when running `tkgi clusters`.
Workaround
You must manually remove the deleted cluster using a customized version of the ncp_cleanup script. For more information, see Deleting a Tanzu Kubernetes Grid Integrated Edition cluster with “tkgi delete-cluster” stuck “in progress” status in the VMware Tanzu Knowledge Base.
Symptom
After you uninstall TKGI, then reinstall TKGI in the same environment, BOSH Director logs errors similar to the following:
.../gems/bosh-director-0.0.0/lib/bosh/director/deployment_plan/cloud_manifest_parser.rb:120:in `parse_vm_extensions': Duplicate vm extension name 'disk_enable_uuid' (Bosh::Director::DeploymentDuplicateVmExtensionName)
Explanation
The `pivotal-container-service` cloud-config was not removed when you uninstalled the TKGI tile, and it remained active. When you reinstalled the TKGI tile, an additional `pivotal-container-service` cloud-config was created, causing the metrics_server to fall into a crash-loop state.
Workaround
You must manually remove the `pivotal-container-service` cloud-config after removing your TKGI deployment, including after removing the TKGI tile from Ops Manager.
For more information, see “Duplicate vm extension name” error when metrics_server runs on Director VM in Tanzu Kubernetes Grid Integrated Edition in the VMware Tanzu Community Knowledge Base.
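A minimal sketch of removing the leftover cloud-config with the BOSH CLI, assuming the CLI is targeted at the Ops Manager BOSH Director; the environment alias is illustrative.

```bash
# List cloud-configs to confirm the stale entry exists, then delete it.
bosh -e my-env configs --type=cloud
bosh -e my-env delete-config --type=cloud --name=pivotal-container-service
```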
Symptom
Your TKGI logs include the following error:
'uaa'. Errors are:- Error filling in template 'uaa.yml.erb' (line 59: Client redirect-uri is invalid: uaa.clients.pks_cli.redirect-uri Client redirect-uri is invalid: uaa.clients.pks_cluster_client.redirect-uri)
Explanation
The TKGI API fully-qualified domain name (FQDN) for your cluster contains leading or trailing whitespace.
Workaround
Do not include whitespace in the TKGI tile API Hostname (FQDN) field.
The TMC Cluster Data Protection Backup fails in TKGI environments upgraded from an earlier version.
Symptom
The TMC Cluster Data Protection Backup fails to back up your existing clusters and logs the following error:
error executing custom action (groupResource=customresourcedefinitions.apiextensions.k8s.io, namespace=, name=ncpconfigs.nsx.vmware.com): rpc error: code = Unknown desc = error fetching v1beta1 version of ncpconfigs.nsx.vmware.com: the server could not find the requested resource
Explanation
Kubernetes v1.22 disallows the `spec.preserveUnknownFields: true` configuration in your existing clusters, and the creation of a v1 CustomResourceDefinitions configuration fails.
The TMC Cluster Data Protection Restore operation can fail when restoring multiple Antrea resources.
Symptom
The TMC Cluster Data Protection Restore fails and logs errors that requests to restore the admission webhook have been denied.
Explanation
Velero has encountered a race condition while operating a resource. For more information, see Allow customizing restore order for Kubernetes controllers and their managed resources in the Velero GitHub repository.
TKGI does not support environments where there are multiple matching networks, such as a mixed CVDS/NVDS environment.
Symptom
TKGI logs errors similar to the following in an environment with multiple matching networks:
LastOperationstatus='failed', description='Instance provisioning failed:
There was a problem completing your request. Please contact your operations team providing the following information:
service: p.pks, service-instance-guid: ..., broker-request-id: ..., task-id: ..., operation: create,
error-message: Unknown CPI error 'Unknown' with message 'undefined method `mob' for <VimSdk::Vim::OpaqueNetwork:' in create_vm' CPI method
Explanation
TKGI cannot identify which of the matching networks you intend to use and has selected the wrong network.
Occasionally, `tkgi update-cluster` hangs while updating a Windows worker node instance, and the BOSH task cannot finish and exits.
Symptom
The `ovsdb-server` service has stopped, but other processes report that it is running.
Explanation
The `ovsdb-server.pid` file uses the pid for a process that is not the `ovsdb-server`.
To confirm that this is the root cause for `tkgi update-cluster` to hang:

1. To verify that the `ovsdb-server` service has actually stopped, run the PowerShell `Get-services` command on the Windows worker node.
2. To verify that other processes report the `ovsdb-server` service is still running:
    1. Review the ovsdb-server `job-service-wrapper.err.log` log file, located at `C:\var\vcap\sys\log\openvswitch-windows\ovsdb-server\job-service-wrapper.err.log`.
    2. Confirm that, after the flushing processes, the log includes an error similar to the following:
Pid-Guard : ovsdb-server is already runing, please stop it first
At C:\var\vcap\jobs\openvswitch-windows\bin\ovsdb-server_ctl.ps1:30 char:5
+ Pid-Guard $PIDFILE "ovsdb-server"
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: ( [Write-Error], WriteErrorException
+ FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,Pid-Guard
To verify the root cause:
Run the following PowerShell commands on the Windows worker node:
$RUN_DIR = "C:\var\vcap\sys\run\openvswitch-windows"
$PIDFILE = "$RUN_DIR\ovsdb-server.pid"
$pid1 = Get-Content $PidFile -First 1
echo $pid1
$rst = Get-Process -Id $pid1 -ErrorAction SilentlyContinue
echo $rst
Confirm that the returned `ProcessName` is not `ovsdb-server`.

Workaround
To resolve this issue for a single Windows worker:
Run the following:
rm C:\var\vcap\sys\run\openvswitch-windows\ovsdb-server.pid
Wait for the `ovsdb-server` process to start.

If LDAP is enabled, Harbor private projects are inaccessible after upgrading to TKGI v1.13.0. For more information, see Private projects become inaccessible after upgrading Harbor for TKGI to v2.4.x with LDAP feature enabled in the VMware Tanzu Knowledge Base.
Microsoft changed Microsoft Windows’ support for tar file commands in the January 2022 Microsoft Windows security patch.
Packaging scripts that use tar commands for Windows worker-based Kubernetes Cluster deployments can fail after the Microsoft tar command patch update has been applied.
The BOSH agent used by vSphere stemcells built by stembuild v2019.43 and earlier uses tar commands that are no longer supported and will fail if the Microsoft Windows security patch has been applied.
Workaround
stembuild v2019.44 and later include a version of the BOSH agent that does not use unsupported tar commands.
If you use vSphere stemcells, use stembuild 2019.44 or later to avoid the BOSH agent tar error.
TKGI supports clusters that use NSGroup Policy API resources, but Policy API NSGroups created in one NSX version will be empty after upgrading NSX to a newer version.
Workaround
BOSH reconfigures a deployment’s NSGroup members if the deployment is redeployed.
After upgrading NSX, redeploy affected deployments to reconfigure their NSGroup members:
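A minimal sketch of one way to redeploy a cluster deployment with the BOSH CLI, assuming the CLI is targeted at the TKGI BOSH Director; the environment alias and deployment name are illustrative.

```bash
# Find the affected deployments, then redeploy each one with its current
# manifest so BOSH reconfigures the NSGroup members.
# TKGI cluster deployments are typically named service-instance_<cluster-UUID>.
bosh -e my-env deployments
bosh -e my-env -d service-instance_UUID manifest > /tmp/manifest.yml
bosh -e my-env -d service-instance_UUID deploy /tmp/manifest.yml
```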
This issue is fixed in TKGI v1.17.0.
The API Server audit logs include clear-text tokens on the master nodes of clusters that use the default audit policy.
Description
JSON Web Tokens in the `tokenrequests` API body are being written to the API Server audit log `/var/vcap/sys/log/kube-apiserver/audit/log` on TKGI clusters.
This issue is fixed in TKGI v1.17.2.
In AWS environments, TKGI does not support the Antrea CNI `Egress` feature. For example, the `Egress` resource `egressIP` and `externalIPPool` fields in an `antrea-config` configuration are ignored for clusters on AWS, including both single-AZ and multiple-AZ clusters. For more information about the Antrea `Egress` feature, see What is Egress in the Antrea documentation.
Note: You must grant the AWS Worker Instance Profile additional AWS Identity and Access Management (IAM) permissions before using the Antrea Egress feature with worker nodes on AWS. For more information, see Prepare AWS Worker Instance Profile Permissions in General Troubleshooting.
Cluster Occasionally Fails to Send the `cluster_name` Tag to Logging after Cluster Upgrade

This issue is fixed in TKGI v1.17.1.
Occasionally, a cluster might fail to send the `cluster_name` tag to logging after being upgraded.
Description
After upgrading a cluster, the Name `record_modifier` filter will occasionally be missing from the cluster’s fluent-bit ConfigMap, and the `cluster_name` is not included in log entries. This problem occurs if the sink-controller process configures the cluster before the observability-manager starts, which overwrites the desired configuration.
This issue is fixed in TKGI v1.17.2.
The cluster update operations fail if you reuse the same key in different tags in a cluster.
Description
Tag keys must be unique within a cluster, for example, `key1:value1, key2:value2`. TKGI does not prevent you from reusing the same key for multiple tags in a cluster, for example, `key1:value1, key1:value2`. However, cluster update operations fail.
Workaround
Use different keys for the tags within a cluster, for example, `key1:value1, key2:value2`. For more information, see Tagging Rules.
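A minimal sketch of applying unique tag keys with the TKGI CLI; the cluster name and tag values are illustrative, and the `--tags` format follows Tagging Rules.

```bash
# Each key appears only once across the cluster's tags.
tkgi update-cluster my-cluster --tags "env:prod,team:platform"
```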
This issue is fixed in TKGI v1.17.2.
When upgrading a cluster, the node drain operation ignores the pod shutdown grace period specified in the deployment plan on the TKGI Tile.
Release Date: August 3, 2023
Tanzu Kubernetes Grid Integrated Edition Management Console provides an opinionated installation of TKGI.
Note: The component versions supported by TKGI Management Console might differ from or be more limited than the versions supported by TKGI.
Element | Details | |
---|---|---|
Version | v1.17.0 | |
Release date | August 3, 2023 | |
Installed TKGI version | v1.17.0 | |
Installed Ops Manager version | v3.0.13* | Release Notes |
Component | Version | |
Installed Kubernetes version | v1.26.5* | Release Notes |
Installed Harbor Registry version | v2.8.2* | Release Notes |
Ubuntu Jammy stemcell | v1.179* | Release Notes |
* Components marked with an asterisk have been updated.
The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition Management Console v1.17.0 are from TKGI MC v1.16.2 and earlier TKGI v1.16 patches.
TKGI Management Console v1.17.0 includes the following features:
Upgrades the TKGI MC database from MySQL v5.7 to MySQL v8. For information about the differences between MySQL v5.7 and MySQL v8, see MySQL Server Version Reference in the MySQL documentation.
Prevents deploying multiple TKGI instances on a vCenter Server. By using the TKGI MC, you can now deploy only one instance of TKGI on a vCenter Server.
TKGI Management Console v1.17.0 resolves the following issues:
The following TKGI features have been deprecated or removed from TKGI Management Console v1.17:
The Tanzu Kubernetes Grid Integrated Edition Management Console v1.17.0 has the following known issues:
Symptom
The Tanzu Kubernetes Grid Integrated Edition Management Console integration to vRealize Log Insight does not support connections to the HTTPS port on the vRealize Log Insight server.
Workaround
1. Open `/lib/systemd/system/pks-loginsight.service` in a text editor.
2. Add `-e LOG_SERVER_ENABLE_SSL_VERIFY=false`.
3. Set `-e LOG_SERVER_USE_SSL=true`.
The resulting file should look like the following example:
ExecStart=/bin/docker run --privileged --restart=always --network=pks
-v /var/log/journal:/var/log/journal
--name=pks-loginsight
-e TYPE=gear2-vm
-e LOG_SERVER_HOST=${LOGINSIGHT_HOST}
-e LOG_SERVER_PORT=${LOGINSIGHT_PORT}
-e LOG_SERVER_ENABLE_SSL_VERIFY=false
-e LOG_SERVER_USE_SSL=true
-e LOG_SERVER_AGENT_ID=${LOGINSIGHT_ID}
pksoctopus/vrli-journald:v07092019
4. Save the file and run `systemctl daemon-reload`.
5. Run `systemctl restart pks-loginsight.service`.

Tanzu Kubernetes Grid Integrated Edition Management Console can now send logs to the HTTPS port on the vRealize Log Insight server.
Symptom
If you enable vSphere HA on a cluster, if the TKGI Management Console appliance VM is running on a host in that cluster, and if the host reboots, vSphere HA recreates a new TKGI Management Console appliance VM on another host in the cluster. Due to an issue with vSphere HA, the `ovfenv` data for the newly created appliance VM is corrupted and the new appliance VM does not boot up with the correct network configuration.
Workaround
Wait until the Reconfigure virtual machine task has run on the appliance VM.

Symptom
Some file arguments in Kubernetes profiles are base64 encoded. When the management console displays the Kubernetes profile, some file arguments are not decoded.
Workaround
Run `echo "$content" | base64 --decode`, where `$content` is the base64-encoded file argument, to view the decoded value.
Symptom
If you create network profiles and then try to apply them in the Create Cluster page, the new profiles are not available for selection.
Workaround
Log out of the management console and log back in again.
Symptom
In the cluster summary page, only the default IP pool, pod IP block, and node IP block values are displayed, rather than the real-time values from the associated network profile.
Workaround
None
Symptom
You receive the following error after modifying your existing Harbor installation’s storage configuration:
Error response from daemon: manifest for ... not found: manifest unknown: manifest unknown
Explanation
Harbor does not support modifying an existing Harbor installation’s storage configuration.
Workaround
To modify your Harbor storage configuration, re-install Harbor. Before starting Harbor, configure the new Harbor installation with the desired configuration.
Symptom
After upgrading Ops Manager, your Management Console does not recognize a Windows stemcell imported when using the prior version of Ops Manager.
Workaround
If your Management Console does not recognize a Windows stemcell after upgrading Ops Manager:
Symptom
After you create a cluster, Tanzu Mission Control does not include the cluster in cluster lists. You have a “Resource not found” error similar to the following in your BOSH logs:
Cluster Name in TMC: cluster-1
Cluster Name Prefix: tkgi-my-prefix-
Group Name in TMC: my-prefix-clusters
Cluster Description in TMC: VMware Enterprise PKS Attaching cluster ''tkgi-my-prefix-cluster-1'' to TMC
Fetching token successful
request POST:/v1alpha1/clusters,
response 404 Not Found:{"error":"Resource not found - clustergroup(my-prefix-clusters)
org id(d859dc9f-g622-426d-8c91-939a9f13dea9)",
"code":5,"message":"Resource not found - clustergroup(my-prefix-clusters)
Explanation
The cluster group you assign a cluster to must be defined in Tanzu Mission Control before you assign your cluster to the cluster group in the TKGI Management Console.
Workaround
To resolve the problem, complete the steps in Attaching a Tanzu Kubernetes Grid Integrated (TKGI) cluster to Tanzu Mission Control (TMC) fails with “Resource not found - clustergroup(cluster-group-name)” in the VMware Tanzu Knowledge Base.