What's New

These Tanzu Kubernetes Grid v1.5.2 Release Notes do not apply to the current Tanzu Kubernetes Grid v1.5 patch release. For the latest release notes, see VMware Tanzu Kubernetes Grid 1.5.3 Release Notes.

WARNING: VMware recommends not installing or upgrading to Tanzu Kubernetes Grid v1.5 until a future patch release, due to a bug in the etcd versions included in the Kubernetes versions used by TKG v1.5.0-v1.5.2. For more information, see Known Issues below.

Except where noted, these release notes apply to v1.5.0, v1.5.1, and v1.5.2 of Tanzu Kubernetes Grid.

  • You can register management and workload clusters with Tanzu Mission Control. See Register Your Management Cluster with Tanzu Mission Control.
  • You can install Tanzu Application Platform (TAP) v1.0.1 on workload clusters created by Tanzu Kubernetes Grid, use TAP to deploy apps to those clusters, and manage the TAP-deployed applications with tanzu apps commands.
  • (vSphere) You can create workload clusters with Windows OS worker nodes. See Deploying a Windows Cluster.
  • Different workload clusters can run different versions of the same user-managed package, including the latest version and the versions of the package in your two previous installations of Tanzu Kubernetes Grid. See User-Managed Packages.
  • (vSphere) Clusters that have NSX Advanced Load Balancer (ALB) as their control plane API endpoint server can use an external identity provider for login authentication, via Pinniped.
  • (vSphere) Supports Antrea NodePortLocal mode. See L7 Ingress in NodePortLocal Mode.
  • Crash Diagnostics (Crashd) collects information about clusters and infrastructure on Microsoft Azure as well as on vSphere and Amazon EC2.
  • (Azure) Supports NVIDIA GPU machine types based on Azure NC-, NV-, and NVv3-series VMs for control plane and worker VMs. See GPU-Enabled Clusters in the Cluster API Provider Azure documentation.
  • (Azure) You can now configure Azure management and workload clusters to be private, which means their API server uses an Azure internal load balancer (ILB) and is therefore only accessible from within the cluster’s own VNET or peered VNETs. See Azure Private Clusters.
  • Tanzu CLI (see the example sketch after this list):
    • tanzu secret registry commands manage secrets to enable cluster access to a private container registry. See Configure Authentication to a Private Container Registry.
    • tanzu config set and tanzu config unset commands activate and deactivate CLI features and manage persistent environment variables. See Tanzu CLI Configuration.
    • tanzu plugin sync command discovers and downloads new CLI plugins that are associated with either a newer version of Tanzu Kubernetes Grid, or a package installed on your management cluster that your local CLI does not know about, for example if another user installed it. See Sync New Plugins.
    • mc is now available as an alias for management-cluster in Tanzu CLI commands. See Tanzu CLI Command Reference.
    • -v and -f flags to tanzu package installed update enable updating a package's configuration without updating its version.
    • -p flag to tanzu cluster scale lets you specify a node pool when scaling node-pool nodes. See Update Node Pools.
    • The --machine-deployment-base option to tanzu cluster node-pool set specifies a base MachineDeployment object from which to create a new node pool.
    • (Amazon EC2) tanzu management-cluster permissions aws generate-cloudformation-template command retrieves the CloudFormation template to create the IAM resources required by Tanzu Kubernetes Grid's account on AWS. See Permissions Set by Tanzu Kubernetes Grid.
  • Installer interface:
    • Export Configuration button in the installer interface lets you save the management cluster configuration file to a location of your choice before you deploy the cluster.
    • Disable Verification checkbox disables certificate thumbprint verification when configuring access to your vCenter Server, equivalent to setting VSPHERE_INSECURE: true in the cluster configuration file.
    • Browse File button lets you specify an SSH public key file as an alternative to pasting the key into a text box.
    • (Amazon EC2) Automate creation of AWS CloudFormation Stack option retrieves the CloudFormation template to create the IAM resources required by Tanzu Kubernetes Grid's account on AWS. See Permissions Set by Tanzu Kubernetes Grid.
  • Configuration variables:
    • CONTROL_PLANE_MACHINE_COUNT and WORKER_MACHINE_COUNT configuration variables customize management clusters, in addition to workload clusters.
    • CLUSTER_API_SERVER_PORT sets the port number of the Kubernetes API server, overriding default 6443, for deployments without NSX Advanced Load Balancer.
    • ANTREA_DISABLE_UDP_TUNNEL_OFFLOAD disables Antrea's UDP checksum offloading, to avoid known issues with underlay network and physical NIC network drivers. See Antrea CNI Configuration.
    • (Azure) Configuration variable AZURE_ENABLE_ACCELERATED_NETWORKING toggles Azure accelerated networking. It defaults to true; you can set it to false for VMs with more than 4 CPUs.
  • New Kubernetes versions:
    • 1.22.5
    • 1.21.8
    • 1.20.14
  • Addresses security vulnerabilities:
    • CVE-2022-0185
    • CVE-2021-4034
  • The version numbering scheme for Tanzu Framework changed when the project became open-source. Previously, Tanzu Framework version numbers matched the Tanzu Kubernetes Grid versions that included them. Tanzu Kubernetes Grid v1.5 uses Tanzu Framework v0.11.1.
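
The following sketch illustrates a few of the new Tanzu CLI commands listed above. It is illustrative only; names such as my-package, my-cluster, and my-node-pool are placeholders, and the authoritative flag spellings are in the Tanzu CLI Command Reference:

    # Persist an environment variable across Tanzu CLI sessions
    tanzu config set env.CLUSTER_API_SERVER_PORT 443

    # Discover and download new plugins associated with the management cluster
    tanzu plugin sync

    # Update an installed package's configuration without changing its version
    tanzu package installed update my-package -f my-values.yaml --namespace my-namespace

    # Scale the worker nodes of a specific node pool
    tanzu cluster scale my-cluster -p my-node-pool --worker-machine-count 3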

Supported Kubernetes Versions in Tanzu Kubernetes Grid v1.5

Each version of Tanzu Kubernetes Grid adds support for new Kubernetes versions. This version also supports some Kubernetes versions that shipped with previous versions of Tanzu Kubernetes Grid.

Tanzu Kubernetes Grid Version | Provided Kubernetes Versions | Supported in v1.5?
--- | --- | ---
1.5.2, 1.5.1, 1.5.0 | 1.22.5, 1.21.8, 1.20.14 | YES, YES, YES
1.4.2 | 1.21.8, 1.20.14, 1.19.16* | YES, YES, NO
1.4.0 | 1.21.2, 1.20.8, 1.19.12 | YES, YES, NO
1.3.1 | 1.20.5, 1.19.9, 1.18.17 | YES, NO, NO
1.3.0 | 1.20.4, 1.19.8, 1.18.16, 1.17.16 | YES, NO, NO, NO

* This is the last version update for the 1.19.x version line. The 1.19.x version line will no longer be updated in Tanzu Kubernetes Grid.

Product Snapshot for Tanzu Kubernetes Grid v1.5

Tanzu Kubernetes Grid v1.5 supports the following infrastructure platforms and operating systems (OSs), as well as cluster creation and management, networking, storage, authentication, backup and migration, and observability components. The component versions listed in parentheses are included in Tanzu Kubernetes Grid v1.5. For more information, see Component Versions.

 | vSphere | Amazon EC2 | Azure
--- | --- | --- | ---
Infrastructure platform | vSphere 6.7U3, vSphere 7, VMware Cloud on AWS***, Azure VMware Solution | Native AWS | Native Azure
CLI, API, and package infrastructure | Tanzu Framework v0.11.1 | Tanzu Framework v0.11.1 | Tanzu Framework v0.11.1
Cluster creation and management | Core Cluster API (v1.0.1), Cluster API Provider vSphere (v1.0.2) | Core Cluster API (v1.0.1), Cluster API Provider AWS (v1.2.0) | Core Cluster API (v1.0.1), Cluster API Provider Azure (v1.0.1)
Kubernetes node OS distributed with TKG | Photon OS 3, Ubuntu 20.04 | Amazon Linux 2, Ubuntu 20.04 | Ubuntu 18.04, Ubuntu 20.04
Bring your own image | Photon OS 3, Red Hat Enterprise Linux 7, Ubuntu 18.04, Ubuntu 20.04, Windows 2019 | Amazon Linux 2, Ubuntu 18.04, Ubuntu 20.04 | Ubuntu 18.04, Ubuntu 20.04
Container runtime | Containerd (v1.5.7) | Containerd (v1.5.7) | Containerd (v1.5.7)
Container networking | Antrea (v1.2.3), Calico (v3.19.1) | Antrea (v1.2.3), Calico (v3.19.1) | Antrea (v1.2.3), Calico (v3.19.1)
Container registry | Harbor (v2.3.3) | Harbor (v2.3.3) | Harbor (v2.3.3)
Ingress | NSX Advanced Load Balancer Essentials and Avi Controller (v20.1.3 and v20.1.6)*, Contour (v1.17.2) | Contour (v1.17.2) | Contour (v1.17.2)
Storage | vSphere Container Storage Interface (v2.4.1**) and vSphere Cloud Native Storage | In-tree cloud providers only | In-tree cloud providers only
Authentication | OIDC via Pinniped (v0.12.0), LDAP via Pinniped (v0.12.0) and Dex | OIDC via Pinniped (v0.12.0), LDAP via Pinniped (v0.12.0) and Dex | OIDC via Pinniped (v0.12.0), LDAP via Pinniped (v0.12.0) and Dex
Observability | Fluent Bit (v1.7.5), Prometheus (v2.27.0), Grafana (v7.5.7) | Fluent Bit (v1.7.5), Prometheus (v2.27.0), Grafana (v7.5.7) | Fluent Bit (v1.7.5), Prometheus (v2.27.0), Grafana (v7.5.7)
Backup and migration | Velero (v1.7.0) | Velero (v1.7.0) | Velero (v1.7.0)

NOTES:

  • * NSX Advanced Load Balancer Essentials is supported on vSphere 6.7U3, vSphere 7, and VMware Cloud on AWS. You can download it from the Download VMware Tanzu Kubernetes Grid page.
  • ** Version of vsphere_csi_driver. For a full list of vSphere Container Storage Interface components included in the Tanzu Kubernetes Grid v1.5 release, see Component Versions.
  • *** For a list of VMware Cloud on AWS SDDC versions that are compatible with this release, see the VMware Product Interoperability Matrix.

For a full list of Kubernetes versions that ship with Tanzu Kubernetes Grid v1.5, see Supported Kubernetes Versions in Tanzu Kubernetes Grid v1.5 above.

Component Versions

The Tanzu Kubernetes Grid v1.5.2 release includes the following software component versions:

  • aad-pod-identity: v1.8.0+vmware.1 *
  • addons-manager: v1.5.0_vmware.1-tkg.4 *
  • ako-operator: v1.5.0_vmware.5 *
  • alertmanager: v0.22.2+vmware.1
  • antrea: v1.2.3+vmware.4 *
  • cadvisor: v0.39.1+vmware.1
  • calico_all: v3.19.1+vmware.1 *
  • carvel-secretgen-controller: v0.7.1+vmware.1 *
  • cloud-provider-azure: v0.7.4+vmware.1
  • cloud_provider_vsphere: v1.22.4+vmware.1 *
  • cluster-api-provider-azure: v1.0.1+vmware.1 *
  • cluster_api: v1.0.1+vmware.1 *
  • cluster_api_aws: v1.2.0+vmware.1 *
  • cluster_api_vsphere: v1.0.2+vmware.1 *
  • cni_plugins: v0.9.1+vmware.8 *
  • configmap-reload: v0.5.0+vmware.2 *
  • containerd: v1.5.7+vmware.1 *
  • contour: v1.17.2+vmware.1, v1.12.2+vmware.1 *
  • coredns: v1.8.4+vmware.7 *
  • crash-diagnostics: v0.3.7+vmware.5 *
  • cri_tools: v1.21.0+vmware.7 *
  • csi_attacher: v3.3.0+vmware.1 *
  • csi_livenessprobe: v2.4.0+vmware.1 *
  • csi_node_driver_registrar: v2.3.0+vmware.1 *
  • csi_provisioner: v3.0.0+vmware.1 *
  • dex: v2.30.2+vmware.1 *
  • envoy: v1.19.1+vmware.1, v1.18.4+vmware.1 *
  • external-dns: v0.10.0+vmware.1 *
  • etcd: v3.5.0+vmware.7 *
  • fluent-bit: v1.7.5+vmware.2 *
  • gangway: v3.2.0+vmware.2
  • grafana: v7.5.7+vmware.2 *
  • harbor: v2.3.3+vmware.1 *
  • image-builder: v0.1.11+vmware.3 *
  • imgpkg: v0.18.0+vmware.1 *
  • jetstack_cert-manager: v1.5.3+vmware.2 *
  • k8s-sidecar: v1.12.1+vmware.2 *
  • k14s_kapp: v0.42.0+vmware.1 *
  • k14s_ytt: v0.35.1+vmware.1 *
  • kapp-controller: v0.30.0+vmware.1 *
  • kbld: v0.31.0+vmware.1 *
  • kube-state-metrics: v1.9.8+vmware.1
  • kube-vip: v0.3.3+vmware.1
  • kube_rbac_proxy: v0.8.0+vmware.1
  • kubernetes: v1.22.5+vmware.1-tkg.4 *
  • kubernetes-csi_external-resizer: v1.3.0+vmware.1
  • kubernetes-sigs_kind: v1.22.5+vmware.1_v0.11.1 *
  • kubernetes_autoscaler: v1.22.0+vmware.1, v1.21.0+vmware.1 *
  • load-balancer-and-ingress-service: v1.6.1+vmware.4 *
  • metrics-server: v0.5.1+vmware.1 *
  • multus-cni: v3.7.1_vmware.2
  • pinniped: v0.12.0+vmware.1 *
  • prometheus: v2.27.0+vmware.1
  • prometheus_node_exporter: v1.1.2+vmware.1
  • pushgateway: v1.4.0+vmware.1
  • standalone-plugins-package: v0.11.2-standalone-plugins *
  • sonobuoy: v0.54.0+vmware.1 *
  • tanzu-framework: v0.11.2 *†
  • tanzu-framework-addons: v0.11.2 *†
  • tanzu-framework-management-packages: v0.11.2 *†
  • tkg-bom: v1.5.2 *
  • tkg-core-packages: v1.22.5+vmware.1-tkg.3 *
  • tkg-standard-packages: v1.5.2 *
  • tkg_telemetry: v1.5.0+vmware.1 *
  • velero: v1.7.0+vmware.1 *
  • velero-plugin-for-aws: v1.3.0+vmware.1 *
  • velero-plugin-for-microsoft-azure: v1.3.0+vmware.1 *
  • velero-plugin-for-vsphere: v1.3.0+vmware.1 *
  • vendir: v0.23.0+vmware.1 *
  • vsphere_csi_driver: v2.4.1+vmware.1 *
  • windows-resource-bundle: v1.22.3+vmware.1-tkg.1

VMware recommends that you install or upgrade to Tanzu Kubernetes Grid v1.5.2 rather than to earlier v1.5 patch versions.

* Indicates a version bump or new component since v1.4.1, which is the latest previous release.

† The version numbering scheme for Tanzu Framework changed when the project became open-source. Previously, Tanzu Framework version numbers matched the Tanzu Kubernetes Grid versions that included them.

For a complete list of software component versions that ship with Tanzu Kubernetes Grid v1.5.2, see ~/.config/tanzu/tkg/bom/tkg-bom-v1.5.2.yaml and ~/.config/tanzu/tkg/bom/tkr-bom-v1.22.5+vmware.1-tkg.4.yaml. For component versions in previous releases, see the tkg-bom- and tkr-bom- YAML files that install with those releases.
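
For example, to list the BOM files installed on your bootstrap machine:

    ls ~/.config/tanzu/tkg/bom/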

Supported Upgrade Paths

You can only upgrade to Tanzu Kubernetes Grid v1.5.x from v1.4.x. If you want to upgrade to Tanzu Kubernetes Grid v1.5.x from a version earlier than v1.4.x, you must upgrade to v1.4.x first.

When upgrading Kubernetes versions on Tanzu Kubernetes clusters, you cannot skip minor versions. For example, you cannot upgrade a Tanzu Kubernetes cluster directly from v1.20.x to v1.22.x. You must upgrade a v1.20.x cluster to v1.21.x before upgrading the cluster to v1.22.x.

Behavior Changes Between Tanzu Kubernetes Grid v1.4.1 and v1.5

Tanzu Kubernetes Grid v1.5 introduces the following new behavior compared with v1.4.1, which is the latest previous release.

  • After installing the Tanzu CLI, you need to run tanzu init before deploying a management cluster (see the example after this list).
  • You cannot use the Tanzu CLI to register management clusters with Tanzu Mission Control.
  • After you have installed the v1.5 CLI but before a management cluster has been deployed or upgraded, all context-specific CLI command groups (tanzu cluster, tanzu kubernetes-release) and all management-cluster plugin commands except tanzu mc create and tanzu mc upgrade are unavailable and are not included in Tanzu CLI --help output.
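
For example, after installing the v1.5 CLI and before deploying a management cluster:

    tanzu init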

User Documentation

The Tanzu Kubernetes Grid 1.5 documentation applies to all of the 1.5.x releases. It includes information about the following subjects:

  • Tanzu Kubernetes Grid Concepts introduces the key components of Tanzu Kubernetes Grid and describes how you use them and what they do.
  • Install the Tanzu CLI describes how to install the Tanzu CLI as well as the prerequisites for deploying Tanzu Kubernetes Grid on vSphere, Amazon EC2, and Microsoft Azure.
  • Deploying Management Clusters describes how to deploy Tanzu Kubernetes Grid management clusters to vSphere, Amazon EC2, and Microsoft Azure.
  • Deploying Tanzu Kubernetes Clusters describes how to use the Tanzu Kubernetes Grid CLI to deploy Tanzu Kubernetes clusters from your management cluster.
  • Managing Cluster Lifecycles describes how to manage the lifecycle of management and workload clusters.
  • Deploying Tanzu Kubernetes Grid Extensions and Shared Services describes how to set up local shared services for your Tanzu Kubernetes clusters, such as authentication and authorization, logging, networking, and ingress control.
  • Building Machine Images describes how to build your own OS images for cluster nodes.
  • Upgrading Tanzu Kubernetes Grid describes how to upgrade to this version.
  • Troubleshooting Tanzu Kubernetes Grid includes tips to help you to troubleshoot common problems that you might encounter when installing Tanzu Kubernetes Grid and deploying Tanzu Kubernetes clusters.

Resolved Issues

  • CAPV controller parses datacenter correctly in multi-datacenter vSphere environment.

    Previously, during upgrade to Tanzu Kubernetes Grid v1.4.1 on vSphere, if multiple datacenters ran within a single vCenter, the CAPV controller failed to find datacenter contents, causing upgrade failure and possible loss of data.

  • Linux and macOS bootstrap machines do not need to run cgroups v1 in the kernel.

    A bootstrap machine running Linux or macOS may run cgroups v1 or cgroups v2 in its Linux kernel. Previously, users with cgroups v2 machines needed to either patch their kernel or pre-create the bootstrap cluster in order to deploy a management cluster.

  • On vSphere 7.0 U3, Tanzu CLI supports tanzu cluster node-pool commands for Tanzu Kubernetes clusters created by using the Tanzu Kubernetes Grid service. (v1.5.1)

    In Tanzu Kubernetes Grid v1.5.0, node-pool commands were not supported for Tanzu Kubernetes Grid service (TKGS) clusters.

  • NSX ALB retrieves network information from NSX-T, not vCenter, to disambiguate port groups (v1.5.2)

    With Tanzu Kubernetes Grid v1.5.1 and prior releases, the Avi Controller UI could only show port group names, without the associated T1 gateways set in the NSX-T dashboard. In deployments where multiple clusters share networks, this could mean having to select network port groups from lists of identical names. TKG v1.5.2 supports enabling the Avi Controller to retrieve and expose T1 gateway and segment information from NSX-T, along with port group names.

  • Commands tanzu cluster node-pool scale and tanzu cluster node-pool delete target correct node pool (v1.5.2)

    In Tanzu Kubernetes Grid v1.5.0 and v1.5.1, the commands tanzu cluster node-pool scale and tanzu cluster node-pool delete sometimes operated on a node pool other than the one specified in the command.

  • Node pool operations work in proxied environments (v1.5.2)

    In Tanzu Kubernetes Grid v1.5.0 and v1.5.1, tanzu cluster node-pool commands did not work in proxied environments.

  • Editing cluster resources on Amazon EC2 with Calico CNI no longer produces errors.

    You can add or remove a resource for a workload cluster on Amazon EC2 that uses the Calico CNI without errors. Previously, you had to add an ingress role to avoid errors.

  • Changing name or location of virtual machine template for current Kubernetes version no longer reprovisions cluster nodes when running tanzu cluster upgrade.

    Previously, moving or renaming the virtual machine template in your vCenter and then running tanzu cluster upgrade caused cluster nodes to be reprovisioned with new IP addresses.

  • Upgrading management cluster automatically creates tanzu-package-repo-global namespace.

    When you upgrade from Tanzu Kubernetes Grid v1.4.x to v1.5, the tanzu-package-repo-global namespace and the associated package repository are now created automatically.

    This resolves the issue of needing to create this namespace manually when upgrading from v1.3 to v1.4.x.

  • Management cluster installation and upgrade succeed in airgapped environment. (v1.5.1)

    In Tanzu Kubernetes Grid v1.5.0, the local kind process attempted to retrieve a pause v3.5 image from k8s.gcr.io.

  • Management cluster upgrade succeeds on Amazon EC2 with Ubuntu v20.04 (v1.5.1)

    In Tanzu Kubernetes Grid v1.5.0, the local kind process retrieved an incompatible pause version (v3.6) image from k8s.gcr.io.

Known Issues

Kubernetes

  • etcd v3.5.0-2 data corruption bug in Kubernetes v1.22.0-5 affects Tanzu Kubernetes Grid v1.5.0-2

    Kubernetes versions v1.22.0-5, which are used by TKG v1.5.0-2, use etcd versions v3.5.0-2. These versions of etcd have a data corruption bug that can result in data loss.

    Fixes for this bug will be incorporated into subsequent versions of etcd, Kubernetes, and Tanzu Kubernetes Grid.

    Workaround: Until a fix is available, do not deploy or upgrade to v1.5.

Internet-Restricted Environments

  • You cannot deploy or upgrade to TKG v1.5 if you are using a registry with a self-signed certificate.

    Management cluster deployment or upgrade fails when accessing an image registry with a custom certificate, such as in an internet-restricted environment.

    Workaround: Until a fix is available, either do not deploy or upgrade to v1.5 with a registry that uses a self-signed certificate, or use a certificate signed by a public CA for the registry.

  • Management cluster installation and upgrade fail in airgapped environment (v1.5.0)

    This Tanzu Kubernetes Grid v1.5.0 issue is resolved in v1.5.1.

    In an airgapped environment, running tanzu management-cluster create or tanzu management-cluster upgrade fails when the kind process attempts to retrieve a pause v3.5 image from k8s.gcr.io.

Packages

  • Tanzu Standard repository v1.5.0 packages do not work with TKGS

    You cannot install Tanzu Standard repository v1.5.0 packages into Tanzu Kubernetes clusters created by using the Tanzu Kubernetes Grid service; the packages have not been validated for Tanzu Kubernetes Grid service clusters.

    Tanzu Kubernetes Grid v1.5.0 and v1.5.1 both use the Tanzu Standard repository versioned as v1.5.0: projects.registry.vmware.com/tkg/packages/standard/repo:v1.5.0.

    Workaround: If you are running Tanzu Kubernetes Grid v1.4 packages on Tanzu Kubernetes clusters created by using the Tanzu Kubernetes Grid service, do not upgrade to v1.5 until a fix is available.

  • Shared Services cluster does not work with TKGS

    Tanzu Kubernetes Grid Service (TKGS) does not support deploying packages to a shared services cluster. Workload clusters deployed by TKGS can only use packaged services deployed to the workload clusters themselves.

    Workaround: None

Upgrade

  • Management cluster upgrade fails on AWS with Ubuntu v20.04 (v1.5.0)

    This Tanzu Kubernetes Grid v1.5.0 issue is resolved in v1.5.1.

    On Amazon EC2, with a management cluster based on Ubuntu v20.04 nodes, running tanzu management-cluster upgrade fails after the kind process retrieves an incompatible pause version (v3.6) image from k8s.gcr.io.

  • Unexpected VIP network separation after upgrading TKG management cluster from 1.5.0+ to 1.5.2

    After upgrading the management cluster from v1.5.0 or v1.5.1 to v1.5.2, newly created workload clusters' control plane VIPs will not use the same VIP network as their management cluster if you set AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_NAME and AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_CIDR to values different from AVI_DATA_NETWORK and AVI_DATA_NETWORK_CIDR.

    Workaround: See step 6 in the Upgrade Management Clusters procedure.
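
    For reference, the issue appears when the management cluster VIP settings differ from the data network settings in the cluster configuration, for example (placeholder values):

    AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_NAME: management-vip-network
    AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_CIDR: 10.0.1.0/24
    AVI_DATA_NETWORK: workload-vip-network
    AVI_DATA_NETWORK_CIDR: 10.0.2.0/24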

  • Workload cluster upgrade may hang or fail due to undetached persistent volumes

    If you are upgrading your Tanzu Kubernetes clusters from Tanzu Kubernetes Grid v1.4.x to v1.5.x and you have applications on the cluster that use persistent volumes, the volumes may fail to detach and re-attach during upgrade, causing the upgrade process to hang or fail.

    Workaround: Follow the steps in Persistent volumes cannot attach to a new node if previous node is deleted (85213) in the VMware Knowledge Base.

  • Upgrade ignores custom bootstrap token duration

    Management cluster upgrade ignores the CAPBK_BOOTSTRAP_TOKEN_TTL setting configured in TKG v1.4.2+ to extend the bootstrap token TTL during cluster initialization. This can cause the upgrade to time out. The default TTL is 15m.

    If you no longer have the management cluster's configuration file, you can use kubectl to determine its CAPBK_BOOTSTRAP_TOKEN_TTL setting:

    1. In the management cluster context, run kubectl get pods -n capi-kubeadm-bootstrap-system to list the bootstrap pods.
    2. Find the name of the capi-kubeadm-bootstrap-controller-manager pod and output its manifest in YAML. For example: kubectl get pod capi-kubeadm-bootstrap-controller-manager-788b64657d-tjdsv -n capi-kubeadm-bootstrap-system -o yaml
    3. Check the arguments under spec.containers[0].args. If the --bootstrap-token-ttl argument is present and set to something other than 15m (the default value), the value was customized and requires the workaround below.
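
    For example, a quick way to check for the argument (the pod name is illustrative; use the name found in step 1):

    kubectl get pod capi-kubeadm-bootstrap-controller-manager-788b64657d-tjdsv \
      -n capi-kubeadm-bootstrap-system -o yaml | grep bootstrap-token-ttl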

    Workaround: Before running tanzu mc upgrade, set CAPBK_BOOTSTRAP_TOKEN_TTL as an environment variable. For example:

    export CAPBK_BOOTSTRAP_TOKEN_TTL=30m
  • Pinniped authentication error on workload cluster after upgrading management cluster

    When attempting to authenticate to a workload cluster associated with the upgraded management cluster, you receive an error message similar to the following:

    Error: could not complete Pinniped login: could not perform OIDC discovery for "https://IP:PORT": Get "https://IP:PORT/.well-known/openid-configuration": x509: certificate signed by unknown authority

CLI

  • Tanzu CLI does not work on macOS machines with ARM processors

     Tanzu CLI v0.11.1 does not work on macOS machines with ARM (Apple M1) chips, as identified under Finder > About This Mac > Overview.

    Workaround: Use a bootstrap machine with a Linux or Windows OS, or a macOS machine with an Intel processor.

  • Commands tanzu cluster node-pool scale and tanzu cluster node-pool delete target the wrong node pool.

    This issue is resolved in Tanzu Kubernetes Grid v1.5.2.

    Because of an internal regex mismatch, the commands tanzu cluster node-pool scale and tanzu cluster node-pool delete sometimes operate on a node pool other than the one specified in the command.

  • Node pool operations do not work in proxied environments (v1.5.0, v1.5.1)

    This issue is resolved in Tanzu Kubernetes Grid v1.5.2.

    In Tanzu Kubernetes Grid v1.5.0 and v1.5.1, tanzu cluster node-pool commands do not work in proxied environments.

  • Windows CMD: Extraneous characters in CLI output column headings

    In the Windows command prompt (CMD), Tanzu CLI command output that is formatted in columns includes extraneous characters in column headings.

    The issue does not occur in Windows Terminal or PowerShell.

    Workaround: On Windows bootstrap machines, run the Tanzu CLI from Windows Terminal.

  • After upgrading a dev cluster to Kubernetes v1.22, kapp-controller fails to reconcile its CSI package

    When viewing the status of the CSI PackageInstall resource, you may see Reconcile failed:

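    # List PackageInstall resources in all namespaces to view their reconcile status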
    kubectl get pkgi -A 

    In vSphere CSI v2.4, the default number of CSI replicas changed from 1 to 3. Because the dev plan deploys only one control plane node, kapp-controller cannot reconcile the actual number of CSI replicas, 1, with the desired count of 3.

    Workaround: This error does not affect CSI functionality, and CSI continues working as expected. You can do either of the following:

    • Ignore the error message.
    • Add vsphereCSI.deployment_replicas: 1 to the CSI add-on secret for your target cluster, under data.values.yaml.
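
    As a sketch of the second option, assuming the CSI add-on secret follows the CLUSTER-NAME-vsphere-csi-addon naming convention (verify the secret's actual name and namespace in your management cluster):

    apiVersion: v1
    kind: Secret
    metadata:
      name: my-cluster-vsphere-csi-addon   # hypothetical name; use your cluster's CSI add-on secret
      namespace: default                   # namespace that contains the target cluster object
    stringData:
      values.yaml: |
        vsphereCSI:
          deployment_replicas: 1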

  • Ignorable AKODeploymentConfig error during management cluster creation

    Running tanzu management-cluster create to create a management cluster with NSX ALB outputs the following error: no matches for kind "AKODeploymentConfig" in version "networking.tkg.tanzu.vmware.com/v1alpha1". The error can be ignored. For more information, see this article in the VMware Knowledge Base.

  • Ignorable machinehealthcheck and clusterresourceset errors during workload cluster creation on vSphere

    When a workload cluster is deployed to vSphere by using the tanzu cluster create command through Tanzu Kubernetes Grid Service (TKGS), the output might include errors related to running machinehealthcheck and accessing the clusterresourceset resources, as shown below:

    Error from server (Forbidden): error when creating "/tmp/kubeapply-3798885393": machinehealthchecks.cluster.x-k8s.io is forbidden: User "sso:Administrator@vsphere.local" cannot create resource "machinehealthchecks" in API group "cluster.x-k8s.io" in the namespace "tkg" 
    ... 
    Error from server (Forbidden): error when retrieving current configuration of: Resource: "addons.cluster.x-k8s.io/v1alpha3, Resource=clusterresourcesets", GroupVersionKind: "addons.cluster.x-k8s.io/v1alpha3, Kind=ClusterResourceSet"
    ...

    The workload cluster is successfully created. You can ignore the errors.

  • CLI temporarily misreports status of recently deleted nodes when MHCs are disabled

    When machine health checks (MHCs) are disabled, Tanzu CLI commands such as tanzu cluster status may not report up-to-date node state while infrastructure is being recreated.

    Workaround: None

AWS

  • Deleting cluster on AWS fails if cluster uses networking resources not deployed with Tanzu Kubernetes Grid.

    The tanzu cluster delete and tanzu management-cluster delete commands may hang with clusters that use networking resources created by the AWS Cloud Controller Manager independently from the Tanzu Kubernetes Grid deployment process. Such resources may include load balancers and other networking services, as listed in The Service Controller in the Kubernetes AWS Cloud Provider documentation.

    For more information, see the Cluster API issue Drain workload clusters of service Type=Loadbalancer on teardown.

    Workaround: Use kubectl delete to delete services of type LoadBalancer from the cluster. If that fails, use the AWS console to manually delete any LoadBalancer and SecurityGroup objects that the Cloud Controller Manager created for the service. Warning: Do not delete load balancers or security groups managed by Tanzu, which have the tag key sigs.k8s.io/cluster-api-provider-aws/cluster/CLUSTER-NAME and value owned.
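
    For example, a minimal sketch of the kubectl portion of the workaround (service and namespace names are placeholders):

    # Find services of type LoadBalancer across all namespaces
    kubectl get services -A | grep LoadBalancer

    # Delete each one before deleting the cluster
    kubectl delete service my-lb-service -n my-namespace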

Windows Workload Clusters

  • Pinniped fails to reconcile on newly created Windows workload cluster

    After creating a Windows workload cluster that uses an external identity provider, you may see the following error message:

    Reconcile failed: Error (see .status.usefulErrorMessage for details) 
    pinniped-supervisor pinniped-post-deploy-job - Waiting to complete (1 active, 0 failed, 0 succeeded)
    pinniped-post-deploy-job--1-kfpr5 - Pending: Unschedulable (message: 0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {os: windows}, that the pod didn't tolerate.)

    Workaround: Add a tolerations setting to the Pinniped secret by following the procedure in Add Pinniped Overlay in Windows Custom Machine Images.

  • load-balancer-and-ingress-service package fails to reconcile on newly created Windows workload cluster

    After creating a Windows workload cluster that uses NSX Advanced Load Balancer, you may see the following error message:

    Reconcile failed: Error (see .status.usefulErrorMessage for details) 
    avi-system ako-0 - Pending: Unschedulable (message: 0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {os: windows}, that the pod didn't tolerate.)

    Workaround: Add a tolerations setting to the AKO pod specifications by following the procedure in Add AKO Overlay in Windows Custom Machine Images.
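
    Both procedures add a toleration that lets the affected pod schedule onto the Linux control plane node; conceptually, it has the following shape (a hedged sketch, not the exact overlay from the linked procedures):

    tolerations:
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule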

NSX-T

  • NSX ALB setup lists identical names when multiple clusters share networks (v1.5.0, v1.5.1)

    This issue is resolved in Tanzu Kubernetes Grid v1.5.2.

    The Avi Controller retrieves port group information from vCenter, which does not include the port groups' T1 gateway and segment associations that are set in the NSX-T dashboard. When setting up load balancing for deployments where multiple clusters share networks via NSX-T, this can mean having to choose port groups from lists of identical names in the Avi Controller UI.

    Tanzu Kubernetes Grid v1.5.2 supports enabling the Avi Controller to retrieve network information from NSX-T. This lets users disambiguate port groups that have identical names but are attached to different T1 routers.

  • Management cluster creation fails or performance is slow with older NSX-T versions and Photon 3 or Ubuntu VMs with Linux kernel 5.8

    Deploying a management cluster with the following infrastructure and configuration may fail or result in restricted traffic between pods:

    • vSphere with any of the following versions of NSX-T:
      • NSX-T v3.1.3 with Enhanced Datapath enabled
      • NSX-T v3.1.x lower than v3.1.3
      • NSX-T v3.0.x lower than v3.0.2 hot patch
      • NSX-T v2.x. This includes Azure VMware Solution (AVS) v2.0, which uses NSX-T v2.5
    • Base image: Photon 3 or Ubuntu with Linux kernel 5.8

    This combination exposes a checksum issue between older versions of NSX-T and Antrea CNI.

    TMC: If the management cluster is registered with Tanzu Mission Control (TMC), there is no workaround for this issue. Otherwise, see the workarounds below.

    Workarounds:

    • Deploy workload clusters configured with ANTREA_DISABLE_UDP_TUNNEL_OFFLOAD set to "true" (see the snippet after this list). This setting disables Antrea's UDP checksum offloading, which avoids the known issues with some underlay network and physical NIC network drivers.
    • Upgrade to NSX-T v3.0.2 Hot Patch, v3.1.3, or later, without Enhanced Datapath enabled
    • Use an Ubuntu base image with Linux kernel 5.9 or later.
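
    For the first workaround, the workload cluster configuration file would include the following line:

    ANTREA_DISABLE_UDP_TUNNEL_OFFLOAD: "true"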

AVS

  • vSphere CSI volume deletion may fail on AVS

    On Azure VMware Solution (AVS), deletion of vSphere CSI persistent volumes (PVs) may fail. Deleting a PV requires the cns.searchable permission. The default admin account for AVS, cloudadmin@vsphere.local, is not created with this permission. For more information, see vSphere Roles and Privileges.

    Workaround: To delete a vSphere CSI PV on AVS, contact Azure support.

Harbor

  • No Harbor proxy cache support

    You cannot use Harbor in proxy cache mode for running Tanzu Kubernetes Grid in an internet-restricted environment. Prior versions of Tanzu Kubernetes Grid supported the Harbor proxy cache feature.

    Workaround: None
