After a cluster specification has been converted to the v1alpha2 API, updating the cluster, which is typically done by changing the Tanzu Kubernetes release version, may require some pre-processing of the spec to avoid errors.

Auto-Conversion of Cluster Specs

To update your vSphere with Tanzu environment to the Tanzu Kubernetes Grid Service v1alpha2 API, you update the Supervisor Cluster where the service runs.

Once the Tanzu Kubernetes Grid Service is running the v1alpha2 API, the system automatically converts all existing Tanzu Kubernetes cluster specifications from the v1alpha1 format to the v1alpha2 format. During the auto-conversion process, the system creates and populates the expected fields for each cluster manifest. API Deprecations and Additions lists the cluster specification fields that are new and deprecated in the v1alpha2 API.

To update the Tanzu Kubernetes release version for a cluster whose manifest has been auto-converted to the v1alpha2 format, you need to perform some manual pre-processing to avoid errors. Cluster Update Examples lists various options.
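
To inspect a converted manifest, you can retrieve the cluster object in YAML format. For example, assuming the cluster test-cluster in the namespace tkgs-cluster-1 shown in the example output later in this topic:
kubectl get tanzukubernetescluster test-cluster -n tkgs-cluster-1 -o yaml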

API Deprecations and Additions

The table lists the cluster specification settings that are deprecated in the v1alpha2 API and the new settings that replace them.

Deprecated Settings: spec.distribution.version, spec.distribution.fullVersion
New Settings: spec.topology.controlPlane.tkr.reference.name, spec.topology.nodePools[*].tkr.reference.name
Comments: Must use the TKR NAME format. See the examples.

Deprecated Settings: spec.topology.workers
New Settings: spec.topology.nodePools[*]
Comments: In a converted cluster, the block spec.topology.workers becomes spec.topology.nodePools[0], and the first entry in the nodePools list is name: workers. See the sketch after this table.

Deprecated Settings: spec.topology.controlPlane.count, spec.topology.workers.count
New Settings: spec.topology.controlPlane.replicas, spec.topology.nodePools[*].replicas
Comments: count is replaced by replicas.

Deprecated Settings: spec.topology.controlPlane.class, spec.topology.workers.class
New Settings: spec.topology.controlPlane.vmClass, spec.topology.nodePools[*].vmClass
Comments: class is replaced by vmClass.

Deprecated Settings: N/A
New Settings: spec.topology.nodePools[*].labels
Comments: Optional key-value pairs used to organize and categorize objects; labels are propagated to the created nodes.

Deprecated Settings: N/A
New Settings: spec.topology.nodePools[*].taints
Comments: Optional taints to register the nodes with; user-defined taints are propagated to the created nodes.
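
For example, the following sketch, with illustrative values taken from the examples in this topic, shows how a v1alpha1 workers block maps to the first entry of the v1alpha2 nodePools list:
# v1alpha1 format (deprecated)
topology:
  workers:
    count: 3
    class: best-effort-medium
    storageClass: vwt-storage-policy
# v1alpha2 format (after auto-conversion)
topology:
  nodePools:
  - name: workers
    replicas: 3
    vmClass: best-effort-medium
    storageClass: vwt-storage-policy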

TKR NAME Format Is Required

In addition to the spec.distribution fields being deprecated, the DISTRIBUTION format for specifying the Tanzu Kubernetes release version is not supported. This means that you cannot use any of the following string formats to reference the target release: 1.21.2+vmware.1-tkg.1.ee25d55, 1.21.2, or 1.21.

When referencing the Tanzu Kubernetes release version in a v1alpha2 API cluster spec, you must use the TKR NAME format, not the deprecated DISTRIBUTION format. Although the deprecated format is displayed in the UPDATES AVAILABLE column, the only supported format is the one listed in the TKR NAME column.
kubectl get tanzukubernetescluster
NAMESPACE        NAME            CONTROL PLANE   WORKER   TKR NAME                             AGE    READY   TKR COMPATIBLE   UPDATES AVAILABLE
tkgs-cluster-1   test-cluster    3               3        v1.21.2---vmware.1-tkg.1.ee25d55     38h    True    True             [1.21.2+vmware.1-tkg.1.ee25d55]
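
To list the Tanzu Kubernetes releases available in your environment and their TKR NAME values, you can query the TanzuKubernetesRelease objects. The output shown here is illustrative, and the columns can vary by version:
kubectl get tanzukubernetesreleases
NAME                               VERSION                          READY   COMPATIBLE
v1.21.2---vmware.1-tkg.1.ee25d55   v1.21.2+vmware.1-tkg.1.ee25d55   True    True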

Cluster Update Examples

Changing the spec.distribution.version field is the most common way to trigger a rolling update of a cluster (see Update Tanzu Kubernetes Clusters). Because this field is deprecated in the v1alpha2 API, there are some considerations to be aware of and some pre-processing recommendations to follow to avoid potential cluster update problems.

The following examples demonstrate how to update the version of a Tanzu Kubernetes cluster that was provisioned using the v1alpha1 API on a system that is now running the v1alpha2 API.

Cluster Upgrade Example 1: Use a Single TKR NAME Reference in the Control Plane

The recommended approach is to remove all nodePools[*].tkr.reference.name blocks from the converted spec and update the controlPlane.tkr.reference.name with the TKR NAME of the target release. In this case the same Tanzu Kubernetes release is propagated to all nodePools[*] nodes.

In the future, the Tanzu Kubernetes release versions can be different across controlPlane and nodePools[*]. Currently, however, all releases in a cluster must match, so a single TKR NAME reference in controlPlane is sufficient.

For example:
apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster
metadata:
  name: tkgs-cluster-update-example1
  namespace: tkgs-cluster-ns
spec:
  settings:
    network:
      cni:
        name: antrea
      pods:
        cidrBlocks:
        - 192.0.2.0/24
      serviceDomain: cluster.local
      services:
        cidrBlocks:
        - 198.51.100.0/24
  topology:
    controlPlane:
      replicas: 3
      storageClass: vwt-storage-policy
      tkr:
        reference:
          name: v1.21.2---vmware.1-tkg.1.ee25d55
      vmClass: best-effort-medium
    nodePools:
    - name: workers
      replicas: 3
      storageClass: vwt-storage-policy
      vmClass: best-effort-medium
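
After pre-processing the spec as shown above, you can initiate the rolling update by editing the live manifest, for example:
kubectl edit tanzukubernetescluster/tkgs-cluster-update-example1 -n tkgs-cluster-ns
Change the controlPlane.tkr.reference.name value to the TKR NAME of the target release and save; the saved change triggers the update.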

Cluster Upgrade Example 2: Use a TKR NAME Reference for Each Node Pool

The second option is to put the TKR NAME in the tkr.reference.name block for both the controlPlane and each entry in nodePools[*].

This approach has the advantage of being ready for future releases, when the Tanzu Kubernetes release can differ across node pools. Currently, the versions must match.

For example:
apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster
metadata:
  name: tkgs-cluster-update-example2
  namespace: tkgs-cluster-ns
spec:
  settings:
    network:
      cni:
        name: antrea
      pods:
        cidrBlocks:
        - 192.0.2.0/24
      serviceDomain: cluster.local
      services:
        cidrBlocks:
        - 198.51.100.0/24
  topology:
    controlPlane:
      replicas: 3
      storageClass: vwt-storage-policy
      vmClass: best-effort-medium
      tkr:
        reference:
          name: v1.21.2---vmware.1-tkg.1.ee25d55
    nodePools:
    - name: workers
      replicas: 3
      storageClass: vwt-storage-policy
      vmClass: best-effort-medium
      tkr:
        reference:
          name: v1.21.2---vmware.1-tkg.1.ee25d55
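
You can monitor the progress of the rolling update by watching the cluster object, for example:
kubectl get tanzukubernetescluster tkgs-cluster-update-example2 -n tkgs-cluster-ns -w
When the update completes, the READY column should report True.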

Cluster Upgrade Example 3: Use Deprecated Distribution Fields

The final option is to use the deprecated fields spec.distribution.fullVersion and spec.distribution.version and manually remove all tkr.reference.name blocks. You must include both fields, with one set to the TKR NAME and the other nulled out (set to an empty string). Version shortcuts such as v1.21.2 and v1.21 are not supported.
Note: For the Tanzu Kubernetes release on Ubuntu, the use of spec.distribution.version is not supported.
The following example uses the fullVersion with the TKR NAME and a null (empty) value in the version field. All tkr.reference.name entries are removed.
apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster
metadata:
  name: tkgs-cluster-update-example3a
  namespace: tkgs-cluster-ns
spec:
  distribution:
    fullVersion: v1.21.2---vmware.1-tkg.1.ee25d55
    version: ""
  settings:
    network:
      cni:
        name: antrea
      pods:
        cidrBlocks:
        - 192.0.2.0/24
      serviceDomain: cluster.local
      services:
        cidrBlocks:
        - 198.51.100.0/24
  topology:
    controlPlane:
      replicas: 3
      storageClass: vwt-storage-policy
      vmClass: best-effort-medium
    nodePools:
    - name: workers
      replicas: 3
      storageClass: vwt-storage-policy
      vmClass: best-effort-medium
Alternatively, you can use the version field with the TKR NAME and a null (empty) value in the fullVersion field. Even though you are using the version field, version shortcuts are not supported. All tkr.reference.name entries are removed.
apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster
metadata:
  name: tkgs-cluster-update-example3b
  namespace: tkgs-cluster-ns
spec:
  distribution:
    fullVersion: ""
    version: v1.21.2---vmware.1-tkg.1.ee25d55
  settings:
    network:
      cni:
        name: antrea
      pods:
        cidrBlocks:
        - 192.0.2.0/24
      serviceDomain: cluster.local
      services:
        cidrBlocks:
        - 198.51.100.0/24
  topology:
    controlPlane:
      replicas: 3
      storageClass: vwt-storage-policy
      vmClass: best-effort-medium
    nodePools:
    - name: workers
      replicas: 3
      storageClass: vwt-storage-policy
      vmClass: best-effort-medium
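
After the update completes, you can verify that the cluster is on the target release by checking the TKR NAME column, for example:
kubectl get tanzukubernetescluster tkgs-cluster-update-example3b -n tkgs-cluster-ns
The TKR NAME column should show the target release, and the UPDATES AVAILABLE column should be empty if the cluster is on the latest compatible release.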