You can update a TKG Service cluster by changing the virtual machine class used to host the cluster nodes.

You can initiate a rolling update of a TKG Service cluster by editing the vmClass definition using the kubectl edit command. New nodes based on the changed class are rolled out and the old nodes are spun down.
Note: You cannot use the kubectl apply command to update a deployed TKG cluster.

Prerequisites

This task requires the use of the kubectl edit command. This command opens the cluster manifest in the text editor defined by your KUBE_EDITOR or EDITOR environment variable. When you save the file, the cluster is updated with the changes. To configure an editor for kubectl, see Configure a Text Editor for Kubectl.
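For example, on Linux or macOS you can set the editor for the current shell session before running kubectl edit (vim here is an arbitrary choice; use any editor you prefer):
    export KUBE_EDITOR="vim"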

Procedure

  1. Authenticate with the Supervisor.
    kubectl vsphere login --server=IP-ADDRESS --vsphere-username USERNAME
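    For example, with placeholder values (the IP address and user name below are illustrative only; substitute your own Supervisor address and vCenter Single Sign-On user):
    kubectl vsphere login --server=192.0.2.10 --vsphere-username administrator@vsphere.local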
  2. Switch context to the vSphere Namespace where the target TKG cluster is provisioned.
    kubectl config use-context SUPERVISOR-NAMESPACE
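    You can list the available contexts first. For example, assuming a hypothetical vSphere Namespace named tkg-cluster-ns:
    kubectl config get-contexts
    kubectl config use-context tkg-cluster-ns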
  3. Describe the target TKG cluster and check the VM class.
    v1alpha3 cluster:
    kubectl describe tanzukubernetescluster CLUSTER-NAME
    v1beta1 cluster:
    kubectl describe cluster CLUSTER-NAME
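    For example, to print only the current control plane VM class of a v1alpha3 cluster, you can use a jsonpath query (the field path mirrors the manifest excerpt in step 6, assuming it sits under spec as usual):
    kubectl get tanzukubernetescluster CLUSTER-NAME -o jsonpath='{.spec.topology.controlPlane.vmClass}'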
  4. List and describe the available VM classes in the vSphere Namespace where the cluster is provisioned.
    kubectl get virtualmachineclass
    Note: The target VM class must be associated with the vSphere Namespace where the TKG cluster is provisioned. Refer to the TKG Service or VM Service documentation for details on binding VM classes to vSphere Namespaces.
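    To inspect the CPU and memory resources a class provides before selecting it, describe the class. For example:
    kubectl describe virtualmachineclass CLASS-NAME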
  5. Run the following command to edit the cluster manifest.
    v1alpha3 cluster:
    kubectl edit tanzukubernetescluster/CLUSTER-NAME
    v1beta1 cluster:
    kubectl edit cluster/CLUSTER-NAME
  6. Edit the manifest by changing the VM class string.
    For example, if you are using a v1alpha3 cluster, change the cluster manifest from using the guaranteed-medium VM class for worker nodes:
     topology:
        controlPlane:
          replicas: 3
          storageClass: vwk-storage-policy
          tkr:
            reference:
              name: v1.27.11---vmware.1-fips.1-tkg.2
          vmClass: guaranteed-medium
        nodePools:
        - name: worker-nodepool-a1
          replicas: 3
          storageClass: vwk-storage-policy
          tkr:
            reference:
              name: v1.27.11---vmware.1-fips.1-tkg.2
          vmClass: guaranteed-medium
    To one that uses the guaranteed-large VM class for worker nodes:
     topology:
        controlPlane:
          replicas: 3
          storageClass: vwk-storage-policy
          tkr:
            reference:
              name: v1.27.11---vmware.1-fips.1-tkg.2
          vmClass: guaranteed-medium
        nodePools:
        - name: worker-nodepool-a1
          replicas: 3
          storageClass: vwk-storage-policy
          tkr:
            reference:
              name: v1.27.11---vmware.1-fips.1-tkg.2
          vmClass: guaranteed-large
    Similarly, if you have provisioned a v1beta1 cluster, update the value of the vmClass variable to the target VM class.
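    For example, a minimal v1beta1 sketch, assuming the cluster exposes a vmClass cluster variable under spec.topology.variables (check your cluster manifest for the exact variable name):
     spec:
       topology:
         variables:
         - name: vmClass
           value: guaranteed-large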
  7. Save the changes you made to the manifest file.
    When you save the file, kubectl applies the changes to the cluster. In the background, the TKG controller provisions the new node VMs and spins down the old ones.
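    You can watch the rollout from the vSphere Namespace context, for example by listing the cluster node VMs; replacement nodes appear alongside the old ones until the update completes:
    kubectl get virtualmachines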
  8. Verify that kubectl reports that the manifest edits were successfully recorded.
    kubectl edit tanzukubernetescluster/tkgs-cluster-1
    tanzukubernetescluster.run.tanzu.vmware.com/tkgs-cluster-1 edited
    Note: If you receive an error, or kubectl does not report that the cluster manifest was successfully edited, make sure you have properly configured your default text editor using the KUBE_EDITOR or EDITOR environment variable. See Configure a Text Editor for Kubectl.
  9. Verify that the cluster is updated.
    v1alpha3 cluster:
    kubectl get tanzukubernetescluster
    v1beta1 cluster:
    kubectl get cluster
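    For example, for a v1alpha3 cluster you can confirm that the new class was recorded by querying the worker node pool (the index 0 assumes a single node pool, as in the manifest above):
    kubectl get tanzukubernetescluster CLUSTER-NAME -o jsonpath='{.spec.topology.nodePools[0].vmClass}'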