You can scale a Tanzu Kubernetes cluster horizontally by changing the number of nodes or vertically by changing the virtual machine class hosting the nodes.

Supported Scaling Operations

The table lists the supported scaling operations for Tanzu Kubernetes clusters.
Table 1. Supported Scaling Operations for Tanzu Kubernetes Clusters
Node            Horizontal Scale Out   Horizontal Scale In   Vertical Scale
Control Plane   Yes                    No                    Yes
Worker          Yes                    Yes                   Yes
Keep in mind the following considerations:
  • When you scale a node vertically, workloads might no longer be able to run on the node because of insufficient available resources. For this reason, horizontal scaling might be the preferred approach.
  • VM classes are not immutable. If you scale out a Tanzu Kubernetes cluster after editing a VM class used by that cluster, new cluster nodes use the updated class definition, but existing cluster nodes continue to use the initial class definition, resulting in a mismatch. See Virtual Machine Classes for Tanzu Kubernetes Clusters.

Scaling Prerequisite: Configure Kubectl Editing

To scale a Tanzu Kubernetes cluster, you update the cluster manifest using the command kubectl edit tanzukubernetescluster/CLUSTER-NAME. The kubectl edit command opens the cluster manifest in the text editor defined by your KUBE_EDITOR or EDITOR environment variable. For instructions on setting up the environment variable, see Specify a Default Text Editor for Kubectl.

When you save the manifest changes, kubectl reports that the edits were successfully recorded, and the cluster is updated with the changes.
kubectl edit tanzukubernetescluster/tkgs-cluster-1
tanzukubernetescluster.run.tanzu.vmware.com/tkgs-cluster-1 edited
To cancel, simply close the editor without saving.
kubectl edit tanzukubernetescluster/tkgs-cluster-1
Edit cancelled, no changes made.
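For example, a minimal setup for the current shell session (nano is an example choice; substitute your preferred editor):

```shell
# Tell kubectl edit which editor to open. KUBE_EDITOR takes
# precedence over EDITOR when both are set.
export KUBE_EDITOR="nano"
```

To persist the setting across sessions, add the export line to your shell profile.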

Scale Out the Control Plane

You can scale out a Tanzu Kubernetes cluster by increasing the number of control plane nodes from 1 to 3. The number of control plane nodes must be odd. You cannot scale in the control plane.
  1. Authenticate with the Supervisor Cluster.
    kubectl vsphere login --server=SVC-IP-ADDRESS --vsphere-username USERNAME
  2. Switch context to the vSphere Namespace where the Tanzu Kubernetes cluster is running.
    kubectl config use-context tkgs-cluster-ns
  3. List the Kubernetes clusters that are running in the namespace.
    kubectl get tanzukubernetescluster -n tkgs-cluster-ns
  4. Get the number of nodes running in the target cluster.
    kubectl get tanzukubernetescluster tkgs-cluster-1
    For example, the following cluster has 1 control plane node and 3 worker nodes.
    NAME                CONTROL PLANE   WORKER   DISTRIBUTION                     AGE   PHASE
    tkgs-cluster-1      1               3        v1.18.5+vmware.1-tkg.1.886c781   1d    running
    
  5. Load the cluster manifest for editing using the kubectl edit command.
    kubectl edit tanzukubernetescluster/tkgs-cluster-1

    The cluster manifest opens in the text editor defined by your KUBE_EDITOR or EDITOR environment variable.

  6. Locate the spec.topology.controlPlane.count parameter and increase the number of nodes from 1 to 3.
    Change:
    ...
    controlPlane:
        count: 1
    ...
    
    To:
    ...
    controlPlane:
        count: 3
    ...
    
  7. To apply the changes, save the file in the text editor. To cancel, close the editor without saving.

    When you save the file, kubectl applies the changes to the cluster. In the background, the Virtual Machine Service on the Supervisor Cluster provisions the new control plane nodes.

  8. Verify that the new nodes are added.
    kubectl get tanzukubernetescluster tkgs-cluster-1
    The scaled-out control plane now has 3 nodes.
    NAME                CONTROL PLANE   WORKER   DISTRIBUTION                     AGE   PHASE
    tkgs-cluster-1      3               3        v1.18.5+vmware.1-tkg.1.886c781   1d    running
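As a non-interactive alternative to the kubectl edit flow above, the same change can be made with kubectl patch. This is a sketch: it assumes an authenticated Supervisor Cluster session and the example cluster name from the steps above, and the kubectl invocation is guarded so the snippet is a no-op where kubectl is unavailable.

```shell
# Merge patch that sets the control plane node count to 3.
PATCH='{"spec":{"topology":{"controlPlane":{"count":3}}}}'

# Apply the patch without opening an editor (requires a logged-in session).
if command -v kubectl >/dev/null 2>&1; then
  kubectl patch tanzukubernetescluster tkgs-cluster-1 \
    --type merge -p "$PATCH"
fi
```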
    

Scale Out Worker Nodes

You can scale out a Tanzu Kubernetes cluster by increasing the number of worker nodes using kubectl.

  1. Authenticate with the Supervisor Cluster.
    kubectl vsphere login --server=SVC-IP-ADDRESS --vsphere-username USERNAME
  2. Switch context to the vSphere Namespace where the Tanzu Kubernetes cluster is running.
    kubectl config use-context tkgs-cluster-ns
  3. List the Kubernetes clusters that are running in the namespace.
    kubectl get tanzukubernetescluster -n tkgs-cluster-ns
  4. Get the number of nodes running in the target cluster.
    kubectl get tanzukubernetescluster tkgs-cluster-1
    For example, the following cluster has 3 control plane nodes and 3 worker nodes.
    NAME                CONTROL PLANE   WORKER   DISTRIBUTION                     AGE   PHASE
    tkgs-cluster-1      3               3        v1.18.5+vmware.1-tkg.1.886c781   1d    running
    
  5. Load the cluster manifest for editing using the kubectl edit command.
    kubectl edit tanzukubernetescluster/tkgs-cluster-1

    The cluster manifest opens in the text editor defined by your KUBE_EDITOR or EDITOR environment variable.

  6. Locate the spec.topology.workers.count parameter and increase the number of nodes.
    Change:
    ...
    workers:
        count: 3
    ...
    
    To:
    ...
    workers:
        count: 4
    ...
    
  7. To apply the changes, save the file in the text editor. To cancel, close the editor without saving.

    When you save the file, kubectl applies the changes to the cluster. In the background, the Virtual Machine Service on the Supervisor Cluster provisions the new worker node.

  8. Verify that the new worker node is added.
    kubectl get tanzukubernetescluster tkgs-cluster-1
    After scaling out, the cluster has 4 worker nodes.
    NAME                CONTROL PLANE   WORKER   DISTRIBUTION                     AGE   PHASE
    tkgs-cluster-1      3               4        v1.18.5+vmware.1-tkg.1.886c781   1d    running
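Instead of scanning the table output, you can read the desired worker count directly with a JSONPath query. A sketch, assuming the spec path used in the steps above and an authenticated Supervisor Cluster session; the kubectl call is guarded so the snippet is safe to run without one.

```shell
# JSONPath expression for the desired worker count in the cluster spec.
COUNT_PATH='{.spec.topology.workers.count}'

# Print only the worker count (requires a logged-in session).
if command -v kubectl >/dev/null 2>&1; then
  kubectl get tanzukubernetescluster tkgs-cluster-1 \
    -o jsonpath="$COUNT_PATH"
fi
```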
    

Scale In Worker Nodes

You can scale in a Tanzu Kubernetes cluster by decreasing the number of worker nodes. Scaling in the control plane is not supported.

  1. Authenticate with the Supervisor Cluster.
    kubectl vsphere login --server=SVC-IP-ADDRESS --vsphere-username USERNAME
  2. Switch context to the vSphere Namespace where the Tanzu Kubernetes cluster is running.
    kubectl config use-context tkgs-cluster-ns
  3. List the Kubernetes clusters that are running in the namespace.
    kubectl get tanzukubernetescluster -n tkgs-cluster-ns
  4. Get the number of nodes running in the target cluster.
    kubectl get tanzukubernetescluster tkgs-cluster-1
    For example, the following cluster has 3 control plane nodes and 4 worker nodes.
    NAME                CONTROL PLANE   WORKER   DISTRIBUTION                     AGE   PHASE
    tkgs-cluster-1      3               4        v1.18.5+vmware.1-tkg.1.886c781   1d    running
    
  5. Load the cluster manifest for editing using the kubectl edit command.
    kubectl edit tanzukubernetescluster/tkgs-cluster-1

    The cluster manifest opens in the text editor defined by your KUBE_EDITOR or EDITOR environment variable.

  6. Locate the spec.topology.workers.count parameter and decrease the number of nodes.
    Change:
    ...
    workers:
        count: 4
    ...
    
    To:
    ...
    workers:
        count: 2
    ...
    
  7. To apply the changes, save the file in the text editor. To cancel, close the editor without saving.

    When you save the file, kubectl applies the changes to the cluster. In the background, the Virtual Machine Service on the Supervisor Cluster removes the excess worker nodes.

  8. Verify that the worker nodes were removed.
    kubectl get tanzukubernetescluster tkgs-cluster-1
    After scaling in, the cluster has 2 worker nodes.
    NAME                CONTROL PLANE   WORKER   DISTRIBUTION                     AGE   PHASE
    tkgs-cluster-1      3               2        v1.18.5+vmware.1-tkg.1.886c781   1d    running
    

Scale a Cluster Vertically

You can vertically scale a Tanzu Kubernetes cluster by changing the virtual machine class used to host the cluster nodes. Vertical scaling is supported for both control plane and worker nodes.

The Tanzu Kubernetes Grid Service supports scaling cluster nodes vertically through the rolling update mechanism built into the service. If you change the VM class used for the cluster nodes, the service rolls out new nodes with the new class and spins down the old nodes. See Update Tanzu Kubernetes Clusters.
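One way to watch the rollout as it happens is to list the node virtual machines in the cluster's vSphere Namespace. This is a sketch: the virtualmachines resource (managed by the Virtual Machine Service) and the namespace name are assumptions based on the examples in this topic, and the kubectl call is guarded so the snippet is safe to run without a session.

```shell
# Namespace from the examples in this topic; adjust for your environment.
NAMESPACE="tkgs-cluster-ns"

# During a rolling update, new node VMs appear alongside the old ones
# before the old ones are deleted (requires a logged-in session).
if command -v kubectl >/dev/null 2>&1; then
  kubectl get virtualmachines -n "$NAMESPACE"
fi
```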

  1. Authenticate with the Supervisor Cluster.
    kubectl vsphere login --server=SVC-IP-ADDRESS --vsphere-username USERNAME
  2. Switch context to the vSphere Namespace where the Tanzu Kubernetes cluster is running.
    kubectl config use-context tkgs-cluster-ns
  3. List the Kubernetes clusters that are running in the namespace.
    kubectl get tanzukubernetescluster -n tkgs-cluster-ns
  4. Describe the target Tanzu Kubernetes cluster and check the VM class.
    kubectl describe tanzukubernetescluster tkgs-cluster-2

    For example, the following cluster is using the best-effort-small VM class.

    Spec:
      ...
      Topology:
        Control Plane:
          Class:          best-effort-small
          ...
        Workers:
          Class:          best-effort-small
          ...
    
  5. List and describe the available VM classes.
    kubectl get virtualmachineclassbinding
    kubectl describe virtualmachineclassbinding
    Note: The VM class you want to use must be bound to the vSphere Namespace. See Virtual Machine Classes for Tanzu Kubernetes Clusters.
  6. Open the target cluster manifest for editing.
    kubectl edit tanzukubernetescluster/tkgs-cluster-2

    The cluster manifest opens in the text editor defined by your KUBE_EDITOR or EDITOR environment variable.

  7. Edit the manifest by changing the VM class.
    For example, edit the cluster manifest to use the guaranteed-xlarge VM class for control plane and worker nodes.
    spec:
      topology:
        controlPlane:
          class: guaranteed-xlarge
          ...
        workers:
          class: guaranteed-xlarge
          ...
    
  8. To apply the changes, save the file in the text editor. To cancel, close the editor without saving.

    When you save the file, kubectl applies the changes to the cluster. In the background, the Tanzu Kubernetes Grid Service provisions the new nodes and deletes the old ones. For a description of the rolling update process, see About Tanzu Kubernetes Cluster Updates.

  9. Verify that the cluster is being updated.
    kubectl get tanzukubernetescluster
    NAME             CONTROL PLANE   WORKER   DISTRIBUTION                     AGE   PHASE
    tkgs-cluster-2   3               3        v1.18.5+vmware.1-tkg.1.c40d30d   21h   updating
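To block until the rolling update completes, you can poll the cluster phase. A sketch, assuming the PHASE column shown above maps to the .status.phase field and that a Supervisor Cluster session is active; the loop is guarded so the snippet is a no-op where kubectl is unavailable.

```shell
# JSONPath expression for the cluster phase reported in the PHASE column.
PHASE_PATH='{.status.phase}'

# Poll every 30 seconds until the phase returns to "running".
if command -v kubectl >/dev/null 2>&1; then
  until [ "$(kubectl get tanzukubernetescluster tkgs-cluster-2 \
      -o jsonpath="$PHASE_PATH")" = "running" ]; do
    sleep 30
  done
fi
```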