You can scale a Tanzu Kubernetes cluster horizontally by changing the number of nodes or vertically by changing the virtual machine class hosting the nodes.
Supported Scaling Operations
| Node | Horizontal Scale Out | Horizontal Scale In | Vertical Scale | Volume Scale |
|---|---|---|---|---|
| Control Plane | Yes | No | Yes | No |
| Worker | Yes | Yes | Yes | Yes |
- While vertically scaling a cluster node, workloads may no longer be able to run on the node for lack of available resources. For this reason, horizontal scaling may be the preferred approach.
- VM classes are not immutable. If you scale out a Tanzu Kubernetes cluster after editing a VM class used by that cluster, new cluster nodes use the updated class definition, but existing cluster nodes continue to use the initial class definition, resulting in a mismatch. See Virtual Machine Classes for Tanzu Kubernetes Clusters.
- Worker node volumes can be changed after provisioning; control plane node volumes cannot.
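Before relying on a VM class for a scale operation, you can inspect its current definition on the Supervisor Cluster. A minimal sketch, assuming the best-effort-medium class used in the examples below:
```
# List the VM classes defined on the Supervisor Cluster.
kubectl get virtualmachineclass

# Show the CPU and memory configuration that a specific class defines.
kubectl describe virtualmachineclass best-effort-medium
```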
Scaling Prerequisite: Configure Kubectl Editing
To scale a Tanzu Kubernetes cluster, you update the cluster manifest using the command kubectl edit tanzukubernetescluster/CLUSTER-NAME. The kubectl edit command opens the cluster manifest in the text editor defined by your KUBE_EDITOR or EDITOR environment variable. For instructions on setting up the environment variable, see Specify a Default Text Editor for Kubectl.
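For example, to have kubectl edit open manifests in nano instead of the system default, you can export the variable in your shell profile. This is a sketch; the choice of editor is an assumption.
```
# Make kubectl edit use nano; add to ~/.bash_profile or ~/.bashrc to persist.
export KUBE_EDITOR="nano"
```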
When you save the file, kubectl reports that the edits were successfully recorded, and the cluster is updated with the changes.
kubectl edit tanzukubernetescluster/tkgs-cluster-1
tanzukubernetescluster.run.tanzu.vmware.com/tkgs-cluster-1 edited
If you close the editor without saving, kubectl reports that no changes were made.
kubectl edit tanzukubernetescluster/tkgs-cluster-1
Edit cancelled, no changes made.
Scale Out the Control Plane
- Authenticate with the Supervisor Cluster.
kubectl vsphere login --server=SVC-IP-ADDRESS --vsphere-username USERNAME
- Switch context to the vSphere Namespace where the Tanzu Kubernetes cluster is running.
kubectl config use-context tkgs-cluster-ns
- List the Kubernetes clusters that are running in the namespace.
kubectl get tanzukubernetescluster -n tkgs-cluster-ns
- Get the number of nodes running in the target cluster.
kubectl get tanzukubernetescluster tkgs-cluster-1
For example, the following cluster has one control plane node and three worker nodes.
NAMESPACE         NAME             CONTROL PLANE   WORKER   TKR NAME                           AGE     READY
tkgs-cluster-ns   tkgs-cluster-1   1               3        v1.21.2---vmware.1-tkg.1.13da849   5d12h   True
- Load the cluster manifest for editing using the kubectl edit command.
kubectl edit tanzukubernetescluster/tkgs-cluster-1
The cluster manifest opens in the text editor defined by your KUBE_EDITOR or EDITOR environment variable.
- Locate the spec.topology.controlPlane.replicas parameter and increase the number of nodes from 1 to 3. (A non-interactive alternative using kubectl patch is sketched after this procedure.)
...
controlPlane:
  replicas: 1
...
Change the value to 3:
...
controlPlane:
  replicas: 3
...
- To apply the changes, save the file in the text editor. To cancel, close the editor without saving.
When you save the file, kubectl applies the changes to the cluster. In the background, the Virtual Machine Service on the Supervisor Cluster provisions the new control plane nodes.
- Verify that the new nodes are added.
kubectl get tanzukubernetescluster tkgs-cluster-1
The scaled out control plane now has 3 nodes.
NAMESPACE         NAME             CONTROL PLANE   WORKER   TKR NAME                           AGE     READY
tkgs-cluster-ns   tkgs-cluster-1   3               3        v1.21.2---vmware.1-tkg.1.13da849   5d12h   True
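As a non-interactive alternative to kubectl edit, the same change can be applied with a merge patch. This is a minimal sketch assuming the spec.topology.controlPlane.replicas field path shown above; verify it against the API version your cluster uses.
```
# Scale the control plane of tkgs-cluster-1 to 3 replicas without opening an editor.
kubectl patch tanzukubernetescluster tkgs-cluster-1 -n tkgs-cluster-ns \
  --type merge -p '{"spec":{"topology":{"controlPlane":{"replicas":3}}}}'
```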
Scale Out Worker Nodes
You can scale out a Tanzu Kubernetes cluster by increasing the number of worker nodes using kubectl.
- Authenticate with the Supervisor Cluster.
kubectl vsphere login --server=SVC-IP-ADDRESS --vsphere-username USERNAME
- Switch context to the vSphere Namespace where the Tanzu Kubernetes cluster is running.
kubectl config use-context tkgs-cluster-ns
- List the Kubernetes clusters that are running in the namespace.
kubectl get tanzukubernetescluster -n tkgs-cluster-ns
- Get the number of nodes running in the target cluster.
kubectl get tanzukubernetescluster tkgs-cluster-1
For example, the following cluster has 3 control plane nodes and 3 worker nodes.
NAMESPACE         NAME             CONTROL PLANE   WORKER   TKR NAME                           AGE     READY
tkgs-cluster-ns   tkgs-cluster-1   3               3        v1.21.2---vmware.1-tkg.1.13da849   5d12h   True
- Load the cluster manifest for editing using the kubectl edit command.
kubectl edit tanzukubernetescluster/tkgs-cluster-1
The cluster manifest opens in the text editor defined by your KUBE_EDITOR or EDITOR environment variable.
- Locate the spec.topology.workers.replicas parameter and increase the number of nodes.
...
workers:
  replicas: 3
...
For example, to add a fourth worker node:
...
workers:
  replicas: 4
...
- To apply the changes, save the file in the text editor. To cancel, close the editor without saving.
When you save the file, kubectl applies the changes to the cluster. In the background, the Virtual Machine Service on the Supervisor Cluster provisions the new worker node.
- Verify that the new worker node is added.
kubectl get tanzukubernetescluster tkgs-cluster-1
After scaling out, the cluster has 4 worker nodes. To confirm the new node registered from inside the cluster itself, see the sketch after this procedure.
NAMESPACE         NAME             CONTROL PLANE   WORKER   TKR NAME                           AGE     READY
tkgs-cluster-ns   tkgs-cluster-1   3               4        v1.21.2---vmware.1-tkg.1.13da849   5d12h   True
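You can also verify the result from the workload cluster's own context rather than the Supervisor Cluster. This is a sketch assuming the kubectl vsphere plugin login flags shown; substitute your server address, user name, and cluster names.
```
# Log in directly to the workload cluster context.
kubectl vsphere login --server=SVC-IP-ADDRESS --vsphere-username USERNAME \
  --tanzu-kubernetes-cluster-namespace tkgs-cluster-ns \
  --tanzu-kubernetes-cluster-name tkgs-cluster-1

# All four worker nodes should eventually report STATUS Ready.
kubectl get nodes
```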
Scale In Worker Nodes
You can scale in a Tanzu Kubernetes cluster by decreasing the number of worker nodes. Scaling in the control plane is not supported.
- Authenticate with the Supervisor Cluster.
kubectl vsphere login --server=SVC-IP-ADDRESS --vsphere-username USERNAME
- Switch context to the vSphere Namespace where the Tanzu Kubernetes cluster is running.
kubectl config use-context tkgs-cluster-ns
- List the Kubernetes clusters that are running in the namespace.
kubectl get tanzukubernetescluster -n tkgs-cluster-ns
- Get the number of nodes running in the target cluster.
kubectl get tanzukubernetescluster tkgs-cluster-1
For example, the following cluster has 3 control plane nodes and 4 worker nodes.
NAMESPACE         NAME             CONTROL PLANE   WORKER   TKR NAME                           AGE     READY
tkgs-cluster-ns   tkgs-cluster-1   3               4        v1.21.2---vmware.1-tkg.1.13da849   5d12h   True
- Load the cluster manifest for editing using the kubectl edit command.
kubectl edit tanzukubernetescluster/tkgs-cluster-1
The cluster manifest opens in the text editor defined by your KUBE_EDITOR or EDITOR environment variable.
- Locate the spec.topology.workers.replicas parameter and decrease the number of nodes.
...
workers:
  replicas: 4
...
For example, to scale in from 4 to 2 worker nodes:
...
workers:
  replicas: 2
...
- To apply the changes, save the file in the text editor. To cancel, close the editor without saving.
When you save the file, kubectl applies the changes to the cluster. In the background, the Virtual Machine Service on the Supervisor Cluster removes the excess worker nodes.
- Verify that the worker nodes were removed.
kubectl get tanzukubernetescluster tkgs-cluster-1
After scaling in, the cluster has 2 worker nodes. To watch the node VMs being deleted while the operation runs, see the sketch after this procedure.
NAMESPACE         NAME             CONTROL PLANE   WORKER   TKR NAME                           AGE     READY
tkgs-cluster-ns   tkgs-cluster-1   3               2        v1.21.2---vmware.1-tkg.1.13da849   5d12h   True
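Because scale-in deletes node VMs, it can be useful to watch the underlying Cluster API machine objects while the operation runs. This is a sketch, assuming the Supervisor Cluster exposes Cluster API machine resources in the vSphere Namespace:
```
# Watch machine objects in the namespace; workers being removed pass
# through a deleting phase before they disappear from the list.
kubectl get machines -n tkgs-cluster-ns -w
```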
Scale a Cluster Vertically
You can vertically scale a Tanzu Kubernetes cluster by changing the virtual machine class used to host the cluster nodes. Vertical scaling is supported for both control plane and worker nodes.
The Tanzu Kubernetes Grid Service supports scaling cluster nodes vertically through the rolling update mechanism built into the service. If you change the VirtualMachineClass definition, the service rolls out new nodes with the new class and spins down the old nodes. See Update Tanzu Kubernetes Clusters.
- Authenticate with the Supervisor Cluster.
kubectl vsphere login --server=SVC-IP-ADDRESS --vsphere-username USERNAME
- Switch context to the vSphere Namespace where the Tanzu Kubernetes cluster is running.
kubectl config use-context tkgs-cluster-ns
- List the Kubernetes clusters that are running in the namespace.
kubectl get tanzukubernetescluster -n tkgs-cluster-ns
- Describe the target Tanzu Kubernetes cluster and check the VM class.
kubectl describe tanzukubernetescluster tkgs-cluster-2
For example, the following cluster is using the best-effort-medium VM class.
Spec:
  ...
  Topology:
    Control Plane:
      Class: best-effort-medium
      ...
    nodePool-a1:
      Class: best-effort-medium
      ...
- List and describe the available VM classes.
kubectl get virtualmachineclassbinding
kubectl describe virtualmachineclassbinding
Note: The VM class you want to use must be bound to the vSphere Namespace. See Virtual Machine Classes for Tanzu Kubernetes Clusters.
- Open the target cluster manifest for editing.
kubectl edit tanzukubernetescluster/tkgs-cluster-2
The cluster manifest opens in the text editor defined by your KUBE_EDITOR or EDITOR environment variable.
- Edit the manifest by changing the VM class.
For example, edit the cluster manifest to use the guaranteed-large VM class for control plane and worker nodes.
spec:
  topology:
    controlPlane:
      class: guaranteed-large
      ...
    nodePool-a1:
      class: guaranteed-large
      ...
- To apply the changes, save the file in the text editor. To cancel, close the editor without saving.
When you save the file, kubectl applies the changes to the cluster. In the background, the Tanzu Kubernetes Grid Service provisions the new nodes and deletes the old ones. For a description of the rolling update process, see About Tanzu Kubernetes Grid Service Cluster Updates.
- Verify that the cluster is updated.
kubectl get tanzukubernetescluster
NAMESPACE         NAME             CONTROL PLANE   WORKER   TKR NAME                           AGE     READY
tkgs-cluster-ns   tkgs-cluster-2   3               3        v1.21.2---vmware.1-tkg.1.13da849   5d12h   True
During the rolling update, READY can temporarily report False; a sketch for monitoring progress follows this procedure.
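The rolling update replaces nodes one at a time, so the cluster can take several minutes to settle. A minimal sketch for monitoring progress from the Supervisor Cluster context; the -w flag streams changes to the cluster object until you interrupt it.
```
# Watch the cluster until the rollout completes and READY returns to True.
kubectl get tanzukubernetescluster tkgs-cluster-2 -n tkgs-cluster-ns -w
```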
Scale Node Volumes
In the Tanzu Kubernetes cluster specification for nodes, you have the option of declaring one or more persistent volumes. Declaring a node volume is useful for high-churn components, such as the etcd database on the control plane and the container runtime on worker nodes. An excerpted cluster specification with both of these node volumes declared is provided below for reference. (A full example cluster specification is available here.)
| Node Volume | Description |
|---|---|
| Worker node volume changes are allowed. | After a Tanzu Kubernetes cluster is provisioned, you can add or update a worker node volume. When you initiate a rolling update, the cluster is updated with the new or changed volume. Warning: If you scale the worker node with a new or changed volume, data in the current volume is deleted during the rolling update. |
| Control plane node volume changes are not allowed. | After a Tanzu Kubernetes cluster is provisioned, you cannot add or update a control plane node volume. The Kubernetes Cluster API (CAPI) forbids post-creation changes to spec.topology.controlPlane.volumes. If you attempt to add or change a control plane volume after cluster creation, the request is denied and you receive the error message "updates to immutable fields are not allowed." |
spec:
  topology:
    controlPlane:
      replicas: 3
      vmClass: guaranteed-medium
      storageClass: vwt-storage-policy
      volumes:
        - name: etcd
          mountPath: /var/lib/etcd
          capacity:
            storage: 4Gi
      tkr:
        reference:
          name: v1.21.2---vmware.1-tkg.1.ee25d55
    nodePools:
      - name: worker-nodepool-a1
        replicas: 3
        vmClass: guaranteed-large
        storageClass: vwt-storage-policy
        volumes:
          - name: containerd
            mountPath: /var/lib/containerd
            capacity:
              storage: 16Gi
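For example, to scale the containerd volume on the worker nodes, you could edit the nodePools entry and increase the requested capacity; saving the change triggers a rolling update of the worker nodes. This is a sketch based on the specification above, with the capacity doubled as an illustrative value. Remember that data in the current volume is deleted during the update.
```
# Worker node pool excerpt with the containerd volume scaled from 16Gi to 32Gi.
nodePools:
  - name: worker-nodepool-a1
    replicas: 3
    vmClass: guaranteed-large
    storageClass: vwt-storage-policy
    volumes:
      - name: containerd
        mountPath: /var/lib/containerd
        capacity:
          storage: 32Gi
```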