This topic explains how to scale Tanzu Kubernetes Grid (TKG) workload clusters and standalone management clusters, using the following methods:
Autoscaler: For workload clusters deployed by a standalone management cluster, you can use Cluster Autoscaler to automatically scale the number of worker nodes to meet demand. See Scale Worker Nodes with Cluster Autoscaler.
You cannot use Autoscaler to scale workload clusters deployed by a Supervisor cluster.
Scale horizontally: For workload or standalone management clusters, you can manually scale the number of control plane and worker nodes. See Scale a Cluster Horizontally.
Scale vertically: For workload clusters, you can manually change the size of the control plane and worker nodes. See Scale a Cluster Vertically.
Note: To scale nodes in a node pool, see Update Node Pools in Manage Node Pools of Different VM Types. Single-node clusters cannot be scaled.
Cluster Autoscaler is a Kubernetes program that automatically scales Kubernetes clusters depending on the demands on the workload clusters. Use Cluster Autoscaler only for workload clusters deployed by a standalone management cluster.
For more information about Cluster Autoscaler, see the Cluster Autoscaler documentation in GitHub.
By default, Cluster Autoscaler is deactivated in Tanzu Kubernetes Grid. To enable Cluster Autoscaler in a workload cluster, set the ENABLE_AUTOSCALER option to true and set the AUTOSCALER_ options in the cluster configuration file or as environment variables before running tanzu cluster create --file.
Each Cluster Autoscaler configuration variable in a cluster configuration file corresponds to a parameter in the Cluster Autoscaler tool. For a list of these variables and their defaults, see Cluster Autoscaler in the Configuration File Variable Reference.
The AUTOSCALER_*_SIZE settings limit the number of worker nodes in a cluster, while AUTOSCALER_MAX_NODES_TOTAL limits the count of all nodes, both worker and control plane.
Set the AUTOSCALER_*_SIZE values depending on the number of worker nodes in the cluster:
For dev clusters, set AUTOSCALER_MIN_SIZE_0 and AUTOSCALER_MAX_SIZE_0.
For prod clusters, set:
AUTOSCALER_MIN_SIZE_0 and AUTOSCALER_MAX_SIZE_0
AUTOSCALER_MIN_SIZE_1 and AUTOSCALER_MAX_SIZE_1
AUTOSCALER_MIN_SIZE_2 and AUTOSCALER_MAX_SIZE_2
The following provides an example of Cluster Autoscaler settings in a cluster configuration file. You cannot modify these values after you deploy the cluster.
#! ---------------------------------------------------------------------
#! Autoscaler related configuration
#! ---------------------------------------------------------------------
ENABLE_AUTOSCALER: false
AUTOSCALER_MAX_NODES_TOTAL: "0"
AUTOSCALER_SCALE_DOWN_DELAY_AFTER_ADD: "10m"
AUTOSCALER_SCALE_DOWN_DELAY_AFTER_DELETE: "10s"
AUTOSCALER_SCALE_DOWN_DELAY_AFTER_FAILURE: "3m"
AUTOSCALER_SCALE_DOWN_UNNEEDED_TIME: "10m"
AUTOSCALER_MAX_NODE_PROVISION_TIME: "15m"
AUTOSCALER_MIN_SIZE_0:
AUTOSCALER_MAX_SIZE_0:
AUTOSCALER_MIN_SIZE_1:
AUTOSCALER_MAX_SIZE_1:
AUTOSCALER_MIN_SIZE_2:
AUTOSCALER_MAX_SIZE_2:
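As an illustration only, with values that are assumptions rather than defaults, a dev cluster that should keep between 1 and 5 worker nodes might enable the Autoscaler with settings like these:
ENABLE_AUTOSCALER: true
AUTOSCALER_MIN_SIZE_0: "1"
AUTOSCALER_MAX_SIZE_0: "5"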
For each workload cluster that you create with Cluster Autoscaler enabled, Tanzu Kubernetes Grid creates a Cluster Autoscaler deployment in the management cluster. To deactivate Cluster Autoscaler, delete the Cluster Autoscaler deployment associated with your workload cluster.
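For example, with kubectl set to the management cluster's context, you could locate and delete that deployment as shown below. The namespace and the CLUSTER-NAME-cluster-autoscaler naming pattern are assumptions for illustration; check the actual deployment name in your environment.
# List deployments and find the one for your workload cluster (name pattern is illustrative)
kubectl get deployments --all-namespaces | grep autoscaler
# Delete the Cluster Autoscaler deployment for the workload cluster
kubectl delete deployment MY-CLUSTER-cluster-autoscaler --namespace MY-NAMESPACE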
You can scale a TKG cluster horizontally in two ways, depending on the cluster type:
For class-based clusters, change the replicas settings in the cluster definition, as described in the Scale a Class-Based Cluster section below.
To scale a workload or standalone management cluster horizontally using the Tanzu CLI, run the tanzu cluster scale command.
Important: Do not change context or edit the .kube-tkg/config file while Tanzu Kubernetes Grid operations are running.
The --controlplane-machine-count and --worker-machine-count options set the new number of control plane and worker nodes, respectively.
Examples:
To scale a cluster to 5 control plane nodes and 10 worker nodes:
tanzu cluster scale MY-CLUSTER --controlplane-machine-count 5 --worker-machine-count 10
If you deployed a cluster with --controlplane-machine-count 1 and then you scale it up to 3 control plane nodes, Tanzu Kubernetes Grid automatically enables stacked HA on the control plane.
If the cluster is running in a namespace other than default, you must include the --namespace option:
tanzu cluster scale MY-CLUSTER --controlplane-machine-count 5 --worker-machine-count 10 --namespace=MY-NAMESPACE
vSphere with standalone management cluster: After you change the node count of a cluster on vSphere deployed with a standalone management cluster, make DHCP reservations for the IP addresses of any added nodes and release the reservations for any removed nodes. To change these reservations manually, see Configure Node DHCP Reservations and Endpoint DNS Record (vSphere Only). For instructions on how to configure DHCP reservations, see your DHCP server documentation.
vSphere with Supervisor cluster: On clusters that run in vSphere with Tanzu, you can run either 1 or 3 control plane nodes. You can scale up the number of control plane nodes from 1 to 3, but you cannot scale down from 3 to 1.
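For example, to scale the control plane of such a cluster from 1 node to 3 nodes, you could run the same scale command shown above:
tanzu cluster scale MY-CLUSTER --controlplane-machine-count 3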
The procedure to vertically scale a workload cluster depends on the cluster type.
Follow the Updating Infrastructure Machine Templates procedure in The Cluster API Book, which changes the cluster’s machine template.
The procedure downloads the cluster's existing machine template, with a kubectl get command that you can construct as follows:
kubectl get MACHINE-TEMPLATE-TYPE MACHINE-TEMPLATE-NAME -o yaml
Where:
MACHINE-TEMPLATE-TYPE is:
VsphereMachineTemplate on vSphere
AWSMachineTemplate on Amazon Web Services (AWS)
AzureMachineTemplate on Azure
MACHINE-TEMPLATE-NAME is the name of the machine template for the cluster nodes that you are scaling, which follows the form:
CLUSTER-NAME-control-plane for control plane nodes
CLUSTER-NAME-worker for worker nodes
For example:
kubectl get VsphereMachineTemplate monitoring-cluster-worker -o yaml
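As a rough sketch of that workflow, with illustrative file, template, and MachineDeployment names that are assumptions rather than values from your cluster, you save the template to a file, edit and rename it, create the new template, and then point the object that owns the nodes at it. Follow The Cluster API Book procedure for the authoritative steps.
# Save the existing worker machine template to a file
kubectl get VsphereMachineTemplate monitoring-cluster-worker -o yaml > new-worker-template.yaml
# Edit new-worker-template.yaml: change the sizing fields (for example numCPUs, memoryMiB, diskGiB)
# and give the template a new metadata.name, because existing machine templates are immutable.
kubectl apply -f new-worker-template.yaml
# Update the MachineDeployment (or KubeadmControlPlane, for control plane nodes) so that its
# infrastructureRef points to the new template name.
kubectl edit machinedeployment monitoring-cluster-md-0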
To scale a class-based cluster vertically, change the machine settings in its cluster definition, as described in Scale a Class-Based Cluster below.
To scale a class-based cluster horizontally or vertically using its topology configuration:
Set kubectl to the management cluster's context, for example:
kubectl config use-context management-cluster@admin
Run kubectl edit cluster CLUSTER-NAME and edit the settings in its topology block under controlPlane and worker.
To scale horizontally, change the replicas settings.
To scale vertically, change the settings under machine.
For example:
- name: controlPlane
  value:
    replicas: 3
    machine:
      diskGiB: 20
      memoryMiB: 8192
      numCPUs: 4
- name: worker
  value:
    replicas: 5
    machine:
      diskGiB: 20
      memoryMiB: 8192
      numCPUs: 4