Refer to these instructions to install and configure the cluster autoscaler package using kubectl.
Requirements
- The minimum vSphere version is vSphere 8 U3
- The minimum TKr version is TKr 1.27.x for vSphere 8
- The minor version of the TKr and the minor version of the Cluster Autoscaler package must match
Configure the vSphere Namespace
Complete the following prerequisite tasks for provisioning a TKG cluster.
- Install or update your environment to vSphere 8 U3 and TKr 1.27.x for vSphere 8.
- Create or update a content library with the latest Tanzu Kubernetes releases. See Administering Kubernetes Releases for TKG Service Clusters.
- Create and configure a vSphere Namespace for hosting the TKG cluster. See Configuring vSphere Namespaces for Hosting TKG Service Clusters.
- Install the Kubernetes CLI Tools for vSphere.
The following example can be used to install the tools from the command line. For additional guidance, see Install the Kubernetes CLI Tools for vSphere.
curl -LOk https://${SUPERVisor_IP-or-FQDN}/wcp/plugin/linux-amd64/vsphere-plugin.zip
unzip vsphere-plugin.zip
mv -v bin/* /usr/local/bin/
- Run kubectl and kubectl vsphere to verify the installation.
Create a TKG Cluster with Autoscaler Annotations
Follow the instructions to create a TKG cluster. For additional guidance, see Workflow for Provisioning TKG Clusters Using Kubectl.
To use the autoscaler, you must configure the cluster with autoscaler label annotations, as demonstrated in the cluster specification example below. Unlike regular cluster provisioning, you do not hard-code the number of worker node replicas. Cluster API applies built-in defaulting logic for the replica count based on the autoscaler minimum and maximum size annotations. Because this is a new cluster, the minimum size is used to create the cluster. For more information, see https://cluster-api.sigs.k8s.io/tasks/automated-machine-management/autoscaling.
- Authenticate with Supervisor using kubectl.
kubectl vsphere login --server=SUPERVISOR-CONTROL-PLANE-IP-ADDRESS-or-FQDN --vsphere-username USERNAME
- Switch context to the target vSphere Namespace that will host the cluster.
kubectl config use-context tkgs-cluster-namespace
- List the VM classes that are available in the vSphere Namespace.
You can only use VM classes bound to the target vSphere Namespace. See Using VM Classes with TKG Service Clusters.
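For example, the VM classes and their bindings can be listed with commands like the following; this is a sketch assuming the standard TKG Service resource names:

```shell
# List VM classes visible in the current vSphere Namespace context.
kubectl get vmclass
# List which VM classes are bound to this vSphere Namespace.
kubectl get virtualmachineclassbindings
```

Only classes that appear in the bindings list can be referenced in the cluster specification.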
- List the available persistent volume storage classes.
kubectl describe namespace VSPHERE-NAMESPACE-NAME
The command returns details about the vSphere Namespace, including the storage class. The command
kubectl describe storageclasses
also returns available storage classes, but requires vSphere administrator permissions.
- List the available Tanzu Kubernetes releases.
kubectl get tkr
This command returns the TKrs available in this vSphere Namespace and their compatibility. See Administering Kubernetes Releases for TKG Service Clusters.
- Use the information you have gleaned to craft a TKG cluster specification YAML file with the required cluster autoscaler configuration.
- Use the *-min-size and *-max-size annotations for the worker nodePools. In this example, 3 is the minimum and 5 is the maximum number of worker nodes that can be scaled, so by default the cluster is created with 3 worker nodes.
- Use the matching minor version for the TKr and autoscaler package.
- The cluster metadata.name and metadata.namespace values used are consistent with the autoscaler package default values. If you change these values in the cluster spec, you must also modify them in the autoscaler-data-values secret (see below).
#cc-autoscaler.yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: gc1
  namespace: cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.0.2.0/16
    serviceDomain: cluster.local
    services:
      cidrBlocks:
      - 198.51.100.0/12
  topology:
    class: tanzukubernetescluster
    controlPlane:
      metadata: {}
      replicas: 3
    variables:
    - name: storageClasses
      value:
      - wcpglobal-storage-profile
    - name: vmClass
      value: guaranteed-medium
    - name: storageClass
      value: wcpglobal-storage-profile
    #minor versions must match
    version: v1.27.11---vmware.1-fips.1-tkg.2
    workers:
      machineDeployments:
      - class: node-pool
        metadata:
          annotations:
            cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "3"
            cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "5"
        name: np-1
- Apply the cluster specification.
kubectl apply -f cc-autoscaler.yaml
- Verify cluster creation.
kubectl get cluster,vm
- Verify cluster node version.
kubectl get node
Install the Package Manager on the TKG Cluster
- Log in to the TKG cluster you provisioned.
kubectl vsphere login --server=SUPERVISOR-CONTROL-PLANE-IP-ADDRESS-or-FQDN \
--vsphere-username USERNAME \
--tanzu-kubernetes-cluster-name CLUSTER-NAME \
--tanzu-kubernetes-cluster-namespace NAMESPACE-NAME
- Install the Carvel imgpkg tool.
wget -O- https://carvel.dev/install.sh > install.sh
sudo bash install.sh
- Run imgpkg version to verify the installation.
- Check the package repository version.
imgpkg tag list -i projects.registry.vmware.com/tkg/packages/standard/repo
- Install the package repository. Update the repository version accordingly.
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageRepository
metadata:
  name: tanzu-standard
  namespace: tkg-system
spec:
  fetch:
    imgpkgBundle:
      image: projects.registry.vmware.com/tkg/packages/standard/repo:v2024.4.12
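The manifest above can then be applied to the TKG cluster; this sketch assumes it was saved to a file whose name is illustrative:

```shell
# Apply the PackageRepository manifest while logged in to the TKG cluster.
# The filename package-repository.yaml is a placeholder for wherever you saved it.
kubectl apply -f package-repository.yaml
```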
- Verify the package repository.
kubectl get packagerepository -A
NAMESPACE    NAME             AGE     DESCRIPTION
tkg-system   tanzu-standard   2m22s   Reconcile succeeded
- Verify the existence of the cluster autoscaler package.
kubectl get package
NAME                                                        PACKAGEMETADATA NAME                  VERSION                 AGE
cert-manager.tanzu.vmware.com.1.7.2+vmware.3-tkg.1          cert-manager.tanzu.vmware.com         1.7.2+vmware.3-tkg.1    5s
cert-manager.tanzu.vmware.com.1.7.2+vmware.3-tkg.3          cert-manager.tanzu.vmware.com         1.7.2+vmware.3-tkg.3    5s
cluster-autoscaler.tanzu.vmware.com.1.25.1+vmware.1-tkg.3   cluster-autoscaler.tanzu.vmware.com   1.25.1+vmware.1-tkg.3   5s
cluster-autoscaler.tanzu.vmware.com.1.26.2+vmware.1-tkg.3   cluster-autoscaler.tanzu.vmware.com   1.26.2+vmware.1-tkg.3   5s
cluster-autoscaler.tanzu.vmware.com.1.27.2+vmware.1-tkg.3   cluster-autoscaler.tanzu.vmware.com   1.27.2+vmware.1-tkg.3   5s
contour.tanzu.vmware.com.1.26.2+vmware.1-tkg.1              contour.tanzu.vmware.com              1.26.2+vmware.1-tkg.1   5s
...
Install the Autoscaler Package
The autoscaler deployment runs in the kube-system namespace.
- Create the autoscaler.yaml configuration file. You can customize the autoscaler by changing the autoscaler-data-values section of the specification with appropriate values for your environment.
#autoscaler.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: autoscaler-sa
  namespace: tkg-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: autoscaler-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: autoscaler-sa
  namespace: tkg-system
---
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageInstall
metadata:
  name: autoscaler
  namespace: tkg-system
spec:
  serviceAccountName: autoscaler-sa
  packageRef:
    refName: cluster-autoscaler.tanzu.vmware.com
    versionSelection:
      constraints: 1.27.2+vmware.1-tkg.3
  values:
  - secretRef:
      name: autoscaler-data-values
---
apiVersion: v1
kind: Secret
metadata:
  name: autoscaler-data-values
  namespace: tkg-system
stringData:
  values.yml: |
    ---
    arguments:
      ignoreDaemonsetsUtilization: true
      maxNodeProvisionTime: 15m
      maxNodesTotal: 0
      metricsPort: 8085
      scaleDownDelayAfterAdd: 10m
      scaleDownDelayAfterDelete: 10s
      scaleDownDelayAfterFailure: 3m
      scaleDownUnneededTime: 10m
    clusterConfig:
      clusterName: "gc1"
      clusterNamespace: "cluster"
    paused: false
- Install the cluster autoscaler package.
kubectl apply -f autoscaler.yaml
- Verify the installation of the autoscaler package.
kubectl get pkgi -A | grep autoscaler
Expected result:
tkg-system   autoscaler   cluster-autoscaler.tanzu.vmware.com   1.27.2+vmware.1-tkg.3   Reconcile succeeded   3m52s
- Verify the autoscaler deployment.
kubectl get pods -n kube-system | grep autoscaler
cluster-autoscaler-798b65bd9f-bht8n 1/1 Running 0 2m
Test Cluster Autoscaling
To test cluster autoscaling, deploy an application, increase the number of replicas, and verify that additional worker nodes are scaled out to handle the load.
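As a sketch, such a test might look like the following. The deployment name, image, and replica count are illustrative, and the pods may need CPU/memory requests large enough that the requested replicas exceed current node capacity before a scale-out is triggered:

```shell
# Deploy a test application (name and image are illustrative).
kubectl create deployment autoscaler-test --image=nginx --replicas=1
# Give each replica a resource request so that scaling it out
# eventually exhausts the existing worker nodes.
kubectl set resources deployment autoscaler-test --requests=cpu=500m,memory=512Mi
# Increase the replica count well beyond current capacity.
kubectl scale deployment autoscaler-test --replicas=50
# Watch new worker nodes join, up to the max-size annotation (5 in this example).
kubectl get nodes -w
```

Scaling the deployment back down should, after the configured scaleDownUnneededTime, cause the extra worker nodes to be removed again, down to the min-size annotation.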
Upgrade Autoscaled Cluster
To upgrade an autoscaled cluster, pause the autoscaler package.
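One way to pause it, assuming the resource names from the autoscaler.yaml example above, is sketched here:

```shell
# Option 1: set paused: true in the values.yml data of the
# autoscaler-data-values secret, then let the package reconcile.
kubectl edit secret autoscaler-data-values -n tkg-system
# Option 2: pause reconciliation of the PackageInstall itself
# (pkgi is the Carvel short name for PackageInstall).
kubectl patch pkgi autoscaler -n tkg-system --type merge -p '{"spec":{"paused":true}}'
```

After the cluster upgrade completes, reverse the change to resume autoscaling.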