This topic describes the VMware-recommended procedure for sizing VMs for VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) cluster components.

Overview

When you configure plans in the Tanzu Kubernetes Grid Integrated Edition tile, you provide VM sizes for the control plane and worker node VMs. For more information about configuring plans, see the Plans section of Installing Tanzu Kubernetes Grid Integrated Edition for your IaaS.

You select the number of control plane nodes when you configure the plan.

For worker node VMs, you select the number and size based on the needs of your workload. The sizing of control plane and worker node VMs is highly dependent on the characteristics of the workload. Adapt the recommendations in this topic based on your own workload requirements.

Control Plane Node VM Size

The control plane node VM size is linked to the number of worker nodes. The VM sizing shown in the following table is per control plane node:

Note: If there are multiple control plane nodes, all control plane node VMs are the same size. To configure the number of control plane nodes, see the Plans section of Installing Tanzu Kubernetes Grid Integrated Edition for your IaaS.

To customize the size of the Kubernetes control plane node VM, see Customize Control Plane and Worker Node VM Size and Type.

Number of Workers    CPU    RAM (GB)
1-5                  1      3.75
6-10                 2      7.5
11-100               4      15
101-250              8      30
251-500              16     60
501+                 32     120
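
If you automate cluster planning, you can express this table as a simple lookup. The following Python sketch is illustrative only; the function name and data structure are not part of TKGI:

    # Recommended per-node control plane VM size, keyed by the highest
    # worker count that each tier supports (values from the table above).
    CONTROL_PLANE_SIZES = [
        (5, 1, 3.75),
        (10, 2, 7.5),
        (100, 4, 15),
        (250, 8, 30),
        (500, 16, 60),
    ]

    def pick_control_plane_size(workers):
        """Return (cpu, ram_gb) for one control plane node VM."""
        for max_workers, cpu, ram_gb in CONTROL_PLANE_SIZES:
            if workers <= max_workers:
                return cpu, ram_gb
        return 32, 120  # 501 or more workers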

Do not overload your control plane node VMs by exceeding the recommended maximum number of worker node VMs or by provisioning smaller VMs than the recommended sizes listed above. These recommendations support both the typical workload that a control plane VM manages and the higher than usual workload it manages while other VMs in the cluster are upgrading.

Warning: Upgrading an overloaded Kubernetes cluster control plane node VM can result in downtime.

Worker Node VM Number and Size

A maximum of 100 pods can run on a single worker node. The actual number of pods that each worker node runs depends on the workload type as well as the CPU and memory requirements of the workload.

To calculate the number and size of worker VMs you require, determine the following for your workload:

  • Maximum number of pods you expect to run [p]
  • Memory requirements per pod [m]
  • CPU requirements per pod [c]

Using the values above, you can calculate the following, as shown in the sketch after this list:

  • Minimum number of workers [W] = p / 100, rounded up to a whole number
  • Minimum RAM per worker = m * 100
  • Minimum number of CPUs per worker = c * 100
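
The following is a minimal Python sketch of these formulas, assuming the 100-pod-per-worker maximum described above. The function and variable names are illustrative, not part of TKGI:

    import math

    PODS_PER_WORKER = 100  # maximum pods that one worker node can run

    def minimum_worker_requirements(p, m, c):
        """Return minimum (workers, ram_gb_per_worker, cpus_per_worker).

        p: maximum number of pods, m: RAM (GB) per pod, c: CPUs per pod.
        """
        workers = math.ceil(p / PODS_PER_WORKER)  # round up to whole workers
        return workers, m * PODS_PER_WORKER, c * PODS_PER_WORKER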

This calculation gives you the minimum number of worker nodes your workload requires. We recommend that you increase this value to account for failures and upgrades.

For example, increase the number of worker nodes by at least one to maintain workload uptime during an upgrade. Additionally, increase the number of worker nodes to fit your own failure tolerance criteria.
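
As a sketch, this headroom calculation, with the one-worker upgrade buffer described above and a failure tolerance value that you choose, might look like the following:

    def recommended_workers(minimum, failure_tolerance):
        """Add one worker for upgrades, plus your failure-tolerance buffer."""
        return minimum + 1 + failure_tolerance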

The maximum number of worker nodes that you can create for a plan in a Tanzu Kubernetes Grid Integrated Edition-provisioned Kubernetes cluster is set by the Maximum number of workers on a cluster field in the Plans pane of the Tanzu Kubernetes Grid Integrated Edition tile. To customize the size of the Kubernetes worker node VM, see Customize Control Plane and Worker Node VM Size and Type.

Example Worker Node Requirement Calculation

An example app has the following minimum requirements:

  • Number of pods [p] = 1000
  • RAM per pod [m] = 1 GB
  • CPU per pod [c] = 0.10

To determine how many worker node VMs the app requires, do the following:

  1. Calculate the number of workers using p / 100:
    1000/100 = 10 workers
    
  2. Calculate the minimum RAM per worker using m * 100:
    1 * 100 = 100 GB
    
  3. Calculate the minimum number of CPUs per worker using c * 100:
    0.10 * 100 = 10 CPUs
    
  4. For upgrades, increase the number of workers by one:
    10 workers + 1 worker = 11 workers
    
  5. For failure tolerance, increase the number of workers by two:
    11 workers + 2 workers = 13 workers
    

In total, this app workload requires 13 workers, each with 10 CPUs and 100 GB of RAM.
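
Using the illustrative sketches from earlier in this topic, you can verify the example calculation:

    workers, ram_gb, cpus = minimum_worker_requirements(p=1000, m=1, c=0.10)
    total = recommended_workers(workers, failure_tolerance=2)
    print(workers, ram_gb, cpus)  # 10 workers, 100 GB RAM, ~10 CPUs each
    print(total)                  # 13 workers in total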

Customize Control Plane and Worker Node VM Size and Type

You select the CPU, memory, and disk space for the Kubernetes node VMs from a set list in the Tanzu Kubernetes Grid Integrated Edition tile. Control plane and worker node VM sizes and types are selected on a per-plan basis. For more information, see the Plans section of the Tanzu Kubernetes Grid Integrated Edition installation topic for your IaaS. For example, Installing Tanzu Kubernetes Grid Integrated Edition on vSphere with NSX-T.

While the list of available node VM types and sizes is extensive, it might not include the exact VM type and size that you want. You can use the Ops Manager API to customize the sizes and types of the control plane and worker node VMs. For more information, see How to Create or Remove Custom VM_TYPE Template using the Operations Manager API in the Knowledge Base.
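
The following Python sketch outlines what such a customization might look like, assuming the /api/v0/vm_types Ops Manager API endpoint that the KB article describes and a UAA access token for Ops Manager. The field names, MB units, and replace-the-whole-list behavior are assumptions to verify against the KB article for your Ops Manager version:

    import requests

    OPS_MAN = "https://OPS-MANAGER-FQDN"  # replace with your Ops Manager URL
    HEADERS = {"Authorization": "Bearer UAA-ACCESS-TOKEN"}  # replace with a real token

    # PUT is assumed to replace the entire vm_types list, so fetch the
    # current list first and append to it.
    vm_types = requests.get(OPS_MAN + "/api/v0/vm_types",
                            headers=HEADERS).json()["vm_types"]

    # Append a custom worker size: 10 CPUs, 100 GB RAM, matching the
    # example calculation above (RAM and disk values assumed to be in MB).
    vm_types.append({
        "name": "xlarge-worker-custom",
        "cpu": 10,
        "ram": 102400,
        "ephemeral_disk": 65536,
    })

    requests.put(OPS_MAN + "/api/v0/vm_types",
                 headers=HEADERS, json={"vm_types": vm_types})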

Warning: Do not reduce the size of your Kubernetes control plane node VMs below the recommended sizes listed in Control Plane Node VM Size, above. Upgrading an overloaded Kubernetes cluster control plane node VM can result in downtime.
