Tanzu Kubernetes Grid (TKG) supports single-node clusters. Single-node clusters are workload clusters on which hosted workloads run alongside control plane infrastructure on a single ESXi host.
To further minimize the footprint of a single-node cluster, you can create it from a tiny Tanzu Kubernetes release (TKr), which has a Photon or Ubuntu Tiny OVA for its base OS. Such clusters are called minimal single-node clusters.
Single-node clusters are class-based workload clusters that run on vSphere and are deployed by standalone management clusters.
Single-node clusters are fully supported for Telco Cloud Automation (TCA).
Note: You cannot use Tanzu Mission Control (TMC) to create and manage single-node clusters, but this capability is planned for a future release of TMC.
Single-node cluster use cases include resource-constrained edge deployments where a minimal footprint matters.
Single-node clusters are supported for the following environments and components:
| Category | Supported Options |
|---|---|
| Infrastructure | vSphere 7, vSphere 8 |
| Node OS | Ubuntu 22.04, Photon 5 (Kubernetes v1.27 and v1.28); Ubuntu 20.04, Photon 3 (Kubernetes v1.27 and earlier) |
| Node size | small |
| Packages | Cert Manager, Fluent Bit, Multus, Prometheus, Whereabouts |
| Control plane endpoint provider | Kube-Vip* |
| Workload load balancer | Kube-Vip* |
| Workload cluster type | Class-based |
| CNI | Antrea, Calico |
| Connectivity mode | Online, internet-restricted |
*NSX Advanced Load Balancer (ALB) is not supported with single-node clusters in TKG v2.5.
To create a single-node workload cluster on vSphere that uses a tiny Tanzu Kubernetes release (TKr) to minimize its footprint:
Prepare the OVA:
Go to the Tanzu Kubernetes Grid downloads page and select 2.5.2.
Download the Tiny OVA to use for your single-node cluster:
- Photon v5 Kubernetes v1.28.11 Tiny OVA
- Ubuntu 2204 Kubernetes v1.28.11 Tiny OVA
Import the Tiny OVA into your vSphere environment and convert it into a VM template as described in Import the Base Image Template into vSphere.
Create the single-node workload cluster.
Note: To create minimal single-node clusters, you must run the tanzu cluster create command with a Kubernetes-style object spec. If you start with a flat cluster configuration file, follow the two-step process described in Create a Class-Based Cluster to generate the object spec, and then edit it as described below before running tanzu cluster create a second time to create the cluster.
Set environment variables, as in this example:
export CLUSTER_NAME='workload-snc'
export CLUSTER_NAMESPACE='default'
export CLUSTER_CIDR='100.96.0.0/11'
export SERVICE_CIDR='100.64.0.0/13'
export VSPHERE_CONTROL_PLANE_ENDPOINT=10.185.11.134
export VSPHERE_SERVER=10.185.12.154
export VSPHERE_USERNAME='[email protected]'
export VSPHERE_PASSWORD=<encoded:QWRtaW4hMjM=>
export VSPHERE_DATACENTER='/dc0'
export VSPHERE_DATASTORE='/dc0/datastore/sharedVmfs-0'
export VSPHERE_FOLDER='/dc0/vm'
export VSPHERE_NETWORK='/dc0/network/VM Network'
export VSPHERE_RESOURCE_POOL='/dc0/host/cluster0/Resources'
export VSPHERE_SSH_AUTHORIZED_KEY='ssh-rsa AAAAB3[...]tyaw== [email protected]'
export VSPHERE_TLS_THUMBPRINT='47:F5:83:8E:5D:36:[...]:72:5A:89:7D:29:E5:DA'
export VSPHERE_CONTROL_PLANE_NUM_CPUS='2'
export VSPHERE_CONTROL_PLANE_MEM_MIB='4096'
export VSPHERE_CONTROL_PLANE_DISK_GIB='20'
export TKG_CUSTOM_IMAGE_REPOSITORY='projects.registry.vmware.com/tkg'
export OS_NAME='photon'
export TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE="LS0tL[...]0tLQo="
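Because the manifest in the next step references these variables, any that are unset expand to empty strings and produce an invalid spec. The following is a minimal sketch of a pre-flight check; the variable names are taken from the example above, but the helper itself is not part of the Tanzu CLI:

```python
import os

# Names taken from the export statements above; trim or extend to match
# the variables your manifest actually references.
REQUIRED_VARS = [
    "CLUSTER_NAME", "CLUSTER_NAMESPACE", "CLUSTER_CIDR", "SERVICE_CIDR",
    "VSPHERE_CONTROL_PLANE_ENDPOINT", "VSPHERE_SERVER", "VSPHERE_USERNAME",
    "VSPHERE_PASSWORD", "VSPHERE_DATACENTER", "VSPHERE_DATASTORE",
    "VSPHERE_FOLDER", "VSPHERE_NETWORK", "VSPHERE_RESOURCE_POOL",
]

def missing_vars(env=None):
    """Return the names of required variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]

# Example with a sample environment dict: only CLUSTER_NAME is set, so
# every other required name is reported as missing.
sample = {"CLUSTER_NAME": "workload-snc"}
print(missing_vars(sample))
```

Running `missing_vars()` with no argument checks the real shell environment, so you can call it just before generating the manifest.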
Create a manifest vsphere-snc.yaml with Cluster and Secret object specs referencing the variables above:
cat > vsphere-snc.yaml <<EOF
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  annotations:
    tkg.tanzu.vmware.com/cluster-controlplane-endpoint: ${VSPHERE_CONTROL_PLANE_ENDPOINT}
    run.tanzu.vmware.com/resolve-tkr: 'tkr.tanzu.vmware.com/tiny'
  labels:
    tkg.tanzu.vmware.com/cluster-name: ${CLUSTER_NAME}
  name: ${CLUSTER_NAME}
  namespace: ${CLUSTER_NAMESPACE}
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - ${CLUSTER_CIDR}
    services:
      cidrBlocks:
      - ${SERVICE_CIDR}
  topology:
    class: tkg-vsphere-default-v1.1.1
    controlPlane:
      metadata:
        annotations:
          run.tanzu.vmware.com/resolve-os-image: image-type=ova,os-name=${OS_NAME}
      replicas: 1
    variables:
    - name: controlPlaneTaint
      value: false
    - name: auditLogging
      value:
        enabled: false
    - name: apiServerEndpoint
      value: ${VSPHERE_CONTROL_PLANE_ENDPOINT}
    - name: aviAPIServerHAProvider
      value: false
    - name: imageRepository
      value:
        host: ${TKG_CUSTOM_IMAGE_REPOSITORY}
    - name: trust
      value:
        additionalTrustedCAs:
        - data: ${TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE}
          name: imageRepository
    - name: vcenter
      value:
        cloneMode: fullClone
        datacenter: ${VSPHERE_DATACENTER}
        datastore: ${VSPHERE_DATASTORE}
        folder: ${VSPHERE_FOLDER}
        network: ${VSPHERE_NETWORK}
        resourcePool: ${VSPHERE_RESOURCE_POOL}
        server: ${VSPHERE_SERVER}
        storagePolicyID: ""
        tlsThumbprint: ${VSPHERE_TLS_THUMBPRINT}
    - name: user
      value:
        sshAuthorizedKeys:
        - ${VSPHERE_SSH_AUTHORIZED_KEY}
    - name: controlPlane
      value:
        machine:
          diskGiB: ${VSPHERE_CONTROL_PLANE_DISK_GIB}
          memoryMiB: ${VSPHERE_CONTROL_PLANE_MEM_MIB}
          numCPUs: ${VSPHERE_CONTROL_PLANE_NUM_CPUS}
    version: v1.28.11+vmware.1-tiny.2
---
apiVersion: v1
kind: Secret
metadata:
  name: ${CLUSTER_NAME}
  namespace: ${CLUSTER_NAMESPACE}
stringData:
  password: ${VSPHERE_PASSWORD}
  username: ${VSPHERE_USERNAME}
EOF
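Because the heredoc delimiter (EOF) is unquoted, the shell expands every ${VAR} reference from the environment as it writes the file. If you generate the manifest from another tool instead of the shell, the same expansion can be sketched with Python's string.Template, shown here on a small fragment of the manifest (the values are the example values from above):

```python
from string import Template

# A fragment of the Cluster manifest; Template.substitute mimics the
# shell's ${VAR} expansion inside the unquoted heredoc.
fragment = Template(
    "metadata:\n"
    "  name: ${CLUSTER_NAME}\n"
    "  namespace: ${CLUSTER_NAMESPACE}\n"
)
rendered = fragment.substitute(
    CLUSTER_NAME="workload-snc",
    CLUSTER_NAMESPACE="default",
)
print(rendered)
```

Note that `substitute` raises `KeyError` for any unresolved placeholder, which is stricter (and safer here) than the shell, which silently expands unset variables to empty strings.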
Note the following:
- The metadata.annotations setting for run.tanzu.vmware.com/resolve-tkr.
- The topology.variables setting for controlPlaneTaint.
- No topology.workers block, only topology.controlPlane.
- If you use an earlier tiny TKr, set topology.version to the matching version, for example:
  - v1.28.4+vmware.1-tiny.1
  - v1.27.5+vmware.1-tiny.1
  - v1.26.8+vmware.1-tiny.1
  - v1.25.7+vmware.1-tiny.1
  - v1.24.10+vmware.1-tiny.1
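The notes above amount to invariants you can check mechanically before applying the spec. This is a minimal sketch, not a Tanzu CLI feature; the topology is hand-written here as Python dicts mirroring the manifest (in practice you would parse the YAML first):

```python
def check_single_node(topology):
    """Return a list of problems that would make this spec not single-node."""
    problems = []
    if topology.get("controlPlane", {}).get("replicas") != 1:
        problems.append("controlPlane.replicas must be 1")
    if "workers" in topology:
        problems.append("topology.workers block must be absent")
    taint = next((v["value"] for v in topology.get("variables", [])
                  if v["name"] == "controlPlaneTaint"), None)
    if taint is not False:
        problems.append("controlPlaneTaint must be set to false")
    return problems

# Mirrors the relevant parts of the manifest above:
topology = {
    "controlPlane": {"replicas": 1},
    "variables": [{"name": "controlPlaneTaint", "value": False}],
}
print(check_single_node(topology))  # empty list: spec is single-node
```

Setting controlPlaneTaint to false matters because with no workers block, workloads must schedule onto the control plane node, which the default taint would prevent.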
(Optional) To configure the cluster to use Calico as the CNI instead of the default Antrea CNI, follow the instructions for single-node clusters in Calico CNI for Single-Node Class-Based Workload Clusters.
Apply the Cluster object manifest:
tanzu cluster create -f vsphere-snc.yaml
To create a single-node workload cluster on vSphere that uses a standard Photon or Ubuntu TKr:
Create a flat configuration file for the workload cluster as described in vSphere with Standalone Management Cluster Configuration Files.
Run tanzu cluster create with the --dry-run flag to convert the flat configuration file into a Kubernetes-style Cluster object spec as described in Create an Object Spec.
Edit the Cluster object spec to include the following settings:
Under topology.controlPlane:

replicas: 1

No topology.workers block; if present, delete it.

Under topology.variables:

- name: controlPlaneTaint
  value: false
(Optional) To set maximum pod counts, under topology.variables include max-pods settings as extra arguments for the control plane or worker kubelets. For example, to set both control plane and worker pod counts to a maximum of 254:
- name: controlPlaneKubeletExtraArgs
  value:
    max-pods: "254"
- name: workerKubeletExtraArgs
  value:
    max-pods: "254"
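Note that the max-pods values above are YAML strings ("254"), not integers: the kubelet extra-args variables are string-to-string maps. As a small illustration of that shape, here is a hypothetical helper (not part of any Tanzu tooling) that reads such a value back out of a parsed topology.variables list:

```python
def kubelet_extra_arg(variables, var_name, arg):
    """Look up one kubelet extra arg (e.g. max-pods) from topology.variables."""
    for var in variables:
        if var["name"] == var_name:
            return var["value"].get(arg)
    return None

# Mirrors the snippet above, parsed into Python dicts:
variables = [
    {"name": "controlPlaneKubeletExtraArgs", "value": {"max-pods": "254"}},
    {"name": "workerKubeletExtraArgs", "value": {"max-pods": "254"}},
]
print(kubelet_extra_arg(variables, "workerKubeletExtraArgs", "max-pods"))
```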
Run tanzu cluster create with the modified Cluster object spec as described in Create a Class-Based Cluster from the Object Spec.