Tanzu Kubernetes Grid (TKG) supports single-node clusters. Single-node clusters are workload clusters on which hosted workloads run alongside control plane infrastructure on a single ESXi host.
To further minimize the footprint of a single-node cluster, you can create it from a tiny Tanzu Kubernetes release (TKr), which has a Photon or Ubuntu Tiny OVA for its base OS. Such clusters are called minimal single-node clusters.
Single-node clusters are class-based workload clusters that run on vSphere 8 and are deployed by standalone management clusters.
Note: This feature is in the unsupported Technical Preview state; see TKG Feature States.
Prerequisites:
A bootstrap machine with the following installed, as described in Install the Tanzu CLI and Other Tools for Use with Standalone Management Clusters:
kubectl
imgpkg
The single-node-clusters setting enabled in the Tanzu CLI:
tanzu config set features.cluster.single-node-clusters true
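To confirm that the flag is set, you can print the current CLI configuration; in recent Tanzu CLI versions:
tanzu config get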
To create a single-node workload cluster on vSphere that uses a standard Photon or Ubuntu TKr:
Create a flat configuration file for the workload cluster as described in vSphere with Standalone Management Cluster Configuration Files.
Run tanzu cluster create with the --dry-run flag to convert the flat configuration file into a Kubernetes-style Cluster object spec as described in Create an Object Spec.
Edit the Cluster object spec to include the following settings; a combined sketch follows the list:
Under topology.controlPlane:
replicas: 1
No topology.workers block; if present, delete it.
Under topology.variables:
- name: controlPlaneTaint
  value: false
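Taken together, the edited topology section looks something like the following sketch; the class and version values here are illustrative, so keep whatever your generated spec contains:
spec:
  topology:
    class: tkg-vsphere-default-v1.0.0
    controlPlane:
      replicas: 1
    variables:
    - name: controlPlaneTaint
      value: false
    version: v1.24.10+vmware.1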
Run tanzu cluster create with the modified Cluster object spec as described in Create a Class-Based Cluster from the Object Spec.
To create a single-node workload cluster on vSphere that uses a tiny Tanzu Kubernetes release (TKr) to minimize its footprint:
Patch the vSphere ClusterClass definition to prevent the management cluster from using tiny TKrs when it creates standard, non-single-node clusters:
kubectl annotate --overwrite clusterclass tkg-vsphere-default-v1.0.0 run.tanzu.vmware.com/resolve-tkr='!tkr.tanzu.vmware.com/tiny'
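To verify that the annotation is in place, you can inspect the ClusterClass metadata, for example:
kubectl get clusterclass tkg-vsphere-default-v1.0.0 -o jsonpath='{.metadata.annotations}'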
Install the tiny TKr in the management cluster:
Set the context of kubectl to your management cluster:
kubectl config use-context MY-MGMT-CLUSTER-admin@MY-MGMT-CLUSTER
Where MY-MGMT-CLUSTER is the name of your management cluster.
Create a ConfigMap definition file tiny-tkr-cm.yaml for the tiny TKr with the following code:
apiVersion: v1
data:
  tkrVersions: '["v1.24.10+vmware.1-tiny.1-tkg.1"]'
kind: ConfigMap
metadata:
  name: tkg-compatibility-versions-v1.24.10---vmware.1-tiny.1-tkg.1
  namespace: tkg-system
  labels:
    run.tanzu.vmware.com/additional-compatible-tkrs: ""
Note: For TKG v2.1.0, substitute Kubernetes v1.24.9 versions here and below: TKr v1.24.9+vmware.1-tiny.2-tkg.1, the v1.24.9 Tiny OVA, and the ConfigMap name and other strings changed to match.
Apply the TKr ConfigMap:
kubectl apply -f tiny-tkr-cm.yaml
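To confirm that the ConfigMap was created, for example:
kubectl get configmap tkg-compatibility-versions-v1.24.10---vmware.1-tiny.1-tkg.1 -n tkg-system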
Download the TKr package manifest and metadata to a /tmp/ directory:
imgpkg pull -b projects.registry.vmware.com/tkg/tkr-repository-vsphere-edge:v1.24.10_vmware.1-tiny.1-tkg.1 -o /tmp/tkr-repository-vsphere-edge
In the TKr package manifest, change the metadata.namespace setting to "tkg-system" in either of the following ways:
Run the following yq command:
yq -i e '.metadata.namespace = "tkg-system"' /tmp/tkr-repository-vsphere-edge/packages/tkr-vsphere-edge.tanzu.vmware.com/1.24.10+vmware.1-tiny.1-tkg.1.yml
Or, edit the manifest to add the setting namespace: "tkg-system" under metadata:
metadata:
  [...]
  namespace: "tkg-system"
Apply the TKr manifest:
kubectl apply -f /tmp/tkr-repository-vsphere-edge/packages/tkr-vsphere-edge.tanzu.vmware.com/1.24.10+vmware.1-tiny.1-tkg.1.yml
After a few minutes, run kubectl get to verify that the tiny TKr, cluster bootstrap template, and OS image objects were all created. For example:
kubectl get tkr,cbt,osimage -A | grep -i tiny
The output should look something like:
v1.24.10---vmware.1-tiny.1-tkg.1 v1.24.10+vmware.1-tiny.1-tkg.1 True True 16m
tkg-system v1.24.10---vmware.1-tiny.1-tkg.1 antrea.tanzu.vmware.com.1.7.2+vmware.1-tkg.1-advanced vsphere-csi.tanzu.vmware.com.2.6.2+vmware.2-tkg.1 vsphere-cpi.tanzu.vmware.com.1.24.3+vmware.1-tkg.1 kapp-controller.tanzu.vmware.com.0.41.5+vmware.1-tkg.1
v1.24.10---vmware.1-tiny.1- 1.b3 v1.24.10+vmware.1-tiny.1 ubuntu 2004 amd64 ova 16m
v1.24.10---vmware.1-tiny.1-tkg.1-ac20b3 v1.24.10+vmware.1-tiny.1 photon 3 amd64 ova 16m
Prepare the OVA:
On the Tanzu Kubernetes Grid downloads page, in the version drop-down, select 2.2.0.
Under Tiny TKG OVAs (Technical Preview), download the Tiny OVA to use for your single-node cluster:
Photon v3 Kubernetes v1.24.10 Tiny OVA (Technical Preview)
Ubuntu 2004 Kubernetes v1.24.10 Tiny OVA (Technical Preview)
Import the Tiny OVA into your vSphere environment and convert it into a VM template as described in Import the Base Image Template into vSphere.
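If you prefer to script the import, a minimal sketch using govc is shown below; it assumes govc is installed and configured with GOVC_URL and credentials, and the template name and OVA path are hypothetical placeholders:
# Template name and local OVA path are placeholders; adjust to your download.
govc import.ova -name photon-tiny-v1.24.10 ./photon-tiny-v1.24.10.ova
govc vm.markastemplate photon-tiny-v1.24.10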
Create the single-node workload cluster.
Note: For single-node clusters, the tanzu cluster create command cannot yet convert flat cluster configuration files into Kubernetes-style object specs as described in Create Workload Clusters. Instead, set values in environment variables and create the cluster from an object spec directly, as in the following steps.
Set environment variables as in this example:
export CLUSTER_NAME='workload-snc'
export CLUSTER_NAMESPACE='default'
export CLUSTER_CIDR='100.96.0.0/11'
export SERVICE_CIDR='100.64.0.0/13'
export VSPHERE_CONTROL_PLANE_ENDPOINT=10.185.11.134
export VSPHERE_SERVER=10.185.12.154
export VSPHERE_USERNAME='[email protected]'
export VSPHERE_PASSWORD='<encoded:QWRtaW4hMjM=>'
export VSPHERE_DATACENTER='/dc0'
export VSPHERE_DATASTORE='/dc0/datastore/sharedVmfs-0'
export VSPHERE_FOLDER='/dc0/vm'
export VSPHERE_NETWORK='/dc0/network/VM Network'
export VSPHERE_RESOURCE_POOL='/dc0/host/cluster0/Resources'
export VSPHERE_SSH_AUTHORIZED_KEY='ssh-rsa AAAAB3[...]tyaw== [email protected]'
export VSPHERE_TLS_THUMBPRINT='47:F5:83:8E:5D:36:[...]:72:5A:89:7D:29:E5:DA'
export VSPHERE_CONTROL_PLANE_NUM_CPUS='4'
export VSPHERE_CONTROL_PLANE_MEM_MIB='4096'
export VSPHERE_CONTROL_PLANE_DISK_GIB='20'
export TKG_CUSTOM_IMAGE_REPOSITORY='projects.registry.vmware.com/tkg'
export OS_NAME='photon'
export TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE="LS0tL[...]0tLQo="
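Before generating the manifest, you can spot-check that the variables are set in your shell, for example:
env | grep -E '^(CLUSTER_|VSPHERE_|TKG_|OS_NAME)'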
Create a manifest vsphere-snc.yaml with Cluster and Secret object specs that reference the variables above. Because the specs use ${...} references, create the file with a heredoc so that the shell expands them:
cat > vsphere-snc.yaml <<EOF
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  annotations:
    tkg.tanzu.vmware.com/cluster-controlplane-endpoint: ${VSPHERE_CONTROL_PLANE_ENDPOINT}
    run.tanzu.vmware.com/resolve-tkr: 'tkr.tanzu.vmware.com/tiny'
  labels:
    tkg.tanzu.vmware.com/cluster-name: ${CLUSTER_NAME}
  name: ${CLUSTER_NAME}
  namespace: ${CLUSTER_NAMESPACE}
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - ${CLUSTER_CIDR}
    services:
      cidrBlocks:
      - ${SERVICE_CIDR}
  topology:
    class: tkg-vsphere-default-v1.0.0
    controlPlane:
      metadata:
        annotations:
          run.tanzu.vmware.com/resolve-os-image: image-type=ova,os-name=${OS_NAME}
      replicas: 1
    variables:
    - name: controlPlaneTaint
      value: false
    - name: auditLogging
      value:
        enabled: false
    - name: apiServerEndpoint
      value: ${VSPHERE_CONTROL_PLANE_ENDPOINT}
    - name: aviAPIServerHAProvider
      value: false
    - name: imageRepository
      value:
        host: ${TKG_CUSTOM_IMAGE_REPOSITORY}
    - name: trust
      value:
        additionalTrustedCAs:
        - data: ${TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE}
          name: imageRepository
    - name: vcenter
      value:
        cloneMode: fullClone
        datacenter: ${VSPHERE_DATACENTER}
        datastore: ${VSPHERE_DATASTORE}
        folder: ${VSPHERE_FOLDER}
        network: ${VSPHERE_NETWORK}
        resourcePool: ${VSPHERE_RESOURCE_POOL}
        server: ${VSPHERE_SERVER}
        storagePolicyID: ""
        tlsThumbprint: ${VSPHERE_TLS_THUMBPRINT}
    - name: user
      value:
        sshAuthorizedKeys:
        - ${VSPHERE_SSH_AUTHORIZED_KEY}
    - name: controlPlane
      value:
        machine:
          diskGiB: ${VSPHERE_CONTROL_PLANE_DISK_GIB}
          memoryMiB: ${VSPHERE_CONTROL_PLANE_MEM_MIB}
          numCPUs: ${VSPHERE_CONTROL_PLANE_NUM_CPUS}
    version: v1.24.10+vmware.1-tiny.1
---
apiVersion: v1
kind: Secret
metadata:
  name: ${CLUSTER_NAME}
  namespace: ${CLUSTER_NAMESPACE}
stringData:
  password: ${VSPHERE_PASSWORD}
  username: ${VSPHERE_USERNAME}
EOF
Note the following:
The metadata.annotations setting for run.tanzu.vmware.com/resolve-tkr, which resolves the cluster to the tiny TKr.
The topology.variables setting for controlPlaneTaint, which lets workloads schedule onto the control plane node.
There is no topology.workers block, only topology.controlPlane.
For TKG v2.1.0, topology.version should be v1.24.9+vmware.1-tiny.2.
(Optional) To configure the cluster to use Calico as the CNI instead of the default Antrea CNI, follow the instructions for single-node clusters in Calico CNI for Supervisor or Single-Node Class-Based Workload Clusters.
Apply the Cluster object manifest:
tanzu cluster create -f vsphere-snc.yaml
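Cluster creation takes several minutes. You can monitor progress and, once the cluster is ready, retrieve its kubeconfig with the standard Tanzu CLI commands, for example:
tanzu cluster get workload-snc
tanzu cluster kubeconfig get workload-snc --admin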
(TKG v2.1.0 only, Optional) To provision Persistent Volumes (PVs) for the cluster, create a vSphere StorageClass object.
Note: These steps are not needed with TKG v2.1.1 and later.
Create a manifest vspherestorageclass.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: default
provisioner: csi.vsphere.vmware.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
Apply the StorageClass manifest:
kubectl apply -f vspherestorageclass.yaml --kubeconfig=KUBECONFIG
Where KUBECONFIG is the kubeconfig file for the single-node cluster.
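To confirm that the StorageClass is registered and dynamic provisioning works, you could list the storage classes and create a small test claim; the claim name and size here are arbitrary:
kubectl get storageclass --kubeconfig=KUBECONFIG
kubectl apply --kubeconfig=KUBECONFIG -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
With volumeBindingMode: Immediate, the claim should bind to a dynamically provisioned PV within a few moments.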