Tanzu Kubernetes Grid (TKG) supports single-node clusters. Single-node clusters are workload clusters on which hosted workloads run alongside control plane infrastructure on a single ESXi host.
To further minimize the footprint of a single-node cluster, you can create it from a tiny Tanzu Kubernetes release (TKr), which has a Photon or Ubuntu Tiny OVA for its base OS. Such clusters are called minimal single-node clusters.
Single-node clusters are class-based workload clusters that run on vSphere 8 and are deployed by standalone management clusters.
Note: This feature is in the unsupported Technical Preview state; see TKG Feature States.
TKG does not support upgrading clusters that run previous versions of tiny TKr. To update a minimal single-node cluster to the latest tiny TKr version, delete the old cluster and create a new one.
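For example, assuming an existing minimal single-node cluster named my-tiny-cluster (a placeholder name), the update flow is:
tanzu cluster delete my-tiny-cluster
# after deletion completes, re-create the cluster with the new tiny TKr as described below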
Use cases include minimal-footprint, resource-constrained deployments such as edge sites.
Prerequisites:
A bootstrap machine with the following installed, as described in Install the Tanzu CLI and Other Tools for Use with Standalone Management Clusters:
kubectl
imgpkg
The single-node-clusters setting enabled in the Tanzu CLI:
tanzu config set features.cluster.single-node-clusters true
To create a single-node workload cluster on vSphere that uses a standard Photon or Ubuntu TKr:
Create a flat configuration file for the workload cluster as described in vSphere with Standalone Management Cluster Configuration Files.
Run tanzu cluster create with the --dry-run flag to convert the flat configuration file into a Kubernetes-style Cluster object spec as described in Create an Object Spec.
Edit the Cluster object spec to include the following settings; a combined excerpt is shown after this list:
Under topology.controlPlane:
replicas: 1
No topology.workers block; if present, delete it.
Under topology.variables:
- name: controlPlaneTaint
  value: false
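Combined, the relevant part of the edited spec looks like the following excerpt (the class name shown is the default vSphere ClusterClass used elsewhere in this topic; the rest of the generated spec is unchanged):
topology:
  class: tkg-vsphere-default-v1.0.0
  controlPlane:
    replicas: 1
  # no topology.workers block
  variables:
  - name: controlPlaneTaint
    value: false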
Run tanzu cluster create with the modified Cluster object spec as described in Create a Class-Based Cluster from the Object Spec.
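For example, if you saved the edited spec as cluster-spec.yaml (an assumed file name):
tanzu cluster create -f cluster-spec.yaml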
To create a single-node workload cluster on vSphere that uses a tiny Tanzu Kubernetes release (TKr) to minimize its footprint:
Patch the vSphere ClusterClass definition to prevent the management cluster from using tiny TKrs when it creates standard, non-single-node clusters:
kubectl annotate --overwrite clusterclass tkg-vsphere-default-v1.0.0 run.tanzu.vmware.com/resolve-tkr='!tkr.tanzu.vmware.com/tiny'
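To confirm that the annotation is in place, you can inspect the ClusterClass; this check is illustrative and not part of the original procedure:
kubectl get clusterclass tkg-vsphere-default-v1.0.0 -o yaml | grep resolve-tkr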
Install the tiny TKr in the management cluster:
Set the context of kubectl to your management cluster:
kubectl config use-context MY-MGMT-CLUSTER-admin@MY-MGMT-CLUSTER
Where MY-MGMT-CLUSTER is the name of your management cluster.
Create ConfigMap definition file tiny-tkr-cm.yaml for the tiny TKr with the following code:
apiVersion: v1
data:
  tkrVersions: '["v1.25.7+vmware.1-tiny.2-tkg.1"]'
kind: ConfigMap
metadata:
  name: tkg-compatibility-versions-v1.25.7---vmware.1-tiny.2-tkg.1
  namespace: tkg-system
  labels:
    run.tanzu.vmware.com/additional-compatible-tkrs: ""
For releases other than TKG v2.2.0, substitute the appropriate Kubernetes version here and below: the TKr version v1.25.7+vmware.1-tiny.2-tkg.1, the v1.25.7 Tiny OVA, the ConfigMap name, and other version strings change accordingly.
Apply the TKr ConfigMap:
kubectl apply -f tiny-tkr-cm.yaml
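To verify that the ConfigMap was created, you can list it; this check is illustrative and not part of the original procedure:
kubectl get configmap tkg-compatibility-versions-v1.25.7---vmware.1-tiny.2-tkg.1 -n tkg-system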
Download the TKr package manifest and metadata to a /tmp/ directory:
imgpkg pull -b projects.registry.vmware.com/tkg/tkr-repository-vsphere-edge:v1.25.7_vmware.1-tiny.2-tkg.1 -o /tmp/tkr-repository-vsphere-edge
In the TKr package manifest, change the metadata.namespace setting to "tkg-system" in either one of the following ways:
Run the following yq command:
yq -i e '.metadata.namespace = "tkg-system"' /tmp/tkr-repository-vsphere-edge/packages/tkr-vsphere-edge.tanzu.vmware.com/1.25.7+vmware.1-tiny.2-tkg.1.yml
Edit the manifest to add the setting namespace: "tkg-system" under metadata:
metadata:
  [...]
  namespace: "tkg-system"
Apply the TKr manifest:
kubectl apply -f /tmp/tkr-repository-vsphere-edge/packages/tkr-vsphere-edge.tanzu.vmware.com/1.25.7+vmware.1-tiny.2-tkg.1.yml
After a few minutes, run kubectl get to verify that tiny TKr, cluster bootstrap template, and OS image objects were all created. For example:
kubectl get tkr,cbt,osimage -A | grep -i tiny
The output should look something like:
v1.25.7---vmware.1-tiny.2-tkg.1 v1.25.7+vmware.1-tiny.2-tkg.1 True True 16m
tkg-system v1.25.7---vmware.1-tiny.2-tkg.1 antrea.tanzu.vmware.com.1.9.0+vmware.1-tkg.1-advanced vsphere-csi.tanzu.vmware.com.2.7.1+vmware.2-tkg.1 vsphere-cpi.tanzu.vmware.com.1.25.1+vmware.2-tkg.1 kapp-controller.tanzu.vmware.com.0.41.7+vmware.1-tkg.1
v1.25.7---vmware.1-tiny.2- 1.b3 v1.25.7+vmware.1-tiny.2 ubuntu 2004 amd64 ova 16m
v1.25.7---vmware.1-tiny.2-tkg.1-ac20b3 v1.25.7+vmware.1-tiny.2 photon 3 amd64 ova 16m
Prepare the OVA:
Browse to the Tanzu Kubernetes Grid downloads page and log in to Customer Connect.
In the VMware Tanzu Kubernetes Grid row, click Go to Downloads.
Select version 2.x
and click VMware Tanzu Kubernetes Grid > GO TO DOWNLOADS.
Under Tiny TKG OVAs (Experimental), download the Tiny OVA to use for your single-node cluster:
Photon v3 Kubernetes v1.25.7 Tiny OVA (Experimental)
Ubuntu 2004 Kubernetes v1.25.7 Tiny OVA (Experimental)
Import the Tiny OVA into your vSphere environment and convert it into a VM template as described in Import the Base Image Template into vSphere.
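If you prefer the command line to the vSphere Client for this step, a sketch using the govc CLI follows; the OVA file name, template name, and inventory paths are assumptions based on the example values used below, not part of the original procedure:
# import the downloaded Tiny OVA and mark the resulting VM as a template
govc import.ova -name photon-3-kube-v1.25.7-tiny -folder /dc0/vm -pool /dc0/host/cluster0/Resources -ds sharedVmfs-0 ./photon-3-kube-v1.25.7-tiny.ova
govc vm.markastemplate photon-3-kube-v1.25.7-tiny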
Create the single-node workload cluster.
Note: For single-node clusters, tanzu cluster create cannot yet convert flat cluster configuration files into Kubernetes-style object specs as described in Create Workload Clusters.
Set environment variables as in this example (see after the block for one way to retrieve the vCenter TLS thumbprint):
export CLUSTER_NAME='workload-snc'
export CLUSTER_NAMESPACE='default'
export CLUSTER_CIDR='100.96.0.0/11'
export SERVICE_CIDR='100.64.0.0/13'
export VSPHERE_CONTROL_PLANE_ENDPOINT=10.185.11.134
export VSPHERE_SERVER=10.185.12.154
export VSPHERE_USERNAME='administrator@vsphere.local'
export VSPHERE_PASSWORD='<encoded:QWRtaW4hMjM=>'
export VSPHERE_DATACENTER='/dc0'
export VSPHERE_DATASTORE='/dc0/datastore/sharedVmfs-0'
export VSPHERE_FOLDER='/dc0/vm'
export VSPHERE_NETWORK='/dc0/network/VM Network'
export VSPHERE_RESOURCE_POOL='/dc0/host/cluster0/Resources'
export VSPHERE_SSH_AUTHORIZED_KEY='ssh-rsa AAAAB3[...]tyaw== user@example.com'
export VSPHERE_TLS_THUMBPRINT=47:F5:83:8E:5D:36:[...]:72:5A:89:7D:29:E5:DA
export VSPHERE_CONTROL_PLANE_NUM_CPUS='4'
export VSPHERE_CONTROL_PLANE_MEM_MIB='4096'
export VSPHERE_CONTROL_PLANE_DISK_GIB='20'
export TKG_CUSTOM_IMAGE_REPOSITORY='projects.registry.vmware.com/tkg'
export OS_NAME='photon'
export TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE="LS0tL[...]0tLQo="
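If you do not already know the vCenter TLS thumbprint, one way to retrieve it is with openssl; this command is illustrative and not part of the original procedure:
openssl s_client -connect "${VSPHERE_SERVER}:443" </dev/null 2>/dev/null | openssl x509 -noout -fingerprint -sha1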
Create a manifest vsphere-snc.yaml with Cluster and Secret object specs that reference the variables above, for example with a heredoc so that the variables are expanded:
cat > vsphere-snc.yaml <<EOF
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  annotations:
    tkg.tanzu.vmware.com/cluster-controlplane-endpoint: ${VSPHERE_CONTROL_PLANE_ENDPOINT}
    run.tanzu.vmware.com/resolve-tkr: 'tkr.tanzu.vmware.com/tiny'
  labels:
    tkg.tanzu.vmware.com/cluster-name: ${CLUSTER_NAME}
  name: ${CLUSTER_NAME}
  namespace: ${CLUSTER_NAMESPACE}
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - ${CLUSTER_CIDR}
    services:
      cidrBlocks:
      - ${SERVICE_CIDR}
  topology:
    class: tkg-vsphere-default-v1.0.0
    controlPlane:
      metadata:
        annotations:
          run.tanzu.vmware.com/resolve-os-image: image-type=ova,os-name=${OS_NAME}
      replicas: 1
    variables:
    - name: controlPlaneTaint
      value: false
    - name: auditLogging
      value:
        enabled: false
    - name: apiServerEndpoint
      value: ${VSPHERE_CONTROL_PLANE_ENDPOINT}
    - name: aviAPIServerHAProvider
      value: false
    - name: imageRepository
      value:
        host: ${TKG_CUSTOM_IMAGE_REPOSITORY}
    - name: trust
      value:
        additionalTrustedCAs:
        - data: ${TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE}
          name: imageRepository
    - name: vcenter
      value:
        cloneMode: fullClone
        datacenter: ${VSPHERE_DATACENTER}
        datastore: ${VSPHERE_DATASTORE}
        folder: ${VSPHERE_FOLDER}
        network: ${VSPHERE_NETWORK}
        resourcePool: ${VSPHERE_RESOURCE_POOL}
        server: ${VSPHERE_SERVER}
        storagePolicyID: ""
        tlsThumbprint: ${VSPHERE_TLS_THUMBPRINT}
    - name: user
      value:
        sshAuthorizedKeys:
        - ${VSPHERE_SSH_AUTHORIZED_KEY}
    - name: controlPlane
      value:
        machine:
          diskGiB: ${VSPHERE_CONTROL_PLANE_DISK_GIB}
          memoryMiB: ${VSPHERE_CONTROL_PLANE_MEM_MIB}
          numCPUs: ${VSPHERE_CONTROL_PLANE_NUM_CPUS}
    version: v1.25.7+vmware.1-tiny.2
---
apiVersion: v1
kind: Secret
metadata:
  name: ${CLUSTER_NAME}
  namespace: ${CLUSTER_NAMESPACE}
stringData:
  password: ${VSPHERE_PASSWORD}
  username: ${VSPHERE_USERNAME}
EOF
Note the following:
The metadata.annotations setting for run.tanzu.vmware.com/resolve-tkr
The topology.variables setting for controlPlaneTaint
No topology.workers block, only topology.controlPlane
topology.version is v1.25.7+vmware.1-tiny.2 for TKG v2.2.0; it should be v1.24.10+vmware.1-tiny.1 for v2.1.1 and v1.24.9+vmware.1-tiny.2 for v2.1.0.
(Optional) To configure the cluster to use Calico as the CNI instead of the default Antrea CNI, follow the instructions for single-node clusters in Calico CNI for Supervisor or Single-Node Class-Based Workload Clusters.
Apply the Cluster object manifest:
tanzu cluster create -f vsphere-snc.yaml
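After creation completes, you can confirm that the cluster has a single, untainted control plane node; these checks are illustrative and use the variables set earlier:
tanzu cluster kubeconfig get "${CLUSTER_NAME}" --namespace "${CLUSTER_NAMESPACE}" --admin
kubectl config use-context "${CLUSTER_NAME}-admin@${CLUSTER_NAME}"
kubectl get nodes -o wide
kubectl describe nodes | grep -i taint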