This topic explains how to create your own custom ClusterClass resource definition that you can use as a basis for creating class-based workload clusters with a Tanzu Kubernetes Grid (TKG) standalone management cluster. To base a cluster on a ClusterClass, you set its spec.topology.class to the ClusterClass name.
This procedure does not apply to TKG with a vSphere with Tanzu Supervisor.
Important: Creating custom ClusterClass resource definitions is for advanced users. VMware does not guarantee the functionality of workload clusters based on arbitrary ClusterClass objects.
To create a ClusterClass definition, you need the following tools installed locally:

| Tool | Version |
| ---- | ------- |
| clusterctl | v1.2.0 |
| ytt | 0.42.0 |
| jq | 1.5-1 |
| yq | v4.30.5 |
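Before proceeding, you can confirm that the required versions are on your PATH. A quick check using each tool's standard version command or flag:

clusterctl version
ytt version
jq --version
yq --version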
To create a new ClusterClass, VMware recommends starting with an existing, default ClusterClass object linked under ClusterClass Object Source and then modifying it with a ytt overlay. When a new version of the default ClusterClass object is published, you can apply the overlay to the new version to implement the same customizations. The procedure below describes this method of creating a custom ClusterClass object.
To write a new ClusterClass object from scratch, follow Writing a ClusterClass in the Cluster API documentation.
To create a custom ClusterClass object definition based on an existing ClusterClass defined in the Tanzu Framework repository, do the following:
From the Tanzu Framework code repository on GitHub, download and unpack the v0.29.0 zip bundle.
Based on your target infrastructure, cd into the repository's package subdirectory:
packages/tkg-clusterclass-aws
packages/tkg-clusterclass-azure
packages/tkg-clusterclass-vsphere
Copy the bundle folder to your workspace. It has the following structure:
$ tree bundle
bundle
`-- config
    |-- upstream
    |   |-- base.yaml
    |   |-- overlay-kube-apiserver-admission.yaml
    |   `-- overlay.yaml
    `-- values.yaml
Capture your infrastructure settings in a values file as follows, depending on whether you have already deployed a standalone management cluster:
Management cluster deployed: Run the following to generate default_values.yaml:
cat <<EOF > default_values.yaml
#@data/values
#@overlay/match-child-defaults missing_ok=True
---
EOF
kubectl get secret tkg-clusterclass-infra-values -o jsonpath='{.data.values\.yaml}' -n tkg-system | base64 -d >> default_values.yaml
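You can then spot-check the generated file. This assumes the tkg-clusterclass-infra-values secret referenced above exists on your management cluster, which is the case for standard TKG deployments:

kubectl -n tkg-system get secret tkg-clusterclass-infra-values
head default_values.yaml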
No standalone management cluster: Copy the bundle's values.yaml to a new file, default_values.yaml:
cp bundle/config/values.yaml default_values.yaml
Then open default_values.yaml and fill in variable settings according to your infrastructure, for example:
VSPHERE_DATACENTER: /dc0
VSPHERE_DATASTORE: /dc0/datastore/sharedVmfs-0
VSPHERE_FOLDER: /dc0/vm
See the Configuration File Variable Reference for infrastructure-specific variables.
Generate your base ClusterClass manifest from the values file:
ytt -f bundle -f default_values.yaml > default_cc.yaml
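As a quick sanity check, you can list the object kinds and the default ClusterClass name in the generated multi-document manifest; the expressions below use yq v4 syntax:

yq '.kind' default_cc.yaml
yq 'select(.kind == "ClusterClass") | .metadata.name' default_cc.yaml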
To customize your ClusterClass manifest, you create ytt overlay files alongside the manifest. The following example shows how to modify a Linux kernel parameter in the ClusterClass definition.
Create a custom folder structured as follows:
$ tree custom
custom
|-- overlays
|   `-- kernels.yaml
`-- values.yaml
Edit custom/overlays/kernels.yaml, for example, to add nfConntrackMax as a variable and define a patch for it that adds its value to the kernel parameter net.netfilter.nf_conntrack_max for control plane nodes.
This overlay appends a command to the preKubeadmCommands field to write the configuration to sysctl.conf. To make the setting take effect, you could also append the command sysctl -p to apply the change.
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#@overlay/match by=overlay.subset({"kind":"ClusterClass"})
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: ClusterClass
spec:
variables:
- name: nfConntrackMax
required: false
schema:
openAPIV3Schema:
type: string
patches:
- name: nfConntrackMax
enabledIf: '{{ not (empty .nfConntrackMax) }}'
definitions:
- selector:
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlaneTemplate
matchResources:
controlPlane: true
jsonPatches:
- op: add
path: /spec/template/spec/kubeadmConfigSpec/preKubeadmCommands/-
valueFrom:
template: echo "net.netfilter.nf_conntrack_max={{ .nfConntrackMax }}" >> /etc/sysctl.conf
Default ClusterClass definitions are immutable, so create or edit the custom/values.yaml overlay to change the name of your custom ClusterClass and all of its templates by adding -extended and a version, for example:
#@data/values
#@overlay/match-child-defaults missing_ok=True
---
#! Change the suffix so we know from which default ClusterClass it is extended
CC-VERSION: extended-v1.0.0
#! Add other variables below if necessary
Where CC-VERSION is the name of the cluster class version variable, for example, VSPHERE_CLUSTER_CLASS_VERSION.
Generate the custom ClusterClass:
ytt -f bundle -f default_values.yaml -f custom > custom_cc.yaml
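Before running the full diff in the next step, you can spot-check that the renaming and the new variable took effect (yq v4 syntax):

yq 'select(.kind == "ClusterClass") | .metadata.name' custom_cc.yaml
yq 'select(.kind == "ClusterClass") | .spec.variables[].name' custom_cc.yaml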
(Optional) Check the difference between the default ClusterClass and your custom one, to confirm that all names have the suffix -extended and that your new variables and JSON patches are added, for example:
$ diff default_cc.yaml custom_cc.yaml
4c4
< name: tkg-vsphere-default-v1.0.0-cluster
---
> name: tkg-vsphere-default-extended-v1.0.0-cluster
15c15
< name: tkg-vsphere-default-v1.0.0-control-plane
---
> name: tkg-vsphere-default-extended-v1.0.0-control-plane
39c39
< name: tkg-vsphere-default-v1.0.0-worker
---
> name: tkg-vsphere-default-extended-v1.0.0-worker
63c63
< name: tkg-vsphere-default-v1.0.0-kcp
---
> name: tkg-vsphere-default-extended-v1.0.0-kcp
129c129
< name: tkg-vsphere-default-v1.0.0-md-config
---
> name: tkg-vsphere-default-extended-v1.0.0-md-config
165c165
< name: tkg-vsphere-default-v1.0.0
---
> name: tkg-vsphere-default-extended-v1.0.0
174c174
< name: tkg-vsphere-default-v1.0.0-kcp
---
> name: tkg-vsphere-default-extended-v1.0.0-kcp
180c180
< name: tkg-vsphere-default-v1.0.0-control-plane
---
> name: tkg-vsphere-default-extended-v1.0.0-control-plane
195c195
< name: tkg-vsphere-default-v1.0.0-cluster
---
> name: tkg-vsphere-default-extended-v1.0.0-cluster
205c205
< name: tkg-vsphere-default-v1.0.0-md-config
---
> name: tkg-vsphere-default-extended-v1.0.0-md-config
211c211
< name: tkg-vsphere-default-v1.0.0-worker
---
> name: tkg-vsphere-default-extended-v1.0.0-worker
228c228
< name: tkg-vsphere-default-v1.0.0-md-config
---
> name: tkg-vsphere-default-extended-v1.0.0-md-config
234c234
< name: tkg-vsphere-default-v1.0.0-worker
---
> name: tkg-vsphere-default-extended-v1.0.0-worker
537a538,542
>     - name: nfConntrackMax
>       required: false
>       schema:
>         openAPIV3Schema:
>           type: string
1807a1813,1825
>     - name: nfConntrackMax
>       enabledIf: '{{ not (empty .nfConntrackMax) }}'
>       definitions:
>       - selector:
>           apiVersion: controlplane.cluster.x-k8s.io/v1beta1
>           kind: KubeadmControlPlaneTemplate
>           matchResources:
>             controlPlane: true
>         jsonPatches:
>         - op: add
>           path: /spec/template/spec/kubeadmConfigSpec/preKubeadmCommands/-
>           valueFrom:
>             template: echo "net.netfilter.nf_conntrack_max={{ .nfConntrackMax }}" >> /etc/sysctl.conf
To enable your management cluster to use your custom ClusterClass, install it as follows:
Apply the ClusterClass manifest, for example:
$ kubectl apply -f custom_cc.yaml
vsphereclustertemplate.infrastructure.cluster.x-k8s.io/tkg-vsphere-default-extended-v1.0.0-cluster created
vspheremachinetemplate.infrastructure.cluster.x-k8s.io/tkg-vsphere-default-extended-v1.0.0-control-plane created
vspheremachinetemplate.infrastructure.cluster.x-k8s.io/tkg-vsphere-default-extended-v1.0.0-worker created
kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/tkg-vsphere-default-extended-v1.0.0-kcp created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/tkg-vsphere-default-extended-v1.0.0-md-config created
clusterclass.cluster.x-k8s.io/tkg-vsphere-default-extended-v1.0.0 created
Check if the custom ClusterClass has propagated to the default namespace, for example:
$ kubectl get clusterclass,kubeadmconfigtemplate,kubeadmcontrolplanetemplate,vspheremachinetemplate,vsphereclustertemplate
NAME AGE
clusterclass.cluster.x-k8s.io/tkg-vsphere-default-extended-v1.0.0 3m16s
clusterclass.cluster.x-k8s.io/tkg-vsphere-default-v1.0.0 2d18h
NAME AGE
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/tkg-vsphere-default-extended-v1.0.0-md-config 3m29s
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/tkg-vsphere-default-v1.0.0-md-config 2d18h
NAME AGE
kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/tkg-vsphere-default-extended-v1.0.0-kcp 3m27s
kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/tkg-vsphere-default-v1.0.0-kcp 2d18h
NAME AGE
vspheremachinetemplate.infrastructure.cluster.x-k8s.io/tkg-vsphere-default-extended-v1.0.0-control-plane 3m31s
vspheremachinetemplate.infrastructure.cluster.x-k8s.io/tkg-vsphere-default-extended-v1.0.0-worker 3m28s
vspheremachinetemplate.infrastructure.cluster.x-k8s.io/tkg-vsphere-default-v1.0.0-control-plane 2d18h
vspheremachinetemplate.infrastructure.cluster.x-k8s.io/tkg-vsphere-default-v1.0.0-worker 2d18h
NAME AGE
vsphereclustertemplate.infrastructure.cluster.x-k8s.io/tkg-vsphere-default-extended-v1.0.0-cluster 3m31s
vsphereclustertemplate.infrastructure.cluster.x-k8s.io/tkg-vsphere-default-v1.0.0-cluster 2d18h
Create a new workload cluster based on your custom ClusterClass as follows:
Run tanzu cluster create with the --dry-run option to generate a cluster manifest:
tanzu cluster create --file workload-1.yaml --dry-run > default_cluster.yaml
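You can confirm which class the generated manifest references before customizing it; the manifest contains multiple documents, so select the Cluster object (yq v4 syntax):

yq 'select(.kind == "Cluster") | .spec.topology.class' default_cluster.yaml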
Create a ytt overlay or edit the cluster manifest directly to do the following:
- Replace the topology.class value with the name of your custom ClusterClass.
- Add your custom variables to the variables block, with a default value.
As with modifying ClusterClass object specs, using an overlay as follows lets you automatically apply the changes to new objects whenever there is a new upstream cluster release:
#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.subset({"kind":"Cluster"})
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
spec:
topology:
class: tkg-vsphere-default-extended-v1.0.0
variables:
- name: nfConntrackMax
value: "1048576"
Generate the manifest:
ytt -f default_cluster.yaml -f cluster_overlay.yaml > custom_cluster.yaml
(Optional) As with the ClusterClass above, you can run diff to compare your custom-class cluster manifest with a default class-based cluster, for example:
$ diff custom_cluster.yaml default_cluster.yaml
142c142
< class: tkg-vsphere-default-extended-v1.0.0
---
> class: tkg-vsphere-default-v1.0.0
185,186d184
< - name: nfConntrackMax
< value: "1048576"
This optional procedure shows you the resources that your custom ClusterClass will generate and manage.
Make a copy of custom_cluster.yaml:
cp custom_cluster.yaml dryrun_cluster.yaml
Copy the variable TKR_DATA from the management cluster:
kubectl get cluster mgmt -n tkg-system -o jsonpath='{.spec.topology.variables}' | jq -r '.[] | select(.name == "TKR_DATA")' | yq -p json '.'
Manually append the output above to .spec.topology.variables in dryrun_cluster.yaml, for example as shown in this diff output:
$ diff custom_cluster.yaml dryrun_cluster.yaml
186a187,214
>     - name: TKR_DATA
>       value:
>         v1.25.7+vmware.1:
>           kubernetesSpec:
>             coredns:
>               imageTag: v1.9.3_vmware.8
>             etcd:
>               imageTag: v3.5.4_vmware.10
>             imageRepository: projects.registry.vmware.com/tkg
>             kube-vip:
>               imageRepository: projects-stg.registry.vmware.com/tkg
>               imageTag: v0.5.5_vmware.1
>             pause:
>               imageTag: "3.7"
>             version: v1.25.7+vmware.1
>           labels:
>             image-type: ova
>             os-arch: amd64
>             os-name: photon
>             os-type: linux
>             os-version: "3"
>             ova-version: v1.25.7+vmware.1-tkg.1-efe12079f22627aa1246398eba077476
>             run.tanzu.vmware.com/os-image: v1.25.7---vmware.1-tkg.1-efe12079f22627aa1246398eba077476
>             run.tanzu.vmware.com/tkr: v1.25.7---vmware.1-tkg.1-zshippable
>           osImageRef:
>             moid: vm-156
>             template: /dc0/vm/photon-3-kube-v1.25.7+vmware.1-tkg.1-efe12079f22627aa1246398eba077476
>             version: v1.25.7+vmware.1-tkg.1-efe12079f22627aa1246398eba077476
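If you prefer to script this step instead of pasting by hand, the following sketch extracts TKR_DATA with jq and appends it with yq. It assumes, as in the command above, that the management cluster is named mgmt and that dryrun_cluster.yaml contains a single kind: Cluster document:

# Extract the TKR_DATA variable from the management cluster as a JSON list.
kubectl get cluster mgmt -n tkg-system -o jsonpath='{.spec.topology.variables}' \
  | jq '[.[] | select(.name == "TKR_DATA")]' > tkr_data.json
# Append it to the Cluster document's variables in place (yq v4 syntax).
yq -i '(select(.kind == "Cluster") | .spec.topology.variables) += load("tkr_data.json")' dryrun_cluster.yaml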
Create a plan directory:
mkdir plan
Dry-run creating the cluster, for example:
$ clusterctl alpha topology plan -f dryrun_cluster.yaml -o plan
Detected a cluster with Cluster API installed. Will use it to fetch missing objects.
No ClusterClasses will be affected by the changes.
The following Clusters will be affected by the changes:
* default/workload-1
Changes for Cluster "default/workload-1":
NAMESPACE KIND NAME ACTION
default KubeadmConfigTemplate workload-1-md-0-bootstrap-lvchb created
default KubeadmControlPlane workload-1-2zmql created
default MachineDeployment workload-1-md-0-hlr7c created
default MachineHealthCheck workload-1-2zmql created
default MachineHealthCheck workload-1-md-0-hlr7c created
default Secret workload-1-shim created
default VSphereCluster workload-1-fmt2j created
default VSphereMachineTemplate workload-1-control-plane-mf6k6 created
default VSphereMachineTemplate workload-1-md-0-infra-k88bk created
default Cluster workload-1 modified
Created objects are written to directory "plan/created"
Modified objects are written to directory "plan/modified"
You should see the following files generated, and find the kernel configuration in KubeadmControlPlane_default_workload-1-2zmql.yaml:
$ tree plan
plan
|-- created
|   |-- KubeadmConfigTemplate_default_workload-1-md-0-bootstrap-lvchb.yaml
|   |-- KubeadmControlPlane_default_workload-1-2zmql.yaml
|   |-- MachineDeployment_default_workload-1-md-0-hlr7c.yaml
|   |-- MachineHealthCheck_default_workload-1-2zmql.yaml
|   |-- MachineHealthCheck_default_workload-1-md-0-hlr7c.yaml
|   |-- Secret_default_workload-1-shim.yaml
|   |-- VSphereCluster_default_workload-1-fmt2j.yaml
|   |-- VSphereMachineTemplate_default_workload-1-control-plane-mf6k6.yaml
|   `-- VSphereMachineTemplate_default_workload-1-md-0-infra-k88bk.yaml
`-- modified
    |-- Cluster_default_workload-1.diff
    |-- Cluster_default_workload-1.jsonpatch
    |-- Cluster_default_workload-1.modified.yaml
    `-- Cluster_default_workload-1.original.yaml

2 directories, 13 files
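For example, you can confirm that the kernel setting landed in the planned control plane object; the generated file name suffix (2zmql here) will differ in your run:

grep nf_conntrack_max plan/created/KubeadmControlPlane_default_workload-1-2zmql.yaml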
You should compare resources as above whenever you make changes to your custom ClusterClass.
Create a custom workload cluster based on the custom manifest generated above as follows.
Apply the custom-class cluster manifest generated above, for example:
$ kubectl apply -f custom_cluster.yaml
secret/workload-1-vsphere-credential created
secret/workload-1-nsxt-credential created
vspherecpiconfig.cpi.tanzu.vmware.com/workload-1 created
vspherecsiconfig.csi.tanzu.vmware.com/workload-1 created
clusterbootstrap.run.tanzu.vmware.com/workload-1 created
secret/workload-1 created
cluster.cluster.x-k8s.io/workload-1 created
Caution: Do not apply dryrun_cluster.yaml, which was used to generate the manifest, because the variable TKR_DATA will be injected into the cluster by a TKG webhook.
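As a quick first check, you can confirm that the new cluster references your custom ClusterClass:

kubectl get cluster workload-1 -o jsonpath='{.spec.topology.class}'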
Check created object properties. For example, with the kernel modification above, retrieve the KubeadmControlPlane object and confirm that the kernel configuration is set:
$ kubectl get kcp workload-1-jgwd9 -o yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
...
    preKubeadmCommands:
    - hostname "{{ ds.meta_data.hostname }}"
    - echo "::1 ipv6-localhost ipv6-loopback" >/etc/hosts
    - echo "127.0.0.1 localhost" >>/etc/hosts
    - echo "127.0.0.1 {{ ds.meta_data.hostname }}" >>/etc/hosts
    - echo "{{ ds.meta_data.hostname }}" >/etc/hostname
    - echo "net.netfilter.nf_conntrack_max=1048576" >> /etc/sysctl.conf
    useExperimentalRetryJoin: true
...
Log in to a control plane node and confirm that its sysctl.conf is modified:
capv@workload-1-cn779-pthp5 [ ~ ]$ sudo cat /etc/sysctl.conf
...
net.ipv4.ip_forward=1
net.ipv4.tcp_timestamps=1
net.netfilter.nf_conntrack_max=1048576
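If you also applied the setting, for example with sysctl -p as discussed above, you can confirm the live kernel value directly:

sudo sysctl net.netfilter.nf_conntrack_max   # prints the live value; matches 1048576 only once applied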