Create a ClusterClass

This topic explains how to create your own custom ClusterClass resource definition that you can use as a basis for creating class-based workload clusters with a Tanzu Kubernetes Grid (TKG) standalone management cluster. To base a cluster on a ClusterClass, you set its spec.topology.class to the ClusterClass name.
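
For example, a minimal class-based cluster spec might reference a ClusterClass as follows; the cluster name and Kubernetes version shown here are hypothetical placeholders:

    apiVersion: cluster.x-k8s.io/v1beta1
    kind: Cluster
    metadata:
      name: my-workload-cluster    # hypothetical cluster name
      namespace: default
    spec:
      topology:
        class: tkg-vsphere-default-v1.0.0   # name of the ClusterClass to base the cluster on
        version: v1.25.7+vmware.1           # Kubernetes release to deploy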

This procedure does not apply to TKG with a vSphere with Tanzu Supervisor.


Creating custom ClusterClass resource definitions is for advanced users. VMware does not guarantee the functionality of workload clusters based on arbitrary ClusterClass objects.


To create a ClusterClass definition, you need the following tools installed locally:

Tool         Version
clusterctl   v1.2.0
ytt          0.42.0
jq           1.5-1
yq           v4.30.5
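
To confirm the installed versions, you can run each tool's version command:

    clusterctl version
    ytt --version
    jq --version
    yq --version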

Options: Overlay vs. New Object Spec

To create a new ClusterClass, VMware recommends starting with an existing, default ClusterClass object, linked under Retrieve ClusterClass Object Source below, and then modifying it with a ytt overlay. When a new version of the default ClusterClass object is published, you can apply the overlay to the new version to implement the same customizations. The procedure below describes this method of creating a custom ClusterClass object.
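
As a minimal illustration of how such an overlay works, the following ytt snippet matches every ClusterClass document in its input and adds an annotation to it; the annotation key is a hypothetical example:

    #@ load("@ytt:overlay", "overlay")

    #@overlay/match by=overlay.subset({"kind":"ClusterClass"}), expects="1+"
    ---
    metadata:
      #@overlay/match missing_ok=True
      annotations:
        #! hypothetical annotation key, for illustration only
        #@overlay/match missing_ok=True
        example.com/customized: "true"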

To write a new ClusterClass object from scratch, follow Writing a ClusterClass in the Cluster API documentation.

Create a Custom ClusterClass Object

To create a custom ClusterClass object definition based on an existing ClusterClass defined in the Tanzu Framework repository, do the following:

Retrieve ClusterClass Object Source

  1. From the Tanzu Framework code repository on GitHub, download and unpack the v0.29.0 zip bundle.

  2. Based on your target infrastructure, cd into the repository’s packages subdirectory:

    • AWS: packages/tkg-clusterclass-aws
    • Azure: packages/tkg-clusterclass-azure
    • vSphere: packages/tkg-clusterclass-vsphere
  3. Copy the bundle folder to your workspace. It has the following structure:

    $ tree bundle
    bundle
    |-- config
    |   |-- upstream
    |   |   |-- base.yaml
    |   |   |-- overlay-kube-apiserver-admission.yaml
    |   |   `-- overlay.yaml
    |   `-- values.yaml

Generate Your Base ClusterClass Manifest

  1. Capture your infrastructure settings in a values file as follows, depending on whether you have already deployed a standalone management cluster:

    • Management cluster deployed: Run the following to generate default_values.yaml:

      cat <<EOF > default_values.yaml
      #@data/values

      #@overlay/match-child-defaults missing_ok=True
      ---
      EOF
      kubectl get secret tkg-clusterclass-infra-values -o jsonpath='{.data.values\.yaml}' -n tkg-system | base64 -d >> default_values.yaml
    • No standalone management cluster:

      1. In the bundle directory, copy values.yaml to a new file, default_values.yaml:

         cp bundle/config/values.yaml default_values.yaml

      2. Edit default_values.yaml and fill in variable settings according to your infrastructure, for example:

         VSPHERE_DATASTORE: /dc0/datastore/sharedVmfs-0
         VSPHERE_FOLDER: /dc0/vm

      See the Configuration File Variable Reference for infrastructure-specific variables.

  2. Generate your base ClusterClass manifest from the values file:

    ytt -f bundle -f default_values.yaml > default_cc.yaml
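
    To sanity-check the result, you can list the kind and name of each generated object; this sketch assumes the yq v4 syntax from the prerequisites:

    yq '.kind + "/" + .metadata.name' default_cc.yaml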

Customize Your ClusterClass Manifest

To customize your ClusterClass manifest, you create ytt overlay files alongside the manifest. The following example shows how to modify a Linux kernel parameter in the ClusterClass definition.

  1. Create a custom folder structured as follows:

    $ tree custom
    custom
    |-- overlays
    |   `-- kernels.yaml
    `-- values.yaml
  2. Edit custom/overlays/kernels.yaml, for example, to add nfConntrackMax as a variable and define a patch for it that adds its value to the kernel parameter net.netfilter.nf_conntrack_max for control plane nodes.

    This overlay appends a command to the preKubeadmCommands field to write the configuration to sysctl.conf. To make the setting take effect immediately, you could also append the command sysctl -p to apply the change.

    #@ load("@ytt:overlay", "overlay")
    #@ load("@ytt:data", "data")
    #@overlay/match by=overlay.subset({"kind":"ClusterClass"})
    kind: ClusterClass
     - name: nfConntrackMax
       required: false
           type: string
     - name: nfConntrackMax
       enabledIf: '{{ not (empty .nfConntrackMax) }}'
         - selector:
             kind: KubeadmControlPlaneTemplate
               controlPlane: true
             - op: add
               path: /spec/template/spec/kubeadmConfigSpec/preKubeadmCommands/-
                 template: echo "net.netfilter.nf_conntrack_max={{ .nfConntrackMax }}" >> /etc/sysctl.conf
  3. Default ClusterClass definitions are immutable, so create or edit the custom/values.yaml overlay to change the name of your custom ClusterClass and all of its templates by adding -extended and a version, for example:

    #@data/values

    #@overlay/match-child-defaults missing_ok=True
    ---
    #! Change the suffix so we know from which default ClusterClass it is extended
    CC-VERSION: extended-v1.0.0
    #! Add other variables below if necessary

    Where CC-VERSION is the cluster class version variable for your infrastructure, for example VSPHERE_CLUSTER_CLASS_VERSION.

  4. Generate the custom ClusterClass:

    ytt -f bundle -f default_values.yaml -f custom > custom_cc.yaml
  5. (Optional) Check the difference between the default ClusterClass and your custom one, to confirm that all names have the suffix -extended and that your new variables and JSON patches are added, for example:

    $ diff default_cc.yaml custom_cc.yaml
    <   name: tkg-vsphere-default-v1.0.0-cluster
    >   name: tkg-vsphere-default-extended-v1.0.0-cluster
    <   name: tkg-vsphere-default-v1.0.0-control-plane
    >   name: tkg-vsphere-default-extended-v1.0.0-control-plane
    <   name: tkg-vsphere-default-v1.0.0-worker
    >   name: tkg-vsphere-default-extended-v1.0.0-worker
    <   name: tkg-vsphere-default-v1.0.0-kcp
    >   name: tkg-vsphere-default-extended-v1.0.0-kcp
    <   name: tkg-vsphere-default-v1.0.0-md-config
    >   name: tkg-vsphere-default-extended-v1.0.0-md-config
    <   name: tkg-vsphere-default-v1.0.0
    >   name: tkg-vsphere-default-extended-v1.0.0
    <       name: tkg-vsphere-default-v1.0.0-kcp
    >       name: tkg-vsphere-default-extended-v1.0.0-kcp
    <         name: tkg-vsphere-default-v1.0.0-control-plane
    >         name: tkg-vsphere-default-extended-v1.0.0-control-plane
    <       name: tkg-vsphere-default-v1.0.0-cluster
    >       name: tkg-vsphere-default-extended-v1.0.0-cluster
    <             name: tkg-vsphere-default-v1.0.0-md-config
    >             name: tkg-vsphere-default-extended-v1.0.0-md-config
    <             name: tkg-vsphere-default-v1.0.0-worker
    >             name: tkg-vsphere-default-extended-v1.0.0-worker
    <             name: tkg-vsphere-default-v1.0.0-md-config
    >             name: tkg-vsphere-default-extended-v1.0.0-md-config
    <             name: tkg-vsphere-default-v1.0.0-worker
    >             name: tkg-vsphere-default-extended-v1.0.0-worker
    >   - name: nfConntrackMax
    >     required: false
    >     schema:
    >       openAPIV3Schema:
    >         type: string
    >   - name: nfConntrackMax
    >     enabledIf: '{{ not (empty .nfConntrackMax) }}'
    >     definitions:
    >     - selector:
    >         apiVersion:
    >         kind: KubeadmControlPlaneTemplate
    >         matchResources:
    >           controlPlane: true
    >       jsonPatches:
    >       - op: add
    >         path: /spec/template/spec/kubeadmConfigSpec/preKubeadmCommands/-
    >         valueFrom:
    >           template: echo "net.netfilter.nf_conntrack_max={{ .nfConntrackMax }}" >> /etc/sysctl.conf
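
    As a further check that the overlay applied, you can list the variables defined on the custom ClusterClass; this sketch assumes yq v4 syntax:

    yq 'select(.kind == "ClusterClass") | .spec.variables[].name' custom_cc.yaml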

Install the ClusterClass in the Management Cluster

To enable your management cluster to use your custom ClusterClass, install it as follows:

  1. Apply the ClusterClass manifest, for example:

    $ kubectl apply -f custom_cc.yaml
    clusterclass.cluster.x-k8s.io/tkg-vsphere-default-extended-v1.0.0 created
    kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/tkg-vsphere-default-extended-v1.0.0-md-config created
    kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/tkg-vsphere-default-extended-v1.0.0-kcp created
    vspheremachinetemplate.infrastructure.cluster.x-k8s.io/tkg-vsphere-default-extended-v1.0.0-control-plane created
    vspheremachinetemplate.infrastructure.cluster.x-k8s.io/tkg-vsphere-default-extended-v1.0.0-worker created
    vsphereclustertemplate.infrastructure.cluster.x-k8s.io/tkg-vsphere-default-extended-v1.0.0-cluster created
  2. Check that the custom ClusterClass has propagated to the default namespace, for example:

    $ kubectl get clusterclass,kubeadmconfigtemplate,kubeadmcontrolplanetemplate,vspheremachinetemplate,vsphereclustertemplate
    NAME                                                                AGE
    clusterclass.cluster.x-k8s.io/tkg-vsphere-default-extended-v1.0.0   3m16s
    clusterclass.cluster.x-k8s.io/tkg-vsphere-default-v1.0.0            2d18h

    NAME                                                                                             AGE
    kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/tkg-vsphere-default-extended-v1.0.0-md-config   3m29s
    kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/tkg-vsphere-default-v1.0.0-md-config            2d18h

    NAME                                                                                                AGE
    kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/tkg-vsphere-default-extended-v1.0.0-kcp   3m27s
    kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/tkg-vsphere-default-v1.0.0-kcp            2d18h

    NAME                                                                                                       AGE
    vspheremachinetemplate.infrastructure.cluster.x-k8s.io/tkg-vsphere-default-extended-v1.0.0-control-plane   3m31s
    vspheremachinetemplate.infrastructure.cluster.x-k8s.io/tkg-vsphere-default-extended-v1.0.0-worker          3m28s
    vspheremachinetemplate.infrastructure.cluster.x-k8s.io/tkg-vsphere-default-v1.0.0-control-plane            2d18h
    vspheremachinetemplate.infrastructure.cluster.x-k8s.io/tkg-vsphere-default-v1.0.0-worker                   2d18h

    NAME                                                                                                 AGE
    vsphereclustertemplate.infrastructure.cluster.x-k8s.io/tkg-vsphere-default-extended-v1.0.0-cluster   3m31s
    vsphereclustertemplate.infrastructure.cluster.x-k8s.io/tkg-vsphere-default-v1.0.0-cluster            2d18h

Generate Custom Workload Cluster Manifest

Create a new workload cluster based on your custom ClusterClass as follows:
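
The steps below assume a flat cluster configuration file named workload-1.yaml. A minimal sketch for vSphere might look like the following; all values are hypothetical, and the full set of required variables depends on your infrastructure (see the Configuration File Variable Reference):

    CLUSTER_NAME: workload-1
    INFRASTRUCTURE_PROVIDER: vsphere
    CLUSTER_PLAN: dev
    VSPHERE_CONTROL_PLANE_ENDPOINT: 10.0.0.100   # hypothetical static endpoint address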

  1. Run tanzu cluster create with the --dry-run option to generate a cluster manifest:

    tanzu cluster create --file workload-1.yaml --dry-run > default_cluster.yaml
  2. Create a ytt overlay or edit the cluster manifest directly to do the following:

    • Replace the topology.class value with the name of your custom ClusterClass
    • Add your custom variables to the variables block, with a default value

    As with modifying ClusterClass object specs, using an overlay as follows (saved as cluster_overlay.yaml, for use in the next step) lets you automatically apply the changes to new objects whenever there is a new upstream cluster release:

    #@ load("@ytt:overlay", "overlay")
    #@overlay/match by=overlay.subset({"kind":"Cluster"})
    kind: Cluster
       class: tkg-vsphere-default-extended-v1.0.0
       - name: nfConntrackMax
         value: "1048576"
  3. Generate the manifest:

    ytt -f default_cluster.yaml -f cluster_overlay.yaml > custom_cluster.yaml
  4. (Optional) As with the ClusterClass above, you can run diff to compare your custom-class cluster manifest with a default class-based cluster, for example:

    $ diff custom_cluster.yaml default_cluster.yaml
    <     class: tkg-vsphere-default-extended-v1.0.0
    >     class: tkg-vsphere-default-v1.0.0
    <     - name: nfConntrackMax
    <       value: "1048576"

(Optional) Dry-Run Cluster Create

This optional procedure shows you the resources that your custom ClusterClass will generate and manage.

  1. Make a copy of custom_cluster.yaml:

    cp custom_cluster.yaml dryrun_cluster.yaml
  2. Copy the variable TKR_DATA from the management cluster:

    kubectl get cluster mgmt -n tkg-system -o jsonpath='{.spec.topology.variables}' | jq -r '.[] | select(.name == "TKR_DATA")' | yq -p json '.'
  3. Manually append the output above to .spec.topology.variables in dryrun_cluster.yaml, as shown in this example diff output (for a scripted alternative, see the sketch after this procedure):

    $ diff custom_cluster.yaml dryrun_cluster.yaml
    >     - name: TKR_DATA
    >       value:
    >         v1.25.7+vmware.1:
    >           kubernetesSpec:
    >             coredns:
    >               imageTag: v1.9.3_vmware.8
    >             etcd:
    >               imageTag: v3.5.4_vmware.10
    >             imageRepository:
    >             kube-vip:
    >               imageRepository:
    >               imageTag: v0.5.5_vmware.1
    >             pause:
    >               imageTag: "3.7"
    >             version: v1.25.7+vmware.1
    >           labels:
    >             image-type: ova
    >             os-arch: amd64
    >             os-name: photon
    >             os-type: linux
    >             os-version: "3"
    >             ova-version: v1.25.7+vmware.1-tkg.1-efe12079f22627aa1246398eba077476
    >             run.tanzu.vmware.com/os-image: v1.25.7---vmware.1-tkg.1-efe12079f22627aa1246398eba077476
    >             run.tanzu.vmware.com/tkr: v1.25.7---vmware.1-tkg.1-zshippable
    >           osImageRef:
    >             moid: vm-156
    >             template: /dc0/vm/photon-3-kube-v1.25.7+vmware.1-tkg.1-efe12079f22627aa1246398eba077476
    >             version: v1.25.7+vmware.1-tkg.1-efe12079f22627aa1246398eba077476
  4. Create a plan directory:

    mkdir plan
  5. Dry-run creating the cluster, for example:

    $ clusterctl alpha topology plan -f dryrun_cluster.yaml -o plan
    Detected a cluster with Cluster API installed. Will use it to fetch missing objects.
    No ClusterClasses will be affected by the changes.
    The following Clusters will be affected by the changes:
    * default/workload-1
    Changes for Cluster "default/workload-1":
     NAMESPACE  KIND                    NAME                             ACTION
     default    KubeadmConfigTemplate   workload-1-md-0-bootstrap-lvchb  created
     default    KubeadmControlPlane     workload-1-2zmql                 created
     default    MachineDeployment       workload-1-md-0-hlr7c            created
     default    MachineHealthCheck      workload-1-2zmql                 created
     default    MachineHealthCheck      workload-1-md-0-hlr7c            created
     default    Secret                  workload-1-shim                  created
     default    VSphereCluster          workload-1-fmt2j                 created
     default    VSphereMachineTemplate  workload-1-control-plane-mf6k6   created
     default    VSphereMachineTemplate  workload-1-md-0-infra-k88bk      created
     default    Cluster                 workload-1                       modified
    Created objects are written to directory "plan/created"
    Modified objects are written to directory "plan/modified"
  6. You should see the following files generated. The kernel configuration appears in KubeadmControlPlane_default_workload-1-2zmql.yaml:

    $ tree plan
    |-- created
    |   |-- KubeadmConfigTemplate_default_workload-1-md-0-bootstrap-lvchb.yaml
    |   |-- KubeadmControlPlane_default_workload-1-2zmql.yaml
    |   |-- MachineDeployment_default_workload-1-md-0-hlr7c.yaml
    |   |-- MachineHealthCheck_default_workload-1-2zmql.yaml
    |   |-- MachineHealthCheck_default_workload-1-md-0-hlr7c.yaml
    |   |-- Secret_default_workload-1-shim.yaml
    |   |-- VSphereCluster_default_workload-1-fmt2j.yaml
    |   |-- VSphereMachineTemplate_default_workload-1-control-plane-mf6k6.yaml
    |   `-- VSphereMachineTemplate_default_workload-1-md-0-infra-k88bk.yaml
    `-- modified
       |-- Cluster_default_workload-1.diff
       |-- Cluster_default_workload-1.jsonpatch
       |-- Cluster_default_workload-1.modified.yaml
       `-- Cluster_default_workload-1.original.yaml
    2 directories, 13 files

    You should compare resources as above whenever you make changes to your custom ClusterClass.
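
As a scripted alternative to manually appending TKR_DATA in step 3 above, the following sketch extracts the variable with jq and injects it with the yq v4 load() function; the intermediate file name tkr_data.json is arbitrary:

    # Extract the TKR_DATA variable from the management cluster as a one-element JSON array
    kubectl get cluster mgmt -n tkg-system -o jsonpath='{.spec.topology.variables}' \
      | jq '[.[] | select(.name == "TKR_DATA")]' > tkr_data.json
    # Append it in place to the manifest's topology variables
    yq -i '.spec.topology.variables += load("tkr_data.json")' dryrun_cluster.yaml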

Create a Custom Cluster

Create a custom workload cluster based on the custom manifest generated above as follows.

  1. Apply the custom-class cluster manifest generated above, for example:

    $ kubectl apply -f custom_cluster.yaml
    secret/workload-1-vsphere-credential created
    secret/workload-1-nsxt-credential created
    ...
    cluster.cluster.x-k8s.io/workload-1 created

    Do not apply dryrun_cluster.yaml, which was only used for the dry run above; the TKR_DATA variable that it contains is injected automatically by a TKG webhook when the cluster is created.

  2. Check the created object properties. For example, with the kernel modification above, retrieve the KubeadmControlPlane object and confirm that the kernel configuration is set:

    $ kubectl get kcp workload-1-jgwd9 -o yaml
    ...
    kind: KubeadmControlPlane
    ...
        preKubeadmCommands:
        - hostname "{{ ds.meta_data.hostname }}"
        - echo "::1         ipv6-localhost ipv6-loopback" >/etc/hosts
        - echo "127.0.0.1   localhost" >>/etc/hosts
        - echo "127.0.0.1   {{ ds.meta_data.hostname }}" >>/etc/hosts
        - echo "{{ ds.meta_data.hostname }}" >/etc/hostname
        - echo "net.netfilter.nf_conntrack_max=1048576" >> /etc/sysctl.conf
        useExperimentalRetryJoin: true
    ...
  3. Log in to a control plane node and confirm that its sysctl.conf is modified:

    capv@workload-1-cn779-pthp5 [ ~ ]$ sudo cat /etc/sysctl.conf
    ...
    net.netfilter.nf_conntrack_max=1048576
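
    If your overlay also appends sysctl -p, as suggested in the customization step above, you can additionally confirm the live kernel value; the output shown assumes the nfConntrackMax value used in this example:

    capv@workload-1-cn779-pthp5 [ ~ ]$ sudo sysctl net.netfilter.nf_conntrack_max
    net.netfilter.nf_conntrack_max = 1048576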