Use VMware Tanzu Mission Control to create a new Tanzu Kubernetes cluster using a cluster class.

Note: This feature is in technical preview. Lifecycle management of Tanzu Kubernetes clusters running in vSphere with Tanzu version 8 is not supported in Tanzu Mission Control Self-Managed version 1.0. This procedure is provided for preview purposes only. Do not use this feature on clusters running production workloads.

Starting in vSphere version 8.0, you can use ClusterClass to create clusters with a consistent size and shape from a predefined baseline configuration in Tanzu Kubernetes Grid Service Supervisor Clusters.

For more information about ClusterClass in the Cluster API, see Introducing ClusterClass and Managed Topologies in Cluster API in the Kubernetes Blog.

Note: The data protection features of Tanzu Mission Control are not compatible with clusters created using the Tanzu Kubernetes release v1.23.8---vmware.2-tkg.1-zshippable. If you rely on Tanzu Mission Control for data protection, use a different Tanzu Kubernetes release when creating your clusters.
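
Before choosing a release, you can list the Tanzu Kubernetes releases that are available in your Supervisor Cluster. A quick check, assuming your kubectl context already points at the Supervisor Cluster:

    # List the Tanzu Kubernetes releases known to the Supervisor Cluster,
    # including their compatibility and readiness status.
    kubectl get tkr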

In Tanzu Mission Control Self-Managed, you can create new workload clusters in a registered Supervisor Cluster using a default cluster class.
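
The default cluster class on a vSphere 8 Supervisor Cluster is named tanzukubernetescluster; the script in the procedure below references it in spec.topology.class. A sketch of how to confirm it is available, assuming the dev-ns Supervisor namespace used in the script:

    # Verify that the default cluster class is available in the namespace.
    kubectl get clusterclass -n dev-ns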

Prerequisites

Before you can create new clusters using Tanzu Mission Control, you must first establish a connection with your management cluster.
  1. Register your Tanzu Kubernetes Grid Service Supervisor Cluster (vSphere version 8 or later) with Tanzu Mission Control, as described in Register a Management Cluster with Tanzu Mission Control.
  2. Create a provisioner into which to provision the cluster, as described in Create a Provisioner in Your Tanzu Kubernetes Grid Management Cluster.
Make sure you have the appropriate permissions to create a Tanzu Kubernetes cluster.
  • To provision a cluster, you must be associated with the clustergroup.edit role on the cluster group in which you want to put the new cluster.
  • To see and use a cloud provider account connection for creating a cluster, you must be associated with the organization.credential.view role.
  • You must also have admin privileges on the management cluster to provision resources within it.
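
The script in the procedure below runs yq, ytt, and kubectl against the Supervisor Cluster, so your kubectl context must point at the Supervisor namespace. A minimal sketch of the login and permission check, assuming the vSphere Plugin for kubectl and placeholder values that you replace with your own:

    # Log in to the Supervisor Cluster with the vSphere Plugin for kubectl.
    kubectl vsphere login --server=<supervisor-address> --vsphere-username administrator@vsphere.local

    # Switch to the context for the Supervisor namespace used in the script.
    kubectl config use-context dev-ns

    # Confirm that you can create Cluster API objects in that namespace.
    kubectl auth can-i create clusters.cluster.x-k8s.io -n dev-ns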

Procedure

  • Use the following bash script to provision a workload cluster from a cluster class.
    The script requires the yq, ytt, and kubectl CLIs, and assumes that your kubectl context points at the Supervisor Cluster. It uses the following values:
    • The cluster name is dev.
    • The supervisor namespace (in which the cluster is deployed) is dev-ns.
    #!/usr/bin/env bash
    
    
    # Export these values so that yq's strenv() can read them below.
    export cluster='dev'
    export ns='dev-ns'
    export caCert=$(yq eval '.trustedCAs."custom-ca.pem"' ./values.yaml) # registry CA certificate from the TMC Self-Managed values file
    
    
    # Start a ytt data-values file; yq fills in the keys below.
    cat > vsphere8-ca-values.yaml <<-EOF
    #@data/values
    ---
    EOF
    
    
    yq e -i ".caCert = strenv(caCert)" ./vsphere8-ca-values.yaml
    yq e -i ".cluster = strenv(cluster)" ./vsphere8-ca-values.yaml
    yq e -i ".ns = strenv(ns)" ./vsphere8-ca-values.yaml
    
    
    # Write the ytt template that defines the namespace, the trusted CA secret,
    # the kapp-controller configuration, the cluster bootstrap, and the Cluster itself.
    cat > vsphere8-ca-prep.yaml <<-EOF
    #@ load("@ytt:data", "data")
    #@ load("@ytt:base64", "base64")
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: #@ "{}".format(data.values.ns)
    ---
    apiVersion: v1
    data:
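      #! Kubernetes Secret manifests require base64-encoded data, and the trust
      #! variable below expects the CA PEM itself to be base64-encoded, so the
      #! certificate is encoded twice.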
      corp-ca-1: #@ base64.encode(base64.encode(data.values.caCert))
    kind: Secret
    metadata:
      name: #@ "{}-user-trusted-ca-secret".format(data.values.cluster)
      namespace: #@ "{}".format(data.values.ns)
    type: Opaque
    ---
    apiVersion: run.tanzu.vmware.com/v1alpha3
    kind: KappControllerConfig
    metadata:
      name: #@ data.values.cluster
      namespace: #@ "{}".format(data.values.ns)
    spec:
      kappController:
        createNamespace: false
        config:
          caCerts: #@ data.values.caCert
          dangerousSkipTLSVerify: ""
          httpProxy: ""
          httpsProxy: ""
          noProxy: ""
        deployment:
          apiPort: 10100
          concurrency: 4
          hostNetwork: true
          metricsBindAddress: "0"
          priorityClassName: system-cluster-critical
          tolerations:
          - key: CriticalAddonsOnly
            operator: Exists
          - effect: NoSchedule
            key: node-role.kubernetes.io/control-plane
          - effect: NoSchedule
            key: node-role.kubernetes.io/master
          - effect: NoSchedule
            key: node.kubernetes.io/not-ready
          - effect: NoSchedule
            key: node.cloudprovider.kubernetes.io/uninitialized
            value: "true"
        globalNamespace: tkg-system
      namespace: tkg-system
    ---
    apiVersion: run.tanzu.vmware.com/v1alpha3
    kind: ClusterBootstrap
    metadata:
      name: #@ data.values.cluster
      namespace: #@ "{}".format(data.values.ns)
      annotations:
        tkg.tanzu.vmware.com/add-missing-fields-from-tkr: v1.23.8+vmware.2
    spec:
      kapp:
        refName: kapp-controller.tanzu.vmware.com.0.41.5+vmware.1-tkg.1
        valuesFrom:
          providerRef:
            apiGroup: run.tanzu.vmware.com
            kind: KappControllerConfig
            name: #@ data.values.cluster
    ---
    apiVersion: cluster.x-k8s.io/v1beta1
    kind: Cluster
    metadata:
      name: #@ data.values.cluster
      namespace: #@ "{}".format(data.values.ns)
    spec:
      clusterNetwork:
        pods:
          cidrBlocks:
          - 172.20.0.0/16
        serviceDomain: cluster.local
        services:
          cidrBlocks:
          - 10.96.0.0/16
      topology:
        class: tanzukubernetescluster
        controlPlane:
          metadata:
            annotations:
              run.tanzu.vmware.com/resolve-os-image: os-name=photon,os-version=3,os-arch=amd64
          replicas: 1
        variables:
        - name: controlPlaneVolumes
          value:
          - capacity:
              storage: 10G
            mountPath: /var/lib/etcd
            name: etcd
            storageClass: tanzu
        - name: nodePoolVolumes
          value:
          - capacity:
              storage: 50G
            mountPath: /var/lib/containerd
            name: containerd
            storageClass: tanzu
        - name: vmClass
          value: best-effort-large  # this VM class must exist in the given namespace
        - name: storageClass
          value: tanzu
        - name: defaultStorageClass
          value: tanzu
        - name: trust
          value:
            additionalTrustedCAs:
            - name: corp-ca-1
        - name: ntp
          value: time1.oc.vmware.com # in an air-gapped environment, this NTP server must be reachable
        version: v1.23.8+vmware.2
        workers:
          machineDeployments:
          - class: node-pool
            metadata:
              annotations:
                run.tanzu.vmware.com/resolve-os-image: os-name=photon,os-version=3,os-arch=amd64
            name: md-0
            replicas: 3
            variables:
              overrides:
              - name: vmClass
                value: best-effort-2xlarge # this VM class must exist in the given namespace
              - name: storageClass
                value: tanzu # this storage class must exist in the given namespace
    EOF
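    
    
    # Render the template with the data values and apply the result to the
    # Supervisor Cluster. This step assumes kubectl is already logged in to the
    # Supervisor; --ignore-unknown-comments lets ytt accept the plain YAML
    # comments embedded in the template above.
    ytt --ignore-unknown-comments -f vsphere8-ca-prep.yaml -f vsphere8-ca-values.yaml | kubectl apply -f -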
    

Results

This bash script provisions the resources necessary for the new cluster in your management cluster. It then creates the workload cluster, but does not attach it to your organization.
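
Provisioning can take several minutes. Because the cluster is built through the Cluster API, you can watch progress on the Supervisor Cluster with standard kubectl queries (a sketch using the names from the script above):

    # Watch the Cluster object until its phase reports Provisioned.
    kubectl get cluster dev -n dev-ns -w

    # Inspect the machine deployments and machines as nodes come up.
    kubectl get machinedeployments,machines -n dev-ns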

What to do next

After the script completes and the cluster is provisioned, you can bring it under the management of Tanzu Mission Control Self-Managed, as described in Add a Workload Cluster into Tanzu Mission Control Management.