Refer to this example to provision a v1beta1 Cluster with the Calico CNI instead of the default Antrea CNI. More generally, this example shows how to customize one or more TKR packages for a cluster.

v1beta1 Example: Cluster with Custom CNI

The following example YAML demonstrates how to use the v1beta1 API to provision a Cluster with a custom CNI. This example builds on the v1beta1 Example: Default Cluster.

As defined in the tanzukubernetescluster ClusterClass, the default CNI is Antrea. The other supported CNI is Calico. To change the CNI from Antrea to Calico, you override the default CNI by creating a ClusterBootstrap custom resource that references a CalicoConfig custom resource.

The ClusterBootstrap custom resource includes the spec.cni.refName field with a value that comes from the TKR. (See TKR Packages for guidance on how to obtain the value for this field.) The ClusterBootstrap value overrides the default value in the ClusterClass and is picked up by Cluster API (CAPI) when the Cluster is created. The name of the ClusterBootstrap custom resource must match the name of the Cluster.
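One way to find the value for spec.cni.refName is to list the Calico packages that ship with the TKRs. This is a sketch, not the authoritative procedure; the vmware-system-tkg namespace is an assumption and may differ in your environment:

```shell
# List available Calico package versions shipped with the TKRs.
# NOTE: the namespace below is an assumption; packages may live in a
# different namespace in your environment.
kubectl -n vmware-system-tkg get packages | grep calico
```

The metadata.name of the matching Package resource (for example, calico.tanzu.vmware.com.3.22.1+vmware.1-tkg.2-zshippable) is the value to use for spec.cni.refName.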
Note: The example is provided as a single YAML file, but it can be separated into individual files. If you do this, you must create the resources in order: first the CalicoConfig custom resource, then the ClusterBootstrap, and finally the Cluster named cluster-calico.
---
apiVersion: cni.tanzu.vmware.com/v1alpha1
kind: CalicoConfig
metadata:
  name: cluster-calico
spec:
  calico:
    config:
      vethMTU: 0 # 0 means auto-detect the MTU
---
apiVersion: run.tanzu.vmware.com/v1alpha3
kind: ClusterBootstrap
metadata:
  annotations:
    tkg.tanzu.vmware.com/add-missing-fields-from-tkr: v1.23.8---vmware.2-tkg.2-zshippable
  name: cluster-calico
spec:
  cni:
    refName: calico.tanzu.vmware.com.3.22.1+vmware.1-tkg.2-zshippable
    valuesFrom:
      providerRef:
        apiGroup: cni.tanzu.vmware.com
        kind: CalicoConfig
        name: cluster-calico
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: cluster-calico
spec:
  clusterNetwork:
    services:
      cidrBlocks: ["198.51.100.0/12"]
    pods:
      cidrBlocks: ["192.0.2.0/16"]
    serviceDomain: "cluster.local"
  topology:
    class: tanzukubernetescluster
    version: v1.23.8---vmware.2-tkg.2-zshippable
    controlPlane:
      replicas: 3
    workers:
      machineDeployments:
        - class: node-pool
          name: node-pool-1
          replicas: 3
    variables:
      - name: vmClass
        value: guaranteed-medium
      - name: storageClass
        value: tkg2-storage-policy
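With the YAML saved, you can create the resources and verify the result. The following workflow is a sketch; the file name is a placeholder, and the verification commands assume the kubeconfig context noted in each comment:

```shell
# Apply all three resources from the single YAML file. Within one file,
# kubectl creates the documents top to bottom, which preserves the
# required order: CalicoConfig, ClusterBootstrap, Cluster.
kubectl apply -f cluster-calico.yaml

# Confirm the ClusterBootstrap picked up the Calico package reference
# (run against the Supervisor, in the namespace where you created it).
kubectl get clusterbootstrap cluster-calico -o jsonpath='{.spec.cni.refName}'

# After the cluster is provisioned, switch your context to the new
# workload cluster and check that the Calico pods are running.
kubectl get pods -A | grep calico
```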