Refer to the example YAML to provision a TanzuKubernetesCluster using the v1alpha3 API with custom network settings.

v1alpha3 Example: TKC with Custom Network Settings

The network is customized as follows. Refer to the v1alpha3 API spec for details.
  • Calico CNI is used instead of the default Antrea
  • A non-default CIDR block is specified for services; the pods CIDR block is declared explicitly
  • An HTTP/HTTPS proxy server and additional trusted CA certificates are declared
apiVersion: run.tanzu.vmware.com/v1alpha3
kind: TanzuKubernetesCluster
metadata:
  name: tkc-custom-network
  namespace: tkg2-cluster-ns
spec:
  topology:
    controlPlane:
      replicas: 3
      vmClass: guaranteed-medium
      storageClass: tkg-storage-policy
      tkr:
        reference:
          name: v1.25.7---vmware.3-fips.1-tkg.1
    nodePools:
    - name: worker
      replicas: 3
      vmClass: guaranteed-medium
      storageClass: tkg-storage-policy
      tkr:
        reference:
          name: v1.25.7---vmware.3-fips.1-tkg.1
      volumes:
      - name: containerd
        mountPath: /var/lib/containerd
        capacity:
          storage: 50Gi
      - name: kubelet
        mountPath: /var/lib/kubelet
        capacity:
          storage: 50Gi
  settings:
    storage:
      defaultClass: tkg-storage-policy
    network:
      cni:
        name: calico
      services:
        cidrBlocks: ["172.16.0.0/16"]
      pods:
        cidrBlocks: ["192.168.0.0/16"]
      serviceDomain: cluster.local
      proxy:
        httpProxy: http://<user>:<pwd>@<ip>:<port>
        httpsProxy: http://<user>:<pwd>@<ip>:<port>
        noProxy: [10.246.0.0/16, 192.168.144.0/20, 192.168.128.0/20]
      trust:
        additionalTrustedCAs:
          - name: CompanyInternalCA-1
            data: LS0tLS1C...LS0tCg==
          - name: CompanyInternalCA-2
            data: MTLtMT1C...MT0tPg==
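
For comparison with the customizations listed above, if you omit spec.settings.network entirely, the cluster is provisioned with the default network settings. The fragment below is a minimal sketch of an equivalent explicit declaration; the services CIDR 10.96.0.0/12 is the commonly documented default, so verify it against the v1alpha3 API reference for your release.

spec:
  settings:
    network:
      cni:
        name: antrea                      # default CNI
      services:
        cidrBlocks: ["10.96.0.0/12"]      # assumed default services CIDR; confirm for your release
      pods:
        cidrBlocks: ["192.168.0.0/16"]    # default pods CIDR
      serviceDomain: cluster.local        # default service domain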

Considerations for Customizing the TKC Pod Network

The cluster specification setting spec.settings.network.pods.cidrBlocks defaults to 192.168.0.0/16.

If you customize this setting, the minimum pods CIDR block size is /24. However, be cautious about restricting the pods.cidrBlocks subnet mask beyond /16 (that is, using a longer prefix such as /17 or /20), because doing so reduces the number of nodes the cluster can support.

TKG allocates to each cluster node a /24 subnet carved from the pods.cidrBlocks range. This allocation is governed by the NodeIPAMController in the Kubernetes Controller Manager, specifically its NodeCIDRMaskSize parameter, which sets the subnet mask size for each node's pod CIDR. The default node subnet mask size is /24 for IPv4.
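
TKG configures this for you on the control plane; the fragment below is only an illustrative sketch of the relevant kube-controller-manager flags as they might appear in a static Pod manifest, using the pods CIDR from the example above and the IPv4 default node mask size.

# Illustrative kube-controller-manager static Pod fragment (managed by TKG; not edited by hand)
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    - --allocate-node-cidrs=true        # enable per-node pod CIDR allocation
    - --cluster-cidr=192.168.0.0/16     # matches spec.settings.network.pods.cidrBlocks
    - --node-cidr-mask-size=24          # each node receives a /24 from the cluster CIDR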

Because each node in the cluster receives its own /24 subnet from the pods.cidrBlocks range, you can run out of node subnets, and therefore hit a node-count ceiling, if you use a subnet mask that is too restrictive for the cluster you are provisioning. A pods CIDR with prefix length n yields 2^(24 − n) node subnets; the node limits and sizing example below follow from this.

The following node limits apply to a Tanzu Kubernetes cluster provisioned with either the Antrea or Calico CNI.

  • /16 == 150 nodes max (capped by the ConfigMax limit for the platform, not by the address space, which would allow 256)
  • /17 == 128 nodes max
  • /18 == 64 nodes max
  • /19 == 32 nodes max
  • /20 == 16 nodes max
  • /21 == 8 nodes max
  • /22 == 4 nodes max
  • /23 == 2 nodes max
  • /24 == 1 node max
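
As a sizing example, suppose you plan a cluster with 3 control plane nodes and 10 workers (13 nodes total). A /20 pods CIDR yields 2^(24 − 20) = 16 node subnets, which covers 13 nodes with some headroom. The fragment below is a hypothetical sketch; the 10.244.0.0/20 range is illustrative only and must not overlap with your Supervisor networks, workload networks, or the services CIDR.

spec:
  settings:
    network:
      pods:
        # Hypothetical non-default pods CIDR: a /20 provides 2^(24-20) = 16
        # node subnets, enough for a 13-node cluster with room to grow.
        cidrBlocks: ["10.244.0.0/20"]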