You can use the v1beta1 API to create a Cluster with a routable pods network. You do this by overriding the default Cluster configuration with custom AntreaConfig and VSphereCPIConfig resources.

About Routable Pods Networking Using the v1beta1 API

The following example YAML demonstrates how to use the v1beta1 API to provision a Cluster with the Antrea RoutablePod feature enabled. This example builds on the v1beta1 Example: Default Cluster.

To enable the RoutablePod feature, the Cluster requires AntreaConfig and VSphereCPIConfig resources with specific settings.

The AntreaConfig must set trafficEncapMode: noEncap and noSNAT: true.

The VSphereCPIConfig must set antreaNSXPodRoutingEnabled: true, mode: vsphereParavirtualCPI, and
tlsCipherSuites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384

The name of the AntreaConfig must be in the format <cluster-name>-antrea-package. The name of the VSphereCPIConfig must be in the format <cluster-name>-vsphere-cpi-package.

Once the configuration files are created, create the Cluster specification that references them. During cluster creation, these configurations are used to provision the Cluster and override the default configuration.

Creating a Routable Pods Network: Supervisor Configuration

Creating a routable pods network requires configuration on the Supervisor and on the TKG cluster.
Note: Supervisor must be configured with NSX to use routable pods networking. You cannot use routable pods with VDS networking.
To configure a routable pods network on Supervisor:
  1. Create a new vSphere Namespace.

    See Create a vSphere Namespace for Hosting TKG Clusters on Supervisor.

  2. Select the Override Supervisor network settings checkbox.

    See Override Supervisor Network Settings for a vSphere Namespace for guidance.

  3. Configure the routable pods network using the following fields.

    NAT Mode
      Deselect this option to disable network address translation (NAT).

    Namespace Network
      Populate this field with a routable IP subnet in the form IP Address/Bits (e.g., 10.0.0.6/16).
      NCP creates one or more IP pools from the IP blocks specified for the network.
      At a minimum, specify a /23 subnet. For example, if you specify a /23 routable subnet with a /28 subnet prefix, you get 32 subnets, which should be enough for a 6-node cluster. A /24 subnet with a /28 prefix yields only 16 subnets, which is not enough.
      Attention: Make sure that the routable IP subnet that you add does not overlap with the Services CIDR, which allocates the IP addresses for cluster nodes. You can check the Services CIDR at Supervisor > Configure > Network > Workload Network.

    Namespace subnet prefix
      Specify a subnet prefix in the form /28, for example.
      The subnet prefix is used to carve the pod subnet for each node from the Namespace Network.

  4. Click Create to create the routable pods network.
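The subnet arithmetic in step 3 can be sanity-checked with Python's ipaddress module. This is an illustrative sketch: the 10.0.0.0/23 namespace network and the 198.51.100.0/24 Services CIDR used here are assumed example values, not values taken from your environment.

```python
import ipaddress

def count_pod_subnets(namespace_network: str, prefix: int = 28) -> int:
    """Number of per-node pod subnets of the given prefix length that
    can be carved from the namespace network."""
    net = ipaddress.ip_network(namespace_network)
    return 2 ** (prefix - net.prefixlen)

# A /23 namespace network with a /28 subnet prefix yields 32 subnets.
print(count_pod_subnets("10.0.0.0/23"))  # 32

# A /24 namespace network with a /28 subnet prefix yields only 16 subnets.
print(count_pod_subnets("10.0.0.0/24"))  # 16

# The namespace network must not overlap the Services CIDR.
services_cidr = ipaddress.ip_network("198.51.100.0/24")  # assumed example value
namespace_net = ipaddress.ip_network("10.0.0.0/23")
print(namespace_net.overlaps(services_cidr))  # False, so no overlap
```

The same overlaps() check can be reused against any candidate namespace network before you create the vSphere Namespace.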

Creating a Routable Pods Network: TKG Cluster Configuration

The following example YAML shows how to configure a v1beta1 Cluster with a routable pods network.

As shown in the example below, you must remove the spec.clusterNetwork.pods section from the Cluster specification because the pod IP addresses are allocated by cloud-provider-vsphere.
Note: The example is provided as a single YAML file, but it can be separated into individual files. If you do this, you must create the resources in order: first the AntreaConfig and VSphereCPIConfig custom resources, then the target-cluster Cluster.
---
apiVersion: cni.tanzu.vmware.com/v1alpha1
kind: AntreaConfig
metadata:
 name: target-cluster-antrea-package
spec:
 antrea:
   config:
     defaultMTU: ""
     disableUdpTunnelOffload: false
     featureGates:
       AntreaPolicy: true
       AntreaProxy: true
       AntreaTraceflow: true
       Egress: false
       EndpointSlice: true
       FlowExporter: false
       NetworkPolicyStats: false
       NodePortLocal: false
     noSNAT: true
     tlsCipherSuites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384
     trafficEncapMode: noEncap
---
apiVersion: cpi.tanzu.vmware.com/v1alpha1
kind: VSphereCPIConfig
metadata:
 name: target-cluster-vsphere-cpi-package
spec:
 vsphereCPI:
   antreaNSXPodRoutingEnabled: true
   insecure: false
   mode: vsphereParavirtualCPI
   tlsCipherSuites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
 name: target-cluster
spec:
 clusterNetwork:
   services:
     cidrBlocks: ["198.51.100.0/12"]
   serviceDomain: "cluster.local"
 topology:
   class: tanzukubernetescluster
   version: v1.25.7---vmware.3-fips.1-tkg.1
   controlPlane:
     replicas: 3
   workers:
     machineDeployments:
       - class: node-pool
         name: node-pool-1
         replicas: 3
   variables:
     - name: vmClass
       value: guaranteed-medium
     - name: storageClass
       value: tkg2-storage-policy