You can create a TanzuKubernetesCluster with routable pods networking by configuring a routable Namespace Network on Supervisor and by specifying antrea-nsx-routed as the CNI for the cluster.

About Routable Pods Networking

When you provision a Tanzu Kubernetes cluster using the antrea or calico CNI plugin, the system creates the default pods network 192.168.0.0/16. This subnet is a private address space that is unique only within the cluster and is not routable on the external network.
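For contrast, the following fragment is a minimal sketch of the default, non-routable network settings under the v1alpha3 API when the standard antrea CNI is used. The CIDR values shown are the commonly documented defaults and are included here only for illustration.

settings:
  network:
    cni:
      name: antrea
    services:
      cidrBlocks: ["10.96.0.0/12"]
    pods:
      cidrBlocks: ["192.168.0.0/16"]
    serviceDomain: cluster.local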

The TKG v1alpha3 API supports routable pods networking through the antrea-nsx-routed CNI plugin, a customized Antrea plugin configured to support routable pod networks for TKG clusters. In the cluster specification, the pods.cidrBlocks field must be explicitly null so that IP address management (IPAM) is handled by the Supervisor. Refer to the example below.

Enabling routable pods networking lets pods be addressed directly by clients external to the cluster. In addition, pod IP addresses are preserved, so external network services and servers can identify the source pods and apply policies based on IP address. Supported traffic patterns include the following:
  • Traffic is allowed between a TKG cluster pod and a vSphere Pod in the same vSphere Namespace.
  • Traffic is dropped between a TKG cluster pod and a vSphere Pod in different vSphere Namespaces.
  • Supervisor control plane nodes can reach TKG cluster pods.
  • TKG cluster pods can reach the external network.
  • The external network cannot reach TKG cluster pods. Traffic is dropped by distributed firewall (DFW) isolation rules on the cluster nodes.
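As an illustration, the following minimal Deployment could be applied to a cluster provisioned with routable pods (the name and image are placeholders). With routable pods enabled, the pod IP reported by kubectl get pods -o wide comes from the routable Namespace Network rather than a private overlay subnet, so upstream services and firewalls can match policies against it, subject to the DFW rules noted above.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: routable-pods-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: routable-pods-demo
  template:
    metadata:
      labels:
        app: routable-pods-demo
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80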

Creating a Routable Pods Network: Supervisor Configuration

Creating a routable pods network requires configuration on the Supervisor and on the TKG cluster.
Note: Supervisor must be configured with NSX to use routable pods networking. You cannot use routable pods with VDS networking.
To configure a routable pods network on Supervisor:
  1. Create a new vSphere Namespace.

    See Create a vSphere Namespace for Hosting TKG Service Clusters.

  2. Select the checkbox option to Override Supervisor network settings.

    See Override Workload Network Settings for a vSphere Namespace for guidance.

  3. Configure the routable pods network as follows. The subnet counts given here are derived in the worked example after these steps.

    • NAT Mode: Deselect this option to disable network address translation (NAT), since you are using a routable subnet.

    • Namespace Network CIDR: This subnet operates as an IP pool for the vSphere Namespace; the Namespace Subnet Prefix (next field) sets the size of each CIDR block carved out of that pool. Populate this field with a routable IP subnet in the form IP address/bits (for example, 10.0.0.0/16). NCP creates one or more IP pools from the IP blocks specified for the network. At a minimum, specify a /23 network: a /23 network with a /28 subnet prefix yields 32 subnets, which should be enough for a 6-node cluster, whereas a /24 network with a /28 prefix yields only 16 subnets, which might not be enough.

    • Namespace Subnet Prefix: This value sets the size of each CIDR block that is carved out of the Namespace Network IP pool. Specify a subnet prefix in the form /28, for example.

  4. Click Create to create the routable pods network.
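The subnet sizing guidance in step 3 follows from simple powers of two: a network of prefix /n contains 2^(p − n) subnets of prefix /p. Assuming the /28 subnet prefix used in the examples above:

/23 network: 2^(28 − 23) = 32 subnets
/24 network: 2^(28 − 24) = 16 subnets

Each additional bit in the Namespace Network prefix halves the number of subnets available to the vSphere Namespace, so size the network CIDR for the largest cluster you expect to run.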

Creating a Routable Pods Network: TKG Cluster Configuration

The following example YAML shows how to configure a cluster with a routable pods network.

The cluster spec declares antrea-nsx-routed as the CNI to enable routable pods networking. If antrea-nsx-routed is specified, cluster provisioning fails when the Supervisor is not configured with NSX networking.

When the CNI is specified as antrea-nsx-routed, the pods.cidrBlocks field must be empty.

apiVersion: run.tanzu.vmware.com/v1alpha3
kind: TanzuKubernetesCluster
metadata:
  name: tkc-routable-pods
  namespace: tkg-cluster-ns
spec:
  topology:
    controlPlane:
      replicas: 3
      vmClass: guaranteed-medium
      storageClass: tkg-storage-policy
      tkr:  
        reference:
          name: v1.25.7---vmware.3-fips.1-tkg.1
    nodePools:
    - name: worker-nodepool-a1
      replicas: 3
      vmClass: guaranteed-large
      storageClass: tkg-storage-policy
      tkr:  
        reference:
          name: v1.25.7---vmware.3-fips.1-tkg.1
  settings:
    storage:
      defaultClass: tkg-storage-policy
    network:
      # antrea-nsx-routed is the required CNI
      cni:
        name: antrea-nsx-routed
      services:
        cidrBlocks: ["10.97.0.0/24"]
      # pods.cidrBlocks must be null (empty)
      pods:
        cidrBlocks:
      serviceDomain: cluster.local
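
Assuming the manifest above is saved as tkc-routable-pods.yaml (a placeholder file name), you would apply it while logged in to the Supervisor with the vSphere Plugin for kubectl and targeting the tkg-cluster-ns namespace:

kubectl apply -f tkc-routable-pods.yaml

Once the cluster is provisioned, pods scheduled on it receive IP addresses drawn from the routable Namespace Network configured in the previous section, with IPAM handled by the Supervisor.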