You can configure a Tanzu Kubernetes cluster to use routable pod networking by specifying antrea-nsx-routed as the CNI for the cluster.

Introducing Routable Pod Networking

The Kubernetes network model requires that a pod in the node network of a cluster be able to communicate with all pods on all nodes in the same cluster without network address translation (NAT). To satisfy this requirement, each Kubernetes pod is given an IP address that is allocated from a dedicated pods network.

When you provision a Tanzu Kubernetes cluster using the antrea or calico CNI plugin, the system creates the default pods network 192.168.0.0/16. This subnet is a private address space that is unique only within the cluster and is not routable on the internet. Although you can customize network.pods.cidrBlocks, the pods network cannot be made routable with these CNI plugins. For more information, see TKGS v1alpha2 API for Provisioning Tanzu Kubernetes Clusters.
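
For example, the following fragment of a v1alpha2 cluster specification customizes the pods CIDR while keeping the antrea CNI. The CIDR value is illustrative only; the resulting pods network is still private to the cluster and not routable.

  settings:
    network:
      cni:
        name: antrea
      pods:
        #custom, but still non-routable, pods network
        cidrBlocks: ["172.20.0.0/16"]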

The Tanzu Kubernetes Grid Service v1alpha2 API supports routable pod networking using the antrea-nsx-routed CNI plugin. This network interface is a customized Antrea plugin configured to support routable pod networks for Tanzu Kubernetes clusters. In the cluster specification, the pods.cidrBlocks field must be explicitly left empty (null) so that IP address management (IPAM) is handled by the Supervisor Cluster.

Enabling routable pod networking lets pods be addressed directly from a client external to the cluster. In addition, pod IP addresses are preserved, so external network services and servers can identify the source pods and apply policies based on IP address (see the verification example after the following list). Supported traffic patterns include the following:
  • Traffic is allowed between a Tanzu Kubernetes cluster pod and a vSphere Pod in the same vSphere Namespace.
  • Traffic is dropped between a Tanzu Kubernetes cluster pod and a vSphere Pod in different vSphere Namespaces.
  • Supervisor Cluster control plane nodes can reach Tanzu Kubernetes cluster pods.
  • Tanzu Kubernetes cluster pods can reach the external network.
  • External network cannot reach Tanzu Kubernetes cluster pods. Traffic is dropped by distributed firewall (DFW) isolation rules on the Tanzu Kubernetes cluster nodes.
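
As a quick check after the cluster is provisioned, you can confirm that pods receive addresses from the routable range rather than from the default 192.168.0.0/16 subnet:

  kubectl get pods --all-namespaces -o wide

The IP column shows addresses allocated by the Supervisor Cluster from the Namespace Network or Supervisor Cluster Pod CIDR. A client permitted by the traffic patterns above can reach a pod at that address directly.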

System Requirements for Routable Pods

Routable pod networking requires the Supervisor Cluster to be configured with NSX-T Data Center. You cannot use routable pods with native vSphere vDS networking.

Routable pod networking requires the Tanzu Kubernetes Grid Service v1alpha2 API. See Requirements for Using the TKGS v1alpha2 API.
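
To confirm that the Supervisor Cluster exposes the v1alpha2 TanzuKubernetesCluster API, you can query the run.tanzu.vmware.com API group with kubectl while logged in to the Supervisor Cluster. This is a verification sketch, not a required step.

  kubectl api-resources --api-group=run.tanzu.vmware.com
  kubectl explain tanzukubernetescluster --api-version=run.tanzu.vmware.com/v1alpha2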

NSX-T Configuration Requirements for Routable Pods

Other than the base requirements, there is no special NSX-T configuration required to use routable pods networks with Tanzu Kubernetes clusters. A vSphere with Tanzu environment running vSphere U3 with NSX-T includes the NCP version required to support routable pods networks.

NCP creates an IP Pool for the routable pods network from one of two sources:
  • If the Workload Network is configured with a Namespace Network, NCP will create one or more IP pools from the IP blocks specified for this Namespace Network.
  • If there is no Namespace Network specified for the Workload Network, NCP will create one or more IP pools from the Supervisor Cluster Pod CIDR.
For more information, see Add Workload Networks to a Supervisor Cluster Configured with VDS Networking and Change Workload Network Settings on a Supervisor Cluster Configured with NSX-T Data Center.

Supervisor Cluster Configuration Requirements for Routable Pods

Other than the base requirements, there is no special Supervisor Cluster configuration required to use routable pods networks with Tanzu Kubernetes clusters.

If routable pod networking is enabled as described in this topic, the Tanzu Kubernetes cluster pods CIDR is allocated from the IP pool created from the Namespace Network or, if no Namespace Network is specified, from the Supervisor Cluster Pod CIDR.

You must make sure that the Supervisor Cluster Services CIDR, which allocates the IP addresses for cluster nodes, does not overlap with the Namespace Network CIDR or with the Supervisor Cluster Pod CIDR.
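
For example, the following hypothetical address plan satisfies this requirement. The values are illustrative only and depend on your environment.

  Supervisor Cluster Services CIDR (cluster node IPs):  10.96.0.0/23
  Namespace Network / Supervisor Cluster Pod CIDR:      10.244.0.0/21

Because 10.96.0.0/23 and 10.244.0.0/21 do not share any addresses, node IP allocation cannot collide with routable pod IP allocation.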

Example Cluster Configuration for Routable Pods

The following example YAML shows how to configure a cluster with a routable pods network. It is a custom configuration for invoking the Tanzu Kubernetes Grid Service and provisioning a Tanzu Kubernetes cluster using the v1alpha2 API.

The cluster specification declares antrea-nsx-routed as the CNI to enable routable pod networking. When antrea-nsx-routed is specified as the CNI, the pods.cidrBlocks field must be empty. If antrea-nsx-routed is specified but the Supervisor Cluster is not using NSX-T networking, cluster provisioning fails.

apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster
metadata:
  name: tkgs-v2-cluster-routable-pods
  namespace: tkgs-cluster-ns
spec:
  topology:
    controlPlane:
      replicas: 3
      vmClass: guaranteed-medium
      storageClass: vwt-storage-policy
      tkr:  
        reference:
          name: v1.21.2---vmware.1-tkg.1.ee25d55
    nodePools:
    - name: worker-nodepool-a1
      replicas: 3
      vmClass: guaranteed-large
      storageClass: vwt-storage-policy
      tkr:  
        reference:
          name: v1.21.2---vmware.1-tkg.1.ee25d55
  settings:
    storage:
      defaultClass: vwt-storage-policy
    network:
      #`antrea-nsx-routed` is the required CNI
      #for routable pods 
      cni:
        name: antrea-nsx-routed
      services:
        cidrBlocks: ["10.97.0.0/24"]
      serviceDomain: tanzukubernetescluster.local
      #`pods.cidrBlocks` value must be empty
      #when `antrea-nsx-routed` is the CNI 
      pods:
        cidrBlocks:
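
Assuming the specification is saved to a file named tkgs-v2-cluster-routable-pods.yaml (a file name chosen for this example), you can provision the cluster and monitor it with kubectl while logged in to the vSphere Namespace:

  kubectl apply -f tkgs-v2-cluster-routable-pods.yaml
  kubectl get tanzukubernetescluster tkgs-v2-cluster-routable-pods -n tkgs-cluster-ns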