You can create a TanzuKubernetesCluster with routable pods networking by configuring a routable Namespace Network on Supervisor and by specifying
antrea-nsx-routed as the CNI for the cluster.
About Routable Pods Networking
When you provision a Tanzu Kubernetes cluster using the
antrea or calico CNI plugin, the system creates the default pods network
192.168.0.0/16. This subnet is a private address space that is unique only within the cluster and is not routable on the external network.
The TKG v1alpha3 API supports routable pods networking using the
antrea-nsx-routed CNI plugin. This plugin is a customized version of Antrea configured to support routable pod networks for TKG clusters. In the cluster spec, the pods.cidrBlocks field must be explicitly null (empty) so that IP address management (IPAM) is handled by the Supervisor. Refer to the example below. With routable pods networking:
- Traffic is allowed between a TKG cluster pod and a vSphere Pod in the same vSphere Namespace.
- Traffic is dropped between a TKG cluster pod and a vSphere Pod in different vSphere Namespaces.
- Supervisor control plane nodes can reach TKG cluster pods.
- TKG cluster pods can reach the external network.
- The external network cannot reach TKG cluster pods. Traffic is dropped by distributed firewall (DFW) isolation rules on the cluster nodes.
Creating a Routable Pods Network: Supervisor Configuration
- Create a new vSphere Namespace.
See Create a vSphere Namespace for Hosting TKG Clusters on Supervisor.
- Select the checkbox option to Override Supervisor network settings.
See Override Supervisor Network Settings for guidance.
- Deselect NAT Mode.
- Populate the Namespace Network with a routable subnet. NSX Container Plugin (NCP) creates one or more IP pools from the IP blocks specified for the network.
- Make sure that the routable Namespace Network that you added does not overlap with the Services CIDR, which allocates the IP addresses for cluster nodes.
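The overlap check in the last step can be done by eye, but it is easy to sketch programmatically with Python's standard ipaddress module. The subnet values below are hypothetical placeholders, not values from your environment:

```python
import ipaddress

def overlaps(cidr_a: str, cidr_b: str) -> bool:
    """Return True if the two CIDR blocks share any addresses."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

# Hypothetical routable Namespace Network and a Services CIDR.
namespace_network = "10.10.0.0/16"
services_cidr = "10.97.0.0/24"

if overlaps(namespace_network, services_cidr):
    raise SystemExit("Namespace Network overlaps the Services CIDR; choose another subnet")
print("No overlap: Namespace Network is safe to use")
```

The same check applies to any pair of CIDR blocks you plan to use side by side on the Supervisor.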
Creating a Routable Pods Network: TKG Cluster Configuration
The following example YAML shows how to configure a cluster with a routable pods network.
The cluster spec declares
antrea-nsx-routed as the CNI to enable routable pods networking. If
antrea-nsx-routed is specified, cluster provisioning fails unless NSX networking is being used.
The pods.cidrBlocks field must be empty.
```yaml
apiVersion: run.tanzu.vmware.com/v1alpha3
kind: TanzuKubernetesCluster
metadata:
  name: tkc-routable-pods
  namespace: tkg2-cluster-ns
spec:
  topology:
    controlPlane:
      replicas: 3
      vmClass: guaranteed-medium
      storageClass: tkg2-storage-policy
      tkr:
        reference:
          name: v1.23.8---vmware.2-tkg.2-zshippable
    nodePools:
    - name: worker-nodepool-a1
      replicas: 3
      vmClass: guaranteed-large
      storageClass: tkg2-storage-policy
      tkr:
        reference:
          name: v1.22.8---vmware.1-tkg.2-zshippable
  settings:
    storage:
      defaultClass: tkg2-storage-policy
    network:
      # antrea-nsx-routed is the required CNI
      cni:
        name: antrea-nsx-routed
      services:
        cidrBlocks: ["10.97.0.0/24"]
      # pods.cidrBlocks must be null (empty)
      pods:
        cidrBlocks:
      serviceDomain: cluster.local
```
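After applying the manifest, you can spot-check that pods actually receive routable addresses. A minimal sketch of that workflow, assuming a kubectl context logged in to the Supervisor, the names from the example above, and the manifest saved as tkc-routable-pods.yaml (a hypothetical file name):

```shell
# Provision the cluster from the example manifest.
kubectl apply -f tkc-routable-pods.yaml

# Check provisioning status until the cluster reports ready.
kubectl get tanzukubernetescluster tkc-routable-pods -n tkg2-cluster-ns

# After logging in to the new cluster, list pod IPs: with routable pods
# networking they come from the Namespace Network, not 192.168.0.0/16.
kubectl get pods -A -o wide
```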