In Kubernetes networking, every Pod receives a unique IP address, and all containers in a Pod share that address.

The IP address of a Pod is routable from all other Pods, regardless of the nodes they run on. Kubernetes is agnostic to how reachability is achieved (L2, L3, overlay networks, and so on), as long as traffic can reach the desired Pod on any node.

CNI is a container networking specification adopted by Kubernetes to support Pod-to-Pod networking. The specification operates on a Linux network namespace: the container runtime first allocates a network namespace to the container and then passes a set of CNI parameters to the network driver. The network driver attaches the container to a network and reports the assigned IP address back to the container runtime. Multiple plugins may run at the same time, with containers joining networks managed by different plugins.
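As an illustration, a minimal CNI network configuration of the kind the runtime hands to a plugin might look like the following sketch. The network name, bridge name, and subnet are hypothetical; the upstream bridge and host-local plugins are used:

```json
{
  "cniVersion": "0.4.0",
  "name": "example-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
```

The runtime selects this configuration, invokes the named plugin (here, bridge) with the Pod's network namespace, and receives back the IP address that host-local allocated from the subnet.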

Telco workloads typically require separation of the control plane and the data plane. In addition, strict separation between Telco traffic and the Kubernetes control plane requires multiple network interfaces to provide service isolation or dedicated routing. Supporting workloads that need multiple interfaces in a Pod requires additional plugins: a CNI meta-plugin or CNI multiplexer that attaches multiple interfaces can provide Pods with multi-NIC support. The CNI plugin that serves Pod-to-Pod networking is called the primary or default CNI (the network interface that every Pod is created with). Each additional network attachment created by the meta-plugin is called a secondary CNI.

While there are numerous container networking technologies and approaches to combining them, Telco Cloud and Kubernetes administrators want to eliminate manual CNI provisioning in containerized environments and reduce the number of container plugins to maintain. Calico or Antrea is often used as the primary CNI plugin. IPVLAN can be used as a secondary CNI together with a CNI meta-plugin such as Multus.
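To give a flavor of how a secondary network is declared, a Multus NetworkAttachmentDefinition using the IPVLAN plugin might look like the sketch below; the attachment name, the parent interface eth1, and the subnet are assumptions, not values from this design:

```yaml
# Hypothetical secondary network: IPVLAN on parent interface eth1.
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ipvlan-data
spec:
  config: '{
      "cniVersion": "0.4.0",
      "type": "ipvlan",
      "master": "eth1",
      "mode": "l2",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.10.0/24"
      }
    }'
```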

Kubernetes Primary Interface with Antrea

Each node must have a management interface. The management and Pod IP addresses must be routable for Kubernetes health checks to work. Antrea is the default CNI for Pod-to-Pod communication within the cluster.

The following design decisions apply, each with its justification and implication.

Decision: Each node must have at least one management network interface.

Justification: The management interface is used by Kubernetes Pods to communicate within the Kubernetes cluster.

Implication: Nine vNICs remain for CNF data plane traffic.

Decision: Use a dedicated Kubernetes node IP block per NSX-T fabric.

Justification:

  • The IP block should be large enough to accommodate the expected number of Kubernetes clusters.

  • During Kubernetes cluster deployment, allocate a single /24 subnet from the node IP block for each cluster to provide sufficient IP addresses for cluster scale-out.

  • A smaller block size can be used if the cluster size is fixed or will not scale to a large number of nodes.

  • A dedicated subnet simplifies troubleshooting and routing.

Implication:

  • This IP block must not overlap with the Pod or Multus IP blocks.

  • IP address fragmentation can result in small cluster sizes.
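The per-cluster /24 allocation can be sketched with Python's ipaddress module; the /16 node IP block below is a hypothetical example value, not part of the design:

```python
import ipaddress

# Hypothetical node IP block for one NSX-T fabric (example value).
node_block = ipaddress.ip_network("10.244.0.0/16")

# Allocate one /24 subnet per Kubernetes cluster, as the design recommends.
cluster_subnets = list(node_block.subnets(new_prefix=24))

print(len(cluster_subnets))   # 256 -> a /16 accommodates up to 256 clusters
print(cluster_subnets[0])     # 10.244.0.0/24, the subnet for the first cluster
```

Sizing the block therefore reduces to counting expected clusters: each additional bit in the block's prefix halves the number of /24 subnets available.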

Decision: Allocate a dedicated Kubernetes Pod IP block if the default block cannot be used.

Justification:

  • Start with a /11 network for the Kubernetes Pod IP block.

  • The container plugin uses this block to assign address space to Kubernetes Pods. A single /24 network segment from the Pod IP block is instantiated per Kubernetes node.

  • The Pod IP block should not be routable outside of the Kubernetes cluster.

Implication: This IP block must not overlap with the Multus IP blocks. For Multus requirements, see the Secondary CNI Plugins section.

Decision: Allocate a dedicated Kubernetes service IP block if the default block cannot be used.

Justification:

  • Current best practice is to draw this block from the shared address space that RFC 6598 defines for Carrier-Grade NAT (CGN), 100.64.0.0/10.

  • The IP block must not be routable outside of the Kubernetes cluster.

Implication: This IP block must not overlap with the Multus IP blocks. For Multus requirements, see the Secondary CNI Plugins section.
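The recurring non-overlap requirements across the node, Pod, service, and Multus blocks can be verified programmatically. A minimal sketch, assuming hypothetical values for all four blocks:

```python
import ipaddress
from itertools import combinations

# Hypothetical address plan; the design only requires that these blocks
# do not overlap, not these specific ranges.
blocks = {
    "nodes":    ipaddress.ip_network("10.244.0.0/16"),
    "pods":     ipaddress.ip_network("100.96.0.0/11"),
    "services": ipaddress.ip_network("100.64.0.0/13"),
    "multus":   ipaddress.ip_network("192.168.0.0/16"),
}

# Check every pair of blocks for overlap.
for (name_a, net_a), (name_b, net_b) in combinations(blocks.items(), 2):
    assert not net_a.overlaps(net_b), f"{name_a} overlaps {name_b}"
print("no overlapping blocks")
```

Running such a check before cluster deployment catches address-plan conflicts early, when they are still cheap to fix.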

Secondary CNI Plugins

Multiple network interfaces can be realized by Multus working together with Antrea and additional upstream CNI plugins. Antrea creates the primary (default) network for every Pod. Additional interfaces can be VDS interfaces managed through Multus by using secondary CNI plugins. The IP Address Management (IPAM) instance assigned to a secondary interface is independent of the primary (default) network.
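A Pod requests a secondary interface through the k8s.v1.cni.cncf.io/networks annotation. The sketch below assumes a NetworkAttachmentDefinition named ipvlan-data already exists in the namespace; Antrea still provides eth0, and Multus adds the attachment as an additional interface (net1):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cnf-dataplane            # hypothetical Pod name
  annotations:
    # Attach the (assumed) secondary network in addition to the Antrea-managed eth0.
    k8s.v1.cni.cncf.io/networks: ipvlan-data
spec:
  containers:
    - name: app
      image: example/cnf:latest  # placeholder image
```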

The following design decisions apply, each with its justification and implication.

Decision: Enable Multus integration with the Kubernetes API server to provision network devices to a data plane CNF.

Justification:

  • Multus CNI enables the attachment of multiple network interfaces to a Pod.

  • Multus acts as a "meta-plugin": a CNI plugin that can call multiple other CNI plugins.

Implication: Multus is an upstream plugin and follows the community support model.

Decision: Assign a dedicated IP block for additional container interfaces. Note: IP address management for the additional interfaces must be separate from that of the primary container interface.

Justification: Specify a /16 network for the Multus IP block for additional container interfaces.

Implication:

  • The default host-local IPAM scope is per node rather than cluster-wide.

  • Cluster-wide IPAM is available but requires additional components that are not installed out of the box.

  • Additional care must be taken to avoid IP address conflicts.
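Because host-local IPAM is scoped per node, each node's secondary-network configuration must carve its own non-overlapping slice out of the Multus /16 to avoid conflicts. A hypothetical host-local IPAM fragment for one node (the subnet and range values are example assumptions):

```json
{
  "type": "host-local",
  "ranges": [
    [
      {
        "subnet": "192.168.0.0/16",
        "rangeStart": "192.168.1.10",
        "rangeEnd": "192.168.1.250"
      }
    ]
  ]
}
```

A different rangeStart/rangeEnd would be configured on every node; cluster-wide IPAM removes this per-node bookkeeping at the cost of installing an extra component.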