In Kubernetes networking, every Pod has a unique IP address, and all containers in that Pod share it. A Pod's IP is routable from all other Pods, regardless of the nodes they run on. Kubernetes is agnostic to how reachability is achieved (L2, L3, overlay networks, and so on), as long as traffic can reach the desired Pod on any node.
CNI (Container Network Interface) is a container networking specification adopted by Kubernetes to support Pod-to-Pod networking. The specification is built around Linux network namespaces: the container runtime first allocates a network namespace to the container and then passes a set of CNI parameters to the network plugin. The plugin attaches the container to a network and reports the assigned IP address back to the container runtime. Multiple plugins may run at a time, with containers joining networks driven by different plugins.
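The runtime drives this flow from a network configuration file it passes to the plugin. A minimal sketch, using the standard upstream `bridge` and `host-local` plugins; the network name, bridge name, and subnet here are illustrative, not taken from this document:

```json
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "gateway": "10.22.0.1"
  }
}
```

On most distributions the runtime reads such files from `/etc/cni/net.d/` and invokes the plugin binary named in `type` to attach the container and assign an address from the `ipam` block.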
Telco workloads typically require a separation of the control plane and data plane. A strict separation between Telco traffic and the Kubernetes control plane also requires multiple network interfaces to provide service isolation or routing. To support workloads that require multiple interfaces in a Pod, additional plugins are required. A CNI meta plugin (or CNI multiplexer) that attaches multiple interfaces can be used to provide Pods with multiple NIC support. The CNI plugin that serves Pod-to-Pod networking is called the primary or default CNI (the network interface that every Pod is created with). Each additional network attachment created by the meta plugin is called a secondary CNI.
While there are numerous container networking technologies and approaches to combining them, Telco Cloud and Kubernetes administrators want to eliminate manual CNI provisioning in containerized environments and reduce the number of container plugins to maintain. Calico or Antrea is often used as the primary CNI plugin. IPVLAN can be used as a secondary CNI together with a CNI meta plugin such as Multus.
Kubernetes Primary Interface with Antrea
Each node must have a management interface. The management and Pod IP addresses must be routable for Kubernetes health checks to work. Antrea serves as the default CNI, providing Pod-to-Pod communication within the cluster.
| Design Decision | Design Justification | Design Implication |
|---|---|---|
| Each node must have at least one management network interface. | The management interface is used by K8s Pods to communicate within the Kubernetes cluster. | Nine vNICs remain for the CNF data plane traffic. |
| Use a dedicated Kubernetes Node IP block per NSX-T fabric. | A dedicated subnet simplifies troubleshooting and routing. | |
| Allocate a dedicated Kubernetes Pod IP block if 100.96.0.0/11 cannot be used. | | This IP block must not overlap with Multus IP blocks. For Multus requirements, see the Secondary CNI Plugins section. |
| Allocate a dedicated Kubernetes Service IP block if 100.64.0.0/13 cannot be used. | | |
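The Pod and Service IP blocks from the table map to the cluster networking configuration. A hedged sketch, assuming a kubeadm-managed cluster (this document does not specify the deployment tool) and the default blocks above:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 100.96.0.0/11      # Kubernetes Pod IP block; must not overlap with Multus blocks
  serviceSubnet: 100.64.0.0/13  # Kubernetes Service IP block
```

If either default block is unavailable, substitute the dedicated block allocated per the design decisions above.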
Secondary CNI Plugins
Multiple network interfaces can be realized by Multus working with Antrea and additional upstream CNI plugins. Antrea creates the primary (default) network for every Pod. Additional interfaces can be VDS interfaces managed through Multus by using secondary CNI plugins. The IP Address Management (IPAM) instance assigned to a secondary interface is independent of the primary (default) network.
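To illustrate the independent IPAM on a secondary interface, a Multus `NetworkAttachmentDefinition` can embed an `ipvlan` configuration with its own IP block. A sketch only: the attachment name `datapath-net`, the parent interface `ens224`, and the subnet are illustrative assumptions, not values from this design:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: datapath-net            # hypothetical attachment name
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "ipvlan",
    "master": "ens224",
    "mode": "l2",
    "ipam": {
      "type": "host-local",
      "subnet": "192.168.100.0/24",
      "rangeStart": "192.168.100.10",
      "rangeEnd": "192.168.100.200"
    }
  }'
```

The `ipam` block here is scoped to this attachment only, so addresses on the secondary interface are managed separately from the primary Antrea network, as the design requires.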
| Design Decision | Design Justification | Design Implication |
|---|---|---|
| Enable Multus integration with the Kubernetes API server to provision network devices to a data plane CNF. | | Multus is an upstream plugin and follows the community support model. |
| Assign a dedicated IP block for additional container interfaces. Note: IP address management for the additional interfaces must be separate from the primary container interface. | | |
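A Pod requests an additional interface through the Multus networks annotation. A minimal sketch, assuming a `NetworkAttachmentDefinition` named `datapath-net` (a hypothetical name) has already been created in the Pod's namespace; the image reference is also illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cnf-dataplane
  annotations:
    k8s.v1.cni.cncf.io/networks: datapath-net   # secondary interface via Multus
spec:
  containers:
  - name: app
    image: registry.example.com/cnf:latest      # illustrative image
```

Multus attaches `eth0` via the primary CNI (Antrea) as usual and adds `net1` from the named attachment, with its address drawn from that attachment's own IPAM block.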