5G CNFs require advanced networking services to receive and transmit traffic at high speed with low latency. These advanced network capabilities must be achieved without deviating from the default networking abstraction provided by Kubernetes.

The Container Network Interface (CNI) provides the networking constructs within a Kubernetes cluster. Aside from each worker node having a general management IP, there are requirements for pod-to-pod communication.

The Cloud Native Networking design focuses on supporting multiple NICs in a pod, where the primary NIC is allocated to Kubernetes management and additional networks are attached for data forwarding.

The Multus CNI extends the traditional single-interface model of Kubernetes to support multiple interfaces per pod. This option allows the separation of user-plane and control-plane traffic and provides service isolation by leveraging multiple external interfaces.

For 5G workloads, the main interfaces are managed by the primary CNI. Additional network attachments, created using the secondary CNI, are bound to SR-IOV or Enhanced Data Path (EDP) port groups created within NSX.

TCA lets you choose a CNI that is configured and deployed as part of cluster creation. Even in control-plane CNFs, secondary (Multus) interfaces of different network types are typically used to provide logical separation at the worker node level. If each interface must terminate in its own VRF or isolated L3 domain (such as signaling and O&M) on the DC fabric/MPLS/transport network, use Multus to attach multiple interfaces to a worker node.

| Interface | 5G Control Workloads | 5G User-Plane Workloads |
| --- | --- | --- |
| Primary CNI (Calico / Antrea) | Required | Required |
| Secondary CNI (Multus / MACVLAN / IPVLAN / SR-IOV) | Not required, but usually leveraged for traffic separation | Required to provide high throughput and meet complex connectivity requirements |

When creating additional interfaces for the container, different network types can be configured. The most common are MACVLAN and IPVLAN.

  • MACVLAN Interfaces: MACVLAN creates multiple virtual network interfaces behind the host's single physical interface, each serving as a secondary attachment to the Kubernetes pod. With MACVLAN, each interface is allocated a unique MAC address.

  • IPVLAN Interfaces: IPVLAN is used in a similar way to create multiple virtual network interfaces behind a single physical interface. However, with IPVLAN, all secondary attachments to the pod share a common MAC address inherited from the physical interface (see the sketch following this list).
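As an illustration, a Multus NetworkAttachmentDefinition using the macvlan CNI plugin might look like the following sketch. The attachment name, master interface, and IPAM range are hypothetical; for IPVLAN, the plugin type would be ipvlan instead.

```yaml
# Minimal macvlan NetworkAttachmentDefinition sketch; the name, master
# interface, and addresses are hypothetical. For IPVLAN, use "type": "ipvlan".
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: n3-macvlan
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "mode": "bridge",
      "ipam": {
        "type": "static",
        "addresses": [{ "address": "192.168.10.10/24" }]
      }
    }
```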

Telco Cloud Automation can add secondary network interfaces (SR-IOV and VMXNET3) through Dynamic Infrastructure Provisioning. Regular (non-SR-IOV, non-EDP) interfaces can be added as part of cluster creation.

As a result, infrastructure administrators are not obliged to design the CaaS secondary networks in advance during cluster creation. The Dynamic Infrastructure Provisioning features of the Telco Cloud allow the Network Function Cloud Service Archive (CSAR) to be customized to add secondary interfaces to worker nodes. This is achieved by enabling the Multus CNI on those interfaces during the onboarding or instantiation process.

Within the CSAR, you can add a new network adapter of type VMXNET3 or SR-IOV, name the Multus interface, and attach it to the appropriate network resource from the available vSphere resources.
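As a rough illustration only, the adapter declaration inside a CSAR might resemble the sketch below. The key names and values are assumptions based on the pattern described above and must be verified against the TCA CSAR schema for your release.

```yaml
# Illustrative sketch only: these key names are assumptions and must be
# validated against the TCA CSAR schema for your release.
infra_requirements:
  node_components:
    network:
      devices:
        - deviceType: vmxnet3       # or sriov for user-plane traffic
          networkName: multus-n6    # hypothetical Multus interface name
          resourceName: pg-n6-data  # hypothetical vSphere port group
```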

The Telco Cloud platform provides two options for the primary CNI: Calico or Antrea. The Tanzu Kubernetes Grid management cluster is always deployed with Antrea as the primary CNI; however, you can choose the primary CNI when creating a workload cluster.

Note:

Once the primary CNI choice is made and the cluster is deployed, you cannot change the CNI.
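For reference, in a Tanzu Kubernetes Grid cluster configuration file the primary CNI is selected through the CNI variable. A minimal sketch, in which the cluster name is hypothetical:

```yaml
# Minimal workload cluster configuration sketch; only the CNI line is
# the point here, and the cluster name is hypothetical.
CLUSTER_NAME: wc-5g-core
CNI: calico        # accepted values include antrea (default) and calico
```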

Table 1. Primary CNI Deployment Options

| Capability | Description | Antrea | Calico |
| --- | --- | --- | --- |
| Pod Connectivity | Container network interface for pods | Uses Open vSwitch | Uses Linux bridge with BGP |
| Service: ClusterIP | Default Kubernetes service type, accessible from within the cluster | Supported | Supported |
| Service: NodePort | Allows external access through a port exposed on the worker node | Supported | Supported |
| Service: LoadBalancer | Leverages an L4 load balancer to distribute traffic across pods | Provided externally to the CNI, typically through NSX Advanced Load Balancer, HAProxy, or MetalLB | Provided externally to the CNI, typically through NSX Advanced Load Balancer, HAProxy, or MetalLB |
| Ingress Service | Routing for inbound pod traffic | Provided externally to the CNI, typically through the AVI Kubernetes Operator (part of NSX Advanced Load Balancer) or Contour | Provided externally to the CNI, typically through the AVI Kubernetes Operator (part of NSX Advanced Load Balancer) or Contour |
| Network Policy | Controls ingress and egress traffic | Open vSwitch based | iptables based |
| NSX Integration | Connectivity to NSX for administrator-defined security policies | Supported | Not supported |
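Both CNIs enforce the standard Kubernetes NetworkPolicy API (Antrea through Open vSwitch, Calico through iptables), so portable policies can be written once and used with either primary CNI. A minimal sketch with hypothetical namespace and pod labels:

```yaml
# Minimal NetworkPolicy sketch; the namespace and labels are hypothetical.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-oam-only
  namespace: cnf-a
spec:
  podSelector:
    matchLabels:
      app: smf              # hypothetical CNF pod label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: oam     # only O&M pods may reach the selected pods
```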

Design recommendations for Cloud Native networking depend on various factors. The most common consideration is which CNI has been validated by the function vendor.

Note:

Do not modify the primary CNI configurations. The eBGP function provided by Calico is currently not supported.

It is possible to mix and match primary CNIs across clusters: some clusters can be deployed with Antrea as the primary CNI while others use Calico. However, the primary CNI cannot be mixed across node pools within the same cluster.

Cloud Native Egress Considerations

The CNF egress communication requirements are also important. The two main considerations for egress networking are:

  • Multus for Egress: Multus CNI enables the attachment of multiple network interfaces to pods. The Multus CNI plugin is supported with Tanzu Kubernetes Grid. VMware TCA orchestrates the cluster with all required resources to run Multus as an additional CNI.

    The network attachment definition file is used to set up the network attachment for the pod. The CNF vendor must create these files using CNI Custom Resources (CRs) as needed by the application (see the pod attachment sketch after this list).

  • Worker node primary interface: Pods can share the worker node primary interface. In this case, Kubernetes performs Source Network Address Translation (SNAT), so egress traffic from pods appears to originate from the worker node.

    If SNAT is required together with multiple VRFs, an external security platform such as VMware NSX may also be needed. In this scenario, the NAT rules can be based on the destination networks, and a specific NAT pool is required for each destination traffic type.

    With this option, overlapping networks within the VRF are not supported because SNAT cannot distinguish between the different destination endpoints.
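For the Multus option, a pod selects its additional attachments through the k8s.v1.cni.cncf.io/networks annotation, referencing the NetworkAttachmentDefinition resources created by the CNF vendor. A minimal sketch with hypothetical names, reusing the n3-macvlan attachment sketched earlier:

```yaml
# Minimal sketch: attaching a pod to two hypothetical Multus networks.
apiVersion: v1
kind: Pod
metadata:
  name: upf-worker
  annotations:
    # Comma-separated NetworkAttachmentDefinition names in the same namespace
    k8s.v1.cni.cncf.io/networks: n3-macvlan, n6-sriov
spec:
  containers:
    - name: upf
      image: registry.example.com/upf:1.0   # hypothetical image
```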

The recommended egress design depends on the overall CNF networking requirements and design. Multus for Egress is a good design choice because it provides traffic isolation and allows multi-homed pods, which in turn simplifies networking configuration and operations.

Cloud Native Networking Recommendations

| Design Recommendation | Design Justification | Design Implication |
| --- | --- | --- |
| Work with the function vendor and all parties to determine the preferred primary CNI for the network function. | Determines whether Antrea or Calico has been validated. | Impacts the choice of primary CNI. |
| Leverage Multus as the secondary CNI only for functions or architectures that require it. | Multus is used to provide additional interfaces; not all applications require it. | May impact the network topology: how secondary interfaces connect to the network and how ingress/egress routing is configured. |
| Do not change the default configuration of the primary CNI. | Changing defaults may invalidate support and cause networking issues within the cluster. | The cluster networking capabilities are defined by the deployed CNI versions and the default configuration. |