Data Plane Development Kit (DPDK) is the preferred data path framework for most data plane intensive CNFs. DPDK enables significantly faster packet processing by polling network devices from user space.

The Worker Nodes in the Node Pool that is dedicated to the data plane CNF require customizations to support data plane acceleration with DPDK. These customizations are applied during Node Customization in Telco Cloud Automation.

Huge Pages

Huge pages are mandatory for DPDK. Huge pages are blocks of memory whose size is larger than that of a regular memory page. Data plane intensive CNFs have a large memory footprint. Huge pages significantly reduce the number of page table entries that the Memory Management Unit (MMU) must manage for the data plane CNFs. Hence, the number of page faults and Translation Lookaside Buffer (TLB) misses is reduced significantly, and the performance of the data plane CNFs improves.
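To see the effect of this configuration on a Worker Node, you can inspect the kernel's huge-page counters; this is a standard Linux inspection command, not a Telco Cloud Automation step:

```shell
# Show huge-page totals, free pages, and the configured page size
# (Hugepagesize) as reported by the kernel:
grep Huge /proc/meminfo
```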

Most CPUs support 2 MB and 1 GB huge pages. Select the size and the number of huge pages in the Worker Node based on the memory footprint of the data plane CNF. Use 1 GB huge pages in the Worker Nodes whenever possible. When selecting the number of huge pages, you must also account for the memory reserved on the Worker Node.
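The sizing rule above can be sketched as a small calculation. All names and values below are illustrative assumptions, not Telco Cloud Automation parameters:

```shell
#!/bin/sh
# Illustrative huge-page sizing sketch; the figures are examples only.
NODE_MEM_GB=64        # total Worker Node memory
RESERVED_GB=8         # memory reserved for the OS, kubelet, and system pods
CNF_FOOTPRINT_GB=32   # huge-page memory required by the data plane CNF

# With 1 GB huge pages, one page backs 1 GB of CNF memory.
HUGEPAGES=$CNF_FOOTPRINT_GB

# Sanity check: huge pages plus reserved memory must fit in the node.
if [ $(( HUGEPAGES + RESERVED_GB )) -gt "$NODE_MEM_GB" ]; then
    echo "error: not enough memory for $HUGEPAGES x 1 GB huge pages" >&2
    exit 1
fi
echo "hugepages=$HUGEPAGES"
```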

When you configure 1 GB huge pages through Telco Cloud Automation, the ESXi 1 GB large page configuration is automatically enabled on the Worker Node. ESXi large pages are required to back huge pages within Worker Nodes. For more information about ESXi 1 GB large pages, see Backing Guest vRAM with 1GB Pages in the vSphere documentation.

Note:

The following node customization is required for configuring huge pages in the Worker Nodes:

  • Define the kernel arguments default_hugepagesz, hugepagesz, and hugepages with the recommended values under kernel: kernel_args

For more information, see Node Customization in the Telco Cloud Automation documentation.
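As an illustration, the kernel arguments named in the note above would appear on the Worker Node boot command line as follows. The page count is an example only; size it for your CNF's footprint:

```shell
# Example boot arguments produced by the huge-page node customization
# (16 x 1 GB pages shown for illustration):
default_hugepagesz=1G hugepagesz=1G hugepages=16
```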

DPDK Kernel Module

Different Poll Mode Drivers (PMDs) require different kernel drivers to work properly. Depending on the PMD being used, a corresponding kernel driver must be loaded, and network ports must be bound to that driver. You can use the following two DPDK kernel modules to bind a network device to DPDK: vfio-pci and igb_uio.
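Binding is typically done with the `dpdk-devbind.py` utility that ships with DPDK. The PCI address below is a placeholder; substitute the address of your own port:

```shell
# Show which devices are bound to kernel drivers and which to DPDK:
dpdk-devbind.py --status

# Load the VFIO driver and bind an example port to it:
modprobe vfio-pci
dpdk-devbind.py --bind=vfio-pci 0000:03:00.0
```

Note that in a Telco Cloud Automation deployment the binding is declared through node customization (see the note below) rather than run by hand; the commands above only illustrate the underlying mechanism.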

Virtual Function I/O (VFIO): VFIO is a robust and secure driver that relies on IOMMU protection. The IOMMU allows the OS to encapsulate I/O devices in their own virtual memory domains, restricting their Direct Memory Access (DMA) to specific memory pages. The OS uses the IOMMU to protect itself against buggy drivers and malicious or errant devices, increasing security on bare-metal kernels by providing protection against I/O attacks.

vIOMMU is an emulation of the IOMMU for virtual guests. vIOMMU provides IOMMU capability to guests, but it degrades the throughput of I/O-intensive workloads. This performance degradation depends heavily on how the guest OS programs the IOMMU and how the guest driver issues DMA requests, which varies with the OS type and the NIC driver.

If no IOMMU is available on the system, you can still use VFIO by loading it with an additional module parameter: modprobe vfio enable_unsafe_noiommu_mode=1

Alternatively, you can enable this option on a pre-loaded kernel module: echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode

Userspace I/O (UIO): In cases where VFIO cannot be used, you can use alternative drivers. For virtual function (VF) devices that do not support legacy interrupts, use the igb_uio module as a substitute for VFIO.
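Loading igb_uio differs slightly from VFIO because it is an out-of-tree module (built from the dpdk-kmods repository) that depends on the in-tree uio module. The PCI address below is a placeholder:

```shell
# Load the in-tree uio module, then the separately built igb_uio module:
modprobe uio
insmod igb_uio.ko

# Bind the example VF to igb_uio:
dpdk-devbind.py --bind=igb_uio 0000:03:00.1
```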

If security is a major concern, consider the tradeoff between performance and security. However, if security is not the primary concern and the data plane CNF has no mandatory requirement for vfio-pci, igb_uio can be an alternative driver for accelerating the data plane.

Note:

The following node customizations are required for configuring the DPDK kernel module in the Worker Nodes:

  • Define the kernel module dpdk and its version under kernel: kernel_modules.

  • Define the package pciutils under custom_packages.

  • For the network devices that need to be bound to DPDK, define dpdkBinding with the value igb_uio under network: devices.

  • If you are using vfio-pci, define the following additional kernel arguments under kernel: kernel_args:

    • intel_iommu with the value on

    • iommu with the value pt

For more information, see Supported DPDK and Kernel Versions and Node Customization in the Telco Cloud Automation documentation.
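As an illustration, the additional vfio-pci kernel arguments listed in the note above would appear on the Worker Node boot command line as follows:

```shell
# Enable the Intel IOMMU and use pass-through mode to avoid DMA remapping
# overhead for host-owned devices:
intel_iommu=on iommu=pt
```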