Data Plane Development Kit (DPDK) is the preferred data path framework for most data plane intensive CNFs. DPDK achieves fast packet processing by polling network devices from user space, bypassing the kernel network stack.

The Worker Nodes in the Node Pool that is dedicated to the data plane CNF require customizations to support data plane acceleration with DPDK. These customizations are applied during Node Customization in Telco Cloud Automation.

Huge Pages

Huge pages are mandatory for DPDK. A huge page is a block of memory that is larger than a regular memory page. Data plane intensive CNFs have a large memory footprint. Huge pages significantly reduce the number of page table entries that the Memory Management Unit (MMU) manages for the data plane CNFs. As a result, the number of page faults and Translation Lookaside Buffer (TLB) misses is reduced significantly, and the performance of the data plane CNFs improves.
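As a rough illustration of the reduction in page table entries, consider a hypothetical 16 GB data plane memory footprint (the footprint value is an assumption for this sketch, not a recommendation):

```python
# Illustrative only: page table entries needed to map a hypothetical
# 16 GB data plane footprint at each page size.
GIB = 1024 ** 3
footprint = 16 * GIB  # assumed CNF memory footprint

for label, page_size in [("4 KB", 4 * 1024), ("2 MB", 2 * 1024 ** 2), ("1 GB", GIB)]:
    print(f"{label}: {footprint // page_size} pages")
# → 4 KB: 4194304 pages
# → 2 MB: 8192 pages
# → 1 GB: 16 pages
```

Mapping the same footprint with 1 GB pages instead of 4 KB pages shrinks the number of entries by six orders of magnitude, which is why TLB misses drop so sharply.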

Most CPUs support 2 MB and 1 GB huge pages. Select the size and the number of huge pages in the Worker Node based on the memory footprint of the data plane CNF. Use 1 GB huge pages in the Worker Nodes whenever possible. When selecting the number of huge pages, also account for the memory reserved on the Worker Node.
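As a hedged sizing sketch with hypothetical numbers: if a Worker Node has 64 GB of memory and roughly 8 GB must stay reserved for the OS, kubelet, and non-DPDK pods (both figures are assumptions for illustration), the 1 GB huge page count might be derived as:

```python
# Hypothetical sizing sketch; the real reservation depends on the Worker
# Node flavor and on what else runs on the node (OS, kubelet, other pods).
node_memory_gb = 64       # assumed total Worker Node memory
reserved_gb = 8           # assumed OS + kubelet + non-DPDK reservation
hugepage_size_gb = 1      # 1 GB huge pages

hugepages = (node_memory_gb - reserved_gb) // hugepage_size_gb
print(hugepages)  # → 56
```

Memory backed by huge pages is not available to regular workloads, so over-allocating huge pages can starve the rest of the node.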

When you configure 1 GB huge pages through Telco Cloud Automation, the ESXi 1 GB large page configuration is automatically enabled for the Worker Node. ESXi large pages are required to back huge pages within Worker Nodes. For more information about ESXi 1 GB large pages, see Backing Guest vRAM with 1GB Pages in the vSphere documentation.

Note:

The following node customization is required for configuring huge pages in the Worker Nodes:

  • Define the kernel arguments default_hugepagesz, hugepagesz, and hugepages with the recommended values under kernel: kernel_args.

For more information, see Node Customization in the Telco Cloud Automation documentation.
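As a sketch, the huge page kernel arguments might appear as follows in the node customization. The exact schema and the hugepages count of 16 here are illustrative assumptions; verify the format against the Node Customization documentation for your Telco Cloud Automation release.

```yaml
# Illustrative node customization fragment; key names follow the
# customization described above, and all values are placeholders.
kernel:
  kernel_args:
    - key: default_hugepagesz
      value: 1G
    - key: hugepagesz
      value: 1G
    - key: hugepages
      value: "16"
```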

DPDK Kernel Module

You can use either of two DPDK kernel modules to bind a network device to DPDK: vfio-pci and igb_uio. Unless the data plane CNF has a mandatory requirement for vfio-pci, use igb_uio for accelerating the data plane.

vfio-pci uses the Input-Output Memory Management Unit (IOMMU) to provide a more secure userspace driver environment, but it notably reduces performance for medium-sized and large data packets. When you customize the Worker Nodes to use vfio-pci, vIOMMU is also enabled on those Worker Nodes. vIOMMU is an emulation of the IOMMU, and the emulation overhead impacts performance. Hence, if the security features that vfio-pci offers through vIOMMU are not critical, use igb_uio to achieve the best data plane performance.

Note:

The following node customizations are required for configuring the DPDK kernel module in the Worker Nodes:

  • Define the kernel module dpdk and its version under kernel: kernel_modules.

  • Define the package pciutils under custom_packages.

  • For the network devices that need to be bound to DPDK, define dpdkBinding with the value igb_uio under network: devices.

  • If you are using vfio-pci, define the following additional kernel arguments under kernel: kernel_args:

    • intel_iommu with the value on

    • iommu with the value pt

For more information, see Supported DPDK and Kernel Versions and Node Customization in the Telco Cloud Automation documentation.
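Taken together, the customizations above might be sketched as the following fragment. The schema layout, DPDK version, and device name are illustrative assumptions; consult Node Customization in the Telco Cloud Automation documentation for the exact format and Supported DPDK and Kernel Versions for valid version values.

```yaml
# Illustrative fragment only; the version and device name are placeholders.
kernel:
  kernel_args:
    # Required only when the devices are bound with vfio-pci
    # (in that case, dpdkBinding below would be vfio-pci instead):
    - key: intel_iommu
      value: "on"
    - key: iommu
      value: pt
  kernel_modules:
    - name: dpdk
      version: "19.11.1"   # placeholder; use a supported DPDK version
custom_packages:
  - name: pciutils
network:
  devices:
    - deviceName: ethernet0   # placeholder data plane device
      dpdkBinding: igb_uio
```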