The Data Plane Development Kit (DPDK) comprises a set of libraries that boosts packet processing in data plane applications.
The following are the packet processing functions in the SE data path (a conceptual sketch follows the list):
Server health monitoring
TCP/IP stack - TCP for all flows
SSL termination
Protocol header parsing
Server load balancing for SIP/L4/L7 application profiles
Sending and receiving packets
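The stages above can be pictured as a per-packet pipeline inside the proxy. The following Python sketch is purely conceptual: the function names and the Packet type are assumptions made for this illustration and are not part of the SE's actual DPDK/C implementation; health monitoring and the receive/transmit path are covered separately below.

"""Conceptual sketch of the SE data-path stages listed above (illustrative only).
The stage functions are hypothetical stand-ins, not the SE's actual DPDK/C
implementation; they only show the order of operations for one packet."""

from dataclasses import dataclass


@dataclass
class Packet:
    five_tuple: tuple   # (src_ip, src_port, dst_ip, dst_port, proto)
    payload: bytes


def tcp_reassemble(pkt):
    # TCP/IP stack: terminate TCP for the flow and hand up a byte stream.
    return pkt.payload


def ssl_terminate(data):
    # SSL termination: decrypt the client-side record (a no-op in this sketch).
    return data


def parse_protocol(data):
    # Protocol header parsing (HTTP shown as the simplest case).
    request_line = data.split(b"\r\n", 1)[0].decode(errors="replace")
    return {"request_line": request_line}


def load_balance(request, servers):
    # Server load balancing: pick a back-end server per the application profile
    # (a trivial deterministic hash is used here purely for illustration).
    return servers[sum(request["request_line"].encode()) % len(servers)]


def process(pkt, servers):
    stream = tcp_reassemble(pkt)
    plaintext = ssl_terminate(stream)
    request = parse_protocol(plaintext)
    return load_balance(request, servers)   # the result is then sent to the chosen server


if __name__ == "__main__":
    pkt = Packet(("10.0.0.1", 40000, "10.0.0.10", 443, "TCP"),
                 b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n")
    print(process(pkt, ["server-1", "server-2", "server-3"]))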
SE System Logical Architecture
The following are the features of each component in the SE system logical architecture:
- Work Process
The following are the three processes in the Service Engine:
SE-DP
SE-Agent
SE-Log-Agent
SE-DP: The role of this process can be proxy-alone, dispatcher-alone, or a proxy-dispatcher combination.
Proxy-alone: Full TCP/IP and L4/L7 processing, with the policies defined for each application/virtual service.
Dispatcher-alone:
Processes the Rx of the (v)NIC and distributes flows across the proxy services through per-proxy lock-less RxQs, based on the current load of each proxy service (see the sketch after these process descriptions).
Manages the reception and transmission of packets through the NIC.
Polls the proxy TxQs and transmits the packets to the NIC.
Proxy-dispatcher: Acts as both a proxy and a dispatcher, depending on the configuration and the resources available.
SE-Agent: This acts as a configuration and metrics agent for the Controller. It can run on any available core.
SE-Log-Agent: This maintains a queue for logs. It batches the logs from all SE processes and sends them to the log manager in the Controller.
SE-Log-Agent can run on any available core.
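The dispatcher behavior described above can be modeled conceptually as follows. This Python sketch is illustrative only: the Proxy and Dispatcher classes, the deque-based queues, and the active-flow counter are assumptions made for the sketch, not the SE's actual lock-less ring implementation. It shows a new flow being pinned to the least-loaded proxy and its packets being placed on that proxy's dedicated receive queue.

"""Conceptual sketch of dispatcher-to-proxy flow distribution (illustrative only).
A real SE uses per-proxy lock-less rings; the Proxy/Dispatcher classes and the
deque-based queue here are assumptions made for the sketch."""

from collections import deque


class Proxy:
    def __init__(self, proxy_id):
        self.proxy_id = proxy_id
        self.rx_queue = deque()   # stands in for the per-proxy lock-less RxQ
        self.active_flows = 0     # load metric consulted by the dispatcher


class Dispatcher:
    def __init__(self, proxies):
        self.proxies = proxies
        self.flow_to_proxy = {}   # flow table: 5-tuple -> owning proxy

    def dispatch(self, five_tuple, packet):
        proxy = self.flow_to_proxy.get(five_tuple)
        if proxy is None:
            # New flow: pin it to the least-loaded proxy service.
            proxy = min(self.proxies, key=lambda p: p.active_flows)
            proxy.active_flows += 1
            self.flow_to_proxy[five_tuple] = proxy
        proxy.rx_queue.append(packet)   # single producer per queue in this model
        return proxy


if __name__ == "__main__":
    proxies = [Proxy(i) for i in range(4)]
    dispatcher = Dispatcher(proxies)
    flow = ("192.0.2.1", 12345, "203.0.113.10", 443, "TCP")
    chosen = dispatcher.dispatch(flow, b"SYN")
    print("flow pinned to proxy", chosen.proxy_id)

In this single-dispatcher model each per-proxy queue has exactly one producer and one consumer, which is the property that allows the real RxQ to be lock-less.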
- Flow Table
This table stores relevant information about flows and maintains the flow-to-proxy-service mapping.
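A minimal model of such a flow table is sketched below in Python (illustrative only; the FlowEntry fields and the key-normalization helper are assumptions, not the SE's actual flow record layout). The point it illustrates is that both directions of a connection resolve to the same entry, so every packet of a flow reaches the proxy service that owns it.

"""Conceptual sketch of a flow table (illustrative only); the FlowEntry fields
and key normalization are assumptions, not the SE's actual flow record."""

from dataclasses import dataclass, field
import time


def flow_key(src_ip, src_port, dst_ip, dst_port, proto):
    # Normalize so that both directions of a connection map to the same entry.
    a, b = (src_ip, src_port), (dst_ip, dst_port)
    return (proto,) + (a + b if a <= b else b + a)


@dataclass
class FlowEntry:
    proxy_id: int                                           # flow-to-proxy-service mapping
    created: float = field(default_factory=time.monotonic)  # flow creation time
    packets: int = 0                                         # example of per-flow information


class FlowTable:
    def __init__(self):
        self._entries = {}

    def lookup_or_create(self, key, proxy_id):
        entry = self._entries.get(key)
        if entry is None:
            entry = self._entries[key] = FlowEntry(proxy_id)
        entry.packets += 1
        return entry


if __name__ == "__main__":
    table = FlowTable()
    k1 = flow_key("192.0.2.1", 40000, "203.0.113.10", 443, "TCP")   # client -> virtual service
    k2 = flow_key("203.0.113.10", 443, "192.0.2.1", 40000, "TCP")   # virtual service -> client
    table.lookup_or_create(k1, proxy_id=2)
    entry = table.lookup_or_create(k2, proxy_id=2)
    print(entry.proxy_id, entry.packets)   # 2 2: both directions share one entry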
Based on the resources available, the Service Engine configures an optimum number of dispatchers. You can override this by using Service Engine group properties. Multiple dispatching schemes are supported, based on the ownership and usage of the Network Interface Cards (NICs):
A single dispatcher process owning and accessing all the NICs.
Ownership of NICs distributed among a configured number of dispatchers.
Multi-queue configuration, where all dispatcher cores poll one or more NIC queue pairs, but with a mutually exclusive se_dp-to-queue-pair mapping.
The remaining instances act as proxies. The combination of NICs and dispatchers determines the Packets Per Second (PPS) that an SE can handle. The CPU speed determines the maximum data plane performance (CPS/RPS/TPS/throughput) of a single core, and this performance scales linearly with the number of cores in an SE. You can dynamically increase the SE's proxy power without a reboot. A subset of the se_dp processes is active in handling the traffic flows; the remaining se_dp processes are not selected to handle new flows. All the dispatcher cores are also selected from this active subset of processes (see the sketch after the configuration example below).
The active number of se_dp processes can be specified using the SE group property max_num_se_dps. As a run-time property, it can be increased without a reboot. However, if the number is decreased, the change does not take effect until the SE is rebooted.
The following is a configuration example:
[admin:ctr2]: serviceenginegroup> max_num_se_dps
INTEGER 1-128    Configures the maximum number of se_dp processes that handles traffic. If not configured, defaults to the number of CPUs on the SE.
[admin:ctr2]: serviceenginegroup> max_num_se_dps 2
[admin:ctr2]: serviceenginegroup> where | grep max_num
| max_num_se_dps | 2 |
[admin:ctr2]: serviceenginegroup>
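The effect of max_num_se_dps and the dispatching schemes described above can be illustrated with a short Python sketch. This is conceptual only: the even round-robin mapping of queue pairs to dispatcher cores and the way the active subset is chosen are assumptions made for the illustration, not the SE's actual placement algorithm.

"""Conceptual sketch: active se_dp subset and dispatcher-to-queue-pair mapping
(illustrative only; not the SE's actual placement algorithm)."""


def plan_se_dp(total_se_dps, max_num_se_dps, num_dispatchers, nic_queue_pairs):
    active = list(range(min(max_num_se_dps, total_se_dps)))   # se_dp instances handling traffic
    idle = list(range(len(active), total_se_dps))             # not selected for new flows
    dispatchers = active[:num_dispatchers]                    # dispatchers come from the active subset
    proxies = active[num_dispatchers:]                        # remaining active instances act as proxies

    # Mutually exclusive se_dp-to-queue-pair mapping: round-robin the NIC
    # queue pairs over the dispatcher cores so no queue pair is shared.
    qp_map = {d: [] for d in dispatchers}
    for qp in range(nic_queue_pairs):
        qp_map[dispatchers[qp % len(dispatchers)]].append(qp)
    return {"dispatchers": qp_map, "proxies": proxies, "idle": idle}


if __name__ == "__main__":
    # 8 CPUs, max_num_se_dps = 6, 2 dispatcher cores, 4 NIC queue pairs.
    print(plan_se_dp(total_se_dps=8, max_num_se_dps=6, num_dispatchers=2, nic_queue_pairs=4))
    # {'dispatchers': {0: [0, 2], 1: [1, 3]}, 'proxies': [2, 3, 4, 5], 'idle': [6, 7]}

In this example, the first six se_dp processes handle traffic, the two dispatchers poll mutually exclusive queue pairs, and the last two processes are not selected for new flows.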
Tracking CPU Usage
CPU usage is intensive in the following cases:
Proxy:
SSL Termination
HTTP Policies
Network Security Policies
WAF
Dispatcher:
High PPS
High Throughput
Small Packets (for instance, DNS)
Packet Flow from Hypervisor to Guest Virtual Machine
- SR-IOV
Single Root I/O Virtualization (SR-IOV) assigns a part of the physical port (PF, Physical Function) resources to the guest operating system. A Virtual Function (VF) is directly mapped as the vNIC of the guest VM, and the guest VM needs to implement the specific VF driver.
SR-IOV is supported on CSP and OpenStack no-access deployments.
For more information on SR-IOV, see SR-IOV with VLAN and NSX Advanced Load Balancer (OpenStack No-Access) Integration in the DPDK Overview section of the Installing Avi Load Balancer in OpenStack topic in the VMware Avi Load Balancer Installation Guide.
- Virtual Switch
The virtual switch within the hypervisor implements L2 switching functionality and forwards traffic to each guest virtual machine's vNIC. The virtual switch maps a VLAN to a vNIC, or terminates overlay networks and maps the overlay segment ID to a vNIC.
Note: AWS/Azure clouds have implemented the full virtual switch and overlay termination within the physical NIC, so network packets bypass the hypervisor.
In these cases, as a VF is directly mapped to the vNIC of the guest virtual machine, the guest virtual machine needs to implement the specific VF driver.
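The mapping performed by the virtual switch can be pictured as a lookup from a VLAN ID or an overlay segment ID to a guest vNIC. The Python sketch below is illustrative only; the class and field names are assumptions, and a real virtual switch also performs MAC learning and other functions not shown here.

"""Conceptual sketch of virtual-switch forwarding to guest vNICs (illustrative only)."""


class VirtualSwitch:
    def __init__(self):
        self.vlan_to_vnic = {}      # VLAN ID -> guest vNIC
        self.segment_to_vnic = {}   # overlay segment ID (for example, a VXLAN VNI) -> guest vNIC

    def forward(self, frame):
        # A VLAN-tagged frame is switched by its VLAN ID; an encapsulated frame
        # is first terminated (decapsulated) and then switched by its segment ID.
        if "vlan" in frame:
            return self.vlan_to_vnic.get(frame["vlan"])
        if "segment_id" in frame:
            return self.segment_to_vnic.get(frame["segment_id"])
        return None


if __name__ == "__main__":
    vswitch = VirtualSwitch()
    vswitch.vlan_to_vnic[100] = "se-vm-eth1"
    vswitch.segment_to_vnic[5001] = "se-vm-eth2"
    print(vswitch.forward({"vlan": 100}))          # se-vm-eth1
    print(vswitch.forward({"segment_id": 5001}))   # se-vm-eth2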
VLAN Interfaces and VRFs
- VLAN
A VLAN interface is a logical interface that can be configured with an IP address. It acts as a child interface of the parent vNIC interface. VLAN interfaces can also be created on port channels/bonds.
- VRF Context
A VRF identifies a virtual routing and forwarding domain. Every VRF has its own routing table within the SE. Similar to a physical interface, a VLAN interface can be moved into a VRF. The IP subnet of the VLAN interface becomes part of the VRF and its routing table. A packet with a VLAN tag is processed within that VRF context. Interfaces in two different VRF contexts can have overlapping IP addresses.
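The isolation provided by VRF contexts can be illustrated with a short Python sketch (conceptual only; the VrfContext class and its methods are assumptions made for the illustration). Each VRF keeps its own routing table, so the same subnet can exist in two VRFs without conflict, and the VLAN a packet arrives on determines which table is consulted.

"""Conceptual sketch of per-VRF routing tables (illustrative only)."""

import ipaddress


class VrfContext:
    def __init__(self, name):
        self.name = name
        self.routes = {}          # prefix -> connected interface / next hop

    def add_connected(self, prefix, interface):
        self.routes[ipaddress.ip_network(prefix)] = interface

    def lookup(self, ip):
        addr = ipaddress.ip_address(ip)
        # Longest-prefix match within this VRF only.
        matches = [p for p in self.routes if addr in p]
        return self.routes[max(matches, key=lambda p: p.prefixlen)] if matches else None


if __name__ == "__main__":
    # Two VRFs with overlapping IP subnets: no conflict, because each VLAN
    # interface (and therefore each tagged packet) maps to exactly one VRF.
    vrf_a = VrfContext("tenant-a")     # for example, the VLAN 100 interface lives here
    vrf_b = VrfContext("tenant-b")     # for example, the VLAN 200 interface lives here
    vrf_a.add_connected("10.10.0.0/24", "eth0.100")
    vrf_b.add_connected("10.10.0.0/24", "eth0.200")
    print(vrf_a.lookup("10.10.0.5"), vrf_b.lookup("10.10.0.5"))   # eth0.100 eth0.200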
Health Monitor
Health monitors run in the data path within the proxy, as synchronous operations along with packet processing. Health monitors are shared across all the proxy cores and hence scale linearly with the number of cores in the SE.
For instance, ten virtual services with five servers in a pool per virtual service and one health monitor per server results in 50 health monitors across all the virtual services. A six-core SE with a dedicated dispatcher will have five proxies; each proxy runs 10 health monitors, and the health monitor status is maintained in shared memory across all the proxies.
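The arithmetic in this example can be reproduced with a short Python sketch (illustrative only; the even split of monitors across proxies is an assumption of the illustration):

"""Worked version of the health-monitor example above (illustrative only)."""

virtual_services = 10
servers_per_pool = 5
monitors_per_server = 1

total_monitors = virtual_services * servers_per_pool * monitors_per_server   # 50

se_cores = 6
dispatcher_cores = 1                       # dedicated dispatcher
proxy_cores = se_cores - dispatcher_cores  # 5 proxies

# Health monitors are shared across proxy cores; here they are split evenly.
monitors_per_proxy = total_monitors // proxy_cores                           # 10

print(f"{total_monitors} health monitors across {proxy_cores} proxies "
      f"= {monitors_per_proxy} per proxy")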
A custom external health monitor runs as a separate process within the SE, and the script provides the health monitor status to the proxy.
A health check from one virtual service to another virtual service is not possible when the virtual services are placed in the same SE group.
DHCP on Datapath Interfaces
The Dynamic Host Configuration Protocol (DHCP) mode is supported on datapath interfaces (regular interfaces/bonds) in bare-metal/LSC Clouds. However, it can also be enabled from the Controller GUI.
You can enable DHCP from the Controller using the following command sequence:
configure serviceengine <serviceengine-name>
Identify the desired data_vnics index (i), then enable DHCP on it:
data_vnics index <i>
dhcp_enabled
save
save
This enables DHCP on the desired interface.
To deactivate DHCP on a particular data_vnic, replace dhcp_enabled with no dhcp_enabled in the above command sequence.
If DHCP is turned on for unmanaged/unconnected interfaces, it can slow down the SE stop sequence, and the SE can get restarted by the Controller.