This section describes hyper-threading, vCPU pinning, NICs in the NUMA node, and core sibling information inside the DU Worker node.
Hyper-threading
When hyper-threading is enabled, each physical core presents two logical processors (pCPUs). The following table shows the core-to-pCPU mapping:
| Core | pCPUs |
|---|---|
| Core 0 | pCPU 0 and pCPU 1 |
| Core 1 | pCPU 2 and pCPU 3 |
| Core 2 | pCPU 4 and pCPU 5 |
| Core *n* | pCPU 2*n* and pCPU 2*n*+1 |
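On a Linux host, one way to confirm this core-to-pCPU mapping is to run lscpu in parseable mode. This is a minimal sketch; it assumes hyper-threading is enabled in the host firmware:

```sh
# List each logical CPU (pCPU) with its physical core.
# With hyper-threading enabled, each CORE value appears for two CPU values.
lscpu --parse=CPU,CORE | grep -v '^#'
```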
Creating and Pinning 40 vCPU DU Worker Nodes
The following architectural diagram shows how the vCPUs of a Worker node can be pinned to pCPUs when no other VMs are running on the host. Pinning is enabled through the isNumaConfigNeeded flag in the CSAR file. This flag must be set to true.
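The exact layout of the CSAR descriptor is platform specific; in the fragment below, only the isNumaConfigNeeded flag comes from this section, and the surrounding keys are illustrative:

```yaml
# Sketch of a CSAR descriptor fragment. The surrounding structure is
# illustrative; only the isNumaConfigNeeded flag is documented here.
node_templates:
  du_worker_node:                # hypothetical node name
    properties:
      configurable_properties:
        isNumaConfigNeeded: true # must be true to pin vCPUs to pCPUs
```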
NICs in NUMA
When the DU worker node requests I/O devices through the CSAR, it can either use I/O devices connected to the same NUMA node or share I/O devices with a different NUMA node. This behavior is configured using the isSharedAcrossNuma flag in the CSAR file. If the flag is set to true, the worker node can source I/O devices from a different NUMA node. If the flag is set to false or is not present, the worker node sources I/O devices connected to the same NUMA node to which it is pinned.
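As with the pinning flag, where isSharedAcrossNuma sits in the descriptor depends on your CSAR structure; this sketch shows the two flags side by side, with illustrative surrounding keys:

```yaml
# Sketch of a CSAR descriptor fragment (surrounding keys are illustrative).
node_templates:
  du_worker_node:                 # hypothetical node name
    properties:
      configurable_properties:
        isNumaConfigNeeded: true  # pin the worker node to a NUMA node
        isSharedAcrossNuma: false # false or absent: use I/O devices on the
                                  # same NUMA node as the pinned vCPUs
```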
Core Sibling Information Inside DU Worker Node
To expose hyper-threading details inside the VM, enable the vHT feature through the CSAR. After vHT is enabled, the Worker node VM can see the hyper-threading sibling relationships of its vCPUs. The lscpu -e -a command then displays each vCPU with its associated core and socket, and the lscpu command also shows the threads-per-core value.
To view the sibling threads of a specific vCPU, read its sysfs topology entry (replace `<x>` with the vCPU number):

```sh
cat /sys/devices/system/cpu/cpu<x>/topology/thread_siblings_list
```
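To inspect every vCPU at once, the following loop is a minimal sketch that assumes only the standard Linux sysfs topology layout shown above:

```sh
# Print the hyper-threading sibling list for each vCPU visible in the VM.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    printf '%s: %s\n' "${cpu##*/}" "$(cat "$cpu/topology/thread_siblings_list")"
done
```

With vHT enabled, each vCPU should list a sibling that shares its core; without vHT, each vCPU appears alone in its sibling list.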
For information about enabling vHT, see Enable Virtual Hyper-Threading.