CNFs are installed in the Tanzu Kubernetes Workload Cluster within the Telco Cloud Platform Tier. When you deploy the Workload Cluster, consider the performance requirements of data plane intensive CNFs and assign the maximum available compute resources to the Worker Nodes.

Apply the following best practices when you create Workload Cluster templates using Telco Cloud Automation. For information about creating a Workload Cluster template, see the Telco Cloud Automation documentation.

Dedicated node pool for data plane intensive CNFs: Create a dedicated Node Pool for hosting data plane CNFs. Data plane CNFs have a distinct set of infrastructure requirements, such as CPU pinning for data plane containers, SR-IOV, and DPDK. A Node Pool groups the worker nodes that can support these requirements. For more information, see Add a Node Pool in the Telco Cloud Automation documentation.
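How CNF Pods are steered onto the dedicated pool depends on how its Worker Nodes are labeled. As a minimal sketch, assuming the dedicated pool's Worker Nodes carry a hypothetical label nodepool: dataplane and a matching taint (neither is a Telco Cloud Automation default), a data plane Pod could target the pool as follows:

    apiVersion: v1
    kind: Pod
    metadata:
      name: dataplane-cnf-example
    spec:
      # Hypothetical label applied to the Worker Nodes of the dedicated node pool
      nodeSelector:
        nodepool: dataplane
      # Hypothetical taint that keeps general-purpose workloads off the pool
      tolerations:
      - key: dataplane
        operator: Exists
        effect: NoSchedule
      containers:
      - name: dp-app
        image: registry.example.com/dp-app:1.0   # placeholder image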

Worker Node Size: The dedicated node pool for the data plane CNFs must use the maximum compute resources available in vSphere. Assign CPU cores to a Worker Node so that the entire Worker Node fits within a single NUMA node. This avoids NUMA misalignment for the Worker Node and the associated performance impact.

Choose the size of the Worker Node based on the requirements of the data plane CNF. When determining the size of the Worker Node, consider the following factors:

  • The size of a single data plane Pod

  • The number of interfaces on the data plane Pod

  • The number of data plane Pods that can run on a single Worker Node

Worker Nodes themselves consume CPU resources, so using many small Worker Nodes increases the CPU overhead. Network Equipment Providers (NEPs) commonly deploy a single large Worker Node per NUMA node for workloads with high throughput requirements.
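For example, on a hypothetical ESXi host with two NUMA nodes of 24 physical cores each, deploying one Worker Node of up to 24 vCPUs per NUMA node (or slightly fewer, to leave headroom for the hypervisor) keeps each Worker Node's vCPUs, memory, and SR-IOV devices local to one NUMA node, whereas splitting the same capacity across many small Worker Nodes multiplies the per-node operating system and kubelet overhead.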

The number of CPUs assigned to the Worker Node also depends on the CPU pinning requirements for the data plane CNF. For more information, see CPU pinning for Data Plane CNFs.
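With the Static CPU Manager policy described in the next item, the kubelet grants exclusive CPUs only to containers in the Guaranteed QoS class that request whole CPUs (requests equal to limits). A minimal sketch of such a data plane container spec follows; the image, CPU count, and hugepage sizes are placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: dp-pinned-example
    spec:
      containers:
      - name: dp-worker
        image: registry.example.com/dp-worker:1.0   # placeholder image
        resources:
          # Integer CPU count with requests equal to limits -> Guaranteed QoS,
          # eligible for exclusive CPU assignment under the static policy.
          requests:
            cpu: "8"
            memory: 8Gi
            hugepages-1Gi: 4Gi   # typical for DPDK data planes; requires hugepages on the node
          limits:
            cpu: "8"
            memory: 8Gi
            hugepages-1Gi: 4Gi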

Node Pool CPU Manager policy: To enable static CPU reservation on the Worker Nodes, set the CPU Manager policy to 'Static'. This setting is required so that containers can be granted exclusive CPU affinity on the Worker Node CPUs.
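On the Worker Nodes, this policy corresponds to the kubelet's cpuManagerPolicy setting, which Telco Cloud Automation applies when you select the policy in the node pool template. As a minimal sketch, the equivalent KubeletConfiguration fragment looks like the following; the CPU reservation values are placeholders, but the static policy does require a non-zero CPU reservation for system daemons:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # Static policy lets Guaranteed QoS containers with integer CPU requests
    # receive exclusive CPUs on the node.
    cpuManagerPolicy: static
    # The static policy requires an explicit, non-zero CPU reservation for
    # system daemons; the values below are placeholders.
    kubeReserved:
      cpu: 500m
    systemReserved:
      cpu: 500m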