This section outlines the CNF requirements and how CNFs can be onboarded and instantiated across the Telco Cloud.
Helm Charts
Helm is the de facto package manager for Kubernetes. CNF vendors use Helm to simplify container packaging. With Helm charts, dependencies between CNFs are handled in the formats agreed upon by the upstream community, allowing Telco operators to consume CNF packages in a declarative and easy-to-operate manner. With proper version management, Helm charts also simplify workload updates and inventory control.
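For illustration, a minimal Chart.yaml for a hypothetical CNF package might look like the following; the chart name, versions, and repository URL are placeholders rather than values from this document:

```yaml
# Minimal Chart.yaml for a hypothetical CNF package; all names and
# versions are illustrative placeholders.
apiVersion: v2
name: sample-upf
description: Example user-plane CNF packaged as a Helm chart
type: application
version: 1.2.0        # chart version, used for updates and inventory control
appVersion: "23.1"    # version of the CNF software that the chart deploys
dependencies:
  - name: common-init
    version: ">=0.4.0"
    repository: "oci://registry.example.com/telco-charts"
```

Pinning the chart version and declaring dependencies explicitly is what makes the declarative update and inventory workflow possible.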
A Helm repository is a required component of the Telco Cloud Platform. Production CNF Helm charts must be stored centrally and be accessible to the Tanzu Kubernetes clusters. To reduce the number of management endpoints, the Helm repository must work seamlessly with container images: the container registry must be capable of serving both container images and Helm charts.
The ChartMuseum feature of Harbor is scheduled for deprecation. Telco Cloud Automation now supports both OCI-based charts and ChartMuseum-based charts.
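The difference between the two formats is visible in how a chart source is addressed. The sketch below is illustrative; the URLs and key names are placeholders:

```yaml
# Illustrative chart source references; URLs and key names are placeholders.
chart_sources:
  chartmuseum: https://harbor.example.com/chartrepo/library   # index.yaml-based HTTP repository
  oci: oci://harbor.example.com/library                       # chart stored as an OCI artifact
```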
CNF Cloud Service Archive (CSAR) Design
Network Function (NF) Helm charts are uploaded as a catalog offering, wrapped in an ETSI-compliant TOSCA YAML descriptor packaged as a Cloud Service Archive (CSAR). The descriptor file defines the structure and composition of the NF and supporting artifacts such as the Helm chart version, the provider, and a set of pre-instantiation jobs.
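As a hedged sketch, a VDU entry in such a descriptor might reference its Helm chart along the following lines; the node type and property names are representative of ETSI-style TOSCA rather than the exact Telco Cloud Automation schema:

```yaml
# Illustrative TOSCA node template referencing a Helm chart; the node type
# and property names are representative, not an exact TCA schema.
node_templates:
  sample-upf:
    type: tosca.nodes.nfv.Vdu.Compute
    properties:
      name: sample-upf
      description: User-plane CNF delivered as a Helm chart
      chartName: sample-upf
      chartVersion: 1.2.0      # must match the chart stored in the Helm repository
      helmVersion: v3
```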
Telco Cloud Network Functions have a set of prerequisite configurations, such as node sizing and base features, on the underlying Kubernetes cluster. Telco Cloud Automation also supports Dynamic Infrastructure Provisioning (DIP). These requirements are also defined in the Network Function CSAR. The features supported by the CSAR extensions include:
Interface configuration and addition, along with DPDK binding
NUMA Alignment of vCPUs and Virtual Functions
Latency Sensitivity
Custom operating system package installations
Full GRUB configuration
The following table outlines those extensions in detail:
Component | Type | Description
---|---|---
node_components | kernel_type | Type and version of the Linux kernel. Based on the kernel version and type, Telco Cloud Automation downloads and installs the Linux kernel from the VMware Photon Linux repository (or an air-gapped server) during Kubernetes node customization.
node_components | kernel_args | Kernel boot parameters, required for CPU isolation. Parameters are free-form text strings with the following syntax: Key is the name of the parameter; Value is the value corresponding to the key. The Value field is optional for kernel parameters that do not require a value.
node_components | kernel_modules | Kernel modules, specific to DPDK. When DPDK host binding is required, the name of the DPDK module and the relevant version must be specified.
node_components | custom_packages | Custom packages such as lxcfs, tuned, and pci-utils. Telco Cloud Automation downloads and installs them from the VMware Photon Linux repository during node customization.
network | deviceType | Type of network device. For example, vmxnet3.
network | resourceName | The resource label in the Network Attachment Definition (NAD) that this device maps to.
network | dpdkBinding | The PCI driver that this network device must use. Specify "igb_uio" or "vfio" when DPDK is used, or an equivalent driver depending on the vendor.
network | count | Number of adapters required.
caas_components | | CaaS components define the CNI, CSI, and Helm components for the Kubernetes cluster.
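To make the table concrete, the fragment below sketches how these extensions might be expressed inside a CSAR. The key names follow the table, but the values and exact nesting are illustrative:

```yaml
# Illustrative CSAR fragment exercising the extensions described above;
# package names, versions, and the exact nesting are examples only.
infra_requirements:
  node_components:
    kernel:
      kernel_type:
        name: linux-rt                 # real-time kernel from the Photon repository
        version: 4.19.132-1.ph3
      kernel_args:
        - key: isolcpus
          value: "2-19"                # CPU isolation for the data plane
        - key: nosoftlockup            # value-less parameter: Key only
      kernel_modules:
        - name: dpdk
          version: "19.11.1"           # enables igb_uio host binding
    custom_packages:
      - name: tuned
      - name: pci-utils
  network:
    - deviceType: vmxnet3
      resourceName: netDU              # label referenced by the NAD below
      dpdkBinding: igb_uio
      count: 2
```

The resourceName is expected to match the resource label advertised by a Network Attachment Definition on the cluster; a hypothetical NAD for the device above might look like this:

```yaml
# Hypothetical NetworkAttachmentDefinition matching resourceName netDU.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: netdu-nad
  annotations:
    k8s.v1.cni.cncf.io/resourceName: intel.com/netDU
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "sriov",
      "name": "netdu-nad"
    }'
```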
VMware Telco Cloud Automation supports rolling upgrades of network functions. The following options are available for network function lifecycle operations:
Upgrade: Updates the entire NF or NS to a new catalog version. This option can be used for minor updates to a CNF where only a single Helm chart component changes.
In Telco Cloud Automation 2.3, this model also supports adding and removing VDUs (individual Helm charts) from the Network Descriptor.
Upgrades and updates depend on a newer revision of the CSAR: a new CSAR with the corresponding updates (such as Helm charts and release numbers) is supplied. If the Vendor and Product name match, the newer CSARs are available for selection from the catalog during the NF upgrade process.
Upgrade Package: Updates an instantiated NF to the new catalog version without making any changes to the application.
The upgrade process links an existing instantiated NF to an updated version of the catalog entry for that NF. The process then allows new workflows that are present in the new catalog to be run. This model can be beneficial in cases where workflows or migrations must run before the upgrade.
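As a hedged illustration of the matching rule described above, an upgrade CSAR's metadata might differ from the original only in its version and chart references; the field names here are representative rather than the exact descriptor schema:

```yaml
# Illustrative descriptor metadata for an upgrade CSAR; field names are
# representative. Provider and product name must match the original CSAR
# for it to be offered as an upgrade target.
metadata:
  provider: AcmeTelco          # unchanged from the original CSAR
  product_name: sample-upf     # unchanged from the original CSAR
  descriptor_version: "2.0"    # bumped from "1.0"
  chart_version: 1.3.0         # new Helm chart release carried by this CSAR
```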
User-Plane and RAN CNF Workload Considerations
The Telco Cloud supports both control-plane functions, such as the SMF and AMF, and user-plane functions, such as the DU and UPF.
The main considerations for deploying user-plane functions include NUMA alignment, CPU pinning, and the use of SR-IOV.
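For context, a user-plane pod typically requests pinned CPUs and SR-IOV virtual functions through a Guaranteed QoS specification, as in the sketch below. The image, resource name, and network annotation are placeholders that reuse the illustrative NAD shown earlier:

```yaml
# Hypothetical user-plane pod: equal requests and limits give Guaranteed
# QoS, which enables CPU pinning by the kubelet CPU manager; the SR-IOV
# resource name and the NAD annotation are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: sample-upf-0
  annotations:
    k8s.v1.cni.cncf.io/networks: netdu-nad
spec:
  containers:
    - name: upf
      image: registry.example.com/cnf/sample-upf:23.1
      resources:
        requests:
          cpu: "8"
          memory: 8Gi
          intel.com/netDU: "2"       # two SR-IOV virtual functions
        limits:
          cpu: "8"
          memory: 8Gi
          intel.com/netDU: "2"
```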
Telco Cloud Automation supports multiple options for NUMA Alignment and CPU pinning configurations that can be leveraged to meet the requirements of a network function.
NUMA Alignment: This option ensures that NICs, memory, and CPUs are aligned on the same NUMA node. When used without any other options, it also ensures that CPUs are pinned in the format of pCore + hyperthread, with exclusive affinity granted to the pinned CPUs.
This implies that a 20 vCPU Tanzu Kubernetes Grid VM consumes 10 physical cores and 10 hyperthreads. The pinning is static and determined by the VM Operator. This option also reserves 100% of CPU and memory.
Latency Sensitivity: Setting Latency Sensitivity to High adjusts the way ESXi schedules the VM. In this case, pinning is achieved by ESXi without the need for static pinning.
This implies that a 20 vCPU Tanzu Kubernetes Grid VM consumes 20 physical cores. When Latency Sensitivity is set to High, scheduling on the sibling hyperthread of each physical core is prohibited. To achieve this behavior, the Telco Cloud Automation platform must configure a 100% CPU reservation on the VM.
NUMA Alignment and Latency Sensitivity can be configured at the same time. In this case, CPU pinning is performed based on the Latency Sensitivity option: vCPUs are scheduled only on physical cores, and the associated hyperthreads are blocked from scheduling. The Latency Sensitivity option also ensures NUMA alignment.
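A hedged sketch of how these options might be requested follows; the key names are placeholders for however the platform expresses these settings, not the exact Telco Cloud Automation schema:

```yaml
# Illustrative node profile; key names are placeholders, not the exact
# TCA schema.
node_profile:
  numa_alignment: true            # align NICs, memory, and vCPUs to one NUMA node
  latency_sensitivity: high       # ESXi-managed pinning; blocks sibling hyperthreads
  cpu_reservation_percent: 100    # required for high latency sensitivity
  memory_reservation_percent: 100
```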
With vSphere 8.0, Telco Cloud 3.0 introduces a new SMT threading feature. This feature allows CPU pinning to occur in the same way as with NUMA alignment; however, rather than Telco Cloud Automation statically pinning vCPUs to logical cores, the ESXi scheduler ensures the correct placement and execution of cores.
As part of vSphere 8.0, the Virtual Hyperthreading (vHT) feature is introduced in VM hardware version 20. This feature allows ESXi to dynamically provision Latency Sensitivity with hyperthreading enabled.
vHT is an enhancement to the Latency Sensitivity High feature. With Latency Sensitivity set to High and vHT activated, specific applications benefit from hyperthreading awareness and achieve performance gains. This model also helps prevent cache thrashing.
Without vHT activated on ESXi, each virtual CPU (vCPU) is equivalent to a single non-hyperthreaded core available to the guest operating system. With vHT activated, each guest vCPU is treated as a single hyperthread of a virtual core (vCore).
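As an illustration of the sizing difference, the comparison below records the same 20 vCPU VM under both models. The YAML layout is invented for illustration; sched.cpu.latencySensitivity is the documented VMX advanced setting, and the VM hardware version names follow vSphere conventions:

```yaml
# Illustrative sizing comparison; the layout is invented, while the VMX key
# and hardware version names are documented vSphere conventions.
without_vht:
  hardwareVersion: vmx-19              # pre-vHT hardware version
  advancedSettings:
    sched.cpu.latencySensitivity: high
  vcpus: 20                            # each vCPU occupies a full physical core: 20 cores
with_vht:
  hardwareVersion: vmx-20              # vHT requires VM hardware version 20
  advancedSettings:
    sched.cpu.latencySensitivity: high
  vcpus: 20                            # vCPUs pair into 10 vCores: 10 physical cores
```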
For RAN workloads such as the DU, the HW support option upgrades the VM hardware version to the latest release available on the target vCenter. This ensures that additional real-time scheduling options are available when the RAN DU workload runs.