You can modify the node components and CaaS components in TOSCA for different Kubernetes VIMs.

To support various network functions, the Worker nodes may require customization through TOSCA. These customizations include kernel-related changes, custom package installations, network adapter, SRIOV, and DPDK configurations, and CPU pinning on the Worker nodes where you deploy the network functions.

Node Components

  • Kernel: The Kernel definition takes multiple arguments that you can customize.
    • kernel_type: Kernel type for the worker nodes. The kernel types are:
      • Linux RealTime (linux-rt)
      • Linux Non-RealTime (linux)
      The kernel type depends on the workload requirements of the network function. The required Linux version is downloaded from the TDNF repository (VMware Photon Linux) during customization.
      kernel_type
      infra_requirements:
        node_components:
          kernel:
            kernel_type:
              name: linux-rt
              version: 4.19.132-1.ph3
    • kernel_args: Kernel boot parameters that tune the behavior of the kernel, such as isolating CPUs. These parameters are free-form strings, defined as a 'key' (the name of the parameter) and an optional 'value' (its argument, if any).
      kernel_args
      infra_requirements:
        node_components:
          kernel:
            kernel_args:
              - key: nosoftlockup
              - key: noswap
              - key: softlockup_panic
                value: 0
              - key: pcie_aspm.policy
                value: performance
              - key: intel_idle.max_cstate
                value: 1
              - key: mce
                value: ignore_ce
              - key: fsck.mode
                value: force
      Huge Pages
      infra_requirements:
        node_components:
          kernel:
            kernel_args:
              - key: default_hugepagesz
                value: 1G
              - key: hugepagesz
                value: 1G
              - key: hugepages
                value: 17
      Note:
      i.   Maintain this order of arguments.
      ii.  The nodes restart to apply these values.
      iii. The supported hugepagesz values are 2M and 1G.
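      Per the note above, 2M pages are also supported. A sketch of the same three arguments for 2 MB huge pages (the page count here is illustrative, not prescribed by TCA; size it for your workload):

      ```yaml
      infra_requirements:
        node_components:
          kernel:
            kernel_args:
              - key: default_hugepagesz
                value: 2M
              - key: hugepagesz
                value: 2M
              - key: hugepages     # illustrative count; size for your workload
                value: 1024
      ```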
      isolcpus
      infra_requirements:
        node_components:
          kernel:
            kernel_args:
              - key: isolcpus
                value: 2-{{tca.node.vmNumCPUs}}
      Note: TCA replaces {{tca.node.vmNumCPUs}} with the number of vCPUs configured on the worker node.
    • kernel_modules: Installs kernel modules on the Worker nodes, for example, dpdk, sctp, and vrf.
      Note: When configuring dpdk, ensure that the corresponding pciutils package is specified under custom_packages.
      dpdk
      infra_requirements:
        node_components:
          kernel:
            kernel_modules:
              - name: dpdk
                version: 19.11.1

      For details on supported DPDK versions, see Supported DPDK and Kernel Versions.
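      The other modules mentioned above are declared the same way. A minimal sketch for sctp (illustrative; whether a version must be pinned depends on the module and on what the TDNF repository provides):

      ```yaml
      infra_requirements:
        node_components:
          kernel:
            kernel_modules:
              - name: sctp   # illustrative; pin a version from the TDNF repository if required
      ```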

    • custom_packages: Custom packages include lxcfs, tuned, pciutils, and linuxptp. The required packages are downloaded from the TDNF repository (VMware Photon Linux) during customization.
      custom_packages
      infra_requirements:
        node_components:
          custom_packages:
             - name: pciutils
               version: 3.6.2-1.ph3
             - name: tuned
               version: 2.13.0-3.ph3
             - name: linuxptp
               version: 2.0-1.ph3
      Note: Ensure that these packages are available in the VMware TDNF repository.
  • additional_config: Enables additional customization on the node, for example, tuned.
    Note: When configuring tuned, ensure that the corresponding tuned package is specified under custom_packages.
    tuned
    infra_requirements:
      node_components:
        additional_config:
          - name: tuned   # <--- for setting tuned
            value: '[{"name":"custom-profile"}]'    # <--- list of profile names to activate.
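    The profile named here must exist on the node; the file_injection mechanism below can place it. A minimal sketch of what such a tuned.conf might contain (the profile contents are illustrative, not prescribed by TCA):

    ```
    # Injected to /etc/tuned/custom-profile/tuned.conf (illustrative contents)
    [main]
    summary=Custom worker-node tuning profile
    include=cpu-partitioning
    ```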
  • file_injection: Injects configuration files into the nodes.
    file_injection
    infra_requirements:
      node_components:
        file_injection:
          - source: file
            content: ../Artifacts/scripts/custom-tuned-profile.txt  # <-- File path embedded in the CSAR
            path: /etc/tuned/custom-profile/tuned.conf  # <-- Target location of the configuration file; must align with the profile name.
          - source: file
            content: ../Artifacts/scripts/cpu-partitioning-variables.txt  # <-- File path embedded in the CSAR
            path: /etc/tuned/cpu-partitioning-variables.conf  # <-- Supporting file for the main configuration file.
  • isNumaConfigNeeded: This feature finds a host and a NUMA node that can fit the VM with the given requirements, and assigns the VM to it. It is useful for high-performance Network Functions that require high throughput, such as DU. It sets the CPU and memory reservations on the Worker node to maximum, and pins the Worker node CPUs to the ESXi CPUs.
    isNumaConfigNeeded
    infra_requirements:
       node_components:
         isNumaConfigNeeded:  [true | false]
  • latency_sensitivity: For Network Functions that require a high-performance, low-latency profile, such as DU, CU-CP, CU-UP, and UPF. These functions require the latency sensitivity of the node to be set on vSphere.
    Note: The node restarts after customization.
    latency_sensitivity
    infra_requirements:
      node_components:
        latency_sensitivity: [high | low]
  • passthrough_devices: Adds PCI passthrough devices, for example, ptp.
    Note: When specifying passthrough device configurations, ensure that the corresponding linuxptp package is specified under custom_packages.
    passthrough_devices
     infra_requirements:
       node_components:
         passthrough_devices:
           - device_type: NIC
             pf_group: ptp
    Note: Currently, these values are hardcoded.
  • network: Creates network adapters on the nodes. For SRIOV, the given resource name becomes an allocatable resource on the node.
    Network
    infra_requirements:
      node_components:
        network:
          devices:
            - deviceType:    # <-- Network adapter type [sriov]
              networkName:   # <-- Input label through which the user provides the network during NF instantiation. See the section below for how to define this input.
              resourceName:  # <-- The label under which the device is exposed on the Kubernetes node.
              dpdkBinding:   # <-- The driver this device should use. If not specified, the default OS driver is used.
              count: 3       # <-- Number of adapters required.
    Note:
        1. vmxnet3 is not supported in TCA.
        2. For 'networkName', see the section below.
        3. Supported dpdkBinding values:
           - igb_uio
           - vfio-pci
        4. Ensure that the 'pciutils' custom package and the 'dpdk' kernel module are specified.
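    A filled-in sketch of the template above, using the F1U label from the instantiation example in this section (the resource name and count are illustrative):

    ```yaml
    infra_requirements:
      node_components:
        network:
          devices:
            - deviceType: sriov
              networkName: F1U           # matched by the VnfcAdditionalConfigurableProperties label
              resourceName: sriov_f1u    # illustrative; exposed as an allocatable resource on the node
              dpdkBinding: vfio-pci      # or igb_uio; omit to use the default OS driver
              count: 3
    ```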
For SRIOV network adapters, when instantiating, add the following:
VnfAdditionalConfigurableProperties
tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties.lmn:
  derived_from: tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties
  properties:
    F1U: # <--- label provided in infra_requirements.node_components.network.devices.networkName
      required: true
      propertyName: F1U # <--- label provided in infra_requirements.node_components.network.devices.networkName
      description: ''
      default: ''
      type: string
      format: network        # <-- Displays the network drop-down menu
 
 helm-abc:
    type: tosca.nodes.nfv.Vdu.Compute.Helm.helm-abc
    properties:
      :
      configurable_properties:
        additional_vnfc_configurable_properties:
          type: tosca.datatypes.nfv.VnfcAdditionalConfigurableProperties.lmn
          :
          :
          F1U: '' # <-- Same label provided above

CaaS Components

You can configure CaaS components such as CNI, CSI, and Helm for Kubernetes. You can install CNI plugins on the Worker nodes during CNF instantiation. Provide CNIs such as SRIOV in the Cluster Configuration in the CaaS Infrastructure.
infra_requirements:
  caas_components:
    - name: sriov
      type: cni
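
Putting the pieces together, a complete infra_requirements section combining the node and CaaS customizations described above might look like this (the values are drawn from the examples in this section and are illustrative, not a recommended configuration):

```yaml
infra_requirements:
  node_components:
    kernel:
      kernel_type:
        name: linux-rt
        version: 4.19.132-1.ph3
      kernel_args:
        - key: nosoftlockup
        - key: isolcpus
          value: 2-{{tca.node.vmNumCPUs}}
      kernel_modules:
        - name: dpdk
          version: 19.11.1
    custom_packages:
      - name: pciutils
        version: 3.6.2-1.ph3
    isNumaConfigNeeded: true
    latency_sensitivity: high
  caas_components:
    - name: sriov
      type: cni
```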