You can customize the Kubernetes cluster with custom infrastructure requirements, such as custom packages, network adapters, and kernels, using the infrastructure designer. These customizations are available only for CNF components.
You can use VMware Telco Cloud Automation to customize the infrastructure requirements of the node pools. You define these customizations through the user interface, and the system adds them to the corresponding TOSCA file. For more details on the TOSCA components, see TOSCA Components.
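For example, when you add a vmxnet3 network adapter and kernel customizations in the designer, the descriptor receives entries roughly of the shape sketched below. This is an illustrative sketch only: the key names are indicative rather than an authoritative schema, and the exact structure that VMware Telco Cloud Automation generates can vary between releases, so let the designer write these entries rather than editing the CSAR by hand.

```yaml
# Illustrative sketch only - key names are indicative, not an authoritative schema.
node_components:
  kernel:
    kernel_type:
      name: linux-rt            # example kernel type; the designer selects this for you
      version: 4.19.132-1.ph3   # hypothetical version string
    kernel_args:
      - key: default_hugepagesz
        value: 1G
      - key: hugepagesz
        value: 1G
      - key: hugepages
        value: "8"
  network:
    devices:
      - deviceType: vmxnet3
        networkName: management   # hypothetical network name
        interfaceName: eth1
        count: 1
        isSharedAcrossNuma: false
```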
Prerequisites
Procedure
- Log in to the VMware Telco Cloud Automation web interface.
- Select Catalog > Network Function.
- Click Onboard on the Network Function Catalog page.
- Select Design Network Function Descriptor on the Onboard Network Function page. Add the following details:
- Name: Name of the network package.
- Tags: Associated tags for the network package. Select the key and value from the drop-down menu.
- Network Function: Select the type of network function. For infrastructure designer, select Cloud Native Network Function.
- Click Design.
- On the Network Function Designer page, click Infrastructure Requirements.
- To design the infrastructure, enable Configure Infra Requirements.
- Network Adapter - Click Add to add a new network adapter. Enter the following details:
- Device Type - Select the value from the drop-down menu.
- Network Name - Enter the name of the network.
- Resource Name - Enter the name of the resource.
- (Optional) Target Driver - Select the value from the drop-down menu.
- Interface Name - Name of the interface for the vmxnet3 device. This property is displayed when you select vmxnet3 in Device Type.
- (Optional) Count - Enter the number of adapters.
- PF Group - Enter the name of the PF group for which you want to add the network adapter.
- Shared Across NUMA - Select the button to enable or disable sharing of the devices across NUMA.
Note: Shared Across NUMA is applicable only when NUMA Alignments is enabled.
- Additional Properties - This property is displayed when you select vmxnet3 in Device Type.
- CTX Per Dev - To configure the Multiple Context functionality for vNIC traffic managed through Enhanced Datapath mode, select the value from the drop-down menu. For more details, see CTX Per Dev. For more details on Enhanced Datapath settings, see Configuration to Support Enhanced Data Path Support.
Note: When you select Target Driver, the system automatically adds the required DPDK in Kernel Modules and dependent custom packages in the Custom Packages.
- PCI Pass Through - Click Add to enter the PTP or PCI Devices.
Note: When you add a PCI Pass Through device, the system automatically adds the required Linux-rt in Kernel Type, DPDK in Kernel Modules, and dependent custom packages in the Custom Packages.
- For the PTP devices, add the following information.
Note:
- To use the PTP PHC services, enable PCI passthrough on PF0 on the ESXi server when the E810 card is configured with multiple PF groups.
- To use the PTP VF services, disable PCI passthrough on PF0 and enable SRIOV on both PFs. The E810 card supports one VF as PTP, while the other VFs serve as SRIOV VF NICs for network traffic.
- Device Type - Select whether to add a PTP device or a NIC device. To use a physical device, select NIC. To use a virtual device, select PTP from the drop-down menu.
Note: To upgrade the device type from PTP PF to PTP VF, delete the existing PTP PF device and add the new PTP VF device. Do not change the device type from NIC to PTP directly in the CSAR file.
- Shared Across NUMA - Select the button to enable or disable sharing of the devices across NUMA.
Note: Shared Across NUMA is applicable only when NUMA Alignments is enabled.
- PF Group - Enter the name of the PF group for which you want to add the PCI Pass Through device.
- Enter the details for phc2sys and ptp4l files.
- Source - To provide input through file, select File from the drop-down menu. To provide input during network function instantiation, select Input from the drop-down menu.
Note: To select File from the Source menu, you must first upload the required file to the Artifacts folder available under the Resources tab.
- Content - Name of the file. The value is automatically displayed based on the Source value.
- Click Add to confirm.
- For the PCI Device, add the following information (an indicative descriptor sketch follows these steps).
Note:
- Before adding the ACC100 Adapter PCI device, ensure that the ACC100 Adapter is enabled on the VMware ESXi server. For details, see Configuring the ESXi Driver for the Intel vRAN Accelerator ACC100 Adapter.
- You can add the ACC100 Adapter on workload clusters with Kubernetes version 1.20.5, 1.19.9, or 1.18.17. For workload cluster upgrade, see Upgrade Management Kubernetes Cluster Version.
- Shared Across NUMA - Select the button to enable or disable sharing of the devices across NUMA.
Note: Shared Across NUMA is applicable only when NUMA Alignments is enabled.
- Enter the name of the resource in Resource Name.
- Select the Target Driver from the drop-down menu.
Note: Based on the Target Driver, the system automatically adds the required Linux in Kernel Type, and the pciutils and DPDK modules.
- PF Group - Enter the name of the PF group for which you want to add the PCI device.
- Add the number of PCI devices in Count.
- Click Add to confirm.
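As an indicative example, the PCI Pass Through entries added above might surface in the descriptor roughly as sketched below. The key names are hypothetical placeholders for illustration only; rely on the designer to produce the exact structure for your release.

```yaml
# Hypothetical sketch of PCI Pass Through entries - not the authoritative schema.
network:
  devices:
    - deviceType: nic              # physical pass-through device
      resourceName: sriov_net_a    # hypothetical resource name
      dpdkBinding: igb_uio         # selecting a Target Driver also pulls in DPDK and pciutils
      pf_group: pf0
      count: 2
      isSharedAcrossNuma: false
    - deviceType: ptp              # PTP device: PF0 pass-through or a PTP VF, per the notes above
      pf_group: pf0
      ptp4l:
        source: file                                 # or "input" to supply at instantiation
        content: ../Artifacts/scripts/ptp4l.conf     # file uploaded under the Artifacts folder
      phc2sys:
        source: file
        content: ../Artifacts/scripts/phc2sys.conf
```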
- Kernel
- Kernel Type - Select the Name and Version from the drop-down menu.
- Kernel Arguments - Click Add to add a new kernel argument. Add the Key and Value in respective text box.
Note: For hugepagesz and default_hugepagesz, you can select the value from the drop-down menu. For other arguments, you can specify the key and value in the respective text boxes.
- Kernel Modules - Not editable. The system populates this field automatically.
- Custom Packages - Click Add to add a new custom kernel package. Add the Name and Version in the respective text box.
- Files - You can add a file for injection. Click Add, select the file from the drop-down menu in Content, and in the Path text box, enter the file path on the target system where the file is uploaded. An indicative descriptor sketch follows the upload steps below.
Note: To view the file in the drop-down menu, you must upload the file in the scripts folder. You can upload only .json, .xml, and .conf files.
- Click the Resources tab.
- Click the > icon corresponding to the root folder.
- Click the > icon corresponding to the Artifacts folder.
- Click the + icon corresponding to the scripts folder.
- Click Choose Files and select the file to upload.
- Click Upload to upload the selected file.
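As an indicative example of the kernel modules, custom packages, and file injection entries described above, the descriptor might carry content of the following shape. The key names are placeholders for illustration, and the file paths assume a file uploaded to the Artifacts/scripts folder as shown in the steps above.

```yaml
# Indicative sketch only - key names may differ in your release.
kernel:
  kernel_modules:
    - name: dpdk                  # populated automatically; not editable in the designer
  custom_packages:
    - name: pciutils
      version: "3.6.2"            # hypothetical package version
file_injection:
  - source: file
    content: ../Artifacts/scripts/custom.conf   # hypothetical file uploaded to the scripts folder
    path: /etc/custom.conf                      # target path on the node
```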
- Services - You can add stalld and syslog-ng services.
- To add the stalld service, select stalld from the drop-down menu.
- To add the syslog-ng service, select syslog-ng from the drop-down menu. When you select syslog-ng, the Add Service Config Files pop-up appears. Select the required configuration files for the syslog-ng service.
- (Optional) Tuned Profiles - Enter the name of the tuned profile. You can add multiple tuned profiles separated by commas.
Note: When you add a tuned profile, the system adds the tuned package to the Custom Packages.
- NUMA Alignments - Click the corresponding button to enable or disable the support for NUMA alignments.
- Latency Sensitivity - You can set the latency value for high-performance profiles. Select the value from the drop-down menu; the available values are High and Low. The default value is Normal.
- I/O MMU Enabled - Click the corresponding button to enable or disable the I/O MMU.
- Upgrade VM Hardware Version - Click the corresponding button to enable or disable upgrading the hardware version of the virtual machine.
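Taken together, the services, tuned profiles, and performance settings in this step might be recorded in the descriptor along the lines sketched below. This is purely illustrative, with hypothetical key names; the designer writes the authoritative entries for you.

```yaml
# Purely illustrative - hypothetical key names, not the generated schema.
services:
  - name: stalld
  - name: syslog-ng
    config_files:
      - ../Artifacts/scripts/syslog-ng.conf   # selected in the Add Service Config Files pop-up
tuned_profiles:
  - realtime                     # profiles entered as a comma-separated list in the UI
numa_alignment: true
latency_sensitivity: high        # the default value is normal
io_mmu_enabled: true
upgrade_vm_hardware_version: true
```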