
VMware Telco Cloud Platform RAN 5.0 | 30 SEP 2024

Check for additions and updates to these release notes.

What's New

VMware Telco Cloud Platform - RAN™ Release 5.0 includes key features and enhancements across the carrier-grade RAN stack to improve performance, operability, scalability, and user experience.

  • Lifecycle Management Enhancements: Workflow Hub’s IaaS and CaaS automation capabilities streamline the platform’s Lifecycle Management (LCM) for network functions, significantly reducing operational complexities.

  • Platform Operability Enhancements: CaaS features, such as Kubernetes cluster rehoming and adopting the latest Kubernetes version with a telco-grade lifecycle, reduce the need for frequent system upgrades. These features minimize the risk of network quality degradation and ensure operational consistency for CaaS clusters.

Carrier-Grade Kubernetes Infrastructure

VMware Telco Cloud Platform RAN 5.0 supports the following Kubernetes versions as part of VMware Tanzu Kubernetes Grid 2.5.2. This release also supports Tanzu Kubernetes Grid 2.1.1.

Users can manage multiple Kubernetes versions on a horizontal platform, based on network function (NF) requirements.

  • 1.30.2

  • 1.29.6

  • 1.28.11

  • 1.27.15

  • 1.26.14

Important:
  • Kubernetes 1.30: If you deploy Kubernetes 1.30 as part of Telco Cloud Platform, it benefits from the telco-grade lifecycle support included in the Telco Cloud Platform bundle. This release significantly shortens the lag between upstream Kubernetes releases and their integration into Telco Cloud Platform, ensuring quick access to the latest features and enhancements.

For more information about lifecycle support for specific Kubernetes versions in a Tanzu Kubernetes Grid (TKG) release, see the VMware Tanzu Kubernetes Grid 2.5.x Release Notes.

For further information about support and extensions for Kubernetes versions, contact the Telco Cloud Platform RAN Product Management team and Broadcom Support.

For more information about bug fixes and other updates, see the VMware Tanzu Kubernetes Grid 2.5.x Release Notes.

Extended Operations and Automation

VMware Telco Cloud Automation 3.2 supports various features and enhancements:

  • CaaS Management Enhancements

    • Rehoming of a Kubernetes Cluster: Rehoming moves a workload cluster from one management cluster to another management cluster that runs the same Kubernetes version. With this release, rehoming is supported for Classy (ClusterClass-based) clusters.

    • Multi-TKG Support: A single Telco Cloud Platform RAN release can support multiple versions of Tanzu Kubernetes Grid (TKG), such as 2.5.2 and 2.1.1, each offering compatibility with different Kubernetes versions. This flexibility allows the platform to operate in a multi-vendor environment.

    • VLAN Sub-Interface Configuration for Secondary Interfaces: Allows CNF users to specify VLAN sub-interfaces as part of the Dynamic Infrastructure Policy. This configuration persists through CaaS LCM events such as scale, Machine Health Check (MHC) rebuild, and upgrade.

    • Advanced Network Configuration for Secondary Interfaces: Advanced network configurations, such as Generic Receive Offload (GRO), Generic Segmentation Offload (GSO), and ProxyARP, can be included in the Dynamic Infrastructure Policy defined by the NF user.

    • DHCP IP Auto-Release for Standard Kubernetes Cluster Upgrade: Reduces the size of the IP pool required for CaaS LCM in large cluster deployments by releasing unused DHCP IP addresses back to the pool.

    • UI Improvements for Cluster Fallout: Enhances the user experience by displaying troubleshooting hints on the TCA UI. This feature helps platform administrators and users take faster recovery actions when cluster issues occur.

    • Specifying NodePort for Prometheus: Allows users to specify a NodePort number during Prometheus deployment instead of using a randomly assigned port. This makes it easier to access and manage Prometheus metrics.

  • Infrastructure Automation Enhancements

    • IaaS Automation for ESXi Upgrade: By using TCA Workflow Hub, users can run workflows that automate the ESXi hypervisor upgrade as part of the Telco Cloud Platform infrastructure upgrade.

  • Harbor Container Registry: Supports Harbor version 2.10.2.

  • Certificate Observability Enhancements: TCA 3.2 now supports automatic monitoring of certificates for Active Directory endpoints.

  • Support for Intel CVL 4.5 Package: Integrates Intel’s latest Cloud Virtualization Layer (CVL) driver, featuring SyncE fallback and various bug fixes.

  • Support for Intel Logan Beach Card: Supports Intel Logan Beach NIC, offering a higher port count compared to the Westport Channel NIC.

  • Increased Virtual Functions (VFs): Increases the maximum number of VFs per VM from 64 to 128, enabling support for additional PCI functions in Telco Cloud Platform RAN deployments.

  • Dell XR8610t Validation for Telco Cloud Platform RAN: Validates support for the Dell XR8610t platform, which uses the Sapphire Rapids processor, making it suitable for Telco Cloud Platform RAN deployments.

For more information about these features and enhancements, see the VMware Telco Cloud Automation 3.2 Release Notes.
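As an illustration of the Prometheus NodePort enhancement listed above, a fixed NodePort expressed in plain Kubernetes terms looks like the Service sketch below. This is purely illustrative: in TCA the port is supplied through the Prometheus deployment options, and the selector and port values here are assumptions, not TCA-specific settings.

```yaml
# Illustrative Kubernetes Service pinning Prometheus to a fixed NodePort
# instead of a randomly assigned one. Names and ports are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: prometheus
spec:
  type: NodePort
  selector:
    app: prometheus
  ports:
    - port: 9090
      targetPort: 9090
      nodePort: 30090   # must fall within the cluster's NodePort range (default 30000-32767)
```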

Components

To download these components, see the VMware Telco Cloud Platform RAN Essentials 5.0 Product Downloads page.

Optional Add-On Components

Note: An additional license is required.

Deprecated Features

VMware deprecates the following components from Telco Cloud Platform RAN 5.0 onwards.

Note: Though these components are supported during the deprecation phase, we recommend that you upgrade them, as support will be removed in a future release.

  • VMware vSphere 7.x (ESXi and vCenter) is deprecated.

  • The following component versions will be removed in a future release:

    • Aria Suite 8.12, 8.13

    • Aria Operations 8.12, 8.13, 8.14

    • Aria Operations for Logs 8.12, 8.13, 8.14

    • Aria Operations for Networks 6.10

    • Aria Automation Orchestrator 8.12, 8.13, 8.14

  • Photon OS 3 is deprecated.

  • The following Tanzu Kubernetes Grid (TKG) versions will be removed in a future release.

    • TKG 2.1.1 with Kubernetes 1.24.10

    • TKG 2.3.1

  • Harbor Chartmuseum Charts

    • If you are using Harbor 2.8 or later versions, Harbor Chartmuseum charts are no longer supported for CNF LCM Operations.

    Note: Only OCI-based Helm charts are now supported.

Support for Backward Compatibility of CaaS Layer with IaaS Layer

VMware Telco Cloud Platform RAN supports backward compatibility of its CaaS layer components (Telco Cloud Automation and Tanzu Kubernetes Grid) with the IaaS layer component (vSphere) in earlier versions of Telco Cloud Platform RAN. With this feature, you can upgrade the CaaS layer components to their latest versions while using earlier versions of the IaaS layer component.

For more information, see Software Version Support and Interoperability in the Telco Cloud Automation Deployment Guide and Supported Features on Different VIM Types in the Telco Cloud Automation User Guide.

Resolved Issues

Note: For information about the entire list of resolved issues in each Telco Cloud Platform RAN component, see the corresponding component release notes.

  • ACC100 and vRAN Boost Accelerator Configuration Not Persistent After ESXi Reboot

    After rebooting the ESXi server or host, the ACC100 and vRAN Boost Accelerator configuration applied using the host profile does not persist.

    This issue is fixed.

    Solution: Use Intel vRAN Boost driver 2.1.0.134 for both the ACC100 and vRAN Boost Accelerator.

  • Applying Host Profile on ESXi Hosts Fails When Previous Version of ACC100 Driver or ibbd Tool is Installed

    If an old version (1.0.6, 1.0.7, or 1.0.8) of the ACC100 driver or ibbd tool is installed on an ESXi host, applying a host profile on that host fails with the following error:

    Config support for this device is not available

    During the vRB1 driver installation, this older version of the Accelerator driver or tool causes issues in configuring the Accelerator.

    Solution: Before applying the host profile, ensure that Intel vRB driver 2.1.0 or a later version is installed.

Known Issues

Note: For information about the entire list of known issues in each Telco Cloud Platform RAN component, see the corresponding component release notes.

  • Kernel-Taint Warning Occasionally Appears in dmesg Logs

    A kernel-taint warning occasionally appears in dmesg logs due to the vfio_pci_mmap_open stack trace. Once the kernel state is marked as tainted, the tainted state cannot be unset.

    Workaround: Reboot the worker node to clear the kernel-taint warning.
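For context, whether the kernel is currently tainted can be checked from procfs; a minimal sketch (this is the standard Linux interface, but the meaning of individual taint bits depends on the kernel version):

```shell
# Read the kernel taint bitmask; 0 means the kernel is not tainted.
# Falls back to 0 where /proc/sys/kernel/tainted is unavailable.
taint=$(cat /proc/sys/kernel/tainted 2>/dev/null || echo 0)
echo "kernel taint mask: $taint"
```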

  • PTP Fluctuations Might Occur on Remote Radio Units When Connected to Intel Westport Channel Card in LLS-C1 Configuration

    When Remote Radio Units (RRUs) are connected to specific ports (port0 and port1, or port2 and port3) of an Intel Westport Channel card, PTP fluctuations might occur on one or more RRUs.

    Workaround: Limit the number of RRUs per Intel Westport Channel card to two, using one of the following port combinations:

    • Port 0, Port 2

    • Port 0, Port 3

    • Port 1, Port 2

    • Port 1, Port 3

  • Turbostat Command Output Delayed by Two Minutes in Photon OS 5

    Due to the use of isolcpus in applications running on a worker node, the output of the turbostat command is delayed by two minutes in Photon OS 5.

    If a CPU is isolated and fully occupied by a real-time task, the state collector task of the turbostat utility might not get enough CPU resources to run, causing the delay in printing the output.

    This issue occurs in Telco Cloud Automation 3.1 (Telco Cloud Platform 4.0).

    Workaround: None

  • VF Initialization Occasionally Fails in vDU DPP0 Pods When Using DPDK-Compatible Drivers

    Application pods that use DPDK-compatible drivers, such as vfio_pci, might occasionally get stuck while waiting for the NIC device reset to complete. Hence, applications might take longer than expected to come online. In rare cases, application pods might remain stuck.

    This issue is observed with Intel E810 NICs.

    Workaround: Delete the stuck application pods, so they can come online faster when recreated.

  • vDU Application Pods Stuck in Unknown State for a Long Time

    Some vDU application pods might remain in an unknown state for a long time.

    This issue is associated with CNI plugin initialization during node reboot, when using containerd version 1.6.24 or later and kubelet v1.28.4+vmware.1 or later in TCA 3.1.

    Workaround: Restart the kubelet service on the worker node, or manually delete the pods in the unknown state so that Kubernetes can recreate them.
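The pod-deletion path of the workaround above can be scripted. The sketch below assumes kubectl access to the affected workload cluster and filters on the STATUS column of `kubectl get pods -A --no-headers` output:

```shell
# unknown_pods: read "NAMESPACE NAME READY STATUS RESTARTS AGE" lines on
# stdin and print "namespace/name" for pods whose STATUS is Unknown.
unknown_pods() {
  awk '$4 == "Unknown" { print $1 "/" $2 }'
}

# Usage (assumes kubectl access to the cluster):
#   kubectl get pods -A --no-headers | unknown_pods
#   # then delete each listed pod: kubectl delete pod <name> -n <namespace>
```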

  • Pods Mapped to SR-IOV VFs Appear in UnexpectedAdmissionError State After Worker Node Reboot

    When a worker node is rebooted, pods that are mapped to SR-IOV VFs are recreated and the old pods move to the UnexpectedAdmissionError state due to an upstream Kubernetes bug.

    Workaround: Before upgrading the cluster, manually delete the pods that are in the error state.

  • iptables-nft-sa Segfault Error Appears in dmesg Output When Pods Are Deployed or Recreated in TCA

    When pods are deployed or recreated in TCA 3.1 with the Calico add-on in use, an iptables-nft-sa segfault error might occur, causing coredump files to be created continuously on a worker node. This fills up the space available for coredump files, leaving no space for applications.

    Workaround: Clean up the coredump files manually.
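A sketch of the manual cleanup, assuming the default systemd coredump location (the directory path is an assumption; adjust it for the node image in use):

```shell
# clean_coredumps: delete core* files under the given directory to free
# the space consumed by repeated iptables-nft-sa crashes.
clean_coredumps() {
  dir="${1:-/var/lib/systemd/coredump}"   # assumed default location
  find "$dir" -type f -name 'core*' -delete
}

# Usage on the affected worker node:
#   clean_coredumps /var/lib/systemd/coredump
```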

  • VMware Aria Operations 8.16 Fails to Integrate with Tanzu Kubernetes Classy Standard Clusters 1.27 and 1.28 Running TLS 1.3

    VMware Aria Operations 8.16 fails to integrate with Tanzu Kubernetes Classy Standard Clusters (1.27 and 1.28) that are running Transport Layer Security (TLS) version 1.3. Hence, the Classy Standard Clusters 1.27 and 1.28 cannot be monitored.

    Note: TLS 1.3 is not supported in Aria Operations 8.16.

    Workaround: Follow one of these workarounds:

    • Upgrade to Aria Operations 8.18.

    • If you are using Aria Operations 8.16, change TLS 1.3 to TLS 1.2 on the Classy Standard clusters.

      1. Navigate to CaaS Infrastructure > Cluster Instances in the TCA UI.

      2. Select the cluster you want to modify, and add the following topology variable in the Edit Cluster Configuration > Configuration > Cluster Info section:

        • Key: Security

        • Value: minimumTLSProtocol: tls_1.2
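For illustration, the key/value pair set in step 2 would correspond to a topology variable along these lines. This is a sketch: the exact cluster-spec schema is defined by TCA, so treat any field name other than the key and value shown above as an assumption.

```yaml
# Hypothetical rendering of the TCA topology variable set in step 2.
variables:
  - name: Security
    value:
      minimumTLSProtocol: tls_1.2
```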

  • Tuned Daemon Fails to Start After VM Reboot

    Sometimes, after a virtual machine reboot, an error message might appear on the VM console indicating that the tuned daemon failed to start.

    When users log in to the VM and run commands such as tuned-adm active or systemctl status tuned, the tuned service does not start successfully. This issue can lead to performance deterioration.

    Workaround: Update the CSAR to restart the tuned service. For detailed instructions, see KB97434.

  • Workload Cluster Upgrade from 1.27 to 1.28 Fails with an Error

    Upgrading workload clusters from 1.27 to 1.28 fails with the following error:

    [CAPVResourceNotReady] MachineDeployment still provisioning

    Solution: If any of the Network Adapters require SR-IOV, ensure that the Upgrade Hardware Version setting is enabled.

    Note: VMs using SR-IOV require a minimum hardware version of 'vmx-17' for a successful upgrade.

  • Unable to Power On VMs Requiring 52 GB Memory on Low-Memory Servers (64 GB or Less)

    VMs requiring 52 GB memory or more cannot be powered on when running on low-memory servers (64 GB or less) with ESXi 8.0 U1.

    Workaround: Reduce the memory consumption in the 64 GB server by following these steps:

    1. Add the following settings in the ESXi host:

      • esxcli system settings kernel set -s maxVMs -v 6

      • esxcli system settings advanced set -o /Mem/MemMinFreePct -i 1

    2. Reboot the ESXi host.

    3. Re-instantiate the VM on the server.

  • Latency Spikes Might Occur for a Brief Period When Accessing Dell iDRAC Console Using USB Devices

    When the Dell iDRAC console is accessed using USB devices such as a keyboard or mouse, latency spikes might occur for a brief period.

    Workaround: None

    Note: Access the Dell iDRAC console only during the maintenance window of the ESXi host.

  • PTP Service Error Occurs When Instantiating a CSAR Without Specifying the iavf and ice Driver Details

    If you instantiate a CSAR without specifying the iavf and ice drivers in the /Definitions/VNFD.yaml file, a PTP service error occurs instead of a package error being indicated in Telco Cloud Automation.

    Workaround: Specify the iavf and ice driver details as described in KB90345.

  • Scheduling Latency Spikes Observed Rarely Over a Long Run on Servers Installed with ESXi

    On servers installed with ESXi, scheduling latency spikes exceeding 20 µs are rarely observed over a long run.

    Workaround: None

End of General Support Guidance

Broadcom Product Lifecycle Matrix outlines the End of Service (EoS) dates for Broadcom products. Lifecycle planning is required to keep each component of the VMware Telco Cloud Platform solution in a supported state. Plan the component updates and upgrades according to the EoS dates. To ensure that component versions are supported, you may need to update the Telco Cloud Platform solution to its latest maintenance release.

Broadcom pre-approval is required to use a product past its EoS date. To discuss the extended service of products, contact your Broadcom representative.

Support Resources

For additional support resources, see the VMware Telco Cloud Platform RAN documentation page.
