VMware Telco Cloud Platform 5.0 | 30 SEP 2024

Check for additions and updates to these release notes.

What's New

VMware Telco Cloud Platform Release 5.0 is a major milestone release, providing a holistic Telco Cloud Platform that addresses the needs of both Cloud-Native Network Functions (CNFs) and Virtual Network Functions (VNFs). It includes new features and enhancements across carrier-grade network function automation and orchestration, CaaS, and cloud infrastructure.

Additionally, Telco Cloud Platform introduces robust service assurance capabilities, ensuring optimal performance, reliability, and visibility across the network. This holistic approach enables operators to efficiently manage and scale their services while maintaining high standards of quality and user experience.

  • Lifecycle Management Enhancements: Workflow Hub’s IaaS and CaaS automation capabilities streamline the platform’s Lifecycle Management (LCM) for both VNFs and CNFs, significantly reducing operational complexities.

  • Platform Operability Enhancements: CaaS features, such as Kubernetes cluster rehoming and adopting the latest Kubernetes version with a telco-grade support lifecycle, reduce the need for frequent system upgrades. These features minimize the risk of network quality degradation and ensure operational consistency for CaaS clusters.

  • Horizontal Platform Enhancements: Supporting both VNFs and CNFs on a unified horizontal platform simplifies the network architecture and operations, improves resource utilization, and reduces overall expenses.

Carrier-Grade Kubernetes Infrastructure

VMware Telco Cloud Platform 5.0 supports the following Kubernetes versions as part of VMware Tanzu Kubernetes Grid 2.5.2. This release also supports Tanzu Kubernetes Grid 2.1.1.

Users can manage multiple Kubernetes versions on a horizontal platform, based on NF requirements.

  • 1.30.2

  • 1.29.6

  • 1.28.11    

  • 1.27.15    

  • 1.26.14

Important:
  • Kubernetes 1.30: If you deploy Kubernetes 1.30 as part of Telco Cloud Platform, it benefits from the telco-grade lifecycle support included in the Telco Cloud Platform bundle. This release significantly shortens the lag between upstream Kubernetes releases and their integration into Telco Cloud Platform, ensuring quick access to the latest features and enhancements.

For more information about lifecycle support for specific Kubernetes versions in a Tanzu Kubernetes Grid (TKG) release, see the VMware Tanzu Kubernetes Grid 2.5.x Release Notes.

For further information about support and extensions for Kubernetes versions, contact the Telco Cloud Platform Product Management team and Broadcom Support.

For more information about bug fixes and other updates, see the VMware Tanzu Kubernetes Grid 2.5.x Release Notes.

Carrier-Grade VNF and CNF Automation and Orchestration

VMware Telco Cloud Automation 3.2 supports various features and enhancements:

  • CaaS Management Enhancements

    • Rehoming of a Kubernetes Cluster: Rehoming involves moving a workload cluster from one management cluster to another management cluster that runs the same Kubernetes version. With this release, rehoming is now supported for Classy (ClusterClass-based) clusters.

    • Multi-TKG Support: A single Telco Cloud Platform release can support multiple versions of Tanzu Kubernetes Grid (TKG), such as 2.5.2 and 2.1.1, each offering compatibility with different Kubernetes versions. This flexibility allows the platform to operate in a multi-vendor environment.

    • VLAN Sub-Interface Configuration for Secondary Interfaces: Allows CNF users to specify VLAN sub-interfaces as part of the Dynamic Infrastructure Policy. This configuration persists through the CaaS LCM events, such as scale, MHC rebuild, upgrade, and so on.

    • Advanced network configuration for secondary interfaces: Advanced network configurations, such as Generic Receive Offload (GRO), Generic Segmentation Offload (GSO), ProxyARP, and so on, can be included in the Dynamic Infrastructure Policy defined by the NF user.

    • DHCP IP Auto-Release for Standard Kubernetes Cluster Upgrade: Reduces the size of the IP pool required for CaaS LCM in large cluster deployments, by releasing unused DHCP IP addresses back to the pool.

    • UI Improvements for Cluster Fallout: Enhances the user experience by displaying troubleshooting hints on the Telco Cloud Automation (TCA) UI. This feature helps platform administrators and users to take faster recovery actions when cluster issues occur.

    • Specifying NodePort for Prometheus: Allows users to specify a NodePort number during Prometheus deployment, instead of using a randomly assigned port, making Prometheus metrics easier to access and manage (see the verification sketch after this list).

  • Infrastructure Automation Enhancements

    • IaaS Automation for ESXi Upgrade: By using TCA Workflow Hub, users can run workflows that automate the ESXi hypervisor upgrade as part of the Telco Cloud Platform infrastructure upgrade.

  • Harbor Container Registry: Supports Harbor version 2.10.2.

  • Certificate Observability Enhancements: TCA 3.2 now supports automatic monitoring of certificates for Active Directory endpoints.
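
The NodePort specified during Prometheus deployment can be verified after the add-on is provisioned. The following is a minimal sketch; the namespace and service name (monitoring, prometheus-server) are assumptions and depend on your deployment values:

    # Confirm the NodePort assigned to the Prometheus service (namespace and service name are illustrative)
    kubectl -n monitoring get svc prometheus-server -o jsonpath='{.spec.ports[0].nodePort}'

    # Check that Prometheus answers on the fixed port from outside the cluster
    curl http://<worker-node-ip>:<node-port>/-/healthy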

For more information about these features and enhancements, see the VMware Telco Cloud Automation 3.2 Release Notes.

Carrier-Grade Resilient Networking and Security

VMware NSX 4.2 includes various new features and enhancements:

  • MPLS and DFS Support for EDP and Edge Nodes: Improves throughput for Multiprotocol Label Switching (MPLS) and Distributed File System (DFS) traffic.

  • Datapath Observability Enhancements: Introduces a new datapath monitoring capability through the API and UI. You can collect debug metrics and counters for each transport node and network segment, without logging into the ESX Transport Nodes. Also, the debug data in the datapath can be collected periodically.

  • Enhancements to Ethertypes Support: Enhances VDS support capabilities to forward traffic of any ethertype, ensuring that proprietary double VLAN-tagged frames (QinQ) are forwarded.

  • Enhanced Data Path Improvements: Provides better performance for port mirroring and multicast capabilities.

  • Improved Switch Flexibility: Allows VDS mode to change from Standard to Enhanced Datapath Standard or Enhanced Datapath Performance and vice-versa, without losing the current switch configuration. However, downtime occurs during the mode change.

  • Dual DPUs (HA): Supports High Availability (HA) configuration (Active / Standby), where the failure of one DPU does not impact the server host. If the active DPU fails, all the traffic handled by the active DPU fails over to the standby DPU.

  • Dual DPUs (non-HA): Supports non-HA configuration, where both DPUs are active and can be used without high availability. If one DPU fails, the traffic handled by that DPU is not protected and does not fail over to the second DPU.

  • NSX Certificate Management Enhancements: Provides operational ease through the revamped certificate management capabilities of NSX. Capabilities in the NSX UI include certificate replacement (single or multiple), certificate renewal, automatic notifications for expiring certificates, a revamped user experience, and so on.

  • Packet Capture with Trace in EDP Host Switch: Allows you to trace the path of packets in the network stack, to analyze latency and locate packet drops, by using the pktcap-uw tool with the trace option (a usage sketch follows this list).

  • NSX Upgrade: Supports direct upgrades from NSX 3.2.x to NSX 4.2.0.

  • TLS 1.3: Supports TLS 1.3 for internal communications between NSX components.
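
The following is a minimal usage sketch of the pktcap-uw trace option mentioned above, run directly on an ESXi transport node; the IP filter and output path are illustrative, and the available options can vary by ESXi build:

    # Trace the path of packets through the ESXi network stack and write the
    # result to a file for offline latency and drop-point analysis
    pktcap-uw --trace --ip 192.168.10.5 -o /tmp/edp-trace.pcap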

For more information about these features and enhancements, see the VMware NSX 4.2 Release Notes.

Components

Telco Cloud Platform Essentials

To download these components, see the Telco Cloud Platform 5.0 Essentials Product Downloads page.

Optional Add-On Components

Note: An additional license is required.

Telco Cloud Platform Advanced

To download these components, see the Telco Cloud Platform 5.0 Advanced Product Downloads page.

Optional Add-On Components

Note: Additional licenses are required.

Deprecated Features

VMware deprecates the following components from Telco Cloud Platform 5.0 onwards.

Note: Though these components are supported during the deprecation phase, we recommend that you upgrade these components, as support will be removed in a future release.

  • VMware vSphere 7.x (ESXi and vCenter) is deprecated.

  • The following component versions will be removed in a future release:

    • VMware NSX 3.x, 4.0.x, 4.1.x

    • Aria Suite 8.12, 8.13

    • Aria Operations 8.12, 8.13, 8.14

    • Aria Operations for Logs 8.12, 8.13, 8.14

    • Aria Operations for Networks 6.10

    • Aria Automation Orchestrator 8.12, 8.13, 8.14

  • Photon OS 3 is deprecated.

  • VMware Integrated OpenStack

    • Managing VMware Integrated OpenStack as a Virtual Infrastructure Manager (VIM) in TCA will be deprecated

  • The following Tanzu Kubernetes Grid (TKG) versions will be removed in a future release.

    • TKG 2.1.1 with Kubernetes 1.24.10

    • TKG 2.3.1

  • Harbor ChartMuseum Charts

    • If you are using Harbor 2.8 or later versions, Harbor ChartMuseum charts are no longer supported for CNF LCM operations.

    Note: Only OCI-based Helm charts are now supported.

Support for Backward Compatibility of CaaS Layer with IaaS Layer

VMware Telco Cloud Platform supports backward compatibility of its CaaS layer components (Telco Cloud Automation and Tanzu Kubernetes Grid) with the IaaS Layer components (vSphere and NSX) in earlier versions of Telco Cloud Platform. With this feature, you can upgrade the CaaS layer components to their latest versions while using earlier versions of the IaaS layer components.

For more information, see Software Version Support and Interoperability in the Telco Cloud Automation Deployment Guide and Supported Features on Different VIM Types in the Telco Cloud Automation User Guide.

Resolved Issues

Note: For information about the entire list of resolved issues in each Telco Cloud Platform component, see the corresponding component release notes.

  • Bare Metal Edge Devices Experience Packet Drops Causing Network Performance Degradation

    Bare Metal Edge devices using NSX 4.1.2.1 experience packet drops, which affect the network traffic flow and result in degraded network performance.

    This issue is fixed in NSX 4.2.

  • VMware Aria Operations 8.16 Fails to Integrate with Tanzu Kubernetes Classy Standard Clusters 1.27 and 1.28 Running TLS 1.3

    VMware Aria Operations 8.16 fails to integrate with Tanzu Kubernetes Classy Standard Clusters (1.27 and 1.28) that are running Transport Layer Security (TLS) version 1.3. Hence, the Classy Standard Clusters 1.27 and 1.28 cannot be monitored.

    Note: TLS 1.3 is not supported in Aria Operations 8.16.

    This issue is fixed in Aria Operations 8.18.

  • Include Node Pool Toggle Button Gets Reset When Selecting Templates Individually

    The Include Node Pool toggle button in the Cluster Upgrade wizard gets reset when selecting templates individually.

    This issue is fixed.

  • Edit DualStack Workload Cluster Shows Incorrect IP Version that Blocks Workload Cluster Upgrade

    The workload cluster upgrade from 1.28.4 to 1.28.7 is blocked due to an incorrect IP version that appears when using Edit DualStack Workload Cluster.

    This issue is fixed.

  • Existing Airgap Server and Harbor Appear as Disconnected in TCA Manager After Migrating TCA From 2.3 to 3.1.1

    After migrating Telco Cloud Automation (TCA) from 2.3 to 3.1.1, the existing Airgap server and Harbor appear as Disconnected under the Connected Endpoints tab in the TCA Manager.

    This issue is fixed.

  • Retry Can be Performed Only After a Four-Hour Timeout if Management Cluster Upgrade is Stuck Due to Missing TKG Template on vCenter

    If the Tanzu Kubernetes management cluster 1.24 upgrade is stuck due to the missing TKG template on vCenter, users need to wait approximately four hours for the timeout before retrying the cluster upgrade.

    This issue is fixed.

  • vCenter Upgrade to 8.0 U2 Stuck for an Extended Time in Airgapped Environment

    This issue is fixed.

  • Migration to TCA 3.0 or 3.1 Not Supported if Compute Cluster Domains Exist in TCA 2.3.x Infrastructure Automation

    If compute cluster domains exist in Telco Cloud Automation 2.3.x Infrastructure Automation, migration to TCA 3.0 or 3.1 is not supported.

    This issue is fixed.

  • CNF Upgrade Retry Skips Nodecustomization if Previous Nodecustomization Failed During CNF Upgrade

    This issue is fixed.

  • Management Cluster Upgrade Might Fail Due to Default Timeout in TCA API

    The management cluster upgrade might fail due to the default timeout (about 3.5 hours) in the TCA API. If the upgrade task continues to run in the backend, the cluster status shown in the TCA UI becomes inconsistent with the backend status.

    This issue is fixed.

Known Issues

Note: For information about the entire list of known issues in each Telco Cloud Platform component, see the corresponding component release notes.

  • Adding Tenant IDP or Configuring Active Directory Fails Due to a Misconfigured User

    If an Active Directory (AD) has a misconfigured user, attempts to configure the AD or add a Tenant IDP fail with an Internal Server Error.

    Workaround:

    1. Correct the misconfigured user in the AD server.

    2. Update the AD configuration in the Appliance Management UI or add Tenant IDP in the TCAM Console.

  • TCA-M Marks Infrastructure LCM Workflow as Failed After its Time-Out Period if TCA-CP or Infra-lcm-spoke Pod Restarts

    If TCA-CP or the Infra-lcm-spoke pod restarts during an Infrastructure LCM workflow (assessment or upgrade), TCA-M marks the workflow as failed after its time-out period.

    This issue can also occur if TCA-M or the infra-lcm-hub pod restarts.

    Workaround: Re-run the failed workflow as follows:

    1. Log in to Workflow Hub (WFH) from the TCA UI.

    2. Navigate to Workflow Hub > Runs.

    3. Locate the failed workflow in WFH and re-run it by using the existing payload.

  • DHCP IP Release Might Fail for Standard Kubernetes Workload Clusters When DHCP Server IP Address Belongs to Same Subnet as Cluster Node VMs

    When a standard Kubernetes workload cluster (Kubernetes 1.27 or later) is deployed on an NSX segment where the DHCP server’s IP address is in the same subnet as the cluster node VMs, releasing the DHCP IP might fail.

    In this issue, the DHCP release request requires the MAC address of the DHCP server. However, the ARP cache entry that stores this MAC address expires after 60 seconds.

    Workaround: Set the DHCP lease time or renewal time to less than 60 seconds. This ensures that the MAC address of the DHCP server remains in the ARP cache.

    Note: A reduced lease or renewal time significantly increases DHCP traffic, so implement this change with caution.

  • RBAC User Permissions Not Applied Properly

    When an RBAC user is associated with multiple types of permissions, where one permission is based on filters such as tags and another permission is without filters, permissions are applied using a Logical OR operation instead of a Logical AND operation.

    Workaround: While configuring permissions, set the filters such that all the required resources are accessible. If the user needs to access all the resources, do not define filters in any permission.

  • vsphere-csi Add-On with Network Permissions Stuck in Processing State After Changing vCenter Login Credentials

    When you change the vCenter username or password for the vsphere-csi add-on that is configured with Network Permissions, the add-on might get stuck in the Processing state.

    Workaround: Remove and re-add the network permissions in the vsphere-csi add-on.

    1. Edit the vsphere-csi add-on.

    2. Remove all network permissions and save the configuration.

    3. Wait until the add-on status changes to Provisioned.

    4. Edit the vsphere-csi add-on, re-add all network permissions, and save.

  • NFS Client Add-On Deployment Fails When Using NSX Gateway as Backend Network for Workload Cluster

    When you use NSX as the backend network for a workload cluster, the NSX Gateway (Tier0 or Tier1) resides in the datapath between the cluster and the NFS storage server, causing the deployment failure of the NFS client add-on.

    In this issue, the NFS storage server uses the default secure setting, which accepts NFS client requests only from TCP source ports 1-1024. However, the NSX Gateway enacts stateful Network Address Translation that modifies the TCP source port to a value above 1024, leading to the deployment failure.

    Workaround: Modify the secure settings of the NFS storage server to "insecure" or follow the workaround in NSX KB71104.
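
    If the NFS server is a Linux host that exports through /etc/exports, the change might look like the following sketch. The export path and client subnet are illustrative, and storage appliances expose the equivalent setting through their own management interfaces:

      # Example /etc/exports entry that permits NFS requests from client source ports above 1024:
      #   /export/cnf-volumes 10.0.0.0/24(rw,sync,insecure)
      exportfs -ra    # re-export the shares after editing /etc/exports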

  • Management Cluster Creation Fails When vSphere Resource Folder Resolved to Multiple Resources in Kubernetes-based Service

    The management cluster creation fails when the vSphere resource folder is resolved to multiple resources in Kubernetes-based Service (KBS).

    In this issue, multiple folders with the same name exist under different parent folders in vCenter. When KBS attempts to create a VM in the vCenter Cloud, resources are identified based on the resource name instead of the absolute folder path in TCA. Hence, TCA cannot determine which resource folder to use because of duplicate names.

    Workaround: When creating a management cluster, do not select a VM folder with a duplicate name within the same vSphere cluster.

    Note: Ensure that each folder has a unique name to prevent conflicts during cluster creation.
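
    If the govc CLI is available against the target vCenter, you can check for duplicate folder names before creating the cluster; the folder name below is illustrative:

      # GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD must point to the target vCenter
      # More than one result for the same folder name indicates a naming conflict
      govc find / -type f -name tkg-mgmt-folder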

  • vCenter Login Fails During CaaS Operation

    vCenter login fails with the following error during a CaaS operation, indicating that the maximum session count has exceeded on vCenter.

    failedToLoginMaxUserSessionCountReached

    In this issue, Kubernetes components such as k8s-csiuseragent and k8s-capv-useragent hold many sessions open, causing the maximum user session count to be reached and preventing further vCenter logins.

    Workaround: Restart vCenter to remove the long-lived idle sessions and free up the session count. For instructions, see KB88668.
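
    The following is a minimal sketch of the restart from the vCenter Server Appliance shell, assuming SSH or console access as root; see KB88668 for the supported procedure:

      # Stop and restart all vCenter services to clear the long-lived idle sessions
      service-control --stop --all
      service-control --start --all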

  • Secondary Network Adapters Added Through Node Policy Step Do Not Appear in Node Customization Tab

    When you add secondary network adapters through the Node Policy step in the Add/Edit Nodepool wizard of the TCA UI, the network adapters do not appear in the Node Customization tab.

    Workaround: You can view the secondary network adapters from the node pool.

    1. Click the workload cluster and navigate to the specific node pool.

    2. Click Conditions > VIEW MORE DETAILS.

  • Upgrade Retry for Management Cluster Fails at Pre-Validation Stage

    When a management cluster upgrade fails, the cluster remains in "Not Active" status. When you re-trigger the cluster upgrade, the upgrade fails at the pre-validation stage and you cannot click the Retry or Upgrade Cluster button from the cluster operation list.

    Workaround: Do the following tasks:

    1. Fix the issue that caused the upgrade pre-validation failure.

    2. Navigate to the Upgrade tab on the Management Cluster page and start a new upgrade.

  • TCA Migration Validation Fails if vApp Options are Disabled in TCA VM

    If the vApp options are disabled or if the OVF properties values are lost for a TCA VM, TCA migration fails during deploy validation.

    Workaround: Ensure that the vApp Properties of the TCA VM are enabled and that the Product Name under vApp Properties is set to VMware Telco Cloud Automation.

  • Retry Not Working if Management Cluster Upgrade Fails Due to a Missing TKG Template on vCenter

    If the Tanzu Kubernetes management cluster 1.24 upgrade fails due to a missing Tanzu Kubernetes Grid (TKG) template on vCenter, the retry operation does not work.

    Workaround:

    1. Log in to the Tanzu Kubernetes management cluster.

    2. Restart the Kubernetes pod.

    3. Retry the management cluster upgrade from the TCA Manager.

  • Techsupport Bundle Generation for CaaS Clusters Might Fail When Run in Parallel

    The techsupport bundle generation for CaaS clusters might fail if it is run in parallel.

    In this issue, the Support bundle service allows a user to trigger multiple support bundle requests simultaneously, while KBS allows only one CaaS cluster log collection request at a time.

    Workaround: Wait until the previous techsupport bundle generation completes and then retry the subsequent bundle generation.

    Note: The Support bundle service displays a tooltip indicating that a subsequent request to collect CaaS cluster logs will fail if one is already running.

  • Multitenancy Not Supported for Certificate Observability Service

    Endpoints owned by non-default Tenants are not shown in the view for a Default Tenant login, unless the non-default Tenant shares the Endpoint with the default Tenant or the default Tenant inherits the Endpoint as part of a parent-child relationship.

    Although such Endpoints are not listed in the portal for the Default Tenant login, they may still appear in the Connected Endpoints listing.

    Workaround: NA

  • Airgap rsync Operation Might Fail Occasionally if it is Run Multiple Times

    The airgap rsync operation might fail occasionally if it is run multiple times.

    Workaround: Run the following commands on the airgap server as a root user:

    1. Remove the existing content from the following location:

      rm -f /etc/yum.repos.d/*

    2. Copy the content from the backup location:

      cp /usr/local/airgap/backup_repo/* /etc/yum.repos.d/

    3. Run the rsync operation using the copied content:

      agctl rsync

  • capv User Account Gets Locked After Three Unsuccessful Login Attempts in 15 Minutes

    The capv user account gets locked after three unsuccessful login attempts in 15 minutes. The following message appears in the Journal log:

    Mar 27 07:15:55 cp-stardard-cluster-1-control-plane-zdfgm sshd[3767202]: pam_faillock(sshd:auth): Consecutive login failures for user capv account temporarily locked

    In this issue, the Photon operating system automatically locks the user account as per the Photon 5 STIG requirement (PHTN-50-000108).

    Workaround:

    1. Log in to TCA-CP as an admin and change to the root user.

    2. SSH in to the workload cluster endpoint as a capv user.

    3. Release the locked account:

      # faillock --user capv --reset
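
    To confirm the lock state before resetting, you can also list the recorded failures for the capv user. This is a sketch based on the standard pam_faillock tooling in Photon OS:

      # faillock --user capv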

End of General Support Guidance

The Broadcom Product Lifecycle Matrix outlines the End of Service (EoS) dates for Broadcom products. Lifecycle planning is required to keep each component of the VMware Telco Cloud Platform solution in a supported state. Plan the component updates and upgrades according to the EoS dates. To ensure that component versions remain supported, you may need to update the Telco Cloud Platform solution to its latest maintenance release.

Broadcom pre-approval is required to use a product past its EoS date. To discuss the extended service of products, contact your Broadcom representative.

Note: If you purchased NSX as part of Telco Cloud Platform Advanced, NSX is covered by the service lifecycle specific to the Telco Cloud Platform Advanced bundle. For more information, contact the Telco Cloud Platform Product Management team and Broadcom Support.

Support Resources

For additional support resources, see the VMware Telco Cloud Platform documentation page.
