VMware Telco Cloud Platform 2.1 | 12 AUG 2021

Check for additions and updates to these release notes.

What's New

Telco Cloud Platform 5G Edition Release 2.1 brings together key features and enhancements across carrier-grade workload compute, networking, network function automation and orchestration, and Kubernetes infrastructure. With the Airgap feature, this release allows users to securely deploy and provision appliances and applications in the production environment without access to the Internet or cloud. Users no longer need Internet access in the production network to download software and upgrade each component in the Telco Cloud Platform 5G Edition bundle.

Telco Cloud Platform 5G Edition Release 2.1 brings the Telco Cloud Platform core stack to parity with the VMware Telco Cloud Platform™ RAN stack in terms of component versions. This release also provides several security vulnerability fixes and bug fixes at the infrastructure and networking layers.

Workload Compute Performance

  • VMware ESXi 7.0 Update 2a includes various new features and enhancements that are inherited from the vSphere ESXi 7.0 Update 2 release. Some key features are as follows:

    • Support for vSphere Quick Boot on select Dell Inc., HPE, and Lenovo servers.

    • Some ESXi configuration files become read-only: Configurations formerly stored in files such as /etc/keymap now reside in the ConfigStore database. You can modify these configurations only by using ESXCLI commands (see the example after this list).

    • Ability to track performance statistics of vSphere Virtual Volumes for better debugging. These statistics help quickly identify issues such as latency in third-party VASA provider responses.

    • Support for the NVIDIA Ampere architecture to run high-end AI/ML training and ML inference workloads by using the accelerated capacity of the A100 GPU.

    • Support for Mellanox ConnectX-6 200G NICs.

    • Performance improvements for AMD Zen CPUs. Out-of-the-box optimizations can increase the AMD Zen CPU performance by up to 30% in various benchmarks. AMD Zen CPU optimizations allow a higher number of VMs or container deployments with better performance.

    • Reduced compute and I/O latency and jitter for latency-sensitive workloads. Latency-sensitive workloads, such as those in financial and telecom applications, can see a significant performance benefit from I/O latency and jitter optimizations. The optimizations reduce interference and jitter sources to provide a consistent runtime environment. ESXi 7.0 Update 2 also speeds up interrupt delivery for passthrough devices.

    • Ability to configure vSphere Lifecycle Manager for fast upgrades. When you update an ESXi host, you can configure vSphere Lifecycle Manager to suspend virtual machines to memory instead of migrating them, powering them off, or suspending them to disk. Suspending VMs to memory and using the Quick Boot functionality significantly reduces upgrade time, system downtime, and system boot time.

    • Ability to encrypt fault tolerance log traffic for enhanced security, thereby preventing malicious access or network attacks.

      For more information about these enhancements, see the VMware ESXi 7.0 Update 2 Release Notes. For more information about known issues, resolved issues, and patches included in ESXi 7.0 Update 2a, see the VMware ESXi 7.0 Update 2a Release Notes.
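
      For example, the keyboard layout that formerly lived in /etc/keymap is now read and changed only through ESXCLI. A minimal sketch, assuming the keyboard-layout namespace exposed by ESXi 7.0 Update 2 (verify the exact subcommands on your build):

      esxcli system settings keyboard layout get
      esxcli system settings keyboard layout set -l German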

  • VMware vCenter Server 7.0 Update 2b includes various features and enhancements that are inherited from vCenter Server 7.0 Update 2a and vCenter Server 7.0 Update 2. Some key features are as follows:

    • In-product feedback option in vSphere Client: Enables you to provide real-time ratings and comments on key VMware vSphere workflows and features.

    • Parallel remediation on hosts in clusters that you manage with vSphere Lifecycle Manager baselines: To reduce the time needed for patching or upgrading the ESXi hosts in your environment, you can enable vSphere Lifecycle Manager to remediate the hosts within a cluster in parallel by using baselines. Parallel remediation applies only to ESXi hosts that are already in maintenance mode (see the example after this list) and cannot be performed on hosts in a vSAN cluster.

    • Improved vSphere Lifecycle Manager error messages: Improved error messages help you better understand the root cause of issues such as nodes skipped during upgrades and updates, hardware incompatibility, or ESXi installation and update failures during vSphere Lifecycle Manager operations.

    • Scaled VMware vSphere vMotion operations: vSphere vMotion automatically adapts to make full use of high-speed networks such as 25 GbE, 40 GbE, and 100 GbE with a single vMotion VMkernel interface, up from a maximum of 10 GbE in previous releases.

    • Increased scalability with vSphere Lifecycle Manager: For vSphere Lifecycle Manager operations with ESXi hosts and clusters, the number of ESXi hosts that can be managed by a vSphere Lifecycle Manager Image is increased from 280 to 400.

    • Upgrade and migration from NSX-T-managed Virtual Distributed Switches to vSphere Distributed Switches: By using vSphere Lifecycle Manager baselines, you can upgrade your system and simultaneously migrate from NSX-T-managed Virtual Distributed Switches to vSphere Distributed Switches for clusters enabled with NSX-T Data Center.

    • Create new clusters by importing the desired software specification from a single reference host:

      • You can import the desired software specification from a single reference host and ensure that all necessary components and images are available in the vSphere Lifecycle Manager depot before creating a new cluster, thereby saving time and effort.

      • During the image import, vSphere Lifecycle Manager extracts the software specification from the reference host to the vCenter Server instance where you create the cluster, along with the software depot associated with the image. As a result, you do not need to compose or validate a new image.

      • You can import an image from an ESXi host that is in the same or a different vCenter Server instance. You can also import an image from an ESXi host that is not managed by vCenter Server, and either move the reference host into the cluster or seed the image from the host to the new cluster without moving the host.

    • vSphere Lifecycle Manager fast upgrades: You can configure vSphere Lifecycle Manager to suspend virtual machines to memory instead of migrating them, powering them off, or suspending them to disk.

      For more information about these features, see the VMware vCenter Server 7.0 Update 2 Release Notes. For information about security fixes delivered in vCenter Server 7.0 Update 2b, see the VMware vCenter Server 7.0 Update 2b Release Notes.
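
      Because parallel remediation applies only to hosts that are already in maintenance mode, hosts can be pre-staged from the ESXi shell before remediation starts. A minimal sketch, assuming the VMs on the host have already been evacuated or powered off (vCenter Server and DRS normally handle this for you):

      esxcli system maintenanceMode set --enable true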

Workflow Automation and Orchestration

  • VMware vRealize Orchestrator 8.3 introduces various features around usability and security. Some key features are as follows:

    • Viewer role: Support for view-only access to all vRealize Orchestrator objects and pages.

    • Usability improvements: You can now filter workflows by additional parameters in the data grid of the Variables and Input/Output tabs. You can also sort the workflow parameters and variables.

For more information about these features, see the VMware vRealize Orchestrator 8.3 Release Notes.

Carrier-Grade Resilient Networking

  • VMware NSX-T Data Center 3.1.2 introduces various new features for virtualized networking and security for private, public, and multi-clouds. Some key features are as follows:

    • Events and alarms: Load balancer, Edge health, IPAM, and Edge NIC out-of-receive-buffer conditions.

    • Operations: Rolling packet capture to troubleshoot datapath issues on the Edge.

    • NVDS to VDS migration:

      • Supported during the ESXi host upgrade, where the host clusters are upgraded in parallel.

      • Supported on a host that has either vSAN File Service VMs or vSAN shared-nothing architecture VMs connected to the NVDS on the host.

For more information about these features and other enhancements, see the VMware NSX-T Data Center 3.1.2 Release Notes.

Carrier-Grade VNF and CNF Automation and Orchestration

  • VMware Telco Cloud Automation 1.9.5 introduces various new features and enhancements. Some key features are as follows:

    • Airgap Feature: You can now create an air-gapped server that serves as a repository for all binaries and libraries that are required by VMware Telco Cloud Automation and VMware Tanzu Kubernetes Grid for performing end-to-end operations. In an air-gapped environment, you can now deploy VMware Tanzu Kubernetes clusters, perform late-binding operations, and perform license upgrades.

      Note: The airgap setup requires packages to be placed in a private air-gapped repository. To keep the packages up to date and to manage the repository, the repository must have access to the Internet.

    • Features and Enhancements inherited from Telco Cloud Automation 1.9 and 1.9.1:

      • CaaS clusters and node pools of VMware Tanzu Kubernetes Grid can now be upgraded by Telco Cloud Automation.

      • Role-Based Access Control (RBAC) for CaaS infrastructure automation.

      • Ability to install and configure Antrea as part of CaaS lifecycle management.

      • Support for Harbor 2.x as a Partner System.

For more information, see the VMware Telco Cloud Automation 1.9.5 Release Notes.

Carrier-Grade Kubernetes Infrastructure

  • VMware Tanzu Standard for Telco introduces various new features and security vulnerability fixes as part of VMware Tanzu Kubernetes Grid 1.3.1. Some key features are as follows:

    • Support for new Kubernetes versions:

      • 1.20.5

      • 1.19.9

      • 1.18.17

    • One of the observability features inherited from Tanzu Kubernetes Grid 1.3 is that Metrics Server is pre-installed on management and workload clusters. This feature enables you to use the following commands. For more information about this feature, see the VMware Tanzu Kubernetes Grid 1.3 Release Notes.

      • kubectl top nodes

      • kubectl top pods
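
      For example, a namespace-scoped query (the namespace shown is illustrative):

      kubectl top pods -n kube-system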

For more information about other features and security vulnerability fixes, see the VMware Tanzu Kubernetes Grid 1.3.1 Release Notes.

Components

Mandatory Add-On Components

Note: An additional license is required.

Validated Patches

Resolved Issues

  • TCA Virtual Infrastructure Main Dashboard Does Not Show the Resource Consumption for Virtual Infrastructure

    This issue is fixed in Telco Cloud Automation 1.9 and later.

  • VMXNET Not Supported During CNF Instantiation

    This issue is fixed in Telco Cloud Automation 1.9 and later.

  • Registration of Multiple Harbor Systems with a Single VIM is Not Supported

    This issue is fixed in Telco Cloud Automation 1.9 and later.

  • Partner System Admin User Cannot Associate VIMs with an Existing Harbor

    This issue is fixed in Telco Cloud Automation 1.9 and later.

  • Association of Harbor Registry with Kubernetes Cluster Not Recorded Properly in Telco Cloud Automation

    This issue is fixed in Telco Cloud Automation 1.9 and later.

Known Issues

  • Telco Cloud Automation Fails to Upgrade the Tanzu Kubernetes Management Cluster

    Telco Cloud Automation 1.9.5 fails to upgrade the Tanzu Kubernetes Management Cluster from Kubernetes version 1.19.1 to 1.20.5. As part of the upgrade process, CAPI/CAPV always deletes the old node (vSphere VM). In some cases, however, CAPI/CAPV does not delete the information about the old node.

    Workaround:

    1. SSH into the TCA-CP VM.

    2. Switch to the Tanzu Kubernetes management cluster context.
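
      For example, assuming the management cluster context already exists in your kubeconfig (the context name is illustrative):

      kubectl config use-context tkg-mgmt-admin@tkg-mgmt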

    3. Use the following command to list the machines and identify the stale vSphere VM to delete:

      kubectl get machine -A

    4. Use the following command to manually delete the stale vSphere VM, replacing the example VM name with the name identified in the previous step:

      kubectl delete vspherevm k8-mgmt-cluster-np1-769b4484c5-pq4p5 -n tkg-system --force --grace-period 0

    5. Upgrade the Tanzu Kubernetes management cluster from the Telco Cloud Automation 1.9.5 UI.

  • After a Scale-In operation on the Kubernetes Cluster, stale worker nodes are still shown in the TCA CaaS Infrastructure UI

    After a user performs the scale-in operation on the Kubernetes Cluster, some stale worker nodes are visible in the Telco Cloud Automation CaaS Infrastructure UI.

    Workaround: This issue is due to a temporary sync delay. Telco Cloud Automation displays the correct data automatically after two hours.

  • Users need to enable PCI passthrough on PF0 when the E810 card is configured with multiple PF groups

    To use the PTP PHC services in Telco Cloud Automation, enable PCI passthrough on PF0 when the E810 card is configured with multiple PF groups.
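
    To identify PF0, one hedged approach is to list the PCI functions on the host from the ESXi shell and locate the E810 entry whose PCI function number is 0:

      esxcli hardware pci list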

    Workaround: None

  • Configure add-on fails intermittently while deploying the cluster

    While deploying the cluster in Telco Cloud Automation, the configure add-on fails intermittently with the following error:

    Error: failed to generate and apply NodeConfig CR

    Workaround: If the configure add-on partially fails, edit the cluster and re-add Harbor to the cluster.

  • Cluster Creation Fails for an NSX segment that spans multiple DVS

    In Telco Cloud Automation, cluster creation fails when the cluster is connected to an NSX segment that spans multiple DVS within the same Transport Zone.

    Workaround:

    1. Create a Network Folder for each DVS that belongs to the same Transport Zone in vCenter.

    2. Move the DVS to the newly created Network Folder.
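
    As an illustrative sketch, both steps can also be scripted with the open-source govc CLI; the datacenter, folder, and switch names below are assumptions for the example:

      govc folder.create /dc01/network/dvs-folder-01
      govc object.mv /dc01/network/dvs-01 /dc01/network/dvs-folder-01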

  • vSAN NFS image URL must be provided manually under the Image section in Infrastructure Automation

    In a non-air-gapped environment, the vCenter Server deployed by Infrastructure Automation automatically downloads the required vSAN NFS OVF images from https://download3.vmware.com/software/VSANOVF/FsOvfMapping.json. However, with the introduction of the airgap server and airgap support in VMware Telco Cloud Automation 1.9.5, setting up Infrastructure Automation fails in environments that do not have Internet access.

    Workaround:

    1. Download the required vSAN OVF and other image files to a web server that is local to your air-gapped environment.

    2. Provide the vSAN OVF file URL under the Images section of the Configuration tab in Infrastructure Automation.
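
    As a hedged example of step 1, any HTTP server reachable from inside the air-gapped network can host the files; a minimal sketch using Python from the directory that contains the downloaded OVF files:

      python3 -m http.server 8080

    The files are then reachable under http://<local-web-server>:8080/ for use in the Images section.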

    For more information, see the Telco Cloud Automation 1.9.5 Release Notes.

  • Config spec JSON from one Telco Cloud Automation setup does not work in other Telco Cloud Automation setups

    As a security requirement, the downloaded config spec JSON file does not include appliance passwords. As a result, users cannot upload the config spec JSON of one Telco Cloud Automation setup to another Telco Cloud Automation setup.

    Workaround: None

  • No option to delete a compute cluster that is 'ENABLED' but in the 'failed' state

    If a compute cluster is 'ENABLED' but in the 'failed' state and the user attempts to delete it, the cluster is deleted from the Telco Cloud Automation inventory while the underlying cluster resources are left intact.

    Workaround: Manually delete the cluster resources by logging in to VMware vCenter Server and the VMware NSX-T server.

  • Telco Cloud Automation does not provision domains when four PNICs are configured

    Telco Cloud Automation does not provision domains when four PNICs are configured on a DVS and the edge cluster deployment is enabled.

    Workaround: None

  • CSI tagging is not supported for predeployed domains in Telco Cloud Automation

    The CSI tagging feature is not applicable to predeployed domains in VMware Telco Cloud Automation. However, Telco Cloud Automation does not modify existing tags that are already set in the underlying VMware vSphere server for predeployed domains.

    Workaround: None

  • User needs to edit and save the domain information to enable the CSI tagging feature upon resync in a brownfield deployment

    To use the CSI tagging feature in a Brownfield deployment in Telco Cloud Automation, users need to edit and save the domain information and then perform a resync operation for that particular domain.

    Workaround: None

  • No option is available to edit or remove the CSI tagging after enabling it in Telco Cloud Automation

    After you set the CSI tags in a domain in Telco Cloud Automation, you cannot remove or make further modifications to the tags.

    Workaround: None

  • CSI tagging is not enabled by default for an air-gapped environment with the standalone mode of activation

    In Telco Cloud Automation, CSI tagging is not enabled by default for an air-gapped environment that uses the standalone mode of license activation.

    Workaround: Contact the VMware support team to enable the CSI tagging feature.

  • CSI tagging is applicable only for newly added hosts in a Cell Site Group

    In Telco Cloud Automation, VMware vSphere server Container Storage Interface (CSI) tagging applies only to hosts that are newly added to a Cell Site Group (CSG); hosts that were already added to a Cell Site Group are not tagged.

    Workaround: None

  • Infrastructure Automation deploys the old versions of Workload Domain components in Telco Cloud Platform 5G Edition 2.1

    When you use Infrastructure Automation to deploy Workload Domains, older versions of components are deployed instead of the latest versions supported in Telco Cloud Platform 5G Edition 2.1. This issue occurs because Infrastructure Automation relies on Cloud Builder.

    Workaround: After creating the Workload Domain, upgrade the components to the supported versions manually.

  • Partner System Admin Users Can View Other Registered Partner Systems in Telco Cloud Automation

    When configuring permissions for a Partner System Admin user in Telco Cloud Automation, the associated Partner System is not available as an object type under the advanced filter criteria. As a result, the user can also view other registered Partner Systems.

    Workaround: None

  • Users with Infrastructure LCM Privilege Are Not Able to Delete Kubernetes Clusters

    Users with the Infrastructure LCM privilege are not able to delete Kubernetes clusters that are deployed through VMware Telco Cloud Automation.

    Workaround: Add the Virtual Infrastructure Admin privilege to users.

Release Notes Change Log

  • 10 MAR 2022: Replaced 'VMware Tanzu Kubernetes Grid' with 'VMware Tanzu Standard for Telco' in the Components section.

  • 04 OCT 2021: Added the following to the Validated Patches section:

    • VMware ESXi 7.0 Update 2d

    • VMware vCenter Server 7.0 Update 2d

  • 19 AUG 2021: Added VMware NSX-T Data Center 3.1.3 to the Validated Patches section.

Support Resources

For additional support resources, see the VMware Telco Cloud Platform documentation page.
