VMware Telco Cloud Platform 2.5 | 28 JUL 2022

Check for additions and updates to these release notes.

Release Overview

Telco Cloud Platform 5G Edition Release 2.5 provides various key features and enhancements across carrier-grade workload compute, networking, network function automation and orchestration, and Kubernetes infrastructure areas.

This release delivers key Role-Based Access Control (RBAC) features, improved network lifecycle management through Avi Kubernetes Operator (AKO) lifecycle management, visibility and monitoring capabilities through Tanzu Kubernetes Grid extensions, SSH access to appliances, and IPv6 support for new deployments in an air-gapped environment.

Other key features of this release include new functionalities for virtualized networking, security, and migration from NSX Data Center for vSphere. In addition, it delivers several key infrastructure and security bug fixes.

What's New

  • Workload Management, Storage, and Reliability Enhancements

    VMware vCenter Server 7.0 U3f delivers the following features and enhancements. This release also inherits features and enhancements from vCenter Server 7.0 U3d and vCenter Server 7.0 U3e.

    • Several key security bug fixes, including CVE-2022-22982 and CVE-2021-22048. For more information on these vulnerabilities and their impact on VMware products, see VMSA-2022-0018 and VMSA-2021-0025, respectively.

    • Key scalability features, including the ability of a vSAN cluster to serve its local datastore to up to ten client vSAN clusters.

    For more information, see the VMware vCenter Server 7.0 Update 3f Release Notes, VMware vCenter Server 7.0 Update 3d Release Notes, and VMware vCenter Server 7.0 Update 3e Release Notes.

    VMware ESXi 7.0 U3f delivers the following features and enhancements:

    • Several key security bug fixes including CVE-2022-23816, CVE-2022-23825, CVE-2022-28693, and CVE-2022-29901. For more information on these vulnerabilities and their impact on VMware products, see VMSA-2022-0020.

    • Support for vSphere Quick Boot on additional Cisco, Dell, HPE, and Lenovo servers.

    For more information, see the VMware ESXi 7.0 Update 3f Release Notes.

  • Carrier-Grade Resilient Networking and Security

    VMware NSX-T Data Center 3.2.1 introduces the following key enhancements across networking, load balancing, NSX Data Center for vSphere to NSX-T Data Center migration, security, services, and onboarding.

    • N-VDS to VDS Migrator Tool: Enables you to migrate the underlying N-VDS connectivity to NSX on VDS while workloads are running on the hypervisors. The Migrator tool now also supports migration of the underlying N-VDS connectivity when different N-VDS configurations share the same N-VDS name.

    • Enhancements to the NSX for vSphere to NSX-T Data Center Migration Coordinator: Migration Coordinator now supports migrating to NSX-T environments whose edge nodes are deployed with two TEPs, across different modes including User-Defined Topologies and the migration of Distributed Firewall configuration, hosts, and workloads. Migration Coordinator also supports adding hosts during a single-site migration and changing the certificate during migration.

    • Certificate Management Enhancements for TLS Inspection: With the introduction of the TLS Inspection feature, certificate management supports adding and modifying certificate bundles and generating CA certificates for use with TLS Inspection.

    • Rolling Upgrade of NSX Management Cluster: The Rolling Upgrade feature provides near-zero downtime of the NSX Management Plane (MP) when upgrading the NSX Management cluster, starting with NSX-T 3.2.1.

    • Install NSX on Bare Metal/Physical Servers as a non-root user: In NSX-T 3.2.1, you can install NSX on Linux bare metal/physical servers as a non-root user.

    • Security/Firewall Enhancements: IDPS (Intrusion Detection and Prevention System) is now supported in NSX-T 3.2.1. Gateway Firewall IDPS detects attempts to exploit system flaws or gain unauthorized access to systems. In addition, TLS 1.2 Inspection is available and supported for production environments: Gateway Firewall decrypts and inspects the payload to prevent advanced persistent threats.

    For more information, see the VMware NSX-T Data Center 3.2.1 Release Notes.

  • Carrier-Grade Kubernetes Infrastructure

    VMware Tanzu Standard for Telco introduces various key features as part of VMware Tanzu Kubernetes Grid 1.5.4. This release also inherits enhancements from Tanzu Kubernetes Grid 1.5 and 1.5.2.

    Following are some of the key features:

    • New Kubernetes versions are supported:

      • 1.22.9

      • 1.21.11

      • 1.20.15

    • Security vulnerabilities are addressed:

      • CVE-2022-0847 (in v1.5.3+)

      • CVE-2022-0492 (in v1.5.3+)

    • Additional management/provisioning features are supported:

      • Sending DHCP option 121 in the DHCP IPAM plugin to support IP assignment: A bug fix for DHCP option 121 in the DHCP IPAM plugin enables the DHCP client (the DHCP plugin available at /opt/cni/bin) to send option 121 with the DHCP discover request. This allows the secondary interface of a pod to be configured to use DHCP for IPAM (see the example after this list).

      • Additional configuration variables to enable functionalities in AVI, Cluster API, and Antrea.
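
      The following is a minimal, generic illustration of a secondary pod interface that uses the DHCP IPAM plugin through a Multus NetworkAttachmentDefinition. The macvlan type, master interface, and object names are assumptions for illustration only and do not represent how Telco Cloud Automation renders the attachment.

      # Illustrative only: generic Multus attachment whose secondary interface uses the DHCP IPAM plugin (/opt/cni/bin/dhcp)
      apiVersion: k8s.cni.cncf.io/v1
      kind: NetworkAttachmentDefinition
      metadata:
        name: secondary-dhcp
        namespace: default
      spec:
        config: |
          {
            "cniVersion": "0.3.1",
            "type": "macvlan",
            "master": "eth1",
            "ipam": {
              "type": "dhcp"
            }
          }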

    For more information about other features and security vulnerability fixes, see the VMware Tanzu Kubernetes Grid 1.5.4 Release Notes.

  • Carrier-Grade VNF and CNF Automation and Orchestration

    VMware Telco Cloud Automation 2.1 introduces various new features and enhancements. The following are some of the key features:

    • Role-Based Access Control Enhancements: SSH access to appliances and secure kubectl access to VMware Tanzu Kubernetes Grid for restricted access.

    • CaaS Infrastructure Enhancements: Includes granular upgrade support where customers can upgrade the control plane and worker node pools separately, support for Tanzu Kubernetes Grid extensions such as Prometheus and Fluentbit, granular visibility of the component status, and effective management of cluster failures.

    • Other Key Feature Enhancements: Includes IPv6 support for new deployments in an air-gapped environment, Active Directory integration, VIO support extensions, and HA-based improvements for deploying Cloud Native VMware Telco Cloud Automation with customized cluster sizes and for upgrading TCA in an air-gapped environment.

    For more information, see the VMware Telco Cloud Automation 2.1 Release Notes.

Components

Mandatory Add-On Components

Note: An additional license is required.

Optional Add-On Components

Note: An additional license is required.

Validated Patches

Support for Backward Compatibility of CaaS Layer with IaaS Layer

VMware Telco Cloud Platform 5G Edition Release 2.5 supports backward compatibility of its CaaS layer components (Telco Cloud Automation and Tanzu Kubernetes Grid) with the IaaS Layer components (vSphere and NSX-T Data Center) in earlier versions of Telco Cloud Platform 5G Edition. With this feature, you can upgrade the CaaS layer components to their latest versions while using earlier versions of the IaaS layer components.

For more information, see the Telco Cloud Automation 2.1 Release Notes.

End of General Support Guidance

VMware Product Lifecycle Matrix outlines the End of General Support (EoGS) dates for VMware products. Lifecycle planning is required to keep each component of the VMware Telco Cloud Platform solution in a supported state. Plan the component updates and upgrades according to the EoGS dates. To ensure that the component versions are supported, you may need to update the Telco Cloud Platform solution to its latest maintenance release.

VMware pre-approval is required to use a product past its EoGS date. To discuss the extended support of products, contact your VMware representative.

Note: If you purchase NSX-T Data Center as part of the Telco Cloud Platform bundles, NSX-T Data Center follows the support lifecycle specific to the Telco Cloud Platform bundles. The entitlement details for NSX-T 3.2.x are as follows:

  • General Availability: 2021-12-16

  • End of General Support: 2026-12-16

The Technical Guidance phase is not available for this product lifecycle. To receive new severity 1 bug fixes and security updates, upgrade NSX-T Data Center to the latest maintenance release in the 3.2.x release series.

Resolved Issues

  • CNF instantiation failure occurs due to high disk space consumption in Telco Cloud Automation

    TCA 2.1 appliances are sized with the following hardware resources to resolve the resource consumption issues:

    • TCA 2.1 Manager: 6 CPUs / 16 GB RAM / 200 GB disk

    • TCA 2.1 CP: 6 CPUs / 18 GB RAM / 200 GB disk

    If you are upgrading from TCA 2.0 to 2.1, do the following to ensure that the upgraded TCA components meet the new requirements:

    1. Upgrade to TCA 2.1.

    2. Take a backup.

    3. Deploy the new TCA Manager and Control Plane and restore them using the backup.

Known Issues

  • Edit operation fails for a cluster transformed from CaaS v1 to v2

    After a workload cluster associated with a TCA-CP other than the management cluster is transformed, day-2 operations such as creating or editing a cluster or node pool fail.

  • Designing a network function with the VMXNET3 adapter fails to onboard the package

    In CSAR designer, the Add Network Adapter with the device type vmxnet3 does not show Resource Name.

    1. Design a CSAR with targetDriver set to [igb_uio/vfio_pci] and resourceName.

    2. Onboard CSAR.

    3. Download CSAR and delete the onboarded CSAR from VMware Telco Cloud Automation.

    4. Edit the downloaded CSAR in a text editor:

      1. Remove targetDriver.

      2. Add interfaceName (the interface name inside the guest OS for this adapter). For example:

      network:
        devices:
          - deviceType: vmxnet3
            networkName: network1
            resourceName: res2
            interfaceName: net1
            count: 1
            isSharedAcrossNuma: false

    5. Upload the edited CSAR file to VMware Telco Cloud Automation.

    To edit a CSAR file:

    1. Unzip the CSAR file.

    2. Edit the Definitions/NFD.yaml with the changes listed in the preceding steps.

    3. Run zip -r <new_name>.csar TOSCA-Metadata/ Definitions/ Artifacts/ NFD.mf.
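
    Putting these steps together, a minimal shell sketch, assuming the downloaded package is named my_nf.csar (file and directory names are illustrative):

    # Unpack the downloaded CSAR into a working directory
    unzip my_nf.csar -d my_nf
    # Remove targetDriver and add interfaceName in the descriptor
    vi my_nf/Definitions/NFD.yaml
    # Repackage with the structure expected by VMware Telco Cloud Automation
    cd my_nf
    zip -r ../my_nf_edited.csar TOSCA-Metadata/ Definitions/ Artifacts/ NFD.mf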

  • Cluster LCM operations might fail when the api-server of the corresponding Management Cluster is restarted

    This issue occurs when the Control Plane nodes run on low-performance hosts, which results in some pods entering the CrashLoopBackOff state with error messages such as no route to host or leader election lost.

    Restart the pods within the Management Cluster that are in a CrashLoopBackOff state.
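
    For example, a minimal check and restart (pod and namespace names are placeholders):

    # List pods in the Management Cluster that are in CrashLoopBackOff
    kubectl get pods -A | grep CrashLoopBackOff
    # Deleting a pod causes its controller to recreate (restart) it
    kubectl delete pod <pod-name> -n <namespace>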

  • Failed to upgrade management cluster: One control plane node displays the status as SchedulingDisabled.

    The pods of the existing Control Plane node cannot be terminated because the cgroup path cannot be removed. This is similar to the upstream Kubernetes issue: https://github.com/kubernetes/kubernetes/issues/97497

    • Restart the affected Control Plane node. After you restart, the upgrade continues automatically in the backend.

    • Retry the upgrade operation for the Cluster from the VMware Telco Cloud Automation UI.
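
    To identify the affected node, a minimal check (illustrative):

    # The affected control plane node shows SchedulingDisabled in the STATUS column
    kubectl get nodes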

  • An internal server error sometimes occurs while performing operations with Kubernetes in Telco Cloud Automation

    While performing an operation with Kubernetes in Telco Cloud Automation, the following error sometimes occurs:

    Internal server error: NodePolicyJob. Error: Unable to connect to Bootstrapper. Error: 'HttpHostConnectException'. Please restart Bootstrapper Service in TCA CP if the issue persists. Contact System Administrator for more help.

    1. Verify whether the Bootstrapper service is running in the Telco Cloud Automation Control Plane appliance (see the sketch after these steps). If it is not running, start the Bootstrapper service.

    2. Retry the operation that you were performing with Kubernetes.
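
    If the appliance exposes the Bootstrapper as a systemd service, the check might look like the following sketch. The unit name bootstrapper is an assumption, not a confirmed service name; use the appliance's supported management interface if it differs.

    # Assumed unit name; verify the actual service name on the TCA-CP appliance
    systemctl status bootstrapper
    systemctl restart bootstrapper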

  • CaaS cluster deployments are not supported on vCenter servers containing multiple datacenters

  • Infrastructure Automation deploys old versions of Workload Domain components in Telco Cloud Platform 5G Edition 2.2

    When you use Infrastructure Automation to deploy Workload Domains, the old versions of components are deployed instead of the latest versions supported in Telco Cloud Platform 5G Edition 2.2. This issue is due to the reliance of Infrastructure Automation on Cloud Builder.

    Workaround:

    After creating the Workload Domain, upgrade the components to the supported versions manually.

  • VMware Telco Cloud Automation does not support Active Directory usernames that have more than 20 characters

    If the login username is longer than 20 characters, the group retrieval for the user fails. This causes TCA login attempts to fail.

    Workaround:

    Ensure that the usernames are less than 20 characters in length.

  • Adding a custom-port Harbor to V2 clusters causes the Harbor add-on to be stuck

    Workaround:

    Add the custom-port Harbor to partner systems first, and then register it to the cluster as an extension.

  • Network Slicing feature is not supported for environments if their authentication is set as 'Active Directory'

  • vsphere-csi is not supported if the cluster is deployed across multiple vCenters

  • Upgrading a cluster with a single control plane node leads to upgrade failure

    Upgrading a cluster that has only one control plane node results in failure and displays one of the following errors:

    • Scenario 1: Upgrade fails and reports an error message "timeout: poll control plane ready for removing SCTPSupport=true"

    • Scenario 2: Could not find server "cdc-mgmt-cluster"

    Workaround:

    Ensure that the cluster has multiple control plane nodes prior to upgrading to VMware Telco Cloud Automation 2.1.

    Scenario 1: Upgrade fails and reports an error message: "timeout: poll control plane ready for removing SCTPSupport=true"

    1. Retry upgrade on the user interface.

    2. If a retry fails, check the CAPI log to find logs related to the new control plane machine and its status.

      1. If the new control plane machine's status is 'Running' and the new control plane node's cloud-init logs contain "could not find a JWS signature in the cluster-info ConfigMap for token ID":

        1. Contact tech-support.

    # Confirm the new control plane
    [root@10 /home/admin]# kubectl get machine -n cdc-work-cluster1-v1-21-2
    NAME                                                          CLUSTER                     NODENAME                                                      PROVIDERID                                       PHASE     AGE     VERSION
    cdc-work-cluster1-v1-21-2-work-master-control-plane-29xgb     cdc-work-cluster1-v1-21-2   cdc-work-cluster1-v1-21-2-work-master-control-plane-29xgb     vsphere://423b959d-59aa-47d9-56cf-d657b165ab0a   Running   2d14h   v1.21.2+vmware.1
    cdc-work-cluster1-v1-21-2-work-master-control-plane-9ptnm     cdc-work-cluster1-v1-21-2   cdc-work-cluster1-v1-21-2-work-master-control-plane-9ptnm     vsphere://423b1c01-4b7f-c27d-6458-fb6a4e3c3984   Running   2d10h   v1.21.2+vmware.1
    cdc-work-cluster1-v1-21-2-work-worker-np1-575bc4c56-h89d6     cdc-work-cluster1-v1-21-2   cdc-work-cluster1-v1-21-2-work-worker-np1-575bc4c56-h89d6     vsphere://423b7444-e75f-066a-eb67-d9631c939e3d   Running   2d14h   v1.21.2+vmware.1
    cdc-work-cluster1-v1-21-2-work-worker-np10-65b5c46bd6-2kcqm   cdc-work-cluster1-v1-21-2   cdc-work-cluster1-v1-21-2-work-worker-np10-65b5c46bd6-2kcqm   vsphere://423b993d-dbda-2dbd-0943-d0800ac6a0aa   Running   2d14h   v1.21.2+vmware.1
     
    # Look for IP address of new control plane node
    [root@10 /home/admin]# kubectl get machine -n cdc-work-cluster1-v1-21-2 cdc-work-cluster1-v1-21-2-work-master-control-plane-9ptnm -oyaml
     
    # Ssh login
    [root@10 /home/admin]# ssh capv@10.176.119.36
     
    capv@cdc-work-cluster1-v1-21-2-work-master-control-plane-9ptnm [ ~ ]$ sudo su
    root [ /home/capv ]# cat /var/log/cloud-init-output.log
     
    # If the cloud-init log contains the "could not find a JWS signature in the cluster-info ConfigMap for token ID" message and kube-vip does not float on this new bad control plane, you can delete this bad control plane machine.
    # If kube-vip floats on this new bad control plane, contact the TKG team to debug and find a workaround.
     
    # Log in to TCA-CP and switch to the root user
    [root@10 /home/admin]# kubectl edit deployment -n capi-kubeadm-bootstrap-system capi-kubeadm-bootstrap-controller-manager
     
    # Update bootstrap-token-ttl=15m to bootstrap-token-ttl=25m. This causes the capi-kubeadm-bootstrap-system pod to restart.
     
    # Delete the bad new control plane machine
    kubectl delete machine -n cdc-work-cluster1-v1-21-2 cdc-work-cluster1-v1-21-2-work-master-control-plane-9ptnm
     
    # After the bad machine is deleted, a new control plane is created automatically. When this new control plane is Running and the old control plane is deleted, retry the upgrade on the UI.
     
    # If the new control plane node cloud-init still reports the "could not find a JWS signature" error, increase the bootstrap-token-ttl value further.
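
    The TTL is set as a container argument in that deployment. A minimal sketch of the relevant fragment is shown below (surrounding fields are omitted, and the exact argument list can differ per release):

    # Fragment of the capi-kubeadm-bootstrap-controller-manager pod template (other fields omitted)
    containers:
      - name: manager
        args:
          - --bootstrap-token-ttl=25m   # raised from 15m so the bootstrap token outlives slow node bring-up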

    Scenario 2: Upgrade fails and reports the error message: "could not find server cdc-mgmt-cluster"

    1. Open /root/.config/tanzu/config.yaml and confirm whether the Management Cluster context exists. If it does not, contact TKG tech support to debug.

    2. After finding the root cause, perform the following workaround.

    Old value:

    [root@10 /home/admin]# cat /root/.config/tanzu/config.yaml
    apiVersion: config.tanzu.vmware.com/v1alpha1
    clientOptions:
      cli:
        discoverySources:
        - oci:
            image: projects.registry.vmware.com/tkg/packages/standalone/standalone-plugins:v0.11.6-1-g90440e2b_vmware.1
            name: default
        edition: tkg
      features:
        cluster:
          custom-nameservers: "false"
          dual-stack-ipv4-primary: "false"
          dual-stack-ipv6-primary: "false"
        global:
          context-aware-cli-for-plugins: "true"
        management-cluster:
          custom-nameservers: "false"
          dual-stack-ipv4-primary: "false"
          dual-stack-ipv6-primary: "false"
          export-from-confirm: "true"
          import: "false"
          network-separation-beta: "false"
          standalone-cluster-mode: "false"
    kind: ClientConfig
    metadata:
      creationTimestamp: null
    

    New value:

    [root@10 /home/admin]# cat /root/.config/tanzu/config.yaml
    apiVersion: config.tanzu.vmware.com/v1alpha1
    clientOptions:
      cli:
        discoverySources:
        - oci:
            image: projects.registry.vmware.com/tkg/packages/standalone/standalone-plugins:v0.11.6-1-g90440e2b_vmware.1
            name: default
        edition: tkg
      features:
        cluster:
          custom-nameservers: "false"
          dual-stack-ipv4-primary: "false"
          dual-stack-ipv6-primary: "false"
        global:
          context-aware-cli-for-plugins: "true"
        management-cluster:
          custom-nameservers: "false"
          dual-stack-ipv4-primary: "false"
          dual-stack-ipv6-primary: "false"
          export-from-confirm: "true"
          import: "false"
          network-separation-beta: "false"
          standalone-cluster-mode: "false"
    current: cdc-mgmt-cluster
    kind: ClientConfig
    metadata:
      creationTimestamp: null
    servers:
    - managementClusterOpts:
        context: cdc-mgmt-cluster-admin@cdc-mgmt-cluster
        path: /root/.kube-tkg/config
      name: cdc-mgmt-cluster
      type: managementcluster

    Verify that the information is correct:

    [root@10 /home/admin]# tanzu login --server cdc-mgmt-cluster
    ✔  successfully logged in to management cluster using the kubeconfig cdc-mgmt-cluster
    Checking for required plugins...
    All required plugins are already installed and up-to-date
    [root@10 /home/admin]#

    It is strongly recommended to ensure that the Management Clusters have 3 or more Control Plane nodes before upgrading to VMware Telco Cloud Automation 2.1.

  • vmconfig-operator supports vCenter access only through port 443

    vmconfig-operator supports the deployment of clusters only within vCenter environments that run on port 443. vCenter environments with custom ports are not supported.

  • Updating ako-operator add-on configuration is not supported

    Updating ako-operator add-on configurations, such as Avi Controller credentials and certificates, is not supported.

    Workaround:

    Uninstall and reinstall the ako-operator add-on with the new configuration.

  • Uninstallation of AKO does not delete objects on AVI Controller

    Objects on Avi Controller are not deleted automatically when the load-balance-and-ingress-service add-on is uninstalled from the Workload cluster.

    Workaround:

    Delete the objects from the Avi Controller UI directly.

  • Deployment of IPv6-based Workload Clusters with Kubernetes versions prior to 1.22.x results in a TKG known limitation where the VSPHERE_CONTROL_PLANE_ENDPOINT IP address is assigned to the node and host network pods

    This can result in potential IP address conflicts when a Workload Cluster has more than one Control Plane node.

    Workaround:

    Deploy IPv6-based Workload Clusters with Kubernetes version 1.22.x or later.

  • The vsphere-csi daemonset does not always load the latest configuration after restart

    The vsphere-csi daemonset does not always load the latest configuration after restarting. As a result, the nodes are not labeled with multi-zone information (zone and region), and the topology keys are not populated.

    1. Log in to the Workload Cluster Control Plane node.

    2. After applying the CSI parameters from VMware Telco Cloud Automation and verifying that the operation is complete, restart the vsphere-csi node daemonset manually.

    3. Run the following command to restart vsphere-csi daemonset: kubectl rollout restart ds vsphere-csi-node -n kube-system

    4. Verify the node labels after 2 or more minutes.

    For example:

    capv@work-ran-dish-master-control-plane-788qg [ ~ ]$ kubectl get nodes -L failure-domain.beta.kubernetes.io/zone -L failure-domain.beta.kubernetes.io/region
    NAME                                       STATUS   ROLES                  AGE   VERSION            ZONE   REGION
    work-ran-dish-dishpool1-68d5d8db96-lhlz7   Ready    <none>                 36m   v1.21.2+vmware.1
    work-ran-dish-master-control-plane-788qg   Ready    control-plane,master   37m   v1.21.2+vmware.1
    work-ran-dish-ranpool1-8b5d7f485-52gwl     Ready    <none>                 36m   v1.21.2+vmware.1
      
    capv@work-ran-dish-master-control-plane-788qg [ ~ ]$ kubectl get daemonSet -A -w
    NAMESPACE     NAME                               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE
    kube-system   calico-node                        3         3         3       3            3           kubernetes.io/os=linux            36m
    kube-system   kube-multus-ds-amd64               3         3         3       3            3           kubernetes.io/arch=amd64          33m
    kube-system   kube-proxy                         3         3         3       3            3           kubernetes.io/os=linux            38m
    kube-system   vsphere-cloud-controller-manager   1         1         1       1            1           node-role.kubernetes.io/master=   36m
    kube-system   vsphere-csi-node                   3         3         3       3            3           <none>                            36m
    tca-system    nodeconfig-daemon                  2         2         2       2            2           <none>                            33m
    tca-system    nodeconfig-daemon-control-plane    1         1         1       1            1           node-role.kubernetes.io/master=   33m
      
    capv@work-ran-dish-master-control-plane-788qg [ ~ ]$ kubectl rollout restart ds vsphere-csi-node -n kube-system
    daemonset.apps/vsphere-csi-node restarted
      
    capv@work-ran-dish-master-control-plane-788qg [ ~ ]$ kubectl get nodes -L failure-domain.beta.kubernetes.io/zone -L failure-domain.beta.kubernetes.io/region
    NAME                                       STATUS   ROLES                  AGE   VERSION            ZONE          REGION
    work-ran-dish-dishpool1-68d5d8db96-lhlz7   Ready    <none>                 37m   v1.21.2+vmware.1   tag-zone-02   tag-region
    work-ran-dish-master-control-plane-788qg   Ready    control-plane,master   38m   v1.21.2+vmware.1   tag-zone-01   tag-region
    work-ran-dish-ranpool1-8b5d7f485-52gwl     Ready    <none>                 37m   v1.21.2+vmware.1   tag-zone-4    tag-region
    
  • New Active Directory users cannot log in to VMware Telco Cloud Automation

    New Active Directory users with the option to 'Change Password on next logon' cannot log in to VMware Telco Cloud Automation.

    Workaround:

    Set the user password in AD prior to logging in to VMware Telco Cloud Automation.

  • Fluent-bit pod is stuck in CrashLoopBackOff on worker nodes where the cpu-manager-policy is set to static

    The Fluent-bit pod is stuck in the CrashLoopBackOff state on worker nodes when the cpu-manager-policy is set to static.

    Workaround:

    Change the cpu-manager-policy to none.
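
    For reference, this policy corresponds to the kubelet cpuManagerPolicy setting. The following is a generic KubeletConfiguration fragment; how the policy is actually applied to a Telco Cloud Automation node pool depends on the node pool configuration and is not shown here.

    # Generic kubelet configuration fragment (illustrative only)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cpuManagerPolicy: none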

  • Alertmanager pod of TKG Prometheus add-on fails to boot

    When deploying Prometheus to a TKG cluster, the alertmanager pod of the TKG Prometheus add-on fails to boot.

    Within the Prometheus add-on, the alertmanager pod might be in a CrashLoopBackOff state in vCenter 7.0 U2 and vCenter 7.0 U3 deployments.

    Workaround:

    Provide the cluster.advertise-address argument in the alertmanager deployment YAML. For example:

    containers:
      - name: prometheus-alertmanager
        image: "prom/alertmanager:v0.20.0"
        imagePullPolicy: "IfNotPresent"
        args:
          - --config.file=/etc/config/alertmanager.yml
          - --storage.path=/data
          - --cluster.advertise-address=127.0.0.1:9093

Release Notes Change Log

  • 25 OCT 2023: VMware vCenter Server 7.0 Update 3o is added to the Validated Patches section.

  • 4 AUG 2023: VMware NSX-T Data Center Advanced Edition 3.2.2 is added to the Validated Patches section.

  • 20 APR 2023: VMware NSX Advanced Load Balancer 21.1.6 is added to the Validated Patches section.

  • 26 SEP 2022: VMware Telco Cloud Automation 2.1.1 is added to the Validated Patches section.

Support Resources

For additional support resources, see the VMware Telco Cloud Platform documentation page.
