About VMware Telco Cloud Service Assurance

VMware Telco Cloud Service Assurance is a real-time, automated service assurance solution designed to holistically monitor and manage complex 5G NFV virtual and physical infrastructure and services end to end, from the mobile core to the RAN to the edge. From a single pane of glass, VMware Telco Cloud Service Assurance provides cross-domain, multi-layer, automated assurance in a multi-vendor and multi-cloud environment. It provides operational intelligence to reduce complexity, perform rapid root cause analysis, and see how problems impact services and customers across all layers, lowering costs and improving customer experience.

For information about setting up and using VMware Telco Cloud Service Assurance, see the VMware Telco Cloud Service Assurance Documentation.

What's New

VMware Telco Cloud Service Assurance release 2.0.1.2 brings features and enhancements across the platform and virtual infrastructure management areas. This release introduces new functionality such as patch deployment and rollback, and new use case support for Cell Site Groups.

5G Core and vRAN Assurance

  • Support for Cell Site Group discovery and monitoring: VMware Telco Cloud Service Assurance supports discovery and monitoring of Cell Site Groups, enabling the day 0 launch steps in VMware Telco Cloud Automation.

  • Support for Cell Site Group vRAN connectivity maps, topology browser, new events, and RCA.

  • VMware Telco Cloud Automation pipeline pre-validation and stage reports are now available in the dashboard and reports tabs.

Platform Modernization

VMware Telco Cloud Service Assurance now supports patch deployment and rollback. For more information, see Patch Deployment and Rollback in the VMware Telco Cloud Service Assurance Deployment Guide.

Fixed Issues

  • Apiservice must not allow anonymous users to access cluster resources.

    Apiservice was creating cluster role bindings with anonymous user access, which was a potential security issue. These cluster role bindings have been removed.

  • Kafka Collector deserialization issue.

    The Kafka Collector did not process messages when any of the values in the incoming JSON message contained a '/' character. This issue is now fixed.
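
    For illustration (a hypothetical payload, not taken from the product), a message such as the following previously failed to deserialize because the endpoint value contains '/' characters:

    {"device": "router-01", "endpoint": "https://10.0.0.1/api/v1/metrics"}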

  • VMware Smart Assurance integration fails in VMware Telco Cloud Service Assurance when running on the AKS 1.22.x platform, where the "x" version is 11 or higher.

    VMware Telco Cloud Service Assurance used the Kubernetes Python SDK. Invoking the SDK's get nodes API on Azure failed with the error: Invalid value for `names`, must be `None`.

    This has been fixed by using the REST API to list nodes instead of the Kubernetes Python SDK.
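
    For reference, the following is a minimal sketch of listing nodes through the Kubernetes REST API from inside a pod. It assumes the pod's service account is permitted to list nodes; it is not necessarily the exact call used by the product:

    # Read the in-cluster service account credentials.
    TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
    CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    # GET /api/v1/nodes returns the node list as JSON.
    curl --cacert "$CACERT" -H "Authorization: Bearer $TOKEN" https://kubernetes.default.svc/api/v1/nodes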

Known Issues

  • Cell Site instances are not available in Topology Browser and Maps of VMware Telco Cloud Service Assurance UI.

    To make the Cell Site instances available, follow this procedure:

    1. Log in to the jobmanager pod:

    kubectl exec -it flink-jobmanager-0 bash

    2. Run the following command:

    cd /opt/flink/data/jars/flink-web-upload

    3. Delete the following jar:

    rm TopologyEngine-flinkTBL-shadow-2.0.1-SNAPSHOT.jar

    4. Check the logs of the Topology pod:

    kubectl get pods | grep topology
    kubectl logs -f <<topologypod_name>>

    From the logs, get the respective job IDs for the jobs named "Topology-Cleanup" and "Topology-Processor".

    For example: "jid": "1ffeb29940ca9c06d803fbf18fd5a416"

    5. Log in to the Topology pod:

    kubectl exec -it <<topologypod_name>> bash

    6. Run the following commands:

    curl -X PATCH http://flink-jobmanager:8081/jobs/<<Topology-Cleanup job id>>?mode=cancel
    curl -X PATCH http://flink-jobmanager:8081/jobs/<<Topology-Processor job id>>?mode=cancel
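
    Optionally, as a verification step that is not part of the original workaround, query each job's status through the Flink REST API; a state of CANCELED confirms the cancellation:

    # Returns the job details as JSON, including the current "state" field.
    curl http://flink-jobmanager:8081/jobs/<<Topology-Cleanup job id>>
    curl http://flink-jobmanager:8081/jobs/<<Topology-Processor job id>>
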
  • Grafana service reconciliation fails after patch uninstallation.

    VMware Telco Cloud Service Assurance Patch uninstallation fails with the following error:

    + helm upgrade --wait --install --timeout 1800s tcsa /root/tcx-deployer/product-helm-charts/tcsa -f /root/tcx-deployer/product-helm-charts/tcsa/values.yaml -f /root/tcx-deployer/product-helm-charts/tcsa/values-tkg.yaml -f /root/tcx-deployer/product-helm-charts/tcsa/values-tkg-50k.yaml -f /root/tcx-deployer/product-helm-charts/tcsa/values-imgpkg-overrides.yaml --set registryRootUrl=tcotkgregistry.azurecr.io/tcops25ksanity/tcx --set ingressHostname.product=10.220.142.25 --set footprint=50k --set statusChecker.timeout=1800 --set appSpecs.nginx.helmOverrides.productInfo.footPrint=50k --set asynchronousMode=true --set appDeploymentInterval=2s
    Error: UPGRADE FAILED: post-upgrade hooks failed: timed out waiting for the condition

    Output of the kubectl get apps command:

    grafana Reconcile failed: Deploying: Error (see .status.usefulErrorMessage for details) 2m40s 19h

    Delete the tco-grafana-deployer-job using kubectl delete job tco-grafana-deployer-job. This allows the App CR reconciliation to pass.

    The reconciliation process takes a few minutes, after which Grafana is successfully deployed.
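
    To monitor progress, you can re-run the kubectl get apps check shown above and, if needed, read the detailed message from the .status.usefulErrorMessage field referenced in the error output (a sketch, assuming the App CR is named grafana as in the output above):

    # Re-check the reconciliation status of the grafana App CR.
    kubectl get apps | grep grafana
    # Print the detailed error message if reconciliation is still failing.
    kubectl get apps grafana -o jsonpath='{.status.usefulErrorMessage}'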

  • VMware vRealize Operations and NFV-SOL collectors use old images after upgrading to the 2.0.1 patch.

    Existing VMware vRealize Operations and NFV-SOL collectors continue to use VMware Telco Cloud Service Assurance 2.0.1 images, so the expected Cell Site instances cannot be discovered automatically after upgrading.

    After the patch upgrade, run the following commands to discover and monitor Cell Sites and related entities.

    Identify and delete the existing vROps and NFV-SOL collectors using the following commands:

    kubectl get deployments | egrep "vmware-vrops|nfv-sol"
    kubectl delete deployment <old-collector-id>

    From the ESM Domain Manager installation directory (SMARTS_HOME/bin), run the following commands to delete the existing collector instance IDs:

    ./dmctl -s <ESMServer> -b <Broker:Port> invoke ICF_PersistentDataSet::TCACollectorInstanceIds clear
    ./dmctl -s <ESMServer> -b <Broker:Port> invoke ICF_PersistentDataSet::CNFCollectorInstanceIds clear

    Trigger a Discover All from the ESM Domain Manager.
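
    As an optional check (a sketch, assuming the recreated collectors keep the same naming pattern; <new-collector-id> is a placeholder), verify that the new collector deployments reference the patched image:

    # List the recreated collector deployments.
    kubectl get deployments | egrep "vmware-vrops|nfv-sol"
    # Print the container image used by a recreated collector deployment.
    kubectl get deployment <new-collector-id> -o jsonpath='{.spec.template.spec.containers[*].image}'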

  • After upgrading to VMware Telco Cloud Service Assurance 2.0.1 Patch 2, the application displays the wrong patch version.

    To check the patch version number from the CLI, run the following command:

    kubectl describe cm product-info
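
    Alternatively (an equivalent sketch), dump the full ConfigMap to see all recorded product details:

    # Prints the raw product-info ConfigMap, including its version data, as YAML.
    kubectl get cm product-info -o yaml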

    To display the correct patch version in the application, restart the nginx service:

    kubectl rollout restart deployment/nginx
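
    Optionally, wait for the restart to complete before rechecking the version in the application:

    # Blocks until the nginx deployment rollout has finished.
    kubectl rollout status deployment/nginx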
