In this topic, you can find information about common issues and workarounds for various data collectors in VMware Telco Cloud Service Assurance.

  1. Collector name: multi-smarts-notifs

    Issue: The Default Notification Console in the Notification view does not display any new notifications generated in the configured SAM server.

    Steps to troubleshoot:
    1. Verify whether the collector is running by checking the status column for the Notification collector under Administration > Integrations > Smarts Integrations > Select SAM > Details > Presentation SAM Details > Notification Collectors.
    2. If the multi-smarts-notifs collector is not running, start the collector and wait for five minutes to synchronize the events from the SAM server.
    3. If the events are still not updated in the Notification Console view, run the following commands on the VMware Telco Cloud Service Assurance control plane node to view the latest log messages from the collector:
      Get the instance name:
      kubectl get deploy -o=custom-columns="NAME:.metadata.name,SEC:.spec.template.spec.containers[*].env[*].value" | grep <SAM ADDRESS> | grep multi-smarts-notifs | cut -f 1 -d ' '
      Get the pod name:
      kubectl get pods -o wide --show-labels --selector 'app=collector-manager,instance_id=<instance name from previous command>' | awk '{print $1}'
      Get the logs:
      kubectl logs -f <pod name from previous command>
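    The three commands above can also be chained into a single sequence. The following is a minimal sketch, not a prescribed procedure: the SAM address is a placeholder for your configured SAM server, and --no-headers is added so that only the pod name is captured.
      # Placeholder: replace with the SAM server address configured for the integration.
      SAM_ADDRESS="<SAM ADDRESS>"
      # Resolve the collector instance (deployment) name from its environment values.
      INSTANCE=$(kubectl get deploy -o=custom-columns="NAME:.metadata.name,SEC:.spec.template.spec.containers[*].env[*].value" | grep "$SAM_ADDRESS" | grep multi-smarts-notifs | cut -f 1 -d ' ')
      # Resolve the collector-manager pod that runs this instance.
      POD=$(kubectl get pods --no-headers --selector "app=collector-manager,instance_id=$INSTANCE" | awk '{print $1}')
      # Stream the latest log messages from the collector.
      kubectl logs -f "$POD"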
  2. Collector name: multi-smarts-topology

    Issue: The Topology Explorer does not show any new instances available in the configured SAM server.

    Steps to troubleshoot:
    1. Verify whether the collector is running by checking the status column for the Topology collector under Administration > Integrations > Smarts Integrations > Select SAM > Details > Presentation SAM Details > Topology Collectors.
    2. If the multi-smarts-topology collector is in the stopped state, start it.
    3. If the Topology Explorer is still not updated in the UI, run the following commands on the VMware Telco Cloud Service Assurance control plane node to view the latest log messages from the collector:
      Get the instance name:
      kubectl get deploy -o=custom-columns="NAME:.metadata.name,SEC:.spec.template.spec.containers[*].env[*].value" | grep <SAM ADDRESS> | grep multi-smarts-topology | cut -f 1 -d ' '
      Get the pod name:
      kubectl get pods -o wide --show-labels --selector 'app=collector-manager,instance_id=<instance name from previous command>' | awk '{print $1}'
      Get the logs:
      kubectl logs -f <pod name from previous command>
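    The same lookup applies to the topology collector. As a variation on the commands above, this sketch reads the most recent log lines and filters for errors instead of streaming the log; the SAM address is again a placeholder.
      SAM_ADDRESS="<SAM ADDRESS>"   # placeholder: configured SAM server address
      INSTANCE=$(kubectl get deploy -o=custom-columns="NAME:.metadata.name,SEC:.spec.template.spec.containers[*].env[*].value" | grep "$SAM_ADDRESS" | grep multi-smarts-topology | cut -f 1 -d ' ')
      POD=$(kubectl get pods --no-headers --selector "app=collector-manager,instance_id=$INSTANCE" | awk '{print $1}')
      # Show the last 200 log lines and highlight errors rather than following the log.
      kubectl logs --tail=200 "$POD" | grep -i error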
  3. Collector name: smarts-metrics

    Issue: The metric data from the INCHARGE-AM-PM domain manager is not collected, and as a result the corresponding reports do not display new or updated data.

    Steps to troubleshoot:
    1. Verify whether the collector is running by checking the status column for the Metric collector under Administration > Integrations > Smarts Integrations > Select SAM > Details > Domain Manager Details > Metric Collectors.
    2. If the metric data is still not updated, run the following commands on the VMware Telco Cloud Service Assurance control plane node to view the latest log messages from the collector:
      Get the instance name:
      kubectl get deploy -o=custom-columns="NAME:.metadata.name,SEC:.spec.template.spec.containers[*].env[*].value" | grep <SAM ADDRESS> | grep smarts-metrics | cut -f 1 -d ' '
      Get the pod name:
      kubectl get pods -o wide --show-labels --selector 'app=collector-manager,instance_id=<instance name from previous command>' | awk '{print $1}'
      Get the logs:
      kubectl logs -f <pod name from previous command>
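    As an additional sanity check, you can confirm that a smarts-metrics deployment actually references the INCHARGE-AM-PM domain before reading its logs. The command below assumes the domain name appears in the deployment's environment values; if it does not in your deployment, filter on the SAM address as in the commands above.
      # List smarts-metrics deployments and check whether INCHARGE-AM-PM appears in their environment values.
      kubectl get deploy -o=custom-columns="NAME:.metadata.name,SEC:.spec.template.spec.containers[*].env[*].value" | grep smarts-metrics | grep INCHARGE-AM-PM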
  4. Collector name: Kafka-Collector

    Issue: The Kafka collector is not collecting data from the source cluster.

    Steps to troubleshoot:
    1. Verify that a collector instance with the category Kafka-Collector exists under Administration > Gateways > Collector, and ensure that the collector is in the running state.
    2. To view the logs from the associated collector, run the following command, where the collector instance name is the same as shown under Administration > Data Collector for the category:
      kubectl get pods --selector 'app=collector-manager' | grep <Collector instance name shown in the UI> | awk '{print $1}'
      For example:
      kubectl get pods --selector 'app=collector-manager' | grep demo | awk '{print $1}'
      kubectl logs -f demo-698df7f84c-qdzdc
      Note: Ensure that the external Kafka is up and running.
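    Because the previous note depends on the external Kafka cluster being reachable, a basic connectivity check from the control plane node can help before reading the collector logs. The broker address, port, and collector instance name below are placeholders for your environment.
      KAFKA_BROKER="<Kafka broker address>"   # placeholder: bootstrap server of the source cluster
      KAFKA_PORT=9092                         # placeholder: adjust if your cluster uses a different port
      # Basic TCP reachability check from the control plane node to the external Kafka broker.
      nc -vz "$KAFKA_BROKER" "$KAFKA_PORT"
      # Then scan the collector logs for Kafka connection or authentication errors.
      COLLECTOR="<Collector instance name shown in the UI>"
      POD=$(kubectl get pods --no-headers --selector 'app=collector-manager' | grep "$COLLECTOR" | awk '{print $1}')
      kubectl logs --tail=200 "$POD" | grep -i -E 'kafka|error'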
  5. Collector name: Cisco IP-SLA Collector

    Issue: The collector does not start, or is in a failed state.

    Steps to troubleshoot:
    1. Start the IP-SLA collector, if it is in a stopped or failed state.
    2. If the collector state is still not updated in the user interface, run the following commands on the control plane node to view the latest logs from the collector:
      1. Get the collector instance name from the UI.
      2. kubectl get pods --selector 'app=collector-manager' | grep <Collector instance name shown in the UI> | awk '{print $1}'
        For example:
        kubectl get pods --selector 'app=collector-manager' | grep ipsla-demo | awk '{print $1}'
      3. kubectl logs -f ipsla-demo-698df7f84c-qdzdc
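    When the pod itself does not start, the container log may be empty, so inspecting the pod events and the previous container instance can help. This is a sketch only; ipsla-demo is the example instance name used above.
      POD=$(kubectl get pods --no-headers --selector 'app=collector-manager' | grep ipsla-demo | awk '{print $1}')
      # Inspect scheduling events, container state, and restart reasons for the pod.
      kubectl describe pod "$POD"
      # If the container keeps restarting, the previous container's log can show why it failed.
      kubectl logs --previous "$POD"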