To find where subscribers are experiencing degraded QoS in the RAN, identify the root causes, and view actionable fix recommendations provided by the Uhana by VMware platform, use the Focus menu and the Alerts dashboard. To understand how Uhana uses AI to generate these actionable alerts, see Understanding AI-based RAN alerts.

The following topics describe how to use the Focus page.

Filtering alerts

This topic describes how to search and filter alerts along various dimensions on the Focus page. Use the following descriptions to understand the components of the Alerts dashboard.

Alerts dashboard

  • Alerts map - Visualizes where subscribers have poor experience
  • Alerts summary - Summary of the number of alerts, impacted sessions, and cells
  • Alerts breakdown - Pie chart showing the breakdown of alerts per root cause
  • Alerts table - List of alerts sorted by the number of impacted sessions

Using the selection panel

The selection panel is the same as on the Explore page and is used to set the range of cells and the time window over which data is returned. See Using the selection panel for details.

Using the map

The map on the Focus page has the same functionality as the map on the Explore page. The clusters and cells on the map are colored by the number of subscriber sessions that have degraded experience due to RAN root causes. See Using the map for details.

Using the search alerts panel

The Search Alerts panel on the Focus page is used to query alerts along additional dimensions apart from time and geography. Clicking the Search Alerts panel opens the following drop-down menu.

  • Root causes - Filter the alerts by one or more root causes. The following root causes are supported.
    • Load imbalance
      • Excessive offloading to victim cell
        • Misconfigured load balancing parameters
        • Mismatched coverage
      • Traffic offloading not enough
    • Uplink interference
      • External
      • Internal
      • Unknown
    • Poor coverage
      • Misconfigured RF shaping
  • Symptoms - Filter the alerts based on impacted service KPIs. The following service KPIs are supported.
    • Accessibility
    • Downlink throughput
    • Uplink voice quality
    • Retainability
  • Fix type - Filter the alerts based on the type of fix recommendation. The following fix recommendations are supported.
    • Remove external interferer
    • Adjust cell tilt and/or power
  • QCI - Prioritize the alerts based on service impact on selected QCIs. This can be used to prioritize alerts impacting high-value subscribers or enterprise customers who might have been allocated a special QCI for differentiated service.

Make your selections and click Apply to view the matching alerts.
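
Conceptually, the selections in the Search Alerts panel form a multi-dimensional filter over the alert set. The following minimal sketch illustrates that idea in Python; the class, field names, and sample values are hypothetical and are not an API exposed by the Uhana by VMware platform.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AlertFilter:
    """Hypothetical filter mirroring the Search Alerts dimensions described above."""
    root_causes: List[str] = field(default_factory=list)  # e.g. ["Uplink interference: External"]
    symptoms: List[str] = field(default_factory=list)     # e.g. ["Downlink throughput"]
    fix_types: List[str] = field(default_factory=list)    # e.g. ["Adjust cell tilt and/or power"]
    qcis: List[int] = field(default_factory=list)         # e.g. [1] to prioritize a special QCI

    def matches(self, alert: dict) -> bool:
        """An alert passes only if it satisfies every non-empty filter dimension."""
        if self.root_causes and alert["root_cause"] not in self.root_causes:
            return False
        if self.symptoms and alert["symptom"] not in self.symptoms:
            return False
        if self.fix_types and alert["fix_type"] not in self.fix_types:
            return False
        if self.qcis and not set(alert["impacted_qcis"]) & set(self.qcis):
            return False
        return True

# Example: keep only external uplink interference alerts that impact downlink throughput.
all_alerts = [
    {"alert_id": "A-001", "root_cause": "Uplink interference: External",
     "symptom": "Downlink throughput", "fix_type": "Remove external interferer",
     "impacted_qcis": [9]},
    {"alert_id": "A-002", "root_cause": "Load imbalance",
     "symptom": "Accessibility", "fix_type": "Adjust cell tilt and/or power",
     "impacted_qcis": [1]},
]
selected = AlertFilter(root_causes=["Uplink interference: External"],
                       symptoms=["Downlink throughput"])
visible_alerts = [a for a in all_alerts if selected.matches(a)]  # -> only A-001
```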

Visualizing alerts

This topic describes the different components in the Focus page that are relevant to visualizing alerts.

Alerts summary

This section provides a summary of the alerts for the given user selection.

  • Alerts - The number of alerts generated by the system satisfying the selection criteria
  • Affected entities - The number of cells affected by at least one alert
  • Impacted sessions - The number of subscriber sessions impacted by alerts
  • Percent impacted - The percentage of subscriber sessions impacted by alerts
  • Impacted sessions breakdown - The distribution of subscriber sessions impacted per root cause
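
As a rough sketch of how these summary numbers relate to the underlying alerts, the snippet below computes them from simplified, hypothetical alert records (assuming, for simplicity, that no session is impacted by more than one alert). The field names and values are illustrative and do not reflect internal Uhana by VMware data structures.

```python
from collections import Counter

# Hypothetical alert records for the current selection.
alerts = [
    {"cells": ["cell_12", "cell_13"], "impacted_sessions": 480, "root_cause": "Uplink interference"},
    {"cells": ["cell_13"],            "impacted_sessions": 120, "root_cause": "Load imbalance"},
]
total_sessions = 20_000  # all subscriber sessions in the selected cells and time window

num_alerts = len(alerts)                                           # Alerts
affected_entities = len({c for a in alerts for c in a["cells"]})   # Affected entities
impacted_sessions = sum(a["impacted_sessions"] for a in alerts)    # Impacted sessions
percent_impacted = 100.0 * impacted_sessions / total_sessions      # Percent impacted
breakdown = Counter()                                              # Impacted sessions breakdown
for a in alerts:
    breakdown[a["root_cause"]] += a["impacted_sessions"]

print(num_alerts, affected_entities, impacted_sessions, f"{percent_impacted:.1f}%", dict(breakdown))
```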

Alerts table

The alerts table provides a list of alerts generated by the system. Each alert indicates a specific subscriber-impacting problem in the system that has a unique fix. An alert can impact multiple cells and persist over a long period of time. The Uhana by VMware system automatically aggregates alerts across space and time to minimize the number of alerts generated.

The alerts table is paginated with a default of 25 alerts per page, which can be configured using the Alerts per page drop-down menu at the bottom of the Focus page.

Each alert in the table consists of the following fields.

  • Alert ID - A unique ID for the alert
  • Symptom - The subscriber service impacted by the alert
  • Duration - The time duration over which the alert was active
  • Affected entities - The list of cells in which subscribers are impacted by the alert
  • Session impact - The number of subscriber sessions impacted by the alert
  • Percent impacted - The percentage of subscriber sessions impacted by the alert among all the sessions present when the alert was active
  • Root cause - The root cause for the alert
  • Actions - The actions that can be taken by a user for the alert
    • Visibility - Navigate to a visibility page for the alert containing a set of KPI charts that explain why the alert was generated with the specific root cause
    • Insights - Navigate to an insights page for the alert containing fix recommendations and a form to provide feedback on observations and actions taken
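
The fields above map naturally onto a per-alert record. The sketch below shows one way such a record could be modeled; the class and field names are illustrative only and are not part of the product.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List

@dataclass
class Alert:
    """Illustrative record mirroring the columns of the alerts table."""
    alert_id: str              # Alert ID: unique ID for the alert
    symptom: str               # Symptom: impacted subscriber service, e.g. "Retainability"
    start: datetime            # start of the period during which the alert was active
    end: datetime              # end of that period
    affected_cells: List[str]  # Affected entities: cells in which subscribers are impacted
    impacted_sessions: int     # Session impact
    percent_impacted: float    # Percent impacted among sessions present while the alert was active
    root_cause: str            # Root cause, e.g. "Poor coverage: Misconfigured RF shaping"

    @property
    def duration(self) -> timedelta:
        """Duration: the time span over which the alert was active."""
        return self.end - self.start
```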

Analyzing alerts

This topic describes how to analyze alerts using their respective visibility and insights pages. These pages are specific to the root cause of the alert, and the analysis workflow for each root cause is described below.

Uplink interference

These alerts are generated when subscribers experience poor service due to high uplink interference.

Visibility KPIs

The visibility KPIs for uplink interference alerts are the same as the Interference KPIs in the Explore page.

External uplink interference insights

For uplink interference alerts where the root cause is Uplink interference: External, the Uhana by VMware system triangulates the location of the external interference source. The insights page for such alerts displays a map with a set of polygons indicating the search areas where an operator can dispatch a field technician to hunt for the external interference source. The polygons are colored from dark red to yellow: the darker the color, the higher the likelihood that the external interference source is present in that polygon.
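
As a rough illustration of the color scale described above, the snippet below maps a likelihood value onto a yellow-to-dark-red range and orders hypothetical search areas so the most likely one is visited first. It is a sketch only and does not reflect the product's actual rendering or triangulation logic.

```python
def polygon_color(likelihood: float) -> str:
    """Map a likelihood in [0, 1] to a color between yellow (low) and dark red (high).

    Illustrative only; the actual color scale used by the product may differ.
    """
    likelihood = max(0.0, min(1.0, likelihood))
    # Interpolate from yellow (255, 255, 0) toward dark red (139, 0, 0).
    r = int(255 + (139 - 255) * likelihood)
    g = int(255 + (0 - 255) * likelihood)
    return f"#{r:02x}{g:02x}00"

# Example: rank hypothetical search areas so a field technician checks the most likely one first.
search_areas = [("polygon_A", 0.85), ("polygon_B", 0.40), ("polygon_C", 0.10)]
for name, likelihood in sorted(search_areas, key=lambda x: x[1], reverse=True):
    print(name, polygon_color(likelihood))
```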

Insights for other uplink interference root causes

For uplink interference alerts where the root cause is Unknown, the insights page provides a heatmap view of uplink interference per PRB for additional investigation.

Feedback form for uplink interference alerts

The insights page for all uplink interference alerts contains a form for the operator or field technician to provide feedback on observations and actions taken for the alert. This form contains the following fields.

  • Root cause - Observed root cause
  • Impacted bands - Frequency spectrum impacted by interference
  • Severity - Perceived severity of the alert
  • Location - Address or landmark of interference source
  • Fix applied - Action taken to fix the interference issue
  • Notes - Additional notes about the observed issue

This information is used by the Uhana by VMware system to measure the accuracy of the generated alerts and retrain AI models used for generating alerts.

Load imbalance

These alerts are generated when subscribers experience poor service due to an abnormal imbalance in the load among different cells that are within their coverage.

Visibility KPIs

The visibility KPIs for load imbalance alerts are the same as the Sector KPIs in the Explore page.

Insights

The insights page for load imbalance alerts shows the Control Channel Utilization chart, which indicates the imbalance in control channel load among the different cells in the sector for which the alert is generated. This can be used for additional investigation.
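
As a simple illustration of what the chart conveys, the snippet below quantifies the spread in control channel utilization across the cells of a sector using two common measures; the numbers are hypothetical and this is not the detection logic used by the platform.

```python
from statistics import mean, pstdev

# Hypothetical control channel utilization (%) for the cells of one sector.
utilization = {"cell_1": 82.0, "cell_2": 35.0, "cell_3": 18.0}

values = list(utilization.values())
spread = max(values) - min(values)                        # gap between most and least loaded cells
coefficient_of_variation = pstdev(values) / mean(values)  # relative dispersion of the load

# A large spread or high CV suggests the load is not balanced across the sector's cells.
print(f"spread = {spread:.1f} percentage points, CV = {coefficient_of_variation:.2f}")
```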

Poor coverage

These alerts are generated when subscribers experience poor service due to abnormally low signal strength relative to other users in the network.

Visibility KPIs

The following KPIs are displayed in the visibility page for coverage alerts.

  • Accessibility & Retainability - Rate of session setup failures and abnormal session releases
  • A5 Handover Count - Number of handovers based on A5 events between different pairs of cells in the sector
  • Path Loss Distribution - Distribution of path loss for users in the cell with the alert and the distribution of path loss for users in the same frequency band in the network
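
To illustrate the kind of comparison the Path Loss Distribution chart supports, the sketch below contrasts percentile statistics of simulated path loss samples for an alerted cell against samples for the same frequency band network-wide; the data and values are hypothetical.

```python
import random
from statistics import quantiles

random.seed(0)
# Simulated path loss samples (dB): the alerted cell versus all cells on the same band.
cell_path_loss = [random.gauss(132, 8) for _ in range(500)]
band_path_loss = [random.gauss(120, 10) for _ in range(5000)]

def percentile_summary(samples):
    q = quantiles(samples, n=100)  # 99 cut points; index 49 is the median, 89 the 90th percentile
    return {"p50": round(q[49], 1), "p90": round(q[89], 1)}

cell_stats = percentile_summary(cell_path_loss)
band_stats = percentile_summary(band_path_loss)
# A cell whose median path loss sits well above the band-wide median points to a coverage problem.
print("alerted cell:", cell_stats, " band-wide:", band_stats)
```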

Insights

The insights page for coverage alerts indicates the cell where adjusting the antenna tilt or increasing the transmission power could resolve the coverage issue.
