Application Troubleshooting explains the procedures for troubleshooting the Avi Load Balancer.

What to read next

Packet Capture
This section explains the use of packet capture to troubleshoot the Avi Load Balancer.

Troubleshooting Packet Latencies within SE
The SE time flow tracker can track network characteristics and processing time at critical checkpoints, and flag queuing delays in a packet's journey through the network appliance.

Real-Time Metrics Updates Slow Down After Deployment
The Avi Load Balancer UI displays real-time metrics for objects such as virtual services, pools, and servers. The Past 30 Minutes option is suggested for clearer viewing. However, the graphs might update only every few minutes, resulting in five-minute blocks of averaged data.

SSL Visibility and Troubleshooting
The Avi Load Balancer provides many features to help understand the utilization of SSL traffic and troubleshoot SSL-related issues.

Faults in Avi Load Balancer System
Faults represent issues occurring within the Avi Load Balancer system and depict its state at a specific moment in time. Each fault generates a system event, which is a historical record of the occurrence. Faults must not be confused with alerts: a fault represents an issue at a specific moment in time, whereas alerts are generated based on expressions defined on events.

Tech Support
This section explains the proactive tech support services offered by Avi Load Balancer Pulse.

HTTP Error Codes
The Avi Load Balancer responds with various error codes when an HTTP or HTTPS communication fails. This section describes some common HTTP error codes returned from the server.

Enabling Session Key Capture When Debugging a Virtual Service
This section describes how to enable session key capture when debugging a virtual service using the CLI and the UI.

Servers Flap Up - Down
Server flapping, or bouncing up and down, is a common issue. Generally, server flapping is caused by the server reaching or slightly exceeding the health monitor’s maximum allowed response time. A quick response-time check is sketched after this list.

Non-uniform RR Traffic Distribution
A common requirement when testing a new load balancer is to validate that it balances load correctly. The round-robin load-balancing algorithm is commonly used as a simple test case. A simple distribution tally is sketched after this list.

Reasons for Hypervisor Reporting High CPU Utilization by SE
This section describes why a hypervisor or a host reports high CPU utilization for an Avi Load Balancer SE.

Understanding "No Data" in Analytic Graph
"No Data" in an analytics or log graph on the Avi Load Balancer UI indicates that a particular event or entity has no data to display.

SE_SYN_CACHE_USAGE_HIGH and CONN_DROP_POOL_LB_FAILURE Alerts Observed on Avi Load Balancer UI
This section explains why the SE_SYN_CACHE_USAGE_HIGH and CONN_DROP_POOL_LB_FAILURE alerts are observed on the Avi Load Balancer UI under Operations > Events.

Auto-Rebalance Option Not Working for Service Engines
This section discusses the resolution when the auto-rebalance feature does not work as expected for Service Engines on Avi Load Balancer. When this occurs, virtual services do not scale in or scale out to other Service Engines even though the load on the Service Engines goes above or below the defined threshold values.

Service Engine Failure Detection
Failure detection is essential in achieving Service Engine high availability.

Disparity in CPU Utilization
There can be a difference between the CPU utilization reported by host monitoring tools and the actual CPU utilized by the Avi Load Balancer SEs. This disparity is by design. This article explains the reason for the difference in values. You can also troubleshoot through the Avi Load Balancer UI and CLI.

Overlapping Management Network
The Controller runs Docker locally to provide a sandbox for running the Avi Load Balancer CLI server for non-superuser accounts. By default, Docker creates a Linux bridge interface with the IP address 172.17.0.1/16. This interface can cause conflicts and communication failures between the Controller and hosts in the 172.17.0.0/16 network range, because the Controller has a local next-hop route for the entire /16 subnet through the Docker bridge. An overlap check is sketched after this list.
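For the server-flapping scenario above, a quick way to confirm whether a back-end server is hovering near the health monitor's response-time limit is to time a few plain requests against it. The following is a minimal Python sketch, not an Avi Load Balancer tool; the server URL and the threshold value are placeholders for your own environment, not values taken from any specific health monitor configuration.

# Minimal sketch: time several requests to a back-end server and compare the
# worst case against a health monitor's maximum allowed response time.
# SERVER_URL and THRESHOLD_SECONDS are placeholders for your own environment.
import time
import urllib.request

SERVER_URL = "http://10.0.0.10/healthcheck"   # hypothetical back-end server
THRESHOLD_SECONDS = 4.0                       # hypothetical monitor response-time limit

timings = []
for _ in range(10):
    start = time.monotonic()
    with urllib.request.urlopen(SERVER_URL, timeout=THRESHOLD_SECONDS * 2) as resp:
        resp.read()
    timings.append(time.monotonic() - start)

worst = max(timings)
print(f"worst response time: {worst:.2f}s (limit {THRESHOLD_SECONDS:.1f}s)")
if worst >= THRESHOLD_SECONDS:
    print("server is at or over the monitor limit; flapping is likely")

If the worst-case timing sits at or just over the configured limit, either tune the server or relax the health monitor's response-time settings as described in the linked topic.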
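To sanity-check round-robin distribution, you can issue a batch of independent requests through the virtual service and tally which back end answered each one. The sketch below assumes each server identifies itself in a response header named X-Served-By; that header name and the VIP address are illustrative assumptions, not something the Avi Load Balancer adds by default. Connection reuse and persistence can skew the result, so each request here closes its connection.

# Minimal sketch: send independent requests through the virtual service and
# count responses per back end. Assumes each server tags its responses with an
# "X-Served-By" header (an illustrative assumption, not an Avi default).
import urllib.request
from collections import Counter

VIP_URL = "http://192.0.2.100/"   # hypothetical virtual service VIP
REQUESTS = 100

counts = Counter()
for _ in range(REQUESTS):
    req = urllib.request.Request(VIP_URL, headers={"Connection": "close"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        counts[resp.headers.get("X-Served-By", "unknown")] += 1

for server, hits in counts.most_common():
    print(f"{server}: {hits}/{REQUESTS}")

A roughly even tally suggests round-robin is working; a heavily skewed one usually points to persistence, connection reuse, or health-related server state rather than a balancing defect, as the linked topic explains.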
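For the overlapping management network case, the core of the problem is that the Controller's local route for the Docker bridge subnet shadows any management host whose address falls inside it. The stdlib-only Python sketch below checks given host addresses against the default 172.17.0.0/16 bridge subnet; the host list is illustrative.

# Minimal sketch: flag management hosts whose addresses fall inside the default
# Docker bridge subnet (172.17.0.0/16), since the Controller's local route for
# that subnet would shadow them. The host list below is illustrative.
import ipaddress

DOCKER_BRIDGE_SUBNET = ipaddress.ip_network("172.17.0.0/16")
management_hosts = ["172.17.5.20", "10.10.0.5", "172.16.255.1"]  # hypothetical hosts

for host in management_hosts:
    if ipaddress.ip_address(host) in DOCKER_BRIDGE_SUBNET:
        print(f"{host}: overlaps the Docker bridge subnet; the Controller routes "
              f"this address to the local bridge instead of the management network")
    else:
        print(f"{host}: no overlap with {DOCKER_BRIDGE_SUBNET}")

Any host flagged as overlapping is unreachable from the Controller until the conflict is resolved as described in the Overlapping Management Network topic.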