This reference documentation describes the various statistics that are collected from the ESXi host transport nodes in your NSX deployment.

The first section in this document describes the host transport node statistics that are visible in the NSX Manager UI.

The remaining sections in this document describe the statistics that are collected by the various datapath modules, which are running in the ESXi host transport nodes. To view these statistics, you must use either the NSX APIs or the NSX Central CLI. The Statistic column in these sections refers to the name of the statistic as seen in the API output. To learn about the API workflow for viewing the host transport node statistics, see Monitor Statistics of NSX Host Transport Nodes Using APIs.
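
Several statistics in the Datapath Stats Tab below are shown as rates (for example, Rate of broadcast packets received), while the underlying module statistics returned by the APIs or the NSX Central CLI are cumulative counters (for example, rxbcastpkts, which counts the number of broadcast packets received). If you work with the raw counters, you can derive an approximate per-second rate yourself by sampling a counter twice. The following is a minimal, illustrative Python sketch; the counter values are made-up sample numbers, not real API output:

  def per_second_rate(counter_t0, counter_t1, interval_s):
      # Convert two samples of a cumulative counter into an approximate rate.
      return (counter_t1 - counter_t0) / interval_s

  # Hypothetical samples of the rxbcastpkts counter, taken 30 seconds apart.
  rate = per_second_rate(1204500, 1213500, 30.0)
  print(f"~{rate:.0f} broadcast packets per second")  # ~300 broadcast packets per second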

Datapath Stats Tab

This tab displays the aggregate values of statistics for a host transport node in the NSX Manager UI.

Statistic Description Release Introduced

Broadcast Packets Received

Rate of broadcast packets received by the VDS from the VM. This statistic internally maps to the statistic - rxbcastpkts.

4.2.0

Broadcast Packets Transmitted

Rate of broadcast packets sent by the VDS to the VM. This statistic internally maps to the statistic - txbcastpkts.

4.2.0

Broadcast rate limiting packet drops

Number of ingress or egress packets dropped by broadcast rate limiting.

Rate limits are used to protect the network or VMs from events such as broadcast storms. You can configure rate limit values in the NSX Manager UI at Networking > Segments > Segment Profile > Segment Security.

This statistic internally maps to these detailed statistics: rx_rate_limit_bcast_drops, rx_rate_limit_mcast_drops, tx_rate_limit_bcast_drops, tx_rate_limit_mcast_drops. For more details, see the individual statistic definitions.

4.2.0
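
Many of the aggregates in this tab map to several detailed statistics. As an illustration only, and assuming the aggregate is simply the sum of the detailed counters it maps to, you could reconstruct it from the detailed values like this (all numbers below are hypothetical):

  # Hypothetical detailed counter values; in practice these come from the API output.
  detailed = {
      "rx_rate_limit_bcast_drops": 120,
      "rx_rate_limit_mcast_drops": 35,
      "tx_rate_limit_bcast_drops": 80,
      "tx_rate_limit_mcast_drops": 15,
  }

  # Assumption: the aggregate equals the sum of the detailed statistics it maps to.
  broadcast_rate_limiting_drops = sum(detailed.values())
  print(broadcast_rate_limiting_drops)  # 250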

DFW

Total packets dropped by the DFW module for various reasons.

Click the NSX Datapath link in the UI to understand details for the drops.

4.2.0

Datapath L3

Total packets dropped by the Datapath L3 module for various reasons.

Click the NSX Datapath link in the UI to understand details for the drops.

4.2.0

Datapath System Error

Total packets dropped due to critical internal system errors. If these statistics increment consistently, it means the ESXi host is running low on resources. Moving some VMs to other hosts might help to ease the load.

This statistic internally maps to these detailed statistics: leaf_rx_system_err_drops, uplink_rx_system_err_drops, pkt_attr_error_drops, tx_dispatch_queue_too_long_drops. For more details, see the individual statistic definitions.

4.2.0

Fast Path

Total packets dropped by the Fastpath module for various reasons.

Click the NSX Datapath link in the UI to understand details for the drops.

4.2.0

Fastpath Flow Hit

Rate of flow table hits in the ENS/flow cache module. This statistic internally maps to the statistic - hits.

4.2.0

Fastpath Flow Miss

Rate of packets that are processed by slowpath because of a flow miss. This statistic does not overlap with the Slowpath Only statistic. This statistic internally maps to the statistic - miss.

4.2.0

Fastpath packet drops

Total number of packets dropped by flow cache fastpath, in the receive or transmit direction to all ports.

This statistic internally maps to these detailed statistics: rx_drops, tx_drops, rx_drops_uplink, tx_drops_uplink, rx_drops_sp, tx_drops_sp. For more details, see the individual statistic definitions.

4.2.0

Firewall flood limit packet drops

Number of packets dropped because the flood limit for a protocol was exceeded. A flood limit is configured for different protocols in the kernel interface.

This statistic internally maps to these detailed statistics: udp_flood_overlimit_drops, tcp_flood_overlimit_drops, icmp_flood_overlimit_drops, other_flood_overlimit_drops. For more details, see the individual statistic definitions.

4.2.0

Firewall internal error packet drops

Number of packets dropped by the firewall due to internal errors.

This statistic internally maps to these detailed statistics: memory_drops, state_insert_drops, l7_attr_error_drops, lb_reject_drops, src_limit_misc. For more details, see the individual statistic definitions.

4.2.0

Firewall malformed packet drops

Number of malformed packets dropped by the firewall.

This statistic internally maps to these detailed statistics: fragment_drops, short_drops, normalize_drops, bad_timestamp_drops, proto_cksum_drops. For more details, see the individual statistic definitions.

4.2.0

Firewall packet rejects

Number of packets rejected by the firewall for various reasons.

This statistic internally maps to these detailed statistics: rx_ipv4_reject_pkts, tx_ipv4_reject_pkts, rx_ipv6_reject_pkts, tx_ipv6_reject_pkts. For more details, see the individual statistic definitions.

4.2.0

Firewall rule received packet drops

Number of received packets dropped by hitting a drop or reject distributed firewall rule.

This statistic internally maps to the statistic - match_drop_rule_rx_drops.

4.2.0

Firewall rule transmitted packet drops

Number of transmitted packets dropped by hitting a drop or reject distributed firewall rule.

This statistic internally maps to the statistic - match_drop_rule_tx_drops.

4.2.0

Firewall state check packet drops

Number of packets dropped due to state related checks.

This statistic internally maps to these detailed statistics: icmp_err_pkt_drops, alg_handler_drops, syn_expected_drops, ip_option_drops, syn_proxy_drops, spoof_guard_drops, state_mismatch_drops, strict_no_syn_drops. For more details, see the individual statistic definitions.

4.2.0

Firewall state table full packet drops

Number of packets dropped due to the maximum limit of states being reached. For instance, if the number of TCP states is higher than the limit, it results in a drop. This statistic internally maps to the statistic - state_limit_drops.

4.2.0

Firewall total packet drops

Number of total packets dropped by the firewall for various reasons.

This statistic internally maps to these detailed statistics: rx_ipv4_reject_pkts, tx_ipv4_reject_pkts, rx_ipv6_reject_pkts, tx_ipv6_reject_pkts, rx_ipv4_drop_pkts, tx_ipv4_drop_pkts, rx_ipv6_drop_pkts, tx_ipv6_drop_pkts, rx_l2_drop_pkts, tx_l2_drop_pkts. For more details, see the individual statistic definitions.

4.2.0

Hostswitch network mismatch packet drops

Number of unicast, multicast, broadcast packets dropped due to VNI or VLAN tag mismatch.

This statistic internally maps to these detailed statistics: vlan_tag_mismatch_rx, vlan_tag_mismatch_tx, vni_tag_mismatch_tx, vlan_tag_mismatch_rx_mcast, vlan_tag_mismatch_tx_mcast, vni_tag_mismatch_tx_mcast. For more details, see the individual statistic definitions.

4.2.0

Hostswitch received forged MAC packet drops

Number of packets dropped as forged drops because the source MAC address of the packet is different from the MAC address of the virtual machine adapter.

These drops occur when forged transmits or MAC learning is disabled on the segment. Enabling MAC learning or forged transmits on the segment should mitigate the issue.

This statistic internally maps to the statistic - forged_transmit_rx_drops.

4.2.0

L3 hop limit packet drops

Number of IPv4 or IPv6 packets dropped due to a low TTL (Time-To-Live) value. Each logical router instance decrements the TTL value by 1. Use packet capture to determine which packets have low TTL values.

This statistic internally maps to these detailed statistics: ttl_ip4_drops, ttl_ipv6_drops. For more details, see the individual statistic definitions.

4.2.0

L3 neighbor unreachable packet drops

Number of IPv4 or IPv6 packets dropped due to failed neighbor resolution.

This statistic internally maps to these detailed statistics: arp_hold_pkt_drops, ns_hold_pkt_drops.

4.2.0

L3 no route packet drops

Each logical router instance has its own routing table for route lookups. This statistic increases when IPv4 or IPv6 packets are dropped because there is no matching route for that logical router instance.

This statistic internally maps to these detailed statistics: no_route_ipv4_drops, no_route_ipv6_drops. For more details, see the individual statistic definitions.

4.2.0

L3 reverse path forwarding packet drops

Number of IPv4 or IPv6 packets dropped due to a reverse path forwarding check failure. The distributed router checks whether the source IP of a packet comes from a valid (reachable) source and might drop the packet based on the configuration.

You can change this setting in the NSX Manager UI.

This statistic internally maps to these detailed statistics: rpf_ipv4_drops, rpf_ipv6_drops. For more details, see the individual statistic definitions.

4.2.0

Mac Learning Table Full

Rate of packet drops due to MAC table update failures at the time of MAC learning either from the central control plane (CCP) or for the packets received from the underlay network.

Check if the MAC table is full on the host transport node by using the following command:

$ nsxcli -c "get segment mac-table"

If required, increase the MAC table size.

This statistic internally maps to these detailed statistics: mac_tbl_update_full, mac_tbl_lookup_full. For more details, see the individual statistic definitions.

4.2.0

Multicast Packets Received

Rate of multicast packets received by the VDS from the VM.

This statistic internally maps to the statistic - rx_mcast_pkts.

4.2.0

Multicast Packets Transmitted

Rate of multicast packets sent by the VDS to the VM.

This statistic internally maps to the statistic - tx_mcast_pkts.

4.2.0

Overlay Datapath L2

Total packets dropped by the Overlay Datapath L2 module for various reasons.

Click the NSX Datapath link in the UI to understand details for the drops.

4.2.0

Overlay Datapath Transmitted to Uplink

Rate of unicast packets flooded to remote VTEPs due to MAC table lookup failure. Large values imply unidirectional L2 flows or MAC table update issues.

Check if the MAC table is full on the host transport node by using the following command:

$ nsxcli -c "get segment mac-table"

If required, increase the MAC table size.

This statistic internally maps to the statistic - mac_tbl_lookup_flood.

4.2.0

Overlay Unsuccessful Control Plane Assisted Neighbor Resolution

Rate of packet drops due to the control plane not being able to successfully assist in neighbor resolution. The reason could be that the CCP has not learnt the IP-MAC mapping yet, or that the system is running low on packet buffer resources.

This statistic internally maps to these detailed statistics: nd_proxy_resp_unknown, arp_proxy_resp_unknown, nd_proxy_req_fail_drops, arp_proxy_req_fail_drops, arp_proxy_resp_drops, nd_proxy_resp_drops. For more details, see the individual statistic definitions.

4.2.0

Overlay received packet drops

Number of packet drops at the VDL2LeafInput due to various reasons. See the other leaf received drop reasons to identify the specific reason for the drops.

This statistic internally maps to these detailed statistics: leaf_rx_ref_port_not_found_drops, leaf_rx_drops. For more details, see the individual statistic definitions.

4.2.0

Overlay transmitted packet drops

Total number of drops at VDL2LeafOutput due to various reasons. See the other leaf transmitted drop reasons to identify specific reason for the drops.

This statistic internally maps to the statistic - leaf_tx_drops.

4.2.0

Overlay uplink received packet drops

Number of packets dropped at the VDL2UplinkInput due to various reasons. See the other uplink received drop reasons to identify the specific reason for the drops.

This statistic internally maps to these detailed statistics: uplink_rx_drops, uplink_rx_guest_vlan_drops, uplink_rx_invalid_encap_drops, mcast_proxy_rx_drops. For more details, see the individual statistic definitions.

4.2.0

Overlay uplink transmitted packet drops

Total number of packet drops at the VDL2UplinkOutput due to various reasons. See the other uplink transmitted drop reasons to identify the specific reason for the drops.

This statistic internally maps to these detailed statistics: uplink_tx_drops, nested_tn_mcast_proxy_same_vlan_tx_drops, nested_tn_mcast_proxy_diff_vlan_tx_drops, mcast_proxy_tx_drops. For more details, see the individual statistic definitions.

4.2.0

PNIC Received (mbps)

Received megabits per second. This statistic internally maps to the statistic - rxmbps.

4.2.0

PNIC Received (pps)

Received packets per second. This statistic internally maps to the statistic - rxpps.

4.2.0

PNIC Received Drops

Received errors per second. Non-zero value usually indicates the following two cases:
  1. The PNIC RX ring size is too small and the ring can easily fill up due to workload spikes. You can consider increasing the ring size.
  2. The packet rate is too high for the guest to handle. The guest is not able to pull packets out of the PNIC RX ring, leading to packet drops.

This statistic internally maps to the statistic - rxeps.

4.2.0

PNIC Transmitted (mbps)

Transmitted megabits per second. This statistic internally maps to the statistic - txmbps.

4.2.0

PNIC Transmitted (pps)

Transmitted packets per second. This statistic internally maps to the statistic - txpps.

4.2.0

PNIC Transmitted Drops

Transmitted errors per second. This statistic internally maps to the statistic - txeps.

4.2.0

PNICs

Number of physical NICs. This statistic internally maps to the statistic - num_pnics.

4.2.0

Packet parsing error drops

Number of IPv6 neighbor discovery (ND) packets which were not correctly parsed. Examine logs for error messages. Do packet captures at the port to identify if the packets are malformed.

This statistic internally maps to the statistic - nd_parse_errors.

4.2.0

Slowpath Only

Rate of packets that are always processed in slowpath by design. One example is a broadcast packet. This statistic internally maps to the statistic - slowpath.

This statistic internally maps to the statistic - slowpath.

4.2.0

Spoof guard packet drops

Number of IPv4, IPv6, or ARP packets dropped by SpoofGuard. SpoofGuard protects against IP spoofing by maintaining a reference table of VM names, MAC addresses, and IP addresses. This statistic is incremented only if SpoofGuard is enabled on the segment or segment port.

This statistic internally maps to these detailed statistics: spoof_guard_ipv4_drops, spoof_guard_arp_drops, spoof_guard_ipv6_drops, spoof_guard_nd_drops, spoof_guard_non_ip_drops. For more details, see the individual statistic definitions.

4.2.0

Switch Security

Total packets dropped by the Switch Security module for various reasons. Click the NSX Datapath link in the UI to understand details for the drops.

4.2.0

Unknown Tunnel Endpoint

Rate of packet drops for which the source outer MAC address cannot be learned because the incoming GENEVE label is unknown.

Large values of this statistic can point to missing remote VTEP updates on the transport node from the control plane. Use the CLI to check the remote VTEP table on the transport node.

This statistic internally maps to the statistic - uplink_rx_skip_mac_learn.

4.2.0

VNIC Received (mbps)

Received megabits per second. This statistic internally maps to the statistic - rxmbps.

4.2.0

VNIC Received (pps)

Received packets per second. This statistic internally maps to the statistic - rxpps.

4.2.0

VNIC Received Drops

Received errors per second. Non-zero value usually indicates the following two cases:
  1. The VNIC RX ring size is too small and the ring can easily fill up due to workload spikes. You can consider increasing the ring size.
  2. The packet rate is too high for the guest to handle. The guest is not able to pull packets out of the VNIC RX ring, leading to packet drops.

This statistic internally maps to the statistic - rxeps.

4.2.0

VNIC Transmitted (mbps)

Transmitted megabits per second. This statistic internally maps to the statistic - txmbps.

4.2.0

VNIC Transmitted (pps)

Transmitted packets per second. This statistic internally maps to the statistic - txpps.

4.2.0

VNIC Transmitted Drops

Transmitted errors per second. Non-zero value usually indicates the following:
  • The packet rate is too high for the uplink to handle.
  • The uplink is not able to pull packets out of the network stack's queue, leading to packet drops.

This statistic internally maps to the statistic - txeps.

4.2.0

VNICs

Number of virtual NICs. This statistic internally maps to the statistic - num_vnics.

4.2.0

Workload BPDU filter packet drops

Number of packets dropped by BPDU filtering. When the BPDU filter is enabled, traffic to the configured BPDU destination MAC addresses is dropped.

This statistic internally maps to the statistic - bpdu_filter_drops.

4.2.0

Workload DHCP not allowed packet drops

Number of DHCPv4 or DHCPv6 packets dropped by DHCP client/server block.

This statistic internally maps to these detailed statistics: dhcp_client_block_ipv6_drops, dhcp_server_block_ipv6_drops, dhcp_client_block_ipv4_drops, dhcp_server_block_ipv4_drops, dhcp_client_validate_ipv4_drops. For more details, see the individual statistic definitions.

4.2.0

Workload IPv6 RA guard packet drops

Number of IPv6 Router Advertisement packets dropped by RA Guard. The RA Guard feature filters out IPv6 Router Advertisements (ICMPv6 type 134) transmitted from VMs. In an IPv6 deployment, routers periodically multicast Router Advertisement messages, which are used by hosts for autoconfiguration.

You can use RA Guard to protect your network against rogue RA messages generated by unauthorized or improperly configured routers connecting to the network segment. You can configure RA Guard in the NSX Manager UI at Networking > Segments > Segment Profile > Segment Security.

This statistic internally maps to the statistic - ra_guard_drops.

4.2.0

vSwitch

Total packets dropped by the vSwitch module for various reasons.

Click the NSX Datapath link in the UI to understand details for the drops.

4.2.0

vSwitch Received from Uplink

Rate of packets received by the vSwitch from one or more uplinks that are flooded as unknown unicast to other ports in the same broadcast domain.

This statistic increments when packets are unknown unicast flooded in the presence of MAC learning enabled segments or sink ports. Unknown unicast flooding occurs when the destination MAC address of the packet is not found in the vSwitch MAC address table.

This statistic increments when a destination MAC ages out from the MAC address table in the presence of MAC learning. This statistic internally maps to the statistic - unknown_unicast_rx_uplink_pkts.

4.2.0

vSwitch Transmitted to Uplink

Rate of packets unknown unicast flooded by the vSwitch to one or more uplinks.

The statistic increments when packets are unknown unicast flooded in the presence of MAC learning enabled segments or sink ports. Unknown unicast flooding occurs when the destination MAC address of the packet is not found in the vSwitch MAC address table.

This statistic increments when a destination MAC ages out from the MAC address table in the presence of MAC learning. This statistic internally maps to the statistic - unknown_unicast_tx_uplink_pkts.

4.2.0

Module: host_enhanced_fastpath

This datapath module provides host/infrastructure statistics for the ENS datapath module. This datapath module is known as host-fastpath-ens in the NSX Central CLI.

Statistic Description Release Introduced

flow_table_occupancy_0_pct

Histogram: Number of flow tables with 0-25% utilization.

4.2.0

flow_table_occupancy_25_pct

Histogram: Number of flow tables with 25-50% utilization.

4.2.0

flow_table_occupancy_50_pct

Histogram: Number of flow tables with 50-75% utilization.

4.2.0

flow_table_occupancy_75_pct

Histogram: Number of flow tables with 75-90% utilization.

4.2.0

flow_table_occupancy_90_pct

Histogram: Number of flow tables with 90-95% utilization. If the number of active flows becomes greater than the flow table size, you may see an increase in flow misses, leading to a performance degradation. Flow table occupancy histogram statistics are useful to determine whether flow tables are getting full.

Increasing the flow table size doesn't always improve performance if short-lived connections keep coming in: the flow table might remain full regardless of its size, and a larger flow table wouldn't help in this case. EDP has logic to detect this and automatically enable and disable flow tables to handle such a case.

4.2.0

flow_table_occupancy_95_pct

Histogram: Number of flow tables with 95% utilization. If the number of active flows becomes greater than the flow table size, you may see an increase in flow misses, leading to a performance degradation. Flow table occupancy histogram statistics are useful to determine whether flow tables are getting full.

Increasing the flow table size doesn't always improve performance if short-lived connections keep coming in: the flow table might remain full regardless of its size, and a larger flow table wouldn't help in this case. EDP has logic to detect this and automatically enable and disable flow tables to handle such a case.

4.2.0

flow_table_size

Max size of the flow table in EDP.

4.2.0

hits

Number of flow table hits in the ENS module. This statistic can be used to calculate the flow hit / miss / slowpath rate or calculate hit / miss / slowpath ratio.

4.2.0

insertion_errors

Number of flow table insertion errors. This can happen when a flow table is full (or close to full) and there are hash collisions.

4.2.0

miss

Packets that are processed by slowpath because of a flow miss. This statistic does not overlap with the slowpath statistic that is described later in this table. This statistic can be used to calculate the flow hit / miss / slowpath rate or calculate hit / miss / slowpath ratio.

4.2.0

num_flow_tables

Number of flow tables used for EDP. EDP has one flow table per EDP thread. This is useful to see how many flow tables are created and used.

4.2.0

num_flows

Number of flows in the EDP.

4.2.0

num_flows_created

Number of flows created in the EDP. Use this statistic to calculate the flow creation rate, which will be useful to determine the workload characteristics. Flow table occupancy histogram statistics will tell you whether flow tables are full or not.

If the flow creation rate is low and there are no significant changes in num_flows or the flow occupancy statistics, the traffic is stable and in a steady state, and the number of active flows remains stable. If the flow creation rate is high and num_flows is increasing, the number of active flows is increasing, and flow tables will eventually become full if the flow creation rate doesn't drop.

If the flow creation rate is high and the average flow size is not small, you should consider increasing the flow table size. Average flow size = hit rate / num_flows_created rate.

A small value for the average flow size means that flows are short-lived. Both hits and num_flows_created are cumulative counters, so calculate their rates first to get the average flow size during a specific time period.

4.2.0

slowpath

Packets that are always processed in slowpath by design. One example is a broadcast packet. This statistic can be used to calculate the flow hit / miss / slowpath rate or calculate hit / miss / slowpath ratio.

4.2.0
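
The hits, miss, slowpath, and num_flows_created statistics in this module are cumulative, so the hit / miss / slowpath ratio and the average flow size mentioned above are computed from rates over a sampling interval. A minimal Python sketch, using hypothetical counter samples taken 60 seconds apart:

  # Hypothetical cumulative counters from two polls of this module, 60 seconds apart.
  interval = 60.0
  t0 = {"hits": 5000000, "miss": 40000, "slowpath": 2000, "num_flows_created": 9000}
  t1 = {"hits": 5900000, "miss": 46000, "slowpath": 2300, "num_flows_created": 12000}

  rate = {name: (t1[name] - t0[name]) / interval for name in t0}

  processed = rate["hits"] + rate["miss"] + rate["slowpath"]
  hit_ratio = rate["hits"] / processed

  # Average flow size = hit rate / num_flows_created rate (see num_flows_created above).
  avg_flow_size = rate["hits"] / rate["num_flows_created"]

  print(f"hit ratio: {hit_ratio:.1%}, average flow size: {avg_flow_size:.0f} packets per flow")
  # hit ratio: 99.3%, average flow size: 300 packets per flow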

Module: host_fastpath_ens_lcore

This datapath module provides the estimated Lcore usage statistics for the ENS module. Up to 16 Lcores ranked by usage are displayed. If fewer than 16 Lcores are configured, only Lcores with valid IDs are displayed. This datapath module is known as host-fastpath-ens-lcore in the NSX Central CLI.

Statistic Description Release Introduced

lcorerank01_lcoreid

The ID of rank 1 kernel thread for EDP performance mode in terms of CPU usage. Will be displayed only if ID is valid.

4.2.0

lcorerank01_lcoreusage

CPU usage of rank 1 Lcore. Will be displayed only if ID is valid.

4.2.0

lcorerank02_lcoreid

The ID of rank 2 kernel thread for EDP performance mode in terms of CPU usage. Will be displayed only if ID is valid.

4.2.0

lcorerank02_lcoreusage

CPU usage of rank 2 Lcore. Will be displayed only if ID is valid.

4.2.0

lcorerank03_lcoreid

The ID of rank 3 kernel thread for EDP performance mode in terms of CPU usage. Will be displayed only if ID is valid.

4.2.0

lcorerank03_lcoreusage

CPU usage of rank 3 Lcore. Will be displayed only if ID is valid.

4.2.0

lcorerank04_lcoreid

The ID of rank 4 kernel thread for EDP performance mode in terms of CPU usage. Will be displayed only if ID is valid.

4.2.0

lcorerank04_lcoreusage

CPU usage of rank 4 Lcore. Will be displayed only if ID is valid.

4.2.0

lcorerank05_lcoreid

The ID of rank 5 kernel thread for EDP performance mode in terms of CPU usage. Will be displayed only if ID is valid.

4.2.0

lcorerank05_lcoreusage

CPU usage of rank 5 Lcore. Will be displayed only if ID is valid.

4.2.0

lcorerank06_lcoreid

The ID of rank 6 kernel thread for EDP performance mode in terms of CPU usage. Will be displayed only if ID is valid.

4.2.0

lcorerank06_lcoreusage

CPU usage of rank 6 Lcore. Will be displayed only if ID is valid.

4.2.0

lcorerank07_lcoreid

The ID of rank 7 kernel thread for EDP performance mode in terms of CPU usage. Will be displayed only if ID is valid.

4.2.0

lcorerank07_lcoreusage

CPU usage of rank 7 Lcore. Will be displayed only if ID is valid.

4.2.0

lcorerank08_lcoreid

The ID of rank 8 kernel thread for EDP performance mode in terms of CPU usage. Will be displayed only if ID is valid.

4.2.0

lcorerank08_lcoreusage

CPU usage of rank 8 Lcore. Will be displayed only if ID is valid.

4.2.0

lcorerank09_lcoreid

The ID of rank 9 kernel thread for EDP performance mode in terms of CPU usage. Will be displayed only if ID is valid.

4.2.0

lcorerank09_lcoreusage

CPU usage of rank 9 Lcore. Will be displayed only if ID is valid.

4.2.0

lcorerank10_lcoreid

The ID of rank 10 kernel thread for EDP performance mode in terms of CPU usage. Will be displayed only if ID is valid.

4.2.0

lcorerank10_lcoreusage

CPU usage of rank 10 Lcore. Will be displayed only if ID is valid.

4.2.0

lcorerank11_lcoreid

The ID of rank 11 kernel thread for EDP performance mode in terms of CPU usage. Will be displayed only if ID is valid.

4.2.0

lcorerank11_lcoreusage

CPU usage of rank 11 Lcore. Will be displayed only if ID is valid.

4.2.0

lcorerank12_lcoreid

The ID of rank 12 kernel thread for EDP performance mode in terms of CPU usage. Will be displayed only if ID is valid.

4.2.0

lcorerank12_lcoreusage

CPU usage of rank 12 Lcore. Will be displayed only if ID is valid.

4.2.0

lcorerank13_lcoreid

The ID of rank 13 kernel thread for EDP performance mode in terms of CPU usage. Will be displayed only if ID is valid.

4.2.0

lcorerank13_lcoreusage

CPU usage of rank 13 Lcore. Will be displayed only if ID is valid.

4.2.0

lcorerank14_lcoreid

The ID of rank 14 kernel thread for EDP performance mode in terms of CPU usage. Will be displayed only if ID is valid.

4.2.0

lcorerank14_lcoreusage

CPU usage of rank 14 Lcore. Will be displayed only if ID is valid.

4.2.0

lcorerank15_lcoreid

The ID of rank 15 kernel thread for EDP performance mode in terms of CPU usage. Will be displayed only if ID is valid.

4.2.0

lcorerank15_lcoreusage

CPU usage of rank 15 Lcore. Will be displayed only if ID is valid.

4.2.0

lcorerank16_lcoreid

The ID of rank 16 kernel thread for EDP performance mode in terms of CPU usage. Will be displayed only if ID is valid.

4.2.0

lcorerank16_lcoreusage

CPU usage of rank 16 Lcore. Will be displayed only if ID is valid.

4.2.0

Module: host_standard_fastpath

This datapath module provides host/infrastructure statistics for the legacy flow cache datapath module. This datapath module is known as host-fastpath-standard in the NSX Central CLI.

Statistic Description Release Introduced

flow_table_occupancy_0_pct

Histogram: Number of flow tables with 0-25% utilization.

4.2.0

flow_table_occupancy_25_pct

Histogram: Number of flow tables with 25-50% utilization.

4.2.0

flow_table_occupancy_50_pct

Histogram: Number of flow tables with 50-75% utilization.

4.2.0

flow_table_occupancy_75_pct

Histogram: Number of flow tables with 75-90% utilization.

4.2.0

flow_table_occupancy_90_pct

Histogram: Number of flow tables with 90-95% utilization. If the number of active flows becomes greater than the flow table size, you may see an increase in flow misses, leading to a performance degradation. Flow table occupancy histogram statistics are useful to determine whether flow tables are getting full.

Increasing the flow table size doesn't always improve performance if short-lived connections keep coming in: the flow table might remain full regardless of its size, and a larger flow table wouldn't help in this case. EDP has logic to detect this and automatically enable and disable flow tables to handle such a case.

4.2.0

flow_table_occupancy_95_pct

Histogram: Number of flow tables with 95% utilization. If the number of active flows becomes greater than the flow table size, you may see an increase in flow misses, leading to a performance degradation. Flow table occupancy histogram statistics are useful to determine whether flow tables are getting full.

Increasing the flow table size doesn't always improve performance if short-lived connections keep coming in: the flow table might remain full regardless of its size, and a larger flow table wouldn't help in this case. EDP has logic to detect this and automatically enable and disable flow tables to handle such a case.

4.2.0

flow_table_size

Max size of the flow table in EDP.

4.2.0

hits

Number of flow table hits in the ENS module. This statistic can be used to calculate the flow hit / miss / slowpath rate or calculate hit / miss / slowpath ratio.

4.2.0

insertion_errors

Number of flow table insertion errors. This can happen when a flow table is full (or close to full) and there are hash collisions.

4.2.0

miss

Packets that are processed by slowpath because of a flow miss. This statistic does not overlap with the slowpath statistic that is described later in this table. This statistic can be used to calculate the flow hit / miss / slowpath rate or calculate hit / miss / slowpath ratio.

4.2.0

num_flow_tables

Number of flow tables used for EDP. EDP has one flow table per EDP thread. This is useful to see how many flow tables are created and used.

4.2.0

num_flows

Number of flows in the EDP.

4.2.0

num_flows_created

Number of flows created in the EDP. Use this statistic to calculate the flow creation rate, which will be useful to determine the workload characteristics. Flow table occupancy histogram statistics will tell you whether flow tables are full or not.

If the flow creation rate is low and there are no significant changes in num_flows or the flow occupancy statistics, the traffic is stable and in a steady state, and the number of active flows remains stable. If the flow creation rate is high and num_flows is increasing, the number of active flows is increasing, and flow tables will eventually become full if the flow creation rate doesn't drop.

If the flow creation rate is high and the average flow size is not small, you should consider increasing the flow table size. Average flow size = hit rate / num_flows_created rate.

A small value for the average flow size means that flows are short-lived. Both hits and num_flows_created are cumulative counters, so calculate their rates first to get the average flow size during a specific time period.

4.2.0

slowpath

Packets that are always processed in slowpath by design. One example is a broadcast packet. This statistic can be used to calculate the flow hit / miss / slowpath rate or calculate hit / miss / slowpath ratio.

4.2.0

Module: host_net_thread_nioc

This datapath module provides network thread stats related to NIOC. This datapath module is known as host-net-thread-nioc in the NSX Central CLI.

Statistic Description Release Introduced

hist_0_pct

Histogram: Number of threads within 0%-25%

4.2.0

hist_25_pct

Histogram: Number of threads within 25%-50%

4.2.0

hist_50_pct

Histogram: Number of threads within 50%-70%

4.2.0

hist_70_pct

Histogram: Number of threads within 70%-80%

4.2.0

hist_80_pct

Histogram: Number of threads within 80%-85%

4.2.0

hist_85_pct

Histogram: Number of threads within 85%-90%

4.2.0

hist_90_pct

Histogram: Number of threads within 90%-95%

4.2.0

hist_95_pct

Histogram: Number of threads within 95%-97%

4.2.0

hist_97_pct

Histogram: Number of threads within 97%-99%

4.2.0

hist_99_pct

Histogram: Number of threads with >99% utilization.

Network datapath problems are expressed as three symptoms: packet drops, low throughput, and high latency. While these symptoms are shared by both functional and performance problems, more often than not, they are caused by performance-related problems. It is critical to rule out whether the problem is performance-related or not at an early stage of investigation.

In software-defined networking especially built on top of virtualization, CPU is the most critical resource that affects network performance. With faster NICs available in the market, network bandwidth rarely becomes a bottleneck.

Packet processing in datapath usually involves a set of threads executed in a pipeline, associated with queues and buffers that hold packets. When any thread in the pipeline from vCPUs to kernel network threads gets overloaded, the corresponding queues and buffers can get full, leading to packet drops. This throttles throughput.

It is crucial to monitor CPU usage of kernel network threads in addition to traditional network statistics. Instead of using CPU usage numbers for individual threads, we group them and generate a histogram. Then you can monitor the histogram bins like 90pct, 95pct, 97pct, 99pct, which tell you how many networking threads are getting bottlenecked. The total_CPU statistic is also useful to show how much CPU time is spent processing packets in the kernel.

4.2.0
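
For example, to act on these histogram bins programmatically, you could flag a thread group when threads start landing in the top bins. This is only an illustrative Python sketch, with hypothetical histogram values and an arbitrary threshold:

  # Hypothetical histogram values for one thread group (for example, host-net-thread-nioc).
  hist = {"hist_90_pct": 1, "hist_95_pct": 0, "hist_97_pct": 1, "hist_99_pct": 2}

  # Arbitrary rule for illustration: any thread above 97% utilization, or more than
  # two threads above 90%, suggests the thread group is becoming a bottleneck.
  hot_threads = hist["hist_97_pct"] + hist["hist_99_pct"]
  busy_threads = hot_threads + hist["hist_90_pct"] + hist["hist_95_pct"]

  if hot_threads > 0 or busy_threads > 2:
      print(f"possible network thread bottleneck: {hot_threads} thread(s) above 97% utilization")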

max_cpu

Maximum thread CPU utilization

4.2.0

min_cpu

Minimum thread CPU utilization

4.2.0

num_threads

Number of threads used for delivering packets from NetIOC packet scheduler to uplink.

4.2.0

total_cpu

Sum of CPU utilization of all network threads in the group. Total CPU is useful to see the overall CPU usage distributions between different thread groups and VMs.

4.2.0

Module: host_net_thread_rx

This datapath module provides network thread stats related to RX. This datapath module is known as host-net-thread-rx in the NSX Central CLI.

Statistic Description Release Introduced

hist_0_pct

Histogram: Number of threads within 0%-25%

4.2.0

hist_25_pct

Histogram: Number of threads within 25%-50%

4.2.0

hist_50_pct

Histogram: Number of threads within 50%-70%

4.2.0

hist_70_pct

Histogram: Number of threads within 70%-80%

4.2.0

hist_80_pct

Histogram: Number of threads within 80%-85%

4.2.0

hist_85_pct

Histogram: Number of threads within 85%-90%

4.2.0

hist_90_pct

Histogram: Number of threads within 90%-95%

4.2.0

hist_95_pct

Histogram: Number of threads within 95%-97%

4.2.0

hist_97_pct

Histogram: Number of threads within 97%-99%

4.2.0

hist_99_pct

Histogram: Number of threads with >99% utilization.

Network datapath problems are expressed as three symptoms: packet drops, low throughput, and high latency. While these symptoms are shared by both functional and performance problems, more often than not, they are caused by performance-related problems. It is critical to rule out whether the problem is performance-related or not at an early stage of investigation.

In software-defined networking especially built on top of virtualization, CPU is the most critical resource that affects network performance. With faster NICs available in the market, network bandwidth rarely becomes a bottleneck.

Packet processing in datapath usually involves a set of threads executed in a pipeline, associated with queues and buffers that hold packets. When any thread in the pipeline from vCPUs to kernel network threads gets overloaded, the corresponding queues and buffers can get full, leading to packet drops. This throttles throughput.

It is crucial to monitor CPU usage of kernel network threads in addition to traditional network statistics. Instead of using CPU usage numbers for individual threads, we group them and generate a histogram. Then you can monitor the histogram bins like 90pct, 95pct, 97pct, 99pct, which tell you how many networking threads are getting bottlenecked. The total_CPU statistic is also useful to show how much CPU time is spent processing packets in the kernel.

4.2.0

max_cpu

Maximum thread CPU utilization

4.2.0

min_cpu

Minimum thread CPU utilization

4.2.0

num_threads

Number of threads used for delivering packets from NetIOC packet scheduler to uplink.

4.2.0

total_cpu

Sum of CPU utilization of all network threads in the group. Total CPU is useful to see the overall CPU usage distributions between different thread groups and VMs.

4.2.0

Module: host_net_thread_tx

This datapath module provides network thread stats related to TX. This datapath module is known as host-net-thread-tx in the NSX Central CLI.

Statistic Description Release Introduced

hist_0_pct

Histogram: Number of threads within 0%-25%

4.2.0

hist_25_pct

Histogram: Number of threads within 25%-50%

4.2.0

hist_50_pct

Histogram: Number of threads within 50%-70%

4.2.0

hist_70_pct

Histogram: Number of threads within 70%-80%

4.2.0

hist_80_pct

Histogram: Number of threads within 80%-85%

4.2.0

hist_85_pct

Histogram: Number of threads within 85%-90%

4.2.0

hist_90_pct

Histogram: Number of threads within 90%-95%

4.2.0

hist_95_pct

Histogram: Number of threads within 95%-97%

4.2.0

hist_97_pct

Histogram: Number of threads within 97%-99%

4.2.0

hist_99_pct

Histogram: Number of threads with >99% utilization.

Network datapath problems are expressed as three symptoms: packet drops, low throughput, and high latency. While these symptoms are shared by both functional and performance problems, more often than not, they are caused by performance-related problems. It is critical to rule out whether the problem is performance-related or not at an early stage of investigation.

In software-defined networking especially built on top of virtualization, CPU is the most critical resource that affects network performance. With faster NICs available in the market, network bandwidth rarely becomes a bottleneck.

Packet processing in datapath usually involves a set of threads executed in a pipeline, associated with queues and buffers that hold packets. When any thread in the pipeline from vCPUs to kernel network threads gets overloaded, the corresponding queues and buffers can get full, leading to packet drops. This throttles throughput.

It is crucial to monitor CPU usage of kernel network threads in addition to traditional network statistics. Instead of using CPU usage numbers for individual threads, we group them and generate a histogram. Then you can monitor the histogram bins like 90pct, 95pct, 97pct, 99pct, which tell you how many networking threads are getting bottlenecked. The total_CPU statistic is also useful to show how much CPU time is spent processing packets in the kernel.

4.2.0

max_cpu

Maximum thread CPU utilization

4.2.0

min_cpu

Minimum thread CPU utilization

4.2.0

num_threads

Number of threads used for delivering packets from NetIOC packet scheduler to uplink.

4.2.0

total_cpu

Sum of CPU utilization of all network threads in the group. Total CPU is useful to see the overall CPU usage distributions between different thread groups and VMs.

4.2.0

Module: host_pcpu

This datapath module provides the usage of physical CPUs. This datapath module is known as host-pcpu in the NSX Central CLI.

Statistic Description Release Introduced

hist_0_pct

Histogram: Number of CPUs within 0%-50%

4.2.0

hist_50_pct

Histogram: Number of CPUs within 50%-70%

4.2.0

hist_75_pct

Histogram: Number of CPUs within 75%-85%

4.2.0

hist_85_pct

Histogram: Number of CPUs within 85%-90%

4.2.0

hist_90_pct

Histogram: Number of CPUs within 90%-95%

4.2.0

hist_95_pct

Histogram: Number of CPUs within 95%-100%

4.2.0

total_cpu

Total host CPU utilization. Sum of utilization of all physical CPU cores on the host.

Network datapath problems are expressed as three symptoms: packet drops, low throughput, and high latency. While these symptoms are shared by both functional and performance problems, more often than not, they are caused by performance-related problems. It is critical to rule out whether the problem is performance-related or not at an early stage of investigation.

In software-defined networking especially built on top of virtualization, CPU is the most critical resource that affects network performance. With faster NICs available in the market, network bandwidth rarely becomes a bottleneck.

Packet processing in datapath usually involves a set of threads executed in a pipeline, associated with queues and buffers that hold packets. When any thread in the pipeline from vCPUs to kernel network threads gets overloaded, the corresponding queues and buffers can get full, leading to packet drops. This throttles throughput.

It is crucial to monitor CPU usage of kernel network threads in addition to traditional network statistics. Instead of using CPU usage numbers for individual threads, we group them and generate a histogram. Then you can monitor the histogram bins like 90pct, 95pct, 97pct, 99pct, which tell you how many networking threads are getting bottlenecked. The total_CPU statistic is also useful to show how much CPU time is spent processing packets in the kernel.

4.2.0

Module: host_uplink

This datapath module provides the usage of physical uplink NICs. This datapath module is known as host-uplink in the NSX Central CLI.

Statistic Description Release Introduced

num_pnics

Number of physical NICs

4.2.0

rx_error_total

Driver stats: received errors total. Usually, this statistic should have a similar value to rx_missed.

Non-zero value usually indicates two cases:
  1. The PNIC RX ring size is too small and the ring can easily fill up due to workload spikes. You can consider increasing the ring size.
  2. The packet rate is too high for the guest to handle. The guest is not able to pull packets out of the PNIC RX ring, leading to packet drops.
4.2.0

rx_missed

Driver stats: received missed. Usually, this statistic should have a similar value to rx_error_total.

Non-zero value usually indicates two cases:
  1. The PNIC RX ring size is too small and the ring can easily fill up due to workload spikes. You can consider increasing the ring size.
  2. The packet rate is too high for the guest to handle. The guest is not able to pull packets out of the PNIC RX ring, leading to packet drops.
4.2.0

rxeps

Received errors per second.

Non-zero value usually indicates two cases:
  1. The PNIC RX ring size is too small and the ring can easily fill up due to workload spikes. You can consider increasing the ring size.
  2. The packet rate is too high for the guest to handle. The guest is not able to pull packets out of the PNIC RX ring, leading to packet drops.
4.2.0

rxmbps

Received megabits per second

4.2.0

rxpps

Received packets per second

4.2.0

txeps

Transmitted errors per second

4.2.0

txmbps

Transmitted megabits per second

4.2.0

txpps

Transmitted packets per second

4.2.0
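
Because rxeps and rxpps are both per-second values, you can put the received errors in proportion to the receive packet rate to judge how severe the drops are. A minimal Python sketch with hypothetical values:

  # Hypothetical per-second values reported for one uplink.
  rxpps = 250000  # received packets per second
  rxeps = 500     # received errors per second

  error_fraction = rxeps / (rxpps + rxeps)
  print(f"{error_fraction:.2%} of received packets are errored or dropped")  # roughly 0.2%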

Module: host_vnic

This datapath module provides the usage of virtual NICs. This datapath module is known as host-vNIC in the NSX Central CLI.

Statistic Description Release Introduced

num_vnics

Number of virtual NICs.

4.2.0

rxeps

Received errors per second.

Non-zero value usually indicates the following two cases:
  1. The VNIC RX ring size is too small and the ring can easily fill up due to workload spikes. You can consider increasing the ring size.
  2. The packet rate is too high for the guest to handle. The guest is not able to pull packets out of the VNIC RX ring, leading to packet drops.
4.2.0

rxmbps

Received megabits per second.

4.2.0

rxpps

Received packets per second.

4.2.0

txeps

Transmitted errors per second.

Non-zero value usually indicates the following two cases:
  1. The packet rate is too high for the uplink to handle.
  2. The uplink is not able to pull packets out of the network stack's queue, leading to packet drops.
4.2.0

txmbps

Transmitted megabits per second.

4.2.0

txpps

Transmitted packets per second.

4.2.0

Module: host_vcpu

This datapath module provides the usage of virtual CPUs. This datapath module is known as host-vcpu in the NSX Central CLI.

Statistic Description Release Introduced

hist_0_pct

Histogram: Number of CPUs within 0%-50%.

4.2.0

hist_50_pct

Histogram: Number of CPUs within 50%-70%.

4.2.0

hist_75_pct

Histogram: Number of CPUs within 75%-85%

4.2.0

hist_85_pct

Histogram: Number of CPUs within 85%-90%.

4.2.0

hist_90_pct

Histogram: Number of CPUs within 90%-95%.

4.2.0

hist_95_pct

Histogram: Number of CPUs within 95%-100%.

Network datapath problems are expressed as three symptoms: packet drops, low throughput, and high latency. While these symptoms are shared by both functional and performance problems, more often than not, they are caused by performance-related problems. It is critical to rule out whether the problem is performance-related or not at an early stage of investigation.

In software-defined networking especially built on top of virtualization, CPU is the most critical resource that affects network performance. With faster NICs available in the market, network bandwidth rarely becomes a bottleneck.

Packet processing in datapath usually involves a set of threads executed in a pipeline, associated with queues and buffers that hold packets. When any thread in the pipeline from vCPUs to kernel network threads gets overloaded, the corresponding queues and buffers can get full, leading to packet drops. This throttles throughput.

It is crucial to monitor CPU usage of kernel network threads in addition to traditional network statistics. Instead of using CPU usage numbers for individual threads, we group them and generate a histogram. Then you can monitor the histogram bins, which tell you how many networking threads are getting bottlenecked. The total_CPU statistic is also useful to show how much CPU time is spent processing packets in the kernel.

4.2.0

total_cpu

Total vCPU utilization. Sum of CPU utilization of all VMs on the host. Total CPU is useful to see the overall CPU usage distributions between different thread groups and VMs.

4.2.0

Module: fastpath

Fastpath includes flow-cache (FC) and Enhanced network stack (ENS) datapath modules for enhanced datapath packet processing. This datapath module is known as nsxt-fp in the NSX Central CLI.

Statistic Description Release Introduced

rx_bytes

The number of received bytes by flow cache fastpath, in the receive direction from ports.

4.2.0

rx_drops

The number of dropped packets by flow cache fastpath, in the receive direction from ports. This is not applicable for flow cache in non-ENS mode.

4.2.0

rx_drops_sp

Received packet drops when packets are sent to the slowpath from flow cache fastpath. Not applicable for flow cache in non-ENS mode and standard switch mode.

4.2.0

rx_drops_uplink

The number of dropped packets by flow cache fastpath in the receive direction from uplink ports. Not applicable for flow cache in non-ENS mode and standard switch mode.

4.2.0

rx_pkts

The number of received packets by flow cache fastpath in the receive direction from ports.

4.2.0

rx_pkts_sp

Received packets when packets are sent to the slowpath from flow cache fastpath. Not applicable for flow cache in non-ENS mode and standard switch mode.

4.2.0

rx_pkts_uplink

The number of received packets by flow cache fastpath in the receive direction from uplink ports.

4.2.0

tx_bytes

The number of transmitted bytes by flow cache fastpath in the transmit direction to ports.

4.2.0

tx_drops

The number of dropped packets by flow cache fastpath in the transmit direction to ports.

4.2.0

tx_drops_sp

Transmitted packet drops by fastpath when packets are injected back to flow cache fastpath from slowpath. Not applicable for flow cache in non-ENS mode and standard switch mode.

4.2.0

tx_drops_uplink

The number of dropped packets by flow cache fastpath in the transmit direction to uplink ports.

4.2.0

tx_pkts

The number of transmitted packets by flow cache fastpath in the transmit direction to ports.

4.2.0

tx_pkts_sp

Transmitted packets by fastpath when packets are injected back to flow cache fastpath from slowpath. Not applicable for standard switch mode.

4.2.0

tx_pkts_uplink

The number of transmitted packets by flow cache fastpath in the transmit direction to uplink ports.

4.2.0

Module: switch_security

This datapath module provides stateless L2 and L3 security by checking traffic to the segment and dropping unauthorized packets sent from VMs. In this table, Rx refers to packets received by the switch (from the VM), and Tx refers to packets sent by the switch (to the VM). This datapath module is known as nsxt-swsec in the NSX Central CLI.

Statistic Description Release Introduced

bpdu_filter_drops

Number of packets dropped by BPDU filtering. When the BPDU filter is enabled, traffic to the configured BPDU destination MAC addresses is dropped.

4.2.0

dhcp_client_block_ipv4_drops

Number of IPv4 DHCP packets dropped by DHCP Client Block.

DHCP Client Block prevents a VM from acquiring DHCP IP address by blocking DHCP requests. If this is not expected, then you can deactivate DHCPv4 Client Block from the segment security profile of a segment or a segment port. To do this in NSX Manager, navigate to Networking > Segments > Segment Profile > Segment Security.

4.2.0

dhcp_client_block_ipv6_drops

Number of IPv6 DHCP packets dropped by DHCP Client Block.

DHCP Client Block prevents a VM from acquiring DHCP IP address by blocking DHCP requests. If this is not expected, then you can deactivate DHCPv6 Client Block from the segment security profile of a segment or a segment port. To do this in NSX Manager, navigate to Networking > Segments > Segment Profile > Segment Security.

4.2.0

dhcp_client_validate_ipv4_drops

Number of IPv4 DHCP packets dropped because addresses in the payload were not valid.

It is possible that a malicious VM on the network might be trying to send invalid DHCP packets, for example, without source IP, client hardware address not matching source MAC, and so on.

4.2.0

dhcp_server_block_ipv4_drops

Number of IPv4 DHCP packets dropped by DHCP Server Block. DHCP Server Block blocks traffic from a DHCP server to a DHCP client.

If this is not expected, then you can disable DHCP Server Block from the segment security profile of a segment or a segment port. To do this in NSX Manager, navigate to Networking > Segments > Segment Profile > Segment Security.

4.2.0

dhcp_server_block_ipv6_drops

Number of DHCPv6 packets dropped by DHCP Server Block.

DHCP Server Block blocks traffic from a DHCP server to a DHCP client. If this is not expected, then you can disable DHCPv6 Server block from the segment security profile of a segment or a segment port. To do this in NSX Manager, navigate to Networking > Segments > Segment Profile > Segment Security.

4.2.0

nd_parse_errors

Number of IPv6 Neighbor Discovery (ND) packets which were not correctly parsed.

Examine logs for error messages. Do packet captures at the port to identify if the packets are malformed.

4.2.0

ra_guard_drops

Number of IPv6 Router Advertisement packets dropped by RA Guard.

The RA Guard feature filters out IPv6 Router Advertisements (ICMPv6 type 134) transmitted from VMs. In an IPv6 deployment, routers periodically multicast Router Advertisement messages, which are used by hosts for autoconfiguration.

You can use RA Guard to protect your network against rogue RA messages generated by unauthorized or improperly configured routers connecting to the network segment. You can configure RA Guard in the NSX Manager UI at Networking > Segments > Segment Profile > Segment Security.

4.2.0

rx_arp_pkts

Number of ARP packets received by the VDS from the VM.

4.2.0

rx_garp_pkts

Number of Gratuitous ARP (GARP) packets received by the VDS from the VM.

4.2.0

rx_ipv4_pkts

Number of IPv4 packets received by the VDS from the VM.

4.2.0

rx_ipv6_pkts

Number of IPv6 packets received by the VDS from the VM.

4.2.0

rx_na_pkts

Number of IPv6 ND (Neighbor Discovery) NA (Neighbor Advertisement) packets received by the VDS from the VM.

4.2.0

rx_non_ip_pkts

Number of non-IP packets received by the VDS from the VM

4.2.0

rx_ns_pkts

Number of IPv6 ND (Neighbor Discovery) NS (Neighbor Solicitation) packets received by the VDS from the VM.

4.2.0

rx_rate_limit_bcast_drops

Number of ingress packets dropped by broadcast rate limiting.

Rate limits can be used to protect the network or VMs from events such as broadcast storms. You can configure rate limit values in the NSX Manager UI at Networking > Segments > Segment Profile > Segment Security.

4.2.0

rx_rate_limit_mcast_drops

Number of ingress packets dropped by multicast rate limiting.

Rate limits can be used to protect the network or VMs from events such as multicast storms. You can configure rate limit values in the NSX Manager UI at Networking > Segments > Segment Profile > Segment Security.

4.2.0

rx_unsolicited_na_pkts

Number of unsolicited IPv6 ND (Neighbor Discovery) NA (Neighbor Advertisement) packets received by the VDS from the VM.

4.2.0

rxbcastpkts

Number of broadcast packets received by the VDS from the VM.

4.2.0

rxmcastpkts

Number of multicast packets received by the VDS from the VM.

4.2.0

spoof_guard_arp_drops

Number of ARP packets dropped by SpoofGuard.

SpoofGuard protects against malicious ARP spoofing attacks by keeping track of MAC and IP addresses. This statistic will be incremented only when SpoofGuard is enabled on the segment or segment port. (Networking > Segments > Segment Profile > SpoofGuard)

4.2.0

spoof_guard_ipv4_drops

Number of IPv4 packets dropped by SpoofGuard.

SpoofGuard protects against IP spoofing by maintaining a reference table of VM names and IP addresses. This statistic will be incremented only when SpoofGuard is enabled on the segment or segment port. (Networking > Segments > Segment Profile > SpoofGuard)

4.2.0

spoof_guard_ipv6_drops

Number of IPv6 packets dropped by SpoofGuard.

SpoofGuard protects against IP spoofing by maintaining a reference table of VM names and IP addresses. This statistic will be incremented only when SpoofGuard is enabled on the segment or segment port. (Networking > Segments > Segment Profile > SpoofGuard)

4.2.0

spoof_guard_nd_drops

Number of IPv6 Neighbor Discovery (ND) packets dropped by SpoofGuard.

SpoofGuard protects against ND Spoofing by filtering out ND packets whose addresses do not match the VM's address. This statistic will be incremented only when SpoofGuard is enabled on the segment or segment port. (Networking > Segments > Segment Profile > SpoofGuard)

4.2.0

spoof_guard_non_ip_drops

Number of non-IP packets dropped by SpoofGuard.

This statistic will be incremented only when SpoofGuard is enabled on the segment or segment port. (Networking > Segments > Segment Profile > SpoofGuard)

4.2.0

tx_arp_pkts

Number of ARP packets sent by the VDS to the VM.

4.2.0

tx_ipv4_pkts

Number of IPv4 packets sent by the VDS to the VM.

4.2.0

tx_ipv6_pkts

Number of IPv6 packets sent by the VDS to the VM.

4.2.0

tx_non_ip_pkts

Number of non-IP packets sent by the VDS to the VM.

4.2.0

tx_rate_limit_bcast_drops

Number of egress packets dropped by broadcast rate limiting.

Rate limits can be used to protect the network or VMs from events such as broadcast storms. You can configure rate limit values in the NSX Manager UI at Networking > Segments > Segment Profile > Segment Security.

4.2.0

tx_rate_limit_mcast_drops

Number of egress packets dropped by multicast Rate Limiting.

Rate limits can be used to protect the network or VMs from events such as multicast storms. You can configure rate limit values in the NSX Manager UI at Networking > Segments > Segment Profile > Segment Security.

4.2.0

txbcastpkts

Number of broadcast packets sent by the VDS to the VM.

4.2.0

txmcastpkts

Number of multicast packets sent by the VDS to the VM.

4.2.0

Module: overlay_datapath_l2

This datapath module is responsible for workload connectivity. This datapath module is known as nsxt-vdl2 in the NSX Central CLI.

Statistic Description Release Introduced

arp_proxy_req_fail_drops

Number of ARP requests that failed to be resent on the uplink for datapath-based learning when the CCP does not have the IP-MAC binding, leading to ARP suppression failure.

A non-zero value indicates that the system is running low on packet buffer resources. A continuous increment should be treated as a critical error.

4.2.0

arp_proxy_req_suppress

Number of ARP requests suppressed by VDL2 by querying the CCP to find the IP-MAC binding.

These ARP packets are sent out on the uplink only when the CCP does not know the binding.

4.2.0

arp_proxy_resp

Number of valid IP-MAC binding responses from CCP for each ARP suppression request from this transport node.

4.2.0

arp_proxy_resp_drops

Number of ARP responses, generated from the IP-MAC binding response from the CCP, that could not be sent to the switch port that initiated the ARP request.

4.2.0

arp_proxy_resp_filtered

Number of ARP responses generated based on the IP-MAC binding response from the CCP that are not sent to the switch port that initiated the ARP request.

This can happen when the ARP request was triggered by traceflow, or when the port that initiated the ARP request is no longer present in the transport node.

4.2.0

arp_proxy_resp_unknown

Number of unknown IP-MAC bindings in the control plane for each IP-MAC request from this transport node.

On receiving this message, VDL2 module resends the ARP request on uplink to learn IP-MAC binding through datapath.

4.2.0

leaf_rx

For a segment (logical switch), this statistic is incremented when a workload-generated packet is successfully received at the VDL2LeafInput (Overlay L2) IOChain. These packets are sent to the VDS if there are no other leaf received drops.

4.2.0

leaf_rx_drops

Number of packet drops at the VDL2LeafInput due to various reasons.

See the other leaf received drop reasons to identify the specific reason for the drops.

4.2.0

leaf_rx_ref_port_not_found_drops

Number of packet drops at the VDL2LeafInput leaf because the referenced port was not found. This can happen if the trunk port is not part of the segment.

4.2.0

leaf_rx_system_err_drops

Number of packet drops at VDL2LeafInput due to various system errors like memory failure, packet attribute update failures.

This generally means that the ESXi host is running low on resources. Moving some VMs to other hosts might help ease the load.

4.2.0

leaf_tx

This statistic increments when a packet is processed successfully by the VDL2LeafOutput (Overlay L2) IOChain for a switch port.

4.2.0

leaf_tx_drops

Total number of drops at VDL2LeafOutput due to various reasons.

See the other leaf transmitted drop reasons to identify the specific reason for the drops.

4.2.0

mac_tbl_lookup_flood

Number of unicast packets flooded to remote VTEPs due to MAC table lookup failure. Large values imply unidirectional L2 flows or MAC table update issues.

Check if the MAC table is full on the host transport node by using the following command:

$ nsxcli -c "get segment mac-table"

If required, increase the MAC table size.
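As a rough gauge of how close the table is to its limit, the entries in the same CLI output can be counted; this is only a sketch, and the line count includes a few header lines:

nsxcli -c "get segment mac-table" | wc -l     # approximate number of MAC table entries (plus header lines)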

4.2.0

mac_tbl_lookup_full

Number of times the destination MAC query to the control plane for traffic to remote VMs failed because the MAC table is already full.

Check if the MAC table is full on the host transport node by using the following command:

$ nsxcli -c "get segment mac-table"

If required, increase the MAC table size.

4.2.0

mac_tbl_update_full

Number of MAC table update failures at the time of MAC learning for the packets received from underlay.

Check if the MAC table is full on the host transport node by using the following command:

$ nsxcli -c "get segment mac-table"

If required, increase the MAC table size.

4.2.0

mcast_proxy_rx_drops

Number of BUM packets received on the uplink of the MTEP transport node that are dropped while replicating to other VTEPs.

4.2.0

mcast_proxy_tx_drops

Number of BUM packets originated from the workloads in the transport node that are dropped after the replication at the uplink output.

This statistic increments if uplink_tx_invalid_state_drops increments or due to system errors like out-of-memory.

4.2.0

nd_proxy_req_fail_drops

Number of ND requests that could not be resent on the uplink for datapath-based learning when the CCP does not have the IP-MAC binding, leading to ND suppression failure.

A non-zero value indicates that the system is running low on packet buffer resources; a continuous increment should be treated as a critical error.

4.2.0

nd_proxy_req_suppress

Number of ND requests suppressed by VDL2 by querying CCP to find IP-MAC binding.

These ND packets are sent out on uplink only if CCP does not know the binding.

4.2.0

nd_proxy_resp

Number of valid IP-MAC binding responses from CCP for each ND suppression request from this transport node.

These ND responses could be a result of a direct CCP response or of an already cached ND entry in the transport node.

4.2.0

nd_proxy_resp_drops

Number of ND responses, generated from the IP-MAC binding response from the CCP, that could not be sent to the switch port that initiated the ND packet.

4.2.0

nd_proxy_resp_filtered

Number of ND responses generated based on the IP-MAC binding response from the CCP that are not sent to the switch port that initiated the ND request.

This can happen when the ND request was triggered by traceflow, or when the port that initiated the ND request is no longer present in the transport node.

4.2.0

nd_proxy_resp_unknown

Number of unknown IPv6-MAC bindings in the control plane for each IPv6-MAC request from this transport node.

On receiving this message, VDL2 module resends the ND packets on uplink to learn IPv6-MAC binding through datapath.

4.2.0

nested_tn_mcast_proxy_diff_vlan_tx_drops

Number of dropped BUM packets replicated towards nested transport node.

The nested transport node and this transport node are configured with different transport VLAN IDs. Check that the VTEP gateway IP is reachable from this transport node's VTEP VMK interfaces.

4.2.0

nested_tn_mcast_proxy_same_vlan_tx_drops

Number of dropped BUM packets replicated towards nested transport node.

The nested transport node and this transport node are configured with the same transport VLAN ID.

4.2.0

uplink_rx

Number of packets received at the uplink port from the TOR switch.

These packets are sent to the VDS when there are no drops at uplink Rx.

4.2.0

uplink_rx_drops

Number of packet drops at the VDL2UplinkInput due to various reasons.

See the other uplink received drop reasons to identify the specific reason for the drops.

4.2.0

uplink_rx_filtered

Number of packets sent by the TOR switch, which are filtered at the VDL2 uplink for reasons like IGMP reports from peer ESXi transport nodes.

4.2.0

uplink_rx_guest_vlan_drops

Number of packet drops at the VDL2UplinkInput when the guest VLAN tag removal fails for the inner packet due to a system error.

4.2.0

uplink_rx_invalid_encap_drops

Number of packets that are received at uplink from underlay and dropped due to incorrect encapsulation headers.

To understand the exact error, perform the packet capture and verify the encapsulation headers (protocol version, checksum, length, and so on) by running the following command:

pktcap-uw --capture UplinkRcvKernel --uplink --ng -o uplink.pcap
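Once the capture file exists, the encapsulation headers can be inspected offline, for example with the tcpdump-uw utility that ships with ESXi (a sketch; the level of detail depends on the verbosity flags used):

tcpdump-uw -nr uplink.pcap -vv     # inspect the outer IP/UDP (GENEVE, UDP port 6081) headers for length or checksum problems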

4.2.0

uplink_rx_mcast_invalid_dr_uplink_drops

Number of IP multicast packet drops at the VDL2 uplink input because the vdrPort is not associated with this uplink.

This could happen when the TOR switch is flooding the multicast traffic on all the uplinks of the transport node.

Check the vdrPort and uplink association by using following command, and then check if the dropped packet was received on the unassociated uplink:

nsxdp-cli vswitch instance list

4.2.0

uplink_rx_skip_mac_learn

Number of packets for which the source outer MAC cannot be learned because the incoming GENEVE label is unknown.

Large values for this statistic can point to missing remote VTEP updates at transport node from the control plane.

Use the following CLI commands to check the remote VTEP table on the transport node:

nsxcli -c "get global-vtep-table"

$ nsxcli -c "get segment vtep-table"

A possible workaround can be to restart the local control plane agent (CfgAgent) on the transport node to force a full sync by running the following command:

$ /etc/init.d/nsx-cfgagent restart

4.2.0

uplink_rx_system_err_drops

Number of packet drops at the VDL2UplinkInput due to various system errors like memory failure, packet attribute update failures.

This generally means that the ESXi host is running low on resources. Moving some VMs to other hosts might help ease the load.

4.2.0

uplink_rx_wrong_dest_drops

Number of packets received from underlay and dropped as the destination IP of the packet does not match with any of the VTEPs configured on the host.

4.2.0

uplink_tx

Number of packets sent by the VDS that are received at the uplink port's VDL2 IOChain.

These packets are sent to the underlay network when there are no drops at uplink Tx.

4.2.0

uplink_tx_drops

Total number of packet drops at the VDL2UplinkOutput due to various reasons.

See the other uplink transmitted drop reasons to identify the specific reason for the drops.

4.2.0

uplink_tx_flood_rate_limit

Number of unknown unicast packets flooded on uplinks that are rate limited.

4.2.0

uplink_tx_ignore

Number of packets sent by VDS that are filtered at VDL2 uplink output and not forwarded to underlay.

For instance, BUM packets are filtered if there are no VTEPs on the segment to replicate the packets to.

4.2.0

uplink_tx_invalid_frame_drops

Number of packets that are dropped at the VDL2 uplink output because either the encap header could not be found or TSO could not be performed on the inner frame. This typically occurs with large TCP packets.

4.2.0

uplink_tx_invalid_state_drops

Number of packets that are dropped at the VDL2 uplink output due to incorrect transport VLAN configuration. This can be caused by an incorrect uplink profile association at the transport node, or by the gateway MAC not being resolved.

Use the following procedure on the ESXi node to check whether the VTEP gateway IP is reachable from this transport node's VTEP VMK interfaces; a consolidated command sketch follows the list.

  1. Get the Gateway IP by running the following command: net-vdl2 -l
  2. Get the network stack instance name by running the following command: esxcfg-vmknic -l
  3. Ping the VTEP Gateway IP by running the following command: vmkping -I vmk10 -S
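For example (a sketch only; vmk10 is the interface name used above, and the placeholders stand in for the netstack instance name and gateway IP reported by the first two commands):

net-vdl2 -l                                  # note the VTEP gateway IP
esxcfg-vmknic -l                             # note the VTEP vmk interfaces and their netstack instance
vmkping -I vmk10 -S <netstack-name> <gateway-ip>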
4.2.0

uplink_tx_nested_tn_repl_drops

Number of BUM packets that are dropped at VDL2 uplink output while replicating to nested transport node due to incorrect VTEP association.

Use the following command to check the source switchport to uplink association:

nsxdp-cli vswitch instance list

4.2.0

uplink_tx_non_unicast

Number of broadcast or multicast packets replicated to remote VTEPs. A large rate implies that the transport node has to replicate these packets to remote VTEPs, which might cause stress on the uplink transmit queues.

4.2.0

uplink_tx_teaming_drops

Number of packets that are dropped at VDL2UplinkOutput due to non-availability of the VTEP associated with switchport that originated the traffic.

Use the following command to check the workload switchport's uplink association and the teaming status:

nsxdp-cli vswitch instance list

4.2.0

uplink_tx_ucast_flood

Number of unknown unicast packets flooded at the uplink output. Large values imply a unidirectional L2 flow or MAC table update issues.

Check if the unidirectional flow is expected or if the MAC table is full.

4.2.0

Module: datapath_l3

This datapath module, also known as Virtual Distributed Routing (VDR), routes packets on every ESXi host. This datapath module is known as nsxt-vdrb in the NSX Central CLI.
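To see which Distributed Router instances this module is handling on a given host, the host-local NSX CLI can list them; this is a commonly available starting point, although the exact subcommands vary by release:

nsxcli -c "get logical-routers"     # lists the logical (distributed) router instances on this transport node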

Statistic Description Release Introduced

arp_hold_pkt_drops

When Distributed Router is in the process of resolving an IPv4 ARP entry, the packets using this ARP entry will be queued.

The number of packets that can be queued is capped per logical router instance. When the cap is reached, the oldest packets will be tail dropped and this statistic will increase by the number of old packets that are dropped.

4.2.0

arpfaildrops (lta)

IPv4 packet drops due to ARP failure.

4.2.0

consumed_icmpv4

Number of ICMPv4 packets that are destined for a logical routing port IP address of the Distributed Router corresponding to a given segment.

Keep in mind that this statistic increases after the packet is routed from the source subnet.

4.2.0

consumed_icmpv6

Number of ICMPv6 packets that are destined for a logical routing port IP address of the Distributed Router corresponding to a given segment. Keep in mind that this statistic increases after the packet is routed from the source subnet.

4.2.0

drop_route_ipv4_drops

Number of IPv4 packets matching "drop routes". Drop routes are the routes configured to purposely drop the matching packets.

If it is not expected, check the routes on ESXi host and check the configuration in the management plane.

4.2.0

drop_route_ipv6_drops

Number of IPv6 packets matching "drop routes".

Drop routes are the routes configured to purposely drop the matching packets. If it is not expected, check the routes on ESXi host and check the configuration in the management plane.

4.2.0

ndfaildrops (lta)

IPv6 packet drops due to Neighbor Discovery failure.

4.2.0

no_nbr_ipv4

No IPv4 ARP entry found in Distributed Router's ARP table.

4.2.0

no_nbr_ipv6

No IPv6 neighbor entry found in Distributed Router's neighbor table.

4.2.0

no_route_ipv4_drops

Each logical router instance has its own routing table for route lookups.

This statistic increases when IPv4 packets are dropped due to no matching route existing for that logical router instance.

4.2.0

no_route_ipv6_drops

Each logical router instance has its own routing table for route lookups.

This statistic increases when IPv6 packets are dropped due to no matching route existing for that logical router instance.

4.2.0

ns_hold_pkt_drops

When Distributed Router is in the process of resolving an IPv6 neighbor entry, the packets using this neighbor entry will be queued.

The number of packets that can be queued is capped per logical router instance. When the cap is reached, the oldest packets will be tail dropped and this statistic will increase by the number of old packets that are dropped.

4.2.0

pkt_attr_error_drops

Number of packets which failed attribute operation. NSX uses packet attributes to facilitate packet processing.

The packet attributes can be allocated, set or unset. In regular cases, the operation won't fail.

Some possible reasons for this statistic to increment might be the following:
  • Packet attribute heap is exhausted.
  • Packet attribute is corrupted.
4.2.0

relayed_dhcpv4_req

Relayed DHCPv4 requests.

4.2.0

relayed_dhcpv4_rsp

Relayed DHCPv4 responses.

4.2.0

relayed_dhcpv6_req

Relayed DHCPv6 requests.

4.2.0

relayed_dhcpv6_rsp

Relayed DHCPv6 responses.

4.2.0

rpf_ipv4_drops

Number of IPv4 packets dropped due to reverse path forwarding check failure.

Distributed Router may check if the source IP of packets is coming from a valid (reachable) source and may drop the packets based on the configuration.

You can change this setting in the NSX Manager UI.

To check the current configuration in the NSX Manager UI, do these steps:
  1. Navigate to Networking > Segments.
  2. Edit the segment that is of interest to you.
  3. Go to Additional Settings.
  4. Check the URPF Mode.
4.2.0

rpf_ipv6_drops

Number of IPv6 packets dropped due to reverse path forwarding check failure.

Distributed Router may check if the source IP of packets is coming from a valid (reachable) source and may drop the packets based on the configuration.

You can change this setting in the NSX Manager UI.

To check the current configuration in the NSX Manager UI, do these steps:
  1. Navigate to Networking > Segments.
  2. Edit the segment that is of interest to you.
  3. Go to Additional Settings.
  4. Check the URPF Mode.
4.2.0

rx_arp_req

Number of ARP request packets received by the logical router port of a Distributed Router corresponding to a given segment.

4.2.0

rx_ipv4

Number of IPv4 packets that are coming to a logical routing port of a Distributed Router corresponding to a given segment.

4.2.0

rx_ipv6

Number of IPv6 packets that are coming to a logical routing port of a Distributed Router corresponding to a given segment.

4.2.0

rx_pkt_parsing_error_drops

Number of packet parsing failures for received Distributed Router packets.

Distributed Router performs packet parsing for each packet received to read metadata and headers.

If you see a high number for this statistic, one possible reason is that the packets are not structured correctly. Check whether there is any traffic failure and perform a packet capture to debug further.

4.2.0

rxgarp (lta)

Number of gratuitous ARP (GARP) packets received on a Distributed Router.

4.2.0

ttl_ipv4_drops

Number of IPv4 packets dropped due to a low TTL (Time-To-Live) value. Each logical router instance deducts 1 from the TTL value.

Use packet capture to determine which packets have low TTL values. If the TTL is fairly large at the source, possible reasons are too many routing hops on the path or packet is looping, which is rare.
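For example, a capture taken at the VM-facing switch port can be read back to check the TTL values; this is only a sketch, and the switch port number is a placeholder taken from the output of the first command:

net-stats -l                                    # list switch port numbers and their client names
pktcap-uw --switchport <port-number> -o /tmp/ttl.pcap
tcpdump-uw -nr /tmp/ttl.pcap -v                 # verbose output shows the IPv4 TTL (and IPv6 hop limit) per packet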

4.2.0

ttl_ipv6_drops

Number of IPv6 packets dropped due to a low TTL (Time-To-Live) value. Each logical router instance deducts 1 from the TTL value.

Use packet capture to determine which packets have low TTL values. If the TTL is fairly large at the source, possible reasons are too many routing hops on the path or packet is looping, which is rare.

4.2.0

tx_arp_rsp

Number of ARP response packets sent by the logical router port of a Distributed Router corresponding to a given segment.

4.2.0

tx_dispatch_queue_too_long_drops

Number of packets being tail dropped in the transmit dispatch queue.

The transmit dispatch queue holds Distributed Router self generated packets such as ARP packets, NS discovery, and so on.

Each packet consumes packet-handling system resources. If too many packets are queued, the queue size is limited and the excess packets are tail dropped.

4.2.0

tx_ipv4

Number of IPv4 packets that are going out from a logical router port of a Distributed Router corresponding to a given segment.

4.2.0

tx_ipv6

Number of IPv6 packets that are going out from a logical router port of a Distributed Router corresponding to a given segment.

4.2.0

Module: distributed_firewall

This datapath module provides the distributed firewall capability. In this table, Rx refers to packets received by the switchport (sent from the VM), and Tx refers to packets transmitted from the switchport (received by the VM). This datapath module is known as nsxt-vsip in the NSX Central CLI.
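To correlate these counters with a specific VM on the ESXi host, the per-vNIC DFW filter can be inspected directly; summarize-dvfilter and vsipioctl are the tools commonly used for this. The filter name below is a placeholder, and option availability can differ between releases:

summarize-dvfilter                       # find the dvfilter name attached to the VM vNIC
vsipioctl getrules -f <filter-name>      # firewall rules programmed on that filter
vsipioctl getflows -f <filter-name>      # active flows (states) tracked by the filter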

Statistic Description Release Introduced

alg_handler_drops

Number of packets dropped due to ALG handling. It tracks ALG state decoder packet handling errors.

4.2.0

bad_offset_drops

Number of packets dropped by bad-offset.

4.2.0

bad_timestamp_drops

Number of packets dropped due to a bad timestamp. For instance, ACK packets carrying an old timestamp, or packets received with an unexpected timestamp, are dropped.

4.2.0

congestion_drops

Number of packets dropped due to congestion. For instance, congestion detected in the queue for a network interface.

4.2.0

fragment_drops

Number of packets dropped due to failed reassembly of fragmented packets.

Fragmentation breaks packets into smaller fragments so that the resulting pieces can pass through a link with smaller MTU than the original packet size.

4.2.0

handshake_error_drops

Number of packets dropped due to TCP three-way handshake error.

This can happen when both the sender and the receiver are in the SYN-sent state during the three-way handshake. This drop reason falls under TCP state mismatch.

4.2.0

icmp_err_pkt_drops

Number of packets dropped due to extra ICMP error response packets.

This statistic tracks the extra ICMP error response packets that are dropped.

4.2.0

icmp_error_drops

Number of packets dropped due to sequence failure in ICMP error response to TCP packet.

When sequence numbers are out of expected range, it results in a drop.

4.2.0

icmp_flood_overlimit_drops

Number of packets dropped due to ICMP flood overlimit. There is a configured ICMP flood limit in the kernel interface.

4.2.0

ignored_offloaded_fpdrops

Number of packets dropped due to flow offloaded to hardware.

Flow offloaded to the hardware means that connection tracking is being done by a smartNIC's hardware packet pipeline. In that case, getting a packet in software is unexpected. The packet cannot be processed in software as the software has no up-to-date CT information (for example, states, sequence numbers). So traffic must be dropped.

This case can happen because offloading a flow to the hardware takes some time and it can race with packets already enqueued to be delivered to VSIP. This drop reason is for the ENS fastpath packets.

4.2.0

ignored_offloaded_spdrops

Number of packets dropped due to flow offloaded to hardware.

Flow offloaded to the hardware means that connection tracking is being done by a smartNIC's hardware packet pipeline. In that case, getting a packet in software is unexpected. The packet cannot be processed in software as the software has no up-to-date CT information (for example, states, sequence numbers). So traffic must be dropped.

This case can happen because offloading a flow to the hardware takes some time and it can race with packets already enqueued to be delivered to VSIP. This drop reason is for the IOChain code path, also known as slowpath in this context.

4.2.0

ip_option_drops

Number of packets dropped because IP options are not allowed.

If allow_opts is not set in the firewall rule, packets hitting that rule are dropped.

4.2.0

l7_alert_drops

An L7 rule is present, but there is no match; an alert is generated.

4.2.0

l7_attr_error_drops

Number of packets dropped due to failure to set state attributes.

This occurs when allocation or modification of the attrconn L7 attributes fails, resulting in a drop.

4.2.0

l7_pending_misc

This statistic tracks packets that are currently being parsed by DPI while the rule match is still pending.

Once L7 rule match happens, corresponding rule action will be taken on the packet.

4.2.0

lb_reject_drops

This statistic tracks drops due to packets rejected by the load balancer.

Packets are dropped if they match a load balancer virtual server but no pool member is selected.

4.2.0

match_drop_rule_rx_drops

Number of received packets dropped by hitting drop or reject distributed firewall rule.

4.2.0

match_drop_rule_tx_drops

Number of transmitted packets dropped by hitting drop or reject distributed firewall rule.

4.2.0

memory_drops

Number of packets dropped due to lack of memory. This is a capacity level error.

4.2.0

normalize_drops

Number of packets dropped due to malformed packets. For instance, an IP version mismatch or a TCP header offset inconsistent with the total packet length.

4.2.0

other_flood_overlimit_drops

Number of packets dropped due to other protocol flood overlimit. There is a configured flood limit in the kernel interface for other protocols.

4.2.0

pkts_frag_queued_v4_misc

During packet fragmentation, the packet fragments are added to the fragment queue. These packet fragments are not necessarily dropped. Successful packet reassembly means that no fragmented packets are dropped.

This statistic tracks IPv4 packets that are added to the fragment queue.

4.2.0

pkts_frag_queued_v6_misc

During packet fragmentation, the packet fragments are added to the fragment queue. These packet fragments are not necessarily dropped. Successful packet reassembly means that no fragmented packets are dropped.

This statistic tracks IPv6 packets that are added to the fragment queue.

4.2.0

proto_cksum_drops

Number of packets dropped due to incorrect protocol checksum. This occurs when checksum validation of the packet fails.

4.2.0

rx_ipv4_drop_pkts

Number of received IPv4 dropped packets.

4.2.0

rx_ipv4_pass_pkts

Number of received IPv4 passed packets.

4.2.0

rx_ipv4_reject_pkts

Number of received IPv4 rejected packets.

4.2.0

rx_ipv6_drop_pkts

Number of received IPv6 dropped packets.

4.2.0

rx_ipv6_pass_pkts

Number of received IPv6 passed packets.

4.2.0

rx_ipv6_reject_pkts

Number of received IPv6 rejected packets.

4.2.0

rx_l2_drop_pkts

Number of received layer 2 dropped packets.

4.2.0

seqno_bad_ack_drops

Number of packets dropped due to TCP acknowledging forward more than one window.

This drop reason falls under TCP state mismatch.

4.2.0

seqno_gt_max_ack_drops

Number of packets dropped due to TCP sequence number being greater than maximum ACK number.

This drop reason falls under TCP state mismatch.

4.2.0

seqno_lt_minack_drops

Number of packets dropped due to TCP sequence number being smaller than minimum ACK number.

This drop reason falls under TCP state mismatch.

4.2.0

seqno_old_ack_drops

Number of packets dropped due to TCP acknowledging back more than one fragment.

This drop reason falls under TCP state mismatch.

4.2.0

seqno_old_retrans_drops

Number of packets dropped due to TCP retransmission older than one window.

This drop reason falls under TCP state mismatch.

4.2.0

seqno_outside_window_drops

Number of packets dropped due to TCP sequence number outside window.

This drop reason falls under TCP state mismatch.

4.2.0

short_drops

Number of short packets dropped.

Short packets are packets with incorrect length, for instance, packets with malformed ip_len value.

4.2.0

spoof_guard_drops

Number of packets dropped due to SpoofGuard check.

SpoofGuard is a tool designed to prevent virtual machines in your environment from sending traffic with an IP address that they are not authorized to use.

4.2.0

src_limit_misc

Number of packets hitting the source limit.

This is related to firewall packet processing. It occurs when inserting a source node into the Red-Black (RB) tree fails because the limit has been reached.

4.2.0

state_insert_drops

Number of packets dropped due to state insert failure. This occurs due to duplicate state insert.

4.2.0

state_limit_drops

Number of packets dropped due to maximum limit of states reached.

For instance, if the number of TCP states is higher than the limit, packets are dropped.

4.2.0

state_mismatch_drops

Number of packets dropped due to state mismatch.

There are multiple possible reasons for drop, such as STRICTNOSYN, HANDSHAKE_SYNSENT, SEQ_GT_SEQHI, and so on.

4.2.0

strict_no_syn_drops

Number of packets dropped due to strict enforcement mode with no SYN. A SYN packet is expected in strict mode.

4.2.0

syn_expected_drops

A packet matches a load balancer virtual server but is not a SYN packet, so the system should not create a state for it. This results in a packet drop, which this statistic tracks.

4.2.0

syn_proxy_drops

Number of packets dropped due to synproxy. This is to protect TCP servers from attacks such as SYN FLOOD.

4.2.0

tcp_flood_overlimit_drops

Number of packets dropped due to TCP flood overlimit. There is a configured TCP flood limit in the kernel interface.

4.2.0

tx_ipv4_drop_pkts

Number of transmitted IPv4 dropped packets.

4.2.0

tx_ipv4_pass_pkts

Number of transmitted IPv4 passed packets.

4.2.0

tx_ipv4_reject_pkts

Number of transmitted IPv4 rejected packets.

4.2.0

tx_ipv6_drop_pkts

Number of transmitted IPv6 dropped packets.

4.2.0

tx_ipv6_pass_pkts

Number of transmitted IPv6 passed packets.

4.2.0

tx_ipv6_reject_pkts

Number of transmitted IPv6 rejected packets.

4.2.0

tx_l2_drop_pkts

Number of transmitted layer 2 dropped packets.

4.2.0

udp_flood_overlimit_drops

Number of packets dropped due to UDP flood overlimit. There is a configured UDP flood limit in the kernel interface.

4.2.0

Module: virtual_switch

This Layer 2 datapath module is responsible for providing switching functionality. This module forwards packets within a broadcast domain based on the VLAN and VNI an interface receives a packet on. In this table, Rx refers to packets sent "to" the switch, and Tx refers to packets received "from" the switch. Mcast refers to multicast packets. This datapath module is known as nsxt-vswitch in the NSX Central CLI.

Statistic Description Release Introduced

forged_transmit_rx_drops

Number of packets dropped as forged transmits because the source MAC of the packet is different from the MAC of the virtual machine adapter.

These drops occur when both forged transmits and MAC learning are disabled on the segment. Enabling MAC learning or forged transmits on the segment should mitigate the issue.

4.2.0

unknown_unicast_rx_pkts

Number of unknown unicast packets received by vSwitch that are flooded to other ports in the same broadcast domain.

The statistic increments when packets are unknown unicast flooded in the presence of MAC learning enabled segments or sink ports. Unknown unicast flooding occurs when the destination MAC address of the packet is not found in the vSwitch MAC address table.

This statistic increments when a destination MAC ages out from the MAC address table in the presence of MAC learning.

4.2.0

unknown_unicast_rx_uplink_pkts

Number of packets received from one or more uplinks to the vSwitch which are unknown unicast flooded to other ports in the same broadcast domain by the vSwitch.

The statistic increments when packets are unknown unicast flooded in the presence of MAC learning enabled segments or sink ports. Unknown unicast flooding occurs when the destination MAC address of the packet is not found in the vSwitch MAC address table.

This statistic increments when a destination MAC ages out from the MAC address table in the presence of MAC learning.

4.2.0

unknown_unicast_tx_pkts

Number of unknown unicast packets flooded to other ports in the same broadcast domain by the vSwitch.

The statistic increments when packets are unknown unicast flooded in the presence of MAC learning enabled segments or sink ports. Unknown unicast flooding occurs when the destination MAC address of the packet is not found in the vSwitch MAC address table.

This statistic increments when a destination MAC ages out from the MAC address table in the presence of MAC learning.

4.2.0

unknown_unicast_tx_uplink_pkts

Number of packets unknown unicast flooded by the vSwitch to one or more uplinks.

The statistic increments when packets are unknown unicast flooded in the presence of MAC learning enabled segments or sink ports. Unknown unicast flooding occurs when the destination MAC address of the packet is not found in the vSwitch MAC address table.

This statistic increments when a destination MAC ages out from the MAC address table in the presence of MAC learning.

4.2.0

vlan_tag_mismatch_rx

Number of unicast and broadcast packets dropped due to a VLAN tag mismatch.

These drops occur when the VLAN tag of a packet is not allowed according to the VLAN policy of the segment. Amending the VLAN policy of the segment or sending packets with an allowed VLAN tag can mitigate the issue.

4.2.0

vlan_tag_mismatch_rx_mcast

Number of multicast packets dropped due to a VLAN tag mismatch.

These drops occur when the VLAN tag of a packet is not allowed according to the VLAN policy of the segment. Amending the VLAN policy of the segment or sending packets with an allowed VLAN tag can mitigate the issue.

4.2.0

vlan_tag_mismatch_tx

Number of unicast packets dropped due to a VLAN tag mismatch.

The host switch locates an entry in its lookup table based on the destination address of the packet. When attempting to forward the packet out of a port, these drops occur when the VLAN tag of a packet is not allowed according to the VLAN policy of the segment. Amending the VLAN policy of the segment or sending packets with an allowed VLAN tag can mitigate the issue.

4.2.0

vlan_tag_mismatch_tx_mcast

Number of multicast packets dropped due to a VLAN tag mismatch.

The host switch locates an entry in its lookup table based on the destination address of the packet. When attempting to forward the packet out of a port, these drops occur when the VLAN tag of a packet is not allowed according to the VLAN policy of the segment. Amending the VLAN policy of the segment or sending packets with an allowed VLAN tag can mitigate the issue.

4.2.0

vni_tag_mismatch_tx

Number of unicast packets dropped due to a VNI tag mismatch.

The host switch locates an entry in its lookup table based on the destination address of the packet. When attempting to forward the packet out of a port, these drops occur when the VNI tag of a packet is not allowed according to the VNI policy of the segment. Moving the destination VM to this overlay segment can fix the issue.

4.2.0

vni_tag_mismatch_tx_mcast

Number of multicast packets dropped due to a VNI tag mismatch.

The host switch locates an entry in its lookup table based on the destination address of the packet. When attempting to forward the packet out of a port, these drops occur when the VNI tag of a packet is not allowed according to the VNI policy of the segment. Moving the destination VM to this overlay segment can fix the issue.

4.2.0