This topic provides information for understanding and troubleshooting the VMware NSX Edge appliance.

To troubleshoot issues with an NSX Edge appliance, validate that each troubleshooting step below is true for your environment. Each step provides instructions or a link to a document so that you can eliminate possible causes and take corrective action as necessary. The steps are ordered in the most appropriate sequence for isolating the issue and identifying the proper resolution. Do not skip a step.

Check the release notes for the current release to see whether the problem has already been resolved.

Ensure that the minimum system requirements are met when installing VMware NSX Edge. See the NSX Installation Guide.

Installation and Upgrade Issues

  • Verify that the issue you are encountering is not related to the "Would Block" issue. For more information, see https://kb.vmware.com/kb/2107951.

  • If the upgrade or redeploy succeeds but there is no connectivity for the Edge interface, verify connectivity on the back-end Layer 2 switch. See https://kb.vmware.com/kb/2135285.

  • If deployment or upgrade of the Edge fails with the error:

    /sbin/ifconfig vNic_1 up failed : SIOCSIFFLAGS: Invalid argument

    or if the deployment or upgrade succeeds but there is no connectivity on the Edge interfaces, and the output of the show interface command and the Edge Support logs contain entries similar to:

    vNic_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
        link/ether 00:50:56:32:05:03 brd ff:ff:ff:ff:ff:ff
        inet 21.12.227.244/23 scope global vNic_0
        inet6 fe80::250:56ff:fe32:503/64 scope link tentative dadfailed 
           valid_lft forever preferred_lft forever
    

    In both cases, the host switch is not ready or has issues. To resolve the problem, investigate the host switch, for example with the esxcli commands sketched below.
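
    As a starting point, you can inspect the host switch from the ESXi host. The following is a minimal sketch using standard esxcli commands; which command applies depends on whether the Edge vNIC is backed by a standard or a distributed switch:

    # List standard vSwitches with their uplink and port configuration
    esxcli network vswitch standard list

    # List distributed switches that the host participates in
    esxcli network vswitch dvs vmware list

    # Verify that the physical uplinks have link
    esxcli network nic list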

Configuration Issues

  • Collect the NSX Edge diagnostic information. See https://kb.vmware.com/kb/2079380.

    Filter the NSX Edge logs by searching for the string vse_die; the log entries near this string might provide information about the configuration error. An example search is shown below.
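
    For example, after downloading and extracting the diagnostic bundle, you can search the logs on a workstation. This is only a sketch; the exact file layout inside the bundle varies by version:

    # Search all extracted logs for configuration failures (path is illustrative)
    grep -rn "vse_die" var/log/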

Firewall Issues

  • If connections that are idle for a long time are being dropped because of inactivity timeouts, increase the inactivity-timeout settings using the REST API, as sketched below. See https://kb.vmware.com/kb/2101275.
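
    As a sketch, the firewall global configuration REST endpoint shown later in this topic can be used to raise the TCP established-session timeout. The edge ID, credentials, and timeout value below are placeholders, and omitted optional elements are assumed to keep the defaults listed in that payload:

    # Raise the TCP inactivity timeout to 2 hours (edge-1 and the value are illustrative)
    curl -k -u 'admin:<password>' -X PUT \
      -H "Content-Type: application/xml" \
      -d '<globalConfig><tcpTimeoutEstablished>7200</tcpTimeoutEstablished></globalConfig>' \
      https://NSX_Manager_IP/api/4.0/edges/edge-1/firewall/config/global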

Edge Firewall Packet Drop Issues

  1. Check the firewall rules table with the show firewall command. The usr_rules chain displays the configured rules.

    nsxedge> show firewall
    Chain PREROUTING (policy ACCEPT 3146M packets, 4098G bytes)
    rid    pkts bytes target     prot opt in     out     source               destination
    
    Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
    rid    pkts bytes target     prot opt in     out     source               destination
    0     78903   16M ACCEPT     all  --  lo     *       0.0.0.0/0            0.0.0.0/0
    0         0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0            state INVALID
    0      140K 9558K block_in   all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     23789 1184K ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED
    0      116K 8374K usr_rules  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0         0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0
    
    Chain FORWARD (policy ACCEPT 3146M packets, 4098G bytes)
    rid    pkts bytes target     prot opt in     out     source               destination
    
    Chain OUTPUT (policy ACCEPT 173K packets, 22M bytes)
    rid    pkts bytes target     prot opt in     out     source               destination
    
    Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
    rid    pkts bytes target     prot opt in     out     source               destination
    0     78903   16M ACCEPT     all  --  *      lo      0.0.0.0/0            0.0.0.0/0
    0      679K   41M DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0            state INVALID
    0     3146M 4098G block_out  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0         0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            PHYSDEV match --physdev-in tap0 --physdev-out vNic_+
    0         0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            PHYSDEV match --physdev-in vNic_+ --physdev-out tap0
    0         0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            PHYSDEV match --physdev-in na+ --physdev-out vNic_+
    0         0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            PHYSDEV match --physdev-in vNic_+ --physdev-out na+
    0     3145M 4098G ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED
    0      221K   13M usr_rules  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0         0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0
    
    Chain block_in (1 references)
    rid    pkts bytes target     prot opt in     out     source               destination
    
    Chain block_out (1 references)
    rid    pkts bytes target     prot opt in     out     source               destination
    
    Chain usr_rules (2 references)
    rid    pkts bytes target     prot opt in     out     source               destination
    131074 70104 5086K ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set 0_131074-os-v4-1 src
    131075  116K 8370K ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set 1_131075-ov-v4-1 dst
    131073  151K 7844K ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0
    

    Check for an incrementing packet count on the DROP rule with state INVALID in the POSTROUTING chain of the show firewall output. Typical causes include asymmetric routing issues or TCP-based applications that have been inactive for more than one hour. Further evidence of an asymmetric routing issue includes:

    • Ping works in one direction and fails in the other direction

    • Ping works, while TCP does not work

  2. Collect the show ipset command output.

    nsxedge> show ipset
    Name: 0_131074-os-v4-1
    Type: bitmap:if (Interface Match)
    Revision: 3
    Header: range 0-64000
    Size in memory: 8116
    References: 1
    Number of entries: 1
    Members:
    vse (vShield Edge Device)
    
    Name: 0_131074-os-v6-1
    Type: bitmap:if (Interface Match)
    Revision: 3
    Header: range 0-64000
    Size in memory: 8116
    References: 1
    Number of entries: 1
    Members:
    vse (vShield Edge Device)
    
    Name: 1_131075-ov-v4-1
    Type: hash:oservice (Match un-translated Ports)
    Revision: 2
    Header: family inet hashsize 64 maxelem 65536
    Size in memory: 704
    References: 1
    Number of entries: 2
    Members:
    Proto=6, DestPort=179, SrcPort=Any    (encoded: 0.6.0.179,0.6.0.0/16)
    Proto=89, DestPort=Any, SrcPort=Any    (encoded: 0.89.0.0/16,0.89.0.0/16)
    
    Name: 1_131075-ov-v6-1
    Type: hash:oservice (Match un-translated Ports)
    Revision: 2
    Header: family inet hashsize 64 maxelem 65536
    Size in memory: 704
    References: 1
    Number of entries: 2
    Members:
    Proto=89, DestPort=Any, SrcPort=Any    (encoded: 0.89.0.0/16,0.89.0.0/16)
    Proto=6, DestPort=179, SrcPort=Any    (encoded: 0.6.0.179,0.6.0.0/16)
    
  3. Enable logging on a particular firewall rule using the REST API or the Edge user interface, and monitor the logs with the show log follow command.

    If no log entries appear, enable logging on the DROP Invalid rule using the following REST API call; a curl sketch follows the payload.

    URL : https://NSX_Manager_IP/api/4.0/edges/{edgeId}/firewall/config/global
    
    PUT Method 
    Input representation 
    <globalConfig>   <!-- Optional -->
    <tcpPickOngoingConnections>false</tcpPickOngoingConnections>   <!-- Optional. Defaults to false -->
    <tcpAllowOutOfWindowPackets>false</tcpAllowOutOfWindowPackets>    <!-- Optional. Defaults to false -->
    <tcpSendResetForClosedVsePorts>true</tcpSendResetForClosedVsePorts>    <!-- Optional. Defaults to true -->
    <dropInvalidTraffic>true</dropInvalidTraffic>    <!-- Optional. Defaults to true -->
    <logInvalidTraffic>true</logInvalidTraffic>     <!-- Optional. Defaults to false -->
    <tcpTimeoutOpen>30</tcpTimeoutOpen>       <!-- Optional. Defaults to 30 -->
    <tcpTimeoutEstablished>3600</tcpTimeoutEstablished>   <!-- Optional. Defaults to 3600 -->
    <tcpTimeoutClose>30</tcpTimeoutClose>   <!-- Optional. Defaults to 30 -->
    <udpTimeout>60</udpTimeout>             <!-- Optional. Defaults to 60 -->
    <icmpTimeout>10</icmpTimeout>           <!-- Optional. Defaults to 10 -->
    <icmp6Timeout>10</icmp6Timeout>           <!-- Optional. Defaults to 10 -->
    <ipGenericTimeout>120</ipGenericTimeout>    <!-- Optional. Defaults to 120 -->
    </globalConfig>
    Output representation 
    No payload
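
    For example, with curl the call might look like the following sketch. The edge ID and credentials are placeholders, and the payload mirrors the representation above with logging of invalid traffic enabled:

    # Enable logging for the DROP Invalid rule (edge-1 and credentials are placeholders)
    curl -k -u 'admin:<password>' -X PUT \
      -H "Content-Type: application/xml" \
      -d '<globalConfig><dropInvalidTraffic>true</dropInvalidTraffic><logInvalidTraffic>true</logInvalidTraffic></globalConfig>' \
      https://NSX_Manager_IP/api/4.0/edges/edge-1/firewall/config/global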
    

    Use the show log follow command to look for logs similar to:

    2016-04-18T20:53:31+00:00 edge-0 kernel: nf_ct_tcp: invalid TCP flag combination IN= OUT= 
    SRC=172.16.1.4 DST=192.168.1.4 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=43343 PROTO=TCP 
    SPT=5050 DPT=80 SEQ=0 ACK=1572141176 WINDOW=512 RES=0x00 URG PSH FIN URGP=0
    2016-04-18T20:53:31+00:00 edge-0 kernel: INVALID IN= OUT=vNic_1 SRC=172.16.1.4 
    DST=192.168.1.4 LEN=40 TOS=0x00 PREC=0x00 TTL=63 ID=43343 PROTO=TCP SPT=5050 DPT=80 
    WINDOW=512 RES=0x00 URG PSH FIN URGP=0
    

  4. Check for matching connections in the Edge firewall state table with the show flowtable rule_id command:

    nsxedge> show flowtable
    1: tcp  6 21554 ESTABLISHED src=192.168.110.10 dst=192.168.5.3 sport=25981
    dport=22 pkts=52 bytes=5432 src=192.168.5.3 dst=192.168.110.10 sport=22
    dport=25981 pkts=44 bytes=7201 [ASSURED] mark=0 rid=131073 use=1
    2: tcp  6 21595 ESTABLISHED src=127.0.0.1 dst=127.0.0.1 sport=53194
    dport=10001 pkts=33334 bytes=11284650 src=127.0.0.1 dst=127.0.0.1 sport=10001
    dport=53194 pkts=33324 bytes=1394146 [ASSURED] mark=0 rid=0 use=1
    

    Compare the active connection count and the maximum allowed count with the show flowstats command:

    nsxedge> show flowstats
    Total Flow Capacity: 65536
    Current Statistics :
    cpu=0 searched=3280373 found=3034890571 new=52678 invalid=659946 ignore=77605 
    delete=52667 delete_list=49778 insert=49789 insert_failed=0 drop=0 early_drop=0 
    error=0 search_restart=0
    

  5. Check the Edge logs with the show log follow command, and look for any ALG drops. Search for strings similar to tftp_alg, msrpc_alg, or oracle_tns.

Edge Routing Connectivity Issues

  1. Initiate controlled traffic from a client using the ping <destination_IP_address> command.

  2. Capture traffic simultaneously on both interfaces, write the output to a file, and export it using SCP.

    For example:

    Capture the traffic on the ingress interface with this command:

    debug packet display interface vNic_0 -n_src_host_1.1.1.1

    Capture the traffic on the egress interface with this command:

    debug packet display interface vNic_1 -n_src_host_1.1.1.1

    For simultaneous packet capture, use the pktcap-uw utility on the ESXi host, as sketched below. See https://kb.vmware.com/kb/2051814.
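
    The following is a minimal sketch, assuming you have identified the switch port ID of the Edge vNIC (for example with net-stats -l); the port ID and output files are illustrative:

    # On the ESXi host, find the switch port ID of the Edge vNIC
    net-stats -l

    # Capture both directions on that port into pcap files
    pktcap-uw --switchport 50331661 --dir 0 -o /tmp/edge_rx.pcap &
    pktcap-uw --switchport 50331661 --dir 1 -o /tmp/edge_tx.pcap &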

    If the packet drops are consistent, check for configuration errors related to:

    • IP addresses and routes

    • Firewall rules or NAT rules

    • Asymmetric routing

    • RP filter checks

    1. Check interface IP/subnets with the show interface command.

    2. If there are missing routes at the data plane, run these commands:

      • show ip route

      • show ip route static

      • show ip route bgp

      • show ip route ospf

    3. Check the routing table for needed routes by running the show ip forwarding command.

    4. If you have multiple paths, run the show rpfilter command.

      nsxedge> show rpfilter
      net.ipv4.conf.VDR.rp_filter = 0
      net.ipv4.conf.all.rp_filter = 0
      net.ipv4.conf.br-sub.rp_filter = 1
      net.ipv4.conf.default.rp_filter = 1
      net.ipv4.conf.lo.rp_filter = 0
      net.ipv4.conf.vNic_0.rp_filter = 1
      net.ipv4.conf.vNic_1.rp_filter = 1
      net.ipv4.conf.vNic_2.rp_filter = 1
      net.ipv4.conf.vNic_3.rp_filter = 1
      net.ipv4.conf.vNic_4.rp_filter = 1
      net.ipv4.conf.vNic_5.rp_filter = 1
      net.ipv4.conf.vNic_6.rp_filter = 1
      net.ipv4.conf.vNic_7.rp_filter = 1
      net.ipv4.conf.vNic_8.rp_filter = 1
      net.ipv4.conf.vNic_9.rp_filter = 1
      
      To check for RPF statistics, run the show rpfstats command.

      nsxedge> show rpfstats
      RPF drop packet count: 484
      

    If the packet drops appear randomly, check for resource limitations:

    1. To check CPU, memory, and storage usage, run these commands:

      • show system cpu

      • show system memory

      • show system storage

      • show process monitor

      • top

        On ESXi, run the esxtop command (press n for the network view).

High CPU Utilization

If you are experiencing high CPU utilization on the NSX Edge, verify the performance of the appliance by using the esxtop command on the ESXi host, and review the related VMware Knowledge Base articles.

Also see https://communities.vmware.com/docs/DOC-9279.
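
As a sketch, you can also run esxtop in batch mode on the ESXi host to capture samples for offline analysis; the interval and iteration count below are illustrative:

# Capture six esxtop samples at 10-second intervals to a CSV file
esxtop -b -d 10 -n 6 > /tmp/edge-host-esxtop.csv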

A high value for the ksoftirqd process indicates a high incoming packet rate. Check whether logging is enabled on the data path, such as for firewall rules. Run the show log follow command to determine whether a large number of log hits are being recorded.

NSX Manager and Edge Communication Issues

The NSX Manager communicates with the NSX Edge through either VIX or the Message Bus. The communication channel is chosen by the NSX Manager when the Edge is deployed and never changes afterward.

VIX

  • VIX is used for NSX Edge if the ESXi host is not prepared.

  • The NSX Manager first gets host credentials from the vCenter Server to connect to the ESXi host.

  • The NSX Manager uses the Edge credentials to log in to the Edge appliance.

  • The vmtoolsd process on the Edge handles the VIX communication.

VIX failures occur for one of the following reasons:

  • The NSX Manager cannot communicate with the vCenter Server.

  • The NSX Manager cannot communicate with the ESXi hosts.

  • There are NSX Manager internal issues.

  • There are Edge internal issues.

VIX Debugging

Check for VIX errors VIX_E_<error> in the NSX Manager logs to narrow down the cause. Look for errors similar to:

Vix Command 1126400 failed, reason com.vmware.vshield.edge.exception.VixException: vShield 
Edge:10013:Error code 'VIX_E_FILE_NOT_FOUND' was returned by VIX API.:null

Health check failed for edge  edge-13 VM vm-5025 reason: 
com.vmware.vshield.edge.exception.VixException: vShield Edge:10013:Error code 
'VIX_E_VM_NOT_RUNNING' was returned by VIX API.:null

In general, if the same failure occurs for many Edges at the same time, the issue is not on the Edge side.

Edge Diagnosis

  • Check if vmtoolsd is running with this command:

    nsxedge> show process list
    %CPU %MEM    VSZ   RSZ STAT  STARTED     TIME COMMAND
     0.0  0.1   4244   720 Ss     May 16 00:00:15 init [3]
    ...
     0.0  0.1   4240   640 S      May 16 00:00:00 logger -p daemon debug -t vserrdd
     0.2  0.9  57192  4668 S      May 16 00:23:07 /usr/local/bin/vmtoolsd --plugin-pa
     0.0  0.4   4304  2260 SLs    May 16 00:01:54 /usr/sbin/watchdog
     ...
    
  • Check if the Edge is in a good state by running this command:

    nsxedge> show eventmgr
    -----------------------
    messagebus     : enabled
    debug          : 0
    profiling      : 0
    cfg_rx         : 1
    cfg_rx_msgbus  : 0
    ...
    

    You can also use the show eventmgr command to verify that the query command was received and processed.

    nsxedge> show eventmgr
    -----------------------
    messagebus     : enabled
    debug          : 0
    profiling      : 0
    cfg_rx         : 1
    cfg_rx_msgbus  : 0
    cfg_rx_err     : 0
    cfg_exec_err   : 0
    cfg_resp       : 0
    cfg_resp_err   : 0
    cfg_resp_ln_err: 0
    fastquery_rx   : 0
    fastquery_err  : 0
    clearcmd_rx    : 0
    clearcmd_err   : 0
    ha_rx          : 0
    ha_rx_err      : 0
    ha_exec_err    : 0
    status_rx      : 16
    status_rx_err  : 0
    status_svr     : 10
    status_evt     : 0
    status_evt_push: 0
    status_ha      : 0
    status_ver     : 1
    status_sys     : 5
    status_cmd     : 0
    status_svr_err : 0
    status_evt_err : 0
    status_sys_err : 0
    status_ha_err  : 0
    status_ver_err : 0
    status_cmd_err : 0
    evt_report     : 1
    evt_report_err : 0
    hc_report      : 10962
    hc_report_err  : 0
    cli_rx         : 2
    cli_resp       : 1
    cli_resp_err   : 0
    counter_reset  : 0
    ---------- Health Status -------------
    system status  : good
    ha state       : active
    cfg version    : 7
    generation     : 0
    server status  : 1
    syslog-ng      : 1
    haproxy        : 0
    ipsec          : 0
    sslvpn         : 0
    l2vpn          : 0
    dns            : 0
    dhcp           : 0
    heartbeat      : 0
    monitor        : 0
    gslb           : 0
    ---------- System Events -------------
    

    If vmtoolsd is not running or the Edge is in a bad state, reboot the Edge.

    You can also check the Edge logs. See https://kb.vmware.com/kb/2079380.

Message Bus Debugging

The Message Bus is used for NSX Edge communication when ESXi hosts are prepared. When you encounter issues, the NSX Manager logs might contain entries similar to:

GMT ERROR taskScheduler-6 PublishTask:963 - Failed to configure VSE-vm index 0, vm-id vm-117, 
edge edge-5. Error: RPC request timed out

This issue occurs if:

  • The Edge is in a bad state

  • The Message Bus connection is broken

To diagnose the issue on the Edge:

  • To check RabbitMQ (RMQ) connectivity, run this command:

    nsxedge> show messagebus messages
    -----------------------
    Message bus is enabled
    cmd conn state : listening
    init_req       : 1
    init_resp      : 1
    init_req_err   : 0
    ...
    
  • To check VMCI connectivity, run this command:

    nsxedge> show messagebus forwarder
    -----------------------
    Forwarder Command Channel
    vmci_conn          : up
    app_client_conn    : up
    vmci_rx            : 3649
    vmci_tx            : 3648
    vmci_rx_err        : 0
    vmci_tx_err        : 0
    vmci_closed_by_peer: 8
    vmci_tx_no_socket  : 0
    app_rx             : 3648
    app_tx             : 3649
    app_rx_err         : 0
    app_tx_err         : 0
    app_conn_req       : 1
    app_closed_by_peer : 0
    app_tx_no_socket   : 0
    -----------------------
    Forwarder Event Channel
    vmci_conn          : up
    app_client_conn    : up
    vmci_rx            : 1143
    vmci_tx            : 13924
    vmci_rx_err        : 0
    vmci_tx_err        : 0
    vmci_closed_by_peer: 0
    vmci_tx_no_socket  : 0
    app_rx             : 13924
    app_tx             : 1143
    app_rx_err         : 0
    app_tx_err         : 0
    app_conn_req       : 1
    app_closed_by_peer : 0
    app_tx_no_socket   : 0
    -----------------------
    cli_rx             : 1
    cli_tx             : 1
    cli_tx_err         : 0
    counters_reset     : 0
    

    In the example, the output vmci_closed_by_peer: 8 indicates the number of times the connection has been closed by the host agent. If this number is increasing and vmci_conn is down, the host agent cannot connect to the RMQ broker. In that case, run the show log follow command and look for repeated errors in the Edge logs similar to:

    VmciProxy: [daemon.debug] VMCI Socket is closed by peer

To diagnose the issue on the ESXi host:

  • To check whether the ESXi host can connect to the RMQ broker, run this command:

    esxcli network ip connection list | grep 5671
    
    tcp   0   0  10.32.43.4:43329  10.32.43.230:5671    ESTABLISHED     35854  newreno  vsfwd          
    tcp   0   0  10.32.43.4:52667  10.32.43.230:5671    ESTABLISHED     35854  newreno  vsfwd          
    tcp   0   0  10.32.43.4:20808  10.32.43.230:5671    ESTABLISHED     35847  newreno  vsfwd          
    tcp   0   0  10.32.43.4:12486  10.32.43.230:5671    ESTABLISHED     35847  newreno  vsfwd 
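
    If no connections to port 5671 appear, you can also review the vsfwd log on the host for message bus errors. The path below is the usual location, but it can vary by version:

    # Review recent vsfwd (vShield firewall daemon) activity on the ESXi host
    tail -n 50 /var/log/vsfwd.log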

Displaying Packet Drop Statistics

Starting with NSX for vSphere 6.2.3, you can use the show packet drops command to display packet drop statistics for the following:

  • Interface

  • Driver

  • L2

  • L3

  • Firewall

To run the command, log in to the NSX Edge CLI and enter basic mode. For more information, see the NSX Command Line Interface Reference. For example:

show packet drops

vShield Edge Packet Drop Stats:

Driver Errors
=============
           TX       TX     TX         RX       RX     RX
Interface  Dropped  Error  Ring Full  Dropped  Error  Out Of Buf
vNic_0     0        0      0          0        0      0
vNic_1     0        0      0          0        0      0
vNic_2     0        0      0          0        0      2
vNic_3     0        0      0          0        0      0
vNic_4     0        0      0          0        0      0
vNic_5     0        0      0          0        0      0

Interface Drops
===============
Interface RX Dropped TX Dropped
vNic_0             4          0
vNic_1          2710          0
vNic_2             0          0
vNic_3             2          0
vNic_4             2          0
vNic_5             2          0

L2 RX Errors
============
Interface length crc frame fifo missed
vNic_0         0   0     0    0      0
vNic_1         0   0     0    0      0
vNic_2         0   0     0    0      0
vNic_3         0   0     0    0      0
vNic_4         0   0     0    0      0
vNic_5         0   0     0    0      0

L2 TX Errors
============
Interface aborted fifo window heartbeat
vNic_0          0    0      0         0
vNic_1          0    0      0         0
vNic_2          0    0      0         0
vNic_3          0    0      0         0
vNic_4          0    0      0         0
vNic_5          0    0      0         0

L3 Errors
=========
IP:
 ReasmFails : 0
 InHdrErrors : 0
 InDiscards : 0
 FragFails : 0
 InAddrErrors : 0
 OutDiscards : 0
 OutNoRoutes : 0
 ReasmTimeout : 0
ICMP:
 InTimeExcds : 0
 InErrors : 227
 OutTimeExcds : 0
 OutDestUnreachs : 152
 OutParmProbs : 0
 InSrcQuenchs : 0
 InRedirects : 0
 OutSrcQuenchs : 0
 InDestUnreachs : 151
 OutErrors : 0
 InParmProbs : 0

Firewall Drop Counters
======================

Ipv4 Rules
==========
Chain - INPUT
rid pkts bytes target prot opt in out source    destination
0    119 30517 DROP   all  --   *   * 0.0.0.0/0 0.0.0.0/0    state INVALID
0      0     0 DROP   all  --   *   * 0.0.0.0/0 0.0.0.0/0
Chain - POSTROUTING
rid pkts bytes target prot opt in out source    destination
0    101 4040  DROP   all   --  *   * 0.0.0.0/0 0.0.0.0/0    state INVALID
0      0    0  DROP   all   --  *   * 0.0.0.0/0 0.0.0.0/0

Ipv6 Rules
==========
Chain - INPUT
rid pkts bytes target prot opt in out source destination
0      0     0   DROP  all      *   * ::/0   ::/0            state INVALID
0      0     0   DROP  all      *   * ::/0   ::/0
Chain - POSTROUTING
rid pkts bytes target prot opt in out source destination
0      0     0   DROP  all       *   * ::/0   ::/0           state INVALID
0      0     0   DROP  all       *   * ::/0   ::/0

Expected Behavior When Managing NSX Edge

In vSphere Web Client, when you configure L2 VPN on an NSX Edge and add, remove, or modify the Site Configuration Details, all existing connections are disconnected and reconnected. This behavior is expected.