Logical router kernel modules in the host perform routing between VXLAN networks, and between virtual and physical networks. An NSX Edge appliance provides dynamic routing ability if needed. A universal logical router provides east-west routing between universal logical switches.

Prerequisites


  • You must have been assigned the Enterprise Administrator or NSX Administrator role.

  • You must have an operational controller cluster in your environment before installing a logical router.

  • You must create a local segment ID pool, even if you have no plans to create NSX logical switches.

  • A logical router cannot distribute routing information to hosts without the help of NSX controllers. A logical router relies on NSX controllers to function, while edge services gateways (ESGs) do not. Make sure the controller cluster is up and available before creating or changing a logical router configuration.

  • If a logical router is to be connected to VLAN dvPortgroups, ensure that all hypervisor hosts with a logical router appliance installed can reach each other on UDP port 6999 for logical router VLAN-based ARP proxy to work.
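As a quick sanity check of that reachability requirement, you can send a UDP probe between hosts. This is a minimal sketch, not an NSX tool; the hostname is a placeholder, and because UDP is connectionless a successful send only proves the local send path and name resolution, so confirm receipt with a packet capture on the far side.

```python
import socket

def probe_udp(host: str, port: int = 6999, timeout: float = 2.0) -> bool:
    """Attempt to send a UDP datagram to host:port.

    UDP is connectionless, so a successful send only proves the local
    send path works; an intermediate firewall may still silently drop
    the packet. Verify with a packet capture on the receiving host.
    """
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(timeout)
            s.sendto(b"probe", (host, port))
            return True
    except OSError:
        return False

# Example (placeholder hostname):
# probe_udp("esx-host-01.corp.local")
```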

  • Logical router interfaces and bridging interfaces cannot be connected to a dvPortgroup with the VLAN ID set to 0.

  • A given logical router instance cannot be connected to logical switches that exist in different transport zones. This is to ensure that all logical switches and logical router instances are aligned.

  • A logical router cannot be connected to VLAN-backed portgroups if that logical router is connected to logical switches spanning more than one vSphere distributed switch (VDS). This is to ensure correct alignment of logical router instances with logical switch dvPortgroups across hosts.

  • Logical router interfaces should not be created on two different distributed portgroups (dvPortgroups) with the same VLAN ID if the two networks are in the same vSphere distributed switch.

  • Logical router interfaces should not be created on two different dvPortgroups with the same VLAN ID if two networks are in different vSphere distributed switches, but the two vSphere distributed switches share the same hosts. In other words, logical router interfaces can be created on two different networks with the same VLAN ID if the two dvPortgroups are in two different vSphere distributed switches, as long as the vSphere distributed switches do not share a host.
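The VLAN ID restrictions in the preceding two bullets reduce to a single rule: two dvPortgroups with the same VLAN ID can both carry logical router interfaces only when they are on different vSphere distributed switches that share no hosts. A hypothetical sketch of that predicate (function name and host sets are illustrative, not an NSX API):

```python
def lif_placement_allowed(vlan_a: int, vlan_b: int,
                          vds_a: str, vds_b: str,
                          hosts_a: set, hosts_b: set) -> bool:
    """Return True if logical router interfaces may be created on two
    dvPortgroups, per the VLAN ID rules above.

    Two dvPortgroups with the same VLAN ID are acceptable only when
    they live on different distributed switches that share no hosts.
    """
    if vlan_a != vlan_b:
        return True                   # different VLAN IDs never conflict
    if vds_a == vds_b:
        return False                  # same VDS, same VLAN ID
    return not (hosts_a & hosts_b)    # different VDSes must not share hosts

# Same VLAN ID on two VDSes that share host "esx-02" -> not allowed:
# lif_placement_allowed(10, 10, "vds-1", "vds-2", {"esx-01", "esx-02"}, {"esx-02"})
```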

  • Unlike NSX versions 6.0 and 6.1, NSX version 6.2 allows logical router-routed logical interfaces (LIFs) to be connected to a VXLAN that is bridged to a VLAN.

  • When selecting placement of a logical router virtual appliance, avoid placing it on the same host as one or more of its upstream ESGs if you use ESGs in an ECMP setup. You can use DRS anti-affinity rules to enforce this, thus reducing the impact of host failure on logical router forwarding. This guideline does not apply if you have one upstream ESG by itself or in HA mode. For more information, see the VMware NSX for vSphere Network Virtualization Design Guide at https://communities.vmware.com/docs/DOC-27683.

  • Determine if you need to enable local egress. Local egress allows you to selectively send routes to hosts. You may want this ability if your NSX deployment spans multiple sites. See Cross-vCenter NSX Topologies for more information. You cannot enable local egress after the universal logical router has been created.

Procedure


  1. In the vSphere Web Client, navigate to Home > Networking & Security > NSX Edges.
  2. Select the Primary NSX Manager to add a universal logical router.
  3. Click the Add icon.
  4. Select Universal Logical (Distributed) Router.
  5. (Optional) Enable local egress.
  6. Type a name for the device.

    This name appears in your vCenter inventory. The name should be unique across all logical routers within a single tenant.

    Optionally, you can also enter a hostname. This name appears in the CLI. If you do not specify the host name, the Edge ID, which is created automatically, is displayed in the CLI.

    Optionally, you can enter a description and tenant.

  7. (Optional) Deploy an edge appliance.

    Deploy Edge Appliance is selected by default. An edge appliance (also called a logical router virtual appliance) is required for dynamic routing and the logical router appliance's firewall, which applies to logical router pings, SSH access, and dynamic routing traffic.

    You can deselect the edge appliance option if you require only static routes, and do not want to deploy an Edge appliance. You cannot add an Edge appliance to the logical router after the logical router has been created.

  8. (Optional) Enable High Availability.

    Enable High Availability is not selected by default. Select the Enable High Availability check box to enable and configure high availability. High availability is required if you plan to use dynamic routing.

  9. Type and re-type a password for the logical router.

    The password must be 12-255 characters and must contain the following:

    • At least one upper case letter

    • At least one lower case letter

    • At least one number

    • At least one special character
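The rules above can be pre-checked before you submit the form. The sketch below mirrors the documented requirements; treating any non-alphanumeric character as "special" is an assumption, since NSX's exact character classes are not spelled out here.

```python
def password_meets_rules(pw: str) -> bool:
    """Check a candidate logical router password against the documented
    rules: 12-255 characters, with at least one upper case letter, one
    lower case letter, one number, and one special character.

    Assumption: "special" means any character that is not a letter or
    digit; NSX's exact character classes may differ.
    """
    return (
        12 <= len(pw) <= 255
        and any(c.isupper() for c in pw)
        and any(c.islower() for c in pw)
        and any(c.isdigit() for c in pw)
        and any(not c.isalnum() for c in pw)
    )
```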

  10. (Optional) Enable SSH and set the log level.

    By default, SSH is disabled. If you do not enable SSH, you can still access the logical router by opening the virtual appliance console. Enabling SSH here causes the SSH process to run on the logical router virtual appliance, but you will also need to adjust the logical router firewall configuration manually to allow SSH access to the logical router's protocol address. The protocol address is configured when you configure dynamic routing on the logical router.

    By default, the log level is emergency.


  11. Configure deployment.
    • If you did not select Deploy NSX Edge, the Add icon is grayed out. Click Next to continue with configuration.

    • If you selected Deploy NSX Edge, enter the settings for the logical router virtual appliance that will be added to your vCenter inventory.


  12. Configure interfaces.

    On logical routers, only IPv4 addressing is supported.

    In the HA Interface Configuration, if you selected Deploy NSX Edge, you must connect the interface to a distributed port group. It is recommended to use a VXLAN logical switch for the HA interface. An IP address for each of the two NSX Edge appliances is chosen from the link-local address space. No further configuration is necessary for the HA service.


    In prior releases of NSX, the HA interface was called the management interface. The HA interface is not supported for remote access to the logical router; you cannot SSH into it from anywhere that is not on the same IP subnet as the HA interface. You cannot configure static routes that point out of the HA interface, which means that RPF will drop incoming traffic. You could, in theory, disable RPF, but this would be counterproductive to high availability. For SSH, use the logical router's protocol address, which is configured later when you configure dynamic routing.

    In NSX 6.2, the HA interface of a logical router is automatically excluded from route redistribution.

    In Configure interfaces of this NSX Edge, the internal interfaces are for connections to switches that allow VM-to-VM (sometimes called East-West) communication. Internal interfaces are created as pseudo vNICs on the logical router virtual appliance. Uplink interfaces are for North-South communication. A logical router uplink interface can connect to an NSX edge services gateway or a third-party router VM, or to a VLAN-backed dvPortgroup to connect the logical router directly to a physical router. You must have at least one uplink interface for dynamic routing to work. Uplink interfaces are created as vNICs on the logical router virtual appliance.

    The interface configuration that you enter here is modifiable later. You can add, remove, and modify interfaces after a logical router is deployed.

    The following example shows an HA interface connected to the management distributed port group. The example also shows two internal interfaces (app and web) and an uplink interface (to-ESG).

  13. Make sure any VMs attached to the logical switches have their default gateways set properly to the logical router interface IP addresses.


In the following example topology, the default gateway of the app VM should be the IP address of the logical router's app interface, and the default gateway of the web VM should be the IP address of the logical router's web interface. Make sure the VMs can ping their default gateways and each other.

Log in via SSH to the NSX Manager, and run the following commands:

  • List all logical router instance information.

    nsxmgr-l-01a> show logical-router list all
    Edge-id             Vdr Name                      Vdr id              #Lifs
    edge-1              default+edge-1                0x00001388          3

  • List the hosts that have received routing information for the logical router from the controller cluster.

    nsxmgr-l-01a> show logical-router list dlr edge-1 host
    ID                   HostName                             

    The output includes all hosts from all host clusters that are configured as members of the transport zone that owns the logical switch that is connected to the specified logical router (edge-1 in this example).

  • List the routing table information that is communicated to the hosts by the logical router. Routing table entries should be consistent across all of the hosts.

    nsx-mgr-l-01a> show logical-router host host-25 dlr edge-1 route
    VDR default+edge-1 Route Table
    Legend: [U: Up], [G: Gateway], [C: Connected], [I: Interface]
    Legend: [H: Host], [F: Soft Flush] [!: Reject] [E: ECMP]
    Destination     GenMask          Gateway         Flags   Ref Origin   UpTime    Interface
    -----------     -------          -------         -----   --- ------   ------    ---------
                                                     UG      1   AUTO     4101      138800000002
                                                     UCI     1   MANUAL   10195     13880000000b
                                                     UCI     1   MANUAL   10196     13880000000a
                                                     UCI     1   MANUAL   10196     138800000002
                                                     UG      1   AUTO     3802      138800000002
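Because the routing table entries should be consistent across hosts, captured outputs can be diffed mechanically. This is an illustrative sketch, not an NSX utility: it assumes the eight-column row layout named in the header above and ignores the UpTime column, which naturally differs between captures.

```python
def route_key(line: str):
    """Reduce one route-table row to a comparable tuple, dropping the
    UpTime column, which differs per capture. Assumed column order:
    Destination GenMask Gateway Flags Ref Origin UpTime Interface.
    """
    fields = line.split()
    if len(fields) != 8:
        return None  # skip headers, legends, and separator lines
    dest, mask, gw, flags, _ref, origin, _uptime, iface = fields
    return (dest, mask, gw, flags, origin, iface)

def routes_consistent(captures: dict) -> bool:
    """captures maps host name -> raw 'show logical-router host ... route'
    output; returns True if every host reports the same route set."""
    keyed = {
        host: {k for k in map(route_key, text.splitlines()) if k}
        for host, text in captures.items()
    }
    return len({frozenset(s) for s in keyed.values()}) <= 1
```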

  • List additional information about the router from the point of view of one of the hosts. This is helpful to learn which controller is communicating with the host.

    nsx-mgr-l-01a> show logical-router host host-25 dlr edge-1 verbose
    VDR Instance Information :
    Vdr Name:                   default+edge-1
    Vdr Id:                     0x00001388
    Number of Lifs:             3
    Number of Routes:           5
    State:                      Enabled
    Controller IP:    
    Control Plane IP: 
    Control Plane Active:       Yes
    Num unique nexthops:        1
    Generation Number:          0
    Edge Active:                No

Check the Controller IP field in the output of the show logical-router host host-25 dlr edge-1 verbose command.

SSH to a controller, and run the following commands to display the controller's learned VNI, VTEP, MAC, and ARP table state information.

  • # show control-cluster logical-switches vni 5000
    VNI      Controller      BUM-Replication  ARP-Proxy  Connections
    5000                     Enabled          Enabled    0

    The output for VNI 5000 shows zero connections and lists the owning controller for VNI 5000. Log in to that controller to gather further information for VNI 5000.

    # show control-cluster logical-switches vni 5000
    VNI      Controller      BUM-Replication  ARP-Proxy  Connections
    5000                     Enabled          Enabled    3

    The output on that controller shows three connections. Check additional VNIs.

    # show control-cluster logical-switches vni 5001
    VNI      Controller      BUM-Replication  ARP-Proxy  Connections
    5001                     Enabled          Enabled    3

    # show control-cluster logical-switches vni 5002
    VNI      Controller      BUM-Replication  ARP-Proxy  Connections
    5002                     Enabled          Enabled    3

    Because this controller owns all three VNI connections, you would expect to see zero connections on the other controllers.

    # show control-cluster logical-switches vni 5000
    VNI      Controller      BUM-Replication  ARP-Proxy  Connections
    5000                     Enabled          Enabled    0
  • Before checking the MAC and ARP tables, start pinging from one VM to the other VM.

    From app VM to web VM:

    Check the MAC tables.

    # show control-cluster logical-switches mac-table 5000
    VNI      MAC                VTEP-IP         Connection-ID
    5000     00:50:56:a6:23:ae                  7

    # show control-cluster logical-switches mac-table 5001
    VNI      MAC                VTEP-IP         Connection-ID
    5001     00:50:56:a6:8d:72                  23

    Check the ARP tables.

    # show control-cluster logical-switches arp-table 5000
    VNI      IP              MAC                Connection-ID
    5000                     00:50:56:a6:23:ae  7

    # show control-cluster logical-switches arp-table 5001
    VNI      IP              MAC                Connection-ID
    5001                     00:50:56:a6:8d:72  23

Check the logical router information. Each logical router instance is served by one of the controller nodes.

The instance sub-command of the show control-cluster logical-routers command displays a list of logical routers that are connected to this controller.

The interface-summary sub-command displays the LIFs that the controller learned from the NSX Manager. This information is sent to the hosts that are in the host clusters managed under the transport zone.

The routes sub-command shows the routing table that is sent to this controller by the logical router's virtual appliance (also known as the control VM). Note that unlike on the ESXi hosts, this routing table does not include directly connected subnets because this information is provided by the LIF configuration. Route information on the ESXi hosts includes directly connected subnets because in that case it is a forwarding table used by ESXi host’s datapath.

  • controller # show control-cluster logical-routers instance all
    LR-Id      LR-Name            Universal  Service-Controller  Egress-Locale
    0x1388     default+edge-1     false                          local

    Note the LR-Id and use it in the following command.

  • controller # show control-cluster logical-routers interface-summary 0x1388
    Interface                        Type   Id           IP[]
    13880000000b                     vxlan  0x1389
    13880000000a                     vxlan  0x1388
    138800000002                     vxlan  0x138a

  • controller # show control-cluster logical-routers routes 0x1388
    Destination        Next-Hop[]      Preference  Locale-Id                             Source
                                       110         00000000-0000-0000-0000-000000000000  CONTROL_VM
                                       0           00000000-0000-0000-0000-000000000000  CONTROL_VM

    [root@comp02a:~] esxcfg-route -l
    VMkernel Routes:
    Network          Netmask          Gateway          Interface
                                      Local Subnet     vmk1
                                      Local Subnet     vmk0
    default                                            vmk0
  • Display the controller connections to the specific VNI.

    # show control-cluster logical-switches connection-table 5000
    Host-IP         Port   ID
                    26167  4
                    27645  5
                    40895  6

    # show control-cluster logical-switches connection-table 5001
    Host-IP         Port   ID
                    26167  4
                    27645  5
                    40895  6

    These Host-IP addresses are vmk0 interfaces, not VTEPs. Connections between ESXi hosts and controllers are created on the management network. The port numbers here are ephemeral TCP ports that are allocated by the ESXi host IP stack when the host establishes a connection with the controller.

  • On the host, you can view the controller network connection matched to the port number.

    [root@] # esxcli network ip connection list | grep 26167
    tcp         0       0     ESTABLISHED     96416  newreno  netcpa-worker
  • Display active VNIs on the host. Observe how the output is different across hosts. Not all VNIs are active on all hosts. A VNI is active on a host if the host has a VM that is connected to the logical switch.

    [root@] # esxcli network vswitch dvs vmware vxlan network list --vds-name Compute_VDS
    VXLAN ID  Multicast IP               Control Plane                        Controller Connection  Port Count  MAC Entry Count  ARP Entry Count  VTEP Count
    --------  -------------------------  -----------------------------------  ---------------------  ----------  ---------------  ---------------  ----------
        5000  N/A (headend replication)  Enabled (multicast proxy,ARP proxy) (up)            1                0                0           0
        5001  N/A (headend replication)  Enabled (multicast proxy,ARP proxy) (up)            1                0                0           0

    To enable the vxlan namespace in vSphere 6 and later, run the /etc/init.d/hostd restart command.

    For logical switches in hybrid or unicast mode, the esxcli network vswitch dvs vmware vxlan network list --vds-name <vds-name> command should contain the following output:

    • Control Plane is enabled.

    • Multicast proxy and ARP proxy are listed. ARP proxy is listed even if you disabled IP discovery.

    • A valid controller IP address is listed and the connection is up.

    • If a logical router is connected to the ESXi host, the Port Count is at least 1, even if there are no VMs on the host connected to the logical switch. This one port is the vdrPort, a special dvPort connected to the logical router kernel module on the ESXi host.
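If you are checking many hosts, the expectations above can be applied to captured output with a small parser. This is a heuristic sketch keyed to the literal strings in the sample output above; real output spacing and columns may differ.

```python
def vxlan_line_healthy(line: str, expect_router: bool = False) -> bool:
    """Check one data row of 'esxcli ... vxlan network list' output
    against the documented expectations: control plane enabled, both
    proxies listed, controller connection up, and (if a logical router
    is connected to the host) a Port Count of at least 1.

    Parsing is positional and heuristic, based on the sample above.
    """
    if "Enabled" not in line or "(up)" not in line:
        return False
    if "multicast proxy" not in line or "ARP proxy" not in line:
        return False
    if expect_router:
        # Port Count is the first numeric field after "(up)".
        tail = line.split("(up)", 1)[1].split()
        if not tail or int(tail[0]) < 1:
            return False
    return True
```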

  • First ping from one VM to another VM on a different subnet, and then display the MAC table. Note that the Inner MAC is the VM entry, while the Outer MAC and Outer IP refer to the VTEP.

    ~ # esxcli network vswitch dvs vmware vxlan network mac list --vds-name=Compute_VDS --vxlan-id=5000
    Inner MAC          Outer MAC          Outer IP        Flags
    -----------------  -----------------  --------------  --------
    00:50:56:a6:23:ae  00:50:56:6a:65:c2                  00000111
    ~ # esxcli network vswitch dvs vmware vxlan network mac list --vds-name=Compute_VDS --vxlan-id=5001
    Inner MAC          Outer MAC          Outer IP        Flags
    -----------------  -----------------  --------------  --------
    02:50:56:56:44:52  00:50:56:6a:65:c2                  00000101
    00:50:56:f0:d7:e4  00:50:56:6a:65:c2                  00000111

What to do next

On the hosts where NSX edge appliances are first deployed, NSX enables automatic VM startup/shutdown. If the appliance VMs are later migrated to other hosts, the new hosts might not have automatic VM startup/shutdown enabled. For this reason, VMware recommends that you check all hosts in the cluster to make sure that automatic VM startup/shutdown is enabled. See http://pubs.vmware.com/vsphere-60/index.jsp?topic=%2Fcom.vmware.vsphere.vm_admin.doc%2FGUID-5FE08AC7-4486-438E-AF88-80D6C7928810.html.

After the logical router is deployed, double-click the logical router ID to configure additional settings, such as interfaces, routing, firewall, bridging, and DHCP relay.
