A distributed logical router (DLR) is a virtual appliance that contains the routing control plane, while distributing the data plane in kernel modules to each hypervisor host. The DLR control plane function relies on the NSX Controller cluster to push routing updates to the kernel modules.
When deploying a new logical router, consider the following:
NSX Data Center for vSphere 6.2 and later allows logical router-routed logical interfaces (LIFs) to be connected to a VXLAN that is bridged to a VLAN.
Logical router interfaces and bridging interfaces cannot be connected to a dvPortgroup with the VLAN ID set to 0.
A given logical router instance cannot be connected to logical switches that exist in different transport zones. This is to ensure that all logical switches and logical router instances are aligned.
A logical router cannot be connected to VLAN-backed port groups if that logical router is connected to logical switches spanning more than one vSphere distributed switch (VDS). This is to ensure correct alignment of logical router instances with logical switch dvPortgroups across hosts.
Logical router interfaces must not be created on two different distributed port groups (dvPortgroups) with the same VLAN ID if the two networks are in the same vSphere distributed switch.
Logical router interfaces should not be created on two different dvPortgroups with the same VLAN ID if the two networks are in different vSphere distributed switches but those switches share hosts. In other words, logical router interfaces can be created on two different networks with the same VLAN ID only if the two dvPortgroups belong to two different vSphere distributed switches that do not share a host.
If VXLAN is configured, logical router interfaces must be connected to distributed port groups on the vSphere Distributed Switch where VXLAN is configured. Do not connect logical router interfaces to port groups on other vSphere Distributed Switches.
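These VLAN ID constraints can be expressed as a simple pre-deployment check. The following sketch is illustrative only (the Dvpg record and the VDS membership map are hypothetical, not an NSX API): two planned interfaces may share a VLAN ID only when their dvPortgroups live on different vSphere distributed switches that have no host in common.

```python
from itertools import combinations
from typing import NamedTuple

class Dvpg(NamedTuple):
    """A planned logical router interface placement (illustrative only)."""
    name: str
    vds: str       # vSphere distributed switch owning the dvPortgroup
    vlan_id: int

def vds_hosts(vds_membership: dict, vds: str) -> set:
    """Hosts that are members of the given VDS."""
    return set(vds_membership.get(vds, ()))

def check_lif_placement(dvpgs: list, vds_membership: dict) -> list:
    """Return a list of rule violations for the planned LIF placements."""
    violations = []
    for a, b in combinations(dvpgs, 2):
        if a.vlan_id != b.vlan_id:
            continue  # different VLAN IDs never conflict
        if a.vds == b.vds:
            violations.append(f"{a.name}/{b.name}: same VLAN on same VDS")
        elif vds_hosts(vds_membership, a.vds) & vds_hosts(vds_membership, b.vds):
            violations.append(f"{a.name}/{b.name}: same VLAN on VDSes sharing hosts")
    return violations
```

For example, with `vds_membership = {"vds-1": ["esx-01", "esx-02"], "vds-2": ["esx-02", "esx-03"]}`, placing two VLAN 10 interfaces on dvPortgroups in vds-1 and vds-2 is flagged because esx-02 belongs to both switches.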
The following list describes feature support by interface type (uplink and internal) on the logical router:
Dynamic routing protocols (BGP and OSPF) are supported only on uplink interfaces.
Firewall rules are applicable only on uplink interfaces and are limited to control and management traffic that is destined to the Edge virtual appliance.
For more information about the DLR Management Interface, see the Knowledge Base Article "Management Interface Guide: DLR Control VM - NSX" http://kb.vmware.com/kb/2122060.
vSphere Fault Tolerance does not work with the logical router control VM.
You must be assigned the Enterprise Administrator or NSX Administrator role.
You must create a local segment ID pool, even if you have no plans to create logical switches.
Make sure that the controller cluster is up and available before creating or changing a logical router configuration. A logical router cannot distribute routing information to hosts without the help of NSX controllers. A logical router relies on NSX controllers to function, while Edge Services Gateways (ESGs) do not.
If a logical router is to be connected to VLAN dvPortgroups, ensure that all hypervisor hosts with a logical router appliance installed can reach each other on UDP port 6999. Communication on this port is required for logical router VLAN-based ARP proxy to work.
Determine where to deploy the logical router appliance.
The destination host must be part of the same transport zone as the logical switches connected to the new logical router's interfaces.
Avoid placing it on the same host as one or more of its upstream ESGs if you use ESGs in an ECMP setup. You can use DRS anti-affinity rules to enforce this practice, reducing the impact of host failure on logical router forwarding. This guideline does not apply if you have one upstream ESG by itself or in HA mode. For more information, see the NSX Network Virtualization Design Guide at https://communities.vmware.com/docs/DOC-27683.
Verify that the host cluster on which you install the logical router appliance is prepared for NSX Data Center for vSphere. See "Prepare Host Clusters for NSX" in the NSX Installation Guide.
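The UDP port 6999 reachability requirement above can be spot-checked before deployment. The sketch below is a generic probe pair, not NSX traffic: it assumes you can run the responder half on the peer host's management network, and it demonstrates the round trip over loopback.

```python
import socket
import threading
import time

PORT = 6999  # UDP port used by the logical router VLAN-based ARP proxy

def run_responder(results, host="0.0.0.0", timeout=2.0):
    """Run this half on the peer host; records any datagram received on UDP 6999."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind((host, PORT))
        s.settimeout(timeout)
        try:
            data, addr = s.recvfrom(64)
            results.append((data, addr[0]))
        except socket.timeout:
            pass  # nothing arrived within the timeout

def probe(peer_ip):
    """Send a test datagram to the peer's UDP 6999."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(b"udp-6999-probe", (peer_ip, PORT))

# Loopback demonstration; in practice the responder runs on the remote host.
results = []
t = threading.Thread(target=run_responder, args=(results,))
t.start()
time.sleep(0.2)       # give the responder time to bind
probe("127.0.0.1")
t.join()
print(bool(results))  # True if the datagram arrived
```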
- In the vSphere Web Client, navigate to .
- Click Add, and then click Logical Router.
- Enter name, description, and other details of the logical router.
Enter a name for the logical router as you want it to appear in the vCenter inventory.
Make sure that this name is unique across all logical routers within a single tenant.
Optional. Enter a host name that you want to display for the logical router in the CLI.
If you do not enter a host name, the Edge ID that is created automatically is displayed in the CLI.
Optional. Enter a description of the logical router.
Deploy Edge Appliance
By default, this option is selected. An Edge appliance (also called a logical router virtual appliance) is required for dynamic routing and the logical router appliance's firewall, which applies to logical router pings, SSH access, and dynamic routing traffic.
If you require only static routes, and do not want to deploy an Edge appliance, deselect this option. You cannot add an Edge appliance to the logical router after the logical router is created.
Optional. By default, HA is disabled. Select this option to enable and configure HA on the logical router.
If you are planning to do dynamic routing, HA is required.
Optional. By default, HA logging is disabled.
When logging is enabled, the default log level is set to info. You can change it, if necessary.
- Specify the CLI settings and other settings of the logical router.
Enter a user name that you want to use for logging in to the Edge CLI.
Enter a password that is at least 12 characters long and satisfies these rules:
Must not exceed 255 characters
At least one uppercase letter and one lowercase letter
At least one number
At least one special character
Must not contain the user name as a substring
Must not consecutively repeat a character 3 or more times.
Reenter the password to confirm.
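The password rules above can be captured in a small validator, which is useful for pre-checking credentials in automation. This is an illustrative sketch of the stated rules only; NSX enforces its own validation when the password is submitted.

```python
import re

def validate_edge_password(password: str, username: str) -> list:
    """Return a list of violated rules (empty list means the password passes)."""
    errors = []
    if len(password) < 12:
        errors.append("shorter than 12 characters")
    if len(password) > 255:
        errors.append("longer than 255 characters")
    if not re.search(r"[A-Z]", password):
        errors.append("no uppercase letter")
    if not re.search(r"[a-z]", password):
        errors.append("no lowercase letter")
    if not re.search(r"[0-9]", password):
        errors.append("no number")
    if not re.search(r"[^A-Za-z0-9]", password):
        errors.append("no special character")
    if username and username.lower() in password.lower():
        errors.append("contains the user name")
    if re.search(r"(.)\1\1", password):  # same character 3+ times in a row
        errors.append("repeats a character 3 or more times in a row")
    return errors
```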
Optional. By default, SSH access is disabled. If you do not enable SSH, you can still access the logical router by opening the virtual appliance console.
Enabling SSH causes the SSH process to run on the logical router. You must adjust the logical router firewall configuration manually to allow SSH access to the logical router's protocol address. The protocol address is configured when you configure dynamic routing on the logical router.
Optional. By default, FIPS mode is disabled.
When you enable FIPS mode, any secure communication to or from the NSX Edge uses cryptographic algorithms or protocols that are allowed by FIPS.
Edge control level logging
Optional. By default, the log level is info.
- Configure deployment of the NSX Edge Appliance.
If you did not select Deploy Edge Appliance, you cannot add an appliance. Click Next to continue with the configuration.
If you selected Deploy Edge Appliance, enter the settings of the logical router virtual appliance.
Management & Edge
See "Managing NSX Edge Appliance Resource Reservations" in the NSX Administration Guide for more information on Resource Reservation.
- Configure the HA interface connection, and optionally an IP address.
If you selected Deploy Edge Appliance, you must connect the HA interface to a distributed port group or a logical switch. If you are using this interface as an HA interface only, use a logical switch. A /30 subnet is allocated from the link local range 169.254.0.0/16 and is used to provide an IP address for each of the two NSX Edge appliances.
Optionally, if you want to use this interface to connect to the NSX Edge, you can specify an extra IP address and prefix for the HA interface.

Note:
Before NSX Data Center for vSphere 6.2, HA interface was called management interface. You cannot do an SSH connection to a HA interface from anywhere that is not on the same IP subnet as the HA interface. You cannot configure static routes that point out of the HA interface, which means that RPF will drop incoming traffic. However, you can, in theory, disable RPF, but this action is counterproductive to high availability. For SSH access, you can also use the logical router's protocol address, which is configured later when you configure dynamic routing.
In NSX Data Center for vSphere 6.2 and later, the HA interface of a logical router is automatically excluded from route redistribution.
For example, the HA interface might be connected to a management dvPortgroup, with an optional IP address and subnet prefix length appropriate for that network.
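A sketch of the link-local /30 allocation described above, using Python's ipaddress module. Which /30 NSX actually carves out of 169.254.0.0/16 is internal to NSX; 169.254.1.0/30 here is only an illustrative assumption.

```python
import ipaddress

# An example /30 from the link-local range; the actual subnet NSX picks
# is internal to NSX and may differ.
ha_subnet = ipaddress.ip_network("169.254.1.0/30")
assert ha_subnet.subnet_of(ipaddress.ip_network("169.254.0.0/16"))

# A /30 holds exactly two usable host addresses, one per Edge appliance.
node_ips = list(ha_subnet.hosts())
print(node_ips)  # [IPv4Address('169.254.1.1'), IPv4Address('169.254.1.2')]
```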
- Configure interfaces of the NSX Edge.
- Specify the name, type, and other basic interface details.
Enter a name for the interface.
Select either Internal or Uplink.
The internal interfaces are for connections to switches that allow VM-to-VM (sometimes called East-West) communication. Internal interfaces are created as pseudo vNICs on the logical router virtual appliance. Uplink interfaces are for North-South communication, and they are created as vNICs on the logical router virtual appliance.
A logical router uplink interface might connect to an Edge Services Gateway or a third-party router VM. You must have at least one uplink interface for dynamic routing to work.
Select the distributed virtual port group or the logical switch to which you want to connect this interface.
- Configure the subnets of the interface.
Primary IP Address
On logical routers, only IPv4 addressing is supported.
The interface configuration that you enter here is modifiable later. You can add, remove, and modify interfaces after a logical router is deployed.
Subnet Prefix Length
Enter the subnet prefix length of the interface.
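Because this field takes a prefix length rather than a dotted-quad mask, a quick conversion helper can be handy when translating from existing network documentation; a minimal sketch:

```python
import ipaddress

def prefix_to_netmask(prefix_len: int) -> str:
    """Dotted-quad IPv4 netmask for a given prefix length."""
    return str(ipaddress.ip_network(f"0.0.0.0/{prefix_len}").netmask)

print(prefix_to_netmask(24))  # 255.255.255.0
print(prefix_to_netmask(29))  # 255.255.255.248
```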
- (Optional) Edit the default MTU value, if necessary. The default value is 1500.
The following table shows an example of two internal interfaces (app and web) and one uplink interface (to-ESG).

Table 1. Example: NSX Edge Interfaces

Name     Type       Primary IP Address    Subnet Prefix Length
app      Internal   172.16.20.1           24
web      Internal   172.16.10.1           24
to-ESG   Uplink     192.168.10.2          29
- Specify the name, type, and other basic interface details.
- Configure the default gateway settings.
- Make sure that the VMs connected to the logical switches have their default gateways set properly to the logical router interface IP addresses.
In the following example topology, the default gateway of app VM is 172.16.20.1. The default gateway of web VM is 172.16.10.1. Make sure the VMs can ping their default gateways and each other.
Connect to the NSX Manager using SSH or the console, and run the following commands:
List all logical router instance information.
nsxmgr-l-01a> show logical-router list all

Edge-id      Vdr Name            Vdr id        #Lifs
edge-1       default+edge-1      0x00001388    3
List the hosts that have received routing information for the logical router from the controller cluster.
nsxmgr-l-01a> show logical-router list dlr edge-1 host

ID          HostName
host-25     192.168.210.52
host-26     192.168.210.53
host-24     192.168.110.53
The output includes all hosts from all host clusters that are configured as members of the transport zone that owns the logical switch that is connected to the specified logical router (edge-1 in this example).
List the routing table information that is communicated to the hosts by the logical router. Routing table entries should be consistent across all the hosts.
nsx-mgr-l-01a> show logical-router host host-25 dlr edge-1 route

VDR default+edge-1 Route Table
Legend: [U: Up], [G: Gateway], [C: Connected], [I: Interface]
Legend: [H: Host], [F: Soft Flush] [!: Reject] [E: ECMP]

Destination      GenMask          Gateway          Flags    Ref    Origin    UpTime     Interface
-----------      -------          -------          -----    ---    ------    ------     ---------
0.0.0.0          0.0.0.0          192.168.10.1     UG       1      AUTO      4101       138800000002
172.16.10.0      255.255.255.0    0.0.0.0          UCI      1      MANUAL    10195      13880000000b
172.16.20.0      255.255.255.0    0.0.0.0          UCI      1      MANUAL    10196      13880000000a
192.168.10.0     255.255.255.248  0.0.0.0          UCI      1      MANUAL    10196      138800000002
192.168.100.0    255.255.255.0    192.168.10.1     UG       1      AUTO      3802       138800000002
List additional information about the router from the point of view of one of the hosts. This output is helpful to learn which controller is communicating with the host.
nsx-mgr-l-01a> show logical-router host host-25 dlr edge-1 verbose

VDR Instance Information :
---------------------------

Vdr Name:                   default+edge-1
Vdr Id:                     0x00001388
Number of Lifs:             3
Number of Routes:           5
State:                      Enabled
Controller IP:              192.168.110.203
Control Plane IP:           192.168.210.52
Control Plane Active:       Yes
Num unique nexthops:        1
Generation Number:          0
Edge Active:                No
Check the Controller IP field in the output of the show logical-router host host-25 dlr edge-1 verbose command.
SSH to a controller, and run the following commands to display the controller's learned VNI, VTEP, MAC, and ARP table state information.
192.168.110.202 # show control-cluster logical-switches vni 5000
VNI      Controller        BUM-Replication      ARP-Proxy      Connections
5000     192.168.110.201   Enabled              Enabled        0
The output for VNI 5000 shows zero connections and lists controller 192.168.110.201 as the owner for VNI 5000. Log in to that controller to gather further information for VNI 5000.
192.168.110.201 # show control-cluster logical-switches vni 5000
VNI      Controller        BUM-Replication      ARP-Proxy      Connections
5000     192.168.110.201   Enabled              Enabled        3
The output on 192.168.110.201 shows three connections. Check additional VNIs.
192.168.110.201 # show control-cluster logical-switches vni 5001
VNI      Controller        BUM-Replication      ARP-Proxy      Connections
5001     192.168.110.201   Enabled              Enabled        3
192.168.110.201 # show control-cluster logical-switches vni 5002
VNI      Controller        BUM-Replication      ARP-Proxy      Connections
5002     192.168.110.201   Enabled              Enabled        3
Because 192.168.110.201 owns all three VNI connections, we expect to see zero connections on the other controller, 192.168.110.203.
192.168.110.203 # show control-cluster logical-switches vni 5000
VNI      Controller        BUM-Replication      ARP-Proxy      Connections
5000     192.168.110.201   Enabled              Enabled        0
Before checking the MAC and ARP tables, ping from one VM to the other VM.
From app VM to web VM:
vmware@app-vm$ ping 172.16.10.10
PING 172.16.10.10 (172.16.10.10) 56(84) bytes of data.
64 bytes from 172.16.10.10: icmp_req=1 ttl=64 time=2.605 ms
64 bytes from 172.16.10.10: icmp_req=2 ttl=64 time=1.490 ms
64 bytes from 172.16.10.10: icmp_req=3 ttl=64 time=2.422 ms
Check the MAC tables.
192.168.110.201 # show control-cluster logical-switches mac-table 5000
VNI      MAC                  VTEP-IP            Connection-ID
5000     00:50:56:a6:23:ae    192.168.250.52     7
192.168.110.201 # show control-cluster logical-switches mac-table 5001
VNI      MAC                  VTEP-IP            Connection-ID
5001     00:50:56:a6:8d:72    192.168.250.51     23
Check the ARP tables.
192.168.110.201 # show control-cluster logical-switches arp-table 5000
VNI      IP               MAC                  Connection-ID
5000     172.16.20.10     00:50:56:a6:23:ae    7
192.168.110.201 # show control-cluster logical-switches arp-table 5001
VNI      IP               MAC                  Connection-ID
5001     172.16.10.10     00:50:56:a6:8d:72    23
Check the logical router information. Each logical router instance is served by one of the controller nodes.
The instance subcommand of show control-cluster logical-routers command displays a list of logical routers that are connected to this controller.
The interface-summary subcommand displays the LIFs that the controller learned from the NSX Manager. This information is sent to the hosts that are in the host clusters managed under the transport zone.
The routes subcommand shows the routing table that is sent to this controller by the logical router's virtual appliance (also known as the control VM). Unlike on the ESXi hosts, this routing table does not include directly connected subnets because this information is provided by the LIF configuration. Route information on the ESXi hosts includes directly connected subnets because in that case it is a forwarding table used by ESXi host’s datapath.
List all logical routers connected to this controller.
controller # show control-cluster logical-routers instance all
LR-Id      LR-Name            Universal    Service-Controller    Egress-Locale
0x1388     default+edge-1     false        192.168.110.201       local
Note the LR-Id and use it in the following command.
controller # show control-cluster logical-routers interface-summary 0x1388
Interface          Type       Id        IP
13880000000b       vxlan      0x1389    172.16.10.1/24
13880000000a       vxlan      0x1388    172.16.20.1/24
138800000002       vxlan      0x138a    192.168.10.2/29
controller # show control-cluster logical-routers routes 0x1388
Destination         Next-Hop       Preference   Locale-Id                              Source
192.168.100.0/24    192.168.10.1   110          00000000-0000-0000-0000-000000000000   CONTROL_VM
0.0.0.0/0           192.168.10.1   0            00000000-0000-0000-0000-000000000000   CONTROL_VM
[root@comp02a:~] esxcfg-route -l
VMkernel Routes:
Network          Netmask          Gateway          Interface
10.20.20.0       255.255.255.0    Local Subnet     vmk1
192.168.210.0    255.255.255.0    Local Subnet     vmk0
default          0.0.0.0          192.168.210.1    vmk0
Display the controller connections to the specific VNI.
192.168.110.203 # show control-cluster logical-switches connection-table 5000
Host-IP           Port   ID
192.168.110.53    26167  4
192.168.210.52    27645  5
192.168.210.53    40895  6
192.168.110.202 # show control-cluster logical-switches connection-table 5001
Host-IP           Port   ID
192.168.110.53    26167  4
192.168.210.52    27645  5
192.168.210.53    40895  6
These Host-IP addresses are vmk0 interfaces, not VTEPs. Connections between ESXi hosts and controllers are created on the management network. The port numbers here are ephemeral TCP ports that are allocated by the ESXi host IP stack when the host establishes a connection with the controller.
On the host, you can view the controller network connection matched to the port number.
[firstname.lastname@example.org:~] # esxcli network ip connection list | grep 26167
tcp   0   0   192.168.110.53:26167   192.168.110.101:1234   ESTABLISHED   96416   newreno   netcpa-worker
Display active VNIs on the host. Observe how the output is different across hosts. Not all VNIs are active on all hosts. A VNI is active on a host if the host has a VM that is connected to the logical switch.
[email@example.com:~] # esxcli network vswitch dvs vmware vxlan network list --vds-name Compute_VDS
VXLAN ID  Multicast IP               Control Plane                        Controller Connection  Port Count  MAC Entry Count  ARP Entry Count  VTEP Count
--------  -------------------------  -----------------------------------  ---------------------  ----------  ---------------  ---------------  ----------
5000      N/A (headend replication)  Enabled (multicast proxy,ARP proxy)  192.168.110.203 (up)   1           0                0                0
5001      N/A (headend replication)  Enabled (multicast proxy,ARP proxy)  192.168.110.202 (up)   1           0                0                0

Note:
To enable the vxlan namespace in vSphere 6.0 and later, run the /etc/init.d/hostd restart command.
For logical switches in hybrid or unicast mode, the esxcli network vswitch dvs vmware vxlan network list --vds-name <vds-name> command contains the following output:
Control Plane is enabled.
Multicast proxy and ARP proxy are listed. ARP proxy is listed even if you disabled IP discovery.
A valid controller IP address is listed and the connection is up.
If a logical router is connected to the ESXi host, the Port Count is at least 1, even if no VMs on the host are connected to the logical switch. This one port is the vdrPort, a special dvPort connected to the logical router kernel module on the ESXi host.
First ping from VM to another VM on a different subnet and then display the MAC table. Note that the Inner MAC is the VM entry while the Outer MAC and Outer IP refer to the VTEP.
~ # esxcli network vswitch dvs vmware vxlan network mac list --vds-name=Compute_VDS --vxlan-id=5000
Inner MAC          Outer MAC          Outer IP        Flags
-----------------  -----------------  --------------  --------
00:50:56:a6:23:ae  00:50:56:6a:65:c2  192.168.250.52  00000111
~ # esxcli network vswitch dvs vmware vxlan network mac list --vds-name=Compute_VDS --vxlan-id=5001
Inner MAC          Outer MAC          Outer IP        Flags
-----------------  -----------------  --------------  --------
02:50:56:56:44:52  00:50:56:6a:65:c2  192.168.250.52  00000101
00:50:56:f0:d7:e4  00:50:56:6a:65:c2  192.168.250.52  00000111
What to do next
When you install an NSX Edge appliance, NSX enables automatic VM startup/shutdown on the host if vSphere HA is disabled on the cluster. If the appliance VMs are later migrated to other hosts in the cluster, the new hosts might not have automatic VM startup/shutdown enabled. For this reason, when you install NSX Edge appliances on clusters that have vSphere HA disabled, check all hosts in the cluster to make sure that automatic VM startup/shutdown is enabled. See "Edit Virtual Machine Startup and Shutdown Settings" in vSphere Virtual Machine Administration.
After the logical router is deployed, double-click the logical router ID to configure additional settings, such as interfaces, routing, firewall, bridging, and DHCP relay.