A distributed logical router (DLR) is a virtual appliance that contains the routing control plane, while distributing the data plane in kernel modules to each hypervisor host. The DLR control plane function relies on the NSX controller cluster to push routing updates to the kernel modules.

About this task

When deploying a new logical router, consider the following:

  • NSX version 6.2 and later allows logical router-routed logical interfaces (LIFs) to be connected to a VXLAN that is bridged to a VLAN.

  • Logical router interfaces and bridging interfaces cannot be connected to a dvPortgroup with the VLAN ID set to 0.

  • A given logical router instance cannot be connected to logical switches that exist in different transport zones. This is to ensure that all logical switches and logical router instances are aligned.

  • A logical router cannot be connected to VLAN-backed portgroups if that logical router is connected to logical switches spanning more than one vSphere distributed switch (VDS). This restriction ensures correct alignment of logical router instances with logical switch dvPortgroups across hosts.

  • Logical router interfaces should not be created on two different distributed portgroups (dvPortgroups) with the same VLAN ID if the two networks are in the same vSphere distributed switch.

  • Logical router interfaces should not be created on two different dvPortgroups with the same VLAN ID if two networks are in different vSphere distributed switches, but the two vSphere distributed switches share the same hosts. In other words, logical router interfaces can be created on two different networks with the same VLAN ID if the two dvPortgroups are in two different vSphere distributed switches, as long as the vSphere distributed switches do not share a host.

The following list describes feature support by interface type (uplink and internal) on the logical router:

  • Dynamic routing protocols (BGP and OSPF) are supported only on uplink interfaces.

  • Firewall rules are applicable only on uplink interfaces and are limited to control and management traffic that is destined to the edge virtual appliance.

  • For more information about the DLR management interface, see the VMware knowledge base article Considerations for Management Interface of Distributed Logical Router Control VM at http://kb.vmware.com/kb/2122060.

Prerequisites

  • You must have been assigned the Enterprise Administrator or NSX Administrator role.

  • You must create a local segment ID pool, even if you have no plans to create NSX logical switches.

  • Make sure the controller cluster is up and available before creating or changing a logical router configuration. A logical router cannot distribute routing information to hosts without the help of NSX controllers. A logical router relies on NSX controllers to function, while edge services gateways (ESGs) do not.

  • If a logical router is to be connected to VLAN dvPortgroups, ensure that all hypervisor hosts with a logical router appliance installed can reach each other on UDP port 6999. This connectivity is required for logical router VLAN-based ARP proxy to work; a verification sketch follows this list.

  • Determine where to deploy the logical router appliance.

    • The destination host must be part of the same transport zone as the logical switches connected to the new logical router's interfaces.

    • If you use ESGs in an ECMP setup, avoid placing the logical router appliance on the same host as one or more of its upstream ESGs. You can use DRS anti-affinity rules to enforce this and reduce the impact of a host failure on logical router forwarding. This guideline does not apply if you have a single upstream ESG, either by itself or in HA mode. For more information, see the VMware NSX for vSphere Network Virtualization Design Guide at https://communities.vmware.com/docs/DOC-27683.

  • Verify that the host cluster on which the logical router appliance will be installed is prepared for NSX. See Prepare Host Clusters for NSX in the NSX Installation Guide.
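
To verify the UDP 6999 path called out above, you can capture ARP-proxy traffic on the relevant VMkernel interface of each host while generating ARP activity on the bridged VLAN. This is a minimal sketch; tcpdump-uw ships with ESXi, but the interface name (vmk0 here) and the host prompt are placeholders for your environment.

    [root@esxi-host:~] tcpdump-uw -i vmk0 -n udp port 6999

If packets leave one host but never arrive at the others, check the physical network and any firewalls in the path for UDP port 6999.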

Procedure

  1. In the vSphere Web Client, navigate to Home > Networking & Security > NSX Edges.
  2. Click the Add icon.
  3. Select Logical (Distributed) Router and type a name for the device.

    This name appears in your vCenter inventory. The name should be unique across all logical routers within a single tenant.

    Optionally, you can also enter a hostname. This name appears in the CLI. If you do not specify the hostname, the Edge ID, which is created automatically, is displayed in the CLI.

    Optionally, you can enter a description and tenant.


  4. (Optional) Deploy an edge appliance.

    Deploy Edge Appliance is selected by default. An edge appliance (also called a logical router virtual appliance) is required for dynamic routing and the logical router appliance's firewall, which applies to logical router pings, SSH access, and dynamic routing traffic.

    You can deselect this option if you require only static routes and do not want to deploy an edge appliance. Note that you cannot add an edge appliance to the logical router after the logical router has been created.

  5. (Optional) Enable High Availability.

    Enable High Availability is not selected by default. Select the Enable High Availability check box to enable and configure high availability. High availability is required if you are planning to do dynamic routing.

  6. Type and re-type a password for the logical router.

    The password must be 12-255 characters and must contain the following:

    • At least one upper case letter

    • At least one lower case letter

    • At least one number

    • At least one special character

  7. (Optional) Enable SSH.

    By default, SSH is disabled. If you do not enable SSH, you can still access the logical router by opening the virtual appliance console. Enabling SSH here causes the SSH process to run on the logical router virtual appliance, but you will also need to adjust the logical router firewall configuration manually to allow SSH access to the logical router's protocol address. The protocol address is configured when you configure dynamic routing on the logical router.

  8. (Optional) Enable FIPS mode and set the log level.

    By default, FIPS mode is disabled. Select the Enable FIPS mode check box to enable it. When FIPS mode is enabled, any secure communication to or from the NSX Edge uses cryptographic algorithms or protocols that are allowed by FIPS.

    By default, the log level is emergency.


  9. Configure deployment.
    • If you did not select Deploy Edge Appliance, the Add icon is grayed out. Click Next to continue with configuration.

    • If you selected Deploy Edge Appliance, enter the settings for the logical router virtual appliance that will be added to your vCenter inventory.


  10. Configure interfaces. On logical routers, only IPv4 addressing is supported.
    1. Configure the HA interface connection, and optionally an IP address.

      If you selected Deploy Edge Appliance, you must connect the HA interface to a distributed port group or logical switch. If you are using this interface as an HA interface only, VMware recommends using a logical switch. A /30 subnet is allocated from the link-local range 169.254.0.0/16 and is used to provide an IP address for each of the two NSX Edge appliances (as an illustration, one appliance might receive 169.254.1.1/30 and the other 169.254.1.2/30; the subnet is chosen automatically).

      Optionally, if you want to use this interface to connect to the NSX Edge, you can specify an additional IP address and prefix for the HA interface.

      Note:

      Before NSX 6.2, the HA interface was called the management interface. You cannot SSH into the HA interface from anywhere that isn’t on the same IP subnet as the HA interface. You cannot configure static routes that point out of the HA interface, which means that RPF will drop incoming traffic. You could, in theory, disable RPF, but this would be counterproductive to high availability. For SSH access, you can also use the logical router's protocol address, which is configured later when you configure dynamic routing.

      In NSX 6.2 and later, the HA interface of a logical router is automatically excluded from route redistribution.

    2. Configure interfaces of this NSX Edge.

      In Configure interfaces of this NSX Edge, the internal interfaces are for connections to switches that allow VM-to-VM (sometimes called East-West) communication. Internal interfaces are created as pseudo vNICs on the logical router virtual appliance. Uplink interfaces are for North-South communication. A logical router uplink interface can connect to an NSX edge services gateway, to a third-party router VM, or to a VLAN-backed dvPortgroup that connects the logical router directly to a physical router. You must have at least one uplink interface for dynamic routing to work. Uplink interfaces are created as vNICs on the logical router virtual appliance.

      The interface configuration that you enter here is modifiable later. You can add, remove, and modify interfaces after a logical router is deployed.

    The following example shows an HA interface connected to the management distributed port group. The example also shows two internal interfaces (app and web) and an uplink interface (to-ESG).

  11. Configure a default gateway.


  12. Make sure any VMs attached to the logical switches have their default gateways set properly to the logical router interface IP addresses.
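
The procedure above uses the vSphere Web Client. Deployment can also be scripted against the NSX Manager REST API; in NSX for vSphere, logical routers are created by POSTing an edge definition with type distributedRouter to /api/4.0/edges. The sketch below is illustrative only: the hostname, credentials, and object IDs are placeholders, and the exact set of required XML elements (such as the management interface and logical router interfaces) varies by NSX version, so verify the request body against the NSX for vSphere API Guide before use.

    curl -k -u 'admin:VMware1!VMware1!' \
      -X POST https://nsxmgr-l-01a/api/4.0/edges \
      -H 'Content-Type: application/xml' \
      -d '<edge>
            <datacenterMoid>datacenter-2</datacenterMoid>
            <type>distributedRouter</type>
            <name>dlr-example</name>
            <appliances>
              <appliance>
                <resourcePoolId>domain-c1</resourcePoolId>
                <datastoreId>datastore-1</datastoreId>
              </appliance>
            </appliances>
          </edge>'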

Results

In the following example topology, the default gateway of app VM should be 172.16.20.1. The default gateway of web VM should be 172.16.10.1. Make sure the VMs can ping their default gateways and each other.

Log in via SSH to the NSX Manager, and run the following commands:

  • List all logical router instance information.

    nsxmgr-l-01a> show logical-router list all
    Edge-id             Vdr Name                      Vdr id              #Lifs
    edge-1              default+edge-1                0x00001388          3
    

  • List the hosts that have received routing information for the logical router from the controller cluster.

    nsxmgr-l-01a> show logical-router list dlr edge-1 host
    ID                   HostName                             
    host-25              192.168.210.52                       
    host-26              192.168.210.53                       
    host-24              192.168.110.53

    The output includes all hosts from all host clusters that are configured as members of the transport zone owning the logical switch connected to the specified logical router (edge-1 in this example).

  • List the routing table information that is communicated to the hosts by the logical router. Routing table entries should be consistent across all of the hosts.

    nsxmgr-l-01a> show logical-router host host-25 dlr edge-1 route
    
    VDR default+edge-1 Route Table
    Legend: [U: Up], [G: Gateway], [C: Connected], [I: Interface]
    Legend: [H: Host], [F: Soft Flush] [!: Reject] [E: ECMP]
    
    Destination     GenMask          Gateway         Flags   Ref Origin   UpTime    Interface
    -----------     -------          -------         -----   --- ------   ------    ---------
    0.0.0.0         0.0.0.0          192.168.10.1    UG      1   AUTO     4101      138800000002
    172.16.10.0     255.255.255.0    0.0.0.0         UCI     1   MANUAL   10195     13880000000b
    172.16.20.0     255.255.255.0    0.0.0.0         UCI     1   MANUAL   10196     13880000000a
    192.168.10.0    255.255.255.248  0.0.0.0         UCI     1   MANUAL   10196     138800000002
    192.168.100.0   255.255.255.0    192.168.10.1    UG      1   AUTO     3802      138800000002
    

  • List additional information about the router from the point of view of one of the hosts. This is helpful to learn which controller is communicating with the host.

    nsxmgr-l-01a> show logical-router host host-25 dlr edge-1 verbose
    
    VDR Instance Information :
    ---------------------------
    
    Vdr Name:                   default+edge-1
    Vdr Id:                     0x00001388
    Number of Lifs:             3
    Number of Routes:           5
    State:                      Enabled
    Controller IP:              192.168.110.203
    Control Plane IP:           192.168.210.52
    Control Plane Active:       Yes
    Num unique nexthops:        1
    Generation Number:          0
    Edge Active:                No
    

Check the Controller IP field in the output of the show logical-router host host-25 dlr edge-1 verbose command.

SSH to a controller, and run the following commands to display the controller's learned VNI, VTEP, MAC, and ARP table state information.

  • 192.168.110.202 # show control-cluster logical-switches vni 5000
    VNI      Controller      BUM-Replication ARP-Proxy Connections
    5000     192.168.110.201 Enabled         Enabled   0
    

    The output for VNI 5000 shows zero connections and lists controller 192.168.110.201 as its owner. Log in to that controller to gather further information for VNI 5000.

    192.168.110.201 # show control-cluster logical-switches vni 5000
    VNI      Controller      BUM-Replication ARP-Proxy Connections
    5000     192.168.110.201 Enabled         Enabled   3
    

    The output on 192.168.110.201 shows three connections. Check additional VNIs.

    192.168.110.201 # show control-cluster logical-switches vni 5001
    VNI      Controller      BUM-Replication ARP-Proxy Connections
    5001     192.168.110.201 Enabled         Enabled   3
    

    192.168.110.201 # show control-cluster logical-switches vni 5002
    VNI      Controller      BUM-Replication ARP-Proxy Connections
    5002     192.168.110.201 Enabled         Enabled   3

    Because 192.168.110.201 owns all three VNI connections, you would expect to see zero connections on the other controller, 192.168.110.203.

    192.168.110.203 # show control-cluster logical-switches vni 5000
    VNI      Controller      BUM-Replication ARP-Proxy Connections
    5000     192.168.110.201 Enabled         Enabled   0
    

  • Before checking the MAC and ARP tables, start pinging from one VM to the other VM.

    From app VM to web VM:

    vmware@app-vm$ ping 172.16.10.10
    PING 172.16.10.10 (172.16.10.10) 56(84) bytes of data.
    64 bytes from 172.16.10.10: icmp_req=1 ttl=64 time=2.605 ms
    64 bytes from 172.16.10.10: icmp_req=2 ttl=64 time=1.490 ms
    64 bytes from 172.16.10.10: icmp_req=3 ttl=64 time=2.422 ms
    

    Check the MAC tables.

    192.168.110.201 # show control-cluster logical-switches mac-table 5000
    VNI      MAC               VTEP-IP         Connection-ID
    5000     00:50:56:a6:23:ae 192.168.250.52  7

    192.168.110.201 # show control-cluster logical-switches mac-table 5001
    VNI      MAC               VTEP-IP         Connection-ID
    5001     00:50:56:a6:8d:72 192.168.250.51  23

    Check the ARP tables.

    192.168.110.201 # show control-cluster logical-switches arp-table 5000
    VNI      IP              MAC               Connection-ID
    5000     172.16.20.10    00:50:56:a6:23:ae 7

    192.168.110.201 # show control-cluster logical-switches arp-table 5001
    VNI      IP              MAC               Connection-ID
    5001     172.16.10.10    00:50:56:a6:8d:72 23

Check the logical router information. Each logical router instance is served by one of the controller nodes.

The instance sub-command of the show control-cluster logical-routers command displays a list of logical routers that are connected to this controller.

The interface-summary sub-command displays the LIFs that the controller learned from the NSX Manager. This information is sent to the hosts that are in the host clusters managed under the transport zone.

The routes sub-command shows the routing table that is sent to this controller by the logical router's virtual appliance (also known as the control VM). Note that unlike on the ESXi hosts, this routing table does not include directly connected subnets, because this information is provided by the LIF configuration. Route information on the ESXi hosts includes directly connected subnets because in that case it is a forwarding table used by the ESXi host's datapath.

  • List all logical routers connected to this controller.

    controller # show control-cluster logical-routers instance all
    LR-Id      LR-Name            Universal Service-Controller Egress-Locale
    0x1388     default+edge-1     false     192.168.110.201    local
    

    Note the LR-Id and use it in the following command.

  • List the LIFs that the controller has learned for this logical router.

    controller # show control-cluster logical-routers interface-summary 0x1388
    Interface                        Type   Id           IP[]
    13880000000b                     vxlan  0x1389       172.16.10.1/24
    13880000000a                     vxlan  0x1388       172.16.20.1/24
    138800000002                     vxlan  0x138a       192.168.10.2/29
    

  • List the routes that the control VM has sent to this controller.

    controller # show control-cluster logical-routers routes 0x1388
    Destination        Next-Hop[]      Preference Locale-Id                            Source
    192.168.100.0/24   192.168.10.1    110        00000000-0000-0000-0000-000000000000 CONTROL_VM
    0.0.0.0/0          192.168.10.1    0          00000000-0000-0000-0000-000000000000 CONTROL_VM
    
  • Display the host's VMkernel routing table.

    [root@comp02a:~] esxcfg-route -l
    VMkernel Routes:
    Network          Netmask          Gateway          Interface
    10.20.20.0       255.255.255.0    Local Subnet     vmk1
    192.168.210.0    255.255.255.0    Local Subnet     vmk0
    default          0.0.0.0          192.168.210.1    vmk0
    
  • Display the controller connections to the specific VNI.

    192.168.110.203 # show control-cluster logical-switches connection-table 5000
    Host-IP         Port  ID
    192.168.110.53  26167 4
    192.168.210.52  27645 5
    192.168.210.53  40895 6
    
    192.168.110.202 # show control-cluster logical-switches connection-table 5001
    Host-IP         Port  ID
    192.168.110.53  26167 4
    192.168.210.52  27645 5
    192.168.210.53  40895 6
    

    These Host-IP addresses are vmk0 interfaces, not VTEPs. Connections between ESXi hosts and controllers are created on the management network. The port numbers here are ephemeral TCP ports that are allocated by the ESXi host IP stack when the host establishes a connection with the controller.

  • On the host, you can view the controller network connection matched to the port number.

    [root@192.168.110.53:~] # esxcli network ip connection list | grep 26167
    tcp         0       0  192.168.110.53:26167             192.168.110.101:1234  ESTABLISHED     96416  newreno  netcpa-worker
    
  • Display active VNIs on the host. Observe how the output is different across hosts. Not all VNIs are active on all hosts. A VNI is active on a host if the host has a VM that is connected to the logical switch.

    [root@192.168.210.52:~] # esxcli network vswitch dvs vmware vxlan network list --vds-name Compute_VDS
    VXLAN ID  Multicast IP               Control Plane                        Controller Connection  Port Count  MAC Entry Count  ARP Entry Count  VTEP Count
    --------  -------------------------  -----------------------------------  ---------------------  ----------  ---------------  ---------------  ----------
        5000  N/A (headend replication)  Enabled (multicast proxy,ARP proxy)  192.168.110.203 (up)            1                0                0           0
        5001  N/A (headend replication)  Enabled (multicast proxy,ARP proxy)  192.168.110.202 (up)            1                0                0           0
    
    Note:

    To enable the vxlan namespace in vSphere 6 and later, run the /etc/init.d/hostd restart command.

    For logical switches in hybrid or unicast mode, the esxcli network vswitch dvs vmware vxlan network list --vds-name <vds-name> command should contain the following output:

    • Control Plane is enabled.

    • Multicast proxy and ARP proxy are listed. ARP proxy is listed even if you disabled IP discovery.

    • A valid controller IP address is listed and the connection is up.

    • If a logical router is connected to the ESXi host, the Port Count is at least 1, even if there are no VMs on the host connected to the logical switch. This one port is the vdrPort, a special dvPort connected to the logical router kernel module on the ESXi host (see the net-vdr sketch after this list).

  • First ping from one VM to another VM on a different subnet, and then display the MAC table. Note that the Inner MAC is the VM entry, while the Outer MAC and Outer IP refer to the VTEP.

    ~ # esxcli network vswitch dvs vmware vxlan network mac list --vds-name=Compute_VDS --vxlan-id=5000
    Inner MAC          Outer MAC          Outer IP        Flags
    -----------------  -----------------  --------------  --------
    00:50:56:a6:23:ae  00:50:56:6a:65:c2  192.168.250.52  00000111
    
    ~ # esxcli network vswitch dvs vmware vxlan network mac list --vds-name=Compute_VDS --vxlan-id=5001
    Inner MAC          Outer MAC          Outer IP        Flags
    -----------------  -----------------  --------------  --------
    02:50:56:56:44:52  00:50:56:6a:65:c2  192.168.250.52  00000101
    00:50:56:f0:d7:e4  00:50:56:6a:65:c2  192.168.250.52  00000111
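
You can also inspect the logical router instance directly on an ESXi host with the net-vdr utility. This is a hedged sketch: net-vdr is present on NSX-prepared hosts, but its option syntax varies across NSX versions, so confirm with net-vdr --help. The instance name default+edge-1 matches the earlier examples.

    ~ # net-vdr --instance -l
    ~ # net-vdr --lif -l default+edge-1

The first command lists the VDR instances known to the host; the second lists the LIFs of the default+edge-1 instance, which should match the controller's interface-summary output.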
    

What to do next

When you install an NSX Edge appliance, NSX enables automatic VM startup/shutdown on the host if vSphere HA is disabled on the cluster. If the appliance VMs are later migrated to other hosts in the cluster, the new hosts might not have automatic VM startup/shutdown enabled. For this reason, when you install NSX Edge appliances on clusters that have vSphere HA disabled, VMware recommends checking all hosts in the cluster to make sure that automatic VM startup/shutdown is enabled; a spot-check sketch follows. See "Edit Virtual Machine Startup and Shutdown Settings" in vSphere Virtual Machine Administration.
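
To spot-check a host from the ESXi shell, you can query its automatic startup defaults with vim-cmd. This is a minimal sketch under the assumption that the hostsvc/autostartmanager/get_defaults subcommand is available on your ESXi build; look for enabled = true in the output.

    [root@esxi-host:~] vim-cmd hostsvc/autostartmanager/get_defaults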

After the logical router is deployed, double-click the logical router ID to configure additional settings, such as interfaces, routing, firewall, bridging, and DHCP relay.