In addition to the host components, NSX Routing employs the services of the Controller Cluster and the DLR Control VMs, each of which is a source of DLR control plane information and provides its own CLI for examining it.
DLR Instance Master Controller
Each DLR Instance is served by one of the Controller nodes. The following CLI commands can be used to view information that this Controller node has for the DLR Instance for which it is the master:
nsx-controller # show control-cluster logical-routers instance 1460487509
LR-Id       LR-Name         Hosts            Edge-Connection  Service-Controller
1460487509  default+edge-1  192.168.210.57                    192.168.110.201
                            192.168.210.51
                            192.168.210.52
                            192.168.210.56
                            192.168.110.51
                            192.168.110.52

nsx-controller # show control-cluster logical-routers interface-summary 1460487509
Interface         Type   Id    IP
570d455500000002  vxlan  5003  192.168.10.2/29
570d45550000000b  vxlan  5001  172.16.20.1/24
570d45550000000c  vxlan  5002  172.16.30.1/24
570d45550000000a  vxlan  5000  172.16.10.1/24

nsx-controller # show control-cluster logical-routers routes 1460487509
LR-Id       Destination  Next-Hop
1460487509  0.0.0.0/0    192.168.10.1
The “instance” sub-command of the “show control-cluster logical-routers” command displays the list of hosts that are connected to this Controller for this DLR Instance. In a correctly functioning environment, this list includes all hosts from all clusters where the DLR exists.
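The comparison this check implies can be sketched in a few lines. A minimal example, using the host IPs from the output above; the “expected” set is hypothetical and would come from your own inventory of the clusters where the DLR exists:

```python
# Sanity check: every host in the clusters where the DLR exists should
# appear in the Controller's host list for that DLR Instance.
# The "expected" set below is hypothetical inventory data.
expected_hosts = {
    "192.168.210.51", "192.168.210.52", "192.168.210.56", "192.168.210.57",
    "192.168.110.51", "192.168.110.52",
}

# Hosts reported by "show control-cluster logical-routers instance ...".
reported_hosts = {
    "192.168.210.57", "192.168.210.51", "192.168.210.52",
    "192.168.210.56", "192.168.110.51", "192.168.110.52",
}

# Any host in this set is not connected to the Controller and needs investigation.
missing = expected_hosts - reported_hosts
```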
The “interface-summary” sub-command displays the LIFs that the Controller learned from the NSX Manager. This information is sent to the hosts.
The “routes” sub-command shows the routing table sent to this Controller by this DLR’s Control VM. Note that unlike on the ESXi hosts, this table does not include any directly connected subnets, because that information is provided by the LIF configuration.
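The relationship described above — the hosts’ full routing table being the union of the Controller-relayed routes and the connected routes derived from the LIFs — can be sketched as follows. The data is taken from the example output; the merge logic is an illustration, not the actual host implementation:

```python
import ipaddress

# LIF IPs as learned from NSX Manager (see interface-summary output above).
lifs = ["192.168.10.2/29", "172.16.20.1/24", "172.16.30.1/24", "172.16.10.1/24"]

# Routes relayed by the Controller from the DLR Control VM (routes output above).
controller_routes = {"0.0.0.0/0": "192.168.10.1"}

# Directly connected subnets are derived from the LIF configuration,
# not carried in the Controller's route table.
connected = {
    str(ipaddress.ip_interface(lif).network): "connected"
    for lif in lifs
}

# The hosts end up with both: connected routes plus Control-VM routes.
host_routing_table = {**connected, **controller_routes}
```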
DLR Control VM
The DLR Control VM has LIFs and routing/forwarding tables. Its principal output is the DLR routing table, which is produced by combining its Interfaces and Routes.
edge-1-0> show ip route
Codes: O - OSPF derived, i - IS-IS derived, B - BGP derived,
       C - connected, S - static, L1 - IS-IS level-1, L2 - IS-IS level-2,
       IA - OSPF inter area, E1 - OSPF external type 1, E2 - OSPF external type 2

Total number of routes: 5

S      0.0.0.0/0        [1/1]  via 192.168.10.1
C      172.16.10.0/24   [0/0]  via 172.16.10.1
C      172.16.20.0/24   [0/0]  via 172.16.20.1
C      172.16.30.0/24   [0/0]  via 172.16.30.1
C      192.168.10.0/29  [0/0]  via 192.168.10.2

edge-1-0> show ip forwarding
Codes: C - connected, R - remote,
       > - selected route, * - FIB route

R>* 0.0.0.0/0 via 192.168.10.1, vNic_2
C>* 172.16.10.0/24 is directly connected, VDR
C>* 172.16.20.0/24 is directly connected, VDR
C>* 172.16.30.0/24 is directly connected, VDR
C>* 192.168.10.0/29 is directly connected, vNic_2
The purpose of the Forwarding Table is to show which DLR interface is chosen as the egress for a given destination subnet.
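The egress selection is a standard longest-prefix-match lookup. A minimal sketch over the forwarding table shown above (entries and interface names taken from the example output; this illustrates the lookup, not the Control VM’s actual code):

```python
import ipaddress

# Forwarding table from the "show ip forwarding" output above:
# (destination prefix, egress interface)
fib = [
    ("0.0.0.0/0",       "vNic_2"),
    ("172.16.10.0/24",  "VDR"),
    ("172.16.20.0/24",  "VDR"),
    ("172.16.30.0/24",  "VDR"),
    ("192.168.10.0/29", "vNic_2"),
]

def egress_interface(dst: str) -> str:
    """Return the egress interface for a destination IP; longest match wins."""
    addr = ipaddress.ip_address(dst)
    matches = [
        (ipaddress.ip_network(prefix), iface)
        for prefix, iface in fib
        if addr in ipaddress.ip_network(prefix)
    ]
    return max(matches, key=lambda m: m[0].prefixlen)[1]
```

For example, a packet to 172.16.20.5 matches both 0.0.0.0/0 and 172.16.20.0/24; the more specific /24 wins, so the egress is the “VDR” pseudo-interface.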
The “VDR” interface is displayed for all LIFs of “Internal” type. The “VDR” interface is a pseudo-interface that does not correspond to a vNIC.
The DLR Control VM’s interfaces can be displayed as follows:
edge-1-0> show interface
Interface VDR is up, line protocol is up
  index 2 metric 1 mtu 1500 <UP,BROADCAST,RUNNING,NOARP>
  HWaddr: be:3d:a1:52:90:f4
  inet6 fe80::bc3d:a1ff:fe52:90f4/64
  inet 172.16.10.1/24
  inet 172.16.20.1/24
  inet 172.16.30.1/24
  proxy_arp: disabled
  Auto-duplex (Full), Auto-speed (2460Mb/s)
  input packets 0, bytes 0, dropped 0, multicast packets 0
  input errors 0, length 0, overrun 0, CRC 0, frame 0, fifo 0, missed 0
  output packets 0, bytes 0, dropped 0
  output errors 0, aborted 0, carrier 0, fifo 0, heartbeat 0, window 0
  collisions 0

Interface vNic_0 is up, line protocol is up
  index 3 metric 1 mtu 1500 <UP,BROADCAST,RUNNING,MULTICAST>
  HWaddr: 00:50:56:8e:1c:fb
  inet6 fe80::250:56ff:fe8e:1cfb/64
  inet 169.254.1.1/30
  inet 10.10.10.1/24
  proxy_arp: disabled
  Auto-duplex (Full), Auto-speed (2460Mb/s)
  input packets 582249, bytes 37339072, dropped 49, multicast packets 0
  input errors 0, length 0, overrun 0, CRC 0, frame 0, fifo 0, missed 0
  output packets 4726382, bytes 461202852, dropped 0
  output errors 0, aborted 0, carrier 0, fifo 0, heartbeat 0, window 0
  collisions 0

Interface vNic_2 is up, line protocol is up
  index 9 metric 1 mtu 1500 <UP,BROADCAST,RUNNING,MULTICAST>
  HWaddr: 00:50:56:8e:ae:08
  inet 192.168.10.2/29
  inet6 fe80::250:56ff:fe8e:ae08/64
  proxy_arp: disabled
  Auto-duplex (Full), Auto-speed (2460Mb/s)
  input packets 361446, bytes 30167226, dropped 0, multicast packets 361168
  input errors 0, length 0, overrun 0, CRC 0, frame 0, fifo 0, missed 0
  output packets 361413, bytes 30287912, dropped 0
  output errors 0, aborted 0, carrier 0, fifo 0, heartbeat 0, window 0
  collisions 0
Notes of interest:
Interface “VDR” does not have a VM NIC (vNIC) associated with it. It is a single pseudo-interface configured with the IP addresses of all of the DLR’s “Internal” LIFs.
Interface vNic_0 in this example is the HA interface.
The output above was taken from a DLR deployed with HA enabled and an IP address assigned to the HA interface. The interface therefore shows two IP addresses: 169.254.1.1/30, auto-assigned for HA, and 10.10.10.1/24, assigned manually.
On an ESG, the operator can manually designate one of its vNICs as the HA interface, or leave the default and let the system choose automatically from the available “Internal” interfaces. The interface must be of “Internal” type; otherwise, HA will fail.
Interface vNic_2 is an Uplink type; therefore, it is represented as a “real” vNIC.
Note that the IP address seen on this interface is the same as the DLR’s LIF address; however, the DLR Control VM will not answer ARP queries for the LIF IP address (in this case, 192.168.10.2/29). An ARP filter applied to this vNIC’s MAC address suppresses those replies.
The point above holds true until a dynamic routing protocol is configured on the DLR, at which point the IP address and the ARP filter are removed and replaced with the “Protocol IP” address specified during the dynamic routing protocol configuration.
This vNIC is used by the dynamic routing protocol running on the DLR Control VM to communicate with the other routers to advertise and learn routes.
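The interface layout described in these notes — all “Internal” LIF addresses landing on the single “VDR” pseudo-interface, while each “Uplink” LIF is backed by a real vNIC — can be sketched as a simple mapping. The LIF data follows the example output; the grouping logic is illustrative only:

```python
# LIFs from the example, each tagged with its type. The vNIC name for
# the Uplink LIF matches the "show interface" output above.
lifs = [
    {"ip": "172.16.10.1/24",  "type": "Internal"},
    {"ip": "172.16.20.1/24",  "type": "Internal"},
    {"ip": "172.16.30.1/24",  "type": "Internal"},
    {"ip": "192.168.10.2/29", "type": "Uplink", "vnic": "vNic_2"},
]

# Group LIF addresses by the Control VM interface that carries them:
# "Internal" LIFs share the "VDR" pseudo-interface; "Uplink" LIFs get
# their own real vNIC.
control_vm_interfaces = {}
for lif in lifs:
    iface = "VDR" if lif["type"] == "Internal" else lif["vnic"]
    control_vm_interfaces.setdefault(iface, []).append(lif["ip"])
```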