When an NSX-T Data Center logical switch requires a Layer 2 connection to a VLAN-backed port group or needs to reach another device, such as a gateway, that resides outside of an NSX-T Data Center deployment, you can use an NSX-T Data Center Layer 2 bridge. This is especially useful in a migration scenario, in which you need to split a subnet across physical and virtual workloads.

The NSX-T Data Center concepts involved in Layer 2 bridging are bridge clusters, bridge endpoints, and bridge nodes. A bridge cluster is a high-availability (HA) collection of bridge nodes. A bridge node is a transport node that performs bridging. Each logical switch that is used for bridging the virtual and physical deployments has an associated VLAN ID. A bridge endpoint identifies the physical attributes of the bridge, such as the bridge cluster ID and the associated VLAN ID.
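The following Python sketch is purely illustrative and does not reproduce the NSX-T Data Center API schema; it only models how the concepts relate: a bridge cluster groups bridge nodes, and a bridge endpoint ties a bridge cluster ID and a VLAN ID to a logical switch.

    from dataclasses import dataclass, field
    from typing import List, Optional

    # Illustrative model only -- class and field names are not the NSX-T API schema.

    @dataclass
    class BridgeNode:
        transport_node_id: str                 # a transport node that performs bridging

    @dataclass
    class BridgeCluster:
        cluster_id: str
        bridge_nodes: List[BridgeNode] = field(default_factory=list)  # HA collection

    @dataclass
    class BridgeEndpoint:
        bridge_cluster_id: str                 # which bridge cluster carries the traffic
        vlan_id: int                           # VLAN used on the physical side

    @dataclass
    class LogicalSwitch:
        name: str
        transport_zone_type: str               # must be an overlay transport zone for bridging
        bridge_endpoint: Optional[BridgeEndpoint] = None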

You can configure Layer 2 bridging using either ESXi host transport nodes or NSX Edge transport nodes. To use ESXi host transport nodes for bridging, you create a bridge cluster. To use NSX Edge transport nodes for bridging, you create a bridge profile.
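As a rough sketch of the ESXi host path, you can drive the NSX Manager REST API from Python. The URI, payload field names, and IDs below are assumptions used for illustration only; verify them against the NSX-T Data Center API reference for your release.

    import requests

    NSX_MANAGER = "https://nsx-mgr.example.com"          # placeholder address
    AUTH = ("admin", "password")                         # placeholder credentials

    # Assumed endpoint and payload shape for creating a bridge cluster from two
    # ESXi host transport nodes -- verify field names against the API reference.
    payload = {
        "display_name": "bridge-cluster-01",
        "bridge_nodes": [
            {"transport_node_id": "tn-esxi-01-uuid"},    # placeholder UUIDs
            {"transport_node_id": "tn-esxi-02-uuid"},
        ],
    }

    resp = requests.post(
        f"{NSX_MANAGER}/api/v1/bridge-clusters",
        json=payload,
        auth=AUTH,
        verify=False,                                    # lab only; use CA-signed certs in production
    )
    resp.raise_for_status()
    print(resp.json().get("id"))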

In the following example, two NSX-T Data Center transport nodes are part of the same overlay transport zone. This makes it possible for their NSX managed virtual distributed switches (N-VDS, previously known as hostswitch) to be attached to the same bridge-backed logical switch.

Figure 1. Bridge Topology (bridge node, transport node, and node external to NSX-T Data Center)

The transport node on the left belongs to a bridge cluster and is therefore a bridge node.

Because the logical switch is attached to a bridge cluster, it is called a bridge-backed logical switch. To be eligible for bridge backing, a logical switch must be in an overlay transport zone, not in a VLAN transport zone.
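If you want to check this constraint programmatically before attaching a bridge endpoint, you can read the transport zone back from the Manager API. This is a sketch only; the URI and the transport_type value are assumptions based on the overlay-versus-VLAN distinction described here.

    import requests

    NSX_MANAGER = "https://nsx-mgr.example.com"          # placeholder address
    AUTH = ("admin", "password")                         # placeholder credentials

    def is_overlay_zone(tz_id: str) -> bool:
        """Return True if the transport zone is an overlay zone (sketch only)."""
        resp = requests.get(
            f"{NSX_MANAGER}/api/v1/transport-zones/{tz_id}",
            auth=AUTH,
            verify=False,                                # lab only
        )
        resp.raise_for_status()
        # Assumed field distinguishing OVERLAY from VLAN transport zones
        return resp.json().get("transport_type") == "OVERLAY"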

The middle transport node is not part of the bridge cluster. It is a normal transport node and can be a KVM or ESXi host. In the diagram, a VM called "app VM" on this node is attached to the bridge-backed logical switch.

The node on the right is not part of the NSX-T Data Center overlay. It might be any hypervisor with a VM (as shown in the diagram) or it might be a physical network node. If the non-NSX-T Data Center node is an ESXi host, you can use a standard vSwitch or a vSphere distributed switch for the port attachment. One requirement is that the VLAN ID associated with the port attachment must match the VLAN ID on the bridge-backed logical switch. Also, because the communication occurs over Layer 2, the two end devices must have IP addresses in the same subnet.
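You can illustrate the two adjacency requirements, matching VLAN IDs and a shared IP subnet, with a quick check that uses only the Python standard library. The addresses and VLAN IDs are made-up example values.

    import ipaddress

    # Example values only: the VLAN on the bridge-backed logical switch must match
    # the VLAN of the port attachment on the non-NSX-T side, and the two end
    # devices must share a subnet because the bridge operates at Layer 2.
    bridge_vlan_id = 160
    portgroup_vlan_id = 160

    app_vm_ip = ipaddress.ip_interface("192.168.160.11/24")       # VM on the overlay
    external_vm_ip = ipaddress.ip_interface("192.168.160.21/24")  # VM outside NSX-T

    assert bridge_vlan_id == portgroup_vlan_id, "VLAN IDs must match"
    assert app_vm_ip.network == external_vm_ip.network, "End devices must share a subnet"
    print("Layer 2 adjacency requirements satisfied")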

As stated, the purpose of the bridge is to enable Layer 2 communication between the two VMs. Traffic between them traverses the bridge node.

Note:

When you use Edge VMs running on ESXi hosts to provide Layer 2 bridging, the port group on the standard or distributed switch that sends and receives traffic on the VLAN side should be in promiscuous mode (see the configuration sketch after this list). For optimal performance, note the following:

  • Do not have other port groups in promiscuous mode on the same host sharing the same set of VLANs.

  • The active and standby Edge VMs should be on different hosts. If they are on the same host, the throughput might drop to 7 Gbps because VLAN traffic needs to be forwarded to both VMs in promiscuous mode.
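If the VLAN-side port group is on a standard vSwitch, one way to enable promiscuous mode is through the vSphere API. The following pyVmomi sketch assumes placeholder names for vCenter Server, the ESXi host, and the port group; you can also make the same change in the vSphere Client.

    # Sketch: enable promiscuous mode on a standard vSwitch port group with pyVmomi.
    # vCenter address, host name, port group name, and credentials are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ssl._create_unverified_context())
    try:
        content = si.RetrieveContent()
        host = content.searchIndex.FindByDnsName(dnsName="esxi-01.example.com", vmSearch=False)
        net_sys = host.configManager.networkSystem

        # Reuse the existing port group settings so the vSwitch and VLAN ID are preserved.
        pg = next(p for p in net_sys.networkInfo.portgroup
                  if p.spec.name == "Edge-Bridge-VLAN160")

        spec = vim.host.PortGroup.Specification(
            name=pg.spec.name,
            vlanId=pg.spec.vlanId,
            vswitchName=pg.spec.vswitchName,
            policy=vim.host.NetworkPolicy(
                security=vim.host.NetworkPolicy.SecurityPolicy(allowPromiscuous=True)
            ),
        )
        net_sys.UpdatePortGroup(pgName=pg.spec.name, portgrp=spec)
    finally:
        Disconnect(si)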