The VXLAN network is used for Layer 2 logical switching across hosts, potentially spanning multiple underlying Layer 3 domains. You configure VXLAN on a per-cluster basis, where you map each cluster that is to participate in NSX to a vSphere distributed switch (VDS). When you map a cluster to a distributed switch, each host in that cluster is enabled for logical switches. The settings chosen here will be used in creating the VMkernel interface.
Before you begin
For details on prerequisites, see Configure VXLAN from the Primary NSX Manager.
- Log in to the vCenter linked to the secondary NSX Manager, navigate to the Host Preparation tab, and select the secondary NSX Manager from the drop-down menu.
If your vCenters are in Enhanced Linked Mode, you can configure the secondary NSX Manager from any linked vCenter. Navigate to the Host Preparation tab, and select the secondary NSX Manager from the drop-down menu.
- Click Not Configured in the VXLAN column.
- Set up logical networking.
This involves selecting a VDS, a VLAN ID, an MTU size, an IP addressing mechanism, and a NIC teaming policy.
The MTU for each switch must be set to 1550 or higher; by default, it is 1600. If the vSphere distributed switch (VDS) MTU is larger than the VXLAN MTU, the VDS MTU is left unchanged; it is never adjusted down. If the VDS MTU is lower than the VXLAN MTU, it is raised to match. For example, if the VDS MTU is set to 2000 and you accept the default VXLAN MTU of 1600, no change is made to the VDS MTU. If the VDS MTU is 1500 and the VXLAN MTU is 1600, the VDS MTU is changed to 1600.
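The adjustment rule above can be sketched as a one-line helper (a hypothetical illustration of the behavior described, not an NSX API):

```python
# Sketch of the VDS MTU adjustment rule: the VDS MTU is only ever raised
# to the VXLAN MTU, never lowered. Function name and defaults are
# illustrative assumptions, not part of NSX.
def adjusted_vds_mtu(vds_mtu: int, vxlan_mtu: int = 1600) -> int:
    """Return the effective VDS MTU after VXLAN configuration."""
    return max(vds_mtu, vxlan_mtu)

# Matches the two examples in the text:
print(adjusted_vds_mtu(2000))  # 2000 (no change: VDS MTU already larger)
print(adjusted_vds_mtu(1500))  # 1600 (raised to match the VXLAN MTU)
```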
These example screens show a configuration for a management cluster with an IP pool address range of 192.168.150.1-192.168.150.100, backed by VLAN 150, and with a failover NIC teaming policy.
The number of VTEPs is not editable in the UI. The VTEP number is set to match the number of dvUplinks on the vSphere distributed switch being prepared.
For compute clusters, you may want to use different IP address settings (for example, 192.168.250.0/24 with VLAN 250). This would depend on how the physical network is designed, and likely won't be the case in small deployments.
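When separate pools are used per cluster, as in the management/compute example above, the subnets should not overlap. A minimal check, using the example values from the text (the subnets and VLAN assignments are illustrative only):

```python
# Verify that per-cluster VTEP subnets do not overlap.
# 192.168.150.0/24 (VLAN 150, management) and 192.168.250.0/24
# (VLAN 250, compute) are the example values from the text.
import ipaddress

mgmt = ipaddress.ip_network("192.168.150.0/24")     # management cluster
compute = ipaddress.ip_network("192.168.250.0/24")  # compute cluster

print(mgmt.overlaps(compute))  # False: distinct VTEP subnets per cluster
```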