The VXLAN network is used for Layer 2 logical switching across hosts, potentially spanning multiple underlying Layer 3 domains. You configure VXLAN on a per-cluster basis, where you map each cluster that is to participate in NSX to a vSphere distributed switch (VDS). When you map a cluster to a distributed switch, each host in that cluster is enabled for logical switches. The settings chosen here will be used in creating the VMkernel interface.

If you need logical routing and switching, all clusters that have NSX VIBs installed on the hosts should also have VXLAN transport parameters configured. If you plan to deploy distributed firewall only, you do not need to configure VXLAN transport parameters.


  • All hosts within the cluster must be attached to a common VDS.

  • NSX Manager must be installed.

  • NSX controllers must be installed, unless you are using multicast replication mode for the control plane.

  • Plan your NIC teaming policy, which determines the load balancing and failover settings of the VDS.

    Do not mix teaming policies across portgroups on a single VDS such that some portgroups use EtherChannel, LACPv1, or LACPv2 while others use a different teaming policy. If uplinks are shared across these mismatched teaming policies, traffic will be interrupted, and if logical routers are present, there will be routing problems. Such a configuration is not supported and must be avoided.

    The best practice for IP hash-based teaming (EtherChannel, LACPv1, or LACPv2) is to include all uplinks on the VDS in the team, and to avoid portgroups on that VDS with different teaming policies. For more information and further guidance, see the VMware® NSX for vSphere Network Virtualization Design Guide.

  • Plan the IP addressing scheme for the VXLAN tunnel endpoints (VTEPs). VTEPs are the source and destination IP addresses used in the outer IP header to uniquely identify the ESXi hosts that originate and terminate VXLAN encapsulation of frames. You can use either DHCP or manually configured IP pools for VTEP IP addresses.

    If you want a specific IP address to be assigned to a VTEP, you can either (1) use a DHCP fixed address or reservation that maps a MAC address to a specific IP address on the DHCP server, or (2) use an IP pool and then manually edit the VTEP IP address assigned to the vmknic under Manage > Networking > Virtual Switches.

    VTEPs have an associated VLAN ID. You can, however, specify VLAN ID = 0 for VTEPs, meaning frames will be untagged.

  • For clusters that are members of the same VDS, the VLAN ID for the VTEPs and the NIC teaming must be the same.

  • As a best practice, export the VDS configuration before preparing the cluster for VXLAN.
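When an IP pool is used for VTEP addressing, each prepared host receives the next unassigned address from the pool's range. The following is a minimal illustrative sketch of that allocation behavior; the function name, pool range, and assigned-address list are hypothetical, not an NSX API.

```python
import ipaddress

def allocate_vtep_ips(pool_start, pool_end, assigned, count):
    """Return the next `count` unassigned addresses from an inclusive
    pool range (illustrative model of IP-pool-based VTEP addressing)."""
    start = ipaddress.IPv4Address(pool_start)
    end = ipaddress.IPv4Address(pool_end)
    taken = {ipaddress.IPv4Address(a) for a in assigned}
    allocated = []
    addr = start
    while addr <= end and len(allocated) < count:
        if addr not in taken:
            allocated.append(str(addr))
        addr += 1
    if len(allocated) < count:
        raise ValueError("IP pool exhausted")
    return allocated

# Example with a hypothetical pool: the first address is already taken,
# so the next two free addresses are handed out.
print(allocate_vtep_ips("192.168.150.1", "192.168.150.10",
                        ["192.168.150.1"], 2))
# → ['192.168.150.2', '192.168.150.3']
```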


  1. In vCenter, navigate to Home > Networking & Security > Installation and select the Host Preparation tab.
  2. From the NSX Manager dropdown menu, select the Primary NSX Manager.
  3. Click Not Configured in the VXLAN column.
  4. Set up logical networking.

    This involves selecting a VDS, a VLAN ID, an MTU size, an IP addressing mechanism, and a NIC teaming policy.

    The MTU for each switch must be set to 1550 or higher; by default, it is set to 1600. If the vSphere distributed switch (VDS) MTU is already larger than the VXLAN MTU, the VDS MTU is not adjusted down. If the VDS MTU is lower than the VXLAN MTU, it is raised to match. For example, if the VDS MTU is 2000 and you accept the default VXLAN MTU of 1600, the VDS MTU is unchanged; if the VDS MTU is 1500 and the VXLAN MTU is 1600, the VDS MTU is raised to 1600.

    For example, a management cluster might be configured with an IP pool for VTEP addressing, backed by VLAN 150, and with a failover NIC teaming policy.

    The number of VTEPs is not editable in the UI. The VTEP number is set to match the number of dvUplinks on the vSphere distributed switch being prepared.

    For compute clusters, you may want to use different IP address settings (for example, with VLAN 250). This would depend on how the physical network is designed, and likely won't be the case in small deployments.
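The MTU adjustment rule in step 4 can be sketched as a one-line function; this is an illustrative model of the behavior described above, not an NSX API.

```python
def effective_vds_mtu(current_vds_mtu, vxlan_mtu=1600):
    """VDS MTU after VXLAN preparation: raised to the VXLAN MTU if it
    is lower, left unchanged if it is already equal or larger."""
    return max(current_vds_mtu, vxlan_mtu)

print(effective_vds_mtu(2000))  # → 2000 (already larger, unchanged)
print(effective_vds_mtu(1500))  # → 1600 (raised to the VXLAN MTU)
```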


Configuring VXLAN results in the creation of new distributed port groups.

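The port groups NSX creates typically carry a "vxw-" name prefix (a common NSX-v naming convention; the example names below are hypothetical, so confirm the pattern in your environment). A minimal sketch of spotting them in an inventory listing:

```python
def nsx_created_portgroups(portgroup_names):
    """Return the port groups whose names carry the "vxw-" prefix
    typically applied by NSX (naming convention assumed here)."""
    return [name for name in portgroup_names if name.startswith("vxw-")]

# Hypothetical inventory after VXLAN configuration:
pgs = ["Management Network",
       "vxw-vmknicPg-dvs-21-150-...",              # VTEP vmknic port group
       "vxw-dvs-21-virtualwire-1-sid-5000-..."]    # a logical switch
print(nsx_created_portgroups(pgs))
```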