John Admin configures VXLAN on Cluster1 and Cluster2 by mapping each cluster to a vSphere Distributed Switch. When he maps a cluster to a switch, each host in that cluster is enabled for logical switches.


  1. Click the Host Preparation tab.
  2. For Cluster1, select Configure in the VXLAN column.
  3. In the Configuring VXLAN networking dialog box, select dvSwitch1 as the vSphere Distributed Switch for the cluster.
  4. Type 10 as the VLAN ID for dvSwitch1 to use as the ACME transport VLAN.
  5. In Specify Transport Attributes, leave 1600 as the Maximum Transmission Units (MTU) for dvSwitch1.
    MTU is the maximum amount of data that can be transmitted in one packet before it is divided into smaller packets. John Admin knows that VXLAN encapsulation adds outer headers, making logical switch traffic frames slightly larger than standard frames, so the MTU for each switch must be set to 1550 or higher.
  6. In VMKNic IP Addressing, select Use IP Pool and select an IP pool.
  7. For VMKNic Teaming Policy, select Failover.
    John Admin wants to maintain quality of service in his network by keeping the performance of logical switches the same under normal and fault conditions, so he chooses Failover as the teaming policy.
  8. Click Add.
  9. Repeat steps 2 through 8 to configure VXLAN on Cluster2.
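
The 1550-byte minimum mentioned in step 5 comes from the size of the VXLAN encapsulation headers. As a rough sketch (the header sizes below are standard VXLAN values, not taken from this guide), the overhead adds up like this:

```python
# Sketch: why a VXLAN transport network needs an MTU of at least 1550 bytes.
# A full 1514-byte inner Ethernet frame (1500-byte payload + 14-byte header)
# is wrapped in VXLAN, UDP, and outer IPv4 headers; the outer Ethernet header
# itself does not count against the transport MTU.

INNER_PAYLOAD = 1500   # standard guest MTU
INNER_ETHERNET = 14    # inner frame header carried inside the tunnel
VXLAN_HEADER = 8
UDP_HEADER = 8
OUTER_IPV4 = 20

required_mtu = (INNER_PAYLOAD + INNER_ETHERNET
                + VXLAN_HEADER + UDP_HEADER + OUTER_IPV4)
print(required_mtu)  # 1550
```

Leaving the MTU at 1600, as in step 5, gives extra headroom, for example for an inner 802.1Q VLAN tag (4 bytes) or an outer IPv6 header (40 bytes instead of 20).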


After John Admin maps Cluster1 and Cluster2 to the appropriate switch, the hosts in those clusters are prepared for logical switches:
  1. A VXLAN kernel module and a VMKNic are added to each host in Cluster1 and Cluster2.
  2. A special dvPortGroup is created on the vSwitch associated with the logical switch, and the VMKNic is connected to it.
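
The Use IP Pool option from step 6 means each new VMKNic draws its address from a preconfigured range. As a toy illustration only (this is not NSX code; the address range and VMKNic names are made-up values), a minimal pool allocator might look like:

```python
# Toy model of an IP pool handing out VMKNic addresses, built on the
# standard ipaddress module. The range 192.168.150.10-192.168.150.19
# is a made-up example.
import ipaddress

class IPPool:
    def __init__(self, first, last):
        start = ipaddress.ip_address(first)
        count = int(ipaddress.ip_address(last)) - int(start) + 1
        self.free = [start + i for i in range(count)]
        self.assigned = {}

    def allocate(self, vmknic):
        """Hand the next free address in the range to the named VMKNic."""
        addr = self.free.pop(0)
        self.assigned[vmknic] = addr
        return addr

pool = IPPool("192.168.150.10", "192.168.150.19")
for host in ("esx-01", "esx-02", "esx-03"):
    print(host, pool.allocate(f"vmk1@{host}"))
# esx-01 192.168.150.10
# esx-02 192.168.150.11
# esx-03 192.168.150.12
```

Each host's VMKNic gets the next sequential address, which is the observable behavior John Admin sees after preparation completes.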