The teaming and failover policy lets you determine how network traffic is distributed between physical adapters and how traffic is rerouted in the event of an adapter failure.


Enable the port-level override option for this policy. See Edit Advanced Distributed Port Group Settings with the vSphere Web Client.


  1. In the vSphere Web Client, navigate to the distributed switch.
  2. On the Manage tab, select Ports.
  3. Select a distributed port from the list and click Edit distributed port settings.
  4. Click Teaming and failover and select the check box next to the properties that you want to override.
  5. From the Load Balancing drop-down menu, specify how the standard or distributed switch selects an uplink to handle traffic from a virtual machine or VMkernel adapter.



    Route based on the originating virtual port

    Select an uplink based on the virtual port where the traffic entered the virtual switch.

    Route based on IP hash

    Select an uplink based on a hash of the source and destination IP addresses of each packet. For non-IP packets, the switch uses the data at those offsets to compute the hash.

    IP-based teaming requires that the physical switch be configured with EtherChannel.

    Route based on source MAC hash

    Select an uplink based on a hash of the source Ethernet (MAC) address.

    Route based on physical NIC load

    Available for distributed port groups or distributed ports. Select an uplink based on the current load of the physical network adapters connected to the port group or port. If an uplink remains busy at 75% or higher for 30 seconds, the host proxy switch moves a part of the virtual machine traffic to a physical adapter that has free capacity.

    Use explicit failover order

    From the list of active adapters, always use the highest order uplink that passes failover detection criteria.
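The hash-based policies above can be sketched in a few lines. This is a simplified illustration with hypothetical uplink and address values, not ESXi's internal hashing; the point is only that each policy keys the uplink choice on a different packet attribute.

```python
# Simplified sketch of the hash-based load-balancing policies.
# The uplink team and the modulo scheme are illustrative only.
import ipaddress

UPLINKS = ["vmnic0", "vmnic1", "vmnic2"]  # hypothetical team

def by_virtual_port(port_id: int) -> str:
    # Route based on the originating virtual port: the port ID alone
    # picks the uplink, so a given VM's traffic stays on one uplink.
    return UPLINKS[port_id % len(UPLINKS)]

def by_ip_hash(src_ip: str, dst_ip: str) -> str:
    # Route based on IP hash: both addresses feed the hash, so one VM
    # can spread traffic across uplinks for different destinations.
    h = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    return UPLINKS[h % len(UPLINKS)]

def by_source_mac(mac: str) -> str:
    # Route based on source MAC hash: only the source MAC matters.
    return UPLINKS[int(mac.replace(":", ""), 16) % len(UPLINKS)]
```

Note that only the IP-hash policy can place one virtual machine's traffic on several uplinks at once, which is why it alone requires EtherChannel on the physical switch.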

  6. From the Network Failover Detection drop-down menu, specify the method that the standard or distributed switch uses for failover detection.



    Link Status only

    Relies only on the link status that the network adapter provides. This option detects failures, such as removed cables and physical switch power failures.

    However, it does not detect configuration errors, such as the following:

    • A physical switch port that is blocked by spanning tree or that is misconfigured to the wrong VLAN.

    • A pulled cable that connects a physical switch to another networking device, for example, an upstream switch.

    Beacon Probing

    Sends out and listens for beacon probes on all NICs in the team and uses this information, in addition to link status, to determine link failure. ESX/ESXi sends beacon packets every second.

    Beaconing is most useful with three or more NICs in a team, because then ESX/ESXi can detect the failure of a single adapter. If only two NICs are assigned and one of them loses connectivity, the switch cannot determine which NIC to take out of service, because neither receives beacons; as a result, packets are sent on both uplinks. Using at least three NICs in such a team allows for n-2 failures, where n is the number of NICs in the team, before an ambiguous situation is reached.

    The NICs must be in an active/active or active/standby configuration because the NICs in an unused state do not participate in beacon probing.
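The two-NIC ambiguity described above can be modeled in a short sketch. This is an illustration of the reasoning only, not ESXi's actual beacon algorithm:

```python
# Toy model of beacon probing: every NIC with a working link broadcasts
# a beacon that all other working NICs receive. A NIC that hears no
# beacons from any teammate is a failure suspect.
def suspect_nics(team, failed):
    suspects = []
    for nic in team:
        hears_beacons = nic not in failed and any(
            other not in failed for other in team if other != nic)
        if not hears_beacons:
            suspects.append(nic)
    return suspects

# With three NICs, a single failure is pinpointed: the two healthy NICs
# still hear each other. With two NICs, the healthy NIC also hears
# nothing, so both become suspects and the failure is ambiguous.
```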

  7. From the Notify Switches drop-down menu, select whether the standard or distributed switch notifies the physical switch in case of a failover.

    If you select Yes, whenever a virtual NIC is connected to the virtual switch or whenever that virtual NIC’s traffic is routed over a different physical NIC in the team because of a failover event, a notification is sent over the network to update the lookup tables on physical switches. Notifying the physical switch offers the lowest latency when a failover or a migration with vSphere vMotion occurs.


    Set this option to No if a connected virtual machine uses Microsoft Network Load Balancing in unicast mode. No issues exist with Network Load Balancing running in multicast mode.

  8. From the Failback drop-down menu, determine whether a physical adapter is returned to active status after recovering from a failure.

    If failback is set to Yes (default), the adapter is returned to active duty immediately upon recovery, displacing the standby adapter that took over its slot, if any.

    If failback is set to No on a distributed port, a failed adapter is left inactive after recovery only while the associated virtual machine is running. If the Failback option is No, all active physical adapters fail while a virtual machine is powered off, and one of them then recovers, the virtual NIC is connected to the recovered adapter, not to a standby one, after the virtual machine is powered on. Powering a virtual machine off and on again reconnects its virtual NIC to a distributed port. The distributed switch treats the port as newly added and assigns it the default uplink, that is, the active uplink adapter.
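The effect of the Failback setting reduces to a one-line decision, sketched below with hypothetical adapter names:

```python
def adapter_after_recovery(current_standby: str, recovered_active: str,
                           failback: bool) -> str:
    """Which adapter carries traffic once a failed active adapter
    (recovered_active) comes back while a standby (current_standby)
    holds its slot. With Failback = Yes the recovered adapter displaces
    the standby immediately; with No, the standby keeps the slot."""
    return recovered_active if failback else current_standby
```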

  9. Specify how the uplinks in a team are used when a failover occurs by configuring the Failover Order list.

    If you want to use some uplinks but reserve others for emergencies in case the uplinks in use fail, use the up and down arrows to move uplinks between the groups.



    Active adapters

    Continue to use the uplink if the network adapter connectivity is up and active.

    Standby adapters

    Use this uplink if one of the active physical adapters is down.

    Unused adapters

    Do not use this uplink.
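The three failover order groups above amount to a simple selection rule, sketched here with hypothetical uplink names (not ESXi's implementation):

```python
# Pick the highest-order uplink that passes failover detection: active
# adapters are tried first, then standby adapters, in configured order.
# Unused adapters are never tried.
def select_uplink(active, standby, links_up):
    for uplink in active + standby:
        if uplink in links_up:
            return uplink
    return None  # no uplink with connectivity
```

For example, if vmnic0 in the active group loses its link, traffic moves to the next active adapter with connectivity, and only then to a standby adapter.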

  10. Click OK.