Include two or more physical NICs in a team to increase the network capacity of a distributed port group or port. Configure failover order to determine how network traffic is rerouted in case of adapter failure. Select a load balancing algorithm to determine how the distributed switch load balances the traffic between the physical NICs in a team.

Before you begin

To override a policy at the distributed port level, enable the port-level override option for this policy. See Configure Overriding Networking Policies on Port Level.

About this task

Configure NIC teaming, failover, and load balancing in accordance with the network configuration on the physical switch and the topology of the distributed switch. For more information, see Teaming and Failover Policy and Load Balancing Algorithms Available for Virtual Switches.

If you configure the teaming and failover policy for a distributed port group, the policy is propagated to all ports in the group. If you configure the policy for a distributed port, it overrides the policy inherited from the group.
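
The same settings are exposed through the vSphere Web Services API, so the port group level of this configuration can also be scripted. The sketch below uses pyVmomi to apply a teaming policy as the default port configuration of a distributed port group, which then propagates to all ports in the group. It is a minimal sketch, not part of the procedure in this topic: the vCenter Server address, credentials, port group name, and policy values are placeholders, and the type and method names reflect the vSphere API as exposed by pyVmomi, so verify them against the SDK version you use.

    # Minimal pyVmomi sketch: set the teaming and failover policy at the
    # distributed port group level so that it propagates to all ports.
    # Host, credentials, and the port group name are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="password",
                      sslContext=ctx)
    content = si.RetrieveContent()

    # Locate the distributed port group by name.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
    pg = next(p for p in view.view if p.name == "DPortGroup")

    # Build the teaming policy: load-based teaming, notify switches, failback enabled.
    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
        inherited=False,
        policy=vim.StringPolicy(inherited=False, value="loadbalance_loadbased"),
        notifySwitches=vim.BoolPolicy(inherited=False, value=True),
        rollingOrder=vim.BoolPolicy(inherited=False, value=False))

    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        configVersion=pg.config.configVersion,
        defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
            uplinkTeamingPolicy=teaming))

    pg.ReconfigureDVPortgroup_Task(spec)  # the policy propagates to all ports in the group
    Disconnect(si)

Overriding the same policy on an individual port goes through ReconfigureDVPort_Task on the distributed switch instead, and requires that the port group allows port-level overrides, as described above.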

Procedure

  1. In the vSphere Web Client, navigate to the distributed switch.
  2. Navigate to the Teaming and Failover policy on the distributed port group or port.

    Option

    Action

    Distributed port group

    1. From the Actions menu, select Distributed Port Group > Manage Distributed Port Groups.

    2. Select Teaming and failover.

    3. Select the port group and click Next.

    Distributed port

    1. Select Related Objects, and select Distributed Port Groups.

    2. Select a distributed port group.

    3. Under Manage, select Ports.

    4. Select a port and click Edit distributed port settings.

    5. Select Teaming and failover.

    6. Select Override next to the properties that you want to override.

  3. From the Load Balancing drop-down menu, specify how the virtual switch load balances the outgoing traffic between the physical NICs in a team. For script-based equivalents of this setting and the settings in the following steps, see the sketches after this procedure.

    Option

    Description

    Route based on the originating virtual port

    Select an uplink based on the virtual port IDs on the switch. After the virtual switch selects an uplink for a virtual machine or a VMkernel adapter, it always forwards traffic through the same uplink for this virtual machine or VMkernel adapter.

    Route based on IP hash

    Select an uplink based on a hash of the source and destination IP addresses of each packet. For non-IP packets, the switch uses the data at those fields to compute the hash.

    IP-based teaming requires that the physical switch is configured with EtherChannel.

    Route based on source MAC hash

    Select an uplink based on a hash of the source Ethernet MAC address.

    Route based on physical NIC load

    Available for distributed port groups or distributed ports. Select an uplink based on the current load of the physical network adapters connected to the port group or port. If an uplink remains busy at 75 percent or higher for 30 seconds, the host proxy switch moves a part of the virtual machine traffic to a physical adapter that has free capacity.

    Use explicit failover order

    From the list of active adapters, always use the highest order uplink that passes failover detection criteria. No actual load balancing is performed with this option.

  4. From the Network Failover Detection drop-down menu, select the method that the virtual switch uses for failover detection.

    Option

    Description

    Link Status only

    Relies only on the link status that the network adapter provides. This option detects failures such as removed cables and physical switch power failures.

    Beacon Probing

    Sends out and listens for beacon probes on all NICs in the team, and uses this information, in addition to link status, to determine link failure. ESXi sends beacon packets every second.

    The NICs must be in an active/active or active/standby configuration because the NICs in an unused state do not participate in beacon probing.

  5. From the Notify Switches drop-down menu, select whether the standard or distributed switch notifies the physical switch in case of a failover.
    Note: Set this option to No if a connected virtual machine is using Microsoft Network Load Balancing in unicast mode. No issues exist with Network Load Balancing running in multicast mode.

  6. From the Failback drop-down menu, select whether a physical adapter is returned to active status after recovering from a failure.

    If failback is set to Yes, the default selection, the adapter is returned to active duty immediately upon recovery, displacing the standby adapter that took over its slot, if any.

    If failback is set to No for a distributed port, a failed adapter is left inactive after recovery only if the associated virtual machine is running. When the Failback option is No and a virtual machine is powered off, if all active physical adapters fail and then one of them recovers, the virtual NIC is connected to the recovered adapter instead of to a standby one after the virtual machine is powered on. Powering a virtual machine off and then on leads to reconnecting the virtual NIC to a distributed port. The distributed switch considers the port as newly added, and assigns it the default uplink port, that is, the active uplink adapter.

  7. Specify how the uplinks in a team are used when a failover occurs by configuring the Failover Order list.

    If you want to use some uplinks but reserve others for emergencies in case the uplinks in use fail, use the up and down arrow keys to move uplinks into different groups.

    Option

    Description

    Active adapters

    Continue to use the uplink if the network adapter connectivity is up and active.

    Standby adapters

    Use this uplink if one of the active physical adapters is down.

    Unused adapters

    Do not use this uplink.

  8. Review your settings and apply the configuration.
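
The load balancing options in step 3 differ mainly in which fields of the outgoing traffic, or which static property of the source, feed the uplink selection. The following toy sketch is not ESXi code; it only illustrates which inputs each policy consumes, using Python's built-in hash as a stand-in for the real hash function and a placeholder list of uplinks.

    # Toy illustration of the selection inputs behind the load balancing
    # policies in step 3. Python's hash() stands in for ESXi's internal
    # hash functions and is salted per process, so outputs vary between runs.
    uplinks = ["Uplink 1", "Uplink 2", "Uplink 3"]  # placeholder team

    def by_originating_port(virtual_port_id: int) -> str:
        # Route based on originating virtual port: the port ID alone picks the
        # uplink, so a VM or VMkernel adapter keeps using the same uplink.
        return uplinks[virtual_port_id % len(uplinks)]

    def by_source_mac(src_mac: str) -> str:
        # Route based on source MAC hash: only the source MAC address is hashed.
        return uplinks[hash(src_mac) % len(uplinks)]

    def by_ip_hash(src_ip: str, dst_ip: str) -> str:
        # Route based on IP hash: the source and destination IP addresses of
        # each packet are hashed, so a single VM can spread different flows
        # across several uplinks. Requires EtherChannel on the physical switch.
        return uplinks[hash((src_ip, dst_ip)) % len(uplinks)]

    print(by_originating_port(7))
    print(by_source_mac("00:50:56:aa:bb:cc"))
    print(by_ip_hash("10.0.0.5", "192.168.1.20"))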
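
Steps 4 through 6 correspond to individual fields of the same teaming policy object in the vSphere API. The fragment below is a sketch only, with pyVmomi type names assumed to match your SDK version; note in particular that the API expresses Failback inversely through a rollingOrder flag, where False corresponds to Failback set to Yes.

    # Sketch of the API fields behind steps 4-6 (pyVmomi; type names assumed).
    from pyVmomi import vim

    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(inherited=False)

    # Step 4 - Network failover detection: checkBeacon=True selects beacon
    # probing, checkBeacon=False selects link status only.
    teaming.failureCriteria = vim.dvs.VmwareDistributedVirtualSwitch.FailureCriteria(
        inherited=False,
        checkBeacon=vim.BoolPolicy(inherited=False, value=False))

    # Step 5 - Notify switches.
    teaming.notifySwitches = vim.BoolPolicy(inherited=False, value=True)

    # Step 6 - Failback, modeled as rollingOrder: False means the recovered
    # adapter returns to active duty (Failback Yes), True leaves it inactive
    # until needed (Failback No).
    teaming.rollingOrder = vim.BoolPolicy(inherited=False, value=False)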
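
The failover order in step 7 maps to the uplink port order policy on the same teaming object. In the sketch below (pyVmomi, type names assumed as above, uplink names are placeholders), active and standby uplinks are explicit lists, and any uplink named in neither list behaves as unused.

    # Sketch of the failover order in step 7 (pyVmomi; type names assumed).
    from pyVmomi import vim

    order = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
        inherited=False,
        activeUplinkPort=["Uplink 1", "Uplink 2"],  # used while their links are up
        standbyUplinkPort=["Uplink 3"])             # take over if an active uplink fails
    # Uplinks listed in neither list are treated as unused.

    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
        inherited=False,
        policy=vim.StringPolicy(inherited=False, value="failover_explicit"),
        uplinkPortOrder=order)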