This example describes the host network configuration for Fault Tolerance in a typical deployment with four 1 Gb NICs. It is one possible deployment that ensures adequate service to each of the traffic types identified in the example and can be considered a best-practice configuration.

Fault Tolerance provides continuous availability when a physical host fails because of a power outage, system panic, or similar cause. Failures of network or storage paths, or of other physical server components that do not affect the host's running state, do not trigger a Fault Tolerance failover to the Secondary VM. Therefore, customers are strongly encouraged to use appropriate redundancy (for example, NIC teaming) to reduce the chance of losing virtual machine connectivity to infrastructure components such as networks or storage arrays.

NIC teaming policies are configured on the vSwitch (vSS) Port Groups (or Distributed Virtual Port Groups for a vDS) and govern how the vSwitch handles and distributes traffic from virtual machines and VMkernel ports over the physical NICs (vmnics). A dedicated Port Group is typically used for each traffic type, with each traffic type assigned to a different VLAN.
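As a sketch, a dedicated port group can be created on a standard vSwitch from the ESXi Shell with esxcli; the vSwitch name (vSwitch0) and VLAN ID below are illustrative assumptions, not values mandated by this example:

```shell
# Create a dedicated port group for one traffic type on an existing
# standard vSwitch, then tag it with its own VLAN ID.
esxcli network vswitch standard portgroup add \
    --portgroup-name="FT Logging" --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set \
    --portgroup-name="FT Logging" --vlan-id=40
```

Distributed virtual port groups on a vDS are configured through vCenter Server rather than per host.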

Host Networking Configuration Guidelines

The following guidelines allow you to configure your host's networking to support Fault Tolerance with different combinations of traffic types (for example, NFS) and numbers of physical NICs.

  • Distribute each NIC team over two physical switches, ensuring Layer 2 domain continuity for each VLAN between the two physical switches.

  • Use deterministic teaming policies so that particular traffic types have an affinity to a particular NIC (active/standby) or set of NICs (for example, the Route based on originating virtual port ID policy).

  • Where active/standby policies are used, pair traffic types to minimize the impact of a failover in which both traffic types share a vmnic.

  • Where active/standby policies are used, configure all the active adapters for a particular traffic type (for example, FT Logging) to the same physical switch. This minimizes the number of network hops and lessens the possibility of oversubscribing the switch-to-switch links.
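To confirm that a deterministic policy is in effect, the teaming policy applied to a standard vSwitch port group can be inspected from the ESXi Shell; the port group name here is an assumption matching the example later in this document:

```shell
# Show the failover policy (load-balancing setting and the
# active/standby uplink order) currently applied to a port group.
esxcli network vswitch standard portgroup policy failover get \
    --portgroup-name="FT Logging"
```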

Configuration Example with Four 1Gb NICs

The following figure depicts the network configuration for a single ESXi host with four 1 Gb NICs supporting Fault Tolerance. Other hosts in the FT cluster would be configured similarly.

This example uses four port groups configured as follows:

  • VLAN A: Virtual Machine Network Port Group. Active on vmnic2 (to physical switch #1); standby on vmnic0 (to physical switch #2).

  • VLAN B: Management Network Port Group. Active on vmnic0 (to physical switch #2); standby on vmnic2 (to physical switch #1).

  • VLAN C: vMotion Port Group. Active on vmnic1 (to physical switch #2); standby on vmnic3 (to physical switch #1).

  • VLAN D: FT Logging Port Group. Active on vmnic3 (to physical switch #1); standby on vmnic1 (to physical switch #2).
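For a standard vSwitch, the active/standby orders above could be applied with esxcli roughly as follows; the port group names are assumptions chosen to match the list above, and a vDS would instead be configured through vCenter Server:

```shell
# Set deterministic active/standby uplink orders for the four port
# groups (vmnic assignments taken from the example above).
esxcli network vswitch standard portgroup policy failover set \
    -p "VM Network"         --active-uplinks=vmnic2 --standby-uplinks=vmnic0
esxcli network vswitch standard portgroup policy failover set \
    -p "Management Network" --active-uplinks=vmnic0 --standby-uplinks=vmnic2
esxcli network vswitch standard portgroup policy failover set \
    -p "vMotion"            --active-uplinks=vmnic1 --standby-uplinks=vmnic3
esxcli network vswitch standard portgroup policy failover set \
    -p "FT Logging"         --active-uplinks=vmnic3 --standby-uplinks=vmnic1
```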

vMotion and FT Logging can share the same VLAN (configure the same VLAN number in both port groups), but they require their own unique IP addresses in different IP subnets. However, separate VLANs might be preferred if the physical network applies VLAN-based Quality of Service (QoS) restrictions. QoS is of particular use where traffic types compete, for example, where multiple physical switch hops are used or when a failover occurs and multiple traffic types contend for network resources.
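A sketch of the shared-VLAN case, using hypothetical VMkernel interface numbers and IP addresses: the two VMkernel ports inherit the same VLAN from their port groups but sit in different IP subnets, and each is then tagged for its service (the tag names follow the esxcli convention on recent ESXi releases; verify them on your release with `esxcli network ip interface tag` options):

```shell
# vMotion VMkernel port in one subnet...
esxcli network ip interface add --interface-name=vmk1 --portgroup-name="vMotion"
esxcli network ip interface ipv4 set -i vmk1 \
    --ipv4=192.168.10.11 --netmask=255.255.255.0 --type=static

# ...and the FT Logging VMkernel port in a different subnet.
esxcli network ip interface add --interface-name=vmk2 --portgroup-name="FT Logging"
esxcli network ip interface ipv4 set -i vmk2 \
    --ipv4=192.168.20.11 --netmask=255.255.255.0 --type=static

# Tag each interface for its service.
esxcli network ip interface tag add -i vmk1 -t VMotion
esxcli network ip interface tag add -i vmk2 -t faultToleranceLogging
```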

Figure 1. Fault Tolerance Networking Configuration Example