Before you deploy an SDDC with more than two physical NICs per host, evaluate the reasons for such a configuration. Usually, the standard architecture with two physical NICs per host meets the requirements of your customers, data center networks, storage types, and use cases, without the added complexity of operating more physical NICs.

Legacy Practices

When you deploy a virtualized environment, you might be expected to follow older operational or environment practices without additional evaluation. In the past, the following practices were common:

  • Use hosts with 8 or more physical NICs to fully separate management, vSphere vMotion, storage, and virtual machine traffic. Today, these traffic types can be safely consolidated on the same fabric by using the safeguards noted in Network Architecture of VMware Cloud Foundation.

  • Physically separate virtual machine traffic from management traffic because a physical firewall was required to secure traffic. Today, hypervisor-level firewalls can provide even better security without the added complexity and traffic flows of a physical firewall.

Following legacy practices might carry forward additional complexity and risk. Consider modern network performance, congestion mitigation with VMware Network I/O Control, and availability when making design decisions about your data center network.

Aggregate Bandwidth Limitation

You often build modern data centers around 25 GbE host connectivity. For environments where 2 x 10 GbE does not provide sufficient bandwidth for contention-free networking, moving to 2 x 25 GbE NICs is the simplest and best choice. In this way, you can provide more potential bandwidth to each traffic flow from a host. Verify that your hardware manufacturer supports the NIC that you selected in your specific server. Network cards that support VXLAN or Geneve offload are also recommended if they are listed in the VMware Compatibility Guide.

Usually, faster physical data link speeds provide the best increase in the overall aggregate bandwidth available to individual traffic flows. In some cases, 2 x 25 GbE might not be sufficient, or a 25 GbE physical NIC might not be available or certified for your hardware. You can also move to 2 x 40 GbE or 2 x 100 GbE, but the physical infrastructure costs to support these speeds might be a limitation.
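
Network I/O Control is relevant here because it protects each traffic type when a shared uplink is saturated: under contention, bandwidth on a physical adapter is divided among the active system traffic types in proportion to their configured shares. The following minimal Python sketch illustrates that arithmetic. The share values are illustrative assumptions, not necessarily your configured or default values; changing LINK_SPEED_GBPS from 25 to 10 shows how much more worst-case headroom each traffic type retains on faster links.

  # Minimal sketch: how Network I/O Control shares divide a saturated uplink.
  # The share values below are illustrative only; substitute the shares
  # configured on your vSphere Distributed Switch.
  LINK_SPEED_GBPS = 25  # per physical NIC; change to 10 to compare

  shares = {
      "management": 50,
      "vSphere vMotion": 50,
      "vSAN": 100,
      "virtual machine": 100,
  }

  total_shares = sum(shares.values())

  print(f"Worst-case allocation on a saturated {LINK_SPEED_GBPS} GbE uplink:")
  for traffic_type, share in shares.items():
      bandwidth_gbps = LINK_SPEED_GBPS * share / total_shares
      print(f"  {traffic_type:>16}: {bandwidth_gbps:5.2f} Gbps")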

Traffic Separation for Security

Some organizations have a business or security requirement for physical separation of traffic onto separate fabrics. For instance, you might have to completely isolate management traffic from virtual machine traffic, or ESXi management access from all other traffic, to support specific data center deployment needs. Often, such a requirement exists because of legacy data center designs with physical firewalls or switch ACLs. With the use of VLANs and the NSX distributed firewall, traffic can be securely isolated without the complexity of additional physical NICs. Virtual machine traffic on NSX logical switches is further encapsulated in VXLAN or Geneve packets.

However, for environments that require physical separation, VMware Cloud Foundation can support mapping of management, storage, or virtual machine traffic to a logical switch that is connected to dedicated physical NICs. Such a configuration separates traffic onto distinct physical links from the host into the data center network, whether to different network ports on the same fabric or to a separate network fabric.

In this case, your management VMkernel adapters can be on one pair of physical NICs, with all other traffic (vSphere vMotion, vSAN, and so on) on another pair. Another goal might be to isolate virtual machine traffic so that one pair of physical NICs handles all management VMkernel adapters, vSphere vMotion, and vSAN, and a second pair handles the overlay and virtual machine traffic.
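
If you implement such a mapping programmatically, the following minimal pyVmomi sketch shows one way to attach a host's dedicated physical NICs to a second vSphere Distributed Switch that carries only the separated traffic. The vCenter Server address, credentials, object names, and vmnic numbers are illustrative assumptions, and the sketch assumes the second switch already exists.

  # Minimal pyVmomi sketch: back a second vSphere Distributed Switch with a
  # host's dedicated physical NICs. All names and credentials are placeholders.
  import ssl
  from pyVim.connect import SmartConnect, Disconnect
  from pyVim.task import WaitForTask
  from pyVmomi import vim

  def find_object(content, vim_type, name):
      # Return the first managed object of the given type with the given name.
      view = content.viewManager.CreateContainerView(content.rootFolder, [vim_type], True)
      obj = next(o for o in view.view if o.name == name)
      view.DestroyView()
      return obj

  ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
  si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                    pwd="changeme", sslContext=ctx)
  content = si.RetrieveContent()

  dvs = find_object(content, vim.DistributedVirtualSwitch, "vds02-separated-traffic")
  host = find_object(content, vim.HostSystem, "esxi01.example.com")

  # Add the host to the second switch and back it with the dedicated uplinks.
  host_member = vim.dvs.HostMember.ConfigSpec(
      operation="add",
      host=host,
      backing=vim.dvs.HostMember.PnicBacking(
          pnicSpec=[vim.dvs.HostMember.PnicSpec(pnicDevice="vmnic2"),
                    vim.dvs.HostMember.PnicSpec(pnicDevice="vmnic3")]))

  spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
      configVersion=dvs.config.configVersion,
      host=[host_member])

  WaitForTask(dvs.ReconfigureDvs_Task(spec))
  Disconnect(si)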

Traffic Separation for Operational Concerns

In the past, some data center networks were physically separate from each other because of data center operations concerns.

  • Such a configuration might come from the time when a physical throughput of 100 Mbps or 1 Gbps was a concern, or when congestion mitigation methods were not as advanced as they are today with Network I/O Control.

  • Such a configuration can also be related to legacy operational practices for storage area networking (SAN), where storage traffic was always isolated on a separate physical fabric, not just for technical reasons but to ensure that the SAN fabric was treated with an extra high degree of care by data center operations personnel.

Today, communication between components is critical regardless of type, source, or destination. All networks need to be treated with the same care because east-west workload communication is just as critical as storage communication. As discussed in Legacy Practices, you can resolve concerns about latency and congestion by sizing physical NICs appropriately and using Network I/O Control.

Even with modern data center design and appropriately sized physical NICs, some organizations require that you separate traffic that has high bandwidth potential or low latency requirements, most commonly storage traffic such as vSAN or NFS, or virtual machine backup traffic. In other cases, the goal is simply physical separation without any performance concerns, such as using a dedicated management fabric or a dedicated fabric for virtual machine traffic.

In these cases, VMkernel network adapters or virtual machine network adapters can be assigned to dedicated VLAN-backed port groups or overlay segments on a second vSphere Distributed Switch or N-VDS with dedicated physical NICs.
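
As one illustration of the VLAN-backed option, the following minimal pyVmomi sketch creates a dedicated distributed port group on such a second switch and attaches a vSAN VMkernel adapter to it. It reuses the si, dvs, and host objects from the earlier sketch, and the port group name, VLAN ID, IP settings, and MTU are illustrative assumptions.

  # Minimal pyVmomi sketch: dedicated VLAN-backed port group plus a vSAN
  # VMkernel adapter on the second switch. Reuses si, dvs, and host from the
  # previous sketch; VLAN, IP, and MTU values are placeholders.
  from pyVim.task import WaitForTask
  from pyVmomi import vim

  # 1. Create a port group with a dedicated VLAN for the separated traffic.
  pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
      name="pg-vsan-dedicated",
      type="earlyBinding",
      numPorts=16,
      defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
          vlan=vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
              vlanId=1613, inherited=False)))
  WaitForTask(dvs.AddDVPortgroup_Task([pg_spec]))
  portgroup = next(pg for pg in dvs.portgroup if pg.name == "pg-vsan-dedicated")

  # 2. Create a VMkernel adapter connected to that port group.
  nic_spec = vim.host.VirtualNic.Specification(
      ip=vim.host.IpConfig(dhcp=False, ipAddress="172.16.13.101",
                           subnetMask="255.255.255.0"),
      mtu=9000,  # jumbo frames only if the physical network supports them end to end
      distributedVirtualPort=vim.dvs.PortConnection(
          switchUuid=dvs.uuid, portgroupKey=portgroup.key))
  vmk_device = host.configManager.networkSystem.AddVirtualNic("", nic_spec)

  # 3. Tag the new adapter for the vSAN service.
  host.configManager.virtualNicManager.SelectVnicForNicType("vsan", vmk_device)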

Traffic Separation Because of Bandwidth Limitations

Today, data centers are often built with 25 GbE, 40 GbE, and even 100 GbE physical NICs per host. With high-throughput physical NICs, data centers can provide sufficient bandwidth for integrating vSAN, NFS, and virtual machine traffic (VLAN or overlay) on the same physical links. In some cases, customers might not be able to use physical NICs of 25 GbE or higher. In such cases, you can add more physical NICs to a virtual switch.

In certain cases, after the VMware Professional Services team evaluates your environment and workload traffic patterns, total separation between storage traffic and virtual machine traffic (VLAN and overlay) might prove beneficial.

In these cases, you can connect VMkernel adapters or virtual machine port groups to a specific logical switch with dedicated physical NICs. Usually, you isolate virtual machine traffic, including overlay traffic, on separate physical links.

Traffic Separation for NSX-T

Because NSX-T uses its own type of virtual switch, the N-VDS, during migration to NSX-T you also migrate all traffic types and VMkernel adapters to the N-VDS.

On some server network interface cards (NICs), Geneve encapsulation might not be fully supported. In such limited use cases, you can continue using a vSphere Distributed Switch for VMkernel traffic (vSAN, vSphere vMotion, and so on) and use a pair of additional physical NICs for the N-VDS instance, while keeping virtual machine traffic VLAN-backed.
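
If you dedicate a pair of physical NICs to the N-VDS, the uplink profile applied to the transport node determines how those uplinks are teamed. The following minimal sketch creates such an uplink profile through the NSX-T Manager API. It assumes the /api/v1/host-switch-profiles endpoint and the UplinkHostSwitchProfile schema; the manager address, credentials, teaming policy, and uplink names are placeholders, and you should verify the payload against the API reference for your NSX-T version. The mapping of uplink-1 and uplink-2 to the additional physical NICs is done later in the transport node configuration.

  # Minimal sketch: create an NSX-T uplink profile for an N-VDS that uses a
  # dedicated pair of physical NICs. Verify the endpoint and payload against
  # the API reference for your NSX-T version; all names, credentials, and
  # uplink labels are placeholders.
  import requests

  NSX_MANAGER = "https://nsx-manager.example.com"
  AUTH = ("admin", "changeme")

  uplink_profile = {
      "resource_type": "UplinkHostSwitchProfile",
      "display_name": "uplink-profile-dedicated-nvds",
      "teaming": {
          "policy": "LOADBALANCE_SRCID",
          "active_list": [
              {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
              {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
          ],
      },
  }

  response = requests.post(
      f"{NSX_MANAGER}/api/v1/host-switch-profiles",
      json=uplink_profile,
      auth=AUTH,
      verify=False,  # lab only; validate the manager certificate in production
  )
  response.raise_for_status()
  print("Created uplink profile:", response.json()["id"])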