Networking maximums represent the achievable configuration limits in networking environments where no other, more restrictive limits apply. When deploying large-scale systems, also account for vCenter Server limits, limits imposed by features such as HA and DRS, and other configurations that might impose restrictions.

Note:

For all NIC devices that are not listed in the table below, the maximum number of ports supported is 2.

Table 1. Networking Maximums

Physical NICs

| Item | Maximum |
| --- | --- |
| e1000e 1 Gb Ethernet ports (Intel PCI-e) | 24 |
| igb 1 Gb Ethernet ports (Intel) | 16 |
| tg3 1 Gb Ethernet ports (Broadcom) | 16 with NetQueue enabled; 32 with NetQueue disabled. NetQueue is enabled by default in vSphere 6.0. |
| bnx2 1 Gb Ethernet ports (QLogic) | 16 |
| nx_nic 10 Gb Ethernet ports (NetXen) | 8 |
| elxnet 10 Gb Ethernet ports (Emulex) | 8 |
| ixgbe 10 Gb Ethernet ports (Intel) | 16 |
| bnx2x 10 Gb Ethernet ports (QLogic) | 8 |
| InfiniBand ports (refer to VMware Community Support) | N/A. Mellanox Technologies InfiniBand HCA device drivers are available directly from Mellanox Technologies; for the support status of InfiniBand HCAs with ESXi, see http://www.mellanox.com. |
| Combination of 10 Gb and 1 Gb Ethernet ports | Sixteen 10 Gb and four 1 Gb ports |
| nmlx4_en 40 Gb Ethernet ports (Mellanox) | 4 |

VMDirectPath limits

| Item | Maximum |
| --- | --- |
| SR-IOV: number of 10 Gb pNICs | 8 |
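
The physical NIC and SR-IOV maximums above apply per host. As a rough cross-check of an existing environment, the sketch below counts each host's physical NIC ports by driver and its SR-IOV-enabled devices. It is a minimal example that assumes the pyVmomi library; the vCenter Server address and credentials are placeholders, and the `vim.host.SriovInfo` type is assumed to be available in the API version in use.

```python
# Count physical NIC ports per driver and SR-IOV-enabled devices per host.
# Assumes pyVmomi; hostname, user, and password are placeholders.
import ssl
from collections import Counter

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        # Each PhysicalNic reports the driver that claimed it (e.g. ixgbe, tg3).
        per_driver = Counter(pnic.driver for pnic in host.config.network.pnic)
        # SR-IOV state is reported per PCI device in the passthrough info.
        sriov = [info for info in (host.config.pciPassthruInfo or [])
                 if isinstance(info, vim.host.SriovInfo) and info.sriovEnabled]
        print(host.name)
        for driver, ports in sorted(per_driver.items()):
            print(f"  {driver}: {ports} port(s)")
        print(f"  SR-IOV-enabled devices: {len(sriov)} (limit: 8)")
    view.Destroy()
finally:
    Disconnect(si)
```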

vSphere Standard and Distributed Switch

| Item | Maximum |
| --- | --- |
| Total virtual network switch ports per host (VDS and VSS ports) | 4096 |
| Maximum active ports per host (VDS and VSS) | 1016 |
| Virtual network switch creation ports per standard switch | 4088 |
| Port groups per standard switch | 512 |
| Static/dynamic port groups per distributed switch | 10,000 |
| Ephemeral port groups per distributed switch | 1016 |
| Ports per distributed switch | 60,000 |
| Distributed virtual network switch ports per vCenter | 60,000 |
| Static/dynamic port groups per vCenter | 10,000 |
| Ephemeral port groups per vCenter | 1016 |
| Distributed switches per vCenter | 128 |
| Distributed switches per host | 16 |
| VSS port groups per host | 1000 |
| LACP: LAGs per host | 64 |
| LACP: uplink ports per LAG (team) | 32 |
| Hosts per distributed switch | 1000 |
| NIOC resource pools per vDS | 64 |
| Link aggregation groups per vDS | 64 |
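
The distributed switch maximums apply per switch and per vCenter instance. A minimal sketch, again assuming pyVmomi and placeholder connection details, that inventories the distributed switches in a vCenter and reports their port, host, and port group counts for comparison against the table:

```python
# Inventory distributed switches against the per-vCenter and per-switch maximums.
# Assumes pyVmomi; hostname, user, and password are placeholders.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    switches = list(view.view)
    view.Destroy()

    print(f"Distributed switches: {len(switches)} (limit 128 per vCenter)")
    for dvs in switches:
        ports = dvs.config.numPorts      # current ports on this switch (limit 60,000)
        hosts = len(dvs.config.host)     # hosts attached to this switch (limit 1000)
        portgroups = len(dvs.portgroup)  # port groups on this switch
        print(f"{dvs.name}: {ports} ports, {hosts} hosts, {portgroups} port groups")
finally:
    Disconnect(si)
```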