A distributed port group specifies port configuration options for each member port on a vSphere Distributed Switch. Distributed port groups define how a connection is made to a network.

vSphere Distributed Switch introduces two abstractions that you use to create consistent networking configuration for physical NICs, virtual machines, and VMkernel traffic.

Uplink port group

An uplink port group or dvuplink port group is defined during the creation of the distributed switch and can have one or more uplinks. An uplink is a template that you use to configure physical connections of hosts as well as failover and load balancing policies. You map physical NICs of hosts to uplinks on the distributed switch. You set failover and load balancing policies over uplinks and the policies are automatically propagated to the host proxy switches, or the data plane.
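
The propagation from the management plane to the data plane can be illustrated with pyVmomi, the Python SDK for the vSphere API. The following is a minimal sketch, not part of the design itself: it adds a host to an existing distributed switch and backs two dvuplinks with the host's physical NICs. The vCenter address, credentials, host name, and vmnic names are placeholders.

```python
# Minimal pyVmomi sketch: add a host to an existing vSphere Distributed Switch
# and back its dvuplinks with two physical NICs. Placeholders: vCenter address,
# credentials, host name, vmnic names.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_obj(content, vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    obj = next((o for o in view.view if o.name == name), None)
    view.DestroyView()
    return obj

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

dvs = find_obj(content, vim.DistributedVirtualSwitch, "sfo-m01-cl01-vds01")
host = find_obj(content, vim.HostSystem, "esx01.example.com")

# One host member spec adds the host and maps vmnic0/vmnic1 to the dvuplinks.
backing = vim.dvs.HostMember.PnicBacking(pnicSpec=[
    vim.dvs.HostMember.PnicSpec(pnicDevice="vmnic0"),
    vim.dvs.HostMember.PnicSpec(pnicDevice="vmnic1"),
])
member = vim.dvs.HostMember.ConfigSpec(operation="add", host=host, backing=backing)

spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
    configVersion=dvs.config.configVersion, host=[member])
task = dvs.ReconfigureDvs_Task(spec)  # switch-level policies propagate to the host proxy switch

Disconnect(si)
```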

Distributed port group

Distributed port groups provide network connectivity to virtual machines and accommodate VMkernel traffic. You identify each distributed port group by using a network label, which must be unique to the current data center. You configure NIC teaming, failover, load balancing, VLAN, security, traffic shaping, and other policies on distributed port groups. As with uplink port groups, the configuration that you set on distributed port groups in vCenter Server (the management plane) is automatically propagated to all hosts on the distributed switch through their host proxy switches (the data plane).
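
As an illustration of how such a port group is defined through the API, the following pyVmomi sketch creates a distributed port group with static (early) binding and a VLAN ID on the switch object from the previous sketch. The VLAN ID and port count are placeholder values; the port group name follows the naming convention used later in this document.

```python
# Sketch: create a distributed port group with static binding and a VLAN ID.
# Assumes the `dvs` object from the previous example; VLAN ID 1611 and the
# port count are placeholders.
from pyVmomi import vim

port_policy = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
    vlan=vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(vlanId=1611, inherited=False)
)

pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    name="sfo01-m01-cl01-vds01-pg-vmotion",
    type="earlyBinding",          # static port binding; use "ephemeral" for ephemeral binding
    numPorts=8,
    defaultPortConfig=port_policy,
)

# vCenter Server pushes the resulting configuration to all host proxy switches.
task = dvs.AddDVPortgroup_Task([pg_spec])
```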

Table 1. Distributed Port Group Configuration

| Parameter          | Setting                          |
|--------------------|----------------------------------|
| Failover detection | Link status only                 |
| Notify switches    | Yes                              |
| Failback           | Yes                              |
| Failover order     | Active uplinks: Uplink1, Uplink2 |
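
The settings in Table 1 map directly onto the uplink teaming policy of the vSphere API. The following pyVmomi sketch is one way to express them; the uplink names must match the dvuplink names defined on the switch, and the load balancing value anticipates the Route based on physical NIC load policy from Table 2.

```python
# Sketch: the Table 1 failover settings as a pyVmomi uplink teaming policy.
# Uplink names must match the dvuplink names on the switch.
from pyVmomi import vim

teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
    policy=vim.StringPolicy(value="loadbalance_loadbased"),  # Route based on physical NIC load (Table 2)
    notifySwitches=vim.BoolPolicy(value=True),               # Notify switches: Yes
    rollingOrder=vim.BoolPolicy(value=False),                # Failback: Yes (no rolling restore order)
    uplinkPortOrder=vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
        activeUplinkPort=["Uplink1", "Uplink2"],             # Failover order: both uplinks active
        standbyUplinkPort=[],
    ),
)
# Failover detection "Link status only" is the API default (beacon probing
# disabled), so no explicit failure criteria are needed here.

# Attach the policy to the port group spec from the previous sketch before
# creating or reconfiguring the port group.
pg_spec.defaultPortConfig.uplinkTeamingPolicy = teaming
```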

Figure 1. vSphere Distributed Switch Design for Management Domain
The two NICs of a management ESXi host are connected to the distributed switch for the management domain. The switch propagates VLAN-backed port groups for management, vSphere vMotion, vSAN, NFS, NSX-T host TEP, and uplink traffic.
Table 2. Port Group Binding and Teaming

| vSphere Distributed Switch | Port Group Name                 | Port Binding           | Teaming Policy                   | Active Uplinks |
|----------------------------|---------------------------------|------------------------|----------------------------------|----------------|
| sfo-m01-cl01-vds01         | sfo01-m01-cl01-vds01-pg-mgmt    | Ephemeral Port Binding | Route based on physical NIC load | 1, 2           |
| sfo-m01-cl01-vds01         | sfo01-m01-cl01-vds01-pg-vmotion | Static Port Binding    | Route based on physical NIC load | 1, 2           |
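
To confirm that deployed port groups match Table 2, the binding type and teaming settings can be read back from the switch. A short pyVmomi sketch, assuming the `dvs` object from the first example:

```python
# Sketch: print the port binding and teaming configuration of every port group
# on the distributed switch, for comparison with Table 2.
for pg in dvs.portgroup:
    cfg = pg.config
    teaming = cfg.defaultPortConfig.uplinkTeamingPolicy
    print(f"{cfg.name}: binding={cfg.type}, "                      # 'ephemeral' or 'earlyBinding' (static)
          f"teaming={teaming.policy.value}, "                      # e.g. 'loadbalance_loadbased'
          f"active={list(teaming.uplinkPortOrder.activeUplinkPort)}")
```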

NIC Teaming

For a predictable level of performance, use multiple network adapters in one of the following configurations.

  • An active-passive configuration that uses explicit failover when connected to two separate switches.

  • An active-active configuration in which two or more physical NICs in the server are assigned the active role.

This design uses an active-active configuration.
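
For illustration, the two configurations correspond to the following uplink port order settings in pyVmomi. The uplink names are placeholders, and the explicit failover value applies only to the active-passive case.

```python
# Sketch: uplink port order for the two teaming configurations described above.
from pyVmomi import vim

Order = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy

# Active-active: both physical NICs carry traffic (the configuration this design uses).
active_active = Order(activeUplinkPort=["Uplink1", "Uplink2"], standbyUplinkPort=[])

# Active-passive: one NIC carries traffic and the other takes over on failure.
# Pair this order with the explicit failover teaming algorithm.
active_passive = Order(activeUplinkPort=["Uplink1"], standbyUplinkPort=["Uplink2"])
explicit_failover = vim.StringPolicy(value="failover_explicit")
```
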
Table 3. Design Decisions on Distributed Port Groups

| Decision ID          | Design Decision                                                                            | Design Justification                                                                                                                                                   | Design Implication                                                                                     |
|----------------------|--------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------|
| SDDC-MGMT-VI-NET-006 | Use ephemeral port binding for the management port group.                                   | Ephemeral port binding provides the option to recover the vCenter Server instance that manages the distributed switch.                                                  | Port-level permissions and controls are lost across power cycles, and no historical context is saved.     |
| SDDC-MGMT-VI-NET-007 | Use static port binding for all non-management port groups.                                 | Static binding ensures that a virtual machine connects to the same port on the vSphere Distributed Switch, which allows for historical data and port-level monitoring.  | None.                                                                                                      |
| SDDC-MGMT-VI-NET-008 | Use the Route based on physical NIC load teaming algorithm for the management port group.   | Reduces the complexity of the network design and increases resiliency and performance.                                                                                  | None.                                                                                                      |
| SDDC-MGMT-VI-NET-009 | Use the Route based on physical NIC load teaming algorithm for the vMotion port group.      | Reduces the complexity of the network design and increases resiliency and performance.                                                                                  | None.                                                                                                      |

VMkernel Network Adapter Configuration

The VMkernel networking layer provides connectivity to hosts and handles the system traffic for vSphere vMotion, IP storage, vSphere HA, vSAN, and others.

You can also create VMkernel network adapters on the source and target vSphere Replication hosts to isolate the replication data traffic.
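
As an illustration of this layer, the following pyVmomi sketch creates a VMkernel adapter on a distributed port group and tags it for a traffic type. The `host` and `dvs` objects come from the first example; the IP address, subnet mask, and port group name are placeholders, and the MTU of 9000 matches the vMotion row of Table 4.

```python
# Sketch: create a VMkernel adapter on a distributed port group and enable a
# service on it. IP settings and names are placeholders.
from pyVmomi import vim

pg = next(p for p in dvs.portgroup if p.name == "sfo01-m01-cl01-vds01-pg-vmotion")

nic_spec = vim.host.VirtualNic.Specification(
    ip=vim.host.IpConfig(dhcp=False, ipAddress="172.16.11.101", subnetMask="255.255.255.0"),
    mtu=9000,  # jumbo frames for vMotion traffic (Table 4)
    distributedVirtualPort=vim.dvs.PortConnection(
        portgroupKey=pg.key,
        switchUuid=dvs.uuid,
    ),
)

# The portgroup argument stays empty when the adapter connects to a distributed port group.
device = host.configManager.networkSystem.AddVirtualNic(portgroup="", nic=nic_spec)

# Tag the new adapter for vMotion traffic. Other service types include
# "management", "vsan", and "vSphereReplication".
host.configManager.virtualNicManager.SelectVnicForNicType("vmotion", device)
```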

Table 4. VMkernel Adapters for the Management Domain

| vSphere Distributed Switch | Availability Zone   | Network Function | Connected Port Group                | Enabled Services   | MTU Size (Bytes) |
|----------------------------|---------------------|------------------|-------------------------------------|--------------------|------------------|
| sfo-m01-cl01-vds01         | Availability Zone 1 | Management       | sfo01-m01-cl01-vds01-pg-mgmt        | Management Traffic | 1500 (Default)   |
| sfo-m01-cl01-vds01         | Availability Zone 1 | vMotion          | sfo01-m01-cl01-vds01-pg-vmotion     | vMotion Traffic    | 9000             |
| sfo-m01-cl01-vds01         | Availability Zone 1 | vSAN             | sfo01-m01-cl01-vds01-pg-vsan        | vSAN               | 9000             |
| sfo-m01-cl01-vds01         | Availability Zone 2 | Management       | az2_sfo01-m01-cl01-vds01-pg-mgmt    | Management Traffic | 1500 (Default)   |
| sfo-m01-cl01-vds01         | Availability Zone 2 | vMotion          | az2_sfo01-m01-cl01-vds01-pg-vmotion | vMotion Traffic    | 9000             |
| sfo-m01-cl01-vds01         | Availability Zone 2 | vSAN             | az2_sfo01-m01-cl01-vds01-pg-vsan    | vSAN               | 9000             |
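
To verify that the configured adapters match Table 4, the MTU and enabled services of each VMkernel adapter can be read from the host. A short pyVmomi sketch, assuming the `host` object from the first example:

```python
# Sketch: report each VMkernel adapter on a host with its MTU and the services
# enabled on it, for comparison with Table 4.
services = {}
for net_cfg in host.configManager.virtualNicManager.info.netConfig:
    for vnic in net_cfg.candidateVnic:
        if vnic.key in (net_cfg.selectedVnic or []):
            # nicType is, for example, "management", "vmotion", or "vsan"
            services.setdefault(vnic.device, []).append(net_cfg.nicType)

for vnic in host.config.network.vnic:
    print(f"{vnic.device}: mtu={vnic.spec.mtu}, services={services.get(vnic.device, [])}")
```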