A distributed port group specifies port configuration options for each member port on a vSphere distributed switch. Distributed port groups define how a connection is made to a network.

A vSphere Distributed Switch introduces two abstractions that you use to create a consistent networking configuration for physical NICs, virtual machines, and VMkernel traffic.

Uplink port group

An uplink port group or dvuplink port group is defined during the creation of the distributed switch and can have one or more uplinks. An uplink is a template that you use to configure physical connections of hosts as well as failover and load balancing policies. You map physical NICs of hosts to uplinks on the distributed switch. You set failover and load balancing policies over uplinks and the policies are automatically propagated to the host proxy switches, or the data plane.
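As a rough illustration of this mapping, the following pyVmomi (Python) sketch adds a host to a distributed switch and backs one of its uplinks with a physical NIC. It assumes `dvs` and `host` are managed objects already retrieved from a connected vCenter Server session (connection and lookup code not shown), and that `vmnic0` is the physical NIC to attach; these names are assumptions, not part of this design.

```python
from pyVmomi import vim

# Map a physical NIC of the host to an uplink on the distributed switch.
# `dvs` and `host` are assumed to be already-retrieved managed objects.
pnic = vim.dvs.HostMember.PnicSpec(pnicDevice="vmnic0")      # physical NIC to attach
backing = vim.dvs.HostMember.PnicBacking(pnicSpec=[pnic])
member = vim.dvs.HostMember.ConfigSpec(
    operation="add",                                         # add the host as a switch member
    host=host,
    backing=backing,
)
spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
    configVersion=dvs.config.configVersion,                  # required for reconfiguration
    host=[member],
)
task = dvs.ReconfigureDvs_Task(spec)                         # vCenter propagates the change to the host proxy switch
```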

Distributed port group

Distributed port groups provide network connectivity to virtual machines and accommodate VMkernel traffic. You identify each distributed port group by using a network label, which must be unique to the current data center. You configure NIC teaming, failover, load balancing, VLAN, security, traffic shaping, and other policies on distributed port groups. As with uplink port groups, the configuration that you set on distributed port groups on vCenter Server (the management plane) is automatically propagated to all hosts on the distributed switch through their host proxy switches (the data plane).

Table 1. Distributed Port Group Configuration

Parameter | Setting
Failover detection | Link status only
Notify switches | Yes
Failback | Yes
Failover order | Active uplinks: Uplink1, Uplink2
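As a minimal sketch, the settings in Table 1 map to an uplink teaming policy object in the vSphere API. The following pyVmomi code builds that policy; the load-based teaming algorithm is taken from Table 2, and the `inherited=False` flags only mark each setting as explicitly defined rather than inherited from the switch.

```python
from pyVmomi import vim

# Teaming and failover settings from Table 1, expressed as a VMware DVS port policy.
teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
    inherited=False,
    policy=vim.StringPolicy(inherited=False, value="loadbalance_loadbased"),  # Route based on physical NIC load (Table 2)
    notifySwitches=vim.BoolPolicy(inherited=False, value=True),               # Notify switches: Yes
    rollingOrder=vim.BoolPolicy(inherited=False, value=False),                # Failback: Yes (rollingOrder False keeps failback enabled)
    failureCriteria=vim.dvs.VmwareDistributedVirtualSwitch.FailureCriteria(
        inherited=False,
        checkBeacon=vim.BoolPolicy(inherited=False, value=False),             # Failover detection: link status only
    ),
    uplinkPortOrder=vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
        inherited=False,
        activeUplinkPort=["Uplink1", "Uplink2"],                              # Failover order: both uplinks active
        standbyUplinkPort=[],
    ),
)

# Wrap the teaming policy in a port setting that a distributed port group can use.
port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
    uplinkTeamingPolicy=teaming,
)
```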

Figure 1. vSphere Distributed Switch Design for a Workload Domain
The vSphere Distributed Switch design for the workload domain includes two physical network ports that provide network access for tenant workloads and a north-south on-ramp for the software-defined network.
Table 2. Port Group Binding and Teaming

vSphere Distributed Switch | Port Group Name | Port Binding | Teaming Policy | Active Uplinks | Failover Detection | Notify Switches | Failback
sfo01-w01-cl01-vds01 | sfo-w01-cl01-vds01-pg-mgmt | Static Port Binding | Route based on physical NIC load | 1, 2 | Link status only | Yes | Yes
sfo01-w01-cl01-vds01 | sfo-w01-cl01-vds01-pg-vmotion | Static Port Binding | Route based on physical NIC load | 1, 2 | Link status only | Yes | Yes
sfo01-w01-cl01-vds01 | sfo-w01-cl01-vds01-pg-vsan | Static Port Binding | Route based on physical NIC load | 1, 2 | Link status only | Yes | Yes
sfo01-w01-cl01-vds01 | Host Overlay | See Overlay Design for NSX-T Data Center for a vSphere with Tanzu Workload Domain.
sfo01-w01-cl01-vds01 | sfo-w01-cl01-vds01-pg-uplink01
sfo01-w01-cl01-vds01 | sfo-w01-cl01-vds01-pg-uplink02
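To make the binding and teaming settings in Table 2 concrete, the following pyVmomi sketch creates one of the listed port groups with static (early) binding and the `port_config` policy built after Table 1. The `dvs` reference and the initial port count are assumptions; only the port group name and settings come from the table.

```python
from pyVmomi import vim

# Create the management port group from Table 2 with static port binding
# and the load-based teaming policy (`port_config`) built earlier.
pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    name="sfo-w01-cl01-vds01-pg-mgmt",
    type="earlyBinding",                # "Static Port Binding" in the vSphere Client
    numPorts=8,                         # assumed initial port count
    defaultPortConfig=port_config,
)
task = dvs.AddDVPortgroup_Task([pg_spec])   # vCenter pushes the configuration to all member hosts
```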

NIC Teaming

For a predictable level of performance, use multiple network adapters in one of the following configurations.

  • An active-passive configuration that uses explicit failover when connected to two separate switches.

  • An active-active configuration in which two or more physical NICs in the server are assigned the active role.

This design uses an active-active configuration.
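The two configurations differ only in the teaming algorithm and the uplink failover order. A hedged pyVmomi sketch of both variants, reusing the assumed uplink names from the earlier examples:

```python
from pyVmomi import vim

# Active-active (used in this design): both uplinks carry traffic and the
# load-based algorithm balances workloads across them.
active_active = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
    inherited=False,
    policy=vim.StringPolicy(inherited=False, value="loadbalance_loadbased"),
    uplinkPortOrder=vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
        inherited=False, activeUplinkPort=["Uplink1", "Uplink2"], standbyUplinkPort=[]),
)

# Active-passive: explicit failover order with one active and one standby uplink.
active_passive = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
    inherited=False,
    policy=vim.StringPolicy(inherited=False, value="failover_explicit"),
    uplinkPortOrder=vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
        inherited=False, activeUplinkPort=["Uplink1"], standbyUplinkPort=["Uplink2"]),
)
```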

Table 3. Design Decisions on Distributed Port Groups

Decision ID | Design Decision | Design Justification | Design Implication
SDDC-KUBWLD-VI-NET-004 | Use static port binding for all port groups in the shared edge and workload cluster. | Static binding ensures a virtual machine connects to the same port on the vSphere Distributed Switch, which allows for historical data and port-level monitoring. Because the vCenter Server managing the workload domain resides in the management domain, there is no need for an ephemeral port group for vCenter Server recoverability. | None
SDDC-KUBWLD-VI-NET-005 | Use the Route based on physical NIC load teaming algorithm for the Management Port Group. | Reduces the complexity of the network design and increases resiliency and performance. | None
SDDC-KUBWLD-VI-NET-006 | Use the Route based on physical NIC load teaming algorithm for the vMotion Port Group. | Reduces the complexity of the network design and increases resiliency and performance. | None

VMkernel Network Adapter Configuration

The VMkernel networking layer provides connectivity to hosts and handles the system traffic for vSphere vMotion, IP storage, vSphere HA, vSAN, and others.

You can also create VMkernel network adapters on the source and target vSphere Replication hosts to isolate the replication data traffic.

Table 4. VMkernel Adapters for the Workload Domain

vSphere Distributed Switch | Availability Zones | Network Label | Connected Port Group | Enabled Services | MTU
sfo-w01-cl01-vds01 | Availability Zone 1 | Management | sfo01-w01-cl01-vds01-pg-mgmt | Management Traffic | 1500 (Default)
sfo-w01-cl01-vds01 | Availability Zone 1 | vMotion | sfo01-w01-cl01-vds01-pg-vmotion | vMotion Traffic | 9000
sfo-w01-cl01-vds01 | Availability Zone 1 | vSAN | sfo01-w01-cl01-vds01-pg-vsan | vSAN | 9000
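As a rough sketch of how the adapters in Table 4 can be created through the vSphere API, the following pyVmomi code adds a vMotion VMkernel adapter with MTU 9000 on the vMotion port group. The `host`, `dvs`, and `pg` objects and the IP addressing are assumptions; only the port group name, enabled service, and MTU come from the table.

```python
from pyVmomi import vim

# Create a vMotion VMkernel adapter backed by the distributed port group
# sfo01-w01-cl01-vds01-pg-vmotion (per Table 4). `host`, `dvs`, and `pg` are
# assumed to be already-retrieved managed objects; the IP values are placeholders.
vnic_spec = vim.host.VirtualNic.Specification(
    distributedVirtualPort=vim.dvs.PortConnection(
        switchUuid=dvs.uuid,
        portgroupKey=pg.key,
    ),
    ip=vim.host.IpConfig(dhcp=False, ipAddress="192.0.2.11", subnetMask="255.255.255.0"),
    mtu=9000,                                    # vMotion MTU from Table 4
)
vmk_device = host.configManager.networkSystem.AddVirtualNic("", vnic_spec)

# Tag the new adapter for vMotion traffic (the Enabled Services column in Table 4).
host.configManager.virtualNicManager.SelectVnicForNicType("vmotion", vmk_device)
```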