Starting with VMware Cloud Foundation 3.9.1, you can use hosts with multiple physical NICs in your SDDC. If your environment requires physical traffic separation, use one or two vSphere Distributed Switch instances and an N-VDS instance, assigning each virtual switch a pair of physical NICs.

For information on the supported NIC configurations, see Isolating Traffic across Physical NICs in the VMware Cloud Foundation documentation. For the use cases for hosts with multiple physical NICs and for API examples for workload domain deployment, see the Using Hosts with Multiple Physical NICs with VMware Cloud Foundation technical note.
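For illustration only, the following Python sketch shows how a workload domain deployment request might be submitted to the SDDC Manager public API with the requests library. The SDDC Manager address, credentials, endpoint paths, and the shape of the specification file are assumptions for this sketch; build the actual payload, including the multi-switch NIC mapping, from the Using Hosts with Multiple Physical NICs with VMware Cloud Foundation technical note and the VMware Cloud Foundation API documentation for your release.

```python
# Illustrative sketch only: the endpoint paths, credentials, and the layout of the
# specification file are assumptions; take the real values from the VCF API documentation.
import json
import requests

SDDC_MANAGER = "https://sddc-manager.example.local"            # hypothetical FQDN
AUTH = ("administrator@vsphere.local", "example-password")      # hypothetical credentials

# Load a prepared workload domain specification (JSON) that includes the
# multi-NIC vSphere Distributed Switch and N-VDS mapping for each host.
with open("nsxt-wld-multinic-spec.json") as f:
    domain_spec = json.load(f)

session = requests.Session()
session.verify = False          # lab-only convenience; use valid certificates in production
session.auth = AUTH

# Validate the specification before submitting the deployment (assumed endpoint).
validation = session.post(f"{SDDC_MANAGER}/v1/domains/validations", json=domain_spec)
validation.raise_for_status()
print("Validation result:", validation.json().get("resultStatus"))

# Submit the workload domain deployment (assumed endpoint).
response = session.post(f"{SDDC_MANAGER}/v1/domains", json=domain_spec)
response.raise_for_status()
print("Deployment task:", response.json().get("id"))
```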

Isolating Management Traffic from Tenant Workload Traffic

For example, in an NSX-T workload domain with multiple availability zones, you can use ESXi hosts with four physical NICs to place management traffic on a vSphere Distributed Switch and edge uplink and overlay traffic on an N-VDS. You follow Specification of an NSX-T Workload Domain with Multiple Availability Zones and Example IP and DNS Configuration of an NSX-T Workload Domain with Multiple Availability Zones, modifying these specifications according to the requirements of your environment.

Table 1. Example Virtual Switch Specification for Multi-NIC Hosts

Minimum number of hosts: 4
Number of physical NICs per host: 4

vSphere Distributed Switch configuration

vSphere Distributed Switch instances: sfo01-w01-vds01
vmnic configuration for vSphere Distributed Switch sfo01-w01-vds01: vmnic0, vmnic1
Distributed port groups for vSphere Distributed Switch sfo01-w01-vds01

Availability Zone 1
  • sfo01-w01-vds01-management
    • NIC teaming: Route based on physical NIC load with active uplinks Uplink1 and Uplink2
    • VLAN ID: 1641
  • sfo01-w01-vds01-vmotion
    • NIC teaming: Route based on physical NIC load with active uplinks Uplink1 and Uplink2
    • VLAN ID: 1642
  • sfo01-w01-vds01-vsan
    • NIC teaming: Route based on physical NIC load with active uplinks Uplink1 and Uplink2
    • VLAN ID: 1643
Availability Zone 2
  • sfo02-w01-vds01-management
    • NIC teaming: Route based on physical NIC load with active uplinks Uplink1 and Uplink2
    • VLAN ID: 1661
  • sfo02-w01-vds01-vmotion
    • NIC teaming: Route based on physical NIC load with active uplinks Uplink1 and Uplink2
    • VLAN ID: 1662
  • sfo02-w01-vds01-vsan
    • NIC teaming: Route based on physical NIC load with active uplinks Uplink1 and Uplink2
    • VLAN ID: 1663

N-VDS configuration on Host Transport Nodes

N-VDS instance: sfo-w01-nvds01
vmnic configuration for N-VDS sfo-w01-nvds01: vmnic2, vmnic3

Transport Zones
  • vlan-tz-UUID
  • overlay-tz-UUID
  • sfo01-w-uplink01
  • sfo01-w-uplink02
Segments for N-VDS sfo-w01-nvds01

Availability Zone 1 and Availability Zone 2
  • sfo-w-overlay
    • NIC teaming: Load Balance Source, that is, load balancing based on the source port ID, between uplink-1 and uplink-2
    • VLAN ID: 0-4094
    • Transport zone: vlan-tz-UUID
Availability Zone 1
  • sfo01-w-nvds01-uplink01
    • NIC teaming: Failover Order with active uplink uplink-1
    • VLAN ID: 0-4094
    • Transport zone: vlan-tz-UUID
  • sfo01-w-nvds01-uplink02
    • NIC teaming: Failover Order with active uplink uplink-2
    • VLAN ID: 0-4094
    • Transport zone: vlan-tz-UUID
Availability Zone 2
  • sfo02-w-nvds01-uplink01
    • NIC teaming: Failover Order with active uplink uplink-1
    • VLAN ID: 0-4094
    • Transport zone: vlan-tz-UUID
  • sfo02-w-nvds01-uplink02
    • NIC teaming: Failover Order with active uplink uplink-2
    • VLAN ID: 0-4094
    • Transport zone: vlan-tz-UUID
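As a minimal illustration of the distributed port group settings in Table 1, the following Python sketch uses pyVmomi to create one of the Availability Zone 2 port groups with its VLAN ID and the Route Based on Physical NIC Load teaming policy. The vCenter Server address and credentials are placeholders, error handling is omitted, and you repeat the same pattern for the vMotion and vSAN port groups with their VLAN IDs.

```python
# Sketch only: the vCenter Server name and credentials are placeholders; the port group
# values follow the sfo02-w01-vds01-management example in Table 1.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab-only; use valid certificates in production
si = SmartConnect(host="vcenter.example.local",   # hypothetical vCenter Server
                  user="administrator@vsphere.local",
                  pwd="example-password", sslContext=ctx)
content = si.RetrieveContent()

# Locate the vSphere Distributed Switch by name.
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.DistributedVirtualSwitch], True)
vds = next(s for s in view.view if s.name == "sfo01-w01-vds01")
view.Destroy()

# Port group settings: VLAN 1661, Route Based on Physical NIC Load, Uplink1 and Uplink2 active.
port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
port_cfg.vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(inherited=False, vlanId=1661)
port_cfg.uplinkTeamingPolicy = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
    inherited=False,
    policy=vim.StringPolicy(inherited=False, value="loadbalance_loadbased"),
    uplinkPortOrder=vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
        inherited=False, activeUplinkPort=["Uplink1", "Uplink2"]))

pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    name="sfo02-w01-vds01-management", type="earlyBinding", defaultPortConfig=port_cfg)

task = vds.AddDVPortgroup_Task([pg_spec])   # repeat for the vMotion and vSAN port groups
Disconnect(si)
```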
Table 2. Example NSX-T Edge Node Configuration for Multi-NIC Hosts

sfo01wesg01
  • Network 3: sfo01-w-nvds01-uplink02
  • Network 2: sfo01-w-nvds01-uplink01
  • Network 1: sfo-w-overlay
  • Network 0: sfo01-w01-vds01-management
  • Management IP address: 172.16.41.21
  • Default gateway: 172.16.41.253
  • Transport Zones: overlay-tz-UUID, sfo01-w-uplink01, sfo01-w-uplink02

sfo01wesg02
  • Network 3: sfo01-w-nvds01-uplink02
  • Network 2: sfo01-w-nvds01-uplink01
  • Network 1: sfo-w-overlay
  • Network 0: sfo01-w01-vds01-management
  • Management IP address: 172.16.41.22
  • Default gateway: 172.16.41.253
  • Transport Zones: overlay-tz-UUID, sfo01-w-uplink01, sfo01-w-uplink02

sfo02wesg01
  • Network 3: sfo02-w-nvds01-uplink02
  • Network 2: sfo02-w-nvds01-uplink01
  • Network 1: sfo-w-overlay
  • Network 0: sfo02-w01-vds01-management
  • Management IP address: 172.16.61.21
  • Default gateway: 172.16.61.253
  • Transport Zones: overlay-tz-UUID, sfo01-w-uplink01, sfo01-w-uplink02

sfo02wesg02
  • Network 3: sfo02-w-nvds01-uplink02
  • Network 2: sfo02-w-nvds01-uplink01
  • Network 1: sfo-w-overlay
  • Network 0: sfo02-w01-vds01-management
  • Management IP address: 172.16.61.22
  • Default gateway: 172.16.61.253
  • Transport Zones: overlay-tz-UUID, sfo01-w-uplink01, sfo01-w-uplink02
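If you script the NSX-T Edge appliance deployment instead of using the vSphere Client, the network mapping in Table 2 translates directly into OVF network assignments. The following Python sketch builds an ovftool command line for sfo02wesg01 as an illustration only; the OVA path, vCenter Server target, datastore, and credentials are placeholders, and the OVF properties that carry the management IP address, default gateway, and passwords are intentionally omitted because their names depend on the NSX-T Edge OVA version.

```python
# Sketch only: paths, credentials, and inventory names are placeholders.
# The --net: assignments mirror the sfo02wesg01 column of Table 2.
import subprocess

OVA = "/isos/nsx-unified-appliance-edge.ova"                      # hypothetical OVA path
TARGET = ("vi://administrator%40vsphere.local:example-password@"  # hypothetical vCenter target
          "vcenter.example.local/sfo01-w01-dc/host/sfo01-w01-cluster01")

cmd = [
    "ovftool",
    "--acceptAllEulas",
    "--name=sfo02wesg01",
    "--datastore=sfo01-w01-vsan01",        # hypothetical datastore name
    "--deploymentOption=medium",
    "--powerOn",
    # OVF network assignments from Table 2.
    "--net:Network 0=sfo02-w01-vds01-management",
    "--net:Network 1=sfo-w-overlay",
    "--net:Network 2=sfo02-w-nvds01-uplink01",
    "--net:Network 3=sfo02-w-nvds01-uplink02",
    # Management IP 172.16.61.21, gateway 172.16.61.253, passwords, and DNS settings
    # are passed as --prop: OVF properties; take the exact property names from the
    # NSX-T Edge OVA for your NSX-T version.
    OVA,
    TARGET,
]
subprocess.run(cmd, check=True)
```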

To deploy the example configuration, you follow the scenario for hosts with two physical NICs, modifying the configuration as needed.

Table 3. Workflow for Deploying Multiple Availability Zones on Multi-NIC Hosts

Deployment Stage: Prepare the virtual infrastructure for an NSX-T workload domain with multiple availability zones.
Flow Modification for Multi-NIC Hosts:
  1. Deploy the management domain in a configuration with multiple availability zones.
  2. By using the public API of SDDC Manager, deploy an NSX-T workload domain with the example multi-switch configuration that supports the subsequent addition of a second availability zone. See the Using Hosts with Multiple Physical NICs with VMware Cloud Foundation technical note and the VMware Cloud Foundation API documentation.
  3. In SDDC Manager, create a network pool for the vSphere vMotion and vSAN subnets in Availability Zone 2.
  4. Install the ESXi hypervisor on the hosts for the second availability zone.
  5. Commission the hosts for the second availability zone by using the network pool you created for the zone.
  6. Provide a third location for the vSAN witness appliance.

Deployment Stage: Verify that your system satisfies the system requirements for deploying an NSX-T workload domain with multiple availability zones.
Flow Modification for Multi-NIC Hosts: None

Deployment Stage: Configure the virtual infrastructure for the second availability zone.
Flow Modification for Multi-NIC Hosts:
  1. Add the ESXi hosts to the workload domain cluster.
  2. Create resource pools for the NSX-T Edge appliances and tenant workloads, and the virtual machine folders for the NSX-T Edge appliances.
  3. On the vSphere Distributed Switch for the NSX-T workload domain, create the distributed port groups according to Example Virtual Switch Specification for Multi-NIC Hosts, and add the hosts for Availability Zone 2 to the switch by assigning the vmnic1 network adapters to the switch.
  4. Create the VMkernel network adapters for vSAN and vSphere vMotion traffic in Availability Zone 2.
  5. Migrate the vmnic0 network adapter of the hosts for Availability Zone 2 to the vSphere Distributed Switch for the domain.
  6. Enable SSH on the ESXi hosts in Availability Zone 2.
  7. Create vSAN disk groups for the ESXi hosts in Availability Zone 2.
  8. Add static routes for vSAN traffic between the ESXi hosts in both availability zones, as shown in the sketch after this table.

Deployment Stage: Deploy and configure the vSAN witness host for an NSX-T workload domain.
Flow Modification for Multi-NIC Hosts: None

Deployment Stage: Configure vSAN stretched cluster for an NSX-T workload domain.
Flow Modification for Multi-NIC Hosts: None

Deployment Stage: Configure the NSX-T instance for an NSX-T workload domain.
Flow Modification for Multi-NIC Hosts:
  1. Create the transport zones for uplink traffic for the NSX-T Edge nodes.
  2. Create the uplink profiles for Availability Zone 2.
  3. Create the segments for uplink traffic in the vlan-tz-UUID transport zone:
    • sfo01-w-uplink01
    • sfo01-w-uplink02
    • sfo01-w-nvds01-uplink01
    • sfo01-w-nvds01-uplink02
    • sfo02-w-uplink01
    • sfo02-w-uplink02
    • sfo02-w-nvds01-uplink01
    • sfo02-w-nvds01-uplink02
    • sfo-w-overlay
  4. Add the hosts for Availability Zone 2 to the vlan-tz-UUID transport zone by assigning vmnic2 and vmnic3 to the newly created N-VDS. Do not migrate the VMkernel network adapters to the N-VDS.
  5. Leave the hosts in Availability Zone 2 connected to the vSphere Distributed Switch.
  6. In each availability zone, deploy the NSX-T Edge appliance pairs, connecting the appliances to the N-VDS for the domain according to Example NSX-T Edge Node Configuration for Multi-NIC Hosts.
  7. Configure the remaining components for dynamic routing in the same way as for hosts with two physical NICs.
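As an illustration of the static-route step in the table above, the following Python sketch uses the paramiko SSH library to add a static route for vSAN traffic on each ESXi host, which is possible because SSH is enabled on the hosts earlier in the workflow. The host names, credentials, vSAN subnets, and gateway addresses are placeholders for this sketch; use the values from your network pools and from Example IP and DNS Configuration of an NSX-T Workload Domain with Multiple Availability Zones.

```python
# Sketch only: host names, credentials, subnets, and gateways are placeholders.
# Each host gets a static route to the vSAN subnet of the opposite availability zone.
import paramiko

# Hypothetical vSAN subnets and gateways for the two availability zones.
AZ1 = {"hosts": ["sfo01-w01-esx01.example.local", "sfo01-w01-esx02.example.local"],
       "remote_subnet": "172.16.63.0/24", "local_gateway": "172.16.43.253"}
AZ2 = {"hosts": ["sfo02-w01-esx01.example.local", "sfo02-w01-esx02.example.local"],
       "remote_subnet": "172.16.43.0/24", "local_gateway": "172.16.63.253"}

def add_vsan_route(host, remote_subnet, gateway, user="root", password="example-password"):
    """Add a static route on one ESXi host for vSAN traffic to the other availability zone."""
    cmd = f"esxcli network ip route ipv4 add --network {remote_subnet} --gateway {gateway}"
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # lab-only convenience
    ssh.connect(host, username=user, password=password)
    _, stdout, stderr = ssh.exec_command(cmd)
    exit_status = stdout.channel.recv_exit_status()
    ssh.close()
    if exit_status != 0:
        raise RuntimeError(f"{host}: {stderr.read().decode().strip()}")

for zone in (AZ1, AZ2):
    for esxi_host in zone["hosts"]:
        add_vsan_route(esxi_host, zone["remote_subnet"], zone["local_gateway"])
```

After adding the routes, you can confirm connectivity between the vSAN VMkernel adapters in the two availability zones, for example with vmkping from each host.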