Physical network design decisions determine the physical layout of the network and the use of VLANs. They also cover jumbo frames and other network-related requirements such as DNS and NTP.

Physical Network Design Decisions

Routing Protocols

NSX-T supports only the BGP routing protocol.

DHCP Helper

Set the DHCP helper (relay) to point to a DHCP server by IPv4 address.
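Before pushing the helper address into the switch configuration, it can be worth a quick sanity check that the value really is a routable unicast IPv4 address. The following Python sketch is illustrative only (the function name and example addresses are not part of the design) and uses the standard `ipaddress` module:

```python
import ipaddress

def is_valid_helper_address(addr: str) -> bool:
    """Basic sanity check for a DHCP helper (relay) target:
    must parse as IPv4 and must not be multicast, loopback,
    or the unspecified address 0.0.0.0."""
    try:
        ip = ipaddress.IPv4Address(addr)
    except ValueError:
        return False
    return not (ip.is_multicast or ip.is_loopback or ip.is_unspecified)

# Example (placeholder addresses):
# is_valid_helper_address("10.0.0.2")   -> True
# is_valid_helper_address("224.0.0.1")  -> False (multicast)
```

This catches only obvious mistakes; reachability of the DHCP server from the relay still has to be verified on the network itself.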

Table 1. Physical Network Design Decisions

Decision ID: NSXT-PHY-NET-001

Design Decision: Implement the following physical network architecture:

  • One 25 GbE (10 GbE minimum) port on each ToR switch for ESXi host uplinks.

  • No EtherChannel (LAG/LACP/vPC) configuration for ESXi host uplinks.

  • Layer 3 device that supports BGP.

Design Justification:

  • Guarantees availability during a switch failure.

  • Uses BGP, the only dynamic routing protocol that NSX-T supports.

Design Implication:

  • Might limit the hardware choice.

  • Requires dynamic routing protocol configuration in the physical network.

Decision ID: NSXT-PHY-NET-002

Design Decision: Use a physical network that is configured for BGP routing adjacency.

Design Justification:

  • Supports flexibility in network design for routing multi-site and multi-tenancy workloads.

  • Uses BGP, the only dynamic routing protocol that NSX-T supports.

Design Implication: Requires BGP configuration in the physical network.

Decision ID: NSXT-PHY-NET-003

Design Decision: Use two ToR switches for each rack.

Design Justification: Supports the use of two 25 GbE (10 GbE minimum) links to each server, provides redundancy, and reduces the overall design complexity.

Design Implication: Requires two ToR switches per rack, which can increase costs.

Decision ID: NSXT-PHY-NET-004

Design Decision: Use VLANs to segment physical network functions.

Design Justification:

  • Supports physical network connectivity without requiring many NICs.

  • Isolates the different network functions of the SDDC so that you can have differentiated services and prioritized traffic as needed.

Design Implication: Requires uniform configuration and presentation on all the trunks made available to the ESXi hosts.

Additional Design Decisions

Additional design decisions deal with static IP addresses, DNS records, and the required NTP time source.

Table 2. IP Assignment, DNS, and NTP Design Decisions

Decision ID: NSXT-PHY-NET-005

Design Decision: Assign static IP addresses to all management components in the SDDC infrastructure except for NSX-T TEPs. NSX-T TEPs are assigned addresses by using a DHCP server. Set the lease duration for the TEP DHCP scope to at least 7 days.

Design Justification:

  • Ensures that interfaces such as management and storage always have the same IP address. In this way, you provide support for continuous management of ESXi hosts using vCenter Server and for provisioning IP storage by storage administrators.

  • NSX-T TEPs do not have an administrative endpoint. As a result, they can use DHCP for automatic IP address assignment. You are also unable to assign a static IP address directly to the VMkernel port of an NSX-T TEP. IP pools are an option, but the NSX-T administrator must create them. If you must change or expand the subnet, changing the DHCP scope is simpler than creating an IP pool and assigning it to the ESXi hosts.

Design Implication: Requires accurate IP address management.

Decision ID: NSXT-PHY-NET-006

Design Decision: Create DNS records for all management nodes to enable forward, reverse, short, and FQDN resolution.

Design Justification: Ensures consistent resolution of management nodes using both IP address (reverse lookup) and name resolution.

Design Implication: None.

Decision ID: NSXT-PHY-NET-007

Design Decision: Use an NTP time source for all management nodes.

Design Justification: Maintains accurate and synchronized time between management nodes.

Design Implication: None.
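The forward and reverse resolution that decision NSXT-PHY-NET-006 calls for can be spot-checked with a short script. The sketch below is illustrative, not part of the design: the node name `mgmt01.example.local` is a hypothetical placeholder, and the resolver callables default to Python's standard `socket` functions but are injectable so the logic can be exercised without live DNS.

```python
import socket

def check_dns(fqdn, expected_ip,
              forward=socket.gethostbyname,
              reverse=lambda ip: socket.gethostbyaddr(ip)[0]):
    """Verify that forward (name -> IP) and reverse (IP -> name)
    resolution agree for one management node."""
    ip = forward(fqdn)          # forward lookup
    name = reverse(ip)          # reverse lookup on the result
    return ip == expected_ip and name.lower() == fqdn.lower()

# Example with live DNS (placeholder values):
# check_dns("mgmt01.example.local", "192.168.11.10")
```

Running such a check against every management node before deployment catches missing PTR records early, which otherwise tend to surface later as certificate or lookup failures.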

Jumbo Frames Design Decisions

IP storage throughput can benefit from the configuration of jumbo frames. Increasing the per-frame payload from 1500 bytes to the jumbo frame setting improves the efficiency of data transfer. You must configure jumbo frames end-to-end. Select an MTU that matches the MTU of the physical switch ports.

Determine whether to configure jumbo frames on a virtual machine according to the purpose of the workload. If the workload consistently transfers large amounts of network data, configure jumbo frames if possible. In that case, confirm that both the virtual machine operating system and the virtual machine NICs support jumbo frames.

Using jumbo frames also improves the performance of vSphere vMotion.

Note:

The Geneve overlay requires an MTU value of 1600 bytes or greater.

Table 3. Jumbo Frames Design Decisions

Decision ID: NSXT-PHY-NET-008

Design Decision: Configure the MTU size to at least 9000 bytes (jumbo frames) on the physical switch ports, vSphere Distributed Switches, vSphere Distributed Switch port groups, and N-VDS switches that support the following traffic types:

  • Geneve (overlay)

  • vSAN

  • vMotion

  • NFS

  • vSphere Replication

Design Justification:

  • Improves traffic throughput.

  • To support Geneve, the MTU setting must be increased to a minimum of 1600 bytes.

Design Implication: When adjusting the MTU packet size, you must also configure the entire network path (VMkernel ports, virtual switches, physical switches, and routers) to support the same MTU packet size.
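A common way to confirm that the entire path honors the configured MTU is a don't-fragment ping sized to fill exactly one frame. The Python sketch below is illustrative: it computes the largest ICMP payload for a target MTU (the MTU minus the 20-byte IPv4 header and the 8-byte ICMP header) and builds the corresponding Linux ping command. The host names are placeholders.

```python
def ping_payload(mtu: int) -> int:
    """Largest ICMP payload that fits in one unfragmented frame:
    MTU minus the IPv4 header (20 bytes) and ICMP header (8 bytes)."""
    return mtu - 20 - 8

def ping_command(host: str, mtu: int) -> str:
    """Build a Linux ping command with the don't-fragment bit set
    (-M do), so an undersized hop in the path fails loudly instead
    of silently fragmenting the packet."""
    return f"ping -c 3 -M do -s {ping_payload(mtu)} {host}"

# Example (placeholder host): validate a 9000-byte path.
# ping_command("vsan-peer", 9000) -> "ping -c 3 -M do -s 8972 vsan-peer"
```

On ESXi hosts, the equivalent check is typically `vmkping -d -s 8972 <peer VMkernel address>` (`-d` sets the don't-fragment bit). For the Geneve minimum of 1600 bytes, the payload works out to 1572 bytes.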