Physical network design decisions govern the physical layout and the use of VLANs. They also cover jumbo frames and other network-related requirements such as DNS and NTP.

Physical Network Design Decisions

Routing Protocols

Base the selection of the external routing protocol on your current implementation or on available expertise among the IT staff. Take performance requirements into consideration. Possible options are OSPF, BGP and IS-IS. While each routing protocol has a complex set of pros and cons, the VVD utilizes BGP as its routing protocol.
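As a quick sanity check before examining BGP adjacency at the protocol level, you can confirm that each ToR peer accepts connections on TCP port 179, the port on which BGP speakers listen. The following Python sketch uses hypothetical peer addresses; substitute the loopback or interface addresses of your own ToR switches.

```python
import socket

# Hypothetical ToR switch peer addresses; substitute your BGP neighbors.
PEERS = ["172.16.31.1", "172.16.31.2"]
BGP_PORT = 179  # BGP speakers listen on TCP 179

for peer in PEERS:
    try:
        # A successful TCP handshake shows the peer is reachable on the BGP port.
        with socket.create_connection((peer, BGP_PORT), timeout=5):
            print(f"{peer}: TCP {BGP_PORT} reachable")
    except OSError as err:
        print(f"{peer}: unreachable ({err})")
```

A successful handshake does not prove the session will establish (AS numbers, authentication, and policy still apply), but a failure localizes the problem to basic reachability or filtering.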

DHCP Proxy

The DHCP proxy must point to a DHCP server by way of its IPv4 address. See the Planning and Preparation documentation for details on the DHCP server.

Table 1. Physical Network Design Decisions

SDDC-PHY-NET-001

Design Decision: The physical network architecture must support the following requirements:

  • One 10 GbE port on each ToR switch for ESXi host uplinks

  • Host uplinks are not configured in an ether-channel (LAG/vPC) configuration

  • Layer 3 device that supports BGP

  • IGMP support

Design Justification: Having two uplinks per ESXi host guarantees availability during a switch failure. This design utilizes functions of the vSphere Distributed Switch, NSX for vSphere, and the core vSphere platform that are not compatible with link-aggregation technologies. BGP is used as the dynamic routing protocol in this design. vSAN and NSX hybrid mode replication require IGMP.

Design Implication: Could limit hardware choices. Requires dynamic routing protocol configuration in the physical networking stack.

SDDC-PHY-NET-002

Design Decision: Use a physical network that is configured for BGP routing adjacency.

Design Justification: The VVD utilizes BGP as its routing protocol. This allows for flexibility in network design for routing multi-site and multi-tenant workloads.

Design Implication: Requires BGP configuration in the physical networking stack.

SDDC-PHY-NET-003

Design Decision: Each rack uses two ToR switches, which provide connectivity across two 10 GbE links to each server.

Design Justification: Two 10 GbE links provide redundancy and reduce overall design complexity.

Design Implication: Requires two ToR switches per rack, which can increase costs.

SDDC-PHY-NET-004

Design Decision: Use VLANs to segment physical network functions.

Design Justification: Allows for physical network connectivity without requiring a large number of NICs. Segregation is needed for the different network functions that the SDDC requires, and it allows for differentiated services and prioritization of traffic.

Design Implication: Uniform configuration and presentation is required on all trunks made available to the ESXi hosts.

Additional Design Decisions

Additional design decisions deal with static IP addresses, DNS records, and the required NTP time source.

Table 2. Additional Network Design Decisions

SDDC-PHY-NET-005

Design Decision: Assign static IP addresses to all management components in the SDDC infrastructure, with the exception of NSX VTEPs, which are assigned by DHCP.

Design Justification: Static IP addresses avoid connection outages caused by DHCP unavailability or misconfiguration.

Design Implication: Accurate IP address management must be in place.

SDDC-PHY-NET-006

Design Decision: Create DNS records for all management nodes to enable forward, reverse, short, and FQDN resolution.

Design Justification: Ensures consistent resolution of management nodes by both IP address (reverse lookup) and name.

Design Implication: None.

SDDC-PHY-NET-007

Design Decision: Use an NTP time source for all management nodes.

Design Justification: Maintaining accurate and synchronized time between management nodes is critical.

Design Implication: None.
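To validate SDDC-PHY-NET-006, a short script can confirm that each management node resolves forward and that its address resolves back to the same FQDN. This is a minimal sketch; the node names below are hypothetical examples, so substitute your own management FQDNs.

```python
import socket

# Hypothetical management node FQDNs; substitute your own.
NODES = ["vcenter.sfo01.rainpole.local", "nsxmgr.sfo01.rainpole.local"]

for fqdn in NODES:
    try:
        # Forward lookup: FQDN -> IPv4 address (A record).
        ip = socket.gethostbyname(fqdn)
        # Reverse lookup: IP -> primary hostname (PTR record).
        rname, _, _ = socket.gethostbyaddr(ip)
        status = "OK" if rname.lower() == fqdn.lower() else "MISMATCH"
        print(f"{fqdn} -> {ip} -> {rname}: {status}")
    except (socket.gaierror, socket.herror) as err:
        print(f"{fqdn}: lookup failed ({err})")
```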
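Similarly, for SDDC-PHY-NET-007, a basic SNTP query can confirm that the designated NTP source responds and that its clock is close to local time. This sketch uses only the Python standard library; the server name is a hypothetical example.

```python
import socket
import struct
import time

NTP_SERVER = "ntp.sfo01.rainpole.local"  # hypothetical; use your NTP source
NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

# Minimal 48-byte SNTP request: LI=0, VN=3, Mode=3 (client).
request = b"\x1b" + 47 * b"\0"

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(5)
    sock.sendto(request, (NTP_SERVER, 123))
    data, _ = sock.recvfrom(48)

# The transmit timestamp's seconds field is bytes 40-43 of the reply.
transmit_secs = struct.unpack("!I", data[40:44])[0] - NTP_EPOCH_OFFSET
offset = transmit_secs - time.time()
print(f"server time: {time.ctime(transmit_secs)} (offset ~{offset:+.1f}s)")
```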

Jumbo Frames Design Decisions

IP storage throughput can benefit from the configuration of jumbo frames. Increasing the per-frame payload from 1500 bytes to the jumbo frame setting increases the efficiency of data transfer. Jumbo frames must be configured end-to-end, which is easily accomplished in a LAN. When you enable jumbo frames on an ESXi host, you have to select an MTU that matches the MTU of the physical switch ports.
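Because jumbo frames only work when every hop is configured consistently, it is worth probing the path with fragmentation disabled before putting storage or vMotion traffic on it. The following is a minimal sketch assuming a Linux host on the jumbo-enabled segment and a hypothetical peer address; on ESXi, the equivalent test is vmkping with the -d (don't fragment) and -s (payload size) options.

```python
import subprocess

TARGET = "172.16.11.1"   # hypothetical peer on the jumbo-enabled segment
# 9000-byte MTU minus 20 bytes IPv4 header and 8 bytes ICMP header = 8972.
PAYLOAD = 9000 - 28

# Linux ping: -M do sets the Don't Fragment bit, -c 3 sends three probes.
result = subprocess.run(
    ["ping", "-M", "do", "-s", str(PAYLOAD), "-c", "3", TARGET],
    capture_output=True, text=True,
)
print(result.stdout)
if result.returncode == 0:
    print("End-to-end path carries 9000-byte frames without fragmentation.")
else:
    print("Probe failed: an MTU mismatch likely exists along the path.")
```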

The workload determines whether it makes sense to configure jumbo frames on a virtual machine. If the workload consistently transfers large amounts of network data, configure jumbo frames if possible. In that case, the virtual machine operating systems and the virtual machine NICs must also support jumbo frames.

Using jumbo frames also improves performance of vSphere vMotion.

Note:

VXLAN encapsulation adds roughly 50 bytes of overhead (outer Ethernet, IP, UDP, and VXLAN headers) to each frame, so the switches and routers that carry the transport zone traffic need an MTU value of at least 1600 bytes.

Table 3. Jumbo Frames Design Decisions

SDDC-PHY-NET-008

Design Decision: Configure the MTU size to at least 9000 bytes (jumbo frames) on the physical switch ports and vDS port groups that support the following traffic types:

  • NFS

  • vSAN

  • vMotion

  • VXLAN

Design Justification: Setting the MTU to at least 9000 bytes (jumbo frames) improves traffic throughput. VXLAN requires an MTU of only 1600 bytes, but setting the VXLAN port group to 9000 bytes as well has no adverse effect and ensures consistency across all port groups that are adjusted from the default MTU size.

Design Implication: When adjusting the MTU packet size, the entire network path (VMkernel port, distributed switch, physical switches, and routers) must be configured to support the same MTU packet size.