Physical network design decisions govern the physical layout and the use of VLANs. They also cover jumbo frames and other network-related requirements such as DNS and NTP.

Physical Network Design Decisions

Routing Protocols

Base the selection of the external routing protocol on your current implementation or on the expertise available among your IT staff, and take performance requirements into consideration. Possible options are OSPF, BGP, and IS-IS. Although each routing protocol has a complex set of advantages and disadvantages, this VMware Validated Design uses BGP as its routing protocol.
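For illustration only, the following minimal sketch renders the kind of BGP neighbor stanza (in FRR/IOS-style syntax) that a Layer 3 device in this design would carry to establish its peerings. The helper function, AS numbers, and peer addresses are hypothetical placeholders, not values mandated by this design.

    # Hypothetical sketch: render a minimal FRR/IOS-style BGP neighbor stanza.
    # The AS numbers and peer IP addresses are illustrative placeholders only.
    def render_bgp_stanza(local_as, peers):
        """Return a BGP stanza for the given {peer_ip: peer_as} mapping."""
        lines = [f"router bgp {local_as}"]
        for peer_ip, peer_as in peers.items():
            lines.append(f" neighbor {peer_ip} remote-as {peer_as}")
        return "\n".join(lines)

    # Example: a ToR switch peering with an upstream device and an NSX Edge.
    print(render_bgp_stanza(65001, {"172.16.10.1": 65000, "172.16.10.2": 65002}))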

DHCP Proxy

The DHCP proxy must point to a DHCP server by way of its IPv4 address. See the Planning and Preparation documentation for details on the DHCP server.

Table 1. Physical Network Design Decisions

Decision ID: CSDDC-PHY-NET-001

Design Decision: The physical network architecture must support the following requirements:

  • One 10 GbE port on each ToR switch for ESXi host uplinks.

  • Host uplinks that are not configured in an EtherChannel (LAG/vPC) configuration.

  • A Layer 3 device that supports BGP.

  • IGMP support.

Design Justification:

  • Using two uplinks per ESXi host guarantees availability during a switch failure.

  • This design uses functions of the vSphere Distributed Switch, NSX for vSphere, and the core vSphere platform that are not compatible with link-aggregation technologies.

  • This design uses BGP as its dynamic routing protocol.

  • vSAN and NSX hybrid mode replication require IGMP.

Design Implication:

  • May limit hardware choices.

  • Requires dynamic routing protocol configuration in the physical networking stack.

Decision ID: CSDDC-PHY-NET-002

Design Decision: Use a physical network that is configured for BGP routing adjacency.

Design Justification: The design uses BGP as its routing protocol, which allows for flexibility in network design when routing multi-site and multi-tenancy workloads.

Design Implication: Requires BGP configuration in the physical networking stack.

Decision ID: CSDDC-PHY-NET-003

Design Decision: Each rack uses two ToR switches, which provide connectivity to each server across two 10 GbE links.

Design Justification: Two 10 GbE links provide redundancy and reduce the overall design complexity.

Design Implication: Requires two ToR switches per rack, which can increase costs.

Decision ID: CSDDC-PHY-NET-004

Design Decision: Use VLANs to segment physical network functions.

Design Justification:

  • Allows for physical network connectivity without requiring a large number of NICs.

  • Segregation is needed for the different network functions that the SDDC requires. This segregation allows for differentiated services and prioritization of traffic as needed (an illustrative VLAN layout sketch follows this table).

Design Implication: Uniform configuration and presentation are required on all the trunks made available to the ESXi hosts.
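As a sketch of decision CSDDC-PHY-NET-004 only, the mapping below shows one hypothetical way to lay out VLAN IDs and subnets per network function. None of the IDs or subnets are prescribed by this design.

    # Hypothetical VLAN-to-function segmentation plan. All VLAN IDs and
    # subnets are illustrative placeholders, not design-mandated values.
    VLAN_PLAN = {
        "Management":        {"vlan": 1611, "subnet": "172.16.11.0/24"},
        "vMotion":           {"vlan": 1612, "subnet": "172.16.12.0/24"},
        "vSAN":              {"vlan": 1613, "subnet": "172.16.13.0/24"},
        "VXLAN (VTEP)":      {"vlan": 1614, "subnet": "172.16.14.0/24"},
        "Secondary storage": {"vlan": 1615, "subnet": "172.16.15.0/24"},
    }

    # Each function gets its own VLAN on the same trunked host uplinks, so no
    # additional physical NICs are needed per function.
    for function, net in VLAN_PLAN.items():
        print(f"{function:<18} VLAN {net['vlan']}  {net['subnet']}")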

Additional Design Decisions

Additional design decisions deal with static IP addresses, DNS records, and the required NTP time source. An illustrative DNS and NTP verification sketch follows the table.

Table 2. Additional Network Design Decisions

Decision ID: CSDDC-PHY-NET-005

Design Decision: Assign static IP addresses to all management components in the SDDC infrastructure, with the exception of the NSX VTEPs, which DHCP assigns.

Design Justification: Configuring static IP addresses avoids connection outages caused by DHCP unavailability or misconfiguration.

Design Implication: Accurate IP address management must be in place.

Decision ID: CSDDC-PHY-NET-006

Design Decision: Create DNS records for all management components to enable forward, reverse, short, and FQDN resolution.

Design Justification: Ensures consistent resolution of management nodes by both IP address (reverse lookup) and name.

Design Implication: None.

Decision ID: CSDDC-PHY-NET-007

Design Decision: Use an NTP time source for all management components.

Design Justification: Maintaining accurate and synchronized time between management nodes is critical.

Design Implication: None.
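As a hedged illustration of decisions CSDDC-PHY-NET-006 and CSDDC-PHY-NET-007, the sketch below verifies forward and reverse DNS resolution with the Python standard library and measures the clock offset against an NTP source with the third-party ntplib package. The node names and the NTP server are hypothetical placeholders.

    # Sketch: check forward/reverse DNS resolution and NTP clock offset for
    # management components. All host names below are placeholders.
    import socket
    import ntplib  # third-party package: pip install ntplib

    MGMT_NODES = ["vcenter01.rainpole.local", "nsxmanager01.rainpole.local"]
    NTP_SERVER = "ntp.rainpole.local"

    for fqdn in MGMT_NODES:
        ip = socket.gethostbyname(fqdn)           # forward lookup (FQDN -> IP)
        rname, _, _ = socket.gethostbyaddr(ip)    # reverse lookup (IP -> FQDN)
        status = "OK" if rname.lower() == fqdn.lower() else "MISMATCH"
        print(f"{fqdn} -> {ip} -> {rname} [{status}]")

    # Offset between the local clock and the NTP source, in seconds.
    offset = ntplib.NTPClient().request(NTP_SERVER, version=3).offset
    print(f"Clock offset from {NTP_SERVER}: {offset:+.3f} s")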

Jumbo Frames Design Decisions

IP storage throughput can benefit from the configuration of jumbo frames. Increasing the per-frame payload from 1500 bytes to the jumbo frame setting increases the efficiency of data transfer. Jumbo frames must be configured end-to-end, which is easily accomplished in a LAN. When you enable jumbo frames on an ESXi host, you must select an MTU that matches the MTU of the physical switch ports.

The workload determines whether it makes sense to configure jumbo frames on a virtual machine. If the workload consistently transfers large amounts of network data, configure jumbo frames if possible. In that case, the virtual machine operating systems and the virtual machine NICs must also support jumbo frames.

Using jumbo frames also improves performance of vSphere vMotion.
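Because the MTU must match end-to-end, validate the path before production traffic depends on it. The sketch below assumes it runs on an ESXi host and shells out to the vmkping utility with the don't-fragment flag; the VMkernel interface and target IP are hypothetical placeholders. With a 9000-byte MTU, the largest ICMP payload is 8972 bytes, after subtracting the 20-byte IP header and the 8-byte ICMP header.

    # Sketch (run on an ESXi host): confirm a jumbo-frame path end-to-end by
    # sending a ping that is not allowed to fragment. The interface name and
    # target IP are placeholders.
    import subprocess

    PAYLOAD = 9000 - 20 - 8   # 9000-byte MTU minus IP and ICMP headers = 8972

    result = subprocess.run(
        ["vmkping",
         "-d",                 # set the don't-fragment bit
         "-s", str(PAYLOAD),   # ICMP payload size in bytes
         "-I", "vmk2",         # VMkernel interface to test (placeholder)
         "172.16.13.102"],     # peer VMkernel IP (placeholder)
        capture_output=True, text=True,
    )
    print(result.stdout or result.stderr)

If the oversized ping fails while a default-size ping to the same address succeeds, a device along the path is still at the default MTU.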

Note:

VXLAN needs an MTU value of at least 1600 bytes on the switches and routers that carry the transport zone traffic, because VXLAN encapsulation adds approximately 50 bytes of header overhead to every frame, so a standard 1500-byte MTU cannot carry a full-size encapsulated guest frame.
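The worked arithmetic below uses the commonly cited VXLAN header sizes to show why a full-size guest frame no longer fits within a standard 1500-byte MTU once it is encapsulated.

    # Why VXLAN transport links need an MTU above 1500 bytes.
    INNER_ETHERNET = 14   # encapsulated guest Ethernet header
    VXLAN_HEADER   = 8    # VXLAN header, carrying the 24-bit VNI
    OUTER_UDP      = 8    # outer UDP header
    OUTER_IPV4     = 20   # outer IPv4 header

    overhead = INNER_ETHERNET + VXLAN_HEADER + OUTER_UDP + OUTER_IPV4  # 50 bytes
    print(f"Outer IP packet for a 1500-byte guest payload: {1500 + overhead} bytes")
    # -> 1550 bytes; the 1600-byte minimum leaves headroom, e.g. for 802.1Q tags.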

Table 3. Jumbo Frames Design Decisions

Decision ID: CSDDC-PHY-NET-008

Design Decision: Configure the MTU size to at least 9000 bytes (jumbo frames) on the physical switch ports and vDS port groups that support the following traffic types:

  • vSAN

  • vMotion

  • VXLAN

  • Secondary storage

Design Justification:

  • Setting the MTU to at least 9000 bytes (jumbo frames) improves traffic throughput.

  • To support VXLAN, the MTU must be increased to a minimum of 1600 bytes. Setting these port groups to 9000 bytes has no adverse effect on VXLAN and ensures consistency across port groups that are adjusted from the default MTU size.

Design Implication: When adjusting the MTU packet size, the entire network path (VMkernel port, distributed switch, physical switches, and routers) must be configured to support the same MTU packet size (an illustrative configuration sketch follows this table).
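As an assumption-laden sketch of how CSDDC-PHY-NET-008 might be applied on the vSphere side, the example below uses the pyVmomi library to raise the MTU of a vSphere Distributed Switch to 9000 bytes. The vCenter Server address, credentials, and switch name are hypothetical placeholders, and the physical switch ports must still be raised separately.

    # Sketch, assuming pyVmomi and a reachable vCenter Server: raise a vSphere
    # Distributed Switch to a 9000-byte MTU. Host name, credentials, and the
    # switch name are placeholders; physical switch ports are configured
    # separately on the networking gear.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter01.rainpole.local",  # placeholder
                      user="administrator@vsphere.local",
                      pwd="changeme",                    # placeholder
                      sslContext=ssl._create_unverified_context())  # lab only
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.DistributedVirtualSwitch], True)
        # Pick the distributed switch by its (placeholder) name.
        dvs = next(d for d in view.view if d.name == "sfo01-vds01")

        spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
        spec.configVersion = dvs.config.configVersion  # required for reconfigure
        spec.maxMtu = 9000                             # jumbo frames
        dvs.ReconfigureDvs_Task(spec)                  # returns a vCenter task
    finally:
        Disconnect(si)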