Use the capabilities of vSphere networking to prevent unauthorized access and provide timely access to business data. VMware Cloud Foundation uses vSphere Distributed Switches and VMware NSX-T™ Data Center for virtual networking.

Virtual Network Design Guidelines

The high-level design goals apply regardless of your environment.
Table 1. Goals of the vSphere Networking Design

Meet diverse needs

The network must meet the diverse needs of many different entities in an organization. These entities include applications, services, storage, administrators, and users.

Reduce costs

Server consolidation reduces network costs by reducing the number of required network ports and NICs. For example, to improve cost effectiveness, you can allocate two 25-GbE NICs instead of four 10-GbE NICs.

Improve performance

Providing sufficient bandwidth reduces contention and latency, which improves performance and decreases the time required to perform maintenance.

Improve availability

A well-designed network improves availability, usually by providing network redundancy.

Support security

A well-designed network supports an acceptable level of security through controlled access and isolation, where required.

Enhance infrastructure functionality

You can configure the network to support vSphere features such as vSphere vMotion, vSphere High Availability, and vSphere Fault Tolerance.

Follow networking best practices throughout your environment.

  • Separate network services from one another to achieve greater security and better performance.

  • Use Network I/O Control and traffic shaping to guarantee bandwidth to critical virtual machines. During network contention, these critical virtual machines will receive a higher percentage of the bandwidth.

  • Separate network services on a single vSphere Distributed Switch by attaching them to port groups with different VLAN IDs.

  • Keep vSphere vMotion traffic on a separate network.

    When a migration using vSphere vMotion occurs, the contents of the memory of the guest operating system are transmitted over the network. You can place vSphere vMotion traffic on a separate network by using a dedicated vSphere vMotion VLAN.

  • When using pass-through devices with Linux kernel version 2.6.20 or an earlier guest OS, avoid MSI and MSI-X modes. These modes have significant performance impact.

  • For best performance, use VMXNET3 virtual machine NICs.

  • Ensure that physical network adapters that are connected to the same vSphere Standard Switch or vSphere Distributed Switch are also connected to the same physical network.
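The VLAN separation practice above can be illustrated with a small sketch. This is not a VMware API; the port group names and VLAN IDs are hypothetical, and the check simply verifies that no two traffic types share a VLAN ID on the same distributed switch.

```python
# Illustrative model (not a VMware API) of VLAN-separated port groups
# on a single distributed switch. All names and VLAN IDs are hypothetical.

from dataclasses import dataclass


@dataclass(frozen=True)
class PortGroup:
    name: str     # distributed port group name
    vlan_id: int  # 802.1Q VLAN tag applied by the distributed switch


# One port group per traffic type, each on its own VLAN.
port_groups = [
    PortGroup("pg-management", 1611),
    PortGroup("pg-vmotion", 1612),
    PortGroup("pg-vsan", 1613),
    PortGroup("pg-nfs", 1614),
    PortGroup("pg-host-overlay", 1615),
]


def vlans_are_separated(groups):
    """Return True if no two port groups share a VLAN ID."""
    vlans = [pg.vlan_id for pg in groups]
    return len(vlans) == len(set(vlans))


print(vlans_are_separated(port_groups))  # True: every service is isolated
```

In a real deployment, the VLAN IDs come from your network design and the port groups are created on the vSphere Distributed Switch; the sketch only shows the separation invariant the best practice asks for.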

Network Segmentation and VLANs

Separate different types of traffic to reduce contention and latency, and to increase access security.

High latency on any network can negatively affect performance. Some components are more sensitive to high latency than others. For example, reducing latency is important on the IP storage and the vSphere Fault Tolerance logging network because latency on these networks can negatively affect the performance of multiple virtual machines. Depending on the application or service, high latency on specific virtual machine networks can also negatively affect performance. Use information gathered from the current state analysis and from interviews with key stakeholders and subject matter experts (SMEs) to determine which workloads and networks are especially sensitive to high latency.
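Under contention on a shared physical adapter, the share-based model used by Network I/O Control (mentioned in the best practices above) divides the available bandwidth proportionally to the configured shares. A minimal sketch of that arithmetic, with hypothetical share values (not VMware defaults):

```python
# Hypothetical sketch of proportional, share-based bandwidth allocation,
# the model Network I/O Control applies during contention: each traffic
# type receives bandwidth in proportion to its configured shares.

def allocate_bandwidth(link_gbps, shares):
    """Divide link capacity proportionally to the configured shares."""
    total = sum(shares.values())
    return {traffic: link_gbps * s / total for traffic, s in shares.items()}


# Example share values (illustrative only):
shares = {"management": 50, "vmotion": 50, "vsan": 100, "vm": 100}
allocation = allocate_bandwidth(25.0, shares)
print(round(allocation["vsan"], 2))  # 8.33 Gbps of a 25-GbE NIC
```

The point of the model is that shares only matter during contention; when the link is idle, any traffic type can use the full bandwidth.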

Networks in VMware Cloud Foundation

In VMware Cloud Foundation, traffic in the VI workload domain is distributed on several networks for traffic isolation and configuration autonomy.

Management network

Carries traffic for management of the ESXi hosts and communication to and from vCenter Server. In addition, on this network, the hosts exchange heartbeat messages when vSphere HA is enabled in non-vSAN clusters.

When using multiple availability zones, you use this network for vSAN witness traffic. See vSAN Witness Design for a Virtual Infrastructure Workload Domain.

vSphere vMotion network

Carries traffic for relocating virtual machines between ESXi hosts with zero downtime.

vSAN network

Carries the communication between ESXi hosts in the cluster to implement a vSAN shared storage. In addition, on this network, the hosts exchange heartbeat messages when vSphere HA is enabled in vSAN clusters.

Overlay network

Carries overlay traffic between the components in the workload domain and traffic for software-defined network services such as load balancing and dynamic routing (East-West traffic).

Uplink networks

Carry traffic for communication between software-defined overlay networks and the external network (North-South traffic). In addition, on these networks, routing control packets are exchanged to establish required routing adjacency and peerings.

Virtual Networks

Determine the number of networks or VLANs that are required depending on the type of traffic.

Networks in an environment with a single VMware Cloud Foundation instance

  • Networks that support vSphere system traffic

    • Management

    • vSphere vMotion

    • vSAN

    • NFS

    • Host tunnel endpoints (TEPs) for host overlay communication

  • Networks that support the services and applications in the organization

    • NSX Edge tunnel endpoints (TEPs) for edge overlay communication

    • NSX Edge uplinks

Networks in an environment with multiple VMware Cloud Foundation instances

  • Networks that support vSphere system traffic

    • Management

    • vSphere vMotion

    • vSAN

    • NFS

    • Host tunnel endpoints (TEPs) for host overlay communication

  • Networks that support the services and applications in the organization

    • NSX Edge tunnel endpoints (TEPs) for edge overlay communication

    • NSX Edge remote TEPs (RTEPs) for edge overlay communication between VMware Cloud Foundation instances

    • NSX Edge uplinks

For information about TEPs, RTEPs, and NSX Edge networks, see NSX-T Data Center Design for a Virtual Infrastructure Workload Domain.
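The two configurations above differ only in the NSX Edge RTEP network. A small sketch, using the network names from the lists (the helper function itself is hypothetical):

```python
# Hypothetical helper summarizing the lists above: which networks a
# VMware Cloud Foundation deployment needs, depending on whether it
# spans a single instance or multiple instances.

SYSTEM_NETWORKS = [
    "Management",
    "vSphere vMotion",
    "vSAN",
    "NFS",
    "Host overlay TEP",
]

WORKLOAD_NETWORKS = [
    "NSX Edge overlay TEP",
    "NSX Edge uplinks",
]


def required_networks(multiple_instances):
    """Return the list of networks for the given deployment shape.

    Multi-instance deployments add NSX Edge RTEPs for edge overlay
    communication between VMware Cloud Foundation instances.
    """
    networks = SYSTEM_NETWORKS + WORKLOAD_NETWORKS
    if multiple_instances:
        networks.append("NSX Edge RTEP")
    return networks


print(len(required_networks(multiple_instances=False)))  # 7
print(len(required_networks(multiple_instances=True)))   # 8
```

Each of these networks typically maps to its own VLAN and subnet, per the separation guidance earlier in this chapter.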