The physical layer in an SDDC contains the compute, storage, and network resources in your data center.

Figure 1. Physical Configuration of the SDDC

Workload Domains

The compute, storage, and network resources are organized in workload domains. The physical layer also includes the physical network infrastructure and storage setup. For information on workload domains and clusters, see Workload Domains in VMware Validated Design.


Physical Compute

The physical compute resources are delivered through ESXi, a bare-metal hypervisor that installs directly on your physical servers. With direct access to and control of the underlying resources, ESXi logically partitions hardware to consolidate applications and cut costs. ESXi is the base building block of the Software-Defined Data Center.


Physical Network

VMware Validated Design can use most physical network architectures. When building an SDDC, consider the following:

  • Layer 2 or Layer 3 transport types

    This VMware Validated Design uses a Layer 3 network architecture.

    • A Top of Rack (ToR) switch is typically located inside a rack and provides network access to the servers inside that rack.

    • An inter-rack switch at the aggregation layer provides connectivity between racks. Links between inter-rack switches are typically not required. If a link failure between an inter-rack switch and a ToR switch occurs, the routing protocol ensures that no traffic is sent to the inter-rack switch that has lost connectivity.

  • Using quality of service tags for prioritized traffic handling on the network devices

  • NIC configuration on the physical servers

    VMware vSphere® Distributed Switch supports several NIC teaming options. Load-based NIC teaming makes optimal use of the available bandwidth and provides redundancy if a link fails. Use a minimum of two 10-GbE connections, with two 25-GbE connections recommended, for each ESXi host in combination with a pair of ToR switches.

  • VLAN port modes on both physical servers and network equipment

    802.1Q network trunks can support as many VLANs as required, for example, for management, storage, overlay, and VMware vSphere® vMotion® traffic.
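The QoS tags and VLAN trunking in the considerations above both live in the same 4-byte 802.1Q tag: a 3-bit Priority Code Point (PCP) carries the traffic class, and a 12-bit VLAN ID identifies the network. The following Python sketch shows how that tag is packed; the VLAN ID 1612 and priority 4 are illustrative example values, not numbers mandated by this design:

```python
import struct

TPID_8021Q = 0x8100  # Tag Protocol Identifier for 802.1Q


def build_vlan_tag(vlan_id: int, pcp: int = 0, dei: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag inserted after the source MAC address.

    pcp:     3-bit Priority Code Point (QoS class, 0-7)
    dei:     1-bit Drop Eligible Indicator
    vlan_id: 12-bit VLAN identifier (0-4095)
    """
    if not 0 <= vlan_id <= 0xFFF:
        raise ValueError("VLAN ID must fit in 12 bits")
    if not 0 <= pcp <= 7:
        raise ValueError("PCP must fit in 3 bits")
    # Tag Control Information: PCP in the top 3 bits, then DEI, then VID.
    tci = (pcp << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", TPID_8021Q, tci)


# Example: vMotion traffic on VLAN 1612 tagged with priority 4.
tag = build_vlan_tag(vlan_id=1612, pcp=4)
assert tag.hex() == "8100864c"
```

A trunk port carries frames tagged this way for many VLANs at once, which is why a single pair of uplinks can serve all the traffic types listed above.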

Because of these considerations, providing a robust physical network that supports the physical-to-virtual network abstraction is an important requirement of network virtualization.
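The Layer 3 failover behavior described above can be illustrated with a simplified equal-cost multipath (ECMP) sketch: each flow hashes onto one of the available next hops, and when the routing protocol withdraws a failed inter-rack switch, flows re-hash over the survivors. The switch names and the hash function here are illustrative, not how any particular switch implements ECMP:

```python
import hashlib


def pick_next_hop(flow: tuple, next_hops: list) -> str:
    """Hash a flow 5-tuple onto one of the available equal-cost next hops.

    Hashing keeps all packets of a flow on one path (avoiding reordering)
    while spreading different flows across paths.
    """
    if not next_hops:
        raise RuntimeError("no route: all next hops withdrawn")
    digest = int.from_bytes(hashlib.sha256(repr(flow).encode()).digest()[:8], "big")
    return sorted(next_hops)[digest % len(next_hops)]


spines = ["spine-a", "spine-b"]  # inter-rack switches (hypothetical names)
flow = ("10.0.1.5", "10.0.2.9", 6, 49152, 443)  # src, dst, proto, sport, dport

before = pick_next_hop(flow, spines)
# The link from this ToR to spine-a fails: the routing protocol withdraws
# that next hop, so the flow simply re-hashes over the surviving switches.
after = pick_next_hop(flow, [s for s in spines if s != "spine-a"])
assert after == "spine-b"
```

This is why links between inter-rack switches are typically not required in this design: recovery is handled entirely by route withdrawal at the ToR, not by traffic detouring across the aggregation layer.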

Regions and Availability Zones

Availability Zone

Represents the fault domain of the SDDC. Multiple availability zones can provide continuous availability of an SDDC. This VMware Validated Design supports one availability zone per region. See Multiple Availability Zones.


Region

Each region is a separate SDDC instance. You use multiple regions for disaster recovery across individual SDDC instances.

In this VMware Validated Design, regions have similar physical and virtual infrastructure design but different naming.

Table 1. Regions in VMware Validated Design

Region      Disaster Recovery Role      Region-Specific Domain Name
Region A
Region B

Physical Storage

This VMware Validated Design provides guidance for the storage of the management components. A shared storage system hosts not only the management and tenant or container workloads, but also template repositories and backup locations. Storage within an SDDC can include internal storage, external storage, or both, used as either principal or supplemental storage. For the management domain, this validated design uses internal vSAN storage as principal storage and external NFS storage as supplemental storage.

Internal Storage

vSAN is a software-based distributed storage platform that combines the internal compute and storage resources of clustered VMware ESXi hosts. By using storage policies on a cluster, you configure multiple copies of the data. As a result, this data is accessible during maintenance and host outages.
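The number of copies a storage policy keeps follows from its failures-to-tolerate (FTT) setting: with RAID-1 mirroring, tolerating FTT failures requires FTT + 1 full copies, placed across 2 × FTT + 1 hosts (copies plus witness components). The sketch below is simplified capacity arithmetic under those rules, not the actual vSAN placement algorithm:

```python
def mirror_copies(ftt: int) -> int:
    """RAID-1 mirroring keeps FTT + 1 full copies of each object."""
    if ftt < 0:
        raise ValueError("FTT must be non-negative")
    return ftt + 1


def min_hosts(ftt: int) -> int:
    """RAID-1 needs 2*FTT + 1 fault domains (hosts) for copies and witnesses."""
    return 2 * ftt + 1


def raw_capacity_needed(usable_gb: float, ftt: int) -> float:
    """Raw capacity consumed for a given usable size, before other overheads."""
    return usable_gb * mirror_copies(ftt)


# Default policy (FTT=1): 2 copies, minimum 3 hosts, and a 100-GB object
# consumes 200 GB of raw capacity.
assert mirror_copies(1) == 2
assert min_hosts(1) == 3
assert raw_capacity_needed(100, 1) == 200
```

Because the extra copies live on different hosts, any single host can be placed in maintenance mode or fail outright while the data stays accessible from the remaining copy.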

External Storage

External storage provides non-vSAN storage by using NFS, iSCSI, or Fibre Channel. Different types of storage can provide different SLA levels, ranging from just a bunch of disks (JBODs) using SATA drives with minimal to no redundancy, to fully redundant enterprise-class storage arrays.

Principal Storage

VMware vSAN™ storage is the default storage type for the SDDC management components. All design, deployment, and operational guidance is based on vSAN. Block and file storage technologies for principal storage are out of scope for this design and are referenced only for specific use cases, such as backups to supplemental storage.

The storage devices on vSAN ready servers provide the storage infrastructure. This validated design uses vSAN in an all-flash configuration.

For workloads in workload domains, you can use vSAN, vVols, NFS, and VMFS on FC.

Supplemental Storage

NFS storage is the supplemental storage for the SDDC management components. It provides space for archiving log data and application templates.

Supplemental storage provides additional storage for backup of the SDDC. It can use NFS, iSCSI, or Fibre Channel technology. Different types of storage can provide different SLA levels, ranging from JBODs with minimal to no redundancy, to fully redundant enterprise-class storage arrays. For bandwidth-intensive IP-based storage, the bandwidth of these pods can scale dynamically.