The physical design specifications of the ESXi host determine the characteristics of the ESXi hosts that you use to deploy this VMware Validated Design.

Physical Design Specification Fundamentals

The configuration and assembly process for each system is standardized, with all components installed in the same manner on each ESXi host. Standardizing the physical configuration of the ESXi hosts removes variability, so you operate an easily manageable and supportable infrastructure. Deploy ESXi hosts with an identical configuration across all cluster members, including storage and networking configurations. For example, consistent PCI card slot placement, especially for network controllers, is essential for accurately aligning physical to virtual I/O resources. Identical configurations also give you an even balance of virtual machine storage components across storage and compute resources.

Select all ESXi host hardware, including CPUs, according to the VMware Compatibility Guide.

The sizing of the physical servers for the ESXi hosts in the management cluster and in the shared edge and compute cluster requires special consideration because these clusters use vSAN storage and vSAN ReadyNodes. See the vSAN ReadyNode documentation.

  • An average-size VM has two vCPUs with 4 GB of RAM.

  • A standard 2U server can host 60 average-size VMs on a single ESXi host.
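The sizing assumptions above can be checked with a short calculation. This is only a sketch: the per-VM figures and the 60-VM-per-host capacity come from the text, and the variable names are illustrative.

```python
# Capacity needed on one ESXi host for 60 average-size VMs,
# each with 2 vCPUs and 4 GB of RAM (per the sizing assumptions above).
VMS_PER_HOST = 60
VCPUS_PER_VM = 2
RAM_GB_PER_VM = 4

total_vcpus = VMS_PER_HOST * VCPUS_PER_VM    # 120 vCPUs
total_ram_gb = VMS_PER_HOST * RAM_GB_PER_VM  # 240 GB

print(f"{total_vcpus} vCPUs and {total_ram_gb} GB RAM per host")
```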

Table 1. Design Decisions on the Physical Design of the ESXi Hosts

  • Design Decision: Use vSAN ReadyNodes with vSAN storage.
    Design Justification: Using vSAN ReadyNodes ensures full compatibility with vSAN at deployment. Using hardware, BIOS, and drivers from the VMware Compatibility Guide ensures that the ESXi host hardware is supported.
    Design Implication: Hardware choices might be limited. If you do not use vSAN storage, verify that the system hardware complies with the ESXi requirements in the VMware Compatibility Guide, including, but not limited to, system compatibility, I/O compatibility with network and host bus adapter (HBA) cards, and storage compatibility.

  • Design Decision: Verify that all nodes have a uniform configuration across a cluster.
    Design Justification: A balanced cluster delivers more predictable performance, even during hardware failures. In addition, the performance impact during a resync or rebuild is minimal if the cluster is balanced.
    Design Implication: Vendor sourcing, budgeting, and procurement considerations for uniform server nodes apply on a per-cluster basis.

ESXi Host Memory

The amount of memory required for compute clusters varies according to the workloads running in the cluster. When sizing the memory for compute cluster hosts, remember the admission control setting (n+1), which reserves the resources of one host for failover or maintenance.

The number of disk groups and disks that an ESXi host manages determines its memory requirements. To support the maximum number of disk groups, you must provide 32 GB of RAM. For more information about disk groups, including design and sizing guidance, see Administering VMware vSAN in the vSphere documentation.
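The two sizing rules above can be combined into a simple estimate. This is a minimal sketch, not a VMware sizing tool: the 32 GB vSAN overhead figure and the n+1 reservation come from the text, while the function name, the cluster size, and the workload total are hypothetical.

```python
def min_host_memory_gb(workload_ram_gb, cluster_hosts, vsan_overhead_gb=32):
    """Rough per-host memory estimate for a vSAN compute cluster.

    With n+1 admission control, one host is reserved for failover or
    maintenance, so the workload must fit on (cluster_hosts - 1) hosts.
    vSAN itself needs up to 32 GB per host to support the maximum
    number of disk groups.
    """
    usable_hosts = cluster_hosts - 1  # n+1: one host held in reserve
    per_host_workload = workload_ram_gb / usable_hosts
    return per_host_workload + vsan_overhead_gb

# Hypothetical example: 1,200 GB of workload RAM on a 4-host cluster
# -> 1,200 / 3 + 32 = 432 GB per host.
print(min_host_memory_gb(1200, 4))
```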

Table 2. Design Decisions on the ESXi Host Memory

  • Design Decision: Set up each ESXi host in the management cluster with a minimum of 192 GB RAM.
    Design Justification: The management and edge VMs in this cluster require a total of 448 GB RAM.
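The 192 GB minimum can be sanity-checked against the 448 GB requirement while accounting for the n+1 reservation described earlier. This is a sketch; the four-host management cluster size is an assumption for illustration, not stated in this section.

```python
# Check that a management cluster at 192 GB RAM per host covers the
# 448 GB required by the management and edge VMs, even with one host
# reserved for failover (n+1). HOSTS = 4 is an assumed cluster size.
HOSTS = 4
RAM_PER_HOST_GB = 192
REQUIRED_GB = 448

usable_gb = (HOSTS - 1) * RAM_PER_HOST_GB  # capacity with one host down
print(usable_gb >= REQUIRED_GB)  # the workload still fits
```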


ESXi Host Boot Device

The minimum boot disk size for ESXi on SCSI-based devices (SAS/SATA/SAN) is greater than 5 GB. ESXi can be deployed on stateful local or SAN SCSI boot devices, or by using vSphere Auto Deploy.

Supported features depend on the version of vSAN:

  • vSAN does not support stateless vSphere Auto Deploy.

  • vSAN 5.5 and later supports USB/SD embedded devices as the ESXi boot device (4 GB or greater).

  • vSAN 6.0 and later supports SATADOM as a boot device.

See the VMware vSAN Design and Sizing Guide to select the boot device option that best fits your hardware.
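The version-dependent boot device rules above can be encoded as a small validation helper, for example when reviewing a planned configuration. This is an illustrative sketch: the rule table mirrors the bullets in the text, but the function, the version tuples, and the device names are assumptions, not a VMware API.

```python
# Boot device types that each vSAN version adds support for,
# mirroring the bullet list above. Versions are (major, minor) tuples.
SUPPORTED_BOOT_DEVICES = {
    (5, 5): {"usb_sd"},   # USB/SD embedded devices (4 GB or greater)
    (6, 0): {"satadom"},  # SATADOM boot devices
}

def boot_device_supported(device, vsan_version):
    # Stateful SCSI-based boot (SAS/SATA/SAN) is always an option;
    # stateless vSphere Auto Deploy is never supported with vSAN.
    supported = {"scsi"}
    for min_version, devices in SUPPORTED_BOOT_DEVICES.items():
        if vsan_version >= min_version:
            supported |= devices
    return device in supported

print(boot_device_supported("satadom", (6, 2)))  # True
print(boot_device_supported("satadom", (5, 5)))  # False
```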