The physical design specifications of the ESXi host list the characteristics of the hosts that were used during deployment and testing of this VMware Validated Design.

Physical Design Specification Fundamentals

The configuration and assembly process for each system is standardized, with all components installed in the same manner on each host. Standardizing the entire physical configuration of the ESXi hosts is critical to providing an easily manageable and supportable infrastructure, because standardization eliminates variability. Consistent PCI card slot placement, especially for network controllers, is essential for accurately mapping physical to virtual I/O resources. Deploy ESXi hosts with identical configurations, including identical storage and networking configurations, across all cluster members. Identical configurations ensure an even balance of virtual machine storage components across storage and compute resources.

Select all ESXi host hardware, including CPUs, following the VMware Compatibility Guide.

The sizing of the physical servers for the ESXi hosts in the management and edge pods requires special consideration: because these pod types use VMware vSAN, sizing is based on the VMware document VMware Virtual SAN Ready Nodes.

  • An average-sized VM has two vCPUs with 4 GB of RAM.

  • A standard 2U server can host 60 average-sized VMs on a single ESXi host.
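The two averages above imply an aggregate per-host demand that the server hardware must cover. The following sketch works that arithmetic through; the physical core count used for the overcommit ratio is an illustrative assumption, not a value from this design.

```python
# Per-host resource demand implied by the sizing averages above.
# The 2 vCPU / 4 GB VM profile and 60 VMs per host come from this design;
# the dual 14-core socket layout is an illustrative assumption.

AVG_VCPUS_PER_VM = 2
AVG_RAM_GB_PER_VM = 4
VMS_PER_HOST = 60

vcpu_demand = AVG_VCPUS_PER_VM * VMS_PER_HOST     # 120 vCPUs per host
ram_demand_gb = AVG_RAM_GB_PER_VM * VMS_PER_HOST  # 240 GB RAM per host

physical_cores = 2 * 14  # assumed 2 x 14-core sockets (not from this design)
vcpu_to_core_ratio = vcpu_demand / physical_cores

print(f"vCPU demand per host: {vcpu_demand}")
print(f"RAM demand per host:  {ram_demand_gb} GB")
print(f"vCPU:pCore ratio:     {vcpu_to_core_ratio:.1f}:1")
```

With the assumed socket layout this works out to roughly a 4.3:1 vCPU-to-core overcommit, which is why the averages matter when selecting CPUs from the compatibility guide.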

Table 1. ESXi Host Design Decisions

Design Decision: Use vSAN Ready Nodes.
Design Justification: Using a vSAN Ready Node ensures seamless compatibility with vSAN during the deployment.
Design Implication: Might limit hardware choices.

Design Decision: All nodes must have uniform configurations across a given cluster.
Design Justification: A balanced cluster delivers more predictable performance, even during hardware failures. In addition, the performance impact during resync/rebuild is minimal when the cluster is balanced.
Design Implication: Vendor sourcing, budgeting, and procurement considerations for uniform server nodes apply on a per-cluster basis.

ESXi Host Memory

The amount of memory required for compute pods varies depending on the workloads running in the pod. When sizing memory for compute pod hosts, remember that the vSphere HA admission control setting (n+1) reserves the resources of one host for failover or maintenance.
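The effect of the n+1 reservation on sizing can be sketched as follows; the host count and per-host RAM in the example are illustrative assumptions, not values from this design.

```python
# Sketch: memory available to workloads under n+1 admission control,
# which reserves the capacity of one host for failover or maintenance.
# The example host count and per-host RAM are assumed values.

def usable_memory_gb(hosts: int, ram_per_host_gb: int) -> int:
    """Return the memory available to workloads when one host is reserved."""
    if hosts < 2:
        raise ValueError("n+1 admission control requires at least two hosts")
    return (hosts - 1) * ram_per_host_gb

# Example: an assumed 4-host compute pod with 256 GB per host
print(usable_memory_gb(4, 256))  # 768 GB usable; 256 GB held in reserve
```

In other words, only (n-1)/n of the pod's raw memory is available to workloads, so per-host RAM must be sized against that reduced figure.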


The number of disk groups and disks that an ESXi host manages determines its memory requirements; to support the maximum number of disk groups, an ESXi host requires 32 GB of RAM. See the VMware vSAN 6.5 Design and Sizing Guide for more information about disk groups, including design and sizing guidance.

Table 2. Host Memory Design Decision

Design Decision: Set up each ESXi host in the management pod with a minimum of 192 GB of RAM.
Design Justification: The management and edge VMs in this pod require a total of 424 GB of RAM.
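A quick check shows the 192 GB per-host decision covers the stated 424 GB requirement under n+1 admission control. The four-host management pod size used below is an assumption, not a value stated in this section.

```python
# Sanity check of the 192 GB per-host decision against the stated 424 GB
# workload requirement, under n+1 admission control (one host reserved).
# The four-host pod size is an assumption, not stated in this section.

HOSTS = 4              # assumed management pod size
RAM_PER_HOST_GB = 192  # design decision above
REQUIRED_GB = 424      # design justification above

usable_gb = (HOSTS - 1) * RAM_PER_HOST_GB  # capacity with one host reserved
print(f"Usable RAM with n+1: {usable_gb} GB (requirement: {REQUIRED_GB} GB)")
assert usable_gb >= REQUIRED_GB
```

Under these assumptions, 576 GB remains usable after reserving one host, leaving headroom above the 424 GB requirement.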


Host Boot Device Background Considerations

The minimum boot disk size for ESXi on SCSI-based devices (SAS/SATA/SAN) is greater than 5 GB. ESXi can be deployed using stateful local or SAN SCSI boot devices, or by using vSphere Auto Deploy.

Which options are supported depends on the version of vSAN that you are using:

  • vSAN does not support stateless vSphere Auto Deploy.

  • vSAN 5.5 and later supports USB and SD embedded devices (4 GB or greater) as ESXi boot devices.

  • vSAN 6.0 and later adds SATADOM as a supported boot device.

See the VMware vSAN 6.5 Design and Sizing Guide to choose the option that best fits your hardware.