The physical design specifications of the ESXi host list the characteristics of the ESXi hosts that were used during deployment and testing of this VMware Validated Design.

Physical Design Specification Fundamentals

The configuration and assembly process for each system is standardized, with all components installed in the same manner on each ESXi host. Standardizing the entire physical configuration of the ESXi hosts is critical to providing an easily manageable and supportable infrastructure because standardization eliminates variability. Deploy ESXi hosts with identical configurations, including identical storage and networking configurations, across all cluster members. For example, consistent PCI card slot placement, especially for network controllers, is essential for accurate alignment of physical to virtual I/O resources. Identical configurations ensure an even balance of virtual machine storage components across storage and compute resources.

Select all ESXi host hardware, including CPUs, according to the VMware Compatibility Guide.

The sizing of the physical servers for the ESXi hosts in the consolidated cluster has special considerations because the cluster uses vSAN storage and vSAN ReadyNodes. See the vSAN ReadyNode documentation for guidance. This design assumes the following:

  • An average-size VM has two vCPUs with 4 GB of RAM.

  • A standard 2U server can host 60 average-sized VMs on a single ESXi host.
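Under these assumptions, aggregate cluster capacity follows from simple multiplication. The following is a minimal sketch of that calculation; the VM profile and VMs-per-host figures are the averages stated above, and the function name and example host count are illustrative, not part of this design:

```python
# Rough capacity estimate for the consolidated cluster, using the
# average VM profile from this design (2 vCPUs / 4 GB RAM) and the
# assumption of 60 average-sized VMs per 2U ESXi host.
AVG_VM_VCPUS = 2
AVG_VM_RAM_GB = 4
VMS_PER_HOST = 60

def cluster_capacity(hosts: int) -> dict:
    """Return the aggregate VM capacity for a cluster of `hosts` ESXi hosts."""
    vms = hosts * VMS_PER_HOST
    return {
        "vms": vms,
        "vcpus": vms * AVG_VM_VCPUS,
        "ram_gb": vms * AVG_VM_RAM_GB,
    }

# Example: a hypothetical 4-host consolidated cluster.
print(cluster_capacity(4))  # {'vms': 240, 'vcpus': 480, 'ram_gb': 960}
```

This estimate ignores failover reservations and management overhead, which the memory sizing decisions later in this section address.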

Table 1. ESXi Host Design Decisions

Decision ID: CSDDC-PHY-007
Design Decision: Use vSAN ReadyNodes.
Design Justification: Using a vSAN ReadyNode ensures seamless compatibility with vSAN during the deployment.
Design Implication: Hardware choices might be limited.

Decision ID: CSDDC-PHY-008
Design Decision: Ensure that all nodes have uniform configurations across a given cluster.
Design Justification: A balanced cluster delivers more predictable performance, even during hardware failures. In addition, the performance impact during a resync or rebuild is minimal if the cluster is balanced.
Design Implication: Apply vendor sourcing, budgeting, and procurement considerations for uniform server nodes on a per-cluster basis.

ESXi Host Memory

The amount of memory required varies according to the workloads. When sizing memory, account for the admission control setting (n+1), which reserves one host's resources for failover.

Note:

See Administering VMware vSAN in the vSphere documentation for more information about disk groups, including design and sizing guidance. The number of disk groups and disks that an ESXi host manages determines its memory requirements. 32 GB of RAM is required to support the maximum number of disk groups.
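The n+1 reservation can be made concrete with a short calculation. This is a minimal sketch using the figures from this design (192 GB per host and an 87 GB management and edge footprint); the function name and example host count are illustrative assumptions:

```python
# Usable workload RAM for a cluster with n+1 admission control:
# one host's worth of resources is reserved for failover, and the
# management and edge VMs consume a fixed amount (87 GB in this design).
HOST_RAM_GB = 192        # per-host minimum from decision CSDDC-PHY-009
MGMT_EDGE_RAM_GB = 87    # management and edge VM footprint in this design

def workload_ram_gb(hosts: int) -> int:
    """RAM left for workload VMs after n+1 failover and management overhead."""
    usable_hosts = hosts - 1  # n+1: reserve one host's resources for failover
    return usable_hosts * HOST_RAM_GB - MGMT_EDGE_RAM_GB

# Example: a hypothetical 4-host consolidated cluster.
print(workload_ram_gb(4))  # 3 * 192 - 87 = 489 GB for workload VMs
```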

Table 2. Host Memory Design Decision

Decision ID: CSDDC-PHY-009
Design Decision: Set up each ESXi host in the consolidated cluster with a minimum of 192 GB RAM.
Design Justification: The management and edge VMs in this cluster require a total of 87 GB RAM from the cluster. The remaining RAM supports workload virtual machines and ensures enough capacity to grow to a two-cluster design at a later time, reusing the hardware for the shared edge and compute cluster.
Design Implication: Hardware choices might be limited.

ESXi Host Boot Device

The minimum boot disk size for ESXi on SCSI-based devices (SAS, SATA, or SAN) is greater than 5 GB. ESXi can be deployed using stateful local or SAN SCSI boot devices, or by using vSphere Auto Deploy.

Supported features depend on the version of vSAN:

  • vSAN does not support stateless vSphere Auto Deploy.

  • vSAN 5.5 and later supports USB and SD embedded devices as the ESXi boot device (4 GB or greater).

  • vSAN 6.0 and later supports SATADOM as a boot device.

See the VMware vSAN 6.6 Design and Sizing Guide to choose the option that best fits your hardware.
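The version-dependent rules above can be captured in a small lookup. This is a sketch that encodes only the support statements listed in this section; the function name and device labels are illustrative assumptions, not VMware API names:

```python
# Boot-device support rules from this section, keyed by vSAN version.
# Stateless vSphere Auto Deploy is unsupported regardless of version.
def supported_boot_devices(vsan_version: tuple) -> set:
    """Return the boot-device types this section lists for a vSAN version."""
    devices = {"stateful SCSI (SAS/SATA/SAN)"}    # supported for all versions
    if vsan_version >= (5, 5):
        devices.add("USB/SD embedded (>= 4 GB)")  # vSAN 5.5 and later
    if vsan_version >= (6, 0):
        devices.add("SATADOM")                    # vSAN 6.0 and later
    return devices

print(supported_boot_devices((6, 6)))
```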