The physical design specifications of the ESXi host determine the characteristics of the ESXi hosts that you use to deploy this VMware Validated Design.

Physical Design Specification Fundamentals

The configuration and assembly process for each system is standardized, with all components installed in the same manner on each ESXi host. Because standardization of the physical configuration of the ESXi hosts removes variability, you operate an easily manageable and supportable infrastructure. Deploy ESXi hosts with identical configuration across all cluster members, including storage and networking configurations. For example, consistent PCI card slot placement, especially for network controllers, is essential for accurate alignment of physical to virtual I/O resources. By using identical configurations, you have an even balance of virtual machine storage components across storage and compute resources.
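As an illustration of this uniformity check, the following sketch uses pyVmomi to report the hardware summary of each ESXi host in a cluster so that you can compare server model, CPU, memory, and NIC counts across cluster members. The vCenter Server address, credentials, and the unverified SSL context are placeholder assumptions for the example, not values defined by this design.

```python
# A minimal sketch, assuming pyVmomi is installed and a vCenter Server is reachable.
# The vCenter address, credentials, and SSL handling are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        print(f"Cluster: {cluster.name}")
        for host in cluster.host:
            hw = host.summary.hardware  # per-host hardware summary
            print(f"  {host.name}: model={hw.model}, cpu={hw.cpuModel}, "
                  f"cores={hw.numCpuCores}, ram_gb={hw.memorySize // 1024**3}, "
                  f"nics={hw.numNics}")
    view.Destroy()
finally:
    Disconnect(si)
```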

Select all ESXi host hardware, including CPUs, according to the VMware Compatibility Guide.

The sizing of the physical servers that run ESXi requires special consideration when you use vSAN storage. This design uses vSAN as the primary storage system for the consolidated cluster and specifies vSAN ReadyNodes. For information about the models of the physical servers, see the vSAN ReadyNode compatibility guide. Host sizing in this design is based on the following assumptions:

  • An average-size VM has two vCPUs and 4 GB of RAM.
  • A standard 2U server can host 60 average-size VMs on a single ESXi host (see the sizing sketch after this list).
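As a worked example of these assumptions, the following sketch shows the per-host arithmetic: 60 average-size VMs consume 240 GB of VM memory and 120 vCPUs. The 4:1 vCPU-to-core consolidation ratio is an illustrative assumption, not a value defined by this design.

```python
# A worked example of the sizing assumptions above.
AVG_VM_VCPU = 2      # vCPUs per average-size VM
AVG_VM_RAM_GB = 4    # GB of RAM per average-size VM
VMS_PER_HOST = 60    # average-size VMs per standard 2U ESXi host

workload_ram_gb = VMS_PER_HOST * AVG_VM_RAM_GB   # 240 GB of VM memory per host
workload_vcpus = VMS_PER_HOST * AVG_VM_VCPU      # 120 vCPUs per host

# Illustrative assumption: a 4:1 vCPU-to-physical-core consolidation ratio
# would call for at least 30 physical cores per host.
ASSUMED_VCPU_PER_CORE = 4
min_cores = workload_vcpus / ASSUMED_VCPU_PER_CORE

print(f"VM memory per host: {workload_ram_gb} GB")
print(f"vCPUs per host: {workload_vcpus}")
print(f"Physical cores at {ASSUMED_VCPU_PER_CORE}:1: {min_cores:.0f}")
```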
Table 1. Design Decisions on the Physical Design of the ESXi Hosts

Decision ID: CSDDC-PHY-007
Design Decision: Use vSAN ReadyNodes with vSAN storage.
Design Justification: Using a vSAN ReadyNode ensures full compatibility with vSAN during the deployment.
Design Implication: Hardware choices might be limited.

Decision ID: CSDDC-PHY-008
Design Decision: Verify that all nodes have uniform configuration across a given cluster.
Design Justification: A balanced cluster has more predictable performance even during hardware failures. In addition, the impact on performance during resync or rebuild is minimal if the cluster is balanced.
Design Implication: Apply vendor sourcing, budgeting, and procurement considerations for uniform server nodes, on a per-cluster basis.

ESXi Host Memory

The amount of memory required varies according to the workloads. When sizing memory, consider the admission control setting (n+1), which reserves the resources of one host for failover or maintenance.

The number of disk groups and disks that an ESXi host manages determines the memory requirements. To support the maximum number of disk groups, you must provide 32 GB of RAM. For more information about disk groups, including design and sizing guidance, see Administering VMware vSAN in the vSphere documentation.
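The following sketch combines both considerations: it subtracts a 32 GB vSAN memory overhead per host and reserves one host for n+1 admission control before reporting the memory that remains for workloads. The host count and per-host RAM are illustrative inputs, and the calculation ignores ESXi system memory overhead.

```python
# A minimal sketch of cluster memory sizing under n+1 admission control with a
# per-host vSAN memory overhead. Host count and RAM per host are placeholders.
def usable_cluster_memory_gb(host_count: int, ram_per_host_gb: int,
                             vsan_overhead_gb: int = 32) -> int:
    """Memory left for workloads when one host is reserved for failover (n+1)."""
    usable_hosts = host_count - 1                   # n+1 admission control
    per_host = ram_per_host_gb - vsan_overhead_gb   # subtract vSAN disk-group overhead
    return usable_hosts * per_host

# Example: four 256 GB hosts leave (4 - 1) * (256 - 32) = 672 GB for workloads.
print(usable_cluster_memory_gb(host_count=4, ram_per_host_gb=256))
```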

Table 2. Design Decisions on the ESXi Host Memory

Decision ID: CSDDC-PHY-009
Design Decision: Set up each ESXi host in the consolidated cluster with a minimum of 256 GB RAM.
Design Justification: The management and edge VMs in this cluster require a total of 176 GB RAM from the cluster. The remaining RAM is to support workload virtual machines.
Note: Verify that enough RAM is available to scale out to a two-cluster design later and to reuse the hardware for the shared edge and compute cluster.
Design Implication: Hardware choices might be limited.

ESXi Host Boot Device

The minimum boot disk size for ESXi on SCSI-based devices (SAS, SATA, or SAN) is greater than 5 GB. ESXi can be deployed by using stateful local or SAN SCSI boot devices, or by using vSphere Auto Deploy.

Selecting a boot device type and size for use with vSAN has the following considerations:

  • vSAN does not support stateless vSphere Auto Deploy.
  • Supported device types for the ESXi boot device:
    • USB or SD embedded devices. The USB or SD flash drive must be at least 4 GB.
    • SATADOM devices. The size of the boot device per host must be at least 16 GB.
See the VMware vSAN Design and Sizing Guide to select the option that best fits your hardware.
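As a simple aid when evaluating hardware options, the following sketch encodes the minimum boot device sizes listed above and checks a candidate device against them. The device types and sizes come from this section; the helper itself is hypothetical.

```python
# A hypothetical helper that checks a candidate boot device against the
# minimum sizes listed above.
BOOT_DEVICE_MIN_GB = {
    "usb_sd": 4,    # USB or SD flash device: at least 4 GB
    "satadom": 16,  # SATADOM device: at least 16 GB
    "scsi": 5,      # SAS, SATA, or SAN device: greater than 5 GB
}

def boot_device_ok(device_type: str, size_gb: float) -> bool:
    """Return True if the device meets the minimum size for its type."""
    minimum = BOOT_DEVICE_MIN_GB[device_type]
    # SCSI-based devices must be strictly greater than the minimum.
    return size_gb > minimum if device_type == "scsi" else size_gb >= minimum

print(boot_device_ok("satadom", 32))  # True
print(boot_device_ok("usb_sd", 2))    # False
```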