Ensure that the physical specifications of the ESXi hosts allow for successful deployment and operation of the design.
Physical Design Specification Fundamentals
The configuration and assembly process for each system is standardized, with all components installed in the same manner on each ESXi host. Standardizing the physical configuration of the ESXi hosts removes variability, so the infrastructure is easier to manage and support. Deploy ESXi hosts with identical configurations across all members of a vSphere cluster, including storage and networking configurations. For example, consistent PCI card slot placement, especially for network controllers, is essential for accurately mapping physical to virtual I/O resources. By using identical configurations, you can balance virtual machine storage components evenly across storage and compute resources.
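As an illustration only, the following Python sketch uses the pyVmomi SDK to compare basic hardware properties across the hosts of one cluster and flag any host that deviates from the first one. The vCenter address, credentials, and cluster name are placeholder assumptions and are not part of this design.

```python
# Illustrative sketch: flag configuration drift across the ESXi hosts of a cluster.
# Assumes pyVmomi is installed; connection details and cluster name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER, USER, PWD = "vcenter.example.com", "administrator@vsphere.local", "***"
CLUSTER_NAME = "compute-cluster-01"  # hypothetical cluster name

ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host=VCENTER, user=USER, pwd=PWD, sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == CLUSTER_NAME)
    view.Destroy()

    def profile(host):
        # Basic hardware properties that should match on every cluster member.
        hw = host.summary.hardware
        return {
            "vendor": hw.vendor,
            "model": hw.model,
            "cpu_model": hw.cpuModel,
            "cpu_sockets": hw.numCpuPkgs,
            "memory_gb": round(hw.memorySize / 1024**3),
            "num_nics": hw.numNics,
        }

    hosts = cluster.host
    baseline = profile(hosts[0])
    for host in hosts[1:]:
        diff = {k: v for k, v in profile(host).items() if v != baseline[k]}
        if diff:
            print(f"{host.name} deviates from {hosts[0].name}: {diff}")
        else:
            print(f"{host.name} matches the baseline configuration")
finally:
    Disconnect(si)
```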
The sizing of physical servers that run ESXi requires special considerations when you use vSAN storage. The design provides details on using vSAN as the primary storage system for the vSphere management and compute clusters. This design also uses vSAN ReadyNodes.
ESXi Host Memory
The amount of memory required for vSphere compute clusters varies according to the workloads running in the cluster. When sizing the memory of hosts in the compute cluster, account for the admission control setting (n+1), which reserves the resources of one host for failover or maintenance. In addition, reserve at least 8% of host resources for ESXi host operations.
The number of vSAN disk groups and disks managed by an ESXi host determines the memory requirements. To support the maximum number of disk groups, up to 100 GB of RAM is required for vSAN.
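To make the sizing consideration concrete, the following sketch works through the arithmetic with assumed, illustrative input values (not values prescribed by this design): the n+1 admission control reservation, the 8% ESXi overhead, and the worst-case vSAN memory consumption cited above.

```python
# Illustrative memory sizing arithmetic for a compute cluster.
# All inputs below are assumptions for the example, not values from the design.

hosts = 4                 # ESXi hosts in the cluster
ram_per_host_gb = 384     # physical RAM per host
vsan_overhead_gb = 100    # per-host vSAN memory consumption (worst case cited above)
esxi_overhead_pct = 0.08  # memory reserved for ESXi host operations

# n+1 admission control: the resources of one host are reserved for failover or maintenance.
usable_hosts = hosts - 1

per_host_for_vms = ram_per_host_gb * (1 - esxi_overhead_pct) - vsan_overhead_gb
cluster_for_vms = usable_hosts * per_host_for_vms

print(f"Usable per host for workloads: {per_host_for_vms:.0f} GB")
print(f"Usable across the cluster (n+1): {cluster_for_vms:.0f} GB")
```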
ESXi Boot Device
The following considerations apply when you select a boot device type and size for vSAN:
- vSAN does not support stateless vSphere Auto Deploy.
- The following device types are supported as ESXi boot devices:
  - SATADOM devices. The size of the boot device per host must be at least 16 GB.
  - USB or SD embedded devices. The USB or SD flash drive must be at least 8 GB.
Caution: VMware strongly recommends that you avoid using an SD card or USB device as a boot device. For more details, see KB 85685.
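As a hedged sketch (not part of the design), the following Python example reports the configured boot device of each host so that SD- or USB-booted hosts can be identified. Connection details are placeholders, and not every host exposes the bootDeviceSystem manager, so some hosts may report no information.

```python
# Illustrative sketch: report the configured boot device of each ESXi host.
# Connection details are placeholder assumptions; bootDeviceSystem may be
# unavailable on some platforms, in which case nothing can be reported.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        bds = host.configManager.bootDeviceSystem
        if bds is None:
            print(f"{host.name}: boot device information not available")
            continue
        info = bds.QueryBootDevices()
        if info is None:
            print(f"{host.name}: no boot devices reported")
            continue
        current = next((d.description for d in info.bootDevices
                        if d.key == info.currentBootDeviceKey), "unknown")
        print(f"{host.name}: boots from {current}")
    view.Destroy()
finally:
    Disconnect(si)
```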
| Design Decision | Design Justification | Design Implication |
| --- | --- | --- |
| Use vSAN ReadyNodes. | Ensures full compatibility with vSAN. | Hardware choices might be limited. |
| Ensure that all ESXi hosts have a uniform configuration across the vSphere cluster. | A balanced cluster delivers more predictable performance, even during hardware failures. In addition, the performance impact during a resync or rebuild is minimal when the cluster is balanced. | As new server models become available, the deployed model is phased out, so it becomes difficult to keep the cluster uniform when adding hosts later. |
| Set up the management cluster with a minimum of four ESXi hosts. | Allocating four ESXi hosts provides full redundancy for the management cluster. | Additional ESXi host resources are required for redundancy. |
| Set up each ESXi host in the management cluster with a minimum of 256 GB RAM. | | |
| Set up each edge cluster with a minimum of three ESXi hosts. | Ensures full redundancy for the required two NSX Edge nodes. | As NSX Edge nodes are added, more hosts must be added to the cluster to maintain redundancy. |
| Set up each ESXi host in the edge cluster with a minimum of 192 GB RAM. | Ensures that the NSX Edge nodes have the required memory. | In a three-node cluster, only the resources of two ESXi hosts are available because the resources of one host are reserved for vSphere HA. |
| Set up each compute cluster with a minimum of four ESXi hosts. | Allocating four ESXi hosts provides full redundancy for the compute clusters. | Additional ESXi host resources are required for redundancy. |
| Set up each ESXi host in the compute cluster with a minimum of 384 GB RAM. | | In a four-node cluster, only the resources of three ESXi hosts are available because the resources of one host are reserved for vSphere HA. |
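The sketch below is an illustrative check, not part of the design, that validates deployed clusters against the minimum host count and per-host RAM values from the table above. The cluster names and connection details are assumptions and must be adapted to the actual environment.

```python
# Illustrative validation of the design minimums from the decision table.
# Cluster names and connection details are assumptions for the example.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Minimum host count and per-host RAM (GB) per cluster, from the design decisions above.
DESIGN_MINIMUMS = {
    "mgmt-cluster-01": (4, 256),
    "edge-cluster-01": (3, 192),
    "compute-cluster-01": (4, 384),
}

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        if cluster.name not in DESIGN_MINIMUMS:
            continue
        min_hosts, min_ram_gb = DESIGN_MINIMUMS[cluster.name]
        hosts = cluster.host
        if len(hosts) < min_hosts:
            print(f"{cluster.name}: only {len(hosts)} hosts, design requires {min_hosts}")
        for host in hosts:
            ram_gb = host.summary.hardware.memorySize / 1024**3
            if ram_gb < min_ram_gb:
                print(f"{cluster.name}/{host.name}: {ram_gb:.0f} GB RAM, "
                      f"design requires {min_ram_gb} GB")
    view.Destroy()
finally:
    Disconnect(si)
```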