You size the compute resources of the ESXi hosts in the workload domain according to the system requirements of the management components and the requirements of the tenant workloads that are defined in the design objectives.
For a dual-region SDDC, to accommodate the components of NSX-T Federation, which provides central networking across the first and second regions, you must allocate more compute resources on the ESXi hosts in both regions. If you configured the ESXi hosts in Region A for use in a single-region environment, you must increase their CPU count and memory size.
ESXi Server Hardware
The configuration and assembly process for each physical server designated to run ESXi should be standardized, with all components installed in a consistent manner. Standardizing the physical configuration of ESXi hosts removes variability, resulting in infrastructure that is easier to manage and support. Deploy ESXi hosts with an identical configuration across all cluster members, including storage and networking configurations. For example, consistent PCIe card installation, especially for network controllers, is essential for accurate mapping of physical network controllers to virtual network resources. By standardizing components within each physical server, you ensure an even distribution of the resources available to virtual machines across the ESXi hosts in a cluster.
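To illustrate this uniformity check, the following minimal Python sketch compares per-host hardware profiles against a cluster baseline and flags deviations. The host names and profile values are hypothetical; in practice you would collect this inventory from vCenter Server, for example through pyVmomi or PowerCLI.

```python
# Minimal sketch: flag ESXi hosts whose hardware profile deviates from the
# cluster baseline. All host names and profile values below are hypothetical
# examples; real inventory would come from vCenter Server.
hosts = {
    "esxi-01": {"cpu_model": "Xeon Gold 6248", "cores": 40, "memory_gb": 256, "nics": 2},
    "esxi-02": {"cpu_model": "Xeon Gold 6248", "cores": 40, "memory_gb": 256, "nics": 2},
    "esxi-03": {"cpu_model": "Xeon Gold 6248", "cores": 40, "memory_gb": 256, "nics": 2},
    "esxi-04": {"cpu_model": "Xeon Gold 6248", "cores": 40, "memory_gb": 192, "nics": 2},
}

baseline = hosts["esxi-01"]
for name, profile in hosts.items():
    diffs = {k: (baseline[k], v) for k, v in profile.items() if v != baseline[k]}
    status = f"deviates from baseline -> {diffs}" if diffs else "matches baseline"
    print(f"{name}: {status}")
```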
ESXi host sizing in this design is based on the following assumptions (a worked sizing sketch follows the list):

- An average-size virtual machine has two virtual CPUs with 4 GB of RAM.
- A typical 2U ESXi host can run 60 average-size virtual machines.
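To make the consolidation ratio behind these assumptions concrete, the sketch below derives the aggregate vCPU and RAM demand that 60 average-size virtual machines place on a single host. The 4:1 vCPU-to-core overcommit ratio at the end is an added assumption for illustration, not part of this design.

```python
# Derive per-host resource demand from the stated planning assumptions:
# an average VM has 2 vCPUs and 4 GB RAM; a typical 2U host runs 60 such VMs.
VCPUS_PER_VM = 2
RAM_GB_PER_VM = 4
VMS_PER_HOST = 60

vcpu_demand = VMS_PER_HOST * VCPUS_PER_VM     # 120 vCPUs per host
ram_demand_gb = VMS_PER_HOST * RAM_GB_PER_VM  # 240 GB RAM per host
print(f"Per-host demand: {vcpu_demand} vCPUs, {ram_demand_gb} GB RAM")

# Assumption for illustration only: at a 4:1 vCPU-to-core overcommit ratio,
# 120 vCPUs map to roughly 30 physical cores.
print(f"Physical cores at 4:1 overcommit: {vcpu_demand // 4}")
```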
This design uses vSAN ReadyNodes as the fundamental building block for the principal storage system in the workload domain. Select all ESXi host hardware, including CPUs, according to the VMware Compatibility Guide, aligned to the ESXi version specified by this design. The sizing of physical servers that run ESXi requires special consideration when you use vSAN storage. For information about the models of physical servers that are vSAN-ready, see the vSAN Compatibility Guide for vSAN ReadyNodes. If you are not using vSAN ReadyNodes, your CPUs, disks, and I/O modules must be listed in the VMware Compatibility Guide under CPU Series and in the vSAN Compatibility List, aligned to the ESXi version specified by this design.
| Decision ID | Design Decision | Design Justification | Design Implication |
|---|---|---|---|
| SDDC-WLD-VI-ESXi-001 | Use vSAN ReadyNodes with vSAN storage for each ESXi host in the shared edge and workload cluster. | Your SDDC is fully compatible with vSAN at deployment. | Hardware choices might be limited. |
| SDDC-WLD-VI-ESXi-002 | Allocate hosts with uniform configuration across the shared edge and workload cluster. | A balanced cluster has these advantages: predictable performance even during hardware failures, and minimal impact of resync or rebuild operations on performance. | You must apply vendor sourcing, budgeting, and procurement considerations for uniform server nodes on a per-cluster basis. |
ESXi Host Memory
When you size memory for the ESXi hosts in the workload domain, consider the following requirements.

- Requirements for the workloads that are running in the cluster

  When sizing memory for the hosts in a cluster, set vSphere HA admission control to N+1, which reserves the resources of one host for failover or maintenance (see the sketch after this list).

- Number of vSAN disk groups and disks on an ESXi host

  To support the maximum number of disk groups per host, you must provide 32 GB of RAM. For more information about disk groups, including design and sizing guidance, see Administering VMware vSAN in the vSphere documentation.
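The sketch below shows how an N+X admission control policy reduces the share of raw cluster capacity that remains available to workloads; with N+1, one host's worth of capacity is held back. The host counts are illustrative placeholders.

```python
# Minimal sketch: fraction of raw cluster capacity that remains usable under
# an N+X vSphere HA admission control policy, which reserves the capacity of
# X hosts for failover or maintenance. Host counts are placeholders.
def usable_fraction(hosts: int, reserved: int = 1) -> float:
    """Return the fraction of raw capacity left after reserving hosts."""
    return (hosts - reserved) / hosts

for n in (4, 6, 8):
    print(f"{n}-node cluster with N+1: {usable_fraction(n):.0%} of raw capacity usable")
```

Note that the reserved fraction shrinks as the cluster grows, which is one reason larger clusters use raw capacity more efficiently under N+1.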
| Decision ID | Design Decision | Design Justification | Design Implication |
|---|---|---|---|
| SDDC-WLD-VI-ESXi-003 | Install each ESXi host in the shared edge and workload cluster with a minimum of 256 GB RAM. | The large-sized NSX-T Edge appliances in this vSphere cluster require a total of 64 GB RAM. The remaining RAM is available for tenant workloads. | In a four-node cluster, only 768 GB is available for use because the N+1 vSphere HA setting reserves the capacity of one host. |
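As a quick sanity check on decision SDDC-WLD-VI-ESXi-003, this sketch subtracts the 64 GB NSX-T Edge footprint from the memory that remains usable under N+1. The host count of four comes from the table's implication column; everything else is taken directly from the decision.

```python
# Sanity check for decision SDDC-WLD-VI-ESXi-003, using figures from the
# table above: four hosts with 256 GB RAM each, N+1 admission control, and
# a 64 GB total footprint for the NSX-T Edge appliances.
hosts_in_cluster = 4
ram_per_host_gb = 256
edge_footprint_gb = 64

usable_gb = (hosts_in_cluster - 1) * ram_per_host_gb  # N+1 reserves one host
tenant_gb = usable_gb - edge_footprint_gb

print(f"Usable cluster memory under N+1: {usable_gb} GB")       # 768 GB
print(f"Memory available to tenant workloads: {tenant_gb} GB")  # 704 GB
```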