You must ensure the physical specifications of the ESXi hosts allow for successful deployment and operation of the design.

Physical Design Specification Fundamentals

The configuration and assembly process for each system is standardized, with all components installed in the same manner on each ESXi host. Because standardizing the physical configuration of the ESXi hosts removes variability, the infrastructure is easier to manage and support. Deploy ESXi hosts with identical configurations across all cluster members, including storage and networking configurations. For example, consistent PCI card slot placement, especially for network controllers, is essential for accurate alignment of physical to virtual I/O resources. By using identical configurations, you can balance the VM storage components across storage and compute resources.
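As a minimal sketch of what "uniform configuration" means in practice, the following compares host specifications field by field and reports any that differ. The host records and field names here are hypothetical stand-ins for data you might collect from your own inventory or via the vSphere API; this is not part of the design itself.

```python
# Hypothetical uniformity check: every host in a cluster should match the
# baseline host on every tracked hardware field (model, RAM, NIC slot, etc.).

def uniform_config(hosts):
    """Return the set of spec fields that differ across hosts (empty if uniform)."""
    baseline = hosts[0]
    mismatched = set()
    for host in hosts[1:]:
        for field, value in baseline.items():
            if host.get(field) != value:
                mismatched.add(field)
    return mismatched

# Illustrative inventory data; the third host's NIC sits in a different PCI slot.
cluster = [
    {"model": "R750", "ram_gb": 256, "nic_slot": "PCI3", "disk_groups": 2},
    {"model": "R750", "ram_gb": 256, "nic_slot": "PCI3", "disk_groups": 2},
    {"model": "R750", "ram_gb": 256, "nic_slot": "PCI1", "disk_groups": 2},
]

print(uniform_config(cluster))  # -> {'nic_slot'}
```

An empty result means the cluster is balanced in the sense this design requires; any reported field is a deviation to correct before adding the host to the cluster.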

The sizing of physical servers that run ESXi requires special consideration when you use vSAN storage. The design provides details on using vSAN as the primary storage system for the management cluster and the compute cluster. This design also uses vSAN ReadyNodes.

ESXi Host Memory

The amount of memory required for compute clusters varies according to the workloads running in the cluster. When sizing the memory of hosts in the compute cluster, consider the admission control setting (n+1), which reserves the resources of one host for failover or maintenance. In addition, reserve at least 8% of each host's resources for ESXi host operations.

The number of vSAN disk groups and disks managed by an ESXi host determines the memory requirements. To support the maximum number of disk groups, 32 GB of RAM is required for vSAN.
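The sizing rules above combine into simple arithmetic: reserve one host for n+1 admission control, then subtract the ESXi overhead (at least 8%) and the vSAN allocation (32 GB supports the maximum number of disk groups) from each remaining host. The following is a worked sketch under those assumptions; the cluster sizes used in the example are illustrative.

```python
# Worked memory-sizing sketch using the figures from this design:
# n+1 admission control, >= 8% ESXi overhead, 32 GB RAM for vSAN per host.

def usable_cluster_memory_gb(hosts, ram_per_host_gb,
                             esxi_overhead_pct=0.08, vsan_ram_gb=32):
    # n+1 admission control: one host's resources are reserved for failover.
    available_hosts = hosts - 1
    # Per remaining host, subtract ESXi overhead and the vSAN allocation.
    per_host = ram_per_host_gb * (1 - esxi_overhead_pct) - vsan_ram_gb
    return available_hosts * per_host

# A four-node cluster of 256 GB hosts:
print(usable_cluster_memory_gb(4, 256))  # roughly 610.5 GB usable for workloads
```

Note that the actual vSAN memory consumption depends on the number of disk groups and disks per host; 32 GB is the ceiling for the maximum disk-group configuration, so smaller configurations leave more memory for workloads.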

Table 1. Recommended Physical ESXi Host Design

| Design Recommendation | Design Justification | Design Implication |
| --- | --- | --- |
| Use vSAN ReadyNodes. | vSAN ReadyNodes ensure full compatibility with vSAN. | Hardware choices might be limited. |
| Ensure that all ESXi hosts have a uniform configuration across a cluster. | A balanced cluster has more predictable performance, even during hardware failures. In addition, the performance impact during a re-sync or rebuild is minimal if the cluster is balanced. | As new server models become available and deployed models are phased out, keeping the cluster uniform when adding hosts later becomes difficult. |
| Set up the management cluster with a minimum of four ESXi hosts. | Allocating four ESXi hosts provides full redundancy for the management cluster. | Additional ESXi host resources are required for redundancy. |
| Set up each ESXi host in the management cluster with a minimum of 256 GB RAM. | Ensures that the management components have enough memory to run during a single host failure. Provides a buffer for future management or monitoring components in the management cluster. | In a four-node cluster, only the resources of three ESXi hosts are available because the resources of one host are reserved for vSphere HA. Depending on the products deployed and their configuration, more memory per host might be required. |
| Set up each edge cluster with a minimum of three ESXi hosts. | Ensures full redundancy for the required two NSX Edge nodes. | As NSX Edge nodes are added, more hosts must be added to the cluster to ensure redundancy. |
| Set up each ESXi host in the edge cluster with a minimum of 192 GB RAM. | Ensures that the NSX Edge nodes have the required memory. | In a three-node cluster, only the resources of two ESXi hosts are available because the resources of one host are reserved for vSphere HA. |
| Set up each compute cluster with a minimum of four ESXi hosts. | Allocating four ESXi hosts provides full redundancy for the compute clusters. | Additional ESXi host resources are required for redundancy. |
| Set up each ESXi host in the compute cluster with a minimum of 384 GB RAM. | A good starting point for most workloads. Allows for ESXi and vSAN overhead. Increase the RAM size based on vendor recommendations. | In a four-node cluster, only the resources of three ESXi hosts are available because the resources of one host are reserved for vSphere HA. |
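The per-cluster minimums above can be encoded and checked mechanically. The sketch below does exactly that, with the minimums taken from the table and a hypothetical cluster inventory as input; the function name and data shape are assumptions for illustration.

```python
# Minimums per cluster role, as recommended in Table 1 of this design.
MINIMUMS = {
    "management": {"hosts": 4, "ram_gb": 256},
    "edge":       {"hosts": 3, "ram_gb": 192},
    "compute":    {"hosts": 4, "ram_gb": 384},
}

def check_cluster(role, hosts, ram_per_host_gb):
    """Return a list of violations of the recommended minimums for a role."""
    minimum = MINIMUMS[role]
    issues = []
    if hosts < minimum["hosts"]:
        issues.append(f"{role}: {hosts} hosts < minimum {minimum['hosts']}")
    if ram_per_host_gb < minimum["ram_gb"]:
        issues.append(f"{role}: {ram_per_host_gb} GB RAM < minimum {minimum['ram_gb']} GB")
    return issues

# An edge cluster with enough hosts but undersized memory:
print(check_cluster("edge", 3, 128))  # -> ['edge: 128 GB RAM < minimum 192 GB']
```

An empty list indicates the cluster meets the table's minimums; each reported issue maps back to one row of Table 1.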