You size the compute resources of the ESXi hosts in the management domain according to the system requirements of the management components and to the design objectives for managing customer workloads.

For a dual-region SDDC, to accommodate the components of NSX-T Federation, which provides central networking across the first and second regions, you must allocate more compute resources on the ESXi hosts in both regions. If you configured the ESXi hosts in Region A for a single-region environment, you must increase their CPU count and memory size.

ESXi Server Hardware

The configuration and assembly process for each system should be standardized, with all components installed in the same manner on each ESXi host. Standardizing the physical configuration of the ESXi hosts removes variability, so the infrastructure is easier to manage and support. Deploy ESXi hosts with identical configuration across all cluster members, including storage and networking configurations. For example, consistent PCIe card slot placement, especially for network controllers, is essential for accurate mapping of physical network controllers to virtual network resources. Identical configurations also give you an even balance of virtual machine storage components across the storage and compute resources of the cluster.

In this design, the principal storage system for the management domain is vSAN.

This design uses vSAN ReadyNodes for the physical servers that run ESXi. For information about the models of physical servers that are vSAN-ready, see vSAN Compatibility Guide for vSAN ReadyNodes.

Table 1. Design Decisions on Server Hardware for ESXi Hosts

Decision ID: SDDC-MGMT-VI-ESXi-001
Design Decision: Use vSAN ReadyNodes with vSAN storage for each ESXi host in the management domain.
Design Justification: Your SDDC is fully compatible with vSAN at deployment.
Design Implication: Hardware choices might be limited.

Decision ID: SDDC-MGMT-VI-ESXi-002
Design Decision: Allocate hosts with uniform configuration across the first cluster of the management domain.
Design Justification: A balanced cluster has these advantages:
  • Predictable performance even during hardware failures
  • Minimal impact of resync or rebuild operations on performance
Design Implication: You must apply vendor sourcing, budgeting, and procurement considerations for uniform server nodes on a per-cluster basis.

ESXi Host CPU and CPU Overcommitment

When sizing CPU capacity for the ESXi hosts in the management domain, consider:

  • The requirements for the management workloads.

  • Scenarios where one host is not available because of failure or maintenance. In these cases, keep the vCPU-to-pCPU overcommitment ratio at 2:1 or lower.

  • Expected number of workload domains.

  • Additional third-party management components.

Size your CPU based on the number of physical cores, not logical cores. Simultaneous multithreading (SMT) technologies, such as hyper-threading in Intel CPUs, improve CPU performance by allowing multiple threads to run in parallel on the same CPU core. Although SMT presents a single physical core as two logical cores, the performance gain is well below double and varies from one environment to another.
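As a quick check, the following Python sketch (the helper name and values are illustrative, not part of this design) computes the vCPU-to-pCPU overcommitment ratio from physical cores only, assuming one host in the cluster is unavailable.

    # Overcommitment check using physical cores only (SMT logical cores ignored).
    def overcommit_ratio(total_vcpus: int, hosts: int, cores_per_host: int) -> float:
        """Return the vCPU-to-pCPU ratio with one host out for failure or maintenance."""
        available_cores = (hosts - 1) * cores_per_host
        return total_vcpus / available_cores

    # Single-region values from this design: 196 vCPUs, 4 hosts, 32 cores per host.
    print(f"{overcommit_ratio(196, 4, 32):.2f}:1")  # 2.04:1, just above the 2:1 target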

Table 2. Design Decisions on CPU Configuration for ESXi Hosts

Decision ID: SDDC-MGMT-VI-ESXi-003
Design Decision (single-region SDDC): Install each ESXi host in the first, four-node cluster of the management domain with a minimum of 32 physical CPU cores.
Design Justification:
  • The management and NSX-T Edge appliances in the cluster require a total of 196 vCPUs.
  • If one of the hosts is not available because of failure or maintenance, the CPU overcommitment ratio becomes 2.04:1, slightly above the target overcommitment ratio of 2:1.
Design Implication: If you plan to add more than one virtual infrastructure workload domain or third-party management components, you must add more CPU cores to the management ESXi hosts.

Design Decision (dual-region SDDC): In both regions, install each ESXi host in the first, four-node cluster of the management domain with a minimum of 42 physical CPU cores.
Design Justification:
  • The management and NSX-T Edge appliances in the cluster require a total of 250 vCPUs.
  • If one of the hosts is not available because of failure or maintenance, the CPU overcommitment ratio becomes 1.98:1, within the target overcommitment ratio of 2:1.
Design Implication: If you plan to add more than one virtual infrastructure workload domain or third-party management components, you must add more CPU cores to the management ESXi hosts.

Decision ID: SDDC-MGMT-VI-ESXi-004
Design Decision: When sizing CPU, do not consider multithreading technology and associated performance gains.
Design Justification: Although multithreading technologies increase CPU performance, the performance gain depends on the running workloads and differs from one case to another.
Design Implication: Because you must provide more physical CPU cores, costs increase and hardware choices become limited.

This design assumes that the management domain consists of four ESXi hosts that run only VMware Validated Design components. If your management domain contains more than four hosts, fewer physical CPU cores per host are required. Calculate the required number of physical CPU cores according to the requirements of the management workloads in your SDDC and the number of management ESXi hosts.
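The same arithmetic can be inverted to estimate the per-host core count for a larger cluster. The sketch below is a hypothetical illustration under the 2:1 target; adjust the vCPU total to match your own management workloads.

    import math

    # Minimum physical cores per host so that vCPU/pCPU stays at or below the
    # target ratio while one host is out for failure or maintenance.
    def min_cores_per_host(total_vcpus: int, hosts: int, target_ratio: float = 2.0) -> int:
        return math.ceil(total_vcpus / (target_ratio * (hosts - 1)))

    # Example: the same 196 vCPUs spread across a six-host management cluster.
    print(min_cores_per_host(196, 6))  # 20 cores per host instead of 32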

ESXi Host Memory

When sizing memory for the ESXi hosts in the management domain, consider the following requirements.

  • Requirements for the workloads that are running in the cluster

    When sizing memory for the hosts in a cluster, set the vSphere HA admission control policy to N+1, which reserves the resources of one host for failover or maintenance. See the sizing sketch after this list.

  • Number of vSAN disk groups and disks on an ESXi host

    To support the maximum of five disk groups per host in vSphere 7, you must provide 32 GB of RAM. For more information about disk groups, including design and sizing guidance, see Administering VMware vSAN in the vSphere documentation.
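The sketch below, with a hypothetical helper name, shows how the N+1 reservation translates into usable cluster memory and compares it with the appliance footprint listed in the decision table that follows.

    # Usable memory after reserving one full host for vSphere HA failover (N+1).
    def usable_cluster_memory_gb(hosts: int, ram_per_host_gb: int) -> int:
        return (hosts - 1) * ram_per_host_gb

    # Single-region values from this design: 4 hosts with 256 GB RAM each.
    usable = usable_cluster_memory_gb(hosts=4, ram_per_host_gb=256)
    required = 645  # GB needed by the management and NSX-T Edge appliances
    print(usable, usable >= required)  # 768 True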

Table 3. Design Decisions on Memory Size for ESXi Hosts

Decision ID: SDDC-MGMT-VI-ESXi-005
Design Decision (single-region SDDC): Install each ESXi host in the first, four-node cluster of the management domain with a minimum of 256 GB RAM.
Design Justification:
  • The management and NSX-T Edge appliances in this cluster require a total of 645 GB RAM.
  • You allocate the remaining memory to additional management components that are required for new capabilities, for example, new virtual infrastructure workload domains.
Design Implication: In a four-node cluster, only 768 GB is available for use because the host redundancy that is configured in vSphere HA is N+1.

Design Decision (dual-region SDDC): In both regions, install each ESXi host in the first, four-node cluster of the management domain with a minimum of 384 GB RAM.
Design Justification:
  • The management and NSX-T Edge appliances in this cluster require a total of 861 GB RAM.
  • You allocate the remaining memory to additional management components that are required for new capabilities, for example, new virtual infrastructure workload domains.
Design Implication: In a four-node cluster, only 1,152 GB is available for use because the host redundancy that is configured in vSphere HA is N+1.