Use this list of design decisions for reference when configuring ESXi hosts in an environment with a single VMware Cloud Foundation instance or with multiple instances. The decisions determine the ESXi hardware configuration, networking, life cycle management, and remote access.

For full design details, see ESXi Design for the Management Domain.

Deployment Specification

Table 1. Design Decisions on the ESXi Server Hardware

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-ESX-CFG-001

Use vSAN ReadyNodes with vSAN storage for each ESXi host in the management domain.

Your management domain is fully compatible with vSAN at deployment.

For information about the models of physical servers that are vSAN-ready, see vSAN Compatibility Guide for vSAN ReadyNodes.

Hardware choices might be limited.

If you plan to use a server configuration that is not a vSAN ReadyNode, your CPU, disks, and I/O modules must be listed in the VMware Compatibility Guide under CPU Series and in the vSAN Compatibility List for the ESXi version specified in the VMware Cloud Foundation 4.3 Release Notes.

VCF-MGMT-ESX-CFG-002

Allocate hosts with uniform configuration across the default management vSphere cluster.

A balanced cluster has these advantages:

  • Predictable performance even during hardware failures

  • Minimal impact of resync or rebuild operations on performance

You must apply vendor sourcing, budgeting, and procurement considerations for uniform server nodes on a per-cluster basis.

Table 2. Design Decisions on the ESXi CPU Configuration for an Environment with a Single VMware Cloud Foundation Instance

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-ESX-CFG-003

Install each ESXi host in the default, four-node, management vSphere cluster with a minimum of 13 physical CPU cores.

  • The management components in the cluster require a total of 78 vCPUs.

  • If one of the hosts is not available because of a failure or maintenance event, the CPU overcommitment ratio becomes 2:1.

If you plan to add more than one virtual infrastructure workload domain, additional VMware solutions or third-party management components, you must add more CPU cores to the management ESXi hosts.

VCF-MGMT-ESX-CFG-004

When sizing CPU, do not consider multithreading technology and associated performance gains.

Although multithreading technologies increase CPU performance, the performance gain depends on running workloads and differs from one case to another.

Because you must provide more physical CPU cores, costs increase and hardware choices become limited.

Table 3. Design Decisions on the ESXi CPU Configuration for an Environment with Multiple VMware Cloud Foundation Instances

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-ESX-CFG-005

Install each ESXi host in the default, four-node, management cluster of each instance with a minimum of 22 physical CPU cores.

  • The management components in the cluster require a total of 132 vCPUs.

  • If one of the hosts is not available because of a failure or maintenance event, the CPU overcommitment ratio becomes 2:1.

If you plan to add more than one virtual infrastructure workload domain, additional VMware solutions, or third-party management components, you must add more CPU cores to the management ESXi hosts.
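The 2:1 overcommitment figure in Tables 2 and 3 follows from the cluster arithmetic: with one of the four hosts unavailable, the required vCPUs are served by the cores of the remaining three hosts. A minimal sketch of that calculation, using the core counts and vCPU totals from the tables (the function name is illustrative, not part of the design):

```python
def cpu_overcommit_after_failure(hosts, cores_per_host, required_vcpus):
    """vCPU-to-physical-core ratio when one host in the cluster is unavailable."""
    remaining_cores = (hosts - 1) * cores_per_host
    return required_vcpus / remaining_cores

# Single VMware Cloud Foundation instance: 4 hosts x 13 cores, 78 vCPUs required.
single = cpu_overcommit_after_failure(4, 13, 78)    # 78 / 39 = 2.0
# Multiple instances: 4 hosts x 22 cores, 132 vCPUs required.
multi = cpu_overcommit_after_failure(4, 22, 132)    # 132 / 66 = 2.0
print(single, multi)
```

Both configurations land exactly on the 2:1 ratio the tables state, which is why fewer cores per host would push the failure-state overcommitment above the design target.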

Table 4. Design Decisions on the ESXi Memory Size for an Environment with a Single VMware Cloud Foundation Instance

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-ESX-CFG-006

Install each ESXi host in the default, four-node, management cluster with a minimum of 128 GB RAM.

The management components in this cluster require a total of 295 GB RAM.

You allocate the remaining memory to additional management components that are required for new capabilities, for example, for new virtual infrastructure workload domains.

  • In a four-node cluster, only 384 GB is available for use because the host redundancy in vSphere HA is configured to N+1.

  • If you plan to add more than one virtual infrastructure workload domain, additional VMware solutions or third-party management components, you must add more memory to the management ESXi hosts.

Table 5. Design Decisions on the ESXi Memory Size for an Environment with Multiple VMware Cloud Foundation Instances

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-ESX-CFG-007

Install each ESXi host in the default, four-node, management cluster with a minimum of 256 GB RAM.

The management components in this cluster require a total of 511 GB RAM.

You allocate the remaining memory to additional management components that are required for new capabilities, for example, for new virtual infrastructure workload domains.

  • In a four-node cluster, only 768 GB is available for use because the host redundancy that is configured in vSphere HA is N+1.

  • If you plan to add more than one virtual infrastructure workload domain, additional VMware solutions or third-party management components, you must add more memory to the management ESXi hosts.
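The usable-memory figures in Tables 4 and 5 follow the same N+1 logic: vSphere HA reserves the capacity of one host for failover, so only three of the four hosts' memory is available for workloads. A short sketch using the sizes from the tables (the function name is illustrative):

```python
def usable_cluster_memory_gb(hosts, gb_per_host, ha_reserved_hosts=1):
    """Memory available for workloads when vSphere HA reserves N+1 failover capacity."""
    return (hosts - ha_reserved_hosts) * gb_per_host

# Single VMware Cloud Foundation instance: 4 hosts x 128 GB, 295 GB required.
single_usable = usable_cluster_memory_gb(4, 128)    # 384 GB usable
single_headroom = single_usable - 295               # 89 GB left for new components

# Multiple instances: 4 hosts x 256 GB, 511 GB required.
multi_usable = usable_cluster_memory_gb(4, 256)     # 768 GB usable
multi_headroom = multi_usable - 511                 # 257 GB left for new components
print(single_headroom, multi_headroom)
```

The headroom values are what the design allocates to additional management components, such as new virtual infrastructure workload domains.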

Table 6. Design Decisions on the Boot Device and Scratch Partition of the ESXi Hosts

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-ESX-CFG-008

Install and configure all ESXi hosts in the default management cluster to boot using a 32-GB device or greater.

Provides hosts that have large memory, that is, greater than 512 GB, with enough space for the scratch partition when using vSAN.

When you use SATA-DOM or SD devices, scratch partition and ESXi logs are not retained locally. Configure the scratch partition of each ESXi host on supplemental storage.

VCF-MGMT-ESX-CFG-009

Use the default configuration for the scratch partition on all ESXi hosts in the default management cluster.

  • If a failure in the vSAN cluster occurs, the ESXi hosts remain responsive and log information is still accessible.

  • It is not possible to use the vSAN datastore for the scratch partition.

When you use SATA-DOM or SD devices, scratch partition and ESXi logs are not retained locally. Configure the scratch partition of each ESXi host on supplemental storage.

Table 7. Design Decisions on the Virtual Machine Swap Configuration of the ESXi Hosts

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-ESX-CFG-010

For workloads running in the default management cluster, save the virtual machine swap file at the default location.

Simplifies the configuration process.

Increases the amount of replication traffic for management workloads that are recovered as part of the disaster recovery process.

Network Design

Table 8. Design Decisions on the Network Segments for the ESXi Hosts

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-ESX-NET-001

Place the ESXi hosts in the default management cluster on the VLAN-backed management network segment.

Reduces the number of VLANs needed because a single VLAN can be allocated to the ESXi hosts, vCenter Server, and the management components for NSX-T Data Center.

The ESXi hosts are not separated from the other management components on a dedicated physical VLAN for security purposes.

Table 9. Design Decisions on the IP Addressing Scheme for the ESXi Hosts

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-ESX-NET-002

Allocate statically assigned IP addresses and host names across all ESXi hosts in the default management cluster.

Ensures stability across the VMware Cloud Foundation instance, simplifies maintenance and tracking, and supports a straightforward DNS configuration.

Requires precise IP address management.

Table 10. Design Decisions on Name Resolution for the ESXi Hosts

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-ESX-NET-003

Configure forward and reverse DNS records for each ESXi host in the default management cluster.

All ESXi hosts are accessible by using a fully qualified domain name instead of by using IP addresses only.

You must provide DNS records for each ESXi host.
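Because both forward and reverse records are required for each host, it is worth verifying that the A and PTR records agree before deployment. A minimal consistency check, sketched with hypothetical hostnames and addresses (none of these names come from the design):

```python
def dns_mismatches(forward, reverse):
    """Return FQDNs whose forward (A) and reverse (PTR) records disagree."""
    return [fqdn for fqdn, ip in forward.items() if reverse.get(ip) != fqdn]

# Hypothetical records for a four-node management cluster.
forward = {
    "esxi01.example.com": "172.16.11.101",
    "esxi02.example.com": "172.16.11.102",
    "esxi03.example.com": "172.16.11.103",
    "esxi04.example.com": "172.16.11.104",
}
reverse = {
    "172.16.11.101": "esxi01.example.com",
    "172.16.11.102": "esxi02.example.com",
    "172.16.11.103": "esxi03.example.com",
    "172.16.11.104": "esxi99.example.com",  # stale PTR record
}

print(dns_mismatches(forward, reverse))  # ['esxi04.example.com']
```

In practice the records would be pulled from your DNS service rather than hard-coded; the point is that a forward lookup followed by a reverse lookup must return the original FQDN for every host.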

Table 11. Design Decisions on Time Synchronization for the ESXi Hosts

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-ESX-NET-004

Configure time synchronization by using an internal NTP time source across all ESXi hosts in the management domain for the region.

Prevents failures in the deployment of the vCenter Server appliance on an ESXi host if the host is not using NTP.

An operational NTP service must be available in the environment.

VCF-MGMT-ESX-NET-005

Set the NTP service policy to Start and stop with host across all ESXi hosts in the default management vSphere cluster.

Ensures that the NTP service is available right after you restart an ESXi host.

None.

Life Cycle Management Design

Table 12. Design Decisions on Life Cycle Management of the ESXi Hosts

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-ESX-LCM-001

Use SDDC Manager to perform the life cycle management of ESXi hosts in the management domain.

Because the deployment scope of SDDC Manager covers the full VMware Cloud Foundation stack, SDDC Manager performs patching, update, or upgrade of the management domain as a single process.

The operations team must understand the impact of a patch, update, or upgrade operation performed by using SDDC Manager.

Information Security and Access Control

Table 13. Design Decisions on ESXi Host Access

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-ESX-SEC-001

Configure the SSH service policy to Start and stop with host across all ESXi hosts in the management domain.

Ensures that the SSH service is started when an ESXi host reboots, so that access from SDDC Manager is maintained.

Might be in a direct conflict with your corporate security policy.

VCF-MGMT-ESX-SEC-002

Set the advanced setting UserVars.SuppressShellWarning to 1 across all ESXi hosts in the management domain.

Ensures that only critical messages appear in the VMware Host Client and vSphere Client by suppressing the warning message about enabled local and remote shell access.

Might be in a direct conflict with your corporate security policy.

Table 14. Design Decisions on Certificate Management for the ESXi Hosts

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-ESX-SEC-003

Regenerate the certificate of each ESXi host after assigning the host an FQDN.

Establishes a secure connection with VMware Cloud Builder during the deployment of the management domain and prevents man-in-the-middle (MiTM) attacks.

You must manually regenerate the certificates of the ESXi hosts before the deployment of the management domain.