Design decisions for ESXi
Table 1. Design Decisions on Server Hardware for ESXi

Decision ID: SDDC-KUBWLD-VI-ESXi-001
Design Decision: Use vSAN ReadyNodes with vSAN storage for each ESXi host in the shared edge and workload cluster.
Design Justification: Your SDDC is fully compatible with vSAN at deployment.
Design Implication: Hardware choices might be limited.

Decision ID: SDDC-KUBWLD-VI-ESXi-002
Design Decision: Allocate hosts with uniform configuration across the shared edge and workload cluster.
Design Justification: A balanced cluster has these advantages:
  • Predictable performance even during hardware failures
  • Minimal impact of resync or rebuild operations on performance
Design Implication: You must apply vendor sourcing, budgeting, and procurement considerations for uniform server nodes on a per-cluster basis.

Table 2. Design Decisions on Host Memory for ESXi

Decision ID: SDDC-KUBWLD-VI-ESXi-003
Design Decision: Install each ESXi host in the shared edge and workload cluster with a minimum of 256 GB RAM.
Design Justification: The large-sized NSX-T Edge appliances in this vSphere cluster require a total of 64 GB RAM. The remaining RAM is available for tenant workloads.
Design Implication: In a four-node cluster, only 768 GB is available for use because the n+1 vSphere HA setting reserves the capacity of one host (3 × 256 GB of the 1024 GB total).
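
The figures above can be reproduced with a short worked example. The following Python sketch only restates the arithmetic implied by SDDC-KUBWLD-VI-ESXi-003 (four hosts at 256 GB each, n+1 admission control, 64 GB for the NSX-T Edge appliances); adjust the inputs for your own cluster size.

  # Worked sizing example for the shared edge and workload cluster.
  hosts = 4                  # nodes in the cluster
  ram_per_host_gb = 256      # minimum RAM per ESXi host
  ha_reserved_hosts = 1      # n+1 vSphere HA admission control
  edge_ram_gb = 64           # large-sized NSX-T Edge appliances

  usable_gb = (hosts - ha_reserved_hosts) * ram_per_host_gb   # 3 x 256 GB = 768 GB
  tenant_gb = usable_gb - edge_ram_gb                         # RAM left for tenant workloads

  print(f"Usable RAM with n+1 HA: {usable_gb} GB")            # 768 GB
  print(f"RAM available for tenant workloads: {tenant_gb} GB")  # 704 GB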

Table 3. Design Decisions for Host Boot Device and Scratch Partition of ESXi

Decision ID: SDDC-KUBWLD-VI-ESXi-004
Design Decision: Install and configure all ESXi hosts in the shared edge and workload cluster to boot using a device of 32 GB or greater.
Design Justification: Provides hosts that have large memory (greater than 512 GB) with enough space for the core dump partition when using vSAN.
Design Implication: When you use SATA-DOM or SD devices, the scratch partition and ESXi logs are not retained locally. Configure the scratch partition of each ESXi host on supplemental storage.

Decision ID: SDDC-KUBWLD-VI-ESXi-005
Design Decision: Use the default configuration for the scratch partition on all ESXi hosts in the shared edge and workload cluster.
Design Justification:
  • If a failure in the vSAN cluster occurs, the ESXi hosts remain responsive and log information is still accessible.
  • It is not possible to use the vSAN datastore for the scratch partition.
Design Implication: When you use SATA-DOM or SD devices, the scratch partition and ESXi logs are not retained locally. Configure the scratch partition of each ESXi host on supplemental storage.
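
For hosts that boot from SATA-DOM or SD devices, the implication above calls for relocating the scratch partition to supplemental storage. The following pyVmomi sketch shows one possible way to set the relevant advanced option across the cluster; it is illustrative only, the vCenter Server name, credentials, and datastore path are placeholders, and the new location takes effect only after a host reboot.

  # Sketch: point the ESXi scratch location at supplemental storage (pyVmomi).
  import ssl
  from pyVim.connect import SmartConnect, Disconnect
  from pyVmomi import vim

  ctx = ssl._create_unverified_context()          # lab only; validate certificates in production
  si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                    pwd="********", sslContext=ctx)
  content = si.RetrieveContent()

  view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
  for host in view.view:
      # Placeholder datastore name; create a per-host .locker directory beforehand.
      scratch_path = "/vmfs/volumes/supplemental-ds/.locker-" + host.name
      option = vim.option.OptionValue(key="ScratchConfig.ConfiguredScratchLocation",
                                      value=scratch_path)
      host.configManager.advancedOption.UpdateOptions(changedValue=[option])   # applied after reboot
  Disconnect(si)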

Table 4. Design Decisions for Virtual Machine Swap Configuration of ESXi Hosts

Decision ID: SDDC-KUBWLD-VI-ESXi-006
Design Decision: For tenant workloads running in the shared edge and workload cluster, save the virtual machine swap file at the default location.
Design Justification: Simplifies the configuration process.
Design Implication: Increases the amount of on-disk storage required to host the entire virtual machine state.

Table 5. Design Decisions for Lifecycle Management of ESXi Hosts

Decision ID: SDDC-KUBWLD-VI-ESXi-007
Design Decision: Use SDDC Manager to perform the life cycle management of ESXi hosts in the shared edge and workload cluster.
Design Justification: SDDC Manager has a greater awareness of the full SDDC solution and therefore handles the patch, update, or upgrade of the workload domain as a single process.
Design Implication:
  • Directly performing life cycle management tasks on an ESXi host or through vCenter Server has the potential to cause issues within SDDC Manager.
  • The operations team must understand and be aware of the impact of performing a patch, update, or upgrade by using SDDC Manager.

Table 6. Design Decisions on Network Segments for ESXi Hosts

Decision ID: SDDC-KUBWLD-VI-ESXi-008
Design Decision: Place the ESXi hosts in the shared edge and workload cluster on a new VLAN-backed management network segment dedicated to the VI Workload Domain.
Design Justification:
  • Achieves physical VLAN security separation between the ESXi hosts in the VI Workload Domain and the other management components in the Management Domain.
  • Reduces the number of VLANs needed because a single VLAN can be allocated to both the ESXi hosts and the NSX-T Edge nodes in the shared edge and workload cluster.
Design Implication: A new VLAN and a new subnet are required for the VI Workload Domain management network.

Table 7. Design Decisions on IP Address Scheme for ESXi Hosts

Decision ID: SDDC-KUBWLD-VI-ESXi-009
Design Decision: Allocate statically assigned IP addresses and host names across all ESXi hosts in the shared edge and workload cluster.
Design Justification: Ensures stability across the SDDC, simplifies maintenance, and makes tracking easier.
Design Implication: Requires precise IP address management.

Table 8. Design Decisions on Name Resolution for ESXi Hosts

Decision ID: SDDC-KUBWLD-VI-ESXi-010
Design Decision: Configure forward and reverse DNS records for each ESXi host in the shared edge and workload cluster, assigning the records to the child domain in each region.
Design Justification: All ESXi hosts are accessible by using a fully qualified domain name instead of by IP address only.
Design Implication: You must provide DNS records for each ESXi host.
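
Decision SDDC-KUBWLD-VI-ESXi-010 can be spot-checked before host commissioning. The following Python sketch uses only the standard library and a placeholder FQDN to confirm that the forward and reverse records agree.

  # Sketch: verify forward and reverse DNS for an ESXi host (standard library only).
  import socket

  fqdn = "esxi01.sfo.example.com"                    # placeholder; use the host's real FQDN

  ip_address = socket.gethostbyname(fqdn)                # forward (A) record
  reverse_name, _, _ = socket.gethostbyaddr(ip_address)  # reverse (PTR) record

  if reverse_name.lower().rstrip(".") == fqdn.lower():
      print(f"{fqdn} -> {ip_address} -> {reverse_name}: forward and reverse records match")
  else:
      print(f"Mismatch: {fqdn} resolves to {ip_address}, which maps back to {reverse_name}")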

Table 9. Design Decisions on Time Synchronization for ESXi Hosts

Decision ID: SDDC-KUBWLD-VI-ESXi-011
Design Decision: Configure time synchronization by using an internal NTP time source across all ESXi hosts in the shared edge and workload cluster.
Design Justification: Ensures consistent time across all devices in the environment, which can be critical for proper root cause analysis and auditing.
Design Implication: An operational NTP service must be available in the environment.

Decision ID: SDDC-KUBWLD-VI-ESXi-012
Design Decision: Set the NTP service policy to "Start and stop with host" across all ESXi hosts in the shared edge and workload cluster.
Design Justification: Ensures that the NTP service is available right after you restart an ESXi host.
Design Implication: None.
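
Decisions SDDC-KUBWLD-VI-ESXi-011 and -012 can also be applied through the vSphere API, for example when auditing or remediating hosts outside of the vSphere Client. The pyVmomi sketch below is one possible approach under the assumption that ntp.example.com stands in for your internal NTP source and the vCenter details are placeholders.

  # Sketch: set an internal NTP source and the "Start and stop with host" policy (pyVmomi).
  import ssl
  from pyVim.connect import SmartConnect, Disconnect
  from pyVmomi import vim

  ctx = ssl._create_unverified_context()          # lab only
  si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                    pwd="********", sslContext=ctx)
  content = si.RetrieveContent()

  view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
  for host in view.view:
      # Point the host at the internal NTP source (placeholder server name).
      time_config = vim.HostDateTimeConfig(ntpConfig=vim.HostNtpConfig(server=["ntp.example.com"]))
      host.configManager.dateTimeSystem.UpdateDateTimeConfig(config=time_config)

      # Service policy "on" corresponds to "Start and stop with host" in the vSphere Client.
      service_system = host.configManager.serviceSystem
      service_system.UpdateServicePolicy(id="ntpd", policy="on")
      ntpd = next(s for s in service_system.serviceInfo.service if s.key == "ntpd")
      if ntpd.running:
          service_system.RestartService(id="ntpd")   # pick up the new server list
      else:
          service_system.StartService(id="ntpd")
  Disconnect(si)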

Table 10. Design Decisions on Host Access for ESXi Hosts

Decision ID: SDDC-KUBWLD-VI-ESXi-013
Design Decision: Configure the SSH service policy to "Start and stop with host" across all ESXi hosts in the shared edge and workload cluster.
Design Justification: Ensures that the SSH service starts after an ESXi host reboot, so that access from SDDC Manager is maintained.
Design Implication: Might be in direct conflict with your corporate security policy.

Decision ID: SDDC-KUBWLD-VI-ESXi-014
Design Decision: Set the advanced setting UserVars.SuppressShellWarning to 1.
Design Justification: Ensures that only critical messages appear in the VMware Host Client and vSphere Client by suppressing the warning message about enabled local and remote shell access.
Design Implication: Might be in direct conflict with your corporate security policy.
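
Decisions SDDC-KUBWLD-VI-ESXi-013 and -014 map to the TSM-SSH service policy and the UserVars.SuppressShellWarning advanced setting. The pyVmomi sketch below shows one way to apply both; the connection details are placeholders and the value type note is an assumption about pyVmomi behavior.

  # Sketch: SSH service policy "Start and stop with host" plus shell-warning suppression (pyVmomi).
  import ssl
  from pyVim.connect import SmartConnect, Disconnect
  from pyVmomi import vim

  ctx = ssl._create_unverified_context()          # lab only
  si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                    pwd="********", sslContext=ctx)
  content = si.RetrieveContent()

  view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
  for host in view.view:
      # SSH (service id TSM-SSH) starts and stops with the host, decision SDDC-KUBWLD-VI-ESXi-013.
      service_system = host.configManager.serviceSystem
      service_system.UpdateServicePolicy(id="TSM-SSH", policy="on")
      ssh = next(s for s in service_system.serviceInfo.service if s.key == "TSM-SSH")
      if not ssh.running:
          service_system.StartService(id="TSM-SSH")

      # Suppress the shell warning banner, decision SDDC-KUBWLD-VI-ESXi-014.
      # The option is an integer; some pyVmomi versions may require an explicit long type.
      warning_off = vim.option.OptionValue(key="UserVars.SuppressShellWarning", value=1)
      host.configManager.advancedOption.UpdateOptions(changedValue=[warning_off])
  Disconnect(si)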

Table 11. Design Decisions on User Access for ESXi Hosts

Decision ID: SDDC-KUBWLD-VI-ESXi-015
Design Decision: Join each ESXi host in the shared edge and workload cluster to the Active Directory domain.
Design Justification: Using Active Directory membership provides greater flexibility in granting access to ESXi hosts. Ensuring that users log in with a unique user account provides greater visibility for auditing.
Design Implication: Adding ESXi hosts to the Active Directory domain can add some administrative overhead.

Decision ID: SDDC-KUBWLD-VI-ESXi-016
Design Decision: Change the default ESX Admins group to the Active Directory group ug-esxi-admins.
Design Justification: Using an Active Directory group is more secure because it removes a known administrative access point.
Design Implication: Additional changes to the advanced settings of the ESXi hosts are required (see the configuration sketch after this table).

Decision ID: SDDC-KUBWLD-VI-ESXi-017
Design Decision: Add ESXi administrators to the ug-esxi-admins group in Active Directory following standard access procedures.
Design Justification: Adding ESXi administrator accounts to the Active Directory group provides these benefits:
  • Direct control over access to the ESXi hosts by using Active Directory group membership
  • Separation of management tasks
  • More visibility for access auditing
Design Implication: Administration of direct user access is controlled by using Active Directory.
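
Decisions SDDC-KUBWLD-VI-ESXi-015 and -016 involve joining each host to Active Directory and pointing the esxAdminsGroup setting at ug-esxi-admins. The pyVmomi sketch below outlines both steps under the assumption that the domain name, join account, and vCenter details are placeholders; error handling for hosts that are already joined is minimal.

  # Sketch: join ESXi hosts to Active Directory and change the admin group (pyVmomi).
  import ssl
  from pyVim.connect import SmartConnect, Disconnect
  from pyVim.task import WaitForTask
  from pyVmomi import vim

  ctx = ssl._create_unverified_context()          # lab only
  si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                    pwd="********", sslContext=ctx)
  content = si.RetrieveContent()

  view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
  for host in view.view:
      # Join the host to the Active Directory domain (placeholder domain and join account).
      for store in host.configManager.authenticationManager.supportedStore:
          if isinstance(store, vim.host.ActiveDirectoryAuthentication) and not store.info.enabled:
              task = store.JoinDomain_Task(domainName="sfo.example.com",
                                           userName="svc-domain-join",
                                           password="********")
              WaitForTask(task)

      # Replace the default ESX Admins group with ug-esxi-admins, decision SDDC-KUBWLD-VI-ESXi-016.
      admins_group = vim.option.OptionValue(
          key="Config.HostAgent.plugins.hostsvc.esxAdminsGroup", value="ug-esxi-admins")
      host.configManager.advancedOption.UpdateOptions(changedValue=[admins_group])
  Disconnect(si)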

Table 12. Design Decisions on Passwords and Account Lockout Behavior for ESXi Hosts

Decision ID: SDDC-KUBWLD-VI-ESXi-018
Design Decision: Configure a policy for ESXi host password and account lockout according to the security best practices or industry standards with which your organization maintains compliance.
Design Justification: Aligns with security best practices or industry standards with which your organization maintains compliance.
Design Implication: None.

Table 13. Design Decisions on Certificate Management for ESXi Hosts

Decision ID: SDDC-KUBWLD-VI-ESXi-019
Design Decision: Regenerate the certificates of the ESXi hosts after assigning them an FQDN.
Design Justification: Establishes a secure connection with VMware Cloud Builder during the deployment of the management domain and prevents man-in-the-middle (MiTM) attacks.
Design Implication: You must manually regenerate the certificates of the ESXi hosts before the deployment of the workload domain.
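
Decision SDDC-KUBWLD-VI-ESXi-019 can be verified by checking that each host presents a certificate issued for its FQDN. The Python sketch below only inspects the certificate, it does not regenerate it; it assumes the third-party cryptography package is installed and uses a placeholder host name.

  # Sketch: confirm an ESXi host certificate matches its FQDN (assumes the
  # "cryptography" package is installed; the host name is a placeholder).
  import ssl
  from cryptography import x509
  from cryptography.x509.oid import ExtensionOID, NameOID

  fqdn = "esxi01.sfo.example.com"                 # placeholder ESXi host FQDN

  pem = ssl.get_server_certificate((fqdn, 443))   # fetch without validating the chain
  cert = x509.load_pem_x509_certificate(pem.encode())

  common_name = cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value
  try:
      san_ext = cert.extensions.get_extension_for_oid(ExtensionOID.SUBJECT_ALTERNATIVE_NAME)
      dns_names = san_ext.value.get_values_for_type(x509.DNSName)
  except x509.ExtensionNotFound:
      dns_names = []

  if fqdn.lower() in {common_name.lower(), *(n.lower() for n in dns_names)}:
      print(f"{fqdn}: certificate subject matches the host FQDN")
  else:
      print(f"{fqdn}: certificate is issued for {common_name} / {dns_names}; regenerate it")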