The consolidated cluster design determines the number of hosts and vSphere HA settings for the cluster. The management virtual machines, NSX controllers and edges, and tenant workloads run on the ESXi hosts in the consolidated cluster.
Figure 1. Consolidated Cluster Resource Pools
Table 1. vSphere Cluster Workload Design Decisions

Decision ID

Design Decision

Design Justification

Design Implication

CSDDC-VI-VC-008

Create a consolidated cluster of a minimum of 4 hosts.

  • Three hosts are used to provide n+1 redundancy for the vSAN cluster. The fourth host is used to guarantee n+1 for vSAN redundancy during maintenance operations.

  • NSX deploys three NSX Controllers with anti-affinity rules. Using a fourth host guarantees NSX Controller distribution across three hosts during maintenance operations.

You can add ESXi hosts to the cluster as needed.

ESXi hosts are limited to 200 virtual machines when using vSAN. Additional hosts are required for redundancy and scale.
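The host-count reasoning in this decision can be sketched as a small calculation. This is an illustrative helper (the function name and parameters are hypothetical): 2 × FTT + 1 is the standard host requirement for vSAN RAID-1 mirroring with failures to tolerate (FTT) = 1, and the spare host preserves that redundancy during maintenance.

```python
def min_consolidated_hosts(vsan_ftt: int = 1, maintenance_spare: int = 1) -> int:
    """Minimum ESXi hosts for the consolidated cluster.

    vSAN with FTT = 1 and RAID-1 mirroring needs 2 * FTT + 1 hosts
    (two data components plus a witness). The spare host keeps that
    redundancy intact while one host is in maintenance mode.
    """
    return 2 * vsan_ftt + 1 + maintenance_spare

print(min_consolidated_hosts())  # 4
```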

CSDDC-VI-VC-009

Configure Admission Control for 1 ESXi host failure and percentage-based failover capacity.

Using the percentage-based reservation works well in situations where virtual machines have varying and sometimes significant CPU or memory reservations. vSphere 6.5 or later automatically calculates the reserved percentage based on the ESXi host failures to tolerate and the number of ESXi hosts in the cluster.

In a four-host cluster, only the resources of three ESXi hosts are available for use.
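The automatic percentage calculation referenced above can be sketched as follows. This is a simplified model that assumes uniformly sized hosts (the function name is hypothetical), not the vSphere implementation:

```python
def failover_capacity_pct(total_hosts: int, host_failures_to_tolerate: int = 1) -> float:
    """Reserved failover capacity as a percentage of cluster resources.

    Models the calculation vSphere 6.5+ performs for percentage-based
    admission control: reserve the fraction of the cluster represented
    by the tolerated host failures (uniform host sizes assumed).
    """
    if not 0 < host_failures_to_tolerate < total_hosts:
        raise ValueError("failures to tolerate must be between 1 and hosts - 1")
    return 100.0 * host_failures_to_tolerate / total_hosts

# Four-host consolidated cluster tolerating one host failure:
print(failover_capacity_pct(4))  # 25.0
```

With 25 percent of capacity reserved, only three hosts' worth of resources remain usable in a four-host cluster.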

CSDDC-VI-VC-010

Create a host profile for the consolidated cluster.

Using host profiles simplifies configuration of ESXi hosts and ensures settings are uniform across the cluster.

Any time an authorized change is made to an ESXi host, the host profile must be updated to reflect the change, or the compliance status shows as non-compliant.

CSDDC-VI-VC-011

Set up VLAN-backed port groups for external and management access.

Edge services gateways need access to the external network in addition to the management network.

VLAN-backed port groups must be configured with the correct number of ports, or with elastic port allocation.

CSDDC-VI-VC-012

Create a resource pool for the required management virtual machines with a CPU share level of High, a memory share level of Normal, and a 146 GB memory reservation.

These virtual machines perform management and monitoring of the SDDC. In a contention situation, these virtual machines must receive all the resources required.

During contention, management components receive more resources than tenant workloads because monitoring and capacity management must be proactive operations.

CSDDC-VI-VC-013

Create a resource pool for the required NSX Controllers and edge appliances with a CPU share level of High, a memory share level of Normal, and a 17 GB memory reservation.

The NSX components control all network traffic in and out of the SDDC and update route information for inter-SDDC communication. In a contention situation, these virtual machines must receive all the resources required.

During contention, NSX components receive more resources than user workloads because monitoring and capacity management must be proactive operations.

CSDDC-VI-VC-014

Create a resource pool for all user NSX Edge devices with a CPU share value of Normal and a memory share value of Normal.

You can use vRealize Automation to create on-demand NSX Edges for functions such as load balancing for user workloads. Because these edge devices do not support the entire SDDC, they receive a lower amount of resources during contention.

During contention, these NSX Edge devices receive fewer resources than the SDDC management edge devices. As a result, monitoring and capacity management must be proactive activities.

CSDDC-VI-VC-015

Create a resource pool for all user virtual machines with a CPU share value of Normal and a memory share value of Normal.

Creating virtual machines outside of a resource pool has a negative impact on all other virtual machines during contention. In a consolidated cluster, the SDDC edge devices must be guaranteed resources above all other workloads so as not to impact network connectivity. Setting the share values of the user workload pool to Normal gives the SDDC edges relatively more resources during contention, ensuring network traffic is not impacted.

  • During contention, tenant workload virtual machines might receive insufficient resources and experience poor performance. It is critical that monitoring and capacity management remain proactive operations and that you add capacity before contention occurs.

  • Some workloads cannot be deployed directly to a resource pool. Additional administrative overhead might be required to move workloads to resource pools.
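The effect of the share levels chosen in decisions CSDDC-VI-VC-012 through CSDDC-VI-VC-015 can be illustrated with a simplified proportional-allocation sketch. The 4:2:1 High:Normal:Low weighting matches vSphere share semantics, but the pool names, function, and MHz figure are illustrative, and real DRS allocation also honors reservations and limits:

```python
# Relative weight of vSphere share levels (High = 2x Normal, Normal = 2x Low).
SHARE_WEIGHT = {"high": 2, "normal": 1, "low": 0.5}

def allocate_under_contention(cluster_mhz: float, pools: dict) -> dict:
    """Split contended CPU capacity among sibling resource pools in
    proportion to their share levels (reservations and limits ignored)."""
    total = sum(SHARE_WEIGHT[level] for level in pools.values())
    return {name: cluster_mhz * SHARE_WEIGHT[level] / total
            for name, level in pools.items()}

pools = {
    "sddc-management": "high",    # CSDDC-VI-VC-012
    "sddc-edge": "high",          # CSDDC-VI-VC-013
    "user-edge": "normal",        # CSDDC-VI-VC-014
    "user-workloads": "normal",   # CSDDC-VI-VC-015
}

# With 60,000 MHz of contended capacity, each High pool receives twice
# as much as each Normal pool (20,000 MHz versus 10,000 MHz).
allocation = allocate_under_contention(60000, pools)
```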

CSDDC-VI-VC-016

Create a DRS VM to Host rule that runs vCenter Server and the Platform Services Controller on the first four hosts in the cluster.

In the event of an emergency, vCenter Server and the Platform Services Controller are easier to find and bring up.

Limits the ability of DRS to place vCenter Server and the Platform Services Controller on any available host in the cluster.

Table 2. Consolidated Cluster Attributes

Capacity for host failures per cluster: 1

Number of usable hosts per cluster: 3

Minimum number of hosts required to support the consolidated cluster: 4