The vCenter Server functionality is distributed across a minimum of two workload domains and two clusters. This solution uses a minimum of two vCenter Server instances: one for the management workload domain and another for the first compute workload domain. The compute workload domain can contain multiple vSphere clusters.

The cluster design must consider the workloads that the cluster handles. Different cluster types in this design have different characteristics. When you design the cluster layout in vSphere, consider the following guidelines:

  • Decide whether to use fewer large-sized ESXi hosts or more small-sized ESXi hosts:

    • A scale-up cluster has fewer large-sized ESXi hosts.

    • A scale-out cluster has more small-sized ESXi hosts.

  • Compare the capital costs of purchasing fewer, larger ESXi hosts with the costs of purchasing more, smaller ESXi hosts. Costs vary between vendors and models.

  • Compare the operational costs of managing fewer ESXi hosts with the costs of managing more ESXi hosts.

  • Consider the purpose of the cluster.

  • Consider the total number of ESXi hosts and cluster limits.

vSphere High Availability

VMware vSphere High Availability (vSphere HA) protects your VMs in case of an ESXi host failure by restarting VMs on other hosts in the cluster. During the cluster configuration, the ESXi hosts elect a primary ESXi host. The primary ESXi host communicates with the vCenter Server system and monitors the VMs and secondary ESXi hosts in the cluster.

The primary ESXi host detects different types of failure:

  • ESXi host failure, for example, an unexpected power failure.

  • ESXi host network isolation or connectivity failure.

  • Loss of storage connectivity.

  • Availability problems with a virtual machine guest operating system.

The vSphere HA Admission Control Policy allows an administrator to configure how the cluster determines available resources. In a small vSphere HA cluster, a large proportion of the cluster resources is reserved to accommodate ESXi host failures, based on the selected policy.

The following policies are available; a configuration sketch appears after the list:

  • Cluster resource percentage: Reserves a specified percentage of aggregate cluster CPU and memory resources for recovery from host failures. vSphere HA ensures that this reserved capacity remains available for failover.

  • Slot policy: vSphere HA admission control ensures that a specified number of hosts can fail and sufficient resources remain in the cluster to failover all the VMs from those hosts.

    • A slot is a logical representation of memory and CPU resources. By default, the slot is sized to meet the requirements for any powered-on VM in the cluster.

    • vSphere HA determines the current failover capacity in the cluster and leaves enough slots for the powered-on VMs. The failover capacity specifies the number of hosts that can fail.

  • Dedicated failover hosts: When a host fails, vSphere HA attempts to restart its VMs on any of the specified failover hosts.
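These policies can be configured programmatically through the vSphere API. The following minimal pyVmomi sketch enables vSphere HA with the cluster resource percentage policy and sets the host isolation response; the vCenter address, credentials, and cluster name are placeholder assumptions, and the commented lines show the other two policies.

    # Minimal pyVmomi sketch: enable vSphere HA with the cluster resource
    # percentage admission control policy. Host, credentials, and cluster
    # name are placeholders; lab environments without valid certificates
    # may need an unverified SSL context passed to SmartConnect.
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="changeme")
    content = si.RetrieveContent()

    # Locate the target cluster by name.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "compute-cluster-01")
    view.DestroyView()

    das = vim.cluster.DasConfigInfo(
        enabled=True,
        admissionControlEnabled=True,
        # Reserve 25% of aggregate CPU and memory for failover.
        admissionControlPolicy=vim.cluster.FailoverResourcesAdmissionControlPolicy(
            cpuFailoverResourcesPercent=25,
            memoryFailoverResourcesPercent=25),
        # Host isolation response, as recommended later in this design.
        defaultVmSettings=vim.cluster.DasVmSettings(isolationResponse="powerOff"))

    # Alternative admission control policies:
    #   vim.cluster.FailoverLevelAdmissionControlPolicy(failoverLevel=1)    # slot policy
    #   vim.cluster.FailoverHostAdmissionControlPolicy(failoverHosts=[...]) # dedicated hosts

    cluster.ReconfigureComputeResource_Task(
        vim.cluster.ConfigSpecEx(dasConfig=das), modify=True)
    Disconnect(si)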

vSphere Distributed Resource Scheduler

vSphere Distributed Resource Scheduler (DRS) continuously monitors the distribution and usage of CPU and memory resources for all hosts and VMs in the cluster. DRS compares these metrics to an ideal resource usage based on the attributes of the cluster's resource pools and VMs, the current demand, and the imbalance target, and then provides recommendations or performs VM migrations accordingly.

DRS supports the following modes of operation (a configuration sketch appears after the list):

  • Manual

    • Initial placement: Recommended host is displayed.

    • Migration: Recommendation is displayed.

  • Partially Automated

    • Initial placement: Automatic

    • Migration: Recommendation is displayed.

  • Fully Automated

    • Initial placement: Automatic

    • Migration: Recommendation is run automatically.
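As a sketch of how these modes map to the vSphere API, the following pyVmomi fragment sets Fully Automated mode, as recommended for the management cluster later in this design. It reuses the connection and cluster lookup from the HA sketch above; the other DrsBehavior values correspond to Manual and Partially Automated.

    # Sketch: set the DRS automation level. Assumes the pyVmomi session
    # and 'cluster' lookup from the HA sketch above.
    from pyVmomi import vim

    drs = vim.cluster.DrsConfigInfo(
        enabled=True,
        # Other values: DrsBehavior.manual, DrsBehavior.partiallyAutomated
        defaultVmBehavior=vim.cluster.DrsConfigInfo.DrsBehavior.fullyAutomated,
        vmotionRate=3)  # default (medium) migration threshold

    cluster.ReconfigureComputeResource_Task(
        vim.cluster.ConfigSpecEx(drsConfig=drs), modify=True)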

Resource Pools

A resource pool is a logical abstraction for flexible management of resources. Resource pools can be grouped into hierarchies and used to partition the available CPU and memory resources hierarchically.

Each DRS cluster has an invisible root resource pool that groups the resources of that cluster. The root resource pool does not appear because the resources of the cluster and the root resource pool are always the same.

Users can create child resource pools of the root resource pool or of any user-created child resource pool. Each child resource pool owns some of the parent’s resources and can, in turn, have a hierarchy of child resource pools to represent successively smaller units of computational capability.

A resource pool can contain child resource pools, VMs, or both. You can create a hierarchy of shared resources. The resource pools at a higher level are called parent resource pools. Resource pools and VMs that are at the same level are called siblings. The cluster itself represents the root resource pool. If you do not create child resource pools, only the root resource pool exists.

Scalable Shares allows resource pool shares to scale dynamically as VMs are added to or removed from the resource pool hierarchy.
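The following pyVmomi sketch creates a child resource pool under the cluster's root resource pool and enables Scalable Shares. The pool name and share levels are illustrative, and the scaleDescendantsShares property assumes vSphere 7.0 or later; verify it against your API version.

    # Sketch: create a child resource pool and enable Scalable Shares.
    # Assumes the pyVmomi session and 'cluster' from the HA sketch above.
    from pyVmomi import vim

    rp_spec = vim.ResourceConfigSpec(
        cpuAllocation=vim.ResourceAllocationInfo(
            reservation=0, limit=-1, expandableReservation=True,
            shares=vim.SharesInfo(level=vim.SharesInfo.Level.high, shares=0)),
        memoryAllocation=vim.ResourceAllocationInfo(
            reservation=0, limit=-1, expandableReservation=True,
            shares=vim.SharesInfo(level=vim.SharesInfo.Level.normal, shares=0)))
    pool = cluster.resourcePool.CreateResourcePool(name="prod-pool", spec=rp_spec)

    # Scale descendant resource pool shares dynamically (vSphere 7.0+).
    drs = vim.cluster.DrsConfigInfo(scaleDescendantsShares="scaleCpuAndMemoryShares")
    cluster.ReconfigureComputeResource_Task(
        vim.cluster.ConfigSpecEx(drsConfig=drs), modify=True)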

vSphere Cluster Services

vSphere Cluster Services (vCLS) is enabled by default and runs in all vSphere clusters. vCLS ensures that if vCenter Server becomes unavailable, cluster services remain available to maintain the resources and health of the workloads that run in the clusters.

vSphere DRS is a critical vSphere feature for maintaining the health of the workloads that run in a vSphere cluster. DRS depends on the availability of vCLS VMs.

Because vSphere DRS depends on their availability, vCLS VMs must remain powered on at all times and must be treated as system VMs. vCenter Server does not block operations on vCLS VMs, but any disruptive operation can cause vSphere DRS to fail. To avoid the failure of cluster services, do not perform configuration changes or operations on the vCLS VMs.
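To verify the vCLS VMs without modifying them, a read-only inventory query is safe. The sketch below lists their power state; the "vCLS" name prefix reflects default naming and is an assumption.

    # Sketch: list vCLS system VMs and their power state (read-only).
    # Assumes the pyVmomi session ('content') from the HA sketch above.
    from pyVmomi import vim

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.name.startswith("vCLS"):
            print(vm.name, vm.runtime.powerState)
    view.DestroyView()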

vSphere Lifecycle Manager

vSphere Lifecycle Manager manages the software and firmware lifecycle of the ESXi hosts in a cluster by using a single image. vSphere Lifecycle Manager images provide a simplified, unified workflow for patching and upgrading ESXi hosts, and you can also use them for bootstrapping and firmware updates.

An image defines the exact software stack to run on all ESXi hosts in a cluster. When you set up an image, you select an ESXi version and a vendor add-on from the vSphere Lifecycle Manager depot. If no ESXi base images or vendor add-ons are available in the depot, you must populate the depot by synchronizing it or by uploading updates manually.
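As an illustration, you can read the image assigned to a cluster through the vSphere Automation REST API. In the sketch below, the endpoint paths follow the vSphere 7.0+ Automation API, and the vCenter address, credentials, and cluster ID are placeholder assumptions; confirm all of them against the API reference for your vSphere version.

    # Sketch: read a cluster's vSphere Lifecycle Manager image (base image
    # and vendor add-on) over the vSphere Automation REST API. All names,
    # IDs, and paths are assumptions to verify for your environment.
    import requests

    VC = "https://vcenter.example.com"
    s = requests.Session()
    s.verify = False  # lab only; configure proper CA trust in production

    # Create an API session; the returned token goes into the session header.
    token = s.post(f"{VC}/api/session",
                   auth=("administrator@vsphere.local", "changeme")).json()
    s.headers["vmware-api-session-id"] = token

    cluster_id = "domain-c8"  # placeholder managed object ID
    base = s.get(
        f"{VC}/api/esx/settings/clusters/{cluster_id}/software/base-image").json()
    addon = s.get(
        f"{VC}/api/esx/settings/clusters/{cluster_id}/software/add-on").json()
    print("ESXi base image:", base.get("version"))
    print("Vendor add-on:", addon)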

Table 1. Recommended vSphere Cluster Design

Design Recommendation: Create a single management cluster that contains all the management ESXi hosts.

Design Justification:

  • Simplifies configuration by isolating management workloads from compute workloads.

  • Ensures that the compute workloads have no impact on the management stack.

  • You can add ESXi hosts to the cluster as needed.

Design Implication: Management of multiple clusters and vCenter Server instances increases operational overhead.

Design Recommendation: Create a single edge cluster per compute workload domain.

Design Justification: Supports running NSX Edge nodes in a dedicated cluster.

Design Implication: Requires an additional vSphere cluster.

Design Recommendation: Create at least one compute cluster. This cluster contains compute workloads.

Design Justification:

  • Compute clusters can be placed close to the end users of the workloads that run in them.

  • The management stack has no impact on compute workloads.

  • You can add ESXi hosts to the cluster as needed.

Design Implication: Management of multiple clusters and vCenter Server instances increases operational overhead.

Design Recommendation: Create a management cluster with a minimum of four ESXi hosts.

Design Justification: Allocating four ESXi hosts provides full redundancy for the cluster.

Design Implication: Additional ESXi host resources are required for redundancy.

Design Recommendation: Create an edge cluster with a minimum of three ESXi hosts.

Design Justification: Supports availability for a minimum of two NSX Edge nodes.

Design Implication: As Edge nodes are added, additional ESXi hosts must be added to the cluster to maintain availability.

Design Recommendation: Create a compute cluster with a minimum of four ESXi hosts.

Design Justification: Allocating four ESXi hosts provides full redundancy for the cluster.

Design Implication: Additional ESXi host resources are required for redundancy.

Design Recommendation: Use vSphere HA to protect all VMs against failures.

Design Justification: Provides a robust level of protection for VM availability.

Design Implication: You must provide sufficient resources on the remaining hosts so that VMs can be restarted on those hosts in the event of a host outage.

Design Recommendation: Set the Host Isolation Response of vSphere HA to Power Off and Restart VMs.

Design Justification: vSAN requires the HA isolation response to be set to Power Off so that VMs can be restarted on the available ESXi hosts.

Design Implication: VMs are powered off in the event of a false positive, when an ESXi host is incorrectly declared isolated.

Design Recommendation: Enable vSphere DRS in the management cluster and set it to Fully Automated with the default migration threshold (medium).

Design Justification: Provides the best trade-off between load balancing and excessive vSphere vMotion migrations.

Design Implication: If a vCenter Server outage occurs, the mapping from VMs to ESXi hosts might be more difficult to determine.

Design Recommendation: Enable vSphere DRS in the edge and compute clusters and set it to Partially Automated mode.

Design Justification:

  • Enables automatic initial placement.

  • Ensures that latency-sensitive VMs do not move between ESXi hosts automatically.

Design Implication: Increases the administrative overhead of ensuring that the cluster is properly balanced.

Design Recommendation: When creating resource pools, enable Scalable Shares.

Design Justification: Scalable Shares ensures that the shares available in each resource pool are dynamic, based on the number and priority of VMs in the resource pool, instead of a static value.

Design Implication: None.

Design Recommendation: Use vSphere Lifecycle Manager images to ensure that all hosts in a cluster run the same software versions.

Design Justification: Images allow a single ESXi image plus vendor add-on to be assigned to the cluster, ensuring that each ESXi host runs the same ESXi version and vendor add-ons.

Design Implication: Workload Management is not compatible with vSphere Lifecycle Manager images.