The vCenter Server functionality is distributed across a minimum of two workload domains and two vSphere clusters. The telco cloud solution uses two vCenter Server instances: one for the management domain and another for the first compute workload domain. The compute workload domain can contain multiple cell site ESXi hosts.

When designing a cluster for workloads, consider the types of workloads that the cluster handles. Different cluster types have different characteristics.

When designing the cluster layout in vSphere, consider the following guidelines:

  • Use a few large-sized ESXi hosts or many small-sized ESXi hosts for Core clusters:

    • A scale-up cluster has few large-sized ESXi hosts.

    • A scale-out cluster has more small-sized ESXi hosts.

  • Use ESXi hosts that are sized appropriately for your Cell Site locations.

  • Consider the total number of ESXi hosts and cluster limits according to the vCenter Server configuration maximums.

  • Consider the applications deployed to the cluster or workload domain.

  • Consider additional space for rolling upgrades of Kubernetes nodes.

  • Consider HA constraints and requirements for host outages.

In a D-RAN scenario, hosts are not added to a vSphere cluster. When deployed through Telco Cloud Automation Cell Site Groups, the hosts are added to a folder structure that is aligned with the Cell Site Grouping in vCenter Server, not as members of a cluster.
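
Telco Cloud Automation performs this placement automatically when hosts are onboarded through a Cell Site Group. For illustration only, the following pyVmomi sketch shows the underlying vCenter Server operation of adding a standalone ESXi host under a folder rather than a cluster; the folder name, host address, and credentials are placeholder assumptions and are not part of this design.

  # Illustrative pyVmomi sketch: add a standalone cell site host under a
  # vCenter Server folder instead of a cluster. All names and credentials
  # are placeholders; Telco Cloud Automation normally performs this step.
  from pyVim.connect import SmartConnect, Disconnect
  from pyVmomi import vim

  si = SmartConnect(host="vcenter.example.com",
                    user="administrator@vsphere.local",
                    pwd="***", disableSslCertValidation=True)
  try:
      content = si.RetrieveContent()
      datacenter = content.rootFolder.childEntity[0]   # assumes a single datacenter

      # Folder aligned with the Cell Site Group, for example "csg-region-01".
      cell_site_folder = datacenter.hostFolder.CreateFolder("csg-region-01")

      # Connection details for the cell site ESXi host (placeholders). Production
      # code must also supply the host certificate thumbprint in sslThumbprint.
      connect_spec = vim.host.ConnectSpec(
          hostName="cellsite-esxi-01.example.com",
          userName="root",
          password="***",
          force=False,
      )

      # Add the host directly under the folder, not as a member of any cluster.
      cell_site_folder.AddStandaloneHost_Task(spec=connect_spec, addConnected=True)
  finally:
      Disconnect(si)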

In a C-RAN deployment, depending on the workload deployments (DU / CU), a combination of standalone hosts and clusters may be necessary to accommodate availability requirements.

Consider isolating Telco Cloud Platform Essentials deployments from Telco Cloud Platform Advanced deployments by using separate vCenter Server instances and clusters. This separation avoids licensing challenges, provides workload separation, and allows different lifecycle management cadences for each environment.

vSphere High Availability

If an ESXi host fails, vSphere High Availability (vSphere HA) protects VMs by restarting them on other hosts in the same cluster. During the cluster configuration, the ESXi hosts elect a primary ESXi host. The primary ESXi host communicates with the vCenter Server system and monitors the VMs and secondary ESXi hosts in the cluster.

The primary ESXi host detects different types of failure:

  • ESXi host failure, for example, an unexpected power failure

  • ESXi host network isolation or connectivity failure

  • Loss of storage connectivity

  • Problems with virtual machine guest OS availability

The vSphere HA admission control policy allows an administrator to configure how the cluster reserves failover capacity. In a small vSphere HA cluster, a large proportion of the cluster resources is reserved to accommodate ESXi host failures, depending on the selected policy.

Note:

Because D-RAN deployments do not use vSphere clusters, vSphere HA is not a consideration for D-RAN deployments. If the cell site architecture deploys more than one ESXi host, the hosts are added as individual hosts, not as a cluster.

The following vSphere HA admission control policies are available for workload clusters (a configuration sketch follows this list):

  • Cluster resource percentage: Reserves a specific percentage of cluster CPU and memory resources for recovery from host failures. With this type of admission control, vSphere HA ensures that a specified percentage of aggregate CPU and memory resources is reserved for failover.

  • Slot policy: vSphere HA admission control ensures that a specified number of hosts can fail while sufficient resources remain in the cluster to fail over all the VMs from those hosts.

    • A slot is a logical representation of memory and CPU resources. By default, the slot is sized to meet the requirements for any powered-on VM in the cluster.

    • vSphere HA determines the current failover capacity in the cluster and leaves enough slots for the powered-on VMs. The failover capacity specifies the number of hosts that can fail.

  • Dedicated failover hosts: When a host fails, vSphere HA attempts to restart its VMs on any of the specified failover hosts.
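
As a minimal illustration of how these settings map to the cluster configuration, the following pyVmomi sketch enables vSphere HA with the cluster resource percentage admission control policy and a Power Off host isolation response. The cluster name, reservation percentages, and connection details are assumptions for illustration, not values prescribed by this design.

  # Illustrative pyVmomi sketch: enable vSphere HA with a percentage-based
  # admission control policy and a "Power Off" host isolation response.
  # Cluster name and reservation percentages are placeholder assumptions.
  from pyVim.connect import SmartConnect, Disconnect
  from pyVmomi import vim

  si = SmartConnect(host="vcenter.example.com",
                    user="administrator@vsphere.local",
                    pwd="***", disableSslCertValidation=True)
  try:
      content = si.RetrieveContent()
      # Look up the target cluster by name (placeholder name).
      view = content.viewManager.CreateContainerView(
          content.rootFolder, [vim.ClusterComputeResource], True)
      cluster = next(c for c in view.view if c.name == "compute-cluster-01")

      das = vim.cluster.DasConfigInfo(
          enabled=True,                      # turn on vSphere HA
          hostMonitoring="enabled",
          admissionControlEnabled=True,
          # Cluster resource percentage policy: reserve 25% of CPU and memory
          # for failover capacity (example values only).
          admissionControlPolicy=vim.cluster.FailoverResourcesAdmissionControlPolicy(
              cpuFailoverResourcesPercent=25,
              memoryFailoverResourcesPercent=25),
          # Host isolation response: Power Off and restart VMs, as required for vSAN.
          defaultVmSettings=vim.cluster.DasVmSettings(isolationResponse="powerOff"),
      )
      cluster.ReconfigureComputeResource_Task(
          vim.cluster.ConfigSpecEx(dasConfig=das), modify=True)
  finally:
      Disconnect(si)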

Design Recommendation: Use vSphere HA to protect all VMs against failures.
  Design Justification: vSphere HA provides a robust level of protection for VM availability.
  Design Implication: You must provide sufficient resources on the remaining hosts so that VMs can be restarted on those hosts in the event of a host outage.
  Domain Applicability: Management domain, compute clusters, VNF, CNF, C-RAN, Near/Far Edge, and NSX Edge. Not applicable to RAN sites.

Design Recommendation: Set the Host Isolation Response of vSphere HA to Power Off and Restart VMs.
  Design Justification: vSAN requires that the HA isolation response is set to Power Off and that the VMs are restarted on available ESXi hosts.
  Design Implication: VMs are powered off in the case of a false positive, where an ESXi host is incorrectly declared isolated.
  Domain Applicability: Management domain, compute clusters, VNF, CNF, C-RAN, Near/Far Edge, and NSX Edge. Not applicable to RAN sites.

vSphere Distributed Resource Scheduler

The distribution and usage of CPU and memory resources for all hosts and VMs in the cluster are monitored continuously. The vSphere Distributed Resource Scheduler (DRS) compares these metrics to an ideal resource usage based on the attributes of the cluster’s resource pools and VMs, the current demand, and the imbalance target. DRS then provides recommendations or performs VM migrations accordingly.

DRS supports the following modes of operation (a configuration sketch follows the note below):

  • Manual

    • Initial placement: Recommended host is displayed.

    • Migration: Recommendation is displayed.

  • Partially Automated

    • Initial placement: Automatic

    • Migration: Recommendation is displayed.

  • Fully Automated

    • Initial placement: Automatic

    • Migration: Recommendation is run automatically.

Note:
  • The configuration of DRS modes can vary between clusters. Some workloads or applications may not fully support the automated vMotion migrations that occur when DRS optimizes the cluster.

  • Due to the nature of the host deployments and the Platform Awareness functionality leveraged in the RAN (SR-IOV or Accelerator pass-through), DRS is not a consideration for RAN deployments.
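
The following pyVmomi sketch illustrates one way to apply these DRS modes to an existing cluster. The helper function and cluster objects are assumptions for illustration; the same call accepts manual, partiallyAutomated, or fullyAutomated behavior, matching the recommendations below.

  # Illustrative pyVmomi sketch: enable DRS with a chosen automation level.
  # The cluster objects are assumed to be looked up as in the earlier HA sketch.
  from pyVmomi import vim

  def configure_drs(cluster, behavior="fullyAutomated", migration_threshold=3):
      """Enable DRS on an existing vim.ClusterComputeResource object.

      behavior: "manual", "partiallyAutomated", or "fullyAutomated".
      migration_threshold: 1-5; 3 is the default (medium) setting.
      """
      drs = vim.cluster.DrsConfigInfo(
          enabled=True,
          defaultVmBehavior=behavior,
          vmotionRate=migration_threshold,
      )
      spec = vim.cluster.ConfigSpecEx(drsConfig=drs)
      return cluster.ReconfigureComputeResource_Task(spec, modify=True)

  # Example usage per the recommendations below:
  # configure_drs(mgmt_cluster, behavior="fullyAutomated")        # management domain
  # configure_drs(compute_cluster, behavior="partiallyAutomated") # edge/compute clusters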

Design Recommendation: Enable vSphere DRS in the management cluster and set it to Fully Automated mode with the default migration threshold (medium).
  Design Justification: Provides the best trade-off between load balancing and excessive migrations triggered by vSphere vMotion events.
  Design Implication: If a vCenter Server outage occurs, the mapping from VMs to ESXi hosts might be difficult to determine.
  Domain Applicability: Management domain.

Design Recommendation: Enable vSphere DRS in the edge and compute clusters and set it to Partially Automated mode.
  Design Justification: Enables automatic initial placement and ensures that latency-sensitive VMs do not move between ESXi hosts automatically.
  Design Implication: Increases the administrative overhead of ensuring that the cluster is properly balanced.
  Domain Applicability: Compute clusters, VNF, CNF, C-RAN, Near/Far Edge, and NSX Edge. Not applicable to RAN sites.

Resource Pools

A resource pool is a logical abstraction for flexible management of resources. Resource pools can be grouped into hierarchies and used to hierarchically partition available CPU and memory resources.

Each DRS cluster or standalone host has an invisible root resource pool that groups the resources of that cluster. The root resource pool does not appear because the resources of the cluster and the root resource pool are always the same.

You can create child resource pools from the root resource pool. Each child resource pool owns some of the parent’s resources and can, in turn, have a hierarchy of child resource pools to represent successively smaller units of computational capability.

A resource pool can contain child resource pools, VMs, or both. You can create a hierarchy of shared resources. The resource pools at a higher level are called parent resource pools. Resource pools and VMs that are at the same level are called siblings. The cluster represents the root resource pool. If you do not create child resource pools, only the root resource pools exist.

Scalable Shares allows a resource pool's share allocation to scale dynamically as VMs are added to or removed from the resource pool hierarchy.

The resource pool is a key component for both Cloud Director and Tanzu Kubernetes Grid deployments. The Organization Virtual Data Centers (Org VDCs) that are created by VMware Cloud Director are manifested as resource pools in vCenter Server, depending on the configuration of the components in Cloud Director. The backing resource pools can be spread across multiple clusters, and VMware Cloud Director determines the resource pool to which a workload is deployed.

For Tanzu Kubernetes Grid, the resource pool is the target endpoint for a node pool of the Tanzu Kubernetes cluster. The control plane and node pools can be deployed to separate vSphere clusters and resource pools, but a single node pool or the control plane nodes cannot span multiple endpoints.

VMware Telco Cloud Automation does not create resource pools as Cloud Director does. Therefore, the resource pools for Tanzu Kubernetes Grid deployments must be created manually and sized with appropriate limits and reservations in vCenter Server before the Tanzu Kubernetes cluster is created.
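
As an illustration of this manual step, the following pyVmomi sketch creates a child resource pool with CPU and memory reservations and limits under a cluster's root resource pool as a target for a Tanzu Kubernetes Grid node pool. The pool name and sizing values are placeholder assumptions and must be derived from the actual network function requirements.

  # Illustrative pyVmomi sketch: manually create a TKG resource pool with
  # reservations and limits. Pool name and sizing values are placeholders.
  from pyVmomi import vim

  def create_tkg_resource_pool(cluster, name="tkg-nf-pool",
                               cpu_mhz=20000, mem_mb=65536):
      """Create a child resource pool under the cluster's root resource pool."""
      def allocation(reservation, limit):
          return vim.ResourceAllocationInfo(
              reservation=reservation,        # guaranteed capacity
              limit=limit,                    # upper bound (-1 means unlimited)
              expandableReservation=False,    # enforce strict admission control
              shares=vim.SharesInfo(level=vim.SharesInfo.Level.normal),
          )

      spec = vim.ResourceConfigSpec(
          cpuAllocation=allocation(cpu_mhz, cpu_mhz),   # MHz
          memoryAllocation=allocation(mem_mb, mem_mb),  # MB
      )
      # cluster.resourcePool is the invisible root resource pool described above.
      return cluster.resourcePool.CreateResourcePool(name=name, spec=spec)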

Note:

When vSphere resources are added to Cloud Director as resource pools (instead of the entire cluster), resource pools for Cloud Director and resource pools for Tanzu Kubernetes Grid can exist on the same cluster. This level of cluster sharing is not recommended.

Design Recommendation: Do not share a single vSphere cluster between Cloud Director and TKG resources.
  Design Justification: Provides better isolation between VNFs and CNFs within a single vCenter Server.
  Design Implication: Requires separate clusters for VNF and CNF workloads.
  Domain Applicability: Compute clusters, VNF, CNF, C-RAN, Near/Far Edge, and NSX Edge.

Design Recommendation: Create TKG resource pools and allocate appropriate resources for each network function.
  Design Justification: Ensures proper admission control for TKG clusters hosting control-plane workloads.
  Design Implication: Requires an understanding of application sizing and scale to configure the resource pool effectively.
  Domain Applicability: Compute clusters, VNF, CNF, C-RAN, Near/Far Edge, and NSX Edge.

The root resource pool can be used for D-RAN deployments.

Workload Cluster Scale

Workload cluster scale depends on the management architecture design. In both the centralized management domain and the multi-site configuration, independent vCenter Server instances can manage each domain or site.

Additional clusters can be added to the resource vCenter Server, either at the same location or at a remote location. The definition of a 'site' differs from that of a workload domain: a workload domain can span multiple locations. Near/Far Edge locations must be managed by a vCenter Server within the local workload domain.

In an NSX design, NSX Manager and vCenter Server must have a 1:1 mapping. NSX Manager manages not only the local or co-located environment but also the remote locations connected to the domain.