Tenant workloads run on the ESXi hosts in the shared edge and compute cluster. Because of the shared nature of the cluster, NSX-T Edge appliances also run in this cluster. To support these workloads, you must determine the number of ESXi hosts, the vSphere HA settings, and several other characteristics of the shared edge and compute cluster.

Table 1. Shared Edge and Compute Cluster Design Decisions

Decision ID: NSXT-VI-VC-003
Design Decision: Create a shared edge and compute cluster that contains tenant workloads and NSX-T Edge appliances.
Design Justification: Limits the footprint of the design by avoiding a vSphere cluster dedicated to the NSX-T Edge nodes.
Design Implication: In a shared cluster, the VLANs and subnets for the ESXi host overlay VMkernel ports and for the overlay ports of the edge appliances must be separate.

Decision ID: NSXT-VI-VC-004
Design Decision: Configure admission control for a failure of one ESXi host and percentage-based failover capacity.
Design Justification: vSphere HA protects the tenant workloads and NSX-T Edge appliances in the event of an ESXi host failure. vSphere HA powers on the virtual machines from the non-responding ESXi hosts on the remaining ESXi hosts.
Design Implication: Only a single ESXi host failure is tolerated before resource contention occurs.

Decision ID: NSXT-VI-VC-005
Design Decision: Create a shared edge and compute cluster that consists of a minimum of four ESXi hosts.
Design Justification: Allocating four ESXi hosts provides full redundancy within the cluster.
Design Implication: Four ESXi hosts is the smallest starting point that provides redundancy and performance for the shared edge and compute cluster, which increases cost accordingly.

Decision ID: NSXT-VI-VC-006
Design Decision: Create a resource pool for the two large-sized edge virtual machines with a CPU share level of High, a memory share level of Normal, and a 32-GB memory reservation.
Design Justification: The NSX-T Edge appliances control all network traffic in and out of the SDDC. In a contention situation, these appliances must receive all the resources they require.
Design Implication: During contention, the NSX-T components receive more resources than the other workloads. As a result, monitoring and capacity management must be proactive activities. The resource pool memory reservation must be expanded if you plan to deploy more NSX-T Edge appliances.

Decision ID: NSXT-VI-VC-007
Design Decision: Create a resource pool for all tenant workloads with a CPU share value of Normal and a memory share value of Normal.
Design Justification: Running virtual machines directly at the cluster level negatively impacts all other virtual machines during contention. To avoid an impact on network connectivity in a shared edge and compute cluster, the NSX-T Edge appliances must receive resources with priority over the other workloads. Setting the tenant share values to Normal increases the relative share of resources that the NSX-T Edge appliances receive in the cluster.
Design Implication: During contention, tenant workloads might have insufficient resources and poor performance. Proactively perform monitoring and capacity management, and add capacity or dedicate an edge cluster before contention occurs.

Decision ID: NSXT-VI-VC-008
Design Decision: Create a host profile for the shared edge and compute cluster.
Design Justification: Using host profiles simplifies the configuration of ESXi hosts and ensures that settings are uniform across the cluster.
Design Implication: The host profile is useful only for initial cluster deployment and configuration. After you add the ESXi hosts as transport nodes to the NSX-T deployment, the host profile is no longer usable and you must remove it from the cluster.
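Decision NSXT-VI-VC-004 pairs host-failure admission control with percentage-based failover capacity. Assuming hosts with uniform capacity, vSphere HA derives the reserved percentage from the number of hosts and the number of tolerated failures; a minimal sketch of that arithmetic (the function name is illustrative, not a vSphere API):

```python
def failover_capacity_percent(total_hosts: int, host_failures_to_tolerate: int) -> int:
    """Percentage of cluster CPU and memory that vSphere HA reserves for
    failover, assuming all hosts contribute equal capacity."""
    if not 0 < host_failures_to_tolerate < total_hosts:
        raise ValueError("tolerated failures must be between 1 and total_hosts - 1")
    # Reserve the share of capacity contributed by the hosts that may fail.
    return round(100 * host_failures_to_tolerate / total_hosts)

# Minimum shared edge and compute cluster: four hosts, one tolerated failure.
print(failover_capacity_percent(4, 1))  # 25
```

This also shows why decision NSXT-VI-VC-005 interacts with admission control: growing the cluster beyond four hosts shrinks the reserved percentage, leaving more usable capacity per host.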
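Decisions NSXT-VI-VC-006 and NSXT-VI-VC-007 rely on proportional share scheduling: during contention, sibling resource pools receive capacity in proportion to their share values, so the edge pool's High CPU shares outweigh the tenant pool's Normal shares. A sketch of that proportionality, assuming vSphere's default resource pool CPU share values (High = 8000, Normal = 4000) and hypothetical pool names:

```python
def allocate_under_contention(capacity_mhz: float, shares: dict[str, int]) -> dict[str, float]:
    """Split fully contended capacity across sibling resource pools in
    proportion to their configured share values."""
    total_shares = sum(shares.values())
    return {name: capacity_mhz * s / total_shares for name, s in shares.items()}

# Hypothetical 40 GHz of contended CPU split between the edge resource pool
# (CPU share level High) and the tenant resource pool (CPU share level Normal).
pools = {"edge-rp": 8000, "tenant-rp": 4000}
allocation = allocate_under_contention(40_000, pools)
print(allocation)  # edge-rp receives twice the CPU of tenant-rp
```

Note that shares only matter under contention; the 32-GB memory reservation in NSXT-VI-VC-006 additionally guarantees the edge appliances physical memory regardless of share values.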