The virtual infrastructure layer of the Standard SDDC contains the components that provide compute, networking, and storage resources to the management and tenant workloads.

vCenter Server Design

Table 1. vCenter Server Design Details

Design Area

Design Details

vCenter Server instances

You deploy two vCenter Server instances in the following way:

  • One vCenter Server instance supporting the SDDC management components.

  • One vCenter Server instance supporting the edge components and tenant workloads.

Using this model provides the following benefits:

  • Isolation of management and compute vCenter Server operations

  • Simplified capacity planning

  • Separate upgrade cycles

  • Separation of roles


You distribute hosts and workloads in the following clusters:

  • Management cluster that contains all management hosts and handles resources for the management workloads.

  • Shared edge and compute cluster that contains tenant workloads, NSX Controllers, and associated NSX Edge gateway devices used for the tenant workloads.

Resource pools for tenant workloads and dedicated NSX components

On the shared edge and compute cluster, you use resource pools to distribute compute and storage resources to the tenant workloads and the NSX components carrying their traffic.
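The division of cluster capacity by resource pools can be sketched as follows. This is an illustrative model only, not a VMware API: under contention, vSphere entitles each pool to a slice of the cluster proportional to its share value. The pool names and share numbers below are hypothetical.

```python
# Illustrative sketch (not a VMware API): how share-based resource pools
# divide a cluster's CPU capacity under contention. Each pool's
# entitlement is proportional to its configured share value.

def entitlements(cluster_mhz, pool_shares):
    """Return each pool's CPU entitlement in MHz, proportional to shares."""
    total = sum(pool_shares.values())
    return {name: cluster_mhz * s / total for name, s in pool_shares.items()}

# Hypothetical pools on the shared edge and compute cluster:
pools = {"nsx-edge-components": 4000, "tenant-workloads": 8000}
alloc = entitlements(100_000, pools)  # a 100 GHz cluster
```

With these example shares, the tenant pool is entitled to twice the capacity of the NSX edge pool whenever the cluster is saturated; when there is no contention, either pool can burst beyond its entitlement.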

Deployment model

This VMware Validated Design uses two external Platform Services Controller instances and two vCenter Server instances.

For redundancy, the design joins the two Platform Services Controller instances to the same vCenter Single Sign-On domain, and points the vCenter Server instances to a load balancer that distributes the requests between the two Platform Services Controller instances.
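The failover behavior this buys can be sketched as a toy model. This is a minimal sketch with assumed names, not the actual load balancer used by the design: requests round-robin across the two Platform Services Controller instances, and an unhealthy instance is skipped so vCenter Server requests keep flowing.

```python
# Minimal sketch (assumed names, not a real NSX or PSC interface):
# round-robin distribution of requests across two Platform Services
# Controller instances, skipping any instance marked unhealthy.

class PscLoadBalancer:
    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.healthy = set(nodes)
        self._i = 0

    def mark_down(self, node):
        """Record a failed health check for a node."""
        self.healthy.discard(node)

    def route(self):
        """Return the next healthy PSC node, round-robin."""
        for _ in range(len(self.nodes)):
            node = self.nodes[self._i % len(self.nodes)]
            self._i += 1
            if node in self.healthy:
                return node
        raise RuntimeError("no healthy Platform Services Controller")

lb = PscLoadBalancer(["psc-01", "psc-02"])
```

If `psc-01` fails, `lb.mark_down("psc-01")` causes every subsequent request to land on `psc-02`, which is the redundancy the shared vCenter Single Sign-On domain makes safe.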

Management host provisioning

You use host profiles to apply the networking and authentication configuration on the ESXi hosts in the management pod and in the shared edge and compute pod.

Figure 1. Layout of vCenter Server Clusters

Dynamic Routing and Application Virtual Networks

This VMware Validated Design supports dynamic routing for both management and tenant workloads, and also introduces a model of isolated application networks for the management components.

Dynamic routing support includes the following nodes:

  • Pair of NSX Edge service gateways (ESGs) with ECMP enabled for north/south routing across all regions.

  • Universal distributed logical router (UDLR) for east/west routing across all regions.

  • Distributed logical router (DLR) for the shared edge and compute cluster and compute clusters to provide east/west routing for workloads that require on-demand network objects from vRealize Automation.
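The ECMP behavior of the ESG pair can be sketched as follows. This is an illustrative model, not NSX code: with equal-cost multipathing, a per-flow hash over the packet's source and destination addresses picks one of the available paths, so a given flow stays pinned to a single edge while the aggregate load spreads across both.

```python
# Illustrative sketch of per-flow ECMP path selection (not NSX code):
# hash the flow's source/destination pair and use it to pick a next hop,
# so each flow is deterministically pinned to one equal-cost path.

import zlib

def ecmp_next_hop(src_ip, dst_ip, paths):
    """Pick a next hop deterministically from the flow's address pair."""
    key = f"{src_ip}-{dst_ip}".encode()
    return paths[zlib.crc32(key) % len(paths)]

# Hypothetical pair of ESGs providing north/south routing:
esgs = ["esg-01", "esg-02"]
hop = ecmp_next_hop("172.16.10.5", "10.0.0.8", esgs)
```

Determinism matters here: because the same flow always hashes to the same gateway, stateless distribution across the ESG pair does not reorder packets within a flow.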

Application virtual networks restrict access to the application nodes, exposing them only through published access points. Three application virtual networks exist:

  • Cross-region application virtual network that connects the components that are designed to fail over to a recovery region.

  • Region-specific application virtual network in Region A for components that are not designed to fail over.

  • Region-specific application virtual network in Region B for components that are not designed to fail over.

Figure 2. Virtual Application Network Components and Design

Distributed Firewall

This VMware Validated Design uses the distributed firewall functionality that is available in NSX to protect all management applications attached to application virtual networks.

Software-Defined Storage Design for Management Products

In each region, workloads on the management cluster store their data on a vSAN datastore. The vSAN datastore spans all four ESXi hosts of the management cluster. Each host adds one disk group to the datastore.

Applications store their data according to the default storage policy for vSAN.
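The capacity implication of the default policy can be estimated with simple arithmetic. The default vSAN storage policy mirrors each object with a failures-to-tolerate (FTT) value of 1, so every object is stored FTT + 1 times; the sketch below is a back-of-the-envelope estimate that ignores witness and metadata overhead.

```python
# Back-of-the-envelope sketch: raw vSAN capacity consumed by an object
# under a mirroring policy with a given failures-to-tolerate (FTT)
# value. Each object is stored FTT + 1 times; witness and metadata
# overhead are ignored for simplicity.

def raw_capacity_gb(usable_gb, ftt=1):
    """Raw datastore capacity consumed by the mirrored replicas."""
    return usable_gb * (ftt + 1)

needed = raw_capacity_gb(200)  # a 200 GB object under the default policy
```

Under the default FTT=1 policy, a 200 GB object therefore consumes roughly 400 GB of raw datastore capacity, which is why usable vSAN capacity planning must account for the replica multiplier.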

Figure 3. vSAN Conceptual Design

vSphere Data Protection, vRealize Log Insight, and the vRealize Automation Content Library use NFS exports as secondary storage. You create two datastores: one in the management cluster for vSphere Data Protection and one in the shared edge and compute cluster for vRealize Automation.