The SDDC functionality is split across multiple pods. Each pod can occupy one rack or multiple racks. The total number of pods for each pod type depends on scalability needs.

Figure 1. SDDC Pod Architecture


Two racks host compute pods of 19 ESXi hosts each. A third rack, with an external connection, hosts the shared edge and compute pod (4 hosts) and the management pod (4 hosts).

Table 1. Required Number of Racks

Pod (Function): Management pod and shared edge and compute pod
Required Number of Racks (for full-scale deployment): 1
Minimum Number of Racks: 1
Comment: Two half-racks are sufficient for the management pod and the shared edge and compute pod. As the number and resource usage of compute VMs increase, additional hosts must be added to the cluster, so reserve extra space in the rack for growth.

Pod (Function): Compute pods
Required Number of Racks (for full-scale deployment): 6
Minimum Number of Racks: 0
Comment: With 6 compute racks, 6 compute pods of 19 ESXi hosts each can achieve the target size of 6000 average-sized VMs. If an average-sized VM has two vCPUs and 4 GB of RAM, 6000 VMs with 20% overhead for bursting workloads require 114 hosts (see the sizing sketch after this table). The required quantity and performance of hosts vary based on the workloads running within the compute pods.

Pod (Function): Storage pods
Required Number of Racks (for full-scale deployment): 6
Minimum Number of Racks: 0 (if using vSAN for compute pods)
Comment: Storage that is not vSAN storage is hosted on isolated storage pods.

Pod (Function): Total
Required Number of Racks (for full-scale deployment): 13
Minimum Number of Racks: 1
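
The 114-host figure above can be cross-checked with a rough capacity estimate. The sketch below is illustrative only: the usable RAM per host is an assumed value, not part of this design, and must be replaced with the capacity of the selected server hardware. The design itself provides 114 hosts as 6 pods of 19 ESXi hosts each.

```python
import math

# Workload target from the table above.
vm_count = 6000
vcpus_per_vm = 2
ram_gb_per_vm = 4
burst_overhead = 0.20  # 20% headroom for bursting workloads

# Assumed per-host capacity (hypothetical; set this from the chosen server hardware).
usable_ram_gb_per_host = 256

total_vcpus = vm_count * vcpus_per_vm * (1 + burst_overhead)    # 14,400 vCPUs
total_ram_gb = vm_count * ram_gb_per_vm * (1 + burst_overhead)  # 28,800 GB

# Host count driven by RAM; a similar check applies to vCPU-to-core overcommit.
hosts_for_ram = math.ceil(total_ram_gb / usable_ram_gb_per_host)
print(f"RAM-driven host estimate: {hosts_for_ram}")  # 113 with these assumptions

# Rack math from the design: 6 compute pods x 19 ESXi hosts each.
print(f"Hosts provided by the design: {6 * 19}")     # 114, which covers the estimate
```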

Table 2. POD and Racks Design Decisions

Decision ID: SDDC-PHY-003
Design Decision: The management pod and the shared edge and compute pod occupy the same rack.
Design Justification: The number of compute resources required for the management pod (4 ESXi servers) and the shared edge and compute pod (4 ESXi servers) is low and does not justify a dedicated rack for each pod. On-ramp and off-ramp connectivity to physical networks (for example, north-south L3 routing on NSX Edge virtual appliances) can be supplied to both the management and compute pods through this management/edge rack. Edge resources require external connectivity to physical network devices, and placing the edge resources for management and compute in the same rack minimizes VLAN spread.
Design Implication: The design must include sufficient power and cooling to operate the server equipment; the exact requirements depend on the selected vendor and products. If the equipment in this entire rack fails, a second region is needed to mitigate the downtime associated with such an event.

Decision ID: SDDC-PHY-004
Design Decision: Storage pods can occupy one or more racks.
Design Justification: To simplify the scale-out of the SDDC infrastructure, the storage-pod-to-rack(s) relationship has been standardized. It is possible that the storage system arrives from the manufacturer in a dedicated rack or set of racks; a storage system of this type is accommodated in the design.
Design Implication: The design must include sufficient power and cooling to operate the storage equipment; the exact requirements depend on the selected vendor and products.

Decision ID: SDDC-PHY-005
Design Decision: Each rack has two separate power feeds.
Design Justification: Redundant power feeds increase availability by ensuring that the failure of one power feed does not bring down all equipment in a rack. Combined with redundant network connections into and within a rack, redundant power feeds prevent a single infrastructure failure from taking all equipment in the rack offline.
Design Implication: All equipment used must support two separate power feeds and must keep running if one power feed fails. If the equipment of an entire rack fails, the cause, such as flooding or an earthquake, also affects neighboring racks; a second region is needed to mitigate the downtime associated with such an event.

Decision ID: SDDC-PHY-006
Design Decision: Mount the compute resources for the management pod (minimum of 4 ESXi servers) together in a rack.
Design Justification: Mounting the compute resources for the management pod together can ease physical data center design, deployment, and troubleshooting.
Design Implication: None.

Decision ID: SDDC-PHY-007
Design Decision: Mount the compute resources for the shared edge and compute pod (minimum of 4 ESXi servers) together in a rack.
Design Justification: Mounting the compute resources for the shared edge and compute pod together can ease physical data center design, deployment, and troubleshooting.
Design Implication: None.