Use this design decision list as a reference for shared storage and vSAN principal storage in an environment with a single VMware Cloud Foundation instance. The design also considers whether an instance contains a single or multiple availability zones.

After you set up the physical storage infrastructure, the configuration tasks for most design decisions are automated in VMware Cloud Foundation. You must perform the configuration manually only for a limited number of decisions as noted in the design implication.

For full design details, see Shared Storage Design for a Virtual Infrastructure Workload Domain.

vSAN Deployment Specification

Table 1. Design Decisions on the Storage I/O Controller Configuration for vSAN

Decision ID: VCF-WLD-vSAN-CFG-001
Design Decision: Use a storage I/O controller with a minimum queue depth of 256 for the vSAN disk groups.
Design Justification:
  • Storage controllers with lower queue depths can cause performance and stability problems when running vSAN.
  • vSAN ReadyNode servers are configured with the right queue depths for vSAN.
Design Implication: Limits the number of compatible I/O controllers that can be used for storage.

Table 2. Design Decisions on the vSAN Configuration

Decision ID: VCF-WLD-vSAN-CFG-002
Design Decision: Configure vSAN in all-flash configuration in the VI workload domain cluster.
Design Justification:
  • Provides support for vSAN deduplication and compression.
  • Meets the performance needs of the cluster. Although a hybrid configuration with high-speed magnetic disks is supported and can provide satisfactory performance, it does not support deduplication and compression.
Design Implication: All vSAN disks must be flash disks, which might cost more than magnetic disks.

Table 3. Design Decisions on the vSAN Datastore

Decision ID: VCF-WLD-vSAN-CFG-003
Design Decision: On all vSAN datastores, maintain at least 30% free space at all times.
Design Justification: When vSAN reaches 80% usage, a rebalance task is started, which can be resource-intensive.
Design Implication: Increases the amount of available storage needed. (See the sizing sketch after this table.)
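
To illustrate the slack-space rule, the following minimal Python sketch computes how much of a datastore's raw capacity can be consumed while keeping 30% free. The capacity figure is a hypothetical example, not a value from this design.

```python
# Hypothetical illustration of the 30% slack-space rule for a vSAN datastore.
REBALANCE_THRESHOLD = 0.80  # vSAN starts rebalancing at 80% usage
FREE_SPACE_TARGET = 0.30    # this design keeps at least 30% free

def max_usable_capacity(raw_capacity_tb: float) -> float:
    """Capacity that workloads may consume while keeping 30% free."""
    return raw_capacity_tb * (1 - FREE_SPACE_TARGET)

raw_tb = 100.0  # example datastore size in TB
print(f"Raw: {raw_tb:.0f} TB, plan to consume at most "
      f"{max_usable_capacity(raw_tb):.0f} TB")
print(f"Rebalancing would start at {raw_tb * REBALANCE_THRESHOLD:.0f} TB used")
```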

Table 4. Design Decisions on the vSAN Cluster Size in a VI Workload Domain with a Single Availability Zone

Decision ID: VCF-WLD-vSAN-CFG-004
Design Decision: The VI workload domain cluster requires a minimum of 4 ESXi hosts to support vSAN.
Design Justification:
  • Having 4 ESXi hosts addresses the availability and sizing requirements.
  • You can take an ESXi host offline for maintenance or upgrades without impacting the overall vSAN cluster health.
Design Implication: The availability requirements for the VI workload domain cluster might cause underutilization of the cluster's ESXi hosts. (See the host-count sketch after this table.)
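
The four-host minimum follows from the vSAN failures-to-tolerate (FTT) arithmetic: RAID-1 mirroring with FTT=1 needs 2 × FTT + 1 = 3 hosts for the data copies and witness components, plus one spare host so that maintenance does not reduce availability. A minimal sketch of that reasoning, with the FTT=2 case added as a hypothetical:

```python
def min_vsan_hosts(ftt: int = 1, maintenance_spares: int = 1) -> int:
    """Minimum hosts for RAID-1 mirroring with the given failures to tolerate.

    2 * ftt + 1 hosts are needed for the data copies plus witness components;
    the spare lets one host enter maintenance mode without reducing protection.
    """
    return 2 * ftt + 1 + maintenance_spares

print(min_vsan_hosts())       # 4, matching VCF-WLD-vSAN-CFG-004
print(min_vsan_hosts(ftt=2))  # 6 hosts for FTT=2 (hypothetical case)
```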

Table 5. Design Decisions on the vSAN Cluster Size in a VI Workload Domain with Multiple Availability Zones

Decision ID: VCF-WLD-vSAN-CFG-005
Design Decision: The VI workload domain cluster requires a minimum of 8 ESXi hosts (4 in each availability zone) to support a stretched vSAN configuration.
Design Justification:
  • Having 8 ESXi hosts addresses the availability and sizing requirements.
  • You can take an availability zone offline for maintenance or upgrades without impacting the overall vSAN cluster health.
Design Implication: The capacity of the additional 4 hosts is not added to the capacity of the cluster. They are used only to provide additional availability. (See the capacity sketch after this table.)
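
Because a stretched cluster mirrors every object across the two availability zones, the raw capacity of the second zone holds the mirror copies instead of adding usable space. A worked example with hypothetical host sizes:

```python
# Hypothetical stretched-cluster capacity math; host sizes are examples only.
hosts_per_az = 4
raw_per_host_tb = 20.0

raw_total_tb = 2 * hosts_per_az * raw_per_host_tb  # both availability zones
# Site mirroring keeps a full copy of the data in each zone, so the usable
# pool before any local FTT overhead equals one zone's raw capacity.
usable_before_local_ftt_tb = raw_total_tb / 2

print(f"Raw across both zones: {raw_total_tb:.0f} TB")
print(f"Usable before local protection: {usable_before_local_ftt_tb:.0f} TB")
```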

Table 6. Design Decision on the Disk Groups per ESXi Host

Decision ID: VCF-WLD-vSAN-CFG-006
Design Decision: Configure vSAN with a minimum of two disk groups per ESXi host.
Design Justification: Reduces the size of the fault domain and spreads the I/O load over more disks for better performance.
Design Implication: Multiple disk groups require more disks in each ESXi host. (See the audit sketch after this table.)
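
As a quick audit of this rule, the following sketch uses the open-source pyVmomi library to count disk groups (vSAN disk mappings) on each host. The vCenter address, credentials, and SSL option are placeholders, and the property path reflects the public vSphere API as commonly documented; verify it against your pyVmomi release.

```python
# Sketch: flag hosts that have fewer than two vSAN disk groups.
# The vCenter address and credentials below are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="user", pwd="pass",
                  disableSslCertValidation=True)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        cfg = host.config.vsanHostConfig if host.config else None
        # Each vSAN disk mapping is one disk group: a cache device
        # plus its capacity disks.
        mappings = (cfg.storageInfo.diskMapping or []) if cfg else []
        if len(mappings) < 2:
            print(f"{host.name}: only {len(mappings)} disk group(s)")
    view.Destroy()
finally:
    Disconnect(si)
```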

Table 7. Design Decisions on the vSAN Disk Configuration

Decision ID: VCF-WLD-vSAN-CFG-007
Design Decision: Use a 600 GB or greater flash-based drive for the cache tier in each disk group.
Design Justification: Provides enough cache for both hybrid and all-flash vSAN configurations to buffer I/O and ensure disk group performance.
Design Implication: Larger flash disks can increase the initial host cost.

Table 8. Design Decisions on the vSAN Storage Policy in a VI Workload Domain with a Single Availability Zone

Decision ID: VCF-WLD-vSAN-CFG-008
Design Decision: Use the default vSAN storage policy.
Design Justification:
  • Provides the level of redundancy that is needed in the VI workload domain cluster.
  • Provides a level of performance that is sufficient for NSX Edge appliances and tenant workloads.
Design Implication: You might need additional policies for third-party virtual machines hosted in the VI workload domain cluster because their performance or availability requirements might differ from what the default vSAN policy supports.

Table 9. Design Decisions on the vSAN Storage Policy in a VI Workload Domain with Multiple Availability Zones

Decision ID: VCF-WLD-vSAN-CFG-009
Design Decision: Add the following setting to the default vSAN storage policy: Secondary Failures to Tolerate = 1.
Design Justification: Provides the necessary protection for virtual machines in each availability zone, with the ability to recover from an availability zone outage.
Design Implication: You might need additional policies if third-party virtual machines are to be hosted in the VI workload domain cluster because their performance or availability requirements might differ from what the default vSAN policy supports.

Decision ID: VCF-WLD-vSAN-CFG-010
Design Decision: Configure two fault domains, one for each availability zone. Assign each host to its respective availability zone fault domain.
Design Justification: Fault domains are mapped to availability zones to provide logical host separation and to ensure that a copy of the vSAN data is always available, even when an availability zone goes offline.
Design Implication: Additional raw storage is required when the Secondary Failures to Tolerate option and fault domains are enabled. (See the overhead sketch after this table.)
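
To see why raw storage increases, combine the multipliers: mirroring across availability zones (Primary Failures to Tolerate = 1) doubles consumption, and RAID-1 protection within each zone (Secondary Failures to Tolerate = 1) doubles it again, for roughly 4x raw capacity per provisioned gigabyte. A worked example with hypothetical figures:

```python
# Hypothetical raw-storage overhead for the stretched-cluster policy.
pftt_copies = 2        # Primary Failures to Tolerate = 1: a full copy per zone
sftt_raid1_factor = 2  # Secondary Failures to Tolerate = 1 with RAID-1 mirroring

raw_multiplier = pftt_copies * sftt_raid1_factor  # ~4x raw per provisioned GB

vm_disk_gb = 100.0  # example virtual disk size
print(f"A {vm_disk_gb:.0f} GB disk consumes about "
      f"{vm_disk_gb * raw_multiplier:.0f} GB of raw vSAN capacity")
```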

vSAN Network Design

Table 10. Design Decisions on the Virtual Switch Configuration for vSAN

Decision ID: VCF-WLD-vSAN-NET-001
Design Decision: Use the existing vSphere Distributed Switch instance that SDDC Manager creates for the cluster of the VI workload domain.
Design Justification: Provides guaranteed performance for vSAN traffic during network contention by using existing networking components.
Design Implication: All traffic paths are shared over common uplinks.

Decision ID: VCF-WLD-vSAN-NET-002
Design Decision: Configure jumbo frames on the VLAN for vSAN traffic.
Design Justification:
  • Simplifies configuration because jumbo frames are also used to improve the performance of vSphere vMotion and NFS storage traffic.
  • Reduces the CPU overhead resulting from high network usage.
Design Implication: Every device in the network must support jumbo frames. (See the MTU sketch after this table.)
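
On the vSphere side, jumbo frames are enabled by raising the MTU on the distributed switch. The following pyVmomi sketch shows one way to do this; the vCenter address, credentials, and switch name are placeholders, and the physical network devices must be configured separately.

```python
# Sketch: set MTU 9000 on a vSphere Distributed Switch with pyVmomi.
# The vCenter address, credentials, and switch name are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="user", pwd="pass",
                  disableSslCertValidation=True)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
    dvs = next(d for d in view.view if d.name == "wld01-dvs")  # placeholder

    spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
    spec.configVersion = dvs.config.configVersion  # required for reconfiguration
    spec.maxMtu = 9000  # jumbo frames for vSAN, vMotion, and NFS traffic
    dvs.ReconfigureDvs_Task(spec)
    view.Destroy()
finally:
    Disconnect(si)
```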

vSAN Witness Design

Table 11. Design Decisions for the vSAN Witness Appliance for Multiple Availability Zones

Decision ID: VCF-WLD-vSAN-WTN-001
Design Decision: Deploy a vSAN witness appliance in a location that is not local to the ESXi hosts in any of the availability zones of the VI workload domain.
Design Justification: The witness appliance:
  • Acts as a tiebreaker if network isolation between the availability zones occurs.
  • Hosts all the witness components that are required to form the RAID-1 configuration on vSAN, that is, a copy of the data in each availability zone and the witness components at the witness site.
Design Implication: A third, physically separate location is required. That location must run a vSphere environment to host the witness appliance. Another VMware Cloud Foundation instance in a separate physical location might be an option.

Decision ID: VCF-WLD-vSAN-WTN-002
Design Decision: Deploy a large-size witness appliance.
Design Justification: A large-size witness appliance supports more than 500 virtual machines, which is required for high availability of the workloads that run in the SDDC.
Design Implication: The vSphere environment at the witness location must satisfy the resource requirements of the witness appliance.

Table 12. Design Decisions on the Network Configuration of the vSAN Witness Appliance for Multiple Availability Zones

Decision ID: VCF-WLD-vSAN-WTN-003
Design Decision: Connect the first VMkernel adapter of the vSAN witness appliance to the management network in the witness site.
Design Justification: Connects the witness appliance to the vCenter Server instance and the ESXi hosts in the VI workload domain.
Design Implication: The management networks of both the management and VI workload domains in both availability zones must be routed to the management network in the witness site.

Decision ID: VCF-WLD-vSAN-WTN-004
Design Decision: Configure the vSAN witness appliance to use the first VMkernel adapter, that is, the management interface, for vSAN witness traffic.
Design Justification: Separates the witness traffic from the vSAN data traffic, which provides the following benefits:
  • Removes the requirement to have static routes from the vSAN networks in both availability zones to the witness site.
  • Removes the requirement to have jumbo frames enabled on the path between both availability zones and the witness site, because witness traffic can use a regular MTU size of 1500 bytes.
Design Implication: The management networks of both the management and VI workload domains in both availability zones must be routed to the management network in the witness site.

Decision ID: VCF-WLD-vSAN-WTN-005
Design Decision: Place witness traffic on the management VMkernel adapter of all the ESXi hosts in the VI workload domain.
Design Justification: Separates the witness traffic from the vSAN data traffic, which provides the following benefits:
  • Removes the requirement to have static routes from the vSAN networks in both availability zones to the witness site.
  • Removes the requirement to have jumbo frames enabled on the path between both availability zones and the witness site, because witness traffic can use a regular MTU size of 1500 bytes.
Design Implication: The management networks of both the management and VI workload domains in both availability zones must be routed to the management network in the witness site.

Decision ID: VCF-WLD-vSAN-WTN-006
Design Decision: Assign a static IP address and host name to the management adapter of the vSAN witness appliance.
Design Justification: Simplifies maintenance and tracking, and implements a DNS configuration.
Design Implication: Requires precise IP address management.

Decision ID: VCF-WLD-vSAN-WTN-007
Design Decision: Configure forward and reverse DNS records for the vSAN witness appliance, assigning the records to the child domain for the VMware Cloud Foundation instance.
Design Justification: Enables connecting the vSAN witness appliance to the VI workload domain vCenter Server by FQDN instead of by IP address.
Design Implication: You must provide DNS records for the vSAN witness appliance. (See the validation sketch at the end of this table.)

Decision ID: VCF-WLD-vSAN-WTN-008
Design Decision: Configure time synchronization by using an internal NTP time source for the vSAN witness appliance.
Design Justification: Prevents failures in the stretched cluster configuration that are caused by a time mismatch between the vSAN witness appliance, the ESXi hosts in both availability zones, and the VI workload domain vCenter Server.
Design Implication:
  • An operational NTP service must be available in the environment.
  • All firewalls between the vSAN witness appliance and the NTP servers must allow NTP traffic on the required network ports.
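
A quick way to confirm that the forward and reverse records for the witness appliance resolve consistently is a lookup from a machine that uses the same DNS servers. The following minimal sketch uses only the Python standard library; the FQDN is a hypothetical placeholder.

```python
# Sketch: verify forward and reverse DNS records for the witness appliance.
import socket

fqdn = "vsan-witness.wld01.example.com"  # hypothetical witness FQDN

ip = socket.gethostbyname(fqdn)                # forward (A) record
reverse_name, _, _ = socket.gethostbyaddr(ip)  # reverse (PTR) record

print(f"{fqdn} -> {ip} -> {reverse_name}")
if reverse_name.rstrip(".").lower() != fqdn.lower():
    print("Warning: reverse record does not match the forward record")
```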