If you want to add capacity to a workload domain, you can use the SDDC Manager UI to add a vSphere cluster.

For the management domain, all vSphere clusters must use vSAN for principal storage. For VI workload domains, vSphere clusters can use vSAN, NFS, VMFS on FC, or vVols for principal storage. If a VI workload domain has multiple clusters, each cluster can use a different type of principal storage, as long as all hosts within a vSphere cluster use the same type.
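The principal storage rules above can be expressed as a simple lookup, which may be handy when scripting pre-checks. This is an illustrative sketch only; the data structure is not part of any VMware API:

```python
# Illustrative mapping of the principal storage rules described above.
# The dictionary and function are hypothetical helpers, not VMware APIs.
ALLOWED_PRINCIPAL_STORAGE = {
    "management": {"vSAN"},
    "vi": {"vSAN", "NFS", "VMFS on FC", "vVols"},
}

def storage_allowed(domain_type: str, storage_type: str) -> bool:
    """Return True if the storage type is valid principal storage
    for the given workload domain type."""
    return storage_type in ALLOWED_PRINCIPAL_STORAGE.get(domain_type, set())
```

For example, `storage_allowed("management", "NFS")` is False because all management domain clusters must use vSAN.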

You can run multiple add cluster tasks at the same time.

To add a cluster that contains ESXi hosts with two data processing units (DPUs), you must configure a custom vSphere Distributed Switch (VDS) with the following settings:
  • A single VDS that uses all four DPU-backed NICs
  • Uplinks (uplink1 through uplink4) are mapped to the DPU-backed NICs
  • The NSX network operational mode is set to Enhanced Datapath - Standard
  • In the NSX uplink profile, uplink-1 and uplink-2 are Active and uplink-3 and uplink-4 are Standby
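The DPU switch requirements above can be sketched as a validation routine. The dictionary shape below is hypothetical (not an SDDC Manager object); it only encodes the four rules listed:

```python
def validate_dpu_vds(config: dict) -> list[str]:
    """Check a proposed VDS configuration against the DPU requirements
    above. `config` is a hypothetical dict used for illustration."""
    errors = []
    # Rule 1: a single VDS using all four DPU-backed NICs.
    if len(config.get("dpu_backed_nics", [])) != 4:
        errors.append("expected exactly four DPU-backed NICs on a single VDS")
    # Rule 2: uplink1 through uplink4 mapped to the DPU-backed NICs.
    expected_uplinks = {f"uplink{i}" for i in range(1, 5)}
    if set(config.get("uplink_mapping", {})) != expected_uplinks:
        errors.append("uplink1 through uplink4 must map to the DPU-backed NICs")
    # Rule 3: NSX network operational mode.
    if config.get("nsx_mode") != "Enhanced Datapath - Standard":
        errors.append("NSX mode must be Enhanced Datapath - Standard")
    # Rule 4: uplink profile active/standby split.
    profile = config.get("uplink_profile", {})
    if (profile.get("active") != ["uplink-1", "uplink-2"]
            or profile.get("standby") != ["uplink-3", "uplink-4"]):
        errors.append("uplink-1/2 must be Active and uplink-3/4 Standby")
    return errors
```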

Prerequisites

  • Verify that there are at least three hosts available in the SDDC Manager inventory. For information on commissioning hosts, see Commission Hosts.
    Note: If the vSphere cluster is using NFS, VMFS on FC, or vVols as principal storage, and the VI workload domain is using vSphere Lifecycle Manager images as the update method, then only two hosts are required. Workload Management requires a vSphere cluster with a minimum of three ESXi hosts.
  • Ensure that the hosts you want to add to the vSphere cluster are in an active state.
  • If you choose License Now, you must have valid license keys for vSphere and vSAN (if applicable) with adequate sockets available in the SDDC Manager inventory. For more information, see Add a Component License Key in the SDDC Manager UI.
  • If you are using DHCP for the NSX Host Overlay Network, a DHCP server must be configured on the NSX Host Overlay VLAN of the management domain. When NSX creates TEPs for the VI workload domain, they are assigned IP addresses from the DHCP server.
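The host-count prerequisite above reduces to a small rule, sketched here (an illustrative helper, not a VMware API):

```python
def min_hosts_required(storage_type: str, uses_vlcm_images: bool,
                       workload_management: bool = False) -> int:
    """Minimum hosts per the prerequisites above: three by default,
    two when the cluster uses NFS, VMFS on FC, or vVols principal
    storage in a domain using vSphere Lifecycle Manager images, and
    always three when Workload Management is planned."""
    if workload_management:
        return 3
    if storage_type in {"NFS", "VMFS on FC", "vVols"} and uses_vlcm_images:
        return 2
    return 3
```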

Procedure

  1. In the navigation pane, click Inventory > Workload Domains.
  2. Click the vertical ellipsis (three dots) next to the workload domain to which you want to add a vSphere cluster and click Add Cluster.
  3. Select the storage type for the vSphere cluster and click Begin.
    You can only activate vSAN ESA if the workload domain to which you are adding the cluster uses vSphere Lifecycle Manager images.
    Note: vSAN Max requires vSAN ESA.
  4. Enter a name for the vSphere cluster and click Next.
  5. Select the cluster image to be applied to the vSphere cluster.
    You can select a cluster image only if you are adding a vSphere cluster to a workload domain that uses vSphere Lifecycle Manager images.
    Note: If the cluster image contains a different version of a vendor add-on or component than what is installed on the ESXi hosts you add to the cluster, the hosts will be remediated to use the cluster image during cluster creation.
  6. If you selected vSAN storage for the vSphere cluster, the vSAN parameters page appears. The vSAN storage options are different for vSAN OSA and vSAN ESA.
    1. For vSAN OSA, select the vSAN Storage Type:
      • vSAN HCI: Provides storage and compute resources.
        Specify the level of availability you want configured for this cluster. The specified Failures To Tolerate (FTT) value determines the number of hosts required in the cluster.
        Select the check box to enable vSAN deduplication and compression.
        Click Next.
      • vSAN Compute Cluster: Provides compute resources only.
        Click Next and choose the remote datastore to provide storage to the new cluster.

    2. For vSAN ESA, select the vSAN Storage Type:
      • vSAN HCI: Provides storage and compute resources.
        SDDC Manager selects the following settings, which cannot be edited:
        • Storage Type: Local vSAN datastore.
        • Storage Policy: Auto-policy management.
          Note: Based on the type of cluster and number of hosts, vSAN creates and assigns a default datastore policy for best capacity utilization after the cluster configuration is completed. Policy details can be viewed in the vSphere Client (Policies and Profiles > VM Storage Policies).
        Click Next.
      • vSAN Max: Provides storage resources only.
        You can mount a vSAN Max datastore on other vSAN ESA or vSAN compute clusters.
        SDDC Manager selects the same non-editable Storage Type and Storage Policy settings as for vSAN HCI.
        Click Next.
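The relationship between the Failures To Tolerate value and the required host count mentioned above follows the standard vSAN mirroring rule: with RAID-1 mirroring, tolerating n host failures requires 2n + 1 hosts. A quick sketch (erasure coding and other policies have different requirements):

```python
def hosts_for_ftt(ftt: int) -> int:
    """Minimum hosts for a vSAN cluster using RAID-1 mirroring:
    2 * FTT + 1 (e.g. FTT=1 needs 3 hosts, FTT=2 needs 5)."""
    if ftt < 0:
        raise ValueError("FTT must be non-negative")
    return 2 * ftt + 1
```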

  7. If you selected NFS storage for the vSphere cluster, the NFS Storage page appears.
    1. Enter a name for the NFS datastore.
    2. Enter the path to the NFS share.
    3. Enter the IP address of the NFS server.
    4. Click Next.
  8. If you selected VMFS on FC storage for the vSphere cluster, enter the name of the VMFS on FC datastore and click Next.
  9. If you selected vVols storage for the vSphere cluster, the vVols storage page appears.
    1. Select a VASA protocol type.
      vVols supports FC, NFS, and iSCSI storage protocol types.
    2. Select a VASA provider name.
    3. Select a storage container.
    4. Select a VASA user.
    5. Enter a datastore name.
    6. Click Next.
  10. On the Host Selection page, select hosts for the vSphere cluster.
    You can use the toggle button to turn Skip failed hosts during cluster creation off or on. When this option is off, cluster creation fails if you select an unhealthy host. When this option is on, cluster creation succeeds as long as you select enough healthy hosts to meet the minimum requirements for a new cluster.
    To view DPU-backed hosts, activate the Network Offloading toggle. Do not activate the toggle if you want to select hosts that are not DPU-backed.
    Note: The toggle is only available if the workload domain uses vSphere Lifecycle Manager images and there are unassigned DPU-backed hosts available.
    If you are using DPU-backed hosts, select the DPU Vendor.
    Note: All hosts in a cluster must use DPUs from the same vendor.
    When you use the SDDC Manager UI to add a cluster, all hosts must be associated with the same network pool. The VMware Cloud Foundation API supports adding hosts from different network pools to workload domain clusters, as long as those network pools have the same VLAN ID and MTU settings.
    The hosts must be commissioned with the same storage type that you selected for the cluster. For example, select hosts commissioned for vSAN Max storage for a vSAN Max cluster.
    When you have selected the minimum number of hosts required for this cluster, the Next button is enabled.
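The network pool constraint described above (the API allows hosts from different pools only when those pools share VLAN ID and MTU settings) can be sketched as a compatibility check. The pool records below are illustrative, not VMware API objects:

```python
def pools_compatible(pools: list[dict]) -> bool:
    """Return True if all network pools share the same VLAN ID and MTU,
    the condition required when adding hosts from different pools via
    the VMware Cloud Foundation API. Pool dicts here are illustrative."""
    if not pools:
        return False
    first = (pools[0]["vlan_id"], pools[0]["mtu"])
    return all((p["vlan_id"], p["mtu"]) == first for p in pools)
```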
  11. Click Next.
  12. On the Switch Configuration page, select either a preconfigured switch profile to apply to the cluster or the option to create a custom switch configuration.
    1. When creating a custom switch configuration, specify the Distributed Switch Name, MTU, number of VDS uplinks, and uplink mapping.
    2. Click Configure Network Traffic and select the network traffic type to configure.
      You must configure all required network traffic types.
      Configure each network traffic type as follows:
      • Management, vMotion, vSAN, Public: Enter the Distributed PortGroup Name, select the Load Balancing policy, and configure the uplinks. Click Save Configuration.
      • NSX: Select the Operational Mode (Standard, Enhanced Datapath - Standard, or Enhanced Datapath - Performance) and the Transport Zone Type(s) (NSX-Overlay, NSX-VLAN).
        Note: For a VI workload domain with DPU-backed hosts, you must select Enhanced Datapath - Standard.
        For NSX-Overlay:
        • Enter the NSX-Overlay Transport Zone Name.
        • Enter the VLAN ID and select the IP Allocation (DHCP or Static IP Pool).
        For NSX-VLAN:
        • Enter the NSX-VLAN Transport Zone Name.
        Configure the NSX transport node settings, the uplink mapping settings, and the NSX uplink profile settings, then click Save Configuration.
        Note: The NSX network traffic type is not available if the workload domain does not include NSX Manager.
    3. Click Create Distributed Switch.
      Note: You cannot proceed until all mandatory traffic types are configured.
    4. Click Next.
  13. On the Licenses page, choose a licensing option.
    • License Now: Select the vSphere and vSAN (if applicable) license to apply to this cluster.
    • License Later: vSphere and vSAN (if applicable) components are deployed in evaluation mode.
      Important: After the cluster is created, you must switch to licensed mode.
  14. Click Next.
  15. On the Review page, review the vSphere cluster details and click Finish.
    Note: Multiple VMkernels are created to test the vMotion network, which may cause changes in the MAC addresses and IP address relations. If MAC address filtering is enabled on your physical infrastructure, this may cause issues such as vMotion network connectivity validation failure.
    The details page for the workload domain appears with the following message: Adding a new cluster is in progress. When this process completes, the vSphere cluster appears in the Clusters tab in the details page for the workload domain.
    If Skip failed hosts during cluster creation is on and one or more of your selected hosts is unhealthy, the cluster creation task still succeeds as long as you have at least the minimum number of healthy hosts. The Tasks panel reports that a host was skipped and includes a link with more details.

    If, after skipping failed hosts, there are not enough healthy hosts to create a cluster, the task fails.
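The skip-failed-hosts behavior described above reduces to a simple rule: the task succeeds only if the healthy subset of the selected hosts still meets the cluster minimum. A sketch, using illustrative host records rather than SDDC Manager objects:

```python
def cluster_creation_outcome(selected: list[dict], minimum: int,
                             skip_failed: bool) -> str:
    """Outcome of an add-cluster task under the skip-failed-hosts
    toggle. Host dicts with a 'healthy' flag are illustrative."""
    healthy = [h for h in selected if h["healthy"]]
    if not skip_failed:
        # Toggle off: any unhealthy selected host fails the task.
        all_healthy = len(healthy) == len(selected)
        return "succeeds" if all_healthy and len(healthy) >= minimum else "fails"
    # Toggle on: unhealthy hosts are skipped; the healthy remainder
    # must still meet the minimum host count.
    return "succeeds" if len(healthy) >= minimum else "fails"
```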

What to do next

To add another cluster in parallel, click Add Cluster again and repeat the above steps.