To add a vSphere cluster with a single vSphere Distributed Switch (vDS) and ESXi hosts with two pNICs, you can use the SDDC Manager UI.

When you use the SDDC Manager UI to add a vSphere cluster to a workload domain, it creates a single vDS for management, vMotion, vSAN, and host overlay traffic. The ESXi hosts that you add to the vSphere cluster must have two pNICs, and both pNICs are mapped to the single vDS. If you want to add a vSphere cluster with more than one vDS or use ESXi hosts with more than two pNICs, you must use the VMware Cloud Foundation API. See Add a vSphere Cluster to a Workload Domain Using the VMware Cloud Foundation API.
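
For reference, adding a cluster through the API is a single request to SDDC Manager. The following is a minimal Python sketch, assuming a reachable SDDC Manager at a hypothetical FQDN and an API access token you have already obtained; the payload is abbreviated and illustrative, and the full cluster creation specification (including multiple vSphere Distributed Switches and pNIC-to-vDS mappings) is defined in the VMware Cloud Foundation API reference.

    import requests

    SDDC_MANAGER = "https://sddc-manager.example.com"  # hypothetical FQDN
    TOKEN = "..."  # API access token for SDDC Manager

    # Abbreviated, illustrative cluster creation spec. The real spec includes
    # host, storage, and network sections where additional vDS and pNIC
    # mappings are declared; see the VCF API reference for the exact schema.
    cluster_spec = {
        "domainId": "<workload-domain-id>",
        "computeSpec": {
            "clusterSpecs": [
                {
                    "name": "wld01-cluster02",
                    # host, storage, and network sections omitted in this sketch
                }
            ]
        },
    }

    response = requests.post(
        f"{SDDC_MANAGER}/v1/clusters",
        json=cluster_spec,
        headers={"Authorization": f"Bearer {TOKEN}"},
        verify=False,  # use a trusted CA bundle in production
    )
    print(response.status_code, response.json())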

For the management domain, all vSphere clusters must use vSAN for principal storage. For VI workload domains, vSphere clusters can use vSAN, NFS, VMFS on FC, or vVols for principal storage. If a VI workload domain has multiple clusters, each cluster can use a different type of principal storage, as long as all hosts within a vSphere cluster use the same type.

You can run multiple add cluster tasks at the same time.

Prerequisites

  • Verify that there are at least three hosts available in the SDDC Manager inventory. For information on commissioning hosts, see Commission Hosts. One way to check this with the VMware Cloud Foundation API is shown in the sketch after this list.
    Note: If the vSphere cluster is using NFS, VMFS on FC, or vVols as principal storage, and the VI workload domain is using vSphere Lifecycle Manager images as the update method, then only two hosts are required. Workload Management requires a vSphere cluster with a minimum of three ESXi hosts.
  • Ensure that the hosts you want to add to the vSphere cluster are in an active state.
  • You must have a valid vSphere and vSAN (if using vSAN storage) license specified in the Licensing tab of the SDDC Manager UI with adequate sockets available for the host to be added. For more information, see Add a License Key.
  • If you are using DHCP for the NSX Host Overlay Network, a DHCP server must be configured on the NSX Host Overlay VLAN of the management domain. When NSX creates TEPs for the VI workload domain, they are assigned IP addresses from the DHCP server.
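
As one way to confirm the first two prerequisites, the following Python sketch lists commissioned hosts that are not yet assigned to a workload domain through the VMware Cloud Foundation API. The SDDC Manager FQDN and token are placeholders, and the status filter value shown is an assumption; check the GET /v1/hosts documentation for the values supported by your VCF version.

    import requests

    SDDC_MANAGER = "https://sddc-manager.example.com"  # hypothetical FQDN
    TOKEN = "..."  # API access token for SDDC Manager

    # List commissioned hosts that are not yet assigned to a workload domain.
    # The status value below is an assumption; consult the GET /v1/hosts
    # documentation for the exact filter values your VCF version supports.
    resp = requests.get(
        f"{SDDC_MANAGER}/v1/hosts",
        params={"status": "UNASSIGNED_USEABLE"},
        headers={"Authorization": f"Bearer {TOKEN}"},
        verify=False,  # use a trusted CA bundle in production
    )
    hosts = resp.json().get("elements", [])
    print(f"{len(hosts)} unassigned hosts available")
    for host in hosts:
        print(host.get("fqdn"), host.get("status"))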

Procedure

  1. In the navigation pane, click Inventory > Workload Domains.
  2. Click the vertical ellipsis (three dots) next to the workload domain to which you want to add a vSphere cluster and click Add Cluster.
  3. Select the storage type for the vSphere cluster and click Begin.
  4. Enter a name for the vSphere cluster and click Next.
  5. Select the cluster image to be applied to the vSphere cluster.
    You can select a cluster image only if you are adding a vSphere cluster to a VI workload domain that uses vSphere Lifecycle Manager images.
    Note: If the cluster image contains a different version of a vendor add-on or component than what is installed on the ESXi hosts you add to the cluster, the hosts will be remediated to use the cluster image during cluster creation.
  6. On the Networking page, enter the NSX Host Overlay (TEP) VLAN of the workload domain.
  7. Select the IP allocation method, provide the required information, and click Next.
    Note: You can use a static IP pool only for the management domain and for VI workload domains with uniform L2 clusters. For L3-aware or stretched clusters, DHCP is required for Host Overlay Network TEP IP assignment.
    Option Description
    DHCP With this option, SDDC Manager uses DHCP for the Host Overlay Network TEPs.
    Static IP Pool With this option, SDDC Manager uses a static IP pool for the Host Overlay Network TEPs. You can reuse an existing IP pool or create a new one.
    To create a new static IP pool, provide the following information:
    • Pool Name
    • Description
    • CIDR
    • IP Range
    • Gateway IP
    Make sure the IP range includes enough IP addresses for the number of hosts that will use the static IP pool. The number of IP addresses required per host depends on the number of pNICs used for the vSphere Distributed Switch that handles host overlay networking. For example, a host with four pNICs that uses two pNICs for host overlay traffic requires two IP addresses in the static IP pool.
    Note: You cannot stretch a cluster that uses static IP addresses for the NSX Host Overlay Network.
  8. If you selected vSAN storage for the vSphere cluster, the vSAN parameters page appears. To configure vSAN storage, select either Use local vSAN Datastore or Use remote vSAN Datastore (HCI Mesh Compute).

    If you select Use local vSAN Datastore, specify the level of availability you want configured for this cluster. The specified Failures To Tolerate (FTT) value determines the number of hosts required for the cluster.

    1. Select the number of failures to tolerate for the vSAN cluster.

      Specify the level of availability you want configured for the cluster. The availability level determines the level of redundancy that is set for the assigned resources.

      Failures to tolerate Description
      0 vSAN requires a minimum of three hosts.
      1 (default) vSAN requires a minimum of four hosts.
      2 vSAN requires a minimum of five hosts.
    2. Select the preferred space efficiency option for vSAN deduplication and compression.
    3. Select the vSAN storage encryption preference.

    If you select Use remote vSAN Datastore (HCI Mesh Compute), the Mount Datastore page opens.

    Note: The HCI Mesh compute-only cluster feature is not supported on VxRail.
    Note: To create compute-only clusters, the vCenter and ESXi versions must be 7.0 Update 2 or later.
    1. Select the remote datastore to provide storage to the new cluster.
  9. If you selected VMFS on FC storage for the vSphere cluster, enter the name of the VMFS on FC datastore.
  10. If you selected vVols storage for the vSphere cluster, the vVols storage page appears.
    1. Select a VASA protocol type.
      vVols supports FC, NFS, and iSCSI storage protocol types.
    2. Select a VASA provider name.
    3. Select a storage container.
    4. Select a VASA user.
    5. Enter a datastore name.
  11. Click Next.
  12. On the Object Names page, review the auto-generated object names and click Next.
  13. On the Host Selection page, select hosts for the vSphere cluster.
    You can use the toggle button to turn Skip failed hosts during cluster creation on or off. When this option is off, cluster creation fails if you select an unhealthy host. When this option is on, cluster creation succeeds if you have selected enough healthy hosts to meet the minimum requirements for a new cluster.
    When you use the SDDC Manager UI to add a cluster, all hosts must be associated with the same network pool. The VMware Cloud Foundation API supports adding hosts from different network pools to workload domain clusters, as long as those network pools have the same VLAN ID and MTU settings.
    You can add hosts with multiple active pNICs to the vSphere cluster using the VMware Cloud Foundation API.
    When you have selected the minimum number of hosts required for this cluster, the Next button is enabled.
  14. Click Next.
  15. If you selected NFS storage for the vSphere cluster, the NFS Storage page appears.
    1. Enter a name for the NFS datastore.
    2. Enter the path to the NFS share.
    3. Enter the IP address of the NFS server.
  16. Click Next.
  17. On the Licenses page, select the vSphere and vSAN (if using vSAN storage) license to apply to this cluster.
  18. Click Next.
  19. On the Review page, review the vSphere cluster details and click Finish.
    Note: Multiple VMkernel adapters are created to test the vMotion network, which may change the MAC address to IP address mappings. If MAC address filtering is enabled on your physical infrastructure, this may cause issues such as vMotion network connectivity validation failure.
    The details page for the workload domain appears with the following message: Adding a new cluster is in progress. When this process completes, the vSphere cluster appears in the Clusters tab in the details page for the workload domain.
    If Skip failed hosts during cluster creation is on and one or more of your selected hosts is unhealthy, the cluster creation task still succeeds as long as you have at least the minimum number of healthy hosts. The Tasks panel reports that a host was skipped and includes a link with more details.

    If, after skipping failed hosts, there are not enough healthy hosts to create a cluster, the task fails.
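
    If you start the add-cluster operation with the VMware Cloud Foundation API instead of the UI, you can follow the same progress by polling the task that the request returns. The following Python sketch assumes a hypothetical SDDC Manager FQDN, an API token, and a task ID captured from the earlier response; the status strings checked here are assumptions, so verify them against the GET /v1/tasks documentation for your VCF version.

      import time
      import requests

      SDDC_MANAGER = "https://sddc-manager.example.com"  # hypothetical FQDN
      TOKEN = "..."    # API access token for SDDC Manager
      TASK_ID = "..."  # task ID returned when the add-cluster request was accepted

      # Poll the task until it leaves the in-progress state. The status values
      # checked below are assumptions; confirm them against the GET /v1/tasks
      # documentation before relying on them.
      while True:
          task = requests.get(
              f"{SDDC_MANAGER}/v1/tasks/{TASK_ID}",
              headers={"Authorization": f"Bearer {TOKEN}"},
              verify=False,  # use a trusted CA bundle in production
          ).json()
          print(task.get("name"), task.get("status"))
          if task.get("status") not in ("IN_PROGRESS", "PENDING"):
              break
          time.sleep(30)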

What to do next

To add another cluster in parallel, click Add Cluster again and repeat the above steps.