Use the VI Configuration wizard in the SDDC Manager UI to create a new workload domain.

To create a VI workload domain that uses a static IP pool for the Host Overlay Network TEPs for L3 aware and stretch clusters, you must use the VMware Cloud Foundation API. See Create a Domain in the VMware Cloud Foundation API Reference Guide.
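For reference, the API call is a POST to the domains endpoint of SDDC Manager. The following is only a minimal sketch of how such a request might be issued with curl; the token variable, the SDDC Manager FQDN, and the contents of domain-spec.json are assumptions, and the full payload schema (including the static IP pool specification for the Host Overlay Network) must be taken from the Create a Domain section of the VMware Cloud Foundation API Reference Guide.

    # Minimal sketch, not a complete specification: create a VI workload domain
    # through the VCF public API. $TOKEN is assumed to be a valid SDDC Manager
    # API access token, and domain-spec.json is assumed to follow the
    # "Create a Domain" schema, including a static IP pool for Host Overlay TEPs.
    curl -k -X POST \
      -H "Authorization: Bearer $TOKEN" \
      -H "Content-Type: application/json" \
      -d @domain-spec.json \
      https://sddc-manager.example.com/v1/domains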

The SDDC Manager UI supports running multiple VI workload domain creation tasks in parallel.

Procedure

  1. In the navigation pane, click + Workload Domain and then click VI - Workload Domain.
  2. Select the principal storage type and click Begin.
    To enable vSAN Express Storage Architecture (ESA), the workload domain must use a vSphere Lifecycle Manager (vLCM) image to manage its clusters.

    For more information on vSAN ESA, see https://core.vmware.com/blog/introduction-vsan-express-storage-architecture and vSAN Express Storage Architecture (ESA) Frequently Asked Questions (FAQ).

Specify Names, vCenter Single Sign-On Domain, and vSphere Lifecycle Manager Method

Provide names for the VI workload domain and organization, choose a vCenter Single Sign-On domain, and select whether the VI workload domain will use vSphere Lifecycle Manager images or baselines.

When you create a VI workload domain, you can join it to the management domain's vCenter Single Sign-On domain or a new vCenter Single Sign-On domain that is not used by any other workload domain. Joining a new vCenter Single Sign-On domain enables a VI workload domain to be isolated from the other workload domains in your VMware Cloud Foundation instance. The vCenter Single Sign-On domain for a VI workload domain determines the local authentication space.
vSphere Lifecycle Manager enables centralized and simplified lifecycle management for VMware ESXi hosts through the use of images and baselines. By default, a new VI workload domain uses vSphere Lifecycle Manager images (if an image is available), but you can choose to use vSphere Lifecycle Manager baselines instead. Consider the following when choosing the vSphere Lifecycle Manager method:
  • Two-node clusters are not supported in a VI workload domain that uses vSphere Lifecycle Manager baselines. See Prerequisites for a Workload Domain for additional requirements for two-node clusters.

Prerequisites

Verify that you have met the prerequisites described in About VI Workload Domains.

Procedure

  1. Type a name for the VI workload domain, such as sfo01. The name must contain between 3 and 20 characters.
    It is good practice to include location information in the name since resource object names (such as host and vCenter names) are generated based on the VI workload domain name.
  2. (Optional) Type a name for the organization that requested or will use the virtual infrastructure, such as Finance. The name must contain between 3 and 20 characters.
  3. Select an SSO domain option for the VI workload domain.
    Create New SSO Domain: Creates a new vCenter Single Sign-On domain.
    • Enter the domain name, for example mydomain.local.
      Note: Ensure that the domain name is unique and does not contain any upper-case letters.
    • Set the password for the SSO administrator account.

      This is the password for the user administrator@your_domain_name.

    • Confirm the administrator password.
    Note: All components in the management domain must be upgraded to VMware Cloud Foundation 5.0 before you can create a new SSO domain.
    Join Management SSO Domain: Joins the new VI workload domain to the management domain's vCenter Single Sign-On domain.
    Note: You cannot change the SSO domain for a VI workload domain after you deploy it.
  4. To use vSphere Lifecycle Manager baselines as the update method for the VI workload domain, select Manage clusters in this workload domain using baselines (deprecated).
    If no vSphere Lifecycle Manager images are available, this option is automatically selected. You can proceed to use vSphere Lifecycle Manager baselines, or exit the wizard and create a vSphere Lifecycle Manager image. For more information on vSphere Lifecycle Manager images, see Managing vSphere Lifecycle Manager Images in VMware Cloud Foundation.
    The update method that you select for a VI workload domain cannot be changed later.
    Note: You must use vSphere Lifecycle Manager images to create vSphere clusters with only two hosts, and those clusters must use NFS, VMFS on FC, or vVols as principal storage.
    Note: Network offloading using SmartNICs (DPU-backed hosts) is not supported with vLCM baselines.
  5. Click Next.

Specify vSphere Cluster Details

Provide a name for the workload domain vSphere cluster. If you are using vSphere Lifecycle Manager images, select a cluster image to apply to the hosts.

Prerequisites

You must have a cluster image available if the workload domain is using vSphere Lifecycle Manager images. See Managing vSphere Lifecycle Manager Images in VMware Cloud Foundation.
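If you want to confirm from the command line that SDDC Manager already has a cluster image you can assign, the following sketch may help. It assumes that cluster images are exposed through the personalities endpoint of the VCF public API and that $TOKEN holds a valid SDDC Manager API access token; verify the endpoint against the VMware Cloud Foundation API Reference Guide for your release.

    # Illustrative only: list the vSphere Lifecycle Manager cluster images
    # (personalities) known to SDDC Manager. Endpoint and token are assumptions;
    # confirm them against the VCF API Reference Guide for your release.
    curl -k -H "Authorization: Bearer $TOKEN" \
      https://sddc-manager.example.com/v1/personalities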

Procedure

  1. Enter a vSphere cluster name.
  2. Select a cluster image from the drop-down menu.
    The option is only available for VI workload domains that use vSphere Lifecycle Manager images.
    Note: If the cluster image contains a different version of a vendor add-on or component than what is installed on the ESXi hosts you add to the cluster, the hosts will be remediated to use the cluster image during cluster creation.

Specify Compute Details

Specify the details for the vCenter Server that gets deployed for the workload domain.

Procedure

  1. Enter the FQDN for the workload domain's vCenter Server.
    The vCenter Server IP address, subnet mask, and default gateway are populated based on the FQDN and SDDC Manager details. (A DNS resolution check for the FQDN is sketched after this procedure.)
  2. Enter and confirm a vCenter Server root password.
  3. Click Next.
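Because the wizard derives the IP address, subnet mask, and default gateway from DNS and the SDDC Manager configuration, it can be useful to confirm forward and reverse DNS resolution for the planned vCenter Server FQDN before you start. A minimal sketch, run from a management workstation that uses the same DNS servers; the hostname and IP address are placeholders for your own values.

    # Illustrative only: confirm forward and reverse DNS resolution for the
    # planned vCenter Server FQDN. The hostname and IP address are placeholders.
    nslookup vcenter-wld01.example.com      # forward lookup: FQDN to IP address
    nslookup 10.0.0.20                      # reverse lookup: IP address to FQDN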

Specify Networking Details

Provide information about the NSX Manager cluster to use with the VI workload domain. If you already have an NSX Manager cluster for a different VI workload domain, you can reuse that NSX Manager cluster or create a new one.

Do not share an NSX Manager cluster between workload domains catering to different use cases that would require different NSX Edge cluster specifications and configurations.

See VMware Configuration Maximums for information about the maximum number of workload domains that can be managed by a single NSX Manager instance.

Procedure

  1. On the Networking page of the wizard, select an option for the VI workload domain's NSX Manager instance.
    Create New NSX instance: Choose this option to create a new NSX Manager instance for the workload domain.
    Note:
    • You must create a new NSX Manager instance if this is the first VI workload domain in your VMware Cloud Foundation instance.
    • You must create a new NSX Manager instance if your VI workload domain is joining a new SSO domain.
    • Provide the NSX Manager cluster details:
      • FQDNs for the three NSX Manager nodes
      • NSX Manager cluster FQDN
      • NSX Manager Admin password
      • NSX Manager Audit password
    Use Existing NSX instance: Choose this option to use an existing NSX Manager instance from another VI workload domain.
    Important:
    • You cannot share an NSX Manager instance between VI workload domains that are in different SSO domains.
    • To share an NSX Manager instance, the VI workload domains must use the same update method: both must use vSphere Lifecycle Manager baselines or both must use vSphere Lifecycle Manager images.
    • Select the NSX Manager instance to use.
      Note: NSX Managers for workload domains that are still being deployed cannot be shared and do not appear in the list of available NSX Managers.
  2. Click Next.

Select the vSAN Storage Parameters

At the vSAN Storage step of the creation wizard, specify the availability you want provisioned for the VI workload domain. This page appears only if you are using vSAN storage for this workload domain.

If you are using vSAN ESA, SDDC Manager selects the following settings, which cannot be edited:
  • Storage Type: Local vSAN datastore.
  • Storage Policy: Auto-policy management.
    Note: Based on the type of cluster and number of hosts, vSAN creates and assigns a default datastore policy for best capacity utilization after the cluster configuration is completed. Policy details can be viewed in the vSphere Client (Policies and Profiles > VM Storage Policies).

If you are using vSAN OSA, SDDC Manager uses the settings you specify to determine:

  • The minimum number of hosts that it needs to fulfill those selections
  • The specific hosts in your environment that are available and appropriate to fulfill those selections
  • The virtual infrastructure features and their specific configurations that are needed to fulfill those selections
Note: You can modify the VMware vSAN OSA configuration in vSphere without negatively affecting the VMware Cloud Foundation configuration.

Procedure

  1. Specify the level of availability you want configured for the cluster.
    The availability level determines the level of redundancy that is set for the assigned resources.
    Failures to tolerate   Description
    0                      vSAN requires a minimum of three hosts.
    1                      vSAN requires a minimum of four hosts.
    2                      vSAN requires a minimum of five hosts.
  2. Select the check box to enable vSAN deduplication and compression.
  3. Click Next.

Specify the VMFS on FC Datastore

If you are using VMFS on FC storage for the workload domain, you must specify the VMFS on FC datastore name.

Procedure

  1. On the Storage page, enter the name of the VMFS on FC datastore.
  2. Click Next.

Specify vVols Storage Details

If you use vVols storage for the workload domain, you must specify the vVols storage details.

Procedure

  1. Select a VASA protocol type.
    vVols supports FC, NFS, and iSCSI storage protocol types.
  2. Select a VASA provider name.
  3. Select a storage container.
  4. Select a VASA user.
  5. Enter a datastore name.
  6. Click Next.

Select Hosts

The Host Selection page displays available hosts along with host details. Hosts that are powered off, cannot be accessed via SSH, or have not been properly commissioned are not displayed.

  • Select only healthy hosts.
    To check a host's health, SSH in to the SDDC Manager VM using the vcf administrative user account and type the following command:
    sudo /opt/vmware/sddc-support/sos --health-check
    When prompted, enter the vcf user password. For more information, see Supportability and Serviceability (SoS) Utility.
  • For optimum performance, you must select hosts that are identical in terms of memory, CPU types, and disks.

    If you select unbalanced hosts, the SDDC Manager UI displays a warning message, but you can proceed with the workload domain creation.

  • You cannot select hosts that are in a dirty state. A host is in a dirty state when it has been removed from a cluster in a workload domain.

    To clean a dirty host, re-image the host.

  • You cannot combine DPU-backed and non-DPU hosts in the same cluster.
  • When creating a VI workload domain using the SDDC Manager UI, all hosts in a cluster must be associated with the same network pool. You can use the VMware Cloud Foundation API to select hosts from different network pools, if those network pools have the same VLAN ID and MTU settings. The SDDC Manager UI supports L3 networks for vSAN and vMotion traffic, but not for TEP and management traffic. To use L3 networks for TEP and management traffic, use the VMware Cloud Foundation API with network profiles. (An API sketch for listing unassigned hosts and their network pools follows this list.)
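The following sketch shows one way to list unassigned hosts and the network pool each belongs to through the VCF public API, which can help when planning host selection. The status filter value and $TOKEN are assumptions; verify them against the VMware Cloud Foundation API Reference Guide for your release.

    # Illustrative only: list commissioned hosts that are not yet assigned to a
    # workload domain, including the network pool each host belongs to.
    # The status filter value is an assumption; check the VCF API Reference Guide.
    curl -k -H "Authorization: Bearer $TOKEN" \
      "https://sddc-manager.example.com/v1/hosts?status=UNASSIGNED_USEABLE"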

Procedure

  1. To view DPU-backed hosts, activate the Network Offloading toggle.
    Do not activate the toggle if you want to select hosts that are not DPU-backed.
    Note: The toggle is only available if the workload domain uses vSphere Lifecycle Manager images and there are unassigned DPU-backed hosts available.
  2. If you are using DPU-backed hosts, select the DPU Vendor.
    Note: All hosts in a cluster must use DPUs from the same vendor.
  3. Select the hosts for creating the VI workload domain.
  4. Click Next.

Specify Switch Configuration

The Switch Configuration page displays preconfigured switch profiles to be applied to the hosts in the cluster. There is also the option to create a custom switch configuration.

You can use any vmnics available on the host. For preconfigured profiles, you can only edit the names of the VDS and portgroups. (A command sketch for listing a host's available vmnics follows the profile descriptions below.)
  • Default: This profile is recommended and is the default configuration. It provides a unified fabric for all traffic types.
  • Storage Separation: This profile is used to physically separate vSAN storage traffic from other traffic types. This separates vSAN storage traffic onto separate physical host NICs and data center switches.
  • NSX Separation: This profile is used to physically separate NSX traffic and workload traffic from other traffic types. This separates NSX Edge traffic and overlay traffic onto separate physical host NICs and data center switches.
  • Storage Traffic and NSX Traffic Separation: This profile is used to physically separate storage traffic, NSX Edge traffic, and workload traffic from other traffic types (management and vMotion). This will separate storage traffic, NSX Edge traffic, and overlay traffic onto separate physical host NICs and data center switches.
  • Create Custom Switch Configuration: Copy a preconfigured profile and customize it, or create your own distributed switch configuration.
    Multiple Distributed Switches (VDS) can be configured. Each VDS can hold one or more network traffic configurations.
    • Some network traffic types are mandatory. Switch configuration is incomplete until these mandatory traffic types are configured.
    • Network types Management, vMotion, vSAN and NSX-Overlay can be configured only once for a cluster.
    • NSX-VLAN and Public network traffic types can be configured in multiple VDSes.
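To see which physical adapters a host can contribute to the distributed switch configuration, you can query the host directly. The following is a minimal sketch using standard ESXi commands, run from an SSH session on the host.

    # Illustrative only: run on an ESXi host to see which physical adapters are
    # available for VDS uplink mappings.
    esxcli network nic list                    # all vmnics with link state, speed, and driver
    esxcli network vswitch dvs vmware list     # vmnics already claimed by existing distributed switches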

Procedure

  1. Choose a preconfigured switch configuration profile, click Select, and then click Next. To create a custom switch configuration instead, click Create Custom Switch Configuration and skip to step 3.
  2. For a preconfigured switch profile:
    1. Provide the VDS configuration to be applied to the hosts in the cluster.
      VLAN ID is required and must be entered before proceeding to the next step.
      The default VLAN ID is 0. Do not keep the value 0, because NSX-Overlay will not work with it. (A sketch for validating the VLAN and MTU configuration after deployment follows this procedure.)
    2. After you update the details for the physical network adapters and the VLAN ID, click Next.
      Note: The numbering of physical NICs may become out of sync with the VDS numbering. To work around this issue, select Change Profile to reconfigure the mappings.
  3. To create a custom switch configuration:
    1. Click Create Custom Switch Configuration.
    2. Provide the Distributed Switch Name, MTU, and VDS Uplinks properties, and select the network traffic configured on this Distributed Switch (Management, vMotion, vSAN, Public, and NSX) from the dropdown.
      When mapping uplinks to physical network adapters, you cannot select a NIC type more than once.
    3. Click Create Distributed Switch.
      Note: You cannot proceed until all mandatory traffic types are configured.
    4. For Management and vMotion switch configuration, enter the Distributed PortGroup Name, select the Load Balancing policy, and review the uplink information. Click Save Configuration.
    5. Click Create Distributed Switch.
    6. For NSX, select the Operational Mode (Standard, Enhanced Datapath, or Enhanced Datapath interrupt) and the Transport Zone Type (NSX-Overlay or NSX-VLAN).
      Note: For a VI workload domain with DPU-backed hosts, you must select Enhanced Datapath.
      If you are using overlay on an NSX cluster, the NSX-Overlay Transport Zone Name is autofilled.
    7. Enter the VLAN ID and select the IP Allocation (DHCP or Static IP Pool).
    8. Enter the NSX-VLAN Transport Zone Name and teaming policy uplink mapping.
    9. Enter the details for the NSX Uplink Profile.
    10. Click Save Configuration.
  4. When all mandatory traffic types are configured, click Next.
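After the cluster is deployed, you can confirm that the VLAN and MTU values you entered here carry traffic end to end. The following is a minimal sketch using the standard ESXi vmkping check; the VMkernel interface name and target IP address are placeholders for your own values.

    # Illustrative only: run on an ESXi host after deployment to confirm that the
    # configured MTU carries jumbo frames end to end. vmk10 and the target IP are
    # placeholders; -d disables fragmentation and -s 8972 allows for ICMP/IP
    # header overhead at an MTU of 9000.
    vmkping -I vmk10 -d -s 8972 192.168.30.12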

Specify NFS Storage Details

If you are using NFS storage for this workload domain, you must provide the NFS share folder and IP address of the NFS server.
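Before you begin this step, you may want to confirm that the NFS server is reachable from the hosts that will mount the share. The following is a minimal sketch, run from an SSH session on one of the ESXi hosts; the VMkernel interface and IP address are placeholders for your own values.

    # Illustrative only: run on an ESXi host to confirm that the NFS server is
    # reachable before SDDC Manager attempts to mount the share. Adjust the
    # VMkernel interface to the one on the NFS network; the IP is a placeholder.
    vmkping -I vmk0 192.168.20.10
    esxcli storage nfs list                    # NFS datastores already mounted on this host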

Procedure

  1. On the NFS Storage page, enter a name for the NFS datastore.
  2. Enter the path to the NFS share.
  3. Enter the IP address of the NFS server.
    Note: When creating additional datastores for an NFS share and server, use the same datastore name. If you use a different datastore name, vCenter overwrites the datastore name provided earlier.
  4. Click Next.

Select Licenses

On the Licenses page, select a licensing option.

Prerequisites

If you choose License Now, you must have added valid component license keys for the following products:
  • VMware vSAN (if using vSAN as the storage option)

    NFS does not require a license.

  • VMware NSX
  • VMware vSphere

    Since vSphere and vSAN licenses are per CPU, ensure that you have sufficient licenses for the ESXi hosts to be used for the workload domain.

For information on adding component license keys, see Add a Component License Key in the SDDC Manager UI.
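You can also confirm which component license keys are already present in SDDC Manager from the command line. The following sketch assumes the license keys are exposed through the license-keys endpoint of the VCF public API and that $TOKEN holds a valid SDDC Manager API access token; verify the endpoint against the VMware Cloud Foundation API Reference Guide for your release.

    # Illustrative only: list the component license keys already added to SDDC
    # Manager. Endpoint and token are assumptions; confirm them against the VCF
    # API Reference Guide for your release.
    curl -k -H "Authorization: Bearer $TOKEN" \
      https://sddc-manager.example.com/v1/license-keys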

Procedure

  1. Choose a licensing option.
    License Now: Select a license key for each of the components in the VI workload domain.
    License Later: VMware Cloud Foundation components are deployed in evaluation mode.
    Important: After your VI workload domain is created, you must switch to licensed mode by adding and assigning the required component license keys.
  2. Click Next.

View Object Names

The Object Names page displays the vSphere objects that will be generated for the VI workload domain. Object names are based on the VI workload domain name.

Procedure

  1. Review the syntax that will be used for the vSphere objects generated for this domain.
  2. Click Next.

Review Details and Start the Creation Workflow

At the Review step of the wizard, review the information about the workload domain and start the creation workflow. You can also print the information or download a printable version to print later. It can take up to two hours for the domain to be created.

The Review page displays information about the resources and their configurations that are deployed when the workflow creates and deploys the virtual infrastructure for this workload domain.

The hosts that will be added to the workload domain are listed along with information such as the network pool they belong to, memory, CPU, and so on.
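If you prefer to follow the creation workflow outside the SDDC Manager UI, the tasks endpoints of the VCF public API can be polled instead of the Tasks panel. The following is only a sketch; $TOKEN and the task ID are placeholders, and the endpoints should be verified against the VMware Cloud Foundation API Reference Guide for your release.

    # Illustrative only: follow the domain creation workflow from the command line.
    curl -k -H "Authorization: Bearer $TOKEN" \
      https://sddc-manager.example.com/v1/tasks              # list recent tasks
    curl -k -H "Authorization: Bearer $TOKEN" \
      https://sddc-manager.example.com/v1/tasks/<task-id>    # poll a specific task by ID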

Procedure

  1. Scroll down the page to review the information.
  2. Click Finish.
    SDDC Manager validates the VI workload domain configuration and displays any errors.

    If validation succeeds, the Workload Domains page appears, and a notification is displayed letting you know that the VI workload domain is being added. Click View Task Status to view the domain creation tasks and subtasks.

    If a task fails, you can fix the issue and rerun the task. If the workload domain creation fails, contact VMware Support.
    Note: Multiple VMkernels are created to test the vMotion network, which may change the MAC address to IP address associations. If MAC address filtering is enabled on your physical infrastructure, this may cause issues such as vMotion network connectivity validation failure.

What to do next