Use the VI Configuration wizard in the SDDC Manager UI to create a new workload domain.

To create a VI workload domain that uses a static IP pool for the Host Overlay Network TEPs for L3 aware and stretched clusters, you must use the VMware Cloud Foundation API. See Create a Domain in the VMware Cloud Foundation API Reference Guide.
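As a rough illustration only, the API workflow resembles the following sketch. The SDDC Manager FQDN, credentials, and the vi-domain-spec.json file name are placeholders; build the domain creation spec according to the Create a Domain request schema in the API reference.

  # Request an API access token from SDDC Manager.
  TOKEN=$(curl -sk -X POST https://sddc-manager.example.com/v1/tokens \
    -H 'Content-Type: application/json' \
    -d '{"username": "administrator@vsphere.local", "password": "********"}' \
    | jq -r '.accessToken')

  # Validate the domain creation spec before submitting it.
  curl -sk -X POST https://sddc-manager.example.com/v1/domains/validations \
    -H "Authorization: Bearer $TOKEN" -H 'Content-Type: application/json' \
    -d @vi-domain-spec.json

  # Create the VI workload domain. The response references a task that you can
  # monitor in the SDDC Manager UI or with GET /v1/tasks.
  curl -sk -X POST https://sddc-manager.example.com/v1/domains \
    -H "Authorization: Bearer $TOKEN" -H 'Content-Type: application/json' \
    -d @vi-domain-spec.json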

The SDDC Manager UI supports running multiple VI workload domain creation tasks in parallel.

For information about importing existing vSphere environments as VI workload domains, see Converting or Importing Existing vSphere Environments into VMware Cloud Foundation.

Procedure

  1. In the navigation pane, click + Workload Domain and then click VI - Workload Domain.
  2. Select the principal storage type and click Begin.
    The Storage Selection settings for a VI workload domain.
    To enable vSAN Express Storage Architecture (ESA), you must have a vLCM image to manage clusters.

    For more information on vSAN ESA, see https://core.vmware.com/blog/introduction-vsan-express-storage-architecture and vSAN Express Storage Architecture (ESA) Frequently Asked Questions (FAQ).

    vSAN Max requires vSAN ESA.

Specify Names and vCenter Single Sign-On Domain

Provide names for the VI workload domain and organization and choose a vCenter Single Sign-On domain. For VMware Cloud Foundation 5.2, select whether the VI workload domain will use vSphere Lifecycle Manager images or baselines.

When you create a VI workload domain, you can join it to the management domain's vCenter Single Sign-On domain or a new vCenter Single Sign-On domain that is not used by any other workload domain. Joining a new vCenter Single Sign-On domain enables a VI workload domain to be isolated from the other workload domains in your VMware Cloud Foundation instance. The vCenter Single Sign-On domain for a VI workload domain determines the local authentication space.

vSphere Lifecycle Manager enables centralized and simplified lifecycle management for VMware ESXi hosts through the use of images and baselines. In VMware Cloud Foundation 5.2, you specify the vSphere Lifecycle Manager method (baselines or images) at the workload domain level. All clusters in the workload domain must use the same vSphere Lifecycle Manager method. In VMware Cloud Foundation 5.2.1, you specify the vSphere Lifecycle Manager method at the cluster level. A single workload domain can contain both vSphere Lifecycle Manager baseline-based and image-based clusters.

Prerequisites

Verify that you have met the prerequisites described in About VI Workload Domains.

Procedure

  1. Type a name for the VI workload domain, such as sfo01. The name must contain between 3 and 20 characters.
    It is good practice to include location information in the name since resource object names (such as host and vCenter names) are generated based on the VI workload domain name.
  2. (Optional) Type a name for the organization that requested or will use the virtual infrastructure, such as Finance. The name must contain between 3 and 20 characters.
  3. Select an SSO domain option for the VI workload domain.
    Option Description
    Create New SSO Domain Creates a new vCenter Single Sign-On domain:
    • Enter the domain name, for example mydomain.local.
      Note: Ensure that the domain name does not contain any upper-case letters.
    • Set the password for the SSO administrator account.

      This is the password for the user administrator@your_domain_name.

    • Confirm the administrator password.
    Note: All components in the management domain must be upgraded to VMware Cloud Foundation 5.0 before you can create a new SSO domain.
    Join Management SSO Domain Joins the new VI workload domain to the management domain's vCenter Single Sign-On domain.
    Note: You cannot change the SSO domain for a VI workload domain after you deploy it.
  4. For VMware Cloud Foundation 5.2, select Manage clusters in this workload domain using baselines (deprecated) to use vSphere Lifecycle Manager baselines as the update method for the VI workload domain.
    VMware Cloud Foundation 5.2.1 supports both vSphere Lifecycle Manager baseline-based and image-based clusters in the same workload domain. For VMware Cloud Foundation 5.2.1, the vSphere Lifecycle Manager option appears when you specify the cluster settings.
    If no vSphere Lifecycle Manager images are available, the vLCM baselines option is automatically selected. You can proceed to use vSphere Lifecycle Manager baselines, or exit the wizard and create a vSphere Lifecycle Manager image. For more information on vSphere Lifecycle Manager images, see Managing vSphere Lifecycle Manager Images in VMware Cloud Foundation.
    The update method that you select for a VI workload domain cannot be changed later.
    Note:
    • You must use vSphere Lifecycle Manager images for vSAN ESA and vSAN Max clusters.
    • You must use vSphere Lifecycle Manager images to create vSphere clusters with only two hosts, and those clusters must use NFS, VMFS on FC, or vVols as principal storage.
    • Network offloading using SmartNICs (DPU-backed hosts) is not supported with vSphere Lifecycle Manager baselines.
  5. Click Next.

Specify vSphere Cluster Details

Provide a name for the workload domain vSphere cluster. If you are using vSphere Lifecycle Manager images, select a cluster image to apply to the hosts.

VMware Cloud Foundation 5.2.1 supports both vSphere Lifecycle Manager baseline-based and image-based clusters in the same workload domain. Choose the vSphere Lifecycle Manager method for the primary cluster.

Prerequisites

You must have a cluster image available if the workload domain is using vSphere Lifecycle Manager images. See Managing vSphere Lifecycle Manager Images in VMware Cloud Foundation.

Procedure

  1. Enter a vSphere cluster name.
  2. For VMware Cloud Foundation 5.2.1, select Manage this cluster using vLCM images to use vSphere Lifecycle Manager images as the update method.
    If you do not select Manage this cluster using vLCM images, then the cluster uses vLCM baselines.
    Note:
    • You must use vSphere Lifecycle Manager images for vSAN ESA and vSAN Max clusters.
    • You must use vSphere Lifecycle Manager images to create vSphere clusters with only two hosts, and those clusters must use NFS, VMFS on FC, or vVols as principal storage.
    • Network offloading using SmartNICs (DPU-backed hosts) is not supported with vSphere Lifecycle Manager baselines.
  3. Select a cluster image from the drop-down menu.
    The option is only available for clusters that use vSphere Lifecycle Manager images.
    Note: If the cluster image contains a different version of a vendor add-on or component than what is installed on the ESXi hosts you add to the cluster, the hosts will be remediated to use the cluster image during cluster creation.

Specify Compute Details

Specify the details for the vCenter Server that gets deployed for the workload domain.

The version of vCenter Server that gets deployed is based on the vCenter Server version in the management domain. If the management domain contains a version of vCenter Server that was upgraded from the BOM version, then SDDC Manager deploys the patched version of vCenter Server in the VI workload domain.

Procedure

  1. Enter the FQDN for the workload domain's vCenter Server.
    The vCenter Server IP address, subnet mask, and default gateway are populated based on the FQDN and SDDC Manager details.
  2. Enter and confirm a vCenter Server root password.
  3. Click Next.

Specify Networking Details

Provide information about the NSX Manager cluster to use with the VI workload domain. If you already have an NSX Manager cluster for a different VI workload domain, you can reuse that NSX Manager cluster or create a new one.

The version of NSX Manager that gets deployed is based on the NSX Manager version in the management domain. If the management domain contains a version of NSX Manager that was upgraded from the BOM version, then SDDC Manager deploys the patched version of NSX Manager in the VI workload domain.

Do not share an NSX Manager cluster between workload domains catering to different use cases that would require different NSX Edge cluster specifications and configurations.

You can share an NSX Manager cluster under the following circumstances:
  • You are creating an isolated VI workload domain and want to share an NSX Manager cluster created by another isolated VI workload domain.
  • You are creating a VI workload domain that is joined to the management SSO domain and want to share an NSX Manager cluster created by another VI workload domain that is joined to the management SSO domain.
Note:
  • You cannot share an NSX Manager cluster between an isolated VI workload domain and a VI workload domain that is joined to the management SSO domain.
  • A new workload domain cannot share an NSX Manager cluster with a workload domain that has an Avi Load Balancer deployed.

See VMware Configuration Maximums for information about the maximum number of workload domains that can be managed by a single NSX Manager instance.

Procedure

  1. On the Networking page of the wizard, select an option for the VI workload domain's NSX Manager instance.
    Option Description
    Create New NSX instance Choose this option to create a new NSX Manager instance for the workload domain.
    Note: You must create a new NSX Manager instance if this is the first VI workload domain in your VMware Cloud Foundation instance.
    Provide the NSX Manager cluster details:
    • FQDNs for three NSX Manager nodes
    • NSX Manager cluster FQDN
    • NSX Manager appliance size (Large or Extra Large)
    • NSX Manager Administrator password
    • NSX Manager Auditor password (optional)
    Use Existing NSX instance Choose this option to use an existing NSX Manager instance from another VI workload domain.
    Important: To share an NSX Manager instance in VMware Cloud Foundation 5.2, the VI workload domains must use the same update method: both must use vSphere Lifecycle Manager baselines, or both must use vSphere Lifecycle Manager images. In VMware Cloud Foundation 5.2.1, this restriction no longer applies.
    • Select the NSX Manager instance.
      Note: NSX Manager instances for workload domains that are still deploying cannot be shared and do not appear in the list of available NSX Managers.
  2. Click Next.

Select the vSAN Storage Parameters

At the vSAN Storage step of the creation wizard, specify the vSAN storage options for the VI workload domain. This page appears only if you are using vSAN storage for this workload domain.

The vSAN storage options are different for vSAN OSA and vSAN ESA. For more information about vSAN storage, see vSAN Storage with VMware Cloud Foundation.

Procedure

  1. For vSAN OSA:
    1. Specify the level of availability you want configured for the cluster.
      The availability level determines the level of redundancy that is set for the assigned resources.
      Failures to tolerate Description
      0 vSAN requires a minimum of three hosts.
      1 vSAN requires a minimum of four hosts.
      2 vSAN requires a minimum of five hosts.
      SDDC Manager uses the settings that you specify to determine:
      • The minimum number of hosts that it needs to fulfill those selections
      • The specific hosts in your environment that are available and appropriate to fulfill those selections
      • The virtual infrastructure features and their specific configurations that are needed to fulfill those selections
    2. Select the check box to enable vSAN deduplication and compression.
  2. For vSAN ESA:
    1. Select the vSAN cluster type.
      Option Description
      vSAN HCI Provides storage and compute resources.
      vSAN Max Provides storage resources only. You can mount a vSAN Max datastore on other vSAN ESA or vSAN compute clusters.
    2. SDDC Manager selects the following settings, which cannot be edited:
      • Storage Type: Local vSAN datastore.
      • Storage Policy: Auto-policy management.
        Note: Based on the type of cluster and number of hosts, vSAN creates and assigns a default datastore policy for best capacity utilization after the cluster configuration is completed. Policy details can be viewed in the vSphere Client (Policies and Profiles > VM Storage Policies).
  3. Click Next.

Specify the VMFS on FC Datastore

If you are using VMFS on FC storage for the workload domain, you must specify the VMFS on FC datastore name.

Procedure

  1. On the Storage page, enter the name of the VMFS on FC datastore.
  2. Click Next.

Specify vVols Storage Details

If you use vVols storage for the workload domain, you must specify the vVols storage details.

Procedure

  1. Select a VASA protocol type.
    vVols supports FC, NFS, and iSCSI storage protocol types.
  2. Select a VASA provider name.
  3. Select a storage container.
  4. Select a VASA user.
  5. Enter a datastore name.
  6. Click Next.

Select Hosts

The Host Selection page displays available hosts along with host details. Hosts that are powered off, cannot be accessed via SSH, or have not been properly commissioned are not displayed.

  • Select only healthy hosts.
    To check a host's health, SSH in to the SDDC Manager VM using the vcf administrative user account and type the following command:
    sudo /opt/vmware/sddc-support/sos --health-check
    When prompted, enter the vcf user password. For more information, see Supportability and Serviceability (SoS) Utility.
  • For optimum performance, you must select hosts that are identical in terms of memory, CPU types, and disks.

    If you select unbalanced hosts, the SDDC Manager UI displays a warning message, but you can proceed with the workload domain creation.

  • You cannot select hosts that are in a dirty state. A host is in a dirty state when it has been removed from a cluster in a workload domain.

    To clean a dirty host, re-image the host.

  • You cannot combine DPU-backed and non-DPU hosts in the same cluster.
  • When creating a VI workload domain using the SDDC Manager UI, all hosts in a cluster must be associated with the same network pool. You can use the VMware Cloud Foundation API to select hosts from different network pools, provided those network pools have the same VLAN ID and MTU settings. The SDDC Manager UI supports L3 networks for vSAN and vMotion traffic, but not for TEP and management traffic. For L3 TEP and management networks, use the API with network profiles, as shown in the sketch after this list.
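For reference, the following sketch (reusing the access token from the earlier sketch) shows one way to list unassigned, commissioned hosts whose IDs you then reference in the hostSpecs entries of the domain creation spec. The SDDC Manager FQDN is a placeholder, and the status query parameter and response fields are assumptions based on the public VMware Cloud Foundation API; verify them, along with the network profile settings for L3 TEP and management networks, against the API reference.

  # List unassigned hosts that can be added to a new cluster. The returned host
  # IDs are what you reference in the hostSpecs entries of the domain creation
  # spec, which is how the API lets you combine hosts from different network
  # pools that share the same VLAN ID and MTU settings.
  curl -sk -X GET 'https://sddc-manager.example.com/v1/hosts?status=UNASSIGNED_USEABLE' \
    -H "Authorization: Bearer $TOKEN" | jq '.elements[] | {id, fqdn}'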

Procedure

  1. To view DPU-backed hosts, activate the Network Offloading toggle.
    Do not activate the toggle if you want to select hosts that are not DPU-backed.
    Note: The toggle is only available if the workload domain uses vLCM images and there are unassigned DPU-backed hosts available.
  2. If you are using DPU-backed hosts, select the DPU Vendor.
    Note: All hosts in a cluster must use DPUs from the same vendor.
  3. Select the hosts for creating the VI workload domain.
    The hosts must be commissioned with the same storage type as the primary cluster of the workload domain. For example, select hosts commissioned for vSAN ESA storage for a vSAN ESA workload domain.
  4. Click Next.

Specify Switch Configuration

The Switch Configuration page displays preconfigured switch profiles to be applied to the hosts in the cluster. There is also the option to create a custom switch configuration.

You can use any vmnics available on the host. For preconfigured profiles, you can only edit the names of VDS and portgroups.
  • Default: This profile is recommended and is the default configuration. It provides a unified fabric for all traffic types.
  • Storage Separation: This profile is used to physically separate vSAN storage traffic from other traffic types. This separates vSAN storage traffic onto separate physical host NICs and data center switches.
  • NSX Separation: This profile is used to physically separate NSX traffic and workload traffic from other traffic types. This separates NSX Edge traffic and overlay traffic onto separate physical host NICs and data center switches.
    Note: This profile is not available for workload domains that do not have NSX Manager.
  • Storage Traffic and NSX Traffic Separation: This profile is used to physically separate storage traffic, NSX Edge traffic, and workload traffic from other traffic types (management and vMotion). This will separate storage traffic, NSX Edge traffic, and overlay traffic onto separate physical host NICs and data center switches.
    Note: This profile is not available for workload domains that do not have NSX Manager.
  • Create Custom Switch Configuration: Copy from a preconfigured profile and customize it, or create a distributed switch configuration from scratch.
    Multiple vSphere Distributed Switches (VDS) can be configured. Each VDS can hold one or more network traffic configurations.
    • Some network traffic types are mandatory. Switch configuration is incomplete until these mandatory traffic types are configured.
    • Network types Management, vMotion, vSAN and NSX-Overlay can be configured only once for a cluster.
    • NSX-VLAN and Public network traffic types can be configured in multiple VDSes.
      Note: The available network traffic types will vary depending on whether or not the workload domain includes NSX Manager.
To deploy a new VI workload domain that uses ESXi hosts with two data processing units (DPU), you must configure a custom vSphere Distributed Switch (VDS) with the following settings:
  • A single VDS that uses all four DPU-backed NICs
  • Uplinks (uplink1 through uplink4) are mapped to the DPU-backed NICs
  • The NSX network operational mode is set to Enhanced Datapath - Standard
  • In the NSX uplink profile, uplink-1 and uplink-2 are Active and uplink-3 and uplink-4 are Standby

Procedure

  1. Choose a preconfigured switch configuration profile, click Select, and then click Next. To create a custom switch configuration instead, click Create Custom Switch Configuration and skip to step 3.
  2. For a preconfigured switch profile:
    1. Provide the VDS configuration to be applied to the hosts in the cluster.
      VLAN ID is required and must be entered before proceeding to the next step.
      Note: A VLAN ID is not required for workload domains that do not have NSX Manager.
      The default VLAN ID is 0. Do not use VLAN ID 0, because NSX-Overlay will not work with it.
    2. After you update the physical network adapter details and VLAN ID information, click Next.
      Note: The numbering of physical NICs may become out of sync with the VDS numbering. To work around this issue, select Change Profile to reconfigure the mappings.
  3. To create a custom switch configuration:
    1. Click Create Custom Switch Configuration.
    2. Click Create Distributed Switch or Copy from Preconfigured Profile.
      With Create Distributed Switch, you specify the switch configuration from scratch. With Copy from Preconfigured Profile, you select a profile to start with and then edit the configuration.
    3. Provide the Distributed Switch Name, MTU, number of VDS uplinks, and uplink mapping.
      When mapping uplinks to physical network adapters, you cannot select a NIC type more than once.
    4. Click Configure Network Traffic and select the network traffic type to configure.
      You must configure all required network traffic types.
      Network Traffic Type Configuration
      Management, vMotion, vSAN, Public Enter the Distributed PortGroup Name, select the Load Balancing policy, and configure the uplinks. Click Save Configuration.
      NSX Select the Operational Mode (Standard, Enhanced Datapath - Standard, Enhanced Datapath - Performance) and Transport Zone Type(s) (NSX-Overlay, NSX-VLAN).
      Note: For a VI workload domain with DPU-backed hosts, you must select Enhanced Datapath - Standard.
      For NSX-Overlay:
      • Enter the NSX-Overlay Transport Zone Name.
      • Enter the VLAN ID and select the IP Allocation (DHCP or Static IP Pool).
      For NSX-VLAN:
      • Enter the NSX-VLAN Transport Zone Name.
      Configure the:
      • NSX transport node settings
      • Uplink mapping settings
      • NSX uplink profile settings

      Click Save Configuration.

      Note: The NSX network traffic type is not available if the workload domain does not include NSX Manager.
    5. Click Create Distributed Switch.
      Note: You cannot proceed until all mandatory traffic types are configured.
  4. Click Next.

Specify NFS Storage Details

If you are using NFS storage for this workload domain, you must provide the NFS share folder and IP address of the NFS server.

Procedure

  1. On the NFS Storage page, enter a name for the NFS datastore.
  2. Enter the path to the NFS share.
  3. Enter the IP address of the NFS server.
    Note: When creating additional datastores for an NFS share and server, use the same datastore name. If you use a different datastore name, vCenter overwrites the datastore name provided earlier.
  4. Click Next.

Select Licenses

On the Licenses page, select a licensing option.

Prerequisites

If you choose License Now, you must have added valid component license keys for the following products:
  • VMware vSAN (if using vSAN as the storage option)

    NFS does not require a license.

  • VMware NSX (if deploying a workload domain with NSX Manager)
  • VMware vSphere

    Since vSphere and vSAN licenses are per CPU, ensure that you have sufficient licenses for the ESXi hosts to be used for the workload domain.

For information on adding component license keys, see Add a Component License Key in the SDDC Manager UI.

Procedure

  1. Choose a licensing option.
    Option Description
    License Now Select a license key for each of the components in the VI workload domain.
    License Later VMware Cloud Foundation components are deployed in evaluation mode.
    Important: After your VI workload domain is created, you must switch to licensed mode by adding and assigning component license keys. See Add a Component License Key in the SDDC Manager UI.
  2. Click Next.

View Object Names

The Object Names page displays the vSphere objects that will be generated for the VI workload domain. Object names are based on the VI workload domain name.

Procedure

  1. Review the syntax that will be used for the vSphere objects generated for this domain.
  2. Click Next.

Review Details and Start the Creation Workflow

At the Review step of the wizard, review the information about the workload domain and start the creation workflow. You can also print the information or download a printable version to print later. It can take up to two hours for the domain to be created.

The Review page displays information about the resources and their configurations that are deployed when the workflow creates and deploys the virtual infrastructure for this workload domain.

The hosts that will be added to the workload domain are listed along with information such as the network pool they belong to, memory, CPU, and so on.

Procedure

  1. Scroll down the page to review the information.
  2. Click Finish.
    SDDC Manager validates the VI workload domain configuration and displays any errors.

    If validation succeeds, the Workload Domains page appears, and a notification is displayed letting you know that the VI workload domain is being added. Click View Task Status to view the domain creation tasks and subtasks.

    If a task fails, you can fix the issue and rerun the task. If the workload domain creation fails, contact Broadcom Support.
    Note: Multiple VMkernel adapters are created to test the vMotion network, which can change the MAC address to IP address mappings. If MAC address filtering is enabled on your physical infrastructure, this can cause issues such as vMotion network connectivity validation failures.

What to do next