Use the MultiDvsAutomator script to add the primary VxRail cluster if:
  • You have two system vSphere Distributed Switches: one for system traffic and one for overlay traffic.
  • You have one or two system vSphere Distributed Switches for system traffic and a separate vSphere Distributed Switch for overlay traffic.

Prerequisites

  • Create a local user in vCenter Server. This is required for the VxRail first run.
  • Image the VI workload domain nodes. For information on imaging the nodes, refer to the Dell EMC VxRail documentation.
  • Perform a VxRail first run of the VI workload domain nodes using the vCenter Server for that workload domain. For information on VxRail first run, refer to the Dell EMC VxRail documentation.
  • Download the Multi-Dvs-Automator-VCF-4.3.0-master.zip file from https://code.vmware.com/samples?id=7663, copy it to the /home/vcf directory on the SDDC Manager VM, and unzip it.
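Once the archive is on the SDDC Manager VM, the unzip step can also be done with Python's standard zipfile module. A minimal sketch, assuming the archive is already at the path above (the function name here is illustrative, not part of the automator):

```python
import zipfile

def extract_bundle(zip_path, dest_dir):
    """Extract the automator archive into dest_dir (e.g. /home/vcf)."""
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest_dir)
        return zf.namelist()  # the files that were extracted
```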

Procedure

  1. Using SSH, log in to the SDDC Manager VM with the user name vcf and the password you specified in the deployment parameter sheet.
  2. Enter su to switch to the root account.
  3. In the /home/vcf/Multi-Dvs-Automator-VCF-4.3.0-master directory, run python vxrailworkloadautomator.py.
  4. Enter the SSO user name and password.
  5. When prompted, select a workload domain to which you want to import the cluster.
  6. Select a cluster from the list of clusters that are ready to be imported.
  7. Enter passwords for the discovered hosts. You can either:
    • Enter a single password for all the discovered hosts.
    • Enter a password for each discovered host individually.
  8. Choose the vSphere Distributed Switch (vDS) to use for overlay traffic.
    • Create new DVS
      1. Enter a name for the new vSphere Distributed Switch.
      2. Enter a comma-separated list of the vmnics to use.
    • Use existing DVS
      1. Select an existing vSphere Distributed Switch.
      2. Select a portgroup on the vDS. The vmnics mapped to the selected port group are used to configure overlay traffic.
  9. Enter the Geneve VLAN ID.
  10. Choose the NSX Manager cluster.
    • Use existing NSX Manager cluster
      1. Enter the VLAN ID for the NSX-T host overlay network.
      2. Select an existing NSX Manager cluster.
    • Create a new NSX Manager cluster
      1. Enter the VLAN ID for the NSX-T host overlay network.
      2. Enter the NSX Manager Virtual IP (VIP) address and FQDN.
      3. Enter the FQDNs for the NSX Managers (nodes).
  11. Select the IP allocation method for the Host Overlay Network TEPs.
    • DHCP: With this option, VMware Cloud Foundation uses DHCP for the Host Overlay Network TEPs. A DHCP server must be configured on the NSX-T host overlay (Host TEP) VLAN. When NSX creates TEPs for the VI workload domain, they are assigned IP addresses from the DHCP server.
    • Static IP Pool: With this option, VMware Cloud Foundation uses a static IP pool for the Host Overlay Network TEPs. You can reuse an existing IP pool or create a new one. To create a new static IP pool, provide the following information:
      • Pool Name
      • Description
      • CIDR
      • IP Range
      • Gateway IP
      Make sure the IP range includes enough IP addresses for the number of hosts that will use the static IP pool. The number of IP addresses required depends on the number of pNICs on the ESXi hosts that are used for the vSphere Distributed Switch that handles host overlay networking. For example, a host with four pNICs that uses two pNICs for host overlay traffic requires two IP addresses in the static IP pool.
      Note: You cannot stretch a cluster that uses static IP addresses for the NSX-T Host Overlay Network TEPs.
  12. Enter and confirm the VxRail Manager root and admin passwords.
  13. Confirm the SSH thumbprints for VxRail Manager and the ESXi hosts.
  14. Select the license keys for VMware vSAN and NSX-T Data Center.
  15. Press Enter to begin the validation process.
  16. When validation succeeds, press Enter to import the primary VxRail cluster.
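The prompts in steps 8 and 9 expect a comma-separated vmnic list and a VLAN ID. A hypothetical validation sketch of those inputs (illustrative only, not the automator's actual code; standard 802.1Q VLAN IDs run from 0 to 4094):

```python
def parse_vmnics(raw):
    """Split a comma-separated vmnic list, e.g. 'vmnic2,vmnic3'."""
    nics = [n.strip() for n in raw.split(",") if n.strip()]
    if not all(n.startswith("vmnic") for n in nics):
        raise ValueError("entries must be vmnic names, e.g. vmnic2")
    return nics

def is_valid_vlan_id(vlan):
    """Valid 802.1Q VLAN IDs: 0-4094 (0 typically means untagged)."""
    return 0 <= vlan <= 4094
```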
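Step 11's sizing rule for a static IP pool (one TEP IP per overlay pNIC per host) can be checked with a short calculation. A sketch using Python's ipaddress module; the addresses below are hypothetical examples:

```python
import ipaddress

def range_size(start, end):
    """Inclusive count of addresses between start and end."""
    return int(ipaddress.ip_address(end)) - int(ipaddress.ip_address(start)) + 1

def pool_is_sufficient(start, end, hosts, overlay_pnics_per_host):
    """Each host needs one TEP IP per pNIC carrying host overlay traffic."""
    return range_size(start, end) >= hosts * overlay_pnics_per_host
```

For example, a range of 192.168.10.10-192.168.10.29 holds 20 addresses, which covers up to ten hosts that each use two pNICs for host overlay traffic.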