This tab summarizes the infrastructure configuration requirements that must be satisfied before you deploy VMware Cloud Foundation.

The VMware Cloud Builder appliance runs a platform audit before starting the deployment to verify that the requirements listed on this tab are met. If the audit fails, you cannot proceed with the deployment.

For detailed planning guidance, see the Planning and Preparation Workbook.

VxRail

  • The VxRail first run is completed and vCenter Server and VxRail Manager VMs are deployed.
  • The vCenter Server version matches the build listed in the Cloud Foundation Bill of Materials (BOM). See the VMware Cloud Foundation Release Notes for the BOM.

Physical Network

  • Top of Rack (ToR) switches are configured. Each host and NIC in the management domain must have the same network configuration. Do not use Ethernet link aggregation (LAG/vPC/LACP).
  • Jumbo Frames (MTU 9000) are recommended on all VLANs. At a minimum, an MTU of 1600 is required on the NSX-T Host Overlay (Host TEP) and NSX-T Edge Overlay (Edge TEP) VLANs, end-to-end through your environment. See the MTU verification sketch after this list.
  • If using DHCP for NSX-T Host Overlay TEPs: DHCP with an appropriate scope size (one IP address per physical NIC per host) is configured for the NSX-T Host Overlay (Host TEP) VLAN.
  • If using a static IP pool for NSX-T Host Overlay TEPs: Make sure enough IP addresses are available for the number of hosts that will use the static IP pool. Each host requires one IP address for each physical NIC (pNIC) that backs the vSphere Distributed Switch handling host overlay traffic. For example, a host with four pNICs that uses two of them for host overlay traffic requires two IP addresses in the pool. See the TEP sizing sketch after this list.
  • To use Application Virtual Networks (AVNs) for vRealize Suite components, you also need:
    • ToR switches configured with the Border Gateway Protocol (BGP), including Autonomous System (AS) numbers, BGP neighbor passwords, and interfaces to connect with the NSX-T Edge nodes.
    • Two VLANs configured and presented to all ESXi hosts to support the uplink configuration between the ToR switches and the NSX-T Edge nodes for outbound communication.
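
The end-to-end MTU on the overlay VLANs can be spot-checked before deployment. Below is a minimal sketch, assuming a Linux machine with an interface on the VLAN under test and the standard iputils ping; the TEP addresses are placeholders. (From an ESXi host itself, the vmkping utility with the overlay TCP/IP stack is typically used for the same purpose.) A do-not-fragment ping whose payload equals the target MTU minus 28 bytes of IP and ICMP headers only succeeds if every hop forwards the larger frames.

    # Sketch: verify that an overlay VLAN carries large frames end to end
    # without fragmentation. The peer addresses below are placeholders.
    import subprocess

    TARGET_MTU = 1600              # minimum required on Host/Edge Overlay VLANs
    PAYLOAD = TARGET_MTU - 28      # 28 bytes = 20 (IP header) + 8 (ICMP header)
    TEP_PEERS = ["172.16.254.11", "172.16.254.12"]   # placeholder TEP addresses

    for peer in TEP_PEERS:
        # -M do: set don't-fragment, -s: payload size, -c 3: three probes
        result = subprocess.run(
            ["ping", "-M", "do", "-s", str(PAYLOAD), "-c", "3", peer],
            capture_output=True, text=True,
        )
        print(f"{peer}: {'OK' if result.returncode == 0 else 'MTU problem'}")

For Jumbo Frames, repeat the test with TARGET_MTU = 9000 (payload 8972).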

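To size the DHCP scope or static IP pool for the Host Overlay TEPs, multiply the number of hosts by the number of pNICs each host dedicates to the overlay distributed switch. The arithmetic is sketched below; the cluster layout is a placeholder, and adding headroom for planned expansion avoids resizing the scope or pool later.

    # Sketch: size the DHCP scope or static IP pool for NSX-T Host Overlay TEPs.
    # One address is needed per physical NIC that backs the overlay vSphere
    # Distributed Switch on each host. The cluster layout is a placeholder.

    def tep_addresses_required(clusters):
        """clusters: list of (host_count, overlay_pnics_per_host) tuples."""
        return sum(hosts * pnics for hosts, pnics in clusters)

    current = [(4, 2)]   # 4-host management cluster, 2 pNICs for overlay traffic
    future = [(3, 2)]    # planned expansion with the same host profile
    print("Minimum scope/pool size:", tep_addresses_required(current))
    print("With expansion headroom:", tep_addresses_required(current + future))
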
Physical Hardware and ESXi Hosts

  • vSAN cluster with a minimum of four hosts. vSphere Distributed Switch is configured on the cluster. Management, vSAN, and vMotion networks are created. Management network binding type is Ephemeral.
  • Identical hardware (CPU, memory, NICs, SSD/HDD, and so on) within the management cluster is highly recommended. Refer to the vSAN documentation for the minimum configuration.
  • The ESXi version matches the build listed in the Cloud Foundation Bill of Materials (BOM). See the VMware Cloud Foundation Release Notes for the BOM, and the build-check sketch after this list for one way to confirm the deployed builds.
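
One way to confirm that the deployed vCenter Server and ESXi builds match the BOM is to read them through the vSphere API. The following is a minimal pyVmomi sketch, assuming network access to the vCenter Server deployed by the VxRail first run; the host name and credentials are placeholders, and the reported builds are compared manually against the BOM in the release notes.

    # Sketch: report vCenter Server and ESXi versions/builds for comparison
    # against the Cloud Foundation BOM. Connection details are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # lab shortcut; use valid certificates in production
    si = SmartConnect(host="vcenter01.rainpole.local",
                      user="administrator@vsphere.local",
                      pwd="********", sslContext=ctx)

    about = si.content.about
    print(f"vCenter Server: {about.version} build {about.build}")

    # List every ESXi host known to this vCenter Server with its build number.
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        product = host.config.product        # vim.AboutInfo for the host
        print(f"{host.name}: ESXi {product.version} build {product.build}")

    Disconnect(si)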

DNS Configuration

Host names for the following components must resolve correctly in DNS: forward and reverse lookups, using both short names and fully qualified domain names (FQDNs). See the resolution-check sketch after this list.
  • ESXi hosts
  • vCenter Server
  • NSX-T Management cluster
  • SDDC Manager
  • VxRail Manager
  • NSX-T Edge VMs (if AVNs are enabled)
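
A quick way to validate name resolution before running the platform audit is to loop over the component host names and check both lookup directions with both name forms. The following is a minimal sketch using only the Python standard library; the short names and DNS domain are placeholders, and it should run on a machine that uses the same DNS servers (and search domain) as the SDDC components.

    # Sketch: check forward, reverse, short-name, and FQDN resolution for the
    # components listed above. Names and domain are placeholders.
    import socket

    DOMAIN = "rainpole.local"                  # placeholder DNS domain
    SHORT_NAMES = ["esxi01", "esxi02", "esxi03", "esxi04",
                   "vcenter01", "nsx01", "sddcmanager01", "vxrailmanager01"]

    for short in SHORT_NAMES:
        for name in (short, f"{short}.{DOMAIN}"):   # short name, then FQDN
            try:
                ip = socket.gethostbyname(name)     # forward lookup
                rev = socket.gethostbyaddr(ip)[0]   # reverse lookup
                print(f"{name:<40} -> {ip:<15} reverse -> {rev}")
            except (socket.gaierror, socket.herror) as err:
                print(f"{name:<40} FAILED: {err}")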