This tab summarizes the infrastructure configuration requirements that must be satisfied before deploying Cloud Foundation.
VMware Cloud Builder runs a platform audit before starting the deployment to verify that the requirements listed on this tab are met. If the audit fails, you cannot proceed with the deployment.
For detailed planning guidance, see the Planning and Preparation Workbook.
Network
- Top of Rack switches are configured. Each host and NIC in the management domain must have the same network configuration. No Ethernet link aggregation technology (LAG/vPC/LACP) is used.
- IP ranges, subnet masks, and a reliable L3 (default) gateway are configured for each VLAN.
- Jumbo Frames (MTU 9000) are recommended on all VLANs. At a minimum, an MTU of 1600 is required on the NSX-T Host Overlay (Host TEP) and NSX-T Edge Overlay (Edge TEP) VLANs end-to-end through your environment.
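One way to confirm the MTU end-to-end is `vmkping` from an ESXi host with the don't-fragment bit set. The sketch below derives the ICMP payload size (MTU minus 28 bytes of IP and ICMP headers) and prints the command to run on a host; the peer TEP address is a hypothetical placeholder for an address from your environment:

```shell
# Placeholder peer TEP address; substitute a Host or Edge TEP IP from
# your environment.
PEER_TEP="192.168.130.12"

for MTU in 1600 9000; do
  # ICMP payload = MTU minus 20-byte IP header and 8-byte ICMP header.
  PAYLOAD=$((MTU - 28))
  # ++netstack=vxlan sends the ping from a TEP vmkernel interface on the
  # overlay TCP/IP stack; -d sets the don't-fragment bit.
  echo "MTU ${MTU}: vmkping ++netstack=vxlan -d -s ${PAYLOAD} ${PEER_TEP}"
done
```

A reply at payload size 1572 confirms MTU 1600 on the path; 8972 confirms MTU 9000.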
- VLANs for management, vMotion, vSAN, NSX-T Host Overlay (Host TEP), NSX-T Edge Overlay (Edge TEP), and NSX uplink networks are created and selectively tagged to host ports based on the vSphere Distributed Switch profile you select. Each VLAN is 802.1q tagged. The NSX-T Host Overlay (Host TEP) VLAN and NSX-T Edge Overlay (Edge TEP) VLAN are routed to each other.
- Management IP is VLAN-backed and configured on the hosts. vMotion and vSAN IP ranges are configured during the bring-up process.
- If using DHCP for NSX-T Host Overlay TEPs: DHCP with an appropriate scope size (one IP per physical NIC per host) is configured for the NSX Host Overlay (Host TEP) VLAN.
- If using a static IP pool for NSX-T Host Overlay TEPs: Make sure you have enough IP addresses available for the number of hosts that will use the static IP Pool. Each host requires an IP address for each physical NIC (pNIC) that is used for the vSphere Distributed Switch that handles host overlay traffic. For example, a host with four pNICs that uses two pNICs for host overlay traffic requires two IP addresses in the static IP pool.
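The sizing rule above reduces to a simple calculation. The host and pNIC counts below are example values matching the scenario described in the bullet:

```shell
# Static IP pool sizing for Host Overlay TEPs (example values).
HOSTS=4                    # hosts that will draw from the static IP pool
OVERLAY_PNICS_PER_HOST=2   # pNICs per host on the overlay vSphere Distributed Switch
POOL_SIZE=$((HOSTS * OVERLAY_PNICS_PER_HOST))
echo "Static IP pool must contain at least ${POOL_SIZE} addresses"
```

Remember to size the pool for any hosts you plan to add later, since expanding a cluster draws from the same pool.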
- To use Application Virtual Networks (AVNs) for vRealize Suite components you also need:
  - Top of Rack (ToR) switches configured with the Border Gateway Protocol (BGP), including Autonomous System (AS) numbers, BGP neighbor passwords, and interfaces to connect with NSX-T Edge nodes.
  - Two VLANs configured and presented to all ESXi hosts to support the uplink configuration between the ToR switches and NSX-T Edge nodes for outbound communication.
Physical Hardware and ESXi Hosts
- All servers are vSAN-compliant and certified on the VMware Compatibility Guide, including BIOS, HBA, SSD, HDD, and so on.
- Identical hardware (CPU, Memory, NICs, SSD/HDD, and so on) within the management cluster is highly recommended. Refer to vSAN documentation for minimum configuration.
- Hardware and firmware (including HBA and BIOS) are configured for vSAN.
- One physical NIC on each host is configured and connected to the vSphere Standard switch. The second physical NIC is not configured.
- Physical hardware health status is "healthy" without any errors.
- The ESXi version matches the build listed in the Cloud Foundation Bill of Materials (BOM). See the VMware Cloud Foundation Release Notes for the BOM.
- The default port group, VM Network, is configured with the same VLAN ID as the management network.
- A static IP address is assigned to the management interface (vmk0) on each ESXi host.
- The TSM-SSH service is running on each ESXi host, with its startup policy set to Start and stop with host.
- All hosts are configured to synchronize with a central time server (NTP), with the NTP service startup policy set to Start and stop with host.
- Each ESXi host has a non-expired license. The bring-up process configures the permanent license.
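Several of the host-level items above can be spot-checked from each ESXi host's shell with standard esxcli commands. The block below prints them as a reference list (shown as text rather than executed, since they only run on an ESXi host):

```shell
# Reference list of ESXi shell commands for spot-checking the items above.
# Run them on each host, for example over SSH.
CHECKS='
esxcli system version get                       # ESXi build number vs. the BOM
esxcli network ip interface ipv4 get -i vmk0    # static management IP on vmk0
esxcli network vswitch standard portgroup list  # VM Network port group VLAN ID
esxcli system ntp get                           # NTP servers (ESXi 7.0 U1 and later)
'
echo "$CHECKS"
```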
If you used the VMware Imaging Appliance service to install ESXi on your hosts and you completed the Post-Imaging Tasks, your hosts are already configured properly and ready for bring-up.
IP Addresses and Hostnames
Allocate IP addresses and hostnames for the following components:
- ESXi hosts
- vCenter Server
- NSX-T Management cluster
- SDDC Manager
- NSX Edge VMs (if AVN is enabled)