This version of the checklist is prepared using a fictional migration scenario. The entries are completed using the scenario information.

The planning tables in this document are organized assuming there is one source environment and one destination environment:

  • The source vSphere environment is assumed to contain the existing workloads and networks to be migrated. This environment can be legacy or relatively modern. See Software Version Requirements (Source Requirements).

  • The destination is assumed to be a private cloud deployment and is the target for HCX network extensions, migrations, and services. See Software Version Requirements (Destination Requirements).

Explanations are included in the regular pre-install checklists. This checklist omits them for brevity.

Scenario - XYZ Migration from Legacy DC to SDDC

The XYZ Widget Company plans to evacuate the XYZ Legacy DC into a newly built XYZ SDDC (in a new physical data center). HCX enables the evacuation of all workloads and the decommissioning of end-of-life (EOL) hardware and end-of-support (EOS) software without upgrades.

The objective of the HCX POC is to test the core VMware HCX capabilities that enable the evacuation of the legacy data center. The proof of concept uses the following success criteria:

  • Deploy the HCX Service Mesh, configured to provide services for the DEV environment.

  • Extend the prepared test network (the VLAN 10-backed virtual machine DPG).

  • Successfully perform HCX vMotion and Bulk migration of a test virtual machine from the Legacy DC to the SDDC.

    • Understand the time to migrate VM data with each protocol.

    • Understand how much of the available bandwidth migrations can use under the POC configuration.

  • Test Network Extension:

    • Verify Legacy VM to SDDC VM connectivity over the HCX L2 path (see the connectivity check sketch after this list).

    • Understand Legacy to SDDC latency.

  • Successfully perform reverse HCX migrations from SDDC to Legacy.

  • Successfully complete a Bulk migration of 3-5 VMs in parallel from the Legacy DC to the SDDC.

    • Test the Bulk migration failover scheduler.

    • Upgrade VM Hardware / VM Tools.
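The connectivity and latency checks above can be scripted from the test VMs. The following is a minimal sketch, assuming hypothetical test VM addresses 192.168.10.21 (Legacy DC) and 192.168.10.22 (SDDC) on the extended Test-VM-NET-10 segment, and a Linux guest; it measures round-trip latency across the HCX L2 path with ICMP.

    import subprocess

    # Hypothetical test VM addresses on the extended 192.168.10.0/24 segment.
    TARGETS = {
        "legacy-vm": "192.168.10.21",  # VM remaining in the Legacy DC
        "sddc-vm": "192.168.10.22",    # VM migrated to the SDDC
    }

    def ping(host: str, count: int = 10) -> str:
        # Linux ping; -c sets the probe count (use -n on Windows guests).
        result = subprocess.run(["ping", "-c", str(count), host],
                                capture_output=True, text=True, timeout=60)
        return result.stdout.strip()

    for name, addr in TARGETS.items():
        # The final line of ping output summarizes min/avg/max RTT.
        print(name, ping(addr).splitlines()[-1])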

Scenario Environment Details

Fictional environment details for the XYZ-Legacy and the XYZ-SDDC.

Environment Facts

vSphere

  Source - Legacy DC:

  • vSphere 6.0 U1
  • Mgmt Cluster
  • Dev Cluster
  • Prod Cluster
  • Legacy DVS

  Destination - XYZ-SDDC:

  • vSphere 7.0 U1
  • Mgmt Cluster
  • Compute-1 Cluster
  • Compute-2 Cluster
  • Mgmt DVS
  • Compute DVS

Cluster Networks

  Source - Legacy DC:

  • ESXi Management 192.168.100.0/24 VSS-VLAN-100
  • ESXi vMotion 192.168.101.0/24 VSS-VLAN-101

  Destination - XYZ-SDDC:

  • ESXi Management 10.0.100.0/22
  • ESXi vMotion 10.0.104.0/22
  • ESXi Replication 10.0.108.0/22

VM Networking

  Source - Legacy DC:

  • Single Legacy DC DVS.
  • DPG Test-VM-NET-10: 192.168.10.0/24, VLAN 10.
  • 5 test VMs deployed for the POC.

  Destination - XYZ-SDDC:

  • NSX-T overlay Transport Zone configured.
  • NSX-T T1 router created.

Storage

  Source - Legacy DC:

  • Block storage central array.

  Destination - XYZ-SDDC:

  • vSAN storage.

Site to Site Connectivity

  Source - Legacy DC:

  • 1 Gbps Internet / WAN.
  • No dedicated public IPs required for HCX (HCX will NAT outbound).

  Destination - XYZ-SDDC:

  • 10 Gbps Internet / WAN.
  • 3 public IPs reserved for HCX.
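The site-to-site bandwidth above bounds the migration rate. As a rough worked example (the 500 Mbps usable share and the 0.7 efficiency factor are scenario and planning assumptions, not measurements):

    # Rough migration-time estimate for the POC scenario.
    LINK_MBPS = 500     # assumed share of the 1 Gbps legacy uplink for migrations
    EFFICIENCY = 0.7    # assumed protocol/WAN overhead factor

    def hours_to_transfer(vm_gb: float) -> float:
        bits = vm_gb * 8 * 1000**3                      # GB -> bits (decimal units)
        return bits / (LINK_MBPS * 1e6 * EFFICIENCY) / 3600

    # A 100 GB test VM: ~0.64 hours, before any compression/dedupe savings.
    print(f"{hours_to_transfer(100):.2f} h")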

Collect vSphere Environment Details

Collect the relevant environment details in preparation for the installation. The bulleted entries provide context or describe requirements related to each Environment Detail entry. A scripted collection sketch follows the checklist.

XYZ Widget Company Scenario information is [in brackets].

▢ vSphere Version:

  Source: [XYZ Legacy is 6.0 U1]

  Destination: [XYZ SDDC is 7.0 U1]

▢ Distributed Switches and Connected Clusters:

  Source: [Shared DVS: Mgmt, Dev, Prod]

  Destination: [Mgmt DVS: Mgmt Cluster; Compute DVS: Compute-1, Compute-2]

▢ ESXi Cluster Networks:

  Source: [ESXi Management 192.168.100.0/24 VSS-VLAN-100; ESXi vMotion 192.168.101.0/24 VSS-VLAN-101]

  Destination: [ESXi Management 10.0.100.0/22; ESXi vMotion 10.0.104.0/22; ESXi Replication 10.0.108.0/22]

▢ NSX version and configurations:

  Source: [No NSX in Legacy DC]

  Destination: [XYZ SDDC is running NSX-T 3.1, with an overlay Transport Zone that includes the Compute-1 and Compute-2 clusters]

▢ Verify all Software Version Requirements are satisfied:

  Source: [Verified XYZ Legacy DC meets all documented version requirements]

  Destination: [Verified XYZ SDDC meets all documented version requirements]

▢ vCenter Server URL:

  Source: [https://legacy-vcenter]

  Destination: [https://sddc-1-vcenter.xyz.com]

▢ vCenter administrator@vsphere.local or equivalent account:

  Source: [Verified administrator access to the vCenter Server]

  Destination: [Verified administrator access to the vCenter Server]

▢ Destination NSX Manager URL:

  Source: [N/A]

  Destination: [https://sddc-1-nsxm.xyz.com]

▢ NSX admin or equivalent account:

  Source: [N/A]

  Destination: [Verified the NSX admin account]

▢ Destination vCenter SSO URL:

  Source: [Embedded]

  Destination: [sddc-1-psc.xyz.com]

▢ DNS Server:

  Source: [legacy-dns.xyz.com]

  Destination: [dns.xyz.com]

▢ NTP Server:

  Source: [legacy-ntp.xyz.com]

  Destination: [ntp.xyz.com]

▢ HTTP Proxy Server:

  Source: [proxy.xyz.com]

  Destination: [Verified the XYZ SDDC does not use an HTTP proxy server]
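Much of this inventory can be collected programmatically rather than by hand. A minimal sketch using the pyVmomi SDK (assuming it is installed via pip install pyvmomi; the host name and account are the fictional scenario values, so adjust per environment):

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Fictional scenario vCenter; substitute the real host and credentials.
    ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
    si = SmartConnect(host="legacy-vcenter", user="administrator@vsphere.local",
                      pwd="***", sslContext=ctx)
    content = si.RetrieveContent()

    print("vCenter:", content.about.fullName)  # reports the vSphere version

    # List distributed switches and clusters in each datacenter.
    for dc in content.rootFolder.childEntity:
        if isinstance(dc, vim.Datacenter):
            for net in dc.networkFolder.childEntity:
                if isinstance(net, vim.DistributedVirtualSwitch):
                    print("DVS:", net.name)
            for entity in dc.hostFolder.childEntity:
                if isinstance(entity, vim.ClusterComputeResource):
                    print("Cluster:", entity.name)

    Disconnect(si)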

Planning for the HCX Manager Deployments

XYZ Widget Company Scenario information is [in brackets].

▢ HCX Manager Placement:

  Source: [HCX Manager is deployed in the XYZ Legacy DC Mgmt cluster]

  Destination: [HCX Manager is deployed in the XYZ-SDDC-1 Mgmt cluster]

▢ HCX Manager Installer OVA:

  Source: [The OVA is downloaded from the SDDC-1 HCX Manager once it is online]

  Destination: [The OVA has been downloaded]

▢ HCX Manager Hostname:

  Source: [legacy-hcxm.xyz.com]

  Destination: [sddc-1-hcxm.xyz.com]

▢ HCX Manager Internal IP Address:

  Source: [192.168.100.50]

  Destination: [10.0.100.50]

▢ HCX Manager External Name / Public IP Address:

  Source: [External name / public IP assignment is not applicable]

  Destination: [sddc1-hcx.xyz.com, public IP assignment 192.0.2.50]

▢ HCX Manager admin / root password:

▢ Verify outbound access for the HCX Manager (a reachability sketch follows this table):

  Source: [Verified outbound NAT will allow outbound connections for legacy-hcxm]

  Destination: [Verified the HCX Manager network can reach *.vmware.com using HTTPS]

▢ HCX Activation / Licensing:

  Source: [The sddc-1-hcx licensing is used at the source]

  Destination: [The XYZ HCX POC uses trial licenses, which allow testing up to 20 migrations]
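The outbound access verification can be scripted from a host on the HCX Manager network. A minimal sketch; connect.hcx.vmware.com is used here as an example VMware endpoint (an assumption; confirm the current endpoint list in the HCX documentation):

    import socket
    import ssl

    # Example endpoint for HCX activation; confirm against the HCX docs.
    HOST, PORT = "connect.hcx.vmware.com", 443

    def check_https(host: str, port: int = 443, timeout: float = 5.0) -> None:
        # A completed TLS handshake proves outbound HTTPS works end to end.
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                print(f"{host}:{port} reachable, {tls.version()}")

    if __name__ == "__main__":
        check_https(HOST, PORT)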

Planning the Compute Profile Configurations

XYZ Widget Company Scenario information is [in brackets].

Note:

In the XYZ Widget Company POC scenario, a single Compute Profile is used.

In production deployments, you can create additional Compute Profiles to scale out the HCX services or to achieve connectivity when the environment has constraints such as per-cluster vMotion networks or isolated distributed switches.

▢ Compute Profile Name:

  Source: [Legacy-DC-CP]

  Destination: [sddc-1-CP]

▢ Services to activate:

  Source: [All services activated]

  Destination: [All services activated]

▢ Service Resources (Data Center or Cluster):

  Source: [legacy-dev cluster]

  Destination: [Compute-1, Compute-2]

▢ Deployment Resources (Cluster or Resource Pool):

  Source: [legacy-dev cluster]

  Destination: [sddc-1-compute-1]

▢ Deployment Resources (Datastore):

  Source: [legacy-block-array]

  Destination: [sddc-1-vsan-datastore]

▢ Distributed Switches or NSX Transport Zone for Network Extension:

  Source: [legacy-shared-dvs]

  Destination: [sddc-1-nsxt-overlay-tz, includes the compute clusters]

Planning the Network Profile Configurations

Network Profiles abstract network consumption during HCX service deployments. See Network Profile Considerations and Concepts. A range-validation sketch follows the table below.

XYZ Widget Company Scenario information is [in brackets].

HCX Uplink

  Source: [Using Mgmt]

  Destination: [xyz-sddc-ext-net 192.0.2.11 - 192.0.2.15]

HCX Management

  Source: [legacy-mgmt, 192.168.100.0/24, gw: .1, HCX range: 192.168.100.201 - 192.168.100.205]

  Destination: [xyz-sddc-mgmt, 10.0.100.0/22, gw: .1, HCX range: 10.0.100.201 - 10.0.100.205]

HCX vMotion

  Source: [legacy-vmotion, 192.168.101.0/24, gw: .1, HCX range: 192.168.101.201 - 192.168.101.205]

  Destination: [xyz-sddc-vmo, 10.0.104.0/22, gw: .1, HCX range: 10.0.104.201 - 10.0.104.205]

HCX Replication

  Source: [Using Mgmt]

  Destination: [xyz-sddc-repl, 10.0.108.0/22, gw: .1, HCX range: 10.0.108.201 - 10.0.108.205]
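A quick sanity check that each HCX IP range sits inside its profile's subnet and avoids the gateway can be scripted with Python's ipaddress module (the values below are the scenario's network profile entries):

    import ipaddress

    # (subnet, gateway, first HCX IP, last HCX IP) per the scenario profiles.
    PROFILES = {
        "legacy-mgmt":    ("192.168.100.0/24", "192.168.100.1", "192.168.100.201", "192.168.100.205"),
        "legacy-vmotion": ("192.168.101.0/24", "192.168.101.1", "192.168.101.201", "192.168.101.205"),
        "xyz-sddc-mgmt":  ("10.0.100.0/22", "10.0.100.1", "10.0.100.201", "10.0.100.205"),
        "xyz-sddc-vmo":   ("10.0.104.0/22", "10.0.104.1", "10.0.104.201", "10.0.104.205"),
        "xyz-sddc-repl":  ("10.0.108.0/22", "10.0.108.1", "10.0.108.201", "10.0.108.205"),
    }

    for name, (net, gw, first, last) in PROFILES.items():
        subnet = ipaddress.ip_network(net)
        lo, hi = ipaddress.ip_address(first), ipaddress.ip_address(last)
        ok = lo in subnet and hi in subnet and lo <= hi
        ok = ok and not (lo <= ipaddress.ip_address(gw) <= hi)  # gateway outside range
        print(f"{name}: {'OK' if ok else 'CHECK'} ({int(hi) - int(lo) + 1} addresses)")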

Source HCX to Destination HCX IP Connectivity

XYZ Widget Company Scenario information is [in brackets].

Bandwidth for Migrations

[XYZ Legacy DC has 1 Gbps Internet uplinks, of which 500 Mbps can be used for migrations. XYZ-SDDC has 10 Gbps available.]

Public IPs & NAT

[XYZ Legacy DC HCX components will SNAT outbound.

XYZ SDDC public IP addresses have been allocated as follows:

  • One for the HCX Manager (configured as an inbound DNAT rule).

  • Two for the HCX Uplink Network Profile (one for the IX appliance and one for the NE appliance).]

Source HCX to Destination HCX Network Ports

[XYZ Legacy DC perimeter firewall has been configured to allow UDP-4500 and HTTPS outbound.

XYZ SDDC perimeter firewall has been configured to allow UDP-4500 and HTTPS inbound.]

HCX Network Ports

  • A full list of port requirements for HCX can be found at ports.vmware.com. A basic reachability probe is sketched below.
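TCP 443 proves end-to-end reachability when the connection completes; UDP 4500 (used for the IPsec tunnels) is connectionless, so a send only confirms the datagram left the host. A minimal sketch run from the source HCX network, using the scenario's sddc1-hcx.xyz.com endpoint:

    import socket

    DEST = "sddc1-hcx.xyz.com"  # scenario destination HCX endpoint

    # TCP 443 (HTTPS): a completed connect proves end-to-end reachability.
    with socket.create_connection((DEST, 443), timeout=5) as s:
        print("TCP 443 reachable:", s.getpeername())

    # UDP 4500 (IPsec NAT-T): connectionless, so this only confirms the
    # datagram was sent; use the HCX tunnel diagnostics to confirm health.
    u = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    u.sendto(b"", (DEST, 4500))
    u.close()
    print("UDP 4500 datagram sent (delivery not confirmed)")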