This version of the checklist is prepared using a fictional migration scenario. The entries are completed using the scenario information.

The planning tables in this document are organized assuming there is one source environment and one destination environment:

  • It is assumed that the source vSphere environment contains the existing workloads and networks that will be migrated. This environment can be legacy or relatively modern. See Software Version Requirements (Source Requirements).

  • It is assumed that the destination is a private cloud deployment, and is the target for HCX network extensions, migrations, and services. See Software Version Requirements (Destination Requirements).

Explanations are included in the regular pre-install checklists. This checklist omits them for brevity.

Scenario - XYZ Migration from Legacy DC to SDDC

The XYZ Widget Company plans to evacuate the XYZ Legacy DC into a newly built XYZ SDDC (in a new physical data center). HCX enables the evacuation of all workloads and the decommissioning of EOL hardware and EOS software without upgrades.

The objective for the HCX POC is to test the core VMware HCX capabilities that enable the evacuation of the legacy data center. The proof of concept follows these success criteria:

  • Deploy the HCX Service Mesh, configured to provide services for the DEV environment.

  • Extend the prepared test network (virtual machine VLAN 10 backed DPG).

  • Successfully perform HCX vMotion and Bulk migration for a test virtual machine from Legacy DC to the SDDC.

    • Understand the time to migrate VM data for each protocol.

    • Understand the ability to use bandwidth for migrations under the POC configuration.

  • Test Network Extension:

    • Verify Legacy VM to SDDC VM connectivity over the HCX L2 path.

    • Understand Legacy to SDDC latency.

  • Successfully perform reverse HCX migrations from SDDC to Legacy.

  • Successfully complete the Bulk migration of 3-5 VMs in parallel from Legacy DC to SDDC.

    • Test the Bulk migration failover scheduler.

    • Upgrade VM Hardware / VM Tools.

Scenario Environment Details

This section lists fictional environment details for the XYZ-Legacy and XYZ-SDDC environments.

Environment Facts

Source - Legacy DC:

  • vSphere 6.0 U1

  • Mgmt Cluster

  • Dev Cluster

  • Prod Cluster

  • Legacy DVS

Destination - XYZ-SDDC:

  • vSphere 7.0 U1

  • Mgmt Cluster

  • Compute-1 Cluster

  • Compute-2 Cluster

  • Mgmt DVS

  • Compute DVS

Cluster Networks

Source - Legacy DC:

  • ESXi Management prefix-24 VSS-VLAN-100

  • ESXi vMotion prefix-24 VSS-VLAN-101

Destination - XYZ-SDDC:

  • ESXi Management prefix-22

  • ESXi vMotion prefix-22

  • ESXi Replication prefix-22

VM Networking

Source - Legacy DC:

  • Single Legacy DC DVS.

  • DPG Test-VM-NET-10 prefix-24 VLAN 10.

  • 5 Test-VMs deployed for the POC.

Destination - XYZ-SDDC:

  • NSX-T Overlay TZ configured.

  • NSX-T T1 router created.

Storage

Source - Legacy DC:

  • Block Storage Central Array

Destination - XYZ-SDDC:

  • vSAN Storage

Site to Site Connectivity

Source - Legacy DC:

  • 1 Gbps Internet / WAN.

  • No dedicated Public IPs required for HCX (HCX will NAT outbound).

Destination - XYZ-SDDC:

  • 10 Gbps Internet / WAN.

  • 3 Public IPs reserved for HCX.
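The link speeds above feed directly into the "time to migrate VM data" success criterion. As a rough sketch only (the 1 Gbps / 500 Mbps figures come from this scenario; the 0.7 effective-throughput factor is an illustrative assumption, not an HCX-documented number), the transfer time can be estimated as:

```python
def estimate_transfer_hours(data_gb: float, link_mbps: float,
                            efficiency: float = 0.7) -> float:
    """Rough wall-clock estimate for moving data_gb of VM data over a
    link of link_mbps. The 0.7 default efficiency is an illustrative
    assumption (protocol overhead, competing traffic), not an HCX figure."""
    effective_mbps = link_mbps * efficiency
    seconds = (data_gb * 8 * 1000) / effective_mbps  # GB -> megabits
    return seconds / 3600

# Scenario: 500 Mbps of the Legacy DC 1 Gbps uplink is usable for migrations.
hours = estimate_transfer_hours(data_gb=2000, link_mbps=500)
```

A calculation like this helps set expectations before the POC; the actual observed rate per protocol (vMotion vs. Bulk) is what the success criteria ask you to measure.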

Collect vSphere Environment Details

Collect the relevant environment details in preparation for the installation. The bulleted entries provide context or describe requirements related to each Environment Detail entry.

XYZ Widget Company Scenario information is [in brackets].

▢ vSphere Version:

  Source: [XYZ Legacy is 6.0]

  Destination: [XYZ SDDC is 7.0 U1]

▢ Distributed Switches and Connected Clusters:

  Source: [Shared DVS: Mgmt, Dev, Prod]

  Destination: [Mgmt DVS: Mgmt Cluster; Compute DVS: Compute-1, Compute-2]

▢ ESXi Cluster Networks:

  Source: [ESXi Management VSS-VLAN-100; ESXi vMotion VSS-VLAN-101]

  Destination: [ESXi Management; ESXi vMotion; ESXi Replication]

▢ NSX version and configurations:

  Source: [No NSX in Legacy DC]

  Destination: [XYZ SDDC is running NSX-T 3.1, with an overlay Transport Zone that includes the Compute-1 and Compute-2 clusters]

▢ Verify all Software Version Requirements are satisfied.

  Source: [Verified XYZ Legacy DC meets all documented version requirements]

  Destination: [Verified XYZ SDDC meets all documented version requirements]

▢ vCenter Server URL:

  Source:

  Destination:

▢ vCenter administrator@vsphere.local or equivalent account.

  Source: [Verified administrator access to the vCenter Server]

  Destination: [Verified administrator access to the vCenter Server]

▢ Destination NSX Manager URL:

  Destination:

▢ NSX admin or equivalent account.

  Destination: [Verified the NSX admin account]

▢ Destination vCenter SSO URL:

  Destination: [embedded]

▢ DNS Server:

  Source:

  Destination:

▢ NTP Server:

  Source:

  Destination:

▢ HTTP Proxy Server:

  Source:

  Destination: [Verified XYZ does not use HTTP proxy servers]
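While filling in the DNS entries above, it is worth confirming that the vCenter and NSX Manager names actually resolve from the networks where HCX Manager will run, since deployments fail in non-obvious ways when they do not. A minimal sketch (the hostnames shown are hypothetical placeholders for this scenario, not real entries):

```python
import socket

def resolves(hostname: str) -> bool:
    """Return True if the configured DNS can resolve the hostname."""
    try:
        socket.getaddrinfo(hostname, None)
        return True
    except socket.gaierror:
        return False

# Hypothetical names for this scenario -- substitute the URLs collected above.
for name in ["vcenter.xyz-legacy.example", "nsx.xyz-sddc.example"]:
    print(name, "resolves" if resolves(name) else "DOES NOT resolve")
```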

Planning for the HCX Manager Deployments

XYZ Widget Company Scenario information is [in brackets].

▢ HCX Manager Placement:

  Source: [HCX Manager is deployed in the xyz-sddc1 ]

  Destination: [HCX Manager is deployed in the XYZ-SDDC-1 Mgmt cluster]

▢ HCX Manager Installer OVA:

  Source: [The OVA is downloaded from the SDDC-1 HCX Manager once that is online]

  Destination: [The OVA has been downloaded.]

▢ HCX Manager Hostname:

  Source:

  Destination:

▢ HCX Manager Internal IP Address:

  Source:

  Destination:

▢ HCX Manager External Name / Public IP Address:

  Source: [External Name/Pub IP assignment is not applicable]

  Destination: [ , Pub IP assignment]

▢ HCX Manager admin / root password:

  Source:

  Destination:

▢ Verify outbound access for the HCX Manager:

  Source: [Verified outbound NAT will allow outbound connections for legacy-hcxm]

  Destination: [Verified the HCXM network can reach * using HTTPS]

▢ HCX Activation / Licensing:

  Source: [The licenses for the sddc-1-hcx are used at the source]

  Destination: [The XYZ HCX POC uses trial licenses, which allow testing up to 20 migrations]
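The "Verify outbound access" row above is easy to check mechanically: a TCP connect on port 443 from the HCX Manager network confirms that NAT and the perimeter firewall permit the outbound HTTPS the manager needs (activation, OVA download). A small sketch; the target host is whatever endpoint your environment must reach, not a value from this checklist:

```python
import socket

def can_reach(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within
    timeout -- a quick proxy for 'outbound HTTPS works from here'."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run this from a machine on the same network segment as the HCX Manager so the result reflects the same NAT and firewall path the appliance will use.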

Planning the Compute Profile Configurations

XYZ Widget Company Scenario information is [in brackets].


In the XYZ Widget Company POC scenario, a single Compute Profile is used.

In production deployments, additional Compute Profiles can be created to scale out the HCX services, or to achieve connectivity when the environment has constraints such as per-cluster vMotion networks or DVS isolation.

▢ Compute Profile Name

  Source:

  Destination:

▢ Services to activate

  Source: [All services activated]

  Destination: [All services activated]

▢ Service Resources (Data Center or Cluster)

  Source: [legacy-dev cluster]

  Destination: [Compute-1, Compute-2]

▢ Deployment Resources (Cluster or Resource Pool)

  Source: [legacy-dev cluster]

  Destination:

▢ Deployment Resources (Datastore)

  Source:

  Destination:

▢ Distributed Switches or NSX Transport Zone for Network Extension

  Source:

  Destination: [sddc-1-nsxt-overlay-tz, includes compute clusters]

Planning the Network Profile Configurations

The Network Profiles abstract network consumption during HCX service deployments. See Network Profile Considerations and Concepts.

XYZ Widget Company Scenario information is [in brackets].

HCX Uplink

  Source: [Using Mgmt]

  Destination: [xyz-sddc-ext-net -]

HCX Management

  Source: [legacy-mgmt,, gw: .1, HCX range: -]

  Destination: [xyz-sddc-mgmt,, gw: .1 -]

HCX vMotion

  Source: [legacy-vmotion,, gw: .1, HCX range: -]

  Destination: [xyz-sddc-vmo,, gw: .1 -]

HCX Replication

  Source: [Using Mgmt]

  Destination: [xyz-sddc-repl,, gw: .1 -]
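Each Network Profile above reserves a gateway plus a small block of free IPs for the HCX appliances (the "HCX range" entries). Picking those ranges can be sketched with the standard `ipaddress` module; the CIDR and addresses below are illustrative (the RFC 5737 documentation range), not values from this scenario:

```python
import ipaddress

def reserve_hcx_range(cidr: str, in_use: set, count: int) -> list:
    """Pick `count` free addresses from a network-profile CIDR,
    skipping the network/broadcast addresses and anything already
    in use (e.g. the .1 gateway)."""
    net = ipaddress.ip_network(cidr)
    free = [str(h) for h in net.hosts() if str(h) not in in_use]
    if len(free) < count:
        raise ValueError(f"only {len(free)} free IPs in {cidr}")
    return free[:count]

# e.g. gateway .1 is in use; reserve 4 IPs for HCX appliances
reserve_hcx_range("192.0.2.0/28", in_use={"192.0.2.1"}, count=4)
# -> ['192.0.2.2', '192.0.2.3', '192.0.2.4', '192.0.2.5']
```

Doing this arithmetic up front avoids discovering mid-deployment that a profile's pool is too small for the appliances a Service Mesh will place on it.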

Source HCX to Destination HCX IP Connectivity

XYZ Widget Company Scenario information is [in brackets].

Bandwidth for Migrations

[XYZ Legacy DC has 1 Gbps Internet uplinks, of which 500 Mbps can be used for migrations. XYZ-SDDC has 10 Gbps available.]

Public IPs & NAT

[XYZ Legacy DC HCX components will SNAT.

XYZ Legacy DC Public IP addresses have been allocated as follows:

  • One for the HCX Manager (configured as an inbound DNAT rule).

  • Two for the HCX Uplink Network Profile (one for the IX appliance and one for the NE appliance).]

Source HCX to Destination HCX Network Ports

[XYZ Legacy DC perimeter firewall has been configured to allow UDP-4500 and HTTPS outbound.

XYZ SDDC perimeter firewall has been configured to allow HT ]

HCX Network Ports

  • A full list of port requirements for HCX can be found in the VMware Ports and Protocols documentation.
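The scenario's firewall entries name two flows: UDP-4500 for the HCX tunnel transport and HTTPS (TCP-443) for management. A quick coverage check of proposed firewall rules against those flows can be sketched as below; note the set here contains only the two flows this scenario mentions, not the complete HCX port list:

```python
# Flows named in this scenario's firewall entries: tunnel transport
# (UDP 4500) and management HTTPS (TCP 443). This is NOT the complete
# HCX port list -- consult the official port requirements for that.
REQUIRED_FLOWS = {("udp", 4500), ("tcp", 443)}

def missing_rules(allowed: set) -> set:
    """Return the required flows not covered by the allowed rules,
    each rule being a (protocol, port) tuple."""
    return REQUIRED_FLOWS - allowed

# Example: a firewall that only permits HTTPS outbound still
# needs UDP 4500 opened before the Service Mesh tunnels can form.
print(missing_rules({("tcp", 443)}))
```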