This install checklist is written for HCX deployments with VMware Cloud on AWS as the target, where HCX is automatically installed by enabling the service. (In private cloud HCX deployments, the user handles the full HCX deployment and configuration for the destination environment.)

This document is presented using on-premises as the source and a VMC SDDC as the destination. All the checklist tables follow this format:

  • It is assumed that the on-premises vSphere environment contains the existing workloads and networks that will be migrated. This environment can be a legacy or modern vSphere version.

  • It is assumed that the destination is a VMC SDDC instance.

HCX Use Cases and POC Success Criteria

"How am I doing with HCX? How will I define a success in my proof of concept?"

▢ What defines the success criteria for the HCX proof of concept?

Clearly define the success criteria. For example:

  • Extend 2 test networks to VMC.

  • Live migrate a virtual machine.

  • Test HCX L2 connectivity over the extended network.

  • Reverse migrate a VM.

  • Bulk Migrate a VM.

▢ Ensure the required features will be available with the trial or full licenses obtained.

  • HCX is an available add-on included with a VMC SDDC.

  • The add-on gives access to the HCX Advanced features.

  • The add-on gives access to select HCX Enterprise-class services: Replication Assisted vMotion, Mobility Optimized Networking, Application Path Resiliency, TCP Flow Conditioning, and Mobility Groups.

See VMware HCX Services.

Collect vSphere Environment Details

This section identifies vSphere-related information about the environments that is relevant for HCX deployments.

Environment Detail

On-premises Environment

VMC SDDC

▢ vSphere Version:

  • vSphere version must be within Technical Guidance.

  • N/A. SDDC instances run supported software versions.

▢ Distributed Switches and Connected Clusters

  • Understand the relationships between clusters and the Distributed Switches.

  • Imported Distributed Switches are not supported for the HCX-NE service.

  • N/A. The SDDC compute profile will automatically include the workload clusters.

▢ ESXi Cluster Networks

  • Identify the ESXi Management, vMotion, and Replication networks (if they exist): VSS port group or distributed port group names, VLANs, and subnets.

  • If these networks vary from cluster to cluster, additional configuration will be needed.

  • Identify available IPs (HCX will participate in these networks).

  • N/A. In VMC, HCX is automatically installed.

▢ NSX version and configurations:

  • N/A. In VMC, HCX is automatically installed.

▢ Review and ensure all Software Version Requirements are satisfied.

▢ vCenter Server URL:

  • https://vcenter-ip-or-fqdn

  • The VMC URLs are listed in vmc.vmware.com, under SDDCs > Settings.

▢ Administrative accounts

  • Know the administrator @vsphere.local or equivalent account for the vCenter Server registration step.

  • In VMC, know how to locate the cloudadmin@vmc.local account details.

▢ NSX Manager URL:

  • N/A. See the NSX version and configurations row above.

  • N/A. Networking & Security features are managed using the VMC user interface.

▢ NSX admin or equivalent account.

  • If HCX will be used to extend NSX networks, know the administrator account for the NSX registration step.

  • N/A. Networking & Security features are managed using the VMC user interface.

▢ Destination vCenter SSO URL:

  • Use the SSO FQDN as seen in the vCenter Advanced Configurations (config.vpxd.sso.admin.uri)

  • The VMC URLs are listed in vmc.vmware.com, under SDDCs > Settings.

▢ DNS Server:

  • DNS is required.

  • N/A. Automatically configured.

▢ NTP Server:

  • NTP server is required.

  • N/A. Automatically configured.

▢ HTTP Proxy Server:

  • If there is an HTTPS proxy server in the environment, it should be added to the configuration.

  • N/A. Automatically configured.
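The DNS requirement above can be sanity-checked from any machine on the management network before deploying HCX Manager. A minimal sketch, assuming Python is available; the commented hostname is a placeholder for your actual vCenter or HCX FQDN:

```python
# Sketch: confirm that name resolution works for the endpoints HCX will
# depend on. Replace the example names with your environment's FQDNs.
import socket

def resolves(hostname: str) -> bool:
    """Return True if the configured DNS can resolve the hostname."""
    try:
        socket.getaddrinfo(hostname, None)
        return True
    except socket.gaierror:
        return False

# e.g. "vcenter.internal_domain.com", "connect.hcx.vmware.com"
for name in ("localhost",):
    print(name, resolves(name))
```

Run the same check for the vCenter Server, NSX Manager (if present), and the destination HCX Cloud Manager FQDNs.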

Planning for the HCX Manager Deployment

This section identifies information that should be known prior to deploying the HCX Manager system on-premises. HCX Manager at the VMC SDDC is deployed automatically when the service is enabled.

Source HCX Manager

(type: Connector)

Destination HCX Manager

(type: Cloud)

▢ HCX Manager Placement/Zoning:

  • The HCX Manager can be deployed like other management components (such as vCenter Server or NSX Manager).

  • It does not have to be deployed where the migration workloads reside.

  • The VMC HCX Cloud Manager is deployed automatically in the SDDC management cluster whenever the HCX add-on service is enabled on the SDDC.

▢ HCX Manager Installer OVA:

  • The HCX Manager download link for the source is obtained from the destination HCX Manager, in the System Updates UI.

  • If OVA download links were provided by the VMware team, the file for the source will be named VMware-HCX-Connector-x.y.z-########.ova.

  • N/A.

▢ HCX Manager Hostname / FQDN:

  • The VMC URLs are listed in vmc.vmware.com, under SDDCs > Settings.

▢ HCX Manager Internal IP Address:

  • The HCX Manager vNIC IP address, typically an internal address from the environment's management network.

  • The SDDC HCX Cloud system uses an IP address based on the provided subnet for SDDC management. This address is not required for site pairing with the SDDC.

▢ HCX Manager External Name / Public IP Address:

  • The source HCX Manager initiates the management connection to the destination; it does not need a dedicated public IP address.

  • The source HCX Manager supports outbound connections using Network Address Translation (Source NAT).

  • The SDDC Management firewall will reflect entries allowing TCP-443 connections to the HCX Cloud Manager public IP address.

▢ HCX Manager admin / root password:

  • In VMC, know how to locate the cloudadmin@vmc.local account details.

▢ Verify external access for the HCX Manager:

  • HCX Manager makes outbound HTTPS connections to connect.hcx.vmware.com and hybridity-depot.vmware.com.

  • The source HCX Manager will make outbound HTTPS connections to the site paired destination HCX Manager systems.

  • The VMC URLs are listed in vmc.vmware.com, under SDDCs > Settings.

  • Ensure the VMC management firewall allows inbound HTTPS connections from the on-prem HCX Connector and from the User systems that will access the interface.

▢ HCX Activation / Licensing:

  • Activation keys for the HCX Connector system on-premises are generated in VMC.

  • To generate a key, open the Add-ons tab and open HCX. Use Activation Keys > Create Activation Key > HCX Connector to generate a key for the on-premises HCX system.

  • The HCX in the VMC SDDC instance is activated when the service is enabled.

Proxy requirements

If a proxy server is configured, all HTTPS connections are sent to the proxy. An exclusion configuration is mandatory so that HCX Manager can connect to local systems.

The exclusions can be entered as supernets and wildcard domain names. The configuration should encompass:

  • vSphere Management Subnets.

  • NSX Manager if present.

  • Internally addressed site pairing targets.

  • Generally, the RFC 1918 IP block, along with the internal domain name, can be used as the exception configuration.

For example: 10.0.0.0/8, *.internal_domain.com

Not applicable.
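The supernet and wildcard exclusions above can be dry-run before they are entered into the HCX Manager proxy configuration. A minimal sketch, assuming the exclusion set covers the full RFC 1918 space plus a wildcard internal domain; `internal_domain.com` is a placeholder from the example above:

```python
# Sketch: check whether a target host or IP would bypass the proxy under an
# exclusion list of supernets and wildcard domain names, as described above.
import ipaddress
from fnmatch import fnmatch

# Assumed exclusion set: the RFC 1918 blocks and a placeholder internal domain.
EXCLUDED_NETS = [ipaddress.ip_network("10.0.0.0/8"),
                 ipaddress.ip_network("172.16.0.0/12"),
                 ipaddress.ip_network("192.168.0.0/16")]
EXCLUDED_DOMAINS = ["*.internal_domain.com"]

def bypasses_proxy(target: str) -> bool:
    """Return True if 'target' (an IP address or FQDN) matches an exclusion."""
    try:
        ip = ipaddress.ip_address(target)
        return any(ip in net for net in EXCLUDED_NETS)
    except ValueError:  # not an IP, treat as a hostname
        return any(fnmatch(target, pattern) for pattern in EXCLUDED_DOMAINS)

print(bypasses_proxy("10.1.2.3"))                     # True  (RFC 1918)
print(bypasses_proxy("vcenter.internal_domain.com"))  # True  (wildcard domain)
print(bypasses_proxy("connect.hcx.vmware.com"))       # False (goes to proxy)
```

Internally addressed site pairing targets and the NSX Manager FQDN should all return True under your exclusion set; the external HCX activation endpoints should return False.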

Configuration and Service limits

Review the HCX configuration and operational limits: VMware Configuration Maximums.

Planning the Compute Profile Configuration

A Compute Profile contains the catalog of HCX services and allows in-scope infrastructure to be planned and selected prior to deploying the Service Mesh. The Compute Profile describes how HCX will deploy services and services appliances when a Service Mesh is created.

A Compute Profile is required in the on-premises HCX Connector.

A Compute Profile is pre-created in the VMC SDDC as part of enabling the HCX Add-on.

On-premises Compute Profile

SDDC Compute Profile

▢ Compute Profile Name

  • Using meaningful names simplifies operations in deployments with multiple Compute Profiles.

  • The Compute Profile configuration is created automatically in the SDDC HCX system when HCX is enabled.

▢ Services to activate

  • Services are presented as a catalog, showing available capabilities based on licensing.

  • This can be used to restrict the individual HCX services that will be activated.

  • All HCX services are activated in the SDDC Compute Profile.

▢ Service Resources (Data Center or Cluster)

  • Every cluster that contains virtual machines will be used as a Service Cluster in the Compute Profile.

  • The SDDC Compute Cluster is assigned as the HCX Service Cluster.

  • The SDDC Management Cluster is a Service Cluster.

▢ Deployment Resources (Cluster or Resource Pool)

  • The Deployment Cluster hosts HCX appliances.

  • It must be connected to the Distributed Switch for HCX L2 extension and must be able to reach the service cluster networks for HCX migration.

  • The SDDC Management Cluster is assigned as the HCX Deployment Cluster.

▢ Deployment Resources (Datastore)

  • Select the datastore to use with HCX service mesh deployments.

  • The SDDC Management Datastore is used.

▢ Distributed Switches or NSX Transport Zone for Network Extension

  • Select the virtual switch(es) or transport zone that contains virtual machine networks that will be extended.

  • The deployment cluster hosts must be connected to the selected switches.

  • The SDDC Transport zone is used in the configuration.

Planning the Network Profile Configurations

A Network Profile contains information about the underlying networks and allows networks and IP addresses to be pre-allocated prior to creating a Service Mesh. Review and understand the information in Network Profile Considerations and Concepts before creating Network Profiles for HCX.

Network Profile Type

On-Prem Details

VMC SDDC Details

▢ HCX Uplink

  • It is typical for the Management and Uplink networks to use the same backing at the source. If a dedicated network is used, collect the following:

  • VLAN, Port Group

  • VSS|DVS|NSX Network Name

  • Gateway IP

  • Range of available IPs for HCX to use.

  • By default, the SDDC instance uses public Elastic IP addresses (EIPs) in the Uplink configuration.

  • If a Direct Connect (DX) private VIF will be used to connect the on-premises environment to the SDDC, configure a unique private IP network.

▢ HCX Management

  • Typically the ESXi Management network.

  • VLAN, Port Group

  • VSS|DVS|NSX Network Name

  • Gateway IP

  • Range of available IPs for HCX to use.

  • Network Profiles are configured automatically when the HCX service is enabled using a portion of the SDDC management network.

▢ HCX vMotion

  • The vMotion network

  • VLAN, Port Group

  • VSS|DVS|NSX Network Name

  • Gateway IP

  • Range of available IPs for HCX to use.

  • Network Profiles are configured automatically when the HCX service is enabled using a portion of the SDDC management network.

▢ HCX Replication

  • The ESXi Replication network. This will be the same as the Management network when a dedicated Replication network doesn't exist.

  • VLAN, Port Group

  • VSS|DVS|NSX Network Name

  • Gateway IP

  • Range of available IPs for HCX to use.

  • Network Profiles are configured automatically when the HCX service is enabled using a portion of the SDDC management network.
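Each Network Profile above asks for a range of available IPs for HCX to use. When planning those ranges, it can help to script the search for a free contiguous block in the management subnet. A minimal sketch; the subnet, gateway, and in-use addresses are illustrative assumptions, and the required range size depends on the services activated:

```python
# Sketch: find the first run of consecutive free host IPs in a subnet to
# reserve for an HCX Network Profile. All addresses here are examples.
import ipaddress

subnet = ipaddress.ip_network("192.168.10.0/24")
gateway = ipaddress.ip_address("192.168.10.1")
in_use = {ipaddress.ip_address(a) for a in
          ("192.168.10.10", "192.168.10.11", "192.168.10.12")}  # ESXi hosts, etc.

def first_free_range(net, gw, used, count=5):
    """Return (start, end) of the first 'count' consecutive unused host IPs."""
    run = []
    for host in net.hosts():
        if host == gw or host in used:
            run = []          # a used address breaks the consecutive run
            continue
        run.append(host)
        if len(run) == count:
            return run[0], run[-1]
    return None

start, end = first_free_range(subnet, gateway, in_use)
print(f"Reserve {start}-{end} for the HCX Network Profile")
```

Cross-check the chosen range against DHCP scopes and any IPAM records before entering it in the Network Profile; the script only knows about the addresses you feed it.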

Service Mesh Planning Diagram

The illustration summarizes HCX service mesh component planning.



Site to Site Connectivity

▢ Bandwidth for Migrations

  • A minimum of 100 Mbps of bandwidth is required for HCX migration services.
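When sizing the link beyond the 100 Mbps minimum, a rough transfer-time estimate helps set migration-window expectations. A minimal sketch that ignores WAN optimization, replication deltas, and protocol overhead; the 500 GB figure is purely illustrative:

```python
# Sketch: naive wall-clock estimate for moving a given amount of data over
# a link of a given speed. Real HCX transfers benefit from WAN optimization,
# so treat this as an upper-bound planning number, not a prediction.
def migration_hours(data_gb: float, link_mbps: float = 100.0) -> float:
    bits = data_gb * 8 * 1000**3          # decimal GB -> bits
    return bits / (link_mbps * 1e6) / 3600

print(f"{migration_hours(500):.1f} hours for 500 GB at 100 Mbps")
```

At the 100 Mbps minimum, 500 GB takes on the order of 11 hours before any optimization, which is why larger estates usually justify more bandwidth or Replication Assisted vMotion scheduling.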

▢ Public IPs & NAT

  • HCX automatically enables strong encryption for site-to-site Service Mesh communications. It is typical for customers to begin migration projects over the Internet (while private circuits are not yet available, or will never become available).

  • HCX supports outbound SNAT at the source. The HCX Uplink can be a private/internal IP address at the source environment. All HCX components can be SNATed to a single public IP address.

  • Inbound DNAT is not supported at the destination. A VMC HCX deployment automatically assigns public IP addresses to the HCX components.

▢ Source HCX to Destination HCX Network Ports

  • The source HCX Manager connects to the HCX Cloud Manager using port TCP-443.

  • The on-prem IX (HCX-IX-I) connects to the VMC SDDC IX (HCX-IX-R) using port UDP-4500.

  • The on-prem NE (HCX-NE-I) connects to the VMC SDDC NE (HCX-NE-R) using port UDP-4500.

  • The source HCX appliances always initiate the transport tunnel connections.
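The TCP-443 management path above can be pre-flight tested with a plain TCP connect; the UDP-4500 tunnel ports are connectionless and cannot be verified this way, so use the HCX diagnostics for those. A minimal sketch; the demo runs against a local listener so it is self-contained, and in practice you would point it at the HCX Cloud Manager FQDN and port 443:

```python
# Sketch: check that a destination answers on a TCP port (e.g. the HCX
# Cloud Manager on TCP-443). UDP-4500 reachability needs other tooling.
import socket

def tcp_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a throwaway local listener so the sketch runs anywhere.
server = socket.socket()
server.bind(("127.0.0.1", 0))     # OS picks a free port
server.listen(1)
_, port = server.getsockname()
result = tcp_open("127.0.0.1", port)
server.close()
print(result)   # True
```

Because the source appliances always initiate the tunnels, this check only needs to run from the on-premises side toward the SDDC public IPs.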

▢ Other HCX Network Ports

  • A full list of port requirements for HCX can be found at ports.vmware.com.

HCX Network Ports On-Premises