This install checklist is written for fully private deployments, where HCX must be prepared in each environment (in public cloud HCX deployments, the provider handles the HCX installation and bootstraps the configuration using public IPs).

This document is presented in a source vSphere to destination vSphere format:

  • It is assumed that the source vSphere contains the existing workloads and networks that will be migrated. This environment can be legacy vSphere (within Technical Guidance) or modern (Generally Available versions).

  • It is assumed that the destination is a modern private cloud, or a VMware Cloud Foundation deployment, that is the target for HCX network extensions, migrations, and services.

  • Deployment variations like multi-vCenter Server, multi-cloud, vCloud Director, OS-Assisted or performance-centric implementations are outside the scope of this checklist.

  • For checklist items specific to using OS Assisted Migration, see Checklist C.

Use Cases and POC Success Criteria

"How am I doing with HCX? How will I define a success in my proof of concept?"

▢ What defines the success criteria for the HCX proof of concept?

Clearly define the success criteria. For example:

  • Extend 2 test networks.

  • Live migrate a virtual machine.

  • Test HCX L2 connectivity over the extended network.

  • Reverse migrate a VM.

  • Bulk Migrate a VM.

▢ Ensure features are available with the trial or full licenses obtained.

The core migration services (vMotion, Bulk, Optimization, and Network Extension) are available with HCX Advanced licensing.

OSAM, RAV, and SRM integration require HCX Enterprise licensing.

The trial license allows up to 20 migrations.

▢ Understand technology-specific restrictions.

For any HCX technologies that are used, have awareness of possible restrictions and requirements.

For example, if a zero downtime application must be migrated, HCX vMotion or RAV should be used.

In this case, one should note that "vMotion based migrations require Virtual Machine Hardware Version 9 or above." Restrictions like this one are documented in the About section for the specific migration type in the HCX User Guide.
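Restrictions like the Hardware Version 9 minimum can be prechecked against exported inventory data before choosing a migration type. The sketch below is illustrative only: the field names and the sample inventory are assumptions, not an HCX or vSphere API.

```python
# Illustrative precheck: flag VMs below Hardware Version 9, which blocks
# HCX vMotion/RAV. The inventory list is sample data, not a real API call.
MIN_HW_VERSION = 9

def vmotion_eligible(vms):
    """Split an inventory into vMotion-eligible and ineligible VM names."""
    eligible, blocked = [], []
    for vm in vms:
        # vSphere reports hardware version as a string like "vmx-09".
        version = int(vm["hw_version"].removeprefix("vmx-"))
        (eligible if version >= MIN_HW_VERSION else blocked).append(vm["name"])
    return eligible, blocked

inventory = [
    {"name": "app01", "hw_version": "vmx-13"},
    {"name": "legacy-db", "hw_version": "vmx-08"},
]
ok, blocked = vmotion_eligible(inventory)
```

VMs that land in the blocked list would need a hardware upgrade first, or a Bulk migration instead of HCX vMotion/RAV.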

Collect vSphere Environment Details

This section identifies vSphere related information about the source and destination environments that is relevant for HCX deployments.

Environment Detail

Source Environment

Destination Environment

▢ vSphere Version

  • vSphere version must be within Technical Guidance.

  • vSphere version must be generally available.

▢ Distributed Switches and Connected Clusters

  • Understand the relationships between clusters and the Distributed Switches.

  • Imported Distributed Switches are not supported for the HCX-NE service.

  • Understand the relationships between clusters and the NSX Transport Zone. HCX only deploys and extends networks to clusters included in the Transport Zone.

▢ ESXi Cluster Networks

  • Identify the ESXi Management, vMotion, and Replication (if it exists) networks: VSS PG or DPG names, VLANs, and subnets.

  • If these networks vary from cluster to cluster, additional configuration will be needed.

  • Identify available IPs (HCX will participate in these networks)

  • Identify the ESXi Management, vMotion, and Replication (if it exists) networks: VSS PG or DPG names, VLANs, and subnets.

  • If these networks vary from cluster to cluster, additional configuration will be needed.

  • Identify available IPs (HCX will participate in these networks)
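Identifying available IPs in the ESXi cluster networks can be sketched with the standard `ipaddress` module. The subnet, the in-use addresses, and the count below are example values; in practice the used list would come from IPAM or a DHCP/ARP inventory.

```python
# Sketch: enumerate candidate IPs for HCX in a management subnet, skipping
# addresses already in use. Subnet and used IPs are example values.
import ipaddress

def free_ips(cidr, used, count):
    """Return the first `count` unused host addresses in the subnet."""
    net = ipaddress.ip_network(cidr)
    used_set = {ipaddress.ip_address(ip) for ip in used}
    available = (h for h in net.hosts() if h not in used_set)
    return [str(next(available)) for _ in range(count)]

# e.g. reserve two addresses for HCX appliance interfaces
print(free_ips("10.10.0.0/28", ["10.10.0.1", "10.10.0.2"], 2))
```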

▢ NSX version and configurations:

▢ Review and ensure all Software Version Requirements are satisfied.

▢ vCenter Server URL:

  • https://vcenter-ip-or-fqdn

  • https://vcenter-ip-or-fqdn

▢ administrator@vsphere.local or equivalent account.

▢ NSX Manager URL:

  • NSX is optional. It is only required when HCX is used to extend NSX networks

  • https://nsxmgr-ip-or-fqdn

▢ NSX admin or equivalent account.

  • If HCX is used to extend NSX networks, know the administrator account for the NSX registration step.

  • A full access Enterprise Administrator user is required when registering the NSX Manager.

▢ Destination vCenter SSO URL:

  • Use the SSO FQDN as seen in the vCenter Advanced Configurations (config.vpxd.sso.admin.uri).

  • Use the SSO FQDN as seen in the vCenter Advanced Configurations (config.vpxd.sso.admin.uri).

▢ DNS Server:

  • DNS is required.

  • DNS is required.

▢ NTP Server:

  • NTP server is required.

  • NTP server is required.

▢ HTTP Proxy Server:

  • If there is an HTTPS proxy server in the environment, it should be added to the configuration.

  • If there is an HTTPS proxy server in the environment, it should be added to the configuration.

Planning for the HCX Manager Deployment

This section identifies information that should be known before deploying the source and destination HCX Manager systems.

Source HCX Manager

(type: Connector or Enterprise)

Destination HCX Manager

(type: Cloud)

▢ HCX Manager Placement/Zoning:

  • The HCX Manager can be deployed like other management components (such as vCenter Server or NSX Manager).

  • It does not have to be deployed where the migration workloads reside.

  • The HCX Manager can be deployed like other management components (such as vCenter Server or NSX Manager).

    It does not have to be deployed where the migration workloads reside.

▢ HCX Manager Installer OVA:

  • The HCX Manager download link for the source is obtained from the destination HCX Manager, in the System Updates UI.

  • If OVA download links were provided by the VMware team, the file for the source is named VMware-HCX-Connector-x.y.z-########.ova.

[The OVA has been downloaded.]

  • HCX Manager installer OVA can be obtained from downloads.vmware.com.

  • If OVA download links were provided by the VMware team, the file for the destination is named VMware-HCX-Cloud-x.y.z-########.ova.

Note:

The file VMware-HCX-Installer-x.y.z-########.ova is a generic installer that will update itself to the latest version during the installation.

▢ HCX Manager Hostname:

▢ HCX Manager Internal IP Address:

  • The HCX Manager vNIC IP address, typically an internal address from the environment's management network.

  • The HCX Manager vNIC IP address, typically an internal address from the environment's management network.

▢ HCX Manager External Name / Public IP Address:

  • The source HCX Manager initiates the management connection to the destination, so it does not need a dedicated public IP address.

  • The source HCX Manager supports outbound connections using Network Address Translation (Source NAT).

  • Only required when the paired environments do not have a private connection and will connect over the Internet.

  • The external name record should resolve to a public IP address.

  • The destination HCX Cloud Manager supports load balanced inbound connections or Network Address Translation (DNAT).

▢ HCX Manager admin / root password:

▢ Verify external access for the HCX Manager:

  • HCX Manager makes outbound HTTPS connections to connect.hcx.vmware.com and hybridity-depot.vmware.com.

  • The source HCX Manager will make outbound HTTPS connections to the site paired destination HCX Manager systems.

  • HCX Manager makes outbound HTTPS connections to connect.hcx.vmware.com and hybridity-depot.vmware.com.

  • The destination HCX Manager will receive HTTPS connections from the site paired source HCX Manager systems.

▢ HCX Activation / Licensing:

  • In private cloud, private data center, or VCF deployments, HCX Advanced features are licensed using the NSX Enterprise Plus licenses from the destination NSX environment. See Activating or Licensing New HCX Systems for more details.

  • In private cloud, private data center, or VCF deployments, HCX Advanced features are licensed using the NSX Enterprise Plus licenses from the destination NSX environment. See Activating or Licensing New HCX Systems for more details.

Proxy requirements

If a proxy server is configured, all HTTPS connections are sent to the proxy. An exclusion configuration is mandatory to allow HCX Manager to connect to local systems.

The exclusions can be entered as supernets and wildcard domain names. The configuration should encompass:

  • vSphere Management Subnets.

  • NSX Manager if present.

  • Internally addressed site pairing targets.

  • Generally, the RFC 1918 IP block, along with the internal domain name, can be used as the exception configuration.

For example: 10.0.0.0/8, *.internal_domain.com
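The bypass logic that the exclusion list implies can be sketched as follows: private supernets and wildcard domains skip the proxy, while everything else is proxied. The networks and the domain pattern below are examples; real HCX proxy exclusions are entered in the HCX Manager appliance configuration, not in Python.

```python
# Sketch of proxy-exclusion matching: RFC 1918 supernets and wildcard
# domains bypass the proxy. Entries here are example values.
import ipaddress
from fnmatch import fnmatch

EXCLUDED_NETS = [ipaddress.ip_network(n) for n in
                 ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
EXCLUDED_DOMAINS = ["*.internal_domain.com"]  # example wildcard entry

def bypasses_proxy(target):
    """True if `target` (IP literal or hostname) matches an exclusion."""
    try:
        ip = ipaddress.ip_address(target)
        return any(ip in net for net in EXCLUDED_NETS)
    except ValueError:  # not an IP literal, match it as a domain name
        return any(fnmatch(target, pattern) for pattern in EXCLUDED_DOMAINS)
```

With this configuration, internal site pairing targets and vCenter bypass the proxy, while connections to connect.hcx.vmware.com still go through it.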

Not applicable.

Configuration and Service limits

Review the HCX configuration and operational limits: VMware Configuration Maximums.

Review the HCX configuration and operational limits: VMware Configuration Maximums.

Planning the Compute Profile Configurations

A Compute Profile contains the catalog of HCX services and allows in-scope infrastructure to be planned and selected before deploying the Service Mesh. The Compute Profile describes how HCX will deploy services and service appliances when a Service Mesh is created.

Source Compute Profile

Destination Compute Profile

▢ Compute Profile Name

  • Using meaningful names simplifies operations in multi-Compute Profile deployments.

  • Using meaningful names simplifies operations in multi-Compute Profile deployments.

▢ Services to activate

  • Services are presented as a catalog, showing available capabilities based on licensing.

  • This can be used to restrict the individual HCX services that will be activated.

  • Services are presented as a catalog, showing available capabilities based on licensing.

    This can be used to restrict the individual HCX services that will be activated.

▢ Service Resources (Data Center or Cluster)

[legacy-dev cluster]

  • Every cluster that contains virtual machines is used as a Service Cluster in the Compute Profile.

[Compute-1 , Compute-2 ]

  • Every cluster that is a valid target should be included as a Service Cluster in the Compute Profile.

▢ Deployment Resources (Cluster or Resource Pool)

  • The Deployment Cluster hosts HCX appliances.

  • It must be connected to the Distributed Switches used for HCX L2 extension and must be able to reach the service cluster networks for HCX migration.

  • The Deployment Cluster hosts HCX appliances.

  • It must be connected to the NSX Transport Zone for L2 extension and must be able to reach the service cluster networks for HCX migration.

▢ Deployment Resources (Datastore)

  • Select the datastore to use with HCX service mesh deployments.

  • Select the datastore to use with HCX service mesh deployments.

▢ Distributed Switches or NSX Transport Zone for Network Extension

  • Select the virtual switch(es) or transport zone that contains virtual machine networks that will be extended.

  • The deployment cluster hosts must be connected to the selected switches.

  • Select the transport zone that will be used with HCX Network Extension operations.

Planning the Network Profile Configurations

A Network Profile contains information about the underlying networks and allows networks and IP addresses to be pre-allocated before creating a Service Mesh. Review and understand the information in Network Profile Considerations and Concepts before creating Network Profiles for HCX.

Network Profile Type

Source Network Details

Destination Network Details

▢ HCX Uplink

  • It is typical for the Management and Uplink networks to use the same backing at the source. If a dedicated network is used, collect the following:

  • VLAN, Port Group

  • VSS|DVS|NSX Network Name

  • Gateway IP

  • Range of available IPs for HCX to use.

  • When connecting environments over the Internet, assign the public IP network as the HCX Uplink.

  • VLAN, Port Group

  • VSS|DVS|NSX Network Name

  • Gateway IP

  • Range of available IPs for HCX to use.

▢ HCX Management

  • The ESXi Management network (typically).

  • VLAN, Port Group

  • VSS|DVS|NSX Network Name

  • Gateway IP

  • Range of available IPs for HCX to use.

  • The ESXi Management network (typically).

  • VLAN, Port Group

  • VSS|DVS|NSX Network Name

  • Gateway IP

  • Range of available IPs for HCX to use.

▢ HCX vMotion

  • The vMotion network

  • VLAN, Port Group

  • VSS|DVS|NSX Network Name

  • Gateway IP

  • Range of available IPs for HCX to use.

  • The vMotion network

  • VLAN, Port Group

  • VSS|DVS|NSX Network Name

  • Gateway IP

  • Range of available IPs for HCX to use.

▢ HCX Replication

  • The ESXi Replication network. This is the same as the Management network when a dedicated Replication network doesn't exist or when using vSphere Replication NFC (required).

  • VLAN, Port Group

  • VSS|DVS|NSX Network Name

  • Gateway IP

  • Range of available IPs for HCX to use.

  • The ESXi Replication network. This is the same as the Management network when a dedicated Replication network doesn't exist or when using vSphere Replication NFC (required).

  • VLAN, Port Group

  • VSS|DVS|NSX Network Name

  • Gateway IP

  • Range of available IPs for HCX to use.
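The same details recur in every Network Profile: subnet, gateway, and a reserved IP range. A small sanity check like the one below, built on the standard `ipaddress` module, can catch transposed values before they are typed into HCX. The field names and example values are illustrative; HCX performs its own validation on submission.

```python
# Sketch: sanity-check a Network Profile entry (subnet, gateway, IP range)
# before entering it in HCX. Values below are examples only.
import ipaddress

def check_profile(cidr, gateway, ip_range):
    """Verify the gateway and the HCX IP range fall inside the subnet."""
    net = ipaddress.ip_network(cidr)
    start, end = (ipaddress.ip_address(ip) for ip in ip_range)
    assert ipaddress.ip_address(gateway) in net, "gateway outside subnet"
    assert start in net and end in net, "IP range outside subnet"
    assert start <= end, "range start after range end"
    return int(end) - int(start) + 1  # number of IPs HCX can consume

# e.g. a vMotion Network Profile with four reserved addresses
assert check_profile("192.168.50.0/24", "192.168.50.1",
                     ("192.168.50.10", "192.168.50.13")) == 4
```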

Service Mesh Planning Diagram

The illustration summarizes HCX service mesh component planning.

Site to Site Connectivity

▢ Bandwidth for Migrations

  • A minimum of 100 Mbps of bandwidth is required for HCX migration services. The requirement may be higher depending on the volume of migration.

▢ Public IPs & NAT

  • HCX automatically enables strong encryption for site to site service mesh communications. It is typical for customers to begin migration projects over the Internet (while private circuits are not yet available, or will not become available).

  • HCX supports outbound SNAT at the source. The HCX Uplink can be a private/internal IP address at the source environment, with all HCX components SNATed to a single public IP address.

  • Public IP addresses must be assigned directly in the Uplink Network Profile at the destination HCX configuration.

  • Inbound DNAT is not supported at the destination.

▢ Source HCX to Destination HCX Network Ports

  • The source HCX Manager connects to the HCX Cloud Manager using port TCP-443.

  • The source IX (HCX-IX-I) connects to the peer IX (HCX-IX-R) using port UDP-4500.

  • The source NE (HCX-NE-I) connects to the peer NE (HCX-NE-R) using port UDP-4500.

  • The source HCX appliances initiate the connections.
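The source-to-destination flows above can be captured as a small lookup table, which is handy when generating firewall change requests. The entries mirror the checklist; the rendering format is an assumption, not a firewall syntax.

```python
# The checklist's site-pairing flows as a lookup table. The source side
# initiates every connection, so rules are one-directional.
HCX_SITE_PAIRING_PORTS = {
    ("HCX Connector", "HCX Cloud Manager"): ("TCP", 443),
    ("HCX-IX-I", "HCX-IX-R"): ("UDP", 4500),
    ("HCX-NE-I", "HCX-NE-R"): ("UDP", 4500),
}

def firewall_rules():
    """Render each flow as a one-line firewall request (source initiates)."""
    return [f"allow {proto}/{port} from {src} to {dst}"
            for (src, dst), (proto, port) in HCX_SITE_PAIRING_PORTS.items()]
```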

▢ Other HCX Network Ports

  • A full list of port requirements for HCX can be found at ports.vmware.com.

Figure 1. Network Ports at the Source
Figure 2. Network Ports at the Destination