Checklist B is written for HCX deployments with VMware Cloud on AWS as the target, where HCX is automatically installed by enabling the service. (In private cloud HCX deployments, the user handles the full HCX deployment and configuration for the destination environment.)

This document is presented using an on-premises environment as the source and a VMware Cloud on AWS SDDC as the destination. All the checklist tables assume the following:

  • The on-premises vSphere environment contains the existing migration workloads and networks. This environment can be a legacy or a modern vSphere version.

  • The destination environment is a VMware Cloud on AWS SDDC instance.

HCX Use Cases and POC Success Criteria

Understanding a few key concepts, described in the following checklist, can help you get started and be successful deploying HCX.

Each checklist item below is a key concept, followed by examples.

▢ Plan the success criteria for the HCX proof of concept in your environment.

Clearly define the success criteria. For example:

  • Extend 2 test networks to VMware Cloud on AWS.

  • Live migrate a virtual machine.

  • Test HCX L2 connectivity over the extended network.

  • Reverse migrate a VM.

  • Bulk Migrate a VM.

▢ Ensure the required features are available with the trial or full licenses obtained.

  • HCX is an add-on included with a VMware Cloud on AWS SDDC.

  • The add-on gives access to the HCX Advanced features.

  • The add-on gives access to select HCX Enterprise-class services: Replication Assisted vMotion, Mobility Optimized Networking, Application Path Resiliency, TCP Flow Conditioning, and Mobility Groups.

See VMware HCX System Services.

Collect vSphere Environment Details

This section identifies vSphere-related information about the environments that is relevant to HCX deployments.

For each checklist item, the bullets list the on-premises environment detail first, followed by the VMware Cloud on AWS SDDC detail.

▢ vSphere Version:

  • vSphere version must be within Technical Guidance. (A version-check sketch follows this row.)

  • N/A. SDDC instances run supported software versions.
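
Where scripting helps, the running vCenter version can be read and compared against Technical Guidance. A minimal sketch, assuming pyVmomi is installed; the hostname and credentials are placeholders, and this is a convenience, not part of the HCX product workflow:

```python
# Minimal sketch: read the vCenter version string to compare
# against Technical Guidance. Connection details are placeholders.
from pyVim.connect import SmartConnect, Disconnect
import ssl

ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="***", sslContext=ctx)
print(si.content.about.fullName)  # e.g., "VMware vCenter Server ... build-..."
Disconnect(si)
```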

▢ Distributed Switches and Connected Clusters

  • Understand the relationships between clusters and the Distributed Switches.

  • Imported Distributed Switches are not supported for the HCX-NE service.

  • N/A. The SDDC compute profile automatically includes the workload clusters.

▢ ESXi Cluster Networks

  • Identify the ESXi Management, vMotion, and Replication networks (if a dedicated Replication network exists): the VSS port group or DVS port group names, VLANs, and subnets.

  • If these networks vary from cluster to cluster, additional configuration is needed.

  • Identify the available IPs; HCX participates in these networks. (An inventory sketch follows this row.)

  • N/A. In VMware Cloud on AWS environments, HCX is automatically installed.
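
To speed up collecting the details above, the VMkernel adapters of each host can be inventoried programmatically. A sketch assuming pyVmomi, with placeholder connection details; HCX does not require this step:

```python
# Sketch: list VMkernel adapters (candidates for the Management, vMotion,
# and Replication networks) with their port groups and IP settings.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="***", sslContext=ctx)
try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name)
        for vnic in host.config.network.vnic:
            pg = vnic.portgroup or "(distributed port group)"
            print(f"  {vnic.device}: {pg} "
                  f"{vnic.spec.ip.ipAddress}/{vnic.spec.ip.subnetMask}")
    view.Destroy()
finally:
    Disconnect(si)
```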

▢ NSX version and configurations:

  • N/A. In VMware Cloud on AWS, HCX is automatically installed.

▢ Review and ensure all Software Version Requirements are satisfied.

▢ vCenter Server URL:

  • https://vcenter-ip-or-fqdn

  • The VMware Cloud on AWS URLs are listed in vmc.vmware.com, under SDDCs > Settings.

▢ Administrative accounts

  • Know the administrator@vsphere.local or equivalent account for the vCenter Server registration step.

▢ NSX Manager URL:

  • N/A. See the previous NSX version and configurations entry.

  • N/A. Networking & Security features are managed using the VMware Cloud on AWS user interface.

▢ NSX admin or equivalent account.

  • If HCX is used to extend NSX networks, know the administrator account for the NSX registration step.

  • N/A. Networking & Security features are managed using the VMware Cloud on AWS user interface.

▢ Destination vCenter SSO URL:

  • Use the SSO FQDN as seen in the vCenter Advanced Configurations (config.vpxd.sso.admin.uri). (A retrieval sketch follows this row.)

  • The VMware Cloud on AWS URLs are listed in vmc.vmware.com, under SDDCs > Settings.
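
The SSO URI can also be read programmatically from the vCenter advanced setting named in this row. A minimal sketch, assuming pyVmomi and placeholder connection details:

```python
# Sketch: read config.vpxd.sso.admin.uri from the vCenter advanced settings.
from pyVim.connect import SmartConnect, Disconnect
import ssl

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="***", sslContext=ctx)
try:
    for opt in si.content.setting.QueryOptions("config.vpxd.sso.admin.uri"):
        print(opt.key, "=", opt.value)
finally:
    Disconnect(si)
```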

▢ DNS Server:

  • DNS is required.

  • N/A. Automatically configured.

▢ NTP Server:

  • NTP server is required. (A DNS and NTP reachability sketch follows this row.)

  • N/A. Automatically configured.
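
DNS and NTP reachability can be spot-checked from the management network before deploying the HCX Manager. A minimal sketch; the server address is a placeholder, and the DNS check assumes the OS resolver already points at the server being validated:

```python
# Sketch: spot-check DNS resolution and NTP reachability.
import socket
import struct
import time

def check_dns(name: str = "vmware.com") -> None:
    # Assumes the OS resolver is configured with the DNS server under test.
    print("DNS:", name, "->", socket.gethostbyname(name))

def check_ntp(server: str) -> None:
    # Minimal SNTP (RFC 4330) client query; reads the transmit timestamp.
    packet = b"\x1b" + 47 * b"\x00"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(5)
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(512)
    secs = struct.unpack("!I", data[40:44])[0] - 2208988800  # 1900 -> 1970 epoch
    print("NTP:", server, "reports", time.ctime(secs))

check_dns()
check_ntp("10.0.0.123")  # placeholder NTP server address
```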

▢ HTTP Proxy Server:

  • If there is an HTTPS proxy server in the environment, add it to the HCX configuration.

  • N/A. Automatically configured.

Planning for the HCX Manager Deployment

This section identifies information for deploying the HCX Manager system on-premises. The HCX Manager at the VMware Cloud on AWS SDDC is deployed automatically when the service is enabled.

For each checklist item, the bullets list the source HCX Manager (type: Connector) detail first, followed by the destination HCX Manager (type: Cloud) detail.

▢ HCX Manager Placement/Zoning:

  • The HCX Manager can be deployed like other management components (such as vCenter Server or NSX Manager).

  • It does not have to be deployed where the migration workloads reside.

  • The HCX Cloud Manager is deployed automatically in the VMware Cloud on AWS SDDC management cluster whenever the HCX add-on service is enabled on the SDDC.

▢ HCX Manager Installer OVA:

  • The HCX Manager download link for the source is obtained from the destination HCX Manager, in the System Updates UI.

  • If OVA download links were provided by the VMware team, the file for the source is named VMware-HCX-Connector-x.y.z-########.ova.

  • N/A.

▢ HCX Manager Hostname / FQDN:

  • The VMware Cloud on AWS URLs are listed in vmc.vmware.com, under SDDCs > Settings.

▢ HCX Manager Internal IP Address:

  • The HCX Manager vNIC IP address is typically an internal address from the environment's management network.

  • The SDDC HCX Cloud system uses an IP address based on the provided subnet for SDDC management. This address is not required for site pairing with the SDDC.

▢ HCX Manager External Name / Public IP Address:

  • The source HCX Manager initiates the management connection to the destination; it does not need a dedicated public IP address.

  • The source HCX Manager supports outbound connections using Network Address Translation (Source NAT).

  • The SDDC Management firewall includes entries allowing TCP-443 connections to the HCX Cloud Manager public IP address.

▢ HCX Manager admin / root password:

▢ Verify external access for the HCX Manager:

  • HCX Manager makes outbound HTTPS connections to connect.hcx.vmware.com and hybridity-depot.vmware.com. (A reachability sketch follows this row.)

  • The source HCX Manager makes outbound HTTPS connections to the site paired destination HCX Manager systems.

  • The VMware Cloud on AWS URLs are listed in vmc.vmware.com, under SDDCs > Settings.

  • Ensure the VMware Cloud on AWS management firewall allows inbound HTTPS connections from the on-premises HCX Connector and from the user systems that access the interface.
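
The outbound paths in this row can be pre-validated from a system on the same management network as the planned HCX Manager. A minimal sketch; the endpoint list comes from this row, and the check only proves TCP connectivity and a TLS handshake, not HCX functionality:

```python
# Sketch: verify outbound HTTPS reachability to the HCX activation endpoints.
import socket
import ssl

def https_reachable(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ssl.create_default_context().wrap_socket(sock, server_hostname=host):
                return True  # TCP connect and TLS handshake succeeded
    except OSError as exc:
        print(f"{host}:{port} unreachable: {exc}")
        return False

for endpoint in ("connect.hcx.vmware.com", "hybridity-depot.vmware.com"):
    print(endpoint, "OK" if https_reachable(endpoint) else "FAILED")
```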

▢ HCX Activation / Licensing:

  • Activation keys for the HCX Connector system on-premises are generated in VMware Cloud on AWS.

  • To generate a key, open the Add-ons tab and open HCX. Use Activation Keys > Create Activation Key > HCX Connector to generate a key for the on-premises HCX system.

  • HCX in the VMware Cloud on AWS SDDC instance activates when the service is enabled.

Proxy requirements

If a proxy server is configured, all HTTPS connections are sent to the proxy. An exclusion configuration is mandatory to allow HCX Manager to connect to local systems.

The exclusions can be entered as supernets and wildcard domain names. The configuration can encompass these settings:

  • vSphere Management Subnets.

  • NSX Manager if present.

  • Internally addressed site pairing targets.

  • You can use the RFC 1918 IP blocks, along with the internal domain name, as the exclusion configuration.

For example: 10.0.0.0/8, *.internal_domain.com (a matching sketch follows below).

N/A for the VMware Cloud on AWS SDDC.
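
For planning, the exclusion behavior can be modeled with a small script. This is only an illustration of supernet and wildcard matching, not HCX Manager's own proxy logic; the exclusion list mirrors the example above:

```python
# Sketch: model whether a destination bypasses the proxy under an
# exclusion list of supernets and wildcard domain names.
import ipaddress
from fnmatch import fnmatch

EXCLUSIONS = ["10.0.0.0/8", "*.internal_domain.com"]

def bypasses_proxy(target: str) -> bool:
    for rule in EXCLUSIONS:
        if "/" in rule:
            try:
                if ipaddress.ip_address(target) in ipaddress.ip_network(rule):
                    return True
            except ValueError:
                continue  # target is a hostname, not an IP address
        elif fnmatch(target, rule):
            return True
    return False

print(bypasses_proxy("10.20.30.40"))                  # True: inside 10.0.0.0/8
print(bypasses_proxy("vcenter.internal_domain.com"))  # True: wildcard match
print(bypasses_proxy("connect.hcx.vmware.com"))       # False: goes via the proxy
```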

Configuration and Service limits

Review the HCX configuration and operational limits for both environments: VMware Configuration Maximums.

Planning the Compute Profile Configuration

A Compute Profile contains the catalog of HCX services and allows in-scope infrastructure to be planned and selected prior to deploying the Service Mesh. The Compute Profile describes how HCX deploys services and service appliances when a Service Mesh is created.

A Compute Profile is required in the on-premises HCX Connector.

A Compute Profile is pre-created in the VMware Cloud on AWS SDDC as part of enabling the HCX Add-on.

""

Review the following checklist of Compute Profile configuration items. For each item, the bullets describe how the information is used in the on-premises Compute Profile first, followed by the SDDC Compute Profile.

▢ Compute Profile Name

  • Meaningful names simplify operations in deployments with multiple Compute Profiles.

  • The Compute Profile configuration is created automatically in the SDDC HCX system when HCX is enabled.

▢ Services to activate

  • Services are presented as a catalog, showing available capabilities based on licensing.

  • This can be used to restrict the individual HCX services that are activated.

  • All HCX services are activated in the SDDC Compute Profile.

▢ Service Resources (Data Center or Cluster)

  • Every cluster that contains migration virtual machines should be selected as a Service Cluster in the Compute Profile.

  • The SDDC Compute Cluster is assigned as the HCX Service Cluster.

  • The SDDC Management Cluster is a Service Cluster.

▢ Deployment Resources (Cluster or Resource Pool)

  • The Deployment Cluster hosts HCX appliances.

  • It must connect to the Distributed Switch for HCX L2 extension and be able to reach the Service Cluster networks for HCX migration.

  • The SDDC Management Cluster is assigned as the HCX Deployment Cluster.

▢ Deployment Resources (Datastore)

  • Select the datastore to use with HCX service mesh deployments.

  • The SDDC Management Datastore is used.

▢ Distributed Switches or NSX Transport Zone for Network Extension

  • Select the virtual switches or the transport zone containing the virtual machine networks that will be extended.

  • The deployment cluster hosts must be connected to the selected switches.

  • The SDDC Transport zone is used in the configuration.

Planning the Network Profile Configurations

A Network Profile contains information about the underlying networks and allows networks and IP addresses to be pre-allocated prior to creating a Service Mesh. Review and understand the information in Network Profile Considerations and Concepts before creating Network Profiles for HCX.

For each Network Profile type, the bullets list the on-premises details first, followed by the VMware Cloud on AWS SDDC details.

▢ HCX Uplink

  • At the source, it is typical for the Management and Uplink networks to use the same backing. If a dedicated Uplink network is used, collect the following:

  • VLAN, Port Group

  • VSS|DVS|NSX Network Name

  • Gateway IP

  • Range of available IPs for HCX to use.

  • By default, the SDDC instance uses public Elastic IP addresses (EIPs) in the Uplink configuration.

  • If a Direct Connect (DX) private VIF is used for connecting the on-premises environment to the SDDC, configure a unique private IP network.

▢ HCX Management

  • Typically, the ESXi Management network.

  • VLAN, Port Group

  • VSS|DVS|NSX Network Name

  • Gateway IP

  • Range of available IPs for HCX to use.

  • Network Profiles are configured automatically when the HCX service is enabled using a portion of the SDDC management network.

▢ HCX vMotion

  • The vMotion network

  • VLAN, Port Group

  • VSS|DVS|NSX Network Name

  • Gateway IP

  • Range of available IPs for HCX to use.

  • Network Profiles are configured automatically when the HCX service is enabled using a portion of the SDDC management network.

▢ HCX Replication

  • The ESXi Replication network. This network is the same as the Management network when a dedicated Replication network doesn't exist.

  • VLAN, Port Group

  • VSS|DVS|NSX Network Name

  • Gateway IP

  • Range of available IPs for HCX to use. (An IP-pool sizing sketch follows this table.)

  • Network Profiles are configured automatically when the HCX service is enabled using a portion of the SDDC management network.
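
When sizing the IP range for each Network Profile, it can help to enumerate candidate addresses and subtract the ones already in use. A minimal sketch using Python's ipaddress module; the subnet and reserved addresses are placeholders:

```python
# Sketch: derive a candidate IP pool for a Network Profile.
import ipaddress

subnet = ipaddress.ip_network("192.168.10.0/24")    # placeholder subnet
reserved = {ipaddress.ip_address("192.168.10.1"),   # gateway
            ipaddress.ip_address("192.168.10.10")}  # existing host

available = [ip for ip in subnet.hosts() if ip not in reserved]
print(available[:10])  # first candidates for the profile's IP range
```

Each Service Mesh appliance consumes addresses from this pool, so size the range for the number of appliances planned.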

Service Mesh Planning Diagram

The illustration summarizes HCX service mesh component planning.


[Figure] Service Mesh elements: Compute Profile, Network Profile, services, switches, clusters using HCX services, cluster deployed with HCX, and datastore.

Site to Site Connectivity

▢ Bandwidth for Migrations

▢ Public IPs & NAT

  • HCX automatically enables strong encryption for site-to-site Service Mesh communications. It is typical for customers to begin migration projects over the Internet (while private circuits are not yet available, or will never become available).

  • HCX supports outbound SNAT at the source. The HCX Uplink can be a private/internal IP address at the source environment, with all HCX components translated (SNAT) to a single public IP address.

  • Inbound DNAT is not supported at the destination. A VMware Cloud on AWS HCX deployment automatically assigns public IP addresses to the HCX components.

▢ Source HCX to Destination HCX Network Ports

  • The source HCX Manager connects to the HCX Cloud Manager using port TCP-443.

  • The on-premises IX (HCX-IX-I) connects to the VMware Cloud on AWS SDDC IX (HCX-IX-R) using port UDP-4500.

  • The on-premises NE (HCX-NE-I) connects to the VMware Cloud on AWS SDDC NE (HCX-NE-R) using port UDP-4500.

  • The source HCX appliances always initiate the transport tunnel connections. (A port-probe sketch follows this row.)
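
These ports can be probed ahead of the Service Mesh deployment. A minimal sketch with a placeholder target address; note that a UDP probe only shows the packet left the source, since UDP has no handshake to confirm delivery:

```python
# Sketch: probe the HCX tunnel ports (TCP-443, UDP-4500).
import socket

def tcp_open(host: str, port: int, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def udp_probe(host: str, port: int) -> None:
    # Fire-and-forget; absence of an error is weak evidence, not proof.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(b"\x00", (host, port))

target = "hcx-cloud.example.com"  # placeholder HCX Cloud Manager address
print("TCP 443:", tcp_open(target, 443))
udp_probe(target, 4500)
```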

▢ Other HCX Network Ports

HCX Network Ports On-Premises

[Figure] HCX network ports at the source, as listed in the VMware Ports and Protocols page.