You can migrate your VMware Integrated OpenStack (VIO) deployment from NSX-V to NSX. During the migration, the VIO control plane must be in read-only mode.

Datapath connectivity for VMs is unaffected during the migration, except for brief interruptions during north-south cutover and host migration. This migration must be performed during a single maintenance window.

Overview of the Migration Process

  1. Install NSX.
  2. Prepare NSX for VIO. This requires setting up tier-0 gateways or VRF-lites for external networks, as well as configuring edge clusters, DHCP server profiles, and metadata proxies. For more information, see https://docs.vmware.com/en/VMware-Integrated-OpenStack/index.html.
  3. Get the neutron migrator bundle, which is part of the VIO deliverables.
  4. Configure the neutron migrator.
  5. Deploy the neutron migrator pod.
  6. From the NSX Manager UI:
    • Start the edge cutover migration.
    • Handle feedback and complete the migration.
    • Start the host migration.
    • Handle feedback and complete the migration.
  7. Wait for the neutron migrator pod to complete.
  8. Delete the neutron migrator deployment.
  9. Remove the VIO installation in NSX-V.

Prerequisites

  • VIO 7.2.0 or later
  • NSX-V 6.4.11 or later
  • vSphere 6.7 or later (It is recommended that you upgrade ESXi hosts to 7.0 or later before the migration.)
The neutron migrator pod will run the following validation checks. Checks producing a warning can be bypassed via the neutron migrator configuration.
  • Number of address pairs allowed in Neutron (number of manual address bindings must not exceed 128)
  • Number of multiple subnets with DHCP per logical switch (only one allowed in NSX)
  • Number of router uplinks per network (only one in NSX)
  • Host groups: if HA is enabled for NSX Edge nodes and host groups are specified for edge node placement, a warning is generated.
  • Edge HA is ignored in NSX as it does not apply. This will generate a warning.
  • Provider networks or external networks based on a DVS port group are not supported in the NSX plugin.
  • Multi-provider VLAN networks are not supported.
  • Load balancing topologies not supported by the NSX plugin (for example, a load balancer with members from various subnets not uplinked to the same edge router or a load balancer on a network not connected to a Neutron router).
  • Usage of invalid addresses for NSX (for example, overlap with transit network).
  • VMs deployed on external networks (these do not work on NSX).
  • Reachability of subnets for load balancing members. NSX requires that all the load balancer's subnets are attached to the same gateway.

On NSX, there must not be any OpenStack-owned resources (for instance, resources from a previous VIO deployment on the NSX instance).

See In-Place Migration of Specific Parts of NSX-V for any preparations that are needed for the edge cutover migration and the host migration.

Preparing for the Migration - Sizing NSX Edge Cluster

The NSX edge cluster must have enough slots for OpenStack load balancers (LBs). To determine the list of tier-1 gateways that will host an LB service, do the following:
  • For each OpenStack VIP, find the corresponding subnet, and retrieve the router it is uplinked to, unless the subnet is on an external network.
  • For each OpenStack LB pool, list the members. Find the subnet they belong to and retrieve the router the subnet is uplinked to.

The number of routers found, together with the size of the largest OpenStack LB, will determine the number of LB slots required on the NSX edge cluster. For each LB, two slots will be required, for the active and standby service routers. Refer to https://configmax.vmware.com for the maximum number of load balancers that can run on each NSX edge appliance.
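The routers and pools can be enumerated with generic OpenStack CLI commands; the following is a minimal sketch, assuming the Octavia CLI plugin is installed (field names such as vip_subnet_id and the router-interface device owner may vary slightly between releases and plugins):
# Find each load balancer's VIP subnet
openstack loadbalancer list
openstack loadbalancer show <lb-id> -c vip_address -c vip_subnet_id
# Find each pool member's subnet
openstack loadbalancer member list <pool-id>
openstack loadbalancer member show <pool-id> <member-id> -c address -c subnet_id
# Find the Neutron router attached to a given (non-external) subnet
openstack port list --device-owner network:router_interface --fixed-ip subnet=<subnet-id>
openstack port show <port-id> -c device_id    # device_id is the router ID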

Preparing for the Migration - Configuring TEP IP Pool

During host migration, NSX-V and NSX TEPs must be able to reach each other to ensure connectivity. You must configure the NSX TEP IP pool so that it can route traffic to NSX-V TEPs.
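Before the host migration, you can spot-check this reachability from an ESXi host; a minimal sketch, assuming the TEP vmkernel interfaces are on the vxlan netstack and using a hypothetical interface name vmk10:
# List the vmkernel interfaces on the TEP (vxlan) netstack
esxcli network ip interface list --netstack vxlan
# Ping an NSX-V VTEP from the NSX TEP interface
vmkping ++netstack=vxlan -I vmk10 <nsx-v-vtep-ip>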

NSX-V Configuration Parameters not Supported in NSX

The following table lists the unsupported NSX-V parameters and the reasons.

Parameter Description Reason
cluster_moid Lists IDs of clusters used by OpenStack. Not applicable in NSX.
datacenter_moid Identifies the datacenter for deploying NSX-V edge appliances. Not applicable in NSX.
deployment_container_id Identifies deployment container for NSX-V edges. Not applicable in NSX.
resource_pool_id Identifies resource pool for NSX-V edges. Not applicable in NSX.
datastore_id Identifies datastore for NSX-V edges. Not applicable in NSX.
ha_datastore_id Additional datastore if edge HA is enabled. Not applicable in NSX.
ha_placement_random Divide active edges between primary and secondary datastore. Not applicable in NSX.
edge_host_groups Ensure active/backup edges are placed in listed host groups. Not applicable in NSX.
external_network ID of DVPG to use for physical network uplink. Not applicable in NSX.
task_status_check_interval Interval for checking for task completion. Not applicable in NSX.
vdn_scope_id ID of network scope object for VXLAN virtual wires. VDN scopes are replaced by NSX overlay transport zones.
dvs_id ID of DVS connected to management and edge cluster. Also used by default for VLAN networks. DVS is replaced by VLAN transport zone in NSX.
maximum_tunnels_per_vnic Maximum number of sub-interfaces supported by a VNIC on an edge appliance. Not applicable in NSX.
backup_edge_pool Defines the size of the NSX-V edge pool to be used by the OpenStack deployment. Not applicable in NSX.
mgt_net_moid Port group ID for metadata proxy management network. Not applicable in NSX.
mgt_net_proxy_ips Comma-separated list of management network IP addresses. Not applicable in NSX.
mgt_net_proxy_netmask Management network netmask for metadata proxy. Not applicable in NSX.
mgt_net_default_gateway Management network default gateway for metadata proxy. Not applicable in NSX.
nova_metadata_ips IP addresses used by Nova metadata service. Provided in NSX metadata proxy configuration.
nova_metadata_port Port used by the Nova metadata service. Provided in NSX metadata proxy configuration.
spoofguard_enabled Controls whether SpoofGuard is used in NSX-V. Even if you disable SpoofGuard in NSX-V, it will be enabled in NSX after the migration. Enabled by default in NSX (cannot be globally turned off).
use_exclude_list Use NSX-V exclude list component when port security is disabled and SpoofGuard is enabled. Enabled by default in NSX (cannot be globally turned off).
tenant_router_types Ordered list of router types to allocate as tenant routers. Not applicable in NSX.
edge_appliance_user Username to configure for Edge appliance login. Not applicable in NSX.
metadata_initializer Initialize metadata access infrastructure. Not applicable in NSX.
shared_router_appliance_size Edge appliance size to be used for creating a shared router edge. Not applicable in NSX.
use_dvs_features Allow for directly configuring the DVS backing NSX-V. Not applicable in NSX.
service_insertion_profile_id The profile ID of the redirect firewall rules that will be used for service insertion. Feature does not exist in NSX integration.
service_insertion_redirect_all Creates a firewall rule to redirect all traffic to a third-party firewall. Feature does not exist in NSX integration.
use_nsx_policies Use NSX policies for implementing Neutron security groups. Feature does not exist in NSX integration.
default_policy_id If use_nsx_policies is True, this policy will be used as the default policy for new tenants. Feature does not exist in NSX integration.
bind_floatingip_to_all_interfaces Bind floating IPs to downlink interfaces when set to True. In NSX, NAT for floating IPs is always processed for east-west traffic as well.
vdr_transit_network Network range for distributed router TLR/PLR connectivity. In NSX the range for DR/SR connectivity cannot be configured from OpenStack.
exclusive_dhcp_edge Have an exclusive DHCP edge per network. Does not apply to NSX, as DHCP is implemented on the edge cluster.
bgp_neighbour_hold_down_timer Interval for BGP neighbour hold down time. Feature does not exist in NSX integration. BGP peering is configured on NSX tier-0 gateway routing configuration.
bgp_neighbour_keep_alive_timer Interval for BGP neighbour keep alive time. Feature does not exist in NSX integration. BGP peering is configured on NSX tier-0 gateway routing configuration.
share_edges_between_tenants Use same DHCP or router edge for multiple tenants. Not applicable in NSX.
use_routers_as_lbaas_platform Use subnet's exclusive router as a platform for LBaaS. Not applicable in NSX, where LB services are always attached to routers used for forwarding.
nsx_sg_name_format Format for the NSX name of an OpenStack security group. Backend resource naming is implicit in NSX.
loadbalancer_pool_transparency Create LBaaS pools in transparent mode. Transparent mode is not supported in NSX.
default_edge_size Defines the default edge size for router, DHCP, and LB edges. Not applicable in NSX.

Configuring the Neutron Migrator

Before launching the neutron migrator, create a JSON file called migrator.conf.json to specify the NSX environment and the hosts that need to be migrated. This file will be mounted in the migrator pod and validated by the migration process. The following is a sample migrator.conf.json file:
{
  "strict_validation": true,
  "edge_migration": true,
  "host_migration": true,
  "edge_migration_interfaces_down": true,
  "post_migration_cleanup": true,
  "rollback": false,
  "nsxv_token_lifetime": 1440,
  "compute_clusters": [
    "domain-c17",
    "domain-c29",
    "domain-c71"
  ],
  "nsx_manager_ips": [
    "192.168.16.32",
    "192.168.16.64",
    "192.168.16.96"
  ],
  "nsx_manager_user": "admin",
  "nsx_manager_password": "<NSX password>",
  "metadata_proxy": "VIO_mdproxy",
  "dhcp_profile": "VIO_dhcp_profile",
  "default_overlay_tz": "0b3d2a91-2dfc-40a7-ac6b-fbd62b0e4c79",
  "default_vlan_tz": "b87c7a69-6d1a-4857-badd-0d0e4d4e924f",
  "default_tier0_router": "VIO_Tier0",
  "availability_zones": [
    {
      "name": "az1",
      "metadata_proxy": "VIOAZ1_mdproxy",
      "dhcp_profile": "VIOAZ1_dhcp_profile",
      "default_vlan_tz": "6320d1e3-45a1-4f37-87b4-6d35d19cafef",
      "default_tier0_router": "VIOAZ1_Tier0VRFLite"
    }
  ],
  "external_networks_map": {
    "61282e88-0abb-4036-9ea8-22418f85cdf3": "VIO_Tier0",
    "39db1d0f-4279-462b-a17e-1995a5c00ae8": "VIOAZ1_Tier0VRFLite"
  },
  "transit_network": "100.64.0.0/16"
}

The configuration parameters are:

Parameter Default Value Description
post_migration_cleanup True After the migration is completed, remove additional NSX entities created by the migration process that are not used by VIO or are duplicated by other VIO resources.
rollback True Automatically roll back upon failure (if possible).
nsxv_token_lifetime 1440 Duration in minutes of the token for NSX-V access. Token is provided to NSX. Duration should be chosen according to the deployment size and time expected to complete the migration. Token should not expire before the migration is completed.
compute_clusters List of vSphere compute clusters that will be migrated. This should include only the clusters where VIO VM instances are deployed. Edge clusters and VIO management clusters should not be included.
nsx_manager_ips IP or FQDN for NSX manager. If a manager cluster is used, this parameter can either specify a VIP or the list of NSX manager instances. In the latter case client-side load balancing will be used when accessing NSX Manager.
nsx_manager_user admin User for NSX Manager access. Authentication with principal identities is not supported by VIO.
nsx_manager_password Password for NSX Manager access.
metadata_proxy Identifier of the metadata proxy for the VIO default availability zone. The identifier is the last segment of the resource's policy path (for example, VIO_mdproxy in the sample above).
dhcp_profile Identifier of the DHCP profile for the VIO default availability zone.
default_tier0_router Identifier of the tier-0 gateway for the VIO default availability zone. Will be used for north-south traffic by neutron routers whose gateway is the default external network.
default_overlay_tz Overlay NSX transport zone to be used for the VIO deployment.
default_vlan_tz VLAN NSX transport zone for the default availability zone.
transit_network 100.64.0.0/16 CIDR for the NSX transit network. Modify only if it was changed from NSX default.
external_networks_map Empty map Maps Neutron external network IDs to the tier-0 (or VRF) gateways that serve them, as shown in the sample above.
availability_zones Empty list List of per-availability-zone settings (name, metadata proxy, DHCP profile, VLAN transport zone, tier-0 gateway), as shown in the sample above.

Deploying the Neutron Migrator

The migrator bundle includes a script called build_yaml.sh. When the migrator configuration is ready, run the script to create the deployment specification and deploy it on the VIO control plane. For example:
./build_yaml.sh -t 7.1.1.1899999
The script accepts the following parameters:
-k Optional. Do not include vCenter Server certificate in deployment. Specify this only when VIO is using an insecure vCenter connection.
-t <full VIO version> Required. The VIO version must include the build number and must match the tag of the existing VIO images.

The build_yaml.sh script creates <YAML-FILE-NAME>, which contains all the information for deploying the neutron migration control plane.

Starting the Migration

To start the migration, run the following command:
kubectl apply -f <YAML-FILE-NAME>

This will create the neutron-migrator deployment in the openstack namespace. This deployment has a single replica. The migration pod is automatically started when the deployment's pod is created.
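Before continuing, you can confirm that the deployment and its pod are up; a minimal sketch using standard kubectl commands (osctl, described later in this document, can be used in the same way):
kubectl -n openstack get deployment neutron-migrator
kubectl -n openstack get pods -o wide | grep neutron-migrator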

Migration Pod Startup

During startup the migrator pod will read the configuration file and the current status of the migration. Based on this information it will decide the next step of the migration, which could be one of the following:
  • API replay
  • Starting migration from NSX Manager
  • VIO reconfiguration

The migration pod will terminate if the configuration file is not found or if some required parameter has not been specified.

The migration pod will also terminate with an error if the current state of the migration is inconsistent, for example, if API replay has not completed, but a migration is already in progress.

When the migrator job is started, configuration files for the Neutron NSX plugins are mounted into the pod. Any change made to the Neutron configuration after the migrator is started will not be processed by the migrator job. You must not make changes to the Neutron configuration while the migrator is running. If you need to make changes, the migrator job must be restarted, as shown below.
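One way to restart the migrator job is to restart its deployment so that a new pod picks up the updated configuration; a minimal sketch, assuming the deployment created above:
kubectl -n openstack rollout restart deployment/neutron-migrator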

API Replay

In this state the migration process will create all the necessary configurations on NSX and populate the VIO Neutron database for use with NSX.

At the end of this process, all logical networking entities required by VIO will be configured in NSX, even if workloads are still running on NSX-V.

Before implementing VIO configuration on NSX, the following checks are performed:
  • Pre-validation checks. These are the checks listed in the Prerequisites section above.
  • NSX version check. The NSX version must be 3.2 or later.
  • Ensure that a compute manager is configured. The migration requires VIO's vCenter to be registered as a compute manager in NSX. This check verifies that this has been done (see the API sketch after this list).
  • No Neutron resources should be configured on NSX. If the rollback option is set to True, the migrator process will clean up any (likely stale) Neutron resources found on NSX.
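The compute manager registration can be verified directly against the NSX API before starting the migrator; a minimal sketch, assuming jq is available:
curl -s -k -u admin:<password> https://<nsx-mgr-ip>/api/v1/fabric/compute-managers | jq '.results[] | {display_name, server, origin_type}'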

After the checks are completed, the migration process initializes the Neutron NSX database and prepares its structure. Then a temporary neutron server is started within the migrator pod. This temporary Neutron server has been configured to run with NSX. After the temporary neutron server is up, the migration process collects information about the network VNI mappings and port/VIF mappings.

The API migration process is then started and will migrate the following resources:
  • Routers (to tier-1 gateways)
  • Networks (to segments)
  • Subnets (to segment subnets and segments' DHCP configuration)
  • Ports (to segment ports and DHCP static bindings)
  • Security groups (to security policies, rules, groups, and services)
  • Floating IPs (to NAT rules)
  • QoS policies and rules
  • FWaaS groups, policies, and rules
  • Octavia load balancers, listeners, pools, members, and health monitors

After the API replay is completed, the temporary neutron server pod is shut down.

Monitor the migrator pod logs with the tail command. When the logs show that the migrator pod is waiting for the NSX migration process to be started, perform the next task (Edge Cutover).
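For example, assuming osctl forwards the standard kubectl verbs, you can follow the pod logs and the debug log described in the Logging section below:
# Locate the migrator pod and the controller node it runs on
osctl get pods -o wide | grep neutron-migrator
# Follow the pod's INFO-level logs
osctl logs -f <neutron-migrator-pod-name>
# Or, on that controller node (via viossh), follow the debug log
tail -f /var/log/migration/vio-v2t.log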

Edge Cutover

Make the following API call to get the IDs of the Edge nodes:
curl -v -s -X GET -k -u admin:<password> https://<nsx-mgr-ip>/api/v1/transport-nodes/ -H content-type:application/json
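If jq is available, the Edge node IDs can be extracted from the response; a sketch assuming that edge transport nodes are identified by node_deployment_info.resource_type (as in recent NSX releases):
curl -s -k -u admin:<password> https://<nsx-mgr-ip>/api/v1/transport-nodes/ | jq -r '.results[] | select(.node_deployment_info.resource_type == "EdgeNode") | .node_id'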
Make the following API call to modify the parameter v2t-migration-config on all the Edge nodes:
curl -v -s -X PUT -k -u admin:<password> https://<nsx-mgr-ip>/api/v1/transport-nodes/<edge-nodeid>/node/v2t-migration-config -H content-type:application/json -d '{"enabled": true}'
Follow the procedure in Migrating North-South Traffic to NSX Edges Using Edge Cutover. After this migration, the north-south traffic will be handled by NSX. The migration process will:
  • Bring NSX-V edge appliance interfaces down.
  • Enable ARP on the NSX tier-1 downlink to ensure seamless east-west and north-south traffic transition during migration.
  • Connect to vCenter to retrieve an NSX-V authentication token.
  • Prepare a mapping file for distributed routers (NSX-V DLRs).
  • Set up Edge migration on NSX and wait for its completion.

During the north-south cutover, VMs might briefly lose connectivity as connectivity is switched from NSX-V ESGs or DLRs to NSX tier-1 gateways. After the north-south cutover is complete, the NSX-V and metadata Edges will be powered off. The next step is host migration.

IMPORTANT: If you are starting the north-south cutover after a rollback, make sure that the edge mapping file is present. The file is automatically deleted after a rollback; the migrator job restores it within 10 seconds of the rollback completing. This does not apply if there are no distributed routers in the NSX-V VIO environment.

Note: The NSX-V access token is renewed at each pod execution. Its duration should be long enough to ensure that the migration is completed within the migrator pod lifecycle. If the migrator pod is restarted for any reason, a new token will be fetched.

Host Migration

Follow the procedure in Migrating Distributed Firewall Configuration, Hosts, and Workloads.

The VIO migration utility will:
  • Power off all NSX-V edge appliances.
  • Set up the host migration on NSX.
  • Wait for host migration to successfully complete.

Powering off the edge appliances is necessary to ensure host migration completes successfully. Do not power on the NSX-V edge appliances during host migration.

After host migration is completed, make the following API call to reset the parameter v2t-migration-config for the Edge nodes. This parameter was set at the beginning of the Edge cutover step.
curl -v -s -X PUT -k -u admin:<password> https://<nsx-mgr-ip>/api/v1/transport-nodes/<edge-nodeid>/node/v2t-migration-config -H content-type:application/json -d '{"enabled":false}'
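Because the call must be made once per Edge node, it can be combined with the earlier transport-node query; a sketch assuming jq is available:
for edge in $(curl -s -k -u admin:<password> https://<nsx-mgr-ip>/api/v1/transport-nodes/ | jq -r '.results[] | select(.node_deployment_info.resource_type == "EdgeNode") | .node_id'); do
  curl -s -X PUT -k -u admin:<password> https://<nsx-mgr-ip>/api/v1/transport-nodes/$edge/node/v2t-migration-config -H content-type:application/json -d '{"enabled": false}'
done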

Post-Migration Cleanup

The migrator job reconfigures the Neutron CR to use NSX but does not remove the NSX-V configuration parameters so that you can view them for reference. These parameters are harmless. After the migration is completed you can remove them using the viocli update neutron command.
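For example, assuming viocli opens the custom resource for interactive editing:
viocli update neutron
# In the editor session, delete the legacy NSX-V (nsxv) parameters and save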

Logging

The neutron migrator process produces detailed logging for every phase of the process. Logs written to the pod's stdout are at the INFO level. Debug-level logs are at /var/log/migration/vio-v2t.log on the VIO controller node where the migrator pod is running.

You can find out on which node the neutron-migrator pod is running with the following command:
osctl get pods neutron-migrator -o wide

You can then use the command viossh to open a shell on the controller node.

The /var/log/migration directory also contains the temporary neutron server log.

Rollback

Rollback can happen at various stages during migration.

If a failure occurs during the API replay stage, there is no need for an explicit rollback. The VIO neutron migrator utility will automatically remove resources that were created and then retry the migration.

If you choose to interrupt the migration by destroying the neutron migrator pod, the VIO control plane will still be functional in NSX-V. There may be NSX resources created by API replay. These resources will be removed.
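If you do interrupt the migration this way, the migrator deployment created earlier can be removed with standard kubectl commands; a minimal sketch:
kubectl -n openstack delete deployment neutron-migrator
# or delete everything created from the generated specification
kubectl delete -f <YAML-FILE-NAME>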

Note that NSX does not allow rolling back host migration. After hosts have been migrated to NSX, it will not be possible to move them back to NSX-V.

If a failure occurs during host migration, you can review the logs and address the issue accordingly.

Alternatively, if a host consistently fails to migrate to NSX, you can remove it from the vSphere cluster and retry the migration. VMs running on the affected host will be migrated to other hosts in the cluster. After the migration, install NSX on the host and add it to the original vSphere cluster.

Error Codes

Code Description
0001, 0002, 0003, 0004 Bad system state or configuration. There are major issues with the migration such as:
  • Host migration already completed, but API replay not performed.
  • VIO running already with NSX but API replay or migration not performed.
  • Hosts on NSX, VIO running with NSX, but API replay not performed.
0101 Unable to create configuration files for the temporary Neutron server, which needs to be up for API replay. Check the migrator job's pod logs or /var/log/migration/vio-v2t.log for errors. This error can usually be fixed by addressing the root cause with configuration file changes.
1001 NSX migration coordinator not running. To fix this error, start the migration coordinator service on the first NSX Manager node specified in migrator.conf.json (see the CLI sketch after this table). If using an HA VIP, make sure the active manager instance is the one where the migration coordinator is running. For the migration, it is recommended to use a specific NSX Manager or client-side load balancing. NSX Manager FQDNs can be changed once the migration is completed.
1002 Invalid NSX version. NSX 3.2.0 or higher is required.
1003 Cannot retrieve NSX version. Check migrator job's pod logs or /var/log/migration/vio-v2t.log for errors.
1004 Failure in compute manager validation. There must be at least one compute manager defined in NSX. Check migrator job's pod logs or /var/log/migration/vio-v2t.log for errors.
1005 Must run cleanup on NSX. The NSX setup already has resources created by VIO. Ensure rollback is set to True in migrator.conf.json.
1006 Cannot start NSX migration. This is probably the result of a previous migration attempt. Roll back any migration in progress and retry.
1007 Cannot prepare NSX for north-south cutover. There was an error while setting up north-south cutover on NSX. This could either be an error in generating the "edge mappings" file or an error while preparing the migration plan. Check migrator job's pod logs or /var/log/migration/vio-v2t.log for errors.
1008 The migrator pod is unable to bring down interfaces on NSX edge appliances. This is a required step for north-south cutover. Check migrator job's pod logs or /var/log/migration/vio-v2t.log for errors. To work around this issue, set edge_migration_interfaces_down to False in migrator.conf.json and manually ensure that edge interfaces are down, or disconnected, before starting the north-south cutover.
1009 Cannot migrate routers without downlinks. There are Neutron routers without downlinks; these cannot be migrated. If you believe this error is returned by mistake, the check can be skipped by setting advanced_router_validation to False in migrator.conf.json.
1100 Invalid mode in migration plan. The NSX migration coordinator is already configured with a different plan. This is probably the result of a previous migration attempt. Roll back any migration in progress and retry.
1101 NSX Migration not acknowledged in configuration. Ensure edge_migration and/or host_migration are set to True in migrator.conf.json.
1105 Cannot patch routers without gateway. The process for ensuring that Neutron routers without a gateway can be seamlessly migrated to NSX failed. Check migrator job's pod logs or /var/log/migration/vio-v2t.log for errors. Setting advanced_router_validation to False skips this process; it is then up to the operator to ensure that each tier-1 gateway is connected to a tier-0 router before starting the north-south cutover on NSX.
1106 Cannot restore routers without gateway. The process for restoring Neutron routers without a gateway after the north-south cutover failed. Check migrator job's pod logs or /var/log/migration/vio-v2t.log for errors. Setting advanced_router_validation to False skips this process; it is then up to the operator to ensure that the tier-1 gateways for Neutron routers without gateways are disconnected from the tier-0.
1110 Cannot start north-south cutover migration to NSX. There was an error while applying the migration plan. Check migrator job's pod logs or /var/log/migration/vio-v2t.log for errors.
1114 Missing VM for edge appliances. Some edge appliances do not have an associated VM appliance. Remove the corresponding neutron router so that the edge is removed.
1115 Cannot power off NSX-V edge VMs before starting host migration. Check migrator job's pod logs or /var/log/migration/vio-v2t.log for errors. You can consider powering off VMs manually. This is necessary to avoid issues during host migration's runtime phase. You must power off at least DHCP and metadata proxy edge appliances.
1120 Cannot start host migration. There was an error while applying the migration plan. Check migrator job's pod logs or /var/log/migration/vio-v2t.log for error details.
1130, 1131 Cannot complete migration. Error while setting migration as finished. Check migrator job's pod logs or /var/log/migration/vio-v2t.log for errors.
1132 Timeout during migration. The timeout for north-south cutover is 12 hours. The timeout for host migration is 48 hours. If the migrator job's pod is left waiting for a migration to start, it will eventually time out. The operator just needs to restart it.
2001 Unable to retrieve neutron CR from VIO control plane. This could either be an authorization issue or a problem in reaching VIO's Kubernetes control plane. Check migrator job's pod logs or /var/log/migration/vio-v2t.log for errors.
2002 Unable to parse neutron CR. Make sure there is a 'manifests' attribute in the 'spec' section.
2003 Invalid contents in neutron CR. Make sure the NSX-V plugin is enabled and all the other plugins, including the NSX Policy plugin, are disabled.
2004 Cannot update neutron CR. There was an error while updating Neutron CR. Check migrator job's pod logs or /var/log/migration/vio-v2t.log for errors. This could either be an error in updating the Neutron CR, creating the VIOSecret instance for the NSX password, or creating resources for NSX managers. Verify these resources have not been left stale from some previous failed attempt.
2011 There was a failure while creating a database for NSX with policy. This is likely a SQL error. Check migrator job's pod logs or /var/log/migration/vio-v2t.log for errors.
2012 There was a failure while renaming the 'neutron_policy' database to 'neutron'. This is likely a SQL error. Check migrator job's pod logs or /var/log/migration/vio-v2t.log for errors.
2111 The temporary neutron server used for API replay could not be started. This is likely a mistake in the configuration of the temporary neutron server. Check /var/log/neutron-server-tmp.log for errors.
2112 API replay failed. This indicates an error while creating resources in NSX. Check migrator job's pod logs or /var/log/migration/vio-v2t.log for errors. Logs will reveal which resource failed to create. Then check /var/log/neutron-server-tmp.log for failure details. Common failure reasons:
  • Incorrect transport zones in temporary neutron server configuration
  • Non-OpenStack networks using the same VLAN as some OpenStack network
  • Edge cluster running out of slots for load balancers
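For error code 1001, the migration coordinator can be started from the NSX Manager appliance CLI; a minimal sketch (verify the service name for your NSX version):
# On the NSX Manager node that should run the migration coordinator
start service migration-coordinator
get service migration-coordinator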