Supported Topology

The NSX Migration for VMware Cloud Director tool supports the following topology.

  • One or multiple organization VDCs from the same Organization can be migrated together.
  • No edge gateway, or one or multiple edge gateways connected to one or multiple external networks, in the organization VDC backed by NSX Data Center for vSphere.
  • Direct, routed (including distributed), and isolated organization VDC networks.

Direct Networks

Direct organization VDC networks are connected directly to an external network that is backed by one or more vSphere Distributed Switch port groups (usually VLAN or VXLAN backed).

By default, two scenarios are distinguished and migrated differently:

  • Colocation/MPLS use case: The external network is dedicated to a single tenant (organization VDC) and is VLAN backed. During the migration, an NSX-T logical segment is automatically created in the VLAN transport zone defined via the ImportedNetworkTransportZone YAML element and imported into the target organization VDC (see the example after this list).
  • Service network use case: The external network is shared across multiple tenants (more than one organization VDC has an organization VDC network directly connected to it). In this scenario, an identically named external network with the suffix -v2t, backed by an NSX-T segment (usually with the same VLAN as the source external network; otherwise bridging must be configured manually), must be created by the system administrator before the migration. The migration tool then creates a directly connected organization VDC network to this NSX-T segment backed external network in the target organization VDC. The administrator can apply the NSX-T distributed firewall (to enforce tenant boundaries) on such a segment directly in NSX-T Manager (outside of VMware Cloud Director).
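
For the colocation/MPLS case, the transport zone used for the imported segment comes from the ImportedNetworkTransportZone element in the user input YAML. A minimal sketch follows, assuming the element is set per Org VDC; the Org VDC and transport zone names are placeholders, and the exact key placement should be verified against the sample userInput.yml shipped with the tool:

OrgVDCs:
  - OrgVDCName: Tenant1-OrgVDC             # placeholder Org VDC name
    # VLAN transport zone in which the imported NSX-T segment is created
    ImportedNetworkTransportZone: VLAN-TZ  # placeholder transport zone name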

Note that prior to version 1.3.2 of the NSX Migration for VMware Cloud Director tool, the service network scenario was migrated differently: the migration reused the existing external network scoped to both the source and target provider VDCs (backed by VLAN backed vSphere Distributed Switch port groups). If required, this legacy behavior can still be invoked with the YAML flag LegacyDirectNetworks: true (see the sketch below).
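
A minimal sketch of enabling the legacy behavior, assuming the flag is set per Org VDC in the user input file (the Org VDC name is a placeholder; confirm the exact placement against the sample userInput.yml):

OrgVDCs:
  - OrgVDCName: Tenant1-OrgVDC      # placeholder Org VDC name
    # Reuse the existing VLAN backed external network instead of a -v2t external network
    LegacyDirectNetworks: true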

In both of the above use cases, layer 2 bridging with NSX-T (bridging) Edge Nodes is not set up during the migration.

Shared Networks

Shared networks are organization VDC networks that are shared with other organization VDCs within the same organization.

  • When a shared network is present in an organization VDC that is going to be migrated, all organization VDCs that have vApps connected to the shared network must be migrated together (the supported maximum is 16 Org VDCs). The system administrator must provide the list of organization VDCs (and their specific parameters) to migrate together (see the sketch after this list). NSX-T backed provider VDCs use Data Center Groups to extend the organization VDC network scope across multiple organization VDCs. The migration tool automatically creates these Data Center Group(s); all organization VDCs migrated together will participate in them, and the relevant organization VDC networks will be scoped to them. The migration tool validates whether any other organization VDCs (besides those listed in the YAML) have vApps/VMs connected to the shared network and warns that such Org VDCs also need to be included in the migration.
  • An organization VDC network that is directly connected to an external network that is also used by other organization VDCs (typically the service network use case) will not be migrated via the Data Center Group mechanism. An identically named external network with the suffix -v2t, backed by an NSX-T segment (usually with the same VLAN as the source external network; otherwise bridging must be configured manually), must be created by the system administrator before the migration. The migration tool then creates a legacy shared, directly connected organization VDC network to this NSX-T segment backed external network in the target organization VDC. The administrator can apply the NSX-T distributed firewall (to enforce tenant boundaries) on such a segment directly in NSX-T Manager (outside of VMware Cloud Director).
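
As noted in the first bullet above, the Org VDCs sharing a network are listed together in the user input YAML so that they are migrated in one run. A sketch under assumed key names (OrgVDCName and NSXTProviderVDCName are illustrative; the authoritative schema is the tool's sample userInput.yml):

OrgVDCs:
  - OrgVDCName: Tenant1-OrgVDC-A         # has vApps connected to the shared network
    NSXTProviderVDCName: nsxt-pvdc-01    # illustrative target provider VDC
  - OrgVDCName: Tenant1-OrgVDC-B         # also connected to the shared network,
    NSXTProviderVDCName: nsxt-pvdc-01    # so it must be included in the same migration run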

Direct Network Migration Mechanism Summary

  • Direct network backing (source): External network VLAN backed, used only by a single direct Org VDC network
    Is the organization VDC network shared: -
    Default migration mechanism: An imported NSX-T VLAN segment is automatically created during the migration process.
    Mechanism with LegacyDirectNetworks = True: An imported NSX-T VLAN segment is automatically created during the migration process.
  • Direct network backing (source): External network VLAN backed, used by multiple direct Org VDC networks
    Is the organization VDC network shared: No
    Default migration mechanism: The target direct organization VDC network is connected to an external network with the same name appended with the -v2t suffix. This external network must be created before the migration (backed by NSX-T segments).
    Mechanism with LegacyDirectNetworks = True: The same external network is used to connect the target direct organization VDC network; it must be scoped to the target provider VDC.
  • Direct network backing (source): External network used by multiple direct organization VDC networks, backed by VLAN/VXLAN port groups
    Is the organization VDC network shared: Yes
    Default migration mechanism: The target direct organization VDC network is connected to an external network with the same name appended with the -v2t suffix. This external network must be created before the migration (backed by NSX-T segments).
    Mechanism with LegacyDirectNetworks = True: Same as the default mechanism.

Note Migration of shared networks is supported only with VMware Cloud Director version 10.3 or higher.

Route Advertisement

  • BGP: When route redistribution is configured for BGP on the source NSX-V backed edge gateway, the migration tool migrates all prefixes permitted by redistribution criteria that are either explicitly defined via Named IP Prefix or implicitly via Allow learning from - Connected. Additionally, the target gateway BGP IP prefix lists are populated with the specific IP prefixes, including the allow/deny action. All such criteria are migrated into a single v-t migrated IP prefix list.
  • Static Routing: VMware Cloud Director 10.4 supports only the configuration of internal static routes on an NSX-T backed edge gateway (set on the Tier-1 gateway), but not external static routes, which need to be configured on the Tier-0/VRF gateway. External routes can, however, be preconfigured on the Tier-0/VRF gateway by the provider prior to the migration. The migration tool will advertise all connected Org VDC networks from the Tier-1 gateway to the Tier-0/VRF gateway if the flag AdvertiseRoutedNetworks is set to True for the respective edge gateway in the user input file (see the example after this list).
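
The flag mirrors the per edge gateway form used in the granular example at the end of this section; a minimal sketch, with EdgeGateway1 as a placeholder name:

EdgeGateways:
  EdgeGateway1:
    # Advertise all connected Org VDC networks from the Tier-1 gateway to Tier-0/VRF
    AdvertiseRoutedNetworks: True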

Automated IP Address Allocation Behavior

VMware Cloud Director (10.3.1 or older) does not preserve a VM's network interface IP address that is automatically allocated via the 'Static - IP Pool' IP mode during migration or rollback. As a workaround, to retain the IP addresses of VMs, the migration tool changes the VM's network interface IP mode from 'Static - IP Pool' to 'Static - Manual' with the originally allocated IP address. This change persists after rollback as well.

Note VCD 10.3.2 resolved the issue of the IP address being reset when migrating a VM that uses a static IP address.

Routed vApp Networks

Starting with version 1.3.2, the migration tool supports the migration of routed vApp networks. vApp routers are deployed as standalone Tier-1 gateways connected to a single Org VDC network via a service interface. Due to NSX-T service interface limitations, vApp routers can only be connected to overlay-backed Org VDC networks. This feature requires VMware Cloud Director 10.3.2.1 or newer.

Edge Gateway Rate Limits

Starting with version 1.3.2, the migration tool creates a Gateway QoS profile based on the source Org VDC gateway rate limit configuration and assigns it to the corresponding Tier-1 gateway. This feature requires VMware Cloud Director 10.3.2 or newer.

Multiple External Networks:

An NSX-V backed Org VDC gateway can have multiple external networks connected, and thus a specific rate limit can be set on each such external interface. An NSX-T backed Org VDC gateway supports only a single rate limit towards the Tier-0/VRF gateway; therefore, after the migration the target side uses only the single most restrictive (lowest) limit in such a case.

Support for Org VDC Networks Routed via SR (Not Distributed)


Starting with version 10.3.2, VMware Cloud Director supports the configuration of an NSX-T Data Center edge gateway to allow non-distributed routing and to connect routed organization VDC networks directly to a tier-1 service router, forcing all VM traffic for a specific network through the service router. Starting with migration tool version 1.3.2, the migration of routed non-distributed Org VDC networks is supported.

A maximum of 9 Org VDC networks can use the non-distributed connection to a single NSX-T Data Center edge gateway. Sub-interface connected Org VDC networks are still migrated as distributed networks.

There are two possible ways to enable non-distributed routing for Org VDC networks:

  1. Explicit configuration via the optional Org VDC YAML parameter NonDistributedNetworks set to True. In that case, all routed Org VDC networks of the particular Org VDC that are connected via an internal interface (excluding distributed and sub-interface connected networks) will be created as non-distributed (SR connected); see the sketch after this list.

  2. Implicit configuration: if the migration tool detects that the Org VDC network's DNS configuration is identical to its gateway IP and DNS relay/forwarding is enabled on the NSX for vSphere backed edge gateway, such a network will be migrated in non-distributed mode.
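
A sketch of the explicit configuration from item 1, assuming the parameter is set at the Org VDC level of the user input file (it can also be set per edge gateway, as shown in the granular example at the end of this section):

OrgVDCs:
  - OrgVDCName: Tenant1-OrgVDC      # placeholder Org VDC name
    # Create all internally connected routed Org VDC networks as non-distributed (SR connected)
    NonDistributedNetworks: True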

If you were using your Org VDC network gateway address as a DNS server address before migration, you can use non-distributed routing to configure your Org VDC network that is backed by NSX-T Data Center to also use its network gateway's IP address as a DNS server address. To do that, the migration tool will additionally create a DNAT rule for DNS traffic translating the network default gateway IP to the DNS forwarder IP.

Note Two DNAT rules are needed for each network: one for TCP and one for UDP DNS traffic.
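
Purely as an illustration of the note above, the pair of rules could look like the following; the layout and rule names are not the tool's actual output, and the addresses are example values (192.168.10.1 as the network default gateway, 192.168.10.53 as the DNS forwarder IP):

# Illustrative only: DNAT of DNS traffic from the network gateway IP
# to the edge DNS forwarder IP, once for TCP and once for UDP
- Name: dns-dnat-tcp                 # hypothetical rule name
  Protocol: TCP
  Port: 53
  ExternalAddress: 192.168.10.1      # network default gateway IP
  TranslatedAddress: 192.168.10.53   # DNS forwarder IP
- Name: dns-dnat-udp                 # hypothetical rule name
  Protocol: UDP
  Port: 53
  ExternalAddress: 192.168.10.1
  TranslatedAddress: 192.168.10.53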

Parallel Migrations

Starting with release 1.4.0, multiple parallel migrations utilizing a shared pool of bridging NSX-T Edge Node clusters are supported. This is achieved by defining a unique bridge profile for each migration instance, in the format bridge-uplink-profileUUID.

A sophisticated mechanism is used to reserve bridging edge nodes for concurrent migrations. The migration tool first checks the availability of the edge nodes required for bridging. If the required edge nodes are available, the migration tool creates a tag in the format Organization-OrgVDC, attaches it to the bridge edge nodes, and keeps them reserved for that Org VDC migration.

In order to efficiently utilize bridge edge clusters and nodes, the bridging mechanism is enhanced in the following ways:

  1. Unique bridge/uplink/transport zone profile names (bridge-uplink-profileUUID).

  2. Reservation of edge nodes with an NSX-T edge node tag (Organization-OrgVDC).

  3. Validation during the pre-check that the supplied bridge edge clusters have enough free (untagged) edge nodes for the networks to be bridged.

Verifying the bridge uplink in NSX-T: [screenshot: Edge_Bridge_Uplink]

Verifying tagging on bridging nodes in NSX-T: [screenshot: Tagging_Edge_Bridge_Nodes]

Verifying the transport zone on bridging nodes in NSX-T: [screenshot: Transport-zone]

Granular Edge Gateway Parameters

Starting with release 1.4, when multiple edge gateways are migrated, it is possible to define specific extended parameters in the YAML for each such gateway. If granular edge gateway parameters are not provided, the defaults from the Org VDC block configured in the user YAML are used.

Example of granular edge gateway fields from the user input YAML:

EdgeGateways:
  EdgeGateway1:
    Tier0Gateways: tpm-externalnetwork
    NoSnatDestinationSubnet:
      - 10.102.0.0/16
      - 10.103.0.0/16
    ServiceEngineGroupName: Tenant1
    LoadBalancerVIPSubnet: 192.168.255.128/28
    LoadBalancerServiceNetwork: 192.168.155.1/25
    LoadBalancerServiceNetworkIPv6: fd0c:2fb3:9a78:d746:0000:0000:0000:0001/120
    AdvertiseRoutedNetworks: False
    NonDistributedNetworks: False
    serviceNetworkDefinition: 192.168.255.225/27
  EdgeGateway2:
    Tier0Gateways: tpm-externalnetwork2
    NoSnatDestinationSubnet:
      - 10.102.0.0/16
      - 10.103.0.0/16
    ServiceEngineGroupName: Tenant2
    LoadBalancerVIPSubnet: 192.168.255.128/28
    LoadBalancerServiceNetwork: 192.168.155.1/25
    LoadBalancerServiceNetworkIPv6: fd0c:2fb3:9a78:d746:0000:0000:0000:0001/120
    AdvertiseRoutedNetworks: False
    NonDistributedNetworks: False
    serviceNetworkDefinition: 192.168.255.225/27