The NSX Migration for VMware Cloud Director tool supports the following topology.
Direct organization VDC networks are connected directly to an external network that is backed by one or more vSphere Distributed Switch port groups (usually VLAN or VXLAN backed).
By default, the two scenarios are considered and migrated differently:

- If the external network is used only by a single direct Org VDC network, the migration tool automatically creates an imported NSX-T VLAN segment backed network in the target organization VDC during the migration.
- If the external network is used by multiple direct Org VDC networks (the service network scenario), an external network with the same name but appended with the -v2t suffix, backed by an NSX-T segment (usually with the same VLAN as the source external network, otherwise bridging needs to be configured manually), must be created by the system administrator before the migration. The migration tool then creates a directly connected organization VDC network to the NSX-T segment backed external network in the target organization VDC. The administrator can use the NSX-T distributed firewall (to enforce tenant boundaries) on such a segment directly in NSX-T Manager (outside of VMware Cloud Director).

Note that prior to version 1.3.2 of the NSX Migration for VMware Cloud Director tool, the service network scenario was migrated differently: the migration reused the existing external network scoped to both the source and target provider VDC (backed by VLAN backed vSphere Distributed Switch port groups). If required, this legacy behavior can still be invoked with the YAML flag LegacyDirectNetworks: true.
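If the legacy behavior is required, the flag is set in the user input YAML. A minimal sketch, assuming the flag belongs in the Org VDC section of the input file (check the tool's sample user input file for the exact placement):

```yaml
# Sketch only: confirm the exact placement of this flag against the sample user input file
LegacyDirectNetworks: true   # reuse the existing PVDC-scoped external network (pre-1.3.2 behavior)
```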
In both of the above use cases, layer 2 bridging with NSX-T (bridging) Edge Nodes is not set up during the migration.
Shared networks are organization VDC networks that are shared with other organization VDCs within the same organization.
| Direct Network Backing (Source) | Is Organization VDC Network Shared? | Default Migration Mechanism | Migration Mechanism with LegacyDirectNetworks = True |
|---|---|---|---|
| External network VLAN backed, used only by a single direct Org VDC network | - | Imported NSX-T VLAN segment automatically created during the migration process. | Imported NSX-T VLAN segment automatically created during the migration process. |
| External network VLAN backed, used by multiple direct Org VDC networks | No | Target direct organization VDC network is connected to an external network with the same name but appended with the -v2t suffix. This external network must be created before the migration (backed by NSX-T segments). | The same external network is used to connect the target direct organization VDC network and must be scoped to the target provider VDC. |
| External network backed by VLAN/VXLAN port groups, used by multiple direct organization VDC networks | Yes | Target direct organization VDC network is connected to an external network with the same name but appended with the -v2t suffix. This external network must be created before the migration (backed by NSX-T segments). | Target direct organization VDC network is connected to an external network with the same name but appended with the -v2t suffix. This external network must be created before the migration (backed by NSX-T segments). |
Note Migration of shared networks is supported only with VMware Cloud Director version 10.3 or higher.
The subnets of migrated routed Org VDC networks are advertised via an IP prefix list only if AdvertiseRoutedNetworks is set to True for the respective Edge Gateway in the user input file.
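For reference, a minimal sketch of the flag in the user input YAML, reusing the per-gateway structure from the EdgeGateways example at the end of this article (the gateway and Tier-0 names are placeholders):

```yaml
EdgeGateways:
  EdgeGateway1:                          # placeholder gateway name
    Tier0Gateways: tpm-externalnetwork   # placeholder target Tier-0/VRF gateway
    AdvertiseRoutedNetworks: True        # advertise the routed Org VDC networks behind this gateway
```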
VMware Cloud Director (10.3.1 or older) does not preserve a VM's network interface IP address that was automatically allocated via the Static - IP Pool IP mode during migration or rollback. As a workaround, to retain the IP addresses of such VMs, the migration tool changes the VM's network interface IP mode from Static - IP Pool to Static - Manual, using the originally allocated IP address. This change persists after rollback as well.
Note VCD 10.3.2 resolved the issue of IP address reset while migrating a VM which was using a static IP address.
Starting with version 1.3.2, the migration tool supports the migration of routed vApp networks. vApp routers are deployed as standalone Tier-1 gateways connected to a single Org VDC network via a service interface. Due to NSX-T service interface limitations, vApp routers can only be connected to overlay-backed Org VDC networks. This feature requires VMware Cloud Director 10.3.2.1 or newer.
Starting with version 1.3.2, the migration tool creates Gateway QoS profiles based on the source Org VDC gateway rate limit configuration and assigns them to the corresponding Tier-1 gateway. This feature requires VMware Cloud Director 10.3.2 or newer.
An NSX-V backed Org VDC gateway can have multiple external networks connected, each with its own rate limit set on the external interface. An NSX-T backed Org VDC gateway supports only a single rate limit towards the Tier-0/VRF gateway, so after the migration the target side uses only the single most restrictive (lowest) of those limits. For example, if the source interfaces are limited to 200 Mbps and 50 Mbps, the target Tier-1 gateway is configured with the 50 Mbps limit.
Starting with version 10.3.2, VMware Cloud Director supports the configuration of an NSX-T Data Center edge gateway to allow non-distributed routing and to connect routed organization VDC networks directly to a tier-1 service router, forcing all VM traffic for a specific network through the service router. Starting with migration tool version 1.3.2, the migration of routed non-distributed Org VDC networks is supported.
A maximum of 9 Org VDC networks can use the non-distributed connection to a single NSX-T Data Center edge gateway. Sub-interface connected Org VDC networks are still migrated as distributed networks.
There are two possible ways to enable non-distributed routing for Org VDC networks:
- Explicit configuration via the optional Org VDC YAML parameter NonDistributedNetworks set to True. In that case, all routed Org VDC networks of the particular Org VDC (connected via an internal interface, excluding distributed and sub-interface connected networks) are created as non-distributed (SR connected); see the sketch after this list.
- Implicit configuration: if the migration tool detects that the Org VDC network DNS configuration is identical to its gateway IP and DNS relay/forwarding is enabled on the NSX for vSphere backed edge gateway, the network is migrated in non-distributed mode.
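A minimal sketch of the explicit option, reusing the per-gateway structure from the EdgeGateways example at the end of this article (the gateway name is a placeholder):

```yaml
EdgeGateways:
  EdgeGateway1:                    # placeholder gateway name
    NonDistributedNetworks: True   # create routed (internal interface) networks as non-distributed (SR connected)
```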
If you were using your Org VDC network gateway address as a DNS server address before migration, you can use non-distributed routing to configure your Org VDC network that is backed by NSX-T Data Center to also use its network gateway's IP address as a DNS server address. To do that, the migration tool will additionally create a DNAT rule for DNS traffic translating the network default gateway IP to the DNS forwarder IP.
Note Two DNAT rules for each network are needed - one for TCP and another for UDP DNS traffic.
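A conceptual sketch of the two rules created for such a network (illustrative pseudo-configuration, not the tool's actual NSX-T objects; the gateway and DNS forwarder IPs are hypothetical):

```yaml
# Illustrative pseudo-configuration only, with hypothetical addresses
dnsDnatRules:
  - name: dns-tcp                           # hypothetical rule name
    matchDestinationIP: 192.168.1.1         # the Org VDC network's default gateway IP
    matchService: TCP/53                    # DNS over TCP
    translatedDestinationIP: 192.168.255.2  # the Tier-1 gateway's DNS forwarder IP
  - name: dns-udp
    matchDestinationIP: 192.168.1.1
    matchService: UDP/53                    # DNS over UDP
    translatedDestinationIP: 192.168.255.2
```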
Starting with release 1.4.0, multiple parallel migrations utilizing a shared pool of bridging NSX-T Edge Node clusters are supported. This is achieved by defining a unique bridge profile for each migration instance in the format bridge-uplink-profileUUID.
A reservation mechanism is used to dedicate bridging edge nodes to concurrent migrations. The migration tool first checks the availability of the edge nodes required for bridging. If the required edge nodes are available, the migration tool creates a tag in the format Organization-OrgVDC, attaches it to the bridge edge nodes, and keeps them reserved for that Org VDC migration.
To utilize bridge edge clusters and nodes efficiently, the bridging mechanism is enhanced in the following ways:

- Unique bridge/uplink/transport zone profile names (bridge-uplink-profileUUID).
- Reservation of edge nodes with an NSX-T edge node tag (Organization-OrgVDC).
- Validation during pre-check that the supplied bridge edge clusters have enough free (untagged) edge nodes for the networks to be bridged.
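As an illustration, with hypothetical organization and Org VDC names the generated names would look like this (the UUID is generated per migration instance):

```yaml
# Hypothetical values for illustration only
bridgeUplinkProfileName: bridge-uplink-profile<UUID>   # unique per migration instance
edgeNodeReservationTag: ACME-ACME-OrgVDC01             # format: Organization-OrgVDC (hypothetical names)
```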
In NSX-T Manager, you can verify the bridge uplink profile, the tagging on the bridging nodes, and the transport zone on the bridging nodes.
Starting with release 1.4, when multiple edge gateways are migrated, specific extended parameters can be defined in the YAML input for each gateway. If granular edge gateway parameters are not provided, the defaults from the Org VDC block configured in the user YAML are used.
Example of granular edge gateway fields from the user input YAML:
```yaml
EdgeGateways:
  EdgeGateway1:
    Tier0Gateways: tpm-externalnetwork
    NoSnatDestinationSubnet:
      - 10.102.0.0/16
      - 10.103.0.0/16
    ServiceEngineGroupName: Tenant1
    LoadBalancerVIPSubnet: 192.168.255.128/28
    LoadBalancerServiceNetwork: 192.168.155.1/25
    LoadBalancerServiceNetworkIPv6: fd0c:2fb3:9a78:d746:0000:0000:0000:0001/120
    AdvertiseRoutedNetworks: False
    NonDistributedNetworks: False
    serviceNetworkDefinition: 192.168.255.225/27
  EdgeGateway2:
    Tier0Gateways: tpm-externalnetwork2
    NoSnatDestinationSubnet:
      - 10.102.0.0/16
      - 10.103.0.0/16
    ServiceEngineGroupName: Tenant2
    LoadBalancerVIPSubnet: 192.168.255.128/28
    LoadBalancerServiceNetwork: 192.168.155.1/25
    LoadBalancerServiceNetworkIPv6: fd0c:2fb3:9a78:d746:0000:0000:0000:0001/120
    AdvertiseRoutedNetworks: False
    NonDistributedNetworks: False
    serviceNetworkDefinition: 192.168.255.225/27
```