In this migration, the migration coordinator migrates only the Distributed Firewall configuration from NSX Data Center for vSphere to NSX-T Data Center.
- User-defined Distributed Firewall (DFW) rules
- IP sets
- MAC sets
- Security policies created with Service Composer (only the DFW rule configuration is migrated)
Guest Introspection service configurations and Network Introspection rule configurations in Service Composer are not migrated.
- If "Applied To" is set to "DFW" in all the rules, and some rules use Security Groups with dynamic membership based on Security Tags or with static membership.
- If "Applied To" is set to a universal security group or a universal logical switch in any rule, and some rules use Security Groups with dynamic membership based on Security Tags or with static membership.
- If "Applied To" is set to a universal security group or a universal logical switch in any rule, and all the rules are IP-based.
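The three "Applied To" combinations above can be checked mechanically. The sketch below assumes an illustrative rule model (keys `applied_to`, `uses_security_groups`, and `ip_based` are invented for this example, not NSX API fields):

```python
# Minimal sketch of the three supported "Applied To" combinations listed
# above. The rule dictionaries are illustrative only, not an NSX API schema.

def universal_objects_migration_supported(rules):
    """Return True if the DFW rule set matches one of the supported patterns."""
    all_dfw = all(r["applied_to"] == "DFW" for r in rules)
    any_universal = any(
        r["applied_to"] in ("universal_security_group", "universal_logical_switch")
        for r in rules
    )
    # Security Groups with dynamic membership (Security Tags) or static membership
    has_sg_membership = any(r.get("uses_security_groups") for r in rules)
    all_ip_based = all(r.get("ip_based") for r in rules)

    return ((all_dfw and has_sg_membership)
            or (any_universal and has_sg_membership)
            or (any_universal and all_ip_based))
```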
Starting in NSX-T 3.1.1, migration of a single-site NSX for vSphere deployment that contains an NSX Manager in primary mode, no secondary NSX Managers, and universal objects on the primary site is supported. Such a single-site NSX for vSphere deployment is migrated to a single-site NSX-T environment (not Federation) with only local objects.
- Use the migration coordinator to migrate only the existing Distributed Firewall configuration from NSX-v to NSX-T Data Center.
- Use Layer 2 Edge bridge and vSphere vMotion to migrate workload VMs from NSX-v to NSX-T.
To extend Layer 2 networks, you can use the NSX-T native Edge bridge.
Prerequisites for DFW-Only Migration
- Supported software version requirements:
- NSX-v 6.4.4, 6.4.5, 6.4.6, 6.4.8, and later versions are supported.
- NSX-T Data Center version 3.1 or later.
NSX-T 3.1 supports this migration only through APIs. Migration through the UI is available starting in NSX-T 3.1.1.
- A new NSX-T Data Center is prepared for this migration, and a Layer 2 bridge is pre-configured to extend the VXLAN Logical Switch in NSX-v to the overlay segment in NSX-T Data Center.
- The destination NSX-T Data Center has no pre-existing user-defined DFW rules before this migration.
- All statuses in the System Overview pane of the NSX-v dashboard are green.
- In the NSX-v environment, the Distributed Firewall and Service Composer policies have no unpublished changes.
- On the NSX-v hosts, the export version of the Distributed Firewall must be set to 1000. You must verify the export version and update it if necessary. For details, see Configure the Export Version of Distributed Firewall Filter on Hosts.
- All hosts in the NSX-managed clusters (both NSX-v and NSX-T) must be connected to the same version of VDS, and each host within an NSX-managed cluster must be a member of a single VDS version.
- The lift-and-shift migration of the DFW-only configuration does not involve migrating hosts from NSX-v to NSX-T. Therefore, the ESXi version used in your NSX-v environment does not have to be supported by NSX-T.
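The prerequisites above can be summarized as a pre-flight checklist. In the sketch below, the `env` dictionary and its keys are illustrative assumptions; the values would come from your own inventory and from the NSX-v and NSX-T dashboards or APIs:

```python
# Hedged checklist sketch of the DFW-only migration prerequisites above.
# The `env` keys are invented for illustration, not NSX API fields.

SUPPORTED_NSX_V = {"6.4.4", "6.4.5", "6.4.6", "6.4.8"}  # plus later releases
REQUIRED_EXPORT_VERSION = 1000

def failed_dfw_migration_prereqs(env):
    """Return the names of prerequisite checks that fail for this environment."""
    checks = {
        "nsx_v_version_supported": env["nsx_v_version"] in SUPPORTED_NSX_V,
        "no_existing_user_dfw_rules_in_nsx_t": not env["nsx_t_user_dfw_rules"],
        "nsx_v_dashboard_all_green": env["dashboard_all_green"],
        "no_unpublished_dfw_changes": not env["unpublished_dfw_changes"],
        "export_version_1000": all(
            v == REQUIRED_EXPORT_VERSION for v in env["host_export_versions"]
        ),
    }
    return [name for name, ok in checks.items() if not ok]
```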
- This guide explains the DFW-only migration workflow in the migration coordinator UI. If you are using NSX-T 3.1, this migration is supported only with NSX-T APIs. To migrate using APIs, see the API calls that are explained in the Lift and Shift Migration Process section of the NSX Tech Zone article.
- In DFW-only migration mode of the migration coordinator, the firewall state (DVFilter) for existing connection sessions is maintained throughout the migration including vMotion. The firewall state is maintained regardless of whether the VMs are migrating within a single vCenter Server or across vCenter Servers. Also, the dynamic membership in the firewall rules is maintained after the migration coordinator migrates the Security Tags to the workload VMs.
- Objects that are created during the migration must not be updated or deleted before the migration is finished. However, you can create additional objects in NSX-T, if necessary.
- In NSX-T, DFW is enabled out of the box. All flows with sources and destinations as "any" in the DFW rules are allowed by default. When Distributed Firewall is enabled in the NSX-T environment, you cannot migrate the workload VMs again from NSX-T to NSX-v. Roll back of migrated workload VMs is not supported. The workaround is to add the workload VMs to the NSX-T Firewall Exclusion List, and then migrate the workload VMs back to NSX-v using vSphere vMotion.
- The automated migration of DFW configurations supports workload VMs that are attached to NSX-v logical switches; these VMs are migrated to NSX-T overlay segments. Workload VMs in NSX-v that are attached to vSphere Distributed Virtual Port Groups are not automatically migrated to NSX Distributed Virtual Port Groups. As a workaround, you must create the NSX Distributed Virtual Port Groups manually and attach the workload VMs to them.
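The rollback workaround in the notes above (add workload VMs to the NSX-T Firewall Exclusion List, then vMotion them back to NSX-v) can be scripted. The endpoint path, query parameter, and payload shape below are assumptions modeled on the NSX-T Manager API; verify them against the NSX-T Data Center API guide for your version before use:

```python
# Hedged sketch: build (but do not send) a request that adds one workload VM
# to the NSX-T firewall exclusion list. The manager FQDN, endpoint, and
# payload fields are assumptions for illustration only.

import json

NSX_T_MANAGER = "https://nsxt-mgr.example.com"  # hypothetical manager FQDN

def build_exclude_list_request(vm_external_id):
    """Build (method, url, body) for adding one VM to the exclusion list."""
    url = f"{NSX_T_MANAGER}/api/v1/firewall/excludelist?action=add_member"
    body = json.dumps({
        "target_type": "VirtualMachine",  # assumed member type
        "target_id": vm_external_id,
    })
    return ("POST", url, body)
```

Repeat the call for each workload VM before starting vSphere vMotion back to NSX-v, so that NSX-T DFW enforcement does not interrupt existing sessions during the move.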