VMware NSX 3.2.1 | 17 MAY 2022 | Build 19801959

Check for additions and updates to these release notes.

What's New

NSX-T Data Center 3.2.1 provides a variety of new capabilities for virtualized networking, security, and migration from NSX Data Center for vSphere. Highlights include new features and enhancements in the following focus areas.

Federation

  • Federation brownfield onboarding is again supported. You can promote existing objects on a Local Manager into the Global Manager configuration.

  • Federation traceflow - Traceflow can now be initiated from the Global Manager and displays the different locations that the packets traverse.

  • Federation higher latency supported - The supported network latency between the following components increases from 150ms to 500ms round-trip time:

    • Between Global Manager active cluster and Global Manager standby cluster

    • Between Global Manager cluster and Local Manager cluster

    • Between Local Managers clusters across different locations

    This offers more flexibility for security use cases. The maximum latency between RTEPs remains 150ms round-trip time for network stretch use cases.

Edge Platform

  • NSX Edge Node (VM form factor) - Support for up to four datapath interfaces. With NSX-T 3.2.1, you can add one more datapath interface for greenfield deployments. For brownfield deployments, you can use NSX redeploy if you need more than three vmnics. For OVF-based NSX Edge Node deployments, you must configure four datapath interfaces at deployment time.

Distributed Firewall

  • Distributed Firewall now supports Physical Server SLES 12 SP5.

Gateway Firewall

  • TLS 1.2 Inspection was in Tech Preview mode in NSX-T 3.2 and is now available for production environments in NSX-T 3.2.1. With this feature, Gateway Firewall can decrypt and inspect the payload to prevent advanced persistent threats.

  • IDPS (Intrusion Detection and Prevention System) is introduced on the Gateway Firewall in NSX-T 3.2.1. With this feature, the Gateway Firewall IDPS detects attempts to exploit system flaws or gain unauthorized access to systems.

NSX Data Center for vSphere to NSX-T Data Center Migration

  • Migration Coordinator now supports migrating to NSX-T environments with Edge Nodes deployed with two TEPs for the following modes: User-Defined Topologies, Migrating Distributed Firewall Configuration, Hosts and Workloads.

  • Migration Coordinator supports adding hosts during migration for Single Site.

  • Migration Coordinator now supports Cross-vCenter to Federation migration, including end-to-end and configuration-only modes.

  • Migration Coordinator now supports changing certificates during migration.

Install and Upgrade

  • Rolling upgrade of NSX Management Cluster - When upgrading the NSX Management Cluster from NSX-T 3.2.1, you can now achieve near-zero downtime of the NSX Management Plane (MP) by using the Rolling Upgrade feature. With this feature, the maintenance window for the MP upgrade is shortened, and NSX MP API/UI access remains available throughout the upgrade process, without impacting Data Plane workloads.

  • Install NSX on Bare Metal/Physical Servers as a non-root user - In NSX-T 3.2.1, you can now install NSX on Linux bare metal/physical servers as a non-root user.

N-VDS to VDS migrator tool

  • The N-VDS to VDS Migrator Tool is reintroduced, enabling you to migrate the underlying N-VDS connectivity to NSX on VDS while keeping workloads running on the hypervisors.

  • The N-VDS to VDS Migrator Tool now supports migration of the underlying N-VDS connectivity if there are different configurations of N-VDS with the same N-VDS name.

  • During NSX deployment on ESX, NSX checks the MTU configured on the VDS to make sure it can accommodate overlay traffic. If it cannot, the VDS MTU is automatically adjusted to the NSX global MTU.

Platform Security

  • Certificate Management Enhancements for TLS Inspection - With the introduction of the TLS Inspection feature, certificate management now supports adding and modifying certificate bundles and generating CA certificates to be used with the TLS Inspection feature. In addition, the general certificate management UI includes modifications that simplify the import and export of certificates.

Compatibility and System Requirements

For compatibility and system requirements information, see the VMware Product Interoperability Matrices and the NSX-T Data Center Installation Guide.

Upgrade Notes for This Release

For instructions about upgrading the NSX-T Data Center components, see the NSX-T Data Center Upgrade Guide.

Customers upgrading to this release should run the NSX Upgrade Evaluation Tool before starting the upgrade process. The tool is designed to ensure success by checking the health and readiness of your NSX Managers prior to upgrading.

API and CLI Resources

See developer.vmware.com to use the NSX-T Data Center APIs or CLIs for automation.

The API documentation is available from the API Reference tab. The CLI documentation is available from the Documentation tab.

Available Languages

NSX-T Data Center has been localized into multiple languages: English, German, French, Italian, Japanese, Simplified Chinese, Korean, Traditional Chinese, and Spanish. Because NSX-T Data Center localization utilizes the browser language settings, ensure that your settings match the desired language.

Document Revision History

  • May 17, 2022 (Edition 1): Initial edition

  • Jun 07, 2022 (Edition 2): Added Resolved Issue 2937810

  • Jul 07, 2022 (Edition 3): Added Resolved Issues 2924317 and 2949527

  • Aug 02, 2022 (Edition 4): Added Known Issue 2989696

  • Sept 07, 2022 (Edition 5): Known Issue 2816781 is fixed; added Known Issue 3025104

  • Sept 12, 2022 (Edition 6): Added Known Issue 2969847

  • Sept 16, 2022 (Edition 7): Added Known Issue 3025104

  • Sept 26, 2022 (Edition 8): Added Italian to Available Languages

  • Oct 01, 2022 (Edition 9): Added Known Issue 3012313; moved NSX Application Platform known issues 2936504 and 2949575 from the NSX Intelligence 3.2.1 Release Notes to this document

  • Oct 11, 2022 (Edition 10): Added Known Issue 2983892

  • Oct 11, 2022 (Edition 11): Added Known Issue 2931403

  • Oct 20, 2022 (Edition 12): Added Known Issue 3044773

  • Nov 01, 2022 (Edition 13): Added Fixed Issues 2931127, 2955949, 2988499, and 2968705; added Known Issues 2962718, 2965357, 2978739, 2990741, 2991201, 2992759, 2992964, 2994424, 3004128, 2872892, 3046183, and 3047028

  • Feb 23, 2023 (Edition 14): Editorial update

  • May 19, 2023 (Edition 15): Added Known Issue 3116294

  • May 22, 2023 (Edition 16): Added Known Issue 3152512

  • June 1, 2023 (Edition 17): Moved Known Issue 2879119 to Fixed Issues

  • August 6, 2024 (Edition 18): Added Known Issue 3145439

Resolved Issues

  • Fixed Issue 2968705: Global Manager UI shows an error message after upgrading to 3.2.0.1 hot patch.

    After installing the 3.2.0.1.2942942 hot patch, Global Manager UI shows the following error message:

    Search framework initialization failed, please restart the service via 'restart service global-manager'.

    You are not able to access Global Manager UI after deploying the hot patch.

  • Fixed Issue 2988499: NSX Manager UI does not load.

    UI shows the following error message:

    "Some appliance properly components are not functioning."

    This issue occurs due to the compactor running out of memory.

  • Fixed Issue 2955949: Controller fails to resubscribe with UFO table after a network disconnection.

    New API realization fails because the controller cannot receive new notifications from the UFO table.

  • Fixed Issue 2931127: For edge VM that is deployed using the NSX Manager UI, you are unable to edit the Edge Transport Node configuration in the UI.

    In the Edit Edge Transport Node window, the DPDK fast path interfaces are not displayed for the uplink when the uplink is associated with an NSX segment.

  • Fixed Issue 2949527: VM loses DFW rules if it migrates to a host where opsAgent is not generating VIF attachment notifications.

    If VM was part of a security group with firewall rules applied to it and migrates to a faulty TransportNode, the VM loses DFW rules that were inherited from a security group. The VM will still have any default DFW rule that the user has configured.

  • Fixed Issue 2924317: A few minutes after host migration starts, host migration fails with fabric node creation failure error.

    If the overlay status on the NSX-T side is not checked correctly, host migration fails.

  • Fixed Issue 2937810: The datapath service fails to start and some Edge bridge functions (for example, Edge bridge port) do not work.

    If Edge bridging is enabled on Edge nodes, the Central Control Plane (CCP) sends the DFW rules to the Edge nodes, which should only be sent to host nodes. If the DFW rules contain a function which is not supported by the Edge firewall, the Edge nodes cannot handle the unsupported DFW configuration, which causes the datapath to fail to start.

  • Fixed Issue 2645632: During switchover operation of the local edge, IKE sessions are deleted and re-established by the peer.

    In IPsec setups that have a large number (more than 30) of IKE sessions configured, local edges deployed in active-standby mode with HA-Sync enabled, and peers with DPD enabled with default settings, some IKE sessions may be torn down by the peer due to DPD timeout and re-established during the switchover.

  • Fixed Issue 2962901: Edge Datapath Configuration Failure alarm after edge node upgrade.

    On the NSX-T Federation setup, when T1 gateways are stretched with DHCP static bindings for downlink segments, MP also creates L2 forwarder ports for the DHCP switch. If a single edge node has two DHCP switches and it was restarted, it caused the failure.

  • Fixed Issue 2938407: Edge node fails to deploy completely in an NSX-T federation setup on NSX-T 3.2.0.x.

    No update for that edge node in the UI. The edge node fails to deploy completely with "Registration Timedout".

  • Fixed Issue 2938194: Refresh API fails with error code 100 - NullPointerException.

    Refresh enables the NSX Manager to collect the current config of the edge. The configuration sync does not work and fails with an error ‘Failed to refresh the transport node configuration: undefined’. Any external changes will not raise alarms.

  • Fixed Issue 2927442: Traffic sometimes hits the default deny DFW rule on VMs across different hosts and clusters since the NSX-T 3.2.0.1 upgrade.

    The issue is the result of a race condition where two different threads access the same memory address space simultaneously. This sometimes causes incomplete address sets to be forwarded to transport nodes whose control plane shard is on the impacted controller.

  • Fixed Issue 2894988: PSOD during normal operation of DFW.

    A PSOD host occurs in a pollWorld or NetWorld world, with the callstack showing rn_match_int() and pfr_update_stats() as top 2 functions. An address set object is in transition due to being reprogrammed, but a packet processing thread (pollWorld or NetWorld) is concurrently traversing the address set.

  • Fixed Issue 2933905: Replacing an NSX-T Manager results in transport nodes losing connection to the controller.

    Adding or removing a node from the Manager cluster can result in some transport nodes losing controller connectivity.

  • Fixed Issue 2921704: Edge Service CPU may spike due to nginx process deadlock when using L7 load balancer with ip-hash load balancing algorithm.

    You cannot connect to the backend servers behind the Load Balancer.

  • Fixed Issue 2914934: DFW rules on dvPortGroups are lost after NSX for vSphere to NSX-T migration.

    After migration, any workload that is still connected to vSphere dvPortGroup will not have DFW configuration.

  • Fixed Issue 2883505: PSOD on ESXi hosts during NSX for vSphere to NSX-T migration.

    PSOD on multiple ESXi hosts during migration. This results in a datapath outage.

  • Fixed Issue 2875667: Downloading the LTA PCAP file results in error when the NSX /tmp partition is full.

    The LTA PCAP file cannot be downloaded due to the /tmp partition being full.

  • Fixed Issue 2875563: Delete IN_PROGRESS LTA session may cause PCAP file leak.

    The PCAP file will leak if a LTA is deleted with PCAP action when the LTA session state is "IN_PROGRESS". This may cause the /tmp partition of the ESXi to be full.

  • Fixed Issue 2878414: While creating a group, in the members dialog for the Group member type, clicking "View Members" on a group copies that group's members into the current group.

    You may see that the members are copied from the other group while viewing its members. You can always modify and unselect/remove those items.

  • Fixed Issue 2865827: VM loses the existing TCP and/or ICMP connectivity after the vMotion of Guest Virtual Machine.

    When Service Insertion is configured, VM loses the existing TCP and/or ICMP connectivity after the vMotion of Guest Virtual Machine.

  • Fixed Issue 2873440: Error is returned for VIF membership API during VM vMotion.

    During VM vMotion, the effective VIF membership API (https://{{ip}}/policy/api/v1/infra/domains/:domains/groups/:group/members/vifs) returns an error. The API works fine after the VM vMotion is successfully completed. Use the effective VIF membership API after VM vMotion is completed (see the sketch below).
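
    For reference, a minimal command sketch of calling the effective VIF membership API after vMotion completes. This is an illustration only; it assumes admin credentials and the default Policy domain, and the group ID shown is a placeholder:

      curl -k -u admin:'<password>' \
        "https://<manager_ip>/policy/api/v1/infra/domains/default/groups/<group-id>/members/vifs"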

  • Fixed Issue 2880406: Realized NSService or NSServiceGroups does not have policyPath present in tags if those are retrieved through search API.

    If a Service or ServiceEntry is created on the Policy side and retrieved using the Search API, the returned NSServiceGroup or NSService will not have a policyPath tag that contains the path of the Service or ServiceEntry on the Policy side.

  • Fixed Issue 2909840: The upgrade of NSX-T Baremetal Edge with a bond interface configured as the management interface fails during serial upgrade. PNIC is reported down.

    Following the upgrade reboot, the dataplane service fails to start. The syslogs indicate an error in a python script. For example, 2021-12-24T15:19:19.274Z HKEDGE02.am-int01.net datapath-systemd-helper 6976 - - fd = file(path) 2021-12-24T15:19:19.296Z HKEDGE02.am-int01.net datapath-systemd-helper 6976 - - NameError: name 'file' is not defined

    On the partially upgraded Edge, the dataplane service is down. There will still be an active Edge in the cluster, but it might be down to a single point of failure.

  • Fixed Issue 2894642: The datapath process on Edge VMs deployed on a host with SandyBridge, IvyBridge or Westmere CPU, or with EVC mode set to IvyBridge or earlier, fails to start.

    A newly deployed Edge has a Configuration State of Failed with an error. As a result, the Edge datapath is non-functional.

  • Fixed Issue 2941110: The upgrade coordinator page failed to load in UI due to slowness in scale setup.

    You may not be able to navigate and check the upgrade status after starting the large-scale upgrade, because the Upgrade Coordinator page failed to load in the UI with the error "upgrade status listing failed: Gateway Time-out".

  • Fixed Issue 2946102: Firewall Exclude List records from the /internal (MP) entry are missing in upgrade paths from GC/HL to 3.2.0 or 3.2.0.1, which may lead to CCP having problems excluding the members in the firewall exclude list.

    CCP might have problems configuring the DFW Exclude list if the upgrade path includes the NSX-T 3.2.0 or 3.2.0.1 release. You will not be able to see DFW Firewall Exclude List members from the MP side, and you may find the members in the firewall exclude list not being excluded. One of the entries in the database that the CCP consumes is missing since the internal records were overwritten by the infra one. This issue does not occur if the customer directly upgrades from the NSX-T 3.0.x or 3.1.x release to the NSX-T 3.2.1 release.

  • Fixed Issue 2936347: Edge redeploy must raise an alarm if it cannot successfully find or delete the previous edge VM that is still connected to MP.

    With power-off and delete operations through VC failing, Edge redeploy operation may end up with two Edge VMs functioning at the same time, resulting in IP conflicts and other issues.

  • Fixed Issue 2938347: ISO installation on bare metal edge fails with a black screen after reboot.

    Installation of NSX-T Edge (bare metal) may fail during the first reboot after installation is complete on a Dell PowerEdge R750 server while in UEFI boot mode.

  • Fixed Issue 2914742: Tier0 Logical Routing enters an error state when "Maximum Routes" is set in the route filters of one or more of its BGP neighbors for the L2VPN_EVPN address family.

    Routing stops working.

  • Fixed Issue 2893170: SAP - Policy API not able to fetch inventory, networking, or security details. UI displays an error message: "Error: Index is currently out of sync, system is trying to recover."

    In NSX-T 3.x, elastic search has been configured to index IP range data in the format of IP ranges instead of Strings. A specific IP address can be searched from the configured IP address range for any rule. Although elastic search works fine with existing formats like IPv4 addresses and ranges, IPv6 addresses and ranges, and CIDRs, it does not support IPv4-mapped IPv6 addresses with CIDR notation and will raise an exception. This will cause the UI to display an "Index out of sync" error, resulting in data loading failure.

  • Fixed Issue 2890348: The default VNI pool needs to be migrated correctly to IM if the default VniPool's name is changed in GC/HL.

    The default VNI Pool was named as "DefaultVniPool" before NSX-T 3.2. The VNI Pool will be migrated incorrectly if it was renamed prior to the release of NSX-T 3.2. The upgrade or migration will not fail, but the pool data will be inconsistent.

  • Fixed Issue 2885820: Missing translations on CCP for a few IPs when an IP range starts with 0.0.0.0.

    An NSGroup with an IP range starting with 0.0.0.0, for example "0.0.0.0-255.255.255.0", has translation issues (the 0.0.0.0/1 subnet is missing). NSGroups with the IP range "1.0.0.0-255.255.255.0" are unaffected.

  • Fixed Issue 2881503: Scripts fail to clear PVLAN properties during upgrade if dvs name contains blank space.

    PVLAN properties are not cleared after upgrading, so the conflict with VC still persists.

  • Fixed Issue 2879667: Post NSX-T 3.2 migration, flows are not being streamed through Pub/Sub.

    When migrating to NSX-T 3.2, the broker endpoint for Pub/Sub subscriptions does not get updated. The subscription stops receiving flows if the broker IP is incorrect.

  • Fixed Issue 2871162: You cannot see the pool member failure reason through API when the load-balancer pool member is down.

    Failure Reason cannot be shown in pool member status when the load-balancer pool is configured with one monitor and the pool member status is DOWN.

  • Fixed Issue 2862606: For ESX versions earlier than 7.0.1, the NIOC profile is not supported.

    Creating or updating Transport Nodes appears to be successful. However, the actual configuration of the NIOC profile will not be applied to the datapath, so it will not work.

  • Fixed Issue 2658092: Onboarding fails when NSX Intelligence is configured on Local Manager.

    Onboarding fails with a principal identity error. You cannot onboard a system with principal identity user.

  • Fixed Issue 2636420: The transport node profile is applied on a cluster on which "remove nsx" is called after backup.

    Hosts are not in prepared state but the transport node profile is still applied at the cluster.

  • Fixed Issue 2687084: After upgrade or restart, the Search API may return 400 error with Error code 60508, "Re-creating indexes, this may take some time."

    Depending on the scale of the system, the Search API and the UI are unusable until the re-indexing is complete.

  • Fixed Issue 2782010: Policy API allows change vdr_mac/vdr_mac_nested even when "allow_changing_vdr_mac_in_use" is false.

    VDR MAC will be updated on TN even if allow_changing is set to false. Error is not thrown.

  • Fixed Issue 2791490: Federation: Unable to sync the objects to standby Global Manager (GM) after changing standby GM password.

    The Active GM is not visible in the standby GM's Location Manager, and updates from the Active GM are not received.

  • Fixed Issue 2882574: Blocked 'Brownfield Config Onboarding' APIs in NSX-T 3.2.0 release as the feature is not fully supported.

    You will not be able to use the 'Brownfield Config Onboarding' feature.

  • Fixed Issue 2628503: DFW rule remains applied even after forcefully deleting the manager nsgroup.

    Traffic may still be blocked when forcefully deleting the nsgroup.

  • Fixed Issue 2526769: Restore fails on multi-node cluster.

    When starting a restore on a multi-node cluster, restore fails and you will have to redeploy the appliance.

  • Fixed Issue 2613113: If onboarding is in progress, and restore of Local Manager is done, the status on Global Manager does not change from IN_PROGRESS.

    UI shows IN_PROGRESS in Global Manager for Local Manager onboarding. Configuration of the restored site cannot be imported.

  • Fixed Issue 2864250: A failure is seen in transport node realization if Custom NIOC Profile is used when creating a transport node.

    Transport node is not ready to use.

  • Fixed Issue 2884518: Incorrect count of VMs connected to segment on Network topology UI after upgrading an NSX appliance to NSX-T 3.2.

    You will see an incorrect count of VMs connected to the segment on the Network Topology view. However, the actual count of VMs associated with the segment is shown when you expand the VM node.

  • Fixed Issue 2772500: Enabling Nested overlay on ENS can result in PSOD.

    Enabling nested overlay on ENS can result in a PSOD.

  • Fixed Issue 2866751: Consolidated effective membership API does not list static IPs in the response for a shadow group.

    No functional or datapath impact. You will not see the static IPs in the GET consolidated effective membership API for a shadow group. This is applicable only for a shadow group (also called reference groups).

  • Fixed Issue 2872658: After Site-registration, UI displays an error for "Unable to import due to these unsupported features: IDS."

    There is no functional impact. Config import is not supported in NSX-T 3.2.

  • Fixed Issue 2877628: When attempting to install Security feature on an ESX host switch VDS version lower than 6.6, an unclear error message is displayed.

    The error message is shown via the UI and API.

  • Fixed Issue 2881168: LogicalPort GET API output is in expanded format "fc00:0:0:0:0:0:0:1" as compared to previous format "fc00::1".

    LogicalPort GET API output in NSX-T 3.2 is in expanded format "fc00:0:0:0:0:0:0:1" as compared to NSX-T 3.0 format "fc00::1".

  • Fixed Issue 2882822: Temporary IPSets are not added to SecurityGroups used in EDGE Firewall rules / LB pool members during NSX for vSphere to NSX-T config migration.

    During migration, there may be a gap until the VMs/VIFs are discovered on NSX-T and become part of the SGs to which they are applicable via static/dynamic memberships. This can lead to traffic being dropped or allowed contrary to the Edge Firewall rules in the period from the North/South cutover (N/S traffic going through the NSX-T gateway) until the end of the migration.

  • Fixed Issue 2884070: If there is a mismatch of MTU setting between NSX-T edge uplink and peering router, OSPF neighbor-ship gets stuck in Exstart state.

    During NSX for vSphere to NSX-T migration, the MTU is not automatically migrated so a mismatch can impact dataplane during North/South Edge cutover. OSPF adjacency is stuck in Exstart state.

  • Fixed Issue 2884416: Load balancer status cannot be refreshed timely.

    Wrong load balancer status.

  • Fixed Issue 2885009: Global Manager has additional default Switching Profiles after upgrade.

    No functional impact.

  • Fixed Issue 2885248: For the InterVtep scenario, if EdgeVnics are connected to NSX Portgroups (irrespective of the VLAN on the Edge VM and ESX host), the north-south traffic between workload VMs on the ESX and the Edge stops working because ESX drops packets that are destined for the Edge VTEP.

    The north-south traffic between workload VMs on the ESX and the Edge stops working because ESX drops packets that are destined for the Edge VTEP.

  • Fixed Issue 2885552: If you have created an LDAP Identity Source that uses OpenLDAP, and there is more than one LDAP server defined, only the first server is used.

    If the first LDAP server becomes unavailable, authentication fails, instead of trying the rest of the configured OpenLDAP server(s).

  • Fixed Issue 2886210: During restore, if the VC is down, a Backup/Restore dialog appears telling the user to ensure that VC is up and running; however, the IP address of the VC is not shown.

    The IP address of the VC is not shown during restore for VC connectivity.

  • Fixed Issue 2886971: Groups created on Global Manager are not cleaned up post delete.

    This happens only if that Group is a reference group on a Local Manager site. No functional impact; however, you cannot create another Group with the same policypath as the deleted group.

  • Fixed Issue 2887037: Post Manager to Policy object promotion, NAT rules cannot be updated or deleted.

    This happens when NAT rules are created by a PI (Principal Identity) user on Manager before promotion is triggered. PI user created NAT rules cannot be updated or deleted post Manager to Policy object promotion.

  • Fixed Issue 2889748: Edge Node delete failure if redeployment has failed. Delete leaves stale intent in system, which is displayed on UI.

    Though Edge VM will be deleted, stale edge intent and internal objects will be retained in the system and delete operation will be retried internally. No functionality impact, as Edge VMs are deleted and only intent has stale entries.

  • Fixed Issue 2875962: Upgrade workflow for Cloud Native setups is different from NSX-T 3.1 to NSX-T 3.2.

    Following the usual workflow will erase all CSM data.

  • Fixed Issue 2888658: Significant performance impact in terms of connections per second and throughput observed on NSX-T Gateway Firewall when Malware Detection and Sandboxing feature is enabled.

    Any traffic subject to malware detection experiences significant latencies and possibly connection failures. When malware detection is enabled on the gateway, it will also impact L7FW traffic causing latencies and connection failures.

  • Fixed Issue 2882769: Tags on NSService and NSServiceGroup objects are not carried over after upgrading to NSX-T 3.2.

    There is no functional impact on NSX as Tags on NSService and NSServiceGroup are not being consumed by any workflow. There may be an impact on external scripts that have workflows that rely on Tags on these objects.

  • Fixed Issue 2867243: Effective membership APIs for a Policy Group or NSGroup with no effective members does not return 'results' and 'result_count' fields in API response.

    There is no functional impact.

  • Fixed Issue 2868235: On Quick Start - Networking and Security dialog, visualization shows duplicate VDS when there are multiple PNICs attached to the same VDS.

    Visualization shows duplicate VDS. It may be difficult to find or scroll to the customize host switch section in case the focus is on the visualization graph.

  • Fixed Issue 2870529: Runtime information for Identity Firewall (IDFW) is not available if the exact case of the NetBIOS name is not used when the AD domain is added.

    You cannot easily and readily obtain IDFW current runtime information/status. Current active logins cannot be determined.

  • Fixed Issue 2870645: In response of /policy/api/v1/infra/realized-state/realized-entities API, 'publish_status_error_details' shows error details even if 'publish_status' is a "SUCCESS" which means that the realization is successful.

    There is no functional impact.

  • Fixed Issue 2874236: After upgrade, if you re-deploy only one Public Cloud Gateway (PCG) in an HA pair, the older HA AMI/VHD build is re-used.

    This happens only post upgrade in the first redeployment of PCGs.

  • Fixed Issue 2875385: When a new node joins the cluster, if local users (admin, audit, guestuser1, guestuser2) were renamed to some other name, these local user(s) may not be able to log in.

    Local user is not able to log in.

  • Fixed Issue 2882070: NSGroup members and criteria is not displayed for global groups in Manager API listing.

    No functional impact.

  • Fixed Issue 2862418: The first packet could be lost in Live Traffic Analysis (LTA) when configuring LTA to trace the exact number of packets.

    You cannot see the first packet trace.

  • Fixed Issue 2866885: Event log scraper (ELS) requires the NetBIOS name configured in the AD domain to match that in the AD server.

    User login will not be detected by ELS.

  • Fixed Issue 2878030: Upgrade orchestrator node for Local Manager site change is not showing notification.

    If the orchestrator node is changed after the UC is upgraded and you continue with the UI workflow by clicking any action button (pre-check, start, and so on), you will not see any progress on the upgrade UI. This is only applicable if the Local Manager Upgrade UI is accessed from the Global Manager UI using the site switcher.

  • Fixed Issue 2878325: In the Inventory Capacity Dashboard view for Manager, “Groups Based on IP Sets” attribute count doesn’t include Groups containing IP Addresses that are created from Policy UI.

    In the Inventory Capacity Dashboard view for Manager, the count for “Groups Based on IP Sets” is not correctly represented if there are Policy Groups containing IP Addresses.

  • Fixed Issue 2881281: Concurrently configuring multiple virtual servers might fail for some.

    Client connections to virtual servers may time out.

  • Fixed Issue 2881471: Service Deployment status is not updated when the deployment status switches from failure to success.

    You may see that Service Deployment status remains in Down state forever along with the alarm that was raised.

  • Fixed Issue 2816781: Physical servers cannot be configured with a load-balancing based teaming policy as they support a single VTEP.

    You won't be able to configure physical servers with a load-balancing based teaming policy.

  • Fixed Issue 2879119: When a virtual router is added, the corresponding kernel network interface does not come up.

    Routing on the vrf fails. No connectivity is established for VMs connected through the vrf.

Known Issues

  • Issue 3145439: Rules with more than 15 ports are allowed to be published but fail in later stages.

    You may not know that the rule fails to publish / realize for this reason.

    Workaround: Break the set of ports / port ranges into multiple rules, each with a smaller set of ports / port ranges, and publish those.

  • Kafka logs fill up ephemeral storage, causing Kafka pods to be restarted.

    Currently this is seen only in scale setups, due to SSL failures. There are no specific steps to reproduce it.

    Workaround: Use the following steps to stop logging to file; console logging is always available. Apply the steps in this order to avoid having to manually restart the Kafka pod.

    Step 1: Change the Kafka log4j configmap so that logging goes to the console only.

      a. kubectl edit configmap -n nsxi-platform kafka-log4j-configuration

      b. Remove 'kafkaAppender' from the line 'log4j.rootLogger=INFO, stdout, kafkaAppender'.

      c. The new line should look like: log4j.rootLogger=INFO, stdout

      d. Save and exit.

    Step 2: Add the following entry to the kafka statefulset object to stop logging to the kafkaServer-gc.log files.

      a. kubectl edit statefulset kafka -n nsxi-platform

      b. Go to the "env" section of the container and add the following key-value pair:

        containers
        - env:
          - name: EXTRA_ARGS
            value: -name kafkaServer

      c. Save and exit. This restarts the kafka pods automatically, one by one.

      d. To verify, run "kubectl describe statefulset kafka -n nsxi-platform". You should see the following entry under Environment: EXTRA_ARGS: -name kafkaServer
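
    For step 2, a non-interactive alternative to hand-editing the statefulset is a JSON patch. This is a sketch only; it assumes the Kafka container is the first container in the pod template and already has an "env" list:

      kubectl patch statefulset kafka -n nsxi-platform --type='json' -p='[
        {"op": "add",
         "path": "/spec/template/spec/containers/0/env/-",
         "value": {"name": "EXTRA_ARGS", "value": "-name kafkaServer"}}
      ]'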

  • Issue 3152512: Missing firewall rules after the upgrade from NSX 3.0.x or NSX 3.1.x to NSX 3.2.1 can be observed on the edge node when a rule is attached to more than one gateway/logical router.

    Traffic does not hit the correct rule in the gateway firewall and will be dropped.

    Workaround: Republish the Gateway Firewall rule by making a configuration change (for example, a name change).

  • Issue 3116294: Rule with nested group does not work as expected on hosts.

    Traffic not allowed or skipped correctly.

    Workaround: See knowledge base article 91421.

  • Issues 3046183 and 3047028: After activating or deactivating one of the NSX features hosted on the NSX Application Platform, the deployment status of the other hosted NSX features changes to In Progress. The affected NSX features are NSX Network Detection and Response, NSX Malware Prevention, and NSX Intelligence.

    After deploying the NSX Application Platform, activating or deactivating the NSX Network Detection and Response feature causes the deployment statuses of the NSX Malware Prevention feature and NSX Intelligence feature to change to In Progress. Similarly, activating or deactivating the NSX Malware Prevention feature causes the deployment status of the NSX Network Detection and Response feature to change to In Progress. If NSX Intelligence is activated and you activate NSX Malware Prevention, the status of the NSX Intelligence feature changes to Down and Partially up.

    Workaround: None. The system recovers on its own.

  • Issue 2983892: The Kubernetes pod associated with the NSX Metrics feature intermittently fails to detect the ingress traffic flows.

    When the Kubernetes pod associated with the NSX Metrics feature intermittently fails to detect the ingress traffic flows, the ingress metrics data does not get stored. As a result, the missing data affects the metrics data analysis performed by other NSX features, such as NSX Intelligence, NSX Network Detection and Response, and NSX Malware Prevention.

    Workaround: Ask your infrastructure administrator to perform the following steps.

    1. Log in to the Kubernetes pod associated with the NSX Metrics feature and run the following command at the system prompt.

      kubectl edit deploy/monitor -n nsxi-platform  
    2. Change the network policy from allow-traffic-to-contour to allow-traffic-to-all and save the changes.

      After the Kubernetes pod restarts, the NSX Metrics feature should be collecting and storing the ingress traffic flows data correctly.
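
    As a rough command-line sketch of the steps above (an assumption-laden illustration: it presumes kubectl access to the cluster and that the network policy reference is a field you can locate in the deployment spec; the exact field path is not specified here):

      kubectl edit deploy/monitor -n nsxi-platform
      #   in the editor, change the network policy reference:
      #     allow-traffic-to-contour  ->  allow-traffic-to-all
      kubectl rollout status deploy/monitor -n nsxi-platform    # wait for the monitor pod to restart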

  • Issue 2931403: Network interface validation prevents API users from performing updates.

    An Edge VM network interface can be configured with network resources such as port groups, VLAN logical switches, or segments that are accessible for the specified compute and storage resources. The compute-id resource pool moref in the intent is stale and no longer present in VC after a power outage (the moref of the resource pool changed after VC was restored). API users are blocked from performing update operations.

    Workaround: Redeploy edge and specify valid moref Ids.

  • Issue 3044773: IDPS Signature Download will not work if NSX Manager is configured with HTTPS Proxy with certificate.

    IDPS On-demand and Auto signature download will not work.

    Workaround: Configure an HTTP proxy server (with scheme HTTP). Note: The scheme (HTTP/HTTPS) refers to the connection type established between NSX Manager and the proxy. Alternatively, use the IDPS offline upload process.

  • Issue 2962718: A bond management interface can lose members when Mellanox NICs are used on bare metal edge.

    The management interface lost connection with the edge after a reboot. A Mellanox interface was configured as one of the bond slaves.

    Workaround: Stop the dataplane service before configuring the bond.

  • Issue 2965357: When N-VDS to VDS migration runs simultaneously on more than 64 hosts, the migration fails on some hosts.

    As multiple hosts try to update the vCenter Server simultaneously, the migration fails during the TN_RECONFIG_HOST stage.

    Workaround: Trigger migration on <= 64 hosts.

  • Issue 2978739: Public Cloud Gateway deployments will fail on AWS.

    Deployment of Public Cloud Gateway fails on AWS when roles created with NSX-T 3.2.1 scripts do not have "route53:ListHostedZonesByVPC" permissions.

    Workaround:  Add "route53:ListHostedZonesByVPC" to the CSM role and redeploy the AWS PCG.

  • Issue 2990741: After upgrading to NSX-T 3.2.x, search functionality does not work in the NSX Manager UI.

    NSX Manager UI shows the following error message:

    Search service is currently unavailable, please restart using 'start service search'.

    Workaround: Run the following CLI commands on the impacted NSX Manager nodes:

    • restart service search

    • restart service policy

  • Issue 2991201: After upgrading NSX Global Manager to 3.2.1.x, Service entries fail to realize.

    Existing Distributed Firewall rules that consume these Services do not work as expected.

    Workaround: Do a dummy update of the Service entry by following these steps:

    1. Take a backup.

    2. Run a GET API to retrieve the Service entry details

    3. Update the Service entry without changing the PUT payload as follows:

    PUT https://<manager_ip>/policy/api/v1/infra/services/<service-id>/service-entries/<service-entry-id>

    Example:

    PUT https://<manager_ip>/policy/api/v1/infra/services/VNC/service-entries/VNC

    {

        "protocol_number": 34,

        "resource_type": "IPProtocolServiceEntry",

        "id": "VNC",

        "display_name": "VNC",

        "path": "/infra/services/VNC/service-entries/VNC",

        "relative_path": "VNC",

        "parent_path": "/infra/services/VNC",

        "unique_id": "0c505596-b9ed-4670-a398-e973dc1e57b4",

        "realization_id": "0c505596-b9ed-4670-a398-e973dc1e57b4",

        "marked_for_delete": false,

        "overridden": false,

        "_system_owned": false,

        "_create_time": 1655870419829,

        "_create_user": "admin",

        "_last_modified_time": 1655870419829,

        "_last_modified_user": "admin",

        "_protection": "NOT_PROTECTED",

        "_revision": 0

    }

    A minimal curl sketch of this no-op update, assuming admin credentials; the VNC service shown matches the example above, and the PUT payload is the unmodified body returned by the GET call:

      # 1. Retrieve the current Service entry
      curl -k -u admin:'<password>' \
        "https://<manager_ip>/policy/api/v1/infra/services/VNC/service-entries/VNC" -o entry.json

      # 2. PUT the same payload back to trigger re-realization
      curl -k -u admin:'<password>' -X PUT -H "Content-Type: application/json" \
        -d @entry.json \
        "https://<manager_ip>/policy/api/v1/infra/services/VNC/service-entries/VNC"
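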

  • Issue 2992759: Prechecks fail during NSX Application Platform 4.0.1 deployment on NSX-T versions 3.2.0/3.2.1/4.0.0.1 with upstream K8s v1.24.

    The prechecks fail with the following error message:

    "Kubernetes cluster must have minimum 1 ready master node(s)."

    Workaround: None.

  • Issue 2992964: During NSX-V to NSX-T migration, edge firewall rules with local Security Group cannot be migrated to NSX Global Manager.

    You must migrate the edge firewall rules that use a local Security Group manually. Otherwise, depending on the rule definitions (actions, order, and so on), traffic might get dropped during edge cutover.

    Workaround: See VMware knowledge base article https://kb.vmware.com/s/article/88428.

  • Issue 2994424: URT generated multiple VDS for one cluster if named teaming of transport nodes in the cluster are different.

    Transport nodes with different named teamings were migrated to different VDSes, even if they are in the same cluster.

    Workaround: None.

  • Issue 3004128: Edit Edge Transport Node window does not display uplinks from Named Teaming policies or Link Aggregation Groups that are defined in the uplink profile.

    You cannot use uplinks and map them to Virtual NICs or DPDK fast path interfaces.

    Workaround: None from the UI. You can add or edit the Edge Transport Node using the REST APIs, as sketched below.
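
    A rough curl sketch of the API-based workaround, assuming admin credentials and a placeholder Edge transport node ID; retrieve the node, adjust the uplink-to-interface mapping in the returned body, and PUT it back:

      # Fetch the current Edge transport node configuration
      curl -k -u admin:'<password>' \
        "https://<manager_ip>/api/v1/transport-nodes/<edge-node-id>" -o edge-tn.json

      # Edit edge-tn.json (uplink to interface mapping), then push the update
      curl -k -u admin:'<password>' -X PUT -H "Content-Type: application/json" \
        -d @edge-tn.json \
        "https://<manager_ip>/api/v1/transport-nodes/<edge-node-id>"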

  • Issue 2872892: Inconsistent Host Status at Fabric.

    There is a discrepancy between the cluster's overall status and the host's status in the quick start.

    The cluster is showing "prepared" status and the host is showing "Applying Nsx switch configuration" status, which is incorrect.

    Workaround: Keep refreshing the host state in the UI until the host status reaches a success or failure state.

  • Issue 2936504: The loading spinner appears on top of the NSX Application Platform's monitoring page.

    When you view the NSX Application Platform page after the NSX Application Platform is successfully installed, the loading spinner is initially displayed on top of the page. This spinner might give the impression that there is some connectivity issue occurring when there is none.

    Workaround: As soon as the NSX Application Platform page is loaded, refresh the Web browser page to clear the spinner.

  • Issue 2949575: Powering off one Kubernetes worker node in the cluster puts the NSX Application Platform in a degraded state indefinitely.

    After one Kubernetes worker node is removed from the cluster without first draining the pods on it, the NSX Application Platform is placed in a degraded state. When you check the status of the pods using the kubectl get pod -n nsxi-platform command, some pods display the Terminating status, and have been in that status for a long time.

    Workaround: Manually delete each of the pods that display a Terminating status using the following information.

    1. From the NSX Manager or the runner IP host (Linux jump host from which you can access the Kubernetes cluster), run the following command to list all the pods with the Terminating status.

      kubectl get pod -A | grep Terminating
    2. Delete each pod listed using the following command.

      kubectl delete pod <pod-name> -n <pod-namespace> --force --grace-period=0
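
    As an optional convenience, step 2 can be scripted so that every pod stuck in Terminating is force-deleted in one pass; this is a sketch only and assumes you have confirmed that all such pods should be removed:

      kubectl get pod -A | grep Terminating | \
        while read ns pod rest; do
          kubectl delete pod "$pod" -n "$ns" --force --grace-period=0
        done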

  • Issue 3012313: Upgrading NSX Malware Prevention or NSX Network Detection and Response from version 3.2.0 to NSX ATP 3.2.1 or 3.2.1.1 fails.

    After the NSX Application Platform is upgraded successfully from NSX 3.2.0 to NSX ATP 3.2.1 or 3.2.1.1, upgrading either the NSX Malware Prevention (MP) or NSX Network Detection and Response (NDR) feature fails with one or more of the following symptoms.

    1. The Upgrade UI window displays a FAILED status for NSX NDR and the cloud-connector pods.

    2. For an NSX NDR upgrade, a few pods with the prefix of nsx-ndr are in the ImagePullBackOff state.   

    3. For an NSX MP upgrade, a few pods with the prefix of cloud-connector are in the ImagePullBackOff state.   

    4. The upgrade fails after you click UPGRADE, but the previous NSX MP and NSX NDR functionalities still function the same as before the upgrade was started. However, the NSX Application Platform might become unstable.

    Workaround: See VMware knowledge base article 89418.

  • Issue 3025104: Host shows "Failed" state when restore is performed with a different IP and the same FQDN.

    When restore is performed using a different IP for the MP nodes but the same FQDN, hosts are not able to connect to the MP nodes.

    Workaround: Refresh the DNS cache for the host using the command: /etc/init.d/nscd restart

  • Issue 2989696: Scheduled backups fail to start after NSX Manager restore operation.

    Scheduled backup does not generate backups. Manual backups continue to work.

    Workaround: See Knowledge base article 89059.

  • Issue 2969847: Incorrect DSCP priority.

    DSCP priority from a custom QoS profile is not propagated to host when the value is 0, resulting in traffic prioritization issues.

    Workaround: None.

  • Issue 2663483: The single-node NSX Manager will disconnect from the rest of the NSX Federation environment if you replace the APH-AR certificate on that NSX Manager.

    This issue is seen only with NSX Federation and a single-node NSX Manager cluster.

    Workaround: Single-node NSX Manager cluster deployment is not a supported deployment option; deploy a three-node NSX Manager cluster instead.

  • Issue 2879979: The IKE service may not initiate a new IPsec route-based session after "dead peer detection" has happened due to the IPsec peer being unreachable.

    There could be an outage for the specific IPsec route-based session.

    Workaround: Disable and then re-enable the IPsec session to resolve the problem.

  • Issue 2879734: Configuration fails when same self-signed certificate is used in two different IPsec local endpoints.

    The failed IPsec session will not be established until the error is resolved.

    Workaround: Use a unique self-signed certificate for each local endpoint.

  • Issue 2879133: Malware Prevention feature can take up to 15 minutes to start working.

    When the Malware Prevention feature is configured for the first time, it can take up to 15 minutes for the feature to be initialized. During this initialization, no malware analysis will be done, but there is no indication that the initialization is occurring.

    Workaround: Wait 15 minutes.

  • Issue 2868944: UI feedback is not shown when migrating more than 1,000 DFW rules from NSX for vSphere to NSX-T Data Center, but sections are subdivided into sections of 1,000 rules or fewer.

    UI feedback is not shown.

    Workaround: Check the logs.

  • Issue 2865273: Advanced Load Balancer (Avi) Service Engine won't connect to the Avi Controller if there is a DFW rule to block ports 22, 443, 8443, and 123 prior to migration from NSX for vSphere to NSX-T Data Center.

    The Avi Service Engine is not able to connect to the Avi Controller.

    Workaround: Add explicit DFW rules to allow ports 22, 443, 8443 and 123 for SE VMs or exclude SE VMs from DFW rules.

  • Issue 2864929: Pool member count is higher when migrated from NSX for vSphere to Avi Load Balancer on NSX-T Data Center.

    You will see a higher pool member count. Health monitor will mark those pool members down but traffic won't be sent to unreachable pool members.

    Workaround: None.

  • Issue 2719682: Computed fields from the Avi controller are not synced to the intent on Policy, resulting in discrepancies in the data shown on the Avi UI and the NSX-T UI.

    Computed fields from the Avi controller are shown as blank on the NSX-T UI.

    Workaround: Use the app switcher to check the data in the Avi UI.

  • Issue 2848614: When joining an MP to an MP cluster where publish_fqdns is set on the MP cluster, and where the forward or reverse lookup entry is missing in the external DNS server or the DNS entry is missing for the joining node, forward or reverse alarms are not generated for the joining node.

    Forward/reverse alarms are not generated for the joining node even though the forward/reverse lookup entry is missing in the DNS server or the DNS entry is missing for the joining node.

    Workaround: Configure the external DNS server for all Manager nodes with forward and reverse DNS entries.

  • Issue 2871585: Removal of a host from the DVS and DVS deletion are allowed for DVS versions earlier than 7.0.3 after the NSX Security on vSphere DVPortgroups feature is enabled on the clusters using the DVS.

    You may have to resolve any issues in transport node or cluster configuration that arise from a host being removed from DVS or DVS deletion.

    Workaround: None.

  • Issue 2870085: Security policy level logging to enable/disable logging for all rules is not working.

    You will not be able to change the logging of all rules by changing the "logging_enabled" setting of the security policy.

    Workaround: Modify each rule to enable/disable logging.

  • Issue 2866682: In Microsoft Azure, when accelerated networking is enabled on SUSE Linux Enterprise Server (SLES) 12 SP4 workload VMs with the NSX Agent installed, the ethernet interface does not obtain an IP address.

    The VM agent doesn't start and the VM becomes unmanaged.

    Workaround: Disable Accelerated networking.

  • Issue 2884939: NSX-T Policy API results in error: Client 'admin' exceeded request rate of 100 per second (Error code: 102).

    The NSX rate limiting of 100 requests per second is reached when a large number of virtual services (VS) are migrated from NSX for vSphere to NSX-T ALB, and all APIs are temporarily blocked.

    Workaround: Update Client API rate limit to 200 or more requests per second.

    Note: A fix is available in the Avi 21.1.4 release.
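
    One way to raise the limit is through the NSX Manager node HTTP service API; this is a sketch only and assumes the client_api_rate_limit property is exposed on your NSX version:

      # Inspect the current API service settings (service_properties includes client_api_rate_limit)
      curl -k -u admin:'<password>' "https://<manager_ip>/api/v1/node/services/http" -o http-service.json

      # Raise client_api_rate_limit to 200 or more in http-service.json, then apply it
      curl -k -u admin:'<password>' -X PUT -H "Content-Type: application/json" \
        -d @http-service.json "https://<manager_ip>/api/v1/node/services/http"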

  • Issue 2792485: NSX manager IP is shown instead of FQDN for manager installed in vCenter.

    NSX-T UI Integrated in vCenter shows NSX manager IP instead of FQDN for installed manager.

    Workaround: None.

  • Issue 2888207: Unable to reset local user credentials when vIDM is enabled.

    You are unable to change local user passwords while vIDM is enabled.

    Workaround: vIDM configuration must be (temporarily) disabled, the local credentials reset during this time, and then integration re-enabled.

  • Issue 2885330: Effective member not shown for AD group.

    Effective members of AD group not displayed. No datapath impact.

    Workaround: None.

  • Issue 2877776: "get controllers" output may show stale information about controllers that are not the master when compared to the controller-info.xml file.

    This CLI output is confusing.

    Workaround: Restart nsx-proxy on that TN.

  • Issue 2874995: LCores priority may remain high even when not used, rendering them unusable by some VMs.

    Performance degradation for "Normal Latency" VMs.

    Workaround: There are two options.

    • Reboot the system.

    • Remove the high priority LCores and then recreate them. They will then default back to normal priority LCores.

  • Issue 2854139: Continuous addition/removal of BGP routes into RIB for a topology where Tier0 SR on edge has multiple BGP neighbors and these BGP neighbors are sending ECMP prefixes to the Tier0 SR.

    Traffic drop for the prefixes that are getting continuously added/deleted.

    Workaround: Add an inbound routemap that filters the BGP prefix which is in the same subnet as the static route nexthop.

  • Issue 2853889: When creating EVPN Tenant Config (with vlan-vni mapping), Child Segments are created, but the child segment's realization status gets into failed state for about 5 minutes and recovers automatically.

    It will take 5 minutes to realize the EVPN tenant configuration.

    Workaround: None. Wait 5 minutes.

  • Issue 2690457: When joining an MP to an MP cluster where publish_fqdns is set on the MP cluster and where the external DNS server is not configured properly, the proton service may not restart properly on the joining node.

    The joining manager will not work and the UI will not be available.

    Workaround: Configure the external DNS server with forward and reverse DNS entries for all Manager nodes.

  • Issue 2490064: Attempting to disable VMware Identity Manager with "External LB" toggled on does not work.

    After enabling VMware Identity Manager integration on NSX with "External LB", if you attempt to then disable integration by switching "External LB" off, after about a minute, the initial configuration will reappear and overwrite local changes.

    Workaround: When attempting to disable vIDM, do not toggle the External LB flag off; only toggle off vIDM Integration. This will cause that config to be saved to the database and synced to the other nodes.

  • Issue 2668717: Intermittent traffic loss might be observed for E-W routing between the vRA created networks connected to segments sharing Tier-1.

    In cases where vRA creates multiple segments and connects to a shared ESG, migration from NSX for vSphere to NSX-T will convert such a topology to a shared Tier-1 connected to all segments on the NSX-T side. During the host migration window, intermittent traffic loss might be observed for E-W traffic between workloads connected to the segments sharing the Tier-1.

    Workaround: None.

  • Issue 2355113: Workload VMs running RedHat and CentOS on Azure accelerated networking instances are not supported.

    In Azure, when accelerated networking is enabled on RedHat or CentOS based operating systems with the NSX Agent installed, the ethernet interface does not obtain an IP address.

    Workaround: Disable accelerated networking for RedHat and CentOS based operating systems.

  • Issue 2283559: /routing-table and /forwarding-table MP APIs return an error if the edge has 65k+ routes for RIB and 100k+ routes for FIB.

    If the edge has 65k+ routes for RIB and 100k+ routes for FIB, the request from MP to Edge takes more than 10 seconds and results in a timeout. This is a read-only API and has an impact only if they need to download the 65k+ routes for RIB and 100k+ routes for FIB using API/UI.

    Workaround: There are two options to fetch the RIB/FIB. These APIs support filtering options based on network prefixes or route type; use these options to download the routes of interest. Alternatively, use the CLI if the entire RIB/FIB table is needed; the CLI does not time out.

  • Issue 2684574: If the edge has 6K+ routes for Database and Routes, the Policy API times out.

    These Policy APIs for the OSPF database and OSPF routes return an error if the edge has 6K+ routes:

    • /tier-0s/<tier-0s-id>/locale-services/<locale-service-id>/ospf/routes

    • /tier-0s/<tier-0s-id>/locale-services/<locale-service-id>/ospf/routes?format=csv

    • /tier-0s/<tier-0s-id>/locale-services/<locale-service-id>/ospf/database

    • /tier-0s/<tier-0s-id>/locale-services/<locale-service-id>/ospf/database?format=csv

    If the edge has 6K+ routes for Database and Routes, the Policy API times out. This is a read-only API and has an impact only if the API/UI is used to download the 6K+ routes for OSPF routes and database.

    Workaround: Use the CLI commands to retrieve the information from the edge.

  • Issue 2574281: Policy will only allow a maximum of 500 VPN Sessions.

    NSX claims support of 512 VPN sessions per edge in the large form factor; however, because Policy does auto-plumbing of security policies, Policy will only allow a maximum of 500 VPN sessions. Upon configuring the 501st VPN session on Tier0, the following error message is shown: {'httpStatus': 'BAD_REQUEST', 'error_code': 500230, 'module_name': 'Policy', 'error_message': 'GatewayPolicy path=[/infra/domains/default/gateway-policies/VPN_SYSTEM_GATEWAY_POLICY] has more than 1,000 allowed rules per Gateway path=[/infra/tier-0s/inc_1_tier_0_1].'}

    Workaround: Use Management Plane APIs to create additional VPN Sessions.

  • Issue 2839782: Unable to upgrade from NSX-T 2.4.1 to 2.5.1 because the CRL entity is large, and Corfu imposes a size limit in 2.4.1, thereby preventing the CRL entity from being created in Corfu during the upgrade.

    Unable to upgrade.

    Workaround: Replace certificate with a certificate signed by a different CA.

  • Issue 2838613: For ESX versions earlier than 7.0.3, NSX security functionality is not enabled on a VDS upgraded from version 6.5 to a higher version after security installation on the cluster.

    NSX security features are not enabled on the VMs connected to a VDS upgraded from 6.5 to a higher version (6.6+) where the NSX Security on vSphere DVPortgroups feature is supported.

    Workaround: After VDS is upgraded, reboot the host and power on the VMs to enable security on the VMs.

  • Issue 2799371: IPSec alarms for L2 VPN are not cleared even though L2 VPN and IPSec sessions are up.

    No functional impact except that unnecessary open alarms are seen.

    Workaround: Resolve alarms manually.

  • Issue 2584648: Switching primary for T0/T1 gateway affects northbound connectivity.

    Location failover time causes disruption for a few seconds and may affect location failover or failback test.

    Workaround: None.

  • Issue 2561988: All IKE/IPSEC sessions are temporarily disrupted.

    Traffic outage will be seen for some time.

    Workaround: Modify the local endpoints in phases instead of all at the same time.

  • Issue 2491800: AR channel SSL certificates are not periodically checked for their validity, which could lead to using an expired/revoked certificate for an existing connection.

    The connection would be using an expired/revoked SSL certificate.

    Workaround: Restart the APH on the Manager node to trigger a reconnection.

  • Issue 2558576: Global Manager and Local Manager versions of a global profile definition can differ and might have an unknown behavior on Local Manager.

    Global DNS, session, or flood profiles created on Global Manager cannot be applied to a local group from the UI, but can be applied from the API. Hence, an API user can accidentally create profile binding maps and modify a global entity on Local Manager.

    Workaround: Use the UI to configure the system.

  • Issue 2639424: Remediating a Host in a vLCM cluster with Host-based Deployment will fail after 95% Remediation Progress is completed.

    The remediation progress for a Host will be stuck at 95% and then fail after the 70-minute timeout is completed.

    Workaround: See VMware knowledge base article 81447.

  • Issue 2950206: CSM is not accessible after MPs are upgraded and before CSM upgrade.

    When the MP is upgraded, the CSM appliance is not accessible from the UI until the CSM appliance is upgraded completely. NSX services on CSM are down at this time. This is a temporary state where CSM is inaccessible during an upgrade. The impact is minimal.

    Workaround: This is expected behavior. You must upgrade the CSM appliance to access the CSM UI and ensure all services are running.

  • Issue 2945515: NSX tools upgrade in Azure can fail on Redhat Linux VMs.

    By default, NSX tools are installed in the /opt directory. However, during NSX tools installation, the default path can be overridden with the "--chroot-path" option passed to the install script.

    Insufficient disk space on the partition where NSX tools is installed can cause NSX tools upgrade to fail.

    Workaround: Increase the partition size on which NSX tools is installed and then initiate NSX tools upgrade. Steps for increasing disk space are described in https://docs.microsoft.com/en-us/azure/virtual-machines/linux/resize-os-disk-gpt-partition page.

  • Issue 2882154: Some of the pods are not listed in the output of "kubectl top pods -n nsxi-platform".

    The output of "kubectl top pods -n nsxi-platform" will not list all pods for debugging. This does not affect deployment or normal operation. For certain issues, debugging may be affected.  There is no functional impact. Only debugging might be affected.

    Workaround: There are two workarounds:

    • Workaround 1: Make sure the Kubernetes cluster comes up with version 0.4.x of the metrics-server pod before deploying NAPP platform. This issue is not seen when metrics-server 0.4.x is deployed.

    • Workaround 2: Delete the metrics-server instance deployed by the NAPP charts and deploy upstream Kubernetes metrics-server 0.4.x.
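
    A sketch of workaround 2, assuming cluster-admin access; the release version shown below is one example of an upstream 0.4.x metrics-server build, and the name/namespace of the chart-deployed instance may differ in your environment:

      # Delete the metrics-server instance deployed by the NAPP charts (adjust name/namespace as needed), then:
      kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.6/components.yaml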

  • Issue 2871440: Workloads secured with NSX Security on vSphere dvPortGroups lose their security settings when they are vMotioned to a host connected to an NSX Manager that is down.

    For clusters installed with the NSX Security on vSphere dvPortGroups feature, VMs that are vMotioned to hosts connected to a downed NSX Manager do not have their DFW and security rules enforced. These security settings are re-enforced when connectivity to NSX Manager is re-established.

    Workaround: Avoid vMotion to affected hosts when NSX Manager is down. If other NSX Manager nodes are functioning, vMotion the VM to another host that is connected to a healthy NSX Manager.

  • Issue 2898020: The error 'FRR config failed:: ROUTING_CONFIG_ERROR (-1)' is displayed on the status of transport nodes.

    The edge node rejects a route-map sequence configured with a deny action that has more than one community list attached to its match criteria. If the edge nodes do not have the admin intended configuration, it results in unexpected behavior.

    Workaround: None

  • Issue 2910529: Edge loses IPv4 address after DHCP allocation.

    After the Edge VM is installed and receives an IP from the DHCP server, it loses the IP address within a short time and becomes inaccessible. This is because the DHCP server does not provide a gateway; hence, the Edge node loses the IP.

    Workaround: Ensure that the DHCP server provides the proper gateway address. If not, perform the following steps:

    1. Log in to the console of Edge VM as an admin.

    2. Stop service dataplane.

    3. Set interface <mgmt intf> dhcp plane mgmt.

    4. Start service dataplane.
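
    Collected as a single console session for reference (a sketch only; eth0 stands in for the actual management interface name, which may differ in your environment):

      stop service dataplane
      set interface eth0 dhcp plane mgmt
      start service dataplane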

  • Issue 2942900: The identity firewall does not work for event log scraping when Active Directory queries time out.

    The identity firewall issues a recursive Active Directory query to obtain the user's group information. Active Directory queries can time out with a NamingException 'LDAP response read timed out, timeout used: 60000 ms'. Therefore, firewall rules are not populated with event log scraper IP addresses.

    Workaround: To improve recursive query times, Active Directory admins may organize and index the AD objects.

  • Issue 2958032: If you are using NSX-T 3.2 or upgrading to an NSX-T 3.2 maintenance release, the file type is not shown properly and is truncated at 12 characters on the Malware Prevention dashboard.

    On the Malware Prevention dashboard, when you click to see the details of the inspected file, you will see incorrect data because the file type will be truncated at 12 characters. For example, for a file with File Type as WindowsExecutableLLAppBundleTarArchiveFile, you will only see WindowsExecu as File Type on Malware Prevention UI.

    Workaround: Do a fresh NAPP installation with an NSX-T 3.2 maintenance build instead of upgrading from NSX-T 3.2 to an NSX-T 3.2 maintenance release.

  • Issue 2954520: When Segment is created from policy and Bridge is configured from MP, detach bridging option is not available on that Segment from UI.

    You will not be able to detach or update bridging from UI if Segment is created from policy and Bridge is configured from MP.

    If a Segment is created from the policy side, you are advised to configure bridging only from the policy side. Similarly, if a Logical Switch is created from the MP side, you should configure bridging only from the MP side.

    Workaround: You need to use APIs to remove bridging:

    1. Update the concerned LogicalPort and remove the attachment:

       PUT https://<mgr-ip>/api/v1/logical-ports/<logical-port-id> (include the request header X-Allow-Overwrite: true)

    2. Delete the BridgeEndpoint:

       DELETE https://<mgr-ip>/api/v1/bridge-endpoints/<bridge-endpoint-id>

    3. Delete the LogicalPort:

       DELETE https://<mgr-ip>/api/v1/logical-ports/<logical-port-id>
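
    A curl sketch of the three calls above, assuming admin credentials and placeholder IDs; the PUT body is the logical port as returned by a prior GET, edited to remove the attachment:

      # 1. Update the logical port with the attachment removed
      curl -k -u admin:'<password>' -X PUT -H "Content-Type: application/json" \
        -H "X-Allow-Overwrite: true" -d @logical-port.json \
        "https://<mgr-ip>/api/v1/logical-ports/<logical-port-id>"

      # 2. Delete the bridge endpoint
      curl -k -u admin:'<password>' -X DELETE \
        "https://<mgr-ip>/api/v1/bridge-endpoints/<bridge-endpoint-id>"

      # 3. Delete the logical port
      curl -k -u admin:'<password>' -X DELETE \
        "https://<mgr-ip>/api/v1/logical-ports/<logical-port-id>"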

  • Issue 2889482: The wrong save confirmation is shown when updating segment profiles for discovered ports.

    The Policy UI allows editing of discovered ports but does not send the updated binding map for port update requests when segment profiles are updated. A false positive message is displayed after clicking Save. Segments appear to be updated for discovered ports, but they are not.

    Workaround: Use MP API or UI to update the segment profiles for discovered ports.

  • Issue 2919218: Selections made to the host migration are reset to default values after the MC service restarts.

    After the restart of the MC service, all the selections relevant to host migration such as enabling or disabling clusters, migration mode, cluster migration ordering, etc., that were made earlier are reset to default values.

    Workaround: Ensure that all the selections relevant to host migration are performed again after the restart of the MC service.
