
VMware NSX 4.2.0 | 23 July 2024 | Build 24207721

Check for additions and updates to these release notes.

What's New

NSX 4.2.0 provides a variety of new features, offering new functionalities for virtualized networking and security for private, public, and multi-clouds. Highlights include new features and enhancements in the following focus areas:

  • Easy adoption of virtual networking for virtual machines connected to VLAN topologies.

  • IPv6 only access to the NSX Manager and Edge Nodes.

  • Improved availability of network connectivity with dual DPU support, TEP grouping, improved detection of failure events, and prioritization of failure-detection packets.

  • Additional support for events, alarms, and other operational features.

  • Enhancements to multi-tenancy and VPCs.

  • Firewall Rule and Group Scale increase at both Local Manager and Global Manager.

  • Availability of IDS/IPS function on T0 for Gateway Firewall.

  • Distributed Malware Prevention available on stretched VSAN Clusters.

  • Added Packet Capture for threat triaging and forensics in NDR for IDS/IPS events.

In addition, many other capabilities are added in every area of the product. More details are available below.

Important Advisories About NSX 4.2.0

  • Environments using L7 DFW rules or Security Intelligence must install or upgrade to NSX 4.2.0.1 immediately. See known issue 3422772 and review KB article 374611.

  • Users might get elevated privileges across the role bindings configured for LDAP groups on NSX. Elevated privileges can also occur when group names are created with lowercase letters only. This affects both LDAP and vIDM.

    Note:

    This applies to both Greenfield and Brownfield NSX 4.2.0 environments.

    Note:

    Users planning to install VCF 5.2 should install VCF 5.2.1 instead. Existing NSX deployments running a version earlier than 4.2.0 should upgrade directly to 4.2.1, where this defect is fixed.

Networking

Layer 2 Networking

  • TEP Groups leverage multiple TEPs on an Edge Node more effectively by performing flow-based load balancing of traffic over the TEPs. This feature offers higher bidirectional North-South throughput with a dedicated Edge Node for a Tier-0 gateway.

  • Support for MPLS and DFS Traffic introduces Enhanced Data Path (EDP) and Edge Node support for improved traffic throughput for MPLS and DFS (a Nokia proprietary protocol) traffic.

  • Easy Virtual Networking Adoption offers a tool to help adopt the full benefits of network virtualization. The tool provides a step-by-step approach in the adoption of overlay networks, with validation before and after each step to ensure a smooth transition.

  • Combined Security-only and Networking and Security VIBs allows the configuration of Distributed Firewall on DVPG and Network Virtualization on the same ESX host. It also offers the ability for NSX to discover the existing DVPGs and enforce segment profiles and Distributed Firewall rules on them.

  • Datapath Observability enhancements introduce new datapath monitoring capability through the API and UI. With NSX 4.2, you can collect debug metrics and counters per transport node and per segment without having to log in to the ESX Transport Nodes. NSX 4.2 also includes a method to periodically collect debug counters in the datapath.

  • Support of Unknown Ethertypes introduces VDS support for forwarding of traffic of any ethertype, ensuring proprietary double VLAN-tagged frames are forwarded.

  • Support for a non-preemptive active-standby teaming policy in addition to preemptive. This ensures no traffic disruption when the former active link comes back online.

  • Enhanced Data Path improvements offer better performance for port mirroring and multicast capabilities.

  • Improved Switch Flexibility allows VDS mode changes from Standard to Enhanced Datapath Standard or Enhanced Datapath Performance and vice-versa without removing the rest of the switch configuration. There will be downtime during mode change operation. For details, refer to the NSX Administration Guide.

DPU-based Acceleration

  • Dual DPU support provides a high availability configuration (Active/Standby), where the failure of a single DPU does not impact the server host. When the active DPU fails, all traffic riding on it fails over to the standby DPU.

  • Dual DPU support where both DPUs are active and can be used without high availability. If one of these DPUs fails, traffic traveling over that DPU is not protected and will not move to the second DPU.

Layer 3 Networking

  • IPv6-only Support for NSX Managers and Edge Nodes allows IPv6-only deployment of NSX infrastructure without the need for IPv4 addresses on management and control planes.

  • EVPN Unique Route Distinguisher per Tier-0 SR (Edge Node) and VRF introduces the ability to define a specific BGP EVPN Route Distinguisher per Tier-0 router (Edge Node) and per VRF for active/active Tier-0 Gateways. External routers will leverage ECMP routing towards NSX gateways when route reflectors are in use.

  • BGP and BFD Session Failure Troubleshooting introduces advanced logging and debugging capabilities for BGP and BFD session failures. This provides additional visibility into the BGP and BFD message exchange upon a session failure, helping to determine and fix the root cause of BGP and BFD failures and flaps.

Edge Platform

  • Edge Platform Alarms improve visibility of the Edge platform and services running on top of the Edge.

    • Edge long-running packet capture creates an alarm if packet capture runs too long. 

    • Edge Agent Down helps to identify Edge Agent liveness.

    • Tunnels Down alarm helps identify more quickly when a tunnel on the Edge is down.

    • Bridge Loop event is raised when a loop between the bridged network and the NSX network is detected.

  • VMware NSX Load Balancer Support for High Security Cipher adds the TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 cipher in the default-balanced-client-ssl-profile.

  • Background: VMware NSX 4.2 moves to OpenSSL 3.0 for security considerations. OpenSSL 3.0 has stricter requirements for the cipher suite, SSL protocol, and certificates used in the SSL configuration.

    OpenSSL 3.0 validates:

    • Certificate

    • Cipher suite

    • SSL protocol

    Due to these changes, brownfield customers who have configured Load Balancers (LB) with unsupported certificates, cipher suites, or SSL protocols may encounter warning messages during the upgrade to VMware NSX 4.2. For more details, refer to knowledge base articles 368005 and 368006.
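
    As a quick local sanity check (illustrative only, not an NSX procedure), Python's ssl module can confirm whether your OpenSSL build offers the new cipher; the IANA name TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 corresponds to the OpenSSL name ECDHE-RSA-AES256-GCM-SHA384:

    ```python
    import ssl

    # List the cipher suites the local OpenSSL build enables for TLS clients.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    names = {c["name"] for c in ctx.get_ciphers()}

    # OpenSSL-style name of TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384.
    print("ECDHE-RSA-AES256-GCM-SHA384" in names)
    ```

    A client negotiating against the NSX Load Balancer would select this suite during the TLS handshake if both sides permit it.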

  • BM Edge control plane LACP PDU packets prioritization prioritizes the control plane packets on a BM Edge to help stabilize the port-channel when high-throughput traffic is forwarded.

VPN

VPN Certificate Enhancement offers certificate management with isolation between Projects, as well as VPN on Project Tier-1 Gateway.

Container Networking

Antrea Egress adds NSX Child Segment support for Antrea Egress. This allows the use of a different network in Egress IP Pool configuration than the Node network.

Security

Gateway Firewall

  • IDS/IPS feature on T0 Gateway allows creation of IDS/IPS rules on the Tier-0 Gateway. 

  • Increased scale for vDefend Gateway Firewall provides several updates to the maximum scale supported by Edge as well as the Tier-0 and Tier-1 Gateway. For details, refer to the NSX 4.2 Configuration Maximum tool at https://configmax.esp.vmware.com/home.

Distributed Firewall

  • Increased Scale for vDefend Distributed Firewall Rules and Grouping offers increased scale for multiple objects in Grouping along with higher DFW rules scale. This increased scale is available on the new XL form factor of NSX Manager. For details, refer to the NSX 4.2 Configuration Maximum tool at https://configmax.esp.vmware.com/home.

  • Kubernetes Network Policies visibility - With NSX 4.2, users can promote standard K8s NetworkPolicies into Antrea policies managed by NSX. This feature is available via API only.

  • UI enhancements for grouping membership displays Transport Nodes and Physical Servers that match the grouping criteria in the 'Effective Members' UI for a Group. 

Network Detection and Response (NDR)

  • NDR on-premises supports the correlation of all NDR campaigns to occur on-premises, eliminating the need to send detection events to a VMware cloud backend for correlation. This addresses concerns around data privacy and data sovereignty.

  • Logging detection events and campaigns to SIEM introduces the option to log individual detection events and correlated campaigns to a SIEM (Security Information and Event Management) platform. This improves the ability of security operators to triage and respond to security incidents from a central log collection tool. 

  • PCAPs (packet captures) for IDPS events are available for export and download in the NDR UI. PCAPs are valuable forensic evidence, aiding in the investigation of security incidents, breaches, and network anomalies.

  • Malware events from the Distributed Firewall are included in NDR campaign correlation allowing for improved correlation and the creation of more effective campaigns.

Intrusion Detection & Prevention System (IDPS)

  • Network Oversubscription Support - Oversubscription settings for the IDS/IPS system now take into account networking oversubscription as part of the criteria for bypass or dropping traffic due to IDS/IPS oversubscription. Requires ESXi 8.0 Update 3.

  • Updated engine for IDPS on NSX Edge for better performance and improved detection capabilities.

  • Support for IDPS at Tier-0 on the NSX Edge.

Distributed Malware Detection and Prevention

  • Advanced Threat Prevention (ATP) support in VCF with vSAN stretched cluster extends ATP to Sub-TNP. This enables VCF users to deploy vSAN stretched clusters across distant data centers, varied Availability Zones (AZ), or separate rack rows.

  • Simplification of SVM deployment includes the SVM bits now residing on the NAPP platform. This removes the requirement for users to supply an external web server. The overall UI workflow is also significantly improved. 

  • Support for OneNote file type adds Malware Prevention detection of OneNote files. The malware prevention feature has been enhanced to identify and block malicious OneNote files, which are commonly used to embed malicious payloads.

Platform and Operations

Automation

  • Terraform Support for Install / Upgrade extends support to NSX fabric management, including but not limited to NSX Manager creation and clustering, Edge creation and preparation, host preparation, transport zone creation, and user and upgrade management.

  • Terraform Support for GRE tunnels adds the ability to configure GRE tunnels.

Multi Tenancy

  • Ability to create a Project and VPC without default firewall rules using user preferences. This gives you the option to deploy multi-tenant environments in VCF where the distributed firewall would be unavailable, or to rely on global default rules outside of the Project/VPC.

  • Ability to create VPNs under Project offers VPN configuration creation for Tier-1 gateways under the Project. This includes the ability to manage certificates for the tenant from the project context.

Installation and Upgrade

  • NSX Upgrades

    • Direct upgrades from NSX 3.2.x to NSX 4.2.0 are now available. For a list of supported upgrade paths, refer to the VMware Upgrade Matrix.

    • Additional memory checks are also available to preempt potential issues with NSX upgrades, especially NSX in-place upgrades.

  • NSX Certificate Management introduces operational ease via NSX's revamped certificate management capabilities: certificate replacement (single or multiple), renewal of certificates, automatic notifications for expiring certificates, revamped user experience and much more available in the NSX UI. 

  • Support for Standalone Hosts adds support for vLCM-enabled standalone hosts. Leverage the same unified ESX + NSX lifecycle capabilities on standalone hosts (hosts that are not part of a cluster) as are available at the cluster level with vLCM.

Operations and Monitoring

  • NSX Manager Available in XL Size introduces maximum scale supported by NSX with the XL form factor for the NSX Manager. For details on the scale support, refer to the VMware Configuration Maximums tool. For details on installation steps, refer to the NSX Installation Guide.

  • Dynamic Online Diagnostic System Runbooks introduces additional Online Diagnostic System Runbooks enabling customers to debug NSX at runtime using predefined runbooks. Get the runbooks as soon as they are available, without having to wait for the next NSX release. For details, refer to the "Debugging NSX at Runtime" section in the NSX Administration Guide.

  • Additional Metrics in NSX API offer Network Datapath & Enhanced Datapath Host Switch counter metrics available via the NSX API.

  • Packet Capture with Trace in Enhanced Datapath Host Switch supports the option to trace the path that packets traverse in the network stack for latency analysis and packet drop locations using the pktcap-uw tool with the trace option.
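
    On an ESXi Transport Node, such a trace capture might be invoked as follows (a sketch only; the workload IP and output path are placeholders, and the exact option set should be confirmed against the pktcap-uw documentation for your ESXi version):

    ```shell
    # Trace the path packets take through the network stack for a given
    # workload IP and save the capture, with trace points, in pcapng format.
    pktcap-uw --trace --ip 192.168.10.5 --ng -o /tmp/trace.pcapng
    ```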

  • Reduced Support Bundle Size offers much quicker support bundle downloads due to advanced compression algorithms that provide the same debugging content in a much smaller size.

Platform Security

  • TLS 1.3 for Internal Communications provides support of TLS 1.3 for internal communications between NSX components.

Scale

Federation

  • XL Form Factor for Global Manager offers an additional size for Global Manager VM to prepare for better future scale.

  • VMware vSphere with Tanzu (Workload Management) support in NSX Federation.

    • vSphere with Tanzu (Workload Management) can now be deployed in NSX Federation environments. 

    • The vSphere with Tanzu deployment must be done on NSX local segments and local dedicated T0/T1 from NSX Local Manager.

    • The vSphere with Tanzu deployment cannot be done on NSX GM stretched or non-stretched segments nor GM T0/T1 stretched or non-stretched.

    • Make sure to configure NSX Federation before deploying Workload Management to avoid any potential NSX import issues to NSX Global Manager.

Security Intelligence

  • Security Intelligence enhances Firewall Operational Analytics with dashboards for Top Talkers, Top Traffic Inspected correlated with assets (VMs, IPs, Containers), Top Port Protocols.

  • Publication of Configuration Maximums for scale of intelligence deployment.

  • Providing users with Flow Size Estimator for sizing intelligence deployments.

NSX Application Platform (NAPP)

This release adds support for deploying NSX Application Platform behind a proxy and improves on Private CA / Self-Signed Certificate management during installation. For more information, refer to the NSX Application Platform Release Notes.

Feature and API Deprecations, Behavior Changes

1. Feature Deprecations

  • End of Support of Overlay on Physical Servers

    • NSX supports the deployment of NSX agents on physical servers to provide security and overlay connectivity.

    • VMware no longer supports overlay connectivity for physical servers in the NSX 4.2.0 release.

    • Security for physical servers will remain fully supported, with VLAN connectivity only. The corresponding NSX API parameters are marked with a "deprecated" label in the NSX API Guide.

    • We recommend that new NSX physical server deployments take advantage of Edge bridging to provide L2 connectivity to overlay segments. For more details, refer to the "Edge Bridging: Extending overlay segments to VLAN" topic in the NSX Administration Guide.

    • Support for overlay for physical servers for customers using NSX-T Data Center 3.x will remain for the duration of the NSX-T Data Center 3.x release series. Support for physical servers for customers using NSX 4.0.x and 4.1.x will remain for the duration of the NSX 4.0.x and 4.1.x release series. Details for both are described in the VMware Product Lifecycle Matrix.

  • End of Availability announcement  for NSX Network Introspection for Security

    VMware will discontinue the Network Introspection for Security feature after the last NSX 4.x release. VMware will continue to support the Network Introspection for Security feature for customers with active contracts until the end of general support for the NSX 4.2.x release, or until October 11, 2027. The end of availability announcement and more details are provided in the KB article (https://kb.vmware.com/s/article/97043) published on March 21, 2024. NSX 4.2.x end of general support dates are available in the VMware Product Lifecycle Matrix.

    We are providing advance notice to allow existing deployments that use vendors with the Network Introspection for Security feature to migrate to alternate solutions.

    VMware offers VMware Firewall with Advanced Threat Prevention for lateral security, threat prevention, and segmentation.


  • Deprecation announcement for NSX Manager APIs and NSX Advanced UIs

  • NSX has two methods of configuring logical networking and security:

  1. Manager mode - The Manager API contains URIs that begin with /api.

  2. Policy mode - The Policy API contains URIs that begin with /policy/api.

  • VMware intends to remove support of the NSX Manager APIs and NSX Advanced UIs in an upcoming NSX major or minor release, which will be generally available no sooner than one year from the date of the original announcement made on 12/16/2021. NSX Manager APIs that are planned to be removed are marked with "deprecated" in the NSX Data Center API Guide, with guidance on replacement APIs.

  • As mentioned, this is a reminder that those APIs have been deprecated since NSX 3.2 on 12/16/2021 (NSX 3.2 Release Notes); we strongly recommend not using NSX Manager APIs for logical networking and security (in favor of NSX Policy APIs) in preparation for future removal. Those APIs are marked deprecated in the NSX API Guide, and usage of deprecated APIs can be identified in the NSX logs.
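
    The two URI styles can be contrasted with hypothetical calls like the following (hostname and credentials are placeholders; consult the NSX API Guide for the authoritative endpoint list and the recommended replacements):

    ```shell
    # Deprecated Manager API: URIs begin with /api
    curl -k -u admin 'https://nsx-mgr.example.com/api/v1/logical-switches'

    # Recommended Policy API: URIs begin with /policy/api
    curl -k -u admin 'https://nsx-mgr.example.com/policy/api/v1/infra/segments'
    ```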

  • Deprecation announcement for NSX embedded (NSXe)

    NSX embedded (NSXe) integrates NSX with VMware vCenter to enable the consumption of NSX from VMware vCenter. As a VI admin, you can install NSX Manager and NSX for virtual networking or security-only use cases by installing and configuring the NSX plugin in VMware vCenter.

    VMware will discontinue the NSX embedded (NSXe) feature after the last NSX 4.x release.

    We recommend that new deployments of NSX take advantage of the installation of NSX Managers through the OVF file provided on the Broadcom download portal. For more details, refer to the NSX Installation Guide.

  • Deprecation announcement for some parameters in Switch IPFIX

    VMware intends to deprecate support for the following two parameters in Switch IPFIX flows in an upcoming release:

    • Idle-timeout: After a flow has been idle for a certain period, an automatic record for the idle timeout will not be sent with Switch based IPFIX flows.

    • Outer packet encapsulation: For packets that are encapsulated with Generic Network Virtualization Encapsulation (Geneve) encapsulation protocol, a separate flow record for the outer layer will not be sent with Switch based IPFIX flows.

  • Deprecation Announcement for the NSX Migration Coordinator

    • The NSX Migration Coordinator offers tooling to migrate from NSX for vSphere to NSX, either in-place or side by side. 

    • VMware plans to remove the NSX Migration Coordinator in the next major release of NSX. It is recommended that any remaining NSX for vSphere environments requiring this feature migrate using the NSX 4.x release train. Support for the NSX Migration Coordinator will remain for the NSX 4.x release series.

2. Entitlement Changes

Entitlement Change for the NSX Load Balancer

In a future major release of NSX, VMware intends to change the entitlement of the built-in NSX load balancer (a.k.a. NSX-T Load Balancer). This load balancer will only support load balancing for Aria Automation, IaaS Control Plane (Supervisor Cluster), and load balancing of VCF infrastructure components.

VMware recommends that customers who need general purpose and advanced load balancing features purchase Avi Load Balancer. Avi provides a superset of the NSX load balancing functionality including GSLB, advanced analytics, container ingress, application security, and WAF.

Existing entitlement to the built-in NSX load balancer for customers using NSX 4.x will remain for the duration of the NSX 4.x release series.

3. API Deprecations and Behavior Changes

  • For NSX Edge Clusters, the default uRPF mode for the GRE interface has been changed in VMware NSX 4.2.0. In NSX 4.1.x, the default uRPF mode for the GRE interface was Strict. Starting with NSX 4.2.0 and later releases, the default mode is Port_Check.

    Port_Check mode drops traffic if the egress interface for forwarding is the same as the ingress interface. In asymmetric routing scenarios, ingress traffic might be dropped in Strict mode. With Port_Check mode, such asymmetric traffic will no longer be dropped.

  • To simplify API consumption, refer to the new pages containing a list of deprecated and removed APIs and Types in the NSX API Guide.

    • Removed APIs: To review the removed APIs from NSX, view the Removed Methods category in the NSX API Guide. It lists the APIs removed and the version when removed.

    • Deprecated APIs: To review the deprecated APIs from NSX, view the Deprecated Methods category in the NSX API Guide. It lists the deprecated APIs still available in the product.

Compatibility and System Requirements

For compatibility and system requirements information, see the VMware Product Interoperability Matrices and the NSX Installation Guide.

Upgrade Notes for This Release

For instructions about upgrading NSX components, see the NSX Upgrade Guide.

This release is not supported for NSX Cloud customers deployed with AWS/Azure workloads. Do not upgrade your environment in this scenario.

Note: Customers upgrading to NSX 3.2.1 or earlier are recommended to run the NSX Upgrade Evaluation Tool before starting the upgrade process. The tool is designed to ensure success by checking the health and readiness of your NSX Manager repository prior to upgrading. For customers upgrading to NSX 3.2.2 or later, the tool is already integrated into the Upgrade workflow as part of the upgrade pre-checks; no separate action is needed.

Upgrade Integration Issues Due to Download Site Decommission

The NSX upgrade experience is impacted due to the decommissioning of downloads.vmware.com. See knowledge base article 372634 before upgrading.

Available Languages

Beginning with the next major release, we will be reducing the number of supported localization languages. The three supported languages will be:

  • Japanese

  • Spanish

  • French

The following languages will no longer be supported:

  • Italian, German, Korean, Traditional Chinese, and Simplified Chinese.

Impact:

  • Customers who have been using the deprecated languages will no longer receive updates or support in these languages.

  • All user interfaces, help documentation, and customer support will be available only in English or in the three supported languages mentioned above.

Because NSX localization utilizes the browser language settings, ensure that your settings match the desired language.

Document Revision History

Revision Date       Edition   Changes
July 23, 2024       1         Initial edition
July 25, 2024       2         Added resolved issue 3406962.
July 29, 2024       3         Moved deprecation announcement text to Entitlement Change for the NSX Load Balancer.
July 30, 2024       4         Updates to the deprecation section regarding physical server support and the Migration Coordinator.
July 31, 2024       5         Added known issue 3411866.
August 7, 2024      6         Added resolved issue 3302470.
August 13, 2024     7         Moved resolved issue 3224295 to known issues.
August 20, 2024     8         Added known issue 3422772 and What's New IDPS oversubscription.
August 21, 2024     9         Added known issue 3395578.
August 22, 2024     10        Added Important Advisory About NSX 4.2.0 and What's New vSphere with Tanzu Federation support.
October 15, 2024    11        Updated the API Deprecations and Behavior Changes section regarding GRE interface uRPF mode. Added resolved issues 3372486, 3420422, 3421098, and 3407134. Added known issue 3401745.

Resolved Issues

  • Fixed Issue 3407134: In NSX-T versions prior to 3.1.2, users may experience the root partition getting full.

    In rare cases, the root directory may get filled up with remnants of a third-party vendor repo that is no longer used. This may cause an upgrade to fail. Starting with VMware NSX-T Data Center 3.1.2, a different vendor was used and these signatures are no longer required.

    Workaround: While upgrading to NSX-T version 3.1.2 and later, remove the /home/secureall/secureall/policy/trustwave-repo folder to clear partition space and restart the upgrade. For details, refer to the knowledge base article 372374.

  • Fixed Issue 3421098: After replacing CBM_CLUSTER_MANAGER certificate on manager nodes, the cluster status stays DEGRADED forever.

    The manager components are not functioning properly, and UI is not available.

  • Fixed Issue 3420422: VM lost connectivity after vMotion triggered by DRS.

    DRS-migrated VMs lose connectivity, and their logical ports go into a blocked state, resulting in network disruption for impacted VMs.

  • Fixed Issue 3372486: In NSX deployments with more than 100 TNs, previously prepared transport nodes revert to a status of "Applying NSX switch configuration" and remain stuck.

    In NSX deployments with more than 100 TNs, upon reboot of an NSX Manager node, previously prepared transport nodes revert to a status of "Applying NSX switch configuration" and remain stuck at 68% progress.

  • Fixed Issue 3302470: Analyst API connectivity alarm is raised even though MPS successfully connects to the Lastline Cloud API.

    False alarm of analyst API connectivity for sa-scheduler-service under Malware Prevention Health feature stays open. Primary malware analysis continues to work.

  • Fixed Issue 3406962: After backup/restore, the nodes that were not restored but joined will see new self-signed certificates for APH_TN and CCP.

  • Fixed Issue 3373696: When one of the cluster appliance nodes fails with a "VM_CLUSTERING_FAILED" error, the corresponding card on the NSX UI appliance screen is empty for that node. No details are available on the card.

    User may not be able to see the error details on UI if the node deployment ends up in a failure.

  • Fixed Issue 3372998: When the Global IDFW setting is disabled, an error will display if you enable IDFW on a Cluster or a Standalone host.

    There is no impact.

  • Fixed Issue 3375771: The status of some of the entities might not be correct after the restart of the proton service.

    Customers will see false data about the status of some of the entities.

  • Fixed Issue 3361514: Password requirement criteria are not taken into account while validating a password on the NSX Manager UI.

    There is no impact as the customer can still save the password.

  • Fixed Issue 3359688: Customers are not able to add interfaces.

    An error is reported for stale Edge node not being found and interfaces cannot be added.

  • Fixed Issue 3296940: Users that are part of a large number of Active Directory groups, either directly or through nesting, experience slow UI login.

    Users in large numbers of LDAP groups are unable to log in.

  • Fixed Issue 3370361: Exceptions in the UI search due to service config resource referring to an invalid Logical Router UUID.

    The production upgrade certification of VCF 5.x is impacted.

  • Fixed Issue 3360204: PSOD on ESXi while using the "Live traffic analysis" tool in NSX.

    PSOD/BlueScreen on ESXi.

  • Fixed Issue 3316495: After multiple exits, IKE process does not restart and causes VPN failures.

    Traffic going on VPN is impacted as the VPN tunnels are down.

  • Fixed Issue 3307808: Customer is not able to remove two certificates.

    Certificates are greyed out and cannot be removed. There is no reported impact.

  • Fixed Issue 3235790: Number of logical routers realized on an edge transport node is reported incorrect on the edge upgrade UI page.

    UI shows the number of logical routers realized on an edge cluster as the number of logical routers on an edge transport node. The issue does not have any functional impact.

  • Fixed Issue 3374878: AVI LB VIP advertise route prefix not getting removed from T0 on AVI detachment from T1 gateway.

    As NSX cannot remove the AVI VIP advertised prefix from the T0 gateway, customers can't use the same VIP prefix on another T1 gateway.

  • Fixed Issue 3376916: NSX Manager appliance IP configuration leads to Controller to host connection down.

    Controller to host connection is down. New configuration realization is blocked.

  • Fixed Issue 3376823: Edge-agent core found in multi-vrf setup.

    No impact, edge-agent will restart.

  • Fixed Issue 3376281: UI does not show some entities due to an issue with the refresh of indexed data in search, even though those entities are present in NSX and can be retrieved through the API.

    Customers will see stale data in the UI.

  • Fixed Issue 3375928: When BGP in NSX Edge learns a prefix with multiple add-paths from a route server via a single session, and later one of the add-paths is withdrawn, all other add-paths are also removed from the NSX Edge RIB.

    All paths get deleted instead of the single specific path, causing possible traffic disruption.

  • Fixed Issue 3375875: Redundant WARNING log in /var/log/proton/nsxapi.log.

    No impact in the feature.

  • Fixed Issue 3374597: Invalid path passed for <TN-UUID> log statements observed in the /var/log/proton/nsxapi.log.

    There is no functional impact for the customer.


  • Fixed Issue 3374112: Some LDAP login failures were not being logged properly.

    The customer cannot audit all failed login attempts.

  • Fixed Issue 3374071: NSX upgrade may fail on an ESXi host with a DVS with DFW security and a DVS with NSX networking.

    NSX upgrade cannot proceed.

  • Fixed Issue 3372512: When there are config sync errors for an Edge node, launching visualization for that Edge node from the fabric topology view auto-resolves those config sync errors with the configuration from the vSphere/Edge appliance.

    When the customer is not aware of any config sync issues, launching this Edge visualization dialog updates the configuration on NSX and resolves the open alarms before the customer realizes such issues exist.

  • Fixed Issue 3371979: Physical NIC in Bare Metal Edge resets under heavy traffic.

    The reset of the device leads to a short amount of time where the interface is inoperable. Traffic may be lost, and BFD sessions will flap.

  • Fixed Issue 3371694: In rare scenarios, messaging manager service becomes unavailable when there are service initialization delays.

    The affected unified appliance node will not be assigned as a leader to Transport Nodes. Therefore, the transport node traffic will be handled only by the remaining nodes in the cluster.

  • Fixed Issue 3371681: T1 service-group showing state as "Unknown".

    T1 centralized service on stateful A/A will not work.

  • Fixed Issue 3370805: Cannot use the same IP as GRE tunnel address although it is being used in 2 different T0-VRFs.

    Customers need to use numerous VRFs to provide services to their customers. If the same private GRE tunnel IPs cannot be used, it creates scalability issues for them.

  • Fixed Issue 3369617: An error is observed by a user while configuring new ESXi hosts for NSX enabled clusters or configuring the NSX settings of existing ESXi hosts.

    The following error message is displayed on the UI when a user configures new ESXi hosts for NSX enabled clusters or configures the NSX settings of existing ESXi hosts: "Error: General error has occurred. (Error code: 100)".

  • Fixed Issue 3369514: Redundant Operation="LOGIN" log message in nsx-audit.log.

    Customers cannot easily track users logging in to NSX Manager, and nsx-audit.log may roll over sooner than expected.

  • Fixed Issue 3368733: When an Edge's T0 uplink pNIC is down, some traffic is impacted.

    Traffic for some flows will be impacted/blackholed.

  • Fixed Issue 3368616: Edge CLI "get configuration" results in internal error.

    Unable to run Edge CLI for get configuration.

  • Fixed Issue 3367961: Search API is intermittently failing with a CircuitBreaker exception having text as "Data too large".

    UI does not work intermittently.

  • Fixed Issue 3366670: After upgrading from NSX 4.0.x or earlier releases to NSX 4.1.x and replacing the APH_TN certificate, the manager's disk usage rises, resulting in the manager_disk_usage_high alarm.

    After upgrading from NSX 4.0.x or earlier releases to NSX 4.1.x, the manager_disk_usage_high alarm might be displayed.

  • Fixed Issue 3365911: Syslog server configuration on Edges gets lost if the Edge is edited from the UI.

    Functional impact because Syslog server settings are lost from the Edge Transport Node intent.

  • Fixed Issue 3365156: Heap allocation fails when required for ENS fastpath creation.

    ENS fastpath is created to cater for the customer network load. ENS fastpath creation failure leads to insufficient CPU resources to handle the traffic and may reduce performance.

  • Fixed Issue 3364540: The process that manages the central API and CLI functionalities that allow invoking NSX API requests or NSX CLI commands on any NSX appliance directly from NSX Manager nodes unexpectedly restarted and generated a core file.

    During the core generation and process restart, central NSX API and NSX CLI requests will not be served.

  • Fixed Issue 3364021: A user created a Tier-0 uplink connecting to an uplink segment, but on the edge the Tier-0 uplink is not connected to any segment.

    N-S traffic disruption if the Tier-0 is active.

  • Fixed Issue 3363912: ESX shows a tunnel to an Edge Down.

    ESX TN state will show degraded as some tunnels are down.

  • Fixed Issue 3363027: NSX Manager proxy stops responding to HTTP requests.

    NSX Manager proxy stops responding to HTTP requests.

  • Fixed Issue 336487: The UI does not expose the compute folder field, but the API does.

    The compute folder ID gets lost from the Edge Transport Node's intent. The customer needs to update the compute folder value via the API again after updating the Edge node in the UI.

  • Fixed Issue 3361839: NSX Group query API calls with page size may fail via vRNI with error code 500.

    NSX Group query API calls with page size may fail via vRNI with error code 500.

  • Fixed Issue 3360614: After an Edge node experiences a network outage and goes into read-only mode (if vSAN is used), the Edge node and the Controller may not resume their connection automatically after the network outage is fixed.

    The Edge node may not receive the latest configuration from the Controller after it experiences a network outage.

  • Fixed Issue 3360199: Corfu compaction for the non-config instance fails continuously.

    The non-config instance of Corfu can become unstable over time, causing frequent layout changes. If the amount of writes to non-config being made is large (depending on which IDS features are used by the customer), Corfu may go into read-only mode.

  • Fixed Issue 3359977: Status of networking entities (e.g. Load Balancers) show up as UNKNOWN due to NSX Manager node running out of system memory.

    When this happens, status messages from transport nodes failed to get updated and are reported as unknown. Depending on what process the kernel kills, other functionality may be impacted.

  • Fixed Issue 3358802: When the root password is about to expire, a warning message is generated to change the password. This is causing NSX CLI to fail.

    For the Edge Node, the root password expires after 90 days, so it starts showing a warning from the 84th day onwards indicating that the root password will expire. This causes BGP-related NSX CLI output to show JSON output instead of tabular output.

    There is no impact on the traffic flow or route exchange.

    If there is any automation that processes the NSX routing-related CLI output, it will fail due to the change in output format. Automation that parses the JSON output is not affected.

  • Fixed Issue 3352668: When debug level logs are enabled for reverse-proxy, clear-text passwords can be seen in the logs.

    Security risk when debugging is enabled: clear-text passwords can be seen in the logs.

  • Fixed Issue 3347964: Unwanted banner message is showing on NSX UI related to backup configuration only on stand-by GM instance.

    No functional impact. The user sees a banner message related to backup, but backup is not applicable on the standby GM instance.

  • Fixed Issue 3347327: In the presence of service insertion, with multiple uplinks, multicast packets get looped back from one uplink to another.

    With multiple uplinks, a multicast storm occurs in the customer network causing an outage.

  • Fixed Issue 3342609: Not able to search and select group from "Access List Control" dropdown.

    The customer is not able to search and select the group on a scale setup with a large number of groups.

  • Fixed Issue 3341991: There was an incorrect info icon text "Connectivity configuration to manually connect (ON) or disconnect (OFF) a logical entity from network topology" shown on Fabric Settings page for MTU check.

    Usability issue.

  • Fixed Issue 3332994: The zero IP address 0.0.0.0 in an IpSet group gets published as 0.0.0.0/0 to the host instead of 0.0.0.0/32. A similar issue applies to the zero IPv6 address.

    Impacts DFW rules that use these groups with the zero IP address, since the group members are no longer the single IP 0.0.0.0/32 but rather the set of all IPs identified by 0.0.0.0/0.
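    The severity of that prefix difference can be shown with Python's standard `ipaddress` module (an illustrative check, not part of NSX):

    ```python
    import ipaddress

    # 0.0.0.0/32 is a single host address; 0.0.0.0/0 matches every IPv4 address.
    single = ipaddress.ip_network("0.0.0.0/32")
    everything = ipaddress.ip_network("0.0.0.0/0")

    print(single.num_addresses)      # 1
    print(everything.num_addresses)  # 4294967296, i.e. 2**32

    # The same asymmetry exists for the zero IPv6 address.
    print(ipaddress.ip_network("::/128").num_addresses)  # 1
    ```

    A rule whose group was meant to match one address therefore ended up matching all addresses.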

  • Fixed Issue 3332753: The cluster boot manager service running as part of the NSX Manager may create a large number of temp directories, causing the node to run out of disk space.

    Customer might see degradation in appliance performance or NSX manager cluster might be down.

  • Fixed Issue 3332225: VMs falling out of NSX inventory intermittently.

    Virtual machines (VMs) utilised in DFW rules or any firewall rules are missing from NSX inventory if the discovery agent fails to transmit messages to MP.

  • Fixed Issue 3331554: Issue in the realization of IPFIX Switch Profile.

    Deleting a logical switch or port associated with an IPFIX Switch profile causes the realization status of policy IPFIX Switch Profile to be in an error state. This issue blocks any new configuration changes that are applied on the host side.

  • Fixed Issue 3329078: /api/v1/transport-nodes API is not returning all the Edge transport nodes.

    Customer is unable to view all the Edge or Host transport nodes by invoking /api/v1/transport-nodes API.

  • Fixed Issue 3328388: An SNMP trap is sent for the alarm even if suppress_snmp_trp is true for this alarm.

    Customers see redundant SNMP traps for alarms for which they did not enable SNMP trap sending.

  • Fixed Issue 3327417: Logging only -- client IP address was not included in log message when logging in using WS1.

    Lack of source IP can limit security auditing.

  • Fixed Issue 3319733: NSX-T installation causes a host running with low heap memory availability to PSOD.

    Outage of workloads running on the crashed host.

  • Fixed Issue 3319612: Pre-upgrade check was failing because synchronization of the repository partition on NSX Manager was not successful.

    Customer won’t be able to upgrade the environment.

  • Fixed Issue 3318976: Cannot view a logical router via CLI when the logical router name contains some special characters.

    Unable to view the affected logical routers.

  • Fixed Issue 3318786: On navigating to Networking > Networking Overview screen, the Segments Classification widget under the "Segments" section is not loading and shows an error message.

    Customer is not able to view the Segments Classification widget which classifies Segments into Not Connected, Routed and NATed categories.

  • Fixed Issue 3315621: False alarm raised that edge VM is in Inventory and not found in VC.

    Usability, since false alarms generate noise.

  • Fixed Issue 3315258: ESXi host loses management connectivity during maintenance mode VMware NSX upgrade.

    ESXi host loses management connectivity during maintenance mode VMware NSX upgrade.

  • Fixed Issue 3315257: Tier-1 realization shown as failed with 'LR port IP overlaps with NAT service' error message even if overlapping port(s) have been deleted.

    No datapath impact. Tier1 realization shown as failed due to the stale GPRR object with alarm.

  • Fixed Issue 3312261: SHA on ESX node exits.

    The Transport Node is in unknown status. Metrics on ESX collected by SHA cannot be reported to LM and other remote collectors.

  • Fixed Issue 3308405: A PSOD occurred on an ESXi host with NSX when running "get host-switch <dvs> tunnels" in nsxcli.

    A PSOD occurs and the host reboots.

  • Fixed Issue 3307192: OVF certificate validation failed while deploying an Edge using NSX 4.1.0.

    The customer is unable to upgrade an Edge node from NSX 4.1.0.

  • Fixed Issue 3306196: PolicyEdgeNode CPU is not updated when the edge transport node form factor is modified.

    Customer won't be able to scale up LB.

  • Fixed Issue 3305644: When site was getting offboarded, site's flow status in datastore is not getting cleaned up properly in some cases.

    After offboarding and re-onboarding a site, the data sync between GM and LM might not work.

  • Fixed Issue 3305201: Opsagent crashes and core dump is generated on the ESX host.

    There is a minimal impact since the service will restart automatically.

  • Fixed Issue 3303832: The NA packets sent by the ESX DR shows intermittent drop.

    Packet drops seen for N-S traffic stream.

  • Fixed Issue 3299145: Realization error for Global Tier-0 on both GM and LM after config onboarding.

    Customers will see errors related to transport-zone realization, such as DHCP pool address assignment where a new node does not get an IP address.

  • Fixed Issue 3296530: Invalid Route Advertisement rules can cause out of memory issue in unified appliance.

    NSX Manager can crash, which prevents the customer from making any configuration changes.

  • Fixed Issue 3295178: LDAP users are not able to retrieve the management cluster status on the NSX Manager UI.

    Users cannot see the cluster status from the NSX Manager UI.

  • Fixed Issue 3294471: Client applications such as vROPs that use vIDM credentials in Basic Authentication for every request can overload the processing between NSX and the vIDM server.

    vROPs activities fail. They must be restarted and can potentially fail again.

  • Fixed Issue 3294470: Route aggregation programming issues on ESXi hosts.

    A few routes from configured route aggregation subnets may be missing from the ESXi host's virtual routing table containing the active edge. Traffic towards certain route-aggregation or custom-T1 prefixes might not work. This can lead to network disruptions and, in turn, a partial network outage.

  • Fixed Issue 3291399: A previous fix, https://gitreview.eng.vmware.com/c/nsx/+/529886, that was made to lessen the load on the vIDM server disabled the ability to refresh the users' access tokens, which results in the need for the user to log in again.

    Unexpected Manager log out that requires the user to log in again.

  • Fixed Issue 3289750: DFW rules are not applied on a security-only cluster.

    VM connectivity issues are faced.

  • Fixed Issue 3287814: GRE stats query API with invalid tunnel name returns a null pointer exception in the response.

    There is no functional impact. GRE GET statistics summary APIs fail only if a user supplies invalid tunnel name in the URL.

  • Fixed Issue 3379346: The system throws purple screen of death (PSOD) when there is a combination of both shared and dedicated RSS vNICs requests and the total requested RSS Engines are more than the number of available RSS Engines.

    The system PSODs.

  • Fixed Issue 3275275: Datapath crashes when there is an IPSec SA update, because when a new_sa is freed there is a chance of double free if freed memory is allocated to something.

    A datapath crash results in failover.

  • Fixed Issue 3379348: ESXi might throw purple screen of death (PSOD) when a dedicated RSS is configured and logical cores are removed from the switch.

    The system PSODs.

  • Fixed Issue 3382577: NSX Manager fails to upgrade to version 3.2.4.

    Upgrade fails when you create a segment with connectivity_path value set as " " (empty string).

  • Fixed Issue 3285035: Allowing VPN Service on VRF in 4.1.1 release causes upgrade issues when setup is upgraded from 4.1.1 to 4.2.0.

    There is no impact as the feature is not supported in 4.1.1. Even if a user creates a session, it will not affect the datapath and traffic flow as the 4.1.1 release does not support VPN on VRF.

  • Fixed Issue 3282184: Stale entries of security-only extra configs are found on the host.

    Because of the stale entries, install could fail or DFW rules are wrongly applied on the port.

  • Fixed Issue 3270806: Segments took a long time to realize because port groups creation on VC took more time.

    There was a considerable delay in creating port groups.

  • Fixed Issue 3255286: After renewing the principal identity certificate in Antrea container, Antrea container clusters status are down.

    Since an Antrea cluster cannot communicate with NSX, customers cannot visit the Antrea cluster inventory and define DFW rules on NSX UI.

  • Fixed Issue 3254722: A project administrator cannot find out which resources have been shared with VPCs by org administrator.

    The project administrator will have to check each project and each VPC to know which resources are shared.

  • Fixed Issue 3387605: Bare Metal Edges cannot start if, while configuring NICs, the Rx or Tx ring size does not meet the alignment requirement.

    Some NICs used in Bare Metal Edges require that the Rx and Tx ring sizes are a multiple of a certain number, e.g., 32 for Intel NICs using the i40e driver. If the Rx or Tx ring size doesn't meet the alignment requirement, the dataplane fails to start until the ring size is set to a valid value.

    Set the Rx and Tx ring size to a valid value. For Intel NICs using the i40e driver, the ring size must be a multiple of 32.

    The default value is 2048, but you can also use 4096.
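    As a sketch of the alignment rule above (the helper name is hypothetical; the 32-entry multiple applies to Intel NICs using the i40e driver, per the text):

    ```python
    def align_ring_size(requested: int, multiple: int = 32) -> int:
        """Round a requested Rx/Tx ring size up to the nearest valid multiple.

        Hypothetical helper, not an NSX API: it illustrates the alignment
        requirement that some NICs (e.g., Intel i40e) impose on ring sizes.
        """
        if requested <= 0:
            raise ValueError("ring size must be positive")
        return ((requested + multiple - 1) // multiple) * multiple

    # The default of 2048 and the alternative 4096 are already aligned.
    assert align_ring_size(2048) == 2048
    assert align_ring_size(4096) == 4096
    # A misaligned request such as 2000 rounds up to the next multiple, 2016.
    assert align_ring_size(2000) == 2016
    ```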

  • Fixed Issue 3243063: Memory corruption in IKE daemon is leading to configuration failures.

    New configuration or configuration changes are not accepted at times.

  • Fixed Issue 3239197: Even when a group is shared with include_children:true, the group's effective members are not visible to a project admin.

    Definition of shared groups is not visible to a project admin.

  • Fixed Issue 3387947: The proton service crashes due to running out of memory.

    Running out of memory results in a core dump. Due to this, you might experience unexpected crashes and restarts of the proton service.

    Do a rolling restart of the proton service on the affected nodes in the cluster.

  • Fixed Issue 3237340: All edge clusters and Tier-0 gateways are shown for selection during a project creation through UI.

    User needs to manually verify and select edge clusters and Tier-0 gateways from the default transport zone.

  • Fixed Issue 3319606: Segment port might get deleted on rare occasion if the vCenter API call fails.

    Users cannot configure features that consume the segment port as one of its parameters.

  • Fixed Issue 3215757: When a VTEP is already present on a host before installation, the install comes up with a duplicate VTEP IP, and an alert for this is raised only after workloads are moved.

    The workloads that are moved to the TN will not have any connectivity.

  • Fixed Issue 3368626: The nsx-exporter stopped responding during flow records retrieval from kernel under heavy traffic.

    An exporter zdump was observed and an application alarm was raised. There is no other notable functional impact.

  • Fixed Issue 3367973: Edge HA failover due to DP-FW crash and core.dp-fw-dispatch coredumps are observed on Edges.

    You will see a core alarm. No impact on functions. Config would be reapplied.

  • Fixed Issue 3367614: Latency of more than 20ms seen on enabling DNS Context Profile with FQDN filtering.

    Delayed responses noticed due to latency.

  • Fixed Issue 3326723: L7 Datapath daemon (nsx-vdpi) stops responding and restarts, resulting in L7 rule enforcement failures.

    NSX manager upgrade is blocked.

  • Fixed Issue 3331642: In NSX Federation, a customer on Site-1 will see an incorrect group member in the exclude list that does not span that site.

    In the context of Federation, the Global DFW Exclude list on Site-1 shows a group member of another site (Site-2).

  • Fixed Issue 3374225: Traffic is unexpectedly dropped by older Gateway Firewall Policy_Default_Infra section.

    Traffic is unexpectedly dropped by older Gateway Firewall Policy_Default_Infra section.

  • Fixed Issue 3375387: Datapath component stops responding on NSX Edge node in the configuration churn conditions that involve deletion and addition of members to the groups used in the firewall rule.

    Traffic can be interrupted.

  • Fixed Issue 3380936: Cfgagent process on ESX host runs out of memory in around 90 days and stops responding due to IP Reputation config related auto updates.

    You will see the application crashed alarm. The watchdog ensures that the process is restarted immediately upon crashing. There is no interruption to traffic as this only impacts configuration changes.

  • Fixed Issue 3382265: NSX IDFW sometimes failed to synchronize the AD groups and their memberships from AD server due to time taken to process larger synchronization batch sizes.

    IDFW rules are not enforced for recently deleted AD users.

  • Fixed Issue 3383739: After enabling the IDPS feature, the throughput dropped to 1 Gbps.

    The throughput dropped to 1 Gbps after enabling the IDPS feature.

  • Fixed Issue 3378813: VDPI becomes unavailable while performing portmirror upgrade.

    The issue occurred during upgrades with port mirroring performed across multiple releases, from 3.1 to 4.0 to 4.2. The unresponsiveness is in 4.0, as the debug symbols matched.

  • Fixed Issue 3364256: Connection drops at the Edge because datapath daemon on NSX Edge crashes when packets not destined to the firewall are accidentally routed to the firewall for processing.

    Connection drop will lead to datapath restart.

  • Fixed Issue 3359454: Edge datapath may stop forwarding traffic due to epconn connection getting closed to NSXA process.

    All services on the NSX Edge will be brought down if the non-zero value is returned by epconn service to NSXA.

  • Fixed Issue 3352691: DFW Exclusion list might not work properly in some corner case scenarios, which leaves the components in exclusion list to still have DFW features applied to them.

    DFW Exclusion list may not work properly in some corner case scenarios, which leaves the components in exclusion list to still have DFW features applied to them.

  • Fixed Issue 3356675: In an environment with DFW IPFIX, the session established state for TCP flows may be set incorrectly.

    For certain TCP flows, the session established flag was set to TRUE when it should not have been.

  • Fixed Issue 3356883: Unable to filter DFW policies in NSX Global Manager.

    Difficulty in locating the Distributed Firewall rules in case of scale setup. User has to manually scroll through the rule configuration to locate the rule.

  • Fixed Issue 3278718: Failure in packet capture (PCAP) export if the PCAP file has not been received by the NSX Manager.

    Users will not be able to export the requested PCAPs as the request will fail.

    In most cases, you can avoid this issue by waiting for a few seconds and then exporting the PCAP files.

  • Fixed Issue 3109810: Intermittent FQDN rule enforcement failures.

    Default rule hit.

  • Fixed Issue 3222376: The NSX "Check Status" functionality in the LDAP configuration UI reports a failure when connecting to Windows Server 2012/Active Directory.

    This is because Windows 2012 only supports weaker TLS cipher suites that are no longer supported by NSX for security reasons.

    Even though an error message displays, LDAP authentication over SSL works because the set of cipher suites used by the LDAP authentication code is different than the set used by the "Check Status" link.

    See knowledge base article 92869 for details.

  • Fixed Issue 3285744: Cannot modify Edge firewall rules in NSX UI.

    Update operations via both the MP API and the UI fail with a validation error.

  • Fixed Issue 3284792: When consecutive PCAP export requests of 50 PCAPs per request are triggered on a setup with a large number of PCAPs, the initial request returns a successful result but subsequent requests fail.

    When an export request is issued while another export request is in progress, it fails immediately.

  • Fixed Issue 3279537: After an Edge failover, the existing SSH connection's state shows as APP_INVALID on the newly active Edge and the standby Edge.

    After Edge failover, get firewall <interface uuid> connection state will show APP_SSH and APP_INVALID for an existing SSH connection.

  • Fixed Issue 3274236: When memory allocation failures are seen during programming of large address sets, ESXi host may generate a PSOD.

    Traffic serviced by that host will be impacted.

  • Fixed Issue 3266660: PSOD might occur during NSX for vSphere to NSX migration under heavy traffic load.

    Migration from NSX for vSphere to NSX-T fails with PSOD error.

  • Fixed Issue 3263257: While sending long-lived flows, the user may sometimes see PCAP export fail under stress conditions.

    In the Manager UI, after exporting a PCAP, instead of the status "Ready," the user might see "Incomplete" for signature IDs 1096696 and 2025644.

  • Fixed Issue 3324829: Application on NSX edge and manager node crashed frequently.

    This is an intermittent Linux kernel issue.

  • Fixed Issue 3316724: In an NSX for vSphere to NSX-T migration scenario, TCP flow timeout for active flows gets set to a 30 second default (instead of default of 43,200 seconds) post migration, resulting in flow timeout.

    Traffic will continue to hit this active flow as expected. The flow timeout, however, has been reset to 30 seconds, instead of the standard 43200 seconds. Depending on the application and traffic pattern, a premature timeout may cause problems.

  • Fixed Issue 3314833: The platform-ui APIs return 404 error when NAPP ingress/messaging FQDN contains capital letters.

    NAPP UI fails to load.

  • Fixed Issue 3313729: V2T migration fails if only "NSX for vSphere - Standard" license is applied in NSX-T. This license doesn't support DFW.

    Config migration fails due to license.

  • Fixed Issue 3311775: Wrong message displays during small sized Global Manager appliance deployment.

    Small size form factor should display "Small VM appliance size is suitable for lab and proof-of-concept deployment."

  • Fixed Issue 3307552: ESXI PSOD might occur during NSX for vSphere to NSX-T migration if imported data contains layer 7 attributes.

    Host PSOD.

  • Fixed Issue 3305927: In NSX Federation environments, IDFW view user sessions don't return any data.

    A NullPointerException error is seen while accessing some of cluster enable/disable information. IDFW user sessions are not visible in the API/UI, but there is no impact to firewall functionality.

  • Fixed Issue 3299383: NAPP deployment fails on Prepare to Deploy upgrade page - "Helm pull chart operation failed."

  • Fixed Issue 3299055: NSX Network Detection and Response (NDR) UI might not load resulting in "HTTP 400 Bad Request: Request Header Or Cookie Too Large" error.

    This issue is due to the LDAP / VIDM user being part of too many groups.

  • Fixed Issue 3297834: Vsip-fqdn utilization reaches 99% on a couple of hosts.

    This may result in critical alarms from DFW memory and impacts to L7 traffic.

  • Fixed Issue 3291400: When a user belongs to multiple VIDM groups with different NSX roles, only one role gets enforced.

    The VIDM user who is a member of these VIDM groups might end up with reduced login privileges in NSX.

  • Fixed Issue 3291181: Upgrade failing with logs error "Failed to start bean 'roleBindingInit'."

    NSX Manager is unable to come up (post-upgrade or restart). This might be due to a custom role that was created with duplicate features.

  • Fixed Issue 3250489: Certificate does not get restored properly.

    Some GM functionality that requires API calls to the LM will not work.

  • Fixed Issue 3233914: NSX reverse-proxy (due to bug in boringssl) fails to load a certificate if its length is multiple of 253. The certificate is of service_type CLIENT_AUTH.

    NSX reverse-proxy (envoy) fails to start after a restart (including upgrade).

  • Fixed Issue 3221820: User is allowed to edit the TZP ID when updating the transport node via API.

    The user will not be able to make subsequent updates to the transport node.

  • Fixed Issue 3234358: The child port realization is not succeeding because it's being invoked prior to the parent port invocation.

    The child port gets realized only five minutes after the parent port realization.

  • Fixed Issue 3211228: At present, even if the proxy is configured it is not used while connecting to the VMware download site to fetch the upgrade bundles. This results in failure in case of airgap scenarios.

    Setups that have airgap scenarios will not be able to run the upload upgrade bundle API since they need the proxy to reach the download site.

  • Fixed Issue 3245222: NSX Manager upgrade dry run tool fails at InternalLogicalPortMigrationTask. NSX Manager upgrade is blocked.

    NSX Manager upgrade is blocked.

  • Fixed Issue 3305774/338485: The IPSet listing API default page size is seen as 50 though the API documentation shows it to be 1000.

    No functional impact.
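    NSX list APIs of this kind paginate with a cursor and a page size; to stay robust against a server default that differs from the documentation, a client can pass the page size explicitly and follow the cursor. A minimal sketch, where `fetch` stands in for a hypothetical HTTP GET against such an endpoint:

    ```python
    def list_all(fetch, page_size=1000):
        """Collect all results from a cursor-paginated list API.

        `fetch(cursor, page_size)` is a placeholder for a request to a
        paginated endpoint; it returns a dict with "results" and, while more
        pages remain, a "cursor". Passing page_size explicitly avoids relying
        on the server's default.
        """
        results, cursor = [], None
        while True:
            page = fetch(cursor, page_size)
            results.extend(page["results"])
            cursor = page.get("cursor")
            if not cursor:
                return results

    # Fake two-page backend for demonstration only.
    def fake_fetch(cursor, page_size):
        if cursor is None:
            return {"results": [1, 2], "cursor": "next"}
        return {"results": [3]}

    assert list_all(fake_fetch) == [1, 2, 3]
    ```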

  • Fixed Issue 3353199/3391651: User is allowed to select IDPS signature details not present in the active signature bundle.

    User might think that the filtering is not working correctly for signature details. User can select details of non-active signatures.

  • Fixed Issue 3392505: NSX Application Platform Communication TN Flow Exp Disconnected.

    Exporting of flows to Security Intelligence may intermittently fail on ESX 7.x.

  • Fixed Issue 3392904: [IDFW] The automatic full sync might not start after Active Directory domain is configured.

    The AD full sync might not start automatically if sharding service changes the leading node right after AD domain is configured.

  • Fixed Issue 3393424: For Malware Prevention Service (MPS) deployment, specifying IP Pool to be used is mandatory when choosing Static IP allocation in UI and API.

    On the MPS Service Deployment UI, if you choose Static IP allocation and do not select any IP Pool from the list and choose to deploy, the MPS health will be down.

    Or while using API, if you specify STATIC as IP allocation type but do not provide any IP pool ID, the same issue occurs.

  • Fixed Issue 3326899: Switch IPFIX profile has a realization issue.

    When a Switch IPFIX profile that is associated with the segment or segment port is removed, the realization state of the Switch IPFIX profile is displayed as error.

    Any configuration changes that are applied on the host are blocked.

  • Fixed Issue 3325100: Resolve action of repository synchronization does not work when repository synchronization has failed on orchestrator node and NSX Manager is dual stack or CA certificate is applied to it.

    Repository synchronization remains in failed state and users might face issues during installation and upgrade workflows.

  • Fixed Issue 3323189: Elasticsearch server does not shutdown in time and prevents OpenSearch server from starting properly.

    NSX upgrade fails.

  • Fixed Issue 3321459: When you force delete a transport node from the NSX Manager UI and run the DELETE API call, the host is not removed from the corfu database and NSX Manager UI.

    Host is stuck in uninstalling state.

  • Fixed Issue 3320794: The GET or DELETE API call and search API fails when a security policy rule ID contains the ‘\t’ special character.

    Users are able to create security policy rules with rule ID containing the special character (\t) by using the hierarchical policy API (HAPI). However, the GET or DELETE API call and search API fails when a rule ID contains the ‘\t’ special character.
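    Until clients are on a fixed version, a client-side guard (hypothetical; not an NSX helper) can reject rule IDs containing control characters such as '\t' before they reach the hierarchical policy API:

    ```python
    def is_safe_rule_id(rule_id: str) -> bool:
        """Reject rule IDs containing ASCII control characters such as '\\t'.

        Hypothetical client-side validation: the hierarchical policy API
        accepted such IDs, but GET/DELETE and search calls then failed.
        """
        return bool(rule_id) and not any(ord(ch) < 32 for ch in rule_id)

    assert is_safe_rule_id("allow-web-443")
    assert not is_safe_rule_id("rule\tid")  # tab (0x09) is a control character
    assert not is_safe_rule_id("")
    ```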

  • Fixed Issue 3320355: Packets are forwarded to the old host after the machine (VM or container) is moved to another host.

    IP entry (IP-MAC) is installed by the control plane, but MAC entry (MAC-VTEP) is either missing, or is learned but has a wrong or old VTEP.

  • Fixed Issue 3318786: On the Network Overview page of the NSX Manager UI, the widget that classifies Segments into Not Connected, Routed and NAT categories does not load and throws an error.

    An exception is created and its stack trace is logged in the nsxapi.log file.

  • Fixed Issue 3303937: Traffic loss during VRRP VM failover.

    Traffic loss.

  • Fixed Issue 3315974: NAT rule statistics per edge node are not displayed in the NSX Manager UI.

    NAT statistics for edge node are always displayed as zero. However, you can view the NAT rule statistics in the NSX Manager UI by navigating to Tier0 Gateway > View NAT > NAT rule > Statistics.

  • Fixed Issue 3314273: Post-upgrade step of Edge is triggered even when Edge upgrade is not complete.

    Edge upgrade is not complete, and some Edge nodes are unable to connect to NSX Manager.

    Users are unable to upgrade Edge nodes.

  • Fixed Issue 3313894: No logical routers or segments are present after upgrading a baremetal edge.

    The fastpath interfaces or the management interfaces or both have different MAC addresses after the upgrade than they had before the baremetal edge upgrade.

    NSX dataplane goes down.

  • Fixed Issue 3308613: When dealing with large realized state, NestDB might run out of memory.

    NestDB will create a core dump and restart. It takes longer for realized state to get applied on the host. Also, vMotion can be impacted.

  • Fixed Issue 3307194: When deploying an Edge node from the NSX Manager UI, the Edge VM is created and powers on, but the auto-join command fails.

    Users are unable to deploy Edge nodes in the NSX Manager UI.

  • Fixed Issue 3307011: NSX host upgrade to 4.1.2.1 fails if contents of any sticky bit file are modified manually at the host before upgrade.

    Prior to upgrade, manual changes to the nsx-cfgagent.xml file cause the host upgrade to 4.1.2.1 to fail.

    Hosts where upgrade has failed are not available for workloads, thereby reducing the resources for workloads.

  • Fixed Issue 3304057: NSX load balancer does not work when load balancer traffic comes over IPSec VPN tunnel that terminates at the logical router.

    Workloads running behind the NSX load balancer over IPSec are not accessible.

  • Fixed Issue 3301884: NSX backup fails.

    Backup fails with the following error message:

    "Either bad directory path or sftp server disk full or check if the directory path is beyond 260 character limit on windows server."

    Users are unable to take a backup.

  • Fixed Issue 3300414: Duplicate IP failures are observed when anycast Neighbor Solicitation (NS) packets are received by the edge.

    A Neighbor Advertisement (NA) for a /127 anycast address is received by the remote host.

    This issue occurs when the uplink is configured as a point-to-point link with an IPv6 address with a prefix length of 127. The edge responds with a Neighbor Advertisement (NA), resulting in a duplicate IP address failure.

  • Fixed Issue 3298066: After upgrading from NSX 3.x to higher versions, you might not see any data on the NSX Manager UI if there is a failure in reading the search metadata during search initialization.

    You might continue to see one of the following error messages:

    "Search service is currently unavailable" or "Timed out while syncing indexes".

  • Fixed Issue 3297108: When you update the FQDN to a newer value, the NSX Messaging Manager service does not receive the updated FQDN.

    Hosts get disconnected from NSX Manager because of incorrect FQDN values in appliance-info.

  • Fixed Issue 3296339: Incorrect error message is displayed when you try to download the MAC table or VTEP table of a segment.

    Incorrect error message confuses users.

  • Fixed Issue 3295807: High disk usage in NSX Manager node disk partition /nonconfig

    The disk usage for the NSX Manager node disk partition nonconfig has reached 10%, which is at or above the high threshold value of 10%. This indicates rising disk usage by the NSX Datastore service in the /nonconfig/corfu directory.

    This issue generally triggers alarms.

  • Fixed Issue 3295081: On the Set Interfaces dialog box of Tier-0/Tier-1 Gateway in the NSX Manager UI, the View Statistics and DAD status actions are not visible.

    When you drag columns in the Interface grid to increase their width, the View Statistics and View DAD Status actions are not displayed.

  • Fixed Issue 3294869: VMs connected to NSX-backed VLAN segment lose connectivity at random intervals.

    VLAN-based segment ports use a wrong teaming policy, which might cause connectivity issues.

  • Fixed Issue 3294027: Unable to upgrade NSX due to repository sync status error.

    Upgrade Coordinator cannot handle SAN entries in the NSX Manager's REST API certificate that are exactly 127 bytes long.

  • Fixed Issue 3293503: Partner service deployment failed with 404 error.

    Issue is observed when adding the Service Manager. System is unable to create notification watcher in NSX Manager due to race condition.

  • Fixed Issue 3292915: If mirrorstack is enabled for port mirroring, NSX Manager might run out of memory after running for a long time.

    NSX Manager node crashes due to out-of-memory error.

  • Fixed Issue 3292633: An EVPN VRF does not have the L2VPN_EVPN address family for routing when the VRF was first created without the L2VPN_EVPN address family and the family was later added without explicitly configured import and export route targets.

    The EVPN route for the VRF on an NSX Edge does not send the correct route distinguisher (RD) because the L2VPN_EVPN address family is not configured.

  • Fixed Issue 3292111: Quick roll over of logs due to frequent login and logout events.

    There is no impact on functionality.

  • Fixed Issue 3285196: Unable to delete route-based IPSec VPN session when it is configured with overlapping VTI on different logical routers.

    There is no impact on the data plane traffic. This issue affects the deletion of the route-based IPSec VPN session.

  • Fixed Issue 3289801: Datapathd crashes and restarts while using GRE keep alive.

    Datapathd might crash when either of these two situations occur:

    • A script is used to configure the GRE keep alive and the associated logical router link is configured with APIs.

    • In a scale setup, the standby edge is replaced while GRE keep alive is being configured.

    Datapathd crash will impact traffic forwarding. However, datapathd is automatically restarted and operation will continue.

  • Fixed Issue 3289489: When you trigger upgrade with .mub/.pub file from a second NSX Manager node while Upgrade Coordinator is still in progress on the first NSX Manager node, it might cause upgrade to fail on the first node.

    Triggering upgrade with Upgrade Coordinator on multiple NSX Manager nodes in parallel is not supported. Upgrade fails and repository sync might go into a failed state.

  • Fixed Issue 3287645: When a VLAN transport zone is selected in a host switch, the 'IP Assignment' field on the UI shows the 'DHCP' value, which is not the expected behavior.

    The IP Assignment field is applicable only when an overlay transport zone is selected in a host switch.

  • Fixed Issue 3286851: Deletion of all role assignments from "Manage Project > Project Users" fails in the NSX Manager UI.

    The following error message is displayed in the UI:

    "Cannot remove all roles for paths in update operation. Please use DELETE role-binding api."

  • Fixed Issue 3286159: Changing the number of uplinks in the uplink profile from four to two causes a datapath outage.

    This issue occurs when VTEP ordering in the transport node state is incorrect, which leads to deletion of the wrong VTEPs when you reduce the number of uplinks from four to two.

    For example, with four VTEPs, the incorrect order is vmk10, vmk12, vmk11, vmk13.

  • Fixed Issue 3281738: When more than 64 BFD and BGP sessions are configured, flaps in connectivity might be observed in the presence of heavy traffic.

    The vmxnet3 driver supports installation of a maximum of 64 filters to prioritize the traffic. Therefore, BGP and BFD sessions might flap under heavy traffic.

  • Fixed Issue 3281703: DHCP IPv6 pool ranges are not displayed in the NSX Capacity dashboard.

    You cannot view the total number of DHCP IPv6 pool ranges that are configured in the system.

  • Fixed Issue 3279761: nginx crashes with a core dump on an NSX edge appliance.

    Load balancer traffic might be impacted when nginx crashes.

  • Fixed Issue 3279587: Duplicate feedbacks are requested for 'Get Migration Report before finalization of migration' while doing a Cross-vCenter to Federation migration.

    Feedback gets generated per site unnecessarily when you are migrating from NSX-V Cross-vCenter to NSX-T Federation.

  • Fixed Issue 3278548: Error occurs while deleting a Tier-0 Gateway if there is a GRE tunnel associated with it.

    When you delete the Tier-0 Gateway from the NSX Manager UI instead of deleting the GRE tunnel first, hierarchical API does not work.

  • Fixed Issue 3277607: When you update the display name of the Edge transport node, the system does not update the Edge cluster member's display name automatically.

    You need to manually update the Edge cluster of which the Edge transport node is a member.

  • Fixed Issue 3275066: When using GRE tunnels, the NSX Edge CLI command for getting the logical router interface shows incorrect value for the op_state field.

    There is no functional impact.

  • Fixed Issue 3272570: When a controller node is removed from the cluster, stale entries might not be cleaned up in the ClusterNodeConfigModel table.

    Transport nodes cannot establish healthy connections to controllers due to stale entries in the controller-info.xml file.

  • Fixed Issue 3262526: Multicast traffic drops in transit Tier-0 SR scenario when sender stops multicast traffic and reinitiates it with IGMP join running

    Multicast traffic does not reach the receivers.

  • Fixed Issue 3235594: On a single large cluster, during the post promotion phase of Manager objects to Policy objects promotion, DFW packet drops were observed for a few minutes.

    Packets were dropped between source and destinations even when there was a rule to allow packets between them.

  • Fixed Issue 3340718: PSOD (purple screen of death) may occur during NSX for vSphere to NSX-T migration under heavy traffic load.

    Migration from NSX for vSphere is failing with PSOD error and cannot proceed further.

  • Fixed Issue 3296306: Too many logs from iked are written to syslog when a large number of duplicate IPSec SAs are established.

    Logs get rolled over very fast.

  • Fixed Issue 3283883: Auto deployed edges missing vmId to perform edge upgrade.

    If the compute manager to which the edge is deployed is no longer registered with NSX Manager, the upgrade will fail. If the compute manager is registered with the NSX Manager, the upgrade will succeed, but edge VM specific information, such as the hardware version, will not be updated.

  • Fixed Issue 3277398: Service Insertion can drop traffic in some vMotion scenarios when both redirection and copy mode are used.

    Loss of packets.

  • Fixed Issue 3235352: TCP MSS Clamping is not happening on T1 inter-sr port when egress port is rtep group.

    TCP retransmission, packet drop.

  • Fixed Issue 3289085: After upgrading from NSX 4.0.1 or NSX 3.2.3 to NSX 4.1.2, the NSX Intelligence data collection service gets disabled on some of the ESX transport nodes (TNs).

    The NSX Intelligence Data Collection service is disabled on a few hosts, or the hosts and cluster of hosts are not visible on the Data Collection UI after upgrading from NSX 4.0.1 or NSX 3.2.3 to NSX 4.1.2. There are no traffic flows being reported from some of the hosts. The Data Collection toggle for the affected hosts or clusters is not available on the System > Settings > NSX Intelligence UI.

  • Fixed Issue 3224257: BFD tunnel flap causing log overrun.

    vmkernel logs overrun with messages on BFD tunnel status.

  • Fixed Issue 3007558: APP_HTTPS detected on BITDEFENDER Flow.

    HTTPS rule enforced instead of Bit Defender.

  • Fixed Issue 3179989: The memory for Load Balancer access log is not initialized and the Load Balancer configuration is written into the edge syslog.

    No impact on the Load Balancer service.

  • Fixed Issue 3152062: E/W multicast doesn’t work without enabling PIM on a Tier0 uplink in AA.

    There is an inconsistency between the active/standby (A/S) and active/active (A/A) requirements.

    When there is an upgrade from A/S to A/A, PIM needs to be enabled explicitly on the Tier0 uplink for E-W traffic to work.

  • Fixed Issue 3118643: MP pre-upgrade checks show the warning "A backup has not been taken in last 2 days" even though a backup has been taken.

    No functional impact. The message can be ignored and the upgrade can proceed.

  • Fixed Issue 3100299: Mixed Group reevaluation is slow and causing huge group request queue to build up.

    There is a delay before groups become populated with members. Groups must be populated with members so that users can apply DFW rules. Because of this delay, users have to wait a long time to apply DFW rules with groups.

  • Fixed Issue 3083285: Error related to vCenter connection is not shown correctly in Transport Node create/update flow.

    The host transport node is not prepared, which can be seen via the UI/API, but the Transport Node Profile is shown as successfully applied.

  • Fixed Issue 3073518: Service profile for endpoint protection goes into a failed state and shows an error.

    When an Endpoint protection service is unregistered and then registered with NSX Manager, service profile create operation fails for endpoint protection and the service profile goes into a failed state.

  • Fixed Issue 3029159: Import of configuration failed due to the presence of Service Insertion feature entries on the Local Manager.

    NSX Federation does not support the Service Insertion feature. When you try to onboard a Local Manager site, which has Service Insertion VMs, in to the Global Manager, the following error is displayed in the UI: "Unable to import due to these unsupported features: Service Insertion."

  • Fixed Issue 2854139, 3296124: Continuous addition/removal of BGP routes into RIB for a topology where Tier0 SR on edge has multiple BGP neighbors and these BGP neighbors are sending ECMP prefixes to the Tier0 SR.

    Traffic drop for the prefixes that are getting continuously added/deleted.

  • Fixed Issue 3025104: Host shows "Failed" state when a restore is performed with a different IP and the same FQDN.

    When a restore is performed using a different IP for the MP nodes with the same FQDN, hosts are not able to connect to the MP nodes.

  • Fixed Issue 3311204: ManagementConfigModel table didn't get migrated to the new version after upgrading from 3.1 to 3.2.

    You need to manually set publish_fqdns to true again after the upgrade.
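
    The flag can be set back via the management configuration API. A minimal sketch, assuming the /api/v1/configs/management endpoint from the NSX API guide; the helper below only builds the PUT body, and the host and credentials needed to actually send it are not shown:

```python
# Sketch: build the body that re-enables publish_fqdns after the upgrade.
# The _revision value must be echoed back from the GET response, or NSX
# rejects the PUT with a revision-mismatch error.

def enable_publish_fqdns(current_config: dict) -> dict:
    """Return the PUT body for /api/v1/configs/management."""
    body = dict(current_config)
    body["publish_fqdns"] = True
    return body

# Example with a config shaped like a GET /api/v1/configs/management response:
cfg = {"publish_fqdns": False, "_revision": 3}
print(enable_publish_fqdns(cfg))  # {'publish_fqdns': True, '_revision': 3}
```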

  • Fixed Issue 3310165: Operational status not calculated for VRF.

    VRF status stays as DOWN on UI and doesn't get updated.

  • Fixed Issue 3309623: The running nsxaVim process on the ESX host goes into Error state and is thus unable to process any requests from NSXA (nsx-opsagent).

    Auto-recovery is in place for nsxaVim. Restarting nsx-opsagent also respawns/recovers the nsxaVim process.

  • Fixed Issue 3308910: Upgrading NSX from 3.1.x to 3.2.x, mac_learning_aging_time in the mac discovery profile is changed from 600 to 0.

    The MAC aging time specifies how long a MAC entry remains in the MAC address table before it ages out and is discarded. A value of 0 disables MAC aging, so the MAC table size might increase over time.
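
    If a MAC discovery profile was left with the value 0 after such an upgrade, the aging time can be set back. A minimal sketch; the PATCH endpoint path, field name, and 600-second default follow the description above but should be verified against your version's API guide:

```python
# Sketch: build a PATCH body that restores MAC aging on a MAC discovery
# profile whose mac_learning_aging_time was reset to 0 (aging disabled).
# Intended for PATCH /policy/api/v1/infra/mac-discovery-profiles/<id>;
# the endpoint and field name are assumptions to verify per NSX version.

DEFAULT_AGING_SECONDS = 600  # the pre-upgrade default mentioned above

def restore_mac_aging(profile: dict, aging: int = DEFAULT_AGING_SECONDS) -> dict:
    """Return a copy of the profile with aging re-enabled if it was 0."""
    patched = dict(profile)
    if patched.get("mac_learning_aging_time", 0) == 0:
        patched["mac_learning_aging_time"] = aging
    return patched

profile = {"id": "my-mac-profile", "mac_learning_aging_time": 0}
print(restore_mac_aging(profile)["mac_learning_aging_time"])  # 600
```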

  • Fixed Issue 3306725: Dataplane crash during collection of support bundle.

    Edge failover occurs.

  • Fixed Issue 3306183: The tx/rx drop of the kni-lrport increases and continues to increase on Bare Metal Edge due to CPU worker affinity overlap.

    Downloading slows when a drop occurs.

  • Fixed Issue 3305268: IDPS service did not start after appliance upgrade to 3.2.3.

    IDPS will be down.

  • Fixed Issue 3089238: Unable to register vCenter on NSX manager after the NSX-T extension is removed from vCenter on an NSXe setup.

    Unable to register vCenter on NSX Manager after removing the extension from vCenter. This disrupts communication between vCenter and the NSX Manager.

  • Fixed Issue 3304786: Unable to manually assign Edge nodes to Tier-1 gateways because display_name is not populated for Edge nodes on setups upgraded from 3.0 or 3.1 to 3.2 or later versions.

    You won't be able to identify the Edge nodes in the policy LR page.

  • Fixed Issue 3303418: The logical-routers/diagnosis API returns an error on some Edge nodes.

    Unable to use the API to get diagnostic information about logical routers.

  • Fixed Issue 3302402: NSX Manager upgrade stuck at Logical Migration step.

    NSX Manager upgrade can get blocked.

  • Fixed Issue 3299273: Search re-indexing takes a long time.

    You will not be able to see entities on the UI until re-indexing is completed.

  • Fixed Issue 3299044: When CCP restarts and performs a full sync with the Policy side, the order of ServiceAttachment and Service Chain inside the full sync transaction is not guaranteed.

    Service Insertion feature is not redirecting the traffic to the SVM.

  • Fixed Issue 3298917: FIPS compliance alert "QAT running on Edge node is Non-FIPS Compliant" should not be triggered in edge VMs.

    Report has incorrect false positive entry that indicates non-compliance for edge VM.

  • Fixed Issue 3245645: The Datapath CPU usage graph on the UI does not reflect the correct values.

    The Time series database values on the graph are not correct. This may impact diagnosis of high CPU usage over a period of time.

  • Fixed Issue 3296264: Datapath segmentation fault occurred, DP core dump observed.

    Edge crash observed; unable to process any packet due to DP crash.

  • Fixed Issue 3291370: Some VMs connected to ephemeral DVPortgroups do not receive distributed firewall rules.

    If virtual machines are powered on using host CLI, they will not receive configured distributed firewall rules.

  • Fixed Issue 3069003: Excessive LDAP operations on customer LDAP directory service when using nested LDAP groups.

    High load on LDAP directory service in cases where nested LDAP groups are used.

  • Fixed Issue 3290194: Upgrade Coordinator pre-check hangs for networks with a large number of edges.

    Upgrade page will be stuck and resume only when the Upgrade Coordinator restarts.

  • Fixed Issue 3290055: DHCP on shared VLAN segment fails with Service Insertion enabled.

    Unable to use DHCP with shared VLAN segment.

  • Fixed Issue 3288339: In the Quick Install flow, when the VDS uplink name is “NSX”, the uplink name does not have a number as a suffix, but the code expects it to contain a number.

    You will see General error in the UI: "Error: General error has occurred. (Error code: 100)".

  • Fixed Issue 3288062: There is intermittent Load Balancer traffic failure.

    Traffic fails intermittently.

  • Fixed Issue 3287778: When a forged TX + SINK port is used on a VM vNIC along with other VM vNIC ports that have MAC learning enabled on the same DVS host switch, and the host switch uses multiple uplinks in non-LAG teaming, packets received on an uplink and destined to a forged MAC that originally came from the sink port are dropped.

    Connectivity issues.

  • Fixed Issue 3284119: Field level validation errors occurred when editing the alarm setting of "GM To GM Latency Warning".

    You won't be able to change federation.gm_to_gm_latency_warning alarm's definition through the UI, but you can still use the API as a workaround.

  • Fixed Issue 3282747: Check realization failed during vRA BYOT migration.

    Migration stops without an option to proceed further.

  • Fixed Issue 3281829: CPU config values on the Transport Node Profile are missed during upgrade to 3.2.x and beyond.

    Even if Transport Node Profile is applied on the cluster, CPU config values in Transport Node Profiles and individual Transport Node will be different.

  • Fixed Issue 3281436: The DHCP backend server crashes when it is IPv4-only and receives an IPv6 DHCP packet.

    DHCP stops allocating IP addresses to client hosts.

  • Fixed Issue 3280495: The load balancer server keep-alive or NTLM feature doesn't work as expected intermittently.

    The load balancer server keep-alive feature works intermittently.

  • Fixed Issue 3279501: NSX Segments not showing up in vCenter.

    You will not be able to use NSX Segments on vCenter.

  • Fixed Issue 3278313: When updating compute manager to replace vCenter certificate thumbprint, 'Failed to enable trust' error is seen on NSX manager that was installed through vCenter's NSX embedded page.

    vCenter certificate thumbprint updates or other updates cannot be made on the compute manager.

  • Fixed Issue 3276461: The mdproxy service is using up the edge's disk as its log rotation doesn't work.

    The high disk usage leads to edge production down times.

  • Fixed Issue 3275406: Load Balancer nginx crash happens when there is TCP port conflict in SNAT and virtual server port.

    Some traffic failure happens.

  • Fixed Issue 3272988: Microsoft Windows server 2016 (Bare Metal Server) will have a BSOD due to the ovsim Kernel Driver.

    Microsoft Windows server 2016 restarts because of BSOD.

  • Fixed Issue 3272725: When loading the System Monitoring Dashboard, a permission error message is seen in the logs.

    You won't be able to see the Service Deployment widget in the System monitoring dashboard.

  • Fixed Issue 3262853: Overlay packets are dropped by the physical switch because the IP checksum is 0xFFFF; according to RFC 1624, it should be 0x0 instead.

    Some traffic is not going through.

  • Fixed Issue 3262810: Group update notification is not sent for groups with a Segment in the group definition when the VM is powered off.

    Even though group members are updated, no notification is received for such groups.

  • Fixed Issue 3262184: Dataplane core dump with Mellanox NIC.

    The edge is unavailable for packet forwarding until the dataplane has been restarted.

  • Fixed Issue 3261769: If there is some uplink not in a LAG, a false alarm can be triggered.

    False-positive transport_node_uplink_down alarm found on UI / API.

  • Fixed Issue 3261597: After ESX upgrade from 7.0.2 to 7.0 U3n, multiple VMs lost network connectivity.

    Lost network connectivity.

  • Fixed Issue 3260084: Stretched traffic was dropped because remote rtep-group is not in L2 span.

    Cross-site traffic not working.

  • Fixed Issue 3259906: Some http-profiles cannot be selected under Manager Load Balancer->Virtual Server UI if the number of http-profiles is larger than 1000.

    When there are more than 1000 application profiles, some are not available for selection in the UI.

  • Fixed Issue 3259749: A stale MAC entry pointing to the wrong remote rtep-group caused packet drops.

    Cross-site traffic can be dropped because of these stale entries.

  • Fixed Issue 3259568: SR not realized when a new VRF is configured.

    Traffic through VRF does not work.

  • Fixed Issue 3255245: Quick Install workflow gets stuck when no recommendation is provided in the recommendation step.

    Unable to prepare clusters for networking and security using quick start wizard.

  • Fixed Issue 3251559: Migration of NVDS switch with LAG configurations to CVDS fails.

    Cannot perform NVDS to CVDS migration.

  • Fixed Issue 3250276: After ESX host reboot, the Transport Node status is partially successful and won't succeed.

    If the ESX host is taken out of maintenance mode in this state and VMs are vMotioned to this host, a VM that is using VDR traffic will not be able to send network traffic out.

  • Fixed Issue 3248874: Unable to edit Edge bridge.

    You will not be able to update the Edge bridges from the UI.

  • Fixed Issue 3248866: NVDS to CVDS migration fails, when LAG configurations are mapped with vmks in the migrating NVDS switch.

    NVDS to CVDS migration cannot proceed.

  • Fixed Issue 3247896: Traceflow observations are not getting recorded for IPv6 traffic when traffic is getting forwarded by firewall.

    No functional impact.

  • Fixed Issue 3247810: NSX-T Manager Cluster Certificate Private Key visible in /var/log/syslog.

    NSX-T Manager Cluster Certificate Private Key visible in /var/log/syslog.

  • Fixed Issue 3242437: BGP Down alarm is raised with the reason "Edge is not ready" for all the BGP neighbors configured on tier-0, even when the BGP Config is disabled.

    No functional impact. However, the BGP DOWN alarms are observed in the UI even though BGP Config is disabled.

  • Fixed Issue 3242135: On the Manager UI, filtering on Tier-0/Tier-1 Logical Routers shows wrong result.

    You will see the filtered data for the next visited Tier-0/Tier-1 Logical Routers view.

  • Fixed Issue 3242132: The next hop of static route was not set correctly in Tier1 router.

    Route configuration is incorrect, which may lead to data loss.

  • Fixed Issue 3242008: NVDS to CVDS migration fails due to timeout at TN_UPDATE_WAIT stage of migration.

    You will not be able to migrate from NVDS to CVDS switch.

  • Fixed Issue 3241069: In NSX for vSphere to NSX-T migration, when multiple DR only T1 gateways are mapped to DLRs or DR-only T1 gateway is mapped along with its parent T0 gateway to DLRs, edge migration fails.

    Edge migration fails.

  • Fixed Issue 3239517: TCP traffic to L7 Load Balancer cannot be established when the client port is reused rapidly.

    The traffic to L7 Load Balancer fails intermittently.

  • Fixed Issue 3239140: T1 CSP subnets stop getting advertised to connected T0 if the edge cluster of T1 gateway is changed.

    North-South datapath breaks for T1 CSP subnets.

  • Fixed Issue 3237041: Migration Coordinator showing "not set" in connections for some edges.

    You cannot see connected edges in the UI in Define Topology stage.

  • Fixed Issue 3236358: nginx memory leak causes out of memory on the active edge.

    Many features on the edge are impacted.

  • Fixed Issue 3232033: Validation lacking for "any" subnet and le/ge values.

    If these values are not correctly validated, "any" will cause errors elsewhere.

  • Fixed Issue 3224339: Edge datapathd crashes when receiving fragmented IPv6 traffic (UDP) ending at edge's local CPU port.

    BGP sessions are impacted.

  • Fixed Issue 3223334: Unable to enable MON on HCX.

    You are not able to enable MON on HCX.

  • Fixed Issue 3222373: ESX host ramdisk is full due to a large number of errors in nsxaVim.err.

    Failure to deploy new VMs once the ramdisk is full.

  • Fixed Issue 3221768: The previously selected certificate goes away while searching and selecting another certificate in Virtual Server --> SNI Certificates dropdown.

    Not able to search and select multiple certificates in Virtual Server --> SNI Certificates dropdown.

  • Fixed Issue 3311568: Traffic passing through NSX Edge VM or a non-edge VM is impacted if the VM is rebooted after the NSX manager upgrade from 3.0.x to 3.2.x.

    Traffic passing through the affected VMs is impacted.

  • Fixed Issue 3313308: Dataplane service (dp-fp) core dump occurs, resulting in traffic drops.

    Traffic is impacted during the core dump period.

  • Fixed Issue 3314572: The realization state of an unconsumed DHCP Relay becomes "In progress" after it was changed.

    No functional impact.

  • Fixed Issue 3315269: VNI filter does not work for packet capture on NSX-T Edge nodes.

    Can't use "vni" keyword to filter packets.

  • Fixed Issue 3316361: VM communication is blocked after deleting Service Insertion in the NSX-T environment.

    After deploying Service Insertion and deleting Service Insertion on the DVS, the packets are dropped and the VM traffic is blocked.

  • Fixed Issue 3317152: Container to Container connection timeout after NSX-T upgrade.

    Dynamic group membership is wrong. DFW/Gateway Firewall that leverages the group may be impacted. Traffic may not be correctly allowed or dropped.

  • Fixed Issue 3319292: Incorrect route advertisements received by workloads on virtual IPv6 subnet with prefix not a multiple of 8.

    This can cause traffic disruptions if multiple subnets are masked to the same value; for example, if two /50 prefixes are masked to the same /48.

  • Fixed Issue 3332272: The DFW vMotion for DFW filter on destination failed and the port for the entity is disconnected.

    After host removal, stale dfw_vmotion_failure alarms from ESX nodes that no longer exist are still listed in the GET api/v1/alarms API response. Some alarms may have an unexpected empty entity_resource_type value in the response.
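
    The stale entries can be spotted programmatically in the alarms API response. A minimal sketch; the entity_resource_type field comes from the description above, while the "results" wrapper and "event_type" field follow the usual NSX list-response shape and are assumptions to verify against your version's API guide:

```python
# Sketch: find stale dfw_vmotion_failure alarms in a GET /api/v1/alarms
# response, i.e. entries with an empty entity_resource_type, as seen when
# the originating ESX node no longer exists.

def find_stale_dfw_vmotion_alarms(alarms_response: dict) -> list:
    """Return alarms that look stale per the symptom described above."""
    return [
        a for a in alarms_response.get("results", [])
        if a.get("event_type") == "dfw_vmotion_failure"
        and not a.get("entity_resource_type")
    ]

sample = {"results": [
    {"event_type": "dfw_vmotion_failure", "entity_resource_type": ""},
    {"event_type": "certificate_expired", "entity_resource_type": "Certificate"},
]}
print(len(find_stale_dfw_vmotion_alarms(sample)))  # 1
```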

  • Fixed Issue 3335018: Edge node over provisioned with load balancer capacity.

    Edge node ends up with over provisioned load balancer capacity.

  • Fixed Issue 3335384: NVDS to CVDS migration status remains in progress forever.

    NVDS to CVDS migration fails.

  • Fixed Issue 3335886: Back and forth vMotion led to incorrect deletion of logical port.

    DFW rules are not applied on the vNIC/port because the incorrect port deletion caused the rules not to be published on the port.

  • Fixed Issue 3337389: DHCPv6 relay does not work with third party DHCPv6 server when client not using MAC generating client ID.

    DHCPv6 client may not get its IP address.

  • Fixed Issue 3337410: TNP lost VMK Install/Uninstall mappings after NSX Manager upgrade.

    Incorrect VMK network associations on TNs.

  • Fixed Issue 3346506: NSX CLI command “get edge-cluster status" shows two Tunnel Endpoint in the case of bond configuration.

    On an NSX Edge transport node, after changing the profile from a multi Tunnel Endpoint profile to a bond/LAG profile, the "get edge-cluster status" CLI command still shows two local VTEP IPs. For one, the device info ('Device') is shown as the LAG; for the other, the device info ('Device') is not displayed.

  • Fixed Issue 3353131: When configuring the metadata proxy with FQDN-based server settings, the backend dataplane is broken and the status shows an error.

    The metadata service cannot be consumed via the metadata proxy. Subsequent updates of the metadata proxy also cannot take effect.

  • Fixed Issue 3365503: ESXi host encounters an ENSNetWorld PSOD if flow re-validation happens after the flow action's destination port gets disconnected.

    ESX PSOD.

  • Fixed Issue 3221760: Overlay traffic outage in LAG environment.

    Overlay traffic outage.

  • Fixed Issue 3219754: Kernel coredumps were not being generated on NSX Edge.

    Loss of all services on the node that crashes. Manual intervention required to restore service.

  • Fixed Issue 3219169: The DNS server config could not be fetched automatically on VIF interface in VLAN mode when doing vif-install on WinBMS.

    The related DNS server config on the VIF interface is lost after upgrading WinBMS in VLAN mode.

  • Fixed Issue 3218194: Backup configuration accepts loopback IP/NSX Manager itself to be used as a backup server.

    No functional impact but Manager UI will show "Missing backup" banner: You have not configured backups of NSX Manager.

  • Fixed Issue 3213242: SNMP agent has encountered out of memory issue.

    You may not see SNMP traps for alarms.

  • Fixed Issue 3210686: Kernel memory leak when vdl2 disconnects port from control plane.

    With this issue, there could be a PSOD when an IPv6-enabled VM is vMotioned. Live upgrade will also fail due to the memory leak.

  • Fixed Issue 3187543: Unable to configure LDAP authentication using SSL.

    You are unable to test LDAP connectivity if you have configured LDAP authentication to use LDAPS or LDAP-with-StartTLS. This only affects the "Check Status" button in the "Set LDAP Server" screen.

  • Fixed Issue 3186156: Pagination is not working or is missing in the Backup Overview grid in the UI for LM and GM, and in the GM Backup overview API.

    Pagination does not work in the UI or the GM API.

  • Fixed Issue 3185052: CLI get physical-ports incorrectly shows ADMIN_STATUS down for bond status.

    No bond use/traffic impact as bond admin status isn't used.

  • Fixed Issue 3183515: vdl2 disconnects port from controller under frequent IP discovery update.

    Port MAC change will no longer be updated to controller.

  • Fixed Issue 3182682: When local mac address matches a remote address learned from L2VPN, address flapping occurs.

    Traffic will be impacted.

  • Fixed Issue 3179208: VM's ports get blocked after installing NSX security only on cluster.

    The security port could get blocked.

  • Fixed Issue 3165849: Intermittent traffic loss at edge cutover for vra routed networks going via downlink cutover.

    Intermittent outage.

  • Fixed Issue 3165799: NSX for vSphere to NSX-T host migration failed when a vxlan VMkernel adapter is in a vSphere standard switch.

    Migration stopped in the middle.

  • Fixed Issue 3164437: The prefix_list limitation in capacity dashboard for large appliance should be 4200, instead of 4000.

    On a large appliance, you see the wrong limit in the capacity dashboard and a false capacity alarm when the system-wide prefix_list count exceeds 4000.

  • Fixed Issue 3162576: NSX for vSphere to NSX-T migrator in DFW-Host-Workload migration mode migrated NSX-V hosts to NSX-T transport nodes without any NSX host-switch.

    NSX-T is installed in hosts but no NSX feature is used by the VMs because VMs are still connected to VLAN DVPGs. Some VMs may lose DFW after the migration.

  • Fixed Issue 3157163: On one Edge Connected Tier1 routes appear as Static routes on Tier0.

    When you perform a failover between two active/standby Edge nodes, there are some missing routes/subnets on the TOR.

  • Fixed Issue 3153468: When you try to search for an IP pool above 1000 records, you are not able to find it.

    This is only when more than 1000 IP pools are available in the system.

  • Fixed Issue 3152084: The traceflow topology diagram showed the wrong segment.

    The troubleshooting process may not proceed in the correct direction due to wrong segment information.

  • Fixed Issue 3129560: Python and login core dumps are generated if the Print Screen key is pressed on the VM console (vCenter/ESXi console).

    nsx-cli session will stop.

  • Fixed Issue 3125438: Transport Node status still shows UNKNOWN/DOWN when the Transport Node is up.

    You will see the wrong Transport Node status.

  • Fixed Issue 3119289: del nsx fails due to property 'com.vmware.common.opaqueDvs.status.component.vswitch' being set as RUNTIME property with value as 'down'.

    del nsx failure. Unable to remove NSX from ESXi host transport node.

  • Fixed Issue 3117763: snmpd process stopped and didn't recover automatically on Manager.

    You cannot use SNMP monitoring on NSX Manager because the snmpd process stops and does not recover automatically.

  • Fixed Issue 3117695: When you configure the RTEP (Remote Tunnel Endpoint) configuration for an edge cluster using the 'System -> Quick Start -> Configure Edge Node for Stretch Networking' option in the UI, it may appear that the RTEP is not configured.

    Despite completing the RTEP configuration, the user interface indicates that the configuration has not been completed.

  • Fixed Issue 3117124: HaMode is shown as Active-standby even if no edge cluster is configured on Tier1.

    A visual issue only. There is no operational issue.

  • Fixed Issue 3113203: Uninstall failure due to multiple simultaneous attempts.

    Unable to complete the NSX uninstallation.

  • Fixed Issue 3112374: Security-only uninstall fails because logical ports (LPs) are part of an NSGroup.

    Uninstall not possible.

  • Fixed Issue 3109895: You may encounter Null pointer exception errors when trying to see group members or reverse group look-ups (i.e., for a given entity, find related group).

    This impacts the group membership and association APIs that the UI uses to show group members and associations.

  • Fixed Issue 3109192: If a BGP neighbor is not directly connected on the uplink and does not have a source IP configured, then "bgp_down" alarm can be raised on the UI if the edge op-state flaps.

    There is no functionality impact; the alarm is raised only from edges where this BGP peer is not directly connected.

  • Fixed Issue 3108028: In UI, for 'ens0' network of Service Deployment, 'NetworkKey' is shown instead of 'NetworkName'.

    No functional impact. Only 'NetworkKey' will be shown instead of 'NetworkName' for 'ens0' network for all Service Deployments in the UI.

  • Fixed Issue 3106536: The port used in LTA session was disabled during on-going LTA session, which causes PSOD.

    The ESXi server experiences a PSOD (purple screen) crash.

  • Fixed Issue 3103647: The in-band mgmt interface was lost when configuring Edge Datapath properties, e.g., rx ring size.

    The in-band mgmt interface is lost immediately after configuring Datapath properties like rx ring size.

  • Fixed Issue 3101459: Tier1 state API is slow.

    Calls to the Tier-1 state API respond slowly (more than 50 seconds).

  • Fixed Issue 3101405: FW Rule or DNAT Rule updates by removing services added earlier may not work. The intended service update may not be populated to ESX or EdgeNode correctly.

    FW Rule or DNAT Rule updates by removing services added earlier may not work.

  • Fixed Issue 3100672: Node id change caused traceflow observation display exception.

    You are unable to check traceflow observation result; get traceflow observation API will return error "General error has occurred".

  • Fixed Issue 3098283: False Manager FQDN Lookup Failure Alarm raised in the system.

    You see the false "Manager FQDN Lookup Failure" alarm in the alarm dashboard even though FQDN lookup is working in the system; the alarm should not be raised.

  • Fixed Issue 3096626: When force delete of TN is performed, VTEP labels and IP address are not released.

    You will run out of IPs even though no host is actually using them.

  • Fixed Issue 3040604: Incorrect LTA observations when traffic traverses multiple overlay uplinks.

    No functional impact. However, traffic that is actually forwarded is reported as delivered: there is no way to distinguish a packet actually delivered at the uplink from a packet that was forwarded but incorrectly marked as 'delivered'.

  • Fixed Issue 3031006: When LAG is present and security-only install is used, LAG status shows "degraded".

    LAG status shows degraded in the NSX UI.

  • Fixed Issue 2975197: Baremetal Edge Intel fastpath interfaces do not come up.

    Datapath non-functional.

  • Fixed Issue 2868382: Selecting 'skip' for OSPF feedback requests in NSX for vSphere to NSX-T migration caused a UI display issue.

    Migration is blocked in the first step.

  • Fixed Issue 2756004: Incorrect logic in "NVDS Uplink Down" alarm calculation causes false alarm.

    False-positive "NVDS Uplink Down" alarm found on UI / API.

  • Fixed Issue 3321586: Rollback to Unified Appliance does not work when upgrade fails. UI becomes inaccessible and MP service is down.

    Intermittent upgrade failures.

  • Fixed Issue 3217386: Event log scraping fails due to a concurrency issue.

    IDFW doesn't work when event log scraping is used.

  • Fixed Issue 3261430: NSX Manager upgrade gets marked as successful if reset plan API for MP component gets called after failed MP upgrade.

    NSX Manager upgrade incorrectly gets shown as successful even if MP upgrade had failed at data migration step, and side effects are seen during other operations, like NSX-T edge cluster expansion failure.

  • Fixed Issue 3273732: In Federation, full sync from Local Manager to the Global Manager does not complete.

    Some deleted resources remain grayed out and are not permanently cleaned up. Other resources from the Local Manager, such as SegmentPorts, are not properly updated and become stale on the Global Manager.

  • Fixed Issue 3242567: When there are frequent config churns (e.g., 30 seconds or less) on context profile causing it to be deleted and recreated, VDPI may crash and restart.

    There is brief traffic interruption to the traffic subjected to L7 DFW.

  • Fixed Issue 3180860: SNAT port exhaustion alarm continuously comes up on the NSX manager when there is no problem with SNAT ports exhaustion.

    There is no functional or traffic impact.

  • Fixed Issue 3353199: User is allowed to select signature details not present in the active signature bundle.

    You might think that filtering is not working correctly for signature details; details of non-active signatures can be selected.

  • Fixed Issue 3337942: Realization error seen after upgrading to 3.2.3.2/4.1.1 for pre-existing IDPS sections created in 3.1.3.x releases.

    After upgrade, a realization error is seen for pre-existing IDPS sections, stating that the IDPS Rule is already present in the system. Post upgrade, any updates to such existing IDPS sections and rules are not realized. However, pre-existing rules continue to work as they did before the upgrade.

  • Fixed Issue 3336492: Updating the rule with scope as null using PATCH Gateway Policy API (/infra/domains/default/gateway-policies/test) was updating the rule scope as "ANY" in its GET request.

    Rule is not visible in the UI.

  • Fixed Issue 3330950: DFW L2 MAC address was not programmed into kernel for some filters during bulk VM vMotion or power on.

    L2 traffic hits the wrong L2 rule, impacting traffic for filters with a zero MAC.

  • Fixed Issue 3327879: Unable to view the Standby GM cluster/node status from the Active GM Location Manager tab within the UI.

    Unable to view the Locations Manager page from Active GM login. The Standby site does not show the cluster status or the status of the 3 standby managers.

  • Fixed Issue 3326747: Service instance page is stuck in loading.

    You cannot check service instances from the UI.

  • Fixed Issue 3317410: MP API validation does not check whether a user-provided MP Rule ID belongs to the Section being updated/revised.

    Rule is not created in the desired Section.

  • Fixed Issue 3325439: CCP doesn't support retry for entering/exiting maintenance mode.

    If CCP fails to enter/exit maintenance mode during rolling upgrade, you may have to roll back and restart the upgrade from the beginning.

  • Fixed Issue 3314125: Syslog setting can get deleted even though central-config was disabled.

    Loss of configuration.

  • Fixed Issue 3310170: The NSX load balancer crashes with SNAT disabled and server keep-alives enabled.

    The HTTP and HTTPS-based applications will be impacted.

  • Fixed Issue 3300884: Bond NIC status shows down incorrectly resulting in a false alarm.

    There is no functional impact as these are false positives.

  • Fixed Issue 3297854: The NIC throughput alarms do not work correctly on Edge VM, PCG or Autonomous Edge.

    False positive alarms.

  • Fixed Issue 3285489: Edge node settings mismatch alarms are raised.

    False alarm.

  • Fixed Issue 3260245: There are inconsistencies between getting CPU stats from datapath and CPU stats from Linux.

    If the datapath is busy, the stats come from Linux; on an Edge VM, the datapath cores are shown at 100 percent.

  • Fixed Issue 3255704: On a Bare Metal Edge with bond and L2 bridge configuration, BFD goes down and the edge agent crashes.

    BFD down. Edges lose connectivity with other transport nodes.

  • Fixed Issue 3247896: Traceflow observations are not getting recorded for IPv6 traffic when traffic is getting forwarded by firewall.

    No functional impact.

  • Fixed Issue 3239140: T1 CSP subnets stop getting advertised to connected T0 if the edge cluster of T1 gateway is changed.

    North-South datapath will break for T1 CSP subnets.

  • Fixed Issue 3237041: Migration Coordinator showing "not set" in connections for some edges.

    You cannot see connected edges in UI in Define Topology stage.

  • Fixed Issue 3229594: Wrong linkdown alarm of a bare metal edge is fired every 4 hours on NSX Manager.

    This is a false-positive alarm, caused by a rare collision between the alarm event UUID and the transport node UUID.

  • Fixed Issue 3228653: Edge cluster creation fails because a Transport Node internal flow causes the config state to flap between SUCCESS and IN_PROGRESS.

    No impact on traffic.

  • Fixed Issue 3214446: Site sync status not showing when using vIDM /LDAP user account.

    Site sync status not showing when using vIDM /LDAP user account.

  • Fixed Issue 3213923: The MAC-to-VLAN lswitch port entry in the VLAN FDB is not removed when the VLAN switch port is removed, which can cause a core dump when issuing the CLI command "get host-switch vlan-table".

    The datapath process generates a core dump.

  • Fixed Issue 3184235: Static routes are unstable after upgrading NSX from 3.1.3 to 3.2.2.

    VMs have network outage.

  • Fixed Issue 3163803: In the EVPN Route-Server environment, traffic from VNF to DCGW is dropped every 10 minutes as the Type-2 route is withdrawn from Edge.

    The traffic from VNF to DCGW is dropped for one second or less every 10 minutes.

  • Fixed Issue 3119464: In Bare Metal Edge, interfaces with Intel XXV710 NICs and AOC phys are down.

    After any change in configuration (e.g., an MTU update), Intel XXV710 NIC ports remain down. This has been observed with 25G links and AOC phys and fiber cabling.

  • Fixed Issue 3116294: Rule with nested group does not work as expected on hosts.

    Traffic not being allowed or skipped correctly.

  • Fixed Issue 3090983: A stale LS can remain in the system when different nsx-manager instances invoke the provider for segment binding deletion and segment deletion.

    Stale LS will remain in the system.

  • Fixed Issue 3111794: Entering a logical router name containing multibyte characters causes the “get logical routers” CLI command to fail.

    The "get logical routers" CLI command errors out.

  • Fixed Issue 3060219: vMotion fails when the NFS mapped to the scratch location is disconnected.

    vMotion failure.

  • Fixed Issue 3044773: IDPS Signature Download will not work if NSX Manager is configured with HTTPS Proxy with certificate.

    IDPS On-demand and Auto signature download will not work.

  • Fixed Issue 2946990: Slow memory leak in auth server (/etc/init.d/proxy) for local user authentication.

    API responses will get slower.

  • Fixed Issue 3353719: NSX manager has two certificates associated with same cluster id. This causes issues with NCP (Tanzu).

    NCP crashes, blocking Tanzu communication with NSX and impacting overall solution networking.

  • Fixed Issue 3274058: When deploying ALB controller, "." is replaced with "-" in the hostname.

    When deploying ALB controller, "." is replaced with "-" in the hostname.

  • Fixed Issue 3299235: PSOD in hosts with ESX 7.0U3 when using ENS.

    ESX will crash.

  • Fixed Issue 3261832: Subnet deletion fails for a subnet with no allocations when the IpPool has multiple subnets and some of them have IP allocations.

    Subnet deletion will not work.

  • Fixed Issue 3290636: DFW intermittently drops TCP packets for long lived connections.

    Occasional communication failure between VMs.

  • Fixed Issue 3305774: The ipset listing API default page size is seen as 50 though the API documentation shows it to be 1000.

    No functional impact.

  • Fixed Issue 3302553: MP APIs to update gateway firewall section with rules failing with inaccurate validation.

    You will not be able to add/remove firewall rules using section level update MP API.

  • Fixed Issue 3296767/3392015: NSX Policy "Deny_Inter" is getting its sequence number changed without user intervention. Because of this, the policies, rules, and orders are changed, preventing users from accessing their applications.

    The firewall rule execution may impact the traffic flow.

  • Fixed Issue 3289756: Edge node disk partition /mnt/ids has reached 90% alarm.

    IDPS fails to write the extracted files. Without the extracted files, malware file analysis is impacted.

  • Fixed Issue 3285429: Audit logs are not getting logged for internal calls in Policy Hierarchical API.

    You will not be able to see the audit logs for internal calls made using Policy HAPI.

  • Fixed Issue 3271441: REST API to NSX Manager intermittently fails while using it with vIDM authentication.

    vRA not working as expected due to 403 error codes returned from the NSX Manager.

  • Fixed Issue 3271233: Data migration issue with session timers on upgrade from 3.1.3.

    Timeout values for firewall session timer profile were changing.

  • Fixed Issue 3270603: The "IP Block Usage Very High" alarm is triggered even when IpBlock usage is low (as soon as usage reaches 50%).

    No functional impact.

  • Fixed Issue 3262310: Cluster-based SVM deployment fails if the OVF is hosted on a Mongoose server.

    Cluster-based SVM deployment fails.

  • Fixed Issue 3259540: If service cores are configured when needed, datapathd will crash.

    Traffic is interrupted.

  • Fixed Issue 3259164: Event log Scraping doesn't work in federation setups.

    IDFW doesn't work.

  • Fixed Issue 3255053: When FirewallExcludeList intent table has the same internal key as mp entry key, InternalFirewallExcludeList's mp entry will be overwritten again.

    FirewallExcludeList related functionalities will not work.

  • Fixed Issue 3249481: Memory for the pffqdnsyncpl is above 85% even in the absence of DNS traffic.

    Because this happens on the backup node during sync, traffic is not impacted; however, if the backup becomes active, sync will fail.

  • Fixed Issue 3248151: When a service is deleted, the partner_channel_down alarm is raised and it remains open even after the service is redeployed.

    An alarm corresponding to the deleted service instance persists.

  • Fixed Issue 3246397: Service deployment is stuck at 'Upgrade in progress'.

    No functional impact. Deployment is completed and redirection is working as expected.

  • Fixed Issue 3243603: This error occurs when an object being migrated has the same policy intent as the object that is marked for deletion: "Segment port is marked for deletion. Either use another path or wait for the purge cycle."

    You will have to cancel the current promotion attempt and retry the MP-to-Policy promotion.

  • Fixed Issue 3238442: Edge crashed during failover operation.

    This occurs when a failover happens and a DNS request packet is received on an existing state before the connection expires. There may be traffic loss prior to system recovery.

  • Fixed Issue 3237776: Duplicate VMs reported in NSX.

    There is no user impact.

  • Fixed Issue 3233259: FirewallRule addition in a FirewallSection API fails or takes longer time than expected to be realized on hosts or Edge nodes.

    Unable to add new firewall configuration or delay in rule realization to hosts.

  • Fixed Issue 3217946: Search option in workflow Add Edge Node > Configure NSX > Teaming Policy Uplink Mapping does not work properly.

    Incorrect filtering results of teaming policies.

  • Fixed Issue 3186544: TCP RSTs generated for the URL filtering don't use the proper NAT IP but the original IP.

    The generated TCP RST packets don't have the applicable NAT IP address.

  • Fixed Issue 3186101: PXE boot failing with NSX-T Service Insertion.

    Cannot use service insertion for N-S PXE traffic.

  • Fixed Issue 3185286: New rules could not be created because ruleID generation failed.

    Cannot create new rules.

  • Fixed Issue 3179001: Firewall Section lock/unlock does not work.

    Existing locked sections cannot be unlocked by either the user who created them or the default enterprise admin account.

  • Fixed Issue 3178669: IDPS engine process crashes with a mix of http, smb traffic.

    Detection/prevention functionality down for a short time.

  • Fixed Issue 3166361: ConfigSpanMsg for DISTRIBUTED_FIREWALL is empty and disrupts the normal functionalities for firewall exclude list, or edges.

    Firewall ExcludeList and edge functionalities might not work as expected.

  • Fixed Issue 3160528: Memory exhaustion in nsx-opsagent daemon.

    All activities of opsagent will restart.

  • Fixed Issue 3159359: The TCP traffic from service link to VTI can get stuck with 64k window.

    Transfers of files larger than 64 KB over TCP do not succeed.

  • Fixed Issue 3154551: Tier-1/Tier-0 realization errors after upgrade when using service insertion.

    Tier-0s/Tier-1s won't be realized.

  • Fixed Issue 3088665: The inner IP packet carried by Geneve encap has zero IP checksum.

    You may see error counters going up for drivers that have deep packet inspection capability.

  • Fixed Issue 3095501: DFW filter applied to Service VM.

    The Checkpoint SVM has DFW applied to its interface even when it is added into the system exclusion list.

  • Fixed Issue 3096744: The value of tcp_strict flag is shown as false, when not supplied, using the revise policy API.

    Inconsistent value of tcp_strict flag in Gateway Firewall when created via two different APIs.

  • Fixed Issue 3112614: The vmState shows Detached due to opsagent failed to get VIF from NestDB.

    The logical port of the virtual machine is set to "DOWN".

  • Fixed Issue 3104576: Gateway firewall rule stats throws an error.

    You won't be able to see the statistics.

  • Fixed Issue 3102093: REST API on logical port drop stats was excessive and triggered vRNI drop threshold.

    vRNI features leveraging NSX drop stats will trigger alarms.

  • Fixed Issue 3097874: After 3.1 to 3.2 upgrade, endpoint rules update may fail to succeed.

    You can neither update an existing rule nor create a new rule. Existing protection still continues to work. Old endpoint rules keep on working but the change is not effective (e.g., new groups cannot be added to an existing endpoint rule).

  • Fixed Issue 3093269: Rule creation for TargetType Antrea fails with version mismatch error.

    Unable to create a Security rule.

  • Fixed Issue 3076569: NSX-T DFW sections in an unknown/In Progress state during migration from NSX for vSphere to NSX-T.

    DFW rules will not apply to the workload VMs.

  • Fixed Issue 3025203: nsx-syslog.log shows error message "failed to find transaction id".

    No user impact, other than log files with the above mentioned log lines.

Known Issues

  • Issue 3401745: Reordering security policies fails internally with sequence_number -1.

    When attempting to create the 200th security policy in NSX using the action=revise&operation=insert_bottom API, the API fails.

    Workaround: Create the policy without revising sequences, and specify the sequence number explicitly. Use the revise API only to reorder policies that already exist.
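
    A minimal sketch of the workaround, assuming the NSX Policy API body for a security policy (the policy names and spacing values here are hypothetical): assigning explicit, widely spaced sequence_number values at creation time leaves room to insert later policies without calling the revise API.

```python
# Sketch: build security-policy request bodies with explicit, widely
# spaced sequence numbers so the revise API is only needed for true
# reorders. Names/values are illustrative, not a definitive payload.

def policy_payload(display_name: str, sequence_number: int) -> dict:
    """Build a security-policy body with an explicit ordering value."""
    return {
        "display_name": display_name,
        "sequence_number": sequence_number,  # explicit, not server-assigned
        "category": "Application",
        "rules": [],
    }

# Space sequence numbers by 100 so a later policy can be slotted
# between two existing ones without renumbering everything.
payloads = [
    policy_payload(f"policy-{i}", sequence_number=(i + 1) * 100)
    for i in range(3)
]

# Each payload would then be sent to the Policy API, e.g.:
#   PATCH https://<nsx-manager>/policy/api/v1/infra/domains/default/security-policies/policy-0
```

    Spacing the numbers (100, 200, 300, ...) rather than using consecutive values is what makes later inserts possible without a revise call.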

  • Issue 3395578: After uninstalling security from a cluster, the discovered DVPG is not deleted if it is a part of a dynamic group.

    Even after the group is deleted, the discovered DVPG is not deleted, which might result in stale discovered DVPGs on the MP.

    Workaround:

    Perform the following steps:

    1. Execute the following command on the NSX node to get 'stringId' of the stale DVPG typed segments.

        corfu_tool_runner.py -t Segment -n nsx -o showTable

    2. Execute the following command on the NSX node to delete the stale DVPG typed segment entries from the Segment table.

        corfu_tool_runner.py -o deleteRecord -n nsx -t Segment --keyToDelete '{"stringId": "***"}'

    3. Execute the following command on the NSX node nsxcli shell.

        start search resync all

  • Issue 3422772: ESXi host might encounter a PSOD when L7 DFW rules are configured or security intelligence is deployed in the environment.

    Workaround: For details, refer to KB article 374611.

  • Issue 3411866: Malware prevention events overload the PostgreSQL database, resulting in the unavailability of the NSX Application Platform UI.

    When Malware Prevention events exceed two million in less than 14 days, the PostgreSQL database becomes overloaded. As a result, the NSX Application Platform UI shows the UNAVAILABLE state, and new Malware Prevention events do not appear on the UI.

    Workaround: Delete the records from the PostgreSQL database and vacuum the database to resolve the problem. See KB article 320807.

  • Issue 3396277: Upgrade integration issues due to download.vmware.com site decommission.

    Some integration issues occur because download.vmware.com is no longer available.

    • UI Notifications listing the NSX releases available for upgrade will not be available.

    • You will not be able to download the release binaries automatically to the NSX appliance.

    • The NSX pre-upgrade checks bundle (PUB), that is, asynchronous prechecks or dynamic prechecks, will not be downloaded automatically to the NSX appliance.

    Workaround: Before upgrading, refer to the knowledge base article 372634 for details.

  • Issue 3391130: Onboarding NSX Federation 4.2.0 Local Manager to 4.1.1 Global Manager fails with compatibility error.

    The issue occurs when GM is 4.1.1 and the LM version of NSX is greater than the GM.

    Workaround: Upgrade Global Manager version to NSX Federation 4.1.2.

  • Issue 3410849: Users can delete policies/sections in the filtered view, which could unintentionally delete rules not visible in the filtered result set.

    Filtered views show only a subset of rules. Deleting an entire policy based on a filtered view could unintentionally remove rules that are not visible in the filtered result set.

  • Issue 3215655: While upgrading the NSX Application Platform, some periodic cronjobs might not run if an older repository URL is blocked before the repository URL is updated to point to the new repository that contains the uploaded target version charts and images.

    ImagePullBackoff error after the old repository URL is blocked before the repository URL is updated to point to the new repository. The NSX Application Platform upgrade might complete but certain periodic NSX Intelligence cron jobs might not be able to run after the upgrade completes.

    Workaround: Log in to the NSX Manager and use the following command to manually delete the failed or stuck jobs.

    napp-k delete job <job-name>

    Also, avoid blocking access to an older repository URL before the NSX Application Platform upgrade has completed.

  • Issue 2389691: Publish recommendation job fails with error "request payload size exceeds the permitted limit, max 2,000 objects are allowed per request."

    If you try to publish a single recommendation job that contains more than 2,000 objects, it will fail with this error.

    Workaround: Reduce the number of objects in the recommendation job to fewer than 2,000 and retry the publication.
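
    The workaround amounts to client-side chunking before publishing. A minimal illustration (the 2,000-object limit comes from the error message; the helper name is hypothetical):

```python
# Sketch: split a recommendation's objects into publishable chunks of
# at most 2,000, the per-request limit reported by the error message.

MAX_OBJECTS_PER_REQUEST = 2000

def chunk_objects(objects: list, limit: int = MAX_OBJECTS_PER_REQUEST) -> list:
    """Split 'objects' into consecutive chunks no larger than 'limit'."""
    return [objects[i:i + limit] for i in range(0, len(objects), limit)]

# e.g., a 4,500-object recommendation becomes three publish requests
chunks = chunk_objects(list(range(4500)))
```

    Each chunk would then be published as a separate recommendation job.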

  • Issue 3364365: NSX upgrade fails due to the kernel module load failure.

    A live upgrade or maintenance upgrade will fail, and the system will require a reboot, which increases the upgrade time.

  • Issue 3357794: Some bare metal Edges that have in-band management configured might lose management connectivity when they enter maintenance mode.

    If you are performing an Edge upgrade, management connectivity might be lost when the Edge enters maintenance mode at the beginning of the upgrade, which will cause the upgrade to fail.

  • Issue 3380679: DPU fail over due to uplink down impacts the TCP traffic.

    If the distributed firewall Policy is set to "TCP Strict" mode, DPU fail over due to uplink status impacts the TCP traffic.

    Workaround: In the NSX Manager distributed firewall policy, disable and enable the "TCP Strict" mode to recover or resume the traffic.

  • Issue 3398452: The capacity dashboard in an X-large deployment incorrectly shows the label “Maximum capacity displayed above is for large appliance only”.

    In the capacity dashboard for X-large setups, the footer incorrectly shows the label “Maximum capacity displayed above is for large appliance only”.

    No workaround required as this bug has no functional impact.

  • Issue 3403889: Bridge loop detection alarm is set as Medium level instead of Critical level.

    To properly alert customers, the bridge loop detection alarm should be set to Critical instead of Medium.

  • Issue 3377155: API response for creating a DFW Security Policy takes a long time (~275 seconds) due to duplicate Groups validation.

    Create security policies in chunks, since creating full-scale policies with rules can take longer on large-scale setups.

  • Issue 3391552: High connection rates and concurrent configuration changes, including group membership and firewall rule updates, can cause NSX Edge to run out of memory.

    New sessions will be impacted.

  • Issue 3384652: During upgrade, when existing DFW flows are migrated from 3.2.4 to 4.2.0, the flows are not classified correctly resulting in matching incorrect rules.

    Existing L7 flows might hit incorrect rules during and after the upgrade.

    Workaround: None. New flows after the migration will not have this problem.

  • Issue 3361383: During high load when many files are downloaded in a short period (for example, when a guest OS upgrade is performed), the Advanced Threat Prevention feature may take excessive time to analyze and provide results of file inspection.

    During this time the result of malware analysis is delayed. During extreme load it might take from 1 to 4 hours for results to arrive.

    Workaround: None. Wait for the load event to end; malware analysis will then resume normally.

  • Issue 3332991: 0.0.0.0 IP address is inaccurately treated as 0.0.0.0/0 [ANY] in DFW rules.

    Firewall rules with zero IPs (both v4 and v6) in source or destination fields were treated and enforced as ANY match.

    Workaround: There is no good workaround; the zero IP address has never been supported in NSX.

  • Issue 3393742: Malware SVM doesn’t get IP from static pool.

    On the MPS Service Deployment UI screen, choose Static IP allocation. Do not select any IP Pool from the list and choose to deploy.

  • Issue 3402184: Some TLS Inspection stats will be reported improperly on NAPP.

    Gateway Firewall TLS stats are not updated properly during the failover scenarios of the Active-Standby edge deployments. ACTIVE-STANDBY failover stats will reset to 0.

  • Issue 3276632: IPv4/IPv6 BGP sessions fail to establish due to IPv4/IPv6 addresses missing on the interfaces.

    IPv4/IPv6 BGP sessions are stuck in an idle state. That is, sessions are not established.

    Traffic through the problematic BGP session might be disrupted. However, BGP would gracefully restart on its own.

    To recover the missing IPv4/IPv6 addresses, you can rescan the interfaces by running the following commands on the edge CLI:

    Edge> get logical-routers
    Edge> vrf <vrf_id of SERVICE_ROUTER_TIER0>
    Edge(tier0_sr)> set debug
    Edge(tier0_sr)> start rescan interfaces
    Edge(tier0_sr)> exit

  • Issue 3272782: Post host upgrade from baseline remediation, the TN state of hosts is shown as install failed with errors in the 'Configuration complete' step. The error message shows "Node has invalid version 4.1.2.0.0-8.0.22293677 of software nsx-monitoring" for all builtin_ids of the host.

    If you monitor the status of the OS upgrade through automation, incorrect reporting may be shown temporarily. The issue can be fixed by using the same resolver workflow that is followed when host TN creation fails: on the UI, click the Install Failed status of the host; a popup appears with the error message; click Resolve.

    Workaround: None.

  • Issue 3298108: During maintenance mode upgrade to NSX 4.1.2 with ESX version at 8.* or ESX version upgrade to 8.* with NSX version at 4.1.2, underlay gateway information is lost, resulting in overlay datapath outage.

    Downtime due to overlay VM traffic outage may occur.

    Workaround: See knowledge base article 95306 for details.

  • Issue 3273294: The member in a group uses short ipv6 address format, but in earlier releases long format address is used.

    There is no functional/security impact. It is a visibility related change of behavior.

    Workaround: None.

  • Issue 3268012: The special wildcard character "^" in Custom Fully Qualified Domain Name (FQDN) values is available starting from GM version 4.1.2. In federation deployments where LMs/sites are on lower versions, GM-created firewall rules whose context profiles use Custom FQDNs containing "^" will have nondeterministic behavior on the datapath.

    Nondeterministic datapath behavior for GM-created firewall rules whose context profiles use Custom FQDNs containing "^".

    Workaround:

    1. If feasible, on the 4.1.2 GM, remove "^" from the custom FQDN, or remove or update the context profile consuming that custom FQDN, or the rules consuming the context profile.

    2. If step 1 is not feasible, upgrade the lower version LMs (4.1.1) to the 4.1.2 version.

  • Issue 3242530: New NSX-T Segments are not appearing in vCenter.

    Unable to deploy new segments.

    Workaround: Export and import DVS without preserving DVS IDs.

  • Issue 3227013: Unknown status for TN is shown intermittently.

    The LM UI shows the wrong status of the TN.

    Workaround: None. The status correction happens without any intervention.

  • Issue 3278718: Failure in packet capture (PCAP) export if the PCAP file has not been received by the NSX Manager.

    Users will not be able to export the requested PCAPs as the request will fail.

  • Issue 3261593: IDFW alarms will be reset after upgrade.

    After upgrade, the existing alarms will be reset. These alarms will be re-created if the issues remain and the corresponding operations are performed.

    Workaround: None.

  • Issue 3233352: Request payload validations (including password strength) are bypassed on redeploy.

    Alarm cannot be resolved and edit of TN configuration is not allowed till the password is fixed.

    Workaround: Fix the invalidated passwords by using the API POST https://<nsx-manager>/api/v1/transport-nodes/<node-id>?action=addOrUpdatePlacementReferences documented in the NSX-T Data Center REST API Reference Guide.

  • Issue 3223107: When BGP receives the same prefix from multiple neighbors and the nexthop is also in the same subnet, the route keeps flapping and the user sees continuous addition and deletion of the prefix.

    Traffic drop for prefixes that are getting continuously added/deleted.

    Workaround: Add an inbound route map that filters the BGP prefix that is in the same subnet as the static route nexthop.
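
    As a conceptual sketch only, such an inbound filter looks as follows in generic FRR-style syntax. The subnet 192.0.2.0/24, the neighbor address, and the AS numbers are placeholders; on NSX, configure the equivalent prefix list and route map through the Tier-0 Gateway routing configuration in the UI or the Policy API, not this CLI:

    ```
    ip prefix-list NH-SUBNET seq 5 permit 192.0.2.0/24 le 32
    !
    route-map FILTER-NH-SUBNET deny 10
     match ip address prefix-list NH-SUBNET
    route-map FILTER-NH-SUBNET permit 20
    !
    router bgp 65001
     neighbor 192.0.2.1 remote-as 65002
     neighbor 192.0.2.1 route-map FILTER-NH-SUBNET in
    ```

    The deny clause drops the flapping prefix; the trailing permit clause keeps all other routes from the neighbor.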

  • Issue 3275502: The UDP checksum gets computed as 0x0 when it should be 0xFFFF according to RFC 768.

    Customers who use physical NICs that do not support hardware checksum offload will see intermittent traffic issues on UDP traffic over IPv6.
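
    For context, RFC 768 defines the UDP checksum as the one's complement of the one's-complement sum of the pseudo-header and data; a computed result of 0x0000 must be transmitted as 0xFFFF, because 0x0000 in the header means no checksum was computed. A minimal sketch of that rule (illustrative only, not NSX code):

    ```python
    def udp_checksum(words):
        """One's-complement checksum over 16-bit words, per RFC 768.

        A computed checksum of 0x0000 is transmitted as 0xFFFF, because
        0x0000 in the UDP header means "no checksum was computed".
        """
        total = 0
        for w in words:
            total += w
            total = (total & 0xFFFF) + (total >> 16)  # end-around carry fold
        checksum = ~total & 0xFFFF
        return checksum if checksum != 0 else 0xFFFF
    ```

    The issue described here is the missing final substitution step. Over IPv6 the UDP checksum is mandatory (RFC 8200), so receivers may drop datagrams that carry a 0x0000 checksum.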

  • Issue 3262712: An IPv4-compatible IPv6 address of the format ::<ipv4> gets converted to its equivalent IPv6 address in the effective membership API response.

    There is no functional or security impact. The effective membership API response for IPv4-compatible IPv6 addresses will be different.

    Workaround: None. This is a change of behavior introduced in NSX 4.1.2.
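
    The conversion can be reproduced with Python's standard ipaddress module; both spellings parse to the same address, and the API now reports the hexadecimal IPv6 form:

    ```python
    import ipaddress

    # An IPv4-compatible IPv6 address written as ::<ipv4> parses to the
    # same address as its pure hexadecimal IPv6 spelling.
    mixed = ipaddress.IPv6Address("::192.168.1.1")

    print(str(mixed))      # ::c0a8:101  (hexadecimal IPv6 form)
    print(mixed.exploded)  # 0000:0000:0000:0000:0000:0000:c0a8:0101
    ```

    Automation that matches on the ::<ipv4> literal should normalize addresses this way before comparing against the API response.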

  • Issue 3261528: An LB Admin is able to create a Tier-1 Gateway, but deleting that Tier-1 Gateway redirects to the login page and the LB Admin needs to log in again. After logging in, the Tier-1 Gateway is still not deleted from the list/table.

    LB Admins cannot delete the Tier-1 gateways created by them.

    Workaround:

    Log in as one of the following users:

    enterprise_admin, cloud_admin, site_reliability_engineer, network_engineer, security_engineer, org_admin, project_admin, or vpc_admin (vpc_admin to delete the security-config policy resource).

  • Issue 3236772: After removing the vIDM configuration, logs still show that background tasks are attempting to reach the invalid vIDM.

    Logs for NAPI will show the following error message after vIDM configuration is removed: Error reaching given VMware Identity Manager address <vIDM-FQDN> | [Errno -2] Name or service not known.

    Workaround: None.

  • Issue 2787353: Host transport node (TN) creation via vLCM workflow fails when host has undergone specific host movements in VC.

    Users will not be able to create a host TN.

    Workaround: Follow the regular resolver workflow for the vLCM cluster level from NSX UI.

  • Issue 3167100: New tunnel configuration takes several minutes to be observed in the UI.

    It takes several minutes to observe the new tunnel information after configuring the host node.

    Workaround: None.

  • Issue 3245183: The "join" CSM command adds CSM to the MP cluster, but does not add the Manager account on CSM.

    It will not be possible to continue with any other CSM work unless the Manager account is added on CSM.

    Workaround:

    1. Run the join command without including CSM login credentials.

      Example:

      join <manager-IP> cluster-id <MP-cluster-ID> username <MP-username> password <MP-password> thumbprint <MP-thumbprint>

    2. Add NSX Manager details in CSM through UI.

      a. Go to System -> Settings.

      b. Click Configure on the Associated NSX Node tile.

      c. Provide NSX Manager details (username, password, and thumbprint).

  • Issue 3214034: Internal T0-T1 transit subnet prefix change after tier-0 creation is not supported by the ESX datapath from Day 1.

    In cases where a tier-1 router is created without an SR, traffic loss can happen if the transit subnet IP prefix is changed.

    Workaround: Instead of changing the transit subnet IP prefix, delete and re-add the Logical Router Port with a new transit subnet IP.

  • Issue 3248603: NSX Manager File system is corrupted or goes into read only mode.

    In the /var/log/syslog, you may see log messages similar to the log lines below.

    2023-06-30T01:34:55.506234+00:00 nos-wld-nsxtmn02.vcf.netone.local kernel - - - [6869346.074509] sd 2:0:1:0: [sdb] tag#1 CDB: Write(10) 2a 00 04 af de e0 00 02 78 00

    2023-06-30T01:34:55.506238+00:00 nos-wld-nsxtmn02.vcf.netone.local kernel - - - [6869346.074512] print_req_error: 1 callbacks suppressed

    2023-06-30T01:34:55.506240+00:00 nos-wld-nsxtmn02.vcf.netone.local kernel - - - [6869346.074516] print_req_error: I/O error, dev sdb, sector 78634720

    2023-06-30T01:34:55.513497+00:00 nos-wld-nsxtmn02.vcf.netone.local kernel - - - [6869346.075123] EXT4-fs warning: 3 callbacks suppressed

    2023-06-30T01:34:55.513521+00:00 nos-wld-nsxtmn02.vcf.netone.local kernel - - - [6869346.075127] EXT4-fs warning (device dm-8): ext4_end_bio:323: I/O error 10 writing to inode 4194321 (offset 85286912 size 872448 starting block 9828828)

    The appliance may not work normally.

    Workaround: Refer to knowledge base article 330478 for details.

  • Issue 3145013: NCP pod deletion fails because of stale LogicalPorts on NSX.

    NCP pod deletion could become stuck.

    Workaround: Manually clean up the stale LogicalPorts on the underlying LogicalSwitch.

  • Issue 3010038: On a two-port LAG that serves Edge Uniform Passthrough (UPT) VMs, if the physical connection to one of the LAG ports is disconnected, the uplink will be down, but Virtual Functions (VFs) used by those UPT VMs will continue to be up and running as they get connectivity through the other LAG interface.

    No impact.

    Workaround: None.

  • Issue 2490064: Attempting to disable VMware Identity Manager with "External LB" toggled on does not work.

    After enabling VMware Identity Manager integration on NSX with "External LB", if you attempt to then disable integration by switching "External LB" off, after about a minute, the initial configuration will reappear and overwrite local changes.

    Workaround: When attempting to disable vIDM, do not toggle the External LB flag off; only toggle off vIDM Integration. This will cause that config to be saved to the database and synced to the other nodes.

  • Issue 2558576: Global Manager and Local Manager versions of a global profile definition can differ and might have an unknown behavior on Local Manager.

    Global DNS, session, or flood profiles created on Global Manager cannot be applied to a local group from the UI, but can be applied from the API. Hence, an API user can accidentally create profile binding maps and modify the global entity on Local Manager.

    Workaround: Use the UI to configure the system.

  • Issue 2871440: Workloads secured with NSX Security on vSphere dvPortGroups lose their security settings when they are vMotioned to a host connected to an NSX Manager that is down.

    For clusters installed with the NSX Security on vSphere dvPortGroups feature, VMs that are vMotioned to hosts connected to a downed NSX Manager do not have their DFW and security rules enforced. These security settings are re-enforced when connectivity to NSX Manager is re-established.

    Workaround: Avoid vMotion to affected hosts when NSX Manager is down. If other NSX Manager nodes are functioning, vMotion the VM to another host that is connected to a healthy NSX Manager.

  • Issue 2871585: Removal of a host from DVS and DVS deletion are allowed for DVS versions less than 7.0.3 after the NSX Security on vSphere dvPortGroups feature is enabled on the clusters using the DVS.

    You may have to resolve any issues in transport node or cluster configuration that arise from a host being removed from DVS or DVS deletion.

    Workaround: None.

  • Issue 3224295: IPv4/IPv6 BGP sessions fail to establish due to IPv4/IPv6 addresses missing on the interfaces.

    Traffic over the problematic BGP session would be disrupted. However, BGP would gracefully restart on its own.

    Workaround: See knowledge base article 3224295 for details.
