VMware NSX 4.1.0 | 28 FEB 2023 | Build 21332672

Check for additions and updates to these release notes.

What's New

NSX 4.1.0 provides a variety of new features for virtualized networking and security for private, public, and multi-cloud environments. Highlights include new features and enhancements in the following focus areas:

  • IPv6 support for NSX internal control and management plane communication - This release introduces support for control-plane and management-plane communication between Transport Nodes and NSX Managers over IPv6. In this release, the NSX Manager cluster must still be deployed in dual-stack mode (IPv4 and IPv6) and can communicate with Transport Nodes (ESXi hosts and Edge Nodes) over either IPv4 or IPv6. When a Transport Node is configured with dual-stack (IPv4 and IPv6), IPv6 communication is always preferred.

  • Multi-tenancy available in UI, API and alarm framework - With this release, we are extending the consumption model of NSX by introducing multi-tenancy, allowing multiple users in NSX to consume their own objects, see their own alarms, and monitor their VMs with Traceflow. This is made possible by the ability of the Enterprise Admin to segment the platform into Projects, giving different spaces to different users while keeping visibility and control.

  • Antrea to NSX Integration improvements - With NSX 4.1, you can create firewall rules with both K8s and NSX objects. Dynamic groups can also be created based on NSX tags and K8s labels. This improves the usability and functionality of managing Antrea clusters with NSX.

  • Online Diagnostic System provides predefined runbooks that contain debugging steps to troubleshoot a specific issue. These runbooks can be invoked through the API and trigger debugging steps using the CLI, API, and scripts. After debugging, recommended actions are provided to fix the issue, and the artifacts generated during debugging can be downloaded for further analysis. The Online Diagnostic System helps automate debugging and simplifies troubleshooting.

In addition, many other capabilities have been added in every area of the product. More details are available in the detailed feature descriptions below.

Layer 2 Networking

  • ESXi MultiTEP High Availability - When multiple TEPs are configured on a hypervisor, NSX tracks the status of the TEP IP addresses and the BFD sessions in order to fail over overlay traffic to another uplink. This feature provides high availability against physical switch issues, such as a physical switch port that remains up while forwarding no longer works properly.

Layer 3 Networking

  • BGP Administrative Distance - This release introduces support for changing the BGP administrative distance from the default values. Administrative Distance is an arbitrary value assigned to each routing protocol and used for route selection. The ability to manipulate the admin distance of BGP routes provides additional route selection control.

  • BGP Autonomous System (AS) Number per Tier-0 VRF Gateway and BGP neighbor - This release introduces support for configuring a different BGP ASN (Autonomous System Number) per Tier-0 VRF Gateway and also per BGP neighbor. Defining a separate ASN per VRF and BGP peer is an important feature for Service Provider and multi-tenant topologies where end customers bring their own BGP ASN to the networking topology.

  • Inter-VRF Routing - This release introduces a more advanced VRF interconnect and route leaking model. With this feature, users can configure inter-VRF routing using easier workflows and fine-grained controls by dynamically importing and exporting routes between VRFs.

  • Autonomous-System-Wide Unique BGP Identifier - RFC 6286 - This release introduces support for relaxing the definition of the BGP Router ID to be a 4-octet, unsigned, non-zero integer and relaxes the "uniqueness" requirement for eBGP peers, as per RFC 6286.

  • IPv6 support for NSX internal control and management plane communication - This release introduces support for control-plane and management-plane communication between Transport Nodes and NSX Managers over IPv6. In this release, the NSX Manager cluster must still be deployed in dual-stack mode (IPv4 and IPv6) and can communicate with Transport Nodes (ESXi hosts and Edge Nodes) over either IPv4 or IPv6. When a Transport Node is configured with dual-stack (IPv4 and IPv6), IPv6 communication is always preferred.

DPU-based Acceleration

  • UPT V2 is production ready for NVIDIA BlueField-2.

  • Security

    • NSX Distributed Firewall (Stateful L2 and L3 firewall) is available for production deployment with DPU acceleration.

    • NSX Distributed IDS/IPS (Tech Preview)

Edge Node Platform

  • Edge Node settings support in Transport Node API

    • Edge Node API allows you to set the following parameters: restart priority (Edge Node VM), coalescingScheme, and coalescingParams. With this feature, customers can tune the performance settings and the restart priority through NSX Manager, keeping these NSX object settings consistently managed through NSX Manager (see the sketch after this list).

  • Upgrade Edge Node Operating System to Ubuntu 20.04

    • The Edge node operating system is upgraded to Ubuntu 20.04, which offers better hardware support for Bare Metal Edge.

  • Edge Platform: Hardware version upgrade

    • During the upgrade to NSX 4.1.0, the system automatically upgrades Edge VMs to the latest supported hardware version, offering the best performance.
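
The general pattern for the Edge Node settings mentioned above is a read-modify-write against the transport node API. The following is a minimal sketch only, assuming standard basic-auth access; the manager address, node ID, and credentials are placeholders, and the exact JSON field names for the restart priority and coalescing settings are documented in the NSX API Guide rather than shown here.

  # Read the current Edge transport node configuration (all values in angle brackets are placeholders):
  curl -k -u 'admin:<password>' "https://<nsx-manager>/api/v1/transport-nodes/<edge-node-id>" -o edge-node.json

  # Edit edge-node.json locally to adjust the desired settings (it already contains the required _revision field),
  # then write the configuration back:
  curl -k -u 'admin:<password>' -H 'Content-Type: application/json' -X PUT \
    -d @edge-node.json "https://<nsx-manager>/api/v1/transport-nodes/<edge-node-id>"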

Network Detection and Response (NDR)

  • Support for IDPS events from the Gateway Firewall - Starting with NSX 4.1.0, IDPS events from the Gateway/Edge firewall are used by NDR in correlations/intrusion campaigns.

Container Networking and Security

  • Antrea to NSX Integration improvements - With NSX 4.1.0, you can create firewall rules with both K8s and NSX objects. Dynamic groups can also be created based on NSX tags and K8s labels. This improves the usability and functionality of managing Antrea clusters with NSX. You can now create firewall policies that allow or block traffic between virtual machines and K8s pods in a single rule. A new enforcement point is also introduced to include all endpoints, and the correct apply-to is determined based on the source and destination group member targets. K8s NetworkPolicies and Tiers created in the Antrea cluster can now be viewed in the NSX Policy ruleset. Along with this, NSX 4.1.0 also includes Traceflow and UI improvements that allow for better troubleshooting and management of K8s network policies, providing true centralized management of K8s network policies via NSX.

Installation and Upgrade

  • Quick Recovery From Upgrade Failures - One of the ways to quickly recover from a failure during an NSX upgrade is restoring from a backup. But sometimes the unavailability of a viable backup can delay this process, requiring lengthy manual intervention. With NSX 4.1, an automatic backup of NSX is taken implicitly and stored on the appliance itself. This means that in the event of a failure, VMware support can quickly restore NSX to a working state using this built-in backup.

Operations and Monitoring

  • Online Diagnostic System provides predefined runbooks that contain debugging steps to troubleshoot a specific issue. These runbooks can be invoked through the API and trigger debugging steps using the CLI, API, and scripts. After debugging, recommended actions are provided to fix the issue, and the artifacts generated during debugging can be downloaded for further analysis. The Online Diagnostic System helps automate debugging and simplifies troubleshooting.

The following predefined runbooks for troubleshooting ESXi Host Transport Node issues are available and can be invoked using the NSX API.

  • Runbooks to diagnose overlay tunnel issues, controller connectivity issues, port blocking issues, and pNIC performance issues.

VPN

  • IPv6 support for IPsec VPN - IPv6 addresses can now be used for IPsec VPN termination, offering a secure tunnel mechanism over IPv6 networks. Over the IPv6 VPN, NSX can transport both IPv4 and IPv6 data.

Guest Introspection

  • Support for GI Drivers on Windows 11 - Starting with NSX 4.1.0, Virtual Machines running Windows 11 operating systems are supported for NSX Guest Introspection.

Platform Security

  • Lifecycle Management of Local Users - This release allows customers to add and remove local users from the system.

  • Secure port changes for Install and Upgrade - This release changes the default port used for install and upgrade from TCP 8080 to TCP 443 to provide better security for the platform.

  • Internal Certificate Replacement - This release allows customers to replace the self-signed internal certificates with CA-signed certificates.

Multi Tenancy

  • Multi-tenancy in NSX UI - NSX 4.1.0 enables multi-tenancy consumption from the UI for the Enterprise Admin (Provider) and the Project users (Tenants). 

    • Different view for different users - Both Project users (Tenants) and Enterprise Admins (Providers) can log in to NSX and have their own view. The Project users do not see other Projects or Provider configurations.

    • Project Switcher - This release introduces a drop-down at the top of the interface, which allows you to switch context from one Project to the next according to the user's RBAC. Configurations done outside of Projects are in the Default context - this is the default for roles configured outside Projects. Users configured under Default ( / ) can view and configure all Projects mapped to their RBAC, while users tied to a specific Project only have access to their own Projects.

  • All Project screen - In addition to the ability to switch between Projects, the Enterprise Admin has a consolidated view showing all configurations on the system.

  • Project lifecycle management - Projects are optional constructs that offer tenancy and can be configured by the Enterprise Admin on an NSX instance.

    • CRUD Project - Ability to create those Projects from the UI and see a consolidated view of all Projects and their status/alarms.

    • User Allocation - Users and groups can be allocated to Projects (for instance, user "project_admin_1" is Project Admin for Project 1, Project 2, and Project 4).

    • Quota - Quotas can be allocated to the different Projects to restrict by type the number of configurations available (for instance, Project 1 can create 10 segments).

    • Object Sharing - Objects can be shared from default context (/infra) by the Enterprise Admin to either all or specific Projects (for instance, the Group "Shared Services" is shared to all the Projects).

  • Multi-tenancy for Operations and Monitoring

    • Multi-tenant Logging for security logs - Introduction of the Project short log ID, a label placed on logs to attach them to a Project. In this release, the short log ID applies to Gateway Firewall logs and Distributed Firewall logs.

    • Multi-tenant Alarms - Extension of the alarm framework to support multi-tenancy. Project Admins can now view the alarms related to their own configurations.

    • Traceflow at Project level - Project Admins can use Traceflow to test connectivity between their workloads. Configurations applied to their objects from outside the Project by the Provider are obfuscated.

NSX Manager Platform

  • Upgrade NSX Manager Appliance Operating System to Ubuntu 20.04 - The NSX Manager appliance operating system has been upgraded to Ubuntu 20.04.

Feature and API Deprecations, Behavior Changes

Deprecation Announcement for NSX Load Balancer

VMware intends to deprecate the built-in NSX load balancer and recommends customers migrate to NSX Advanced Load Balancer (Avi) as soon as practical. VMware NSX Advanced Load Balancer (Avi) provides a superset of the NSX load balancing functionality and VMware recommends that you purchase VMware NSX Advanced Load Balancer (Avi) Enterprise to unlock enterprise grade load balancing, GSLB, advanced analytics, container ingress, application security and WAF.

We are giving advance notice now to allow existing customers who use the built-in NSX load balancer time to migrate to NSX Advanced Load Balancer (Avi). Support for the built-in NSX load balancer for customers using NSX-T Data Center 3.x will remain for the duration of the NSX-T Data Center 3.x release series. Support for the built-in NSX load balancer for customers using NSX 4.x will remain for the duration of the NSX 4.x release series. Details for both are described in the VMware Product Lifecycle Matrix. We do not intend to provide support for the built-in NSX load balancer beyond the last NSX 4.x release.

Deprecation Announcement for NSX Manager APIs and NSX Advanced UIs

NSX has two methods of configuring logical networking and security: Manager mode and Policy mode. The Manager API contains URIs that begin with /api, and the Policy API contains URIs that begin with /policy/api.
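
For orientation, the two entry points differ only in their URI prefix. The following is a minimal sketch, with the manager address and credentials as placeholders; the endpoints shown are simply examples of each URI family.

  # Manager API: URIs begin with /api
  curl -k -u 'admin:<password>' "https://<nsx-manager>/api/v1/node/services/manager/status"

  # Policy API: URIs begin with /policy/api
  curl -k -u 'admin:<password>' "https://<nsx-manager>/policy/api/v1/infra"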

Please be aware that VMware intends to remove support for the NSX Manager APIs and NSX Advanced UIs in an upcoming NSX major or minor release, which will be generally available no sooner than one year from the date of the original announcement made on 12/16/2021. NSX Manager APIs that are planned to be removed are marked "deprecated" in the NSX Data Center API Guide, with guidance on replacement APIs.

It is recommended that new deployments of NSX take advantage of the NSX Policy APIs and NSX Policy UIs. For deployments currently leveraging NSX Manager APIs and NSX Advanced UIs, please refer to the Manager to Policy Objects Promotion page in NSX Manager and the NSX Data Center API Guide to aid in the transition.

API Deprecations and Behavior Changes

  • New pages on API deprecation and removal have been added to the NSX API Guide to simplify API consumption. These pages list the deprecated APIs and types, and the removed APIs and types.

  • Removed APIs: The following APIs have been removed. Please refer to the NSX API Guide for additional details.

    • Removed API: POST /api/v1/intrusion-services/ids-events
      Replacement: POST /policy/api/v1/infra/settings/firewall/security/intrusion-services/ids-events

    • Removed API: POST /api/v1/intrusion-services/ids-summary
      Replacement: POST /policy/api/v1/infra/settings/firewall/security/intrusion-services/ids-summary

    • Removed API: POST /api/v1/intrusion-services/affected-vms
      Replacement: POST /policy/api/v1/infra/settings/firewall/security/intrusion-services/affected-vms

    • Removed API: POST /api/v1/intrusion-services/affected-users
      Replacement: POST /policy/api/v1/infra/settings/firewall/security/intrusion-services/affected-users

    • Removed API: GET /api/v1/intrusion-services/profiles/<profile-id>
      Replacement: GET /policy/api/v1/infra/settings/firewall/security/intrusion-services/profiles/<profile-id>/effective-signatures
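
    For example, a client that previously called the removed GET /api/v1/intrusion-services/profiles/<profile-id> endpoint would switch to the Policy path listed above. A minimal curl sketch, with the manager address, credentials, and profile ID as placeholders:

      # Fetch the effective IDS signatures for a profile through the Policy API (placeholders in angle brackets):
      curl -k -u 'admin:<password>' \
        "https://<nsx-manager>/policy/api/v1/infra/settings/firewall/security/intrusion-services/profiles/<profile-id>/effective-signatures"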

  • Deprecated APIs: The following APIs are marked as deprecated. Please refer to the NSX API Guide for additional details.

    • Deprecated API: DELETE /api/v1/aaa/registration-token/<token>
      Replacement: POST /api/v1/aaa/registration-token/delete

    • Deprecated API: GET /api/v1/aaa/registration-token/<token>
      Replacement: POST /api/v1/aaa/registration-token/retrieve

    • Deprecated API: GET /api/v1/node/services/dataplane/l3vpn-pmtu
      Replacement: none

    • Deprecated API: GET /api/v1/node/services/policy
      Replacement: GET /api/v1/node/services/manager

    • Deprecated API: GET /api/v1/node/services/policy/status
      Replacement: GET /api/v1/node/services/manager/status

    • Deprecated API: GET /api/v1/ns-groups/<ns-group-id>/effective-cloud-native-service-instance-members
      Replacement: GET /policy/api/v1/infra/domains/{domain-id}/groups/{group-id}/members/cloud-native-service-instances

    • Deprecated API: GET /api/v1/ns-groups/<ns-group-id>/effective-directory-group-members
      Replacement: GET /policy/api/v1/infra/domains/{domain-id}/groups/{group-id}/members/identity-groups

    • Deprecated API: GET /api/v1/ns-groups/<ns-group-id>/effective-ipset-members
      Replacement: GET /policy/api/v1/infra/domains/{domain-id}/groups/{group-id}/members/segment-ports
      GET /policy/api/v1/infra/domains/{domain-id}/groups/{group-id}/members/logical-ports

    • Deprecated API: GET /api/v1/ns-groups/<ns-group-id>/effective-physical-server-members
      Replacement: GET /policy/api/v1/infra/domains/{domain-id}/groups/{group-id}/members/physical-servers

    • Deprecated API: GET /api/v1/ns-groups/<ns-group-id>/effective-transport-node-members
      Replacement: GET /policy/api/v1/infra/domains/{domain-id}/groups/{group-id}/members/transport-nodes

    • Deprecated API: POST /api/v1/batch
      Replacement: none

    • Deprecated API: POST /api/v1/node/services/policy?action=reset-manager-logging-levels
      Replacement: POST /api/v1/node/services/manager?action=reset-manager-logging-levels

    • Deprecated API: POST /api/v1/node/services/policy?action=restart
      Replacement: POST /api/v1/node/services/manager?action=restart

    • Deprecated API: POST /api/v1/node/services/policy?action=start
      Replacement: POST /api/v1/node/services/manager?action=start

    • Deprecated API: POST /api/v1/node/services/policy?action=stop
      Replacement: POST /api/v1/node/services/manager?action=stop

    • Deprecated API: POST /policy/api/v1/infra/realized-state/enforcement-points/<enforcement-point-name>/virtual-machines?action=update_tags
      Replacement: POST /policy/api/v1/infra/realized-state/virtual-machines/{virtual-machine-id}/tags

    • Deprecated API: PUT /api/v1/node/services/dataplane/l3vpn-pmtu
      Replacement: none

    • Deprecated API: PUT /api/v1/node/services/policy
      Replacement: PUT /api/v1/node/services/manager
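
    As an illustration of the general migration pattern, the deprecated NSGroup effective-member call and its Policy replacement can be compared side by side. This is a sketch only; the manager address, credentials, and IDs are placeholders.

      # Deprecated Manager API:
      curl -k -u 'admin:<password>' "https://<nsx-manager>/api/v1/ns-groups/<ns-group-id>/effective-transport-node-members"

      # Policy API replacement:
      curl -k -u 'admin:<password>' "https://<nsx-manager>/policy/api/v1/infra/domains/<domain-id>/groups/<group-id>/members/transport-nodes"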

Compatibility and System Requirements

For compatibility and system requirements information, see the VMware Product Interoperability Matrices and the NSX Installation Guide.

Upgrade Notes for This Release

For instructions about upgrading NSX components, see the NSX Upgrade Guide.

  • NSX 4.1.0 is a new release providing a variety of new features. Customers who require these features should upgrade to adopt the new functionality. Customers who do not require this functionality at this time should upgrade to the latest available version of NSX 3.2 (currently 3.2.2), which continues to be VMware’s recommended release.

  • NSX 4.1.0 is not supported for NSX Cloud customers deployed with AWS/Azure workloads. Please do not use NSX 4.1.0 to upgrade your environment in that scenario.

Customers upgrading to this release are recommended to run the NSX Upgrade Evaluation Tool before starting the upgrade process. The tool is designed to ensure success by checking the health and readiness of your NSX Managers prior to upgrading. The tool is integrated into the Upgrade workflow, before you begin upgrading the NSX Managers.

Available Languages

NSX has been localized into multiple languages: English, German, French, Japanese, Simplified Chinese, Korean, Traditional Chinese, Italian, and Spanish. Because NSX localization utilizes the browser language settings, ensure that your settings match the desired language.

Document Revision History

  • February 28, 2023 (Edition 1) - Initial Edition.

  • March 29, 2023 (Edition 2) - Added known issues 3094405, 3106569, 3113067, 3113073, 3113076, 3113085, 3113093, 3113100, 3118868, 3121377, 3152174, 3152195.

  • May 12, 2023 (Edition 3) - Added known issue 3210316.

  • May 19, 2023 (Edition 4) - Added known issue 3116294.

  • June 1, 2023 (Edition 5) - Added known issues 3186573, 3155845, 3163020.

  • July 20, 2023 (Edition 6) - Added known issue 3108693.

  • September 8, 2023 (Edition 7) - Added NSX Manager upgrade to What's New.

  • December 14, 2023 (Edition 8) - Added known issue 3296976.

  • January 10, 2024 (Edition 9) - Added known issue 3222376.

  • March 7, 2024 (Edition 10) - Added known issue 3308657.

  • July 15, 2024 (Edition 11) - Added known issue 3180650.

Resolved Issues

  • Fixed Issue 3053647: Edge cutover failed because one of the NSX-v ESGs was in a powered-off state during NSX for vSphere to NSX-T migration.

    Operations done on ESG during migration fail.

  • Fixed Issue 3048603: UDP-based DNS traffic created session info when new connections came in, causing memory to be exhausted after running a long time.

    The datapath mempool usage reaches 100%, exceeding the threshold value of 85%.

  • Fixed Issue 3106018: The service dataplane may crash in NSX 4.0.0.1 and 4.0.1.1 if the Edge receives an IGMPv2 report packet.

    Datapath will restart and cause traffic disruption.

  • Fixed Issue 3073647: False-positive DNS "Forwarder Upstream Server Timeout" alarm occurs when at least two upstream servers are configured.

    The false-positive alarm is confusing. Upstream servers are all working fine.

  • Fixed Issue 3062615: Edge transport node deletion might leave stale entries on edge internal tables.

    NSX manager goes into hung state after NSX upgrade due to the stale entries.

  • Fixed Issue 3059517: Memory leak in DNS summary attribute programming.

    Depending on the amount of traffic, this can cause continuous core dumps.

  • Fixed Issue 3055437: SPF property is enabled even when overlay Transport Zone is removed from host switch.

    VMs may lose connectivity after vMotion.

  • Fixed Issue 3046491: The relationship between IP Pools and Infra Segments is not checked during delete validation, allowing deletion of IP pools while they are actively consumed or referenced by infra segments, leading to deallocation of IP addresses while in use.

    Functional impact due to loss of IP addresses while in use by infra segments.

  • Fixed Issue 3044226: The realized intent revision in RealizationState of DhcpRelayConfig was not updated.

    The UI shows the dhcp-relay consolidated status as "IN_PROGRESS" though the dhcp-relay is functioning correctly.

  • Fixed Issue 3056265: NSX Manager node deployment failure with the same hostname that was used earlier by a detached NSX Manager node.

    Deployment of NSX Manager nodes with the same hostnames as nodes that were detached earlier is blocked.

  • Fixed Issue 3024587: IDPS logs received on external syslog server are getting truncated, despite increasing the remote-host-max-msg-len setting.

    The IDPS events could be split into multiple lines on external syslog server.

  • Fixed Issue 3008229: CCP TN timeout set CLI does not have the option to use hour unit.

    Unable to use hour unit in CCP TN timeout set CLI.

  • Fixed Issue 2863105: A traffic group consumed by HCX cannot be deleted from NSX until you delete the related entities from HCX.

    It is not clear how to delete HCX Traffic Groups/Association Map.

  • Fixed Issue 3091229: EffectiveMembership Rest API for cloud-native-service-instances is not working for groups that have more than two CNS members.

    Effective membership Rest API for CNS member would not work as expected.

  • Fixed Issue 3056889: Static routes on distributed routers and/or Tier-1 service routers are not propagated to transport nodes.

    Traffic disruption for traffic destined to those static routes. Logical router ports must be configured before static routes are configured.

  • Fixed Issue 3046448: Rule not deleted from the ESX hosts when an NSGroup is force deleted in the MP UI.

    Firewall rule with deleted NSgroup consumed in source or destination will continue to enforce security posture on ESX hosts for all sources or destinations respectively.

  • Fixed Issue 3044281: Edge VM has stale hardware config error. This causes a validation error when applying manager configuration changes.

    Edge configuration cannot be edited via transport node PUT API or UI since there is a stale error.

  • Fixed Issue 3043150: Stale Transport Node data is left behind by the CCP LCP Replicator after the Transport Node is removed by MP.

    If a new Transport Node is added by MP and it reuses the old VTEP IP with a different VTEP MAC, this will cause CCP to report wrong data as Transport Node stale data was not cleaned up.

  • Fixed Issue 3037403: Upon queries through RPC GetLportRealizedState, CCP will overwrite the binding's VLAN ID with the segment's VLAN ID.

    When user queries address bindings, they may find that the VLAN ID is different from what is set in manual bindings.

  • Fixed Issue 3013563: Changing Group Name in AD affects Data Path - DFW Rule Not Applied to user which is part of the group with changed name [After Full/Delta Sync].

    Data path is disrupted.

  • Fixed Issue 2982055: Nested groups members not synced on Intelligence.

    Group Flow Topology API doesn't show Nested group members.

  • Fixed Issue 2930122: CCP data migration from GC/HL to later release may leave conflict records, which may generate false alarms.

    You will observe false alarms about disconnected transport nodes even though they are connected.

  • Fixed Issue 3062600: NSX proxy keeps restarting when controller-info.xml file is deleted/empty.

    NSX proxy would continuously restart and there would be continuous logging for the same.

  • Fixed Issue 3063646: Nestdb-ops-agent connection error logging on Unified appliance.

    Logs will be filled faster.

  • Fixed Issue 3065925: User-defined VRF RIB is not populated with IPv6 ECMP routes.

    Information is missing for IPv6 ECMP routes.

  • Fixed Issue 3071393: Gateway rule having L7 access profile is not realized on Edge after enable/disable of gateway.

    Gateway rule having L7 access profile is not realized on Edge after enable/disable of gateway.

  • Fixed Issue 3072735: Effective IP Discovery profile computation on CCP doesn’t take the precedence of LSP profiles -> LS profiles -> Group profiles as expected.

    Wrong IPs can be found in DFW groups.

  • Fixed Issue 3073055: Rules accidentally inherit span from DHCP definitions, getting sent to Edges.

    Rules are realized on Edges even though the UI indicates they should not be.

  • Fixed Issue 3073457: Remote tunnel endpoint status is not refreshed.

    Connectivity of BGP sessions over remote tunnel endpoint is intact, there is no datapath impact, but UI doesn't show appropriate status.

  • Fixed Issue 3078357: Federation LM-LM version compatibility should be +/-2, but CCP only supports +/-1.

    Two Federation sites are disconnected if one site is running on version V and the other is running on version V+2 or V-2.

  • Fixed Issue 3081190: Search service is using root partition on GM to store data. Root partition disk space is getting filled and raises an alarm.

    GM will not work properly after 100% root partition disk is used.

  • Fixed Issue 3081664: DFW Rule is sent to EdgeNode by mistake. If one of the rules pushed to Edge has the wrong parameters, it may result in a perpetual data plane crash.

    CCP computed Downlink port as part of FW Rule's span when Logical switch / LSP is used in Rule's appliedTo. As a result, DFW Rule is sent to EdgeNode by mistake. If one of the rules pushed to Edge has the wrong parameters, it may result in a perpetual data plane crash.

  • Fixed Issue 3057573: Applying TN profile to vLCM-enabled cluster failed.

    NSX install failed on LCM-enabled cluster as service account password is expired.

  • Fixed Issue 3046985: Some gateway firewall services are not supported ("MS_RPC_TCP, MS_RPC_UDP, SUN_RPC_TCP, SUN_RPC_UDP, ORACLE_TNS"). Publish operation fails with an error if the services are selected.

    The services can be selected in the UI but an error is displayed while publishing: "Unsupported App Level Gateway (ALG) Type : ORACLE_TNS."

  • Fixed Issue 3040934: Unable to publish changes on Distributed Firewall (DFW)/Gateway Firewall (GWFW) settings. The publish button is disabled when Distributed Firewall (DFW)/Gateway Firewall (GWFW) is disabled.

    Unable to publish configuration changes because Firewall (DFW)/Gateway Firewall (GWFW) is disabled.

  • Fixed Issue 2888207: Unable to reset local user credentials when vIDM is enabled.

    You will be unable to change local user passwords while vIDM is enabled.

  • Fixed Issue 3068125: When BGP is configured over a VTI interface in an active-standby edge deployment, BGP is expected to be down on the standby edge node, but an active ‘BGP Down’ alarm is still generated on the standby edge node.

    No functional impact. But a false ‘BGP Down’ alarm is observed on standby edge node.

  • Fixed Issue 3047608: Post CSM Appliance Deployment, CSM UI will not be accessible after login and nsx-cloud-service manager service is down.

    Day0 CSM UI will be down after login.

  • Fixed Issue 3045514: Unable to view flows from NSX-backed VMs in vRNI.

    User is unable to view flows from NSX-backed VMs in vRNI.

  • Fixed Issue 3042523: NSX discovered bindings (IP addresses) provided by vm-tools for segment ports of virtual machines become empty while storage vMotion is in progress.

    Any DFW rules that use IP addresses discovered by vm-tools as the source do not apply to those IPs during storage vMotion of the virtual machine.

  • Fixed Issue 3038658: When a restore is performed in a 1K hypervisor scale setup, NSX service (proton) crashes due to oom issues.

    Restore process may run longer as the NSX service restarts.

  • Fixed Issue 3027580: If DVPGs that map to discovered segments used in inventory groups are deleted in a vCenter Server attached to hosts in a cluster prepared for security-only, the discovered segments are never cleaned up.

    There is no functional impact. The NSX Manager UI displays stale objects.

  • Fixed Issue 3027473: UI feedback is not shown when migrating more than 1,000 DFW rules from NSX for vSphere to NSX-T Data Center, but sections are subdivided into sections of 1,000 rules or fewer.

    UI feedback is not shown.

  • Fixed Issue 3025367: If the uplink profile has four active uplinks and only two uplinks are provided for VDS uplink mapping in the TN config, four vmknics are created on the host side.

    No impact to functionality.

  • Fixed Issue 3024658: When processing packets with SMB traffic, NSX IDS/IPS generates high CPU utilization and latency.

    In some cases, NSX IDS/IPS process crashes due to out of memory error when processing SMB traffic.

  • Fixed Issue 3024136: Changes in VRF BGP config don't always take effect.

    Change in BGP config inside a VRF does not always take effect.

  • Fixed Issue 3024129: Global Manager alarms triggered frequently.

    Alarms triggered as soon as one is resolved.

  • Fixed Issue 3019893: NGINX crashes after load balancer persistence is disabled.

    A new connection cannot be established due to a deadlock.

  • Fixed Issue 2963524: UDP connections are not purged according to their expiry timeout.

    UI feedback is not shown.

  • Fixed Issue 2947840: Network topology view not showing all the networking components on UI.

    You are unable to see the complete network topology.

  • Fixed Issue 2928725: IDPS signature downloads not working with proxy using a specific port.

    Signature download call via Proxy was not allowed to reach to NTICS Server.

  • Fixed Issue 3034373: DFW IPFIX profile is stuck in deletion after upgrading from pre-3.2.0 to 3.2.x or 4.0.x.

    Can't delete DFW IPFIX profiles.

  • Fixed Issue 3043151: L7 classification of firewall flows may not be available for a brief period when the system is encountering heavy traffic that requires AppId determination.

    L7 rules might not hit on hosts that are under heavy traffic.

  • Fixed Issue 3039159: vMotion of VMs with an interface in Uniform Passthrough (UPT) mode and a high number of flows might cause the host to PSOD.

    The host PSODs, which impacts traffic during vMotion of UPT-based VMs with a high number of flows.

  • Fixed Issue 3042382: The session should be re-looked-up when a packet matches a No-SNAT rule.

    Traffic matching a No-SNAT rule is stuck.

  • Fixed Issue 3044704: NSX Manager only supports an HTTP proxy without SSL bump, even though the configuration page accepts an HTTPS scheme and certificate.

    When you configured the proxy in System -> General Settings -> Internet Proxy Server, you had to provide details for the scheme (HTTP or HTTPS), host, port, and so on.

    Scheme here means the type of connection that you want to establish between NSX Manager and the proxy server. It does not mean the type of the proxy. Typically, an HTTPS proxy uses the HTTPS scheme, but a proxy like Squid configured with http_port + SSL bump also establishes an HTTPS connection between NSX Manager and the proxy server. However, services in NSX Manager always assume the proxy is exposed with an HTTP port, so the certificate you provide is never used by NSX Manager.

    When you choose the HTTPS scheme and provide the certificate, the system displays a "Certificate xx is not a valid certificate for xx." error for an HTTPS proxy in the configuration page.

    No error is shown if you try to use a Squid proxy with http_port + SSL bump, but NSX Manager services will fail to send requests to outside servers (an "Unable to find certificate chain." error can be seen in the log).

  • Fixed Issue 3025104: Host goes into "Failed" state when you perform a restore using a different IP but the same FQDN.

    When a restore is performed using a different IP for the NSX Manager nodes but the same FQDN, hosts are not able to connect to the NSX Manager nodes.

Known Issues

  • Issue 3180650: On a medium edge, a malloc heap exhaustion alarm is triggered in a new deployment.

    Alarm is observed in the manager UI, but there is no functional impact.

    Workaround: None.

  • Issue 3308657: When creating firewall sections with a very large number of rules, the create/delete rule API response takes more than 30 minutes.

    This slowness causes 409/500 errors or slowness in the API executions needed for pods to come up.

    Workaround: Reduce the number of rules in one section.

  • Issue 3222376: The NSX "Check Status" functionality in the LDAP configuration UI reports a failure when connecting to Windows Server 2012/Active Directory. This is because Windows 2012 only supports weaker TLS cipher suites that are no longer supported by NSX for security reasons.

    Even though an error message displays, LDAP authentication over SSL works because the set of cipher suites used by the LDAP authentication code is different than the set used by the "Check Status" link.

    Workaround: See knowledge base article 92869 for details.

  • Issue 3296976: Gateway Firewall may allow usage of unsupported Layer 7 App IDs as part of Context/L7 Access Profiles.

    Please refer to the following documentation page which lists which App IDs are supported per NSX release - https://docs.vmware.com/en/NSX-Application-IDs/index.html.

    Workaround: None.

  • Issue 3108693: Project admins are not able to configure the DNS forwarder feature from the UI.

    If you are logged in as a project admin, Tier-1 gateways under the project scope are not listed in the drop-down menu on the DNS forwarder page. As a result, the project admin is not able to configure the DNS forwarder feature from the UI.

    Workaround:

    1. Project admin can perform the same functionality from the API.

    2. Enterprise admin can create DNS services in project scope.

  • Issue 3186573: CorfuDB Data loss.

    Sudden loss of some configurations. Unable to create/update some configurations.

    Workaround: See knowledge base article 92039 for details.

  • Issue 3163020: When the FQDN in DNS packets differs in text case with the Domain Names configured in L7 profiles of FQDN rules, the DFW FQDN rule actions are not applied correctly.

    DFW FQDN rule actions are not applied correctly. DFW FQDN filtering does not work properly.

    Workaround: Configure FQDN in L7 context profile to match Domain name in DNS packet.

  • Issue 3155845: PSOD during vMotion when FQDN filtering is configured.

    ESXi host will reboot.

    Workaround: Remove FQDN configuration.

  • Issue 3116294: Rule with nested group does not work as expected on hosts.

    Traffic not allowed or skipped correctly.

    Workaround: See knowledge base article 91421.

  • Issue 3210316: vIDM configuration is cleared if (1) Manager is dual-stack IPv4/IPv6 and (2) No VIP address is configured and (3) you enter an IP address instead of an FQDN for the "NSX Appliance" field in vIDM configuration.

    You are unable to log in using vIDM until the vIDM configuration is reconfigured.

    Workaround: Use FQDN for the "NSX Appliance" field.

  • Issue 3152195: DFW rules with Context Profiles with FQDN of type .*XYZ.com fail to be enforced.

    DFW rule enforcement does not work as expected in this specific scenario.

    Workaround: None.

  • Issue 3152174: Host preparation with VDS fails with error: Host {UUID} is not added to VDS value.

    On vCenter, if networks are nested within folders, migrations from NVDS to CVDS or from NSX-V to NSX-T may fail when the migration target is CVDS in NSX-T.

    Workaround: The "first network" of a host is the first network visible in the network field on the vCenter MOB page https://<VC-IP>/mob?moid=host-moref

    • Prior to 3.2.1: The first network of the host, as described above, and the concerned VDS should be directly under the same folder. The folder can be either the DataCenter or a network folder inside the DataCenter.

    • From 3.2.1 and 4.0.0 onwards: The first network of the host, as described above, should be directly under a folder, and the desired VDS can be directly under the same folder or nested inside the same folder. The folder can be either the DataCenter or a network folder inside the DataCenter.

  • Issue 3118868: Incorrect or stale vNIC filters programmed on pNIC when overlay filters are programmed around the same time as a pNIC is enabled.

    vNIC filters programmed on pNIC may be stale, incorrect, or missing when overlay filters are programmed around the same time as a pNIC is enabled, resulting in a possible performance regression.

    Workaround: None.

  • Issue 3113100: IP address is not realized for some VMs in the Dynamic security groups due to stale VIF entry.

    If a cluster has been initially set up for Networking and Security using Quick Start, uninstalled, and then reinstalled solely for Security purposes, DFW rules may not function as intended. This is because the auto-TZ that was generated for Networking and Security is still present and needs to be removed in order for the DFW rules to work properly.

    Workaround: Delete the auto-generated TZ from the Networking & Security Quick Start which references the same DVS as used by Security Only.

  • Issue 3113093: Newly added hosts are not configured for security.

    After the installation of security, when a new host is added to a cluster and connected to the Distributed Virtual Switch, it does not automatically trigger the installation of NSX on that host.

    Workaround: Make any updates to the existing VDS in VC, or add a new VDS in VC and add all the hosts in the cluster to the VDS. This will auto-update the TNP, and the TNP will be reapplied on the TNC. When the TNC is updated, the newly added host will have the latest configuration of the TNP.

  • Issue 3113085: DFW rules are not applied to VM upon vMotion.

    When a VM protected by DFW is vMotioned from one host to another in a Security-Only Install deployment, the DFW rules may not be enforced on the ESX host, resulting in incorrect rule classification.

    Workaround: Connect VM to another network and then reconnect it back to the target DVPortgroup.

  • Issue 3113076: Core dumps not generated for FRR daemon crashes.

    In the event of FRR daemon crashes, core dumps are not generated by the system in the /var/dump directory. This can cause BGP to flap.

    Workaround: Enable the core dump for the FRR daemons, trigger the crash, and obtain the core dump from /var/dump.

    To enable the core dump, run the following command as the root user on the edge node.

    prlimit --pid <pid of the FRR daemon> --core=500000000:500000000

    To validate if the core dump is enabled for the FRR daemon, use the following command, and check the SOFT and HARD limits for the CORE resource. These limits must be 500000000 bytes or 500 MB.

    prlimit --pid <pid of the FRR daemon>
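
    As an illustration, assuming the FRR BGP daemon (bgpd) is the process of interest and pgrep is available on the edge node, the two commands can be combined as follows; adjust the daemon name for other FRR processes.

    # Enable core dumps for bgpd (run as root on the edge node):
    prlimit --pid "$(pgrep -x bgpd)" --core=500000000:500000000

    # Verify that the SOFT and HARD limits for the CORE resource now show 500000000:
    prlimit --pid "$(pgrep -x bgpd)"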

  • Issue 3113073: DFW rules are not getting enforced for some time after enabling lockdown mode.

    Enabling lockdown mode on a transport node can cause a delay in the enforcement of DFW rules. This is because when lockdown mode is enabled on a transport node, the associated VM may be removed from the NSX inventory and then recreated. During this time gap, DFW rules may not be enforced on the VMs associated with that ESXi host.

    Workaround: Add ‘da-user’ to the exception list manually before putting ESX into lockdown mode.

  • Issue 3113067: Unable to connect to NSX-T Manager after vMotion.

    When upgrading NSX from a version lower than NSX 3.2.1, NSX manager VMs are not automatically added to the firewall exclusion list. As a result, all DFW rules are applied to manager VMs, which can cause network connectivity problems.

    This issue does not occur in fresh deployments from NSX 3.2.2 or later versions. However, if you are upgrading from NSX 3.2.1 or earlier versions to any target version up to and including NSX 4.1.0 this issue may be encountered.

    Workaround: Contact VMware Support.

  • Issue 3106569: Performance not reaching expected levels with EVPN route server mode.

    vNIC filters programmed on pNIC may be stale, incorrect, or missing when overlay filters are programmed to the pNIC in a teaming situation, resulting in a possible performance regression.

    Workaround: None.

  • Issue 3094405: Incorrect or stale vNIC filters programmed to pNIC when overlay networks are configured.

    vNIC overlay filters are updated in a specific order. When updates occur in quick succession, only the first update is retained, and subsequent updates are discarded, resulting in incorrect filter programming and a possible performance regression.

    Workaround: None.

  • Issue 3121377: PSOD on ESX.

    Transport Node down impacting traffic.

    Workaround:

    1. Modify the workload configuration to avoid sending traffic to the first-hop router for a peer on the same segment.

    2. Accept ICMPv6 redirects gracefully from the first-hop router.

  • Issue 3098639: Upgrade of NSX Manager fails due to reverse-proxy/auth service's failure to enter maintenance mode during upgrade.

    Upgrade failure of nsx-manager.

    Workaround: See knowledge base article 91120 for details.

  • Issue 3114329: Intel QAT is not coming up post Bare Metal NSX Edge installation.

    Intel QuickAssist Technology (QAT) is a hardware accelerator technology designed to offload computationally intensive cryptographic and compression/decompression algorithms from the CPU to dedicated hardware. Because of this issue, you cannot use Intel QAT to improve the throughput performance of the VPN service with Bare Metal NSX Edge.

    Workaround: None.

  • Issue 3106950: After reaching the DFW quota, the creation of a new VPC under the project scope fails.

    You cannot create a VPC under the project where the DFW quota has been reached.

    Workaround: None.

  • Issue 3094463: A stateless firewall rule with the ALG service FTP can be created via the deprecated MP API. This operation is not supported via the Policy API.

    You will not be able to perform MP to Policy migration if such problematic firewall rules exist on MP.

    Workaround: The problematic firewall rule should be updated to be stateful or should not use the ALG FTP service.

  • Issue 3083358: Controller taking long time to join the cluster on controller reboot.

    After controller reboot, the new configurations created on NSX Manager might face realization delay as the controller may take time to start.

    Workaround: Remove redundant malicious IP groups, and reboot.

  • Issue 3106317: When a VNIC MAC Address is changed in the guest, the changes may not be reflected in the filters programmed to the PNIC.

    Potential Performance degradation.

    Workaround: Disable and then re-enable interface in the guest.

  • Issue 3047727: CCP did not publish updated RouteMapMsg.

    Routes not intended to be published are published.

    Workaround: None.

  • Issue 3092154: Current NIC does not support IPv6 routing extension header.

    Any IPv6 traffic with routing extension headers will get dropped by smart NIC.

    Workaround: None.

  • Issue 3079932: MTU change can fail with high scale offloaded TCP flows over UPT interfaces.

    With high-scale offloaded TCP flows over UPT interfaces, an MTU change can sometimes fail.

    Workaround:

    1. Do not change the MTU while traffic is flowing.

    2. Or, disable hardware offload and then perform the MTU change.

  • Issue 3077422: SmartNIC-backed VMs and ports/interfaces are not listed in LTA because LTA is not supported on SmartNIC-backed VMs and ports.

    LTA can't be created on certain VMs.

    Workaround: None.

  • Issue 3076771: The 'get physical-ports <physical-port-name> stats verbose' command displays a value of 0 for per-queue stats.

    Unexpected zero values are seen when the verbose port stats are used in debugging.

    Workaround: Where available, 'get physical-ports <physical-port-name> xstats' displays per queue statistics for the physical ports.

  • Issue 3069003: Excessive LDAP operations on customer LDAP directory service when using nested LDAP groups.

    High load on LDAP directory service in cases where nested LDAP groups are used.

    Workaround: For vROPS prior to 8.6, use the "admin" local user instead of an LDAP user.

  • Issue 3068100: Route leaking is only supported in the case of symmetric routing. Asymmetric route leaking between VRFs may lead to a traffic blackhole.

    If the VRF routes are asymmetric among the edge nodes within the cluster, such asymmetric routes will not be synced across the edge cluster, even though inter-SR routing is enabled. This condition can potentially create a blackhole for traffic from South to North.

    Workaround: Ensure routes are propagated symmetrically to all edge nodes in the cluster. For example, prefixes 2.1.4.0/24 and 2.1.5.0/24 should be propagated to both edges e1 and e2.

  • Issue 2855860: Edge datapath stops forwarding indefinitely when Edge filesystem goes into read-only mode.

    Traffic loss. Accessing the Edge VM console may indicate the filesystem has entered read-only mode.

    Workaround: None.

  • Issues 3046183 and 3047028: After activating or deactivating one of the NSX features hosted on the NSX Application Platform, the deployment status of the other hosted NSX features changes to In Progress. The affected NSX features are NSX Network Detection and Response, NSX Malware Prevention, and NSX Intelligence.

    After deploying the NSX Application Platform, activating or deactivating the NSX Network Detection and Response feature causes the deployment statuses of the NSX Malware Prevention feature and the NSX Intelligence feature to change to In Progress. Similarly, activating or deactivating the NSX Malware Prevention feature causes the deployment status of the NSX Network Detection and Response feature to change to In Progress. If NSX Intelligence is activated and you activate NSX Malware Prevention, the status for the NSX Intelligence feature changes to Down and Partially up.

    Workaround: None. The system recovers on its own.

  • Issue 3041672: For config-only and DFW migration modes, once all the migration stages are successful, you invoke the pre-migrate and post-migrate APIs to move workloads. If you change the credentials of NSX for vSphere, vCenter Server, or NSX after the migration stages are successful, the pre-migrate and post-migrate API calls will fail.

    You will not be able to move the workloads because the pre-migrate, post-migrate and finalize-infra API calls will fail.

    Workaround: Perform these steps.

    1. Re-start the migration coordinator.

    2. On the migration UI, using the same migration mode as before restart, provide all the authentication details. This should sync back the migration progress.

    3. Run the pre-migrate, post-migrate, finalize infra APIs.

  • Issue 3043600: The NSX Application Platform deployment fails when you use a private (non-default) Harbor repository with a self-signed certificate from a lesser-known Certificate Authority (CA).

    If you attempt to deploy the NSX Application Platform using a private (non-default) Harbor repository with a self-signed certificate from a lesser-known CA, the deployment fails because the deployment job is unable to obtain the NSX Application Platform Helm charts and Docker images. Because the NSX Application Platform did not get deployed successfully, you cannot activate any of the NSX features, such as NSX Intelligence, that the platform hosts.

    Workaround: Use a well-known trusted CA to sign the certificate you are using for your private Harbor server.

  • Issue 2491800: Async Replicator channel port-certificate attributes are not periodically checked for expiry or revocation.

    This could lead to using an expired/revoked certificate for an existing connection.

    Workaround: Any re-connects would use the new certificates (if present) or throw an error since the old certificate is expired or revoked. To trigger reconnect, restarting Appliance Proxy Hub (APH) on the manager node would suffice.

  • Issue 2994066: Failed to create mirror vmknic on ESXi 8.0.0.0 and NSX 4.0.1 and above.

    Unable to enable L3SPAN with mirror stack as mirror vmknic could not be created.

    Workaround:

    1. From the ESXi CLI prompt, create a mirror netstack on ESXi by running the following command:

    esxcli network ip netstack add -N mirror

    2. From the vSphere Web Client, create a vmknic on the mirror netstack.
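
    Optionally, to confirm that the mirror netstack was created before adding the vmknic, the netstack instances can be listed from the ESXi CLI (this verification step is an assumption, not part of the documented workaround):

    # List netstack instances and check that "mirror" appears:
    esxcli network ip netstack list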

  • Issue 3007396: If the remote node is set to slow LACP timeout mode, there could be a traffic drop of around 60-90 seconds when one of the LAG links is brought down via the CLI command "esxcli network nic down -n vmnicX".

    Traffic drop of around 60-90 seconds once we bring down one of the LAG links via cli command "esxcli network nic down -n vmnicX".

    The issue is only observed if the remote node LACP timeout is set to SLOW mode.

    Workaround: Set the external switch LACP timeout to FAST.

  • Issue 3013751: NSX Install/Uninstall Progress is not visible automatically on the Fabric Page.

    Progress is visible after manually refreshing the Fabric page.

    Workaround: Manually refresh the Fabric Page.

  • Issue 3018596: The Virtual Function (VF) is released from the VM if you set the VM virtual NIC (vnic) MTU on the guest VM to be greater than the physical NIC MTU.

    Once the vnic MTU is changed to be greater than the pNIC MTU, the VM will not be able to acquire a VF. Hence, the VM will not be in UPT mode; it will be in "emu vmnic" mode.

    Workaround:

    1. Change the MTU size on DVS from 1600 to 9100.

    2. VF gets assigned back to the VM vnic.

  • Issue 3003762: Uninstallation of the NSX Malware Prevention Service fails if you do not delete the Malware Prevention Service rules from policy, and no error message is displayed indicating that uninstallation failed because rules are still present in policy.

    Uninstallation of NSX Malware Prevention Service will fail in this scenario.

    Workaround: Delete rules and retry uninstallation.

  • Issue 3003919: DFW rules matching CNAMEs that have different actions than the DFW rule matching the original domain name will lead to inconsistent rule enforcement.

    In the unlikely case of an application or user accessing the CNAME instead of the original domain name, traffic may incorrectly bypass or be dropped by the DFW rules.

    Workaround: Perform one of these steps.

    • Configure the DFW rules only for original domain name, not the CNAME.

    • Configure the rules with domain name and CNAMES with the same action.

  • Issue 3014499: Powering off an Edge handling cross-site traffic causes disruption of some flows.

    Some cross-site traffic stopped working.

    Workaround: Power on the powered-off edge.

  • Issue 3014978: Hovering over Networking & Security flag on the Fabric Page shows incorrect information regardless of the networking selected.

    No impact.

    Workaround: None.

  • Issue 3014979: NSX Install/Uninstall Progress is not visible automatically on the Fabric Page.

    No impact. Manually refresh the page to see the progress.

    Workaround: Manually refresh the Fabric Page.

  • Issue 3017885: FQDN analysis can only support one sub-cluster in stateful Active/Active mode.

    Do not enable the feature if the stateful Active/Active has more than one sub-cluster.

    Workaround: Deploy only one sub-cluster.

  • Issue 3010038: On a two-port LAG that serves Edge Uniform Passthrough (UPT) VMs, if the physical connection to one of the LAG ports is disconnected, the uplink will be down, but Virtual Functions (VFs) used by those UPT VMs will continue to be up and running as they get connectivity through the other LAG interface.

    No impact.

    Workaround: None.

  • Issue 3009907: NSX VIBs are not deleted from a SmartNIC host if the host was in a disconnected state during the "Remove NSX" operation on the cluster.

    No functional impact.

    Workaround: In vCenter Server, go to vLCM UI and remediate the cluster.

  • Issue 2490064: Attempting to disable VMware Identity Manager with "External LB" toggled on does not work.

    After enabling VMware Identity Manager integration on NSX with "External LB", if you attempt to then disable integration by switching "External LB" off, after about a minute, the initial configuration will reappear and overwrite local changes.

    Workaround: When attempting to disable vIDM, do not toggle the External LB flag off; only toggle off vIDM Integration. This will cause that config to be saved to the database and synced to the other nodes.

  • Issue 2558576: Global Manager and Local Manager versions of a global profile definition can differ and might have an unknown behavior on Local Manager.

    Global DNS, session, or flood profiles created on Global Manager cannot be applied to a local group from the UI, but can be applied from the API. Hence, an API user can accidentally create profile binding maps and modify a global entity on the Local Manager.

    Workaround: Use the UI to configure the system.

  • Issue 2355113: Workload VMs running RedHat and CentOS on Azure accelerated networking instances are not supported.

    In Azure, when accelerated networking is enabled on RedHat or CentOS based operating systems with the NSX Agent installed, the ethernet interface does not obtain an IP address.

    Workaround: Disable accelerated networking for RedHat and CentOS based operating systems.

  • Issue 2574281: Policy will only allow a maximum of 500 VPN Sessions.

    NSX claims support for 512 VPN sessions per edge in the large form factor; however, because Policy does auto plumbing of security policies, Policy will only allow a maximum of 500 VPN sessions.

    Upon configuring the 501st VPN session on Tier0, the following error message is shown:

    {'httpStatus': 'BAD_REQUEST', 'error_code': 500230, 'module_name': 'Policy', 'error_message': 'GatewayPolicy path=[/infra/domains/default/gateway-policies/VPN_SYSTEM_GATEWAY_POLICY] has more than 1,000 allowed rules per Gateway path=[/infra/tier-0s/inc_1_tier_0_1].'}

    Workaround: Use Management Plane APIs to create additional VPN Sessions.

  • Issue 2684574: If the edge has 6K+ routes for Database and Routes, the Policy API times out.

    These Policy APIs for the OSPF database and OSPF routes return an error if the edge has 6K+ routes:

    /tier-0s/<tier-0s-id>/locale-services/<locale-service-id>/ospf/routes

    /tier-0s/<tier-0s-id>/locale-services/<locale-service-id>/ospf/routes?format=csv

    /tier-0s/<tier-0s-id>/locale-services/<locale-service-id>/ospf/database

    /tier-0s/<tier-0s-id>/locale-services/<locale-service-id>/ospf/database?format=csv

    This is a read-only API and has an impact only if the API/UI is used to download 6K+ routes for the OSPF routes and database.

    Workaround: Use the CLI commands to retrieve the information from the edge.
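
    For example, the same information can be read directly on the Edge node CLI; the command names below are assumptions based on the NSX Edge CLI, so confirm the exact syntax in the CLI reference for your version:

      # Identify the VRF of the Tier-0 service router, then enter that VRF context
      get logical-routers
      vrf <vrf-id>

      # Read OSPF state from the edge instead of the Policy API
      get ospf neighbor
      get ospf database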

  • Issue 2663483: The single-node NSX Manager will disconnect from the rest of the NSX Federation environment if you replace the APH-AR certificate on that NSX Manager.

    This issue is seen only with NSX Federation and a single-node NSX Manager cluster: the Manager disconnects from the rest of the Federation environment after the APH-AR certificate is replaced.

    Workaround: A single-node NSX Manager cluster is not a supported deployment option; deploy a three-node NSX Manager cluster.

  • Issue 2690457: When joining an MP to an MP cluster where publish_fqdns is set on the MP cluster and where the external DNS server is not configured properly, the proton service may not restart properly on the joining node.

    The joining manager will not work and the UI will not be available.

    Workaround: Configure the external DNS server with forward and reverse DNS entries for all Manager nodes.
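
    As a quick check, every Manager FQDN should resolve in both directions from each node; the hostname and IP address below are placeholders:

      # Forward lookup: FQDN -> IP address
      nslookup nsxmgr-01.example.com

      # Reverse lookup: IP address -> the same FQDN
      nslookup 10.10.10.11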

  • Issue 2719682: Computed fields from the Avi Controller are not synced to intent on Policy, resulting in discrepancies in the data shown on the Avi UI and the NSX-T UI.

    Computed fields from Avi controller are shown as blank on the NSX-T UI.

    Workaround: Use the app switcher to check the data on the Avi UI.

  • Issue 2792485: NSX manager IP is shown instead of FQDN for manager installed in vCenter.

    The NSX-T UI integrated in vCenter shows the NSX Manager IP address instead of the FQDN for the installed Manager.

    Workaround: None.

  • Issue 2799371: IPSec alarms for L2 VPN are not cleared even though L2 VPN and IPSec sessions are up.

    No functional impact except that unnecessary open alarms are seen.

    Workaround: Resolve alarms manually.

  • Issue 2838613: For ESXi versions earlier than 7.0.3, NSX security functionality is not enabled on a VDS upgraded from version 6.5 to a higher version after security installation on the cluster.

    NSX security features are not enabled on the VMs connected to a VDS upgraded from 6.5 to a higher version (6.6+) where the NSX Security on vSphere DVPortgroups feature is supported.

    Workaround: After VDS is upgraded, reboot the host and power on the VMs to enable security on the VMs.

  • Issue 2848614: When joining an MP to an MP cluster where publish_fqdns is set on the MP cluster, and the forward or reverse lookup entry is missing in the external DNS server or the DNS entry is missing for the joining node, forward or reverse alarms are not generated for the joining node.

    Forward/reverse alarms are not generated for the joining node even though the forward/reverse lookup entry or the DNS entry for that node is missing in the external DNS server.

    Workaround: Configure the external DNS server for all Manager nodes with forward and reverse DNS entries.

  • Issue 2853889: When creating an EVPN Tenant Config (with vlan-vni mapping), child Segments are created, but each child segment's realization status goes into a failed state for about 5 minutes and then recovers automatically.

    It takes about 5 minutes to realize the EVPN tenant configuration.

    Workaround: None. Wait 5 minutes.

  • Issue 2866682: In Microsoft Azure, when accelerated networking is enabled on SUSE Linux Enterprise Server (SLES) 12 SP4 Workload VMs and with NSX Agent installed, the ethernet interface does not obtain an IP address.

    VM agent doesn't start and VM becomes unmanaged.

    Workaround: Disable accelerated networking.

  • Issue 2868944: UI feedback is not shown when migrating more than 1,000 DFW rules from NSX for vSphere to NSX-T Data Center, but sections are subdivided into sections of 1,000 rules or fewer.

    UI feedback is not shown.

    Workaround: Check the logs.

  • Issue 2870085: Security policy-level logging to enable/disable logging for all rules does not work.

    You cannot change the logging of all rules by changing the "logging_enabled" field of the security policy.

    Workaround: Modify each rule to enable/disable logging.
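
    As a sketch of the per-rule approach, the logging flag can be set on each rule through the Policy API; the "logged" field name, the partial-PATCH body, and the IDs below are assumptions to verify against the API guide for your version:

      # Enable logging on a single DFW rule (policy and rule IDs are placeholders)
      curl -k -u 'admin:<password>' -X PATCH \
        -H 'Content-Type: application/json' \
        -d '{"logged": true}' \
        "https://<mgr-ip>/policy/api/v1/infra/domains/default/security-policies/<policy-id>/rules/<rule-id>"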

  • Issue 2871440: Workloads secured with NSX Security on vSphere dvPortGroups lose their security settings when they are vMotioned to a host connected to an NSX Manager that is down.

    For clusters installed with the NSX Security on vSphere dvPortGroups feature, VMs that are vMotioned to hosts connected to a downed NSX Manager do not have their DFW and security rules enforced. These security settings are re-enforced when connectivity to NSX Manager is re-established.

    Workaround: Avoid vMotion to affected hosts when NSX Manager is down. If other NSX Manager nodes are functioning, vMotion the VM to another host that is connected to a healthy NSX Manager.

  • Issue 2871585: Removing a host from a DVS and deleting a DVS are allowed for DVS versions earlier than 7.0.3, even after the NSX Security on vSphere DVPortgroups feature is enabled on clusters using the DVS.

    You may have to resolve any issues in transport node or cluster configuration that arise from a host being removed from DVS or DVS deletion.

    Workaround: None.

  • Issue 2877776: "get controllers" output may show stale information about controllers that are not the master when compared to the controller-info.xml file.

    This CLI output is confusing.

    Workaround: Restart the nsx-proxy service on that transport node.

  • Issue 2879133: Malware Prevention feature can take up to 15 minutes to start working.

    When the Malware Prevention feature is configured for the first time, it can take up to 15 minutes for the feature to be initialized. During this initialization, no malware analysis will be done, but there is no indication that the initialization is occurring.

    Workaround: Wait 15 minutes.

  • Issue 2889482: The wrong save confirmation is shown when updating segment profiles for discovered ports.

    The Policy UI allows editing of discovered ports but does not send the updated binding map for port update requests when segment profiles are updated. A false positive message is displayed after clicking Save. Segments appear to be updated for discovered ports, but they are not.

    Workaround: Use MP API or UI to update the segment profiles for discovered ports.

  • Issue 2898020: The error 'FRR config failed:: ROUTING_CONFIG_ERROR (-1)' is displayed on the status of transport nodes.

    The edge node rejects a route-map sequence configured with a deny action that has more than one community list attached to its match criteria. If the edge nodes do not have the admin-intended configuration, unexpected behavior results.

    Workaround: None.

  • Issue 2910529: Edge loses IPv4 address after DHCP allocation.

    After the Edge VM is installed and receives an IP address from the DHCP server, it loses the IP address within a short time and becomes inaccessible. This happens because the DHCP server does not provide a gateway.

    Workaround: Ensure that the DHCP server provides the proper gateway address. If it does not, perform the following steps on the Edge VM:

    1. Log in to the console of the Edge VM as admin.

    2. Stop the dataplane service:

      stop service dataplane

    3. Configure DHCP on the management interface:

      set interface <mgmt intf> dhcp plane mgmt

    4. Restart the dataplane service:

      start service dataplane

  • Issue 2919218: Selections made for host migration are reset to default values after the Migration Coordinator (MC) service restarts.

    After the restart of the MC service, all the selections relevant to host migration such as enabling or disabling clusters, migration mode, cluster migration ordering, etc., that were made earlier are reset to default values.

    Workaround: Ensure that all the selections relevant to host migration are performed again after the restart of the MC service.

  • Issue 2942900: The identity firewall does not work for event log scraping when Active Directory queries time out.

    The identity firewall issues a recursive Active Directory query to obtain the user's group information. Active Directory queries can time out with a NamingException 'LDAP response read timed out, timeout used: 60000 ms'. Therefore, firewall rules are not populated with event log scraper IP addresses.

    Workaround: To improve recursive query times, Active Directory admins may organize and index the AD objects.

  • Issue 2954520: When a Segment is created from Policy and a Bridge is configured from MP, the detach bridging option is not available for that Segment in the UI.

    You cannot detach or update bridging from the UI if the Segment was created from Policy and the Bridge was configured from MP.

    If a Segment is created from the policy side, configure bridging only from the policy side. Similarly, if a Logical Switch is created from the MP side, configure bridging only from the MP side.

    Workaround: Use the APIs to remove bridging, as shown in the steps below and the curl sketch that follows them.

    1. Update the concerned LogicalPort and remove the attachment.

    PUT :: https://<mgr-ip>/api/v1/logical-ports/<logical-port-id>

    Add the header X-Allow-Overwrite : true to the PUT request.

    2. Delete the BridgeEndpoint.

    DELETE :: https://<mgr-ip>/api/v1/bridge-endpoints/<bridge-endpoint-id>

    3. Delete the LogicalPort.

    DELETE :: https://<mgr-ip>/api/v1/logical-ports/<logical-port-id>
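
    Put together as curl calls, the sequence looks roughly like this; the IDs are placeholders, and the logical port body must first be retrieved and edited to drop its "attachment" block before being sent back:

      # 1. Fetch the logical port, remove the "attachment" block from port.json, then PUT it back
      curl -k -u 'admin:<password>' "https://<mgr-ip>/api/v1/logical-ports/<logical-port-id>" > port.json
      curl -k -u 'admin:<password>' -X PUT \
        -H 'Content-Type: application/json' -H 'X-Allow-Overwrite: true' \
        -d @port.json \
        "https://<mgr-ip>/api/v1/logical-ports/<logical-port-id>"

      # 2. Delete the BridgeEndpoint
      curl -k -u 'admin:<password>' -X DELETE "https://<mgr-ip>/api/v1/bridge-endpoints/<bridge-endpoint-id>"

      # 3. Delete the LogicalPort
      curl -k -u 'admin:<password>' -X DELETE "https://<mgr-ip>/api/v1/logical-ports/<logical-port-id>"
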
  • Issue 3030847: Changes in VRF BGP config don't always take effect.

    A change in the BGP configuration inside a VRF does not always take effect.

    Workaround: Create a dummy static route and then remove it, as sketched below. This triggers a configuration push that realizes the VRF BGP config on the edge.
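
    A minimal sketch using the Policy API static-routes endpoint; the route ID, prefix, and next hop below are placeholders, so adjust them to values that are harmless in your environment:

      # Create a dummy static route on the Tier-0 VRF gateway to force a configuration push
      curl -k -u 'admin:<password>' -X PUT \
        -H 'Content-Type: application/json' \
        -d '{"network": "192.0.2.0/24", "next_hops": [{"ip_address": "<next-hop-ip>", "admin_distance": 1}]}' \
        "https://<mgr-ip>/policy/api/v1/infra/tier-0s/<vrf-tier-0-id>/static-routes/dummy-push"

      # Remove it once the BGP configuration has been realized on the edge
      curl -k -u 'admin:<password>' -X DELETE \
        "https://<mgr-ip>/policy/api/v1/infra/tier-0s/<vrf-tier-0-id>/static-routes/dummy-push"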

  • Issue 3005685: When configuring an OpenID Connect connection as an NSX LM authentication provider, customers may encounter errors.

    The OpenID Connect configuration produces errors when it is applied.

    Workaround: None.

  • Issue 2949575: After a Kubernetes worker node is removed from the cluster without first draining the pods on it, the pods remain stuck in Terminating status indefinitely.

    NSX Application platform and applications on it might function partially or not function as expected.

    Workaround: Manually delete each of the pods that display a Terminating status using the following information.

    1. From the NSX Manager or the runner IP host (Linux jump host from which you can access the Kubernetes cluster), run the following command to list all the pods with the Terminating status.

      kubectl get pod -A | grep Terminating

    2. Delete each pod listed using the following command.

      kubectl delete pod <pod-name> -n <pod-namespace> --force --grace-period=0

  • Issue 3017840: An Edge switchover does not happen when the uplink IP address is changed.

    A wrong HA state might result in black-holing of traffic.

    Workaround: Toggle the BFD state. Before you change the uplink IP address of an Edge, put the Edge node in maintenance mode, as shown below.
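
    For reference, maintenance mode can be toggled from the Edge node CLI; the command names below are assumptions based on the NSX Edge CLI, so confirm them in the CLI reference for your version:

      # On the Edge node, before changing the uplink IP address
      set maintenance-mode enabled

      # After the change, once the uplink and BFD sessions are stable again
      set maintenance-mode disabled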
