
VMware NSX 4.0.1.1 | 13 OCT 2022 | Build 20598726

Check for additions and updates to these release notes.

What's New

NSX 4.0.1.1 provides a variety of new features for virtualized networking and security across private, public, and multi-cloud environments. Highlights include new features and enhancements in the following focus areas:

  • DPU-based Acceleration for NSX: VMware now supports DPU-based Acceleration for NSX in this release. This feature enables accelerated networking, security (Tech Preview), enhanced visibility, and compute resource savings.

In addition, many other capabilities have been added in every area of the product. More details are available below in the detailed description of added features.

Layer 3 Networking

  • EVPN Route Server Mode Enhancements - Added support for two new EVPN Route Server mode topologies:

    • Support for static routes between Tier-0 VRF Gateway and hosted VNF. 

    • Northbound L3ECMP for load-balancing the traffic from ESXi hypervisors to Datacenter Gateways.

  • BFD IPv6 - BFD is a failure detection protocol designed to enable fast forwarding-path failure detection and convergence. This release adds BFD support for IPv6 BGP neighbors and IPv6 static routes.

DPU-based Acceleration

DPU-based Acceleration for NSX: NSX now offers support for data processing units (DPUs), allowing customers to offload various NSX functionality and leverage more of the host CPU for compute virtualization. In addition, the DPU allows for accelerated network performance. The security functionality of this feature is considered Tech Preview and is not recommended for production deployments.

The following NSX capabilities are supported with DPU-based Acceleration for NSX:

  • Networking:

    • Overlay and VLAN based segments

    • Distributed IPv4 and IPv6 routing

    • NIC teaming across the SmartNIC / DPU ports

  • Security (Tech Preview)

    • Distributed Firewall

    • Distributed IDS/IPS

  • Visibility and Operations

    • Traceflow

    • IPFIX

    • Packet Capture 

    • Port Mirroring

    • Statistics

  • Supported Vendors

    • NVIDIA BlueField-2 (25Gb NIC models only) – (UPT - Tech Preview)

    • AMD / Pensando (25Gb and 100Gb NIC models)

  • Scale

    • A single DPU is supported per host, consumed by a single VDS

  • Uniform Passthrough (UPT V2): DPU-based Acceleration for NSX supports bypassing the host-level ESXi hypervisor to allow direct access to the DPU, giving customers a high level of performance without sacrificing the features they leverage from vSphere and NSX.

Edge platform

  • SmartNIC support for Edge VM: DPDK vmxnet3 driver updates to support DPU-based (SmartNIC) pNICs for datapath interfaces on the Edge VM form factor. Traffic through the Edge VM benefits from this hardware acceleration. It can only be enabled on all datapath interfaces at the same time.

  • Events and Alarms: This release introduces new alarms to improve NSX Edge node visibility and troubleshooting:

    • Alarms for CPU utilization

    • Alarms for communication between the NSX Manager/CCP and the NSX Edge Node

    • Alarms for maximum IPv4 and IPv6 routes in the routing table and BGP peer maximum advertised prefixes

  • Nvidia ConnectX-6: added support for Nvidia ConnectX-6 for Bare Metal Edge, offering more throughput for the Bare Metal Edge.

  • Stateful Active-Active Edge Services: This release introduces support for stateful services on Tier-0 and Tier-1 gateways in Active-Active HA mode. The following stateful services are supported: L4/L7 Gateway Firewall, URL Filtering, NAT, and TLS Inspection.

  • NSX Edge relocate API: Added an option to gracefully relocate the standby Tier-0 SR to another Edge VM when auto allocation is enabled.

Distributed Firewall

  • L7 AppID: Additional AppIDs have been added. Refer to the documentation for a complete list of available AppIDs.

  • FQDN Analysis: Better IP mapping when an FQDN's resolution chain involves multiple CNAMEs.

  • Alarms: Additional alarms related to connections per second and firewall rules per vNIC/host have been added.

Intrusion Detection & Prevention (IDS/IPS)

  • Alarms: Additional alarms related to approaching/crossed CPU and Network thresholds have been added.

  • Oversubscription configuration: The IDS/IPS engine can now be globally configured to either drop or bypass traffic in case of CPU oversubscription. This configuration can also be applied at the rule level if granular behavior is needed.
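
    As a rough sketch of what the global setting could look like through the Policy API (the endpoint path and the "oversubscription" field are assumptions based on Policy API conventions, not confirmed by these notes; verify against the NSX API Guide):

      # Hedged sketch: globally set IDS/IPS oversubscription behavior to BYPASSED.
      # Endpoint and field name are assumptions; check the NSX API Guide.
      curl -k -u admin:'<password>' -X PATCH \
        "https://<nsx-mgr>/policy/api/v1/infra/settings/firewall/security/intrusion-services" \
        -H "Content-Type: application/json" \
        -d '{"oversubscription": "BYPASSED"}'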

Malware Prevention

  • Support for Distributed Malware detection and prevention on Linux VMs:

    • Malware detection and prevention on the Distributed Firewall now supports Linux guest endpoints (VMs) running on vSphere host clusters that are prepared for NSX.

  • Enhanced filtering for Potential Malware on NSX UI:

    • The following additional filter criteria are supported on both the bubble chart and the table in the Malware Prevention > Potential Malware page:

      • Blocked

      • File Type

      • Malware Class

      • Malware Family

  • Support for all the file categories for local and cloud file analysis on the Distributed Malware Prevention:

    • Previously on the Distributed Firewall, NSX Malware Prevention supported local and cloud file analysis only for Windows Portable Executable (PE) files. Now all the file categories are supported for local and cloud file analysis on the Distributed Firewall for both Windows and Linux guest endpoints (VMs).

  • Malware Prevention Health issue Alarms:

    • Additional alarms for Malware Prevention Health Issues have been added.

Gateway Firewall

  • Gateway Malware Detection Support on Bare Metal Edge Node: The Malware Prevention feature can be configured on bare metal edge nodes to detect known and unknown zero-day malware.

  • Gateway Firewall statistics provide visibility into connections per session per rule, maximum connections per host, and connections per rule.

Service insertion

  • Additional Alarms have been added related to service deployment, health and liveness.

Federation

  • Three firewall-related features are available from the Global Manager:

    • Enable/disable of firewalls from the Global Manager for each site.

    • Support for excluding workloads from the Distributed Firewall via exclusion lists.

    • Time-based Distributed Firewall rules can be provisioned from the Global Manager.

NSX Application Platform and associated services

  • NSX 4.0.1.1 is compatible with NSX Application Platform version 4.0.1.0, along with the related NSX features (NSX Intelligence, NSX Network Detection and Response, NSX Malware Prevention, and NSX Metrics) that run on NAPP.

  • Updated K8s version support: 1.20 to 1.24.

  • Enhanced verification for container images.

  • OCI-compliant Helm charts are available. Use the OCI-compliant charts when using NSX 4.0.1.1 to deploy NSX Application Platform 4.0.1. If you are deploying NSX Application Platform 4.0.1 using NSX version 3.2.x or 4.0.0.1, you must use the ChartMuseum-compatible URLs to obtain the Helm chart and Docker images. For details, see the Deploying and Managing the VMware NSX Application Platform documentation.
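
    For example, with Helm 3.8 or later an OCI-compliant chart can be pulled directly. This is a hedged sketch only; the registry host, project path, and chart name below are illustrative placeholders, so use the exact URL from the Deploying and Managing the VMware NSX Application Platform documentation:

      # Pull an OCI-compliant NAPP Helm chart (Helm >= 3.8 supports oci://).
      helm pull oci://<registry-host>/<project>/helm-charts/nsx-application-platform --version 4.0.1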

Installation and upgrade

  • vSphere with Tanzu - Partial Maintenance Mode notified in the NSX UI - Get notified of vCenter operations such as vSphere with Tanzu Partial Maintenance Mode within NSX UI to avoid operation failures during host maintenance.

  • Flexibility to upgrade selected TN groups across clusters during NSX upgrade - Previously, the NSX upgrade order was fixed: All Edges → All Hosts → NSX Manager. That meant users had to upgrade all their Edge nodes before they could start upgrading hosts. Starting with NSX 4.0.1, there is more flexibility in upgrading Edges and Hosts: NSX now allows users to create Edge or Host groups and upgrade selected groups one at a time, without being locked into the previous sequence. This helps users better align NSX upgrades with their business needs.

    However, please keep in mind:

    • A group can contain only Edges or only Hosts, not a combination of the two.

    • While you can move back and forth, upgrading select Edge or Host Groups, the NSX Manager can only be upgraded after all Edges and Hosts have been upgraded. 

  • Command "del nsx" to remove NSX from a Bare Metal Windows host -Ability to clean up NSX from bare metal Windows host via a single command - "del nsx". NOTE: this will clear NSX completely from the host. Please use this capability with caution.

  • Support SSH Keys / Certificates when taking Backups of NSX - NSX backups now support the use of SSH keys in addition to username and password as a selectable option for backup server authentication methods.

  • Resync TN - MP communication through the NSX UI - Config changes can sometimes lead to Transport Nodes going out of sync with the Management Plane. The management plane needs to push the config again to resync these nodes. Now, this can be done from the NSX UI. NOTE: This operation can cause traffic disruption. Use this capability with caution.
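
    A minimal session sketch of the "del nsx" cleanup mentioned above, assuming nsxcli is the CLI entry point on the bare metal Windows host (an assumption; check the installation guide for the exact entry point):

      nsxcli     # open the NSX CLI on the Windows host (assumed entry point)
      del nsx    # removes NSX completely from the host; use with caution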

Operations and Monitoring

  • Alarms Enhancement

    • To make Alarm Recommended Action steps easier to follow, a link to a Knowledge Base article that details the steps to be followed is now part of the Recommended Actions. In this release, certificate expiration alarms include a Knowledge Base article.

  • Network Monitoring with Time-Series Metrics

    • New NSX Edge monitoring metrics - Introduces 16 additional NSX Edge metrics for ease of monitoring and troubleshooting, including flow cache metrics, queue occupancy for fast path interfaces, and NIC throughput on ingress and egress on the NSX Edge fast path interfaces. These new metrics are available through NSX Application Platform - Metrics API.

    • Enhanced UI for NSX Edge monitoring metrics - Enhances NSX Edge Transport Node Monitoring UI page to show Current Packets Processed, Highest Packets Processed, Packets Processed Trend over time. Enhances NSX Edge Network Interface Statistics > Dropped Packets Trend UI page to show Rx packet drops per second (due to memory buffer allocation failures, due to lookup match failure).

    • Enhanced UI for NSX Tier0 & Tier1 Gateway Interface Statistics - Adds UI trend charts to show Tier0 and Tier1 Gateway Network Utilization Trend, to show IPv4 vs IPv6 packets per second.

  • Logging

    • Core Dump enhancements - Improvements to management and serviceability of core dump can be made through the CLI. Refer to the CLI guide for more information.

  • CLI

    • Several CLI based enhancements (refer to the CLI guide for details) -

      • While troubleshooting, filter log bundles by age of the logs

      • Get information on Transport Node: NSX Manager mapping, NTP configurations, & certificate information

      • Get information on NSX Edge like VRF details & packet capture fields

      • Get IP & hostnames of Transport Nodes when retrieving node information

VPN

  • VPN Monitoring - Provides visibility into VPN statistics:

    • Tunnel status UP or Down

    • Rx/Tx bytes per sec

    • Rx/Tx packets per sec

    • Packet drop rate

    • Drop reasons

Usability and User Interface

  • Search and Filter Enhancements - Enhanced the search and filtering capability. Added ability to filter Malware Prevention Events by Malware Class / Malware Family / File Type / Blocked.

NSX for vSphere to NSX Migration

  • Bring your own Topology NSX native Load Balancer support - As part of the deprecation of the NSX Policy APIs for ALB, the migration coordinator adds support for configuration-only migration from the NSX for vSphere load balancer to the NSX native load balancer. Previously, this was supported only for in-place migration.

  • New mode in Migration Coordinator for lift and shift - Configuration and Edge Migration - This mode migrates both configuration and Edges, and establishes a performance-optimized distributed bridge between the NSX-V source environment and the NSX destination environment to maintain connectivity during the lift and shift. With this mode, a compatible HCX release may optionally be leveraged for workload migration. This mode is for Local Manager only.

Multi Tenancy

  • Introduction of Multi-Tenancy in NSX

    • Introduction of Projects -  Provides the ability to segment a given NSX instance by creating Projects in addition to the default context (objects directly under /infra in Policy API). A Project is a construct offering isolation out of the box (objects under org/default/projects/<project-id>/infra) segmenting the platform. It allows different users to work on the same NSX instance by giving them only access to their own logical objects (Tier-1s, segments, groups, firewall rules...). In NSX 4.0.1, this is an API only feature.

Unless specifically made available by the NSX Administrator, those users cannot view or edit configurations outside their Projects. (See the API sketch after this list.)

  • Provider / Tenant Model - The NSX Administrator creates and manages the Projects and specifies the Edge Clusters, the Tier-0s, and users.

    In addition, the NSX Administrator can view all Project objects and manage platform-wide security by applying security rules across Projects (not modifiable by Project users).

  • Multi-Tenancy Role-Based Access Control - Added two new roles, Org Admin and Project Admin, to align with the introduction of multi-tenancy into the platform.
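
    As an illustration of the Project-scoped Policy API paths described above, a hedged sketch follows; the "orgs/default" prefix, the project ID "dev-team", the Tier-1 name, and the credentials are assumptions or placeholders, so verify the exact paths in the NSX API Guide:

      # Create a Tier-1 gateway scoped to a Project via the Policy API (sketch).
      curl -k -u admin:'<password>' -X PUT \
        "https://<nsx-mgr>/policy/api/v1/orgs/default/projects/dev-team/infra/tier-1s/t1-dev" \
        -H "Content-Type: application/json" \
        -d '{"display_name": "t1-dev"}'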

NSX vSphere UI Integration

  • Support for NSX Manager Clustering with NSX vSphere UI Integration.

  • NSX Events Integrated in vCenter.

  • UI Enhancements (skip workflow, EULA, security enhancements).

  • Workaround to register NSX as a Solution Asset with vCenter.

  • Back up and restore support for NSX Manager deployed using the vCenter Plugin.

Feature Deprecation

VMware intends to deprecate the built-in NSX load balancer and recommends customers migrate to NSX Advanced Load Balancer (Avi) as soon as practical. VMware NSX Advanced Load Balancer (Avi) provides a superset of the NSX load balancing functionality and VMware recommends that you purchase VMware NSX Advanced Load Balancer (Avi) Enterprise to unlock enterprise grade load balancing, GSLB, advanced analytics, container ingress, application security and WAF.

We are giving advance notice now to allow existing customers who use the built-in NSX load balancer time to migrate to NSX Advanced Load Balancer (Avi). Support for the built-in NSX load balancer for customers using NSX-T Data Center 3.x will remain for the duration of the NSX-T Data Center 3.x release series. Support for the built-in NSX load balancer for customers using NSX 4.x will remain for the duration of the NSX 4.x release series. Details for both are described in the VMware Product Lifecycle Matrix. We do not intend to provide support for the built-in NSX load balancer beyond the last NSX 4.x release.


API Deprecation and Behavior Changes

  • New pages on API deprecation and removal have been added to the NSX API Guide to simplify API consumption. These pages list the deprecated APIs and types, and the removed APIs and types.

  • Removed APIs: No APIs have been removed in this release.

  • Deprecated APIs: The following APIs are marked as deprecated. Refer to the NSX API Guide for additional details.

  • GET /policy/api/v1/global-infra/security-global-config
    Replacement: None. This API has become obsolete; the values of this config are no longer used.

  • GET /policy/api/v1/infra/security-global-config
    Replacement: None. This API has become obsolete; the values of this config are no longer used.

  • PUT /policy/api/v1/global-infra/security-global-config
    Replacement: None. This API has become obsolete; the values of this config are no longer used.

  • PUT /policy/api/v1/infra/security-global-config
    Replacement: None. This API has become obsolete; the values of this config are no longer used.

  • POST /api/v1/trust-management/certificates?action=set_pi_certificate_for_federation
    Replacement: POST /api/v1/trust-management/certificates/?action=apply_certificate&service_type=GLOBAL_MANAGER
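
For instance, the replacement certificate API might be invoked as follows. This is a hedged sketch; the certificate ID path segment and the credentials are assumptions, so consult the NSX API Guide for the exact request:

  # Apply a certificate for the Global Manager service (sketch).
  curl -k -u admin:'<password>' -X POST \
    "https://<nsx-mgr>/api/v1/trust-management/certificates/<cert-id>?action=apply_certificate&service_type=GLOBAL_MANAGER"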

Compatibility and System Requirements

For compatibility and system requirements information, see the VMware Product Interoperability Matrices and the NSX Installation Guide.

Upgrade Notes for This Release

For instructions about upgrading NSX components, see the NSX Upgrade Guide.

Customers upgrading to this release should run the NSX Upgrade Evaluation Tool before starting the upgrade process. The tool is designed to ensure success by checking the health and readiness of your NSX Managers prior to upgrading. The tool is integrated into the upgrade workflow, before you begin upgrading the NSX Managers.

Available Languages

NSX has been localized into multiple languages: English, German, French, Japanese, Simplified Chinese, Korean, Traditional Chinese, Italian, and Spanish. Because NSX localization utilizes the browser language settings, ensure that your settings match the desired language.

Document Revision History

Revision Date    Edition    Changes

Oct 13, 2022     1          Initial Edition.
Nov 2, 2022      2          Added an entry for known issues 3046183 and 3047028.
Nov 4, 2022      3          Added more details about the OCI-compliant Helm charts available with NSX Application Platform 4.0.1.
Nov 30, 2022     4          Updated the Feature Deprecation section. Added known issue 3069457.
Feb 9, 2023      5          Added known issue 3074054.
May 3, 2023      6          Added resolved issue 3023598.
May 19, 2023     7          Added known issue 3116294.
Dec 13, 2023     8          Added known issue 3296976.
Jan 4, 2024      9          Added resolved issue 3037223.

Resolved Issues

  • Fixed Issue 3037223: IPSec VPN service events (changes to configuration, Edge CLI commands, changes to LR HA states) are not serviced, resulting in timeouts.

    Depending on the nature of the un-serviced event, datapath may be impacted. For example, if the HA state changes, datapath may be impacted if the IPSec Service stays in the STANDBY state. For CLI timing out, the impact may be limited to monitoring status/stats collection.

  • Fixed Issue 3023598: When Virtual Distributed Router (VDR) connect info is updated, some packets routed by VDR might be leaked.

    The impacted VM might fail to send out packets.

  • Fixed Issue 2983892: The Kubernetes pod associated with the NSX Metrics feature intermittently fails to detect the ingress traffic flows.

    When the Kubernetes pod associated with the NSX Metrics feature intermittently fails to detect the ingress traffic flows, the ingress metrics data does not get stored. As a result, the missing data affects the metrics data analysis performed by other NSX features, such as NSX Intelligence, NSX Network Detection and Response, and NSX Malware Prevention.

  • Fixed Issue 2882154: Some of the pods are not listed in the output of "kubectl top pods -n nsxi-platform".

    The output of "kubectl top pods -n nsxi-platform" does not list all pods for debugging. This does not affect deployment or normal operation; there is no functional impact. Only debugging might be affected.

  • Fixed Issue 3012313: Upgrading NSX Malware Prevention or NSX Network Detection and Response from version 3.2.0 to NSX ATP 3.2.1 or 3.2.1.1 fails.

    After the NSX Application Platform is upgraded successfully from NSX 3.2.0 to NSX ATP 3.2.1 or 3.2.1.1, upgrading either the NSX Malware Prevention or NSX Network Detection and Response feature fails with one or more of the following symptoms.

    • The Upgrade UI window displays a FAILED status for NSX NDR and the cloud-connector pods.

    • For an NSX NDR upgrade, a few pods with the prefix of nsx-ndr are in the ImagePullBackOff state.   

    • For an NSX Malware Prevention upgrade, a few pods with the prefix of cloud-connector are in the ImagePullBackOff state.   

    • The upgrade fails after you click UPGRADE, but the previous NSX Malware Prevention and NSX NDR functionalities still function the same as before the upgrade was started. However, the NSX Application Platform might become unstable.

  • Fixed Issue 2936504: The loading spinner appears on top of the NSX Application Platform's monitoring page.

    When you view the NSX Application Platform page after the NSX Application Platform is successfully installed, the loading spinner is initially displayed on top of the page. This spinner might give the impression that there is a connectivity issue when there is none.

  • Fixed Issue 2884939: The NSX rate limit of 100 requests per second is hit when migrating a large number of virtual servers from NSX for vSphere to NSX ALB, and all APIs are blocked for some time.

    The NSX-T Policy API returns the error: Client 'admin' exceeded request rate of 100 per second (Error code: 102) for some time after the migrate config step is triggered during migration.

  • Fixed Issue 2885330: All effective VMs and IPs are not shown correctly when multiple AD users log in to different VMs, for a policy group that has an identity member with multiple AD users or multiple AD groups.

    Members of the identity group are not displayed. No datapath impact.

  • Fixed Issue 2958032: When you click to see the details of an inspected file on the NSX Malware Prevention dashboard, the file type is not shown properly and is truncated at 12 characters.

    You would not see the correct file type for the inspected file.

  • Fixed Issue 2912599: Distributed Load Balancer status is degraded.

    DLB traffic on this LSP cannot be handled by DLB.

  • Fixed Issue 2978995: vCenter Server uses the same Virtual Interface (VIF) ID when two virtual NICs are connected to the same ephemeral DVPG and powered on.

    One of the ports does not behave as expected with the DFW rules.

  • Fixed Issue 2981861: When power-off and delete operations from vCenter Server fail due to stale VM moref IDs or vCenter Server communication failures, an Edge redeploy operation may end up with two Edge VMs functioning at the same time, possibly resulting in IP conflicts and other issues.

    This issue can occur in a number of ways:

    • vCenter Server compute manager is not reachable when redeploy or replace operation is being carried out.

    • An ESXi host is unregistered and re-registered, resulting in new Managed Object reference IDs being generated for the VMs (including Edge VMs) and the host. The inventory entry will still have the old moref ID, and vCenter Server power-off and delete operations with this stale moref ID will fail during Edge redeploy/replace operations.

  • Fixed Issue 2988913: The maximum number of supported vNICs for polling-mode Enhanced Datapath is 108 with SmartNIC offload.

    No more than 108 vNICs can be connected to an Enhanced Datapath switch with SmartNIC offload enabled.

  • Fixed Issue 2992062: NSX Edge deployment from vCenter 7.0.1 fails when mixed ESX versions are present in the cluster.

    Unable to deploy NSX Edge.

  • Fixed Issue 2993353: Logical span lost one packet on mirror destination.

    Not all of the packets are on the mirrored destination.

  • Fixed Issue 2996964: Host failed to migrate because an uplink name was not found in the UplinkHostSwitchProfile.

    The process will get stuck at host migration.

  • Fixed Issue 3006369: NSX Malware Prevention Service feature activation fails because of missing license information in the platform-licenses configmap.

    You will not be able to use Malware Prevention functionality because the feature could not be activated.

  • Fixed Issue 3010374: On a fresh installation of the ESXi OS, the max_vfs config value is reset to 0.

    You need to reconfigure the max_vfs parameter after every fresh OS installation.

  • Fixed Issue 3017520: Network Oversubscription DROPPED and BYPASSED alarms are not getting generated.

    You can see these two alarms defined in the list of alarms, but those alarms will not be triggered.

  • Fixed Issue 3019816: Post upgrade from NSX 3.2.0, the logical port entity's logical_switch_id field is null, leading to issues in the Central Control Plane.

    Firewall is affected after the upgrade.

  • Fixed Issue 3033225: NSX Malware Prevention service does not analyze .html and .htm extension file events, and no logs are generated.

    Because .htm or .html files are downloaded using "Save Link As", the browser on the guest OS sends the associated .lnk files to the Malware Prevention Service for scanning.

  • Fixed Issue 3036151: After upgrading a host with a SmartNIC-backed NVIDIA® BlueField®-2 data processing unit (DPU), the host is in a partial success state.

    Host switch information is lost and the datapath is broken.

  • Fixed Issue 3037901: Post NSX upgrade, incorrect service IP realization leads to removal of some service IPs from loopback port.

    Service IP removal from LR ports, leading to traffic loss.

  • Fixed Issue 3005825: Bare Metal Edge management bond secondary interfaces may be lost after a reboot, or there may be incorrectly named interfaces.

    Loss of Edge management connectivity. Possible datapath connectivity issues.

  • Fixed Issue 3003433: Occasional DNS packets can be mis-enforced on certain firewall configurations.

    This will affect the initial DNS packets only. It will not be seen continuously on every DNS packet. There is no issue with FQDN enforcement.

  • Fixed Issue 2957504: After performing backup and restore and the required force full sync, any attempt to switchover to standby fails.

    You will not be able to perform a switchover immediately after backup-restore and force full sync.

  • Fixed Issue 2957150: Packet drop on IPsec Tunnel after upgrade to 3.2.0.1.

    Packet drop on IPsec Tunnel after upgrade to 3.2.0.1.

  • Fixed Issue 2884692: Adding or copying a manual binding from a discovered/realized binding fails with an exception.

    You will not be able to configure manual bindings on segment ports through policy API.

  • Fixed Issue 3024081: The default behavior for tackling the oversubscription of the IDPS engine has changed from packets-dropped to packets-bypassed.

    If the packets are oversubscribed to the IDPS engine, then instead of dropping them, they would bypass the engine by default. This means that, by default, the packets that cannot be processed by the IDPS engine will be let through to their destination.

    Users will see alarms when the IDPS engine is oversubscribed at a high/very high level, and also when the traffic gets bypassed from the engine as a result of oversubscription.

Known Issues

  • Issue 3296976: Gateway Firewall may allow usage of unsupported Layer 7 App IDs as part of Context/L7 Access Profiles.

    Refer to the following page, which lists the App IDs supported in each NSX release: https://docs.vmware.com/en/NSX-Application-IDs/index.html

    Workaround: None

  • Issue 3116294: Rule with nested group does not work as expected on hosts.

    Traffic not allowed or skipped correctly.

    Workaround: See knowledge base article 91421.

  • Issue 3074054: Service interface is not supported for Tier-0 Stateful Active-Active Configuration

    From NSX 4.0.2 release onwards, Tier-0 Gateway in Stateful Active-Active configuration does not support service interface.

    Workaround: Use external interface with BGP or BFD instead of service interface.

  • Issue 3069457: During NSX Security Only deployment upgrade from 3.2.x to 3.2.2 or 4.0.1.1, the host upgrade fails with the message, "NSX enabled switches already exist on host."

    Hosts on the UI show the status as Failed after upgrade and may create a datapath impact.

    Workaround: See knowledge base article 90298 for details.

  • Issues 3046183 and 3047028: After activating or deactivating one of the NSX features hosted on the NSX Application Platform, the deployment status of the other hosted NSX features changes to In Progress. The affected NSX features are NSX Network Detection and Response, NSX Malware Prevention, and NSX Intelligence.

    After deploying the NSX Application Platform, activating or deactivating the NSX Network Detection and Response feature causes the deployment statuses of the NSX Malware Prevention feature and the NSX Intelligence feature to change to In Progress. Similarly, activating or deactivating the NSX Malware Prevention feature causes the deployment status of the NSX Network Detection and Response feature to change to In Progress. If NSX Intelligence is activated and you activate NSX Malware Prevention, the status for the NSX Intelligence feature changes to Down and Partially up.

    Workaround: None. The system recovers on its own.

  • Issue 3038658: When a restore is performed in a 1K hypervisor scale setup, the NSX service (proton) crashes due to out-of-memory (OOM) issues.

    Restore process may run longer as the NSX service restarts.

    Workaround: NSX service crashes are seen during the restore procedure. After the restore finishes, the proton service stabilizes and does not crash.

  • Issue 3043151: L7 classification of firewall flows may not be available for a brief period when the system encounters heavy traffic that requires AppID determination.

    Sometimes, L7 rules might not hit on hosts that are under heavy traffic.

    Workaround: If you encounter this issue, reduce the traffic stress on the host.

  • Issue 3039159: vMotion of VMs with an interface in Uniform Passthrough (UPT) mode and a high number of flows might cause the host to PSOD.

    The host PSODs, which impacts traffic during vMotion of UPT-based VMs with a high number of flows.

    Workaround: Avoid vMotion of UPT based VMs.

  • Issue 3047608: After CSM appliance deployment, the CSM UI is not accessible after login and the nsx-cloud-service-manager service is down.

    The Day 0 CSM UI is down after login.

    Workaround: See knowledge base article 89762 for details.

  • Issue 3027580: If DVPGs that map to discovered segments belonging to inventory groups are deleted in vCenter Server, where those DVPGs are attached to hosts in a cluster prepared for security-only, the discovered segments are never cleaned up.

    There is no functional impact. The NSX Manager UI displays stale objects.

    Workaround: After you remove the stale ports/DVPGs from the groups, create a new DVS in vCenter Server and attach the hosts in the cluster prepared for security to the new mock DVS. This action triggers the deletion of the stale ports but not the stale segments.

  • Issue 3041672: For config-only and DFW migration modes, once all the migration stages are successful, you invoke the pre-migrate and post-migrate APIs to move workloads. If you change the credentials of NSX for vSphere, vCenter Server, or NSX after the migration stages are successful, subsequent calls to the pre-migrate and post-migrate APIs will fail.

    You will not be able to move the workloads because the pre-migrate, post-migrate and finalize-infra API calls will fail.

    Workaround: Perform these steps.

    1. Re-start the migration coordinator.

    2. On the migration UI, using the same migration mode as before restart, provide all the authentication details. This should sync back the migration progress.

    3. Run the pre-migrate, post-migrate, finalize infra APIs.

  • Issue 3043600: The NSX Application Platform deployment fails when you use a private (non-default) Harbor repository with a self-signed certificate from a lesser-known Certificate Authority (CA).

    If you attempt to deploy the NSX Application Platform using a private (non-default) Harbor repository with a self-signed certificate from a lesser-known CA, the deployment fails because the deployment job is unable to obtain the NSX Application Platform Helm charts and Docker images. Because the NSX Application Platform did not get deployed successfully, you cannot activate any of the NSX features, such as NSX Intelligence, that the platform hosts.

    Workaround: Use a well-known trusted CA to sign the certificate you are using for your private Harbor server.

  • Issue 2491800: Async Replicator channel port - certificate attributes are not periodically checked for expiry or revocation

    This could lead to using an expired/revoked certificate for an existing connection.

    Workaround: Any re-connects would use the new certificates (if present) or throw an error since the old certificate is expired or revoked. To trigger reconnect, restarting Appliance Proxy Hub (APH) on the manager node would suffice.

  • Issue 2994066: Failed to create mirror vmknic on ESXi 8.0.0.0 and NSX 4.0.1

    Unable to enable L3SPAN with mirror stack as mirror vmknic could not be created.

    Workaround:

    1. From the ESXi CLI prompt, create a mirror netstack on ESXi by running the following command:

    esxcli network ip netstack add -N mirror

    2. From the vSphere Web Client, create a vmknic on the mirror netstack.
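
    If you prefer to stay on the CLI for step 2 as well, a vmknic can likely be created on the mirror netstack directly with esxcli. This is a hedged sketch; the interface and port group names are placeholders, and flag support may vary by ESXi build:

      # Create a vmknic on the mirror netstack from the ESXi CLI (sketch).
      esxcli network ip interface add --interface-name=vmk2 --portgroup-name=<mirror-portgroup> --netstack=mirror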

  • Issue 3007396: If the remote node is set to slow LACP timeout mode, there can be a traffic drop of around 60-90 seconds when one of the LAG links is brought down via the CLI command "esxcli network nic down -n vmnicX".

    The issue is only observed if the remote node LACP timeout is set to SLOW mode.

    Workaround: Set the external switch LACP timeout to FAST.

  • Issue 3013751: NSX Install/Uninstall Progress is not visible automatically on the Fabric Page

    Progress is visible after manually refreshing the Fabric page.

    Workaround: Manually refresh the Fabric Page.

  • Issue 3018596: A Virtual Function (VF) is released if you set the VM virtual NIC (vNIC) MTU on the guest VM to be greater than the physical NIC MTU.

    Once the vNIC MTU is changed to be greater than the pNIC MTU, the VM is not able to acquire a VF. Hence, the VM will not be in UPT mode; it will be in "emu vmnic" mode.

    Workaround: Perform these steps:

    1. Change the MTU size on the DVS from 1600 to 9100.

    2. The VF gets assigned back to the VM vNIC.

  • Issue 3042382: The session should be looked up again when a packet matches a No-SNAT rule.

    Traffic matching a No-SNAT rule is stuck.

    Workaround: Disable the No-SNAT rule.

  • Issue 3003762: Uninstallation of the NSX Malware Prevention Service fails if you do not delete Malware Prevention Service rules from policy, and no error message indicates that uninstallation failed because rules are still present in policy.

    Uninstallation of NSX Malware Prevention Service will fail in this scenario.

    Workaround: Delete rules and retry uninstallation.

  • Issue 3003919: DFW rules matching CNAMEs that have different actions than the DFW rule matching the original domain name lead to inconsistent rule enforcement.

    In the unlikely case of an application or user accessing a CNAME instead of the original domain name, traffic may incorrectly bypass or be dropped by the DFW rules.

    Workaround: Perform one of the following steps:

    • Configure the DFW rules only for the original domain name, not the CNAME.

    • Configure the rules with the domain name and CNAMEs with the same action.

  • Issue 3014499: Powering off an Edge handling cross-site traffic causes disruption to some flows.

    Some cross-site traffic stopped working.

    Workaround: Power on the powered-off edge.

  • Issue 3014978: Hovering over the Networking & Security flag on the Fabric Page shows incorrect information regardless of the network selected.

    No impact.

    Workaround: None.

  • Issue 3014979: NSX Install/Uninstall Progress is not visible automatically on the Fabric Page

    No impact. Manually refresh the page to see the progress.

    Workaround: Manually refresh the Fabric Page.

  • Issue 3017885: FQDN analysis can only support one sub-cluster in stateful Active/Active mode.

    Do not enable the feature if the stateful Active/Active setup has more than one sub-cluster.

    Workaround: Deploy only one sub-cluster.

  • Issue 3044704: NSX Manager only supports an HTTP proxy without SSL bump, even though the configuration page accepts the HTTPS scheme and a certificate.

    When you configure the proxy in System > General Settings > Internet Proxy Server, you provide details such as the scheme (HTTP or HTTPS), host, and port.

    The scheme means the type of connection established between NSX Manager and the proxy server; it does not mean the type of the proxy. Typically, an HTTPS proxy uses the HTTPS scheme, but a proxy like Squid configured with http_port + SSL bump also establishes an HTTPS connection between NSX Manager and the proxy server. However, services in NSX Manager always assume the proxy is exposed with an HTTP port, so the certificate you provide is never used by NSX Manager.

    When you choose the HTTPS scheme and provide a certificate, the system displays the error "Certificate xx is not a valid certificate for xx." for an HTTPS proxy in the configuration page.

    No error is displayed if you try to use a Squid proxy with http_port + SSL bump, but NSX Manager services fail to send requests to outside servers (an "Unable to find certificate chain." error can be seen in the log).

    Workaround: Configure an HTTP proxy server (with scheme HTTP).

  • Issue 3010038: On a two-port LAG that serves Edge Uniform Passthrough (UPT) VMs, if the physical connection to one of the LAG ports is disconnected, the uplink will be down, but Virtual Functions (VFs) used by those UPT VMs will continue to be up and running as they get connectivity through the other LAG interface.

    No impact.

    Workaround: None.

  • Issue 3009907: NSX VIBs are not deleted from a SmartNIC host if the host was in a disconnected state during the "Remove NSX" operation on the cluster.

    No functional impact.

    Workaround: In vCenter Server, go to vLCM UI and remediate the cluster.

  • Issue 2490064: Attempting to disable VMware Identity Manager with "External LB" toggled on does not work.

    After enabling VMware Identity Manager integration on NSX with "External LB", if you attempt to then disable integration by switching "External LB" off, after about a minute, the initial configuration will reappear and overwrite local changes.

    Workaround: When attempting to disable vIDM, do not toggle the External LB flag off; only toggle off vIDM Integration. This will cause that config to be saved to the database and synced to the other nodes.

  • Issue 2558576: Global Manager and Local Manager versions of a global profile definition can differ and might have an unknown behavior on Local Manager.

    Global DNS, session, or flood profiles created on Global Manager cannot be applied to a local group from UI, but can be applied from API. Hence, an API user can accidentally create profile binding maps and modify global entity on Local Manager.

    Workaround: Use the UI to configure the system.

  • Issue 2355113: Workload VMs running RedHat and CentOS on Azure accelerated networking instances are not supported.

    In Azure, when accelerated networking is enabled on RedHat or CentOS based OSes with the NSX Agent installed, the ethernet interface does not obtain an IP address.

    Workaround: Disable accelerated networking for RedHat and CentOS based OSes.

  • Issue 2574281: Policy will only allow a maximum of 500 VPN Sessions.

    NSX claims support for 512 VPN sessions per Edge in the large form factor; however, because Policy auto-plumbs security policies, Policy only allows a maximum of 500 VPN sessions.

    Upon configuring the 501st VPN session on Tier0, the following error message is shown:

    {'httpStatus': 'BAD_REQUEST', 'error_code': 500230, 'module_name': 'Policy', 'error_message': 'GatewayPolicy path=[/infra/domains/default/gateway-policies/VPN_SYSTEM_GATEWAY_POLICY] has more than 1,000 allowed rules per Gateway path=[/infra/tier-0s/inc_1_tier_0_1].'}

    Workaround: Use Management Plane APIs to create additional VPN Sessions.
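
    A hedged sketch of creating an additional session through the Management Plane API; the /api/v1/vpn/ipsec/sessions endpoint and the payload file are assumptions, so see the NSX API Guide for the exact schema:

      # Create an IPSec VPN session via the MP API (sketch).
      curl -k -u admin:'<password>' -X POST \
        "https://<nsx-mgr>/api/v1/vpn/ipsec/sessions" \
        -H "Content-Type: application/json" \
        -d @ipsec-session.json    # session definition per the NSX API Guide schema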

  • Issue 2584648: Switching primary for T0/T1 gateway affects northbound connectivity.

    Location failover time causes disruption for a few seconds and may affect location failover or failback test.

    Workaround: None.

  • Issue 2684574: If the edge has 6K+ routes for Database and Routes, the Policy API times out.

    These Policy APIs for the OSPF database and OSPF routes return an error if the edge has 6K+ routes:

    • /tier-0s/<tier-0s-id>/locale-services/<locale-service-id>/ospf/routes

    • /tier-0s/<tier-0s-id>/locale-services/<locale-service-id>/ospf/routes?format=csv

    • /tier-0s/<tier-0s-id>/locale-services/<locale-service-id>/ospf/database

    • /tier-0s/<tier-0s-id>/locale-services/<locale-service-id>/ospf/database?format=csv

    These are read-only APIs and have an impact only if the API/UI is used to download 6K+ OSPF routes or database entries.

    Workaround: Use the CLI commands to retrieve the information from the edge.

  • Issue 2663483: The single-node NSX Manager will disconnect from the rest of the NSX Federation environment if you replace the APH-AR certificate on that NSX Manager.

    This issue is seen only with NSX Federation and with the single node NSX Manager Cluster. The single-node NSX Manager will disconnect from the rest of the NSX Federation environment if you replace the APH-AR certificate on that NSX Manager.

    Workaround: Single-node NSX Manager cluster deployment is not a supported deployment option; use a three-node NSX Manager cluster.

  • Issue 2690457: When joining an MP to an MP cluster where publish_fqdns is set on the MP cluster and where the external DNS server is not configured properly, the proton service may not restart properly on the joining node.

    The joining manager will not work and the UI will not be available.

    Workaround: Configure the external DNS server with forward and reverse DNS entries for all Manager nodes.

  • Issue 2719682: Computed fields from the Avi Controller are not synced to the intent on Policy, resulting in discrepancies between the data shown on the Avi UI and the NSX-T UI.

    Computed fields from the Avi Controller are shown as blank on the NSX-T UI.

    Workaround: Use the app switcher to check the data from the Avi UI.

  • Issue 2792485: The NSX Manager IP is shown instead of the FQDN for a manager installed in vCenter.

    The NSX-T UI integrated in vCenter shows the NSX Manager IP instead of the FQDN for the installed manager.

    Workaround: None.

  • Issue 3025104: Host goes into "Failed" state when you perform a restore using a different IP but the same FQDN.

    When a restore is performed using a different IP for the NSX Manager nodes with the same FQDN, hosts are not able to connect to the NSX Manager nodes.

    Workaround: Refresh DNS cache for the host by executing the following command.

    /etc/init.d/nscd restart
  • Issue 2799371: IPSec alarms for L2 VPN are not cleared even though L2 VPN and IPSec sessions are up.

    No functional impact except that unnecessary open alarms are seen.

    Workaround: Resolve alarms manually.

  • Issue 2838613: For ESX versions less than 7.0.3, NSX security functionality is not enabled on a VDS upgraded from version 6.5 to a higher version after security installation on the cluster.

    NSX security features are not enabled on the VMs connected to a VDS upgraded from 6.5 to a higher version (6.6+) where the NSX Security on vSphere DVPortgroups feature is supported.

    Workaround: After VDS is upgraded, reboot the host and power on the VMs to enable security on the VMs.

  • Issue 2848614: When joining an MP to an MP cluster where publish_fqdns is set and the forward or reverse lookup entry is missing in the external DNS server, or the DNS entry is missing for the joining node, forward or reverse alarms are not generated for the joining node.

    Forward/reverse alarms are not generated for the joining node even though the forward/reverse lookup entry is missing in the DNS server or the DNS entry is missing for the joining node.

    Workaround: Configure the external DNS server for all Manager nodes with forward and reverse DNS entries.

  • Issue 2853889: When creating an EVPN Tenant Config (with VLAN-VNI mapping), child segments are created, but each child segment's realization status goes into a failed state for about 5 minutes and then recovers automatically.

    It will take 5 minutes to realize the EVPN tenant configuration.

    Workaround: None. Wait 5 minutes.

  • Issue 2864929: Pool member count is higher when migrated from NSX for vSphere to Avi Load Balancer on NSX-T Data Center.

    You will see a higher pool member count. Health monitor will mark those pool members down but traffic won't be sent to unreachable pool members.

    Workaround: None.

  • Issue 2865273: The Advanced Load Balancer (Avi) Service Engine won't connect to the Avi Controller if there is a DFW rule to block ports 22, 443, 8443, and 123 prior to migration from NSX for vSphere to NSX-T Data Center.

    The Avi Service Engine is not able to connect to the Avi Controller.

    Workaround: Add explicit DFW rules to allow ports 22, 443, 8443, and 123 for SE VMs, or exclude SE VMs from DFW rules.

  • Issue 2866682: In Microsoft Azure, when accelerated networking is enabled on SUSE Linux Enterprise Server (SLES) 12 SP4 Workload VMs and with NSX Agent installed, the ethernet interface does not obtain an IP address.

    VM agent doesn't start and VM becomes unmanaged.

    Workaround: Disable Accelerated networking.

  • Issue 2868944: UI feedback is not shown when migrating more than 1,000 DFW rules from NSX for vSphere to NSX-T Data Center, but sections are subdivided into sections of 1,000 rules or fewer.

    UI feedback is not shown.

    Workaround: Check the logs.

  • Issue 2870085: Security policy level logging to enable/disable logging for all rules is not working.

    You will not be able to change the logging of all rules by changing "logging_enabled" of the security policy.

    Workaround: Modify each rule to enable/disable logging.

  • Issue 2871440: Workloads secured with NSX Security on vSphere dvPortGroups lose their security settings when they are vMotioned to a host connected to an NSX Manager that is down.

    For clusters installed with the NSX Security on vSphere dvPortGroups feature, VMs that are vMotioned to hosts connected to a downed NSX Manager do not have their DFW and security rules enforced. These security settings are re-enforced when connectivity to NSX Manager is re-established.

    Workaround: Avoid vMotion to affected hosts when NSX Manager is down. If other NSX Manager nodes are functioning, vMotion the VM to another host that is connected to a healthy NSX Manager.

  • Issue 2871585: Removal of host from DVS and DVS deletion is allowed for DVS versions less than 7.0.3 after NSX Security on vSphere DVPortgroups feature is enabled on the clusters using the DVS.

    You may have to resolve any issues in transport node or cluster configuration that arise from a host being removed from DVS or DVS deletion.

    Workaround: None.

  • Issue 2877776: "get controllers" output may show stale information about controllers that are not the master when compared to the controller-info.xml file.

    This CLI output is confusing.

    Workaround: Restart nsx-proxy on that TN.

  • Issue 2879133: Malware Prevention feature can take up to 15 minutes to start working.

    When the Malware Prevention feature is configured for the first time, it can take up to 15 minutes for the feature to be initialized. During this initialization, no malware analysis will be done, but there is no indication that the initialization is occurring.

    Workaround: Wait 15 minutes.

  • Issue 2888207: Unable to reset local user credentials when vIDM is enabled.

    You are unable to change local user passwords while vIDM is enabled.

    Workaround: vIDM configuration must be (temporarily) disabled, the local credentials reset during this time, and then integration re-enabled.

  • Issue 2889482: The wrong save confirmation is shown when updating segment profiles for discovered ports.

    The Policy UI allows editing of discovered ports but does not send the updated binding map for port update requests when segment profiles are updated. A false positive message is displayed after clicking Save. Segments appear to be updated for discovered ports, but they are not.

    Workaround: Use MP API or UI to update the segment profiles for discovered ports.

  • Issue 2898020: The error 'FRR config failed:: ROUTING_CONFIG_ERROR (-1)' is displayed on the status of transport nodes.

    The edge node rejects a route-map sequence configured with a deny action that has more than one community list attached to its match criteria. If the edge nodes do not have the admin intended configuration, it results in unexpected behavior.

    Workaround: None.

  • Issue 2910529: Edge loses IPv4 address after DHCP allocation.

    After the Edge VM is installed and receives an IP from the DHCP server, it loses the IP address within a short time and becomes inaccessible. This happens because the DHCP server does not provide a gateway.

    Workaround: Ensure that the DHCP server provides the proper gateway address. If not, perform the following steps:

    1. Log in to the console of Edge VM as an admin.

    2. Stop service dataplane.

    3. Set interface <mgmt intf> dhcp plane mgmt.

    4. Start service dataplane.
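
    Steps 2-4 collected as NSX Edge CLI commands, using the syntax given above (<mgmt intf> is a placeholder for the management interface name):

      stop service dataplane
      set interface <mgmt intf> dhcp plane mgmt
      start service dataplane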

  • Issue 2919218: Selections made to the host migration are reset to default values after the MC service restarts.

    After the restart of the MC service, all the selections relevant to host migration such as enabling or disabling clusters, migration mode, cluster migration ordering, etc., that were made earlier are reset to default values.

    Workaround: Ensure that all the selections relevant to host migration are performed again after the restart of the MC service.

  • Issue 2931403: Network interface validation prevents API users from performing updates.

    An Edge VM network interface can be configured with network resources such as port groups, VLAN logical switches, or segments that are accessible for specified compute and storage resources. The compute-id regroup moref in the intent is stale and no longer present in vCenter Server after a power outage (the moref of the resource pool changed after vCenter Server was restored).

    Workaround: Redeploy the Edge and specify valid moref IDs.

  • Issue 2942900: The identity firewall does not work for event log scraping when Active Directory queries time out.

    The identity firewall issues a recursive Active Directory query to obtain the user's group information. Active Directory queries can time out with a NamingException 'LDAP response read timed out, timeout used: 60000 ms'. Therefore, firewall rules are not populated with event log scraper IP addresses.

    Workaround: To improve recursive query times, Active Directory admins may organize and index the AD objects.

  • Issue 2950206: CSM is not accessible after MPs are upgraded and before CSM upgrade.

    When MP is upgraded, the CSM appliance is not accessible from the UI until the CSM appliance is upgraded completely. NSX services on CSM are down at this time. It's a temporary state where CSM is inaccessible during an upgrade. The impact is minimal.

    Workaround: This is an expected behavior. You have to upgrade the CSM appliance to access CSM UI and ensure all services are running.

  • Issue 2954520: When a Segment is created from Policy and a Bridge is configured from MP, the detach bridging option is not available for that Segment in the UI.

    You will not be able to detach or update bridging from the UI if a Segment is created from Policy and a Bridge is configured from MP.

    If a Segment is created from the policy side, you are advised to configure bridging only from the policy side. Similarly, if a Logical Switch is created from the MP side, you should configure bridging only from the MP side.

    Workaround: Use APIs to remove bridging.

    1. Update the concerned LogicalPort and remove the attachment:

    PUT :: https://<mgr-ip>/api/v1/logical-ports/<logical-port-id>

    Add this header to the PUT request: X-Allow-Overwrite: true

    2. Delete the BridgeEndpoint:

    DELETE :: https://<mgr-ip>/api/v1/bridge-endpoints/<bridge-endpoint-id>

    3. Delete the LogicalPort:

    DELETE :: https://<mgr-ip>/api/v1/logical-ports/<logical-port-id>
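
    The same three calls as a hedged curl sketch; <mgr-ip>, the IDs, and the credentials are placeholders, and the payload file for step 1 is assumed to be the updated LogicalPort body with the attachment removed:

      # 1. Update the LogicalPort without the attachment.
      curl -k -u admin:'<password>' -X PUT "https://<mgr-ip>/api/v1/logical-ports/<logical-port-id>" \
        -H "X-Allow-Overwrite: true" -H "Content-Type: application/json" \
        -d @logical-port-no-attachment.json
      # 2. Delete the BridgeEndpoint.
      curl -k -u admin:'<password>' -X DELETE "https://<mgr-ip>/api/v1/bridge-endpoints/<bridge-endpoint-id>" \
        -H "X-Allow-Overwrite: true"
      # 3. Delete the LogicalPort.
      curl -k -u admin:'<password>' -X DELETE "https://<mgr-ip>/api/v1/logical-ports/<logical-port-id>" \
        -H "X-Allow-Overwrite: true"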
  • Issue 3030847: Changes in VRF BGP config don't always take effect.

    A change in the BGP config inside a VRF does not always take effect.

    Workaround: Create a dummy static route and remove it. This causes a configuration push that realizes the VRF BGP config on the Edge.

  • Issue 3005685: When configuring an OpenID Connect connection as an NSX LM authentication provider, customers may encounter errors.

    OpenID Connect configuration produces errors during configuration.

    Workaround: None.

  • Issue 2949575: After one Kubernetes worker node is removed from the cluster without draining the pods on it first, the pods are stuck in Terminating status forever.

    NSX Application platform and applications on it might function partially or not function as expected.

    Workaround: Manually delete each of the pods that display a Terminating status using the following information.

    1. From the NSX Manager or the runner IP host (Linux jump host from which you can access the Kubernetes cluster), run the following command to list all the pods with the Terminating status.

      kubectl get pod -A | grep Terminating
    2. Delete each pod listed using the following command.

      kubectl delete pod <pod-name> -n <pod-namespace> --force --grace-period=0
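    A hedged one-liner combining the two steps above, force-deleting every pod stuck in Terminating across all namespaces (column positions assume default kubectl output):

      kubectl get pods -A | awk '$4=="Terminating" {print $2, $1}' | \
        while read pod ns; do kubectl delete pod "$pod" -n "$ns" --force --grace-period=0; done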

  • Issue 3017840: An Edge switchover does not happen when the uplink IP address is changed.

    A wrong HA state might result in blackholing of traffic.

    Workaround: Toggle the BFD state. Before you change the uplink IP of an Edge, put the Edge in maintenance mode.
