VMware NSX 3.2 | 16 DEC 2021 | Build 19067070

Check for additions and updates to these release notes.
Please be advised that the VMware NSX Team has identified an issue with upgrades from previous versions to NSX-T Data Center 3.2.0 and has withdrawn the release from the download pages in favor of NSX-T Data Center 3.2.0.1.
Customers who have downloaded and deployed NSX-T Data Center 3.2.0 remain supported but are advised to upgrade to NSX-T Data Center 3.2.0.1.
NSX-T Data Center 3.2.0 is a major release offering many new features in all the verticals of NSX-T: networking, security, services and onboarding. Here are some of the major enhancements.
Switch agnostic distributed security: Ability to extend micro-segmentation to workloads deployed on vSphere networks.
Gateway Security: Enhanced L7 App IDs, Malware Detection and Sandboxing, URL filtering, User-ID firewall, TLS inspection (Tech Preview) and Intrusion Detection and Prevention Service (IDS/IPS).
Enhanced Distributed Security: Malware detection and Prevention, Behavioral IDS/IPS, enhanced application identities for L7 firewall.
Improved integration with NSX Advanced Load Balancer (formerly Avi): Install and configure NSX ALB (Avi) from NSX-T UI; Migrate NSX for vSphere LB to NSX ALB (Avi).
NSX for vSphere to NSX-T Migration: Major enhancements to the Migration Coordinator to extend coverage of supported NSX for vSphere topologies and provide flexibility on the target NSX-T topologies.
Improved protection against Log4j vulnerability: Updated Apache Log4j to version 2.16 to resolve CVE-2021-44228 and CVE-2021-45046. For more information on these vulnerabilities and their impact on VMware products, please see VMSA-2021-0028.
In addition to these features, many other capabilities are added in every area of the product.
Support dual NIC bonds on Windows physical servers - Connections using dual physical NIC (pNIC) bonds are now supported on physical servers, allowing configuration of active/active or active/standby bonds. This feature is supported on VLAN and overlay networks.
NUMA-aware teaming policy - NSX now supports processing traffic on the same NUMA node as the pNICs it uses to leave ESXi. This enhances performance in deployments leveraging teaming policies across multiple NUMA nodes.
Enhanced Datapath new capabilities - Enhanced Datapath Switch now supports Distributed Firewall (DFW), Distributed Load-Balancer (DLB), and port mirroring capabilities.
Support L3 EVPN route server mode - ESXi can now send VXLAN traffic directly to the data center fabric routers, bypassing the Edge node in the data path. In this deployment model, the Tier-0 SR (Service Router) hosted on the Edge node is still necessary, but only to handle the control plane, i.e., advertising the connected prefixes through the EVPN l2vpn address-family to the fabric. ESXi now supports ECMP towards multiple physical routers in the fabric and tests the availability of the routers with BFD.
5-tuple ECMP on ESXi and KVM - The distributed router (DR) hosted on ESXi now supports a 5-tuple hashing algorithm when ECMP is enabled. With this feature, hashing is based on source IP address, destination IP address, IP protocol, Layer-4 source port, and Layer-4 destination port. This allows better distribution of traffic across all the available Service Routers (SRs).
Proxy ARP support on Active/Active Tier-0 Gateway - In simple topologies without a need for dynamic routing, a Tier-0 gateway in active/active HA mode can now be used, providing higher throughput.
Support for Active/Active on Tier-0 SR for Multicast Traffic - NSX now supports ECMP of multicast traffic across multiple Tier-0 SRs (Service Routers), offering a better throughput for multicast traffic.
UEFI support on Bare Metal Edge - NSX now supports the deployment of bare metal edge node on servers running in UEFI mode. This allows edge nodes to be deployed on a broader set of servers.
Distributed Firewall supports VMs deployed on Distributed Port Groups on VDS switches - In previous releases, NSX could only enforce Distributed Security features for N-VDS switchports. Now you can leverage Distributed Firewall capabilities for VDS based VLAN networks without having to convert the switchport to N-VDS.
Support for dynamic tag criteria on Groups of IP Addresses.
Distributed Firewall Support for Physical servers - Redhat Enterprise Linux 8.0 operating system.
Addition of more Application IDs for L7 Firewall usage.
Malware Prevention for Distributed Firewall (E-W use case) - NSX Distributed Firewall now has zero-day malware detection and prevention capabilities using advanced machine learning techniques and sandboxing capabilities.
Configuration of AD Selective-Sync for IDFW - Identity firewall AD configuration now supports selectively adding OUs and users.
Identity Firewall Statistics - Enhanced the Security Overview dashboard to include Identity Firewall statistics for active users and active user sessions.
Identity Firewall support for SMB protocol.
Distributed IDS/IPS is supported for VMs deployed on Distributed Port Groups on VDS.
Distributed IDS/IPS now supports Behavior-based detection and prevention - A new class of IDS signatures is now available both on the distributed IDPS and on the edge. Rather than attempting to identify malware-specific behavior, these signatures attempt to identify network behaviors that could be signs of a successful infection. This includes, for instance, the identification of Tor communication in the network, the presence of self-signed TLS certificates on high ports, or more sophisticated stateful detections such as the detection of beaconing behavior. This class of signatures is characterized by the severity level “informational”. If NSX NDR is enabled, further ML-based processing is applied on alerts produced by these signatures to prioritize cases that are very likely to be anomalous in the specific monitored environment.
Curation and Combination of Trustwave and VMware Signatures - The NSX IDS/IPS signature set now allows access to a new IDS ruleset that is developed and curated by VMware to ensure high security effectiveness and minimize the likelihood of false positives. The ruleset combines detections developed by third-party vendors such as Trustwave and Emerging Threats with a corpus of VMware-developed signatures, optimized for the NSX IDS engines.
User Identity-based Access Control - Gateway Firewall introduces the following additional User Identity Firewall capabilities:
For deployments where Active Directory is used as the user authentication system, NSX leverages Active Directory logs.
For all other authentication systems, NSX can now leverage vRealize Log Insight based logs to identify User Identity to IP address mapping.
Enhanced set of L7 AppIDs - Gateway Firewall capabilities are enhanced to identify a more comprehensive number of Layer-7 applications.
TLS Inspection for both inbound and outbound traffic (Tech Preview; not for production deployments) - More and more network traffic is encrypted. With the TLS inspection feature, you can now leverage NSX Gateway Firewall to perform deep-packet inspection and threat detection and prevention services for encrypted traffic as well.
URL Filtering (includes categorization and reputation of URLs) - You can now control internet-bound traffic with the new URL Filtering feature, which allows you to control internet access based on URL categories as well as the reputation of URLs. The URL repository, including the categorization and reputation data, is updated on an ongoing basis for up-to-date protection.
Malware Analysis and Sandboxing support - NSX Gateway Firewall now provides malware detection from known as well as zero-day malware using advanced machine learning techniques and sandboxing capabilities. The known malware data is updated on an ongoing basis. (Please see known issue 2888658 before deploying in live production deployments.)
Intrusion Detection and Prevention (Tech Preview; not for production deployments) - For NSX Gateway Firewall, Intrusion Detection and Prevention capabilities (IPS) are introduced in a "Tech Preview" mode. You can try the feature set in non-production deployments.
NSX Application Platform - VMware NSX Application Platform is a new container-based solution introduced in NSX-T 3.2.0. It provides a highly available, resilient, scale-out architecture to deliver a set of core platform services that enables several new NSX features such as:
NSX Intelligence
NSX Metrics
NSX Network Detection and Response
NSX Malware Prevention
The NSX Application Platform deployment process is fully orchestrated through the NSX Manager UI and requires a supported Kubernetes environment. Refer to the Deploying and Managing the VMware NSX Application Platform guide for more information on the infrastructure prerequisites and requirements for installation.
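Before you start the deployment, it can help to confirm from your workstation that the kubeconfig you plan to upload points at a healthy, supported cluster. This is a generic Kubernetes sanity check, not an NSX-specific command:
kubectl get nodes       # every node should report STATUS "Ready"
kubectl version         # compare the server version with the versions listed in the deployment guide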
VMware NSX Network Detection and Response correlates IDPS, malware, and anomaly events into intrusion campaigns that help identify threats and malicious activities in the network.
Correlation into threat campaigns, rather than events, allows SOC operators to focus on triaging only a small set of actionable threats.
NSX Network Detection and Response collects IDPS events from Distributed IDPS, malware events (malicious files only) from the gateway, and network anomaly events from NSX Intelligence.
Gateway IDPS (Tech Preview) events are not collected by NSX Network Detection and Response in NSX-T 3.2.
The NSX Network Detection and Response functionality runs in the cloud and is available in two cloud regions: US and EU.
For more information on how to activate, use, administer, and troubleshoot the NSX Network Detection and Response feature, see the NSX Network Detection and Response section of the NSX-T Data Center Administration Guide.
Enhanced Service level alarms for VPN - IPSec Service Status details.
Additional logging details for packet tracing - Display SPI and Exchange Information for IPSec.
Guest Introspection Enhancements - Guest Introspection provides a set of APIs in the data plane for consumption within the guest context. This enhancement ensures only users with appropriate entitlement are provided this access.
Additional OS support for GI - Guest Introspection now supports CentOS 8.2, RHEL 8.2, SLES15 SP1, Ubuntu 20.04.
While NSX releases earlier than 3.2.0 supported onboarding of existing Local Manager sites, this onboarding support is not available in 3.2.0 and will be re-introduced in a later 3.2 point release.
Support VM tag replication between Local Managers - During disaster recovery (DR), replicated VMs are restarted in the DR location. If the security policy is based on NSX VM tags, the replicated VMs in the DR location must have those NSX tags on the remote Local Manager at recovery time. NSX Federation 3.2 now supports VM tag replication between Local Managers. The tag replication policy is configurable only through API.
Federation communications monitoring - The location manager page now offers a view on the latency and usage of the channels between Global Managers and Local Managers. This provides a better visibility of the health between the different components of federation.
Firewall Drafts - Draft of the security policies are now available on Global Manager. This includes support for auto-drafts and manual drafts.
Global Manager LDAP Support - Global Manager now supports configuration of LDAP sources for Role-Based Access Control (RBAC) similarly to support on Local Managers.
Antrea to NSX-T Integration - Added the ability to define Antrea Network Policies from the NSX-T Distributed Firewall UI. Policies are applied on K8s clusters running Antrea 1.3.1-1.2.2 using the interworking controller. Inventory collection is also added: K8s objects like Pods, Namespaces, and Services are collected in the NSX-T inventory and tagged so that they can be selected in DFW policies. Antrea Traceflow can now be controlled from the NSX-T Traceflow UI page, and log bundles can be collected from K8s clusters using Antrea. There is no mandatory requirement to have the NSX-T data plane enabled on your K8s Antrea cluster nodes.
Grouping Enhancements - Added support for Antrea container objects, for the Not In operator on segment port tag criteria, and for the AND operator between group membership criteria involving segments and segment ports.
VMware NSX Advanced Load Balancer (Avi) Installation through NSX - VMware NSX Advanced Load Balancer (Avi) Controllers can now be installed through the NSX-T Manager UI, which provides a single pane for installation of all NSX components.
Cross-Launch VMware NSX Advanced Load Balancer (Avi) UI from NSX-T Manager UI - Launch VMware NSX ALB (Avi) UI from the NSX-T Manager for advanced features.
Advanced Load Balancer (Avi) User Interfaces displayed within NSX – Configure VMware NSX Advanced Load Balancer (Avi) from within NSX Manager.
Migrate Load Balancing from NSX for vSphere to VMware NSX Advanced Load Balancer (Avi) – Migrate Load Balancers to VMware NSX ALB (Avi) when using the Bring your own Topology model with the Migration Coordinator.
Distributed Load Balancer (DLB) for vSphere with Kubernetes (K8s) Use Cases
Support for Distributed Intrusion Detection System (DIDS) and DLB to work together.
vMotion support.
Additional DLB pool member selection algorithm: Least-connection and source-IP hash.
Additional troubleshooting commands.
NSX-T native Load Balancer - Load balancing features will not be added or enhanced going forward, and NSX-T platform enhancements will not be extended to the NSX-T native load balancer.
Load Balancing Recommendation
If you are using Load Balancing in NSX-T, you are advised to migrate to VMware NSX Advanced Load Balancer (Avi), which provides a superset of the NSX-T load balancing functionality.
If you have purchased NSX Data Center Advanced, NSX Data Center Enterprise Plus, NSX Advanced, or NSX Enterprise, you are entitled to the Basic edition of VMware NSX Advanced Load Balancer (Avi), which has feature parity with NSX-T LB.
It is recommended that you purchase VMware NSX Advanced Load Balancer (Avi) Enterprise to unlock enterprise grade load balancing, GSLB, advanced analytics, container ingress, application security and WAF.
It is recommended that new deployments with NSX-T Data Center take advantage of VMware NSX Advanced Load Balancer (Avi) using release v20.1.6 or later and not use the native NSX-T Load Balancer.
For more information:
VMware NSX Advanced Load Balancer (Avi) page: https://www.vmware.com/products/nsx-advanced-load-balancer.html
Migrate to VMware NSX Advanced Load Balancer (Avi): https://www.vmware.com/products/nsx/migrate-to-advanced-load-balancing.html
Deploy VMware NSX Advanced Load Balancer (Avi) on VCF with Advanced Load Balancing for VCF VMware Validated Solution
How to apply your NSX Data Center license to VMware NSX Advanced Load Balancer (Avi): https://avinetworks.com/docs/
VMware NSX Advanced Load Balancer (Avi) Editions: https://avinetworks.com/docs/22.1/nsx-license-editions/
Extended OS Support on NSX Cloud - NSX Cloud now supports the following OS, in addition to the ones already supported:
Ubuntu 20.04
RHEL 8.next
NSX Cloud support of Advanced Security (Layer 7) features on PCG (Tech Preview; not for production deployments) - NSX Cloud offers some Advanced Security (Layer 7) capability on the PCG on both Azure and AWS, enabling you to benefit from application-layer security for your workloads in the public cloud. You can try the feature set in non-production deployments.
NSX Cloud support of IDFW for single user VDI (Tech Preview; not for production deployments) - NSX Cloud offers identity firewall to provide user-based security for VDI deployments. It can associate with a VM a security profile mapped to the connected user, simplifying security management and strengthening security. You can try the feature set in non-production deployments.
Command del nsx to clean up NSX on a Physical / Bare Metal Server - In continuation with the del nsx feature support on ESX servers, a CLI command del nsx is available to remove NSX from a Physical / Bare Metal Server running a Linux OS. If you have a Physical / Bare Metal Server with NSX VIBs in a stale state and are unable to uninstall NSX from that host, you can use the CLI command del nsx for a guided step-by-step process to remove NSX from that host and bring it back to a clean state so NSX can be reinstalled.
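As a sketch of the flow (the host name and prompts below are illustrative; the actual guided steps vary by OS and NSX version), the command is run from the NSX CLI on the affected server:
root@bm-server:~# nsxcli      # enter the NSX CLI shell on the physical server
bm-server> del nsx            # starts the guided removal of NSX from the host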
Live Traffic Analysis through NSX Manager UI - The Live Traffic Analysis feature is now available on the NSX Manager UI, allowing you to easily analyze live traffic flows across data centers. This feature provides a unified approach to diagnosis by combining Traceflow and packet capture: you can trace live packets and perform packet capture at the source in one shot. Live Traffic Analysis helps accurately determine issues in network traffic and provides the ability to perform the analysis on specific flows to avoid noise.
Selective Port Mirroring - Enhanced mirroring with flow based filtering capability and reduced resource requirements. You can now focus on pertinent flows for effective troubleshooting.
Fabric MTU Configuration Check - An on-demand and periodic MTU check will be available on the NSX Manager UI to verify MTU configuration for overlay network; alerts are raised for MTU mismatches.
Traceflow support on VLAN backed Logical Network - You can perform Traceflow on a VLAN backed logical network. This feature is available through the API.
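As a minimal sketch of the Manager API call (the port UUID and IP addresses are placeholders, and the exact packet fields your trace needs may differ; see the NSX-T API guide for the full traceflow request schema):
POST /api/v1/traceflows
{
  "lport_id": "<source-logical-port-uuid>",
  "packet": {
    "resource_type": "FieldsPacketData",
    "transport_type": "UNICAST",
    "ip_header": {
      "src_ip": "192.168.10.1",
      "dst_ip": "192.168.10.2"
    }
  }
}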
Improved Logging - Improved logging by detecting and suppressing repetitive log messages emitted too frequently to prevent important log messages from being lost or overshadowed.
Improved CLI guide and additional commands - Introduced a set of new CLI commands mapped with the UI constructs (Policy) like for instance segment. This allows for simpler consumption of the CLI for users. A completely refactored CLI guide is also introduced to simplify consumption.
Capacity Limits - The Capacity dashboard is now aware of the deployment size of the NSX Manager for some capacity limits and may generate alarms when over the recommended capacity after upgrading to NSX-T 3.2. Customers who need additional capacity are advised to upgrade from a medium sized NSX Manager to a large sized NSX Manager or reduce the utilization to remain supported.
Time-Series Monitoring - Provides the ability to collect and store metrics for a longer duration, up to one year, with NSX Application Platform. Time-series metrics help monitor the trend in key performance indicators, perform before-and-after analysis, and provide historical context helpful in troubleshooting. Time-series metrics are available for Edge Node, Tier-0 and Tier-1 Gateways, NSX Manager, NSX Application Platform, and security features including TLS Inspection, IDPS, and Gateway Firewall. These time-series metrics are available through NSX-T APIs, and a subset of these metrics are also available on the NSX Manager UI.
Events and Alarms
Certificates - CA Bundle Update Recommended
Operation - Cluster Down, Cluster Unavailable, Management Channel To Manager Node Down, Management Channel To Manager Node Down Long
Federation - GM to GM Latency Warning, GM to GM Synchronization Warning, GM to GM Synchronization Error, GM to LM Latency Warning, GM to LM Synchronization Warning, GM to LM Synchronization Error, LM restore While Config Import In Progress, Queue Occupancy Threshold Exceeded
Transport Node Health - Transport Node Uplink Down
Distributed Firewall - DFW Session Count High, DFW Vmotion Failure
Edge - Edge Node Settings and vSphere Settings Are Changed, Edge Node Settings Mismatch, Edge VM vSphere Settings Mismatch, Edge vSphere Location Mismatch
Edge Health - Edge Datapath NIC Throughput High, Edge Datapath NIC Throughput Very High, Failure Domain Down
VPN - IPSec Service Down
NAT - SNAT Port Usage on Gateway Is High
Load Balancing - Load Balancing Configuration Not Realized Due To Lack Of Memory
MTU Check - MTU Mismatch Within Transport Zone, Global Router MTU Too Big
NSX Application Platform Communication - Delay Detected In Messaging Overflow, Delay Detected In Messaging Rawflow, TN Flow Exporter Disconnected
NSX Application Platform Health - ~55 alarms to monitor health of the platform
Customize login messages and banners - You can configure and customize the login message from NSX Manager and specify the mandatory fields the user needs to accept before logging in.
Search and Filter Enhancements - Enhanced the existing search and filtering capability in the NSX UI. An initial screen displays the possible search phrases and the most recently searched items. A separate panel is available for 'Advanced Search' that allows users to customize and configure 'Searches'. Search queries now surface information from tags and alarms.
VPAT - Fixes to bridge the accessibility gap in the product.
NSX Topology - Visualize the underlying fabric associated with gateways. This feature provides you the ability to visualize the edge clusters, host switch configuration, and gather details on the host and edge configurations.
Improve Usability of Object Selector in UI - This feature provides the ability to select multiple objects that are in the same category. Additionally, you can select all the objects.
Revamped Security Overview - Revamped the Security Overview page to provide a holistic view of the security configurations. You can view 'Threats and Responses' across different features as well as view the existing configuration and capacity of the system.
NSX-T UI Integrated in vCenter - NSX-T can now be installed and configured via the vCenter UI with the vCenter plugin for NSX-T. This feature is supported ONLY from vCenter 7.0U3 onwards.
Deployment wizard of NSX-T for common use cases - When installed via the vCenter plugin, NSX-T can now enable NSX-T features based on common use cases, allowing users to quickly turn on NSX-T features leveraging the deployment wizards. This release supports two wizards, one to enable Security features of NSX and the other to enable Virtual Networking features of NSX.
NSX Manager to NSX Policy Promotion Tool - Provides ability to promote existing configuration from NSX Manager to NSX Policy without data path disruption or deletion/recreation of existing objects. Once the NSX Manager objects are promoted to NSX Policy, the NSX Manager objects are set to read-only through the Manager UI/API, and you can then interact with the same objects through NSX Policy UI/API.
Certificate Management Enhancements for TLS Inspection (Tech Preview; not for production deployments) - With the introduction of the TLS Inspection feature, certificate management now supports addition and modification of certificate bundles and the ability to generate CA certificates to be used with the TLS Inspection feature. In addition, the general certificate management UI carries modifications that simplify import/export of certificates.
High Availability and Scale enhancements for LDAP Integration - LDAP configuration now supports the configuration of multiple LDAP servers per domain and support for 'trust' of multiple certificates associated with different LDAP servers per domain.
Increased Scale - There are several increases in the supported scale for the largest deployments. For details on scale changes, see the VMware Configuration Maximums tool.
License Enforcement - NSX-T now ensures that users are license-compliant by restricting access to features based on license edition. New users are able to access only those features that are available in the edition that they have purchased. Existing users who have used features that are not in their license edition are restricted to only viewing the objects; create and edit will be disallowed.
New Licenses - Added support for new VMware NSX Gateway Firewall and NSX Federation Add-On and continues to support NSX Data Center licenses (Standard, Professional, Advanced, Enterprise Plus, Remote Office Branch Office) introduced in June 2018, and previous VMware NSX for vSphere license keys. See VMware knowledge base article 52462 for more information about NSX licenses.
Migration for VMware Integrated OpenStack - Added the capability to perform NSX for vSphere to NSX-T migration in VIO environments without breaking the OpenStack representation of the objects. This feature requires the support of migration capabilities by the VMware Integrated OpenStack version that is used.
Bring your own Topology - The Migration Coordinator extends its model to offer migration between NSX for vSphere and a user defined topology in NSX-T. This offers more flexibility for users to define their NSX-T topology and extends the number of topologies which can be migrated from NSX for vSphere to NSX-T.
This feature can only be used for configuration migration in order to enable lift and shift, or as part of the complete workflow doing in-place migration.
Support OSPF Migration for fixed topologies - The Migration Coordinator supports fixed topologies with OSPF (instead of BGP and static). This allows users wanting to use fixed topologies (and not BYOT) to do so even when they have OSPF configured for N/S connectivity between ESG and top of rack (OSPF between ESG and DLR was already supported and replaced by NSX-T internal routing).
Increased scale for Migration Coordinator - The Migration Coordinator scale has been increased to cover larger environments and come closer to the NSX for vSphere maximum scale.
Migration of Guest Introspection - NSX for vSphere to NSX-T migration for GI is now supported in the Migration Coordinator. You can use this feature provided the partner vendor also supports the Migration Coordinator.
IDFW/RDSH Migration - The migration coordinator now supports Identity based firewall configurations.
NSX-T Data Center 3.2 offers several features for your technical preview. Technical preview features are not supported by VMware for production use. They are not fully tested and some functionality might not work as expected. However, these previews help VMware improve current NSX-T functionality and develop future enhancements.
For details about these technical preview features, see the available documentation provided in the NSX-T Data Center 3.2 Administration Guide. Links are provided in the following list that briefly describes these technical preview features. The topics will have Technical Preview in their titles.
At VMware, we value inclusion and diversity. To promote these principles within our customer, partner, and internal communities, there is a company-wide effort to replace non-inclusive language in our products. Problematic terminology in the NSX-T Data Center UI, documentation, APIs, CLIs, and logs are in the process of being replaced with more inclusive alternatives. For example, the non-inclusive term disabled has been replaced with the alternative deactivate.
For compatibility and system requirements information, see the VMware Product Interoperability Matrices and the NSX-T Data Center Installation Guide.
NSX-T has two methods of configuring logical networking and security: Manager mode and Policy mode. The Manager API contains URIs that begin with /api, and the Policy API contains URIs that begin with /policy/api.
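For example, a similar read looks like this in each mode (representative endpoints; Manager-mode logical switches roughly correspond to Policy-mode segments):
GET /api/v1/logical-switches          (Manager API)
GET /policy/api/v1/infra/segments     (Policy API)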
Please be aware that VMware intends to remove support of the NSX-T Manager APIs and NSX Advanced UIs in an upcoming NSX-T major or minor release, which will be generally available no sooner than one year from the date of this message, 12/16/2021. NSX-T Manager APIs that are planned to be removed are marked with "deprecated" in the NSX Data Center API Guide, with guidance on replacement APIs.
It is recommended that new deployments of NSX-T take advantage of the NSX Policy APIs and NSX Policy UIs. For deployments currently leveraging NSX Manager APIs and NSX Advanced UIs, please refer to the NSX Manager for the Manager to Policy Objects Promotion page and NSX Data Center API Guide to aid in the transition.
NSX-T 3.0.0 and later can run on the vSphere VDS switch version 7.0 and later. This provides a tighter integration with vSphere and easier NSX-T adoption for customers adding NSX-T to their vSphere environment.
Please be aware that VMware intends to remove support of the NSX-T N-VDS virtual switch on ESXi hosts in an upcoming NSX-T release, which will be generally available no sooner than one year from the date this message was announced (April 17, 2021). N-VDS will remain the supported virtual switch on KVM, NSX-T Edge nodes, native public cloud NSX agents, and bare metal workloads.
It is recommended that new deployments of NSX-T and vSphere take advantage of this close integration and deploy using VDS switch version 7.0 and later. In addition, for existing deployments of NSX-T that use the N-VDS on ESXi hosts, VMware recommends moving toward the use of NSX-T on VDS. To make this process easy, VMware has provided both a CLI based switch migration tool, which was first made available in NSX-T 3.0.2, and a GUI based Upgrade Readiness Tool, which was first made available in NSX-T 3.1.1 (see NSX documentation for more details on these tools).
In NSX-T 3.2.0, the N-VDS to VDS migration tool is unavailable. Customers wanting to migrate their workloads from N-VDS to VDS in this release may do so by manually migrating the workloads.
The following deployment considerations are recommended when moving from N-VDS to VDS:
The N-VDS and VDS APIs are different, and the backing type for VM and vmKernel interface APIs for the N-VDS and VDS switches are also different. As you move to use VDS in your environment, you will have to invoke the VDS APIs instead of N-VDS APIs. This ecosystem change will have to be made before converting the N-VDS to VDS. See knowledge base article 79872 for more details. Note: There are no changes to N-VDS or VDS APIs.
VDS is configured through vCenter. N-VDS is vCenter independent. With NSX-T support on VDS and the eventual deprecation of N-VDS, NSX-T will be closely tied to vCenter and vCenter will be required to enable NSX in vSphere environments.
The 3.x release series is the final series to support KVM and non-VIO OpenStack distributions including but not limited to RedHat OpenStack Platform, Canonical/Ubuntu OpenStack, SUSE OpenStack Cloud, Mirantis OpenStack and community based OpenStack, which does not have a specific vendor. Customers using non-VIO OpenStack with NSX are encouraged to consider vRealize Automation or VMware Cloud Director as a replacement for their deployments.
NSX-T Load Balancer APIs are marked as deprecated. This applies to all APIs containing URIs that begin with /policy/api/v1/infra/lb-.
Please be aware that VMware intends to remove support of the NSX-T Load Balancer in an upcoming NSX-T release, which will be generally available no sooner than one year from the date this message was announced (December 16, 2021). NSX-T Manager APIs that are planned to be removed are marked with "deprecated" in the NSX Data Center API Guide.
It is recommended that new deployments with NSX-T Data Center take advantage of VMware NSX Advanced Load Balancer (Avi) using release v20.1.6 or later.
API deprecation does not apply to the Distributed Load Balancer.
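For reference, the deprecated pattern covers Policy load balancer resources such as the following (representative URIs beginning with /policy/api/v1/infra/lb-):
GET /policy/api/v1/infra/lb-services
GET /policy/api/v1/infra/lb-virtual-servers
GET /policy/api/v1/infra/lb-pools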
NSX Intelligence Communications alarms replaced by NSX Application Platform Communication alarms.
NSX Intelligence Health alarms replaced by NSX Application Platform Health alarms.
DNS - Forwarder Disabled alarm.
Infrastructure Service - The Edge Service Status Down alarm will be covered as part of the Edge Service Status Change alarm.
Transport Node Health - The NVDS Uplink Down alarm is replaced by the Transport Node Uplink Down alarm.
Live Traffic Analysis MP APIs are marked as deprecated in NSX-T 3.2.0.
The Packet Count option is removed from Live Traffic Analysis in NSX-T 3.2.0. Live Traffic Analysis will continue to support tracing and packet capture.
The following fields and types are removed:
Policy API: Fields - count_config; Types - PolicyCountObservation
MP API: Fields - count_config, count_results; Types - CountActionConfig, LiveTraceActionType (COUNT: An unsupported action), CountActionArgument, CountResult, CountObservation, BaseCountObservation
NSX deprecated APIs removal - The following APIs have been deprecated more than one year ago and are removed in NSX-T 3.2.0.
The removed APIs and their replacements are documented in the NSX Data Center API Guide. The replacement guidance falls into the following categories:
A direct replacement API, as indicated in the API guide.
No replacement in the NSX-T API.
Workflows handled differently using the Transport Node Profile (TNP) and Transport Node Collection (TNC) APIs.
APIs removed following internal architecture changes in NSX-T.
APIs removed from NSX-T Manager since NSX Intelligence is now hosted on the NSX Application Platform.
APIs that were used to configure Bridge on ESXi, which is a deprecated and removed feature. Bridging is supported on the Edge Cluster and can be configured on a Segment with Edge Bridge Profiles. (Refer to the Edge Bridge documentation.)
NSX deprecated API fields removal - The following API fields have been deprecated more than one year ago and are removed in NSX-T 3.2.0. (The API itself is not removed, only the mentioned field.)
Transport Zone API field removal:
POST /api/v1/transport-zones
{
"display_name":"tz1",
"host_switch_name":"test-host-switch-1", <<==== WILL BE REMOVED
"host_switch_mode": "STANDARD", <<==== WILL BE REMOVED
"description":"Transport Zone 1",
"transport_type":"OVERLAY"
}
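For reference, the same request without the removed fields:
POST /api/v1/transport-zones
{
  "display_name":"tz1",
  "description":"Transport Zone 1",
  "transport_type":"OVERLAY"
}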
Transport Node API field removal for Edge Node VM:
POST /api/v1/transport-nodes
POST /api/v1/transport-node-profiles
{
"node_id": "92f893b0-1157-4926-9ce3-a1787100426c",
"host_switch_spec": {
"host_switches": [
{
...
...
"transport_zone_endpoints": [ <<<==== SHOULD BE USED INSTEAD
{
"transport_zone_id": "1b3a2f36-bfd1-443e-a0f6-4de01abc963e"
}
],
}
],
"resource_type": "StandardHostSwitchSpec"
},
"transport_zone_endpoints": [], <<<<=== WILL BE REMOVED
"node_deployment_info": {
"deployment_type": "VIRTUAL_MACHINE",
"deployment_config": {
"vm_deployment_config": {
"vc_id": "23eaf46e-d826-4ead-9aad-ed067574efb7",
"compute_id": "domain-c7",
"storage_id": "datastore-22",
"management_network_id": "network-24",
"hostname": "edge", <-- will be removed
"data_network_ids": [
"network-24"
],
"search_domains": [ "vmware.com" ] <<<<=== WILL BE REMOVED (replacement in out block)
"dns_servers": [ "10.172.40.1" ], <<<<=== WILL BE REMOVED
"enable_ssh": true, <<<<=== WILL BE REMOVED
"allow_ssh_root_login": true, <<<<=== WILL BE REMOVED
"placement_type": "VsphereDeploymentConfig"
},
...
"node_settings": {
"hostname": "edge", <<<<=== This should be used - REPLACEMENT
"search_domains": ["eng.vmware.com" ], <<<<=== This should be used - REPLACEMENT
"dns_servers": [ "10.195.12.31"], <<<<=== This should be used - REPLACEMENT
"enable_ssh": true, <<<<=== This should be used - REPLACEMENT
"allow_ssh_root_login": true <<<<=== This should be used - REPLACEMENT
},
"resource_type": "EdgeNode",
"id": "90ec1776-f3e2-4bd0-ba1f-a1292cc58707",
...
"display_name": "edge", <<<<=== WILL BE REMOVED (replacement in out block)
"_create_user": "admin", <<<<=== WILL BE REMOVED
"_create_time": 1594956232211, <<<<=== WILL BE REMOVED
"_last_modified_user": "admin", <<<<=== WILL BE REMOVED
"_last_modified_time": 1594956531314, <<<<=== WILL BE REMOVED
"_system_owned": false, <<<<=== WILL BE REMOVED
"_protection": "NOT_PROTECTED", <<<<=== WILL BE REMOVED
"_revision": 2 <<<<=== WILL BE REMOVED
},
"failure_domain_id": "4fc1e3b0-1cd4-4339-86c8-f76baddbaafb",
"resource_type": "TransportNode",
"id": "90ec1776-f3e2-4bd0-ba1f-a1292cc58707",
"display_name": "edge", <<<<=== This should be used - REPLACEMENT
"_create_user": "admin", <<<<=== This should be used - REPLACEMENT
"_create_time": 1594956232373, <<<<=== This should be used - REPLACEMENT
"_last_modified_user": "admin", <<<<=== This should be used - REPLACEMENT
"_last_modified_time": 1594956531551, <<<<=== This should be used - REPLACEMENT
"_system_owned": false, <<<<=== This should be used - REPLACEMENT
"_protection": "NOT_PROTECTED", <<<<=== This should be used - REPLACEMENT
"_revision": 1 <<<<=== This should be used - REPLACEMENT
}
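Condensed, the replacement pattern keeps VM placement details under vm_deployment_config and sets the hostname, DNS, and SSH options under node_settings (a sketch abbreviated from the example above):
"node_deployment_info": {
  "deployment_config": {
    "vm_deployment_config": {
      ...
      "placement_type": "VsphereDeploymentConfig"
    }
  },
  "node_settings": {
    "hostname": "edge",
    "search_domains": [ "eng.vmware.com" ],
    "dns_servers": [ "10.195.12.31" ],
    "enable_ssh": true,
    "allow_ssh_root_login": true
  },
  "resource_type": "EdgeNode"
}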
For instructions about upgrading the NSX-T Data Center components, see the NSX-T Data Center Upgrade Guide.
Customers upgrading to NSX-T 3.2.0.1 are strongly encouraged to run the NSX Upgrade Evaluation Tool before starting the upgrade process. The tool is designed to ensure success by checking the health and readiness of your NSX Managers prior to upgrading.
See developer.vmware.com to use the NSX-T Data Center APIs or CLIs for automation.
The API documentation is available from the API Reference tab. The CLI documentation is available from the Documentation tab.
NSX-T Data Center has been localized into multiple languages: English, German, French, Italian, Japanese, Simplified Chinese, Korean, Traditional Chinese, and Spanish. Because NSX-T Data Center localization utilizes the browser language settings, ensure that your settings match the desired language.
Revision Date | Edition | Changes
---|---|---
December 16, 2021 | 1 | Initial edition.
January 3, 2022 | 2 | Added note to Federation section in What's New. Added known issue 2882574.
January 5, 2022 | 3 | Added known issue to the Upgrade Notes for this Release and the Known Issues sections.
January 21, 2022 | 4 | Added Important Information about NSX-T Data Center 3.2.0 section. Added information about the NSX Upgrade Evaluation Tool in the Upgrade Notes for this Release section. Added known issue 2890348.
February 4, 2022 | 5 | Identity Firewall support for SMB protocol.
February 15, 2022 | 6 | Added bullet about capacity limits in What's New.
February 22, 2022 | 7 | Added requirement for deploying NSX Application Platform.
April 29, 2022 | 8 | Added Known Issues 2885820, 2871440, and 2945515. Removed duplicate entries for Known Issues 2685550, 2566121, and 2848614.
May 5, 2022 | 9 | Added Known Issues 2875563, 2875667, 2883505, 2914934, 2921704, and 2933905.
May 16, 2022 | 10 | Added Known Issue 2927442.
June 6, 2022 | 11 | Added Known Issue 2937810.
August 2, 2022 | 12 | Added Known Issue 2989696.
August 24, 2022 | 13 | Added Known Issue 3015843.
September 1, 2022 | 14 | Updated the location information for the VMware NSX Network Detection and Response documentation.
September 7, 2022 | 15 | Added Known Issue 3012313.
September 14, 2022 | 16 | Added Known Issue 3025104.
February 6, 2023 | 17 | Added bullet about additional functionality for Distributed Load Balancer for vSphere in What's New.
February 23, 2023 | 18 | Editorial update.
April 24, 2023 | 19 | Added Resolved Issues 2468774 and 2643610.
May 19, 2023 | 20 | Added known issue 3116294.
August 6, 2024 | 21 | Added known issue 3145439.
Fixed Issue 2643610: Load balancer statistics APIs are not returning stats.
Load balancer statistics are not populated in the API response, so you cannot see load balancer stats.
Workaround: Reduce the number of load balancers configured.
Fixed Issue 2468774: Backups occur unexpectedly with the Detect NSX configuration change option enabled.
When Detect NSX configuration change is enabled, backups would occur even though there were no changes to the NSX configuration.
Fixed Issue 2692344: Deleting the Avi enforcement point deletes all realized objects from Policy, including the realized entities of all default objects. Adding a new enforcement point fails to re-sync the default objects from the Avi Controller.
You will not be able to use the system-default objects after deletion and recreation of the enforcement point of AVIConnectionInfo.
Fixed Issue 2858893: Service Deployment cleanup fails for cluster-based deployments.
No functional impact. Cleanup of the Service fails when trying to unregister a ServiceDefinition with a dangling ServiceDeployment or Instances; it has to be cleaned up manually/forcefully from the database.
Fixed Issue 2557287: TNP updates done after backup are not restored.
You won't see any TNP updates done after backup on a restored appliance.
Fixed Issue 2679614: When the API certificate is replaced on the Local Manager, the Global Manager's UI will display the message, "General Error has occurred."
When the API certificate is replaced on the Local Manager, the Global Manager's UI will display the message, "General Error has occurred."
Fixed Issue 2658687: Global Manager switchover API reports failure when transaction fails, but the failover happens.
API fails, but Global Manager switchover completes.
Fixed Issue 2652418: Slow deletion when large number of entities are deleted.
Deletion will be slower.
Fixed Issue 2649499: Firewall rule creation takes a long time when individual rules are created one after the other.
The API is slow, so creating rules takes more time.
Fixed Issue 2649240: Deletion is slow when a large number of entities are deleted using individual delete APIs.
It takes significant time to complete the deletion process.
Fixed Issue 2606452: Onboarding is blocked when trying to onboard via API.
Onboarding API fails with the error message, "Default transport zone not found at site".
Fixed Issue 2587513: Policy shows error when multiple VLAN ranges are configured in bridge profile binding.
You will see an "INVALID VLAN IDs" error message.
Fixed Issue 2662225: When active edge node becomes non-active edge node during flowing S-N traffic stream, traffic loss is experienced.
The current S->N stream is running on the multicast active node. The preferred route on the TOR to the source should be through the multicast active edge node only. Bringing up another edge can take over as the multicast active node (the lower-rank edge is the active multicast node). The current S->N traffic will experience loss of up to four minutes. This does not impact new streams, or the current stream if it is stopped and started again.
Fixed Issue 2625009: Inter-SR iBGP sessions keep flapping, when intermediate routers or physical NICs have lower or equal MTU as the inter-SR port.
This can impact inter-site connectivity in Federation topologies and inter-SR connectivity in non-federation topologies.
Fixed Issue 2521230: BFD status displayed under ‘get bgp neighbor summary’ may not reflect the latest BFD session status correctly.
BGP and BFD can set up their sessions independently. As part of ‘get bgp neighbor summary’ BGP also displays the BFD state. If the BGP is down, it will not process any BFD notifications and will continue to show the last known state. This could lead to displaying stale state for the BFD.
Fixed Issue 2550492: During an upgrade, the message, "The credentials were incorrect or the account specified has been locked" is displayed temporarily and the system recovers automatically.
Transient error message during upgrade.
Fixed Issue 2682480: Possible false alarm for NCP health status.
The NCP health status alarm may be unreliable, in that it is raised even when the NCP system is healthy.
Fixed Issue 2655539: Host names are not updated on the Location Manager page of the Global Manager UI when updating the host names using the CLI.
The old host name is shown.
Fixed Issue 2631703: When doing backup/restore of an appliance with vIDM integration, vIDM configuration will break.
Typically, when an environment has been upgraded and/or restored, attempting to restore an appliance where vIDM integration is up and running will cause that integration to break, and you will have to reconfigure it.
Fixed Issue 2730109: When the Edge is powering on, the Edge tries to establish OSPF neighborship with the peer using its routerlink IP address as the OSPF router ID, even though a loopback is present.
After reloading the Edge, OSPF selects the downlink IP address (the higher IP address) as the router ID until it receives the OSPF router ID configuration, due to the configuration sequencing order. The neighbor entry with the older router ID will eventually become a stale entry upon receiving OSPF HELLO with the new router ID, and will expire after the dead timer expires on the peer.
Fixed Issue 2610851: Namespaces, Compute Collection, and L2VPN Service grid filtering might return no data for a few combinations of resource type filters.
Applying multiple filters for a few types at the same time returned no results even though data matching the criteria is available. This is not a common scenario, and filtering fails only on these grids for the following combinations of filter attributes: for the Namespaces grid, on the Cluster Name and Pods Name filters; for the Network Topology page, on an L2VPN service when applying a remote IP filter; for Compute Collection, on the ComputeManager filter.
Fixed Issue 2482580: IDFW/IDS configuration is not updated when an IDFW/IDS cluster is deleted from vCenter.
When a cluster with IDFW/IDS enabled is deleted from vCenter, the NSX management plane is not notified of the necessary updates. This results in an inaccurate count of IDFW/IDS enabled clusters. There is no functional impact; only the count of the enabled clusters is wrong.
Fixed Issue 2587257: In some cases, the PMTU packet sent by the NSX-T Edge is ignored upon receipt at the destination.
PMTU discovery fails, resulting in fragmentation and reassembly, and packet drops. This causes degraded performance or outages in traffic.
Fixed Issue 2622576: Failures due to duplicate configuration are not propagated correctly to user.
While onboarding is in progress, you see an "Onboarding Failure" message.
Fixed Issue 2638673: SRIOV vNICs for VMs are not discovered by inventory.
SRIOV vNICs are not listed in the Add New SPAN Session dialog, so you will not see SRIOV vNICs when adding a new SPAN session.
Fixed Issue 2643749: Unable to nest a group from a custom region created on a specific site into a group that belongs to a system-created site-specific region.
You will not see the group created in the site-specific custom region when selecting a child group as a member of a group in the system-created region for the same location.
Fixed Issue 2805986: Unable to deploy NSX-T managed edge VM.
NSX-T Edge deployment fails when done using ESX UI.
Fixed Issue 2685550: FW Rule realization status is always shown as "In Progress" when applied to bridged segments.
When applying FW Rules to an NSGroup that contains bridged segments as one of its members, realization status will always be shown as in progress. You won't be able to check the realization status of FW Rules applied to bridged segments.
Issue 3145439: Rules with more than 15 ports are allowed to publish, only to fail in later stages.
You may not know that the rule fails to publish/realize for this reason.
Workaround: Break the set of ports/port ranges into multiple rules and publish those with smaller sets of ports/port ranges.
Issue 3116294: Rule with nested group does not work as expected on hosts.
Traffic not allowed or skipped correctly.
Workaround: See knowledge base article 91421.
Issue 3025104: Host shows "Failed" state when a restore is performed with a different IP and the same FQDN.
When a restore is performed using a different IP for the MP nodes with the same FQDN, hosts are unable to connect to the MP nodes.
Workaround: Refresh DNS cache for the host. Use command /etc/init.d/nscd restart.
Issue 3012313: Upgrading NSX Malware Prevention or NSX Network Detection and Response from version 3.2.0 to NSX ATP 3.2.1 or 3.2.1.1 fails.
After the NSX Application Platform is upgraded successfully from NSX 3.2.0 to NSX ATP 3.2.1 or 3.2.1.1, upgrading either the NSX Malware Prevention (MP) or NSX Network Detection and Response (NDR) feature fails with one or more of the following symptoms.
The Upgrade UI window displays a FAILED status for NSX NDR and the cloud-connector pods.
For an NSX NDR upgrade, a few pods with the prefix nsx-ndr are in the ImagePullBackOff state.
For an NSX MP upgrade, a few pods with the prefix cloud-connector are in the ImagePullBackOff state.
The upgrade fails after you click UPGRADE, but the previous NSX MP and NSX NDR functionalities still function the same as before the upgrade was started. However, the NSX Application Platform might become unstable.
Workaround: See VMware knowledge base article 89418.
Issue 2989696: Scheduled backups fail to start after NSX Manager restore operation.
Scheduled backup does not generate backups. Manual backups continue to work.
Workaround: See VMware knowledge base article 89059.
Issue 3015843: The DFW packet log identifier was changed in NSX-T 3.2.0 and was not mentioned in the release notes.
Unable to see DFW packet logs with the log identifier used in prior releases.
The following NSX-T syslog fields were changed to align with RFC compliance:
FIREWALL_PKTLOG changed to FIREWALL-PKTLOG
DLB_PKTLOG changed to DLB-PKTLOG
IDPS_EVT changed to IDPS-EVT
nsx_logger changed to nsx-logger
Because these changes were made to align with RFC compliance, future NSX releases will not revert to the log structure of prior releases.
Issue 2355113: Workload VMs running RedHat and CentOS on Azure accelerated networking instances are not supported.
In Azure, when accelerated networking is enabled on RedHat or CentOS based OSes and the NSX Agent is installed, the ethernet interface does not obtain an IP address.
Workaround: Disable accelerated networking for RedHat and CentOS based OSes.
Issue 2283559: /routing-table and /forwarding-table MP APIs return an error if the edge has 65k+ routes for RIB and 100k+ routes for FIB.
If the edge has 65k+ routes for RIB and 100k+ routes for FIB, the request from MP to Edge takes more than 10 seconds and results in a timeout. This is a read-only API and has an impact only if they need to download the 65k+ routes for RIB and 100k+ routes for FIB using API/UI.
Workaround: There are two options to fetch the RIB/FIB. These APIs support filtering options based on network prefixes or type of route; use these options to download the routes of interest. Use the CLI in case the entire RIB/FIB table is needed; the CLI has no timeout.
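As an illustration of a filtered fetch (the query parameters network_prefix and route_source are assumptions based on the filtering options described above; check the NSX-T API guide for the exact parameter names):
GET /api/v1/logical-routers/<logical-router-id>/routing/routing-table?transport_node_id=<transport-node-id>&network_prefix=10.10.0.0%2F16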
Issue 2693576: Transport Node shows "NSX Install Failed" after KVM RHEL 7.9 upgrade to RHEL 8.2.
After the RHEL 7.9 upgrade to 8.2, the dependencies nsx-opsagent and nsx-cli are missing and the host is marked as install failed. Resolving the failure from the UI doesn't work: Failed to install software on host. Unresolved dependencies: [PyYAML, python-mako, python-netaddr, python3]
Workaround: Manually install NSX RHEL 8.2 vibs after host OS upgrade and then resolve it from the UI.
Issue 2628503: DFW rule remains applied even after forcefully deleting the manager nsgroup.
Workaround: Do not forcefully delete an nsgroup that is still used by a DFW rule. Instead, make the nsgroup empty or delete the DFW rule.
Issue 2684574: If the edge has 6K+ routes for Database and Routes, the Policy API times out.
These Policy APIs for the OSPF database and OSPF routes return an error if the edge has 6K+ routes:
/tier-0s/<tier-0s-id>/locale-services/<locale-service-id>/ospf/routes
/tier-0s/<tier-0s-id>/locale-services/<locale-service-id>/ospf/routes?format=csv
/tier-0s/<tier-0s-id>/locale-services/<locale-service-id>/ospf/database
/tier-0s/<tier-0s-id>/locale-services/<locale-service-id>/ospf/database?format=csv
These are read-only APIs, and there is an impact only if the API/UI is used to download 6K+ routes for the OSPF routes and database.
Workaround: Use the CLI commands to retrieve the information from the edge.
Issue 2663483: The single-node NSX Manager will disconnect from the rest of the NSX Federation environment if you replace the APH-AR certificate on that NSX Manager.
This issue is seen only with NSX Federation and with the single node NSX Manager Cluster. The single-node NSX Manager will disconnect from the rest of the NSX Federation environment if you replace the APH-AR certificate on that NSX Manager.
Workaround: Single-node NSX Manager cluster deployment is not a supported deployment option; use a three-node NSX Manager cluster.
Issue 2636771: Search can return a resource when the resource is tagged with multiple tag pairs and the tag and scope match any value of tag and scope.
This affects search queries with conditions on tag and scope. The filter may return extra data if tag and scope match any pair.
Workaround: None.
Issue 2668717: Intermittent traffic loss might be observed for E-W routing between the vRA created networks connected to segments sharing Tier-1.
In cases where vRA creates multiple segments and connects them to a shared ESG, V2T migration converts such a topology to a shared Tier-1 connected to all segments on the NSX-T side. During the host migration window, intermittent traffic loss might be observed for E-W traffic between workloads connected to the segments sharing the Tier-1.
Workaround: None.
Issue 2490064: Attempting to disable VMware Identity Manager with "External LB" toggled on does not work.
After enabling VMware Identity Manager integration on NSX with "External LB", if you attempt to then disable integration by switching "External LB" off, after about a minute, the initial configuration will reappear and overwrite local changes.
Workaround: When attempting to disable vIDM, do not toggle the External LB flag off; only toggle off vIDM Integration. This will cause that config to be saved to the database and synced to the other nodes.
Issue 2526769: Restore fails on multi-node cluster.
When starting a restore on a multi-node cluster, restore fails and you will have to redeploy the appliance.
Workaround: Deploy a new setup (one node cluster) and start the restore.
Issue 2534933: Certificates that have LDAP based CDPs (CRL Distribution Point) fail to apply as tomcat/cluster certs.
You can't use CA-signed certificates that have LDAP CDPs as cluster/tomcat certificate.
Workaround: There are two options.
Use a certificate that has HTTP-based CDP.
Disable CRL checking using the PUT https://<manager>/api/v1/global-configs/SecurityGlobalConfig API. (In the payload for the PUT, make sure "crl_checking_enabled" is set to false.)
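A sketch of that call (the MP API requires the object's current _revision, so read the object first; values shown are illustrative):
GET https://<manager>/api/v1/global-configs/SecurityGlobalConfig      <== note the returned "_revision"
PUT https://<manager>/api/v1/global-configs/SecurityGlobalConfig
{
  "resource_type": "SecurityGlobalConfig",
  "crl_checking_enabled": false,
  "_revision": <revision-from-GET>
}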
Issue 2566121: Wrong error message, "Some appliance components are not functioning properly" is displayed when server is overloaded.
When too many API calls are made to NSX, the UI/API shows the error "Some appliance components are not functioning properly" instead of "server is overloaded."
Workaround: None.
Issue 2613113: If onboarding is in progress, and restore of Local Manager is done, the status on Global Manager does not change from IN_PROGRESS.
UI shows IN_PROGRESS in Global Manager for Local Manager onboarding. Configuration of the restored site cannot be imported.
Workaround: Restart the Global Manager cluster, one node at a time.
Issue 2690457: When joining an MP to an MP cluster where publish_fqdns is set on the MP cluster and where the external DNS server is not configured properly, the proton service may not restart properly on the joining node.
The joining manager will not work and the UI will not be available.
Workaround: Configure the external DNS server with forward and reverse DNS entries for all Manager nodes.
Issue 2574281: Policy will only allow a maximum of 500 VPN Sessions.
NSX claims support of 512 VPN Sessions per edge in the large form factor, however, due to Policy doing auto plumbing of security policies, Policy will only allow a maximum of 500 VPN Sessions. Upon configuring the 501st VPN session on Tier0, the following error message is shown: {'httpStatus': 'BAD_REQUEST', 'error_code': 500230, 'module_name': 'Policy', 'error_message': 'GatewayPolicy path=[/infra/domains/default/gateway-policies/VPN_SYSTEM_GATEWAY_POLICY] has more than 1,000 allowed rules per Gateway path=[/infra/tier-0s/inc_1_tier_0_1].'}
Workaround: Use Management Plane APIs to create additional VPN Sessions.
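For reference, the Management Plane IPsec VPN sessions can be listed as below (a sketch assuming basic-auth admin credentials); additional sessions are created by POSTing a session body to the same endpoint, per the NSX Manager API reference:
curl -k -u admin:'<password>' https://<manager>/api/v1/vpn/ipsec/sessions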
Issue 2658092: Onboarding fails when NSX Intelligence is configured on Local Manager.
Onboarding fails with a principal identity error, and you cannot onboard a system with a principal identity user.
Workaround: Create a temporary principal identity user with the same principal identity name that is used by NSX Intelligence.
Issue 2639424: Remediating a Host in a vLCM cluster with Host-based Deployment will fail after 95% Remediation Progress is completed.
The remediation progress for a host will be stuck at 95% and then fail after a 70-minute timeout.
Workaround: See VMware knowledge base article 81447.
Issue 2636420: Post restore, the host goes to "NSX Install Skipped" state and the cluster to "Failed" state if "Remove NSX" was run on the cluster after the backup.
"NSX Install Skipped" will be shown for host.
Workaround: Following restore, run "Remove NSX" on the cluster again to achieve the state that was present following backup (not configured state).
Issue 2558576: Global Manager and Local Manager versions of a global profile definition can differ and might have an unknown behavior on Local Manager.
Global DNS, session, or flood profiles created on Global Manager cannot be applied to a local group from the UI, but can be applied from the API. Hence, an API user can accidentally create profile binding maps and modify the global entity on Local Manager.
Workaround: Use the UI to configure the system.
Issue 2491800: AR channel SSL certificates are not periodically checked for their validity, which could lead to using an expired/revoked certificate for an existing connection.
The connection would be using an expired/revoked SSL certificate.
Workaround: Restart the APH on the Manager node to trigger a reconnection.
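A sketch of the restart from the Manager node's NSX CLI (the APH service name is assumed to be "applianceproxy"; confirm with "get services" first):
get service applianceproxy
restart service applianceproxy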
Issue 2561988: All IKE/IPsec sessions are temporarily disrupted when multiple local endpoints are modified at the same time.
Traffic outage will be seen for some time.
Workaround: Modify the local endpoints in phases instead of all at the same time.
Issue 2584648: Switching primary for T0/T1 gateway affects northbound connectivity.
Location failover time causes disruption for a few seconds and may affect location failover or failback test.
Workaround: None.
Issue 2636420: Post restore, the transport node profile is applied on a cluster on which "Remove NSX" was run after the backup.
Hosts are not in prepared state but the transport node profile is still applied at the cluster.
Workaround: After a restore, run "Remove NSX" on the cluster again to achieve the state that was present after backup (not configured state).
Issue 2687084: After upgrade or restart, the Search API may return 400 error with Error code 60508, "Re-creating indexes, this may take some time."
Depending on the scale of the system, the Search API and the UI are unusable until the re-indexing is complete.
Workaround: None.
Issue 2782010: Policy API allows changing vdr_mac/vdr_mac_nested even when "allow_changing_vdr_mac_in_use" is false.
The VDR MAC will be updated on the transport node even if "allow_changing_vdr_mac_in_use" is set to false. No error is thrown.
Workaround: Change the VDR MAC back to the old value if you did not intend for the field to change.
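A sketch of reverting the value via the Policy global configuration API (endpoint assumed; the object carries vdr_mac, vdr_mac_nested, and allow_changing_vdr_mac_in_use): fetch the object, then PUT it back with the old MAC and the "_revision" from the GET:
curl -k -u admin:'<password>' https://<manager>/policy/api/v1/infra/global-config
curl -k -u admin:'<password>' -X PUT -H 'Content-Type: application/json' \
  https://<manager>/policy/api/v1/infra/global-config \
  -d '<GET response body with "vdr_mac" set back to the old value>'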
Issue 2791490: Federation: Unable to sync the objects to standby Global Manager (GM) after changing standby GM password.
The active Global Manager is not visible in the standby GM's Location Manager, nor are any updates received from the active GM.
Workaround: Request a force sync on the standby GM's UI.
Issue 2799371: IPSec alarms for L2 VPN are not cleared even though L2 VPN and IPSec sessions are up.
No functional impact except that unnecessary open alarms are seen.
Workaround: Resolve alarms manually.
Issue 2838613: For ESXi versions earlier than 7.0.3, NSX security functionality is not enabled on a VDS upgraded from version 6.5 to a higher version after security installation on the cluster.
NSX security features are not enabled on the VMs connected to a VDS upgraded from 6.5 to a higher version (6.6+) where the NSX Security on vSphere DVPortgroups feature is supported.
Workaround: After VDS is upgraded, reboot the host and power on the VMs to enable security on the VMs.
Issue 2839782: Unable to upgrade from NSX-T 2.4.1 to 2.5.1 because the CRL entity is large and Corfu imposes a size limit in 2.4.1, preventing the CRL entity from being created in Corfu during the upgrade.
Unable to upgrade.
Workaround: Replace certificate with a certificate signed by a different CA.
Issue 2851991: IDFW fails if Policy Group with AD Group also has nested empty source group.
IDFW fails if the Policy Group with AD Group has nested empty source group.
Workaround: Remove the empty child group.
Issue 2852419: An error message is seen when a static route is configured with a non-link-local IPv4/v6 next hop that has multiple scope values.
The static route API rejects a configuration in which a non-link-local next-hop IP has multiple scope values.
Workaround: Create multiple next hops instead of multiple scope values and make sure that each next hop has a different IP address.
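For illustration, a hedged Policy API sketch of that shape (IDs and paths are placeholders): two next hops, each with a distinct IP address and a single scope value:
curl -k -u admin:'<password>' -X PATCH -H 'Content-Type: application/json' \
  https://<manager>/policy/api/v1/infra/tier-0s/<tier-0-id>/static-routes/<route-id> \
  -d '{"network":"10.10.10.0/24","next_hops":[{"ip_address":"192.168.1.1","scope":["<interface-path-1>"]},{"ip_address":"192.168.1.2","scope":["<interface-path-2>"]}]}'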
Issue 2853889: When creating an EVPN Tenant Config (with VLAN-VNI mapping), child segments are created, but the child segments' realization status goes into a failed state for about 5 minutes and then recovers automatically.
It will take 5 minutes to realize the EVPN tenant configuration.
Workaround: None. Wait 5 minutes.
Issue 2854139: BGP routes are continuously added to and removed from the RIB in a topology where the Tier-0 SR on the edge has multiple BGP neighbors and these neighbors send ECMP prefixes to the Tier-0 SR.
Traffic drop for the prefixes that are getting continuously added/deleted.
Workaround: Add an inbound routemap that filters the BGP prefix which is in the same subnet as the static route nexthop.
Issue 2864250: A failure is seen in transport node realization if Custom NIOC Profile is used when creating a transport node.
Transport node is not ready to use.
Workaround: Create the transport node with Default NIOC profile and then update it applying the custom NIOC profile.
Issue 2866682: In Microsoft Azure, when accelerated networking is enabled on SUSE Linux Enterprise Server (SLES) 12 SP4 Workload VMs and with NSX Agent installed, the ethernet interface does not obtain an IP address.
VM agent doesn't start and VM becomes unmanaged.
Workaround: Disable Accelerated networking.
Issue 2867243: Effective membership APIs for a Policy Group or NSGroup with no effective members do not return 'results' and 'result_count' fields in the API response.
There is no functional impact.
Workaround: None.
Issue 2868235: On Quick Start - Networking and Security dialog, visualization shows duplicate VDS when there are multiple PNICs attached to the same VDS.
Visualization shows duplicate VDS. It may be difficult to find or scroll to the customize host switch section in case the focus is on the visualization graph.
Workaround: For the scrolling issue, press the Tab key or move the mouse pointer outside of the visualization area and scroll to the customize switch configuration section.
Issue 2870085: Security policy level logging to enable/disable logging for all rules is not working.
You will not be able to change the logging of all rules by changing "logging_enabled" of security policy.
Workaround: Modify each rule to enable/disable logging.
Issue 2870529: Runtime information for Identity Firewall (IDFW) is not available if the exact case of the NetBIOS name is not used when the AD domain is added.
You cannot easily and readily obtain IDFW current runtime information/status. Current active logins cannot be determined.
Workaround: Edit the domain in question and fix the netbios name. Once changes are applied, all future IDFW events will be reported correctly.
Issue 2870645: In the response of the /policy/api/v1/infra/realized-state/realized-entities API, 'publish_status_error_details' shows error details even if 'publish_status' is "SUCCESS," which means that the realization is successful.
There is no functional impact.
Workaround: None.
Issue 2871585: Removal of a host from a DVS and DVS deletion are allowed for DVS versions earlier than 7.0.3 after the NSX Security on vSphere DVPortgroups feature is enabled on the clusters using the DVS.
You may have to resolve any issues in transport node or cluster configuration that arise from a host being removed from DVS or DVS deletion.
Workaround: None.
Issue 2874236: After upgrade, if you re-deploy only one Public Cloud Gateway (PCG) in a HA pair, the older HA AMI/VHD build is re-used.
This happens only post upgrade in the first redeployment of PCGs.
Workaround: Provide the right AMI or VHD post upgrade via API.
Issue 2875385: When a new node joins the cluster, if local users (admin, audit, guestuser1, guestuser2) were renamed to some other name, these local user(s) may not be able to log in.
Local user is not able to log in.
Workaround: There are two workarounds.
Workaround 1: Rename the user back to anything and then back to the desired name.
Workaround 2: If you are unable to rename the users, restart the NSX Manager.
Issue 2848614: When joining an MP to an MP cluster where publish_fqdns is set on the MP cluster, and the forward or reverse lookup entry is missing in the external DNS server or the DNS entry is missing for the joining node, forward or reverse alarms are not generated for the joining node.
Forward/reverse alarms are not generated for the joining node even though the forward/reverse lookup entry is missing in the DNS server or the DNS entry is missing for the joining node.
Workaround: Configure the external DNS server for all Manager nodes with forward and reverse DNS entries.
Issue 2719682: Computed fields from the Avi Controller are not synced to intent on Policy, resulting in discrepancies in data shown on the Avi UI and the NSX-T UI.
Computed fields from Avi controller are shown as blank on the NSX-T UI.
Workaround: Use the app switcher to check the data from the Avi UI.
Issue 2747735: Post restore, VIP connectivity is broken due to network compatibility issues.
While restoring backup, customers can update VIP before the “AddNodeToCluster" step.
Workaround: Restore typically pauses at the "AddNodeToCluster" step, where you are asked to add additional manager nodes. At that step, first remove/update the VIP to use a new IP from the System > Appliances UI page, and then add additional nodes to the restored one-node cluster once the VIP is corrected.
Issue 2816781: Physical servers cannot be configured with a load-balancing based teaming policy as they support a single VTEP.
You won't be able to configure physical servers with a load-balancing based teaming policy.
Workaround: Change the teaming policy to a failover based teaming policy or any policy having a single VTEP.
Issue 2856109: Adding the 2000th pool member fails with a limit error if the Avi Controller version is 21.1.2.
Avi Controller 21.1.2 allows 1999 pool members instead of 2000.
Workaround: Use Avi Controller version 20.1.7 or 21.1.3.
Issue 2862418: The first packet could be lost in Live Traffic Analysis (LTA) when configuring LTA to trace the exact number of packets.
You cannot see the first packet trace.
Workaround: None.
Issue 2864929: Pool member count is higher when migrated from NSX for vSphere to Avi Load Balancer on NSX-T Data Center.
You will see a higher pool member count. Health monitor will mark those pool members down but traffic won't be sent to unreachable pool members.
Workaround: None.
Issue 2865273: The Advanced Load Balancer (Avi) Service Engine won't connect to the Avi Controller if there is a DFW rule blocking ports 22, 443, 8443 and 123 prior to migration from NSX for vSphere to NSX-T Data Center.
The Avi Service Engine is not able to connect to the Avi Controller.
Workaround: Add explicit DFW rules to allow ports 22, 443, 8443 and 123 for SE VMs or exclude SE VMs from DFW rules.
Issue 2866885: Event log scraper (ELS) requires the NetBIOS name configured in the AD domain to match that in the AD server.
User login will not be detected by ELS.
Workaround: Change the NetBIOS name to match that in the AD server.
Issue 2868944: UI feedback is not shown when migrating more than 1,000 DFW rules from NSX for vSphere to NSX-T Data Center, but sections are subdivided into sections of 1,000 rules or fewer.
UI feedback is not shown.
Workaround: Check the logs.
Issue 2878030: No notification is shown when the upgrade orchestrator node changes for a Local Manager site.
If the orchestrator node is changed after the UC is upgraded and you continue with the UI workflow by clicking any action button (pre-check, start, etc.), you will not see any progress on the upgrade UI. This is only applicable if the Local Manager Upgrade UI is accessed in the Global Manager UI using site switcher.
Workaround: Go to the Local Manager for that site and continue the upgrade to see the expected notification.
Issue 2878325: In the Inventory Capacity Dashboard view for Manager, “Groups Based on IP Sets” attribute count doesn’t include Groups containing IP Addresses that are created from Policy UI.
In the Inventory Capacity Dashboard view for Manager, the count for “Groups Based on IP Sets” is not correctly represented if there are Policy Groups containing IP Addresses.
Workaround: None.
Issue 2879133: Malware Prevention feature can take up to 15 minutes to start working.
When the Malware Prevention feature is configured for the first time, it can take up to 15 minutes for the feature to be initialized. During this initialization, no malware analysis will be done, but there is no indication that the initialization is occurring.
Workaround: Wait 15 minutes.
Issue 2879734: Configuration fails when the same self-signed certificate is used in two different IPsec local endpoints.
The failed IPsec session will not be established until the error is resolved.
Workaround: Use a unique self-signed certificate for each local endpoint.
Issue 2879979: The IKE service may not initiate a new IPsec route-based session after dead peer detection has occurred because the IPsec peer is unreachable.
There could be an outage for the specific IPsec route-based session.
Workaround: Disabling and re-enabling the IPsec session can resolve the problem.
Issue 2881281: Concurrently configuring multiple virtual servers might fail for some.
Client connections to virtual servers may time out.
Workaround: Run the following API to re-trigger the logical router workflow:
https://{ip}/api/v1/logical-routers/<id>?action=reprocess
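For example (a sketch assuming basic-auth admin credentials; the reprocess action is assumed to be invoked with POST):
curl -k -u admin:'<password>' -X POST "https://{ip}/api/v1/logical-routers/<logical-router-id>?action=reprocess"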
Issue 2881471: Service Deployment status is not updated when the deployment status switches from failure to success.
You may see that Service Deployment status remains in Down state forever along with the alarm that was raised.
Workaround: Use the service deployment status API to check the status.
API: https://{{nsx-mgr-ip}}/api/v1/serviceinsertion/services/{{service-id}}/service-deployments/{{service-deployment-id}}/status
Issue 2882070: NSGroup members and criteria are not displayed for global groups in the Manager API listing.
No functional impact.
Workaround: View the global group definitions via Policy API on Local Manager.
Issue 2882769: Tags on NSService and NSServiceGroup objects are not carried over after upgrading to NSX-T 3.2.
There is no functional impact on NSX as Tags on NSService and NSServiceGroup are not being consumed by any workflow. There may be an impact on external scripts that have workflows that rely on Tags on these objects.
Workaround: After upgrading to NSX-T 3.2, missing tags can be added to NSService and NSServiceGroup by updating the entities.
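A sketch of re-adding a tag via the Manager API, assuming basic-auth admin credentials; fetch the entity first, then PUT it back with the tags array added and the "_revision" value from the GET:
curl -k -u admin:'<password>' https://<manager>/api/v1/ns-services/<service-id>
curl -k -u admin:'<password>' -X PUT -H 'Content-Type: application/json' \
  https://<manager>/api/v1/ns-services/<service-id> \
  -d '<GET response body with "tags":[{"scope":"<scope>","tag":"<tag>"}] added>'
The same pattern applies to /api/v1/ns-service-groups/<group-id> for NSServiceGroup objects.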
Issue 2884939: The NSX rate limit of 100 requests per second is reached when migrating a large number of virtual services from NSX for vSphere to NSX-T ALB, and all APIs are blocked for some time.
The NSX-T Policy API returns the error "Client 'admin' exceeded request rate of 100 per second" (error code 102) for some time after the migrate config step runs.
Workaround: Update Client API rate limit to 200 requests per second.
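One way to raise the limit, sketched against the node HTTP service API (the endpoint and the "client_api_rate_limit" property are assumptions to verify against the GET output): read the configuration, set "client_api_rate_limit" to 200 in "service_properties", and PUT the full object back:
curl -k -u admin:'<password>' https://<manager>/api/v1/node/services/http
curl -k -u admin:'<password>' -X PUT -H 'Content-Type: application/json' \
  https://<manager>/api/v1/node/services/http \
  -d '<GET response body with "client_api_rate_limit": 200>'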
Issue 2792485: NSX manager IP is shown instead of FQDN for manager installed in vCenter.
NSX-T UI Integrated in vCenter shows NSX manager IP instead of FQDN for installed manager.
Workaround: None.
Issue 2884518: Incorrect count of VMs connected to segment on Network topology UI after upgrading an NSX appliance to NSX-T 3.2.
You will see an incorrect count of VMs connected to a segment in the network topology view. However, the actual count of VMs associated with the segment is shown when you expand the VMs node.
Workaround:
Log in to NSX appliance in engineering mode.
Run the command: curl -XDELETE http://localhost:9200/topology_manager
Issue 2874995: LCores priority may remain high even when not used, rendering them unusable by some VMs.
Performance degradation for "Normal Latency" VMs.
Workaround: There are two options.
Reboot the system.
Remove the high priority LCores and then recreate them. They will then default back to normal priority LCores.
Issue 2772500: Enabling Nested overlay on ENS can result in PSOD.
Can result in PSOD.
Workaround: Disable ENS.
Issue 2832622: IPv6 ND profile does not take effect after being edited or changed.
The network will still refer to the old NDRA profile.
Workaround: Restart the CCP (central control plane) to update the IPv6 ND profile.
Issue 2840599: MP Normalization API gives INVALID_ARGUMENT error.
Poor UI experience.
Workaround: None.
Issue 2844756: Segment deletion fails with an error.
You won't be able to delete the segment.
Workaround: Force delete the ports attached to the segment.
Fetch all the logical ports connected to that segment using the following API. For example, for PolicySegment01.
GET : https://<NSX MANAGER IP>/policy/api/v1/infra/segments/PolicySegment01/ports
For each logical port listed in the above call, make the following call.
DELETE : https://<NSX MANAGER IP>/api/v1/logical-ports/<LOGICAL PORT UUID>?detach=true
Once all ports attached are deleted, the Segment can be deleted.
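The two calls can be combined in a small shell loop (a sketch; the port UUIDs are taken from the GET response above, and basic-auth admin credentials are assumed):
for port_uuid in <uuid-1> <uuid-2>; do
  curl -k -u admin:'<password>' -X DELETE "https://<NSX MANAGER IP>/api/v1/logical-ports/${port_uuid}?detach=true"
done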
Issue 2866751: Consolidated effective membership API does not list static IPs in the response for a shadow group.
No functional or datapath impact. You will not see the static IPs in the GET consolidated effective membership API for a shadow group. This is applicable only for a shadow group (also called reference groups).
Workaround: Check the consolidated effective membership of a shadow group on its original site.
Issue 2869356: Error is displayed on Management Plane when you click the "Overview" tab for an NSGroup with IPSet members.
Poor user experience.
Workaround: None.
Issue 2872658: After Site-registration, UI displays an error for “Unable to import due to these unsupported features: IDS”.
There is no functional impact. Config import is not supported in NSX-T 3.2.
Workaround: Remove IDS config from Local Manager and retry registration.
Issue 2877628: When attempting to install Security feature on an ESX host switch VDS version lower than 6.6, an unclear error message is displayed.
An unclear error message is shown via the UI and API.
Workaround: Use VDS version greater than or equal to 6.6 on ESX to install NSX security.
Issue 2877776: "get controllers" output may show stale information about controllers that are not the master when compared to the controller-info.xml file.
This CLI output is confusing.
Workaround: Restart nsx-proxy on that TN.
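On an ESXi transport node, the restart can be done from the host shell (the init script path is an assumption based on typical NSX-T 3.x host installs):
/etc/init.d/nsx-proxy restart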
Issue 2879119: When a virtual router is added, the corresponding kernel network interface does not come up.
Routing on the VRF fails. No connectivity is established for VMs connected through the VRF.
Workaround: Restart the dataplane service.
Issue 2881168: LogicalPort GET API output is in expanded format "fc00:0:0:0:0:0:0:1" as compared to previous format "fc00::1".
LogicalPort GET API output in NSX-T 3.2 is in expanded format "fc00:0:0:0:0:0:0:1" as compared to NSX-T 3.0 format "fc00::1".
Workaround: None.
Issue 2882822: Temporary IPSets are not added to SecurityGroups used in EDGE Firewall rules / LB pool members during NSX for vSphere to NSX-T config migration.
During migration, there may be a gap until the VMs/VIFs are discovered on NSX-T and are a part of the SGs to which they are applicable to via static/dynamic memberships. This can lead to traffic being dropped or allowed contrary to the Edge Firewall rules in the period between the North/South Cutover (N/S traffic going through NSX-T gateway) until the end of the migration.
Workaround: Add a fake DFW rule in which the source / destination contains all the Security Groups consumed in the Edge FW. Another option is to move Edge Firewall rules source and destination to IPsets before the migration.
Issue 2884070: If there is a mismatch of MTU setting between NSX-T edge uplink and peering router, OSPF neighbor-ship gets stuck in Exstart state. During NSX for vSphere to NSX-T migration, the MTU is not automatically migrated so a mismatch can impact dataplane during North/South Edge cutover.
OSPF adjacency is stuck in Exstart state.
Workaround: Manually configure matching MTU on OSPF neighbor interfaces before doing Edge cutover.
Issue 2884416: Load balancer status cannot be refreshed in a timely manner.
Wrong load balancer status.
Workaround: Stop load balancer stats collection.
Issue 2885009: Global Manager has additional default Switching Profiles after upgrade.
No functional impact.
Workaround: You are not expected to use the default Switching Profiles starting with the prefix "nsx-default".
Issue 2885248: For the InterVtep scenario, if Edge vNICs are connected to NSX portgroups (irrespective of the VLAN on the Edge VM and ESXi host), the north-south traffic between workload VMs on the ESXi host and the Edge stops working because ESXi drops packets destined for the Edge VTEP.
Workaround: Update EdgeTN by disconnecting the Segments from its interfaces and re-connecting them.
Issue 2885330: Effective member not shown for AD group.
Effective members of AD group not displayed. No datapath impact.
Workaround: None.
Issue 2885552: If you have created an LDAP Identity Source that uses OpenLDAP, and there is more than one LDAP server defined, only the first server is used.
If the first LDAP server becomes unavailable, authentication fails, instead of trying the rest of the configured OpenLDAP server(s).
Workaround: If it is possible to place a load balancer in front of the OpenLDAP servers and configure NSX with the virtual IP of the load balancer, you can have high availability.
Issue 2886210: During restore, if the VC is down, a Backup/Restore dialog appears telling you to ensure that the VC is up and running; however, the IP address of the VC is not shown.
The IP address of the VC is not shown during restore when checking VC connectivity.
Workaround: Check that all registered VCs are running before proceeding to the next restore step.
Issue 2886971: Groups created on Global Manager are not cleaned up post delete. This happens only if that Group is a reference group on a Local Manager site.
No functional impact; however, you cannot create another Group with the same policy path as the deleted Group.
Workaround: None.
Issue 2887037: Post Manager to Policy object promotion, NAT rules cannot be updated or deleted.
This happens when NAT rules are created by a PI (Principal Identity) user on Manager before promotion is triggered. PI user created NAT rules cannot be updated or deleted post Manager to Policy object promotion.
Workaround: NAT rules with the same configuration can be created with a non-protected user like "admin" before Manager to Policy object promotion.
Issue 2888207: Unable to reset local user credentials when vIDM is enabled.
You will be unable to change local user passwords while vIDM is enabled.
Workaround: vIDM configuration must be (temporarily) disabled, the local credentials reset during this time, and then integration re-enabled.
Issue 2889748: Edge node delete fails if redeployment has failed. The delete leaves stale intent in the system, which is displayed in the UI.
Though the Edge VM is deleted, stale edge intent and internal objects are retained in the system, and the delete operation is retried internally. There is no functional impact, as the Edge VMs are deleted and only the intent has stale entries.
Workaround: None.
Issue 2875962: Upgrade workflow for Cloud Native setups is different from NSX-T 3.1 to NSX-T 3.2.
Following the usual workflow will erase all CSM data.
Workaround: The upgrade requires VMware assistance. Contact VMware Support.
Issue 2888658: Significant performance impact in terms of connections per second and throughput observed on NSX-T Gateway Firewall when Malware Detection and Sandboxing feature is enabled.
Any traffic subject to malware detection experiences significant latencies and possibly connection failures. When malware detection is enabled on the gateway, it will also impact L7FW traffic causing latencies and connection failures.
Workaround: Limit the amount of traffic that is subject to malware detection by writing IDS rules that match only a small subsection of the traffic.
Issue 2882574: The 'Brownfield Config Onboarding' APIs are blocked in the NSX-T 3.2.0 release, as the feature is not fully supported.
You will not be able to use the 'Brownfield Config Onboarding' feature.
Workaround: None.
Issue 2890348: Renaming the default VNI pool causes inconsistent VNI pool when upgrading to NSX-T 3.2.
VNI allocation and related operations may fail.
Workaround: Prior to upgrading to NSX-T 3.2, rename the default VNI pool to DefaultVniPool using the API PUT https://{{url}}/api/v1/pools/vni-pools/<vni-pool-id>.
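A sketch of the rename, assuming basic-auth admin credentials; list the pools to find the default pool's ID and current body, then PUT it back with "display_name" set to DefaultVniPool and the "_revision" from the GET:
curl -k -u admin:'<password>' https://{{url}}/api/v1/pools/vni-pools
curl -k -u admin:'<password>' -X PUT -H 'Content-Type: application/json' \
  https://{{url}}/api/v1/pools/vni-pools/<vni-pool-id> \
  -d '<GET response body with "display_name": "DefaultVniPool">'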
Issue 2885820: Missing translations for some IP addresses for IP range starting with 0.0.0.0.
NSGroup with IP range starting with 0.0.0.0 (for example, "0.0.0.0-255.255.255.0"), has translation issues (missing 0.0.0.0/1 subnet).
NSGroup with IP range "1.0.0.0-255.255.255.0" are unaffected.
Workaround: To configure groups with IP range starting with 0.0.0.0, manually add 0.0.0.0/1 in the group configuration.
Issue 2871440: Workloads secured with NSX Security on vSphere dvPortGroups lose their security settings when they are vMotioned to a host connected to an NSX Manager that is down.
For clusters installed with the NSX Security on vSphere dvPortGroups feature, VMs that are vMotioned to hosts connected to a downed NSX Manager do not have their DFW and security rules enforced. These security settings are re-enforced when connectivity to NSX Manager is re-established.
Workaround: Avoid vMotion to affected hosts when NSX Manager is down. If other NSX Manager nodes are functioning, vMotion the VM to another host that is connected to a healthy NSX Manager.
Issue 2945515: NSX tools upgrade in Azure can fail on Redhat Linux VMs.
By default, NSX tools is installed on the /opt directory. However, during NSX tools installation, the default path can be overwritten with the "--chroot-path" option passed to the install script.
Insufficient disk space on the partition where NSX tools is installed can cause the NSX tools upgrade to fail.
Workaround: None.
Issue 2875563: Deleting an IN_PROGRESS LTA session may cause a PCAP file leak.
The PCAP file leaks if an LTA session with a PCAP action is deleted while the session state is "IN_PROGRESS". This may fill the /tmp partition of the ESXi host.
Workaround: Clearing the /tmp partition may help.
Issue 2875667: Downloading the LTA PCAP file results in error when the NSX /tmp partition is full.
The LTA PCAP file cannot be downloaded due to the /tmp partition being full.
Workaround: Clearing the /tmp partition may help.
Issue 2883505: PSOD on ESXi hosts during NSX V2T migration.
PSOD (Purple Screen of Death) on multiple ESXi hosts during V2T migration. This results in a datapath outage.
Workaround: None.
Issue 2914934: DFW rules on dvPortGroups are lost after migrating NSX-V to NSX-T.
After an NSX-V to NSX-T migration, any workload that is still connected to a vSphere dvPortGroup will not have a DFW configuration.
Workaround: After an NSX-V to NSX-T migration, VLAN segments are created with an identical DFW configuration. Use the VLAN segments instead of the dvPortGroups.
Another workaround is to uninstall NSX-V and then install NSX-T in security-only mode. You can then use workloads on existing dvPortGroups.
Issue 2921704: Edge Service CPU may spike due to nginx process deadlock when using L7 load balancer with ip-hash load balancing algorithm.
The backend servers behind the load balancer cannot be connected to.
Workaround: Switch to the L4 engine to remediate the issue. If you want to keep using the L7 load balancer, change the load balancing algorithm to round-robin instead of ip-hash.
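A hedged Policy API sketch of the algorithm change (pool ID is a placeholder; the LB pool "algorithm" field accepts values such as ROUND_ROBIN and IP_HASH):
curl -k -u admin:'<password>' -X PATCH -H 'Content-Type: application/json' \
  https://<manager>/policy/api/v1/infra/lb-pools/<pool-id> \
  -d '{"algorithm":"ROUND_ROBIN"}'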
Issue 2933905: Replacing an NSX-T Manager results in transport nodes losing connection to the controller.
Adding or removing a node in the Manager cluster can result in some transport nodes losing controller connectivity.
Workaround: Restart the nsx-proxy service on impacted transport nodes anytime a Manager is added or removed from the management cluster. This will repopulate controller-info.xml and allow the controller connection to come up.
Issue 2927442: Traffic sometimes hits the default deny DFW rule on VMs across different hosts and clusters since the NSX-T 3.2.0.1 upgrade.
Some PSOD issues occurred after the NSX for vSphere migration to NSX-T 3.1.3. Therefore, it was recommended to upgrade to 3.2.0.1. However, since then the hosts do not apply the Distributed Firewall rules consistently. Different hosts match different rules, which are not consistent and not expected.
Workaround:
- If the issue is traced to a specific controller, reboot that controller to temporarily resolve the issue.
- Also, if the issue is impacting DFW, any/any allow rules can be created at the top of the rules list to bypass the unwanted block rules.
Issue 2937810: The datapath service fails to start and some Edge bridge functions (for example, Edge bridge port) do not work.
If Edge bridging is enabled on Edge nodes, the Central Control Plane (CCP) sends the DFW rules to the Edge nodes, which should only be sent to host nodes. If the DFW rules contain a function which is not supported by the Edge firewall, the Edge nodes cannot handle the unsupported DFW configuration, which causes the datapath to fail to start.
Workaround: Remove or disable the DFW rules, which are not supported by the Edge firewall.