VMware NSX-T Data Center 3.0   |  07 April 2020  |  Build 15946738

Check regularly for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

  • What's New
  • Compatibility and System Requirements
  • API Deprecations and General Behavior Changes
  • API and CLI Resources
  • Available Languages
  • Document Revision History
  • Resolved Issues
  • Known Issues

What's New

NSX-T Data Center 3.0 includes a variety of new features to provide new functionality for virtualized networking and security for private, public, and multi-clouds. Highlights include the following focus areas and new features:

  • Cloud-scale Networking: NSX Federation
  • Intrinsic Security: Distributed IDS, Micro-Segmentation for Windows Physical Servers, Time-based Firewall Rules, and a feature preview of URL Analysis
  • Modern Apps Networking: NSX-T for vSphere with Kubernetes, container networking and security enhancements
  • Next-Gen Telco Cloud: L3 EVPN for VM mobility, accelerated data plane performance, NAT64, IPv6 support for containers, E-W service chaining for NFV

The following new features and feature enhancements are available in the NSX-T Data Center 3.0.0 release.

L2 Networking

NSX-T support on VDS 7.0

NSX-T can now run on the vSphere Distributed Switch (VDS) version 7.0. It is recommended that new deployments of NSX and vSphere take advantage of this close integration and start to move toward the use of NSX-T on VDS. The N-VDS NSX-T host switch will be deprecated in a future release; going forward, the plan is to converge the NSX-T and ESXi host switches. The N-VDS remains the switch for KVM hosts, NSX-T Edge nodes, native public cloud NSX agents, and bare metal workloads.

The current NSX-T ESXi host switch, the N-VDS, continues to be supported in this release, and it is recommended that NSX deployments that currently use the N-VDS on ESXi continue to use the same switch. The reason for this is twofold:

  1. The conversion of N-VDS to VDS 7.0 for existing NSX deployments is a manual process. Please contact your VMware representative for further details if required.
  2. The N-VDS and VDS APIs are different. There are no changes to either API, but if you move to the VDS in your environment, you will have to start invoking the VDS APIs instead of the N-VDS APIs. This ecosystem change must be made before converting the N-VDS to VDS.

The following deployment considerations are recommended when moving from N-VDS to VDS:

  • VDS is configured through vCenter. N-VDS is vCenter independent. With NSX-T support on VDS and the eventual deprecation of N-VDS, NSX-T will be closely tied to vCenter, and vCenter will be required to enable NSX.
  • The N-VDS supports ESXi host-specific configurations. The VDS uses cluster-based configuration and does not support ESXi host-specific configuration.
  • This release does not have full feature parity between N-VDS and VDS.
  • The backing type for VM and VMkernel interface APIs is different for VDS compared to N-VDS. A sketch of how to check which switch type a transport node currently uses follows this list.
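
To check which switch type a given transport node is currently using, one option is to read the transport node configuration over the manager API and inspect the host switch specification in the result. This is a minimal sketch only: <nsx-manager> is a placeholder, admin credentials are assumed, and the exact field names within the host switch specification (for example, host_switch_type) should be confirmed against the API guide.

    # Sketch: list transport nodes and inspect each host_switch_spec to see
    # whether the host switch is an N-VDS or a VDS (field names are assumptions).
    curl -k -u admin 'https://<nsx-manager>/api/v1/transport-nodes'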

RHEL support: RHEL 7.6 and RHEL 7.7 are added to the list of supported operating systems for NSX. This applies to KVM and bare metal workloads.

Edge Bridge

Segments that have been configured for Guest VLAN tagging can now be extended to VLAN through an edge bridge. The feature is enabled by configuring a range of VLAN IDs when mapping a segment to a bridge profile. Segment traffic with a VLAN ID in the range is bridged to VLAN, keeping its VLAN tag. Traffic received on the VLAN side of the bridge with a VLAN ID that falls in the configured range of the segment-to-bridge mapping is bridged into the segment, keeping its VLAN ID as a guest VLAN tag.

Policy-based UI support is now available for the edge bridge.
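
As an illustration of where the VLAN range is supplied, the mapping is part of the segment's bridge profile configuration in the policy API. The following is a hedged sketch only: <nsx-manager>, <segment-id> and <bridge-profile-id> are placeholders, and the field names (bridge_profiles, bridge_profile_path, vlan_ids) are assumptions that should be confirmed against the NSX-T API guide.

    # Sketch: map a segment to an edge bridge profile with a guest VLAN range of 100-110.
    # Field names and the bridge profile path are assumptions; verify them in the API guide.
    curl -k -u admin -X PATCH -H 'Content-Type: application/json' \
      -d '{"bridge_profiles": [{"bridge_profile_path": "/infra/sites/default/enforcement-points/default/edge-bridge-profiles/<bridge-profile-id>", "vlan_ids": ["100-110"]}]}' \
      'https://<nsx-manager>/policy/api/v1/infra/segments/<segment-id>'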

MAC Limit per VNI: The default MAC limit per logical switch is 2,048 in the ESXi data plane. NSX now provides the ability to change the MAC limit per logical switch from the default value to match customer requirements.

Support for Windows 2016 Bare Metal Server

NSX supports the following use cases:

  1. Connectivity with VLAN-backed virtualized workloads

  2. Connectivity with overlay-backed virtualized workloads

  3. Security for communication between virtual and physical workloads

  4. Security for communication between physical workloads

Support is available for:

  • Two pNICs (management and application use separate IPs)

  • One pNIC (management and application use the same IP)

See the VMware Configuration Maximums for currently supported maximums.

Enhanced Data Path

  • Zero TX copy support in ENS - For larger packet sizes (600-700 bytes), zero TX copy is now supported. This is supported starting with vSphere 7.0 and reduces L2/L3 cache misses, leading to better overall performance.

  • FPO Flow Director offload & associated vmkapi changes - N-VDS support for NIC offloads and hence improved performance in Enhanced Data Path.

  • U-ENS Dataplane Consolidation - ENS (Enhanced Network Stack) and FC (Flow Cache) provide a fast path on the N-VDS (or vSwitch) for NSX-T. FC is for general-purpose workloads, while ENS is more for telco use cases.

  • ENS performance optimizations - Performance Improvements with respect to cache utilization and packet sizes have been made.

Federation

NSX-T 3.0 introduces the ability to federate multiple on-premises data centers through a single pane of glass, called Global Manager (GM). GM provides a graphical user interface and an intent-based REST API endpoint. Through the GM, you can configure consistent security policies across multiple locations and stretched networking objects: Tier0 and Tier1 gateways and segments.

Given the scale and upgrade limits (see "Federation Known Issues" below), Federation is not intended for production deployments in NSX-T 3.0.x releases.

Edge Platform

  • New Edge VM XLarge form factor provides more scale for advanced services and better throughput.
  • Enhanced convergence time on Tier-0 gateway: lower BFD intervals are now supported, down to 500 ms on Edge VM and 50 ms on Bare Metal Edge.
  • Enhanced Edge VM deployment: During Edge VM deployment through NSX, the following actions are taken:
    • The Edge VM is automatically started on ESXi reboot
    • The Edge VM is added to the DFW exclusion list
    • The following parameters are configured: allow SSH, allow root login, NTP server list, domain search list, DNS server list, and default user credentials

L3 Networking

  • Change of Tier-0 Gateway HA mode offers the option to change the Tier-0 gateway High Availability mode from Active/Active to Active/Standby, and vice versa, through the UI and API.
  • VRF Lite support provides multi-tenant data plane isolation through Virtual Routing Forwarding (VRF) in Tier-0 gateway. VRF has its own isolated routing table, uplinks, NAT and gateway firewall services.
  • L3 EVPN support provides a northbound connectivity option to advertise all VRFs on a Tier-0 gateway through MP-BGP EVPN AFI (Route Type 5) to a Provider Edge and maintain the isolation on the dataplane with VXLAN encapsulation by using one VNI per VRF.
  • Rate-limit support on Tier-1 gateways provides the ability to rate-limit all ingress and egress traffic on the Tier-1 gateway uplink.
  • Metadata Proxy support in policy and UI enhances the intent-based API and policy UI to configure Metadata Proxy.
  • DHCP server policy and UI enhances the intent-based API and policy UI to configure a DHCP server locally to a segment.
  • Policy API for L3 enhances the intent-based API and policy UI to retrieve runtime information on the gateways (see the example following this list).
  • L3 Multicast (Phase1):
    • NSX-T 3.0 introduces Multicast to NSX-T for the first time.
    • Multicast replication is only supported on T0, and any host expected to carry multicast workloads must be connected to a T0. T1 will be supported in future releases.
    • There is also just one uplink from the Edge to the TOR in NSX-T 3.0. In future releases, redundancy will be built in and there could be multiple uplinks to a TOR supporting PIM.
    • The RP (rendezvous point) always has to be configured outside the NSX domain.
    • No Edge Services are supported for Multicast Traffic in NSX-T 3.0.
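
As a minimal example of the L3 policy API mentioned above, gateway configuration can be read back with simple GET calls; runtime and statistics endpoints are documented in the API guide. <nsx-manager> is a placeholder and admin credentials are assumed.

    # Sketch: read Tier-0 and Tier-1 gateway configuration through the policy API.
    curl -k -u admin 'https://<nsx-manager>/policy/api/v1/infra/tier-0s'
    curl -k -u admin 'https://<nsx-manager>/policy/api/v1/infra/tier-1s'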

IPv6

  • NAT64 offers a transition mechanism from IPv4 to IPv6. It provides stateful Network Address Translation from IPv6 to IPv4 following standard RFC 6146.
  • Stateful DHCPv6 Support - NSX now supports stateful delivery of IPv6 addresses and associated parameters through NSX native DHCP server hosted on Edge nodes.

Firewall

  • Consistent Security Policy across multiple sites using NSX Federation - NSX-T 3.0 introduces the concept of a Global Manager (GM) that can manage multiple NSX Managers. With NSX-T 3.0, the Global Manager can apply a consistent security policy across multiple sites through a single pane of glass.
  • Introducing Security Dashboards - NSX-T 3.0 introduces new Security Overview Dashboards for security and firewall admins to see at-a-glance the current operational state of firewall and distributed IDS.
  • Time-based Scheduling of Firewall Rules - With NSX-T 3.0, you can now schedule enforcing of specific rules for specific time intervals.
  • Introducing wizards to quickly set up VLAN-based Micro-Segmentation - You can configure your data centers to introduce segmentation using NSX-T in a few easy steps.
  • Micro-Segmentation for Windows Physical Servers - Introducing micro-segmentation for Windows physical servers in NSX-T 3.0.
  • URL Analysis - Feature Preview - Introducing a preview of URL Filtering with detection, classification and reputation scores of URLs. This feature preview is available only on the gateway firewall.
  • Supporting TCP/UDP and ICMP Session Timer Configuration for FW in KVM - Firewall session timer configuration changes are now supported for workloads running on KVM.

Identity Firewall

  • Filter ICMP traffic for VDI environments as part of Identity Firewall rules - This allows the creation of Identity Firewall rules for VDI users to filter traffic based on the ICMP protocol. This is limited to VDI and not available for RDSH users.
  • Selectively sync AD groups for Identity Firewall groups - This allows syncing of specific AD groups to be used as endpoints in Identity Firewall rules. This capability optimizes the performance and usability of the AD Groups. This capability is currently available using the API only.
  • Filter UDP traffic for Identity Firewall rules - This allows filtering UDP traffic as part of Identity Firewall rules.

Distributed Intrusion Detection System (D-IDS)

Introducing in the NSX platform the capability of Distributed Intrusion Detection as part of the platform's Threat & Vulnerability Detection capabilities. This feature allows you to enable intrusion detection capabilities within the hypervisor to detect vulnerable network traffic. This distributed mechanism can be enabled on a per-VM and per-vNIC basis with granular rule inspection. As part of this feature set, the NSX Manager is able to download the latest signature packs from the NSX Signature Service. This keeps the NSX Distributed IDS updated with the latest threat signatures in the environment.

Service Insertion and Guest Introspection

  • E-W Service Chaining for NFV-SFC at the Edge - The ability to chain multiple services was previously available only for distributed traffic; East-West service chains can now also be extended to redirect edge traffic.
  • Disable cloning of NSX Service VMs - Cloning of Service VMs is now blocked in the vSphere Client to prevent the VMs from malfunctioning.

Load Balancing

  • Load Balancer Health Check Support for Multiple Monitors and 'AND' Condition - This enhancement allows multiple active health monitors to be configured as one health monitoring policy. All active health monitors must pass successfully for the member to be considered healthy.
  • Connection Drop for Layer 4 and Layer 7 Virtual Servers - Layer 4 Virtual Servers have a new option to allow or deny connections from specified networks. LB rules for Layer 7 Virtual Servers allow Group to be used to specify networks and have a new action "Drop" to drop requests silently instead of returning an HTTP status code.
  • IPv6 Support for LB Virtual Servers and Members - Load balancing IPv6 VIP to IPv6 members is supported.
  • JSON Web Token Support - Load Balancer can validate JSON Web Token or JWT and grant access based on its payload.
  • SSL-passthrough and Dynamic SSL Termination by Load Balancer Rules - Load Balancer can select a pool based on SNI in SSL Client Hello without terminating SSL. Also, it can perform SSL-passthrough, SSL offload or end-to-end SSL based on SNI.
  • Load Balancer Extra Large Support - Extra Large form factor is introduced for XLarge Edge for better scale.
  • DLB for vSphere with Kubernetes - Distributed Load Balancer is supported for Kubernetes cluster IPs managed by vSphere with Kubernetes. DLB is NOT supported for any other workload types.

VPN

  • Intel QAT Support for VPN Bulk Encryption - Intel QAT card is supported to offload VPN bulk encryption on Bare Metal Edge.
  • Local egress for L2VPN - Two local gateways can be connected to the stretched network with the same gateway IP address to allow outbound traffic to egress through the local gateway.
  • On-demand DPD - On-demand DPD is supported to avoid a short interval of DPD keep-alive while providing a fast detection of a remote failure.
  • L2 VPN on Tier-1 LR - L2 VPN service is supported on Tier-1 Gateway.
  • Stateful Fail-over for VPN Sessions - IKE SAs and IPsec SAs are synchronized to the standby VPN service in real-time for stateful fail-over.
  • PMTU Discovery - PMTU discovery is supported for both L2 and L3 VPN services to avoid packet fragmentation.

Automation, OpenStack and other CMP

  • Search API available - Exposes the NSX-T search capabilities (already available in the UI) through the API. This allows you to craft powerful queries that return NSX-T objects based on their tags, types, names, or other attributes, simplifying automation (see the example following this list). The detailed usage of the Search API is described in the API guide.
  • Terraform Provider for NSX-T - Declarative API support - The Terraform Provider offers the capability to configure logical objects from the declarative API for NSX-T on premises. This allows the benefits of Terraform with the flexibility and additional features of NSX-T policy API model. The new resources and data sources allow infrastructure as code by covering a wider range of constructs from networking (Tier-0 Gateway, Tier-1 Gateway, segments), security (centralized and distributed firewall, groups) and services (load balancer, NAT, DHCP). More details are available in the Terraform Provider release notes.
  • Ansible Module for NSX-T - Upgrade and Logical object support - The Ansible Modules for NSX-T are enhanced to support upgrade in addition to install. They also allow set up of the environment by configuring Tier-0 Gateway, Tier-1 Gateway, segments and distributed firewall rules from the declarative API. This allows you to automate the setup of your environment, its upgrades, and the creation of the base topology. More details are available in the Ansible Module release notes.
  • OpenStack Integration Improvements - extended IPv6, VPNaaS support, VRF-lite support - The Neutron plugin for the NSX-T declarative API now covers IPv6 and VPNaaS. The IPv6 implementation adds load balancer support to all the functionality already available in previous releases, while VPNaaS allows you to configure, from OpenStack, IPsec VPNs created in NSX-T. The OpenStack Neutron plugin has also validated the use of a Tier-0 VRF-lite as an External network in addition to the Tier-0, allowing large enterprises and service providers to provide isolation and flexibility with a minimum amount of edge resources. More details are available in the OpenStack Neutron Plugin Release Notes.
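
As an illustration of the Search API mentioned above, a query for objects of a given type carrying a specific tag could look like the following. This is a sketch only: <nsx-manager> is a placeholder, admin credentials are assumed, and the endpoint and query syntax (resource_type, tags.scope, tags.tag) should be confirmed against the API guide.

    # Sketch: find all Segments tagged with scope "env" and tag "prod".
    curl -k -u admin -G 'https://<nsx-manager>/policy/api/v1/search/query' \
      --data-urlencode 'query=resource_type:Segment AND tags.scope:env AND tags.tag:prod'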

Container Networking and Security

  • Container Inventory & Monitoring in User Interface - Container cluster, namespace, network policy, and pod-level inventory can be visualized in the NSX-T user interface. Visibility is also provided into the correlation of container/Kubernetes objects to NSX-T logical objects.
  • IPAM Flexibility - The NSX Policy IP Block API has been enhanced to carve out IP subnets of variable sizes. This functionality helps the NSX Container Plugin carve out variable size subnets for Namespaces using Policy API.
  • NCP Component Health Monitoring - The NSX Container Plugin and related component health information like NCP Status, NSX Node Agent Status, NSX Hyperbus Agent Status can be monitored using the NSX Manager UI/API.

NSX Cloud

  • App-ID and URL Filtering - App-ID and URL filtering for selective native services in the public cloud is now possible through NSX Cloud. This expands the scope of workloads/services that can be protected in the public cloud using a single consistent policy in NSX Cloud, starting with the most commonly used services in AWS and Azure.
  • Support for AWS and Azure Gov Clouds - All functionalities of NSX Cloud on commercial clouds (AWS and Azure) are now extended to Gov Cloud (across all gov cloud regions inside the US). Features are subject to API/availability support from the respective public cloud providers.
  • Support SLES 12sp3 (SUSE 12 SP3) - Public Cloud VMs that have an agent running SLES 12sp3 are now supported.
  • Support for VPNs in agentless VPCs and VNets - VPN connections can be established between Edges that are located on premises, or in a public cloud (AWS and Azure), even if the VPCs and VNETs are running in agentless mode.

Operations

  • Support for both Thin and Thick Disk Mode - The NSX Manager and NSX Intelligence appliances now support both thin and thick mode and provide that choice upon deployment.
  • Increased disk size of NSX Manager - Starting in NSX-T 3.0, the disk size of NSX Manager increases from 200 GB to 300 GB (per NSX Manager node in a cluster). During a new NSX Manager installation, ensure that the underlying datastore has adequate disk space. During an upgrade to NSX-T 3.0, ensure that the required additional disk space is available prior to upgrading NSX Manager.
  • Reduction of VIB size in NSX-T - NSX-T 3.0 has a smaller VIB footprint in all NSX host installations, so that you can install ESX and other third-party VIBs along with NSX on your hypervisors.
  • Support for Federation Upgrade - With NSX Federation, the admin can upgrade Global Manager and Local Manager asynchronously, following the detailed compatibility matrix in the Upgrade Guide.
  • Periodic MTU/VLAN Healthcheck for N-VDS - Healthcheck is now available for the NSX host switch, i.e., the N-VDS.
  • Non-disruptive in-place upgrades - More enhancements have been made to the in-place upgrades of NSX Transport Nodes with fewer caveats, details of which are available in the Upgrade Guide.
  • Traceflow Policy support - The traceflow debugging tool is now available under the Policy tab (previously known as the Simplified tab).
  • Ability to follow TN install at a per-host level and provide progress bar - An admin can view in the UI a detailed progress of the installation of NSX on hypervisors.
  • Traceflow observations for Spoofguard - The traceflow debugging tool will now show any packet drops that may happen due to the Spoofguard feature.
  • Ability to automatically uninstall NSX when a host leaves a VC cluster - NSX will be automatically uninstalled once a user moves a Transport Node out of a vSphere cluster. More details on this are available in the Upgrade Guide.
  • Central Appliance Configuration - NSX now supports the ability to configure settings that are common across NSX Managers and Edge Nodes in a centralized way instead of requiring the administrator to use the local CLI configuration on a per node basis.
  • SNMP Traps - NSX now supports the ability to generate SNMP trap notifications from the NSX Manager, Edge Node and the NSX components of supported hypervisor hosts. These trap notifications are for events and alarms generated by NSX. The NSX MIB is provided as part of the deliverables of NSX on the NSX download site.
  • NSX Alarm Framework and System Alarms/Events - With this release, NSX now supports an alarm framework that improves the delivery of alerts and alarms to aid in running NSX successfully in a production environment. A list of supported alarms is provided in the NSX documentation; a minimal example of retrieving alarms over the API follows this list.
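
As a minimal example of consuming the alarm framework mentioned above, alarms can be listed over the API. This is a sketch only: <nsx-manager> is a placeholder, admin credentials are assumed, and the exact endpoint should be confirmed against the API guide.

    # Sketch: list alarms raised by the NSX alarm framework.
    curl -k -u admin 'https://<nsx-manager>/api/v1/alarms'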

Inventory

  • NSX Tag Listing and Bulk Action Support - NSX-T adds UI/API support for listing NSX Tags and for assigning/un-assigning NSX Tags to multiple virtual machines (see the sketch following this list).
  • Physical Servers Listing - NSX-T adds UI support for listing physical servers.
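
For reference, the sketch below shows one way a single virtual machine's tags can be updated through the API; the action name (update_tags) and body fields are assumptions to verify against the API guide, which also documents the bulk assign/unassign actions referenced above. <nsx-manager> and <vm-external-id> are placeholders and admin credentials are assumed.

    # Sketch: replace the NSX tags on one VM identified by its external ID.
    # The update_tags action and body shape are assumptions; see the API guide.
    curl -k -u admin -X POST -H 'Content-Type: application/json' \
      -d '{"external_id": "<vm-external-id>", "tags": [{"scope": "env", "tag": "prod"}]}' \
      'https://<nsx-manager>/api/v1/fabric/virtual-machines?action=update_tags'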

Usability and User Interface

  • Graphical Visualization of Network Topology - Provides an interactive network topology diagram of Tier 0 Gateways, Tier 1 Gateways, Segments, and connected workloads (VMs, Containers), with the ability to export to PDF.
  • New Getting Started Wizards - A new getting started wizard is introduced for preparing clusters for VLAN Micro-Segmentation in three easy steps.
  • Quick Access to Actions and Alarms from Search Results - Enhanced search results page to include quick access to relevant actions and alarms. Added more search criteria across Networking, Security, Inventory, and System objects.
  • User Interface Preferences for NSX Policy versus Manager Modes - You can switch between NSX Policy mode and NSX Manager mode within the user interface, as well as control the default display. By default, new installations display the UI in NSX Policy mode, and the UI Mode switcher is hidden. Environments that contain objects created through NSX Manager mode (such as from NSX upgrades or cloud management platforms) by default display the UI Mode switcher in the top right-hand corner of the UI.
  • UI Design Improvements for System Appliances Overview - Improved UI design layout for displaying resource activity and operational status of NSX system appliances.

Licensing

  • New VMware NSX Data Center Licenses - Adds support for a new add-on license, VMware NSX Data Center Distributed Threat Prevention, introduced in April 2020, and continues to support the NSX Data Center licenses (Standard, Professional, Advanced, Enterprise Plus, Remote Office Branch Office) introduced in June 2018, as well as previous VMware NSX for vSphere license keys. Additionally, the license usage report captures micro-segmentation and federation usage in core, CPU, CCU, and VM metrics. Distributed Intrusion Detection usage is reported on a per-CPU basis. See VMware knowledge base article 52462 for more information about NSX licenses.
  • vShield Endpoint Management Support - NSX-T supports management of vShield Endpoint anti-virus offload capabilities. For more information, see VMware knowledge base article 2110078.
  • Change in default license & evaluation key distribution - The default license upon install is "NSX for vShield Endpoint", which enables use of NSX for deploying and managing vShield Endpoint for anti-virus offload capability only. Evaluation license keys can be requested through VMware sales or the VMware evaluation web site.
  • NSX Evaluation License Expiration - Upon expiration of the 60-day NSX Evaluation license, you can delete objects but are not allowed to create or edit them.

AAA and Platform Security

  • Native AD-based Authentication via LDAP - This feature adds support for NSX administrators and users to authenticate and on-board to the NSX-T platform via direct AD (Active Directory) integration with LDAP (Lightweight Directory Access Protocol). The majority of enterprise/business user credentials are stored in Microsoft-based AD. Direct AD configuration simplifies user on-boarding without the hassle of configuring additional identity systems where its use is not suitable or adds operational complexity. In addition, this feature allows consumption of NSX-T RBAC capabilities with powerful search options to identify relevant AD users/groups for role assignments. Both secure (LDAPS, startTLS) and regular LDAP configurations are supported. NSX-T customers now have the flexible option to either configure Workspace One Access (formerly VIDM) or native AD configuration, or in some cases, a combination of VIDM and direct AD configuration, as suitable, to meet their operational needs.
  • Integration with OpenLDAP - In addition to supporting native AD integration, NSX-T 3.0 offers the flexibility to authenticate and on-board users who are using OpenLDAP directory services.
  • AAA for NSX-T features in vSphere with Kubernetes - Users running containerized applications and Kubernetes features in vSphere 7.0 appliance can leverage and troubleshoot a limited number of NSX networking features without additional authentication via vSphere appliance.
  • "On-behalf of" API feature - Allows tracking of derived user actions, especially when using Principle Identity (PI) or Service Accounts to perform API calls, by indicating if the API action was invoked on-behalf of another NSX user. Any API call with the “X-NSX-EUSER: <username>” header will result in audit log with additional user activity information - “euser=<username>”. This feature is useful for in-depth user-accounting, i.e., maintaining a rich audit trail with contextual data on "who" did "what."
  • Session-Based Authentication for Remote Users - NSX-T 3.0 has enhanced authentication capabilities to allow both local and remote users to create cookie-based sessions to authenticate and persist API-based activities. The new enhancements simplify session creation for API users and facilitate efficient API operations and security compliance by avoiding repeated authentication. Both VIDM- and LDAP-based remote users are supported.
  • Separate Audit Log for API Write Calls - This feature supports the ability to retrieve audit log content that solely contains information on API write calls. It improves the readability of the audit trail by tracking "before" and "after" states.
  • Enable/Disable Cookie-based Authentication - NSX admins can now turn off cookie (session-based) based API authentication to improve the security posture of NSX-T platform operations. Cookie-based authentication is available by default and can be re-enabled after being turned off.
  • Enable/Disable Basic Authentication - NSX admins concerned about secure use of basic authentication can now disable (or re-enable) basic authentication for API and CLI use. By default, basic authentication support is available.
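
The following sketch ties together the session-based authentication and "On-behalf of" items above. <nsx-manager>, <password>, <session-id>, <token> and <username> are placeholders; the /api/session/create endpoint and header names are believed to match the documented flow but should be confirmed against the API guide.

    # 1. Sketch: create a session; the response returns a JSESSIONID cookie and an X-XSRF-TOKEN header.
    curl -k -i -X POST 'https://<nsx-manager>/api/session/create' \
      -d 'j_username=admin&j_password=<password>'

    # 2. Sketch: reuse the cookie and token on later calls; adding X-NSX-EUSER records that the call
    #    was made on behalf of another NSX user in the audit log.
    curl -k 'https://<nsx-manager>/policy/api/v1/infra/tier-1s' \
      -H 'Cookie: JSESSIONID=<session-id>' \
      -H 'X-XSRF-TOKEN: <token>' \
      -H 'X-NSX-EUSER: <username>'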

NSX Data Center for vSphere to NSX-T Data Center Migration

  • Migration from NSX Data Center for vSphere to NSX-T Data Center using vDS 7.0 - When you are using vSphere 7.0 and vDS 7.0, the Migration Coordinator will migrate hosts to an existing vDS (version 7.0) instead of migrating to an N-VDS. This minimizes the impact of the migration on the customer environment.
  • Migration Coordinator with Maintenance Mode - The NSX Migration Coordinator now supports maintenance mode for the final host migration step. This mode allows the migration of virtual machines from a host prior to converting the host from NSX for vSphere to NSX-T. By placing a host into maintenance mode, a virtual machine can be migrated using vMotion to minimize the impact to the data traffic to and from the virtual machine.

NSX Intelligence

Compatibility and System Requirements

For compatibility and system requirements information, see the NSX-T Data Center Installation Guide.

API Deprecations and General Behavior Changes

  • Inter-SR iBGP Peering behavior change - Starting with NSX-T 3.0, a new VRF dedicated to Inter-SR iBGP peering has been introduced. This VRF is named 'inter_sr_vrf', and routes advertised over the automatically built iBGP peering adjacency are now installed in this dedicated VRF.
  • Removal of Application Discovery - This feature has been deprecated and removed.
  • Change in "Advanced Networking and Security" UI upon Upgrading from NSX-T 2.4 and 2.5 - If you are upgrading from NSX-T Data Center 2.4 and 2.5, the menu options that were found under the "Advanced Networking & Security" UI tab are available in NSX-T Data Center 3.0 by clicking "Manager" mode.
  • NSX-T Data Center 2.4 and 2.5 have the following configurations:
    • Two default rules for Distributed Firewall: one for the Policy interface, and one for the Advanced Networking & Security interface.
    • Two settings to enable or disable Distributed Firewall: one for the Policy interface, and one for the Advanced Networking & Security interface.
  • In NSX-T Data Center 3.0, if there are any Policy configurations present, the default rule and the enable/disable Distributed Firewall setting are available only in Policy mode. If only Manager (previously Advanced Networking & Security) configurations are present, you can configure these settings from Manager mode. See "Overview of the NSX Manager" in the NSX-T Data Center Administration Guide for more information about the modes.
  • API Deprecation Policy - VMware now publishes its API deprecation policy in the NSX API Guide to help customers who automate with NSX understand which APIs are considered deprecated and when they will be removed from the product in the future.

API and CLI Resources

See code.vmware.com to use the NSX-T Data Center APIs or CLIs for automation.

The API documentation is available from the API Reference tab. The CLI documentation is available from the Documentation tab.

Available Languages

NSX-T Data Center has been localized into multiple languages: English, German, French, Japanese, Simplified Chinese, Korean, Traditional Chinese, and Spanish. Because NSX-T Data Center localization utilizes the browser language settings, ensure that your settings match the desired language.

Document Revision History

April 7, 2020. First edition.
April 20, 2020. Second edition. Added known issues 2550327, 2546509.
May 8, 2020. Third edition. Added known issue 2499819.
May 14, 2020. Fourth edition. Added known issue 2543239.
May 22, 2020. Fifth edition. Added known issue 2541923.
June 1, 2020. Sixth edition. Added known issues 2518183, 2543353, 2561740.
August 24, 2020. Seventh edition. Added known issue 2577452.
August 28, 2020. Eighth edition. Added known issues 2622672, 2630808, 2630813, 2630819.
March 15, 2021. Ninth edition. Added known issue 2730634.
September 17, 2021. Tenth edition. Added known issue 2761589.
October 11, 2021. Eleventh edition. Updated API Deprecations and General Behavior Changes section.
February 15, 2022. Twelfth edition. Removed processor names from What's New section.
May 18, 2022. Thirteenth edition. Added known issue 2959572.

Resolved Issues

  • Fixed Issue 2387578 - BFD session is not formed between edges of the same cluster over the management/TEP interfaces.

    Prior to this fix, BFD packets on the management/TEP interfaces were only sent using the single-hop BFD port irrespective of the maximum allowed hops. Now, both single-hop and multi-hop BFD ports are supported. When configuring the maximum allowed BFD hops in the edge-cluster profile as one, single-hop BFD is used. For any value greater than one, multi-hop BFD is used. The default value for maximum allowed hops is 255. Upgrading from previous releases will not result in a split-brain. During the upgrade, the single-hop BFD port will be used. After the upgrade, the port reflected by the configuration will be used.

  • Fixed Issue 2275388 - Loopback interface/connected interface routes could get redistributed before filters are added to deny the routes.

    Unnecessary route updates could cause sub-optimal routing on traffic for a few seconds. 

  • Fixed Issue 2275708 - Unable to import a certificate with its private key when the private key has a passphrase.

    The message returned is, "Invalid PEM data received for certificate. (Error code: 2002)". Unable to import a new certificate with private key.

  • Fixed Issue 2378970 - Cluster-level Enable/Disable setting for distributed firewall incorrectly shown as Disabled.

    Cluster-level Enable/Disable setting for IDFW on Simplified UI may show as Disabled even though it is Enabled on the management plane. After upgrading from 2.4.x to 2.5, this inaccuracy will persist until explicitly changed. 

  • Fixed Issue 2292096 - CLI command "get service router config route-maps" returns an empty output.

    CLI command "get service router config route-maps" returns an empty output even when route-maps are configured. This is a display issue only.

  • Fixed Issue 2416130 - No ARP proxy when Centralized Service Port (CSP) is connected to DR's downlink

    When a Centralized Service Port (CSP) is connected to the DR's downlink, there is no ARP proxy, which causes no traffic to pass.

  • Fixed Issue 2448006 - Querying a Firewall Section with inconsistencies in rule-mapping fails.

    Querying a Firewall Section with rule-mapping inconsistencies fails if you use the GetSectionWithRules API call. The UI is not impacted as it depends on GetSection and GetRules API calls.

  • Fixed Issue 2441985 - Host Live upgrade from NSX-T Data Center 2.5.0 to NSX-T Data Center 2.5.1 may fail in some cases.

    Host Live upgrade from NSX-T Data Center 2.5.0 to NSX-T Data Center 2.5.1 fails in some cases and you see the following error:
    Unexpected error while upgrading upgrade unit: Install of offline bundle failed on host 34206ca2-67e1-4ab0-99aa-488c3beac5cb with error : [LiveInstallationError] Error in running ['/etc/init.d/nsx-datapath', 'start', 'upgrade']: Return code: 1 Output: ioctl failed: No such file or directory start upgrade begin Exception: Traceback (most recent call last): File "/etc/init.d/nsx-datapath", line 1394, in CheckAllFiltersCleared() File "/etc/init.d/nsx-datapath", line 413, in CheckAllFiltersCleared if FilterIsCleared(): File "/etc/init.d/nsx-datapath", line 393, in FilterIsCleared output = os.popen(cmd).read() File "/build/mts/release/bora-13885523/bora/build/esx/release/vmvisor/sys-boot/lib64/python3.5/os.py", line 1037, in popen File "/build/mts/release/bora-13885523/bora/build/esx/release/vmvisor/sys-boot/lib64/python3.5/subprocess.py", line 676, in __init__ File "/build/mts/release/bora-13885523/bora/build/esx/release/vmvisor/sys-boot/lib64/python3.5/subprocess.py", line 1228, in _execute_child OSError: [Errno 28] No space left on device It is not safe to continue. Please reboot the host immediately to discard the unfinished update. Please refer to the log file for more details..

  • Fixed Issue 2477859 - In rare cases, NSX Manager upgrade may fail during the data migration task.

    When upgrading to NSX-T Data Center 2.5.1, in a very rare scenario where the deletion of a logical router in an earlier version did not process correctly, it is possible that NSX Manager upgrade may fail during the data migration task with this error: NullPointer exception.

  • Fixed Issue 2481033 - Updates to an ESXi Host Transport Node and Transport Node Profile attached to a host with powered on VMs fail with the error: "The host has powered on VMs which must be moved or powered off before transport node create/update/delete can continue".

    Updates to an ESXi host Transport Node (TN) will fail if it has VMK migration specified and there are any powered-on VMs on that ESXi host. Updates to a Transport Node Profile (TNP) attached to such TNs will fail regardless of the VMK migration setting on the TNP. This happens because powered-on VMs cause the migration validation to fail, preventing updates to the TN or TNP.

  • Fixed Issue 2483552 - After upgrading from 2.4.x to 2.5.x, "nsx-exporter" binary gets removed from the host

    After upgrading NSX-T Data Center from versions 2.4.x to versions 2.5.x, the binaries of nsx-exporter (/opt/vmware/nsx-exporter) and nsx-aggservice (/opt/vmware/nsx-aggservice) are removed, causing nsx-exporter to stop running.

Known Issues

The known issues are grouped as follows.

General Known Issues
  • Issue 2320529 - "Storage not accessible for service deployment" error thrown after adding third-party VMs for newly added datastores.

    "Storage not accessible for service deployment" error thrown after adding third-party VMs for newly added datastores even though the storage is accessible from all hosts in the cluster. This error state persists for up to thirty minutes.

    Retry after thirty minutes. As an alternative, make the following API call to update the cache entry of datastore:

    https://<nsx-manager>/api/v1/fabric/compute-collections/<CC Ext ID>/storage-resources?uniform_cluster_access=true&source=realtime

    where <nsx-manager> is the IP address of the NSX Manager where the service deployment API has failed, and <CC Ext ID> is the identifier in NSX of the cluster where the deployment is being attempted.

  • Issue 2328126 - Bare Metal issue: Linux OS bond interface when used in NSX uplink profile returns error.

    When you create a bond interface in the Linux OS and then use this interface in the NSX uplink profile, you see this error message: "Transport Node creation may fail." This issue occurs because VMware does not support Linux OS bonding. However, VMware does support Open vSwitch (OVS) bonding for Bare Metal Server Transport Nodes.

    Workaround: If you encounter this issue, see Knowledge Article 67835 Bare Metal Server supports OVS bonding for Transport Node configuration in NSX-T.

  • Issue 2390624 - Anti-affinity rule prevents service VM from vMotion when host is in maintenance mode.

    If a service VM is deployed in a cluster with exactly two hosts, the HA pair with anti-affinity rule will prevent the VMs from vMotioning to the other host during any maintenance mode tasks. This may prevent the host from entering Maintenance Mode automatically.

    Workaround: Power off the service VM on the host before the Maintenance Mode task is started on vCenter.

  • Issue 2389993 - Route map removed after redistribution rule is modified using the Policy page or API.

    If a route map was added using the management plane UI/API in a Redistribution Rule, it will be removed if you modify the same Redistribution Rule from the Simplified (Policy) UI/API.

    Workaround: You can restore the route map by returning to the management plane interface or API and re-adding it to the same rule. If you wish to include a route map in a redistribution rule, it is recommended that you always use the management plane interface or API to create and modify it.

  • Issue 2329273 - No connectivity between VLANs bridged to the same segment by the same edge node.

    Bridging a segment twice on the same edge node is not supported. However, it is possible to bridge two VLANs to the same segment on two different edge nodes.

    Workaround: None 

  • Issue 2355113 - Unable to install NSX Tools on RedHat and CentOS Workload VMs with accelerated networking enabled in Microsoft Azure.

    In Microsoft Azure when accelerated networking is enabled on RedHat (7.4 or later) or CentOS (7.4 or later) based OS and with NSX Agent installed, the ethernet interface does not obtain an IP address.

    Workaround: After booting up RedHat or CentOS based VM in Microsoft Azure, install the latest Linux Integration Services driver available at https://www.microsoft.com/en-us/download/details.aspx?id=55106 before installing NSX tools.

  • Issue 2370555 - User can delete certain objects in the Advanced interface, but deletions are not reflected in the Simplified interface.

    Specifically, groups added as part of a distributed firewall exclude list can be deleted in the Advanced interface Distributed Firewall Exclusion List settings. This leads to inconsistent behavior in the interface.

    Workaround: Use the following procedure to resolve this issue:

    1. Add an object to an exclusion list in the Simplified interface.
    2. Verify that it appears displayed in the Distributed Firewall exclusion list in the Advanced interface.
    3. Delete the object from the Distributed Firewall exclusion list in the Advanced interface.
    4. Return to the Simplified interface, add a second object to the exclusion list, and apply it.
    5. Verify that the new object appears in the Advanced interface.
  • Issue 2520803 - Encoding format for Manual Route Distinguisher and Route Target configuration in EVPN deployments.

    You can currently configure a manual route distinguisher in both Type-0 encoding and Type-1 encoding. However, using the Type-1 encoding scheme for configuring a Manual Route Distinguisher in EVPN deployments is highly recommended. Also, only Type-0 encoding is allowed for Manual Route Target configuration.

    Workaround: Configure only Type-1 encoding for Route Distinguisher.

  • Issue 2490064 - Attempting to disable VMware Identity Manager with "External LB" toggled on does not work.

    After enabling VMware Identity Manager integration on NSX with "External LB", if you attempt to then disable integration by switching "External LB" off, after about a minute, the initial configuration will reappear and overwrite local changes.

    Workaround: When attempting to disable vIDM, do not toggle the External LB flag off; only toggle off vIDM Integration. This will cause that config to be saved to the database and synced to the other nodes.

  • Issue 2516481 - One UA node stopped accepting any New API calls, with "server is overloaded" message.

    The UA node stops accepting any new API calls with a "server is overloaded" message. There are around 200 connections stuck in the CLOSE_WAIT state. These connections are not yet closed, and new API calls will be rejected.

    Workaround:

    Restart the proton service using the following command: 

    service proton restart

  • Issue 2529228 - After backup and restore, certificates in the system get into an inconsistent state and the certificates that the customer had set up at the time of backup are gone.

    Reverse proxy and APH start using different certificates than what they used in the backed up cluster.

    Workaround:

    1. Update Tomcat certs on all three new nodes and bring them back to the original state (same as backed-up cluster) using API POST /api/v1/node/services/http?action=apply_certificate&certificate_id=<cert-id>. The certificate ID corresponds to the ID of the tomcat certificate that was in use on the original setup (backed-up cluster).
    2. Apply cluster cert on primary node using API POST /api/v1/cluster/api-certificate?action=set_cluster_certificate&certificate_id=<cert-id> . The certificate ID corresponds to the ID of the cluster certificate that was in use on the original setup (backed-up cluster).
    3. Update APH certs on all three new nodes and bring them back to the original state (same as backed-up cluster) using API POST /api/v1/trust-management/certificates?action=set_appliance_proxy_certificate_for_inter_site_communication
    4. Inspect the output of GET /api/v1/trust-management/certificates (especially the used_by section) and release all certs tied to old node uuids (uuids of nodes in the cluster where backup was performed) using the following on the root command line: curl -i -k -X POST -H 'Content-type: application/json' -H 'X-NSX-Username:admin' -H 'X-NSX-Groups:superuser' -T data.json "http://127.0.0.1:7440/nsxapi/api/v1/trust-management/certificates/cert-id?action=release". This API is local-only and needs to be executed as root on the command line on any one node in the cluster. This step needs to be done for all certificates that are tied to old node uuids.
    5. Now delete all the unused certs and verify "/api/v1/trust-management/certificates" and cluster stability.
  • Issue 2535793 - The Central Node Config (CNC) disabled flag is not respected on a Manager node.

    NTP, syslog and SNMP config changes made locally on a Manager node will be overwritten when the user makes changes to CNC Profile (see System->Fabric->Profiles in UI) even if CNC is locally disabled (see CLI set node central-config disabled) on that Manager node. However, local NTP, syslog and SNMP config will persist as long as the CNC Profile is unchanged.

    Workaround:

    There are two options.

    • Option 1 is to use CNC without making local changes (i.e., the CLI command get node central-config shows Enabled).
    • Option 2 is to clear the CNC Profile and configure each Manager separately for NTP, syslog and SNMP settings. To transition from option 1 to option 2, use the following workaround.
    1. Delete all configuration from CNC Profile.
    2. After some time, verify that the configuration has been deleted from all Manager and Edge nodes (using Node API or CLI on each node). Also verify that SNMP configuration has been deleted from all KVM HV nodes.
    3. Configure NTP, syslog and SNMP on each Manager and Edge node individually (using NSX Node API or CLI).
    4. Configure VMware SNMP Agent on each KVM HV node individually using VMware SNMP Agent configuration command.
  • Issue 2537989 - Clearing VIP (Virtual IP) does not clear vIDM integration on all nodes.

    If VMware Identity Manager is configured on a cluster with a Virtual IP, disabling the Virtual IP does not result in the VMware Identity Manager integration being cleared throughout the cluster. You will have to manually fix vIDM integration on each individual node if the VIP is disabled.

    Workaround: Go to each node individually to manually fix the vIDM configuration on each.

  • Issue 2538956 - DHCP Profile shows a "NOT SET" message and the Apply button is disabled when configuring Gateway DHCP on a Segment.

    When attempting to configure Gateway DHCP on Segment when there is no DHCP configured on the connected Gateway, the DHCP Profile cannot be applied because there is no valid DHCP to be saved.

    Workaround: None.

  • Issue 2525205 - Management plane cluster operations fail under certain circumstances.

    When attempting to join Manager N2 to Manager N1 by issuing a "join" command on Manager N2, the join command fails. You are unable to form a Management plane cluster, which might impact availability.

    Workaround:

    1. To retain Manager N1 in the cluster, issue a "deactivate" CLI command on Manager N1. This will remove all other Managers from the cluster, keeping Manager N1 as the sole member of the cluster.
    2. Ensure that the non-configuration Corfu server is up and running on Manager N1 by issuing the "systemctl start corfu-nonconfig-server" command.
    3. Join other new Managers to the cluster by issuing "join" commands on them.
  • Issue 2526769 - Restore fails on multi-node cluster.

    When starting a restore on a multi-node cluster, restore fails and you will have to redeploy the appliance.

    Workaround: Deploy a new setup (one node cluster) and start the restore.

  • Issue 2538041 - Groups containing Manager Mode IP Sets can be created from Global Manager.

    Global Manager allows you to create Groups that contain IP Sets that were created in Manager Mode. The configuration is accepted but the groups do not get realized on Local Managers.

    Workaround: None.

  • Issue 2463947 - When preemptive mode HA is configured, and IPSec HA is enabled, upon double failover, packet drops over VPN are seen.

    Traffic over VPN will drop on peer side. IPSec Replay errors will increase.

    Workaround: Wait for the next IPSec Rekey. Or disable and enable that particular IPSec session.

  • Issue 2515006 - NSX-v to NSX-T migration rollback fails intermittently.

    During NSX-v to NSX-T migration, rollback fails and the following message displays: "Entity Edge Cluster<edge-cluster-id> can not be deleted as it is being referenced by entity(s): <logical-router-id>"

    Workaround: After failure, wait 10-15 minutes, and then retry the rollback. If still unsuccessful, delete the NSX-T appliances and edges, redeploy them and then restart the migration.

  • Issue 2523212 - The nsx-policy-manager becomes unresponsive and restarts.

    API calls to nsx-policy-manager will start failing, with service being unavailable. You will not be able to access policy manager until it restarts and is available.

    Workaround: Invoke API with at most 2000 objects.

  • Issue 2482672 - Isolated overlay segment stretched over L2VPN not able to reach default gateway on peer site.

    L2VPN tunnel is configured between site 1 and site 2 such that a T0/T1 overlay segment is stretched over L2VPN from site 1 and an isolated overlay segment is stretched over L2VPN from site 2. Also there is another T0/T1 overlay segment on site 2 in same transport zone and there is an instance of DR on the ESXi host where the workload VMs connected to isolated segment reside.

    When a VM on isolated segment (site 2) tries to reach the default gateway (DR downlink which is on site 1), unicast packets to default gateway will be received by site 2 ESXi host, and not forwarded to the remote site. L3 connectivity to default gateway on peer site fails.

    Workaround: Connect the isolated overlay segment on site 2 to an LR and give the same gateway IP address as that of site 1.

  • Issue 2521071 - For a Segment created in Global Manager, if it has a BridgeProfile configuration, then the Layer2 bridging configuration is not applied to individual NSX sites.

    The consolidated status of the Segment will remain at "ERROR". This is due to a failure to create the bridge endpoint at a given NSX site. You will not be able to successfully configure a BridgeProfile on Segments created via Global Manager.

    Workaround: Create a Segment at the NSX site and configure it with bridge profile.

  • Issue 2527671 - When the DHCP server is not configured, retrieving DHCP statistics/status on a Tier0/Tier1 gateway or segment displays an error message indicating realization is not successful.

    There is no functional impact. The error message is incorrect and should report that the DHCP server is not configured.

    Workaround: None.

  • Issue 2532127 - LDAP user can't log in to NSX if the user's Active Directory entry does not contain the UPN (userPrincipalName) attribute and contains only the samAccountName attribute.

    User authentication fails and the user is unable to log in to the NSX user interface.

    Workaround: None.

  • Issue 2533617 - While creating, updating or deleting services, the API call succeeds but the service entity update realization fails.

    While creating, updating, or deleting services (NatRule, LB VIP, etc.), the API call succeeds but the service entity update is not processed because of activity submission failure in the background. The service becomes inaccessible.

    Workaround: Manually run the ReProcessLogicalRouter API for the logical router on which the service entity exists, for which the realization failure occurred.

  • Issue 2540733 - Service Instance is not created after re-adding the same host in the cluster.

    Service Instance in NSX is not created after re-adding the same host in the cluster, even though the service VM is present on the host. The deployment status will be shown as successful, but protection on the given host will be down.

    Workaround: Delete the service VM from the host. This will create an issue which will be visible in the NSX user interface. On resolving the issue, a new SVM and corresponding service instance in NSX will be created.

  • Issue 2532796 - HNSEndpoint deletion failed under the latest Windows KB update.

    Deletion of an HNSEndpoint hangs if you update Windows to its latest KB update (up to March 2020). You cannot delete the Windows container instance. This may cause a conflict when creating a new container using the same HNSEndpoint name.

    Workaround: None.

  • Issue 2530822 - Registration of vCenter with NSX manager fails even though NSX-T extension is created on vCenter.

    While registering vCenter as compute manager in NSX, even though the "com.vmware.nsx.management.nsxt" extension is created on vCenter, the compute manager registration status remains "Not Registered" in NSX-T. Operations on vCenter, such as auto install of edge etc., cannot be performed using the vCenter Server compute manager.

    Workaround:

    1. Delete compute manager from NSX-T manager.
    2. Delete the "com.vmware.nsx.management.nsxt" extension from vCenter using the vCenter Managed Object Browser.
  • Issue 2482580 - IDFW/IDS configuration is not updated when an IDFW/IDS cluster is deleted from vCenter.

    When a cluster with IDFW/IDS enabled is deleted from vCenter, the NSX management plane is not notified of the necessary updates. This results in inaccurate count of IDFW/IDS enabled clusters. There is no functional impact. Only the count of the enabled clusters is wrong.

    Workaround: None.

  • Issue 2533365 - Moving a host from an IDFW enabled cluster to a new cluster (which was never enabled/disabled for IDFW before) will not disable IDFW on the moved host.

    If hosts are moved from an IDFW-enabled cluster to a cluster that was never enabled/disabled for IDFW before, IDFW remains enabled on the moved hosts. This will result in unintended IDFW rule application on the moved hosts.

    Workaround: Enable IDFW on the new cluster and then disable it. After this, moving hosts between these clusters or subsequently enabling/disabling IDFW on these clusters will work as expected.

  • Issue 2536877 - X-Forwarded-For (XFF) shows erroneous data (HTTPS Traffic) with Load Balancer rules configured in Transport Phase.

    If you configure HTTP profile with XFF (INSERT/REPLACE), with Load Balancer rule in Transport phase, you may see incorrect values for XFF headers.

    Workaround: Configure Load Balancer Rules under the "Request Rewrite phase" with a variable condition and matching (using built-in variables). This will take priority and replace the incorrect values of X-Forwarded-For and X-Forwarded-Port with the correct values.

  • Issue 2534855 - Route maps and redistribution rules of Tier-0 gateways created on the simplified UI or policy API will replace the route maps and redistribution rules created on the advanced UI (or MP API).

    During upgrades, any existing route maps and rules that were created on the simplified UI (or policy API) will replace the configurations that were done directly on the advanced UI (or MP API).

    Workaround: If redistribution rules/route maps created on the Advanced UI (MP UI) are lost after upgrade, you can recreate all rules either from the Advanced UI (MP) or the Simplified UI (Policy). Always use either Policy or MP for redistribution rules, not both at the same time, as redistribution is fully supported in NSX-T 3.0.

  • Issue 2535355 - Session timer may not take effect after upgrading to NSX-T 3.0 under certain circumstances.

    Session timer setting is not taking effect. The connection session (e.g., tcp established, tcp fin wait) will use its system default session timer instead of the custom session timer. This may cause the connection (tcp/udp/icmp) session to be established longer or shorter than expected.

    Workaround: None.

  • Issue 2534933 - Certificates that have LDAP based CDPs (CRL Distribution Point) fail to apply as tomcat/cluster certs.

    You can't use CA-signed certificates that have LDAP CDPs as cluster/tomcat certificate.

    Workaround: See VMware knowledge base article 78794.

  • Issue 2538557 - Spoofguard on ARP packets may not work when ARP Snooping is enabled in the IP Discovery profile.

    There is a possibility that the ARP cache entries of a guest VM could be incorrect even when spoofguard and ARP snooping are enabled in the IP Discovery profile. The spoofguard functionality will not work for ARP packets.

    Workaround: Disable ARP Snooping. Use VMtools or DHCP snooping options in ipdiscovery profile or manual bindings.

  • Issue 2550327 - The Draft feature is not supported on Global Manager, but Draft APIs are available.

    The Draft feature is disabled from the Global Manager UI. Draft publish may not work as expected and you may observe the inconsistency between Global Manager and Local Manager firewall configuration.

    Workaround: Manually revert back to the older firewall configuration.

  • Issue 2499819 - Maintenance-based NSX for vSphere to NSX-T Data Center host migration for vCenter 6.5 or 6.7 might fail due to vMotion error.

    This error message is shown on the host migration page:
    Pre-migrate stage failed during host migration [Reason: [Vmotion] Can not proceed with migration: Max attempt done to vmotion vm b'3-vm_Client_VM_Ubuntu_1404-shared-1410'].

    Workaround: Retry host migration.

  • Issue 2543239 - NAT traffic flow is not subjected to firewall processing for specific NAT rules after upgrading to NSX-T Data Center 3.0.0.

    This issue occurs because the firewall parameter "None" was deprecated in NSX-T Data Center 3.0.0. Any NAT rules configured with the Firewall parameter set to "None" in the user interface, and any NAT rules configured through the API without the "Firewall_Match" parameter, are not subject to firewall processing post-upgrade, even though the necessary firewall rules are configured at the gateway firewall.

    Workaround: See VMware knowledge base article 79010 for more information.
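
    For reference, firewall handling for a policy NAT rule is governed by its firewall_match setting. The sketch below is illustrative only; the path and value assume the NSX-T 3.0 policy NAT schema, and the knowledge base article above describes the supported procedure.

    # Illustrative only: set an explicit firewall_match on an existing NAT rule (schema assumed)
    PATCH https://<nsx-manager>/policy/api/v1/infra/tier-0s/<tier-0-id>/nat/USER/nat-rules/<rule-id>
    {
      "firewall_match": "MATCH_INTERNAL_ADDRESS"
    }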

  • Issue 2541923 - Creation of Tier-0 VRF on the Global Manager is not supported.

    You can configure VRF on a single-location tier-0 gateway from Global Manager, but this configuration is not supported. You will see an error if you configure VRF on a stretched tier-0 gateway from Global Manager.

    Workaround: None.

  • Issue 2518183 - For Manager UI screens, the Alarms column does not always show the latest alarm count.

    Recently generated alarms are not reflected on Manager entity screens.

    Workaround:

    1. Refresh the Manager entity screen to see the correct alarm count.
    2. Alternatively, view the missing alarm details on the alarm dashboard page.

  • Issue 2543353 - NSX T0 edge calculates an incorrect UDP checksum after ESP encapsulation for IPsec tunneled traffic.

    Traffic is dropped due to a bad checksum in the UDP packet.

    Workaround: None.

  • Issue 2561740 - PAS egress DFW rule not applied because effective members are not updated in the NSGroup.

    Due to a ConcurrentUpdateException, a LogicalPort creation was not processed, causing the corresponding NSGroup to not be updated.

    Workaround: None.

  • Issue 2572394 - Unable to take backup when using SFTP server, where "keyboard-interactive" authentication is enabled but "password" authentication is disabled.

    You cannot use SFTP servers where "keyboard-interactive" authentication is enabled but "password" authentication is disabled.

    Workaround: Use an Ubuntu-based server as the SFTP server, or enable "password" authentication on the SFTP server.

  • Issue 2577452: Replacing certificates on the Global Manager disconnects locations added to this Global Manager.

    When you replace a reverse proxy or appliance proxy hub (APH) certificate on the Global Manager, you lose connection with locations added to this Global Manager because REST API and NSX RPC connectivity is broken.

    Workaround:

    • If you use the following APIs to change certificates on a Local Manager or Global Manager, you must make further configuration changes to avoid this issue:
      • Change node API certificate: POST https://<nsx-manager>/api/v1/node/services/http?action=apply_certificate&certificate_id=<certificate-id>
      • Change cluster API certificate: POST https://<nsx-manager>/api/v1/cluster/api-certificate?action=set_cluster_certificate&certificate_id=<certificate-id>
      • Change appliance proxy certificate: POST https://<nsx-manager>/api/v1/trust-management/certificates?action=set_appliance_proxy_certificate_for_inter_site_communication
    • If you change the API certificate on a Local Manager node or cluster, you must add this location again from Global Manager.
    • If you change the API certificate on a Global Manager node or cluster, you must add all previously connected locations again from Global Manager.
    • If you change the appliance proxy certificate on a Local Manager, you must restart the appliance proxy service on all nodes in the cluster with "restart service applianceproxy" and add the location again from Global Manager.

    See Add a Location in the NSX-T Data Center Installation Guide. 
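
    For example, the cluster certificate API listed above can be invoked as follows; replace the placeholders with your NSX Manager address and the ID of the imported certificate.

    # Illustrative invocation of the cluster API certificate call listed above
    curl -k -u admin -X POST \
      "https://<nsx-manager>/api/v1/cluster/api-certificate?action=set_cluster_certificate&certificate_id=<certificate-id>"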

  • Issue 2730634: After a uniscale upgrade, the networking components page shows an "Index out of sync" error.

    After a uniscale upgrade, the networking components page shows an "Index out of sync" error.

    Workaround: Log in to NSX Manager with admin credentials and run the "start search resync policy" command. It will take a few minutes to load the networking components.

  • Issue 2761589: Default layer 3 rule configuration changes from DENY_ALL to ALLOW_ALL on Management Plane after upgrading from NSX-T 2.x to NSX-T 3.x.

    This issue occurs only when rules are not configured via Policy, and the default layer 3 rule on the Management Plane has the DROP action. After upgrade, the default layer 3 rule configuration changes from DENY_ALL to ALLOW_ALL on Management Plane.

    Workaround: After the upgrade, set the action of the default layer 3 rule to DROP from the Policy UI.
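
    If you prefer the API, the same change can be made on the default layer 3 rule directly. The identifiers below (default-layer3-section, default-layer3-rule) are the typical default IDs but are an assumption; confirm them in your environment before use.

    # Illustrative only: set the default layer 3 rule action to DROP (identifiers assumed)
    PATCH https://<nsx-manager>/policy/api/v1/infra/domains/default/security-policies/default-layer3-section/rules/default-layer3-rule
    {
      "action": "DROP"
    }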

  • Issue 2959572: Rebooting node after disabling BGP in default VRF causes loss of BGP connectivity.

    In a multi-VRF setup where BGP is configured in all VRFs, all BGP configuration is lost after a reboot. This can happen if BGP was disabled in the default VRF prior to the reboot.

    Workaround: Enable BGP in the default VRF. Wait until all connectivity is restored, then turn off BGP in the default VRF again.
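
    The same toggle is also available through the policy API on the Tier-0 gateway's locale services. The sketch below is illustrative only; it assumes the locale-services ID is "default" and should be adjusted to your deployment.

    # Illustrative only: re-enable BGP on the default VRF Tier-0 (locale-services ID assumed)
    PATCH https://<nsx-manager>/policy/api/v1/infra/tier-0s/<tier-0-id>/locale-services/default/bgp
    {
      "enabled": true
    }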

Installation Known Issues
  • Issue 2522909 - Service VM upgrade does not work after correcting the URL if the upgrade deployment failed with an invalid URL.

    The upgrade remains in a failed state with the wrong URL, blocking the upgrade.

    Workaround: Create a new deployment_spec to trigger the upgrade.

  • Issue 2530110 - Cluster status is degraded after upgrade to NSX-T Data Center 3.0.0 or a restart of a NSX Manager node.

    The MONITORING group is degraded because the Monitoring application on the node that was restarted stays DOWN. Restore can fail, and alarms from the Manager on which the Monitoring app is DOWN might not show up.

    Workaround: Restart the affected NSX-T Manager node on which the Monitoring app is DOWN.

Upgrade Known Issues
  • Issue 2546509 - ESXi 7.0 NSX kernel module is not installed after ESXi upgrade from vSphere 6.7 to vSphere 7.0.

    Transport Node status goes down after the ESXi upgrade from 6.7 to 7.0.

    Workaround: See VMware knowledge base article 78679.

  • Issue 2541232 - CORFU/config disk space may reach 100% upon upgrading to NSX-T 3.0.0.

    This is only encountered if upgrading from previous versions of NSX-T with the AppDiscovery feature enabled. The /config partition reaches 100% and thereafter the NSX Management Cluster will be unstable.

    Workaround: See VMware knowledge base article 78551 for more information.

  • Issue 2475963 - NSX-T VIBs fail to install due to insufficient space.

    NSX-T VIBs fail to install due to insufficient space in bootbank on ESXi host, returning a BootBankInstaller.pyc: ERROR. Some ESXi images provided by third-party vendors may include VIBs which are not in use and can be relatively large in size. This can result in insufficient space in bootbank/alt-bootbank when installing/upgrading any VIBs.

    Workaround: See VMware knowledge base article 74864, NSX-T VIBs fail to install due to insufficient space in bootbank on ESXi host.

  • Issue 2400379 - Context Profile page shows unsupported APP_ID error message.

    The Context Profile page shows the following error message: "This context profile uses an unsupported APP_ID - [<APP_ID>]. Please delete this context profile manually after making sure it is not being used in any rule." This is caused by the post-upgrade presence of six deprecated APP_IDs (AD_BKUP, SKIP, AD_NSP, SAP, SUNRPC, SVN) that no longer work on the data path.

    Workaround: After ensuring that they are no longer consumed, manually delete the six APP_ID context profiles.

  • Issue 2462079 - Some versions of ESXi hosts reboot during upgrade if there are stale DV filters present on the ESXi host.

    For hosts running ESXi 6.5-U2/U3 and/or 6.7-U1/U2, during maintenance mode upgrade to NSX-T 2.5.1, the host may reboot if stale DV filters are found to be present on the host after VMs are moved out.

    Workaround: Upgrade to ESXi 6.7 U3 or ESXi 6.5 P04 prior to upgrading to NSX-T Data Center 2.5.1 if you want to avoid rebooting the host during the NSX-T Data Center upgrade. See Knowledge Base article 76607 for details.

  • Issue 2515489 - After upgrading to NSX-T 3.0, the first certificate-based IPSec VPN session fails to come up and displays a "Configuration failed" error.

    Traffic loss can be seen on tunnels under one certificate-based IPSec VPN session.

    Workaround: Modify the local endpoint of the problematic IPSec VPN session by removing the CA certificate and adding it back. This results in session flap for all IPSec VPN sessions that consume the same local endpoint.

  • Issue 2536980 - PCG upgrade fails at the "reboot" step.

    PCG upgrade fails from the CSM upgrade UI. PCG CLI "get upgrade progress-status" shows the "reboot" task's state as SUCCESS. PCGs failed to upgrade to NSX-T 3.0 and are not operating.

    Workaround: Complete the failed PCG upgrade via PCG appliance CLIs in this order.

    1. start upgrade-bundle VMware-NSX-public-gateway-<target-version> step migrate_users
      For example: start upgrade-bundle VMware-NSX-public-gateway-3.0.0.0.0.34747521 step migrate_users
    2. start upgrade-bundle VMware-NSX-public-gateway-<target-version> step 41-postboot-exit_maintenance_mode
      For example: start upgrade-bundle VMware-NSX-public-gateway-3.0.0.0.0.34747521 step 41-postboot-exit_maintenance_mode
    3. start upgrade-bundle VMware-NSX-public-gateway-<target-version> step finish_upgrade
      For example: start upgrade-bundle VMware-NSX-public-gateway-3.0.0.0.0.34747521 step finish_upgrade

NSX Edge Known Issues
  • Issue 2283559 - https://<nsx-manager>/api/v1/routing-table and https://<nsx-manager>/api/v1/forwarding-table MP APIs return an error if the edge has 65k+ routes for RIB and 100k+ routes for FIB.

    If the edge has 65k+ routes for RIB and 100k+ routes for FIB, the request from MP to Edge takes more than 10 seconds and results in a timeout. These are read-only APIs and have an impact only if you need to download the 65k+ RIB routes or 100k+ FIB routes using the API or UI.

    Workaround: There are two options to fetch the RIB/FIB.

    • These APIs support filtering options based on network prefixes or route type. Use these options to download only the routes of interest.
    • Use the CLI if the entire RIB/FIB table is needed; the CLI has no timeout (see the sketch below).
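
    A typical edge node CLI sequence looks like the following; the VRF number varies by deployment, and the prompt changes once you enter the VRF context.

    edge1> get logical-routers          (note the VRF number of the router of interest)
    edge1> vrf <vrf-id>                 (enter the VRF context)
    get route                           (routing table / RIB)
    get forwarding                      (forwarding table / FIB)
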
  • Issue 2513231 - The default maximum number of ARP entries per logical router is 20K.

    The edge limits the total ARP/neighbor entries to 100K per edge node and 20K per logical router. Once these limits are reached, the edge cannot add more ARP/neighbor entries to the ARP cache table; ARP resolution fails and packets are dropped.

    Workaround: You can increase the per-logical-router ARP limit using the following CLI commands:

    edge1> set dataplane neighbor
      max-arp-logical-router          max-arp-logical-router
      max-arp-transport-node          max-arp-transport-node
      max-packet-held-transport-node  max-packet-held-transport-node

    edge1> set debug
    edge1> set dataplane neighbor max-arp-logical-router 30000
    maximum number of arp per logical router: 30000

    edge1> get dataplane neighbor info
    arp cache timeout(s)                    : 1200
    maximum number of arp per node          : 100000
    number of arp entries per node          : 1
    maximum number of mbuf held per node    : 1000
    number of mbuf held per node            : 0
    maximum number of arp per logical router: 30000

    Note: The configured maximum is not persistent. Re-issue the same command after datapathd is restarted or the edge node is rebooted.

  • Issue 2521230 - BFD status displayed under ‘get bgp neighbor summary’ may not reflect the latest BFD session status correctly.

    BGP and BFD set up their sessions independently. As part of ‘get bgp neighbor summary’, BGP also displays the BFD state. If BGP is down, it does not process BFD notifications and continues to show the last known state, which can result in a stale BFD state being displayed.

    Workaround: Rely on the output of ‘get bfd-sessions’ and check the ‘State’ field to get the most up-to-date BFD status.

  • Issue 2532755 - Inconsistencies between CLI output and policy output for routing-table.

    The routing table downloaded from the UI contains an extra route compared to the CLI output: an additional default route is listed in the output downloaded from Policy. There is no functional impact.

    Workaround: None.

NSX Cloud Known Issues
  • Issue 2289150 - PCM calls to AWS start to fail.

    If a user updates the PCG role for an AWS account on CSM from old-pcg-role to new-pcg-role, CSM updates the role for the PCG instance on AWS to new-pcg-role. However, the PCM does not know that the PCG role has been updated and as a result continues to use the old AWS clients it had created using old-pcg-role. This causes the AWS cloud inventory scan and other AWS cloud calls to fail.

    Workaround: If you encounter this issue, do not modify or delete the old PCG role for at least 6.5 hours after changing to the new role. Alternatively, restarting the PCG re-initializes all AWS clients with the new role credentials.

Security Known Issues
  • Issue 2491800 - AR channel SSL certificates are not periodically checked for their validity, which could lead to using an expired/revoked certificate for an existing connection.

    The connection would be using an expired/revoked SSL certificate.

    Workaround: Restart the APH on the Manager node to trigger a reconnection.

Federation Known Issues
  • Issue 2533116 - Taking a backup of a particular site in Federation A and restoring it on another site in Federation B will incorrectly add site details of Federation A onto Federation B.

    After upgrading the Global Manager, the Backup UI may show a blank page.

    Workaround: None.

  • Issue 2532343 - In a Federation deployment, if the RTEP MTU size is smaller than the VTEP MTU size, IP fragmentation occurs, causing physical router to drop IP fragments and cross-site traffic to stop.

    When the RTEP MTU size (1500) is smaller than the VTEP MTU size (1600), the tracepath tool fails to complete, and large pings (for example, ping -s 2000) also fail. A smaller RTEP MTU cannot be used.

    Workaround: Use the same MTU on RTEP and VTEP.

  • Issue 2535234 - VM tags are reset during cross vCenter vMotion.

    vMotion from site 1 to site 2 in a Federation setup and back to site 1 within 30 minutes will lead to tags resetting to what was applied on site 1. If you are using a VM tag-based global policy, the policy will not be applied.

    Workaround: Re-apply tags on site 1.

  • Issue 2630813 - SRM recovery or Cold vMotion for compute VMs will lose all the NSX tags applied to VM and Segment ports.

    If an SRM recovery test or run is initiated, the replicated compute VMs in the disaster recovery location will not have any NSX tags applied. Similarly, if VMs are cold vMotioned to another location managed by another LM, the VMs will not have any NSX tags applied in the new location.

  • Issue 2630819: Enabling the LM external VIP should not be done before the LM is registered on the GM.

    When Federation and PKS need to be used on the same LM, the PKS tasks to create the external VIP and change the LM certificate should be done before registering the LM on the GM. If done in the reverse order, communication between the LM and GM will not be possible after the LM certificates are changed, and the global configuration for this LM will be lost.

  • Issue 2622672: Bare Metal Edge nodes cannot be used in Federation.

    Bare Metal Edge nodes cannot be configured for inter-location communications (RTEP).

  • Issue 2630808 - Upgrade from 3.0.0/3.0.1 to any higher release is disruptive.

    As soon as the Global Manager or a Local Manager is upgraded from 3.0.0/3.0.1 to a higher release, communication between the GM and LM is not possible.

    Workaround: To restore communication between the LMs and GM, all LMs and the GM need to be upgraded to a higher release.
