VMware NSX Data Center for vSphere 6.4.11 | Released August 26, 2021 | Build 18524545
See the Revision History of this document.
What's in the Release Notes
The release notes cover the following topics:
- What's New
- Versions, System Requirements, and Installation
- Deprecated and Discontinued Functionality
- Upgrade Notes
- FIPS Compliance
- Revision History
- Resolved Issues
- Known Issues
What's New in NSX Data Center for vSphere 6.4.11
VMware NSX for vSphere 6.4.11 addresses a number of specific customer bugs. See Resolved Issues for more information.
- Improved platform security - NSX Controller accepts TLS connections using only the following four ciphers (a quick handshake check is sketched below the list):
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
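If an integration needs to verify connectivity before upgrading, each allowed suite can be exercised with openssl. This is a minimal sketch, not from this document: the controller address and TLS port are placeholders, and the cipher names are the OpenSSL equivalents of the IANA names listed above.
# Hypothetical controller endpoint; substitute your controller IP and TLS port.
CONTROLLER=192.0.2.10:443
# OpenSSL names for the four allowed suites (IANA names listed above).
for c in ECDHE-ECDSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES256-GCM-SHA384 \
         ECDHE-RSA-AES128-GCM-SHA256 ECDHE-RSA-AES256-GCM-SHA384; do
  # A successful handshake prints the negotiated cipher; no output means the suite was rejected.
  echo | openssl s_client -connect "$CONTROLLER" -cipher "$c" 2>/dev/null | grep "Cipher is"
done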
Important: Customers planning to upgrade from NSX for vSphere 6.4.2, 6.4.3, 6.4.4, or 6.4.5 using a single-step upgrade should follow the workaround in VMware knowledge base article 82662 before upgrading to the 6.4.10 or 6.4.11 bundle. Failing to implement the workaround steps will result in the failure described in the KB article.
Note: NSX for vSphere 6.4.6 and later support direct upgrade to 6.4.10 and 6.4.11.
Note: The End of General Support (EoGS) for NSX Data Center for vSphere is January 16, 2022, and technical guidance ends on January 16, 2023. We recommend that you consider migrating to NSX-T Data Center.
Versions, System Requirements, and Installation
Note:
- The table below lists recommended versions of VMware software. These recommendations are general and should not replace or override environment-specific recommendations. This information is current as of the publication date of this document.
- For the minimum supported version of NSX and other VMware products, see the VMware Product Interoperability Matrix. VMware declares minimum supported versions based on internal testing.
Product or Component | Version
NSX Data Center for vSphere | VMware recommends the latest NSX release for new deployments. When upgrading existing deployments, review the NSX Data Center for vSphere Release Notes or contact your VMware technical support representative for information on specific issues before planning an upgrade.
vSphere | For vSphere 6.5:
For vSphere 6.7:
Note: vSphere 5.5 is not supported with NSX 6.4.
Note: vSphere 6.0 has reached End of General Support and is not supported with NSX 6.4.7 onwards.
Guest Introspection for Windows | It is recommended that you upgrade VMware Tools to 10.3.10 before upgrading NSX for vSphere.
Guest Introspection for Linux | Ensure that the guest virtual machine has a supported version of Linux installed. See the VMware NSX Administration Guide for the latest list of supported Linux versions.
System Requirements and Installation
For the complete list of NSX installation prerequisites, see the System Requirements for NSX section in the NSX Installation Guide.
For installation instructions, see the NSX Installation Guide or the NSX Cross-vCenter Installation Guide.
Deprecated and Discontinued Functionality
End of Life and End of Support Warnings
For information about NSX and other VMware products that must be upgraded soon, please consult the VMware Lifecycle Product Matrix.
- NSX for vSphere 6.1.x reached End of Availability (EOA) and End of General Support (EOGS) on January 15, 2017. (See also VMware knowledge base article 2144769.)
- vCNS Edges are no longer supported. You must upgrade them to NSX Edges before upgrading to NSX 6.3 or later.
- NSX for vSphere 6.2.x reached End of General Support (EOGS) on August 20, 2018.
- Based on security recommendations, 3DES is no longer supported as an encryption algorithm in the NSX Edge IPsec VPN service.
It is recommended that you switch to one of the secure ciphers available in the IPsec service. This change applies to both IKE SA (phase 1) and IPsec SA (phase 2) negotiation for an IPsec site.
If the 3DES encryption algorithm is in use by the NSX Edge IPsec service at the time of upgrade to the release that removes its support, it is replaced by another recommended cipher. The IPsec sites that were using 3DES will therefore not come up unless the configuration on the remote peer is modified to match the encryption algorithm used in NSX Edge.
If you are using 3DES, modify the encryption algorithm in the IPsec site configuration to replace 3DES with one of the supported AES variants (AES / AES256 / AES-GCM). For example, for each IPsec site configuration with the encryption algorithm as 3DES, replace it with AES. Accordingly, update the IPsec configuration at the peer endpoint. A hedged API sketch follows.
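One possible way to make this change through the NSX API, sketched under assumptions: the edge ID is a placeholder, and the <encryptionAlgorithm> element name follows the IPsec schema in the NSX API Guide and should be verified against your version before use.
# Fetch the current IPsec configuration for the edge (edge-1 is a placeholder).
curl -k -u admin:password \
  https://nsxmanager/api/4.0/edges/edge-1/ipsec/config -o ipsec.xml
# Replace 3DES with a supported AES variant in each site's encryption algorithm.
# (Element name assumed per the NSX API Guide's IPsec schema; verify for your version.)
sed -i 's#<encryptionAlgorithm>3des</encryptionAlgorithm>#<encryptionAlgorithm>aes256</encryptionAlgorithm>#g' ipsec.xml
# Push the updated configuration back, then update the remote peer to match.
curl -k -u admin:password -X PUT -H "Content-Type: application/xml" \
  -d @ipsec.xml https://nsxmanager/api/4.0/edges/edge-1/ipsec/config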
General Behavior Changes
If you have more than one vSphere Distributed Switch, and if VXLAN is configured on one of them, you must connect any Distributed Logical Router interfaces to port groups on that vSphere Distributed Switch. Starting in NSX 6.4.1, this configuration is enforced in the UI and API. In earlier releases, you were not prevented from creating an invalid configuration. If you upgrade to NSX 6.4.1 or later and have incorrectly connected DLR interfaces, you will need to take action to resolve this. See the Upgrade Notes for details.
User Interface Removals and Changes
- In NSX 6.4.1, Service Composer Canvas is removed.
- In NSX 6.4.7, the following functionality is deprecated in vSphere Client 7.0:
- NSX Edge: SSL VPN-Plus (see KB 79929 for more information)
- Tools: Endpoint Monitoring (all functionality)
- Tools: Flow Monitoring (Flow Monitoring Dashboard, Details by Service, and Configuration)
- System Events: NSX Ticket Logger
Installation Behavior Changes
Starting with version 6.4.2, when you install NSX Data Center for vSphere on hosts that have physical NICs with ixgbe drivers, Receive Side Scaling (RSS) is not enabled on the ixgbe drivers by default. You must enable RSS manually on the hosts before installing NSX Data Center. Make sure that you enable RSS only on the hosts that have NICs with ixgbe drivers. For detailed steps about enabling RSS, see the VMware knowledge base article https://kb.vmware.com/s/article/2034676. This knowledge base article describes recommended RSS settings for improved VXLAN packet throughput.
This new behavior applies only when you are doing a fresh installation of kernel modules (VIB files) on the ESXi hosts. No changes are required when you are upgrading NSX-managed hosts to 6.4.2.
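As a rough illustration of enabling RSS on a host with ixgbe NICs before installation (the parameter value below is illustrative only; take the recommended settings for your NIC layout from knowledge base article 2034676):
# Set the RSS module parameter on the ixgbe driver (value is illustrative; see KB 2034676).
esxcli system module parameters set -m ixgbe -p "RSS=4"
# Confirm the parameter is recorded; a host reboot is required for the driver to pick it up.
esxcli system module parameters list -m ixgbe | grep RSS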
API Removals and Behavior Changes
Deprecations in NSX 6.4.2
The following item is deprecated, and might be removed in a future release: GET/POST/DELETE /api/2.0/vdn/controller/{controllerId}/syslog. Use GET/PUT /api/2.0/vdn/controller/cluster/syslog instead.
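For example, automation that still manages syslog per controller would move to the cluster-level endpoint. The calls below are a sketch; the request payload is not reproduced here, so take it from a GET against the new endpoint rather than constructing it by hand.
# Old (deprecated): per-controller syslog. Avoid in new automation.
# curl -k -u admin:password https://nsxmanager/api/2.0/vdn/controller/controller-1/syslog
# New: read the cluster-wide syslog configuration...
curl -k -u admin:password \
  https://nsxmanager/api/2.0/vdn/controller/cluster/syslog -o syslog.xml
# ...edit syslog.xml as needed, then write it back cluster-wide.
curl -k -u admin:password -X PUT -H "Content-Type: application/xml" \
  -d @syslog.xml https://nsxmanager/api/2.0/vdn/controller/cluster/syslog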
Behavior Changes in NSX 6.4.1
When you create a new IP pool with POST /api/2.0/services/ipam/pools/scope/globalroot-0, or modify an existing IP pool with PUT /api/2.0/services/ipam/pools/, and the pool has multiple IP ranges defined, validation is done to ensure that the ranges do not overlap. This validation was not previously done.
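For illustration, a pool whose two ranges overlap is now rejected. This is a sketch: the addresses are placeholders, and the element names follow the IPAM pool schema in the NSX API Guide, which should be verified against your version.
# Creating a pool whose two ranges overlap (10.0.0.40-50 falls in both) is now rejected.
curl -k -u admin:password -X POST -H "Content-Type: application/xml" \
  https://nsxmanager/api/2.0/services/ipam/pools/scope/globalroot-0 -d '
<ipamAddressPool>
  <name>example-pool</name>
  <prefixLength>24</prefixLength>
  <gateway>10.0.0.1</gateway>
  <ipRanges>
    <ipRangeDto><startAddress>10.0.0.10</startAddress><endAddress>10.0.0.50</endAddress></ipRangeDto>
    <ipRangeDto><startAddress>10.0.0.40</startAddress><endAddress>10.0.0.90</endAddress></ipRangeDto>
  </ipRanges>
</ipamAddressPool>'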
Deprecations in NSX 6.4.0
The following items are deprecated, and might be removed in a future release.
- The systemStatus parameter in GET /api/4.0/edges/edgeID/status is deprecated.
- GET /api/2.0/services/policy/serviceprovider/firewall/ is deprecated. Use GET /api/2.0/services/policy/serviceprovider/firewall/info instead.
- Setting tcpStrict in the global configuration section of Distributed Firewall is deprecated. Starting in NSX 6.4.0, tcpStrict is defined at the section level (a hedged example follows this list). Note: If you upgrade to NSX 6.4.0 or later, the global configuration setting for tcpStrict is used to configure tcpStrict in each existing layer 3 section. tcpStrict is set to false in layer 2 sections and layer 3 redirect sections. See "Working with Distributed Firewall Configuration" in the NSX API Guide for more information.
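A sketch of the section-level setting, under the assumption that tcpStrict is expressed as an attribute on the section element as described in "Working with Distributed Firewall Configuration" in the NSX API Guide; confirm the exact payload shape there.
# Update a layer 3 section with tcpStrict set per section (section ID and attribute placement assumed).
# Note: DFW section updates also require the section's current generation number in an If-Match header.
curl -k -u admin:password -X PUT -H "Content-Type: application/xml" \
  -H "If-Match: <current-generation>" \
  "https://nsxmanager/api/4.0/firewall/globalroot-0/config/layer3sections/1001" -d '
<section id="1001" name="example-section" tcpStrict="true">
  <!-- rules for this section -->
</section>'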
Behavior Changes in NSX 6.4.0
In NSX 6.4.0, the <name> parameter is required when you create a controller with POST /api/2.0/vdn/controller.
NSX 6.4.0 introduces these changes in error handling:
- Previously, POST /api/2.0/vdn/controller responded with 201 Created to indicate that the controller creation job was created. However, the creation of the controller might still fail. Starting in NSX 6.4.0, the response is 202 Accepted (see the sketch after this list).
- Previously, if you sent an API request which is not allowed in transit or standalone mode, the response status was 400 Bad Request. Starting in 6.4.0, the response status is 403 Forbidden.
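Automation that checked for 201 should treat 202 as "accepted, still in progress" rather than "created". A minimal sketch, assuming a placeholder controller-spec payload; the job-status endpoint for the returned job ID is documented in the NSX API Guide and is not named here.
# Capture the HTTP status separately from the body (controller-spec.xml is a placeholder payload).
status=$(curl -k -u admin:password -s -o response.txt -w "%{http_code}" \
  -X POST -H "Content-Type: application/xml" \
  -d @controller-spec.xml https://nsxmanager/api/2.0/vdn/controller)
if [ "$status" = "202" ]; then
  # 202 Accepted: the creation job was queued; poll the job ID saved in response.txt
  # (see the NSX API Guide for the job-status endpoint) before assuming success.
  echo "Controller creation accepted; poll the job ID in response.txt"
else
  echo "Unexpected status: $status" >&2
fi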
CLI Removals and Behavior Changes
Do not use unsupported commands on NSX Controller nodes
There are undocumented commands to configure NTP and DNS on NSX Controller nodes. These commands are not supported, and should not be used on NSX Controller nodes. You should only use commands which are documented in the NSX CLI Guide.
Upgrade Notes
Note: For a list of known issues affecting installation and upgrades, see Installation and Upgrade Known Issues.
General Upgrade Notes
- After upgrading vCSA or ESXi to 7.0 or later, VMs on the host experience East/West and North/South connectivity issues. See VMware knowledge base article 85070 for details.
- Third-party integrations that call NSX Controller APIs using weaker ciphers may stop working after upgrading to NSX Data Center for vSphere 6.4.11.
- To upgrade NSX, you must perform a full NSX upgrade including host cluster upgrade (which upgrades the host VIBs). For instructions, see the NSX Upgrade Guide, including the "Upgrade Host Clusters" section.
- Upgrading NSX VIBs on host clusters using VUM is not supported. Use Upgrade Coordinator, Host Preparation, or the associated REST APIs to upgrade NSX VIBs on host clusters.
- System Requirements: For information on system requirements while installing and upgrading NSX, see the System Requirements for NSX section in the NSX documentation.
- Upgrade path for NSX: The VMware Product Interoperability Matrix provides details about the upgrade paths from VMware NSX.
- Cross-vCenter NSX upgrade is covered in the NSX Upgrade Guide.
- Downgrades are not supported:
  - Always capture a backup of NSX Manager before proceeding with an upgrade.
  - Once NSX has been upgraded successfully, NSX cannot be downgraded.
- To validate that your upgrade to NSX 6.4.x was successful, see knowledge base article 2134525.
- There is no support for upgrades from vCloud Networking and Security to NSX 6.4.x. You must upgrade to a supported 6.2.x release first.
- Interoperability: Check the VMware Product Interoperability Matrix for all relevant VMware products before upgrading.
- Upgrading to NSX Data Center for vSphere 6.4.7: VIO is not compatible with NSX 6.4.7 due to multiple scale issues.
- Upgrading to NSX Data Center for vSphere 6.4: NSX 6.4 is not compatible with vSphere 5.5.
- Upgrading to NSX Data Center for vSphere 6.4.5: If NSX is deployed with VMware Integrated OpenStack (VIO), upgrade VIO to 4.1.2.2 or 5.1.0.1, as 6.4.5 is incompatible with previous releases due to spring package update to version 5.0.
- Upgrading to vSphere 6.5: When upgrading to vSphere 6.5a or later 6.5 versions, you must first upgrade to NSX 6.3.0 or later. NSX 6.2.x is not compatible with vSphere 6.5. See "Upgrading vSphere in an NSX Environment" in the NSX Upgrade Guide.
- Upgrading to vSphere 6.7: When upgrading to vSphere 6.7 you must first upgrade to NSX 6.4.1 or later. Earlier versions of NSX are not compatible with vSphere 6.7. See "Upgrading vSphere in an NSX Environment" in the NSX Upgrade Guide.
- Partner services compatibility: If your site uses VMware partner services for Guest Introspection or Network Introspection, you must review the VMware Compatibility Guide before you upgrade, to verify that your vendor's service is compatible with this release of NSX.
- Networking and Security plug-in: After upgrading NSX Manager, you must log out and log back in to the vSphere Web Client. If the NSX plug-in does not display correctly, clear your browser cache and history. If the Networking and Security plug-in does not appear in the vSphere Web Client, reset the vSphere Web Client server as explained in the NSX Upgrade Guide.
- Stateless environments: In NSX upgrades in a stateless host environment, the new VIBs are pre-added to the Host Image profile during the NSX upgrade process. As a result, the upgrade process for NSX on stateless hosts follows this sequence:
Prior to NSX 6.2.0, there was a single URL on NSX Manager from which VIBs for a certain version of the ESXi host could be found. (Meaning the administrator only needed to know a single URL, regardless of NSX version.) In NSX 6.2.0 and later, the new NSX VIBs are available at different URLs. To find the correct VIBs, you must perform the following steps (a fetch sketch follows the steps):
- Find the new VIB URL from https://<nsxmanager>/bin/vdn/nwfabric.properties.
- Fetch VIBs of required ESX host version from corresponding URL.
- Add them to host image profile.
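For example, the properties file can be pulled directly from NSX Manager; inspect the output for the entry matching your ESXi version (the manager address is a placeholder, and the file layout is not reproduced here):
# Read the VIB location map from NSX Manager.
curl -k https://nsxmanager/bin/vdn/nwfabric.properties
# Then download the VIB bundle for your ESXi version from the URL listed there, e.g.:
# curl -k -o vxlan.zip <VIB-URL-for-your-ESXi-version>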
- Service Definitions functionality is not supported in the NSX 6.4.7 UI with vSphere Client 7.0:
For example, if you have an old Trend Micro Service Definition registered with vSphere 6.5 or 6.7, follow one of these two options:
  - Option #1: Before upgrading to vSphere 7.0, navigate to the Service Definition tab in the vSphere Web Client, edit the Service Definition to 7.0, and then upgrade to vSphere 7.0.
  - Option #2: After upgrading to vSphere 7.0, run the following NSX API to add or edit the Service Definition to 7.0:
POST https://<nsxmanager>/api/2.0/si/service/<service-id>/servicedeploymentspec/versioneddeploymentspec
Upgrade Notes for NSX Components
Support for VM Hardware version 11 for NSX components
- For new installs of NSX Data Center for vSphere 6.4.2, the NSX components (Manager, Controller, Edge, Guest Introspection) are on VM Hardware version 11.
- For upgrades to NSX Data Center for vSphere 6.4.2, the NSX Edge and Guest Introspection components are automatically upgraded to VM Hardware version 11. The NSX Manager and NSX Controller components remain on VM Hardware version 8 following an upgrade. Users have the option to upgrade the VM Hardware to version 11. Consult KB (https://kb.vmware.com/s/article/1010675) for instructions on upgrading VM Hardware versions.
- For new installs of NSX 6.3.x, 6.4.0, 6.4.1, the NSX components (Manager, Controller, Edge, Guest Introspection) are on VM Hardware version 8.
NSX Manager Upgrade
- Important: If you are upgrading NSX 6.2.0, 6.2.1, or 6.2.2 to NSX 6.3.5 or later, you must complete a workaround before starting the upgrade. See VMware Knowledge Base article 000051624 for details.
- If you are upgrading from NSX 6.3.3 to NSX 6.3.4 or later, you must first follow the workaround instructions in VMware Knowledge Base article 2151719.
- If you use SFTP for NSX backups, change to hmac-sha2-256 after upgrading to 6.3.0 or later because there is no support for hmac-sha1. See VMware Knowledge Base article 2149282 for a list of supported security algorithms.
- When you upgrade NSX Manager to NSX 6.4.1, a backup is automatically taken and saved locally as part of the upgrade process. See Upgrade NSX Manager for more information.
- When you upgrade to NSX 6.4.0, the TLS settings are preserved. If you have only TLS 1.0 enabled, you will be able to view the NSX plug-in in the vSphere Web Client, but NSX Managers are not visible. There is no impact to the datapath, but you cannot change any NSX Manager configuration. Log in to the NSX appliance management web UI at https://nsx-mgr-ip/ and enable TLS 1.1 and TLS 1.2. This reboots the NSX Manager appliance.
Controller Upgrade
- The NSX Controller cluster must contain three controller nodes. If it has fewer than three controllers, you must add controllers before starting the upgrade. See Deploy NSX Controller Cluster for instructions.
- In NSX 6.3.3, the underlying operating system of the NSX Controller changes. This means that when you upgrade from NSX 6.3.2 or earlier to NSX 6.3.3 or later, instead of an in-place software upgrade, the existing controllers are deleted one at a time, and new Photon OS based controllers are deployed using the same IP addresses.
When the controllers are deleted, this also deletes any associated DRS anti-affinity rules. You must create new anti-affinity rules in vCenter to prevent the new controller VMs from residing on the same host.
See Upgrade the NSX Controller Cluster for more information on controller upgrades.
Host Cluster Upgrade
- If you upgrade from NSX 6.3.2 or earlier to NSX 6.3.3 or later, the NSX VIB names change. The esx-vxlan and esx-vsip VIBs are replaced with esx-nsxv if you have NSX 6.3.3 or later installed on ESXi 6.0 or later.
- Rebootless upgrade and uninstall on hosts: On vSphere 6.0 and later, once you have upgraded from NSX 6.2.x to NSX 6.3.x or later, any subsequent NSX VIB changes will not require a reboot. Instead, hosts must enter maintenance mode to complete the VIB change. This affects both NSX host cluster upgrade and ESXi upgrade. See the NSX Upgrade Guide for more information.
NSX Edge Upgrade
- Validation added in NSX 6.4.1 to disallow invalid distributed logical router configurations: In environments where VXLAN is configured and more than one vSphere Distributed Switch is present, distributed logical router interfaces must be connected to the VXLAN-configured vSphere Distributed Switch only. Upgrading a DLR to NSX 6.4.1 or later fails in those environments if the DLR has interfaces connected to a vSphere Distributed Switch that is not configured for VXLAN. Use the API to connect any incorrectly configured interfaces to port groups on the VXLAN-configured vSphere Distributed Switch. Once the configuration is valid, retry the upgrade. You can change the interface configuration using PUT /api/4.0/edges/{edgeId} or PUT /api/4.0/edges/{edgeId}/interfaces/{index}. See the NSX API Guide for more information; a hedged sketch follows.
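A sketch of reconnecting a single misconfigured interface. The edge ID, interface index, and port group ID are placeholders, and the interface payload shape (including the connectedToId field) should be verified against the NSX API Guide for your version.
# Read the current interface configuration (edge-5 and index 2 are placeholders).
curl -k -u admin:password \
  https://nsxmanager/api/4.0/edges/edge-5/interfaces/2 -o interface.xml
# Edit interface.xml so <connectedToId> points at a port group on the
# VXLAN-configured VDS (dvportgroup-101 is a placeholder), then push it back.
curl -k -u admin:password -X PUT -H "Content-Type: application/xml" \
  -d @interface.xml https://nsxmanager/api/4.0/edges/edge-5/interfaces/2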
- Delete the UDLR Control VM from the vCenter Server associated with the secondary NSX Manager before upgrading the UDLR from 6.2.7 to 6.4.5:
In a multi-vCenter environment, when you upgrade NSX UDLRs from 6.2.7 to 6.4.5, the upgrade of the UDLR virtual appliance (UDLR Control VM) fails on the secondary NSX Manager if HA is enabled on the UDLR Control VM. During the upgrade, the VM with HA index #0 in the HA pair is removed from the NSX database, but this VM continues to exist on the vCenter Server. Therefore, when the UDLR Control VM is upgraded on the secondary NSX Manager, the upgrade fails because the name of the VM clashes with an existing VM on the vCenter Server. To resolve this issue, delete the Control VM from the vCenter Server that is associated with the UDLR on the secondary NSX Manager, and then upgrade the UDLR from 6.2.7 to 6.4.5.
- Host clusters must be prepared for NSX before upgrading NSX Edge appliances: Management-plane communication between NSX Manager and Edge via the VIX channel is no longer supported starting in 6.3.0. Only the message bus channel is supported. When you upgrade from NSX 6.2.x or earlier to NSX 6.3.0 or later, you must verify that host clusters where NSX Edge appliances are deployed are prepared for NSX, and that the messaging infrastructure status is GREEN. If host clusters are not prepared for NSX, upgrade of the NSX Edge appliance will fail. See "Upgrade NSX Edge" in the NSX Upgrade Guide for details.
- Upgrading Edge Services Gateway (ESG):
Starting in NSX 6.2.5, resource reservation is carried out at the time of NSX Edge upgrade. When vSphere HA is enabled on a cluster having insufficient resources, the upgrade operation may fail due to vSphere HA constraints being violated. To avoid such upgrade failures, perform the following steps before you upgrade an ESG.
The following resource reservations are used by NSX Manager if you have not explicitly set values at the time of install or upgrade.

NSX Edge Form Factor | CPU Reservation | Memory Reservation
COMPACT | 1000 MHz | 512 MB
LARGE | 2000 MHz | 1024 MB
QUADLARGE | 4000 MHz | 2048 MB
X-LARGE | 6000 MHz | 8192 MB

  - Always ensure that your installation follows the best practices laid out for vSphere HA. Refer to Knowledge Base article 1002080.
  - Use the NSX tuning configuration API, PUT https://<nsxmanager>/api/4.0/edgePublish/tuningConfiguration, ensuring that the values for edgeVCpuReservationPercentage and edgeMemoryReservationPercentage fit within the available resources for the form factor (see the table above for defaults; a hedged sketch follows).
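A sketch of adjusting the reservation percentages. The field names are those mentioned above; the PUT is assumed to expect the full tuning configuration object, so fetch it first and verify the payload shape in the NSX API Guide.
# Fetch the current tuning configuration.
curl -k -u admin:password \
  https://nsxmanager/api/4.0/edgePublish/tuningConfiguration -o tuning.xml
# Edit edgeVCpuReservationPercentage / edgeMemoryReservationPercentage in tuning.xml
# so the resulting reservations fit your cluster, then push the whole object back.
curl -k -u admin:password -X PUT -H "Content-Type: application/xml" \
  -d @tuning.xml https://nsxmanager/api/4.0/edgePublish/tuningConfiguration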
- Disable vSphere's Virtual Machine Startup option where vSphere HA is enabled and Edges are deployed: After you upgrade your 6.2.4 or earlier NSX Edges to 6.2.5 or later, you must turn off the vSphere Virtual Machine Startup option for each NSX Edge in a cluster where vSphere HA is enabled and Edges are deployed. To do this, open the vSphere Web Client, find the ESXi host where the NSX Edge virtual machine resides, and click Manage > Settings. Under Virtual Machines, select VM Startup/Shutdown, click Edit, and make sure that the virtual machine is in Manual mode (that is, make sure it is not added to the Automatic Startup/Shutdown list).
- Before upgrading to NSX 6.2.5 or later, make sure all load balancer cipher lists are colon separated. If your cipher list uses another separator such as a comma, make a PUT call to https://nsxmgr_ip/api/4.0/edges/EdgeID/loadbalancer/config/applicationprofiles and replace each <ciphers></ciphers> list in <clientSsl></clientSsl> and <serverSsl></serverSsl> with a colon-separated list. For example, the relevant segment of the request body might look like the following. Repeat this procedure for all application profiles:
<applicationProfile>
  <name>https-profile</name>
  <insertXForwardedFor>false</insertXForwardedFor>
  <sslPassthrough>false</sslPassthrough>
  <template>HTTPS</template>
  <serverSslEnabled>true</serverSslEnabled>
  <clientSsl>
    <ciphers>AES128-SHA:AES256-SHA:ECDHE-ECDSA-AES256-SHA</ciphers>
    <clientAuth>ignore</clientAuth>
    <serviceCertificate>certificate-4</serviceCertificate>
  </clientSsl>
  <serverSsl>
    <ciphers>AES128-SHA:AES256-SHA:ECDHE-ECDSA-AES256-SHA</ciphers>
    <serviceCertificate>certificate-4</serviceCertificate>
  </serverSsl>
  ...
</applicationProfile>
- Set the correct cipher version for load-balanced clients on vROps versions older than 6.2.0: vROps pool members on vROps versions older than 6.2.0 use TLS version 1.0, and therefore you must set a monitor extension value explicitly by setting "ssl-version=10" in the NSX Load Balancer configuration. See "Create a Service Monitor" in the NSX Administration Guide for instructions.
{
  "expected": null,
  "extension": "ssl-version=10",
  "send": null,
  "maxRetries": 2,
  "name": "sm_vrops",
  "url": "/suite-api/api/deployment/node/status",
  "timeout": 5,
  "type": "https",
  "receive": null,
  "interval": 60,
  "method": "GET"
}
- After upgrading to NSX 6.4.6, L2 bridges and interfaces on a DLR cannot connect to logical switches belonging to different transport zones: In NSX 6.4.5 or earlier, L2 bridge instances and interfaces on a Distributed Logical Router (DLR) supported the use of logical switches that belonged to different transport zones. Starting in NSX 6.4.6, this configuration is not supported. The L2 bridge instances and interfaces on a DLR must connect to logical switches that are in a single transport zone. If logical switches from multiple transport zones are used, edge upgrade is blocked during pre-upgrade validation checks when you upgrade NSX to 6.4.6. To resolve this edge upgrade issue, ensure that the bridge instances and interfaces on a DLR are connected to logical switches in a single transport zone.
- After upgrading to NSX 6.4.7, bridges and interfaces on a DLR cannot connect to dvPortGroups belonging to different VDS: If such a configuration is present, NSX Manager upgrade to 6.4.7 is blocked in pre-upgrade validation checks. To resolve this, ensure that interfaces and L2 bridges of a DLR are connected to a single VDS.
- After upgrading to NSX 6.4.7, a DLR cannot be connected to VLAN-backed port groups if the transport zone of the logical switch it is connected to spans more than one VDS: This is to ensure correct alignment of DLR instances with logical switch dvPortGroups across hosts. If such a configuration is present, NSX Manager upgrade to 6.4.7 is blocked in pre-upgrade validation checks. To resolve this issue, ensure that there are no logical interfaces connected to VLAN-backed port groups if a logical interface exists with a logical switch belonging to a transport zone spanning multiple VDS.
- After upgrading to NSX 6.4.7, different DLRs cannot have their interfaces and L2 bridges on the same network: If such a configuration is present, NSX Manager upgrade to 6.4.7 is blocked in pre-upgrade validation checks. To resolve this issue, ensure that a network is used in only a single DLR.
Upgrade Notes for FIPS
When you upgrade from a version of NSX earlier than NSX 6.3.0 to NSX 6.3.0 or later, you must not enable FIPS mode before the upgrade is completed. Enabling FIPS mode before the upgrade is complete will interrupt communication between upgraded and not-upgraded components. See "Understanding FIPS Mode and NSX Upgrade" in the NSX Upgrade Guide for more information.
- Ciphers supported on OS X Yosemite and OS X El Capitan: If you are using the SSL VPN client on OS X 10.11 (El Capitan), you will be able to connect using the AES128-GCM-SHA256, ECDHE-RSA-AES128-GCM-SHA256, ECDHE-RSA-AES256-GCM-SHA384, AES256-SHA, and AES128-SHA ciphers. Clients on OS X 10.10 (Yosemite) will be able to connect using the AES256-SHA and AES128-SHA ciphers only.
- Do not enable FIPS before the upgrade to NSX 6.3.x is complete. See "Understanding FIPS Mode and NSX Upgrade" in the NSX Upgrade Guide for more information.
- Before you enable FIPS, verify any partner solutions are FIPS mode certified. See the VMware Compatibility Guide and the relevant partner documentation.
FIPS Compliance
NSX 6.4 uses FIPS 140-2 validated cryptographic modules for all security-related cryptography when correctly configured.
Note:
- Controller and Clustering VPN: The NSX Controller uses IPsec VPN to connect Controller clusters. The IPsec VPN uses the VMware Linux Kernel Cryptographic Module (VMware Photon OS 1.0 environment), which is in the process of being CMVP validated.
- Edge IPsec VPN: The NSX Edge IPsec VPN uses the VMware Linux Kernel Cryptographic Module (VMware NSX OS 4.4 environment), which is in the process of being CMVP validated.
Document Revision History
26th August 2021: First edition.
9th November 2021: Second edition. Added a General Upgrade Note. Added resolved issues 2793863 and 2783553. Added known issues 2723675 and 2382623.
31st May 2022: Third edition. Added known issue 2968016.
Resolved Issues
- Fixed Issue 2429861: End-to-end latency is not shown on the vRNI UI after upgrading to NSX-v 6.4.6.
End-to-end latency is broken with vRNI interop for vRNI 4.2 and vRNI 5.0.
- Fixed Issue 2430805, 2576416: DLR instances are sent to all NSX-prepared hosts which are connected to VDS.
Issues involving a VLAN designated instance are seen. VLAN traffic is disrupted when the VLAN designated instance lands on an invalid host.
- Fixed Issue 2417029: Incorrect path for domain objects in NSX Manager database.
- NSX Edge upgrade or redeployment fails with the following error message:
Cluster/ResourcePool resgroup-XXX needs to be prepared for network fabric to upgrade edge edge-XX.
- Folder dropdown list does not show all VM folders in the Add Controller dialog box.
- One or more clusters might result in Not Ready status.
- Fixed Issue 2443830: Data path performance degradation occurs over VXLAN network when using ixgben NIC driver.
Throughput on the VXLAN network is reduced.
- Fixed Issue 2714980: On an NSX-v 6.4.9 or 6.4.10 setup, read timeout errors and exceptions are seen randomly in all three controllers' ZooKeeper logs.
On an NSX-v 6.4.9 or 6.4.10 setup, the log message, "Unexpected exception causing shutdown while sock still open java.io.EOFException" is observed in all three controllers' ZooKeeper logs.
- Fixed Issue 2659409: DNSSEC validation enabled by default causes domain resolution failures.
Domains that do not support DNSSEC cannot be resolved by the DNS forwarder. Those domain sites cannot be accessed.
- Fixed Issue 2667067: Translation of Security Groups with security tag as members takes a long time at scale, when 400+ security groups contain the same tag.
Security group translation time causes delays in DFW publish times.
- Fixed Issue 2678466: A few routes are missing on ESXi hosts.
The missing routes on ESXi hosts cause a traffic outage.
- Fixed Issue 2685928: A kernel oops crash occurs during iptables rule matching.
The firewall does not work for a while.
- Fixed Issue 2690275: In Edge > Firewall > Rule, the "Add Rule Below" and "Add Rule Above" menu items do not work and fail to create a new rule.
Unable to add rules using these two menu items.
- Fixed Issue 2690380: SVM IP enforcement fails.
No management IP address is assigned to SVMs.
- Fixed Issue 2698462: For a few seconds after a VM vMotion import, flows with L7 attributes can match an incorrect rule.
After vMotion import, some existing flows may be dropped. New flows may match incorrect rules.
- Fixed Issue 2708365: ESG advertises BGP routes to neighbor with next-hop as neighbor address, instead of local address when neighbor has default originate enabled.
There could be traffic loss because of incorrect nexthop.
- Fixed Issue 2726256: Unable to replace CA certificate/CRL on application profile via UI.
You can select a CA certificate, but the selected certificate is not saved.
- Fixed Issue 2733697: SSLVPN Server process crash seen with LDAP Authentication.
All sslvpn clients lose connection.
- Fixed Issue 2733890: If a stateful DFW (distributed firewall) is configured on NSX, VMs may not re-establish a TCP connection until the TCP timeout period, whose default value is 120s, expires.
Certain customer applications may fail if TCP connections cannot be re-established within a few seconds.
- Fixed Issue 2768777: Unable to view more than 10 sub-interfaces on the Edge.
The sub-interfaces grid on the Edge showed no more than 10 entries, even when more than 10 sub-interfaces existed.
- Fixed Issue 2780321: After NSX Manager upgrade, cluster/host upgrade fails with error, "Cannot remove module nsx-dvfilter-switch-security: module."
Host upgrade fails.
- Fixed Issue 2783605: NSX Edge fails to boot on ESXi 6.7 on PowerEdge R6525 with Processor Type AMD EPYC 7543 32-Core Processor.
NSX Edge not functional.
- Fixed Issue 2639708: Multiple DCN updates cause VSE broadcast leading to CPU spike on both Manager and Edges.
In a scale environment or a setup where grouping objects are heavily consumed, this churn can lead to CPU spikes on both NSX Manager and Edges. This CPU spike can affect datapath on the Edges.
- Fixed Issue 2690015: Edge IPsec VPN: the Tunnel Configuration tab on the Edit IPsec VPN dialog does not load for locales other than English.
Cannot view or edit Edge IPsec settings.
- Fixed Issue 2713018: Memory leak exists in the vsip-flow heap if the vsip module is unloaded with active L7 flows present.
If the vsip module fails to unload properly, the host must be manually rebooted to recover.
- Fixed Issue 2793863: The warning message, "Edge VMs current location does not match the configured location" is not shown when the first edge appliance has a location mismatch and the second does not.
There is no indication that a mismatch is present for the first appliance VM.
- Fixed Issue 2783553: The NSX for vSphere plugin 6.4.10 does not work with vCenter Server 7.0 U3.
On all NSX for vSphere plugin pages, the error "HTTP Status 500 – Internal Server Error" is displayed. Plugin versions 6.4.10 and lower do not work.
Known Issues
The known issues are grouped as follows.
- Installation and Upgrade Known Issues
- General Known Issues
- Logical Networking and NSX Edge Known Issues
Before upgrading, please read the section Upgrade Notes, earlier in this document.
- Issue 1263858: SSL VPN does not send an upgrade notification to the remote client.
The SSL VPN gateway does not send an upgrade notification to users. The administrator has to manually inform remote users that the SSL VPN gateway (server) is updated and that they must update their clients.
Workaround: Users need to uninstall the older version of the client and install the latest version manually.
- Issue 2238989: Post upgrade of the vSphere Distributed Switch on the ESXi host, the Software RSS feature of the VDS does not take effect.
In hosts that have Software Receive Side Scaling (SoftRSS) enabled, the com.vmware.net.vdr.softrss VDS property does not get restored after the VDS upgrade. This causes SoftRSS to be disabled. The /var/run/log/vmkernel.log file shows errors related to the softrss property configuration.
Workaround:
Before upgrading the VDS, remove the softrss property and reconfigure it post VDS upgrade.
- Issue 2107188: When the name of a VDS switch contains non-ASCII characters, NSX VIBs upgrade fails on ESXi hosts.
The /var/run/log/esxupdate.log file shows the upgrade status.
Workaround:
Change the VDS name to use ASCII characters and upgrade again.
- Issue 2590902: After upgrading to NSX 6.4.7, when a static IPv6 address is assigned to workload VMs on an IPv6 network, the VMs are unable to ping the IPv6 gateway interface of the edge.
This issue occurs after upgrading the vSphere Distributed Switches from 6.x to 7.0.
Workaround 1:
Select the VDS to which all the hosts are connected, go to Edit Settings, and under the Multicast option, switch to Basic.
Workaround 2:
Add the following rules on the edge firewall:
- Ping allow rule.
- Multicast Listener Discovery (MLD) allow rules, which are ICMPv6 type 130 (v1) and type 143 (v2).
- In the vSphere Web Client, when you open a Flex component which overlaps an HTML view, the view becomes invisible.
When you open a Flex component, such as a menu or dialog, which overlaps an HTML view, the view is temporarily hidden.
Workaround: None.
- Issue 2543977: The Distributed Firewall IPFIX collector is unable to see flows from UDP ports 137 or 138.
When IPFIX is enabled on the DFW, NetBIOS flows with ports 137 or 138 are not sent by the host.
Workaround:
Use vSphere Client or NSX REST API to remove excluded ports 137 or 138 from flow exclusions.
- Issue 2574333: Edge vNIC configuration takes more than two minutes when NAT is configured with 8000 rules.
vSphere Client shows NSX Manager as not available. The UI times out after two minutes. The vNIC configuration completes successfully in 2-3 minutes.
Workaround: None
- Issue 2598824: vMotion of NSX-v Manager displays warning message.
The following message is shown on vMotion of NSX-v Manager: "Virtual ethernet card 'Network adapter 1' is not supported. This is not a limitation of the host in general, but of the virtual machine's configured guest OS on the selected host."
Workaround: Ignore the warning message to successfully complete the vMotion.
- Issue 2112662: Support bundle GUI does not list Hosts if VXLAN is not configured.
Unable to download tech support bundle of ESX host if VXLAN is not enabled.
Workaround: Enable VXLAN on the cluster to see the ESX hosts.
- Issue 2723675: DNS resolution for private networks is not working with the macOS client on Big Sur.
DNS will not work on macOS Big Sur.
Workaround: None.
- Issue 2382623: VXLAN configuration task timed out waiting for Host Preparation status to turn GREEN.
VXLAN status on the cluster gives an error, "VXLAN configuration task timed-out waiting for Host Preparation status to turn GREEN", but host preparation is GREEN. This error is cosmetic and should not have any impact on the data path.
Workaround: Identify the list of ESXi hosts contributing to the cluster's error by reviewing the NSX Manager logs (vsm.log) looking for the message above. Once the ESXi hosts are identified, take the offending ESXi hosts out of the cluster and put them back in. This will re-install the VIBs and correct the error status.
- Issue 2968016: After upgrading to NSX for vSphere version 6.4.4 or higher, an LB monitor with a HOST header configured does not work as expected.
The NSX upgrade includes a Nagios plugin version upgrade. With the new plugin version, the HOST header may not be included in the health-check request, causing LB monitors configured with a HOST header to fail.
Workaround: See knowledge base article 79469 for details.
- Issue 1993241: IPsec tunnel redundancy using static routing is not supported.
IPsec traffic fails if the primary tunnel goes down, causing traffic disruption. This issue has been known since NSX 6.4.2.
Workaround: Disable and re-enable the IPsec service.
- Issue 2576294: When the logical interfaces on a DLR are connected to multiple VDS, the logical interfaces get attached to the incorrect VDS on the host when the host is part of two VXLAN VDS.
If the DLR is force synced or redeployed in a misconfigured state, all the DLR logical interfaces get attached to the incorrect VDS. In addition, logical interfaces for all new DLRs get attached to the incorrect VDS on the host. Data path traffic is disrupted.
Workaround:
- Remove the second VDS that is not used for VXLAN preparation of the cluster.
- Depending on the current state of the host configuration, either redeploy, or force sync, or sync route service to correct the configuration on host.
- Issue 2582197: When an NSX Edge interface is configured with /32 subnet mask, the connected route is not seen in the routing or forwarding table.
Traffic to that interface might still work, but a static route would be needed to reach any peers. Users cannot use interfaces with /32 subnet mask. There is no issue if any other subnet mask is used.
Workaround: Use any subnet mask other than /32.
- Issue 2395158: Firewall section is grayed out when Enterprise Admin role is assigned to an Active Directory group.
Enterprise Admin role is assigned to an Active Directory group by navigating to Users and Domains > Users > Identity User > Specify vCenter Group. When users belonging to an Active Directory group log in and lock a firewall section, the locked section is grayed out for these users after the page is refreshed.
Workaround:
Instead of assigning the Enterprise Admin role to the Active Directory group, assign this role directly to the users by navigating to Users and Domains > Users > Identity User > Specify vCenter User.
- Issue 2444677: Kernel panic error occurs on the NSX Edge when large files are transferred through L2 VPN over IPSec VPN tunnels.
This error occurs when the MTU is set to 1500 bytes.
Workaround: Set the MTU to 1600 bytes.
- Issue 2574260: Updating an existing Logical Switch configuration to enable the use of guest VLANs (Virtual Guest Tagging or 802.1q VLAN tagging) does not produce the expected result.
The VDS portgroup that is associated with the Logical Switch is not updated accordingly.
Workaround: None
- Issue 2661353: If keepalives are enabled for a GRE tunnel, the GRE tunnel flaps occasionally, making it unstable.
There can be a traffic outage if the GRE tunnel flaps. This happens only when GRE keepalives are enabled.
Workaround: Disable the GRE keepalives on the ESG.