VMware NSX Data Center for vSphere 6.4.7 | Released July 09, 2020 | Build 16509800
See the Revision History of this document.
What's in the Release Notes
The release notes cover the following topics:
- What's New
- Versions, System Requirements, and Installation
- Deprecated and Discontinued Functionality
- Upgrade Notes
- FIPS Compliance
- Revision History
- Resolved Issues
- Known Issues
What's New in NSX Data Center for vSphere 6.4.7
Important Information about NSX for vSphere 6.4.7
Note: VMware identified an issue in VMware NSX for vSphere 6.4.7 that can affect both new NSX customers as well as customers upgrading from previous versions of NSX. As a result, VMware made the decision to remove NSX 6.4.7 from distribution. The current available NSX for vSphere version is 6.4.6. VMware is actively working towards releasing the next version to replace NSX for vSphere 6.4.7.
Please see VMware knowledge base article 80238 for more details.
NSX Data Center for vSphere 6.4.7 adds usability and serviceability enhancements, and addresses a number of specific customer bugs. See Resolved Issues for more information.
Changes introduced in NSX Data Center for vSphere 6.4.7:
- vSphere 7.0 Support
- Multicast Enhancements: Adds support for the following:
- PIM over one GRE tunnel, excluding PIM over any other uplinks at the same time.
- Static Routes used by unicast connectivity.
- Active-Standby mode.
- Edge Firewall for multicast.
- Distributed Firewall for multicast.
- VMware NSX - Functionality Updates for vSphere Client (HTML): For a list of supported functionality, please see VMware NSX for vSphere UI Plug-in Functionality in vSphere Client.
Versions, System Requirements and Installation
Note:
- The table below lists recommended versions of VMware software. These recommendations are general and should not replace or override environment-specific recommendations. This information is current as of the publication date of this document.
- For the minimum supported version of NSX and other VMware products, see the VMware Product Interoperability Matrix. VMware declares minimum supported versions based on internal testing.
Product or Component | Version
NSX Data Center for vSphere | VMware recommends the latest NSX release for new deployments. When upgrading existing deployments, please review the NSX Data Center for vSphere Release Notes or contact your VMware technical support representative for more information on specific issues before planning an upgrade.
vSphere | For vSphere 6.5: For vSphere 6.7: Note: vSphere 5.5 is not supported with NSX 6.4. Note: vSphere 6.0 has reached End of General Support and is not supported with NSX 6.4.7 onwards.
Guest Introspection for Windows | It is recommended that you upgrade VMware Tools to 10.3.10 before upgrading NSX for vSphere.
Guest Introspection for Linux | Ensure that the guest virtual machine has a supported version of Linux installed. See the VMware NSX Administration Guide for the latest list of supported Linux versions.
System Requirements and Installation
For the complete list of NSX installation prerequisites, see the System Requirements for NSX section in the NSX Installation Guide.
For installation instructions, see the NSX Installation Guide or the NSX Cross-vCenter Installation Guide.
Deprecated and Discontinued Functionality
End of Life and End of Support Warnings
For information about NSX and other VMware products that must be upgraded soon, please consult the VMware Lifecycle Product Matrix.
- NSX for vSphere 6.1.x reached End of Availability (EOA) and End of General Support (EOGS) on January 15, 2017. (See also VMware knowledge base article 2144769.)
- vCNS Edges are no longer supported. You must upgrade to NSX Edge before upgrading to NSX 6.3 or later.
- NSX for vSphere 6.2.x has reached End of General Support (EOGS) as of August 20, 2018.
- Based on security recommendations, 3DES as an encryption algorithm in the NSX Edge IPsec VPN service is no longer supported.
  It is recommended that you switch to one of the secure ciphers available in the IPsec service. This change to the encryption algorithm applies to IKE SA (phase 1) as well as IPsec SA (phase 2) negotiation for an IPsec site.
  If the 3DES encryption algorithm is in use by the NSX Edge IPsec service at the time of upgrade to the release in which its support is removed, it is replaced by another recommended cipher, and therefore the IPsec sites that were using 3DES will not come up unless the configuration on the remote peer is modified to match the encryption algorithm used in NSX Edge.
  If you use 3DES encryption, modify the encryption algorithm in the IPsec site configuration to replace 3DES with one of the supported AES variants (AES / AES256 / AES-GCM). For example, for each IPsec site configuration with 3DES as the encryption algorithm, replace it with AES. Update the IPsec configuration at the peer endpoint accordingly.
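As an illustration of the 3DES-to-AES replacement described above, the sketch below rewrites the encryption algorithm in an exported IPsec site configuration. The element name encryptionAlgorithm and the value strings are assumptions modeled on the NSX Edge IPsec config schema; verify them against your own GET response before applying a PUT.

```python
# Sketch: rewrite 3DES to AES256 in an exported IPsec site configuration.
# Element and value names are assumptions; confirm against the NSX API Guide.
import xml.etree.ElementTree as ET

SITE_XML = """\
<site>
  <name>branch-1</name>
  <encryptionAlgorithm>3des</encryptionAlgorithm>
</site>
"""

def replace_3des(xml_text, replacement="aes256"):
    root = ET.fromstring(xml_text)
    for algo in root.iter("encryptionAlgorithm"):
        if algo.text and algo.text.strip().lower() == "3des":
            algo.text = replacement
    return ET.tostring(root, encoding="unicode")

fixed = replace_3des(SITE_XML)
print("3des" in fixed, "aes256" in fixed)  # False True
```

Remember to make the matching change on the remote peer, since the tunnel will not come up with mismatched ciphers.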
General Behavior Changes
If you have more than one vSphere Distributed Switch, and if VXLAN is configured on one of them, you must connect any Distributed Logical Router interfaces to port groups on that vSphere Distributed Switch. Starting in NSX 6.4.1, this configuration is enforced in the UI and API. In earlier releases, you were not prevented from creating an invalid configuration. If you upgrade to NSX 6.4.1 or later and have incorrectly connected DLR interfaces, you will need to take action to resolve this. See the Upgrade Notes for details.
User Interface Removals and Changes
- In NSX 6.4.1, Service Composer Canvas is removed.
- In NSX 6.4.7, the following functionality is deprecated in vSphere Client 7.0:
- NSX Edge: SSL VPN-Plus (see KB 79929 for more information)
- Tools: Endpoint Monitoring (all functionality)
- Tools: Flow Monitoring (Flow Monitoring Dashboard, Details by Service, and Configuration)
- System Events: NSX Ticket Logger
Installation Behavior Changes
Starting with version 6.4.2, when you install NSX Data Center for vSphere on hosts that have physical NICs with ixgbe drivers, Receive Side Scaling (RSS) is not enabled on the ixgbe drivers by default. You must enable RSS manually on the hosts before installing NSX Data Center. Make sure that you enable RSS only on the hosts that have NICs with ixgbe drivers. For detailed steps about enabling RSS, see the VMware knowledge base article https://kb.vmware.com/s/article/2034676. This knowledge base article describes recommended RSS settings for improved VXLAN packet throughput.
This new behavior applies only when you are doing a fresh installation of kernel modules (VIB files) on the ESXi hosts. No changes are required when you are upgrading NSX-managed hosts to 6.4.2.
API Removals and Behavior Changes
Deprecations in NSX 6.4.2
The following item is deprecated, and might be removed in a future release:
- GET/POST/DELETE /api/2.0/vdn/controller/{controllerId}/syslog is deprecated. Use GET/PUT /api/2.0/vdn/controller/cluster/syslog instead.
Behavior Changes in NSX 6.4.1
When you create a new IP pool with POST /api/2.0/services/ipam/pools/scope/globalroot-0, or modify an existing IP pool with PUT /api/2.0/services/ipam/pools/, and the pool has multiple IP ranges defined, validation is done to ensure that the ranges do not overlap. This validation was not previously done.
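The same overlap check can be reproduced client-side before submitting a pool, so a request is not rejected by NSX. A minimal sketch, assuming IPv4 ranges given as start/end pairs (the helper is illustrative, not part of the NSX API):

```python
from ipaddress import IPv4Address

def ranges_overlap(ranges):
    """Return True if any two IP ranges overlap.

    `ranges` is a list of (start, end) dotted-quad strings, as in the
    ipRanges element of an IP pool request body.
    """
    parsed = sorted(
        (IPv4Address(start), IPv4Address(end)) for start, end in ranges
    )
    # After sorting by start address, an overlap exists exactly when a
    # range starts before the previous range has ended.
    for (_, prev_end), (next_start, _) in zip(parsed, parsed[1:]):
        if next_start <= prev_end:
            return True
    return False

# Non-overlapping ranges: accepted by the 6.4.1+ validation.
print(ranges_overlap([("10.0.0.1", "10.0.0.50"), ("10.0.0.100", "10.0.0.150")]))  # False
# Overlapping ranges: rejected by the 6.4.1+ validation.
print(ranges_overlap([("10.0.0.1", "10.0.0.50"), ("10.0.0.40", "10.0.0.60")]))    # True
```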
Deprecations in NSX 6.4.0
The following items are deprecated, and might be removed in a future release:
- The systemStatus parameter in GET /api/4.0/edges/edgeID/status is deprecated.
- GET /api/2.0/services/policy/serviceprovider/firewall/ is deprecated. Use GET /api/2.0/services/policy/serviceprovider/firewall/info instead.
- Setting tcpStrict in the global configuration section of Distributed Firewall is deprecated. Starting in NSX 6.4.0, tcpStrict is defined at the section level. Note: If you upgrade to NSX 6.4.0 or later, the global configuration setting for tcpStrict is used to configure tcpStrict in each existing layer 3 section. tcpStrict is set to false in layer 2 sections and layer 3 redirect sections. See "Working with Distributed Firewall Configuration" in the NSX API Guide for more information.
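The section-level setting is expressed on the firewall section element itself. A hypothetical fragment is shown below; attribute names and placement follow the NSX API Guide's firewall configuration schema, so verify against a GET of your own section before use:

```xml
<section name="web-tier" type="LAYER3" tcpStrict="true">
  <!-- firewall rules for this section -->
</section>
```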
Behavior Changes in NSX 6.4.0
In NSX 6.4.0, the <name> parameter is required when you create a controller with POST /api/2.0/vdn/controller.
NSX 6.4.0 introduces these changes in error handling:
- Previously, POST /api/2.0/vdn/controller responded with 201 Created to indicate that the controller creation job was created. However, the creation of the controller might still fail. Starting in NSX 6.4.0, the response is 202 Accepted.
- Previously, if you sent an API request that is not allowed in transit or standalone mode, the response status was 400 Bad Request. Starting in 6.4.0, the response status is 403 Forbidden.
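A version-agnostic client can absorb the 201-to-202 change by treating both codes as "request accepted" and then polling the job status, since in both cases the controller may still fail to deploy. A minimal sketch (the helper name is illustrative, not an NSX API):

```python
# Sketch: treat both pre-6.4.0 (201 Created) and 6.4.0+ (202 Accepted)
# controller-creation responses as "request accepted"; anything else,
# including the new 403 Forbidden for disallowed modes, is a failure.
ACCEPTED_CODES = {201, 202}

def controller_create_accepted(status_code):
    return status_code in ACCEPTED_CODES

print(controller_create_accepted(201))  # True  (pre-6.4.0)
print(controller_create_accepted(202))  # True  (6.4.0 and later)
print(controller_create_accepted(403))  # False
```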
CLI Removals and Behavior Changes
Do not use unsupported commands on NSX Controller nodes
There are undocumented commands to configure NTP and DNS on NSX Controller nodes. These commands are not supported, and should not be used on NSX Controller nodes. You should only use commands which are documented in the NSX CLI Guide.
Upgrade Notes
Note: For a list of known issues affecting installation and upgrades, see Installation and Upgrade Known Issues.
General Upgrade Notes
- To upgrade NSX, you must perform a full NSX upgrade including host cluster upgrade (which upgrades the host VIBs). For instructions, see the NSX Upgrade Guide, including the Upgrade Host Clusters section.
- Upgrading NSX VIBs on host clusters using VUM is not supported. Use Upgrade Coordinator, Host Preparation, or the associated REST APIs to upgrade NSX VIBs on host clusters.
- System Requirements: For information on system requirements while installing and upgrading NSX, see the System Requirements for NSX section in the NSX documentation.
- Upgrade path for NSX: The VMware Product Interoperability Matrix provides details about the upgrade paths from VMware NSX.
- Cross-vCenter NSX upgrade is covered in the NSX Upgrade Guide.
- Downgrades are not supported:
  - Always capture a backup of NSX Manager before proceeding with an upgrade.
  - Once NSX has been upgraded successfully, NSX cannot be downgraded.
- To validate that your upgrade to NSX 6.4.x was successful, see knowledge base article 2134525.
- There is no support for upgrades from vCloud Networking and Security to NSX 6.4.x. You must upgrade to a supported 6.2.x release first.
- Interoperability: Check the VMware Product Interoperability Matrix for all relevant VMware products before upgrading.
- Upgrading to NSX Data Center for vSphere 6.4.7: VIO is not compatible with NSX 6.4.7 due to multiple scale issues.
- Upgrading to NSX Data Center for vSphere 6.4: NSX 6.4 is not compatible with vSphere 5.5.
- Upgrading to NSX Data Center for vSphere 6.4.5: If NSX is deployed with VMware Integrated OpenStack (VIO), upgrade VIO to 4.1.2.2 or 5.1.0.1, as 6.4.5 is incompatible with previous releases due to spring package update to version 5.0.
- Upgrading to vSphere 6.5: When upgrading to vSphere 6.5a or later 6.5 versions, you must first upgrade to NSX 6.3.0 or later. NSX 6.2.x is not compatible with vSphere 6.5. See Upgrading vSphere in an NSX Environment in the NSX Upgrade Guide.
- Upgrading to vSphere 6.7: When upgrading to vSphere 6.7 you must first upgrade to NSX 6.4.1 or later. Earlier versions of NSX are not compatible with vSphere 6.7. See Upgrading vSphere in an NSX Environment in the NSX Upgrade Guide.
- Partner services compatibility: If your site uses VMware partner services for Guest Introspection or Network Introspection, you must review the VMware Compatibility Guide before you upgrade, to verify that your vendor's service is compatible with this release of NSX.
- Networking and Security plug-in: After upgrading NSX Manager, you must log out and log back in to the vSphere Web Client. If the NSX plug-in does not display correctly, clear your browser cache and history. If the Networking and Security plug-in does not appear in the vSphere Web Client, reset the vSphere Web Client server as explained in the NSX Upgrade Guide.
- Stateless environments: In NSX upgrades in a stateless host environment, the new VIBs are pre-added to the Host Image profile during the NSX upgrade process. As a result, the NSX upgrade process on stateless hosts follows this sequence:
Prior to NSX 6.2.0, there was a single URL on NSX Manager from which VIBs for a certain version of the ESX Host could be found. (Meaning the administrator only needed to know a single URL, regardless of NSX version.) In NSX 6.2.0 and later, the new NSX VIBs are available at different URLs. To find the correct VIBs, you must perform the following steps:
- Find the new VIB URL from https://<nsxmanager>/bin/vdn/nwfabric.properties.
- Fetch VIBs of required ESX host version from corresponding URL.
- Add them to host image profile.
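The lookup in the first step can be scripted. The sketch below parses a properties file of the shape NSX Manager serves at /bin/vdn/nwfabric.properties and returns the VIB path for a given ESXi version. The property names and sample content are assumptions based on typical 6.x output; verify them against your own deployment before relying on them.

```python
# Sketch: pick the VIB download path for a given ESXi version from
# nwfabric.properties. Property names and sample content are illustrative.
SAMPLE = """\
VDN_VIB_PATH.1=/bin/vdn/vibs-6.4.7/6.5-123456/vxlan.zip
VDN_PRODUCT_VERSION.1=6.5.0
VDN_VIB_PATH.2=/bin/vdn/vibs-6.4.7/6.7-123456/vxlan.zip
VDN_PRODUCT_VERSION.2=6.7.0
"""

def vib_path_for(properties_text, esx_version):
    # Parse key=value lines, then match the product-version entry whose
    # value starts with the requested ESXi version and return the VIB
    # path entry that shares its numeric suffix.
    props = dict(
        line.split("=", 1) for line in properties_text.splitlines() if "=" in line
    )
    for key, value in props.items():
        if key.startswith("VDN_PRODUCT_VERSION.") and value.startswith(esx_version):
            index = key.rsplit(".", 1)[1]
            return props.get("VDN_VIB_PATH." + index)
    return None

print(vib_path_for(SAMPLE, "6.7"))  # /bin/vdn/vibs-6.4.7/6.7-123456/vxlan.zip
```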
- Service Definitions functionality is not supported in the NSX 6.4.7 UI with vSphere Client 7.0:
  For example, if you have an old Trend Micro Service Definition registered with vSphere 6.5 or 6.7, follow either of these two options:
  - Option #1: Before upgrading to vSphere 7.0, navigate to the Service Definition tab in the vSphere Web Client, edit the Service Definition to 7.0, and then upgrade to vSphere 7.0.
  - Option #2: After upgrading to vSphere 7.0, run the following NSX API to add or edit the Service Definition to 7.0:
    POST https://<nsxmanager>/api/2.0/si/service/<service-id>/servicedeploymentspec/versioneddeploymentspec
Upgrade Notes for NSX Components
Support for VM Hardware version 11 for NSX components
- For new installs of NSX Data Center for vSphere 6.4.2, the NSX components (Manager, Controller, Edge, Guest Introspection) are on VM Hardware version 11.
- For upgrades to NSX Data Center for vSphere 6.4.2, the NSX Edge and Guest Introspection components are automatically upgraded to VM Hardware version 11. The NSX Manager and NSX Controller components remain on VM Hardware version 8 following an upgrade. Users have the option to upgrade the VM Hardware to version 11. Consult KB (https://kb.vmware.com/s/article/1010675) for instructions on upgrading VM Hardware versions.
- For new installs of NSX 6.3.x, 6.4.0, 6.4.1, the NSX components (Manager, Controller, Edge, Guest Introspection) are on VM Hardware version 8.
NSX Manager Upgrade
- Important: If you are upgrading NSX 6.2.0, 6.2.1, or 6.2.2 to NSX 6.3.5 or later, you must complete a workaround before starting the upgrade. See VMware Knowledge Base article 000051624 for details.
- If you are upgrading from NSX 6.3.3 to NSX 6.3.4 or later, you must first follow the workaround instructions in VMware Knowledge Base article 2151719.
- If you use SFTP for NSX backups, change to hmac-sha2-256 after upgrading to 6.3.0 or later, because there is no support for hmac-sha1. See VMware Knowledge Base article 2149282 for a list of supported security algorithms.
- When you upgrade NSX Manager to NSX 6.4.1, a backup is automatically taken and saved locally as part of the upgrade process. See Upgrade NSX Manager for more information.
- When you upgrade to NSX 6.4.0, the TLS settings are preserved. If you have only TLS 1.0 enabled, you will be able to view the NSX plug-in in the vSphere Web Client, but NSX Managers are not visible. There is no impact to the datapath, but you cannot change any NSX Manager configuration. Log in to the NSX appliance management web UI at https://nsx-mgr-ip/ and enable TLS 1.1 and TLS 1.2. This reboots the NSX Manager appliance.
Controller Upgrade
- The NSX Controller cluster must contain three controller nodes. If it has fewer than three controllers, you must add controllers before starting the upgrade. See Deploy NSX Controller Cluster for instructions.
- In NSX 6.3.3, the underlying operating system of the NSX Controller changes. This means that when you upgrade from NSX 6.3.2 or earlier to NSX 6.3.3 or later, instead of an in-place software upgrade, the existing controllers are deleted one at a time, and new Photon OS based controllers are deployed using the same IP addresses.
  When the controllers are deleted, any associated DRS anti-affinity rules are also deleted. You must create new anti-affinity rules in vCenter to prevent the new controller VMs from residing on the same host.
See Upgrade the NSX Controller Cluster for more information on controller upgrades.
Host Cluster Upgrade
- If you upgrade from NSX 6.3.2 or earlier to NSX 6.3.3 or later, the NSX VIB names change. The esx-vxlan and esx-vsip VIBs are replaced with esx-nsxv if you have NSX 6.3.3 or later installed on ESXi 6.0 or later.
- Rebootless upgrade and uninstall on hosts: On vSphere 6.0 and later, once you have upgraded from NSX 6.2.x to NSX 6.3.x or later, any subsequent NSX VIB changes will not require a reboot. Instead, hosts must enter maintenance mode to complete the VIB change. This affects both NSX host cluster upgrades and ESXi upgrades. See the NSX Upgrade Guide for more information.
NSX Edge Upgrade
- Validation added in NSX 6.4.1 to disallow invalid distributed logical router configurations: In environments where VXLAN is configured and more than one vSphere Distributed Switch is present, distributed logical router interfaces must be connected to the VXLAN-configured vSphere Distributed Switch only. Upgrading a DLR to NSX 6.4.1 or later will fail in those environments if the DLR has interfaces connected to a vSphere Distributed Switch that is not configured for VXLAN. Use the API to connect any incorrectly configured interfaces to port groups on the VXLAN-configured vSphere Distributed Switch. Once the configuration is valid, retry the upgrade. You can change the interface configuration using PUT /api/4.0/edges/{edgeId} or PUT /api/4.0/edges/{edgeId}/interfaces/{index}. See the NSX API Guide for more information.
- Delete the UDLR Control VM from the vCenter Server that is associated with the secondary NSX Manager before upgrading the UDLR from 6.2.7 to 6.4.5: In a multi-vCenter environment, when you upgrade NSX UDLRs from 6.2.7 to 6.4.5, the upgrade of the UDLR virtual appliance (UDLR Control VM) fails on the secondary NSX Manager if HA is enabled on the UDLR Control VM. During the upgrade, the VM with HA index #0 in the HA pair is removed from the NSX database, but this VM continues to exist on the vCenter Server. Therefore, when the UDLR Control VM is upgraded on the secondary NSX Manager, the upgrade fails because the name of the VM clashes with an existing VM on the vCenter Server. To resolve this issue, delete the Control VM from the vCenter Server that is associated with the UDLR on the secondary NSX Manager, and then upgrade the UDLR from 6.2.7 to 6.4.5.
- Host clusters must be prepared for NSX before upgrading NSX Edge appliances: Management-plane communication between NSX Manager and Edge via the VIX channel is no longer supported starting in 6.3.0. Only the message bus channel is supported. When you upgrade from NSX 6.2.x or earlier to NSX 6.3.0 or later, you must verify that host clusters where NSX Edge appliances are deployed are prepared for NSX, and that the messaging infrastructure status is GREEN. If host clusters are not prepared for NSX, the upgrade of the NSX Edge appliance will fail. See Upgrade NSX Edge in the NSX Upgrade Guide for details.
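For the DLR interface reconnection described above, the PUT body for /api/4.0/edges/{edgeId}/interfaces/{index} might look like the hypothetical fragment below. The element names follow the NSX API Guide's logical router interface schema, and the port group ID is a placeholder; fetch the current interface with a GET, and change only the connection target so it points at a port group on the VXLAN-configured vSphere Distributed Switch.

```xml
<interface>
  <name>lif-web</name>
  <connectedToId>dvportgroup-1001</connectedToId>
  <isConnected>true</isConnected>
</interface>
```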
- Upgrading Edge Services Gateway (ESG): Starting in NSX 6.2.5, resource reservation is carried out at the time of NSX Edge upgrade. When vSphere HA is enabled on a cluster having insufficient resources, the upgrade operation may fail due to vSphere HA constraints being violated. To avoid such upgrade failures, perform the following steps before you upgrade an ESG.
  The following resource reservations are used by the NSX Manager if you have not explicitly set values at the time of install or upgrade.

  NSX Edge Form Factor | CPU Reservation | Memory Reservation
  COMPACT | 1000 MHz | 512 MB
  LARGE | 2000 MHz | 1024 MB
  QUADLARGE | 4000 MHz | 2048 MB
  X-LARGE | 6000 MHz | 8192 MB

  - Always ensure that your installation follows the best practices laid out for vSphere HA. Refer to Knowledge Base article 1002080.
  - Use the NSX tuning configuration API, PUT https://<nsxmanager>/api/4.0/edgePublish/tuningConfiguration, ensuring that the values for edgeVCpuReservationPercentage and edgeMemoryReservationPercentage fit within the available resources for the form factor (see the table above for defaults).
- Disable vSphere's Virtual Machine Startup option where vSphere HA is enabled and Edges are deployed. After you upgrade your 6.2.4 or earlier NSX Edges to 6.2.5 or later, you must turn off the vSphere Virtual Machine Startup option for each NSX Edge in a cluster where vSphere HA is enabled and Edges are deployed. To do this, open the vSphere Web Client, find the ESXi host where the NSX Edge virtual machine resides, click Manage > Settings, and, under Virtual Machines, select VM Startup/Shutdown, click Edit, and make sure that the virtual machine is in Manual mode (that is, make sure it is not added to the Automatic Startup/Shutdown list).
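The default reservations from the table above can be kept as a small lookup when checking whether a cluster has headroom before an ESG upgrade. A minimal Python sketch; the values mirror the table, and the helper itself is illustrative rather than an NSX API:

```python
# Default NSX Edge resource reservations per form factor (from the
# release notes table). The fits() helper is an illustrative sketch.
DEFAULT_RESERVATIONS = {
    "COMPACT":   {"cpu_mhz": 1000, "memory_mb": 512},
    "LARGE":     {"cpu_mhz": 2000, "memory_mb": 1024},
    "QUADLARGE": {"cpu_mhz": 4000, "memory_mb": 2048},
    "X-LARGE":   {"cpu_mhz": 6000, "memory_mb": 8192},
}

def fits(form_factor, free_cpu_mhz, free_memory_mb):
    """True if a host's free capacity covers the default reservation."""
    need = DEFAULT_RESERVATIONS[form_factor]
    return free_cpu_mhz >= need["cpu_mhz"] and free_memory_mb >= need["memory_mb"]

print(fits("LARGE", 2500, 2048))    # True
print(fits("X-LARGE", 4000, 8192))  # False: 4000 MHz free, 6000 needed
```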
- Before upgrading to NSX 6.2.5 or later, make sure all load balancer cipher lists are colon-separated. If your cipher list uses another separator, such as a comma, make a PUT call to https://nsxmgr_ip/api/4.0/edges/EdgeID/loadbalancer/config/applicationprofiles and replace each <ciphers></ciphers> list in <clientSsl></clientSsl> and <serverSsl></serverSsl> with a colon-separated list. For example, the relevant segment of the request body might look like the following. Repeat this procedure for all application profiles:

  <applicationProfile>
    <name>https-profile</name>
    <insertXForwardedFor>false</insertXForwardedFor>
    <sslPassthrough>false</sslPassthrough>
    <template>HTTPS</template>
    <serverSslEnabled>true</serverSslEnabled>
    <clientSsl>
      <ciphers>AES128-SHA:AES256-SHA:ECDHE-ECDSA-AES256-SHA</ciphers>
      <clientAuth>ignore</clientAuth>
      <serviceCertificate>certificate-4</serviceCertificate>
    </clientSsl>
    <serverSsl>
      <ciphers>AES128-SHA:AES256-SHA:ECDHE-ECDSA-AES256-SHA</ciphers>
      <serviceCertificate>certificate-4</serviceCertificate>
    </serverSsl>
    ...
  </applicationProfile>
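A small helper can normalize an existing cipher list before making the PUT call. A minimal sketch, assuming the input may be comma-, space-, or colon-separated (the function name is illustrative):

```python
# Sketch: normalize a load balancer cipher list to the colon-separated
# form required from NSX 6.2.5 onward.
import re

def to_colon_separated(ciphers):
    # Split on any run of commas, colons, or whitespace, drop empties,
    # and rejoin with colons.
    return ":".join(t for t in re.split(r"[,:\s]+", ciphers.strip()) if t)

print(to_colon_separated("AES128-SHA, AES256-SHA,ECDHE-ECDSA-AES256-SHA"))
# AES128-SHA:AES256-SHA:ECDHE-ECDSA-AES256-SHA
```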
- Set Correct Cipher version for Load Balanced Clients on vROPs versions older than 6.2.0: vROPs pool members on vROPs versions older than 6.2.0 use TLS version 1.0 and therefore you must set a monitor extension value explicitly by setting "ssl-version=10" in the NSX Load Balancer configuration. See Create a Service Monitor in the NSX Administration Guide for instructions.
{
  "expected": null,
  "extension": "ssl-version=10",
  "send": null,
  "maxRetries": 2,
  "name": "sm_vrops",
  "url": "/suite-api/api/deployment/node/status",
  "timeout": 5,
  "type": "https",
  "receive": null,
  "interval": 60,
  "method": "GET"
}
- After upgrading to NSX 6.4.6, L2 bridges and interfaces on a DLR cannot connect to logical switches belonging to different transport zones: In NSX 6.4.5 or earlier, L2 bridge instances and interfaces on a Distributed Logical Router (DLR) supported the use of logical switches that belonged to different transport zones. Starting in NSX 6.4.6, this configuration is not supported. The L2 bridge instances and interfaces on a DLR must connect to logical switches that are in a single transport zone. If logical switches from multiple transport zones are used, the edge upgrade is blocked during pre-upgrade validation checks when you upgrade NSX to 6.4.6. To resolve this edge upgrade issue, ensure that the bridge instances and interfaces on a DLR are connected to logical switches in a single transport zone.
- After upgrading to NSX 6.4.7, bridges and interfaces on a DLR cannot connect to dvPortGroups belonging to different VDS: If such a configuration is present, the NSX Manager upgrade to 6.4.7 is blocked in pre-upgrade validation checks. To resolve this, ensure that the interfaces and L2 bridges of a DLR are connected to a single VDS.
- After upgrading to NSX 6.4.7, a DLR cannot be connected to VLAN-backed port groups if the transport zone of the logical switch it is connected to spans more than one VDS: This is to ensure correct alignment of DLR instances with logical switch dvPortGroups across hosts. If such a configuration is present, the NSX Manager upgrade to 6.4.7 is blocked in pre-upgrade validation checks. To resolve this issue, ensure that there are no logical interfaces connected to VLAN-backed port groups if a logical interface exists with a logical switch belonging to a transport zone spanning multiple VDS.
- After upgrading to NSX 6.4.7, different DLRs cannot have their interfaces and L2 bridges on the same network: If such a configuration is present, the NSX Manager upgrade to 6.4.7 is blocked in pre-upgrade validation checks. To resolve this issue, ensure that a network is used in only a single DLR.
Upgrade Notes for FIPS
When you upgrade from a version of NSX earlier than NSX 6.3.0 to NSX 6.3.0 or later, you must not enable FIPS mode before the upgrade is completed. Enabling FIPS mode before the upgrade is complete will interrupt communication between upgraded and not-upgraded components. See Understanding FIPS Mode and NSX Upgrade in the NSX Upgrade Guide for more information.
- Ciphers supported on OS X Yosemite and OS X El Capitan: If you are using the SSL VPN client on OS X 10.11 (El Capitan), you will be able to connect using the AES128-GCM-SHA256, ECDHE-RSA-AES128-GCM-SHA256, ECDHE-RSA-AES256-GCM-SHA384, AES256-SHA, and AES128-SHA ciphers; those using OS X 10.10 (Yosemite) will be able to connect using the AES256-SHA and AES128-SHA ciphers only.
- Do not enable FIPS before the upgrade to NSX 6.3.x is complete. See Understanding FIPS Mode and NSX Upgrade in the NSX Upgrade Guide for more information.
- Before you enable FIPS, verify that any partner solutions are FIPS mode certified. See the VMware Compatibility Guide and the relevant partner documentation.
FIPS Compliance
NSX 6.4 uses FIPS 140-2 validated cryptographic modules for all security-related cryptography when correctly configured.
Note:
- Controller and Clustering VPN: The NSX Controller uses IPsec VPN to connect Controller clusters. The IPsec VPN uses the VMware Linux Kernel Cryptographic Module (VMware Photon OS 1.0 environment), which is in the process of being CMVP validated.
- Edge IPsec VPN: The NSX Edge IPsec VPN uses the VMware Linux Kernel Cryptographic Module (VMware NSX OS 4.4 environment), which is in the process of being CMVP validated.
Document Revision History
9th July 2020: First edition.
4th August 2020: Second edition. Added known issue 2614777.
17th August 2020: Third edition. Added resolved issue 2306230.
Resolved Issues
- Fixed Issue 2445396 - Load Balancer Pool Ports are erased when editing load balancer pool followed by "Save".
When editing load balancer pool and saving the configuration, port numbers on every member are erased.
- Fixed Issue 2448235 - After upgrading to NSX-v 6.4.6, you may not be able to filter firewall rules using the Protocol, Source Port, and Destination Port filters.
  You will not be able to filter firewall rules using the Protocol, Source Port, and Destination Port filters.
- Fixed Issue 2306230: Messaging infrastructure goes down for hosts due to heartbeat delay.
  Host passwords get reset. Users might not be able to configure the hosts during the disconnected time.
- Fixed Issue 1859572 - During the uninstall of NSX VIBs version 6.3.x on ESXi hosts that are being managed by vCenter version 6.0.0, the host continues to stay in Maintenance mode.
If you are uninstalling NSX VIBs version 6.3.x on a cluster, the workflow involves putting the hosts into Maintenance mode, uninstalling the VIBs and then removing the hosts from Maintenance mode by the EAM service. However, if such hosts are managed by vCenter server version 6.0.0, then this results in the host being stuck in Maintenance mode post uninstalling the VIBs. The EAM service responsible for uninstalling the VIBs puts the host in Maintenance mode but fails to move the hosts out of Maintenance mode.
- Fixed Issue 2006028 - Host upgrade may fail if vCenter Server system is rebooting during upgrade.
If the associated vCenter Server system is rebooted during a host upgrade, the host upgrade might fail and leave the host in maintenance mode. Clicking Resolve does not move the host out of maintenance mode. The cluster status is "Not Ready".
- Fixed Issue 2233029 - ESX does not support 10K routes.
ESX does not need to support 10K routes. By design, the maximum limit on ESX is 2K routes.
- Fixed Issue 2391153 - NSX Manager does not update vdn_vmknic_portgroup and vdn_cluster table after vCenter operations are done on vmknic dvportgroup.
When you run the PUT https://nsx_manager_ip/api/2.0/nwfabric/configure API request, NSX Manager does not update the vdn_vmknic_portgroup and vdn_cluster tables after vCenter operations are done on the vmknic distributed virtual portgroup. The vCenter UI continues to display the old VLAN ID.
- Fixed Issue 2367906 - CPU utilization on the NSX Edge reaches 100% when an HTTP service monitor is configured on the load balancer with the "no-body" extension.
This issue occurs when you configure HTTP service monitor with "no-body" extension while using the "nagios" plugin in the load balancer. The load balancer loses connection with all the back-end servers, and the user's request is broken.
- Fixed Issue 2449643 - The vSphere Web Client displays "No NSX Manager available" error after upgrading NSX from 6.4.1 to 6.4.6.
In the vSphere Web Client, the Firewall and Logical Switch pages display the following error message:
"No NSX Manager available. Verify current user has role assigned on NSX Manager."
As the Firewall and Logical Switch pages are unavailable in the UI, users might not be able to configure firewall rules or create logical switches. NSX APIs also take a long time to respond.
In the /usr/nsx-webserver/logs/localhost_access_log file, entries similar to the following are seen:
127.0.0.1 - - [27/Oct/2019:08:38:22 +0100] "GET /api/4.0/firewall/globalroot-0/config HTTP/1.1" 200 1514314 91262
127.0.0.1 - - [27/Oct/2019:08:43:21 +0100] "GET /api/4.0/firewall/globalroot-0/config HTTP/1.1" 200 1514314 90832
127.0.0.1 - - [27/Oct/2019:11:07:39 +0100] "POST /remote/api/VdnInventoryFacade HTTP/1.1" 200 62817 264142
127.0.0.1 - - [27/Oct/2019:11:07:40 +0100] "POST /remote/api/VdnInventoryFacade HTTP/1.1" 200 62817 265023
127.0.0.1 - - [27/Oct/2019:11:07:40 +0100] "POST /remote/api/VdnInventoryFacade HTTP/1.1" 200 62817 265265
- Fixed Issue 2551523 - New rule publish may fail if a rule contains a quad-zero IP (0.0.0.0/0) after upgrading from NSX 6.3.6 to NSX 6.4.6.
Firewall rules with 0.0.0.0/0 IP address in the source or destination fail to publish.
- Fixed Issue 2536058 - High API Response time from NSX Manager for get flowstats APIs.
When querying for Firewall config and IPSets, response time for these APIs is high.
- Fixed Issue 2513773 - IDFW Rules are not being applied to VDIs after their respective security groups are removed mid-session.
VDI sessions are dropped randomly due to removal of security groups attached to IDFW rules.
- Fixed Issue 2509224 - Excessively large flow table on NSX Edge in HA leads to connection drops.
Excessive connection tracking table syncing from standby NSX Edge to active NSX Edge causes new connections to drop.
- Fixed Issue 2502739 - High API Response time from NSX Manager for FW and IPSet APIs.
When querying for Firewall config and IPSets, the response time for these APIs is high.
- Fixed Issue 2498988 – Response time is high when querying grouping objects.
When querying grouping objects, the response time for these APIs is high.
- Fixed Issue 2458746 - Unsupported LIF configurations resulted in PSOD on multiple hosts.
Host validations were implemented to avoid adding duplicate LIFs across different DLRs.
- Fixed Issue 2452833 - An ESXi host servicing a DLR Control VM may experience a PSOD if the same VXLAN is consumed in bridging as well as a LIF in two different DLRs.
  An ESXi host servicing the DLR Control VM may PSOD if the same virtual wire is consumed for bridging and as a LIF.
- Fixed Issue 2451586 - LB Application Rules not working after upgrading NSX to 6.4.6.
When HTTP(S) back-end servers sit behind an NSX Load Balancer as part of an NSX Edge, clients communicate with the back-end servers through the Load Balancer. The Load Balancer sends headers in lowercase, even though the servers sent them in camel case.
- Fixed Issue 2448449 - Application rules configured with req_ssl_sni may fail with parsing error when upgraded to NSX for vSphere 6.4.6.
Upon upgrading to NSX for vSphere 6.4.6, if a Load Balancer is configured with an sni related rule or when you are creating or configuring a new sni related rule in this version, you may experience an issue where the upgrade fails or the creation of the related rule fails.
- Fixed Issue 2444275 - Host will PSOD when the Virtual Infrastructure Latency feature is enabled.
An ESXi host experiences a PSOD when the Virtual Infrastructure Latency feature is enabled in vRNI in a scale environment comprising more than 975 BFD tunnels per host.
- Fixed Issue 2493095 - Support for comma-separated IP lists in DFW removed from UI and database.
Upon upgrading from previous versions of NSX-v to NSX-v 6.4.6, support for comma-separated IP addresses in DFW was removed. Users with existing firewall configurations containing comma-separated IP addresses were not able to update the configuration.
- Fixed Issue 2459936 - Unable to split network into two parts using static routes with network (0.0.0.0/1 and 128.0.0.0/1).
NSX 6.4.0 through 6.4.6 allowed a static route with the 0.0.0.0 network only as 0.0.0.0/0. Starting in NSX 6.4.7, adding a static route with a 0.0.0.0/x network is allowed, where 0 <= x <= 32.
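The new validation behavior can be illustrated with a small sketch (a hypothetical helper, not NSX code; the version cut-over logic is inferred from this fix):

```python
import ipaddress

def static_route_allowed(network: str, nsx_version: tuple) -> bool:
    """Illustrative check of which static route networks are accepted.

    NSX 6.4.0-6.4.6 accept the 0.0.0.0 network only as the default
    route (0.0.0.0/0); NSX 6.4.7 accepts 0.0.0.0/x for 0 <= x <= 32,
    which allows splitting the network with 0.0.0.0/1 and 128.0.0.0/1.
    """
    net = ipaddress.ip_network(network, strict=False)
    if net.network_address != ipaddress.ip_address("0.0.0.0"):
        return True  # non-zero networks were not affected by this issue
    if nsx_version >= (6, 4, 7):
        return 0 <= net.prefixlen <= 32
    return net.prefixlen == 0  # only the quad-zero default route before 6.4.7
```

For example, `static_route_allowed("0.0.0.0/1", (6, 4, 6))` is rejected while the same route under 6.4.7 is accepted.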
- Fixed Issue 2430886: Incorrect host status displayed in the NSX Manager database for clusters that are disabled.
The Host Preparation page shows firewall state as error even when the cluster has firewall enabled.
- Fixed Issue 2383382: In the list of default commands on the NSX Manager CLI, delete command shows incorrect help information.
The help information for the delete command is displayed as "Show running system information".
- Fixed Issue 2407662: Guest Introspection service does not function when ESXi host and vCenter Server restart simultaneously.
Guest Introspection service status is not Green after the ESXi host is restarted, or when the GI VM is powered on manually while vCenter Server was not running.
The following error message is displayed in the "/usr/local/usvmmgmt/logs/eventmanager.log":
ERROR taskScheduler-4 PasswordChangeHelper:139 - Cmd does not exit normally, exit value = 1
- Fixed Issue 2410743: Connecting the logical interfaces of a DLR to dvPortGroups of multiple vSphere Distributed Switches causes traffic disruption on some of the logical interfaces.
Traffic on some of the logical interfaces does not work correctly. Hosts show the logical interfaces attached to incorrect vSphere Distributed Switches.
- Fixed Issue 2413213: Housekeeping tasks are not getting scheduled and ai_useripmap table increases.
The "ai_useripmap" table is not cleaned up.
- Fixed Issue 2430753: While creating IP pool if extra space is added in the gateway IP address, then ESXi host does not create the gateway IP for the VTEP.
VTEP to VTEP connectivity is broken due to missing gateway in the IP pool configuration.
- Fixed Issue 2438649: Some Distributed Logical Routers on the host lose all their logical interfaces.
VMs lose connectivity with the DLR due to missing logical interfaces on the DLRs.
- Fixed Issue 2439627: Unable to get muxconfig.xml file to the ESXi host after the NSX Manager is upgraded.
Guest Introspection service is not available. GI service status shows the following warning message for all the hosts:
Guest Introspection service not ready.
- Fixed Issue 2440903: Using NSX Centralized CLI for NSX Edge shows data from passive edge.
Output of NSX Centralized CLI for NSX Edge might come from the wrong edge VM.
- Fixed Issue 2445183, 2454653: After replacing the SSL certificate of NSX Manager appliance, NSX Manager is not available in vSphere inventory.
The following error message is displayed in the vSphere Client:
No NSX Managers available. Verify current user has role assigned on NSX Manager.
- Fixed Issue 2446265: vSphere Web Client sends the vCenter GUID as NULL in the API call when retrieving the excluded VMs list.
In the vSphere Web Client, when the Add button is clicked to add new members (VMs) to the exclusion list, the following error message is displayed before complete data is loaded in the UI:
Failed to communicate with vCenter Server.
The Add Excluded Members dialog box is not displayed correctly.
- Fixed Issue 2447697: NSX Edge goes into bad state due to configuration failure and rollback failure.
Edge becomes unmanageable after a configuration failure.
- Fixed Issue 2453083: Stale entry in vnvp_vdr_instance table causes DLR config updates to fail and results in no logical interfaces on the host.
Traffic flow is disrupted. Attempt to either redeploy, synchronize, or upgrade the DLR fails.
- Fixed Issue 2460987: Service VM gets priority-tagged frames.
VXLAN encapsulated frames on uplinks with dot1p bit set results in SVM receiving priority tagged frames. Packets are dropped at SVM ports.
- Fixed Issue 2461125: On the NSX System Dashboard page, incorrect BGP neighbor count is displayed for UDLR.
Issue occurs due to multiple entries for routing feature in the edge_service_config table.
- Fixed Issue 2465697: Controller deployment fails when NSX Manager is at 6.4.x and NSX Controllers are at 6.3.x
After NSX Manager is upgraded from 6.3.x to 6.4.x, and before NSX Controllers are upgraded to 6.4.x, if you deploy 6.3.x NSX Controllers, the deployment fails because Syslog server is not supported in 6.3.x.
- Fixed Issue 2465720: While updating edge virtual appliances for edge-A, unintentionally, virtual appliances of edge-B also get updated.
Both edge-A and edge-B enter the redeploy state. Due to unwanted or unintentional changes to other edge appliances, multiple issues can occur, such as network latency, impact to business services, and so on.
- Fixed Issue 2467360: Guest Introspection service is stuck in warning state as a VM is not present in the NSX inventory.
Guest Introspection service is stuck in warning state because a VM cannot be found during endpoint health status monitoring. Health status XML sent by Mux contains a VM which is not present in the NSX inventory.
- Fixed Issue 2468786: Suspected broken prior Active Directory sync causes a database integrity issue (ai_group has a null primary_id) and prevents all future Active Directory syncs, resulting in future updates not being visible in NSX.
Active Directory sync keeps failing and subsequent Active Directory changes are not seen by NSX. While existing firewall rules continue to work for existing users and groups, users cannot see newly created Active Directory groups or any other Active Directory changes, such as new users, deleted users, or users added to or removed from Active Directory groups.
- Fixed Issue 2470918: CLI commands related to "show ip bgp *" generate a core dump.
Routes get deleted until the routing daemon is restarted. This causes a service outage until the routes are added again.
- Fixed Issue 2475392: When the same logical switch is used as a bridge and a logical interface in two different DLRs, a PSOD might happen on the host where the Control VM of the DLR with the bridge resides.
Such a configuration might go through when the configuration is done on the second DLR before the pending job on the first DLR is complete. Before the Purple Screen of Death (PSOD), the following error message is seen in the vsm.log file:
Cannot connect two Edge Distributed Routers edge-A and edge-B to the same network virtualwire-x.
- Fixed Issue 2478262: The "show host <host-id> health-status" command shows critical status even when the controller communication is healthy.
The health status shown by this command is a false positive. There is no impact on functionality.
- Fixed Issue 2479599: vserrdd process saturates the CPU usage.
CPU is saturated, and this probably impacts the other processes.
- Fixed Issue 2480450: Bug in error path when ESXi host is out of heap memory.
PSOD potentially occurs when ESXi host is out of heap memory at an inopportune time.
- Fixed Issue 2488759: Incorrect datastore path in the NSX Manager database during storage vMotion.
Mux crashes repeatedly and the guest OS frequently stops.
- Fixed Issue 2491042: High CPU load on NSX Manager caused by postgres process.
Postgres processes consume almost entire CPU resource as seen by the top command.
- Fixed Issue 2491749: Retransmitted SYN packet not always handled correctly in TCP state machine tracking code.
Depending on the sequence numbers, a retransmitted SYN packet can cause subsequent packet loss.
- Fixed Issue 2491914: While using network extensibility (NetX), after vMotion, TCP established sessions do not honor the state timeout.
After vMotion with NetX, although the TCP session continues, the idle timer starts and terminates the session after 12 hours. The TCP session terminates unexpectedly and the flows are disrupted.
- Fixed Issue 2499225: DHCP relay on a Distributed Logical Router does not transmit broadcast correctly.
When a client machine sends DHCP discover or request message with the broadcast bit set, the DHCP relay on the DLR does not send response packet as broadcast. This might cause some client machines to not receive the response packet.
- Fixed Issue 2503002: DFW publish operation times out but rules are still published.
This issue happens in a large-scale environment with a large number of DFW rules. The DFW page loads slowly in the UI. The localhost_access_log.txt file shows that the firewall publish API takes more than 2 minutes. The DFW rule configuration gets published by the API. However, in the UI, a timeout message is displayed.
- Fixed Issue 2506402: Edge UI and CLI on VM show discrepancies for concurrent connections.
Since NSX 6.4.0, an enhancement to quick HA switchover synced the flows from active edge node to standby edge node. This resulted in the edge interface statistics displaying double the data.
- Fixed Issue 2510798: NSX Edge Description field cannot be edited after deployment.
The Description field cannot be edited from both the UI and API after deploying the edge.
- Fixed Issue 2524239: BFD state change log cannot be printed on the syslog server.
BFD state change log is printed as an incomplete message and the syslog server cannot receive it.
- Fixed Issue 2531966: Firewall rule publish operation results in vCenter timeout.
When firewall configuration reaches 7000+ rules, adding or deleting of firewall rule section results in vCenter and NSX Manager communication timeout.
- Fixed Issue 2541643: Purge task does not handle jobs with schedule type FIXED_DATE.
System might slow down due to low disk space or high memory consumption. Job related tables contain a high number of records.
- Fixed Issue 2544344: Cannot create GRE tunnel with non-admin users who are assigned administrator privileges.
Non-admin users assigned full Enterprise Admin privileges cannot create GRE tunnels. However, the administrator can create GRE tunnels.
- Fixed Issue 2554069: Load balancer on NSX Edge crashes in NSX 6.4.6.
Segmentation faults occur, followed by frequent HAProxy service restarts.
- Fixed Issue 2556706: Unable to specify a local FQDN for some configurations on the NSX Controller.
In the NSX Controller syslog API, when local address is specified as an FQDN, the configuration fails.
- Fixed Issue 2560152: PSOD occurs on the ESXi host while exporting host log bundles in the NSX-prepared cluster.
This issue occurs when there are more than 100 VTEPs on a logical switch. Vmkernel zdumps get generated at /var/core.
- Fixed Issue 2560422: UI does not display the correct data on the VM Exclusion List page.
Multiple API calls are made by the UI to retrieve the exclusion list. Data loads slowly on the Exclusion List page in the UI. The page might show only part of the exclusion list in the grid.
- Fixed Issue 2561009: SSL VPN server daemon crashes when a large number of users try to log in with a single user credential.
SSL VPN daemon crashes and restarts.
- Fixed Issue 2566512: NSX Edge firewall does not support firewall rules configured with IGMP membership query, report and leave.
NSX Edge appliance does not support IGMP Leave and Membership services yet.
- Fixed Issue 2567085: VSFWD process might crash due to rule hit count processing.
VSFWD core file seen on host with backtrace pointing to rule hit processing. VSFWD watchdog automatically restarts the process after the crash.
- Fixed Issue 2444941, 2574374: In a large scale NSX environment, PSOD occurs on ESXi hosts when latency is enabled.
When BFD is enabled, network latency measurement is enabled. PSOD can occur when the number of BFD sessions exceeds 975.
- Fixed Issue 2586872: SSL VPN server daemon crashes when user tries to change password from the SSL VPN Client and the session is not found.
The SSL VPN daemon crashes and restarts. All existing SSL VPN sessions are terminated; all SSL VPN clients lose their sessions and must log in again to the SSL VPN server.
- Fixed Issue 2358130: When Global Address Sets are enabled and a vMotion export occurs, the tables are not migrated to the destination host.
vMotion might complete successfully, but tables will not exist on the destination host until the configuration is published from the management plane.
In the vmkernel.log file on the source host, the following message appears:
Create tbl failed: -1
- Fixed Issue 2382346: When WIN2K8 event log server is added using the UI, the event log server is incorrectly detected as WIN2K3.
Identity Firewall (IDFW) functionality breaks.
- Fixed Issue 2397810: When the locale in NSX Manager UI is set to US English (en_US) and the browser locale is set to French, some fields in the syslog are incorrectly logged in French.
Changes made to firewall rules are logged in French on the syslog server instead of being logged in English.
- Fixed Issue 2417536: IP translation of Security Groups with resource pools as members consumes a lot of memory and CPU when there are multiple vNICs per VM in the system.
High CPU usage causes NSX Manager to restart frequently.
- Fixed Issue 2449644: NSX Manager job queue is overloaded.
Management plane actions are not realized on the hypervisor.
- Fixed Issue 2459817: NSX inventory sync fails repeatedly.
The same instance UUIDs are detected in vCenter for different VMs. This results in the same vNIC IDs being generated for different vNICs. Users must detect this issue in the vsm.log file.
For example, the following information messages are displayed in the vsm.log file:
INFO ViInventoryThread VimVnic:159 - - [nsxv@6876 comp="nsx-manager" level="INFO" subcomp="manager"] Existing portgroupId : null , New portgroupId : dvportgroup-40 << indicates network change
INFO ViInventoryThread VimVnic:217 - - [nsxv@6876 comp="nsx-manager" level="INFO" subcomp="manager"] network id : dvportgroup-40 , vnic id : 501c0ab1-e03c-b3ee-b662-9fd4649005d4.000 <= vnic id derived from instance uuid 501c0ab1-e03c-b3ee-b662-9fd4649005d4
- Fixed Issue 2462523: High latency seen in VM traffic.
High number of NSX configuration churn leads to latency on the VM. When a high number of configurations reach the hypervisor, the local control plane tries to configure the data path, which blocks the traffic momentarily, thereby leading to latency. The churn in configuration might be created by VMs powering on or off continuously in a data center where these VMs are being used in groups.
To mitigate this issue, enable Global Address Sets, which reduce the configuration to one instance on the hypervisor rather than one instance per filter. Alternatively, keep the number of filters to around 25 to 40.
- Fixed Issue 2535292: IPv6 CIDR greater than /64 is not accepted due to management plane validation.
Firewall rule fails when it contains IPv6 CIDR with prefix greater than /64 in the source or destination of the rule definition.
- Fixed Issue 2214872: As part of vSphere Update Manager and ESX Agent Manager integration, EAM invokes VUM ImportFile API with http URL and VUM downloads bundle from that URL.
The VUM FileUploadManagerImpl::ImportFile() API is designed to work only with URLs that contain an extension, or at least a "." in the URL. If this condition is not satisfied, the GetUrlFileNameExtension() method throws an exception. vSphere EAM agencies corresponding to clusters in the NSX-V Host Preparation tab are shown as failed.
Known Issues
The known issues are grouped as follows.
- Installation and Upgrade Known Issues
- General Known Issues
- Logical Networking and NSX Edge Known Issues
- Interoperability Issues
Before upgrading, please read the section Upgrade Notes, earlier in this document.
- Issue 1263858 - SSL VPN does not send upgrade notification to remote client.
The SSL VPN gateway does not send an upgrade notification to users. The administrator has to manually inform remote users that the SSL VPN gateway (server) has been updated and that they must update their clients.
Workaround: Users need to uninstall the older version of the client and manually install the latest version.
- Issue 2429861: End-to-end latency is not shown on the vRNI UI after upgrading to NSX-v 6.4.6.
End-to-end latency is broken with vRNI interop for vRNI 4.2 and vRNI 5.0.
Workaround: Upgrade vRNI to 5.0.0-P2.
- Issue 2417029: Incorrect path for domain objects in NSX Manager database.
- NSX Edge upgrade or redeployment fails with the following error message:
Cluster/ResourcePool resgroup-XXX needs to be prepared for network fabric to upgrade edge edge-XX.
- Folder dropdown list does not show all VM folders in the Add Controller dialog box.
- One or more clusters might result in Not Ready status.
Workaround: Contact VMware Support for the workaround of this issue.
- Issue 2238989: Post upgrade of the vSphere Distributed Switch on the ESXi host, the Software RSS feature of the VDS does not take effect.
In hosts that have Software Receive Side Scaling (SoftRSS) enabled, com.vmware.net.vdr.softrss VDS property does not get restored post VDS upgrade. This causes SoftRSS to get disabled. The /var/run/log/vmkernel.log file shows errors related to softrss property configuration.
Workaround:
Before upgrading the VDS, remove the softrss property and reconfigure it post VDS upgrade.
- Issue 2107188: When the name of a VDS switch contains non-ASCII characters, NSX VIBs upgrade fails on ESXi hosts.
The /var/run/log/esxupdate.log file shows the upgrade status.
Workaround:
Change the VDS name to use ASCII characters and upgrade again.
- Issue 2590902: After upgrading to NSX 6.4.7, when a static IPv6 address is assigned to workload VMs on an IPv6 network, the VMs are unable to ping the IPv6 gateway interface of the edge.
This issue occurs after upgrading the vSphere Distributed Switches from 6.x to 7.0.
Workaround 1:
Select the VDS to which all the hosts are connected, go to Edit Settings, and under the Multicast option, switch to Basic.
Workaround 2:
Add the following rules on the edge firewall:
- Ping allow rule.
- Multicast Listener Discovery (MLD) allow rules, which are icmp6, type 130 (v1) and type 143 (v2).
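The rule set in Workaround 2 can be summarized as data (field names here are illustrative only and do not follow the NSX Edge firewall API schema):

```python
# Edge firewall rules allowing IPv6 workload VMs to ping the edge
# gateway: a ping allow rule plus MLD allow rules for ICMPv6 types
# 130 and 143, as listed in Workaround 2 above. Field names are
# illustrative, not an NSX API payload.
edge_firewall_rules = [
    {"name": "allow-ping", "protocol": "icmp6", "icmp_type": "echo-request", "action": "accept"},
    {"name": "allow-mld-v1", "protocol": "icmp6", "icmp_type": 130, "action": "accept"},
    {"name": "allow-mld-v2", "protocol": "icmp6", "icmp_type": 143, "action": "accept"},
]
```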
- In the vSphere Web Client, when you open a Flex component which overlaps an HTML view, the view becomes invisible.
When you open a Flex component, such as a menu or dialog, which overlaps an HTML view, the view is temporarily hidden.
(Reference: http://pubs.vmware.com/Release_Notes/en/developer/webclient/60/vwcsdk_600_releasenotes.html#issues)
Workaround: None.
- Issue 2543977: Distributed Firewall IPFIX collector is unable to see flows from UDP ports 137 or 138.
When IPFIX is enabled on the DFW, NetBIOS flows with ports 137 or 138 are not sent by the host.
Workaround:
Use vSphere Client or NSX REST API to remove excluded ports 137 or 138 from flow exclusions.
- Issue 2302171, 2590010: When IPFIX is enabled on a vSphere Distributed Switch with a 100% sampling rate, throughput drops.
Data path performance degradation occurs and the throughput on the data path reduces to half of the network channel bandwidth limit.
Workaround: Set the sampling rate to less than 10%.
- Issue 2574333: Edge vNIC configuration takes more than two minutes when NAT is configured with 8000 rules.
vSphere Client shows NSX Manager as not available. The UI times out after two minutes. The vNIC configuration completes successfully in 2-3 minutes.
Workaround: None
- Issue 2443830: Data path performance degradation occurs over VXLAN network when using ixgben NIC driver.
Throughput on the VXLAN network is reduced.
Workaround:
- Run the following command on the ESXi host:
esxcli system module parameters set -p "RSS=1,1" -m ixgben
- Restart the host.
- Issue 2547022: When a new Active Directory user is added in the child Active Directory Group, the user is not included in the security group that is based on the parent Active Directory Group.
The security group based on the parent Active Directory group does not have the user.
Workaround:
Add security groups that are based on the child Active Directory group.
- Issue 2614777: DFW rule publish fails if a rule comprises two services with overlapping service ports or port ranges.
DFW rule publish fails.
Workaround: See VMware knowledge base article 80238 for more information.
- Issue 1993241: IPSec Tunnel redundancy using static routing is not supported.
IPSec traffic will fail if the primary tunnel goes down, causing traffic disruption. This issue has existed since NSX 6.4.2.
Workaround: Disable and enable service.
- Issue 2430805, 2576416: DLR instances are sent to all NSX-prepared hosts which are connected to VDS.
Issues involving a VLAN designated instance are seen. VLAN traffic is disrupted when the VLAN designated instance lands on an invalid host.
Workaround:
- Configure VXLAN on NSX-prepared hosts with the same VDS.
- Reboot the host so that it gets the new configuration.
- Issue 2576294: When the logical interfaces on a DLR are connected to multiple VDS, the logical interfaces get attached to incorrect VDS on the host when the host is part of two VXLAN VDS.
If the DLR is force synced or redeployed in a misconfigured state, all the DLR logical interfaces get attached to the incorrect VDS. In addition, logical interfaces for all the new DLRs get attached to incorrect VDS on the host. Data path traffic is disrupted.
Workaround:
- Remove the second VDS that is not used for VXLAN preparation of the cluster.
- Depending on the current state of the host configuration, either redeploy, or force sync, or sync route service to correct the configuration on host.
- Issue 2582197: When an NSX Edge interface is configured with /32 subnet mask, the connected route is not seen in the routing or forwarding table.
Traffic to that interface might still work, but a static route would be needed to reach any peers. Users cannot use interfaces with a /32 subnet mask. There is no issue with any other subnet mask.
Workaround: Use any subnet mask other than /32.
- Issue 2594802: In the Service Composer, users cannot move a security policy beyond the 200th position by using the Manage Priority dialog box.
This issue is observed in NSX 6.4.3 and later, and it occurs only when there are more than 200 security policies. The Manage Priority dialog box displays only the first 200 policies, and therefore users cannot move a security policy beyond the 200th position.
Workaround:
Identify the weight of your security policy so that the security policy can be moved to the desired rank. For example, assume that rank 300 has weight 1000 and rank 301 has weight 1200. To move your security policy to rank 301, provide a weight of 1100 (that is, a number between 1000 and 1200).
Do these steps:
- Open the Edit Security Policy dialog box.
- Edit the weight of your security policy to 1100 so that it can be moved to position 301.
- Click Finish.
- Observe that the policy is moved to position 301.
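The weight arithmetic in the steps above can be sketched as follows (a hypothetical helper, not part of the NSX API):

```python
def weight_for_rank(weight_before: int, weight_after: int) -> int:
    """Pick an integer weight strictly between two neighboring policies.

    Service Composer orders security policies by weight, so assigning a
    weight between the weights of two neighboring policies places the
    policy between them in rank. This helper is illustrative only.
    """
    lo, hi = sorted((weight_before, weight_after))
    if hi - lo < 2:
        raise ValueError("no integer weight fits between the neighbors")
    return (lo + hi) // 2

# Example from the workaround: rank 300 has weight 1000 and rank 301
# has weight 1200, so 1100 moves the policy to position 301.
print(weight_for_rank(1000, 1200))  # -> 1100
```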
- Issue 2395158: Firewall section is grayed out when Enterprise Admin role is assigned to an Active Directory group.
Enterprise Admin role is assigned to an Active Directory group by navigating to Users and Domains > Users > Identity User > Specify vCenter Group. When users belonging to an Active Directory group log in and lock a firewall section, the locked section is grayed out for these users after the page is refreshed.
Workaround:
Instead of assigning the Enterprise Admin role to the Active Directory group, assign this role directly to the users by navigating to Users and Domains > Users > Identity User > Specify vCenter User.
- Issue 2444677: Kernel panic error occurs on the NSX Edge when large files are transferred through L2 VPN over IPSec VPN tunnels.
This error occurs when the MTU is set to 1500 bytes.
Workaround:
Set the MTU to 1600 bytes.
- Issue 2574260: Updating an existing Logical Switch configuration to enable the use of guest VLANs (Virtual Guest Tagging or 802.1q VLAN tagging) does not produce the expected result.
The VDS portgroup that is associated with the Logical Switch is not updated accordingly.
Workaround: None
- Issue 2562965: When a DFW rule configuration is saved, the Save button spins for a few minutes and finally the UI throws an error.
The following error message is displayed in the vSphere Client:
No NSX Managers Available. Verify current user has role assigned on NSX Manager.
This issue occurs only while saving the DFW rule configuration. There is no impact on publishing the DFW rule.
Workaround: None
- Issue 2595144: In NSX 6.4.6, Quad Large Edge memory utilization exceeds 90% when active interface count is 6 or above.
This issue is observed in NSX 6.4.6 and later due to the change introduced for Rx ring buffer size to 4096 on Quad Large edges.
- NSX Edge reports high memory utilization when the interface count is 6 or above.
- On the NSX Edge VM, the top -H command output shows normal user space utilization.
- The slabtop -o command output shows an object count of over 100,000 for the skbuff_head_cache.
Workaround: Contact VMware Support for the workaround of this issue.
- Issue 2586381: VMware Integrated OpenStack (VIO) is not compatible with NSX 6.4.7 due to multiple scale issues.
NSX 6.4.7 does not support interoperability with VIO.
Workaround: None