
VMware NSX Data Center for vSphere 6.4.6 | Released October 10, 2019 | Build 14819921

See the Revision History of this document.

What's in the Release Notes

The release notes cover the following topics:

  • What's New in NSX Data Center for vSphere 6.4.6
  • Versions, System Requirements and Installation
  • Deprecated and Discontinued Functionality
  • Upgrade Notes
  • FIPS Compliance
  • Document Revision History
  • Resolved Issues
  • Known Issues

What's New in NSX Data Center for vSphere 6.4.6

Important: NSX for vSphere is now known as NSX Data Center for vSphere.

NSX Data Center for vSphere 6.4.6 adds usability and serviceability enhancements, and addresses a number of specific customer bugs. See Resolved Issues for more information.

Changes introduced in NSX Data Center for vSphere 6.4.6:

  • VMware NSX - Functionality Updates for vSphere Client (HTML): The following VMware NSX features are now available through the vSphere Client: Edge Services (Edge Firewall, L2 VPN, IPSEC VPN, NSX Configurations). For a list of supported functionality, please see VMware NSX for vSphere UI Plug-in Functionality in vSphere Client.
  • NSX Manager’s auditor role now has privileges to collect NSX Edge support log bundles using Skyline Log Assist. With this permissions enhancement, a user with an auditor role can download the support log bundle from both NSX Manager and Edge nodes without restriction. See the NSX Administration Guide for the full list of roles and permissions.

Versions, System Requirements and Installation

Note:

  • The table below lists recommended versions of VMware software. These recommendations are general and should not replace or override environment-specific recommendations. This information is current as of the publication date of this document.

  • For the minimum supported version of NSX and other VMware products, see the VMware Product Interoperability Matrix. VMware declares minimum supported versions based on internal testing.

NSX Data Center for vSphere

VMware recommends the latest NSX release for new deployments.

When upgrading existing deployments, please review the NSX Data Center for vSphere Release Notes or contact your VMware technical support representative for more information on specific issues before planning an upgrade.

vSphere

For vSphere 6.0:
Recommended: 6.0 Update 3
vSphere 6.0 Update 3 resolves the issue of duplicate VTEPs in ESXi hosts after rebooting vCenter Server. See VMware Knowledge Base article 2144605 for more information.

For vSphere 6.5:
Recommended: 6.5 Update 3
Note: vSphere 6.5 Update 1 resolves the issue of EAM failing with OutOfMemory. See VMware Knowledge Base Article 2135378 for more information.
Important:

  • If you are using multicast routing on vSphere 6.5, vSphere 6.5 Update 2 or higher is recommended.
  • If you are using NSX Guest Introspection on vSphere 6.5, vSphere 6.5 P03 or higher is recommended.

For vSphere 6.7:
Recommended: 6.7 Update 2
Important: 

  • If you are using NSX Guest Introspection on vSphere 6.7, please refer to Knowledge Base Article 57248 prior to installing NSX 6.4.6, and consult VMware Customer Support for more information.

Note: vSphere 5.5 is not supported with NSX 6.4.

Guest Introspection for Windows

It is recommended that you upgrade VMware Tools to 10.3.10 before upgrading NSX for vSphere.

Guest Introspection for Linux

This NSX version supports the following Linux versions:
  • RHEL 7 GA (64 bit)
  • SLES 12 GA (64 bit)
  • Ubuntu 14.04 LTS (64 bit)

System Requirements and Installation

For the complete list of NSX installation prerequisites, see the System Requirements for NSX section in the NSX Installation Guide.

For installation instructions, see the NSX Installation Guide or the NSX Cross-vCenter Installation Guide.

Deprecated and Discontinued Functionality

End of Life and End of Support Warnings

For information about NSX and other VMware products that must be upgraded soon, please consult the VMware Lifecycle Product Matrix.

  • NSX for vSphere 6.1.x reached End of Availability (EOA) and End of General Support (EOGS) on January 15, 2017. (See also VMware knowledge base article 2144769.)

  • vCNS Edges are no longer supported. You must upgrade vCNS Edges to NSX Edges before upgrading to NSX 6.3 or later.

  • NSX for vSphere 6.2.x has reached End of General Support (EOGS) as of August 20, 2018.

  • Based on security recommendations, 3DES as an encryption algorithm in the NSX Edge IPsec VPN service is no longer supported.
    It is recommended that you switch to one of the secure ciphers available in the IPsec service. This change applies to both IKE SA (phase 1) and IPsec SA (phase 2) negotiation for an IPsec site.

    If the 3DES encryption algorithm is in use by the NSX Edge IPsec service at the time of upgrade to the release in which its support is removed, it is replaced by another recommended cipher, and therefore the IPsec sites that were using 3DES will not come up unless the configuration on the remote peer is modified to match the encryption algorithm used in NSX Edge.

    If using 3DES encryption, modify the encryption algorithm in the IPsec site configuration to replace 3DES with one of the supported AES variants (AES / AES256 / AES-GCM). For example, for each IPsec site configuration that uses 3DES as the encryption algorithm, replace it with AES, and update the IPsec configuration at the peer endpoint accordingly, as sketched below.
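
    As an illustration only (not the authoritative procedure; the element names, including encryptionAlgorithm, are assumptions to be verified against the NSX API Guide), the site configuration can be read and updated through the NSX Edge IPsec API:

        GET https://<nsxmanager>/api/4.0/edges/<edgeId>/ipsec/config

        PUT https://<nsxmanager>/api/4.0/edges/<edgeId>/ipsec/config

        <ipsec>
          <sites>
            <site>
              ...
              <!-- Assumed element name; replace 3des with aes, aes256, or aes-gcm -->
              <encryptionAlgorithm>aes256</encryptionAlgorithm>
            </site>
          </sites>
        </ipsec>

    Remember to make the matching change on the remote peer, or the tunnel will not come up.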

General Behavior Changes

If you have more than one vSphere Distributed Switch, and if VXLAN is configured on one of them, you must connect any Distributed Logical Router interfaces to port groups on that vSphere Distributed Switch. Starting in NSX 6.4.1, this configuration is enforced in the UI and API. In earlier releases, you were not prevented from creating an invalid configuration.  If you upgrade to NSX 6.4.1 or later and have incorrectly connected DLR interfaces, you will need to take action to resolve this. See the Upgrade Notes for details.

User Interface Removals and Changes

In NSX 6.4.1, Service Composer Canvas is removed.

Installation Behavior Changes

Starting with version 6.4.2, when you install NSX Data Center for vSphere on hosts that have physical NICs with ixgbe drivers, Receive Side Scaling (RSS) is not enabled on the ixgbe drivers by default. You must enable RSS manually on the hosts before installing NSX Data Center. Make sure that you enable RSS only on the hosts that have NICs with ixgbe drivers. For detailed steps about enabling RSS, see the VMware knowledge base article https://kb.vmware.com/s/article/2034676. This knowledge base article describes recommended RSS settings for improved VXLAN packet throughput.

This new behavior applies only when you are doing a fresh installation of kernel modules (VIB files) on the ESXi hosts. No changes are required when you are upgrading NSX-managed hosts to 6.4.2.
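
For example, on a host whose NICs use the ixgbe driver, RSS can be enabled by setting the driver module parameter. This is an illustrative sketch only; the exact parameter string ("RSS=4" here is an example value) depends on the driver version, so verify it against knowledge base article 2034676 before applying:

    # Check the current ixgbe module parameters.
    esxcli system module parameters list -m ixgbe

    # Enable RSS (example value; the required string varies by driver version).
    esxcli system module parameters set -m ixgbe -p "RSS=4"

    # Reboot the host for the module parameter change to take effect.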

API Removals and Behavior Changes

Deprecations in NSX 6.4.2

The following item is deprecated, and might be removed in a future release:

  • GET/POST/DELETE /api/2.0/vdn/controller/{controllerId}/syslog. Use GET/PUT /api/2.0/vdn/controller/cluster/syslog instead.
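
    For example, a cluster-wide syslog exporter might be configured as follows (an illustrative sketch; the body element names, including controllerSyslogServer, are assumptions to be verified against the NSX API Guide):

        PUT https://<nsxmanager>/api/2.0/vdn/controller/cluster/syslog

        <!-- Assumed element names and example values -->
        <controllerSyslogServer>
          <syslogServer>10.1.1.10</syslogServer>
          <port>514</port>
          <protocol>UDP</protocol>
          <level>INFO</level>
        </controllerSyslogServer>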

Behavior Changes in NSX 6.4.1

When you create a new IP pool with POST /api/2.0/services/ipam/pools/scope/globalroot-0, or modify an existing IP pool with PUT /api/2.0/services/ipam/pools/, and the pool has multiple IP ranges defined, validation ensures that the ranges do not overlap. This validation was not previously performed.
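
For example, the following request body defines two non-overlapping ranges and passes the new validation; if the second range started at 192.168.1.40 instead, it would overlap the first range (which ends at .50) and the request would be rejected. This is an illustrative sketch; the element names (ipamAddressPool, ipRangeDto) are assumptions to be verified against the NSX API Guide:

    POST https://<nsxmanager>/api/2.0/services/ipam/pools/scope/globalroot-0

    <ipamAddressPool>
      <name>vtep-pool</name>
      <prefixLength>24</prefixLength>
      <gateway>192.168.1.1</gateway>
      <ipRanges>
        <ipRangeDto>
          <startAddress>192.168.1.10</startAddress>
          <endAddress>192.168.1.50</endAddress>
        </ipRangeDto>
        <ipRangeDto>
          <startAddress>192.168.1.60</startAddress>
          <endAddress>192.168.1.100</endAddress>
        </ipRangeDto>
      </ipRanges>
    </ipamAddressPool>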

Deprecations in NSX 6.4.0
The following items are deprecated, and might be removed in a future release.

  • The systemStatus parameter in GET /api/4.0/edges/edgeID/status is deprecated.
  • GET /api/2.0/services/policy/serviceprovider/firewall/ is deprecated. Use GET /api/2.0/services/policy/serviceprovider/firewall/info instead.
  • Setting tcpStrict in the global configuration section of Distributed Firewall is deprecated. Starting in NSX 6.4.0, tcpStrict is defined at the section level. Note: If you upgrade to NSX 6.4.0 or later, the global configuration setting for tcpStrict is used to configure tcpStrict in each existing layer 3 section. tcpStrict is set to false in layer 2 sections and layer 3 redirect sections. See "Working with Distributed Firewall Configuration" in the NSX API Guide for more information.

Behavior Changes in NSX 6.4.0
In NSX 6.4.0, the <name> parameter is required when you create a controller with POST /api/2.0/vdn/controller.

NSX 6.4.0 introduces these changes in error handling:

  • Previously, POST /api/2.0/vdn/controller responded with 201 Created to indicate that the controller creation job was created, even though the creation of the controller might still fail. Starting in NSX 6.4.0, the response is 202 Accepted (see the sketch below).
  • Previously, if you sent an API request that is not allowed in transit or standalone mode, the response status was 400 Bad Request. Starting in 6.4.0, the response status is 403 Forbidden.
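
For example (an illustrative sketch; the body element names shown for controllerSpec are assumptions, and the full schema is in the NSX API Guide):

    POST https://<nsxmanager>/api/2.0/vdn/controller

    <controllerSpec>
      <!-- <name> is required in NSX 6.4.0 and later; other values are examples -->
      <name>controller-1</name>
      <ipPoolId>ipaddresspool-1</ipPoolId>
      <resourcePoolId>domain-c7</resourcePoolId>
      <datastoreId>datastore-10</datastoreId>
    </controllerSpec>

The response is 202 Accepted; because the deployment can still fail asynchronously, monitor the returned job or the controller status to confirm success.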

CLI Removals and Behavior Changes

Do not use unsupported commands on NSX Controller nodes
There are undocumented commands to configure NTP and DNS on NSX Controller nodes. These commands are not supported, and should not be used on NSX Controller nodes. You should only use commands which are documented in the NSX CLI Guide.

Upgrade Notes

Note: For a list of known issues affecting installation and upgrades, see Installation and Upgrade Known Issues.

General Upgrade Notes

  • To upgrade NSX, you must perform a full NSX upgrade including host cluster upgrade (which upgrades the host VIBs). For instructions, see the NSX Upgrade Guide including the Upgrade Host Clusters section.

  • Upgrading NSX VIBs on host clusters using VUM is not supported. Use Upgrade Coordinator, Host Preparation, or the associated REST APIs to upgrade NSX VIBs on host clusters.

  • System Requirements: For information on system requirements while installing and upgrading NSX, see the System Requirements for NSX section in NSX documentation.

  • Upgrade path for NSX: The VMware Product Interoperability Matrix provides details about the upgrade paths from VMware NSX.
  • Cross-vCenter NSX upgrade is covered in the NSX Upgrade Guide.

  • Downgrades are not supported:
    • Always capture a backup of NSX Manager before proceeding with an upgrade.

    • Once NSX has been upgraded successfully, NSX cannot be downgraded.

  • To validate that your upgrade to NSX 6.4.x was successful see knowledge base article 2134525.
  • There is no support for upgrades from vCloud Networking and Security to NSX 6.4.x. You must upgrade to a supported 6.2.x release first.

  • Interoperability: Check the VMware Product Interoperability Matrix for all relevant VMware products before upgrading.
    • Upgrading to NSX Data Center for vSphere 6.4: NSX 6.4 is not compatible with vSphere 5.5.
    • Upgrading to NSX Data Center for vSphere 6.4.5: If NSX is deployed with VMware Integrated OpenStack (VIO), upgrade VIO to 4.1.2.2 or 5.1.0.1, as 6.4.5 is incompatible with previous releases due to the Spring package update to version 5.0.
    • Upgrading to vSphere 6.5: When upgrading to vSphere 6.5a or later 6.5 versions, you must first upgrade to NSX 6.3.0 or later. NSX 6.2.x is not compatible with vSphere 6.5. See Upgrading vSphere in an NSX Environment in the NSX Upgrade Guide.
    • Upgrading to vSphere 6.7: When upgrading to vSphere 6.7, you must first upgrade to NSX 6.4.1 or later. Earlier versions of NSX are not compatible with vSphere 6.7. See Upgrading vSphere in an NSX Environment in the NSX Upgrade Guide.
  • Partner services compatibility: If your site uses VMware partner services for Guest Introspection or Network Introspection, you must review the  VMware Compatibility Guide before you upgrade, to verify that your vendor's service is compatible with this release of NSX.
  • Networking and Security plug-in: After upgrading NSX Manager, you must log out and log back in to the vSphere Web Client. If the NSX plug-in does not display correctly, clear your browser cache and history. If the Networking and Security plug-in does not appear in the vSphere Web Client, reset the vSphere Web Client server as explained in the NSX Upgrade Guide.
  • Stateless environments: For NSX upgrades in a stateless host environment, the new VIBs are pre-added to the Host Image profile during the NSX upgrade process. As a result, the NSX upgrade process on stateless hosts follows this sequence:

    Prior to NSX 6.2.0, there was a single URL on NSX Manager from which VIBs for a certain version of the ESX host could be found. (Meaning the administrator only needed to know a single URL, regardless of NSX version.) In NSX 6.2.0 and later, the new NSX VIBs are available at different URLs. To find the correct VIBs, perform the following steps (an illustrative example follows the steps):

    1. Find the new VIB URL from https://<nsxmanager>/bin/vdn/nwfabric.properties.
    2. Fetch VIBs of required ESX host version from corresponding URL.
    3. Add them to host image profile.
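
    For example (an illustrative sketch; the property names and paths vary by release, so treat the VDN_VIB_PATH entry shown as a sample):

      # Step 1: read the VIB location properties from NSX Manager.
      curl -k https://<nsxmanager>/bin/vdn/nwfabric.properties

      # The output contains per-ESXi-version entries similar to:
      #   VDN_VIB_PATH.3=/bin/vdn/vibs-6.4.6/6.7-<build>/vxlan.zip

      # Step 2: fetch the VIB bundle for the required ESXi host version.
      curl -k -O https://<nsxmanager>/bin/vdn/vibs-6.4.6/6.7-<build>/vxlan.zip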

Upgrade Notes for NSX Components

Support for VM Hardware version 11 for NSX components

  • For new installs of NSX Data Center for vSphere 6.4.2, the NSX components (Manager, Controller, Edge, Guest Introspection) are on VM Hardware version 11.
  • For upgrades to NSX Data Center for vSphere 6.4.2, the NSX Edge and Guest Introspection components are automatically upgraded to VM Hardware version 11. The NSX Manager and NSX Controller components remain on VM Hardware version 8 following an upgrade. Users have the option to upgrade the VM Hardware to version 11. Consult KB (https://kb.vmware.com/s/article/1010675) for instructions on upgrading VM Hardware versions.
  • For new installs of NSX 6.3.x, 6.4.0, 6.4.1, the NSX components (Manager, Controller, Edge, Guest Introspection) are on VM Hardware version 8.

NSX Manager Upgrade

  • Important: If you are upgrading from NSX 6.2.0, 6.2.1, or 6.2.2 to NSX 6.3.5 or later, you must complete a workaround before starting the upgrade. See VMware Knowledge Base article 000051624 for details.

  • If you are upgrading from NSX 6.3.3 to NSX 6.3.4 or later you must first follow the workaround instructions in VMware Knowledge Base article 2151719.

  • If you use SFTP for NSX backups, change to hmac-sha2-256 after upgrading to 6.3.0 or later, because hmac-sha1 is no longer supported. See VMware Knowledge Base article 2149282 for a list of supported security algorithms.

  • When you upgrade NSX Manager to NSX 6.4.1, a backup is automatically taken and saved locally as part of the upgrade process. See Upgrade NSX Manager for more information.

  • When you upgrade to NSX 6.4.0, the TLS settings are preserved. If you have only TLS 1.0 enabled, you will be able to view the NSX plug-in in the vSphere Web Client, but NSX Managers are not visible. There is no impact to datapath, but you cannot change any NSX Manager configuration. Log in to the NSX appliance management web UI at https://nsx-mgr-ip/ and enable TLS 1.1 and TLS 1.2. This reboots the NSX Manager appliance.

Controller Upgrade

  • The NSX Controller cluster must contain three controller nodes. If it has fewer than three controllers, you must add controllers before starting the upgrade. See Deploy NSX Controller Cluster for instructions.
  • In NSX 6.3.3, the underlying operating system of the NSX Controller changes. This means that when you upgrade from NSX 6.3.2 or earlier to NSX 6.3.3 or later, instead of an in-place software upgrade, the existing controllers are deleted one at a time, and new Photon OS based controllers are deployed using the same IP addresses.

    When the controllers are deleted, this also deletes any associated DRS anti-affinity rules. You must create new anti-affinity rules in vCenter to prevent the new controller VMs from residing on the same host.

    See Upgrade the NSX Controller Cluster for more information on controller upgrades.

Host Cluster Upgrade

  • If you upgrade from NSX 6.3.2 or earlier to NSX 6.3.3 or later, the NSX VIB names change.
    The esx-vxlan and esx-vsip VIBs are replaced with esx-nsxv if you have NSX 6.3.3 or later installed on ESXi 6.0 or later.

  • Rebootless upgrade and uninstall on hosts: On vSphere 6.0 and later, once you have upgraded from NSX 6.2.x to NSX 6.3.x or later, any subsequent NSX VIB changes will not require a reboot. Instead hosts must enter maintenance mode to complete the VIB change. This affects both NSX host cluster upgrade, and ESXi upgrade. See the NSX Upgrade Guide for more information.

NSX Edge Upgrade

  • Validation added in NSX 6.4.1 to disallow invalid distributed logical router configurations: In environments where VXLAN is configured and more than one vSphere Distributed Switch is present, distributed logical router interfaces must be connected to the VXLAN-configured vSphere Distributed Switch only. Upgrading a DLR to NSX 6.4.1 or later fails in those environments if the DLR has interfaces connected to a vSphere Distributed Switch that is not configured for VXLAN. Use the API to connect any incorrectly configured interfaces to port groups on the VXLAN-configured vSphere Distributed Switch. Once the configuration is valid, retry the upgrade. You can change the interface configuration using PUT /api/4.0/edges/{edgeId} or PUT /api/4.0/edges/{edgeId}/interfaces/{index}. See the NSX API Guide for more information.
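
    For example, a single interface can be reconnected as follows (an illustrative sketch; the body elements shown, including connectedToId, are assumptions to be verified against the full interface schema in the NSX API Guide):

        PUT https://<nsxmanager>/api/4.0/edges/<edgeId>/interfaces/<index>

        <!-- Connect the interface to a port group on the VXLAN-configured
             vSphere Distributed Switch; the connectedToId value is an example -->
        <interface>
          <name>lif-app</name>
          <connectedToId>dvportgroup-123</connectedToId>
          <isConnected>true</isConnected>
        </interface>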
  • Delete UDLR Control VM from vCenter Server that is associated with secondary NSX Manager before upgrading UDLR from 6.2.7 to 6.4.5:
    In a multi-vCenter environment, when you upgrade NSX UDLRs from 6.2.7 to 6.4.5, the upgrade of the UDLR virtual appliance (UDLR Control VM) fails on the secondary NSX Manager, if HA is enabled on the UDLR Control VM. During the upgrade, the VM with ha index #0 in the HA pair is removed from the NSX database; but, this VM continues to exist on the vCenter Server. Therefore, when the UDLR Control VM is upgraded on the secondary NSX Manager, the upgrade fails because the name of the VM clashes with an existing VM on the vCenter Server. To resolve this issue, delete the Control VM from the vCenter Server that is associated with the UDLR on the secondary NSX Manager, and then upgrade the UDLR from 6.2.7 to 6.4.5.

  • Host clusters must be prepared for NSX before upgrading NSX Edge appliances: Management-plane communication between NSX Manager and Edge via the VIX channel is no longer supported starting in 6.3.0. Only the message bus channel is supported. When you upgrade from NSX 6.2.x or earlier to NSX 6.3.0 or later, you must verify that host clusters where NSX Edge appliances are deployed are prepared for NSX, and that the messaging infrastructure status is GREEN. If host clusters are not prepared for NSX, upgrade of the NSX Edge appliance will fail. See Upgrade NSX Edge in the NSX Upgrade Guide for details.

  • Upgrading Edge Services Gateway (ESG):
    Starting in NSX 6.2.5, resource reservation is carried out at the time of NSX Edge upgrade. When vSphere HA is enabled on a cluster having insufficient resources, the upgrade operation may fail due to vSphere HA constraints being violated.

    To avoid such upgrade failures, perform the following steps before you upgrade an ESG:

    The following resource reservations are used by the NSX Manager if you have not explicitly set values at the time of install or upgrade.

    NSX Edge Form Factor   CPU Reservation   Memory Reservation
    COMPACT                1000 MHz          512 MB
    LARGE                  2000 MHz          1024 MB
    QUADLARGE              4000 MHz          2048 MB
    X-LARGE                6000 MHz          8192 MB
    1. Always ensure that your installation follows the best practices laid out for vSphere HA. Refer to Knowledge Base article 1002080.

    2. Use the NSX tuning configuration API:
      PUT https://<nsxmanager>/api/4.0/edgePublish/tuningConfiguration
      ensuring that the values for edgeVCpuReservationPercentage and edgeMemoryReservationPercentage fit within the available resources for the form factor (see the table above for defaults, and the sketch below).
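
    A typical pattern is to read the current tuning configuration, adjust only the reservation percentages, and write the full body back (an illustrative sketch; the percentages shown are examples, and the other fields returned by the GET should be preserved):

      GET https://<nsxmanager>/api/4.0/edgePublish/tuningConfiguration

      PUT https://<nsxmanager>/api/4.0/edgePublish/tuningConfiguration

      <tuningConfiguration>
        ...
        <!-- Example values; keep them within the resources available
             for the deployed form factor -->
        <edgeVCpuReservationPercentage>50</edgeVCpuReservationPercentage>
        <edgeMemoryReservationPercentage>50</edgeMemoryReservationPercentage>
      </tuningConfiguration>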

  • Disable vSphere's Virtual Machine Startup option where vSphere HA is enabled and Edges are deployed. After you upgrade your 6.2.4 or earlier NSX Edges to 6.2.5 or later, you must turn off the vSphere Virtual Machine Startup option for each NSX Edge in a cluster where vSphere HA is enabled and Edges are deployed. To do this, open the vSphere Web Client and find the ESXi host where the NSX Edge virtual machine resides. Click Manage > Settings, and under Virtual Machines, select VM Startup/Shutdown. Click Edit and make sure that the virtual machine is in Manual mode (that is, make sure it is not added to the Automatic Startup/Shutdown list).

  • Before upgrading to NSX 6.2.5 or later, make sure all load balancer cipher lists are colon separated. If your cipher list uses another separator such as a comma, make a PUT call to https://nsxmgr_ip/api/4.0/edges/EdgeID/loadbalancer/config/applicationprofiles and replace each <ciphers> </ciphers> list in <clientssl> </clientssl> and <serverssl> </serverssl> with a colon-separated list. For example, the relevant segment of the request body might look like the following. Repeat this procedure for all application profiles:

    <applicationProfile>
      <name>https-profile</name>
      <insertXForwardedFor>false</insertXForwardedFor>
      <sslPassthrough>false</sslPassthrough>
      <template>HTTPS</template>
      <serverSslEnabled>true</serverSslEnabled>
      <clientSsl>
        <ciphers>AES128-SHA:AES256-SHA:ECDHE-ECDSA-AES256-SHA</ciphers>
        <clientAuth>ignore</clientAuth>
        <serviceCertificate>certificate-4</serviceCertificate>
      </clientSsl>
      <serverSsl>
        <ciphers>AES128-SHA:AES256-SHA:ECDHE-ECDSA-AES256-SHA</ciphers>
        <serviceCertificate>certificate-4</serviceCertificate>
      </serverSsl>
      ...
    </applicationProfile>
  • Set Correct Cipher version for Load Balanced Clients on vROPs versions older than 6.2.0: vROPs pool members on vROPs versions older than 6.2.0 use TLS version 1.0 and therefore you must set a monitor extension value explicitly by setting "ssl-version=10" in the NSX Load Balancer configuration. See Create a Service Monitor in the NSX Administration Guide for instructions.
    {
        "expected" : null,
        "extension" : "ssl-version=10",
        "send" : null,
        "maxRetries" : 2,
        "name" : "sm_vrops",
        "url" : "/suite-api/api/deployment/node/status",
        "timeout" : 5,
        "type" : "https",
        "receive" : null,
        "interval" : 60,
        "method" : "GET"
    }
  • After upgrading to NSX 6.4.6, L2 bridges and interfaces on a DLR cannot connect to logical switches belonging to different transport zones:  In NSX 6.4.5 or earlier, L2 bridge instances and interfaces on a Distributed Logical Router (DLR) supported use of logical switches that belonged to different transport zones. Starting in NSX 6.4.6, this configuration is not supported. The L2 bridge instances and interfaces on a DLR must connect to logical switches that are in a single transport zone. If logical switches from multiple transport zones are used, edge upgrade is blocked during pre-upgrade validation checks when you upgrade NSX to 6.4.6. To resolve this edge upgrade issue, ensure that the bridge instances and interfaces on a DLR are connected to logical switches in a single transport zone.

Upgrade Notes for FIPS

When you upgrade from a version of NSX earlier than NSX 6.3.0 to NSX 6.3.0 or later, you must not enable FIPS mode before the upgrade is completed. Enabling FIPS mode before the upgrade is complete will interrupt communication between upgraded and not-upgraded components. See Understanding FIPS Mode and NSX Upgrade in the NSX Upgrade Guide for more information.

  • Ciphers supported on OS X Yosemite and OS X El Capitan: If you are using the SSL VPN client on OS X 10.11 (El Capitan), you can connect using the AES128-GCM-SHA256, ECDHE-RSA-AES128-GCM-SHA256, ECDHE-RSA-AES256-GCM-SHA384, AES256-SHA, and AES128-SHA ciphers. On OS X 10.10 (Yosemite), you can connect using the AES256-SHA and AES128-SHA ciphers only.

  • Do not enable FIPS before the upgrade to NSX 6.3.x is complete. See Understanding FIPS Mode and NSX Upgrade in the NSX Upgrade Guide for more information.

  • Before you enable FIPS, verify any partner solutions are FIPS mode certified. See the VMware Compatibility Guide and the relevant partner documentation.

FIPS Compliance

NSX 6.4 uses FIPS 140-2 validated cryptographic modules for all security-related cryptography when correctly configured.

Note:

  • Controller and Clustering VPN: The NSX Controller uses IPsec VPN to connect Controller clusters. The IPsec VPN uses the VMware Linux Kernel Cryptographic Module (VMware Photon OS 1.0 environment), which is in the process of being CMVP validated.
  • Edge IPsec VPN: The NSX Edge IPsec VPN uses the VMware Linux Kernel Cryptographic Module (VMware NSX OS 4.4 environment), which is in the process of being CMVP validated.

Document Revision History

10 October 2019: First edition.
6 November 2019: Second edition. Added Known Issue 2442884 and related upgrade note.
3 December 2019: Third edition. Added workaround link to Known Issue 2367906.
9 January 2020: Fourth edition. Added Known Issue 2449643.
18 May 2020: Fifth edition. Added Known Issues 2444275, 2445183, 2445396, 2448235, 2448449, 2451586, 2452833, 2458746, 2459936, 2493095, 2498988, 2502739, 2509224, 2513773, 2536058, 2551523.
25 May 2020: Sixth edition. Added Edge upgrade note for Fixed Issue 2379274.

Resolved Issues

  • Fixed Issue 1648578 - NSX forces the addition of cluster/network/storage when creating a new NetX host-based service instance.
    When you create a new service instance from the vSphere Web Client for NetX host-based services such as Firewall, IDS, and IPS, you are forced to add cluster/network/storage even though these are not required.
  • Fixed Issue 2001988 - During NSX host cluster upgrade, Installation status in Host Preparation tab alternates between "not ready" and "installing" for the entire cluster when each host in the cluster is upgrading.

    During NSX upgrade, clicking "upgrade available" for an NSX-prepared cluster triggers the host upgrade. For clusters configured with DRS FULL AUTOMATIC, the installation status alternates between "installing" and "not ready", even though the hosts are upgraded in the background without issues.

  • Fixed Issue 2005973 - Routing daemon MSR loses all routing configuration after deleting a few GRE tunnels and then doing a force sync of the edge node from the Management Plane.

    This problem can occur on an edge with BGP sessions over GRE tunnels. When some of the GRE tunnels are deleted and a force sync of the edge is then done from the MP, the edge loses all routing configuration.

  • Fixed Issue 2005900 - Routing daemon MSR on Edge is stuck at 100% CPU when all GRE tunnels are flapped in an 8-way iBGP/multi-hop BGP ECMP scale topology.

    This problem can occur in a scale topology where iBGP or multi-hop BGP is configured on ESG with multiple neighbors running over many GRE tunnels. When multiple GRE tunnels flap, MSR may get stuck indefinitely at 100% CPU.

  • Fixed Issue 2118255 - DLR control VM doesn't participate in multicast routing.

    The DLR control VM doesn't participate in multicast routing; all multicast-related CLI commands show empty output on the control VM.

  • Fixed Issue 2189417 - Number of Event Logs exceeding the supported limit.

    When events occur at scale, some are lost, causing loss of IDFW functionality (only for physical workloads).

  • Fixed Issue 2312793 - etag value in the HTTP header has double quotes around it.

    REST API calls fail when the etag value received in the response is reused.

  • Fixed Issue 2279045 - Block setting end2end latency when a vNIC connects to a VLAN-backed dvportgroup.

    End2end latency does not support VLAN-backed dvportgroups; when the latency configuration is set on such a port group, an exception is shown. Only overlay-backed dvportgroups can be used.

  • Fixed Issue 2265909 - No more support for double slash in REST API URI.

    REST API calls fail with ERROR 500: Internal Server Error if the URI contains a double slash.

  • Fixed Issue 2319867 - End2end latency is not configured on new host automatically after adding new host to the VDS with end2end latency enabled.

    When end2end latency is already configured on a VDS and a new host is added, end2end latency is not automatically configured on the new host. The latency on this host cannot be monitored.

  • Fixed Issue 2442884 - NSX-v 6.4.6 is not compatible with VMware Integrated OpenStack.

    Starting in NSX 6.4.6 you cannot create a firewall rule with a destination IP of 0.0.0.0/0. This results in an incompatibility between NSX 6.4.6 and all versions of VMware Integrated OpenStack. See the VMware Product Interoperability Matrix for compatible versions.

  • Fixed Issue 2331068 - NSX Edge becomes unmanageable due to too many firewall grouping object update messages.

    NSX Edge becomes unmanageable; however, the data plane works fine. Administrators cannot make configuration changes or query the edge status.

  • Fixed Issue 2320171 - Resource reservation of edge appliance resets to "No Reservation" when edge appliance is moved to a different cluster or resource pool.

    After moving the edge appliance to a different cluster or resource pool, administrators have to manually reset the edge appliance reservation by running an API request.

  • Fixed Issue 2349110 - Save password feature does not work in SSL VPN-Plus client on Mac computers.

    SSL VPN users have to re-enter username and password to log in to the SSL VPN Server every time even when the "Save Password" option is selected.

  • Fixed Issue 2360114 - In a multi-site Cross-VC NSX environment, Route Redistribution page for an edge that is managed by a secondary NSX Manager is unavailable. 

    This issue existed only in HTML5-based vSphere Client. vSphere Web Client did not have this issue.

  • Fixed Issue 2337437 - CPU kernel lockup for extended time.

    The CPU does not receive a heartbeat for N seconds, and performance degradation occurs.

  • Fixed Issue 2305079 - NSX identity firewall works intermittently. NSX context engine does not run on an ESXi host.

    While using NSX Guest Introspection for detecting network events, NSX identity firewall functions intermittently. When the NSX context engine is running on an ESXi host, identity firewall functions as intended. However, when the context engine is not running, identity firewall does not function.

  • Fixed Issue 2375249 - Firewall rules fail to publish when extra spaces exist in the Source IP address and Destination IP address fields.

    The vsfwd log file reports an invalid firewall configuration from the "string2ip" function.

  • Fixed Issue 2305989 - Memory leak occurs in dcsms process when "show ip bgp" and "show ip bgp neighbors" CLI commands are executed.

    The Dynamic Clustering Secure Mobile Multicast (dcsms) process reaches very high memory utilization and normal processing is impacted. The edge runs out of memory, which causes it to reload.

  • Fixed Issue 2379274 - When interfaces or bridges of a DLR are connected to logical switches that are in different transport zones, VDR instances are missing on some hosts.

    VMs are unable to access their default gateway and data path traffic is down.

  • Fixed Issue 2291285 - When a bridge is configured on an interface of a DLR that has multicast already configured, that interface is not removed from configured multicast interfaces.

    In the vCenter UI, the interface on the DLR that is configured for multicast IGMP and bridging as a source is shown as disconnected.

  • Fixed Issue 2377057 - Publishing of distributed firewall rules causes NSX Manager outage due to high CPU usage.

    High CPU usage on the NSX Manager results in performance degradation.

  • Fixed Issue 2391802 - VMs disappear from the NSX Exclusion list when new VMs are added to the list.

    If you add a new VM before the list of existing VMs in the Exclusion list is loaded in the UI, the existing VMs in the Exclusion list are removed.

  • Fixed Issue 2389897 - log4j and log4j2 configuration conflict due to use of log4j by third-party libraries.

    NSX-generated logs are missing in vsm.log. Absence of NSX logs increases difficulty in debugging issues.

  • Fixed Issue 2273837 - Firewall rules that use IP Sets or Application or Application Groups with datacenter scope are not visible when NSX is upgraded to 6.4.0 or later.

    The following error message is displayed in the NSX logs when firewall rules use an IP set or application or application group with "datacenter" scope in NSX 6.4.0 and later:
    [Firewall] Invalid grouping ObjectId. This object does not exist or is not available.

  • Fixed Issue 2319561 - Synchronization errors on the primary NSX Manager.

    The following error message is displayed in the UI and NSX logs:
    REST API failed: The uuid format of instance_uuid is invalid.

  • Fixed Issue 2327330 - TLS 1.1 is enabled on RabbitMQ port 5671.

    TLS 1.1 support exists for backward compatibility. Users can remove TLS 1.1 explicitly if they don't need it.
    To remove TLS 1.1, edit the rabbitmq.config file at /etc/rabbitmq/ and restart the broker on the manager.
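
    For example, restricting the broker to TLS 1.2 might look like the following fragment in the classic Erlang-term configuration format (an illustrative sketch only; merge it into the existing rabbitmq.config rather than replacing the file, and consult the RabbitMQ documentation for the complete ssl_options):

        %% /etc/rabbitmq/rabbitmq.config (illustrative fragment)
        [
          {rabbit, [
            {ssl_options, [
              {versions, ['tlsv1.2']}
            ]}
          ]}
        ].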

  • Fixed Issue 2341214 - An edge appliance is deployed with HA enabled and all criteria for HA enablement are satisfied; however, HA remains disabled on the edge until a second configuration change is published to the edge appliance.

    When an edge (ESG/DLR/UDLR) is created with HA enabled and appliances are deployed (bulk configuration with other features configured), the configuration is published to edge appliances with HA as disabled. The management plane shows HA as enabled, but HA is actually disabled on the edge appliances.

  • Fixed Issue 2392371 - Edge virtual appliance fails to reboot when VMware Tools is not running on the edge appliance.

    Reboot of the edge appliance could fail in some scenarios; for example, when FIPS is being enabled or disabled, or when ForceSync is attempted on an edge that reports a bad system status.

  • Fixed Issue 2273242 - On receiving a SYN on an established session, filters with NetX do not send an ACK on that connection.

    With NetX enabled, the SYN packet is forwarded to the service instance on an established session. TCP sessions may hang.

  • Fixed Issue 2285624 - proxy_set failed when executed from Linux.

    Proxy support was not present in the Linux SSL VPN client.

  • Fixed Issue 2287017 - ESX may display PSOD with ARP storms to unknown destinations.

    With ARP storms to unknown destinations, ESX displays PSOD while replicating broadcast ARPs to many VMs in the network.

  • Fixed Issue 2287578 - Collecting rule stats causes Distributed Firewall (DFW) datapath to slow down.

    Whenever a rule publishing or rule stats collection occurs, a VM protected by DFW may experience degraded VM network performance.

  • Fixed Issue 2315879 - Multiple IPsec sites with the same Local Endpoint, Peer Endpoint, and Peer Subnet but different Local Subnets cause the edge to not route traffic.

    Routes are missing or not installed. The edge does not route traffic.

  • Fixed Issue 2329851 - RSA SecurID authentication with RADIUS for the SSL VPN-Plus Client fails with an internal server error page.

    After logging in on the SSL VPN portal, a page-not-found error is seen. RSA with the RADIUS server fails.

  • Fixed Issue 2331796 - "HA dead time" value is not set in the HA configuration even when the value is specified during deployment of a standalone edge.

    After the standalone edge is deployed, the HA configuration incorrectly displays the "HA dead time" value in the "HA heartbeat interval" field. Users also cannot reconfigure the "HA dead time" value by using the CLI; instead, the "HA heartbeat interval" gets set.

  • Fixed Issue 2332763 - Edge upgrade to NSX 6.4.2 or later fails when "sysctl.net.ipv4.tcp_tw_recycle" system control property is configured.

    Publish of the edge appliance fails with the following error message:
    ERROR :: VseCommandHandler :: Command failed eventually. Error: [C_UTILS][73001][65280] sysctl net.ipv4.tcp_tw_recycle="0" failed : sysctl: cannot stat /proc/sys/net/ipv4/tcp_tw_recycle: No such file or directory.

  • Fixed Issue 2342497 - VTEPs cannot work properly when IP hash teaming policy is used.

    NSX VTEPs cannot communicate to each other.

  • Fixed Issue 2350465 - Edge experiences packet loss and reboots due to out of memory.

    Out of memory occurs in high traffic situations and when IPSec is enabled.

  • Fixed Issue 2351224 - SSL VPN-Plus Client is installed successfully on a Linux machine, but the client does not work.

    The net-tools package is missing on the Linux machine.

  • Fixed Issue 2367084 - On a bridge, overlay VMs are not reachable from VLAN-backed VMs.

    When both DLR control VM and overlay VMs reside on the same ESXi host, VMs on the VLANs cannot resolve ARP requests from VMs on the overlay networks.

  • Fixed Issue - SynFloodProtection causes the firewall rules to be bypassed.

    When SynFloodProtection is enabled on the NSX Edge, firewall rule checking is bypassed if the traffic is destined to the edge itself.

  • Fixed Issue 2381632 - Unable to add static routes in a DLR that has no Edge Appliance VM deployed.

    The vSphere Web Client displays a value in "admin distance" for a DLR that has no Edge Appliance VM deployed.

  • Fixed Issue 2385541 - NSX Edge runs into a split-brain condition when the clear ForceStandby flag is sent to the incorrect VM.

    When ForceSync is attempted and the VMs are rebooted, clear ForceStandby is sent to the incorrect VM while its peer has ForceStandby set.
    Clearing the ForceStandby flag fails, causing the ForceSync task to be invoked. The VMs are rebooted as part of the ForceSync operation because the edge reports a bad system state.

  • Fixed Issue 2385739 - Deadlock in NSX Manager.

    Zookeeper throws errors and this makes the NSX Manager unresponsive. The replicator deletes and recreates NSX Controllers on the secondary NSX Manager. The deadlock occurs in the final step.

  • Fixed Issue 2387720 - "Suite-B-GMAC-128" and "Suite-B-GMAC-256" compliance suites in IPSec VPN configuration do not provide data encryption.

    These suites use null encryption, which is very insecure. Starting in NSX 6.4.6, these two compliance suites are deprecated.

  • Fixed Issue 2390308 - On deploying a FIPS-enabled edge, a reboot is required after the first boot; however, the reboot operation fails sometimes.

    The following error message is seen in the NSX logs:
    Cannot complete operation because VMware Tools is not running in this virtual machine.

  • Fixed Issue 2406853 - While adding a trunk interface on an edge, the default MTU size is incorrectly set to 1500 in the UI.

    Data plane traffic crossing the edge interface is impacted due to reduced default MTU size.

  • Stale entries are present in DLR routing tables in the following situations: a dvportgroup is used in DLR logical interfaces, a bridge is deleted from vCenter, or the vSphere Distributed Switch is migrated or deleted.

    DLR upgrade fails. Edge redeployment fails when edge upgrade is available. Edge configuration updates might fail.

  • Fixed Issue 2412632 - Unable to save a load balancer application profile of type "HTTPS End-to-End" without selecting a Service Certificate in the Server SSL settings.

    This issue occurs after upgrading an NSX Edge from 6.4.4 to 6.4.5. In NSX 6.4.5, selecting a Service Certificate was made mandatory while specifying the Server SSL settings.

  • Fixed Issue 2314438 - Sync issues on Primary NSX Manager show up with error "REST API failed : 'The uuid format of instance_uuid is invalid".

    Certain VMs in a multi-VC environment, although tagged with universal tags on the primary, are not replicated to the secondary.

  • Fixed Issue VMSA-2019-0010 - Linux kernel vulnerabilities in TCP Selective Acknowledgement (SACK) CVE-2019-11477, CVE-2019-11478

    All NSX-V 6.4.6 components contain the fix for the Linux kernel vulnerabilities in TCP Selective Acknowledgement (SACK), CVE-2019-11477 and CVE-2019-11478. Customers can upgrade to 6.4.6 or later for a permanent resolution of the above-mentioned CVEs. KB article 71311 provides additional information on the issue.

  • Fixed Issue 2324338 - When a DFW rule is configured whose effective vNIC has both IPv4 and IPv6 addresses, an update or flap in one of the address types also deletes the other IP address, causing the corresponding DFW rule to exhibit unexpected behavior.

    A change to one IP address type overwrites both IP address types. If an update arrives that has only an IPv6 address, the DB record is updated and both the existing IPv4 and IPv6 addresses are replaced by the new IPv6 address only.

Known Issues

The known issues are grouped as follows.

Installation and Upgrade Known Issues

Before upgrading, please read the section Upgrade Notes, earlier in this document.

  • Issue 1859572 - During the uninstall of NSX VIBs version 6.3.x on ESXi hosts that are being managed by vCenter version 6.0.0, the host continues to stay in Maintenance mode.
    If you are uninstalling NSX VIBs version 6.3.x on a cluster, the workflow involves putting the hosts into Maintenance mode, uninstalling the VIBs, and then removing the hosts from Maintenance mode by the EAM service. However, if such hosts are managed by vCenter Server version 6.0.0, the host remains stuck in Maintenance mode after the VIBs are uninstalled. The EAM service responsible for uninstalling the VIBs puts the host in Maintenance mode but fails to move the host out of Maintenance mode.

    Workaround: Manually move the host out of Maintenance mode. This issue does not occur if the host is managed by vCenter Server version 6.5a and above.

  • Issue 1263858 - SSL VPN does not send upgrade notification to remote client.

    SSL VPN gateway does not send an upgrade notification to users. The administrator has to manually communicate that the SSL VPN gateway (server) is updated to remote users and they must update their clients.

    Workaround: Users need to uninstall the older version of the client and install the latest version manually.

  • Issue 2006028 - Host upgrade may fail if vCenter Server system is rebooting during upgrade.

    If the associated vCenter Server system is rebooted during a host upgrade, the host upgrade might fail and leave the host in maintenance mode. Clicking Resolve does not move the host out of maintenance mode. The cluster status is "Not Ready".

    Workaround: Exit the host from maintenance mode manually. Click "Not Ready" then "Resolve All" on the cluster.

  • Issue 2429861 - End2end latency is not shown in the vRNI UI after upgrading to NSX-v 6.4.6.

    End2end latency is broken with vRNI interop for vRNI 4.2 and vRNI 5.0.

    Workaround: Upgrade vRNI to 5.0.0-P2.

  • Issue 2449643 - The vSphere Web Client displays "No NSX Manager available" error after upgrading NSX from 6.4.1 to 6.4.6.

    In the vSphere Web Client, the Firewall and Logical Switch pages display the following error message:
    "No NSX Manager available. Verify current user has role assigned on NSX Manager."

    As the Firewall and Logical Switch pages are unavailable in the UI, users might not be able to configure firewall rules or create logical switches. NSX APIs also take a long time to respond.

    In the /usr/nsx-webserver/logs/localhost_access_log file, entries similar to the following are seen:
    127.0.0.1 - - [27/Oct/2019:08:38:22 +0100] "GET /api/4.0/firewall/globalroot-0/config HTTP/1.1" 200 1514314 91262
    127.0.0.1 - - [27/Oct/2019:08:43:21 +0100] "GET /api/4.0/firewall/globalroot-0/config HTTP/1.1" 200 1514314 90832

    127.0.0.1 - - [27/Oct/2019:11:07:39 +0100] "POST /remote/api/VdnInventoryFacade HTTP/1.1" 200 62817 264142
    127.0.0.1 - - [27/Oct/2019:11:07:40 +0100] "POST /remote/api/VdnInventoryFacade HTTP/1.1" 200 62817 265023
    127.0.0.1 - - [27/Oct/2019:11:07:40 +0100] "POST /remote/api/VdnInventoryFacade HTTP/1.1" 200 62817 265265

    Workaround:

    Rename the log4j-1.2.14.jar and log4j-1.2.17.jar files manually, and restart the bluelane-manager service.

    1. Log in to the NSX Manager CLI as a root user.

    2. Run these commands to rename the .jar files:

    cd /home/secureall/secureall/sem/WEB-INF/lib
    mv log4j-1.2.17.jar log4j-1.2.17.jar.old
    mv log4j-1.2.14.jar log4j-1.2.14.jar.old

    3. Restart the bluelane-manager service by running this command:

    /etc/init.d/bluelane-manager restart

NSX Manager Known Issues
  • Issue 2233029 - ESX does not support 10K routes.

    ESX does not need to support 10K routes. By design, the maximum limit on ESX is 2K routes.

    Workaround: None.

  • Issue 2391153 - NSX Manager does not update vdn_vmknic_portgroup and vdn_cluster table after vCenter operations are done on vmknic dvportgroup.

    When you run the PUT https://nsx_manager_ip/api/2.0/nwfabric/configure API request, NSX Manager does not update the vdn_vmknic_portgroup and the vdn_cluster table after vCenter operations are done on vmknic distributed virtual portgroup. vCenter UI continues to display the old VLAN ID. 

    Workaround: None

  • Issue 2445396 - Load balancer pool ports are erased when editing the load balancer pool followed by "Save".

    When editing a load balancer pool and saving the configuration, the port numbers on every member are erased.

General Known Issues
  • In the vSphere Web Client, when you open a Flex component which overlaps an HTML view, the view becomes invisible.

    When you open a Flex component, such as a menu or dialog, which overlaps an HTML view, the view is temporarily hidden.
    (Reference: http://pubs.vmware.com/Release_Notes/en/developer/webclient/60/vwcsdk_600_releasenotes.html#issues)

    Workaround: None. 

  • Issue 2367906 - CPU utilization on the NSX edge reaches 100% when HTTP service monitor is configured on the load balancer with "no-body" extension.

    This issue occurs when you configure HTTP service monitor with "no-body" extension while using the "nagios" plugin in the load balancer. The load balancer loses connection with all the back-end servers, and the user's request is broken.

    Workaround: See KB article 71168 for more information.

  • Issue 2551523 - New rule publish may fail if a rule contains quad-zero IP (0.0.0.0/0) after upgrading from NSX-v 6.3.x to NSX-v 6.4.6.

    Workaround: Change 0.0.0.0/0 to 'any' in source or destination.

  • Issue 2536058 - High API Response time from NSX Manager for get flowstats APIs.

    When querying flow statistics, the response time for these APIs is high.

  • Issue 2513773 - IDFW Rules are not being applied to VDIs after their respective security groups are removed mid-session.

    VDI sessions are dropped randomly due to removal of security groups attached to IDFW rules.

    Workaround: Trigger another manual sync.

  • Issue 2509224 - Excessively large flow table on NSX Edge in HA leads to connection drops.

    Excessive connection tracking table syncing from standby NSX Edge to active NSX Edge causes new connections to drop.

    Workaround: None.

  • Issue 2502739 - High API Response time from NSX Manager for FW and IPSet APIs.

    When querying for Firewall config and IPSets, the response time for these APIs is high.

    Workaround: None.

  • Issue 2498988 - Response time is high when querying grouping objects.

    When querying grouping objects, the response time for these APIs is high.

  • Issue 2458746 - Unsupported LIF configurations resulted in PSOD on multiple hosts.

    Host validations were implemented to avoid adding duplicate LIFs across different DLRs.

    Workaround: Make sure the virtual wire is not present in two different DLRs by deleting one of them.

  • Issue 2452833 - ESXi host servicing a DLR Control VM may experience a PSOD if the same VXLAN is consumed in bridging as well as a LIF in two different DLRs.

    An ESXi host servicing the DLR Control VM may PSOD if the same virtual wire is consumed for bridging and as a LIF.

    Workaround: Wait for the current job to succeed and confirm that there are no pending jobs before moving to another edge for configuration.

  • Issue 2451586 - LB Application Rules not working after upgrading NSX to 6.4.6.

    When HTTP(S) back-end servers sit behind an NSX Load Balancer as part of an NSX edge, clients communicate with the back-end servers through the Load Balancer. The Load Balancer sends headers in lowercase even though the servers sent them in camel case.

  • Issue 2448449 - Application rules configured with req_ssl_sni may fail with parsing error when upgraded to NSX for vSphere 6.4.6.

    Upon upgrading to NSX for vSphere 6.4.6, if a Load Balancer is configured with an SNI-related rule, or when you create or configure a new SNI-related rule in this version, the upgrade may fail or the creation of the related rule may fail.

    Workaround: See VMware Knowledge Base article 75281.

  • Issue 2444275 - Host will PSOD when the Virtual Infrastructure Latency feature is enabled.

    An ESXi host experiences a PSOD when the Virtual Infrastructure Latency feature is enabled in vRNI in an environment with more than 975 BFD tunnels per host.

    Workaround: Disable the latency/heatmap feature in a scale environment.

  • Issue 2493095 - Support for comma-separated IP lists in DFW removed from UI and database.

    Upon upgrading from previous versions of NSX-v to NSX-v 6.4.6, comma-separated IP lists in DFW are removed.

  • Issue 2459936 - Unable to split network into two parts using static routes with network (0.0.0.0/1 and 128.0.0.0/1).

    Only a static route with network 0.0.0.0/0 is allowed.

    Workaround: None.

  • Issue 2448235 - After upgrading to NSX-v 6.4.6, you may not be able to filter firewall rules using the Protocol, Source Port, and Destination Port filters.

    Filtering firewall rules by Protocol, Source Port, or Destination Port does not work.

  • Issue 2445183 - NSX Manager not visible in the vSphere Web Client after replacing the NSX Manager SSL certificate.

    After replacing the NSX Manager SSL certificate, NSX Manager is not visible in the vSphere Web Client.

    Workaround: See VMware Knowledge Base article 76129.

Solution Interoperability Known Issues