VMware NSX for vSphere 6.4.0 | Released January 16, 2018 | Build 7564187

See the Revision History of this document.

What's in the Release Notes

The release notes cover the following topics:

What's New in NSX 6.4.0

NSX for vSphere 6.4.0 adds serviceability enhancements and addresses a number of specific customer bugs. See Resolved Issues for more information.

Changes introduced in NSX for vSphere 6.4.0:

Security Services:

  • Identity Firewall: Identity Firewall (IDFW) now supports user sessions on remote desktop and application servers (RDSH) that share a single IP address. A new "fast-path" architecture improves the processing speed of IDFW rules. Active Directory integration now allows selective synchronization for faster AD updates.

  • Distributed Firewall: Distributed Firewall (DFW) adds layer-7 application-based context for flow control and micro-segmentation planning. Application Rule Manager (ARM) now recommends security groups and policies for a cohesive and manageable micro-segmentation strategy.

  • Distributed Firewall rules can now be created as stateless rules at the individual DFW section level.

  • Distributed Firewall supports VM IP realization in the hypervisor. This allows users to verify whether a particular VM IP is part of a security group, cluster, resource pool, or host that is used in the source, destination, or appliedTo fields of a DFW rule.

  • IP address discovery mechanisms for VMs: Authoritative enforcement of security policies based on VM names or other vCenter-based attributes requires that NSX know the IP address of the VM. NSX 6.2 introduced the option to discover the VM's IP address using DHCP snooping or ARP snooping. In NSX 6.4.0, the number of ARP-discovered IPs has been increased to 128 and is configurable from 1 to 128. These discovery mechanisms enable NSX to enforce IP address-based security rules on VMs that do not have VMware Tools installed.

  • Guest Introspection: For vCenter 6.5 and later, Guest Introspection (GI) VMs are named Guest Introspection (XX.XX.XX.XX), where XX.XX.XX.XX is the IPv4 address of the host on which the GI machine resides. This occurs during the initial deployment of GI.

NSX User Interface:

  • Support for vSphere Client (HTML5): Introduces VMware NSX UI Plug-in for vSphere Client (HTML5). For a list of supported functionality, please see VMware NSX for vSphere UI Plug-in Functionality in vSphere Client.
  • HTML5 Compatibility with vSphere Web Client (Flash): NSX functionality developed in HTML5 (for example, Dashboard) remains compatible with both vSphere Client and vSphere Web Client, offering a seamless experience for users who are unable to transition immediately to vSphere Client.
  • Improved Navigation Menu: Reduced number of clicks to access key functionality, such as Grouping Objects, Tags, Exclusion List and System Configuration.

Operations and Troubleshooting:

  • Upgrade Coordinator provides a single portal to simplify the planning and execution of an NSX upgrade.  Upgrade Coordinator provides a complete system view of all NSX components with current and target versions, upgrade progress meters, one-click or custom upgrade plans and pre- and post-checks.
  • A new improved HTML5 dashboard is available along with many new components. Dashboard is now your default homepage.  You can also customize existing system-defined widgets, and can create your own custom widgets through API.
  • New System Scale dashboard collects information about the current system scale and displays the configuration maximums for the supported scale parameters.  Warnings and alerts can also be configured when limits are approached or exceeded.
  • Guest introspection reliability and troubleshooting enhancements.  Features such as EAM status notification, upgrade progress, custom names for SVMs, additional memory and more improve the reliability and troubleshooting of GI deployments.
  • A Central CLI for logical switch, logical router and edge distributed firewall reduces troubleshooting time with centralized access to distributed network functions.
  • New Support Bundle tab is available to help you collect the support bundle through the UI with a single click. You can collect support bundle data for NSX components such as NSX Manager, hosts, edges, and controllers. You can either download this aggregate support bundle or upload it directly to a remote server. You can view the overall status of data collection, as well as the status for each component.
  • New Packet Capture tab is available to capture packets through the UI. If a host is not in a healthy state, you can collect a packet dump for that host so that an administrator can examine the packet information for further debugging.
  • You can now enable Controller Disconnected Operation (CDO) mode from the Management tab on the secondary site to avoid temporary connectivity issues. CDO mode ensures that data plane connectivity is unaffected in a multi-site environment when the primary site loses connectivity.
  • Multi-syslog support for up to 5 syslog servers.
  • API improvements including JSON support. NSX now offers the choice of JSON or XML for data formats; XML remains the default for backwards compatibility (see the example after this list).
  • Some of the NSX Edge system event messages now include Edge ID and/or VM ID parameters, for example, event codes 30100, 30014, and 30031.
    These message parameters are not available for older system events. In such cases, the event message displays {0} or {1} for the Edge ID and/or VM ID parameters.
  • You can now use the NSX API to check the status of a hypervisor. Monitoring hypervisor tunnel health is useful for troubleshooting issues quickly. The API response includes the pNIC status, tunnel status, hypervisor-to-control-plane connection status, and hypervisor-to-management-plane connection status.
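
For example, a minimal sketch of an API request that asks for a JSON response (the Accept header selects the format; the edge ID is a placeholder):

    GET https://<nsxmanager>/api/4.0/edges/<edgeId>
    Accept: application/json

Omitting the Accept header, or sending Accept: application/xml, returns XML as in previous releases.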

NSX Edge Enhancements:

  • Enhancement to Edge load balancer health check. Three new health check monitors have been added: DNS, LDAP, and SQL.
  • You can now filter routes for redistribution based on LE/GE (less-than-or-equal/greater-than-or-equal) bounds on the prefix length of the destination IP prefix (see the sketch after this list).
  • Support for BGP and static routing over GRE tunnels.
  • NAT64 provides IPv6 to IPv4 translation.
  • Faster failover of edge routing services.
  • Routing events now generate system events in NSX Manager.
  • Improvements to L3 VPN performance and resiliency.
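
As an illustration of the LE/GE route filtering noted above, the following sketch shows an IP prefix entry with lower and upper bounds on the matched prefix length. This is a sketch only: the <ge> and <le> element names and their placement within the routing configuration are assumptions to verify against the NSX API Guide, and the values are placeholders.

    <ipPrefix>
      <name>internal-nets</name>
      <ipAddress>10.0.0.0/8</ipAddress>
      <ge>16</ge>
      <le>24</le>
    </ipPrefix>

A redistribution filter referencing this prefix would match routes inside 10.0.0.0/8 with prefix lengths from /16 through /24.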


Versions, System Requirements and Installation

Note:

  • The table below lists recommended versions of VMware software. These recommendations are general and should not replace or override environment-specific recommendations.

  • This information is current as of the publication date of this document.

  • For the minimum supported version of NSX and other VMware products, see the VMware Product Interoperability Matrix. VMware declares minimum supported versions based on internal testing.

NSX for vSphere

VMware recommends the latest NSX release for new deployments.

When upgrading existing deployments, please review the NSX Release Notes or contact your VMware technical support representative for more information on specific issues before planning an upgrade.

vSphere

  • For vSphere 6.0:
    Supported: 6.0 Update 2, 6.0 Update 3
    Recommended: 6.0 Update 3. vSphere 6.0 Update 3 resolves the issue of duplicate VTEPs in ESXi hosts after rebooting vCenter Server. See VMware Knowledge Base article 2144605 for more information.

  • For vSphere 6.5:
    Supported: 6.5a, 6.5 Update 1
    Recommended: 6.5 Update 1. vSphere 6.5 Update 1 resolves the issue of EAM failing with OutOfMemory. See VMware Knowledge Base Article 2135378 for more information. 

Note: vSphere 5.5 is not supported with NSX 6.4.0.

Guest Introspection for Windows

All versions of VMware Tools are supported. Some Guest Introspection-based features require newer VMware Tools versions:
  • Use VMware Tools 10.0.9 and 10.0.12 to enable the optional Thin Agent Network Introspection Driver component packaged with VMware Tools.
  • Upgrade to VMware Tools 10.0.8 and later to resolve slow VMs after upgrading VMware Tools in NSX / vCloud Networking and Security (see VMware knowledge base article 2144236).
  • Use VMware Tools 10.1.0 and later for Windows 10 support.
  • Use VMware Tools 10.1.10 and later for Windows Server 2016 support.
Guest Introspection for Linux

This NSX version supports the following Linux versions:
  • RHEL 7 GA (64 bit)
  • SLES 12 GA (64 bit)
  • Ubuntu 14.04 LTS (64 bit)

System Requirements and Installation

For the complete list of NSX installation prerequisites, see the System Requirements for NSX section in the NSX Installation Guide.

For installation instructions, see the NSX Installation Guide or the NSX Cross-vCenter Installation Guide.

Deprecated and Discontinued Functionality

End of Life and End of Support Warnings

For information about NSX and other VMware products that must be upgraded soon, please consult the VMware Lifecycle Product Matrix.

  • NSX for vSphere 6.1.x reached End of Availability (EOA) and End of General Support (EOGS) on January 15, 2017. (See also VMware knowledge base article 2144769.)

  • vCNS Edges are no longer supported. You must upgrade them to NSX Edges before upgrading to NSX 6.3 or later.

  • NSX for vSphere 6.2.x will reach End of General Support (EOGS) on August 20, 2018.

API Removals and Behavior Changes

Deprecations in NSX 6.4.0
The following items are deprecated, and might be removed in a future release.

  • The systemStatus parameter in GET /api/4.0/edges/edgeID/status is deprecated.
  • GET /api/2.0/services/policy/serviceprovider/firewall/ is deprecated. Use GET /api/2.0/services/policy/serviceprovider/firewall/info instead.
  • Setting tcpStrict in the global configuration section of Distributed Firewall is deprecated. Starting in NSX 6.4.0, tcpStrict is defined at the section level. Note: If you upgrade to NSX 6.4.0 or later, the global configuration setting for tcpStrict is used to configure tcpStrict in each existing layer 3 section. tcpStrict is set to false in layer 2 sections and layer 3 redirect sections. See "Working with Distributed Firewall Configuration" in the NSX API Guide for more information.
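
For example, a minimal sketch of a layer 3 section carrying the section-level setting (the attribute placement is illustrative; see "Working with Distributed Firewall Configuration" in the NSX API Guide for the authoritative schema):

    <section name="app-tier-rules" tcpStrict="true">
      <!-- rules in this section -->
      ...
    </section>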

Behavior Changes in NSX 6.4.0
In NSX 6.4.0, the <name> parameter is required when you create a controller with POST /api/2.0/vdn/controller.

NSX 6.4.0 introduces these changes in error handling:

  • Previously, POST /api/2.0/vdn/controller responded with 201 Created to indicate that the controller creation job was created. However, creation of the controller might still fail. Starting in NSX 6.4.0, the response is 202 Accepted.
  • Previously, if you sent an API request that is not allowed in transit or standalone mode, the response status was 400 Bad Request. Starting in 6.4.0, the response status is 403 Forbidden.
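
For example, a minimal sketch of a controller creation request including the now-required <name> element (the IDs and password are placeholders, and the complete set of required elements is listed in the NSX API Guide). Because the response is 202 Accepted, poll the job or controller status afterwards to confirm that creation actually succeeded:

    POST https://<nsxmanager>/api/2.0/vdn/controller

    <controllerSpec>
      <name>controller-1</name>
      <ipPoolId>ipaddresspool-1</ipPoolId>
      <resourcePoolId>domain-c7</resourcePoolId>
      <datastoreId>datastore-21</datastoreId>
      <networkId>network-17</networkId>
      <password>Example-Passw0rd!</password>
    </controllerSpec>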

CLI Removals and Behavior Changes

Do not use unsupported commands on NSX Controller nodes
There are undocumented commands to configure NTP and DNS on NSX Controller nodes. These commands are not supported, and should not be used on NSX Controller nodes. You should only use commands which are documented in the NSX CLI Guide.

Upgrade Notes

Note: For a list of known issues affecting installation and upgrades see the section, Installation and Upgrade Known Issues.

General Upgrade Notes

  • To upgrade NSX, you must perform a full NSX upgrade including host cluster upgrade (which upgrades the host VIBs). For instructions, see the NSX Upgrade Guide including the Upgrade Host Clusters section.

  • Upgrading NSX VIBs on host clusters using VUM is not supported. Use Upgrade Coordinator, Host Preparation, or the associated REST APIs to upgrade NSX VIBs on host clusters.

  • System Requirements: For information on system requirements while installing and upgrading NSX, see the System Requirements for NSX section in NSX documentation.

  • Upgrade path for NSX: The VMware Product Interoperability Matrix provides details about the upgrade paths from VMware NSX.
  • Cross-vCenter NSX upgrade is covered in the NSX Upgrade Guide.

  • Downgrades are not supported:
    • Always capture a backup of NSX Manager before proceeding with an upgrade.

    • Once NSX has been upgraded successfully, NSX cannot be downgraded.

  • To validate that your upgrade to NSX 6.4.x was successful see knowledge base article 2134525.
  • There is no support for upgrades from vCloud Networking and Security to NSX 6.4.x. You must upgrade to a supported 6.2.x release first.

  • Interoperability: Check the VMware Product Interoperability Matrix for all relevant VMware products before upgrading.
    • Upgrading to NSX 6.4: NSX 6.4 is not compatible with vSphere 5.5.
    • Upgrading to vSphere 6.5a or later: When upgrading from vSphere 6.0 to vSphere 6.5a or later, you must first upgrade to NSX 6.3.0 or later. NSX 6.2.x is not compatible with vSphere 6.5. See Upgrading vSphere in an NSX Environment in the NSX Upgrade Guide.
  • Partner services compatibility: If your site uses VMware partner services for Guest Introspection or Network Introspection, you must review the  VMware Compatibility Guide before you upgrade, to verify that your vendor's service is compatible with this release of NSX.
  • Networking and Security plug-in: After upgrading NSX Manager, you must log out and log back in to the vSphere Web Client. If the NSX plug-in does not display correctly, clear your browser cache and history. If the Networking and Security plug-in does not appear in the vSphere Web Client, reset the vSphere Web Client server as explained in the NSX Upgrade Guide.
  • Stateless environments: When upgrading NSX in a stateless host environment, the new VIBs are pre-added to the Host Image profile during the NSX upgrade process. As a result, the NSX upgrade process on stateless hosts follows this sequence:

    Prior to NSX 6.2.0, there was a single URL on NSX Manager from which VIBs for a certain version of the ESXi host could be found. (Meaning the administrator only needed to know a single URL, regardless of NSX version.) In NSX 6.2.0 and later, the new NSX VIBs are available at different URLs. To find the correct VIBs, perform the following steps:

    1. Find the new VIB URL from https://<nsxmanager>/bin/vdn/nwfabric.properties.
    2. Fetch the VIBs for the required ESXi host version from the corresponding URL.
    3. Add them to the host image profile.
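
    For example, step 1 is a plain HTTPS GET against NSX Manager, and step 2 fetches the bundle from a URL listed in the response (the vxlan.zip path below is illustrative only; use the exact URL that the properties file returns for your ESXi version):

      GET https://<nsxmanager>/bin/vdn/nwfabric.properties
      GET https://<nsxmanager>/bin/vdn/vibs-6.4.0/<esxi-version>/vxlan.zip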

Upgrade Notes for NSX Components

NSX Manager Upgrade

  • Important: If you are upgrading NSX 6.2.0, 6.2.1, or 6.2.2 to NSX 6.3.5 or later, you must complete a workaround before starting the upgrade. See VMware Knowledge Base article 000051624 for details.

  • If you use SFTP for NSX backups, change to hmac-sha2-256 after upgrading to 6.3.0 or later because there is no support for hmac-sha1. See VMware Knowledge Base article 2149282 for a list of supported security algorithms.

Controller Upgrade

  • The NSX Controller cluster must contain three controller nodes to upgrade to NSX 6.3.3. If it has fewer than three controllers, you must add controllers before starting the upgrade. See Deploy NSX Controller Cluster for instructions.
  • In NSX 6.3.3, the underlying operating system of the NSX Controller changes. This means that when you upgrade from NSX 6.3.2 or earlier to NSX 6.3.3 or later, instead of an in-place software upgrade, the existing controllers are deleted one at a time, and new Photon OS based controllers are deployed using the same IP addresses.

    When the controllers are deleted, this also deletes any associated DRS anti-affinity rules. You must create new anti-affinity rules in vCenter to prevent the new controller VMs from residing on the same host.

    See Upgrade the NSX Controller Cluster for more information on controller upgrades.

Host Cluster Upgrade

  • If you upgrade from NSX 6.3.2 or earlier to NSX 6.3.3 or later, the NSX VIB names change.
    The esx-vxlan and esx-vsip VIBs are replaced with esx-nsxv if you have NSX 6.3.3 or later installed on ESXi 6.0 or later.

  • Rebootless upgrade and uninstall on hosts: On vSphere 6.0 and later, once you have upgraded from NSX 6.2.x to NSX 6.3.x or later, any subsequent NSX VIB changes will not require a reboot. Instead, hosts must enter maintenance mode to complete the VIB change. This affects both NSX host cluster upgrade and ESXi upgrade. See the NSX Upgrade Guide for more information.

NSX Edge Upgrade

  • Host clusters must be prepared for NSX before upgrading NSX Edge appliances: Management-plane communication between NSX Manager and Edge via the VIX channel is no longer supported starting in 6.3.0. Only the message bus channel is supported. When you upgrade from NSX 6.2.x or earlier to NSX 6.3.0 or later, you must verify that host clusters where NSX Edge appliances are deployed are prepared for NSX, and that the messaging infrastructure status is GREEN. If host clusters are not prepared for NSX, upgrade of the NSX Edge appliance will fail. See Upgrade NSX Edge in the NSX Upgrade Guide for details.

  • Upgrading Edge Services Gateway (ESG):
    Starting in NSX 6.2.5, resource reservation is carried out at the time of NSX Edge upgrade. When vSphere HA is enabled on a cluster having insufficient resources, the upgrade operation may fail due to vSphere HA constraints being violated.

    To avoid such upgrade failures, perform the following steps before you upgrade an ESG:

    The following resource reservations are used by the NSX Manager if you have not explicitly set values at the time of install or upgrade.

    NSX Edge Form Factor    CPU Reservation    Memory Reservation
    COMPACT                 1000 MHz           512 MB
    LARGE                   2000 MHz           1024 MB
    QUADLARGE               4000 MHz           2048 MB
    X-LARGE                 6000 MHz           8192 MB
    1. Always ensure that your installation follows the best practices laid out for vSphere HA. Refer to VMware Knowledge Base article 1002080.

    2. Use the NSX tuning configuration API:
      PUT https://<nsxmanager>/api/4.0/edgePublish/tuningConfiguration
      ensuring that the values for edgeVCpuReservationPercentage and edgeMemoryReservationPercentage fit within the available resources for the form factor (see the table above for defaults).
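
      For example, a minimal sketch showing only the two reservation parameters named above (the percentages are placeholders; because the PUT replaces the whole tuning configuration, first retrieve the current configuration with a GET on the same URL and change only the fields you need, keeping the rest intact):

      PUT https://<nsxmanager>/api/4.0/edgePublish/tuningConfiguration

      <tuningConfiguration>
        <edgeVCpuReservationPercentage>100</edgeVCpuReservationPercentage>
        <edgeMemoryReservationPercentage>100</edgeMemoryReservationPercentage>
        ...
      </tuningConfiguration>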

  • Disable vSphere's Virtual Machine Startup option where vSphere HA is enabled and Edges are deployed. After you upgrade your 6.2.4 or earlier NSX Edges to 6.2.5 or later, you must turn off the vSphere Virtual Machine Startup option for each NSX Edge in a cluster where vSphere HA is enabled and Edges are deployed. To do this, open the vSphere Web Client, find the ESXi host where the NSX Edge virtual machine resides, click Manage > Settings, and, under Virtual Machines, select VM Startup/Shutdown, click Edit, and make sure that the virtual machine is in Manual mode (that is, make sure it is not added to the Automatic Startup/Shutdown list).

  • Before upgrading to NSX 6.2.5 or later, make sure all load balancer cipher lists are colon separated. If your cipher list uses another separator such as a comma, make a PUT call to https://nsxmgr_ip/api/4.0/edges/EdgeID/loadbalancer/config/applicationprofiles and replace each <ciphers> </ciphers> list in <clientSsl> </clientSsl> and <serverSsl> </serverSsl> with a colon-separated list. For example, the relevant segment of the request body might look like the following. Repeat this procedure for all application profiles:

    <applicationProfile>
      <name>https-profile</name>
      <insertXForwardedFor>false</insertXForwardedFor>
      <sslPassthrough>false</sslPassthrough>
      <template>HTTPS</template>
      <serverSslEnabled>true</serverSslEnabled>
      <clientSsl>
        <ciphers>AES128-SHA:AES256-SHA:ECDHE-ECDSA-AES256-SHA</ciphers>
        <clientAuth>ignore</clientAuth>
        <serviceCertificate>certificate-4</serviceCertificate>
      </clientSsl>
      <serverSsl>
        <ciphers>AES128-SHA:AES256-SHA:ECDHE-ECDSA-AES256-SHA</ciphers>
        <serviceCertificate>certificate-4</serviceCertificate>
      </serverSsl>
      ...
    </applicationProfile>
  • Set the correct cipher version for load-balanced clients on vROPs versions older than 6.2.0: vROPs pool members on vROPs versions older than 6.2.0 use TLS version 1.0, so you must explicitly set a monitor extension value by specifying "ssl-version=10" in the NSX Load Balancer configuration. See Create a Service Monitor in the NSX Administration Guide for instructions. For example:
    {
        "expected" : null,
        "extension" : "ssl-version=10",
        "send" : null,
        "maxRetries" : 2,
        "name" : "sm_vrops",
        "url" : "/suite-api/api/deployment/node/status",
        "timeout" : 5,
        "type" : "https",
        "receive" : null,
        "interval" : 60,
        "method" : "GET"
    }

Guest Introspection Upgrade

  • Guest Introspection VMs now contain additional host identifying information in an XML file on the machine. After logging in to the Guest Introspection VM, you can find host identity information in the file /opt/vmware/etc/vami/ovfEnv.xml.

Upgrade Notes for FIPS

When you upgrade from a version of NSX earlier than NSX 6.3.0 to NSX 6.3.0 or later, you must not enable FIPS mode before the upgrade is complete. Enabling FIPS mode before the upgrade is complete will interrupt communication between upgraded and non-upgraded components. See Understanding FIPS Mode and NSX Upgrade in the NSX Upgrade Guide for more information.

  • Ciphers supported on OS X Yosemite and OS X El Capitan: If you are using the SSL VPN client on OS X 10.11 (El Capitan), you can connect using the AES128-GCM-SHA256, ECDHE-RSA-AES128-GCM-SHA256, ECDHE-RSA-AES256-GCM-SHA384, AES256-SHA, and AES128-SHA ciphers. On OS X 10.10 (Yosemite), you can connect using the AES256-SHA and AES128-SHA ciphers only.

  • Do not enable FIPS before the upgrade to NSX 6.3.x is complete. See Understand FIPS mode and NSX Upgrade in the NSX Upgrade Guide for more information.

  • Before you enable FIPS, verify any partner solutions are FIPS mode certified. See the VMware Compatibility Guide and the relevant partner documentation.

FIPS Compliance

NSX 6.4 uses FIPS 140-2 validated cryptographic modules for all security-related cryptography when correctly configured.

Note:

  • Controller and Clustering VPN: The NSX Controller uses IPsec VPN to connect Controller clusters. The IPsec VPN uses the VMware Linux Kernel Cryptographic Module (VMware Photon OS 1.0 environment), which is in the process of being CMVP validated.
  • Edge IPsec VPN: The NSX Edge IPsec VPN uses the VMware Linux Kernel Cryptographic Module (VMware NSX OS 4.4 environment), which is in the process of being CMVP validated.

Document Revision History

16 Jan 2018: First edition. 
17 Jan 2018: Second edition. Added resolved issues 1461421, 1499978, 1628679, 1634215, 1716464, 1753621, 1777792, 1787680, 1792548, 1801685, 1849043.
22 Jan 2018: Third edition. Added known issue 2036024.
24 Jan 2018: Fourth edition. Updated Upgrade Notes.
01 Feb 2018: Fifth edition. Updated FIPS compliance info.
22 Feb 2018: Sixth edition. Added resolved issues 1773240, 1839953, 1920574, 1954964, 1965589. Added known issues 2013820, 2016689, 2017806, 2026069.
02 Mar 2018: Seventh edition. Updated What's New section.
06 Apr 2018: Eighth edition. Added known issue 2014400. Added resolved issues 2029693, 2003453. Updated Behavior Changes and Deprecations.
27 Apr 2018: Ninth edition. Updated Behavior Changes and Deprecations.
07 May 2018: Tenth edition. Added resolved issues 1772473, 1954628. Added known issues 1993691, 2005900, 2007991.
14 Sept 2018: Eleventh edition. Added known issue 2006576.
05 Oct 2018: Twelfth edition. Added known issue 2164138.

Resolved Issues

The resolved issues are grouped as follows.

General Resolved Issues
  • Fixed Issue 1783528: NSX Manager CPU utilization spikes every Friday night / Saturday morning

    NSX polls LDAP for a full sync every Friday night. There is no option to configure a specific Active Directory Organizational Unit or Container; therefore, NSX pulls in all objects related to the provided domain.

  • Fixed Issue 1801685: Unable to see filters on ESXi after upgrade from 6.2.x to 6.3.0 because of failure to connect to host
    After you upgrade from NSX 6.2.x to 6.3.0 and upgrade cluster VIBs to 6.3.0 bits, even though the installation status shows successful and Firewall Enabled, the "communication channel health" shows the NSX Manager to Firewall Agent connectivity and NSX Manager to Control Plane Agent connectivity as down. This leads to firewall rule and security policy publish failures, and to VXLAN configuration not being sent down to hosts.
  • Fixed Issue 1780998: NSX Manager retains only 100,000 audit log entries instead of the documented 1,000,000 entries

    Only 100,000 log entries are available in the audit logs.

  • Fixed Issue 1874735: "Upgrade Available" link not shown if cluster has an alarm

    Users are not able to push the new service spec to EAM because the link is missing, and the service will not be upgraded.

  • Fixed Issue 1893299: ODS scan of non-regular files such as block devices or character devices may trigger a system crash or hang

    ODS scans only regular files and symbolic links. It does not directly access non-regular files such as block devices or character devices, but it can access them if they are the target of a symbolic link. Accessing these files may cause unintended behavior such as a system crash or hang.

  • Fixed Issue 1897878: High memory (and sometimes CPU) usage on the USVM, leading to operational problems for the VMs on the same host

    High memory usage causes VMs to become unresponsive and logins to fail.

  • Fixed Issue 1882345: CPU keeps increasing over time reaching and staying at 100%

    When ARP snooping is enabled as an IP detection mechanism and the environment has VMs with multiple IP addresses per vNIC, CPU usage keeps increasing, reaching 100%, along with severe performance degradation.

  • Fixed Issue 1920343: Server certificate can be created without a private key

    If the Private Key data is provided in the Certificate Content, the private key is ignored. 

  • Fixed Issue 1926060: The Negate source checkbox on the Firewall > Specify Source or Destination page gets selected even when you click outside 

    The checkbox for Negate source gets selected when you move objects from the list of Available objects to the Selected objects list. 

  • Fixed Issue 1965589: Publishing DFW drafts generated from releases prior to 6.4.0 fails on 6.4.0

    Drafts created prior to release 6.4 do not have Layer 2 sections with the stateless flag attribute. When such a draft is loaded or published in a 6.4 configuration, it fails because Layer 2 sections must have the stateless flag set to true from NSX 6.4 onwards. Since the absence of the attribute results in the default value (that is, false), configuration validation fails, resulting in a publish failure.

Logical Networking and NSX Edge Resolved Issues
  • Fixed Issue: Controller logs flooded with the bridge message "Fail to add/delete a mac record MacRecord for non-existing bridge instance."

    When sharding changes, the bridge fails to send a join message to the controller. Fixed in 6.4.0.

  • Fixed Issue 2014400: Standby NSX Edge starts responding to IPv6 traffic when the firewall feature on the edge is disabled.

    With IPv6 enabled on the NSX Edge, if a failover is triggered, upstream devices are updated with the standby Edge's MAC address, due to which north-south traffic can be forwarded to the incorrect edge. Fixed in 6.4.0.

  • Fixed Issue 1783065: Cannot configure Load Balancer for a UDP port along with TCP using IPv4 and IPv6 addresses together
    UDP supports only ipv4-ipv4 and ipv6-ipv6 (frontend-backend). There is a bug in NSX Manager where even the IPv6 link-local address is read and pushed as an IP address of the grouping object, and you cannot select which IP protocol to use in the LB configuration.

    Here is an example LB configuration demonstrating the issue:
    In the Load Balancer configuration, pool "vCloud_Connector" is configured with a grouping object (vm-2681) as a pool member, and this object contains both IPv4 and IPv6 addresses, which cannot be supported by the LB L4 engine.

    {
        "algorithm" : {
            ...
        },
        "members" : [
            {
                ...  ,
                ...
            }
        ],
        "applicationRules" : [],
        "name" : "vCloud_Connector",
        "transparent" : {
            "enable" : false
        }
    }
    
    {
        "value" : [
            "fe80::250:56ff:feb0:d6c9",
            "10.204.252.220"
        ],
        "id" : "vm-2681"
    }


  • Fixed Issue 1764258: Traffic blackholed for up to eight minutes post HA failover or Force-Sync on NSX Edge configured with sub-interface
    If an HA failover is triggered or you start a Force-Sync over a sub-interface, traffic is blackholed for up to eight minutes.


  • Fixed Issue 1850773: NSX Edge NAT reports invalid configuration when multiple ports are used on the Load Balancer configuration
    This issue occurs every time you configure a Load Balancer virtual server with more than one port. Due to this, NAT becomes unmanageable while this configuration state exists for the affected NSX Edge.


  • Fixed Issue 1733282: NSX Edge no longer supports static device routes
    NSX Edge does not support configuration of static routes with NULL nexthop address.


  • Fixed Issue 1711013: Takes about 15 minutes to sync FIB between Active/Standby NSX Edge after rebooting the standby VM.
    When a standby NSX Edge is powered off, the TCP session between the active and standby edges is not closed. The active edge detects that the standby is down after a keepalive (KA) failure (15 minutes). After 15 minutes, a new socket connection is established with the standby edge and the FIB is synced between the active and standby edges.


  • Fixed Issue 1781438: On the ESG or DLR NSX Edge appliances, the routing service does not send an error message if it receives the BGP path attribute MULTI_EXIT_DISC more than once.
    The edge router or distributed logical router does not send an error message if it receives the BGP path attribute MULTI_EXIT_DISC more than once. As per RFC 4271 [Sec 5], the same attribute (attribute with the same type) cannot appear more than once within the Path Attributes field of a particular UPDATE message.


  • Fixed Issue 1860583: Avoid configuring remote syslog servers by FQDN if DNS is not reachable.
    On an NSX Edge, if the remote syslog servers are configured using FQDN and DNS is not reachable, routing functionality might be impacted. The problem might not happen consistently.


  • Fixed Issue 1242207: Changing router ID during the run time is not reflected in OSPF topology

    If you change the router ID without disabling OSPF, new external link-state advertisements (LSAs) are not regenerated with the new router ID, causing loss of OSPF external routes.

  • Fixed Issue 1767135: Errors when trying to access certificates and application profiles under Load Balancer
    Users with Security Admin privileges and Edge scope are unable to access certificates and application profiles under Load Balancer. The vSphere Web Client shows error messages.
  • Fixed Issue 1844546: Configured user assigned IP address on DLR HA Interface is not working

    Entering a single specific user assigned IP address for the HA interface of a DLR is not honored. Entering more than one IP address results in an “Internal server error.”

  • Fixed Issue 1874782: NSX Edge becomes unmanageable due to issue in connecting to message bus

    If the NSX Edge experiences problems connecting to the message bus, NSX Manager cannot change configuration or retrieve information from the NSX Edge.

  • Fixed Issue 1461421: "show ip bgp neighbor" command output for NSX Edge retains the historical count of previously established connections

    The “show ip bgp neighbor” command displays the number of times that the BGP state machine transitioned into the Established state for a given peer. Changing the password used with MD5 authentication causes the peer connection to be destroyed and re-created, which in turn will clear the counters. This issue does not occur with an Edge DLR.

  • Fixed Issue 1499978: Edge syslog messages do not reach remote syslog server
    Immediately after deployment, the Edge syslog server cannot resolve the hostnames for any configured remote syslog servers.
  • Fixed Issue 1916360: HA failover may fail due to full disk when >100 routes are installed.

    When more than 100 routes are installed, the vmtools daemon on the standby Edge writes two warning log messages every 30 seconds. The logs are saved to a file called /var/log/vmware-vmsvc.log, which can eventually grow to completely fill the log partition; log rotation is not configured for this log file. When this occurs, HA failover may fail.

  • Fixed Issue 1634215: OSPF CLI commands output does not indicate whether routing is disabled

    When OSPF is disabled, the output of the routing CLI commands does not show any message saying "OSPF is disabled". The output is empty.

  • Fixed Issue 1716464: NSX Load Balancer will not route to VMs newly tagged with a security tag.
    If you deploy two VMs with a given tag and then configure an LB to route to that tag, the LB successfully routes to those two VMs. But if you then deploy a third VM with that tag, the LB only routes to the first two VMs.
  • Fixed Issue 1935204: DLR takes 1 to 1.5 secs for ARP resolution

    When ARP suppression fails, the local DLR's ARP resolution for a VM running on a remote host takes around 1 to 1.5 seconds.

  • Fixed Issue 1777792: Peer Endpoint set as 'ANY' causes IPSec connection to fail
    When the IPsec configuration on NSX Edge sets the remote peer endpoint to 'ANY', the Edge acts as an IPsec "server" and waits for remote peers to initiate connections. However, when the initiator sends a request for authentication using PSK+XAUTH, the Edge displays this error message: "initial Main Mode message received on XXX.XXX.XX.XX:500 but no connection has been authorized with policy=PSK+XAUTH" and IPsec cannot be established.
  • Fixed Issue 1881348: Problem with AS_TRANS (ASN 23456) for BGP local Autonomous System Number (ASN) configuration.

    Configuring AS_TRANS (ASN 23456) as the BGP local Autonomous System Number (ASN) causes a problem on ESG/DLR: BGP neighbors do not come up, even after reverting the ASN to a valid one.

  • Fixed Issue 1792548: NSX Controller may get stuck at the message: 'Waiting to join cluster'
    NSX Controller may get stuck at the message 'Waiting to join cluster' (CLI command: show control-cluster status). This occurs because the same IP address has been configured for the eth0 and breth0 interfaces of the controller while the controller is coming up. You can verify this by using the following CLI command on the controller: show network interface
  • Fixed Issue 1983497: Purple screen appears when a bridge failover and a bridge configuration change happen at the same time

    When a bridge failover and a bridge configuration change happen at the same time, they may result in a deadlock and cause a purple screen. The chance of running into this deadlock is low.

  • Fixed Issue 1849042/1849043: Admin account lockout when password aging is configured on the NSX Edge appliance
    If password aging is configured for the admin user on the NSX Edge appliance, when the password ages out there is a 7-day period during which the user is asked to change the password when logging in to the appliance. Failure to change the password results in the account being locked. Additionally, if the password is changed at the CLI prompt while logging in, the new password may not meet the strong password policy enforced by the UI and REST.
  • Fixed Issue 1982690: NSX Controller does not store the MAC entry of the workload VM on the ESXi host where the active L2-bridge control VM is running.

    You may experience traffic drops for all workload VMs installed on the hypervisor where the active bridging control VM is running.

  • Fixed Issue 1753621: When an Edge with a private local AS sends routes to eBGP peers, all private AS paths are stripped from the BGP routing updates sent

    NSX for vSphere currently has a limitation that prevents it from sharing the full AS path with eBGP neighbors when the AS path contains only private AS numbers. While this is the desired behavior in most cases, there are cases in which the administrator may want to share private AS paths with an external BGP neighbor. This fix allows you to change the “private AS path” behavior for external BGP peers. The default behavior for this feature is to “remove private ASN”, which is aligned with previous NSX for vSphere versions.

  • Fixed Issue 1954964: HA engine status not updated correctly on standby ESG post split-brain

    In some situations, after split-brain recovery, the standby ESG config engine status may not be updated correctly. This is an inconsistent state, and customers can see intermittent traffic drops in this state.

  • Fixed Issue 1920574: Unable to configure sub interfaces for an Edge

    Creating sub-interfaces on an Edge fails with NSX for vSphere 6.3.2/6.3.3 (unable to publish the sub-interface with an IP).

  • Fixed Issue 1772473: SSLVPN clients unable to get IP from ippool

    Clients are unable to connect to the private network because no IP is assigned from the IP pool when the client auto-reconnects with the server. The old IP assigned to the client from the IP pool is not cleaned up, so pool addresses are consumed by auto-reconnects.

NSX Manager Resolved Issues
  • Fixed Issue 1804116: Logical Router goes into Bad State on a host that has lost communication with the NSX Manager
    If a Logical Router is powered on or redeployed on a host that has lost communication with the NSX Manager (due to NSX VIB upgrade/install failure or host communication issue), the Logical Router will go into Bad State and continuous auto-recovery operation via Force-Sync will fail.
  • Fixed Issue 1786515: User with 'Security Administrator' privileges unable to edit the load balancer configuration through the vSphere web client UI.
    A user with "Security Administrator" privileges for a specific NSX Edge is not able to edit the Global Load Balancer Configuration for that edge, using the vSphere web client UI. The following error message is displayed: "User is not authorized to access object Global and feature si.service, please check object access scope and feature permissions for the user."
  • Fixed Issue 1801325: 'Critical' system events and logging generated in the NSX Manager with high CPU and/or disk usage
    You may encounter one or more of the following problems when you have high disk space usage, high churn in job data, or a high job queue size on the NSX Manager:
    • 'Critical' system events in the vSphere web client
    • High disk usage on NSX Manager for /common partition
    • High CPU usage for prolonged periods or at regular intervals
    • Negative impact on NSX Manager's performance

    Workaround: Contact VMware customer support. See VMware Knowledge Base article 2147907 for more information.

  • Fixed Issue 1902723: NSX Edge file bundle does not get deleted from /common/tmp directory after every publish and fills up the /common directory

    The /common directory gets full and NSX Manager runs out of space because the NSX Edge file bundle (sslvpn-plus config) is not deleted from /common/tmp.

  • Fixed Issue 1954628: Restore operation failed once the /common disk is full

    If a backup is restored on an NSX Manager whose /common disk is full, the restore might fail. The failure is reported on the summary page of the appliance. The NSX Manager reaches an inconsistent state that cannot be recovered from.

Security Services Resolved Issues
  • Fixed Issue 2029693: In a DFW scale environment (with 65K+ rules), users may experience longer times when publishing DFW rules.

    Firewall rules take effect 10-15 minutes after publishing. Fixed in 6.4.0.

  • Fixed Issue 1662020: Publish operation may fail resulting in an error message "Last publish failed on host host number" on DFW UI in General and Partner Security Services sections

    After changing any rule, the UI displays "Last publish failed on host host number". The hosts listed on the UI may not have the correct version of firewall rules, resulting in a lack of security and/or network disruption.

    The problem is usually seen in the following scenarios:

    • After upgrading from an older to the latest NSXv version.
    • Moving a host out of a cluster and moving it back in.
    • Moving a host from one cluster to another.
  • Fixed Issue 1496273: UI allows creation of in/out NSX firewall rules that cannot be applied to Edges
    The web client incorrectly allows creation of an NSX firewall rule applied to one or more NSX Edges when the rule has traffic traveling in the 'in' or 'out' direction and when PacketType is IPV4 or IPV6. The UI should not allow creation of such rules, as NSX cannot apply them to NSX Edges.
  • Fixed Issue 1854661: In a cross-VC setup, filtered firewall rules do not display the index value when you switch between NSX Managers
    After you apply a rule filter criteria to an NSX Manager and then switch to a different NSX Manager, the rule index shows as '0' for all filtered rules instead of showing the actual position of the rule.
  • Fixed Issue 2000749: Distributed Firewall stays in Publishing state with certain firewall configurations

    Distributed Firewall stays in "Publishing" state if you have a security group that contains an IPSet with 0.0.0.0/0 as an EXCLUDE member, an INCLUDE member, or as a part of 'dynamic membership containing Intersection (AND)'.

  • Fixed Issue 1628679: With identity-based firewall, the VM for removed users continues to be part of the security group

    When a user is removed from a group on the AD server, the VM where the user is logged in continues to be a part of the security group. This retains firewall policies at the VM vNIC on the hypervisor, thereby granting the user full access to services.

  • Fixed Issue 1787680: Deleting Universal Firewall Section fails when NSX Manager is in Transit mode
    When you try to delete a Universal Firewall Section from the UI of an NSX Manager in Transit mode and publish, the publish fails, and as a result you cannot set the NSX Manager to Standalone mode.
  • Fixed Issue 1773240: Security Admin role not allowed to edit and publish Service Insertion/Re-Direction rules

    If a user with a Security Admin role/privileges attempts to create/modify/publish Service Insertion rules, the operation fails.

  • Fixed Issue 1845174: All Security groups disappear from the UI once Security Policy is assigned

    All Security groups disappear from the UI once Security Policy is assigned.

  • Fixed Issue 1839953: ARP suppression of sub interface IP addresses of guest VMs fails

    ARP resolution for these interfaces takes slightly longer than usual (1 second) the first time.

Known Issues

The known issues are grouped as follows.

General Known Issues
  • Issue 1931236: System text is displayed in the UI tabs

    System text is displayed in the UI tabs, for example "Dashboard.System.Scale.Title" instead of "System Scale".

    Workaround: Clear browser cookies and refresh the browser, or log out and log back in to the vSphere Client.

  • In the vSphere Web Client, when you open a Flex component which overlaps an HTML view, the view becomes invisible.

    When you open a Flex component, such as a menu or dialog, which overlaps an HTML view, the view is temporarily hidden.
    (Reference: http://pubs.vmware.com/Release_Notes/en/developer/webclient/60/vwcsdk_600_releasenotes.html#issues)

    Workaround: None. 

  • Issue 1944031: The DNS monitor uses default port 53 for health check instead of other ports

    The monitor port in the pool member is used by the service monitor to perform health checks against the backend server. The DNS monitor uses default port 53 regardless of the defined monitor port. Do not change the backend DNS server listening port to a value other than 53, or the DNS monitor will fail.

  • Issue 1874863: Unable to authenticate with changed password after sslvpn service disable/enable with local authentication server

    When the SSL VPN service is disabled and re-enabled and local authentication is used, users are unable to log in with changed passwords.

    See VMware Knowledge Base Article 2151236 for more information.

  • Issue 1702339: Vulnerability scanners might report a Quagga bgp_dump_routes vulnerability CVE-2016-4049

    Vulnerability scanners might report a Quagga bgp_dump_routes vulnerability CVE-2016-4049 in NSX for vSphere. NSX for vSphere uses Quagga, but the BGP functionality (including the vulnerability) is not enabled. This vulnerability alert can be safely disregarded.

    Workaround: As the product is not vulnerable, no workaround is required.

  • Issue 1926467: Cursor not visible with Chrome version 59.0.3071.115 in the NSX Manager appliance management UI

    You cannot see the cursor to input the username/password when opening the NSX Manager appliance management UI using Chrome version 59.0.3071.115. You can still type your credentials, even if the cursor is not visible.

    Workaround: Upgrade Chrome browser to 61.0.3163.100 or 64.0.3253.0 or higher. 

  • Issue 2015520: Bridging configuration fails when the bridge name is more than 40 characters in length

    The bridging configuration fails when the bridge name is more than 40 characters in length.

    Workaround: When configuring the bridge, do not exceed 40 characters for the bridge name.

  • Issue 1934704: Non-ASCII bridge names are not supported

    If you configure non-ASCII characters in a bridge name in VDR, the bridge configuration is not seen on the host and the bridge data path does not work.

    Workaround: Use ASCII characters for the bridge name.

  • Issue 1529178: Uploading a server certificate which does not include a common name returns an "internal server error" message

    If you upload a server certificate that does not have a common name, an "internal server error" message appears. A server certificate without a common name is not supported.

  • Issue 2013820: Grouping object pool member cannot be shared with different pools that are using different IP filter policies

    Grouping object pool member cannot be shared with different pools that are using different IP filter policies.

    Workaround:

    1. If the grouping object pool member has to be shared across pools, make sure all the pools use the same IP filter policy.
    2. Use the IP address of the grouping object directly as the pool member if it has to be shared with different pools with different IP filter policies.
  • Issue 2016689: SMB application is not supported for RDSH users

    If you create an RDSH firewall rule that blocks or allows the SMB application, the rule will not trigger.

    Workaround: None.

  • Issue 2007991: Remote Desktop (RDP) Server or Client versions 4.0, 5.0, 5.1, 5.2 and 6.0 are not classified by DFW as L7 protocol

    If the customer environment uses RDP server or client versions 4.0, 5.0, 5.1, 5.2, or 6.0, the RDP traffic is not classified or identified as RDP traffic by a Layer 7 DFW rule using APP_RDP as the service. If the RDP traffic is expected to be blocked by the DFW rule, the traffic might not be blocked.

    Workaround: Create a Layer 4 rule using the Layer 4 RDP service to match the RDP traffic.

  • Issue 1993691: Removing a host without first removing it as a replication node can lead to stale entries in VSM

    If a host serves as a replication node for a HW VTEP and needs to be removed from its parent cluster, first ensure that it is no longer a replication node before removing it from the cluster. If that is not done, in some cases its status as a replication node is maintained in the NSX Manager database, which can cause errors when further manipulating replication nodes.

    Workaround: See Knowledge Base Article 52418 for more information.

  • Issue 2164138: When preparing a cluster for NSX, ixgbe driver is reloaded on hosts with physical NICs running ixgbe driver

    The ixgbe driver is reloaded to enable the RSS (receive side scaling) option to improve VXLAN throughput. The ixgbe driver reload causes the vmnics using the ixgbe driver to go down for a few seconds and come back up. In rare circumstances, the ixgbe driver can fail to reload, and the vmnics using the ixgbe driver remain down until the ESXi host is rebooted.

    Workaround: See VMware Knowledge Base article 52980 for more information.

Installation and Upgrade Known Issues

Before upgrading, please read the section Upgrade Notes, earlier in this document.

  • Issue 2036024: NSX Manager upgrade stuck at "Verifying uploaded file" due to database disk usage

    The upgrade log file vsm-upgrade.log also contains this message: "Database disk usage is at 75%, but it should be less than 70%". You can view vsm-upgrade.log in the NSX Manager support bundle. Navigate to Networking & Security > Support Bundle, and select to include NSX Manager logs.

    Workaround: Contact VMware customer support.

  • Issue 2033438: vSphere Web Client shows "No NSX Manager available" if upgrading to NSX 6.4.0 and TLS 1.0 only is enabled

    When you upgrade to NSX 6.4.0, the TLS settings are preserved. If you have only TLS 1.0 enabled, you can view the NSX plug-in in the vSphere Web Client, but NSX Managers are not visible. There is no impact to the datapath, but you cannot change any NSX Manager configuration.

    Workaround: Log in to the NSX appliance management web UI at https://nsx-mgr-ip/ and enable TLS 1.1 and TLS 1.2. This reboots the NSX Manager appliance.

  • Issue 2006028: Host upgrade may fail if vCenter Server system is rebooting during upgrade

    If the associated vCenter Server system is rebooted during a host upgrade, the host upgrade might fail and leave the host in maintenance mode. Clicking Resolve does not move the host out of maintenance mode. The cluster status is "Not Ready".

    Workaround: Exit the host from maintenance mode manually. Click "Not Ready" then "Resolve All" on the cluster.

  • Issue 1959940: Error when deploying NSX Manager OVF using ovftool: "Invalid OVF name (VSMgmt) specified in net mapping"

    Starting in NSX 6.4.0, VSMgmt is no longer a valid name for the net mapping in the appliance OVF, and is replaced with "Management Network".

    Workaround: Use "Management Network" instead. For example, instead of --net:VSMgmt='VM Network' use --net:'Management Network=VM Network'.
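
    For example, a complete invocation might look like the following (a sketch only: the OVA filename, datastore, and vi:// target are placeholders for your environment, and the additional --prop: options required by the NSX Manager OVF are omitted):

      ovftool --acceptAllEulas \
        --datastore=<datastore> \
        --net:'Management Network=VM Network' \
        VMware-NSX-Manager-6.4.0-7564187.ova \
        'vi://administrator@vsphere.local@<vcenter>/<datacenter>/host/<cluster>/'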

  • Issue 2001988: During NSX host cluster upgrade, Installation status in Host Preparation tab alternates between "not ready" and "installing" for the entire cluster when each host in the cluster is upgrading

    During NSX upgrade, clicking "upgrade available" for NSX prepared cluster triggers host upgrade. For clusters configured with DRS FULL AUTOMATIC, the installation status alternates between "installing" and "not ready", even though the hosts are upgraded in the background without issues.

    Workaround: This is a user interface issue and can be ignored. Wait for the host cluster upgrade to proceed.

  • Issue 1859572: During the uninstall of NSX VIBs version 6.3.x on ESXi hosts that are being managed by vCenter version 6.0.0, the host continues to stay in Maintenance mode
    If you are uninstalling NSX VIBs version 6.3.x on a cluster, the workflow involves putting the hosts into Maintenance mode, uninstalling the VIBs, and then removing the hosts from Maintenance mode by the EAM service. However, if such hosts are managed by vCenter Server version 6.0.0, the host remains stuck in Maintenance mode after the VIBs are uninstalled. The EAM service responsible for uninstalling the VIBs puts the host in Maintenance mode but fails to move the host out of Maintenance mode.

    Workaround: Manually move the host out of Maintenance mode. This issue is not seen if the host is managed by vCenter Server version 6.5a and above.

  • Issue 1797929: Message bus channel down after host cluster upgrade
    After a host cluster upgrade, vCenter 6.0 (and earlier) does not generate the "reconnect" event, and as a result, NSX Manager does not set up the messaging infrastructure on the host. This issue has been fixed in vCenter 6.5.

    Workaround: Resync the messaging infrastructure as shown below:
    POST https://<nsxmanager>/api/2.0/nwfabric/configure?action=synchronize

    <nwFabricFeatureConfig>
      <featureId>com.vmware.vshield.vsm.messagingInfra</featureId>
      <resourceConfig>
        <resourceId>host-15</resourceId>
      </resourceConfig>
    </nwFabricFeatureConfig>
  • Issue 1263858: SSL VPN does not send upgrade notification to remote client
    SSL VPN gateway does not send an upgrade notification to users. The administrator has to manually communicate to remote users that the SSL VPN gateway (server) is updated and that they must update their clients.

    Workaround: Users need to uninstall the older version of the client and install the latest version manually.

  • Issue 1979457: If GI-SVM is deleted or removed during the upgrade process in backward compatibility mode, identity firewall through Guest Introspection (GI) will not work unless the GI cluster is upgraded.

    Identity firewall will not work, and no logs related to identity firewall are seen. Identity firewall protection is suspended until the cluster is upgraded.

    Workaround: Upgrade the cluster so that all the hosts are running the newer version of GI-SVM, or enable the log scraper for identity firewall to work.

  • Issue 2016824: NSX context engine fails to start after Guest Introspection installation and/or upgrade

    This issue occurs when Guest Introspection (GI) is installed or upgraded before Host Preparation completes. Identity Firewall for RDSH VMs will not work if the context engine is not started.

    Workaround: See VMware Knowledge Base article 51973 for details.

  • Issue 2027916: Upgrade Coordinator may show that controllers that failed to upgrade were successfully upgraded

    For a three-node controller cluster, if the third controller fails during the upgrade and is removed, the entire controller cluster upgrade might be marked as successful even though the upgrade failed.

    Workaround: Check the Install & Upgrade > Management tab and make sure all controllers are shown as "Upgraded" before upgrading other components (hosts, Edges, etc.).

NSX Manager Known Issues
  • Issue 1991125: Grouping objects created in an Application Rule Manager session on 6.3.x are not reflected on the dashboard after upgrading NSX Manager to 6.4.0

    Grouping objects created in an Application Rule Manager session on 6.3.x are not reflected on the dashboard after upgrading NSX Manager to 6.4.0.

    Workaround: The grouping objects created in the Application Rule Manager session on 6.3.x are still available after upgrading NSX Manager to 6.4.0, under the respective page of the Grouping Objects section.

  • Issue 1892999: Cannot modify the Unique Selection Criteria even when no VMs are attached to the Universal Security Tag

    If a VM attached to a universal security tag is deleted, an internal object representing the VM remains attached to the universal security tag. This causes the universal selection criteria change to fail with an error that universal security tags are still attached to VMs.

    Workaround: Delete all the universal security tags and then change the universal selection criteria.

Logical Networking and NSX Edge Known Issues
  • Issue 1983902: In a scale setup environment, after NSX Manager reboot, netcpad does not immediately connect to vsfwd

    There is no datapath impact. The system recovers without intervention after 13 minutes.

    Workaround: None

  • Issue 1747978: OSPF adjacencies are deleted with MD5 authentication after NSX Edge HA failover
    In an NSX for vSphere 6.2.4 environment where the NSX Edge is configured for HA with OSPF graceful restart configured and MD5 used for authentication, OSPF fails to start gracefully. Adjacencies form only after the dead timer expires on the OSPF neighbor nodes.

    Workaround: None

  • Issue 1525003: Restoring an NSX Manager backup with an incorrect passphrase will silently fail as critical root folders cannot be accessed

    Workaround: None.

  • Issue 1995142: Host is not removed from replication cluster after being removed from VC inventory

    If a user adds a host to a replication cluster and then removes the host from the VC inventory before removing it from the cluster, the stale host remains in the replication cluster.

    Workaround: Before removing a host from the VC inventory, first remove it from the replication cluster, if it belongs to one.

  • Issue 2005973: Routing daemon MSR loses all routing configuration after GRE tunnels are deleted and the edge node is then force synced from the Management Plane

    This problem can occur on an edge with BGP sessions over GRE tunnels. If some of the GRE tunnels are deleted and a force sync of the edge is then performed from the Management Plane, the edge loses all routing configuration.

    Workaround: Reboot the edge node.

  • Issue 2015368: Firewall logging may cause out-of-memory issues under certain circumstances

    When the Edge firewall is enabled in configurations of high scale, and firewall logging is enabled on some or all rules, it is possible, although uncommon, for the Edge to encounter an Out-Of-Memory (OOM) condition. This is especially true when there is a lot of traffic hitting the logging rules. When an OOM condition occurs, the Edge VM will automatically reboot.

    Workaround: Firewall logging is best enabled temporarily for debugging purposes and then disabled again for normal operation. To avoid this OOM condition, disable all firewall logging.
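
    To find which rules still have logging enabled, the Edge firewall configuration can be read over the REST API. A minimal sketch follows, assuming the standard NSX-v Edge firewall config endpoint; the host name, credentials, and Edge ID are hypothetical placeholders, and the loggingEnabled element name should be verified against the API Guide.

        # Sketch: report Edge firewall rules that have logging enabled.
        # Host, credentials, and EDGE_ID are hypothetical; element names
        # are assumed from the usual Edge firewall config schema.
        import requests
        import xml.etree.ElementTree as ET

        NSX_MGR = "https://nsxmgr.example.com"
        AUTH = ("admin", "password")
        EDGE_ID = "edge-1"

        resp = requests.get(f"{NSX_MGR}/api/4.0/edges/{EDGE_ID}/firewall/config",
                            auth=AUTH, verify=False)
        resp.raise_for_status()

        for rule in ET.fromstring(resp.content).iter("firewallRule"):
            if rule.findtext("loggingEnabled") == "true":
                print("logging enabled on rule", rule.findtext("id"), rule.findtext("name"))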

  • Issue 2005900: Routing daemon MSR on Edge is stuck at 100% CPU when all GRE tunnels are flapped in an 8-way iBGP/multi-hop BGP ECMP scale topology

    This problem can occur in a scale topology where iBGP or multi-hop BGP is configured on ESG with multiple neighbors running over many GRE tunnels. When multiple GRE tunnels flap, MSR may get stuck indefinitely at 100% CPU.

    Workaround: Reboot the edge node.

Security Services Known Issues
  • Issue 1948648: Changes in AD group membership do not immediately take effect for logged-in users using RDSH Identity Firewall rules

    RDSH Identity Firewall rules are created. An Active Directory user logs into their RDS Desktop and firewall rules take effect. The AD administrator then makes changes to the AD user’s group membership and a delta sync is performed on the NSX Manager. However, the AD group membership change is not immediately seen by the logged in user and does not result in a change to the user’s effective firewall rules. This behavior is a limitation of Active Directory.

    Workaround: Users must log off and then log back in for AD group membership changes to take effect. 

  • Issue 2017806: Error message is not clear when adding members to security groups used in RDSH firewall sections on security policies

    If a security group is used in RDSH firewall sections on security policies, you can only add directory group members to it. If you try to add any member other than directory group, the following error displays:
    "Security group is being used by service composer, Firewall and cannot be modified"

    The error message does not convey that the security group cannot be modified because it is used in RDSH firewall sections on security policies.

    Workaround: None.

  • Issue 2026069: Using Triple DES cipher as encryption algorithm in NSX Edge IPsec VPN service may result in loss of IPsec tunnels to any IPsec sites upon upgrade

    VMware does not recommend using 3DES as the encryption algorithm in the NSX Edge IPsec VPN service, for security reasons; it remains available in NSX releases up to version 6.4 only for interoperability. Based on these security recommendations, support for this cipher will be removed in the next release of NSX for vSphere. Stop using Triple DES as the encryption algorithm and switch to one of the secure ciphers available in the IPsec service. This change applies to IKE SA (phase 1) as well as IPsec SA (phase 2) negotiation for an IPsec site.

    If the 3DES encryption algorithm is still in use by the NSX Edge IPsec service at the time of upgrade to the release in which its support is removed, it will be replaced by another recommended cipher, and the IPsec sites that were using 3DES will not come up unless the configuration on the remote peer is modified to match the encryption algorithm used on the NSX Edge.

    Workaround: Modify the encryption algorithm in the IPsec site configuration, replacing Triple DES with one of the supported AES variants (AES / AES256 / AES-GCM). For example, for each IPsec site configuration that uses Triple DES, replace it with AES. Update the IPsec configuration at the peer endpoint accordingly.
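
    For Edges with many IPsec sites, the substitution can be scripted against the Edge IPsec config endpoint. This is a hedged sketch, not a supported tool: the endpoint follows the NSX-v IPsec VPN API, but the host name, credentials, and Edge ID are placeholders, and the encryptionAlgorithm element and its literal values ("3des", "aes256") are assumptions to confirm in the API Guide. Remember to update the remote peer to match.

        # Sketch: replace 3DES with AES256 in an Edge IPsec VPN configuration.
        # Host, credentials, EDGE_ID, and the cipher literals are assumptions.
        import requests
        import xml.etree.ElementTree as ET

        NSX_MGR = "https://nsxmgr.example.com"
        AUTH = ("admin", "password")
        EDGE_ID = "edge-1"
        URL = f"{NSX_MGR}/api/4.0/edges/{EDGE_ID}/ipsec/config"

        resp = requests.get(URL, auth=AUTH, verify=False)
        resp.raise_for_status()
        root = ET.fromstring(resp.content)

        changed = False
        for enc in root.iter("encryptionAlgorithm"):
            if enc.text and enc.text.lower() == "3des":
                enc.text = "aes256"
                changed = True

        if changed:
            # Push the modified configuration back to the Edge.
            requests.put(URL, data=ET.tostring(root), auth=AUTH, verify=False,
                         headers={"Content-Type": "application/xml"})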

  • Issue 1648578: NSX forces the addition of cluster/network/storage when creating a new NetX host-based service instance

    When you create a new service instance from the vSphere Web Client for NetX host-based services such as Firewall, IDS, and IPS, you are forced to add cluster/network/storage information even though it is not required.

    Workaround: When creating a new service instance, enter any values for cluster/network/storage to fill out the fields. This allows the service instance to be created, and you can proceed as required.

  • Issue 2018077: Firewall publish fails when rule has custom L7 ALG service without destination port and protocol

    When you create an L7 service by selecting any of the following L7 ALG applications (APP_FTP, APP_TFTP, APP_DCERPC, APP_ORACLE) without providing a destination port and protocol, and then use that service in firewall rules, the firewall rule publish fails.

    Workaround: Provide the appropriate destination port and protocol (TCP/UDP) values when creating a custom L7 service for the following ALG services, as shown in the sketch after this list:

    • APP_FTP: port 21, protocol TCP
    • APP_TFTP: port 69, protocol UDP
    • APP_DCERPC: port 135, protocol TCP
    • APP_ORACLE: port 1521, protocol TCP
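
    The sketch below illustrates creating such a service with the required port and protocol through the grouping-objects API. The application-creation endpoint and the element/applicationProtocol/value fields follow the NSX-v API, but the layer7 and appGuidName fields that bind the service to APP_FTP, as well as the globalroot-0 scope, are assumptions: verify the exact schema in the NSX for vSphere API Guide before use.

        # Sketch: create a custom L7 ALG service with explicit destination
        # port and protocol. Host and credentials are hypothetical; the
        # layer7/appGuidName fields are assumed, not confirmed.
        import requests

        NSX_MGR = "https://nsxmgr.example.com"
        AUTH = ("admin", "password")

        BODY = """
        <application>
          <name>custom-ftp-alg</name>
          <element>
            <applicationProtocol>TCP</applicationProtocol>  <!-- required protocol -->
            <value>21</value>                               <!-- required destination port -->
          </element>
          <layer>layer7</layer>                             <!-- assumed field -->
          <appGuidName>APP_FTP</appGuidName>                <!-- assumed field -->
        </application>
        """

        resp = requests.post(f"{NSX_MGR}/api/2.0/services/application/globalroot-0",
                             data=BODY, auth=AUTH, verify=False,
                             headers={"Content-Type": "application/xml"})
        resp.raise_for_status()
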
  • Issue 1980551: NSX Manager does not support TLSv1 by default

    If you try to connect to NSX Manager using TLSv1, the connection fails.

    Workaround: Use a higher version of TLS: TLSv1.1 or TLSv1.2.

    Using TLSv1.0 is not recommended, and its removal is planned for a future release, but it is possible to enable it for NSX Manager. See "Working with Security Settings" in the NSX for vSphere API Guide for instructions. Note that changing security settings causes NSX Manager to reboot.
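
    To confirm which TLS versions an NSX Manager accepts from the client side, a short probe using only the Python standard library can be used. The host name is a hypothetical placeholder; certificate verification is disabled only because the probe tests protocol negotiation, not trust.

        # Sketch: probe TLSv1.0/1.1/1.2 support on an NSX Manager endpoint.
        # nsxmgr.example.com is a hypothetical placeholder.
        import socket
        import ssl

        HOST = "nsxmgr.example.com"

        for name, proto in [("TLSv1.0", ssl.PROTOCOL_TLSv1),
                            ("TLSv1.1", ssl.PROTOCOL_TLSv1_1),
                            ("TLSv1.2", ssl.PROTOCOL_TLSv1_2)]:
            ctx = ssl.SSLContext(proto)
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE
            try:
                with socket.create_connection((HOST, 443), timeout=5) as sock:
                    with ctx.wrap_socket(sock, server_hostname=HOST):
                        print(name, "accepted")
            except (ssl.SSLError, OSError):
                print(name, "rejected or unreachable")

    On a default 6.4.0 deployment, TLSv1.0 should report rejected while TLSv1.1 and TLSv1.2 are accepted.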

  • Issue 2006576: Saved state (connection information stored for traffic hitting service insertion filters) is lost when guest VM with service insertion filter on it is moved from one cluster to another (assuming both clusters have same service deployed)

    Guest VMs with NetX (service insertion) rules configured will temporarily lose those rules if the destination cluster of a vMotion is not the same as the origin cluster.

    Workaround: Limit vMotion of guest VMs with service insertion filters to hosts within the same cluster.

Monitoring Services Known Issues
  • Issue 1466790: Unable to choose VMs on bridged network using the NSX traceflow tool

    Using the NSX traceflow tool, you cannot select VMs that are not attached to a logical switch. This means that VMs on an L2 bridged network cannot be chosen by VM name as the source or destination address for traceflow inspection.

    Workaround: For VMs attached to L2 bridged networks, use the IP address or MAC address of the interface you wish to specify as the destination in a traceflow inspection. You cannot choose VMs attached to L2 bridged networks as the source. See VMware Knowledge Base article 2129191 for more information.
