
VMware NSX for vSphere 6.3.6 Release Notes

VMware NSX for vSphere 6.3.6 | Released March 29, 2018 | Build 8085122

See the Revision History of this document.

What's in the Release Notes

The release notes cover the following topics:

What's New in NSX 6.3.6

NSX for vSphere 6.3.6 addresses a number of specific customer bugs. See Resolved Issues for more information.


Versions, System Requirements, and Installation

Note:

  • The table below lists recommended versions of VMware software. These recommendations are general and should not replace or override environment-specific recommendations.

  • This information is current as of the publication date of this document.

  • For the minimum supported version of NSX and other VMware products, see the VMware Product Interoperability Matrix. VMware declares minimum supported versions based on internal testing.

Product or Component | Recommended Version
NSX for vSphere

VMware recommends the latest NSX release for new deployments.

When upgrading existing deployments, please review the NSX Release Notes or contact your VMware technical support representative for more information on specific issues before planning an upgrade.

vSphere

Guest Introspection for Windows
All versions of VMware Tools are supported. Some Guest Introspection-based features require newer VMware Tools versions:
  • Use VMware Tools 10.0.9 and 10.0.12 to enable the optional Thin Agent Network Introspection Driver component packaged with VMware Tools.
  • Upgrade to VMware Tools 10.0.8 and later to resolve slow VMs after upgrading VMware Tools in NSX / vCloud Networking and Security (see VMware knowledge base article 2144236).
  • Use VMware Tools 10.1.0 and later for Windows 10 support.
  • Use VMware Tools 10.1.10 and later for Windows Server 2016 support.
Guest Introspection for Linux
This NSX version supports the following Linux versions:
  • RHEL 7 GA (64 bit)
  • SLES 12 GA (64 bit)
  • Ubuntu 14.04 LTS (64 bit)

System Requirements and Installation

For the complete list of NSX installation prerequisites, see the System Requirements for NSX section in the NSX Installation Guide.

For installation instructions, see the NSX Installation Guide or the NSX Cross-vCenter Installation Guide.

Deprecated and Discontinued Functionality

End of Life and End of Support Warnings

For information about NSX and other VMware products that must be upgraded soon, please consult the VMware Lifecycle Product Matrix.

  • NSX for vSphere 6.1.x reached End of Availability (EOA) and End of General Support (EOGS) on January 15, 2017. (See also VMware knowledge base article 2144769.)

  • NSX for vSphere 6.2.x will reach End of General Support (EOGS) on August 20, 2018.

  • NSX Data Security removed: As of NSX 6.3.0, the NSX Data Security feature has been removed from the product.

  • NSX Activity Monitoring (SAM) deprecated: As of NSX 6.3.0, Activity Monitoring is no longer a supported feature of NSX. As a replacement, please use Endpoint Monitoring. For more information see Endpoint Monitoring in the NSX Administration Guide.
  • Web Access Terminal removed: Web Access Terminal (WAT) has been removed from NSX 6.3.0. You cannot configure Web Access SSL VPN-Plus and enable the public URL access through NSX Edge. VMware recommends using the full access client with SSL VPN deployments for improved security. If you are using WAT functionality in an earlier release, you must disable it before upgrading to 6.3.0.

  • IS-IS removed from NSX Edge: From NSX 6.3.0, you cannot configure IS-IS Protocol from the Routing tab.

  • vCNS Edges no longer supported. You must upgrade to an NSX Edge first before upgrading to NSX 6.3.x.

General Behavior Changes

If you have more than one vSphere Distributed Switch, and if VXLAN is configured on one of them, you must connect any Distributed Logical Router interfaces to port groups on that vSphere Distributed Switch. Starting in NSX 6.3.6, this configuration is enforced in the UI and API. In earlier releases, you were not prevented from creating an invalid configuration.

API Removals and Behavior Changes

Changes in API error handling

NSX 6.3.5 introduces these changes in error handling:

  • If an API request results in a database exception on the NSX Manager, the response is 500 Internal server error. In previous releases, NSX Manager responded with 200 OK, even though the request failed.
  • If you send an API request with an empty body when a request body is expected, the response is 400 Bad request. In previous releases NSX Manager responded with 500 Internal server error.
  • If you specify an incorrect security group in this API, GET /api/2.0/services/policy/securitygroup/{ID}/securitypolicies, the response is 404 Not found. In previous releases NSX Manager responded with 200 OK.
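
For example, a request that names a security group ID that does not exist in the inventory now returns an error status instead of 200 OK (the security group ID below is a placeholder):

    GET https://<NSXManager>/api/2.0/services/policy/securitygroup/securitygroup-123/securitypolicies
    Response in NSX 6.3.5 and later: 404 Not found
    Response in earlier releases: 200 OK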

Changes in backup and restore API defaults

Starting in 6.3.3, the defaults for two backup and restore parameters have changed to match the defaults in the UI. Previously passiveMode and useEPSV defaulted to false, now they default to true. This affects the following APIs:

  • PUT /api/1.0/appliance-management/backuprestore/backupsettings
  • PUT /api/1.0/appliance-management/backuprestore/backupsettings/ftpsettings
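
For example, a minimal sketch of explicitly setting these parameters when updating FTP settings. The element names other than passiveMode and useEPSV are illustrative; retrieve your current settings with a GET on the same URL and confirm the schema in the NSX API Guide before use:

    PUT https://<NSXManager>/api/1.0/appliance-management/backuprestore/backupsettings/ftpsettings

    <ftpSettings>
      <transferProtocol>FTP</transferProtocol>
      <hostNameIPAddress>backup.example.com</hostNameIPAddress>
      <port>21</port>
      <userName>backupuser</userName>
      <password>example-password</password>
      <backupDirectory>/backups</backupDirectory>
      <filenamePrefix>nsx_backup</filenamePrefix>
      <passiveMode>true</passiveMode>
      <useEPSV>true</useEPSV>
    </ftpSettings>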

Deleting firewall configuration or default section

  • Starting in 6.3.0, this request is rejected if the default section is specified: DELETE /api/4.0/firewall/globalroot-0/config/layer2sections|layer3sections/sectionId
  • A new method is introduced to get default configuration. Use the output of this method to replace entire configuration or any of the default sections:
    • Get default configuration with GET /api/4.0/firewall/globalroot-0/defaultconfig
    • Update entire configuration with PUT /api/4.0/firewall/globalroot-0/config
    • Update single section with PUT /api/4.0/firewall/globalroot-0/config/layer2sections|layer3sections/{sectionId}
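
As a sketch of the workflow for restoring a default Layer 3 section (the If-Match/ETag handling follows the general distributed firewall API pattern; confirm the details in the NSX API Guide):

    1. GET https://<NSXManager>/api/4.0/firewall/globalroot-0/defaultconfig
       Copy the default Layer 2 or Layer 3 section from the response.
    2. PUT https://<NSXManager>/api/4.0/firewall/globalroot-0/config/layer3sections/{sectionId}
       If-Match: <ETag of the current section>
       Request body: the default section XML copied in step 1.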

defaultOriginate parameter:

Starting in NSX 6.3.0, the defaultOriginate parameter is removed from the following methods for logical (distributed) router NSX Edge appliances only:

  • GET/PUT /api/4.0/edges/{edge-id}/routing/config/ospf
  • GET/PUT /api/4.0/edges/{edge-id}/routing/config/bgp
  • GET/PUT /api/4.0/edges/{edge-id}/routing/config

Setting defaultOriginate to true on an NSX 6.3.0 or later logical (distributed) router edge appliance will fail.

All IS-IS methods removed from NSX Edge routing.

  • GET/PUT/DELETE /api/4.0/edges/{edge-id}/routing/config/isis
  • GET/PUT /api/4.0/edges/{edge-id}/routing/config

CLI Removals and Behavior Changes

Do not use unsupported commands on NSX Controller nodes
There are undocumented commands to configure NTP and DNS on NSX Controller nodes. These commands are not supported, and should not be used on NSX Controller nodes. You should only use commands which are documented in the NSX CLI Guide.

Upgrade Notes

Note: For a list of known issues affecting installation and upgrades see the section, Installation and Upgrade Known Issues.

General Upgrade Notes

  • To upgrade NSX, you must perform a full NSX upgrade including host cluster upgrade (which upgrades the host VIBs). For instructions, see the NSX Upgrade Guide including the Upgrade Host Clusters section.

  • System Requirements: For information on system requirements while installing and upgrading NSX, see the System Requirements for NSX section in NSX documentation.

  • Upgrade path from NSX 6.x: The VMware Product Interoperability Matrix provides details about the upgrade paths from VMware NSX.
  • Cross-vCenter NSX upgrade is covered in the NSX Upgrade Guide.

  • Downgrades are not supported:
    • Always capture a backup of NSX Manager before proceeding with an upgrade.

    • Once NSX has been upgraded successfully, NSX cannot be downgraded.

  • To validate that your upgrade to NSX 6.3.x was successful see knowledge base article 2134525.
  • There is no support for upgrades from vCloud Networking and Security to NSX 6.3.x. You must upgrade to a supported 6.2.x release first.

  • Interoperability: Check the VMware Product Interoperability Matrix for all relevant VMware products before upgrading.
    • Upgrading to vSphere 6.5a or later: When upgrading from vSphere 5.5 or 6.0 to vSphere 6.5a or later, you must first upgrade to NSX 6.3.x. See Upgrading vSphere in an NSX Environment in the NSX Upgrade Guide.
      Note: NSX 6.2.x is not compatible with vSphere 6.5.
    • Upgrading to NSX 6.3.3 or later: The minimum supported version of vSphere for NSX interoperability changes between NSX 6.3.2 and NSX 6.3.3. See the VMware Product Interoperability Matrix for details.

  • Partner services compatibility: If your site uses VMware partner services for Guest Introspection or Network Introspection, you must review the  VMware Compatibility Guide before you upgrade, to verify that your vendor's service is compatible with this release of NSX.
  • Networking and Security plug-in: After upgrading NSX Manager, you must log out and log back in to the vSphere Web Client. If the NSX plug-in does not display correctly, clear your browser cache and history. If the Networking and Security plug-in does not appear in the vSphere Web Client, reset the vSphere Web Client server as explained in the NSX Upgrade Guide.
  • Stateless environments: In NSX upgrades in a stateless host environment, the new VIBs are pre-added to the Host Image profile during the NSX upgrade process. As a result, the NSX upgrade process on stateless hosts follows this sequence:

    Prior to NSX 6.2.0, there was a single URL on NSX Manager from which VIBs for a certain version of the ESX Host could be found. (Meaning the administrator only needed to know a single URL, regardless of NSX version.) In NSX 6.2.0 and later, the new NSX VIBs are available at different URLs. To find the correct VIBs, you must perform the following steps:

    1. Find the new VIB URL from https://<NSXManager>/bin/vdn/nwfabric.properties.
    2. Fetch VIBs of required ESX host version from corresponding URL.
    3. Add them to host image profile.
       

Upgrade Notes for NSX Components

NSX Manager Upgrade

  • Important: If you are upgrading from NSX 6.2.0, 6.2.1, or 6.2.2 to NSX 6.3.5 or later, you must complete a workaround before starting the upgrade. See VMware Knowledge Base article 000051624 for details.

  • If you use SFTP for NSX backups, change to hmac-sha2-256 after upgrading to 6.3.x because there is no support for hmac-sha1. See VMware Knowledge Base article 2149282 for a list of supported security algorithms in 6.3.x.

  • If you want to upgrade from NSX 6.3.3 to NSX 6.3.4 or later you must first follow the workaround instructions in VMware Knowledge Base article 2151719.

  • When you upgrade NSX Manager to NSX 6.3.6, a backup is automatically taken and saved locally as part of the upgrade process. See Upgrade NSX Manager for more information.

Controller Upgrade

  • In NSX 6.3.3, the NSX Controller appliance disk size changes from 20GB to 28GB.

  • The NSX Controller cluster must contain three controller nodes to upgrade to NSX 6.3.3. If it has fewer than three controllers, you must add controllers before starting the upgrade. See Deploy NSX Controller Cluster for instructions.
  • In NSX 6.3.3, the underlying operating system of the NSX Controller changes. This means that when you upgrade from NSX 6.3.2 or earlier to NSX 6.3.3 or later, instead of an in-place software upgrade, the existing controllers are deleted one at a time, and new Photon OS based controllers are deployed using the same IP addresses.

    When the controllers are deleted, this also deletes any associated DRS anti-affinity rules. You must create new anti-affinity rules in vCenter to prevent the new controller VMs from residing on the same host.

    See Upgrade the NSX Controller Cluster for more information on controller upgrades.

Host Cluster Upgrade

  • In NSX 6.3.3, NSX VIB names change on ESXi 6.0 and later. VIB names on ESXi 5.5 remain the same.
    The esx-vxlan and esx-vsip VIBs are replaced with esx-nsxv if you have NSX 6.3.3 or later installed on ESXi 6.0 or later.

  • Rebootless upgrade and uninstall on hosts: On vSphere 6.0 and later, once you have upgraded to NSX 6.3.x, any subsequent NSX VIB changes will not require a reboot. Instead hosts must enter maintenance mode to complete the VIB change.

    A host reboot is not required during the following tasks:

    • NSX 6.3.0 to NSX 6.3.x upgrades on ESXi 6.0 or later.
    • The NSX 6.3.x VIB install that is required after upgrading ESXi from 6.0 to 6.5.0a or later.

      Note: The ESXi upgrade still requires a host reboot.

    • NSX 6.3.x VIB uninstall on ESXi 6.0 or later.

    A host reboot is required during the following tasks:

    • NSX 6.2.x or earlier to NSX 6.3.x upgrades (any ESXi version).
    • NSX 6.3.0 to NSX 6.3.x upgrades on ESXi 5.5.
    • The NSX 6.3.x VIB install that is required after upgrading ESXi from 5.5 to 6.0 or later.
    • NSX 6.3.x VIB uninstall on ESXi 5.5.
  • Host may become stuck in the installing state: During large NSX upgrades, a host may become stuck in the installing state for a long time. This can occur due to issues uninstalling old NSX VIBs. In this case the EAM thread associated with this host will be reported in the VI Client Tasks list as stuck.
    Workaround: Do the following:

    • Log into vCenter using the VI Client.
    • Right click on the stuck EAM task and cancel it.
    • From the vSphere Web Client, issue a Resolve on the cluster. The stuck host may now show as InProgress.
    • Log into the host and issue a reboot to force completion of the upgrade on that host.

NSX Edge Upgrade

  • In NSX 6.3.0, the NSX Edge appliance disk sizes have changed:

    • Compact, Large, Quad Large: 1 disk 584MB + 1 disk 512MB

    • XLarge: 1 disk 584MB + 1 disk 2GB + 1 disk 256MB

  • Host clusters must be prepared for NSX before upgrading NSX Edge appliances: Management-plane communication between NSX Manager and Edge via the VIX channel is no longer supported starting in 6.3.0. Only the message bus channel is supported. When you upgrade from NSX 6.2.x or earlier to NSX 6.3.0 or later, you must verify that host clusters where NSX Edge appliances are deployed are prepared for NSX, and that the messaging infrastructure status is GREEN. If host clusters are not prepared for NSX, upgrade of the NSX Edge appliance will fail. See Upgrade NSX Edge in the NSX Upgrade Guide for details.

  • Upgrading Edge Services Gateway (ESG):
    Starting in NSX 6.2.5, resource reservation is carried out at the time of NSX Edge upgrade. When vSphere HA is enabled on a cluster having insufficient resources, the upgrade operation may fail due to vSphere HA constraints being violated.

    To avoid such upgrade failures, perform the following steps before you upgrade an ESG:

    The following resource reservations are used by the NSX Manager if you have not explicitly set values at the time of install or upgrade.

     NSX Edge Form Factor   CPU Reservation   Memory Reservation
     COMPACT                1000 MHz          512 MB
     LARGE                  2000 MHz          1024 MB
     QUADLARGE              4000 MHz          2048 MB
     X-LARGE                6000 MHz          8192 MB
     1. Always ensure that your installation follows the best practices laid out for vSphere HA. Refer to VMware Knowledge Base article 1002080.

    2. Use the NSX tuning configuration API:
      PUT https://<NSXManager>/api/4.0/edgePublish/tuningConfiguration
      ensuring that values for edgeVCpuReservationPercentage and edgeMemoryReservationPercentage fit within available resources for the form factor (see table above for defaults).
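
       As a sketch, the usual pattern is to GET the current tuning configuration from the same URL, change only the two reservation percentages, and PUT the full document back. The percentage values below are placeholders, and any other elements returned by the GET should be sent back unchanged (confirm the schema in the NSX API Guide):

       GET https://<NSXManager>/api/4.0/edgePublish/tuningConfiguration
       PUT https://<NSXManager>/api/4.0/edgePublish/tuningConfiguration

       <tuningConfiguration>
         ... (other elements from the GET response, unchanged) ...
         <edgeVCpuReservationPercentage>50</edgeVCpuReservationPercentage>
         <edgeMemoryReservationPercentage>50</edgeMemoryReservationPercentage>
       </tuningConfiguration>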

  • Disable vSphere's Virtual Machine Startup option where vSphere HA is enabled and Edges are deployed. After you upgrade your 6.2.4 or earlier NSX Edges to 6.2.5 or later, you must turn off the vSphere Virtual Machine Startup option for each NSX Edge in a cluster where vSphere HA is enabled and Edges are deployed. To do this, open the vSphere Web Client, find the ESXi host where the NSX Edge virtual machine resides, and click Manage > Settings. Under Virtual Machines, select VM Startup/Shutdown, click Edit, and make sure that the virtual machine is in Manual mode (that is, make sure it is not added to the Automatic Startup/Shutdown list).

  • Before upgrading to NSX 6.2.5 or later, make sure all load balancer cipher lists are colon separated. If your cipher list uses another separator such as a comma, make a PUT call to https://nsxmgr_ip/api/4.0/edges/EdgeID/loadbalancer/config/applicationprofiles and replace each <ciphers> list in <clientSsl> and <serverSsl> with a colon-separated list. For example, the relevant segment of the request body might look like the following. Repeat this procedure for all application profiles:

    <applicationProfile>
      <name>https-profile</name>
      <insertXForwardedFor>false</insertXForwardedFor>
      <sslPassthrough>false</sslPassthrough>
      <template>HTTPS</template>
      <serverSslEnabled>true</serverSslEnabled>
      <clientSsl>
        <ciphers>AES128-SHA:AES256-SHA:ECDHE-ECDSA-AES256-SHA</ciphers>
        <clientAuth>ignore</clientAuth>
        <serviceCertificate>certificate-4</serviceCertificate>
      </clientSsl>
      <serverSsl>
        <ciphers>AES128-SHA:AES256-SHA:ECDHE-ECDSA-AES256-SHA</ciphers>
        <serviceCertificate>certificate-4</serviceCertificate>
      </serverSsl>
      ...
    </applicationProfile>
  • Set Correct Cipher version for Load Balanced Clients on vROPs versions older than 6.2.0: vROPs pool members on vROPs versions older than 6.2.0 use TLS version 1.0 and therefore you must set a monitor extension value explicitly by setting "ssl-version=10" in the NSX Load Balancer configuration. See Create a Service Monitor in the NSX Administration Guide for instructions.
    {
        "expected" : null,
        "extension" : "ssl-version=10",
        "send" : null,
        "maxRetries" : 2,
        "name" : "sm_vrops",
        "url" : "/suite-api/api/deployment/node/status",
        "timeout" : 5,
        "type" : "https",
        "receive" : null,
        "interval" : 60,
        "method" : "GET"
    }

Guest Introspection Upgrade

  • Guest Introspection VMs now contain additional host identifying information in an XML file on the machine. When logging in to the Guest Introspection VM, the file "/opt/vmware/etc/vami/ovfEnv.xml" should include host identity information.

Upgrade Notes for FIPS

When you upgrade from a version of NSX earlier than NSX 6.3.0 to NSX 6.3.0 or later, you must not enable FIPS mode before the upgrade is completed. Enabling FIPS mode before the upgrade is complete will interrupt communication between upgraded and not-upgraded components. See Understanding FIPS Mode and NSX Upgrade in the NSX Upgrade Guide for more information.

  • Ciphers supported on OS X Yosemite and OS X El Capitan: If you are using the SSL VPN client on OS X 10.11 (El Capitan), you will be able to connect using the AES128-GCM-SHA256, ECDHE-RSA-AES128-GCM-SHA256, ECDHE-RSA-AES256-GCM-SHA384, AES256-SHA, and AES128-SHA ciphers; those using OS X 10.10 (Yosemite) will be able to connect using the AES256-SHA and AES128-SHA ciphers only.

  • Do not enable FIPS before the upgrade to NSX 6.3.x is complete. See Understand FIPS mode and NSX Upgrade in the NSX Upgrade Guide for more information.

  • Before you enable FIPS, verify any partner solutions are FIPS mode certified. See the VMware Compatibility Guide and the relevant partner documentation.

FIPS Compliance

  • NSS and OpenSwan: The NSX Edge IPsec VPN uses the Mozilla NSS crypto module. Due to critical security issues, this version of NSX uses a newer version of NSS that has not been FIPS 140-2 validated. VMware affirms that the module works correctly, but it is no longer formally validated.
  • NSS and Password Entry: NSX Edge password hashing uses the Mozilla NSS crypto module. Due to critical security issues, this version of NSX uses a newer version of NSS that has not been FIPS 140-2 validated. VMware affirms that the module works correctly, but it is no longer formally validated.
  • Controller and Clustering VPN: The NSX Controller uses IPsec VPN to connect Controller clusters. The IPsec VPN uses the VMware Linux kernel crypto module (Photon 1 environment), which is in the process of being CMVP validated.

Document Revision History

29 Mar 2018: First edition.
2 May 2018: Second edition. Added resolved issue 1993384.
4 June 2018: Third edition. Added resolved issue 2058770.
25 July 2018: Fourth edition. Added resolved issues 2019124, 2021080.
5 Sept 2018: Fifth edition. Added known issue 2186968.

Resolved Issues

The resolved issues are grouped as follows.

General Resolved Issues
  • Fixed Issue 2058770: Excessive login events are raised at vCenter and vCenter SSO server experiences high load

    vCenter SSO server experiences excessive login events and high load when vCenter SSO users make many NSX API requests in a short span of time. This might result in sluggish behavior.

  • Fixed Issue 2003765: TOR Manager on NSX Controller fails to send update when the physical TOR device is reset/rebooted or power cycled

    VM remote MACs are missing from the TOR OVSDB table if the TOR is reloaded.

    Workaround: Reboot all NSX Controllers. See VMware Knowledge Base article 52074 for more information.

  • Fixed Issue 2014220: Netcpa monitor should not run directly under "init"

    The host is in a non-responsive state after upgrading to 6.5 Update 1. The fix runs the netcpa monitor in the "netcpa" group instead of directly under "init."

  • Fixed Issue 2023494: If the NSX plugin is deployed on top of a Dell plugin, the error "No NSX manager" is displayed on the vSphere Web Client.

    After upgrading, "No NSX Managers available" error is displayed on the vSphere Web Client.

  • Fixed Issue 2073125: Deployment of anti-virus partner solution to cluster fails, service VM stuck in 'Unknown' state

    The service VM is stuck in the 'Unknown' state; however, Eicar detection works, and agents running on hosts protect the environment as expected if the security policy in the security group is applied.

  • Fixed Issue 2021080: Restart of host fails due to HostFirewallRuleset error

    Host loses connectivity to vCenter and reconnection fails. Operations cannot be performed on the host.

Installation and Upgrade Resolved Issues
  • Fixed Issue 2035026: Network outage of ~40-50 seconds seen on Edge Upgrade

    During Edge upgrade, there is an outage of approximately 40-50 seconds.

  • Fixed Issue 2058636: After upgrading to 6.3.5, a routing loop between the DLR and ESGs causes connectivity issues in certain BGP configurations.

    A routing loop is causing a connectivity issue.

  • Fixed Issue 1977797: Upgrade from NSX 6.2.2 to NSX 6.3.x results in errors in vSphere Web Client, host errors

    After upgrading NSX Manager from NSX 6.2.2 to NSX 6.3.x, the vSphere Web Client displays "Internal Server Error", and host clusters display errors.

NSX Manager Resolved Issues
  • Fixed Issue 2012045: NSX Manager CPU high due to an Edge in read-only file system mode.

    NSX Manager is slow to respond because CPU usage stays at 100% while it receives a large number of read-only file system events from the Edge.

  • Fixed Issue 1995891: Changes done on primary NSX Manager are not synced to secondary NSX Manager.

    If a secondary NSX Manager is removed from the primary NSX Manager (with secondary still in SECONDARY role), there is no indication on secondary NSX Manager that it is not receiving any updates.

  • Fixed Issue 1983902: In a scale setup environment, after NSX Manager reboot, netcpad does not immediately connect to vsfwd.

    In a scale setup environment, after NSX Manager reboot, netcpad does not immediately connect to vsfwd. There is no datapath impact. The system recovers without intervention after 13 minutes.

NSX Controller Resolved Issues
  • Fixed Issue 2003453: Controller logs flooded with bridge message "Fail to add/delete a mac record MacRecord for non-existing bridge instance."

    When sharding changes, the bridge fails to send a join to the controller.

Logical Networking and NSX Edge Resolved Issues
  • Fixed Issue 1753621: When Edge with private local AS sends routes to EBGP peers, all the private AS paths are stripped off from the BGP routing updates sent

    NSX for vSphere currently has a limitation that prevents it from sharing the full AS path with eBGP neighbors when the AS path contains only private AS paths. While this is the desired behavior in most cases, there are cases in which the administrator may want to share private AS paths with an external BGP neighbor. This fix allows you to change the “private AS path” behavior for external BGP peers. The default behavior for this feature is to “remove private ASN”, which is aligned with previous NSX for vSphere versions.

  • Fixed Issue 2014400: Standby NSX Edge starts responding to IPv6 traffic when the firewall feature on the Edge is disabled.

    With IPv6 enabled on the NSX Edge, if a failover is triggered, upstream devices are updated with the MAC address of the standby Edge, which can cause north-south traffic to be forwarded to the incorrect Edge.

  • Fixed Issue 2018810: With IPv6 enabled, upon initiating HA failover of NSX Edge, the neighbor solicitation message is not sent, causing traffic to drop.

    Traffic from southbound VM stops.

  • Fixed Issue 2055195: When attempting to set up IPv6 static routing on NSX Edge, if the routes contain a /128 prefix, routes may not appear in forwarding table.

    IPv6 static route config may not work on reconfigs if /128 prefixes are present.

  • Fixed Issue 2069428: Disabling an IPv6 interface or sub-interface on NSX Edge causes the Edge to reboot.

    The Edge reboots upon disabling an IPv6 interface or sub-interface that is in the range of the next hop configured in a static route on the NSX Edge. NSX Edge does not support IPv6 route recursion.

    Workaround: Remove the static route whose next hop is in the range of the IPv6 address assigned to the vNIC or sub-interface, and retry the operation.

  • Fixed Issue 1976378: After an upgrade from vCNS Edge 5.5.4 to NSX 6.3.6, customers could not configure the Health-Check-Monitor port or make any changes directly from vCD.

    Customers are not able to configure the Health-Check-Monitor port or make any changes directly from vCD.

    Workaround: Use API 4.0 to GET the pool member XML configuration, DELETE the old pool configuration from the Edge, then PUT the API 4.0 XML configuration back to the Edge.

  • Fixed Issue 1967402: An old and vulnerable version of tcpdump is used in the Edge appliance.

    The packet capture CLI on Edge uses the tcpdump package to capture and display packets. The tcpdump package in use (v4.9.0) contains many vulnerabilities that have been fixed in later versions. As such, the CLI user is potentially vulnerable when using the packet capture CLI.

  • Fixed Issue 1993384: SSLVPN clients unable to get IP from ippool

    Clients are unable to connect to the private network because no IP is assigned from the IP pool when the client auto-reconnects with the server. Further, the old IP assigned to the client from the IP pool is not cleaned up.

  • Fixed Issue 2019124: Packets are dropped by Edge FTP Load Balancer after entering passive mode

    FTP passive mode works with pool in non-transparent mode, but does not work with transparent mode.

Security Services Resolved Issues
  • Fixed Issue 2000749: Distributed Firewall stays in Publishing state with certain firewall configurations

    Distributed Firewall stays in "Publishing" state if you have a security group that contains an IPSet with 0.0.0.0/0 as an EXCLUDE member, an INCLUDE member, or as a part of 'dynamic membership containing Intersection (AND)'.

    Workaround: Use a subnet mask other than /0 in your IPSet configuration. You can define 0.0.0.0/0 as "0.0.0.0/1,128.0.0.0/1".
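
    For reference, a minimal sketch of an IPSet created with the split ranges; the scope ID, name, and element names follow the common ipset request shape and should be verified against the NSX API Guide:

        POST https://<NSXManager>/api/2.0/services/ipset/globalroot-0

        <ipset>
          <name>all-ipv4-split</name>
          <value>0.0.0.0/1,128.0.0.0/1</value>
        </ipset>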

  • Fixed Issue 2063415: Warning message in NSX Edge logs about --physdev-out when configuring L2 VPN firewall rule

    The log message states "using --physdev-out in the OUTPUT, FORWARD and POSTROUTING chains for non-bridged traffic is not supported anymore." This message occurs because a feature (deferred output) has been removed from Linux kernel 2.6.20.

  • Fixed Issue 2040064: Addition of VMs as a static member to a Security Group takes a lot of time.

    Static inclusion of a VM to a Security Group which is connected to a large number of other Security Groups is slow.

  • Fixed Issue 2029693: In a DFW Scale environment (with 65K+ rules) users may experience longer times in publishing DFW rules.

    Firewall rules take effect 10-15 minutes after publishing.

Known Issues

The known issues are grouped as follows.

General Known Issues
  • Issue 1960383: Failure in network creation due to timeout when a high number of inventory objects are deleted in a short span of time.

    Network creation times out because of a delay in dvPortgroup creation in NSX. The delay occurs when a high number of inventory objects are deleted in a short span of time, and the deletion operation in the inventory thread consumes the time within which dvPortgroup creation must complete, so the creation times out on NSX.

    Workaround: Perform network creation, or retry a failed network creation, when no or few deletions are in progress.

  • Issue 1874863: Unable to authenticate with changed password after sslvpn service disable/enable with local authentication server

    When SSL VPN service is disabled and re-enabled and when using local authentication, users are unable to log in with changed passwords.

    See VMware Knowledge Base Article 2151236 for more information.

  • Issue 1702339: Vulnerability scanners might report a Quagga bgp_dump_routes vulnerability CVE-2016-4049

    Vulnerability scanners might report a Quagga bgp_dump_routes vulnerability CVE-2016-4049 in NSX for vSphere. NSX for vSphere uses Quagga, but the BGP functionality (including the vulnerability) is not enabled. This vulnerability alert can be safely disregarded.

    Workaround: As the product is not vulnerable, no workaround is required.

  • Issue 1740625, 1749975: UI problems on Mac OS in Firefox and Safari

    If you are using Firefox or Safari in Mac OS, the Back navigation button will not work in NSX Edge from the Networking and Security page in the vSphere 6.5 Web Client, and sometimes the UI freezes in Firefox.

    Workaround: Use Google Chrome on Mac OS or click on the Home button then proceed as required.

  • Issue 1700980: For security patch CVE-2016-2775, a query name which is too long can cause a segmentation fault in lwresd

    NSX 6.2.4 has BIND 9.10.4 installed with the product, but it does not use lwres option in named.conf, hence the product is not vulnerable.

    Workaround: As the product is not vulnerable, no workaround is required.

Installation and Upgrade Known Issues

Before upgrading, please read the section Upgrade Notes, earlier in this document.

  • Issue 2072696: Upgrading Distributed Logical Router to NSX 6.3.6 fails if a certain invalid configuration is present

    Validation is added in NSX 6.3.6 to ensure that in environments where VXLAN is configured and more than one vSphere Distributed Switch is present, distributed logical router interfaces must be connected to the VXLAN-configured vSphere Distributed Switch only. Upgrading a DLR to NSX 6.3.6 will fail in environments where the DLR has interfaces connected to the vSphere Distributed Switch that is not configured for VXLAN. The UI no longer displays the unsupported vSphere Distributed Switch.

    Workaround: If upgrade of a DLR fails due to this invalid configuration, use the API to connect any incorrectly configured interfaces to port groups on the VXLAN-configured vSphere Distributed Switch. Once the configuration is valid, retry the upgrade. Change the interface configuration using PUT /api/4.0/edges/{edgeId} or PUT /api/4.0/edges/{edgeId}/interfaces/{index}. See the NSX API Guide for more information.
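
    As a sketch, the single-interface update points connectedToId at a distributed port group on the VXLAN-configured vSphere Distributed Switch; the IDs below are placeholders, and any other interface elements returned by a GET on the same URL should be preserved:

        PUT https://<NSXManager>/api/4.0/edges/{edgeId}/interfaces/{index}

        <interface>
          <name>dlr-interface-1</name>
          <connectedToId>dvportgroup-1001</connectedToId>
          <isConnected>true</isConnected>
          ... (address groups and other existing settings, unchanged) ...
        </interface>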

  • Issue 2001988: During NSX host cluster upgrade, Installation status in Host Preparation tab alternates between "not ready" and "installing" for the entire cluster when each host in the cluster is upgrading

    During NSX upgrade, clicking "upgrade available" for NSX prepared cluster triggers host upgrade. For clusters configured with DRS FULL AUTOMATIC, the installation status alternates between "installing" and "not ready", even though the hosts are upgraded in the background without issues.

    Workaround: This is a user interface issue and can be ignored. Wait for the host cluster upgrade to proceed.

  • Issue 1932907: Upgrade of Guest Introspection SVM Failed

    While trying to upgrade the Guest Introspection SVM, the installation status for the GI SVM is 'Failed'. This might be applicable for GI-SVMs of one or multiple hosts in the cluster.

    Workaround:
    1. Delete the GI-SVM from the VC.
    2. Click Resolve in the GI-SVM Service deployment pane. This will re-deploy the GI-SVM.

  • Issue 1747217: Preparing ESXi hosts results in muxconfig.xml.bad and Guest Introspection does not function correctly
    If the "vmx path" is missing for one of the VMs in muxconfig.xml, when MUX tries to parse the config file and doesn't find the "xml path" property, it renames the config file as "muxconfig.xml.bad", sends the error "Error - MUX Parsing config" to the USVM and closes the config channel.

    Workaround: Remove the orphaned VMs from the vCenter inventory.

  • Issue 1859572: During the uninstall of NSX VIBs version 6.3.x on ESXi hosts that are being managed by vCenter version 6.0.0, the host continues to stay in Maintenance mode
    If you are uninstalling NSX VIBs version 6.3.x on a cluster, the workflow involves putting the hosts into Maintenance mode, uninstalling the VIBs and then removing the hosts from Maintenance mode by the EAM service. However, if such hosts are managed by vCenter server version 6.0.0, then this results in the host being stuck in Maintenance mode post uninstalling the VIBs. The EAM service responsible for uninstalling the VIBs puts the host in Maintenance mode but fails to move the hosts out of Maintenance mode.

    Workaround: Manually move the host out of Maintenance mode. This issue will not be seen if the host is managed by vCenter server version 6.5a and above.

  • Issue 1435504: HTTP/HTTPS Health Check appears DOWN after upgrading from 6.0.x or 6.1.x to 6.3.x with failure reason "Return code of 127 is out of bounds - plugin may be missing"
    In NSX 6.0.x and 6.1.x releases, URLs configured without double quotes ("") caused Health Check to fail with this error: "Return code of 127 is out of bounds - plugin may be missing". The workaround for this was to add the double quotes ("") to the input URL (this was not required for send/receive/expect fields). However, this issue was fixed in 6.2.0 and as a result, if you are upgrading from 6.0.x or 6.1.x to 6.3.x, the additional double quotes result in the pool members shown as DOWN in Health Check.

    Workaround: Remove the double quotes ("") in the URL field from all the relevant Health Check configurations after upgrading.

  • Issue 1734245: Data Security causes upgrades to 6.3.0 to fail
    Upgrades to 6.3.0 will fail if Data Security is configured as part of a service policy. Ensure you remove Data Security from any service policies before upgrading.
  • Issue 1801685: Unable to see filters on ESXi after upgrade from 6.2.x to 6.3.0 because of failure to connect to host
    After you upgrade from NSX 6.2.x to 6.3.0 and cluster VIBs to 6.3.0 bits, even though the installation status shows successful and Firewall Enabled, the "communication channel health" will show the NSX Manager to Firewall Agent connectivity and NSX Manager to ControlPlane Agent connectivity as down. This leads to firewall rule publish failures, security policy publish failures, and VXLAN configuration not being sent down to the hosts.

    Workaround: Run the message bus sync API call for the cluster using the API POST https://<NSX-IP>/api/2.0/nwfabric/configure?action=synchronize.
    API Body:

    <nwFabricFeatureConfig>
     <featureId>com.vmware.vshield.vsm.messagingInfra</featureId>
     <resourceConfig>
       <resourceId>{Cluster-MOID}</resourceId>
     </resourceConfig>
    </nwFabricFeatureConfig>
    
  • Issue 1797929: Message bus channel down after host cluster upgrade
    After a host cluster upgrade, vCenter 6.0 (and earlier) does not generate the event "reconnect", and as a result, NSX Manager does not set up the messaging infrastructure on the host. This issue has been fixed in vCenter 6.5.

    Workaround: Resync the messaging infrastructure as below:
    POST https://<ip>/api/2.0/nwfabric/configure?action=synchronize

    <nwFabricFeatureConfig>
      <featureId>com.vmware.vshield.vsm.messagingInfra</featureId>
      <resourceConfig>
        <resourceId>host-15</resourceId>
      </resourceConfig>
    </nwFabricFeatureConfig>
  • Issue 1768144: Old NSX Edge appliance resource reservations that exceed new limits may cause failure during upgrade or redeployment
    In NSX 6.2.4 and earlier, you could specify an arbitrarily large resource reservation for an NSX Edge appliance. NSX did not enforce a maximum value.
 After NSX Manager is upgraded to 6.2.5 or later, if an existing Edge has resources reserved (especially memory) that exceed the newly enforced maximum value imposed for the chosen form factor, it would fail during Edge upgrade or redeploy (which would trigger an upgrade). For example, if the user has specified a memory reservation of 1000MB on a pre-6.2.5 LARGE Edge and, after upgrade to 6.2.5, changes the appliance size to COMPACT, the user-specified memory reservation will exceed the newly enforced maximum value, in this case 512 for a COMPACT Edge, and the operation will fail.
    See Upgrading Edge Service Gateway (ESG) for information on recommended resource allocation starting in NSX 6.2.5.

    Workaround: Use the appliance REST API:  PUT https://<NSXManager>/api/4.0/edges/<edge-Id>/appliances/ to reconfigure the memory reservation to be within values specified for the form factor, without any other appliance changes. You can change the appliance size after this operation completes.
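
    A sketch of the appliances body, lowering only the reservations to fit the limits for the chosen form factor. The nesting shown follows the general appliances schema, and the IDs and values are placeholders; verify against the NSX API Guide and your current configuration (retrieved with a GET on the same URL) before use:

        PUT https://<NSXManager>/api/4.0/edges/<edge-Id>/appliances/

        <appliances>
          <applianceSize>compact</applianceSize>
          <appliance>
            <resourcePoolId>resgroup-18</resourcePoolId>
            <datastoreId>datastore-29</datastoreId>
            <cpuReservation>
              <reservation>1000</reservation>
            </cpuReservation>
            <memoryReservation>
              <reservation>512</reservation>
            </memoryReservation>
          </appliance>
        </appliances>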

  • Issue 1600281: USVM Installation Status for Guest Introspection shows as Failed in the Service Deployments tab

    If the backing datastore for the Guest Introspection Universal SVM goes offline or becomes inaccessible, the USVM may need to be rebooted or re-deployed to recover.

    Workaround: Reboot or re-deploy USVM to recover.

  • Issue 1660373: vCenter enforces expired NSX license

    As of vSphere 5.5 Update 3 and vSphere 6.0.x, vSphere Distributed Switch is included in the NSX license. However, vCenter does not allow ESXi hosts to be added to a vSphere Distributed Switch if the NSX license is expired.

    Workaround: Your NSX license must be active in order to add a host to a vSphere Distributed Switch.

  • Issue 1569010/1645525: When upgrading from 6.1.x to NSX for vSphere 6.2.3 on a system connected to vCenter 5.5, the Product field in the "Assign License Key" window displays the NSX license as a generic value of "NSX for vSphere" and not a more specific version such as "NSX for vSphere - Enterprise."

    Workaround: None.

  • Issue 1636916: In a vCloud Air environment, when the NSX Edge version is upgraded from vCNS 5.5.x to NSX 6.x, Edge firewall rules with a source protocol value of "any" are changed to "tcp:any, udp:any"
    As a result, ICMP traffic is blocked, and packet drops may be seen.

    Workaround: Before upgrading your NSX Edge version, create more specific Edge firewall rules and replace "any" with specific source port values.

  • Issue 1474238: After vCenter upgrade, vCenter might lose connectivity with NSX
    If you are using vCenter embedded SSO and you are upgrading vCenter 5.5 to vCenter 6.0, vCenter might lose connectivity with NSX. This happens if vCenter 5.5 was registered with NSX using the root user name. In NSX 6.2, vCenter registration with root is deprecated.
    Note: If you are using external SSO, no change is necessary. You can retain the same user name, for example admin@mybusiness.mydomain, and vCenter connectivity will not be lost.

    Workaround: Reregister vCenter with NSX using the administrator@vsphere.local user name instead of root.

  • Issue 1375794: Shutdown Guest OS for agent VMs (SVA) before powering OFF
    When a host is put into maintenance mode, all service appliances are powered-off, instead of shutting down gracefully. This may lead to errors within third-party appliances.

    Workaround: None.

  • Issue 1112628: Unable to power on the Service appliance that was deployed using the Service Deployments view

    Workaround: Before you proceed, verify the following:

    • The deployment of the virtual machine is complete.

    • No tasks such as cloning, reconfiguring, and so on are in progress for the virtual machine displayed in vCenter task pane.

    • In the vCenter events pane of the virtual machine, the following events are displayed after the deployment is initiated:
       
      Agent VM <vm name> has been provisioned.
      Mark agent as available to proceed agent workflow.
       

      In such a case, delete the service virtual machine. In service deployment UI, the deployment is seen as Failed. Upon clicking the Red icon, an alarm for an unavailable Agent VM is displayed for the host. When you resolve the alarm, the virtual machine is redeployed and powered on.

  • Issue 1413125: SSO cannot be reconfigured after upgrade
    When the SSO server configured on NSX Manager is the one native on vCenter server, you cannot reconfigure SSO settings on NSX Manager after vCenter Server is upgraded to version 6.0 and NSX Manager is upgraded to version 6.x.

    Workaround: None.

  • Issue 1263858: SSL VPN does not send upgrade notification to remote client
    SSL VPN gateway does not send an upgrade notification to users. The administrator has to manually communicate that the SSL VPN gateway (server) is updated to remote users and they must update their clients.

    Workaround: Users need to uninstall the older version of client and install the latest version manually.

  • Issue 1462319: The esx-dvfilter-switch-security VIB is no longer present in the output of the "esxcli software vib list | grep esx" command.
    Starting in NSX 6.2, the esx-dvfilter-switch-security modules are included within the esx-vxlan VIB. The only NSX VIBs installed for 6.2 are esx-vsip and esx-vxlan. During an NSX upgrade to 6.2, the old esx-dvfilter-switch-security VIB gets removed from the ESXi hosts.
    Starting in NSX 6.2.3, a third VIB, esx-vdpi, is provided along with the esx-vsip and esx-vxlan NSX VIBs. A successful installation will show all 3 VIBs.

    Workaround: None.

  • Issue 1481083: After the upgrade, logical routers with explicit failover teaming configured might fail to forward packets properly
    When the hosts are running ESXi 5.5, the explicit failover NSX 6.2 teaming policy does not support multiple active uplinks on distributed logical routers.

    Workaround: Alter the explicit failover teaming policy so that there is only one active uplink and the other uplinks are in standby mode.

  • Issue 1411275: vSphere Web Client does not display Networking and Security tab after backup and restore in NSX vSphere 6.2
    When you perform a backup and restore operation after upgrading to NSX vSphere 6.2, the vSphere Web Client does not display the Networking and Security tab.

    Workaround: When an NSX Manager backup is restored, you are logged out of the Appliance Manager. Wait a few minutes before logging in to the vSphere Web Client.

  • Issue 1764460: After completing Host Preparation, all cluster members appear in ready state, but cluster level erroneously appears as "Invalid"
    After you complete Host Preparation, all cluster members correctly appear in "Ready" state, but cluster level appears as "Invalid" and the reason displayed is that you need a host reboot, even though the host has already been rebooted. This can occur intermittently with vSphere 5.5 and 6.0, and is fixed in vSphere 6.5.

     

    Workaround: In the vCenter ESX Agent Manager (EAM) MOB at https://VC_IP/eam/mob/ you can access the agencies associated with your host clusters. Click on one of the agencies, and click config to see the cluster details. Click ResolveAll for the affected clusters.

  • Issue 1979457: If the GI-SVM is deleted or removed during the upgrade process while in backward compatibility mode, identity firewall through Guest Introspection (GI) will not work unless the GI cluster is upgraded.

    Identity firewall will not work and no logs related to identity firewall would be seen. Identity firewall protection will be suspended unless the cluster is upgraded. 

    Workaround: Upgrade the cluster so that all the hosts are running the newer version of GI-SVM.

    -Or -

    Enable Log scraper for identity firewall to work.

NSX Manager Known Issues
  • Issue 1892999: Cannot modify the Unique Selection Criteria even when no VMs are attached to the Universal Security Tag

    If a VM attached to a universal security tag gets deleted, an internal object representing the VM still remains attached to the universal security tag. This causes the universal selection criteria change to fail with an error stating that universal security tags are still attached to VMs.

    Workaround: Delete all the universal security tags and then change the universal selection criteria.

  • Issue 1801325: 'Critical' system events and logging generated in the NSX Manager with high CPU and/or disk usage
    You may encounter one or more of the following problems when you have high disk space usage, high churn in job data, or a high job queue size on the NSX Manager:
    • 'Critical' system events in the vSphere web client
    • High disk usage on NSX Manager for /common partition
    • High CPU usage for prolonged periods or at regular intervals
    • Negative impact on NSX Manager's performance

    Workaround: Contact VMware customer support. See VMware Knowledge Base article 2147907 for more information.

  • Issue 1806368: Reusing controllers from a previously failed primary NSX Manager which is made primary again after a failover causes the DLR config to not be pushed to all hosts
    In a cross-vCenter NSX setup, when the primary NSX Manager fails, a secondary NSX Manager is promoted to primary and a new controller cluster is deployed to be used with the newly promoted secondary (now primary) NSX Manager. When the primary NSX Manager is back on, the secondary NSX Manager is demoted and the primary NSX Manager is restored. In this case, if you reuse the existing controllers that were deployed on this primary NSX Manager before the failover, the DLR config is not pushed to all hosts. This issue does not arise if you create a new controller cluster instead.

    Workaround: Deploy a new controller cluster for the restored primary NSX Manager.

  • Issue 1831131: Connection from NSX Manager to SSO fails when authenticated using the LocalOS user
    Connection from NSX Manager to SSO fails when authenticated using the LocalOS user with the error: "Could not establish communication with NSX Manager. Please contact administrator."

    Workaround: Add the Enterprise Admin role for nsxmanager@localos in addition to nsxmanager@domain.

  • Issue 1800820: UDLR interface update fails on secondary NSX Manager when the old UDLR interface is already deleted from the system
    In a scenario where the Universal Synchronization Service (Replicator) stops working on the primary NSX Manager, you have to delete the UDLR (Universal Distributed Logical Router) and ULS (Universal Logical Switch) interfaces on the primary NSX Manager and create new ones, and then replicate these on the secondary NSX Manager. In this case, the UDLR interface does not get updated in the secondary NSX Manager because a new ULS gets created on the secondary NSX Manager during replication and the UDLR is not connected with the new ULS.

    Workaround: Ensure that the replicator is running and delete the UDLR interface (LIF) on the primary NSX Manager which has a newly created ULS as backing and recreate the UDLR interface (LIF) again with the same backing ULS.

  • Issue 1772911: NSX Manager performing very slowly with disk space consumption, and task and job table sizes increasing with close to 100% CPU usage
    You will experience the following:
    • NSX Manager CPU is at 100% or is regularly spiking to 100% consumption and adding extra resources to NSX Manager appliance does not make a difference.
    • Running the show process monitor command in the NSX Manager Command Line Interface (CLI) displays the Java process that is consuming the highest CPU cycles.
    • Running the show filesystems command on the NSX Manager CLI shows the /common directory as having a very high percentage in use, such as > 90%.
    • Some of the configuration changes time out (sometimes taking over 50 minutes) and are not effective.

    See VMware Knowledge Base article 2147907 for more information.

    Workaround: Contact VMware customer support for a resolution of this problem.

  • Issue 1785142: Delay in showing 'Synchronization Issues' on primary NSX Manager when communication between primary and secondary NSX Manager is blocked.
    When communication between primary and secondary NSX Manager is blocked, you will not immediately see 'Synchronization Issues' on the primary NSX Manager.

    Workaround: Wait for about 20 minutes for communication to be reestablished.

  • Issue 1786066: In a cross-vCenter installation of NSX, disconnecting a secondary NSX Manager may render that NSX Manager unable to reconnect as secondary
    In a cross-vCenter installation of NSX, if you disconnect a secondary NSX Manager, you may be unable to re-add that NSX Manager later as a secondary NSX Manager. Attempts to reconnect the NSX Manager as secondary will result in the NSX Manager being listed as "Secondary" in the Management tab of the vSphere Web Client, but the connection to the primary is not established.

    Workaround: 

    1. Disconnect the secondary NSX Manager from the primary NSX Manager.
    2. Add the secondary NSX Manager again to the primary NSX Manager.
  • Issue 1715354: Delay in availability of the REST API
    The NSX Manager API takes some time to be up and running after NSX Manager restarts when FIPS mode is toggled. It may appear as if the API is hung, but this occurs because it takes time for the controllers to re-establish connection with the NSX Manager. You are advised to wait for the NSX API server to be up and running and ensure all controllers are in the connected state before doing any operations.

  • Issue 1441874: Upgrading a single NSX Manager in a vCenter Linked Mode Environment displays an error message
    In an environment with multiple VMware vCenter Servers with multiple NSX Managers, when selecting one or more NSX Managers from the vSphere Web Client > Networking and Security > Installation > Host Preparation, you see this error:
    "Could not establish communication with NSX Manager. Please contact administrator."

    Workaround: See VMware Knowledge Base article 2127061 for more information.

  • Issue 1696750: Assigning an IPv6 address to NSX Manager via PUT API requires a reboot to take effect
    Changing the configured network settings for NSX Manager via https://{NSX Manager IP address}/api/1.0/appliance-management/system/network requires a system reboot to take effect. Until the reboot, pre-existing settings will be shown.

    Workaround: None.

  • Issue 1529178: Uploading a server certificate which does not include a common name returns an "internal server error" message

    If you upload a server certificate that does not have any common name, an "internal server error" message appears.

    Workaround: Use a server certificate which has both a SubAltName and a common name, or at least a common name.

  • Issue 1655388: NSX Manager 6.2.3 UI displays English language instead of local language when using IE11/Edge browser on Windows 10 OS for JA, CN, and DE languages

    When you launch NSX Manager 6.2.3 with IE11/Edge browser on Windows 10 OS for JA, CN, and DE languages, English language is displayed.

    Workaround:

    1. Launch the Microsoft Registry Editor (regedit.exe), and go to Computer > HKEY_CURRENT_USER > SOFTWARE > Microsoft > Internet Explorer > International.
    2. Modify the AcceptLanguage value to the native language. For example, to change the language to DE, edit the value so that DE appears in the first position.
    3. Restart the browser, and log in to the NSX Manager again. The appropriate language is displayed.
  • Issue 1435996: Log files exported as CSV from NSX Manager are timestamped with epoch not datetime
    Log files exported as CSV from NSX Manager using the vSphere Web Client are timestamped with the epoch time in milliseconds, instead of with the appropriate time based on the time zone.

    Workaround: None.

  • Issue 1644297: Add/delete operation for any DFW section on the primary NSX creates two DFW saved configurations on the secondary NSX
    In a cross-vCenter setup, when an additional universal or local DFW section is added to the primary NSX Manager, two DFW configurations are saved on the secondary NSX Manager. While it does not affect any functionality, this issue will cause the saved configurations limit to be reached more quickly, possibly overwriting critical configurations.

    Workaround: None.

  • Issue 1477138: NSX management service doesn't come up when the hostname's length is more than 64 characters
    Certificate creation via OpenSSL library requires a hostname less than or equal to 64 characters.
  • Issue 1437664: NSX Manager list slow to display in Web Client
    In vSphere 6.0 environments with multiple NSX Managers, the vSphere Web Client may take up to two minutes to display the list of NSX Managers when the logged-in user is being validated with a large AD Group set. You may see a data service timeout error when attempting to display the NSX Manager list. There is no workaround; you must wait for the list to load, or log in again, to see the NSX Manager list.
  • Issue 1534606: Host Preparation Page fails to load
    When running vCenter in linked mode, each vCenter must be connected to an NSX Manager on the same NSX version. If the NSX versions differ, the vSphere Web Client will only be able to communicate with the NSX Manager running the higher version of NSX. An error similar to "Could not establish communication with NSX Manager. Please contact administrator," will be displayed on the Host Preparation tab.

    Workaround: All NSX Managers should be upgraded to the same NSX software version.

  • Issue 1027066: vMotion of NSX Manager may display the error message, "Virtual ethernet card Network adapter 1 is not supported"
    You can ignore this error. Networking will work correctly after vMotion.
  • Issue 1460766: NSX Manager UI does not automatically log out after changing password using the NSX Command Line Interface
    If you are logged in to NSX Manager and recently changed your password using the CLI, you might continue to stay logged in to the NSX Manager UI using your old password. Typically, the NSX Manager client should automatically log you out when the session times out due to inactivity.

    Workaround: Log out from the NSX Manager UI and log back in with your new password.

  • Issue 1966681: Incorrect reporting of duplicate NSX Manager IP

    The log file gets flooded with duplicate NSX Manager IP messages and incorrectly reports a duplicate IP in the network.

  • Issue 1467382: Unable to edit a network host name
    After you log in to the NSX Manager virtual appliance, navigate to Appliance Management, click Manage Appliance Settings, and click Network under Settings to edit the network host name, you might receive an invalid domain name list error. This happens when the domain names specified in the Search Domains field are separated with whitespace characters instead of commas. NSX Manager only accepts domain names that are comma separated.

    Workaround:

    1. Log in to the NSX Manager virtual appliance.

    2. Under Appliance Management, click Manage Appliance Settings.

    3. From the Settings panel, click Network.

    4. Click Edit next to DNS Servers.

    5. In the Search Domains field, replace all whitespace characters with commas.

    6. Click OK to save the changes.

  • Issue 1486193/1436953: False system event is generated even after successfully restoring NSX Manager from a backup
    After successfully restoring NSX Manager from a backup, the following system events appear in the vSphere Web Client when you navigate to Networking & Security: NSX Managers: Monitor: System Events.
    • Restore of NSX Manager from backup failed (with Severity=Critical).

    • Restore of NSX Manager successfully completed (with Severity=Informational).

    Workaround: If the final system event message shows as successful, you can ignore the system generated event messages.
  • Issue 1783528: NSX Manager CPU utilization spikes every Friday night / Saturday morning

    NSX polls LDAP for a full sync every Friday night. There is no option to scope the sync to a specific Active Directory Organizational Unit or Container, so NSX pulls in all objects related to the provided domain.

    Workaround: Increase the NSX Manager vCPU count from 4 to 6.

NSX Controller Known Issues
  • Issue 1856465: If an ESXi host is down on one of the sites in an NSX Cross-vCenter environment, CDO mode does not get enabled on that site

    If an ESXi host is down on a site, enabling or disabling CDO mode will not complete successfully on that site.
    If the host is down on one of the secondary sites, the CDO mode operation succeeds on the primary site but fails on the secondary site. This may lead to inconsistent behavior.

    Workaround: This issue impacts NSX 6.3.0 and later.

    • Ensure that all ESXi hosts are up before performing any CDO operations.
    • To recover from an inconsistent state, remove the host from the vCenter inventory and add it again.
Logical Networking and NSX Edge Known Issues
  • Issue 2071666: Traffic to remote VMs reachable via stretched networks in the L2VPN tunnel is disrupted after a vMotion of the L2VPN-configured edge

    Traffic to remote VMs reachable via stretched networks in the L2VPN tunnel is disrupted after a vMotion of the L2VPN-configured edge (both managed and standalone edges). The disruption lasts until the physical network MAC table entries for the remote VM MACs expire, are manually cleared, or are relearned when traffic from the remote VMs is generated after the vMotion.

    Workaround: Disable DRS for edges running L2VPN to prevent uncontrolled vMotions. After a vMotion, clear the MAC table entries for the remote VM MACs and generate traffic from the remote VMs.

  • Issue 1904612: Layer 2 VPN tunnel displays "up" on L2VPN server when client is powered off

    If you create a L2 VPN between two NSX Edges, then power down the client NSX Edge, the Server NSX Edge still displays that the VPN tunnel is up.

    Workaround: None.

  • Issue 1242207: Changing the router ID at run time is not reflected in the OSPF topology

    If you change the router ID without disabling OSPF, new external link-state advertisements (LSAs) are not regenerated with the new router ID, causing loss of OSPF external routes.

    Workaround: Disable OSPF, change the router ID, and then enable OSPF again.

  • Issue 1894277: IPSec site configuration PSK is not retained when the local or peer subnet gets changed

    Because the masked PSK gets saved in the database, the tunnel between the peers does not come up due to the password mismatch.

    Workaround: Reconfigure the IPSec configuration with a valid password.

  • Issue 1492497: Cannot filter NSX Edge DHCP traffic
    You cannot apply any firewall filters to DHCP traffic on an NSX Edge because the DHCP server on an NSX Edge utilizes raw sockets that bypass the TCP/IP stack.

    Workaround: None.

  • Issue 1781438: On the ESG or DLR NSX Edge appliances, the routing service does not send an error message if it receives the BGP path attribute MULTI_EXIT_DISC more than once.
    The edge router or distributed logical router does not send an error message if it receives the BGP path attribute MULTI_EXIT_DISC more than once. As per RFC 4271 [Sec 5], the same attribute (an attribute with the same type) cannot appear more than once within the Path Attributes field of a particular UPDATE message.

    Workaround: None.

  • Issue 1786515: User with 'Security Administrator' privileges unable to edit the load balancer configuration through the vSphere web client UI.
    A user with "Security Administrator" privileges for a specific NSX Edge is not able to edit the Global Load Balancer Configuration for that edge, using the vSphere web client UI. The following error message is displayed: "User is not authorized to access object Global and feature si.service, please check object access scope and feature permissions for the user."

    Workaround: None.

  • Issue 1849042/1849043: Admin account lockout when password aging is configured on the NSX Edge appliance
    If password aging is configured for the admin user on the NSX Edge appliance, when the password ages out there is a 7 day period where the user will be asked to change the password when logging into the appliance. Failure to change the password will result in the account being locked. Additionally, if the password is changed at the time of logging in at the CLI prompt, the new password may not meet the strong password policy enforced by the UI and REST.

    Workaround: To avoid this problem, always use the UI or REST API to change the admin password before the existing password expires. If the account does become locked, also use the UI or REST API to configure a new password and the account will become unlocked again.

  • Issue 1711013: Takes about 15 minutes to sync FIB between Active/Standby NSX Edge after rebooting the standby VM.
    When a standby NSX Edge is powered off, the TCP session between the active and standby Edges is not closed. The active Edge detects that the standby is down only after keepalive (KA) failure (15 minutes). After 15 minutes, a new socket connection is established with the standby Edge and the FIB is synced between the active and standby Edges.

    Workaround: None.

  • Issue 1733282: NSX Edge no longer supports static device routes
    NSX Edge does not support configuration of static routes with NULL nexthop address.

    Workaround: None.

  • Issue 1860583: Avoid configuring remote syslog servers using FQDN if DNS is not reachable.
    On an NSX Edge, if the remote syslog servers are configured using FQDN and DNS is not reachable, routing functionality might be impacted. The problem might not occur consistently.

    Workaround: It is recommended to use IP addresses instead of FQDN.

  • Issue 1850773: NSX Edge NAT reports invalid configuration when multiple ports are used on the Load Balancer configuration
    This issue occurs every time you configure a Load Balancer virtual server with more than one port. Due to this, NAT becomes unmanageable while this configuration state exists for the affected NSX Edge.

    Workaround: See VMware Knowledge Base article 2149942 for more information and workaround.

  • Issue 1764258: Traffic blackholed for up to eight minutes after HA failover or Force-Sync on an NSX Edge configured with a sub-interface
    If an HA failover is triggered or you start a Force-Sync over a sub-interface, traffic is blackholed for up to eight minutes.

    Workaround: Do not use subinterfaces for HA.

  • Issue 1767135: Errors when trying to access certificates and application profiles under Load Balancer
    Users with Security Admin privileges and Edge scope are unable to access certificates and application profiles under Load Balancer. The vSphere Web Client shows error messages.

    Workaround: None.

  • Issue 1792548: NSX Controller may get stuck at the message: 'Waiting to join cluster'
    NSX Controller may get stuck at the message: 'Waiting to join cluster' (CLI command: show control-cluster status). This occurs because the same IP address has been configured for the eth0 and breth0 interfaces of the controller while the controller is coming up. You can verify this by using the following CLI command on the controller: show network interface

    Workaround: Contact VMware customer support.

  • Issue 1747978: OSPF adjacencies are deleted with MD5 authentication after NSX Edge HA failover
    In an NSX for vSphere 6.2.4 environment where the NSX Edge is configured for HA with OSPF graceful restart configured and MD5 used for authentication, OSPF fails to restart gracefully. Adjacencies form only after the dead timer expires on the OSPF neighbor nodes.

    Workaround: None.

  • Issue 1804116: Logical Router goes into Bad State on a host that has lost communication with the NSX Manager
    If a Logical Router is powered on or redeployed on a host that has lost communication with the NSX Manager (due to NSX VIB upgrade/install failure or host communication issue), the Logical Router will go into Bad State and continuous auto-recovery operation via Force-Sync will fail.

    Workaround: After resolving the host and NSX Manager communication issue, reboot the NSX Edge manually and wait for all interfaces to come up. This workaround is only needed for Logical Routers and not NSX Edge Services Gateway (ESG) because the auto-recovery process via force-sync reboots NSX Edge.

  • Issue 1783065: Cannot configure Load Balancer for a UDP port together with TCP using IPv4 and IPv6 addresses
    UDP supports only ipv4-ipv4 and ipv6-ipv6 (frontend-backend). There is a bug in NSX Manager whereby even the IPv6 link-local address is read and pushed as an IP address of the grouping object, and you cannot select which IP protocol to use in the LB configuration.

    Here is an example LB configuration demonstrating the issue:
    In the Load Balancer configuration, the pool "vCloud_Connector" is configured with a grouping object (vm-2681) as a pool member, and this object contains both IPv4 and IPv6 addresses, which cannot be supported by the LB L4 engine.

     

    {
        "algorithm" : {
            ...
        },
        "members" : [
            {
                ...  ,
                ...
            }
        ],
        "applicationRules" : [],
        "name" : "vCloud_Connector",
        "transparent" : {
            "enable" : false
        }
    }
    
    {
        "value" : [
            "fe80::250:56ff:feb0:d6c9",
            "10.204.252.220"
        ],
        "id" : "vm-2681"
    }

     

    Workaround:

    • Option 1: Enter the IP address of the pool member instead of grouping objects in pool member.
    • Option 2: Do not use IPv6 in the VMs.
  • Issue 1777792: Peer Endpoint set as 'ANY' causes IPSec connection to fail
    When IPSec configuration on NSX Edge sets remote peer endpoint as 'ANY', the Edge acts as an IPSec "server" and waits for remote peers to initiate connections. However, when the initiator sends a request for authentication using PSK+XAUTH, the Edge displays this error message: "initial Main Mode message received on XXX.XXX.XX.XX:500 but no connection has been authorized with policy=PSK+XAUTH" and IPsec can't be established.

    Workaround: Use specific peer endpoint IP address or FQDN in IPSec VPN configuration instead of ANY.

  • Issue 1741158: Creating a new, unconfigured NSX Edge and applying configuration can result in premature Edge service activation.
    If you use the NSX API to create a new, unconfigured NSX Edge, then make an API call to disable one of the Edge services on that Edge (for example, set dhcp-enabled to "false"), and finally apply configuration changes to the disabled Edge service, that service will be made active immediately.

    Workaround: After you make a configuration change to an Edge service that you wish to keep in disabled state, immediately issue a PUT call to set the enabled flag to "false" for that service.
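
    For example, if the disabled service were DHCP, the follow-up call might look like the following (the endpoint and body are illustrative only; use the endpoint and schema of the service you actually configured, as documented in the NSX API Guide):

    PUT https://NSX-Manager-IP-Address/api/4.0/edges/{edgeId}/dhcp/config

    <dhcp>
      <enabled>false</enabled>
      <!-- remaining DHCP configuration unchanged -->
    </dhcp>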

  • Issue 1758500: Static route with multiple next-hops does not get installed in NSX Edge routing and forwarding tables if at least one of the next-hop configured is the Edge's vNIC IP address
    With ECMP and multiple next-hop addresses, NSX allows the Edge vNIC's own IP address to be configured as a next hop if at least one of the next-hop IP addresses is valid. This is accepted without any errors or warnings, but the route for the network is removed from the Edge's routing/forwarding table.

    Workaround: Do not configure the Edge's own vNIC IP address as a next-hop in static route when using ECMP.

  • Issue 1716464: NSX Load Balancer will not route to VMs newly tagged with a Security tag.
    If you deploy two VMs with a given tag and then configure an LB to route to that tag, the LB successfully routes to those two VMs. But if you then deploy a third VM with that tag, the LB only routes to the first two VMs.

    Workaround: Click "Save" on the LB Pool. This rescans the VMs and will start routing to newly tagged VMs.

  • Issue 1461421: "show ip bgp neighbor" command output for NSX Edge retains the historical count of previously established connections

    The “show ip bgp neighbor” command displays the number of times that the BGP state machine transitioned into the Established state for a given peer. Changing the password used with MD5 authentication causes the peer connection to be destroyed and re-created, which in turn will clear the counters. This issue does not occur with an Edge DLR.

    Workaround: To clear the counters, execute the "clear ip bgp neighbor" command.

  • Issue 1656713: IPsec Security Policies (SPs) missing on the NSX Edge after HA failover, traffic cannot flow over tunnel

    The Standby > Active switchover will not work for traffic flowing on IPsec tunnels.

    Workaround: Disable/Enable IPsec after the NSX Edge switchover.

  • Issue 1354824: When an Edge VM becomes corrupted or otherwise unreachable, for example due to a power failure, system events are raised when the health check from NSX Manager fails

    The system events tab will report "Edge Unreachability" events. The NSX Edges list may continue to report a Status of Deployed.

    Workaround: Use the following API to get detailed status information about an NSX Edge:

    GET https://NSX-Manager-IP-Address/api/4.0/edges/edgeId/status?detailedStatus=true 
  • Issue 1647657: Show commands on an ESXi host with DLR (Distributed Logical Router) display no more than 2000 routes per DLR instance

    Show commands on an ESXi host with DLR enabled will not show more than 2000 routes per DLR instance, although more than this maximum may be running. This issue is a display issue, and the data path will work as expected for all routes.

    Workaround: None.

  • Issue 1634215: OSPF CLI commands output does not indicate whether routing is disabled

    When OSPF is disabled, routing CLI commands output does not show any message saying "OSPF is disabled". The output is empty.

    Workaround: The show ip ospf command will display the correct status.

  • Issue 1647739: Redeploying an Edge VM after a vMotion operation will cause the Edge or DLR VM to be placed back on the original cluster.  

    Workaround: To place the Edge VM in a different resource pool or cluster, use the NSX Manager UI to configure the desired location.

  • Issue 1463856: When NSX Edge Firewall is enabled, existing TCP connections are blocked
    TCP connections are blocked through the Edge stateful firewall as the initial three-way handshake cannot be seen.

    Workaround: To handle such existing flows, do the following. Use the NSX REST API to enable the flag "tcpPickOngoingConnections" in the firewall global configuration. This switches the firewall from strict mode to lenient mode. Next, enable the firewall. Once existing connections have been picked up (this may take a few minutes after you enable the firewall), set the flag "tcpPickOngoingConnections" back to false to return the firewall to strict mode. (This setting is persistent.)

    PUT /api/4.0/edges/{edgeId}/firewall/config/global

    <globalConfig>
      <tcpPickOngoingConnections>true</tcpPickOngoingConnections>
    </globalConfig>
  • Issue 1374523: Reboot ESXi, or run services.sh restart, after installing the VXLAN VIB to make the VXLAN commands available through esxcli

    After installing the VXLAN VIB, you must reboot ESXi or run the services.sh restart command so that the VXLAN commands become available through esxcli.

    Workaround: Instead of using esxcli, use localcli.
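
    For example, a VXLAN listing that would normally be run through esxcli can be issued with localcli instead (the sub-command shown is illustrative and may vary by ESXi/NSX version):

    localcli network vswitch dvs vmware vxlan list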

  • Issue 1525003: Restoring an NSX Manager backup with an incorrect passphrase will silently fail as critical root folders cannot be accessed

    Workaround: None.

  • Issue 1483426: IPsec and L2 VPN service status shows as down even when the service is not enabled
    Under the Settings tab in the UI, the L2 VPN service status is displayed as down even though the API shows the L2 status as up. The L2 VPN and IPsec service status always shows as down in the Settings tab until the UI page is refreshed.

    Workaround: Refresh the page.

  • Issue 1637639: When using the Windows 8 SSL VPN PHAT client, the virtual IP is not assigned from the IP pool
    On Windows 8, the virtual IP address is not assigned as expected from the IP pool when a new IP address is assigned by the Edge Services Gateway or when the IP pool changes to use a different IP range.

    Workaround: This issue occurs only on Windows 8. Use a different Windows OS to avoid experiencing this issue.

  • Issue 1628220: DFW or NetX observations are not seen on the receiver side
    Traceflow may not show DFW and NetX observations on the receiver side if the switch port associated with the destination vNIC has changed. This will not be fixed for vSphere 5.5 releases. vSphere 6.0 and later are not affected.

    Workaround: Do not disable the vNIC. Reboot the VM.

  • Issue 1446327: Some TCP-based applications may time out when connecting through NSX Edge
    The default TCP established connection inactivity timeout is 3600 seconds. The NSX Edge deletes any connections idle for more than the inactivity timeout and drops those connections.

    Workaround:
    1. If the application has a relatively long inactivity time, enable TCP keepalives on the hosts with keep_alive_interval set to less than 3600 seconds.
    2. Increase the Edge TCP inactivity timeout to greater than 2 hours using the following NSX REST API. For example, to increase the inactivity timeout to 9000 seconds:

      PUT /api/4.0/edges/{edgeId}/systemcontrol/config

      <systemControl>
        <property>sysctl.net.netfilter.nf_conntrack_tcp_timeout_established=9000</property>
      </systemControl>

  • Issue 1089238: Cannot configure OSPF on more than one DLR Edge uplink
    Currently it is not possible to configure OSPF on more than one of the eight DLR Edge uplinks. This limitation is a result of the sharing of a single forwarding address per DLR instance.

    Workaround: This is a current system limitation and there is no workaround.

  • Issue 1499978: Edge syslog messages do not reach remote syslog server
    Immediately after deployment, the Edge syslog server cannot resolve the hostnames for any configured remote syslog servers.

    Workaround: Configure remote syslog servers using their IP address, or use the UI to Force Sync the Edge.
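
    A minimal sketch of configuring the Edge syslog servers by IP address through the REST API (the body is illustrative; verify the exact schema in the NSX API Guide):

    PUT https://NSX-Manager-IP-Address/api/4.0/edges/{edgeId}/syslog/config

    <syslog>
      <protocol>udp</protocol>
      <serverAddresses>
        <ipAddress>192.168.110.60</ipAddress>
      </serverAddresses>
    </syslog>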

  • Issue 1489829: Logical router DNS Client configuration settings are not fully applied after updating REST Edge API

    Workaround: When you use REST API to configure DNS forwarder (resolver), perform the following steps:

    1. Specify the DNS Client XML server's settings so that they match the DNS forwarder setting.

    2. Enable the DNS forwarder, and make sure that the forwarder settings are the same as the DNS Client server's settings specified in the XML configuration.

  • Issue 1243112: Validation and error message not present for invalid next hop in static route, ECMP enabled
    When trying to add a static route, with ECMP enabled, if the routing table does not contain a default route and there is an unreachable next hop in the static route configuration, no error message is displayed and the static route is not installed.

    Workaround: None.

  • Issue 1281425: If an NSX Edge virtual machine with one sub interface backed by a logical switch is deleted through the vCenter Web Client user interface, data path may not work for a new virtual machine that connects to the same port
    When the Edge virtual machine is deleted through the vCenter Web Client user interface (and not from NSX Manager), the VXLAN trunk configured on dvPort over opaque channel does not get reset. This is because trunk configuration is managed by NSX Manager.

    Workaround: Manually delete the VXLAN trunk configuration by following the steps below:

    1. Navigate to the vCenter Managed Object Browser by typing the following in a browser window:
      https://<vc-ip>/mob?vmodl=1
    2. Click Content.
    3. Retrieve the dvsUuid value by following the steps below.
      1. Click the rootFolder link (for example, group-d1(Datacenters)).
      2. Click the data center name link (for example, datacenter-1).
      3. Click the networkFolder link (for example, group-n6).
      4. Click the DVS name link (for example, dvs-1)
      5. Copy the value of uuid.
    4. Click DVSManager and then click updateOpaqueDataEx.
    5. In selectionSet, add the following XML.
      <selectionSet xsi:type="DVPortSelection">
        <dvsUuid>value</dvsUuid>
        <portKey>value</portKey> <!--port number of the DVPG where trunk vnic got connected-->
      </selectionSet>
    6. In opaqueDataSpec, add the following XML
      <opaqueDataSpec>
        <operation>remove</operation>
        <opaqueData>
          <key>com.vmware.net.vxlan.trunkcfg</key>
          <opaqueData></opaqueData>
        </opaqueData>
      </opaqueDataSpec>
    7. Set isRuntime to false.
    8. Click Invoke Method.
    9. Repeat steps 5 through 8 for each trunk port configured on the deleted Edge virtual machine.
  • Issue 1637939: MD5 certificates are not supported while deploying hardware gateways
    When deploying hardware gateway switches as VTEPs for logical L2 VLAN-to-VXLAN bridging, the physical switches must support at minimum SHA1 SSL certificates for the OVSDB connection between the NSX Controller and the OVSDB switch.

    Workaround: None.

  • Issue 1637943: No support for hybrid or multicast replication modes for VNIs that have a hardware gateway binding
    Hardware gateway switches when used as VTEPs for L2 VXLAN-to-VLAN bridging support Unicast replication mode only.

    Workaround: Use Unicast replication mode only.

  • Issue 1995142: Host is not removed from replication cluster after being removed from VC inventory

    If a user adds a host to a replication cluster and then removes the host from the VC inventory before removing it from the cluster, the stale host entry remains in the cluster.

    Workaround: Before removing a host from the VC inventory, first make sure it has been removed from the replication cluster, if it belongs to one.

  • Issue 2085286: VDR is removed from the host after all bridged interfaces are removed, when all of those bridged interfaces also have a routing LIF

    This problem is encountered when the VDR has n LIFs backed by n logical switches (vWires), all of those vWires are used for bridges on the same VDR, and all of the bridges are deleted.

    Workaround: Do not use all of the vWires that back routing LIFs for bridging. If all routing LIFs have bridging enabled, do not remove all bridges at the same time.

Security Services Known Issues
  • Issue 2186968: Static IPset not reported to containerset API call

    If you have service appliances, NSX might omit IP sets in communicating with Partner Service Managers.  This can lead to partner firewalls allowing or denying connections incorrectly.

    Workaround: Contact VMware customer support for a workaround. See VMware Knowledge Base article 57834 for more information.

  • Issue 1854661: In a cross-VC setup, filtered firewall rules do not display the index value when you switch between NSX Managers
    After you apply a rule filter criteria to an NSX Manager and then switch to a different NSX Manager, the rule index shows as '0' for all filtered rules instead of showing the actual position of the rule.

    Workaround: Clear the filter to see the rule position.

  • Issue 1474650: For NetX users, ESXi 5.5.x and 6.x hosts experience a purple diagnostic screen mentioning ALERT: NMI: 709: NMI IPI received
    When a large number of packets are transmitted or received by a service VM, DVFilter continues to dominate the CPU resulting in heartbeat loss and a purple diagnostic screen. See VMware Knowledge Base article 2149704 for more information.

    Workaround: Upgrade the ESXi host to any of the following ESXi versions that are the minimum required to use NetX:

    • ESXi 5.5 Patch 10
    • ESXi 6.0 U3
    • ESXi 6.5
  • Issue 1787680: Deleting Universal Firewall Section fails when NSX Manager is in Transit mode
    When you try to delete a Universal Firewall Section from the UI of an NSX Manager in Transit mode, and publish, the Publish fails and as a result you are not able to set the NSX Manager to Standalone mode.

    Workaround: Use the Single Delete Section REST API to delete the Universal Firewall Section.
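
    A minimal sketch of that call for a Layer 3 section (the section ID is a placeholder; retrieve the actual ID with a GET on the firewall configuration first):

    DELETE https://NSX-Manager-IP-Address/api/4.0/firewall/globalroot-0/config/layer3sections/{sectionId}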

  • Issue 1689159: The Add Rule feature in Flow Monitoring does not work correctly for ICMP flows.
    When adding a rule from Flow Monitoring, the Services field will remain blank if you do not explicitly set it to ICMP and as a result, you may end up adding a rule with the service type "ANY".

    Workaround: Update the Services field to reflect ICMP traffic.

  • Issue 1632235: During Guest Introspection installation, network drop down list displays "Specified on Host" only
    When installing Guest Introspection with the NSX anti-virus-only license and vSphere Essential or Standard license, the network drop down list will display only the existing list of DV port groups. This license does not support DVS creation.

    Workaround: Before installing Guest Introspection on a vSphere host with one of these licenses, first specify the network in the "Agent VM Settings" window.

  • Issue 1652155: Creating or migrating firewall rules using REST APIs may fail under certain conditions and report HTTP 404 error

    Adding or migrating firewall rules using REST APIs is not supported under these conditions:

    • Creating firewall rules as a bulk operation when autoSaveDraft=true is set.
    • Adding firewall rules in sections concurrently.

    Workaround: Set the autoSaveDraft parameter to false in the API call when performing bulk firewall rule creation or migration.
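
    For example, when publishing the firewall configuration in bulk, the parameter can be passed as a query string (URL shown for illustration; the request body is the standard firewall configuration payload):

    PUT https://NSX-Manager-IP-Address/api/4.0/firewall/globalroot-0/config?autoSaveDraft=false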

  • Issue 1509687: URL length supports up to 16000 characters when assigning a single security tag to many VMs at a time in one API call
    A single security tag cannot be assigned to a large number of VMs simultaneously with a single API if the URL length is more than 16,000 characters.

    Workaround: To optimize performance, tag up to 500 VMs in a single call.

  • Issue 1662020: Publish operation may fail resulting in an error message "Last publish failed on host host number" on DFW UI in General and Partner Security Services sections

    After changing any rule, the UI displays "Last publish failed on host host number". The hosts listed on the UI may not have the correct version of firewall rules, resulting in lack of security and/or network disruption.

    The problem is usually seen in the following scenarios:

    • Upgrading from an older NSX for vSphere version to the latest version.
    • Moving a host out of a cluster and back in.
    • Moving a host from one cluster to another.

    Workaround: To recover, you must force sync the affected clusters (firewall only).

  • Issue 1481522: Migrating firewall rule drafts from 6.1.x to 6.2.3 is not supported as the drafts are not compatible between the releases

    Workaround: None.

  • Issue 1628679: With identity-based firewall, the VM for removed users continues to be part of the security group

    When a user is removed from a group on the AD server, the VM where the user is logged in continues to be a part of the security group. This retains firewall policies at the VM vNIC on the hypervisor, thereby granting the user full access to services.

    Workaround: None. This behavior is expected by design.

  • Issue 1496273: UI allows creation of in/out NSX firewall rules that cannot be applied to Edges
    The web client incorrectly allows creation of an NSX firewall rule applied to one or more NSX Edges when the rule has traffic traveling in the 'in' or 'out' direction and when PacketType is IPV4 or IPV6. The UI should not allow creation of such rules, as NSX cannot apply them to NSX Edges.

    Workaround: None.

  • Issue 1494718: New universal rules cannot be created, and existing universal rules cannot be edited from the flow monitoring UI

    Workaround: None. Universal rules cannot be added or edited from the flow monitoring UI; the Edit Rule option is automatically disabled.

  • Issue 1066277: Security policy name does not allow more than 229 characters
    The security policy name field in the Security Policy tab of Service Composer can accept up to 229 characters. This is because policy names are prepended internally with a prefix.

    Workaround: None.

  • Issue 1443344: Some versions of third-party solutions, such as Palo Alto Networks VM-Series, do not work with NSX Manager default settings
    Some NSX 6.1.4 or later components disable SSLv3 by default. Before you upgrade, please check that all third-party solutions integrated with your NSX deployment do not rely on SSLv3 communication. For example, some versions of the Palo Alto Networks VM-series solution require support for SSLv3, so please check with your vendors for their version requirements.
  • Issue 1660718: Service Composer policy status is shown as "In Progress" at the UI and "Pending" in the API output

    Workaround: None.

  • Issue 1317814: Service Composer goes out of sync when policy changes are made while one of the Service Managers is down
    When a policy change is made while one of multiple Service Managers is down, the change will fail, and Service Composer will fall out of sync.

    Workaround: Ensure the Service Manager is responding and then issue a force sync from Service Composer.

  • Issue 1070905: Cannot remove and re-add a host to a cluster protected by Guest Introspection and third-party security solutions
    If you remove a host from a cluster protected by Guest Introspection and third-party security solutions by disconnecting it and then removing it from vCenter Server, you may experience problems if you try to re-add the same host to the same cluster.

    Workaround: To remove a host from a protected cluster, first put the host in maintenance mode. Next, move the host into an unprotected cluster or outside all clusters and then disconnect and remove the host.

  • Issue 1648578: NSX forces the addition of cluster/network/storage when creating a new NetX host-based service instance
    When you create a new service instance from the vSphere Web Client for NetX host-based services such as Firewall, IDS, and IPS, you are forced to add cluster/network/storage even though these are not required.

    Workaround: When creating a new service instance, you may add any information for cluster/network/storage to fill out the fields. This will allow the creation of the service instance and you will be able to proceed as required.

Monitoring Services Known Issues
  • Issue 1466790: Unable to choose VMs on bridged network using the NSX traceflow tool
    Using the NSX traceflow tool, you cannot select VMs that are not attached to a logical switch. This means that VMs on an L2 bridged network cannot be chosen by VM name as the source or destination address for traceflow inspection.

     

    Workaround: For VMs attached to L2 bridged networks, use the IP address or MAC address of the interface you wish to specify as destination in a traceflow inspection. You cannot choose VMs attached to L2 bridged networks as source.