VMware NSX for vSphere 6.2.0 Release Notes


VMware NSX for vSphere 6.2.0 | 20 Aug 2015 | Build 2986609 | Document updated 22 Nov 2015

What's in the Release Notes

The release notes cover the following topics:

What's New

NSX vSphere 6.2 includes the following new and changed features:

  • Cross vCenter Networking and Security

    • NSX 6.2 with vSphere 6.0 supports Cross vCenter NSX where logical switches (LS), distributed logical routers (DLR) and distributed firewalls (DFW) can be deployed across multiple vCenters, thereby enabling logical networking and security for applications with workloads (VMs) that span multiple vCenters or multiple physical locations.

    • Consistent firewall policy across multiple vCenters: Firewall rule sections in NSX can now be marked as "Universal", whereby the rules defined in these sections are replicated across multiple NSX Managers. This simplifies workflows that involve defining a consistent firewall policy spanning multiple NSX installations.

    • Cross vCenter vMotion with DFW: Virtual Machines that have policies defined in the "Universal" sections can be moved across hosts that belong to different vCenters with consistent security policy enforcement.

    • Universal Security Groups: Security Groups in NSX 6.2 that are based on IP Address, IP Set, MAC Address and MAC Set can now be used in Universal rules, whereby the groups and group memberships are synced across multiple NSX Managers. This improves the consistency of object group definitions across multiple NSX Managers and enables consistent policy enforcement.

    • Universal Logical Switch (ULS): This new functionality introduced in NSX 6.2 as a part of Cross vCenter NSX allows creation of logical switches that can span multiple vCenters, allowing the network administrator to create a contiguous L2 domain for an application or tenant.

    • Universal Distributed Logical Router (UDLR): This new functionality introduced in NSX 6.2 as a part of Cross vCenter NSX allows creation of distributed logical routers that can span multiple vCenters. The universal distributed logical routers enable routing across the universal logical switches described earlier. In addition, NSX UDLR is capable of localized north-south routing based on the physical location of the workloads.

  • Operations and Troubleshooting Enhancements

    • New traceflow troubleshooting tool: Traceflow is a troubleshooting tool that helps identify if the problem is in the virtual or physical network. It provides the ability to trace a packet from source to destination and helps observe how that packet passes through the various network functions in the virtual network.

    • Flow monitoring and IPFIX separation: In NSX 6.1.x, NSX supported IPFIX reporting, but IPFIX reporting could be enabled only if flow reporting to NSX Manager was also enabled. Starting in NSX 6.2.0, these features are decoupled. In NSX 6.2.0 and later, you can enable IPFIX independent of flow monitoring on NSX Manager.

    • New CLI monitoring and troubleshooting commands in 6.2: See the knowledge base article for more information.

    • Central CLI: Central CLI reduces troubleshooting time for distributed network functions. Commands are run from the command line on NSX Manager and retrieve information from controllers, hosts, and the NSX Manager. This allows you to quickly access and compare information from multiple sources. The central CLI provides information about logical switches, logical routers, distributed firewall and edges.

    • CLI ping command adds configurable packet size and do-not-fragment flag: Starting in NSX 6.2.0, the NSX CLI 'ping' command offers options to specify the data packet size (not including the ICMP header) and to set the do-not-fragment flag. See the NSX CLI Reference for details.

    • Show health of the communication channels: NSX 6.2.0 adds the ability to monitor communication channel health. The channel health status between NSX Manager and the firewall agent, between NSX Manager and the control plane agent, and between host and the NSX Controller can be seen from the NSX Manager UI. In addition, this feature detects when configuration messages from the NSX Manager have been lost before being applied to a host, and it instructs the host to reload its NSX configuration when such message failures occur.

    • Standalone Edge L2 VPN client CLI: Prior to NSX 6.2, a standalone NSX Edge L2 VPN client could be configured only through OVF parameters. Commands specific to standalone NSX Edge have been added to allow configuration using the command line interface. The OVF is now used for initial configuration only.

  • Logical Networking and Routing

    • L2 Bridging Interoperability with Distributed Logical Router: With VMware NSX for vSphere 6.2, L2 bridging can now participate in distributed logical routing. The VXLAN network to which the bridge instance is connected is used to connect the routing instance and the bridge instance together.

    • Support of /31 prefixes on ESG and DLR interfaces per RFC 3021.

    • Enhanced support of relayed DHCP request on the ESG DHCP server.

    • Ability to keep VLAN tags over VXLAN.

    • Exact match for redistribution filters: Redistribution filters now use the same matching algorithm as ACLs, so a filter matches an exact prefix by default (unless the le or ge options are used).

    • Support of administrative distance for static route.

    • Ability to enable, relax, or disable the check per interface on the Edge.

    • Display of the AS path in the CLI command show ip bgp.

    • HA interface exclusion from redistribution into routing protocols on the DLR control VM.

    • Distributed logical router (DLR) force-sync avoids data loss for east-west routing traffic across the DLR. North-south routing and bridging may continue to experience an interruption.

    • View active edge in HA pair: In the NSX 6.2 web client, you can find out if an NSX Edge appliance is the active or backup in an HA pair.

    • REST API supports reverse path filter (rp_filter) on Edge: The rp_filter sysctl can be configured through the UI and is also exposed through the system control REST API for vNIC interfaces and sub-interfaces. See the NSX API documentation for more information.

    • Behavior of the IP prefix GE and IP prefix LE BGP route filters: In NSX 6.2, the following enhancements have been made to BGP route filters:

      • LE / GE keywords not allowed: For the null route network address (defined as ANY or in CIDR format 0.0.0.0/0), less-than-or-equal-to (LE) and greater-than-or-equal-to (GE) keywords are no longer allowed. In previous releases, these keywords were allowed.

      • LE and GE values in the range 0-7 are now treated as valid. In previous releases, this range was not valid.

      • For a given route prefix, you can no longer specify a GE value that is greater than the specified LE value.

  • Networking and Edge Services

    • The management interface of the DLR has been renamed to HA interface. This has been done to highlight the fact that the HA keepalives travel through this interface and that interruptions in traffic on this interface can result in a split-brain condition.

    • Load balancer health monitoring improvements: Delivers granular health monitoring that reports information on failures, keeps track of the last health check and status change, and reports failure reasons.

    • Support for VIP and pool port ranges: Enables load balancer support for applications that require a range of ports.

    • Increased maximum number of virtual IP addresses (VIPs): VIP support rises to 1024.

  • Security Service Enhancements

    • New IP address discovery mechanisms for VMs: Authoritative enforcement of security policies based on VM names or other vCenter-based attributes requires that NSX know the IP address of the VM. In NSX 6.1 and earlier, IP address discovery for each VM relied on the presence of VMware Tools (vmtools) on that VM or the manual authorization of the IP address for that VM. NSX 6.2 introduces the option to discover the VM's IP address using DHCP snooping or ARP snooping. These new discovery mechanisms enable NSX to enforce IP address-based security rules on VMs that do not have VMware Tools installed.

  • Solution Interoperability

    • Support for vSphere 6.0 Platform Services Controller topologies: NSX now supports external Platform Services Controllers (PSC), in addition to the already supported embedded PSC configurations.

    • Support for vRealize Orchestrator Plug-in for NSX 1.0.2: With the NSX 6.2 release, the NSX-vRO plug-in v1.0.2 is introduced for vRealize Automation (vRA).
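
The BGP route filter behavior described under Logical Networking and Routing above can be summarized in a short validation sketch. This is illustrative Python only, not an NSX interface; the function name and signature are assumptions:

```python
def validate_bgp_filter(network, ge=None, le=None):
    """Check a BGP route filter against the NSX 6.2 rules:
    - LE/GE are not allowed with the null route network (ANY / 0.0.0.0/0)
    - GE may not be greater than LE
    Hypothetical helper for illustration; not part of any NSX API."""
    if network in ("ANY", "0.0.0.0/0"):
        if ge is not None or le is not None:
            return False  # keywords no longer allowed for the null route
    if ge is not None and le is not None and ge > le:
        return False      # GE greater than LE is rejected
    return True

# Values in the range 0-7 are now treated as valid
assert validate_bgp_filter("10.0.0.0/8", ge=4, le=7)
# GE greater than LE is no longer accepted
assert not validate_bgp_filter("10.0.0.0/8", ge=24, le=16)
# LE/GE are not allowed on the null route network
assert not validate_bgp_filter("0.0.0.0/0", le=32)
```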

System Requirements and Installation

Guest Introspection and Network Introspection based features in NSX are compatible with specific VMware Tools (VMTools) versions. To enable the optional NSX Network Introspection Driver component packaged with VMware Tools, upgrade VMware Tools to the following versions:

  • VMware Tools 5.1 P07 and later

  • VMware Tools 5.5 P04 and later

  • VMware Tools 6.0

For the complete list of NSX installation prerequisites, see the System Requirements for NSX section in the NSX 6.2 Installation documentation.

Upgrade Notes

  • The VMware Product Interoperability Matrix provides details about the supported upgrade paths for VMware NSX.

  • Upgrades from NSX 6.1.5 to NSX 6.2.0 are not supported. Instead, you must upgrade from NSX 6.1.5 to NSX 6.2.1 or later.

  • When you are upgrading NSX alongside other VMware product upgrades, such as vCenter and ESXi, it is important to follow the supported upgrade sequence documented in this knowledge base article.

  • See the section, Installation and Upgrade Known Issues, later in this document, for a list of known upgrade-related issues.

  • The memory and CPU requirements for installing and upgrading NSX Manager have changed. See the System Requirements for NSX in the NSX 6.2 Installation or the NSX 6.2 Upgrade documentation.

  • Behavior change in redistribution filters on distributed logical router and Edge Services Gateway: Starting in the 6.2 release, redistribution rules in the DLR and ESG work as ACLs only. That is, a rule's action is applied only when the prefix is an exact match.

  • Before upgrading to NSX 6.2.0, you must make sure your installation is not using a VXLAN tunnel ID of 4094 on any tunnels. VXLAN tunnel ID 4094 is no longer available for use. To assess and address this, follow these steps:

    1. In vCenter, navigate to Home > Networking and Security > Installation and select the Host Preparation tab.

    2. Click Configure in the VXLAN column.

    3. In the Configure VXLAN Networking window, set the VLAN ID to a value between 1 and 4093.

  • After upgrading NSX Manager, you must reset the vSphere web client server as explained in the NSX Upgrade documentation. Until you do this, the Networking and Security tab may fail to appear in the vSphere web client.

  • NSX upgrades in a stateless host environment use new VIB URLs: In NSX upgrades in a stateless host environment, the new VIBs are pre-added to the host image profile during the NSX upgrade process. As a result, the NSX upgrade process on stateless hosts follows this sequence:

    1. Manually download the latest NSX VIBs from NSX Manager from a fixed URL.

    2. Add the VIBs to the host image profile.

    Prior to NSX 6.2.0, there was a single URL on NSX Manager from which VIBs for a certain version of the ESXi host could be found. (This meant the administrator only needed to know a single URL, regardless of NSX version.) In NSX 6.2.0 and later, the new NSX VIBs are available at different URLs. To find the correct VIBs, perform the following steps:

    • Find the new VIB URL from https://<NSX-Manager-IP>/bin/vdn/nwfabric.properties.
    • Fetch the VIBs for the required ESXi host version from the corresponding URL.
    • Add them to the host image profile.
  • Before upgrading VMware vCloud Networking and Security 5.5.x to VMware NSX for vSphere 6.2: If you plan to upgrade from VMware vCloud Networking and Security 5.5.x to VMware NSX for vSphere 6.2, verify whether uplink port name information is missing from the tables by running the following REST API call:
    GET https://<nsxmgr-IP>/api/2.0/vdn/switches
    In the output, look for the uplinkPortName field. For example:

    <?xml version="1.0" encoding="UTF-8"?>
    <vdsContexts>
       <vdsContext>
          <switch>
             <objectId>dvs-22</objectId>
          <objectTypeName>VmwareDistributedVirtualSwitch</objectTypeName>
             <nsxmgrUuid>4236F6CA-3B1A-56BE-4B55-1EF82B8CA12D</nsxmgrUuid>
             <revision>2</revision>
             <type>
                <typeName>VmwareDistributedVirtualSwitch</typeName>
             </type>
             <name>1-vds-20</name>
             <scope>
                <id>datacenter-3</id>
                <objectTypeName>Datacenter</objectTypeName>
                <name>datacenter-1</name>
             </scope>
             <clientHandle />
             <extendedAttributes />
          </switch>
          <mtu>1600</mtu>
          <teaming>FAILOVER_ORDER</teaming>
          <uplinkPortName>uplink2</uplinkPortName>
          <promiscuousMode>false</promiscuousMode>
       </vdsContext>
    </vdsContexts>
    

    If the output of this command contains at least one uplink port name for each vSphere distributed switch, you can proceed with the upgrade. If the uplink port name is missing from the output, see the knowledge base article.
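
The uplinkPortName check described in the last upgrade note can also be scripted against a saved API response. The following Python sketch is illustrative only; the abbreviated XML is an assumption modeled on the sample output above:

```python
import xml.etree.ElementTree as ET

# Abbreviated sample response from GET /api/2.0/vdn/switches
response = """<?xml version="1.0" encoding="UTF-8"?>
<vdsContexts>
   <vdsContext>
      <switch><objectId>dvs-22</objectId></switch>
      <mtu>1600</mtu>
      <uplinkPortName>uplink2</uplinkPortName>
   </vdsContext>
</vdsContexts>"""

root = ET.fromstring(response)
# Collect the switches whose vdsContext lacks an uplinkPortName element
missing = [ctx.findtext("switch/objectId")
           for ctx in root.findall("vdsContext")
           if ctx.find("uplinkPortName") is None]

# An empty list means every distributed switch has an uplink port name,
# so it is safe to proceed with the upgrade.
print(missing)  # → []
```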

Known Issues

Known issues are grouped as follows:

General Known Issues

Distributed logical router fails with Would block error
NSX distributed logical routers may fail after host configuration changes. This occurs when vSphere fails to create a required VDR port on the host. This error may be seen as a DVPort connect failure in vmkernel.log, or a SIOCSIFFLAGS error in the guest. This can happen when VIBs are loaded after the DVS properties are pushed by vCenter.

Workaround: See VMware knowledge base article 2107951.

VMware ESXi 5.x and 6.x experiences a purple diagnostic screen when using IP discovery
When using IP discovery on logical switches in VMware NSX for vSphere 6.2.0, the ESXi 5.x and 6.x host fails with a purple diagnostic screen.

Workaround: See http://kb.vmware.com/kb/2134329 for instructions.

UI allows creation of in/out NSX firewall rules that cannot be applied to Edges
The web client incorrectly allows creation of an NSX firewall rule applied to one or more NSX Edges when the rule specifies traffic traveling in the 'in' or 'out' direction and when PacketType is IPV4 or IPV6. The UI should not allow creation of such rules, as NSX cannot apply them to NSX Edges.

Workaround: None.

User must download NSX Controller logs sequentially
NSX Controller logs can be downloaded for troubleshooting. Due to a known issue, you cannot download more than one controller log simultaneously. Even when downloading from multiple Controllers, you must wait for the download from the current controller to finish before you start the download from the next controller. Note also that you cannot cancel a log download once it has started.

Workaround: Wait for the current controller log download to finish before starting another log download.

Log files exported as CSV from NSX Manager are timestamped with epoch not datetime
When you export log files as CSV from NSX Manager using the vSphere Web Client, you might notice that the log files are timestamped with the epoch time in milliseconds, instead of with the appropriate time based on the time zone.

Workaround: None.
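
As a manual post-processing step, the epoch-millisecond timestamps in an exported CSV can be converted to readable datetimes. A Python sketch (the sample value is illustrative; any epoch-millisecond value works):

```python
from datetime import datetime, timezone

# Epoch timestamp in milliseconds, as found in the exported CSV
epoch_ms = 1438816120917

# Convert to a timezone-aware UTC datetime
ts = datetime.fromtimestamp(epoch_ms / 1000, tz=timezone.utc)
print(ts.strftime("%Y-%m-%d %H:%M:%S UTC"))  # → 2015-08-05 23:08:40 UTC
```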

Unable to choose VMs on bridged network using the NSX traceflow tool
Using the NSX traceflow tool, you cannot select VMs that are not attached to a logical switch. This means that VMs on an L2 bridged network cannot be chosen by VM name as the source or destination address for traceflow inspection.

Workaround: For VMs attached to L2 bridged networks, use the IP address or MAC address of the interface you wish to specify as destination in a traceflow inspection. You cannot choose VMs attached to L2 bridged networks as source. See the knowledge base article for more information.

Flow Monitoring drops flows that exceed a 2 million flows / 5 minutes limit
NSX Flow Monitoring retains up to 2 million flow records. If hosts generate more than 2 million records in 5 minutes, new flows are dropped.

Workaround: None.

NSX API returns JSON instead of XML in certain circumstances
On occasion, an API request will result in JSON, not XML, being returned to the user.

Workaround: Add Accept: application/xml to the request header.

NSX Manager does not accept DNS search strings with a space delimiter
NSX Manager does not accept DNS search strings with a space delimiter; you may only use a comma as the delimiter. For example, if the DHCP server advertises eng.sample.com and sample.com for the DNS search list, NSX Manager is configured with eng.sample.com sample.com, which is not accepted.

Workaround: Use comma separators. NSX Manager accepts only comma-separated DNS search strings.
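
For example, a space-delimited search list advertised by DHCP can be rewritten into the comma-delimited form NSX Manager accepts (a trivial Python sketch for illustration):

```python
# DNS search list as advertised by the DHCP server (space-delimited)
advertised = "eng.sample.com sample.com"

# NSX Manager accepts only comma-delimited search strings,
# so rewrite the list before configuring it
search_list = ",".join(advertised.split())
print(search_list)  # → eng.sample.com,sample.com
```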

In cross vCenter NSX deployments, multiple versions of saved configurations get replicated to secondary NSX Managers
Universal Sync saves multiple copies of universal configurations on secondary NSX Managers. The list of saved configurations contains multiple drafts, created by synchronization across NSX Managers, with the same name and either the same timestamp or timestamps one second apart.

Workaround: First find the drafts to be deleted by viewing all drafts:

GET: https://<nsxmgr-ip>/api/4.0/firewall/config/drafts

Then run the following API call for each duplicate draft to delete it:

DELETE: https://<nsxmgr-ip>/api/4.0/firewall/config/drafts/<draftID>

In the following sample output, drafts 143 and 144 have the same name and were created at the same time, and are therefore duplicates. Likewise, drafts 127 and 128 were created one second apart and are also duplicates.

<firewallDrafts>
    <firewallDraft id="144" name="AutoSaved_Wednesday, August 5, 2015 11:08:40 PM GMT" timestamp="1438816120917"> 
        <description>Auto saved configuration</description>
        <preserve>false</preserve>
        <user>replicator-1fd96022-db14-434d-811d-31912b1cb907</user>
        <mode>autosaved</mode>
    </firewallDraft>
    <firewallDraft id="143" name="AutoSaved_Wednesday, August 5, 2015 11:08:40 PM GMT" timestamp="1438816120713">
        <description>Auto saved configuration</description>
        <preserve>false</preserve>
        <user>replicator-1fd96022-db14-434d-811d-31912b1cb907</user>
        <mode>autosaved</mode>
    </firewallDraft>
   <firewallDraft id="128" name="AutoSaved_Wednesday, August 5, 2015 9:08:02 PM GMT" timestamp="1438808882608">
        <description>Auto saved configuration</description>
        <preserve>false</preserve>
        <user>replicator-1fd96022-db14-434d-811d-31912b1cb907</user>
        <mode>autosaved</mode>
    </firewallDraft>
    <firewallDraft id="127" name="AutoSaved_Wednesday, August 5, 2015 9:08:01 PM GMT" timestamp="1438808881750">
        <description>Auto saved configuration</description>
        <preserve>false</preserve>
        <user>replicator-1fd96022-db14-434d-811d-31912b1cb907</user>
        <mode>autosaved</mode>
    </firewallDraft>
</firewallDrafts>
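
The duplicates in the sample output above can be identified programmatically by sorting the drafts on their timestamps. An illustrative Python sketch; the abbreviated XML and the one-second threshold are assumptions based on the description of this issue:

```python
import xml.etree.ElementTree as ET

# Abbreviated form of the sample draft list (ids and timestamps as above,
# plus one non-duplicate draft for contrast)
xml = """<firewallDrafts>
  <firewallDraft id="144" timestamp="1438816120917"/>
  <firewallDraft id="143" timestamp="1438816120713"/>
  <firewallDraft id="128" timestamp="1438808882608"/>
  <firewallDraft id="127" timestamp="1438808881750"/>
  <firewallDraft id="100" timestamp="1438800000000"/>
</firewallDrafts>"""

# Sort drafts by creation time (timestamps are epoch milliseconds)
drafts = sorted((int(d.get("timestamp")), d.get("id"))
                for d in ET.fromstring(xml).findall("firewallDraft"))

# Flag any draft created within one second of the previous one as a
# replication duplicate that is a candidate for deletion.
duplicates = [cur_id
              for (prev_ts, _), (cur_ts, cur_id) in zip(drafts, drafts[1:])
              if cur_ts - prev_ts <= 1000]
print(duplicates)  # → ['128', '144']
```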

When a firewall policy in the Service Composer is out of sync due to a deleted security group, the firewall rule cannot be fixed in the UI

Workaround: In the UI, you can delete the invalid firewall rule and then add it again. Alternatively, in the API, you can fix the firewall rule by deleting the invalid security group. Then synchronize the firewall configuration: select Service Composer > Security Policies, and for each security policy that has associated firewall rules, click Actions and select Synchronize Firewall Config. To prevent this issue, modify firewall rules so that they do not refer to security groups before deleting those security groups.

Unable to power on guest virtual machine
When you power on a guest virtual machine, the error All required agent virtual machines are not currently deployed may be displayed.

Workaround: Perform the following steps:

  1. In the vSphere Web Client, click Home and then click Administration.
  2. In Solution, select vCenter Server Extension.
  3. Click vSphere ESX Agent Manager and then click the Manage tab.
  4. Click Resolve.

Installation and Upgrade Known Issues

Before upgrading, please read the section Upgrade Notes, earlier in this document.

DVPort fails to enable with Would block due to host prep issue
On an NSX-enabled ESXi host, the DVPort fails to enable with "Would block" due to a host preparation issue. When this occurs, the error message first noticed varies (for example, this may be seen as a VTEP creation failure in VC/hostd.log, a DVPort connect failure in vmkernel.log, or a SIOCSIFFLAGS error in the guest). This happens when VIBs are loaded after the DVS properties are pushed by vCenter. This may happen during upgrade.

Workaround: In NSX 6.1.4 and earlier, an additional reboot is required to address this type of DVPort failure in sites using an NSX logical router. In NSX 6.2.0, a mitigation is provided in the NSX software. This mitigation helps avoid a second reboot in the majority of cases. The root cause is a known issue in vSphere. See VMware knowledge base article 2107951 for details. Please note that, for customers running NSX 6.1.x, this mitigation is also available in NSX 6.1.5 and later.

NSX Manager certificate replacement requires restart of NSX Manager and may require restart of vSphere Web Client
After you replace the NSX Manager appliance certificate, you must always restart the NSX Manager appliance. In certain cases after a certificate replacement, the vSphere web client will not display the "Networking and Security" tab. If this occurs, follow the workaround below.

Workaround: Restart the NSX Manager appliance and then restart the vSphere Web Client.

To restart NSX Manager, perform the following steps:

  1. Log in to the NSX Manager CLI.

  2. Switch to enable/privileged mode by typing en.

  3. Stop the web-manager service by typing no web-manager. Wait for the OK to confirm it has stopped.

  4. Start the web-manager service by typing web-manager. Wait for the OK to confirm it has started.

  5. To restart the vSphere web client, in vCenter 5.5, open https://{vcenter-ip}:5480 and restart the Web Client server.

  6. In the vCenter 6.0 appliance, log in to the vCenter Server shell as a root user and run the following commands:

    shell.set --enabled True

    shell

    localhost:~ # cd /bin

    localhost:~ # service-control --stop vsphere-client

    localhost:~ # service-control --start vsphere-client

  7. In vCenter Server 6.0, run the following commands:

    cd C:\Program Files\VMware\vCenter Server\bin

    service-control --stop vsphere-client

    service-control --start vsphere-client

After vCenter upgrade, vCenter might lose connectivity with NSX
If you are using vCenter embedded SSO and you are upgrading vCenter 5.5 to vCenter 6.0, vCenter might lose connectivity with NSX. This happens if vCenter 5.5 was registered with NSX using the root user name. In NSX 6.2, vCenter registration with root is deprecated. NOTE: If you are using external SSO, no change is necessary. You can retain the same user name, for example admin@mybusiness.mydomain, and vCenter connectivity will not be lost.

Workaround: Reregister vCenter with NSX using the administrator@vsphere.local user name instead of root.

Shutdown Guest OS for agent VMs (SVA) before powering OFF
When a host is put into maintenance mode, all service appliances are powered off instead of being shut down gracefully. This may lead to errors within third-party appliances.

Workaround: None.

Unable to power on the Service appliance that was deployed using the Service Deployments view

Workaround: Before you proceed, verify the following:

  • The deployment of the virtual machine is complete.

  • No tasks such as cloning, reconfiguring, and so on are in progress for the virtual machine displayed in the VC task pane.

  • In the VC events pane of the virtual machine, the following events are displayed after the deployment is initiated:
     
    Agent VM <vm name> has been provisioned.
    Mark agent as available to proceed agent workflow.
     

    In such a case, delete the service virtual machine. In the service deployment UI, the deployment is shown as Failed. Upon clicking the red icon, an alarm for an unavailable Agent VM is displayed for the host. When you resolve the alarm, the virtual machine is redeployed and powered on.

If not all clusters in your environment are prepared, the Upgrade message for Distributed Firewall does not appear on the Host Preparation tab of Installation page
When you prepare clusters for network virtualization, distributed firewall is enabled on those clusters. If not all clusters in your environment are prepared, the upgrade message for Distributed Firewall does not appear on the Host Preparation tab.

Workaround: Use the following REST call to upgrade Distributed Firewall:

PUT https://<nsxmgr-ip>/api/4.0/firewall/globalroot-0/state

If a service group is modified after the upgrade to add or remove services, these changes are not reflected in the firewall table
User-created service groups are expanded in the Edge Firewall table during upgrade; that is, the Service column in the firewall table displays all services within the service group. If the service group is modified after the upgrade to add or remove services, these changes are not reflected in the firewall table.

Workaround: Create a new service group with a different name and then consume this service group in the firewall rule.

Service virtual machine deployed using the Service Deployments tab on the Installation page does not get powered on

Workaround: Follow the steps below.

  1. Manually remove the service virtual machine from the ESX Agents resource pool in the cluster.
  2. Click Networking and Security and then click Installation.
  3. Click the Service Deployments tab.
  4. Select the appropriate service and click the Resolve icon.
    The service virtual machine is redeployed.

vSphere Distributed Switch MTU does not get updated
If you specify an MTU value lower than the MTU of the vSphere distributed switch when preparing a cluster, the vSphere Distributed Switch does not get updated to this value. This is to ensure that existing traffic with the higher frame size isn't unintentionally dropped.

Workaround: Ensure that the MTU you specify when preparing the cluster is higher than or matches the current MTU of the vSphere distributed switch. The minimum required MTU for VXLAN is 1550.
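
The constraint in this workaround can be expressed as a simple pre-check. A Python sketch for illustration only; the function and sample values are assumptions, not an NSX API:

```python
# Minimum MTU required to carry VXLAN-encapsulated frames
VXLAN_MIN_MTU = 1550

def mtu_ok(requested_mtu, current_dvs_mtu):
    """The MTU specified during cluster preparation must meet the VXLAN
    minimum and must not be lower than the distributed switch's current
    MTU, since vSphere will not lower the switch MTU."""
    return requested_mtu >= VXLAN_MIN_MTU and requested_mtu >= current_dvs_mtu

assert mtu_ok(1600, 1600)       # matches the current switch MTU
assert not mtu_ok(1500, 1600)   # lower than the switch MTU: not applied
assert not mtu_ok(1500, 1500)   # below the VXLAN minimum of 1550
```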

SSO cannot be reconfigured after upgrade
When the SSO server configured on NSX Manager is the one native on vCenter server, you cannot reconfigure SSO settings on NSX Manager after vCenter Server is upgraded to version 6.0 and NSX Manager is upgraded to version 6.x.

Workaround: None.

After upgrading from vCloud Networking and Security 5.5.3 to NSX vSphere 6.0.5 or later, NSX Manager does not start up if you are using an SSL certificate with DSA-1024 keysize
SSL certificates with DSA-1024 keysize are not supported in NSX vSphere 6.0.5 onwards, so the upgrade is not successful.

Workaround: Import a new SSL certificate with a supported keysize before starting the upgrade.

SSL VPN does not send upgrade notification to remote client
The SSL VPN gateway does not send an upgrade notification to remote users. The administrator has to manually inform remote users that the SSL VPN gateway (server) has been updated and that they must update their clients.

After upgrading NSX from version 6.0 to 6.0.x or 6.1, NSX Edges are not listed on the UI
When you upgrade from NSX 6.0 to NSX 6.0.x or 6.1, the vSphere Web Client plug-in may not upgrade correctly. This may result in UI display issues such as missing NSX Edges.
This issue is not seen if you are upgrading from NSX 6.0.1 or later.

Workaround: Follow the steps below.

  1. In the vCenter MOB (Managed Object Browser), click Content.

  2. In the Value column, click ExtensionManager.

  3. Look for extensionList property value (for example com.vmware.vShieldManager) and copy the string.

  4. In the Methods area, click UnregisterExtension.

  5. In the Value field, paste the string that you copied in step 3.

  6. Click Invoke Method.

This ensures deployment of the latest plug-in package.

NSX Edge upgrade fails if L2 VPN is enabled on the Edge
L2 VPN configuration updates from 5.x or 6.0.x to 6.1 are not supported. Hence, the Edge upgrade fails if L2 VPN is configured on the Edge.

Workaround: Delete L2 VPN configuration before upgrading NSX Edge. After the upgrade, re-configure L2 VPN.

If vCenter is rebooted during NSX vSphere upgrade process, incorrect Cluster Status is displayed
When you prepare hosts in an environment with multiple NSX prepared clusters during an upgrade, and vCenter Server is rebooted after at least one cluster has been prepared, the other clusters may show a Cluster Status of Not Ready instead of showing an Update link. The hosts in vCenter may also show Reboot Required.

Workaround: Do not reboot vCenter during host preparation.

Momentary loss of third-party anti-virus protection during upgrade
When upgrading from NSX 6.0.x to NSX 6.1.x or 6.2.0, you might experience momentary loss of third-party anti-virus protection for VMs. This issue does not affect upgrades from NSX 6.1.x to NSX 6.2.

Workaround: None.

Host error message appears while configuring distributed firewall
While configuring distributed firewall, if you encounter an error message related to the host, check the status of the fabric feature com.vmware.vshield.vsm.messagingInfra. If the status is Red, perform the following workaround.

Workaround: Use the following REST API call to reset communication between NSX Manager and a single host or all hosts in a cluster.

POST https://<NSX Manager IP>/api/2.0/nwfabric/configure?action=synchronize

<nwFabricFeatureConfig>
  <featureId>com.vmware.vshield.vsm.messagingInfra</featureId>
  <resourceConfig>
    <resourceId>{HOST/CLUSTER MOID}</resourceId>
  </resourceConfig>
</nwFabricFeatureConfig>

Copy and paste of a firewall rule with negate source/destination enabled will list a new rule with Negate option disabled
If a firewall rule with the negate source/destination option enabled is copied and pasted, the paste operation creates a new firewall rule, but with the negate source/destination option disabled.

Workaround: None.

NSX Manager log collects WARN messagingTaskExecutor-7 messages after upgrade to NSX 6.2
After upgrading from NSX 6.1.x to NSX 6.2, the NSX Manager log becomes flooded with messages similar to: WARN messagingTaskExecutor-7 ControllerInfoHandler:48 - host is unknown: host-15 return empty list. There is no operational impact.

Workaround: None.

After upgrading from vCNS 5.5.4 to NSX 6.2.0, the firewall on the Host Preparation tab remains disabled
After upgrading from vCNS 5.5.x to NSX 6.2.0 and upgrading all the clusters, the firewall on the Host Preparation tab remains disabled. In addition, the option to upgrade the firewall does not appear in the UI. This happens only when there are hosts that are not part of any prepared clusters in the datacenter, because the VIBs will not be installed on those hosts.

Workaround: To resolve the issue, move the hosts into an NSX 6.2 prepared cluster.

During an upgrade, L2 and L3 firewall rules do not get published to hosts
After publishing a change to the distributed firewall configuration, the status remains in progress both in the UI and the API indefinitely, and no log for L2 or L3 rules is written to the file vsfwd.log.

Workaround: During an NSX upgrade, do not publish changes to the distributed firewall configuration. To exit from the in progress state and resolve the issue, reboot the NSX Manager virtual appliance.

The NSX REST API call to enable or disable IP detection seems to have no effect
If host cluster preparation is not yet complete, the NSX REST API call to enable or disable IP detection (https://<nsxmgr-ip>/api/2.0/xvs/networks/universalwire-5/features) has no effect.

Workaround: Before issuing this API call, make sure the host cluster preparation is complete.

NSX 6.0.7 SSL VPN clients cannot connect to NSX 6.2 SSL VPN gateways
In NSX 6.2 SSL VPN gateways, the SSLv2 and SSLv3 protocols are disabled. This means the SSL VPN gateway only accepts the TLS protocol. The SSL VPN 6.2 clients have been upgraded to use the TLS protocol by default during connection establishment. In NSX 6.0.7, the SSL VPN client uses an older version of OpenSSL library and the SSLv3 protocol to establish a connection. When an NSX 6.0.x client tries to connect to an NSX 6.2 gateway, the connection establishment fails at the SSL handshake level.

Workaround: Upgrade your SSL VPN client to NSX 6.2 after you have upgraded to NSX 6.2. For upgrade instructions, see the NSX Upgrade documentation.

PSOD during ESXi upgrade
When you are upgrading an NSX-enabled vSphere 5.5U2 host to vSphere 6.0, some of the ESXi host upgrades might halt with a purple diagnostic screen (also known as a PSOD).

Workaround: Reboot the ESXi host and continue with the upgrade.

Must create a segment ID pool for new or upgraded logical routers
In NSX 6.2, a segment ID pool with available segment IDs must be present before you can upgrade a logical router to 6.2 or create a new 6.2 logical router. This is true even if you have no plans to use NSX logical switches in your deployment.

Workaround: If your NSX deployment does not have a local segment ID pool, create one as a prerequisite to NSX logical router upgrade or installation.

Error configuring VXLAN gateway
When configuring VXLAN using a static IP pool (at Networking & Security > Installation > Host Preparation > Configure VXLAN) and the configuration fails to set an IP pool gateway IP on the VTEP (because the gateway is not properly configured or is not reachable), the VXLAN configuration status enters the Error (RED) state for the host cluster.

The error message is VXLAN Gateway cannot be set on host and the error status is VXLAN_GATEWAY_SETUP_FAILURE. In the REST API call, GET https://<nsxmgr-ip>/api/2.0/nwfabric/status?resource=<cluster-moid>, the status of VXLAN is as follows:

<nwFabricFeatureStatus>
  <featureId>com.vmware.vshield.nsxmgr.vxlan</featureId>
  <featureVersion>5.5</featureVersion>
  <updateAvailable>false</updateAvailable>
  <status>RED</status>
  <message>VXLAN Gateway cannot be set on host</message>
  <installed>true</installed>
  <enabled>true</enabled>
  <errorStatus>VXLAN_GATEWAY_SETUP_FAILURE</errorStatus>
</nwFabricFeatureStatus>

Workaround: To fix the error, there are two options.

  • Option 1: Remove VXLAN configuration for the host cluster, fix the underlying gateway setup in the IP pool by making sure the gateway is properly configured and reachable, and then reconfigure VXLAN for the host cluster.

  • Option 2: Perform the following steps.

    1. Fix the underlying gateway setup in the IP pool by making sure the gateway is properly configured and reachable.

    2. Put the host (or hosts) into maintenance mode to ensure no VM traffic is active on the host.

    3. Delete the VXLAN VTEPs from the host.

    4. Take the host out of maintenance mode. Taking hosts out of maintenance mode triggers the VXLAN VTEP creation process on NSX Manager. NSX Manager will try to re-create the required VTEPs on the host.
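The failure above can be detected programmatically before choosing either workaround option. A minimal sketch, assuming the single <nwFabricFeatureStatus> element shown above (a real response for a cluster may wrap several such elements), using Python's standard library:

```python
import xml.etree.ElementTree as ET

# Sample body, as returned by
# GET https://<nsxmgr-ip>/api/2.0/nwfabric/status?resource=<cluster-moid>
RESPONSE = """\
<nwFabricFeatureStatus>
  <featureId>com.vmware.vshield.nsxmgr.vxlan</featureId>
  <status>RED</status>
  <message>VXLAN Gateway cannot be set on host</message>
  <errorStatus>VXLAN_GATEWAY_SETUP_FAILURE</errorStatus>
</nwFabricFeatureStatus>"""

def vxlan_gateway_failed(xml_body: str) -> bool:
    """Return True when the VXLAN feature reports the gateway setup failure."""
    root = ET.fromstring(xml_body)
    return (root.findtext("featureId", "") == "com.vmware.vshield.nsxmgr.vxlan"
            and root.findtext("status") == "RED"
            and root.findtext("errorStatus", "") == "VXLAN_GATEWAY_SETUP_FAILURE")

print(vxlan_gateway_failed(RESPONSE))  # True for the sample above
```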

In a cross vCenter deployment, a universal configuration section might be under (subordinate to) a local configuration section
If you move a secondary NSX Manager to the standalone (transit) state and then change it back to the secondary state, any local configuration changes that you made while it was temporarily in the standalone state might be listed above the replicated universal configuration sections inherited from the primary NSX Manager. This produces the error condition universal section has to be on top of all other sections on secondary NSX Managers.

Workaround: Use the UI option to move sections up or down so that the local section is below the universal section.

After an upgrade, firewall rules and network introspection services might be out of sync with NSX Manager
After upgrading from NSX 6.0 to NSX 6.1 or 6.2, the NSX firewall configuration displays the error message: synchronization failed / out of sync. Using the Force Sync Services: Firewall action does not resolve the issue.

Workaround: In NSX 6.1 and NSX 6.2, either security groups or dvPortgroups can be bound to a service profile, but not both. To resolve the issue, modify your service profiles.

The esx-dvfilter-switch-security VIB is no longer present in the output of the "esxcli software vib list | grep esx" command.
Starting in NSX 6.2, the esx-dvfilter-switch-security modules are included within the esx-vxlan VIB. The only NSX VIBs installed for 6.2 are esx-vsip and esx-vxlan. During an NSX upgrade to 6.2, the old esx-dvfilter-switch-security VIB gets removed from the ESXi hosts.

Workaround: None.

After the upgrade, logical routers with explicit failover teaming configured might fail to forward packets properly
When the hosts are running ESXi 5.5, the explicit failover NSX 6.2 teaming policy does not support multiple active uplinks on distributed logical routers.

Workaround: Alter the explicit failover teaming policy so that there is only one active uplink and the other uplinks are in standby mode.

Uninstalling NSX from a host cluster sometimes results in an error condition
When using the Uninstall action on the Installation: Host Preparation tab, an error might occur with the eam.issue.OrphanedAgency message appearing in the EAM logs for the hosts. After using the Resolve action and rebooting the hosts, the error state continues even though the NSX VIBs are successfully uninstalled.

Workaround: Delete the orphaned agency from the vSphere ESX Agent Manager (Administration: vCenter Server Extensions: vSphere ESX Agent Manager).

SSLv2 and SSLv3 deprecated in NSX 6.2
Starting in NSX 6.2, the SSL VPN gateway only accepts the TLS protocol. After the NSX upgrade, any new NSX 6.2 clients that you create automatically use the TLS protocol during connection establishment. When an NSX 6.0.x client tries to connect to an NSX 6.2 gateway, the connection establishment fails at the SSL handshake step.

Workaround: After the upgrade to NSX 6.2, uninstall your old SSL VPN clients and install the NSX 6.2 version of the SSL VPN clients.

vSphere Web Client does not display Networking and Security tab after backup and restore in NSX vSphere 6.2
When you perform a backup and restore operation after upgrading to NSX vSphere 6.2, the vSphere Web Client does not display the Networking and Security tab.

Workaround: When an NSX Manager backup is restored, you are logged out of the Appliance Manager. Wait a few minutes before logging in to the vSphere Web Client.

After upgrade to NSX 6.2, NSX Manager has more than 100 percent of physical memory allocated
Starting in NSX 6.2, NSX Manager requires 16 GB of reserved memory. The former requirement was 12 GB.

Workaround: Increase the NSX Manager virtual appliance's reserved memory to 16 GB.

After NSX upgrade, guest introspection fails to communicate with NSX Manager.
After upgrading from NSX 6.0.x to NSX 6.1.x or from NSX 6.0.x to NSX 6.2 and before the guest introspection service is upgraded, the NSX Manager cannot communicate with the guest introspection Universal Service Virtual Machine (USVM). The loss of communication between NSX Manager and guest introspection leads to a loss of protection for VMs in the NSX cluster when there is a change to the VMs, such as VM additions, vMotions, or deletions. The NSX Installation > Service Deployments tab shows the current version of guest introspection. When this issue is present, a warning appears in the Service Status column. The warning message includes the list of affected hosts and the error message, Guest Introspection is not ready.

Workaround: To resolve the issue, follow the procedure to upgrade guest introspection in the NSX Upgrade documentation.

Data Security service status is shown as UP even when IP connectivity is not established
The Data Security appliance may not have received an IP address from DHCP or may be connected to an incorrect port group.

Workaround: Ensure that the Data Security appliance gets an IP address from DHCP or the IP pool and is reachable from the management network. Check whether a ping to the Data Security appliance succeeds from NSX/ESX.

NSX Manager Known Issues

Service Composer goes out of sync when policy changes are made while one of the Service Managers is down.
This is related to instances of multiple Services/Service Managers registered and policies created referencing those services. If changes are made in Service Composer to such a policy when one of those Service Managers is down, the changes fail because of callback failure to the Service Manager that is down. As a result, Service Composer goes out of sync.

Workaround: Ensure the Service Manager is responding and then issue a force sync from Service Composer.

Networking and Security Tab not displayed in vSphere Web Client
After vSphere is upgraded to 6.0, you cannot see the Networking and Security Tab when you log in to the vSphere Web Client with the root user name.

Workaround: Log in as administrator@vsphere.local or as any other vCenter user which existed on vCenter Server before the upgrade and whose role was defined in NSX Manager.

After NSX Manager backup is restored, REST call shows status of fabric feature com.vmware.vshield.nsxmgr.messagingInfra as Red
When you restore the backup of an NSX Manager and check the status of fabric feature com.vmware.vshield.nsxmgr.messagingInfra using a REST API call, it is displayed as Red instead of Green.

Workaround: Use the following REST API call to reset communication between NSX Manager and a single host or all hosts in a cluster.

POST https://<NSX Manager IP>/api/2.0/nwfabric/configure?action=synchronize

<nwFabricFeatureConfig>
  <featureId>com.vmware.vshield.vsm.messagingInfra</featureId>
  <resourceConfig>
    <resourceId>{HOST/CLUSTER MOID}</resourceId>
  </resourceConfig>
</nwFabricFeatureConfig>
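The resync call above can be scripted. A minimal sketch that builds the request body with Python's standard library; the cluster MOID shown is a placeholder, and authentication and the HTTPS POST itself are left out:

```python
import xml.etree.ElementTree as ET

# Endpoint from the workaround above; {nsx_manager} is a placeholder.
SYNC_URL = "https://{nsx_manager}/api/2.0/nwfabric/configure?action=synchronize"

def build_sync_body(resource_moid: str) -> str:
    """Build the nwFabricFeatureConfig body for the messagingInfra resync call."""
    config = ET.Element("nwFabricFeatureConfig")
    ET.SubElement(config, "featureId").text = "com.vmware.vshield.vsm.messagingInfra"
    resource = ET.SubElement(config, "resourceConfig")
    ET.SubElement(resource, "resourceId").text = resource_moid
    return ET.tostring(config, encoding="unicode")

# "domain-c7" is a hypothetical cluster MOID, for illustration only.
print(build_sync_body("domain-c7"))
```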

Cannot remove and re-add a host to a cluster protected by Guest Introspection and third-party security solutions
If you remove a host from a cluster protected by Guest Introspection and third-party security solutions by disconnecting it and then removing it from vCenter Server, you may experience problems if you try to re-add the same host to the same cluster.

Workaround: To remove a host from a protected cluster, first put the host in maintenance mode. Next, move the host into an unprotected cluster or outside all clusters and then disconnect and remove the host.

vMotion of NSX Manager may display error "Virtual ethernet card Network adapter 1 is not supported"
You can ignore this error. Networking will work correctly after vMotion.

Syslog shows host name of backed up NSX Manager on the restored NSX Manager
Suppose the host name of the first NSX Manager is A and a backup is created for that NSX Manager. A second NSX Manager, with host name B, is then installed and configured with the same IP address as the old NSX Manager, as described in the backup and restore documentation. After the restore is run on this NSX Manager, it shows host name A immediately after the restore and host name B again after a reboot.

Workaround: Configure the host name of the second NSX Manager to be the same as that of the backed-up NSX Manager.

NSX Manager virtual appliance summary page shows no DNS name
When you log in to the NSX Manager virtual appliance, the Summary page has a field for the DNS name. This field remains blank even though a DNS name has been defined for the NSX Manager appliance.

Workaround: You can view the NSX Manager's hostname and the search domains on the Manage: Network page.

NSX Manager UI does not automatically log out users after changing password using NSX Command Line Interface
If you are logged in to the NSX Manager UI and recently changed your password using the CLI, you might continue to stay logged in using your old password. Typically, the NSX Manager client should automatically log you out when the session times out due to inactivity.

Workaround: Log out from the NSX Manager UI and log back in with your new password.

Standalone NSX Manager incorrectly allows import of universal firewall configuration
Typically, an NSX Manager running in the standalone role should allow the import of local firewall rules only. Starting in NSX 6.2, NSX Manager can run either in the standalone role (managing networks for one vCenter) or in cross vCenter mode, and it incorrectly allows you to import a universal firewall rule into an NSX Manager environment running in the standalone role. Once imported, the universal firewall rules cannot be deleted through either the REST API or the vSphere Web Client, because the manager is running in the standalone role, where the universal section is treated like a local section.

Workaround: If you are running NSX Manager in standalone role, do not import a firewall configuration that contains universal rules. If you have already imported a universal firewall rule into a standalone NSX Manager, fix the issue by importing a saved firewall configuration file that does not contain universal rules, and publish that configuration file by loading it in the Firewall table.
Perform the following steps:

  1. Log in to the vSphere Web Client.

  2. Click Networking & Security and then click Firewall.

  3. Click the Firewall tab.

  4. Click the Saved Configurations tab.

  5. Click the Import configuration (import) icon.

  6. Click Browse and select the file containing the configuration that you want to import.

    Rules are imported based on the rule names. During the import, Firewall ensures that each object referenced in the rule exists in your environment. If an object is not found, the rule is marked as invalid. If a rule referenced a dynamic security group, the dynamic security group is created in NSX Manager during the import.

  7. Add the node back as a secondary node. Synchronization across NSX Managers then automatically syncs the universal section, performing any required cleanup.

    Once you have successfully published the configuration file, the rules are pushed down to the host and impact the datapath. The system functions as expected.

Unable to edit a network host name
If you log in to the NSX Manager virtual appliance, navigate to Appliance Management, click Manage Appliance Settings, and click Network under Settings to edit the network host name, you might receive an invalid domain name list error. This happens when the domain names specified in the Search Domains field are separated with whitespace characters instead of commas. NSX Manager only accepts domain names that are comma separated.
Workaround: Perform the following steps:

  1. Log in to the NSX Manager virtual appliance.

  2. Under Appliance Management, click Manage Appliance Settings.

  3. From the Settings panel, click Network.

  4. Click Edit next to DNS Servers.

  5. In the Search Domains field, replace all whitespace characters with commas.

  6. Click OK to save the changes.
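The correction in step 5 amounts to replacing every whitespace separator with a comma. A one-line sketch of the transformation (the domain names are invented for illustration):

```python
def to_comma_separated(search_domains: str) -> str:
    """Convert a whitespace-separated search-domain list to the
    comma-separated form that NSX Manager accepts."""
    return ",".join(search_domains.split())

print(to_comma_separated("corp.example.com lab.example.com"))
# corp.example.com,lab.example.com
```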

False system event is generated even after successfully restoring NSX Manager from a backup
After successfully restoring NSX Manager from a backup, the following system events appear in the vSphere Web Client when you navigate to Networking & Security: NSX Managers: Monitor: System Events.

  • Restore of NSX Manager from backup failed (with Severity=Critical).

  • Restore of NSX Manager successfully completed (with Severity=Informational).

Workaround: If the final system event message shows the restore as successful, you can ignore the earlier system-generated event message.

Change in behavior of NSX REST API call to add a namespace in a datacenter
In NSX 6.2, the POST https://<nsxmgr-ip>/api/2.0/namespace/datacenter/ REST API call returns a URL with an absolute path, for example http://198.51.100.3/api/2.0/namespace/api/2.0/namespace/datacenter/datacenter-1628/2. In previous releases of NSX, this API call returned a URL with a relative path, for example: /api/2.0/namespace/datacenter/datacenter-1628/2.

Workaround: None.
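A client that must work against both old and new NSX releases can normalize the two return formats to a plain path. A minimal sketch using Python's standard library, with the sample values taken from the examples above:

```python
from urllib.parse import urlsplit

def namespace_path(returned: str) -> str:
    """Return just the path portion, whether the API returned an absolute URL
    (NSX 6.2) or a relative path (earlier releases)."""
    if returned.startswith(("http://", "https://")):
        return urlsplit(returned).path
    return returned

# Relative form from pre-6.2 releases:
print(namespace_path("/api/2.0/namespace/datacenter/datacenter-1628/2"))
# Absolute form from NSX 6.2:
print(namespace_path(
    "http://198.51.100.3/api/2.0/namespace/api/2.0/namespace/datacenter/datacenter-1628/2"))
```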

NSX Edge and Logical Routing Known Issues

Distributed logical router advertises incorrect next hop for default route when BGP neighbor filter is set to "ANY , OUT , DENY"
With 'default originate' enabled on an NSX distributed logical router (DLR), setting a BGP neighbor filter of "ANY , OUT , DENY" on the DLR causes the DLR to advertise an incorrect next hop address for the default route. This error occurs only when a BGP neighbor filter is added with the following attributes:

  • Direction: OUT
  • Action: Deny
  • Network: Any

Workaround: None.

Disabling a routing protocol on NSX Edge may result in temporary loss of data traffic
Disabling a routing protocol on NSX Edge does not send a route-withdraw request to the peer, so traffic is black-holed until the hold-down timer or dead timer expires.

Workaround: None.

Logical Router LIF routes are advertised by upstream Edge Services Gateway even if Logical Router OSPF is disabled
Upstream Edge Services Gateway will continue to advertise OSPF external LSAs learned from Logical Router connected interfaces even when Logical Router OSPF is disabled.

Workaround: Disable redistribution of connected routes into OSPF manually and publish before disabling OSPF protocol. This ensures that routes are properly withdrawn.

ESG syslog cannot send to a remote server, reporting that it cannot resolve the hostname, even though the DNS resolver is working
Immediately after deployment of an Edge, the syslog is unable to resolve hostnames for any configured remote syslog servers.

Workaround: Configure remote syslog servers using their IP address, or use the UI to Force Sync the Edge. This issue was first seen in 6.2.

Logical router DNS Client configuration settings are not fully applied after an update through the REST Edge API

Workaround: When you use REST API to configure DNS forwarder (resolver), perform the following steps:

  1. Specify the DNS Client XML servers settings same as the DNS forwarder setting.

  2. Enable DNS forwarder, and make sure that the forwarder settings are same as the DNS Client servers settings specified in XML.

    Validation and error message not present for invalid next hop in static route, ECMP enabled
    When trying to add a static route with ECMP enabled, if the routing table does not contain a default route and the next hop in the static route configuration is unreachable, no error message is displayed and the static route is not installed.

    Workaround: None.

    If an NSX Edge virtual machine with one sub interface backed by a logical switch is deleted through the vCenter Web Client user interface, data path may not work for a new virtual machine that connects to the same port
    When the Edge virtual machine is deleted through the vCenter Web Client user interface (and not from NSX Manager), the VXLAN trunk configured on dvPort over opaque channel does not get reset. This is because trunk configuration is managed by NSX Manager.

    Workaround: Manually delete the VXLAN trunk configuration by following the steps below:

    1. Navigate to the vCenter Managed Object Browser by typing the following in a browser window:
      https://<vc-ip>/mob?vmodl=1
    2. Click Content.
    3. Retrieve the dvsUuid value by following the steps below.
      1. Click the rootFolder link (for example, group-d1(Datacenters)).
      2. Click the data center name link (for example, datacenter-1).
      3. Click the networkFolder link (for example, group-n6).
      4. Click the DVS name link (for example, dvs-1)
      5. Copy the value of uuid.
    4. Click DVSManager and then click updateOpaqueDataEx.
    5. In selectionSet, add the following XML.
      <selectionSet xsi:type="DVPortSelection">
        <dvsUuid>value</dvsUuid>
        <portKey>value</portKey> <!--port number of the DVPG where trunk vnic got connected-->
      </selectionSet>
    6. In opaqueDataSpec, add the following XML
      <opaqueDataSpec>
        <operation>remove</operation>
        <opaqueData>
          <key>com.vmware.net.vxlan.trunkcfg</key>
          <opaqueData></opaqueData>
        </opaqueData>
      </opaqueDataSpec>
    7. Set isRuntime to false.
    8. Click Invoke Method.
    9. Repeat steps 5 through 8 for each trunk port configured on the deleted Edge virtual machine.
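Steps 5 and 6 must be repeated once per trunk port on the deleted Edge (step 9). A minimal sketch, in Python, that generates the per-port XML fragments to paste into the MOB form fields (the DVS UUID and port keys below are placeholders):

```python
# XML templates copied from the workaround steps above.
SELECTION_SET = """\
<selectionSet xsi:type="DVPortSelection">
  <dvsUuid>{dvs_uuid}</dvsUuid>
  <portKey>{port_key}</portKey>
</selectionSet>"""

OPAQUE_DATA_SPEC = """\
<opaqueDataSpec>
  <operation>remove</operation>
  <opaqueData>
    <key>com.vmware.net.vxlan.trunkcfg</key>
    <opaqueData></opaqueData>
  </opaqueData>
</opaqueDataSpec>"""

def trunk_cleanup_fragments(dvs_uuid, port_keys):
    """Yield one (selectionSet, opaqueDataSpec) pair per trunk port to clean up."""
    for key in port_keys:
        yield SELECTION_SET.format(dvs_uuid=dvs_uuid, port_key=key), OPAQUE_DATA_SPEC

# Hypothetical DVS UUID and port keys, for illustration only:
for selection, spec in trunk_cleanup_fragments("50 2a 1b 2c", ["100", "101"]):
    print(selection)
    print(spec)
```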

Security Services Known Issues

    New universal rules cannot be created, and existing universal rules cannot be edited from the flow monitoring UI

    Workaround: Universal rules cannot be added or edited from the flow monitoring UI. The Edit Rule action is automatically disabled.

    Service composer firewall configuration out of sync
    In the NSX Service Composer, if any firewall policy is invalid (for example, if you deleted a security group that was currently in use in a firewall rule), deleting or modifying another firewall policy causes Service Composer to go out of sync with the error message Firewall configuration is not in sync.

    Workaround: Delete any invalid firewall rules and then synchronize the firewall configuration. Select Service Composer: Security Policies, and for each security policy that has associated firewall rules, click Actions and select Synchronize Firewall Config. To prevent this issue, always fix or delete invalid firewall configurations before making further firewall configuration changes.

    Security policy name does not allow more than 229 characters
    The security policy name field in the Security Policy tab of Service Composer can accept up to 229 characters. This is because policy names are prepended internally with a prefix.

    Workaround: None.

    Some versions of Palo Alto Networks VM-Series do not work with NSX Manager default settings
    Some NSX 6.1.4 components disable SSLv3 by default. Before you upgrade, please check that all third-party solutions integrated with your NSX deployment do not rely on SSLv3 communication. For example, some versions of the Palo Alto Networks VM-series solution require support for SSLv3, so please check with your vendors for their version requirements.

    In upgraded NSX installations, publishing a firewall rule may result in Null Pointer exception in Web Client
    In upgraded NSX installations, publishing a firewall rule may result in a Null Pointer exception in the UI. The rule changes are saved. This is a display issue only.

    If you delete the firewall configuration using a REST API call, you cannot load and publish saved configurations
    When you delete the firewall configuration, a new default section is created with a new section ID. When you load a saved draft (that has the same section name but an older section ID), section names conflict and display the following error:
    Duplicate key value violates unique constraint firewall_section_name_key

    Workaround: Perform one of the following:

    • Rename the current default firewall section after loading a saved configuration.
    • Rename the default section on a loaded saved configuration before publishing it.

Monitoring Services Known Issues

    Unable to add more than eight uplink interfaces during Distributed Logical Router (DLR) deployment using vSphere Web Client

    Workaround: Wait for the DLR deployment to complete and then add the additional interfaces to the Distributed Logical Router.

Resolved Issues

The following issues were resolved in 6.2.0:

Resolved issues are grouped as follows:

Install and Upgrade Resolved Issues

    • After upgrading NSX vSphere from 6.0.7 to 6.1.3, vSphere Web Client crashes on login screen
      After upgrading NSX Manager from 6.0.7 to 6.1.3, you will see exceptions displayed on the vSphere Web Client UI login screen. You will not be able to log in or perform operations on either vCenter or NSX Manager.

      This has been fixed in NSX 6.2.0.

    • Guest Introspection installation fails with error
      When installing Guest Introspection on a cluster, the install fails with the following error:
      Invalid format for VIB Module

      This has been fixed in NSX 6.2.0.

    • Attempts to delete existing NSX Edge Gateway fail in an environment upgraded to NSX 6.1.4
      In NSX installations upgraded from 6.1.3 to 6.1.4, the existing NSX Edge Gateways cannot be deleted after the upgrade to 6.1.4. This issue does not affect new Edge Gateways created after the upgrade. Installations that upgraded directly from 6.1.2 or earlier are not affected by this issue.

      This has been fixed in NSX 6.2.0.

    • AES encryption is unavailable when performing an NSX backup using a third-party secured FTP backup

      This has been fixed in NSX 6.2.0.

    • NSX Manager UI does not display user-friendly error messages during host reboot
      In this 6.2 release, the NSX Manager UI is updated to display detailed error messages that describe the problems you might encounter during host reboot and provide possible solutions.

      This has been fixed in NSX 6.2.0.

    • NSX VIB installation fails to complete
      The installation of the NSX VIBs might not complete as expected if the ixgbe driver fails to load because a third-party module has locked it, preventing it from being used during installation.

      This has been fixed in NSX 6.2.0.

    • Unable to start NSX Manager service after upgrading from vCloud Networking and Security (vCNS) 5.5.3
      After upgrading vCloud Networking and Security (vCNS) 5.5.3 to NSX 6.1.3, the NSX Manager service hangs and is unable to start successfully.

      This has been fixed in NSX 6.2.0.

    • The message bus randomly does not start after NSX Edge reboot
      After restarting an Edge VM, the message bus often does not start after powering on, and an additional reboot is required.

      This has been fixed in NSX 6.2.0.

NSX Manager Resolved Issues

    • Add Domain shows error at LDAP option with Use Domain Credentials
      In NSX 6.1.x, when trying to add an LDAP domain, the web client gave a User Name was not specified error, even when the user name was provided in the UI.

      This has been fixed in NSX 6.2.0.

    • CA signed certificate import needs an NSX Manager reboot before becoming effective
      When you import an NSX Manager certificate signed by CA, the newly imported certificate does not become effective until NSX Manager is rebooted.

      This has been fixed in NSX 6.2.0.

    • Unable to import NSX Manager to LDAPS domain
      When you attempt to add NSX manager to LDAPS domain, the following error message appears.
      Cannot connect to host <Server FQDN>
      error message: simple bind failed: <Server FQDN:Number>

      This has been fixed in NSX 6.2.0.

    • NSX Manager is non-functional after running the write erase command
      When you restart the NSX Manager after running the write erase command, you might notice that the NSX Manager is not working as expected; for example, the password to access the Linux shell has been reset, the setup command is missing, and so on.

      This has been fixed in NSX 6.2.0.

Logical Networking Resolved Issues

    • BGP filters are taking approximately 40 seconds to be effectively applied.
      During this period, all the redistribution policies are applied without filters. This delay applies only to the NSX Distributed Logical Router (DLR) in the OUT direction.

      This has been fixed in NSX 6.2.0.

    • On NSX Edge subinterfaces, ICMP redirects are sent out, even when the Send ICMP redirect option is disabled
      By default, NSX Edge subinterfaces have Send ICMP redirect disabled. Although this option is disabled, ICMP redirects are sent out on edge subinterfaces.

      This has been fixed in NSX 6.2.0.

    • Cannot add non-ASCII characters in bridge or tenant name for Logical Router
      NSX controller APIs do not support non-ASCII characters.

      This has been fixed in NSX 6.2.0.

    • When a BGP neighbor filter rule is modified, the existing filters may not be applied for up to 40 seconds
      When BGP filters are applied to an NSX Edge running IBGP, it may take up to 40 seconds for the filters to be applied on the IBGP session. During this time, NSX Edge may advertise routes which are denied in the BGP filter for the IBGP peer.

      This has been fixed in NSX 6.2.0.

    • One of the NSX Controllers does not hand over master role to other controllers when it is shut down
      Typically, when a controller assumes operations master role and is preparing to shut down, it automatically hands over the master role to other controllers. In this case, the controller fails to hand over the role to other controllers and the status becomes interrupted and then goes into disconnected mode.

      This has been fixed in NSX 6.2.0.

    • Unable to pass VXLAN traffic between hosts with unicast or multicast
      When VMs are on the same host they can communicate across VXLAN with unicast or multicast, but cannot communicate when VMs are on different hosts.

      This has been fixed in NSX 6.2.0.

    • Removing multiple BGP rules on NSX Edge/DLR at the same time causes web client to crash

      This has been fixed in NSX 6.2.0. You can now delete multiple BGP rules at a time.

    • Protocol address is briefly displayed after adding Border Gateway Protocol (BGP) deny rule
      You might notice that the protocol address is briefly displayed after adding a Border Gateway Protocol (BGP) deny rule on the NSX Edge services gateway.

      This has been fixed in NSX 6.2.0.

    • VMs disconnect during vMotion
      You might notice that VMs disconnect during vMotion or you might receive alerts for VMs with disconnected NICs.

      This has been fixed in NSX 6.2.0.

    • Unable to download controller snapshot
      When downloading controller snapshots, you might notice that you are unable to download the snapshot for the last controller. For example, if you have three controllers, you can successfully download snapshots of the first two but might fail to download the snapshot of the third controller.

      This has been fixed in NSX 6.2.0.

Networking and Edge Services Resolved Issues

    • When HA is enabled on Edge Services Gateway, OSPF hello and dead interval configured to values other than 30 seconds and 120 seconds respectively can cause some traffic loss during failover
      When the primary NSX Edge fails with OSPF running and HA enabled, the time required for the standby to take over exceeds the graceful restart timeout, which results in OSPF neighbors removing learned routes from their Forwarding Information Base (FIB) table. This results in a dataplane outage until OSPF re-converges.

      This has been fixed in NSX 6.2.0.

    • VMs are unable to receive ping from Edge DHCP server
      VMs can ping the Edge gateway but are unable to receive DHCP ping from an Edge gateway trunk over an overlay network. The Edge DHCP server is set up on a trunk port and fails to pass or receive any traffic. However, when the Edge Gateway and the DHCP Edge are on the same host, they are able to ping each other. When the DHCP Edge is moved to another host, the DHCP Edge is unable to receive ping from the Edge Gateway.

      This has been fixed in NSX 6.2.0.

    • Edge Load Balancer stats not correctly displayed in the vSphere Web Client
      The Load Balancer does not display the number of concurrent connection statistics in the chart in vSphere Web Client UI.

      This has been fixed in NSX 6.2.0.

    • When the direct aggregate network in local and remote subnet of an IPsec VPN channel is removed, the aggregate route to the indirect subnets of the peer Edge also disappears
      When there is no default gateway on the Edge and, while configuring IPsec, you simultaneously remove all directly connected subnets from the local subnets along with some of the remote subnets, the remaining peer subnets become unreachable over the IPsec VPN.

      This has been fixed in NSX 6.2.0.

    • Unable to pass traffic through load balancer after upgrading to NSX 6.1.2 or later
      When the Insert X-Forwarded-For option is enabled on the NSX Edge Load Balancer, traffic might not pass through the load balancer.

      This has been fixed in NSX 6.2.0.

    • Running the clear ip ospf neighbor command returns a segmentation fault error

      This has been fixed in NSX 6.2.0.

    • Unable to process Kerberos requests
      Certain Kerberos requests fail when load balanced through an NSX Edge.

      This has been fixed in NSX 6.2.0.

    Security Services Resolved Issues

    • The vsfwd.log gets overwritten quickly with a large number of container updates
      After the SpoofGuard policy is changed, NSX Manager promptly sends the change to the host, but the host takes longer to process the change and update the virtual machine's SpoofGuard state.

      This has been fixed in NSX 6.2.0.

    • Cannot configure NSX firewall using security groups or other grouping objects defined at global scope
      Administrator users defined at the NSX Edge scope cannot access objects defined at the global scope. For example, if user abc is defined at Edge scope and security group sg-1 is defined at global scope, then abc will not be able to use sg-1 in firewall configuration on the NSX Edge.

      This has been fixed in NSX 6.2.0.
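      The scoping behavior above can be sketched as a simple lookup model (illustrative Python only, not the NSX implementation; the scope names mirror the example in the issue description): an Edge-scoped lookup should fall back to the enclosing global scope rather than fail.

      ```python
      # Hypothetical model of scoped object visibility: objects live in
      # either a global scope or an Edge-local scope, and a lookup made
      # at Edge scope falls back to the global scope.
      SCOPES = {
          "globalroot-0": {"sg-1": {"type": "SecurityGroup"}},
          "edge-1": {"ipset-local": {"type": "IPSet"}},
      }

      def resolve(object_id, scope):
          """Look up an object in the given scope, falling back to global."""
          for s in (scope, "globalroot-0"):
              if object_id in SCOPES.get(s, {}):
                  return SCOPES[s][object_id]
          return None

      # An Edge-scoped lookup now finds the globally defined security group,
      # matching the fixed behavior described above.
      print(resolve("sg-1", "edge-1"))  # {'type': 'SecurityGroup'}
      ```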

    • Delayed mouse movement when viewing FW rules
      In the NSX Networking and Security section of the vSphere Web Client, moving the mouse over rows in the Firewall Rules display results in a three-second delay.

      This has been fixed in NSX 6.2.0.

    • UI shows error Firewall Publish Failed despite successful publish
      If Distributed Firewall is enabled on only a subset of clusters in your environment and you update an application group that is used in one or more active firewall rules, any publish action in the UI displays an error message containing the IDs of hosts in the clusters where the NSX firewall is not enabled.
      Despite the error message, rules are successfully published and enforced on the hosts where Distributed Firewall is enabled.

      This has been fixed in NSX 6.2.0.

    • Deleting security rules via REST displays error
      If a REST API call is used to delete security rules created by Service Composer, the corresponding rule set is not actually deleted from the service profile cache, resulting in an ObjectNotFoundException error.

      This has been fixed in NSX 6.2.0.

    • Firewall rules do not reflect newly added virtual machine
      When new VMs are added to a logical switch, firewall rules are not updated to include the newly added VMs. If you make a change to the firewall configuration and publish it, the new objects are added to the policy.

      This has been fixed in NSX 6.2.0.

    • Cannot select Active Directory objects when configuring security groups
      In NSX 6.1.x, AD/LDAP Domain Objects took a long time to return in the Security Group Object selection screen.

      This has been fixed in NSX 6.2.0.

    • Cannot add firewall rule with source/destination as multiple comma-separated IP addresses

      This has been fixed in NSX 6.2.0.
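      The kind of input handling involved can be sketched as follows (illustrative Python, not NSX code; the function name is hypothetical): a source/destination field with multiple comma-separated entries is split and each entry validated as an address or CIDR block.

      ```python
      import ipaddress

      def parse_address_list(value):
          """Split a comma-separated source/destination field and validate
          each entry as an IPv4/IPv6 address or CIDR block."""
          addresses = []
          for token in value.split(","):
              token = token.strip()
              # ip_network accepts both plain addresses and CIDR notation;
              # a plain address is normalized to a /32 (or /128) network.
              addresses.append(str(ipaddress.ip_network(token, strict=False)))
          return addresses

      print(parse_address_list("10.0.0.1, 10.0.0.2, 192.168.1.0/24"))
      # ['10.0.0.1/32', '10.0.0.2/32', '192.168.1.0/24']
      ```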

    • Unable to move NSX Distributed Firewall (DFW) section at the top of the list
      When using Service Composer to create a security group policy, the section created in the DFW table cannot be added to the top of the list; there is no way to move a DFW section up or down.

      This has been fixed in NSX 6.2.0.

    • Security policy configured as a port range causes firewall to go out of sync
      Configuring security policies as a port range (for example, "5900-5964") will cause the firewall to go out of sync with a NumberFormatException error.

      This has been fixed in NSX 6.2.0.
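      To illustrate the parsing involved (a minimal Python sketch, not the NSX implementation, which raised the error from Java-side number parsing; the function name is hypothetical), a port spec such as "5900-5964" must be split on the hyphen before either half is converted to an integer:

      ```python
      def parse_port_range(spec):
          """Parse a service port spec such as "5900-5964" (or a single
          port like "5900") into an inclusive range of port numbers."""
          start, sep, end = spec.partition("-")
          lo = int(start)
          # If there was no hyphen, the spec is a single port.
          hi = int(end) if sep else lo
          if not (0 < lo <= hi <= 65535):
              raise ValueError(f"invalid port range: {spec!r}")
          return range(lo, hi + 1)

      ports = parse_port_range("5900-5964")
      print(ports[0], ports[-1], len(ports))  # 5900 5964 65
      ```

      Converting the whole string "5900-5964" to an integer without splitting is exactly the kind of step that fails with a NumberFormatException.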

    Monitoring Services Resolved Issues

    • The #show interface command does not display the bandwidth/speed of vNic_0 interface
      After running the "#show interface" command, the output shows full duplex, 0M/s, instead of the actual bandwidth/speed of the NSX Edge vNic_0 interface.

      This has been fixed in NSX 6.2.0.

    • When IPFIX configuration is enabled for Distributed Firewall, firewall ports in the ESXi management interface for NetFlow for vDS or SNMP may be removed
      When a collector IP address and port are defined for IPFIX, the firewall on the ESXi management interface is opened in the outbound direction for the specified UDP collector ports. This operation may remove the dynamic ruleset configuration on the ESXi management interface firewall for the following services if they were previously configured on the ESXi host:

      • Netflow collector port configuration on vDS

      • SNMP target port configuration


      This has been fixed in NSX 6.2.0.

    • Unable to process Denied/Block events through IPFIX protocol
      Typically, the vsfwd user process collects flows, including dropped/denied ones, and processes them for IPFIX. The IPFIX collector could fail to see Denied/Block events because the vSIP drop-packet queue was either too small or its entries were overwritten by inactive flow events. In this release, the ability to send Denied/Block events using the IPFIX protocol is implemented.

      This has been fixed in NSX 6.2.0.
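      The wrap-around failure mode can be modeled with a bounded queue (an illustrative Python sketch, not vSIP internals; the function name and event labels are hypothetical): once a fixed-size queue fills, the oldest entries are silently displaced by newer ones, so early deny events never reach the collector.

      ```python
      from collections import deque

      def collect_events(events, queue_len):
          """Model a bounded drop-event queue: when full, the oldest
          entries are silently overwritten by newer arrivals."""
          queue = deque(maxlen=queue_len)  # old entries fall off when full
          for event in events:
              queue.append(event)
          return list(queue)

      # Two deny events followed by a burst of inactive-flow events:
      # with a queue of 4, both deny events are wrapped around and lost.
      flows = ["deny-1", "deny-2"] + [f"inactive-{i}" for i in range(8)]
      print(collect_events(flows, 4))
      # ['inactive-4', 'inactive-5', 'inactive-6', 'inactive-7']
      ```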

    Solution Interoperability Resolved Issues

    • Unable to set up organizational network
      When attempting to set up an organization-wide network, vCloud Director fails with an error message.

      This has been fixed in NSX 6.2.0.

    • Unable to launch multiple VMs using VIO setup
      Users of VMware Integrated OpenStack were unable to launch large numbers of VMs or publish large numbers of firewall rules in a short period of time. This resulted in Error publishing ip for vnic messages in the log.

      This has been fixed in NSX 6.2.0.

    Document Revision History

    20 Aug 2015: First edition for NSX 6.2.0.
    4 Sept 2015: Second edition for NSX 6.2.0. Removed unneeded upgrade warning.
    22 Nov 2015: Third edition for NSX 6.2.0. Moved would-block issue (1328589) from the list of fixed issues to the list of known issues. This issue remains a known issue until a fix is provided in vSphere. In NSX 6.1.5 and 6.2.0, NSX adds mitigations for the issue.
