
VMware NSX-T Data Center 3.0.3   |  25 March 2021  |  Build 17778534

Check regularly for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

What's New

Features, Functional Enhancements and Extensions

This release of NSX-T Data Center is a maintenance release, and there are no major or minor features, functional enhancements, or extensions.

Compatibility and System Requirements

For compatibility and system requirements information, see the VMware Product Interoperability Matrices and the NSX-T Data Center Installation Guide.

Upgrade Notes for This Release

For instructions about upgrading the NSX-T Data Center components, see the NSX-T Data Center Upgrade Guide.

API and CLI Resources

See code.vmware.com to use the NSX-T Data Center APIs or CLIs for automation.

The API documentation is available from the API Reference tab. The CLI documentation is available from the Documentation tab.

Available Languages

NSX-T Data Center has been localized into multiple languages: English, German, French, Japanese, Simplified Chinese, Korean, Traditional Chinese, and Spanish. Because NSX-T Data Center localization utilizes the browser language settings, ensure that your settings match the desired language.

Document Revision History

March 25, 2021. First edition.
April 15, 2021. Second edition. Added resolved issue 2736837.
August 20, 2021. Third edition. Added resolved issue 2699417.
September 17, 2021. Fourth edition. Added known issue 2761589.
May 31, 2022. Fifth edition. Updated link to KB article for known issue 2690455.

 

Resolved Issues

  • Fixed Issue 2557166: Distributed Firewall rules using context-profiles (layer 7) are not working as expected when applied to Kubernetes pods.

    After configuring L7 rules on Kubernetes pods, traffic that is supposed to match L7 rules is hitting the default rule instead.

  • Fixed Issue 2486119: Physical NICs are migrated from NVDS back to VDS uplinks with mapping that is different from the original mapping in VDS.

    When a Transport Node is created with a Transport Node Profile that has pNIC install and uninstall mappings, pNICs are migrated from VDS to NVDS. Later when NSX-T Data Center is removed from the Transport Node, the pNICs are migrated back to the VDS, but the mapping of pNIC to uplink might be different from the original mapping in VDS.

  • Fixed Issue 2586606: Load balancer does not work when Source-IP persistence is configured on a large number of virtual servers. 

    When Source-IP persistence is configured on a large number of virtual servers on a load balancer, it consumes a significant amount of memory and might lead to the NSX Edge running out of memory. However, the issue might reoccur if more virtual servers are added.

  • Fixed Issue 2577028: Host Preparation might fail.

    Host preparation might fail due to a configuration hash mismatch, leading to a discovery loop.

  • Fixed Issue 2685284: The control plane loses connectivity with the host after circular certificate replacement. 

    The host loses connectivity with the control plane and requires a reboot. This condition occurs when you replace certificate-1 with certificate-2 and replace certificate-2 back with certificate-1. 

  • Fixed Issue 2688015: Tier-0 realized state shows "In progress" when the Tier-0-SR is deleted from an NSX Edge node. 

    If a Tier-0 SR is deleted from an NSX Edge node, the realized state of the Tier-0 might start showing as "In progress" on that NSX Edge node.  

  • Fixed Issue 2688410: In multiple TEP setups, the MAC sync table might not synchronize between standby and active edge nodes, which might cause traffic loss.

    With multiple TEPs, the active edge node learns the L2 MAC table and synchronizes it to the standby edge node (the MAC sync mechanism). In this instance, the MAC sync code does not follow this rule, which causes the standby node to drop the MAC sync message. Traffic loss might occur due to an empty MAC sync table.

  • Fixed Issue 2603550: Some VMs that are migrated using vMotion might lose network connectivity during the upgrade of NSX Manager nodes.

    During the upgrade of NSX Manager nodes, you might find that some VMs are migrated via DRS and lose network connectivity after the migration.

  • Fixed Issue 2692829: VRF ARP Proxy entries are not created or cleared when an uplink is updated.

    For a VRF, after an uplink is updated, ping does not work, and ARP entries are not seen on the NSX Edge.

  • Fixed Issue 2690675: Sorting in NSX-T Data Center APIs is not working properly.

    The NSX-T Data Center APIs ignore the sort_by and sort_ascending parameters. Instead, the API response sorts rules by sequence_number, irrespective of the given parameters.

  • Fixed Issue 2692946: Configuring a client SSL certificate should not be required if a virtual server rule with SSL passthrough is configured.

    If a virtual server rule with an SSL passthrough action and no specific SNI condition is configured, the VIP should not require an SSL configuration with client certificates.

  • Fixed Issue 2715469: Upgrading an NSX Edge deployed on a bare metal server that has multiple disks might fail. 

    An upgrade of an NSX Edge that is deployed on a bare metal server with multiple disks might result in a failure after reboot.

  • Fixed Issue 2708200: NSX Manager might run out of memory and stop due to a Classless Inter-Domain Routing (CIDR) NAT rule issue.

    A NAT rule using CIDR that contains a small prefix might cause the NSX Manager service to report an OutOfMemory issue and fail.

  • Fixed Issue 2708189: NSX Edge stops working.

    Your NSX-T Data Center deployment experiences an outage because the NSX Edge stops working. 

  • Fixed Issue 2704023: NSX-T Data Center upgrade fails when the TeamPolicyUpDelay runtime optional parameter of the deployed ENS switch is increased from the default value.

    NSX-T Data Center upgrade fails when the TeamPolicyUpDelay runtime optional parameter of the deployed ENS switch is increased from the default 100ms value.

  • Fixed Issue 2699607: When SSL passthrough is configured, LB SNAT port allocations might fail.

    When SSL passthrough is configured, HTTP requests to allocate LB SNAT ports cannot be forwarded to backend servers. This might result in the inability to free up ports and allocation failure.

  • Fixed Issue 2695259: Edge runs out of memory due to connection timeout errors.

    As a result of LB configuration changes and many sessions running at the same time, the Edge might run out of memory due to connection timeout errors.

  • Fixed Issue 2706241: Datapath service might fail when two VRFs on the Edge share "reject" settings in firewall rules.

    Datapath service might fail when two different VRFs on the Edge form a circular reject policy. The failure occurs when a TCP reset packet attempts to return, but is stopped by the reject rule in the first VRF.

  • Fixed Issue 2701344: You experience a break in the flow of service insertion traffic.

    SPF port might be missing required properties, such as VXLAN ID, and that causes the flow of service insertion traffic to break.

  • Fixed Issue 2694838: Validation errors might occur when attaching a T0 to Segments configured with a named-teaming policy for BM-Edges.

    When BM-Edges are configured with a named-teaming policy containing LAGs, creating T0 interfaces and attaching them to Segments using the same named-teaming policy produces a validation error.

  • Fixed Issue 2692952: Policy workflows might fail.

    Policy search might not resolve 'start search resync policy' on NSX Manager nodes.

  • Fixed Issue 2692961: Controller nodes are reported by CLI command "nsxcli -c get nodes" as separate nodes.

    NSX-T Data Center CLI command 'nsxcli -c get nodes' reports the Manager and Controller nodes separately. This might be confusing since historically nodes were reported as different nodes, but even after the nodes were merged the CLI still reports them separately. The CLI report is inaccurate and can be disregarded.

  • Fixed Issue 2693089: Distributed Firewall does not handle 0.0.0.0/0 or ::0 addresses correctly.

    The vMotion import might fail and ports can get disconnected when using the addresses 0.0.0.0/0 or ::0.

  • Fixed Issue 2693109: Autonomous Edge Backup/Restore to a new NSX Edge VM does not adapt to the new uplink MAC addresses (for example, fp-eth0), resulting in traffic problems.

    Following a new NSX Edge VM restore, traffic to external devices via the uplink might fail due to incorrect MAC address.

  • Fixed Issue 2693112: Palo Alto Networks watcher cannot receive notifications.

    The Palo Alto Networks (PAN) watcher cannot receive notifications when its service is unhealthy. NSX-T Data Center continues to send "refresh_needed=true", but receives a send failure until the PAN watcher becomes healthy.

  • Fixed Issue 2693158: NSX-T appliance runs out of disk space, creating issues in the normal functioning of services.

    Some NSX-T tables do not remove data entries periodically, which causes disk space issues and can impact normal service functions.

  • Fixed Issue 2693193: The SYN ACK packet gets dropped by the Distributed FW which results in session failure.

    The SYN ACK packet gets dropped by the Distributed FW which results in session failure.

  • Fixed Issue 2693156: After a Health Check is enabled or manually triggered, the Health Check results might not be accessible.

    After a Health Check is enabled or manually triggered, the Health Check results and other feature data, such as traceflow observations, might not be accessible. This might be caused when the maximum pendMap size is reached, which results in the opsagent stopping.

  • Fixed Issue 2693146: System Overview page cannot load after policy group is deleted.

    When the System Overview page fails to load, the error Failed to get "{{reportName}}" report - Failed to fetch System details. Please contact the administrator. displays. This occurs when a policy group containing a redirect rule is deleted.

  • Fixed Issue 2693145: Traffic going through the NSX Edge might be disrupted. This might occur if you configured your virtual load balancer with a single port range.

    Traffic going through the NSX Edge might be disrupted. For example, BGP peering with that NSX Edge node might become unavailable. This might occur if you configured your virtual load balancer with a single port range, like 1000 to 2000.

  • Fixed Issue 2693141: Traffic delays might occur when NSX Edge VMs are not excluded from SI Exclude List.

    Traffic delays might occur when NSX Edge VMs are not excluded from SI Exclude List.

  • Fixed Issue 2693138: Backup fails due to a failed file transfer to a Windows-based SFTP server.

    Backup fails due to a failed file transfer to a Windows-based SFTP server.

  • Fixed Issue 2693136: High bandwidth consumption or packets per second counts observed on uplinks with SI VMs.

    High bandwidth consumption or packets per second counts observed on uplinks with SI VMs. 

  • Fixed Issue 2693134: Restore fails if the same case-sensitive characters are not used when referencing the FQDN configuration during backup and restore.

    Restore fails if the same case-sensitive characters are not used when referencing the FQDN configuration during backup and restore.

  • Fixed Issue 2693120: NSX-T Data Center DHCP does not support the broadcast bit in the protocol.

    Legacy devices with outdated TCP/IP stacks might not get IP addresses using DHCP. This occurs because NSX-T Data Center DHCP does not support the broadcast bit in the protocol.

  • Fixed Issue 2712832: Your ESXi host might stop responding.

    When the ESXi flow cache is enabled and traffic has multiple destinations, such as multicast traffic, your ESXi host can become unavailable because of a rare race condition.

  • Fixed Issue 2713627: NSX Agent incorrectly reports BGP neighbor state as running even though loss of traffic has occurred. 

    NSX Agent incorrectly reports BGP neighbor state as running after NSX Edge runs out of memory. 

  • Fixed Issue 2699271: Distributed Firewall IPFIX reports about container traffic do not contain any records.

    Distributed Firewall IPFIX reports about container traffic do not contain any records even though traffic hit firewall rules.

  • Fixed Issue 2704063: NSX Edge VM or IP discovery might not work properly when ENS Interrupt or Enhanced Datapath mode is used.

    The parsing logic does not recognize UDP packets and skips the relevant processing for the NSX Edge VM or IP discovery. This might impact NSX Edge VM operations and IP discovery switching profiles.

  • Fixed Issue 2693154: When NT LAN Manager (NTLM) is enabled, Load Balancer might stop working.

    When NT LAN Manager (NTLM) is enabled, the Load Balancer might stop working because the NTLM context still holds a freed connection that is reused by the next request.

  • Fixed Issue 2708175: NSX-T Virtual Distributed Switch runtime options are not persisted across reboot. 

    If non-default values are set in N-VDS runtime options, they are not persisted after a reboot; only configuration options persist.

  • Fixed Issue 2701323: OVF does not support multiple DNS servers.

    OVF does not support multiple DNS servers.

  • Fixed Issue 2701327: The SVM ID mentioned in the error condition has a mismatched naming convention.

    The SVM ID mentioned in the error condition has a mismatched naming convention.

  • Fixed Issue 2692969: After an MTU change (or any other reconfiguration that requires a restart of the NIC), the physical NIC link becomes unavailable.

    In bare metal NSX-T Edges, Intel XXV710 25G NICs become unavailable after any reconfiguration that requires a restart of the NIC, including an MTU change.

  • Fixed Issue 2705074: Ping traffic testing connectivity between two hosts fails when the corrupted packets are dropped. 

    In Bare Metal NSX Edge with Intel NICs, ICMP traffic with DF (don't fragment) bit set in the IP header might be corrupted during transmission.

  • Fixed Issue 2690714: Poorly constructed VM firewall configuration group memberships might cause an NSX non-maskable interrupt (NMI) during a vMotion import.

    During vMotion import of a VM's firewall configuration data, the destination host CPU is blocked. This might cause a host NMI exception. The situation might occur when the same IP addresses are members of multiple groups, and those groups are used in multiple rules. This type of configuration is not a common occurrence.

  • Fixed Issue 263199: When upgrading from NSX 2.5.x or 3.0.2, if transport nodes are configured with N-VDS with named-teaming policies, the NSX VIB upgrade fails for ESXi 6.7 hosts.

    When upgrading from NSX 2.5.x or 3.0.2, if TNs are configured with N-VDS with named-teaming policies, the NSX VIB upgrade fails for ESXi 6.7 hosts. The opsAgent fails to clear previously created named-teaming policies, which causes issues in the upgrade path.

  • Fixed Issue 2693101: Load balancer might run out of memory if too many sticky tables are enabled.

    On large load balancer servers with over 100 virtual servers with enabled sticky tables, service might become unavailable after running out of memory.

  • Fixed Issue 2736837: For certain configurations where two Security Groups are configured with the same dynamic criteria, there is a rare condition where a compute VM matching the criteria may not be added to the Security Group.

    VMs may not be attached to the appropriate Security Group, which could result in incorrect rule validation.

  • Fixed Issue 2699417: Load balancer SNAT port allocation failed.

    Traffic cannot be forwarded to the backend server by the load balancer.

Known Issues

The known issues are grouped as follows.

General Known Issues
  • Issue 2320529: "Storage not accessible for service deployment" error thrown after adding third-party VMs for newly added datastores.

    "Storage not accessible for service deployment" error thrown after adding third-party VMs for newly added datastores even though the storage is accessible from all hosts in the cluster. This error state persists for up to thirty minutes.

    Workaround: Retry after thirty minutes. As an alternative, make the following API call to update the cache entry of datastore:

    https://<nsx-manager>/api/v1/fabric/compute-collections/<CC Ext ID>/storage-resources?uniform_cluster_access=true&source=realtime

    where <nsx-manager> is the IP address of the NSX Manager where the service deployment API has failed, and <CC Ext ID> is the identifier in NSX of the cluster where the deployment is being attempted.
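
    For example, a minimal sketch of this call using curl, assuming basic authentication as an admin user and that the endpoint accepts a GET request (adjust credentials and identifiers for your environment):

    curl -k -u admin "https://<nsx-manager>/api/v1/fabric/compute-collections/<CC Ext ID>/storage-resources?uniform_cluster_access=true&source=realtime"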

  • Issue 2328126: Bare Metal issue: Linux OS bond interface when used in NSX uplink profile returns error.

    When you create a bond interface in the Linux OS and then use this interface in the NSX uplink profile, you see this error message: "Transport Node creation may fail." This issue occurs because VMware does not support Linux OS bonding. However, VMware does support Open vSwitch (OVS) bonding for Bare Metal Server Transport Nodes.

    Workaround: If you encounter this issue, see knowledge base article 67835 Bare Metal Server supports OVS bonding for Transport Node configuration in NSX-T.

  • Issue 2390624: Anti-affinity rule prevents service VM from vMotion when host is in maintenance mode.

    If a service VM is deployed in a cluster with exactly two hosts, the HA pair with anti-affinity rule will prevent the VMs from vMotioning to the other host during any maintenance mode tasks. This might prevent the host from entering Maintenance Mode automatically.

    Workaround: Power off the service VM on the host before the Maintenance Mode task is started on vCenter.

  • Issue 2389993: Route map removed after redistribution rule is modified using the Policy page or API.

    If a route map was added to a redistribution rule using the management plane UI/API, it will get removed if you modify the same redistribution rule from the Simplified (Policy) UI/API.

    Workaround: You can restore the route map by returning to the management plane interface or API and re-adding it to the same rule. If you wish to include a route map in a redistribution rule, always use the management plane interface or API to create and modify it.

  • Issue 2690455: Restore might not complete due to certificate rejection.

    A restore might not complete as a result of an installed certificate that has no CRL Distribution Point combined with an incorrect setting (the crl_checking_enable configuration set to true).

    Workaround: See VMware knowledge base article 83257.

  • Issue 2329273: No connectivity between VLANs bridged to the same segment by the same edge node.

    Bridging a segment twice on the same edge node is not supported. However, it is possible to bridge two VLANs to the same segment on two different edge nodes.

    Workaround: None 

  • Issue 2355113: Cannot install NSX Tools on RedHat and CentOS Workload VMs with accelerated networking enabled in Microsoft Azure.

    In Microsoft Azure when accelerated networking is enabled on RedHat (7.4 or later) or CentOS (7.4 or later) based OS and with NSX Agent installed, the ethernet interface does not obtain an IP address.

    Workaround: After booting up RedHat or CentOS based VM in Microsoft Azure, install the latest Linux Integration Services driver available at https://www.microsoft.com/en-us/download/details.aspx?id=55106 before installing NSX tools.

  • Issue 2370555: User can delete certain objects in the Advanced interface, but deletions are not reflected in the Simplified interface.

    Specifically, groups added as part of a distributed firewall exclude list can be deleted in the Advanced interface Distributed Firewall Exclusion List settings. This leads to inconsistent behavior in the interface.

    Workaround: Use the following procedure to resolve this issue:

    1. Add an object to an exclusion list in the Simplified interface.
    2. Verify that it appears in the Distributed Firewall exclusion list in the Advanced interface.
    3. Delete the object from the Distributed Firewall exclusion list in the Advanced interface.
    4. Return to the Simplified interface, add a second object to the exclusion list, and apply it.
    5. Verify that the new object appears in the Advanced interface.
  • Issue 2520803: Encoding format for Manual Route Distinguisher and Route Target configuration in EVPN deployments.

    You can currently configure a manual Route Distinguisher in either Type-0 or Type-1 encoding. However, to ensure the best possible encoding scheme for configuring a manual Route Distinguisher in EVPN deployments, use Type-1. Only Type-0 encoding is implemented for manual Route Target configuration.

    Workaround: Configure only Type-1 encoding for Route Distinguisher.

  • Issue 2490064: Attempting to disable VMware Identity Manager with "External LB" toggled on does not work.

    After enabling VMware Identity Manager integration on NSX with "External LB", if you attempt to then disable integration by switching "External LB" off, after about a minute, the initial configuration will reappear and overwrite local changes.

    Workaround: When attempting to disable vIDM, do not toggle the External LB flag off; only toggle off vIDM Integration. This will cause that config to be saved to the database and synced to the other nodes.

  • Issue 2537989: Clearing VIP (Virtual IP) does not clear vIDM integration on all nodes.

    If VMware Identity Manager is configured on a cluster with a Virtual IP, disabling the Virtual IP does not result in the VMware Identity Manager integration being cleared throughout the cluster. You can manually fix vIDM integration on each individual node if the VIP is disabled.

    Workaround: Manually fix the vIDM configuration on each node individually.

  • Issue 2538956: DHCP Profile shows a "NOT SET" message and the Apply button is disabled when configuring Gateway DHCP on a Segment.

    When attempting to configure Gateway DHCP on a Segment when there is no DHCP configured on the connected Gateway, the DHCP Profile cannot be applied because there is no valid DHCP configuration to be saved.

    Workaround: None.

  • Issue 2525205: Management plane cluster operations fail under certain circumstances.

    When attempting to join Manager N2 to Manager N1 by issuing a "join" command on Manager N2, the join command fails. You cannot form a Management plane cluster, which might impact availability.

    Workaround:

    1. To retain Manager N1 in the cluster, issue a "deactivate" CLI command on Manager N1. This will remove all other Managers from the cluster, keeping Manager N1 as the sole member of the cluster.
    2. Ensure that the non-configuration Corfu server is up and running on Manager N1 by issuing the "systemctl start corfu-nonconfig-server" command.
    3. Join other new Managers to the cluster by issuing "join" commands on them (see the sketch after this list).
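
    A rough command-line sketch of this sequence is shown below; the placeholders are hypothetical, and the exact "join" syntax should be taken from the NSX-T CLI reference for your version:

    deactivate cluster                                   (NSX CLI on Manager N1; removes all other nodes)
    systemctl start corfu-nonconfig-server               (root shell on Manager N1)
    join <N1-ip> cluster-id <cluster-id> username admin password <password> thumbprint <api-thumbprint>   (NSX CLI on each new Manager)
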
  • Issue 2526769: Restore fails on multi-node cluster.

    When starting a restore on a multi-node cluster, restore fails and you have to redeploy the appliance.

    Workaround: Deploy a new setup (one node cluster) and start the restore.

  • Issue 2538041: Groups containing Manager Mode IP Sets can be created from Global Manager.

    With Global Manager, you can create Groups that contain IP Sets that were created in Manager Mode. The configuration is accepted, but the groups do not get realized on Local Managers.

    Workaround: None.

  • Issue 2463947: When preemptive mode HA is configured, and IPSec HA is enabled, upon double failover, packet drops over VPN are seen.

    Traffic over VPN will drop on the peer side. IPSec replay errors will increase.

    Workaround: Wait for the next IPSec rekey, or disable and re-enable that particular IPSec session.

  • Issue 2523212: The nsx-policy-manager becomes unresponsive and restarts.

    API calls to nsx-policy-manager will start failing, with service being unavailable. You cannot access policy manager until it restarts and is available.

    Workaround: Invoke the API with at most 2000 objects per call.
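
    If the failing calls are large reads, one way to bound the number of objects returned per request is the standard NSX-T paging parameters (page_size and cursor); the path below is only an illustration:

    curl -k -u admin "https://<nsx-manager>/policy/api/v1/infra/domains/default/security-policies?page_size=1000"
    (repeat the call with the "cursor" value returned in the response to fetch the next page)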

  • Issue 2521071: For a Segment created in Global Manager, if it has a BridgeProfile configuration, then the Layer2 bridging configuration is not applied to individual NSX sites.

    The consolidated status of the Segment will remain at "ERROR". This is due to a failure to create the bridge endpoint at a given NSX site. You will not be able to successfully configure a BridgeProfile on Segments created via Global Manager.

    Workaround: Create a Segment at the NSX site and configure it with bridge profile.

  • Issue 2527671: When the DHCP server is not configured, retrieving DHCP statistics/status on a Tier0/Tier1 gateway or segment displays an error message indicating realization is not successful.

    There is no functional impact. The error message is incorrect and should report that the DHCP server is not configured.

    Workaround: None.

  • Issue 2532127: An LDAP user can't log in to NSX if the user's Active Directory entry does not contain the UPN (userPrincipalName) attribute and contains only the samAccountName attribute.

    User authentication fails and the user cannot log in to the NSX user interface.

    Workaround: None.

  • Issue 2540733: Service Instance is not created after re-adding the same host in the cluster.

    Service Instance in NSX is not created after re-adding the same host in the cluster, even though the service VM is present on the host. The deployment status will be shown as successful, but protection on the given host will be down.

    Workaround: Delete the service VM from the host. This will create an issue which will be visible in the NSX user interface. On resolving the issue, a new SVM and corresponding service instance in NSX will be created.

  • Issue 2530822: Registration of vCenter with NSX manager fails even though NSX-T extension is created on vCenter.

    While registering vCenter as a compute manager in NSX, even though the "com.vmware.nsx.management.nsxt" extension is created on vCenter, the compute manager registration status remains "Not Registered" in NSX-T. Operations on vCenter, such as the automatic installation of Edge nodes, cannot be performed using the vCenter Server compute manager.

    Workaround:

    1. Delete compute manager from NSX-T manager.
    2. Delete the "com.vmware.nsx.management.nsxt" extension from vCenter using the vCenter Managed Object Browser.
  • Issue 2482580: IDFW/IDS configuration is not updated when an IDFW/IDS cluster is deleted from vCenter.

    When a cluster with IDFW/IDS enabled is deleted from vCenter, the NSX management plane is not notified of the necessary updates. This results in inaccurate count of IDFW/IDS enabled clusters. There is no functional impact. Only the count of the enabled clusters is wrong.

    Workaround: None.

  • Issue 2534855: Route maps and redistribution rules of Tier-0 gateways created on the simplified UI or policy API will replace the route maps and redistribution rules created on the advanced UI (or MP API).

    During upgrades, any existing route maps and rules that were created on the simplified UI (or policy API) will replace the configurations that were done directly on the advanced UI (or MP API).

    Workaround: If redistribution rules or route maps created on the Advanced UI (MP UI) are lost after the upgrade, you can re-create all rules either from the Advanced UI (MP) or the Simplified UI (Policy). Always use either Policy or MP for redistribution rules, not both at the same time, as redistribution is fully supported in NSX-T 3.0.

  • Issue 2535355: Session timer might not take effect after upgrading to NSX-T 3.0 under certain circumstances.

    Session timer setting is not taking effect. The connection session (e.g., tcp established, tcp fin wait) will use its system default session timer instead of the custom session timer. This might cause the connection (TCP/UDP/ICMP) session to be established longer or shorter than expected.

    Workaround: None.

  • Issue 2534933: Certificates that have LDAP based CDPs (CRL Distribution Point) fail to apply as tomcat/cluster certs.

    You can't use CA-signed certificates that have LDAP CDPs as cluster/tomcat certificate.

    Workaround: See VMware knowledge base article 78794.

  • Issue 2499819: Maintenance-based NSX for vSphere to NSX-T Data Center host migration for vCenter 6.5 or 6.7 might fail due to vMotion error.

    This error message is shown on the host migration page:
    Pre-migrate stage failed during host migration [Reason: [Vmotion] Can not proceed with migration: Max attempt done to vmotion vm b'3-vm_Client_VM_Ubuntu_1404-shared-1410'].

    Workaround: Retry host migration.

  • Issue 2518183: For Manager UI screens, the Alarms column does not always show the latest alarm count.

    Recently generated alarms are not reflected on Manager entity screens.

    Workaround:

    1. Refresh the Manager entity screen to see the correct alarm count.
    2. Missing alarm details can also be seen from the alarm dashboard page.
  • Issue 2543353: NSX T0 Edge calculates an incorrect UDP checksum post-ESP encapsulation for IPsec tunneled traffic.

    Traffic is dropped due to bad checksum in UDP packet.

    Workaround: None.

  • Issue 2556730: When configuring an LDAP identity source, authentication via LDAP Group -> NSX Role Mapping does not work if the LDAP domain name is configured using mixed case.

    Users who attempt to log in are denied access to NSX.

    Workaround: Use all lowercase characters in the domain name field of the LDAP Identity Source configuration.

  • Issue 2557287: Transport Node Profile (TNP) updates done after a backup are not restored.

    You won't see any TNP updates done after the backup on a restored appliance.

    Workaround: Take a backup after any updates to TNP.

  • Issue 2572052: Scheduled backups might not get generated.

    In some corner cases, scheduled backups are not generated.

    Workaround: Restart all NSX Manager appliances.

  • Issue 2549175: Searching in policy fails with the message: "Unable to resolve with start search resync policy."

    Searching in policy fails because search is out of sync when the NSX Manager nodes are provided with new IPs.

    Workaround: Ensure that the DNS PTR records (IP to hostname mappings in the DNS server) for all the NSX Managers are correct.
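
    For example, you can spot-check the reverse records with standard DNS tools (a sketch; substitute each NSX Manager IP and confirm the returned name matches the expected hostname):

    dig -x <nsx-manager-ip> +short
    nslookup <nsx-manager-ip>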

  • Issue 2628634: Day2tools will try to migrate a TN from NVDS to CVDS even after "vds-migrate disable-migrate" is called.

    The NVDS to CVDS migration will fail and the host will be left in maintenance mode.

    Workaround: Manually move the hosts out of maintenance mode.

  • Issue 2607196: Service Insertion (SI) and Guest Introspection (GI) that use Host base deployments are not supported for NVDS to CVDS Migration.

    Cannot migrate Transport Nodes with NVDS using NVDS to CVDS tool.

    Workaround: Undeploy GI/SI and try the NVDS to CVDS migration. This will delete any SVM instances.
    Once migration is complete on all Transport Nodes, deploy the GI/SI instances back.

  • Issue 2588072: NVDS to CVDS switch migrator doesn't support Stateless ESX with vmks.

    NVDS to CVDS switch migrator cannot be used for migration with Stateless ESX hosts if the NVDS switch has vmks on it.

    Workaround: Either migrate out the vmks from NVDS or remove that host from NSX and perform migration.

  • Issue 2627439: If a transport node profile is applied on a cluster before migration, one extra transport node profile is created by the system in detached state after migration.

    There might be one extra transport node profile generated for each original transport node profile.

    Workaround: None.

  • Issue 2540352: No backups are available in the Restore Backup window for CSM.

    When restoring a CSM appliance from a backup, you enter the details of the backup file server in the Restore wizard but a list of backups does not appear in the UI even though it is available on the server.

    Workaround: On the newly installed CSM appliance that you are restoring, select the Backup tab and provide details of the backup server in the configuration window. When you switch to the Restore tab and start the Restore wizard, you will see the list of backups to restore from.

  • Issue 2622846: IDS with proxy settings enabled cannot access GitHub, which is used to download signatures.

    New updates of signature bundle downloads will fail.

    Workaround: Use the offline download feature to download and upload signatures in case of no network connectivity.

  • Issue 2723812: In NSX-T Data Center, UI or REST API might not show the latest transport node status.

    Depending on the scale size and the system load, the UI and REST API might not show the latest transport node status.

    Workaround: Restart the proton service on the NSX Manager.
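
    A hedged sketch from the NSX Manager admin CLI, assuming the proton process is exposed as the "manager" service (verify the service name with "get services" first):

    get service manager
    restart service manager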

  • Issue 2761589: Default layer 3 rule configuration changes from DENY_ALL to ALLOW_ALL on Management Plane after upgrading from NSX-T 2.x to NSX-T 3.x.

    This issue occurs only when rules are not configured via Policy, and the default layer 3 rule on the Management Plane has the DROP action. After upgrade, the default layer 3 rule configuration changes from DENY_ALL to ALLOW_ALL on Management Plane.

    Workaround: Set the action of the default layer 3 rule to DROP from the Policy UI after the upgrade.

Installation Known Issues
  • Issue 2522909: Service VM upgrade does not work after correcting the URL if the upgrade deployment failed with an invalid URL.

    The upgrade might be in a failed state with the wrong URL, blocking the upgrade.

    Workaround: Create a new deployment_spec for upgrade to trigger.

Upgrade Known Issues
  • Issue 2475963: NSX-T VIBs fail to install due to insufficient space.

    NSX-T VIBs fail to install due to insufficient space in bootbank on ESXi host, returning a BootBankInstaller.pyc: ERROR. Some ESXi images provided by third-party vendors might include VIBs which are not in use and can be relatively large in size. This might result in insufficient space in bootbank/alt-bootbank when installing/upgrading any VIBs.

    Workaround: See knowledge base article 74864 NSX-T VIBs fail to install, due to insufficient space in bootbank on ESXi host.

  • Issue 2400379: Context Profile page shows unsupported APP_ID error message.

    The Context Profile page shows the following error message: "This context profile uses an unsupported APP_ID - [<APP_ID>]. Please delete this context profile manually after making sure it is not being used in any rule." This is caused by the post-upgrade presence of six deprecated APP_IDs (AD_BKUP, SKIP, AD_NSP, SAP, SUNRPC, SVN) that no longer work on the data path.

    Workaround: After ensuring that they are no longer consumed, manually delete the six APP_ID context profiles.

  • Issue 2462079: Some versions of ESXi hosts reboot during upgrade if there are stale DV filters present on the ESXi host.

    For hosts running ESXi 6.5-U2/U3 and 6.7-U1/U2, during maintenance mode upgrade to NSX-T 2.5.1, the host might reboot if stale DV filters are found to be present on the host after VMs are moved out.

    Workaround: Upgrade to ESXi 6.7 U3 or ESXi 6.5 P04 prior to upgrading to NSX-T Data Center 2.5.1 to avoid rebooting the host during the NSX-T Data Center upgrade. See knowledge base article 76607 for details.

  • Issue 2730634: On uniscale-based setups (with scale-level logical object numbers), the Networking component page shows 'Index out of sync' error after upgrading.

    On uniscale-based setups (with scale-level logical object numbers), the Networking component page shows an 'Index out of sync' error after upgrading.

    Workaround: Log in to NSX Manager with admin credentials and run the "start search resync policy" command. It will take a few minutes to load the Networking components.

  • Issue 2693195: When upgrading MP nodes to 310 stateless Transport Nodes, a temporary DP outage on Workload VMs might be experienced.

    When upgrading MP nodes to 310 stateless TNs, a temporary DP outage on Workload VMs might be experienced. This occurs on networks with over fifty transport nodes and only in the Upgrade window when MP nodes are upgraded and the proton service is restarted.

    Workaround: None.

  • Issue 2716575: NSX-V to NSX-T topology migration fails and displays a prefix list realization error message.

    An error message similar to "Realization failure, waiting for realization of [{PrefixList}]..." displays due to uncleared alarms in route map after attempting a NSX-V to NSX-T topology migration.

    Workaround: Clear all the alarms on the route map once realization has completed.

  • Issue 2717667: Environments upgraded from NSX-T 2.5.x to 3.0.x/3.1.x are unable to scale beyond 32K PODs.

    Environments upgraded from NSX-T 2.5.x to 3.0.x/3.1.x are unable to scale beyond 32K PODs. PODs beyond 32K do not move to a running state. The following error displays in /var/log/proton/nsxapi.log: "No free id found to allocate for request AllocationRequest. "

    Workaround: None.

NSX Edge Known Issues
  • Issue 2283559: https://<nsx-manager>/api/v1/routing-table and https://<nsx-manager>/api/v1/forwarding-table MP APIs return an error if the edge has 65k+ routes for RIB and 100k+ routes for FIB.

    If the edge has 65k+ routes for RIB and 100k+ routes for FIB, the request from MP to Edge takes more than 10 seconds and results in a timeout. This is a read-only API and has an impact only if you need to download the 65k+ RIB routes or 100k+ FIB routes using the API/UI.

    Workaround: There are two options to fetch the RIB/FIB.

    • These APIs support filtering options based on network prefixes or the type of route. Use these options to download only the routes of interest.
    • Use the CLI if the entire RIB/FIB table is needed; the CLI does not time out (see the sketch after this list).
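
    A hedged sketch of pulling the full tables from the NSX Edge CLI (command names per the NSX-T CLI reference; the VRF number is a placeholder):

    get logical-routers          (list logical routers and their VRF numbers on this edge)
    vrf <vrf-number>             (enter the VRF context of the router of interest)
    get route                    (full RIB for this VRF)
    get forwarding               (full FIB for this VRF)
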
  • Issue 2521230: BFD status displayed under ‘get bgp neighbor summary’ might not reflect the latest BFD session status correctly.

    BGP and BFD can set up their sessions independently. As part of ‘get bgp neighbor summary’ BGP also displays the BFD state. If the BGP is down, it will not process any BFD notifications and will continue to show the last known state. This might lead to displaying stale state for the BFD.

    Workaround: Rely on the output of ‘get bfd-sessions’ and check the ‘State’ field to get the most up-to-date BFD status.

  • Issue 2532755: Inconsistencies between CLI output and policy output for routing-table.

    The routing table downloaded from the UI has an additional route (the default route) compared to the CLI output. There is no functional impact.

    Workaround: None.

NSX Cloud Known Issues
  • Issue 2289150: PCM calls to AWS start to fail.

    If a user updates the PCG role for an AWS account on CSM from old-pcg-role to new-pcg-role, CSM updates the role for the PCG instance on AWS to new-pcg-role. However, the PCM does not know that the PCG role has been updated and as a result continues to use the old AWS clients it had created using old-pcg-role. This causes the AWS cloud inventory scan and other AWS cloud calls to fail.

    Workaround: If you encounter this issue, do not modify or delete the old PCG role for at least 6.5 hours after changing to the new role. Restarting the PCG will re-initialize all AWS clients with the new role credentials.

Security Known Issues
  • Issue 2491800: AR channel SSL certificates are not periodically checked for their validity, which could lead to using an expired/revoked certificate for an existing connection.

    The connection might be using an expired or revoked SSL certificate.

    Workaround: Restart the APH on the Manager node to trigger a reconnection.

Federation Known Issues
  • Issue 2630808: Upgrade from 3.0.0/3.0.1 to any higher releases is disruptive.

    As soon as Global Manager or Local Manager is upgraded from 3.0.0/3.0.1 to a higher release, communications between GM and LM are not possible.

    Workaround: To restore communications between LM and GM, all LM and GM need to be upgraded to a higher release.

  • Issue 2630813: SRM recovery for compute VMs will lose all the NSX tags applied to VM and Segment ports.

    If a SRM recovery test or run is initiated, the replicated compute VMs in the disaster recovery location will not have any NSX tags applied.

  • Issue 2630819: Changing LM certificates should not be done after the LM is registered on the GM.

    When Federation and PKS need to be used on the same LM, the PKS tasks to create an external VIP and change the LM certificate should be done before registering the LM on the GM. If done in the reverse order, communications between the LM and GM will not be possible after the LM certificates change, and the LM has to be registered again.

  • Issue 2601493: Concurrent config onboarding is not supported on Global Manager in order to prevent heavy processing load.

    Although parallel config onboarding executions do not interfere with each other, multiple such config onboarding executions on the GM might make the GM slow and sluggish for other operations in general.

    Workaround: Security Admin / Users must sync up maintenance windows to avoid initiating config onboarding concurrently.

  • Issue 2605420: UI shows general error message instead of specific one indicating Local Manager VIP changes.

    Global Manager to site communication is impacted.

    Workaround: Restart the Global Manager cluster, one node at a time.

  • Issue 2613113: If onboarding is in progress, and restore of Local Manager is done, the status on Global Manager does not change from IN_PROGRESS.

    UI shows IN_PROGRESS in Global Manager for Local Manager onboarding. Unable to import the configuration of the restored site.

    Workaround: Use the Local Manager API to start the onboarding of the Local Manager site, if required.

  • Issue 2638571: Deleting 5000 NAT rules sometime takes more than 1 hour.

    NAT rules are still visible in the UI but grayed out. You have to wait for their cleanup before creating NAT rules with the same name. There is no impact if a different name is used.

    Workaround: None.

  • Issue 2629422: Message shown on UI is incomplete in case system tries to onboard a site having DNS service on Tier-1 gateway in addition to LB service.

    Onboarding is blocked for a Tier-1 gateway that offers DNS/DHCP service in addition to a one-arm LB service; blocking onboarding is expected.
    The UI shows possible resolution text that references only the DHCP service, but the same resolution is applicable to the DNS service as well.

    Workaround: None.

  • Issue 2628428: Global Manager status shows success initially and then changes to IN_PROGRESS.

    In a scale setup, if there are too many sections being modified at frequent intervals, it takes time for the Global Manager to reflect the correct status of the configuration. This causes a delay in seeing the right status on the UI/API for the distributed firewall configuration.

    Workaround: None.

  • Issue 2625009: Inter-SR iBGP sessions keep flapping when intermediate routers or physical NICs have an MTU lower than or equal to that of the inter-SR port.

    This can impact inter-site connectivity in Federation topologies.

    Workaround: Keep the pNIC MTU and the intermediate routers' MTU larger than the global MTU (that is, the MTU used by the inter-SR port). Because of encapsulation, the packet size exceeds the MTU and the packets do not go through.

  • Issue 2634034: When the site role is changed for a stretched T1-LR (logical router), any traffic for that logical router is impacted for about 6-8 minutes.

    The static route takes a long time to get programmed and affects the datapath. There will be traffic loss of about 6-8 minutes when the site role is changed. This could be even longer based on the scale of the configuration.

    Workaround: None.
