VMware NSX-T Data Center 3.0.1   |  23 June 2020  |  Build 16404613

Check regularly for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics: What's New, Compatibility and System Requirements, API Deprecations and Behavior Changes, API and CLI Resources, Available Languages, Document Revision History, Resolved Issues, and Known Issues.

What's New

NSX-T Data Center 3.0.1 is a maintenance release that includes new features and bug fixes. See the "Resolved Issues" section for the list of issues resolved in this release.

The following new features and feature enhancements are available in the NSX-T Data Center 3.0.1 release.

  • Detailed memory usage of the Edge node, for both the control plane and the data plane, is available in the NSX Manager user interface (UI). Detailed CPU usage is also reported in the UI.

  • A CLI command is available to remove NSX from an ESXi host. If a host has NSX VIBs in a stale state and you are unable to uninstall NSX from it, log in to the host and run the CLI command "del nsx" for a guided, step-by-step process that removes NSX and returns the host to a clean state so you can reinstall NSX from scratch (see the example after this list).

  • Active Global Manager Clustering: Global Manager VMs can now form a cluster to provide enhanced local resilience.

  • Local Manager override workflows: If the Global Manager cannot be reached from a Local Manager, a limited set of parameters of global objects can be overridden on the Local Manager.

  • Nested groups: On Global Manager, groups can now be nested (groups within groups).

  • Firewall rules across regions: On Global Manager, firewall rules can now use a group from a region that is different from that of the firewall policy to which the rule belongs.

  • Tier-0 Active/Standby topologies: On Global Manager, Tier-0 gateways can now be configured as active/standby with primary/secondary locations.

  • Federation scale: Supports up to four locations and up to 128 hosts in total.

  • Given the scale and upgrade limits (see "Federation Known Issues" below), Federation is not intended for production deployments in NSX-T 3.0.x releases.
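
A minimal sketch of the "del nsx" host clean-up mentioned above, assuming SSH access to the affected ESXi host and that the NSX CLI on the host is entered with the nsxcli command (the removal itself is an interactive, guided process; the host name is a placeholder):

    # Log in to the affected ESXi host, start the NSX CLI, and run the guided removal.
    ssh root@<esxi-host>
    nsxcli
    del nsx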

Compatibility and System Requirements

For compatibility and system requirements information, see the NSX-T Data Center Installation Guide.

API Deprecations and Behavior Changes

  • Removal of Application Discovery - This feature has been deprecated and removed.
  • Change in "Advanced Networking and Security" UI upon Upgrading from NSX-T 2.4 and 2.5 - If you are upgrading from NSX-T Data Center 2.4 and 2.5, the menu options that were found under the "Advanced Networking & Security" UI tab are available in NSX-T Data Center 3.0 by clicking "Manager" mode.
  • Snapshots are Auto-disabled - From NSX-T Data Center 3.0.1 onwards, snapshots are automatically disabled when an appliance is deployed. You no longer need to disable snapshots manually after deployment.
  • API Deprecation Policy - VMware now publishes its API deprecation policy in the NSX API Guide to help customers who automate with NSX understand which APIs are deprecated and when they will be removed from the product.

API and CLI Resources

See code.vmware.com to use the NSX-T Data Center APIs or CLIs for automation.

The API documentation is available from the API Reference tab. The CLI documentation is available from the Documentation tab.

Available Languages

NSX-T Data Center has been localized into multiple languages: English, German, French, Japanese, Simplified Chinese, Korean, Traditional Chinese, and Spanish. Because NSX-T Data Center localization utilizes the browser language settings, ensure that your settings match the desired language.

Document Revision History

June 23, 2020. First edition.
August 24, 2020. Second edition. Added known issue 2577452.
August 28, 2020. Third edition. Added known issues 2622672, 2630808, 2630813, 2630819.
March 15, 2021. Fourth edition. Added known issue 2730634.
September 17, 2021. Fifth edition. Added known issue 2761589.

Resolved Issues

  • Fixed Issue 2513231 - The default maximum number of ARP entries per logical router is 20K.

    The Edge limits the total ARP/neighbor entries to 100K per Edge node and 20K per logical router. Once these limits are reached, the Edge cannot add more ARP/neighbor entries to the ARP cache table. ARP resolution fails and packets are dropped because of the missing ARP resolution.

  • Fixed Issue 2515006 - NSX-v to NSX-T migration rollback fails intermittently.

    During NSX-v to NSX-T migration, rollback fails and the following message displays: "Entity Edge Cluster<edge-cluster-id> can not be deleted as it is being referenced by entity(s): <logical-router-id>"

  • Fixed Issue 2515489 - After upgrading to NSX-T 3.0, the first certificate-based IPSec VPN session fails to come up and displays a "Configuration failed" error.

    Traffic loss can be seen on tunnels under one certificate-based IPSec VPN session.

  • Fixed Issue 2532343 - In a Federation deployment, if the RTEP MTU size is smaller than the VTEP MTU size, IP fragmentation occurs, causing the physical router to drop IP fragments and cross-site traffic to stop.

    When the RTEP MTU size (1500) is smaller than the VTEP MTU (1600), the tracepath tool fails to complete. A large ping (for example, ping -s 2000) also fails. A smaller RTEP MTU cannot be used.

  • Fixed Issue 2536980 - PCG upgrade fails at the "reboot" step.

    PCG upgrade fails from the CSM upgrade UI. The PCG CLI "get upgrade progress-status" shows the "reboot" task's state as SUCCESS. PCGs fail to upgrade to NSX-T 3.0 and are not operational.

  • Fixed Issue 2533617 - While creating, updating or deleting services, the API call succeeds but the service entity update realization fails.

    While creating, updating, or deleting services (NatRule, LB VIP, etc.), the API call succeeds but the service entity update is not processed because of activity submission failure in the background. The service becomes inaccessible.

  • Fixed Issue 2535234 - VM tags are reset during cross vCenter vMotion.

    vMotion from site 1 to site 2 in a Federation setup and back to site 1 within 30 minutes will lead to tags resetting to what was applied on site 1. If you are using a VM tag-based global policy, the policy will not be applied.

  • Fixed Issue 2532796 - HNSEndpoint deletion fails under the latest Windows KB update.

    Deletion of an HNSEndpoint hangs if you update Windows to its latest KB update (as of March 2020). You cannot delete the Windows container instance. This may cause a conflict when creating a new container that uses the same HNSEndpoint name.

  • Fixed Issue 2536877 - X-Forwarded-For (XFF) shows erroneous data (HTTPS traffic) with Load Balancer rules configured in the Transport phase.

    If you configure an HTTP profile with XFF (INSERT/REPLACE) together with a Load Balancer rule in the Transport phase, you may see incorrect values for XFF headers.

  • Fixed Issue 2541232 - CORFU/config disk space may reach 100% upon upgrading to NSX-T 3.0.0.

    This is only encountered if upgrading from previous versions of NSX-T with the AppDiscovery feature enabled. The /config partition reaches 100% and thereafter the NSX Management Cluster will be unstable.

  • Fixed Issue 2538557 - Spoofguard on ARP packets may not work when ARP Snooping is enabled in the IP Discovery profile.

    There is a possibility that the ARP cache entries of a guest VM could be incorrect even when spoofguard and ARP snooping are enabled in the IP Discovery profile. The spoofguard functionality will not work for ARP packets.

  • Fixed Issue 2550327 - The Draft feature is not supported on Global Manager, but Draft APIs are available.

    The Draft feature is disabled in the Global Manager UI. Draft publish may not work as expected and you may observe inconsistencies between the Global Manager and Local Manager firewall configurations.

  • Fixed Issue 2546509 - ESXi 7.0 NSX kernel module is not installed after ESXi upgrade from vSphere 6.7 to vSphere 7.0.

    Transport Node status goes down after the ESXi upgrade from 6.7 to 7.0.

  • Fixed Issue 2543239 - NAT traffic flow is not subjected to firewall processing for specific NAT rules after upgrading to NSX-T Data Center 3.0.0.

    This issue occurs as Firewall Parameter "None" was deprecated in NSX-T Data Center 3.0.0. Any NAT rules configured with the Firewall parameter as "None" in the user interface, and any NAT rules configured through the API without the "Firewall_Match" parameter, are not subjected to firewall processing post-upgrade, even though the necessary firewall rules are configured at the Gateway firewall.

  • Fixed Issue 2526083 - Some NSX services might not function properly when the NSX Manager becomes disconnected from the NSX Intelligence appliance.

    In the System > Appliances page of the NSX Manager UI, the NSX Intelligence appliance card displays an error or shows that the appliance is stuck in the data-fetching state.

Known Issues

The known issues are grouped as follows.

General Known Issues
  • Issue 2320529 - "Storage not accessible for service deployment" error thrown after adding third-party VMs for newly added datastores.

    "Storage not accessible for service deployment" error thrown after adding third-party VMs for newly added datastores even though the storage is accessible from all hosts in the cluster. This error state persists for up to thirty minutes.

    Retry after thirty minutes. As an alternative, make the following API call to update the cache entry of datastore:

    https://<nsx-manager>/api/v1/fabric/compute-collections/<CC Ext ID>/storage-resources?uniform_cluster_access=true&source=realtime

    where   <nsx-manager> is the IP address of the NSX manager where the service deployment API has failed, and < CC Ext ID> is the identifier in NSX of the cluster where the deployment is being attempted.
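
    For example, the call can be issued with curl (a minimal sketch assuming a GET request with basic authentication; substitute the placeholders for your environment):

        curl -k -u admin "https://<nsx-manager>/api/v1/fabric/compute-collections/<CC Ext ID>/storage-resources?uniform_cluster_access=true&source=realtime"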

  • Issue 2328126 - Bare Metal issue: Linux OS bond interface when used in NSX uplink profile returns error.

    When you create a bond interface in the Linux OS and then use this interface in the NSX uplink profile, you see this error message: "Transport Node creation may fail." This issue occurs because VMware does not support Linux OS bonding. However, VMware does support Open vSwitch (OVS) bonding for Bare Metal Server Transport Nodes.

    Workaround: If you encounter this issue, see Knowledge Base article 67835, "Bare Metal Server supports OVS bonding for Transport Node configuration in NSX-T."

  • Issue 2390624 - Anti-affinity rule prevents service VM from vMotion when host is in maintenance mode.

    If a service VM is deployed in a cluster with exactly two hosts, the HA pair with anti-affinity rule will prevent the VMs from vMotioning to the other host during any maintenance mode tasks. This may prevent the host from entering Maintenance Mode automatically.

    Workaround: Power off the service VM on the host before the Maintenance Mode task is started on vCenter.

  • Issue 2389993 - Route map removed after redistribution rule is modified using the Policy page or API.

    If a route map was added to a redistribution rule using the management plane UI/API, it will be removed if you modify the same redistribution rule from the Simplified (Policy) UI/API.

    Workaround: You can restore the route map by returning to the management plane interface or API and re-adding it to the same rule. If you wish to include a route map in a redistribution rule, it is recommended that you always use the management plane interface or API to create and modify it.

  • Issue 2329273 - No connectivity between VLANs bridged to the same segment by the same edge node.

    Bridging a segment twice on the same edge node is not supported. However, it is possible to bridge two VLANs to the same segment on two different edge nodes.

    Workaround: None 

  • Issue 2355113 - Unable to install NSX Tools on RedHat and CentOS Workload VMs with accelerated networking enabled in Microsoft Azure.

    In Microsoft Azure when accelerated networking is enabled on RedHat (7.4 or later) or CentOS (7.4 or later) based OS and with NSX Agent installed, the ethernet interface does not obtain an IP address.

    Workaround: After booting up RedHat or CentOS based VM in Microsoft Azure, install the latest Linux Integration Services driver available at https://www.microsoft.com/en-us/download/details.aspx?id=55106 before installing NSX tools.

  • Issue 2370555 - User can delete certain objects in the Advanced interface, but deletions are not reflected in the Simplified interface.

    Specifically, groups added as part of a distributed firewall exclude list can be deleted in the Advanced interface Distributed Firewall Exclusion List settings. This leads to inconsistent behavior in the interface.

    Workaround: Use the following procedure to resolve this issue:

    1. Add an object to an exclusion list in the Simplified interface.
    2. Verify that it appears in the Distributed Firewall exclusion list in the Advanced interface.
    3. Delete the object from the Distributed Firewall exclusion list in the Advanced interface.
    4. Return to the Simplified interface, add a second object to the exclusion list, and apply it.
    5. Verify that the new object appears in the Advanced interface.
  • Issue 2520803 - Encoding format for Manual Route Distinguisher and Route Target configuration in EVPN deployments.

    You can currently configure a manual Route Distinguisher in either Type-0 or Type-1 encoding. However, using the Type-1 encoding scheme for configuring a manual Route Distinguisher in EVPN deployments is highly recommended. Also, only Type-0 encoding is allowed for manual Route Target configuration.

    Workaround: Configure only Type-1 encoding for Route Distinguisher.

  • Issue 2490064 - Attempting to disable VMware Identity Manager with "External LB" toggled on does not work.

    After enabling VMware Identity Manager integration on NSX with "External LB", if you attempt to then disable integration by switching "External LB" off, after about a minute, the initial configuration will reappear and overwrite local changes.

    Workaround: When attempting to disable vIDM, do not toggle the External LB flag off; only toggle off vIDM Integration. This will cause that config to be saved to the database and synced to the other nodes.

  • Issue 2516481 - One UA node stops accepting any new API calls, with a "server is overloaded" message.

    The UA node stops accepting new API calls and returns a "server is overloaded" message. Around 200 connections are stuck in the CLOSE_WAIT state and are never closed, so new API calls are rejected.

    Workaround: Restart the proton service using the following command: 

    service proton restart


  • Issue 2529228 - After backup and restore, certificates in the system get into an inconsistent state and the certificates that the customer had set up at the time of backup are gone.

    Reverse proxy and APH start using different certificates than what they used in the backed up cluster.

    Workaround:

    1. Update Tomcat certs on all three new nodes and bring them back to the original state (same as backed-up cluster) using API POST /api/v1/node/services/http?action=apply_certificate&certificate_id=<cert-id>. The certificate ID corresponds to the ID of the tomcat certificate that was in use on the original setup (backed-up cluster).
    2. Apply cluster cert on primary node using API POST /api/v1/cluster/api-certificate?action=set_cluster_certificate&certificate_id=<cert-id> . The certificate ID corresponds to the ID of the cluster certificate that was in use on the original setup (backed-up cluster).
    3. Update APH certs on all three new nodes and bring them back to the original state (same as backed-up cluster) using API POST /api/v1/trust-management/certificates?action=set_appliance_proxy_certificate_for_inter_site_communication
    4. Inspect the output of GET /api/v1/trust-management/certificates (especially the used_by section) and release all certificates tied to old node UUIDs (the UUIDs of nodes in the cluster where the backup was taken) by running the following on the root command line: "curl -i -k -X POST -H 'Content-type: application/json' -H 'X-NSX-Username:admin' -H 'X-NSX-Groups:superuser' -T data.json "http://127.0.0.1:7440/nsxapi/api/v1/trust-management/certificates/cert-id?action=release". This API is local-only and must be executed as root on the command line of any one node in the cluster. Repeat this step for every certificate that is tied to an old node UUID.
    5. Delete all the unused certificates, then verify "/api/v1/trust-management/certificates" and cluster stability.
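     As an illustration of steps 1 and 2, the node and cluster certificate APIs can be invoked with curl (a minimal sketch assuming basic authentication; <node-ip> and <cert-id> are placeholders):

         # Step 1, on each of the three Manager nodes:
         curl -k -u admin -X POST "https://<node-ip>/api/v1/node/services/http?action=apply_certificate&certificate_id=<cert-id>"
         # Step 2, on the primary node only:
         curl -k -u admin -X POST "https://<node-ip>/api/v1/cluster/api-certificate?action=set_cluster_certificate&certificate_id=<cert-id>"
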
  • Issue 2535793 - The Central Node Config (CNC) disabled flag is not respected on a Manager node.

    NTP, syslog and SNMP config changes made locally on a Manager node will be overwritten when the user makes changes to CNC Profile (see System->Fabric->Profiles in UI) even if CNC is locally disabled (see CLI set node central-config disabled) on that Manager node. However, local NTP, syslog and SNMP config will persist as long as the CNC Profile is unchanged.

    Workaround:

    There are two options.

    • Option 1 is to use CNC without making local changes (i.e., get node central-config is Enabled).
    • Option 2 is to clear the CNC Profile and configure each Manager separately for NTP, syslog and SNMP settings. To transition from option 1 to option 2, use the following workaround.
    1. Delete all configuration from CNC Profile.
    2. After some time, verify that the configuration has been deleted from all Manager and Edge nodes (using Node API or CLI on each node). Also verify that SNMP configuration has been deleted from all KVM HV nodes.
    3. Configure NTP, syslog and SNMP on each Manager and Edge node individually (using NSX Node API or CLI).
    4. Configure VMware SNMP Agent on each KVM HV node individually using VMware SNMP Agent configuration command.
  • Issue 2537989 - Clearing VIP (Virtual IP) does not clear vIDM integration on all nodes.

    If VMware Identity Manager is configured on a cluster with a Virtual IP, disabling the Virtual IP does not result in the VMware Identity Manager integration being cleared throughout the cluster. You will have to manually fix vIDM integration on each individual node if the VIP is disabled.

    Workaround: Go to each node individually to manually fix the vIDM configuration on each.

  • Issue 2538956 - The DHCP Profile shows a "NOT SET" message and the Apply button is disabled when configuring Gateway DHCP on a Segment.

    When attempting to configure Gateway DHCP on Segment when there is no DHCP configured on the connected Gateway, the DHCP Profile cannot be applied because there is no valid DHCP to be saved.

    Workaround: None.

  • Issue 2525205 - Management plane cluster operations fail under certain circumstances.

    When attempting to join Manager N2 to Manager N1 by issuing a "join" command on Manager N2, the join command fails. You are unable to form a Management plane cluster, which might impact availability.

    Workaround:

    1. To retain Manager N1 in the cluster, issue a "deactivate" CLI command on Manager N1. This will remove all other Managers from the cluster, keeping Manager N1 as the sole member of the cluster.
    2. Ensure that the non-configuration Corfu server is up and running on Manager N1 by issuing the "systemctl start corfu-nonconfig-server" command.
    3. Join other new Managers to the cluster by issuing "join" commands on them.
  • Issue 2526769 - Restore fails on multi-node cluster.

    When starting a restore on a multi-node cluster, restore fails and you will have to redeploy the appliance.

    Workaround: Deploy a new setup (one node cluster) and start the restore.

  • Issue 2538041 - Groups containing Manager Mode IP Sets can be created from Global Manager.

    Global Manager allows you to create Groups that contain IP Sets that were created in Manager Mode. The configuration is accepted but the groups do not get realized on Local Managers.

    Workaround: None.

  • Issue 2463947 - When preemptive mode HA is configured, and IPSec HA is enabled, upon double failover, packet drops over VPN are seen.

    Traffic over VPN will drop on peer side. IPSec Replay errors will increase.

    Workaround: Wait for the next IPSec Rekey. Or disable and enable that particular IPSec session.

  • Issue 2523212 - The nsx-policy-manager becomes unresponsive and restarts.

    API calls to nsx-policy-manager will start failing, with service being unavailable. You will not be able to access policy manager until it restarts and is available.

    Workaround: Invoke API with at most 2000 objects.

  • Issue 2482672 - Isolated overlay segment stretched over L2VPN not able to reach default gateway on peer site.

    An L2VPN tunnel is configured between site 1 and site 2 such that a T0/T1 overlay segment is stretched over L2VPN from site 1 and an isolated overlay segment is stretched over L2VPN from site 2. There is also another T0/T1 overlay segment on site 2 in the same transport zone, and there is an instance of DR on the ESXi host where the workload VMs connected to the isolated segment reside.

    When a VM on isolated segment (site 2) tries to reach the default gateway (DR downlink which is on site 1), unicast packets to default gateway will be received by site 2 ESXi host, and not forwarded to the remote site. L3 connectivity to default gateway on peer site fails.

    Workaround: Connect the isolated overlay segment on site 2 to an LR and give the same gateway IP address as that of site 1.

  • Issue 2521071 - For a Segment created in Global Manager, if it has a BridgeProfile configuration, then the Layer2 bridging configuration is not applied to individual NSX sites.

    The consolidated status of the Segment will remain at "ERROR". This is due to a failure to create the bridge endpoint at a given NSX site. You will not be able to successfully configure a BridgeProfile on Segments created via Global Manager.

    Workaround: Create a Segment at the NSX site and configure it with bridge profile.

  • Issue 2527671 - When the DHCP server is not configured, retrieving DHCP statistics/status on a Tier0/Tier1 gateway or segment displays an error message indicating realization is not successful.

    There is no functional impact. The error message is incorrect and should report that the DHCP server is not configured.

    Workaround: None.

  • Issue 2532127 - An LDAP user can't log in to NSX if the user's Active Directory entry does not contain the UPN (userPrincipalName) attribute and contains only the samAccountName attribute.

    User authentication fails and the user is unable to log in to the NSX user interface.

    Workaround: None.

  • Issue 2540733 - Service Instance is not created after re-adding the same host in the cluster.

    Service Instance in NSX is not created after re-adding the same host in the cluster, even though the service VM is present on the host. The deployment status will be shown as successful, but protection on the given host will be down.

    Workaround: Delete the service VM from the host. This will raise an issue that is visible in the NSX user interface. When you resolve that issue, a new SVM and a corresponding service instance are created in NSX.

  • Issue 2530822 - Registration of vCenter with NSX manager fails even though NSX-T extension is created on vCenter.

    While registering vCenter as a compute manager in NSX, the compute manager registration status remains "Not Registered" in NSX-T even though the "com.vmware.nsx.management.nsxt" extension is created on vCenter. Operations on vCenter, such as the automatic installation of Edge nodes, cannot be performed using the vCenter Server compute manager.

    Workaround:

    1. Delete compute manager from NSX-T manager.
    2. Delete the "com.vmware.nsx.management.nsxt" extension from vCenter using the vCenter Managed Object Browser.
  • Issue 2482580 - IDFW/IDS configuration is not updated when an IDFW/IDS cluster is deleted from vCenter.

    When a cluster with IDFW/IDS enabled is deleted from vCenter, the NSX management plane is not notified of the necessary updates. This results in inaccurate count of IDFW/IDS enabled clusters. There is no functional impact. Only the count of the enabled clusters is wrong.

    Workaround: None.

  • Issue 2533365 - Moving a host from an IDFW enabled cluster to a new cluster (which was never enabled/disabled for IDFW before) will not disable IDFW on the moved host.

    If hosts are moved from an IDFW-enabled cluster to a cluster that was never enabled/disabled for IDFW before, IDFW remains enabled on the moved hosts. This results in unintended IDFW rule application on the moved hosts.

    Workaround: Enable IDFW on the new cluster and then disable it. After this, moving hosts between these clusters or subsequently enabling/disabling IDFW on these clusters works as expected.

  • Issue 2534855 - Route maps and redistribution rules of Tier-0 gateways created on the simplified UI or policy API will replace the route maps and redistribution rules created on the advanced UI (or MP API).

    During upgrades, any existing route maps and rules that were created on the simplified UI (or policy API) will replace the configurations that were done directly on the advanced UI (or MP API).

    Workaround: If redistribution rules or route maps created on the Advanced UI (MP API) are lost after the upgrade, re-create all rules from either the Advanced UI (MP) or the Simplified UI (Policy). Always use either Policy or MP for redistribution rules, not both at the same time, because redistribution is fully featured in NSX-T 3.0.

  • Issue 2535355 - Session timer may not take effect after upgrading to NSX-T 3.0 under certain circumstances.

    Session timer setting is not taking effect. The connection session (e.g., tcp established, tcp fin wait) will use its system default session timer instead of the custom session timer. This may cause the connection (tcp/udp/icmp) session to be established longer or shorter than expected.

    Workaround: None.

  • Issue 2534933 - Certificates that have LDAP based CDPs (CRL Distribution Point) fail to apply as tomcat/cluster certs.

    You can't use CA-signed certificates that have LDAP CDPs as the cluster/tomcat certificate.

    Workaround: See VMware knowledge base article 78794.

  • Issue 2499819 - Maintenance-based NSX for vSphere to NSX-T Data Center host migration for vCenter 6.5 or 6.7 might fail due to vMotion error.

    This error message is shown on the host migration page:
    Pre-migrate stage failed during host migration [Reason: [Vmotion] Can not proceed with migration: Max attempt done to vmotion vm b'3-vm_Client_VM_Ubuntu_1404-shared-1410'].

    Workaround: Retry host migration.

  • Issue 2518183 - For Manager UI screens, the Alarms column does not always show the latest alarm count.

    Recently generated alarms are not reflected on Manager entity screens.

    Workaround:

    1. Refresh the Manager entity screen to see the correct alarm count.
    2. Alternatively, view the missing alarm details on the alarm dashboard page.
  • Issue 2543353 - The NSX T0 edge calculates an incorrect UDP checksum after ESP encapsulation for IPsec tunneled traffic.

    Traffic is dropped due to bad checksum in UDP packet.

    Workaround: None.

  • Issue 2561740 - PAS Egress DFW rule not applied due to effective members not updated in NSGroup.

    Due to a ConcurrentUpdateException, a LogicalPort creation was not processed, causing the corresponding NSGroup update to fail.

    Workaround: None.

  • Issue 2556730 - When configuring an LDAP identity source, authentication via LDAP Group -> NSX Role Mapping does not work if the LDAP domain name is configured using mixed case.

    Users who attempt to log in are denied access to NSX.

    Workaround: Use all lowercase characters in the domain name field of the LDAP Identity Source configuration.

  • Issue 2557287 - Transport Node Profile (TNP) updates made after a backup are not restored.

    TNP updates made after the backup are not present on a restored appliance.

    Workaround: Take a backup after any updates to TNP.

  • Issue 2577452: Replacing certificates on the Global Manager disconnects locations added to this Global Manager.

    When you replace a reverse proxy or appliance proxy hub (APH) certificate on the Global Manager, you lose connection with locations added to this Global Manager because REST API and NSX RPC connectivity is broken.

    Workaround:

    • If you use the following APIs to change certificates on a Local Manager or Global Manager, you must make further configuration changes to avoid this issue:
      • Change node API certificate: POST https://<nsx-manager>/api/v1/node/services/http?action=apply_certificate&certificate_id=<certificate-id>
      • Change cluster API certificate: POST https://<nsx-manager>/api/v1/cluster/api-certificate?action=set_cluster_certificate&certificate_id=<certificate-id>
      • Change appliance proxy certificate: POST https://<nsx-manager>/api/v1/trust-management/certificates?action=set_appliance_proxy_certificate_for_inter_site_communication
    • If you change the API certificate on a Local Manager node or cluster, you must add this location again from Global Manager.
    • If you change the API certificate on a Global Manager node or cluster, you must add all previously connected locations again from Global Manager.
    • If you change the appliance proxy certificate on a Local Manager, you must restart the appliance proxy service on all nodes in the cluster with "restart service applianceproxy" and add the location again from Global Manager.

    See Add a Location in the NSX-T Data Center Installation Guide. 

  • Issue 2730634: After a uniscale upgrade, the networking component page shows an "Index out of sync" error.

    After a uniscale upgrade, the networking component page shows an "Index out of sync" error.

    Workaround: Log in to NSX Manager with admin credentials and run the "start search resync policy" command. It will take a few minutes to load the networking components.

  • Issue 2761589: Default layer 3 rule configuration changes from DENY_ALL to ALLOW_ALL on Management Plane after upgrading from NSX-T 2.x to NSX-T 3.x.

    This issue occurs only when rules are not configured via Policy, and the default layer 3 rule on the Management Plane has the DROP action. After upgrade, the default layer 3 rule configuration changes from DENY_ALL to ALLOW_ALL on Management Plane.

    Workaround: Set the action of the default layer 3 rule to DROP from the Policy UI after the upgrade.

Installation Known Issues
  • Issue 2522909 - Service VM upgrade does not work after correcting the URL if the upgrade deployment failed with an invalid URL.

    The upgrade remains in a failed state with the wrong URL, blocking the upgrade.

    Workaround: Create a new deployment_spec to trigger the upgrade.

  • Issue 2530110 - Cluster status is degraded after an upgrade to NSX-T Data Center 3.0.0 or a restart of an NSX Manager node.

    The MONITORING group is degraded because the Monitoring application on the node that was restarted stays DOWN. Restore can fail. Alarms from the Manager on which the Monitoring app is DOWN might not show up.

    Workaround: Restart the affected NSX-T Manager node on which the Monitoring app is DOWN.

Upgrade Known Issues
  • Issue 2475963 - NSX-T VIBs fail to install due to insufficient space.

    NSX-T VIBs fail to install due to insufficient space in bootbank on ESXi host, returning a BootBankInstaller.pyc: ERROR. Some ESXi images provided by third-party vendors may include VIBs which are not in use and can be relatively large in size. This can result in insufficient space in bootbank/alt-bootbank when installing/upgrading any VIBs.

    Workaround: See Knowledge Base article 74864, "NSX-T VIBs fail to install, due to insufficient space in bootbank on ESXi host."

  • Issue 2400379 - Context Profile page shows unsupported APP_ID error message.

    The Context Profile page shows the following error message: "This context profile uses an unsupported APP_ID - [<APP_ID>]. Please delete this context profile manually after making sure it is not being used in any rule." This is caused by the post-upgrade presence of six deprecated APP_IDs (AD_BKUP, SKIP, AD_NSP, SAP, SUNRPC, SVN) that no longer work on the data path.

    Workaround: After ensuring that they are no longer consumed, manually delete the six APP_ID context profiles.

  • Issue 2462079 - Some versions of ESXi hosts reboot during upgrade if there are stale DV filters present on the ESXi host.

    For hosts running ESXi 6.5-U2/U3 and/or 6.7-U1/U2, during maintenance mode upgrade to NSX-T 2.5.1, the host may reboot if stale DV filters are found to be present on the host after VMs are moved out.

    Workaround: Upgrade to ESXi 6.7 U3 or ESXi 6.5 P04 prior to upgrading to NSX-T Data Center 2.5.1 if you want to avoid rebooting the host during the NSX-T Data Center upgrade. See Knowledge Base article 76607 for details.

NSX Edge Known Issues
  • Issue 2283559 - https://<nsx-manager>/api/v1/routing-table and https://<nsx-manager>/api/v1/forwarding-table MP APIs return an error if the edge has 65k+ routes for RIB and 100k+ routes for FIB.

    If the edge has 65K+ routes for RIB and 100K+ routes for FIB, the request from the MP to the Edge takes more than 10 seconds and results in a timeout. This is a read-only API and has an impact only if you need to download the 65K+ RIB routes or 100K+ FIB routes using the API/UI.

    Workaround: There are two options to fetch the RIB/FIB.

    • These APIs support filtering options based on network prefixes or type of route. Use these options to download the routes of interest.
    • Use the CLI if the entire RIB/FIB table is needed; the CLI does not time out.
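
    For example, the routing-table endpoint shown above can be queried with curl (a minimal sketch assuming basic authentication; append the filter query parameters described in the NSX-T API Reference to restrict the output to specific network prefixes or route types):

        curl -k -u admin "https://<nsx-manager>/api/v1/routing-table"
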
  • Issue 2521230 - BFD status displayed under ‘get bgp neighbor summary’ may not reflect the latest BFD session status correctly.

    BGP and BFD can set up their sessions independently. As part of ‘get bgp neighbor summary’ BGP also displays the BFD state. If the BGP is down, it will not process any BFD notifications and will continue to show the last known state. This could lead to displaying stale state for the BFD.

    Workaround: Rely on the output of ‘get bfd-sessions’ and check the ‘State’ field to get the most up-to-date BFD status.

  • Issue 2532755 - Inconsistencies between CLI output and policy output for routing-table.

    The routing table downloaded from the UI has an extra route compared to the CLI output: an additional route (the default route) is listed in the output downloaded from Policy. There is no functional impact.

    Workaround: None.

NSX Cloud Known Issues
  • Issue 2289150 - PCM calls to AWS start to fail.

    If a user updates the PCG role for an AWS account on CSM from old-pcg-role to new-pcg-role, CSM updates the role for the PCG instance on AWS to new-pcg-role. However, the PCM does not know that the PCG role has been updated and as a result continues to use the old AWS clients it had created using old-pcg-role. This causes the AWS cloud inventory scan and other AWS cloud calls to fail.

    Workaround: If you encounter this issue, do not modify or delete the old PCG role for at least 6.5 hours after changing to the new role. Restarting the PCG will reinitialize all AWS clients with the new role credentials.

Security Known Issues
  • Issue 2491800 - AR channel SSL certificates are not periodically checked for their validity, which could lead to using an expired/revoked certificate for an existing connection.

    The connection would be using an expired/revoked SSL certificate.

    Workaround: Restart the APH on the Manager node to trigger a reconnection.

Federation Known Issues
  • Issue 2622672: Bare Metal Edge nodes cannot be used in Federation.

    Bare Metal Edge nodes cannot be configured for inter-location communications (RTEP).

  • Issue 2630808: Upgrade from 3.0.0/3.0.1 to any higher releases is disruptive.

    As soon as Global Manager or Local Manager is upgraded from 3.0.0/3.0.1 to a higher release, communications between GM and LM are not possible.

    Workaround: To restore communications between LM and GM, all LMs and the GM need to be upgraded to the higher release.

  • Issue 2630813: SRM recovery or cold vMotion of compute VMs loses all the NSX tags applied to VMs and Segment ports.

    If an SRM recovery test or run is initiated, the replicated compute VMs in the disaster recovery location will not have any NSX tags applied. Similarly, if VMs are cold vMotioned to another location managed by another LM, the VMs will not have any NSX tags applied at the new location.

  • Issue 2630819: LM certificates should not be changed after the LM is registered on the GM.

    When Federation and PKS need to be used on the same LM, the PKS tasks to create an external VIP and change the LM certificate should be done before registering the LM on the GM. If done in the reverse order, communications between LM and GM will not be possible after the LM certificates are changed, and the LM has to be registered again.
