VMware NSX-T Data Center 3.0.2 | 17 September 2020 | Build 16887200
Check regularly for additions and updates to these release notes.
What's in the Release Notes
The release notes cover the following topics:
- What's New
- Compatibility and System Requirements
- General Behavior Changes
- Available Languages
- API and CLI Resources
- Revision History
- Resolved Issues
- Known Issues
What's New
NSX-T Data Center 3.0.2 is a maintenance release that includes new features and bug fixes. See the "Resolved Issues" section for the list of issues resolved in this release.
The following new features and feature enhancements are available in the NSX-T Data Center 3.0.2 release.
- Federation enhancement
- Global Manager auto deployment
- Onboarding NSX Local Manager objects to Global Manager
- Layer 4 DFW for Oracle Enterprise Linux Physical Servers
- API support to upgrade N-VDS to VDS
- Basic constraint check for end-entity certificate according to RFC 5280
- Ability to change the MAC address of Distributed Router in an existing Transport Zone
Given the scale and upgrade limits (see "Federation Known Issues" below), Federation is not intended for production deployments in NSX-T 3.0.x releases.
Compatibility and System Requirements
For compatibility and system requirements information, see the NSX-T Data Center Installation Guide.
API Deprecations and Behavior Changes
- Removal of Application Discovery - This feature has been deprecated and removed.
- Change in "Advanced Networking and Security" UI upon Upgrading from NSX-T 2.4 and 2.5 - If you are upgrading from NSX-T Data Center 2.4 or 2.5, the menu options that were found under the "Advanced Networking & Security" UI tab are available in NSX-T Data Center 3.0 by clicking "Manager" mode.
- Snapshots are Auto-disabled: From NSX-T Data Center 3.0.1 onwards, snapshots are auto-disabled when an appliance is deployed. You no longer need to disable them manually.
- API Deprecation Policy - VMware now publishes its API deprecation policy in the NSX API Guide to help customers who automate with NSX understand which APIs are considered deprecated and when they will be removed from the product.
API and CLI Resources
See code.vmware.com to use the NSX-T Data Center APIs or CLIs for automation.
The API documentation is available from the API Reference tab. The CLI documentation is available from the Documentation tab.
Available Languages
NSX-T Data Center has been localized into multiple languages: English, German, French, Japanese, Simplified Chinese, Korean, Traditional Chinese, and Spanish. Because NSX-T Data Center localization uses the browser language settings, ensure that your settings match the desired language.
Document Revision History
September 17, 2020. First edition.
September 25, 2020. Second edition. Added resolved issues 2561740, 2577452, 2622672. Added known issues 2540352, 2586606, 2588072, 2601493, 2605420, 2613113, 2627439, 2629422, 2638571.
October 20, 2020. Third edition. Added resolved issue 2605659. Added known issues 2628428, 2625009.
Resolved Issues
- Fixed Issue 2567865: nginx core dumps and VIP stops responding.
Some client transactions fail.
- Fixed Issue 2529228: After backup and restore, certificates in the system get into an inconsistent state and the certificates that the customer had set up at the time of backup are gone.
Reverse proxy and APH start using different certificates than what they used in the backed up cluster.
- Fixed Issue 2535793: The Central Node Config (CNC) disabled flag is not respected on a Manager node.
NTP, syslog and SNMP config changes made locally on a Manager node will be overwritten when the user makes changes to the CNC Profile (see System > Fabric > Profiles in the UI), even if CNC is locally disabled (see the CLI command "set node central-config disabled") on that Manager node. However, local NTP, syslog and SNMP config will persist as long as the CNC Profile is unchanged.
- Fixed Issue 2530110: Cluster status is degraded after upgrade to NSX-T Data Center 3.0.0 or a restart of a NSX Manager node.
The MONITORING group is degraded as the Monitoring application on the node that was restarted stays DOWN. Restore can fail. Alarms from the Manager on which Monitoring app is DOWN might not show up.
- Fixed Issue 2482672: Isolated overlay segment stretched over L2VPN not able to reach default gateway on peer site.
L2VPN tunnel is configured between site 1 and site 2 such that a T0/T1 overlay segment is stretched over L2VPN from site 1 and an isolated overlay segment is stretched over L2VPN from site 2. Also there is another T0/T1 overlay segment on site 2 in same transport zone and there is an instance of DR on the ESXi host where the workload VMs connected to isolated segment reside.
When a VM on isolated segment (site 2) tries to reach the default gateway (DR downlink which is on site 1), unicast packets to default gateway will be received by site 2 ESXi host, and not forwarded to the remote site. L3 connectivity to default gateway on peer site fails.
- Fixed Issue 2561740: PAS Egress DFW rule not applied due to effective members not updated in NSGroup.
Due to ConcurrentUpdateException a LogicalPort creation was not processed causing failure in updating the corresponding NSGroup.
- Fixed Issue 2577452: Replacing certificates on the Global Manager disconnects locations added to this Global Manager.
When you replace a reverse proxy or appliance proxy hub (APH) certificate on the Global Manager, you lose connection with locations added to this Global Manager because REST API and NSX RPC connectivity is broken.
- Fixed Issue 2622672: Bare Metal Edge nodes cannot be used in Federation.
Bare Metal Edge nodes cannot be configured for inter-location communications (RTEP).
- Fixed Issue 2569691: Ping between external network and logical switch/segment does not work in specific cases.
Consider the following configuration:
- Create an uplink with network x.x.x.x.
- Create a default route with nexthop x.x.x.y.
- Update the connected IP for the uplink to x.x.x.y.
This is a misconfiguration and causes pings to fail from the external network to the logical switch or segment.
- Fixed Issue 2607651: NSX Manager does not reflect users from vIDM if First Name Attribute is missing.
If a vIDM user is created in AD with no First Name / Last Name / Email ID Attribute, the user is not reflected in NSX Manager.
- Fixed Issue 2577028: Host Preparation might fail.
Host prep might fail due to config hash mismatch leading to discovery loop.
- Fixed Issue 2572394 / 2574635: Unable to take backup when using SFTP server, where "keyboard-interactive" authentication is enabled but "password" authentication is disabled.
User is unable to use SFTP servers, where "keyboard-interactive" authentication is enabled, but "password" authentication is disabled.
- Fixed Issue 2555333: "nsxuser" fails to get created during host prep.
During the host prep lifecycle (install/uninstall/upgrade) "nsxuser" is created internally in ESXi hosts managed by vCenter Server for managing NSX VIBs. This user creation fails intermittently because of the ESXi password requirements.
- Fixed Issue 2519300: NSX Manager upgrade fails with no clear errors.
Upgrading NSX Manager might fail, with the Upgrade Coordinator displaying the message "This page is only available on the NSX Manager where Upgrade Coordinator is running" or no clear errors at all.
- Fixed Issue 2484006: Protected VMs lose network connectivity.
SRM Protected VMs in an NSX-T Data Center environment lose network connectivity despite being configured on a different logical network, when placeholder VMs in the secondary site are powered on. This issue occurs because the same VIF UUID is applied to both the protected and the placeholder VMs.
- Fixed Issue 2575649: Named teaming policy not supported for remote tunnel ports configured on edge nodes.
Remote tunnel configuration fails if a named teaming policy is configured, breaking cross-site forwarding.
- Fixed Issue 2605659: Packets are not forwarded to the pool members on the correct port when the NSGroup for the server pool is not statically configured, the rule action is "select pool" in the forwarding phase, and there is no default pool for the virtual server. Matched packets after the first non-matched packet are forwarded to the backend server on port 80.
Packets are sent to the incorrect port.
Known Issues
The known issues are grouped as follows.
- General Known Issues
- Installation Known Issues
- Upgrade Known Issues
- NSX Edge Known Issues
- NSX Cloud Known Issues
- Security Known Issues
- Federation Known Issues
- Issue 2320529: "Storage not accessible for service deployment" error thrown after adding third-party VMs for newly added datastores.
"Storage not accessible for service deployment" error thrown after adding third-party VMs for newly added datastores even though the storage is accessible from all hosts in the cluster. This error state persists for up to thirty minutes.
Retry after thirty minutes. As an alternative, make the following API call to update the cache entry of datastore:
https://<nsx-manager>/api/v1/fabric/compute-collections/<CC Ext ID>/storage-resources?uniform_cluster_access=true&source=realtime
<nsx-manager> is the IP address of the NSX Manager where the service deployment API has failed, and <CC Ext ID> is the identifier in NSX of the cluster where the deployment is being attempted.
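As a sketch only (not part of the release notes), the cache-refresh request above can be assembled like this; the manager address and compute-collection external ID below are placeholders you must replace with values from your environment:

```python
# Illustrative sketch: build the datastore cache-refresh URL described in
# the workaround. "192.0.2.10" and "domain-c8" are placeholder values.
import urllib.parse

def storage_resources_url(nsx_manager: str, cc_ext_id: str) -> str:
    """Return the storage-resources cache-refresh URL for a compute collection."""
    query = urllib.parse.urlencode({
        "uniform_cluster_access": "true",
        "source": "realtime",
    })
    return (f"https://{nsx_manager}/api/v1/fabric/compute-collections/"
            f"{cc_ext_id}/storage-resources?{query}")

print(storage_resources_url("192.0.2.10", "domain-c8"))
```

Issue the resulting GET request with the NSX admin credentials you normally use for the fabric APIs.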
- Issue 2328126: Bare Metal issue: Linux OS bond interface when used in NSX uplink profile returns error.
When you create a bond interface in the Linux OS and then use this interface in the NSX uplink profile, you see this error message: "Transport Node creation may fail." This issue occurs because VMware does not support Linux OS bonding. However, VMware does support Open vSwitch (OVS) bonding for Bare Metal Server Transport Nodes.
Workaround: If you encounter this issue, see Knowledge Base article 67835 Bare Metal Server supports OVS bonding for Transport Node configuration in NSX-T.
- Issue 2390624: Anti-affinity rule prevents service VM from vMotion when host is in maintenance mode.
If a service VM is deployed in a cluster with exactly two hosts, the HA pair with anti-affinity rule will prevent the VMs from vMotioning to the other host during any maintenance mode tasks. This may prevent the host from entering Maintenance Mode automatically.
Workaround: Power off the service VM on the host before the Maintenance Mode task is started on vCenter.
- Issue 2389993: Route map removed after redistribution rule is modified using the Policy page or API.
If there is a route map added using the management plane UI/API in a redistribution rule, it will get removed if you modify the same redistribution rule from the Simplified (Policy) UI/API.
Workaround: You can restore the route map by returning to the management plane interface or API and re-adding it to the same rule. If you wish to include a route map in a redistribution rule, it is recommended you always use the management plane interface or API to create and modify it.
- Issue 2329273: No connectivity between VLANs bridged to the same segment by the same edge node.
Bridging a segment twice on the same edge node is not supported. However, it is possible to bridge two VLANs to the same segment on two different edge nodes.
- Issue 2355113: Unable to install NSX Tools on RedHat and CentOS Workload VMs with accelerated networking enabled in Microsoft Azure.
In Microsoft Azure when accelerated networking is enabled on RedHat (7.4 or later) or CentOS (7.4 or later) based OS and with NSX Agent installed, the ethernet interface does not obtain an IP address.
Workaround: After booting up RedHat or CentOS based VM in Microsoft Azure, install the latest Linux Integration Services driver available at https://www.microsoft.com/en-us/download/details.aspx?id=55106 before installing NSX tools.
- Issue 2370555: User can delete certain objects in the Advanced interface, but deletions are not reflected in the Simplified interface.
Specifically, groups added as part of a distributed firewall exclude list can be deleted in the Advanced interface Distributed Firewall Exclusion List settings. This leads to inconsistent behavior in the interface.
Workaround: Use the following procedure to resolve this issue:
- Add an object to an exclusion list in the Simplified interface.
- Verify that it appears in the Distributed Firewall exclusion list in the Advanced interface.
- Delete the object from the Distributed Firewall exclusion list in the Advanced interface.
- Return to the Simplified interface, add a second object to the exclusion list, and apply it.
- Verify that the new object appears in the Advanced interface.
- Issue 2520803: Encoding format for Manual Route Distinguisher and Route Target configuration in EVPN deployments.
You currently can configure manual route distinguisher in both Type-0 encoding and in Type-1 encoding. However, using the Type-1 encoding scheme for configuring Manual Route Distinguisher in EVPN deployments is highly recommended. Also, only Type-0 encoding for Manual Route Target configuration is allowed.
Workaround: Configure only Type-1 encoding for Route Distinguisher.
- Issue 2490064: Attempting to disable VMware Identity Manager with "External LB" toggled on does not work.
After enabling VMware Identity Manager integration on NSX with "External LB", if you attempt to then disable integration by switching "External LB" off, after about a minute, the initial configuration will reappear and overwrite local changes.
Workaround: When attempting to disable vIDM, do not toggle the External LB flag off; only toggle off vIDM Integration. This will cause that config to be saved to the database and synced to the other nodes.
- Issue 2537989: Clearing VIP (Virtual IP) does not clear vIDM integration on all nodes.
If VMware Identity Manager is configured on a cluster with a Virtual IP, disabling the Virtual IP does not result in the VMware Identity Manager integration being cleared throughout the cluster. You will have to manually fix vIDM integration on each individual node if the VIP is disabled.
Workaround: Go to each node individually to manually fix the vIDM configuration on each.
- Issue 2538956: DHCP Profile shows a message of "NOT SET" and the Apply button is disabled when configuring a Gateway DHCP on Segment.
When attempting to configure Gateway DHCP on Segment when there is no DHCP configured on the connected Gateway, the DHCP Profile cannot be applied because there is no valid DHCP to be saved.
- Issue 2525205: Management plane cluster operations fail under certain circumstances.
When attempting to join Manager N2 to Manager N1 by issuing a "join" command on Manager N2, the join command fails. You are unable to form a Management plane cluster, which might impact availability.
Workaround:
- To retain Manager N1 in the cluster, issue a "deactivate" CLI command on Manager N1. This will remove all other Managers from the cluster, keeping Manager N1 as the sole member of the cluster.
- Ensure that the non-configuration Corfu server is up and running on Manager N1 by issuing the "systemctl start corfu-nonconfig-server" command.
- Join other new Managers to the cluster by issuing "join" commands on them.
- Issue 2526769: Restore fails on multi-node cluster.
When starting a restore on a multi-node cluster, restore fails and you will have to redeploy the appliance.
Workaround: Deploy a new setup (one node cluster) and start the restore.
- Issue 2538041: Groups containing Manager Mode IP Sets can be created from Global Manager.
Global Manager allows you to create Groups that contain IP Sets that were created in Manager Mode. The configuration is accepted but the groups do not get realized on Local Managers.
- Issue 2463947: When preemptive mode HA is configured, and IPSec HA is enabled, upon double failover, packet drops over VPN are seen.
Traffic over VPN will drop on peer side. IPSec Replay errors will increase.
Workaround: Wait for the next IPSec Rekey. Or disable and enable that particular IPSec session.
- Issue 2523212: The nsx-policy-manager becomes unresponsive and restarts.
API calls to nsx-policy-manager will start failing, with service being unavailable. You will not be able to access policy manager until it restarts and is available.
Workaround: Invoke API with at most 2000 objects.
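To stay under the 2000-object limit, large collections can be fetched page by page. The sketch below assumes the cursor-based paging convention common to NSX-T collection APIs (`page_size` query parameter, `cursor` field returned in each response); verify both against the API reference for your endpoint and version:

```python
# Hedged sketch: request an NSX Policy collection in pages rather than all
# at once. Parameter names (page_size, cursor) are the usual NSX-T collection
# API convention, assumed here rather than taken from the release notes.
import urllib.parse

def paged_url(base, page_size=1000, cursor=None):
    """Build a paged collection URL; pass the cursor from the prior response."""
    params = {"page_size": str(page_size)}
    if cursor:
        params["cursor"] = cursor
    return f"{base}?{urllib.parse.urlencode(params)}"

# First page; subsequent calls pass the 'cursor' value from each response.
print(paged_url("https://nsx.example.com/policy/api/v1/infra/domains/default/groups"))
```

Looping until the response no longer returns a cursor keeps each call well under the object limit that triggers the restart.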
- Issue 2521071: For a Segment created in Global Manager, if it has a BridgeProfile configuration, then the Layer2 bridging configuration is not applied to individual NSX sites.
The consolidated status of the Segment will remain at "ERROR". This is due to failure to create the bridge endpoint at a given NSX site. You will not be able to successfully configure a BridgeProfile on Segments created via Global Manager.
Workaround: Create a Segment at the NSX site and configure it with bridge profile.
- Issue 2527671: When the DHCP server is not configured, retrieving DHCP statistics/status on a Tier0/Tier1 gateway or segment displays an error message indicating realization is not successful.
There is no functional impact. The error message is incorrect and should report that the DHCP server is not configured.
- Issue 2532127: LDAP user can't log in to NSX only if the user's Active Directory entry does not contain the UPN (userPrincipalName) attribute and contains only the samAccountName attribute.
User authentication fails and the user is unable to log in to the NSX user interface.
- Issue 2540733: Service Instance is not created after re-adding the same host in the cluster.
Service Instance in NSX is not created after re-adding the same host in the cluster, even though the service VM is present on the host. The deployment status will be shown as successful, but protection on the given host will be down.
Workaround: Delete the service VM from the host. This will create an issue which will be visible in the NSX user interface. On resolving the issue, a new SVM and corresponding service instance in NSX will be created.
- Issue 2530822: Registration of vCenter with NSX manager fails even though NSX-T extension is created on vCenter.
While registering vCenter as compute manager in NSX, even though the "com.vmware.nsx.management.nsxt" extension is created on vCenter, the compute manager registration status remains "Not Registered" in NSX-T. Operations on vCenter, such as auto install of edge etc., cannot be performed using the vCenter Server compute manager.
Workaround:
- Delete the compute manager from NSX-T Manager.
- Delete the "com.vmware.nsx.management.nsxt" extension from vCenter using the vCenter Managed Object Browser.
- Issue 2482580: IDFW/IDS configuration is not updated when an IDFW/IDS cluster is deleted from vCenter.
When a cluster with IDFW/IDS enabled is deleted from vCenter, the NSX management plane is not notified of the necessary updates. This results in inaccurate count of IDFW/IDS enabled clusters. There is no functional impact. Only the count of the enabled clusters is wrong.
- Issue 2534855: Route maps and redistribution rules of Tier-0 gateways created on the simplified UI or policy API will replace the route maps and redistribution rules created on the advanced UI (or MP API).
During upgrades, any existing route maps and rules that were created on the simplified UI (or policy API) will replace the configurations that were done directly on the advanced UI (or MP API).
Workaround: If redistribution rules/route maps created on the Advanced UI (MP) are lost after upgrade, you can recreate all rules either from the Advanced UI (MP) or the Simplified UI (Policy). Always use either Policy or MP for redistribution rules, not both at the same time, as in NSX-T 3.0 Policy redistribution supports the full feature set.
- Issue 2535355: Session timer may not take effect after upgrading to NSX-T 3.0 under certain circumstances.
Session timer setting is not taking effect. The connection session (e.g., tcp established, tcp fin wait) will use its system default session timer instead of the custom session timer. This may cause the connection (tcp/udp/icmp) session to be established longer or shorter than expected.
- Issue 2534933: Certificates that have LDAP based CDPs (CRL Distribution Point) fail to apply as tomcat/cluster certs.
You can't use CA-signed certificates that have LDAP CDPs as cluster/tomcat certificate.
Workaround: See VMware knowledge base article 78794.
- Issue 2499819: Maintenance-based NSX for vSphere to NSX-T Data Center host migration for vCenter 6.5 or 6.7 might fail due to vMotion error.
This error message is shown on the host migration page:
Pre-migrate stage failed during host migration [Reason: [Vmotion] Can not proceed with migration: Max attempt done to vmotion vm b'3-vm_Client_VM_Ubuntu_1404-shared-1410'].
Workaround: Retry host migration.
- Issue 2518183: For Manager UI screens, the Alarms column does not always show the latest alarm count.
Recently generated alarms are not reflected on Manager entity screens.
Workaround:
- Refresh the Manager entity screen to see the correct alarm count.
- Missing alarm details can also be seen from the alarm dashboard page.
- Issue 2543353: NSX T0 edge calculates incorrect UDP checksum post-eSP encapsulation for IPsec tunneled traffic.
Traffic is dropped due to bad checksum in UDP packet.
- Issue 2556730: When configuring an LDAP identity source, authentication via LDAP Group -> NSX Role Mapping does not work if the LDAP domain name is configured using mixed case.
Users who attempt to log in are denied access to NSX.
Workaround: Use all lowercase characters in the domain name field of the LDAP Identity Source configuration.
- Issue 2557287: TNP updates done after backup are not restored.
You won't see any TNP updates done after backup on a restored appliance.
Workaround: Take a backup after any updates to TNP.
- Issue 2572052: Scheduled backups might not get generated.
In some corner cases, scheduled backups are not generated.
Workaround: Restart all NSX Manager appliances.
- Issue 2557166: Distributed Firewall rules using context-profiles (layer 7) are not working as expected when applied to Kubernetes pods.
After configuring L7 rules on Kubernetes pods, traffic that is supposed to match L7 rules is hitting the default rule instead.
Workaround: Use Services instead of Context-profiles.
- Issue 2549175: Searching in policy fails with the message: "Unable to resolve with start search resync policy."
Searching in policy fails because search is out of sync when the NSX Manager nodes are provided with new IPs.
Workaround: Ensure that the DNS PTR records (IP to hostname mappings in the DNS server) for all the NSX Managers are correct.
- Issue 2486119: PNICs are migrated from NVDS back to VDS uplinks with mapping that is different from the original mapping in VDS.
When a Transport Node is created with a Transport Node Profile that has PNIC install and uninstall mappings, PNICs are migrated from VDS to NVDS. Later when NSX-T Data Center is removed from the Transport Node, the PNICs are migrated back to the VDS, but the mapping of PNIC to uplink may be different from the original mapping in VDS.
Workaround: Go to the vCenter Server UI to change the PNIC-to-uplink assignment in the VDS in the host.
- Issue 2628634: The day-2 tool will try to migrate a Transport Node from NVDS to CVDS even after "vds-migrate disable-migrate" is called.
The NVDS to CVDS migration will fail and the host will be left in maintenance mode.
Workaround: Manually move the hosts out of maintenance mode.
- Issue 2607196: Service Insertion (SI) and Guest Introspection (GI) that use Host base deployments are not supported for NVDS to CVDS Migration.
Cannot migrate Transport Nodes with NVDS using NVDS to CVDS tool.
Workaround: Undeploy GI/SI and try NVDS to CVDS migration. This will delete any SVM instances.
Once migration is complete on all Transport Nodes, deploy GI/SI instance back.
- Issue 2588072: NVDS to CVDS switch migrator doesn't support Stateless ESX with vmks.
NVDS to CVDS switch migrator cannot be used for migration with Stateless ESX hosts if the NVDS switch has vmks on it.
Workaround: Either migrate out the vmks from NVDS or remove that host from NSX and perform migration.
- Issue 2627439: If a transport node profile is applied on a cluster before migration, one extra transport node profile is created by the system in detached state after migration.
There will be one extra transport node profile generated for each original transport node profile.
- Issue 2586606: Load balancer does not work when Source-IP persistence is configured on a large number of virtual servers.
When Source-IP persistence is configured on a large number of virtual servers on a load balancer, it consumes a significant amount of memory and may lead to the NSX Edge running out of memory. The issue can reoccur with the addition of more virtual servers.
Workaround: Disable source IP persistence or move VIPs with source IP persistence to different LB Services.
- Issue 2540352: No backups are available in the Restore Backup window for CSM.
When restoring a CSM appliance from a backup, you enter the details of the backup file server in the Restore wizard but a list of backups does not appear in the UI even though it is available on the server.
Workaround: On the newly installed CSM appliance that you are restoring, go to the Backup tab and provide details of the backup server in the configuration window. When you switch to the Restore tab and start the Restore wizard, you will see the list of backups to restore from.
- Issue 2522909: Service VM upgrade does not work after correcting the URL if the upgrade deployment failed with an invalid URL.
The upgrade remains in a failed state with the wrong URL, blocking the upgrade.
Workaround: Create a new deployment_spec for upgrade to trigger.
- Issue 2475963: NSX-T VIBs fail to install due to insufficient space.
NSX-T VIBs fail to install due to insufficient space in bootbank on ESXi host, returning a BootBankInstaller.pyc: ERROR. Some ESXi images provided by third-party vendors may include VIBs which are not in use and can be relatively large in size. This can result in insufficient space in bootbank/alt-bootbank when installing/upgrading any VIBs.
Workaround: See Knowledge Base article 74864 NSX-T VIBs fail to install, due to insufficient space in bootbank on ESXi host.
- Issue 2400379: Context Profile page shows unsupported APP_ID error message.
The Context Profile page shows the following error message: "This context profile uses an unsupported APP_ID - [<APP_ID>]. Please delete this context profile manually after making sure it is not being used in any rule." This is caused by the post-upgrade presence of six deprecated APP_IDs (AD_BKUP, SKIP, AD_NSP, SAP, SUNRPC, SVN) that no longer work on the data path.
Workaround: After ensuring that they are no longer consumed, manually delete the six APP_ID context profiles.
- Issue 2462079: Some versions of ESXi hosts reboot during upgrade if there are stale DV filters present on the ESXi host.
For hosts running ESXi 6.5-U2/U3 and/or 6.7-U1/U2, during maintenance mode upgrade to NSX-T 2.5.1, the host may reboot if stale DV filters are found to be present on the host after VMs are moved out.
Workaround: Upgrade to ESXi 6.7 U3 or ESXi 6.5 P04 prior to upgrading to NSX-T Data Center 2.5.1 if you want to avoid rebooting the host during the NSX-T Data Center upgrade. See Knowledge Base article 76607 for details.
- Issue 2283559: https://<nsx-manager>/api/v1/routing-table and https://<nsx-manager>/api/v1/forwarding-table MP APIs return an error if the edge has 65k+ routes for RIB and 100k+ routes for FIB.
If the edge has 65k+ routes for RIB and 100k+ routes for FIB, the request from MP to Edge takes more than 10 seconds and results in a timeout. This is a read-only API and has an impact only if they need to download the 65k+ routes for RIB and 100k+ routes for FIB using API/UI.
Workaround: There are two options to fetch the RIB/FIB.
- These APIs support filtering options based on network prefixes or type of route. Use these options to download the routes of interest.
- CLI support in case the entire RIB/FIB table is needed and there is no timeout for the same.
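For the first option, a filtered request can be sketched as follows; the filter parameter names below (network_prefix, route_source) are illustrative assumptions, so check the NSX-T API reference for your version before relying on them:

```python
# Hedged sketch: fetch only the routes of interest instead of the full
# 65k+ RIB, avoiding the MP-to-Edge timeout. The router ID and the filter
# parameter names are placeholders/assumptions, not confirmed API fields.
import urllib.parse

def routing_table_url(nsx_manager, router_id, **filters):
    """Build a (optionally filtered) routing-table URL for a logical router."""
    base = (f"https://{nsx_manager}/api/v1/logical-routers/"
            f"{router_id}/routing/routing-table")
    return f"{base}?{urllib.parse.urlencode(filters)}" if filters else base

# Example: restrict the download to one prefix range.
print(routing_table_url("192.0.2.10", "lr-1", network_prefix="10.0.0.0/8"))
```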
- Issue 2521230: BFD status displayed under "get bgp neighbor summary" may not reflect the latest BFD session status correctly.
BGP and BFD can set up their sessions independently. As part of "get bgp neighbor summary" BGP also displays the BFD state. If BGP is down, it will not process any BFD notifications and will continue to show the last known state. This could lead to displaying a stale state for BFD.
Workaround: Rely on the output of "get bfd-sessions" and check the "State" field to get the most up-to-date BFD status.
- Issue 2532755: Inconsistencies between CLI output and policy output for routing-table.
The routing table downloaded from the UI has extra routes compared to the CLI output: an additional route (the default route) is listed in the output downloaded from policy. There is no functional impact.
- Issue 2289150: PCM calls to AWS start to fail.
If a user updates the PCG role for an AWS account on CSM from old-pcg-role to new-pcg-role, CSM updates the role for the PCG instance on AWS to new-pcg-role. However, the PCM does not know that the PCG role has been updated and as a result continues to use the old AWS clients it had created using old-pcg-role. This causes the AWS cloud inventory scan and other AWS cloud calls to fail.
Workaround: If you encounter this issue, do not modify or delete the old PCG role for at least 6.5 hours after changing to the new role. Restarting the PCG will re-initialize all AWS clients with the new role credentials.
- Issue 2491800: AR channel SSL certificates are not periodically checked for their validity, which could lead to using an expired/revoked certificate for an existing connection.
The connection would be using an expired/revoked SSL certificate.
Workaround: Restart the APH on the Manager node to trigger a reconnection.
- Issue 2630808: Upgrade from 3.0.0/3.0.1 to any higher releases is disruptive.
As soon as Global Manager or Local Manager is upgraded from 3.0.0/3.0.1 to a higher release, communications between GM and LM are not possible.
Workaround: To restore communications between LM and GM, all LM and GM need to be upgraded to a higher release.
- Issue 2630813: SRM recovery for compute VMs will lose all the NSX tags applied to VM and Segment ports.
If a SRM recovery test or run is initiated, the replicated compute VMs in the disaster recovery location will not have any NSX tags applied.
- Issue 2630819: Changing LM certificates should not be done after LM register on GM.
When Federation and PKS need to be used on the same LM, the PKS tasks to create the external VIP and change the LM certificate should be done before registering the LM on the GM. If done in the reverse order, communications between LM and GM will not be possible after the change of LM certificates, and the LM has to be registered again.
- Issue 2601493: Concurrent config onboarding is not supported on Global Manager in order to prevent heavy processing load.
Although parallel config onboarding does not interfere with each other, multiple such config onboarding executions on GM would make GM slow and sluggish for other operations in general.
Workaround: Security Admin / Users must sync up maintenance windows to avoid initiating config onboarding concurrently.
- Issue 2605420: UI shows general error message instead of specific one indicating Local Manager VIP changes.
Global Manager to site communication is impacted.
Workaround: Restart the Global Manager cluster, one node at a time.
- Issue 2613113: If onboarding is in progress, and restore of Local Manager is done, the status on Global Manager does not change from IN_PROGRESS.
UI shows IN_PROGRESS in Global Manager for Local Manager onboarding. Unable to import the configuration of the restored site.
Workaround: Use the Local Manager API to start the onboarding of the Local Manager site, if required.
- Issue 2638571: Deleting 5000 NAT rules sometime takes more than 1 hour.
NAT rules are still visible in the UI but grayed out. You have to wait for their cleanup before creating NAT rules with the same name. There is no impact when using a different name.
- Issue 2629422: Message shown on UI is incomplete in case system tries to onboard a site having DNS service on Tier-1 gateway in addition to LB service.
Onboarding is blocked by design for a Tier-1 gateway that offers a DNS/DHCP service in addition to a one-arm LB service.
The UI shows possible resolution text referencing only the DHCP service, but the same resolution is applicable for the DNS service as well.
- Issue 2628428: Global Manager status shows success initially and then changes to IN_PROGRESS.
In a scale setup, if there are too many sections being modified at frequent intervals, it takes time for the Global Manager to reflect the correct status of the configuration. This causes a delay in seeing the right status on the UI/API for the distributed firewall configuration done.
- Issue 2625009: Inter-SR iBGP sessions keep flapping when intermediate routers or physical NICs have an MTU lower than or equal to that of the inter-SR port.
This can impact inter-site connectivity in Federation topologies.
Workaround: Keep the pNIC MTU and intermediate routers' MTU bigger than the global MTU (i.e., the MTU used by the inter-SR port). Because of encapsulation, packet sizes exceed the MTU and the packets do not go through.
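The arithmetic behind this workaround can be illustrated as follows; the 100-byte overhead budget is an assumption for illustration only, since actual Geneve overhead varies with options and the outer IP version:

```python
# Illustrative arithmetic (not from the release notes): an inter-SR packet
# sized at the global MTU no longer fits the pNIC MTU once encapsulation
# overhead is added. GENEVE_OVERHEAD is an assumed budget, not a spec value.
GENEVE_OVERHEAD = 100  # bytes; assumed encapsulation overhead budget

def fits(inner_packet_bytes, pnic_mtu):
    """True if the encapsulated packet fits the physical NIC MTU."""
    return inner_packet_bytes + GENEVE_OVERHEAD <= pnic_mtu

# Global MTU and pNIC MTU both 1500: encapsulated packets are dropped.
print(fits(1500, 1500))   # False -> sessions flap
# pNIC MTU raised above global MTU plus overhead: packets go through.
print(fits(1500, 1700))   # True
```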
- Issue 2634034: When the site role is changed for a stretched T1-LR (logical router), any traffic for that logical router is impacted for about 6-8 minutes.
The static route takes a long time to get programmed, affecting the datapath. There will be traffic loss of about 6-8 minutes when the site role is changed. This could be even longer depending on the scale of the configuration.