VMware NSX 3.2.4 | 18 APRIL 2024 | Build 23653566

Check for additions and updates to these release notes.

What's in the Release Notes

NSX-T Data Center 3.2.4 is an update release that comprises bug fixes only. See "Known Issues" below for the current known issues. See "Resolved Issues" below for the list of issues resolved in this release. See the VMware NSX-T Data Center 3.2 Release Notes for the list of new features introduced in NSX-T 3.2.

Compatibility and System Requirements

For compatibility and system requirements information, see the VMware Product Interoperability Matrices and the NSX-T Data Center Installation Guide.

Upgrade Notes for This Release

For instructions about upgrading the NSX-T Data Center components, see the NSX-T Data Center Upgrade Guide.

API and CLI Resources

See developer.vmware.com to use the NSX-T Data Center APIs or CLIs for automation.

The API documentation is available from the API Reference tab. The CLI documentation is available from the Documentation tab.
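
For example, a quick way to verify API access from the command line (a minimal sketch; the manager address and credentials are placeholders, and basic authentication is shown only for brevity):

    curl -k -u 'admin:<password>' https://<nsx-mgr>/api/v1/node/version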

Available Languages

NSX-T Data Center has been localized into multiple languages: English, German, French, Italian, Japanese, Simplified Chinese, Korean, Traditional Chinese, and Spanish. Because NSX-T Data Center localization utilizes the browser language settings, ensure that your settings match the desired language.

Document Revision History

Revision Date      Edition    Changes
April 18, 2024     1          Initial edition.
June 3, 2024       2          Added known issue 3386124.
August 13, 2024    3          Moved resolved issue 3224295 to known issues.

Resolved Issues

  • Fixed Issue 3303937: Traffic loss during VRRP VM failover.

    Traffic loss.

  • Fixed Issue 3296306: Too many iked logs are written to syslog when a large number of multiple/duplicate IPsec SAs are established.

    Logs get rolled over very quickly.

  • Fixed Issue 3295807: Disk usage high in partition /nonconfig.

    No functional impact.

  • Fixed Issue 3283883: Auto-deployed edges are missing the vmId needed to perform edge upgrade.

    If the compute manager to which the edge is deployed is no longer registered with NSX Manager, the upgrade will fail. If the compute manager is registered with NSX Manager, the upgrade will succeed, but edge VM-specific information such as hardware version will not be updated.

  • Fixed Issue 3277398: Service Insertion can drop traffic in some vMotion scenarios when both redirection and copy mode are used.

    Loss of packets.

  • Fixed Issue 3235352: TCP MSS Clamping is not happening on T1 inter-sr port when egress port is rtep group.

    TCP retransmission, packet drop.

  • Fixed Issue 3224257: BFD tunnel flap causing log overrun.

    vmkernel logs overrun with messages on BFD tunnel status.

  • Fixed Issue 3179989: The memory for Load Balancer access log is not initialized and the Load Balancer configuration is written into the edge syslog.

    No impact on the Load Balancer service.

  • Fixed Issue 3152062: E/W multicast doesn’t work without enabling PIM on a Tier0 uplink in AA.

    There is an inconsistency between A/S and A/A requirements.

    When upgrading from A/S to A/A, PIM needs to be enabled explicitly on the Tier0 uplink for E-W traffic to work.

  • Fixed Issue 3118643: MP pre-upgrade checks show the warning "A backup has not been taken in last 2 days" even though a backup was taken.

    No functional impact. The message can be ignored and the upgrade can proceed.

  • Fixed Issue 3100299: Mixed group reevaluation is slow, causing a huge group request queue to build up.

    There is a delay in groups becoming populated with members. Groups must be populated with members before users can apply DFW rules. Because of this delay, users have to wait a long time to apply DFW rules with groups.

  • Fixed Issue 3083285: Error related to vCenter connection is not shown correctly in Transport Node create/update flow.

    Host Transport Node is not prepared, as can be seen via the UI/API, yet the Transport Node Profile is shown as applied successfully.

  • Fixed Issue 3073518: Service profile for endpoint protection goes into a failed state and shows an error.

    When an Endpoint protection service is unregistered and then registered with NSX Manager, service profile create operation fails for endpoint protection and the service profile goes into a failed state.

  • Fixed Issue 3029159: Import of configuration failed due to the presence of Service Insertion feature entries on the Local Manager.

    NSX Federation does not support the Service Insertion feature. When you try to onboard a Local Manager site, which has Service Insertion VMs, in to the Global Manager, the following error is displayed in the UI: "Unable to import due to these unsupported features: Service Insertion."

  • Fixed Issue 2854139, 3296124: Continuous addition/removal of BGP routes into RIB for a topology where Tier0 SR on edge has multiple BGP neighbors and these BGP neighbors are sending ECMP prefixes to the Tier0 SR.

    Traffic drop for the prefixes that are getting continuously added/deleted.

  • Fixed Issue 3025104: Host shows "Failed" state when restore is performed with a different IP and the same FQDN.

    When restore is performed using a different IP for the MP nodes but the same FQDN, hosts are not able to connect to the MP nodes.

  • Fixed Issue 3311204: ManagementConfigModel table didn't get migrated to the new version after upgrading from 3.1 to 3.2.

    You need to manually set publish_fqdns to true again after the upgrade.

  • Fixed Issue 3310165: Operational status not calculated for VRF.

    VRF status stays as DOWN on UI and doesn't get updated.

  • Fixed Issue 3309623: The running process nsxaVim at ESX host goes into Error state, and is thus unable to process any requests from NSXA (nsx-opsagent).

    Auto-recovery is in place at nsxaVim. Also, restarting nsx-opsagent would also respawn/recover the nsxaVim process.

  • Fixed Issue 3308910: Upgrading NSX from 3.1.x to 3.2.x changes mac_learning_aging_time in the MAC discovery profile from 600 to 0.

    The MAC aging time specifies the time before a MAC entry ages out and is discarded from the MAC address table. The value 0 disables MAC aging, so the MAC table size might increase over time.

  • Fixed Issue 3306725: Dataplane crash during collection of support bundle.

    Edge failover occurs.

  • Fixed Issue 3306183: The tx/rx drop of the kni-lrport increases and continues to increase on Bare Metal Edge due to CPU worker affinity overlap.

    Downloading slows when a drop occurs.

  • Fixed Issue 3305268: IDPS service did not start after appliance upgrade to 3.2.3.

    IDPS will be down.

  • Fixed Issue 3304786: Unable to manually assign edge nodes to Tier-1 gateways because display_name is not returned for edge nodes on setups upgraded from 3.0 or 3.1 to 3.2 or higher versions.

    You won't be able to identify the edge nodes in the policy LR page.

  • Fixed Issue 3303418: The logical-routers/diagnosis API returns an error on some Edge nodes.

    Unable to use the API to get diagnostic information about logical routers.

  • Fixed Issue 3302402: NSX Manager upgrade stuck at Logical Migration step.

    NSX Manager upgrade can get blocked.

  • Fixed Issue 3299273: Search re-indexing takes a long time.

    You will not be able to see entities on the UI until re-indexing is completed.

  • Fixed Issue 3299044: When CCP restarts and performs a full sync with the Policy side, the order of ServiceAttachment and Service Chain inside the full sync transaction is not guaranteed.

    Service Insertion feature is not redirecting the traffic to the SVM.

  • Fixed Issue 3298917: FIPS compliance alert "QAT running on Edge node is Non-FIPS Compliant" should not be triggered in edge VMs.

    Report has incorrect false positive entry that indicates non-compliance for edge VM.

  • Fixed Issue 3296264: Datapath segmentation fault occurred, DP core dump observed.

    Edge crash observed; unable to process any packet due to DP crash.

  • Fixed Issue 3291370: Some VMs connected to ephemeral DVPortgroups do not receive distributed firewall rules.

    If virtual machines are powered on using host CLI, they will not receive configured distributed firewall rules.

  • Fixed Issue 3290194: Upgrade Coordinator pre-check hangs for networks with a large number of edges.

    Upgrade page will be stuck and resume only when the Upgrade Coordinator restarts.

  • Fixed Issue 3290055: DHCP on shared VLAN segment fails with Service Insertion enabled.

    Unable to use DHCP with shared VLAN segment.

  • Fixed Issue 3288339: In the Quick Install flow, when the VDS uplink name is "NSX", the uplink name does not have a number as a postfix, but the code expects it to contain a number.

    You will see General error in the UI: "Error: General error has occurred. (Error code: 100)".

  • Fixed Issue 3288062: There is intermittent Load Balancer traffic failure.

    Traffic fails intermittently.

  • Fixed Issue 3287778: When a forged TX + SINK port is used on a VM vNIC along with other VM vNIC ports with MAC learning enabled on the same DVS host switch, and the host switch uses multiple uplinks in a non-LAG teaming, packets received on an uplink and destined to a forged MAC that originally came from the sink port are dropped.

    Connectivity issue.

  • Fixed Issue 3284119: Field level validation errors occurred when editing the alarm setting of "GM To GM Latency Warning".

    You won't be able to change federation.gm_to_gm_latency_warning alarm's definition through the UI, but you can still use the API as a workaround.

  • Fixed Issue 3282747: Check realization failed during vRA BYOT migration.

    Migration stops without an option to proceed further.

  • Fixed Issue 3281829: CPU config values on the Transport Node Profile are lost during upgrade to 3.2.x and beyond.

    Even if the Transport Node Profile is applied on the cluster, CPU config values in the Transport Node Profile and the individual Transport Node will be different.

  • Fixed Issue 3281436: DHCP backend server crashes when it is IPv4-only and receives an IPv6 DHCP packet.

    DHCP stops allocating IP address to client hosts.

  • Fixed Issue 3280495: The load balancer server keep-alive or NTLM feature doesn't work as expected intermittently.

    The load balancer server keep-alive feature works intermittently.

  • Fixed Issue 3279501: NSX Segments not showing up in vCenter.

    You will not be able to use NSX Segments on vCenter.

  • Fixed Issue 3278313: When updating compute manager to replace vCenter certificate thumbprint, 'Failed to enable trust' error is seen on NSX manager that was installed through vCenter's NSX embedded page.

    vCenter certificate thumbprint updates or other updates cannot be made on the compute manager.

  • Fixed Issue 3276461: The mdproxy service is using up the edge's disk as its log rotation doesn't work.

    The high disk usage leads to edge production down times.

  • Fixed Issue 3275406: Load Balancer nginx crash happens when there is TCP port conflict in SNAT and virtual server port.

    Some traffic failure happens.

  • Fixed Issue 3272988: Microsoft Windows Server 2016 (Bare Metal Server) experiences a BSOD due to the ovsim kernel driver.

    Microsoft Windows Server 2016 restarts because of the BSOD.

  • Fixed Issue 3272725: When loading the System Monitoring Dashboard, a permission error message is seen in the logs.

    You won't be able to see the Service Deployment widget in the System monitoring dashboard.

  • Fixed Issue 3262853: Overlay packets are dropped by the physical switch because the IP checksum is 0xFFFF; according to RFC 1624, it should be 0x0 instead.

    Some traffic is not going through.

  • Fixed Issue 3262810: Group update notification is not sent for groups with a Segment in the group definition when the VM is powered off.

    Even though group members are updated, no notification is received for such groups.

  • Fixed Issue 3262184: Dataplane core dump with Mellanox NIC.

    The edge is unavailable for packet forwarding until the dataplane has been restarted.

  • Fixed Issue 3261769: If there is some uplink not in a LAG, a false alarm can be triggered.

    False-positive transport_node_uplink_down alarm found on UI / API.

  • Fixed Issue 3261597: After ESX upgrade from 7.0.2 to 7.0 U3n, multiple VMs lost network connectivity.

    Lost network connectivity.

  • Fixed Issue 3260084: Stretched traffic was dropped because remote rtep-group is not in L2 span.

    Cross-site traffic not working.

  • Fixed Issue 3259906: Some http-profiles cannot be selected under Manager Load Balancer->Virtual Server UI if the number of http-profiles is larger than 1000.

    Some application profiles are not available for selection in the UI when there are more than 1000.

  • Fixed Issue 3259749: A stale MAC entry pointing to the wrong remote rtep-group caused packet drops.

    Cross-site traffic can be dropped because of these stale entries.

  • Fixed Issue 3259568: SR not realized when a new VRF is configured.

    Traffic through VRF doesn’t work.

  • Fixed Issue 3255245: Quick Install workflow gets stuck when no recommendation is provided in the recommendation step.

    Unable to prepare clusters for networking and security using quick start wizard.

  • Fixed Issue 3251559: Migration of NVDS switch with LAG configurations to CVDS fails.

    Cannot perform NVDS to CVDS migration.

  • Fixed Issue 3250276: After ESX host reboot, the Transport Node status is partially successful and won't succeed.

    If the ESX host is taken out of maintenance mode in this state and VMs are vMotioned to this host, a VM that is using VDR traffic will not be able to send network traffic out.

  • Fixed Issue 3248874: Unable to edit Edge bridge.

    You will not be able to update the edge bridges from the UI.

  • Fixed Issue 3248866: NVDS to CVDS migration fails, when LAG configurations are mapped with vmks in the migrating NVDS switch.

    NVDS to CVDS migration cannot proceed.

  • Fixed Issue 3247896: Traceflow observations are not getting recorded for IPv6 traffic when traffic is getting forwarded by firewall.

    No functional impact.

  • Fixed Issue 3247810: NSX-T Manager Cluster Certificate Private Key visible in /var/log/syslog.

    NSX-T Manager Cluster Certificate Private Key visible in /var/log/syslog.

  • Fixed Issue 3242437: BGP Down alarm is raised with the reason "Edge is not ready" for all the BGP neighbors configured on tier-0, even when the BGP Config is disabled.

    No functional impact. However, the BGP DOWN alarms are observed in the UI even though BGP Config is disabled.

  • Fixed Issue 3242135: On the Manager UI, filtering on Tier-0/Tier-1 Logical Routers shows wrong result.

    You will see the filtered data for the next visited Tier-0/Tier-1 Logical Routers view.

  • Fixed Issue 3242132: The next hop of static route was not set correctly in Tier1 router.

    Route configuration is incorrect, which may lead to data loss.

  • Fixed Issue 3242008: NVDS to CVDS migration fails due to timeout at TN_UPDATE_WAIT stage of migration.

    You will not be able to migrate from NVDS to CVDS switch.

  • Fixed Issue 3241069: In NSX for vSphere to NSX-T migration, when multiple DR only T1 gateways are mapped to DLRs or DR-only T1 gateway is mapped along with its parent T0 gateway to DLRs, edge migration fails.

    Edge migration fails.

  • Fixed Issue 3239517: TCP traffic to L7 Load Balancer cannot be established when the client port is reused rapidly.

    The traffic to L7 Load Balancer fails intermittently.

  • Fixed Issue 3239140: T1 CSP subnets stop getting advertised to connected T0 if the edge cluster of T1 gateway is changed.

    North-South datapath breaks for T1 CSP subnets.

  • Fixed Issue 3237041: Migration Coordinator showing "not set" in connections for some edges.

    You cannot see connected edges in the UI in Define Topology stage.

  • Fixed Issue 3236358: nginx memory leak causes out of memory on the active edge.

    Many features on the edge are impacted.

  • Fixed Issue 3232033: Validation lacking for "any" subnet and le/ge values.

    Unvalidated "any" subnets or le/ge values cause errors elsewhere.

  • Fixed Issue 3224339: Edge datapathd crashes when receiving fragmented IPv6 traffic (UDP) ending at edge's local CPU port.

    BGP sessions are impacted.

  • Fixed Issue 3223334: Unable to enable MON on HCX.

    You are not able to enable MON on HCX.

  • Fixed Issue 3222373: ESX host ramdisk is full due to a large number of errors in nsxaVim.err.

    Failure to deploy new VMs once the ramdisk is full.

  • Fixed Issue 3221768: The previously selected certificate goes away while searching and selecting another certificate in Virtual Server --> SNI Certificates dropdown.

    Not able to search and select multiple certificates in Virtual Server --> SNI Certificates dropdown.

  • Fixed Issue 3311568: Traffic passing through NSX Edge VM or a non-edge VM is impacted if the VM is rebooted after the NSX manager upgrade from 3.0.x to 3.2.x.

    Traffic passing through the affected VMs is impacted.

  • Fixed Issue 3313308: Dataplane service (dp-fp) core dump occurs, resulting in traffic drops.

    Traffic is impacted during the core dump period.

  • Fixed Issue 3314572: The realization state of an unconsumed DHCP Relay becomes "In progress" after it was changed.

    No functional impact.

  • Fixed Issue 3315269: VNI filter does not work for packet capture on NSX-T edge nodes.

    Can't use "vni" keyword to filter packets.

  • Fixed Issue 3316361: VM communication is blocked after deleting Service Insertion in the NSX-T environment.

    After deploying Service Insertion and deleting Service Insertion on the DVS, the packets are dropped and the VM traffic is blocked.

  • Fixed Issue 3317152: Container to Container connection timeout after NSX-T upgrade.

    Dynamic group membership is wrong. DFW/Gateway Firewall that leverages the group may be impacted. Traffic may not be correctly allowed or dropped.

  • Fixed Issue 3319292: Incorrect route advertisements received by workloads on virtual IPv6 subnet with prefix not a multiple of 8.

    This can cause traffic disruptions if multiple subnets are masked to the same value; for example, if two /50 prefixes are masked to the same /48.

  • Fixed Issue 3332272: The DFW vMotion for DFW filter on destination failed and the port for the entity is disconnected.

    After host removal, stale dfw_vmotion_failure alarms from no-longer-existing ESX nodes are still listed in the GET api/v1/alarms API response, and some alarms may have an unexpected empty entity_resource_type value in that response.

  • Fixed Issue 3335018: Edge node over provisioned with load balancer capacity.

    Edge node ends up with over provisioned load balancer capacity.

  • Fixed Issue 3335384: NVDS to CVDS migration status remains in progress forever.

    NVDS to CVDS migration fails.

  • Fixed Issue 3335886: Back and forth vMotion led to incorrect deletion of logical port.

    DFW rules were not published to the port and therefore were not applied on the vNIC/port.

  • Fixed Issue 3337389: DHCPv6 relay does not work with third party DHCPv6 server when client not using MAC generating client ID.

    DHCPv6 client may not get its IP address.

  • Fixed Issue 3337410: TNP lost VMK Install/Uninstall mappings after NSX Manager upgrade.

    Incorrect VMK network associations on TNs.

  • Fixed Issue 3346506: NSX CLI command "get edge-cluster status" shows two Tunnel Endpoints in the case of bond configuration.

    On an NSX Edge Transport Node, after changing the profile from multi Tunnel Endpoint to a bond/LAG profile, the "get edge-cluster status" CLI still shows two local VTEP IPs. For one, the device info ('Device') is shown as lag; for the other, the device info ('Device') is not displayed.

  • Fixed Issue 3353131: When configuring the metadata proxy with FQDN-based server settings, the backend dataplane is broken and status is showing error.

    The metadata service cannot be consumed via the metadata proxy, and subsequent updates of the metadata proxy cannot take effect.

  • Fixed Issue 3365503: ESXi host encounters an ENSNetWorld PSOD if flow re-validation happens after the flow action's destination port gets disconnected.

    ESX PSOD.

  • Fixed Issue 3221760: Overlay traffic outage in LAG environment.

    Overlay traffic outage.

  • Fixed Issue 3219754: Kernel coredumps were not being generated on NSX Edge.

    Loss of all services on the node that crashes. Manual intervention required to restore service.

  • Fixed Issue 3219169: The DNS server config could not be fetched automatically on VIF interface in VLAN mode when doing vif-install on WinBMS.

    The related DNS server config on the VIF interface is lost after upgrading WinBMS in VLAN mode.

  • Fixed Issue 3218194: Backup configuration accepts loopback IP/NSX Manager itself to be used as a backup server.

    No functional impact but Manager UI will show "Missing backup" banner: You have not configured backups of NSX Manager.

  • Fixed Issue 3213242: SNMP agent has encountered out of memory issue.

    You may not see the SNMP traps for your alarms.

  • Fixed Issue 3210686: Kernel memory leak when vdl2 disconnects port from control plane.

    With this issue, there could be a PSOD when an IPv6-enabled VM vMotions. Live upgrade will also fail due to the memory leak.

  • Fixed Issue 3187543: Unable to configure LDAP authentication using SSL.

    You are unable to test LDAP connectivity if you have configured LDAP authentication to use LDAPS or LDAP-with-StartTLS. This only affects the "Check Status" button in the "Set LDAP Server" screen.

  • Fixed Issue 3186156: Pagination not working/missing in the Backup Overview grid in the UI for LM and GM. Pagination not working/missing for GM Backup overview API.

    UI impact and GM API - Pagination not working.

  • Fixed Issue 3185052: CLI get physical-ports incorrectly shows ADMIN_STATUS down for bond status.

    No bond use/traffic impact as bond admin status isn't used.

  • Fixed Issue 3183515: vdl2 disconnects port from controller under frequent IP discovery update.

    Port MAC change will no longer be updated to controller.

  • Fixed Issue 3182682: When local mac address matches a remote address learned from L2VPN, address flapping occurs.

    Traffic will be impacted.

  • Fixed Issue 3179208: VM's ports get blocked after installing NSX security only on cluster.

    The security port could get blocked.

  • Fixed Issue 3165849: Intermittent traffic loss at edge cutover for vra routed networks going via downlink cutover.

    Intermittent outage.

  • Fixed Issue 3165799: NSX for vSphere to NSX-T host migration failed when a vxlan VMkernel adapter is in a vSphere standard switch.

    Migration stopped in the middle.

  • Fixed Issue 3164437: The prefix_list limitation in capacity dashboard for large appliance should be 4200, instead of 4000.

    In a large appliance, you see the wrong limit in the capacity dashboard and a false capacity alarm when the system-wide prefix_list count is over 4000.

  • Fixed Issue 3162576: NSX for vSphere to NSX-T migrator in DFW-Host-Workload migration mode migrated NSX-V hosts to NSX-T transport nodes without any NSX host-switch.

    NSX-T is installed in hosts but no NSX feature is used by the VMs because VMs are still connected to VLAN DVPGs. Some VMs may lose DFW after the migration.

  • Fixed Issue 3157163: On one edge, connected Tier1 routes appear as static routes on Tier0.

    When you perform a failover between two active/standby edge nodes, there are some missing routes/subnets on the TOR.

  • Fixed Issue 3153468: When you try to search for an IP pool beyond 1000 records, you are not able to find it.

    This happens only when more than 1000 IP pools are available in the system.

  • Fixed Issue 3152084: The traceflow topology diagram showed the wrong segment.

    The troubleshooting process may not proceed in the correct direction due to wrong segment information.

  • Fixed Issue 3129560: Python and login core dumps are generated if the Print Screen key is pressed on the VM console (vCenter/ESXi console).

    The nsx-cli session will stop.

  • Fixed Issue 3125438: Transport Node status shows still UNKNOWN/DOWN when Transport Node is up.

    You will see the wrong Transport Node status.

  • Fixed Issue 3119289: del nsx fails due to property 'com.vmware.common.opaqueDvs.status.component.vswitch' being set as RUNTIME property with value as 'down'.

    del nsx failure. Unable to remove NSX from ESXi host transport node.

  • Fixed Issue 3117763: snmpd process stopped and didn't recover automatically on Manager.

    Cannot use SNMP monitoring in NSX Manager because the snmpd process stops automatically.

  • Fixed Issue 3117695: When you configure the RTEP (Remote Tunnel Endpoint) configuration for an edge cluster using the 'System -> Quick Start -> Configure Edge Node for Stretch Networking' option in the UI, it may appear that the RTEP is not configured.

    Despite completing the RTEP configuration, the user interface indicates that the configuration has not been completed.

  • Fixed Issue 3117124: HaMode is shown as Active-standby even if no edge cluster is configured on Tier1.

    A visual issue only. There is no operational issue.

  • Fixed Issue 3113203: Uninstall failure due to multiple simultaneous attempts.

    Unable to complete the NSX uninstallation.

  • Fixed Issue 3112374: Security-only uninstall is failing as LP's are part of NSgroup.

    Uninstall not possible.

  • Fixed Issue 3109895: You may encounter Null pointer exception errors when trying to see group members or reverse group look-ups (i.e., for a given entity, find related group).

    This impacts group membership APIs and association APIs which are used to show group members/associations on the UI. You may encounter errors when trying to see group members or reverse group look-ups.

  • Fixed Issue 3109192: If a BGP neighbor is not directly connected on the uplink and does not have a source IP configured, then "bgp_down" alarm can be raised on the UI if the edge op-state flaps.

    There is no functionality impact. Only an alarm will be raised from all edges where this BGP peer is not directly connected.

  • Fixed Issue 3108028: In UI, for 'ens0' network of Service Deployment, 'NetworkKey' is shown instead of 'NetworkName'.

    No functional impact. Only 'NetworkKey' will be shown instead of 'NetworkName' for 'ens0' network for all Service Deployments in the UI.

  • Fixed Issue 3106536: The port used in LTA session was disabled during on-going LTA session, which causes PSOD.

    ESXi Server experienced BlueScreen/PSOD crash.

  • Fixed Issue 3103647: The in-band mgmt interface was lost when configuring Edge Datapath properties, e.g., rx ring size.

    The in-band mgmt interface is lost immediately after configuring Datapath properties like rx ring size.

  • Fixed Issue 3101459: Tier1 state API is slow.

    If the Tier1 state API is called, the response is slow (more than 50 seconds).

  • Fixed Issue 3101405: FW Rule or DNAT Rule updates by removing services added earlier may not work. The intended service update may not be populated to ESX or EdgeNode correctly.

    FW Rule or DNAT Rule updates by removing services added earlier may not work.

  • Fixed Issue 3100672: Node id change caused traceflow observation display exception.

    You are unable to check traceflow observation result; get traceflow observation API will return error "General error has occurred".

  • Fixed Issue 3098283: False Manager FQDN Lookup Failure Alarm raised in the system.

    You can see this False alarm: Manager FQDN Lookup Failure Alarm in alarm dashboard. However, FQDN lookup is fine in the system and the alarm should not be raised.

  • Fixed Issue 3096626: When force delete of TN is performed, VTEP labels and IP address are not released.

    You will run out of IPs although no host is really using it.

  • Fixed Issue 3040604: Incorrect LTA observations when traffic traverses multiple overlay uplinks.

    No functional impact. However, this behavior results in incorrect observations, as traffic is reported as delivered when it is actually forwarded. There is no way to distinguish between these two scenarios: a packet actually delivered at the uplink versus a packet forwarded but incorrectly marked as 'delivered'.

  • Fixed Issue 3031006: When LAG is present and security-only install is used, LAG status shows "degraded".

    Lag status shows degraded in NSX UI.

  • Fixed Issue 2975197: Bare Metal Edge Intel fastpath interfaces do not come up.

    Datapath non-functional.

  • Fixed Issue 2868382: Selecting 'skip' for OSPF feedback requests in NSX for vSphere to NSX-T migration caused a UI display issue.

    Migration is blocked in the first step.

  • Fixed Issue 2756004: Incorrect logic in "NVDS Uplink Down" alarm calculation causes false alarm.

    False-positive "NVDS Uplink Down" alarm found on UI / API.

  • Fixed Issue 3321586: Rollback to Unified Appliance does not work when upgrade fails. UI becomes inaccessible and MP service is down.

    Failure in upgrade intermittently.

  • Fixed Issue 3217386: Event log scraping fails due to a concurrency issue.

    IDFW doesn't work when event log scraping is used.

  • Fixed Issue 3261430: NSX Manager upgrade gets marked as successful if reset plan API for MP component gets called after failed MP upgrade.

    NSX Manager upgrade incorrectly gets shown as successful even if MP upgrade had failed at data migration step, and side effects are seen during other operations, like NSX-T edge cluster expansion failure.

  • Fixed Issue 3273732: In Federation, full sync from Local Manager to the Global Manager does not complete.

    Some deleted resources remain grayed out and are not permanently cleaned up. Some other resources from the Local Manager, such as SegmentPorts, are not properly updated and are stale on the Global Manager.

  • Fixed Issue 3242567: When there are frequent config churns (e.g., 30 seconds or less) on context profile causing it to be deleted and recreated, VDPI may crash and restart.

    There is brief traffic interruption to the traffic subjected to L7 DFW.

  • Fixed Issue 3180860: SNAT port exhaustion alarm continuously comes up on the NSX manager when there is no problem with SNAT ports exhaustion.

    There is no functional or traffic impact.

  • Fixed Issue 3353199: User is allowed to select signature details not present in the active signature bundle.

    You might think that the filtering is not working correctly for signature details, and you can select details of non-active signatures.

  • Fixed Issue 3337942: Realization error seen after upgrading to 3.2.3.2/4.1.1 for pre-existing IDPS sections created in 3.1.3.x releases.

    After upgrade, a realization error is seen for pre-existing IDPS sections with an error stating that the IDPS Rule is already present in the system. Post upgrade, any updates to such existing IDPS sections and rules are not realized. However, pre-existing rules will continue to work as they did pre-upgrade.

  • Fixed Issue 3336492: Updating the rule with scope as null using PATCH Gateway Policy API (/infra/domains/default/gateway-policies/test) was updating the rule scope as "ANY" in its GET request.

    Rule is not visible in the UI.

  • Fixed Issue 3330950: DFW L2 MAC address was not programmed into kernel for some filters during bulk VM vMotion or power on.

    L2 traffic hitting wrong L2 rule and impacts traffic for those zero MAC filters.

  • Fixed Issue 3327879: Unable to view the Standby GM cluster/node status from the Active GM location manager tab within the UI.

    Unable to view the Locations Manager page from Active GM login. The Standby site does not show the cluster status or the status of the 3 standby managers.

  • Fixed Issue 3326747: Service instance page is stuck in loading.

    You cannot check service instances from the UI.

  • Fixed Issue 3317410: MP API Validation doesn't exist to check if the user provides an existing MP Rule ID that doesn't belong to the Section they are updating/revising.

    Rule is not created in the desired Section.

  • Fixed Issue 3325439: CCP doesn't support retry for entering/exiting maintenance mode.

    If CCP fails to enter/exit maintenance mode during rolling upgrade, you may have to roll back and restart the upgrade from the beginning.

  • Fixed Issue 3314125: Syslog setting can get deleted even though central-config was disabled.

    Loss of configuration.

  • Fixed Issue 3310170: The NSX load balancer crashes with SNAT disabled and server keep-alives enabled.

    The HTTP and HTTPS-based applications will be impacted.

  • Fixed Issue 3300884: Bond NIC status shows down incorrectly resulting in a false alarm.

    There is no functional impact as these are false positives.

  • Fixed Issue 3297854: The NIC throughput alarms do not work correctly on Edge VM, PCG or Autonomous Edge.

    False positive alarms.

  • Fixed Issue 3285489: Edge node settings mismatch alarms are raised.

    False alarm.

  • Fixed Issue 3260245: There are inconsistencies between getting CPU stats from datapath and CPU stats from Linux.

    If the datapath is busy, the stats will come from Linux. On an edge VM, the datapath cores will be shown as 100 percent.

  • Fixed Issue 3255704: On a Bare Metal edge with bond configuration and l2 bridge configuration, BFD down observed as well as edge agent crashes.

    BFD down. Edges lose connectivity with other transport nodes.

  • Fixed Issue 3229594: Wrong linkdown alarm of a bare metal edge is fired every 4 hours on NSX Manager.

    This is a false positive alarm. Rare occurrence of collision when UUID of alarm event collides with transport node UUID.

  • Fixed Issue 3228653: Edge cluster create fails because a Transport Node internal flow causes the config state to flap between SUCCESS and IN_PROGRESS.

    No impact on traffic.

  • Fixed Issue 3214446: Site sync status not showing when using vIDM /LDAP user account.

    Site sync status not showing when using vIDM /LDAP user account.

  • Fixed Issue 3213923: The MAC-to-VLAN lswitch port entry in the VLAN FDB is not removed when the VLAN switch port is removed, which can cause a coredump when issuing the CLI "get host-switch vlan-table".

    Coredump of dp.

  • Fixed Issue 3184235: Static routes are unstable after upgrading NSX from 3.1.3 to 3.2.2.

    VMs have network outage.

  • Fixed Issue 3163803: In the EVPN Route-Server environment, traffic from VNF to DCGW is dropped every 10 minutes as the Type-2 route is withdrawn from Edge.

    The traffic from VNF to DCGW is dropped for one second or less every 10 minutes.

  • Fixed Issue 3119464: In Bare Metal Edge, interfaces with Intel XXV710 NICs and AOC phys are down.

    After any change in configuration (e.g., MTU update), Intel XXV710 NIC ports remain down. This has been observed with 25G links and AOC phys and fiber cabling.

  • Fixed Issue 3116294: Rule with nested group does not work as expected on hosts.

    Traffic not being allowed or skipped correctly.

  • Fixed Issue 3090983: A stale LS can remain in the system when different nsx-manager instances invoke the provider for segment binding deletion and segment deletion.

    The stale LS will remain in the system.

  • Fixed Issue 3111794: Entering a logical router name containing multibyte characters causes the “get logical routers” CLI command to fail.

    The "get logical routers" CLI command errors out.

  • Fixed Issue 3060219: vMotion fails when the NFS mapped to the scratch location is disconnected.

    vMotion failure.

  • Fixed Issue 3044773: IDPS Signature Download will not work if NSX Manager is configured with HTTPS Proxy with certificate.

    IDPS On-demand and Auto signature download will not work.

  • Fixed Issue 2946990: Slow memory leak in auth server (/etc/init.d/proxy) for local user authentication.

    API responses will get slower.

  • Fixed Issue 3353719: NSX Manager has two certificates associated with the same cluster ID, which causes issues with NCP (Tanzu).

    NCP crashes, blocking Tanzu communication with NSX and impacting the overall solution networking.

  • Fixed Issue 3274058: When deploying ALB controller, "." is replaced with "-" in the hostname.

    When deploying ALB controller, "." is replaced with "-" in the hostname.

  • Fixed Issue 3299235: PSOD in hosts with ESX 7.0U3 when using ENS.

    ESX will crash.

  • Fixed Issue 3261832: Subnet deletion fails even if the subnet is not used in any allocation, when the IpPool has multiple subnets and some of them have IP allocations.

    Subnet deletion will not work.

  • Fixed Issue 3290636: DFW intermittently drops TCP packets for long lived connections.

    Occasional communication failure between VMs.

  • Fixed Issue 3305774: The ipset listing API default page size is seen as 50 though the API documentation shows it to be 1000.

    No functional impact.

  • Fixed Issue 3302553: MP APIs to update gateway firewall section with rules failing with inaccurate validation.

    You will not be able to add/remove firewall rules using section level update MP API.

  • Fixed Issue 3296767: NSX Policy "Deny_Inter" is getting its sequence number changed without user intervention. Because of this, the policies, rules, and orders are changed, preventing users from accessing their applications.

    The firewall rule execution may impact the traffic flow.

  • Fixed Issue 3289756: Edge node disk partition /mnt/ids has reached 90% alarm.

    IDPS will fail to write the extracted files. Without the extracted files, malware file analysis will be impacted.

  • Fixed Issue 3285429: Audit logs are not getting logged for internal calls in Policy Hierarchical API.

    You will not be able to see the audit logs for internal calls made using Policy HAPI.

  • Fixed Issue 3271441: REST API to NSX Manager intermittently fails while using it with vIDM authentication.

    vRA not working as expected due to 403 error codes returned from the NSX Manager.

  • Fixed Issue 3271233: Data migration issue with session timers on upgrade from 3.1.3.

    Timeout values for firewall session timer profile were changing.

  • Fixed Issue 3270603: "IP Block Usage Very High" alarm is getting triggered even if IpBlock usage is low (even usage is >= 50%).

    No functional impact.

  • Fixed Issue 3262310: Cluster-based SVM deployment fails if the OVF is hosted on a Mongoose server.

    Cluster-based SVM deployment fails.

  • Fixed Issue 3259540: If service cores are configured when needed, datapathd will crash.

    Traffic is interrupted.

  • Fixed Issue 3259164: Event log Scraping doesn't work in federation setups.

    IDFW doesn't work.

  • Fixed Issue 3255053: When FirewallExcludeList intent table has the same internal key as mp entry key, InternalFirewallExcludeList's mp entry will be overwritten again.

    FirewallExcludeList related functionalities will not work.

  • Fixed Issue 3249481: Memory for the pffqdnsyncpl is above 85% even in the absence of DNS traffic.

    As this happens on the backup node during sync, it does not impact traffic, but if the backup becomes active, sync will fail.

  • Fixed Issue 3248151: When a service is deleted, the partner_channel_down alarm is raised and it remains open even after the service is redeployed.

    An alarm corresponding to the deleted service instance persists.

  • Fixed Issue 3246397: Service deployment is stuck at Upgrade in progress.

    No functional impact. Deployment is completed and redirection is working as expected.

  • Fixed Issue 3243603: This error occurs when an object being migrated has the same policy intent as the object that is marked for deletion: "Segment port is marked for deletion. Either use another path or wait for the purge cycle."

    You will have to cancel the current promotion attempt and retry the mp to policy promotion.

  • Fixed Issue 3238442: Edge crashed during failover operation.

    This occurs when failover happens and DNS request packet is received on existing state before the connection expires. There may be loss of traffic prior to the system recovery.

  • Fixed Issue 3237776: Duplicate VMs reported in NSX.

    There is no user impact.

  • Fixed Issue 3233259: FirewallRule addition in a FirewallSection API fails or takes longer time than expected to be realized on hosts or edge nodes.

    Unable to add new firewall config or delay in rule realization to hosts.

  • Fixed Issue 3217946: Search option in workflow Add Edge Node > Configure NSX > Teaming Policy Uplink Mapping does not work properly.

    Incorrect filtering results of teaming policies.

  • Fixed Issue 3186544: TCP RSTs generated for the URL filtering don't use the proper NAT IP but the original IP.

    The generated TCP RST packets don't have the applicable NAT IP address.

  • Fixed Issue 3186101: PXE boot failing with NSX-T Service Insertion.

    Cannot use service insertion for N-S PXE traffic.

  • Fixed Issue 3185286: New rules could not be created because ruleID generation failed.

    Cannot create new rules.

  • Fixed Issue 3179001: Firewall Section lock/unlock does not work.

    Existing locked sections cannot be unlocked by either the user who created them or the default enterprise admin account.

  • Fixed Issue 3178669: IDPS engine process crashes with a mix of http, smb traffic.

    Detection/prevention functionality down for a short time.

  • Fixed Issue 3166361: ConfigSpanMsg for DISTRIBUTED_FIREWALL is empty and disrupts normal functionality for the firewall exclude list or edges.

    Firewall ExcludeList and edge functionalities might not work as expected.

  • Fixed Issue 3160528: Memory exhaustion in nsx-opsagent daemon.

    All activities of opsagent will restart.

  • Fixed Issue 3159359: The TCP traffic from service link to VTI can get stuck with 64k window.

    Transferring files larger than 64k with TCP does not succeed.

  • Fixed Issue 3154551: Tier-1/Tier-0 realization errors after upgrade when using service insertion.

    Tier-0s/Tier-1s won't be realized.

  • Fixed Issue 3088665: The inner IP packet carried by Geneve encap has zero IP checksum.

    You may see error counters going up for drivers that have deep packet inspection capability.

  • Fixed Issue 3095501: DFW filter applied to Service VM.

    The Checkpoint SVM has DFW applied to its interface even when it is added into the system exclusion list.

  • Fixed Issue 3096744: The value of tcp_strict flag is shown as false, when not supplied, using the revise policy API.

    Inconsistent value of tcp_strict flag in Gateway Firewall when created via two different APIs.

  • Fixed Issue 3112614: The vmState shows Detached due to opsagent failed to get VIF from NestDB.

    The logical port of the virtual machine is set to "DOWN".

  • Fixed Issue 3104576: Gateway firewall rule stats throws an error.

    You won't be able to see the statistics.

  • Fixed Issue 3102093: REST API on logical port drop stats was excessive and triggered vRNI drop threshold.

    vRNI features leveraging NSX drop stats will trigger alarms.

  • Fixed Issue 3097874: After 3.1 to 3.2 upgrade, endpoint rules update may fail to succeed.

    You can neither update an existing rule nor create a new rule. Existing protection still continues to work, and old endpoint rules keep working, but changes are not effective (e.g., new groups cannot be added to an existing endpoint rule).

  • Fixed Issue 3093269: Rule creation for TargetType Antrea fails with version mismatch error.

    Unable to create a Security rule.

  • Fixed Issue 3076569: NSX-T DFW sections in an unknown/In Progress state during migration from NSX for vSphere to NSX-T.

    DFW rules will not apply to the workload VMs.

  • Fixed Issue 3025203: nsx-syslog.log shows error message "failed to find transaction id".

    No user impact, other than log files with the above mentioned log lines.

Known Issues

  • Issue 3386124: During upgrade to NSX-T 3.2.4, if you have more than 100 Edge Transport Nodes, the upgrade will fail.

    During upgrade to NSX-T 3.2.4, if you have more than 100 Edge Transport Nodes, the upgrade fails with the error: "[Edge UCP] Issue retrieving Edge transport nodes [8c50294c-9cd7-450a-ba49-72260d2c8797, ebfcf285-24bf-4d28-8ea6-4361041d76cb, ] aggregate information from MP."

    KB article: https://broadcomcms-software-agent.wolkenservicedesk.com/wolken/esd/knowledge-base-view/view-kb-article?articleNumber=368723

  • Issue 2854116: If you have VM templates that are backed by NSX, then after an N-VDS to VDS migration, N-VDS is not deleted.

    The migration is successful, but N-VDS is not deleted because it is still using the VM templates.

    Workaround: Convert the VM template to VM either before or after starting the migration.

  • Issue 3095149: After replacing a certificate for an LM Principal Identity, there are two certificates shown to be used by the LM Principal Identity.

    Two certificates shown to be used by the LM Principal Identity.

    Workaround: None.

  • Issue 2792485: NSX manager IP is shown instead of FQDN for manager installed in vCenter.

    NSX-T UI Integrated in vCenter shows NSX manager IP instead of FQDN for installed manager.

    Workaround: None.

  • Issue 2668717: Intermittent traffic loss might be observed for E-W routing between the vRA created networks connected to segments sharing Tier-1.

    In cases where vRA creates multiple segments and connects to a shared ESG, migration from NSX for vSphere to NSX-T will convert such a topology to a shared Tier-1 connected to all segments on the NSX-T side. During the host migration window, intermittent traffic loss might be observed for E-W traffic between workloads connected to the segments sharing the Tier-1.

    Workaround: None.

  • Issue 2574281: Policy will only allow a maximum of 500 VPN Sessions.

    NSX claims support of 512 VPN sessions per edge in the large form factor; however, because Policy auto-plumbs security policies, Policy will only allow a maximum of 500 VPN sessions. Upon configuring the 501st VPN session on Tier0, the following error message is shown: {'httpStatus': 'BAD_REQUEST', 'error_code': 500230, 'module_name': 'Policy', 'error_message': 'GatewayPolicy path=[/infra/domains/default/gateway-policies/VPN_SYSTEM_GATEWAY_POLICY] has more than 1,000 allowed rules per Gateway path=[/infra/tier-0s/inc_1_tier_0_1].'}

    Workaround: Use Management Plane APIs to create additional VPN Sessions.
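
    As a rough sketch of that workaround (endpoint per the NSX-T Manager API reference; the credentials and the session.json payload file are placeholders, so consult the IPSecVPNSession schema for the exact body):

      curl -k -u 'admin:<password>' -X POST -H 'Content-Type: application/json' \
           -d @session.json https://<nsx-mgr>/api/v1/vpn/ipsec/sessions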

  • Issue 2558576: Global Manager and Local Manager versions of a global profile definition can differ and might have an unknown behavior on Local Manager.

    Global DNS, session, or flood profiles created on Global Manager cannot be applied to a local group from the UI, but can be applied from the API. Hence, an API user can accidentally create profile binding maps and modify the global entity on the Local Manager.

    Workaround: Use the UI to configure the system.

  • Issue 3024652: VMs connected to ephemeral DVPort groups may not be configured correctly for distributed firewall.

    If NSX is installed in security-only mode on a cluster and VMs from such clusters are connected to ephemeral DVPort groups, some of these VMs may not receive the correct set of NSX DFW rules.

    In some scenarios, multiple ephemeral DVPorts can end up with the same port attachment ID. These DVPorts will lose DFW rules after such an attachment ID collision.

    Use one of the following workarounds:

    • Disconnect and reconnect virtual NICs having duplicate connection IDs.

    • Migrate the VM networking to static DVPort group.

  • Issue 3019813: During an NSX-V to NSX-T migration, if you specify admin distance value as 0, you cannot proceed with the migration.

    Admin distance with 0 value is not supported in NSX-T.

    Workaround: Set admin distance value other than 0.

  • Issue 3012313: Upgrading NSX Malware Prevention or NSX Network Detection and Response from version 3.2.0 to NSX ATP 3.2.1 or 3.2.1.1 fails.

    After the NSX Application Platform is upgraded successfully from NSX 3.2.0 to NSX ATP 3.2.1 or 3.2.1.1, upgrading either the NSX Malware Prevention (MP) or NSX Network Detection and Response (NDR) feature fails with one or more of the following symptoms.

    1. The Upgrade UI window displays a FAILED status for NSX NDR and the cloud-connector pods.

    2. For an NSX NDR upgrade, a few pods with the prefix of nsx-ndr are in the ImagePullBackOff state. 

    3. For an NSX MP upgrade, a few pods with the prefix of cloud-connector are in the ImagePullBackOff state. 

    4. The upgrade fails after you click UPGRADE, but the previous NSX MP and NSX NDR functionalities still function the same as before the upgrade was started. However, the NSX Application Platform might become unstable.

    Workaround: See VMware knowledge base article 89418.

  • Issue 2979212: When using bridging, MAC SYNC messages are sent over a tunnel between the Edges. On a bridge failover, a FULL SYNC message is sent. However, with pre-emptive mode, this message is not delivered and the Edges end up with different MAC tables.

    Since the MAC SYNC update is lost, the new active Edge will not RARP the MAC addresses to the TOR. Now traffic for these MACs is blackholed until the ARP is re-learned. The re-learning takes place automatically after the ARP times out on the bridged VM.

    Workaround: Use the non-preemptive failover mode for bridge port.

  • Issue 2958032: If you are using NSX-T 3.2 or upgrading to an NSX-T 3.2 maintenance release, the file type is not shown properly and is truncated at 12 characters on the Malware Prevention dashboard.

    On the Malware Prevention dashboard, when you click to see the details of the inspected file, you will see incorrect data because the file type will be truncated at 12 characters. For example, for a file with File Type as WindowsExecutableLLAppBundleTarArchiveFile, you will only see WindowsExecu as File Type on Malware Prevention UI.

    Workaround: Do a fresh NAPP installation with an NSX-T 3.2 maintenance build instead of upgrading from NSX-T 3.2 to an NSX-T 3.2 maintenance release.

  • Issue 2954520: When Segment is created from policy and Bridge is configured from MP, detach bridging option is not available on that Segment from UI.

    You will not be able to detach or update bridging from UI if Segment is created from policy and Bridge is configured from MP.

    If a Segment is created from the policy side, you are advised to configure bridging only from the policy side. Similarly, if a Logical Switch is created from the MP side, you should configure bridging only from the MP side.

    Workaround: You need to use APIs to remove bridging:

    1. Update the concerned LogicalPort and remove the attachment:

    PUT :: https://<mgr-ip>/api/v1/logical-ports/<logical-port-id> (add the header X-Allow-Overwrite: true to the PUT request)

    2. DELETE BridgeEndpoint

    DELETE :: https://<mgr-ip>/api/v1/bridge-endpoints/<bridge-endpoint-id>

    3. Delete LogicalPort

    DELETE :: https://<mgr-ip>/api/v1/logical-ports/<logical-port-id>
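
    Put together, the sequence might look like the following sketch (manager address, credentials, and IDs are placeholders; the PUT body is the current port definition fetched via GET, saved with the attachment field removed):

      # 1. Update the logical port, removing the attachment
      curl -k -u 'admin:<password>' -X PUT -H 'X-Allow-Overwrite: true' \
           -H 'Content-Type: application/json' -d @port-without-attachment.json \
           https://<mgr-ip>/api/v1/logical-ports/<logical-port-id>
      # 2. Delete the bridge endpoint
      curl -k -u 'admin:<password>' -X DELETE https://<mgr-ip>/api/v1/bridge-endpoints/<bridge-endpoint-id>
      # 3. Delete the logical port
      curl -k -u 'admin:<password>' -X DELETE https://<mgr-ip>/api/v1/logical-ports/<logical-port-id>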

  • Issue 2950206: CSM is not accessible after MPs are upgraded and before CSM upgrade.

    When MP is upgraded, the CSM appliance is not accessible from the UI until the CSM appliance is upgraded completely. NSX services on CSM are down at this time. It's a temporary state where CSM is inaccessible during an upgrade. The impact is minimal.

    Workaround: This is an expected behavior. You have to upgrade the CSM appliance to access CSM UI and ensure all services are running.

  • Issue 2945515: NSX tools upgrade in Azure can fail on Redhat Linux VMs.

    By default, NSX tools are installed in the /opt directory. However, during NSX tools installation the default path can be overridden with the "--chroot-path" option passed to the install script.

    Insufficient disk space on the partition where NSX tools are installed can cause the NSX tools upgrade to fail.

    Workaround: Increase the size of the partition on which NSX tools are installed and then initiate the NSX tools upgrade. Steps for increasing disk space are described in https://docs.microsoft.com/en-us/azure/virtual-machines/linux/resize-os-disk-gpt-partition.
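
    As an illustrative sketch only (assumes the cloud-utils growpart tool and an ext4 filesystem on partition 2 of /dev/sda; follow the linked article for the authoritative procedure):

      # Grow partition 2 into the newly added disk space, then grow the filesystem
      sudo growpart /dev/sda 2
      sudo resize2fs /dev/sda2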

  • Issue 2931403: Network interface validation prevents API users from performing updates.

    A network interface on an edge VM can be configured with network resources such as port groups, VLAN logical switches, or segments that are accessible for the specified compute and storage resources. The compute-Id resource pool moref in the intent is stale and no longer present in vCenter Server after a power outage (the moref of the resource pool changed after vCenter Server was restored). API users are blocked from performing update operations.

    Workaround: Redeploy edge and specify valid moref Ids.

  • Issue 2919218: Selections made to the host migration are reset to default values after the MC service restarts.

    After the restart of the MC service, all the selections relevant to host migration such as enabling or disabling clusters, migration mode, cluster migration ordering, etc., that were made earlier are reset to default values.

    Workaround: Ensure that all the selections relevant to host migration are performed again after the restart of the MC service.

  • Issue 2910529: Edge loses IPv4 address after DHCP allocation.

    After the Edge VM is installed and receives an IP from the DHCP server, it loses the IP address within a short time and becomes inaccessible. This happens because the DHCP server does not provide a gateway, so the Edge node loses its IP.

    Workaround: Ensure that the DHCP server provides the proper gateway address. If not, perform the following steps.

    1. Log in to the console of Edge VM as an admin.

    2. Stop service dataplane.

    3. Set interface <mgmt intf> dhcp plane mgmt.

    4. Start service dataplane.
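
    On the Edge console, the command sequence from steps 2-4 looks like this (<mgmt intf> is the management interface name, for example eth0):

      stop service dataplane
      set interface <mgmt intf> dhcp plane mgmt
      start service dataplane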

  • Issue 2898020: The error 'FRR config failed:: ROUTING_CONFIG_ERROR (-1)' is displayed on the status of transport nodes.

    The edge node rejects a route-map sequence configured with a deny action that has more than one community list attached to its match criteria. If the edge nodes do not have the admin intended configuration, it results in unexpected behavior.

    Workaround: None.

  • Issue 2882154: Some of the pods are not listed in the output of "kubectl top pods -n nsxi-platform".

    The output of "kubectl top pods -n nsxi-platform" will not list all pods for debugging. This does not affect deployment or normal operation. For certain issues, debugging may be affected.  There is no functional impact. Only debugging might be affected.

    Workaround: There are two workarounds.

    • Workaround 1: Make sure the Kubernetes cluster comes up with version 0.4.x of the metrics-server pod before deploying NAPP platform. This issue is not seen when metrics-server 0.4.x is deployed.

    • Workaround 2: Delete the metrics-server instance deployed by the NAPP charts and deploy upstream Kubernetes metrics-server 0.4.x.
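
    For Workaround 2, the commands might look like the following sketch (the namespace, release version, and manifest URL are assumptions; adjust them to your environment):

      # Delete the metrics-server instance deployed by the NAPP charts
      kubectl -n kube-system delete deployment metrics-server
      # Deploy upstream Kubernetes metrics-server 0.4.x
      kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.5/components.yaml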

  • Issue 2879734: Configuration fails when same self-signed certificate is used in two different IPsec local endpoints.

    Failed IPsec session will not be established until the error is resolved.

    Workaround: Use unique self-signed certificate for each local endpoint.

  • Issue 2879133: Malware Prevention feature can take up to 15 minutes to start working.

    When the Malware Prevention feature is configured for the first time, it can take up to 15 minutes for the feature to be initialized. During this initialization, no malware analysis will be done, but there is no indication that the initialization is occurring.

    Workaround: Wait 15 minutes.

  • Issue 2871585: Removal of host from DVS and DVS deletion is allowed for DVS versions less than 7.0.3 after NSX Security on vSphere DVPortgroups feature is enabled on the clusters using the DVS.

    You may have to resolve any issues in transport node or cluster configuration that arise from a host being removed from DVS or DVS deletion.

    Workaround: None.

  • Issue 2871440: Workloads secured with NSX Security on vSphere dvPortGroups lose their security settings when they are vMotioned to a host connected to an NSX Manager that is down.

    For clusters installed with the NSX Security on vSphere dvPortGroups feature, VMs that are vMotioned to hosts connected to a downed NSX Manager do not have their DFW and security rules enforced. These security settings are re-enforced when connectivity to NSX Manager is re-established.

    Workaround: Avoid vMotion to affected hosts when NSX Manager is down. If other NSX Manager nodes are functioning, vMotion the VM to another host that is connected to a healthy NSX Manager.

  • Issue 2870085: Security policy level logging to enable/disable logging for all rules is not working.

    You will not be able to change the logging of all rules by changing "logging_enabled" of security policy.

    Workaround: Modify each rule to enable/disable logging.
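
    For example, logging on an individual rule can be toggled with a Policy API call (a sketch; the policy and rule IDs are placeholders, and PATCH leaves unspecified fields unchanged):

      curl -k -u 'admin:<password>' -X PATCH -H 'Content-Type: application/json' \
           -d '{"logged": true}' \
           https://<nsx-mgr>/policy/api/v1/infra/domains/default/security-policies/<policy-id>/rules/<rule-id>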

  • Issue 2853889: When creating EVPN Tenant Config (with vlan-vni mapping), Child Segments are created, but the child segment's realization status gets into failed state for about 5 minutes and recovers automatically.

    It will take 5 minutes to realize the EVPN tenant configuration.

    Workaround: None. Wait 5 minutes.

  • Issue 2848614: When joining an MP to an MP cluster where publish_fqdns is set on the MP cluster and where the forward or reverse lookup entry missing in external DNS server or dns entry missing for joining node, forward or reverse alarms are not generated for the joining node.

    Forward/Reverse alarms are not generated for the joining node even though forward/reverse lookup entry is missing in DNS server or dns entry is missing for the joining node.

    Workaround: Configure the external DNS server for all Manager nodes with forward and reverse DNS entries.

  • Issue 2838613: For ESX version less than 7.0.3, NSX security functionality not enabled on VDS upgraded from version 6.5 to a higher version after security installation on the cluster.

    NSX security features are not enabled on the VMs connected to VDS upgraded from 6.5 to a higher version (6.6+) where NSX Security on vSphere DVPortgroups feature is supported.

    Workaround: After VDS is upgraded, reboot the host and power on the VMs to enable security on the VMs.

  • Issue 2690457: When joining an MP to an MP cluster where publish_fqdns is set on the MP cluster and where the external DNS server is not configured properly, the proton service may not restart properly on the joining node.

    The joining manager will not work and the UI will not be available.

    Workaround: Configure the external DNS server with forward and reverse DNS entries for all Manager nodes.

  • Issue 2491800: AR channel SSL certificates are not periodically checked for their validity, which could lead to using an expired/revoked certificate for an existing connection.

    The connection would be using an expired/revoked SSL.

    Workaround: Restart the APH on the Manager node to trigger a reconnection.
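
    From the NSX Manager CLI, the restart would look like the following (the service name is an assumption; verify it with the get services command):

      restart service applianceproxy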

  • Issue 2490064: Attempting to disable VMware Identity Manager with "External LB" toggled on does not work.

    After enabling VMware Identity Manager integration on NSX with "External LB", if you attempt to then disable integration by switching "External LB" off, after about a minute, the initial configuration will reappear and overwrite local changes.

    Workaround: When attempting to disable vIDM, do not toggle the External LB flag off; only toggle off vIDM Integration. This will cause that config to be saved to the database and synced to the other nodes.

  • Issue 3048107: When a VM with multiple vNICs attached to ephemeral DVPG(s) is powered on, vCenter sets the same connectionID on all ports. This causes an issue with DFW on DVPGs because the connection ID is used to assign the VIF ID, so multiple ports end up with the same VIF ID.

    If the cluster is prepared for security-only, DFW rules will not work as expected.

    Workaround:

    1. Power on the VM with only a single vNIC, and then add the second vNIC to the same ephemeral DVPG after the VM is powered on. In this case, vCenter will issue a new connection ID for the second vNIC.

    2. If the VM is already powered on with two vNICs attached to the same ephemeral DVPG, move a single vNIC to a different DVPG, and then move it back to the original DVPG. vCenter will issue a new connection ID.

  • Issue 3224295: IPv4/IPv6 BGP sessions fail to establish due to IPv4/IPv6 addresses missing on the interfaces.

    Traffic over the problematic BGP session would be disrupted. However, BGP would gracefully restart on its own.

    Workaround: See knowledge base article 3224295 for details.
