
VMware NSX 3.2.3 | 9 MAY 2023 | Build 21703624

Check for additions and updates to these release notes.

What's New

NSX-T Data Center 3.2.3 is an update release that comprises bug fixes only. See "Known Issues" below for the current known issues. See "Resolved Issues" below for the list of issues resolved in this release. See the VMware NSX-T Data Center 3.2 Release Notes for the list of new features introduced in NSX-T 3.2.

NSX Data Center for vSphere to NSX-T Data Center Migration

The Migration Coordinator mode for Lift-and-Shift - Configuration and Edge Migration, introduced as a Tech Preview in NSX-T Data Center 3.2.2, is generally available (GA) in version 3.2.3.

This migration mode migrates both configuration and Edges, and establishes a performance-optimized distributed bridge between the NSX-V source environment and the NSX-T destination environment to maintain connectivity during a lift-and-shift migration. For details about this feature, see the NSX-T Data Center 3.2 Migration Guide.

Compatibility and System Requirements

For compatibility and system requirements information, see the VMware Product Interoperability Matrices and the NSX-T Data Center Installation Guide.

Upgrade Notes for This Release

For instructions about upgrading the NSX-T Data Center components, see the NSX-T Data Center Upgrade Guide.

API and CLI Resources

See developer.vmware.com to use the NSX-T Data Center APIs or CLIs for automation.

The API documentation is available from the API Reference tab. The CLI documentation is available from the Documentation tab.

Available Languages

NSX-T Data Center has been localized into multiple languages: English, German, French, Italian, Japanese, Simplified Chinese, Korean, Traditional Chinese, and Spanish. Because NSX-T Data Center localization uses the browser language settings, ensure that your settings match the desired language.

Document Revision History

Revision Date | Edition | Changes
May 09, 2023 | 1 | Initial edition.
May 19, 2023 | 2 | Added known issue 3116294.
June 02, 2023 | 3 | Added resolved issue 3106782.
January 4, 2024 | 4 | Added resolved issue 3037223.

Resolved Issues

  • Fixed Issue 3037223: IPSec VPN service events (changes to configuration, Edge CLI commands, changes to LR HA states) are not serviced, resulting in timeouts.

    Depending on the nature of the un-serviced event, datapath may be impacted. For example, if the HA state changes, datapath may be impacted if the IPSec Service stays in the STANDBY state. For CLI timing out, the impact may be limited to monitoring status/stats collection.

  • Fixed Issue 3106782: NSX EDGE Dataplane memory exhaustion causes production outage.

    Continuous L7 traffic with UDP flows consumes Edge dataplane memory until it is exhausted.

  • Fixed Issue 3157322: Slow memory leak in auth server (/etc/init.d/proxy) for local user authentication.

    Over time the auth server (/etc/init.d/proxy) accumulates resident memory. Resident memory exceeding 1 GB is likely to cause issues.

  • Fixed Issue 3150739: Some of the tunnels to NSX-V TEPs remain in the Down status after V2T migration.

    TEPs of the failed NSX-V hosts remain in the system and tunnels to the TEPs are in the Down status.

  • Fixed Issue 3108996: VPN Datapath is disrupted after HA failover until IPsec SA is rekeyed.

    After failover, VPN datapath is disrupted until IPsec SA is rekeyed.

  • Fixed Issue 3103780: Configuration collection fails and migration is stopped.

    Configuration collection fails on some URLs with a read timeout error because there is a large amount of data to be fetched.

  • Fixed Issue 3100530: FQDNs of type .*XYZ.com fail to be enforced.

    Rule enforcement does not work as expected.

  • Fixed Issue 3099621: NSX-T CLI fails to locate the logger path, throws an exception and exits abruptly. Installing and uninstalling from NSX-T host may be affected.

    NSX-T CLI throws an exception and exits abruptly.

  • Fixed Issue 3097763: Certificate changes from LM are not propagated to GM.

    API calls to LM may not work.

  • Fixed Issue 3092509: V2T migration fails at configuration translation step.

    Migration from NSX-V to NSX-T fails at configuration translation step.

  • Fixed Issue 3088447: NSX-T Host Transport nodes report as "Partial Success" after V2T migration.

    Configuration issues encountered with the host transport node.

  • Fixed Issue 3084982: LB conf process is not regenerated successfully and new configurations cannot be loaded into LB nginx process.

    If the load balancer size is not small, the LB conf process cannot be regenerated after it is terminated, and new configurations cannot be loaded into the LB nginx process.

  • Fixed Issue 3084139: Split brain alarm observed on the UI.

    NSX-T UI displays split brain alarm.

  • Fixed Issue 3080853: Connectivity to NSX-T VMs is lost.

    Linux kernel panic is observed and connectivity to NSX-T VMs is lost.

  • Fixed Issue 3080705: V2T migration fails on some hosts in a cluster that has only the firewall feature enabled but shares a VDS that has VXLAN enabled for another cluster.

    Hosts in a cluster fail to migrate from NSX-V to NSX-T.

  • Fixed Issue 3077406: Post V2T migration, load balancer is in the disabled state and when enabled it fails.

    The load balancer is in the disabled state, and when enabled, it fails due to an expired persistence value.

  • Fixed Issue 3069624: Duplicate VTI rule for an LR is present in another firewall section present on proton.

    VTI rules are duplicated due to provider synchronization.

  • Fixed Issue 3068261: Unable to edit a segment to make changes like renaming, adding profiles, etc.

    When an IP address pool is configured for a segment and then unconfigured and deleted, the segment can no longer be edited to make changes like renaming, adding profiles, etc.

  • Fixed Issue 3066351: RTEP BGP status shows an error with an alarm.

    RTEP BGP status shows an error with an alarm when there is no data path issue.

  • Fixed Issue 3050542: NSX Manager fails to upgrade.

    Upgrade fails if a LoadBalancerHttpMonitor is created with a non-empty HttpRequestBody and an empty HttpResponseBody.

  • Fixed Issue 3048434: Unable to add new hosts to an NSX-T prepared cluster.

    Deleting a TZProfile when in use in a TNP results in issues when adding/updating transport nodes to a cluster where the TNP is attached.

  • Fixed Issue 3045913: V2T migration fails at host migration stage.

    V2T host migration fails with error message, "Failed vmknic-and-pnic migration" and nsx-syslog.log of the host shows an error message.

  • Fixed Issue 3037436: Windows bare metal host loses connectivity during deployment.

    The only pNIC available to the Windows host disappears and deployment fails.

  • Fixed Issue 3027038: Manager and Edge node run out of memory.

    Processes are randomly terminated causing disruption.

  • Fixed Issue 3076743: NSX-V to NSX-T migration is blocked if DLR/UDLR edge appliance is not deployed.

    Migration is blocked if DLR/UDLR edge appliance is not deployed.

  • Fixed Issue 3059973: Negative statistics are displayed when using Intel NIC (I40e).

    Physical port statistics show negative output.

  • Fixed Issue 3160947: Networking performance affected due to two critical polling threads running on the same CPU core.

    Two critical polling threads run on the same CPU core.

  • Fixed Issue 3120832: On failure of concurrent updates of an NSGroup to add or remove members, HTTP error code 100 is thrown instead of 409.

    Incorrect HTTP error code.

  • Fixed Issue 3115221: "General Error occurred" error when performing traceflow from the NSX Manager UI.

    Unable to start traceflow from NSX Manager UI or API.

  • Fixed Issue 3102915: Upgrade to NSX-T 3.2.2 failed with error "Unable to import due to these unsupported features: policy.location.manager.config.onboarding.ipfix.dfw".

    Unable to finish the configuration import on a Federation setup.

  • Fixed Issue 3088183: LDAP authentication may time out intermittently when ID Firewall is configured before LDAP authentication.

    Unable to log into NSX-T using LDAP credentials.

  • Fixed Issue 3081999: V2T migration fails if there are DFW rules on NSX-V, however the license is unsupported on NSX-T.

    V2T migration fails.

  • Fixed Issue 3079365: NSX-T allows deletion of IP pools while they are actively consumed or referred to by infra segments, leading to the deallocation of IP addresses while in use.

    Deletion of IP pools while they are actively consumed or referred to.

  • Fixed Issue 3069045: Error while checking firewall statistics on LM.

    Unable to view firewall statistics.

  • Fixed Issue 3057791: The context for ports is incorrectly set.

    vRNI sees unwanted context information while fetching logical ports for the VMs.

  • Fixed Issue 3045414: Host transport node or edge transport node displayed as Unavailable on the UI.

    Status of Host transport node or edge transport node displayed as Unavailable.

  • Fixed Issue 2964724: Tier-1 gateway cannot be deleted if L2 QoS profiles are created after gateway creation.

    Unable to delete tier-1 gateway.

  • Fixed Issue 3110993: The VMs connected to a segment whose downlink has an old MAC value lose connectivity.

    The connectivity of the segment that has the downlink with an old MAC value is affected.

  • Fixed Issue 3096430: Execution of unsupported CLI “start search resync policy” on Global Manager wipes off the data from the search index leading to configuration data not being listed on the Global Manager UI.

    No configuration details are listed on the Global Manager UI after the “start search resync policy” CLI command is executed on the Global Manager, although the configuration pushed via the Global Manager is still available on the LMs.

  • Fixed Issue 3151402: False unresolved alarm on NSX Manager.

    EAM alarm is not resolved even when there is no EAM issue.

  • Fixed Issue 3145214: Bare Metal edge loses management IP connectivity after install or upgrade due to incorrectly named interfaces.

    Loss of management IP connectivity to the Bare Metal edge and likely data plane connectivity issues.

  • Fixed Issue 3117779: Bare Metal edge shows deployment_type incorrectly as VIRTUAL_MACHINE.

    Bare Metal (BM) edge deployment_type is incorrectly reported as VIRTUAL_MACHINE in the API response.

  • Fixed Issue 3116787: Newly configured BGP functionality, such as route aggregation, does not work as expected.

    New BgpConfig configuration is not realized.

  • Fixed Issue 3109763: Failed to edit IDPS profile due to error, "Invalid Signature IDs", after updating IDS signature set.

    Unable to edit IDPS profile in case the override signatures are not present in the active bundle with error "Invalid Signature IDs [1116076] passed in IDS Profile /infra/settings/firewall/security/intrusion-services/profiles/SR".

  • Fixed Issue 3108110: New federation sites cannot sync with the local site after the local site is upgraded (rolling upgrade).

    After the local site is upgraded, new Federation sites cannot be added and synced with the local site.

  • Fixed Issue 3106207: When a new tier-0 interface is created on the global manager for one site, it shows on the local manager but the edge does not realize this interface.

    Edge does not realize the interface created on the GM.

  • Fixed Issue 3104743: Load balancer is in Degraded status and the configuration does not take effect on the Edge Node.

    The load balancer may be in Degraded status and the configuration does not take effect on the Edge Node if the LoadBalancerCertificate is created before its CryptoServiceKey is ready.

  • Fixed Issue 3103395: Unable to update a firewall rule using APIs if the rule is applied to a logical router.

    Updating GFW rules using MP-only APIs fails.

  • Fixed Issue 3102520: Data is not displayed on the UI after upgrading to 3.2.x, with the error message, "Search service is currently unavailable, please restart using 'restart service search'".

    Unable to view data on the UI and error message, "Search service is currently unavailable, please restart using 'restart service search'" is displayed.

  • Fixed Issue 3102484: Upgrade from NSX-T 3.1.x/3.2.0.x to NSX 3.2.1 fails intermittently with IllegalArgumentException.

    Unable to upgrade to NSX-T 3.2.1 because the dry run tool fails with an IllegalArgumentException.

  • Fixed Issue 3102188: Configuration import failed.

    Unable to import configuration from LM to GM.

  • Fixed Issue 3101731: When a node API (NAPI) call to delete the syslog/NTP/DNS/search domain configuration is made, the payload of GET /api/v1/transport-nodes continues to show the deleted stale configuration.

    Deleted syslog/NTP/DNS/search domain configuration is shown in the GET /api/v1/transport-nodes API response.

  • Fixed Issue 3100111: A deleted VRF is not cleared from nsd and the stale entry causes a Linux interface to go down and result in missing or incorrect routes.

    A Linux interface goes into admin-down state.

  • Fixed Issue 3099134: Core dumps not generated for FRR daemon crashes.

    When the FRR daemons crash, the system does not generate the core dump at /var/dump.

  • Fixed Issue 3099066: 503 connection error on Edge VM deployment and NSX-T manager node auto deployment.

    When installing NSX Edge using the NSX Manager UI/API, NSX Edge deployment fails. Auto-deployment of the NSX-T Manager node also fails.

  • Fixed Issue 3098690: DFW rules are not enforced for some time after enabling lockdown mode on ESX which is configured as a transport node.

    After lockdown mode is enabled on a transport node, the VM may be removed from the NSX-T inventory and recreated; during that interval, DFW rules are not enforced on the VMs of that ESXi host.

  • Fixed Issue 3097598: When upgrading from a version earlier than NSX-T 3.2.1, NSX Manager VMs are not added to the firewall exclusion list. As a result, all DFW rules are applied to the manager VMs, causing network connectivity issues.

    NSX Manager VMs are not added to the firewall exclusion list. Traffic from and to the NSX Managers is down or blocked.

  • Fixed Issue 3096979: Service entries created on Global Manager (NSX-T 3.1.x) are not pushed to Local Managers (NSX-T 3.2.x).

    DFW rules consuming the service entries created on Global Manager do not work because those rules are not pushed to the ESXi hosts.

  • Fixed Issue 3096648: Default Path Selection Policy for a Service Chain is 'LOCAL'.

    With PathSelectionPolicy 'Any', traffic can be redirected to any host instead of preferring 'LOCAL' as the first option.

  • Fixed Issue 3095348: Selected IP Pool for TEP is not displayed while editing a host transport node or transport node profile.

    After installation when the host transport node is edited from the UI, the selected IP Pool value is not displayed.

  • Fixed Issue 3094799: LB pool down if SNAT LB uses downlink IP address.

    Traffic going through SNAT IP does not work, for example, SNAT LB traffic.

  • Fixed Issue 3094028: The transport zone status shows as 'Unknown' on the Local Manager.

    The transport zone and edge node status is shown as "Unknown" and there are no alerts on the UI.

  • Fixed Issue 3093687: DFW stats are not reported correctly by the aggregation service API.

    DFW stats are not reported correctly by the aggregation service API. The counters are not being updated.

  • Fixed Issue 3091940: Network latency increases during Distributed Firewall configuration changes.

    Network traffic experiences an increase in latency immediately after a firewall ruleset configuration change. This fix reduces that impact.

  • Fixed Issue 3091670: N-S redirection rules are not realized on tier-1 uplink interface.

    When a tier-1 gateway is disconnected and connected back to a tier-0 gateway, N-S redirection rules are not realized on the tier-1 uplink interface.

  • Fixed Issue 3090261: On a federation setup, while editing host switches in either transport node profile or host transport node, the transport zone dropdown shows duplicate entries.

    While accessing host switches in a transport node profile or host transport node, the dropdown for transport zone selection shows two entries for each transport zone object.

  • Fixed Issues 3089948: DFW rules do not work as expected until the auto-TZ generated for Networking & Security is deleted.

    If a cluster is prepared for Networking & Security using Quick Start, uninstalled, and then installed for Security Only, DFW rules do not work as expected until the auto-TZ generated for Networking & Security is deleted.

  • Fixed Issue 3089447: Tag replication from protected site to recovery site using tag replication policy does not work if the VM is connected to a vCenter managed DVPG.

    Disaster-recovery policy applied on the VM does not work as DrVmTagMsg is missing on recovery sites.

  • Fixed Issue 3084091: Port files for non-existing ports remain stale on the ESXi host.

    Stale port files are present on the ESXi host.

  • Fixed Issue 3083098: Tier-1 inter-site default route missing on secondary sites in edge replace transport node workflow.

    In the edge replace transport node workflow, MP does not create the fc01 underlay route for the stretched tier-1 gateway on the primary site under the RTEP tunnel VRF, which eventually causes loss of the default route on the secondary site of the tier-1 gateway.

  • Fixed Issue 3081278: Active global manager UI certificate fields are disabled.

    Unable to change active global manager UI certificate through UI.

  • Fixed Issue 3080503: CA Signed service certificate was not available to assign for L7 LB in NSX Manager UI.

    Using the NSX-T Manager UI, a CA-signed service certificate cannot be assigned to the virtual server because it is not available in the list for selection.

  • Fixed Issue 3079711: V2T migration fails in pre-migration stage.

    V2T migration fails in the pre-migration stage with the following error: Pre-migrate stage failed during host migration [Reason: Distributed Firewall failed with ‘400: Exclude List : Reached maximum allowed number of members, limit : 100 for url: http://localhost:7440/nsxapi/api/v1/firewall/excludelist?action=add_member’]

  • Fixed Issue 3079495: Tier-0 SR stays in Standby mode on two edges causing traffic to be unavailable.

    SR stays in Standby mode on two edges.

  • Fixed Issue 3079025: Upgrade from NSX-T 3.0.x/3.1.x to NSX-T 4.0.x/3.2.x fails if there is a NSService/NSServiceGroup with an invalid ID.

    Upgrade fails as the UUID of NSService/NSServiceGroup is invalid.

  • Fixed Issue 3078691: Data migration of the NSX Manager fails.

    NSX Manager upgrade fails.

  • Fixed Issue 3077316: Configuring a transport node with VMK migration during creation fails with an NPE when VC pinned PNICs are configured.

    Configuring a transport node with VMK migration during creation fails.

  • Fixed Issue 3076597: NSX-T Data Center UI does not display certificate entries. Certificates with an obsolete service type observed in GetCertificates API output.

    UI does not display certificates correctly as indexing is affected.

  • Fixed Issue 3076569: NSX-T DFW sections in an Unknown/In Progress state during the V2T migration.

    The status of Distributed Services shows enabled, but all the firewall sections show Unknown/In Progress on the NSX-T Data Center UI.

  • Fixed Issue 3075981: Upgrade failure caused during upgrade pre-check due to an expired service account on VC.

    Upgrade pre-check failure.

  • Fixed Issue 3075614: During upgrade from NSX-T 3.2.1 or 4.0.0 to NSX-T 4.1.0, delete logical router port call intermittently fails picking up null service config profile during rolling upgrade.

    Logical port doesn’t get deleted. Traffic flow is impacted during the upgrade process.

  • Fixed Issue 3073713: After vMotion, NIC can't connect to the previous segment. The VM vNIC remains in the disconnected state.

    VMs do not have network connectivity after powering on and require manual reconnection to the network.

  • Fixed Issue 3073490: After multiple upgrade and rollback actions, all the commands performed by the previous upgrade are not rolled back.

    Multiple upgrade, stop, and rollback actions put the system in an inconsistent state.

  • Fixed Issue 3073457: Remote tunnel end point status is not refreshed.

    Remote tunnel end point status is not refreshed. The connectivity of BGP sessions over remote tunnel end point is intact and there is no datapath impact but UI doesn't show the appropriate status.

  • Fixed Issue 3071439: When an LdapContext has a broken connection, the connection which uses the cached LdapContext fails.

    Failed to connect to the AD server.

  • Fixed Issue 3071137: Post upgrade from NSX-T Data Center 3.2.0, the metadata-proxy malfunctions.

    Post upgrade from NSX-T Data Center 3.2.0, the realization state of a MetadataProxy is shown as succeeded, but the MetadataProxy does not function on the target segment.

  • Fixed Issue 3069611: Unable to create transport nodes of the hosts that have chained certificates.

    Host preparation fails. Transport node creation is not successful.

  • Fixed Issue 3067727: A transport node is shown as configured at times and not configured at other times on the NSX-T Data Center UI.

    A transport node at times shows as configured on one of the NSX Manager nodes and not configured on another NSX Manager node UI.

  • Fixed Issue 3054125: Incorrect VLAN segment port ID assigned to interface.

    VLAN segment port connected to tier-0 SR uplink has an invalid ID.

  • Fixed Issue 3066623: Unable to create Policy resources using PUT through Batch API.

    When calling the PUT API to create new Policy resources through the Batch API, only the first PUT goes through.

  • Fixed Issue 3065600: When groups are selected only 50 are displayed though more groups exist.

    Only 50 groups are displayed though more groups exist.

  • Fixed Issue 3063646: Nestdb-ops-agent connection error logging on Unified appliance.

    Log files fill up faster.

  • Fixed Issue 3063573: The datapath crashes during edge upgrade from NSX-T 2.5.3.

    The datapath crashes and there could be a temporary traffic outage.

  • Fixed Issue 3063560: Distributed IDS/IPS signatures cannot be downloaded automatically from NSX Manager via proxy with authentication.

    IDPS signature downloads not working with proxy using a specific port.

  • Fixed Issue 3063223: V2T migration fails in pre-host migration stage.

    V2T migration fails in pre-host migration stage.

  • Fixed Issue 3062632: DFW rules based on discovered IP addresses by vm-tools as source do not apply on the IP during storage vMotion of virtual machine.

    NSX discovered bindings (IP addresses) provided by vm-tools for segment ports of virtual machines are empty while storage vMotion is in progress. As a result, DFW rules that use the discovered IP addresses as source do not apply during storage vMotion of the virtual machine.

  • Fixed Issue 3061663: Broadcast packets do not reach Edge Node VM.

    MTEP replication fails if Edge VTEPs and outer ESX VTEPs are in the same transport VLAN. BUM traffic from workload VM to Edge fails.

  • Fixed Issue 3061589: Upgrade from NSX-T 3.1.x/3.2.0.x to NSX-T 3.2.1 fails intermittently with IllegalArgumentException.

    Upgrade to NSX-T 3.2.1 fails intermittently with an IllegalArgumentException.

  • Fixed Issue 3060278: NVDS to VDS migration fails at URT pre-check if there is a Security Cluster configured.

    URT fails with an error on one of the hosts of the Security-Only cluster.

  • Fixed Issue 3060048: When the admin user name is changed, integration with Deep Security does not work.

    Failure to register vCenter Server and NSX Manager on Deep Security Manager when the admin user name is changed.

  • Fixed Issue 3057774: Post direct upgrade from NSX-T 3.0.x to NSX-T 3.2.x, TKGI upgrade using certificate based PI users fails.

    Post direct upgrade from NSX-T 3.0.x to NSX-T 3.2.x, TKGI upgrade using certificate based PI users fails.

  • Fixed Issue 3057477: When an NSX-T Tag Replication Policy is configured and a failover test is performed, the VM tags are found missing on the recovery site.

    VM on the recovery site is missing the tags configured in NSX-T Tag Replication Policy.

  • Fixed Issue 3056196: VLAN trunk logical switches not showing in output of get logical-switches CLI command in NSX-T CLI.

    When the NSX-T CLI get logical-switches command is executed, the VLAN trunk logical switches are not listed.

  • Fixed Issue 3055519: Adding DNS-server/syslog server in PI user deployed edges using node-mgmt API causes TN state mismatch.

    Edge TN configuration state is shown as mismatched instead of SUCCESS.

  • Fixed Issue 3053843: Evaluation tool throws an error when planning an upgrade from NSX-T 3.1.3.7 to NSX-T 3.2.1.2

    Unable to upgrade because evaluation tool throws an error when planning an upgrade from NSX-T 3.1.3.7 to NSX-T 3.2.1.2.

  • Fixed Issue 3053647: Edge cutover failed because one of the NSX-V ESG's was in powered off state during NSX for vSphere to NSX-T migration.

    Operations done on ESG during migration fail.

  • Fixed Issue 3053507: Logging long messages causes "Message too long" exception when sent to socket().

    When there are exceptionally long messages and Syslog is used for logging, a python exception about the message being too long is thrown when it is being sent to the socket().

  • Fixed Issue 3053354: GARP was not sent out and split-brain healing was not performed after split-brain is resolved in Edge HA.

    If split-brain healing is not performed after split-brain is resolved, external packets can arrive on the Standby Edge node.

  • Fixed Issue 3052786: Tier-0 failover failure due to worker-framework issue.

    No worker-framework logs are present.

  • Fixed Issue 3052622: When the tier-0 SR has mixed overlay/VLAN uplink segments, disconnecting all Tier-0-SR's VLAN segment uplink pNICs brings down the tier-0 SR and its VRFs, but non-default VRFs still advertise routes, causing a traffic blackhole.

    When the tier-0 SR has mixed overlay/VLAN uplink segments, disconnecting all Tier-0 SR's VLAN segment uplink pNICs brings down the tier-0 SR and its VRFs, but N-S traffic is forwarded to its non-default VRF, causing traffic disruption.

  • Fixed Issue 3052309: Out-of-sync error for hosts after upgrading VDS 7.x in an NSX-T system migrated by V2T migrator.

    After all NSX-V hosts are migrated to NSX-T Data Center and before removing NSX-V from the system that uses VDS 7.x, if VDS upgrade or other operations like ESX upgrade and VC upgrade are performed to trigger syncing between VC and ESX hosts, VC shows VDS out-of-sync error for the hosts.

  • Fixed Issue 3050021: NSX-T Bare Metal Edge automatically exits maintenance mode after edge reboot.

    When a Bare Metal Edge is placed in maintenance mode, it may unexpectedly exit maintenance mode after a reboot.

  • Fixed Issue 3049027: Extremely slow multicast bandwidth when traffic flows from tier-0 to tier-1 gateway.

    Extremely slow unicast traffic throughput when multicast traffic flows between different DRs.

  • Fixed Issue 3048610: APIs do not work when server is overloaded until the reverse-proxy is restarted.

    APIs do not work when the server-overload condition is reached, until the reverse-proxy is restarted.

  • Fixed Issue 3045886: Transport Node preparation for an ESXi host fails.

    Transport node preparation cannot be successfully completed if a particular DVS switch is being used.

  • Fixed Issue 3045514: Unable to view flows from NSX-backed VMs in vRNI.

    User is unable to view flows from NSX-backed VMs in vRNI.

  • Fixed Issue 3045407: Tier-0 gateway not realized on the Edge node.

    Tier-0 gateway is not realized on the Edge node.

  • Fixed Issue 3044522: IGMP group specific query may not be generated.

    An edge node may not process IGMP Leave packet when a multicast receiver leaves. This may cause no IGMP group specific query to be sent, and faster aging of stale forwarding entry may not happen.

  • Fixed Issue 3044281: Edge VM has stale hardware config error. This causes a validation error when applying manager configuration changes.

  • Fixed Issue 3043831: VMs connected to NSX-T Data Center port groups on VDS lose network connectivity.

    VMs connected to NSX-T Data Center port groups on VDS lose network connectivity randomly.

  • Fixed Issue 3042466: ESXi host crashes with a purple diagnostic screen for multicast traffic.

    ESXi host crashes with a purple diagnostic screen for multicast traffic when NSX-T flow cache is enabled.

  • Fixed Issue 3041998: NSX-T tag replication policy does not work.

    Tags on virtual machines are not replicated when virtual machines are moved from a primary site to a recovery site using SRM planned migration or disaster recovery.

  • Fixed Issue 3040934: Unable to publish changes to Gateway Firewall settings. The publish button is disabled.

    The publish button is disabled when Distributed Firewall (DFW)/Gateway Firewall (GWFW) is disabled.

  • Fixed Issue 3038671: Changing the DNS settings of an Edge in the UI does not update the containers within the Edge accordingly.

    Updates to name-servers from the CLI/UI are not reflected on containers running on the edge.

  • Fixed Issue 3035962: FQDN alarm issued after upgrade to NSX-T 3.1.3.7 from NSX-T 2.5.

    A manager_fqdn_lookup_failure alarm is incorrectly raised.

  • Fixed Issue 3030879: Duplicate transport zone displayed in the NSX-T LM/GM UI in the Edge node summary section.

    The edge intent displays duplicate system transport zones. The system RTEP transport zone is added to the edge intent when the edge intent is refreshed.

  • Fixed Issue 3029276: Config-import workflow failed after some progress.

    The config-import workflow failed at around 50% with the following error: “Duplicate resource found on GM Site while onboarding LM resource”.

  • Fixed Issue 3026395: North to south traffic on VRF can get dropped on SR where default VRF SR is down.

    In a multi-VRF Active-Active topology, if the default VRF uplink layer-2 (VLAN) connectivity is down, the SRs of all VRFs go down, but BGP on the VRFs stays up. This creates blackhole routes on the TOR for the VRFs.

    Workaround: Manually bring down BGP on the VRFs.

  • Fixed Issue 3026005: Root partition disk space is used up on GM and an alarm is raised.

    Search service is using root partition on GM to store data, using up all the root partition disk space and resulting in an alarm being raised.

  • Fixed Issue 3010798: Alarms raised due to tunnels being down for stale ESXi transport node

    Alarms are raised for stale UUIDs in corfu ClientHeartbeatTable.

  • Fixed Issue 3007628: Many L3 entities show a realization error due to missing GPRR objects.

    On the policy UI, some L3 entities show a realization status of FAILED, but the corresponding realized entities exist and data traffic works.

  • Fixed Issue 2950341: Capacity alarm is raised when the number of compute managers reaches 1 in a medium size setup.

    A capacity alarm is raised when the number of compute managers reaches 1 in a medium size setup.

  • Fixed Issue 2992964: During NSX-V to NSX-T migration, edge firewall rules with local Security Group cannot be migrated to NSX Global Manager.

    You must migrate the edge firewall rules that use a local Security Group manually. Otherwise, depending on the rule definitions (actions, order, and so on), traffic might get dropped during an edge cutover.

  • Fixed Issue 3069457: During NSX-T security only deployment upgrade from 3.2.x to 3.2.2 or 4.0.1.1, the host upgrade fails with the message, "NSX enabled switches already exist on host."

    Hosts on the UI show the status as Failed after upgrade and may create a datapath impact.

  • Fixed Issue 3094405: Incorrect or stale vNIC filters programmed to pNIC when overlay networks are configured.

    vNIC overlay filters are updated in a specific order. When updates occur in quick succession, only the first update is retained, and subsequent updates are discarded, resulting in incorrect filter programming and a possible performance regression.

  • Fixed Issue 3106569: Performance not reaching expected levels with EVPN route server mode.

    vNIC filters programmed on pNIC may be stale, incorrect, or missing when overlay filters are programmed to the pNIC in a teaming situation, resulting in a possible performance regression.

  • Fixed Issue 3113067: Unable to connect to NSX-T Manager after vMotion.

    When upgrading NSX from a version lower than NSX 3.2.1, NSX manager VMs are not automatically added to the firewall exclusion list. As a result, all DFW rules are applied to manager VMs, which can cause network connectivity problems.

    This issue does not occur in fresh deployments of NSX 3.2.2 or later versions. However, if you are upgrading from NSX 3.2.1 or an earlier version to any target version up to and including NSX 4.1.0, this issue may be encountered.

  • Fixed Issue 3113073: DFW rules are not getting enforced for some time after enabling lockdown mode.

    Enabling lockdown mode on a transport node can cause a delay in the enforcement of DFW rules. This is because when lockdown mode is enabled on a transport node, the associated VM may be removed from the NSX inventory and then recreated. During this time gap, DFW rules may not be enforced on the VMs associated with that ESXi host.

  • Fixed Issue 3113076: Core dumps not generated for FRR daemon crashes.

    In the event of FRR daemon crashes, core dumps are not generated by the system in the /var/dump directory. This can cause BGP to flap.

  • Fixed Issue 3113085: DFW rules are not applied to VM upon vMotion.

    When a VM protected by DFW is vMotioned from one host to another in a Security-Only Install deployment, the DFW rules may not be enforced on the ESX host, resulting in incorrect rule classification.

  • Fixed Issue 3113093: Newly added hosts are not configured for security.

    After the installation of security, when a new host is added to a cluster and connected to the Distributed Virtual Switch, it does not automatically trigger the installation of NSX on that host.

  • Fixed Issue 3113100: IP address is not realized for some VMs in the Dynamic security groups due to stale VIF entry.

    If a cluster has been initially set up for Networking and Security using Quick Start, uninstalled, and then reinstalled solely for Security purposes, DFW rules may not function as intended. This is because the auto-TZ that was generated for Networking and Security is still present and needs to be removed in order for the DFW rules to work properly.

  • Fixed Issue 3118868: Incorrect or stale vNIC filters programmed on pNIC when overlay filters are programmed around the same time as a pNIC is enabled.

    vNIC filters programmed on pNIC may be stale, incorrect, or missing when overlay filters are programmed around the same time as a pNIC is enabled, resulting in a possible performance regression.

  • Fixed Issue 3152174: Host preparation with VDS fails with error: Host {UUID} is not added to VDS value.

    On vCenter, if networks are nested within folders, migrations from NVDS to CVDS or NSX-V to NSX-T may fail if migration is to CVDS in NSX-T.

  • Fixed Issue 3152195: DFW rules with Context Profiles with FQDN of type .*XYZ.com fail to be enforced.

    DFW rule enforcement does not work as expected in this specific scenario.

  • Fixed Issue 2687084: After upgrade or restart, the Search API may return 400 error with Error code 60508, "Re-creating indexes, this may take some time."

    Depending on the scale of the system, the Search API and the UI are unusable until the re-indexing is complete.

  • Fixed Issue 2992807: After upgrading from NSX-T 3.0 or 3.1 to NSX-T 3.2.1.1/3.2.2 or 4.0.0.1, Transport Node goes into a failed state.

    Transport Node realization fails with the following error message:

    Failed to handle reply for TransportNodeHostSwitches migration to VDS.

  • Fixed Issue 2969847: Incorrect DSCP priority.

    DSCP priority from a custom QoS profile is not propagated to host when the value is 0, resulting in traffic prioritization issues.

  • Fixed Issue 2663483: The single-node NSX Manager will disconnect from the rest of the NSX Federation environment if you replace the APH-AR certificate on that NSX Manager.

    This issue is seen only with NSX Federation and with the single node NSX Manager Cluster. The single-node NSX Manager will disconnect from the rest of the NSX Federation environment if you replace the APH-AR certificate on that NSX Manager.

  • Fixed Issue 2868944: UI feedback is not shown when migrating more than 1,000 DFW rules from NSX for vSphere to NSX-T Data Center, but sections are subdivided into sections of 1,000 rules or fewer.

    UI feedback is not shown.

  • Fixed Issue 2888207: Unable to reset local user credentials when vIDM is enabled.

    You are unable to change local user passwords while vIDM is enabled.

  • Fixed Issue 2877776: "get controllers" output may show stale information about controllers that are not the master when compared to the controller-info.xml file.

    This CLI output is confusing.

Known Issues

  • Issue 3116294: Rule with nested group does not work as expected on hosts.

    Traffic is not allowed or skipped as expected.

    Workaround: See knowledge base article 91421.

  • Issue 3105505: Stale entries are present in the site configuration.

    Site configuration has stale entries.

    Workaround: If offboarding from the UI does not work, run the offboard via SM (set site_id only; leave the other fields blank):

      curl -X POST -ik http://localhost:7441/api/v1/sites?action=offboard_remote -H "Content-Type: application/json" -d '{"credential": {"ip": "", "port":443, "username": "", "password": "", "thumbprint": ""}, "site_id": "a8342ef0-1538-4fb0-b47e-a27d09978219"}'

  • Issue 3095149: After replacing a certificate for an LM Principal Identity, there are two certificates shown to be used by the LM Principal Identity.

    Two certificates shown to be used by the LM Principal Identity.

    Workaround: None

  • Issue 3073518: Service profile for endpoint protection goes into a failed state and shows an error.

    When an Endpoint protection service is unregistered and then registered with NSX Manager, service profile create operation fails for endpoint protection and the service profile goes into a failed state.

    Workaround:

    1. Delete the endpoint rules attached to the failed service profiles, and also delete the failed service profiles in NSX-T Data Center.

    2. Check the GET /policy/api/v1/infra/service-references API (see the sketch after these steps).

    3. Delete the appropriate service reference pointing to the service using DELETE /policy/api/v1/infra/service-references/<id>.

    4. Switch back and forth in the Service Profile tab.

    5. Recreate the service profile.
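    For illustration, steps 2 and 3 might look like the following curl calls (a sketch only, assuming basic-auth admin access; the manager address is a placeholder, and the service reference ID comes from the GET response):

      # List service references and find the one pointing to the unregistered service
      curl -k -u admin 'https://<nsx-mgr-ip>/policy/api/v1/infra/service-references'

      # Delete that service reference
      curl -k -u admin -X DELETE 'https://<nsx-mgr-ip>/policy/api/v1/infra/service-references/<service-reference-id>'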

  • Issue 3024652: VMs connected to ephemeral DVPort groups may not be configured correctly for distributed firewall.

    If NSX is installed in security-only mode on a cluster and VMs from such clusters are connected to ephemeral DVPort groups, some of these VMs may not receive the correct set of NSX DFW rules.

    In some scenarios, multiple ephemeral DVPorts can end up with the same port attachment ID. These DVPorts lose DFW rules after such an attachment ID collision.

    Use one of the following workarounds:

    • Disconnect and reconnect virtual NICs having duplicate connection IDs.

    • Migrate the VM networking to static DVPort group.

  • Issue 2979212: When using bridging, MAC SYNC messages are sent over a tunnel between the Edges. On a bridge failover, a FULL SYNC message is sent. However, with pre-emptive mode, this message is not delivered and the Edges end up with different MAC tables.

    Since the MAC SYNC update is lost, the new active Edge will not RARP the MAC addresses to the TOR. Now traffic for these MACs is blackholed until the ARP is re-learned. The re-learning takes place automatically after the ARP times out on the bridged VM.

    Workaround: Use the non-preemptive failover mode for the bridge port.

  • Issue 3019813: During an NSX-V to NSX-T migration, if you specify admin distance value as 0, you cannot proceed with the migration.

    Admin distance with 0 value is not supported in NSX-T.

    Workaround: Set admin distance value other than 0.

  • Issue 3029159: Import of configuration failed due to the presence of Service Insertion feature entries on the Local Manager.

    NSX Federation does not support the Service Insertion feature. When you try to onboard a Local Manager site, which has Service Insertion VMs, in to the Global Manager, the following error is displayed in the UI:

    Unable to import due to these unsupported features: Service Insertion.

    Workaround:

    1. Use the following DELETE API to manually delete the unsupported Service Insertion entries from the Local Manager before initiating the import configuration workflow.

      DELETE https://<nsx_mgr_ip>/policy/api/v1/infra/service-references/<SERVICE_REF_ID>

    2. Redeploy Service Insertion after the Local Manager is onboarded to the Global Manager successfully.

  • Issue 3012313: Upgrading NSX Malware Prevention or NSX Network Detection and Response from version 3.2.0 to NSX ATP 3.2.1 or 3.2.1.1 fails.

    After the NSX Application Platform is upgraded successfully from NSX 3.2.0 to NSX ATP 3.2.1 or 3.2.1.1, upgrading either the NSX Malware Prevention (MP) or NSX Network Detection and Response (NDR) feature fails with one or more of the following symptoms.

    1. The Upgrade UI window displays a FAILED status for NSX NDR and the cloud-connector pods.

    2. For an NSX NDR upgrade, a few pods with the prefix of nsx-ndr are in the ImagePullBackOff state.   

    3. For an NSX MP upgrade, a few pods with the prefix of cloud-connector are in the ImagePullBackOff state.   

    4. The upgrade fails after you click UPGRADE, but the previous NSX MP and NSX NDR functionalities still function the same as before the upgrade was started. However, the NSX Application Platform might become unstable.

    Workaround: See VMware knowledge base article 89418.

  • Issue 2931403: Network interface validation prevents API users from performing updates.

    Network interface on an edge VM can be configured with network resources such as port groups, VLAN logical switches, or segments that are accessible for specified compute and storage resources. Compute-Id regroup moref in intent is stale and no longer present in vCenter Server after a power outage (moref of resource pool changed after vCenter Server was restored). API users are blocked from performing update operations.

    Workaround: Redeploy the edge and specify valid moref IDs.

  • Issue 2854116: If you have VM templates that are backed by NSX, then after an N-VDS to VDS migration, N-VDS is not deleted.

    The migration is successful, but N-VDS is not deleted because it is still using the VM templates.

    Workaround: Convert the VM template to a VM either before or after starting the migration.

  • Issue 2932354: After replacing the Virtual IP certificate on the NSX Manager, communication between the Global Manager and the Local Manager is lost.

    You cannot view the status of the Local Manager from the Global Manager UI.

    Workaround:

    Update the Local Manager certificate thumbprint in the NSX Global Manager Cluster. For more information see the procedure explained in the VMware Cloud Foundation Administration Guide.

  • Issue 2936504: The loading spinner appears on top of the NSX Application Platform's monitoring page.

    When you view the NSX Application Platform page after the NSX Application Platform is successfully installed, the loading spinner is initially displayed on top of the page. This spinner might give the impression that there is some connectivity issue occurring when there is none.

    Workaround: As soon as the NSX Application Platform page is loaded, refresh the Web browser page to clear the spinner.

  • Issue 3025104: Host shows "Failed" state when restore is performed with a different IP and the same FQDN.

    When restore is performed using different IPs for the MP nodes with the same FQDN, hosts are not able to connect to the MP nodes.

    Workaround: Refresh the DNS cache on the host using the command: /etc/init.d/nscd restart

  • Issue 2879734: Configuration fails when same self-signed certificate is used in two different IPsec local endpoints.

    Failed IPsec session will not be established until the error is resolved.

    Workaround: Use unique self-signed certificate for each local endpoint.

  • Issue 2879133: Malware Prevention feature can take up to 15 minutes to start working.

    When the Malware Prevention feature is configured for the first time, it can take up to 15 minutes for the feature to be initialized. During this initialization, no malware analysis will be done, but there is no indication that the initialization is occurring.

    Workaround: Wait 15 minutes.

  • Issue 2848614: When joining an MP to an MP cluster where publish_fqdns is set on the MP cluster, and the forward or reverse lookup entry is missing in the external DNS server or the DNS entry is missing for the joining node, forward or reverse alarms are not generated for the joining node.

    Forward/reverse alarms are not generated for the joining node even though the forward/reverse lookup entry is missing in the DNS server or the DNS entry is missing for the joining node.

    Workaround: Configure the external DNS server for all Manager nodes with forward and reverse DNS entries.
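    As a quick check (a sketch; the hostname and address are hypothetical), verify from each Manager node that both lookups resolve before joining:

      nslookup nsx-mgr-01.corp.example.com   # forward lookup must return the node's IP
      nslookup 10.10.10.11                   # reverse lookup must return the node's FQDN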

  • Issue 2871585: Removal of host from DVS and DVS deletion is allowed for DVS versions less than 7.0.3 after NSX Security on vSphere DVPortgroups feature is enabled on the clusters using the DVS.

    You may have to resolve any issues in transport node or cluster configuration that arise from a host being removed from DVS or DVS deletion.

    Workaround: None.

  • Issue 2870085: Security policy level logging to enable/disable logging for all rules is not working.

    You will not be able to change the logging of all rules by changing the "logging_enabled" field of the security policy.

    Workaround: Modify each rule to enable/disable logging.

  • Issue 2884939: NSX-T Policy API results in error: Client 'admin' exceeded request rate of 100 per second (Error code: 102).

    The NSX rate limit of 100 requests per second is reached when a large number of virtual services are migrated from NSX for vSphere to NSX-T ALB, and all APIs are temporarily blocked.

    Workaround: Update the client API rate limit to 200 or more requests per second.

    Note: This is fixed in the Avi 21.1.4 release.
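    A minimal sketch of raising the limit via the node-level API (assuming the /api/v1/node/services/http endpoint; save the GET response to a file, raise client_api_rate_limit to 200 in it, and PUT the full object back):

      # Read the current HTTP service properties
      curl -k -u admin 'https://<nsx-mgr-ip>/api/v1/node/services/http' > http-service.json

      # Edit client_api_rate_limit in http-service.json, then apply it
      curl -k -u admin -X PUT 'https://<nsx-mgr-ip>/api/v1/node/services/http' -H 'Content-Type: application/json' -d @http-service.json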

  • Issue 2792485: NSX manager IP is shown instead of FQDN for manager installed in vCenter.

    NSX-T UI Integrated in vCenter shows NSX manager IP instead of FQDN for installed manager.

    Workaround: None.

  • Issue 2854139: Continuous addition/removal of BGP routes into RIB for a topology where Tier0 SR on edge has multiple BGP neighbors and these BGP neighbors are sending ECMP prefixes to the Tier0 SR.

    Traffic drop for the prefixes that are getting continuously added/deleted.

    Workaround: Add an inbound routemap that filters the BGP prefix which is in the same subnet as the static route nexthop.

  • Issue 2853889: When creating EVPN Tenant Config (with vlan-vni mapping), Child Segments are created, but the child segment's realization status gets into failed state for about 5 minutes and recovers automatically.

    It will take 5 minutes to realize the EVPN tenant configuration.

    Workaround: None. Wait 5 minutes.

  • Issue 2690457: When joining an MP to an MP cluster where publish_fqdns is set on the MP cluster and the external DNS server is not configured properly, the proton service may not restart properly on the joining node.

    The joining manager will not work and the UI will not be available.

    Workaround: Configure the external DNS server with forward and reverse DNS entries for all Manager nodes.

  • Issue 2490064: Attempting to disable VMware Identity Manager with "External LB" toggled on does not work.

    After enabling VMware Identity Manager integration on NSX with "External LB", if you attempt to then disable integration by switching "External LB" off, after about a minute, the initial configuration will reappear and overwrite local changes.

    Workaround: When attempting to disable vIDM, do not toggle the External LB flag off; only toggle off vIDM Integration. This will cause that config to be saved to the database and synced to the other nodes.

  • Issue 2668717: Intermittent traffic loss might be observed for E-W routing between the vRA created networks connected to segments sharing Tier-1.

    In cases where vRA creates multiple segments and connects to a shared ESG, migration from NSX for vSphere to NSX-T will convert such a topology to a shared Tier-1 connected to all segments on the NSX-T side. During the host migration window, intermittent traffic loss might be observed for E-W traffic between workloads connected to the segments sharing the Tier-1.

    Workaround: None.

  • Issue 2574281: Policy will only allow a maximum of 500 VPN Sessions.

    NSX claims support of 512 VPN Sessions per edge in the large form factor, however, due to Policy doing auto plumbing of security policies, Policy will only allow a maximum of 500 VPN Sessions. Upon configuring the 501st VPN session on Tier0, the following error message is shown: {'httpStatus': 'BAD_REQUEST', 'error_code': 500230, 'module_name': 'Policy', 'error_message': 'GatewayPolicy path=[/infra/domains/default/gateway-policies/VPN_SYSTEM_GATEWAY_POLICY] has more than 1,000 allowed rules per Gateway path=[/infra/tier-0s/inc_1_tier_0_1].'}

    Workaround: Use Management Plane APIs to create additional VPN Sessions.
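    For example (a sketch only; the manager address is a placeholder, and the exact IPSecVPNSession payload schema is documented in the API Reference), list and create MP IPsec sessions with:

      # List existing MP IPsec VPN sessions
      curl -k -u admin 'https://<nsx-mgr-ip>/api/v1/vpn/ipsec/sessions'

      # Create an additional session from a prepared IPSecVPNSession payload
      curl -k -u admin -X POST 'https://<nsx-mgr-ip>/api/v1/vpn/ipsec/sessions' -H 'Content-Type: application/json' -d @vpn-session.json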

  • Issue 2838613: For ESXi versions earlier than 7.0.3, NSX security functionality is not enabled on a VDS upgraded from version 6.5 to a higher version after security installation on the cluster.

    NSX security features are not enabled on the VMs connected to a VDS upgraded from 6.5 to a higher version (6.6+) where the NSX Security on vSphere DVPortgroups feature is supported.

    Workaround: After VDS is upgraded, reboot the host and power on the VMs to enable security on the VMs.

  • Issue 2491800: AR channel SSL certificates are not periodically checked for their validity, which could lead to using an expired/revoked certificate for an existing connection.

    The connection would be using an expired/revoked SSL certificate.

    Workaround: Restart the APH on the Manager node to trigger a reconnection.

  • Issue 2558576: Global Manager and Local Manager versions of a global profile definition can differ and might have an unknown behavior on Local Manager.

    Global DNS, session, or flood profiles created on the Global Manager cannot be applied to a local group from the UI, but can be applied from the API. Hence, an API user can accidentally create profile binding maps and modify the global entity on the Local Manager.

    Workaround: Use the UI to configure the system.

  • Issue 2950206: CSM is not accessible after MPs are upgraded and before CSM upgrade.

    When MP is upgraded, the CSM appliance is not accessible from the UI until the CSM appliance is upgraded completely. NSX services on CSM are down at this time. It's a temporary state where CSM is inaccessible during an upgrade. The impact is minimal.

    Workaround: This is an expected behavior. You have to upgrade the CSM appliance to access CSM UI and ensure all services are running.

  • Issue 2945515: NSX tools upgrade in Azure can fail on Redhat Linux VMs.

    By default, NSX tools are installed on /opt directory. However, during NSX tools installation default path can be overridden with "--chroot-path" option passed to the install script.

    Insufficient disk space on the partition where NSX tools is installed can cause NSX tools upgrade to fail.

    Workaround: Increase the partition size on which NSX tools is installed and then initiate NSX tools upgrade. Steps for increasing disk space are described in https://docs.microsoft.com/en-us/azure/virtual-machines/linux/resize-os-disk-gpt-partition page.

  • Issue 2882154: Some of the pods are not listed in the output of "kubectl top pods -n nsxi-platform".

    The output of "kubectl top pods -n nsxi-platform" does not list all pods for debugging. This does not affect deployment or normal operation; there is no functional impact, but debugging of certain issues may be affected.

    Workaround: There are two workarounds:

    • Workaround 1: Make sure the Kubernetes cluster comes up with version 0.4.x of the metrics-server pod before deploying NAPP platform. This issue is not seen when metrics-server 0.4.x is deployed.

    • Workaround 2: Delete the metrics-server instance deployed by the NAPP charts and deploy upstream Kubernetes metrics-server 0.4.x.
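    A rough sketch of Workaround 2 (the deployment name, namespace, and release version are assumptions; adjust them to your environment, and note that the upstream manifest installs into kube-system):

      # Remove the metrics-server instance deployed by the NAPP charts
      kubectl -n nsxi-platform delete deployment metrics-server

      # Deploy upstream Kubernetes metrics-server 0.4.x
      kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.4/components.yaml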

  • Issue 2871440: Workloads secured with NSX Security on vSphere dvPortGroups lose their security settings when they are vMotioned to a host connected to an NSX Manager that is down.

    For clusters installed with the NSX Security on vSphere dvPortGroups feature, VMs that are vMotioned to hosts connected to a downed NSX Manager do not have their DFW and security rules enforced. These security settings are re-enforced when connectivity to NSX Manager is re-established.

    Workaround: Avoid vMotion to affected hosts when NSX Manager is down. If other NSX Manager nodes are functioning, vMotion the VM to another host that is connected to a healthy NSX Manager.

  • Issue 2898020: The error 'FRR config failed:: ROUTING_CONFIG_ERROR (-1)' is displayed on the status of transport nodes.

    The edge node rejects a route-map sequence configured with a deny action that has more than one community list attached to its match criteria. If the edge nodes do not have the admin intended configuration, it results in unexpected behavior.

    Workaround: None

  • Issue 2910529: Edge loses IPv4 address after DHCP allocation.

    After the Edge VM is installed and receives an IP from the DHCP server, it loses the IP address within a short time and becomes inaccessible. This happens because the DHCP server does not provide a gateway, so the Edge node loses its IP.

    Workaround: Ensure that the DHCP server provides the proper gateway address. If not, perform the following steps:

    1. Log in to the console of the Edge VM as an admin.

    2. Run: stop service dataplane

    3. Run: set interface <mgmt intf> dhcp plane mgmt

    4. Run: start service dataplane

  • Issue 2958032: If you are using NSX-T 3.2 or upgrading to an NSX-T 3.2 maintenance release, the file type is not shown properly and is truncated at 12 characters on the Malware Prevention dashboard.

    On the Malware Prevention dashboard, when you click to see the details of the inspected file, you will see incorrect data because the file type will be truncated at 12 characters. For example, for a file with File Type as WindowsExecutableLLAppBundleTarArchiveFile, you will only see WindowsExecu as File Type on Malware Prevention UI.

    Workaround: Do a fresh NAPP installation with an NSX-T 3.2 maintenance build instead of upgrading from NSX-T 3.2 to an NSX-T 3.2 maintenance release.

  • Issue 2954520: When Segment is created from policy and Bridge is configured from MP, detach bridging option is not available on that Segment from UI.

    You will not be able to detach or update bridging from UI if Segment is created from policy and Bridge is configured from MP.

    If a Segment is created from the policy side, you are advised to configure bridging only from the policy side. Similarly, if a Logical Switch is created from the MP side, you should configure bridging only from the MP side.

    Workaround: You need to use APIs to remove bridging:

    1. Update concerned LogicalPort and remove attachment

    PUT :: https://<mgr-ip>/api/v1/logical-ports/<logical-port-id> with the header X-Allow-Overwrite: true and the attachment removed from the payload (see the sketch after these steps).

    2. DELETE BridgeEndpoint

    DELETE :: https://<mgr-ip>/api/v1/bridge-endpoints/<bridge-endpoint-id>

    3. Delete LogicalPort

    DELETE :: https://<mgr-ip>/api/v1/logical-ports/<logical-port-id>
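    Put together, the three calls might look like this (a sketch only, assuming basic-auth admin access; port-payload.json is the port's GET response with the attachment field removed):

      # 1. Update the logical port to remove the attachment
      curl -k -u admin -X PUT 'https://<mgr-ip>/api/v1/logical-ports/<logical-port-id>' -H 'X-Allow-Overwrite: true' -H 'Content-Type: application/json' -d @port-payload.json

      # 2. Delete the bridge endpoint
      curl -k -u admin -X DELETE 'https://<mgr-ip>/api/v1/bridge-endpoints/<bridge-endpoint-id>'

      # 3. Delete the logical port
      curl -k -u admin -X DELETE 'https://<mgr-ip>/api/v1/logical-ports/<logical-port-id>' -H 'X-Allow-Overwrite: true'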

  • Issue 2919218: Selections made to the host migration are reset to default values after the MC service restarts.

    After the restart of the MC service, all the selections relevant to host migration such as enabling or disabling clusters, migration mode, cluster migration ordering, etc., that were made earlier are reset to default values.

    Workaround: Ensure that all the selections relevant to host migration are performed again after the restart of the MC service.
