VMware NSX 4.1.2 | 17 October 2023 | Build 22589037

Check for additions and updates to these release notes.

What's New

NSX 4.1.2 provides a variety of new features for virtualized networking and security across private, public, and multi-cloud environments. Highlights include new features and enhancements in the following focus areas:

  • GRE tunnels - NSX 4.1.2 introduces support for GRE tunnels in default Tier-0 Gateways and Tier-0 VRF Gateways. Dynamic routing (BGP) and static routing are supported over GRE tunnels.

  • VPN support on Tier-0 VRF Gateways - Use VPN on top of VRF. 

  • Distributed IDS/IPS Packet Capture - VMware NSX 4.1.2 supports packet capture capabilities enabled through the Distributed IDS/IPS profiles for capturing traffic on signature trigger. 

In addition, many other capabilities are added in every area of the product. More details are available below.

Layer 2 Networking

  • Edit Default Uplink Profile: This feature allows for the overlay VLAN to be modified in the default uplink profile.

Layer 3 Networking

  • GRE tunnels: NSX 4.1.2 introduces support for GRE tunnels in default Tier-0 Gateways and Tier-0 VRF Gateways. Dynamic routing (BGP) and static routing are supported over GRE tunnels.

Edge Platform

  • Debug Packet Drops on Edge platform: VMware NSX 4.1.2 provides a tool to pinpoint where packet drops occur and to determine whether the Edge platform itself is dropping packets. Granular per-flow filtering is available.

Physical Server

  • Bare Metal Server supports Bond interface on Windows Server: VMware NSX 4.1.2 supports a Bond interface when the Bare Metal server is running the Windows OS.

Intrusion Detection and Prevention (IDS/IPS)

  • Distributed IDS/IPS Packet Capture: VMware NSX 4.1.2 supports packet capture capabilities enabled through the Distributed IDS/IPS profiles for capturing traffic on signature trigger.

Distributed Firewall

  • FQDN filtering: FQDN filtering supports additional regex capabilities for more granularity in FQDN host names.
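
As an illustration of regex-style FQDN matching, here is a minimal sketch using hypothetical patterns; the regex syntax actually accepted by NSX FQDN filtering profiles is defined in the NSX documentation:

```python
import re

# Hypothetical FQDN patterns for illustration only; consult the NSX
# documentation for the regex syntax the FQDN filtering profiles accept.
ALLOWED_FQDN_PATTERNS = [
    r".*\.example\.com",       # any subdomain of example.com
    r"api-[0-9]+\.test\.org",  # numbered API hosts
]

def fqdn_allowed(host: str) -> bool:
    """Return True if the host name fully matches any allowed pattern."""
    return any(re.fullmatch(p, host) for p in ALLOWED_FQDN_PATTERNS)
```

The added granularity mentioned above lets a single pattern cover whole families of host names instead of enumerating each FQDN.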

Network Detection and Response (NDR)

  • Network Detection and Response now works on federated Local Managers on stretched and non-stretched segments.

Distributed Malware Detection and Prevention

  • Distributed Malware Detection and Prevention now works on federated Local Managers on stretched and non-stretched segments. Use it in conjunction with VMware Cloud Foundation and vSphere Site Recovery Manager (SRM) to enable a robust disaster recovery solution that protects workloads across sites.

Note: NSX Malware must be deployed and configured from the Local Manager UI.

  • Enhanced troubleshooting capabilities by providing SVM health logs for NSX tech support bundle. 

Installation and Upgrade

  • New RO memory check to verify host health for NSX in-place upgrade.

  • Easy to access, consolidated troubleshooting information for common issues seen with NSX in-place upgrades.

  • User experience improvements to make Transport Node Profile configuration easier during NSX installation.  

Operations and Monitoring

  • An alarm is added to alert when remote logging is not configured on NSX Manager, Edge, and Host nodes. Remote logging is useful for retaining logs for a longer duration.

  • A Policy API is added to retrieve Edge physical port stats available through the CLI commands get physical-port 'fp-ethN' xstats/stats verbose.


  • VPN support on Tier-0 VRF Gateways: VPN services can now be configured on Tier-0 VRF gateways.

Platform Security

  • NSX Manager now supports the highest version of Transport Layer Security, TLS v1.3, for Web and API communications.


Multi Tenancy

  • Terraform Support for NSX Project: The NSX Terraform Provider is adding support for Projects and configuration within the context of a Project. This allows consumption of NSX through Terraform but within a tenant (Project).

  • Tenant Aware Logging: The labeling of the logs in Project/VPC with the short ID has been extended to include routing and service logs (like NAT, Edge datapath). In addition, it is possible to dedicate a Tier-0/VRF to a Project in order to have its logs labeled with the Project short ID (the configuration of Tier-0/VRF remains up to the Enterprise Admin).

Gateway Firewall: L7 AppID support on Tier-0 Gateway

  • L7 App-ID based Gateway Firewall rules can now be configured using L7 Access profiles on T0 Gateways. 

Feature and API Deprecations, Behavior Changes

Deprecation announcement for NSX Load Balancer

VMware intends to deprecate the built-in NSX load balancer and recommends that customers migrate to NSX Advanced Load Balancer (Avi) as soon as practical. VMware NSX Advanced Load Balancer (Avi) provides a superset of the NSX load balancing functionality, and VMware recommends that you purchase VMware NSX Advanced Load Balancer (Avi) Enterprise to unlock enterprise-grade load balancing, GSLB, advanced analytics, container ingress, application security, and WAF.

We are giving advance notice now to allow existing customers who use the built-in NSX load balancer time to migrate to NSX Advanced Load Balancer (Avi). Support for the built-in NSX load balancer for customers using NSX-T Data Center 3.x will remain for the duration of the NSX-T Data Center 3.x release series. Support for the built-in NSX load balancer for customers using NSX 4.x will remain for the duration of the NSX 4.x release series. Details for both are described in the VMware Product Lifecycle Matrix. We do not intend to provide support for the built-in NSX load balancer beyond the last NSX 4.x release.


Deprecation announcement for NSX Manager APIs and NSX Advanced UIs

NSX has two methods of configuring logical networking and security: Manager mode and Policy mode. The Manager API contains URIs that begin with /api, and the Policy API contains URIs that begin with /policy/api.
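
The prefix distinction above can be expressed as a small helper (a sketch based only on the URI prefixes stated in this section):

```python
def api_flavor(uri: str) -> str:
    """Classify an NSX URI by its prefix: Policy API URIs begin with
    /policy/api, Manager API URIs begin with /api."""
    if uri.startswith("/policy/api"):
        return "policy"
    if uri.startswith("/api"):
        return "manager"
    return "unknown"
```

A check like this can help audit automation scripts for deprecated Manager API usage before the removal described below takes effect.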

Please be aware that VMware intends to remove support of the NSX Manager APIs and NSX Advanced UIs in an upcoming NSX major or minor release, which will be generally available no sooner than one year from the date of the original announcement made on 12/16/2021. NSX Manager APIs that are planned to be removed are marked with "deprecated" in the NSX Data Center API Guide, with guidance on replacement APIs.

It is recommended that new deployments of NSX take advantage of the NSX Policy APIs and NSX Policy UIs. For deployments currently leveraging NSX Manager APIs and NSX Advanced UIs, refer to the Manager to Policy Objects Promotion page and the NSX Data Center API Guide to aid in the transition.

API Deprecation and Behavior Changes

Deprecated API: GET /policy/api/v1/orgs/{org-id}/projects/{project-id}/infra/tier-0s/{tier-0-id}/locale-services/{locale-service-id}/ipsec-vpn-services/{service-id}/summary
Replacement API: GET /infra/tier-0s/<tier-0-id>/ipsec-vpn-services/<service-id>/summary

Deprecated API: GET /policy/api/v1/orgs/{org-id}/projects/{project-id}/infra/tier-1s/{tier-1-id}/locale-services/{locale-service-id}/ipsec-vpn-services/{service-id}/summary
Replacement API: GET /infra/tier-1s/<tier-1-id>/ipsec-vpn-services/<service-id>/summary

Deprecated API: GET /policy/api/v1/orgs/{org-id}/projects/{project-id}/infra/tier-0s/{tier-0-id}/locale-services/{locale-service-id}/ipsec-vpn-services/{service-id}/sessions/{session-id}/statistics
Replacement API: GET /infra/tier-0s/<tier-0-id>/ipsec-vpn-services/<service-id>/sessions/<session-id>/statistics

Deprecated API: POST /policy/api/v1/orgs/{org-id}/projects/{project-id}/infra/tier-0s/{tier-0-id}/locale-services/{locale-service-id}/ipsec-vpn-services/{service-id}/sessions/{session-id}/statistics
Replacement API: GET /infra/tier-0s/<tier-0-id>/ipsec-vpn-services/<service-id>/sessions/<session-id>/statistics

Deprecated API: GET /policy/api/v1/orgs/{org-id}/projects/{project-id}/infra/tier-1s/{tier-1-id}/locale-services/{locale-service-id}/ipsec-vpn-services/{service-id}/sessions/{session-id}/statistics
Replacement API: GET /infra/tier-1s/<tier-1-id>/ipsec-vpn-services/<service-id>/sessions/<session-id>/statistics

Deprecated API: POST /policy/api/v1/orgs/{org-id}/projects/{project-id}/infra/tier-1s/{tier-1-id}/locale-services/{locale-service-id}/ipsec-vpn-services/{service-id}/sessions/{session-id}/statistics
Replacement API: GET /infra/tier-1s/<tier-1-id>/ipsec-vpn-services/<service-id>/sessions/<session-id>/statistics

Deprecated API: GET /policy/api/v1/orgs/{org-id}/projects/{project-id}/infra/tier-0s/{tier-0-id}/locale-services/{locale-service-id}/ipsec-vpn-services/{service-id}/sessions/{session-id}/detailed-status
Replacement API: GET /infra/tier-0s/<tier-0-id>/ipsec-vpn-services/<service-id>/sessions/<session-id>/detailed-status

Deprecated API: GET /policy/api/v1/orgs/{org-id}/projects/{project-id}/infra/tier-1s/{tier-1-id}/locale-services/{locale-service-id}/ipsec-vpn-services/{service-id}/sessions/{session-id}/detailed-status
Replacement API: GET /infra/tier-1s/<tier-1-id>/ipsec-vpn-services/<service-id>/sessions/<session-id>/detailed-status
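
The pattern in the pairs above — dropping the org/project prefix and the locale-services path segment — can be sketched as follows. This is illustrative only; always confirm the replacement URI against the NSX Data Center API Guide, and note that the statistics POST calls are replaced by GET calls:

```python
import re

def replacement_uri(deprecated: str) -> str:
    """Derive the replacement URI from a deprecated project-scoped VPN
    URI by stripping the /orgs/{org}/projects/{proj} prefix and the
    /locale-services/{id} path segment."""
    uri = re.sub(r"^/policy/api/v1/orgs/[^/]+/projects/[^/]+", "", deprecated)
    return re.sub(r"/locale-services/[^/]+", "", uri)
```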

Compatibility and System Requirements

For compatibility and system requirements information, see the VMware Product Interoperability Matrices and the NSX Installation Guide.

Upgrade Notes for This Release

For instructions about upgrading NSX components, see the NSX Upgrade Guide.

This release is not supported for NSX Cloud customers deployed with AWS/Azure workloads. Please do not upgrade your environment in this scenario.

Note: Customers upgrading to NSX 3.2.1 or below are recommended to run the NSX Upgrade Evaluation Tool before starting the upgrade process. The tool is designed to ensure success by checking the health and readiness of your NSX Manager repository prior to upgrading. For customers upgrading to NSX 3.2.2 or higher, the tool is already integrated into the Upgrade workflow as part of the upgrade pre-checks; no separate action is needed.
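
The version cutoff in the note above can be sketched as a minimal check, under the assumption of plain dotted numeric version strings:

```python
def needs_standalone_evaluation_tool(target_version: str) -> bool:
    """Targets 3.2.1 and below need the separately run NSX Upgrade
    Evaluation Tool; from 3.2.2 on it runs as a built-in pre-check.
    Assumes a plain dotted numeric version string like '4.1.2'."""
    return tuple(int(p) for p in target_version.split(".")) <= (3, 2, 1)
```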

NSX Application Platform

NSX Application Platform (NAPP) version 4.1.1 is interoperable with NSX 4.1.2. You can deploy and manage NAPP 4.1.1 from NSX 4.1.2, including installation of the following features:

  • NSX Intelligence

  • NSX Network Detection and Response

  • NSX Malware Prevention

  • NSX Metrics

Available Languages

NSX has been localized into multiple languages: English, German, French, Japanese, Simplified Chinese, Korean, Traditional Chinese, Italian, and Spanish. Because NSX localization utilizes the browser language settings, ensure that your settings match the desired language.

Document Revision History

October 17, 2023: Initial edition

October 26, 2023: Added known issue 3298108.

November 3, 2023: Added resolved issue 3221388.

November 6, 2023: Added known issue 3272782.

Resolved Issues

  • Fixed Issue 3221388: NSX Cloud Ubuntu-based Azure VMs cannot be upgraded from 4.0.x to 4.1.x with the regular workflow.

    Upgrade of Ubuntu VMs on Azure is impacted.

  • Fixed Issue 3100299: Mixed Group reevaluation is slow and causing huge group request queue to build up.

    There is a delay in groups becoming populated with members. Groups must be populated with members before users can apply DFW rules, so users have to wait a long time to apply DFW rules with groups.

  • Fixed Issue 3271585: NSX-CLI generates core dump if user terminates the session using "ctrl + \" keys or SIGQUIT signal.

    Core dump gets generated.

  • Fixed Issue 3271487: Once the active GM PI certificate expires, some synchronization of information between GM and LM stops working, such as status refreshing and the UI Site Switcher drop-down.

    If the certificates associated with principal identities are forcibly deleted, the federated setup ends in a bad state. When this happens, users can no longer delete the affected principal identities, register new principal identities, or replace existing principal identities' certificates.

  • Fixed Issue 3270875: Host migration fails when chain certificates are present on the ESX.

    Host migration will fail.

  • Fixed Issue 3269113: Service Deployment is visible on the UI with the status as Unknown after deletion. Deleting it again throws an invalid ID error.

    Users are unable to remove the current Service Deployment and deploy a new one on their application, leaving them working without security protection.

  • Fixed Issue 3268748: Unable to rename admin user during the Edge deployment.

    User name customization does not work as documented.

  • Fixed Issue 3266199: User is not able to remove edge node because it reports being used by mdproxy.

    The mdproxy cannot be deleted, and as a result the edge node cannot be removed either.

  • Fixed Issue 3265997: SNMP trap string cannot be generated successfully when an alarm's context data includes some non-string characters.

    Users might miss some alarms' SNMP traps.

  • Fixed Issue 3256536: Audit log is truncated when a log message is split into several logs.

    Users might miss some important information when they view the split logs in nsx-audit-write.log/nsx-audit.log.

  • Fixed Issue 3254236: Service profiles stuck 'in progress'.

    Users cannot consume the service profile.

  • Fixed Issue 3252091: LM-VRF Service Interface creation without the edge node workflow does not work from the UI.

    Users will not be able to create VRF > Service interface without the edge node using the UI.

  • Fixed Issue 3251805: NSX 4.x reverse-proxy fails to load API certificate with extra data.

    After applying the new certificate (with extra information in PEM), UI fails to pick up the new certificate.  If envoy is restarted, UI and API endpoint stops accepting requests.  Once the system gets in this state, applying a different certificate won't work even though the API shows the new certificate has been applied.  Envoy will not pick up the new certificate.

  • Fixed Issue 3251767: After upgrading NSX, changing an existing tier-0 to Stateful Active-Active mode and then connecting any tier-1 to this tier-0 will fail.

    Customer won't be able to connect a tier-1 gateway to tier-0 after upgrade and switching the tier-0 to Stateful Active-Active.

  • Fixed Issue 3248151: When a service is deleted, the partner_channel_down alarm is raised and it remains open even after the service is redeployed.

    An alarm corresponding to the deleted service instance persists.

  • Fixed Issue 3237041: Migration Coordinator showing "not set" in connections for some edges.

    Customer cannot view the connected edges in UI in the Define Topology stage.

  • Fixed Issue 3235510: Antrea-interworking fails to run, causing no pods to be visible in the NSX inventory.

    Antrea-interworking will fail to run and this feature cannot be used.

  • Fixed Issue 3230873: During migration from NSX-V to NSX-T, while importing the DFW from NSX-V to NSX-T, the configuration import failed with error "Config collection failed Failed to fetch the name of a VM in rule".

    NSX-V to NSX-T migration is blocked on vCenter/NSX DB check.

  • Fixed Issue 3257182: A stale MAC entry pointing to a wrong remote rtep-group causes packet drops.

    Cross-site traffic can be dropped because of these stale entries.

  • Fixed Issue 3245179: VDPI crash.

    FQDN resolution rule application failure. VDPI restart.

  • Fixed Issue 3223368: VM status is erroneously reported to be "Needs Review" and VM is quarantined even though VM has no error.

    Users will see a false "Needs Review" status on CSM and the VM will be quarantined.

  • Fixed Issue 3221933: On the UI, in the Tier-1 Stateful A/A > Linked Tier-0 Gateway dialog, the Tier-0 Router Link information is not visible.

    For AA Tier-1, users will have to view the Router Link information from the Additional settings section in the UI.

  • Fixed Issue 3215326: Bulk VPC/Subnet create/update/delete operation using 20+ parallel calls fails with conflicting transaction errors.

    Users will not be able to create VPCs/Subnets in bulk using 20+ parallel calls.

  • Fixed Issue 3257024: Edge password validation for edge nodes that are older than 2.5 blocks a user from resolving the Mismatch alarm for differences between NSX and vSphere state. It also blocks the PUT and refresh API since these operations trigger intent validation.

    Lifecycle operations on the edge VM from NSX Manager are blocked by password validation. The password being validated is no longer in use; it was saved from the initial edge deployment.

  • Fixed Issue 3250981: Users with specific roles do not have permissions on effective ip-groups, identity-groups, and physical-servers APIs.

    If users with any of the following roles call ip-groups, identity-groups, and physical-servers APIs, the APIs return 401 error: LB Admin, Security Operator, LB Operator, Netx Partner Admin, Security Admin, VPN Admin, Network Operator, Network Admin, and GI Partner Admin.

  • Fixed Issue 3284692: Wrong linkdown alarm of a bare metal edge is fired every four hours on NSX Manager.

    It is a false positive alarm, caused by a rare collision between an alarm event UUID and a transport node UUID.

  • Fixed Issue 3283252: A NullPointerException was observed in CCP's TN disconnection alarm handling when a TN was removed, causing a stale TN disconnection alarm that was never removed from the NSX Manager UI.

    The TN and CCP disconnection alarm cannot be resolved for the stale TN.  Control_channel_to_Transport_node_down alarm remains open for stale TN in NSX Manager UI.

  • Fixed Issue 3278282: IDPS process coredumps due to going out of memory in heavy traffic with the packet capture (PCAP) feature enabled.

    Without this fix, which increases the allocated memory from 1 GB to 2 GB, customers might see IDPS coredump/application-crashed alarms on the NSX Manager, and the IDPS process might be momentarily unavailable during the coredump and reboot operation.

  • Fixed Issue 3277849: The Edge datapath process crashes on Sandy Bridge, Ivy Bridge, and Westmere CPUs.

    Edge dataplane non-functional.

  • Fixed Issue 3261883: When upgrading from a version prior to 4.1.1, a new trust-store called ".cacerts_store" should be created. If the system has done a restore in the past, the trust-store is not created.

    Telemetry collection from LM to SaaS will not take place.

  • Fixed Issue 3261843: After upgrade, the MTU value is reset to 1700 irrespective of the value which was there before upgrade.

    If the customer has overridden MTU value, then it will get reset to default value of 1700 after the upgrade.

  • Fixed Issue 3261068: In Federation mixed version case, LM-to-LM connection keeps resyncing and throws exception endlessly. No data plane impact.

    The Manager syslog is continuously filled with IllegalStateException entries.

  • Fixed Issue 3259679: CLI 'get routing-domain' and its variants fail with an error while fetching information regarding global routing-domain.

    An error is thrown if a user has a global routing-domain configured and wants to use the CLI for any debugging purpose of this particular routing-domain.

  • Fixed Issue 3241468: For north-south traffic, users have a unique NAT IP on each Tier-1 for the same connected segment subnet, with the advertisement flag enabled on all Tier-1s connected to the Tier-0.

    VMs experience a network outage.

  • Fixed Issue 3235548: vLCM-based NSX upgrade failed because the netopad service was unable to stop.

    NSX upgrade failed with vLCM.

  • Fixed Issue 3233233: cloud_admin user is unable to add or edit tags on VMs.

    Unable to add or edit tags on VMs.

  • Fixed Issue 3223377: pNIC/bond status down in security only deployments with LAG.

    The issue does not impact the actual DFW feature or VM connectivity.

  • Fixed Issue 3187879: When VTEP table is downloaded on the edge, the 'tep_label' field is 0.

    There is no functional impact. The issue is cosmetic: an incorrect value is displayed for 'tep_label'.

  • Fixed Issue 3185193: A change in the ESX hostd API behavior requires that vswitch configures the "com.vmware.common.opaqueDvs.status.component.vswitch" property as a 'CONFIG' property instead of a 'RUNTIME' property.

    User cannot delete NSX on a TN using 'del nsx'.

  • Fixed Issue 3154577: The NSX resync script may incorrectly delete ports connected to NSXPGs.

    The VM needs to be reconfigured to correctly connect to the DVPG.

  • Fixed Issue 3152082: Memory fragmentation is observed when an l7-access-profile is continuously added and deleted over the course of 24 hours or more.

    The issue can occur in environments when there is an addition and deletion of l7-access-profile over a long time.

  • Fixed Issue 3115627: Traffic hits the failure policy if the SVM vMotions.

    Upon SVM vmotion, existing flows might be dropped. New flows will succeed.

  • Fixed Issue 3091131: Any attempt to use vIDM accounts that relies on UPN authentication through API with NSX fails.

    Only vIDM users using SAMAccountName can authenticate to NSX through API. UI is unaffected.

  • Fixed Issue 3080916: IPv6 ECMP traffic over VTI does not work as multiple vtis have same link local address on same edge and across edges too.

    IPv6 ECMP traffic over VTI is not working.

  • Fixed Issue 3234358: The child port realization is not succeeding because it's being invoked prior to the parent port invocation.

    The child port is realized only five minutes after the parent port realization.

  • Fixed Issue 3250489: Certificate does not get restored properly.

    Some GM functionality that requires API calls to the LM will not work.

  • Fixed Issue 3233914: NSX reverse-proxy (due to bug in boringssl) fails to load a certificate if its length is multiple of 253. The certificate is of service_type CLIENT_AUTH.

    NSX reverse-proxy (envoy) fails to start after a restart (including upgrade). This will cause API and UI to be inaccessible.

    Workaround: Log in to NSX unified appliance and delete the certificate using command:

    curl -H "x-nsx-username: admin" -X DELETE <cert-id>

    Where cert-id is the ID of the certificate of service_type CLIENT_AUTH whose length is a multiple of 253. (The certificate length is counted on the pem_encoded string of the API /api/v1/trust-management/certificates, excluding "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----".) Make sure the certificate to be deleted is of service_type CLIENT_AUTH.
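
One plausible reading of the length rule above, sketched in code (this assumes the length is the character count of the pem_encoded string minus the BEGIN/END marker lines; the exact counting used by the affected code path may differ):

```python
def pem_body_length(pem_encoded: str) -> int:
    """Length of the pem_encoded string excluding the
    -----BEGIN CERTIFICATE----- / -----END CERTIFICATE----- lines."""
    body_lines = [line for line in pem_encoded.strip().splitlines()
                  if not line.startswith("-----")]
    return sum(len(line) for line in body_lines)

def affected_by_3233914(pem_encoded: str) -> bool:
    """True if the body length is a non-zero multiple of 253."""
    n = pem_body_length(pem_encoded)
    return n > 0 and n % 253 == 0
```

A check like this can be run against each CLIENT_AUTH certificate before deciding which one to delete.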

  • Fixed Issue 3245222: NSX Manager upgrade dry run tool fails at InternalLogicalPortMigrationTask.

    NSX Manager upgrade is blocked.

  • Fixed Issue 3249446: Service Insertion North South Active/Standby (HA) deployments triggers alarms even if deployment is successful.

    The alarms encourage users to remove deployments even though this is not required, so users may delete deployments for no reason.

  • Fixed Issue 3014499: Powering off an Edge handling cross-site traffic causes disruption to some flows.

    Some cross-site traffic stopped working.

Known Issues

  • Issue 3272782: After a host upgrade through baseline remediation, the TN state of hosts is shown as install failed, with errors in the 'Configuration complete' step. The error message shows "Node has invalid version of software nsx-monitoring..." for all builtin_ids of the host.

    If a user monitors the status of the OS upgrade through automation, there is a chance of incorrect reporting, but the resolver workflow sorts the issue out.

    Workaround: None.

  • Issue 3298108: During maintenance mode upgrade to NSX 4.1.2 with ESX version at 8.* or ESX version upgrade to 8.* with NSX version at 4.1.2, underlay gateway information is lost, resulting in overlay datapath outage.

    Downtime due to overlay VM traffic outage may occur.

    Workaround: See knowledge base article 95306 for details.

  • Issue 3289085: After upgrading from NSX 4.0.1 or NSX 3.2.3 to NSX 4.1.2, the NSX Intelligence data collection service gets disabled on some of the ESX transport nodes (TNs).

    The NSX Intelligence Data Collection service is disabled on a few hosts, or the hosts and cluster of hosts are not visible on the Data Collection UI after upgrading from NSX 4.0.1 or NSX 3.2.3 to NSX 4.1.2. There are no traffic flows being reported from some of the hosts. The Data Collection toggle for the affected hosts or clusters are not available on the System > Settings > NSX Intelligence UI.

    Workaround: If only a few transport node hosts are not reporting network traffic flows, navigate to the System > Settings > NSX Intelligence UI and toggle the Deactivate/Activate toggle for hosts that are behaving incorrectly. This action should reset the data collection configuration on the affected hosts.

    If no hosts or clusters are visible on the System > Settings > NSX Intelligence UI, use the following API calls to resolve the issue.

    1. Send the following get request to fetch the cluster ID.

      GET https://{{NSX-manager_ip}}/policy/api/v1/infra/sites/napp/registration
    2. Set the is_intelligence_enabled property to false by sending the following patch request. In the following example, the cluster_id value eb663da2-e0ee-42d0-b5ad-c66b48e159f8 is the value returned from step 1 above.

      PATCH https://{{NSX-manager_ip}}/policy/api/v1/infra/sites/napp/registration/{{cluster-id}}
      {
        "cluster_id": "eb663da2-e0ee-42d0-b5ad-c66b48e159f8",
        "is_intelligence_enabled": false,
        "id": "eb663da2-e0ee-42d0-b5ad-c66b48e159f8"
      }

    3. Reset is_intelligence_enabled to true by sending the following patch request.

      PATCH https://{{NSX-manager_ip}}/policy/api/v1/infra/sites/napp/registration/{{cluster-id}}
      {
        "cluster_id": "eb663da2-e0ee-42d0-b5ad-c66b48e159f8",
        "is_intelligence_enabled": true,
        "id": "eb663da2-e0ee-42d0-b5ad-c66b48e159f8"
      }
  • Issue 3007558: APP_HTTPS detected on BITDEFENDER Flow.

    HTTPS rule enforced instead of Bit Defender.

    Workaround: None.

  • Issue 3273294: Members in a group use the short IPv6 address format, whereas earlier releases used the long format.

    There is no functional or security impact. It is a visibility-related change of behavior.

  • Issue 3268012: The special wildcard character "^" in Custom Fully Qualified Domain Name (FQDN) values is available starting from GM version 4.1.2. In federation deployments where LMs/sites are on lower versions, GM-created firewall rules that consist of context profiles containing a Custom FQDN with "^" will have nondeterministic behavior on the datapath.

    Nondeterministic datapath behavior for GM-created firewall rules whose context profiles contain a Custom FQDN with "^".


    Workaround:

    1. If feasible, remove or update the custom FQDN to remove "^" on the 4.1.2 GM, the context profile consuming that custom FQDN, or the rules consuming the context profile.

    2. If step 1 is not feasible, upgrade the lower-version LMs (4.1.1) to version 4.1.2.

  • Issue 3248324: Outbound SMTP emails with attachments of 48 KB or larger time out when the DFW is in the datapath.

    Email traffic with larger attachment size fails.


    Workaround: Exclude the SMTP server VM from the DFW for the workflow to complete, or apply stateless DFW rules to the two interfaces of the SMTP server VM.

  • Issue 3242530: New NSX-T Segments are not appearing in vCenter.

    Unable to deploy new segments.


    Workaround: Export and import the DVS without preserving DVS IDs.

  • Issue 3227013: Unknown status for TN shown intermittently.

    LM UI shows wrong status of TN.


    Workaround: None. The status correction happens without any intervention.

  • Issue 3278718: Failure in packet capture (PCAP) export if the PCAP file has not been received by the NSX Manager.

    Users will not be able to export the requested PCAPs as the request will fail.


    Workaround: In most cases, this issue can be avoided by waiting a few seconds and then exporting the PCAP files.

  • Issue 3261593: IDFW alarms will be reset after upgrade.

    After upgrade, the existing alarms will be reset. These alarms will be re-created if the issues remain and the corresponding operations are performed.

    Workaround: None.

  • Issue 3233352: Request payload validations (including password strength) are bypassed on redeploy.

    The alarm cannot be resolved, and editing the TN configuration is not allowed until the password is fixed.


    Workaround: Fix the invalidated passwords by using the POST API addOrUpdatePlacementReferences documented at https://confluence.eng.vmware.com/x/1N_MZw.

  • Issue 3223107: When BGP receives the same prefix from multiple neighbors and the nexthop is in the same subnet, the route keeps flapping and users see continuous addition and deletion of the prefix.

    Traffic drop for prefixes that are getting continuously added/deleted.


    Workaround: Add an inbound routemap that filters the BGP prefix that is in the same subnet as the static route nexthop.

  • Issue 3275502: The UDP checksum is computed as 0x0 when it should be 0xFFFF according to RFC 768.

    Customers who use physical NICs that do not support hardware checksum offload will see intermittent traffic issues on UDP traffic over IPv6.
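
The RFC 768 rule behind this issue, as a sketch: an all-zero checksum field means "no checksum supplied", so a computed result of zero must be transmitted as all ones.

```python
def udp_transmit_checksum(ones_complement_sum: int) -> int:
    """Fold the computed one's-complement sum into the value placed in
    the UDP checksum field per RFC 768: a computed checksum of zero is
    transmitted as 0xFFFF, since 0x0000 means 'no checksum supplied'."""
    checksum = (~ones_complement_sum) & 0xFFFF
    return 0xFFFF if checksum == 0 else checksum
```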

  • Issue 3262712: An IPv4-compatible IPv6 address of the format ::<ipv4> gets converted to its equivalent IPv6 address in the effective membership API response.

    There is no functional or security impact. The effective membership API response for IPv4-compatible IPv6 addresses will be different.


    Workaround: None. This is a change of behavior introduced in NSX 4.1.2.
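
The short vs. long form difference can be seen with Python's standard ipaddress module (an illustration using the documentation address 192.0.2.1, not an address from these notes):

```python
import ipaddress

# '::192.0.2.1' is an IPv4-compatible IPv6 address; the same address can
# be written in compressed (short) or exploded (long) form.
addr = ipaddress.IPv6Address("::192.0.2.1")
short_form = addr.compressed
long_form = addr.exploded

# Both spellings parse to the same address, so group membership is
# unaffected; only the displayed format differs.
same = ipaddress.IPv6Address(short_form) == ipaddress.IPv6Address(long_form)
```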

  • Issue 3261528: An LB Admin is able to create a Tier-1 Gateway, but when deleting the Tier-1 Gateway the page redirects to the login page and the LB Admin must log in again. After logging in, the Tier-1 Gateway is not deleted from the list/table.

    LB Admins cannot delete the Tier-1 Gateways they created.


    Workaround: Log in as one of the following users:

    enterprise_admin, cloud_admin, site_reliability_engineer, network_engineer, security_engineer, org_admin, project_admin, or vpc_admin (vpc_admin to delete the security-config policy resource).

  • Issue 3236772: After removing the vIDM configuration, logs still show background tasks attempting to reach the invalid vIDM.

    Logs for NAPI will show the following error message after vIDM configuration is removed: Error reaching given VMware Identity Manager address <vIDM-FQDN> | [Errno -2] Name or service not known.

    Workaround: None.

  • Issue 2787353: Host transport node (TN) creation via vLCM workflow fails when host has undergone specific host movements in VC.

    Users will not be able to create a host TN.

    Workaround: Follow the regular resolver workflow at the vLCM cluster level from the NSX UI.

  • Issue 3167100: New tunnel configuration takes several minutes to be observed from the UI.

    It takes several minutes to observe the new tunnel information after configuring the host node.

    Workaround: None.

  • Issue 3089238: Unable to register vCenter on NSX manager after the NSX-T extension is removed from vCenter on an NSXe setup.

    Unable to register vCenter on NSX Manager after removing the extension from vCenter. This disrupts communication between vCenter and the NSX Manager.

    Workaround: See knowledge base article 90847.

  • Issue 3082587: Communication between SHA and the metrics MUX fails while checking the host name.

    The metrics transmission fails from SHA to metrics MUX.

    Workaround: See KB article 93896 for details.

  • Issue 3245183: The "join CSM" command adds CSM to the MP cluster, but does not add the Manager account on CSM.

    It is not possible to continue with any other CSM work unless the Manager account is added on CSM.


    Workaround:

    1. Run the join command without including CSM login credentials.


      join <manager-IP> cluster-id <MP-cluster-ID> username <MP-username> password <MP-password> thumbprint <MP-thumbprint>

    2. Add NSX Manager details in CSM through UI.

      a. Go to System -> Settings.

      b. Click Configure on the Associated NSX Node tile.

      c. Provide NSX Manager details (username, password, and thumbprint).

  • Issue 3214034: Internal T0-T1 transit subnet prefix change after tier-0 creation is not supported by ESX datapath from Day 1.

    In cases where a tier-1 router is created without an SR, traffic loss can occur if the transit subnet IP prefix is changed.

    Workaround: Instead of changing the transit subnet IP prefix, delete the Logical Router Port and add a new one with the new transit subnet IP.

  • Issue 3248603: NSX Manager file system is corrupted or goes into read-only mode.

    In /var/log/syslog, you may see log messages similar to the following lines.

    2023-06-30T01:34:55.506234+00:00 nos-wld-nsxtmn02.vcf.netone.local kernel - - - [6869346.074509] sd 2:0:1:0: [sdb] tag#1 CDB: Write(10) 2a 00 04 af de e0 00 02 78 00

    2023-06-30T01:34:55.506238+00:00 nos-wld-nsxtmn02.vcf.netone.local kernel - - - [6869346.074512] print_req_error: 1 callbacks suppressed

    2023-06-30T01:34:55.506240+00:00 nos-wld-nsxtmn02.vcf.netone.local kernel - - - [6869346.074516] print_req_error: I/O error, dev sdb, sector 78634720

    2023-06-30T01:34:55.513497+00:00 nos-wld-nsxtmn02.vcf.netone.local kernel - - - [6869346.075123] EXT4-fs warning: 3 callbacks suppressed

    2023-06-30T01:34:55.513521+00:00 nos-wld-nsxtmn02.vcf.netone.local kernel - - - [6869346.075127] EXT4-fs warning (device dm-8): ext4_end_bio:323: I/O error 10 writing to inode 4194321 (offset 85286912 size 872448 starting block 9828828)

    The appliance may not function normally.

    Workaround: See knowledge base article 93856 for details.

  • Issue 3145013: NCP pod deletion fails because of stale LogicalPorts on NSX.

    NCP pod deletion could become stuck.

    Workaround: Manually clean up the stale LogicalPorts on the underlying LogicalSwitch.
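    A manual cleanup along these lines can be driven through the NSX Manager REST API. The sketch below only builds the requests and does not send them; the manager address, credentials, and port ID are placeholders, and the /api/v1/logical-ports endpoints are assumed from the NSX-T Manager API rather than stated in this document.

    ```python
    # Hedged sketch: constructing NSX Manager API requests to remove a
    # stale logical port. All identifiers below are placeholders.
    import urllib.request

    def logical_port_url(manager, port_id=None, detach=False):
        """Build a /api/v1/logical-ports URL.

        detach=True appends ?detach=true, which forces deletion of a
        port that still reports an attachment (i.e., a stale port).
        """
        url = f"https://{manager}/api/v1/logical-ports"
        if port_id:
            url += f"/{port_id}"
        if detach:
            url += "?detach=true"
        return url

    def delete_stale_port(manager, auth_header, port_id):
        """Prepare (but do not send) the DELETE for one stale port."""
        return urllib.request.Request(
            logical_port_url(manager, port_id, detach=True),
            method="DELETE",
            headers={"Authorization": auth_header},
        )

    # The caller would pass the prepared request to urllib.request.urlopen
    # (with appropriate TLS handling for the manager's certificate).
    req = delete_stale_port("nsx-mgr.example.com", "Basic <credentials>", "lp-1234")
    ```

    Listing ports first (GET on the same base URL, filtered by the affected LogicalSwitch) before issuing any DELETE is the safer order of operations.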

  • Issue 3245645: The Datapath CPU usage graph on the UI does not reflect the correct values.

    The time series database values shown on the graph are incorrect. This may impact diagnosis of high CPU usage over a period of time.

    Workaround: You can use the instantaneous Datapath CPU stats shown on the left side of the graph.

  • Issue 3221820: User is allowed to edit the TZP ID when updating the transport node via API.

    The user will not be able to make subsequent updates to the transport node.

    Workaround: Contact VMware support.

  • Issue 3069003: Excessive LDAP operations on customer LDAP directory service when using nested LDAP groups.

    High load on LDAP directory service in cases where nested LDAP groups are used.

    Workaround: For vROPS prior to 8.6, use the "admin" local user instead of an LDAP user.

  • Issue 3010038: On a two-port LAG that serves Edge Uniform Passthrough (UPT) VMs, if the physical connection to one of the LAG ports is disconnected, the uplink will be down, but Virtual Functions (VFs) used by those UPT VMs will continue to be up and running as they get connectivity through the other LAG interface.

    No impact.

    Workaround: None.

  • Issue 2490064: Attempting to disable VMware Identity Manager with "External LB" toggled on does not work.

    After enabling VMware Identity Manager integration on NSX with "External LB", if you attempt to then disable integration by switching "External LB" off, after about a minute, the initial configuration will reappear and overwrite local changes.

    Workaround: When attempting to disable vIDM, do not toggle the External LB flag off; only toggle off vIDM Integration. This will cause that config to be saved to the database and synced to the other nodes.

  • Issue 2558576: Global Manager and Local Manager versions of a global profile definition can differ and might have an unknown behavior on Local Manager.

    Global DNS, session, or flood profiles created on Global Manager cannot be applied to a local group from the UI, but can be applied from the API. Hence, an API user can accidentally create profile binding maps and modify the global entity on Local Manager.

    Workaround: Use the UI to configure the system.

  • Issue 2871440: Workloads secured with NSX Security on vSphere dvPortGroups lose their security settings when they are vMotioned to a host connected to an NSX Manager that is down.

    For clusters installed with the NSX Security on vSphere dvPortGroups feature, VMs that are vMotioned to hosts connected to a downed NSX Manager do not have their DFW and security rules enforced. These security settings are re-enforced when connectivity to NSX Manager is re-established.

    Workaround: Avoid vMotion to affected hosts when NSX Manager is down. If other NSX Manager nodes are functioning, vMotion the VM to another host that is connected to a healthy NSX Manager.

  • Issue 2871585: Removal of a host from a DVS and DVS deletion are allowed for DVS versions earlier than 7.0.3 after the NSX Security on vSphere DVPortgroups feature is enabled on clusters using the DVS.

    You may have to resolve any issues in transport node or cluster configuration that arise from a host being removed from DVS or DVS deletion.

    Workaround: None.
