VMware NSX-T Data Center 3.1.3.6   |  3 February 2022  |  Build 19278618

Check regularly for additions and updates to these release notes.

What's in the Release Notes

NSX-T Data Center 3.1.3.6 is an express patch release that includes bug fixes. See "Resolved Issues" below for the list of issues resolved in this release. See the VMware NSX-T Data Center 3.1.3 Release Notes for the current known issues.

Important Information about NSX-T Data Center 3.1.3.6

Please be advised that the VMware NSX team has identified an issue with NSX-T Edge in NSX-T Data Center 3.1.3.6, and has decided to withdraw the release from the download pages in favor of NSX-T Data Center 3.1.3.7.

Customers who have downloaded and deployed NSX-T Data Center 3.1.3.6 remain supported but are advised to upgrade to NSX-T Data Center 3.1.3.7. See knowledge base article 87627 for details. 

Document Revision History

February 3, 2022. First edition.
February 14, 2022. Second edition. Added known issues 2909448 and 2914711. Removed issue 2893739.
February 28, 2022. Third edition. Added important information about NSX-T Data Center 3.1.3.6. Removed issue 2882297. Added issue 2891203. Updated issues 2880558, 2882286, 2892565, 2894641.

Resolved Issues

  • Fixed Issue 2868827: NSX for vSphere to NSX-T host migration: in advanced V2T migration mode, DFW-Host-Workload migration may fail because the same VLAN transport zone ID is used in multiple host switches.

    Hosts fail to migrate.

  • Fixed Issue 2880558: Policy API not able to fetch inventory, networking or security details for IPv4-mapped IPv6 addresses: "Index out of sync, please resync via start search resync policy."

    Elasticsearch does not allow IPv4-mapped IPv6 addresses with CIDR notation and throws an exception. You will see "Index out of sync" errors in the UI, which results in a failure to load data. The sketch below illustrates this address format.
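
    For reference, the short Python sketch below (illustrative only, not part of NSX-T) shows what an IPv4-mapped IPv6 address with CIDR notation looks like; this is the address form that triggered the indexing exception:

      import ipaddress

      # An IPv4-mapped IPv6 address embeds an IPv4 address in the ::ffff:0:0/96 range.
      addr = ipaddress.ip_address("::ffff:192.0.2.10")
      print(addr.ipv4_mapped)    # 192.0.2.10

      # The same address written with CIDR notation (for example, in a group or rule
      # definition) is the form that previously produced the "Index out of sync" error.
      net = ipaddress.ip_network("::ffff:192.0.2.10/128")
      print(net)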

  • Fixed Issue 2882281: PSOD on ESXi hosts in NSX-T 3.1.3 - 3.1.3.5 as a result of vMotion import corruption.

    Host reboot required. Data path outage until the host comes online again.

  • Fixed Issue 2882285: Bare metal Edge datapath performance is lower than expected.

    Significantly reduced Edge datapath forwarding performance for some flows.

  • Fixed Issue 2882286: IP checksum error for the inner IP/ESP header when ESP is encapsulated in VXLAN/Geneve.

    Packet loss for connections with Geneve/VXLAN encapsulation.

  • Fixed Issue 2882288: System PSODs when a normal-latency VM is started on a NUMA node on which all the Lcores are designated as high priority.

    The system PSODs.

  • Fixed Issue 2882289: Neighbor entries time out and packets are dropped during neighbor resolution after 1000 neighbor resolutions are reached.

    Delayed connection establishment for new TCP connections. TCP segments are dropped by the DR for existing flows.

  • Fixed Issue 2882290: In multi-VTEP configurations, the Edge may use the MAC address of local VTEP1 in an ICMP reply whose source IP belongs to local VTEP2.

    The ACI may drop the packet because of this MAC/IP mismatch.

  • Fixed Issue 2882295: Management IP lost on in-band interface on reboot of Edge.

    Communication with the Edge is lost on the management interface. You will not be able to SSH into the Edge. No connectivity between the Edge and Managers/Controllers.

  • Fixed Issue 2882636: PSOD when multiple ports share the same MAC address on the same VLAN or VNI and an Edge VM is connected on the same ESX host on an NSX logical switch.

    PSOD of ESX server.

  • Fixed Issue 2883804: NSX-T Manager upgrade from NSX-T 3.0.3 to 3.1.3 fails. During the upgrade, LogicalPortAttachersMigrationTask fails if the LPAttachers table has null entries for a LogicalPort.

    The upgrade cannot proceed because LogicalPortAttachersMigrationTask always fails on the null entries. This can happen when the attachers for a LogicalPort are removed but the LogicalPort itself must remain (delayed deletion case).

  • Fixed Issue 2883811: The failure reason cannot be shown in the pool member status when the LB pool is configured with one monitor and the pool member status is DOWN.

    You cannot see the pool member failure reason via the API when one monitor is configured in the LB pool (see the hypothetical query sketch below).
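
    As a hypothetical illustration only (the endpoint path and field names are assumptions and may vary by NSX-T version), the pool member status that omitted the failure reason can be read with a Python call along these lines:

      import requests

      # Placeholders: manager address, credentials, and pool UUID.
      NSX_MANAGER = "nsx-mgr.example.com"
      POOL_ID = "<pool-uuid>"

      resp = requests.get(
          f"https://{NSX_MANAGER}/api/v1/loadbalancer/pools/{POOL_ID}/status",
          auth=("admin", "<password>"),
          verify=False,  # lab use only; supply a CA bundle in production
      )
      resp.raise_for_status()
      for member in resp.json().get("members", []):
          # Before the fix, the failure reason was missing for DOWN members
          # when the pool had a single monitor configured.
          print(member.get("ip_address"), member.get("status"), member.get("failure_cause"))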

  • Fixed Issue 2885688: Agent NestDB status shows DOWN because NestDB is too busy to respond to the status check call.

    NestDB status is shown as DOWN on the UI.

  • Fixed Issue 2886882: The NestDB process is given insufficient memory on startup because ESX host memory totals are calculated incorrectly.

    Since the NestDB process can go down from having too little memory, BFD traffic can become impacted, and there may also be delays in configuration materializing on the host. This occurs because CCP is no longer able to push configuration to the other LCP daemons running on the host.

  • Fixed Issue 2892565: North-South multicast routed traffic outage experienced by some VMs due to incorrect Flow-cache entry.

    In multicast-routing-enabled LRs, the DR fails to route multicast traffic to some receiver VMs due to an incorrect flow-cache entry action. This affects only receiver VMs running on Edge-offloaded ESX transport nodes.

  • Fixed Issue 2894640: During an NSX-T upgrade, if a VM hosting containers is vMotioned from an old-version transport node to a new-version transport node, it may lose its connection with hyperbus.

    The hyperbus connection for the VM hosting containers is not in the HEALTHY state.

  • Fixed Issue 2894641: PVLAN properties are not cleared when upgrading with rebootless_upgrade=false.

    If a customer upgrades with rebootless_upgrade=false and subsequently reboots the hosts, the hyperbus connections will not come up.

  • Fixed Issue 2895275: When a TCP connection is terminated, it is added to an inactive list, which the firewall periodically cleans up to make room for new connections. The inactive list becomes corrupted and the firewall is unable to clean up connections, which keep consuming memory.

    There is a maximum limit on the total number of connections inside the firewall; once this limit is reached, the firewall accepts no new connections, which results in packet drops. If this issue occurs, the firewall memory pool (pfstatepl3) eventually runs out of memory, no new connections are accepted, and packets are dropped.

  • Fixed Issue 2895991: TCP connections remain beyond their expiration timeout.

    Expired connections prevent new connections from being established.

  • Fixed Issue 2897298: Incomplete firewall service initialization prevented firewall table cleanup, resulting in a very large table size.

    You may not be able to use the REST APIs or the UI if the proton server crashes during compaction.

  • Fixed Issue 2877986: The insert before/after revision APIs do not work as expected for gateway firewall policies.

    Unable to insert before/after existing policies using the API (a hedged API sketch follows below).
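
    As a hedged sketch only (the query parameters and request body shown are assumptions and may differ by NSX-T version), reordering a gateway firewall policy relative to an existing one is typically attempted with the policy revise action, for example:

      import requests

      # Placeholders: manager address, credentials, domain, and policy IDs.
      NSX_MANAGER = "nsx-mgr.example.com"
      AUTH = ("admin", "<password>")
      DOMAIN = "default"
      POLICY_ID = "gw-policy-to-move"
      ANCHOR_PATH = f"/infra/domains/{DOMAIN}/gateway-policies/gw-policy-anchor"

      url = (
          f"https://{NSX_MANAGER}/policy/api/v1/infra/domains/{DOMAIN}"
          f"/gateway-policies/{POLICY_ID}"
      )

      # Read the current policy object, then post it back with the revise action so
      # that it is placed before the anchor policy. Some versions may accept the
      # revise action without a body; sending the current object back is the safer form.
      policy = requests.get(url, auth=AUTH, verify=False).json()
      resp = requests.post(
          url,
          params={"action": "revise", "operation": "insert_before", "anchor_path": ANCHOR_PATH},
          json=policy,
          auth=AUTH,
          verify=False,  # lab use only
      )
      resp.raise_for_status()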

  • Fixed Issue 2899071: VLAN in and VXLAN out IPv6 traffic is dropped on the GRETAP interface because the packet type is not changed to IPv4 after VXLAN encapsulation.

    VLAN in and VXLAN out IPv6 traffic does not work.

  • Fixed Issue 2891203: Federation inter-site communication is down for minutes after an RTEP Edge failover because the MAC table on ESXi is not updated.

    There is a 15-minute north-south traffic blackout until the new Edge entry is populated in the MAC table. The outage occurs when the Edge failover happens.

Known Issues

  • Issue 2909448: Memory leak when sending traffic to L4 load balancer.

    The Edge runs out of memory and traffic stops working after hours of traffic to the L4 load balancer.

    Workaround: See VMware knowledge base article 87627 for more information.

  • Issue 2914711: The datapathd process running on an Edge node crashes and generates a core dump.

    If the issue occurs and a core dump is generated, an Edge failover is triggered, and a traffic outage occurs during that time.

    Workaround: None.
