VMware NSX-T Data Center 2.5.2.2 | 13 October 2020 | Build 17003648
Check regularly for additions and updates to these release notes.
What's in the Release Notes
NSX-T Data Center 2.5.2.2 is an express patch release that comprises bug fixes only. See "Resolved Issues" below for the list of issues resolved in this release. See the VMware NSX-T Data Center 2.5.2 Release Notes for current known issues.
Document Revision History
13 October 2020. First edition.
Resolved Issues
- Fixed Issue 2425595: NTLM is not working with persistence configuration.
When a virtual server is configured with both NTLM and persistence (cookie persistence or source IP persistence), NTLM traffic fails on the second request.
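The failure mode can be illustrated with a small self-contained sketch (all names here are hypothetical, not NSX-T code): NTLM authenticates the TCP connection itself through a multi-step handshake, so per-request balancing that moves the second handshake message onto a different backend connection breaks authentication.

```python
# Hypothetical sketch: NTLM is connection-oriented, so the Type-1 and
# Type-3 handshake messages must reach the backend over the same
# connection. Per-request balancing breaks the second request.

class Backend:
    """Toy backend that tracks NTLM handshake state per connection."""
    def __init__(self):
        self.handshake_started = set()   # connection ids that sent Type-1

    def handle(self, conn_id, msg):
        if msg == "NTLM_TYPE1":
            self.handshake_started.add(conn_id)
            return "NTLM_TYPE2_CHALLENGE"
        if msg == "NTLM_TYPE3":
            # Type-3 is only valid on the connection that saw Type-1.
            return "200 OK" if conn_id in self.handshake_started else "401 Unauthorized"
        return "401 Unauthorized"

def relay(backend, pick_conn, messages):
    """Send each handshake message over the connection chosen by pick_conn."""
    return [backend.handle(pick_conn(i), msg) for i, msg in enumerate(messages)]

msgs = ["NTLM_TYPE1", "NTLM_TYPE3"]

# Sticky: both messages reuse connection 0, so the handshake succeeds.
sticky = relay(Backend(), lambda i: 0, msgs)

# Per-request balancing: the second message lands on a new connection.
balanced = relay(Backend(), lambda i: i, msgs)

print(sticky)    # ['NTLM_TYPE2_CHALLENGE', '200 OK']
print(balanced)  # ['NTLM_TYPE2_CHALLENGE', '401 Unauthorized']
```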
- Fixed Issue 2572052: Scheduled backups might not get generated.
In some corner cases, scheduled backups are not generated.
- Fixed Issue 2603919: BFD tunnels between the bare metal edge and VM/BM edges are down.
The BFD session is in the Down state on the bare metal edge and in the Init state on the peer edge. On the bare metal edge, the output of the CLI command "get bfd session stats" shows packet drops due to a null BFD session (rx_drop_null_bfd_session). In the NSX UI, the edge shows up in a degraded state.
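The Down/Init pairing of states is what the BFD state machine from RFC 5880 predicts when one side silently drops every packet it receives; a minimal sketch (the helper below is hypothetical, not NSX-T code):

```python
# Minimal sketch of the BFD session state machine (RFC 5880, states
# Down/Init/Up): a session stuck in Down on one side leaves its peer
# waiting in Init, matching the symptom in this issue.

def bfd_next_state(state, received):
    """Return the next BFD state given the state reported by the peer."""
    if state == "Down":
        if received == "Down":
            return "Init"      # peer is down too; signal willingness
        if received == "Init":
            return "Up"        # peer saw us; session comes up
        return "Down"
    if state == "Init":
        if received in ("Init", "Up"):
            return "Up"
        return "Init"          # keep waiting for the peer
    if state == "Up":
        return "Down" if received == "Down" else "Up"
    return state

# Healthy bring-up: both sides converge to Up.
a, b = "Down", "Down"
a = bfd_next_state(a, b)       # a: Down -> Init
b = bfd_next_state(b, a)       # b: Down, received Init -> Up
a = bfd_next_state(a, b)       # a: Init, received Up -> Up
print(a, b)                    # Up Up

# Failure mode from this issue: the bare metal edge drops all received
# BFD packets (rx_drop_null_bfd_session), so it stays Down while the
# peer, which still receives the Down packets, gets stuck in Init.
edge, peer = "Down", "Down"
for _ in range(3):
    peer = bfd_next_state(peer, edge)   # peer: Down -> Init, then stays Init
    # the edge drops everything it receives, so its state never advances
print(edge, peer)              # Down Init
```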
- Fixed Issue 2605659: Packets are forwarded to pool members on the wrong port when the NSGroup for the server pool is not statically configured, the rule action is "select pool" in the forwarding phase, and there is no default pool for the virtual server.
After the first non-matching packet, matched packets are forwarded to the backend server on port 80 instead of the configured port.
- Fixed Issue 2621322: HTTP health check does not work when the HTTP content spans multiple TCP segments.
The Load Balancer cannot check the backend server status based on the HTTP content.
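The underlying requirement can be sketched as follows (hypothetical names, not the load balancer's code): a content-match health check has to reassemble the response from all TCP segments before matching, because the match string may straddle a segment boundary.

```python
# Hypothetical sketch: a health check that matches on HTTP content must
# buffer until the full body has arrived, since the response may be
# split across multiple TCP segments.

def body_matches(segments, expected):
    """Reassemble TCP segments, then match against the HTTP body."""
    raw = b"".join(segments)
    header, _, body = raw.partition(b"\r\n\r\n")
    return expected in body

segments = [b"HTTP/1.1 200 OK\r\nContent-Length: 15\r\n\r\nser",
            b"ver healthy!"]

print(body_matches(segments, b"healthy"))   # True after reassembly
# Matching only the first segment would miss the content:
print(b"healthy" in segments[0])            # False
```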
- Fixed Issue 2645484: Load Balancer health check does not work for body content matching when the HTTP response uses chunked encoding.
When the health-check HTTP response from the backend server contains a chunk header, the pool member status cannot come up even though the backend server is up and available.
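A content match must first decode chunked transfer encoding (RFC 7230), because the chunk-size lines interleaved with the body can defeat a raw byte match; a minimal hypothetical sketch:

```python
# Hypothetical sketch: decode an HTTP/1.1 chunked-encoded body before
# matching; chunk-size lines inside the raw bytes break a naive match.

def dechunk(body):
    """Decode a chunked-encoded body into the raw payload."""
    out, i = b"", 0
    while i < len(body):
        eol = body.index(b"\r\n", i)
        size = int(body[i:eol], 16)          # chunk-size line is hex
        if size == 0:
            break                            # terminating zero-size chunk
        start = eol + 2
        out += body[start:start + size]
        i = start + size + 2                 # skip the chunk's trailing CRLF
    return out

chunked = b"4\r\nOK s\r\n5\r\ntatus\r\n0\r\n\r\n"

print(dechunk(chunked))          # b'OK status'
print(b"OK status" in chunked)   # False: a raw match fails on chunked data
```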
- Fixed Issue 2540073: In an ARP proxy scenario, when the edge recovers from split-brain, the incoming traffic still goes to the old active node.
Solutions such as NAT, Load Balancer, and VPN consume the ARP proxy feature. A vMotion of the edge can cause an edge cluster split-brain in which multiple edge nodes are in the active state. After recovery from the split-brain, some nodes move to standby, but incoming traffic still goes to those standby nodes, disrupting data traffic to the proxy ARP address and causing consumers such as the Load Balancer to stop working.
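A toy model of the symptom (hypothetical names, not edge code): after split-brain recovery, the client's ARP cache can still point at a node that has moved to standby, so traffic breaks until the entry is refreshed, for example by a gratuitous ARP from the newly active node.

```python
# Hypothetical sketch: a stale ARP cache keeps steering traffic to a
# node that went standby after split-brain recovery.

class EdgeNode:
    def __init__(self, name):
        self.name = name
        self.active = True          # split-brain: every node thinks it is active

    def answers_arp(self):
        return self.active

def resolve(cache, nodes):
    """Client re-resolves the VIP only when its cached owner stops answering."""
    if not cache["owner"].answers_arp():
        cache["owner"] = next(n for n in nodes if n.answers_arp())
    return cache["owner"].name

nodes = [EdgeNode("edge-1"), EdgeNode("edge-2")]
cache = {"owner": nodes[1]}         # client learned the VIP from edge-2

nodes[1].active = False             # edge-2 recovers from split-brain to standby
stale = cache["owner"].name         # cache still points at the standby node
fresh = resolve(cache, nodes)       # corrected only once the client re-resolves
print(stale, fresh)                 # edge-2 edge-1
```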
- Fixed Issue 2626399: Datapath doesn't send FIN ACK to backend.
The health check shows the pool member as down with a Timeout error. Traffic may break for a while when all pool members are down.
- Fixed Issue 2641150: TFTP traffic is not working when Edge firewall is enabled on the logical router.
TFTP traffic keeps retransmitting between the server and client and the transaction never completes. The firewall error counter "error inserting state" keeps incrementing, and the packet drop count keeps increasing.
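TFTP (RFC 1350) is firewall-unfriendly by design: the client sends to server port 69, but the server replies from a fresh ephemeral port, so a strict 5-tuple state table drops the reply unless the firewall installs a "related" expectation (a TFTP ALG). A hypothetical sketch, not the Edge firewall's implementation:

```python
# Hypothetical sketch: why TFTP needs an ALG in a stateful firewall.
# The server's reply comes from a new ephemeral port, not port 69.

class StatefulFirewall:
    def __init__(self, tftp_alg=False):
        self.states = set()        # allowed (src, sport, dst, dport) flows
        self.expected = set()      # (src, dst) pairs awaiting any server port
        self.tftp_alg = tftp_alg

    def outbound(self, src, sport, dst, dport):
        self.states.add((dst, dport, src, sport))       # allow the exact reply
        if self.tftp_alg and dport == 69:
            self.expected.add((dst, src))               # expect an ephemeral port

    def inbound(self, src, sport, dst, dport):
        if (src, sport, dst, dport) in self.states:
            return True
        if (src, dst) in self.expected:                 # TFTP data connection
            self.states.add((src, sport, dst, dport))
            return True
        return False

fw_plain = StatefulFirewall(tftp_alg=False)
fw_plain.outbound("10.0.0.1", 40000, "10.0.0.2", 69)
dropped = fw_plain.inbound("10.0.0.2", 50123, "10.0.0.1", 40000)
print(dropped)   # False: strict 5-tuple matching drops the server's reply

fw_alg = StatefulFirewall(tftp_alg=True)
fw_alg.outbound("10.0.0.1", 40000, "10.0.0.2", 69)
allowed = fw_alg.inbound("10.0.0.2", 50123, "10.0.0.1", 40000)
print(allowed)   # True: the ALG expectation admits the data connection
```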
- Fixed Issue 2613106: The presence of the address 0.0.0.0/0 or ::/0 in an address set causes vMotion to fail, leading to disconnection of the filter/port.
The following messages are printed in the vmkernel.log during vMotion:
2020-07-28T13:51:38.861Z cpu20:2098264)pfp_add_table_one_addr: failed to add ke
2020-07-28T13:51:38.864Z cpu20:2098264)configured filter nic-2107256-eth0-vmware-sfw.2
2020-07-28T13:51:38.864Z cpu27:2102240)configured filter nic-2107256-eth0-vmware-si.12
2020-07-28T13:51:38.871Z cpu20:2098264)pfp_add_table_one_addr: failed to add ke
2020-07-28T13:51:38.871Z cpu20:2098264)pfp_add: failed for dst
2020-07-28T13:51:38.871Z cpu20:2098264)pfr_table_clr_rule: ERROR!! table ed01c805-2aef-450a-8f5b-e4c3a23db1e9
2020-07-28T13:51:38.871Z cpu20:2098264)pfp_del_addr_with_table: can't clear rule
2020-07-28T13:51:38.871Z cpu20:2098264)pfp_del_ruleid: rule not found 4109 rs 1
2020-07-28T13:51:38.871Z cpu20:2098264)pfioctl: failed to add rules (0)
2020-07-28T13:51:38.871Z cpu20:2098264)addrule ioctl failed: 22
2020-07-28T13:51:38.871Z cpu20:2098264)pf_rollback_rules: rs_num: 1, anchor: mainrs
2020-07-28T13:51:38.871Z cpu20:2098264)pf_rollback_rules: rs_num: 2, anchor: mainrs
2020-07-28T13:51:38.871Z cpu20:2098264)pf_rollback_rules: rs_num: 4, anchor: mainrs
2020-07-28T13:51:38.871Z cpu20:2098264)need(250008) != re->rule_size (252056)
2020-07-28T13:51:38.871Z cpu20:2098264)import claims to be bigger than buffer: 29554, 3224
2020-07-28T13:51:38.871Z cpu20:2098264)failed to import single ruleset: Failure
2020-07-28T13:51:38.871Z cpu20:2098264)failed to import rules: Failure
2020-07-28T13:51:38.871Z cpu20:2098264)Failed to restore datapath state : Failure
2020-07-28T13:51:38.871Z cpu20:2098264)NetX DVF: Restore state called (nic-2107256-eth0-vmware-si.12)
2020-07-28T13:51:38.871Z cpu20:2098264)NetX DVF: No state available to restore on this host
2020-07-28T13:51:38.871Z cpu20:2098264)DVFilter: 1260: No unrestored state left, freeing pending state for world 2107256
2020-07-28T13:51:38.871Z cpu20:2098264)DVFilter: 1276: Bringing down port due to failed DVFilter state restoration and failPolicy of FAIL_CLOSED. <<<==== Port Disconnected leading to traffic failure
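The triggering condition, a catch-all prefix such as 0.0.0.0/0 or ::/0 inside an address set, can be screened for up front. A minimal sketch using Python's standard ipaddress module (the function name is hypothetical):

```python
# Hypothetical sketch: detect the catch-all prefixes (0.0.0.0/0, ::/0)
# in an address set before programming it into the datapath.
import ipaddress

def contains_default_route(address_set):
    """Return True if any entry is the IPv4 or IPv6 default prefix."""
    defaults = {ipaddress.ip_network("0.0.0.0/0"), ipaddress.ip_network("::/0")}
    return any(ipaddress.ip_network(a) in defaults for a in address_set)

print(contains_default_route(["10.1.0.0/16", "0.0.0.0/0"]))  # True
print(contains_default_route(["::/0"]))                      # True
print(contains_default_route(["10.1.0.0/16", "::1/128"]))    # False
```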