VMware NSX 3.2.2 | 08 NOV 2022 | Build 20737185

Check for additions and updates to these release notes.

What's New

NSX-T Data Center 3.2.2 provides a variety of new features and enhancements for virtualized networking and security for private, public, and multi-cloud environments. The new features and enhancements are included in the following focus areas:

Federation

  • Bridge support allows you to configure bridges through the Global Manager and connect a bridge to a segment that is created on the Global Manager.

Edge Platform

  • NSX-T provides visibility into memory leak issues and NSX edge node process terminations. Corresponding alarms are also triggered by the NSX Manager.

Gateway Firewall

  • The fw/del_conn command provides an option to delete stale entries. Currently, this command is supported for IPv4.

NSX Data Center for vSphere to NSX-T Data Center Migration

  • Pause or remove hosts during in-place migration: Migration Coordinator supports pausing between hosts or removing hosts during the host migration phase.

  • New migration mode added in Migration Coordinator for Lift-and-Shift - Configuration and Edge Migration. For more information, see Tech Preview Feature.

  • User-defined topology: Support for NSX-T Load Balancer when configurations are migrated using the Migration Coordinator as part of the lift-and-shift migration.

Install and Upgrade

  • Sub-Transport Node Profile within a Transport Node Profile: Create sub-Transport Node Profiles (TNP) within a TNP to deploy and maintain stretched L3/L2 clusters and stretched vSAN clusters, and support them at scale.

  • Register multiple NSX management clusters to a single vCenter Server: This ability allows you to isolate NSX lifecycle management on a per-cluster basis and have all NSX workloads connect to NSX on the vSphere Distributed Switch (VDS).

Operations

  • Backup & Restore: A viable backup is key to restoring the system for disaster recovery. You now get a reminder in the NSX Manager UI to configure backups.

  • Upgrade Readiness: The NSX Upgrade Evaluation Tool is now integrated with pre-upgrade checks as part of the NSX framework. There is no need to spend additional resources, get compliance approvals, or worry about versioning with a separate appliance. Simply run the pre-upgrade checks as you do today, and NSX will check the readiness of your NSX deployment for a successful NSX Manager upgrade.

Tech Preview Feature

NSX-T Data Center 3.2.2 provides a new migration mode in Migration Coordinator for Lift-and-Shift - Configuration and Edge Migration.

This new migration mode migrates both configurations and edges, and establishes a performance-optimized distributed bridge between the NSX-V source environment and the NSX-T destination environment to maintain connectivity during a lift-and-shift migration. For details about this tech preview feature, see the documentation in the NSX-T Data Center 3.2 Migration Guide.

Important:

Technical preview features are not supported by VMware for production use. They are not fully tested and some functionality might not work as expected. However, previews help VMware improve current NSX-T functionality and develop future enhancements.

Feature Deprecation

NSX-T Advanced Load Balancing Policy API and UI Deprecation

  • Configuration of NSX Advanced Load Balancer (Avi) using the NSX Advanced Load Balancer Policy API and UI is deprecated starting in NSX 3.2.2 and will be removed completely in a future release. We recommend that you use the NSX Advanced Load Balancer (Avi) UI and API directly to configure load balancers in NSX-T integrations across all deployment models.

  • Installation of NSX Advanced Load Balancer appliance cluster and cross-launch of NSX Advanced Load Balancer UI from the NSX-T Manager will continue to be supported. 

  • Users consuming the NSX Advanced Load Balancer Policy APIs and UI in earlier releases of NSX-T (3.1.x, 3.2.0, and 3.2.1) who now upgrade to NSX-T 3.2.2 will need to clean up the NSX Advanced Load Balancer Policy configuration in the NSX-T Manager (using the Deactivate workflow; see the sketch after this list). This action retains the configurations in VMware NSX Advanced Load Balancer (Avi) without impacting the data path or load balancer traffic. From then on, users can consume load balancing functionality directly from VMware NSX Advanced Load Balancer (Avi).

  • Migration of the NSX-V Load Balancer for User-Defined Topology (lift-and-shift migration) does not support migrating directly to the NSX-T Advanced Load Balancer (ALB) configuration because that configuration is deprecated in NSX-T 3.2.2. Starting in NSX-T 3.2.2, the migration path from the NSX-V Load Balancer to NSX Advanced Load Balancer (Avi) requires a two-phase migration: first, migrate from the NSX-V Load Balancer to the NSX-T Load Balancer, and then migrate from the NSX-T Load Balancer to NSX Advanced Load Balancer (Avi).
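
The following is a minimal Python sketch of that cleanup call, assuming the Deactivate workflow corresponds to the ALB Onboarding Workflow DELETE endpoint listed in the table below; the manager address, credentials, and <managed-by> value are placeholder assumptions:

    # Minimal sketch: clean up the ALB Policy configuration on the NSX-T Manager
    # (assumed here to map to the ALB Onboarding Workflow DELETE endpoint).
    import requests

    NSX_MANAGER = "https://nsx-manager.example.com"  # placeholder FQDN

    resp = requests.delete(
        f"{NSX_MANAGER}/policy/api/v1/infra/alb-onboarding-workflow/VCENTER_CLOUD",  # placeholder <managed-by>
        auth=("admin", "password"),  # placeholder credentials
        verify=False,                # lab-only: skip TLS verification for self-signed certs
    )
    print(resp.status_code)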

API Deprecation

The following table lists the NSX-T APIs that are deprecated and the recommended equivalent NSX Advanced Load Balancer (Avi) APIs.

In each entry below, the advanced load balancing functionality name is followed by one or more pairs of endpoints: first the deprecated API, invoked as https://{NSX-T-Policy-Manager-IP/FQDN}/<api>, and then the recommended Avi API, invoked as https://{Avi-controller-IP/FQDN}/<api>.

ALB Auth Token

PUT /policy/api/v1/infra/alb-auth-token

Not Applicable

ALB Controller Version

GET /policy/api/v1/infra/alb-controller-version

GET /api/initial-data

ALB Analytics Profile

GET /policy/api/v1/infra/alb-analytics-profiles

GET /api/analyticsprofile

DELETE /policy/api/v1/infra/alb-analytics-profiles/<alb-analyticsprofile-id>

DELETE /api/analyticsprofile/{uuid}

GET /policy/api/v1/infra/alb-analytics-profiles/<alb-analyticsprofile-id>

GET /api/analyticsprofile/{uuid}

PATCH /policy/api/v1/infra/alb-analytics-profiles/<alb-analyticsprofile-id>

PATCH /api/analyticsprofile/{uuid}

PUT /policy/api/v1/infra/alb-analytics-profiles/<alb-analyticsprofile-id>

PUT /api/analyticsprofile/{uuid}

ALB Application Persistence Profiles

GET /policy/api/v1/infra/alb-application-persistence-profiles

GET /api/applicationpersistenceprofile

DELETE /policy/api/v1/infra/alb-application-persistence-profiles/<alb-applicationpersistenceprofile-id>

DELETE /api/applicationpersistenceprofile/{uuid}

GET /policy/api/v1/infra/alb-application-persistence-profiles/<alb-applicationpersistenceprofile-id>

GET /api/applicationpersistenceprofile/{uuid}

PATCH /policy/api/v1/infra/alb-application-persistence-profiles/<alb-applicationpersistenceprofile-id>

PATCH /api/applicationpersistenceprofile/{uuid}

PUT /policy/api/v1/infra/alb-application-persistence-profiles/<alb-applicationpersistenceprofile-id>

PUT /api/applicationpersistenceprofile/{uuid}

ALB Application Profiles

GET /policy/api/v1/infra/alb-application-profiles

GET /api/applicationprofile

DELETE /policy/api/v1/infra/alb-application-profiles/<alb-applicationprofile-id>

DELETE /api/applicationprofile/{uuid}

GET /policy/api/v1/infra/alb-application-profiles/<alb-applicationprofile-id>

GET /api/applicationprofile/{uuid}

PATCH /policy/api/v1/infra/alb-application-profiles/<alb-applicationprofile-id>

PATCH /api/applicationprofile/{uuid}

PUT /policy/api/v1/infra/alb-application-profiles/<alb-applicationprofile-id>

PUT /api/applicationprofile/{uuid}

ALB Auth Profiles

GET /policy/api/v1/infra/alb-auth-profiles

GET /api/authprofile

DELETE /policy/api/v1/infra/alb-auth-profiles/<alb-authprofile-id>

DELETE /api/authprofile/{uuid}

GET /policy/api/v1/infra/alb-auth-profiles/<alb-authprofile-id>

GET /api/authprofile/{uuid}

PATCH /policy/api/v1/infra/alb-auth-profiles/<alb-authprofile-id>

PATCH /api/authprofile/{uuid}

PUT /policy/api/v1/infra/alb-auth-profiles/<alb-authprofile-id>

PUT /api/authprofile/{uuid}

ALB Auto Scale Launch Configs

GET /policy/api/v1/infra/alb-auto-scale-launch-configs

GET /api/autoscalelaunchconfig

DELETE /policy/api/v1/infra/alb-auto-scale-launch-configs/<alb-autoscalelaunchconfig-id>

DELETE /api/autoscalelaunchconfig/{uuid}

GET /policy/api/v1/infra/alb-auto-scale-launch-configs/<alb-autoscalelaunchconfig-id>

GET /api/autoscalelaunchconfig/{uuid}

PATCH /policy/api/v1/infra/alb-auto-scale-launch-configs/<alb-autoscalelaunchconfig-id>

PATCH /api/autoscalelaunchconfig/{uuid}

PUT /policy/api/v1/infra/alb-auto-scale-launch-configs/<alb-autoscalelaunchconfig-id>

PUT /api/autoscalelaunchconfig/{uuid}

ALB DNS Policies

GET /policy/api/v1/infra/alb-dns-policies

GET /api/dnspolicy

DELETE /policy/api/v1/infra/alb-dns-policies/<alb-dnspolicy-id>

DELETE /api/dnspolicy/{uuid}

GET /policy/api/v1/infra/alb-dns-policies/<alb-dnspolicy-id>

GET /api/dnspolicy/{uuid}

PATCH /policy/api/v1/infra/alb-dns-policies/<alb-dnspolicy-id>

PATCH /api/dnspolicy/{uuid}

PUT /policy/api/v1/infra/alb-dns-policies/<alb-dnspolicy-id>

PUT /api/dnspolicy/{uuid}

ALB Error Page Bodies

GET /policy/api/v1/infra/alb-error-page-bodies

GET /api/errorpagebody

DELETE /policy/api/v1/infra/alb-error-page-bodies/<alb-errorpagebody-id>

DELETE /api/errorpagebody/{uuid}

GET /policy/api/v1/infra/alb-error-page-bodies/<alb-errorpagebody-id>

GET /api/errorpagebody/{uuid}

PATCH /policy/api/v1/infra/alb-error-page-bodies/<alb-errorpagebody-id>

PATCH /api/errorpagebody/{uuid}

PUT /policy/api/v1/infra/alb-error-page-bodies/<alb-errorpagebody-id>

PUT /api/errorpagebody/{uuid}

ALB Error Page Profiles

GET /policy/api/v1/infra/alb-error-page-profiles

GET /api/errorpageprofile

DELETE /policy/api/v1/infra/alb-error-page-profiles/<alb-errorpageprofile-id>

DELETE /api/errorpageprofile/{uuid}

GET /policy/api/v1/infra/alb-error-page-profiles/<alb-errorpageprofile-id>

GET /api/errorpageprofile/{uuid}

PATCH /policy/api/v1/infra/alb-error-page-profiles/<alb-errorpageprofile-id>

PATCH /api/errorpageprofile/{uuid}

PUT /policy/api/v1/infra/alb-error-page-profiles/<alb-errorpageprofile-id>

PUT /api/errorpageprofile/{uuid}

ALB HTTP Policy Sets

GET /policy/api/v1/infra/alb-http-policy-sets

GET /api/httppolicyset

DELETE /policy/api/v1/infra/alb-http-policy-sets/<alb-httppolicyset-id>

DELETE /api/httppolicyset/{uuid}

GET /policy/api/v1/infra/alb-http-policy-sets/<alb-httppolicyset-id>

GET /api/httppolicyset/{uuid}

PATCH /policy/api/v1/infra/alb-http-policy-sets/<alb-httppolicyset-id>

PATCH /api/httppolicyset/{uuid}

PUT /policy/api/v1/infra/alb-http-policy-sets/<alb-httppolicyset-id>

PUT /api/httppolicyset/{uuid}

ALB Hardware Security Module Groups

GET /policy/api/v1/infra/alb-hardware-security-module-groups

GET /api/hardwaresecuritymodulegroup

DELETE /policy/api/v1/infra/alb-hardware-security-module-groups/<alb-hardwaresecuritymodulegroup-id>

DELETE /api/hardwaresecuritymodulegroup/{uuid}

GET /policy/api/v1/infra/alb-hardware-security-module-groups/<alb-hardwaresecuritymodulegroup-id>

GET /api/hardwaresecuritymodulegroup/{uuid}

PATCH /policy/api/v1/infra/alb-hardware-security-module-groups/<alb-hardwaresecuritymodulegroup-id>

PATCH /api/hardwaresecuritymodulegroup/{uuid}

PUT /policy/api/v1/infra/alb-hardware-security-module-groups/<alb-hardwaresecuritymodulegroup-id>

PUT /api/hardwaresecuritymodulegroup/{uuid}

ALB Health Monitors

GET /policy/api/v1/infra/alb-health-monitors

GET /api/healthmonitor

DELETE /policy/api/v1/infra/alb-health-monitors/<alb-healthmonitor-id>

DELETE /api/healthmonitor/{uuid}

GET /policy/api/v1/infra/alb-health-monitors/<alb-healthmonitor-id>

GET /api/healthmonitor/{uuid}

PATCH /policy/api/v1/infra/alb-health-monitors/<alb-healthmonitor-id>

PATCH /api/healthmonitor/{uuid}

PUT /policy/api/v1/infra/alb-health-monitors/<alb-healthmonitor-id>

PUT /api/healthmonitor/{uuid}

ALB IP Addr Groups

GET /policy/api/v1/infra/alb-ip-addr-groups

GET /api/ipaddrgroup

DELETE /policy/api/v1/infra/alb-ip-addr-groups/<alb-ipaddrgroup-id>

DELETE /api/ipaddrgroup/{uuid}

GET /policy/api/v1/infra/alb-ip-addr-groups/<alb-ipaddrgroup-id>

GET /api/ipaddrgroup/{uuid}

PATCH /policy/api/v1/infra/alb-ip-addr-groups/<alb-ipaddrgroup-id>

PATCH /api/ipaddrgroup/{uuid}

PUT /policy/api/v1/infra/alb-ip-addr-groups/<alb-ipaddrgroup-id>

PUT /api/ipaddrgroup/{uuid}

ALB L4 Policy Sets

GET /policy/api/v1/infra/alb-l4-policy-sets

GET /api/l4policyset

DELETE /policy/api/v1/infra/alb-l4-policy-sets/<alb-l4policyset-id>

DELETE /api/l4policyset/{uuid}

GET /policy/api/v1/infra/alb-l4-policy-sets/<alb-l4policyset-id>

GET /api/l4policyset/{uuid}

PATCH /policy/api/v1/infra/alb-l4-policy-sets/<alb-l4policyset-id>

PATCH /api/l4policyset/{uuid}

PUT /policy/api/v1/infra/alb-l4-policy-sets/<alb-l4policyset-id>

PUT /api/l4policyset/{uuid}

ALB Network Profiles

GET /policy/api/v1/infra/alb-network-profiles

GET /api/networkprofile

DELETE /policy/api/v1/infra/alb-network-profiles/<alb-networkprofile-id>

DELETE /api/networkprofile/{uuid}

GET /policy/api/v1/infra/alb-network-profiles/<alb-networkprofile-id>

GET /api/networkprofile/{uuid}

PATCH /policy/api/v1/infra/alb-network-profiles/<alb-networkprofile-id>

PATCH /api/networkprofile/{uuid}

PUT /policy/api/v1/infra/alb-network-profiles/<alb-networkprofile-id>

PUT /api/networkprofile/{uuid}

ALB Network Security Policies

GET /policy/api/v1/infra/alb-network-security-policies

GET /api/networksecuritypolicy

DELETE /policy/api/v1/infra/alb-network-security-policies/<alb-networksecuritypolicy-id>

DELETE /api/networksecuritypolicy/{uuid}

GET /policy/api/v1/infra/alb-network-security-policies/<alb-networksecuritypolicy-id>

GET /api/networksecuritypolicy/{uuid}

PATCH /policy/api/v1/infra/alb-network-security-policies/<alb-networksecuritypolicy-id>

PATCH /api/networksecuritypolicy/{uuid}

PUT /policy/api/v1/infra/alb-network-security-policies/<alb-networksecuritypolicy-id>

PUT /api/networksecuritypolicy/{uuid}

ALB Onboarding Workflow

PUT /policy/api/v1/infra/alb-onboarding-workflow

DELETE /policy/api/v1/infra/alb-onboarding-workflow/<managed-by>

Not Applicable.

ALB PKI Profiles

GET /policy/api/v1/infra/alb-pki-profiles

GET /api/pkiprofile

DELETE /policy/api/v1/infra/alb-pki-profiles/<alb-pkiprofile-id>

DELETE /api/pkiprofile/{uuid}

GET /policy/api/v1/infra/alb-pki-profiles/<alb-pkiprofile-id>

GET /api/pkiprofile/{uuid}

PATCH /policy/api/v1/infra/alb-pki-profiles/<alb-pkiprofile-id>

PATCH /api/pkiprofile/{uuid}

PUT /policy/api/v1/infra/alb-pki-profiles/<alb-pkiprofile-id>

PUT /api/pkiprofile/{uuid}

ALB Pool Group Deployment Policies

GET /policy/api/v1/infra/alb-pool-group-deployment-policies

GET /api/poolgroupdeploymentpolicy

DELETE /policy/api/v1/infra/alb-pool-group-deployment-policies/<alb-poolgroupdeploymentpolicy-id>

DELETE /api/poolgroupdeploymentpolicy/{uuid}

GET /policy/api/v1/infra/alb-pool-group-deployment-policies/<alb-poolgroupdeploymentpolicy-id>

GET /api/poolgroupdeploymentpolicy/{uuid}

PATCH /policy/api/v1/infra/alb-pool-group-deployment-policies/<alb-poolgroupdeploymentpolicy-id>

PATCH /api/poolgroupdeploymentpolicy/{uuid}

PUT /policy/api/v1/infra/alb-pool-group-deployment-policies/<alb-poolgroupdeploymentpolicy-id>

PUT /api/poolgroupdeploymentpolicy/{uuid}

ALB Pool Groups

GET /policy/api/v1/infra/alb-pool-groups

GET /api/poolgroup

DELETE /policy/api/v1/infra/alb-pool-groups/<alb-poolgroup-id>

DELETE /api/poolgroup/{uuid}

GET /policy/api/v1/infra/alb-pool-groups/<alb-poolgroup-id>

GET /api/poolgroup/{uuid}

PATCH /policy/api/v1/infra/alb-pool-groups/<alb-poolgroup-id>

PATCH /api/poolgroup/{uuid}

PUT /policy/api/v1/infra/alb-pool-groups/<alb-poolgroup-id>

PUT /api/poolgroup/{uuid}

ALB Pools

GET /policy/api/v1/infra/alb-pools

GET /api/pool

DELETE /policy/api/v1/infra/alb-pools/<alb-pool-id>

DELETE /api/pool/{uuid}

GET /policy/api/v1/infra/alb-pools/<alb-pool-id>

GET /api/pool/{uuid}

PATCH /policy/api/v1/infra/alb-pools/<alb-pool-id>

PATCH /api/pool/{uuid}

PUT /policy/api/v1/infra/alb-pools/<alb-pool-id>

PUT /api/pool/{uuid}

ALB Priority Labels

GET /policy/api/v1/infra/alb-priority-labels

GET /api/prioritylabels

DELETE /policy/api/v1/infra/alb-priority-labels/<alb-prioritylabels-id>

DELETE /api/prioritylabels/{uuid}

GET /policy/api/v1/infra/alb-priority-labels/<alb-prioritylabels-id>

GET /api/prioritylabels/{uuid}

PATCH /policy/api/v1/infra/alb-priority-labels/<alb-prioritylabels-id>

PATCH /api/prioritylabels/{uuid}

PUT /policy/api/v1/infra/alb-priority-labels/<alb-prioritylabels-id>

PUT /api/prioritylabels/{uuid}

ALB Protocol Parsers

GET /policy/api/v1/infra/alb-protocol-parsers

GET /api/protocolparser

DELETE /policy/api/v1/infra/alb-protocol-parsers/<alb-protocolparser-id>

DELETE /api/protocolparser/{uuid}

GET /policy/api/v1/infra/alb-protocol-parsers/<alb-protocolparser-id>

GET /api/protocolparser/{uuid}

PATCH /policy/api/v1/infra/alb-protocol-parsers/<alb-protocolparser-id>

PATCH /api/protocolparser/{uuid}

PUT /policy/api/v1/infra/alb-protocol-parsers/<alb-protocolparser-id>

PUT /api/protocolparser/{uuid}

ALB Security Policies

GET /policy/api/v1/infra/alb-security-policies

GET /api/securitypolicy

DELETE /policy/api/v1/infra/alb-security-policies/<alb-securitypolicy-id>

DELETE /api/securitypolicy/{uuid}

GET /policy/api/v1/infra/alb-security-policies/<alb-securitypolicy-id>

GET /api/securitypolicy/{uuid}

PATCH /policy/api/v1/infra/alb-security-policies/<alb-securitypolicy-id>

PATCH /api/securitypolicy/{uuid}

PUT /policy/api/v1/infra/alb-security-policies/<alb-securitypolicy-id>

PUT /api/securitypolicy/{uuid}

ALB Server Auto Scale Policies

GET /policy/api/v1/infra/alb-server-auto-scale-policies

GET /api/serverautoscalepolicy

DELETE /policy/api/v1/infra/alb-server-auto-scale-policies/<alb-serverautoscalepolicy-id>

DELETE /api/serverautoscalepolicy/{uuid}

GET /policy/api/v1/infra/alb-server-auto-scale-policies/<alb-serverautoscalepolicy-id>

GET /api/serverautoscalepolicy/{uuid}

PATCH /policy/api/v1/infra/alb-server-auto-scale-policies/<alb-serverautoscalepolicy-id>

PATCH /api/serverautoscalepolicy/{uuid}

PUT /policy/api/v1/infra/alb-server-auto-scale-policies/<alb-serverautoscalepolicy-id>

PUT /api/serverautoscalepolicy/{uuid}

ALB SSL Key And Certificates

GET /policy/api/v1/infra/alb-ssl-key-and-certificates

GET /api/sslkeyandcertificate

DELETE /policy/api/v1/infra/alb-ssl-key-and-certificates/<alb-sslkeyandcertificate-id>

DELETE /api/sslkeyandcertificate/{uuid}

GET /policy/api/v1/infra/alb-ssl-key-and-certificates/<alb-sslkeyandcertificate-id>

GET /api/sslkeyandcertificate/{uuid}

PATCH /policy/api/v1/infra/alb-ssl-key-and-certificates/<alb-sslkeyandcertificate-id>

PATCH /api/sslkeyandcertificate/{uuid}

PUT /policy/api/v1/infra/alb-ssl-key-and-certificates/<alb-sslkeyandcertificate-id>

PUT /api/sslkeyandcertificate/{uuid}

ALB SSL Profiles

GET /policy/api/v1/infra/alb-ssl-profiles

GET /api/sslprofile

DELETE /policy/api/v1/infra/alb-ssl-profiles/<alb-sslprofile-id>

DELETE /api/sslprofile/{uuid}

GET /policy/api/v1/infra/alb-ssl-profiles/<alb-sslprofile-id>

GET /api/sslprofile/{uuid}

PATCH /policy/api/v1/infra/alb-ssl-profiles/<alb-sslprofile-id>

PATCH /api/sslprofile/{uuid}

PUT /policy/api/v1/infra/alb-ssl-profiles/<alb-sslprofile-id>

PUT /api/sslprofile/{uuid}

ALB SSO Policies

GET /policy/api/v1/infra/alb-sso-policies

GET /api/ssopolicy

DELETE /policy/api/v1/infra/alb-sso-policies/<alb-ssopolicy-id>

DELETE /api/ssopolicy/{uuid}

GET /policy/api/v1/infra/alb-sso-policies/<alb-ssopolicy-id>

GET /api/ssopolicy/{uuid}

PATCH /policy/api/v1/infra/alb-sso-policies/<alb-ssopolicy-id>

PATCH /api/ssopolicy/{uuid}

PUT /policy/api/v1/infra/alb-sso-policies/<alb-ssopolicy-id>

PUT /api/ssopolicy/{uuid}

ALB String Groups

GET /policy/api/v1/infra/alb-string-groups

GET /api/stringgroup

DELETE /policy/api/v1/infra/alb-string-groups/<alb-stringgroup-id>

DELETE /api/stringgroup/{uuid}

GET /policy/api/v1/infra/alb-string-groups/<alb-stringgroup-id>

GET /api/stringgroup/{uuid}

PATCH /policy/api/v1/infra/alb-string-groups/<alb-stringgroup-id>

PATCH /api/stringgroup/{uuid}

PUT /policy/api/v1/infra/alb-string-groups/<alb-stringgroup-id>

PUT /api/stringgroup/{uuid}

ALB Traffic Clone Profiles

GET /policy/api/v1/infra/alb-traffic-clone-profiles

GET /api/trafficcloneprofile

DELETE /policy/api/v1/infra/alb-traffic-clone-profiles/<alb-trafficcloneprofile-id>

DELETE /api/trafficcloneprofile/{uuid}

GET /policy/api/v1/infra/alb-traffic-clone-profiles/<alb-trafficcloneprofile-id>

GET /api/trafficcloneprofile/{uuid}

PATCH /policy/api/v1/infra/alb-traffic-clone-profiles/<alb-trafficcloneprofile-id>

PATCH /api/trafficcloneprofile/{uuid}

PUT /policy/api/v1/infra/alb-traffic-clone-profiles/<alb-trafficcloneprofile-id>

PUT /api/trafficcloneprofile/{uuid}

ALB Virtual Services

GET /policy/api/v1/infra/alb-virtual-services

GET /api/virtualservice

DELETE /policy/api/v1/infra/alb-virtual-services/<alb-virtualservice-id>

DELETE /api/virtualservice/{uuid}

GET /policy/api/v1/infra/alb-virtual-services/<alb-virtualservice-id>

GET /api/virtualservice/{uuid}

PATCH /policy/api/v1/infra/alb-virtual-services/<alb-virtualservice-id>

PATCH /api/virtualservice/{uuid}

PUT /policy/api/v1/infra/alb-virtual-services/<alb-virtualservice-id>

PUT /api/virtualservice/{uuid}

ALB VS Data Script Sets

GET /policy/api/v1/infra/alb-vs-data-script-sets

GET /api/vsdatascriptset

DELETE /policy/api/v1/infra/alb-vs-data-script-sets/<alb-vsdatascriptset-id>

DELETE /api/vsdatascriptset/{uuid}

GET /policy/api/v1/infra/alb-vs-data-script-sets/<alb-vsdatascriptset-id>

GET /api/vsdatascriptset/{uuid}

PATCH /policy/api/v1/infra/alb-vs-data-script-sets/<alb-vsdatascriptset-id>

PATCH /api/vsdatascriptset/{uuid}

PUT /policy/api/v1/infra/alb-vs-data-script-sets/<alb-vsdatascriptset-id>

PUT /api/vsdatascriptset/{uuid}

ALB VS Vips

GET /policy/api/v1/infra/alb-vs-vips

GET /api/vsvip

DELETE /policy/api/v1/infra/alb-vs-vips/<alb-vsvip-id>

DELETE /api/vsvip/{uuid}

GET /policy/api/v1/infra/alb-vs-vips/<alb-vsvip-id>

GET /api/vsvip/{uuid}

PATCH /policy/api/v1/infra/alb-vs-vips/<alb-vsvip-id>

PATCH /api/vsvip/{uuid}

PUT /policy/api/v1/infra/alb-vs-vips/<alb-vsvip-id>

PUT /api/vsvip/{uuid}

ALB WAF CRS

GET /policy/api/v1/infra/alb-waf-crs

GET /api/wafcrs

DELETE /policy/api/v1/infra/alb-waf-crs/<alb-wafcrs-id>

DELETE /api/wafcrs/{uuid}

GET /policy/api/v1/infra/alb-waf-crs/<alb-wafcrs-id>

GET /api/wafcrs/{uuid}

PATCH /policy/api/v1/infra/alb-waf-crs/<alb-wafcrs-id>

PATCH /api/wafcrs/{uuid}

PUT /policy/api/v1/infra/alb-waf-crs/<alb-wafcrs-id>

PUT /api/wafcrs/{uuid}

ALB WAF Policies

GET /policy/api/v1/infra/alb-waf-policies

GET /api/wafpolicy

DELETE /policy/api/v1/infra/alb-waf-policies/<alb-wafpolicy-id>

DELETE /api/wafpolicy/{uuid}

GET /policy/api/v1/infra/alb-waf-policies/<alb-wafpolicy-id>

GET /api/wafpolicy/{uuid}

PATCH /policy/api/v1/infra/alb-waf-policies/<alb-wafpolicy-id>

PATCH /api/wafpolicy/{uuid}

PUT /policy/api/v1/infra/alb-waf-policies/<alb-wafpolicy-id>

PUT /api/wafpolicy/{uuid}

ALB WAF Policy PSM Groups

GET /policy/api/v1/infra/alb-waf-policy-psm-groups

GET /api/wafpolicypsmgroup

DELETE /policy/api/v1/infra/alb-waf-policy-psm-groups/<alb-wafpolicypsmgroup-id>

DELETE /api/wafpolicypsmgroup/{uuid}

GET /policy/api/v1/infra/alb-waf-policy-psm-groups/<alb-wafpolicypsmgroup-id>

GET /api/wafpolicypsmgroup/{uuid}

PATCH /policy/api/v1/infra/alb-waf-policy-psm-groups/<alb-wafpolicypsmgroup-id>

PATCH /api/wafpolicypsmgroup/{uuid}

PUT /policy/api/v1/infra/alb-waf-policy-psm-groups/<alb-wafpolicypsmgroup-id>

PUT /api/wafpolicypsmgroup/{uuid}

ALB WAF Profiles

GET /policy/api/v1/infra/alb-waf-profiles

GET /api/wafprofile

DELETE /policy/api/v1/infra/alb-waf-profiles/<alb-wafprofile-id>

DELETE /api/wafprofile/{uuid}

GET /policy/api/v1/infra/alb-waf-profiles/<alb-wafprofile-id>

GET /api/wafprofile/{uuid}

PATCH /policy/api/v1/infra/alb-waf-profiles/<alb-wafprofile-id>

PATCH /api/wafprofile/{uuid}

PUT /policy/api/v1/infra/alb-waf-profiles/<alb-wafprofile-id>

PUT /api/wafprofile/{uuid}

ALB Webhooks

GET /policy/api/v1/infra/alb-webhooks

GET /api/webhook

DELETE /policy/api/v1/infra/alb-webhooks/<alb-webhook-id>

DELETE /api/webhook/{uuid}

GET /policy/api/v1/infra/alb-webhooks/<alb-webhook-id>

GET /api/webhook/{uuid}

PATCH /policy/api/v1/infra/alb-webhooks/<alb-webhook-id>

PATCH /api/webhook/{uuid}

PUT /policy/api/v1/infra/alb-webhooks/<alb-webhook-id>

PUT /api/webhook/{uuid}
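
To illustrate the mappings above, here is a minimal sketch in Python (using the requests library) that issues the same "list pools" query against the deprecated NSX-T Policy endpoint and the recommended Avi endpoint. The host names and credentials are placeholder assumptions, and an Avi deployment may additionally require session or API-version headers:

    # Minimal sketch: list load balancer pools, before and after the deprecation.
    import requests

    NSX_MANAGER = "https://nsx-manager.example.com"        # placeholder FQDN
    AVI_CONTROLLER = "https://avi-controller.example.com"  # placeholder FQDN
    AUTH = ("admin", "password")                           # placeholder credentials

    # Deprecated: ALB Pools through the NSX-T Policy Manager.
    deprecated = requests.get(
        f"{NSX_MANAGER}/policy/api/v1/infra/alb-pools",
        auth=AUTH,
        verify=False,  # lab-only: skip TLS verification for self-signed certs
    )

    # Recommended: the equivalent call made directly to the Avi controller.
    recommended = requests.get(
        f"{AVI_CONTROLLER}/api/pool",
        auth=AUTH,
        verify=False,  # lab-only: skip TLS verification for self-signed certs
    )

    print(deprecated.status_code, recommended.status_code)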

Compatibility and System Requirements

For compatibility and system requirements information, see the VMware Product Interoperability Matrices and the NSX-T Data Center Installation Guide.

Upgrade Notes for This Release

For instructions about upgrading the NSX-T Data Center components, see the NSX-T Data Center Upgrade Guide.

Upgrade Readiness: Starting with NSX-T 3.2.2, the NSX Upgrade Evaluation Tool is integrated with pre-upgrade checks as part of the NSX framework. There is no need to spend additional resources, get compliance approvals, or worry about versioning with a separate appliance. Simply run the pre-upgrade checks as you do today, and NSX will check the readiness of your NSX deployment for a successful NSX Manager upgrade.

Important:

Upgrade from NSX-T 3.2.2 to 4.0.1 or 4.0.1.1 is not supported because the General Availability (GA) of NSX-T 3.2.2 occurred after the GA of NSX 4.0.1 and 4.0.1.1. Some capabilities and important fixes in NSX-T 3.2.2 might not be available in 4.0.1 or 4.0.1.1 due to the order in which these versions were released.

API and CLI Resources

See developer.vmware.com to use the NSX-T Data Center APIs or CLIs for automation.

The API documentation is available from the API Reference tab. The CLI documentation is available from the Documentation tab.
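
As an example of API automation, here is a minimal sketch of listing segments through the NSX-T Policy API with Python and the requests library; the manager address and credentials are placeholder assumptions:

    # Minimal sketch: list segments through the NSX-T Policy API.
    import requests

    NSX_MANAGER = "https://nsx-manager.example.com"  # placeholder FQDN

    resp = requests.get(
        f"{NSX_MANAGER}/policy/api/v1/infra/segments",
        auth=("admin", "password"),  # placeholder credentials
        verify=False,                # lab-only: skip TLS verification for self-signed certs
    )
    resp.raise_for_status()

    # The Policy API returns paged lists; each page carries a "results" array.
    for segment in resp.json().get("results", []):
        print(segment["display_name"])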

Available Languages

NSX-T Data Center has been localized into multiple languages: English, German, French, Italian, Japanese, Simplified Chinese, Korean, Traditional Chinese, and Spanish. Because NSX-T Data Center localization utilizes the browser language settings, ensure that your settings match the desired language.

Document Revision History

Revision Date | Edition | Changes

November 08, 2022 | 1 | Initial edition
November 24, 2022 | 2 | Added Upgrade Readiness information in the Upgrade Notes for This Release section.
December 2, 2022 | 3 | Added known issue 3069457.
January 25, 2023 | 4 | Added an important note in the Upgrade Notes for This Release section.
March 30, 2023 | 5 | Added known issues 3094405, 3106569, 3113067, 3113073, 3113076, 3113085, 3113093, 3113100, 3118868, 3152174, 3152195.
May 19, 2023 | 6 | Added known issue 3116294.
May 22, 2023 | 7 | Added resolved issue 3152512.
June 29, 2023 | 8 | Added known issue 3041672.
November 16, 2023 | 9 | Added resolved issue 3042447.
August 6, 2024 | 10 | Added resolved issue 3145439.

Resolved Issues

  • Fixed Issue 3145439: Rules with more than 15 ports were allowed to publish, only to fail in later stages.

    You may not be aware that the rule fails to publish or realize for this reason.

  • Fixed Issue 3042447: Multiple IDPS events are concatenated in one alert.

    Multiple IDPS events concatenated in one alert result in an incorrect log split. Logging format issues can be seen at the external syslog collector.

  • Fixed Issue 3152512: Missing firewall rules after the upgrade from NSX 3.0.x or NSX 3.1.x to NSX 3.2.1 can be observed on the edge node when a rule is attached to more than one gateway/logical router.

    Traffic does not hit the correct rule in the gateway firewall and will be dropped.

  • Fixed Issue 2961770: If edge transport node deletion fails because the host is not reachable or due to intermittent messaging issues, the configuration state in the NSX UI for that edge node gets stuck in Pending status.

    NSX Manager UI shows Pending status for that edge node and is stuck in that state with no further progress.

  • Fixed Issue 2966319: Distributed Firewall policies go to a failed state, except the policies with zero rules.

    If a global address set is required by a firewall rule, but the address set has been deleted, the rule will fail. This can occur when Local Control Plane pushes the rule configuration (global address sets and filters) to a host. For example, you have a rule with an address set in the source or destination field, and that rule is applied to only a single filter. If you apply the rule to a different filter in a subsequent rule publish, then that publish can fail.

    When rules fail, the VMkernel log messages look like this:

    cpu13:267699)pfioctl: DIOCADDRULE failed with error 22

    cpu13:267699)VSIPConversionCreateRuleSet: Cannot insert 0 rule 2024 22

  • Fixed Issue 3030690: Tier-1 Gateway goes into a failed state with error message: "Principal nsx_policy with role [] attempts to delete or modify an object of type nsx$EdgeClusterMemberCapacityPool it doesn't own."

    Invalid ownership exception occurs when an NSX user attempts to delete or modify an object that is owned by a Principal Identity user. This is the expected behavior. There is no workaround for this issue.

  • Fixed Issue 3030584: Alarms are seen every time you create a new Segment.

    Syslog shows the following error message:

    [ERROR] Failed to add l2fwd port 574, as another 574 exists errorCode="EDG0400706"

    There is no impact on the data path.

  • Fixed Issue 3028358: vLCM remediation of a host cluster fails because of an NSX-T health check issue.

    The NSX-T log of the Upgrade Coordinator indicates a null pointer exception, which causes the API call to fail with HTTP 500 Internal server error.

  • Fixed Issue 3027634: NAT rules do not work without a firewall rule after upgrading NSX-T from 2.3.x to 3.2.x.

    North-south traffic is impacted.

  • Fixed Issue 3026992: Failed to associate an L2 VPN session with a segment that is connected to a Tier-1 Gateway under a Tier-0 VRF.

    Both Infra and Tier-1 Segment PATCH or PUT API fails with the following error message:

    Cannot use L2VPN Session=[/infra/tier-0s/T0GW-3/locale-services/default/l2vpn-services/gL2VPN-Client/sessions/Session02]. The L2VPN Session and Segment must belong to the same Tier0 or Tier1 gateway.

  • Fixed Issue 3026668: After disabling Distributed Firewall and Gateway Firewall in the UI, the "get firewall status" command output in the CLI still shows the firewall status as enabled.

    When NSX Manager restarts, the ID in the GPRR maps to an incorrect entry in the internal table, which leads to that incorrect entry being overwritten.

  • Fixed Issue 3025935: The Network Topology view in the NSX Manager UI shows an incorrect count of VMs when an advanced filter is applied.

    When there are more than 1000 VMs in the NSX topology, connected across different segments, applying the advanced filter restricts the cumulative VM count to 1000 in the topology.

  • Fixed Issue 3024868: In a collapsed edge transport node configuration with edge and ESXi host transport nodes in the same transport VLAN, routed multicast traffic is dropped at the uplink of the host transport node that hosts the collapsed edge.

    Multicast routing does not work for south-north and inter-tier-1 (east-west) traffic flows.

  • Fixed Issue 3024658: When processing packets with SMB traffic, NSX IDS/IPS generates high CPU utilization and latency.

    In some cases, the NSX IDS/IPS process crashes due to an out-of-memory error when processing SMB traffic.

  • Fixed Issue 3023507: Load balancer diagnosis output on the CLI shows incorrect process status.

    The load balancer diagnosis output shows "load balancer processes are not running." However, load balancer processes are running on the edge node. The diagnosis output on the CLI is incorrect. There is no impact on the load balancer traffic.

  • Fixed Issue 3022691: Due to a large number of NSX Manager backups, the Backup page fails to load or takes a long time to open.

    The Backup & Restore page does not display the status of the last backup, nor does it allow you to make any configuration changes until the overview of available backups to restore is loaded.

  • Fixed Issue 3022604: In the NSX Manager UI, the Monitor page of the Edge Transport Nodes shows an incorrect description for the heap memory.

    The incorrect description is:

    To do, Replace this description

  • Fixed Issue 3020724: Packets are dropped intermittently on the Tier-1 Gateway where Load Balancer or SNAT services are running and the number of active NAT sessions reaches the maximum limit.

    Intermittent packet drops are seen on the Tier-1 Gateway.

  • Fixed Issue 3020530: Unable to acknowledge alarms for the Local Manager site from the Global Manager UI.

    When you navigate to System > Fabric > Nodes > Host Transport Nodes or System > Fabric > Nodes > Edge Transport Nodes, and click the alarms for the edge or the host, the following error message is displayed:

    Support bundle request failed: The requested resource [/remote/api/v1/alarms] is not available.

  • Fixed Issue 3020398: When host clusters are prepared for both NSX-T and Distributed Firewall, you are unable to set an existing or a new transport zone as the default transport zone.

    In the NSX Manager UI, when you navigate to System > Fabric > Transport Zones, there is no default transport zone. When you try to set one of the existing transport zones to be the default, the following error message is displayed:

    General error (Error code:400)

    After you create a new transport zone and try to set it as the default, the same error message is displayed.

  • Fixed Issue 3020220: NSX does not validate whether a syslog exporter with the same name already exists, and allows you to configure multiple syslog exporters with the same name, but different protocols.

    You might get confused when you observe multiple syslog exporters with the same name.

  • Fixed Issue 3045039: ESX VDR DHCP relay sends out a unicast offer even though the broadcast flag is set.

    VMs cannot get IP addresses through PXE boot.

  • Fixed Issue 3018748: The data path heap memory of a small edge form factor is exhausted just after VM deployment and even without any configuration from the NSX Manager.

    For the edge small form factor, the edge data path heap memory is very limited and not meant for a production environment.

  • Fixed Issue 3017921: NSX Manager upgrade from 3.1.2 to 3.2.1 failed due to a null pointer exception.

    The upgrade fails and a retry might not work. The NSX Manager appliance OS is at the new version, but the UI is not available. You need to roll back to the old version.

  • Fixed Issue 3016565: New segment creation is stuck in the NSX Manager UI due to an exception in the grouping workflow.

    When creating a new segment from the NSX Manager UI, the segment creation cannot go past the "In progress" status and sometimes fails. When you try to delete these segments, they are stuck in the "mark for delete" status. You need to run NSX APIs to delete the segments.

  • Fixed Issue 3015297: Unable to nest some groups from the NSX Manager UI after upgrading NSX-T to 3.2.1.

    While editing the group membership criteria, a group with a display name similar to the current groups is not shown in the groups grid for selection. Therefore, you are unable to nest the group with the similar display name.

  • Fixed Issue 3015270: NSX-V to NSX-T host migration fails when the host uses VDS 6.x and the management VMkernel interface is connected to a Distributed Portgroup whose uplink teaming active list does not contain the uplink whose MAC is shared by the management VMkernel interface.

    Host migration step fails with an error message that is similar to the following:

    Management kernel interface vmk0 shares MAC with pnic vmnic4, but the pnic is not in the active list of default uplink teaming of NVDS [3f d7 dc 05 11 51 45 ee-8a c6 f9 46 53 b7 9f 5d]

  • Fixed Issue 3014810: When you add an LDAP server to the Identity Firewall AD configuration using the LDAPS protocol, the operation fails with a verification error.

    The add LDAP server operation fails with the following error message:

    LDAP server connection failed during verification (Error code: 524007).

    You are unable to add an Identity Firewall LDAP server with the LDAPS protocol without a SHA-256 thumbprint value.

  • Fixed Issue 3014237: Random packet drops observed on the edge with high in_use_count.

    Packets drop randomly, and edge failover keeps flipping.

  • Fixed Issue 3013374: FQDN alarms are raised and then they get cleared intermittently.

    There is no functional impact, but you may observe FQDN alarms while the system seems to work without any issue.

  • Fixed Issue 3012192: A failure to check the dvPort type while setting the IPFIX property causes the dvPort not to be connected.

    When VMs are powered on or when they are hosted on NSX-T prepared ESXi hosts with IPFIX enabled, VM NICs do not load.

  • Fixed Issue 3011974: When you view the interface (IP or LLDP) details for ESXi transport nodes in the NSX Manager UI, the information is not present and several errors are seen.

    The query for the LLDP neighbor might fail. There is no impact. If you query for the LLDP neighbor a second time, it fetches the details.

  • Fixed Issues 3008206, 3050428: Bare metal edge management bond slave interfaces may be lost after the edge is rebooted, or there may be incorrectly named interfaces.

    Loss of edge management connectivity. Possible data path connectivity issues.

  • Fixed Issue 3007745: Migration of a transport node from N-VDS to VDS results in a teaming policy configuration issue with error code 9528.

    Migration of the transport node to VDS is blocked if a Default or a Named teaming policy has a combination of both LAG and normal uplinks.

  • Fixed Issue 3007646: Static IP addresses of Global Manager groups are not synced across sites by the NSX Central Control Plane.

    Hosts on Local Manager sites receive empty container messages (ContainerMsg) for the referenced Global Manager groups.

  • Fixed Issue 3006135: Stale entries are seen in the edge internal tables due to issues while deleting the edge transport nodes.

    Stale edge entries can make the NSX Manager unresponsive post NSX upgrade.

  • Fixed Issue 3004489: VMs connected to a dvPortgroup that is enabled with the Distributed Firewall service do not have correct firewall rules applied to them.

    If the VM undergoes a storage vMotion, the dvPort that is connected to the VM loses its NSX configuration.

  • Fixed Issue 3004485: The effective membership API of an NSX group, which contains an Active Directory group as its member, does not return some or all effective IPs or VMs of the users from that AD group.

    Data path is not impacted.

  • Fixed Issue 3002535: Segment realization keeps failing with "ObjectAlreadyExistsException" error.

    In the NSX Manager UI, Segment is shown with Error status. The Segment is unusable.

  • Fixed Issue 3002526: Transmit queue might become inoperable when a packet with zero length is transmitted.

    A significant amount of traffic is dropped, leading to connectivity issues. Failover (data path restart) is needed to resolve the issue.

  • Fixed Issue 3001471: Host preparation fails while applying a transport node profile on the host cluster.

    The JVM of the Inventory Compute Manager goes out of memory.

  • Fixed Issue 2999439: During installation of a bare metal edge from an ISO file, the management interface that you select may change after the installation is complete and the edge is rebooted.

    Connectivity to the management interface goes down. The data path might also be impacted.

  • Fixed Issue 2998636: After migration, a VM might lose NSGroup membership if its VNI is added statically as a member of that group.

    Distributed Firewall service might be impacted after the VMs are migrated.

  • Fixed Issue 2997928: Alarms are repeatedly generated for a configuration error without a clear resolution to the earlier generated alarms.

    Numerous alarms related to realization status without any resolution can result in out of memory issue and other systemic issues on the management plane.

  • Fixed Issue 2997048: MBUF is held by the physical port's transmit queue on the standby edge node when the north-bound physical router pings the uplink IP of the edge.

    The edge runs out of jumbo_mbuf_pool. The following alert is displayed on the bare metal edge:

    The datapath mempool usage for jumbo_mbuf_pool on Edge node <edge_uuid> which is at or above the high threshold value of 85%.

  • Fixed Issue 2996107: High CPU alert on NSX Manager UI due to NGINX load balancer worker process.

    Layer 7 load balancer might be abnormal.

  • Fixed Issue 2995382: Realization errors are seen for NSX-T Policy objects.

    Updates to NSX-T Policy objects are not pushed to the data path.

  • Fixed Issue 2995180: Distributed Firewall rules might not be applied successfully to the VMs due to group handling issue.

    The rules are not pushed correctly to the impacted VMs on the host.

  • Fixed Issue 2988057: When an in-band management interface is configured on a Mellanox network switch, and the edge is put into a maintenance mode, then the in-band VLAN sub-interface disappears.

    Edge loses connectivity to the management interface.

  • Fixed Issue 2986435: Tooltip message in the UI incorrectly states that you can edit or delete a VIP after logging in to any one of the NSX Manager nodes.

    As per design, NSX does not allow you to modify or delete a VIP after the NSX Advanced Load Balancer Controller Appliances are deployed.

  • Fixed Issue 2985150: Vulnerabilities identified in NSX-T.

    Nessus vulnerability scanner identified a number of issues in NSX-T.

  • Fixed Issue 2975966: NSX-V to NSX-T IDFW migration plug-in does not translate multiple event log servers configuration correctly.

    When there are multiple event log servers configured in NSX-V, the IDFW migration plug-in throws an error when translating the configuration to NSX-T configuration.

  • Fixed Issue 2970605: Unable to force delete dvPortGroups that you created for a security-only NSX-T installation.

    After uninstalling NSX-T on a security-enabled host cluster, you can force remove stale dvPortGroups from the system if they are not deleted automatically. However, in some cases, a force delete operation might fail due to invalid transport zone path errors.

  • Fixed Issue 2962994: MTU configuration check process breaks for an irrelevant transport zone, which is not found.

    An error message is shown but you cannot find the MTU mismatch issue.

  • Fixed Issue 2956751: Applying a transport node profile to a vLCM-enabled host cluster failed.

    NSX installation fails on the vLCM-enabled host cluster because the service account password has expired.

  • Fixed Issues 3027225, 3043126, 3048448: When the proton service restarts, the ID in the GPRR maps to the wrong entry in the internal table, overwriting that entry.

    Some exclude list related functionalities are unusable.

  • Fixed Issue 3040487: A static route with a network that is the same as a connected interface subnet does not get resolved on the Tier-1 Gateway.

    The user-created static route does not appear in the FIB table of the Tier-1 Gateway.

  • Fixed Issue 3040486: In NSX 3.2.0 and 3.2.1.x, manager CLI "get vtep-groups" does not work and returns an error. Subsequent Controller CLIs also fail.

    Federation CLI "get vtep-groups" does not work.

  • Fixed Issue 3040267: While only one CIDR is allowed when creating a NAT rule with action type DNAT, the validation message implies that multiple CIDRs are allowed.

    The validation error message is confusing.

  • Fixed Issue 3039728: VMs are losing network connectivity as DFW rules are not applied as expected.

    The DFW rules are not applied on the VMs as expected, which is resulting in network connectivity issues.

  • Fixed Issue 3039414: The segment is not connected to the Tier-0 or Tier-1 Gateway.

    You cannot add static routes after updating the segment.

  • Fixed Issue 3037574: The log rotation of /var/log/lb/access.log is not working.

    Disk space on the edge is consumed by the /var/log/lb/access.log file.

  • Fixed Issue 3036407: In the Tier-0 grid, the Linked Tier-1 Gateways column shows a Connected Tier-1s count that does not match the number of records in the dialog.

    The connected Tier-1 count is different from the number of records in the connected Tier-1 dialog.

  • Fixed Issue 3031956: NSX for vSphere to NSX-T host migration may fail when the host uses VDS 6.x and VSAN vmk is connected to a DVPG.

    Cannot migrate hosts from NSX for vSphere to NSX-T.

  • Fixed Issue 2942961: The "nestdb_remedy" plug-in of SHA introduces an extra disk read to ESXi host.

    ESXi hosts see an increased rate of disk read operations. The extra disk read operations impact the storage performance.

  • Fixed Issue 3053551: NSX-V to NSX-T migration fails to import data from the vCenter Server when it takes too long.

    An error is displayed when data collection from the vCenter Server takes a long time. The NSX-V to NSX-T migration fails at the first step.

  • Fixed Issue 3044600: A full sync operation for NSX Federation accidentally triggered the delete and cleanup of large amounts of global resources.

    The aggressive background purge operation caused the cleanup of GenericPolicyRealizedResource (GPRR) objects during the full sync operation. This resulted in the loss of a large number of internal resources that could not be properly recovered. The global intent resources were restored with subsequent full sync operations. However, the deleted GPRR resources could not be recreated properly because the provider implementations did not expect this scenario and attempted to recreate existing resources internally, which resulted in a failure.

    With the transport nodes in a degraded state, Distributed Firewall rules in an unknown state, and edge nodes all down, the VMs were not able to communicate with external networks.

  • Fixed Issue 3005851: Gateway Firewall rule to allow FTP service does not drop passive and extended passive FTP data traffic.

    FTP service behind the NSX Gateway Firewall fails to transfer files or respond to FTP commands properly if passive or extended passive FTP mode is used.

  • Fixed Issue 2993001: NSX-T Distributed Firewall does not display rule statistics for enforcement points (alb-endpoint).

    Distributed Firewall rule statistics in the UI are shown as 0.

  • Fixed Issue 2992172: If you try to set two different route filters for BGPv6 (one for IPv4 family and other for IPv6 family) in a single step from the UI or in the same API call, the BGPv6 configuration fails.

    BGP neighbor API returns the following error message:

    Both new and deprecated properties are specified for out_route_filters in /infra/tier-0s/10077e3a-c392-4a32-9be3-c5ee0a22d8d7/locale-services/default/bgp/neighbors/<neighbor_ID>

  • Fixed Issue 2991939: DFW rules with only Active Directory members in a group fail to realize after upgrading NSX to 3.2.1, if the DFW rule is updated after the NSX upgrade.

    You are unable to update DFW rules with only Active Directory members in the group.

  • Fixed Issue 2991310: NSX API shows incorrect error messages for some runtime workflows when there are multiple enforcement points in the system.

    Runtime APIs are returning confusing error messages in specific IPSec VPN configurations.

  • Fixed Issue 2990764: Corfu compaction failure due to Alarm table.

    NSX Global Manager is unable to connect to Corfu. The Corfu compactor goes out of memory.

  • Fixed Issue 2989923: The snmpd process created a core dump and stopped running on the NSX edge nodes.

    SNMP service is down. Restarting the service did not work.

  • Fixed Issue 2988824: "Query teaming failed for dvs" error causes LogicalSwitchFullSync failure at the transport node.

    The transport node realization for the ESXi host is in a partial success or a failed state.

  • Fixed Issue 2986459: The Distributed Logical Router-Edge Services Gateway subnet link, which is the same as the BGP neighbor subnet of the Edge Services Gateway, is ignored.

    The NSX-V topology graph does not show the DLR-ESG subnet link. You are unable to proceed with NSX-V to NSX-T migration.

  • Fixed Issue 2985694: Rule ID of NAT action is incorrect.

    The IP address in the traffic that is matching the NAT rule is not translated. This issue is caused by a random RST packet or some TCP flow that is not started with SYN, which a TCP three-way handshake requires.

  • Fixed Issue 2984375: Unable to create VLAN segments as NSX Manager cannot receive status of host transport nodes.

    NSX Manager UI shows the transport node connection status as Unknown. The following error message is shown when you try to create a VLAN segment:

    Failed to get capability data of TransportZone/"long string" for validation.

  • Fixed Issue 2982217: Duplicate copies of DTO classes present in packages.

    You are unable to upgrade your NSX environment from 3.2.0.1 to 3.2.1.

  • Fixed Issue 2979491: VMs connected to Distributed Virtual Port Groups, which have a mismatch between DVPG key and DVPG Managed Object ID do not have Distributed Firewall rules applied as expected.

    Distributed Firewall rules are not applied to expected VMs because NSX Manager cannot map VM vmx paths to existing discovered segments.

  • Fixed Issue 2978708: When an LDAP server that is used for authentication responds slowly, all the NSX reverse proxy worker threads are consumed, causing the NSX UI and API to stop responding.

    NSX API calls return the following error message:

    Some appliance components are not functioning properly.

  • Fixed Issue 2972522: The vRealize Log Insight requests do not enforce firewall rules when user names do not match the Active Directory user names.

    In the Active IDFW Sessions tab of the NSX Manager UI, the AD user login/logout events are not seen. Firewall rules are not applied correctly to the IDFW sessions.

  • Fixed Issue 2970087: Unable to create vRNI application.

    You cannot create vRNI application when using VM custom search with the NSGroup option or the Security option. The reason is that the system fails to fetch the VMs even when the groups are created and VMs are associated with the groups.

  • Fixed Issue 2963644: Firewall rules are not realized from NSX Policy to the Management Plane because the number of raw service ports exceeds the allowed limit of 15.

    Firewall policy modifications, such as adding, updating, or deleting rules or the section Applied To, are not realized if the policy contains problematic firewall rules.

  • Fixed Issue 2963524: UDP connections are not purged according to their expiry timeout.

    The firewall connection table shows very old UDP connections. You might run out of memory as the old connections are not purged.

  • Fixed Issue 2960748: If you configured OpenLDAP to support server-side sorting by using the sssvlv OpenLDAP overlay, the role assignment does not work when searching for LDAP users.

    The UI gets stuck when you search for LDAP users on the Role Assignment page.

  • Fixed Issue 2957522: Edge transport node realization fails with the error message: "Failed to enable the data path on the edge node after three attempts."

    The realization fails due to duplicate VTEP IP addresses on two edge nodes. Traffic is impacted on the two edge nodes.

  • Fixed Issue 2746576: DVPort that is owned by NSX-T is unexpectedly removed from the vCenter Server.

    An NSX-T logical segment port, such as the Service Plane Forwarding (SPF) port, might be unexpectedly removed from a vSphere Distributed Switch. When a VM is migrated from one ESXi host transport node to another, the VM NIC gets disconnected and cannot be connected anymore. This issue is fixed in vSphere 7.0.2 P03, vSphere 7.0.1 EP4, and vSphere 7.0.3 or later.

  • Fixed Issue 3036344: Duplicate SNAT IP addresses found in the load balancer pool configuration.

    Users may exceed the pool limit and may not be able to add new IPs or remove existing IPs from the pool. Edge might get impacted.

  • Fixed Issue 3027836: Distributed Firewall rule with a L7 context profile incorrectly drops traffic.

    VMs lose connectivity. The DVFilter transmit channel gets stuck.

  • Fixed Issues 3016543, 2996057: An invalid flow cache entry causes packets of unrelated flows to bypass NSX firewall and SNAT processing.

    Application resets or the connection fails after a third edge node is added to the edge cluster and the active Tier-1 Gateway is relocated to the third edge node from the edge cluster.

  • Fixed Issue 3002114: NSX-T host upgrade fails when VMkernel ports contain service insertion configuration.

    The failure might also happen when an NSX-V to NSX-T migration is done with VMkernel ports. The following error message is displayed during the host upgrade:

    ERROR: Cannot load module nsx-esx-70u1/nsxt-vsip-19582723: Failure

  • Fixed Issue 2995194: When you apply another Transport Node Profile (TNP) to an existing Transport Node Collection (TNC), the revision of the realized TNC does not get updated.

    The TNC provider is invoked every 5 minutes. If another TNP is applied to a host cluster that already has a TNP applied, hosts lose connectivity.

  • Fixed Issue 2991745: Segment gets detached or deleted successfully, but downlink interface does not get deleted from the Tier-1 DR on the host and edge transport nodes.

    A stale interface remains on the Tier-1 DR of transport nodes and all connected routes in the FIB table. Also, on the Tier-0 DR FIB, you can see the advertised routes. Later, when you configure the same subnet on another interface, it might cause the data path to fail.

  • Fixed Issue 2989970: Unable to view ruleID and reason fields for NSX-T firewall messages on the Interactive Analytics dashboard of the VMware Log Insight server.

    The filter criteria do not match the ingested NSX-T firewall log messages, specifically for the ruleID and reason fields.

  • Fixed Issue 2983067: During an NSX-V to NSX-T migration, communication is lost between the NSX-T edge and the NSX-V workloads after the edge cutover step.

    All the north-south traffic through the NSX-T edge nodes is lost.

  • Fixed Issue 2975798: Arista CloudVision eXchange (CVX) integration fails when the source or destination of the firewall rule contains raw IP addresses.

    Realization of the firewall rule fails on the NSX Manager.

  • Fixed Issue 2979135: Upgrade of NSX Manager from 3.2.0.1 to 3.2.1 fails.

    If groups are added to the firewall exclusion list of the Distributed Firewall before upgrading the NSX Manager from 3.2.0.1 to 3.2.1, the upgrade fails.

  • Fixed Issue 2954707: Host preparation fails when the vSphere Distributed Switch name contains "|".

    The following error message is displayed in the nsxapi.log file:

    Host switch apply operation failed: [invalid dvs parameter]

  • Fixed Issue 2952216: Upgrade fails when you use the NSX Upgrade Evaluation Tool to upgrade from NSX-T 3.1.3.7 to 3.2.0.1.

    Upgrade fails because BIOS UUID is empty for virtual machines.

  • Fixed Issue 2950293: When an NSX system detects large-sized certificate revocation lists, NSX Manager crashes and the UI becomes inaccessible.

    NSX Manager crashes regularly with an out of memory error while it is indexing the large-sized CRLs.

  • Fixed Issue 2950175: When you update the segment port profiles of a segment port, a false positive message is displayed after clicking the Save button.

    The save confirmation message in the UI misleads you to believe that the segment port profile is updated for the segment port, but actually the segment port is not updated.

  • Fixed Issue 2949038: Inter-SR IBGP was not in established status when Tier-0 Gateway was stretched between multiple sites.

    Inter-site traffic does not work.

  • Fixed Issue 2944520: NGINX crashes frequently and cannot communicate through Layer 4 load balancer.

    Communication through Layer 4 load balancer is lost.

  • Fixed Issue 2941521: Upload of PCG support bundle to NSX Manager fails.

    PCG support bundle upload script relies on NSX Manager node thumbprint to differentiate between NSX Manager nodes. If you have configured the same API cert on all three NSX Manager nodes, the upload script sends the support bundle to the first node it connects with. This causes the upload to fail on other nodes.

  • Fixed Issue 2940776: Security configurations are not realized on the host switch and ports due to incorrect classification of host switch when some NSX networking & security configurations are left behind on the host switch.

    Some VMs do not get the required NSX security configurations after NSX security features are installed on the host cluster.

  • Fixed Issue 2935179: IDFW login/logout event table might grow too large when several users log in.

    Corfu compactor goes out of memory.

  • Fixed Issue 2934903: Unable to register a vCenter Server with custom ports in NSX.

    Registration of the vCenter Server in NSX fails.

  • Fixed Issue 2932415: After enabling L2VPN and edge bridging together on a segment, the WAN network is saturated.

    Traffic gets dropped because the WAN link is saturated.

    Enabling L2VPN and edge bridging features together on a segment is not a supported configuration. The workaround is to disable L2VPN before enabling the edge bridge.

  • Fixed Issue 2913034: Segment status is shown as Up when its admin status is Down, and vice versa.

    Unable to view the correct segment status in the NSX Manager UI.

  • Fixed Issue 2908451: NSX metrics consume a large disk space on NSX Manager nodes.

    NSX Manager UI becomes inaccessible. You are unable to upgrade NSX and gather logs because of insufficient disk space to generate a support bundle.

  • Fixed Issue 2950920: When you specify a timeout greater than two hours for the edge through the timeout settings, the timeout value is capped to two hours.

    You cannot specify a timeout greater than two hours for the configurable firewall timeouts.

  • Fixed Issue 2816781: Physical servers cannot be configured with a load-balancing based teaming policy as they support a single VTEP.

    You won't be able to configure physical servers with a load-balancing based teaming policy.

  • Fixed Issue 3035862: All IKE or IPSEC sessions are temporarily disrupted.

    Traffic outage is observed for some time.

  • Fixed Issue 3034053: Password validation error blocks users from upgrading NSX edges that were deployed in NSX-T 2.5 release with a weak password.

    In the Actions menu on the Edge Transport Node page of the NSX Manager UI, the following error message is displayed when you click on Sync Edge Configuration:

    Failed to refresh the transport node configuration: [Fabric] Password for the following user(s) root does not follow complexity rules. Password must have at least 12 characters including 1 upper case character, 1 lower case character, 1 numeric digit, 1 special character, and at least 5 different characters. Passwords based on dictionary words and palindromes are invalid.

  • Fixed Issue 3031622: After upgrading NSX-T to 3.2.1, incorrect service IP is realized leading to the removal of some service IPs from the loopback port.

    Traffic loss occurs.

  • Fixed Issue 2992587: URT-generated VDS names in VCF cases might be longer than 80 characters, which leads to VDS creation failure.

    No impact, as you can customize the VDS names in the generated topology.

  • Fixed Issue 2994424: URT generated multiple VDSes for one cluster if the named teamings of the transport nodes in the cluster were different.

    Transport nodes with different named teamings were migrated to different VDSes, even if they were in the same cluster.

  • Fixed Issue 3019893: NGINX crashes after load balancer persistence is disabled.

    A new connection cannot be established due to a deadlock.

  • Fixed Issue 3017426: Logical ports of VMs that are migrated from NSX-V to NSX-T are deleted after upgrading the NSX Manager from 3.2.0 to 3.2.1.

    The affected VMs lose network connectivity.

  • Fixed Issue 3010061: During the upgrade of the NSX Application Platform, Helm repository URL is not being validated.

    No error message is displayed in the UI for more than 10 minutes.

  • Fixed Issue 3008193: Effective members are not displayed for Active Directory group.

    Datapath is not impacted.

  • Fixed Issues 3004848, 2989885: Edge VM to be deleted is not reachable from NSX Manager, and deletion is stuck.

    You cannot delete the Edge VM.

  • Fixed Issue 3004413: Large packets sent by an NSX-T bare metal edge are truncated and dropped.

    Transit ping for large packets with size ≥ 2048 fails.

  • Fixed Issues 3004128, 2990647: Edit Edge Transport Node window does not display uplinks from Named Teaming policies or Link Aggregation Groups that are defined in the uplink profile.

    You cannot use uplinks and map them to Virtual NICs or DPDK fastpath interfaces.

  • Fixed Issue 3002469: After NSX-T is upgraded from 3.1 to 3.2, the destination_transport_port property is set to false, causing failure of DFW IPFIX data collection.

    TCP/UDP destination port is set to 0 in raw flow records.

  • Fixed Issue 2999521: In EVPN Route-Server mode, when NSX edge peers with a Nokia DCGW, type-5 routes advertised by NSX are not installed by Nokia DCGW.

    This issue occurs because NSX advertises these routes with Router MAC extended community.

  • Fixed Issue 2996964: Host failed to migrate because an uplink name was not found in the UplinkHostSwitchProfile.

    The process gets stuck at host migration.

  • Fixed Issue 2992759: Prechecks fail during NSX Application Platform 4.0.1 deployment on NSX-T versions 3.2.0/3.2.1/4.0.0.1 with upstream K8s v1.24.

    The prechecks fail with the following error message:

    Kubernetes cluster must have minimum 1 ready master node(s).

  • Fixed Issue 2991201: After upgrading NSX Global Manager to 3.2.1.x, Service entries fail to realize.

    Existing Distributed Firewall rules that consume these Services do not work as expected.

  • Fixed Issue 2990741: After upgrading to NSX-T 3.2.x, search functionality does not work in the NSX Manager UI.

    NSX Manager UI shows the following error message:

    Search service is currently unavailable, please restart using 'start service search'.

  • Fixed Issue 2990081: Central CLI stops working when an ESXi 7.0.2 or later host reboots.

    After installing NSX-T 3.2.x, Central CLI that is registered with ESXi 7.0.2 or later host initially works. However, when a host is rebooted, Central CLI stops working on that host.

  • Fixed Issue 2989756: Reverse ARP messages are not forwarded when an edge cluster fails in an environment where multiple L2 bridges are connected to a single overlay segment.

    Edge bridge failover is impacted.

  • Fixed Issue 2989696: Scheduled backup fails to start after an NSX Manager restore operation.

    Scheduled backup fails on NSX Manager, but a manual backup works. In the NSX Manager UI, the Backup & Restore tab shows that the latest scheduled backup has failed, but when you do a manual backup, it succeeds.

    For more information, see the VMware knowledge base article 89059.

  • Fixed Issue 2986638: Hosts enforce traffic shaping even when the QoS profile in the management plane has the Enabled field set to false.

    During an NSX-V to NSX-T migration, network traffic throughput drops from 150 MBps to 12.5 MBps after VMs or vmks are migrated from NSX-V to NSX-T segments.

  • Fixed Issue 2981647: Upgrade from NSX 3.2.0.1 to any version fails.

    Users cannot upgrade NSX Manager in their environment.

  • Fixed Issue 2978739: Deployment of Public Cloud Gateway fails on AWS when roles created with NSX-T 3.2.1 scripts do not have "route53:ListHostedZonesByVPC" permissions.

    All AWS PCG deployments fail.

  • Fixed Issue 2965357: When N-VDS to VDS migration runs simultaneously on more than 64 hosts, the migration fails on some hosts.

    As multiple hosts try to update the vCenter Server simultaneously, the migration fails during the TN_RECONFIG_HOST stage.

  • Fixed Issue 2962718: A bond management interface can lose members when Mellanox NICs are used on bare metal edge.

    The management interface lost connection with the edge after a reboot. A Mellanox interface was configured as one of the bond slaves.

  • Fixed Issue 2959934: L7 Load Balancer Nginx processes crash if the server keepalive is enabled.

    Layer 7 Load Balancer service is unavailable.

  • Fixed Issue 2954205: Upgrading NSX-T from 3.0.x to 3.2.0 or 3.2.1 causes nonconfig disk to grow for customers that are using the IDS/IPS feature.

    The nonconfig disk partition usage becomes too high and alarms are raised.

  • Fixed Issue 2937649: The dp-ipc thread and all fastpath threads go in a deadlock state when there are a large number of fragmented IP packets and firewall is enabled.

    Edge goes down and failover is performed to the standby edge node. The original active edge does not recover automatically.

  • Fixed Issue 2928030: Objects created on NSX Manager fail to get promoted to NSX Policy when "FirewallCpuMemThresholdsProfile" Service config profile is applied to NSGroup in Manager mode.

    You cannot promote NSX Manager objects to NSX Policy mode in this scenario.

  • Fixed Issue 2872892: When hosts are prepared for NSX by using the Quick Start feature, host's status is not consistent with the host cluster's status.

    The host cluster shows Prepared status, but the hosts show the "Applying NSX switch configuration" status.

  • Fixed Issue 2807744: After downgrading NSX-T VIBs on a host, new VMs that were migrated to this host faced connectivity issues.

    When the host's firmware was updated, the host lost a bootbank or reverted to an old bootbank. This caused the host to lose the latest NSX-T VIBs.

  • Fixed Issue 3005825: Bare metal edge management bond secondary interfaces may be lost after a reboot, or there may be incorrectly named interfaces.

    Loss of Edge management connectivity. Possible datapath connectivity issues.

  • Fixed Issue 2879979: IKE service may not initiate new IPsec route based session after "dead peer detection" has happened due to IPsec peer being unreachable.

    There could be an outage for the specific IPsec route-based session.

  • Fixed Issue 2885330: Effective members not shown for AD group.

    Effective members of the AD group are not displayed. There is no datapath impact.

    Workaround: None.

  • Fixed Issue 2879119: When a virtual router is added, the corresponding kernel network interface does not come up.

    Routing on the VRF fails. No connectivity is established for VMs connected through the VRF.

  • Fixed Issue 2355113: Workload VMs running RedHat and CentOS on Azure accelerated networking instances are not supported.

    In Azure, when accelerated networking is enabled on RedHat or CentOS based operating systems and the NSX Agent is installed, the ethernet interface does not obtain an IP address.

  • Fixed Issue 2561988: All IKE/IPSEC sessions are temporarily disrupted.

    Traffic outage will be seen for some time.

  • Fixed Issue 2889482: The wrong save confirmation is shown when updating segment profiles for discovered ports.

    The Policy UI allows editing of discovered ports but does not send the updated binding map for port update requests when segment profiles are updated. A false positive message is displayed after clicking Save. Segments appear to be updated for discovered ports, but they are not.

Known Issues

  • New - Issue 3041672: For config-only and DFW migration modes, once all the migration stages are successful, you invoke the pre-migrate and post-migrate APIs to move workloads. If you change the credentials of NSX for vSphere, vCenter Server, or NSX-T after the migration stages are successful, the API calls for pre-migrate and post-migrate will fail.

    You will not be able to move the workloads because the pre-migrate, post-migrate and finalize-infra API calls will fail.

    Workaround: Perform these steps.

    1. Restart the migration coordinator.

    2. On the migration UI, using the same migration mode as before the restart, provide all the authentication details. This should sync the migration progress.

    3. Run the pre-migrate, post-migrate, and finalize-infra APIs (see the sketch after these steps).
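
    The following is a minimal curl sketch of step 3. The endpoint paths follow the NSX-T Data Center Migration Guide for the vmgroup and infra migration APIs; the manager address, credentials, and group ID are placeholders:

      # Move workloads once the migration stages succeed (group_id is a placeholder).
      curl -k -u 'admin:<password>' -X POST \
        'https://<nsx-manager>/api/v1/migration/vmgroup?action=pre_migrate' \
        -H 'Content-Type: application/json' -d '{"group_id": "<vm-group-id>"}'

      curl -k -u 'admin:<password>' -X POST \
        'https://<nsx-manager>/api/v1/migration/vmgroup?action=post_migrate' \
        -H 'Content-Type: application/json' -d '{"group_id": "<vm-group-id>"}'

      # Finalize the infrastructure after all workloads are moved.
      curl -k -u 'admin:<password>' -X POST \
        'https://<nsx-manager>/api/v1/migration/infra?action=finalize_infra'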

  • Issue 3116294: Rule with nested group does not work as expected on hosts.

    Traffic is not allowed or skipped correctly.

    Workaround: See knowledge base article 91421.

  • Issue 3152195: DFW rules with Context Profiles with FQDN of type .*XYZ.com fail to be enforced.

    DFW rule enforcement does not work as expected in this specific scenario.

    Workaround: None.

  • Issue 3152174: Host preparation with VDS fails with error: Host {UUID} is not added to VDS value.

    On vCenter, if networks are nested within folders, migrations from N-VDS to CVDS, or from NSX-V to NSX-T, may fail if the migration target is CVDS in NSX-T.

    Workaround: The first network of the host is the first network visible in the network field on the vCenter MOB page https://<VC-IP>/mob?moid=host-moref

    • Prior to 3.2.1: The first network of the host, as described above, and the concerned VDS should be directly under the same folder. The folder can be either the DataCenter or a network folder inside the DataCenter.

    • From 3.2.1 and 4.0.0 onwards: The first network of the host, as described above, should be directly under a folder, and the desired VDS can be directly under the same folder or nested inside it. The folder can be either the DataCenter or a network folder inside the DataCenter.

  • Issue 3118868: Incorrect or stale vNIC filters programmed on pNIC when overlay filters are programmed around the same time as a pNIC is enabled.

    vNIC filters programmed on pNIC may be stale, incorrect, or missing when overlay filters are programmed around the same time as a pNIC is enabled, resulting in a possible performance regression.

    Workaround: None.

  • Issue 3113100: IP address is not realized for some VMs in the Dynamic security groups due to stale VIF entry.

    If a cluster has been initially set up for Networking and Security using Quick Start, uninstalled, and then reinstalled solely for Security purposes, DFW rules may not function as intended. This is because the auto-TZ that was generated for Networking and Security is still present and needs to be removed in order for the DFW rules to work properly.

    Workaround: Delete the auto-generated TZ from the Networking & Security Quick Start which references the same DVS as used by Security Only.

  • Issue 3113093: Newly added hosts are not configured for security.

    After the installation of security, when a new host is added to a cluster and connected to the Distributed Virtual Switch, it does not automatically trigger the installation of NSX on that host.

    Workaround: Make any update to the existing VDS in vCenter, or add a new VDS in vCenter and add all the hosts in the cluster to it.

    This auto-updates the TNP, and the TNP is reapplied on the TNC. When the TNC is updated, the newly added host receives the latest TNP configuration.

  • Issue 3113085: DFW rules are not applied to VM upon vMotion.

    When a VM protected by DFW is vMotioned from one host to another in a Security-Only Install deployment, the DFW rules may not be enforced on the ESX host, resulting in incorrect rule classification.

    Workaround: Connect VM to another network and then reconnect it back to the target DVPortgroup.

  • Issue 3113076: Core dumps not generated for FRR daemon crashes.

    In the event of FRR daemon crashes, core dumps are not generated by the system in the /var/dump directory. This can cause BGP to flap.

    Workaround: Enable the core dump for the FRR daemons, trigger the crash, and obtain the core dump from /var/dump.

    To enable the core dump, use the following command, which must be executed as the root user on the edge node.

    prlimit --pid <pid of the FRR daemon> --core=500000000:500000000

    To validate if the core dump is enabled for the FRR daemon, use the following command, and check the SOFT and HARD limits for the CORE resource. These limits must be 500000000 bytes or 500 MB.

    prlimit --pid <pid of the FRR daemon>
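
    For example, to raise the limit for one FRR daemon (bgpd is shown as an illustrative daemon name; run as root on the edge node):

      # Look up the daemon PID and raise its core-file soft and hard limits to 500 MB.
      FRR_PID=$(pidof bgpd)
      prlimit --pid "$FRR_PID" --core=500000000:500000000

      # Confirm the new CORE limits.
      prlimit --pid "$FRR_PID" | grep CORE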

  • Issue 3113073: DFW rules are not getting enforced for some time after enabling lockdown mode.

    Enabling lockdown mode on a transport node can cause a delay in the enforcement of DFW rules. This is because when lockdown mode is enabled on a transport node, the associated VM may be removed from the NSX inventory and then recreated. During this time gap, DFW rules may not be enforced on the VMs associated with that ESXi host.

    Workaround: Add 'da-user' to the exception list manually before putting the ESXi host into lockdown mode.

  • Issue 3113067: Unable to connect to NSX-T Manager after vMotion.

    When upgrading NSX from a version lower than NSX 3.2.1, NSX manager VMs are not automatically added to the firewall exclusion list. As a result, all DFW rules are applied to manager VMs, which can cause network connectivity problems.

    This issue does not occur in fresh deployments of NSX 3.2.2 or later versions. However, if you are upgrading from NSX 3.2.1 or an earlier version to any target version up to and including NSX 4.1.0, you may encounter this issue.

    Workaround: Contact VMware Support.

  • Issue 3106569: Performance not reaching expected levels with EVPN route server mode.

    vNIC filters programmed on pNIC may be stale, incorrect, or missing when overlay filters are programmed to the pNIC in a teaming situation, resulting in a possible performance regression.

    Workaround: None.

  • Issue 3094405: Incorrect or stale vNIC filters programmed to pNIC when overlay networks are configured.

    vNIC overlay filters are updated in a specific order. When updates occur in quick succession, only the first update is retained, and subsequent updates are discarded, resulting in incorrect filter programming and a possible performance regression.

    Workaround: None.

  • Issue 3069457: During NSX Security Only deployment upgrade from 3.2.x to 3.2.2 or 4.0.1.1, the host upgrade fails with the message, "NSX enabled switches already exist on host."

    Hosts on the UI show the status as Failed after upgrade and may create a datapath impact.

    Workaround: See knowledge base article 90298 for details.

  • Issue 3029159: Import of configuration failed due to the presence of Service Insertion feature entries on the Local Manager.

    NSX Federation does not support the Service Insertion feature. When you try to onboard a Local Manager site, which has Service Insertion VMs, in to the Global Manager, the following error is displayed in the UI:

    Unable to import due to these unsupported features: Service Insertion.

    Workaround:

    1. Use the following DELETE API to manually delete the unsupported Service Insertion entries from the Local Manager before initiating the import configuration workflow (a curl sketch follows these steps).

      DELETE https://<nsx_mgr_ip>/policy/api/v1/infra/service-references/<SERVICE_REF_ID>

    2. Redeploy Service Insertion after the Local Manager is onboarded to the Global Manager successfully.
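
    A minimal sketch of the DELETE call in step 1, assuming basic authentication and that the service-references collection is readable for listing the entries:

      # List service references to find the unsupported Service Insertion entries.
      curl -k -u 'admin:<password>' \
        'https://<nsx_mgr_ip>/policy/api/v1/infra/service-references'

      # Delete one entry by ID before starting the import workflow.
      curl -k -u 'admin:<password>' -X DELETE \
        'https://<nsx_mgr_ip>/policy/api/v1/infra/service-references/<SERVICE_REF_ID>'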

  • Issue 3019813: During an NSX-V to NSX-T migration, if you specify admin distance value as 0, you cannot proceed with the migration.

    Admin distance with 0 value is not supported in NSX-T.

    Workaround: Set the admin distance to a value other than 0.

  • Issues 3046183 and 3047028: After activating or deactivating one of the NSX features hosted on the NSX Application Platform, the deployment status of the other hosted NSX features changes to In Progress. The affected NSX features are NSX Network Detection and Response, NSX Malware Prevention, and NSX Intelligence.

    After deploying the NSX Application Platform, activating or deactivating the NSX Network Detection and Response feature causes the deployment statuses of the NSX Malware Prevention feature and NSX Intelligence feature to change to In Progress. Similarly, activating or deactivating the NSX Malware Prevention feature causes the deployment status of the NSX Network Detection and Response feature to change to In Progress. If NSX Intelligence is activated and you activate NSX Malware Prevention, the status for the NSX Intelligence feature changes to Down and Partially up.

    Workaround: None. The system recovers on its own.

  • Issue 2992964: During NSX-V to NSX-T migration, edge firewall rules with local Security Group cannot be migrated to NSX Global Manager.

    You must migrate the edge firewall rules that use a local Security Group manually. Otherwise, depending on the rule definitions (actions, order, and so on), traffic might get dropped during an edge cutover.

    Workaround: See the VMware knowledge base article 88428.

  • Issue 3012313: Upgrading NSX Malware Prevention or NSX Network Detection and Response from version 3.2.0 to NSX ATP 3.2.1 or 3.2.1.1 fails.

    After the NSX Application Platform is upgraded successfully from NSX 3.2.0 to NSX ATP 3.2.1 or 3.2.1.1, upgrading either the NSX Malware Prevention (MP) or NSX Network Detection and Response (NDR) feature fails with one or more of the following symptoms.

    1. The Upgrade UI window displays a FAILED status for NSX NDR and the cloud-connector pods.

    2. For an NSX NDR upgrade, a few pods with the prefix of nsx-ndr are in the ImagePullBackOff state.   

    3. For an NSX MP upgrade, a few pods with the prefix of cloud-connector are in the ImagePullBackOff state.   

    4. The upgrade fails after you click UPGRADE, but the previous NSX MP and NSX NDR functionalities still function the same as before the upgrade was started. However, the NSX Application Platform might become unstable.

    Workaround: See VMware knowledge base article 89418.

  • Issue 2931403: Network interface validation prevents API users from performing updates.

    A network interface on an edge VM can be configured with network resources such as portgroups, VLAN logical switches, or segments that are accessible for the specified compute and storage resources. The compute-id (resource pool moref) in the intent is stale and no longer present in vCenter Server after a power outage (the moref of the resource pool changed when vCenter Server was restored). API users are blocked from performing update operations.

    Workaround: Redeploy the edge and specify valid moref IDs.

  • Issue 2687084: After upgrade or restart, the Search API may return 400 error with Error code 60508, "Re-creating indexes, this may take some time."

    Depending on the scale of the system, the Search API and the UI are unusable until the re-indexing is complete.

    Workaround: None.

  • Issue 2854116: If you have VM templates that are backed by NSX, then after an N-VDS to VDS migration, N-VDS is not deleted.

    The migration is successful, but N-VDS is not deleted because it is still using the VM templates.

    Workaround: Convert the VM template to a VM either before or after starting the migration.

  • Issue 2992807: After upgrading from NSX-T 3.0 or 3.1 to NSX-T 3.2.1.1 /3.2.2 or 4.0.0.1, Transport Node goes into a failed state.

    Transport Node realization fails with the following error message:

    Failed to handle reply for TransportNodeHostSwitches migration to VDS.

    Workaround:

    1. Finish N-VDS to VDS migration in NSX-T 3.0 or 3.1.

    2. Run the GET API for the Transport Node and use the API response to update it back on the Transport Node. This update clears the migration parameters (see the sketch after these steps).

    3. Continue with the upgrade to NSX-T 3.2.2 or 4.0.0.1.
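
    A sketch of step 2, assuming the transport node UUID is known. The GET response (which includes the current _revision) is sent back unchanged in the PUT:

      # Fetch the current transport node configuration.
      curl -k -u 'admin:<password>' \
        'https://<nsx-manager>/api/v1/transport-nodes/<tn-uuid>' -o tn.json

      # Send the same payload back to clear the stale migration parameters.
      curl -k -u 'admin:<password>' -X PUT \
        'https://<nsx-manager>/api/v1/transport-nodes/<tn-uuid>' \
        -H 'Content-Type: application/json' -d @tn.json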

  • Issue 2932354: After replacing the Virtual IP certificate on the NSX Manager, communication between the Global Manager and the Local Manager is lost.

    You cannot view the status of the Local Manager from the Global Manager UI.

    Workaround:

    Update the Local Manager certificate thumbprint in the NSX Global Manager cluster. For more information, see the procedure explained in the VMware Cloud Foundation Administration Guide.

  • Issue 2981778: In an NSX Federated environment, configuring vIDM with External Load Balancer toggle pushes configuration across all sites instead of just the Local Manager.

    Users are not able to use separate vIDM instances with External Load Balancer toggle across multiple sites. All sites must be configured with the same vIDM instance.

    Workaround: None.

    NSX vIDM configuration does not support having multiple sites with a separate IDM instance at each site. You must do a manual reconfiguration if you want to use a separate vIDM instance at each site.

  • Issue 3020223: Edge transport node shows duplicate Transport Zone entries.

    There is no functional impact.

  • Issue 2936504: The loading spinner appears on top of the NSX Application Platform's monitoring page.

    When you view the NSX Application Platform page after the NSX Application Platform is successfully installed, the loading spinner is initially displayed on top of the page. This spinner might give the impression that there is some connectivity issue occurring when there is none.

    Workaround: As soon as the NSX Application Platform page is loaded, refresh the Web browser page to clear the spinner.

  • Issue 2949575: Powering off one Kubernetes worker node in the cluster puts the NSX Application Platform in a degraded state indefinitely.

    After one Kubernetes worker node is removed from the cluster without first draining the pods on it, the NSX Application Platform is placed in a degraded state. When you check the status of the pods using the kubectl get pod -n nsxi-platform command, some pods display the Terminating status, and have been in that status for a long time.

    Workaround: Manually delete each of the pods that display a Terminating status using the following information.

    1. From the NSX Manager or the runner IP host (Linux jump host from which you can access the Kubernetes cluster), run the following command to list all the pods with the Terminating status.

      kubectl get pod -A | grep Terminating
    2. Delete each pod listed using the following command.

      kubectl delete pod <pod-name> -n <pod-namespace> --force --grace-period=0
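
    If many pods are stuck, the two steps can be combined into a small shell loop (a sketch; review the list from step 1 before force-deleting anything):

      # Force-delete every pod currently reported as Terminating.
      kubectl get pod -A | grep Terminating | while read -r ns pod rest; do
        kubectl delete pod "$pod" -n "$ns" --force --grace-period=0
      done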

  • Issue 3025104: Host shows "Failed" state when a restore is performed with a different IP and the same FQDN.

    When a restore is performed using a different IP for the MP nodes but the same FQDN, hosts are not able to connect to the MP nodes.

    Workaround: Refresh the DNS cache for the host using the command: /etc/init.d/nscd restart

  • Issue 2969847: Incorrect DSCP priority.

    DSCP priority from a custom QoS profile is not propagated to host when the value is 0, resulting in traffic prioritization issues.

    Workaround: None.

  • Issue 2663483: The single-node NSX Manager will disconnect from the rest of the NSX Federation environment if you replace the APH-AR certificate on that NSX Manager.

    This issue is seen only with NSX Federation and a single-node NSX Manager cluster.

    Workaround: A single-node NSX Manager cluster is not a supported deployment option; deploy a three-node NSX Manager cluster.

  • Issue 2879734: Configuration fails when same self-signed certificate is used in two different IPsec local endpoints.

    Failed IPsec session will not be established until the error is resolved.

    Workaround: Use unique self-signed certificate for each local endpoint.

  • Issue 2879133: Malware Prevention feature can take up to 15 minutes to start working.

    When the Malware Prevention feature is configured for the first time, it can take up to 15 minutes for the feature to be initialized. During this initialization, no malware analysis will be done, but there is no indication that the initialization is occurring.

    Workaround: Wait 15 minutes.

  • Issue 2868944: UI feedback is not shown when migrating more than 1,000 DFW rules from NSX for vSphere to NSX-T Data Center, but the sections are subdivided into groups of 1,000 rules or fewer.

    UI feedback is not shown.

    Workaround: Check the logs.

  • Issue 2848614: When joining an MP to an MP cluster where publish_fqdns is set and the forward or reverse lookup entry is missing in the external DNS server, or the DNS entry is missing for the joining node, forward or reverse alarms are not generated for the joining node.

    Forward or reverse alarms are not generated for the joining node even though the forward or reverse lookup entry is missing in the DNS server.

    Workaround: Configure the external DNS server for all Manager nodes with forward and reverse DNS entries.

  • Issue 2871585: Removal of a host from the DVS and DVS deletion are allowed for DVS versions less than 7.0.3 after the NSX Security on vSphere DVPortgroups feature is enabled on the clusters using the DVS.

    You may have to resolve any issues in transport node or cluster configuration that arise from a host being removed from DVS or DVS deletion.

    Workaround: None.

  • Issue 2870085: Security policy level logging to enable or disable logging for all rules is not working.

    You will not be able to change the logging of all rules by changing the "logging_enabled" field of the security policy.

    Workaround: Modify each rule to enable or disable logging (see the sketch below).
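
    For example, per-rule logging can be toggled through the Policy API by patching the rule's logged field (the policy and rule IDs are placeholders, and PATCH is assumed to merge the supplied field into the existing rule):

      # Enable logging on a single distributed firewall rule.
      curl -k -u 'admin:<password>' -X PATCH \
        'https://<nsx-manager>/policy/api/v1/infra/domains/default/security-policies/<policy-id>/rules/<rule-id>' \
        -H 'Content-Type: application/json' -d '{"logged": true}'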

  • Issue 2884939: NSX-T Policy API results in error: Client 'admin' exceeded request rate of 100 per second (Error code: 102).

    The NSX rate limit of 100 requests per second is reached when a large number of virtual services are migrated from NSX for vSphere to NSX-T ALB, and all APIs are temporarily blocked.

    Workaround: Update the client API rate limit to 200 or more requests per second (see the sketch below).

    Note: A fix is available in the AVI 21.1.4 release.
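
    The client API rate limit is part of the cluster API service configuration. A sketch of raising it to 200, assuming the /api/v1/cluster/api-service endpoint carries the client_api_rate_limit field:

      # Read the current API service settings.
      curl -k -u 'admin:<password>' \
        'https://<nsx-manager>/api/v1/cluster/api-service' -o api-service.json

      # Edit client_api_rate_limit to 200 in api-service.json, then apply it.
      curl -k -u 'admin:<password>' -X PUT \
        'https://<nsx-manager>/api/v1/cluster/api-service' \
        -H 'Content-Type: application/json' -d @api-service.json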

  • Issue 2792485: NSX manager IP is shown instead of FQDN for manager installed in vCenter.

    The NSX-T UI integrated in vCenter shows the NSX Manager IP instead of the FQDN for the installed manager.

    Workaround: None.

  • Issue 2888207: Unable to reset local user credentials when vIDM is enabled.

    You are unable to change local user passwords while vIDM is enabled.

    Workaround: vIDM configuration must be (temporarily) disabled, the local credentials reset during this time, and then integration re-enabled.

  • Issue 2877776: "get controllers" output may show stale information about controllers that are not the master when compared to the controller-info.xml file.

    This CLI output is confusing.

    Workaround: Restart nsx-proxy on that TN.

  • Issue 2874995: LCores priority may remain high even when not used, rendering them unusable by some VMs.

    Performance degradation for "Normal Latency" VMs.

    Workaround: There are two options.

    • Reboot the system.

    • Remove the high priority LCores and then recreate them. They will then default back to normal priority LCores.

  • Issue 2854139: Continuous addition/removal of BGP routes into RIB for a topology where Tier0 SR on edge has multiple BGP neighbors and these BGP neighbors are sending ECMP prefixes to the Tier0 SR.

    Traffic drop for the prefixes that are getting continuously added/deleted.

    Workaround: Add an inbound routemap that filters the BGP prefix which is in the same subnet as the static route nexthop.

  • Issue 2853889: When creating EVPN Tenant Config (with vlan-vni mapping), Child Segments are created, but the child segment's realization status gets into failed state for about 5 minutes and recovers automatically.

    It will take 5 minutes to realize the EVPN tenant configuration.

    Workaround: None. Wait 5 minutes.

  • Issue 2690457: When joining an MP to an MP cluster where publish_fqdns is set on the MP cluster and where the external DNS server is not configured properly, the proton service may not restart properly on the joining node.

    The joining manager will not work and the UI will not be available.

    Workaround: Configure the external DNS server with forward and reverse DNS entries for all Manager nodes.

  • Issue 2490064: Attempting to disable VMware Identity Manager with "External LB" toggled on does not work.

    After enabling VMware Identity Manager integration on NSX with "External LB", if you attempt to then disable integration by switching "External LB" off, after about a minute, the initial configuration will reappear and overwrite local changes.

    Workaround: When attempting to disable vIDM, do not toggle the External LB flag off; only toggle off vIDM Integration. This will cause that config to be saved to the database and synced to the other nodes.

  • Issue 2668717: Intermittent traffic loss might be observed for E-W routing between the vRA created networks connected to segments sharing Tier-1.

    In cases where vRA creates multiple segments and connects to a shared ESG, migration from NSX for vSphere to NSX-T will convert such a topology to a shared Tier-1 connected to all segments on the NSX-T side. During the host migration window, intermittent traffic loss might be observed for E-W traffic between workloads connected to the segments sharing the Tier-1.

    Workaround: None.

  • Issue 2684574: If the edge has 6K+ routes for Database and Routes, the Policy API times out.

    These Policy APIs for the OSPF database and OSPF routes return an error if the edge has 6K+ routes:

    /tier-0s/<tier-0s-id>/locale-services/<locale-service-id>/ospf/routes
    /tier-0s/<tier-0s-id>/locale-services/<locale-service-id>/ospf/routes?format=csv
    /tier-0s/<tier-0s-id>/locale-services/<locale-service-id>/ospf/database
    /tier-0s/<tier-0s-id>/locale-services/<locale-service-id>/ospf/database?format=csv

    These are read-only APIs and have an impact only if the API or UI is used to download 6K+ routes for the OSPF routes and database.

    Workaround: Use the CLI commands to retrieve the information from the edge.

  • Issue 2574281: Policy will only allow a maximum of 500 VPN Sessions.

    NSX claims support for 512 VPN sessions per edge in the large form factor. However, because Policy does auto plumbing of security policies, Policy will only allow a maximum of 500 VPN sessions. Upon configuring the 501st VPN session on Tier-0, the following error message is shown:

    {'httpStatus': 'BAD_REQUEST', 'error_code': 500230, 'module_name': 'Policy', 'error_message': 'GatewayPolicy path=[/infra/domains/default/gateway-policies/VPN_SYSTEM_GATEWAY_POLICY] has more than 1,000 allowed rules per Gateway path=[/infra/tier-0s/inc_1_tier_0_1].'}

    Workaround: Use Management Plane APIs to create additional VPN Sessions.

  • Issue 2839782: Unable to upgrade from NSX-T 2.4.1 to 2.5.1 because the CRL entity is large and Corfu imposes a size limit in 2.4.1, thereby preventing the CRL entity from being created in Corfu during the upgrade.

    Unable to upgrade.

    Workaround: Replace certificate with a certificate signed by a different CA.

  • Issue 2838613: For ESX versions less than 7.0.3, NSX security functionality is not enabled on a VDS upgraded from version 6.5 to a higher version after security installation on the cluster.

    NSX security features are not enabled on the VMs connected to a VDS upgraded from 6.5 to a higher version (6.6+) where the NSX Security on vSphere DVPortgroups feature is supported.

    Workaround: After VDS is upgraded, reboot the host and power on the VMs to enable security on the VMs.

  • Issue 2491800: AR channel SSL certificates are not periodically checked for their validity, which could lead to using an expired/revoked certificate for an existing connection.

    The connection would be using an expired/revoked SSL.

    Workaround: Restart the APH on the Manager node to trigger a reconnection.

  • Issue 2558576: Global Manager and Local Manager versions of a global profile definition can differ and might have an unknown behavior on Local Manager.

    Global DNS, session, or flood profiles created on Global Manager cannot be applied to a local group from UI, but can be applied from API. Hence, an API user can accidentally create profile binding maps and modify global entity on Local Manager.

    Workaround: Use the UI to configure the system.

  • Issue 2950206: CSM is not accessible after MPs are upgraded and before CSM upgrade.

    When MP is upgraded, the CSM appliance is not accessible from the UI until the CSM appliance is upgraded completely. NSX services on CSM are down at this time. It's a temporary state where CSM is inaccessible during an upgrade. The impact is minimal.

    Workaround: This is an expected behavior. You have to upgrade the CSM appliance to access CSM UI and ensure all services are running.

  • Issue 2945515: NSX tools upgrade in Azure can fail on Redhat Linux VMs.

    By default, NSX tools are installed in the /opt directory. However, during NSX tools installation, the default path can be overridden with the "--chroot-path" option passed to the install script.

    Insufficient disk space on the partition where NSX tools are installed can cause the NSX tools upgrade to fail.

    Workaround: Increase the size of the partition on which NSX tools are installed and then initiate the NSX tools upgrade. Steps for increasing the disk space are described at https://docs.microsoft.com/en-us/azure/virtual-machines/linux/resize-os-disk-gpt-partition.

  • Issue 2882154: Some of the pods are not listed in the output of "kubectl top pods -n nsxi-platform".

    The output of "kubectl top pods -n nsxi-platform" does not list all pods for debugging. This does not affect deployment or normal operation, and there is no functional impact; only debugging might be affected.

    Workaround: There are two workarounds:

    • Workaround 1: Make sure the Kubernetes cluster comes up with version 0.4.x of the metrics-server pod before deploying NAPP platform. This issue is not seen when metrics-server 0.4.x is deployed.

    • Workaround 2: Delete the metrics-server instance deployed by the NAPP charts and deploy upstream Kubernetes metrics-server 0.4.x.
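
    A sketch of workaround 2; the namespace of the chart-deployed metrics-server and the exact 0.4.x manifest URL are assumptions, so locate the deployment first:

      # Find and remove the metrics-server deployed by the NAPP charts.
      kubectl get deployment -A | grep metrics-server
      kubectl -n <namespace> delete deployment metrics-server

      # Deploy upstream Kubernetes metrics-server 0.4.x in its place.
      kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.5/components.yaml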

  • Issue 2871440: Workloads secured with NSX Security on vSphere dvPortGroups lose their security settings when they are vMotioned to a host connected to an NSX Manager that is down.

    For clusters installed with the NSX Security on vSphere dvPortGroups feature, VMs that are vMotioned to hosts connected to a downed NSX Manager do not have their DFW and security rules enforced. These security settings are re-enforced when connectivity to NSX Manager is re-established.

    Workaround: Avoid vMotion to affected hosts when NSX Manager is down. If other NSX Manager nodes are functioning, vMotion the VM to another host that is connected to a healthy NSX Manager.

  • Issue 2898020: The error 'FRR config failed:: ROUTING_CONFIG_ERROR (-1)' is displayed in the status of transport nodes.

    The edge node rejects a route-map sequence configured with a deny action that has more than one community list attached to its match criteria. If the edge nodes do not have the admin-intended configuration, unexpected behavior results.

    Workaround: None

  • Issue 2910529: Edge loses IPv4 address after DHCP allocation.

    After the Edge VM is installed and receives an IP address from the DHCP server, it loses the IP address within a short time and becomes inaccessible. This happens because the DHCP server does not provide a gateway, so the Edge node loses its IP.

    Workaround: Ensure that the DHCP server provides the proper gateway address. If not, perform the following steps:

    1. Log in to the console of the Edge VM as admin.

    2. Run "stop service dataplane".

    3. Run "set interface <mgmt intf> dhcp plane mgmt".

    4. Run "start service dataplane".

  • Issue 2942900: The identity firewall does not work for event log scraping when Active Directory queries time out.

    The identity firewall issues a recursive Active Directory query to obtain the user's group information. Active Directory queries can time out with a NamingException 'LDAP response read timed out, timeout used: 60000 ms'. Therefore, firewall rules are not populated with event log scraper IP addresses.

    Workaround: To improve recursive query times, Active Directory admins may organize and index the AD objects.
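
    For context, the recursive (transitive) group lookup resembles the following minimal Python sketch using the ldap3 library and Active Directory's matching-rule-in-chain OID. The server, credentials, and DNs are placeholders, and the 60-second receive timeout mirrors the limit in the error message.

      # Minimal sketch of a recursive AD group-membership query, similar in
      # spirit to the identity firewall's lookup. The server, credentials, and
      # DNs below are placeholders, not values used by NSX.
      from ldap3 import Server, Connection, SUBTREE

      server = Server("ldaps://ad.example.com", connect_timeout=10)
      conn = Connection(server, user="EXAMPLE\\svc-account", password="secret",
                        auto_bind=True, receive_timeout=60)  # 60 s, as in the error

      # 1.2.840.113556.1.4.1941 is AD's matching-rule-in-chain OID; it expands
      # nested group membership in a single, potentially slow, query.
      conn.search(
          search_base="DC=example,DC=com",
          search_filter="(member:1.2.840.113556.1.4.1941:="
                        "CN=jdoe,CN=Users,DC=example,DC=com)",
          search_scope=SUBTREE,
          attributes=["cn"],
      )
      print([entry.cn.value for entry in conn.entries])  # nested groups included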

  • Issue 2958032: If you are using NSX-T 3.2 or upgrading to an NSX-T 3.2 maintenance release, the file type is not shown properly and is truncated at 12 characters on the Malware Prevention dashboard.

    On the Malware Prevention dashboard, when you click to see the details of an inspected file, you see incorrect data because the file type is truncated at 12 characters. For example, for a file with File Type WindowsExecutableLLAppBundleTarArchiveFile, you only see WindowsExecu as the File Type in the Malware Prevention UI.

    Workaround: Perform a fresh NAPP installation with an NSX-T 3.2 maintenance build instead of upgrading from NSX-T 3.2 to an NSX-T 3.2 maintenance release.

  • Issue 2954520: When a Segment is created from Policy and a Bridge is configured from MP, the option to detach bridging is not available for that Segment in the UI.

    You cannot detach or update bridging from the UI if the Segment was created from Policy and the Bridge was configured from MP.

    If a Segment is created on the Policy side, configure bridging only from the Policy side. Similarly, if a Logical Switch is created on the MP side, configure bridging only from the MP side.

    Workaround: Use the APIs to remove bridging (a scripted sketch follows these steps):

    1. Update the concerned LogicalPort and remove the attachment:

    PUT :: https://<mgr-ip>/api/v1/logical-ports/<logical-port-id>
    Include the header X-Allow-Overwrite: true in the PUT request.

    2. Delete the BridgeEndpoint:

    DELETE :: https://<mgr-ip>/api/v1/bridge-endpoints/<bridge-endpoint-id>

    3. Delete the LogicalPort:

    DELETE :: https://<mgr-ip>/api/v1/logical-ports/<logical-port-id>
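
    The three steps above can be scripted. The following is a minimal Python sketch using the requests library, with basic authentication and a self-signed Manager certificate assumed; the placeholders are the same as in the steps above.

      # Minimal sketch of the three API steps above, using the requests
      # library. The credentials and verify=False are assumptions for a lab
      # setup, not recommendations.
      import requests

      MGR = "https://<mgr-ip>"           # placeholder, as in the steps above
      LP_ID = "<logical-port-id>"        # placeholder
      BEP_ID = "<bridge-endpoint-id>"    # placeholder

      s = requests.Session()
      s.auth = ("admin", "<password>")   # assumed basic authentication
      s.verify = False                   # only for a self-signed Manager cert
      s.headers["X-Allow-Overwrite"] = "true"

      # 1. Fetch the logical port, drop its attachment, and PUT it back.
      lp = s.get(f"{MGR}/api/v1/logical-ports/{LP_ID}").json()
      lp.pop("attachment", None)
      s.put(f"{MGR}/api/v1/logical-ports/{LP_ID}", json=lp).raise_for_status()

      # 2. Delete the bridge endpoint.
      s.delete(f"{MGR}/api/v1/bridge-endpoints/{BEP_ID}").raise_for_status()

      # 3. Delete the logical port.
      s.delete(f"{MGR}/api/v1/logical-ports/{LP_ID}").raise_for_status()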

  • Issue 2919218: Selections made for host migration are reset to default values after the MC service restarts.

    After the MC service restarts, all previously made selections relevant to host migration, such as enabling or disabling clusters, migration mode, and cluster migration ordering, are reset to their default values.

    Workaround: Make all the selections relevant to host migration again after the MC service restarts.
