Refer to this topic for issues and limitations that can occur when configuring IPv6 for NSX.

Table 1. VMware vCenter Fails to be Added as a Compute Manager When Using its IPv6 Address
Problem Workflow that results in the problem:
  1. Deploy a dual stack (both IPv4 and IPv6) NSX Manager cluster.
  2. Deploy a dual stack VMware vCenter.
  3. Add the VMware vCenter as a compute manager using its IPv6 address. The operation fails.
Cause A dual stack NSX Manager requires that both its IPv4 and IPv6 addresses point to the same FQDN, and that FQDN must be the one used to configure the NSX Manager.
Solution Configure both the IPv4 and IPv6 addresses that are used to deploy NSX Manager to resolve to the same FQDN. See NSX Manager Installation Requirements.
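For example, you can confirm from any machine that uses the same DNS server that both address families resolve to the same FQDN. The FQDN and addresses below are placeholders for your environment:
nslookup nsxmgr01.example.com               # expect the IPv4 (A) record
nslookup -type=AAAA nsxmgr01.example.com    # expect the IPv6 (AAAA) record
nslookup 10.10.10.11                        # reverse lookup of the placeholder IPv4 address
nslookup 2001:db8::11                       # reverse lookup of the placeholder IPv6 address
Both reverse lookups must return the FQDN that was used to configure the NSX Manager.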
Table 2. NSX Appliance Fails to Deploy on a Dual Stack ESXi Host with an IPv4 Only NSX Manager
Problem Workflow that results in the problem:
  1. Deploy a topology with the following:
    • Dual stack (both IPv4 and IPv6) VMware vCenter
    • Dual stack ESXi
    • IPv4 only NSX Manager
  2. Deploy NSX Managers as IPv4 only.
  3. Add the VMware vCenter as a compute manager.
  4. Start the Add NSX Appliance wizard, select a dual stack ESXi host from the cluster, and deploy using the NSX Manager IPv4 address.
  5. The installation fails with the following error:
    Error occurred during vmdk transfer. java.net.SocketException Protocol family unavailable
Cause When a dual stack ESXi host is registered in VMware vCenter using its IPv6 address, a limitation prevents the host from communicating with an IPv4 only NSX Manager.
Solution The workaround is to onboard the ESXi host to the VMware vCenter cluster with its IPv4 address.
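Before onboarding, you can verify that the host's management VMkernel interface has a usable IPv4 address. The interface name vmk0 below is the typical management interface and is an assumption; adjust it for your environment:
esxcli network ip interface ipv4 get -i vmk0
Then add the host to the VMware vCenter cluster using that IPv4 address instead of its IPv6 address.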
Table 3. Deploying NSX Manager with a Thick Provision Eager Zeroed Virtual Disk Format Fails
Problem Workflow that results in the problem:
  1. Deploy a supported VMware vCenter and ESXi topology. See Supported Topologies for IPv6.
  2. Add the VMware vCenter as the compute manager.
  3. Start the Add NSX Appliance wizard and deploy the third NSX Manager with the Thick Provision Eager Zeroed virtual disk format. The deployment progress remains stuck at 1%.
Cause Creating a thick provision eager zeroed disk takes longer than the configured task timeout.
Solution In VMware vCenter, increase the vpxa configuration setting task.completedLifetime from its default value of 600 seconds:
[root@sc1-10-78-185-35:/] configstorecli config default get -c esx -g services -k vpxa
{
   ...
   "task": {
      ...
      "completed_lifetime": 600,
      ...
   },
   ...
}
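The output above shows only the default value. The following is a minimal sketch of one way to capture the current vpxa configuration, raise completed_lifetime, and apply the change on the ESXi host. The -infile option of the set command is an assumption about your configstorecli build; verify the exact syntax with configstorecli config current set --help before using it:
# Dump the current vpxa configuration to a file.
configstorecli config current get -c esx -g services -k vpxa > /tmp/vpxa.json
# Edit /tmp/vpxa.json and raise "completed_lifetime" (for example, to 1800 seconds),
# then apply the modified file (option name assumed; verify on your build).
configstorecli config current set -c esx -g services -k vpxa -infile /tmp/vpxa.json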
Table 4. Adding an IPv6 Only ESX Host to a vLCM Cluster Fails
Problem Workflow that results in the problem:
  1. Deploy a topology with the following:
    • Dual stack (both IPv4 and IPv6) NSX Manager
    • IPv6 only VMware vCenter
    • IPv6 only ESX
  2. Add the VMware vCenter as a compute manager.
  3. Create a vSphere Lifecycle Manager (vLCM) cluster.
  4. Create an NSX transport node profile and apply the profile to the vLCM cluster.
  5. Add an ESX host to the vLCM cluster. The operation returns an error and fails.
Cause The vLCM workflow is not supported with IPv6 only ESX and IPv6 only VMware vCenter in ESX and vCenter software versions 7.x or earlier.
Solution None. This is a known limitation.
Table 5. Uninstalling Transport Node Clears the IPv6 DNS Configuration
Problem Workflow that results in the problem:
  1. Deploy a topology with the following:
    • Dual stack (both IPv4 and IPv6) NSX Manager
    • Dual stack VMware vCenter
    • IPv6 only and dual stack ESXi
  2. Install NSX on the cluster, add a transport node profile, and add a VLAN transport zone with DHCP.
  3. Uninstall NSX from the cluster. This clears the IPv6 DNS configuration for IPv6 only and dual stack ESXi hosts.
Cause This problem occurs in vCenter and ESX software versions 7.0.3 and earlier.
Solution None for ESXi 7.0.3 or earlier. The workaround is to reconfigure the DNS server on the affected hosts, as shown in the example below.

You can upgrade ESXi to 8.0.0.1 to fix this issue.
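If the IPv6 DNS configuration was cleared, you can restore it from the ESXi host shell. The server address and search domain below are placeholders; substitute the values for your environment:
# Re-add the IPv6 DNS server and the search domain (placeholder values).
esxcli network ip dns server add --server=2001:db8::53
esxcli network ip dns search add --domain=example.com
# Verify the restored configuration.
esxcli network ip dns server list
esxcli network ip dns search list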