Follow these recommendations when using Clustered VMDKs with WSFC.

  1. Do not present LUNs used for clustered VMDKs to ESXi hosts unless the host runs ESXi 7.0 or later. Presenting such LUNs to older hosts might cause slow boot times, an unresponsive hostd, and other issues. A host with a version lower than ESXi 7.0 cannot mount a clustered VMDK datastore, because the ESXi hosts on which WSFC VMs run must hold a physical persistent reservation (SCSI/NVMe) of type WEAR (Write Exclusive All Registrants) on the device. A host must have ESXi 8.0 or later to mount a clustered VMDK datastore if the backend LUN is created from an NVMe FC SAN. A host-version check is sketched after this list.

    With vSphere 8.0 U2, a new configuration that combines WSFC, clustered VMDKs, and Windows Server 2022 or later supports NVMe virtual adapters.

    With vSphere 8.0 U3, clustered VMDKs also support storage from an NVMe/TCP array using PVSCSI or NVMe controllers for Windows Server 2022 or later.

  2. Make sure that all VMs hosting nodes of a WSFC are migrated off or powered off properly before you remove them from a clustered VMDK datastore, so that resources such as heartbeat (HB) slots are freed. If a VM fails, or an all-paths-down (APD) condition occurs on the clustered VMDK datastore during power-off, always power the VM on and power it off again before removing it from the cluster. A guest-shutdown sketch follows this list.
  3. Do not combine clustered and non-clustered VMDKs on the same clustered datastore, even though VMs that use non-shared disks on a clustered datastore continue to work normally and support all operations such as snapshots and clones. A datastore inventory sketch after this list can help spot such mixing.
  4. Do not keep clustered VMDKs for different clusters on the same shared datastore. Use a different clustered datastore for different WSFC clusters.
  5. Set the virtual hardware compatibility (vHardware) to vSphere 7.0 or later when using the clustered VMDK feature, as in the hardware-version sketch after this list.
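
For recommendation 1, the following is a minimal pyVmomi sketch, not taken from the product documentation, that reports the ESXi version of every host managed by a vCenter Server before you present clustered-VMDK LUNs. The vCenter address and credentials are placeholders for this example.

```python
# Sketch: report ESXi host versions so clustered-VMDK LUNs are presented only
# to hosts running ESXi 7.0 or later (8.0 or later for NVMe FC SAN LUNs).
# "vcenter.example.com" and the credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        version = host.summary.config.product.version  # e.g. "7.0.3"
        major = int(version.split(".")[0])
        verdict = "OK" if major >= 7 else "TOO OLD for clustered VMDK"
        print(f"{host.name}: ESXi {version} -> {verdict}")
finally:
    Disconnect(si)
```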
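
For recommendation 2, this sketch shuts down the WSFC node VMs cleanly before they are removed from the clustered VMDK datastore. It assumes VMware Tools is running in each guest; the node names "wsfc-node-1" and "wsfc-node-2" and the vCenter details are hypothetical.

```python
# Sketch: gracefully shut down WSFC node VMs before removing them from a
# clustered VMDK datastore, so heartbeat (HB) slots and other resources are
# freed. VM names and vCenter details are placeholders; assumes VMware Tools.
import ssl
import time
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_vm(content, name):
    """Return the first VM whose name matches (hypothetical helper)."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    return next(vm for vm in view.view if vm.name == name)

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    for name in ["wsfc-node-1", "wsfc-node-2"]:  # hypothetical node names
        vm = find_vm(content, name)
        if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
            vm.ShutdownGuest()  # graceful guest OS shutdown via VMware Tools
            while vm.runtime.powerState != vim.VirtualMachinePowerState.poweredOff:
                time.sleep(5)   # poll until the node is fully powered off
        print(f"{name} is powered off")
finally:
    Disconnect(si)
```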
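
For recommendations 3 and 4, the sketch below inventories the VMs on a given datastore and flags which of their disks sit behind a SCSI controller with physical bus sharing, the controller setting WSFC clustered VMDKs use. It is a heuristic only (it does not cover NVMe controllers), and the datastore name "clustered-ds-01" is a placeholder.

```python
# Sketch: list VMs on a clustered datastore and mark disks attached to a
# physically shared SCSI controller as "clustered", to help spot clustered
# and non-clustered VMDKs mixed on one datastore. Names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    ds = next(d for d in view.view if d.name == "clustered-ds-01")
    for vm in ds.vm:  # every VM with files on this datastore
        devices = vm.config.hardware.device
        # Keys of SCSI controllers configured with physical bus sharing
        shared_keys = {d.key for d in devices
                       if isinstance(d, vim.vm.device.VirtualSCSIController)
                       and d.sharedBus == "physicalSharing"}
        for dev in devices:
            if isinstance(dev, vim.vm.device.VirtualDisk):
                kind = ("clustered" if dev.controllerKey in shared_keys
                        else "non-clustered")
                print(f"{vm.name}: {dev.deviceInfo.label} -> {kind}")
finally:
    Disconnect(si)
```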
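
For recommendation 5, this sketch checks a node VM's virtual hardware version and upgrades it to vmx-17, the hardware version that corresponds to vSphere 7.0 compatibility, if it is older. The VM name and vCenter details are placeholders, and the VM must be powered off before a hardware upgrade.

```python
# Sketch: verify that a WSFC node VM uses virtual hardware compatibility
# vSphere 7.0 or later (vmx-17 or higher) and upgrade it if needed.
# "wsfc-node-1" and the vCenter details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "wsfc-node-1")
    hw = int(vm.config.version.split("-")[1])  # "vmx-17" -> 17
    if hw < 17:  # vmx-17 corresponds to vSphere 7.0 compatibility
        # The VM must be powered off before the hardware upgrade.
        WaitForTask(vm.UpgradeVM_Task(version="vmx-17"))
        print(f"{vm.name} upgraded to vmx-17")
    else:
        print(f"{vm.name} already at vmx-{hw}")
finally:
    Disconnect(si)
```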