
As with VMware vSphere VMFS and NFS datastores, vSAN prevents multiple virtual machines (VMs) from opening the same virtual machine disk (VMDK) in read/write mode. This protects the data stored on the virtual disk from corruption caused by multiple writers on the non-cluster-aware file systems used by most guest operating systems.

To enable in-guest systems that leverage cluster-aware file systems with distributed write (multiwriter) capability, we must explicitly enable multiwriter support for all applicable VMs and VMDKs. vSAN supports the multiwriter option for this purpose.

A list of supported and unsupported actions associated with enabling the multiwriter attribute can be found in VMware knowledge base article 2121181, “Using Oracle RAC on a vSphere 6.x vSAN Datastore.”

Table 1 shows the supported and unsupported actions and features with the multiwriter flag.

NOTE: vSphere Storage vMotion is unsupported for shared disks using the multiwriter attribute.


VMware knowledge base article 2121181 cites the following restrictions to using shared disks with the multiwriter attribute:

  • Because vSAN does not support raw device mappings (RDMs), this document applies only to virtual disks resident on the vSAN datastore.
  • Because vSAN does not support SCSI-3 persistent reservations, storage-based fencing is not supported using multiwriter mode. Host-based fencing must be used instead.
  • When using multiwriter mode, the virtual disk must be eager-zeroed thick (EZT). When creating an EZT disk on vSAN, the disk is not zeroed automatically. Customers need to contact the VMware Cloud on AWS SRE team if this is a requirement. For more information, see VMware knowledge base article 1033570, “Powering on the virtual machine fails with the error: Thin/TBZ disks cannot be opened in multiwriter mode.”
  • Sharing virtual disks in multiwriter mode on vSAN is limited to eight ESXi/ESX 6.x hosts up to and including vSphere 6.7. From vSphere 6.7 Update 1 onwards, virtual disk sharing in multiwriter mode has been extended to more than eight hosts. To enable this feature, the /VMFS3/GBLAllowMW advanced configuration option must be enabled (a programmatic sketch follows this list). This support has also been extended to VMFS5 and VMFS6. For more information, see VMware knowledge base article 1034165, "Enabling or disabling simultaneous write protection provided by VMFS using the multi-writer flag".
  • VMs with multiwriter VMDKs cannot be protected with VMware Site Recovery Manager™ or VMware vSphere Replication™.
  • Starting with vSAN 6.2, the Client Cache, used on both hybrid and all-flash vSAN configurations, leverages DRAM local to the virtual machine to accelerate read performance. The amount of memory allocated is 0.4 percent of host memory, up to 1 GB per host. Because the cache is local to the virtual machine, it can take advantage of the low latency of memory instead of reaching across the network for the data. In testing with read-cache-friendly workloads, it significantly reduced read latency. For more information on the Client Cache feature, see the VMware vSAN Design and Sizing Guide.
    The only caveat is that the Client Cache is automatically disabled for a VMDK when multiwriter is enabled. Enabling multiwriter indicates that the VMDK can be opened by multiple VMs on the same or different hosts. Because the Client Cache is local to each host and targets re-use of that host's hot data, it is disabled for VMDKs shared with the multiwriter attribute.
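
For reference, the advanced option mentioned in the list above can also be set through the vSphere API instead of per-host esxcli. The following is a minimal pyVmomi sketch that applies the setting to every host in the vCenter inventory; the vCenter address and credentials are placeholders, and it assumes the dotted key VMFS3.GBLAllowMW corresponds to the /VMFS3/GBLAllowMW path cited in the knowledge base article.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder connection details; adjust for your environment.
    ssl_ctx = ssl._create_unverified_context()  # lab use only
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ssl_ctx)
    content = si.RetrieveContent()

    # Walk every ESXi host visible to vCenter and enable the option (integer value 1).
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        opt = vim.option.OptionValue(key="VMFS3.GBLAllowMW", value=1)  # assumed key name
        host.configManager.advancedOption.UpdateOptions(changedValue=[opt])

    Disconnect(si)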

NOTE: Shared VMDKs do not need to be set to independent persistent mode when using the multiwriter attribute.

The multiwriter flag for a shared VMDK backed by VMware hyper-converged infrastructure (HCI) vSAN can be set using the vSphere Web Client: edit the virtual machine's settings and set the disk's Sharing field to Multi-writer.

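Where a scripted approach is preferred, the same flag can also be applied through the vSphere API. The following is a minimal pyVmomi sketch; the vCenter address, credentials, VM name (RAC-VM-1), and disk label (Hard disk 2) are hypothetical placeholders.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ssl_ctx = ssl._create_unverified_context()  # lab use only
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ssl_ctx)
    content = si.RetrieveContent()

    # Look up the VM that owns the shared disk (placeholder name).
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "RAC-VM-1")

    # Locate the shared VMDK by its label (placeholder) and mark it multiwriter.
    disk = next(d for d in vm.config.hardware.device
                if isinstance(d, vim.vm.device.VirtualDisk)
                and d.deviceInfo.label == "Hard disk 2")

    disk_change = vim.vm.device.VirtualDeviceSpec()
    disk_change.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
    disk_change.device = disk
    disk_change.device.backing.sharing = "sharingMultiWriter"

    WaitForTask(vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[disk_change])))
    Disconnect(si)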

The steps for adding shared VMDKs using the multiwriter attribute to an Oracle RAC cluster online, without any downtime, can be found here.
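
As a companion to those steps, the sketch below illustrates hot-adding a new eager-zeroed thick VMDK with the multiwriter sharing mode to a running VM via pyVmomi. The VM name, controller selection, unit number, and disk size are hypothetical placeholders, and the disk would still need to be added to the remaining RAC nodes afterward.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ssl_ctx = ssl._create_unverified_context()  # lab use only
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ssl_ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "RAC-VM-1")  # placeholder VM name

    # Reuse an existing paravirtual SCSI controller on the VM (placeholder choice).
    controller = next(d for d in vm.config.hardware.device
                      if isinstance(d, vim.vm.device.ParaVirtualSCSIController))

    # New eager-zeroed thick disk with the multiwriter sharing mode (see restrictions above).
    backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
        diskMode="persistent", thinProvisioned=False, eagerlyScrub=True,
        sharing="sharingMultiWriter")
    disk = vim.vm.device.VirtualDisk(
        backing=backing, capacityInKB=100 * 1024 * 1024,   # ~100 GB, placeholder size
        controllerKey=controller.key, unitNumber=1)        # placeholder unit number

    disk_spec = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
        device=disk)

    WaitForTask(vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[disk_spec])))
    Disconnect(si)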

The storage options for Oracle workloads are discussed in detail here.