For array-based storage, SAN transport is often the best-performing choice for backups when running on a physical proxy. It is unavailable inside virtual machines, so use SCSI HotAdd instead on a virtual proxy.

SAN transport is not always the best choice for restores. It offers the best performance on thick disks, but the worst performance on thin disks, because of round trips through two disk manager APIs, AllocateBlock and ClearLazyZero. For thin-disk restores, NBDSSL is usually faster. Changed Block Tracking (CBT) must be deactivated for SAN restores, and a snapshot must be taken before the restore. SAN transport can write only to base disks; it does not support writing to redo logs (child disks, including linked clones and snapshots). Finally, SAN transport is not supported on vVol datastores.

Provided the disk LUN backing an NVMe-oF datastore is accessible from the physical machine running backups, SAN transport mode might be supported. However, VDDK searches only SCSI LUNs during initialization, so backup administrators would probably have to specify all NVMe-oF disks in vixDiskLib.transport.san.allowList (see Initialize Virtual Disk API).
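As a sketch, such an entry in the VDDK configuration file might look like the following. The parameter name comes from the documentation above, but the device paths and the comma-separated list syntax shown here are illustrative assumptions; see Initialize Virtual Disk API for the exact format.

    # Hypothetical configuration entries on a physical backup proxy.
    # The NVMe-oF device paths below are placeholders, not real values.
    vixDiskLib.transport.san.allowList=/dev/nvme0n1,/dev/nvme1n1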

Before vSphere 5.5, when writing to SAN during restore, disk size had to be a multiple of the underlying VMFS block size; otherwise, the write to the last fraction of the disk would fail. This was fixed in the ESXi 5.5 release.

Programs that open a local virtual disk in SAN mode might be able to read (if the disk is empty), but writing will throw an error. Even if programs call VixDiskLib_ConnectEx() with a NULL transport modes parameter to accept the default, SAN is selected as the preferred mode when SAN storage is connected to the ESXi host. VixDiskLib should, but does not, check SAN accessibility on open. With local disk, programs must explicitly request NBDSSL mode.
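The following minimal sketch shows how a program can request NBDSSL explicitly instead of passing NULL and accepting the default. The wrapper function name is hypothetical, and the connection parameters and snapshot reference are assumed to be prepared by earlier setup code.

    #include "vixDiskLib.h"

    /* Sketch: connect with NBDSSL requested explicitly, so SAN is never
     * chosen as the preferred mode for a local virtual disk.  The caller
     * supplies the connection parameters and snapshot moref. */
    static VixError ConnectNbdssl(const VixDiskLibConnectParams *cnxParams,
                                  const char *snapshotRef,
                                  VixDiskLibConnection *connection)
    {
        return VixDiskLib_ConnectEx(cnxParams,
                                    FALSE,        /* allow writes, e.g. for restore */
                                    snapshotRef,  /* snapshot moref string */
                                    "nbdssl",     /* explicit transport, not NULL */
                                    connection);
    }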

For a Windows Server 2008 or later proxy, set the SAN policy to onlineAll. Keep the SAN disk read-only except during restore; you can use the diskpart utility to clear the read-only flag. SAN policy varies by Windows Server 2008 edition. For Enterprise and Datacenter editions, the default SAN policy is offline, which is unnecessarily restrictive when vSphere mediates access to SAN storage.
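A diskpart session on the proxy might look like the following sketch; the disk number is illustrative and depends on how the SAN LUN appears on the proxy.

    DISKPART> san policy=OnlineAll
    DISKPART> select disk 1
    DISKPART> attributes disk clear readonly
    (perform the restore)
    DISKPART> attributes disk set readonly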

For SAN transport, one major factor affecting performance is read-buffer alignment: the buffer should be aligned to the sector size, currently 512 bytes. You specify three parameters for VixDiskLib_Read: the start sector, the number of sectors to read, and the buffer to hold the data. A properly aligned read buffer can be allocated using, for example, _aligned_malloc on Windows or posix_memalign on Linux. SAN mode performs best with about six concurrent streams per ESXi host; more than six streams usually results in slower total throughput.
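As a sketch of the alignment requirement on Linux, the following fragment allocates a sector-aligned buffer with posix_memalign and reads from an open disk handle. The function name and chunk size are illustrative, and the disk handle is assumed to come from an earlier VixDiskLib_Open call.

    #include <stdlib.h>
    #include "vixDiskLib.h"

    /* Sketch: read the first 2048 sectors (1 MB) of an open disk into a
     * sector-aligned buffer. */
    static void ReadAlignedChunk(VixDiskLibHandle diskHandle)
    {
        enum { SECTORS_PER_READ = 2048 };
        void *buf = NULL;

        /* posix_memalign aligns the buffer to the 512-byte sector size. */
        if (posix_memalign(&buf, VIXDISKLIB_SECTOR_SIZE,
                           SECTORS_PER_READ * VIXDISKLIB_SECTOR_SIZE) != 0) {
            return;                                   /* allocation failed */
        }
        VixError vixError = VixDiskLib_Read(diskHandle,
                                            0,                 /* start sector */
                                            SECTORS_PER_READ,  /* sectors to read */
                                            (uint8 *)buf);
        if (vixError != VIX_OK) {
            /* handle read error */
        }
        free(buf);
    }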

VDDK 8.0.1 and later provide a cache file to assist SAN disk device discovery. Before doing a SAN mode backup, each instance of VDDK scans disks to determine the device name and VMFS ID of the LUN. In large datacenters with many SAN-based disks, this scan had a significant negative performance impact, so partners requested a cache mechanism to reduce the frequency of scans. As implemented, the cache file scsiDiskList.json is created in the tmpDirectory specified by the VDDK configuration file. When VDDK fails to open a disk with a cached LUN path, because a disk was added or deleted since the last scan, LUNs are rescanned and the JSON file is overwritten. This cache mechanism operates without customer intervention.
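For reference, the cache location follows the tmpDirectory setting in the VDDK configuration file, which might be set as in this sketch; the path shown is a placeholder, not a recommended value.

    # Hypothetical VDDK configuration entry; the path is illustrative.
    # scsiDiskList.json is created under this directory.
    tmpDirectory=/var/tmp/vmware-vddk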