VDDK 8.0.1 | 18 April 2023 | ESXi and vCenter Server, VDDK build 21562716
For vSphere 8.0 U1 release | Last document update 2 November 2023
Check back for additions and updates to these release notes, marked New.

About the Virtual Disk Development Kit

The Virtual Disk Development Kit (VDDK) 8.0.1 is an update to support vSphere 8.0 Update 1 and respond to partner requests for enhancement.

The VMware policy concerning backward and forward compatibility is for VDDK to support N-2 and N+1 releases. In other words, VDDK 8.0 and all its update releases will support vSphere 6.7, 7.0.x, and (except for new features) the next major release.

VDDK is used with vSphere Storage APIs for Data Protection (VADP) to develop backup and restore software. For general information about this development kit, how to obtain the software, programming details, and redistribution, see the VDDK landing page on developer.vmware.com.

Changes and New Features

VDDK 8.0.1 offers the following features:

  • New: Datastore-level access to VMDK.

    In this release, it is possible to directly access virtual disks on a datastore. Programs do this by specifying VixDiskLibSpecType as Datastore. At the datastore level, there is no support for snapshots and CBT. See below for steps to employ this feature. Currently only NBD/NBDSSL and HotAdd transport modes are supported. Note that HotAdd transport does not support RDM disks. For datacenters with encrypted disks, NBDSSL transport mode is required, and clusters must activate vSphere HA to guarantee that all hosts have a crypto key ID.

  • New: Wildcards in allow and deny lists for SAN.

    After configuring allowList and denyList for SAN transport mode, customers and partners requested a wildcard mechanism for ease of use. VDDK 8.0.1 supports a new wildcard mechanism to shorten list making: the metacharacter expressions * ? [xyz] [a-z] [0-9], with the same meanings as in the Bash shell. See section “Initialize Virtual Disk API” in the VDDK Programming Guide for details.

  • New: Cache file for SAN disk device discovery.

    Before doing a SAN mode backup, each instance of VDDK scans disks to determine the device name and VMFS ID of the LUN. In large datacenters with many SAN-based disks, this scan has a significant negative performance impact, so partners requested a cache mechanism to reduce the frequency of scans. As implemented, cache file scsiDiskList.json is created in the tmpDirectory specified by the VDDK configuration file. When VDDK fails to open a disk with a cached LUN path, because a disk was added or deleted since the last scan, LUNs are rescanned and the JSON file is overwritten. This cache mechanism operates without customer intervention.
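    The rescan-on-miss pattern described above can be sketched as follows. This is an illustrative model, not VDDK source: the function names are hypothetical, and a plain in-memory table stands in for the persisted scsiDiskList.json file.

    ```c
    #include <string.h>

    #define MAX_LUNS 8

    /* Simulated SAN: the authoritative list of LUN device names.
     * In VDDK the equivalent information comes from a full device scan. */
    const char *san_luns[MAX_LUNS] = { "/dev/sdb", "/dev/sdc" };

    /* Cached snapshot of the last scan. VDDK persists this as
     * scsiDiskList.json in tmpDirectory; a plain table stands in here. */
    static char cache[MAX_LUNS][32];
    static int cache_valid = 0;
    int scan_count = 0;                   /* counts expensive full scans */

    /* Full device scan, then overwrite the cache (the JSON file in VDDK). */
    static void rescan(void) {
        scan_count++;
        for (int i = 0; i < MAX_LUNS; i++) {
            cache[i][0] = '\0';
            if (san_luns[i] != NULL) {
                strncpy(cache[i], san_luns[i], sizeof cache[i] - 1);
                cache[i][sizeof cache[i] - 1] = '\0';
            }
        }
        cache_valid = 1;
    }

    static int cache_has(const char *dev) {
        for (int i = 0; i < MAX_LUNS; i++)
            if (cache[i][0] != '\0' && strcmp(cache[i], dev) == 0)
                return 1;
        return 0;
    }

    /* Open path: consult the cache first; on a miss (a disk was added or
     * deleted since the last scan), rescan once and retry. 0 on success. */
    int open_lun(const char *dev) {
        if (!cache_valid)
            rescan();
        if (cache_has(dev))
            return 0;
        rescan();
        return cache_has(dev) ? 0 : -1;
    }
    ```

    Cache hits avoid any rescan; only a lookup failure (or a cold start) triggers the expensive scan and rewrites the cache.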

For new features in the previous VDDK versions, see the VDDK 8.0 Release Notes and before.

Datastore-Level Access to VMDK

VDDK header file vixDiskLib.h has a new structure definition for VixDiskLibDatastoreSpec containing two strings: a managed object reference (MoRef) to the datastore, and a folder path to the disk. Datastore specification is enumerated as type 2 in VixDiskLibSpecType. Programs call VixDiskLib_ConnectEx to connect to a datastore instead of a virtual machine or first class disk (FCD). Then programs call VixDiskLib_Open on that datastore with a folder path to open the specified disk.

  1. Connect to the datastore.

     VixDiskLibConnection theDSconn;
     VixDiskLibConnectParams *cnxParams = VixDiskLib_AllocateConnectParams();
     cnxParams->specType = VIXDISKLIB_SPEC_DATASTORE;
     cnxParams->spec.dsSpec.datastoreMoRef = "datastore-19";
     cnxParams->spec.dsSpec.diskFolder = "RHELXfsTargetClone_dest";
     VixDiskLib_ConnectEx(cnxParams, ..., &theDSconn);

  2. Open a disk on the datastore.

     VixDiskLibHandle diskHdl;
     VixDiskLib_Open(theDSconn, /* diskPath */, /* open flags */, &diskHdl);

  3. Read from or write to the disk.
  4. Close the disk.
  5. Disconnect from VixDiskLib.
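The steps above can be strung together as a compilable sketch. Everything below is a stand-in: the struct and functions are local stubs so the sequence builds and runs without the VDDK SDK. A real program includes vixDiskLib.h, calls VixDiskLib_ConnectEx, VixDiskLib_Open, VixDiskLib_Close, and VixDiskLib_Disconnect, and checks every VixError return. The field names and the datastore spec type 2 follow this release note.

```c
#include <string.h>

/* Stand-in for VixDiskLibConnectParams with the datastore fields named in
 * the steps above; specType 2 is the datastore enumeration in this release. */
typedef struct {
    int specType;
    struct {
        const char *datastoreMoRef;   /* e.g. "datastore-19" */
        const char *diskFolder;       /* folder on the datastore holding the VMDK */
    } dsSpec;
} DatastoreConnectParams;

char trace[128];                      /* records call order for inspection */
static void traced(const char *step) { strcat(trace, step); strcat(trace, ">"); }

/* Local stubs standing in for the VixDiskLib_* calls; each returns 0 for OK. */
static int connect_ex(const DatastoreConnectParams *p) {
    traced("connect");
    return (p->specType == 2 && p->dsSpec.datastoreMoRef != NULL) ? 0 : 1;
}
static int open_disk(const char *diskPath) { traced("open"); return diskPath ? 0 : 1; }
static int read_write(void)  { traced("io");         return 0; }
static int close_disk(void)  { traced("close");      return 0; }
static int disconnect(void)  { traced("disconnect"); return 0; }

/* Run the whole sequence from the steps above; returns 0 on success. */
int datastore_access_sketch(void) {
    DatastoreConnectParams p = { 2, { "datastore-19", "RHELXfsTargetClone_dest" } };
    if (connect_ex(&p))         return 1;   /* step 1: VixDiskLib_ConnectEx */
    if (open_disk("disk.vmdk")) return 1;   /* step 2: VixDiskLib_Open */
    if (read_write())           return 1;   /* step 3: read/write the disk */
    close_disk();                           /* step 4: VixDiskLib_Close */
    return disconnect();                    /* step 5: VixDiskLib_Disconnect */
}
```

The point of the sketch is the ordering: connect before open, and close before disconnect, mirroring steps 1 through 5.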

Wildcards in Allow and Deny Lists

The vixDiskLib.transport.san.allowlist takes precedence over vixDiskLib.transport.san.denylist in the case of duplicates. Wildcards do not match the directory separator slash (/) or the folder separator backslash (\). On Windows, deny and allow lists must contain the “\\?\” or “\\.\PhysicalDrive” prefix, and sharp (#) must be inside double quotes to avoid interpretation as a comment, for example allowlist="\\?\scsi#disk*". Also see the combined example below the table.

    Metacharacter   Examples of Wildcard Use
    *               Match zero or more characters;
                    for example, /dev/* matches /dev/, /dev/ab, and /dev/cde.
    ?               Match any single character;
                    for example, /dev/sd? matches /dev/sda and /dev/sdb.
    [a-z]           Match a character in the specified range;
                    for example, /dev/test[1-2] matches /dev/test1 and /dev/test2.
    [xyz]           Match any character included in the set;
                    for example, /dev/sd[abc] matches /dev/sda, /dev/sdb, and /dev/sdc.

    Combined example:
    vixDiskLib.transport.san.allowlist="\\?\scsi#disk&ven_scst_bio&prod_disk[1-3]*"
    vixDiskLib.transport.san.denylist="\\?\scsi#disk&ven_scst_bio&prod_disk4*"
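    The matching rules above resemble POSIX glob patterns, so the sketch below uses fnmatch(3) to illustrate them. That VDDK's matcher behaves exactly like fnmatch is an assumption; the FNM_PATHNAME flag models the documented rule that wildcards never match the slash separator.

    ```c
    #include <fnmatch.h>

    /* Returns 1 if pattern matches path under glob rules, 0 otherwise.
     * FNM_PATHNAME keeps '*' and '?' from matching '/' across path segments. */
    int wildcard_match(const char *pattern, const char *path) {
        return fnmatch(pattern, path, FNM_PATHNAME) == 0;
    }
    ```

    With this helper, the table's examples behave as stated: /dev/sd? matches /dev/sda but not /dev/sdaa, and /dev/* does not reach into subdirectories such as /dev/disk/by-id.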
    

Compatibility Notices

VDDK 8.0 U1 supports the following operating systems for proxy backup (mostly the same as VDDK 8.0):

  • Windows Server 2022, 2019, and 2016
  • Red Hat Enterprise Linux (RHEL) 7.9, 8.1, 8.2, 8.6, 9.0
  • New: RHEL 8.8 was retroactively verified after support in VDDK 8.0.3
  • CentOS 7
  • CentOS 8.5
  • Oracle Linux 8.7
  • SUSE Linux Enterprise Server (SLES) 12 SP5 and 15.1
  • Ubuntu 18.04
  • Photon v3, v4

This table shows recommended VDDK versions for various VMC milestones:

    VMC Milestone Compatible VDDK Versions
    1.16 – 1.19 6.7.x, 7.0.x, 8.0.x
    1.20, 1.22 7.0.x, 8.0.x

Here is a possible issue with backward and forward compatibility.

  • Best practice for restore with a pre-existing snapshot. When a backed-up VM had snapshots, disaster recovery software can restore a fresh VM with its pre-existing snapshots in place. However, for partial file recovery, or to recover a VM back to a point in time, it is difficult to determine how to restore a VM that has an existing snapshot. Should parts of the snapshot be restored to a previous state (if applicable), or only the VM, ignoring the snapshot? Certain backup-restore applications do not handle this situation well; snapshot contention causes a GenericVmConfigFault. One good solution is for customers to delete any unneeded pre-existing snapshots before recovery; this is mandatory for SAN transport recovery in any case. Restore applications could refuse to recover a VM until any pre-existing snapshot is deleted. Failing that, a customer workaround is to recover the VM to a different location, avoiding the issue, then retain the recovered VM and delete the VM with the pre-existing snapshot.

Recently Resolved Issues

The VDDK 8.0.1 release resolves the following issues.

  • Customers complained about nonconfigurable crash dumps.

    On Windows, vmacore crash dumps now appear in the VDDK configured tmpDir. On Linux, crash dumps appear as configured in the kernel, as they did before.

  • OpenSSL library upgraded for FIPS support.

    The OpenSSL library is upgraded to version 3.0 for vSphere 8.0 U1. To enable FIPS with OpenSSL 3.0, see How to Enable FIPS on VDDK Proxy VM below.

  • Reminder about CBT Check in VDDK 7.0.

    VDDK 7.0 included the vixDiskCheck utility with a cbtCheck option; it now also has readPerf and writePerf options. For details, find the VDDK bin64 folder and run vixDiskCheck for help on all its options. Also see Troubleshooting CBT Corruption below.

The VDDK 8.0 release resolved the following issues.

  • Snapshot support for cloud native storage (CNS).

    A new section in the programming guide describes how to create volume snapshots in a Kubernetes cluster and retrieve snapshot handles using container storage interface (CSI) based on first class disk (FCD). Cloud native storage (CNS) is a vSphere feature that allows Kubernetes to auto-provision scalable storage on demand. CNS also provides vSphere administrator visibility into container volumes through vCenter Server.

For resolved issues in previous VDDK versions, see the VDDK 8.0 Release Notes and before.

Known Issues and Workarounds

These are unresolved issues known at this time.

  • VDDK non-support of NVMe over Fabrics.

    vSphere 8 contains many enhancements for NVMe over Fabrics (NVMe-oF), such as more namespaces, extended reservations, discovery services, and vVol support. However, VDDK may not handle backup for all these new features; more information will be forthcoming. Meanwhile, applications can use NBD/NBDSSL and HotAdd transport to back up NVMe-oF datastores.

How to Enable FIPS on VDDK Proxy VM

OpenSSL 3.0 is required, so the proxy VM must run VDDK 8.0.1 or later.

  1. The location of the FIPS dynamic library differs by platform. To install on Windows or Linux, run one of these commands:

     openssl.exe fipsinstall -out \path\of\fipsmodule.cnf -module VDDKpackage\bin\fips.dll
     openssl fipsinstall -out /path/of/fipsmodule.cnf -module VDDKpackage/lib64/fips.so

  2. In the OpenSSL configuration file, update the .include of fipsmodule.cnf with an absolute path, and set other values as in this example:

     openssl_conf = openssl_init
     .include /path/of/fipsmodule.cnf
     [openssl_init]
     providers = provider_sect
     alg_section = algorithm_sect
     [provider_sect]
     default = default_sect
     fips = fips_sect
     [default_sect]
     activate = 1
     [algorithm_sect]
     default_properties = "fips=yes"

  3. Set environment variable OPENSSL_CONF to the path of the OpenSSL configuration file. Set environment variable OPENSSL_MODULES to the path of fips.dll or fips.so, as above.
  4. Before VixDiskLib initialization, add vixDiskLib.ssl.enableSslFIPS=1 to the VDDK configuration file.
  5. With FIPS enabled, the VDDK information log records “SSL is in FIPS mode” when VixDiskLib_InitEx() is called.
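After completing the steps above, one way to confirm the provider configuration from a shell is sketched below. The paths are placeholders for the files produced in the earlier steps; with FIPS correctly configured, `openssl list -providers` should report the fips provider.

```shell
# Placeholder paths: substitute the files created in the steps above.
export OPENSSL_CONF=/path/of/openssl.cnf   # config containing [provider_sect]
export OPENSSL_MODULES=/path/of            # directory holding fips.so (or fips.dll)
openssl list -providers                    # with FIPS active, "fips" appears in the list
```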

Troubleshooting CBT Corruption

The vixDiskCheck utility was provided to verify data consistency of CBT-enabled incremental backups of VMDK virtual disks. In addition to authorization options, similar to the options of the VixDiskLib sample program, the -cbtdir option specifies a temporary directory that should be as large as the VMDK being checked. As of VDDK 8.0.1, the utility also measures I/O performance to help debug customer backup environments. Examples:

bin64/vixDiskCheck cbtCheck [auth-options] -cbtdir directory -disk disk.vmdk
bin64/vixDiskCheck readPerf [auth-options] -cbtdir directory -disk disk1.vmdk [-disk disk2.vmdk ...]
bin64/vixDiskCheck writePerf [auth-options] -cbtdir directory -disk disk1.vmdk [-disk disk2.vmdk ...]

To debug CBT on a customer site, download and install the VDDK package, then locate the bin64/vixDiskCheck utility in the installed folder. Set LD_LIBRARY_PATH (or PATH on Windows) to the lib64 (or bin) folder.

Delete previous snapshots, take a new snapshot, collect the base disk logs *-ctk.vmdk, and run a command like the following. Note that bigCBTbackupFile must be as large as the VMDK. Repeat these steps with more snapshots if necessary.

vixDiskCheck cbtCheck -host IPaddr -user username -password password -vm moref=VmMoref
  -ssmoref SnapshotMoRef -libdir VDDKinstallDir -thumb SSLhostThumbprint -cbtdir bigCBTbackupFile
  -disk Base-disk-000001.vmdk -transportMode nbdssl 2>&1 | tee vixDiskCheck-000001.log