VDDK 7.0.1 | 6 Oct 2020 | ESXi build 16850804, VDDK build 16860560
For vSphere 7.0 Update 1 and VMC | Last document update 28 Sept 2020
Check frequently for additions and updates to these release notes.

About the Virtual Disk Development Kit

The Virtual Disk Development Kit (VDDK) 7.0.1 is an update to support vSphere 7.0 U1, deliver new features, and resolve some reported issues. VDDK 7.0.1 supports ESXi 7.0 U1, vCenter Server 7.0 U1, and VMware Cloud (VMC). It was tested for backward compatibility against vSphere 6.5 and 6.7.

The VMware policy concerning backward and forward compatibility is for VDDK to support N-2 and N+1 releases. In other words, VDDK 7.0 and all its update releases support vSphere 6.5, 6.7 (except for new features), and the next major release.

VDDK is used with vSphere Storage APIs for Data Protection (VADP) to develop backup and restore software. For general information about this development kit, how to obtain the software, programming details, and redistribution, see the VDDK landing page on code.vmware.com.

Changes and New Features

VDDK 7.0.1 offers the following enhancements:

  • FIPS (Federal Information Processing Standards) compliance.
    If users set vixDiskLib.ssl.enableSslFIPS in the VDDK configuration file, SSL FIPS mode is enabled when programs call VixDiskLib_Init(). Compliance includes FIPS validated cryptography with OpenSSL. A sample configuration line appears after this list.
  • Phone home (CEIP) enhancements.
    Metrics now include VDDK build number, transport mode, and other useful data. Customers can choose to avoid CEIP (customer experience improvement program) by disabling it in the vSphere Client.
  • NBD server daemon change in vSphere 7.0 U1.
    For greater concurrency, vCenter Server 7.0 U1 no longer uses vpxa to handle NBD connections, but rather hostd, as on ESXi hosts. For 7.0 U1 and later datacenters, the recommended number of connections to one host for parallel backups is no more than 50. For example, with two hosts, each can take 50 NBD connections, for a total of 100 concurrent backup streams. Normally VDDK connects to vCenter, which selects an ESXi host for backup and returns an NFC ticket for that host. VDDK cannot determine in advance which host vCenter might select for NBD transport. NFC is also used for disk cloning and cold migration, which may consume server resources in competition with backup and restore. A vCenter Server is not an optimum load balancer, but backup with vCenter is simpler than selecting specific ESXi hosts. See the 7.0.1 VDDK Programming Guide for NBD best practices, and the connection sketch after this list.
  • Network I/O Control for dedicated network backup, with UI in vSphere Client.
    When Network I/O Control (NIOC) is enabled in the virtual distributed switch (VDS or DVS), switch traffic is divided into the various predefined network resource pools, now including one dedicated to vSphere Backup NFC. The API enumeration for this network resource pool is VADP_NIOConBackupNfc. System administrators can set this up in the vSphere Client with System Traffic > Configure > Edit, then optionally change resource settings. Thereafter, NBD network traffic originated by VDDK 7.0.1 (or later) to 7.0 U1 hosts (or later) is shaped by these VDS settings. NIOC may be used together with the dedicated NFC network option, new in VDDK 7.0, but this is not a requirement.
  • NBD mode enhancements to increase backup resiliency in vSphere.
    Two VIX_E_HOST error codes were introduced to flag hosts nearly or already Entering Maintenance Mode (EMM) during backup. While in maintenance mode, ESXi hosts cannot perform backup operations. For recoverable conditions, backup operations should retry multiple times. For non-recoverable conditions, backup operations may switch to a different host. Retry and switching should be transparent to end users. See the 7.0.1 VDDK Programming Guide for NBD best practices, and the retry sketch after this list.
  • Adjustable block size for CBT (changed block tracking). By automatically adapting block size, CBT overhead can shrink by up to a factor of four for certain incremental or differential backups, depending on virtual disk state. (This was a VDDK 7.0 feature but not documented until now.)
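
A sample configuration line for the FIPS feature above. The key name comes from this note; the value 1 as the enabling value is an assumption to verify against the VDDK documentation:

    vixDiskLib.ssl.enableSslFIPS = 1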
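
For the NBD daemon change above, a hedged C sketch of a read-only NBD connection through vCenter using VixDiskLib_ConnectEx. The server name, credentials, thumbprint, and morefs are placeholders, VixDiskLib_InitEx is assumed to have run already, and error handling is abbreviated:

    #include <string.h>
    #include "vixDiskLib.h"   /* VDDK header; link against the vixDiskLib library */

    /* Connect through vCenter and restrict transport to NBD. vCenter picks
     * the ESXi host and returns an NFC ticket; the application cannot know
     * in advance which host is chosen, so cap total concurrent connections
     * per host at 50 in the job scheduler. */
    static VixError ConnectNbd(VixDiskLibConnection *conn)
    {
        VixDiskLibConnectParams params;
        memset(&params, 0, sizeof params);
        params.vmxSpec = "moref=vm-1234";             /* placeholder VM moref */
        params.serverName = "vcenter.example.com";    /* placeholder server   */
        params.credType = VIXDISKLIB_CRED_UID;
        params.creds.uid.userName = "backup-user";    /* placeholder          */
        params.creds.uid.password = "secret";         /* placeholder          */
        params.thumbPrint = "AA:BB:CC:...";           /* placeholder SSL thumbprint */
        params.port = 443;

        return VixDiskLib_ConnectEx(&params, TRUE /* readOnly */,
                                    "snapshot-5678"   /* placeholder snapshot moref */,
                                    "nbd", conn);
    }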
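
And for the NBD resiliency item above, a hedged C sketch of the retry-or-switch pattern. The error-code names VIX_E_HOST_SERVER_SHUTDOWN (recoverable, EMM in progress) and VIX_E_HOST_SERVER_NOT_AVAILABLE (non-recoverable) are assumptions drawn from the programming guide's best practices; confirm both against vixDiskLib.h in your release:

    #include <unistd.h>       /* sleep() */
    #include "vixDiskLib.h"

    /* Retry VixDiskLib_Open while the host is entering maintenance mode.
     * On any other error, return so the caller can reconnect and let
     * vCenter choose a different host. Both paths stay invisible to users. */
    static VixError OpenWithRetry(VixDiskLibConnection conn, const char *path,
                                  VixDiskLibHandle *handle)
    {
        VixError err = VIX_OK;
        for (int attempt = 0; attempt < 5; ++attempt) {
            err = VixDiskLib_Open(conn, path,
                                  VIXDISKLIB_FLAG_OPEN_READ_ONLY, handle);
            if (err != VIX_E_HOST_SERVER_SHUTDOWN)   /* assumed recoverable code */
                break;        /* success, or a non-recoverable error such as
                                 the assumed VIX_E_HOST_SERVER_NOT_AVAILABLE */
            sleep(30);        /* host entering maintenance mode: wait, retry */
        }
        return err;
    }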

For new features in the previous VDDK version, see the Virtual Disk Development Kit 7.0 Release Notes.

Compatibility Notices

Missing from documentation until 7.0.1 was this caveat about multithreading: VixDiskLib_QueryAllocatedBlocks should be called from the same thread that opens and closes the disk, not from read and write worker threads.
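
A hedged C sketch of that threading rule, in which the open, query, and close calls stay on one thread while workers only read. The chunk size is illustrative, and StartReadWorkers is a hypothetical helper that spawns reader threads over the block list and joins them before returning:

    #include "vixDiskLib.h"

    /* Hypothetical helper, defined elsewhere: spawn threads that call
     * VixDiskLib_Read on the ranges in blockList, then join them. */
    void StartReadWorkers(VixDiskLibHandle disk, VixDiskLibBlockList *blockList);

    static VixError BackupDisk(VixDiskLibConnection conn, const char *path,
                               VixDiskLibSectorType capacity)
    {
        VixDiskLibHandle disk;
        VixDiskLibBlockList *blockList = NULL;
        VixError err;

        err = VixDiskLib_Open(conn, path, VIXDISKLIB_FLAG_OPEN_READ_ONLY, &disk);
        if (err != VIX_OK)
            return err;

        /* Query on this thread, never on the read workers. 2048 sectors
         * (1 MB) is an illustrative chunk size. */
        err = VixDiskLib_QueryAllocatedBlocks(disk, 0, capacity, 2048, &blockList);
        if (err == VIX_OK) {
            StartReadWorkers(disk, blockList);    /* workers do reads only */
            VixDiskLib_FreeBlockList(blockList);
        }
        VixDiskLib_Close(disk);                   /* close on this same thread */
        return err;
    }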

vVol and vSAN datastores do not support SAN mode transport. As of VDDK 6.7, SAN transport is explicitly rejected for vSAN and vVols.

As of 6.7.3 and in vSphere 7.0, the VDDK library marks parentHandle internally to prevent closure and ensure cleanup. In earlier releases it was an error to close parentHandle after VixDiskLib_Attach succeeds. See the VDDK 6.7.1 Release Notes for coding advice.

VDDK 7.0.1 supports the same operating systems for proxy backup as 7.0:

  • Windows Server 2019
  • Windows Server 2016, including versions 1709 and 1803
  • Windows Server 2012 and 2012 R2
  • Red Hat Enterprise Linux (RHEL) 7.7 and 8.0
  • CentOS 7.7
  • SUSE Linux Enterprise Server (SLES) 12 SP5 and 15 SP1

The following table shows recommended VDDK versions for various VMC milestones.

    VMC Milestone    Compatible VDDK Versions
    M8               6.5.2, 6.5.2 EP1, 6.5.3, 6.5.4, 6.7.x, 7.0.x
    M9               6.5.2, 6.5.2 EP1, 6.5.3, 6.5.4, 6.7.x, 7.0.x
    M10              6.7.x, 7.0.x
    M11              6.7.x, 7.0.x

Recently Resolved Issues

This VDDK release resolves the following issues.

  • SAN mode restore not supported for clustered VMDK.

    A datastore marked clusteredvmdk was no longer writable in VMFS6, due to SCSI reservations on the physical LUN, so restore did not proceed. Note that MSCS clustering works without clusteredvmdk; it is not required for backup. If users set Clustered VMDK to Enabled in vSphere Client > Configure > Datastore Capabilities, VDDK 6.7 hangs with a Windows proxy, or returns an error with a Linux proxy. VDDK 7.0 tried to fix the problem by rejecting SAN writes to that datastore; however, 7.0 also rejected SAN writes with Disabled set. In 7.0.1, VDDK allows SAN writes when Clustered VMDK is set to Disabled.

  • VDDK programs in SAN mode could not find disks added after initial open.

    To improve SAN mode performance, VDDK established the VMFS LUN list only at first open, so any virtual disks added afterwards went unrecognized. To solve this issue in 7.0.1, customers can set vixDiskLib.transport.san.RefreshScsiListCount in the VixDiskLib configuration file, as shown below. This setting allows up to Count refreshes of the SCSI device list if VDDK fails to find an active path to the LUN device. Valid integer settings are one (1) to five (5) retries to obtain an active path; lower numbers have no effect, and higher numbers are reduced to 5.
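
    For example, the configuration line might look like this; the value 3 is an arbitrary choice within the valid 1 to 5 range:

    vixDiskLib.transport.san.RefreshScsiListCount = 3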

Known Issues and Workarounds

These are unresolved issues known at this time.

  • Cleanup function crash if connection parameter not set.

    Before VDDK 6.7, passing a Null vmxSpec was a way to clean up from multiple VM backups after a crash. In VDDK 6.7 and 7.0, the VixDiskLib_Cleanup function requires its first parameter, VixDiskLibConnectParams->vmxSpec, to be set correctly. If vmxSpec is Null, applications crash when calling the cleanup function. In a future release, libraries will be changed to permit a Null parameter.
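
    Until then, a hedged C sketch of a safe call; the server details and moref are placeholders, and the three-argument signature should be confirmed against vixDiskLib.h:

    #include <string.h>
    #include "vixDiskLib.h"

    /* Always populate vmxSpec before calling VixDiskLib_Cleanup in 6.7/7.0.
     * Server name, credentials, and the moref are placeholders. */
    static VixError CleanupAfterCrash(void)
    {
        VixDiskLibConnectParams params;
        uint32 cleanedUp = 0, remaining = 0;

        memset(&params, 0, sizeof params);
        params.serverName = "vcenter.example.com";   /* placeholder */
        params.credType = VIXDISKLIB_CRED_UID;
        params.creds.uid.userName = "backup-user";   /* placeholder */
        params.creds.uid.password = "secret";        /* placeholder */
        params.vmxSpec = "moref=vm-1234";            /* must not be Null */

        return VixDiskLib_Cleanup(&params, &cleanedUp, &remaining);
    }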

  • Clean-up function not working with default tmpDirectory.

    With a default temporary directory, the folder name can change from backup to backup if the backup job runs as a new OS process, so the VixDiskLib_Cleanup function might not clean up everything left behind. As a workaround, programs should set tmpDirectory in the configuration file, for example:
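
    tmpDirectory = "/var/tmp/vddk-backup"

    The path above is illustrative; choose a stable location that the backup process can always write to.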

  • VDDK 6.7 did not support Windows Server 2019 as a backup proxy.

    When using Windows Server 2019 as a HotAdd backup proxy with vSphere 6.x, every target VM gets flagged for reboot after backup. Users can ignore this message because a reboot is not required, and subsequent HotAdd backups will continue to work. This issue was fixed in the VDDK 7.0 release by opening the disk itself instead of the disk adapter.

  • HotAdd transport mode limitations for vSAN datastores.

    If the target VM is on a vSAN datastore and the backup proxy is on a non-vSAN datastore, HotAdd transport is not supported. VMware may add support in a future release. A workaround is to vMotion the backup proxy to a vSAN datastore.

  • Errata in VDDK documentation regarding CEIP phone home.

    In the VDDK 6.7 programming guide, the section “Phone Home Support” on page 48 was inaccurate. In vSphere 7.0, CEIP is customer controlled, so the EnablePhoneHome=1 setting in the VDDK configuration file has no effect. However, backup software should set vendor details as recommended below, otherwise the vendor name and version appear as “Unknown” in VMware analytics. Legal characters are the 26 letters, digits, underscore (_), minus (-), period (.), and space. Double quotes are mandatory if setting strings contain spaces. DatabaseDir stores phone home data in a separate folder.

    vixDiskLib.phoneHome.ProductName = "vendorName or ApplicationName"
    vixDiskLib.phoneHome.ProductVersion = "versionNumber"
    vixDiskLib.phoneHome.DatabaseDir = "folderName"
    

  • vSphere 6.7 HTTP File Service not backward compatible until 6.7 U2.

    This is related to “Disk open fails in HotAdd mode if name contains special characters” in 6.7.2. Backup partners with on-premises customers running VDDK 6.7.0 or 6.7.1 can recommend the NoNfcSession=0 setting to work around the problem, and urge an upgrade to VDDK 6.7.2 or 7.0 when feasible. Partners with cloud-based customers should make upgrading to post-M5 and VDDK 7.0 mandatory, because NoNfcSession=1 must be set for security reasons. Partners using the Http_Access API in their own programs should code a fix to support multiple vCenter versions with adaptive single and double decoding, as sketched below. This issue will be permanent for partners supporting vSphere 6.7, 6.7 U1, and M5.
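
    A hedged C sketch of adaptive decoding, under the assumption that percent escapes remaining after one pass signal double encoding; the helper names are hypothetical and error handling is minimal:

    #include <ctype.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical helper: percent-decode src into a freshly allocated
     * string (never longer than the input). Caller frees the result. */
    static char *UrlDecodeOnce(const char *src)
    {
        char *out = malloc(strlen(src) + 1);
        char *o = out;
        const char *s = src;
        if (out == NULL)
            return NULL;
        while (*s) {
            if (s[0] == '%' && isxdigit((unsigned char)s[1])
                            && isxdigit((unsigned char)s[2])) {
                char hex[3] = { s[1], s[2], '\0' };
                *o++ = (char)strtol(hex, NULL, 16);
                s += 3;
            } else {
                *o++ = *s++;
            }
        }
        *o = '\0';
        return out;
    }

    /* Adaptive single/double decoding: older servers send singly encoded
     * names, newer ones may double encode. Decode a second time only when
     * the first pass still leaves a percent escape. */
    static char *UrlDecodeAdaptive(const char *src)
    {
        char *once = UrlDecodeOnce(src);
        if (once != NULL && strchr(once, '%') != NULL) {
            char *twice = UrlDecodeOnce(once);
            free(once);
            return twice;
        }
        return once;
    }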
