VDDK 7.0 | 2 April 2020 | Build 15832853 on developer.vmware.com
For vSphere 7.0 and VMC M10 | Last document update 10 June 2021
Check frequently for additions and updates to these release notes.

About the Virtual Disk Development Kit

Virtual Disk Development Kit (VDDK) 7.0 is an update to support vSphere 7.0, and to resolve issues reported against previous releases. VDDK 7.0 adds support for ESXi 7.0, vCenter Server 7.0, and VMC M10. It was tested for backward compatibility against vSphere 6.5 and 6.7.

VDDK is used with vSphere Storage APIs for Data Protection (VADP) to develop backup and restore software. For general information about this development kit, how to obtain the software, programming details, and redistribution, see the VDDK landing page on VMware {Code}.

The VMware policy concerning backward and forward compatibility is for VDDK to support N-2 and N+1 releases. In other words, VDDK 7.0 and all its update releases support vSphere 6.5, 6.7 (except for new features), and the next major release.

Changes and New Features

VDDK 7.0 offers the following new features:

  • Smaller and adaptable block size for Change Block Tracking (CBT) version 2.0. Although not user visible, this improves the resolution of change tracking, decreases the amount of backup data, and improves prospects for block-based recovery. The feature is applied automatically on vSphere 7.0 when a VM is created with or upgraded to hardware version 17, after a CBT set/reset.
  • Configurable CBT memory limits in VMkernel. Customers can change this for an ESXi host in the vSphere Client under Advanced System Settings, key MemCBTBitmapMaxAlloc. The allowed range is 128MB to 2048MB, default 1024MB. Customers need not increase the limit unless they want more than 1024 CBT-enabled disks open at the same time. ESXi hosts have a hard limit of 2048 simultaneously open disks.
  • Optional dedicated network for NBD backups. When tag vSphereBackupNFC is applied to a VMkernel adapter's NIC type, NBD backup traffic goes through the chosen virtual NIC. Programmers can apply the tag by calling HostVirtualNicManager->SelectVnicForNicType(nicType,device);
    see the vSphere API Reference. Customers can use a command like this, which designates interface vmk2 for NBD backup:
    esxcli network ip interface tag add -t vSphereBackupNFC -i vmk2
  • The vixDiskCheck utility is provided to verify the data consistency of CBT-enabled incremental backups. In addition to other mandatory options, the -cbtdir option specifies a temporary directory that should be as large as the VMDK being checked.
    bin64/vixDiskCheck cbtCheck -cbtdir directory -disk disk.vmdk
  • Sending of phone-home (CEIP) data is controlled by the customer, through on-premises opt-in or opt-out, or by the VMware Cloud terms of service.
  • Backup of encrypted first class disk (FCD) is tested and supported.
  • Improved HotAdd transport for SEsparse disks. Previously, SEsparse disks were supported with both the target and proxy VM on VMFS datastores. As of vSphere 7.0, HotAdd transport with SEsparse is supported when the target VM is on a VMFS or NFS datastore and the backup proxy is on a vSAN datastore.

For new features in an earlier VDDK, see the Virtual Disk Development Kit 6.7 Release Notes. For VixMntapi enhancements, see the Virtual Disk Development Kit 6.7.1 Release Notes.

Compatibility Notices

New: On Windows, backup applications should call VixDiskLib_InitEx at the very beginning of the program. Thread-local storage was introduced in VDDK 7.0, so all threads that call VixDiskLib functions must be created and initialized after VixDiskLib_InitEx completes, as in the sketch below. This is advised but not required on Linux.
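
A minimal sketch of the recommended ordering, assuming placeholder paths for libDir and configFile and no custom log callbacks:

    #include "vixDiskLib.h"

    int main(int argc, char **argv)
    {
        /* Initialize VixDiskLib before any worker thread exists. */
        VixError vixError = VixDiskLib_InitEx(VIXDISKLIB_VERSION_MAJOR,
                                              VIXDISKLIB_VERSION_MINOR,
                                              NULL, NULL, NULL,        /* log, warn, panic callbacks */
                                              "C:\\vddk",              /* libDir, placeholder */
                                              "C:\\vddk\\backup.cfg"); /* configFile, placeholder */
        if (vixError != VIX_OK) {
            return 1;   /* report the error and exit before any backup work */
        }
        /* Only now create the worker threads that call VixDiskLib functions. */
        /* ... backup work ... */
        VixDiskLib_Exit();
        return 0;
    }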

The documentation omits this caveat about multithreading: VixDiskLib_QueryAllocatedBlocks should be called from the same thread that opens and closes the disk, not from the read and write worker threads.
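
The following hedged sketch shows the intended pattern; it assumes an existing connection and a capacityInSectors value obtained elsewhere, and the chunk size is an example value only:

    VixDiskLibHandle handle;
    VixError err = VixDiskLib_Open(connection, "[datastore1] vm/vm.vmdk",
                                   VIXDISKLIB_FLAG_OPEN_READ_ONLY, &handle);
    if (err == VIX_OK) {
        VixDiskLibBlockList *blockList = NULL;
        /* Control thread: query allocated blocks here, not in I/O workers. */
        err = VixDiskLib_QueryAllocatedBlocks(handle, 0, capacityInSectors,
                                              2048 /* chunk size in sectors */,
                                              &blockList);
        if (err == VIX_OK) {
            /* hand blockList to the read/write worker threads ... */
            VixDiskLib_FreeBlockList(blockList);
        }
        VixDiskLib_Close(handle);   /* also on the control thread */
    }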

vVol and vSAN datastores do not support SAN mode transport. As of VDDK 6.7, SAN transport is explicitly rejected for vSAN and vVols.

As of VDDK 6.7.3 and in vSphere 7.0, the VDDK library marks parentHandle internally to prevent closure and ensure cleanup. In earlier releases it was an error to close parentHandle after VixDiskLib_Attach succeeded. See the VDDK 6.7.1 Release Notes for coding advice.
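
As a reminder of the pattern this advice applies to, a hedged sketch; the paths, open flags, and error handling are placeholders, not prescribed usage:

    VixDiskLibHandle parentHandle, childHandle;
    VixDiskLib_Open(connection, parentPath, VIXDISKLIB_FLAG_OPEN_READ_ONLY, &parentHandle);
    VixDiskLib_Open(connection, childPath, 0, &childHandle);
    if (VixDiskLib_Attach(parentHandle, childHandle) == VIX_OK) {
        /* Do not close parentHandle after a successful Attach; in VDDK 6.7.3
         * and 7.0 the library marks it internally and cleans it up. */
        /* ... perform I/O through childHandle ... */
        VixDiskLib_Close(childHandle);
    }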

VDDK 7.0 supports the following operating systems for proxy backup.

  • Windows Server 2019
  • Windows Server 2016, including versions 1709 and 1803
  • Windows Server 2012 and 2012 R2
  • Red Hat Enterprise Linux (RHEL) 7.7 and 8.0
  • CentOS 7.7
  • SUSE Linux Enterprise Server (SLES) 12 SP5 and 15 SP1

Recently Resolved Issues

This VDDK release resolves the following issues.

  • Reuse of vCenter Server session more reliable now.

    When reusing the vCenter Server session to avoid connection overflow, the login operation sometimes failed during a long backup series, causing the VDDK library to hang and backups to fail. This issue was fixed by replacing vixDiskLibVim with the vddkVimAccess library.

  • Windows 2019 ReFS not backward compatible for mount-volume.

    When ReFS volumes are mounted on Windows 2019, they get silently upgraded from version 3.1 to 3.4, rendering them incompatible with Windows 2016 and before. To work around this problem, and to minimize crashes, VixMntApi will skip ReFS volumes if the backup proxy is Windows 10 build 17763 or above. Future status of ReFS mount-volume is unknown.

  • Mount-volume problem if virtual disk has many child disks.

    Programs crashed in VixMntapi_MountVolume when mounting a virtual disk that has 90 or more child disks. The cause was a too-small stack size in the VixMntapi library. This has been fixed by increasing the default stack size.

  • KMS and vTA are incompatible as HotAdd backup proxies.

    In a vSphere setup with mixed security key providers, both KMS and vSphere Trust Authority (vTA), formerly THS, when using HotAdd transport mode the proxy VM and target VM must be encrypted with the same type of key provider. For example, if the proxy is encrypted with KMS while the target VMDKs are encrypted with vTA, the proxy VM cannot back up the target VM using HotAdd transport. The workaround is for customers to configure two backup proxies, one for KMS and the other for vTA.

  • SAN mode restore is not supported for clustered VMDK.

    If a datastore is marked clusteredvmdk, it is no longer writable in VMFS6 due to SCSI reservations on the physical LUN, so restore cannot proceed. Note that MSCS clustering works without clusteredvmdk; it is not required for backup. If users set Clustered VMDK to Enabled in vSphere Client > Configure > Datastore Capabilities, VDDK 6.7 hangs with a Windows proxy, or returns an error with a Linux proxy. VDDK 7.0 tried to fix the problem by rejecting SAN writes to that datastore; however, 7.0 also rejects SAN writes when the setting is Disabled. In an upcoming release, VDDK will allow SAN writes with Disabled set.

  • SSH thumbprint supported only SHA1 not other algorithms.

    VDDK documentation refers to the SSH thumbprint, also called a public key fingerprint, computed with a secure hash algorithm (SHA). Customers requested support for SHA256, so in this release support was added for SHA224, SHA256, SHA384, and SHA512.
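
    For illustration, a hedged connection sketch passing a SHA256 fingerprint in the thumbPrint field; the server name, credentials, and fingerprint value are placeholders:

      VixDiskLibConnectParams *cnxParams = VixDiskLib_AllocateConnectParams();
      cnxParams->serverName = "vcenter.example.com";                 /* placeholder */
      cnxParams->credType = VIXDISKLIB_CRED_UID;
      cnxParams->creds.uid.userName = "administrator@vsphere.local"; /* placeholder */
      cnxParams->creds.uid.password = "secret";                      /* placeholder */
      /* As of this release the thumbprint may use SHA224/256/384/512, not only SHA1. */
      cnxParams->thumbPrint = "AA:BB:...:99";                        /* placeholder SHA256 fingerprint */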

  • Large files took a long time to open with SAN mode backup.

    Backup customers complained that very large files took longer to open in SAN mode than in other backup modes. A patch was implemented to fetch block mapping information at the start of the file, then fetch more as I/O proceeds. That patch is included in this release.

  • No access rights when restoring FCD with SAN transport.

    When restoring first class disk (FCD) using SAN transport, if there is a pre-existing snapshot, you must specify its snapshot ID (ssid) in ConnectParams. If you do not, you will get error code 131 saying “Failed to open Disk. You do not have access rights.” For other transport modes, it is not necessary to specify ssid.
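
    A hedged sketch of the connection setup, assuming the vStorageObjSpec fields declared in the VDDK 7.0 header; all identifier values are placeholders:

      VixDiskLibConnectParams *cnxParams = VixDiskLib_AllocateConnectParams();
      cnxParams->specType = VIXDISKLIB_SPEC_VSTORAGE_OBJECT;
      cnxParams->spec.vStorageObjSpec.id = "fcd-id-placeholder";
      cnxParams->spec.vStorageObjSpec.datastoreMoRef = "datastore-15";     /* placeholder */
      /* Required for SAN-mode restore when a snapshot already exists. */
      cnxParams->spec.vStorageObjSpec.ssId = "snapshot-id-placeholder";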

  • Forward compatibility issue with CBT and hardware version 17.

    When using HotAdd transport to save a VM of virtual hardware version (VHV) 17 with Changed Block Tracking (CBT) enabled, backup fails if virtual disk size is > 256GB, because large disks in vSphere 7.0 require change tracking (CTK) version 2.0, not supported by vSphere 6.x. The VM's log file indicates that ReconfigureVM failed, and the UI shows “Unknown change tracking version” status. To summarize, this happens under these conditions: (1) the backed up VM is VHV 17 or higher, (2) saved disk size is > 256GB, and (3) its CBT was (re)initialized on a 7.0 ESXi host. A workaround is for users to disable CBT for the target VM to avoid CTK, but this is not recommended because then full backups will ensue. The solution is to vMotion the backup proxy VM to a host running ESXi 7.0, which supports CTK version 2.0.

  • Curl program upgraded.

    The Curl program was upgraded to version 7.66.0 because of known security vulnerabilities. The gettext library for internationalization was also upgraded, to version 0.20.1.

  • OpenSSL library upgraded.

    The OpenSSL library was upgraded in VDDK 7.0 to version 1.0.2u because of known security vulnerabilities.

  • SAN mode backups hang when SEsparse disk is not 4K multiple.

    VixDiskLib threads can time out when opening disks during SAN backup if the disk size is not a 4K multiple. The fix in this release is to fail instead of hang. Disks should be 4K block aligned for successful backup.
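
    Backup code can check alignment before attempting SAN transport, for example after opening the disk through another transport mode; a hedged sketch using VixDiskLib_GetInfo, assuming 512-byte sectors:

      VixDiskLibInfo *info = NULL;
      if (VixDiskLib_GetInfo(diskHandle, &info) == VIX_OK) {
          /* capacity is reported in 512-byte sectors; 8 sectors == 4096 bytes */
          if (info->capacity % 8 != 0) {
              /* not 4K aligned: choose another transport or report an error */
          }
          VixDiskLib_FreeInfo(info);
      }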

Known Issues and Workarounds

These are unresolved issues known at this time.

  • New: I/O threads possibly not initialized on Windows.

    The diskLibPlugin module is dynamically loaded into vixDiskLib when programs call VixDiskLib_InitEx. On Windows, dynamic loading may result in static thread-local storage not being initialized. Some operations, such as fetch block map, can fail if I/O threads are created before calling VixDiskLib_InitEx. The workaround is to wait for VixDiskLib_InitEx to complete before creating any I/O threads.

  • Clean-up function not working with default tmpDirectory.

    With a default temporary directory, the folder name can change from backup to backup if the backup job runs as a new OS process, so the VixDiskLib_Cleanup function might not clean up everything left behind. As a workaround, programs should set tmpDirectory in the configuration file, for example as shown below.
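
    A one-line sketch of the configuration entry; the path is a placeholder:

      tmpDirectory = "/tmp/vddk-tmp"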

  • VDDK 6.7 did not support Windows Server 2019 as a backup proxy.

    When using Windows Server 2019 as a HotAdd backup proxy with vSphere 6.x, every target VM gets flagged for reboot after backup. The user can ignore this message, as a reboot is not required, and subsequent HotAdd backups will continue to work. This issue was fixed in the VDDK 7.0 release by opening the disk itself instead of the disk adapter.

  • HotAdd transport mode limitations for vSAN datastores.

    If the target VM is on a vSAN datastore and the backup proxy on a non-vSAN datastore, HotAdd transport is not supported. VMware may add support in a future release. A workaround is to vMotion the backup proxy to a vSAN datastore.

  • Errata in VDDK documentation regarding CEIP phone home.

    In the VDDK 6.7 programming guide, section “Phone Home Support” on page 48 was inaccurate. In vSphere 7.0, CEIP is customer controlled. The EnablePhoneHome=1 setting in the VDDK configuration file has no effect. However, backup software should set vendor details as recommended below, otherwise vendor name and version will appear as “Unknown” in VMware analytics. Legal characters: 26 letters, digits, underscore (_), minus (-), period (.), and space. Double quotes are mandatory if setting strings contain spaces. DatabaseDir stores phone home data in a separate folder.

    vixDiskLib.phoneHome.ProductName = "vendorName or ApplicationName"
    vixDiskLib.phoneHome.ProductVersion = "versionNumber"
    vixDiskLib.phoneHome.DatabaseDir = "folderName"
    

  • vSphere 6.7 HTTP File Service not backward compatible until 6.7 U2.

    This is related to “Disk open fails in HotAdd mode if name contains special characters” in 6.7.2. Backup partners with on-premises customers running VDDK 6.7.0 or 6.7.1 can recommend the NoNfcSession=0 setting to work around the problem, and urge an upgrade to VDDK 6.7.2 or 7.0 when feasible. Partners with cloud-based customers should make upgrade to post M5 and VDDK 7.0 mandatory, because NoNfcSession=1 must be set for security reasons. Partners using the Http_Access API in their own programs should code a fix to support multiple vCenter versions with adaptive single and double decoding, as in the sketch below. This issue will be permanent for partners supporting vSphere 6.7, 6.7 U1, and M5.
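
    As an illustration only, not VMware-provided code, one adaptive approach is to decode percent-escapes once and decode a second time only if escapes remain; a hedged C sketch:

      #include <ctype.h>
      #include <stdlib.h>
      #include <string.h>

      /* Decode %XX escapes in place; returns the number of escapes decoded. */
      static int percentDecode(char *s)
      {
          int decoded = 0;
          char *out = s;
          while (*s) {
              if (s[0] == '%' && isxdigit((unsigned char)s[1]) &&
                  isxdigit((unsigned char)s[2])) {
                  char hex[3] = { s[1], s[2], '\0' };
                  *out++ = (char)strtol(hex, NULL, 16);
                  s += 3;
                  decoded++;
              } else {
                  *out++ = *s++;
              }
          }
          *out = '\0';
          return decoded;
      }

      /* Some vCenter versions deliver file-service paths single-encoded,
       * others double-encoded. Decode once, then decode again only if the
       * first pass produced text that still contains percent-escapes. */
      static void adaptiveDecode(char *path)
      {
          if (percentDecode(path) > 0 && strchr(path, '%') != NULL) {
              percentDecode(path);
          }
      }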
