Updated on: 24 February 2017

VMware Virtual SAN 6.2 | 15 March 2016 | ISO Build 3620759

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover new features, upgrade guidance, limitations, resolved issues, and known issues.

What's New

Virtual SAN 6.2 introduces the following new features and enhancements:

  • New Support for TLS. In vSphere 6.0 Update 3 and later releases, support for TLS v1.0, TLS v1.1 and TLS v1.2 is enabled by default and is configurable. Learn how to configure TLS v1.0, TLS v1.1 and TLS v1.2 from the VMware Knowledge Base article 2148819. For a list of VMware products supported for TLS v1.0 disablement and the use of TLS v1.1/v1.2, consult VMware Knowledge Base article 2145796.

  • Deduplication and compression. Virtual SAN 6.2 supports deduplication and compression to eliminate duplicate data. This technique reduces the total storage space required to meet your needs. When you enable deduplication and compression on a Virtual SAN cluster, redundant copies of data in a particular disk group are reduced to single copy. Deduplication and compression are available as a cluster-wide setting on all-flash clusters.
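As a rough illustration of the space savings, consider a disk group whose logical blocks contain many duplicates. The numbers and the awk sketch below are hypothetical, not output from any VMware tool:

```shell
# Hypothetical example: 1000 logical 4K blocks, of which only 400 are unique.
# Deduplication keeps a single physical copy of each unique block.
logical_blocks=1000
unique_blocks=400
ratio=$(awk -v l="$logical_blocks" -v u="$unique_blocks" 'BEGIN { printf "%.1f", l / u }')
echo "deduplication ratio: ${ratio}x (${unique_blocks} physical blocks for ${logical_blocks} logical blocks)"
```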

  • RAID 5 and RAID 6 erasure coding. Virtual SAN 6.2 supports both RAID 5 and RAID 6 erasure coding to reduce the storage space required to protect your data. RAID 5 and RAID 6 are available as a policy attribute for VMs in all-flash clusters. You can use RAID 5 in clusters with at least four fault domains, and RAID 6 in clusters with at least six fault domains.
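The capacity effect of each fault-tolerance method can be sketched with simple arithmetic. The 100 GB figure is hypothetical; the overheads follow the standard RAID-1 mirroring, RAID-5 (3 data + 1 parity), and RAID-6 (4 data + 2 parity) layouts:

```shell
usable=100                      # GB of usable data (hypothetical figure)
raid1=$(( usable * 2 ))         # RAID-1 mirroring with FTT=1: 2x raw capacity
raid5=$(( usable * 4 / 3 ))     # RAID-5 erasure coding (3+1): about 1.33x
raid6=$(( usable * 3 / 2 ))     # RAID-6 erasure coding (4+2): 1.5x
echo "RAID-1: ${raid1} GB raw, RAID-5: ${raid5} GB raw, RAID-6: ${raid6} GB raw"
```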

  • Software checksum. Virtual SAN 6.2 supports software-based checksum on hybrid and all-flash clusters. The software checksum policy attribute is enabled by default on all objects in the Virtual SAN cluster.
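The principle can be sketched with a generic checksum tool. Here cksum stands in for Virtual SAN's internal checksumming; this is an illustration, not VMware code:

```shell
# Compute a checksum for a block of data, then simulate a flipped byte.
orig=$(printf 'block data' | cksum | awk '{ print $1 }')
new=$(printf 'block dXta' | cksum | awk '{ print $1 }')   # one corrupted byte
if [ "$new" = "$orig" ]; then
  echo "checksum match"
else
  echo "checksum mismatch: corruption detected"
fi
```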

  • New on-disk format. Virtual SAN 6.2 supports upgrades to new on-disk virtual file format 3.0 through the vSphere Web Client. This file system provides support for new features in the Virtual SAN cluster. On-disk format version 3.0 is based on an internal 4K block size technology, which provides improved efficiency, but can result in reduced performance if the guest operating system I/Os are not 4K aligned.
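Whether a guest I/O is 4K aligned is a simple modulo check on its byte offset; a toy sketch, not a VMware utility:

```shell
# Offsets that are multiples of 4096 bytes line up with the 4K block size.
for offset in 4096 8192 6144 512; do
  if [ $(( offset % 4096 )) -eq 0 ]; then
    echo "offset ${offset}: 4K aligned"
  else
    echo "offset ${offset}: unaligned, may incur extra read-modify-write work"
  fi
done
```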

  • IOPS limits. Virtual SAN supports IOPS limits to restrict the number of I/O (read/write) operations per second for a specified object. When the number of read/write operations reaches the IOPS limit, those operations are delayed until the current second expires. The IOPS limit is a policy attribute that you can apply to any Virtual SAN object, including VMDK, namespace, and so on.

  • IPv6. Virtual SAN supports IPv4 or IPv6 addressing.

  • Space reporting. The Virtual SAN 6.2 Capacity monitor displays information about the Virtual SAN datastore, including used space and free space, and provides a breakdown of capacity usage by different object types or data types.

  • Health service. Virtual SAN 6.2 includes new health checks that help you monitor the cluster and enable you to diagnose and fix problems with the cluster. If the Virtual SAN health service detects health issues, it triggers vCenter events and alarms.

  • Performance service. Virtual SAN 6.2 includes performance service monitors with cluster-level, host-level, VM-level, and disk-level statistics. The performance service collects and analyzes performance statistics and displays the data in a graphical format. You can use the performance charts to manage your workload and determine the root cause of problems.

  • Write-through in-memory cache. Virtual SAN 6.2 improves virtual machine performance by using a host resident write-through read cache. This caching algorithm reduces read I/O latency and reduces Virtual SAN CPU and network usage.

Earlier Releases of Virtual SAN

Features and known issues of Virtual SAN 6.0 and 6.1 are described in the release notes for each release.

VMware Virtual SAN Community

Use the Virtual SAN Community Web site to provide feedback and request assistance with any problems you encounter while using Virtual SAN.

Upgrades for This Release

For instructions about upgrading Virtual SAN, see the VMware Virtual SAN 6.2 documentation.

Upgrading the On-disk Format for Hosts with Limited Capacity

During an upgrade of the Virtual SAN on-disk format, a disk group evacuation is performed. Then the disk group is removed and upgraded to on-disk format version 3.0, and the disk group is added back to the cluster. For two-node or three-node clusters, or clusters that do not have enough capacity to perform an evacuation of each disk group, you must use the following RVC command to upgrade the on-disk format:

vsan.ondisk_upgrade --allow-reduced-redundancy

When you allow reduced redundancy, your VMs are unprotected for the duration of the upgrade, because this method does not evacuate data to the other hosts in the cluster. It removes each disk group, upgrades the on-disk format, and adds the disk group back to the cluster. All objects remain available, but with reduced redundancy.

If you enable deduplication and compression during the upgrade to Virtual SAN 6.2, you can select Allow Reduced Redundancy from the vSphere Web Client.

Using VMware Update Manager with Stretched Clusters

Using VMware Update Manager to upgrade hosts in parallel might result in the witness host being upgraded in parallel with one of the data hosts in a stretched cluster. To avoid upgrade problems, do not configure VMware Update Manager to upgrade a witness host in parallel with the data hosts in a stretched cluster. Upgrade the witness host after all data hosts have been successfully upgraded and have exited maintenance mode.

Verifying Health Check Failures During Upgrade

During upgrades of the Virtual SAN on-disk format, the Physical Disk Health – Metadata Health check can fail intermittently. These failures can occur if the destaging process is slow, most likely because Virtual SAN needs to do physical block allocations on the storage devices. Before you take action, verify the status of this health check after the period of high activity, such as multiple virtual machine deployments, is complete. If the health check is still red, the warning is valid. If the health check is green, you can ignore the previous warning. For more information, see Knowledge Base article 2108690.


Limitations

In an all-flash configuration, Virtual SAN supports a maximum write buffer cache size of 600 GB for each disk group.

For information about other maximum configuration limits for the Virtual SAN 6.2 release, see the Configuration Maximums documentation.

Resolved Issues

  • When upgrading to Virtual SAN 6.1, an error message appears: Unable to access agent offline bundle
    This error might occur when you are upgrading from Virtual SAN 6.0 with health check enabled to Virtual SAN 6.1. During the upgrade process, the health check VIB is substituted and its service is temporarily stopped. In some cases, an error message might be generated by the health check.

    This issue is resolved in this release.

  • When you place a host into a Virtual SAN cluster to be used as a witness, and then move the host out of the cluster, the health check VIB is removed from the host
    If you move an ESXi host out of the Virtual SAN cluster, its health check VIB is removed. Therefore, if the host is a witness for the cluster, the installation status of the witness is red.

    This issue is resolved in this release.

  • During rolling site failure in a large stretched cluster, such as 15:15:1, where each node in a fault domain fails in succession with several seconds between each failure, VMs might become inaccessible or orphaned

    This issue is resolved in this release.

  • Attempts to configure all-flash disk group on witness host for stretched cluster fail
    When you attempt to add a witness host with an all-flash disk group to a stretched cluster, the task fails and no disk group is added to the host.

    This issue is resolved in this release.

  • Adding a host to a Virtual SAN cluster triggers an installer error
    When you add an ESXi host to a cluster on which HA and the Virtual SAN health service are enabled, you might encounter either one or both of the following errors due to a VIB installation race condition:

    • In the task view, the Configuring vSphere HA task might fail with an error message similar to the following: Cannot install the vCenter Server agent service. Unknown installer error

    • The Enable agent task might fail with an error message similar to the following: Cannot complete the operation, see event log for details

    This issue is resolved in this release.

Known Issues

    • Cannot claim disks when Virtual SAN is disabled
      If you attempt to claim disks and create disk groups before enabling Virtual SAN on the cluster, the operation will fail.

      Workaround: Enable Virtual SAN on the cluster before you claim disks and create disk groups.

    • After enabling Virtual SAN 6.2 through esxcli, automatic disk-claiming does not work
      If you enable Virtual SAN 6.2 through esxcli, the automatic method to claim disks does not work.

      Workaround: Use the vSphere Web Client to configure automatic disk claiming. You also can use the manual method to claim disks.

    • Host upgrade fails due to insufficient space in locker partition
      The upgrade fails with the following message in /var/log/esxupdate.log:

      Failed to create temporary DB dir: [Errno 28] No space left on device: '/locker/packages/var/db/locker/profiles.new' filename = /locker/packages/var/db/locker

      You might also see the following events in /var/log/vobd.log:

      2016-02-23T11:50:16.095Z: [VfatCorrelator] 676355748510us: [vob.vfat.filesystem.full] VFAT volume mpx.vmhba32:C0:T0:L0:8 (UUID 55e71deb-2f773c48-5dda-a0369f56dd20) is full. (585696 sectors, 0 free sectors)

      2016-02-24T17:13:00.037Z: [VisorfsCorrelator] 782119690921us: [vob.visorfs.ramdisk.full] Cannot extend visorfs file /vsantraces/vsantraces--2016-02-24T17h12m27s256.gz because its ramdisk (vsantraces) is full.

      2016-02-24T17:13:00.037Z: [VisorfsCorrelator] 782119691022us: [esx.problem.visorfs.ramdisk.full] The ramdisk 'vsantraces' is full. As a result, the file /vsantraces/vsantraces--2016-02-24T17h12m27s256.gz could not be written.

      Workaround: Delete Virtual SAN Observer .gz files from the locker partition on the host and retry the upgrade.

      1. Log in to the ESXi shell and check the space consumption in the locker partition located at /locker/vsantraces/.

      2. Use the df -h command to identify any 100 percent utilized VFAT partitions. For example:

        df -h
        Filesystem Size Used Available Use% Mounted on
        vfat 249.7M 202.5M 47.3M 81% /vmfs/volumes/68a04eea-90716418-ba59-6dc3297f0ef8
        vfat 249.7M 202.4M 47.3M 81% /vmfs/volumes/6f065ae4-88f03302-b4c6-c0b765c07ff8
        vfat 285.8M 285.7M 112.0K 100% /vmfs/volumes/55e71deb-2f773c48-5dda-a0369f56dd20

      3. Remove the Virtual SAN Observer .gz logs from the locker partition on the host (/locker/vsantraces/). For example: vsanObserver--YYYY-MM-DDTxxhyymzzs.gz

      4. Retry the host upgrade.
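Step 2 can also be scripted. The filter below assumes the df column layout shown above and prints only the mount points of 100 percent utilized VFAT volumes; any path it prints is a candidate for the cleanup in step 3:

```shell
# Keep only rows whose filesystem is vfat and whose Use% column reads 100%.
df -h | awk '$1 == "vfat" && $5 == "100%" { print $6 }'
```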

    • Compliance check and remediation fails when a 6.0 Update 1 host profile is applied to a 6.0 Update 2 host with Virtual SAN vmknic configured
      If you upgrade the host from ESXi 6.0 Update 1 to ESXi 6.0 Update 2 and then apply the host profile, the compliance check and remediation fail, and the following message appears: Unexpected error updating task config spec: 'IPProtocol'

      Workaround: You can upgrade the host to 6.0 Update 2, and extract the host profile from the upgraded host to get a 6.0 Update 2 version of the host profile.

      You also can edit the host profile and apply the edited host profile to the upgraded host.

      1. Right-click the host profile and select Export Host Profile. The host profile is exported as a .vpf file.

      2. Use a text editor to open the .vpf file and replace all instances of the following text:

        <parameter><key>IPProtocol</key><value xsi:type="xsd:string">IPv4</value></parameter>

        Replace with the following text:

        <parameter><key>IPProtocol</key><value xsi:type="xsd:string">IP</value></parameter>

      3. Import the modified host profile and apply it to hosts running ESXi 6.0 Update 2.
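Step 2 can be scripted instead of edited by hand. The sed invocation below is a hypothetical stand-in for the text editor, and profile.vpf is an assumed name for the file exported in step 1:

```shell
# Replace every IPProtocol value of IPv4 with IP, in place.
sed -i 's|<key>IPProtocol</key><value xsi:type="xsd:string">IPv4</value>|<key>IPProtocol</key><value xsi:type="xsd:string">IP</value>|g' profile.vpf
```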

    • After a primary node failover and restart, SPBM compliance status shows invalid results
      After a number of Virtual SAN node failures, the Storage Policy-based Management (SPBM) might incorrectly show a virtual machine’s compliance status as Not Applicable. This problem might occur when both the primary node and the backup node fail, and is caused by a sequence of automatic compliance queries initiated by SPBM in response to the failure. Because these compliance queries put additional load on the system, any new compliance queries might time out or return invalid results.

      Workaround: Wait until all automatically-initiated compliance queries are complete. Any new compliance queries will return valid results.

    • Create New VM Storage Policy wizard shows incorrect labels for rules
      When you open the Create New VM Storage Policy wizard to define a policy based on Virtual SAN data services, the labels used to describe the policy rules might display an internal identifier instead of a user-friendly label. For example, you might see vsan.capabilitymetadata.propertymetadata.summary.replicaPreference.label instead of Number of disk stripes per object.

      Workaround: Log out of the vSphere Web Client, and log in again.

    • The Resynching Components page in the vSphere Web Client does not show resynchronization activity if hosts have different ESXi software versions
      When upgrading hosts in a Virtual SAN cluster, some hosts might have different ESXi software versions. For example, some hosts are running ESXi 6.0 Update 1 and some hosts are running ESXi 6.0 Update 2. During this phase of the upgrade, the Resyncing Components page in the vSphere Web Client might not show resynchronization activity that is occurring in the cluster.

      Workaround: To monitor resynchronization activity during host upgrades, use the RVC command vsan.resync_dashboard.

    • New policy rules ignored on hosts with older versions of ESXi software
      This might occur when you have two or more Virtual SAN clusters, with one cluster running the latest software and another cluster running an older software version. The vSphere Web Client displays policy rules for the latest Virtual SAN software, but those new policies are not supported on the older hosts. For example, RAID-5/6 (Erasure Coding) – Capacity is not supported on hosts running 6.0U1 or earlier software. You can configure the new policy rules and apply them to any VMs and objects, but they are ignored on hosts running the older software version.

      Workaround: None

    • Snapshot consolidation might fail during upgrade
      During upgrade of the Virtual SAN on-disk format from version 2.0 to version 3.0, snapshot consolidation might fail. The Need Consolidation column in the vSphere Client indicates that the VM needs consolidation. To avoid this situation, perform snapshot consolidation either before upgrading the on-disk format, or wait until after the upgrade is complete.

      Workaround: If the snapshot consolidation fails, perform the operation after the on-disk format upgrade is complete.

    • Snapshot memory objects are not displayed in the Used Capacity Breakdown of the Virtual SAN Capacity monitor
      For virtual machines created with a hardware version lower than 10, the snapshot memory is included in the Vmem objects in the Used Capacity Breakdown.

      Workaround: To view snapshot memory objects in the Used Capacity Breakdown, create virtual machines with hardware version 10 or higher.

    • Virtual SAN performance service is disabled when you delete the storage policy applied to the Stats database object
      If you delete the storage policy applied to the performance service, the vSphere Web Client displays the following message: The Performance Service is disabled.

      Workaround: Perform the following steps to restore the deleted storage policy, or to apply an existing storage policy to the performance service Stats database object, using RVC commands:

      1. Log in to the vCenter Server using SSH and access the Bash shell.

      2. Run the following command to log in with your vCenter account:

        rvc localhost

      3. Run the following command to apply a storage policy to the Stats database object:

        vsan.perf.stats_object_setpolicy -o <policy> <cluster>

      For example:

        vsan.perf.stats_object_setpolicy -o "/localhost/VSAN-DC/storage/vmprofiles/Virtual SAN Default Storage Policy" MyCluster

    • Storage Usage reported in VM Summary page might appear larger after upgrading to Virtual SAN 6.2
      In previous releases of Virtual SAN, the value reported for VM Storage Usage was the space used by a single copy of the data. For example, if the guest wrote 1 GB to a thin-provisioned object with two mirrors, the Storage Usage was shown as 1 GB. In Virtual SAN 6.2, the Storage Usage field displays the actual space used, including all copies of the data. So if the guest writes 1 GB to a thin-provisioned object with two mirrors, the Storage Usage is shown as 2 GB. The reported storage usage on some VMs might appear larger after upgrading to Virtual SAN 6.2, but the actual space consumed did not increase.

      Workaround: None
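The difference in reported numbers is simple multiplication by the number of data copies. This is a sketch of the reporting change, not VMware code, using the 1 GB, two-mirror example above:

```shell
written_gb=1
copies=2                                  # FTT=1 with RAID-1 keeps two mirrors
echo "pre-6.2 Storage Usage: ${written_gb} GB (one copy counted)"
echo "6.2 Storage Usage:     $(( written_gb * copies )) GB (all copies counted)"
```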

    • Cannot place a witness host in Maintenance Mode
      When you attempt to place a witness host in Maintenance Mode, the host remains in the current state and you see the following notification: A specified parameter was not correct.

      Workaround: When placing a witness host in Maintenance Mode, choose the No data migration option.

    • Moving the witness host into and then out of a stretched cluster leaves the cluster in a misconfigured state
      If you place the witness host in a Virtual SAN-enabled vCenter cluster, an alarm notifies you that the witness host cannot reside in the cluster. But if you move the witness host out of the cluster, the cluster remains in a misconfigured state.

      Workaround: Move the witness host out of the Virtual SAN stretched cluster, and reconfigure the stretched cluster. For more information, see Knowledge Base article 2130587.

    • When a network partition occurs in a cluster which has an HA heartbeat datastore, VMs are not restarted on the other data site
      When the preferred or secondary site in a Virtual SAN cluster loses its network connection to the other sites, VMs running on the site that loses network connectivity are not restarted on the other data site, and the following error might appear: vSphere HA virtual machine HA failover failed.

      This is expected behavior for Virtual SAN clusters.

      Workaround: Do not select HA heartbeat datastore while configuring vSphere HA on the cluster.

    • Unmounted Virtual SAN disks and disk groups displayed as mounted in the vSphere Web Client Operational Status field
      After Virtual SAN disks or disk groups are unmounted, either by running the esxcli vsan storage disk group unmount command or by the Virtual SAN Device Monitor service when disks show persistently high latencies, the vSphere Web Client incorrectly displays the Operational Status field as mounted.

      Workaround: Use the Health field to verify disk status, instead of the Operational Status field.

    • On-disk format upgrade displays disks not on Virtual SAN
      When you upgrade the disk format, Virtual SAN might incorrectly display disks that were removed from the cluster. The UI also might show the version status as mixed. This display issue usually occurs after one or multiple disks are manually unmounted from the cluster. It does not affect the upgrade process. Only the mounted disks are checked. The unmounted disks are ignored.

      Workaround: None

    • Cannot enter Virtual SAN fault domain name that is more than 256 characters
      When you attempt to assign a fault domain name with more than 256 bytes in vSphere Web Client, the system displays an error: A specified parameter was not correct: faultDomainInfo.name. When you use multi-byte unicode characters, you can reach the limit of 256 bytes with fewer than 256 characters.

      Workaround: None.
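The limit is counted in bytes, not characters, so you can check a candidate name's byte length before assigning it. The Japanese name below is a hypothetical example:

```shell
name='データセンター'              # 7 characters, but 21 bytes in UTF-8
printf '%s' "$name" | wc -c     # byte count, which is what the 256-byte limit measures
```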

    • All Virtual SAN clusters share the same external proxy settings
      All Virtual SAN clusters share the same external proxy settings, even if you set the proxy at the cluster level. Virtual SAN uses external proxies to connect to Support Assistant, the Customer Experience Improvement Program, and the HCL database, if the cluster does not have direct Internet access.

      Workaround: None

    • Virtual SAN health service malfunctions when the vCenter HTTP or HTTPS port and certification settings are changed from the default values
      Virtual SAN health service only supports the default HTTPS port 443 and default certificate under /etc/vmware-vpx/ssl/rui.crt and /etc/vmware-vpx/ssl/rui.key. If you change the default port or modify the certificate, Virtual SAN health service cannot function properly. You might receive a status code 400 (Bad request) or have a rejected request.

      Workaround: Configure the Virtual SAN health service HTTP and HTTPS settings to use the default values.

    • Multicast performance test of Virtual SAN health check does not run on Virtual SAN network
      In some cases, depending on the routing configuration of ESXi hosts, the network multicast performance test does not run on the Virtual SAN network.

      Workaround: Use the Virtual SAN network as the only network setting for the ESXi hosts, and conduct the network multicast performance test based on this configuration.

      If ESXi hosts have multiple network settings, you also can follow the steps listed in this example. Assume that Virtual SAN runs on the network.

      1. Bind the multicast group address to this network on each host:

        $ esxcli network ip route ipv4 add -n <multicast group address> -g <Virtual SAN gateway IP>

      2. Check the routing table:

        $ esxcli network ip route ipv4 list
        default      vmk0  DHCP
                     vmk0  MANUAL
                     vmk3  MANUAL
                     vmk3  MANUAL

      3. Run the proactive multicast network performance test, and check the result.

      4. After the test is complete, recover the routing table:

        $ esxcli network ip route ipv4 remove -n <multicast group address> -g <Virtual SAN gateway IP>

    • VMs in a stretched cluster become inaccessible when preferred site is isolated, then regains connectivity only to the witness host
      When the preferred site becomes unavailable or loses its network connection to the secondary site and the witness host, the secondary site forms a cluster with the witness host and continues storage operations. Data on the preferred site might become outdated over time. If the preferred site then reconnects to the witness host but not to the secondary site, the witness host leaves the cluster it is in and forms a cluster with the preferred site, and some VMs might become inaccessible because they do not have access to the most recent data in this cluster.

      Workaround: Before you reconnect the preferred site to the cluster, mark the secondary site as the preferred site. After the sites are resynchronized, you can mark the site you want to use as the preferred site.

    • Virtual SAN Witness Host OVA does not support internal DVS configuration
      The VMware Virtual SAN Witness Host OVA package does not support configuration of a Distributed Virtual Switch (DVS) within the witness host.

      Workaround: Use a legacy virtual switch.
