Updated on: 10 OCT 2019
VMware vSAN 6.7 Update 3 | 20 AUG 2019 | Build 14320388
Check for additions and updates to these release notes.
What's in the Release Notes
The release notes cover the following topics:
- What's New
- VMware vSAN Community
- Upgrades for This Release
- Limitations
- Known Issues
What's New
vSAN 6.7 Update 3 introduces the following new features and enhancements:
- vSAN performance enhancements. This release provides improved performance and availability SLAs on all-flash configurations with deduplication enabled. Latency sensitive applications have better performance in terms of predictable I/O latencies and increased sequential I/O throughput. Rebuild times on disk and node failures are shorter, which provides better availability SLAs.
- Enhanced capacity monitoring. The capacity monitoring dashboard has been redesigned for improved visibility of overall usage, granular breakdown, and simplified capacity alerting. Capacity-related health checks are more visible and consistent. Granular capacity utilization is available per site, fault domain, and at the host/disk group level.
- Enhanced resync monitoring. The Resyncing Objects dashboard introduces new logic to improve the accuracy of resync completion times, as well as granular visibility into different types of resyncing activity, such as rebalancing or policy compliance.
- Data migration pre-check for maintenance mode operations. This release of vSAN introduces a dedicated dashboard to provide in-depth analysis for host maintenance mode operations, including a more descriptive pre-check for data migration activities. This report provides deeper insight into object compliance, cluster capacity and predicted health before placing a host into maintenance mode.
- Increased hardening during capacity-strained scenarios. This release includes new robust handling of capacity usage conditions for improved detection, prevention, and remediation of conditions where cluster capacity has exceeded recommended thresholds.
- Proactive rebalancing enhancements. You can automate all rebalancing activities with cluster-wide configuration and threshold settings. Prior to this release, you manually initiated proactive rebalancing after being alerted by vSAN health checks.
- Efficient capacity handling for policy changes. This release of vSAN introduces new logic to reduce the amount of space temporarily consumed by policy changes across the cluster. vSAN processes policy resynchronizations in small batches, which efficiently utilizes capacity from the slack space reserve and simplifies user operations.
- Disk format conversion pre-checks. All disk group format conversions that require a rolling data evacuation now include a backend pre-check which accurately determines success or failure of the operation before any movement of data.
- Parallel resynchronization. vSAN 6.7 Update 3 includes optimized resynchronization behavior, which automatically runs additional data streams per resyncing component when resources are available. This new behavior runs in the background and provides greater I/O management and performance for workload demands.
- Windows Server Failover Clusters (WSFC) on native vSAN VMDKs. vSAN 6.7 Update 3 introduces native support for SCSI-3 PR, which enables Windows Server Failover Clusters to be deployed directly on VMDKs as first class workloads. This capability makes it possible to migrate legacy deployments on physical RDMs or external storage protocols to vSAN.
- Enable Support Insight in the vSphere Client. You can enable vSAN Support Insight, which provides access to all vSAN proactive support and diagnostics based on the CEIP, such as online vSAN health checks, performance diagnostics and improved support experience during SR resolution.
- vSphere Update Manager (VUM) baseline preference. This release includes an improved vSAN update recommendation experience from VUM, which allows users to configure the recommended baseline for a vSAN cluster to either stay within the current version and only apply available patches or updates, or upgrade to the latest ESXi version that is compatible with the cluster.
- Upload and download VMDKs from a vSAN datastore. This release adds the ability to upload and download VMDKs to and from the vSAN datastore. This capability provides a simple way to protect and recover VM data during capacity-strained scenarios.
- vCenter forward compatibility with ESXi. vCenter Server can manage newer versions of ESXi hosts in a vSAN cluster, as long as both vCenter Server and its managed hosts have the same major vSphere version. You can apply critical ESXi patches without updating vCenter Server to the same version.
- New performance metrics and troubleshooting utility. This release introduces a vSAN CPU metric through the performance service, and provides a new command-line utility (vsantop) for real-time performance statistics of vSAN, similar to esxtop for vSphere. An illustrative invocation appears after this list.
- vSAN iSCSI service enhancements. The vSAN iSCSI service has been enhanced to allow dynamic resizing of iSCSI LUNs without disruption.
- Cloud Native Storage. Cloud Native Storage is a solution that provides comprehensive data management for stateful applications. With Cloud Native Storage, vSphere persistent storage integrates with Kubernetes. When you use Cloud Native Storage, you can create persistent storage for containerized stateful applications capable of surviving restarts and outages. Stateful containers orchestrated by Kubernetes can leverage storage exposed by vSphere (vSAN, VMFS, NFS) while using standard Kubernetes volume, persistent volume, and dynamic provisioning primitives.
For information on how to install and configure Kubernetes node VMs and to use Cloud Native Storage, see Getting Started with VMware Cloud Native Storage.
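The new vsantop utility runs directly in the ESXi host shell. A minimal sketch (interactive controls and options can vary by build):
vsantop
Like esxtop, it runs interactively and refreshes vSAN performance counters in real time; press q to quit.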
VMware vSAN Community
Use the vSAN Community Web site to provide feedback and request assistance with any problems you find while using vSAN.
Upgrades for This Release
For instructions about upgrading vSAN, see the VMware vSAN 6.7 documentation.
Note: Before performing the upgrade, please review the most recent version of the VMware Compatibility Guide to validate that the latest vSAN version is available for your platform.
vSAN 6.7 Update 3 is a new release that requires a full upgrade to vSphere 6.7 Update 3. Perform the following tasks to complete the upgrade:
1. Upgrade to vCenter Server 6.7 Update 3. For more information, see the VMware vSphere 6.7 Release Notes.
2. Upgrade hosts to ESXi 6.7 Update 3. For more information, see the VMware vSphere 6.7 Release Notes.
3. Upgrade the vSAN on-disk format to version 10.0. If upgrading from on-disk format version 3.0 or later, no data evacuation is required (metadata update only).
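Before step 3, you can confirm the current on-disk format version of a host's devices from the ESXi shell. A minimal sketch (the exact field name can vary by release):
esxcli vsan storage list | grep -i 'format version'
Devices that already report version 10 require no further action.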
Upgrading the On-disk Format for Hosts with Limited Capacity
During an upgrade of the vSAN on-disk format, a disk group evacuation is performed. The disk group is removed and upgraded to on-disk format version 10.0, and the disk group is added back to the cluster. For two-node or three-node clusters, or clusters without enough capacity to evacuate each disk group, select Allow Reduced Redundancy from the vSphere Client. You also can use the following RVC command to upgrade the on-disk format: vsan.ondisk_upgrade --allow-reduced-redundancy
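A sketch of the corresponding RVC session, run from a system with RVC connected to vCenter Server (the inventory path is illustrative):
cd /localhost/DataCenter/computers/vSAN-Cluster
vsan.ondisk_upgrade . --allow-reduced-redundancy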
When you allow reduced redundancy, your VMs are unprotected for the duration of the upgrade, because this method does not evacuate data to the other hosts in the cluster. It removes each disk group, upgrades the on-disk format, and adds the disk group back to the cluster. All objects remain available, but with reduced redundancy.
If you enable deduplication and compression during the upgrade to vSAN 6.7, you can select Allow Reduced Redundancy from the vSphere Client.
Verifying Health Check Failures During Upgrade
During upgrades of the vSAN on-disk format, the Physical Disk Health – Metadata Health check can fail intermittently. These failures can occur if the destaging process is slow, most likely because vSAN must allocate physical blocks on the storage devices. Before you take action, verify the status of this health check after the period of high activity, such as multiple virtual machine deployments, is complete. If the health check is still red, the warning is valid. If the health check is green, you can ignore the previous warning. For more information, see Knowledge Base article 2108690.
Limitations
In vSAN 6.7 Update 3, Configuration Assist and Updates are available only in the Flex-based vSphere Web Client.
For information about maximum configuration limits for the vSAN 6.7 release, see the Configuration Maximums documentation.
Known Issues
The known issues are grouped as follows.
Cloud Native Storage Issues
- Deleted CNS volumes might temporarily appear as existing in the CNS UI
After you delete an FCD disk that backs a CNS volume, the volume might still show up as existing in the CNS UI. However, your attempts to delete the volume fail. You might see an error message similar to the following:
The object or item referred to could not be found.
Workaround: The next full synchronization will resolve the inconsistency and correctly update the CNS UI.
- Attempts to attach multiple CNS volumes to the same pod might occasionally fail with an error
When you attach multiple volumes to the same pod simultaneously, the attach operation might occasionally choose the same controller slot. As a result, only one of the operations succeeds, while other volume mounts fail. You might see an error message similar to the following:
CnsFault error: CNS: The input volume xyz is not a CNS volume.
Workaround: After Kubernetes retries the failed operation, the operation succeeds if a controller slot is available on the node VM.
- Under certain circumstances, when a CNS operation fails, the task status appears as successful in the vSphere Client
This might occur when, for example, you use a non-compliant storage policy to create a CNS volume. The operation fails, while the vSphere Client shows the task status as successful.
Workaround: The successful task status in the vSphere Client does not guarantee that the CNS operation succeeded. To make sure the operation succeeded, verify its results.
- Update Manager displays test ID instead of health check name
When you use Update Manager to remediate hosts in a vSAN cluster, vSAN health checks can identify upgrade issues. When the remediation task fails on a host, you might see an error message with a test ID instead of a health check name. For example:
Before host exits MM, remediation failed because vSAN health check failed. vSAN cluster is not healthy because vSAN health check(s): com.vmware.vsan.health.test.controlleronhcl failed
Each test ID is related to a vSAN health check. To learn about the remediation health checks, refer to the following article: https://kb.vmware.com/s/article/60219
Workaround: If a remediation task fails on a vSAN host, use the health service to identify and resolve the issues. Then perform another remediation task.
- Cannot place last host in a cluster into maintenance mode, or remove a disk or disk group
Operations in Full data migration or Ensure accessibility mode might fail without providing guidance to add a new resource, when there is only one host left in the cluster and that host enters maintenance mode. This can also happen when there is only one disk or disk group left in the cluster and that disk or disk group is to be removed.
Workaround: Before you place the last remaining host in the cluster into maintenance mode with Full data migration or Ensure accessibility mode selected, add another host with the same configuration to the cluster. Before you remove the last remaining disk or disk group in the cluster, add a new disk or disk group with the same configuration and capacity.
- Object reconfiguration workflows might fail due to the lack of capacity if one or more disks or disk groups are almost full
vSAN resyncs get paused when the disks in non-deduplication clusters or disk groups in deduplication clusters reach a configurable resync pause fullness threshold. This is to avoid filling up the disks with resync I/O. If the disks reach this threshold, vSAN stops reconfiguration workflows, such as enter maintenance mode (EMM), repairs, rebalance, and policy change.
Workaround: If space is available elsewhere in the cluster, rebalancing the cluster frees up space on the other disks, so that subsequent reconfiguration attempts succeed.
- Total Bytes To Sync values might be higher than the overall cluster capacity
The Total Bytes To Sync displayed in logs or in the vSphere Client might be higher than the expected values. This is because the same object can be queued multiple times for different workflows, such as Repair followed by Fix Compliance, or Change Policy followed by Repair, resulting in total queued resync bytes that might be greater than the overall cluster capacity.
Workaround: None.
- After recovery from cluster full, VMs can lose HA protection
In a vSAN cluster that has hosts with disks 100% full, VMs might have a pending question and therefore lose HA protection. VMs that had a pending question remain unprotected by HA after the cluster recovers from the full state.
Workaround: After recovering from the vSAN cluster full scenario, perform one of the following actions:
- Disable and re-enable HA.
- Reconfigure HA.
- Power off and power on the VMs.
- Power Off VMs fails with Question Pending
If a VM has a pending question, you are not allowed to do any VM-related operations until the question is answered.
Workaround: Try to free the disk space on the relevant volume, and then click Retry.
- When the cluster is full, the IP addresses of VMs either change to IPv6 or become unavailable
When a vSAN cluster is full with one or more disk groups reaching 100%, there can be a VM pending question that requires user action. If the question is not answered and if the cluster full condition is left unattended, the IP addresses of VMs might change to IPv6 or become unavailable. This prevents you from using SSH to access the VMs. It also prevents you from using the VM console, because the console goes blank after you type root.
Workaround: None.
- Unable to remove a dedupe enabled disk group after a capacity disk enters PDL state
When a capacity disk in a dedupe-enabled disk group is removed, or its unique ID changes, or when the device experiences an unrecoverable hardware error, it enters Permanent Device Loss (PDL) state. If you try to remove the disk group, you might see an error message informing you that the action cannot be completed.
Workaround: Whenever a capacity disk is removed, or its unique ID changes, or when the device experiences an unrecoverable hardware error, wait for a few minutes before trying to remove the disk group.
- vSAN health indicates non-availability related incompliance with failed pending policy
A policy change request leaves the object health status of vSAN in a non-availability related incompliance state. This is because there might be other scheduled work that is utilizing the requested resources. However, vSAN reschedules this policy request automatically as resources become available.
Workaround: The vSAN periodic scan fixes this issue automatically in most cases. However, other work in progress might use up available resources even after the policy change was accepted but not applied. You can add more capacity if the capacity reporting displays a high value.
- In deduplication clusters, reactive rebalancing might not happen when the disks show more than 80% full
In deduplication clusters, when the disks display more than 80% full on the dashboard, the reactive rebalancing might not start as expected. This is because in deduplication clusters, pending writes and deletes are also considered for calculating the free capacity.
Workaround: None.
- Unable to light service LED on host with lsi_msgpt2 disk serviceability plugin
The lsi_msgpt2 disk serviceability plugin is not supported. vSAN cannot light the service LED on hosts that use the lsi_msgpt2 disk serviceability plugin.
Workaround: None.
- Display of unknown objects in stretched cluster after recovery from cluster full state
After recovery from the cluster full state on stretched clusters, the vSphere Client displays unknown objects. You can delete these leaked swap objects.
Workaround: Use the following procedure to delete an object.
- Make sure the object is related to the vswp object path and a number is available after .vswp. Object path: /vmfs/volumes/vsan:520669801772c95f-035c5337120cdb6a/f588f35c-5057-caf4-5162-ecf4bbdbd068/p3_grp103-10.155.182.236-vsanDatastore-rhel7-2018-vmwpv-p-0001-2b3ba28e.vswp.37478
- Get the object owner host through the RVC command: vsan.object_info cluster objUuid
- On the owner host, use the following command to delete the object: /usr/lib/vmware/osfs/bin/objtool delete -f -u objUuid, where objUuid is the UUID of the object and -f indicates optional force delete mode.
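A sketch of the sequence with a hypothetical object UUID (the RVC inventory path is also illustrative):
vsan.object_info /localhost/DataCenter/computers/vSAN-Cluster f588f35c-5057-caf4-5162-ecf4bbdbd068
/usr/lib/vmware/osfs/bin/objtool delete -f -u f588f35c-5057-caf4-5162-ecf4bbdbd068
Run vsan.object_info from RVC to find the owner host, then run objtool on that host.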
- Placing a host in maintenance mode times out
If a vSAN cluster is resynchronizing a large number of objects, placing a host in maintenance mode might fail with a timeout. This problem can occur when you use one of the following data evacuation modes: Ensure accessibility or Full data migration.
Workaround: Wait for cluster resynchronization to complete, and then place the host into maintenance mode.
- When rebalancing disks, the amount of data to move displayed by vSAN health service does not match the amount displayed by RVC
RVC performs a rough calculation to determine the amount of data to move when rebalancing disks. The value displayed by the vSAN health service is more accurate.
When rebalancing disks, refer to the vSAN health service to check the amount of data to move.
Workaround: None.
- TRIM/UNMAP commands from Guest OS fail
If the Guest OS attempts to perform space reclamation during online snapshot consolidation, the TRIM/UNMAP commands fail. This failure keeps space from being reclaimed.
Workaround: Try to reclaim the space after the online snapshot operation is complete. If subsequent TRIM/UNMAP operations fail, remount the disk.
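For example, on a Linux guest you would typically retry reclamation with fstrim after the consolidation completes (the mount point is illustrative):
fstrim -v /mnt/data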
- Space reclamation from SCSI TRIM/UNMAP is lost when online snapshot consolidation is performed
Space reclamation achieved from SCSI TRIM/UNMAP commands is lost when you perform online snapshot consolidation. Offline snapshot consolidation does not affect SCSI UNMAP operations.
Workaround: Reclaim the space after online snapshot consolidation is complete.
- Host failure when converting data host into witness host
When you convert a vSAN cluster into a stretched cluster, you must provide a witness host. You can convert a data host into the witness host, but you must use maintenance mode with Full data migration during the process. If you place the host into maintenance mode with Ensure accessibility option, and then configure it as the witness host, the host might fail with a purple diagnostic screen.
Workaround: Remove the disk group on the witness host and then re-create the disk group.
- Host times out when entering maintenance mode during upgrade from 6.6.1 or earlier
During upgrade, each host is placed into maintenance mode with the Ensure data accessibility option. A host running vSphere 6.0 Update 2 or earlier might time out if objects with FTT=0 are present on the cluster. This problem occurs when a large number of objects are present in the cluster, and some objects have FTT=0.
Workaround: You can increase the FTT value from 0 to 1 for all objects before you upgrade the cluster. You also can wait for 60 minutes, and retry the upgrade.
- Duplicate VM with the same name in vCenter Server when residing host fails during datastore migration
If a VM is undergoing Storage vMotion from vSAN to another datastore, such as NFS, and the host on which it resides encounters a failure on the vSAN network that causes an HA failover of the VM, the VM might be duplicated in vCenter Server.
Workaround: Power off the invalid VM and unregister it from the vCenter Server.
- vSAN Capacity Overview shows incorrect information after enabling deduplication and compression
After you enable deduplication and compression, the Capacity Overview in the HTML5-based vSphere Client might display incorrect information. The capacity bars might wrap onto the next line. This problem occurs for a short period after you enable deduplication and compression.
Workaround: After the deduplication and compression task is complete, wait a minute and then refresh the vSphere Client display.
- Compliance of VMs displayed as unknown
Storage compliance checks are not supported for ESXi hosts with software older than 6.0 Update 1. The storage compliance is displayed as Unknown.
You might see the following message: StorageFault
Workaround: None.
- Reconfiguring an existing stretched cluster under a new vCenter Server causes vSAN to issue a health check warning
When rebuilding a current stretched cluster under a new vCenter Server, the vSAN cluster health check is red. The following message appears: vSphere cluster members match vSAN cluster members
Workaround: Use the following procedure to configure the stretched cluster. An illustrative shell session for steps 2 and 3 follows the procedure.
- Use SSH to log in to the witness host.
- Decommission the disks on witness host. Run the following command: esxcli vsan storage remove -s "SSD UUID"
- Force the witness host to leave the cluster. Run the following command: esxcli vsan cluster leave
- Reconfigure the stretched cluster from the new vCenter Server (Configure > vSAN > Fault Domains & Stretched Cluster).
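On the witness host, steps 2 and 3 look like this (replace the placeholder with the value reported by esxcli vsan storage list):
esxcli vsan storage list
esxcli vsan storage remove -s "SSD UUID"
esxcli vsan cluster leave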
- During vCenter Server replacement, the esxcli vsan health cluster list command displays health issues
During replacement of vCenter Server, the following command incorrectly displays health issues: esxcli vsan health cluster list. It might report issues with network connectivity, physical disk health retrieval, and vSAN CLOMD liveness. Health checks displayed in vCenter Server report no issues.
Workaround: After vCenter Server replacement is complete, go to Cluster > Monitor > vSAN > Health. Select Cluster > vCenter state is authoritative, and click Update ESXi configuration.
- Disk format upgrade fails while vSAN resynchronizes large objects
If the vSAN cluster contains very large objects, the disk format upgrade might fail while the object is resynchronized. You might see the following error message: Failed to convert object(s) on vSAN
vSAN cannot perform the upgrade until the object is resynchronized. You can check the status of the resynchronization (Monitor > vSAN > Resyncing Components) to verify when the process is complete.
Workaround: Wait until no resynchronization is pending, then retry the disk format upgrade.
- Cluster consistency health check fails during deep rekey operation
The deep rekey operation on an encrypted vSAN cluster can take several hours. During the rekey, the following health check might indicate a failure: Cluster configuration consistency. The cluster consistency check does not detect the deep rekey operation, and there might not be a problem.
Workaround: Retest the vSAN cluster consistency health check after the deep rekey operation is complete.
- vSAN stretched cluster configuration lost after you disable vSAN on a cluster
If you disable vSAN on a stretched cluster, the stretched cluster configuration is not retained. The stretched cluster, witness host, and fault domain configuration is lost.
Workaround: Reconfigure the stretched cluster parameters when you re-enable the vSAN cluster.
- On-disk format version for witness host is later than version for data hosts
When you change the witness host during an upgrade to vSAN 6.6 and later, the new witness host receives the latest on-disk format version. The on-disk format version of the witness host might be later than the on-disk format version of the data hosts. In this case, the witness host cannot store components.
Workaround: Use the following procedure to change the on-disk format to an earlier version.
- Delete the disk group on the new witness host.
- Set the advanced parameter to enable formatting of disk groups with an earlier on-disk format. For more information, see Knowledge Base article 2146221.
- Create a new disk group on the witness host with a vSAN on-disk format version that matches the data hosts.
- Powered off VMs appear as inaccessible during witness host replacement
When you change a witness host in a stretched cluster, VMs that are powered off appear as inaccessible in the vSphere Web Client for a brief time. After the process is complete, powered off VMs appear as accessible. All running VMs appear as accessible throughout the process.
Workaround: None.
- Cannot place hosts in maintenance mode if they have faulty boot media
vSAN cannot place hosts with faulty boot media into maintenance mode. The task to enter maintenance mode might fail with an internal vSAN error, due to the inability to save configuration changes. You might see log events similar to the following: Lost Connectivity to the device xxx backing the boot filesystem
Workaround: Remove disk groups manually from each host, using the Full data evacuation option. Then place the host in maintenance mode.
- Health service does not work if vSAN cluster has ESXi hosts with vSphere 6.0 Update 1 or earlier
The vSAN 6.6 and later health service does not work if the cluster has ESXi hosts running vSphere 6.0 Update 1 or earlier releases.
Workaround: Do not add ESXi hosts with vSphere 6.0 Update 1 or earlier software to a vSAN 6.6 or later cluster.
- After stretched cluster failover, VMs on the preferred site register alert: Failed to failover
If the secondary site in a stretched cluster fails, VMs failover to the preferred site. VMs already on the preferred site might register the following alert: Failed to failover. Ignore this alert. It does not impact the behavior of the failover.
Workaround: None.
- During network partition, components in the active site appear to be absent
During a network partition in a two-host vSAN cluster or stretched cluster, the vSphere Web Client might display a view of the cluster from the perspective of the non-active site. You might see active components in the primary site displayed as absent.
Workaround: Use RVC commands, such as vsan.vm_object_info, to query the state of objects in the cluster, as shown below.
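A sketch of the RVC query (the inventory path is illustrative):
vsan.vm_object_info /localhost/DataCenter/vms/myVM
The output lists the state of each component backing the VM's objects.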
- Temporary Update configuration tasks visible if hosts are disconnected when you change vSAN encryption configurations
When you change the configurations in an encrypted vSAN cluster (such as turning encryption on or off or changing the KMS key), an Update vSAN configuration task runs on each host every 3 seconds until all hosts reconnect or until 5 minutes have passed. These tasks are not harmful and rarely impact performance.
Workaround: None.
- Some objects are non-compliant after force repair
After you perform a force repair, some objects might not be repaired because the ownership of the objects was transferred to a different node during the process. The force repair might be delayed for those objects.
Workaround: Attempt the force repair operation after all other objects are repaired and resynchronized. You can wait until vSAN repairs the objects.
- When you move a host from one encrypted cluster to another, and then back to the original cluster, the task fails
When you move a host from an encrypted vSAN cluster to another encrypted vSAN cluster, then move the host back to the original encrypted cluster, the task might fail. You might see the following message: A general system error occurred: Invalid fault. This error occurs because vSAN cannot re-encrypt data on the host using the original encryption key. After a short time, vCenter Server restores the original key on the host, and all unmounted disks in the vSAN cluster are mounted.
Workaround: Reboot the host and wait for all disks to get mounted.
- Stretched cluster imbalance after a site recovers
When you recover a failed site in a stretched cluster, sometimes hosts in the failed site are brought back sequentially over a long period of time. vSAN might overuse some hosts when it begins repairing the absent components.
Workaround: Recover all of the hosts in a failed site together within a short time window.
- VM operations fail due to HA issue with stretched clusters
Under certain failure scenarios in stretched clusters, certain VM operations such as vMotions or powering on a VM might be impacted. These failure scenarios include a partial or a complete site failure, or the failure of the high speed network between the sites. This problem is caused by the dependency on VMware HA being available for normal operation of stretched cluster sites.
Workaround: Disable vSphere HA before performing vMotion, VM creation, or powering on VMs. Then re-enable vSphere HA.
- Cannot perform deep rekey if a disk group is unmounted
Before vSAN performs a deep rekey, it performs a shallow rekey. The shallow rekey fails if an unmounted disk group is present. The deep rekey process cannot begin.
Workaround: Remount or remove the unmounted disk group.
- Log entries state that firewall configuration has changed
A new firewall entry appears in the security profile when vSAN encryption is enabled: vsanEncryption. This rule controls how hosts communicate directly with the KMS. When it is triggered, log entries are added to /var/log/vobd.log. You might see the following messages:
Firewall configuration has changed. Operation 'addIP4' for rule set vsanEncryption succeeded.
Firewall configuration has changed. Operation 'removeIP4' for rule set vsanEncryption succeeded.
These messages can be ignored.
Workaround: None.
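To confirm the rule set that these messages refer to, you can list it from the ESXi shell (illustrative):
esxcli network firewall ruleset list | grep vsanEncryption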
- Limited support for First Class Disks with vSAN datastores
vSAN 6.6 and later does not fully support First Class Disks in vSAN datastores. You might experience the following problems if you use First Class Disks in a vSAN datastore:
- vSAN health service does not display the health of First Class Disks correctly.
- The Used Capacity Breakdown includes the used capacity for First Class Disks in the following category: Other
- The health status of VMs that use First Class Disks is not calculated correctly.
Workaround: None.
- HA failover does not occur after setting Traffic Type option on a vmknic to support witness traffic
If you set the traffic type option on a vmknic to support witness traffic, vSphere HA does not automatically discover the new setting. You must manually disable and then re-enable HA so it can discover the vmknic. If you configure the vmknic and the vSAN cluster first, and then enable HA on the cluster, it does discover the vmknic.
Workaround: Manually disable vSphere HA on the cluster, and then re-enable it.
- iSCSI MCS is not supported
vSAN iSCSI target service does not support Multiple Connections per Session (MCS).
Workaround: None.
- Any iSCSI initiator can discover iSCSI targets
vSAN iSCSI target service allows any initiator on the network to discover iSCSI targets.
Workaround: You can isolate your ESXi hosts from iSCSI initiators by placing them on separate VLANs.
- After resolving network partition, some VM operations on linked clone VMs might fail
Some VM operations on linked clone VMs that are not producing I/O inside the guest operating system might fail. The operations that might fail include taking snapshots and suspending the VMs. This problem can occur after a network partition is resolved, if the parent base VM's namespace is not yet accessible. When the parent VM's namespace becomes accessible, HA is not notified to power on the VM.
Workaround: Power cycle VMs that are not actively running I/O operations.
- When you log out of the Web client after using the Configure vSAN wizard, some configuration tasks might fail
The Configure vSAN wizard might require up to several hours to complete the configuration tasks. You must remain logged in to the Web client until the wizard completes the configuration. This problem usually occurs in clusters with many hosts and disk groups.
Workaround: If some configuration tasks failed, perform the configuration again.
- New policy rules ignored on hosts with older versions of ESXi software
This might occur when you have two or more vSAN clusters, with one cluster running the latest software and another cluster running an older software version. The vSphere Web Client displays policy rules for the latest vSAN software, but those new policies are not supported on the older hosts. For example, RAID-5/6 (Erasure Coding) – Capacity is not supported on hosts running 6.0U1 or earlier software. You can configure the new policy rules and apply them to any VMs and objects, but they are ignored on hosts running the older software version.
Workaround: None.
- Snapshot memory objects are not displayed in the Used Capacity Breakdown of the vSAN Capacity monitor
For virtual machines created with hardware version lower than 10, the snapshot memory is included in the Vmem objects on the Used Capacity Breakdown.
Workaround: To view snapshot memory objects in the Used Capacity Breakdown, create virtual machines with hardware version 10 or higher.
- Storage Usage reported in VM Summary page might appear larger after upgrading to vSAN 6.5 or later
In previous releases of vSAN, the value reported for VM Storage Usage was the space used by a single copy of the data. For example, if the guest wrote 1 GB to a thin-provisioned object with two mirrors, the Storage Usage was shown as 1 GB. In vSAN 6.5 and later, the Storage Usage field displays the actual space used, including all copies of the data. So if the guest writes 1 GB to a thin-provisioned object with two mirrors, the Storage Usage is shown as 2 GB. The reported storage usage on some VMs might appear larger after the upgrade, but the actual space consumed did not increase.
Workaround: None.
- Cannot place a witness host in Maintenance Mode
When you attempt to place a witness host in Maintenance Mode, the host remains in the current state and you see the following notification: A specified parameter was not correct.
Workaround: When placing a witness host in Maintenance Mode, choose the No data migration option.
- Moving the witness host into and then out of a stretched cluster leaves the cluster in a misconfigured state
If you place the witness host in a vSAN-enabled vCenter cluster, an alarm notifies you that the witness host cannot reside in the cluster. But if you move the witness host out of the cluster, the cluster remains in a misconfigured state.
Workaround: Move the witness host out of the vSAN stretched cluster, and reconfigure the stretched cluster. For more information, see Knowledge Base article 2130587.
- When a network partition occurs in a cluster which has an HA heartbeat datastore, VMs are not restarted on the other data site
When the preferred or secondary site in a vSAN cluster loses its network connection to the other sites, VMs running on the site that loses network connectivity are not restarted on the other data site, and the following error might appear: vSphere HA virtual machine HA failover failed.
This is expected behavior for vSAN clusters.
Workaround: Do not select HA heartbeat datastore while configuring vSphere HA on the cluster.
- Unmounted vSAN disks and disk groups displayed as mounted in the vSphere Web Client Operational Status field
After the vSAN disks or disk groups are unmounted, either by running the esxcli vsan storage diskgroup unmount command or by the vSAN Device Monitor service when disks show persistently high latencies, the vSphere Web Client incorrectly displays the Operational Status field as mounted.
Workaround: Use the Health field to verify disk status, instead of the Operational Status field.