VMware ESXi 6.0 Update 3a Release Notes


ESXi 6.0 Update 3a | 11 JULY 2017 | ISO Build 5572656

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

What's New

  • This release of ESXi 6.0 Update 3a addresses issues that have been documented in the Resolved Issues section.

Earlier Releases of ESXi 6.0

Features and known issues of ESXi 6.0 are described in the release notes for each release. Release notes for earlier releases of ESXi 6.0 are:

Internationalization

VMware ESXi 6.0 is available in the following languages:

  • English
  • French
  • German
  • Japanese
  • Korean
  • Simplified Chinese
  • Spanish
  • Traditional Chinese

Components of VMware vSphere 6.0, including vCenter Server, ESXi, the vSphere Web Client, and the vSphere Client, do not accept non-ASCII input.

Compatibility

ESXi, vCenter Server, and vSphere Web Client Version Compatibility

The VMware Product Interoperability Matrix provides details about the compatibility of current and earlier versions of VMware vSphere components, including ESXi, VMware vCenter Server, the vSphere Web Client, and optional VMware products. Check the VMware Product Interoperability Matrix also for information about supported management and backup agents before you install ESXi or vCenter Server.

The vSphere Web Client is packaged with the vCenter Server. You can install the vSphere Client from the VMware vCenter autorun menu that is part of the modules ISO file.

Hardware Compatibility for ESXi

To view a list of processors, storage devices, SAN arrays, and I/O devices that are compatible with vSphere 6.0, use the ESXi 6.0 information in the VMware Compatibility Guide.

Device Compatibility for ESXi

To determine which devices are compatible with ESXi 6.0, use the ESXi 6.0 information in the VMware Compatibility Guide.

Some devices are deprecated and no longer supported on ESXi 6.0. During the upgrade process, the device driver is installed on the ESXi 6.0 host. The device driver might still function on ESXi 6.0, but the device is not supported on ESXi 6.0. For a list of devices that are deprecated and no longer supported on ESXi 6.0, see KB 2087970.

Third-Party Switch Compatibility for ESXi

VMware now supports Cisco Nexus 1000V with vSphere 6.0. vSphere requires a minimum NX-OS release of 5.2(1)SV3(1.4). For more information about Cisco Nexus 1000V, see the Cisco Release Notes. As in previous vSphere releases, Cisco Nexus 1000V AVS mode is not supported.

Guest Operating System Compatibility for ESXi

To determine which guest operating systems are compatible with vSphere 6.0, use the ESXi 6.0 information in the VMware Compatibility Guide.

 

Virtual Machine Compatibility for ESXi

Virtual machines that are compatible with ESX 3.x and later (hardware version 4) are supported with ESXi 6.0. Virtual machines that are compatible with ESX 2.x and later (hardware version 3) are not supported. To use such virtual machines on ESXi 6.0, upgrade the virtual machine compatibility. See the vSphere Upgrade documentation.

Installation and Upgrades for This Release

Installation Notes for This Release

Read the vSphere Installation and Setup documentation for guidance about installing and configuring ESXi and vCenter Server.

Although the installations are straightforward, several subsequent configuration steps are essential. Read the following documentation:

vSphere 6.0 Recommended Deployment Models

VMware recommends only two deployment models:

  • vCenter Server with embedded Platform Services Controller. This model is recommended if one or more standalone vCenter Server instances must be deployed in a data center. Replication between vCenter Server instances with embedded Platform Services Controllers is not recommended.

  • vCenter Server with external Platform Services Controller. This model is recommended only if multiple vCenter Server instances need to be linked, or if you want to reduce the footprint of the Platform Services Controller in the data center. Replication between vCenter Server instances with external Platform Services Controllers is supported.

Read the vSphere Installation and Setup documentation for guidance on installing and configuring vCenter Server.

Read the Update sequence for vSphere 6.0 and its compatible VMware products for the proper sequence in which vSphere components should be updated.

Also, read KB 2108548 for guidance on installing and configuring vCenter Server.

vCenter Host OS Information

Read the Knowledge Base article KB 2091273.

Backup and Restore for vCenter Server and the vCenter Server Appliance Deployments that Use an External Platform Services Controller

Although statements in the vSphere Installation and Setup documentation restrict you from attempting to backup and restore vCenter Server and vCenter Server Appliance deployments that use an external Platform Services Controller, you can perform this task by following the steps in KB 2110294.

Migration from Embedded Platform Services Controller to External Platform Services Controller

vCenter Server with embedded Platform Services Controller cannot be migrated automatically to vCenter Server with external Platform Services Controller. Testing of this migration utility is not complete.

Before installing vCenter Server, determine your desired deployment option. If more than one vCenter Server instance is required for a replication setup, always deploy vCenter Server with an external Platform Services Controller.

Migrating Third-Party Solutions

For information about upgrading with third-party customizations, see the vSphere Upgrade documentation. For information about using Image Builder to make a custom ISO, see the vSphere Installation and Setup documentation.

Upgrades and Installations Disallowed for Unsupported CPUs

vSphere 6.0 supports only processors available after June (third quarter) 2006. Compared with the processors supported by vSphere 5.x, vSphere 6.0 no longer supports the following processors:

  • AMD Opteron 12xx Series
  • AMD Opteron 22xx Series
  • AMD Opteron 82xx Series

During an installation or upgrade, the installer checks the compatibility of the host CPU with vSphere 6.0. If your host hardware is not compatible, a purple screen appears with an incompatibility information message, and the vSphere 6.0 installation process stops.

Upgrade Notes for This Release

For instructions about upgrading vCenter Server and ESX/ESXi hosts, see the vSphere Upgrade documentation.

Open Source Components for VMware vSphere 6.0

The copyright statements and licenses applicable to the open source software components distributed in vSphere 6.0 are available at http://www.vmware.com. You need to log in to your My VMware account. Then, from the Downloads menu, select vSphere. On the Open Source tab, you can also download the source files for any GPL, LGPL, or other similar licenses that require the source code or modifications to source code to be made available for the most recent available release of vSphere.

Product Support Notices

  • vCenter Server database. Oracle 11g and 12c as external databases for the vCenter Server Appliance are deprecated in the vSphere 6.0 release. VMware continues to support Oracle 11g and 12c as external databases in vSphere 6.0. VMware will drop support for Oracle 11g and 12c as external databases for the vCenter Server Appliance in a future major release.

  • vSphere Web Client. The Storage Reports selection from an object's Monitor tab is no longer available in the vSphere 6.0 Web Client.

  • vSphere Client. The Storage Views tab is no longer available in the vSphere 6.0 Client.

  • Site Recovery Manager: Site Recovery Manager (SRM) versions older than SRM 6.5 do not support IP customization and in-guest callout operations for VMs that are placed on ESXi 6.0 and use VMware Tools version 10.1 and above. For further details, see VMware Tools Issues.

Patches Contained in this Release

This release contains all bulletins for ESXi that were released prior to the release date of this product. See the VMware Download Patches page for more information about the individual bulletins.

Patch Release ESXi600-Update03a contains the following individual bulletins:

Patch Release ESXi600-Update03a (Security-only build) contains the following individual bulletins:

Patch Release ESXi600-Update03a contains the following image profiles:

Patch Release ESXi600-Update03a (Security-only build) contains the following image profiles:

Resolved Issues

The resolved issues are grouped as follows.

Backup Issues
  • When you hot-add an existing or new virtual disk to a CBT (Changed Block Tracking) enabled virtual machine (VM) residing on a VVOL datastore, the guest operating system might stop responding

    When you hot-add an existing or new virtual disk to a CBT-enabled VM residing on a VVOL datastore, the guest operating system might stop responding until the hot-add process completes. The duration of the VM unresponsiveness depends on the size of the virtual disk being added. The VM automatically recovers once the hot-add completes.

    This issue is resolved in this release.

CIM and API Issues
  • SNMP agent is reporting ifOutErrors and ifOutOctets counter values incorrectly

    The Simple Network Management Protocol (SNMP) agent reports the same value for both the ifOutErrors and ifOutOctets counters, when they should be different.

    This issue is resolved in this release.

  • IPMI stack is unresponsive after a hard BMC reset

    The Intelligent Platform Management Interface (IPMI) stack becomes unresponsive after a hard Baseboard Management Controller (BMC) reset.

    This issue is resolved in this release.

  • DDR4 memory modules are displayed as Unknown on the Hardware Health status Page in vCenter Server

    DDR4 memory modules of Dell 13G servers are displayed as Unknown on the Hardware Health status page in vCenter Server.

    This issue is resolved in this release.

Guest Operating System Issues
  • A host fails with a purple diagnostic screen showing VMKPCIPassthru_SetupIntrProxy when you use PCI passthru

    When you use PCI passthru with devices that use MSI-X and newer Linux kernels, a purple diagnostic screen that shows VMKPCIPassthru_SetupIntrProxy appears. This issue is due to the code in PCIPassthruChangeIntrSettings.

    This issue is resolved in this release.

Host Profiles and Auto Deploy Issues
  • You cannot log in to an ESXi host that is added to an Active Directory domain with Auto Deploy using vSphere Authentication Proxy

    After you use vSphere Auto Deploy to add hosts to an Active Directory domain using vSphere Authentication Proxy, you cannot log in to the host with AD credentials.

    The issue is resolved in this release.

Internationalization Issues
  • Non-latin characters might be displayed incorrectly in VM storage profile names

    UTF-8 characters are not handled properly before being passed to a VVol VASA provider. As a result, VM storage profiles that use international characters are either not recognized by the VASA provider, or are treated or displayed incorrectly by the VASA provider.

    This issue is resolved in this release.

Miscellaneous Issues
  • The Guest OS might appear to slow down or might experience a CPU spike

    Your Guest OS might appear to slow down or might experience a CPU spike that disappears after you disable ASLR in the Guest OS and perform an FSR.

    The following processes might cause such behavior:
    1. Your translation cache fills with translations for numerous user-level CPUID/RDTSC instructions that are encountered at different virtual addresses in the Guest OS.
    2. Your virtual machine monitor uses a hash function with poor dispersion when checking for existing translations.

    Disabling ASLR resolves the issue temporarily until you upgrade your ESXi host to a version that contains the fix. The issue is resolved in this release.

  • An ESXi host fails with a purple screen or a warning message occurs during the destruction of a physically contiguous vmkernel heap with initial allocated size of 64MB or more

    Because of incorrect accounting of overhead memory, an ESXi host fails with a purple screen, or a warning message appears at unload time when destroying a physically contiguous vmkernel heap with initial allocated size of 64MB or more.

    The following warning message is observed:

    Heap: 2781: Non-empty heap (<heapName>) being destroyed (avail is <size>, should be <size>).

    This issue is resolved in this release.

  • You might observe lock contention error messages on a lunTimestamps.log file in hostd logs

    To update the last-seen time stamp for each LUN on an ESXi host, a process must acquire a lock on the /etc/vmware/lunTimestamps.log file. The lock is held longer than necessary in each process. If too many such processes try to update the /etc/vmware/lunTimestamps.log file, they might cause lock contention on this file. If hostd is one of the processes trying to acquire the lock, the ESXi host might get disconnected from vCenter Server or become unresponsive, with lock contention error messages (on the lunTimestamps.log file) in the hostd logs. You might see an error message similar to the following:

    Error interacting with configuration file /etc/vmware/lunTimestamps.log: Timeout while waiting for lock, /etc/vmware/lunTimestamps.log.LOCK, to be released. Another process has kept this file locked for more than 30 seconds. The process currently holding the lock is <process_name>(<PID>). This is likely a temporary condition. Please try your operation again.

    Note:

    • process_name is the process or service that currently holds the lock on /etc/vmware/lunTimestamps.log, for example, smartd, esxcfg-scsidevs, or localcli.
    • PID is the process ID of that process or service.

     

    This issue is resolved in this release.

  • A virtual machine might power off automatically with an error "MXUserAllocSerialNumber: too many locks"

    During normal VM operation, VMware Tools services (version 9.10.0 and later) create vSocket connections to exchange data with the hypervisor. When a large number of such connections are made, the hypervisor may run out of lock serial numbers and the virtual machine powers off with an error.

    This issue is resolved in this release.

  • Log spew in system logs

    Every time the kernel API vmk_ScsiCmdGetVMUuid fails to obtain a valid VM UUID, it prints an error message in the system logs similar to the following:

    2016-06-30T16:46:08.749Z cpu6:33528)WARNING: World: vm 0: 11020: vm not found

    The issue is resolved in this release by conditionally invoking the World_GetVcUuid function, which caused the log spew from the kernel API vmk_ScsiCmdGetVMUuid.

Networking Issues
  • An ESXi host might become unresponsive when you reconnect it to vCenter Server

    If you disconnect your ESXi host from vCenter Server while some of the virtual machines on that host are using a LAG, your ESXi host might become unresponsive when you reconnect it to vCenter Server after re-creating the same LAG on the vCenter Server side, and you might see an error such as the following:
    0x439116e1aeb0:[0x418004878a9c]LACPScheduler@ # +0x3c stack: 0x417fcfa00040
    0x439116e1aed0:[0x418003df5a26]Net_TeamScheduler@vmkernel#nover+0x7a stack: 0x43070000003c
    0x439116e1af30:[0x4180044f5004]TeamES_Output@ # +0x410 stack: 0x4302c435d958
    0x439116e1afb0:[0x4180044e27a7]EtherswitchPortDispatch@ # +0x633 stack: 0x0

    This issue is resolved in this release.

  • The network statistics show an abnormal packet count in the vCenter Server network performance chart

    The network packet count calculation might be handled by multiple CPUs. This might introduce a calculation error and display a wrong packet count in the network performance chart.

    This issue is resolved in this release.

  • Virtual machines configured to use EFI firmware fail to PXE boot in some DHCP environments

    Virtual machines configured to use EFI firmware fail to obtain an IP address when trying to PXE boot if the DHCP environment responds by IP unicast. The EFI firmware was not capable of receiving a DHCP reply sent by IP unicast.

    This issue is resolved in this release.

  • All VMs lose connectivity due to the Etherswitch heap running out of memory

    There is a memory leak in the Etherswitch heap for allocations of 32 bytes to 63 bytes. When the heap runs out of memory, the VMs lose connectivity.

    This issue is resolved in this release.

  • The ESXi host fails with purple diagnostic screen at DVFilter vMotion level and reports a "PCPU 25: no heartbeat (3/3 IPIs received)" error

    When you reboot the ESXi host under the following conditions, the host might fail with a purple diagnostic screen and a PCPU xxx: no heartbeat error.
     

    •  You use the vSphere Network Appliance (DVFilter) in an NSX environment
    •  You migrate a virtual machine with vMotion under DVFilter control

    This issue is resolved in this release.

  • Virtual machines (VMs), which are part of Virtual Desktop Infrastructure (VDI) pools using Instant Clone technology, lose connection to Guest Introspection services

    Existing VMs using Instant Clone and new VMs created with or without Instant Clone lose connection with the Guest Introspection host module. As a result, the VMs are not protected and no new Guest Introspection configurations can be forwarded to the ESXi host. You also see a "Guest introspection not ready" warning in the vCenter Server user interface.

    This issue is resolved in this release.

  • VMkernel log includes one or more "Couldn't enable keep alive" warnings

    "Couldn't enable keep alive" warnings occur during VMware NSX and partner solutions communication through a VMCI socket (vsock). The VMkernel log now omits these repeated warnings because they can be safely ignored.

    This issue is resolved in this release.

  • Kernel panic may happen due to network connectivity loss for a VM with e1000/e1000e vNIC

    For a VM with e1000/e1000e vNIC, when the e1000/e1000e driver tells the e1000/e1000e vmkernel emulation to skip a descriptor (the transmit descriptor address and length are 0), a loss of connectivity can occur and the VM can enter kernel panic state.

    The issue is resolved in this release.

  • vSphere vMotion fails when an ESXi host is connected to a vSphere Distributed Switch configured with LACP

    If an ESXi host is connected to a vSphere Distributed Switch configured with LACP and the LAG has uplinks in a link-down state, when you try to use vSphere vMotion you see a warning similar to: Currently connected network interface 'Network Adapter 1' uses network 'DSwitchName', which is not accessible.

    The issue is resolved in this release.

  • An ESXi host might become unavailable during shutdown

    If you use an IPv6 address type on an ESXi host, the host might become unavailable during shutdown.

    Upgrade your ESXi host to version 6.0 Update 3a to resolve this issue.

Security Issues
  • Update to Pixman library

    The Pixman library is updated to version 0.35.1.

  • Likewise stack on ESXi is not enabled to support SMBv2

    A Windows 2012 domain controller supports SMBv2, whereas the Likewise stack on ESXi supports only SMBv1.

    With this release, the Likewise stack on ESXi is enabled to support SMBv2.

    This issue is resolved in this release.

  • Update to VMware Tools

    VMware Tools is updated to version 10.1.5. For more information, see the VMware Tools 10.1.5 Release Notes.

    VMware Tools 10.1.5 addresses security issues in Open Source Components.

  • Update to OpenSSL

    The OpenSSL package is updated to version openssl-1.0.2k to resolve CVE-2017-3731, CVE-2017-3730, CVE-2017-3732, and CVE-2016-7055.

  • Update to Python

    Python is updated to version 2.7.13 to resolve CVE-2016-2183 and CVE-2016-1000110.

Server Configuration Issues
  • The vpxd service crashes and the vSphere Web Client user interface cannot connect to and update the vCenter Server

    Under certain conditions, the profile path for the VMODL object is left unset. This condition triggers a serialization issue during answer file validation for the network configuration, causing a vpxd service crash.

    This issue is resolved in this release.

  • Cannot join an ESXi 6.x host to Active Directory through Host Profile

    When you try to use host profiles to join an ESXi 6.x host to an Active Directory domain, the application hangs or fails with an error.

    This issue is resolved in this release.

Storage Issues
  • The vm-support command runs with a non-fatal error

    The vm-support command uses a script called smartinfo.sh to collect SMART data for each storage device on the ESXi host. The vm-support command imposes a 20-second timeout for every command that collects support data. However, smartinfo.sh takes more than 20 seconds to complete, which causes the vm-support command to run with the following error: cmd /usr/lib/vmware/vm-support/bin/smartinfo.sh timed out after 20 seconds due to lack of progress in last 10 seconds (0 bytes read).

    This issue is resolved in this release.

  • hostd crashes due to uninitialized libraries

    When you try to re-add a host to vCenter Server, hostd might crash if the host has IOFilter enabled and if VMs with enabled Changed Block Tracking (CBT) reside on that host. The filter library uses the poll and worker libraries. When the filter library is initialized before the poll and worker libraries, it cannot work properly and crashes.

    This issue is resolved in this release.

  • An ESXi host might stop responding while a virtual machine on that host is running on an SEsparse snapshot

    After you create a virtual machine snapshot in SEsparse format, you might hit a rare race condition if there are significant but varying write IOPS to the snapshot. This race condition might cause the ESXi host to stop responding.

    This issue is resolved in this release.

  • Virtual Machines that use SEsparse virtual disk format might stop responding while running specific I/O workloads

    Virtual machines with SEsparse-based snapshots might stop responding during I/O operations with a specific type of I/O workload in multiple threads.

    This issue is resolved in this release.

  • Reverting from an error during a storage profile change operation results in a corrupted profile ID

    If a VVol VASA Provider returns an error during a storage profile change operation, vSphere tries to undo the operation, but the profile ID gets corrupted in the process.

    This issue is resolved in this release.

  • Incorrect Read/Write latency displayed in vSphere Web Client for VVol datastores

    Per host Read/Write latency displayed for VVol datastores in the vSphere Web Client is incorrect.

    This issue is resolved in this release.

  • Host profile operations fail in an Auto Deploy environment

    Host profile operations such as compliance checks, remediation, and cloning of host profiles fail in an Auto Deploy environment.
    The following scenarios are observed:

    1. During a fresh installation of ESXi hosts by using Auto Deploy:
      • Check compliance for host profiles fails with a message similar to the following:
        Host is unavailable for checking compliance
      • Host profile remediation (apply host profiles) fails with the following error:
        Call "HostProfileManager.GenerateConfigTaskList" for object "HostProfileManager" on vCenter Server <vCenter_hostname> failed
    2. Changing the reference host of a host profile fails with the following error:
      Call "HostProfileManager.CreateProfile" for object "HostProfileManager" on vCenter Server <vCenter_hostname> failed.
    3. Cloning a host profile fails with the following error:
      Call "HostProfileManager.CreateProfile" for object "HostProfileManager" on vCenter Server <vCenter_hostname> failed. The profile does not have an associated reference host

    In the log files, located at /var/log/syslog.log, the failed operations appear with the following error:
    Error: profileData from only a single profile instance supported in VerifyMyProfilesPolicies.

    This issue is resolved in this release.

  • An ESXi host with a virtual machine configured with VMware vFlash Read Cache (VFRC) might fail with a purple screen

    An ESXi host with a virtual machine configured with VMware vFlash Read Cache (VFRC) might fail with a purple screen when the backend storage becomes slow or inaccessible. This failure is due to a locking defect in the VFRC code.

    This issue is resolved in this release.

  • SESparse causes a corrupted Guest OS file system

    Using SESparse both for creating snapshots and for cloning virtual machines might cause a corrupted Guest OS file system.

    This issue is resolved in this release.

  • A dynamic change in the queue depth parameter for devices results in flooding hostd with event notifications

    When Storage I/O Control (SIOC) changes the maximum queue depth parameter of a LUN, an event notification is sent from the Pluggable Storage Architecture (PSA) to hostd. In setups where the queue depth parameter changes dynamically, a flood of event notifications is sent to hostd, causing performance issues such as slow vSphere tasks or disconnection of hostd from vCenter Server.

    In this release, PSA does not send any such event notification to hostd.

  • Recomputing the digest for a Content Based Read Cache (CBRC) enabled disk never reports a completion percentage and instead returns a system error

    The CBRC filter uses 32-bit computation to perform calculations and returns a completion percentage for every digest recompute request. For large disks, the number of hashes is large enough to overflow the 32-bit calculation, resulting in an incorrect completion percentage.

    This issue is resolved in this release.

  • Users need to manually configure the SATP rule for Pure Storage FlashArray models

    For a Pure Storage FlashArray device, you must manually add a SATP rule to set the SATP, PSP, and IOPS as required.

    The issue is resolved in this release, and a new SATP rule is added to ESXi that sets the SATP to VMW_SATP_ALUA, the PSP to VMW_PSP_RR, and IOPS to 1 for all Pure Storage FlashArray models (an equivalent manual command is shown after the note below).

    Note: In case of a stateless ESXi installation, if an old host profile is applied, it overwrites the new rules after upgrade.
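
    For reference, a rule with equivalent settings can be added manually from the ESXi Shell on hosts that do not include the new built-in rule. This is a sketch only; the vendor and model strings are assumptions and should be verified against the array documentation:

    esxcli storage nmp satp rule add -s VMW_SATP_ALUA -P VMW_PSP_RR -O iops=1 -V PURE -M FlashArray -e "Pure Storage FlashArray SATP rule"

    A claim rule applies to devices claimed after it is added, so existing devices might require a reclaim or a host reboot before the new settings take effect.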

  • Datastore unmounting fails

    Sometimes when you try to unmount an NFS datastore from vCenter Server, the operation might fail with the error NFS datastore unmount failure - Datastore has open files, cannot be unmounted.

    The issue is resolved in this release.

  • When you use Storage vMotion, the UUID of a virtual disk might change

    When you use Storage vMotion on vSphere Virtual Volumes storage, the UUID of a virtual disk might change. The UUID identifies the virtual disk and a changed UUID makes the virtual disk appear as a new and different disk. The UUID is also visible to the guest OS and might cause drives to be misidentified.

    The issue is resolved in this release.

  • You see “Failed to open file” error messages in vmkernel.log during ESXi boot or disk group mounting in vSAN

    You see “Failed to open file” error messages in the vmkernel.log file when a vSAN-enabled ESXi host boots up or during a manual disk group mount in vSAN.

    The issue is resolved in this release.

  • A linked-clone VM with an SESparse disk might hang if I/O operations hang or are dropped because of an issue with the HBA driver, controller, firmware, connectivity, or storage topology

    When I/O operations hang or are dropped at the HBA driver layer because of an issue with the HBA driver, controller, firmware, connectivity, or storage topology, the stuck I/O is not aborted, which causes the VM to hang.

    The issue is resolved in this release.

  • The system becomes unresponsive and you might receive the error "Issue of delete blocks" in the vmkernel.log file

    When unmap commands fail, the ESXi host might stop responding due to a memory leak in the failure path. You might receive the following error message in the vmkernel.log file: FSDisk: 300: Issue of delete blocks failed [sync:0] and the host becomes unresponsive.

    This issue is resolved in this release by avoiding the memory leak.

  • If you use SEsparse and enable the unmap operation, the file system of the guest OS might become corrupted

    If you use SEsparse and enable the unmap operation to create snapshots and clones of virtual machines, the file system of the guest OS might become corrupted after the wipe operation (the storage unmap) completes. A full clone of the virtual machine is not affected.

    This issue is resolved in this release.

  • Modification of IOPS limit of virtual disks with enabled Changed Block Tracking (CBT) might fail

    To define the storage I/O scheduling policy for a virtual machine (VM), you can configure the I/O throughput for each VM disk by modifying the IOPS limit. When you edit the IOPS limit and CBT is enabled for the VM, the operation fails with the error The scheduling parameter change failed. Due to this problem, the scheduling policies of the VM cannot be altered. The error message appears in the vSphere Recent Tasks pane.

    You can see the following errors in the /var/log/vmkernel.log file:

    2016-11-30T21:01:56.788Z cpu0:136101)VSCSI: 273: handle 8194(vscsi0:0):Input values: res=0 limit=-2 bw=-1 Shares=1000
    2016-11-30T21:01:56.788Z cpu0:136101)ScsiSched: 2760: Invalid Bandwidth Cap Configuration
    2016-11-30T21:01:56.788Z cpu0:136101)WARNING: VSCSI: 337: handle 8194(vscsi0:0):Failed to invert policy
     

    This issue is resolved in this release.

  • Cancellation of a snapshot creation task is not executed properly

    When you attempt to cancel a snapshot creation task but the VASA provider is unable to cancel the related underlying operations on a VVol-backed disk, a snapshotted VVol is created and exists until garbage collection cleans it up.

    This issue is resolved in this release.

  • Cancellation of a clone creation task is not executed properly

    When you attempt to cancel a clone creation task, but the VASA provider is unable to cancel the related underlying operations, vCenter Server creates a new VVoL, copies all the data, and reports that the clone creation has been successful.

    This issue is resolved in this release.

Supported Hardware Issues
  • ESXi hosts running with the ESXi600-201611001 patch might fail with a purple diagnostic screen caused by non-maskable-interrupts (NMI) on HPE ProLiant Gen8 Servers

    The ESXi600-201611001 patch includes a change that allows ESXi to disable Intel® IOMMU (also known as VT-d) interrupt remapper functionality. In HPE ProLiant Gen8 servers, disabling this functionality causes PCI errors. As a result of these errors, the platform generates an NMI that causes the ESXi host to fail with a purple diagnostic screen.

    This issue is resolved in this release.

Upgrade and Installation Issues
  • Logging in to an ESXi host by using SSH requires password re-entry

    You are prompted for a password twice when connecting to an ESXi host through SSH if the ESXi host is upgraded from vSphere version 5.5 to 6.0 while being part of a domain.

    This issue is resolved in this release.

vCenter Server, vSphere Web Client, and vSphere Client Issues
  • You cannot use the VMware Host Client in Chrome 57

    When you try to log in to the VMware Host Client by using Chrome 57, the VMware Host Client immediately reports an error. The reported error is an Angular Digest in progress error.

    This issue is resolved in this release.

Virtual Machine Management Issues
  • Digest VMDK files are not deleted from the VM folder when you delete a VM

    When you create a linked clone from a digest VMDK file, vCenter Server marks the digest disk file as non-deletable. Thus, when you delete the respective VM, the digest VMDK file is not deleted from the VM folder because of the ddb.deletable = FALSE ddb entry in the descriptor file.

    This issue is resolved in this release.

  • Virtual machine might become unresponsive

    When you take a snapshot of a virtual machine, the virtual machine might become unresponsive.

    This issue is resolved in this release.

  • The virtual machine might become unresponsive due to active memory drop

    If the active memory of a virtual machine that runs on an ESXi host falls below 1% and drops to zero, the host might start reclaiming memory even if the host has enough free memory.

    This issue is resolved in this release. The following steps change the Mem.SampleActivePctMin advanced setting to 1, which can serve as a workaround on hosts that do not yet include the fix (a command-line equivalent is shown after the steps):

    1. Connect to vCenter Server by using the vSphere Web Client.
    2. Select an ESXi host in the inventory.
    3. Power off all virtual machines on the ESXi host.
    4. Click Settings.
    5. Under the System heading, click Advanced System Settings.
    6. Search for the Mem.SampleActivePctMin setting.
    7. Click Edit.
    8. Change the value to 1.
    9. Click OK to accept the changes.
    10. Power on the virtual machines.
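
    Alternatively, the same advanced setting can be changed from the ESXi Shell. This is a sketch, assuming the setting is exposed under the /Mem/SampleActivePctMin option path:

    esxcli system settings advanced set -o /Mem/SampleActivePctMin -i 1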

     

  • An ESXi host might get disconnected from vCenter Server

    Because of a memory leak, the hostd process might crash with the following error:

    Memory exceeds hard limit. Panic.

    The hostd logs report numerous errors such as Unable to build Durable Name.

    This kind of a memory leak causes the host to get disconnected from vCenter Server.

    This issue is resolved in this release.

  • Virtual Machine stops responding during snapshot consolidation

    During snapshot consolidation, a precise calculation might be performed to determine the storage space required for the consolidation. This precise calculation can cause the virtual machine to stop responding because it takes a long time to complete.

    This issue is resolved in this release.

  • A vMotion migration of a virtual machine (VM) is suspended for some time and then fails with a timeout

    If a VM has a driver (especially a graphics driver) or an application that pins too much memory, it creates sticky pages in the VM. When such a VM is about to be migrated to another host with vMotion, the migration process is suspended and later fails because of an incorrect computation of pending input/output operations.

    This issue is fixed in this release.

vSAN Issues
  • bytesToSync calculation values in vCenter Server and RVC might appear incorrect for RAID5/6 objects

    The current calculation of the resync bytes overestimates the full resync traffic for RAID5/6 configurations. This might happen in either of the following cases:

    • Placing the node in maintenance mode with either "Full Data Migration" or "Ensure Accessibility" evacuation mode.
    • Creating a complete mirror of a component after losing it due to a failure in the cluster.

    This issue is resolved in this release.

  • The system might show a generic error message rather than a specific message for identifying out of space issues

    In some cases, the system might display a generic error message rather than a specific message for identifying out of space issues. For example, when a failure is caused by insufficient disk space, you can see an error message such as "Storage policy change failure: 12 (Cannot allocate memory)".

    This issue is fixed in this release.

  • An ESXi host might fail with a purple screen at bora/modules/vmkernel/lsomcommon/ssdlog/ssdopslog.c:199

    Race conditions exist between multiple LSOM internal code paths. Freeing a region in the caching tier twice leads to a stack trace and a panic of the following type:

    PanicvPanicInt@vmkernel#nover+0x36b stack: 0x417ff6af0980, 0x4180368
    2015-04-20T16:27:38.399Z cpu7:1000015002)0x439124d1a780:[0x4180368ad6b7]Panic_vPanic@vmkernel#nover+0x23 stack: 0x46a, 0x4180368d7bc1, 0x43a
    2015-04-20T16:27:38.411Z cpu7:1000015002)0x439124d1a7a0:[0x4180368d7bc1]vmk_PanicWithModuleID@vmkernel#nover+0x41 stack: 0x439124d1a800, 0x4
    2015-04-20T16:27:38.423Z cpu7:1000015002)0x439124d1a800:[0x418037cc6d46]SSDLOG_FreeLogEntry@LSOMCommon#1+0xb6e stack: 0x6, 0x4180368dd0f4, 0
    2015-04-20T16:27:38.435Z cpu7:1000015002)0x439124d1a880:[0x418037d3c351]PLOGCommitDispatch@com.vmware.plog#0.0.0.1+0x849 stack: 0x46a7500, 0

    The races happen between PLOG Relog, PLOG Probe, and PLOG Decommission workflows.

    This issue is fixed in this release.

  • Under heavy I/O workload, a vSAN process might occupy the CPU cycles for a long time, which results in a brief PCPU lockup

    Under heavy I/O workload, a vSAN process might occupy the CPU cycles for a long time, which results in a brief PCPU lockup. This leads to a non-maskable interrupt and a log spew in the vmkernel log files.

    This issue is resolved in this release.

  • A vSAN-enabled ESXi host might fail with a purple diagnostic screen

    A vSAN-enabled ESXi host might fail with a purple screen showing the following backtrace:

    2017-02-19T09:58:26.778Z cpu17:33637)0x43911b29bd20:[0x418032a77f83]Panic_vPanic@vmkernel#nover+0x23 stack: 0x4313df6720ba, 0x418032a944
    2017-02-19T09:58:26.778Z cpu17:33637)0x43911b29bd40:[0x418032a944a9]vmk_PanicWithModuleID@vmkernel#nover+0x41 stack: 0x43911b29bda0, 0x4
    2017-02-19T09:58:26.778Z cpu17:33637)0x43911b29bda0:[0x41803387b46c]vs_space_mgmt_svc_start@com.vmware.virsto#0.0.0.1+0x414 stack: 0x100
    2017-02-19T09:58:26.778Z cpu17:33637)0x43911b29be00:[0x41803384266d]Virsto_StartInstance@com.vmware.virsto#0.0.0.1+0x68d stack: 0x4312df
    2017-02-19T09:58:26.778Z cpu17:33637)0x43911b29bf00:[0x4180338f138f]LSOMMountHelper@com.vmware.lsom#0.0.0.1+0x19b stack: 0x43060d72b980,
    2017-02-19T09:58:26.778Z cpu17:33637)0x43911b29bf30:[0x418032a502c2]helpFunc@vmkernel#nover+0x4e6 stack: 0x0, 0x43060d6a60a0, 0x35, 0x0,
    2017-02-19T09:58:26.778Z cpu17:33637)0x43911b29bfd0:[0x418032c14c1e]CpuSched_StartWorld@vmkernel#nover+0xa2 stack: 0x0, 0x0, 0x0, 0x0, 0

    This issue is resolved in this release.

  • Using objtool on a vSAN witness host might cause an ESXi host to fail with a purple screen

    If you use objtool on a witness host, it performs an ioctl call which leads to a NULL pointer dereference in the vSAN witness ESXi host and the host crashes.

    This issue is resolved in this release.

  • Decommissioning deduplication and compression-enabled disks that have failing media access commands can result in a failure with a purple screen on the vSAN node

    During the decommissioning of a vSAN disk group with deduplication and compression enabled, the failure occurs when the disk group contains disks with failing media access commands. The failures can be verified by vmkernel log messages such as:
    Partition: 914: Read of GPT header (hdrlba = 1) failed on "naa.55cd2e404c185332" : I/O error.
    This results in a failure of the vSAN host during decommissioning.

    This issue is fixed in this release.

  • You might receive a false alarm for the vSAN health check

    Sometimes, the vSAN health user interface might report an incorrect status for the network health check of the type "All hosts have a Virtual SAN vmknic configured" and then trigger the false vCenter Server alarm.

    This issue is fixed in this release.

  • A virtual machine might stop responding or a host might become disconnected from vCenter Server accompanied by log congestion in 6.0 Update 2 or memory congestion in 6.0 Update 3

    Removing a vSAN component that is in an invalid state from a vSAN cluster might cause a virtual machine to stop responding or a host to become disconnected from vCenter Server.

    This issue is fixed in this release.

  • An ESXi host might fail with a purple screen

    An ESXi host might fail with a purple screen because of a race between the distributed object manager client initialization and distributed object manager VMkernel sysinfo interface codepaths.

    This issue is resolved in this release.

  • SSD congestion might cause multiple virtual machines to become unresponsive

    Depending on the workload and the number of virtual machines, diskgroups on the host might go into permanent device loss (PDL) state. This causes the diskgroups to not admit further IOs, rendering them unusable until manual intervention is performed.
     

    This issue is resolved in this release.

Known Issues

The known issues are grouped as follows.

Installation Issues
  • DNS suffix might persist even after you change the default configuration in DCUI
    An ESXi host might automatically be configured with the default DNS and DNS suffix on first boot if it is deployed on a network served by a DHCP server. When you attempt to change the DNS suffix, the DCUI does not remove the existing DNS suffix but just adds the new suffix that you provide.

    Workaround: When configuring the DNS hostname of the witness OVF, set the full FQDN in the DNS Hostname field to append the correct DNS suffix. You can then remove unwanted DNS suffixes in the Custom DNS Suffix field, or from the ESXi Shell as shown below.
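
    Unwanted suffixes can also be listed and removed from the ESXi Shell. This is a sketch; old.example.com is a hypothetical suffix name:

    esxcli network ip dns search list
    esxcli network ip dns search remove -d old.example.com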

  • The VMware Tools service user processes might not run on Linux OS after installing the latest VMware Tools package
    On Linux OS, you might encounter VMware Tools upgrade or installation issues, or the VMware Tools service (vmtoolsd) user processes might not run after you install the latest VMware Tools package. The issue occurs if your glibc version is older than version 2.5, as in SLES10sp4.

    Workaround: Upgrade the Linux glibc to version 2.5 or above.

Upgrade Issues
Review also the Installation Issues section of the release notes. Many installation issues can also impact your upgrade process.
  • Attempts to upgrade from ESXi 6.x to 6.0 Update 2 and above with the esxcli software vib update command fail
    Attempts to upgrade from ESXi 6.x to 6.0 Update 2 with the esxcli software vib update command fail with error messages similar to the following:

    [DependencyError]
    VIB VMware_bootbank_esx-base_6.0.0-2.34.xxxxxxx requires vsan << 6.0.0-2.35, but the requirement cannot be satisfied within the ImageProfile.
    VIB VMware_bootbank_esx-base_6.0.0-2.34.xxxxxxx requires vsan >= 6.0.0-2.34, but the requirement cannot be satisfied within the ImageProfile.


    The issue occurs due to the introduction of a new Virtual SAN VIB that is interdependent with the esx-base VIB, while the esxcli software vib update command updates only the VIBs that are already installed on the system.

    Workaround: To resolve this issue, run the esxcli software profile update as shown in the following example:

    esxcli software profile update -d /vmfs/volumes/datastore1/update-from-esxi6.0-6.0_update02.zip -p ESXi-6.0.0-20160302001-standard

  • SSLv3 remains enabled on Auto Deploy after upgrade from earlier release of vSphere 6.0 to vSphere 6.0 Update 1 and above
    When you upgrade from an earlier release of vSphere 6.0 to vSphere 6.0 Update 1 and above, the SSLv3 protocol remains enabled on Auto Deploy.

    Workaround: Perform the following steps to disable SSLv3 by using PowerCLI commands:

    1. Run the following command to Connect to vCenter Server:

      PowerCLI C:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI> Connect-VIServer -Server <FQDN_hostname or IP Address of vCenter Server>

    2. Run the following command to check the current sslv3 status:

      PowerCLI C:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI> Get-DeployOption

    3. Run the following command to disable sslv3:

      PowerCLI C:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI> Set-DeployOption disable-sslv3 1

    4. Restart the Auto Deploy service to update the change.

  • Fibre Channel host bus adapter device number might change after ESXi upgrade from 5.5.x to 6.0

    During an ESXi upgrade from 5.5.x to 6.0, the Fibre Channel host bus adapter device number sometimes changes. The device number might change to another number, as shown by the esxcli storage core adapter list command.

    For example, the device numbers for a Fibre Channel host bus adapter might look similar to the following before ESXi upgrade:

    HBA Name
    ––––––––
    vmhba3
    vmhba4
    vmhba5
    vmhba6

    The device numbers for the Fibre Channel host bus adapter might look similar to the following after the ESXi upgrade to 6.0:

    HBA Name
    ––––––––
    vmhba64
    vmhba65
    vmhba5
    vmhba6

    The example illustrates the random change that might occur if you use the esxcli storage core adapter list command: the device alias numbers vmhba3 and vmhba4 change to vmhba64 and vmhba65, while the device numbers vmhba5 and vmhba6 are not changed. However, if you use the esxcli hardware pci list command, the device numbers do not change after the upgrade.

    This problem is external to VMware and may not affect you. ESXi displays device alias names but it does not use them for any operations. You can use the host profile to reset the device alias name. Consult VMware product documentation and knowledge base articles.

    Workaround: None.

  • Active Directory settings are not retained post-upgrade
    The Active Directory settings configured in the ESXi host before upgrade are not retained when the host is upgraded to ESXi 6.0.

    Workaround: Add the host to the Active Directory Domain after upgrade if the pre-upgrade ESXi version is 5.1 or later. Do not add the host to the Active Directory Domain after upgrade if the pre-upgrade ESXi version is ESXi 5.0.x.

  • After ESXi upgrade to 6.0 hosts that were previously added to the domain are no longer joined to the domain
    When you upgrade from vSphere 5.5 to vSphere 6.0 for the first time, the Active Directory configuration is not retained.

    Workaround: After upgrade, rejoin the hosts to the vCenter Server domain:

    1. Add the hosts to vCenter Server.

    2. Join the hosts to domain (for example, example.com)

    3. Upgrade all the hosts to ESXi 6.0.

    4. Manually join one recently upgraded host to domain.

    5. Extract the host profile and disable all other profiles except Authentication.

    6. Apply the manually joined host profile to the other recently upgraded hosts.

  • Previously running VMware ESXi Dump Collector service resets to default Disabled setting after upgrade of vCenter Server for Windows
    The upgrade process installs VMware vSphere ESXi Dump Collector 6.0 as part of a group of optional services for vCenter Server. You must manually enable the VMware vSphere ESXi Dump Collector service to use it as part of vCenter Server 6.0 for Windows.

    Workaround: Read the VMware documentation or search the VMware Knowledge Base for information on how to enable and run optional services in vCenter Server 6.0 for Windows.

    Enable the VMware vSphere ESXi Dump Collector service in the operating system:

    1. In the Control Panel menu, select Administrative Tools and double-click on Services.

    2. Right click VMware vSphere ESXi Dump Collector and Edit Startup Type.

    3. Set the Start-up Type to Automatic.

    4. Right Click VMware vSphere ESXi Dump Collector and Start.

    The Service Start-up Type is set to automatic and the service is in a running state.

vCenter Single Sign-On and Certificate Management Issues
  • Cannot connect to VM console after SSL certificate upgrade of ESXi host
    A certificate validation error might result if you upgrade the SSL certificate that is used by an ESXi host, and you then attempt to connect to the VM console of any VM running when the certificate was replaced. This is because the old certificate is cached, and any new console connection is rejected due to the mismatch.
    The console connection might still succeed, for example, if the old certificate can be validated through other means, but is not guaranteed to succeed. Existing virtual machine console connections are not affected, but you might see the problem if the console was running during the certificate replacement, was stopped, and was restarted.

    Workaround: Place the host in maintenance mode, or suspend or power off all VMs. Only running VMs are affected. As a best practice, perform all SSL certificate upgrades after you place the host in maintenance mode, as in the example below.
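
    For reference, a host can also be placed in maintenance mode from the ESXi Shell. This is a sketch; on vSAN hosts an appropriate data evacuation mode must also be selected, which is not shown here:

    esxcli system maintenanceMode set -e true
    esxcli system maintenanceMode get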

Networking Issues

  • Certain vSphere functionality does not support IPv6
    You can enable IPv6 for all nodes and components except for the following features:

    • IPv6 addresses for ESXi hosts and vCenter Server that are not mapped to fully qualified domain names (FQDNs) on the DNS server.
      Workaround: Use FQDNs or make sure the IPv6 addresses are mapped to FQDNs on the DNS servers for reverse name lookup.

    • Virtual volumes

    • PXE booting as a part of Auto Deploy and Host Profiles
      Workaround: PXE boot an ESXi host over IPv4 and configure the host for IPv6 by using Host Profiles.

    • Connection of ESXi hosts and the vCenter Server Appliance to Active Directory
      Workaround: Use Active Directory over LDAP as an identity source in vCenter Single Sign-On.

    • NFS 4.1 storage with Kerberos
      Workaround: Use NFS 4.1 with AUTH_SYS.

    • Authentication Proxy

    • Connection of the vSphere Management Assistant and vSphere Command-Line Interface to Active Directory.
      Workaround: Connect to Active Directory over LDAP.

    • Use of the vSphere Client to enable IPv6 on vSphere features
      Workaround: Use the vSphere Web Client to enable IPv6 for vSphere features.

  • Recursive panic might occur when using ESXi Dump Collector
    A recursive kernel panic might occur when the host is in a panic state while it displays the purple diagnostic screen and writes the core dump over the network to the ESXi Dump Collector. A VMkernel zdump file might not be available for troubleshooting on the ESXi Dump Collector in vCenter Server.

    In the case of a recursive kernel panic, the purple diagnostic screen on the host displays the following message:
    2014-09-06T01:59:13.972Z cpu6:38776)Starting network coredump from host_ip_address to esxi_dump_collector_ip_address.
    [7m2014-09-06T01:59:13.980Z cpu6:38776)WARNING: Net: 1677: Check what type of stack we are running on [0m
    Recursive panic on same CPU (cpu 6, world 38776, depth 1): ip=0x418000876a27 randomOff=0x800000:
    #GP Exception 13 in world 38776:vsish @ 0x418000f0eeec
    Secondary panic trap frame registers:
    RAX:0x0002000001230121 RCX:0x000043917bc1af80 RDX:0x00004180009d5fb8 RBX:0x000043917bc1aef0
    RSP:0x000043917bc1aee8 RBP:0x000043917bc1af70 RSI:0x0002000001230119 RDI:0x0002000001230121
    R8: 0x0000000000000038 R9: 0x0000000000000040 R10:0x0000000000010000 R11:0x0000000000000000
    R12:0x00004304f36b0260 R13:0x00004304f36add28 R14:0x000043917bc1af20 R15:0x000043917bc1afd0
    CS: 0x4010 SS: 0x0000 FS: 0x4018 GS: 0x4018 IP: 0x0000418000f0eeec RFG:0x0000000000010006
    2014-09-06T01:59:14.047Z cpu6:38776)Backtrace for current CPU #6, worldID=38776, rbp=0x43917bc1af70
    2014-09-06T01:59:14.056Z cpu6:38776)0x43917bc1aee8:[0x418000f0eeec]do_free_skb@com.vmware.driverAPI#9.2+0x4 stack: 0x0, 0x43a18b4a5880,
    2014-09-06T01:59:14.068Z cpu6:38776)Recursive panic on same CPU (cpu 6, world 38776): ip=0x418000876a27 randomOff=0x800000:
    #GP Exception 13 in world 38776:vsish @ 0x418000f0eeec
    Halt$Si0n5g# PbC8PU 7.

    Recursive kernel panic might occur when the VMkernel panics while heavy traffic is passing through the physical network adapter that is also configured to send the core dumps to the collector on vCenter Server.

    Workaround: Perform either of the following workarounds:

    • Dedicate a physical network adapter to core dump transmission only to reduce the impact from system and virtual machine traffic.

    • Disable the ESXi Dump Collector on the host by running the following ESXCLI console command:
      esxcli system coredump network set --enable false

Storage Issues

    NFS Version 4.1 Issues

    • Virtual machines on an NFS 4.1 datastore fail after the NFS 4.1 share recovers from an all paths down (APD) state
      When the NFS 4.1 storage enters an APD state and then exits it after a grace period, powered on virtual machines that run on the NFS 4.1 datastore fail. The grace period depends on the array vendor.
      After the NFS 4.1 share recovers from APD, you see the following message on the virtual machine summary page in the vSphere Web Client:
      The lock protecting VM.vmdk has been lost, possibly due to underlying storage issues. If this virtual machine is configured to be highly available, ensure that the virtual machine is running on some other host before clicking OK.
      After you click OK, crash files are generated and the virtual machine powers off.

      Workaround: None.

    • NFS 4.1 client loses synchronization with server when trying to create new sessions
      After a period of interrupted connectivity with the server, the NFS 4.1 client might lose synchronization with the server when trying to create new sessions. When this occurs, the vmkernel.log file contains a throttled series of warning messages noting that an NFS41 CREATE_SESSION request failed with NFS4ERR_SEQ_MISORDERED.

      Workaround: Perform the following sequence of steps.

      1. Attempt to unmount the affected file systems. If no files are open when you unmount, this operation succeeds and the NFS client module cleans up its internal state. You can then remount the file systems that were unmounted and resume normal operation (see the example commands after these steps).

      2. Take down the NICs connecting to the mounts' IP addresses and leave them down long enough for several server lease times to expire. Five minutes should be sufficient. You can then bring the NICs back up. Normal operation should resume.

      3. If the preceding steps fail, reboot the ESXi host.
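
      For step 1, the unmount and remount can also be performed from the ESXi Shell. This is a sketch with hypothetical values; the volume name nfs41ds, server 192.0.2.10, and export path /vol/export are placeholders:

      esxcli storage nfs41 list
      esxcli storage nfs41 remove -v nfs41ds
      esxcli storage nfs41 add -H 192.0.2.10 -s /vol/export -v nfs41ds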

    • NFS 4.1 client loses synchronization with an NFS server and connection cannot be recovered even when session is reset
      After a period of interrupted connectivity with the server, the NFS 4.1 client might lose synchronization with the server and the synchronized connection with the server cannot be recovered even if the session is reset. This problem is caused by an EMC VNX server issue. When this occurs, the vmkernel.log file contains a throttled series of warning messages noting that NFS41: NFS41ProcessSessionUp:2111: resetting session with mismatched clientID; probable server bug.

      Workaround: To end the session, unmount all datastores and then remount them.

    • ONTAP Kerberos volumes become inaccessible or experience VM I/O failures
      A NetApp server does not respond when it receives RPCSEC_GSS requests that arrive out of sequence. As a result, the corresponding I/O operation stalls unless it is terminated and the guest OS can stall or encounter I/O errors. Additionally, according to RFC 2203, the client can only have a number of outstanding requests equal to seq_window (32 in case of ONTAP) according to RPCSEC_GSS context and it must wait until the lowest of these outstanding requests is completed by the server. Therefore, the server never replies to the out-of-sequence RPCSEC_GSS request, and the client stops sending requests to the server after it reaches the maximum seq_window number of outstanding requests. This causes the volume to become inaccessible.

      Workaround: None. Check the latest Hardware Compatibility List (HCL) to find a supported ONTAP server that has resolved this problem.

    • You cannot create a larger than 1 TB virtual disk on NFS 4.1 datastore from EMC VNX
      NFS version 4.1 storage from EMC VNX with firmware version 7.x supports only 32-bit file formats. This prevents you from creating virtual machine files that are larger than 1 TB on the NFS 4.1 datastore.

      Workaround: Update the EMC VNX array to version 8.x.

    • NFS 4.1 datastores backed by EMC VNX storage become inaccessible during firmware upgrades
      When you upgrade EMC VNX storage to a new firmware, NFS 4.1 datastores mounted on the ESXi host become inaccessible. This occurs because the VNX server changes its major device number after the firmware upgrade. The NFS 4.1 client on the host does not expect the major number to change after it has established connectivity with the server, and causes the datastores to be permanently inaccessible.

      Workaround: Unmount all NFS 4.1 datastores exported by the VNX server before upgrading the firmware.

    • When ESXi hosts use different security mechanisms to mount the same NFS 4.1 datastore, virtual machine failures might occur
      If different ESXi hosts mount the same NFS 4.1 datastore using different security mechanisms, AUTH_SYS and Kerberos, virtual machines placed on this datastore might experience problems and failure. For example, your attempts to migrate the virtual machines from host1 to host2 might fail with permission denied errors. You might also observe these errors when you attempt to access a host1 virtual machine from host2.

      Workaround: Make sure that all hosts that mount an NFS 4.1 volume use the same security type.
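
      One way to compare the security configuration across hosts, assuming ESXi Shell access, is to list the NFS 4.1 mounts on each host and check the security type reported for the volume (the exact output columns can vary by build):

      # Run on every host that mounts the volume and compare the reported security type
      esxcli storage nfs41 list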

    • Attempts to copy read-only files to NFS 4.1 datastore with Kerberos fail
      The failure might occur when you attempt to copy data from a source file to a target file. The target file remains empty.

      Workaround: None.

    • When you create a datastore cluster, uniformity of NFS 4.1 security types is not guaranteed
      While creating a datastore cluster, vSphere does not verify and enforce the uniformity of NFS 4.1 security types. As a result, datastores that use different security types, AUTH_SYS and Kerberos, might be a part of the same cluster. If you migrate a virtual machine from a datastore with Kerberos to a datastore with AUTH_SYS, the security level for the virtual machine becomes lower.
      This issue applies to such functionalities as vMotion, Storage vMotion, DRS, and Storage DRS.

      Workaround: If Kerberos security is required for your virtual machines, make sure that all NFS 4.1 volumes that compose the same cluster use only the Kerberos security type. Do not include NFS 3 datastores, because NFS 3 supports only AUTH_SYS.

    Virtual SAN Issues

    • Virtual SAN Health UI does not display because of a timeout
      When you access the Virtual SAN Health UI under a Virtual SAN cluster > Monitor > Virtual SAN > Health, the UI does not display. A possible cause is the vSphere ESX Agent Manager hanging, which results in a timeout. To confirm, open the Virtual SAN health log located at /var/log/vmware/vsan-health/vmware-vsan-health-service.log and search for calls to the vSphere ESX Agent Manager service by using the string VsanEamUtil.getClusterStatus:.
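
      For example, assuming shell access to the vCenter Server Appliance is enabled, you can search the log for that string directly:

      # Look for calls to the vSphere ESX Agent Manager in the Virtual SAN health log
      grep VsanEamUtil.getClusterStatus /var/log/vmware/vsan-health/vmware-vsan-health-service.log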

      Workaround: Restart the vSphere ESX Agent Manager service by using the vSphere Web Client and refresh the Virtual SAN health UI.

    • The limit health check of a two or three-node Virtual SAN cluster shows red
      A health check of a two or three-node Virtual SAN cluster under Virtual SAN cluster > Monitor > Virtual SAN > Health > Limit Health > after 1 additional host failure shows red and triggers a false vCenter Server event or alarm when the disk space usage of the cluster exceeds 50%.

      Workaround: Add one or more hosts to the Virtual SAN cluster or add more disks to decrease the disk space usage of the cluster under 50%.

    • Virtual SAN disk serviceability does not work when you use third-party lsi_msgpt3 drivers
      The plugin for Virtual SAN disk serviceability, lsu-lsi-lsi-msgpt3-plugin, supports the operation to get the device location and turn on or off the disk LED. The VMware lsi_msgpt3 inbox driver supports the serviceability plugin. However, if you use a third-party asynchronous driver, the plugin does not work.

      Workaround: Use the VMware inbox lsi_msgpt3 driver version 06.255.10.00-2vmw or later.
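
      To confirm which lsi_msgpt3 driver the host is running, one option, assuming ESXi Shell access (the grep pattern is only illustrative), is:

      # List installed VIBs and filter for the lsi_msgpt3 driver package
      esxcli software vib list | grep -i msgpt3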

    Virtual Volumes Issues

    • Failure to create virtual datastores due to incorrect certificate used by Virtual Volumes VASA provider
      Occasionally, a self-signed certificate used by the Virtual Volumes VASA provider might incorrectly define the KeyUsage extension as critical without setting the keyCertSign bit. In this case, the provider registration succeeds. However, you are not able to create a virtual datastore from storage containers reported by the VASA provider.

      Workaround: Ensure that the self-signed certificate used by the VASA provider does not define the KeyUsage extension as critical without also setting the keyCertSign bit, and then register the provider.
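
      Before registering the provider, you can inspect the certificate with openssl; the file name vasa-provider.pem is a placeholder:

      # Display the certificate details and review the X509v3 Key Usage extension
      openssl x509 -in vasa-provider.pem -noout -text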

    General Storage Issues

    • ESXi 6.0 Update 2 hosts connected to certain storage arrays with a particular version of the firmware might see I/O timeouts and subsequent aborts
      When ESXi 6.0 Update 2 hosts that are connected to certain storage arrays with a particular firmware version send requests for SMART data to the array, and the array responds with a PDL error, the PDL response behavior in 6.0 Update 2 might cause the failed commands to be retried continuously, blocking other commands. This results in widespread I/O timeouts and subsequent aborts.

      Also, the ESXi hosts might take a long time to reconnect to the vCenter Server after reboot or the hosts might go into a Not Responding state in the vCenter Server. Storage-related tasks such as HBA rescan might take a very long time to complete.

      Workaround: To resolve this issue, see Knowledge Base article 2133286.

    • vSphere Web Client incorrectly displays Storage Policy as attached when new VM is created from an existing disk
      When you use the vSphere Web Client to create a new VM from an existing disk and specify a storage policy when setting up the disk, the filter appears to be attached when you select the new VM --> click on VM policies --> Edit VM storage policies. However, the filter is not actually attached. You can check the .vmdk file or run vmkfstools --iofilterslist <vmdk-file> to verify whether the filter is attached.

      Workaround: After you create the new VM, but before you power it on, add the filter to the vmdk by clicking on VM policies --> Edit VM storage policies.
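
      For example, assuming the virtual disk is located at /vmfs/volumes/datastore1/myvm/myvm.vmdk (a placeholder path), you can verify from the ESXi Shell whether a filter is attached:

      # List the I/O filters attached to the virtual disk
      vmkfstools --iofilterslist /vmfs/volumes/datastore1/myvm/myvm.vmdk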

    • NFS Lookup operation returns NFS STALE errors
      When you deploy a large number of VMs on the NFS datastore, the VM deployment fails with an error message similar to the following because of a race condition:

      Stale NFS file handle

      Workaround: Restart the Lookup operation. See Knowledge Base article 2130593 for details.

    • Attempts to create a VMFS datastore on Dell EqualLogic LUNs fail when QLogic iSCSI adapters are used
      You cannot create a VMFS datastore on a Dell EqualLogic storage device that is discovered through QLogic iSCSI adapters.
      When your attempts fail, the following error message appears on vCenter Server: Unable to create Filesystem, please see VMkernel log for more details: Connection timed out. The VMkernel log contains continuous iscsi session blocked and iscsi session unblocked messages. On the Dell EqualLogic storage array, monitoring logs show a protocol error in packet received from the initiator message for the QLogic initiator IQN names.

      This issue is observed when you use the following components:

      • Dell EqualLogic array firmware: V6.0.7

      • QLogic iSCSI adapter firmware version: 3.00.01.75

      • Driver version: 5.01.03.2-7vmw-debug

      Workaround: Enable the iSCSI ImmediateData adapter parameter on the QLogic iSCSI adapter. By default, the parameter is turned off. You cannot change this parameter from the vSphere Web Client or by using esxcli commands. To change this parameter, use vendor-provided software, such as QConvergeConsole CLI.

    • ESXi host with Emulex OneConnect HBA fails to boot
      When an ESXi host has the Emulex OneConnect HBA installed, the host might fail to boot. This failure occurs due to a problem with the Emulex firmware.

      Workaround: To correct this problem, contact Emulex to get the latest firmware for your HBA.

      If you continue to use the old firmware, follow these steps to avoid the boot failure:

      1. When ESXi is loading, press Shift+O before booting the ESXi kernel.

      2. Leave the existing boot option as is, and add a space followed by dmaMapperPolicy=false.

    • Flash Read Cache does not accelerate I/Os during APD
      When the flash disk configured as a virtual flash resource for Flash Read Cache is faulty or inaccessible, or the disk storage is unreachable from the host, the Flash Read Cache instances on that host are invalid and do not accelerate I/Os. As a result, the caches do not serve stale data after connectivity is re-established between the host and the storage. The connectivity outage might be temporary, an all paths down (APD) condition, or permanent, a permanent device loss (PDL). This condition persists until the virtual machine is power-cycled.

      Workaround: Power-cycle the virtual machine to restore I/O acceleration using Flash Read Cache.
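
      You can power-cycle the virtual machine from the vSphere Web Client or, as a sketch from the ESXi Shell where the VM ID 12 is a placeholder, with vim-cmd:

      # Find the VM ID, then power the virtual machine off and back on
      vim-cmd vmsvc/getallvms
      vim-cmd vmsvc/power.off 12
      vim-cmd vmsvc/power.on 12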

    • All Paths Down (APD) or path-failovers might cause system failure
      In a shared SAS environment, APD or path-failover situations might cause system failure if the disks are claimed by the lsi_msgpt3 driver and they are experiencing heavy I/O activity.

      Workaround: None

    • Frequent use of SCSI abort commands can cause system failure
      With heavy I/O activity, frequent SCSI abort commands can cause a very slow response from the MegaRAID controller. If an unexpected interrupt occurs with resource references that were already released in a previous context, system failure might result.

      Workaround: None

    • iSCSI connections fail and datastores become inaccessible when IQN changes
      This problem might occur if you change the IQN of an iSCSI adapter while iSCSI sessions on the adapter are still active.

      Workaround: When you change the IQN of an iSCSI adapter, no session should be active on that adapter. Remove all iSCSI sessions and all targets on the adapter before changing the IQN.
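
      A hedged sketch of this sequence from the ESXi Shell, assuming the iSCSI adapter is vmhba33 and the target address and new IQN are placeholders:

      # Remove the active sessions and the discovery target on the adapter
      esxcli iscsi session remove -A vmhba33
      esxcli iscsi adapter discovery sendtarget remove -A vmhba33 -a 192.168.1.10:3260
      # Change the adapter IQN only after no sessions remain
      esxcli iscsi adapter set -A vmhba33 -n iqn.1998-01.com.vmware:esxi01-new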

    • nvmecli online and offline operations might not always take effect
      When you perform the nvmecli device online -A vmhba* operation to bring an NVMe device online, the operation appears to succeed. However, the device might remain in the offline state.

      Workaround: Check the status of NVMe devices by running the nvmecli device list command.
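
      For example, assuming the NVMe device is registered as vmhba2 (a placeholder), bring it online and then confirm the state that it actually reports:

      # Attempt to bring the device online, then verify its reported state
      nvmecli device online -A vmhba2
      nvmecli device list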

    Server Configuration Issues
    • Remediation fails when applying a host profile from a stateful host to a host provisioned with Auto Deploy
      When applying a host profile from a statefully deployed host to a host provisioned with Auto Deploy (stateless host) with no local storage, the remediation attempt fails with one of the following error messages:

      • The vmhba device at PCI bus address sxxxxxxxx.xx is not present on your host. You must shut down and then insert a card into PCI slot yy. The type of card should exactly match the one in the reference host.

      • No valid coredump partition found.

      Workaround: Disable the plug-in that is causing the issue (for example, the Device Alias Configuration or Core Dump Configuration) from the host profile, and then remediate the host profile.

    • Applying host profile with static IP to a host results in compliance error
      If you extract a host profile from a host with a DHCP network configuration, and then edit the host profile to have a static IP address, a compliance error occurs with the following message when you apply it to another host:

      Number of IPv4 routes did not match.

      Workaround: Before extracting the host profile from the DHCP host, configure the host so that it has a static IP address.
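
      A minimal sketch, assuming the management VMkernel interface is vmk0 and the address values are placeholders, of assigning a static IPv4 address from the ESXi Shell before you extract the profile:

      # Switch vmk0 from DHCP to a static IPv4 configuration
      esxcli network ip interface ipv4 set -i vmk0 -I 192.168.1.50 -N 255.255.255.0 -t static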

    • When you hot-add a virtual network adapter that has network resources overcommitted, the virtual machine might be powered off
      On a vSphere Distributed Switch that has Network I/O Control enabled, a powered-on virtual machine is configured with a bandwidth reservation according to the reservation for virtual machine system traffic on the physical network adapter of the host. You hot-add a network adapter to the virtual machine with a network bandwidth reservation that exceeds the bandwidth available on the physical network adapters of the host.

      When you hot-add the network adapter, the VMkernel starts a Fast Suspend and Resume (FSR) process. Because the virtual machine requests more network resources than available, the VMkernel exercises the failure path of the FSR process. A fault in this failure path causes the virtual machine to power off.

      Workaround: Do not configure bandwidth reservation when you add a network adapter to a powered on virtual machine.

    VMware HA and Fault Tolerance Issues
    • Legacy Fault Tolerance (FT) not supported on Intel Skylake-DT/S, Broadwell-EP, Broadwell-DT, and Broadwell-DE platforms
      Legacy FT is not supported on the Intel Skylake-DT/S, Broadwell-EP, Broadwell-DT, and Broadwell-DE platforms. Attempts to power on a virtual machine fail after you enable single-processor Legacy Fault Tolerance.

      Workaround: None.

    Guest Operating System Issues
    • Attempts to enable passthrough mode on NVMe PCIe SSD devices might fail after hot plug
      To enable passthrough mode on an SSD device from the vSphere Web Client, you select a host, click the Manage tab, click Settings, navigate to the Hardware section, click PCI Devices > Edit, select a device from a list of active devices that can be enabled for passthrough, and click OK. However, when you hot plug a new NVMe device to an ESXi 6.0 host that does not have a PCIe NVMe drive, the new NVMe PCIe SSD device cannot be enabled for passthrough mode and does not appear in the list of available passthrough devices.

      Workaround: Restart your host. Alternatively, run the following command on your ESXi host:

      1. Log in as a root user.

      2. Run the command
        /etc/init.d/hostd start

    Supported Hardware Issues
    • When you run esxcli to get the disk location, the result is not correct for Avago controllers on HP servers
      When you run esxcli storage core device physical get against an Avago controller on an HP server, the result is not correct.

      For example, if you run:
      esxcli storage core device physical get -d naa.5000c5004d1a0e76
      The system returns:
      Physical Location: enclosure 0, slot 0

      The actual label of that slot on the physical server is 1.

      Workaround: Check the slot on your HP server carefully. Because the slot numbers on the HP server start at 1 while the numbering that the command returns starts at 0, add 1 to the slot number that the command returns to get the correct result.

    CIM and API Issues
    • The sfcb-vmware_raw might fail
      The sfcb-vmware_raw might fail because the default maximum memory allocated to the plug-in resource group is not enough.

      Workaround: Add the UserVars.CIMOemPluginsRPMemMax advanced option to set the memory limit for sfcbd plug-ins by using the following command, and then restart sfcbd for the new value to take effect:

      esxcfg-advcfg -A CIMOemPluginsRPMemMax --add-desc 'Maximum Memory for plugins RP' --add-default XXX --add-type int --add-min 175 --add-max 500

      Here, XXX is the memory limit that you want to allocate. The value must be between the minimum (175) and the maximum (500).
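
      For example, to allocate a limit of 300 and restart sfcbd (the init script path assumes a standard ESXi 6.0 installation):

      # Create the advanced option with a limit of 300, then restart the CIM service
      esxcfg-advcfg -A CIMOemPluginsRPMemMax --add-desc 'Maximum Memory for plugins RP' --add-default 300 --add-type int --add-min 175 --add-max 500
      /etc/init.d/sfcbd-watchdog restart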