Updated on: 31 October 2018

ESXi 6.5 Update 1 | 27 JULY 2017 | ISO Build 5969303

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

  • What's New
  • Earlier Releases of ESXi 6.5
  • Internationalization
  • Compatibility
  • Product and Support Notices
  • Patches Contained in this Release
  • Resolved Issues
  • Known Issues

What's New

The ESXi 6.5 Update 1 release includes the following new features.

  • vSAN software upgrades are now integrated with vSphere Update Manager, which provides a unified and common workflow if you prefer to use Update Manager to upgrade the ESXi and vSAN stack. For more information, see the vSphere Update Manager Installation and Administration Guide.
  • Driver Updates:
    • Cavium qlnativefc driver
    • VMware nvme driver
    • Intel i40en driver with Lewisburg 10G NIC Support
    • Intel ne1000 driver with Lewisburg 1G NIC Support
    • Intel igbn driver
    • Intel ixgben driver
    • Broadcom ntg3 driver

Earlier Releases of ESXi 6.5

Features and known issues of ESXi 6.5 are described in the release notes for each release. Release notes for earlier releases of ESXi 6.5 are available on the VMware vSphere documentation site.

For compatibility, installation and upgrades, product support notices, and features, see the VMware vSphere 6.5 Release Notes.

Internationalization

VMware vSphere 6.5 is available in the following languages:

  • English
  • French
  • German
  • Spanish
  • Japanese
  • Korean
  • Simplified Chinese
  • Traditional Chinese

Components of VMware vSphere 6.5, including vCenter Server, ESXi, the vSphere Web Client, and the vSphere Client, do not accept non-ASCII input.

Compatibility

ESXi and vCenter Server Version Compatibility

The VMware Product Interoperability Matrix provides details about the compatibility of current and earlier versions of VMware vSphere components, including ESXi, VMware vCenter Server and optional VMware products. Check the VMware Product Interoperability Matrix also for information about supported management and backup agents before you install ESXi or vCenter Server.

The vSphere Update Manager, vSphere Web Client and vSphere Client are packaged with vCenter Server.

Hardware Compatibility for ESXi

To view a list of processors, storage devices, SAN arrays, and I/O devices that are compatible with vSphere 6.5 Update 1, use the ESXi 6.5 information in the VMware Compatibility Guide.

Device Compatibility for ESXi

To determine which devices are compatible with ESXi 6.5, use the ESXi 6.5 information in the VMware Compatibility Guide.

Guest Operating System Compatibility for ESXi

To determine which guest operating systems are compatible with vSphere 6.5, use the ESXi 6.5 information in the VMware Compatibility Guide.

Virtual Machine Compatibility for ESXi

Virtual machines that are compatible with ESX 3.x and later (hardware version 4) are supported with ESXi 6.5. Virtual machines that are compatible with ESX 2.x and later (hardware version 3) are not supported. To use such virtual machines on ESXi 6.5, upgrade the virtual machine compatibility.
See the vSphere Upgrade documentation.

Product and Support Notices

  • The VMware Lifecycle Product Matrix provides detailed information about all supported and unsupported products. Also check the VMware Lifecycle Product Matrix for information about End of General Support, End of Technical Guidance, and End of Availability.
  • VMware is announcing discontinuation of its third party virtual switch (vSwitch) program, and plans to deprecate the VMware vSphere APIs used by third party switches in the release following vSphere 6.5 Update 1. Subsequent vSphere versions will have the third party vSwitch APIs completely removed and third party vSwitches will no longer work. For more information, see FAQ: Discontinuation of third party vSwitch program (2149722).

Patches Contained in this Release

This release contains all bulletins for ESXi that were released prior to the release date of this product. See the My VMware page for more information about the individual bulletins.

Update Release ESXi-6.5.0-update01 contains the following individual bulletins:

Update Release ESXi-6.5.0-update01 (Security-only build) contains the following individual bulletins:

Update Release ESXi-6.5.0-update01 contains the following image profiles:

Update Release ESXi-6.5.0-update01 (Security-only build) contains the following image profiles:

Resolved Issues

The resolved issues are grouped as follows.

Auto Deploy
  • ESXi host loses network connectivity when performing a stateless boot from Auto Deploy

    An ESXi host might lose network connectivity when performing a stateless boot from Auto Deploy if the management vmkernel NIC has static IP and is connected to a Distributed Virtual Switch. 

    This issue is resolved in this release.

CIM and API Issues
  • 13G DDR4 memory modules are displayed with status Unknown on the Hardware Health Status Page in vCenter Server 

    Dell 13G Servers use DDR4 memory modules. These modules are displayed with status "Unknown" on the Hardware Health Status Page in vCenter Server. 

    This issue is resolved in this release.

Internationalization
  • Newly connected USB keyboards are assigned the default U.S. layout

    When a keyboard is configured with a layout other than the U.S. default and is later unplugged and plugged back into the ESXi host, the newly connected keyboard is assigned the default U.S. layout instead of the user-selected layout.

    This issue is resolved in this release. 

Miscellaneous Issues
  • The vmswapcleanup plugin fails to start

    The vmswapcleanup jumpstart plugin fails to start. The syslog contains the following line: 

    jumpstart[XXXX]: execution of '--plugin-dir /usr/lib/vmware/esxcli/int/ systemInternal vmswapcleanup cleanup' failed : Host Local Swap Location has not been enabled 

    This issue is resolved in this release.

  • High read load of VMware Tools ISO images might cause corruption of flash media

    In a VDI environment, the high read load of the VMware Tools images can result in corruption of the flash media.

    This issue is resolved in this release.

  • The SNMP agent reports ifOutErrors and ifOutOctets counter values incorrectly

    The Simple Network Management Protocol (SNMP) agent reports the same value for both the ifOutErrors and ifOutOctets counters, when they should be different.

    This issue is resolved in this release.

  • ESXi disconnects from vCenter Server when a virtual machine with a hardware version 3 is registered

    Although the ESXi host no longer supports running hardware version 3 virtual machines, it does allow registering these legacy virtual machines so that they can be upgraded to a newer, supported version. A recent regression caused the ESXi hostd service to disconnect from vCenter Server during the registration process, which prevented the virtual machine registration from succeeding.

    This issue is resolved in this release.

  • Virtual machine shuts down automatically with the MXUserAllocSerialNumber: too many locks error

    During normal virtual machine operation, VMware Tools (version 9.10.0 and later) services create vSocket connections to exchange data with the ESXi host. When a large number of such connections have been made, the ESXi host might run out of lock serial numbers, causing the virtual machine to shut down automatically with the MXUserAllocSerialNumber: too many locks error. If the workaround from KB article 2149941 has been used, re-enable Guest RPC communication over vSockets by removing the line guest_rpc.rpci.usevsocket = "FALSE" from the virtual machine configuration file.

    This issue is resolved in this release.
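
    If you need to check whether the workaround is still in place, a minimal sketch from the ESXi Shell follows; the datastore and virtual machine names are placeholders, and the line should be removed from the .vmx file only while the virtual machine is powered off:

    # Check for the KB 2149941 workaround entry (the path is an example only)
    grep -i "guest_rpc.rpci.usevsocket" /vmfs/volumes/datastore1/MyVM/MyVM.vmx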

  • Virtual machine crashes on ESXi 6.5 when multiple users log on to Windows Terminal Server VM

    A Windows 2012 terminal server running VMware Tools 10.1.0 on ESXi 6.5 stops responding when many users are logged in.

    The vmware.log file shows messages similar to:

    2017-03-02T02:03:24.921Z| vmx| I125: GuestRpc: Too many RPCI vsocket channels opened.
    2017-03-02T02:03:24.921Z| vmx| E105: PANIC: ASSERT bora/lib/asyncsocket/asyncsocket.c:5217
    2017-03-02T02:03:28.920Z| vmx| W115: A core file is available in "/vmfs/volumes/515c94fa-d9ff4c34-ecd3-001b210c52a3/h8-
    ubuntu12.04x64/vmx-debug-zdump.001"
    2017-03-02T02:03:28.921Z| mks| W115: Panic in progress... ungrabbing 

    This issue is resolved in this release.

  • The vSphere Web Client cannot change the Syslog.global.logDirUnique option

    When you use the vSphere Web Client to change the value of the Syslog.global.logDirUnique option, the option appears grayed out and cannot be modified.

    This issue is resolved in this release.
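
    For reference, the same setting can also be changed from the ESXi Shell. A minimal example, assuming you want each host to log to its own subdirectory:

    esxcli system syslog config set --logdir-unique=true   # enable a host-specific log subdirectory
    esxcli system syslog reload                            # apply the new syslog configuration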

  • The activation of iodm and vmci jumpstart plug-ins might fail during the ESXi boot

    During the boot of an ESXi host, error messages related to execution of the jumpstart plug-ins iodm and vmci are observed in the jumpstart logs. 

    This issue is resolved in this release.

  • Timeout values larger than 30 minutes are ignored by the EnterMaintenanceMode task

    Entering maintenance mode would time out after 30 minutes even if the specified timeout is larger than 30 minutes.

    This issue is resolved in this release.
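
    For reference, a timeout can also be specified when requesting maintenance mode from the ESXi Shell. A minimal sketch, with the 3600-second value chosen only as an example; verify the available options with esxcli system maintenanceMode set --help on your build:

    esxcli system maintenanceMode set --enable true --timeout 3600   # request maintenance mode, waiting up to 1 hour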

  • Even if all NAS datastores are disabled in the host profile document, host profile compliance fails and existing datastores are removed or new ones are added

    When all NFS datastores are disabled within the host profile document extracted from a reference host, host profile remediation might fail with compliance errors, and existing datastores are removed or new ones are added during the remediation.

    This issue is resolved in this release.

  • An ESXi host might fail with purple diagnostic screen when collecting performance snapshots

    An ESXi host might fail with purple diagnostic screen when collecting performance snapshots with vm-support due to calls for memory access after the data structure has already been freed.

    An error message similar to the following is displayed:

    @BlueScreen: #PF Exception 14 in world 75561:Default-drai IP 0xxxxxxxxxxxx addr 0xxxxxxxxxxxx
    PTEs:0xXXXXXXXXXXXXX;0xYYYYYYYYYYYYYY;0xZZZZZZZZZZZZZZ;0x0;
    [0xxxxxxxxxxxx]SPLockIRQWork@vmkernel#nover+0x26 stack: 0xxxxxxxxxxxx
    [0xxxxxxxxxxxx]VMKStatsDrainWorldLoop@vmkernel#nover+0x90 stack: 0x17
    

    This issue is resolved in this release.

Networking Issues
  • Virtual machines configured to use EFI firmware fail to PXE boot in some DHCP environments

    A virtual machine configured to use EFI firmware fails to obtain an IP address when trying to PXE boot if the DHCP environment responds by IP unicast, because the EFI firmware was not capable of receiving a DHCP reply sent by IP unicast.

    This issue is resolved in this release.

  • When you configure the guest VLAN with disabled VLAN offloading (stripping the VLAN tag), the ping command between the VLAN interfaces might fail

    The original packet buffer can be shared across multiple destination ports if the packet is forwarded to multiple ports (for example, a broadcast packet). If VLAN offloading is disabled and the original packet buffer is modified, the VLAN tag is inserted into the packet buffer before it is forwarded to the guest VLAN. The other ports detect the packet as corrupted and drop it.

    This issue is resolved in this release. 

  • Joining the ESXi 6.5 host to Active Directory domain fails with the Operation timed out error

    The ESXi 6.5 host fails to join the Active Directory domain, and the process might become unresponsive for an hour before returning the Operation timed out error, if the host uses only an IPv4 address and the domain has an IPv6-only or mixed IPv4 and IPv6 setup.

    This issue is resolved in this release. 

  • Full duplex configured on a physical switch might cause a duplex mismatch issue with the igb native Linux driver, which supports only auto-negotiate mode for NIC speed/duplex settings

    If you are using the igb native driver on an ESXi host, it always works in auto-negotiate speed and duplex mode. No matter what configuration you set up on this end of the connection, it is not applied on the ESXi side. The auto-negotiate support causes a duplex mismatch issue if a physical switch is set manually to full-duplex mode.

    This issue is resolved in this release.

  • If you perform a fresh installation of an ESXi host, it might not have a network connection if the first physical NIC's link state is down

    By default, each ESXi host has one virtual switch, vSwitch0. During the installation of ESXi, the first physical NIC is chosen as the default uplink for vSwitch0. If that NIC's link is down, the ESXi host might not have a network connection, even though the link state of the other NICs is up and they have network access.

    This issue is resolved in this release.

  • Virtual machines (VMs), which are part of Virtual Desktop Infrastructure (VDI) pools using Instant Clone technology, lose connection to Guest Introspection services

    Existing VMs using Instant Clone and new ones, created with or without Instant Clone, lose connection with the Guest Introspection host module. As a result, the VMs are not protected and no new Guest Introspection configurations can be forwarded to the ESXi host. You are also presented with a Guest introspection not ready warning in the vCenter Server UI.

    This issue is resolved in this release.

  • An ESXi host might fail with a purple screen on shutdown if IPv6 MLD is used

    An ESXi host might fail with a purple screen on shutdown if IPv6 MLD is used, because of a race condition in the TCP/IP stack.

    This issue is resolved in this release.

  • An ESXi host might become unresponsive when you reconnect it to vCenter Server

    If you disconnect your ESXi host from vCenter Server while some of the virtual machines on that host are using a LAG, your ESXi host might become unresponsive when you reconnect it to vCenter Server after recreating the same LAG on the vCenter Server side, and you might see an error such as the following:

    0x439116e1aeb0:[0x418004878a9c]LACPScheduler@#+0x3c stack: 0x417fcfa00040 0x439116e1aed0:
    [0x418003df5a26]Net_TeamScheduler@vmkernel#nover+0x7a stack: 0x43070000003c 0x439116e1af30:
    [0x4180044f5004]TeamES_Output@#+0x410 stack: 0x4302c435d958 0x439116e1afb0:
    [0x4180044e27a7]EtherswitchPortDispatch@#+0x633 stack: 0x0 

    This issue is resolved in this release.

  • An ESXi host might fail with a purple screen and a Spin count exceeded (refCount) - possible deadlock with PCPU error

    An ESXi host might fail with a purple screen and a Spin count exceeded (refCount) - possible deadlock with PCPU error, when you reboot the ESXi host under the following conditions: 

    • You use the vSphere Network Appliance (DVFilter) in an NSX environment
    • You migrate a virtual machine with vMotion under DVFilter control

    This issue is resolved in this release.

  • A Virtual Machine (VM) with e1000/e1000e vNIC might have network connectivity issues

    For a VM with e1000/e1000e vNIC, when the e1000/e1000e driver tells the e1000/e1000e vmkernel emulation to skip a descriptor (the transmit descriptor address and length are 0), a loss of network connectivity might occur.

    This issue is resolved in this release. 

  • VMkernel log includes one or more warnings Couldn't enable keep alive

    Couldn't enable keep alive warnings occur during VMware NSX and partner solutions communication through a VMCI socket (vsock). The VMkernel log now omits these repeated warnings because they can be safely ignored. 

    This issue is resolved in this release.

  • A virtual machine with a paravirtual RDMA device becomes inaccessible during a snapshot operation or a vMotion migration

    Virtual machines with a paravirtual RDMA (PVRDMA) device run RDMA applications to communicate with peer queue pairs. If an RDMA application attempts to communicate with a non-existent peer queue number, the PVRDMA device might wait for a response from the peer indefinitely. As a result, the virtual machine becomes inaccessible if the RDMA application is still running during a snapshot operation or migration.

    This issue is resolved in this release.

  • Guest VMkernel applications might receive unexpected completion entries for non-signaled fast-register work requests

    The RDMA communication between two virtual machines that reside on a host with an active RDMA uplink occasionally triggers spurious completion entries in the guest VMkernel applications. The completion entries are incorrectly triggered by non-signaled fast-register work requests that are issued by the kernel-level RDMA Upper Layer Protocol (ULP) of a guest. This can cause completion queue overflows in the kernel ULP. 

    This issue is resolved in this release.

  • When you use a paravirtual RDMA (PVRDMA) device, you might be stuck with the unavailable device in link down state until the guest OS reloads the PVRDMA guest driver

    When the PVRDMA driver is installed on a guest OS that supports a PVRDMA device, the driver might fail to load properly when the guest OS is powered on. You might be stuck with the unavailable device in link down state until you manually reload the PVRDMA driver.

    This issue is resolved in this release.

  • pNIC disconnect and connect events in the VMware NetQueue load balancer might cause a purple screen

    If a pNIC is disconnected and connected to a virtual switch, the VMware NetQueue load balancer must identify it and pause the ongoing balancing work. In some cases, the load balancer might not detect this and access wrong data structures. As a result, you might see a purple screen. 

    This issue is resolved in this release.

  • The NetQueue load balancer might cause the ESXi host to stop responding

    For latency-sensitive virtual machines, the NetQueue load balancer can try to reserve an exclusive Rx queue. If the driver provides queue preemption, the NetQueue load balancer uses it to get an exclusive queue for latency-sensitive virtual machines. The NetQueue load balancer holds a lock and executes the driver's queue-preemption callback. With some drivers, this might result in a purple screen on the ESXi host, especially if the driver implementation involves sleep mode.

    This issue is resolved in this release.

  • A USB network device integrated in Integrated Management Module (IMM) on some servers might stop responding if you reboot IMM

    On some servers, a USB network device is integrated in IMM or iLO to manage the server. When you reboot IMM by using the vSphere Web Client or an IMM or iLO command, the transaction on the USB network device is lost.

    This issue is resolved in this release.

  • The link status output of a vmnic does not reflect the real physical status

    When the physical link status of a vmnic changes, for example when the cable is unplugged or the switch port is shut down, the output generated by the esxcli command might show a wrong link status on Intel 82574L based NICs (Intel Gigabit Desktop CT/CT2). You must manually restart the NIC to get the actual link status.

    This issue is resolved in this release.
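
    To manually restart an affected NIC from the ESXi Shell, a minimal sketch follows; vmnic2 is a placeholder for the affected uplink:

    esxcli network nic down -n vmnic2   # administratively bring the NIC down
    esxcli network nic up -n vmnic2     # bring it back up to refresh the reported link status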

  • The network connection might be lost if a ntg3 driver is used with Broadcom NetXtreme I NICs 

    NICs using ntg3 driver might experience unexpected loss of connectivity. The network connection cannot be restored until you reboot the ESXi host. The devices affected are Broadcom NetXtreme I 5717, 5718, 5719, 5720, 5725 and 5727 Ethernet Adapters. The problem is related to certain malformed TSO packets sent by VMs such as (but not limited to) F5 BIG-IP Virtual Appliances.
    The ntg3 driver version 4.1.2.0 resolves this problem.

    This issue is resolved in this release.
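
    To confirm which driver and version a NIC is using, a minimal sketch from the ESXi Shell; vmnic0 is a placeholder:

    esxcli network nic get -n vmnic0       # the Driver Info section lists the driver name and version
    esxcli software vib list | grep ntg3   # shows the installed ntg3 driver VIB, if present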

  • An ESXi host might fail with a purple screen when you turn off IPv6 support

    An ESXi host might fail with a purple screen when you globally turn off IPv6 support and reboot the host.

    This issue is resolved in this release.

  • An ESXi host might stop responding when you migrate a virtual machine with Storage vMotion between ESXi 6.0 and ESXi 6.5 hosts

    The vmxnet3 device tries to access the memory of the guest OS while the guest memory preallocation is in progress during the migration of a virtual machine with Storage vMotion. This results in an invalid memory access and a failure of the ESXi 6.5 host.

    This issue is resolved in this release.

Security Issues
  • Update to the libcurl library

    The ESXi userworld libcurl library is updated to version 7.53.1.

  • Update to the NTP package

    The ESXi NTP package is updated to version 4.2.8p10.

  • Update to the OpenSSH version

    The OpenSSH version is updated to version 7.5p1.

  • The likewise stack on ESXi is not enabled to support SMBv2

    The Windows 2012 domain controller supports SMBv2, whereas the Likewise stack on ESXi supports only SMBv1. With this release, the Likewise stack on ESXi is enabled to support SMBv2.

    This issue is resolved in this release.

  • You cannot use custom ESXi SSL certificates with keys that are longer than 2048 bits

    In vSphere 6.5 the secure heartbeat feature supported adding ESXi hosts with certificates with exactly 2048-bit keys. If you try to add or replace the ESXi host certificate with a custom certificate with a key longer than 2048 bits, the host gets disconnected from vCenter Server. The log messages in vpxd.log look similar to:

    error vpxd[7FB5BFF7E700] [Originator@6876 sub=vpxCrypt opID=HeartbeatModuleStart-4b63962d] [bool VpxPublicKey::Verify(const EVP_MD*, const unsigned char*, size_t, const unsigned char*, size_t)] ERR error:04091077:rsa routines:INT_RSA_VERIFY:wrong signature length

    warning vpxd[7FB5BFF7E700] [Originator@6876 sub=Heartbeat opID=HeartbeatModuleStart-4b63962d] Failed to verify signature; host: host-42, cert: (**THUMBPRINT_REMOVED**), signature : (**RSA_SIGNATURE_REMOVED**)

    warning vpxd[7FB5BFF7E700] [Originator@6876 sub=Heartbeat opID=HeartbeatModuleStart-4b63962d] Received incorrect size for heartbeat Expected size (334) Received size (590) Host host-87

    This issue is fixed in this release.

  • Update to the libPNG library

    The libPNG library is updated to libpng-1.6.29.

  • Update to OpenSSL

    The OpenSSL package is updated to version openssl-1.0.2k.

  • Xorg driver upgrade

    The Xorg driver upgrade includes the following updates:

    • the libXfont package is updated to libXfont 1.5.1
    • the pixman library is updated to pixman 0.35.1

  • Update to the libarchive library

    The libarchive library is updated to libarchive 3.3.1.

  • Update to the SQLite3 library

    The SQLite3 library is updated to version 3.17.0.

Server Configuration
  • PCI passthru does not support devices with MMIO located above 16 TB where MPNs are wider than 32 bits 

    PCI passthru does not support devices with MMIO located above 16 TB where MPNs are wider than 32 bits. 

    This issue is resolved in this release.

  • Logging in to an ESXi host by using SSH requires password re-entry

    You are prompted for a password twice when connecting to an ESXi host through SSH if the ESXi host is upgraded from vSphere version 5.5 to 6.5 while being part of a domain.

    This issue is resolved in this release.

  • Compliance error on Security.PasswordQualityControl in the host profile section

    In the host profile section, a compliance error on Security.PasswordQualityControl is observed when the PAM password setting in the PAM password profile is different from the advanced configuration option Security.PasswordQualityControl. Because the advanced configuration option Security.PasswordQualityControl is unavailable for the host profile in this release, use the Requisite option in the Password PAM Configuration to change the password policy instead.

    This issue is resolved in this release.
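
    For reference, the advanced option itself can be inspected and set from the ESXi Shell. A minimal sketch; the policy string shown is only an illustrative example of pam_passwdqc syntax, not a recommended value:

    esxcli system settings advanced list -o /Security/PasswordQualityControl
    esxcli system settings advanced set -o /Security/PasswordQualityControl -s "retry=3 min=disabled,disabled,disabled,7,7"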

  • The vpxd service might crash, preventing the vSphere Web Client user interface from connecting to and updating the vCenter Server

    If the mandatory field in the VMODL object of the profile path is left unset, a serialization issue might occur during the answer file validation for network configuration, resulting in a vpxd service failure.

    This issue is resolved in this release.

  • The configuration of /Power/PerfBias cannot be edited

    Setting the /Power/PerfBias advanced configuration option is not available. Any attempt to set it to a value returns an error. 

    This issue is resolved in this release.
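
    For reference, advanced options of this kind are normally read and set from the ESXi Shell as shown in this minimal sketch; the value 6 is only an example and should be chosen from the range reported by the list command:

    esxcli system settings advanced list -o /Power/PerfBias      # show the current value and the allowed range
    esxcli system settings advanced set -o /Power/PerfBias -i 6  # example value only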

  • Joining any host (ESXi or vCenter Server) to an Active Directory domain in vSphere 6.5 can cause a service failure

    vSphere 6.5 does not support disjointed Active Directory domains. A disjoint namespace is a scenario in which a computer's primary domain name system (DNS) suffix does not match the DNS domain name where that computer resides.

    This issue is resolved in this release.

  • XHCI-related platform errors are reported in the ESXi VMkernel logs

    XHCI-related platform error messages, such as xHCI Host Controller USB 2.0 Control Transfer may cause IN Data to be dropped and xHCI controller Parity Error response bit set to avoid parity error on poison packet, are reported in the ESXi VMkernel logs.

    This issue is resolved in this release. 

  • An ESXi host might stop responding with no heartbeat NMI state

    An ESXi host might become unresponsive with no heartbeat NMI state on AMD machines with OHCI USB Host Controllers. 

    This issue is resolved in this release. 

Storage Issues
  • Modification of IOPS limit of virtual disks with enabled Changed Block Tracking (CBT) fails with errors in the log files

    To define the storage I/O scheduling policy for a virtual machine, you can configure the I/O throughput for each virtual machine disk by modifying the IOPS limit. When you edit the IOPS limit and CBT is enabled for the virtual machine, the operation fails with an error The scheduling parameter change failed. Due to this problem, the scheduling policies of the virtual machine cannot be altered. The error message appears in the vSphere Recent Tasks pane.

    You can see the following errors in the /var/log/vmkernel.log file:

    2016-11-30T21:01:56.788Z cpu0:136101)VSCSI: 273: handle 8194(vscsi0:0):Input values: res=0 limit=-2 bw=-1 Shares=1000
    2016-11-30T21:01:56.788Z cpu0:136101)ScsiSched: 2760: Invalid Bandwidth Cap Configuration
    2016-11-30T21:01:56.788Z cpu0:136101)WARNING: VSCSI: 337: handle 8194(vscsi0:0):Failed to invert policy

    This issue is resolved in this release.

  • When you hot-add multiple VMware Paravirtual SCSI (PVSCSI) hard disks in a single operation, only one is visible for the guest OS

    When you hot-add two or more hard disks to a VMware PVSCSI controller in a single operation, the guest OS can see only one of them.

    This issue is resolved in this release.

  • An ESXi host might fail with a purple screen

    An ESXi host might fail with a purple screen because of a race condition when multiple multipathing plugins (MPPs) try to claim paths.

    This issue is resolved in this release.

  • Reverting from an error during a storage profile change operation, results in a corrupted profile ID

    If a VVol VASA Provider returns an error during a storage profile change operation, vSphere tries to undo the operation, but the profile ID gets corrupted in the process. 

    This issue is resolved in this release.

  • Incorrect Read or Write latency displayed in vSphere Web Client for VVol datastores

    Per host Read or Write latency displayed for VVol datastores in the vSphere Web Client is incorrect. 

    This issue is resolved in this release.

  • An ESXi host might fail with a purple screen during NFSCacheGetFreeEntry

    The NFS v3 client does not properly handle a case where NFS server returns an invalid filetype as part of File attributes, which causes the ESXi host to fail with a purple screen. 

    This issue is resolved in this release.

  • You need to manually configure the SATP rule for a new Pure Storage FlashArray

    For a Pure Storage FlashArray device, you have to manually add the SATP rule to set the SATP, PSP, and IOPS. A new SATP rule is added to ESXi to set SATP to VMW_SATP_ALUA, PSP to VMW_PSP_RR, and IOPS to 1 for all Pure Storage FlashArray models.

    Note: In case of a stateless ESXi installation, if an old host profile is applied, it overwrites the new rules after upgrade.

    This issue is resolved in this release.
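
    For reference, a manually added claim rule of this kind typically looks like the following sketch; the description string is arbitrary, and the manual rule is no longer needed once this release applies the rule by default:

    esxcli storage nmp satp rule add -s VMW_SATP_ALUA -P VMW_PSP_RR -O iops=1 -V "PURE" -M "FlashArray" -e "Pure Storage FlashArray rule"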

  • The lsi_mr3 driver and hostd process might stop responding due to a memory allocation failure in ESXi 6.5

    The lsi_mr3 driver allocates memory from address space below 4GB. The vSAN disk serviceability plugin lsu-lsi-lsi-mr3-plugin and the lsi_mr3 driver communicate with each other. The driver might stop responding during the memory allocation when handling the IOCTL event from storelib. As a result, lsu-lsi-lsi-mr3-plugin might stop responding and the hostd process might also fail even after restart of hostd. 

    This issue is resolved in this release by a code change in the lsu-lsi-lsi-mr3-plugin plugin of the lsi_mr3 driver that sets a 3-second timeout for getting the device information, to avoid plugin and hostd failures.

  • When you hot-add an existing or new virtual disk to a CBT (Changed Block Tracking) enabled virtual machine (VM) residing on a VVol datastore, the guest operating system might stop responding

    When you hot-add an existing or new virtual disk to a CBT enabled VM residing on a VVol datastore, the guest operating system might stop responding until the hot-add process completes. The VM unresponsiveness depends on the size of the virtual disk being added. The VM automatically recovers once the hot-add completes.

    This issue is resolved in this release.

  • When you use vSphere Storage vMotion, the UUID of a virtual disk might change

    When you use vSphere Storage vMotion on vSphere Virtual Volumes storage, the UUID of a virtual disk might change. The UUID identifies the virtual disk and a changed UUID makes the virtual disk appear as a new and different disk. The UUID is also visible to the guest OS and might cause drives to be misidentified. 

    This issue is resolved in this release.

  • An ESXi host might stop responding if a LUN is unmapped on the storage array side

    An ESXi host might stop responding if LUNs are unmapped on the storage array side while they are connected to the ESXi host through a Broadcom/Emulex Fibre Channel adapter (lpfc driver) and have I/O running.

    This issue is resolved in this release.

  • An ESXi host might become unresponsive if the VMFS-6 volume has no space for the journal

    When a VMFS-6 volume is opened, it allocates a journal block. Upon successful allocation, a background thread is started. If there is no space on the volume for the journal, the volume is opened in read-only mode and no background thread is initiated. Any attempt to close the volume results in attempts to wake up a nonexistent thread, which causes the ESXi host to fail.

    This issue is resolved in this release.

  • An ESXi host might fail with a purple screen if the virtual machines running on it have large-capacity vRDMs and use the SPC-4 feature

    When virtual machines use the SPC-4 feature with the Get LBA Status command to query thin-provisioning features of the large vRDMs attached, the processing of this command might run for a long time in the ESXi kernel without relinquishing the CPU. The high CPU usage can cause the CPU heartbeat watchdog process to deem the process hung, and the ESXi host might stop responding.

    This issue is resolved in this release.

  • An ESXi host might fail with a purple screen if the VMFS6 datastore is mounted on multiple ESXi hosts, while the disk.vmdk has file blocks allocated from an increased portion on the same datastore

    A VMDK file might reside on a VMFS6 datastore that is mounted on multiple ESXi hosts (for example, two hosts, ESXi host1 and ESXi host2). If the VMFS6 datastore capacity is increased from ESXi host1 while it is also mounted on ESXi host2, and the disk.vmdk has file blocks allocated from the increased portion of the VMFS6 datastore from ESXi host1, then when the disk.vmdk file is accessed from ESXi host2 and file blocks are allocated to it from ESXi host2, ESXi host2 might fail with a purple screen.

    This issue is resolved in this release.

  • After installation or upgrade, certain multipathed LUNs are not visible

    If the paths to a LUN have different LUN IDs in a multipathing configuration, the LUN is not registered by PSA and is not visible to end users.

    This issue is resolved in this release.

  • A virtual machine residing on NFS datastores might fail the recompose operation through Horizon View

    The recompose operation in Horizon View might fail for desktop virtual machines residing on NFS datastores with stale NFS file handle errors, because of the way virtual disk descriptors are written to NFS datastores. 

    This issue is resolved in this release.

  • An ESXi host might fail with a purple screen because of a CPU heartbeat failure

    An ESXi host might fail with a purple screen because of a CPU heartbeat failure only if SEsparse is used for creating snapshots and clones of virtual machines. The use of SEsparse might lead to CPU lockups with the following warning message in the VMkernel logs, followed by a purple screen:

    PCPU <cpu-num>  didn't have a heartbeat for <seconds>  seconds; *may* be locked up.

    This issue is resolved in this release.

  • Disabled frequent lookups of an internal vSAN metadata directory (.upit) on Virtual Volumes datastores. This metadata directory is not applicable to Virtual Volumes

    Frequent lookups of a vSAN metadata directory (.upit) on Virtual Volumes datastores can impact their performance. The .upit directory is not applicable to Virtual Volumes datastores. The change disables lookups of the .upit directory.

    This issue is resolved in this release.

  • Performance issues on Windows Virtual Machine (VM) might occur after upgrading to VMware ESXi 6.5.0 P01 or 6.5 EP2 

    Performance issues might occur when unaligned unmap requests are received from the guest OS under certain conditions. Depending on the size and number of the unaligned unmaps, this might occur when a large number of small files (less than 1 MB in size) are deleted from the guest OS.

    This issue is resolved in this release.

  • ESXi 5.5 and 6.x hosts stop responding after running for 85 days

    ESXi 5.5 and 6.x hosts stop responding after running for 85 days. In the /var/log/vmkernel log file you see entries similar to: 

    YYYY-MM-DDTHH:MM:SS.833Z cpu58:34255)qlnativefc: vmhba2(5:0.0): Recieved a PUREX IOCB woh oo
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:34255)qlnativefc: vmhba2(5:0.0): Recieved the PUREX IOCB.
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)qlnativefc: vmhba2(5:0.0): sizeof(struct rdp_rsp_payload) = 0x88
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674qlnativefc: vmhba2(5:0.0): transceiver_codes[0] = 0x3
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)qlnativefc: vmhba2(5:0.0): transceiver_codes[0,1] = 0x3, 0x40
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)qlnativefc: vmhba2(5:0.0): Stats Mailbox successful.
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)qlnativefc: vmhba2(5:0.0): Sending the Response to the RDP packet
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674 0 1 2 3 4 5 6 7 8 9 Ah Bh Ch Dh Eh Fh
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)--------------------------------------------------------------
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 53 01 00 00 00 00 00 00 00 00 04 00 01 00 00 10
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) c0 1d 13 00 00 00 18 00 01 fc ff 00 00 00 00 20
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 00 00 00 00 88 00 00 00 b0 d6 97 3c 01 00 00 00
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 0 1 2 3 4 5 6 7 8 9 Ah Bh Ch Dh Eh Fh
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)--------------------------------------------------------------
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 02 00 00 00 00 00 00 80 00 00 00 01 00 00 00 04
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 18 00 00 00 00 01 00 00 00 00 00 0c 1e 94 86 08
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 0e 81 13 ec 0e 81 00 51 00 01 00 01 00 00 00 04
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 2c 00 04 00 00 01 00 02 00 00 00 1c 00 00 00 01
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 00 00 00 00 40 00 00 00 00 01 00 03 00 00 00 10
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)50 01 43 80 23 18 a8 89 50 01 43 80 23 18 a8 88
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 00 01 00 03 00 00 00 10 10 00 50 eb 1a da a1 8f

    This is a firmware problem caused when the Read Diagnostic Parameters (RDP) exchange between the Fibre Channel (FC) switch and the host bus adapter (HBA) fails 2048 times. The HBA adapter stops responding, and because of this the virtual machine and/or the ESXi host might fail. By default, the RDP routine is initiated by the FC switch and occurs once every hour, which results in reaching the 2048 limit in approximately 85 days.

    This issue is resolved in this release.

  • Resolved a performance drop in Intel devices with a stripe size limitation

    Some Intel devices, for example the P3700 and P3600, have a vendor-specific limitation in their firmware or hardware. Due to this limitation, any I/O delivered to the NVMe device that crosses the stripe size (or boundary) can suffer a significant performance drop. The driver resolves this problem by checking all I/Os and splitting any command that crosses the stripe boundary on the device.

    This issue is resolved in this release.

  • Removed the redundant controller reset when starting the controller

    The driver might reset the controller twice (disable, enable, disable, and then finally enable it) when the controller starts. This is a workaround for an early version of the QEMU emulator, but it might delay the display of some controllers. According to the NVMe specification, only one reset is needed, that is, disable and then enable the controller. This upgrade removes the redundant controller reset when starting the controller.

    This issue is resolved in this release.

  • An ESXi host might fail with purple screen if the virtual machine with large virtual disks uses the SPC-4 feature

    An ESXi host might stop responding and fail with purple screen with entries similar to the following as a result of a CPU lockup. 

    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]@BlueScreen: PCPU x: no heartbeat (x/x IPIs received)
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Code start: 0xxxxx VMK uptime: x:xx:xx:xx.xxx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Saved backtrace from: pcpu x Heartbeat NMI
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]MCSLockWithFlagsWork@vmkernel#nover+0xx stack: 0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]PB3_Read@esx#nover+0xx stack: 0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]PB3_AccessPBVMFS5@esx#nover+00xx stack: 0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Fil3FileOffsetToBlockAddrCommonVMFS5@esx#nover+0xx stack:0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Fil3_ResolveFileOffsetAndGetBlockTypeVMFS5@esx#nover+0xx stack:0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Fil3_GetExtentDescriptorVMFS5@esx#nover+0xx stack: 0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Fil3_ScanExtentsBounded@esx#nover+0xx stack:0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Fil3GetFileMappingAndLabelInt@esx#nover+0xx stack: 0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Fil3_FileIoctl@esx#nover+0xx stack: 0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]FSSVec_Ioctl@vmkernel#nover+0xx stack: 0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]FSS_IoctlByFH@vmkernel#nover+0xx stack: 0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSCSIFsEmulateCommand@vmkernel#nover+0xx stack: 0x0
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSCSI_FSCommand@vmkernel#nover+0xx stack: 0x1
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSCSI_IssueCommandBE@vmkernel#nover+0xx stack: 0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSCSIExecuteCommandInt@vmkernel#nover+0xx stack: 0xb298e000
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]PVSCSIVmkProcessCmd@vmkernel#nover+0xx stack: 0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]PVSCSIVmkProcessRequestRing@vmkernel#nover+0xx stack: 0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]PVSCSI_ProcessRing@vmkernel#nover+0xx stack: 0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VMMVMKCall_Call@vmkernel#nover+0xx stack: 0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VMKVMM_ArchEnterVMKernel@vmkernel#nover+0xe stack: 0x0

    This occurs if your virtual machine uses hardware version 13 and the SPC-4 feature with a large virtual disk.

    This issue is resolved in this release.

  • The Marvell Console device on the Marvell 9230 AHCI controller is not available

    According to the kernel log, the ATAPI device is exposed on one of the AHCI ports of the Marvell 9230 controller. This Marvell Console device is an interface for configuring RAID on the Marvell 9230 AHCI controller, which is used by some Marvell CLI tools.

    In the output of the esxcfg-scsidevs -l command, a host equipped with the Marvell 9230 controller cannot detect the SCSI device with the Local Marvell Processor display name.

    The information in the kernel log is:
    WARNING: vmw_ahci[XXXXXXXX]: scsiDiscover:the ATAPI device is not CD/DVD device 

    This issue is resolved in this release. 

  • SSD congestion might cause multiple virtual machines to become unresponsive

    Depending on the workload and the number of virtual machines, diskgroups on the host might go into permanent device loss (PDL) state. This causes the diskgroups to not admit further IOs, rendering them unusable until manual intervention is performed.

    This issue is resolved in this release.

  • An ESXi host might fail with purple screen when running HBR + CBT on a datastore that supports unmap

    The ESXi functionality that allows unaligned unmap requests did not account for the fact that the unmap request may occur in a non-blocking context. If the unmap request is unaligned, and the requesting context is non-blocking, it could result in a purple screen. Common unaligned unmap requests in non-blocking context typically occur in HBR environments. 

    This issue is resolved in this release.

  • An ESXi host might lose connectivity to VMFS datastore

    Due to a memory leak in the LVM module, the LVM driver might run out of memory under certain conditions, causing the ESXi host to lose access to the VMFS datastore.

    This issue is resolved in this release.

Supported Hardware Issues
  • Intel I218 NIC resets frequently in heavy traffic scenarios

    When the TSO capability is enabled in the NE1000 driver, the I218 NIC resets frequently under heavy traffic because of an I218 hardware issue. The NE1000 TSO capability for the I218 NIC is therefore disabled.

    This issue is resolved in this release.

Upgrade and Installation Issues
  • Major upgrade of a dd-image booted ESXi host to version 6.5 by using vSphere Update Manager fails

    A major upgrade of a dd-image booted ESXi host to version 6.5 by using vSphere Update Manager fails with the error Cannot execute upgrade script on host.

    This issue is resolved in this release.

  • The previous software profile version of an ESXi host is displayed after a software profile update and the software profile name is not marked Updated after an ISO upgrade

    The previous software profile version of an ESXi host is displayed in the esxcli software profile get command output after you run an esxcli software profile update command. Also, the software profile name is not marked Updated in the esxcli software profile get output after an ISO upgrade.

    This issue is resolved in this release.
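
    For reference, a typical profile-based update and verification sequence from the ESXi Shell looks like the following sketch; the depot path matches the bundle referenced elsewhere in these notes, and the profile name is a placeholder that you can look up with the first command:

    esxcli software sources profile list -d /vmfs/volumes/datastore1/update-from-esxi6.5-6.5_update01.zip   # list image profiles in the depot
    esxcli software profile update -d /vmfs/volumes/datastore1/update-from-esxi6.5-6.5_update01.zip -p <image-profile-name>
    esxcli software profile get   # verify the active image profile after the update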

  • Unable to collect vm-support bundle from an ESXi 6.5 host

    You cannot collect a vm-support bundle from an ESXi 6.5 host because, when generating logs in ESXi 6.5 by using the vSphere Web Client, the select specific logs to export text box is blank. The options network, storage, fault tolerance, hardware, and so on are blank as well. This issue occurs because the rhttpproxy port for /cgi-bin has a value different from 8303.

    This issue is resolved in this release.

  • Installation on TPM 1.2 machine hangs early during boot

    The installation of ESXi 6.5 stops responding on a system with the TPM 1.2 chip. If tbootdebug is specified as a command line parameter, the last log message is: 

    Relocating modules and starting up the kernel...
    TBOOT: **************TBOOT ******************
    TBOOT: TPM family: 1.2

    This issue is resolved in this release.

  • vSphere Update Manager fails with RuntimeError 

    A host scan operation fails with a RuntimeError in the ImageProfile module if the module contains VIBs for a specific hardware combination. The failure is caused by a code transition issue between Python 2 and Python 3.

    This issue is resolved in this release.

  • An ESXi host might fail with a purple screen because the system cannot recognize or reserve resources for the USB device

    The ESXi host might fail with a purple screen when booting on the following Oracle servers: X6-2, X5-2, and X4-2. The backtrace shows that the failure is caused by a pcidrv_alloc_resource failure, which occurs because the system cannot recognize or reserve resources for the USB device.

    This issue is resolved in this release.

vCenter Server, vSphere Web Client, and vSphere Client Issues
  • A warning message appears after you install the desktop vSphere Client for Windows, and try to connect to an ESXi host 

    The desktop vSphere Client was deprecated in vSphere 6.5 and is no longer supported. When you try to connect to an ESXi 6.5 host, the following warning message appears: The required client support files need to be retrieved from the server "0.0.0.0" and installed. Click Run the installer or Save the installer.

    In this case, use the VMware Host Client to perform host management operations.

Virtual Machine Management Issues
  • The performance counter cpu.system incorrectly shows a value of 0 (zero)

    The performance counter cpu.system is incorrectly calculated for a virtual machine. The value of the counter is always 0 (zero) and never changes, which makes it impossible to do any kind of data analysis on it.

    This issue is resolved in this release.

  • The virtual machine might become unresponsive due to active memory drop

    If the active memory of a virtual machine that runs on an ESXi host falls under 1% and drops to zero, the host might start reclaiming memory even if the host has enough free memory.

    This issue is resolved in this release.

  • Virtual Machine stops responding during snapshot consolidation

    During snapshot consolidation, a precise calculation might be performed to determine the storage space required to perform the consolidation. This precise calculation can cause the virtual machine to stop responding, because it takes a long time to complete. 

    This issue is resolved in this release.

  • Wrong NUMA placement of a preallocated virtual machine leads to sub-optimal performance

    A preallocated VM allocates all its memory at power-on time. However, the scheduler might pick a wrong NUMA node for this initial allocation. In particular, the numa.nodeAffinity vmx option might not be honored. The guest OS might see a performance degradation because of this.

    Virtual machines configured with latency sensitivity set to high or with a passthrough device are typically preallocated.

    This issue is resolved in this release.

  • Virtual machine might become unresponsive

    When you take a snapshot of a virtual machine, the virtual machine might become unresponsive. 

    This issue is resolved in this release.

  • vSphere Storage vMotion might fail with an error message if it takes more than 5 minutes

    The destination virtual machine of the vSphere Storage vMotion is incorrectly stopped by a periodic configuration validation for the virtual machine. A vSphere Storage vMotion migration that takes more than 5 minutes fails with the message The source detected that the destination failed to resume.
    The VMkernel log from the ESXi host contains the message D: Migration cleanup initiated, the VMX has exited unexpectedly. Check the VMX log for more details

    This issue is resolved in this release. 

  • The virtual machine management stack does not handle correctly a backing file that is specified as a generic FileBackingInfo and results in the virtual machine not being reconfigured properly

    Attaching a disk in a different folder than the virtual machine's folder while it is powered on might fail, if it is initiated by using the vSphere API directly with a ConfigSpec that specifies the disk backing file using the generic vim.vm.Device.VirtualDevice.FileBackingInfo class, instead of the disk type specific backing class, such as vim.vm.Device.VirtualDisk.FlatVer2BackingInfo, vim.vm.Device.VirtualDisk.SeSparseBackingInfo.

    This issue is resolved in this release.

  • A vMotion migration of a Virtual Machine (VM) gets suspended for some time, and further fails with timeout

    If a virtual machine has a driver (especially a graphics driver) or an application that pins too much memory, it creates a sticky page in the VM. When such a VM is about to be migrated with vMotion to another host, the migration process is suspended and later fails because of an incorrect pending I/O calculation.

    This issue is resolved in this release.

  • A reconfigure operation of a powered-on virtual machine that sets an extraConfig option with an integer value might fail with SystemError

    The reconfigure operation stops responding with SystemError under the following conditions: 

    • the virtual machine is powered on 
    • the ConfigSpec includes an extraConfig option with an integer value 


    The SystemError is triggered by a TypeMismatchException, which can be seen in the hostd log on the ESXi host with the message: 

    Unexpected exception during reconfigure: (vim.vm.ConfigSpec) { } Type Mismatch: expected: N5Vmomi9PrimitiveISsEE, found: N5Vmomi9PrimitiveIiEE. 

    This issue is resolved in this release. 

  • Digest VMDK files are not deleted from the VM folder when you delete a VM

    When you create a linked clone from a digest VMDK file, vCenter Server marks the digest disk file as non-deletable. Thus, when you delete the respective VM, the digest VMDK file is not deleted from the VM folder because of the ddb.deletable = FALSE ddb entry in the descriptor file.

    This issue is resolved in this release.

Virtual Volumes Issues
  • Non-Latin characters might be displayed incorrectly in VM storage profile names

    UTF-8 characters are not handled properly before being passed on to a VVol VASA Provider. As a result, VM storage profiles that use international characters are either not recognized or are treated or displayed incorrectly by the VASA Provider.

    This issue is resolved in this release.

VMware HA and Fault Tolerance Issues
  • vSphere Guest Application Monitoring SDK fails for VMs with vSphere Fault Tolerance enabled

    When vSphere FT is enabled on a vSphere HA-protected VM where the vSphere Guest Application Monitor is installed, the vSphere Guest Application Monitoring SDK might fail.

    This issue is resolved in this release.

  • An ESXi host might fail with purple screen when a Fault Tolerance Secondary virtual machine (VM) fails to power on and the host runs out of memory

    When a virtual machine has Fault Tolerance enabled and its Secondary VM is powered on on an ESXi host with insufficient memory, the Secondary VM cannot power on and the ESXi host might fail with a purple screen.

    This issue is resolved in this release.

VMware Tools
  • The guestinfo.toolsInstallErrCode variable is not cleared on Guest OS reboot when installing VMware Tools

    If installation of VMware Tools requires a reboot to complete, the guest variable guestinfo.toolsInstallErrCode is set to 1603. The variable is not cleared by rebooting the Guest OS.

    This issue is resolved in this release.

  • VMware Tools version 10.1.7 included

    This release includes the VMware Tools version 10.1.7. Refer to the VMware Tools 10.1.7 Release Notes for further details.

vSAN Issues
  • An ESXi host fails with purple diagnostic screen when mounting a vSAN disk group

    Due to an internal race condition in vSAN, an ESXi host might fail with a purple diagnostic screen when you attempt to mount a vSAN disk group.

    This issue is resolved in this release.

  • Using objtool on a vSAN witness host causes an ESXi host to fail with a purple diagnostic screen

    If you use objtool on a vSAN witness host, it performs an I/O control (ioctl) call that leads to a NULL pointer dereference in the ESXi host, and the host crashes.

    This issue is resolved in this release.

  • Hosts in a vSAN cluster have high congestion which leads to host disconnects

    When vSAN components with invalid metadata are encountered while an ESXi host is booting, a leak of reference counts to SSD blocks can occur. If these components are removed by policy change, disk decommission, or other method, the leaked reference counts cause the next I/O to the SSD block to get stuck. The log files can build up, which causes high congestion and host disconnects. 

    This issue is resolved in this release.

  • Cannot enable vSAN or add an ESXi host into a vSAN cluster due to corrupted disks

    When you enable vSAN or add a host to a vSAN cluster, the operation might fail if there are corrupted storage devices on the host. Python zdumps are present on the host after the operation, and the vdq -q command fails with a core dump on the affected host.

    This issue is resolved in this release.

  • vSAN Configuration Assist issues a physical NIC warning for lack of redundancy when LAG is configured as the active uplink

    When the uplink port is a member of a Link Aggregation Group (LAG), the LAG provides redundancy. If the Uplink port number is 1, vSAN Configuration Assist issues a warning that the physical NIC lacks redundancy. 

    This issue is resolved in this release.

  • vSAN cluster becomes partitioned after the member hosts and vCenter Server reboot

    If the hosts in a unicast vSAN cluster and the vCenter Server are rebooted at the same time, the cluster might become partitioned. The vCenter Server does not properly handle unstable vpxd property updates during a simultaneous reboot of hosts and vCenter Server. 

    This issue is resolved in this release.

  • An ESXi host fails with a purple diagnostic screen due to incorrect adjustment of read cache quota

    The vSAN mechanism that controls read cache quota might make incorrect adjustments that result in a host failure with a purple diagnostic screen.

    This issue is resolved in this release.

  • Large File System overhead reported by the vSAN capacity monitor

    When deduplication and compression are enabled on a vSAN cluster, the Used Capacity Breakdown (Monitor > vSAN > Capacity) incorrectly displays the percentage of storage capacity used for file system overhead. This number does not reflect the actual capacity being used for file system activities. The display now correctly reflects the file system overhead for a vSAN cluster with deduplication and compression enabled.

    This issue is resolved in this release.

  • vSAN health check reports CLOMD liveness issue due to swap objects with size of 0 bytes

    If a vSAN cluster has objects with size of 0 bytes, and those objects have any components in need of repair, CLOMD might crash. The CLOMD log in /var/run/log/clomd.log might display logs similar to the following: 

    2017-04-19T03:59:32.403Z 120360 (482850097440)(opID:1804289387)CLOMProcessWorkItem: Op REPAIR starts:1804289387
    2017-04-19T03:59:32.403Z 120360 (482850097440)(opID:1804289387)CLOMReconfigure: Reconfiguring ae9cf658-cd5e-dbd4-668d-020010a45c75 workItem type REPAIR

    2017-04-19T03:59:32.408Z 120360 (482850097440)(opID:1804289387)CLOMReplacementPreWorkRepair: Repair needed. 1 absent/degraded data components for ae9cf658-cd5e-dbd4-668d-020010a45c75 found

    The vSAN health check reports a CLOMD liveness issue. Each time CLOMD is restarted it crashes while attempting to repair the affected object. Swap objects are the only vSAN objects that can have size of zero bytes.

    This issue is resolved in this release.

  • vSphere API FileManager.DeleteDatastoreFile_Task fails to delete DOM objects in vSAN

    If you delete VMDKs from the vSAN datastore using the FileManager.DeleteDatastoreFile_Task API, through the file browser or SDK scripts (see the sketch after this item), the underlying DOM objects are not deleted.

    These objects can build up over time and take up space on the vSAN datastore.

    This issue is resolved in this release.
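
    For reference, the following is a minimal pyVmomi sketch of how such a deletion is typically scripted. The host name, credentials, and datastore path are placeholders, and the script assumes the first datacenter in the inventory is the target:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask

    # Placeholder connection details; replace with real values.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        # Assumes the first child of the root folder is the target datacenter.
        dc = content.rootFolder.childEntity[0]
        # Delete a VMDK on the vSAN datastore through the vSphere API.
        task = content.fileManager.DeleteDatastoreFile_Task(
            name="[vsanDatastore] exampleVM/exampleVM.vmdk", datacenter=dc)
        WaitForTask(task)
    finally:
        Disconnect(si)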

  • A host in a vSAN cluster fails with a purple diagnostic screen due to internal race condition

    When a host in a vSAN cluster reboots, a race condition might occur between PLOG relog code and vSAN device discovery code. This condition can corrupt memory tables and cause the ESXi host to fail and display a purple diagnostic screen.  

    This issue is resolved in this release.

vSphere Command-Line Interface
  • You cannot change some ESXi advanced settings through the vSphere Web Client or esxcli commands

    You cannot change some ESXi advanced settings, such as /Net/NetPktSlabFreePercentThreshold, because of an incorrect default value. This problem is resolved by changing the default value. An example of checking and setting the option follows this item.

    This issue is resolved in this release.
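
    To inspect or adjust the option, you can use the standard advanced-settings commands, as in this minimal sketch; <value> is a placeholder for the integer value you want to set:

    esxcli system settings advanced list -o /Net/NetPktSlabFreePercentThreshold
    esxcli system settings advanced set -o /Net/NetPktSlabFreePercentThreshold -i <value>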

Known Issues

The known issues are grouped as follows.

Installation Issues
  • The installation of an unsigned VIB with the --no-sig-check option on a secure boot enabled ESXi host might fail

    Installing unsigned VIBs on an ESXi host with secure boot enabled is prohibited, because unsigned VIBs would prevent the system from booting. The VIB signature check is mandatory on an ESXi host with secure boot enabled.

    Workaround: Use only signed VIBs on ESXi hosts with secure boot enabled.

  • Attempts to install or upgrade an ESXi host with ESXCLI or vSphere PowerCLI commands might fail for esx-base, vsan and vsanhealth VIBs

    Starting with ESXi 6.5 Update 1, there is a dependency between the esx-tboot VIB and the esx-base VIB, and you must also include the esx-tboot VIB in the vib update command for a successful installation or upgrade of ESXi hosts.

    Workaround: Also include the esx-tboot VIB in the vib update command. For example:

    esxcli software vib update -n esx-base -n vsan -n vsanhealth -n esx-tboot -d /vmfs/volumes/datastore1/update-from-esxi6.5-6.5_update01.zip
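
    To confirm that the offline bundle actually contains the esx-tboot VIB before you run the update, you can list the contents of the depot. This is a usage sketch that assumes the same bundle path as in the example above:

    esxcli software sources vib list -d /vmfs/volumes/datastore1/update-from-esxi6.5-6.5_update01.zip
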
  • Remediation against an ESXi 6.5 Update 1 baseline might fail on an ESXi host with secure boot enabled

    If you use vSphere Update Manager to upgrade an ESXi host with an upgrade baseline that contains an ESXi 6.5 Update 1 image, the upgrade might fail if the host has secure boot enabled.

    Workaround: You can use a vSphere Update Manager patch baseline instead of a host upgrade baseline.

Miscellaneous Issues
  • Changes in BIOS, BMC, and PLSA firmware versions are not displayed on the Direct Console User Interface (DCUI)

    The DCUI provides information about these hardware versions, but when the versions change, the DCUI remains blank.

    Workaround: Start the WBEM services with the following command: esxcli system wbem set --enable true
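
    To verify that the WBEM services are enabled after running the workaround, you can query the current WBEM configuration; a minimal check:

    esxcli system wbem get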

Networking Issues
  • Intel i40en driver does not allow you to disable the hardware stripping of the RX VLAN tag

    With the Intel i40en driver, the hardware always strips the RX VLAN tag, and you cannot disable this behavior by using the following vsish command:

    vsish -e set /net/pNics/vmnicX/hwCapabilities/CAP_VLAN_RX 0

    Workaround: None.
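
    Although there is no workaround, you can read back the current state of the capability to confirm that hardware VLAN stripping remains enabled. In this sketch, vmnicX stands for the actual NIC name, as in the command above:

    vsish -e get /net/pNics/vmnicX/hwCapabilities/CAP_VLAN_RX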

  • Adding a physical uplink to a virtual switch in the VMware Host Client might fail

    Adding a physical network adapter, or uplink, to a virtual switch in the VMware Host Client might fail if you select Networking > Virtual switches > Add uplink.

    Workaround:

    1. Right-click the virtual switch that you want to edit and click Edit Settings.
    2. Click Add uplink to add a new physical uplink to the virtual switch.
    3. Click Save.

Storage Issues
  • VMFS datastores on ATS-only array devices might not be available after an ESXi host reboot

    If the target connected to the ESXi host supports only implicit ALUA and has only standby paths, the device is registered, but the device attributes related to media access are not populated. If an active path is added after registration, it can take up to 5 minutes for the VAAI attributes to refresh. As a result, VMFS volumes configured as ATS-only might fail to mount until the VAAI attributes are updated.

    Workaround: If the target supports only implicit ALUA and has only standby paths, enable the FailDiskRegistration configuration option on the host by using the following ESXCLI command: esxcli system settings advanced set -o /Disk/FailDiskRegistration -i 1

    The option takes effect only after you set it and reboot the host. It delays the registration of such devices until an active path is detected. You can verify the setting as shown below.
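
    To check the current value of the option after the host reboot, a minimal query:

    esxcli system settings advanced list -o /Disk/FailDiskRegistration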
     

  • LED management commands might fail when you turn LEDs on or off in HBA mode

    When you run the following commands to turn LEDs on or off

    esxcli storage core device set -l locator -d <device id>
    esxcli storage core device set -l error -d <device id>
    esxcli storage core device set -l off -d <device id>

    they might fail in HBA mode on some HP Smart Array controllers, for example, the P440ar and the HP H240. In addition, the controller might stop responding, which causes the following management commands to fail:

    LED management:
    esxcli storage core device set -l locator -d <device id>
    esxcli storage core device set -l error -d <device id>
    esxcli storage core device set -l off -d <device id>

    Get disk location:
    esxcli storage core device physical get -d <device id>

    This problem is firmware-specific and is triggered only by LED management commands in HBA mode. The issue does not occur in RAID mode.

    Workaround: Retry the management command until it succeeds.

  • The vSAN disk serviceability plugin lsu-lsi-lsi-mr3-plugin for the lsi_mr3 driver might fail with a Can not get device info ... or not well-formed ... error

    The vSAN disk serviceability plugin provides extended management and information support for disks. In vSphere 6.5 Update 1, the following commands:

    Get disk location:
    esxcli storage core device physical get -d <device UID> for a JBOD mode disk.
    esxcli storage core device raid list -d <device UID> for a RAID mode disk.

    LED management:
    esxcli storage core device set --led-state=locator --led-duration=<seconds> --device=<device UID>
    esxcli storage core device set --led-state=error --led-duration=<seconds> --device=<device UID>
    esxcli storage core device set --led-state=off --device=<device UID>

    might fail with one of the following errors:
    Plugin lsu-lsi-mr3-plugin cannot get information for device with name <NAA ID>.
    The error is: Can not get device info ... or not well-formed (invalid token): ...

    Workaround: Use localcli instead of esxcli on the host that experiences this issue. The localcli command produces the correct output, while esxcli might fail randomly. An example follows.
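
    For example, to retrieve the disk location, you can run the equivalent localcli command directly on the affected host; localcli uses the same namespaces and options as esxcli but bypasses hostd:

    localcli storage core device physical get -d <device UID>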

Virtual Machine Management Issues
  • VMware Host Client runs actions on all virtual machines on an ESXi host instead of on a range selected with Search

    When you use Search to select a range of virtual machines in the VMware Host Client, if you select the check box to select all and then run an action such as power off, power on, or delete, the action might affect all virtual machines on the host instead of only your selection.

    Workaround: You must select the virtual machines individually.

Known Issues from earlier releases

To view a list of previous known issues, click here.
