ESXi 6.5 Update 2 | 3 MAY 2018 | ISO Build 8294253

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

What's New

The ESXi 6.5 Update 2 release includes the following list of new features.

  • ESXi 6.5 Update 2 enables Microsemi Smart PQI (smartpqi) driver support for the HPE ProLiant Gen10 Smart Array Controller.
  • Updates to the nhpsa driver: The disk serviceability plug-in of the ESXi native driver for HPE Smart Array controllers, nhpsa, now works with an extended list of expander devices to enable compatibility with HPE Gen9 configurations.
  • Updates to the nvme driver: ESXi 6.5 Update 2 adds management of multiple namespaces compatible with the Non-Volatile Memory Express (NVMe) 1.2 specification and enhanced diagnostic log.
  • With ESXi 6.5 Update 2, you can add tags to the Trusted Platform Module (TPM) hardware version 1.2 on ESXi by using ESXCLI commands. A new API provides read and write access in the form of get, set, and clear commands to the non-volatile memory on TPMs of version 1.2.
  • ESXi 6.5 Update 2 enables a new native driver to support the Broadcom SAS 3.5 IT/IR controllers with devices including a combination of NVMe, SAS, and SATA drives. The HBA device IDs are: SAS3516(00AA), SAS3516_1(00AB), SAS3416(00AC), SAS3508(00AD), SAS3508_1(00AE), SAS3408(00AF), SAS3716(00D0), SAS3616(00D1), SAS3708(00D2).
  • In ESXi 6.5 Update 2, LightPulse and OneConnect adapters are supported by separate default drivers. The brcmfcoe driver supports OneConnect adapters and the lpfc driver supports only LightPulse adapters. Previously, the lpfc driver supported both OneConnect and LightPulse adapters.
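
After you apply the update, one way to confirm that the new driver VIBs described above, such as smartpqi, lsi-msgpt35, lpfc, and brcmfcoe, are present on a host is to list the installed VIBs from the ESXi Shell. This is only a quick check, and the exact set of VIBs depends on your image:

  esxcli software vib list | grep -E 'smartpqi|lsi-msgpt35|lpfc|brcmfcoe'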

Earlier Releases of ESXi 6.5

Features and known issues of ESXi 6.5 are described in the release notes for each release. Release notes for earlier releases of ESXi 6.5 are:

For compatibility, installation and upgrades, product support notices, and features, see the VMware vSphere 6.5 Release Notes.

Internationalization

VMware vSphere 6.5 is available in the following languages:

  • English
  • French
  • German
  • Spanish
  • Japanese
  • Korean
  • Simplified Chinese
  • Traditional Chinese

Components of VMware vSphere 6.5, including vCenter Server, ESXi, the vSphere Web Client, and the vSphere Client, do not accept non-ASCII input.

Compatibility

ESXi, vCenter Server, and vSphere Web Client Version Compatibility

The VMware Product Interoperability Matrix provides details about the compatibility of current and earlier versions of VMware vSphere components, including ESXi, VMware vCenter Server, the vSphere Web Client, and optional VMware products. Check the VMware Product Interoperability Matrix also for information about supported management and backup agents before you install ESXi or vCenter Server.

The vSphere Web Client and vSphere Client are packaged with vCenter Server.

Hardware Compatibility for ESXi

To view a list of processors, storage devices, SAN arrays, and I/O devices that are compatible with vSphere 6.5 Update 2, use the ESXi 6.5 information in the VMware Compatibility Guide.

Device Compatibility for ESXi

To determine which devices are compatible with ESXi 6.5, use the ESXi 6.5 information in the VMware Compatibility Guide.

Guest Operating System Compatibility for ESXi

To determine which guest operating systems are compatible with vSphere 6.5, use the ESXi 6.5 information in the VMware Compatibility Guide.

Virtual Machine Compatibility for ESXi

Virtual machines that are compatible with ESX 3.x and later (hardware version 4) are supported with ESXi 6.5. Virtual machines that are compatible with ESX 2.x and later (hardware version 3) are not supported. To use such virtual machines on ESXi 6.5, upgrade the virtual machine compatibility.
See the vSphere Upgrade documentation.

Product and Support Notices

Important: Upgrade paths from ESXi 6.5 Update 2 to ESXi 6.7 are not supported. 

Patches Contained in this Release

This release contains all bulletins for ESXi that were released before the release date of this product. See the My VMware page for more information about the individual bulletins.

Details

Download Filename: update-from-esxi6.5-6.5_update02.zip
Build: 8294253 (Security-only build: 8285314)
Download Size: 486.6 MB
md5sum: 45a251b8c8d80289f678041d5f1744a3
sha1checksum: fa4aa02e4bf06babca214be2dcd61facbd42c493
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes

For more information, see VMware ESXi 6.5, Patch Release ESXi-6.5.0-update02.
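
Before you apply the bundle, you can verify the download against the checksums listed above, for example from a Linux workstation (use the equivalent checksum tools on other systems):

  md5sum update-from-esxi6.5-6.5_update02.zip
  sha1sum update-from-esxi6.5-6.5_update02.zip

The output must match the md5sum and sha1checksum values in the Details section.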

Bulletins

This release contains general and security-only bulletins. Security-only bulletins are applicable to new security fixes only. No new bug fixes are included, but bug fixes from earlier patch and update releases are included.
If the installation of all new security and bug fixes is required, you must apply all bulletins in this release. In some cases, the general release bulletin supersedes the security-only bulletin. This is not an issue, because the general release bulletin contains both the new security and bug fixes.
The security-only bulletins are identified by bulletin IDs that end in "SG". For information on patch and update classification, see KB 2014447.
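
Before you apply the bulletins, you can check which image profile and VIB versions are currently installed on a host, for example from the ESXi Shell:

  esxcli software profile get
  esxcli software vib list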

Bulletin ID Category Severity Knowledge Base Article
ESXi650-201805201-UG Bugfix Critical 53067
ESXi650-201805202-UG Bugfix Important 53068
ESXi650-201805203-UG Bugfix Important 53069
ESXi650-201805204-UG Bugfix Important 53070
ESXi650-201805205-UG Enhancement Important 53071
ESXi650-201805206-UG Enhancement Important 53072
ESXi650-201805207-UG Enhancement Important 53073
ESXi650-201805208-UG Enhancement Important 53074
ESXi650-201805209-UG Enhancement Important 53075
ESXi650-201805210-UG Enhancement Important 53076
ESXi650-201805211-UG Enhancement Important 53077
ESXi650-201805212-UG Bugfix Important 53078
ESXi650-201805213-UG Enhancement Important 53079
ESXi650-201805214-UG Bugfix Important 53080
ESXi650-201805215-UG Bugfix Important 53081
ESXi650-201805216-UG Bugfix Important 53082
ESXi650-201805217-UG Bugfix Important 53083
ESXi650-201805218-UG Enhancement Important 53084
ESXi650-201805219-UG Bugfix Important 53085
ESXi650-201805220-UG Bugfix Important 53086
ESXi650-201805221-UG Bugfix Moderate 53087
ESXi650-201805222-UG Bugfix Moderate 53088
ESXi650-201805223-UG Bugfix Important 53089
ESXi650-201805101-SG Security Important 53092
ESXi650-201805102-SG Security Important 53093
ESXi650-201805103-SG Security Important 53094

Image Profiles

VMware patch and update releases contain general and critical image profiles.

Application of the general release image profile applies to new bug fixes.

Image Profile Name Knowledge Base Article
ESXi-6.5.0-20180502001-standard 53090
ESXi-6.5.0-20180502001-no-tools 53091
ESXi-6.5.0-20180501001s-standard 53095
ESXi-6.5.0-20180501001s-no-tools 53096
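
For example, a host can be updated to the general release image profile directly from the offline bundle with a command similar to the following. The datastore path is only an example and depends on where you upload the bundle:

  esxcli software profile update -d /vmfs/volumes/datastore1/update-from-esxi6.5-6.5_update02.zip -p ESXi-6.5.0-20180502001-standard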

Resolved Issues

The resolved issues are grouped as follows.

ESXi650-201805201-UG
Patch Category Bugfix
Patch Severity Critical
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMware_bootbank_vsan_6.5.0-2.50.8064065
  • VMware_bootbank_vsanhealth_6.5.0-2.50.8143339
  • VMware_bootbank_esx-tboot_6.5.0-2.50.8294253
  • VMware_bootbank_esx-base_6.5.0-2.50.8294253
PRs Fixed  1882261, 1907025, 1911721, 1913198, 1913483, 1919300, 1923173, 1937646, 1947334, 1950336, 1950800, 1954364, 1958600, 1962224, 1962739, 1971168, 1972487, 1975787, 1977715, 1982291, 1983123, 1986020, 1987996, 1988248, 1995640, 1996632, 1997995, 1998437, 1999439, 2004751, 2006929, 2007700, 2008663, 2011392, 2012602, 2015132, 2016411, 2016413, 2018046, 2020615, 2020785, 2021819, 2023740, 2023766, 2023858, 2024647, 2028736, 2029476, 2032048, 2033471, 2034452, 2034625, 2034772, 2034800, 2034930, 2039373, 2039896, 2046087, 2048702, 2051775, 2052300, 2053426, 2054940, 2055746, 2057189, 2057918, 2058697, 2059300, 2060589, 2061175, 2064616, 2065000, 1491595, 1994373
Related CVE numbers N/A

This patch updates esx-base, esx-tboot, vsan, and vsanhealth VIBs to resolve the following issues:

  • PR 1913483: Third-party storage array user interfaces might not properly display relationships between virtual machine disks and virtual machines

    Third-party storage array user interfaces might not properly display relationships between virtual machine disks and virtual machines when one disk is attached to many virtual machines, due to insufficient metadata.

    This issue is resolved in this release.

  • PR 1975787: An ESXi support bundle generated by using the VMware Host Client might not contain all log files

    Sometimes, when you generate a support bundle for an ESXi host by using the Host Client, the support bundle might not contain all log files.

    This issue is resolved in this release.

  • PR 1947334: Attempts to modify the vSAN iSCSI parameter VSAN-iSCSI.ctlWorkerThreads fail

    You cannot modify the vSAN iSCSI parameter VSAN-iSCSI.ctlWorkerThreads, because the parameter is designed as an internal parameter and is not subject to change. This fix hides the internal parameters of vSAN iSCSI to avoid confusion.

    This issue is resolved in this release.

  • PR 1919300: Host Profile compliance check might show Unknown status if a profile is imported from a different vCenter Server system

    If you export a Host Profile from one vCenter Server system and import it to another vCenter Server system, compliance checks might show a status of Unknown.

    This issue is resolved in this release.

  • PR 2007700: The advanced setting VMFS.UnresolvedVolumeLiveCheck might be inactive in the vSphere Web Client

    In the vSphere Web Client, the option to enable or disable checks during unresolved volume queries, VMFS.UnresolvedVolumeLiveCheck under Advanced Settings > VMFS, might be grayed out.

    This issue is resolved in this release. You can also work around it by using the vSphere Client or by setting the option directly on the ESXi host.
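
    As a workaround on the host itself, the option can typically be set with the esxcli advanced settings commands. The path /VMFS/UnresolvedVolumeLiveCheck is derived from the Advanced Settings group and option name shown above, so verify it with the list command before setting a value:

      esxcli system settings advanced list -o /VMFS/UnresolvedVolumeLiveCheck
      esxcli system settings advanced set -o /VMFS/UnresolvedVolumeLiveCheck -i 1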

  • PR 1983123: ESXi hosts with scratch partitions located on a vSAN datastore might intermittently become unresponsive

    If the scratch partition of an ESXi host is located on a vSAN datastore, the host might become unresponsive, because when the datastore is temporarily inaccessible, userworld daemons such as hostd might be blocked while trying to access it. With this fix, scratch partitions can no longer be configured on a vSAN datastore through the API or the user interface.

    This issue is resolved in this release.

  • PR 1950336: Hostd might stop responding when attempting to reconfigure a virtual machine

    Hostd might stop responding when you attempt to reconfigure virtual machines. The issue is caused by configuration entries with an unset value in the ConfigSpec object, which encapsulates the configuration settings for creating and configuring virtual machines.

    This issue is resolved in this release.

  • PR 2006929: VMware vSphere vMotion might fail due to timeout and inaccessible virtual machines

    vSphere vMotion might fail due to a timeout, because virtual machines might need a larger buffer to store the shared memory page details used by 3D or graphics devices. The issue is observed mainly on virtual machines running graphics-intensive loads or working in virtual desktop infrastructure environments. With this fix, the buffer size is increased by four times for 3D virtual machines, and if the increased capacity is still not enough, the virtual machine is powered off.

    This issue is resolved in this release.

  • PR 1958600: The netdump IP address might be lost after an upgrade from ESXi 5.5 Update 3b to ESXi 6.0 Update 3 or ESXi 6.5 Update 2

    The netdump IP address might be lost after you upgrade from ESXi 5.5 Update 3b to ESXi 6.0 Update 3 or ESXi 6.5 Update 2.

    This issue is resolved in this release.

  • PR 1971168: ESXi system identification information in the vSphere Web Client might not be available

    System identification information consists of asset tags, service tags, and OEM strings. In earlier releases, this information comes from the Common Information Model (CIM) service. Starting with vSphere 6.5, the CIM service is turned off by default unless any third-party CIM providers are installed, which prevents ESXi system identification information from displaying in the vSphere Web Client.

    This issue is resolved in this release.

  • PR 2020785: Virtual machines created by VMware Integrated OpenStack might not work properly

    When you create virtual machine disks with the VMware Integrated OpenStack block storage service (Cinder), the disks get random UUIDs. In around 0.5% of the cases, the virtual machines do not recognize these UUIDs, which results in unexpected behavior of the virtual machines.

    This issue is resolved in this release.

  • PR 2004751: A virtual machine might fail to boot after an upgrade to macOS 10.13

    When you upgrade a virtual machine to macOS 10.13, the virtual machine might fail to locate a bootable operating system. If the macOS 10.13 installer determines that the system volume is located on a Solid State Disk (SSD), during the OS upgrade, the installer converts the Hierarchical File System Extended (HFS+/HFSX) to the new Apple File System (APFS) format. As a result, the virtual machine might fail to boot.

    This issue is resolved in this release.

  • PR 2015132: You might not be able to apply a Host Profile

    You might not be able to apply a host profile if the host profile contains a DNS configuration for a VMkernel network adapter that is associated with a vSphere Distributed Switch.

    This issue is resolved in this release.

  • PR 2011392: An ESXi host might fail with a purple diagnostic screen due to a deadlock in vSphere Distributed Switch health checks

    If you enable VLAN and maximum transmission unit (MTU) checks in vSphere Distributed Switches, your ESXi hosts might fail with a purple diagnostic screen due to a possible deadlock.  

    This issue is resolved in this release.

  • PR 2004948: Update to the nhpsa driver

    The nhpsa driver is updated to 2.0.22-1vmw.

  • PR 1907025: An ESXi host might fail with an error Unexpected message type

    An ESXi host might fail with a purple diagnostic screen due to invalid network input in the error message handling of vSphere Fault Tolerance, which results in an Unexpected message type error.

    This issue is resolved in this release.

  • PR 1962224: Virtual machines might fail with an error Unexpected FT failover due to PANIC

    When you run vSphere Fault Tolerance in vSphere 6.5 and vSphere 6.0, virtual machines might fail with an error such as Unexpected FT failover due to PANIC error: VERIFY bora/vmcore/vmx/main/monitorAction.c:598. The error occurs when virtual machines use PVSCSI disks and multiple virtual machines using vSphere FT are running on the same host.

    This issue is resolved in this release.

  • PR 1988248: An ESXi host might fail with a purple diagnostic screen due to a filesystem journal allocation issue

    An ESXi host might fail with a purple diagnostic screen if the filesystem journal allocation fails while opening a volume due to space limitation or I/O timeout, and the volume opens as read only.

    This issue is resolved in this release.

  • PR 2024647: An ESXi host might fail with a purple diagnostic screen and an error message when you try to retrieve information about a multicast group that contains more than 128 IP addresses

    When you add more than 128 multicast IP addresses to an ESXi host and try to retrieve information about the multicast group, the ESXi host might fail with a purple diagnostic screen and an error message such as: PANIC bora/vmkernel/main/dlmalloc.c:4924 - Usage error in dlmalloc.

    This issue is resolved in this release.

  • PR 2039373: Hostd might fail due to a high rate of Map Disk Region tasks generated by a backup solution

    Backup solutions might produce a high rate of Map Disk Region tasks, causing hostd to run out of memory and fail.

    This issue is resolved in this release.

  • PR 1972487: An ESXi host might fail with a purple diagnostic screen due to a rare race condition

    An ESXi host might fail with a purple diagnostic screen due to a rare race condition between file allocation paths and volume extension operations.

    This issue is resolved in this release.

  • PR 1913198: NFS 4.1 mounts might fail when using AUTH_SYS, KRB5 or KRB5i authentication

    If you use AUTH_SYS, KRB5 or KRB5i authentication to mount NFS 4.1 shares, the mount might fail. This happens when the Generic Security Services type is not the same value for all the NFS requests generated during the mount session. Some hosts, which are less tolerant of such inconsistencies, fail the mount requests.

    This issue is resolved in this release.

  • PR 1882261: ESXi hosts might report wrong esxcli storage core device list data

    For some devices, such as CD-ROM drives, where the maximum queue depth is less than the default value of the No of outstanding IOs with competing worlds setting, the command esxcli storage core device list might show the maximum number of outstanding I/O requests exceeding the maximum queue depth. With this fix, the maximum value of No of outstanding IOs with competing worlds is restricted to the maximum queue depth of the storage device on which you make the change.
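
    For reference, the queue depth and the current No of outstanding IOs with competing worlds value can be inspected, and the value adjusted per device, with commands similar to the following. The device identifier is a placeholder:

      esxcli storage core device list -d <device_id>
      esxcli storage core device set -d <device_id> -O 32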

    This issue is resolved in this release.

  • PR 1824576: ESXi hosts might lose connectivity during snapshot scalability testing

    The Small-Footprint CIM Broker (SFCB) might fail during large-scale snapshot operations, with a core dump file similar to sfcb-vmware_bas-zdump.XXX in /var/core/, and cause ESXi hosts to become unresponsive.

    This issue is resolved in this release. 

  • PR 2032048: An ESXi host might fail with an error due to invalid keyboard input

    An ESXi host might fail with an error kbdmode_set:519: invalid keyboard mode 4: Not supported due to an invalid keyboard input during the shutdown process.

    This issue is resolved in this release.

  • PR 1937646: The sfcb-hhrc process might stop working, causing the hardware monitoring service on the ESXi host to become unavailable

    The sfcb-hhrc process might stop working, causing the hardware monitoring service sfcbd on the ESXi host to become unavailable.

    This issue is resolved in this release.

  • PR 2016411: When you add an ESXi host to an Active Directory domain, hostd might become unresponsive and the ESXi host disconnects from vCenter Server

    When you add an ESXi host to an Active Directory domain, hostd might become unresponsive because the Likewise services run out of memory, and the ESXi host disconnects from the vCenter Server system.

    This issue is resolved in this release.

  • PR 1954364: Virtual machines with EFI firmware might fail to boot from Windows Deployment Services on Windows Server 2008

    PXE boot of virtual machines with 64-bit Extensible Firmware Interface (EFI) might fail when booting from Windows Deployment Services on Windows Server 2008, because PXE boot discovery requests might contain an incorrect PXE architecture ID. This fix sets a default architecture ID in line with the processor architecture identifier for x64 UEFI that the Internet Engineering Task Force (IETF) stipulates. This release also adds a new advanced configuration option, efi.pxe.architectureID = <integer>, to facilitate backwards compatibility. For instance, if your environment is configured to use architecture ID 9, you can add efi.pxe.architectureID = 9 to the advanced configuration options of the virtual machine.
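
    For example, to keep using architecture ID 9 in such an environment, add the option quoted in the text above to the advanced configuration parameters of the virtual machine, or to its .vmx file while the virtual machine is powered off (in the .vmx file the value is quoted):

      efi.pxe.architectureID = "9"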

    This issue is resolved in this release.

  • PR 1962739: Shared raw device mapping devices might go offline after a vMotion operation with MSCS clustered virtual machines

    If a backend failover of a SAN storage controller takes place after a vMotion operation of MSCS clustered virtual machines, shared raw device mapping devices might go offline.

    The issue is resolved in this release.

  • PR 2046087: An ESXi host might fail with a purple diagnostic screen and NULL pointer value

    An ESXi host might fail with a purple diagnostic screen in the process of collecting information for statistical purposes due to stale TCP connections.

    This issue is resolved in this release.

  • PR 1986020: ESXi VMFS5 and VMFS6 file systems might fail to expand

    ESXi VMFS5 and VMFS6 file systems with certain volume sizes might fail to expand due to an issue with the logical volume manager (LVM). A VMFS6 volume fails to expand with a size greater than 16 TB.

    This issue is resolved in this release.

  • PR 1977715: ESXi hosts intermittently stop sending logs to remote syslog servers

    The Syslog Service of ESXi hosts might stop transferring logs to a remote log server when the server is down, and might not resume transfers after the server comes back up.

    This issue is resolved in this release. 

  • PR 1998437: SATP ALUA might not properly handle SCSI sense code 0x2 0x4 0xa

    SATP ALUA might not properly handle SCSI sense code 0x2 0x4 0xa, which translates to the alert LOGICAL UNIT NOT ACCESSIBLE, ASYMMETRIC ACCESS STATE TRANSITION. This causes a premature all-paths-down state that might lead to VMFS heartbeat timeouts and ultimately to performance drops. With this fix, you can use the advanced configuration option /Nmp/NmpSatpAluaCmdRetryTime and increase the timeout value to 50 seconds.
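
    For example, assuming the option name contains no spaces (advanced option names do not), the timeout can typically be raised on a host with:

      esxcli system settings advanced set -o /Nmp/NmpSatpAluaCmdRetryTime -i 50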

    This issue is resolved in this release.

  • PR 2058697: EtherSwitch service modules might run out of memory when configuring an L3 mirror session

    EtherSwitch service modules might run out of memory when you configure an L3 mirror session with Encapsulated Remote Switched Port Analyzer (ERSPAN) type II or type III.

    This issue is resolved in this release.

  • PR 1996632: ESXi hosts might intermittently lose connectivity if configured to use an Active Directory domain with a large number of accounts

    If an ESXi host is configured to use an Active Directory domain with more than 10,000 accounts, the host might intermittently lose connectivity.

    This issue is resolved in this release.

  • PR 1995640: An ESXi host might fail with a purple diagnostic screen with PCPU locked error

    An ESXi host might fail with a purple diagnostic screen with an error for locked PCPUs, caused by a spin count fault.

    You might see a log similar to this:

    2017-08-05T21:51:50.370Z cpu4:134108)WARNING: Hbr: 873: Failed to receive from 172.16.30.42 (groupID=GID-d6bb98fb-ace7-4793-9ee5-30545518a5a3): Broken pipe
    2017-08-05T21:56:55.398Z cpu4:134108)WARNING: Hbr: 873: Failed to receive from 172.16.30.42 (groupID=GID-d6bb98fb-ace7-4793-9ee5-30545518a5a3): Broken pipe
    2017-08-05T22:02:00.418Z cpu4:134108)WARNING: Hbr: 873: Failed to receive from 172.16.30.42 (groupID=GID-d6bb98fb-ace7-4793-9ee5-30545518a5a3): Broken pipe
    2017-08-05T22:07:05.438Z cpu4:134108)WARNING: Hbr: 873: Failed to receive from 172.16.30.42 (groupID=GID-d6bb98fb-ace7-4793-9ee5-30545518a5a3): Broken pipe
    2017-08-05T22:12:10.453Z cpu2:134108)WARNING: Hbr: 873: Failed to receive from 172.16.30.42 (groupID=GID-d6bb98fb-ace7-4793-9ee5-30545518a5a3): Broken pipe
    2017-08-05T22:17:15.472Z cpu2:134108)WARNING: Hbr: 873: Failed to receive from 172.16.30.42 (groupID=GID-d6bb98fb-ace7-4793-9ee5-30545518a5a3): Broken pipe
    2017-08-05T16:32:08.539Z cpu67:67618 opID=da84f05c)FSS: 6229: Conflict between buffered and unbuffered open (file 'DC_UAT_NETSERVER-000001-sesparse.vmdk'):flags 0x4002, requested flags 0x8
    2017-08-05T16:32:45.177Z cpu61:67618 opID=389fe17f)FSS: 6229: Conflict between buffered and unbuffered open (file 'DC_UAT_Way4SRV-000001-sesparse.vmdk'):flags 0x4002, requested flags 0x8
    2017-08-05T16:33:16.529Z cpu66:68466 opID=db8b1ddc)FSS: 6229: Conflict between buffered and unbuffered open (file 'DC_UAT_FIleServer-000001-sesparse.vmdk'):flags 0x4002, requested flags 0x8

    This issue is resolved in this release.

  • PR 2053426: Virtual machines with more than 1 TB of memory and PCI passthrough devices might fail to power on

    The VMkernel module that manages PCI passthrough devices has a dedicated memory heap that cannot support virtual machines configured with more than 1 TB of memory and one or more PCI passthrough devices. Such virtual machines might fail to power on with a warning in the ESXi vmkernel log (/var/log/vmkernel.log) similar to WARNING: Heap: 3729: Heap mainHeap (2618200/31462232): Maximum allowed growth (28844032) too small for size (32673792). The issue also occurs when multiple virtual machines, each with less than 1 TB of memory and one or more PCI passthrough devices, are powered on concurrently and the sum of the memory configured across all virtual machines exceeds 1 TB.

    This issue is resolved in this release.

  • PR 1987996: Warning appears in vSAN health service Hardware compatibility check: Timed out in querying HCI info

    When vSAN health service runs a Hardware compatibility check, the following warning message might appear: Timed out in querying HCI info. This warning is caused by slow retrieval of hardware information. The check times out after 20 seconds, which is less than the time required to retrieve information from the disks.

    This issue is resolved in this release.

  • PR 2048702: ImageConfigManager causes increased log activity in the syslog.log file

    The syslog.log file is repeatedly populated with messages related to ImageConfigManager.

    This issue is resolved in this release.

  • PR 2051775: The Common Information Model interface of an ESXi host might stop working and fail hardware health monitoring

    You might not be able to monitor the hardware health of an ESXi host through the CIM interface, because some processes in SFCB might stop working. This fix provides code enhancements to the sfcb-hhrc process, which manages third-party Host Hardware RAID Controller (HHRC) CIM providers and provides hardware health monitoring, and to the sfcb-vmware_base process.

    This issue is resolved in this release.

  • PR 2055746: HotAdd backup of a Windows virtual machine disk might fail if both the target virtual machine and the proxy virtual machine are enabled for Content Based Read Cache (CBRC)

    HotAdd backup of a Windows virtual machine disk might fail if both the target virtual machine and the proxy virtual machine are CBRC-enabled, because the snapshot disk of the target virtual machine cannot be added to the backup proxy virtual machine.

    This issue is resolved in this release.

  • PR 2057918: VMkernel logs in VMware vSAN environment might be flooded with the message Unable to register file system

    In vSAN environments, the VMkernel logs, which record activities related to virtual machines and ESXi, might be flooded with the message Unable to register file system.

    This issue is resolved in this release. 

  • PR 2052300: Attempts to create a host profile might fail with an error due to a null value in iSCSI configuration

    Attempts to create a host profile might fail with an error NoneType object has no attribute lower if you pass a null value parameter in the iSCSI configuration.

    This issue is resolved in this release.

  • PR 2057189: Packet loss across L2 VPN networks

    You might see packet loss of up to 15% and poor application performance across an L2 VPN network if a sink port is configured on the vSphere Distributed Switch but no overlay network is available.

    This issue is resolved in this release.

  • PR 1911721: The esxtop utility reports incorrect statistics for DAVG/cmd and KAVG/cmd on VAAI-supported LUNs

    The esxtop utility reports incorrect statistics for the average device latency per command (DAVG/cmd) and average ESXi VMkernel latency per command (KAVG/cmd) on VAAI-supported LUNs due to an incorrect calculation.

    This issue is resolved in this release.

  • PR 1678559: You might notice performance degradation after resizing a VMDK file on vSAN

    Distributed Object Manager processing operations might take too much time when an object has concatenated components. This might cause delays in other operations, which leads to high I/O latency.

    This issue is resolved in this release.

  • PR 2064616: Applications running parallel file I/O operations might cause the ESXi host to stop responding

    Applications running parallel file I/O operations on a VMFS datastore might cause the ESXi host to stop responding.

    This issue is resolved in this release.

  • PR 2034930: When you customize a Host Profile some parameter values might be lost

    When you customize a Host Profile to attach it to a host, some parameter values, including host name and MAC address, might be lost.

    This issue is resolved in this release.

  • PR 2034625: An ESXi host might fail with a purple diagnostic screen due to faulty packets from third-party networking modules

    If VMware vSphere Network I/O Control is not enabled, packets from third-party networking modules might cause an ESXi host to fail with a purple diagnostic screen.

    This issue is resolved in this release.

  • PR 1994373: You might fail to gather data to diagnose networking issues

    With ESXi 6.5 Update 2, you can use PacketCapture, a lightweight tcpdump utility implemented in the Reverse HTTP Proxy service, to capture and store only the minimum amount of data to diagnose a networking issue, saving CPU and storage. For more information, see KB 52843.

    This issue is resolved in this release.

  • PR 1950800: VMkernel adapter fails to communicate through an Emulex physical NIC after an ESXi host boots

    After an ESXi host boots, the VMkernel adapter that connects to virtual switches with an Emulex physical NIC might be unable to communicate through the NIC, and the VMkernel adapter might not get a DHCP IP address.

    This issue is resolved in this release.

  • The ESXCLI unmap command might reclaim only small file block resources in VMFS-6 datastores and not process large file block resources

    In VMFS-6 datastores, if you use the esxcli storage vmfs unmap command for manual reclamation of free space, it might only reclaim space from small file blocks and not process large file block resources.
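
    For reference, manual space reclamation runs per datastore with a command similar to the following, where the volume label and the optional reclaim unit (the number of VMFS blocks reclaimed per iteration) depend on your environment:

      esxcli storage vmfs unmap -l <datastore_label> -n 200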

    This issue is resolved in this release.

  • PR 2060589: vSphere Web Client might display incorrect storage allocation after an increase of the disk size of a virtual machine

    If you use vSphere Web Client to increase the disk size of a virtual machine, the vSphere Web Client might not fully reflect the reconfiguration and display the storage allocation of the virtual machine as unchanged.

    This issue is resolved in this release.

  • PR 2012602: An ESXi host might not have access to the Web Services for Management (WSMan) protocol due to failed CIM ticket authentication

    An ESXi host might not have access to the WSMan protocol due to failing CIM ticket authentication.

    This issue is resolved in this release.

  • PR 2065000: Hostd might fail during backup

    The ESXi daemon hostd might fail during backup due to a race condition.

    This issue is resolved in this release.

  • PR 2015082: Deleting files from a content library might cause the vCenter Server Agent to fail

    Deleting files from a content library might cause the vCenter Server Agent (vpxa) service to fail, which results in the failure of the delete operation.

    This issue is resolved in this release.

  • PR 1997995: Dell system management software might not display the BIOS attributes of ESXi hosts on Dell OEM servers

    Dell system management software might not display the BIOS attributes of ESXi hosts on Dell OEM servers, because the ESXi kernel module fails to load automatically during ESXi boot. This is due to different vendor names in the SMBIOS System Information of Dell and Dell OEM servers.

    This issue is resolved in this release.

  • PR 1491595: Hostd might fail due to a memory leak

    Under certain circumstances, the memory of hostd might exceed the hard limit and the agent might fail.

    This issue is resolved in this release. 

  • PR 2034772: An ESXi host might fail with a purple screen due to a rare race condition in iSCSI sessions

    An ESXi host might fail with a purple diagnostic screen due to a rare race condition in iSCSI sessions while processing the scheduled queue and running queue during connection shutdown. The connection lock might be freed to terminate a task while the shutdown is still in progress. The next node pointer would then be incorrect and cause the failure.

    This issue is resolved in this release.

  • PR 2056969: Log congestion leads to unresponsive hosts and inaccessible virtual machines

    If a device in a disk group encounters a permanent disk loss, the physical log (PLOG) in the cache device might increase to its maximum threshold. This problem is due to faulty cleanup of log data bound to the lost disk. The log entries build up over time, leading to log congestion, which can cause virtual machines to become inaccessible.

    This issue is resolved in this release. 

  • PR 2054940: vSAN health service unable to read controller information

    If you are using HP controllers, vSAN health service might be unable to read the controller firmware information. The health check might issue a warning, with the firmware version listed as N/A.

    This issue is resolved in this release.

  • PR 2034800: Ping failure in vSAN network health check when VMware vRealize Operations manages the vSAN cluster

    When a vSAN cluster is managed by vRealize Operations, you might observe intermittent ping health check failures.

    This issue is resolved in this release.

  • PR 2034452: The vSAN health service might be slow displaying data for a large scale cluster after integration with vRealize Operations

    After vRealize Operations integration with the vSAN Management Pack is enabled, the vSAN health service might become less responsive. This problem affects large scale clusters, such as those with 24 hosts, each with 10 disks. The vSAN health service uses an inefficient log filter for parsing some specific log patterns, and can consume a large amount of CPU resources while communicating with vRealize Operations.

    This issue is resolved in this release.

  • PR 2023740: Host might fail with a purple diagnostic screen error if its own local configuration is added through esxcli vsan cluster unicastagent add

    When adding the unicast configuration of cluster hosts, if the host's own local configuration is added, the host experiences a purple diagnostic screen error.
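
    For reference, the unicast agent entries already configured on a host can be reviewed before new ones are added, so that the host's own address is not added by mistake:

      esxcli vsan cluster unicastagent list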

    This issue is resolved in this release.

  • PR 2033471: Very low IOPS on vSAN iSCSI target when outstanding I/O is 1

    For each iSCSI target, the initiator establishes one or more connections, depending on whether multipathing is configured. For each connection, if there is only one outstanding I/O at all times and the network is very fast (ping time less than 1 ms), the IOPS on an iSCSI LUN might be much lower than on a native vSAN LUN.

    This issue is resolved in this release.

  • PR 2028736: A host in a vSAN cluster might fail with a page fault

    When the instantiation of a vSAN object fails due to memory constraints, the host might experience a purple diagnostic screen error.

    This issue is resolved in this release. 

  • PR 2023858: The vSAN health service might take 5 to 10 minutes to report hosts with a connection issue

    After a host loses its connection to vCenter Server, and no manual retest is run, the vSAN health service might take 5 to 10 minutes to report the lost connection.

    This issue is resolved in this release.

  • PR 2021819: False alarm for cluster configuration consistency health check after enabling vSAN encryption

    A false alarm is reported for the cluster configuration consistency health check when vSAN encryption is enabled with two or more KMS servers configured. The KMS certificates fetched from the host and from vCenter Server do not match due to improper ordering.

    This issue is resolved in this release.

  • PR 2059300: Host failure when upgrading to vCenter Server 6.5 Update 2

    During an upgrade to vCenter Server 6.5 Update 2, when vSAN applies the unicast configuration of other hosts in the cluster, a local configuration might be added on the host, resulting in a host failure with a purple diagnostic screen.

    This issue is resolved in this release.

  • PR 2020615: vSphere vMotion operations for virtual machines with more than 64 GB of vRAM might fail on vSAN

    When you perform a vSphere vMotion operation with a virtual machine that has more than 64 GB of memory (vRAM) on vSAN, the task might fail due to an error in swap file initialization on the destination.

    This issue is resolved in this release.

  • PR 2008663: After reboot, host is delayed from joining cluster running vSAN 6.6 and later

    When there are many vSAN components on disks, during reboot it might take a long time to publish vSAN storage. The host is delayed from joining the vSAN cluster, so this host might be partitioned for a long time.

    This issue is resolved in this release.

  • PR 2023766: Large amounts of resync traffic for an object being repaired or placed in maintenance mode

    When an object spans across all available fault domains, vSAN is unable to replace a component that is impacted by a maintenance mode operation or a repair operation. If this happens, vSAN rebuilds the entire object, which can result in large amounts of resync traffic.

    This issue is resolved in this release.

  • PR 2039896: Some operations fail and the host must be rebooted

    When vSAN Observer is running, memory leaks can occur in the host init group. Other operations that run under the init group might fail until you reboot the host.

    This issue is resolved in this release.

  • PR 1999439: Slow performance and client latency due to excess DOM I/O activity

    Under certain conditions, when I/Os from the Distributed Object Manager (DOM) fail and are retried, the cluster might experience high latencies and slow performance. You might see the following trace logs:
    [33675821] [cpu66] [6bea734c OWNER writeWithBlkAttr5 VMDISK] DOMTraceOperationNeedsRetry:3630: {'op': 0x43a67f18c6c0, 'obj': 0x439e90cb0c00, 'objUuid': '79ec2259-fd98-ef85-0e1a-1402ec44eac0', 'status': 'VMK_STORAGE_RETRY_OPERATION'}

    This issue is resolved in this release.

  • PR 2061175: I/O performance of ESX PCIe devices might drop when the AMD input–output memory management unit (IOMMU) driver is enabled

    I/O performance of ESX PCIe devices might drop when the AMD IOMMU driver is enabled.

    This issue is resolved in this release.

  • PR 1982291: Virtual machines with brackets in their names might not deploy successfully

    Virtual machines with brackets ([]) in their names might not deploy successfully. For more information, see KB 53648.

    This issue is resolved in this release.

ESXi650-201805202-UG
Patch Category Bugfix
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMW_bootbank_qedentv_2.0.6.4-8vmw.650.2.50.8294253
PRs Fixed  N/A
Related CVE numbers N/A

This patch updates the qedentv VIB.

    ESXi650-201805203-UG
    Patch Category Bugfix
    Patch Severity Important
    Host Reboot Required Yes
    Virtual Machine Migration or Shutdown Required Yes
    Affected Hardware N/A
    Affected Software N/A
    VIBs Included
    • VMW_bootbank_vmkusb_0.1-1vmw.650.2.50.8294253
    PRs Fixed  1998108
    Related CVE numbers N/A

    This patch updates the vmkusb VIB to resolve the following issue:

    • PR 1998108: An ESXi host might fail with a purple diagnostic screen and a Page Fault error message when you attach a USB device

      An ESXi host might fail with a purple diagnostic screen and a Page Fault error message when you attach a USB device. This happens because some USB devices report an invalid number of interface descriptors to the host.

      This issue is resolved in this release.

    ESXi650-201805204-UG
    Patch Category Bugfix
    Patch Severity Important
    Host Reboot Required Yes
    Virtual Machine Migration or Shutdown Required Yes
    Affected Hardware N/A
    Affected Software N/A
    VIBs Included
    • VMW_bootbank_ipmi-ipmi-devintf_39.1-5vmw.650.2.50.8294253
    • VMW_bootbank_ipmi-ipmi-msghandler_39.1-5vmw.650.2.50.8294253
    PRs Fixed  1948936
    Related CVE numbers N/A

    This patch updates ipmi-ipmi-devintf and ipmi-ipmi-msghandler VIBs to resolve the following issue:

    • PR 1948936: Third-party CIM providers or agents might cause memory corruptions that lead to destabilization of the kernel or the drivers of an ESXi host

      Third-party CIM providers or agents might cause memory corruptions that lead to destabilization of the kernel or the drivers of an ESXi host, and the host might fail with a purple diagnostic screen. Memory corruptions become visible at a later point, not at the exact time of the corruption.

      This issue is resolved in this release.

    ESXi650-201805205-UG
    Patch Category Enhancement
    Patch Severity Important
    Host Reboot Required Yes
    Virtual Machine Migration or Shutdown Required Yes
    Affected Hardware N/A
    Affected Software N/A
    VIBs Included
    • VMW_bootbank_vmw-ahci_1.1.1-1vmw.650.2.50.8294253
    PRs Fixed  2004950 
    Related CVE numbers N/A

    This patch updates the vmw-ahci VIB to resolve the following issue:

    • PR 2004950: Update to the VMware Advanced Host Controller Interface driver

      The VMware Advanced Host Controller Interface (AHCI) driver is updated to version 1.1.1-1 to fix issues with memory allocation, the Marvell 9230 AHCI controller, and Intel Cannon Lake PCH-H SATA AHCI controller.

    ESXi650-201805206-UG
    Patch Category Enhancement
    Patch Severity Important
    Host Reboot Required Yes
    Virtual Machine Migration or Shutdown Required Yes
    Affected Hardware N/A
    Affected Software N/A
    VIBs Included
    • VMW_bootbank_nhpsa_2.0.22-3vmw.650.2.50.8294253
    PRs Fixed  2004948
    Related CVE numbers N/A

    This patch updates the nhpsa VIB to resolve the following issue:

    • PR 2004948: Update to the nhpsa driver

      The nhpsa driver is updated to 2.0.22-1vmw.

    ESXi650-201805207-UG
    Patch Category Enhancement
    Patch Severity Important
    Host Reboot Required Yes
    Virtual Machine Migration or Shutdown Required Yes
    Affected Hardware N/A
    Affected Software N/A
    VIBs Included
    • VMW_bootbank_nvme_1.2.1.34-1vmw.650.2.50.8294253
    PRs Fixed  2004949
    Related CVE numbers N/A

    This patch updates the nvme VIB to resolve the following issue:

    • PR 2004949: Updates to the NVMe driver

      ESXi 6.5 Update 2 adds management of multiple namespaces compatible with the NVMe 1.2 specification and enhanced diagnostic log.

    ESXi650-201805208-UG
    Patch Category Enhancement
    Patch Severity Important
    Host Reboot Required Yes
    Virtual Machine Migration or Shutdown Required Yes
    Affected Hardware N/A
    Affected Software N/A
    VIBs Included
    • VMW_bootbank_smartpqi_1.0.1.553-10vmw.650.2.50.8294253
    PRs Fixed  2004957
    Related CVE numbers N/A

    The VIB in this bulletin is new and you might see the following expected behavior:

    1. The command esxcli software vib update -d <zip bundle> might skip the new VIB. You must use the command esxcli software profile update --depot=depot_location --profile=profile_name instead.
    2. After running the command esxcli software vib update -n <vib name>, in Installation Result you might see the message Host not changed. You must use the command esxcli software vib install -n instead.

    This patch updates the smartpqi VIB to resolve the following issue:

    • PR 2004957: smartpqi driver for HPE Smart PQI controllers

      ESXi 6.5 Update 2 enables smartpqi driver support for the HPE ProLiant Gen10 Smart Array Controller.

    ESXi650-201805209-UG
    Patch Category Enhancement
    Patch Severity Important
    Host Reboot Required Yes
    Virtual Machine Migration or Shutdown Required Yes
    Affected Hardware N/A
    Affected Software N/A
    VIBs Included
    • VMW_bootbank_lsi-msgpt35_03.00.01.00-9vmw.650.2.50.8294253
    PRs Fixed  2004956
    Related CVE numbers N/A

    The VIB in this bulletin is new and you might see the following expected behavior:

    1. The command esxcli software vib update -d <zip bundle> might skip the new VIB. You must use the command esxcli software profile update --depot=depot_location --profile=profile_name instead.
    2. After running the command esxcli software vib update -n <vib name>, in Installation Result you might see the message Host not changed. You must use the command esxcli software vib install -n instead.

    This patch updates the lsi-msgpt35 VIB to resolve the following issue:

    • PR 2004956: I/O devices enablement for Broadcom SAS 3.5 IT/IR controllers

      ESXi 6.5 Update 2 enables a new native driver to support the Broadcom SAS 3.5 IT/IR controllers with devices including a combination of NVMe, SAS, and SATA drives. The HBA device IDs are: SAS3516(00AA), SAS3516_1(00AB), SAS3416(00AC), SAS3508(00AD), SAS3508_1(00AE), SAS3408(00AF), SAS3716(00D0), SAS3616(00D1), SAS3708(00D2).

    ESXi650-201805210-UG
    Patch Category Enhancement
    Patch Severity Important
    Host Reboot Required Yes
    Virtual Machine Migration or Shutdown Required Yes
    Affected Hardware N/A
    Affected Software N/A
    VIBs Included
    • VMW_bootbank_lsi-msgpt3_16.00.01.00-1vmw.650.2.50.8294253
    PRs Fixed  2004947
    Related CVE numbers N/A

    This patch updates the lsi-msgpt3 VIB to resolve the following issue:

    • PR 2004947: Update to the lsi_msgpt3 driver

      The lsi_msgpt3 driver is updated to version 16.00.01.00-1vmw.

    ESXi650-201805211-UG
    Patch Category Enhancement
    Patch Severity Important
    Host Reboot Required Yes
    Virtual Machine Migration or Shutdown Required Yes
    Affected Hardware N/A
    Affected Software N/A
    VIBs Included
    • VMW_bootbank_lsi-mr3_7.702.13.00-3vmw.650.2.50.8294253
    PRs Fixed  2004946
    Related CVE numbers N/A

    This patch updates the lsi-mr3 VIB to resolve the following issue:

    • PR 2004946: Update to the lsi_mr3 driver to version MR 7.2

      The lsi_mr3 driver is updated to enable support for Broadcom SAS 3.5 RAID controllers. It also contains significant enhancements to task management and critical bug fixes.

    ESXi650-201805212-UG
    Patch Category Bugfix
    Patch Severity Important
    Host Reboot Required Yes
    Virtual Machine Migration or Shutdown Required Yes
    Affected Hardware N/A
    Affected Software N/A
    VIBs Included
    • VMW_bootbank_bnxtnet_20.6.101.7-11vmw.650.2.50.8294253
    PRs Fixed  N/A
    Related CVE numbers N/A

    The VIB in this bulletin is new and you might see the following expected behavior:

    1. The command esxcli software vib update -d <zip bundle> might skip the new VIB. You must use the command esxcli software profile update --depot=depot_location --profile=profile_name instead.
    2. After running the command esxcli software vib update -n <vib name>, in Installation Result you might see the message Host not changed. You must use the command esxcli software vib install -n instead.

    This patch updates the bnxtnet VIB.

      ESXi650-201805213-UG
      Patch Category Enhancement
      Patch Severity Important
      Host Reboot Required No
      Virtual Machine Migration or Shutdown Required No
      Affected Hardware N/A
      Affected Software N/A
      VIBs Included
      • VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.32-2.50.8294253
      PRs Fixed  2004949
      Related CVE numbers N/A

      This patch updates the vmware-esx-esxcli-nvme-plugin VIB to resolve the following issue:

      • PR 2004949: Updates to the NVMe driver

        ESXi 6.5 Update 2 adds management of multiple namespaces compatible with the NVMe 1.2 specification and enhanced diagnostic log.

      ESXi650-201805214-UG
      Patch Category Bugfix
      Patch Severity Important
      Host Reboot Required Yes
      Virtual Machine Migration or Shutdown Required Yes
      Affected Hardware N/A
      Affected Software N/A
      VIBs Included
      • VMW_bootbank_i40en_1.3.1-19vmw.650.2.50.8294253
      PRs Fixed  N/A
      Related CVE numbers N/A

      This patch updates the i40en VIB.

        ESXi650-201805215-UG
        Patch Category Bugfix
        Patch Severity Important
        Host Reboot Required Yes
        Virtual Machine Migration or Shutdown Required Yes
        Affected Hardware N/A
        Affected Software N/A
        VIBs Included
        • VMW_bootbank_misc-drivers_6.5.0-2.50.8294253
        PRs Fixed  N/A
        Related CVE numbers N/A

        This patch updates the misc-drivers VIB.

          ESXi650-201805216-UG
          Patch Category Bugfix
          Patch Severity Important
          Host Reboot Required Yes
          Virtual Machine Migration or Shutdown Required Yes
          Affected Hardware N/A
          Affected Software N/A
          VIBs Included
          • VMW_bootbank_lsi-msgpt2_20.00.01.00-4vmw.650.2.50.8294253
          PRs Fixed  2015606
          Related CVE numbers N/A

          This patch updates the lsi-msgpt2 VIB to resolve the following issue:

          • PR 2015606: VMware ESXi native drivers might fail to support more than eight message signaled interrupts

            ESXi native drivers might fail to support more than eight message signaled interrupts, even on devices with MSI-X, which allows a device to allocate up to 2048 interrupts. This issue is observed on newer AMD-based platforms and is due to default limits of the native lsi_msgpt2 and lsi_msgpt3 drivers.

            This issue is resolved in this release.

          ESXi650-201805217-UG
          Patch Category Bugfix
          Patch Severity Important
          Host Reboot Required Yes
          Virtual Machine Migration or Shutdown Required Yes
          Affected Hardware N/A
          Affected Software N/A
          VIBs Included
          • VMW_bootbank_ne1000_0.8.3-7vmw.650.2.50.8294253
          PRs Fixed  N/A
          Related CVE numbers N/A

          This patch updates the ne1000 VIB.

            ESXi650-201805218-UG
            Patch Category Enhancement
            Patch Severity Important
            Host Reboot Required Yes
            Virtual Machine Migration or Shutdown Required Yes
            Affected Hardware N/A
            Affected Software N/A
            VIBs Included
            • VMW_bootbank_ixgben_1.4.1-12vmw.650.2.50.8294253
            PRs Fixed  N/A
            Related CVE numbers N/A

            This patch updates the ixgben VIB.

              ESXi650-201805219-UG
              Patch Category Bugfix
              Patch Severity Important
              Host Reboot Required No
              Virtual Machine Migration or Shutdown Required No
              Affected Hardware N/A
              Affected Software N/A
              VIBs Included
              • VMware_bootbank_lsu-hp-hpsa-plugin_2.0.0-6vmw.650.2.50.8294253
              PRs Fixed  2036299
              Related CVE numbers N/A

              This patch updates the lsu-hp-hpsa-plugin VIB to resolve the following issue:

              • PR 2036299: The ESXi native driver for HPE Smart Array controllers extends the list of expander devices to enable compatibility with HPE Gen9 configurations

                The disk serviceability plug-in of the ESXi native driver for HPE Smart Array controllers, nhpsa, now works with an extended list of expander devices to enable compatibility with HPE Gen9 configurations.

              ESXi650-201805220-UG
              Patch Category Bugfix
              Patch Severity Moderate
              Host Reboot Required Yes
              Virtual Machine Migration or Shutdown Required Yes
              Affected Hardware N/A
              Affected Software N/A
              VIBs Included
              • VMW_bootbank_lpfc_11.4.33.1-6vmw.650.2.50.8294253
              PRs Fixed  2053073
              Related CVE numbers N/A

              This patch updates the lpfc VIB to resolve the following issue:

              • PR 2053073: Support for LightPulse and OneConnect adapters is split to separate default drivers

                In ESXi 6.5 Update 2, LightPulse and OneConnect adapters are supported by separate default drivers. The brcmfcoe driver supports OneConnect adapters and the lpfc driver supports only LightPulse adapters. Previously, the lpfc driver supported both OneConnect and LightPulse adapters.

              ESXi650-201805221-UG
              Patch Category Bugfix
              Patch Severity Moderate
              Host Reboot Required Yes
              Virtual Machine Migration or Shutdown Required Yes
              Affected Hardware N/A
              Affected Software N/A
              VIBs Included
              • VMW_bootbank_brcmfcoe_11.4.1078.0-8vmw.650.2.50.8294253
              PRs Fixed  2053073
              Related CVE numbers N/A

              The VIB in this bulletin is new and you might see the following expected behavior:

              1. The command esxcli software vib update -d <zip bundle> might skip the new VIB. You must use the command esxcli software profile update --depot=depot_location --profile=profile_name instead.
              2. After running the command esxcli software vib update -n <vib name>, in Installation Result you might see the message Host not changed. You must use the command esxcli software vib install -n instead.

              This patch updates the brcmfcoe VIB to resolve the following issue:

              • PR 2053073: Support for LightPulse and OneConnect adapters is split to separate default drivers

                In ESXi 6.5 Update 2, LightPulse and OneConnect adapters are supported by separate default drivers. The brcmfcoe driver supports OneConnect adapters and the lpfc driver supports only LightPulse adapters. Previously, the lpfc driver supported both OneConnect and LightPulse adapters.

              ESXi650-201805222-UG
              Patch Category Bugfix
              Patch Severity Important
              Host Reboot Required Yes
              Virtual Machine Migration or Shutdown Required Yes
              Affected Hardware N/A
              Affected Software N/A
              VIBs Included
              • VMW_bootbank_usbcore-usb_1.0-3vmw.650.2.50.8294253
              PRs Fixed  2053073
              Related CVE numbers N/A

              This patch updates the usbcore-usb VIB to resolve the following issue:

              • PR 2053703: An ESXi host might fail with purple diagnostic screen when completing installation from a USB media drive on HP servers

                When you press Enter to reboot an ESXi host and complete a new installation from a USB media drive on some HP servers, you might see an error on a purple diagnostic screen due to a rare race condition.

                This issue is resolved in this release.

              ESXi650-201805223-UG
              Patch Category Bugfix
              Patch Severity Critical
              Host Reboot Required No
              Virtual Machine Migration or Shutdown Required No
              Affected Hardware N/A
              Affected Software N/A
              VIBs Included
              • VMware_bootbank_esx-xserver_6.5.0-2.50.8294253
              PRs Fixed  2051280
              Related CVE numbers N/A

              This patch updates the esx-xserver VIB to resolve the following issue:

              • PR 2051280: Some Single Root I/O Virtualization (SR-IOV) graphics configurations running ESXi 6.5 with GPU statistics enabled might cause virtual machines to become unresponsive

                Some SR-IOV graphics configurations running ESXi 6.5 with GPU statistics enabled might cause virtual machines on the ESXi host to become unresponsive after extended host uptime, due to a memory leak. In some cases, the ESXi host might become unresponsive as well.

                This issue is resolved in this release.

              ESXi650-201805101-SG
              Patch Category Security
              Patch Severity Important
              Host Reboot Required Yes
              Virtual Machine Migration or Shutdown Required Yes
              Affected Hardware N/A
              Affected Software N/A
              VIBs Included
              • VMware_bootbank_esx-base_6.5.0-1.47.8285314
              • VMware_bootbank_vsan_6.5.0-1.47.7823008
              • VMware_bootbank_esx-tboot_6.5.0-1.47.8285314
              • VMware_bootbank_vsanhealth_6.5.0-1.47.7823009
              PRs Fixed  1827139, 2011288, 1964481
              Related CVE numbers N/A

              This patch updates the esx-base, esx-tboot, vsan, and vsanhealth VIBs to resolve the following issues:

              • PR 1827139: ESXi hosts might randomly fail due to errors in sfcbd or other daemon processes

                This fix improves internal error handling to prevent ESXi host failures caused by sfcbd or other daemon processes.

                This issue is resolved in this release.

              • PR 2011288: Update to glibc package to address security issues

                The glibc package has been updated to address CVE-2017-15804, CVE-2017-15670, CVE-2016-4429, CVE-2015-8779, CVE-2015-8778, CVE-2014-9761 and CVE-2015-8776.

              • PR 1964481: Update of the SQLite database

                The SQLite database is updated to version 3.20.1 to resolve a security issue with identifier CVE-2017-10989.

              ESXi650-201805102-SG
              Patch Category Security
              Patch Severity Important
              Host Reboot Required No
              Virtual Machine Migration or Shutdown Required No
              Affected Hardware N/A
              Affected Software N/A
              VIBs Included
              • VMware_locker_tools-light_6.5.0-1.47.8285314
              PRs Fixed  N/A
              Related CVE numbers N/A

              This patch updates the tools-light VIB.

                ESXi650-201805103-SG
                Patch Category Security
                Patch Severity Important
                Host Reboot Required No
                Virtual Machine Migration or Shutdown Required No
                Affected Hardware N/A
                Affected Software N/A
                VIBs Included
                • VMware_bootbank_esx-ui_1.27.1-7909286
                PRs Fixed  N/A
                Related CVE numbers N/A

                This patch updates the esx-ui VIB.

                  ESXi-6.5.0-20180502001-standard
                  Profile Name ESXi-6.5.0-20180502001-standard
                  Build For build information, see  Patches Contained in this Release.
                  Vendor VMware, Inc.
                  Release Date May 3, 2018
                  Acceptance Level PartnerSupported
                  Affected Hardware N/A
                  Affected Software N/A
                  Affected VIBs
                  • VMware_bootbank_vsan_6.5.0-2.50.8064065
                  • VMware_bootbank_vsanhealth_6.5.0-2.50.8143339
                  • VMware_bootbank_esx-base_6.5.0-2.50.8294253
                  • VMware_bootbank_esx-tboot_6.5.0-2.50.8294253
                  • VMW_bootbank_qedentv_2.0.6.4-8vmw.650.2.50.8294253
                  • VMW_bootbank_vmkusb_0.1-1vmw.650.2.50.8294253
                  • VMW_bootbank_ipmi-ipmi-devintf_39.1-5vmw.650.2.50.8294253
                  • VMW_bootbank_ipmi-ipmi-msghandler_39.1-5vmw.650.2.50.8294253
                  • VMW_bootbank_vmw-ahci_1.1.1-1vmw.650.2.50.8294253
                  • VMW_bootbank_nhpsa_2.0.22-3vmw.650.2.50.8294253
                  • VMW_bootbank_nvme_1.2.1.34-1vmw.650.2.50.8294253
                  • VMW_bootbank_smartpqi_1.0.1.553-10vmw.650.2.50.8294253
                  • VMW_bootbank_lsi-msgpt35_03.00.01.00-9vmw.650.2.50.8294253
                  • VMW_bootbank_lsi-msgpt3_16.00.01.00-1vmw.650.2.50.8294253
                  • VMW_bootbank_lsi-mr3_7.702.13.00-3vmw.650.2.50.8294253
                  • VMW_bootbank_bnxtnet_20.6.101.7-11vmw.650.2.50.8294253
                  • VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.32-2.50.8294253
                  • VMW_bootbank_i40en_1.3.1-19vmw.650.2.50.8294253
                  • VMW_bootbank_misc-drivers_6.5.0-2.50.8294253
                  • VMW_bootbank_lsi-msgpt2_20.00.01.00-4vmw.650.2.50.8294253
                  • VMW_bootbank_ne1000_0.8.3-7vmw.650.2.50.8294253
                  • VMW_bootbank_ixgben_1.4.1-12vmw.650.2.50.8294253
                  • VMware_bootbank_lsu-hp-hpsa-plugin_2.0.0-6vmw.650.2.50.8294253
                  • VMW_bootbank_lpfc_11.4.33.1-6vmw.650.2.50.8294253
                  • VMW_bootbank_brcmfcoe_11.4.1078.0-8vmw.650.2.50.8294253
                  • VMW_bootbank_usbcore-usb_1.0-3vmw.650.2.50.8294253
                  • VMware_bootbank_esx-xserver_6.5.0-2.50.8294253
                  • VMware_locker_tools-light_6.5.0-1.47.8285314
                  PRs Fixed 1882261, 1907025, 1911721, 1913198, 1913483, 1919300, 1923173, 1937646, 1947334, 1950336, 1950800, 1954364, 1958600, 1962224, 1962739, 1971168, 1972487, 1975787, 1977715, 1982291, 1983123, 1986020, 1987996, 1988248, 1995640, 1996632, 1997995, 1998437, 1999439, 2004751, 2006929, 2007700, 2008663, 2011392, 2012602, 2015132, 2016411, 2016413, 2018046, 2020615, 2020785, 2021819, 2023740, 2023766, 2023858, 2024647, 2028736, 2029476, 2032048, 2033471, 2034452, 2034625, 2034772, 2034800, 2034930, 2039373, 2039896, 2046087, 2048702, 2051775, 2052300, 2053426, 2054940, 2055746, 2057189, 2057918, 2058697, 2059300, 2060589, 2061175, 2064616, 2065000, 1998108, 1948936, 2004950, 2004948, 2004949, 2004957, 2004956, 2004947, 1491595, 2004946, 2004949, 2015606, 2036299, 2053073, 2053073, 2053703, 2051280, 1994373
                  Related CVE numbers N/A
                  • This patch updates the following issues:
                    • PR 1913483: Third-party storage array user interfaces might not properly display relationships between virtual machine disks and virtual machines

                      Third-party storage array user interfaces might not properly display relationships between virtual machine disks and virtual machines when one disk is attached to many virtual machines, due to insufficient metadata.

                      This issue is resolved in this release.

                    • PR 1975787: An ESXi support bundle generated by using the VMware Host Client might not contain all log files

                      Sometimes, when you generate a support bundle for an ESXi host by using the Host Client, the support bundle might not contain all log files.

                      This issue is resolved in this release.

                    • PR 1947334: Attempts to modify the vSAN iSCSI parameter VSAN-iSCSI.ctlWorkerThreads fail

                      You cannot modify the vSAN iSCSI parameter VSAN-iSCSI.ctlWorkerThreads, because it is designed as an internal parameter and is not subject to change. This fix hides internal vSAN iSCSI parameters to avoid confusion.

                      This issue is resolved in this release.

                    • PR 1919300: Host Profile compliance check might show Unknown status if a profile is imported from a different vCenter Server system

                      If you export a Host Profile from one vCenter Server system and import it to another vCenter Server system, compliance checks might show a status Unknown.

                      This issue is resolved in this release.

                    • PR 2007700: The advanced setting VMFS.UnresolvedVolumeLiveCheck might be inactive in the vSphere Web Client

                      In the vSphere Web Client, the option to enable or disable checks during unresolved volume queries with the VMFS.UnresolvedVolumeLiveCheck option, under Advanced Settings > VMFS, might be grayed out.

                      This issue is resolved in this release. You can also work around it by using the vSphere Client or by setting the option directly on the ESXi host.
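
                      As a sketch of the host-side alternative, the option can typically be read and set with the generic ESXCLI advanced-settings commands. The option path below is inferred from the UI name VMFS.UnresolvedVolumeLiveCheck and should be verified on your host:

                        esxcli system settings advanced list -o /VMFS/UnresolvedVolumeLiveCheck
                        esxcli system settings advanced set -o /VMFS/UnresolvedVolumeLiveCheck -i 1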

                    • PR 1983123: ESXi hosts with scratch partitions located on a vSAN datastore might intermittently become unresponsive

                      If the scratch partition of an ESXi host is located on a vSAN datastore and the datastore becomes temporarily inaccessible, userworld daemons such as hostd might block while trying to access the datastore at the same time, and the host might become unresponsive. With this fix, scratch partitions on a vSAN datastore can no longer be configured through the API or the user interface.

                      This issue is resolved in this release.

                    • PR 1950336: Hostd might stop responding in attempt to reconfigure a virtual machine

                      Hostd might stop responding when you attempt to reconfigure a virtual machine. The issue is caused by configuration entries with an unset value in the ConfigSpec object, which encapsulates the configuration settings for creating and configuring virtual machines.

                      This issue is resolved in this release.

                    • PR 2006929: VMware vSphere vMotion might fail due to timeout and inaccessible virtual machines

                      vSphere vMotion might fail due to a timeout, because virtual machines might need a larger buffer to store the shared memory page details used by 3D or graphics devices. The issue is observed mainly on virtual machines running graphics-intensive loads or working in virtual desktop infrastructure environments. With this fix, the buffer size is increased by four times for 3D virtual machines, and if the increased capacity is still not enough, the virtual machine is powered off.

                      This issue is resolved in this release.

                    • PR 1958600: The netdump IP address might be lost after upgrading from ESXi 5.5 Update 3b to ESXi 6.0 Update 3 and/or ESXi 6.5 Update 2

                      The netdump IP address might be lost after your upgrade from ESXi 5.5 Update 3b to ESXi 6.0 Update 3 and/or ESXi 6.5 Update 2.

                      This issue is resolved in this release.

                    • PR 1971168: ESXi system identification information in the vSphere Web Client might not be available

                      System identification information consists of asset tags, service tags, and OEM strings. In earlier releases, this information comes from the Common Information Model (CIM) service. Starting with vSphere 6.5, the CIM service is turned off by default unless third-party CIM providers are installed, which prevents the ESXi system identification information from displaying in the vSphere Web Client.

                      This issue is resolved in this release.

                    • PR 2020785: Virtual machines created by the VMware Integrated OpenStack might not work properly

                      When you create virtual machine disks with the VMware Integrated OpenStack block storage service (Cinder), the disks get random UUIDs. In around 0.5% of the cases, the virtual machines do not recognize these UUIDs, which results in unexpected behavior of the virtual machines.

                      This issue is resolved in this release.

                    • PR 2004751: A virtual machine might fail to boot after an upgrade to macOS 10.13

                      When you upgrade a virtual machine to macOS 10.13, the virtual machine might fail to locate a bootable operating system. If the macOS 10.13 installer determines that the system volume is located on a Solid State Disk (SSD), during the OS upgrade, the installer converts the Hierarchical File System Extended (HFS+/HFSX) to the new Apple File System (APFS) format. As a result, the virtual machine might fail to boot.

                      This issue is resolved in this release.

                    • PR 2015132: You might not be able to apply a Host Profile

                      You might not be able to apply a host profile, if the host profile contains a DNS configuration for a VMkernel network adapter that is associated to a vSphere Distributed Switch.

                      This issue is resolved in this release.

                    • PR 2011392: An ESXi host might fail with a purple diagnostic screen due to a deadlock in vSphere Distributed Switch health checks

                      If you enable VLAN and maximum transmission unit (MTU) checks in vSphere Distributed Switches, your ESXi hosts might fail with a purple diagnostic screen due to a possible deadlock.  

                      This issue is resolved in this release.

                    • PR 2004948: Update to the nhpsa driver

                      The nhpsa driver is updated to 2.0.22-1vmw.

                    • PR 1907025: An ESXi host might fail with an error Unexpected message type

                      An ESXi host might fail with a purple diagnostic screen due to invalid network input in the error message handling of vSphere Fault Tolerance, which results in an Unexpected message type error.

                      This issue is resolved in this release.

                    • PR 1962224: Virtual machines might fail with an error Unexpected FT failover due to PANIC

                      When you run vSphere Fault Tolerance in vSphere 6.5 and vSphere 6.0, virtual machines might fail with an error such as Unexpected FT failover due to PANIC error: VERIFY bora/vmcore/vmx/main/monitorAction.c:598. The error occurs when virtual machines use PVSCSI disks and multiple virtual machines using vSphere FT are running on the same host.

                      This issue is resolved in this release.

                    • PR 1988248: An ESXi host might fail with a purple diagnostic screen due to filesystem journal allocation issue

                      An ESXi host might fail with a purple diagnostic screen if the filesystem journal allocation fails while opening a volume due to space limitation or I/O timeout, and the volume opens as read only.

                      This issue is resolved in this release.

                    • PR 2024647: An ESXi host might fail with a purple diagnostic screen and an error message when you try to retrieve information about multicast group that contains more than 128 IP addresses

                      When you add more than 128 multicast IP addresses to an ESXi host and try to retrieve information about the multicast group, the ESXi host might fail with a purple diagnostic screen and an error message such as: PANIC bora/vmkernel/main/dlmalloc.c:4924 - Usage error in dlmalloc.

                      This issue is resolved in this release.

                    • PR 2039373: Hostd might fail due to a high rate of Map Disk Region tasks generated by a backup solution

                      Backup solutions might produce a high rate of Map Disk Region tasks, causing hostd to run out of memory and fail.

                      This issue is resolved in this release.

                    • PR 1972487: An ESXi host might fail with a purple diagnostic screen due to a rare race condition​

                      An ESXi host might fail with a purple diagnostic screen due to a rare race condition between file allocation paths and volume extension operations.

                      This issue is resolved in this release.

                    • PR 1913198: NFS 4.1 mounts might fail when using AUTH_SYS, KRB5 or KRB5i authentication

                      If you use AUTH_SYS, KRB5 or KRB5i authentication to mount NFS 4.1 shares, the mount might fail. This happens when the Generic Security Services type is not the same value for all the NFS requests generated during the mount session. Some hosts, which are less tolerant of such inconsistencies, fail the mount requests.

                      This issue is resolved in this release.

                    • PR 1882261: ESXi hosts might report wrong esxcli storage core device list data

                      For some devices, such as CD-ROM drives, whose maximum queue depth is less than the default value of the number of outstanding I/Os with competing worlds, the command esxcli storage core device list might show a maximum number of outstanding I/O requests that exceeds the maximum queue depth. With this fix, the value of No of outstanding IOs with competing worlds is capped at the maximum queue depth of the storage device on which you make the change.

                      This issue is resolved in this release.
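
                      For reference, the per-device value can be inspected and changed with the standard ESXCLI storage commands. The device identifier below is a placeholder, not a value from this release:

                        esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx
                        esxcli storage core device set -d naa.xxxxxxxxxxxxxxxx --sched-num-req-outstanding 32

                      With this fix, a value larger than the device's maximum queue depth is capped at that queue depth.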

                    • PR 1824576: ESXi hosts might lose connectivity during snapshot scalability testing

                      The Small-Footprint CIM Broker (SFCB) might fail during snapshot operations on large scale with a core dump file similar to sfcb-vmware_bas-zdump.XXX in /var/core/ and cause ESXi hosts to become unresponsive.

                      This issue is resolved in this release. 

                    • PR 2032048: An ESXi host might fail with an error due to invalid keyboard input

                      An ESXi host might fail with an error kbdmode_set:519: invalid keyboard mode 4: Not supported due to an invalid keyboard input during the shutdown process.

                      This issue is resolved in this release.

                    • PR 1937646: The sfcb-hhrc process might stop working, causing the hardware monitoring service on the ESXi host to become unavailable

                      The sfcb-hhrc process might stop working, causing the hardware monitoring service sfcbd on the ESXi host to become unavailable.

                      This issue is resolved in this release.

                    • PR 2016411: When you add an ESXi host to an Active Directory domain, hostd might become unresponsive and the ESXi host disconnects from vCenter Server

                      When you add an ESXi host to an Active Directory domain, hostd might become unresponsive due to Likewise services running out of memory and the ESXi host disconnects from the vCenter Server system.

                      This issue is resolved in this release.

                    • PR 1954364: Virtual machines with EFI firmware might fail to boot from Windows Deployment Services on Windows Server 2008

                      PXE boot of virtual machines with 64-bit Extensible Firmware Interface (EFI) might fail when booting from Windows Deployment Services on Windows Server 2008, because PXE boot discovery requests might contain an incorrect PXE architecture ID. This fix sets a default architecture ID in line with the processor architecture identifier for x64 UEFI that the Internet Engineering Task Force (IETF) stipulates. This release also adds a new advanced configuration option, efi.pxe.architectureID = <integer>, to facilitate backwards compatibility. For instance, if your environment is configured to use architecture ID 9, you can add efi.pxe.architectureID = 9 to the advanced configuration options of the virtual machine.

                      This issue is resolved in this release.
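
                      For example, the option described above can be added to the virtual machine's advanced configuration, either through the Configuration Parameters dialog or directly in the .vmx file as sketched below. The value 9 mirrors the example in the text and applies only if your environment expects architecture ID 9:

                        efi.pxe.architectureID = "9"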

                    • PR 1962739: Shared raw device mapping devices might go offline after a vMotion operation with MSCS clustered virtual machines

                      If a backend failover of a SAN storage controller takes place after a vMotion operation of MSCS clustered virtual machines, shared raw device mapping devices might go offline.

                      The issue is resolved in this release.

                    • PR 2046087: An ESXi host might fail with a purple diagnostic screen and NULL pointer value

                      An ESXi host might fail with a purple diagnostic screen in the process of collecting information for statistical purposes due to stale TCP connections.

                      This issue is resolved in this release.

                    • PR 1986020: ESXi VMFS5 and VMFS6 file systems might fail to expand

                      ESXi VMFS5 and VMFS6 file systems with certain volume sizes might fail to expand due to an issue with the logical volume manager (LVM). A VMFS6 volume fails to expand to a size greater than 16 TB.

                      This issue is resolved in this release.

                    • PR 1977715: ESXi hosts intermittently stop sending logs to remote syslog servers

                      The Syslog Service of ESXi hosts might stop transferring logs to a remote log server when the remote server is down, and might not resume transfers after the server is back up.

                      This issue is resolved in this release. 
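
                      For reference, on builds without this fix the Syslog Service can typically be prompted to re-establish its remote connections with the standard reload command; this is a general-purpose command, not a workaround documented specifically for this fix:

                        esxcli system syslog reload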

                    • PR 1998437: SATP ALUA might not properly handle SCSI sense code 0x2 0x4 0xa

                      SATP ALUA might not properly handle SCSI sense code 0x2 0x4 0xa, which translates to the alert LOGICAL UNIT NOT ACCESSIBLE, ASYMMETRIC ACCESS STATE TRANSITION. This causes a premature all-paths-down state that might lead to VMFS heartbeat timeouts and ultimately to performance drops. With this fix, you can use the advanced configuration option /Nmp/NmpSatpAluaCmdRetryTime to increase the timeout value to 50 seconds.

                      This issue is resolved in this release.
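
                      For example, the timeout can be raised with the generic advanced-settings command; the option path follows the name given above and should be confirmed on your build:

                        esxcli system settings advanced set -o /Nmp/NmpSatpAluaCmdRetryTime -i 50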

                    • PR 2058697: EtherSwitch service modules might run out of memory when configuring an L3 mirror session

                      EtherSwitch service modules might run out of memory when you configure an L3 mirror session with Encapsulated Remote Switched Port Analyzer (ERSPAN) type II or type III.

                      This issue is resolved in this release.

                    • PR 1996632: ESXi hosts might intermittently lose connectivity if configured to use an Active Directory domain with a large number of accounts

                      If an ESXi host is configured to use an Active Directory domain with more than 10,000 accounts, the host might intermittently lose connectivity.

                      This issue is resolved in this release.

                    • PR 1995640: An ESXi host might fail with a purple diagnostic screen with PCPU locked error

                      An ESXi host might fail with a purple diagnostic screen with an error for locked PCPUs, caused by a spin count fault.

                      You might see a log similar to this:

                      2017-08-05T21:51:50.370Z cpu4:134108)WARNING: Hbr: 873: Failed to receive from 172.16.30.42 (groupID=GID-d6bb98fb-ace7-4793-9ee5-30545518a5a3): Broken pipe

                      2017-08-05T21:56:55.398Z cpu4:134108)WARNING: Hbr: 873: Failed to receive from 172.16.30.42 (groupID=GID-d6bb98fb-ace7-4793-9ee5-30545518a5a3): Broken pipe

                      2017-08-05T22:02:00.418Z cpu4:134108)WARNING: Hbr: 873: Failed to receive from 172.16.30.42 (groupID=GID-d6bb98fb-ace7-4793-9ee5-30545518a5a3): Broken pipe

                      2017-08-05T22:07:05.438Z cpu4:134108)WARNING: Hbr: 873: Failed to receive from 172.16.30.42 (groupID=GID-d6bb98fb-ace7-4793-9ee5-30545518a5a3): Broken pipe

                      2017-08-05T22:12:10.453Z cpu2:134108)WARNING: Hbr: 873: Failed to receive from 172.16.30.42 (groupID=GID-d6bb98fb-ace7-4793-9ee5-30545518a5a3): Broken pipe

                      2017-08-05T22:17:15.472Z cpu2:134108)WARNING: Hbr: 873: Failed to receive from 172.16.30.42 (groupID=GID-d6bb98fb-ace7-4793-9ee5-30545518a5a3): Broken pipe

                      2017-08-05T16:32:08.539Z cpu67:67618 opID=da84f05c)FSS: 6229: Conflict between buffered and unbuffered open (file 'DC_UAT_NETSERVER-000001-sesparse.vmdk'):flags 0x4002, requested flags 0x8

                      2017-08-05T16:32:45.177Z cpu61:67618 opID=389fe17f)FSS: 6229: Conflict between buffered and unbuffered open (file 'DC_UAT_Way4SRV-000001-sesparse.vmdk'):flags 0x4002, requested flags 0x8

                      2017-08-05T16:33:16.529Z cpu66:68466 opID=db8b1ddc)FSS: 6229: Conflict between buffered and unbuffered open (file 'DC_UAT_FIleServer-000001-sesparse.vmdk'):flags 0x4002, requested flags 0x8

                      This issue is resolved in this release.

                    • PR 2053426: Virtual machines with more than 1TB of memory and PCI passthrough devices might fail to power on

                      The VMkernel module that manages PCI passthrough devices has a dedicated memory heap that cannot support virtual machines configured with more than 1 TB of memory and one or more PCI passthrough devices. Such virtual machines might fail to power on with a warning in the ESXi vmkernel log (/var/log/vmkernel.log) similar to WARNING: Heap: 3729: Heap mainHeap (2618200/31462232): Maximum allowed growth (28844032) too small for size (32673792). The issue also occurs when multiple virtual machines, each with less than 1 TB of memory and one or more PCI passthrough devices, are powered on concurrently and the sum of the memory configured across all virtual machines exceeds 1 TB.

                      This issue is resolved in this release.

                    • PR 1987996: Warning appears in vSAN health service Hardware compatibility check: Timed out in querying HCI info

                      When vSAN health service runs a Hardware compatibility check, the following warning message might appear: Timed out in querying HCI info. This warning is caused by slow retrieval of hardware information. The check times out after 20 seconds, which is less than the time required to retrieve information from the disks.

                      This issue is resolved in this release.

                    • PR 2048702: ImageConfigManager causes increased log activity in the syslog.log file

                      The syslog.log file is repeatedly populated with messages related to ImageConfigManager.

                      This issue is resolved in this release.

                    • PR 2051775: The Common Information Model interface of an ESXi host might stop working and fail hardware health monitoring

                      You might not be able to monitor the hardware health of an ESXi host through the CIM interface, because some processes in SFCB might stop working. This fix provides code enhancements to the sfcb-hhrc process, which manages third-party Host Hardware RAID Controller (HHRC) CIM providers and provides hardware health monitoring, and to the sfcb-vmware_base process.

                      This issue is resolved in this release.

                    • PR 2055746: HotAdd backup of a Windows virtual machine disk might fail if both the target virtual machine and the proxy virtual machine are enabled for Content Based Read Cache (CBRC)

                      HotAdd backup of a Windows virtual machine disk might fail if both the target virtual machine and the proxy virtual machine are CBRC-enabled, because the snapshot disk of the target virtual machine cannot be added to the backup proxy virtual machine.

                      This issue is resolved in this release.

                    • PR 2057918: VMkernel logs in VMware vSAN environment might be flooded with the message Unable to register file system

                      In vSAN environments, VMkernel logs that record activities related to virtual machines and ESXi might be flooded with the message Unable to register file system.

                      This issue is resolved in this release. 

                    • PR 2052300: Attempts to create a host profile might fail with an error due to a null value in iSCSI configuration

                      Attempts to create a host profile might fail with an error NoneType object has no attribute lower if you pass a null value parameter in the iSCSI configuration.

                      This issue is resolved in this release.

                    • PR 2057189: Packet loss across L2 VPN networks

                      You might see packet loss of up to 15% and poor application performance across an L2 VPN network if a sink port is configured on the vSphere Distributed Switch but no overlay network is available.

                      This issue is resolved in this release.

                    • PR 1911721: The esxtop utility reports incorrect statistics for DAVG/cmd and KAVG/cmd on VAAI-supported LUNs

                      The esxtop utility reports incorrect statistics for the average device latency per command (DAVG/cmd) and average ESXi VMkernel latency per command (KAVG/cmd) on VAAI-supported LUNs due to an incorrect calculation.

                      This issue is resolved in this release.

                    • PR 1678559: You might notice performance degradation after resizing a VMDK file on vSAN

                      Distributed Object Manager processing operations might take too much time when an object has concatenated components. This might cause delays in other operations, which leads to high I/O latency.

                      This issue is resolved in this release.

                    • PR 2064616: Applications running parallel file I/O operations might cause the ESXi host to stop responding

                      Applications running parallel file I/O operations on a VMFS datastore might cause the ESXi host to stop responding.

                      This issue is resolved in this release.

                    • PR 2034930: When you customize a Host Profile some parameter values might be lost

                      When you customize a Host Profile to attach it to a host, some parameter values, including host name and MAC address, might be lost.

                      This issue is resolved in this release.

                    • PR 2034625: An ESXi host might fail with a purple diagnostic screen due to faulty packets from third-party networking modules

                      If VMware vSphere Network I/O Control is not enabled, packets from third-party networking modules might cause an ESXi host to fail with a purple diagnostic screen.

                      This issue is resolved in this release.

                    • PR 1994373: You might fail to gather data to diagnose networking issues

                      With ESXi 6.5 Update 2, you can use PacketCapture, a lightweight tcpdump utility implemented in the Reverse HTTP Proxy service, to capture and store only the minimum amount of data to diagnose a networking issue, saving CPU and storage. For more information, see KB 52843.

                      This issue is resolved in this release.

                    • PR 1950800: VMkernel adapter fails to communicate through an Emulex physical NIC after an ESXi host boots

                      After an ESXi host boots, the VMkernel adapter that connects to virtual switches with an Emulex physical NIC might be unable to communicate through the NIC, and the VMkernel adapter might not get a DHCP IP address.

                      This issue is resolved in this release.

                    • PR 2060589: vSphere Web Client might display incorrect storage allocation after an increase of the disk size of a virtual machine

                      If you use vSphere Web Client to increase the disk size of a virtual machine, the vSphere Web Client might not fully reflect the reconfiguration and display the storage allocation of the virtual machine as unchanged.

                      This issue is resolved in this release.

                    • PR 2012602: An ESXi host might not have access to the Web Services for Management (WSMan) protocol due to failed CIM ticket authentication

                      An ESXi host might not have access to the WSMan protocol due to failing CIM ticket authentication.

                      This issue is resolved in this release.

                    • PR 2065000: Hostd might fail during backup

                      The ESXi daemon hostd might fail during backup due to a race condition.

                      This issue is resolved in this release.

                    • PR 2015082: Deleting files from a content library might cause the vCenter Server Agent to fail

                      Deleting files from a content library might cause a failure of the vCenter Server Agent (vpxa) service which results in failure of the delete operation.

                      This issue is resolved in this release.

                    • PR 1997995: Dell system management software might not display the BIOS attributes of ESXi hosts on Dell OEM servers

                      Dell system management software might not display the BIOS attributes of ESXi hosts on Dell OEM servers, because the ESXi kernel module fails to load automatically during ESXi boot. This is due to different vendor names in the SMBIOS System Information of Dell and Dell OEM servers.

                      This issue is resolved in this release.

                    • PR 1491595: Hostd might fail due to a memory leak

                      Under certain circumstances, the memory of hostd might exceed the hard limit and the agent might fail.

                      This issue is resolved in this release. 

                    • PR 2034772: An ESXi host might fail with a purple screen due to a rare race condition in iSCSI sessions

                      An ESXi host might fail with a purple diagnostic screen due to a rare race condition in iSCSI sessions while processing the scheduled queue and running queue during connection shutdown. The connection lock might be freed to terminate a task while the shutdown is still in progress, leaving an incorrect next node pointer that causes the failure.

                      This issue is resolved in this release.

                    • PR 205696: Log congestion leads to unresponsive hosts and inaccessible virtual machines

                      If a device in a disk group encounters a permanent disk loss, the physical log (PLOG) in the cache device might grow to its maximum threshold. This problem is due to faulty cleanup of data in the log that is bound to the lost disk. The log entries build up over time, leading to log congestion, which can cause virtual machines to become inaccessible.

                      This issue is resolved in this release. 

                    • PR 2054940: vSAN health service unable to read controller information

                      If you are using HP controllers, vSAN health service might be unable to read the controller firmware information. The health check might issue a warning, with the firmware version listed as N/A.

                      This issue is resolved in this release.

                    • PR 2034800: Ping failure in vSAN network health check when VMware vRealize Operations manages the vSAN cluster

                      When a vSAN cluster is managed by vRealize Operations, you might observe intermittent ping health check failures.

                      This issue is resolved in this release.

                    • PR 2034452: The vSAN health service might be slow displaying data for a large scale cluster after integration with vRealize Operations

                      After vRealize Operations integration with the vSAN Management Pack is enabled, the vSAN health service might become less responsive. This problem affects large scale clusters, such as those with 24 hosts, each with 10 disks. The vSAN health service uses an inefficient log filter for parsing some specific log patterns and can consume a large amount of CPU resources while communicating with vRealize Operations.

                      This issue is resolved in this release.

                    • PR 2023740: Host might fail with a purple diagnostic screen error if its own local configuration is added through esxcli vsan cluster unicastagent add

                      When you add the unicast configuration of cluster hosts, if the host's own local configuration is added, the host fails with a purple diagnostic screen error.

                      This issue is resolved in this release.
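
                      For reference, you can review the unicast agent entries already configured on a host before adding new ones, so that the host's own address is not added by mistake:

                        esxcli vsan cluster unicastagent list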

                    • PR 2033471: Very low IOPS on vSAN iSCSI target when outstanding I/O is 1

                      For each iSCSI target, the initiator establishes one or more connections depending on whether multipath is configured. For each connection, if there is only one outstanding I/O all the time and the network is very fast (ping time less than 1 ms), the IOPS on an iSCSI LUN might be much lower than on a native vSAN LUN.

                      This issue is resolved in this release.

                    • PR 2028736: A host in a vSAN cluster might fail with a page fault

                      When the instantiation of a vSAN object fails due to memory constraints, the host might experience a purple diagnostic screen error.

                      This issue is resolved in this release. 

                    • PR 2023858: The vSAN health service alarm might take 5 to 10 minutes to report hosts with a connection issue

                      After a host loses its connection with vCenter Server, and without a manual retest, the vSAN health service might take 5 to 10 minutes to report the loss of connection.

                      This issue is resolved in this release.

                    • PR 2021819: False alarm for cluster configuration consistency health check after enabling vSAN encryption

                      A false alarm is reported for the cluster configuration consistency health check when vSAN encryption is enabled with two or more KMS servers configured. The KMS certificates fetched from the host and from vCenter Server do not match due to improper ordering.

                      This issue is resolved in this release.

                    • PR 2059300: Host failure when upgrading to vCenter Server 6.5 Update 2

                      During an upgrade to vCenter Server 6.5 Update 2, when vSAN applies the unicast configuration of other hosts in the cluster, a local configuration might be added on the host, resulting in a host failure with a purple diagnostic screen.

                      This issue is resolved in this release.

                    • PR 2020615: vSphere vMotion operations for virtual machines with more than 64GB of vRAM might fail on vSAN

                      When you perform a vSphere vMotion operation on a virtual machine with more than 64 GB of memory (vRAM) on vSAN, the task might fail due to an error in swap file initialization on the destination.

                      This issue is resolved in this release.

                    • PR 2008663: After reboot, host is delayed from joining cluster running vSAN 6.6 and later

                      When there are many vSAN components on disks, during reboot it might take a long time to publish vSAN storage. The host is delayed from joining the vSAN cluster, so this host might be partitioned for a long time.

                      This issue is resolved in this release.

                    • PR 2023766: Large amounts of resync traffic for an object being repaired or placed in maintenance mode

                      When an object spans across all available fault domains, vSAN is unable to replace a component that is impacted by a maintenance mode operation or a repair operation. If this happens, vSAN rebuilds the entire object, which can result in large amounts of resync traffic.

                      This issue is resolved in this release.

                    • PR 2039896: Some operations fail and the host must be rebooted

                      When vSAN Observer is running, memory leaks can occur in the host init group. Other operations that run under the init group might fail until you reboot the host.

                      This issue is resolved in this release.

                    • PR 1999439: Slow performance and client latency due to excess DOM I/O activity

                      Under certain conditions, when I/Os from the Distributed Object Manager (DOM) fail and are retried, the cluster might experience high latencies and slow performance. You might see the following trace logs:
                      [33675821] [cpu66] [6bea734c OWNER writeWithBlkAttr5 VMDISK] DOMTraceOperationNeedsRetry:3630: {'op': 0x43a67f18c6c0, 'obj': 0x439e90cb0c00, 'objUuid': '79ec2259-fd98-ef85-0e1a-1402ec44eac0', 'status': 'VMK_STORAGE_RETRY_OPERATION'}

                      This issue is resolved in this release.

                    • PR 2061175: I/O performance of the input–output memory management unit (IOMMU) driver of AMD might be slow on ESX PCIe devices

                      I/O performance of ESX PCIe devices might drop when the IOMMU driver of AMD is enabled.

                      This issue is resolved in this release.

                    • PR 1982291: Virtual machines with brackets in their names might not deploy successfully

                      Virtual machines with brackets ([]) in their names might not deploy successfully. For more information, see KB 53648.

                      This issue is resolved in this release.

                    • PR 1950800: VMkernel adapter fails to communicate through an Emulex physical NIC after an ESXi host boots

                      After an ESXi host boots, the VMkernel adapter that connects to virtual switches with an Emulex physical NIC might be unable to communicate through the NIC, and the VMkernel adapter might not get a DHCP IP address.

                      Workaround: You must unplug the LAN cable and plug it back in.

                    • PR 1998108: An ESXi host might fail with a purple diagnostic screen and a Page Fault error message when you attach a USB device

                      An ESXi host might fail with a purple diagnostic screen and a Page Fault error message when you attach a USB device. This happens because some USB devices report an invalid number of interface descriptors to the host.

                      This issue is resolved in this release.

                    • PR 1948936: Third-party CIM providers or agents might cause memory corruptions that lead to destabilization of the kernel or the drivers of an ESXi host

                      Third-party CIM providers or agents might cause memory corruptions that destabilize the kernel or the drivers of an ESXi host, and the host might fail with a purple diagnostic screen. Memory corruptions become visible at a later point, not at the exact time of corruption.

                      This issue is resolved in this release.

                    • PR 2004950: Update to the VMware Advanced Host Controller Interface driver

                      The VMware Advanced Host Controller Interface (AHCI) driver is updated to version 1.1.1-1 to fix issues with memory allocation, the Marvell 9230 AHCI controller, and Intel Cannon Lake PCH-H SATA AHCI controller.

                    • PR 2004948: Update to the nhpsa driver

                      The nhpsa driver is updated to 2.0.22-1vmw.

                    • PR 2004949: Updates to the NVMe driver

                      ESXi 6.5 Update 2 adds management of multiple namespaces compatible with the NVMe 1.2 specification and enhanced diagnostic log.
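
                      A sketch of how namespace information can be inspected with the NVMe ESXCLI plug-in delivered in this release; the adapter name vmhba2 is a placeholder, and the exact sub-commands depend on the plug-in version:

                        esxcli nvme device list
                        esxcli nvme device namespace list -A vmhba2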

                    • PR 2004957: smartpqi driver for HPE Smart PQI controllers

                      ESXi 6.5 Update 2 enables smartpqi driver support for the HPE ProLiant Gen10 Smart Array Controller.

                    • PR 2004956: I/O devices enablement for Broadcom SAS 3.5 IT/IR controllers

                      ESXi 6.5 Update 2 enables a new native driver to support the Broadcom SAS 3.5 IT/IR controllers with devices including a combination of NVMe, SAS, and SATA drives. The HBA device IDs are: SAS3516(00AA), SAS3516_1(00AB), SAS3416(00AC), SAS3508(00AD), SAS3508_1(00AE), SAS3408(00AF), SAS3716(00D0), SAS3616(00D1), SAS3708(00D2).

                    • PR 2004947: Update to the lsi_msgpt3 driver

                      The lsi_msgpt3 driver is updated to version 16.00.01.00-1vmw.

                    • PR 2004946: Update to the lsi_mr3 driver to version MR 7.2​​

                      The lsi_mr3 driver is updated to enable support for Broadcom SAS 3.5 RAID controllers. It also contains significant enhancements to task management and critical bug fixes.

                    • PR 2004949: Updates to the NVMe driver

                      ESXi 6.5 Update 2 adds management of multiple namespaces compatible with the NVMe 1.2 specification and enhanced diagnostic log.

                    • PR 2015606: VMware ESXi native drivers might fail to support more than eight message signaled interrupts

                      ESXi native drivers might fail to support more than eight message signaled interrupts, even on devices with MSI-X, which allows a device to allocate up to 2048 interrupts. This issue is observed on newer AMD-based platforms and is due to default limits of the native drivers lsi_msgpt2 and lsi_msgpt3.

                      This issue is resolved in this release.

                    • PR 2036299: The ESXi native driver for HPE Smart Array controllers extends the list of expander devices to enable compatibility with HPE Gen9 configurations

                      The disk serviceability plug-in of the ESXi native driver for HPE Smart Array controllers, nhpsa, now works with an extended list of expander devices to enable compatibility with HPE Gen9 configurations.

                    • PR 2053073: Support for LightPulse and OneConnect adapters is split to separate default drivers

                      In ESXi 6.5 Update 2, LightPulse and OneConnect adapters are supported by separate default drivers. The brcmfcoe driver supports OneConnect adapters and the lpfc driver supports only LightPulse adapters. Previously, the lpfc driver supported both OneConnect and LightPulse adapters.

                    • PR 2053703: An ESXi host might fail with purple diagnostic screen when completing installation from a USB media drive on HP servers

                      When you press Enter to reboot an ESXi host and complete a new installation from a USB media drive on some HP servers, you might see an error on a purple diagnostic screen due to a rare race condition.

                      This issue is resolved in this release.

                    • PR 2051280: Some Single Root I/O Virtualization (SR-IOV) graphics configurations that are running ESXi 6.5 with enabled GPU Statistics might cause virtual machines to become unresponsive

                      Some SR-IOV graphics configurations running ESXi 6.5 with GPU statistics enabled might cause virtual machines on the ESXi host to become unresponsive after extended host uptime due to a memory leak. In some cases, the ESXi host might become unresponsive as well.

                      This issue is resolved in this release.

                    • The ESXCLI unmap command might reclaim only small file block resources in VMFS-6 datastores and not process large file block resources

                      In VMFS-6 datastores, if you use the esxcli storage vmfs unmap command for manual reclamation of free space, it might only reclaim space from small file blocks and not process large file block resources.

                      This issue is resolved in this release.
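
                      For reference, manual reclamation is invoked per datastore; the datastore label and reclaim unit below are placeholders:

                        esxcli storage vmfs unmap -l MyVMFS6Datastore
                        esxcli storage vmfs unmap -l MyVMFS6Datastore -n 200

                      The optional -n parameter sets the number of VMFS blocks to unmap per iteration.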

                    • PR 1950800: VMkernel adapter fails to communicate through an Emulex physical NIC after an ESXi host boots

                      After an ESXi host boots, the VMkernel adapter that connects to virtual switches with an Emulex physical NIC might be unable to communicate through the NIC, and the VMkernel adapter might not get a DHCP IP address.

                      Workaround: You must unplug the LAN cable and plug it back in.

                  ESXi-6.5.0-20180502001-no-tools
                  Profile Name ESXi-6.5.0-20180502001-no-tools
                  Build For build information, see  Patches Contained in this Release.
                  Vendor VMware, Inc.
                  Release Date May 3, 2018
                  Acceptance Level PartnerSupported
                  Affected Hardware N/A
                  Affected Software N/A
                  Affected VIBs
                  • VMware_bootbank_vsan_6.5.0-2.50.8064065
                  • VMware_bootbank_vsanhealth_6.5.0-2.50.8143339
                  • VMware_bootbank_esx-base_6.5.0-2.50.8294253
                  • VMware_bootbank_esx-tboot_6.5.0-2.50.8294253
                  • VMW_bootbank_qedentv_2.0.6.4-8vmw.650.2.50.8294253
                  • VMW_bootbank_vmkusb_0.1-1vmw.650.2.50.8294253
                  • VMW_bootbank_ipmi-ipmi-devintf_39.1-5vmw.650.2.50.8294253
                  • VMW_bootbank_ipmi-ipmi-msghandler_39.1-5vmw.650.2.50.8294253
                  • VMW_bootbank_vmw-ahci_1.1.1-1vmw.650.2.50.8294253
                  • VMW_bootbank_nhpsa_2.0.22-3vmw.650.2.50.8294253
                  • VMW_bootbank_nvme_1.2.1.34-1vmw.650.2.50.8294253
                  • VMW_bootbank_smartpqi_1.0.1.553-10vmw.650.2.50.8294253
                  • VMW_bootbank_lsi-msgpt35_03.00.01.00-9vmw.650.2.50.8294253
                  • VMW_bootbank_lsi-msgpt3_16.00.01.00-1vmw.650.2.50.8294253
                  • VMW_bootbank_lsi-mr3_7.702.13.00-3vmw.650.2.50.8294253
                  • VMW_bootbank_bnxtnet_20.6.101.7-11vmw.650.2.50.8294253
                  • VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.32-2.50.8294253
                  • VMW_bootbank_i40en_1.3.1-19vmw.650.2.50.8294253
                  • VMW_bootbank_misc-drivers_6.5.0-2.50.8294253
                  • VMW_bootbank_lsi-msgpt2_20.00.01.00-4vmw.650.2.50.8294253
                  • VMW_bootbank_ne1000_0.8.3-7vmw.650.2.50.8294253
                  • VMW_bootbank_ixgben_1.4.1-12vmw.650.2.50.8294253
                  • VMware_bootbank_lsu-hp-hpsa-plugin_2.0.0-6vmw.650.2.50.8294253
                  • VMW_bootbank_lpfc_11.4.33.1-6vmw.650.2.50.8294253
                  • VMW_bootbank_brcmfcoe_11.4.1078.0-8vmw.650.2.50.8294253
                  • VMW_bootbank_usbcore-usb_1.0-3vmw.650.2.50.8294253
                  • VMware_bootbank_esx-xserver_6.5.0-2.50.8294253
                  • VMware_locker_tools-light_6.5.0-1.47.8285314
                  PRs Fixed 1882261, 1907025, 1911721, 1913198, 1913483, 1919300, 1923173, 1937646, 1947334, 1950336, 1950800, 1954364, 1958600, 1962224, 1962739, 1971168, 1972487, 1975787, 1977715, 1982291, 1983123, 1986020, 1987996, 1988248, 1995640, 1996632, 1997995, 1998437, 1999439, 2004751, 2006929, 2007700, 2008663, 2011392, 2012602, 2015132, 2016411, 2016413, 2018046, 2020615, 2020785, 2021819, 2023740, 2023766, 2023858, 2024647, 2028736, 2029476, 2032048, 2033471, 2034452, 2034625, 2034772, 2034800, 2034930, 2039373, 2039896, 2046087, 2048702, 2051775, 2052300, 2053426, 2054940, 2055746, 2057189, 2057918, 2058697, 2059300, 2060589, 2061175, 2064616, 2065000, 1998108, 1948936, 2004950, 2004948, 2004949, 2004957, 2004956, 2004947, 1491595, 2004946, 2004949, 2015606, 2036299, 2053073, 2053073, 2053703, 2051280, 1994373
                  Related CVE numbers N/A
                  • This patch updates the following issues:
                    • PR 1913483: Third-party storage array user interfaces might not properly display relationships between virtual machine disks and virtual machines

                      Third-party storage array user interfaces might not properly display relationships between virtual machine disks and virtual machines when one disk is attached to many virtual machines, due to insufficient metadata.

                      This issue is resolved in this release.

                    • PR 1975787: An ESXi support bundle generated by using the VMware Host Client might not contain all log files

                      Sometimes, when you generate a support bundle for an ESXi host by using the Host Client, the support bundle might not contain all log files.

                      This issue is resolved in this release.

                    • PR 1947334: Attempts to modify the vSAN iSCSI parameter VSAN-iSCSI.ctlWorkerThreads fail

                      You cannot modify the vSAN iSCSI parameter VSAN-iSCSI.ctlWorkerThreads, because it is designed as an internal parameter and is not subject to change. This fix hides internal vSAN iSCSI parameters to avoid confusion.

                      This issue is resolved in this release.

                    • PR 1919300: Host Profile compliance check might show Unknown status if a profile is imported from a different vCenter Server system

                      If you export a Host Profile from one vCenter Server system and import it to another vCenter Server system, compliance checks might show a status Unknown.

                      This issue is resolved in this release.

                    • PR 2007700: The advanced setting VMFS.UnresolvedVolumeLiveCheck might be inactive in the vSphere Web Client

                      In the vSphere Web Client, the option to enable or disable checks during unresolved volume queries with the VMFS.UnresolvedVolumeLiveCheck option, under Advanced Settings > VMFS, might be grayed out.

                      This issue is resolved in this release. You can also work around it by using the vSphere Client or by setting the option directly on the ESXi host.

                    • PR 1983123: ESXi hosts with scratch partitions located on a vSAN datastore might intermittently become unresponsive

                      If the scratch partition of an ESXi host is located on a vSAN datastore and the datastore becomes temporarily inaccessible, userworld daemons such as hostd might block while trying to access the datastore at the same time, and the host might become unresponsive. With this fix, scratch partitions on a vSAN datastore can no longer be configured through the API or the user interface.

                      This issue is resolved in this release.

                    • PR 1950336: Hostd might stop responding in attempt to reconfigure a virtual machine

                      Hostd might stop responding when you attempt to reconfigure a virtual machine. The issue is caused by configuration entries with an unset value in the ConfigSpec object, which encapsulates the configuration settings for creating and configuring virtual machines.

                      This issue is resolved in this release.

                    • PR 2006929: VMware vSphere vMotion might fail due to timeout and inaccessible virtual machines

                      vSphere vMotion might fail due to a timeout, because virtual machines might need a larger buffer to store the shared memory page details used by 3D or graphics devices. The issue is observed mainly on virtual machines running graphics-intensive loads or working in virtual desktop infrastructure environments. With this fix, the buffer size is increased by four times for 3D virtual machines, and if the increased capacity is still not enough, the virtual machine is powered off.

                      This issue is resolved in this release.

                    • PR 1958600: The netdump IP address might be lost after upgrading from ESXi 5.5 Update 3b to ESXi 6.0 Update 3 and/or ESXi 6.5 Update 2

                      The netdump IP address might be lost after your upgrade from ESXi 5.5 Update 3b to ESXi 6.0 Update 3 and/or ESXi 6.5 Update 2.

                      This issue is resolved in this release.

                    • PR 1971168: ESXi system identification information in the vSphere Web Client might not be available

                      System identification information consists of asset tags, service tags, and OEM strings. In earlier releases, this information comes from the Common Information Model (CIM) service. Starting with vSphere 6.5, the CIM service is turned off by default unless third-party CIM providers are installed, which prevents the ESXi system identification information from displaying in the vSphere Web Client.

                      This issue is resolved in this release.

                    • PR 2020785: Virtual machines created by the VMware Integrated OpenStack might not work properly

                      When you create virtual machine disks with the VMware Integrated OpenStack block storage service (Cinder), the disks get random UUIDs. In around 0.5% of the cases, the virtual machines do not recognize these UUIDs, which results in unexpected behavior of the virtual machines.

                      This issue is resolved in this release.

                    • PR 2004751: A virtual machine might fail to boot after an upgrade to macOS 10.13

                      When you upgrade a virtual machine to macOS 10.13, the virtual machine might fail to locate a bootable operating system. If the macOS 10.13 installer determines that the system volume is located on a Solid State Disk (SSD), during the OS upgrade, the installer converts the Hierarchical File System Extended (HFS+/HFSX) to the new Apple File System (APFS) format. As a result, the virtual machine might fail to boot.

                      This issue is resolved in this release.

                    • PR 2015132: You might not be able to apply a Host Profile

                      You might not be able to apply a host profile, if the host profile contains a DNS configuration for a VMkernel network adapter that is associated to a vSphere Distributed Switch.

                      This issue is resolved in this release.

                    • PR 2011392: An ESXi host might fail with a purple diagnostic screen due to a deadlock in vSphere Distributed Switch health checks

                      If you enable VLAN and maximum transmission unit (MTU) checks in vSphere Distributed Switches, your ESXi hosts might fail with a purple diagnostic screen due to a possible deadlock.  

                      This issue is resolved in this release.

                    • PR 2004948: Update to the nhpsa driver

                      The nhpsa driver is updated to 2.0.22-1vmw.

                    • PR 1907025: An ESXi host might fail with an error Unexpected message type

                      An ESXi host might fail with a purple diagnostic screen due to invalid network input in the error message handling of vSphere Fault Tolerance, which results in an Unexpected message type error.

                      This issue is resolved in this release.

                    • PR 1962224: Virtual machines might fail with an error Unexpected FT failover due to PANIC

                      When you run vSphere Fault Tolerance in vSphere 6.5 and vSphere 6.0, virtual machines might fail with an error such as Unexpected FT failover due to PANIC error: VERIFY bora/vmcore/vmx/main/monitorAction.c:598. The error occurs when virtual machines use PVSCSI disks and multiple virtual machines using vSphere FT are running on the same host.

                      This issue is resolved in this release.

                    • PR 1988248: An ESXi host might fail with a purple diagnostic screen due to filesystem journal allocation issue

                      An ESXi host might fail with a purple diagnostic screen if the filesystem journal allocation fails while opening a volume due to space limitation or I/O timeout, and the volume opens as read only.

                      This issue is resolved in this release.

                    • PR 2024647: An ESXi host might fail with a purple diagnostic screen and an error message when you try to retrieve information about multicast group that contains more than 128 IP addresses

                      When you add more than 128 multicast IP addresses to an ESXi host and try to retrieve information about the multicast group, the ESXi host might fail with a purple diagnostic screen and an error message such as: PANIC bora/vmkernel/main/dlmalloc.c:4924 - Usage error in dlmalloc.

                      This issue is resolved in this release.

                    • PR 2039373: Hostd might fail due to a high rate of Map Disk Region tasks generated by a backup solution

                      Backup solutions might produce a high rate of Map Disk Region tasks, causing hostd to run out of memory and fail.

                      This issue is resolved in this release.

                    • PR 1972487: An ESXi host might fail with a purple diagnostic screen due to a rare race condition​

                      An ESXi host might fail with a purple diagnostic screen due to a rare race condition between file allocation paths and volume extension operations.

                      This issue is resolved in this release.

                    • PR 1913198: NFS 4.1 mounts might fail when using AUTH_SYS, KRB5 or KRB5i authentication

                      If you use AUTH_SYS, KRB5, or KRB5i authentication to mount NFS 4.1 shares, the mount might fail. This happens when the Generic Security Services type does not have the same value for all NFS requests generated during the mount session. Some hosts that are less tolerant of such inconsistencies fail the mount requests.

                      This issue is resolved in this release.

                    • PR 1882261: ESXi hosts might report wrong esxcli storage core device list data

                      For some devices, such as CD-ROM drives, where the maximum queue depth is less than the default value of the No of outstanding IOs with competing worlds setting, the esxcli storage core device list command might show the maximum number of outstanding I/O requests exceeding the maximum queue depth. With this fix, the maximum value of No of outstanding IOs with competing worlds is restricted to the maximum queue depth of the storage device on which you make the change.

                      This issue is resolved in this release.
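
                      For reference, you can inspect the device maximum queue depth and adjust the per-device setting with ESXCLI. The following is a minimal sketch; the device identifier naa.xxxxxxxxxxxxxxxx is a placeholder:

                      # Show device attributes, including Device Max Queue Depth and
                      # No of outstanding IOs with competing worlds
                      esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx

                      # Change the number of outstanding I/Os with competing worlds; with this fix,
                      # the value is capped at the maximum queue depth of the device
                      esxcli storage core device set -d naa.xxxxxxxxxxxxxxxx --sched-num-req-outstanding 32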

                    • PR 1824576: ESXi hosts might lose connectivity during snapshot scalability testing

                      The Small-Footprint CIM Broker (SFCB) might fail during snapshot operations on large scale with a core dump file similar to sfcb-vmware_bas-zdump.XXX in /var/core/ and cause ESXi hosts to become unresponsive.

                      This issue is resolved in this release. 

                    • PR 2032048: An ESXi host might fail with an error due to invalid keyboard input

                      An ESXi host might fail with an error kbdmode_set:519: invalid keyboard mode 4: Not supported due to an invalid keyboard input during the shutdown process.

                      This issue is resolved in this release.

                    • PR 1937646: The sfcb-hhrc process might stop working, causing the hardware monitoring service on the ESXi host to become unavailable

                      The sfcb-hhrc process might stop working, causing the hardware monitoring service sfcbd on the ESXi host to become unavailable.

                      This issue is resolved in this release.

                    • PR 2016411: When you add an ESXi host to an Active Directory domain, hostd might become unresponsive and the ESXi host disconnects from vCenter Server

                      When you add an ESXi host to an Active Directory domain, hostd might become unresponsive due to Likewise services running out of memory and the ESXi host disconnects from the vCenter Server system.

                      This issue is resolved in this release.

                    • PR 1954364: Virtual machines with EFI firmware might fail to boot from Windows Deployment Services on Windows Server 2008

                      PXE boot of virtual machines with 64-bit Extensible Firmware Interface (EFI) might fail when booting from Windows Deployment Services on Windows Server 2008, because PXE boot discovery requests might contain an incorrect PXE architecture ID. This fix sets a default architecture ID in line with the processor architecture identifier for x64 UEFI that the Internet Engineering Task Force (IETF) stipulates. This release also adds a new advanced configuration option, efi.pxe.architectureID = <integer>, to facilitate backwards compatibility. For instance, if your environment is configured to use architecture ID 9, you can add efi.pxe.architectureID = 9 to the advanced configuration options of the virtual machine.

                      This issue is resolved in this release.
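
                      For example, to keep using the legacy architecture ID 9, add the parameter to the powered-off virtual machine through VM Options > Advanced > Configuration Parameters in the vSphere Web Client, or directly in the .vmx file. A minimal sketch of the resulting .vmx entry:

                      efi.pxe.architectureID = "9"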

                    • PR 1962739: Shared raw device mapping devices might go offline after a vMotion operation with MSCS clustered virtual machines

                      If a backend failover of a SAN storage controller takes place after a vMotion operation of MSCS clustered virtual machines, shared raw device mapping devices might go offline.

                      The issue is resolved in this release.

                    • PR 2046087: An ESXi host might fail with a purple diagnostic screen and NULL pointer value

                      An ESXi host might fail with a purple diagnostic screen in the process of collecting information for statistical purposes due to stale TCP connections.

                      This issue is resolved in this release.

                    • PR 1986020: ESXi VMFS5 and VMFS6 file systems might fail to expand

                      ESXi VMFS5 and VMFS6 file systems with certain volume sizes might fail to expand due to an issue with the logical volume manager (LVM). A VMFS6 volume fails to expand with a size greater than 16 TB.

                      This issue is resolved in this release.

                    • PR 1977715: ESXi hosts intermittently stop sending logs to remote syslog servers

                      The Syslog Service of ESXi hosts might stop transferring logs to remote log servers when a remote server goes down, and it does not resume transfers after the server comes back up.

                      This issue is resolved in this release. 
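
                      For reference, the remote log host configuration and a manual reload of the Syslog Service can be driven from the standard ESXCLI syslog namespace. This is a hedged sketch for inspecting the configuration, not a documented workaround for this issue:

                      # Show the current syslog configuration, including the remote log host
                      esxcli system syslog config get

                      # Reload the Syslog Service so it re-reads its configuration and reopens
                      # connections to the configured remote servers
                      esxcli system syslog reload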

                    • PR 1998437: SATP ALUA might not handle properly SCSI sense code 0x2 0x4 0xa

                      SATP ALUA might not properly handle SCSI sense code 0x2 0x4 0xa, which translates to the alert LOGICAL UNIT NOT ACCESSIBLE, ASYMMETRIC ACCESS STATE TRANSITION. This causes a premature all paths down state that might lead to VMFS heartbeat timeouts and ultimately to performance drops. With this fix, you can use the advanced configuration option /Nmp/NmpSatpAluaCmdRetryTime and increase the timeout value to 50 seconds.

                      This issue is resolved in this release.
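
                      A minimal sketch of setting the new option with ESXCLI, assuming the option is exposed under the key /Nmp/NmpSatpAluaCmdRetryTime on your build (confirm the exact key with the list command first):

                      # Confirm the exact option name and its current value
                      esxcli system settings advanced list -o /Nmp/NmpSatpAluaCmdRetryTime

                      # Increase the retry window for the ALUA transitioning sense code to 50 seconds
                      esxcli system settings advanced set -o /Nmp/NmpSatpAluaCmdRetryTime -i 50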

                    • PR 2058697: EtherSwitch service modules might run out of memory when configuring a L3 mirror session

                      EtherSwitch service modules might run out of memory when configuring a L3 mirror session with Encapsulated Remote Switched Port Analyzer (ERSPAN) type II or type III.

                      This issue is resolved in this release.

                    • PR 1996632: ESXi hosts might intermittently lose connectivity if configured to use an Active Directory domain with a large number of accounts

                      If an ESXi host is configured to use an Active Directory domain with more than 10,000 accounts, the host might intermittently lose connectivity.

                      This issue is resolved in this release.

                    • PR 1995640: An ESXi host might fail with a purple diagnostic screen with PCPU locked error

                      An ESXi host might fail with a purple diagnostic screen with an error for locked PCPUs, caused by a spin count fault.

                      You might see a log similar to this:

                      2017-08-05T21:51:50.370Z cpu4:134108)WARNING: Hbr: 873: Failed to receive from 172.16.30.42 (groupID=GID-d6bb98fb-ace7-4793-9ee5-30545518a5a3): Broken pipe

                      2017-08-05T21:56:55.398Z cpu4:134108)WARNING: Hbr: 873: Failed to receive from 172.16.30.42 (groupID=GID-d6bb98fb-ace7-4793-9ee5-30545518a5a3): Broken pipe

                      2017-08-05T22:02:00.418Z cpu4:134108)WARNING: Hbr: 873: Failed to receive from 172.16.30.42 (groupID=GID-d6bb98fb-ace7-4793-9ee5-30545518a5a3): Broken pipe

                      2017-08-05T22:07:05.438Z cpu4:134108)WARNING: Hbr: 873: Failed to receive from 172.16.30.42 (groupID=GID-d6bb98fb-ace7-4793-9ee5-30545518a5a3): Broken pipe

                      2017-08-05T22:12:10.453Z cpu2:134108)WARNING: Hbr: 873: Failed to receive from 172.16.30.42 (groupID=GID-d6bb98fb-ace7-4793-9ee5-30545518a5a3): Broken pipe

                      2017-08-05T22:17:15.472Z cpu2:134108)WARNING: Hbr: 873: Failed to receive from 172.16.30.42 (groupID=GID-d6bb98fb-ace7-4793-9ee5-30545518a5a3): Broken pipe

                      2017-08-05T16:32:08.539Z cpu67:67618 opID=da84f05c)FSS: 6229: Conflict between buffered and unbuffered open (file 'DC_UAT_NETSERVER-000001-sesparse.vmdk'):flags 0x4002, requested flags 0x8

                      2017-08-05T16:32:45.177Z cpu61:67618 opID=389fe17f)FSS: 6229: Conflict between buffered and unbuffered open (file 'DC_UAT_Way4SRV-000001-sesparse.vmdk'):flags 0x4002, requested flags 0x8

                      2017-08-05T16:33:16.529Z cpu66:68466 opID=db8b1ddc)FSS: 6229: Conflict between buffered and unbuffered open (file 'DC_UAT_FIleServer-000001-sesparse.vmdk'):flags 0x4002, requested flags 0x8

                      This issue is resolved in this release.

                    • PR 2053426: Virtual machines with more than 1TB of memory and PCI passthrough devices might fail to power on

                      The VMkernel module that manages PCI passthrough devices has a dedicated memory heap that cannot support virtual machines configured with more than 1 TB of memory and one or more PCI passthrough devices. Such virtual machines might fail to power on with a warning in the ESXi VMkernel log (/var/log/vmkernel.log) similar to WARNING: Heap: 3729: Heap mainHeap (2618200/31462232): Maximum allowed growth (28844032) too small for size (32673792). The issue also occurs when multiple virtual machines, each with less than 1 TB of memory and one or more PCI passthrough devices, are powered on concurrently and the total memory configured across all the virtual machines exceeds 1 TB.

                      This issue is resolved in this release.
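
                      On an unpatched host, you can confirm that this limit was hit by searching the VMkernel log for the warning quoted above; a minimal sketch:

                      # Search the live VMkernel log for the heap growth warning
                      grep "Maximum allowed growth" /var/log/vmkernel.log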

                    • PR 1987996: Warning appears in vSAN health service Hardware compatibility check: Timed out in querying HCI info

                      When vSAN health service runs a Hardware compatibility check, the following warning message might appear: Timed out in querying HCI info. This warning is caused by slow retrieval of hardware information. The check times out after 20 seconds, which is less than the time required to retrieve information from the disks.

                      This issue is resolved in this release.

                    • PR 2048702: ImageConfigManager causes increased log activity in the syslog.log file

                      The syslog.log file is repeatedly populated with messages related to ImageConfigManager.

                      This issue is resolved in this release.

                    • PR 2051775: The Common Information Model interface of an ESXi host might stop working and fail hardware health monitoring

                      You might not be able to monitor the hardware health of an ESXi host through the CIM interface, because some processes in SFCB might stop working. This fix provides code enhancements to the sfcb-hhrc and sfcb-vmware_base processes. The sfcb-hhrc process manages third-party Host Hardware RAID Controller (HHRC) CIM providers and provides hardware health monitoring, and the sfcb-vmware_base process hosts the base set of VMware CIM providers.

                      This issue is resolved in this release.

                    • PR 2055746: HotAdd backup of a Windows virtual machine disk might fail if both the target virtual machine and the proxy virtual machine are enabled for Content Based Read Cache (CBRC)

                      HotAdd backup of a Windows virtual machine disk might fail if both the target virtual machine and the proxy virtual machine are CBRC-enabled, because the snapshot disk of the target virtual machine cannot be added to the backup proxy virtual machine.

                      This issue is resolved in this release.

                    • PR 2057918: VMkernel logs in VMware vSAN environment might be flooded with the message Unable to register file system

                      In vSAN environments, VMkernel logs, which record activities related to virtual machines and ESXi, might be flooded with the message Unable to register file system.

                      This issue is resolved in this release. 

                    • PR 2052300: Attempts to create a host profile might fail with an error due to a null value in iSCSI configuration

                      Attempts to create a host profile might fail with an error NoneType object has no attribute lower if you pass a null value parameter in the iSCSI configuration.

                      This issue is resolved in this release.

                    • PR 2057189: Packet loss across L2 VPN networks

                      You might see packet loss of up to 15% and poor application performance across an L2 VPN network if a sink port is configured on the vSphere Distributed Switch but no overlay network is available.

                      This issue is resolved in this release.

                    • PR 1911721: The esxtop utility reports incorrect statistics for DAVG/cmd and KAVG/cmd on VAAI-supported LUNs

                      The esxtop utility reports incorrect statistics for the average device latency per command (DAVG/cmd) and average ESXi VMkernel latency per command (KAVG/cmd) on VAAI-supported LUNs due to an incorrect calculation.

                      This issue is resolved in this release.
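
                      For reference, these counters appear in the interactive disk device view of esxtop; a minimal sketch of checking them on a host:

                      # Start esxtop, then press 'u' to switch to the disk device view; the DAVG/cmd
                      # and KAVG/cmd columns show the per-command device and VMkernel latency
                      esxtop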

                    • PR 1678559: You might notice performance degradation after resizing a VMDK file on vSAN

                      Distributed Object Manager processing operations might take too much time when an object has concatenated components. This might cause delays in other operations, which leads to high I/O latency.

                      This issue is resolved in this release.

                    • PR 2064616: Applications running parallel file I/O operations might cause the ESXi host to stop responding

                      Applications running parallel file I/O operations on a VMFS datastore might cause the ESXi host to stop responding.

                      This issue is resolved in this release.

                    • PR 2034930: When you customize a Host Profile some parameter values might be lost

                      When you customize a Host Profile to attach it to a host, some parameter values, including host name and MAC address, might be lost.

                      This issue is resolved in this release.

                    • PR 2034625: An ESXi host might fail with a purple diagnostic screen due to faulty packets from third-party networking modules

                      If VMware vSphere Network I/O Control is not enabled, packets from third-party networking modules might cause an ESXi host to fail with a purple diagnostic screen.

                      This issue is resolved in this release.

                    • PR 1994373: You might fail to gather data to diagnose networking issues

                      With ESXi 6.5 Update 2, you can use PacketCapture, a lightweight tcpdump utility implemented in the Reverse HTTP Proxy service, to capture and store only the minimum amount of data to diagnose a networking issue, saving CPU and storage. For more information, see KB 52843.

                      This issue is resolved in this release.

                    • PR 1950800: VMkernel adapter fails to communicate through an Emulex physical NIC after an ESXi host boots

                      After an ESXi host boots, the VMkernel adapter that connects to virtual switches with an Emulex physical NIC might be unable to communicate through the NIC, and the VMkernel adapter might not get a DHCP IP address.

                      This issue is resolved in this release.

                    • PR 2060589: vSphere Web Client might display incorrect storage allocation after an increase of the disk size of a virtual machine

                      If you use vSphere Web Client to increase the disk size of a virtual machine, the vSphere Web Client might not fully reflect the reconfiguration and display the storage allocation of the virtual machine as unchanged.

                      This issue is resolved in this release.

                    • PR 2012602: An ESXi host might not have access to the Web Services for Management (WSMan) protocol due to failed CIM ticket authentication

                      An ESXi host might not have access to the WSMan protocol due to failing CIM ticket authentication.

                      This issue is resolved in this release.

                    • PR 2065000: Hostd might fail during backup

                      The ESXi daemon hostd might fail during backup due to a race condition.

                      This issue is resolved in this release.

                    • PR 2015082: Deleting files from a content library might cause the vCenter Server Agent to fail

                      Deleting files from a content library might cause a failure of the vCenter Server Agent (vpxa) service which results in failure of the delete operation.

                      This issue is resolved in this release.

                    • PR 1997995: Dell system management software might not display the BIOS attributes of ESXi hosts on Dell OEM servers

                      Dell system management software might not display the BIOS attributes of ESXi hosts on Dell OEM servers, because the ESXi kernel module fails to load automatically during ESXi boot. This is due to different vendor names in the SMBIOS System Information of Dell and Dell OEM servers.

                      This issue is resolved in this release.
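
                      For reference, you can check the SMBIOS platform information, including the vendor name, directly on the host; a minimal sketch:

                      # Display the SMBIOS platform information reported by the server firmware
                      esxcli hardware platform get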

                    • PR 1491595: Hostd might fail due to a memory leak

                      Under certain circumstances, the memory of hostd might exceed the hard limit and the agent might fail.

                      This issue is resolved in this release. 

                    • PR 2034772: An ESXi host might fail with a purple screen due to a rare race condition in iSCSI sessions

                      An ESXi host might fail with a purple diagnostic screen due to a rare race condition in iSCSI sessions while processing the scheduled queue and the running queue during connection shutdown. The connection lock might be freed to terminate a task while the shutdown is still in progress. The next node pointer then becomes incorrect and causes the failure.

                      This issue is resolved in this release.

                    • PR 205696: Log congestion leads to unresponsive hosts and inaccessible virtual machines

                      If a device in a disk group encounters a permanent disk loss, the physical log (PLOG) in the cache device might grow to its maximum threshold. This problem is due to faulty cleanup of log data bound to the lost disk. The log entries build up over time, leading to log congestion, which can cause virtual machines to become inaccessible.

                      This issue is resolved in this release. 

                    • PR 2054940: vSAN health service unable to read controller information

                      If you are using HP controllers, vSAN health service might be unable to read the controller firmware information. The health check might issue a warning, with the firmware version listed as N/A.

                      This issue is resolved in this release.

                    • PR 2034800: Ping failure in vSAN network health check when VMware vRealize Operations manages the vSAN cluster

                      When a vSAN cluster is managed by vRealize Operations, you might observe intermittent ping health check failures.

                      This issue is resolved in this release.

                    • PR 2034452: The vSAN health service might be slow displaying data for a large scale cluster after integration with vRealize Operations

                      After vRealize Operations integration with the vSAN Management Pack is enabled, the vSAN health service might become less responsive. This problem affects large scale clusters, such as those with 24 hosts, each with 10 disks. The vSAN health service uses an inefficient log filter for parsing some specific log patterns, and it can consume a large amount of CPU resources while communicating with vRealize Operations.

                      This issue is resolved in this release.

                    • PR 2023740: Host might fail with a purple diagnostic screen error if its own local configuration is added through esxcli vsan cluster unicastagent add

                      When adding the unicast configuration of cluster hosts, if the host's own local configuration is added, the host experiences a purple diagnostic screen error.

                      This issue is resolved in this release.
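
                      For reference, review the unicast agent entries already configured on a host before adding new ones, so that the host's own address is never added to its own configuration. The following is a minimal sketch; the IP address and host UUID are placeholders, and the flag spellings should be verified against the vSAN documentation for your build:

                      # Review the unicast agent entries configured on this host
                      esxcli vsan cluster unicastagent list

                      # Add a unicast agent entry for another cluster member (never for this host itself)
                      esxcli vsan cluster unicastagent add -t node -u 5xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx -U true -a 192.168.10.12 -p 12321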

                    • PR 2033471: Very low IOPS on vSAN iSCSI target when outstanding I/O is 1

                      For each iSCSI target, the initiator establishes one or more connections depending on whether multipathing is configured. For each connection, if there is only one outstanding I/O at a time and the network is very fast (ping time of less than 1 ms), the IOPS on an iSCSI LUN might be much lower than on a native vSAN LUN.

                      This issue is resolved in this release.

                    • PR 2028736: A host in a vSAN cluster might fail with a page fault

                      When the instantiation of a vSAN object fails due to memory constraints, the host might experience a purple diagnostic screen error.

                      This issue is resolved in this release. 

                    • PR 2023858: The vSAN health service alarm might be delayed 5 to 10 minutes in reporting hosts with connection issues

                      After a host loses its connection with vCenter Server, without a manual retest, there can be a 5 to 10 minute delay before the vSAN health service reports the loss of connection.

                      This issue is resolved in this release.

                    • PR 2021819: False alarm for cluster configuration consistency health check after enabling vSAN encryption

                      A false alarm is reported for the cluster configuration consistency health check when vSAN encryption is enabled with two or more KMS servers configured. The KMS certificates fetched from the host and vCenter Server do not match due to improper ordering.

                      This issue is resolved in this release.

                    • PR 2059300: Host failure when upgrading to vCenter Server 6.5 Update 2

                      During upgrade to vCenter Server 6.5 Update 2, when vSAN applies unicast configuration of other hosts in the cluster, a local configuration might be added on the host, resulting in a host failure with purple diagnostic screen.

                      This issue is resolved in this release.

                    • PR 2020615: vSphere vMotion operations for virtual machines with more than 64GB of vRAM might fail on vSAN

                      When you perform a vSphere vMotion operation on a virtual machine with more than 64 GB of memory (vRAM) on vSAN, the task might fail due to an error in swap file initialization on the destination.

                      This issue is resolved in this release.

                    • PR 2008663: After reboot, host is delayed from joining cluster running vSAN 6.6 and later

                      When there are many vSAN components on disks, during reboot it might take a long time to publish vSAN storage. The host is delayed from joining the vSAN cluster, so this host might be partitioned for a long time.

                      This issue is resolved in this release.

                    • PR 2023766: Large amounts of resync traffic for an object being repaired or placed in maintenance mode

                      When an object spans across all available fault domains, vSAN is unable to replace a component that is impacted by a maintenance mode operation or a repair operation. If this happens, vSAN rebuilds the entire object, which can result in large amounts of resync traffic.

                      This issue is resolved in this release.

                    • PR 2039896: Some operations fail and the host must be rebooted

                      When vSAN Observer is running, memory leaks can occur in the host init group. Other operations that run under the init group might fail until you reboot the host.

                      This issue is resolved in this release.

                    • PR 1999439: Slow performance and client latency due to excess DOM I/O activity

                      Under certain conditions, when I/Os from the Distributed Object Manager (DOM) fail and are retried, the cluster might experience high latencies and slow performance. You might see the following trace logs:
                      [33675821] [cpu66] [6bea734c OWNER writeWithBlkAttr5 VMDISK] DOMTraceOperationNeedsRetry:3630: {'op': 0x43a67f18c6c0, 'obj': 0x439e90cb0c00, 'objUuid': '79ec2259-fd98-ef85-0e1a-1402ec44eac0', 'status': 'VMK_STORAGE_RETRY_OPERATION'}

                      This issue is resolved in this release.

                    • PR 2061175: I/O performance of ESXi PCIe devices might be slow when the AMD input-output memory management unit (IOMMU) driver is enabled

                      I/O performance of ESXi PCIe devices might drop when the AMD IOMMU driver is enabled.

                      This issue is resolved in this release.

                    • PR 1982291: Virtual machines with brackets in their names might not deploy successfully

                      Virtual machines with brackets ([]) in their names might not deploy successfully. For more information, see KB 53648.

                      This issue is resolved in this release.

                    • PR 1950800: VMkernel adapter fails to communicate through an Emulex physical NIC after an ESXi host boots

                      After an ESXi host boots, the VMkernel adapter that connects to virtual switches with an Emulex physical NIC might be unable to communicate through the NIC, and the VMkernel adapter might not get a DHCP IP address.

                      Workaround: You must unplug the LAN cable and plug it again.

                    • PR 1998108: An ESXi host might fail with a purple diagnostic screen and a Page Fault error message when you attach a USB device

                      An ESXi host might fail with a purple diagnostic screen and a Page Fault error message when you attach a USB device. This happens because some USB devices report an invalid number of interface descriptors to the host.

                      This issue is resolved in this release.

                    • PR 1948936: Third-party CIM providers or agents might cause memory corruptions that lead to destabilization of the kernel or the drivers of an ESXi host

                      Third-party CIM providers or agents might cause memory corruptions that destabilize the kernel or the drivers of an ESXi host, and the host might fail with a purple diagnostic screen. Memory corruptions become visible at a later point, not at the exact time of corruption.

                      This issue is resolved in this release.

                    • PR 2004950: Update to the VMware Advanced Host Controller Interface driver

                      The VMware Advanced Host Controller Interface (AHCI) driver is updated to version 1.1.1-1 to fix issues with memory allocation, the Marvell 9230 AHCI controller, and Intel Cannon Lake PCH-H SATA AHCI controller.

                    • PR 2004948: Update to the nhpsa driver

                      The nhpsa driver is updated to 2.0.22-1vmw.

                    • PR 2004949: Updates to the NVMe driver

                      ESXi 6.5 Update 2 adds management of multiple namespaces compatible with the NVMe 1.2 specification and enhanced diagnostic log.

                    • PR 2004957: smartpqi driver for HPE Smart PQI controllers

                      ESXi 6.5 Update 2 enables smartpqi driver support for the HPE ProLiant Gen10 Smart Array Controller.

                    • PR 2004956: I/O devices enablement for Broadcom SAS 3.5 IT/IR controllers

                      ESXi 6.5 Update 2 enables a new native driver to support the Broadcom SAS 3.5 IT/IR controllers with devices including combination of NVMe, SAS, and SATA drives. The HBA device IDs are: SAS3516(00AA), SAS3516_1(00AB), SAS3416(00AC), SAS3508(00AD), SAS3508_1(00AE), SAS3408(00AF), SAS3716(00D0), SAS3616(00D1), SAS3708(00D2).

                    • PR 2004947: Update to the lsi_msgpt3 driver

                      The lsi_msgpt3 driver is updated to version 16.00.01.00-1vmw.

                    • PR 2004946: Update to the lsi_mr3 driver to version MR 7.2​​

                      The lsi_mr3 driver is updated to enable support for Broadcom SAS 3.5 RAID controllers. It also contains significant enhancements to task management and critical bug fixes.

                    • PR 2004949: Updates to the NVMe driver

                      ESXi 6.5 Update 2 adds management of multiple namespaces compatible with the NVMe 1.2 specification and enhanced diagnostic log.

                    • PR 2015606: VMware ESXi native drivers might fail to support more than eight message signaled interrupts

                      ESXi native drivers might fail to support more than eight message signaled interrupts, even on devices with MSI-X, which allows a device to allocate up to 2048 interrupts. This issue is observed on newer AMD-based platforms and is due to the default limits of the native drivers lsi_msgpt2 and lsi_msgpt3.

                      This issue is resolved in this release.

                    • PR 2036299: The ESXi native driver for HPE Smart Array controllers extends the list of expander devices to enable compatibility with HPE Gen9 configurations

                      The disk serviceability plug-in of the ESXi native driver for HPE Smart Array controllers, nhpsa, now works with an extended list of expander devices to enable compatibility with HPE Gen9 configurations.

                    • PR 2053073: Support for LightPulse and OneConnect adapters is split to separate default drivers

                      In ESXi 6.5 Update 2, LightPulse and OneConnect adapters are supported by separate default drivers. The brcmfcoe driver supports OneConnect adapters and the lpfc driver supports only LightPulse adapters. Previously, the lpfc driver supported both OneConnect and LightPulse adapters.

                    • PR 2053703: An ESXi host might fail with purple diagnostic screen when completing installation from a USB media drive on HP servers

                      When you press Enter to reboot an ESXi host and complete a new installation from a USB media drive on some HP servers, you might see an error on a purple diagnostic screen due to a rare race condition.

                      This issue is resolved in this release.

                    • PR 2051280: Some Single Root I/O Virtualization (SR-IOV) graphics configurations running ESXi 6.5 with GPU statistics enabled might cause virtual machines to become unresponsive

                      Some SR-IOV graphics configurations running ESXi 6.5 with GPU statistics enabled might cause virtual machines on the ESXi host to become unresponsive after extended host uptime due to a memory leak. In some cases, the ESXi host might become unresponsive as well.

                      This issue is resolved in this release.

                    • The ESXCLI unmap command might reclaim only small file block resources in VMFS-6 datastores and not process large file block resources

                      In VMFS-6 datastores, if you use the esxcli storage vmfs unmap command for manual reclamation of free space, it might only reclaim space from small file blocks and not process large file block resources.

                      This issue is resolved in this release.
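
                      For reference, manual space reclamation runs per datastore; a minimal sketch, where MyVMFS6Datastore is a placeholder volume label:

                      # Reclaim free space on a VMFS-6 datastore; with this fix, both small and
                      # large file block resources are processed
                      esxcli storage vmfs unmap --volume-label=MyVMFS6Datastore --reclaim-unit=200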

                    • PR 1950800: VMkernel adapter fails to communicate through an Emulex physical NIC after an ESXi host boots

                      After an ESXi host boots, the VMkernel adapter that connects to virtual switches with an Emulex physical NIC might be unable to communicate through the NIC, and the VMkernel adapter might not get a DHCP IP address.

                      Workaround: You must unplug the LAN cable and plug it again.

                  ESXi-6.5.0-20180501001s-standard
                  Profile Name: ESXi-6.5.0-20180501001s-standard
                  Build: For build information, see Patches Contained in this Release.
                  Vendor: VMware, Inc.
                  Release Date: May 3, 2018
                  Acceptance Level: PartnerSupported
                  Affected Hardware: N/A
                  Affected Software: N/A
                  Affected VIBs:
                  • VMware_bootbank_esx-base_6.5.0-1.47.8285314
                  • VMware_bootbank_vsan_6.5.0-1.47.7823008
                  • VMware_bootbank_esx-tboot_6.5.0-1.47.8285314
                  • VMware_bootbank_vsanhealth_6.5.0-1.47.7823009
                  • VMware_locker_tools-light_6.5.0-1.47.8285314
                  • VMware_bootbank_esx-ui_1.27.1-7909286
                  PRs Fixed: 1827139, 2011288, 1964481
                  Related CVE numbers: N/A
                    • PR 1827139: ESXi hosts might randomly fail due to errors in sfcbd or other daemon processes

                      This fix improves internal error handling to prevent ESXi host failures caused by errors in sfcbd or other daemon processes.

                      This issue is resolved in this release.

                    • PR 2011288: Update to glibc package to address security issues

                      The glibc package has been updated to address CVE-2017-15804, CVE-2017-15670, CVE-2016-4429, CVE-2015-8779, CVE-2015-8778, CVE-2014-9761 and CVE-2015-8776.

                    • Update of the SQLite database

                      The SQLite database is updated to version 3.20.1 to resolve a security issue with identifier CVE-2017-10989.

                  ESXi-6.5.0-20180501001s-no-tools
                  Profile Name: ESXi-6.5.0-20180501001s-no-tools
                  Build: For build information, see Patches Contained in this Release.
                  Vendor: VMware, Inc.
                  Release Date: May 3, 2018
                  Acceptance Level: PartnerSupported
                  Affected Hardware: N/A
                  Affected Software: N/A
                  Affected VIBs:
                  • VMware_bootbank_esx-base_6.5.0-1.47.8285314
                  • VMware_bootbank_vsan_6.5.0-1.47.7823008
                  • VMware_bootbank_esx-tboot_6.5.0-1.47.8285314
                  • VMware_bootbank_vsanhealth_6.5.0-1.47.7823009
                  • VMware_bootbank_esx-ui_1.27.1-7909286
                  PRs Fixed: 1827139, 2011288, 1964481
                  Related CVE numbers: N/A
                    • PR 1827139: ESXi hosts might randomly fail due to errors in sfcbd or other daemon processes

                      This fix improves internal error handling to prevent ESXi host failures caused by errors in sfcbd or other daemon processes.

                      This issue is resolved in this release.

                    • PR 2011288: Update to glibc package to address security issues

                      The glibc package has been updated to address CVE-2017-15804, CVE-2017-15670, CVE-2016-4429, CVE-2015-8779, CVE-2015-8778, CVE-2014-9761 and CVE-2015-8776.

                    • Update of the SQLite database

                      The SQLite database is updated to version 3.20.1 to resolve a security issue with identifier CVE-2017-10989.

                  Known Issues

                  The known issues are grouped as follows.

                  ESXi650-201804201-UG
                  • vSAN Capacity Overview shows incorrect file system overhead

                    If you have a newly-created vSAN cluster or a cluster storing a small amount of user data, the file system overhead appears to consume a large percentage of the Used Capacity Breakdown. The file system overhead is calculated as a constant, based on the cluster capacity, which is allocated upfront. Since the cluster has very little user data, this constant appears as a large percentage of the used capacity. Once the cluster begins storing user data, the utilization graph becomes more realistic and better reflects the percentages.

                    Workaround: None.

                  Known Issues from Earlier Releases

                  To view a list of previous known issues, click here.
