
Release Date: 12 OCT, 2021

What's in the Release Notes

The release notes cover the following topics:

Build Details

Download Filename: ESXi650-202110001.zip
Build: 18678235
Download Size: 484.5 MB
md5sum: 9bbf065773dfff41eca0605936c98cfc
sha256checksum: f403c5bf4ae81c61b3e12069c8cedaf297aa757cd977161643743bcd8e8f74dd
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
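Before installation, the download can be verified against the checksums above. A minimal sketch, assuming the ZIP file was downloaded to the current directory:

```shell
# Verify the patch download against the checksums published above.
# The file name and location are assumptions; adjust the path as needed.
ZIP=ESXi650-202110001.zip
MD5=9bbf065773dfff41eca0605936c98cfc
SHA256=f403c5bf4ae81c61b3e12069c8cedaf297aa757cd977161643743bcd8e8f74dd

if [ -f "$ZIP" ]; then
  # md5sum/sha256sum in check mode print "OK" per file and exit
  # non-zero on a mismatch.
  echo "$MD5  $ZIP"    | md5sum -c -
  echo "$SHA256  $ZIP" | sha256sum -c -
else
  echo "Download $ZIP first, then re-run this check."
fi
```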

Bulletins

Bulletin ID Category Severity
ESXi650-202110401-BG Bugfix Important
ESXi650-202110402-BG Bugfix Important
ESXi650-202110403-BG Bugfix Important
ESXi650-202110404-BG Bugfix Important
ESXi650-202110101-SG Security Important
ESXi650-202110102-SG Security Important
ESXi650-202110103-SG Security Important

Rollup Bulletin

This rollup bulletin contains the latest VIBs with all the fixes since the initial release of ESXi 6.5.

Bulletin ID Category Severity
ESXi650-202110001 Bugfix Important

IMPORTANT: For clusters using VMware vSAN, you must first upgrade the vCenter Server system. Upgrading only ESXi is not supported.
Before an upgrade, always verify compatible upgrade paths from earlier versions of ESXi, vCenter Server, and vSAN to the current version in the VMware Product Interoperability Matrix.

Image Profiles

VMware patch and update releases contain general and critical image profiles. Apply the general release image profile to receive new bug fixes.

Image Profile Name
ESXi-6.5.0-20211004001-standard
ESXi-6.5.0-20211004001-no-tools
ESXi-6.5.0-20211001001s-standard
ESXi-6.5.0-20211001001s-no-tools

For more information about the individual bulletins, see the Download Patches page and the Resolved Issues section.

Patch Download and Installation

The typical way to apply patches to ESXi hosts is by using the VMware vSphere Update Manager. For details, see About Installing and Administering VMware vSphere Update Manager.

You can also update ESXi hosts by manually downloading the patch ZIP file from VMware Customer Connect. Navigate to Products and Accounts > Product Patches. From the Select a Product drop-down menu, select ESXi (Embedded and Installable), and from the Select a Version drop-down menu, select 6.5.0. Install VIBs by using the esxcli software vib update command. Alternatively, you can update the system by using the image profile and the esxcli software profile update command. For more information, see vSphere Command-Line Interface Concepts and Examples and the vSphere Upgrade Guide.
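For reference, the esxcli commands mentioned above look roughly as follows when run on the ESXi host shell. This is a sketch, not a definitive procedure: the datastore path is an assumption, and the host must be in maintenance mode with VMs migrated or shut down, as noted in the build details.

```shell
# Place the host in maintenance mode before patching.
esxcli system maintenanceMode set --enable true

# Option 1: update individual VIBs from the downloaded depot ZIP
# (the /vmfs/volumes/datastore1 path is an example).
esxcli software vib update -d /vmfs/volumes/datastore1/ESXi650-202110001.zip

# Option 2: update by image profile. List the profiles contained in the
# depot first, then apply one of the names from the Image Profiles section.
esxcli software sources profile list -d /vmfs/volumes/datastore1/ESXi650-202110001.zip
esxcli software profile update -d /vmfs/volumes/datastore1/ESXi650-202110001.zip \
    -p ESXi-6.5.0-20211004001-standard

# This patch requires a host reboot. After the reboot, exit maintenance mode.
esxcli system maintenanceMode set --enable false
```

Note that esxcli expects an absolute path to the depot ZIP; a relative path fails with a "could not download" style error.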

Resolved Issues

The resolved issues are grouped as follows.

ESXi650-202110401-BG
Patch Category Bugfix
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMware_bootbank_esx-base_6.5.0-3.170.18678235
  • VMware_bootbank_vsan_6.5.0-3.170.18569621
  • VMware_bootbank_vsanhealth_6.5.0-3.170.18569625
  • VMware_bootbank_esx-tboot_6.5.0-3.170.18678235
PRs Fixed  2761135, 2685552, 2720188, 2793135, 2761972, 2755231, 2492991, 2710163, 2715132, 2701240, 2750814, 2635091, 2540214, 2749440, 2719215, 2603461, 2654781, 2603445, 2418383, 2796947, 2704429, 2765145, 2704412, 2731316, 2465245
Related CVE numbers N/A

This patch updates the esx-base, esx-tboot, vsan, and vsanhealth VIBs to resolve the following issues:

  • PR 2761135: If you import virtual machines with a virtual USB device to a vCenter Server system, the VMs might fail to power on

    If you import a VM with a virtual USB device that is not supported by vCenter Server, such as a virtual USB camera, to a vCenter Server system, the VM might fail to power on. The start of such virtual machines fails with an error message similar to: PANIC: Unexpected signal: 11. The issue affects mostly VM import operations from VMware Fusion or VMware Workstation systems, which support a wide range of virtual USB devices.

    This issue is resolved in this release. The fix ignores any unsupported virtual USB devices in virtual machines imported to a vCenter Server system.

  • PR 2685552: If you use large block sizes, I/O bandwidth might drop

    In some configurations, if you use block sizes higher than the supported max transfer length of the storage device, you might see a drop in the I/O bandwidth. The issue occurs due to buffer allocations in the I/O split layer in the storage stack that cause a lock contention.

    This issue is resolved in this release. The fix optimizes the I/O split layer to avoid new buffer allocations. However, the optimization depends on the buffers that the guest OS creates while issuing I/O and might not work in all cases.

  • PR 2720188: ESXi hosts intermittently fail with a BSVMM_Validate@vmkernel error in the backtrace

    A rare error condition in the VMkernel might cause ESXi hosts to fail when powering on a virtual machine with more than one virtual CPU. A BSVMM_Validate@vmkernel error in the backtrace indicates the problem.

    This issue is resolved in this release.

  • PR 2793135: The hostd service might fail and ESXi hosts lose connectivity to vCenter Server due to an empty or unset property of a Virtual Infrastructure Management (VIM) API array

    In rare cases, an empty or unset property of a VIM API data array might cause the hostd service to fail. As a result, ESXi hosts lose connectivity to vCenter Server and you must manually reconnect the hosts.

    This issue is resolved in this release.

  • PR 2761972: TPM 1.2 read operations might cause an ESXi host to fail with a purple diagnostic screen

    An issue with the TPM 1.2 character device might cause ESXi hosts to fail with a page fault (#PF) purple diagnostic screen.

    This issue is resolved in this release.

  • PR 2755231: An ESXi host might not discover devices configured with the VMW_SATP_INV plug-in

    In case of temporary connectivity issues, ESXi hosts might not discover devices configured with the VMW_SATP_INV plug-in after connectivity is restored, because SCSI commands that fail during the device discovery stage cause an out-of-memory condition for the plug-in.

    This issue is resolved in this release.

  • PR 2492991: You see multiple duplicate logs in the vSAN health service logs

    Due to an issue with clearing some profile variables, you might see multiple duplicate logs in the /var/log/vmware/vsan-health/vmware-vsan-health-service.log file. Such messages are very short and do not cause significant memory consumption.

    This issue is resolved in this release. 

  • PR 2710163: Claim rules must be manually added to ESXi for Fujitsu Eternus AB/HB series

    This fix sets Storage Array Type Plug-in (SATP) to VMW_SATP_ALUA, Path Selection Policy (PSP) to VMW_PSP_RR and Claim Options to tpgs_on as default for the Fujitsu Eternus AB/HB series.

    This issue is resolved in this release.
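On builds without this fix, an equivalent claim rule can be added by hand with the esxcli storage nmp satp rule add command. A minimal sketch, run on the ESXi host shell; the model string below is a placeholder, not the value the Eternus arrays actually report, so check the array documentation before using it:

```shell
# Add a default SATP/PSP claim rule for a storage array.
# VMW_SATP_ALUA, VMW_PSP_RR, and tpgs_on match the defaults this fix sets;
# the --model value is a placeholder and must be replaced with the model
# string your Eternus AB/HB array reports.
esxcli storage nmp satp rule add \
    --satp VMW_SATP_ALUA \
    --psp VMW_PSP_RR \
    --claim-option tpgs_on \
    --vendor FUJITSU \
    --model "ETERNUS_MODEL_PLACEHOLDER" \
    --description "Fujitsu Eternus AB/HB series"

# Confirm the rule was registered.
esxcli storage nmp satp rule list | grep FUJITSU
```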

  • PR 2715132: You cannot collect performance statistics by using the rvc.bat file

    If you try to collect performance statistics by using the rvc.bat file in a vCenter Server system of version later than 6.5 Update 3k, the operation fails due to a wrong library path.

    This issue is resolved in this release.

  • PR 2701240: Virtual machines might power off during an NFS server failover

    During an NFS server failover, the NFS client reclaims all open files. In rare cases, the reclaim operation fails and virtual machines power off, because the NFS server rejects failed requests.

    This issue is resolved in this release. The fix makes sure that reclaim requests succeed.

  • PR 2750814: Failure of some Intel CPUs to forward Debug Exception (#DB) traps might result in virtual machine triple fault

    In rare cases, some Intel CPUs might fail to forward #DB traps, and if a timer interrupt happens during a Windows system call, the virtual machine might triple fault. In the vmware.log file, you see an error such as msg.monitorEvent.tripleFault. This issue is intermittent; virtual machines that consistently triple fault, or that triple fault right after boot, are affected by a different issue.

    This issue is resolved in this release. The fix forwards all #DB traps from CPUs into the guest operating system, except when the DB trap comes from a debugger that is attached to the virtual machine.

  • PR 2635091: ESXi hosts fail with a purple diagnostic screen with a memory region error

    A checksum validation issue with newly created memory regions might cause ESXi hosts to fail with a purple diagnostic screen. In the backtrace, you see errors such as VMKernel: checksum BAD: 0xXXXX 0xXXXX. The issue is specific to the process of VMkernel module loading.

    This issue is resolved in this release. The fix adds a check to avoid verifying checksum for an uninitialized memory region.

  • PR 2540214: Taking a snapshot of a large virtual machine on a large VMFS6 datastore might take a long time

    Resource allocation for a delta disk to create a snapshot of a large virtual machine, with virtual disks equal to or exceeding 1TB, on large VMFS6 datastores of 30TB or more, might take significant time. As a result, the virtual machine might temporarily lose connectivity. The issue affects primarily VMFS6 filesystems.

    This issue is resolved in this release.

  • PR 2749440: The ESXi SNMP service intermittently stops working and ESXi hosts become unresponsive

    If your environment has IPv6 disabled, the ESXi SNMP agent might stop responding. As a result, ESXi hosts also become unresponsive. In the VMkernel logs, you see multiple messages such as Admission failure in path: snmpd/snmpd.3955682/uw.3955682.

    This issue is resolved in this release. As a workaround on earlier builds, manually restart the SNMP service or use a cron job to periodically restart the service.
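The workaround above can be sketched as follows on the ESXi host shell. The hourly schedule is an example, and note that entries appended to the root crontab do not survive a reboot unless also added to a local startup script:

```shell
# Manual one-off restart of the SNMP service.
/etc/init.d/snmpd restart

# Periodic restart: append a cron entry to the ESXi root crontab
# (every hour on the hour is an example schedule).
echo '0 * * * * /etc/init.d/snmpd restart' >> /var/spool/cron/crontabs/root

# Restart crond so it picks up the new entry.
kill "$(cat /var/run/crond.pid)"
/usr/lib/vmware/busybox/bin/crond
```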

  • PR 2719215: You see health alarms for sensor entity ID 44 after upgrading the firmware of HPE Gen10 servers

    After upgrading the firmware version on HPE Gen10 servers, you might see health alarms for the I/O Module 2 ALOM_Link_P2 and NIC_Link_02P2 sensors, related to the sensor entity ID 44.x. The alarms do not indicate an actual health issue, and you can ignore them irrespective of the firmware version.

    This issue is resolved in this release.

  • PR 2603461: In case of a non-UTF8 string in the name property of numeric sensors, the vpxa service fails

    The vpxa service fails in case of a non-UTF8 string in the name property of numeric sensors, and ESXi hosts disconnect from the vCenter Server system.

    This issue is resolved in this release.

  • PR 2465245: ESXi upgrades fail due to all-paths-down error of the vmkusb driver

    In certain environments, ESXi upgrades might fail due to an all-paths-down error of the vmkusb driver that prevents mounting images from external devices while running scripted installations. If you try to use a legacy driver, the paths might display as active, but the script still does not run.

    This issue is resolved in this release. The fix makes sure that no read or write operations run on LUNs backed by external drives with no media present until booting is complete.

  • PR 2654781: The advanced config option UserVars/HardwareHealthIgnoredSensors fails to ignore some sensors

    If you use the advanced config option UserVars/HardwareHealthIgnoredSensors to ignore sensors with consecutive entries in a numeric list, such as 0.52 and 0.53, the operation might fail to ignore some sensors. For example, if you run the command esxcfg-advcfg -s 52,53 /UserVars/HardwareHealthIgnoredSensors, only the sensor 0.53 might be ignored.

    This issue is resolved in this release.
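The advanced option above can be inspected and set with esxcfg-advcfg on the host; a short sketch using the same sensor IDs as the example in the issue description:

```shell
# Read back the current value to confirm which sensors are ignored.
esxcfg-advcfg -g /UserVars/HardwareHealthIgnoredSensors

# Set the option. On builds with this fix, both consecutive entries
# (0.52 and 0.53) take effect; on earlier builds only one might.
esxcfg-advcfg -s 52,53 /UserVars/HardwareHealthIgnoredSensors
```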

  • PR 2603445: The hostd service fails due to an invalid UTF8 string for numeric sensor base-unit property

    If the getBaseUnitString function returns a non-UTF8 string for the max value of a unit description array, the hostd service fails with a core dump. You see an error such as: [Hostd-zdump] 0x00000083fb085957 in Vmacore::PanicExit (msg=msg@entry=0x840b57eee0 "Validation failure").

    This issue is resolved in this release.

  • PR 2418383: You see Sensor -1 type hardware health alarms on ESXi hosts and receive excessive mail alerts

    After upgrading to ESXi 6.5 Update 3, you might see Sensor -1 type hardware health alarms on ESXi hosts being triggered without an actual problem. This can result in excessive email alerts if you have configured email notifications for hardware sensor state alarms in your vCenter Server system. These mails might cause storage issues in the vCenter Server database if the Stats, Events, Alarms and Tasks (SEAT) directory goes above the 95% threshold.

    This issue is resolved in this release.

  • PR 2796947: Disabling the SLP service might cause failures during operations with host profiles

    When the SLP service is disabled to prevent potential security vulnerabilities, the sfcbd-watchdog service might remain enabled and cause compliance check failures when you perform updates by using a host profile.

    This issue is resolved in this release. 

  • PR 2704429: The sfcb service fails and you see multiple core dumps during a reset of CIM providers

    During a reset of some CIM providers, the sfcb service might make a shutdown call to close child processes that are already closed. As a result, the sfcb service fails and you see multiple core dumps during the reset operation.

    This issue is resolved in this release. The fix enhances tracking of sfcb service child processes during shutdown.

  • PR 2765145: vSphere Client might not meet all current browser security standards

    In certain environments, vSphere Client might not fully protect the underlying websites through modern browser security mechanisms.

    This issue is resolved in this release. 

  • PR 2704412: You do not see vCenter Server alerts in the vSphere Client and the vSphere Web Client

    If you use the /bin/services.sh restart command to restart vCenter Server management services, the vobd daemon, which is responsible for sending ESXi host events to vCenter Server, might not restart. As a result, you do not see alerts in the vSphere Client and the vSphere Web Client.

    This issue is resolved in this release. The fix makes sure that the vobd daemon is not left shut down when you use the /bin/services.sh restart command.
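On builds without this fix, the daemon state can be checked and corrected by hand after a services restart. A minimal sketch, run on the ESXi host shell:

```shell
# Check whether the vobd daemon survived the services restart.
/etc/init.d/vobd status

# If it is stopped, start it so that host events reach vCenter Server
# and alerts appear again in the vSphere Client.
/etc/init.d/vobd start
```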

  • PR 2731316: Virtual machines become unresponsive after passing through a USB device

    Some devices, such as USB devices, might add invalid bytes to the configuration descriptor that ESXi reads, and cause virtual machines that pass through such devices to become unresponsive.

    This issue is resolved in this release.

ESXi650-202110402-BG
Patch Category Bugfix
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMW_bootbank_vmkusb_0.1-1vmw.650.3.170.18678235
PRs Fixed  2688189
Related CVE numbers N/A

This patch updates the vmkusb VIB to resolve the following issue:

  • PR 2688189: Guest OS of virtual machines cannot recognize some USB passthrough devices

    In rare cases, the guest OS of virtual machines might not recognize some USB passthrough devices, such as ExcelSecu USB devices, and you cannot use the VMs.

    This issue is resolved in this release. The fix makes sure that USB control transfers larger than 1 KB are not split by using link TRBs (transfer request blocks), so the subsequent data-stage TRB does not fail.

ESXi650-202110403-BG
Patch Category Bugfix
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMW_bootbank_brcmfcoe_11.4.1078.26-14vmw.650.3.170.18678235
PRs Fixed  2775374
Related CVE numbers N/A

This patch updates the brcmfcoe VIB to resolve the following issue:

  • PR 2775374: ESXi hosts might lose connectivity after brcmfcoe driver upgrade on Hitachi storage arrays

    After an upgrade of the brcmfcoe driver on Hitachi storage arrays, ESXi hosts might fail to boot and lose connectivity.

    This issue is resolved in this release.

ESXi650-202110404-BG
Patch Category Bugfix
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMW_bootbank_nvme_1.2.2.28-5vmw.650.3.170.18678235
PRs Fixed  2805346
Related CVE numbers N/A

This patch updates the nvme VIB to resolve the following issue:

  • PR 2805346: ESXi hosts with a NVMe device might fail with a purple diagnostic screen while exporting a log bundle

    When the ESXi NVMe driver receives an esxcli nvme device register get request, it copies chunks of 8 bytes at a time from NVMe controller registers to user space. The width of an NVMe controller register can be 4 bytes or 8 bytes. When the driver accesses 8 bytes on a 4-byte register, some NVMe devices might report critical errors, which cause ESXi hosts to fail with a purple diagnostic screen. Since the vm-support command automatically calls esxcli nvme device register get on NVMe controllers to dump device information, the issue might also occur while collecting a vm-support log bundle.

    This issue is resolved in this release.

ESXi650-202110101-SG
Patch Category Security
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMware_bootbank_esx-base_6.5.0-3.166.18677441
  • VMware_bootbank_vsanhealth_6.5.0-3.166.18596722
  • VMware_bootbank_vsan_6.5.0-3.166.18596718
  • VMware_bootbank_esx-tboot_6.5.0-3.166.18677441
PRs Fixed  2761972, 2765145, 2801745, 2801963, 2819291, 2819327, 2819329, 2819366, 2821179, 2821577, 2825672, 2825674, 2831766
Related CVE numbers CVE-2018-6485, CVE-2021-35942, CVE-2018-11236, CVE-2019-9169, CVE-2017-1000366, CVE-2016-10196, CVE-2016-10197

This patch updates the esx-base, esx-tboot, vsan, and vsanhealth VIBs to resolve the following issues:

    • The ESXi userworld libcurl library is updated to version 7.78.0.
    • The glibc package is updated to address CVE-2018-6485, CVE-2021-35942, CVE-2018-11236, CVE-2019-9169, CVE-2017-1000366.
    • The ESXi userworld OpenSSL library is updated to version openssl-1.0.2za.
    • The jansson library is updated to version 2.10.
    • The libarchive library is updated to version 3.3.1.
    • The Libxml2 library is updated to version 2.9.12.
    • The Sqlite library is updated to version 3.34.1.
    • The Libevent library is updated to address CVE-2016-10196 and CVE-2016-10197.

ESXi650-202110102-SG
Patch Category Security
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMW_bootbank_vmkusb_0.1-1vmw.650.3.166.18677441
PRs Fixed  N/A
Related CVE numbers N/A

This patch updates the vmkusb VIB.

    ESXi650-202110103-SG
    Patch Category Security
    Patch Severity Important
    Host Reboot Required No
    Virtual Machine Migration or Shutdown Required No
    Affected Hardware N/A
    Affected Software N/A
    VIBs Included
    • VMware_locker_tools-light_6.5.0-3.166.18677441
    PRs Fixed  2785533 
    Related CVE numbers N/A

    This patch updates the tools-light VIB.

    • The following VMware Tools ISO images are bundled with ESXi650-202110001:

      • windows.iso: VMware Tools 11.3.0 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.
      • linux.iso: VMware Tools 10.3.23 ISO image for Linux OS with glibc 2.11 or later.

      The following VMware Tools ISO images are available for download:

      • VMware Tools 10.0.12:
        • winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003.
        • linuxPreGLibc25.iso: for Linux OS with a glibc version less than 2.5.
           
      • VMware Tools 11.0.6:
        • windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).
           
      • solaris.iso: VMware Tools image 10.3.10 for Solaris.
      • darwin.iso: Supports Mac OS X versions 10.11 and later.

      Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:

    ESXi-6.5.0-20211004001-standard
    Profile Name ESXi-6.5.0-20211004001-standard
    Build For build information, see the top of the page.
    Vendor VMware, Inc.
    Release Date October 12, 2021
    Acceptance Level PartnerSupported
    Affected Hardware N/A
    Affected Software N/A
    Affected VIBs
    • VMware_bootbank_esx-base_6.5.0-3.170.18678235
    • VMware_bootbank_vsan_6.5.0-3.170.18569621
    • VMware_bootbank_vsanhealth_6.5.0-3.170.18569625
    • VMware_bootbank_esx-tboot_6.5.0-3.170.18678235
    • VMW_bootbank_vmkusb_0.1-1vmw.650.3.170.18678235
    • VMW_bootbank_brcmfcoe_11.4.1078.26-14vmw.650.3.170.18678235
    • VMW_bootbank_nvme_1.2.2.28-5vmw.650.3.170.18678235
    • VMware_locker_tools-light_6.5.0-3.166.18677441
    PRs Fixed 2761135, 2685552, 2720188, 2793135, 2761972, 2755231, 2492991, 2710163, 2715132, 2701240, 2750814, 2635091, 2540214, 2749440, 2719215, 2603461, 2654781, 2603445, 2418383, 2796947, 2704429, 2765145, 2704412, 2731316, 2688189, 2465245, 2775374, 2805346
    Related CVE numbers N/A
    • This patch resolves the following issues:
      • If you import a VM with a virtual USB device that is not supported by vCenter Server, such as a virtual USB camera, to a vCenter Server system, the VM might fail to power on. The start of such virtual machines fails with an error message similar to: PANIC: Unexpected signal: 11. The issue affects mostly VM import operations from VMware Fusion or VMware Workstation systems, which support a wide range of virtual USB devices.

      • In some configurations, if you use block sizes higher than the supported max transfer length of the storage device, you might see a drop in the I/O bandwidth. The issue occurs due to buffer allocations in the I/O split layer in the storage stack that cause a lock contention.

      • A rare error condition in the VMkernel might cause ESXi hosts to fail when powering on a virtual machine with more than one virtual CPU. A BSVMM_Validate@vmkernel error in the backtrace indicates the problem.

      • In rare cases, an empty or unset property of a VIM API data array might cause the hostd service to fail. As a result, ESXi hosts lose connectivity to vCenter Server and you must manually reconnect the hosts.

      • An issue with the TPM 1.2 character device might cause ESXi hosts to fail with a page fault (#PF) purple diagnostic screen.

      • In case of temporary connectivity issues, ESXi hosts might not discover devices configured with the VMW_SATP_INV plug-in after connectivity is restored, because SCSI commands that fail during the device discovery stage cause an out-of-memory condition for the plug-in.

      • Due to an issue with clearing some profile variables, you might see multiple duplicate logs in the /var/log/vmware/vsan-health/vmware-vsan-health-service.log file. Such messages are very short and do not cause significant memory consumption.

      • This fix sets Storage Array Type Plug-in (SATP) to VMW_SATP_ALUA, Path Selection Policy (PSP) to VMW_PSP_RR and Claim Options to tpgs_on as default for the Fujitsu Eternus AB/HB series.

      • If you try to collect performance statistics by using the rvc.bat file in a vCenter Server system of version later than 6.5 Update 3k, the operation fails due to a wrong library path.

      • During an NFS server failover, the NFS client reclaims all open files. In rare cases, the reclaim operation fails and virtual machines power off, because the NFS server rejects failed requests.

      • In rare cases, some Intel CPUs might fail to forward #DB traps, and if a timer interrupt happens during a Windows system call, the virtual machine might triple fault. In the vmware.log file, you see an error such as msg.monitorEvent.tripleFault. This issue is intermittent; virtual machines that consistently triple fault, or that triple fault right after boot, are affected by a different issue.

      • A checksum validation issue with newly created memory regions might cause ESXi hosts to fail with a purple diagnostic screen. In the backtrace, you see errors such as VMKernel: checksum BAD: 0xXXXX 0xXXXX. The issue is specific to the process of VMkernel module loading.

      • Resource allocation for a delta disk to create a snapshot of a large virtual machine, with virtual disks equal to or exceeding 1TB, on large VMFS6 datastores of 30TB or more, might take significant time. As a result, the virtual machine might temporarily lose connectivity. The issue affects primarily VMFS6 filesystems.

      • If your environment has IPv6 disabled, the ESXi SNMP agent might stop responding. As a result, ESXi hosts also become unresponsive. In the VMkernel logs, you see multiple messages such as Admission failure in path: snmpd/snmpd.3955682/uw.3955682.

      • After upgrading the firmware version on HPE Gen10 servers, you might see health alarms for the I/O Module 2 ALOM_Link_P2 and NIC_Link_02P2 sensors, related to the sensor entity ID 44.x. The alarms do not indicate an actual health issue, and you can ignore them irrespective of the firmware version.

      • The vpxa service fails in case of a non-UTF8 string in the name property of numeric sensors, and ESXi hosts disconnect from the vCenter Server system.

      • If you use the advanced config option UserVars/HardwareHealthIgnoredSensors to ignore sensors with consecutive entries in a numeric list, such as 0.52 and 0.53, the operation might fail to ignore some sensors. For example, if you run the command esxcfg-advcfg -s 52,53 /UserVars/HardwareHealthIgnoredSensors, only the sensor 0.53 might be ignored.

      • If the getBaseUnitString function returns a non-UTF8 string for the max value of a unit description array, the hostd service fails with a core dump. You see an error such as: [Hostd-zdump] 0x00000083fb085957 in Vmacore::PanicExit (msg=msg@entry=0x840b57eee0 "Validation failure").

      • After upgrading to ESXi 6.5 Update 3, you might see Sensor -1 type hardware health alarms on ESXi hosts being triggered without an actual problem. This can result in excessive email alerts if you have configured email notifications for hardware sensor state alarms in your vCenter Server system. These mails might cause storage issues in the vCenter Server database if the Stats, Events, Alarms and Tasks (SEAT) directory goes above the 95% threshold.

      • When the SLP service is disabled to prevent potential security vulnerabilities, the sfcbd-watchdog service might remain enabled and cause compliance check failures when you perform updates by using a host profile.

      • During a reset of some CIM providers, the sfcb service might make a shutdown call to close child processes that are already closed. As a result, the sfcb service fails and you see multiple core dumps during the reset operation.

      • In certain environments, vSphere Client might not fully protect the underlying websites through modern browser security mechanisms.

      • If you use the /bin/services.sh restart command to restart vCenter Server management services, the vobd daemon, which is responsible for sending ESXi host events to vCenter Server, might not restart. As a result, you do not see alerts in the vSphere Client and the vSphere Web Client.

      • Some devices, such as USB devices, might add invalid bytes to the configuration descriptor that ESXi reads, and cause virtual machines that pass through such devices to become unresponsive.

      • In rare cases, the guest OS of virtual machines might not recognize some USB passthrough devices, such as ExcelSecu USB devices, and you cannot use the VMs.

      • In certain environments, ESXi upgrades might fail due to an all-paths-down error of the vmkusb driver that prevents mounting images from external devices while running scripted installations. If you try to use a legacy driver, the paths might display as active, but the script still does not run.

      • After an upgrade of the brcmfcoe driver on Hitachi storage arrays, ESXi hosts might fail to boot and lose connectivity.

      • When the ESXi NVMe driver receives an esxcli nvme device register get request, it copies chunks of 8 bytes at a time from NVMe controller registers to user space. The width of an NVMe controller register can be 4 bytes or 8 bytes. When the driver accesses 8 bytes on a 4-byte register, some NVMe devices might report critical errors, which cause ESXi hosts to fail with a purple diagnostic screen. Since the vm-support command automatically calls esxcli nvme device register get on NVMe controllers to dump device information, the issue might also occur while collecting a vm-support log bundle.

    ESXi-6.5.0-20211004001-no-tools
    Profile Name ESXi-6.5.0-20211004001-no-tools
    Build For build information, see the top of the page.
    Vendor VMware, Inc.
    Release Date October 12, 2021
    Acceptance Level PartnerSupported
    Affected Hardware N/A
    Affected Software N/A
    Affected VIBs
    • VMware_bootbank_esx-base_6.5.0-3.170.18678235
    • VMware_bootbank_vsan_6.5.0-3.170.18569621
    • VMware_bootbank_vsanhealth_6.5.0-3.170.18569625
    • VMware_bootbank_esx-tboot_6.5.0-3.170.18678235
    • VMW_bootbank_vmkusb_0.1-1vmw.650.3.170.18678235
    • VMW_bootbank_brcmfcoe_11.4.1078.26-14vmw.650.3.170.18678235
    • VMW_bootbank_nvme_1.2.2.28-5vmw.650.3.170.18678235
    PRs Fixed 2761135, 2685552, 2720188, 2793135, 2761972, 2755231, 2492991, 2710163, 2715132, 2701240, 2750814, 2635091, 2540214, 2749440, 2719215, 2603461, 2654781, 2603445, 2418383, 2796947, 2704429, 2765145, 2704412, 2731316, 2688189, 2465245, 2775374, 2805346
    Related CVE numbers N/A
    • This patch resolves the following issues:
      • If you import a VM with a virtual USB device that is not supported by vCenter Server, such as a virtual USB camera, to a vCenter Server system, the VM might fail to power on. The start of such virtual machines fails with an error message similar to: PANIC: Unexpected signal: 11. The issue affects mostly VM import operations from VMware Fusion or VMware Workstation systems, which support a wide range of virtual USB devices.

      • In some configurations, if you use block sizes higher than the supported max transfer length of the storage device, you might see a drop in the I/O bandwidth. The issue occurs due to buffer allocations in the I/O split layer in the storage stack that cause a lock contention.

      • A rare error condition in the VMkernel might cause ESXi hosts to fail when powering on a virtual machine with more than one virtual CPU. A BSVMM_Validate@vmkernel error in the backtrace indicates the problem.

      • In rare cases, an empty or unset property of a VIM API data array might cause the hostd service to fail. As a result, ESXi hosts lose connectivity to vCenter Server and you must manually reconnect the hosts.

      • An issue with the TPM 1.2 character device might cause ESXi hosts to fail with a page fault (#PF) purple diagnostic screen.

      • In case of temporary connectivity issues, ESXi hosts might not discover devices configured with the VMW_SATP_INV plug-in after connectivity is restored, because SCSI commands that fail during the device discovery stage cause an out-of-memory condition for the plug-in.

      • Due to an issue with clearing some profile variables, you might see multiple duplicate logs in the /var/log/vmware/vsan-health/vmware-vsan-health-service.log file. Such messages are very short and do not cause significant memory consumption.

      • This fix sets Storage Array Type Plug-in (SATP) to VMW_SATP_ALUA, Path Selection Policy (PSP) to VMW_PSP_RR and Claim Options to tpgs_on as default for the Fujitsu Eternus AB/HB series.

      • If you try to collect performance statistics by using the rvc.bat file in a vCenter Server system of version later than 6.5 Update 3k, the operation fails due to a wrong library path.

      • During an NFS server failover, the NFS client reclaims all open files. In rare cases, the reclaim operation fails because the NFS server rejects the requests, and virtual machines power off.

      • In rare cases, some Intel CPUs might fail to forward #DB traps, and if a timer interrupt occurs during a Windows system call, the virtual machine might triple fault. In the vmware.log file, you see an error such as msg.monitorEvent.tripleFault. This issue is intermittent; virtual machines that consistently triple fault, or that triple fault right after boot, are affected by a different issue.

      • A checksum validation issue with newly created memory regions might cause ESXi hosts to fail with a purple diagnostic screen. In the backtrace, you see errors such as VMKernel: checksum BAD: 0xXXXX 0xXXXX. The issue is specific to the process of VMkernel module loading.

      • Allocating resources for the delta disk when creating a snapshot of a large virtual machine, with virtual disks of 1 TB or more, on large VMFS6 datastores of 30 TB or more, might take significant time. As a result, the virtual machine might temporarily lose connectivity. The issue primarily affects VMFS6 file systems.

      • If your environment has IPv6 disabled, the ESXi SNMP agent might stop responding. As a result, ESXi hosts also become unresponsive. In the VMkernel logs, you see multiple messages such as Admission failure in path: snmpd/snmpd.3955682/uw.3955682.

      • After upgrading the firmware version on HP Gen10 servers, you might see health alarms for the I/O Module 2 ALOM_Link_P2 and NIC_Link_02P2 sensors, related to the sensor entity ID 44.x. The alarms do not indicate an actual health issue and you can ignore them irrespective of the firmware version.

      • The vpxa service fails if the name property of numeric sensors contains a non-UTF-8 string, and ESXi hosts disconnect from the vCenter Server system.

      • If you use the advanced config option UserVars/HardwareHealthIgnoredSensors to ignore sensors with consecutive entries in a numeric list, such as 0.52 and 0.53, the operation might fail to ignore some sensors. For example, if you run the command esxcfg-advcfg -s 52,53 /UserVars/HardwareHealthIgnoredSensors, only the sensor 0.53 might be ignored.

      • If the getBaseUnitString function returns a non-UTF-8 string for the max value of a unit description array, the hostd service fails with a core dump. You see an error such as: [Hostd-zdump] 0x00000083fb085957 in Vmacore::PanicExit (msg=msg@entry=0x840b57eee0 "Validation failure")
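      Guarding a string boundary of this kind typically means validating UTF-8 before the value is consumed. The following is a minimal, self-contained validity check, sketched in C as a hypothetical helper; it is not the hostd implementation, only an illustration of the class of fix:

      ```c
      #include <stdbool.h>
      #include <stddef.h>

      /* Return true if buf[0..len) is well-formed UTF-8.
       * Checks lead-byte patterns, continuation bytes, overlong
       * encodings, UTF-16 surrogates, and the U+10FFFF upper bound. */
      static bool is_valid_utf8(const unsigned char *buf, size_t len)
      {
          size_t i = 0;
          while (i < len) {
              unsigned char b = buf[i];
              size_t n;            /* number of continuation bytes */
              unsigned int cp;     /* decoded code point */
              if (b < 0x80) { i++; continue; }
              else if ((b & 0xE0) == 0xC0) { n = 1; cp = b & 0x1F; }
              else if ((b & 0xF0) == 0xE0) { n = 2; cp = b & 0x0F; }
              else if ((b & 0xF8) == 0xF0) { n = 3; cp = b & 0x07; }
              else return false;   /* stray continuation or invalid lead byte */
              if (i + n >= len) return false;   /* truncated sequence */
              for (size_t j = 1; j <= n; j++) {
                  if ((buf[i + j] & 0xC0) != 0x80) return false;
                  cp = (cp << 6) | (buf[i + j] & 0x3F);
              }
              /* reject overlong encodings, surrogates, values > U+10FFFF */
              if ((n == 1 && cp < 0x80) || (n == 2 && cp < 0x800) ||
                  (n == 3 && cp < 0x10000) ||
                  (cp >= 0xD800 && cp <= 0xDFFF) || cp > 0x10FFFF)
                  return false;
              i += n + 1;
          }
          return true;
      }
      ```

      Running such a check on the value returned by a unit-description accessor would let a service reject the bad string gracefully instead of failing a downstream validation assert.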

      • After upgrading to ESXi 6.5 Update 3, you might see Sensor -1 type hardware health alarms on ESXi hosts being triggered without an actual problem. This can result in excessive email alerts if you have configured email notifications for hardware sensor state alarms in your vCenter Server system. These emails might cause storage issues in the vCenter Server database if the Stats, Events, Alarms and Tasks (SEAT) directory grows above the 95% threshold.

      • When the SLP service is disabled to prevent potential security vulnerabilities, the sfcbd-watchdog service might remain enabled and cause compliance check failures when you perform updates by using a host profile.

      • During a reset of some CIM providers, the sfcb service might make a shutdown call to close child processes that are already closed. As a result, the sfcb service fails and you see multiple core dumps during the reset operation.

      • In certain environments, vSphere Client might not fully protect the underlying websites through modern browser security mechanisms.

      • If you use the /bin/services.sh restart command to restart vCenter Server management services, the vobd daemon, which is responsible for sending ESXi host events to vCenter Server, might not restart. As a result, you do not see alerts in the vSphere Client and the vSphere Web Client.

      • Some devices, such as USB devices, might add invalid bytes to the configuration descriptor that ESXi reads, causing virtual machines that pass through such devices to become unresponsive.
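      A defensive parser for such descriptors checks each embedded length field against the buffer bounds before advancing. The following is a minimal sketch in C (a hypothetical helper, not the vmkusb implementation); it assumes the USB descriptor layout, where every descriptor begins with a one-byte bLength followed by a one-byte bDescriptorType:

      ```c
      #include <stdbool.h>
      #include <stddef.h>
      #include <stdint.h>

      /* Walk a USB configuration descriptor buffer and verify that every
       * embedded descriptor's bLength is nonzero and stays inside the
       * buffer. A zero or overrunning bLength -- the "invalid bytes"
       * case -- would otherwise make a naive parser spin forever or
       * read out of bounds. */
      static bool usb_config_walk_ok(const uint8_t *buf, size_t total_len)
      {
          size_t off = 0;
          while (off < total_len) {
              if (total_len - off < 2)      /* need bLength + bDescriptorType */
                  return false;
              uint8_t blen = buf[off];      /* first byte of each descriptor */
              if (blen < 2 || off + blen > total_len)
                  return false;             /* zero/short or overrunning length */
              off += blen;
          }
          return true;
      }
      ```

      A host-side stack that runs such a walk before handing the descriptor to a guest can fail the device cleanly rather than hanging the virtual machine.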

      • In rare cases, the guest OS of virtual machines might not recognize some USB passthrough devices, such as ExcelSecu USB devices, and you cannot use the VMs.

      • In certain environments, ESXi upgrades might fail due to an all-paths-down error in the vmkusb driver that prevents mounting images from external devices during scripted installations. If you try to use a legacy driver, the paths might display as active, but the script still does not run.

      • After an upgrade of the brcmfcoe driver on Hitachi storage arrays, ESXi hosts might fail to boot and lose connectivity.

      • When the ESXi NVMe driver receives an esxcli nvme device register get request, it copies chunks of 8 bytes at a time from NVMe controller registers to user space. The width of an NVMe controller register can be 4 bytes or 8 bytes. When drivers access 8 bytes on a 4-byte register, some NVMe devices might report critical errors, which cause ESXi hosts to fail with a purple diagnostic screen. Since the vm-support command automatically calls esxcli nvme device register get on NVMe controllers to dump device information, the issue might also occur while collecting a vm-support log bundle.
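      The safe pattern is to size each register access to the register's architectural width rather than a fixed 8-byte chunk. The following is a minimal sketch in C (a hypothetical helper, not the ESXi driver); the offsets and widths assumed here follow the NVMe specification, where CAP at offset 0x00 is 64-bit while VS at 0x08 and CSTS at 0x1C are 32-bit:

      ```c
      #include <stddef.h>
      #include <stdint.h>
      #include <string.h>

      /* Copy an NVMe controller register into user memory using the
       * register's architectural width instead of a fixed 8-byte chunk.
       * Issuing an 8-byte access on a 4-byte register is exactly the
       * pattern that made some devices report critical errors. */
      static void read_reg(const uint8_t *bar0, size_t offset, size_t width,
                           uint64_t *out)
      {
          *out = 0;
          if (width == 4) {
              uint32_t v;
              memcpy(&v, bar0 + offset, 4);   /* 4-byte access only */
              *out = v;
          } else {
              memcpy(out, bar0 + offset, 8);  /* 8-byte access for 64-bit regs */
          }
      }
      ```

      With a per-register width table, a register dump touches each register only at its defined size, so 4-byte registers such as VS or CSTS are never read past their boundary.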

    ESXi-6.5.0-20211001001s-standard
    Profile Name: ESXi-6.5.0-20211001001s-standard
    Build: For build information, see the top of the page.
    Vendor: VMware, Inc.
    Release Date: October 12, 2021
    Acceptance Level: PartnerSupported
    Affected Hardware: N/A
    Affected Software: N/A
    Affected VIBs:
    • VMware_bootbank_esx-base_6.5.0-3.166.18677441
    • VMware_bootbank_vsanhealth_6.5.0-3.166.18596722
    • VMware_bootbank_vsan_6.5.0-3.166.18596718
    • VMware_bootbank_esx-tboot_6.5.0-3.166.18677441
    • VMW_bootbank_vmkusb_0.1-1vmw.650.3.166.18677441
    • VMware_locker_tools-light_6.5.0-3.166.18677441
    PRs Fixed: 2761972, 2765145, 2801745, 2801963, 2819291, 2819327, 2819329, 2819366, 2821179, 2821577, 2825672, 2825674, 2831766, 2785533
    Related CVE numbers: CVE-2018-6485, CVE-2021-35942, CVE-2018-11236, CVE-2019-9169, CVE-2017-1000366, CVE-2016-10196, CVE-2016-10197
    • This patch resolves the following issues: 
      • The ESXi userworld libcurl library is updated to version 7.78.0.
      • The glibc package is updated to address CVE-2018-6485, CVE-2021-35942, CVE-2018-11236, CVE-2019-9169, CVE-2017-1000366.
      • The ESXi userworld OpenSSL library is updated to version openssl-1.0.2za.
      • The jansson library is updated to version 2.10.
      • The libarchive library is updated to version 3.3.1.
      • The Libxml2 library is updated to version 2.9.12.
      • The Sqlite library is updated to version 3.34.1.
      • The Libevent library is updated to address CVE-2016-10196 and CVE-2016-10197.
      • The following VMware Tools ISO images are bundled with ESXi650-202110001:

        • windows.iso: VMware Tools 11.3.0 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.
        • linux.iso: VMware Tools 10.3.23 ISO image for Linux OS with glibc 2.11 or later.

        The following VMware Tools ISO images are available for download:

        • VMware Tools 10.0.12:
          • winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003.
          • linuxPreGLibc25.iso: for Linux OS with a glibc version less than 2.5.
             
        • VMware Tools 11.0.6:
          • windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).
             
        • solaris.iso: VMware Tools image 10.3.10 for Solaris.
        • darwin.iso: Supports Mac OS X versions 10.11 and later.

        Follow the procedures in the VMware Tools documentation to download VMware Tools for platforms not bundled with ESXi.

    ESXi-6.5.0-20211001001s-no-tools
    Profile Name: ESXi-6.5.0-20211001001s-no-tools
    Build: For build information, see the top of the page.
    Vendor: VMware, Inc.
    Release Date: October 12, 2021
    Acceptance Level: PartnerSupported
    Affected Hardware: N/A
    Affected Software: N/A
    Affected VIBs:
    • VMware_bootbank_esx-base_6.5.0-3.166.18677441
    • VMware_bootbank_vsanhealth_6.5.0-3.166.18596722
    • VMware_bootbank_vsan_6.5.0-3.166.18596718
    • VMware_bootbank_esx-tboot_6.5.0-3.166.18677441
    • VMW_bootbank_vmkusb_0.1-1vmw.650.3.166.18677441
    PRs Fixed: 2761972, 2765145, 2801745, 2801963, 2819291, 2819327, 2819329, 2819366, 2821179, 2821577, 2825672, 2825674, 2831766, 2785533
    Related CVE numbers: CVE-2018-6485, CVE-2021-35942, CVE-2018-11236, CVE-2019-9169, CVE-2017-1000366, CVE-2016-10196, CVE-2016-10197
    • This patch resolves the following issues: 
      • The ESXi userworld libcurl library is updated to version 7.78.0.
      • The glibc package is updated to address CVE-2018-6485, CVE-2021-35942, CVE-2018-11236, CVE-2019-9169, CVE-2017-1000366.
      • The ESXi userworld OpenSSL library is updated to version openssl-1.0.2za.
      • The jansson library is updated to version 2.10.
      • The libarchive library is updated to version 3.3.1.
      • The Libxml2 library is updated to version 2.9.12.
      • The Sqlite library is updated to version 3.34.1.
      • The Libevent library is updated to address CVE-2016-10196 and CVE-2016-10197.
      • The following VMware Tools ISO images are bundled with ESXi650-202110001:

        • windows.iso: VMware Tools 11.3.0 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.
        • linux.iso: VMware Tools 10.3.23 ISO image for Linux OS with glibc 2.11 or later.

        The following VMware Tools ISO images are available for download:

        • VMware Tools 10.0.12:
          • winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003.
          • linuxPreGLibc25.iso: for Linux OS with a glibc version less than 2.5.
             
        • VMware Tools 11.0.6:
          • windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).
             
        • solaris.iso: VMware Tools image 10.3.10 for Solaris.
        • darwin.iso: Supports Mac OS X versions 10.11 and later.

        Follow the procedures in the VMware Tools documentation to download VMware Tools for platforms not bundled with ESXi.

    Known Issues from Previous Releases

    To view a list of previous known issues, click here.
