Release Date: JUL 30, 2020
What's in the Release Notes
The release notes cover the following topics:
What's New
- HTML interface. The HTML5-based vSphere Client ships with vCenter Server alongside the Flex-based vSphere Web Client. The vSphere Client uses much of the same interface terminology, topology, and workflow as the vSphere Web Client. You can use the new vSphere Client, or continue to use the vSphere Web Client.
Build Details
Download Filename: | ESXi650-202007001.zip |
Build: | 16576891 |
Download Size: | 483.4 MB |
md5sum: | ae6babff62f7daa08bcbab1c2dc674c4 |
sha1checksum: | f9d1a212f06ace2e746385f3f91397769fff897e |
Host Reboot Required: | Yes |
Virtual Machine Migration or Shutdown Required: | Yes |
Bulletins
Bulletin ID | Category | Severity |
ESXi650-202007401-BG | Bugfix | Important |
ESXi650-202007402-BG | Bugfix | Important |
ESXi650-202007403-BG | Bugfix | Important |
ESXi650-202007404-BG | Bugfix | Important |
ESXi650-202007101-SG | Security | Important |
ESXi650-202007102-SG | Security | Important |
ESXi650-202007103-SG | Security | Important |
ESXi650-202007104-SG | Security | Important |
Rollup Bulletin
This rollup bulletin contains the latest VIBs with all the fixes since the initial release of ESXi 6.5.
Bulletin ID | Category | Severity |
ESXi650-202007001 | Bugfix | Important |
IMPORTANT: For clusters using VMware vSAN, you must first upgrade the vCenter Server system. Upgrading only ESXi is not supported.
Before an upgrade, always verify the compatible upgrade paths from earlier versions of ESXi, vCenter Server, and vSAN to the current version in the VMware Product Interoperability Matrix.
Image Profiles
VMware patch and update releases contain general and critical image profiles. Applying the general release image profile installs all new bug fixes.
Image Profile Name |
ESXi-6.5.0-20200704001-standard |
ESXi-6.5.0-20200704001-no-tools |
ESXi-6.5.0-20200701001s-standard |
ESXi-6.5.0-20200701001s-no-tools |
For more information about the individual bulletins, see the Download Patches page and the Resolved Issues section.
Patch Download and Installation
The typical way to apply patches to ESXi hosts is by using the VMware vSphere Update Manager. For details, see About Installing and Administering VMware vSphere Update Manager.
ESXi hosts can be updated by manually downloading the patch ZIP file from the VMware download page and installing the VIBs by using the esxcli software vib command. Additionally, the system can be updated by using the image profile and the esxcli software profile command.
For more information, see vSphere Command-Line Interface Concepts and Examples and vSphere Upgrade Guide.
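For example, assuming the offline bundle has been copied to a datastore path such as /vmfs/volumes/datastore1/ (a placeholder used here only for illustration), a minimal sequence from the ESXi Shell or an SSH session might look like the following. Place the host in maintenance mode first:
esxcli system maintenanceMode set --enable true
Then apply either the patch VIBs from the bundle:
esxcli software vib update -d /vmfs/volumes/datastore1/ESXi650-202007001.zip
or a complete image profile from the same bundle (use esxcli software sources profile list -d to list the profile names in the depot):
esxcli software profile update -d /vmfs/volumes/datastore1/ESXi650-202007001.zip -p ESXi-6.5.0-20200704001-standard
Reboot the host to complete the update.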
Product Support Notices
- Intel Memory Protection Extensions (MPX) is being deprecated with the introduction of Ice Lake CPUs. While this feature continues to be supported, it will not be exposed by default to virtual machines at power on. As an alternative, you can open the <name of virtual machine>.vmx file and add cpuid.enableMPX = TRUE. For more information, see VMware knowledge base article 76799 and Set Advanced Virtual Machine Attributes. An illustrative .vmx entry follows this notice.
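As an illustration only, the resulting entry in the virtual machine's .vmx file would look similar to the line below. The quoting shown follows common .vmx syntax and is an assumption rather than text from this release; edit the file only while the virtual machine is powered off, or add the same key through the Configuration Parameters dialog described in Set Advanced Virtual Machine Attributes:
cpuid.enableMPX = "TRUE"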
Resolved Issues
The resolved issues are grouped as follows.
- ESXi650-202007401-BG
- ESXi650-202007402-BG
- ESXi650-202007403-BG
- ESXi650-202007404-BG
- ESXi650-202007101-SG
- ESXi650-202007102-SG
- ESXi650-202007103-SG
- ESXi650-202007104-SG
- ESXi-6.5.0-20200704001-standard
- ESXi-6.5.0-20200704001-no-tools
- ESXi-6.5.0-20200701001s-standard
- ESXi-6.5.0-20200701001s-no-tools
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2466294, 2454362, 2449192, 2500024, 2477965, 2462102, 2425577, 2468510, 2535758, 2525498, 2511325, 2533654, 2536923, 2411906, 2577439, 2476942, 2561925, 2488525, 2474261, 2387387, 2537030, 2537137, 2569156, 2537073, 2567795, 2474304, 2458185, 2514237, 2541272, 2459344, 2577914, 2569088, 2493976, 2541335, 2327059, 2464910, 2577879, 2541493, 2553475, 2554581, 2497571 |
Related CVE numbers | N/A |
This patch updates the esx-base, esx-tboot, vsan, and vsanhealth VIBs to resolve the following issues:
- PR 2466294: Teaming failback policy with beacon detection does not take effect
On rare occasions, a VMNIC that recovers from a failure under the teaming and failover policy by using beacon probing might not fall back to the active VMNIC.
This issue is resolved in this release.
- PR 2454362: The hostd service might fail due to an out of memory error caused by uncleaned tasks
If one or more tasks time out while the list of completed tasks in the hostd service is empty, no task cleanup is scheduled for any subsequently completed task. As a result, memory leaks from uncleaned tasks accumulate over time and cause hostd to fail by exceeding its memory limit.
This issue is resolved in this release.
- PR 2449192: Secondary virtual machines in vSphere Fault Tolerance might get stuck in VM_STATE_CREATE_SCREENSHOT state and consecutive operations fail
If you attempt to take a screenshot of a secondary virtual machine in vSphere FT, the request might never get a response and the virtual machine remains in a VM_STATE_CREATE_SCREENSHOT status. As a result, any subsequent operation, such as reconfigure and migration, fails for such virtual machines with an InvalidState error until the hostd service is restarted to clear the transient state.
This issue is resolved in this release. With the fix, invalid screenshot requests for secondary virtual machines in vSphere FT fail immediately.
- PR 2500024: The hostd service might fail if many retrieveData calls in the virtual machine namespace run in a short period
If many NamespaceManager.retrieveData calls run in a short period of time, the hostd service might fail with an out of memory error, because the result of such calls can be large and hostd keeps them for 10 minutes by default.
This issue is resolved in this release. For earlier ESXi versions, you can either avoid running many NamespaceManager.retrieveData calls in quick succession, or lower the value of the taskRetentionInMins option in the hostd config.xml file.
- PR 2477965: If another service requests sensor data from the hostd service during a hardware health check, hostd might fail
If another service requests sensor data from the hostd service during a hardware health check, hostd might fail with an error similar to
IpmiIfcSdrReadRecordId: retry expired
. As a result, you cannot access the ESXi host from the vCenter Server system.
This issue is resolved in this release.
- PR 2462102: XCOPY requests to Dell/EMC VMAX storage arrays might cause VMFS datastore corruption
Following XCOPY requests to Dell/EMC VMAX storage arrays for migration or cloning of virtual machines, the destination VMFS datastore might become corrupt and go offline.
This issue is resolved in this release. For more information, see VMware knowledge base article 74595.
- PR 2425577: The command enum_instances OMC_IpmiLogRecord fails with an error
The command enum_instances OMC_IpmiLogRecord, which returns an instance of the CIM class OMC_IpmiLogRecord, might not work as expected and result in a no instances found error. This happens when RawIpmiProvider is not loaded and fails to respond to the query.
This issue is resolved in this release.
- PR 2468510: The hostd service fails repeatedly with an error message that memory exceeds the hard limit
The hostd service might start failing repeatedly with an error message similar to Panic: Memory exceeds hard limit. The issue occurs if a corrupted Windows ISO image of VMware Tools is active in the productLocker/vmtools folder.
This issue is resolved in this release. With the fix, hostd checks the manifest file of VMware Tools for the currently installed versions and prevents failures due to memory leaking on each check. However, to resolve the root cause of the issue, you must:
- Put the problematic ESXi host in maintenance mode.
- Enable SSH on both an ESXi host that is not affected by the issue and on the affected host.
- Log in as a root user.
- Copy the entire contents of the vmtools folder from the unaffected host to the affected host.
- Run the md5sum command on each copied file on the unaffected host and on the affected host. The results for each pair of files must be the same (see the sketch after this list).
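A minimal sketch of the copy-and-verify steps in this procedure, assuming SSH access between the two hosts and using a hypothetical host name and the productLocker path from this article (verify the actual path and host names in your environment):
scp -r /productLocker/vmtools/ root@affected-host:/productLocker/
md5sum /productLocker/vmtools/*
Run the md5sum command on both hosts and compare the output line by line; every checksum pair must match.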
- PR 2535758: If a trusted domain goes offline, Active Directory authentications for some user groups might fail
If any of the trusted domains goes offline, Likewise returns no or only a partial set of group memberships for users who are part of any of the groups on the offline domain. As a result, Active Directory authentications for some user groups fail.
This issue is resolved in this release. This fix makes sure Likewise lists all the groups from all the other online domains.
- PR 2525498: You cannot log in to an ESXi host by using a smart card after you enable smart card authentication and reboot the host
When you configure smart card authentication on an ESXi host and upload the domain controller root and intermediate certificates, the certificates might not be retained after you reboot the host. As a result, logging in to the host by using a smart card fails after the reboot operation. The issue occurs because the domain controller certificate does not persist after the host reboot.
This issue is resolved in this release.
- PR 2511325: VMFS6 datastores fail to open with space error for journal blocks and virtual machines cannot power on
When a large number of ESXi hosts access a VMFS6 datastore, in case of a storage or power outage, all journal blocks in the cluster might experience a memory leak. This results in failures to open or mount VMFS volumes.
This issue is resolved in this release.
- PR 2533654: ESXi hosts in a cluster with vCenter Server High Availability enabled report an error that they cannot reach a non-existent isolation address
In the Summary tab of the Hosts and Cluster inventory view in the vSphere Client, you might see an error message such as
could not reach isolation address 6.0.0.0
for some ESXi hosts in a cluster with vCenter Server High Availability enabled, without having set such an address. The message does not report a real issue and does not indicate that vCenter Server High Availability might not function as expected.
This issue is resolved in this release.
- PR 2536923: Device aliases might not be in the expected order on ESXi hosts that conform to SMBIOS 3.2 or later
VMware ESXi assigns short names, called aliases, to devices such as network adapters, storage adapters, and graphics devices, for example, vmnic0 or vmhba1. ESXi assigns aliases in a predictable order by physical location, based on the location information obtained from ESXi host firmware sources such as the SMBIOS table. In releases earlier than ESXi650-202007001, the ESXi VMkernel does not support an extension to the SMBIOS Type 9 (System Slot) record that is defined in SMBIOS specification version 3.2. As a result, ESXi assigns some aliases in an incorrect order on such hosts.
This issue is resolved in this release. However, ESXi650-202007001 completely fixes the issue only for fresh installs. If you upgrade from an earlier release to ESXi650-202007001, aliases still might not be reassigned in the expected order and you might need to manually change device aliases. For more information on correcting the order of device aliases, see VMware knowledge base article 2091560. For a quick way to review the current aliases on a host, see the example after this issue.
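To review the aliases that a host currently assigns, you can list the network and storage adapters, for example with the following standard esxcli commands (shown only as a convenience; they are not the remediation procedure from KB 2091560):
esxcli network nic list
esxcli storage core adapter list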
- PR 2411906: When a host profile is applied to a cluster, Enhanced vMotion Compatibility (EVC) settings are missing from ESXi hosts
vSphere Host Profiles does not manage some settings in the /etc/vmware/config file. When a host profile is applied to a cluster, such settings, including some for EVC, are removed unintentionally. As a result, some EVC functionalities are not operational; for example, CPU features that EVC masks might be exposed to workloads.
This issue is resolved in this release.
- PR 2577439: A burst of corrected memory errors can cause performance issues or cause ESXi hosts to fail with a purple diagnostic screen
In rare cases, many closely spaced memory errors cause performance issues that might lead to ESXi hosts failing with a purple diagnostic screen and panic errors on multiple physical CPUs.
This issue is resolved in this release. However, although the ESXi tolerance to closely spaced memory errors is enhanced, performance issues are still possible. In such cases, you might need to replace the physical memory.
- PR 2476942: Host name or IP Network uplink redundancy lost alarm resets to Green even if a VMNIC is still down
The Host name or IP Network uplink redundancy lost alarm reports the loss of uplink redundancy on a vSphere standard or a distributed switch for an ESXi host. In some cases, when more than one VMNIC is down, the alarm resets to Green even when one of the VMNICs is up, while others might still be down.
This issue is resolved in this release. The fix aggregates all the restored dvPort and redundancy events at the net correlator layer and reports them to the vCenter Server system only when all uplinks are restored.
- PR 2561925: You cannot change the maximum number of outstanding I/O requests that all virtual machines can issue to a LUN
If you change the
Disk.SchedNumReqOutstanding
(DSNRO) parameter to align with a modified LUN queue depth, the value does not persist after a reboot.
For instance, if you change the DSNRO value by running the following command:
esxcli storage core device set -O | --sched-num-req-outstanding 8 -d device_ID
then after a reboot, when you run the command esxcli storage core device list -d device_ID to verify your changes, you see a different output than expected, such as:
No of outstanding IOs with competing worlds: 32
DSNRO controls the maximum number of outstanding I/O requests that all virtual machines can issue to the LUN, and inconsistent changes in the parameter might lead to performance degradation.
This issue is resolved in this release. The fix handles exceptions thrown by external SATPs to ensure that DSNRO values persist after a reboot.
- PR 2488525: Virtual machines stop running and a space error appears on the Summary tab
NFS datastores might have space limits per user directory and if one directory exceeds the quota, I/O operations to other user directories also fail with a
NO_SPACE
error. As a result, some virtual machines stop running. On the Summary tab, you see a message similar to:
There is no more space for virtual disk '<>'. You might be able to continue this session by freeing disk space on the relevant volume, and clicking _Retry. Click Cancel to terminate this session.
This issue is resolved in this release.
- PR 2474261: If the VMNIC default gateway is set in a host profile but is not set on a target host, a host remediation operation might fail
If the VMNIC default gateway is set in a host profile but is not set on a target host, a host remediation operation might fail with the message
Remediation cannot start due to error
.
This issue is resolved in this release.
- PR 2387387: Third-party LSI CIM provider might fail while disabling WBEM
If you use a third-party LSI CIM provider on an ESXi host and disable WBEM, the sfcb service might fail with a core dump. In the VMware Host Client, under Storage > Hardware, you see a message such as
The Small Footprint Cim Broker Daemon(SFCBD) is running, but no data has been reported
. You may need to install a CIM provider for your storage provider.
The issue occurs because the LSI CIM provider might take more time to shut down than sfcb tolerates.
This issue is resolved in this release. The fix increases the shutdown interval of LSI CIM providers to 75 seconds.
- PR 2537030: ESXi hosts fail with a purple diagnostic screen and a #PF Exception 14 error message
ESXi hosts fail with a purple diagnostic screen and an error message such as
BlueScreen: #PF Exception 14 in world 66633:Cmpl-vmhba1- IP 0x41801ad0dbf3 addr 0x49
. The failure is due to a race condition in Logical Volume Manager (LVM) operations.
This issue is resolved in this release.
- PR 2537137: An ESXi host fails with a purple diagnostic screen and an error #PF Exception 14 in world 82187
On rare occasions, while processing network interface-related data that the underlying network layers return, an ESXi host might fail with errors in the
vmkernel-zdump.1
file such as:
#PF Exception 14 in world 82187:hostd-probe IP 0x41802f2ee91a addr 0x430fd590a000
and
UserSocketInetBSDToLinuxIfconf@(user)#
+0x1b6 stack: 0x430fd58ea4b0
This issue is resolved in this release.
- PR 2569156: If a datastore experiences unusual slowness, ESXi hosts become unresponsive
If the /etc/vmware/hostd/config.xml file has the <outputtofiles></outputtofiles> parameter set to true and a datastore experiences unusual slowness, the hostd service stops responding while trying to report the storage issue. As a result, one or more ESXi hosts become unresponsive.
This issue is resolved in this release.
- PR 2537073: ESXi hosts fail with an error about the Content Based Read Cache (CBRC) memory
An ESXi host might fail with a purple diagnostic screen displaying an error such as
#PF Exception 14 in word Id 90984419:CBRC Memory IP 0x418039d15bc9 addr 0x8
. The failure is due to a race condition in the CBRC module, which is also used in the View Storage Accelerator feature in vCenter Server to cache virtual machine disk data.
This issue is resolved in this release.
- PR 2567795: If too many tasks, such as calls for current time, come in quick succession, the hostd service fails with an out of memory error
By default, the hostd service retains completed tasks for 10 minutes. If too many tasks come at the same time, for instance calls to get the current system time from the ServiceInstance managed object, hostd might not be able to process them all and fail with an out of memory message.
This issue is resolved in this release.
- PR 2474304: While taking a screen shot in the VMware Remote Console, a virtual machine fails with VMware ESX unrecoverable error: (vmx)
Due to a very rare race condition in the screenshot capture code, the VMX service might fail during MKS operations, such as taking a screenshot, in the VMware Remote Console. As a result, virtual machines might become unresponsive and you see a message such as
VMware ESX unrecoverable error: (vmx)
in the vmware.log file.
Workaround: None.
- PR 2458185: NFS 4.1 datastores might become inaccessible after failover or failback operations of storage arrays
When storage array failover or failback operations take place, NFS 4.1 datastores fall into an All-Paths-Down (APD) state. However, after the operations complete, the datastores might remain in APD state and become inaccessible.
This issue is resolved in this release.
- PR 2514237: Unused port groups might cause high response times of ESXi hosts
Unused port groups might cause high response times for all tasks run on ESXi hosts, such as loading the vSphere Client, powering on virtual machines, and editing settings.
This issue is resolved in this release. The fix optimizes the NetworkSystemVmkImplProvider::GetNetworkInfo method to avoid looping over port groups.
- PR 2541272: After reverting a virtual machine to a memory snapshot, an ESXi host is disconnected from the vCenter Server system
A quick sequence of tasks in some operations, for example after reverting a virtual machine to a memory snapshot, might trigger a race condition that causes the hostd service to fail with a
/var/core/hostd-worker-zdump.000
. As a result, the ESXi host loses connectivity to the vCenter Server system.
This issue is resolved in this release.
- PR 2459344: Direct I/O or passthrough operations by using an AMD FCH SATA controller might result in an ESXi host platform reboot
If you use a FCH SATA controller for direct I/O or passthrough operations on AMD Zen platforms, the default reset method of the controller might cause unexpected platform reboots.
This issue is resolved in this release. The fix allows the default reset method of the FCH SATA Controller [AHCI mode] [1022:7901] to alternate with another reset method, D3D0, when applicable.
- PR 2577914: You must manually add claim rules to an ESXi host for HUAWEI XSG1 storage arrays to set a different configuration
This fix sets default values for the Storage Array Type Plugin (SATP), Path Selection Policy (PSP), IOPS, and Claim Options by adding the following rules (a verification example follows this issue):
esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V HUAWEI -M "XSG1" -c tpgs_on --psp="VMW_PSP_RR" -O "iops=1" -e "HUAWEI arrays with ALUA support"
esxcli storage nmp satp rule add -s VMW_SATP_DEFAULT_AA -V HUAWEI -M "XSG1" -c tpgs_off --psp="VMW_PSP_RR" -O "iops=1" -e " HUAWEI arrays without ALUA support"
This issue is resolved in this release.
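To confirm that the new claim rules are present after you apply the patch, you can list the NMP SATP rules and filter for the vendor, for example (the grep filter is shown only as a convenience):
esxcli storage nmp satp rule list | grep HUAWEI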
- PR 2569088: Many hostspec events might cause the hostd agent to run out of memory and fail
During an update of a vSphere Distributed Switch, many hostspec events triggered at the same time might cause an out of memory condition of the hostd service. As a result, the hostd agent loses connectivity to the vCenter Server system.
This issue is resolved in this release.
- PR 2493976: A virtual machine suddenly stops responding and when you try to reset the virtual machine, it cannot recover
A virtual machine suddenly stops responding and when you try to reset the virtual machine, it cannot recover. In the backtrace, you see multiple RESET calls to a device linked to the virtual machine RDM disk. The issue is that if the RDM disk is greater than 2 TB, the
getLbaStatus
command during the shutdown process might continuously loop and prevent the proper shutdown of the virtual machine.This issue is resolved in this release. The fix makes sure the
getLbaStatus
command always returns a result.
- PR 2541335: An ESXi host fails with a purple diagnostic screen and an error for the FSAtsUndoUpgradedOptLocks() method in the backtrace
A NULL pointer exception in the
FSAtsUndoUpgradedOptLocks ()
method might cause an ESXi host to fail with a purple diagnostic screen. In the backtrace of the vmkernel-zdump.1 file, you see messages similar to FSAtsUndoUpgradedOptLocks (…) at bora/modules/vmkernel/vmfs/fs3ATSCSOps.c:665. Before the failure, you might see warnings that the VMFS heap is exhausted, such as WARNING: Heap: 3534: Heap vmfs3 already at its maximum size. Cannot expand.
This issue is resolved in this release.
- PR 2327059: When ESXi hosts fail with a purple diagnostic screen, you cannot see the full backtrace
When ESXi hosts fail with a purple diagnostic screen, you cannot see the full backtrace, because the logs appear truncated regardless of whether the core dump is extracted from a coredump partition or a dump file. The issue occurs because the compressed offset of dump files is not 4K-aligned. Sometimes, 2 KB are written to the last block of a VMFS extent and the remaining 2 KB to a new VMFS extent. As a result, core dumps are partial.
This issue is resolved in this release.
- PR 2464910: ESXi hosts become unresponsive and the hostd service fails with a core dump
If an ESXi host in a vSAN cluster fails to handle certain non-disruptive vSAN exceptions, it might fail to respond to commands. The hostd service fails with a core dump.
This issue is resolved in this release.
- PR 2577879: You see many health warnings in the vSphere Client or vSphere Web Client and mails for potential hardware failure
In the vSphere Client or vSphere Web Client, you see many health warnings and receive mails about potential hardware failures, even though the actual sensor state has not changed. The issue occurs because the CIM service resets all sensors to an unknown state when fetching the sensor status from IPMI fails.
This issue is resolved in this release.
- PR 2541493: Compliant objects in a vSAN environment appear as non-compliant
Objects that comply with the storage policy of a vSAN environment might appear as non-compliant. A deprecated attribute in the storage policy causes the issue.
This issue is resolved in this release.
- PR 2553475: CPU and memory stats of an ESXi host in the VMware Host Client display 0 after a restart of the ESXi management services
After a restart of the ESXi management services, the CPU and memory stats of the ESXi host in the VMware Host Client display 0, because quick-stats providers might remain disconnected.
This issue is resolved in this release.
- PR 2554581: A logging group goes out of memory and writing headers on rotated log files fails
If a logging group goes out of memory, writing headers on rotated log files in the syslog service might fail. In some cases, writing vmkernel logs might also fail.
This issue is resolved in this release. For more information, see VMware knowledge base article 79032.
- PR 2497571: The hostd service might fail due to a race condition caused by heavy use of the NamespaceManager API
If a virtual machine is the target of a namespace operation while another call is running to delete or unregister that virtual machine, a race condition might cause the hostd service to fail. This can happen in environments where the NamespaceManager API is heavily used to communicate with a virtual machine guest OS agent.
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2468970, 2477953 |
Related CVE numbers | N/A |
This patch updates the vmkusb VIB to resolve the following issue:
- PR 2477953: If an ESXi host uses a USB network driver, it might fail with a purple diagnostic screen due to duplicate TX buffers
On rare occasions, if an ESXi host uses a USB network driver, it might fail with a purple diagnostic screen due to duplicate TX buffers. You might see an error similar to
PF Exception 14 in world 66160:usbus0 cdce_bulk_write_callback
.
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2509410 |
Related CVE numbers | N/A |
This patch updates the lpfc VIB to resolve the following issue:
- PR 2509410: If external DIF support for FC HBAs is enabled, an ESXi host might fail with a purple diagnostic screen due to a race condition
An ESXi host might fail with a purple diagnostic screen if it is connecting to a storage array by using external DIF and a node path is destroyed at the same time. You can see an error for the
lpfc_external_dif_cmpl
function in the logs.
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
Related CVE numbers | N/A |
This patch updates the net-vmxnet3 VIB.
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2527261, 2568254, 2568260, 2568268, 2568275, 2568291, 2568294, 2568298, 2568303, 2568323, 2568327 |
Related CVE numbers | N/A |
This patch updates the esx-base, esx-tboot, vsan, and vsanhealth VIBs to resolve the following issues:
- Update to OpenSSL library
The ESXi userworld OpenSSL library is updated to version openssl-1.0.2v.
- Update to the Python library
The Python third-party library is updated to version 3.5.9.
- Update to OpenSSH
The OpenSSH version is updated to 8.1p1.
- Update to the libcurl library
The ESXi userworld libcurl library is updated to libcurl-7.70.
- Update to libxml2 library
The ESXi userworld libxml2 library is updated to version 2.9.10.
- Update of the SQLite database
The SQLite database is updated to version 3.31.1.
- Update to the libPNG library
The libPNG library is updated to libpng-1.6.37.
- Update to the Expat XML parser
The Expat XML parser is updated to version 2.2.9.
- Update to zlib library
The zlib library is updated to version 1.2.11.
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2489779 |
CVE numbers | N/A |
This patch updates the tools-light VIB.
The following VMware Tools ISO images are bundled with ESXi650-202007001:
- windows.iso: VMware Tools 11.1.1 ISO image for Windows Vista (SP2) or later
- winPreVista.iso: VMware Tools 10.0.12 ISO image for Windows 2000, Windows XP, and Windows 2003
- linux.iso: VMware Tools 10.3.22 ISO image for Linux OS with glibc 2.5 or later
The following VMware Tools 10.3.10 ISO images are available for download:
- solaris.iso: VMware Tools image for Solaris
- darwin.iso: VMware Tools image for OS X
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2543392 |
CVE numbers | N/A |
This patch updates the cpu-microcode VIB.
- Intel microcode updates
ESXi650-202007001 updates Intel microcode for ESXi-supported CPUs. See the table for the microcode updates that are currently included:
Code Name | FMS | Plt ID | MCU Rev | MCU Date | Brand Names |
Nehalem EP | 0x106a5 | 0x03 | 0x0000001d | 5/11/2018 | Intel Xeon 35xx Series; Intel Xeon 55xx Series |
Lynnfield | 0x106e5 | 0x13 | 0x0000000a | 5/8/2018 | Intel Xeon 34xx Lynnfield Series |
Clarkdale | 0x20652 | 0x12 | 0x00000011 | 5/8/2018 | Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series |
Arrandale | 0x20655 | 0x92 | 0x00000007 | 4/23/2018 | Intel Core i7-620LE Processor |
Sandy Bridge DT | 0x206a7 | 0x12 | 0x0000002f | 2/17/2019 | Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series |
Westmere EP | 0x206c2 | 0x03 | 0x0000001f | 5/8/2018 | Intel Xeon 56xx Series; Intel Xeon 36xx Series |
Sandy Bridge EP | 0x206d6 | 0x6d | 0x00000621 | 3/4/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series |
Sandy Bridge EP | 0x206d7 | 0x6d | 0x0000071a | 3/24/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series |
Nehalem EX | 0x206e6 | 0x04 | 0x0000000d | 5/15/2018 | Intel Xeon 65xx Series; Intel Xeon 75xx Series |
Westmere EX | 0x206f2 | 0x05 | 0x0000003b | 5/16/2018 | Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series |
Ivy Bridge DT | 0x306a9 | 0x12 | 0x00000021 | 2/13/2019 | Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C |
Haswell DT | 0x306c3 | 0x32 | 0x00000028 | 11/12/2019 | Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series |
Ivy Bridge EP | 0x306e4 | 0xed | 0x0000042e | 3/14/2019 | Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series |
Ivy Bridge EX | 0x306e7 | 0xed | 0x00000715 | 3/14/2019 | Intel Xeon E7-8800/4800/2800-v2 Series |
Haswell EP | 0x306f2 | 0x6f | 0x00000043 | 3/1/2019 | Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series |
Haswell EX | 0x306f4 | 0x80 | 0x00000016 | 6/17/2019 | Intel Xeon E7-8800/4800-v3 Series |
Broadwell H | 0x40671 | 0x22 | 0x00000022 | 11/12/2019 | Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series |
Avoton | 0x406d8 | 0x01 | 0x0000012d | 9/16/2019 | Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series |
Broadwell EP/EX | 0x406f1 | 0xef | 0x0b000038 | 6/18/2019 | Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series |
Skylake SP | 0x50654 | 0xb7 | 0x02006906 | 4/24/2020 | Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series |
Cascade Lake B-0 | 0x50656 | 0xbf | 0x04002f01 | 4/23/2020 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200 |
Cascade Lake | 0x50657 | 0xbf | 0x05002f01 | 4/23/2020 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200 |
Broadwell DE | 0x50662 | 0x10 | 0x0000001c | 6/17/2019 | Intel Xeon D-1500 Series |
Broadwell DE | 0x50663 | 0x10 | 0x07000019 | 6/17/2019 | Intel Xeon D-1500 Series |
Broadwell DE | 0x50664 | 0x10 | 0x0f000017 | 6/17/2019 | Intel Xeon D-1500 Series |
Broadwell NS | 0x50665 | 0x10 | 0x0e00000f | 6/17/2019 | Intel Xeon D-1600 Series |
Skylake H/S | 0x506e3 | 0x36 | 0x000000dc | 4/27/2020 | Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series |
Denverton | 0x506f1 | 0x01 | 0x0000002e | 3/21/2019 | Intel Atom C3000 Series |
Kaby Lake H/S/X | 0x906e9 | 0x2a | 0x000000d6 | 4/23/2020 | Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series |
Coffee Lake | 0x906ea | 0x22 | 0x000000d6 | 4/27/2020 | Intel Xeon E-2100 Series; Intel Xeon E-2200 Series (4 or 6 core) |
Coffee Lake | 0x906eb | 0x02 | 0x000000d6 | 4/23/2020 | Intel Xeon E-2100 Series |
Coffee Lake | 0x906ec | 0x22 | 0x000000d6 | 4/27/2020 | Intel Xeon E-2100 Series |
Coffee Lake Refresh | 0x906ed | 0x22 | 0x000000d6 | 4/23/2020 | Intel Xeon E-2200 Series (8 core) |
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the esx-ui VIB.
Profile Name | ESXi-6.5.0-20200704001-standard |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | July 30, 2020 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2466294, 2454362, 2449192, 2500024, 2477965, 2462102, 2425577, 2468510, 2535758, 2525498, 2511325, 2533654, 2536923, 2411906, 2577439, 2476942, 2561925, 2488525, 2474261, 2387387, 2537030, 2537137, 2569156, 2537073, 2567795, 2474304, 2458185, 2514237, 2541272, 2459344, 2577914, 2569088, 2493976, 2541335, 2327059, 2464910, 2577879, 2541493, 2553475, 2554581, 2497571, 2477953, 2509410 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
On rare occasions, a VMNIC that recovers from a failure under the teaming and failover policy by using beacon probing might not fall back to the active VMNIC.
-
If one or more tasks time out while the list of completed tasks in the hostd service is empty, no task cleanup is scheduled for any consecutive completed task. As a result, memory leaks from uncleaned tasks over time cause the hostd to fail due to exceeding the memory limit.
-
If you attempt to take a screenshot of a secondary virtual machine in vSphere FT, the request might never get a response and the virtual machine remains in a
VM_STATE_CREATE_SCREENSHOT
status. As a result, any subsequent operation, such as reconfigure and migration, fails for such virtual machines with an InvalidState
error until the hostd service is restarted to clear the transient state. -
If many
NamespaceManager.retrieveData
calls run in a short period of time, since the result of such calls can be large and hostd keeps them for 10 minutes by default, the hostd service might fail with an out of memory error. -
If another service requests sensor data from the hostd service during a hardware health check, hostd might fail with an error similar to
IpmiIfcSdrReadRecordId: retry expired
. As a result, you cannot access the ESXi host from the vCenter Server system. -
Following XCOPY requests to Dell/EMC VMAX storage arrays for migration or cloning of virtual machines, the destination VMFS datastore might become corrupt and go offline.
-
The command
enum_instances OMC_IpmiLogRecord
that returns an instance of the CIM class OMC_IpmiLogRecord
, might not work as expected and result in a no instances found error. This happens when RawIpmiProvider
is not loaded and fails to respond to the query. -
The hostd service might start failing repeatedly with an error message similar to
Panic: Memory exceeds hard limit
. The issue occurs if a corrupted Windows ISO image of VMware Tools is active in the productLocker/vmtools
folder. -
If any of the trusted domains goes offline, Likewise returns none or a partial set of group membership for users who are part of any of the groups on the offline domain. As a result, Active Directory authentications for some user groups fails.
-
When you configure smart card authentication on an ESXi host, and upload the domain controller root and intermediate certificates, after you reboot the host, certificates might not be retained. As a result, log in to the host by using a smart card fails after the reboot operation. The issue occurs because the domain controller certificate does not persist after the host reboot.
-
When a large number of ESXi hosts access a VMFS6 datastore, in case of a storage or power outage, all journal blocks in the cluster might experience a memory leak. This results in failures to open or mount VMFS volumes.
-
In the Summary tab of the Hosts and Cluster inventory view in the vSphere Client, you might see an error message such as
could not reach isolation address 6.0.0.0
for some ESXi hosts in a cluster with vCenter Server High Availability enabled, without having set such an address. The message does not report a real issue and does not indicate that vCenter Server High Availability might not function as expected. -
VMware ESXi assigns short names, called aliases, to devices such as network adapters, storage adapters, and graphics devices. For example,
vmnic0
or vmhba1
. ESXi assigns aliases in a predictable order by physical location based on the location information obtained from ESXi host firmware sources such as the SMBIOS table. In releases earlier than ESXi650-202007001, the ESXi VMkernel does not support an extension to the SMBIOS Type 9 (System Slot) record that is defined in SMBIOS specification version 3.2. As a result, ESXi assigns some aliases in an incorrect order on ESXi hosts. -
vSphere Host Profiles does not manage some settings in the
/etc/vmware/config
file. When a host profile is applied to a cluster, such settings, including some for EVC, are removed unintentionally. As a result, some EVC functionalities are not operational. For example, unmasked CPUs expose to workloads. -
In rare cases, many closely spaced memory errors cause performance issues that might lead to ESXi hosts failing with a purple diagnostic screen and panic errors on multiple physical CPUs.
-
The Host name or IP Network uplink redundancy lost alarm reports the loss of uplink redundancy on a vSphere standard or a distributed switch for an ESXi host. In some cases, when more than one VMNIC is down, the alarm resets to Green even when one of the VMNICs is up, while others might still be down.
-
If you change the
Disk.SchedNumReqOutstanding
(DSNRO) parameter to align with a modified LUN queue depth, the value does not persist after a reboot.
For instance, if you change the DSNRO value by entering the following command:
esxcli storage core device set -O | --sched-num-req-outstanding 8 -d device_ID
after a reboot, when you run the command esxcli storage core device list -d device_ID
to verify your changes, you see a different output than expected, such as:
No of outstanding IOs with competing worlds: 32
.
DSNRO controls the maximum number of outstanding I/O requests that all virtual machines can issue to the LUN and inconsistent changes in the parameter might lead to performance degradation. -
NFS datastores might have space limits per user directory and if one directory exceeds the quota, I/O operations to other user directories also fail with a
NO_SPACE
error. As a result, some virtual machines stop running. On the Summary tab, you see a message similar to:
There is no more space for virtual disk '<>'. You might be able to continue this session by freeing disk space on the relevant volume, and clicking _Retry. Click Cancel to terminate this session.
-
If the VMNIC default gateway is set in a host profile but is not set on a target host, a host remediation operation might fail with the message
Remediation cannot start due to error
. -
If you use a third-party LSI CIM provider on an ESXi host and disable WBEM, the sfcb service might fail with a core dump. In the VMware Host Client, under Storage > Hardware, you see a message such as
The Small Footprint Cim Broker Daemon(SFCBD) is running, but no data has been reported
. You may need to install a CIM provider for your storage provider.
The issue occurs because the LSI CIM provider might take more time to shut down than sfcb tolerates. -
ESXi hosts fail with a purple diagnostic screen and an error message such as
BlueScreen: #PF Exception 14 in world 66633:Cmpl-vmhba1- IP 0x41801ad0dbf3 addr 0x49
. The failure is due to a race condition in Logical Volume Manager (LVM) operations. -
On rare occasions, while processing network interface-related data that the underlying network layers return, an ESXi host might fail with errors in the
vmkernel-zdump.1
file such as:
#PF Exception 14 in world 82187:hostd-probe IP 0x41802f2ee91a addr 0x430fd590a000
and
UserSocketInetBSDToLinuxIfconf@(user)#+0x1b6 stack: 0x430fd58ea4b0
-
If the
/etc/vmware/hostd/config.xml
has the <outputtofiles></outputtofiles>
parameter set to true
and a datastore experiences unusual slowness, the hostd service stops responding while trying to report the storage issue. As a result, one or more ESXi hosts become unresponsive. -
An ESXi host might fail with a purple diagnostic screen displaying an error such as
#PF Exception 14 in word Id 90984419:CBRC Memory IP 0x418039d15bc9 addr 0x8
. The failure is due to a race condition in the CBRC module, which is also used in the View Storage Accelerator feature in vCenter Server to cache virtual machine disk data. -
By default, the hostd service retains completed tasks for 10 minutes. If too many tasks come at the same time, for instance calls to get the current system time from the ServiceInstance managed object, hostd might not be able to process them all and fail with an out of memory message.
-
Due to very rare race condition in the screen shot capture code, the VMX service might fail during MKS operations, such as taking a screen shot, in the VMware Remote Console. As a result, virtual machines might become unresponsive and you see a message such as
VMware ESX unrecoverable error: (vmx)
in the vmware.log
file. -
When storage array failover or failback operations take place, NFS 4.1 datastores fall into an All-Paths-Down (APD) state. However, after the operations complete, the datastores might remain in APD state and become inaccessible.
-
Unused port groups might cause high response times of all tasks run on ESXi hosts, such as loading the vSphere Client, power on virtual machines and editing settings.
-
A quick sequence of tasks in some operations, for example after reverting a virtual machine to a memory snapshot, might trigger a race condition that causes the hostd service to fail with a
/var/core/hostd-worker-zdump.000
. As a result, the ESXi host loses connectivity to the vCenter Server system. -
If you use a FCH SATA controller for direct I/O or passthrough operations on AMD Zen platforms, the default reset method of the controller might cause unexpected platform reboots.
-
This fix sets Storage Array Type Plugin (SATP), Path Selection Policy (PSP), iops and Claim Options as default values with the following rules:
esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V HUAWEI -M "XSG1" -c tpgs_on --psp="VMW_PSP_RR" -O "iops=1" -e "HUAWEI arrays with ALUA support"
esxcli storage nmp satp rule add -s VMW_SATP_DEFAULT_AA -V HUAWEI -M "XSG1" -c tpgs_off --psp="VMW_PSP_RR" -O "iops=1" -e " HUAWEI arrays without ALUA support"
-
During an update of a vSphere Distributed Switch, many hostspec events triggered at the same time might cause an out of memory condition of the hostd service. As a result, the hostd agent loses connectivity to the vCenter Server system.
-
A virtual machine suddenly stops responding and when you try to reset the virtual machine, it cannot recover. In the backtrace, you see multiple RESET calls to a device linked to the virtual machine RDM disk. The issue is that if the RDM disk is greater than 2 TB, the
getLbaStatus
command during the shutdown process might continuously loop and prevent the proper shutdown of the virtual machine. -
A NULL pointer exception in the
FSAtsUndoUpgradedOptLocks ()
method might cause an ESXi host to fail with a purple diagnostic screen. In the backtrace of the vmkernel-zdump.1 file, you see messages similar to FSAtsUndoUpgradedOptLocks (…) at bora/modules/vmkernel/vmfs/fs3ATSCSOps.c:665. Before the failure, you might see warnings that the VMFS heap is exhausted, such as WARNING: Heap: 3534: Heap vmfs3 already at its maximum size. Cannot expand
. -
When ESXi hosts fail with a purple diagnostic screens you cannot see the full backtrace, because regardless if the core dump is extracted from a coredump partition or a dump file, logs seem truncated. The issue occurs because the compressed offset of dump files is not 4K-aligned. Sometimes, 2KB are written to the last block of a VMFS extent and the remaining 2KB to a new VMFS extent. As a result, core dumps are partial.
-
If an ESXi host in a vSAN cluster fails to handle certain non-disruptive vSAN exceptions, it might fail to respond to commands. The hostd service fails with a core dump.
-
In the vSphere Client or vSphere Web Client, you see many health warnings and receive mails for potential hardware failure, even though the actual sensor state has not changed. The issue occurs when the CIM service resets all sensors to an unknown state if a failure in fetching sensor status from IPMI happens.
-
Objects that comply with the storage policy of a vSAN environment might appear as non-compliant. A deprecated attribute in the storage policy causes the issue.
-
After a restart of the ESXi management services, the CPU and memory stats of the ESXi host in the VMware Host Client display 0, because quick-stats providers might remain disconnected.
-
If a logging group goes out of memory, writing headers on rotation log files in the syslog service might fail. Sometimes logs from the vmkernel might also fail.
-
A race condition when a virtual machine is a target of a namespace operation but at the same time another call is running to delete or unregister that virtual machine might cause the hostd service to fail. This can happen in environments where the NamespaceManager API is heavily used to communicate with a virtual machine guest OS agent.
-
On rare occasions, if an ESXi host uses a USB network driver, it might fail with a purple diagnostic screen due to duplicate TX buffers. You might see an error similar to
PF Exception 14 in world 66160:usbus0 cdce_bulk_write_callback
. -
An ESXi host might fail with a purple diagnostic screen if it is connecting to a storage array by using external DIF and a node path is destroyed at the same time. You can see an error for the
lpfc_external_dif_cmpl
function in the logs.
Profile Name | ESXi-6.5.0-20200704001-no-tools |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | July 30, 2020 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2466294, 2454362, 2449192, 2500024, 2477965, 2462102, 2425577, 2468510, 2535758, 2525498, 2511325, 2533654, 2536923, 2411906, 2577439, 2476942, 2561925, 2488525, 2474261, 2387387, 2537030, 2537137, 2569156, 2537073, 2567795, 2474304, 2458185, 2514237, 2541272, 2459344, 2577914, 2569088, 2493976, 2541335, 2327059, 2464910, 2577879, 2541493, 2553475, 2554581, 2497571, 2477953, 2509410 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
On rare occasions, a VMNIC that recovers from a failure under the teaming and failover policy by using beacon probing might not fall back to the active VMNIC.
-
If one or more tasks time out while the list of completed tasks in the hostd service is empty, no task cleanup is scheduled for any consecutive completed task. As a result, memory leaks from uncleaned tasks over time cause the hostd to fail due to exceeding the memory limit.
-
If you attempt to take a screenshot of a secondary virtual machine in vSphere FT, the request might never get a response and the virtual machine remains in a
VM_STATE_CREATE_SCREENSHOT
status. As a result, any subsequent operation, such as reconfigure and migration, fails for such virtual machines with an InvalidState
error until the hostd service is restarted to clear the transient state. -
If many
NamespaceManager.retrieveData
calls run in a short period of time, since the result of such calls can be large and hostd keeps them for 10 minutes by default, the hostd service might fail with an out of memory error. -
If another service requests sensor data from the hostd service during a hardware health check, hostd might fail with an error similar to
IpmiIfcSdrReadRecordId: retry expired
. As a result, you cannot access the ESXi host from the vCenter Server system. -
Following XCOPY requests to Dell/EMC VMAX storage arrays for migration or cloning of virtual machines, the destination VMFS datastore might become corrupt and go offline.
-
The command
enum_instances OMC_IpmiLogRecord
that returns an instance of the CIM class OMC_IpmiLogRecord
, might not work as expected and result in a no instances found error. This happens when RawIpmiProvider
is not loaded and fails to respond to the query. -
The hostd service might start failing repeatedly with an error message similar to
Panic: Memory exceeds hard limit
. The issue occurs if a corrupted Windows ISO image of VMware Tools is active in the productLocker/vmtools
folder. -
If any of the trusted domains goes offline, Likewise returns none or a partial set of group membership for users who are part of any of the groups on the offline domain. As a result, Active Directory authentications for some user groups fails.
-
When you configure smart card authentication on an ESXi host, and upload the domain controller root and intermediate certificates, after you reboot the host, certificates might not be retained. As a result, log in to the host by using a smart card fails after the reboot operation. The issue occurs because the domain controller certificate does not persist after the host reboot.
-
When a large number of ESXi hosts access a VMFS6 datastore, in case of a storage or power outage, all journal blocks in the cluster might experience a memory leak. This results in failures to open or mount VMFS volumes.
-
In the Summary tab of the Hosts and Cluster inventory view in the vSphere Client, you might see an error message such as
could not reach isolation address 6.0.0.0
for some ESXi hosts in a cluster with vCenter Server High Availability enabled, without having set such an address. The message does not report a real issue and does not indicate that vCenter Server High Availability might not function as expected. -
VMware ESXi assigns short names, called aliases, to devices such as network adapters, storage adapters, and graphics devices. For example,
vmnic0
or vmhba1
. ESXi assigns aliases in a predictable order by physical location based on the location information obtained from ESXi host firmware sources such as the SMBIOS table. In releases earlier than ESXi650-202007001, the ESXi VMkernel does not support an extension to the SMBIOS Type 9 (System Slot) record that is defined in SMBIOS specification version 3.2. As a result, ESXi assigns some aliases in an incorrect order on ESXi hosts. -
vSphere Host Profiles does not manage some settings in the
/etc/vmware/config
file. When a host profile is applied to a cluster, such settings, including some for EVC, are removed unintentionally. As a result, some EVC functionalities are not operational. For example, unmasked CPUs expose to workloads. -
In rare cases, many closely spaced memory errors cause performance issues that might lead to ESXi hosts failing with a purple diagnostic screen and panic errors on multiple physical CPUs.
-
The Host name or IP Network uplink redundancy lost alarm reports the loss of uplink redundancy on a vSphere standard or a distributed switch for an ESXi host. In some cases, when more than one VMNIC is down, the alarm resets to Green even when one of the VMNICs is up, while others might still be down.
-
If you change the
Disk.SchedNumReqOutstanding
(DSNRO) parameter to align with a modified LUN queue depth, the value does not persist after a reboot.
For instance, if you change the DSNRO value by entering the following command:
esxcli storage core device set -O | --sched-num-req-outstanding 8 -d device_ID
after a reboot, when you run the command esxcli storage core device list -d device_ID
to verify your changes, you see a different output than expected, such as:
No of outstanding IOs with competing worlds: 32
.
DSNRO controls the maximum number of outstanding I/O requests that all virtual machines can issue to the LUN and inconsistent changes in the parameter might lead to performance degradation. -
NFS datastores might have space limits per user directory and if one directory exceeds the quota, I/O operations to other user directories also fail with a
NO_SPACE
error. As a result, some virtual machines stop running. On the Summary tab, you see a message similar to:
There is no more space for virtual disk '<>'. You might be able to continue this session by freeing disk space on the relevant volume, and clicking _Retry. Click Cancel to terminate this session.
-
If the VMNIC default gateway is set in a host profile but is not set on a target host, a host remediation operation might fail with the message
Remediation cannot start due to error
. -
If you use a third-party LSI CIM provider on an ESXi host and disable WBEM, the sfcb service might fail with a core dump. In the VMware Host Client, under Storage > Hardware, you see a message such as
The Small Footprint Cim Broker Daemon(SFCBD) is running, but no data has been reported
. You may need to install a CIM provider for your storage provider.
The issue occurs because the LSI CIM provider might take more time to shut down than sfcb tolerates. -
ESXi hosts fail with a purple diagnostic screen and an error message such as
BlueScreen: #PF Exception 14 in world 66633:Cmpl-vmhba1- IP 0x41801ad0dbf3 addr 0x49
. The failure is due to a race condition in Logical Volume Manager (LVM) operations. -
On rare occasions, while processing network interface-related data that the underlying network layers return, an ESXi host might fail with errors in the
vmkernel-zdump.1
file such as:
#PF Exception 14 in world 82187:hostd-probe IP 0x41802f2ee91a addr 0x430fd590a000
and
UserSocketInetBSDToLinuxIfconf@(user)#+0x1b6 stack: 0x430fd58ea4b0
-
If the
/etc/vmware/hostd/config.xml
has the <outputtofiles></outputtofiles>
parameter set to true
and a datastore experiences unusual slowness, the hostd service stops responding while trying to report the storage issue. As a result, one or more ESXi hosts become unresponsive. -
An ESXi host might fail with a purple diagnostic screen displaying an error such as
#PF Exception 14 in word Id 90984419:CBRC Memory IP 0x418039d15bc9 addr 0x8
. The failure is due to a race condition in the CBRC module, which is also used in the View Storage Accelerator feature in vCenter Server to cache virtual machine disk data. -
By default, the hostd service retains completed tasks for 10 minutes. If too many tasks come at the same time, for instance calls to get the current system time from the ServiceInstance managed object, hostd might not be able to process them all and fail with an out of memory message.
-
Due to very rare race condition in the screen shot capture code, the VMX service might fail during MKS operations, such as taking a screen shot, in the VMware Remote Console. As a result, virtual machines might become unresponsive and you see a message such as
VMware ESX unrecoverable error: (vmx)
in the vmware.log
file. -
When storage array failover or failback operations take place, NFS 4.1 datastores fall into an All-Paths-Down (APD) state. However, after the operations complete, the datastores might remain in APD state and become inaccessible.
-
Unused port groups might cause high response times of all tasks run on ESXi hosts, such as loading the vSphere Client, power on virtual machines and editing settings.
-
A quick sequence of tasks in some operations, for example after reverting a virtual machine to a memory snapshot, might trigger a race condition that causes the hostd service to fail with a core dump at
/var/core/hostd-worker-zdump.000
. As a result, the ESXi host loses connectivity to the vCenter Server system. -
If you use an FCH SATA controller for direct I/O or passthrough operations on AMD Zen platforms, the default reset method of the controller might cause unexpected platform reboots.
-
This fix sets the Storage Array Type Plugin (SATP), Path Selection Policy (PSP), iops, and Claim Options to default values with the following rules (a verification sketch follows this list):
esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V HUAWEI -M "XSG1" -c tpgs_on --psp="VMW_PSP_RR" -O "iops=1" -e "HUAWEI arrays with ALUA support"
esxcli storage nmp satp rule add -s VMW_SATP_DEFAULT_AA -V HUAWEI -M "XSG1" -c tpgs_off --psp="VMW_PSP_RR" -O "iops=1" -e " HUAWEI arrays without ALUA support"
-
During an update of a vSphere Distributed Switch, many hostspec events triggered at the same time might cause an out of memory condition of the hostd service. As a result, the hostd agent loses connectivity to the vCenter Server system.
-
A virtual machine suddenly stops responding, and when you try to reset the virtual machine, it cannot recover. In the backtrace, you see multiple RESET calls to a device linked to the virtual machine RDM disk. The issue is that if the RDM disk is greater than 2 TB, the
getLbaStatus
command during the shutdown process might continuously loop and prevent the proper shutdown of the virtual machine. -
A NULL pointer exception in the
FSAtsUndoUpgradedOptLocks ()
method might cause an ESXi host to fail with a purple diagnostic screen. In the backtrace of thevmkernel-zdump.1
file you see messages similar toFSAtsUndoUpgradedOptLocks (…) at bora/modules/vmkernel/vmfs/fs3ATSCSOps.c:665
. Before the failure, you might see warnings that the VMFS heap is exhausted, such asWARNING: Heap: 3534: Heap vmfs3 already at its maximum size. Cannot expand
. -
When ESXi hosts fail with a purple diagnostic screen, you cannot see the full backtrace: regardless of whether the core dump is extracted from a coredump partition or a dump file, logs seem truncated. The issue occurs because the compressed offset of dump files is not 4K-aligned. Sometimes, 2 KB are written to the last block of a VMFS extent and the remaining 2 KB to a new VMFS extent. As a result, core dumps are partial.
-
If an ESXi host in a vSAN cluster fails to handle certain non-disruptive vSAN exceptions, it might fail to respond to commands. The hostd service fails with a core dump.
-
In the vSphere Client or vSphere Web Client, you see many health warnings and receive mails for potential hardware failure, even though the actual sensor state has not changed. The issue occurs because the CIM service resets all sensors to an unknown state when it fails to fetch the sensor status from IPMI.
-
Objects that comply with the storage policy of a vSAN environment might appear as non-compliant. A deprecated attribute in the storage policy causes the issue.
-
After a restart of the ESXi management services, the CPU and memory stats in the VMware Host Client display 0, because quick-stats providers might remain disconnected.
-
If a logging group runs out of memory, the syslog service might fail to write headers on rotated log files. Logging from the vmkernel might also fail at times.
-
A race condition might cause the hostd service to fail when a virtual machine is the target of a namespace operation while, at the same time, another call is running to delete or unregister that virtual machine. This can happen in environments where the NamespaceManager API is heavily used to communicate with a virtual machine guest OS agent.
-
On rare occasions, if an ESXi host uses a USB network driver, it might fail with a purple diagnostic screen due to duplicate TX buffers. You might see an error similar to
PF Exception 14 in world 66160:usbus0 cdce_bulk_write_callback
. -
An ESXi host might fail with a purple diagnostic screen if it is connecting to a storage array by using external DIF and a node path is destroyed at the same time. You can see an error for the
lpfc_external_dif_cmpl
function in the logs.
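For reference, the DSNRO value noted in the first item above can be inspected and adjusted per device with ESXCLI. The following is a minimal sketch, assuming a hypothetical device identifier naa.60060160a1b2c3d4; substitute the ID of your own LUN. The current value appears in the No of outstanding IOs with competing worlds field of the device listing:
esxcli storage core device list -d naa.60060160a1b2c3d4
esxcli storage core device set -d naa.60060160a1b2c3d4 -O 32
The second command sets the device back to 32 outstanding I/O requests, the value shown in the sample output above.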
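For the HUAWEI claim rules listed in the SATP item above, a minimal way to verify that the rules are present after applying the patch, assuming no custom rules override them, is to list the configured SATP rules and check for the HUAWEI entries:
esxcli storage nmp satp rule list -s VMW_SATP_ALUA
esxcli storage nmp satp rule list -s VMW_SATP_DEFAULT_AA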
-
Profile Name | ESXi-6.5.0-20200701001s-standard |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | July 30, 2020 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2527261, 2568254, 2568260, 2568268, 2568275, 2568291, 2568294, 2568298, 2568303, 2568323, 2568327, 2489779, 2572558, 2543392 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
The ESXi userworld OpenSSL library is updated to version openssl-1.0.2v.
-
The Python third-party library is updated to version 3.5.9.
-
The OpenSSH version is updated to 8.1p1.
-
The ESXi userworld libcurl library is updated to libcurl-7.70.
-
The ESXi userworld libxml2 library is updated to version 2.9.10.
-
The SQLite database is updated to version 3.31.1.
-
The libPNG library is updated to libpng-1.6.37.
-
The Expat XML parser is updated to version 2.2.9.
-
The zlib library is updated to version 1.2.11.
The following VMware Tools ISO images are bundled with ESXi650-202007001:
- windows.iso: VMware Tools 11.1.1 ISO image for Windows Vista (SP2) or later
- winPreVista.iso: VMware Tools 10.0.12 ISO image for Windows 2000, Windows XP, and Windows 2003
- linux.iso: VMware Tools 10.3.22 ISO image for Linux OS with glibc 2.5 or later
The following VMware Tools 10.3.10 ISO images are available for download:
- solaris.iso: VMware Tools image for Solaris
- darwin.iso: VMware Tools image for OS X
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
- VMware Tools 11.1.1 Release Notes
- Updating to VMware Tools 10 - Must Read
- VMware Tools for hosts provisioned with Auto Deploy
- Updating VMware Tools
ESXi650-202007001 updates Intel microcode for ESXi-supported CPUs. See the table for the microcode updates that are currently included:
Code Name | FMS | Plt ID | MCU Rev | MCU Date | Brand Names
Nehalem EP | 0x106a5 | 0x03 | 0x0000001d | 5/11/2018 | Intel Xeon 35xx Series; Intel Xeon 55xx Series
Lynnfield | 0x106e5 | 0x13 | 0x0000000a | 5/8/2018 | Intel Xeon 34xx Lynnfield Series
Clarkdale | 0x20652 | 0x12 | 0x00000011 | 5/8/2018 | Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series
Arrandale | 0x20655 | 0x92 | 0x00000007 | 4/23/2018 | Intel Core i7-620LE Processor
Sandy Bridge DT | 0x206a7 | 0x12 | 0x0000002f | 2/17/2019 | Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series
Westmere EP | 0x206c2 | 0x03 | 0x0000001f | 5/8/2018 | Intel Xeon 56xx Series; Intel Xeon 36xx Series
Sandy Bridge EP | 0x206d6 | 0x6d | 0x00000621 | 3/4/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
Sandy Bridge EP | 0x206d7 | 0x6d | 0x0000071a | 3/24/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
Nehalem EX | 0x206e6 | 0x04 | 0x0000000d | 5/15/2018 | Intel Xeon 65xx Series; Intel Xeon 75xx Series
Westmere EX | 0x206f2 | 0x05 | 0x0000003b | 5/16/2018 | Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series
Ivy Bridge DT | 0x306a9 | 0x12 | 0x00000021 | 2/13/2019 | Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C
Haswell DT | 0x306c3 | 0x32 | 0x00000028 | 11/12/2019 | Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series
Ivy Bridge EP | 0x306e4 | 0xed | 0x0000042e | 3/14/2019 | Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series
Ivy Bridge EX | 0x306e7 | 0xed | 0x00000715 | 3/14/2019 | Intel Xeon E7-8800/4800/2800-v2 Series
Haswell EP | 0x306f2 | 0x6f | 0x00000043 | 3/1/2019 | Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series
Haswell EX | 0x306f4 | 0x80 | 0x00000016 | 6/17/2019 | Intel Xeon E7-8800/4800-v3 Series
Broadwell H | 0x40671 | 0x22 | 0x00000022 | 11/12/2019 | Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series
Avoton | 0x406d8 | 0x01 | 0x0000012d | 9/16/2019 | Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series
Broadwell EP/EX | 0x406f1 | 0xef | 0x0b000038 | 6/18/2019 | Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series
Skylake SP | 0x50654 | 0xb7 | 0x02006906 | 4/24/2020 | Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series
Cascade Lake B-0 | 0x50656 | 0xbf | 0x04002f01 | 4/23/2020 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
Cascade Lake | 0x50657 | 0xbf | 0x05002f01 | 4/23/2020 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
Broadwell DE | 0x50662 | 0x10 | 0x0000001c | 6/17/2019 | Intel Xeon D-1500 Series
Broadwell DE | 0x50663 | 0x10 | 0x07000019 | 6/17/2019 | Intel Xeon D-1500 Series
Broadwell DE | 0x50664 | 0x10 | 0x0f000017 | 6/17/2019 | Intel Xeon D-1500 Series
Broadwell NS | 0x50665 | 0x10 | 0x0e00000f | 6/17/2019 | Intel Xeon D-1600 Series
Skylake H/S | 0x506e3 | 0x36 | 0x000000dc | 4/27/2020 | Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series
Denverton | 0x506f1 | 0x01 | 0x0000002e | 3/21/2019 | Intel Atom C3000 Series
Kaby Lake H/S/X | 0x906e9 | 0x2a | 0x000000d6 | 4/23/2020 | Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series
Coffee Lake | 0x906ea | 0x22 | 0x000000d6 | 4/27/2020 | Intel Xeon E-2100 Series; Intel Xeon E-2200 Series (4 or 6 core)
Coffee Lake | 0x906eb | 0x02 | 0x000000d6 | 4/23/2020 | Intel Xeon E-2100 Series
Coffee Lake | 0x906ec | 0x22 | 0x000000d6 | 4/27/2020 | Intel Xeon E-2100 Series
Coffee Lake Refresh | 0x906ed | 0x22 | 0x000000d6 | 4/23/2020 | Intel Xeon E-2200 Series (8 core)
-
Profile Name | ESXi-6.5.0-20200701001s-no-tools |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | July 30, 2020 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2527261, 2568254, 2568260, 2568268, 2568275, 2568291, 2568294, 2568298, 2568303, 2568323, 2568327, 2489779, 2572558, 2543392 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
The ESXi userworld OpenSSL library is updated to version openssl-1.0.2v.
-
The Python third-party library is updated to version 3.5.9.
-
The OpenSSH version is updated to 8.1p1.
-
The ESXi userworld libcurl library is updated to libcurl-7.70.
-
The ESXi userworld libxml2 library is updated to version 2.9.10.
-
The SQLite database is updated to version 3.31.1.
-
The libPNG library is updated to libpng-1.6.37.
-
The Expat XML parser is updated to version 2.2.9.
-
The zlib library is updated to version 1.2.11.
The following VMware Tools ISO images are bundled with ESXi650-202007001:
- windows.iso: VMware Tools 11.1.1 ISO image for Windows Vista (SP2) or later
- winPreVista.iso: VMware Tools 10.0.12 ISO image for Windows 2000, Windows XP, and Windows 2003
- linux.iso: VMware Tools 10.3.22 ISO image for Linux OS with glibc 2.5 or later
The following VMware Tools 10.3.10 ISO images are available for download:
- solaris.iso: VMware Tools image for Solaris
- darwin.iso: VMware Tools image for OS X
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
- VMware Tools 11.1.1 Release Notes
- Updating to VMware Tools 10 - Must Read
- VMware Tools for hosts provisioned with Auto Deploy
- Updating VMware Tools
ESXi650-202007001 updates Intel microcode for ESXi-supported CPUs. See the table for the microcode updates that are currently included:
Code Name | FMS | Plt ID | MCU Rev | MCU Date | Brand Names
Nehalem EP | 0x106a5 | 0x03 | 0x0000001d | 5/11/2018 | Intel Xeon 35xx Series; Intel Xeon 55xx Series
Lynnfield | 0x106e5 | 0x13 | 0x0000000a | 5/8/2018 | Intel Xeon 34xx Lynnfield Series
Clarkdale | 0x20652 | 0x12 | 0x00000011 | 5/8/2018 | Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series
Arrandale | 0x20655 | 0x92 | 0x00000007 | 4/23/2018 | Intel Core i7-620LE Processor
Sandy Bridge DT | 0x206a7 | 0x12 | 0x0000002f | 2/17/2019 | Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series
Westmere EP | 0x206c2 | 0x03 | 0x0000001f | 5/8/2018 | Intel Xeon 56xx Series; Intel Xeon 36xx Series
Sandy Bridge EP | 0x206d6 | 0x6d | 0x00000621 | 3/4/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
Sandy Bridge EP | 0x206d7 | 0x6d | 0x0000071a | 3/24/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
Nehalem EX | 0x206e6 | 0x04 | 0x0000000d | 5/15/2018 | Intel Xeon 65xx Series; Intel Xeon 75xx Series
Westmere EX | 0x206f2 | 0x05 | 0x0000003b | 5/16/2018 | Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series
Ivy Bridge DT | 0x306a9 | 0x12 | 0x00000021 | 2/13/2019 | Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C
Haswell DT | 0x306c3 | 0x32 | 0x00000028 | 11/12/2019 | Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series
Ivy Bridge EP | 0x306e4 | 0xed | 0x0000042e | 3/14/2019 | Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series
Ivy Bridge EX | 0x306e7 | 0xed | 0x00000715 | 3/14/2019 | Intel Xeon E7-8800/4800/2800-v2 Series
Haswell EP | 0x306f2 | 0x6f | 0x00000043 | 3/1/2019 | Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series
Haswell EX | 0x306f4 | 0x80 | 0x00000016 | 6/17/2019 | Intel Xeon E7-8800/4800-v3 Series
Broadwell H | 0x40671 | 0x22 | 0x00000022 | 11/12/2019 | Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series
Avoton | 0x406d8 | 0x01 | 0x0000012d | 9/16/2019 | Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series
Broadwell EP/EX | 0x406f1 | 0xef | 0x0b000038 | 6/18/2019 | Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series
Skylake SP | 0x50654 | 0xb7 | 0x02006906 | 4/24/2020 | Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series
Cascade Lake B-0 | 0x50656 | 0xbf | 0x04002f01 | 4/23/2020 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
Cascade Lake | 0x50657 | 0xbf | 0x05002f01 | 4/23/2020 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
Broadwell DE | 0x50662 | 0x10 | 0x0000001c | 6/17/2019 | Intel Xeon D-1500 Series
Broadwell DE | 0x50663 | 0x10 | 0x07000019 | 6/17/2019 | Intel Xeon D-1500 Series
Broadwell DE | 0x50664 | 0x10 | 0x0f000017 | 6/17/2019 | Intel Xeon D-1500 Series
Broadwell NS | 0x50665 | 0x10 | 0x0e00000f | 6/17/2019 | Intel Xeon D-1600 Series
Skylake H/S | 0x506e3 | 0x36 | 0x000000dc | 4/27/2020 | Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series
Denverton | 0x506f1 | 0x01 | 0x0000002e | 3/21/2019 | Intel Atom C3000 Series
Kaby Lake H/S/X | 0x906e9 | 0x2a | 0x000000d6 | 4/23/2020 | Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series
Coffee Lake | 0x906ea | 0x22 | 0x000000d6 | 4/27/2020 | Intel Xeon E-2100 Series; Intel Xeon E-2200 Series (4 or 6 core)
Coffee Lake | 0x906eb | 0x02 | 0x000000d6 | 4/23/2020 | Intel Xeon E-2100 Series
Coffee Lake | 0x906ec | 0x22 | 0x000000d6 | 4/27/2020 | Intel Xeon E-2100 Series
Coffee Lake Refresh | 0x906ed | 0x22 | 0x000000d6 | 4/23/2020 | Intel Xeon E-2200 Series (8 core)
Known Issues
The known issues are grouped as follows.
Upgrade Issues
- Upgrades to ESXi650-202007001 might fail due to the replacement of an expired digital signing certificate and key
Upgrades to ESXi650-202007001 from some earlier versions of ESXi might fail due to the replacement of an expired digital signing certificate and key. If you use ESXCLI, you see a message
Could not find a trusted signer
. If you use vSphere Update Manager, you see a message similar to cannot execute upgrade script on host.
Workaround: For more details on the issue and workaround, see VMware knowledge base article 76555.
- PR 2467075: Due to unavailable P2M slots during migration by using vSphere vMotion, virtual machines might fail with a purple diagnostic screen or power off
Memory resourcing for virtual machines that require more memory, such as those with 3D devices, might cause an overflow of the P2M buffer during migration by using vSphere vMotion. As a result, shared memory pages break, and the virtual machine fails with a purple diagnostic screen or powers off. You might see an error message similar to
P2M reservation failed after max retries.
Workaround: Follow the steps described in VMware knowledge base article 76387 to manually configure the P2M buffer.
- PR 2511839: Faulty hardware acceleration implementation on SSDs on a VMFS datastore might lead to VMFS metadata inconsistencies and data integrity issues
VMFS datastores rely on hardware acceleration that the vSphere API for Array Integration (VAAI) provides to achieve zeroing of blocks to isolate virtual machines and promote security. Some SSDs advertise support for hardware acceleration, and specifically for the WRITE SAME (Zero) SCSI operation, for zeroing blocks. VMFS metadata also relies on blocks being zeroed before use. However, some SSDs do not zero blocks as expected, but complete WRITE SAME requests with GOOD status. As a result, ESXi considers non-zeroed blocks as zeroed, which causes VMFS metadata inconsistencies and data integrity issues. To avoid such data inconsistency issues, which are critical to the normal operation of the vCenter Server system and to data availability, WRITE SAME (Zero) operations are disabled by default for local SSDs as of ESXi 7.0. As a consequence, certain operations on VMFS datastores might slow down.
Workaround: You can turn on WRITE SAME (Zero) SCSI operations by using the following ESXCLI command:
esxcli storage core device vaai status set -d <device_id> -Z 1
For example: esxcli storage core device vaai status set -d naa.55cd2e414fb3c242 -Z 1
If you see performance issues with VMFS datastores, confirm with your vendor whether SSDs with hardware acceleration actually support WRITE SAME (Zero) SCSI operations.
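To confirm whether the setting took effect, you can query the VAAI status of the device. A minimal sketch, reusing the example device ID from the workaround above:
esxcli storage core device vaai status get -d naa.55cd2e414fb3c242
The output includes a Zero Status field that indicates whether WRITE SAME (Zero) offload is reported as supported for the device.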