Release Date: AUG 20, 2020
Build Details
Download Filename: | ESXi670-202008001.zip |
Build: | 16713306 |
Download Size: | 475.3 MB |
md5sum: | 86930fff7617f010f38d2e00230fc6d4 |
sha1checksum: | 0b097e0be71f8133a6e71c705be9128b5a13b39e |
Host Reboot Required: | Yes |
Virtual Machine Migration or Shutdown Required: | Yes |
Bulletins
Bulletin ID | Category | Severity |
ESXi670-202008401-BG | Bugfix | Critical |
ESXi670-202008402-BG | Bugfix | Important |
ESXi670-202008403-BG | Bugfix | Important |
ESXi670-202008404-BG | Bugfix | Important |
ESXi670-202008405-BG | Bugfix | Moderate |
ESXi670-202008406-BG | Bugfix | Important |
ESXi670-202008101-SG | Security | Important |
ESXi670-202008102-SG | Security | Important |
ESXi670-202008103-SG | Security | Moderate |
ESXi670-202008104-SG | Security | Important |
Rollup Bulletin
This rollup bulletin contains the latest VIBs with all the fixes since the initial release of ESXi 6.7.
Bulletin ID | Category | Severity |
ESXi670-202008001 | Bugfix | Critical |
IMPORTANT: For clusters using VMware vSAN, you must first upgrade the vCenter Server system. Upgrading only the ESXi hosts is not supported.
Before an upgrade, always check the VMware Product Interoperability Matrix for compatible upgrade paths from earlier versions of ESXi, vCenter Server, and vSAN to the current version.
Image Profiles
VMware patch and update releases contain general and critical image profiles. The general release image profile applies to the new bug fixes.
Image Profile Name |
ESXi-6.7.0-20200804001-standard |
ESXi-6.7.0-20200804001-no-tools |
ESXi-6.7.0-20200801001s-standard |
ESXi-6.7.0-20200801001s-no-tools |
For more information about the individual bulletins, see the Download Patches page and the Resolved Issues section.
Patch Download and Installation
The typical way to apply patches to ESXi hosts is by using VMware vSphere Update Manager. For details, see the Installing and Administering VMware vSphere Update Manager documentation.
You can also update ESXi hosts by manually downloading the patch ZIP file from the VMware download page and installing the VIBs by using the esxcli software vib update command. Additionally, you can update the system by using the image profile and the esxcli software profile update command.
For more information, see the vSphere Command-Line Interface Concepts and Examples and the vSphere Upgrade Guide.
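A minimal command sketch of the manual workflow, assuming the downloaded bundle has been copied to a datastore (the /vmfs/volumes/datastore1/ path is a hypothetical example):
esxcli software vib update -d /vmfs/volumes/datastore1/ESXi670-202008001.zip
esxcli software profile update -d /vmfs/volumes/datastore1/ESXi670-202008001.zip -p ESXi-6.7.0-20200804001-standard
Because this patch requires a host reboot, place the host in maintenance mode before applying the update and reboot it afterward.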
Resolved Issues
The resolved issues are grouped as follows.
- ESXi670-202008401-BG
- ESXi670-202008402-BG
- ESXi670-202008403-BG
- ESXi670-202008404-BG
- ESXi670-202008405-BG
- ESXi670-202008406-BG
- ESXi670-202008101-SG
- ESXi670-202008102-SG
- ESXi670-202008103-SG
- ESXi670-202008104-SG
- ESXi-6.7.0-20200804001-standard
- ESXi-6.7.0-20200804001-no-tools
- ESXi-6.7.0-20200801001s-standard
- ESXi-6.7.0-20200801001s-no-tools
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2537083, 2536682, 2537593, 2543048, 2559163, 2575416, 2583807, 2536169, 2605361, 2577878, 2560686, 2577911, 2582235, 2521297, 2544574, 2573004, 2560546, 2560960, 2601778, 2586088, 2337784, 2535458, 2550983, 2553476, 2555823, 2573557, 2577438, 2583029, 2585883, 2594996, 2583814, 2583813, 2517190, 2596979, 2567370, 2594956, 2499715, 2581524, 2566470, 2566683, 2560998, 2605033, 2585590, 2539907, 2565242, 2552832, 2584586 |
CVE numbers | N/A |
Updates the esx-base, esx-update, vsan, and vsanhealth VIBs to resolve the following issues:
- PR 2537083: ESXi hosts fail with an error about the Content Based Read Cache (CBRC) memory
An ESXi host might fail with a purple diagnostic screen displaying an error such as #PF Exception 14 in world Id 90984419:CBRC Memory IP 0x418039d15bc9 addr 0x8. The failure is due to a race condition in the CBRC module, which is also used in the View Storage Accelerator feature in vCenter Server to cache virtual machine disk data.
This issue is resolved in this release.
- PR 2536682: Device aliases might not be in the expected order on ESXi hosts that conform to SMBIOS 3.2 or later
VMware ESXi assigns short names, called aliases, to devices such as network adapters, storage adapters, and graphics devices. For example, vmnic0 or vmhba1. ESXi assigns aliases in a predictable order by physical location based on the location information obtained from ESXi host firmware sources such as the SMBIOS table. In releases earlier than ESXi670-20200801, the ESXi VMkernel does not support an extension to the SMBIOS Type 9 (System Slot) record that is defined in SMBIOS specification version 3.2. As a result, ESXi assigns some aliases in an incorrect order on ESXi hosts.
This issue is resolved in this release. However, ESXi670-20200801 completely fixes the issue only for fresh installs. If you upgrade from an earlier release to ESXi670-20200801, aliases still might not be reassigned in the expected order and you might need to manually change device aliases. For more information on correcting the order of device aliases, see VMware knowledge base article 2091560.
- PR 2537593: ESXi hosts in a cluster with vCenter Server High Availability enabled report an error that a non-existent isolation address cannot be reached
In the Summary tab of the Hosts and Clusters inventory view in the vSphere Client, you might see an error message such as could not reach isolation address 6.0.0.0 for some ESXi hosts in a cluster with vCenter Server High Availability enabled, even though you have not set such an address. The message does not report a real issue, and vCenter Server High Availability continues to function as expected.
This issue is resolved in this release.
- PR 2543048: ESXi hosts fail with an error message on a purple diagnostic screen such as #PF Exception 14 in world 66633:Cmpl-vmhba1- IP 0x41801ad0dbf3 addr 0x49
ESXi hosts fail with an error message on a purple diagnostic screen such as #PF Exception 14 in world 66633:Cmpl-vmhba1- IP 0x41801ad0dbf3 addr 0x49. The failure is due to a race condition in Logical Volume Manager (LVM) operations.
This issue is resolved in this release.
- PR 2559163: Multiple ESXi hosts fail with purple diagnostic screen and a panic error
If a NULL pointer is dereferenced while querying for an uplink that is part of a teaming policy, but no active uplink is currently available, ESXi hosts might fail with a purple diagnostic screen. On the screen, you see an error such as Panic Message: @BlueScreen: #PF Exception 14 in world 2097208:netqueueBala IP 0x418032902747 addr 0x10.
This issue is resolved in this release. The fix adds an additional check to the Team_IsUplinkPortMatch method to ensure at least one active uplink exists before calling the getUplink method.
- PR 2575416: You cannot change the maximum number of outstanding I/O requests that all virtual machines can issue to a LUN
If you change the Disk.SchedNumReqOutstanding (DSNRO) parameter to align with a modified LUN queue depth, the value does not persist after a reboot. For instance, if you change the DSNRO value by running the command esxcli storage core device set -O | --sched-num-req-outstanding 8 -d device_ID and then reboot, when you run the command esxcli storage core device list -d device_ID to verify your changes, you see a different output than expected, such as: No of outstanding IOs with competing worlds: 32.
DSNRO controls the maximum number of outstanding I/O requests that all virtual machines can issue to the LUN, and inconsistent changes in the parameter might lead to performance degradation.
This issue is resolved in this release. The fix handles exceptions thrown by external SATP to ensure DSNRO values persist after a reboot. A command sketch of the set-and-verify sequence follows this item.
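A minimal sketch of the set-and-verify sequence described in this item, where device_ID is a placeholder for your device identifier:
esxcli storage core device set -d device_ID --sched-num-req-outstanding 8
# Reboot the host, then confirm that the value persists:
esxcli storage core device list -d device_ID | grep -i outstanding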
- PR 2583807: If too many tasks, such as calls for current time, come in quick succession, the hostd service fails with an out of memory error
By default, the hostd service retains completed tasks for 10 minutes. If too many tasks come in at the same time, for instance calls to get the current system time from the ServiceInstance managed object, hostd might not be able to process them all and fails with an out of memory message.
This issue is resolved in this release.
- PR 2536169: ESXi host fails with a purple diagnostic screen and an error for the FSAtsUndoUpgradedOptLocks() method in the backtrace
A NULL pointer exception in the FSAtsUndoUpgradedOptLocks() method might cause an ESXi host to fail with a purple diagnostic screen. In the vmkernel-zdump.1 file, you see messages such as FSAtsUndoUpgradedOptLocks (…) at bora/modules/vmkernel/vmfs/fs3ATSCSOps.c:665.
Before the failure, you might see warnings that the VMFS heap is exhausted, such as WARNING: Heap: 3534: Heap vmfs3 already at its maximum size. Cannot expand.
This issue is resolved in this release.
- PR 2605361: After reverting a virtual machine to a memory snapshot, an ESXi host disconnects from the vCenter Server system
A quick sequence of tasks in some operations, for example after reverting a virtual machine to a memory snapshot, might trigger a race condition that causes the hostd service to fail with a /var/core/hostd-worker-zdump.000 file. As a result, the ESXi host loses connectivity to the vCenter Server system.
This issue is resolved in this release.
- PR 2577878: You see many health warnings in the vSphere Client or vSphere Web Client and mails for potential hardware failure
In the vSphere Client or vSphere Web Client, you see many health warnings and receive mails about potential hardware failures, even though the actual sensor state has not changed. The issue occurs because the CIM service resets all sensors to an unknown state when fetching the sensor status from IPMI fails.
This issue is resolved in this release.
- PR 2560686: The small footprint CIM broker (SFCB) fails while fetching the class information of third-party provider classes
SFCB fails due to a segmentation fault while querying, with the getClass command, third-party provider classes such as OSLS_InstCreation, OSLS_InstDeletion, and OSLS_InstModification under the root/emc/host namespace.
This issue is resolved in this release.
- PR 2577911: You must manually add claim rules to an ESXi host for HUAWEI XSG1 storage arrays to set a different configuration
This fix sets the Storage Array Type Plugin (SATP), Path Selection Policy (PSP), IOPS, and Claim Options to the following default values:
esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V HUAWEI -M "XSG1" -c tpgs_on --psp="VMW_PSP_RR" -O "iops=1" -e "HUAWEI arrays with ALUA support"
esxcli storage nmp satp rule add -s VMW_SATP_DEFAULT_AA -V HUAWEI -M "XSG1" -c tpgs_off --psp="VMW_PSP_RR" -O "iops=1" -e "HUAWEI arrays without ALUA support"
This issue is resolved in this release. A sketch for verifying the new claim rules follows this item.
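As a verification sketch, not part of the fix itself, you can list the NMP SATP rules after applying the patch and confirm that the HUAWEI defaults are present:
esxcli storage nmp satp rule list | grep HUAWEI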
- PR 2582235: The Target Port Group (TPG) ID value does not update as part of periodic scans
If the TPG changes for a path, the TPG ID updates only after an ESXi reboot, but not at the time of the change. For example, if a target port moves to a different controller, the TPG ID value does not update for the path.
This issue is resolved in this release. The fix updates the TPG ID for a path at the time of the change. A sketch for checking the per-path configuration follows this item.
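As a hedged verification sketch, you can inspect the per-path SATP and path selection configuration, which for devices claimed by VMW_SATP_ALUA includes the ALUA target port group details (device_ID is a placeholder):
esxcli storage nmp path list -d device_ID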
- PR 2521297: The net-lbt service fails intermittently
The net-lbt daemon fails with an error such as ...Admission failure in path: lbt/net-lbt.7320960/uw.7320960. As a result, the load-based teaming (LBT) policy does not work as expected. The issue occurs if the net-lbt daemon resource pool runs out of memory.
This issue is resolved in this release.
- PR 2544574: In case of a memory corruption, the hostd service might remain unresponsive instead of failing, and random ESXi hosts become unresponsive as well
In case of a memory corruption event, the hostd service might not fail and restart as expected but stay unresponsive until a manual restart. As a result, random ESXi hosts become unresponsive as well.
This issue is resolved in this release.
- PR 2573004: Unmounting any VMFS datastore sets coredump files to inactive for all other datastores on a vCenter Server system
Unmounting any VMFS datastore sets the status of coredump files that belong to other datastores in the vCenter Server system to inactive. You must manually reset the status to active.
This issue is resolved in this release. A sketch for reactivating a coredump file follows this item.
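A hedged sketch for checking and reactivating a file-based coredump target; the dump file path shown is hypothetical:
esxcli system coredump file list
esxcli system coredump file set -p /vmfs/volumes/datastore1/vmkdump/host.dumpfile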
- PR 2560546: An ESXi host fails with an error such as BlueScreen: #PF Exception 14 in world 2100600:storageRM IP 0x41801b1e588a addr 0x160
If the available memory is not sufficient to create unmap heaps during the initialization of an unmap operation, the hostd service starts a cleanup routine. During the cleanup routine, an uninitialized variable might cause the ESXi host to fail with an error such as BlueScreen: #PF Exception 14 in world 2100600:storageRM IP 0x41801b1e588a addr 0x160.
This issue is resolved in this release.
- PR 2560960: After a successful remediation, ESXi hosts still appear as non-compliant in the vSphere Client and vSphere Web Client
Even after a successful remediation, you can still see errors in the vSphere Client and vSphere Web Client that the ESXi hosts are not compliant due to modified iSCSI target parameters.
This issue is resolved in this release. For more information, see VMware knowledge base article 67580.
- PR 2601778: When migrating virtual machines between vSphere Virtual Volumes datastores, the source VM disks remain undeleted
In certain cases, such as when a VASA provider for a vSphere Virtual Volumes datastore is not reachable but does not return an error, for instance a transport error or a provider timeout, the source VM disks remain undeleted after migrating virtual machines between vSphere Virtual Volumes datastores. As a result, the source datastore capacity is not correct.
This issue is resolved in this release.
- PR 2586088: A virtual machine cloned to a different ESXi host might be unresponsive for a minute
A virtual machine clone operation involves a snapshot of the source VM followed by creating a clone from that snapshot. The snapshot of the source virtual machine is deleted after the clone operation is complete. If the source virtual machine is on a vSphere Virtual Volumes datastore in one ESXi host and the clone virtual machine is created on another ESXi host, deleting the snapshot of the source VM might take some time. As a result, the cloned VM stays unresponsive for 50 to 60 seconds and might cause disruption of applications running on the source VM.
This issue is resolved in this release.
- PR 2337784: Virtual machines on a VMware vSphere High Availability-enabled cluster display as unprotected when powered on
If an ESXi host in a vSphere HA-enabled cluster using a vSphere Virtual Volumes datastore fails to create the .vSphere-HA folder, vSphere HA configuration fails for the entire cluster. This issue occurs due to a possible race condition between ESXi hosts to create the .vSphere-HA folder in the shared vSphere Virtual Volumes datastore.
This issue is resolved in this release.
- PR 2535458: After a stateless reboot, only one of two vmknics connected to the same NSX-T logical switch is created
When two vmknics are connected to the same NSX-T logical switch, after a stateless reboot, only one vmknic is created, because of an error in the portgroup mapping logic.
This issue is resolved in this release.
- PR 2550983: The datapathd service repeatedly fails on VMware NSX Edge virtual machines
The datapathd service repeatedly fails on NSX Edge virtual machines and the entire node becomes unresponsive. The failure occurs if you use the pyVmomi API to change the MAC limit policy, defined by using the macLimitPolicy parameter. The API of the host service accepts only uppercase values, such as DROP and ALLOW, and if you provide a value in lowercase, such as Drop, the command fails. As a result, datapathd fails as well.
This issue is resolved in this release. With this fix, the hostd service accepts macLimitPolicy parameters in both lowercase and uppercase.
- PR 2553476: CPU and memory stats of an ESXi host in the VMware Host Client display 0 after a restart of the ESXi management services
After a restart of the ESXi management services, the CPU and memory stats in the VMware Host Client display 0, because quick-stats providers might remain disconnected.
This issue is resolved in this release.
- PR 2555823: Multiple virtual machines cannot power on after loss of connectivity to storage
If a storage outage occurs while VMFS5 or VMFS6 volumes close, when storage becomes available, journal replay fails because on-disk locks for some transactions have already been released during the previous volume close. As a result, all further transactions are blocked.
In the vmkernel logs, you see an error such as:
WARNING: HBX: 5454: Replay of journal on vol 'iSCSI-PLAB-01' failed: Lost previously held disk lock
This issue is resolved in this release.
- PR 2573557: An ESXi host becomes unresponsive and you see an error such as hostd detected to be non-responsive in the Direct Console User Interface (DCUI)
An ESXi host might lose connectivity to your vCenter Server system and you cannot access the host by using either the vSphere Client or the vSphere Web Client. In the DCUI, you see the message ALERT: hostd detected to be non-responsive. The issue occurs due to a memory corruption in the CIM plug-in while fetching sensor data for periodic hardware health checks.
This issue is resolved in this release.
- PR 2577438: A burst of corrected memory errors might cause performance issues or ESXi hosts failing with a purple diagnostic screen
In rare cases, many closely spaced memory errors cause performance issues that might lead to ESXi hosts failing with a purple diagnostic screen and errors such as panic on multiple physical CPUs.
This issue is resolved in this release. However, although the ESXi tolerance to closely spaced memory errors is enhanced, performance issues are still possible. In such cases, you might need to replace the physical memory.
- PR 2583029: Some vSphere vMotion operations fail every time when an ESXi host goes into maintenance mode
If you put an ESXi host into maintenance mode and migrate virtual machines by using vSphere vMotion, some operations might fail with an error such as A system error occurred: in the vSphere Client or the vSphere Web Client.
In the hostd.log, you can see the following error:
2020-01-10T16:55:51.655Z warning hostd[2099896] [Originator@6876 sub=Vcsvc.VMotionDst.5724640822704079343 opID=k3l22s8p-5779332-auto-3fvd3-h5:70160850-b-01-b4-3bbc user=vpxuser:<user>] TimeoutCb: Expired
The issue occurs if vSphere vMotion fails to get all required resources before the defined vSphere Virtual Volumes waiting time expires, due to slow storage or a slow VASA provider.
This issue is resolved in this release. The fix makes sure vSphere vMotion operations are not interrupted by vSphere Virtual Volumes timeouts.
- PR 2585883: In an NSX-T environment, virtual machine and ESXi host operations fail with item could not be found error
Virtual machine operations, such as cloning, migrating, taking or removing snapshots, or ESXi host operations, such as add or remove a datastore, intermittently fail in an NSX-T environment. In the vpxd logs, and in the vSphere Client or vSphere Web Client, you see an error such as:
The object or item referred to could not be found. DVS c5 fb 29 dd c8 5b 4a fc-b2 d0 ee cd f6 b2 3e 7b cannot be found
The VMkernel module responsible for the VLAN MTU health reports causes the issue.
This issue is resolved in this release.
- PR 2594996: Inconsistency in the vSAN deduplication metadata causes an ESXi host failure
If an ESXi host in a vSAN environment encounters an inconsistency in the deduplication metadata, the host might fail with a purple diagnostic screen. In the backtrace, you see an error such as:
2019-12-12T19:16:34.464Z cpu0:2099114)@BlueScreen: Re-formatting a valid dedup metadata block
This issue is resolved in this release. vSAN marks the affected disk groups as offline.
- PR 2583814: An ESXi host in a vSAN environment might fail when an object with more than 64 copies of data is de-allocated in memory
In rare cases, if the Delta Component feature is enabled, an object might have more than 64 copies of data, or the equivalent in mirror copies. The in-memory object allocation and deallocation functions might corrupt the memory, leading to a failure of the ESXi host with a purple diagnostic screen. In the backtrace, you see an error such as:
PANIC bora/vmkernel/main/dlmalloc.c:4934 - Usage error in dlmalloc
This issue is resolved in this release.
- PR 2583813: A vSAN object has an excessive number of data copies after a cluster recovers from a power failure
After a vSAN cluster experiences a power failure and all ESXi hosts recover, vSAN might reconfigure an object to have too many data copies. Such copies consume resources unnecessarily.
This issue is resolved in this release.
- PR 2517190: Unable to execute RVC commands when the vCenter Server system is configured with a custom HTTPS port
When your vCenter Server system is configured with a custom port for HTTPS, RVC might attempt to connect to the default port 443. As a result, vSAN commands such as vsan.host_info might fail.
This issue is resolved in this release.
- PR 2596979: Unable to deploy tiny and large witness appliance types by using CLI tools
When invoked from the vSphere Client, the vSAN witness appliance OVF deployment supports all witness appliance sizes. Due to a code change, OVF deployment of tiny and large instances fails when you use CLI tools such as ovftool or PowerCLI.
This issue is resolved in this release.
- PR 2567370: ESX hosts randomly disconnect from Active Directory or fail due to Likewise services running out of memory
ESX hosts might randomly disconnect from Active Directory or fail with an out of memory error for some Likewise services. Memory leaks in some of the Active Directory operations performed by Likewise services cause the issue.
This issue is resolved in this release. For ESXi hosts with smart card authentication enabled that might still face the issue, see VMware knowledge base article 78968.
- PR 2594956: The hostd service fails due to an invalid UTF8 string for numeric sensor base-unit property
If the
getBaseUnitString
function returns a non-UTF8 string for the max value of a unit description array, the hostd service fails with a core dump. You see an error such as:
[Hostd-zdump] 0x00000083fb085957 in Vmacore::PanicExit (msg=msg@entry=0x840b57eee0 "Validation failure")
This issue is resolved in this release.
- PR 2499715: A virtual machine suddenly stops responding and when you try to reset the virtual machine, it cannot recover
A virtual machine suddenly stops responding, and when you try to reset the virtual machine, it cannot recover. In the backtrace, you see multiple RESET calls to a device linked to the virtual machine RDM disk. The issue is that if the RDM disk is larger than 2 TB, the getLbaStatus command might loop continuously during the shutdown process and prevent the proper shutdown of the virtual machine.
This issue is resolved in this release. The fix makes sure the getLbaStatus command always returns a result.
- PR 2581524: Heartbeat status of guest virtual machines intermittently changes between yellow and green
Under certain load conditions, the sampling frequency of guest virtual machines for the hostd service and the VMX service might be out of sync and cause intermittent changes of the vim.VirtualMachine.guestHeartbeatStatus parameter from green to yellow. These changes do not necessarily indicate an issue with the operation of the guest virtual machines, because this is a rare condition.
This issue is resolved in this release.
- PR 2566470: If the total datastore capacity of ESXi hosts across a vCenter Server system exceeds 4 PB, an ESXi host might fail with a purple diagnostic screen due to an out of memory issue
LFBCInfo is an in-memory data structure that tracks some resource allocation details for the datastores in an ESXi host. If the total datastore capacity of ESXi hosts across a vCenter Server system exceeds 4 PB, LFBCInfo might run out of memory. As a result, an ESXi host might fail with a purple diagnostic screen during some routine operations, such as migrating virtual machines by using Storage vMotion.
In the backtrace, you see an error similar to: ^[[7m2019-07-20T08:54:18.772Z cpu29:xxxx)WARNING: Res6: 2545: 'xxxxx': out of slab elems trying to allocate LFBCInfo (2032 allocs)^[[0m
This issue is resolved in this release. The fix increases the limit on the memory pool for LFBCInfo. However, to make sure that the fix works, you must manually configure the memory pool by using the command vsish -e set /config/VMFS3/intOpts/LFBCSlabSizeMaxMB 128. The LFBCSlabSizeMaxMB parameter is part of the ESXi Host Advanced Settings and defines the maximum size (in MB) to which the VMFS affinity manager cluster cache is allowed to grow.
- PR 2566683: Deleted objects in a partitioned cluster appear as inaccessible
In a partitioned cluster, object deletion across the partitions sometimes fails to complete. The deleted objects appear as inaccessible objects.
This issue is resolved in this release. The fix enhances the tracking of deleted objects by the Entity Persistence Daemon (EPD) to prevent leakage of discarded components.
- PR 2560998: Unmap operations might cause I/O latency to guest virtual machines
If all snapshots of a booted virtual machine on a VMFS datastore are deleted, or after VMware Storage vMotion operations, unmap operations might start failing and cause slow I/O performance of the virtual machine. The issue occurs because certain virtual machine operations change the underlying disk unmap granularity of the guest OS. If the guest OS does not automatically refresh the unmap granularity, the VMFS base disk and snapshot disks might have different unmap granularity based on their storage layout. This issue occurs only when the last snapshot of a virtual machine is deleted or if the virtual machine is migrated to a target that has different unmap granularity from the source.
This issue is resolved in this release. The fix prevents the effect of failing unmap operations on the I/O performance of virtual machines. However, to ensure that unmap operations do not fail, you must reboot the virtual machine or use guest OS-specific solutions to refresh the unmap granularity.
- PR 2605033: VMware vRealize Network Insight receives reports for dropped packets from NSX-T Data Center logical ports
In an NSX-T Data Center environment, vRealize Network Insight might receive many warning events for dropped packets from logical ports. The issue occurs because the dropped packets counter statistics are calculated incorrectly. The warnings do not indicate real networking issues.
This issue is resolved in this release.
- PR 2585590: Unrecoverable medium error in vSAN disk groups
In rare conditions, certain blocks containing vSAN metadata in a device might fail with an unrecoverable medium error. This causes data inaccessibility in the disk group. A message similar to the following is logged in the vmkernel.log:
2020-08-12T13:16:13.170Z cpu1:1000341424)ScsiDeviceIO: SCSICompleteDeviceCommand:4267: Cmd(0x45490152eb40) 0x28, CmdSN 0x11 from world 0 to dev "mpx.vmhba0:C0:T3:L0" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x3 0x10 0x0
2020-08-12T13:16:13.170Z cpu1:1000341424)LSOMCommon: IORETRY_handleCompletionOnError:1723: Throttled: 0x454bc05ff900 IO type 264 (READ) isOrdered:NO isSplit:NO isEncr:NO since 0 msec status Read error
This issue is resolved in this release. The auto re-create disk group feature automatically discards bad blocks and recreates the disk group. It waits for 60 minutes, and then resyncs from a good working copy. Once resync completes, the cluster becomes compliant again. This feature is disabled by default. If you experience the unrecoverable medium error, turn the auto re-create disk group feature on at the ESXi host level.
- PR 2539907: Virtual machines intermittently lose connectivity
A VMFS datastore might lose track of an actively managed resource lock due to a temporary storage accessibility issue, such as an APD state. As a result, some virtual machines on other ESXi hosts intermittently lose connectivity.
This issue is resolved in this release.
- PR 2565242: ESXi hosts might fail with a purple diagnostic screen due to a deadlock in the logging infrastructure in case of excessive logging
Excessive logging might result in a race condition that causes a deadlock in the logging infrastructure. The deadlock results in an infinite loop in a non-preemptive context. As a result, you see no-heartbeat warnings in the vmkernel logs. ESXi hosts fail with a purple diagnostic screen and a panic error such as PSOD: 2020-03-08T17:07:20.614Z cpu73:3123688)@BlueScreen: PCPU 38: no heartbeat (1/3 IPIs received).
This issue is resolved in this release.
- PR 2552832: ESXi does not detect a PCIe hot added device
If you hot add a PCIe device, ESXi might not detect and enumerate the device.
This issue is resolved in this release.
- PR 2584586: Intermittent high latency might trigger vSAN health check alarms
The existing vSAN network latency check is sensitive to intermittent high network latency that might not impact the workload running on the vSAN environment. As a result, if the network latency between two nodes exceeds a certain threshold, you see health warnings, but they might not indicate a real problem with the environment.
This issue is resolved in this release. The algorithm for monitoring the vSAN network latencies is improved to prevent false alarms.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the vmkusb
VIB.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2604372 |
CVE numbers | N/A |
Updates the nvme
VIB to resolve the following issue:
- PR 2604372: SSD firmware updates by using third-party tools might fail because the ESXi NVMe drivers do not permit NVMe Opcodes
The NVMe driver provides a management interface that allows you to build tools to pass through NVMe admin commands. However, if the data transfer direction for vendor-specific commands is set to device-to-host, some vendor-specific write commands cannot complete successfully. For example, a flash command from the Marvell CLI utility fails to flash the firmware of NVMe SSD on an HPE Tinker controller.
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the vmware-esx-esxcli-nvme-plugin
VIB.
Patch Category | Bugfix |
Patch Severity | Moderate |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2532914 |
CVE numbers | N/A |
Updates the vmw-ahci
VIB to resolve the following issue:
- PR 2532914: Virtual machines intermittently become unresponsive
Virtual machines might intermittently become unresponsive due to an issue with vmw_ahci drivers of versions earlier than 2.0.4-1. In the vmkernel.log file, you can see a lot of messages such as:
…vmw_ahci[00000017]: ahciAbortIO:(curr) HWQD: 11 BusyL: 0
or
ahciRequestIo:ERROR: busy queue is full, this should NEVER happen!
This issue is resolved in this release.
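As a hedged check, you can confirm the vmw-ahci VIB version installed on a host before and after patching:
esxcli software vib list | grep -i ahci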
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the ntg3
VIB.
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the esx-base, esx-update, vsan, and vsanhealth VIBs.
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2543391 |
CVE numbers | N/A |
This patch updates the cpu-microcode
VIB to resolve the following issue:
- Intel microcode updates
ESXi670-202008001 updates Intel microcode for ESXi-supported CPUs. See the table for the microcode updates that are currently included:
Code Name | FMS | Plt ID | MCU Rev | MCU Date | Brand Names |
Nehalem EP | 0x106a5 | 0x03 | 0x0000001d | 5/11/2018 | Intel Xeon 35xx Series; Intel Xeon 55xx Series |
Lynnfield | 0x106e5 | 0x13 | 0x0000000a | 5/8/2018 | Intel Xeon 34xx Lynnfield Series |
Clarkdale | 0x20652 | 0x12 | 0x00000011 | 5/8/2018 | Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series |
Arrandale | 0x20655 | 0x92 | 0x00000007 | 4/23/2018 | Intel Core i7-620LE Processor |
Sandy Bridge DT | 0x206a7 | 0x12 | 0x0000002f | 2/17/2019 | Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series |
Westmere EP | 0x206c2 | 0x03 | 0x0000001f | 5/8/2018 | Intel Xeon 56xx Series; Intel Xeon 36xx Series |
Sandy Bridge EP | 0x206d6 | 0x6d | 0x00000621 | 3/4/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series |
Sandy Bridge EP | 0x206d7 | 0x6d | 0x0000071a | 3/24/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series |
Nehalem EX | 0x206e6 | 0x04 | 0x0000000d | 5/15/2018 | Intel Xeon 65xx Series; Intel Xeon 75xx Series |
Westmere EX | 0x206f2 | 0x05 | 0x0000003b | 5/16/2018 | Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series |
Ivy Bridge DT | 0x306a9 | 0x12 | 0x00000021 | 2/13/2019 | Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C |
Haswell DT | 0x306c3 | 0x32 | 0x00000028 | 11/12/2019 | Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series |
Ivy Bridge EP | 0x306e4 | 0xed | 0x0000042e | 3/14/2019 | Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series |
Ivy Bridge EX | 0x306e7 | 0xed | 0x00000715 | 3/14/2019 | Intel Xeon E7-8800/4800/2800-v2 Series |
Haswell EP | 0x306f2 | 0x6f | 0x00000043 | 3/1/2019 | Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series |
Haswell EX | 0x306f4 | 0x80 | 0x00000016 | 6/17/2019 | Intel Xeon E7-8800/4800-v3 Series |
Broadwell H | 0x40671 | 0x22 | 0x00000022 | 11/12/2019 | Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series |
Avoton | 0x406d8 | 0x01 | 0x0000012d | 9/16/2019 | Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series |
Broadwell EP/EX | 0x406f1 | 0xef | 0x0b000038 | 6/18/2019 | Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series |
Skylake SP | 0x50654 | 0xb7 | 0x02006906 | 4/24/2020 | Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series |
Cascade Lake B-0 | 0x50656 | 0xbf | 0x04002f01 | 4/23/2020 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200 |
Cascade Lake | 0x50657 | 0xbf | 0x05002f01 | 4/23/2020 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200 |
Broadwell DE | 0x50662 | 0x10 | 0x0000001c | 6/17/2019 | Intel Xeon D-1500 Series |
Broadwell DE | 0x50663 | 0x10 | 0x07000019 | 6/17/2019 | Intel Xeon D-1500 Series |
Broadwell DE | 0x50664 | 0x10 | 0x0f000017 | 6/17/2019 | Intel Xeon D-1500 Series |
Broadwell NS | 0x50665 | 0x10 | 0x0e00000f | 6/17/2019 | Intel Xeon D-1600 Series |
Skylake H/S | 0x506e3 | 0x36 | 0x000000dc | 4/27/2020 | Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series |
Denverton | 0x506f1 | 0x01 | 0x0000002e | 3/21/2019 | Intel Atom C3000 Series |
Kaby Lake H/S/X | 0x906e9 | 0x2a | 0x000000d6 | 4/23/2020 | Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series |
Coffee Lake | 0x906ea | 0x22 | 0x000000d6 | 4/27/2020 | Intel Xeon E-2100 Series; Intel Xeon E-2200 Series (4 or 6 core) |
Coffee Lake | 0x906eb | 0x02 | 0x000000d6 | 4/23/2020 | Intel Xeon E-2100 Series |
Coffee Lake | 0x906ec | 0x22 | 0x000000d6 | 4/27/2020 | Intel Xeon E-2100 Series |
Coffee Lake Refresh | 0x906ed | 0x22 | 0x000000d6 | 4/23/2020 | Intel Xeon E-2200 Series (8 core) |
Patch Category | Security |
Patch Severity | Moderate |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2591468 |
CVE numbers | N/A |
This patch updates the tools-light
VIB.
The following VMware Tools ISO images are bundled with ESXi670-202008001:
windows.iso: VMware Tools 11.1.1 ISO image for Windows Vista (SP2) or later
linux.iso: VMware Tools 10.3.22 ISO image for Linux OS with glibc 2.5 or later
The following VMware Tools ISO images are available for download:
VMware Tools 11.0.6:
windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2)
VMware Tools 10.0.12:
winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003
VMware Tools 10.3.22:
linux.iso: for Linux OS with glibc 2.5 or later
VMware Tools 10.3.10:
solaris.iso: VMware Tools image for Solaris
darwin.iso: VMware Tools image for OS X
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the net-e1000
VIB.
Profile Name | ESXi-6.7.0-20200804001-standard |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | August 20, 2020 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2537083, 2536682, 2537593, 2543048, 2559163, 2575416, 2583807, 2536169, 2605361, 2577878, 2560686, 2577911, 2582235, 2521297, 2544574, 2573004, 2560546, 2560960, 2601778, 2586088, 2337784, 2535458, 2550983, 2553476, 2555823, 2573557, 2577438, 2583029, 2585883, 2594996, 2583814, 2583813, 2517190, 2596979, 2567370, 2594956, 2499715, 2581524, 2566470, 2566683, 2560998, 2605033, 2585590, 2539907, 2565242, 2552832, 2604372, 2532914, 2584586 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
An ESXi host might fail with a purple diagnostic screen displaying an error such as #PF Exception 14 in world Id 90984419:CBRC Memory IP 0x418039d15bc9 addr 0x8. The failure is due to a race condition in the CBRC module, which is also used in the View Storage Accelerator feature in vCenter Server to cache virtual machine disk data.
VMware ESXi assigns short names, called aliases, to devices such as network adapters, storage adapters, and graphics devices. For example, vmnic0 or vmhba1. ESXi assigns aliases in a predictable order by physical location based on the location information obtained from ESXi host firmware sources such as the SMBIOS table. In releases earlier than ESXi670-20200801, the ESXi VMkernel does not support an extension to the SMBIOS Type 9 (System Slot) record that is defined in SMBIOS specification version 3.2. As a result, ESXi assigns some aliases in an incorrect order on ESXi hosts.
-
In the Summary tab of the Hosts and Clusters inventory view in the vSphere Client, you might see an error message such as could not reach isolation address 6.0.0.0 for some ESXi hosts in a cluster with vCenter Server High Availability enabled, even though you have not set such an address. The message does not report a real issue, and vCenter Server High Availability continues to function as expected.
ESXi hosts fail with an error message on a purple diagnostic screen such as
#PF Exception 14 in world 66633:Cmpl-vmhba1- IP 0x41801ad0dbf3 addr 0x49
. The failure is due to a race condition in Logical Volume Manager (LVM) operations. -
If a NULL pointer is dereferenced while querying for an uplink that is part of a teaming policy but no active uplink is currently available, ESXi hosts might fail with a purple diagnostic screen. On the screen, you see an error such as
Panic Message: @BlueScreen: #PF Exception 14 in world 2097208:netqueueBala IP 0x418032902747 addr 0x10
. -
If you change the Disk.SchedNumReqOutstanding (DSNRO) parameter to align with a modified LUN queue depth, the value does not persist after a reboot. For instance, if you change the DSNRO value by running the command esxcli storage core device set -O | --sched-num-req-outstanding 8 -d device_ID and then reboot, when you run the command esxcli storage core device list -d device_ID to verify your changes, you see a different output than expected, such as: No of outstanding IOs with competing worlds: 32.
DSNRO controls the maximum number of outstanding I/O requests that all virtual machines can issue to the LUN, and inconsistent changes in the parameter might lead to performance degradation.
-
By default, the hostd service retains completed tasks for 10 minutes. If too many tasks come at the same time, for instance calls to get the current system time from the
ServiceInstance
managed object, hostd might not be able to process them all and fail with an out of memory message. -
A NULL pointer exception in the FSAtsUndoUpgradedOptLocks() method might cause an ESXi host to fail with a purple diagnostic screen. In the vmkernel-zdump.1 file, you see messages such as FSAtsUndoUpgradedOptLocks (…) at bora/modules/vmkernel/vmfs/fs3ATSCSOps.c:665.
Before the failure, you might see warnings that the VMFS heap is exhausted, such as WARNING: Heap: 3534: Heap vmfs3 already at its maximum size. Cannot expand.
A quick sequence of tasks in some operations, for example after reverting a virtual machine to a memory snapshot, might trigger a race condition that causes the hostd service to fail with a
/var/core/hostd-worker-zdump.000
file. As a result, the ESXi host loses connectivity to the vCenter Server system. -
In the vSphere Client or vSphere Web Client, you see many health warnings and receive mails for potential hardware failure, even though the actual sensor state has not changed. The issue occurs when the CIM service resets all sensors to an unknown state if a failure in fetching sensor status from IPMI happens.
-
SFCB fails due to a segmentation fault while querying, with the getClass command, third-party provider classes such as OSLS_InstCreation, OSLS_InstDeletion, and OSLS_InstModification under the root/emc/host namespace.
This fix sets the Storage Array Type Plugin (SATP), Path Selection Policy (PSP), IOPS, and Claim Options to the following default values:
esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V HUAWEI -M "XSG1" -c tpgs_on --psp="VMW_PSP_RR" -O "iops=1" -e "HUAWEI arrays with ALUA support"
esxcli storage nmp satp rule add -s VMW_SATP_DEFAULT_AA -V HUAWEI -M "XSG1" -c tpgs_off --psp="VMW_PSP_RR" -O "iops=1" -e "HUAWEI arrays without ALUA support"
-
If the TPG changes for a path, the TPG ID updates only after an ESXi reboot, but not at the time of the change. For example, if a target port moves to a different controller, the TPG ID value does not update for the path.
-
The net-lbt daemon fails with an error such as ...Admission failure in path: lbt/net-lbt.7320960/uw.7320960. As a result, the load-based teaming (LBT) policy does not work as expected. The issue occurs if the net-lbt daemon resource pool runs out of memory.
In case of a memory corruption event, the hostd service might not fail and restart as expected but stay unresponsive until a manual restart. As a result, random ESXi hosts become unresponsive as well.
-
Unmounting any VMFS datastore sets the status of coredump files that belong to other datastores in the vCenter Server system to inactive. You must manually reset the status to active.
-
If during the initialization of an unmap operation the available memory is not sufficient to create unmap heaps, the hostd service starts a cleanup routine. During the cleanup routine, it is possible that an uninitialized variable causes the ESXi host to fail with an error such as
BlueScreen: #PF Exception 14 in world 2100600:storageRM IP 0x41801b1e588a addr 0x160
. -
Even after a successful remediation, you can still see errors in the vSphere Client and vSphere Web Client that the ESXi hosts are not compliant due to modified iSCSI target parameters.
-
In certain cases, such as when a VASA provider for a vSphere Virtual Volumes datastore is not reachable but does not return an error, for instance a transport error or a provider timeout, the source VM disks remain undeleted after migrating virtual machines between vSphere Virtual Volumes datastores. As a result, the source datastore capacity is not correct.
-
A virtual machine clone operation involves a snapshot of the source VM followed by creating a clone from that snapshot. The snapshot of the source virtual machine is deleted after the clone operation is complete. If the source virtual machine is on a vSphere Virtual Volumes datastore in one ESXi host and the clone virtual machine is created on another ESXi host, deleting the snapshot of the source VM might take some time. As a result, the cloned VM stays unresponsive for 50 to 60 seconds and might cause disruption of applications running on the source VM.
-
If an ESXi host in a vSphere HA-enabled cluster using a vSphere Virtual Volumes datastore fails to create the .vSphere-HA folder, vSphere HA configuration fails for the entire cluster. This issue occurs due to a possible race condition between ESXi hosts to create the .vSphere-HA folder in the shared vSphere Virtual Volumes datastore.
When two vmknics are connected to the same NSX-T logical switch, after a stateless reboot, only one vmknic is created, because of an error in the portgroup mapping logic.
-
The datapathd service repeatedly fails on NSX Edge virtual machines and the entire node becomes unresponsive. The failure occurs if you use the pyVmomi API to change the MAC limit policy, defined by using the macLimitPolicy parameter. The API of the host service accepts only uppercase values, such as DROP and ALLOW, and if you provide a value in lowercase, such as Drop, the command fails. As a result, datapathd fails as well.
After a restart of the ESXi management services, the CPU and memory stats in the VMware Host Client display 0, because quick-stats providers might remain disconnected.
-
If a storage outage occurs while VMFS5 or VMFS6 volumes close, when storage becomes available, journal replay fails because on-disk locks for some transactions have already been released during the previous volume close. As a result, all further transactions are blocked.
In the vmkernel logs, you see an error such as:
WARNING: HBX: 5454: Replay of journal on vol 'iSCSI-PLAB-01' failed: Lost previously held disk lock
-
An ESXi host might lose connectivity to your vCenter Server system and you cannot access the host by using either the vSphere Client or the vSphere Web Client. In the DCUI, you see the message ALERT: hostd detected to be non-responsive. The issue occurs due to a memory corruption in the CIM plug-in while fetching sensor data for periodic hardware health checks.
In rare cases, many closely spaced memory errors cause performance issues that might lead to ESXi hosts failing with a purple diagnostic screen and errors such as panic on multiple physical CPUs.
-
If you put an ESXi host into maintenance mode and migrate virtual machines by using vSphere vMotion, some operations might fail with an error such as A system error occurred: in the vSphere Client or the vSphere Web Client.
In the hostd.log, you can see the following error:
2020-01-10T16:55:51.655Z warning hostd[2099896] [Originator@6876 sub=Vcsvc.VMotionDst.5724640822704079343 opID=k3l22s8p-5779332-auto-3fvd3-h5:70160850-b-01-b4-3bbc user=vpxuser:<user>] TimeoutCb: Expired
The issue occurs if vSphere vMotion fails to get all required resources before the defined vSphere Virtual Volumes waiting time expires, due to slow storage or a slow VASA provider.
Virtual machine operations, such as cloning, migrating, taking or removing snapshots, or ESXi host operations, such as add or remove a datastore, intermittently fail in an NSX-T environment. In the vpxd logs, and in the vSphere Client or vSphere Web Client, you see an error such as:
The object or item referred to could not be found. DVS c5 fb 29 dd c8 5b 4a fc-b2 d0 ee cd f6 b2 3e 7b cannot be found
The VMkernel module responsible for the VLAN MTU health reports causes the issue. -
If an ESXi host in a vSAN environment encounters an inconsistency in the deduplication metadata, the host might fail with a purple diagnostic screen. In the backtrace, you see an error such as:
2019-12-12T19:16:34.464Z cpu0:2099114)@BlueScreen: Re-formatting a valid dedup metadata block
-
In rare cases, if the Delta Component feature is enabled, an object might have more than 64 copies of data, or the equivalent in mirror copies. The in-memory object allocation and deallocation functions might corrupt the memory, leading to a failure of the ESXi host with a purple diagnostic screen. In the backtrace, you see an error such as:
PANIC bora/vmkernel/main/dlmalloc.c:4934 - Usage error in dlmalloc
-
After a vSAN cluster experiences a power failure and all ESXi hosts recover, vSAN might reconfigure an object to have too many data copies. Such copies consume resources unnecessarily.
-
When your vCenter Server system is configured with a custom port for HTTPS, RVC might attempt to connect to the default port 443. As a result, vSAN commands such as
vsan.host_info
might fail. -
When invoked from the vSphere Client, the vSAN witness appliance OVF deployment supports all witness appliance sizes. Due to a code change, OVF deployment of tiny and large instances fails when you use CLI tools such as ovftool or PowerCLI.
-
ESX hosts might randomly disconnect from Active Directory or fail with an out of memory error for some Likewise services. Memory leaks in some of the Active Directory operations performed by Likewise services cause the issue.
-
If the
getBaseUnitString
function returns a non-UTF8 string for the max value of a unit description array, the hostd service fails with a core dump. You see an error such as:
[Hostd-zdump] 0x00000083fb085957 in Vmacore::PanicExit (msg=msg@entry=0x840b57eee0 "Validation failure")
-
A virtual machine suddenly stops responding and when you try to reset the virtual machine, it cannot recover. In the backtrace, you see multiple RESET calls to a device linked to the virtual machine RDM disk. The issue is that if the RDM disk is greater than 2 TB, the
getLbaStatus
command during the shutdown process might continuously loop and prevent the proper shutdown of the virtual machine. -
Under certain load conditions, the sampling frequency of guest virtual machines for the hostd service and the VMX service might be out of sync, and cause intermittent changes of the
vim.VirtualMachine.guestHeartbeatStatus
parameter from green to yellow. These changes do not necessarily indicate an issue with the operation of the guest virtual machines, because this is a rare condition. -
LFBCInfo is an in-memory data structure that tracks some resource allocation details for the datastores in an ESXi host. If the total datastore capacity of ESXi hosts across a vCenter Server system exceeds 4 PB, LFBCInfo might run out of memory. As a result, an ESXi host might fail with a purple diagnostic screen during some routine operations, such as migrating virtual machines by using Storage vMotion.
In the backtrace, you see an error similar to: ^[[7m2019-07-20T08:54:18.772Z cpu29:xxxx)WARNING: Res6: 2545: 'xxxx': out of slab elems trying to allocate LFBCInfo (2032 allocs)^[[0m
-
In a partitioned cluster, object deletion across the partitions sometimes fails to complete. The deleted objects appear as inaccessible objects.
-
If all snapshots of a booted virtual machine on a VMFS datastore are deleted, or after VMware Storage vMotion operations, unmap operations might start failing and cause slow I/O performance of the virtual machine. The issue occurs because certain virtual machine operations change the underlying disk unmap granularity of the guest OS. If the guest OS does not automatically refresh the unmap granularity, the VMFS base disk and snapshot disks might have different unmap granularity based on their storage layout. This issue occurs only when the last snapshot of a virtual machine is deleted or if the virtual machine is migrated to a target that has different unmap granularity from the source.
-
In an NSX-T Data Center environment, vRealize Network Insight might receive many warning events for dropped packets from logical ports. The issue occurs because the dropped packets counter statistics are calculated incorrectly. The warnings do not indicate real networking issues.
-
In rare conditions, certain blocks containing vSAN metadata in a device might fail with an unrecoverable medium error. This causes data inaccessibility in the disk group. A message similar to the following is logged in the vmkernel.log:
2020-08-12T13:16:13.170Z cpu1:1000341424)ScsiDeviceIO: SCSICompleteDeviceCommand:4267: Cmd(0x45490152eb40) 0x28, CmdSN 0x11 from world 0 to dev "mpx.vmhba0:C0:T3:L0" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x3 0x10 0x0
2020-08-12T13:16:13.170Z cpu1:1000341424)LSOMCommon: IORETRY_handleCompletionOnError:1723: Throttled: 0x454bc05ff900 IO type 264 (READ) isOrdered:NO isSplit:NO isEncr:NO since 0 msec status Read error
-
A VMFS datastore might lose track of an actively managed resource lock due to a temporary storage accessibility issue, such as an APD state. As a result, some virtual machines on other ESXi hosts intermittently lose connectivity.
-
Excessive logging might result in a race condition that causes a deadlock in the logging infrastructure. The deadlock results in an infinite loop in the non-preemptive context. As a result, you see no-heartbeat warnings in the vmkernel logs. ESXi hosts fail with a purple diagnostic screen and a panic error such as
PSOD: 2020-03-08T17:07:20.614Z cpu73:3123688)@BlueScreen: PCPU 38: no heartbeat (1/3 IPIs received)
. -
If you hot add a PCIe device, ESXi might not detect and enumerate the device.
-
The NVMe driver provides a management interface that allows you to build tools to pass through NVMe admin commands. However, if the data transfer direction for vendor-specific commands is set to device-to-host, some vendor-specific write commands cannot complete successfully. For example, a flash command from the Marvell CLI utility fails to flash the firmware of NVMe SSD on an HPE Tinker controller.
-
Virtual machines might intermittently become unresponsive due to an issue with vmw_ahci drivers of versions earlier than 2.0.4-1. In the vmkernel.log file, you can see a lot of messages such as:
…vmw_ahci[00000017]: ahciAbortIO:(curr) HWQD: 11 BusyL: 0
or
ahciRequestIo:ERROR: busy queue is full, this should NEVER happen!
-
The existing vSAN network latency check is sensitive to intermittent high network latency that might not impact the workload running on the vSAN environment. As a result, if the network latency between two nodes exceeds a certain threshold, you see health warnings, but they might not indicate a real problem with the environment. The algorithm for monitoring the vSAN network latencies is improved to prevent false alarms.
Profile Name | ESXi-6.7.0-20200804001-no-tools |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | August 20, 2020 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2537083, 2536682, 2537593, 2543048, 2559163, 2575416, 2583807, 2536169, 2605361, 2577878, 2560686, 2577911, 2582235, 2521297, 2544574, 2573004, 2560546, 2560960, 2601778, 2586088, 2337784, 2535458, 2550983, 2553476, 2555823, 2573557, 2577438, 2583029, 2585883, 2594996, 2583814, 2583813, 2517190, 2596979, 2567370, 2594956, 2499715, 2581524, 2566470, 2566683, 2560998, 2605033, 2585590, 2539907, 2565242, 2552832, 2604372, 2532914, 2584586 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
An ESXi host might fail with a purple diagnostic screen displaying an error such as
#PF Exception 14 in world Id 90984419:CBRC Memory IP 0x418039d15bc9 addr 0x8
. The failure is due to a race condition in the CBRC module, which is also used in the View Storage Accelerator feature in vCenter Server to cache virtual machine disk data. -
VMware ESXi assigns short names, called aliases, to devices such as network adapters, storage adapters, and graphics devices, for example, vmnic0 or vmhba1. ESXi assigns aliases in a predictable order by physical location, based on the location information obtained from ESXi host firmware sources such as the SMBIOS table. In releases earlier than ESXi670-202008001, the ESXi VMkernel does not support an extension to the SMBIOS Type 9 (System Slot) record that is defined in SMBIOS specification version 3.2. As a result, ESXi assigns some aliases in an incorrect order on such hosts.
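For reference, one general-purpose way to review the aliases a host has assigned is to list its network and storage adapters; this is an illustrative check, not a step required by the fix:
# List network adapter aliases (vmnicX) with their PCI addresses
esxcli network nic list
# List storage adapter aliases (vmhbaX)
esxcli storage core adapter list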
-
In the Summary tab of the Hosts and Cluster inventory view in the vSphere Client, you might see an error message such as
could not reach isolation address 6.0.0.0
for some ESXi hosts in a cluster with vCenter Server High Availability enabled, without having set such an address. The message does not report a real issue and does not indicate that vCenter Server High Availability might not function as expected. -
ESXi hosts fail with an error message on a purple diagnostic screen such as
#PF Exception 14 in world 66633:Cmpl-vmhba1- IP 0x41801ad0dbf3 addr 0x49
. The failure is due to a race condition in Logical Volume Manager (LVM) operations. -
If a NULL pointer is dereferenced while querying for an uplink that is part of a teaming policy but no active uplink is currently available, ESXi hosts might fail with a purple diagnostic screen. On the screen, you see an error such as
Panic Message: @BlueScreen: #PF Exception 14 in world 2097208:netqueueBala IP 0x418032902747 addr 0x10
. -
If you change the
Disk.SchedNumReqOutstanding
(DSNRO) parameter to align with a modified LUN queue depth, the value does not persist after a reboot. For instance, if you change the DSNRO value by entering the following command:esxcli storage core device set -O | --sched-num-req-outstanding 8 -d device_ID
after a reboot, when you run the command
esxcli storage core device list -d device_ID
to verify your changes, you see a different output than expected, such as:No of outstanding IOs with competing worlds: 32
DSNRO controls the maximum number of outstanding I/O requests that all virtual machines can issue to the LUN, and inconsistent changes to the parameter might lead to performance degradation.
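A minimal sketch of setting and then verifying DSNRO for a single device, using the commands quoted above; the device identifier is a placeholder you must replace with your own:
# Placeholder device ID; replace with an identifier from "esxcli storage core device list"
DEVICE=naa.xxxxxxxxxxxxxxxx
# Set the number of outstanding I/Os allowed with competing worlds
esxcli storage core device set --sched-num-req-outstanding 8 -d "$DEVICE"
# Verify the currently applied value
esxcli storage core device list -d "$DEVICE" | grep -i "outstanding IOs"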
-
By default, the hostd service retains completed tasks for 10 minutes. If too many tasks come at the same time, for instance calls to get the current system time from the
ServiceInstance
managed object, hostd might not be able to process them all and fail with an out of memory message. -
A NULL pointer exception in the
FSAtsUndoUpgradedOptLocks ()
method might cause an ESXi host to fail with a purple diagnostic screen. In thevmkernel-zdump.1
file you see messages such as
FSAtsUndoUpgradedOptLocks (…) at bora/modules/vmkernel/vmfs/fs3ATSCSOps.c:665
.
Before the failure, you might see warnings that the VMFS heap is exhausted, such as
WARNING: Heap: 3534: Heap vmfs3 already at its maximum size. Cannot expand
. -
A quick sequence of tasks in some operations, for example after reverting a virtual machine to a memory snapshot, might trigger a race condition that causes the hostd service to fail with a
/var/core/hostd-worker-zdump.000
file. As a result, the ESXi host loses connectivity to the vCenter Server system. -
In the vSphere Client or vSphere Web Client, you see many health warnings and receive emails about potential hardware failures, even though the actual sensor state has not changed. The issue occurs because the CIM service resets all sensors to an unknown state when it fails to fetch the sensor status from IPMI.
-
SFCB fails due to a segmentation fault when you query, with the getClass command, third-party provider classes such as OSLS_InstCreation, OSLS_InstDeletion, and OSLS_InstModification under the root/emc/host namespace.
This fix sets the Storage Array Type Plugin (SATP), Path Selection Policy (PSP), IOPS, and claim options to default values with the following rules:
esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V HUAWEI -M "XSG1" -c tpgs_on --psp="VMW_PSP_RR" -O "iops=1" -e "HUAWEI arrays with ALUA support"
esxcli storage nmp satp rule add -s VMW_SATP_DEFAULT_AA -V HUAWEI -M "XSG1" -c tpgs_off --psp="VMW_PSP_RR" -O "iops=1" -e " HUAWEI arrays without ALUA support"
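To confirm the rules are present after applying the patch, you can list the NMP SATP claim rules and filter for the vendor; a minimal verification sketch:
# Show only the HUAWEI claim rules
esxcli storage nmp satp rule list | grep -i HUAWEI
# Or list the rules registered for a specific SATP
esxcli storage nmp satp rule list -s VMW_SATP_ALUA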
-
If the target port group (TPG) of a path changes, the TPG ID updates only after an ESXi reboot, not at the time of the change. For example, if a target port moves to a different controller, the TPG ID value of the path does not update.
-
The net-lbt daemon fails with an error such as
...Admission failure in path: lbt/net-lbt.7320960/uw.7320960
. As a result, load-based teaming (LBT) policy does not work as expected. The issue occurs if the net-lbt daemon resource pool gets out of memory. -
In case of a memory corruption event, the hostd service might not fail and restart as expected but stay unresponsive until a manual restart. As a result, random ESXi hosts become unresponsive as well.
-
Unmounting any VMFS datastore sets the status of all coredump files that belong to other datastores in the vCenter Server system to inactive. You must manually reset the status to active.
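A minimal sketch of checking and reactivating a coredump file from the ESXi shell; the dump file path is a placeholder for the path reported by the list command:
# Show configured coredump files and whether they are active
esxcli system coredump file list
# Reactivate a specific dump file (path is a placeholder)
esxcli system coredump file set -p /vmfs/volumes/datastore1/vmkdump/host.dumpfile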
-
If during the initialization of an unmap operation the available memory is not sufficient to create unmap heaps, the hostd service starts a cleanup routine. During the cleanup routine, it is possible that an uninitialized variable causes the ESXi host to fail with an error such as
BlueScreen: #PF Exception 14 in world 2100600:storageRM IP 0x41801b1e588a addr 0x160
. -
Even after a successful remediation, you can still see errors in the vSphere Client and vSphere Web Client that the ESXi hosts are not compliant due to modified iSCSI target parameters.
-
In certain cases, such as when a VASA provider for a vSphere Virtual Volumes datastore is not reachable but does not return an error, for instance a transport error or a provider timeout, the source VM disks remain undeleted after migrating virtual machines between vSphere Virtual Volumes datastores. As a result, the reported capacity of the source datastore is not correct.
-
A virtual machine clone operation involves a snapshot of the source VM followed by creating a clone from that snapshot. The snapshot of the source virtual machine is deleted after the clone operation is complete. If the source virtual machine is on a vSphere Virtual Volumes datastore in one ESXi host and the clone virtual machine is created on another ESXi host, deleting the snapshot of the source VM might take some time. As a result, the cloned VM stays unresponsive for 50 to 60 seconds and might cause disruption of applications running on the source VM.
-
If an ESXi host in a vSphere HA-enabled cluster using a vSphere Virtual Volumes datastore fails to create the
.vSphere-HA
folder, vSphere HA configuration fails for the entire cluster. This issue occurs due to a possible race condition between ESXi hosts to create the.vSphere-HA
folder in the shared vSphere Virtual Volumes datastore. -
When two vmknics are connected to the same NSX-T logical switch, after a stateless reboot, only one vmknic is created, because of an error in the portgroup mapping logic.
-
The datapathd service repeatedly fails on NSX Edge virtual machines and the entire node becomes unresponsive.
The failure occurs if you use the pyVmomi API to change the MAC limit policy, defined by using the macLimitPolicy parameter. The API of the host service accepts only uppercase values, such as DROP and ALLOW, and if you provide a lowercase value, such as Drop, the command fails. As a result, datapathd fails as well.
After a restart of the ESXi management services, the CPU and memory stats in the VMware Host Client might display 0, because quick-stats providers might remain disconnected.
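For context, a restart of the host management agents, which can trigger this condition, is commonly performed from the ESXi shell with commands such as the following; this is illustrative only:
# Restart the host daemon
/etc/init.d/hostd restart
# Restart the vCenter agent on the host
/etc/init.d/vpxa restart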
-
If a storage outage occurs while VMFS5 or VMFS6 volumes are closing, journal replay fails when the storage becomes available again, because on-disk locks for some transactions were already released during the previous volume close. As a result, all further transactions are blocked.
In the vmkernel logs, you see an error such as:
WARNING: HBX: 5454: Replay of journal on vol 'iSCSI-PLAB-01' failed: Lost previously held disk lock
-
An ESXi host might lose connectivity to your vCenter Server system and you cannot access the host by either the vSphere Client or vSphere Web Client. In the DCUI, you see the message
ALERT: hostd detected to be non-responsive
. The issue occurs due to a memory corruption, happening in the CIM plug-in while fetching sensor data for periodic hardware health checks. -
In rare cases, many closely spaced memory errors cause performance issues that might lead to ESXi hosts failing with a purple diagnostic screen and errors such as panic on multiple physical CPUs.
-
If you put an ESXi host into maintenance mode and migrate virtual machines by using vSphere vMotion, some operations might fail with an error such as
A system error occurred:
in the vSphere Client or the vSphere Web Client.
In thehostd.log
, you can see the following error:
2020-01-10T16:55:51.655Z warning hostd[2099896] [Originator@6876 sub=Vcsvc.VMotionDst.5724640822704079343 opID=k3l22s8p-5779332-auto-3fvd3-h5:70160850-b-01-b4-3bbc user=vpxuser:<user>] TimeoutCb: Expired
The issue occurs if vSphere vMotion fails to get all required resources within the defined waiting time of vSphere Virtual Volumes, due to slow storage or a slow VASA provider.
Virtual machine operations, such as cloning, migrating, taking or removing snapshots, or ESXi host operations, such as add or remove a datastore, intermittently fail in an NSX-T environment. In the vpxd logs, and in the vSphere Client or vSphere Web Client, you see an error such as:
The object or item referred to could not be found. DVS c5 fb 29 dd c8 5b 4a fc-b2 d0 ee cd f6 b2 3e 7b cannot be found
The VMkernel module responsible for the VLAN MTU health reports causes the issue. -
If an ESXi host in a vSAN environment encounters an inconsistency in the deduplication metadata, the host might fail with a purple diagnostic screen. In the backtrace, you see an error such as:
2019-12-12T19:16:34.464Z cpu0:2099114)@BlueScreen: Re-formatting a valid dedup metadata block
-
In rare cases, if the Delta Component feature is enabled, an object might have more than 64 copies of data, or the equivalent in mirror copies. The in-memory object allocation and deallocation functions might corrupt the memory, leading to a failure of the ESXi host with a purple diagnostic screen. In the backtrace, you see an error such as:
PANIC bora/vmkernel/main/dlmalloc.c:4934 - Usage error in dlmalloc
-
After a vSAN cluster experiences a power failure and all ESXi hosts recover, vSAN might reconfigure an object to have too many data copies. Such copies consume resources unnecessarily.
-
When your vCenter Server system is configured with a custom port for HTTPS, RVC might attempt to connect to the default port 443. As a result, vSAN commands such as
vsan.host_info
might fail. -
When invoked from the vSphere Client, the vSAN witness appliance OVF deploys all witness appliance sizes. Due to a code change, OVF deployment of the tiny and large instances fails when you use CLI tools such as ovftool or PowerCLI.
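A minimal ovftool sketch for deploying a specific witness appliance size; the deployment option name, appliance name, OVA filename, and target URL are placeholders, not values from this release:
# Deploy the witness appliance with an explicit size (deployment option name is a placeholder)
ovftool --acceptAllEulas --deploymentOption=tiny \
  --name=vsan-witness-01 \
  VMware-vSAN-Witness.ova \
  'vi://administrator@vsphere.local@vcenter.example.com/DC/host/Cluster'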
-
ESXi hosts might randomly disconnect from Active Directory, or some Likewise services might fail with an out of memory error. The issue is caused by memory leaks in some of the Active Directory operations that Likewise services perform.
-
If the
getBaseUnitString
function returns a non-UTF8 string for the max value of a unit description array, the hostd service fails with a core dump. You see an error such as:
[Hostd-zdump] 0x00000083fb085957 in Vmacore::PanicExit (msg=msg@entry=0x840b57eee0 "Validation failure")
-
A virtual machine suddenly stops responding and when you try to reset the virtual machine, it cannot recover. In the backtrace, you see multiple RESET calls to a device linked to the virtual machine RDM disk. The issue is that if the RDM disk is greater than 2 TB, the
getLbaStatus
command during the shutdown process might continuously loop and prevent the proper shutdown of the virtual machine. -
Under certain load conditions, the sampling frequency of guest virtual machines for the hostd service and the VMX service might be out of sync, and cause intermittent changes of the
vim.VirtualMachine.guestHeartbeatStatus
parameter from green to yellow. These changes do not necessarily indicate an issue with the operation of the guest virtual machines, because this is a rare condition. -
LFBCInfo is an in-memory data structure that tracks some resource allocation details for the datastores in an ESXi host. If the total datastore capacity of ESXi hosts across a vCenter Server system exceeds 4 PB, LFBCInfo might get out of memory. As a result, an ESXi host might fail with a purple diagnostic screen during some routine operations such as migrating virtual machines by using Storage vMotion.
In the backtrace, you see an error similar to: 2019-07-20T08:54:18.772Z cpu29:29479008)WARNING: Res6: 2545: 'xxx': out of slab elems trying to allocate LFBCInfo (2032 allocs)
-
In a partitioned cluster, object deletion across the partitions sometimes fails to complete. The deleted objects appear as inaccessible objects.
-
If all snapshots of a booted virtual machine on a VMFS datastore are deleted, or after VMware Storage vMotion operations, unmap operations might start failing and cause slow I/O performance of the virtual machine. The issue occurs because certain virtual machine operations change the underlying disk unmap granularity of the guest OS. If the guest OS does not automatically refresh the unmap granularity, the VMFS base disk and snapshot disks might have different unmap granularity based on their storage layout. This issue occurs only when the last snapshot of a virtual machine is deleted or if the virtual machine is migrated to a target that has different unmap granularity from the source.
-
In an NSX-T Data Center environment, vRealize Network Insight might receive many warning events for dropped packets from logical ports. The issue occurs because the dropped packets counter statistics are calculated incorrectly. The warnings do not indicate real networking issues.
-
In rare conditions, certain blocks containing vSAN metadata in a device might fail with an unrecoverable medium error. This causes data inaccessibility in the disk group. A message similar to the following is logged in the
vmkernel.log
:2020-08-12T13:16:13.170Z cpu1:1000341424)ScsiDeviceIO: SCSICompleteDeviceCommand:4267: Cmd(0x45490152eb40) 0x28, CmdSN 0x11 from world 0 to dev "mpx.vmhba0:C0:T3:L0" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x3 0x10 0x0
2020-08-12T13:16:13.170Z cpu1:1000341424)LSOMCommon: IORETRY_handleCompletionOnError:1723: Throttled: 0x454bc05ff900 IO type 264 (READ) isOrdered:NO isSplit:NO isEncr:NO since 0 msec status Read error
-
A VMFS datastore might lose track of an actively managed resource lock due to a temporary storage accessibility issue, such as an APD state. As a result, some virtual machines on other ESXi hosts intermittently lose connectivity.
-
Excessive logging might result in a race condition that causes a deadlock in the logging infrastructure. The deadlock results in an infinite loop in the non-preemptive context. As a result, you see no-heartbeat warnings in the vmkernel logs. ESXi hosts fail with a purple diagnostic screen and a panic error such as
PSOD: 2020-03-08T17:07:20.614Z cpu73:3123688)@BlueScreen: PCPU 38: no heartbeat (1/3 IPIs received)
. -
If you hot add a PCIe device, ESXi might not detect and enumerate the device.
-
The NVMe driver provides a management interface that allows you to build tools to pass through NVMe admin commands. However, if the data transfer direction for vendor-specific commands is set to device-to-host, some vendor-specific write commands cannot complete successfully. For example, a flash command from the Marvell CLI utility fails to flash the firmware of an NVMe SSD on an HPE Tinker controller.
-
Virtual machines might intermittently become unresponsive due to an issue with
vmw_ahci
driver versions earlier than 2.0.4-1. In the vmkernel.log
file, you see many messages such as:
…vmw_ahci[00000017]: ahciAbortIO:(curr) HWQD: 11 BusyL: 0
or
ahciRequestIo:ERROR: busy queue is full, this should NEVER happen!
-
The existing vSAN network latency check is sensitive to intermittent high network latency that might not impact the workload running on the vSAN environment. As a result, if the network latency between two nodes exceeds a certain threshold, you see health warnings, but they might not indicate a real problem with the environment. The algorithm for monitoring vSAN network latencies is improved to prevent false alarms.
-
Profile Name | ESXi-6.7.0-20200801001s-standard |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | August 20, 2020 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2543391, 2591468 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
ESXi670-202008001 updates Intel microcode for ESXi-supported CPUs. See the table for the microcode updates that are currently included:
Code Name | FMS | Plt ID | MCU Rev | MCU Date | Brand Names
Nehalem EP | 0x106a5 | 0x03 | 0x0000001d | 5/11/2018 | Intel Xeon 35xx Series; Intel Xeon 55xx Series
Lynnfield | 0x106e5 | 0x13 | 0x0000000a | 5/8/2018 | Intel Xeon 34xx Lynnfield Series
Clarkdale | 0x20652 | 0x12 | 0x00000011 | 5/8/2018 | Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series
Arrandale | 0x20655 | 0x92 | 0x00000007 | 4/23/2018 | Intel Core i7-620LE Processor
Sandy Bridge DT | 0x206a7 | 0x12 | 0x0000002f | 2/17/2019 | Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series
Westmere EP | 0x206c2 | 0x03 | 0x0000001f | 5/8/2018 | Intel Xeon 56xx Series; Intel Xeon 36xx Series
Sandy Bridge EP | 0x206d6 | 0x6d | 0x00000621 | 3/4/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
Sandy Bridge EP | 0x206d7 | 0x6d | 0x0000071a | 3/24/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
Nehalem EX | 0x206e6 | 0x04 | 0x0000000d | 5/15/2018 | Intel Xeon 65xx Series; Intel Xeon 75xx Series
Westmere EX | 0x206f2 | 0x05 | 0x0000003b | 5/16/2018 | Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series
Ivy Bridge DT | 0x306a9 | 0x12 | 0x00000021 | 2/13/2019 | Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C
Haswell DT | 0x306c3 | 0x32 | 0x00000028 | 11/12/2019 | Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series
Ivy Bridge EP | 0x306e4 | 0xed | 0x0000042e | 3/14/2019 | Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series
Ivy Bridge EX | 0x306e7 | 0xed | 0x00000715 | 3/14/2019 | Intel Xeon E7-8800/4800/2800-v2 Series
Haswell EP | 0x306f2 | 0x6f | 0x00000043 | 3/1/2019 | Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series
Haswell EX | 0x306f4 | 0x80 | 0x00000016 | 6/17/2019 | Intel Xeon E7-8800/4800-v3 Series
Broadwell H | 0x40671 | 0x22 | 0x00000022 | 11/12/2019 | Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series
Avoton | 0x406d8 | 0x01 | 0x0000012d | 9/16/2019 | Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series
Broadwell EP/EX | 0x406f1 | 0xef | 0x0b000038 | 6/18/2019 | Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series
Skylake SP | 0x50654 | 0xb7 | 0x02006906 | 4/24/2020 | Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series
Cascade Lake B-0 | 0x50656 | 0xbf | 0x04002f01 | 4/23/2020 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
Cascade Lake | 0x50657 | 0xbf | 0x05002f01 | 4/23/2020 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
Broadwell DE | 0x50662 | 0x10 | 0x0000001c | 6/17/2019 | Intel Xeon D-1500 Series
Broadwell DE | 0x50663 | 0x10 | 0x07000019 | 6/17/2019 | Intel Xeon D-1500 Series
Broadwell DE | 0x50664 | 0x10 | 0x0f000017 | 6/17/2019 | Intel Xeon D-1500 Series
Broadwell NS | 0x50665 | 0x10 | 0x0e00000f | 6/17/2019 | Intel Xeon D-1600 Series
Skylake H/S | 0x506e3 | 0x36 | 0x000000dc | 4/27/2020 | Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series
Denverton | 0x506f1 | 0x01 | 0x0000002e | 3/21/2019 | Intel Atom C3000 Series
Kaby Lake H/S/X | 0x906e9 | 0x2a | 0x000000d6 | 4/23/2020 | Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series
Coffee Lake | 0x906ea | 0x22 | 0x000000d6 | 4/27/2020 | Intel Xeon E-2100 Series; Intel Xeon E-2200 Series (4 or 6 core)
Coffee Lake | 0x906eb | 0x02 | 0x000000d6 | 4/23/2020 | Intel Xeon E-2100 Series
Coffee Lake | 0x906ec | 0x22 | 0x000000d6 | 4/27/2020 | Intel Xeon E-2100 Series
Coffee Lake Refresh | 0x906ed | 0x22 | 0x000000d6 | 4/23/2020 | Intel Xeon E-2200 Series (8 core)
-
The following VMware Tools ISO images are bundled with ESXi670-202008001:
windows.iso: VMware Tools 11.1.1 ISO image for Windows Vista (SP2) or later
linux.iso: VMware Tools 10.3.22 ISO image for Linux OS with glibc 2.5 or later
The following VMware Tools ISO images are available for download:
VMware Tools 11.0.6
windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2)
VMware Tools 10.0.12
winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003
VMware Tools 10.3.22
linux.iso: for Linux OS with glibc 2.5 or later
VMware Tools 10.3.10
solaris.iso: VMware Tools image for Solaris
darwin.iso: VMware Tools image for OS X
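To confirm which ISO images are bundled on a given host, you can list the tools locker from the ESXi shell; the path shown is the default location and may differ if a shared product locker is configured:
# List the VMware Tools ISO images available on the host
ls -l /vmimages/tools-isoimages/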
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
-
Profile Name | ESXi-6.7.0-20200801001s-no-tools |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | August 20, 2020 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2543391, 2591468 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
ESXi670-202008001 updates Intel microcode for ESXi-supported CPUs. See the table for the microcode updates that are currently included:
Code Name | FMS | Plt ID | MCU Rev | MCU Date | Brand Names
Nehalem EP | 0x106a5 | 0x03 | 0x0000001d | 5/11/2018 | Intel Xeon 35xx Series; Intel Xeon 55xx Series
Lynnfield | 0x106e5 | 0x13 | 0x0000000a | 5/8/2018 | Intel Xeon 34xx Lynnfield Series
Clarkdale | 0x20652 | 0x12 | 0x00000011 | 5/8/2018 | Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series
Arrandale | 0x20655 | 0x92 | 0x00000007 | 4/23/2018 | Intel Core i7-620LE Processor
Sandy Bridge DT | 0x206a7 | 0x12 | 0x0000002f | 2/17/2019 | Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series
Westmere EP | 0x206c2 | 0x03 | 0x0000001f | 5/8/2018 | Intel Xeon 56xx Series; Intel Xeon 36xx Series
Sandy Bridge EP | 0x206d6 | 0x6d | 0x00000621 | 3/4/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
Sandy Bridge EP | 0x206d7 | 0x6d | 0x0000071a | 3/24/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
Nehalem EX | 0x206e6 | 0x04 | 0x0000000d | 5/15/2018 | Intel Xeon 65xx Series; Intel Xeon 75xx Series
Westmere EX | 0x206f2 | 0x05 | 0x0000003b | 5/16/2018 | Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series
Ivy Bridge DT | 0x306a9 | 0x12 | 0x00000021 | 2/13/2019 | Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C
Haswell DT | 0x306c3 | 0x32 | 0x00000028 | 11/12/2019 | Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series
Ivy Bridge EP | 0x306e4 | 0xed | 0x0000042e | 3/14/2019 | Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series
Ivy Bridge EX | 0x306e7 | 0xed | 0x00000715 | 3/14/2019 | Intel Xeon E7-8800/4800/2800-v2 Series
Haswell EP | 0x306f2 | 0x6f | 0x00000043 | 3/1/2019 | Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series
Haswell EX | 0x306f4 | 0x80 | 0x00000016 | 6/17/2019 | Intel Xeon E7-8800/4800-v3 Series
Broadwell H | 0x40671 | 0x22 | 0x00000022 | 11/12/2019 | Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series
Avoton | 0x406d8 | 0x01 | 0x0000012d | 9/16/2019 | Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series
Broadwell EP/EX | 0x406f1 | 0xef | 0x0b000038 | 6/18/2019 | Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series
Skylake SP | 0x50654 | 0xb7 | 0x02006906 | 4/24/2020 | Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series
Cascade Lake B-0 | 0x50656 | 0xbf | 0x04002f01 | 4/23/2020 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
Cascade Lake | 0x50657 | 0xbf | 0x05002f01 | 4/23/2020 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
Broadwell DE | 0x50662 | 0x10 | 0x0000001c | 6/17/2019 | Intel Xeon D-1500 Series
Broadwell DE | 0x50663 | 0x10 | 0x07000019 | 6/17/2019 | Intel Xeon D-1500 Series
Broadwell DE | 0x50664 | 0x10 | 0x0f000017 | 6/17/2019 | Intel Xeon D-1500 Series
Broadwell NS | 0x50665 | 0x10 | 0x0e00000f | 6/17/2019 | Intel Xeon D-1600 Series
Skylake H/S | 0x506e3 | 0x36 | 0x000000dc | 4/27/2020 | Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series
Denverton | 0x506f1 | 0x01 | 0x0000002e | 3/21/2019 | Intel Atom C3000 Series
Kaby Lake H/S/X | 0x906e9 | 0x2a | 0x000000d6 | 4/23/2020 | Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series
Coffee Lake | 0x906ea | 0x22 | 0x000000d6 | 4/27/2020 | Intel Xeon E-2100 Series; Intel Xeon E-2200 Series (4 or 6 core)
Coffee Lake | 0x906eb | 0x02 | 0x000000d6 | 4/23/2020 | Intel Xeon E-2100 Series
Coffee Lake | 0x906ec | 0x22 | 0x000000d6 | 4/27/2020 | Intel Xeon E-2100 Series
Coffee Lake Refresh | 0x906ed | 0x22 | 0x000000d6 | 4/23/2020 | Intel Xeon E-2200 Series (8 core)
-
The following VMware Tools ISO images are bundled with ESXi670-202008001:
windows.iso: VMware Tools 11.1.1 ISO image for Windows Vista (SP2) or later
linux.iso: VMware Tools 10.3.22 ISO image for Linux OS with glibc 2.5 or later
The following VMware Tools ISO images are available for download:
VMware Tools 11.0.6
windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2)
VMware Tools 10.0.12
winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003
VMware Tools 10.3.22
linux.iso: for Linux OS with glibc 2.5 or later
VMware Tools 10.3.10
solaris.iso: VMware Tools image for Solaris
darwin.iso: VMware Tools image for OS X
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
-