Release Date: NOV 23, 2021
Build Details
Download Filename: | ESXi670-202111001.zip |
Build: | 18828794 |
Download Size: | 478.2 MB |
md5sum: | faa7493ac44f766c307bfa1a92f2bc03 |
sha256checksum: | 92fb8d22d012c34195a39e6ccca2f123dfaf6db64fafcead131e9d71ccb2b146 |
Host Reboot Required: | Yes |
Virtual Machine Migration or Shutdown Required: | Yes |
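Before installing, you can verify the integrity of the downloaded ZIP file against the checksums listed above. A minimal check on a Linux or macOS workstation, assuming the file is in the current directory, might look like this:

  # Compare the computed digests with the md5sum and sha256checksum values above
  md5sum ESXi670-202111001.zip
  sha256sum ESXi670-202111001.zip    # on macOS, use: shasum -a 256 ESXi670-202111001.zip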
Bulletins
Bulletin ID | Category | Severity |
ESXi670-202111401-BG | Bugfix | Critical |
ESXi670-202111402-BG | Bugfix | Important |
ESXi670-202111403-BG | Bugfix | Important |
ESXi670-202111404-BG | Bugfix | Important |
ESXi670-202111405-BG | Bugfix | Important |
ESXi670-202111101-SG | Security | Important |
ESXi670-202111102-SG | Security | Important |
ESXi670-202111103-SG | Security | Important |
ESXi670-202111104-SG | Security | Important |
Rollup Bulletin
This rollup bulletin contains the latest VIBs with all the fixes since the initial release of ESXi 6.7.
Bulletin ID | Category | Severity |
ESXi670-202111001 | Bugfix | Critical |
IMPORTANT: For clusters using VMware vSAN, you must first upgrade the vCenter Server system. Upgrading only the ESXi hosts is not supported.
Before an upgrade, always verify in the VMware Product Interoperability Matrix compatible upgrade paths from earlier versions of ESXi, vCenter Server and vSAN to the current version.
Image Profiles
VMware patch and update releases contain general and critical image profiles. The general release image profile applies to new bug fixes.
Image Profile Name |
ESXi-6.7.0-20211104001-standard |
ESXi-6.7.0-20211104001-no-tools |
ESXi-6.7.0-20211101001s-standard |
ESXi-6.7.0-20211101001s-no-tools |
For more information about the individual bulletins, see the Product Patches page and the Resolved Issues section.
Patch Download and Installation
The typical way to apply patches to ESXi hosts is by using VMware vSphere Update Manager. For details, see About Installing and Administering VMware vSphere Update Manager.
ESXi hosts can be updated by manually downloading the patch ZIP file from VMware Customer Connect. From the Select a Product drop-down menu, select ESXi (Embedded and Installable) and from the Select a Version drop-down menu, select 6.7.0. Install VIBs by using the esxcli software vib update command. Additionally, you can update the system by using the image profile and the esxcli software profile update command.
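For reference, a typical patch sequence from the ESXi Shell might look like the sketch below; the datastore path is a placeholder, and you should place the host in maintenance mode and reboot it afterward, as noted in the tables above:

  # Enter maintenance mode before patching
  esxcli system maintenanceMode set --enable true

  # Option 1: update all installed VIBs from the downloaded patch depot ZIP
  esxcli software vib update -d /vmfs/volumes/datastore1/ESXi670-202111001.zip

  # Option 2: apply the rollup image profile from the same depot
  esxcli software profile update -p ESXi-6.7.0-20211104001-standard -d /vmfs/volumes/datastore1/ESXi670-202111001.zip

  # Reboot to complete the update, then exit maintenance mode
  reboot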
For more information, see the vSphere Command-Line Interface Concepts and Examples and the vSphere Upgrade Guide.
Resolved Issues
The resolved issues are grouped as follows.
- ESXi670-202111401-BG
- ESXi670-202111402-BG
- ESXi670-202111403-BG
- ESXi670-202111404-BG
- ESXi670-202111405-BG
- ESXi670-202111101-SG
- ESXi670-202111102-SG
- ESXi670-202111103-SG
- ESXi670-202111104-SG
- ESXi-6.7.0-20211104001-standard
- ESXi-6.7.0-20211104001-no-tools
- ESXi-6.7.0-20211101001s-standard
- ESXi-6.7.0-20211101001s-no-tools
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2731317, 2727062, 2704831, 2764894, 2720189, 2761940, 2757263, 2752542, 2772048, 2768356, 2741112, 2786864, 2725025, 2776791, 2724166, 2790353, 2787866, 2790502, 2834968, 2832090, 2841163, 2804742, 2732482, 2793380, 2813328, 2792504, 2749902, 2765136, 2697256, 2731139, 2733489, 2755232, 2761131, 2763933, 2766401, 2744211, 2771730, 2800655, 2839716, 2847812, 2726678, 2789302, 2750034, 2739223, 2794562, 2857511, 2847091, 2765679, 2731643, 2774667, 2778008, 2833428, 2835990, 2852099 |
CVE numbers | N/A |
Updates esx-base, esx-update, vsan,
and vsanhealth
VIBs to resolve the following issues:
- PR 2731317: During a USB passthrough, virtual machines might become unresponsive
If the USB device attached to an ESXi host has descriptors that are not compliant with the standard USB specifications, the virtual USB stack might fail to pass through a USB device into a virtual machine. As a result, virtual machines become unresponsive, and you must either power off the virtual machine by using an ESXCLI command, or restart the ESXi host.
This issue is resolved in this release.
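If you encounter this issue on a build without the fix, a sketch of the ESXCLI commands mentioned above, with the world ID as a placeholder taken from the list output, might look like this:

  # List running virtual machines and note the World ID of the unresponsive VM
  esxcli vm process list

  # Power off the VM; try soft first, then hard or force if it does not respond
  esxcli vm process kill --type=soft --world-id=<world-id>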
- PR 2727062: Editing an advanced option parameter in a host profile and setting its value to false results in setting the value to true
When attempting to set a value to false for an advanced option parameter in a host profile, the user interface creates a non-empty string value. Values that are not empty are interpreted as true and the advanced option parameter receives a true value in the host profile.
This issue is resolved in this release.
- PR 2704831: After vSphere Replication appliance powers on, ESXi hosts intermittently lose connectivity to vCenter Server
A rare race condition with static map initialization might cause ESXi hosts to temporarily lose connectivity to vCenter Server after the vSphere Replication appliance powers on. However, the hostd service automatically restarts and the ESXi hosts restore connectivity.
This issue is resolved in this release.
- PR 2764894: The hostd service might fail and ESXi hosts lose connectivity to vCenter Server due to an empty or unset property of a Virtual Infrastructure Management (VIM) API array
In rare cases, an empty or unset property of a VIM API data array might cause the hostd service to fail. As a result, ESXi hosts lose connectivity to vCenter Server and you must manually reconnect the hosts.
This issue is resolved in this release.
- PR 2720189: ESXi hosts intermittently fail with a BSVMM_Validate@vmkernel error in the backtrace
A rare error condition in the VMKernel might cause ESXi hosts to fail when powering on a virtual machine with more than 1 virtual CPU. A
BSVMM_Validate@vmkernel
error is indicative of the problem. This issue is resolved in this release.
- PR 2761940: An ESXi host might fail with a purple diagnostic screen due to locks during a vSphere vMotion operation
During a vSphere vMotion operation, if the calling context holds any locks, ESXi hosts might fail with a purple diagnostic screen and an error such as
PSOD: Panic at bora/vmkernel/core/lock.c:2070
. This issue is resolved in this release.
- PR 2757263: ESXi hosts fail with a purple diagnostic screen with a kernel context error, but not in the correct context
ESXi hosts might intermittently fail with a purple diagnostic screen with an error such as
@BlueScreen: VERIFY bora/vmkernel/sched/cpusched.c
that suggests a preemption anomaly. However, the VMkernel preemption anomaly detection logic might fail to identify the correct kernel context and show a warning for the wrong context. This issue is resolved in this release. The fix covers corner cases that might cause a preemption anomaly, such as when a system world exits while holding a spinlock. The preemption anomaly detection warning remains
@BlueScreen: VERIFY bora/vmkernel/sched/cpusched.c
, but in the correct context.
- PR 2752542: If you lower the value of the DiskMaxIOSize advanced config option, I/O operations on ESXi hosts might fail
If you change the
DiskMaxIOSize
advanced config option to a lower value, I/Os with large block sizes might get incorrectly split and queue at the PSA path. As a result, I/O operations on ESXi hosts might time out and fail. This issue is resolved in this release.
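You can check, and if needed restore, the DiskMaxIOSize value from the ESXi Shell. A minimal sketch, assuming the default of 32767 KB:

  # Show the current value of the advanced option
  esxcli system settings advanced list -o /Disk/DiskMaxIOSize

  # Restore the default value if it was lowered
  esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 32767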
- PR 2772048: ESXi hosts might fail with a purple diagnostic screen due to a memory allocation failure in the vmkapi character device driver
Some poll requests might exceed the metadata heap of the vmkapi character device driver and cause ESXi hosts to fail with a purple diagnostic screen and an error such as:
#PF Exception 14 in world 2099138:IPMI Response IP 0x41800b922ab0 addr 0x18 PTEs:0x0;
In the VMkernel logs, you might see messages such as:
WARNING: Heap: 3571: Heap VMKAPI-char-metadata already at its maximum size. Cannot expand.
and
WARNING: Heap: 4079: Heap_Align(VMKAPI-char-metadata, 40/40 bytes, 8 align) failed. caller: 0x418029922489
This issue is resolved in this release.
- PR 2741112: Failure of some Intel CPUs to forward Debug Exception (#DB) traps might result in virtual machine triple fault
In rare cases, some Intel CPUs might fail to forward #DB traps and if a timer interrupt happens during the Windows system call, virtual machine might triple fault. In the
vmware.log
file, you see an error such as msg.monitorEvent.tripleFault
. This issue is not consistent and virtual machines that consistently triple fault or triple fault after boot are impacted by another issue. This issue is resolved in this release. The fix forwards all #DB traps from CPUs into the guest operating system, except when the DB trap comes from a debugger that is attached to the virtual machine.
- PR 2786864: An ESXi host fails with a purple diagnostic screen due to a call stack corruption in the ESXi storage stack
While handling the SCSI command
READ CAPACITY (10)
, ESXi might copy excess data from the response and corrupt the call stack. As a result, the ESXi host fails with a purple diagnostic screen. This issue is resolved in this release.
- PR 2725025: You see alarms that a NIC link is down even after the uplink is restored
If you plug a physical NIC in and out in your vCenter Server system, after the uplink is restored, you still see an alarm in the vSphere Client or the vSphere Web Client that a NIC link on some ESXi hosts is down. The VOBD daemon might not create the event
esx.clear.net.redundancy.restored
to remove such alarms, which causes the issue. This issue is resolved in this release.
- PR 2776791: ESXi installation by using a ks.cfg installation script and a USB or SD booting device might fail
In some cases, when multiple USB or SD devices with different file systems are connected to your vCenter Server system, ESXi installation by using a ks.cfg installation script and a USB or SD booting device might fail.
This issue is resolved in this release. The fix logs an exception and allows the installation to continue, instead of failing.
- PR 2724166: You might see a higher than configured rate of automatic unmap operations on newly mounted VMFS6 volumes
In the first minutes after mounting a VMFS6 volume, you might see higher than configured rate of automatic unmap operations. For example, if you set the unmap bandwidth to 2000 MB/s, you see unmap operations running at more than 3000 MB/s.
This issue is resolved in this release.
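For reference, the unmap settings of a VMFS6 datastore can be inspected from the ESXi Shell; the option names below are as documented for vSphere 6.7, the datastore label and bandwidth value are placeholders, and you can verify the exact syntax with --help on your host:

  # Show the current reclamation method, priority, and bandwidth
  esxcli storage vmfs reclaim config get --volume-label=<datastore-label>

  # Example: set a fixed reclamation bandwidth of 2000 MB/s
  esxcli storage vmfs reclaim config set --volume-label=<datastore-label> --reclaim-method=fixed --reclaim-bandwidth=2000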
- PR 2790353: Old data on an independent nonpersistent mode virtual disk remains after a virtual machine reset
For independent nonpersistent mode virtual disks, all writes are stored in a redo log, which is a temporary file with extension
.REDO_XXXXXX
in the VM directory. However, a reset of a virtual machine does not clear the old redo log on such disks. This issue is resolved in this release.
- PR 2787866: A replay of a pre-recorded support bundle by using the esxtop utility might fail with a segmentation fault
When you run the esxtop utility by using the ESXi Shell to replay a pre-recorded support bundle, the operation might fail with a segmentation fault.
This issue is resolved in this release. The fix makes sure all local variables of the esxtop utility are initialized properly.
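For context, esxtop replay mode reads a performance snapshot collected with vm-support. A minimal sketch, with illustrative duration and interval values and a placeholder path to the extracted bundle:

  # Collect performance snapshots on the host (60 seconds, 5-second interval)
  vm-support -p -d 60 -i 5

  # Replay the extracted snapshot bundle with esxtop
  esxtop -R /path/to/extracted-vm-support-bundle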
- PR 2790502: Shutdown or reboot of ESXi hosts takes very long
Rarely, in certain configurations, the shutdown or reboot of ESXi hosts might stop at the step
Shutting down device drivers
for a long time, in the order of 20 minutes, but the operation eventually completes. This issue is resolved in this release.
- PR 2834968: Virtual machines fail with a message for too many sticky pages
The virtual network adapter VMXNET Generation 3 (VMXNET3) uses buffers to process rx packets. Such buffers are either pre-pinned or pinned and mapped during runtime. On rare occasions, some buffers might not unpin and get re-pinned later, resulting in a higher-than-expected pin count of such a buffer. As a result, virtual machines might fail with an error such as
MONITOR PANIC: vmk: vcpu-0:Too many sticky pages.
. This issue is resolved in this release.
- PR 2832090: ESXi hosts might fail with a purple diagnostic screen when vSphere Replication is enabled due to a rare lock rank violation
Due to a rare lock rank violation in the vSphere Replication I/O filter, some ESXi hosts might fail with a purple diagnostic screen when vSphere Replication is enabled. You see an error such as
VERIFY bora/vmkernel/main/bh.c:978
on the screen. This issue is resolved in this release.
- PR 2841163: You see packet drops for virtual machines with VMware Network Extensibility (NetX) redirection enabled
In vCenter Server advanced performance charts, you see an increasing packet drop count for all virtual machines that have NetX redirection enabled. However, if you disable NetX redirection, the count becomes 0.
This issue is resolved in this release.
- PR 2804742: A virtual switch might drop IGMPv3 SSM specific queries during snooping
Virtual switches processing IGMPv3 SSM specific queries might drop some queries when an IP is not included in the list of available ports as a result of the source IP check.
This issue is resolved in this release. The fix makes sure a group specific query is not processed like a normal multicast group and does not check the source IP.
- PR 2732482: You see higher than usual scrub activity on vSAN
An integer overflow bug might cause vSAN DOM to issue scrubs more frequently than the configured setting.
This issue is resolved in this release.
- PR 2793380: ESXi hosts become unresponsive when you power on a virtual machine
In particular circumstances, when you power on a virtual machine with a corrupted VMware Tools manifest file, the hostd service might fail. As a result, the ESXi host becomes unresponsive. You can backtrace the issue in the hostd dump file that is usually generated in such cases at the time a VM is powered on.
This issue is resolved in this release.
- PR 2813328: Encrypted virtual machines do not auto power on even when the Start delay option is configured
After a reboot of an ESXi host, encrypted virtual machines might not auto power on even when Autostart is configured with the Start delay option to set a specific start time of the host. The issue affects only encrypted virtual machines due to a delay in the distribution of keys from standard key providers.
This issue is resolved in this release. The fix makes sure that the Start delay option for encrypted VMs considers the delay for getting a key after an ESXi host reboot.
- PR 2792504: Disabling the SLP service might cause failures during operations with host profiles
When the SLP service is disabled to prevent potential security vulnerabilities, the sfcbd-watchdog service might remain enabled and cause compliance check failures when you perform updates by using a host profile.
This issue is resolved in this release.
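For reference, the commonly documented way to stop and disable the SLP service from the ESXi Shell is sketched below; treat it as an assumption and confirm against VMware's current guidance for your build:

  # Stop the SLP service and block its firewall ruleset
  /etc/init.d/slpd stop
  esxcli network firewall ruleset set -r CIMSLP -e false

  # Keep the service disabled across reboots
  chkconfig slpd off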
- PR 2749902: vSphere vMotion operations on ESXi 6.7.x hosts with more than 120 virtual machines might get slow or occasionally fail
When an ESXi 6.7.x host has more than 120 VMs on it, vSphere vMotion operations to that host or from that host might become slow or occasionally fail.
This issue is resolved in this release.
- PR 2765136: vSphere Client might not meet all current browser security standards
In certain environments, vSphere Client might not fully protect the underlying websites through modern browser security mechanisms.
This issue is resolved in this release.
- PR 2697256: ESXi updates and upgrades might fail on Mac Pro servers
ESXi updates and upgrades might fail on Mac Pro servers with a purple diagnostic screen and a message such as
PSOD - NOT IMPLEMENTED bora/vmkernel/hardware/pci/bridge.c:372 PCPU0:2097152/bootstrap
. The issue occurs when Mac Pro BIOS does not assign resources to some devices. This issue is resolved in this release.
- PR 2731139: Packets with "IPv6 Tunnel Encapsulation Limit option" are lost in traffic between virtual machines with ENS enabled
Some Linux kernels add the
IPv6 Tunnel Encapsulation Limit option
to IPv6 tunnel packets as described in RFC 2473, section 5.1. As a result, IPv6 tunnel packets are dropped in traffic between virtual machines with ENS enabled, because of the IPv6 extension header. This issue is resolved in this release. The fix is to correctly parse packets with the
IPv6 Tunnel Encapsulation Limit option
enabled.
- PR 2733489: Taking a snapshot of a large virtual machine on a large VMFS6 datastore might take a long time
Resource allocation for a delta disk to create a snapshot of a large virtual machine, with virtual disks equal to or exceeding 1TB, on large VMFS6 datastores of 30TB or more, might take significant time. As a result, the virtual machine might temporarily lose connectivity. The issue affects primarily VMFS6 filesystems.
This issue is resolved in this release.
- PR 2755232: An ESXi host might not discover devices configured with the VMW_SATP_INV plug-in
In case of temporary connectivity issues, ESXi hosts might not discover devices configured with the
VMW_SATP_INV
plug-in after connectivity is restored, because SCSI commands that fail during the device discovery stage cause an out-of-memory condition for the plug-in. This issue is resolved in this release.
- PR 2761131: If you import virtual machines with a virtual USB device to a vCenter Server system, the virtual machines might fail to power on
If you import a virtual machine with a virtual USB device that is not supported by vCenter Server, such as a virtual USB camera, to a vCenter Server system, the VM might fail to power on. The start of such virtual machines fails with an error message similar to:
PANIC: Unexpected signal: 11
. The issue affects mostly VM import operations from VMware Fusion or VMware Workstation systems, which support a wide range of virtual USB devices. This issue is resolved in this release. The fix ignores any unsupported virtual USB devices in virtual machines imported to a vCenter Server system.
- PR 2763933: In the vmkernel.log, you see an error for an unexpected result while waiting for pointer block cluster (PBC) to clear
A non-blocking I/O operation might block a PBC file operation in a VMFS volume. In the
vmkernel.log
, you see an error such as:
cpu2:2098100)WARNING: PB3: 5020: Unexpected result (Would block) while waiting for PBC to clear -FD c63 r24
However, this is expected behavior. This issue is resolved in this release. The fix removes the error message in cases when an I/O operation blocks a PBC file operation.
- PR 2766401: When an overlay tunnel is configured from a guest VM with a default VXLAN port other than 4789, vmxnet3 might drop packets
By default, the
vmxnet3
driver uses port 4789 when a VM uses ENS and port 8472 when ENS is not enabled. As a result, when an overlay tunnel is configured from a guest VM with a different default VXLAN port, vmxnet3
might drop packets. This issue is resolved in this release. The fix makes sure
vmxnet3
delivers packets when a VM uses either port 4789 or 8472. However, the fix works only on VMs on ESXi 6.7 and later, with hardware versions later than 14, and when the overlay tunnel is configured from a guest VM.
- PR 2744211: A static MAC cannot be learned on another VNIC port across a different VLAN
In some cases, MAC learning does not work as expected and affects some virtual machine operations. For example, cloning a VM with the same MAC address on a different VLAN causes a traffic flood to the cloned VM in the ESXi host where the original VM is present.
This issue is resolved in this release. The fix makes sure that a static MAC can be learned on another port, on a different VLAN. For example, a static MAC x on VNIC port p1 on VLAN 1 can be learned on VNIC port p2 on VLAN 2.
- PR 2771730: You cannot create a vCenter Host Profiles CIM Indication Subscription due to an out of memory error
Attempts to create a vCenter Host Profiles CIM Indication Subscription might fail due to a cim-xml parsing request that returns an out of memory error.
This issue is resolved in this release.
- PR 2800655: You might see packet drops on uplink on heavy traffic
The packet scheduler that controls network I/O in a vCenter Server system has a fixed length queue that might quickly fill up. As a result, you might see packet drops on uplink on heavy traffic.
This issue is resolved in this release. The fix makes the Network I/O Control queue size dynamic to allow expansion under certain conditions.
- PR 2839716: You cannot add Intel Ice Lake servers to a Skylake cluster with enabled Enhanced vMotion Compatibility (EVC) mode
In a vSphere 6.7.x system, you cannot add an Intel Ice Lake server to a Skylake cluster with enabled EVC mode. In the vSphere Web Client or the vSphere Client, you see the following message:
Host CPU lacks features required by that mode. XSAVE of BND0-BND3 bounds registers (BNDREGS) is unsupported. XSAVE of BNDCFGU and BNDSTATUS registers (BNDCSR) is unsupported.
This issue is resolved in this release. You can update your ESXi hosts to ESXi670-202111001, or upgrade to ESXi 7.0.x.
- PR 2847812: Active Directory latency issues might impact performance across the entire vCenter Server system
The VIM API server deletes the Active Directory cache on certain intervals to check the validity of permissions of Active Directory domain accounts. After the cache delete operation, daily by default, the VIM API server re-populates the cache. However, the cache re-populate operation might be slow on domains with many accounts. As a result, such operations might impact performance across the entire vCenter Server system.
This issue is resolved in this release. The fix skips the step of re-populating the cache.
- PR 2726678: ESXi hosts might become unresponsive after an abrupt power cycling or reboot
An abrupt power cycling or reboot of an ESX host might cause a race between subsequent journal replays because other hosts might try to access the same resources. If a race condition happens, the journal replay cannot complete. As a result, virtual machines on a VMFS6 datastore become inaccessible or any operations running on the VMs fail or stop.
This issue is resolved in this release. The fix enhances local and remote journal replay for VMFS6.
- PR 2789302: Multiple cron instances run after using the reboot_helper.py script to shut down a vSAN cluster
If you manually run the
reboot_helper.py
script to shut down a vSAN cluster, the process cleans unused cron jobs registered during the shutdown. However, the script might start a new cron instance instead of restarting the cron daemon after cleaning the unused cron jobs. As a result, multiple cron instances might accumulate and you see cron jobs executed multiple times. This issue is resolved in this release.
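If you suspect this issue on a host, a quick check from the ESXi Shell is sketched below; the crond pid file path and busybox location are assumptions based on common ESXi 6.x layouts:

  # There should normally be a single crond process
  ps | grep crond

  # If duplicate instances accumulated, stop the running one and start a single new instance
  kill $(cat /var/run/crond.pid)
  /usr/lib/vmware/busybox/bin/busybox crond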
- PR 2750034: The hostd service might fail with a “Memory exceeds hard limit” error due to a memory leak
If you run a recursive call on a large datastore, such as
DatastoreBrowser::SearchSubFolders
, a memory leak condition might cause the hostd service to fail with an out-of-memory error. This issue is resolved in this release. The fix prevents the memory leak condition.
- PR 2739223: vSAN health reports certificate errors in the configured KMIP provider
When you enable encryption for a vSAN cluster with a Key Management Interoperability Protocol (KMIP) provider, the following health check might report status errors: vCenter KMS status.
This issue is resolved in this release.
- PR 2794562: The recommended size of the core dump partition for an all-flash vSAN cluster might be larger than required
For all-flash vSAN configurations, you might see larger than required recommended size for the core dump partition. As a result, the core dump configuration check might fail.
This issue is resolved in this release.
- PR 2857511: Virtual machines appear as inaccessible in the vSphere Client and you might see some downtime for applications
In rare cases, hardware issues might cause an SQLite DB corruption that makes multiple VMs inaccessible and leads to some downtime for applications.
This issue is resolved in this release.
- PR 2847091: Hosts in a vSAN cluster might fail due to a memory block attributes issue
When a vSAN host has memory pressure, adding disks to a disk group might cause failure of the memory block attributes (blkAttr) component initialization. Without the blkAttr component, commit and flush tasks stop, which causes log entries to build up in the SSD and eventually cause congestion. As a result, an NMI failure might occur due to the CPU load for processing the large number of log entries.
This issue is resolved in this release.
- PR 2765679: Cannot add hosts to a vSAN cluster with large cluster support enabled
In some cases, the host designated as the leader cannot send heartbeat messages to other hosts in a large vSAN cluster. This issue occurs when the leader uses an insufficient size of the TX buffer. The result is that new hosts cannot join the cluster with large cluster support (up to 64 hosts) enabled.
This issue is resolved in this release.
- PR 2731643: You see memory congestion on vSAN cluster hosts with large number of idle VMs
On a vSAN cluster with a large number of idle virtual machines, vSAN hosts might experience memory congestion due to higher scrub frequency.
This issue is resolved in this release.
- PR 2774667: vSAN automatic rebalance does not occur even after capacity reaches 80%
An issue with trim space operations count might cause a large number of pending delete operations in vSAN. If an automatic rebalancing operation triggers, it cannot progress due to such pending delete operations. For example, if you set the rebalancing threshold at 30%, you do not see any progress on rebalancing even when the storage capacity reaches 80%.
This issue is resolved in this release.
- PR 2778008: ESXi hosts might fail with a purple diagnostic screen due to a lock spinout on a virtual machine configured to use persistent Memory (PMem) with large number of vCPUs
Due to a dependency between the management of virtual RAM and virtual NVDIMM within a virtual machine, excessive access to the virtual NVDIMM device might lead to a lock spinout while accessing virtual RAM and cause the ESXi host to fail with a purple diagnostic screen. In the backtrace, you see the following:
SP_WaitLock
SPLockWork
AsyncRemapPrepareRemapListVM
AsyncRemap_AddOrRemapVM
LPageSelectLPageToDefrag
VmAssistantProcessTasks
This issue is resolved in this release.
- PR 2833428: ESXi hosts with virtual machines with Latency Sensitivity enabled might randomly become unresponsive due to CPU starvation
When you enable Latency Sensitivity on virtual machines, some threads of the Likewise Service Manager (lwsmd), which sets CPU affinity explicitly, might compete for CPU resources on such virtual machines. As a result, you might see the ESXi host and the hostd service become unresponsive.
This issue is resolved in this release. The fix makes sure lwsmd does not set CPU affinity explicitly.
- PR 2835990: Adding ESXi hosts to an Active Directory domain might take long
Some LDAP queries that have no specified timeouts might cause a significant delay in domain join operations for adding an ESXi host to an Active Directory domain.
This issue is resolved in this release. The fix adds a standard 15-second timeout with additional logging around the LDAP calls during the domain join workflow.
- PR 2852099: When vSphere Replication is enabled on a virtual machine, many other VMs might become unresponsive
When vSphere Replication is enabled on a virtual machine, you might see higher datastore and in-guest latencies that in certain cases might lead to ESXi hosts becoming unresponsive to vCenter Server. The increased latency comes from vSphere Replication computing MD5 checksums on the I/O completion path, which delays all other I/Os.
This issue is resolved in this release. The fix offloads the vSphere Replication MD5 calculation from the I/O completion path to a work pool and reduces the amount of outstanding I/O that vSphere Replication issues.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2761169 |
CVE numbers | N/A |
Updates the vmkusb
VIB to resolve the following issue:
- PR 2761169: Connection to the /bootbank partition intermittently breaks when you use USB or SD devices
ESXi670-202111001 adds two fixes to the USB host controller for cases when a USB or SD device loses connectivity to the
/bootbank
partition:
- Adds a parameter usbStorageRegisterDelaySecs.
The usbStorageRegisterDelaySecs parameter is part of the vmkusb module and delays the registration of cached USB storage devices within a limit of 0 to 600 seconds. The default delay is 10 seconds. The delay makes sure that if the USB host controller disconnects from your vSphere system while any resource on the device, such as dump files, is still in use by ESXi, the USB or SD device does not get a new path that might break the connection to the /bootbank partition or corrupt the VMFS-L LOCKER partition.
- Enhances the USB host controller driver to tolerate command timeout failures.
In cases when a device, such as Dell IDSDM, reconnects by a hardware signal, or a USB device resets due to a heavy workload, the USB host controller driver might make only one retry to restore connectivity. With the fix, the controller makes multiple retries and extends tolerance to timeout failures.
This issue is resolved in this release. As a best practice, do not set the dump partition on a USB storage device and do not place USB devices under a heavy workload. For more information, see VMware knowledge base articles 2077516 and 2149257.
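A sketch of how the new module parameter could be set from the ESXi Shell after applying this patch; the 60-second value is only an example within the documented 0 to 600 second range, and module parameter changes take effect after a reboot:

  # Set the cached USB storage registration delay to 60 seconds (example value)
  esxcli system module parameters set -m vmkusb -p "usbStorageRegisterDelaySecs=60"

  # Verify the configured parameters of the vmkusb module
  esxcli system module parameters list -m vmkusb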
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2555789 |
CVE numbers | N/A |
Updates the nvme
VIB.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2775375 |
CVE numbers | N/A |
Updates the brcmfcoe
VIB to resolve the following issue:
- PR 2775375: ESXi hosts might lose connectivity after brcmfcoe driver upgrade on Hitachi storage arrays
After an upgrade of the brcmfcoe driver on Hitachi storage arrays, ESXi hosts might fail to boot and lose connectivity.
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2820768 |
CVE numbers | N/A |
Updates the lsu-smartpqi-plugin
VIB.
- PR 2820768: OEM smartpqi drivers of version 1.0.3.2323 and earlier fail to blink LEDs or get the physical locations on logical drives
The OEM smartpqi driver in ESXi670-202111001 uses a new logical LUN ID assignment rule to become compatible with the drivers of versions later than 1.0.3.2323. As a result, OEM smartpqi drivers of version 1.0.3.2323 and earlier might fail to blink LEDs or get the physical locations on logical drives.
This issue is resolved in this release. However, if you use an older version of the OEM smartpqi driver, upgrade to a version later than 1.0.3.2323 to avoid the issue.
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2704746, 2765136, 2794798, 2794804, 2794933, 2794943, 2795965, 2797008, 2830222, 2830671 |
CVE numbers | CVE-2015-5180, CVE-2015-8777, CVE-2015-8982, CVE-2016-3706, CVE-2017-1000366, CVE-2018-1000001, CVE-2018-19591, CVE-2019-19126, CVE-2020-10029, CVE-2021-28831 |
Updates esx-base, esx-update, vsan,
and vsanhealth
VIBs to resolve the following issues:
- Update of the SQLite database
The SQLite database is updated to version 3.34.1.
- Update to OpenSSL
The OpenSSL package is updated to version openssl-1.0.2za.
- Update to cURL
The cURL library is updated to 7.78.0.
- Update to OpenSSH
OpenSSH is updated to version 8.6p1.
- Update to the libarchive library
The libarchive library is updated to version 3.5.1.
- Update to the BusyBox package
The Busybox package is updated to address CVE-2021-28831.
- Update to the GNU C Library (glibc)
The glibc library is updated to address the following CVEs: CVE-2015-5180, CVE-2015-8777, CVE-2015-8982, CVE-2016-3706, CVE-2017-1000366, CVE-2018-1000001, CVE-2018-19591, CVE-2019-19126, CVE-2020-10029.
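After applying the security bulletin, you can confirm the installed build and VIB versions from the ESXi Shell; a minimal check:

  # Show the ESXi version and build number
  vmware -vl

  # List the updated VIBs and their versions
  esxcli software vib list | grep -E "esx-base|vsan|vsanhealth"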
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2762034 |
CVE numbers | N/A |
Updates the vmkusb
VIB.
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2762034, 2785530, 2816541 |
Related CVE numbers | N/A |
This patch updates the tools-light
VIB.
The following VMware Tools ISO images are bundled with ESXi670-202111001:
- windows.iso: VMware Tools 11.3.5 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.
- linux.iso: VMware Tools 10.3.23 ISO image for Linux OS with glibc 2.11 or later.
The following VMware Tools ISO images are available for download:
- VMware Tools 11.0.6:
  - windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).
- VMware Tools 10.0.12:
  - winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003.
  - linuxPreGLibc25.iso: supports Linux guest operating systems earlier than Red Hat Enterprise Linux (RHEL) 5, SUSE Linux Enterprise Server (SLES) 11, Ubuntu 7.04, and other distributions with glibc version earlier than 2.5.
- solaris.iso: VMware Tools image 10.3.10 for Solaris.
- darwin.iso: Supports Mac OS X versions 10.11 and later.
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2776028 |
Related CVE numbers | N/A |
This patch updates the cpu-microcode
VIB.
- The cpu-microcode VIB includes the following Intel microcode:
Code Name | FMS | Plt ID | MCU Rev | MCU Date | Brand Names |
Clarkdale | 0x20652 | 0x12 | 0x00000011 | 5/8/2018 | Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series |
Arrandale | 0x20655 | 0x92 | 0x00000007 | 4/23/2018 | Intel Core i7-620LE Processor |
Sandy Bridge DT | 0x206a7 | 0x12 | 0x0000002f | 2/17/2019 | Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series |
Westmere EP | 0x206c2 | 0x03 | 0x0000001f | 5/8/2018 | Intel Xeon 56xx Series; Intel Xeon 36xx Series |
Sandy Bridge EP | 0x206d6 | 0x6d | 0x00000621 | 3/4/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series |
Sandy Bridge EP | 0x206d7 | 0x6d | 0x0000071a | 3/24/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series |
Westmere EX | 0x206f2 | 0x05 | 0x0000003b | 5/16/2018 | Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series |
Ivy Bridge DT | 0x306a9 | 0x12 | 0x00000021 | 2/13/2019 | Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C |
Haswell DT | 0x306c3 | 0x32 | 0x00000028 | 11/12/2019 | Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series |
Ivy Bridge EP | 0x306e4 | 0xed | 0x0000042e | 3/14/2019 | Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series |
Ivy Bridge EX | 0x306e7 | 0xed | 0x00000715 | 3/14/2019 | Intel Xeon E7-8800/4800/2800-v2 Series |
Haswell EP | 0x306f2 | 0x6f | 0x00000046 | 1/27/2021 | Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series |
Haswell EX | 0x306f4 | 0x80 | 0x00000019 | 2/5/2021 | Intel Xeon E7-8800/4800-v3 Series |
Broadwell H | 0x40671 | 0x22 | 0x00000022 | 11/12/2019 | Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series |
Avoton | 0x406d8 | 0x01 | 0x0000012d | 9/16/2019 | Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series |
Broadwell EP/EX | 0x406f1 | 0xef | 0x0b00003e | 2/6/2021 | Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series |
Skylake SP | 0x50654 | 0xb7 | 0x02006b06 | 3/8/2021 | Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series |
Cascade Lake B-0 | 0x50656 | 0xbf | 0x04003103 | 4/20/2021 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200 |
Cascade Lake | 0x50657 | 0xbf | 0x05003103 | 4/8/2021 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200 |
Cooper Lake | 0x5065b | 0xbf | 0x07002302 | 4/23/2021 | Intel Xeon Platinum 8300 Series; Intel Xeon Gold 6300/5300 |
Broadwell DE | 0x50662 | 0x10 | 0x0000001c | 6/17/2019 | Intel Xeon D-1500 Series |
Broadwell DE | 0x50663 | 0x10 | 0x0700001b | 2/4/2021 | Intel Xeon D-1500 Series |
Broadwell DE | 0x50664 | 0x10 | 0x0f000019 | 2/4/2021 | Intel Xeon D-1500 Series |
Broadwell NS | 0x50665 | 0x10 | 0x0e000012 | 2/4/2021 | Intel Xeon D-1600 Series |
Skylake H/S | 0x506e3 | 0x36 | 0x000000ea | 1/25/2021 | Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series |
Denverton | 0x506f1 | 0x01 | 0x00000034 | 10/23/2020 | Intel Atom C3000 Series |
Ice Lake SP | 0x606a6 | 0x87 | 0x0d0002a0 | 4/25/2021 | Intel Xeon Silver 4300 Series; Intel Xeon Gold 6300/5300 Series; Intel Xeon Platinum 8300 Series |
Snow Ridge | 0x80665 | 0x01 | 0x0b00000f | 2/17/2021 | Intel Atom P5000 Series |
Kaby Lake H/S/X | 0x906e9 | 0x2a | 0x000000ea | 1/5/2021 | Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series |
Coffee Lake | 0x906ea | 0x22 | 0x000000ea | 1/5/2021 | Intel Xeon E-2100 Series; Intel Xeon E-2200 Series (4 or 6 core) |
Coffee Lake | 0x906eb | 0x02 | 0x000000ea | 1/5/2021 | Intel Xeon E-2100 Series |
Coffee Lake | 0x906ec | 0x22 | 0x000000ea | 1/5/2021 | Intel Xeon E-2100 Series |
Coffee Lake Refresh | 0x906ed | 0x22 | 0x000000ea | 1/5/2021 | Intel Xeon E-2200 Series (8 core) |
Rocket Lake S | 0xa0671 | 0x02 | 0x00000040 | 4/11/2021 | Intel Xeon E-2300 Series |
Profile Name | ESXi-6.7.0-20211104001-standard |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | November 23, 2021 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2731317, 2727062, 2704831, 2764894, 2720189, 2761940, 2757263, 2752542, 2772048, 2768356, 2741112, 2786864, 2725025, 2776791, 2724166, 2790353, 2787866, 2790502, 2834968, 2832090, 2841163, 2804742, 2732482, 2793380, 2813328, 2792504, 2749902, 2765136, 2697256, 2731139, 2733489, 2755232, 2761131, 2763933, 2766401, 2744211, 2771730, 2800655, 2839716, 2847812, 2726678, 2789302, 2750034, 2739223, 2794562, 2857511, 2847091, 2765679, 2731643, 2774667, 2778008, 2833428, 2761169, 2775375, 2820768, 2835990, 2852099 |
Related CVE numbers | N/A |
- This patch resolves the following issues:
-
If the USB device attached to an ESXi host has descriptors that are not compliant with the standard USB specifications, the virtual USB stack might fail to pass through a USB device into a virtual machine. As a result, virtual machines become unresponsive, and you must either power off the virtual machine by using an ESXCLI command, or restart the ESXi host.
-
When attempting to set a value to false for an advanced option parameter in a host profile, the user interface creates a non-empty string value. Values that are not empty are interpreted as true and the advanced option parameter receives a true value in the host profile.
-
A rare race condition with static map initialization might cause ESXi hosts to temporarily lose connectivity to vCenter Server after the vSphere Replication appliance powers on. However, the hostd service automatically restarts and the ESXi hosts restore connectivity.
-
In rare cases, an empty or unset property of a VIM API data array might cause the hostd service to fail. As a result, ESXi hosts lose connectivity to vCenter Server and you must manually reconnect the hosts.
-
A rare error condition in the VMKernel might cause ESXi hosts to fail when powering on a virtual machine with more than 1 virtual CPU. A
BSVMM_Validate@vmkernel
error is indicative of the problem.
-
During a vSphere vMotion operation, if the calling context holds any locks, ESXi hosts might fail with a purple diagnostic screen and an error such as
PSOD: Panic at bora/vmkernel/core/lock.c:2070
. -
ESXi hosts might intermittently fail with a purple diagnostic screen with an error such as
@BlueScreen: VERIFY bora/vmkernel/sched/cpusched.c
that suggests a preemption anomaly. However, the VMkernel preemption anomaly detection logic might fail to identify the correct kernel context and show a warning for the wrong context. -
If you change the
DiskMaxIOSize
advanced config option to a lower value, I/Os with large block sizes might get incorrectly split and queue at the PSA path. As a result, I/O operations on ESXi hosts might time out and fail.
-
Some poll requests might exceed the metadata heap of the vmkapi character device driver and cause ESXi hosts to fail with a purple diagnostic screen and an error such as:
#PF Exception 14 in world 2099138:IPMI Response IP 0x41800b922ab0 addr 0x18 PTEs:0x0;
In the VMkernel logs, you might see messages such as:
WARNING: Heap: 3571: Heap VMKAPI-char-metadata already at its maximum size. Cannot expand.
and
WARNING: Heap: 4079: Heap_Align(VMKAPI-char-metadata, 40/40 bytes, 8 align) failed. caller: 0x418029922489
-
In rare cases, some Intel CPUs might fail to forward #DB traps and if a timer interrupt happens during the Windows system call, virtual machine might triple fault. In the
vmware.log
file, you see an error such as msg.monitorEvent.tripleFault
. This issue is not consistent and virtual machines that consistently triple fault or triple fault after boot are impacted by another issue. -
While handling the SCSI command
READ CAPACITY (10)
, ESXi might copy excess data from the response and corrupt the call stack. As a result, the ESXi host fails with a purple diagnostic screen. -
If you plug a physical NIC in and out in your vCenter Server system, after the uplink is restored, you still see an alarm in the
vmkernel.log
of some ESXi hosts that a NIC link is down. The VOBD daemon might not create the event esx.clear.net.redundancy.restored
to remove such alarms, which causes the issue. -
In some cases, when multiple USB or SD devices with different file systems are connected to your vCenter Server system, ESXi installation by using a ks.cfg installation script and a USB or SD booting device might fail.
-
In the first minutes after mounting a VMFS6 volume, you might see higher than configured rate of automatic unmap operations. For example, if you set the unmap bandwidth to 2000 MB/s, you see unmap operations running at more than 3000 MB/s.
-
For independent nonpersistent mode virtual disks, all writes are stored in a redo log, which is a temporary file with extension
.REDO_XXXXXX
in the VM directory. However, a reset of a virtual machine does not clear the old redo log on such disks. -
When you run the esxtop utility by using the ESXi Shell to replay a pre-recorded support bundle, the operation might fail with a segmentation fault.
-
Rarely, in certain configurations, the shutdown or reboot of ESXi hosts might stop at the step
Shutting down device drivers
for a long time, in the order of 20 minutes, but the operation eventually completes. -
The virtual network adapter VMXNET Generation 3 (VMXNET3) uses buffers to process rx packets. Such buffers are either pre-pinned or pinned and mapped during runtime. On rare occasions, some buffers might not unpin and get re-pinned later, resulting in a higher-than-expected pin count of such a buffer. As a result, virtual machines might fail with an error such as
MONITOR PANIC: vmk: vcpu-0:Too many sticky pages.
. -
Due to a rare lock rank violation in the vSphere Replication I/O filter, some ESXi hosts might fail with a purple diagnostic screen when vSphere Replication is enabled. You see an error such as
VERIFY bora/vmkernel/main/bh.c:978
on the screen. -
In vCenter Server advanced performance charts, you see an increasing packet drop count for all virtual machines that have NetX redirection enabled. However, if you disable NetX redirection, the count becomes 0.
-
Virtual switches processing IGMPv3 SSM specific queries might drop some queries when an IP is not included in the list of available ports as a result of the source IP check.
-
An integer overflow bug might cause vSAN DOM to issue scrubs more frequently than the configured setting.
-
In particular circumstances, when you power on a virtual machine with a corrupted VMware Tools manifest file, the hostd service might fail. As a result, the ESXi host becomes unresponsive. You can backtrace the issue in the hostd dump file that is usually generated in such cases at the time a VM is powered on.
-
After a reboot of an ESXi host, encrypted virtual machines might not auto power on even when Autostart is configured with the Start delay option to set a specific start time of the host. The issue affects only encrypted virtual machines due to a delay in the distribution of keys from standard key providers.
-
When the SLP service is disabled to prevent potential security vulnerabilities, the sfcbd-watchdog service might remain enabled and cause compliance check failures when you perform updates by using a host profile.
-
When an ESXi 6.7.x host has more than 120 VMs on it, vSphere vMotion operations to that host or from that host might become slow or occasionally fail.
-
In certain environments, vSphere Client might not fully protect the underlying websites through modern browser security mechanisms.
-
ESXi updates and upgrades might fail on Mac Pro servers with a purple diagnostic screen and a message such as
PSOD - NOT IMPLEMENTED bora/vmkernel/hardware/pci/bridge.c:372 PCPU0:2097152/bootstrap
. The issue occurs when Mac Pro BIOS does not assign resources to some devices. -
Some Linux kernels add the
IPv6 Tunnel Encapsulation Limit option
to IPv6 tunnel packets as described in the RFC 2473, par. 5.1. As a result, IPv6 tunnel packets are dropped in traffic between virtual machines with ENS enabled, because of the IPv6 extension header. -
Resource allocation for a delta disk to create a snapshot of a large virtual machine, with virtual disks equal to or exceeding 1TB, on large VMFS6 datastores of 30TB or more, might take significant time. As a result, the virtual machine might temporarily lose connectivity. The issue affects primarily VMFS6 filesystems.
-
In case of temporary connectivity issues, ESXi hosts might not discover devices configured with the
VMW_SATP_INV
plug-in after connectivity restores, because SCSI commands that fail during the device discovery stage cause an out-of-memory condition for the plug-in. -
If you import a virtual machine with a virtual USB device that is not supported by vCenter Server, such as a virtual USB camera, to a vCenter Server system, the VM might fail to power on. The start of such virtual machines fails with an error message similar to:
PANIC: Unexpected signal: 11
. The issue affects mostly VM import operations from VMware Fusion or VMware Workstation systems, which support a wide range of virtual USB devices. -
A non-blocking I/O operation might block a PBC file operation in a VMFS volume. In the
vmkernel.log
, you see an error such as:
cpu2:2098100)WARNING: PB3: 5020: Unexpected result (Would block) while waiting for PBC to clear -FD c63 r24
However, this is expected behavior. -
By default, the
vmxnet3
driver uses port 4789 when a VM uses ENS and port 8472 when ENS is not enabled. As a result, when an overlay tunnel is configured from a guest VM with a different default VXLAN port, vmxnet3
might drop packets. -
In some cases, MAC learning does not work as expected and affects some virtual machine operations. For example, cloning a VM with the same MAC address on a different VLAN causes a traffic flood to the cloned VM in the ESXi host where the original VM is present.
-
Attempts to create a vCenter Host Profiles CIM Indication Subscription might fail due to a cim-xml parsing request that returns an out of memory error.
-
The packet scheduler that controls network I/O in a vCenter Server system has a fixed length queue that might quickly fill up. As a result, you might see packet drops on uplink on heavy traffic.
-
In a vSphere 6.7.x system, you cannot add an Intel Ice Lake server to a Skylake cluster with enabled EVC mode. In the vSphere Web Client or the vSphere Client, you see the following message:
Host CPU lacks features required by that mode. XSAVE of BND0-BND3 bounds registers (BNDREGS) is unsupported. XSAVE of BNDCFGU and BNDSTATUS registers (BNDCSR) is unsupported.
-
The VIM API server deletes the Active Directory cache on certain intervals to check the validity of permissions of Active Directory domain accounts. After the cache delete operation, daily by default, the VIM API server re-populates the cache. However, the cache re-populate operation might be slow on domains with many accounts. As a result, such operations might impact performance across the entire vCenter Server system.
-
An abrupt power cycling or reboot of an ESX host might cause a race between subsequent journal replays because other hosts might try to access the same resources. If a race condition happens, the journal replay cannot complete. As a result, virtual machines on a VMFS6 datastore become inaccessible or any operations running on the VMs fail or stop.
-
If you manually run the
reboot_helper.py
script to shut down a vSAN cluster, the process cleans unused cron jobs registered during the shutdown. However, the script might start a new cron instance instead of restarting the cron daemon after cleaning the unused cron jobs. As a result, multiple cron instances might accumulate and you see cron jobs executed multiple times. -
If you run a recursive call on a large datastore, such as
DatastoreBrowser::SearchSubFolders
, a memory leak condition might cause the hostd service to fail with an out-of-memory error. -
When you enable encryption for a vSAN cluster with a Key Management Interoperability Protocol (KMIP) provider, the following health check might report status errors: vCenter KMS status.
-
For all-flash vSAN configurations, you might see larger than required recommended size for the core dump partition. As a result, the core dump configuration check might fail.
-
In rare cases, hardware issues might cause an SQLite DB corruption that makes multiple VMs inaccessible and leads to some downtime for applications.
-
When a vSAN host has memory pressure, adding disks to a disk group might cause failure of the memory block attributes (blkAttr) component initialization. Without the blkAttr component, commit and flush tasks stop, which causes log entries to build up in the SSD and eventually cause congestion. As a result, an NMI failure might occur due to the CPU load for processing the large number of log entries.
-
In some cases, the host designated as the leader cannot send heartbeat messages to other hosts in a large vSAN cluster. This issue occurs when the leader uses an insufficient size of the TX buffer. The result is that new hosts cannot join the cluster with large cluster support (up to 64 hosts) enabled.
-
On a vSAN cluster with a large number of idle virtual machines, vSAN hosts might experience memory congestion due to higher scrub frequency.
-
An issue with trim space operations count might cause a large number of pending delete operations in vSAN. If an automatic rebalancing operation triggers, it cannot progress due to such pending delete operations. For example, if you set the rebalancing threshold at 30%, you do not see any progress on rebalancing even when the storage capacity reaches 80%.
-
Due to a dependency between the management of virtual RAM and virtual NVDIMM within a virtual machine, excessive access to the virtual NVDIMM device might lead to a lock spinout while accessing virtual RAM and cause the ESXi host to fail with a purple diagnostic screen. In the backtrace, you see the following:
SP_WaitLock
SPLockWork
AsyncRemapPrepareRemapListVM
AsyncRemap_AddOrRemapVM
LPageSelectLPageToDefrag
VmAssistantProcessTasks
-
When you enable Latency Sensitivity on virtual machines, some threads of the Likewise Service Manager (lwsmd), which sets CPU affinity explicitly, might compete for CPU resources on such virtual machines. As a result, you might see the ESXi host and the hostd service become unresponsive.
-
ESXi670-202111001 adds two fixes to the USB host controller for cases when a USB or SD device loses connectivity to the
/bootbank
partition:
- Adds a parameter usbStorageRegisterDelaySecs.
The usbStorageRegisterDelaySecs parameter is part of the vmkusb module and delays the registration of cached USB storage devices within a limit of 0 to 600 seconds. The default delay is 10 seconds. The delay makes sure that if the USB host controller disconnects from your vSphere system while any resource on the device, such as dump files, is still in use by ESXi, the USB or SD device does not get a new path that might break the connection to the /bootbank partition or corrupt the VMFS-L LOCKER partition.
- Enhances the USB host controller driver to tolerate command timeout failures.
In cases when a device, such as Dell IDSDM, reconnects by a hardware signal, or a USB device resets due to a heavy workload, the USB host controller driver might make only one retry to restore connectivity. With the fix, the controller makes multiple retries and extends tolerance to timeout failures.
- Some LDAP queries that have no specified timeouts might cause a significant delay in domain join operations for adding an ESXi host to an Active Directory domain.
-
After an upgrade of the brcmfcoe driver on Hitachi storage arrays, ESXi hosts might fail to boot and lose connectivity.
-
The OEM smartpqi driver in ESXi670-202111001 uses a new logical LUN ID assignment rule to become compatible with the drivers of versions later than 1.0.3.2323. As a result, OEM smartpqi drivers of version 1.0.3.2323 and earlier might fail to blink LEDs or get the physical locations on logical drives.
-
When vSphere Replication is enabled on a virtual machine, you might see higher datastore and in-guest latencies that in certain cases might lead to ESXi hosts becoming unresponsive to vCenter Server. The increased latency comes from vSphere Replication computing MD5 checksums on the I/O completion path, which delays all other I/Os.
Profile Name | ESXi-6.7.0-20211104001-no-tools |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | November 23, 2021 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2731317, 2727062, 2704831, 2764894, 2720189, 2761940, 2757263, 2752542, 2772048, 2768356, 2741112, 2786864, 2725025, 2776791, 2724166, 2790353, 2787866, 2790502, 2834968, 2832090, 2841163, 2804742, 2732482, 2793380, 2813328, 2792504, 2749902, 2765136, 2697256, 2731139, 2733489, 2755232, 2761131, 2763933, 2766401, 2744211, 2771730, 2800655, 2839716, 2847812, 2726678, 2789302, 2750034, 2739223, 2794562, 2857511, 2847091, 2765679, 2731643, 2774667, 2778008, 2833428, 2761169, 2775375, 2820768, 2835990, 2852099 |
Related CVE numbers | N/A |
- This patch resolves the following issues:
-
If the USB device attached to an ESXi host has descriptors that are not compliant with the standard USB specifications, the virtual USB stack might fail to pass through a USB device into a virtual machine. As a result, virtual machines become unresponsive, and you must either power off the virtual machine by using an ESXCLI command, or restart the ESXi host.
-
When attempting to set a value to false for an advanced option parameter in a host profile, the user interface creates a non-empty string value. Values that are not empty are interpreted as true and the advanced option parameter receives a true value in the host profile.
-
A rare race condition with static map initialization might cause ESXi hosts to temporarily lose connectivity to vCenter Server after the vSphere Replication appliance powers on. However, the hostd service automatically restarts and the ESXi hosts restore connectivity.
-
In rare cases, an empty or unset property of a VIM API data array might cause the hostd service to fail. As a result, ESXi hosts lose connectivity to vCenter Server and you must manually reconnect the hosts.
-
A rare error condition in the VMkernel might cause ESXi hosts to fail when powering on a virtual machine with more than one virtual CPU. A BSVMM_Validate@vmkernel error is indicative of the problem. -
During a vSphere vMotion operation, if the calling context holds any locks, ESXi hosts might fail with a purple diagnostic screen and an error such as
PSOD: Panic at bora/vmkernel/core/lock.c:2070
. -
ESXi hosts might intermittently fail with a purple diagnostic screen with an error such as
@BlueScreen: VERIFY bora/vmkernel/sched/cpusched.c
that suggests a preemption anomaly. However, the VMkernel preemption anomaly detection logic might fail to identify the correct kernel context and show a warning for the wrong context. -
If you change the DiskMaxIOSize advanced configuration option to a lower value, I/Os with large block sizes might get incorrectly split and queue at the PSA path. As a result, I/O operations on ESXi hosts might time out and fail.
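A quick way to review the option before and after changing it is the ESXi Shell; the following is a minimal sketch, with 32767 KB shown only as the typical default value:
# Show the current, default, minimum, and maximum values of Disk.DiskMaxIOSize
esxcli system settings advanced list -o /Disk/DiskMaxIOSize
# Example only: set the option back to the typical default of 32767 KB
esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 32767
-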
Some poll requests might exceed the metadata heap of the vmkapi character device driver and cause ESXi hosts to fail with a purple diagnostic screen and an error such as:
#PF Exception 14 in world 2099138:IPMI Response IP 0x41800b922ab0 addr 0x18 PTEs:0x0;
In the VMkernel logs, you might see messages such as:
WARNING: Heap: 3571: Heap VMKAPI-char-metadata already at its maximum size. Cannot expand.
and
WARNING: Heap: 4079: Heap_Align(VMKAPI-char-metadata, 40/40 bytes, 8 align) failed. caller: 0x418029922489
-
In rare cases, some Intel CPUs might fail to forward #DB traps, and if a timer interrupt happens during a Windows system call, the virtual machine might triple fault. In the vmware.log file, you see an error such as msg.monitorEvent.tripleFault. This issue is intermittent; virtual machines that consistently triple fault, or that triple fault right after boot, are affected by a different issue. -
While handling the SCSI command
READ CAPACITY (10)
, ESXi might copy excess data from the response and corrupt the call stack. As a result, the ESXi host fails with a purple diagnostic screen. -
If you unplug and plug back in a physical NIC in your vCenter Server system, after the uplink is restored, you still see an alarm in the vmkernel.log of some ESXi hosts that a NIC link is down. The issue occurs because the VOBD daemon might not create the event esx.clear.net.redundancy.restored to remove such alarms. -
In some cases, when multiple USB or SD devices with different file systems are connected to your vCenter Server system, ESXi installation by using a ks.cfg installation script and a USB or SD booting device might fail.
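For context, a scripted installation uses a kickstart file such as the following minimal sketch; the root password is a placeholder and the disk selection flags depend on your setup:
# Minimal ks.cfg example with placeholder values
vmaccepteula
install --firstdisk --overwritevmfs
rootpw Placeholder-Passw0rd!
reboot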
-
In the first minutes after mounting a VMFS6 volume, you might see a higher than configured rate of automatic unmap operations. For example, if you set the unmap bandwidth to 2000 MB/s, you see unmap operations running at more than 3000 MB/s.
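To compare the observed rate with the configured settings of a given datastore, you can check its reclaim configuration; the datastore label below is a placeholder:
# Show the configured automatic space reclamation (unmap) settings of a VMFS6 datastore
esxcli storage vmfs reclaim config get -l MyVMFS6Datastore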
-
For independent nonpersistent mode virtual disks, all writes are stored in a redo log, which is a temporary file with extension
.REDO_XXXXXX
in the VM directory. However, a reset of a virtual machine does not clear the old redo log on such disks. -
When you run the esxtop utility by using the ESXi Shell to replay a pre-recorded support bundle, the operation might fail with a segmentation fault.
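As background, replay mode works against a performance snapshot collected with vm-support; a hedged sketch with placeholder paths and intervals is:
# Collect a performance snapshot with 10-second samples for a 60-second duration
vm-support -p -i 10 -d 60
# Extract the resulting bundle on the host (run its reconstruct script, if one is included), then replay it
esxtop -R /vmfs/volumes/datastore1/extracted-snapshot-dir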
-
Rarely, in certain configurations, the shutdown or reboot of ESXi hosts might stall at the step Shutting down device drivers for a long time, on the order of 20 minutes, but the operation eventually completes. -
The virtual network adapter VMXNET Generation 3 (VMXNET3) uses buffers to process received (rx) packets. Such buffers are either pre-pinned, or pinned and mapped during runtime. On rare occasions, some buffers might not un-pin and then get re-pinned later, resulting in a higher-than-expected pin count for such a buffer. As a result, virtual machines might fail with an error such as
MONITOR PANIC: vmk: vcpu-0:Too many sticky pages.
. -
Due to a rare lock rank violation in the vSphere Replication I/O filter, some ESXi hosts might fail with a purple diagnostic screen when vSphere Replication is enabled. You see an error such as
VERIFY bora/vmkernel/main/bh.c:978
on the screen. -
In vCenter Server advanced performance charts, you see an increasing packet drop count for all virtual machines that have NetX redirection enabled. However, if you disable NetX redirection, the count becomes 0.
-
Virtual switches processing IGMPv3 SSM specific queries might drop some queries when an IP is not included in the list of available ports as a result of the source IP check.
-
An integer overflow bug might cause vSAN DOM to issue scrubs more frequently than the configured setting.
-
In particular circumstances, when you power on a virtual machine with a corrupted VMware Tools manifest file, the hostd service might fail. As a result, the ESXi host becomes unresponsive. You can trace the issue in the hostd dump file that is usually generated in such cases at the time the VM is powered on.
-
After a reboot of an ESXi host, encrypted virtual machines might not auto power on even when Autostart is configured with the Start delay option to set a specific start time of the host. The issue affects only encrypted virtual machines due to a delay in the distribution of keys from standard key providers.
-
When the SLP service is disabled to prevent potential security vulnerabilities, the sfcbd-watchdog service might remain enabled and cause compliance check failures when you perform updates by using a host profile.
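For reference, a hedged sketch of the commands commonly used to stop SLP, assuming the standard slpd init script and CIMSLP firewall ruleset names, together with a check of the WBEM services that sfcbd-watchdog controls, is:
# Stop the SLP service and keep it from starting at boot
/etc/init.d/slpd stop
chkconfig slpd off
# Block the CIM SLP firewall ruleset
esxcli network firewall ruleset set -r CIMSLP -e 0
# Check whether the WBEM services are still enabled
esxcli system wbem get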
-
When an ESXi 6.7.x host has more than 120 VMs on it, vSphere vMotion operations to that host or from that host might become slow or occasionally fail.
-
In certain environments, vSphere Client might not fully protect the underlying websites through modern browser security mechanisms.
-
ESXi updates and upgrades might fail on Mac Pro servers with a purple diagnostic screen and a message such as
PSOD - NOT IMPLEMENTED bora/vmkernel/hardware/pci/bridge.c:372 PCPU0:2097152/bootstrap
. The issue occurs when Mac Pro BIOS does not assign resources to some devices. -
Some Linux kernels add the IPv6 Tunnel Encapsulation Limit option to IPv6 tunnel packets, as described in RFC 2473, section 5.1. As a result, IPv6 tunnel packets are dropped in traffic between virtual machines with ENS enabled, because of the IPv6 extension header. -
Resource allocation for a delta disk to create a snapshot of a large virtual machine, with virtual disks equal to or exceeding 1TB, on large VMFS6 datastores of 30TB or more, might take significant time. As a result, the virtual machine might temporarily lose connectivity. The issue affects primarily VMFS6 filesystems.
-
In case of temporary connectivity issues, ESXi hosts might not discover devices configured with the VMW_SATP_INV plug-in after connectivity is restored, because SCSI commands that fail during the device discovery stage cause an out-of-memory condition for the plug-in.
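To confirm which SATP claims a device after connectivity is restored, you can list the NMP devices; the device identifier below is a placeholder:
# List NMP devices together with the SATP and path selection policy that claim them
esxcli storage nmp device list
# Or inspect a single device by its identifier
esxcli storage nmp device list -d naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
-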
If you import a virtual machine with a virtual USB device that is not supported by vCenter Server, such as a virtual USB camera, to a vCenter Server system, the VM might fail to power on. The start of such virtual machines fails with an error message similar to:
PANIC: Unexpected signal: 11
. The issue affects mostly VM import operations from VMware Fusion or VMware Workstation systems, which support a wide range of virtual USB devices. -
A non-blocking I/O operation might block a PBC file operation in a VMFS volume. In the
vmkernel.log
, you see an error such as:
cpu2:2098100)WARNING: PB3: 5020: Unexpected result (Would block) while waiting for PBC to clear -FD c63 r24
However, this is expected behavior. -
By default, the vmxnet3 driver uses port 4789 when a VM uses ENS and port 8472 when ENS is not enabled. As a result, when an overlay tunnel is configured from a guest VM with a VXLAN port different from the default, vmxnet3 might drop packets. -
In some cases, MAC learning does not work as expected and affects some virtual machine operations. For example, cloning a VM with the same MAC address on a different VLAN causes a traffic flood to the cloned VM in the ESXi host where the original VM is present.
-
Attempts to create a vCenter Host Profiles CIM Indication Subscription might fail due to a cim-xml parsing request that returns an out of memory error.
-
The packet scheduler that controls network I/O in a vCenter Server system has a fixed-length queue that might quickly fill up. As a result, you might see packet drops on the uplink under heavy traffic.
-
In a vSphere 6.7.x system, you cannot add an Intel Ice Lake server to a Skylake cluster with EVC mode enabled. In the vSphere Web Client or the vSphere Client, you see the following message:
Host CPU lacks features required by that mode. XSAVE of BND0-BND3 bounds registers (BNDREGS) is unsupported. XSAVE of BNDCFGU and BNDSTATUS registers (BNDCSR) is unsupported.
-
The VIM API server deletes the Active Directory cache at certain intervals to check the validity of permissions of Active Directory domain accounts. After the cache delete operation, which runs daily by default, the VIM API server re-populates the cache. However, the cache re-population might be slow on domains with many accounts. As a result, such operations might impact performance across the entire vCenter Server system.
-
An abrupt power cycle or reboot of an ESXi host might cause a race between subsequent journal replays, because other hosts might try to access the same resources. If a race condition happens, the journal replay cannot complete. As a result, virtual machines on a VMFS6 datastore become inaccessible, or any operations running on the VMs fail or stop.
-
If you manually run the
reboot_helper.py
script to shut down a vSAN cluster, the process cleans unused cron jobs registered during the shutdown. However, the script might start a new cron instance instead of restarting the cron daemon after cleaning the unused cron jobs. As a result, multiple cron instances might accumulate and you see cron jobs executed multiple times.
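A hedged sketch of cleaning up the duplicated instances, assuming the usual busybox location on ESXi, is:
# Check how many cron daemons are running
ps | grep crond
# Stop the duplicate instances by PID, then start a single cron daemon
kill <crond-PID>
/usr/lib/vmware/busybox/bin/busybox crond
-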
If you run a recursive call on a large datastore, such as
DatastoreBrowser::SearchSubFolders
, a memory leak condition might cause the hostd service to fail with an out-of-memory error. -
When you enable encryption for a vSAN cluster with a Key Management Interoperability Protocol (KMIP) provider, the following health check might report status errors: vCenter KMS status.
-
For all-flash vSAN configurations, you might see a larger than required recommended size for the core dump partition. As a result, the core dump configuration check might fail.
-
In rare cases, hardware issues might cause an SQLite DB corruption that makes multiple VMs inaccessible and leads to some downtime for applications.
-
When a vSAN host has memory pressure, adding disks to a disk group might cause failure of the memory block attributes (blkAttr) component initialization. Without the blkAttr component, commit and flush tasks stop, which causes log entries to build up in the SSD and eventually causes congestion. As a result, an NMI failure might occur due to the CPU load for processing the large number of log entries.
-
In some cases, the host designated as the leader cannot send heartbeat messages to other hosts in a large vSAN cluster. This issue occurs when the leader uses an insufficient size of the TX buffer. The result is that new hosts cannot join the cluster with large cluster support (up to 64 hosts) enabled.
-
On a vSAN cluster with a large number of idle virtual machines, vSAN hosts might experience memory congestion due to higher scrub frequency.
-
An issue with trim space operations count might cause a large number of pending delete operations in vSAN. If an automatic rebalancing operation triggers, it cannot progress due to such pending delete operations. For example, if you set the rebalancing threshold at 30%, you do not see any progress on rebalancing even when the storage capacity reaches 80%.
-
Due to a dependency between the management of virtual RAM and virtual NVDIMM within a virtual machine, excessive access to the virtual NVDIMM device might lead to a lock spinout while accessing virtual RAM and cause the ESXi host to fail with a purple diagnostic screen. In the backtrace, you see the following:
SP_WaitLock
SPLockWork
AsyncRemapPrepareRemapListVM
AsyncRemap_AddOrRemapVM
LPageSelectLPageToDefrag
VmAssistantProcessTasks -
When you enable Latency Sensitivity on virtual machines, some threads of the Likewise Service Manager (lwsmd), which sets CPU affinity explicitly, might compete for CPU resources on such virtual machines. As a result, you might see the ESXi host and the hostd service become unresponsive.
-
ESXi670-202111001 adds two fixes to the USB host controller for cases when a USB or SD device loses connectivity to the /bootbank partition:
- Adds a usbStorageRegisterDelaySecs parameter. The usbStorageRegisterDelaySecs parameter is part of the vmkusb module and delays the registration of cached USB storage devices within a limit of 0 to 600 seconds. The default delay is 10 seconds. The configurable delay makes sure that if the USB host controller disconnects from your vSphere system while any resource on the device, such as dump files, is still in use by ESXi, the USB or SD device does not get a new path that might break the connection to the /bootbank partition or corrupt the VMFS-L LOCKER partition.
- Enhances the USB host controller driver to tolerate command timeout failures. In cases when a device, such as Dell IDSDM, reconnects by a hardware signal, or a USB device resets due to a heavy workload, the USB host controller driver might make only one retry to restore connectivity. With the fix, the controller makes multiple retries and extends tolerance to timeout failures.
- Some LDAP queries that have no specified timeouts might cause a significant delay in domain join operations for adding an ESXi host to an Active Directory domain.
-
After an upgrade of the brcmfcoe driver on Hitachi storage arrays, ESXi hosts might fail to boot and lose connectivity.
-
The OEM smartpqi driver in ESXi 670-202111001 uses a new logical LUN ID assignment rule to become compatible with the drivers of versions later than 1.0.3.2323. As a result, OEM smartpqi drivers of version 1.0.3.2323 and earlier might fail to blink LEDs or get the physical locations on logical drives.
-
When vSphere Replication is enabled on a virtual machine, you might see higher datastore and in-guest latencies that in certain cases might lead to ESXi hosts becoming unresponsive to vCenter Server. The increased latency comes from vSphere Replication computing MD5 checksums on the I/O completion path, which delays all other I/Os.
-
Profile Name | ESXi-6.7.0-20211101001s-standard |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | November 23, 2021 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2704746, 2765136, 2794798, 2794804, 2794933, 2794943, 2795965, 2797008, 2830222, 2830671, 2762034, 2762034, 2785530, 2816541, 2776028 |
Related CVE numbers | CVE-2015-5180, CVE-2015-8777, CVE-2015-8982, CVE-2016-3706, CVE-2017-1000366, CVE-2018-1000001, CVE-2018-19591, CVE-2019-19126, CVE-2020-10029, CVE-2021-28831 |
- This patch resolves the following issues:
-
The SQLite database is updated to version 3.34.1.
-
The OpenSSL package is updated to version openssl-1.0.2za.
-
The cURL library is updated to 7.78.0.
-
OpenSSH is updated to version 8.6p1.
-
The libarchive library is updated to version 3.5.1.
-
The Busybox package is updated to address CVE-2021-28831.
-
The glibc library is updated to address the following CVEs: CVE-2015-5180, CVE-2015-8777, CVE-2015-8982, CVE-2016-3706, CVE-2017-1000366, CVE-2018-1000001, CVE-2018-19591, CVE-2019-19126, CVE-2020-10029.
-
The following VMware Tools ISO images are bundled with ESXi 670-202111001:
- windows.iso: VMware Tools 11.3.5 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.
- linux.iso: VMware Tools 10.3.23 ISO image for Linux OS with glibc 2.11 or later.
The following VMware Tools ISO images are available for download:
- VMware Tools 11.0.6:
  - windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).
- VMware Tools 10.0.12:
  - winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003.
  - linuxPreGLibc25.iso: supports Linux guest operating systems earlier than Red Hat Enterprise Linux (RHEL) 5, SUSE Linux Enterprise Server (SLES) 11, Ubuntu 7.04, and other distributions with glibc version earlier than 2.5.
- solaris.iso: VMware Tools image 10.3.10 for Solaris.
- darwin.iso: Supports Mac OS X versions 10.11 and later.
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
-
This patch updates the
cpu-microcode
VIB. -
Code Name | FMS | Plt ID | MCU Rev | MCU Date | Brand Names
Clarkdale | 0x20652 | 0x12 | 0x00000011 | 5/8/2018 | Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series
Arrandale | 0x20655 | 0x92 | 0x00000007 | 4/23/2018 | Intel Core i7-620LE Processor
Sandy Bridge DT | 0x206a7 | 0x12 | 0x0000002f | 2/17/2019 | Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series
Westmere EP | 0x206c2 | 0x03 | 0x0000001f | 5/8/2018 | Intel Xeon 56xx Series; Intel Xeon 36xx Series
Sandy Bridge EP | 0x206d6 | 0x6d | 0x00000621 | 3/4/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
Sandy Bridge EP | 0x206d7 | 0x6d | 0x0000071a | 3/24/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
Westmere EX | 0x206f2 | 0x05 | 0x0000003b | 5/16/2018 | Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series
Ivy Bridge DT | 0x306a9 | 0x12 | 0x00000021 | 2/13/2019 | Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C
Haswell DT | 0x306c3 | 0x32 | 0x00000028 | 11/12/2019 | Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series
Ivy Bridge EP | 0x306e4 | 0xed | 0x0000042e | 3/14/2019 | Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series
Ivy Bridge EX | 0x306e7 | 0xed | 0x00000715 | 3/14/2019 | Intel Xeon E7-8800/4800/2800-v2 Series
Haswell EP | 0x306f2 | 0x6f | 0x00000046 | 1/27/2021 | Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series
Haswell EX | 0x306f4 | 0x80 | 0x00000019 | 2/5/2021 | Intel Xeon E7-8800/4800-v3 Series
Broadwell H | 0x40671 | 0x22 | 0x00000022 | 11/12/2019 | Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series
Avoton | 0x406d8 | 0x01 | 0x0000012d | 9/16/2019 | Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series
Broadwell EP/EX | 0x406f1 | 0xef | 0x0b00003e | 2/6/2021 | Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series
Skylake SP | 0x50654 | 0xb7 | 0x02006b06 | 3/8/2021 | Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series
Cascade Lake B-0 | 0x50656 | 0xbf | 0x04003103 | 4/20/2021 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
Cascade Lake | 0x50657 | 0xbf | 0x05003103 | 4/8/2021 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
Cooper Lake | 0x5065b | 0xbf | 0x07002302 | 4/23/2021 | Intel Xeon Platinum 8300 Series; Intel Xeon Gold 6300/5300
Broadwell DE | 0x50662 | 0x10 | 0x0000001c | 6/17/2019 | Intel Xeon D-1500 Series
Broadwell DE | 0x50663 | 0x10 | 0x0700001b | 2/4/2021 | Intel Xeon D-1500 Series
Broadwell DE | 0x50664 | 0x10 | 0x0f000019 | 2/4/2021 | Intel Xeon D-1500 Series
Broadwell NS | 0x50665 | 0x10 | 0x0e000012 | 2/4/2021 | Intel Xeon D-1600 Series
Skylake H/S | 0x506e3 | 0x36 | 0x000000ea | 1/25/2021 | Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series
Denverton | 0x506f1 | 0x01 | 0x00000034 | 10/23/2020 | Intel Atom C3000 Series
Ice Lake SP | 0x606a6 | 0x87 | 0x0d0002a0 | 4/25/2021 | Intel Xeon Silver 4300 Series; Intel Xeon Gold 6300/5300 Series; Intel Xeon Platinum 8300 Series
Snow Ridge | 0x80665 | 0x01 | 0x0b00000f | 2/17/2021 | Intel Atom P5000 Series
Kaby Lake H/S/X | 0x906e9 | 0x2a | 0x000000ea | 1/5/2021 | Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series
Coffee Lake | 0x906ea | 0x22 | 0x000000ea | 1/5/2021 | Intel Xeon E-2100 Series; Intel Xeon E-2200 Series (4 or 6 core)
Coffee Lake | 0x906eb | 0x02 | 0x000000ea | 1/5/2021 | Intel Xeon E-2100 Series
Coffee Lake | 0x906ec | 0x22 | 0x000000ea | 1/5/2021 | Intel Xeon E-2100 Series
Coffee Lake Refresh | 0x906ed | 0x22 | 0x000000ea | 1/5/2021 | Intel Xeon E-2200 Series (8 core)
Rocket Lake S | 0xa0671 | 0x02 | 0x00000040 | 4/11/2021 | Intel Xeon E-2300 Series
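To confirm the microcode package that is installed after patching, you can list the VIB from the ESXi Shell:
# Show the installed cpu-microcode VIB and its version
esxcli software vib list | grep cpu-microcode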
-
Profile Name | ESXi-6.7.0-20211101001s-no-tools |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | November 23, 2021 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2704746, 2765136, 2794798, 2794804, 2794933, 2794943, 2795965, 2797008, 2830222, 2830671, 2762034, 2762034, 2785530, 2816541, 2776028 |
Related CVE numbers | CVE-2015-5180, CVE-2015-8777, CVE-2015-8982, CVE-2016-3706, CVE-2017-1000366, CVE-2018-1000001, CVE-2018-19591, CVE-2019-19126, CVE-2020-10029, CVE-2021-28831 |
- This patch resolves the following issues:
-
The SQLite database is updated to version 3.34.1.
-
The OpenSSL package is updated to version openssl-1.0.2za.
-
The cURL library is updated to 7.78.0.
-
OpenSSH is updated to version 8.6p1.
-
The libarchive library is updated to version 3.5.1.
-
The Busybox package is updated to address CVE-2021-28831.
-
The glibc library is updated to address the following CVEs: CVE-2015-5180, CVE-2015-8777, CVE-2015-8982, CVE-2016-3706, CVE-2017-1000366, CVE-2018-1000001, CVE-2018-19591, CVE-2019-19126, CVE-2020-10029.
-
The following VMware Tools ISO images are bundled with ESXi 670-202111001:
- windows.iso: VMware Tools 11.3.5 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.
- linux.iso: VMware Tools 10.3.23 ISO image for Linux OS with glibc 2.11 or later.
The following VMware Tools ISO images are available for download:
- VMware Tools 11.0.6:
  - windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).
- VMware Tools 10.0.12:
  - winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003.
  - linuxPreGLibc25.iso: supports Linux guest operating systems earlier than Red Hat Enterprise Linux (RHEL) 5, SUSE Linux Enterprise Server (SLES) 11, Ubuntu 7.04, and other distributions with glibc version earlier than 2.5.
- solaris.iso: VMware Tools image 10.3.10 for Solaris.
- darwin.iso: Supports Mac OS X versions 10.11 and later.
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
-
This patch updates the
cpu-microcode
VIB. -
Code Name | FMS | Plt ID | MCU Rev | MCU Date | Brand Names
Clarkdale | 0x20652 | 0x12 | 0x00000011 | 5/8/2018 | Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series
Arrandale | 0x20655 | 0x92 | 0x00000007 | 4/23/2018 | Intel Core i7-620LE Processor
Sandy Bridge DT | 0x206a7 | 0x12 | 0x0000002f | 2/17/2019 | Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series
Westmere EP | 0x206c2 | 0x03 | 0x0000001f | 5/8/2018 | Intel Xeon 56xx Series; Intel Xeon 36xx Series
Sandy Bridge EP | 0x206d6 | 0x6d | 0x00000621 | 3/4/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
Sandy Bridge EP | 0x206d7 | 0x6d | 0x0000071a | 3/24/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
Westmere EX | 0x206f2 | 0x05 | 0x0000003b | 5/16/2018 | Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series
Ivy Bridge DT | 0x306a9 | 0x12 | 0x00000021 | 2/13/2019 | Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C
Haswell DT | 0x306c3 | 0x32 | 0x00000028 | 11/12/2019 | Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series
Ivy Bridge EP | 0x306e4 | 0xed | 0x0000042e | 3/14/2019 | Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series
Ivy Bridge EX | 0x306e7 | 0xed | 0x00000715 | 3/14/2019 | Intel Xeon E7-8800/4800/2800-v2 Series
Haswell EP | 0x306f2 | 0x6f | 0x00000046 | 1/27/2021 | Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series
Haswell EX | 0x306f4 | 0x80 | 0x00000019 | 2/5/2021 | Intel Xeon E7-8800/4800-v3 Series
Broadwell H | 0x40671 | 0x22 | 0x00000022 | 11/12/2019 | Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series
Avoton | 0x406d8 | 0x01 | 0x0000012d | 9/16/2019 | Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series
Broadwell EP/EX | 0x406f1 | 0xef | 0x0b00003e | 2/6/2021 | Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series
Skylake SP | 0x50654 | 0xb7 | 0x02006b06 | 3/8/2021 | Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series
Cascade Lake B-0 | 0x50656 | 0xbf | 0x04003103 | 4/20/2021 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
Cascade Lake | 0x50657 | 0xbf | 0x05003103 | 4/8/2021 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
Cooper Lake | 0x5065b | 0xbf | 0x07002302 | 4/23/2021 | Intel Xeon Platinum 8300 Series; Intel Xeon Gold 6300/5300
Broadwell DE | 0x50662 | 0x10 | 0x0000001c | 6/17/2019 | Intel Xeon D-1500 Series
Broadwell DE | 0x50663 | 0x10 | 0x0700001b | 2/4/2021 | Intel Xeon D-1500 Series
Broadwell DE | 0x50664 | 0x10 | 0x0f000019 | 2/4/2021 | Intel Xeon D-1500 Series
Broadwell NS | 0x50665 | 0x10 | 0x0e000012 | 2/4/2021 | Intel Xeon D-1600 Series
Skylake H/S | 0x506e3 | 0x36 | 0x000000ea | 1/25/2021 | Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series
Denverton | 0x506f1 | 0x01 | 0x00000034 | 10/23/2020 | Intel Atom C3000 Series
Ice Lake SP | 0x606a6 | 0x87 | 0x0d0002a0 | 4/25/2021 | Intel Xeon Silver 4300 Series; Intel Xeon Gold 6300/5300 Series; Intel Xeon Platinum 8300 Series
Snow Ridge | 0x80665 | 0x01 | 0x0b00000f | 2/17/2021 | Intel Atom P5000 Series
Kaby Lake H/S/X | 0x906e9 | 0x2a | 0x000000ea | 1/5/2021 | Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series
Coffee Lake | 0x906ea | 0x22 | 0x000000ea | 1/5/2021 | Intel Xeon E-2100 Series; Intel Xeon E-2200 Series (4 or 6 core)
Coffee Lake | 0x906eb | 0x02 | 0x000000ea | 1/5/2021 | Intel Xeon E-2100 Series
Coffee Lake | 0x906ec | 0x22 | 0x000000ea | 1/5/2021 | Intel Xeon E-2100 Series
Coffee Lake Refresh | 0x906ed | 0x22 | 0x000000ea | 1/5/2021 | Intel Xeon E-2200 Series (8 core)
Rocket Lake S | 0xa0671 | 0x02 | 0x00000040 | 4/11/2021 | Intel Xeon E-2300 Series
-
Known Issues
The known issues are grouped as follows.
Miscellaneous Issues
- The sensord daemon fails to report ESXi host hardware status
A logic error in the IPMI SDR validation might cause
sensord
to fail to identify a source for power supply information. As a result, when you run the commandvsish -e get /power/hostStats
, you might not see any output.
Workaround: None
- The ESXi SNMP service intermittently stops working and ESXi hosts become unresponsive
If your environment has IPv6 disabled, the ESXi SNMP agent might stop responding. As a result, ESXi hosts also become unresponsive. In the VMkernel logs, you see multiple messages such as
Admission failure in path: snmpd/snmpd.3955682/uw.3955682
.
Workaround: Manually restart the SNMP service or use a cron job to periodically restart the service.
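A hedged sketch of this workaround, assuming the standard snmpd init script and the root crontab location on ESXi, follows; note that crontab edits do not persist across reboots unless you re-add them from a startup script:
# Restart the SNMP service manually
/etc/init.d/snmpd restart
# Or schedule an hourly restart by appending an entry to the root crontab
echo "0 * * * * /etc/init.d/snmpd restart" >> /var/spool/cron/crontabs/root
# Restart the cron daemon so that it picks up the new entry
kill $(cat /var/run/crond.pid)
/usr/lib/vmware/busybox/bin/busybox crond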
- Stateful ESXi installation might fail on hosts with a hardware iSCSI disk connected to an Emulex OneConnect OCe11100 or 14000 NIC device
An issue with the IMA
elx-esx-libelxima
plug-in might cause the hostd service to fail, and stateful ESXi installation cannot complete on hosts with a hardware iSCSI disk connected to an Emulex OneConnect OCe11100 or 14000 NIC device. After a network boot, the ESXi server does not reboot, but you see no errors in the vSphere Client or vSphere Web Client.
Workaround: Install a vendor-provided async version of the
elx-esx-libelxima
plug-in.
- The Windows guest OS of a virtual machine configured with a virtual NVDIMM of size less than 16MB might fail while initializing a new disk
If you configure a Windows virtual machine with a NVDIMM of size less than 16MB, when you try to initialize a new disk, you might see either the guest OS failing with a blue diagnostic screen or an error message in a pop-up window in the Disk Management screen. The blue diagnostic screen issue occurs in Windows 10, Windows Server 2022, and Windows 11 v21H2 guest operating systems.
Workaround: Increase the size of the virtual NVDIMM to 16MB or larger.