Release Date: MAR 18, 2021
Build Details
Download Filename: | ESXi670-202103001.zip |
Build: | 17700523 |
Download Size: | 476.7 MB |
md5sum: | 26f177706bce4a0432d9e8af016a5bad |
sha1checksum: | 0263a25351d003d34321a062b391d9fe5c312bd2 |
Host Reboot Required: | Yes |
Virtual Machine Migration or Shutdown Required: | Yes |
Bulletins
Bulletin ID | Category | Severity |
ESXi670-202103401-BG | Bugfix | Critical |
ESXi670-202103402-BG | Bugfix | Important |
ESXi670-202103403-BG | Bugfix | Important |
ESXi670-202103101-SG | Security | Important |
ESXi670-202103102-SG | Security | Moderate |
ESXi670-202103103-SG | Security | Moderate |
Rollup Bulletin
This rollup bulletin contains the latest VIBs with all the fixes since the initial release of ESXi 6.7.
Bulletin ID | Category | Severity |
ESXi670-202103001 | Bugfix | Critical |
IMPORTANT: For clusters using VMware vSAN, you must first upgrade the vCenter Server system. Upgrading only the ESXi hosts is not supported.
Before an upgrade, always verify in the VMware Product Interoperability Matrix the compatible upgrade paths from earlier versions of ESXi, vCenter Server, and vSAN to the current version.
Image Profiles
VMware patch and update releases contain general and critical image profiles. The general release image profile applies to new bug fixes.
Image Profile Name |
ESXi-6.7.0-20210304001-standard |
ESXi-6.7.0-20210304001-no-tools |
ESXi-6.7.0-20210301001s-standard |
ESXi-6.7.0-20210301001s-no-tools |
For more information about the individual bulletins, see the Download Patches page and the Resolved Issues section.
Patch Download and Installation
The typical way to apply patches to ESXi hosts is by using VMware vSphere Update Manager. For details, see About Installing and Administering VMware vSphere Update Manager.
You can update ESXi hosts by manually downloading the patch ZIP file from the VMware download page and installing VIBs by using the esxcli software vib update
command. Additionally, you can update the system by using the image profile and the esxcli software profile update
command.
For more information, see the vSphere Command-Line Interface Concepts and Examples and the vSphere Upgrade Guide.
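For example, assuming the patch ZIP file has already been uploaded to a datastore visible to the host (the datastore path below is a placeholder), the rollup can be applied from the ESXi Shell with either of the following commands:
esxcli software vib update -d /vmfs/volumes/datastore1/ESXi670-202103001.zip
esxcli software profile update -d /vmfs/volumes/datastore1/ESXi670-202103001.zip -p ESXi-6.7.0-20210304001-standard
Use the absolute path to the depot ZIP file, and reboot the host after the update because this patch requires a host reboot.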
Resolved Issues
The resolved issues are grouped as follows.
- ESXi670-202103401-BG
- ESXi670-202103402-BG
- ESXi670-202103403-BG
- ESXi670-202103101-SG
- ESXi670-202103102-SG
- ESXi670-202103103-SG
- ESXi-6.7.0-20210304001-standard
- ESXi-6.7.0-20210304001-no-tools
- ESXi-6.7.0-20210301001s-standard
- ESXi-6.7.0-20210301001s-no-tools
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2677275, 2664834, 2678794, 2681680, 2649392, 2680976, 2681253, 2701199, 2687411, 2664504, 2684836, 2687729, 2685806, 2683749, 2713397, 2706021, 2710158, 2669702, 2660017, 2685772, 2704741, 2645044, 2683393, 2678705, 2693303, 2695382, 2678639, 2679986, 2713569, 2682261, 2652285, 2697898, 2681921, 2691872, 2719154, 2711959, 2681786, 2659768, 2663643 |
CVE numbers | N/A |
Updates esx-base, esx-update, vsan,
and vsanhealth
VIBs to resolve the following issues:
- PR 2677275: If the Xorg process fails to restart while an ESXi host exits maintenance mode, the hostd service might become unresponsive
If the Xorg process fails to restart while an ESXi host exits maintenance mode, the hostd service might become unresponsive as it cannot complete the exit operation.
This issue is resolved in this release.
- PR 2664834: A storage outage might cause errors in your environment due to a disk layout issue in virtual machines
After a brief storage outage, it is possible that upon recovery of the virtual machines, the disk layout is not refreshed and remains incomplete. As a result, you might see errors in your environment. For example, in the View Composer logs in a VMware Horizon environment, you might see a repeating error such as
InvalidSnapshotDiskConfiguration
. This issue is resolved in this release.
- PR 2678794: An ESX host becomes unresponsive due to failure of the hostd service
A missing
NULL
check in a vim.VirtualDiskManager.revertToChildDisk
operation triggered by VMware vSphere Replication on virtual disks that do not support this operation might cause the hostd service to fail. As a result, the ESXi host loses connectivity to the vCenter Server system. This issue is resolved in this release.
- PR 2681680: If you disable RC4, Active Directory user authentication on ESXi hosts might fail
If you disable RC4 from your Active Directory configuration, user authentication to ESXi hosts might start to fail with
Failed to authenticate user
errors. This issue is resolved in this release.
- PR 2649392: Insufficient heap of a DvFilter agent might cause virtual machine migration by using vSphere vMotion to fail
The DVFilter agent uses a common heap to allocate space for both its internal structures and buffers, as well as for the temporary allocations used for moving the state of client agents during vSphere vMotion operations.
In some deployments and scenarios, the filter states can be very large in size and exhaust the heap during vSphere vMotion operations.
In the vmkernel logs, you see an error such as Failed waiting for data. Error bad0014. Out of memory.
This issue is resolved in this release.
- PR 2680976: The hostd service might fail due to missing data while creating a ContainerView managed object
If not all data required for the creation of a ContainerView managed object is available during the VM Object Management Infrastructure session, the hostd service might fail.
This issue is resolved in this release. If you already face the issue, remove the third-party solution that you use to create ContainerView objects.
- PR 2681253: An ESXi host might fail with a purple diagnostic screen due to a race condition in container ports
Due to a rare race condition, when a container port tries to re-acquire a lock it already holds, an ESXi host might fail with a purple diagnostic screen while virtual machines with container ports power off. You see a backtrace such as:
gdb) bt #0
LockCheckSelfDeadlockInt (lck=0x4301ad7d6094) at bora/vmkernel/core/lock.c:1820
#1 0x000041801a911dff in Lock_CheckSelfDeadlock (lck=0x4301ad7d6094) at bora/vmkernel/private/lock.h:598
#2 MCSLockRWContended (rwl=rwl@entry=0x4301ad7d6080, writer=writer@entry=1 '\001', downgrade=downgrade@entry=0 '\000', flags=flags@entry=MCS_ALWAYS_LOCK) at bora/vmkernel/main/mcslock.c:2078
#3 0x000041801a9124fd in MCS_DoAcqWriteLockWithRA (rwl=rwl@entry=0x4301ad7d6080, try=try@entry=0 '\000', flags=flags@entry=MCS_ALWAYS_LOCK, ra=) at bora/vmkernel/main/mcslock.c:2541
#4 0x000041801a90ee41 in MCS_AcqWriteLockWithFlags (ra=, flags=MCS_ALWAYS_LOCK, rwl=0x4301ad7d6080) at bora/vmkernel/private/mcslock.h:1246
#5 RefCountBlockSpin (ra=0x41801aa753ca, refCounter=0x4301ad7d6070) at bora/vmkernel/main/refCount.c:565
#6 RefCountBlock (refCounter=0x4301ad7d6070, ra=ra@entry=0x41801aa753ca) at bora/vmkernel/main/refCount.c:646
#7 0x000041801aa3c0a8 in RefCount_BlockWithRA (ra=0x41801aa753ca, refCounter=) at bora/vmkernel/private/refCount.h:793
#8 Portset_LockExclWithRA (ra=0x41801aa753ca, ps=0x4305d081c010) at bora/vmkernel/net/portset.h:763
#9 Portset_GetPortExcl (portID=1329) at bora/vmkernel/net/portset.c:3254
#10 0x000041801aa753ca in Net_WorldCleanup (initFnArgs=) at bora/vmkernel/net/vmkernel_exports.c:1605
#11 0x000041801a8ee8bb in InitTable_Cleanup (steps=steps@entry=0x41801acaa640, numSteps=numSteps@entry=35, clientData=clientData@entry=0x451a86f9bf28) at bora/vmkernel/main/initTable.c:381
#12 0x000041801a940912 in WorldCleanup (world=) at bora/vmkernel/main/world.c:1522 #13 World_TryReap (worldID= ) at bora/vmkernel/main/world.c:6113
#14 0x000041801a90e133 in ReaperWorkerWorld (data=) at bora/vmkernel/main/reaper.c:425
#15 0x000041801ab107db in CpuSched_StartWorld (destWorld=, previous= )
at bora/vmkernel/sched/cpusched.c:11957 #16 0x0000000000000000 in ?? ()
This issue is resolved in this release.
- PR 2701199: Virtual machines might power off during an NFS server failover
During an NFS server failover, the client reclaims all open files. In rare cases, the reclaim operation fails and virtual machines power off, because the NFS server rejects failed requests.
This issue is resolved in this release. The fix makes sure that reclaim requests succeed.
- PR 2687411: You might not see NFS datastores using a fault tolerance solution by Nutanix mounted in a vCenter Server system after an ESXi host reboots
You might not see NFS datastores using a fault tolerance solution by Nutanix mounted in a vCenter Server system after an ESXi host reboots. However, you can see the volumes in the ESXi host.
This issue is resolved in this release.
- PR 2664504: vSAN host fails while resizing or disabling large cache
vSAN hosts running in a non-preemptible context might tax the CPU when freeing up 256 GB or more of memory in the cache buffer. The host might fail with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2684836: If you use large block sizes, I/O bandwidth might drop
In some configurations, if you use block sizes higher than the supported max transfer length of the storage device, you might see a drop in the I/O bandwidth. The issue occurs due to buffer allocations in the I/O split layer in the storage stack that cause a lock contention.
This issue is resolved in this release. The fix optimizes the I/O split layer to avoid new buffer allocations. However, the optimization depends on the buffers that the guest OS creates while issuing I/O and might not work in all cases.
- PR 2687729: ESXi hosts might fail with a purple diagnostic screen due to stuck I/O traffic
If a high priority task holds a physical CPU for a long time, I/O traffic might stop and eventually cause the ESXi host to fail with a purple diagnostic screen. In the backtrace, you see a message such as
WARNING: PLOG: PLOGStuckIOCB:7729: Stuck IOs detected on vSAN device:xxx
. This issue is resolved in this release.
- PR 2685806: Booting ESXi on hosts with a large number of VMFS datastores might take long
If you have set up a virtual flash resource by using the Virtual Flash File System (VFFS), booting an ESXi host with a large number of VMFS datastores might take long. The delay is due to the process of comparing the VFFS and VMFS file types.
This issue is resolved in this release. The fix removes the file types comparison.
- PR 2683749: Virtual machines with a virtual network device might randomly become unresponsive without a backtrace
In rare cases, virtual machines with a virtual network device might randomly become unresponsive and you need to shut them down. Attempts to generate a coredump for diagnostic reasons fail.
This issue is resolved in this release.
- PR 2713397: After the hostd service restarts, the bootTime property of an ESXi host might change
Boot time is recalculated on every start of the hostd service, because the VMkernel only provides the uptime of the ESXi host. The timers do not move synchronously, so on a hostd restart the calculated boot time might shift backward or forward.
This issue is resolved in this release.
- PR 2706021: vSAN reports component creation failure when available capacity exceeds required capacity
When vSAN reports that the required capacity is less than the available capacity, the corresponding log message might be misleading. The values reported for Required space and Available space might be incorrect.
This issue is resolved in this release.
- PR 2710158: You must manually add the claim rules to an ESXi host for FUJITSU ETERNUS storage
You must manually add the claim rules to an ESXi host for FUJITSU ETERNUS AB/HB Series storage arrays.
This issue is resolved in this release. The fix sets the Storage Array Type Plugin (SATP) to VMW_SATP_ALUA, the Path Selection Policy (PSP) to VMW_PSP_RR, and the Claim Options to tpgs_on as defaults for FUJITSU ETERNUS AB/HB Series storage arrays.
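On hosts that do not yet have the fix, a comparable rule can be added manually with esxcli; the vendor and model strings below are placeholders to adjust to the values the array actually reports, so treat this as a sketch rather than the exact rule the patch installs:
esxcli storage nmp satp rule add --satp=VMW_SATP_ALUA --psp=VMW_PSP_RR --claim-option=tpgs_on --vendor=FUJITSU --model="ETERNUS_MODEL" --description="FUJITSU ETERNUS AB/HB Series"
esxcli storage nmp satp rule list | grep FUJITSU
New paths are claimed by the rule after it is added; paths that are already claimed typically pick it up only after a reclaim or a host reboot.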
- PR 2669702: A vSAN host might fail with a purple diagnostic screen due to slab metadata corruption
A vSAN host might fail with a purple diagnostic screen due to a Use-After-Free (UAF) error in the LSOM/VSAN slab. You can observe the issue in the following stack:
cpu11:2100603)0x451c5f09bd68:[0x418027d285d2]vmk_SlabAlloc@vmkernel#nover+0x2e stack: 0x418028f981ba, 0x0, 0x431d231dbae0, 0x451c5f09bde8, 0x459c836cb960
cpu11:2100603)0x451c5f09bd70:[0x418028f5e28c][email protected]#0.0.0.1+0xd stack: 0x0, 0x431d231dbae0, 0x451c5f09bde8, 0x459c836cb960, 0x431d231f7c98
cpu11:2100603)0x451c5f09bd80:[0x418028f981b9]LSOM_Alloc@LSOMCommon#1+0xaa stack: 0x451c5f09bde8, 0x459c836cb960, 0x431d231f7c98, 0x418028f57eb3, 0x431d231dbae0
cpu11:2100603)0x451c5f09bdb0:[0x418028f57eb2][email protected]#0.0.0.1+0x5f stack: 0x1, 0x459c836cb960, 0x418029164388, 0x41802903c607, 0xffffffffffffffff
cpu11:2100603)0x451c5f09bde0:[0x41802903c606][email protected]#0.0.0.1+0x87 stack: 0x459c836cb740, 0x0, 0x43218d3b7c00, 0x43036e815070, 0x451c5f0a3000
cpu11:2100603)0x451c5f09be20:[0x418029165075][email protected]#0.0.0.1+0xfa stack: 0x0, 0x418027d15483, 0x451bc0580000, 0x418027d1c57a, 0x26f5abad8d0ac0
cpu11:2100603)0x451c5f09bf30:[0x418027ceaf52]HelperQueueFunc@vmkernel#nover+0x30f stack: 0x431d231ba618, 0x431d231ba608, 0x431d231ba640, 0x451c5f0a3000, 0x431d231ba618
cpu11:2100603)0x451c5f09bfe0:[0x418027f0eaa2]CpuSched_StartWorld@vmkernel#nover+0x77 stack: 0x0, 0x0, 0x0, 0x0, 0x0
This issue is resolved in this release.
- PR 2660017: In the vSphere Client, you see hardware health warnings with status Unknown
In the vSphere Client, you see hardware health warnings with status Unknown for some sensors on ESXi hosts.
This issue is resolved in this release. Sensors that are not supported for decoding are ignored and are not included in system health reports.
- PR 2685772: The wsman service might become unresponsive and stop accepting incoming requests until restarted
Due to internal memory leaks, the memory usage of the wsman service on some servers might increase beyond the max limit of 23 MB. As a result, the service stops accepting incoming requests until restarted. The issue affects Dell and Lenovo servers.
This issue is resolved in this release. However, the fix applies specifically to Dell servers.
- PR 2704741: The hostd service intermittently becomes unresponsive and virtual machine snapshots time out
In some environments, the vSphere Storage Appliance (VSA) plug-in, also known as Security Virtual Appliance (SVA), might cause the hostd service to become intermittently unresponsive during the creation of a TCP connection. In addition, virtual machine snapshots might time out. VSA has not been supported since 2014.
This issue is resolved in this release. For earlier versions, remove the plug-in from
/usr/lib/vmware/nas_plugins/lib64
and /usr/lib/vmware/nas_plugins/lib32
after every ESXi reboot.
- PR 2645044: An unlocked spinlock might cause an ESXi host to fail with a purple diagnostic screen when restarting a virtual machine on an MSCS cluster
In rare cases, an unlocked spinlock might cause an ESXi host to fail with a purple diagnostic screen when restarting a virtual machine that is part of a Microsoft Cluster Service (MSCS) cluster. If you have VMware vSphere High Availability enabled in your environment, the issue might affect many ESXi hosts, because vSphere HA tries to power on the virtual machine in other hosts.
This issue is resolved in this release.
- PR 2683393: A vSAN host might fail with a purple diagnostic screen due to slab allocation issue
When a slab allocation failure is not handled properly in the
RCRead
path, a vSAN host might fail with a purple diagnostic screen. You see a warning such as: bora/modules/vmkernel/lsom/rc_io.c:2115 -- NOT REACHED.
This issue is resolved in this release.
- PR 2678705: The VMXNET3 driver does not copy trailing bytes of TCP packets, such as Ethernet trailers, to guest virtual machines
The VMXNET3 driver copies and calculates the checksums from the payload indicated in the IP total length field. However, VMXNET3 does not copy any additional bytes after the payload that are not included in the length field and such bytes do not pass to guest virtual machines. For example, VMXNET3 does not copy and pass Ethernet trailers.
This issue is resolved in this release.
- PR 2693303: Transient errors due to faulty vSAN device might cause latency and congestion
When a storage device in a vSAN disk group has a transient error due to read failures, Full Rebuild Avoidance (FRA) might not be able to repair the device. In such cases, FRA does not perform relog, which might lead to a log build up at PLOG, causing congestion and latency issues.
This issue is resolved in this release.
- PR 2695382: Booting virtual machines with BIOS firmware from a network interface takes long
When you boot virtual machines with BIOS firmware from a network interface, interactions with the different servers in the environment might take long due to a networking issue. This issue does not impact network performance during guest OS runtime.
This issue is resolved in this release.
- PR 2678639: If a physical port in a link aggregation group goes down or restarts, link aggregation might be temporarily unstable
If a physical port in a link aggregation group goes down or restarts, but at the same time an ESXi host sends an out-of-sync LACP packet to the physical switch, the vmnic might also go down. This might cause link aggregation to be temporarily unstable.
This issue is resolved in this release.
- PR 2679986: During vSphere vMotion operations to NFS datastores, virtual machines might become unresponsive for around 40 seconds
During vSphere vMotion operations to NFS datastores, the NFS disk-based lock files might not be deleted. As a result, virtual machines on the destination host become unresponsive for around 40 seconds.
This issue is resolved in this release. The fix ensures lock files are removed from disks during vSphere vMotion operations.
- PR 2713569: You do not see vCenter Server alerts in the vSphere Client and the vSphere Web Client
If you use the
/bin/services.sh restart
command to restart vCenter Server management services, the vobd daemon, which is responsible for sending ESXi host events to vCenter Server, might not restart. As a result, you do not see alerts in the vSphere Client and the vSphere Web Client. This issue is resolved in this release. The fix makes sure that the vobd daemon does not shut down when using the
/bin/services.sh restart
command.
- PR 2682261: If vSphere HA restarts or fails over, a virtual machine vNIC might disconnect from the network
In rare occasions, when vSphere HA restarts or fails over, a virtual machine on the vSphere HA cluster might lose connectivity with an error such as
VMXNET3 user: failed to connect 'Ethernet0' to DV Port 'xx'
. This issue is resolved in this release.
- PR 2652285: If a middlebox or a TCP proxy removes the timestamp option in the SYN-ACK phase, the TCP connection to an ESXi host fails
TCP uses a three-step synchronize (SYN) and acknowledge (ACK) process to establish connections: SYN, SYN-ACK, and ACK of the SYN-ACK. Options such as TCP timestamps are negotiated as part of this process. For traffic optimization reasons, middleboxes or TCP proxies might remove the timestamp option in the SYN-ACK phase. As a result, such TCP connections to ESXi hosts reset and close. A packet capture shows an RST packet from the server immediately after the TCP handshake.
This issue is fixed in this release.
- PR 2697898: An ESXi host might fail with a purple diagnostic screen due to invalid value in the heap memory
A rare condition when an uninitialized field in the heap memory returns an invalid value for a VMFS resource cluster number might cause subsequent searches for this cluster on the VMFS volume to fail. As a result, the ESXi host fails with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2681921: An ESXi host might fail with a purple diagnostic screen while disconnecting from a vSphere Distributed Switch
A rare race condition of LAG port read locks might cause an ESXi host to fail with a purple diagnostic screen while disconnecting from a VDS.
This issue is resolved in this release.
- PR 2691872: IORETRY queue exhaustion might cause I/O failures
When the vSAN read cache issues read I/Os to fill 1 MB read cache lines of a hybrid vSAN array from a highly fragmented write buffer, the IORETRY layer might split each of the 64 KB read I/Os issued by the read cache into sixteen 4 KB I/Os. This might cause IORETRY queue exhaustion.
This issue is resolved in this release. You can change the default value of
LSOM maxQueudIos
from 25,000 to 100,000 by using the following command for each non-witness vSAN host:
esxcfg-advcfg --set 100000 /LSOM/maxQueudIos
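To confirm the value currently in effect on a host, the matching get form of the same command should work (a sketch, using the get option of esxcfg-advcfg):
esxcfg-advcfg --get /LSOM/maxQueudIos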
The new value takes effect when the in-memory instance of each vSAN disk group is re-established, by re-mounting the disk groups or by rebooting the host.
- PR 2719154: If a vSAN datastore contains many idle VMs, you might see performance degradation on active VMs
If a vSAN datastore contains many powered off or suspended VMs that are not receiving any I/Os, active VMs on the vSAN datastore might experience performance degradation due to increasing LSOM memory congestion in vSAN.
This issue is resolved in this release.
- PR 2711959: A vSAN host might fail with a purple diagnostic screen due to an object in a bad state
In rare cases, if you commit an object on a disk group, but the object appears as not committed after the vSAN host with that disk group reboots, vSAN might incorrectly consider the object to be in a bad state. As a result, the vSAN host might fail with a purple diagnostic screen with a message such as:
Object <uuid> corrupted. lastCWLSN = <number>, pendingLSN = <number>
PanicvPanicInt@vmkernel#nover+0x439
Panic_vPanic@vmkernel#nover+0x23
vmk_PanicWithModuleID@vmkernel#nover+0x41
[email protected]#0.0.0.1+0x6b3
[email protected]#0.0.0.1+0xa5
[email protected]#0.0.0.1+0xce
[email protected]#0.0.0.1+0x5e1
vmkWorldFunc@vmkernel#nover+0x4f
CpuSched_StartWorld@vmkernel#nover+0x77
This issue is resolved in this release.
- PR 2681786: After a power cycle, a powered on virtual machine might show as powered off
If you set a power cycle flag by using the
vmx.reboot.powerCycle=TRUE
advanced setting during a hardware upgrade of virtual machines, the hostd service might lose track of the VMs' power state. After the VMs reboot, hostd might report powered-on VMs as powered off. This issue is resolved in this release. The fix makes sure that hostd keeps track of the power state of virtual machines across reboots during upgrades with the
vmx.reboot.powerCycle=TRUE
setting. If you already face the issue, reload all powered off VMs.
To reload all powered off VMs, use this command:
(Get-View -ViewType VirtualMachine -Property Runtime.PowerState -Filter @{ "Runtime.PowerState" = "poweredOff" }).reload()
If you want to filter the powered off VMs by cluster, use this command:
$ClusterName = "MyClusterName" ; (Get-View -ViewType VirtualMachine -Property Runtime.PowerState -Filter @{ "Runtime.PowerState" = "poweredOff" } -SearchRoot $(Get-View -ViewType "ClusterComputeResource" -Property Name -Filter @{"Name"="$ClusterName"}).MoRef).reload()
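If reloading the whole collection at once is inconvenient, an equivalent per-VM loop can be used; this sketch assumes an existing PowerCLI session opened with Connect-VIServer:
foreach ($vm in Get-View -ViewType VirtualMachine -Property Runtime.PowerState -Filter @{ "Runtime.PowerState" = "poweredOff" }) { $vm.reload() }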
- PR 2659768: An ESXi host might become unresponsive due to a lock during virtual machine disk consolidation operations
During virtual machine disk consolidation operations, such as snapshot disk consolidation, the hostd service might hold a lock for a long time. As a result, hostd delays responses to other tasks and state requests. Consequently, if the disk consolidation task takes too long, the ESXi host becomes unresponsive.
This issue is resolved in this release.
- PR 2663643: Virtual machines lose network connectivity after migration operations by using vSphere vMotion
If the UUID of a virtual machine changes, such as after a migration by using vSphere vMotion, and the virtual machine has a vNIC on an NSX-T managed switch, the VM loses network connectivity. The vNIC cannot reconnect.
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2666166 |
CVE numbers | N/A |
Updates the vmw-ahci
VIB to resolve the following issue:
- PR 2666166: ESXi hosts intermittently become unresponsive
One or more ESXi hosts might intermittently become unresponsive due to a race condition in the VMware AHCI SATA Storage Controller Driver. In the
vmkernel.log
file, you see a lot of messages such as:
…ahciAbortIO:(curr) HWQD: 0 BusyL: 0
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2688190 |
CVE numbers | N/A |
Updates the vmkusb
VIB to resolve the following issue:
- PR 2688190: If the control I/O of a USB device exceeds 1K, I/O traffic to virtual machines on an ESXi host might fail
The ESXi USB driver,
vmkusb
, might fail to process control I/O that exceeds 1K. As a result, some USB devices cause failures of I/O traffic to virtual machines on an ESXi host. This issue is resolved in this release.
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2704743 |
CVE numbers | N/A |
Updates esx-base
, esx-update
, vsan
, and vsanhealth
VIBs to resolve the following issues:
- Update to the OpenSSH
The OpenSSH version is updated to 8.4p1.
- Update to the Python library
The Python third-party library is updated to version 3.5.10.
- Update to OpenSSL library
The ESXi userworld OpenSSL library is updated to version openssl-1.0.2x.
Patch Category | Security |
Patch Severity | Moderate |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2713501 |
CVE numbers | N/A |
Updates the tools-light
VIB.
The following VMware Tools ISO images are bundled with ESXi670-202103001:
- windows.iso: VMware Tools 11.2.5 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.
- linux.iso: VMware Tools 10.3.23 ISO image for Linux OS with glibc 2.11 or later.
The following VMware Tools ISO images are available for download:
- VMware Tools 10.0.12:
winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003.
linuxPreGLibc25.iso: for Linux OS with a glibc version less than 2.5.
- VMware Tools 11.0.6:
windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).
- solaris.iso: VMware Tools image 10.3.10 for Solaris.
- darwin.iso: Supports Mac OS X versions 10.11 and later.
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
Patch Category | Security |
Patch Severity | Moderate |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2673509 |
Related CVE numbers | N/A |
This patch updates the cpu-microcode
VIB.
- The cpu-microcode VIB includes the following Intel microcode:
Code Name | FMS | Plt ID | MCU Rev | MCU Date | Brand Names |
Nehalem EP | 0x106a5 | 0x03 | 0x0000001d | 5/11/2018 | Intel Xeon 35xx Series; Intel Xeon 55xx Series |
Lynnfield | 0x106e5 | 0x13 | 0x0000000a | 5/8/2018 | Intel Xeon 34xx Lynnfield Series |
Clarkdale | 0x20652 | 0x12 | 0x00000011 | 5/8/2018 | Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series |
Arrandale | 0x20655 | 0x92 | 0x00000007 | 4/23/2018 | Intel Core i7-620LE Processor |
Sandy Bridge DT | 0x206a7 | 0x12 | 0x0000002f | 2/17/2019 | Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series |
Westmere EP | 0x206c2 | 0x03 | 0x0000001f | 5/8/2018 | Intel Xeon 56xx Series; Intel Xeon 36xx Series |
Sandy Bridge EP | 0x206d6 | 0x6d | 0x00000621 | 3/4/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series |
Sandy Bridge EP | 0x206d7 | 0x6d | 0x0000071a | 3/24/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series |
Nehalem EX | 0x206e6 | 0x04 | 0x0000000d | 5/15/2018 | Intel Xeon 65xx Series; Intel Xeon 75xx Series |
Westmere EX | 0x206f2 | 0x05 | 0x0000003b | 5/16/2018 | Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series |
Ivy Bridge DT | 0x306a9 | 0x12 | 0x00000021 | 2/13/2019 | Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C |
Haswell DT | 0x306c3 | 0x32 | 0x00000028 | 11/12/2019 | Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series |
Ivy Bridge EP | 0x306e4 | 0xed | 0x0000042e | 3/14/2019 | Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series |
Ivy Bridge EX | 0x306e7 | 0xed | 0x00000715 | 3/14/2019 | Intel Xeon E7-8800/4800/2800-v2 Series |
Haswell EP | 0x306f2 | 0x6f | 0x00000044 | 5/27/2020 | Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series |
Haswell EX | 0x306f4 | 0x80 | 0x00000016 | 6/17/2019 | Intel Xeon E7-8800/4800-v3 Series |
Broadwell H | 0x40671 | 0x22 | 0x00000022 | 11/12/2019 | Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series |
Avoton | 0x406d8 | 0x01 | 0x0000012d | 9/16/2019 | Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series |
Broadwell EP/EX | 0x406f1 | 0xef | 0x0b000038 | 6/18/2019 | Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series |
Skylake SP | 0x50654 | 0xb7 | 0x02006a08 | 6/16/2020 | Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series |
Cascade Lake B-0 | 0x50656 | 0xbf | 0x04003003 | 6/18/2020 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200 |
Cascade Lake | 0x50657 | 0xbf | 0x05003003 | 6/18/2020 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200 |
Cooper Lake | 0x5065b | 0xbf | 0x0700001f | 9/17/2020 | Intel Xeon Platinum 8300 Series; Intel Xeon Gold 6300/5300 |
Broadwell DE | 0x50662 | 0x10 | 0x0000001c | 6/17/2019 | Intel Xeon D-1500 Series |
Broadwell DE | 0x50663 | 0x10 | 0x07000019 | 6/17/2019 | Intel Xeon D-1500 Series |
Broadwell DE | 0x50664 | 0x10 | 0x0f000017 | 6/17/2019 | Intel Xeon D-1500 Series |
Broadwell NS | 0x50665 | 0x10 | 0x0e00000f | 6/17/2019 | Intel Xeon D-1600 Series |
Skylake H/S | 0x506e3 | 0x36 | 0x000000e2 | 7/14/2020 | Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series |
Denverton | 0x506f1 | 0x01 | 0x00000032 | 3/7/2020 | Intel Atom C3000 Series |
Snow Ridge | 0x80665 | 0x01 | 0x0b000007 | 2/25/2020 | Intel Atom P5000 Series |
Kaby Lake H/S/X | 0x906e9 | 0x2a | 0x000000de | 5/26/2020 | Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series |
Coffee Lake | 0x906ea | 0x22 | 0x000000de | 5/25/2020 | Intel Xeon E-2100 Series; Intel Xeon E-2200 Series (4 or 6 core) |
Coffee Lake | 0x906eb | 0x02 | 0x000000de | 5/25/2020 | Intel Xeon E-2100 Series |
Coffee Lake | 0x906ec | 0x22 | 0x000000de | 6/3/2020 | Intel Xeon E-2100 Series |
Coffee Lake Refresh | 0x906ed | 0x22 | 0x000000de | 5/24/2020 | Intel Xeon E-2200 Series (8 core) |
Profile Name | ESXi-6.7.0-20210304001-standard |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | March 18, 2021 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2677275, 2664834, 2678794, 2681680, 2649392, 2680976, 2681253, 2701199, 2687411, 2664504, 2684836, 2687729, 2685806, 2683749, 2713397, 2706021, 2710158, 2669702, 2660017, 2685772, 2704741, 2645044, 2683393, 2678705, 2693303, 2695382, 2678639, 2679986, 2713569, 2682261, 2652285, 2697898, 2681921, 2691872, 2719154, 2711959, 2681786, 2659768, 2663643, 2666166, 2688190 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
If the Xorg process fails to restart while an ESXi host exits maintenance mode, the hostd service might become unresponsive as it cannot complete the exit operation.
-
After a brief storage outage, it is possible that upon recovery of the virtual machines, the disk layout is not refreshed and remains incomplete. As a result, you might see errors in your environment. For example, in the View Composer logs in a VMware Horizon environment, you might see a repeating error such as
InvalidSnapshotDiskConfiguration
. -
A missing
NULL
check in avim.VirtualDiskManager.revertToChildDisk
operation triggered by VMware vSphere Replication on virtual disks that do not support this operation might cause the hostd service to fail. As a result, the ESXi host loses connectivity to the vCenter Server system. -
If you disable RC4 from your Active Directory configuration, user authentication to ESXi hosts might start to fail with
Failed to authenticate user
errors. -
The DVFilter agent uses a common heap to allocate space for both its internal structures and buffers, as well as for the temporary allocations used for moving the state of client agents during vSphere vMotion operations.
In some deployments and scenarios, the filter states can be very large in size and exhaust the heap during vSphere vMotion operations.
In the vmkernel logs, you see an error such as Failed waiting for data. Error bad0014. Out of memory
. -
If not all data required for the creation of a ContainerView managed object is available during the VM Object Management Infrastructure session, the hostd service might fail.
-
Due to a rare race condition, when a container port tries to re-acquire a lock it already holds, an ESXi host might fail with a purple diagnostic screen while virtual machines with container ports power off. You see a backtrace such as:
gdb) bt #0
LockCheckSelfDeadlockInt (lck=0x4301ad7d6094) at bora/vmkernel/core/lock.c:1820
#1 0x000041801a911dff in Lock_CheckSelfDeadlock (lck=0x4301ad7d6094) at bora/vmkernel/private/lock.h:598
#2 MCSLockRWContended (rwl=rwl@entry=0x4301ad7d6080, writer=writer@entry=1 '\001', downgrade=downgrade@entry=0 '\000', flags=flags@entry=MCS_ALWAYS_LOCK) at bora/vmkernel/main/mcslock.c:2078
#3 0x000041801a9124fd in MCS_DoAcqWriteLockWithRA (rwl=rwl@entry=0x4301ad7d6080, try=try@entry=0 '\000', flags=flags@entry=MCS_ALWAYS_LOCK, ra=) at bora/vmkernel/main/mcslock.c:2541
#4 0x000041801a90ee41 in MCS_AcqWriteLockWithFlags (ra=, flags=MCS_ALWAYS_LOCK, rwl=0x4301ad7d6080) at bora/vmkernel/private/mcslock.h:1246
#5 RefCountBlockSpin (ra=0x41801aa753ca, refCounter=0x4301ad7d6070) at bora/vmkernel/main/refCount.c:565
#6 RefCountBlock (refCounter=0x4301ad7d6070, ra=ra@entry=0x41801aa753ca) at bora/vmkernel/main/refCount.c:646
#7 0x000041801aa3c0a8 in RefCount_BlockWithRA (ra=0x41801aa753ca, refCounter=) at bora/vmkernel/private/refCount.h:793
#8 Portset_LockExclWithRA (ra=0x41801aa753ca, ps=0x4305d081c010) at bora/vmkernel/net/portset.h:763
#9 Portset_GetPortExcl (portID=1329) at bora/vmkernel/net/portset.c:3254
#10 0x000041801aa753ca in Net_WorldCleanup (initFnArgs=) at bora/vmkernel/net/vmkernel_exports.c:1605
#11 0x000041801a8ee8bb in InitTable_Cleanup (steps=steps@entry=0x41801acaa640, numSteps=numSteps@entry=35, clientData=clientData@entry=0x451a86f9bf28) at bora/vmkernel/main/initTable.c:381
#12 0x000041801a940912 in WorldCleanup (world=) at bora/vmkernel/main/world.c:1522 #13 World_TryReap (worldID=) at bora/vmkernel/main/world.c:6113
#14 0x000041801a90e133 in ReaperWorkerWorld (data=) at bora/vmkernel/main/reaper.c:425
#15 0x000041801ab107db in CpuSched_StartWorld (destWorld=, previous=)
at bora/vmkernel/sched/cpusched.c:11957 #16 0x0000000000000000 in ?? () -
During an NFS server failover, the client reclaims all open files. In rare cases, the reclaim operation fails and virtual machines power off, because the NFS server rejects failed requests.
-
You might not see NFS datastores using a fault tolerance solution by Nutanix mounted in a vCenter Server system after an ESXi host reboots. However, you can see the volumes in the ESXi host.
-
vSAN hosts running in a non-preemptible context might tax the CPU when freeing up 256 GB or more of memory in the cache buffer. The host might fail with a purple diagnostic screen.
-
In some configurations, if you use block sizes higher than the supported max transfer length of the storage device, you might see a drop in the I/O bandwidth. The issue occurs due to buffer allocations in the I/O split layer in the storage stack that cause a lock contention.
-
If a high priority task holds a physical CPU for a long time, I/O traffic might stop and eventually cause the ESXi host to fail with a purple diagnostic screen. In the backtrace, you see a message such as
WARNING: PLOG: PLOGStuckIOCB:7729: Stuck IOs detected on vSAN device:xxx
. -
If you have set up a virtual flash resource by using the Virtual Flash File System (VFFS), booting an ESXi host with a large number of VMFS datastores might take long. The delay is due to the process of comparing the VFFS and VMFS file types.
-
In rare cases, virtual machines with a virtual network device might randomly become unresponsive and you need to shut them down. Attempts to generate a coredump for diagnostic reasons fail.
-
Boot time is recalculated on every start of the hostd service, because the VMkernel only provides the uptime of the ESXi host. The timers do not move synchronously, so on a hostd restart the calculated boot time might shift backward or forward.
-
When vSAN reports that the required capacity is less than the available capacity, the corresponding log message might be misleading. The values reported for Required space and Available space might be incorrect.
-
You must manually add the claim rules to an ESXi host for FUJITSU ETERNUS AB/HB Series storage arrays.
-
A vSAN host might fail with a purple diagnostic screen due to a Use-After-Free (UAF) error in the LSOM/VSAN slab. You can observe the issue in the following stack:
cpu11:2100603)0x451c5f09bd68:[0x418027d285d2]vmk_SlabAlloc@vmkernel#nover+0x2e stack: 0x418028f981ba, 0x0, 0x431d231dbae0, 0x451c5f09bde8, 0x459c836cb960
cpu11:2100603)0x451c5f09bd70:[0x418028f5e28c][email protected]#0.0.0.1+0xd stack: 0x0, 0x431d231dbae0, 0x451c5f09bde8, 0x459c836cb960, 0x431d231f7c98
cpu11:2100603)0x451c5f09bd80:[0x418028f981b9]LSOM_Alloc@LSOMCommon#1+0xaa stack: 0x451c5f09bde8, 0x459c836cb960, 0x431d231f7c98, 0x418028f57eb3, 0x431d231dbae0
cpu11:2100603)0x451c5f09bdb0:[0x418028f57eb2][email protected]#0.0.0.1+0x5f stack: 0x1, 0x459c836cb960, 0x418029164388, 0x41802903c607, 0xffffffffffffffff
cpu11:2100603)0x451c5f09bde0:[0x41802903c606][email protected]#0.0.0.1+0x87 stack: 0x459c836cb740, 0x0, 0x43218d3b7c00, 0x43036e815070, 0x451c5f0a3000
cpu11:2100603)0x451c5f09be20:[0x418029165075][email protected]#0.0.0.1+0xfa stack: 0x0, 0x418027d15483, 0x451bc0580000, 0x418027d1c57a, 0x26f5abad8d0ac0
cpu11:2100603)0x451c5f09bf30:[0x418027ceaf52]HelperQueueFunc@vmkernel#nover+0x30f stack: 0x431d231ba618, 0x431d231ba608, 0x431d231ba640, 0x451c5f0a3000, 0x431d231ba618
cpu11:2100603)0x451c5f09bfe0:[0x418027f0eaa2]CpuSched_StartWorld@vmkernel#nover+0x77 stack: 0x0, 0x0, 0x0, 0x0, 0x0 -
In the vSphere Client, you see hardware health warnings with status Unknown for some sensors on ESXi hosts.
-
Due to internal memory leaks, the memory usage of the wsman service on some servers might increase beyond the max limit of 23 MB. As a result, the service stops accepting incoming requests until restarted. The issue affects Dell and Lenovo servers.
-
In some environments, the vSphere Storage Appliance (VSA) plug-in, also known as Security Virtual Appliance (SVA), might cause the hostd service to become intermittently unresponsive during the creation of a TCP connection. In addition, virtual machine snapshots might time out. VSA has not been supported since 2014.
-
In rare cases, an unlocked spinlock might cause an ESXi host to fail with a purple diagnostic screen when restarting a virtual machine that is part of a Microsoft Cluster Service (MSCS) cluster. If you have VMware vSphere High Availability enabled in your environment, the issue might affect many ESXi hosts, because vSphere HA tries to power on the virtual machine in other hosts.
-
When a slab allocation failure is not handled properly in the
RCRead
path, a vSAN host might fail with a purple diagnostic screen. You see a warning such as: bora/modules/vmkernel/lsom/rc_io.c:2115 -- NOT REACHED.
-
The VMXNET3 driver copies and calculates the checksums from the payload indicated in the IP total length field. However, VMXNET3 does not copy any additional bytes after the payload that are not included in the length field and such bytes do not pass to guest virtual machines. For example, VMXNET3 does not copy and pass Ethernet trailers.
-
When a storage device in a vSAN disk group has a transient error due to read failures, Full Rebuild Avoidance (FRA) might not be able to repair the device. In such cases, FRA does not perform relog, which might lead to a log build up at PLOG, causing congestion and latency issues.
-
When you boot virtual machines with BIOS firmware from a network interface, interactions with the different servers in the environment might take long due to a networking issue. This issue does not impact network performance during guest OS runtime.
-
If a physical port in a link aggregation group goes down or restarts, but at the same time an ESXi host sends an out-of-sync LACP packet to the physical switch, the vmnic might also go down. This might cause link aggregation to be temporarily unstable.
-
During vSphere vMotion operations to NFS datastores, the NFS disk-based lock files might not be deleted. As a result, virtual machines on the destination host become unresponsive for around 40 seconds.
-
If you use the
/bin/services.sh restart
command to restart vCenter Server management services, the vobd daemon, which is responsible for sending ESXi host events to vCenter Server, might not restart. As a result, you do not see alerts in the vSphere Client and the vSphere Web Client. -
In rare occasions, when vSphere HA restarts or fails over, a virtual machine on the vSphere HA cluster might lose connectivity with an error such as
VMXNET3 user: failed to connect 'Ethernet0' to DV Port 'xx'
. -
TCP uses a three-step synchronize (SYN) and acknowledge (ACK) process to establish connections: SYN, SYN-ACK, and ACK of the SYN-ACK. Options such as TCP timestamps are negotiated as part of this process. For traffic optimization reasons, middleboxes or TCP proxies might remove the timestamp option in the SYN-ACK phase. As a result, such TCP connections to ESXi hosts reset and close. A packet capture shows an RST packet from the server immediately after the TCP handshake.
-
A rare condition when an uninitialized field in the heap memory returns an invalid value for a VMFS resource cluster number might cause subsequent searches for this cluster on the VMFS volume to fail. As a result, the ESXi host fails with a purple diagnostic screen.
-
A rare race condition of LAG port read locks might cause an ESXi host to fail with a purple diagnostic screen while disconnecting from a VDS.
-
When the vSAN read cache issues read I/Os to fill 1 MB read cache lines of a hybrid vSAN array from a highly fragmented write buffer, the IORETRY layer might split each of the 64 KB read I/Os issued by the read cache into sixteen 4 KB I/Os. This might cause IORETRY queue exhaustion.
-
If a vSAN datastore contains many powered off or suspended VMs that are not receiving any I/Os, active VMs on the vSAN datastore might experience performance degradation due to increasing LSOM memory congestion in vSAN.
-
In rare cases, if you commit an object on a disk group, but the object appears as not committed after the vSAN host with that disk group reboots, vSAN might incorrectly consider the object to be in a bad state. As a result, the vSAN host might fail with a purple diagnostic screen with a message such as:
Object <uuid> corrupted. lastCWLSN = <number>, pendingLSN = <number>
PanicvPanicInt@vmkernel#nover+0x439
Panic_vPanic@vmkernel#nover+0x23
vmk_PanicWithModuleID@vmkernel#nover+0x41
[email protected]#0.0.0.1+0x6b3
[email protected]#0.0.0.1+0xa5
[email protected]#0.0.0.1+0xce
[email protected]#0.0.0.1+0x5e1
vmkWorldFunc@vmkernel#nover+0x4f
CpuSched_StartWorld@vmkernel#nover+0x77 -
If you set a power cycle flag by using the
vmx.reboot.powerCycle=TRUE
advanced setting during a hardware upgrade of virtual machines, the hostd service might lose track of the VMs power state. After the VMs reboot, hostd might report powered on VMs as powered off. -
During virtual machine disk consolidation operations, such as snapshot disk consolidation, the hostd service might hold a lock for a long time. As a result, hostd delays responses to other tasks and state requests. Consequently, if the disk consolidation task takes too long, the ESXi host becomes unresponsive.
-
If the UUID of a virtual machine changes, such as after a migration by using vSphere vMotion, and the virtual machine has a vNIC on an NSX-T managed switch, the VM loses network connectivity. The vNIC cannot reconnect.
-
One or more ESXi hosts might intermittently become unresponsive due to a race condition in the VMware AHCI SATA Storage Controller Driver. In the vmkernel.log
file, you see a lot of messages such as:
…ahciAbortIO:(curr) HWQD: 0 BusyL: 0
-
The ESXi USB driver,
vmkusb
, might fail to process control I/O that exceeds 1K. As a result, some USB devices cause failures of I/O traffic to virtual machines on an ESXi host.
-
Profile Name | ESXi-6.7.0-20210304001-no-tools |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | March 18, 2021 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2677275, 2664834, 2678794, 2681680, 2649392, 2680976, 2681253, 2701199, 2687411, 2664504, 2684836, 2687729, 2685806, 2683749, 2713397, 2706021, 2710158, 2669702, 2660017, 2685772, 2704741, 2645044, 2683393, 2678705, 2693303, 2695382, 2678639, 2679986, 2713569, 2682261, 2652285, 2697898, 2681921, 2691872, 2719154, 2711959, 2681786, 2659768, 2663643, 2666166, 2688190 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
If the Xorg process fails to restart while an ESXi host exits maintenance mode, the hostd service might become unresponsive as it cannot complete the exit operation.
-
After a brief storage outage, it is possible that upon recovery of the virtual machines, the disk layout is not refreshed and remains incomplete. As a result, you might see errors in your environment. For example, in the View Composer logs in a VMware Horizon environment, you might see a repeating error such as
InvalidSnapshotDiskConfiguration
. -
A missing
NULL
check in avim.VirtualDiskManager.revertToChildDisk
operation triggered by VMware vSphere Replication on virtual disks that do not support this operation might cause the hostd service to fail. As a result, the ESXi host loses connectivity to the vCenter Server system. -
If you disable RC4 from your Active Directory configuration, user authentication to ESXi hosts might start to fail with
Failed to authenticate user
errors. -
The DVFilter agent uses a common heap to allocate space for both its internal structures and buffers, as well as for the temporary allocations used for moving the state of client agents during vSphere vMotion operations.
In some deployments and scenarios, the filter states can be very large in size and exhaust the heap during vSphere vMotion operations.
In the vmkernel logs, you see an error such as Failed waiting for data. Error bad0014. Out of memory
. -
If not all data required for the creation of a ContainerView managed object is available during the VM Object Management Infrastructure session, the hostd service might fail.
-
Due to a rare race condition, when a container port tries to re-acquire a lock it already holds, an ESXi host might fail with a purple diagnostic screen while virtual machines with container ports power off. You see a backtrace such as:
gdb) bt #0
LockCheckSelfDeadlockInt (lck=0x4301ad7d6094) at bora/vmkernel/core/lock.c:1820
#1 0x000041801a911dff in Lock_CheckSelfDeadlock (lck=0x4301ad7d6094) at bora/vmkernel/private/lock.h:598
#2 MCSLockRWContended (rwl=rwl@entry=0x4301ad7d6080, writer=writer@entry=1 '\001', downgrade=downgrade@entry=0 '\000', flags=flags@entry=MCS_ALWAYS_LOCK) at bora/vmkernel/main/mcslock.c:2078
#3 0x000041801a9124fd in MCS_DoAcqWriteLockWithRA (rwl=rwl@entry=0x4301ad7d6080, try=try@entry=0 '\000', flags=flags@entry=MCS_ALWAYS_LOCK, ra=) at bora/vmkernel/main/mcslock.c:2541
#4 0x000041801a90ee41 in MCS_AcqWriteLockWithFlags (ra=, flags=MCS_ALWAYS_LOCK, rwl=0x4301ad7d6080) at bora/vmkernel/private/mcslock.h:1246
#5 RefCountBlockSpin (ra=0x41801aa753ca, refCounter=0x4301ad7d6070) at bora/vmkernel/main/refCount.c:565
#6 RefCountBlock (refCounter=0x4301ad7d6070, ra=ra@entry=0x41801aa753ca) at bora/vmkernel/main/refCount.c:646
#7 0x000041801aa3c0a8 in RefCount_BlockWithRA (ra=0x41801aa753ca, refCounter=) at bora/vmkernel/private/refCount.h:793
#8 Portset_LockExclWithRA (ra=0x41801aa753ca, ps=0x4305d081c010) at bora/vmkernel/net/portset.h:763
#9 Portset_GetPortExcl (portID=1329) at bora/vmkernel/net/portset.c:3254
#10 0x000041801aa753ca in Net_WorldCleanup (initFnArgs=) at bora/vmkernel/net/vmkernel_exports.c:1605
#11 0x000041801a8ee8bb in InitTable_Cleanup (steps=steps@entry=0x41801acaa640, numSteps=numSteps@entry=35, clientData=clientData@entry=0x451a86f9bf28) at bora/vmkernel/main/initTable.c:381
#12 0x000041801a940912 in WorldCleanup (world=) at bora/vmkernel/main/world.c:1522 #13 World_TryReap (worldID=) at bora/vmkernel/main/world.c:6113
#14 0x000041801a90e133 in ReaperWorkerWorld (data=) at bora/vmkernel/main/reaper.c:425
#15 0x000041801ab107db in CpuSched_StartWorld (destWorld=, previous=)
at bora/vmkernel/sched/cpusched.c:11957 #16 0x0000000000000000 in ?? () -
During an NFS server failover, the client reclaims all open files. In rare cases, the reclaim operation fails and virtual machines power off, because the NFS server rejects failed requests.
-
You might not see NFS datastores using a fault tolerance solution by Nutanix mounted in a vCenter Server system after an ESXi host reboots. However, you can see the volumes in the ESXi host.
-
vSAN hosts running in a non-preemptible context might tax the CPU when freeing up 256 GB or more of memory in the cache buffer. The host might fail with a purple diagnostic screen.
-
In some configurations, if you use block sizes higher than the supported max transfer length of the storage device, you might see a drop in the I/O bandwidth. The issue occurs due to buffer allocations in the I/O split layer in the storage stack that cause a lock contention.
-
If a high priority task holds a physical CPU for a long time, I/O traffic might stop and eventually cause the ESXi host to fail with a purple diagnostic screen. In the backtrace, you see a message such as
WARNING: PLOG: PLOGStuckIOCB:7729: Stuck IOs detected on vSAN device:xxx
. -
If you have set up a virtual flash resource by using the Virtual Flash File System (VFFS), booting an ESXi host with a large number of VMFS datastores might take long. The delay is due to the process of comparing the VFFS and VMFS file types.
-
In rare cases, virtual machines with a virtual network device might randomly become unresponsive and you need to shut them down. Attempts to generate a coredump for diagnostic reasons fail.
-
Boot time is recalculated on every start of the hostd service, because the VMkernel only provides the uptime of the ESXi host. The timers do not move synchronously, so on a hostd restart the calculated boot time might shift backward or forward.
-
When vSAN reports that the required capacity is less than the available capacity, the corresponding log message might be misleading. The values reported for Required space and Available space might be incorrect.
-
You must manually add the claim rules to an ESXi host for FUJITSU ETERNUS AB/HB Series storage arrays.
-
A vSAN host might fail with a purple diagnostic screen due to a Use-After-Free (UAF) error in the LSOM/VSAN slab. You can observe the issue in the following stack:
cpu11:2100603)0x451c5f09bd68:[0x418027d285d2]vmk_SlabAlloc@vmkernel#nover+0x2e stack: 0x418028f981ba, 0x0, 0x431d231dbae0, 0x451c5f09bde8, 0x459c836cb960
cpu11:2100603)0x451c5f09bd70:[0x418028f5e28c][email protected]#0.0.0.1+0xd stack: 0x0, 0x431d231dbae0, 0x451c5f09bde8, 0x459c836cb960, 0x431d231f7c98
cpu11:2100603)0x451c5f09bd80:[0x418028f981b9]LSOM_Alloc@LSOMCommon#1+0xaa stack: 0x451c5f09bde8, 0x459c836cb960, 0x431d231f7c98, 0x418028f57eb3, 0x431d231dbae0
cpu11:2100603)0x451c5f09bdb0:[0x418028f57eb2][email protected]#0.0.0.1+0x5f stack: 0x1, 0x459c836cb960, 0x418029164388, 0x41802903c607, 0xffffffffffffffff
cpu11:2100603)0x451c5f09bde0:[0x41802903c606][email protected]#0.0.0.1+0x87 stack: 0x459c836cb740, 0x0, 0x43218d3b7c00, 0x43036e815070, 0x451c5f0a3000
cpu11:2100603)0x451c5f09be20:[0x418029165075][email protected]#0.0.0.1+0xfa stack: 0x0, 0x418027d15483, 0x451bc0580000, 0x418027d1c57a, 0x26f5abad8d0ac0
cpu11:2100603)0x451c5f09bf30:[0x418027ceaf52]HelperQueueFunc@vmkernel#nover+0x30f stack: 0x431d231ba618, 0x431d231ba608, 0x431d231ba640, 0x451c5f0a3000, 0x431d231ba618
cpu11:2100603)0x451c5f09bfe0:[0x418027f0eaa2]CpuSched_StartWorld@vmkernel#nover+0x77 stack: 0x0, 0x0, 0x0, 0x0, 0x0 -
In the vSphere Client, you see hardware health warnings with status Unknown for some sensors on ESXi hosts.
-
Due to internal memory leaks, the memory usage of the wsman service on some servers might increase beyond the max limit of 23 MB. As a result, the service stops accepting incoming requests until restarted. The issue affects Dell and Lenovo servers.
-
In some environments, the vSphere Storage Appliance (VSA) plug-in, also known as Security Virtual Appliance (SVA), might cause the hostd service to become intermittently unresponsive during the creation of a TCP connection. In addition, virtual machine snapshots might time out. VSA has not been supported since 2014.
-
In rare cases, an unlocked spinlock might cause an ESXi host to fail with a purple diagnostic screen when restarting a virtual machine that is part of a Microsoft Cluster Service (MSCS) cluster. If you have VMware vSphere High Availability enabled in your environment, the issue might affect many ESXi hosts, because vSphere HA tries to power on the virtual machine in other hosts.
-
When a slab allocation failure is not handled properly in the
RCRead
path, a vSAN host might fail with a purple diagnostic screen. You see a warning such as: bora/modules/vmkernel/lsom/rc_io.c:2115 -- NOT REACHED.
-
The VMXNET3 driver copies and calculates the checksums from the payload indicated in the IP total length field. However, VMXNET3 does not copy any additional bytes after the payload that are not included in the length field and such bytes do not pass to guest virtual machines. For example, VMXNET3 does not copy and pass Ethernet trailers.
-
When a storage device in a vSAN disk group has a transient error due to read failures, Full Rebuild Avoidance (FRA) might not be able to repair the device. In such cases, FRA does not perform relog, which might lead to a log build up at PLOG, causing congestion and latency issues.
-
When you boot virtual machines with BIOS firmware from a network interface, interactions with the different servers in the environment might take long due to a networking issue. This issue does not impact network performance during guest OS runtime.
-
If a physical port in a link aggregation group goes down or restarts, but at the same time an ESXi host sends an out-of-sync LACP packet to the physical switch, the vmnic might also go down. This might cause link aggregation to be temporarily unstable.
-
During vSphere vMotion operations to NFS datastores, the NFS disk-based lock files might not be deleted. As a result, virtual machines on the destination host become unresponsive for around 40 seconds.
-
If you use the
/bin/services.sh restart
command to restart vCenter Server management services, the vobd daemon, which is responsible for sending ESXi host events to vCenter Server, might not restart. As a result, you do not see alerts in the vSphere Client and the vSphere Web Client. -
On rare occasions, when vSphere HA restarts or fails over virtual machines, a virtual machine on the vSphere HA cluster might lose connectivity with an error such as
VMXNET3 user: failed to connect 'Ethernet0' to DV Port 'xx'
. -
TCP uses a three-step process to establish connections: SYN, SYN-ACK, and ACK of the SYN-ACK. Options such as TCP timestamps are negotiated as part of this handshake. For traffic optimization reasons, middleboxes or TCP proxies might remove the timestamp option in the SYN-ACK phase. As a result, such TCP connections to ESXi hosts are reset and closed. A packet capture shows an RST packet from the server immediately after the TCP handshake. A packet capture sketch follows this list.
-
In a rare condition, an uninitialized field in heap memory might return an invalid value for a VMFS resource cluster number, causing subsequent searches for this cluster on the VMFS volume to fail. As a result, the ESXi host fails with a purple diagnostic screen.
-
A rare race condition on LAG port read locks might cause an ESXi host to fail with a purple diagnostic screen while disconnecting from a VDS.
-
When the vSAN read cache issues read I/Os to fill 1 MB read cache lines of a hybrid vSAN array from a highly fragmented write buffer, the IORETRY layer might split each 64 KB read I/O issued by the read cache into sixteen 4 KB I/Os. This might cause IORETRY queue exhaustion.
-
If a vSAN datastore contains many powered off or suspended VMs that are not receiving any I/Os, active VMs on the vSAN datastore might experience performance degradation due to increasing LSOM memory congestion in vSAN.
-
In rare cases, if you commit an object on a disk group, but the object appears as not committed after the vSAN host with that disk group reboots, vSAN might incorrectly consider the object to be in a bad state. As a result, the vSAN host might fail with a purple diagnostic screen with a message such as:
Object <uuid> corrupted. lastCWLSN = <number>, pendingLSN = <number>
PanicvPanicInt@vmkernel#nover+0x439
Panic_vPanic@vmkernel#nover+0x23
vmk_PanicWithModuleID@vmkernel#nover+0x41
[email protected]#0.0.0.1+0x6b3
[email protected]#0.0.0.1+0xa5
[email protected]#0.0.0.1+0xce
[email protected]#0.0.0.1+0x5e1
vmkWorldFunc@vmkernel#nover+0x4f
CpuSched_StartWorld@vmkernel#nover+0x77 -
If you set a power cycle flag by using the
vmx.reboot.powerCycle=TRUE
advanced setting during a hardware upgrade of virtual machines, the hostd service might lose track of the power state of the VMs. After the VMs reboot, hostd might report powered-on VMs as powered off. -
During virtual machine disk consolidation operations, such as snapshot disk consolidation, the hostd service might hold a lock for a long time. As a result, hostd delays responses to other tasks and state requests. Consequently, if the disk consolidation task takes too long, the ESXi host becomes unresponsive.
-
If the UUID of a virtual machine changes, such as after a migration by using vSphere vMotion, and the virtual machine has a vNIC on an NSX-T managed switch, the VM loses network connectivity. The vNIC cannot reconnect.
-
One or more ESXi hosts might intermittently become unresponsive due to a race condition in the VMware AHCI SATA Storage Controller Driver. In the vmkernel.log
file, you see many messages such as:
…ahciAbortIO:(curr) HWQD: 0 BusyL: 0
-
The ESXi USB driver,
vmkusb
, might fail to process control I/O that exceeds 1 KB. As a result, I/O traffic from some USB devices to virtual machines on an ESXi host might fail.
-
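For the vobd restart issue listed above, you can verify from the ESXi Shell that the daemon came back up after restarting the management services. The following is a minimal sketch rather than an official procedure: it assumes that the /etc/init.d/vobd init script supports the status and start actions in the same way as the other management-agent scripts on the host.
# Restart the management agents, as in the issue description above
/bin/services.sh restart
# Check whether the vobd daemon is running; start it manually if it is not
/etc/init.d/vobd status || /etc/init.d/vobd start
If vobd stays stopped, host events are not forwarded to vCenter Server, which matches the missing alerts described in the issue.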
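For the TCP timestamp issue listed above, a packet capture on the host shows whether a middlebox strips the timestamp option during the handshake. The sketch below uses tcpdump-uw on a vmkernel interface; vmk0 and port 443 are placeholders, so adjust them to the interface and service in your environment.
# Capture SYN, SYN-ACK, and RST packets for connections to the host;
# tcpdump-uw prints TCP options, such as TS val, for handshake packets
tcpdump-uw -i vmk0 -nn 'tcp port 443 and tcp[tcpflags] & (tcp-syn|tcp-rst) != 0'
If the SYN carries the timestamp option but the SYN-ACK seen on the far side of the middlebox no longer does, the option was stripped in transit, which matches the reset and close behavior described in the issue.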
Profile Name | ESXi-6.7.0-20210301001s-standard |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | March 18, 2021 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2704743, 2713501, 2673509 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
The OpenSSH version is updated to 8.4p1.
-
The Python third-party library is updated to version 3.5.10.
-
The ESXi userworld OpenSSL library is updated to version openssl-1.0.2x.
-
The following VMware Tools ISO images are bundled with ESXi670-202103001:
windows.iso
: VMware Tools 11.2.5 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.linux.iso
: VMware Tools 10.3.23 ISO image for Linux OS with glibc 2.11 or later.-
The following VMware Tools ISO images are available for download:
- VMware Tools 10.0.12:
winPreVista.iso
: for Windows 2000, Windows XP, and Windows 2003.linuxPreGLibc25.iso
: for Linux OS with a glibc version less than 2.5.
- VMware Tools 11.0.6:
windows.iso
: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).
solaris.iso
: VMware Tools image 10.3.10 for Solaris.darwin.iso
: Supports Mac OS X versions 10.11 and later.-
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
- VMware Tools 11.2.5 Release Notes
- Earlier versions of VMware Tools
- What Every vSphere Admin Must Know About VMware Tools
- VMware Tools for hosts provisioned with Auto Deploy
- Updating VMware Tools
-
Code Name | FMS | Plt ID | MCU Rev | MCU Date | Brand Names
Nehalem EP | 0x106a5 | 0x03 | 0x0000001d | 5/11/2018 | Intel Xeon 35xx Series; Intel Xeon 55xx Series
Lynnfield | 0x106e5 | 0x13 | 0x0000000a | 5/8/2018 | Intel Xeon 34xx Lynnfield Series
Clarkdale | 0x20652 | 0x12 | 0x00000011 | 5/8/2018 | Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series
Arrandale | 0x20655 | 0x92 | 0x00000007 | 4/23/2018 | Intel Core i7-620LE Processor
Sandy Bridge DT | 0x206a7 | 0x12 | 0x0000002f | 2/17/2019 | Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series
Westmere EP | 0x206c2 | 0x03 | 0x0000001f | 5/8/2018 | Intel Xeon 56xx Series; Intel Xeon 36xx Series
Sandy Bridge EP | 0x206d6 | 0x6d | 0x00000621 | 3/4/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
Sandy Bridge EP | 0x206d7 | 0x6d | 0x0000071a | 3/24/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
Nehalem EX | 0x206e6 | 0x04 | 0x0000000d | 5/15/2018 | Intel Xeon 65xx Series; Intel Xeon 75xx Series
Westmere EX | 0x206f2 | 0x05 | 0x0000003b | 5/16/2018 | Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series
Ivy Bridge DT | 0x306a9 | 0x12 | 0x00000021 | 2/13/2019 | Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C
Haswell DT | 0x306c3 | 0x32 | 0x00000028 | 11/12/2019 | Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series
Ivy Bridge EP | 0x306e4 | 0xed | 0x0000042e | 3/14/2019 | Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series
Ivy Bridge EX | 0x306e7 | 0xed | 0x00000715 | 3/14/2019 | Intel Xeon E7-8800/4800/2800-v2 Series
Haswell EP | 0x306f2 | 0x6f | 0x00000044 | 5/27/2020 | Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series
Haswell EX | 0x306f4 | 0x80 | 0x00000016 | 6/17/2019 | Intel Xeon E7-8800/4800-v3 Series
Broadwell H | 0x40671 | 0x22 | 0x00000022 | 11/12/2019 | Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series
Avoton | 0x406d8 | 0x01 | 0x0000012d | 9/16/2019 | Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series
Broadwell EP/EX | 0x406f1 | 0xef | 0x0b000038 | 6/18/2019 | Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series
Skylake SP | 0x50654 | 0xb7 | 0x02006a08 | 6/16/2020 | Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series
Cascade Lake B-0 | 0x50656 | 0xbf | 0x04003003 | 6/18/2020 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
Cascade Lake | 0x50657 | 0xbf | 0x05003003 | 6/18/2020 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
Cooper Lake | 0x5065b | 0xbf | 0x0700001f | 9/17/2020 | Intel Xeon Platinum 8300 Series; Intel Xeon Gold 6300/5300
Broadwell DE | 0x50662 | 0x10 | 0x0000001c | 6/17/2019 | Intel Xeon D-1500 Series
Broadwell DE | 0x50663 | 0x10 | 0x07000019 | 6/17/2019 | Intel Xeon D-1500 Series
Broadwell DE | 0x50664 | 0x10 | 0x0f000017 | 6/17/2019 | Intel Xeon D-1500 Series
Broadwell NS | 0x50665 | 0x10 | 0x0e00000f | 6/17/2019 | Intel Xeon D-1600 Series
Skylake H/S | 0x506e3 | 0x36 | 0x000000e2 | 7/14/2020 | Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series
Denverton | 0x506f1 | 0x01 | 0x00000032 | 3/7/2020 | Intel Atom C3000 Series
Snow Ridge | 0x80665 | 0x01 | 0x0b000007 | 2/25/2020 | Intel Atom P5000 Series
Kaby Lake H/S/X | 0x906e9 | 0x2a | 0x000000de | 5/26/2020 | Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series
Coffee Lake | 0x906ea | 0x22 | 0x000000de | 5/25/2020 | Intel Xeon E-2100 Series; Intel Xeon E-2200 Series (4 or 6 core)
Coffee Lake | 0x906eb | 0x02 | 0x000000de | 5/25/2020 | Intel Xeon E-2100 Series
Coffee Lake | 0x906ec | 0x22 | 0x000000de | 6/3/2020 | Intel Xeon E-2100 Series
Coffee Lake Refresh | 0x906ed | 0x22 | 0x000000de | 5/24/2020 | Intel Xeon E-2200 Series (8 core)
-
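To map an ESXi host to a row in the microcode table above, you can read the CPU identification and microcode fields from the ESXi Shell. This is a hedged sketch rather than an official procedure: the vsish node layout and field names can vary slightly between releases.
# Print family, model, stepping, and microcode revision fields for the first CPU
vsish -e cat /hardware/cpu/cpuList/0 | grep -iE 'family|model|stepping|revision'
For example, family 6, model 85 (0x55), stepping 4 corresponds to the FMS signature 0x50654 (Skylake SP) in the table, and the reported current microcode revision can be compared with the MCU Rev column.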
Profile Name | ESXi-6.7.0-20210301001s-no-tools |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | March 18, 2021 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2704743, 2713501, 2673509 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
The OpenSSH version is updated to 8.4p1.
-
The Python third-party library is updated to version 3.5.10.
-
The ESXi userworld OpenSSL library is updated to version openssl-1.0.2x.
-
The following VMware Tools ISO images are bundled with ESXi670-202103001:
windows.iso
: VMware Tools 11.2.5 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.linux.iso
: VMware Tools 10.3.23 ISO image for Linux OS with glibc 2.11 or later.-
The following VMware Tools ISO images are available for download:
- VMware Tools 10.0.12:
winPreVista.iso
: for Windows 2000, Windows XP, and Windows 2003.linuxPreGLibc25.iso
: for Linux OS with a glibc version less than 2.5.
- VMware Tools 11.0.6:
windows.iso
: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).
solaris.iso
: VMware Tools image 10.3.10 for Solaris.darwin.iso
: Supports Mac OS X versions 10.11 and later.-
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
- VMware Tools 11.2.5 Release Notes
- Earlier versions of VMware Tools
- What Every vSphere Admin Must Know About VMware Tools
- VMware Tools for hosts provisioned with Auto Deploy
- Updating VMware Tools
-
Code Name | FMS | Plt ID | MCU Rev | MCU Date | Brand Names
Nehalem EP | 0x106a5 | 0x03 | 0x0000001d | 5/11/2018 | Intel Xeon 35xx Series; Intel Xeon 55xx Series
Lynnfield | 0x106e5 | 0x13 | 0x0000000a | 5/8/2018 | Intel Xeon 34xx Lynnfield Series
Clarkdale | 0x20652 | 0x12 | 0x00000011 | 5/8/2018 | Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series
Arrandale | 0x20655 | 0x92 | 0x00000007 | 4/23/2018 | Intel Core i7-620LE Processor
Sandy Bridge DT | 0x206a7 | 0x12 | 0x0000002f | 2/17/2019 | Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series
Westmere EP | 0x206c2 | 0x03 | 0x0000001f | 5/8/2018 | Intel Xeon 56xx Series; Intel Xeon 36xx Series
Sandy Bridge EP | 0x206d6 | 0x6d | 0x00000621 | 3/4/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
Sandy Bridge EP | 0x206d7 | 0x6d | 0x0000071a | 3/24/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
Nehalem EX | 0x206e6 | 0x04 | 0x0000000d | 5/15/2018 | Intel Xeon 65xx Series; Intel Xeon 75xx Series
Westmere EX | 0x206f2 | 0x05 | 0x0000003b | 5/16/2018 | Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series
Ivy Bridge DT | 0x306a9 | 0x12 | 0x00000021 | 2/13/2019 | Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C
Haswell DT | 0x306c3 | 0x32 | 0x00000028 | 11/12/2019 | Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series
Ivy Bridge EP | 0x306e4 | 0xed | 0x0000042e | 3/14/2019 | Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series
Ivy Bridge EX | 0x306e7 | 0xed | 0x00000715 | 3/14/2019 | Intel Xeon E7-8800/4800/2800-v2 Series
Haswell EP | 0x306f2 | 0x6f | 0x00000044 | 5/27/2020 | Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series
Haswell EX | 0x306f4 | 0x80 | 0x00000016 | 6/17/2019 | Intel Xeon E7-8800/4800-v3 Series
Broadwell H | 0x40671 | 0x22 | 0x00000022 | 11/12/2019 | Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series
Avoton | 0x406d8 | 0x01 | 0x0000012d | 9/16/2019 | Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series
Broadwell EP/EX | 0x406f1 | 0xef | 0x0b000038 | 6/18/2019 | Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series
Skylake SP | 0x50654 | 0xb7 | 0x02006a08 | 6/16/2020 | Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series
Cascade Lake B-0 | 0x50656 | 0xbf | 0x04003003 | 6/18/2020 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
Cascade Lake | 0x50657 | 0xbf | 0x05003003 | 6/18/2020 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
Cooper Lake | 0x5065b | 0xbf | 0x0700001f | 9/17/2020 | Intel Xeon Platinum 8300 Series; Intel Xeon Gold 6300/5300
Broadwell DE | 0x50662 | 0x10 | 0x0000001c | 6/17/2019 | Intel Xeon D-1500 Series
Broadwell DE | 0x50663 | 0x10 | 0x07000019 | 6/17/2019 | Intel Xeon D-1500 Series
Broadwell DE | 0x50664 | 0x10 | 0x0f000017 | 6/17/2019 | Intel Xeon D-1500 Series
Broadwell NS | 0x50665 | 0x10 | 0x0e00000f | 6/17/2019 | Intel Xeon D-1600 Series
Skylake H/S | 0x506e3 | 0x36 | 0x000000e2 | 7/14/2020 | Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series
Denverton | 0x506f1 | 0x01 | 0x00000032 | 3/7/2020 | Intel Atom C3000 Series
Snow Ridge | 0x80665 | 0x01 | 0x0b000007 | 2/25/2020 | Intel Atom P5000 Series
Kaby Lake H/S/X | 0x906e9 | 0x2a | 0x000000de | 5/26/2020 | Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series
Coffee Lake | 0x906ea | 0x22 | 0x000000de | 5/25/2020 | Intel Xeon E-2100 Series; Intel Xeon E-2200 Series (4 or 6 core)
Coffee Lake | 0x906eb | 0x02 | 0x000000de | 5/25/2020 | Intel Xeon E-2100 Series
Coffee Lake | 0x906ec | 0x22 | 0x000000de | 6/3/2020 | Intel Xeon E-2100 Series
Coffee Lake Refresh | 0x906ed | 0x22 | 0x000000de | 5/24/2020 | Intel Xeon E-2200 Series (8 core)
-