ESXi 7.0 Update 2c | 24 AUG 2021 | Build 18426014
Check for additions and updates to these release notes.
What's in the Release Notes
The release notes cover the following topics:
- What's New
- Earlier Releases of ESXi 7.0
- Patches Contained in this Release
- Product Support Notices
- Resolved Issues
- Known Issues
What's New
- ESXi 7.0 Update 2c delivers bug and security fixes documented in the Resolved Issues section.
Earlier Releases of ESXi 7.0
New features, resolved issues, and known issues of ESXi are described in the release notes for each release. Release notes for earlier releases of ESXi 7.0 are:
- VMware ESXi 7.0, Patch Release ESXi 7.0 Update 2a
- VMware ESXi 7.0, Patch Release ESXi 7.0 Update 2
- VMware ESXi 7.0, Patch Release ESXi 7.0 Update 1d
- VMware ESXi 7.0, Patch Release ESXi 7.0 Update 1c
- VMware ESXi 7.0, Patch Release ESXi 7.0 Update 1b
- VMware ESXi 7.0, Patch Release ESXi 7.0 Update 1a
- VMware ESXi 7.0, Patch Release ESXi 7.0 Update 1
- VMware ESXi 7.0, Patch Release ESXi 7.0b
For internationalization, compatibility, and open source components, see the VMware vSphere 7.0 Release Notes.
For more information on ESXi versions that support upgrade to ESXi 7.0 Update 2c, refer to VMware knowledge base article 67077.
Patches Contained in This Release
This release of ESXi 7.0 Update 2c delivers the following patches:
Build Details
Download Filename: | VMware-ESXi-7.0U2c-18426014-depot |
Build: | 18426014 |
Download Size: | 580.8 MB |
md5sum: | a6b2d1e3a1a071b8d93a55d7a7fc0b63 |
sha1checksum: | 829da0330f765b4ae46c2fb5b90a8b60f90e4e5b |
Host Reboot Required: | Yes |
Virtual Machine Migration or Shutdown Required: | Yes |
Components
Component | Bulletin | Category | Severity |
---|---|---|---|
ESXi Component - core ESXi VIBs | ESXi_7.0.2-0.20.18426014 | Bugfix | Critical |
Emulex FC Driver | Broadcom-ELX-lpfc_12.8.298.3-2vmw.702.0.20.18426014 | Bugfix | Critical |
Broadcom NetXtreme-E Network and ROCE/RDMA Drivers for VMware ESXi | Broadcom-bnxt-Net-RoCE_216.0.0.0-1vmw.702.0.20.18426014 | Bugfix | Important |
QLogic FastLinQ 10/25/40/50/100 GbE Ethernet and RoCE/RDMA Drivers for VMware ESXi | MRVL-E4-CNA-Driver-Bundle_1.0.0.0-1vmw.702.0.20.18426014 | Bugfix | Important |
ESXi Install/Upgrade Component | esx-update_7.0.2-0.20.18426014 | Bugfix | Critical |
Network driver for Intel(R) X710/XL710/XXV710/X722 Adapters | Intel-i40en_1.8.1.137-1vmw.702.0.20.18426014 | Bugfix | Critical |
USB Driver | VMware-vmkusb_0.1-4vmw.702.0.20.18426014 | Bugfix | Critical |
ESXi Component - core ESXi VIBs | ESXi_7.0.2-0.15.18295176 | Security | Critical |
USB Driver | VMware-vmkusb_0.1-1vmw.702.0.15.18295176 | Security | Moderate |
ESXi Tools Component | VMware-VM-Tools_11.2.6.17901274-18295176 | Security | Critical |
ESXi Install/Upgrade Component | esx-update_7.0.2-0.15.18295176 | Security | Critical |
IMPORTANT:
- To download the ESXi 7.0 Update 2c patch offline depot ZIP file from VMware Customer Connect, you must navigate to Products and Accounts > Product Patches. From the Select a Product drop-down menu, select ESXi (Embedded and Installable) and from the Select a Version drop-down menu, select 7.0. You can verify the contents of the downloaded depot as shown in the example after this list.
- Starting with vSphere 7.0, VMware uses components for packaging VIBs along with bulletins. The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.
- When patching ESXi hosts by using VMware Update Manager from a version prior to ESXi 7.0 Update 2, it is strongly recommended to use the rollup bulletin in the patch baseline. If you cannot use the rollup bulletin, be sure to include all of the following packages in the patching baseline. If the following packages are not included in the baseline, the update operation fails:
- VMware-vmkusb_0.1-1vmw.701.0.0.16850804 or higher
- VMware-vmkata_0.1-1vmw.701.0.0.16850804 or higher
- VMware-vmkfcoe_1.0.0.2-1vmw.701.0.0.16850804 or higher
- VMware-NVMeoF-RDMA_1.0.1.2-1vmw.701.0.0.16850804 or higher
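After you download the offline depot, you can confirm which image profiles and VIBs it contains before you build the baseline. A minimal verification sketch, assuming the depot ZIP is copied to a datastore path such as /vmfs/volumes/datastore1/ (the path and .zip file name here are examples only):
esxcli software sources profile list -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U2c-18426014-depot.zip
esxcli software sources vib list -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U2c-18426014-depot.zip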
Rollup Bulletin
This rollup bulletin contains the latest VIBs with all the fixes after the initial release of ESXi 7.0.
Bulletin ID | Category | Severity | Details |
---|---|---|---|
ESXi-7.0U2c-18426014 | Bugfix | Critical | Bugfix and Security |
ESXi-7.0U2sc-18295176 | Security | Critical | Security only |
Image Profiles
VMware patch and update releases contain general and critical image profiles. Application of the general release image profile applies to new bug fixes.
Image Profile Name |
ESXi-7.0U2c-18426014-standard |
ESXi-7.0U2c-18426014-no-tools |
ESXi-7.0U2sc-18295176-standard |
ESXi-7.0U2sc-18295176-no-tools |
ESXi Image
Name and Version | Release Date | Category | Detail |
---|---|---|---|
ESXi70U2c-18426014 | 08/24/2021 | Bugfix | Bugfix and Security image |
ESXi70U2sc-18295176 | 08/24/2021 | Security | Security only image |
For information about the individual components and bulletins, see the Product Patches page and the Resolved Issues section.
Patch Download and Installation
In vSphere 7.0.x, the Update Manager plug-in, used for administering vSphere Update Manager, is replaced with the Lifecycle Manager plug-in. Administrative operations for vSphere Update Manager are still available under the Lifecycle Manager plug-in, along with new capabilities for vSphere Lifecycle Manager.
The typical way to apply patches to ESXi 7.0.x hosts is by using the vSphere Lifecycle Manager. For details, see About vSphere Lifecycle Manager and vSphere Lifecycle Manager Baselines and Images.
You can also update ESXi hosts without using the Lifecycle Manager plug-in, and use an image profile instead. To do this, you must manually download the patch offline bundle ZIP file from the Product Patches page and use the esxcli software profile update command.
For more information, see the Upgrading Hosts by Using ESXCLI Commands and the VMware ESXi Upgrade guide.
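For example, a minimal ESXCLI sequence to apply the standard image profile from the offline depot might look like the following. The datastore path is an example only; the host must be in maintenance mode and requires a reboot afterward:
esxcli system maintenanceMode set --enable true
esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U2c-18426014-depot.zip -p ESXi-7.0U2c-18426014-standard
reboot
After the host restarts, exit maintenance mode with esxcli system maintenanceMode set --enable false.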
Product Support Notices
-
Discontinuation of Trusted Platform Module (TPM) 1.2 in a future major vSphere release: VMware intends in a future major vSphere release to discontinue support of TPM 1.2 and associated features such as TPM 1.2 with TXT. To get full use of vSphere features, you can use TPM 2.0 instead of TPM 1.2. Support for TPM 1.2 continues in all vSphere 7.0.x releases, updates, and patches. However, you will not see a deprecation warning for TPM 1.2 during installation or updates of 7.0.x releases.
- Service Location Protocol (SLP) service is disabled by default: Starting from ESXi 7.0 Update 2c, the SLP service is disabled by default to prevent potential security vulnerabilities. The SLP service is also automatically disabled after upgrades to ESXi 7.0 Update 2c. You can manually enable the SLP service by using the command esxcli system slp set --enable true. For more information, see VMware knowledge base article 76372.
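If your environment depends on SLP, for example for CIM-based discovery, the following sketch re-enables the service from the ESXi Shell. The esxcli command is the one documented above; starting the slpd daemon through its init script is an additional step that might be required, as described in VMware knowledge base article 76372:
esxcli system slp set --enable true
/etc/init.d/slpd start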
Resolved Issues
The resolved issues are grouped as follows.
- ESXi_7.0.2-0.20.18426014
- esx-update_7.0.2-0.20.18426014
- Broadcom-bnxt-Net-RoCE_216.0.0.0-1vmw.702.0.20.18426014
- MRVL-E4-CNA-Driver-Bundle_1.0.0.0-1vmw.702.0.20.18426014
- Intel-i40en_1.8.1.137-1vmw.702.0.20.18426014
- Broadcom-ELX-lpfc_12.8.298.3-2vmw.702.0.20.18426014
- VMware-vmkusb_0.1-4vmw.702.0.20.18426014
- ESXi_7.0.2-0.15.18295176
- esx-update_7.0.2-0.15.18295176
- VMware-VM-Tools_11.2.6.17901274-18295176
- VMware-vmkusb_0.1-1vmw.702.0.15.18295176
- ESXi-7.0U2c-18426014-standard
- ESXi-7.0U2c-18426014-no-tools
- ESXi-7.0U2sc-18295176-standard
- ESXi-7.0U2sc-18295176-no-tools
- ESXi Image - ESXi70U2c-18426014
- ESXi image - ESXi70U2sc-18295176
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2717698, 2731446, 2515171, 2731306, 2751564, 2718934, 2755056, 2725886, 2753231, 2728910, 2741111, 2759343, 2721379, 2751277, 2708326, 2755977, 2737934, 2777001, 2760932, 2731263, 2731142, 2760267, 2749688, 2763986, 2765534, 2766127, 2777003, 2778793, 2755807, 2760081, 2731644, 2749815, 2749962, 2738238, 2799408 |
CVE numbers | N/A |
The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.
Updates the crx, vsanhealth, vsan, gc, esx-xserver, clusterstore, vdfs, native-misc-drivers, cpu-microcode, esx-dvfilter-generic-fastpath, and esx-base VIBs to resolve the following issues:
- PR 2717698: An ESXi host might fail with a purple diagnostic screen due to a rare race condition in the qedentv driver
A rare race condition in the qedentv driver might cause an ESXi host to fail with a purple diagnostic screen. The issue occurs when an Rx complete interrupt arrives just after a General Services Interface (GSI) queue pair (QP) is destroyed, for example during a qedentv driver unload or a system shut down. In such a case, the qedentv driver might access an already freed QP address that leads to a PF exception. The issue might occur in ESXi hosts that are connected to a busy physical switch with heavy unsolicited GSI traffic. In the backtrace, you see messages such as:
cpu4:2107287)0x45389609bcb0:[0x42001d3e6f72]qedrntv_ll2_rx_cb@(qedrntv)#<None>+0x1be stack: 0x45b8f00a7740, 0x1e146d040, 0x432d65738d40, 0x0, 0x
2021-02-11T03:31:53.882Z cpu4:2107287)0x45389609bd50:[0x42001d421d2a]ecore_ll2_rxq_completion@(qedrntv)#<None>+0x2ab stack: 0x432bc20020ed, 0x4c1e74ef0, 0x432bc2002000,
2021-02-11T03:31:53.967Z cpu4:2107287)0x45389609bdf0:[0x42001d1296d0]ecore_int_sp_dpc@(qedentv)#<None>+0x331 stack: 0x0, 0x42001c3bfb6b, 0x76f1e5c0, 0x2000097, 0x14c2002
2021-02-11T03:31:54.039Z cpu4:2107287)0x45389609be60:[0x42001c0db867]IntrCookieBH@vmkernel#nover+0x17c stack: 0x45389609be80, 0x40992f09ba, 0x43007a436690, 0x43007a43669
2021-02-11T03:31:54.116Z cpu4:2107287)0x45389609bef0:[0x42001c0be6b0]BH_Check@vmkernel#nover+0x121 stack: 0x98ba, 0x33e72f6f6e20, 0x0, 0x8000000000000000, 0x430000000001
2021-02-11T03:31:54.187Z cpu4:2107287)0x45389609bf70:[0x42001c28370c]NetPollWorldCallback@vmkernel#nover+0x129 stack: 0x61, 0x42001d0e0000, 0x42001c283770, 0x0, 0x0
2021-02-11T03:31:54.256Z cpu4:2107287)0x45389609bfe0:[0x42001c380bad]CpuSched_StartWorld@vmkernel#nover+0x86 stack: 0x0, 0x42001c0c2b44, 0x0, 0x0, 0x0
2021-02-11T03:31:54.319Z cpu4:2107287)0x45389609c000:[0x42001c0c2b43]Debug_IsInitialized@vmkernel#nover+0xc stack: 0x0, 0x0, 0x0, 0x0, 0x0
2021-02-11T03:31:54.424Z cpu4:2107287)^[[45m^[[33;1mVMware ESXi 7.0.2 [Releasebuild-17435195 x86_64]^[[0m
#PF Exception 14 in world 2107287:vmnic7-pollW IP 0x42001d3e6f72 addr 0x1c
This issue is resolved in this release.
- PR 2731446: While vSphere Replication appliance powers on, the hostd service might fail and cause temporary unresponsiveness of ESXi hosts to vCenter Server
In rare cases, while the vSphere Replication appliance powers on, the hostd service might fail and cause temporary unresponsiveness of ESXi hosts to vCenter Server. The hostd service restarts automatically and connectivity restores.
This issue is resolved in this release.
- PR 2515171: Editing an advanced options parameter in a host profile and setting a value to false, results in setting the value to true
When attempting to set a value to false for an advanced option parameter in a host profile, the user interface creates a non-empty string value. Values that are not empty are interpreted as true and the advanced option parameter receives a true value in the host profile.
This issue is resolved in this release.
- PR 2731306: ESXi hosts become unresponsive to vCenter Server with repeating messages for admission failures
ESXi hosts might become unresponsive to vCenter Server even when the hosts are accessible and running. The issue occurs in vendor images with customized drivers where ImageConfigManager consumes more RAM than allocated. As a result, repeated failures of ImageConfigManager cause ESXi hosts to disconnect from vCenter Server.
In the vmkernel logs, you see repeating errors such as: Admission failure in path: host/vim/vmvisor/hostd-tmp/sh.<pid1>:python.<pid2>:uw.<pid2>.
In the hostd logs, you see messages such as: Task Created : haTask-ha-host-vim.host.ImageConfigManager.fetchSoftwarePackages-<sequence>, followed by: ForkExec(/usr/bin/sh) <pid1>, where <pid1> identifies an ImageConfigManager process.
This issue is resolved in this release.
- PR 2751564: If you lower the value of the DiskMaxIOSize advanced config option, ESXi host I/O operations might fail
If you change the DiskMaxIOSize advanced config option to a lower value, I/Os with large block sizes might get incorrectly split and queue at the PSA path. As a result, ESXi host I/O operations might time out and fail.
This issue is resolved in this release.
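Until you apply this patch, you can check the current value and, if necessary, restore the default with ESXCLI. This is a workaround sketch only; 32767 (KB) is the usual default for this option:
esxcli system settings advanced list -o /Disk/DiskMaxIOSize
esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 32767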
- PR 2718934: Power on or power off operations of virtual machines take too long
In certain conditions, the power on or power off operations of virtual machines might take as long as 20 minutes. The issue occurs when the underlying storage of the VMs runs a background operation such as HBA rescan or APD events, and some locks are continuously held.
The issue is resolved in this release. The fix removes unnecessary locks on storage resources and introduces a fast mutex to enhance VM power operation performance.
- PR 2755056: ESXi hosts fail with purple diagnostic screen due to Vital Product Data (VPD) page response size difference
In rare cases, when the VPD page response size from the target ESXi host is different on different paths to the host, ESXi might write more bytes than the allocated length for a given path. As a result, the ESXi host fails with a purple diagnostic screen and a message such as:
Panic Message: @BlueScreen: PANIC bora/vmkernel/main/dlmalloc.c:4878 - Corruption in dlmalloc
This issue is resolved in this release. The fix tracks the size of the VPD pages using a separate field instead of depending on the VPD response header.
- PR 2725886: Rare lock contention might cause some virtual machine operations to fail
A rare lock contention might cause virtual machine operations such as failover or migration to fail.
Locking failure messages look similar to:
vol 'VM_SYS_11', lock at 480321536: [Req mode 1] Checking liveness:
type 10c00003 offset 480321536 v 1940464, hb offset 3801088
or
Res3: 2496: Rank violation threshold reached: cid 0xc1d00002, resType 2, cnum 5 vol VM_SYS_11
This issue is resolved in this release.
- PR 2766036: In the vSphere Client, you see incorrect numbers for the provisioned space of virtual machines
In complicated workflows that result in creating multiple virtual machines, when you navigate to VMs > Virtual Machines, you might see huge numbers in the Provisioned Space column for some VMs. For example, 2.95 TB instead of the actual provisioned space of 25 GB.
This issue is resolved in this release.
- PR 2753231: An ESXi host fails with a purple diagnostic screen due to a call stack corruption in the ESXi storage stack
While handling the SCSI command READ CAPACITY (10), ESXi might copy excess data from the response and corrupt the call stack. As a result, the ESXi host fails with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2728910: ESX hosts might fail with a purple diagnostic screen when virtual machines power on
In very rare cases, VMFS resource clusters that are included in a journal transaction might not be locked. As a result, during power on of virtual machines, multiple ESXi hosts might fail with a purple diagnostic screen due to exception 14 in the VMFS layer. A typical message and stack trace seen in the vmkernel log is as follows:
@BlueScreen: #PF Exception 14 in world 2097684:VSCSI Emulat IP 0x418014d06fca addr 0x0
PTEs:0x16a47a027;0x600faa8007;0x0;
2020-06-24T17:35:57.073Z cpu29:2097684)Code start: 0x418013c00000 VMK uptime: 0:01:01:20.555
2020-06-24T17:35:57.073Z cpu29:2097684)0x451a50a1baa0:[0x418014d06fca]Res6MemUnlockTxnRCList@esx#nover+0x176 stack: 0x1 2020-06-24T17:35:57.074Z cpu29:2097684)0x451a50a1bb10:[0x418014c7cdb6]J3_DeleteTransaction@esx#nover+0x33f stack: 0xbad0003
2020-06-24T17:35:57.074Z cpu29:2097684)0x451a50a1bb40:[0x418014c7db10]J3_AbortTransaction@esx#nover+0x105 stack: 0x0 2020-06-24T17:35:57.074Z cpu29:2097684)0x451a50a1bb80:[0x418014cbb752]Fil3_FinalizePunchFileHoleTxnVMFS6@esx#nover+0x16f stack: 0x430fe950e1f0
2020-06-24T17:35:57.075Z cpu29:2097684)0x451a50a1bbd0:[0x418014c7252b]Fil3UpdateBlocks@esx#nover+0x348 stack: 0x451a50a1bc78 2020-06-24T17:35:57.075Z cpu29:2097684)0x451a50a1bce0:[0x418014c731dc]Fil3_PunchFileHoleWithRetry@esx#nover+0x89 stack: 0x451a50a1bec8
2020-06-24T17:35:57.075Z cpu29:2097684)0x451a50a1bd90:[0x418014c739a5]Fil3_FileBlockUnmap@esx#nover+0x50e stack: 0x230eb5 2020-06-24T17:35:57.076Z cpu29:2097684)0x451a50a1be40:[0x418013c4c551]FSSVec_FileBlockUnmap@vmkernel#nover+0x6e stack: 0x230eb5
2020-06-24T17:35:57.076Z cpu29:2097684)0x451a50a1be90:[0x418013fb87b1]VSCSI_ExecFSSUnmap@vmkernel#nover+0x8e stack: 0x0 2020-06-24T17:35:57.077Z cpu29:2097684)0x451a50a1bf00:[0x418013fb71cb]VSCSIDoEmulHelperIO@vmkernel#nover+0x2c stack: 0x430145fbe070
2020-06-24T17:35:57.077Z cpu29:2097684)0x451a50a1bf30:[0x418013ceadfa]HelperQueueFunc@vmkernel#nover+0x157 stack: 0x430aa05c9618 2020-06-24T17:35:57.077Z cpu29:2097684)0x451a50a1bfe0:[0x418013f0eaa2]CpuSched_StartWorld@vmkernel#nover+0x77 stack: 0x0
2020-06-24T17:35:57.083Z cpu29:2097684)base fs=0x0 gs=0x418047400000 Kgs=0x0
This issue is resolved in this release.
- PR 2741111: Failure of some Intel CPUs to forward Debug Exception (#DB) traps might result in virtual machine triple fault
In rare cases, some Intel CPUs might fail to forward #DB traps, and if a timer interrupt happens during the Windows system call, the virtual machine might triple fault.
This issue is resolved in this release. The fix forwards all #DB traps from CPUs into the guest operating system, except when the DB trap comes from a debugger that is attached to the virtual machine.
- PR 2759343: The sfcbd service does not start when you restart ESXi management agents from the Direct Console User Interface (DCUI)
If you restart ESXi management agents from DCUI, the sfcbd service does not start. You must manually start the SFCB service.
This issue is resolved in this release.
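Until the fix is applied, you can start the service manually from the ESXi Shell. This is a sketch that assumes the standard init script location on ESXi; esxcli system wbem get only reports the WBEM configuration state:
/etc/init.d/sfcbd-watchdog start
esxcli system wbem get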
- PR 2721379: The hostd service logs a warning for zero power supply sensors every 2 minutes
The hostd service logs the message BuildPowerSupplySensorList found 0 power supply sensors every 2 minutes. The issue occurs when you upgrade to or fresh install ESXi 7.0 Update 1 on an ESXi host.
This issue is resolved in this release.
- PR 2751277: ESXi hosts might fail with a purple diagnostic screen due to a null pointer reference error
In rare cases, the kernel memory allocator might return a NULL pointer that might not be correctly dereferenced. As a result, the ESXi host might fail with a purple diagnostic screen with an error such as #GP Exception 13 in world 2422594:vmm:overCom @ 0x42003091ede4. In the backtrace, you see errors similar to:
#1 Util_ZeroPageTable
#2 Util_ZeroPageTableMPN
#3 VmMemPfUnmapped
#4 VmMemPfInt
#5 VmMemPfGetMapping
#6 VmMemPf
#7 VmMemPfLockPageInt
#8 VMMVMKCall_Call
#9 VMKVMM_ArchEnterVMKernel
This issue is resolved in this release.
- PR 2708326: If an NVMe device is hot added and hot removed in a short interval, the ESXi host might fail with a purple diagnostic screen
If an NVMe device is hot added and hot removed in a short interval, the NVMe driver might fail to initialize the NVMe controller due to a command timeout. As a result, the driver might access memory that is already freed in a cleanup process. In the backtrace, you see a message such as WARNING: NVMEDEV: NVMEInitializeController:4045: Failed to get controller identify data, status: Timeout.
Eventually, the ESXi host might fail with a purple diagnostic screen with an error similar to #PF Exception ... in world ...:vmkdevmgr.
This issue is resolved in this release.
- PR 2755977: ESXi hosts might fail to provision thick VMDK or create a swap file during a virtual machine power on
When an ESXi host does not have free large file blocks (LFBs) to allocate, the host fulfills thick VMDK provisioning or the creation of a swap file with small file blocks (SFBs). However, in rare cases, the host might fail to allocate SFBs as well. As a result, thick VMDK provisioning or the creation of a swap file fails. When virtual machines try to power on, you see an error in the vmkernel logs such as:
vmkernel.1:2021-01-08T19:13:28.341Z cpu20:2965260)Fil3: 10269: Max no space retries (10) exceeded for caller Fil3_SetFileLength (status 'No space left on device')
This issue is resolved in this release.
- PR 2737934: If you use very large VMFS6 datastores, you might see repeating CPU lockup errors, or ESXi hosts might intermittently fail
When you use very large VMFS6 datastores, the process of allocating file blocks for thin VMDKs in a resource cluster might cause a CPU lockup. As a result, you see CPU lockup messages and in rare cases, ESXi hosts might fail. In the backtrace, you see messages such as:
2021-04-07T02:18:08.730Z cpu31:2345212)WARNING: Heartbeat: 849: PCPU 10 didn't have a heartbeat for 7 seconds; *may* be locked up.
The associated backtrace involving Res6AffMgr functions looks similar to:
2021-01-06T02:16:16.073Z cpu0:2265741)ALERT: NMI: 694: NMI IPI: RIPOFF(base):RBP:CS [0x121d74d(0x42001a000000):0x50d0:0xf48] (Src 0x1, CPU0)
2021-01-06T02:16:16.073Z cpu0:2265741)0x4538d261b320:[0x42001b21d74c]Res6AffMgrComputeSortIndices@esx#nover+0x369 stack: 0x43119125a000
2021-01-06T02:16:16.073Z cpu0:2265741)0x4538d261b3b0:[0x42001b224e46]Res6AffMgrGetCluster@esx#nover+0xb1f stack: 0xd 2021-01-06T02:16:16.073Z cpu0:2265741)0x4538d261b4b0:[0x42001b226491]Res6AffMgr_AllocResourcesInt@esx#nover+0x40a stack: 0x4311910f4890
2021-01-06T02:16:16.073Z cpu0:2265741)0x4538d261b680:[0x42001b2270e2]Res6AffMgr_AllocResources@esx#nover+0x1b stack: 0x0
This issue is resolved in this release. For more information, see VMware knowledge base article 83473.
- PR 2777001: ESXi hosts with large physical CPU count might fail with a purple diagnostic screen during or after an upgrade to ESXi 7.0 Update 2
During or after an upgrade to ESXi 7.0 Update 2, the ESX storage layer might not allocate sufficient memory resources for ESXi hosts with a large physical CPU count and many storage devices or paths connected to the hosts. As a result, such ESXi hosts might fail with a purple diagnostic screen.
This issue is resolved in this release. The fix corrects the ESX storage layer memory allocation.
- PR 2760932: Virtual machines with hardware version 18 might intermittently fail during vSphere vMotion operations
Virtual machines with hardware version 18 and VMware Tools 11.2.0 or later might fail due to a virtual graphics device issue on the destination side during a vSphere vMotion or Storage vMotion.
In the vmkernel.log you see a line such as:
PF failed to handle a fault on mmInfo at va 0x114f5ae0: Out of memory. Terminating...
This issue is resolved in this release.
- PR 2731263: After updating to hardware version 18, you see a guest OS kernel panic in virtual machines
After updating to hardware version 18, some guest operating systems, such as CentOS 7 on AMD CPUs, might fail with a kernel panic when you boot the virtual machine. You see the kernel panic message when you open a web console for the virtual machine.
This issue is resolved in this release.
- PR 2731142: Packets with IPv6 Tunnel Encapsulation Limit option are lost in traffic between virtual machines
Some Linux kernels add the IPv6 Tunnel Encapsulation Limit option to IPv6 tunnel packets, as described in RFC 2473, paragraph 5.1. As a result, IPv6 tunnel packets are dropped in traffic between virtual machines because of the IPv6 extension header.
This issue is resolved in this release. The fix parses packets with the IPv6 Tunnel Encapsulation Limit option correctly.
- PR 2760267: In vmkernel logs, you see multiple warnings indicating a switch to preferred path XXX in STANDBY state for device XXX
When virtual media, such as a CD drive, is connected to your vSphere environment by using an Integrated Dell Remote Access Controller (iDRAC), and no media file is mapped to the drive, in the vmkernel log you might see multiple warning messages such as:
vmw_psp_fixed: psp_fixedSelectPathToActivateInt:439: Switching to preferred path vmhba32:C0:T0:L1 in STANDBY state for device mpx.vmhba32:C0:T0:L1.
WARNING: vmw_psp_fixed: psp_fixedSelectPathToActivateInt:464: Selected current STANDBY path vmhba32:C0:T0:L1 for device mpx.vmhba32:C0:T0:L1 to activate. This may lead to path thrashing.
This issue is resolved in this release.
- PR 2749688: vCenter Server might intermittently remove internal NSX-controlled distributed virtual ports from ESXi hosts
During 24-hour sync workflows of the vSphere Distributed Switch, vCenter Server might intermittently remove NSX-controlled distributed virtual ports, such as the Service Plane Forwarding (SPF) port, from ESXi hosts. As a result, vSphere vMotion operations of virtual machines might fail. You see an error such as Could not connect SPF port : Not found in the logs.
This issue is resolved in this release.
- PR 2763986: If you configure the scratch partition on the root directory of a VMFS volume, upgrade to ESXi 7.0.x fails and you might lose data on the VMFS volume
ESXi 7.0 introduced the VMFS-L based ESX-OSData partition, which takes on the role of the legacy scratch partition, locker partition for VMware Tools, and core dump destination. If you configure the scratch partition on the root directory of a VMFS volume during an upgrade to ESXi 7.0.x, the system tries to convert the VMFS files into VMFS-L format to match the OS-Data partition requirements. As a result, the OS-Data partition might overwrite the VMFS volume and cause data loss.
This issue is resolved in this release. The fix adds a check if the filesystem is VFAT before converting it into OS-DATA. For more information, see VMware knowledge base article 83647.
- PR 2765534: Communication Channel health status of ESXi hosts in NSX Data Center for vSphere clusters is down after upgrade to ESXi 7.0 Update 2a
Due to a log handling issue, after an upgrade to ESXi 7.0 Update 2a, when you check the Communication Channel health status you see all ESXi hosts with status down, because NSX Manager fails to connect to the Firewall Agent.
This issue is resolved in this release. If you already face the issue, stop the vsfwd service and retry the upgrade operation.
- PR 2766127: If you upgrade ESXi hosts by using vSphere Quick Boot in an FCoE environment, the physical server might fail with a purple diagnostic screen
If an ESXi host of version 7.0 Update 2 is installed on an FCoE LUN and uses UEFI boot mode, when you try to upgrade the host by using vSphere Quick Boot, the physical server might fail with a purple diagnostic screen because of a memory error.
This issue is resolved in this release.
- PR 2777003: If you use a USB as a boot device for ESXi 7.0 Update 2a, ESXi hosts might become unresponsive, and you see host not-responding and boot bank is not found alerts
USB devices have a small queue depth and, due to a race condition in the ESXi storage stack, some I/O operations might not get to the device. Such I/Os queue in the ESXi storage stack and ultimately time out. As a result, ESXi hosts become unresponsive.
In the vSphere Client, you see alerts such as Alert: /bootbank not to be found at path '/bootbank' and Host not-responding.
In vmkernel logs, you see errors such as:
2021-04-12T04:47:44.940Z cpu0:2097441)ScsiPath: 8058: Cancelled Cmd(0x45b92ea3fd40) 0xa0, cmdId.initiator=0x4538c859b8f8 CmdSN 0x0 from world 0 to path "vmhba32:C0:T0:L0". Cmd count Active:0 Queued:1.
2021-04-12T04:48:50.527Z cpu2:2097440)ScsiDeviceIO: 4315: Cmd(0x45b92ea76d40) 0x28, cmdId.initiator=0x4305f74cc780 CmdSN 0x1279 from world 2099370 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x5 D:0x0 P:0x0 Cancelled from path layer. Cmd count Active:1
2021-04-12T04:48:50.527Z cpu2:2097440)Queued:4
This issue is resolved in this release.
- PR 2778793: Boot time of ESXi hosts slows down after upgrade to ESXi 7.0 Update 2a
You might see a slowdown of up to 60 minutes in the boot time of ESXi hosts after an upgrade to ESXi 7.0 Update 2a.
The issue does not affect updates to ESXi 7.0 Update 2a from earlier versions of ESXi 7.0.x.
The issue occurs only when you use a vSphere Lifecycle Manager image or baseline, or the esxcli software profile update command to perform the upgrade operation. If you use an ISO image or a scripted installation, you do not encounter the problem.
The issue is most likely to affect iSCSI configurations, but is related to the ESXi algorithm for boot bank detection, not to slow external storage targets.
This issue is resolved in this release. The fix increases the tolerance limits when a boot.cfg file or storage is not promptly provided.
- PR 2755807: vSphere vMotion operations on an ESXi host fail with an out of memory error
Due to a memory leak in the portset heap, vSphere vMotion operations on an ESXi host might fail with an out of memory error. In the vSphere Client, you see an error such as:
Failed to reserve port: DVSwitch 50 18 34 d1 76 81 ec 9e-03 62 e2 d4 6f 23 30 ba port 1 39.
Failed with error status: Out of memory.
This issue is resolved in this release.
- PR 2760081: vSAN Build Recommendation Engine Health warning after upgrade to vSAN 7.0 Update 2
If vCenter Server cannot resolve vcsa.vmware.com to an IP address, the vSAN release catalog cannot be uploaded. The following vSAN health check displays a warning: vSAN Build Recommendation Engine Health.
You might see an error message similar to the following:
2021-04-12T11:40:59.435Z ERROR vsan-mgmt[32221] [VsanHttpRequestWrapper::urlopen opID=noOpId] Exception while sending request : Cannot resolve localhost or Internet websites.
2021-04-12T11:40:59.435Z WARNING vsan-mgmt[32221] [VsanVumCoreUtil::CheckVumServiceStatus opID=noOpId] Failed to connect VUM service: Cannot resolve localhost or Internet websites.
2021-04-12T11:40:59.435Z INFO vsan-mgmt[32221] [VsanVumIntegration::VumStatusCallback opID=noOpId] VUM is not installed or not responsive
2021-04-12T11:40:59.435Z INFO vsan-mgmt[32221] [VsanVumSystemUtil::HasConfigIssue opID=noOpId] Config issue health.test.vum.vumDisabled exists on entity None
This issue is resolved in this release.
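A quick way to confirm that name resolution is the cause is to resolve vcsa.vmware.com from the vCenter Server Appliance shell, assuming a lookup utility such as nslookup is available in your appliance build:
nslookup vcsa.vmware.com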
- PR 2731644: You see higher than usual scrub activity on vSAN
An integer overflow bug might cause vSAN DOM to issue scrubs more frequently than the configured setting.
This issue is resolved in this release.
- PR 2749815: ESXi hosts fail with a purple diagnostic screen when a resync is aborted with active guest writes
When a resync runs in parallel with guest writes on objects owned by an ESXi host, the host might fail with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2749962: You see a syslog error Failed to write header on rotate
When the cmmdstimemachineDump.log file reaches a max limit of 10 MB, rotation happens and a new cmmdstimemachineDump file is created. However, during the rotation, in the vmsyslogd.err logs you might see an error such as:
2021-04-08T03:15:53.719Z vmsyslog.loggers.file : ERROR ] Failed to write header on rotate. Exception: [Errno 2] No such file or directory: '/var/run/cmmdstimemachineDumpHeader.txt'
This issue is resolved in this release.
- PR 2738238: Object Format Change task takes long to respond
In some cases, the cluster monitoring, membership, and directory service (CMMDS) does not provide correct input to the Object Format Change task and the task takes long to respond.
This issue is resolved in this release.
- PR 2799408: Upgrading a stretched cluster to version ESXi 7.0 Update 2 or later might cause multiple ESXi hosts to fail with a purple diagnostic screen
In rare scenarios, if the witness host is replaced during the upgrade process and a Disk Format Conversion task runs shortly after the replacement, multiple ESXi hosts on a stretched cluster might fail with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2949777: If the LargeBAR setting of a vmxnet3 device is enabled on ESX 7.0 Update 3 and later, virtual machines might lose connectivity
The LargeBAR setting that extends the Base Address Register (BAR) on a vmxnet3 device supports the Uniform Passthrough (UPT). However, UPT is not supported on ESX 7.0 Update 3 and later, and if the vmxnet3 driver is downgraded to a version earlier than 7.0 and the LargeBAR setting is enabled, virtual machines might lose connectivity.
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2722263, 2812906 |
CVE numbers | N/A |
Updates the loadesx and esx-update VIBs to resolve the following issues:
- PR 2722263: You cannot remove Components in a customized ESXi image after upgrade to ESXi 7.0 Update 2a
If you create a custom 7.0 Update 2a image by using the ESXi Packaging Kit, override some of the Components in the base image, for example Intel-ixgben:1.8.9.0-1OEM.700.1.0.15525992, and then upgrade your ESXi 7.0.x hosts, you cannot remove the custom Component after the upgrade. Also, some VIBs from the custom Components might be missing.
This issue is resolved in this release.
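To see which Components and VIBs are actually present on a host after such an upgrade, you can list them with ESXCLI. This is a verification sketch only; it does not by itself restore a missing custom Component:
esxcli software component list
esxcli software vib list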
- PR 2812906: Metadata of VMFS datastores might get corrupted in certain ESXi deployment scenarios due to duplicate System Universally Unique Identifiers (UUIDs)
Certain deployment scenarios, such as cloning ESXi boot banks, might lead to partially or fully identical UUIDs on multiple ESXi hosts. Because UUIDs are used for VMFS heartbeat and journal operations, duplicate UUIDs can cause multiple ESXi hosts to attempt to access metadata regions on the same VMFS datastore after installation or upgrade operations. As a result, you might see metadata corruption in some VMFS datastores.
In the vmkernel logs, you see messages such as:
vmkernel 608: VMFS volume DCS_HCFA_TEDS_Regulated_10032/5ba2905b-ac11166d-b145-0025b5920a02 on naa.60060e8012a34f005040a34f00000d64:1 has been detected corrupted.
2021-06-08T15:58:22.126Z cpu1:2097600)FS3: 319: FS3RCMeta 2245 200 21 105 0
2021-06-08T15:58:22.126Z cpu1:2097600)FS3: 326: 0 0 0 0 0 0 0 0 0 0 0 7 0 254 1 4 0 248 3 96 0 0 0 0 0
2021-06-08T15:58:22.126Z cpu1:2097600)FS3: 332: 0 0 0
2021-06-08T15:58:22.126Z cpu1:2097600)FS3: 338: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2021-06-08T15:58:22.126Z cpu1:2097600)FS3: 346: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2021-06-08T15:58:22.126Z cpu1:2097600)FS3: 346: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Other impacted deployment scenarios are reuse of hardware profiles, for example Blade systems that use the same MAC to deploy a new instance, and moving boot disks between servers.
Workaround: None. For more information, see VMware knowledge base articles 84280 and 84349.
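To check whether two hosts report the same system UUID, you can compare the value returned on each host. Treat this as a detection sketch and follow the referenced knowledge base articles for the supported remediation:
esxcli system uuid get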
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the bnxtroce VIB.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the qedrntv VIB.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2750390 |
CVE numbers | N/A |
Updates the i40enu VIB to resolve the following issue:
- PR 2750390: After an upgrade to ESXi 7.0 Update 2a, VIBs of the async i40en network drivers for ESXi are skipped or reverted to the VMware inbox driver i40enu
Starting with vSphere 7.0 Update 2, the inbox i40en driver was renamed to i40enu. As a result, if you attempt to install an i40en partner async driver, the i40en VIB is either skipped or reverted to the VMware i40enu inbox driver.
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2715138 |
CVE numbers | N/A |
Updates the lpfc VIB to resolve the following issue:
- PR 2715138: ESXi hosts might fail with purple diagnostic screen during installation of Dell EMC PowerPath plug-in
A rare Page Fault error might cause ESXi hosts to fail with a purple diagnostic screen during installation of the Dell EMC PowerPath plug-in.
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2753546 |
CVE numbers | N/A |
Updates the vmkusb VIB to resolve the following issues:
- PR 2753546: If you use a USB boot device, connection to the /bootbank partition might break or the VMFS-L LOCKER partition gets corrupted
If the USB host controller disconnects from your vSphere system and any resource on the device, such as dump files, is still in use by ESXi, the device path cannot be released by the time the device reconnects. As a result, ESXi provides a new path to the USB device, which breaks the connection to the /bootbank partition or corrupts the VMFS-L LOCKER partition.
This issue is resolved in this release. The fix enhances the USB host controller driver to tolerate command timeout failures to allow ESXi sufficient time to release USB device resources.
As a best practice, do not set the dump partition on a USB storage device. For more information, see VMware knowledge base articles 2077516 and 2149257.
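To review where diagnostic core dumps are currently directed, and to confirm that they do not target the USB boot device, you can query the configured dump targets with ESXCLI:
esxcli system coredump partition get
esxcli system coredump file list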
Patch Category | Security |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2738816, 2738857, 2735594 |
CVE numbers | N/A |
The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.
Updates the vdfs, esx-xserver, esx-base, esx-dvfilter-generic-fastpath, crx, native-misc-drivers, cpu-microcode, gc, vsanhealth, clusterstore, and vsan VIBs to resolve the following issues:
- Update to OpenSSH
OpenSSH is updated to version 8.5p1.
- Update to the OpenSSL library
The OpenSSL library is updated to version openssl-1.0.2y.
- Update to the Libarchive library
The Libarchive library is updated to version 3.5.1.
Patch Category | Security |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the loadesx and esx-update VIBs.
Patch Category | Security |
Patch Severity | Critical |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2761942 |
CVE numbers | N/A |
Updates the tools-light VIB to resolve the following issue:
The following VMware Tools 11.2.6 ISO images are bundled with ESXi:
- windows.iso: VMware Tools image for Windows 7 SP1 or Windows Server 2008 R2 SP1 or later
- linux.iso: VMware Tools 10.3.23 image for older versions of Linux OS with glibc 2.5 or later
The following VMware Tools 10.3.10 ISO images are available for download:
- solaris.iso: VMware Tools image for Solaris
- darwin.iso: VMware Tools image for Mac OS X versions 10.11 and later
Follow the procedures listed in the VMware Tools documentation to download VMware Tools for platforms not bundled with ESXi.
Patch Category | Security |
Patch Severity | Moderate |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the vmkusb VIB.
Profile Name | ESXi-7.0U2c-18426014-standard |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | August 24, 2021 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2717698, 2731446, 2515171, 2731306, 2751564, 2718934, 2755056, 2725886, 2753231, 2728910, 2741111, 2759343, 2721379, 2751277, 2708326, 2755977, 2737934, 2777001, 2760932, 2731263, 2731142, 2760267, 2749688, 2763986, 2765534, 2766127, 2777003, 2778793, 2755807, 2760081, 2731644, 2749815, 2749962, 2738238, 2799408, 2722263, 2812906, 2750390, 2715138, 2753546 |
Related CVE numbers | N/A |
This patch updates the following issues:
-
A rare race condition in the qedentv driver might cause an ESXi host to fail with a purple diagnostic screen. The issue occurs when an Rx complete interrupt arrives just after a General Services Interface (GSI) queue pair (QP) is destroyed, for example during a qedentv driver unload or a system shut down. In such a case, the qedentv driver might access an already freed QP address that leads to a PF exception. The issue might occur in ESXi hosts that are connected to a busy physical switch with heavy unsolicited GSI traffic. In the backtrace, you see messages such as:
cpu4:2107287)0x45389609bcb0:[0x42001d3e6f72]qedrntv_ll2_rx_cb@(qedrntv)#<None>+0x1be stack: 0x45b8f00a7740, 0x1e146d040, 0x432d65738d40, 0x0, 0x
2021-02-11T03:31:53.882Z cpu4:2107287)0x45389609bd50:[0x42001d421d2a]ecore_ll2_rxq_completion@(qedrntv)#<None>+0x2ab stack: 0x432bc20020ed, 0x4c1e74ef0, 0x432bc2002000,
2021-02-11T03:31:53.967Z cpu4:2107287)0x45389609bdf0:[0x42001d1296d0]ecore_int_sp_dpc@(qedentv)#<None>+0x331 stack: 0x0, 0x42001c3bfb6b, 0x76f1e5c0, 0x2000097, 0x14c2002
2021-02-11T03:31:54.039Z cpu4:2107287)0x45389609be60:[0x42001c0db867]IntrCookieBH@vmkernel#nover+0x17c stack: 0x45389609be80, 0x40992f09ba, 0x43007a436690, 0x43007a43669
2021-02-11T03:31:54.116Z cpu4:2107287)0x45389609bef0:[0x42001c0be6b0]BH_Check@vmkernel#nover+0x121 stack: 0x98ba, 0x33e72f6f6e20, 0x0, 0x8000000000000000, 0x430000000001
2021-02-11T03:31:54.187Z cpu4:2107287)0x45389609bf70:[0x42001c28370c]NetPollWorldCallback@vmkernel#nover+0x129 stack: 0x61, 0x42001d0e0000, 0x42001c283770, 0x0, 0x0
2021-02-11T03:31:54.256Z cpu4:2107287)0x45389609bfe0:[0x42001c380bad]CpuSched_StartWorld@vmkernel#nover+0x86 stack: 0x0, 0x42001c0c2b44, 0x0, 0x0, 0x0
2021-02-11T03:31:54.319Z cpu4:2107287)0x45389609c000:[0x42001c0c2b43]Debug_IsInitialized@vmkernel#nover+0xc stack: 0x0, 0x0, 0x0, 0x0, 0x0
2021-02-11T03:31:54.424Z cpu4:2107287)^[[45m^[[33;1mVMware ESXi 7.0.2 [Releasebuild-17435195 x86_64]^[[0m
#PF Exception 14 in world 2107287:vmnic7-pollW IP 0x42001d3e6f72 addr 0x1c
-
In rare cases, while the vSphere Replication appliance powers on, the hostd service might fail and cause temporary unresponsiveness of ESXi hosts to vCenter Server. The hostd service restarts automatically and connectivity restores.
-
When attempting to set a value to false for an advanced option parameter in a host profile, the user interface creates a non-empty string value. Values that are not empty are interpreted as true and the advanced option parameter receives a true value in the host profile.
-
ESXi hosts might become unresponsive to vCenter Server even when the hosts are accessible and running. The issue occurs in vendor images with customized drivers where ImageConfigManager consumes more RAM than allocated. As a result, repeated failures of ImageConfigManager cause ESXi hosts to disconnect from vCenter Server.
In the vmkernel logs, you see repeating errors such as: Admission failure in path: host/vim/vmvisor/hostd-tmp/sh.<pid1>:python.<pid2>:uw.<pid2>.
In the hostd logs, you see messages such as: Task Created : haTask-ha-host-vim.host.ImageConfigManager.fetchSoftwarePackages-<sequence>, followed by: ForkExec(/usr/bin/sh) <pid1>, where <pid1> identifies an ImageConfigManager process.
-
If you change the DiskMaxIOSize advanced config option to a lower value, I/Os with large block sizes might get incorrectly split and queue at the PSA path. As a result, ESXi host I/O operations might time out and fail.
-
In certain conditions, the power on or power off operations of virtual machines might take as long as 20 minutes. The issue occurs when the underlying storage of the VMs runs a background operation such as HBA rescan or APD events, and some locks are continuously held.
-
In rare cases, when the VPD page response size from the target ESXi host is different on different paths to the host, ESXi might write more bytes than the allocated length for a given path. As a result, the ESXi host fails with a purple diagnostic screen and a message such as:
Panic Message: @BlueScreen: PANIC bora/vmkernel/main/dlmalloc.c:4878 - Corruption in dlmalloc
-
A rare lock contention might cause virtual machine operations such as failover or migration to fail.
Locking failure messages look similar to:
vol 'VM_SYS_11', lock at 480321536: [Req mode 1] Checking liveness:
type 10c00003 offset 480321536 v 1940464, hb offset 3801088
or
Res3: 2496: Rank violation threshold reached: cid 0xc1d00002, resType 2, cnum 5 vol VM_SYS_11
-
In complicated workflows that result in creating multiple virtual machines, when you navigate to VMs > Virtual Machines, you might see huge numbers in the Provisioned Space column for some VMs. For example, 2.95 TB instead of the actual provisioned space of 25 GB.
-
While handling the SCSI command READ CAPACITY (10), ESXi might copy excess data from the response and corrupt the call stack. As a result, the ESXi host fails with a purple diagnostic screen.
-
In very rare cases, VMFS resource clusters that are included in a journal transaction might not be locked. As a result, during power on of virtual machines, multiple ESXi hosts might fail with a purple diagnostic screen due to exception 14 in the VMFS layer. A typical message and stack trace seen in the vmkernel log is as follows:
@BlueScreen: #PF Exception 14 in world 2097684:VSCSI Emulat IP 0x418014d06fca addr 0x0
PTEs:0x16a47a027;0x600faa8007;0x0;
2020-06-24T17:35:57.073Z cpu29:2097684)Code start: 0x418013c00000 VMK uptime: 0:01:01:20.555
2020-06-24T17:35:57.073Z cpu29:2097684)0x451a50a1baa0:[0x418014d06fca]Res6MemUnlockTxnRCList@esx#nover+0x176 stack: 0x1 2020-06-24T17:35:57.074Z cpu29:2097684)0x451a50a1bb10:[0x418014c7cdb6]J3_DeleteTransaction@esx#nover+0x33f stack: 0xbad0003
2020-06-24T17:35:57.074Z cpu29:2097684)0x451a50a1bb40:[0x418014c7db10]J3_AbortTransaction@esx#nover+0x105 stack: 0x0 2020-06-24T17:35:57.074Z cpu29:2097684)0x451a50a1bb80:[0x418014cbb752]Fil3_FinalizePunchFileHoleTxnVMFS6@esx#nover+0x16f stack: 0x430fe950e1f0
2020-06-24T17:35:57.075Z cpu29:2097684)0x451a50a1bbd0:[0x418014c7252b]Fil3UpdateBlocks@esx#nover+0x348 stack: 0x451a50a1bc78 2020-06-24T17:35:57.075Z cpu29:2097684)0x451a50a1bce0:[0x418014c731dc]Fil3_PunchFileHoleWithRetry@esx#nover+0x89 stack: 0x451a50a1bec8
2020-06-24T17:35:57.075Z cpu29:2097684)0x451a50a1bd90:[0x418014c739a5]Fil3_FileBlockUnmap@esx#nover+0x50e stack: 0x230eb5 2020-06-24T17:35:57.076Z cpu29:2097684)0x451a50a1be40:[0x418013c4c551]FSSVec_FileBlockUnmap@vmkernel#nover+0x6e stack: 0x230eb5
2020-06-24T17:35:57.076Z cpu29:2097684)0x451a50a1be90:[0x418013fb87b1]VSCSI_ExecFSSUnmap@vmkernel#nover+0x8e stack: 0x0 2020-06-24T17:35:57.077Z cpu29:2097684)0x451a50a1bf00:[0x418013fb71cb]VSCSIDoEmulHelperIO@vmkernel#nover+0x2c stack: 0x430145fbe070
2020-06-24T17:35:57.077Z cpu29:2097684)0x451a50a1bf30:[0x418013ceadfa]HelperQueueFunc@vmkernel#nover+0x157 stack: 0x430aa05c9618 2020-06-24T17:35:57.077Z cpu29:2097684)0x451a50a1bfe0:[0x418013f0eaa2]CpuSched_StartWorld@vmkernel#nover+0x77 stack: 0x0
2020-06-24T17:35:57.083Z cpu29:2097684)base fs=0x0 gs=0x418047400000 Kgs=0x0
-
In rare cases, some Intel CPUs might fail to forward #DB traps, and if a timer interrupt happens during the Windows system call, the virtual machine might triple fault.
-
If you restart ESXi management agents from DCUI, the sfcbd service does not start. You must manually start the SFCB service.
-
The hostd service logs the message BuildPowerSupplySensorList found 0 power supply sensors every 2 minutes. The issue occurs when you upgrade to or fresh install ESXi 7.0 Update 1 on an ESXi host.
-
In rare cases, the kernel memory allocator might return a NULL pointer that might not be correctly dereferenced. As a result, the ESXi host might fail with a purple diagnostic screen with an error such as #GP Exception 13 in world 2422594:vmm:overCom @ 0x42003091ede4. In the backtrace, you see errors similar to:
#1 Util_ZeroPageTable
#2 Util_ZeroPageTableMPN
#3 VmMemPfUnmapped
#4 VmMemPfInt
#5 VmMemPfGetMapping
#6 VmMemPf
#7 VmMemPfLockPageInt
#8 VMMVMKCall_Call
#9 VMKVMM_ArchEnterVMKernel
-
If an NVMe device is hot added and hot removed in a short interval, the NVMe driver might fail to initialize the NVMe controller due to a command timeout. As a result, the driver might access memory that is already freed in a cleanup process. In the backtrace, you see a message such as WARNING: NVMEDEV: NVMEInitializeController:4045: Failed to get controller identify data, status: Timeout.
Eventually, the ESXi host might fail with a purple diagnostic screen with an error similar to #PF Exception ... in world ...:vmkdevmgr.
-
When an ESXi host does not have free large file blocks (LFBs) to allocate, the host fulfills thick VMDK provisioning or the creation of a swap file with small file blocks (SFBs). However, in rare cases, the host might fail to allocate SFBs as well. As a result, thick VMDK provisioning or the creation of a swap file fails. When virtual machines try to power on, you see an error in the vmkernel logs such as:
vmkernel.1:2021-01-08T19:13:28.341Z cpu20:2965260)Fil3: 10269: Max no space retries (10) exceeded for caller Fil3_SetFileLength (status 'No space left on device')
-
When you use very large VMFS6 datastores, the process of allocating file blocks for thin VMDKs in a resource cluster might cause a CPU lockup. As a result, you see CPU lockup messages and in rare cases, ESXi hosts might fail. In the backtrace, you see messages such as:
2021-04-07T02:18:08.730Z cpu31:2345212)WARNING: Heartbeat: 849: PCPU 10 didn't have a heartbeat for 7 seconds; *may* be locked up.
The associated backtrace involving Res6AffMgr functions looks similar to:
2021-01-06T02:16:16.073Z cpu0:2265741)ALERT: NMI: 694: NMI IPI: RIPOFF(base):RBP:CS [0x121d74d(0x42001a000000):0x50d0:0xf48] (Src 0x1, CPU0)
2021-01-06T02:16:16.073Z cpu0:2265741)0x4538d261b320:[0x42001b21d74c]Res6AffMgrComputeSortIndices@esx#nover+0x369 stack: 0x43119125a000
2021-01-06T02:16:16.073Z cpu0:2265741)0x4538d261b3b0:[0x42001b224e46]Res6AffMgrGetCluster@esx#nover+0xb1f stack: 0xd 2021-01-06T02:16:16.073Z cpu0:2265741)0x4538d261b4b0:[0x42001b226491]Res6AffMgr_AllocResourcesInt@esx#nover+0x40a stack: 0x4311910f4890
2021-01-06T02:16:16.073Z cpu0:2265741)0x4538d261b680:[0x42001b2270e2]Res6AffMgr_AllocResources@esx#nover+0x1b stack: 0x0
-
During or after an upgrade to ESXi 7.0 Update 2, the ESX storage layer might not allocate sufficient memory resources for ESXi hosts with a large physical CPU count and many storage devices or paths connected to the hosts. As a result, such ESXi hosts might fail with a purple diagnostic screen.
-
Virtual machines with hardware version 18 and VMware Tools 11.2.0 or later might fail due to a virtual graphics device issue on the destination side during a vSphere vMotion or Storage vMotion.
In the vmkernel.log you see a line such as:
PF failed to handle a fault on mmInfo at va 0x114f5ae0: Out of memory. Terminating...
-
After updating to hardware version 18, some guest operating systems, such as CentOS 7 on AMD CPUs, might fail with a kernel panic when you boot the virtual machine. You see the kernel panic message when you open a web console for the virtual machine.
-
Some Linux kernels add the IPv6 Tunnel Encapsulation Limit option to IPv6 tunnel packets, as described in RFC 2473, paragraph 5.1. As a result, IPv6 tunnel packets are dropped in traffic between virtual machines because of the IPv6 extension header.
-
When virtual media, such as a CD drive, is connected to your vSphere environment by using an Integrated Dell Remote Access Controller (iDRAC), and no media file is mapped to the drive, in the vmkernel log you might see multiple warning messages such as:
vmw_psp_fixed: psp_fixedSelectPathToActivateInt:439: Switching to preferred path vmhba32:C0:T0:L1 in STANDBY state for device mpx.vmhba32:C0:T0:L1.
WARNING: vmw_psp_fixed: psp_fixedSelectPathToActivateInt:464: Selected current STANDBY path vmhba32:C0:T0:L1 for device mpx.vmhba32:C0:T0:L1 to activate. This may lead to path thrashing.
-
During 24-hour sync workflows of the vSphere Distributed Switch, vCenter Server might intermittently remove NSX-controlled distributed virtual ports, such as the Service Plane Forwarding (SPF) port, from ESXi hosts. As a result, vSphere vMotion operations of virtual machines might fail. You see an error such as Could not connect SPF port : Not found in the logs.
-
ESXi 7.0 introduced the VMFS-L based ESX-OSData partition, which takes on the role of the legacy scratch partition, locker partition for VMware Tools, and core dump destination. If you configure the scratch partition on the root directory of a VMFS volume during an upgrade to ESXi 7.0.x, the system tries to convert the VMFS files into VMFS-L format to match the OS-Data partition requirements. As a result, the OS-Data partition might overwrite the VMFS volume and cause data loss.
-
Due to a log handling issue, after an upgrade to ESXi 7.0 Update 2a, when you check the Communication Channel health status you see all ESXi hosts with status down, because NSX Manager fails to connect to the Firewall Agent.
-
If an ESXi host of version 7.0 Update 2 is installed on an FCoE LUN and uses UEFI boot mode, when you try to upgrade the host by using vSphere Quick Boot, the physical server might fail with a purple diagnostic screen because of a memory error.
-
USB devices have a small queue depth and, due to a race condition in the ESXi storage stack, some I/O operations might not get to the device. Such I/Os queue in the ESXi storage stack and ultimately time out. As a result, ESXi hosts become unresponsive.
In the vSphere Client, you see alerts such as Alert: /bootbank not to be found at path '/bootbank' and Host not-responding.
In vmkernel logs, you see errors such as:
2021-04-12T04:47:44.940Z cpu0:2097441)ScsiPath: 8058: Cancelled Cmd(0x45b92ea3fd40) 0xa0, cmdId.initiator=0x4538c859b8f8 CmdSN 0x0 from world 0 to path "vmhba32:C0:T0:L0". Cmd count Active:0 Queued:1.
2021-04-12T04:48:50.527Z cpu2:2097440)ScsiDeviceIO: 4315: Cmd(0x45b92ea76d40) 0x28, cmdId.initiator=0x4305f74cc780 CmdSN 0x1279 from world 2099370 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x5 D:0x0 P:0x0 Cancelled from path layer. Cmd count Active:1
2021-04-12T04:48:50.527Z cpu2:2097440)Queued:4
-
You might see a slowdown of up to 60 minutes in the boot time of ESXi hosts after an upgrade to ESXi 7.0 Update 2a.
The issue does not affect updates to ESXi 7.0 Update 2a from earlier versions of ESXi 7.0.x.
The issue occurs only when you use a vSphere Lifecycle Manager image or baseline, or the esxcli software profile update command to perform the upgrade operation. If you use an ISO image or a scripted installation, you do not encounter the problem.
The issue is most likely to affect iSCSI configurations, but is related to the ESXi algorithm for boot bank detection, not to slow external storage targets.
-
Due to a memory leak in the portset heap, vSphere vMotion operations on an ESXi host might fail with an out of memory error. In the vSphere Client, you see an error such as:
Failed to reserve port: DVSwitch 50 18 34 d1 76 81 ec 9e-03 62 e2 d4 6f 23 30 ba port 1 39.
Failed with error status: Out of memory.
-
If vCenter Server cannot resolve vcsa.vmware.com to an IP address, the vSAN release catalog cannot be uploaded. The following vSAN health check displays a warning: vSAN Build Recommendation Engine Health.
You might see an error message similar to the following:
2021-04-12T11:40:59.435Z ERROR vsan-mgmt[32221] [VsanHttpRequestWrapper::urlopen opID=noOpId] Exception while sending request : Cannot resolve localhost or Internet websites.
2021-04-12T11:40:59.435Z WARNING vsan-mgmt[32221] [VsanVumCoreUtil::CheckVumServiceStatus opID=noOpId] Failed to connect VUM service: Cannot resolve localhost or Internet websites.
2021-04-12T11:40:59.435Z INFO vsan-mgmt[32221] [VsanVumIntegration::VumStatusCallback opID=noOpId] VUM is not installed or not responsive
2021-04-12T11:40:59.435Z INFO vsan-mgmt[32221] [VsanVumSystemUtil::HasConfigIssue opID=noOpId] Config issue health.test.vum.vumDisabled exists on entity None -
An integer overflow bug might cause vSAN DOM to issue scrubs more frequently than the configured setting.
-
When a resync runs in parallel with guest writes on objects owned by an ESXi host, the host might fail with a purple diagnostic screen.
-
When the cmmdstimemachineDump.log file reaches the maximum size of 10 MB, it is rotated and a new cmmdstimemachineDump file is created. However, during the rotation, in the vmsyslogd.err logs you might see an error such as:
2021-04-08T03:15:53.719Z vmsyslog.loggers.file : ERROR ] Failed to write header on rotate. Exception: [Errno 2] No such file or directory: '/var/run/cmmdstimemachineDumpHeader.txt'
-
In some cases, the cluster monitoring, membership, and directory service (CMMDS) does not provide correct input to the Object Format Change task and the task takes a long time to complete.
-
In rare scenarios, if the witness host is replaced during the upgrade process and a Disk Format Conversion task runs shortly after the replacement, multiple ESXi hosts on a stretched cluster might fail with a purple diagnostic screen.
-
If you create a custom 7.0 Update 2a image by using the ESXi Packaging Kit, override some of the Components in the base image, for example
Intel-ixgben:1.8.9.0-1OEM.700.1.0.15525992
, and then upgrade your ESXi 7.0.x hosts, you cannot remove the custom Component after the upgrade. Also, some VIBs from the custom Components might be missing. -
When you clone an ESXi boot device, you create instances with identical UUIDs. Because UUIDs are used for VMFS Heartbeat and Journal operations, duplicate UUIDs cause multiple ESXi hosts to attempt access to metadata regions on the same VMFS datastore after installation or upgrade operations. As a result, you might see massive metadata corruption in VMFS datastores on multiple ESXi hosts.
In the vmkernel logs, you see messages such as:
vmkernel 608: VMFS volume DCS_HCFA_TEDS_Regulated_10032/5ba2905b-ac11166d-b145-0025b5920a02 on naa.60060e8012a34f005040a34f00000d64:1 has been detected corrupted.
2021-06-08T15:58:22.126Z cpu1:2097600)FS3: 319: FS3RCMeta 2245 200 21 105 0
2021-06-08T15:58:22.126Z cpu1:2097600)FS3: 326: 0 0 0 0 0 0 0 0 0 0 0 7 0 254 1 4 0 248 3 96 0 0 0 0 0
2021-06-08T15:58:22.126Z cpu1:2097600)FS3: 332: 0 0 0
2021-06-08T15:58:22.126Z cpu1:2097600)FS3: 338: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2021-06-08T15:58:22.126Z cpu1:2097600)FS3: 346: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2021-06-08T15:58:22.126Z cpu1:2097600)FS3: 346: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -
Starting with vSphere 7.0 Update 2, the inbox i40en driver was renamed to i40enu. As a result, if you attempt to install an i40en partner async driver, the i40en VIB is either skipped or reverted to the VMware i40enu inbox driver.
-
A rare
Page Fault
error might cause ESXi hosts to fail with a purple diagnostic screen during installation of the Dell EMC PowerPath plug-in. -
If the USB host controller disconnects from your vSphere system and any resource on the device, such as dump files, is still in use by ESXi, the device path cannot be released by the time the device reconnects. As a result, ESXi provides a new path to the USB device, which breaks connection to the
/bootbank
partition or corrupts the VMFS-L LOCKER
partition.
-
Profile Name | ESXi-7.0U2c-18426014-no-tools |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | August 24, 2021 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2717698, 2731446, 2515171, 731306, 2751564, 2718934, 2755056, 2725886, 2725886, 2753231, 2728910, 2741111, 2759343, 2721379, 2751277, 2708326, 2755977, 2737934, 2777001, 2760932, 2731263, 2731142, 2760267, 2749688, 2763986, 2765534, 2766127, 2777003, 2778793, 2755807, 2760081, 2731644, 2749815, 2749962, 2738238, 2799408, 2722263, 2812906, 2750390, 2715138, 2753546 |
Related CVE numbers | N/A |
This patch updates the following issues:
-
-
A rare race condition in the qedentv driver might cause an ESXi host to fail with a purple diagnostic screen. The issue occurs when an Rx complete interrupt arrives just after a General Services Interface (GSI) queue pair (QP) is destroyed, for example during a qedentv driver unload or a system shut down. In such a case, the qedentv driver might access an already freed QP address that leads to a PF exception. The issue might occur in ESXi hosts that are connected to a busy physical switch with heavy unsolicited GSI traffic. In the backtrace, you see messages such as:
cpu4:2107287)0x45389609bcb0:[0x42001d3e6f72]qedrntv_ll2_rx_cb@(qedrntv)#<None>+0x1be stack: 0x45b8f00a7740, 0x1e146d040, 0x432d65738d40, 0x0, 0x
2021-02-11T03:31:53.882Z cpu4:2107287)0x45389609bd50:[0x42001d421d2a]ecore_ll2_rxq_completion@(qedrntv)#<None>+0x2ab stack: 0x432bc20020ed, 0x4c1e74ef0, 0x432bc2002000,
2021-02-11T03:31:53.967Z cpu4:2107287)0x45389609bdf0:[0x42001d1296d0]ecore_int_sp_dpc@(qedentv)#<None>+0x331 stack: 0x0, 0x42001c3bfb6b, 0x76f1e5c0, 0x2000097, 0x14c2002
2021-02-11T03:31:54.039Z cpu4:2107287)0x45389609be60:[0x42001c0db867]IntrCookieBH@vmkernel#nover+0x17c stack: 0x45389609be80, 0x40992f09ba, 0x43007a436690, 0x43007a43669
2021-02-11T03:31:54.116Z cpu4:2107287)0x45389609bef0:[0x42001c0be6b0]BH_Check@vmkernel#nover+0x121 stack: 0x98ba, 0x33e72f6f6e20, 0x0, 0x8000000000000000, 0x430000000001
2021-02-11T03:31:54.187Z cpu4:2107287)0x45389609bf70:[0x42001c28370c]NetPollWorldCallback@vmkernel#nover+0x129 stack: 0x61, 0x42001d0e0000, 0x42001c283770, 0x0, 0x0
2021-02-11T03:31:54.256Z cpu4:2107287)0x45389609bfe0:[0x42001c380bad]CpuSched_StartWorld@vmkernel#nover+0x86 stack: 0x0, 0x42001c0c2b44, 0x0, 0x0, 0x0
2021-02-11T03:31:54.319Z cpu4:2107287)0x45389609c000:[0x42001c0c2b43]Debug_IsInitialized@vmkernel#nover+0xc stack: 0x0, 0x0, 0x0, 0x0, 0x0
2021-02-11T03:31:54.424Z cpu4:2107287)^[[45m^[[33;1mVMware ESXi 7.0.2 [Releasebuild-17435195 x86_64]^[[0m
#PF Exception 14 in world 2107287:vmnic7-pollW IP 0x42001d3e6f72 addr 0x1c -
In rare cases, while the vSphere Replication appliance powers on, the hostd service might fail and cause temporary unresponsiveness of ESXi hosts to vCenter Server. The hostd service restarts automatically and connectivity is restored.
-
When attempting to set a value to
false
for an advanced option parameter in a host profile, the user interface creates a non-empty string value. Values that are not empty are interpreted as true
and the advanced option parameter receives a true
value in the host profile. (A sketch of verifying such an option directly on a host follows below.)
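As a point of reference, boolean-style advanced options on an individual ESXi host are typically exposed through esxcli as integer values, so you can confirm what the host actually received after the profile is applied. This is only a hedged illustration; /UserVars/SuppressShellWarning is an arbitrary example option, not necessarily the parameter affected in your profile.
# Show the current value of an example advanced option
esxcli system settings advanced list -o /UserVars/SuppressShellWarning
# Set it explicitly to 0 (false) or 1 (true) instead of relying on string values
esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 0
-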
ESXi hosts might become unresponsive to vCenter Server even when the hosts are accessible and running. The issue occurs in vendor images with customized drivers where
ImageConfigManager
consumes more RAM than allocated. As a result, repeated failures of ImageConfigManager
cause ESXi hosts to disconnect from vCenter Server.
In the vmkernel logs, you see repeating errors such as:
Admission failure in path: host/vim/vmvisor/hostd-tmp/sh.<pid1>:python.<pid2>:uw.<pid2>
.
In the hostd logs, you see messages such as: Task Created : haTask-ha-host-vim.host.ImageConfigManager.fetchSoftwarePackages-<sequence>
, followed by: ForkExec(/usr/bin/sh) <pid1>
, where <pid1>
identifies an ImageConfigManager
process. -
If you change the
DiskMaxIOSize
advanced configuration option to a lower value, I/Os with large block sizes might be incorrectly split and queued at the PSA path. As a result, I/O operations on ESXi hosts might time out and fail. (A sketch of inspecting and restoring the option follows below.)
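As a hedged sketch of checking and restoring this setting from the ESXi shell; 32767 KB is commonly the default, but confirm the default value reported by the list command in your own environment:
# Show the current, default, minimum, and maximum values for DiskMaxIOSize
esxcli system settings advanced list -o /Disk/DiskMaxIOSize
# Restore a larger maximum I/O size, in KB, after confirming the default above
esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 32767
-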
In certain conditions, the power on or power off operations of virtual machines might take as long as 20 minutes. The issue occurs when the underlying storage of the VMs runs a background operation such as HBA rescan or APD events, and some locks are continuously held.
-
In rare cases, when the VPD page response size from the target ESXi host is different on different paths to the host, ESXi might write more bytes than the allocated length for a given path. As a result, the ESXi host fails with a purple diagnostic screen and a message such as:
Panic Message: @BlueScreen: PANIC bora/vmkernel/main/dlmalloc.c:4878 - Corruption in dlmalloc
-
A rare lock contention might cause virtual machine operations such as failover or migration to fail.
Locking failure messages look similar to:
vol 'VM_SYS_11', lock at 480321536: [Req mode 1] Checking liveness:
type 10c00003 offset 480321536 v 1940464, hb offset 3801088
or
Res3: 2496: Rank violation threshold reached: cid 0xc1d00002, resType 2, cnum 5 vol VM_SYS_11
-
In complex workflows that create multiple virtual machines, when you navigate to VMs > Virtual Machines, you might see unexpectedly large values in the Provisioned Space column for some VMs, for example, 2.95 TB instead of the actual provisioned space of 25 GB.
-
While handling the SCSI command
READ CAPACITY (10)
, ESXi might copy excess data from the response and corrupt the call stack. As a result, the ESXi host fails with a purple diagnostic screen. -
In very rare cases, VMFS resource clusters that are included in a journal transaction might not be locked. As a result, during power on of virtual machines, multiple ESXi hosts might fail with a purple diagnostic screen due to exception 14 in the VMFS layer. A typical message and stack trace seen in the vmkernel log is as follows:
@BlueScreen: #PF Exception 14 in world 2097684:VSCSI Emulat IP 0x418014d06fca addr 0x0
PTEs:0x16a47a027;0x600faa8007;0x0;
2020-06-24T17:35:57.073Z cpu29:2097684)Code start: 0x418013c00000 VMK uptime: 0:01:01:20.555
2020-06-24T17:35:57.073Z cpu29:2097684)0x451a50a1baa0:[0x418014d06fca]Res6MemUnlockTxnRCList@esx#nover+0x176 stack: 0x1 2020-06-24T17:35:57.074Z cpu29:2097684)0x451a50a1bb10:[0x418014c7cdb6]J3_DeleteTransaction@esx#nover+0x33f stack: 0xbad0003
2020-06-24T17:35:57.074Z cpu29:2097684)0x451a50a1bb40:[0x418014c7db10]J3_AbortTransaction@esx#nover+0x105 stack: 0x0 2020-06-24T17:35:57.074Z cpu29:2097684)0x451a50a1bb80:[0x418014cbb752]Fil3_FinalizePunchFileHoleTxnVMFS6@esx#nover+0x16f stack: 0x430fe950e1f0
2020-06-24T17:35:57.075Z cpu29:2097684)0x451a50a1bbd0:[0x418014c7252b]Fil3UpdateBlocks@esx#nover+0x348 stack: 0x451a50a1bc78 2020-06-24T17:35:57.075Z cpu29:2097684)0x451a50a1bce0:[0x418014c731dc]Fil3_PunchFileHoleWithRetry@esx#nover+0x89 stack: 0x451a50a1bec8
2020-06-24T17:35:57.075Z cpu29:2097684)0x451a50a1bd90:[0x418014c739a5]Fil3_FileBlockUnmap@esx#nover+0x50e stack: 0x230eb5 2020-06-24T17:35:57.076Z cpu29:2097684)0x451a50a1be40:[0x418013c4c551]FSSVec_FileBlockUnmap@vmkernel#nover+0x6e stack: 0x230eb5
2020-06-24T17:35:57.076Z cpu29:2097684)0x451a50a1be90:[0x418013fb87b1]VSCSI_ExecFSSUnmap@vmkernel#nover+0x8e stack: 0x0 2020-06-24T17:35:57.077Z cpu29:2097684)0x451a50a1bf00:[0x418013fb71cb]VSCSIDoEmulHelperIO@vmkernel#nover+0x2c stack: 0x430145fbe070
2020-06-24T17:35:57.077Z cpu29:2097684)0x451a50a1bf30:[0x418013ceadfa]HelperQueueFunc@vmkernel#nover+0x157 stack: 0x430aa05c9618 2020-06-24T17:35:57.077Z cpu29:2097684)0x451a50a1bfe0:[0x418013f0eaa2]CpuSched_StartWorld@vmkernel#nover+0x77 stack: 0x0
2020-06-24T17:35:57.083Z cpu29:2097684)base fs=0x0 gs=0x418047400000 Kgs=0x0 -
In rare cases, some Intel CPUs might fail to forward #DB traps, and if a timer interrupt happens during a Windows system call, the virtual machine might triple fault.
-
If you restart the ESXi management agents from the DCUI, the sfcbd service does not start. You must start the sfcbd service manually, as in the sketch below.
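A minimal sketch of starting the service manually from the ESXi shell, assuming the WBEM (CIM) service is meant to run on the host:
# Check whether the WBEM service is enabled
esxcli system wbem get
# Enable it if needed, then start the sfcbd watchdog and verify its status
esxcli system wbem set --enable true
/etc/init.d/sfcbd-watchdog start
/etc/init.d/sfcbd-watchdog status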
-
The hostd service logs the message
BuildPowerSupplySensorList found 0 power supply sensors
every 2 minutes. The issue occurs when you upgrade to ESXi 7.0 Update 1 or perform a fresh installation of it on an ESXi host. -
In rare cases, the kernel memory allocator might return a
NULL
pointer that might not be correctly dereferenced. As a result, the ESXi host might fail with a purple diagnostic screen with an error such as #GP Exception 13 in world 2422594:vmm:overCom @ 0x42003091ede4
. In the backtrace, you see errors similar to:
#1 Util_ZeroPageTable
#2 Util_ZeroPageTableMPN
#3 VmMemPfUnmapped
#4 VmMemPfInt
#5 VmMemPfGetMapping
#6 VmMemPf
#7 VmMemPfLockPageInt
#8 VMMVMKCall_Call
#9 VMKVMM_ArchEnterVMKernel -
If an NVMe device is hot added and hot removed in a short interval, the NVMe driver might fail to initialize the NVMe controller due to a command timeout. As a result, the driver might access memory that is already freed in a cleanup process. In the backtrace, you see a message such as
WARNING: NVMEDEV: NVMEInitializeController:4045: Failed to get controller identify data, status: Timeout
.
Eventually, the ESXi host might fail with a purple diagnostic screen with an error similar to #PF Exception ... in world ...:vmkdevmgr
. -
When an ESXi host does not have free large file blocks (LFBs) to allocate, the host fulfills thick VMDK provisioning or the creation of a swap file with small file blocks (SFBs). However, in rare cases, the host might fail to allocate SFBs as well. As a result, thick VMDK provisioning or the creation of a swap file fails. When virtual machines try to power on, you see an error in the vmkernel logs such as:
vmkernel.1:2021-01-08T19:13:28.341Z cpu20:2965260)Fil3: 10269: Max no space retries (10) exceeded for caller Fil3_SetFileLength (status 'No space left on device')
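For context, the thick provisioning path that triggers this allocation can be exercised with vmkfstools; the datastore and disk names below are placeholders used only for illustration:
# Check free space on the datastore before provisioning (placeholder path)
df -h /vmfs/volumes/datastore1
# Create a 10 GB eager-zeroed thick VMDK, which requires up-front block allocation
vmkfstools -c 10G -d eagerzeroedthick /vmfs/volumes/datastore1/testvm/testdisk.vmdk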
-
When you use very large VMFS6 datastores, the process of allocating file blocks for thin VMDKs in a resource cluster might cause a CPU lockup. As a result, you see CPU lockup messages and in rare cases, ESXi hosts might fail. In the backtrace, you see messages such as:
2021-04-07T02:18:08.730Z cpu31:2345212)WARNING: Heartbeat: 849: PCPU 10 didn't have a heartbeat for 7 seconds; *may* be locked up.
The associated backtrace involving Res6AffMgr
functions looks similar to:
2021-01-06T02:16:16.073Z cpu0:2265741)ALERT: NMI: 694: NMI IPI: RIPOFF(base):RBP:CS [0x121d74d(0x42001a000000):0x50d0:0xf48] (Src 0x1, CPU0)
2021-01-06T02:16:16.073Z cpu0:2265741)0x4538d261b320:[0x42001b21d74c]Res6AffMgrComputeSortIndices@esx#nover+0x369 stack: 0x43119125a000
2021-01-06T02:16:16.073Z cpu0:2265741)0x4538d261b3b0:[0x42001b224e46]Res6AffMgrGetCluster@esx#nover+0xb1f stack: 0xd 2021-01-06T02:16:16.073Z cpu0:2265741)0x4538d261b4b0:[0x42001b226491]Res6AffMgr_AllocResourcesInt@esx#nover+0x40a stack: 0x4311910f4890
2021-01-06T02:16:16.073Z cpu0:2265741)0x4538d261b680:[0x42001b2270e2]Res6AffMgr_AllocResources@esx#nover+0x1b stack: 0x0 -
During or after an upgrade to ESXi 7.0 Update 2, the ESX storage layer might not allocate sufficient memory resources for ESXi hosts with a large physical CPU count and many storage devices or paths connected to the hosts. As a result, such ESXi hosts might fail with a purple diagnostic screen.
-
Virtual machines with hardware version 18 and VMware Tools 11.2.0 or later might fail due to a virtual graphics device issue on the destination side during a vSphere vMotion or Storage vMotion.
In the vmkernel.log
you see a line such as:
PF failed to handle a fault on mmInfo at va 0x114f5ae0: Out of memory. Terminating...
-
After updating to hardware version 18, some guest operating systems, such as CentOS 7, on AMD CPUs might fail with a kernel panic when you boot the virtual machine. You see the kernel panic message when you open a web console for the virtual machine.
-
Some Linux kernels add the IPv6 Tunnel Encapsulation Limit option to IPv6 tunnel packets, as described in RFC 2473, section 5.1. As a result, IPv6 tunnel packets are dropped in traffic between virtual machines because of the IPv6 extension header.
-
When virtual media, such as a CD drive, is connected to your vSphere environment by using an Integrated Dell Remote Access Controller (iDRAC), and no media file is mapped to the drive, in the vmkernel log you might see multiple warning messages such as:
vmw_psp_fixed: psp_fixedSelectPathToActivateInt:439: Switching to preferred path vmhba32:C0:T0:L1 in STANDBY state for device mpx.vmhba32:C0:T0:L1.
WARNING: vmw_psp_fixed: psp_fixedSelectPathToActivateInt:464: Selected current STANDBY path vmhba32:C0:T0:L1 for device mpx.vmhba32:C0:T0:L1 to activate. This may lead to path thrashing.
-
During 24-hour sync workflows of the vSphere Distributed Switch, vCenter Server might intermittently remove NSX-controlled distributed virtual ports, such as the Service Plane Forwarding (SPF) port, from ESXi hosts. As a result, vSphere vMotion operations of virtual machines might fail. You see an error such as
Could not connect SPF port : Not found
in the logs. -
ESXi 7.0 introduced the VMFS-L based ESX-OSData partition, which takes on the role of the legacy scratch partition, locker partition for VMware Tools, and core dump destination. If you configure the scratch partition on the root directory of a VMFS volume during an upgrade to ESXi 7.0.x, the system tries to convert the VMFS files into VMFS-L format to match the OS-Data partition requirements. As a result, the OS-Data partition might overwrite the VMFS volume and cause data loss.
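Before upgrading, you can confirm that the scratch location does not point to the root directory of a VMFS volume; a hedged example from the ESXi shell:
# View the configured and currently active scratch locations
vim-cmd hostsvc/advopt/view ScratchConfig.ConfiguredScratchLocation
vim-cmd hostsvc/advopt/view ScratchConfig.CurrentScratchLocation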
-
Due to a log handling issue, after an upgrade to ESXi 7.0 Update 2a, when you check the Communication Channel health status you see all ESXi hosts with status down, because NSX Manager fails to connect to the Firewall Agent.
-
If an ESXi host of version 7.0 Update 2 is installed on an FCoE LUN and uses UEFI boot mode, when you try to upgrade the host by using vSphere QuickBoot, the physical server might fail with a purple diagnostic screen because of a memory error.
-
USB devices have a small queue depth and due to a race condition in the ESXi storage stack, some I/O operations might not get to the device. Such I/Os queue in the ESXi storage stack and ultimately time out. As a result, ESXi hosts become unresponsive.
In the vSphere Client, you see alerts such as Alert: /bootbank not to be found at path '/bootbank'
and Host not-responding
.
In vmkernel logs, you see errors such as:
2021-04-12T04:47:44.940Z cpu0:2097441)ScsiPath: 8058: Cancelled Cmd(0x45b92ea3fd40) 0xa0, cmdId.initiator=0x4538c859b8f8 CmdSN 0x0 from world 0 to path "vmhba32:C0:T0:L0". Cmd count Active:0 Queued:1.
2021-04-12T04:48:50.527Z cpu2:2097440)ScsiDeviceIO: 4315: Cmd(0x45b92ea76d40) 0x28, cmdId.initiator=0x4305f74cc780 CmdSN 0x1279 from world 2099370 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x5 D:0x0 P:0x0 Cancelled from path layer. Cmd count Active:1
2021-04-12T04:48:50.527Z cpu2:2097440)Queued:4 -
You might see a slowdown of up to 60 minutes in the boot time of ESXi hosts after an upgrade to ESXi 7.0 Update 2a.
The issue does not affect updates to ESXi 7.0 Update 2a from earlier versions of ESXi 7.0.x.
The issue occurs only when you use a vSphere Lifecycle Manager image or baseline, or the esxcli software profile update
command to perform the upgrade operation. If you use an ISO image or a scripted installation, you do not encounter the problem.
The issue is most likely to affect iSCSI configurations, but it is related to the ESXi algorithm for boot bank detection, not to slow external storage targets. (A sketch of the affected esxcli upgrade command follows below.)
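For reference, a hedged example of the affected esxcli upgrade path; the depot location is a placeholder, and the profile name follows the naming used elsewhere in this document:
# List the image profiles available in an offline depot (placeholder datastore path)
esxcli software sources profile list -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U2c-18426014-depot.zip
# Upgrade the host by using a profile from the depot
esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U2c-18426014-depot.zip -p ESXi-7.0U2c-18426014-standard
-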
Due to a memory leak in the portset heap, vSphere vMotion operations on an ESXi host might fail with an out of memory error. In the vSphere Client, you see an error such as:
Failed to reserve port: DVSwitch 50 18 34 d1 76 81 ec 9e-03 62 e2 d4 6f 23 30 ba port 1 39.
Failed with error status: Out of memory. -
If vCenter Server cannot resolve
vcsa.vmware.com
to an IP address, the vSAN release catalog cannot be uploaded. The following vSAN health check displays a warning: vSAN Build Recommendation Engine Health.
You might see an error message similar to the following:
2021-04-12T11:40:59.435Z ERROR vsan-mgmt[32221] [VsanHttpRequestWrapper::urlopen opID=noOpId] Exception while sending request : Cannot resolve localhost or Internet websites.
2021-04-12T11:40:59.435Z WARNING vsan-mgmt[32221] [VsanVumCoreUtil::CheckVumServiceStatus opID=noOpId] Failed to connect VUM service: Cannot resolve localhost or Internet websites.
2021-04-12T11:40:59.435Z INFO vsan-mgmt[32221] [VsanVumIntegration::VumStatusCallback opID=noOpId] VUM is not installed or not responsive
2021-04-12T11:40:59.435Z INFO vsan-mgmt[32221] [VsanVumSystemUtil::HasConfigIssue opID=noOpId] Config issue health.test.vum.vumDisabled exists on entity None
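To confirm whether the appliance can resolve the host name at all, a simple check from the vCenter Server appliance shell is usually enough; available tooling can vary by appliance version:
# Verify that the name resolves and review the configured DNS servers
ping -c 1 vcsa.vmware.com
cat /etc/resolv.conf
-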
An integer overflow bug might cause vSAN DOM to issue scrubs more frequently than the configured setting.
-
When a resync runs in parallel with guest writes on objects owned by an ESXi host, the host might fail with a purple diagnostic screen.
-
When the cmmdstimemachineDump.log file reaches the maximum size of 10 MB, it is rotated and a new cmmdstimemachineDump file is created. However, during the rotation, in the vmsyslogd.err logs you might see an error such as:
2021-04-08T03:15:53.719Z vmsyslog.loggers.file : ERROR ] Failed to write header on rotate. Exception: [Errno 2] No such file or directory: '/var/run/cmmdstimemachineDumpHeader.txt'
-
In some cases, the cluster monitoring, membership, and directory service (CMMDS) does not provide correct input to the Object Format Change task and the task takes a long time to complete.
-
In rare scenarios, if the witness host is replaced during the upgrade process and a Disk Format Conversion task runs shortly after the replacement, multiple ESXi hosts on a stretched cluster might fail with a purple diagnostic screen.
-
If you create a custom 7.0 Update 2a image by using the ESXi Packaging Kit, override some of the Components in the base image, for example
Intel-ixgben:1.8.9.0-1OEM.700.1.0.15525992
, and then upgrade your ESXi 7.0.x hosts, you cannot remove the custom Component after the upgrade. Also, some VIBs from the custom Components might be missing. (A sketch of checking Component and VIB presence after the upgrade follows below.)
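After the upgrade, you can confirm which Components and VIBs are actually present on the host; the ixgben pattern below simply mirrors the example Component named above:
# List the Components installed on the host
esxcli software component list
# Check whether the VIBs from the custom Component are still present
esxcli software vib list | grep -i ixgben
-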
When you clone an ESXi boot device, you create instances with identical UUIDs. Because UUIDs are used for VMFS Heartbeat and Journal operations, duplicate UUIDs cause multiple ESXi hosts to attempt access to metadata regions on the same VMFS datastore after installation or upgrade operations. As a result, you might see massive metadata corruption in VMFS datastores on multiple ESXi hosts.
In the vmkernel logs, you see messages such as:
vmkernel 608: VMFS volume DCS_HCFA_TEDS_Regulated_10032/5ba2905b-ac11166d-b145-0025b5920a02 on naa.60060e8012a34f005040a34f00000d64:1 has been detected corrupted.
2021-06-08T15:58:22.126Z cpu1:2097600)FS3: 319: FS3RCMeta 2245 200 21 105 0
2021-06-08T15:58:22.126Z cpu1:2097600)FS3: 326: 0 0 0 0 0 0 0 0 0 0 0 7 0 254 1 4 0 248 3 96 0 0 0 0 0
2021-06-08T15:58:22.126Z cpu1:2097600)FS3: 332: 0 0 0
2021-06-08T15:58:22.126Z cpu1:2097600)FS3: 338: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2021-06-08T15:58:22.126Z cpu1:2097600)FS3: 346: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2021-06-08T15:58:22.126Z cpu1:2097600)FS3: 346: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -
Starting with vSphere 7.0 Update 2, the inbox i40en driver was renamed to i40enu. As a result, if you attempt to install an i40en partner async driver, the i40en VIB is either skipped or reverted to the VMware i40enu inbox driver.
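To see which of the two driver VIBs a host is currently running, a quick check from the ESXi shell is sufficient:
# Show whether the i40en or i40enu VIB is installed
esxcli software vib list | grep -i i40en
# Confirm which driver module the physical NICs are bound to
esxcli network nic list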
-
A rare
Page Fault
error might cause ESXi hosts to fail with a purple diagnostic screen during installation of the Dell EMC PowerPath plug-in. -
If the USB host controller disconnects from your vSphere system and any resource on the device, such as dump files, is still in use by ESXi, the device path cannot be released by the time the device reconnects. As a result, ESXi provides a new path to the USB device, which breaks connection to the
/bootbank
partition or corrupts the VMFS-L LOCKER
partition.
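When you suspect this condition, checking where the bootbank links point helps confirm whether the path to the USB device is still valid; a minimal diagnostic sketch:
# Verify that the bootbank and altbootbank symlinks resolve to valid volumes
ls -ld /bootbank /altbootbank
# List mounted filesystems, including the vfat bootbank volumes
esxcli storage filesystem list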
-
Profile Name | ESXi-7.0U2sc-18295176-standard |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | August 24, 2021 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2738816, 2738857, 2735594, 2761942 |
Related CVE numbers | N/A |
This patch updates the following issues:
-
-
OpenSSH is updated to version 8.5p1.
-
The OpenSSL library is updated to version openssl-1.0.2y.
-
The Libarchive library is updated to version 3.5.1.
-
The following VMware Tools 11.2.6 ISO images are bundled with ESXi:
windows.iso: VMware Tools image for Windows 7 SP1 or Windows Server 2008 R2 SP1 or later
linux.iso: VMware Tools 10.3.23 image for older versions of Linux OS with glibc 2.5 or later
The following VMware Tools 10.3.10 ISO images are available for download:
solaris.iso: VMware Tools image for Solaris
darwin.iso: VMware Tools image for Mac OS X versions 10.11 and later.
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
-
Profile Name | ESXi-7.0U2sc-18295176-no-tools |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | August 24, 2021 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2738816, 2738857, 2735594, 2761942 |
Related CVE numbers | N/A |
This patch updates the following issues:
-
-
OpenSSH is updated to version 8.5p1.
-
The OpenSSL library is updated to version openssl-1.0.2y.
-
The Libarchive library is updated to version 3.5.1.
-
The following VMware Tools 11.2.6 ISO images are bundled with ESXi:
windows.iso: VMware Tools image for Windows 7 SP1 or Windows Server 2008 R2 SP1 or later
linux.iso: VMware Tools 10.3.23 image for older versions of Linux OS with glibc 2.5 or later
The following VMware Tools 10.3.10 ISO images are available for download:
solaris.iso: VMware Tools image for Solaris
darwin.iso: VMware Tools image for Mac OS X versions 10.11 and later.
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
-
Name | ESXi |
Version | ESXi70U2c-18426014 |
Release Date | August 24, 2021 |
Category | Bugfix |
Affected Components |
|
PRs Fixed | 2717698, 2731446, 2515171, 731306, 2751564, 2718934, 2755056, 2725886, 2725886, 2753231, 2728910, 2741111, 2759343, 2721379, 2751277, 2708326, 2755977, 2737934, 2777001, 2760932, 2731263, 2731142, 2760267, 2749688, 2763986, 2765534, 2766127, 2777003, 2778793, 2755807, 2760081, 2731644, 2749815, 2749962, 2738238, 2799408, 2722263, 2812906, 2750390, 2715138, 2753546 |
Related CVE numbers | N/A |
Name | ESXi |
Version | ESXi70U2sc-18295176 |
Release Date | August 24, 2021 |
Category | Security |
Affected Components |
|
PRs Fixed | 2738816, 2738857, 2735594, 2761942 |
Related CVE numbers | N/A |
Known Issues
The known issues are grouped as follows.
Miscellaneous Issues
- The sensord daemon fails to report ESXi host hardware status
A logic error in the IPMI SDR validation might cause
sensord
to fail to identify a source for power supply information. As a result, when you run the command vsish -e get /power/hostStats
, you might not see any output.
Workaround: None
- ESXi hosts might lose connectivity after brcmfcoe driver upgrade on Hitachi storage arrays
After an upgrade of the brcmfcoe driver on Hitachi storage arrays, ESXi hosts might fail to boot and lose connectivity.
Workaround: None.