ESXi 7.0 Update 3f | 12 JUL 2022 | ISO Build 20036589 |
Check for additions and updates to these release notes.
What's in the Release Notes
The release notes cover the following topics:
- What's New
- Earlier Releases of ESXi 7.0
- Patches Contained in this Release
- Resolved Issues
- Known Issues
- Known Issues from Previous Releases
IMPORTANT: If your source system contains hosts of versions between ESXi 7.0 Update 2 and Update 3c, and Intel drivers, before upgrading to ESXi 7.0 Update 3f, see the What's New section of the VMware vCenter Server 7.0 Update 3c Release Notes, because all content in the section is also applicable for vSphere 7.0 Update 3f. Also, see the related VMware knowledge base articles: 86447, 87258, and 87308.
What's New
- This release resolves CVE-2022-23816, CVE-2022-23825, CVE-2022-28693, and CVE-2022-29901. For more information on these vulnerabilities and their impact on VMware products, see VMSA-2022-0020.
- ESXi 7.0 Update 3f supports vSphere Quick Boot on the following servers:
- Cisco Systems Inc:
- UCSC-C220-M6N
- UCSC-C225-M6N
- UCSC-C240-M6L
- UCSC-C240-M6N
- UCSC-C240-M6SN
- UCSC-C240-M6SX
- Dell Inc:
- PowerEdge XR11
- PowerEdge XR12
- PowerEdge XE8545
- HPE:
- Edgeline e920
- Edgeline e920d
- Edgeline e920t
- ProLiant DL20 Gen10 Plus
- ProLiant DL110 Gen10 Plus
- ProLiant ML30 Gen10 Plus
- Lenovo:
- ThinkSystem SR 860 V2
Earlier Releases of ESXi 7.0
New features, resolved issues, and known issues of ESXi are described in the release notes for each release. Release notes for earlier releases of ESXi 7.0 are:
- VMware ESXi 7.0, ESXi 7.0 Update 3e Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 3d Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 2e Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 1e Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 3c Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 2d Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 2c Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 2a Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 2 Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 1d Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 1c Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 1b Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 1a Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 1 Release Notes
- VMware ESXi 7.0, ESXi 7.0b Release Notes
For internationalization, compatibility, and open source components, see the VMware vSphere 7.0 Release Notes.
Patches Contained in This Release
This release of ESXi 7.0 Update 3f delivers the following patches:
Build Details
Download Filename: | VMware-ESXi-7.0U3f-20036589-depot |
Build: | 20036589 |
Download Size: | 575.2 MB |
md5sum: | 8543deb5d6d71bc7cc6d6c21977b1181 |
sha256checksum: | b4cd253cbc28abfa01fbe8e996c3b0fd8b6be9e442a4631f35616eb34e9e01e9 |
Host Reboot Required: | Yes |
Virtual Machine Migration or Shutdown Required: | Yes |
Components
Component | Bulletin | Category | Severity |
---|---|---|---|
ESXi Component - core ESXi VIBs | ESXi_7.0.3-0.50.20036589 | Bugfix | Critical |
ESXi Install/Upgrade Component | esx-update_7.0.3-0.50.20036589 | Bugfix | Critical |
Broadcom NetXtreme-E Network and ROCE/RDMA Drivers for VMware ESXi | Broadcom-bnxt-Net-RoCE_216.0.0.0-1vmw.703.0.50.20036589 | Bugfix | Critical |
Network driver for Intel(R) E810 Adapters | Intel-icen_1.4.1.20-1vmw.703.0.50.20036589 | Bugfix | Critical |
Network driver for Intel(R) X722 and E810 based RDMA Adapters | Intel-irdman_1.3.1.22-1vmw.703.0.50.20036589 | Bugfix | Critical |
VMware Native iSER Driver | VMware-iser_1.1.0.1-1vmw.703.0.50.20036589 | Bugfix | Critical |
Broadcom Emulex Connectivity Division lpfc driver for FC adapters | Broadcom-ELX-lpfc_14.0.169.26-5vmw.703.0.50.20036589 | Bugfix | Critical |
LSI NATIVE DRIVERS LSU Management Plugin | Broadcom-lsiv2-drivers-plugin_1.0.0-12vmw.703.0.50.20036589 | Bugfix | Critical |
Networking Driver for Intel PRO/1000 Family Adapters | Intel-ne1000_0.9.0-1vmw.703.0.50.20036589 | Bugfix | Critical |
USB Driver | VMware-vmkusb_0.1-7vmw.703.0.50.20036589 | Bugfix | Critical |
ESXi Component - core ESXi VIBs | ESXi_7.0.3-0.45.20036586 | Security | Critical |
ESXi Install/Upgrade Component | esx-update_7.0.3-0.45.20036586 | Security | Critical |
VMware-VM-Tools | VMware-VM-Tools_12.0.0.19345655-20036586 | Security | Critical |
IMPORTANT:
- Starting with vSphere 7.0, VMware uses components for packaging VIBs along with bulletins. The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.
- When patching ESXi hosts by using VMware Update Manager from a version prior to ESXi 7.0 Update 2, it is strongly recommended to use the rollup bulletin in the patch baseline. If you cannot use the rollup bulletin, be sure to include all of the following packages in the patching baseline. If the following packages are not included in the baseline, the update operation fails:
- VMware-vmkusb_0.1-1vmw.701.0.0.16850804 or higher
- VMware-vmkata_0.1-1vmw.701.0.0.16850804 or higher
- VMware-vmkfcoe_1.0.0.2-1vmw.701.0.0.16850804 or higher
- VMware-NVMeoF-RDMA_1.0.1.2-1vmw.701.0.0.16850804 or higher
Rollup Bulletin
This rollup bulletin contains the latest VIBs with all the fixes after the initial release of ESXi 7.0.
Bulletin ID | Category | Severity | Details |
ESXi70U3f-20036589 | Bugfix | Critical | Security and Bugfix |
ESXi70U3sf-20036586 | Security | Critical | Security only |
Image Profiles
VMware patch and update releases contain general and critical image profiles. Apply the general release image profile to new bug fixes.
Image Profile Name |
ESXi-7.0U3f-20036589-standard |
ESXi-7.0U3f-20036589-no-tools |
ESXi-7.0U3sf-20036586-standard |
ESXi-7.0U3sf-20036586-no-tools |
ESXi Image
Name and Version | Release Date | Category | Detail |
---|---|---|---|
ESXi70U3f-20036589 | 12 JUL 2022 | Bugfix | Security and Bugfix image |
ESXi70U3sf-20036586 | 12 JUL 2022 | Security | Security only image |
For information about the individual components and bulletins, see the Product Patches page and the Resolved Issues section.
Patch Download and Installation
In vSphere 7.x, the Update Manager plug-in, used for administering vSphere Update Manager, is replaced with the Lifecycle Manager plug-in. Administrative operations for vSphere Update Manager are still available under the Lifecycle Manager plug-in, along with new capabilities for vSphere Lifecycle Manager.
The typical way to apply patches to ESXi 7.x hosts is by using the vSphere Lifecycle Manager. For details, see About vSphere Lifecycle Manager and vSphere Lifecycle Manager Baselines and Images.
You can also update ESXi hosts without using the Lifecycle Manager plug-in, and use an image profile instead. To do this, you must manually download the patch offline bundle ZIP file from VMware Customer Connect. From the Select a Product drop-down menu, select ESXi (Embedded and Installable) and from the Select a Version drop-down menu, select 7.0. For more information, see the Upgrading Hosts by Using ESXCLI Commands and the VMware ESXi Upgrade guide.
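For reference, the sketch below is a minimal, hedged illustration of applying the image profile from the downloaded offline bundle with ESXCLI, wrapped in a small Python script that you could run in the ESXi Shell. The datastore path for the depot ZIP file is an assumption; the profile name comes from the Image Profiles table in these release notes.

```python
import subprocess

# Placeholder path; adjust to the datastore location where you uploaded the offline bundle.
DEPOT = "/vmfs/volumes/datastore1/VMware-ESXi-7.0U3f-20036589-depot.zip"
# Image profile name from the Image Profiles table in this release.
PROFILE = "ESXi-7.0U3f-20036589-standard"

# esxcli software profile update applies the newer VIBs from the depot to the host.
subprocess.run(
    ["esxcli", "software", "profile", "update", "-d", DEPOT, "-p", PROFILE],
    check=True,
)
# Reboot the host afterwards; this release requires a host reboot (see Build Details).
```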
Resolved Issues
The resolved issues are grouped as follows.
- ESXi_7.0.3-0.50.20036589
- esx-update_7.0.3-0.50.20036589
- Broadcom-bnxt-Net-RoCE_216.0.0.0-1vmw.703.0.50.20036589
- Broadcom-ELX-lpfc_14.0.169.26-5vmw.703.0.50.20036589
- Broadcom-lsiv2-drivers-plugin_1.0.0-12vmw.703.0.50.20036589
- Intel-icen_1.4.1.20-1vmw.703.0.50.20036589
- Intel-irdman_1.3.1.22-1vmw.703.0.50.20036589
- Intel-ne1000_0.9.0-1vmw.703.0.50.20036589
- VMware-iser_1.1.0.1-1vmw.703.0.50.20036589
- VMware-vmkusb_0.1-7vmw.703.0.50.20036589
- ESXi_7.0.3-0.45.20036586
- esx-update_7.0.3-0.45.20036586
- VMware-VM-Tools_12.0.0.19345655-20036586
- ESXi-7.0U3f-20036589-standard
- ESXi-7.0U3f-20036589-no-tools
- ESXi-7.0U3sf-20036586-standard
- ESXi-7.0U3sf-20036586-no-tools
- ESXi Image - ESXi70U3f-20036589
- ESXi Image - ESXi70U3sf-20036586
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2916848, 2817702, 2937074, 2957732, 2984134, 2944807, 2985369, 2907963, 2949801, 2984245, 2911320 , 2915911, 2990593, 2992648, 2905357, 2980176, 2974472, 2974376, 2972059, 2973031, 2967359, 2860126, 2878245, 2876044, 2968157, 2960242, 2970238, 2966270, 2966783, 2963328, 2939395, 2966362, 2946550, 2958543, 2966628, 2961584, 2911056, 2965235, 2952427, 2963401, 2965146, 2963038, 2963479, 2949375, 2961033, 2958926, 2839515, 2951139, 2878224, 2954320, 2952432, 2961346, 2857932, 2960949, 2960882, 2957966, 2929443, 2956080, 2959293, 2944919, 2948800, 2928789, 2957769, 2928268, 2957062, 2792139, 2934102, 2953709, 2950686, 2953488, 2949446, 2955016, 2953217, 2956030, 2949902, 2944894, 2944521, 2911363, 2952205, 2894093, 2910856, 2953754, 2949777, 2925733, 2951234, 2915979, 2904241, 2935355, 2941263, 2912661, 2891231, 2928202, 2928268, 2867146, 2244126, 2912330, 2898858, 2906297, 2912213, 2910340, 2745800, 2912182, 2941274, 2912230, 2699748, 2882789, 2869031, 2913017, 2864375, 2925133, 2965277 |
CVE numbers | N/A |
The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.
Updates the esxio-combiner, esx-ui, esx-xserver, cpu-microcode, trx, vsanhealth, esx-base, esx-dvfilter-generic-fastpath, gc, native-misc-drivers, bmcal, vsan, vdfs, and crx VIBs to resolve the following issues:
- NEW: PR 2965277: Windows virtual machines with nested virtualization support that have Virtualization-Based Security (VBS) enabled might show up to 100% CPU utilization
Under certain conditions, such as increasing page sharing or disabling the use of large pages at the VM or ESXi host level, you might see the CPU utilization of Windows VBS-enabled VMs reach up to 100%.
This issue is resolved in this release. The fix prevents page-sharing in some specific cases to avoid the issue.
- PR 2839515: Python SDK (pyVmomi) call to EnvironmentBrowser.QueryConfigOptionEx fails with UnknownWsdlTypeError
pyVmomi calls to the vSphere EnvironmentBrowser.QueryConfigTargetEx API might fail with UnknownWsdlTypeError.
This issue is resolved in this release.
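For context, the following is a minimal pyVmomi sketch of the kind of EnvironmentBrowser call described above; it only illustrates where the error could surface. The vCenter Server address, credentials, and the assumption that at least one compute resource exists are placeholders, not part of the release notes.

```python
# Hedged illustration only: placeholders for host, user, and password.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only: skip certificate verification
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Collect compute resources (clusters and standalone hosts) from the inventory.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ComputeResource], True)
    cr = view.view[0]  # assumes at least one compute resource exists
    # Before the fix, environment-browser queries such as this one could fail with
    # UnknownWsdlTypeError when the response contained types unknown to the client.
    config_option = cr.environmentBrowser.QueryConfigOptionEx()
    print(cr.name, config_option.version)
    view.Destroy()
finally:
    Disconnect(si)
```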
- PR 2911320: You see status Unknown for sensors of type System Event in the hardware health monitoring screen in the vSphere Client
The hardware health module of ESXi might fail to decode some sensors of the type System Event when a physical server is rebranded. As a result, in the vSphere Client you see status Unknown for sensors of type System Event under Monitor > Hardware Health.
This issue is resolved in this release.
- PR 2910340: If a virtual machine with vmx.reboot.powerCycle=TRUE reboots during migration, it might not power on
If you set the vmx.reboot.powerCycle advanced setting on a virtual machine to TRUE, when the guest OS initiates a reboot, the virtual machine powers off and then powers on. However, if a power cycle occurs during a migration by using vSphere vMotion, the operation fails and the virtual machine on the source host might not power back on.
This issue is resolved in this release.
- PR 2916848: ESXi hosts might fail with a purple diagnostic screen due to a data race in UNIX domain sockets
If a socket experiences a connection failure while it is polled during the internal communication between UNIX domain sockets, a data race might occur. As a result, in some cases, the ESXi host might access an invalid memory region and fail with a purple diagnostic screen with #PF Exception 14, and errors such as UserDuct_PollSize() and UserSocketLocalPoll().
This issue is resolved in this release.
- PR 2891231: You cannot create a host profile from an ESXi host that uses a hardware iSCSI adapter with pseudo NIC
If a hardware iSCSI adapter on an ESXi host in your environment uses pseudo NICs, you might not be able to create a host profile from such a host since pseudo NICs do not have the required PCI address and vendor name for a profile.
This issue is resolved in this release. For pseudo NICs, the fix removes the requirement for PCI address and PCI vendor name values.
- PR 2898858: Virtual machines might become unresponsive due to underlying storage issues on a software iSCSI adapter
Race conditions in the iscsi_vmk driver might cause stuck I/O operations or heartbeat timeouts on a VMFS datastore. As a result, you might see some virtual machines become unresponsive.
This issue is resolved in this release.
- PR 2792139: Throughput with software iSCSI adapters is limited due to hardcoded send and receive buffer sizes
Throughput with software iSCSI adapters is limited due to hardcoded send and receive buffer sizes, 600 KB for the send buffer and 256 KB for the receive buffer, respectively. As a result, the performance of the iscsi_vmk adapter is not optimal.
This issue is resolved in this release. The fix makes the send and receive buffer sizes for iSCSI connections adjustable parameters. You can set the send buffer size by using the ESXCLI command esxcli system settings advanced set -o /ISCSI/SocketSndBufLenKB -i <your size> with a limit of 6144 KB. You can set the receive buffer size by using the command esxcli system settings advanced set -o /ISCSI/SocketRcvBufLenKB -i <your size> with a limit of 6144 KB.
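As an illustration of the new adjustable parameters, the sketch below wraps the two ESXCLI commands quoted in the resolution above in a small Python helper that could be run in the ESXi Shell. The 1024 KB values are arbitrary examples, not recommendations; both options accept values up to 6144 KB.

```python
# Hedged illustration only: runs the documented ESXCLI commands with example sizes.
import subprocess


def set_iscsi_buffers(send_kb: int = 1024, recv_kb: int = 1024) -> None:
    # Send buffer size for iSCSI connections, in KB (maximum 6144 KB).
    subprocess.run(["esxcli", "system", "settings", "advanced", "set",
                    "-o", "/ISCSI/SocketSndBufLenKB", "-i", str(send_kb)], check=True)
    # Receive buffer size for iSCSI connections, in KB (maximum 6144 KB).
    subprocess.run(["esxcli", "system", "settings", "advanced", "set",
                    "-o", "/ISCSI/SocketRcvBufLenKB", "-i", str(recv_kb)], check=True)


if __name__ == "__main__":
    set_iscsi_buffers()
```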
- PR 2904241: Virtual machines with enabled NVIDIA virtual GPU (vGPU) might not destroy all multi-instance GPU resources during VM power off
When multiple NVIDIA vGPU VMs power off simultaneously, occasionally some multi-instance GPU (MIG) resources are not destroyed. As a result, subsequent vGPU VM power on might fail due to the residual MIG resources.
This issue is resolved in this release.
- PR 2944894: Virtual machines on NFSv4.1 datastores become unresponsive for a few seconds during storage failover
In rare cases, when an NFSv4.1 server returns a transient error during storage failover, virtual machines might become unresponsive for 10 seconds before the operation restarts.
This issue is resolved in this release. The fix reduces wait time for recovery during storage failover.
- PR 2916980: An ESXi host might fail with a purple diagnostic screen due to packet completion on a different portset
On rare occasions, packet completion might not happen on the original port or portset, but on a different portset, which causes a loop that could corrupt the packet list with an invalid pointer. As a result, an ESXi host might fail with a purple diagnostic screen. In the logs, you see an error such as PF Exception 14 in world 61176327:nsx.cp.tx IP 0xxxxxx addr 0x3c.
This issue is resolved in this release.
- PR 2925733: In the vSphere API, the memoryRangeLength field of NumaNodeInfo does not always count all memory in the associated ESXi host NUMA node
The memory in an ESXi host NUMA node might consist of multiple physical ranges. In ESXi releases earlier than 7.0 Update 3f, the memoryRangeBegin, memoryRangeLength field pair in NumaNodeInfo gives the starting host physical address and the length of one range in the NUMA node, ignoring any additional ranges.
This issue is resolved in this release. The memoryRangeLength field is set to the total amount of memory in the NUMA node, summed across all physical ranges. The memoryRangeBegin field is set to 0, because this information is not meaningful in the case of multiple ranges.
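For illustration, the hedged pyVmomi sketch below reads the NumaNodeInfo fields discussed above for each host. The connection details are placeholders, and the property path host.hardware.numaInfo.numaNode is an assumption based on the public vSphere API reference, not text from these release notes.

```python
# Hedged illustration only: placeholders for host, user, and password.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        numa = host.hardware.numaInfo
        for node in numa.numaNode or []:
            # With this fix, memoryRangeLength reports the total memory of the NUMA node
            # across all physical ranges; memoryRangeBegin is 0 when multiple ranges exist.
            print(host.name, node.typeId, node.memoryRangeBegin, node.memoryRangeLength)
    view.Destroy()
finally:
    Disconnect(si)
```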
- PR 2932736: If you enable fast path on the High-Performance Plug-in (HPP) for 512e Software Emulated 4KN devices, I/Os might fail
If you enable fast path on HPP for 512e Software Emulated 4KN devices, it does not work as expected, because the fast path does not handle Read-Modify-Write (R-M-W) operations, which must use the slow path. The use of fast path on 4KN devices is not supported.
This issue is resolved in this release. The fix makes sure that even if the fast path is enabled for a 512e Software Emulated 4KN device, the setting is ignored and I/Os take the slow path.
- PR 2860126: Deleting snapshots of large virtual disks might temporarily freeze a running VM
If you have a running VM with virtual disks larger than 1 TB and you delete a snapshot of the disks on this VM, the VM might freeze for seconds or even minutes. The VM ultimately recovers, but VM workloads might experience outages. The issue occurs because the delete operation triggers snapshot consolidation in the background, which causes the delay. The issue is more likely to occur on slower storage, such as NFS.
This issue is resolved in this release.
- PR 2875070: ESXi hosts might fail with a purple diagnostic screen due to mishandling DVFilter packets
DVFilter packets might incorrectly be transferred to a network port where the packet completion code fails to run. As a result, ESXi hosts fail with a purple diagnostic screen and the error PF Exception 14 Vmxnet3VMKDevTxCompleteOne E1000DevTxCompleteOneWork.
This issue is resolved in this release.
- PR 2883317: Virtual machines might intermittently lose connectivity due to a rare issue with the distributed firewall (DFW)
In rare cases, DFW might send a packet list to the wrong portset. As a result, the VMkernel service might fail and virtual machines lose connectivity. In the vmkernel.log file, you see errors such as:
2021-09-27T04:29:53.170Z cpu84:xxxx)NetPort: 206: Failure: lockModel[0] vs psWorldState->lockModel[0] there is no portset lock holding.
This issue is resolved in this release.
- PR 2944521: You cannot rename devices claimed by the High-Performance Plug-in (HPP)
Starting from vSphere 7.0 Update 2, HPP becomes the default plug-in for local NVMe and SCSI devices, and replaces the ESX Native Multipathing Plug-in (NMP). However, in some environments, the change from NMP to HPP makes some properties of devices claimed by HPP, such as Display Name, inaccessible.
This issue is resolved in this release.
- PR 2945697: You might see outdated path states to ALUA devices on an ESXi host
In an ALUA target, if the target port group IDs (TPGIDs) are changed for a LUN, the cached device identification response that SATP uses might not update accordingly. As a result, ESXi might not reflect the correct path states for the corresponding device.
This issue is resolved in this release.
- PR 2946036: RTPG for all paths to a volume reports the same ALUA access state for devices registered by using NVMe-SCSI translation on a LUN
If you use the NVMe-SCSI translation stack to register NVMe devices in your system, the targetPortGroup and relativeTargetPortId properties get a controller ID of 0 for all paths. As a result, an RTPG command returns the same ALUA access state for all the paths of the namespace, because every path tpgID matches the first descriptor, which is 0.
This issue is resolved in this release. In case you use third-party plug-ins, if no tpgID for the path is present in the tpg descriptors (which is a blank ANA page for the controller), you must run the RTPG command on that particular path to populate the ANA page for that controller and to get the tpg descriptor and ALUA state for the path.
- PR 2943674: vSAN File Service health check has warning after AD user domain password change: File server not found
If the password for the File Service administrator is changed on the Active Directory server, the password in the vSAN File Service domain configuration might not match. If the passwords do not match or if the account is locked, some file shares might be inaccessible. vSAN health service shows the following warning: File Service: File server not found.
This issue is resolved in this release.
- PR 2944919: You see virtual machine disks (VMDK) with stale binding in vSphere Virtual Volumes datastores
When you reinstall an ESXi host after a failure, since the failed instance never reboots, stale bindings of VMDKs remain intact on the VASA provider and vSphere Virtual Volumes datastores. As a result, when you reinstall the host, you cannot delete the VMDKs due to the existing binding. Over time, many such VMDKs might accrue and consume idle storage space.
This issue is resolved in this release. However, you must contact VMware Global Support Services to implement the task.
- PR 2912213: The hostd service might repeatedly fail on virtual machines with serial devices with no fileName property and serial.autodetect property set to FALSE
In rare cases, a serial device on a virtual machine might not have a serial<N>.fileName property and the serial<N>.autodetect property is set to FALSE. As a result, the hostd service might repeatedly fail.
This issue is resolved in this release.
- PR 2951234: If you set an option of the traffic shaping policy at switch or port group level to a value larger than 2147483 Kbps, the settings are not saved
In the vSphere Client, if you set any of the traffic shaping options, such as Average Bandwidth, Peak Bandwidth or Burst Size, to a value larger than 2147483 Kbps, the settings are not kept.
This issue is resolved in this release.
- PR 2945707: vSphere vMotion operations fail after upgrade to ESXi 7.x for environments configured with vSphere APIs for Array Integration (VAAI) for Network-attached Storage (NAS)
Due to the low default number of file descriptors, some features such as vSphere vMotion might fail in VAAI for NAS environments after an upgrade to ESXi 7.x, because of the insufficient number of vmdk files that can be operated on simultaneously. You see errors such as Too many open files in the vaai-nas daemon logs in the syslog.log file.
This issue is resolved in this release. The fix increases the default number of file descriptors from 256 to the maximum of 64,000.
- PR 2952427: Virtual machines might become unresponsive due to a rare deadlock issue in a VMFS6 volume
In rare cases, if a write I/O request runs in parallel with an unmap operation triggered by the guest OS on a thin-provisioned VM, a deadlock might occur in a VMFS6 volume. As a result, virtual machines on this volume become unresponsive.
This issue is resolved in this release.
- PR 2876044: vSphere Storage vMotion or hot-add operations might fail on virtual machines with virtual memory size of more than 300 GB
During vSphere Storage vMotion or hot-add operations on a virtual machine with more than 300 GB of memory, the switchover time can approach 2 minutes, which causes a timeout failure.
This issue is resolved in this release.
- PR 2952003: During boot, iSCSI configurations might fail to restore and you do not see some LUNs or datastores after the boot
If you do not remove port binding after you delete a VMkernel NIC that is bound to an iSCSI adapter, the stale port binding might cause issues after an ESXi host reboot. During boot, binding of the non-existing VMkernel NIC to the iSCSI adapter fails and iSCSI configurations cannot restore during the boot. As a result, after the reboot completes, you might not see some LUNs or datastores.
This issue is resolved in this release.
- PR 2957062: NFSv4.1 datastore remains in Inaccessible state after storage failover
After a storage failover, the ESXi NFSv4.1 client might falsely identify the NFS server to be a different entity and skip recovery. As a result, the NFSv4.1 datastore remains in Inaccessible state.
This issue is resolved in this release. The fix makes sure that the NFSv4.1 client correctly identifies the NFS server after a failover.
- PR 2961033: If an NVMe device reports a critical warning, initialization on ESXi fails and the device goes offline
In some cases, such as the temperature exceeding the threshold, an NVMe device might report a critical warning, and the ESXi NVMe controller does not register the device and puts it offline.
This issue is resolved in this release. The fix makes sure that NVMe devices register with the ESXi NVMe controller regardless of critical warnings. If the device is in a bad state, and not a transient state such as high temperature, the firmware of the NVMe controller must send appropriate error codes during an I/O admin command processing.
- PR 2928202: You cannot access some shares in the vSAN file service and see warnings from the VDFS daemon
A rare issue when the used block cache exceeds the reserved cache might cause reservation issues in the vSAN file service. As a result, you cannot reach some file shares and the health check shows the error VDFS daemon is not running. In the vdfsd-server log, you see errors such as:
PANIC: NOT_IMPLEMENTED bora/vdfs/core/VDFSPhysicalLog.cpp:621
PANIC: NOT_IMPLEMENTED bora/vdfs/core/VDFSPhysicalLog.cpp:626
This issue is resolved in this release.
- PR 2949902: OVF files might unexpectedly be modified and virtual machines cannot be imported
In rare cases, background search queries executed by vCenter Server on an ESXi host with access to the vmdk files of virtual machines exported to an OVF format might accidentally modify the files. As a result, you cannot import the virtual machines.
This issue is resolved in this release.
- PR 2963401: You cannot add a multipathing PSA claim rule by using the JumpStart tool
Due to a component load issue, adding a multipathing PSA claim rule to the set of claim rules on a vCenter Server system by using the JumpStart tool might fail.
This issue is resolved in this release. The fix makes sure all required components load to enable the use of the JumpStart tool.
- PR 2965235: LUN trespasses might occur on Asymmetric Logical Unit Access (ALUA) arrays upon shutdown or booting of ESXi 7.x hosts
After a controlled shutdown or booting of any server in a cluster of ESXi servers attached to an ALUA array, all LUNs to which that server has access might trespass to one storage processor on the array. As a result, the performance of the other ESXi servers accessing the LUNs degrades.
This issue is resolved in this release. The fix adds a check before activating the last path on the target during the shutdown phase to prevent LUN trespassing.
- PR 2966783: ESXi host might fail with a purple diagnostic screen due to a rare issue with memory allocation failure by PSA
In rare cases, when an ESXi host is under memory pressure, PSA does not handle memory allocation failures gracefully. As a result, the ESXi host might fail with a purple diagnostic screen with an error such as #PF Exception 14 in world 2098026:SCSI path sc IP / SCSI path scan helpers.
This issue is resolved in this release.
- PR 2984134: If a virtual machine reboots while a snapshot is deleted, the VM might fail with a core dump
If a running virtual machine reboots during a snapshot deletion operation, the VM disks might be incorrectly reopened and closed during the snapshot consolidation. As a result, the VM might fail. However, this is a timing issue and occurs only sporadically.
This issue is resolved in this release.
- PR 2929443: Mounting VMFS6 datastores with enabled clustered VMDK support might randomly fail
During device discovery, a reservation conflict might cause ATS to be wrongly reported as not supported, and due to this, ESXi uses SCSI-2 reservation instead of ATS. As a result, mounting VMFS6 datastores with enabled clustered VMDK support might randomly fail.
This issue is resolved in this release.
- PR 2957732: vSAN file service servers cannot boot after host UUID change
Some operations can change the host UUID. For example, if you reinstall ESXi software or move the host across clusters, the host UUID might change. If the host UUID changes during vSAN file service downtime, then vSAN file service servers cannot boot.
This issue is resolved in this release.
- PR 2955688: vSAN file service might not work as expected after it is disabled and reenabled
If you disable the vSAN file service with no existing file shares, vSAN removes the file service domain. A removal failure of one server might interrupt the process, and leave behind some metadata. When you reenable the file service, the old metadata might cause the file service to not work as expected.
This issue is resolved in this release.
- PR 2965146: The IMPLICIT TRANSITION TIME value returned from the SCSI Report Target Port Groups command might not be correct
The Report Target Port Groups command might return a wrong value in the IMPLICIT TRANSITION TIME field, which affects the SCSI to NVMe translation layer. In cases such as multi-appliance migration, the time for ALUA transition is critical for some multipathing software, such as PowerPath, to perform correct operations.
This issue is resolved in this release.
- PR 2992648: High latencies observed after reboot or disk group remount
Small-sized pending unmaps might be blocked when you reboot a host or remount a disk group. The pending unmaps can cause log congestion which leads to I/O latency.
This issue is resolved in this release.
- PR 2964856: Path to the /productLocker directory on a datastore truncates after patching ESXi 7.x
After the first upgrade or update of your system to ESXi 7.x, with each consecutive update, also called a patch, you might see a truncation in the path to the /productLocker directory on a datastore. For example, if your first patch on ESXi 7.x is from 7.0 Update 2 to 7.0 Update 3, the path to the /productLocker directory originally is similar to /vmfs/volumes/xxx/VMware-Tools/productLocker/. However, for each consecutive patch, for example from 7.0 Update 3 to 7.0 Update 3c, the path becomes similar to /VMware-Tools/productLocker/.
This issue is resolved in this release.
- PR 2963038: Inaccessible object on vSAN stretched or two-node cluster
In a vSAN stretched cluster or two-node cluster, quorum votes might not be distributed correctly for objects with PFTT of 1 and SFTT of 1 or more. If one site fails, and an additional host or disk fails on the active site, the object might lose quorum and become inaccessible.
This issue is resolved in this release.
- PR 2961832: ESXi hosts might fail while resizing a VMDK not aligned to 512 bytes
An object resize request might fail if the new size is not aligned to 512 bytes. This problem can cause an ESXi host to fail with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2935355: You might see increased CPU consumption on an ESXi host due to a leak of the PacketCapture (pktcap-uw) utility
In some cases, for example when a virtual machine restarts, a running SSH session that the pktcap-uw utility uses to monitor and validate switch configurations on the ESXi host might be terminated, but the pktcap-uw utility might continue to try to process packets. As a result, the utility starts consuming more CPU than usual. You can use the esxtop -a command to track CPU performance.
This issue is resolved in this release. The fix prevents leaks from the pktcap-uw utility that cause increased CPU consumption. You can use the ps -u | grep pktcap command to see if any background pktcap-uw utility process runs, although it does not impact CPU performance. If a zombie process still runs, you can use the kill -9 command to end it.
- PR 2973031: vSAN health check shows certified NVMe device as not certified
In vCenter Server running 7.0 Update 3, a certified NVMe device might have the following health check warning: NVMe device is not VMware certified. You might also see the following health check warning: NVMe device can not be identified.
This issue is resolved in this release.
- PR 2968157: The ESXCLI command hardware ipmi bmc get does not return the IPv6 address of an IPMI Baseboard Management Controller (BMC)
The ESXCLI command hardware ipmi bmc get does not return the IPv6 address of a BMC due to wrong parsing.
This issue has been fixed in this release.
- PR 2963328: Shutdown wizard fails for a vSAN stretched or two-node vSAN cluster with error: Disconnected host found from orch
If the witness host cannot be reached by cluster hosts, the vSAN cluster Shutdown wizard might fail for a stretched cluster or two-node clusters. The issue occurs when vSAN data traffic and witness traffic use different vmknics. In the vSphere Client, you see the following error message in the vSAN Services page: Disconnected host found from orch <witness's ip>. If vCenter Server is hosted on the cluster, the vSphere Client is not available during shutdown, and the error message is available in the Host Client where the vCenter Server VM resides.
This issue is resolved in this release.
- PR 2882789: While browsing a vSphere Virtual Volumes datastore, you see the UUID of some virtual machines, instead of their name
In the vSphere Client, when you right-click to browse a datastore, you might see the UUID of some virtual machines in a vSphere Virtual Volumes datastore, such as naa.xxxx, instead of their names. The issue rarely occurs in large-scale environments with a large number of containers and VMs on a vSphere Virtual Volumes datastore. The issue has no functional impact, such as affecting VM operations or backup; it affects only how VMs display in the vSphere Client.
This issue is resolved in this release.
- PR 2928268: A rare issue with the ESXi infrastructure might cause ESXi hosts to become unresponsive
Due to a rare issue with the ESXi infrastructure, a slow VASA provider might lead to a situation where the vSphere Virtual Volumes datastores are inaccessible, and the ESXi host becomes unresponsive.
This issue is resolved in this release.
- PR 2946550: When you shut down an ESXi host, it does not power off and you see an error message
In certain environments, when you shut down an ESXi host, it does not power off and you see a screen with the message This system has been halted. It is safe to use the reset or power button to reboot.
This issue is resolved in this release.
- PR 2949777: If the LargeBAR setting of a vmxnet3 device is enabled on ESX 7.0 Update 3 and later, virtual machines might lose connectivity
The LargeBAR setting that extends the Base Address Register (BAR) on a vmxnet3 device supports Uniform Passthrough (UPT). However, UPT is not supported on ESX 7.0 Update 3 and later, and if the vmxnet3 driver is downgraded to a version earlier than 7.0 and the LargeBAR setting is enabled, virtual machines might lose connectivity.
This issue is resolved in this release.
- PR 2984245: Redundant redo log files take significant storage space
If you hot remove independent nonpersistent disks from a virtual machine that has the vmx.reboot.PowerCycle configuration parameter enabled, ESXi stores a redo log file. If such a VM is a backup proxy VM, you might see a lot of redundant redo log files that take a significant amount of storage.
Workaround: Disable the vmx.reboot.PowerCycle parameter on backup proxy VMs either by power-cycling the VM or by using the PowerCLI Set-AdvancedSetting cmdlet. Delete the redundant redo log files.
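The workaround above uses the PowerCLI Set-AdvancedSetting cmdlet; as an alternative illustration, the hedged pyVmomi sketch below sets the same VM advanced parameter through ReconfigVM_Task. The vCenter Server address, credentials, and the VM name "backup-proxy-01" are placeholders, not values from these release notes.

```python
# Hedged illustration only: equivalent of disabling vmx.reboot.PowerCycle via pyVmomi.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    # Assumed VM name; raises StopIteration if no such VM exists.
    proxy_vm = next(vm for vm in view.view if vm.name == "backup-proxy-01")
    spec = vim.vm.ConfigSpec(extraConfig=[
        vim.option.OptionValue(key="vmx.reboot.PowerCycle", value="FALSE")])
    # Reconfigure the VM; a power cycle of the VM may still be needed for the
    # change to take effect.
    WaitForTask(proxy_vm.ReconfigVM_Task(spec))
    view.Destroy()
finally:
    Disconnect(si)
```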
- PR 2939395: ESXi hosts intermittently fail with purple diagnostic screen with a dump that shows J3_DeleteTransaction and J6ProcessReplayTxnList errors
In rare cases, when an ESXi host tries to access an uncached entry from a resource pool cache, the host intermittently fails with a PF Exception 14 purple diagnostic screen and a core dump file. In the dump file, you see errors for the J3_DeleteTransaction and J6ProcessReplayTxnList modules that indicate the issue.
This issue is resolved in this release.
- PR 2952432: ESXi hosts might become unresponsive due to a rare issue with VMFS that causes high lock contention of hostd service threads
A rare issue with VMFS might cause high lock contention of hostd service threads, or even deadlocks, for basic filesystem calls such as open, access, or rename. As a result, the ESXi host becomes unresponsive.
This issue is resolved in this release.
- PR 2960882: If the authentication protocol in an ESXi SNMP agent configuration is MD5, upgrades to ESXi 7.x might fail
Since the MD5 authentication protocol is deprecated from ESXi 7.x, if an ESXi SNMP agent configuration uses the MD5 authentication protocol, upgrades to ESXi 7.x fail.
This issue is resolved in this release. The fix removes the MD5 authentication protocol and you see a VMkernel Observation (VOB) message such as Upgrade detected a weak crypto protocol (MD5) and removed it. Please regenerate v3 Users.
- PR 2957966: Spaces in the Syslog.global.logHost parameter might cause upgrades to ESXi 7.x to fail
In ESXi 7.x, the Syslog.global.logHost parameter, which defines a comma-delimited list of remote hosts and specifications for message transmissions, does not tolerate spaces after the comma. ESXi versions earlier than 7.x tolerate a space after the comma in the Syslog.global.logHost parameter. As a result, upgrades to ESXi 7.x might fail.
This issue is resolved in this release. The fix allows spaces after the comma.
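For illustration of the comma-delimited format, the hedged pyVmomi sketch below writes a Syslog.global.logHost value with no spaces after the commas through the host advanced-options manager. The ESXi host name, credentials, and the two log targets are placeholders; the option key and value format are assumptions based on the standard ESXi syslog configuration, not text from these release notes.

```python
# Hedged illustration only: placeholders for host, credentials, and log targets.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.example.com", user="root", pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]
    opt_mgr = host.configManager.advancedOption
    # Comma-delimited list of remote syslog targets, with no space after the comma.
    opt_mgr.UpdateOptions(changedValue=[vim.option.OptionValue(
        key="Syslog.global.logHost",
        value="udp://loghost1.example.com:514,ssl://loghost2.example.com:1514")])
    view.Destroy()
finally:
    Disconnect(si)
```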
- PR 2967359: You might see logical consistency-based I/O error for virtual machines running on a snapshot
An issue related to the use of the probabilistic data structure Bloom filter, which aims to optimize read I/O for VMs running on a snapshot, might cause a logical consistency-based I/O error when a VM runs on a snapshot. The issue is limited and occurs only if you run SQL Server on a snapshot.
This issue is resolved in this release. The fix disables the use of Bloom Filter functionality until the root cause of the issue is resolved.
- PR 2974376: ESXi hosts fail with a purple diagnostic screen with an NMI error due to a rare CPU lockup
An issue with the process that scans datastores to help file-block allocations for thin files might cause a CPU lockup in certain cases. As a result, the ESXi host might fail with a purple diagnostic screen with an error similar to PSOD - @BlueScreen: NMI.
This issue is resolved in this release.
- PR 2905357: You see unusually high number of Task Created and Task Completed messages in the hostd log files
The log files of the hostd service might get an unusually high number of Task Created and Task Completed messages for invisible tasks, which in turn might reduce the log retention time.
This issue is resolved in this release.
- PR 2956080: Read and write operations to a vSphere Virtual Volumes datastore might fail
If a SCSI command to a protocol endpoint of a vSphere Virtual Volumes datastore fails, the endpoint might get an Unsupported status, which might be cached. As a result, subsequent SCSI commands to that protocol endpoint fail with an error code such as 0x5 0x20, and read and write operations to the vSphere Virtual Volumes datastore fail.
This issue is resolved in this release.
- PR 2985369: If the NVMe controller reports a FAILED state, it might lose connectivity to ESXi hosts
In some cases, such as a port down error, ESXi hosts might lose connectivity to the NVMe controller although some I/O queues are still active.
This issue is resolved in this release.
- PR 2892901: If the disk of a replicated virtual machine is not 512-aligned, the VM might fail to power on
If you replicate a virtual machine by using vSphere Replication, and the VM disk is increased to a number that is not 512-aligned, the VM cannot power on.
This issue is resolved in this release.
- PR 2912230: If the number of CPUs is not a multiple of the number of cores per socket in the configuration of a virtual machine, the VM cannot power on
If the number of CPUs that you define with the parameter ConfigSpec#numCPUs is not a multiple of the number of cores per socket that you define with the parameter ConfigSpec#numCoresPerSocket in the configuration of a virtual machine, the VM does not power on.
This issue is resolved in this release. The fix prevents you from setting a numCPUs value that is not a multiple of the numCoresPerSocket value.
- PR 2949375: vSAN host in stretched cluster fails to enter maintenance mode with ensure accessibility mode
This problem affects hosts in stretched clusters with locality=None, HFT=0, SFT=1 or SFT=2 policy settings. When you place a host into maintenance mode with Ensure Accessibility, the operation might stay at 100% for a long time or fail after 60 minutes.
This issue is resolved in this release.
- PR 2950686: Frequent vSAN network latency alarms after upgrade to ESXi 7.0 Update 2
After you upgrade an ESXi host to 7.0 Update 2, you might notice frequent vSAN network latency alarms on the cluster when vSAN performance service is enabled. The latency results show that most of the alarms are issued by the vSAN primary node.
This issue is resolved in this release.
- PR 2928789: NVMe over RDMA storage becomes inaccessible after update from ESXi 7.0, 7.0 Update 1 or 7.0 Update 2 to 7.0 Update 3c
Due to a change in the default admin queue size for NVMe over RDMA controllers in ESXi 7.0 Update 3c, updates from ESXi 7.0, 7.0 Update 1 or 7.0 Update 2 to 7.0 Update 3c might cause NVMe over RDMA storage to become inaccessible.
This issue is resolved in this release. If you need to update to ESXi 7.0 Update 3c, you can run the script attached in VMware knowledge base article 88938 before the update to make sure the issue is resolved.
- PR 2983089: An ESXi host might fail with a purple diagnostic screen due to a race condition in container ports
Due to a rare race condition, when a container port tries to re-acquire a lock it already holds, an ESXi host might fail with a purple diagnostic screen while virtual machines with container ports power off or migrate by using vSphere vMotion. The issue occurs due to duplicating port IDs.
This issue is resolved in this release.
- PR 2925133: When you disable or suspend vSphere Fault Tolerance, pinging virtual machines might time out
When you disable or suspend vSphere FT, virtual machines might temporarily lose connectivity and not respond to pinging or any network traffic. Pinging virtual machines might time out consecutively within a short period, such as 20 sec.
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the loadesx and esx-update VIBs.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the bnxtnet VIB.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2929821 |
CVE numbers | N/A |
Updates the lpfc VIB to resolve the following issue:
- PR 2929821: ESXi hosts lose access to Dell EMC Unity storage arrays after upgrading the lpfc driver to 14.0.x.x
ESXi hosts might lose access to Unity storage arrays after upgrading the lpfc driver to 14.0.x.x. You see errors such as protocol failure detected during processing of FCP I/O and rspInfo3 x2 in the driver logs.
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2960435 |
CVE numbers | N/A |
Updates the lsuv2-lsiv2-drivers-plugin VIB.
- PR 2960435: Changes in disk location take long to become visible from the vSphere Client or by using ESXCLI
For LSI-related drivers, when an ESXi server boots up, if you plug out a disk and plug it into another slot on the same host, changes in the disk location might take long to become visible from the vSphere Client or by using the ESXCLI command esxcli storage core device physical get -d. The issue is specific to drivers with many disks connected to the driver, 150 and more, and is resolved within 5 minutes.
This issue is resolved in this release. The fix adds a cache of all devices connected to an LSI controller at boot time to make sure such inventory calls are served quickly.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2965878 |
CVE numbers | N/A |
Updates the icen VIB.
With ESXi 7.0 Update 3f, the Intel-icen driver supports networking functions on E822/E823 NICs for Intel Icelake-D platforms. ENS (Enhanced Network Stack) and RDMA functions on such devices are not supported.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the irdman VIB.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2957673 |
CVE numbers | N/A |
Updates the ne1000 VIB.
ESXi 7.0 Update 3f upgrades the Intel-ne1000 driver to support Intel I219-LM devices that are required for newer server models, such as the Intel Rocket Lake-S platform. The TCP segmentation offload for the I219 devices is deactivated because of known issues in the hardware DMA.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2921564 |
CVE numbers | N/A |
Updates the iser VIB.
- PR 2921564: ESXi hosts that have iSER initiators connected to IBM FlashSystem V9000 arrays might fail with a purple diagnostic screen due to a rare issue
A very rare null pointer error might cause ESXi hosts on IBM FlashSystem V9000 arrays to fail with a purple diagnostic screen and an error such as #PF Exception 14 in world 2098414:CqWorld IP 0xxxxx addr 0xxxx.
This issue is resolved in this release. The fix adds special handling of an RDMA error status for one of the SCSI management commands.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the vmkusb VIB.
Patch Category | Security |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2920287, 2946147, 2946217, 2946222, 2946671, 2946863, 2947156, 2951864, 2951866, 2972147, 2972151 |
CVE numbers | CVE-2004-0230, CVE-2020-7451, CVE-2015-2923, CVE-2015-5358, CVE-2013-3077, CVE-2015-1414, CVE-2018-6918, CVE-2020-7469, CVE-2019-5611, CVE-2020-7457, CVE-2018-6916, CVE-2019-5608, CVE-2022-23816, CVE-2022-23825, CVE-2022-28693, CVE-2022-29901 |
The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.
Updates the native-misc-drivers, bmcal, esx-ui, vsanhealth, esxio-combiner, esx-xserver, trx, esx-dvfilter-generic-fastpath, vdfs, vsan, cpu-microcode, esx-base, gc, and crx VIBs to resolve the following issues:
- ESXi 7.0 Update 3f provides the following security updates:
- The Expat XML parser is updated to version 2.4.7.
- The SQLite database is updated to version 3.37.2.
- The cURL library is updated to version 7.81.0.
- The OpenSSL package is updated to version openssl-1.0.2ze.
- The ESXi userworld libxml2 library is updated to version 2.9.14.
- The Python package is updated to 3.8.13.
- The zlib library is updated to 1.2.12.
- This release resolves CVE-2004-0230. VMware has evaluated the severity of this issue to be in the low severity range with a maximum CVSSv3 base score of 3.7.
- This release resolves CVE-2020-7451. VMware has evaluated the severity of this issue to be in the moderate severity range with a maximum CVSSv3 base score of 5.3.
- This release resolves CVE-2015-2923. VMware has evaluated the severity of this issue to be in the moderate severity range with a maximum CVSSv3 base score of 6.5.
- This release resolves CVE-2015-5358. VMware has evaluated the severity of this issue to be in the moderate severity range with a maximum CVSSv3 base score of 6.5.
- This release resolves CVE-2013-3077. VMware has evaluated the severity of this issue to be in the important severity range with a maximum CVSSv3 base score of 7.0.
- This release resolves CVE-2015-1414. VMware has evaluated the severity of this issue to be in the important severity range with a maximum CVSSv3 base score of 7.5.
- This release resolves CVE-2018-6918. VMware has evaluated the severity of this issue to be in the important severity range with a maximum CVSSv3 base score of 7.5.
- This release resolves CVE-2020-7469. VMware has evaluated the severity of this issue to be in the important severity range with a maximum CVSSv3 base score of 7.5.
- This release resolves CVE-2019-5611. VMware has evaluated the severity of this issue to be in the important severity range with a maximum CVSSv3 base score of 7.5.
- This release resolves CVE-2020-7457. VMware has evaluated the severity of this issue to be in the important severity range with a maximum CVSSv3 base score of 7.8.
- This release resolves CVE-2018-6916. VMware has evaluated the severity of this issue to be in the important severity range with a maximum CVSSv3 base score of 8.1.
- This release resolves CVE-2019-5608. VMware has evaluated the severity of this issue to be in the important severity range with a maximum CVSSv3 base score of 8.1.
This release resolves CVE-2022-23816, CVE-2022-23825, CVE-2022-28693, and CVE-2022-29901. For more information on these vulnerabilities and their impact on VMware products, see VMSA-2022-0020.
Patch Category | Security |
Patch Severity | Critical |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the loadesx and esx-update VIBs.
Patch Category | Security |
Patch Severity | Critical |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the tools-light VIB.
Profile Name | ESXi-7.0U3f-20036589-standard |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | July 12, 2022 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2916848, 2817702, 2937074, 2957732, 2984134, 2944807, 2985369, 2907963, 2949801, 2984245, 2915911, 2990593, 2992648, 2905357, 2980176, 2974472, 2974376, 2972059, 2973031, 2967359, 2860126, 2878245, 2876044, 2968157, 2960242, 2970238, 2966270, 2966783, 2963328, 2939395, 2966362, 2946550, 2958543, 2966628, 2961584, 2911056, 2965235, 2952427, 2963401, 2965146, 2963038, 2963479, 2949375, 2961033, 2958926, 2839515, 2951139, 2878224, 2954320, 2952432, 2961346, 2857932, 2960949, 2960882, 2957966, 2929443, 2956080, 2959293, 2944919, 2948800, 2928789, 2957769, 2928268, 2957062, 2792139, 2934102, 2953709, 2950686, 2953488, 2949446, 2955016, 2953217, 2956030, 2949902, 2944894, 2944521, 2911363, 2952205, 2894093, 2910856, 2953754, 2949777, 2925733, 2951234, 2915979, 2904241, 2935355, 2941263, 2912661, 2891231, 2928202, 2928268, 2867146, 2244126, 2912330, 2898858, 2906297, 2912213, 2910340, 2745800, 2912182, 2941274, 2912230, 2699748, 2882789, 2869031, 2913017, 2864375, 2929821, 2957673, 2921564, 2925133, 2965277 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
pyVmomi calls to the vSphere
EnvironmentBrowser.QueryConfigTargetEx
API might fail withUnknownWsdlTypeError
. -
The hardware health module of ESXi might fail to decode some sensors of the type System Event when a physical server is rebranded. As a result, in the vSphere Client you see status Unknown for sensors of type System Event under Monitor > Hardware Health.
-
If you you set the
vmx.reboot.powerCycle
advanced setting on a virtual machine toTRUE
, when the guest OS initiates a reboot, the virtual machine powers off and then powers on. However, if a power cycle occurs during a migration by using vSphere vMotion, the operation fails and the virtual machine on the source host might not power back on. -
If a socket experiences a connection failure while it is polled during the internal communication between UNIX domain sockets, a data race might occur. As a result, in some cases, the ESXi host might access an invalid memory region and fail with a purple diagnostic screen with
#PF Exception 14
, and errors such asUserDuct_PollSize()
andUserSocketLocalPoll()
. -
If a hardware iSCSI adapter on an ESXi host in your environment uses pseudo NICs, you might not be able to create a host profile from such a host since pseudo NICs do not have the required PCI address and vendor name for a profile.
-
Race conditions in the
iscsi_vmk
driver might cause stuck I/O operations or heartbeat timeouts on a VMFS datastore. As a result, you might see some virtual machines become unresponsive. -
Throughput with software iSCSI adapters is limited due to hardcoded send and receive buffer sizes, 600 KB for send and 256 KB for receive buffer, respectively. As a result, the performance of the
iscsi_vmk
adapter is not optimal. -
When multiple NVIDIA vGPU VMs power off simultaneously, occasionally some multi-instance GPU (MIG) resources are not destroyed. As a result, subsequent vGPU VM power on might fail due to the residual MIG resources.
-
In rare cases, when a NFSv4.1 server returns a transient error during storage failover, you might see virtual machines to become unresponsive for 10 seconds before the operation restarts.
-
In rare occasions, packet completion might not happen on the original port or portset, but on a different portset, which causes a loop that could corrupt the packet list with an invalid pointer. As a result, an ESXi host might fail with a purple diagnostic screen. In the logs, you see an error such as
PF Exception 14 in world 61176327:nsx.cp.tx IP 0xxxxxx addr 0x3c
. -
The memory in an ESXi host NUMA node might consist of multiple physical ranges. In ESXi releases earlier than 7.0 Update 3f, the
memoryRangeBegin
,memoryRangeLength
field pair inNumaNodeInfo
gives the starting host physical address and the length of one range in the NUMA node, ignoring any additional ranges. -
If you enable fast path on HPP for 512e Software Emulated 4KN devices, it does not work as expected, because the fast path does not handle Read-Modify-Write(R-M-W), which must use slow path. The use of fast path on 4KN devices is not supported.
-
If you have a running VM with virtual disks larger than 1TB and you delete a snapshot of the disks on this VM, the VM might freeze for seconds or even minutes. The VM ultimately recovers but VM workloads might experience outages. The issue occurs because the delete operation triggers snapshot consolidation in the background that causes the delay. The issue is more likely to occur on slower storage, such as NFS.
-
DVFilter packets might incorrectly be transferred to a network port, where the packet completion code fails to run. As a result, ESXi hosts fail with a purple diagnostic screen and the error
PF Exception 14 Vmxnet3VMKDevTxCompleteOne E1000DevTxCompleteOneWork
. -
In rare cases, DFW might send a packet list to the wrong portset. As a result, the VMkernel service might fail and virtual machines lose connectivity. In the
vmkernel.log
file, you see errors such as:
2021-09-27T04:29:53.170Z cpu84:xxxx)NetPort: 206: Failure: lockModel[0] vs psWorldState->lockModel[0] there is no portset lock holding.
-
Starting from vSphere 7.0 Update 2, HPP becomes the default plug-in for local NVMe and SCSI devices, and replaces the ESX Native Multipathing Plug-in (NMP). However, in some environments, the change from NMP to HPP makes some properties of devices claimed by HPP, such as Display Name, inaccessible.
-
In an ALUA target, if the target port group IDs (TPGIDs) are changed for a LUN, the cached device identification response that SATP uses might not update accordingly. As a result, ESXi might not reflect the correct path states for the corresponding device.
-
If you use the NVMe-SCSI translation stack to register NVMe devices in your system, the
targetPortGroup
andrelativeTargetPortId
properties get a controller ID of0
for all paths. As a result, an RTPG command returns the same ALUA access state for all the paths of the namespace, because every pathtpgID
matches the first descriptor, which is0
. -
If the password for the File Service administrator is changed on the Active Directory server, the password in the vSAN File Service domain configuration might not match. If the passwords do not match or if the account is locked, some file shares might be inaccessible. vSAN health service shows the following warning:
File Service: File server not found
. -
When you reinstall an ESXi host after a failure, since the failed instance never reboots, stale bindings of VMDKs remain intact on the VASA provider and vSphere Virtual Volumes datastores. As a result, when you reinstall the host, you cannot delete the VMDKs due to the existing binding. Over time, many such VMDKs might accumulate and consume storage space.
-
In rare cases, a serial device on a virtual machine might not have a serial<N>.fileName property and might have the serial<N>.autodetect property set to FALSE. As a result, the hostd service might repeatedly fail. -
In the vSphere Client, if you set any of the traffic shaping options, such as Average Bandwidth, Peak Bandwidth or Burst Size, to a value larger than 2147483 Kbps, the settings are not kept.
-
Due to the low default number of file descriptors, some features such as vSphere vMotion might fail in VAAI for NAS environments after an upgrade to ESXi 7.x, because an insufficient number of VMDK files can be operated on simultaneously. You see errors such as
Too many open files
in thevaai-nas
daemon logs in thesyslog.log
file. -
In rare cases, if a write I/O request runs in parallel with an unmap operation triggered by the guest OS on a thin-provisioned VM, a deadlock might occur in a VMFS6 volume. As a result, virtual machines on this volume become unresponsive.
-
During vSphere Storage vMotion or hot-add operations on a virtual machine with more than 300 GB of memory, the switchover time can approach 2 minutes, which causes a timeout failure.
-
If you do not remove port binding after you delete a VMkernel NIC that is bound to an iSCSI adapter, the stale port binding might cause issues after an ESXi host reboot. During boot, binding of the non-existing VMkernel NIC to the iSCSI adapter fails and iSCSI configurations cannot restore during the boot. As a result, after the reboot completes, you might not see some LUNs or datastores.
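A minimal sketch, assuming vmhba65 and vmk2 are placeholder adapter and NIC names, of removing the port binding from the ESXi shell before you delete the VMkernel NIC:
# List the current iSCSI port bindings on the adapter
esxcli iscsi networkportal list --adapter=vmhba65
# Remove the binding of the VMkernel NIC from the iSCSI adapter before deleting the NIC
esxcli iscsi networkportal remove --adapter=vmhba65 --nic=vmk2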
-
After a storage failover, the ESXi NFSv4.1 client might falsely identify the NFS server as a different entity and skip recovery. As a result, the NFSv4.1 datastore remains in Inaccessible state.
-
In some cases, such as temperature exceeding the threshold, an NVMe device might report a critical warning and the ESXi NVMe controller does not register the device, and puts it offline.
-
A rare issue when the used block cache exceeds the reserved cache might cause reservation issues in the vSAN file service. As a result, you cannot reach some file shares and the health check shows the error
VDFS daemon is not running
. In thevdfsd-server
log, you see errors such as:
PANIC: NOT_IMPLEMENTED bora/vdfs/core/VDFSPhysicalLog.cpp:621
PANIC: NOT_IMPLEMENTED bora/vdfs/core/VDFSPhysicalLog.cpp:626
-
In rare cases, background search queries executed by vCenter Server on an ESXi host with access to the vmdk files of virtual machines exported to an OVF format might accidentally modify the files. As a result, you cannot import the virtual machines.
-
Due to a component load issue, adding a multipathing PSA claim rule to the set of claim rules on a vCenter Server system by using the JumpStart tool might fail.
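If the JumpStart-based addition fails, a possible fallback, sketched here with placeholder rule ID and vendor values, is to add the claim rule directly in the ESXi shell:
# Add a vendor-based multipathing (MP) claim rule (placeholder rule ID and vendor string)
esxcli storage core claimrule add --rule=430 --type=vendor --vendor=ExampleVendor --plugin=NMP
# Load the new rule into the VMkernel and verify it
esxcli storage core claimrule load
esxcli storage core claimrule list --claimrule-class=MP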
-
After a controlled shutdown or boot of any server in a cluster of ESXi servers attached to an ALUA array, all LUNs to which that server has access might trespass to one storage processor on the array. As a result, the performance of the other ESXi servers accessing the LUNs degrades.
-
In rare cases, when an ESXi host is under memory pressure, PSA does not handle memory allocation failures gracefully. As a result, the ESXi host might fail with a purple diagnostic screen with an error such as
#PF Exception 14 in world 2098026:SCSI path sc IP / SCSI path scan helpers
. -
If a running virtual machine reboots during a snapshot deletion operation, the VM disks might be incorrectly reopened and closed during the snapshot consolidation. As a result, the VM might fail. However, this is a timing issue and occurs only occasionally.
-
During device discovery, a reservation conflict might cause ATS to be wrongly reported as
not supported
and, due to this, ESXi uses SCSI-2 reservations instead of ATS. As a result, mounting VMFS6 datastores with clustered VMDK support enabled might randomly fail. -
Some operations can change the host UUID. For example, if you reinstall ESXi software or move the host across clusters, the host UUID might change. If the host UUID changes during vSAN file service downtime, then vSAN file service servers cannot boot.
-
If you disable the vSAN file service with no existing file shares, vSAN removes the file service domain. A removal failure of one server might interrupt the process, and leave behind some metadata. When you reenable the file service, the old metadata might cause the file service to not work as expected.
-
The Report Target Port Groups command might return a wrong value in the IMPLICIT TRANSITION TIME field, which affects the SCSI to NVMe translation layer. In cases such as multi-appliance migration, the ALUA transition time is critical for some multipathing software, for example PowerPath, to perform operations correctly.
-
Small-sized pending unmaps might be blocked when you reboot a host or remount a disk group. The pending unmaps can cause log congestion which leads to I/O latency.
-
After the first upgrade or update of your system to ESXi 7.x, with each consecutive update, also called a patch, you might see a truncation in the path to the
/productLocker
directory on a datastore. For example, if your first patch on ESXi 7.x is from 7.0 Update 2 to 7.0 Update 3, the path to the/productLocker
directory originally is similar to/vmfs/volumes/xxx/VMware-Tools/productLocker/
. However, for each consecutive patch, for example from 7.0 Update 3 to 7.0 Update 3c, the path becomes similar to/VMware-Tools/productLocker/
. -
In a vSAN stretched cluster or two-node cluster, quorum votes might not be distributed correctly for objects with PFTT of 1 and SFTT of 1 or more. If one site fails, and an additional host or disk fails on the active site, the object might lose quorum and become inaccessible.
-
An object resize request might fail if the new size is not aligned to 512 bytes. This problem can cause an ESXi host to fail with a purple diagnostic screen.
-
In some cases, for example when a virtual machine restarts, a running SSH session that the pktcap-uw utility uses to monitor and validate switch configurations on the ESXi host might be terminated, but the pktcap-uw utility might continue to try to process packets. As a result, the utility starts consuming more CPU than usual. You can use the
esxtop -a
command to track CPU performance.
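If orphaned pktcap-uw sessions keep consuming CPU, one common cleanup approach, shown here only as a sketch, is to stop all running pktcap-uw sessions from the ESXi shell:
# Stop all running pktcap-uw sessions on the host
kill $(lsof | grep pktcap-uw | awk '{print $1}' | sort -u)
-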
In vCenter Server running 7.0 Update 3, a certified NVMe device might have the following health check warning:
NVMe device is not VMware certified
. You might also see the following health check warning:NVMe device can not be identified
. -
The ESXCLI command
hardware ipmi bmc get
does not return the IPv6 address of a BMC due to incorrect parsing. -
If the witness host cannot be reached by cluster hosts, the vSAN cluster Shutdown wizard might fail for stretched clusters or two-node clusters. The issue occurs when vSAN data traffic and witness traffic use different vmknics. In the vSphere Client, you see the following error message in the vSAN Services page:
Disconnected host found from orch <witness's ip>
. If vCenter is hosted on the cluster, the vSphere Client is not available during shutdown, and the error message is available in the Host Client where the vCenter VM resides. -
In the vSphere Client, when you right-click to browse a datastore, you might see the UUID of some virtual machines in a vSphere Virtual Volumes datastore, such as
naa.xxxx
, instead of their names. The issue rarely occurs in large-scale environments with a large number of containers and VMs on a vSphere Virtual Volumes datastore. The issue has no functional impact on VM operations or backups; it affects only the VM display in the vSphere Client. -
Due to a rare issue with the ESXi infrastructure, a slow VASA provider might lead to a situation where the vSphere Virtual Volumes datastores are inaccessible, and the ESXi host becomes unresponsive.
-
In certain environments, when you shut down an ESXi host, it does not power off and you see a screen with the message
This system has been halted. It is safe to use the reset or power button to reboot..
-
The
LargeBAR
setting that extends the Base Address Register (BAR) on a vmxnet3 device supports Uniform Passthrough (UPT). However, UPT is not supported on ESXi 7.0 Update 3 and later, and if the vmxnet3 driver is downgraded to a version earlier than 7.0 and the LargeBAR
setting is enabled, virtual machines might lose connectivity. -
If you hot remove independent nonpersistent disks from a virtual machine that has the
vmx.reboot.PowerCycle
configuration parameter enabled, ESXi stores a redo log file. If such a VM is a backup proxy VM, you might see many redundant redo log files that take a significant amount of storage.
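To check whether a backup proxy VM has the parameter set and whether redo log files are accumulating, a minimal sketch with placeholder datastore and VM names:
# Check whether the power-cycle parameter is present in the VM configuration
grep -i "vmx.reboot.PowerCycle" /vmfs/volumes/datastore1/ProxyVM/ProxyVM.vmx
# List files in the VM directory and look for leftover redo logs (names vary; REDO or delta are common patterns)
ls -lh /vmfs/volumes/datastore1/ProxyVM/ | grep -iE "redo|delta"
-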
In rare cases, when an ESXi host tries to access an uncached entry from a resource pool cache, the host intermittently fails with a
PF Exception 14
purple diagnostic screen and a core dump file. In the dump file, you see errors for theJ3_DeleteTransaction
andJ6ProcessReplayTxnList
modules that indicate the issue. -
A rare issue with VMFS might cause high lock contention of hostd service threads, or even deadlocks, for basic filesystem calls such as
open
,access
orrename
. As a result, the ESXi host becomes unresponsive. -
Since the MD5 authentication protocol is deprecated in ESXi 7.x, if an ESXi SNMP agent configuration uses the MD5 authentication protocol, upgrades to ESXi 7.x fail.
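For example, before the upgrade you can check the SNMP agent configuration and switch the authentication protocol from MD5 to SHA1 in the ESXi shell:
# Show the current SNMP agent configuration, including the authentication protocol
esxcli system snmp get
# Switch the authentication protocol to SHA1 before upgrading
esxcli system snmp set --authentication SHA1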
-
In ESXi 7.x, the
Syslog.global.logHost
parameter, which defines a comma-delimited list of remote hosts and specifications for message transmissions, does not tolerate spaces after the comma. ESXi versions earlier than 7.x tolerate a space after the comma in the Syslog.global.logHost
parameter. As a result, upgrades to ESXi 7.x might fail.
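For example, when you configure remote hosts from the ESXi shell, specify the comma-delimited list without spaces after the commas; the host names below are placeholders:
# Correct: no space after the comma between the remote hosts
esxcli system syslog config set --loghost="udp://loghost1.example.com:514,ssl://loghost2.example.com:1514"
# Apply the new configuration
esxcli system syslog reload
-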
An issue related to the use of the Bloom filter probabilistic data structure, which aims to optimize read I/O for VMs running on a snapshot, might cause a logical consistency-based I/O error when a VM runs on a snapshot. The issue is limited and occurs only when you run SQL Server on a snapshot.
-
An issue with the process that scans datastores to help file-block allocations for thin files might cause a CPU lockup in certain cases. As a result, the ESXi host might fail with a purple diagnostic screen with an error similar to
PSOD - @BlueScreen: NMI
. -
The log files of the hostd service might get an unusually high number of
Task Created
andTask Completed
messages for invisible tasks, which in turn might reduce the log retention time. -
If a SCSI command to a protocol endpoint of a vSphere Virtual Volumes datastore fails, the endpoint might get an Unsupported status, which might be cached. As a result, subsequent SCSI commands to that protocol endpoint fail with an error code such as
0x5 0x20
, and read and write operations to a vSphere Virtual Volumes datastore fail. -
In some cases, such as a port down error, ESXi hosts might lose connectivity to the NVMe controller even though some I/O queues are still active.
-
If you replicate a virtual machine by using vSphere Replication, and the VM disk is increased to a size that is not 512-byte aligned, the VM cannot power on.
-
If the number of CPUs that you define with the parameter
ConfigSpec#numCPUs
is not a multiple of the number of cores per socket that you define with the parameter ConfigSpec#numCoresPerSocket
in the configuration of a virtual machine, the VM does not power on.
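For example, the corresponding .vmx settings must satisfy the same constraint; the following sketch with placeholder datastore and VM names checks them from the ESXi shell:
# Inspect the vCPU topology of a VM
grep -E "numvcpus|cpuid.coresPerSocket" /vmfs/volumes/datastore1/MyVM/MyVM.vmx
# A valid combination: numvcpus = "8" with cpuid.coresPerSocket = "4" (8 is a multiple of 4)
-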
This problem affects hosts in stretched clusters with
locality=None
,HFT=0
,SFT=1
orSFT=2
policy settings. When you place a host into maintenance mode with Ensure Accessibility, the operation might stay at 100% for a long time or fail after 60 min. -
After you upgrade an ESXi host to 7.0 Update 2, you might notice frequent vSAN network latency alarms on the cluster when vSAN performance service is enabled. The latency results show that most of the alarms are issued by the vSAN primary node.
-
Due to a change in the default admin queue size for NVMe over RDMA controllers in ESXi 7.0 Update 3c, updates from ESXi 7.0, 7.0 Update 1 or 7.0 Update 2 to 7.0 Update 3c might cause NVMe over RDMA storage to become inaccessible.
-
Due to a rare race condition, when a container port tries to re-acquire a lock it already holds, an ESXi host might fail with a purple diagnostic screen while virtual machines with container ports power off or migrate by using vSphere vMotion. The issue occurs due to duplicate port IDs.
-
ESXi hosts might lose access to Unity storage arrays after upgrading the lpfc driver to 14.0.x.x. You see errors such as
protocol failure detected during processing of FCP I/O
andrspInfo3 x2
in the driver logs. -
For LSI-related drivers, when an ESXi server boots up, if you unplug a disk and plug it into another slot on the same host, the change in the disk location might take a long time to become visible in the vSphere Client or by using the ESXCLI command
esxcli storage core device physical get -d
. The issue is specific to drivers with many disks connected, 150 or more, and resolves within 5 minutes. -
With ESXi 7.0 Update 3f, the Intel-icen driver supports networking functions on E822/E823 NICs for Intel Icelake-D platforms. ENS (Enhanced Network Stack) and RDMA functions on such devices are not supported.
-
ESXi 7.0 Update 3f upgrades the
Intel-ne1000
driver to support Intel I219-LM devices that are required for newer server models, such as the Intel Rocket Lake-S platform. The TCP segmentation offload for the I219 devices is deactivated because of known issues in the hardware DMA. -
A very rare null pointer error issue might cause ESXi hosts on IBM FlashSystem V9000 arrays to fail with a purple diagnostic screen and an error such as
#PF Exception 14 in world 2098414:CqWorld IP 0xxxxx addr 0xxxx
. -
When you disable or suspend vSphere FT, virtual machines might temporarily lose connectivity and not respond to pinging or any network traffic. Pinging virtual machines might time out consecutively within a short period, such as 20 sec.
-
Under certain conditions, such as increasing page sharing or disabling the use of large pages at the VM or ESXi host level, you might see the CPU utilization of Windows VBS-enabled VMs reach up to 100%.
-
Profile Name | ESXi-7.0U3f-20036589-no-tools |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | July 12, 2022 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2916848, 2817702, 2937074, 2957732, 2984134, 2944807, 2985369, 2907963, 2949801, 2984245, 2915911, 2990593, 2992648, 2905357, 2980176, 2974472, 2974376, 2972059, 2973031, 2967359, 2860126, 2878245, 2876044, 2968157, 2960242, 2970238, 2966270, 2966783, 2963328, 2939395, 2966362, 2946550, 2958543, 2966628, 2961584, 2911056, 2965235, 2952427, 2963401, 2965146, 2963038, 2963479, 2949375, 2961033, 2958926, 2839515, 2951139, 2878224, 2954320, 2952432, 2961346, 2857932, 2960949, 2960882, 2957966, 2929443, 2956080, 2959293, 2944919, 2948800, 2928789, 2957769, 2928268, 2957062, 2792139, 2934102, 2953709, 2950686, 2953488, 2949446, 2955016, 2953217, 2956030, 2949902, 2944894, 2944521, 2911363, 2952205, 2894093, 2910856, 2953754, 2949777, 2925733, 2951234, 2915979, 2904241, 2935355, 2941263, 2912661, 2891231, 2928202, 2928268, 2867146, 2244126, 2912330, 2898858, 2906297, 2912213, 2910340, 2745800, 2912182, 2941274, 2912230, 2699748, 2882789, 2869031, 2913017, 2864375, 2929821, 2957673, 2921564, 2925133, 2965277 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
pyVmomi calls to the vSphere
EnvironmentBrowser.QueryConfigTargetEx
API might fail withUnknownWsdlTypeError
. -
The hardware health module of ESXi might fail to decode some sensors of the type System Event when a physical server is rebranded. As a result, in the vSphere Client you see status Unknown for sensors of type System Event under Monitor > Hardware Health.
-
If you set the
vmx.reboot.powerCycle
advanced setting on a virtual machine toTRUE
, when the guest OS initiates a reboot, the virtual machine powers off and then powers on. However, if a power cycle occurs during a migration by using vSphere vMotion, the operation fails and the virtual machine on the source host might not power back on. -
If a socket experiences a connection failure while it is polled during the internal communication between UNIX domain sockets, a data race might occur. As a result, in some cases, the ESXi host might access an invalid memory region and fail with a purple diagnostic screen with
#PF Exception 14
, and errors such asUserDuct_PollSize()
andUserSocketLocalPoll()
. -
If a hardware iSCSI adapter on an ESXi host in your environment uses pseudo NICs, you might not be able to create a host profile from such a host since pseudo NICs do not have the required PCI address and vendor name for a profile.
-
Race conditions in the
iscsi_vmk
driver might cause stuck I/O operations or heartbeat timeouts on a VMFS datastore. As a result, you might see some virtual machines become unresponsive. -
Throughput with software iSCSI adapters is limited due to hardcoded buffer sizes: 600 KB for the send buffer and 256 KB for the receive buffer. As a result, the performance of the
iscsi_vmk
adapter is not optimal. -
When multiple NVIDIA vGPU VMs power off simultaneously, occasionally some multi-instance GPU (MIG) resources are not destroyed. As a result, subsequent vGPU VM power on might fail due to the residual MIG resources.
-
In rare cases, when an NFSv4.1 server returns a transient error during storage failover, virtual machines might become unresponsive for 10 seconds before the operation restarts.
-
On rare occasions, packet completion might not happen on the original port or portset, but on a different portset, which causes a loop that could corrupt the packet list with an invalid pointer. As a result, an ESXi host might fail with a purple diagnostic screen. In the logs, you see an error such as
PF Exception 14 in world 61176327:nsx.cp.tx IP 0xxxxxx addr 0x3c
. -
The memory in an ESXi host NUMA node might consist of multiple physical ranges. In ESXi releases earlier than 7.0 Update 3f, the
memoryRangeBegin
,memoryRangeLength
field pair inNumaNodeInfo
gives the starting host physical address and the length of one range in the NUMA node, ignoring any additional ranges. -
If you enable fast path on HPP for 512e Software Emulated 4KN devices, it does not work as expected, because the fast path does not handle Read-Modify-Write (R-M-W) operations, which must use the slow path. The use of fast path on 4KN devices is not supported.
-
If you have a running VM with virtual disks larger than 1 TB and you delete a snapshot of the disks on this VM, the VM might freeze for seconds or even minutes. The VM ultimately recovers, but VM workloads might experience outages. The issue occurs because the delete operation triggers snapshot consolidation in the background, which causes the delay. The issue is more likely to occur on slower storage, such as NFS.
-
DVFilter packets might incorrectly be transferred to a network port, where the packet completion code fails to run. As a result, ESXi hosts fail with a purple diagnostic screen and the error
PF Exception 14 Vmxnet3VMKDevTxCompleteOne E1000DevTxCompleteOneWork
. -
In rare cases, DFW might send a packet list to the wrong portset. As a result, the VMkernel service might fail and virtual machines lose connectivity. In the
vmkernel.log
file, you see errors such as:
2021-09-27T04:29:53.170Z cpu84:xxxx)NetPort: 206: Failure: lockModel[0] vs psWorldState->lockModel[0] there is no portset lock holding.
-
Starting from vSphere 7.0 Update 2, HPP becomes the default plug-in for local NVMe and SCSI devices, and replaces the ESX Native Multipathing Plug-in (NMP). However, in some environments, the change from NMP to HPP makes some properties of devices claimed by HPP, such as Display Name, inaccessible.
-
In an ALUA target, if the target port group IDs (TPGIDs) are changed for a LUN, the cached device identification response that SATP uses might not update accordingly. As a result, ESXi might not reflect the correct path states for the corresponding device.
-
If you use the NVMe-SCSI translation stack to register NVMe devices in your system, the
targetPortGroup
andrelativeTargetPortId
properties get a controller ID of0
for all paths. As a result, an RTPG command returns the same ALUA access state for all the paths of the namespace, because every pathtpgID
matches the first descriptor, which is0
. -
If the password for the File Service administrator is changed on the Active Directory server, the password in the vSAN File Service domain configuration might not match. If the passwords do not match or if the account is locked, some file shares might be inaccessible. vSAN health service shows the following warning:
File Service: File server not found
. -
When you reinstall an ESXi host after a failure, since the failed instance never reboots, stale bindings of VMDKs remain intact on the VASA provider and vSphere Virtual Volumes datastores. As a result, when you reinstall the host, you cannot delete the VMDKs due to the existing binding. Over time, many such VMDKs might accumulate and consume storage space.
-
In rare cases, a serial device on a virtual machine might not have a serial<N>.fileName property and might have the serial<N>.autodetect property set to FALSE. As a result, the hostd service might repeatedly fail. -
In the vSphere Client, if you set any of the traffic shaping options, such as Average Bandwidth, Peak Bandwidth or Burst Size, to a value larger than 2147483 Kbps, the settings are not kept.
-
Due to the low default number of file descriptors, some features such as vSphere vMotion might fail in VAAI for NAS environments after an upgrade to ESXi 7.x, because an insufficient number of VMDK files can be operated on simultaneously. You see errors such as
Too many open files
in thevaai-nas
daemon logs in thesyslog.log
file. -
In rare cases, if a write I/O request runs in parallel with an unmap operation triggered by the guest OS on a thin-provisioned VM, a deadlock might occur in a VMFS6 volume. As a result, virtual machines on this volume become unresponsive.
-
During vSphere Storage vMotion or hot-add operations on a virtual machine with more than 300 GB of memory, the switchover time can approach 2 minutes, which causes a timeout failure.
-
If you do not remove port binding after you delete a VMkernel NIC that is bound to an iSCSI adapter, the stale port binding might cause issues after an ESXi host reboot. During boot, binding of the non-existing VMkernel NIC to the iSCSI adapter fails and iSCSI configurations cannot restore during the boot. As a result, after the reboot completes, you might not see some LUNs or datastores.
-
After a storage failover, the ESXi NFSv4.1 client might falsely identify the NFS server as a different entity and skip recovery. As a result, the NFSv4.1 datastore remains in Inaccessible state.
-
In some cases, such as temperature exceeding the threshold, an NVMe device might report a critical warning and the ESXi NVMe controller does not register the device, and puts it offline.
-
A rare issue when the used block cache exceeds the reserved cache might cause reservation issues in the vSAN file service. As a result, you cannot reach some file shares and the health check shows the error
VDFS daemon is not running
. In thevdfsd-server
log, you see errors such as:
PANIC: NOT_IMPLEMENTED bora/vdfs/core/VDFSPhysicalLog.cpp:621
PANIC: NOT_IMPLEMENTED bora/vdfs/core/VDFSPhysicalLog.cpp:626
-
In rare cases, background search queries executed by vCenter Server on an ESXi host with access to the vmdk files of virtual machines exported to an OVF format might accidentally modify the files. As a result, you cannot import the virtual machines.
-
Due to a component load issue, adding a multipathing PSA claim rule to the set of claim rules on a vCenter Server system by using the JumpStart tool might fail.
-
After a controlled shutdown or boot of any server in a cluster of ESXi servers attached to an ALUA array, all LUNs to which that server has access might trespass to one storage processor on the array. As a result, the performance of the other ESXi servers accessing the LUNs degrades.
-
In rare cases, when an ESXi host is under memory pressure, PSA does not handle memory allocation failures gracefully. As a result, the ESXi host might fail with a purple diagnostic screen with an error such as
#PF Exception 14 in world 2098026:SCSI path sc IP / SCSI path scan helpers
. -
If a running virtual machine reboots during a snapshot deletion operation, the VM disks might be incorrectly reopened and closed during the snapshot consolidation. As a result, the VM might fail. However, this is a timing issue and occurs only occasionally.
-
During device discovery, a reservation conflict might cause ATS to be wrongly reported as
not supported
and, due to this, ESXi uses SCSI-2 reservations instead of ATS. As a result, mounting VMFS6 datastores with clustered VMDK support enabled might randomly fail. -
Some operations can change the host UUID. For example, if you reinstall ESXi software or move the host across clusters, the host UUID might change. If the host UUID changes during vSAN file service downtime, then vSAN file service servers cannot boot.
-
If you disable the vSAN file service with no existing file shares, vSAN removes the file service domain. A removal failure of one server might interrupt the process, and leave behind some metadata. When you reenable the file service, the old metadata might cause the file service to not work as expected.
-
The Report Target Port Groups command might return a wrong value in the IMPLICIT TRANSITION TIME field, which affects the SCSI to NVMe translation layer. In cases such as multi-appliance migration, the ALUA transition time is critical for some multipathing software, for example PowerPath, to perform operations correctly.
-
Small-sized pending unmaps might be blocked when you reboot a host or remount a disk group. The pending unmaps can cause log congestion which leads to I/O latency.
-
After the first upgrade or update of your system to ESXi 7.x, with each consecutive update, also called a patch, you might see a truncation in the path to the
/productLocker
directory on a datastore. For example, if your first patch on ESXi 7.x is from 7.0 Update 2 to 7.0 Update 3, the path to the/productLocker
directory originally is similar to/vmfs/volumes/xxx/VMware-Tools/productLocker/
. However, for each consecutive patch, for example from 7.0 Update 3 to 7.0 Update 3c, the path becomes similar to/VMware-Tools/productLocker/
. -
In a vSAN stretched cluster or two-node cluster, quorum votes might not be distributed correctly for objects with PFTT of 1 and SFTT of 1 or more. If one site fails, and an additional host or disk fails on the active site, the object might lose quorum and become inaccessible.
-
An object resize request might fail if the new size is not aligned to 512 bytes. This problem can cause an ESXi host to fail with a purple diagnostic screen.
-
In some cases, for example when a virtual machine restarts, a running SSH session that the pktcap-uw utility uses to monitor and validate switch configurations on the ESXi host might be terminated, but the pktcap-uw utility might continue to try to process packets. As a result, the utility starts consuming more CPU than usual. You can use the
esxtop -a
command to track CPU performance. -
In vCenter Server running 7.0 Update 3, a certified NVMe device might have the following health check warning:
NVMe device is not VMware certified
. You might also see the following health check warning:NVMe device can not be identified
. -
The ESXCLI command
hardware ipmi bmc get
does not return the IPv6 address of a BMC due to incorrect parsing. -
If the witness host cannot be reached by cluster hosts, the vSAN cluster Shutdown wizard might fail for stretched clusters or two-node clusters. The issue occurs when vSAN data traffic and witness traffic use different vmknics. In the vSphere Client, you see the following error message in the vSAN Services page:
Disconnected host found from orch <witness's ip>
. If vCenter is hosted on the cluster, the vSphere Client is not available during shutdown, and the error message is available in the Host Client where the vCenter VM resides. -
In the vSphere Client, when you right-click to browse a datastore, you might see the UUID of some virtual machines in a vSphere Virtual Volumes datastore, such as
naa.xxxx
, instead of their names. The issue rarely occurs in large-scale environments with a large number of containers and VMs on a vSphere Virtual Volumes datastore. The issue has no functional impact on VM operations or backups; it affects only the VM display in the vSphere Client. -
Due to a rare issue with the ESXi infrastructure, a slow VASA provider might lead to a situation where the vSphere Virtual Volumes datastores are inaccessible, and the ESXi host becomes unresponsive.
-
In certain environments, when you shut down an ESXi host, it does not power off and you see a screen with the message
This system has been halted. It is safe to use the reset or power button to reboot..
-
The
LargeBAR
setting that extends the Base Address Register (BAR) on a vmxnet3 device supports Uniform Passthrough (UPT). However, UPT is not supported on ESXi 7.0 Update 3 and later, and if the vmxnet3 driver is downgraded to a version earlier than 7.0 and the LargeBAR
setting is enabled, virtual machines might lose connectivity. -
If you hot remove independent nonpersistent disks from a virtual machine that has the
vmx.reboot.PowerCycle
configuration parameter enabled, ESXi stores a redo log file. If such a VM is a backup proxy VM, you might see many redundant redo log files that take a significant amount of storage. -
In rare cases, when an ESXi host tries to access an uncached entry from a resource pool cache, the host intermittently fails with a
PF Exception 14
purple diagnostic screen and a core dump file. In the dump file, you see errors for theJ3_DeleteTransaction
andJ6ProcessReplayTxnList
modules that indicate the issue. -
A rare issue with VMFS might cause high lock contention of hostd service threads, or even deadlocks, for basic filesystem calls such as
open
,access
orrename
. As a result, the ESXi host becomes unresponsive. -
Since the MD5 authentication protocol is deprecated in ESXi 7.x, if an ESXi SNMP agent configuration uses the MD5 authentication protocol, upgrades to ESXi 7.x fail.
-
In ESXi 7.x, the
Syslog.global.logHost
parameter, which defines a comma-delimited list of remote hosts and specifications for message transmissions, does not tolerate spaces after the comma. ESXi versions earlier than 7.x tolerate a space after the comma in the Syslog.global.logHost
parameter. As a result, upgrades to ESXi 7.x might fail. -
An issue related to the use of the Bloom filter probabilistic data structure, which aims to optimize read I/O for VMs running on a snapshot, might cause a logical consistency-based I/O error when a VM runs on a snapshot. The issue is limited and occurs only when you run SQL Server on a snapshot.
-
An issue with the process that scans datastores to help file-block allocations for thin files might cause a CPU lockup in certain cases. As a result, the ESXi host might fail with a purple diagnostic screen with an error similar to
PSOD - @BlueScreen: NMI
. -
The log files of the hostd service might get an unusually high number of
Task Created
andTask Completed
messages for invisible tasks, which in turn might reduce the log retention time. -
If a SCSI command to a protocol endpoint of a vSphere Virtual Volumes datastore fails, the endpoint might get an Unsupported status, which might be cached. As a result, subsequent SCSI commands to that protocol endpoint fail with an error code such as
0x5 0x20
, and read and write operations to a vSphere Virtual Volumes datastore fail. -
In some cases, such as a port down error, ESXi hosts might lose connectivity to the NVMe controller even though some I/O queues are still active.
-
If you replicate a virtual machine by using vSphere Replication, and the VM disk is increased to a size that is not 512-byte aligned, the VM cannot power on.
-
If the number of CPUs that you define with the parameter
ConfigSpec#numCPUs
is not a multiple of the number of cores per socket that you define with the parameter ConfigSpec#numCoresPerSocket
in the configuration of a virtual machine, the VM does not power on. -
This problem affects hosts in stretched clusters with
locality=None
,HFT=0
,SFT=1
orSFT=2
policy settings. When you place a host into maintenance mode with Ensure Accessibility, the operation might stay at 100% for a long time or fail after 60 min. -
After you upgrade an ESXi host to 7.0 Update 2, you might notice frequent vSAN network latency alarms on the cluster when vSAN performance service is enabled. The latency results show that most of the alarms are issued by the vSAN primary node.
-
Due to a change in the default admin queue size for NVMe over RDMA controllers in ESXi 7.0 Update 3c, updates from ESXi 7.0, 7.0 Update 1 or 7.0 Update 2 to 7.0 Update 3c might cause NVMe over RDMA storage to become inaccessible.
-
Due to a rare race condition, when a container port tries to re-acquire a lock it already holds, an ESXi host might fail with a purple diagnostic screen while virtual machines with container ports power off or migrate by using vSphere vMotion. The issue occurs due to duplicate port IDs.
-
ESXi hosts might lose access to Unity storage arrays after upgrading the lpfc driver to 14.0.x.x. You see errors such as
protocol failure detected during processing of FCP I/O
andrspInfo3 x2
in the driver logs. -
For LSI-related drivers, when an ESXi server boots up, if you unplug a disk and plug it into another slot on the same host, the change in the disk location might take a long time to become visible in the vSphere Client or by using the ESXCLI command
esxcli storage core device physical get -d
. The issue is specific to drivers with many disks connected, 150 or more, and resolves within 5 minutes. -
With ESXi 7.0 Update 3f, the Intel-icen driver supports networking functions on E822/E823 NICs for Intel Icelake-D platforms. ENS (Enhanced Network Stack) and RDMA functions on such devices are not supported.
-
ESXi 7.0 Update 3f upgrades the
Intel-ne1000
driver to support Intel I219-LM devices that are required for newer server models, such as the Intel Rocket Lake-S platform. The TCP segmentation offload for the I219 devices is deactivated because of known issues in the hardware DMA. -
A very rare null pointer error issue might cause ESXi hosts on IBM FlashSystem V9000 arrays to fail with a purple diagnostic screen and an error such as
#PF Exception 14 in world 2098414:CqWorld IP 0xxxxx addr 0xxxx
. -
When you disable or suspend vSphere FT, virtual machines might temporarily lose connectivity and not respond to pinging or any network traffic. Pinging virtual machines might time out consecutively within a short period, such as 20 sec.
-
Under certain conditions, such as increasing page sharing or disabling the use of large pages at the VM or ESXi host level, you might see the CPU utilization of Windows VBS-enabled VMs reach up to 100%.
-
Profile Name | ESXi-7.0U3sf-20036586-standard |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | July 12, 2022 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2920287, 2946147, 2946217, 2946222, 2946671, 2946863, 2947156, 2951864, 2951866, 2972147, 2972151, 2816546 |
Related CVE numbers | CVE-2004-0230, CVE-2020-7451, CVE-2015-2923, CVE-2015-5358, CVE-2013-3077, CVE-2015-1414, CVE-2018-6918, CVE-2020-7469, CVE-2019-5611, CVE-2020-7457, CVE-2018-6916, CVE-2019-5608, CVE-2022-23816, CVE-2022-23825, CVE-2022-28693, CVE-2022-29901 |
- This patch updates the following issues:
-
- The Expat XML parser is updated to version 2.4.7.
- The SQLite database is updated to version 3.37.2.
- The cURL library is updated to version 7.81.0.
- The OpenSSL package is updated to version openssl-1.0.2ze.
- The ESXi userworld libxml2 library is updated to version 2.9.14.
- The Python package is updated to 3.8.13.
- The zlib library is updated to 1.2.12.
- This release resolves CVE-2004-0230. VMware has evaluated the severity of this issue to be in the low severity range with a maximum CVSSv3 base score of 3.7.
- This release resolves CVE-2020-7451. VMware has evaluated the severity of this issue to be in the moderate severity range with a maximum CVSSv3 base score of 5.3.
- This release resolves CVE-2015-2923. VMware has evaluated the severity of this issue to be in the moderate severity range with a maximum CVSSv3 base score of 6.5.
- This release resolves CVE-2015-5358. VMware has evaluated the severity of this issue to be in the moderate severity range with a maximum CVSSv3 base score of 6.5.
- This release resolves CVE-2013-3077. VMware has evaluated the severity of this issue to be in the important severity range with a maximum CVSSv3 base score of 7.0.
- This release resolves CVE-2015-1414. VMware has evaluated the severity of this issue to be in the important severity range with a maximum CVSSv3 base score of 7.5.
- This release resolves CVE-2018-6918. VMware has evaluated the severity of this issue to be in the important severity range with a maximum CVSSv3 base score of 7.5.
- This release resolves CVE-2020-7469. VMware has evaluated the severity of this issue to be in the important severity range with a maximum CVSSv3 base score of 7.5.
- This release resolves CVE-2019-5611. VMware has evaluated the severity of this issue to be in the important severity range with a maximum CVSSv3 base score of 7.5.
- This release resolves CVE-2020-7457. VMware has evaluated the severity of this issue to be in the important severity range with a maximum CVSSv3 base score of 7.8.
- This release resolves CVE-2018-6916. VMware has evaluated the severity of this issue to be in the important severity range with a maximum CVSSv3 base score of 8.1.
- This release resolves CVE-2019-5608. VMware has evaluated the severity of this issue to be in the important severity range with a maximum CVSSv3 base score of 8.1.
-
This release resolves CVE-2022-23816, CVE-2022-23825, CVE-2022-28693, and CVE-2022-29901. For more information on these vulnerabilities and their impact on VMware products, see VMSA-2022-0020.
-
Profile Name | ESXi-7.0U3sf-20036586-no-tools |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | July 12, 2022 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2920287, 2946147, 2946217, 2946222, 2946671, 2946863, 2947156, 2951864, 2951866, 2972147, 2972151, 2816546 |
Related CVE numbers | CVE-2004-0230, CVE-2020-7451, CVE-2015-2923, CVE-2015-5358, CVE-2013-3077, CVE-2015-1414, CVE-2018-6918, CVE-2020-7469, CVE-2019-5611, CVE-2020-7457, CVE-2018-6916, CVE-2019-5608, CVE-2022-23816, CVE-2022-23825, CVE-2022-28693, CVE-2022-29901 |
- This patch updates the following issues:
-
- The Expat XML parser is updated to version 2.4.7.
- The SQLite database is updated to version 3.37.2.
- The cURL library is updated to version 7.81.0.
- The OpenSSL package is updated to version openssl-1.0.2ze.
- The ESXi userworld libxml2 library is updated to version 2.9.14.
- The Python package is updated to 3.8.13.
- The zlib library is updated to 1.2.12.
- This release resolves CVE-2004-0230. VMware has evaluated the severity of this issue to be in the low severity range with a maximum CVSSv3 base score of 3.7.
- This release resolves CVE-2020-7451. VMware has evaluated the severity of this issue to be in the moderate severity range with a maximum CVSSv3 base score of 5.3.
- This release resolves CVE-2015-2923. VMware has evaluated the severity of this issue to be in the moderate severity range with a maximum CVSSv3 base score of 6.5.
- This release resolves CVE-2015-5358. VMware has evaluated the severity of this issue to be in the moderate severity range with a maximum CVSSv3 base score of 6.5.
- This release resolves CVE-2013-3077. VMware has evaluated the severity of this issue to be in the important severity range with a maximum CVSSv3 base score of 7.0.
- This release resolves CVE-2015-1414. VMware has evaluated the severity of this issue to be in the important severity range with a maximum CVSSv3 base score of 7.5.
- This release resolves CVE-2018-6918. VMware has evaluated the severity of this issue to be in the important severity range with a maximum CVSSv3 base score of 7.5.
- This release resolves CVE-2020-7469. VMware has evaluated the severity of this issue to be in the important severity range with a maximum CVSSv3 base score of 7.5.
- This release resolves CVE-2019-5611. VMware has evaluated the severity of this issue to be in the important severity range with a maximum CVSSv3 base score of 7.5.
- This release resolves CVE-2020-7457. VMware has evaluated the severity of this issue to be in the important severity range with a maximum CVSSv3 base score of 7.8.
- This release resolves CVE-2018-6916. VMware has evaluated the severity of this issue to be in the important severity range with a maximum CVSSv3 base score of 8.1.
- This release resolves CVE-2019-5608. VMware has evaluated the severity of this issue to be in the important severity range with a maximum CVSSv3 base score of 8.1.
-
This release resolves CVE-2022-23816, CVE-2022-23825, CVE-2022-28693, and CVE-2022-29901. For more information on these vulnerabilities and their impact on VMware products, see VMSA-2022-0020.
-
Name | ESXi |
Version | ESXi70U3f-20036589 |
Release Date | July 12, 2022 |
Category | Bugfix |
Affected Components |
|
PRs Fixed | 2916848, 2817702, 2937074, 2957732, 2984134, 2944807, 2985369, 2907963, 2949801, 2984245, 2911320 , 2915911, 2990593, 2992648, 2905357, 2980176, 2974472, 2974376, 2972059, 2973031, 2967359, 2860126, 2878245, 2876044, 2968157, 2960242, 2970238, 2966270, 2966783, 2963328, 2939395, 2966362, 2946550, 2958543, 2966628, 2961584, 2911056, 2965235, 2952427, 2963401, 2965146, 2963038, 2963479, 2949375, 2961033, 2958926, 2839515, 2951139, 2878224, 2954320, 2952432, 2961346, 2857932, 2960949, 2960882, 2957966, 2929443, 2956080, 2959293, 2944919, 2948800, 2928789, 2957769, 2928268, 2957062, 2792139, 2934102, 2953709, 2950686, 2953488, 2949446, 2955016, 2953217, 2956030, 2949902, 2944894, 2944521, 2911363, 2952205, 2894093, 2910856, 2953754, 2949777, 2925733, 2951234, 2915979, 2904241, 2935355, 2941263, 2912661, 2891231, 2928202, 2928268, 2867146, 2244126, 2912330, 2898858, 2906297, 2912213, 2910340, 2745800, 2912182, 2941274, 2912230, 2699748, 2882789, 2869031, 2913017, 2864375, 2929821, 2957673, 2921564, 2925133, 2965277, 2980176 |
Related CVE numbers | N/A |
Name | ESXi |
Version | ESXi70U3sf-20036586 |
Release Date | July 12, 2022 |
Category | Security |
Affected Components |
|
PRs Fixed | 2920287, 2946147, 2946217, 2946222, 2946671, 2946863, 2947156, 2951864, 2951866, 2972147, 2972151 |
Related CVE numbers | CVE-2004-0230, CVE-2020-7451, CVE-2015-2923, CVE-2015-5358, CVE-2013-3077, CVE-2015-1414, CVE-2018-6918, CVE-2020-7469, CVE-2019-5611, CVE-2020-7457, CVE-2018-6916, CVE-2019-5608, CVE-2022-23816, CVE-2022-23825, CVE-2022-28693, CVE-2022-29901 |
Known Issues
The known issues are grouped as follows.
vSphere Client Issues
- BIOS manufacturer displays as "--" in the vSphere Client
In the vSphere Client, when you select an ESXi host and navigate to Configure > Hardware > Firmware, you see
--
instead of the BIOS manufacturer name.Workaround: For more information, see VMware knowledge base article 88937.
- USB device passthrough from ESXi hosts to virtual machines might fail
The VMkernel might simultaneously claim multiple interfaces of a USB modem device, which blocks passthrough of the device to VMs.
Workaround: You must apply the USB.quirks advanced configuration on the ESXi host so that the VMkernel ignores the NET interface and the USB modem can pass through to VMs. You can apply the configuration in one of the following three ways:
- Access the ESXi shell and run the following command:
esxcli system settings advanced set -o /USB/quirks -s 0xvvvv:0xpppp:0:0xffff:UQ_NET_IGNORE
where 0xvvvv is the device vendor ID and 0xpppp is the device product ID.
For example, for the Gemalto M2M GmbH Zoom 4625 Modem (vid:pid 1e2d:005b), the command is:
esxcli system settings advanced set -o /USB/quirks -s 0x1e2d:0x005b:0:0xffff:UQ_NET_IGNORE
Then reboot the ESXi host.
- Set the advanced configuration from the vSphere Client or the vSphere Web Client and reboot the ESXi host.
- Use a Host Profile to apply the advanced configuration.
For more information on the steps, see VMware knowledge base article 80416.