ESXi 7.0 Update 3d | 29 MAR 2022 | ISO Build 19482537
Check for additions and updates to these release notes.
What's in the Release Notes
The release notes cover the following topics:
- What's New
- Earlier Releases of ESXi 7.0
- Patches Contained in this Release
- Resolved Issues
- Known Issues
- Known Issues from Previous Releases
IMPORTANT: If your source system contains the ESXi 7.0 Update 2 release (build number 17630552) or later builds with Intel drivers, before upgrading to ESXi 7.0 Update 3d, see the What's New section of the VMware vCenter Server 7.0 Update 3c Release Notes, because all content in that section is also applicable to vSphere 7.0 Update 3d. Also, see the related VMware knowledge base articles: 86447, 87258, and 87308.
What's New
- ESXi 7.0 Update 3d supports vSphere Quick Boot on the following servers:
- Dell Inc. C6420 vSAN Ready Node
- Dell Inc. MX740C vSAN Ready Node
- Dell Inc. MX750C vSAN Ready Node
- Dell Inc. PowerEdge R750xa
- Dell Inc. PowerEdge R750xs
- Dell Inc. PowerEdge T550
- Dell Inc. R650 vSAN Ready Node
- Dell Inc. R6515 vSAN Ready Node
- Dell Inc. R740 vSAN Ready Node
- Dell Inc. R750 vSAN Ready Node
- Dell Inc. R7515 vSAN Ready Node
- Dell Inc. R840 vSAN Ready Node
- Dell Inc. VxRail E660
- Dell Inc. VxRail E660F
- Dell Inc. VxRail E660N
- Dell Inc. VxRail E665
- Dell Inc. VxRail E665F
- Dell Inc. VxRail E665N
- Dell Inc. VxRail G560
- Dell Inc. VxRail G560F
- Dell Inc. VxRail P580N
- Dell Inc. VxRail P670F
- Dell Inc. VxRail P670N
- Dell Inc. VxRail P675F
- Dell Inc. VxRail P675N
- Dell Inc. VxRail S670
- Dell Inc. VxRail V670F
Earlier Releases of ESXi 7.0
New features, resolved issues, and known issues of ESXi are described in the release notes for each release. Release notes for earlier releases of ESXi 7.0 are:
- VMware ESXi 7.0, ESXi 7.0 Update 3c Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 2d Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 2c Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 2a Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 2 Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 1d Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 1c Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 1b Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 1a Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 1 Release Notes
- VMware ESXi 7.0, ESXi 7.0b Release Notes
For internationalization, compatibility, and open source components, see the VMware vSphere 7.0 Release Notes.
Patches Contained in This Release
This release of ESXi 7.0 Update 3d delivers the following patches:
Build Details
Download Filename: | VMware-ESXi-7.0U3d-19482537-depot.zip |
Build: | 19482537 |
Download Size: | 586.8 MB |
md5sum: | 22fca2ef1dc38f490d1635926a86eb02 |
sha256checksum: | 2ef5b43b4e9d64a9f48c7ea0ee8561f7619c9ab54e874974b7f0165607a2355a |
Host Reboot Required: | Yes |
Virtual Machine Migration or Shutdown Required: | Yes |
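Before staging the offline bundle, you can confirm that the download matches the checksums above. A minimal sketch, assuming a Linux workstation with coreutils (on macOS, shasum -a 256 produces the SHA-256 digest):

```
# Compare the output of each command with the values published in the Build Details table.
sha256sum VMware-ESXi-7.0U3d-19482537-depot.zip
# expected: 2ef5b43b4e9d64a9f48c7ea0ee8561f7619c9ab54e874974b7f0165607a2355a

md5sum VMware-ESXi-7.0U3d-19482537-depot.zip
# expected: 22fca2ef1dc38f490d1635926a86eb02
```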
Components
Component | Bulletin | Category | Severity |
---|---|---|---|
ESXi Component - core ESXi VIBs | ESXi_7.0.3-0.35.19482537 | Bugfix | Critical |
ESXi Install/Upgrade Component | esx-update_7.0.3-0.35.19482537 | Bugfix | Critical |
LSI NATIVE DRIVERS LSU Management Plugin | Broadcom-lsiv2-drivers-plugin_1.0.0-10vmw.703.0.35.19482537 | Bugfix | Critical |
VMware NVMe over TCP Driver | VMware-NVMeoF-TCP_1.0.0.1-1vmw.703.0.35.19482537 | Bugfix | Critical |
ESXi Component - core ESXi VIBs | ESXi_7.0.3-0.30.19482531 | Security | Critical |
ESXi Install/Upgrade Component | esx-update_7.0.3-0.30.19482531 | Security | Critical |
VMware-VM-Tools | VMware-VM-Tools_11.3.5.18557794-19482531 | Security | Critical |
IMPORTANT:
- Starting with vSphere 7.0, VMware uses components for packaging VIBs along with bulletins. The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline, or include the rollup bulletin in the baseline, to avoid failure during host patching.
- When patching ESXi hosts by using VMware Update Manager from a version prior to ESXi 7.0 Update 2, it is strongly recommended to use the rollup bulletin in the patch baseline. If you cannot use the rollup bulletin, be sure to include all of the following packages in the patching baseline. If the following packages are not included in the baseline, the update operation fails:
- VMware-vmkusb_0.1-1vmw.701.0.0.16850804 or higher
- VMware-vmkata_0.1-1vmw.701.0.0.16850804 or higher
- VMware-vmkfcoe_1.0.0.2-1vmw.701.0.0.16850804 or higher
- VMware-NVMeoF-RDMA_1.0.1.2-1vmw.701.0.0.16850804 or higher
Rollup Bulletin
This rollup bulletin contains the latest VIBs with all the fixes after the initial release of ESXi 7.0.
Bulletin ID | Category | Severity | Details |
---|---|---|---|
ESXi70U3d-19482537 | Bugfix | Critical | Bugfix and Security |
ESXi70U3sd-19482531 | Security | Critical | Security only |
Image Profiles
VMware patch and update releases contain general and critical image profiles. Application of the general release image profile applies to new bug fixes.
Image Profile Name |
---|
ESXi-7.0U3d-19482537-standard |
ESXi-7.0U3d-19482537-no-tools |
ESXi-7.0U3sd-19482531-standard |
ESXi-7.0U3sd-19482531-no-tools |
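If you plan to apply one of these profiles with ESXCLI rather than vSphere Lifecycle Manager, you can first list the profiles that the downloaded offline bundle contains. A minimal sketch; the datastore path is an assumption and must point to wherever you uploaded the depot ZIP:

```
# List the image profiles available in the offline bundle (run in the ESXi Shell or over SSH).
# The datastore path below is an example; adjust it to your environment.
esxcli software sources profile list \
  -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U3d-19482537-depot.zip
```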
ESXi Image
Name and Version | Release Date | Category | Detail |
---|---|---|---|
ESXi70U3d-19482537 | 29 MAR 2022 | Bugfix | Bugfix and Security image |
ESXi70U3sd-19482531 | 29 MAR 2022 | Security | Security only image |
For information about the individual components and bulletins, see the Product Patches page and the Resolved Issues section.
Patch Download and Installation
In vSphere 7.x, the Update Manager plug-in, used for administering vSphere Update Manager, is replaced with the Lifecycle Manager plug-in. Administrative operations for vSphere Update Manager are still available under the Lifecycle Manager plug-in, along with new capabilities for vSphere Lifecycle Manager.
The typical way to apply patches to ESXi 7.x hosts is by using the vSphere Lifecycle Manager. For details, see About vSphere Lifecycle Manager and vSphere Lifecycle Manager Baselines and Images.
You can also update ESXi hosts without using the Lifecycle Manager plug-in, and use an image profile instead. To do this, you must manually download the patch offline bundle ZIP file from VMware Customer Connect. From the Select a Product drop-down menu, select ESXi (Embedded and Installable) and from the Select a Version drop-down menu, select 7.0. For more information, see the Upgrading Hosts by Using ESXCLI Commands and the VMware ESXi Upgrade guide.
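For the image profile approach, the following is a minimal sketch of the ESXCLI workflow referenced above, assuming the depot ZIP has already been uploaded to a datastore (the path and datastore name are examples, not part of this release); see Upgrading Hosts by Using ESXCLI Commands for the full, supported procedure:

```
# Enter maintenance mode before patching the host.
esxcli system maintenanceMode set --enable true

# Apply the bugfix-and-security image profile from the offline bundle.
# The depot path is an example; adjust it to your environment.
esxcli software profile update \
  -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U3d-19482537-depot.zip \
  -p ESXi-7.0U3d-19482537-standard

# This release requires a host reboot (see Build Details above).
reboot
```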
Resolved Issues
The resolved issues are grouped as follows.
- ESXi_7.0.3-0.35.19482537
- esx-update_7.0.3-0.35.19482537
- Broadcom-lsiv2-drivers-plugin_1.0.0-10vmw.703.0.35.19482537
- VMware-NVMeoF-TCP_1.0.0.1-1vmw.703.0.35.19482537
- ESXi_7.0.3-0.30.19482531
- esx-update_7.0.3-0.30.19482531
- VMware-VM-Tools_11.3.5.18557794-19482531
- ESXi-7.0U3d-19482537-standard
- ESXi-7.0U3d-19482537-no-tools
- ESXi-7.0U3sd-19482531-standard
- ESXi-7.0U3sd-19482531-no-tools
- ESXi Image - ESXi70U3d-19482537
- ESXi Image - ESXi70U3sd-19482531
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2856149, 2854275, 2850065, 2855473, 2899795, 2892193, 2925847, 2849843, 2871515, 2896898, 2851531, 2927968, 2909411, 2878701, 2854493, 2890621, 2893225, 2886578, 2896305, 2869790, 2890559, 2834582, 2870586, 2865369, 2812683, 2870580, 2872144, 2846265, 2827728, 2873956, 2719181, 2859229, 2827728, 2820052, 2855810, 2859184, 2821515, 2808113, 2806622, 2855114, 2813609, 2825435, 2854588, 2868955, 2812731, 2848074, 2805651, 2827765, 2852726, 2830051, 2851725, 2853684, 2807138, 2862331, 2843918, 2827691, 2825746, 2835345, 2812704, 2826753, 2834958, 2851309, 2851221, 2854497, 2852726, 2854493 |
CVE numbers | N/A |
The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.
Updates the esx-dvfilter-generic-fastpath, vsanhealth, vdfs, vsan, esx-base, crx, native-misc-drivers, esx-xserver, gc, bmcal, esxio-combiner, trx, and cpu-microcode VIBs to resolve the following issues:
- NEW PR 2854493: Virtual machines might become unresponsive due to a deadlock issue in a VMFS volume
In some cases, if an I/O request runs in parallel with a delete operation triggered by the guest OS, a deadlock might occur in VMFS6 volumes; for example, when the guest OS on a thin-provisioned VM continuously runs unmap operations while it repeatedly writes and deletes files to allocate and release disk space. As a result, virtual machines on this volume become unresponsive.
This issue is resolved in this release.
- PR 2855241: Adding ESXi hosts to an Active Directory domain might take a long time
Some LDAP queries that have no specified timeouts might cause a significant delay in domain join operations for adding an ESXi host to an Active Directory domain.
This issue is resolved in this release. The fix adds a standard 15-second timeout and additional logging around the LDAP calls during the domain join workflow.
- PR 2834582: Concurrent power on of a large number of virtual machines might take a long time or fail
In certain environments, concurrent power on of a large number of VMs hosted on the same VMFS6 datastore might take a long time or fail. The time required to create swap files for all the VMs causes delays and might ultimately cause the power-on operations to fail.
This issue is resolved in this release. The fix enhances the VMFS6 resource allocation algorithm to prevent the issue.
- PR 2851811: Virtual machines might stop responding during power on or snapshot consolidation operations
A virtual machine might stop responding during a power on or snapshot consolidation operation, and you must reboot the ESXi host to restart the VM. The issue is rare and occurs while opening the VMDK file.
This issue is resolved in this release. However, the fix resolves an identified root cause and might not resolve all aspects of the issue. The fix adds logs with the tag AFF_OPEN_PATH to facilitate identifying and resolving an alternative root cause if you face the issue.
- PR 2865369: An ESXi host might fail with a purple diagnostic screen due to a very rare race condition in software iSCSI adapters
In very rare cases, a race condition in software iSCSI adapters might cause the ESXi host to fail with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2849843: Virtual machine storage panel statistics show VMname as null on ESXi servers running for more than 150 days
If an ESXi server runs for more than 150 days without a restart, the resource pool ID (GID) number might overflow uint32. As a result, in the VM storage panel statistics pane you might see GID as a negative number and VMname as null.
This issue is resolved in this release. The fix changes the GID variable to uint64.
- PR 2850065: The VMkernel might shut down virtual machines due to a vCPU timer issue
On rare occasions, the VMkernel might consider a virtual machine unresponsive because it fails to send PCPU heartbeats properly, and shut the VM down. In the vmkernel.log file, you see messages such as:
2021-05-28T21:39:59.895Z cpu68:1001449770)ALERT: Heartbeat: HandleLockup:827: PCPU 8 didn't have a heartbeat for 5 seconds, timeout is 14, 1 IPIs sent; *may* be locked up.
2021-05-28T21:39:59.895Z cpu8:1001449713)WARNING: World: vm 1001449713: PanicWork:8430: vmm3:VM_NAME:vcpu-3:Received VMkernel NMI IPI, possible CPU lockup while executing HV VT VM
The issue is due to a rare race condition in vCPU timers. Because the race is per-vCPU, larger VMs are more exposed to the issue.
This issue is resolved in this release.
- PR 2851400: When vSphere Replication is enabled on a virtual machine, many other VMs might become unresponsive
When vSphere Replication is enabled on a virtual machine, you might see higher datastore and in-guest latencies that in certain cases might lead to ESXi hosts becoming unresponsive to vCenter Server. The increased latency comes from vSphere Replication computing MD5 checksums on the I/O completion path, which delays all other I/Os.
This issue is resolved in this release. The fix offloads the vSphere Replication MD5 calculation from the I/O completion path to a work pool and reduces the amount of outstanding I/O that vSphere Replication issues.
- PR 2859882: Network interface entries republished with old format
After a reboot, the lead host of a vSAN cluster might have a new format for network interface entries. The new format might not propagate to some entries. For example, interface entries in the local update queue of the lead host.
This issue is resolved in this release.
- PR 2859643: You see sfcb core dumps during planned removal of NVMe devices
To optimize the processing of queries related to PCI devices, SFCB maintains a list of the PCI devices in a cache. However, when you remove an NVMe device, even with a planned workflow, the cache might not get refreshed. As a result, you see sfcb core dumps since the lookup for the removed device fails.
This issue is resolved in this release. The fix makes sure that SFCB refreshes the cache on any change in the PCI devices list.
- PR 2851531: ESXi hosts in environments using uplink and teaming policies might lose connectivity after remediation by applying a host profile
When you remediate ESXi hosts by using a host profile, network settings might fail to apply due to a logic fault in the check of the number of uplink ports configured for the default teaming policy. If the uplink number check returns 0 while applying a host profile, the task fails. As a result, ESXi hosts lose connectivity after reboot.
This issue is resolved in this release. The fix refines the uplink number check and makes sure it returns an error only in specific conditions.
- PR 2869790: You do not see VMkernel network adapters after ESXi hosts reboot
In vSphere systems of version 7.0 Update 2 and later, where VMkernel network adapters are connected to multiple TCP/IP stacks, after ESXi hosts reboot, some of the adapters might not be restored. In the vSphere Client, when you navigate to Host > Configure > VMkernel Adapters, you see a message such as No items found. If you run the ESXCLI commands localcli network ip interface list or esxcfg-vmknic -l, you see the error Unable to get node: Not Found. In the hostd.log file, you see the same error.
This issue is resolved in this release.
- PR 2871577: If you disable and then re-enable migration operations by using vSphere vMotion, consecutive migrations might cause ESXi hosts to fail with a purple diagnostic screen
In specific cases, if you use vSphere vMotion on a vSphere system after disabling and re-enabling migration operations, ESXi hosts might fail with a purple diagnostic screen. For example, if you run the ESXCLI command esxcli system settings advanced set --option /Migrate/Enabled --int-value=0 to disable the feature and then esxcli system settings advanced set --option /Migrate/Enabled --default to enable it, any migration after that might cause the issue (see the command sketch after this list).
This issue is resolved in this release.
- PR 2914095: High CPU utilization after a non-disruptive upgrade (NDU) of a storage array firmware
ESXi hosts might experience more than 90% CPU utilization after an NDU upgrade of a storage array firmware, for example a PowerStore VASA provider. The high CPU usage eventually settles down but, in some cases, might take long, even more than 24 hours.
This issue is resolved in this release.
- PR 2871515: ESXi hosts lose IPv6 DNS after a VMkernel port migration
After a VMkernel port migration from a standard virtual switch to a vSphere Distributed Virtual Switch (VDS) or from one VDS to another, ESXi hosts might lose their IPv6 DNS.
This issue is resolved in this release. The fix makes sure that during a VMkernel port migration IPv6 nameservers are added or removed one at a time to avoid removing them all in certain environments.
- PR 2852173: ESXi hosts might fail with a purple diagnostic screen due to insufficient socket buffer space
ESXi management daemons that generate high volumes of log messages might impact operational communication between user-level components due to insufficient socket buffer space. As a result, ESXi hosts might fail with a purple diagnostic screen with a message such as nicmgmtd: Cannot allocate a new data segment, out of memory.
This issue is resolved in this release. The fix allocates low-level socket space, shared between components, separately from application buffer space.
- PR 2872509: You cannot see the status of objects during a Resyncing objects task
In the vSphere Client, when you select Resyncing objects under a vSAN cluster > Monitor > vSAN, you do not see the status of objects that are being resynchronized. Instead, you see the error Failed to extract requested data. Check vSphere Client logs for details.
This issue is resolved in this release.
- PR 2878701: In the VMware Host Client, you see an error that no sensor data is available
In the VMware Host Client, you see the error No sensor data available due to an issue with the DateTime formatting. In the backtrace, you see logs such as:
hostd[1051205] [Originator@6876 sub=Cimsvc] Refresh hardware status failed N7Vmacore23DateTimeFormatExceptionE(Error formatting DateTime)
This issue is resolved in this release.
- PR 2847291: Management accounts of VMware Cloud Director on Dell EMC VxRail might be deleted during host profile remediation
When you create a host profile, service accounts for VMware Cloud Director on Dell EMC VxRail are automatically created and might be deleted during a remediation of the host profile.
This issue is resolved in this release. The fix makes sure that service accounts for VMware Cloud Director on Dell EMC VxRail do not depend on host profile operations.
- PR 2884344: vSAN host configuration does not match vCenter Server when a native key provider is unhealthy
When the status of a native key provider for vSAN encryption is unhealthy, the remediation workflow might be blocked. vCenter Server cannot synchronize its configuration settings with the vSAN hosts until the block is cleared.
This issue is resolved in this release.
- PR 2854558: Cannot enable vSAN encryption by using a native key provider when hosts are behind a proxy
When you place vSAN hosts behind a proxy server, vSAN cannot determine the health of the native key provider. As a result, you cannot enable vSAN encryption by using the native key provider. You might see the following message:
Key provider is not available on host.
This issue is resolved in this release.
- PR 2859229: You see compliance check error for hosts in a vSAN HCI Mesh cluster
Hosts in a vSAN cluster with HCI Mesh enabled might experience the following compliance check error:
Unable to gather datastore name from Host.
This issue is resolved in this release.
- PR 2840405: You cannot change the resource pool size of WBEM providers
Names of WBEM providers can be different from their resource group name. In such cases, commands such as esxcli system wbem set --rp-override fail to change the existing configuration, because the method to change a resource pool size also checks the resource group name.
This issue is resolved in this release. The fix removes the check between WBEM provider names and resource group names.
- PR 2897700: If data in transit encryption is enabled on a vSAN cluster, ESXi hosts might fail with a purple diagnostic screen
If data in transit encryption is enabled on a vSAN cluster, and other system traffic types, such as vSphere vMotion traffic or vSphere HA traffic, route to a port used by vSAN, ESXi hosts might fail with an error such as PSOD: #PF Exception 14 in world 1000215083:rdtNetworkWo IP 0x42000df1be47 addr 0x80 on a purple diagnostic screen.
This issue is resolved in this release.
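For reference, the command sequence described in PR 2871577 above, together with a check of the current option value. The list command is an assumption about how you might inspect the setting; on hosts without this patch, running the disable and re-enable sequence could trigger the failure described above, so treat this as an illustration rather than a recommended workflow:

```
# Inspect the current value of the /Migrate/Enabled advanced option.
esxcli system settings advanced list -o /Migrate/Enabled

# Commands quoted in PR 2871577: disable migration operations, then restore the default.
esxcli system settings advanced set --option /Migrate/Enabled --int-value=0
esxcli system settings advanced set --option /Migrate/Enabled --default
```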
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the loadesx and esx-update VIBs.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2925847 |
CVE numbers | N/A |
Updates the lsuv2-lsiv2-drivers-plugin VIB to resolve the following issue:
- PR 2925847: After ESXi update to 7.0 Update 3 or later, the VPXA service fails to start and ESXi hosts disconnect from vCenter Server
After updating ESXi to 7.0 Update 3 or later, hosts might disconnect from vCenter Server, and when you try to reconnect a host by using the vSphere Client, you see an error such as A general system error occurred: Timed out waiting for vpxa to start. The VPXA service also fails to start when you use the command /etc/init.d/vpxa start. The issue affects environments with RAIDs that contain more than 15 physical devices. The lsuv2-lsiv2-drivers-plugin can manage up to 15 physical disks, and RAIDs with more devices cause an overflow that prevents VPXA from starting.
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the nvmetcp VIB.
Patch Category | Security |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2856149, 2854275, 2850065, 2855473, 2899795, 2892193, 2925847, 2849843, 2871515 |
CVE numbers | N/A |
The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.
Updates the esx-dvfilter-generic-fastpath, vsanhealth, vdfs, vsan, esx-base, crx, native-misc-drivers, esx-xserver, gc, bmcal, esxio-combiner, trx, and cpu-microcode VIBs to resolve the following issues:
- ESXi 7.0 Update 3d provides the following security updates:
- cURL is updated to version 7.79.1.
- OpenSSH is updated to version 8.8p1.
- OpenSSL is updated to version 1.0.2zb.
Patch Category | Security |
Patch Severity | Critical |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the loadesx and esx-update VIBs.
Patch Category | Security |
Patch Severity | Critical |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2816546 |
CVE numbers | N/A |
Updates the tools-light VIB to resolve the following issue:
- The following VMware Tools ISO images are bundled with ESXi 7.0 Update 3d:
  - windows.iso: VMware Tools 11.3.5 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.
  - linux.iso: VMware Tools 10.3.23 ISO image for Linux OS with glibc 2.11 or later.
- The following VMware Tools ISO images are available for download:
  - VMware Tools 11.0.6:
    - windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).
  - VMware Tools 10.0.12:
    - winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003.
    - linuxPreGLibc25.iso: supports Linux guest operating systems earlier than Red Hat Enterprise Linux (RHEL) 5, SUSE Linux Enterprise Server (SLES) 11, Ubuntu 7.04, and other distributions with glibc version earlier than 2.5.
  - solaris.iso: VMware Tools image 10.3.10 for Solaris.
  - darwin.iso: supports Mac OS X versions 10.11 and later.
- Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
-
Profile Name | ESXi-70U3d-19482537-standard |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | March 29, 2022 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2856149, 2854275, 2850065, 2855473, 2899795, 2892193, 2925847, 2849843, 2871515, 2896898, 2851531, 2927968, 2909411, 2878701, 2854493, 2890621, 2893225, 2886578, 2896305, 2869790, 2890559, 2834582, 2870586, 2865369, 2812683, 2870580, 2872144, 2846265, 2827728, 2873956, 2719181, 2859229, 2827728, 2820052, 2855810, 2859184, 2821515, 2808113, 2806622, 2855114, 2813609, 2825435, 2854588, 2868955, 2812731, 2848074, 2805651, 2827765, 2852726, 2830051, 2851725, 2853684, 2807138, 2862331, 2843918, 2827691, 2825746, 2835345, 2812704, 2826753, 2834958, 2851309, 2851221, 2854497, 2852726, 2925847, 2854493 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
In some cases, if an I/O request runs in parallel with a delete operation triggered by the guest OS, a deadlock might occur in a VMFS volume. As a result, virtual machines on this volume become unresponsive.
-
Some LDAP queries that have no specified timeouts might cause a significant delay in domain join operations for adding an ESXi host to an Active Directory domain.
-
In certain environments, concurrent power on of a large number of VMs hosted on the same VMFS6 datastore might take a long time or fail. The time required to create swap files for all the VMs causes delays and might ultimately cause the power-on operations to fail.
-
A virtual machine might stop responding during a power on or snapshot consolidation operation and you must reboot the ESXi host to restart the VM. The issue is rare and occurs while opening the VMDK file.
-
In very rare cases, a race condition in software iSCSI adapters might cause the ESXi host to fail with a purple diagnostic screen.
-
If an ESXi server runs for more than 150 days without a restart, the resource pool ID (GID) number might overflow uint32. As a result, in the VM storage panel statistics pane you might see GID as a negative number and VMname as null.
-
On rare occasions, the VMkernel might consider a virtual machine unresponsive because it fails to send PCPU heartbeats properly, and shut the VM down. In the vmkernel.log file, you see messages such as:
2021-05-28T21:39:59.895Z cpu68:1001449770)ALERT: Heartbeat: HandleLockup:827: PCPU 8 didn't have a heartbeat for 5 seconds, timeout is 14, 1 IPIs sent; *may* be locked up.
2021-05-28T21:39:59.895Z cpu8:1001449713)WARNING: World: vm 1001449713: PanicWork:8430: vmm3:VM_NAME:vcpu-3:Received VMkernel NMI IPI, possible CPU lockup while executing HV VT VM
The issue is due to a rare race condition in vCPU timers. Because the race is per-vCPU, larger VMs are more exposed to the issue.
-
When vSphere Replication is enabled on a virtual machine, you might see higher datastore and in-guest latencies that in certain cases might lead to ESXi hosts becoming unresponsive to vCenter Server. The increased latency comes from vSphere Replication computing MD5 checksums on the I/O completion path, which delays all other I/Os.
-
After a reboot, the lead host of a vSAN cluster might have a new format for network interface entries. The new format might not propagate to some entries. For example, interface entries in the local update queue of the lead host.
-
To optimize the processing of queries related to PCI devices, SFCB maintains a list of the PCI devices in a cache. However, when you remove an NVMe device, even with a planned workflow, the cache might not get refreshed. As a result, you see sfcb core dumps since the lookup for the removed device fails.
-
When you remediate ESXi hosts by using a host profile, network settings might fail to apply due to a logic fault in the check of the number of uplink ports configured for the default teaming policy. If the uplink number check returns 0 while applying a host profile, the task fails. As a result, ESXi hosts lose connectivity after reboot.
-
In vSphere systems where VMkernel network adapters are connected to multiple TCP/IP stacks, after ESXi hosts reboot, some of the adapters might not be restored. In the vSphere Client, when you navigate to Host > Configure > VMkernel Adapters, you see a message such as No items found. If you run the ESXCLI commands localcli network ip interface list or esxcfg-vmknic -l, you see the error Unable to get node: Not Found. In the hostd.log file, you see the same error.
-
In specific cases, if you use vSphere vMotion on a vSphere system after disabling and re-enabling migration operations, ESXi hosts might fail with a purple diagnostic screen. For example, if you run the ESXCLI command esxcli system settings advanced set --option /Migrate/Enabled --int-value=0 to disable the feature and then esxcli system settings advanced set --option /Migrate/Enabled --default to enable it, any migration after that might cause the issue.
-
ESXi hosts might experience more than 90% CPU utilization after an NDU upgrade of a storage array firmware, for example a PowerStore VASA provider. The high CPU usage eventually settles down but, in some cases, might take long, even more than 24 hours.
-
After a VMkernel port migration from a standard virtual switch to a vSphere Distributed Virtual Switch (VDS) or from one VDS to another, ESXi hosts might lose their IPv6 DNS.
-
ESXi management daemons that generate high volumes of log messages might impact operational communication between user-level components due to insufficient socket buffer space. As a result, ESXi hosts might fail with a purple diagnostic screen with a message such as nicmgmtd: Cannot allocate a new data segment, out of memory.
-
In the vSphere Client, when you select Resyncing objects under a vSAN cluster > Monitor > vSAN, you do not see the status of objects that are being resynchronized. Instead, you see the error Failed to extract requested data. Check vSphere Client logs for details.
-
In the VMware Host Client, you see the error No sensor data available due to an issue with the DateTime formatting. In the backtrace, you see logs such as:
hostd[1051205] [Originator@6876 sub=Cimsvc] Refresh hardware status failed N7Vmacore23DateTimeFormatExceptionE(Error formatting DateTime)
-
When you create a host profile, service accounts for VMware Cloud Director on Dell EMC VxRail are automatically created and might be deleted during a remediation of the host profile.
-
When the status of a native key provider for vSAN encryption is unhealthy, the remediation workflow might be blocked. vCenter Server cannot synchronize its configuration settings with the vSAN hosts until the block is cleared.
-
When you place vSAN hosts behind a proxy server, vSAN cannot determine the health of the native key provider. As a result, you cannot enable vSAN encryption by using the native key provider. You might see the following message:
Key provider is not available on host.
-
After updating ESXi to 7.0 Update 3 or later, hosts might disconnect from vCenter Server, and when you try to reconnect a host by using the vSphere Client, you see an error such as A general system error occurred: Timed out waiting for vpxa to start. The VPXA service also fails to start when you use the command /etc/init.d/vpxa start. The issue affects environments with RAIDs that contain more than 15 physical devices. The lsuv2-lsiv2-drivers-plugin can manage up to 15 physical disks, and RAIDs with more devices cause an overflow that prevents VPXA from starting.
-
Hosts in a vSAN cluster with HCI Mesh enabled might experience the following compliance check error:
Unable to gather datastore name from Host.
-
Names of WBEM providers can be different from their resource group name. In such cases, commands such as esxcli system wbem set --rp-override fail to change the existing configuration, because the method to change a resource pool size also checks the resource group name.
-
If data in transit encryption is enabled on a vSAN cluster, and other system traffic types, such as vSphere vMotion traffic or vSphere HA traffic, route to a port used by vSAN, ESXi hosts might fail with an error such as PSOD: #PF Exception 14 in world 1000215083:rdtNetworkWo IP 0x42000df1be47 addr 0x80 on a purple diagnostic screen.
-
Profile Name | ESXi-70U3d-19482537-no-tools |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | March 29, 2022 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2856149, 2854275, 2850065, 2855473, 2899795, 2892193, 2925847, 2849843, 2871515, 2896898, 2851531, 2927968, 2909411, 2878701, 2854493, 2890621, 2893225, 2886578, 2896305, 2869790, 2890559, 2834582, 2870586, 2865369, 2812683, 2870580, 2872144, 2846265, 2827728, 2873956, 2719181, 2859229, 2827728, 2820052, 2855810, 2859184, 2821515, 2808113, 2806622, 2855114, 2813609, 2825435, 2854588, 2868955, 2812731, 2848074, 2805651, 2827765, 2852726, 2830051, 2851725, 2853684, 2807138, 2862331, 2843918, 2827691, 2825746, 2835345, 2812704, 2826753, 2834958, 2851309, 2851221, 2854497, 2852726, 2925847, 2854493 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
In some cases, if an I/O request runs in parallel with a delete operation triggered by the guest OS, a deadlock might occur in a VMFS volume. As a result, virtual machines on this volume become unresponsive.
-
Some LDAP queries that have no specified timeouts might cause a significant delay in domain join operations for adding an ESXi host to an Active Directory domain.
-
In certain environments, concurrent power on of a large number of VMs hosted on the same VMFS6 datastore might take a long time or fail. The time required to create swap files for all the VMs causes delays and might ultimately cause the power-on operations to fail.
-
A virtual machine might stop responding during a power on or snapshot consolidation operation and you must reboot the ESXi host to restart the VM. The issue is rare and occurs while opening the VMDK file.
-
In very rare cases, a race condition in software iSCSI adapters might cause the ESXi host to fail with a purple diagnostic screen.
-
If an ESXi server runs for more than 150 days without a restart, the resource pool ID (GID) number might overflow uint32. As a result, in the VM storage panel statistics pane you might see GID as a negative number and VMname as null.
-
On rare occasions, the VMkernel might consider a virtual machine unresponsive because it fails to send PCPU heartbeats properly, and shut the VM down. In the vmkernel.log file, you see messages such as:
2021-05-28T21:39:59.895Z cpu68:1001449770)ALERT: Heartbeat: HandleLockup:827: PCPU 8 didn't have a heartbeat for 5 seconds, timeout is 14, 1 IPIs sent; *may* be locked up.
2021-05-28T21:39:59.895Z cpu8:1001449713)WARNING: World: vm 1001449713: PanicWork:8430: vmm3:VM_NAME:vcpu-3:Received VMkernel NMI IPI, possible CPU lockup while executing HV VT VM
The issue is due to a rare race condition in vCPU timers. Because the race is per-vCPU, larger VMs are more exposed to the issue.
-
When vSphere Replication is enabled on a virtual machine, you might see higher datastore and in-guest latencies that in certain cases might lead to ESXi hosts becoming unresponsive to vCenter Server. The increased latency comes from vSphere Replication computing MD5 checksums on the I/O completion path, which delays all other I/Os.
-
After a reboot, the lead host of a vSAN cluster might have a new format for network interface entries. The new format might not propagate to some entries. For example, interface entries in the local update queue of the lead host.
-
To optimize the processing of queries related to PCI devices, SFCB maintains a list of the PCI devices in a cache. However, when you remove an NVMe device, even with a planned workflow, the cache might not get refreshed. As a result, you see sfcb core dumps since the lookup for the removed device fails.
-
When you remediate ESXi hosts by using a host profile, network settings might fail to apply due to a logic fault in the check of the number of uplink ports configured for the default teaming policy. If the uplink number check returns 0 while applying a host profile, the task fails. As a result, ESXi hosts lose connectivity after reboot.
-
In vSphere systems where VMkernel network adapters are connected to multiple TCP/IP stacks, after ESXi hosts reboot, some of the adapters might not be restored. In the vSphere Client, when you navigate to Host > Configure > VMkernel Adapters, you see a message such as No items found. If you run the ESXCLI commands localcli network ip interface list or esxcfg-vmknic -l, you see the error Unable to get node: Not Found. In the hostd.log file, you see the same error.
-
In specific cases, if you use vSphere vMotion on a vSphere system after disabling and re-enabling migration operations, ESXi hosts might fail with a purple diagnostic screen. For example, if you run the ESXCLI command esxcli system settings advanced set --option /Migrate/Enabled --int-value=0 to disable the feature and then esxcli system settings advanced set --option /Migrate/Enabled --default to enable it, any migration after that might cause the issue.
-
ESXi hosts might experience more than 90% CPU utilization after an NDU upgrade of a storage array firmware, for example a PowerStore VASA provider. The high CPU usage eventually settles down but, in some cases, might take long, even more than 24 hours.
-
After a VMkernel port migration from a standard virtual switch to a vSphere Distributed Virtual Switch (VDS) or from one VDS to another, ESXi hosts might lose their IPv6 DNS.
-
ESXi management daemons that generate high volumes of log messages might impact operational communication between user-level components due to insufficient socket buffer space. As a result, ESXi hosts might fail with a purple diagnostic screen with a message such as nicmgmtd: Cannot allocate a new data segment, out of memory.
-
In the vSphere Client, when you select Resyncing objects under a vSAN cluster > Monitor > vSAN, you do not see the status of objects that are being resynchronized. Instead, you see the error Failed to extract requested data. Check vSphere Client logs for details.
-
In the VMware Host Client, you see the error No sensor data available due to an issue with the DateTime formatting. In the backtrace, you see logs such as:
hostd[1051205] [Originator@6876 sub=Cimsvc] Refresh hardware status failed N7Vmacore23DateTimeFormatExceptionE(Error formatting DateTime)
-
When you create a host profile, service accounts for VMware Cloud Director on Dell EMC VxRail are automatically created and might be deleted during a remediation of the host profile.
-
When the status of a native key provider for vSAN encryption is unhealthy, the remediation workflow might be blocked. vCenter Server cannot synchronize its configuration settings with the vSAN hosts until the block is cleared.
-
When you place vSAN hosts behind a proxy server, vSAN cannot determine the health of the native key provider. As a result, you cannot enable vSAN encryption by using the native key provider. You might see the following message:
Key provider is not available on host.
-
After updating ESXi to 7.0 Update 3 or later, hosts might disconnect from vCenter Server, and when you try to reconnect a host by using the vSphere Client, you see an error such as A general system error occurred: Timed out waiting for vpxa to start. The VPXA service also fails to start when you use the command /etc/init.d/vpxa start. The issue affects environments with RAIDs that contain more than 15 physical devices. The lsuv2-lsiv2-drivers-plugin can manage up to 15 physical disks, and RAIDs with more devices cause an overflow that prevents VPXA from starting.
-
Hosts in a vSAN cluster with HCI Mesh enabled might experience the following compliance check error:
Unable to gather datastore name from Host.
-
Names of WBEM providers can be different from their resource group name. In such cases, commands such as esxcli system wbem set --rp-override fail to change the existing configuration, because the method to change a resource pool size also checks the resource group name.
-
If data in transit encryption is enabled on a vSAN cluster, and other system traffic types, such as vSphere vMotion traffic or vSphere HA traffic, route to a port used by vSAN, ESXi hosts might fail with an error such as PSOD: #PF Exception 14 in world 1000215083:rdtNetworkWo IP 0x42000df1be47 addr 0x80 on a purple diagnostic screen.
-
Profile Name | ESXi-70U3sd-19482531-standard |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | March 29, 2022 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2856149, 2854275, 2850065, 2855473, 2899795, 2892193, 2925847, 2849843, 2871515, 2816546 |
Related CVE numbers | N/A |
- This patch updates the following issues:
- cURL is updated to version 7.79.1.
- OpenSSH is updated to version 8.8p1.
- OpenSSL is updated to version 1.0.2zb.
- The following VMware Tools ISO images are bundled with ESXi 7.0 Update 3d:
  - windows.iso: VMware Tools 11.3.5 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.
  - linux.iso: VMware Tools 10.3.23 ISO image for Linux OS with glibc 2.11 or later.
- The following VMware Tools ISO images are available for download:
  - VMware Tools 11.0.6:
    - windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).
  - VMware Tools 10.0.12:
    - winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003.
    - linuxPreGLibc25.iso: supports Linux guest operating systems earlier than Red Hat Enterprise Linux (RHEL) 5, SUSE Linux Enterprise Server (SLES) 11, Ubuntu 7.04, and other distributions with glibc version earlier than 2.5.
  - solaris.iso: VMware Tools image 10.3.10 for Solaris.
  - darwin.iso: supports Mac OS X versions 10.11 and later.
- Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
Profile Name | ESXi-70U3sd-19482531-no-tools |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | March 29, 2022 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2856149, 2854275, 2850065, 2855473, 2899795, 2892193, 2925847, 2849843, 2871515, 2816546 |
Related CVE numbers | N/A |
- This patch updates the following issues:
- cURL is updated to version 7.79.1.
- OpenSSH is updated to version 8.8p1.
- OpenSSL is updated to version 1.0.2zb.
- The following VMware Tools ISO images are bundled with ESXi 7.0 Update 3d:
  - windows.iso: VMware Tools 11.3.5 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.
  - linux.iso: VMware Tools 10.3.23 ISO image for Linux OS with glibc 2.11 or later.
- The following VMware Tools ISO images are available for download:
  - VMware Tools 11.0.6:
    - windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).
  - VMware Tools 10.0.12:
    - winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003.
    - linuxPreGLibc25.iso: supports Linux guest operating systems earlier than Red Hat Enterprise Linux (RHEL) 5, SUSE Linux Enterprise Server (SLES) 11, Ubuntu 7.04, and other distributions with glibc version earlier than 2.5.
  - solaris.iso: VMware Tools image 10.3.10 for Solaris.
  - darwin.iso: supports Mac OS X versions 10.11 and later.
- Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
Name | ESXi |
Version | 70U3d-19482537 |
Release Date | March 29, 2022 |
Category | Bugfix |
Affected Components |
|
PRs Fixed | 2856149, 2854275, 2850065, 2855473, 2899795, 2892193, 2925847, 2849843, 2871515, 2896898, 2851531, 2927968, 2909411, 2878701, 2854493, 2890621, 2893225, 2886578, 2896305, 2869790, 2890559, 2834582, 2870586, 2865369, 2812683, 2870580, 2872144, 2846265, 2827728, 2873956, 2719181, 2859229, 2827728, 2820052, 2855810, 2859184, 2821515, 2808113, 2806622, 2855114, 2813609, 2825435, 2854588, 2868955, 2812731, 2848074, 2805651, 2827765, 2852726, 2830051, 2851725, 2853684, 2807138, 2862331, 2843918, 2827691, 2825746, 2835345, 2812704, 2826753, 2834958, 2851309, 2851221, 2854497, 2852726, 2925847 |
Related CVE numbers | N/A |
Name | ESXi |
Version | 70U3sd-19482531 |
Release Date | March 29, 2022 |
Category | Security |
Affected Components |
|
PRs Fixed | 2856149, 2854275, 2850065, 2855473, 2899795, 2892193, 2925847, 2849843, 2871515, 2816546 |
Related CVE numbers | N/A |
Known Issues
The known issues are grouped as follows.
Miscellaneous Issues
- SSH access fails after you upgrade to ESXi 7.0 Update 3d
After you upgrade to ESXi 7.0 Update 3d, SSH access might fail in certain conditions due to an update of OpenSSH to version 8.8.
Workaround: For more information, see VMware knowledge base article 88055.