ESXi 7.0 Update 3l | MAR 30 2023 | ISO Build 21424296 |
Check for additions and updates to these release notes.
What's in the Release Notes
The release notes cover the following topics:
- What's New
- Earlier Releases of ESXi 7.0
- Patches Contained in this Release
- Product Support Notices
- Resolved Issues
- Known Issues
- Known Issues from Previous Releases
IMPORTANT: If your source system contains hosts of versions between ESXi 7.0 Update 2 and Update 3c, and Intel drivers, before upgrading to ESXi 7.0 Update 3l, see the What's New section of the VMware vCenter Server 7.0 Update 3c Release Notes, because all content in the section is also applicable for vSphere 7.0 Update 3l. Also, see the related VMware knowledge base articles: 86447, 87258, and 87308.
What's New
- ESXi 7.0 Update 3l supports vSphere Quick Boot on the following servers:
- HPE
- Cray XD225v
- Cray XD295v
- ProLiant DL325 Gen11
- ProLiant DL345 Gen11
- ProLiant DL365 Gen11
- ProLiant DL385 Gen11
- Synergy 480 Gen11
- Dell
- PowerEdge C6620
- PowerEdge MX760c
- PowerEdge R660
- PowerEdge R6615
- PowerEdge R6625
- PowerEdge R760
- PowerEdge R7615
- PowerEdge R7625
- This release resolves CVE-2023-1017. VMware has evaluated the severity of this issue to be in the low severity range with a maximum CVSSv3 base score of 3.3.
- This release resolves CVE-2023-1018. VMware has evaluated the severity of this issue to be in the low severity range with a maximum CVSSv3 base score of 3.3.
Earlier Releases of ESXi 7.0
New features, resolved issues, and known issues of ESXi are described in the release notes for each release. Release notes for earlier releases of ESXi 7.0 are:
- VMware ESXi 7.0, ESXi 7.0 Update 3k Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 3j Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 3i Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 3g Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 3f Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 3e Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 3d Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 2e Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 1e Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 3c Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 2d Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 2c Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 2a Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 2 Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 1d Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 1c Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 1b Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 1a Release Notes
- VMware ESXi 7.0, ESXi 7.0 Update 1 Release Notes
- VMware ESXi 7.0, ESXi 7.0b Release Notes
For internationalization, compatibility, and open source components, see the VMware vSphere 7.0 Release Notes.
Patches Contained in This Release
This release of ESXi 7.0 Update 3l delivers the following patches:
Build Details
Download Filename: | VMware-ESXi-7.0U3l-21424296-depot |
Build: | 21424296 |
Download Size: | 570.6 MB |
md5sum: | bc8be7994ff95cf45e218f19eb38a4f1 |
sha256checksum: | a85acc0fab15d5c2744e6b697817961fde3979eddfc4dd1af07c83731f2f87cf |
Host Reboot Required: | Yes |
Virtual Machine Migration or Shutdown Required: | Yes |
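To confirm that the offline depot downloaded intact, you can compare its checksums against the values above. A minimal sketch, run on a Linux management workstation; the .zip extension and file location are assumptions:
# Verify the downloaded depot against the published checksums (filename extension assumed)
md5sum VMware-ESXi-7.0U3l-21424296-depot.zip        # expect bc8be7994ff95cf45e218f19eb38a4f1
sha256sum VMware-ESXi-7.0U3l-21424296-depot.zip     # expect a85acc0fab15d5c2744e6b697817961fde3979eddfc4dd1af07c83731f2f87cf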
Components
Component | Bulletin | Category | Severity |
---|---|---|---|
ESXi Component - core ESXi VIBs | ESXi_7.0.3-0.85.21424296 | Bugfix | Critical |
ESXi Install/Upgrade Component | esx-update_7.0.3-0.85.21424296 | Bugfix | Critical |
Broadcom NetXtreme I ESX VMKAPI ethernet driver | Broadcom-ntg3_4.1.9.0-4vmw.703.0.85.21424296 | Bugfix | Critical |
Non-Volatile memory controller driver | VMware-NVMe-PCIe_1.2.3.16-2vmw.703.0.85.21424296 | Bugfix | Critical |
USB Native Driver for VMware | VMware-vmkusb_0.1-8vmw.703.0.85.21424296 | Bugfix | Critical |
ESXi Component - core ESXi VIBs | ESXi_7.0.3-0.80.21422485 | Security | Critical |
ESXi Install/Upgrade Component | esx-update_7.0.3-0.80.21422485 | Security | Critical |
ESXi Tools Component | VMware-VM-Tools_12.1.5.20735119-21422485 | Security | Critical |
IMPORTANT:
- Starting with vSphere 7.0, VMware uses components for packaging VIBs along with bulletins. The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.
- When patching ESXi hosts by using VMware Update Manager from a version prior to ESXi 7.0 Update 2, it is strongly recommended to use the rollup bulletin in the patch baseline. If you cannot use the rollup bulletin, be sure to include all of the following packages in the patching baseline; a quick way to check the versions installed on a host is sketched after this list. If the following packages are not included in the baseline, the update operation fails:
- VMware-vmkusb_0.1-1vmw.701.0.0.16850804 or higher
- VMware-vmkata_0.1-1vmw.701.0.0.16850804 or higher
- VMware-vmkfcoe_1.0.0.2-1vmw.701.0.0.16850804 or higher
- VMware-NVMeoF-RDMA_1.0.1.2-1vmw.701.0.0.16850804 or higher
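To check whether a host already meets these minimum versions before patching, you can list the installed VIBs from the ESXi Shell or an SSH session. A minimal sketch; the grep pattern is only an approximation of the VIB names and might need adjusting for your build:
# List installed driver VIBs relevant to the baseline prerequisites (pattern is approximate)
esxcli software vib list | grep -iE 'vmkusb|vmkata|vmkfcoe|nvme'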
Rollup Bulletin
This rollup bulletin contains the latest VIBs with all the fixes after the initial release of ESXi 7.0.
Bulletin ID | Category | Severity | Details |
ESXi70U3l-21424296 | Bugfix | Critical | Security and Bugfix |
ESXi70U3sl-21422485 | Security | Critical | Security only |
Image Profiles
VMware patch and update releases contain general and critical image profiles. The general release image profile applies to new bug fixes.
Image Profile Name |
ESXi-7.0U3l-21424296-standard |
ESXi-7.0U3l-21424296-no-tools |
ESXi-7.0U3sl-21422485-standard |
ESXi-7.0U3sl-21422485-no-tools |
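If you work with the offline depot directly, you can confirm which of these image profiles it contains before patching. A minimal sketch from the ESXi Shell; the datastore path to the depot ZIP is an assumption:
# List the image profiles available in the downloaded offline depot (path assumed)
esxcli software sources profile list -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U3l-21424296-depot.zip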
ESXi Image
Name and Version | Release Date | Category | Detail |
---|---|---|---|
ESXi7.0U3l - 21424296 | MAR 30 2023 | Bugfix | Security and Bugfix image |
ESXi7.0U3sl - 21422485 | MAR 30 2023 | Security | Security only image |
For information about the individual components and bulletins, see the Product Patches page and the Resolved Issues section.
Patch Download and Installation
In vSphere 7.x, the Update Manager plug-in, used for administering vSphere Update Manager, is replaced with the Lifecycle Manager plug-in. Administrative operations for vSphere Update Manager are still available under the Lifecycle Manager plug-in, along with new capabilities for vSphere Lifecycle Manager.
The typical way to apply patches to ESXi 7.x hosts is by using the vSphere Lifecycle Manager. For details, see About vSphere Lifecycle Manager and vSphere Lifecycle Manager Baselines and Images.
You can also update ESXi hosts without using the Lifecycle Manager plug-in, and use an image profile instead. To do this, you must manually download the patch offline bundle ZIP file from VMware Customer Connect. From the Select a Product drop-down menu, select ESXi (Embedded and Installable) and from the Select a Version drop-down menu, select 7.0. For more information, see the Upgrading Hosts by Using ESXCLI Commands and the VMware ESXi Upgrade guide.
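For the ESXCLI path described above, the following is a minimal sketch of applying the standard image profile from the downloaded offline bundle; the datastore path is an assumption, and the host must be placed in maintenance mode first because this release requires a reboot:
# Enter maintenance mode, apply the image profile from the depot, and reboot (path assumed)
esxcli system maintenanceMode set --enable true
esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U3l-21424296-depot.zip -p ESXi-7.0U3l-21424296-standard
reboot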
Product Support Notices
-
Restoring Memory Snapshots of a VM with an independent disk for virtual machine backups is not supported: You can take a memory snapshot of a virtual machine with an independent disk only to analyze the guest operating system behavior of that virtual machine. You cannot use such snapshots for virtual machine backups, because restoring this type of snapshot is not supported. You can convert a memory snapshot to a powered-off snapshot, which can be successfully restored.
Resolved Issues
The resolved issues are grouped as follows.
- ESXi_7.0.3-0.85.21424296
- esx-update_7.0.3-0.85.21424296
- Broadcom-ntg3_4.1.9.0-4vmw.703.0.85.21424296
- VMware-vmkusb_0.1-8vmw.703.0.85.21424296
- VMware-NVMe-PCIe_1.2.3.16-2vmw.703.0.85.21424296
- ESXi_7.0.3-0.80.21422485
- esx-update_7.0.3-0.80.21422485
- VMware-VM-Tools_12.1.5.20735119-21422485
- ESXi-7.0U3l-21424296-standard
- ESXi-7.0U3l-21424296-no-tools
- ESXi-7.0U3sl-21422485-standard
- ESXi-7.0U3sl-21422485-no-tools
- ESXi70U3l-21424296
- ESXi70U3sl-21422485
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs Included |
|
PRs Fixed | 3057026, 3062185, 3040049, 3061156, 3023389, 3062861, 3089449, 3081870, 3082900, 3033157, 3090683, 3075786, 3074392, 3081275, 3080022, 3083314, 3084812, 3074360, 3092270, 3076977, 3051685, 3082477, 3082282, 3082427, 3077163, 3087219, 3074187, 3082431, 3050562, 3072430, 3077072, 3077060, 3046875, 3082991, 3083473, 3051059, 3091256, 3074912, 3058993, 3063987, 3072500, 3060661, 3076188, 3049652, 2902475, 3087946, 3116848, 3010502, 3118090, 3078875, 3069298, 3074121 |
CVE numbers | CVE-2023-1017, CVE-2023-1018 |
The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.
Updates the esx-dvfilter-generic-fastpath, vsan, bmcal, crx, esx-ui, esxio-combiner, vsanhealth, native-misc-drivers, gc, cpu-microcode, vdfs, esx-xserver, esx-base, and trx VIBs to resolve the following issues:
- PR 3057026: When you change the NUMA nodes Per Socket (NPS) setting on an AMD EPYC 9004 'Genoa' server from the default of 1, you might see an incorrect number of sockets
In the vSphere Client and the ESXi Host Client, you might see an incorrect number of sockets and, correspondingly, cores per socket when you change the NPS setting on an AMD EPYC 9004 server from the default of 1, or Auto, to another value, such as 2 or 4.
This issue is resolved in this release.
- PR 3062185: ESXi hosts might fail with a purple diagnostic screen during a vSphere Storage vMotion operation or hot-adding a device
Due to a rare race condition during a Fast Suspend Resume (FSR) operation on a VM with shared pages between VMs, ESXi hosts might fail with a purple diagnostic screen with an error such as PFrame_IsBackedByLPage in the backtrace. The issue occurs during vSphere Storage vMotion operations or hot-adding a device.
This issue is resolved in this release.
- PR 3040049: The performance of the pktcap-uw utility degrades when using the -c option for counting packets in heavy network traffic
Packet capture with the -c option of the pktcap-uw utility in heavy network traffic might be slower than packet capture without the option. You might also see a delay in capturing packets, so that the last packet counted has a timestamp a few seconds earlier than the stop time of the function. The issue results from the File_GetSize function used to check file sizes, which is slow in VMFS environments and causes low performance of pktcap-uw.
This issue is resolved in this release. The fix replaces the File_GetSize function with the lseek function to enhance performance. For a sample capture that uses the -c option, see the sketch at the end of this list of resolved issues.
- PR 3061156: The Data Center Bridging (DCB) daemon on an ESXi host might generate a log spew in the syslog
In the syslog file, you might see frequent log messages such as:
2022-03-27T00:xx:xx.xxxx dcbd[xxxx]: [info] *** Received pre-CEE DCBX Packet on port: vmnicX
2022-03-27T00:xx:xx.xxxx dcbd[xxxx]: [info]2022-03-27T00:xx:xx.xxxx dcbd[xxxx]: [info] Src MAC: xc:xc:X0:0X:xc:xx
2022-03-27T00:xx:xx.xxxx dcbd[xxxx]: [info] Dest MAC: X0:xb:xb:fx:0X:X1
2022-03-27T00:xx:xx.xxxx dcbd[xxxx]: [info] 2022-03-27T00:xx:xx.xxxx dcbd[xxxx]: [info] Port ID TLV:
2022-03-27T00:xx:xx.xxxx dcbd[xxxx]: [info] ifName:
The issue occurs because the Data Center Bridging (DCB) daemon, dcbd, on the ESXi host records unnecessary messages at /var/log/syslog.log. As a result, the log file might quickly fill up with dcbd logs.
This issue is resolved in this release.
- PR 3023389: You see up to 100% CPU usage on one of the cores of an ESXi host due to an early timed-wait system call
In some cases, a timed-wait system call in the VMkernel might return too early. As a result, some applications, such as the System Health Agent, might take up to 100% of the CPU usage on one of the ESXi host cores.
This issue is resolved in this release.
- PR 3062861: You do not see virtual machine image details in the Summary tab of the vSphere Client
Due to an issue with the IPv6 loopback interfaces, you might not see virtual machine image details in the Summary tab of the vSphere Client.
This issue is resolved in this release.
- PR 3089449: If the number of paths from an ESXi host using SATP ALUA to storage exceeds 255, you might see I/O latency in some VMs
If the number of paths from an ESXi host using NMP SATP ALUA to storage exceeds 255, ESXi ignores part of the ALUA information reported by the target and incorrectly retains some of the path states as active optimized. As a result, some I/Os use the non-optimized path to the storage device and cause latency issues.
This issue is resolved in this release. The fix makes sure that when the RTPG response length is greater than the pre-allocated buffer length, ESXi reallocates the RTPG buffer with the required size and re-issues the RTPG command.
- PR 3081870: ESXi host reboot process stops with “initializing init_VMKernelShutdownHelper: (14/39) ShutdownSCSICleanupForDevTearDown” message
In some cases, during an ESXi host shutdown, helper queues of device probe requests get deleted, but the reference to the device remains. As a result, the ESXi host reboot stops with a message such as initializing init_VMKernelShutdownHelper: (14/39) ShutdownSCSICleanupForDevTearDown in the Direct Console User Interface (DCUI).
This issue is resolved in this release. The fix adds a cancel function associated with helper requests that decrements device references when deleting helper queue requests.
- PR 3082900: ESXi hosts might fail with a purple diagnostic screen during a VM Instant Clone operation
Due to a race condition during an Instant Clone operation, VM memory pages might be corrupted and the ESXi host might fail with a purple diagnostic screen. You see errors such as VmMemMigrate_MarkPagesCow or AsyncRemapProcessVMRemapList in the backtrace.
This issue is resolved in this release.
- PR 3033157: Storage failover might take a long time because repeated scans might be required to bring all paths to a LUN online
In some environments with SATP plug-ins such as satp_alua and satp_alua_cx, bringing all paths to a LUN online might require several scans. As a result, a storage failover operation might take a long time. For example, with 4 paths to a LUN, it might take up to 20 minutes to bring all 4 paths online.
This issue is resolved in this release.
- PR 3090683: Virtual machine vSphere tags might be lost after vSphere vMotion operations across vCenter Server instances
After vSphere vMotion operations across vCenter Server instances for VMs that have vSphere tags, it is possible that the tags do not exist in the target location. The issue only occurs in migration operations between vCenter Server instances that do not share storage.
This issue is resolved in this release.
- PR 3075786: The multicast filter mode of an NSX Virtual Switch might intermittently revert from legacy to snooping
When you set the multicast filter mode of an NSX Virtual Switch for NVDS, the mode might intermittently revert from legacy to snooping.
This issue is resolved in this release. If you already face the issue, clear the stale property on the NVDS by completing the following steps:
- Run the net-dvs command on the ESXi host and check if com.vmware.etherswitch.multicastFilter is set on the NVDS. If com.vmware.etherswitch.multicastFilter is not set, no additional steps are required.
- If the property com.vmware.etherswitch.multicastFilter is set, use the following command to clear it:
net-dvs -u com.vmware.etherswitch.multicastFilter -p globalPropList <NVDS name>
- PR 3074392: IOFilterVP service stops when you run a Nessus scan on an ESXi host
When you run a Nessus scan, which internally runs a SYN scan, to detect active ports on an ESXi host, the iofiltervpd service might shut down. The service cannot be restarted within the max retry attempts due to the SYN scan.
This issue is resolved in this release. The fix makes sure the IOFilterVP service handles the possible connection failure due to SYN scans.
- PR 3081275: You see no change in the ESXi config after you run a restore config procedure and the ESXi host reboots
If a VIB containing a new kernel module is installed before you run any of the restore config procedures described in VMware knowledge base article 2042141, you see no change in the config after the ESXi host reboots. For example, if you attempt to restore the ESXi config immediately after installing the NSX VIB, the ESXi host reboots, but no changes in the config are visible. The issue occurs due to a logical fault in the updates of the bootbank partition used for ESXi host reboots after a new kernel module VIB is installed.
This issue is resolved in this release.
- PR 3080022: Listing files on mounted NFS 4.1 Azure storage might not work as expected
Listing files of mounted NFS 4.1 Azure storage might not work as expected: although the files exist and you can open them by name, the ls command on an ESXi host by using SSH does not show any files.
This issue is resolved in this release.
- PR 3083314: Network File Copy (NFC) properties in the vpxa.cfg file might cause an ESXi host to fail with a purple diagnostic screen during upgrade
Starting with ESXi 7.0 Update 1, the vpxa.cfg file no longer exists and the vpxa configuration belongs to the ESXi Configuration Store (ConfigStore). When you upgrade an ESXi host from a version earlier than 7.0 Update 1 to a later version, the fields Vpx.Vpxa.config.nfc.loglevel and Vpx.Vpxa.config.httpNfc.accessMode in the vpxa.cfg file might cause the ESXi host to fail with a purple diagnostic screen during the conversion from vpxa.cfg format to ConfigStore format. The issue occurs due to discrepancies in the NFC log level between vpxa.cfg and ConfigStore, and case-sensitive values.
This issue is resolved in this release. The fix removes the strict case-sensitive rules and maps NFC log levels to their ConfigStore equivalents. For more information, see VMware knowledge base article 90865.
- PR 3084812: Virtual machines might take longer to respond or become temporarily unresponsive when vSphere DRS triggers multiple migration operations
In rare cases, multiple concurrent open and close operations on datastores, for example when vSphere DRS triggers multiple migration tasks, might cause some VMs to take longer to complete in-flight disk I/Os. As a result, such virtual machines might take longer to respond or become unresponsive for a short time. The issue is more likely to occur on VMs that have multiple virtual disks over different datastores.
This issue is resolved in this release.
- PR 3074360: After an API call to reload a virtual machine, the VM virtual NIC might disconnect from a Logical Switch Port (LSP)
During an API call to reload a virtual machine from one ESXi host to another, the externalId in the VM port data file might be lost. As a result, after the VM reloads, the VM virtual NIC might not be able to connect to an LSP.
This issue is resolved in this release.
- PR 3092270: VMware Host Client fails to attach an existing virtual disk to a virtual machine
After upgrading to ESXi 7.0 Update 3i, you might not be able to attach an existing virtual machine disk (VMDK) to a VM by using the VMware Host Client, which is used to connect to and manage single ESXi hosts. The issue does not affect the vSphere Client.
This issue is resolved in this release.
- PR 3076977: ESXi host might fail with a purple diagnostic screen due to a rare issue with dereferencing a freed pointer
Due to a race condition between the Create and Delete resource pool workflows, a freed pointer in the VMFS resource pool cache might get dereferenced. As a result, ESXi hosts might fail with a purple diagnostic screen and an error such as:
@BlueScreen: #PF Exception 14 in world 2105491:res3HelperQu IP 0x420032a21f55 addr 0x0
This issue is resolved in this release.
- PR 3051685: The batch QueryUnresolvedVmfsVolume API might take long to list a large number of unresolved VMFS volumes
With the batch QueryUnresolvedVmfsVolume API, you can query and list unresolved VMFS volumes or LUN snapshots. You can then use other batch APIs to perform operations, such as re-signaturing specific unresolved VMFS volumes. By default, when the QueryUnresolvedVmfsVolume API is invoked on a host, the system performs an additional filesystem liveness check for all unresolved volumes that are found. The liveness check detects whether the specified LUN is mounted on other hosts, whether an active VMFS heartbeat is in progress, or if there is any filesystem activity. This operation is time consuming and requires at least 16 seconds per LUN. As a result, when your environment has a large number of snapshot LUNs, the query and listing operation might take significant time.
This issue is resolved in this release.
- PR 3082477: vSAN cluster shutdown fails with error: 'NoneType' object has no attribute '_moId'
This problem can occur when vCenter manages multiple clusters, and the vCenter VM runs on one of the clusters. If you shut down the cluster where the vCenter VM resides, the following error can occur during the shutdown operation: 'NoneType' object has no attribute '_moId'. The vCenter VM is not powered off.
This issue is resolved in this release.
- PR 3082282: You do not see the Restart Cluster option in the vSphere Client after shutting down multiple vSAN clusters
This problem can occur when vCenter manages multiple clusters and runs several shut down and restart cluster operations in parallel. For example, if you shut down 3 clusters and restart the vSAN health service or the vCenter VM, and you restart one of the 3 clusters before the others, the Restart Cluster option might be available for the first vSAN cluster, but not for the other clusters.
This issue is resolved in this release.
- PR 3082427: You see an unexpected time drift alert on vSAN clusters managed by the same vCenter Server
Unexpected time drift can occur intermittently on a vSAN cluster when a single vCenter Server manages several vSAN clusters, due to internal delays in VPXA requests. In the vSphere Client, in the vSAN Health tab, you see the status of Time is synchronized across hosts and VC changing from green to yellow and back to green.
This issue is resolved in this release.
- PR 3077163: When the ToolsRamdisk advanced option is active on an ESXi host, after a VM Tools upgrade, new versions are not available for VMs
When you enable the ToolsRamdisk advanced option, which makes sure the /vmtools partition is always on a RAM disk rather than on a USB or SD card, installing or updating the tools-light VIB does not update the content of the RAM disk automatically. Instead, you must reboot the host so that the RAM disk updates and the new VMware Tools version is available for the VMs.
This issue is resolved in this release. The fix makes sure that when ToolsRamdisk is enabled, VMware Tools updates automatically, without the need for manual steps or a restart.
- PR 3087219: ESXi hosts might fail with a purple diagnostic screen after a migration operation
A rare race condition caused by memory reclamation for some performance optimization processes in vSphere vMotion might cause ESXi hosts to fail with a purple diagnostic screen.
This issue is resolved in this release.
- PR 3074187: ESXi hosts randomly fail with a purple diagnostic screen and an error such as Exception 14 in world #######
If your environment has both small and large file block clusters mapped to one affinity rule, when you add a cluster, the Affinity Manager might allocate more blocks for files than the actual volume of the datastore. As a result, ESXi hosts randomly fail with a purple diagnostic screen with the error Exception 14 in world #######.
This issue is resolved in this release.
- PR 3082431: vSAN health history has no data available for the selected time period
This issue occurs when you view health details of any health check in the history view. In the vSphere Client, in the Skyline Health tab, you see a message such as No data available for the selected time period. The issue occurs when a query to the historical health records fails due to timestamps with an incorrect time zone.
This issue is resolved in this release.
- PR 3050562: You might see incorrect facility and severity values in ESXi syslog log messages
The logger command of some logger clients might not pass the right facility code to the ESXi syslog daemon and cause incorrect facility or severity values in ESXi syslog log messages.
This issue is resolved in this release.
- PR 3072430: A rare race condition in the Logical Volume Manager (LVM) schedule queue might cause an ESXi host to fail with a purple diagnostic screen
In rare cases, in a heavy workload, two LVM tasks that perform updating and querying on the device schedule in an ESXi host might run in parallel and cause a synchronization issue. As a result, the ESXi host might fail with a purple diagnostic screen and a message similar to [email protected]#nover+0x1018 stack: 0xXXXXX and Addr3ResolvePhysVector@esx#nover+0x68 stack: 0xXXXX in the backtrace.
This issue is resolved in this release.
- PR 3077072: When an ESXi host boots, some dynamically discovered iSCSI target portals and iSCSI LUNs might get deleted
In some cases, while an ESXi host boots, some dynamically discovered iSCSI targets might get deleted. As a result, after the boot completes, you might no longer see some iSCSI LUNs. The issue is more likely to occur in environments with Challenge-Handshake Authentication Protocol (CHAP) enabled for iSCSI.
This issue is resolved in this release.
- PR 3077060: The VMware Host Client or vSphere Client intermittently stop displaying software iSCSI adapters
In busy environments with many rescans by using a software iSCSI adapter, the VMware Host Client might intermittently stop displaying the adapter. The issue occurs only with target arrays that return target portals with both IPv4 and IPv6 addresses during discovery by using the SendTargets method.
This issue is resolved in this release.
- PR 3046875: Disabling Storage I/O Control on datastores results in high read I/O on all ESXi hosts
A rare issue with calculating the Max Queue Depth per datastore device might cause queue depth mismatch warnings from the underlying SCSI layer. As a result, when you disable Storage I/O Control on datastores, you might see high read I/O on all ESXi hosts.
This issue is resolved in this release.
- PR 3082991: After an upgrade to ESXi 7.0 Update 3 and later, you might see multiple -1.log files filling up the root directory
Due to missing configuration for stats-log in the file /etc/vmware/vvold/config.xml, you might see multiple -1.log files filling up the /ramdisk partition after an upgrade to ESXi 7.0 Update 3 and later.
This issue is resolved in this release.
- PR 3083473: Username or password in proxy cannot include an escape character
If a username or password in a network proxy contains any escape character, the environment fails to send network requests through the proxy.
This issue is resolved in this release. The fix makes sure that a username or password containing such characters works as expected.
- PR 3051059: The command esxcli system ntp test fails to resolve the IP of an NTP server configured with additional parameters along with the host name
If you configure an NTP server on an ESXi host with additional parameters along with the hostname, such as min_poll or max_poll, the command esxcli system ntp test might fail to resolve the IP address of that server. This issue occurs because in such configurations, the NTP name resolution applies to the entire server config instead of just the host name.
This issue is resolved in this release.
- PR 3091256: After upgrade to ESXi 7.0 Update 2 and later, VMs might not be able to communicate properly with some USB devices attached to the VMs
After upgrading to ESXi 7.0 Update 2 and later, a virtual machine might not discover some USB devices attached to it, such as an Eaton USB UPS, and fail to communicate with the device properly.
This issue is resolved in this release.
- PR 3074912: If you change the datastore of a cloned virtual machine before the initial power on, guest OS customizations might be lost
If you clone a VM, including the guest customization specs, move the VMDK file to another datastore and make it thick provisioned, and then power on the VM, the guest OS customization might not be preserved. In the hostd logs, you see an Event Type Description such as The customization component failed to set the required parameters inside the guest operating system.
The issue occurs because while moving the VMDK file of the cloned VM to another datastore, the new pathName for the guest OS customization might temporarily be the same as the oldPathName and the package gets deleted.
This issue is resolved in this release.
- PR 3058993: Virtual machine-related tasks might fail when you use vSphere APIs for I/O Filtering (VAIO) for VMware or third-party backup applications
Improper sizing of the resource needs for the hostd service on an ESXi host might cause some virtual machine-related tasks to fail when you use vSphere APIs for I/O Filtering (VAIO) for VMware or third-party backup applications. For example, when a VAIO filter is attached to one or more virtual disks of a virtual machine, such VMs might fail to perform backups or power-on after a backup.
This issue is resolved in this release. The fix increases the number of threads to prevent I/O-related issues with the hostd resources on ESXi hosts.
- PR 3063987: First Class Disk (FCD) sync might fail on NFSv3 datastores and you see inconsistent values in Govc reports
A rare file corruption issue might occur when multiple ESXi hosts do a simultaneous file append operation on an NFSv3 datastore. As a result, if you use the govc open source command-line utility to perform administrative actions on a vCenter instance, you might see reports with inconsistent values. For example, if you run the command govc disk.ls to list all FCDs in a newly created datastore, you see a different number of disks from what you expect.
This issue is resolved in this release.
- PR 3072500: ESXi hosts intermittently fail with a purple diagnostic screen and an error VERIFY bora/vmkernel/sched/cpusched.c
In rare cases, when a VMFS volume is closed immediately after it is opened, one of the threads in the opening process might race with a thread in the closing process and take a rare error handling path that can exit without unlocking a locked spinlock. As a result, the ESXi host might fail with a purple diagnostic screen with a message in the backtrace such as:
@BlueScreen: VERIFY bora/vmkernel/sched/cpusched.c:11154.
This issue is resolved in this release.
- PR 3060661: Windows Server Failover Cluster (WSFC) applications might lose connectivity to a virtual volume-based disk after initiating a rebind operation on that disk
Due to reservation conflicts, WSFC applications might lose connectivity to a virtual volume-based disk after a rebind operation on that disk.
This issue is resolved in this release.
- PR 3076188: vSphere HA might not work as expected on a vSAN datastore after a power outage
In rare situations, the management agent that vSphere HA deploys on ESXi hosts, Fault Domain Manager (FDM), might fail to acquire a vSAN datastore in the first attempt before an exceptional condition, such as a power outage, occurs. As a result, vSphere HA failover for virtual machines on that datastore might not be successful. In the vSphere Client, you see a message such as This virtual machine failed to become vSphere HA Protected and HA may not attempt to restart it after a failure. Successive attempts of FDM to acquire the vSAN datastore might also fail.
This issue is resolved in this release. The fix makes sure that the successive attempts to acquire a vSAN datastore after an exceptional condition succeed.
- PR 3049652: The Power|Total Energy (Wh) metric from the sustainability metrics in VMware vRealize Operations might display in wrong units
In the sustainability metrics in VMware vRealize Operations, you might see the value of Power|Total Energy (Wh) per virtual machine higher than the ESXi host on which the VM is hosted. The issue occurs because after measuring the total energy consumed by the VM, the power.energy performance counter is calculated and reported in millijoules instead of joules.
This issue is resolved in this release.
- PR 2902475: Windows virtual machines of hardware version 19 that have Virtualization-Based Security (VBS) enabled might fail with a blue diagnostic screen on ESXi hosts running on AMD processors
On ESXi hosts running on AMD processors, Windows virtual machines of hardware version 19 with VBS enabled might fail with a blue diagnostic screen due to a processor mode misconfiguration error. In the vmware.log file, you see errors such as vcpu-0 - WinBSOD: Synthetic MSR[0xx0000x00] 0xxxx.
This issue is resolved in this release.
- PR 3087946: vSphere Lifecycle Manager compliance scan might show a warning for a CPU that is supported
Due to an internal error, even if a CPU model is listed as supported in the VMware Compatibility Guide, in the vSphere Client you might see a warning such as The CPU on the host is not supported by the image during a compliance scan by using a vSphere Lifecycle Manager image. The warning does not prevent you from completing an update or upgrade.
This issue is resolved in this release.
- PR 3116848: Virtual machines with Virtualization-Based Security (VBS) enabled might fail with a blue diagnostic screen after a migration
When you migrate virtual machines with VBS enabled that reside on ESXi hosts running on AMD processors, such VMs might fail with a blue diagnostic screen on the target host due to a guest interruptibility state error.
This issue is resolved in this release.
- PR 3010502: If you override a networking policy for a port group and then revert the change, the override persists after an ESXi host reboot
When you override a networking policy such as Teaming, Security, or Shaping for a port group, and then revert the change, after a reboot of the ESXi host, you might still see the override setting. For example, if you accept promiscuous mode activation on a port group level, then revert the mode back to Reject, and reboot the ESXi host, the setting is still Accept.
This issue is resolved in this release.
- PR 3118090: The autocomplete attribute in forms for web access is not disabled on password and username fields
Your remote web server might contain an HTML form field with an input of type password or username where the autocomplete attribute is not set to off. As a result, you might see a warning by security scanners such as AutoComplete Attribute Not Disabled for Password in Form Based Authentication. Setting the autocomplete attribute to on does not directly put web servers at risk, but in certain browsers, user credentials in such forms might be saved and potentially lead to a loss of confidentiality.
This issue is resolved in this release. The fix makes sure the autocomplete attribute is set to off in password and username fields to prevent browsers from caching credentials.
- PR 3078875: Cannot set IOPS limit for vSAN file service objects
vSAN Distributed File System (vDFS) objects do not support IOPS limit. You cannot set the IOPS limit for vSAN file service objects.
This issue is resolved in this release. With the fix, you can set an IOPSLimit policy with vSAN file service objects.
- PR 3069298: In the vSphere Client, you do not see the asset tag for some servers
In the vSphere Client, when you navigate to Configure > Hardware > Overview, for some servers you do not see any asset tag listed.
This issue is resolved in this release. The fix ensures that vCenter receives either the baseboard info (Type 2) asset tag or the chassis info (Type 3) asset tag for the server. You see the chassis info asset tag when the baseboard info asset tag for the server is empty.
- PR 3074121: Network configuration lost after vSAN upgrade to 7.0 or later
When you upgrade a vSAN host from a pre-6.2 release to 7.0 or later, the VMkernel port tagging for vSAN might be lost. If this occurs, vSAN networking configuration is missing after the upgrade.
This issue is resolved in this release. The fix prevents the issue from occurring on ESXi 7.0 Update 3l and later. If you already face the issue, upgrading to ESXi 7.0 Update 3l does not automatically resolve it, but you can manually fix it by enabling VMkernel traffic for vSAN on the ESXi host.
- PR 3085003: Cannot enable vSAN File Service with uploaded OVF files
You cannot enable vSAN File Service if the vCenter does not have Internet access.
This issue is resolved in this release.
- PR 3088608: Virtual machines might fail with a core dump while resuming from a checkpoint or after migration
In rare cases, when a data breakpoint is enabled in the guest OS of a virtual machine, during a checkpoint resume or after a vSphere vMotion migration, an internal virtualization data structure might be accessed before it is allocated. As a result, the virtual machine crashes with a core dump and an error such as: vcpu-1:VMM fault 14: src=MONITOR rip=0xfffffffffc0XXe00 regs=0xfffffffffcXXXdc0 LBR stack=0xfffffffffcXXXaXX.
This issue is resolved in this release.
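As a usage illustration for the pktcap-uw fix in PR 3040049 above, the following is a minimal capture that uses the -c packet-count option; the uplink name, packet count, and output path are assumptions:
# Capture 1000 packets on an uplink and write them to a VMFS datastore (names and paths assumed)
pktcap-uw --uplink vmnic0 -c 1000 -o /vmfs/volumes/datastore1/vmnic0-sample.pcap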
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the loadesx and esx-update VIBs.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 3083426 |
CVE numbers | N/A |
Updates the ntg3 VIB to resolve the following issue:
- PR 3083426: You might see jumbo frames packet loss when NICs using the ntg3 driver are connected to certain Dell switches
When NICs using the ntg3 driver, such as Broadcom BCM5719 and BCM5720, are connected to certain models of Dell switches, including but not limited to S4128T-ON and S3048-ON, such NICs might incorrectly drop some of the received packets greater than 6700 bytes. This might cause very low network performance or loss of connectivity.
This issue is resolved in this release. The ntg3 driver version 4.1.9.0, which is released with ESXi 7.0 Update 3l, contains the fix. A quick check of the driver version in use is sketched after this list.
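To confirm which ntg3 driver version a NIC is using after the update, you can query the NIC or the installed VIB. A minimal sketch; the vmnic name is an assumption:
# Show driver name and version for a NIC that uses ntg3 (vmnic name assumed)
esxcli network nic get -n vmnic0
# Or check the installed ntg3 driver VIB directly
esxcli software vib list | grep -i ntg3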
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the vmkusb VIB.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 3086841, 3104368, 3061328 |
CVE numbers | N/A |
Updates the nvme-pcie VIB to resolve the following issues:
- PR 3086841: Third-party multipathing plug-ins report an "Invalid length RTPG Descriptor" warning while running on the SCSI to NVMe translation stack
In case of an extended header format in the NVMe to SCSI translation layer, third-party multipathing plug-ins such as PowerPath might not account for the extended header size of 4 bytes. As a result, in the vmkernel.log, you see an Invalid length RTPG Descriptor warning.
This issue is resolved in this release. The fix accounts for the 4 bytes of extended headers. To confirm the NVMe driver in use after patching, see the sketch after this list.
- PR 3104368: You cannot install ESXi or create datastores on an NVMe device
When installing ESXi or creating a VMFS datastore on an NVMe device, the native NVMe driver might need to handle a Compare and Write fused operation. According to NVMe specs, the Compare and Write commands must be inserted next to each other in the same Submission Queue, and the Submission Queue Tail doorbell pointer update must indicate both commands as part of one doorbell update. The native NVMe driver puts the two commands together in one Submission Queue, but writes the doorbell for each command separately. As a result, the device firmware might complete the fused commands with an error and fail to create a VMFS datastore. Since creating a VMFS datastore on a device is a prerequisite for successful ESXi installation, you might not be able to install ESXi on such NVMe devices.
This issue is resolved in this release.
- PR 3061328: Cannot clear vSAN health check: NVMe device can be identified
If the vSAN health check indicates that an NVMe device cannot be identified, the warning might not be cleared after you correctly select the device model.
This issue is resolved in this release.
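To confirm which driver your NVMe adapters use and the installed nvme-pcie VIB version after patching, the following is a minimal sketch from the ESXi Shell; adapter names vary by host:
# List storage adapters and their drivers, then check the installed NVMe driver VIB
esxcli storage core adapter list
esxcli software vib list | grep -i nvme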
Patch Category | Security |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | N/A |
CVE numbers | CVE-2023-1017, CVE-2023-1018 |
The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.
Updates the esx-dvfilter-generic-fastpath, vsan, bmcal, crx, esx-ui, esxio-combiner, vsanhealth, native-misc-drivers, gc, cpu-microcode, vdfs, esx-xserver, esx-base, and trx VIBs to resolve the following issues:
- ESXi 7.0 Update 3l provides the following security updates:
- The Python package is updated to versions 3.8.16/3.5.10.
- The ESXi userworld libxml2 library is updated to version 2.10.3.
- The cURL library is updated to version 7.86.0.
- The Expat XML parser is updated to version 2.5.0.
- The SQLite database is updated to version 3.40.1.
- The zlib library is updated to 1.2.13.
Patch Category | Security |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the loadesx and esx-update VIBs.
Patch Category | Security |
Patch Severity | Critical |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 3091480 |
CVE numbers | N/A |
Updates the tools-light VIB.
- The following VMware Tools ISO images are bundled with ESXi 7.0 Update 3l:
  - windows.iso: VMware Tools 12.1.5 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.
  - linux.iso: VMware Tools 10.3.25 ISO image for Linux OS with glibc 2.11 or later.
- The following VMware Tools ISO images are available for download:
  - VMware Tools 11.0.6:
    - windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).
  - VMware Tools 10.0.12:
    - winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003.
    - linuxPreGLibc25.iso: supports Linux guest operating systems earlier than Red Hat Enterprise Linux (RHEL) 5, SUSE Linux Enterprise Server (SLES) 11, Ubuntu 7.04, and other distributions with glibc version earlier than 2.5.
  - solaris.iso: VMware Tools image 10.3.10 for Solaris.
  - darwin.iso: Supports Mac OS X versions 10.11 and later. VMware Tools 12.1.0 was the last regular release for macOS. Refer to VMware knowledge base article 88698 for details.
- Follow the procedures listed in the VMware Tools documentation to download VMware Tools for platforms not bundled with ESXi.
Profile Name | ESXi-7.0U3l-21424296-standard |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | March 30, 2023 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 3057026, 3062185, 3040049, 3061156, 3023389, 3062861, 3089449, 3081870, 3082900, 3033157, 3090683, 3075786, 3074392, 3081275, 3080022, 3083314, 3084812, 3074360, 3092270, 3076977, 3051685, 3082477, 3082282, 3082427, 3077163, 3087219, 3074187, 3082431, 3050562, 3072430, 3077072, 3077060, 3046875, 3082991, 3083473, 3051059, 3091256, 3074912, 3058993, 3063987, 3072500, 3060661, 3076188, 3049652, 2902475, 3087946, 3116848, 3010502, 3118090, 3078875, 3069298, 3074121, 3083426, 3086841, 3104368, 3061328 |
Related CVE numbers | CVE-2023-1017, CVE-2023-1018 |
- This patch updates the following issues:
-
In the vSphere Client and the ESXi Host Client, you might see incorrect number of sockets and respectively cores per socket when you change the NPS setting on an AMD EPYC 9004 server from the default of 1, or Auto, to another value, such as 2 or 4.
-
Due to a rare race condition during a Fast Suspend Resume (FSR) operation on a VM with shared pages between VMs, ESXi hosts might fail with a purple diagnostic screen with an error such as PFrame_IsBackedByLPage in the backtrace. The issue occurs during vSphere Storage vMotion operations or hot-adding a device.
-
Packet capture with the -c option of the pktcap-uw utility in heavy network traffic might be slower than packet capture without the option. You might also see a delay in capturing packets, so that the last packet counted has a timestamp a few seconds earlier than the stop time of the function. The issue results from the File_GetSize function used to check file sizes, which is slow in VMFS environments and causes low performance of pktcap-uw.
-
In the syslog file, you might see frequent log messages such as:
2022-03-27T00:xx:xx.xxxx dcbd[xxxx]: [info] *** Received pre-CEE DCBX Packet on port: vmnicX
2022-03-27T00:xx:xx.xxxx dcbd[xxxx]: [info]2022-03-27T00:xx:xx.xxxx dcbd[xxxx]: [info] Src MAC: xc:xc:X0:0X:xc:xx
2022-03-27T00:xx:xx.xxxx dcbd[xxxx]: [info] Dest MAC: X0:xb:xb:fx:0X:X1
2022-03-27T00:xx:xx.xxxx dcbd[xxxx]: [info] 2022-03-27T00:xx:xx.xxxx dcbd[xxxx]: [info] Port ID TLV:
2022-03-27T00:xx:xx.xxxx dcbd[xxxx]: [info] ifName:
The issue occurs because the Data Center Bridging (DCB) daemon, dcbd, on the ESXi host records unnecessary messages at /var/log/syslog.log. As a result, the log file might quickly fill up with dcbd logs.
-
In some cases, a timed-wait system call in the VMkernel might return too early. As a result, some applications, such as the System Health Agent, might take up to 100% of the CPU usage on one of the ESXi host cores.
-
Due to an issue with the IPv6 loopback interfaces, you might not see virtual machine image details in the Summary tab of the vSphere Client.
-
If the number of paths from an ESXi host using NMP SATP ALUA to storage exceeds 255, ESXi ignores part of the ALUA information reported by the target and incorrectly retains some of the path states as active optimized. As a result, some I/Os use the non-optimized path to the storage device and cause latency issues.
-
In some cases, during an ESXi host shutdown, helper queues of device probe requests get deleted, but the reference to the device remains. As a result, the ESXi host reboot stops with a message such as initializing init_VMKernelShutdownHelper: (14/39) ShutdownSCSICleanupForDevTearDown in the Direct Console User Interface (DCUI).
-
Due to a race condition during an Instant Clone operation, VM memory pages might be corrupted and the ESXi host might fail with a purple diagnostic screen. You see errors such as VmMemMigrate_MarkPagesCow or AsyncRemapProcessVMRemapList in the backtrace.
-
In some environments with SATP plug-ins such as satp_alua and satp_alua_cx, bringing all paths to a LUN online might require several scans. As a result, a storage failover operation might take a long time. For example, with 4 paths to a LUN, it might take up to 20 minutes to bring all 4 paths online.
-
After vSphere vMotion operations across vCenter Server instances for VMs that have vSphere tags, it is possible that the tags do not exist in the target location. The issue only occurs in migration operations between vCenter Server instances that do not share storage.
-
When you set the multicast filter mode of an NSX Virtual Switch for NVDS, the mode might intermittently revert from legacy to snooping.
-
When you run a Nessus scan, which internally runs a SYN scan, to detect active ports on an ESXi host, the iofiltervpd service might shut down. The service cannot be restarted within the max retry attempts due to the SYN scan.
-
If a VIB containing a new kernel module is installed before you run any of the restore config procedures described in VMware knowledge base article 2042141, you see no change in the config after the ESXi host reboots. For example, if you attempt to restore the ESXi config immediately after installing the NSX VIB, the ESXi host reboots, but no changes in the config are visible. The issue occurs due to a logical fault in the updates of the bootbank partition used for ESXi host reboots after a new kernel module VIB is installed.
-
Listing files of mounted NFS 4.1 Azure storage might not work as expected: although the files exist and you can open them by name, the ls command on an ESXi host by using SSH does not show any files.
-
Starting with ESXi 7.0 Update 1, the vpxa.cfg file no longer exists and the vpxa configuration belongs to the ESXi Configuration Store (ConfigStore). When you upgrade an ESXi host from a version earlier than 7.0 Update 1 to a later version, the fields Vpx.Vpxa.config.nfc.loglevel and Vpx.Vpxa.config.httpNfc.accessMode in the vpxa.cfg file might cause the ESXi host to fail with a purple diagnostic screen during the conversion from vpxa.cfg format to ConfigStore format. The issue occurs due to discrepancies in the NFC log level between vpxa.cfg and ConfigStore, and case-sensitive values.
-
In rare cases, multiple concurrent open and close operations on datastores, for example when vSphere DRS triggers multiple migration tasks, might cause some VMs to take longer to complete in-flight disk I/Os. As a result, such virtual machines might take longer to respond or become unresponsive for a short time. The issue is more likely to occur on VMs that have multiple virtual disks over different datastores.
-
During an API call to reload a virtual machine from one ESXi host to another, the externalId in the VM port data file might be lost. As a result, after the VM reloads, the VM virtual NIC might not be able to connect to an LSP.
-
After upgrading to ESXi 7.0 Update 3i, you might not be able to attach an existing virtual machine disk (VMDK) to a VM by using the VMware Host Client, which is used to connect to and manage single ESXi hosts. The issue does not affect the vSphere Client.
-
Due to a race condition between the Create and Delete resource pool workflows, a freed pointer in the VMFS resource pool cache might get dereferenced. As a result, ESXi hosts might fail with a purple diagnostic screen and an error such as:
@BlueScreen: #PF Exception 14 in world 2105491:res3HelperQu IP 0x420032a21f55 addr 0x0
-
With the batch QueryUnresolvedVmfsVolume API, you can query and list unresolved VMFS volumes or LUN snapshots. You can then use other batch APIs to perform operations, such as re-signaturing specific unresolved VMFS volumes. By default, when the QueryUnresolvedVmfsVolume API is invoked on a host, the system performs an additional filesystem liveness check for all unresolved volumes that are found. The liveness check detects whether the specified LUN is mounted on other hosts, whether an active VMFS heartbeat is in progress, or if there is any filesystem activity. This operation is time consuming and requires at least 16 seconds per LUN. As a result, when your environment has a large number of snapshot LUNs, the query and listing operation might take significant time.
-
This problem can occur when vCenter manages multiple clusters, and the vCenter VM runs on one of the clusters. If you shut down the cluster where the vCenter VM resides, the following error can occur during the shutdown operation: 'NoneType' object has no attribute '_moId'. The vCenter VM is not powered off.
-
This problem can occur when vCenter manages multiple clusters and runs several shut down and restart cluster operations in parallel. For example, if you shut down 3 clusters and restart the vSAN health service or the vCenter VM, and you restart one of the 3 clusters before the others, the Restart Cluster option might be available for the first vSAN cluster, but not for the other clusters.
-
Unexpected time drift can occur intermittently on a vSAN cluster when a single vCenter Server manages several vSAN clusters, due to internal delays in VPXA requests. In the vSphere Client, in the vSAN Health tab, you see the status of Time is synchronized across hosts and VC changing from green to yellow and back to green.
-
When you enable the ToolsRamdisk advanced option, which makes sure the /vmtools partition is always on a RAM disk rather than on a USB or SD card, installing or updating the tools-light VIB does not update the content of the RAM disk automatically. Instead, you must reboot the host so that the RAM disk updates and the new VMware Tools version is available for the VMs.
-
A rare race condition caused by memory reclamation for some performance optimization processes in vSphere vMotion might cause ESXi hosts to fail with a purple diagnostic screen.
-
If your environment has both small and large file block clusters mapped to one affinity rule, when you add a cluster, the Affinity Manager might allocate more blocks for files than the actual volume of the datastore. As a result, ESXi hosts randomly fail with a purple diagnostic screen with the error Exception 14 in world #######.
-
This issue occurs when you view health details of any health check in the history view. In the vSphere Client, in the Skyline Health tab, you see a message such as No data available for the selected time period. The issue occurs when a query to the historical health records fails due to timestamps with an incorrect time zone.
-
The logger command of some logger clients might not pass the right facility code to the ESXi syslog daemon and cause incorrect facility or severity values in ESXi syslog log messages.
-
In rare cases, in a heavy workload, two LVM tasks that perform updating and querying on the device schedule in an ESXi host might run in parallel and cause a synchronization issue. As a result, the ESXi host might fail with a purple diagnostic screen and a message similar to [email protected]#nover+0x1018 stack: 0xXXXXX and Addr3ResolvePhysVector@esx#nover+0x68 stack: 0xXXXX in the backtrace.
-
In some cases, while an ESXi host boots, some dynamically discovered iSCSI targets might get deleted. As a result, after the boot completes, you might no longer see some iSCSI LUNs. The issue is more likely to occur in environments with Challenge-Handshake Authentication Protocol (CHAP) enabled for iSCSI.
-
In busy environments with many rescans by using a software iSCSI adapter, the VMware Host Client might intermittently stop displaying the adapter. The issue occurs only with target arrays that return target portals with both IPv4 and IPv6 addresses during discovery by using the SendTargets method.
-
A rare issue with calculating the Max Queue Depth per datastore device might cause queue depth mismatch warnings from the underlying SCSI layer. As a result, when you disable Storage I/O Control on datastores, you might see high read I/O on all ESXi hosts.
-
Due to missing configuration for stats-log in the file /etc/vmware/vvold/config.xml, you might see multiple -1.log files filling up the /ramdisk partition after an upgrade to ESXi 7.0 Update 3 and later.
-
If a username or password in a network proxy contains any escape character, the environment fails to send network requests through the proxy.
-
If you configure an NTP server on an ESXi host with additional parameters along with the hostname, such as min_poll or max_poll, the command esxcli system ntp test might fail to resolve the IP address of that server. This issue occurs because in such configurations, the NTP name resolution applies to the entire server config instead of just the host name.
-
After upgrading to ESXi 7.0 Update 2 and later, a virtual machine might not discover some USB devices attached to it, such as an Eaton USB UPS, and fail to communicate with the device properly.
-
If you clone a VM, including the guest customization specs, move the VMDK file to another datastore and make it thick provisioned, and then power on the VM, the guest OS customization might not be preserved. In the hostd logs, you see an Event Type Description such as The customization component failed to set the required parameters inside the guest operating system.
The issue occurs because while moving the VMDK file of the cloned VM to another datastore, the new pathName for the guest OS customization might temporarily be the same as the oldPathName and the package gets deleted.
-
Improper sizing of the resource needs for the hostd service on an ESXi host might cause some virtual machine-related tasks to fail when you use vSphere APIs for I/O Filtering (VAIO) for VMware or third-party backup applications. For example, when a VAIO filter is attached to one or more virtual disks of a virtual machine, such VMs might fail to perform backups or power-on after a backup.
-
A rare file corruption issue might occur when multiple ESXi hosts do a simultaneous file append operation on an NFSv3 datastore. As a result, if you use the Govc open source command-line utility to perform administrative actions on a vCenter instance, you might see reports with inconsistent values. For example, if you run the command
govc disk.ls
to list all FCDs in a newly created datastore, you see a different number of disks from what you expect. -
In rare cases, when a VMFS volume is closed immediately after it is opened, one of the threads in the opening process might race with a thread in the closing process and take a rare error handling path that can exit without unlocking a locked spinlock. As a result, the ESXi host might fail with a purple diagnostic screen with a message in the backtrace such as:
@BlueScreen: VERIFY bora/vmkernel/sched/cpusched.c:11154.
-
Due to reservation conflicts, WSFC applications might lose connectivity to a virtual volume-based disk after a rebind operation on that disk.
-
In rare situations, the management agent that vSphere HA deploys on ESXi hosts, Fault Domain Manager (FDM), might fail to acquire a vSAN datastore in the first attempt before an exceptional condition, such as power outage, occurs. As a result, vSphere HA failover for virtual machines on that datastore might not be successful. In the vSphere Client, you see a message such as
This virtual machine failed to become vSphere HA Protected and HA may not attempt to restart it after a failure
. Successive attempts of FDM to acquire the vSAN datastore might also fail. -
In the sustainability metrics in VMware vRealize Operations, you might see the value of Power|Total Energy (Wh) per virtual machine higher than that of the ESXi host on which the VM is hosted. The issue occurs because after measuring the total energy consumed by the VM, the power.energy performance counter is calculated and reported in millijoules instead of joules.
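The size of the discrepancy follows directly from the units: a millijoule value is 1,000 times larger than the same energy expressed in joules, so a consumer that expects joules overstates the derived watt-hours by the same factor. The sketch below, with a hypothetical counter sample, only illustrates the conversion.
```python
raw_sample = 7_200_000                   # hypothetical counter sample, in millijoules

energy_joules = raw_sample / 1_000       # correct interpretation: 7,200 J
energy_wh = energy_joules / 3_600        # 1 Wh = 3,600 J  -> 2.0 Wh

misread_wh = raw_sample / 3_600          # treating millijoules as joules -> 2,000 Wh
print(energy_wh, misread_wh)
```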
-
In ESXi hosts running on AMD processors, Windows virtual machines of hardware version 19 with VBS enabled might fail with a blue diagnostic screen due to a processor mode misconfiguration error. In the
vmware.log
file, you see errors such asvcpu-0 - WinBSOD: Synthetic MSR[0xx0000x00] 0xxxx
. -
Due to an internal error, even if a CPU model is listed as supported in the VMware Compatibility Guide, in the vSphere Client you might see a warning such as
The CPU on the host is not supported by the image
during a compliance scan by using a vSphere Lifecycle Manager image. The warning does not prevent you from completing an update or upgrade. -
When you migrate virtual machines with VBS enabled that reside on ESXi hosts running on AMD processors, such VMs might fail with a blue diagnostic screen on the target host due to a guest interruptibility state error.
-
When you override a networking policy such as Teaming, Security, or Shaping for a port group, and then revert the change, after a reboot of the ESXi host, you might still see the override setting. For example, if you accept promiscuous mode activation on a port group level, then revert the mode back to Reject, and reboot the ESXi host, the setting is still Accept.
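To verify the effective setting after a reboot, you can read the security policy of each standard port group, for example with the pyVmomi sketch below. The sketch assumes pyVmomi is installed, uses placeholder host credentials, and is a read-only check rather than part of the fix.
```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab use only; validate certificates in production
si = SmartConnect(host="esxi01.example.com", user="root", pwd="CHANGE_ME", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for pg in host.config.network.portgroup:
            sec = pg.spec.policy.security
            allow = sec.allowPromiscuous if sec else None   # None means inherited from the vSwitch
            print(host.name, pg.spec.name, "allowPromiscuous =", allow)
    view.Destroy()
finally:
    Disconnect(si)
```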
-
Your remote web server might contain an HTML form with input fields of type
password
andusername
where theautocomplete
attribute is not set tooff
. As a result, you might see a warning by security scanners such asAutoComplete Attribute Not Disabled for Password in Form Based Authentication
. The setting of theautocomplete
attribute toon
does not directly pose a risk to web servers, but in certain browsers, user credentials in such forms might be saved, potentially leading to a loss of confidentiality.
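The condition such scanners flag is simple to reproduce: a username or password input whose autocomplete attribute is not explicitly set to off. The pure-Python sketch below runs that check against a hypothetical HTML fragment and is only an illustration of the scanner behavior.
```python
from html.parser import HTMLParser

SAMPLE_FORM = """
<form action="/login" method="post">
  <input type="text" name="username">
  <input type="password" name="password" autocomplete="off">
</form>
"""

class AutocompleteCheck(HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag != "input":
            return
        attr = dict(attrs)
        is_credential = attr.get("type") == "password" or attr.get("name") == "username"
        if is_credential and (attr.get("autocomplete") or "").lower() != "off":
            print("flagged:", attr.get("name"))

AutocompleteCheck().feed(SAMPLE_FORM)    # flags "username"; "password" already has autocomplete="off"
```
-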
vSAN Distributed File System (vDFS) objects do not support IOPS limit. You cannot set the IOPS limit for vSAN file service objects.
-
In vSphere Client, when you navigate to Configure > Hardware > Overview, for some servers you do not see any asset tag listed.
-
When you upgrade a vSAN host from a pre-6.2 release to 7.0 or later, the VMkernel port tagging for vSAN might be lost. If this occurs, vSAN networking configuration is missing after the upgrade.
-
You cannot enable vSAN File Service if the vCenter does not have Internet access.
-
In rare cases, when a data breakpoint is enabled in the guest OS of virtual machines, during checkpoint resume or after a vSphere vMotion migration, an internal virtualization data structure might be accessed before it is allocated. As a result, the virtual machines crash with core dump and an error such as:
vcpu-1:VMM fault 14: src=MONITOR rip=0xfffffffffc0XXe00 regs=0xfffffffffcXXXdc0 LBR stack=0xfffffffffcXXXaXX
. -
When NICs using the
ntg3
driver such as Broadcom BCM5719 and BCM5720 are connected to certain models of Dell switches, including but not limited to S4128T-ON and S3048-ON, such NICs might incorrectly drop some of the received packets greater than 6700 bytes. This might cause very low network performance or loss of connectivity. -
In case of an extended header format in the NVMe to SCSI translation layer, third-party multipathing plug-ins such as PowerPath might not account for the extended header size of 4 bytes. As a result, in the
vmkernel.log
, you see anInvalid length RTPG Descriptor
warning. -
When installing ESXi or creating a VMFS datastore on an NVMe device, the native NVMe driver might need to handle a Compare and Write fused operation. According to NVMe specs, the Compare and Write commands must be inserted next to each other in the same Submission Queue, and the Submission Queue Tail doorbell pointer update must indicate both commands as part of one doorbell update. The native NVMe driver puts the two commands together in one Submission Queue, but writes the doorbell for each command separately. As a result, the device firmware might complete the fused commands with an error and fail to create a VMFS datastore. Since creating a VMFS datastore on a device is a prerequisite for successful ESXi installation, you might not be able to install ESXi on such NVMe devices.
-
If the vSAN health check indicates that an NVMe device cannot be identified, the warning might not be cleared after you correctly select the device model.
-
Profile Name | ESXi-7.0U3l-21424296-no-tools |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | March 30, 2023 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 3057026, 3062185, 3040049, 3061156, 3023389, 3062861, 3089449, 3081870, 3082900, 3033157, 3090683, 3075786, 3074392, 3081275, 3080022, 3083314, 3084812, 3074360, 3092270, 3076977, 3051685, 3082477, 3082282, 3082427, 3077163, 3087219, 3074187, 3082431, 3050562, 3072430, 3077072, 3077060, 3046875, 3082991, 3083473, 3051059, 3091256, 3074912, 3058993, 3063987, 3072500, 3060661, 3076188, 3049652, 2902475, 3087946, 3116848, 3010502, 3118090, 3078875, 3069298, 3074121, 3083426, 3086841, 3104368, 3061328 |
Related CVE numbers | CVE-2023-1017, CVE-2023-1018 |
- This patch updates the following issues:
-
In the vSphere Client and the ESXi Host Client, you might see an incorrect number of sockets and, respectively, cores per socket when you change the NPS setting on an AMD EPYC 9004 server from the default of 1, or Auto, to another value, such as 2 or 4.
-
Due to a rare race condition during a Fast Suspend Resume (FSR) operation on a VM with shared pages between VMs, ESXi hosts might fail with a purple diagnostic screen with an error such as
PFrame_IsBackedByLPage
in the backtrace. The issue occurs during vSphere Storage vMotion operations or hot-adding a device. -
Packet capture with the
-c
option of thepktcap-uw
utility in heavy network traffic might be slower than packet capture without the option. You might also see a delay in capturing packets, so that the last packet counted has a timestamp a few seconds earlier than the stop time of the capture. The issue results from the File_GetSize
function to check file sizes, which is slow in VMFS environments and causes low performance ofpktcap-uw
. -
In the
syslog
file you might see frequent log messages such as:
2022-03-27T00:xx:xx.xxxx dcbd[xxxx]: [info] *** Received pre-CEE DCBX Packet on port: vmnicX
2022-03-27T00:xx:xx.xxxx dcbd[xxxx]: [info]2022-03-27T00:xx:xx.xxxx dcbd[xxxx]: [info] Src MAC: xc:xc:X0:0X:xc:xx
2022-03-27T00:xx:xx.xxxx dcbd[xxxx]: [info] Dest MAC: X0:xb:xb:fx:0X:X1
2022-03-27T00:xx:xx.xxxx dcbd[xxxx]: [info] 2022-03-27T00:xx:xx.xxxx dcbd[xxxx]: [info] Port ID TLV:
2022-03-27T00:xx:xx.xxxx dcbd[xxxx]: [info] ifName:
The issue occurs because the Data Center Bridging (DCB) daemon,
dcbd
, on the ESXi host records unnecessary messages at/var/log/syslog.log
. As a result, the log file might quickly fill up withdcbd
logs. -
In some cases, a timed-wait system call in the VMkernel might return too early. As a result, some applications, such as the System Health Agent, might take up to 100% of the CPU usage on one of the ESXi host cores.
-
Due to an issue with the IPv6 loopback interfaces, you might not see virtual machine image details in the Summary tab of the vSphere Client.
-
If the number of paths from an ESXi host using NMP SATP ALUA to storage exceeds 255, ESXi ignores part of the ALUA information reported by the target and incorrectly retains some of the paths in the active optimized state. As a result, some I/Os use the non-optimized path to the storage device and cause latency issues.
-
In some cases, during an ESXi host shutdown, helper queues of device probe requests get deleted, but the reference to the device remains. As a result, the ESXi host reboot stops with a message such as
initializing init_VMKernelShutdownHelper: (14/39) ShutdownSCSICleanupForDevTearDown
in the Direct Console User Interface (DCUI). -
Due to a race condition during an Instant Clone operation, VM memory pages might be corrupted and the ESXi host might fail with a purple diagnostic screen. You see errors such as
VmMemMigrate_MarkPagesCow
orAsyncRemapProcessVMRemapList
in the backtrace. -
In some environments with SATP plug-ins such as
satp_alua
andsatp_alua_cx
, bringing all paths to a LUN online might require several scans. As a result, a storage failover operation might take long. For example, with 4 paths to a LUN, it might take up to 20 minutes to bring the 4 paths online. -
After vSphere vMotion operations across vCenter Server instances for VMs that have vSphere tags, it is possible that the tags do not exist in the target location. The issue only occurs in migration operations between vCenter Server instances that do not share storage.
-
When you set the multicast filter mode of an NSX Virtual Switch for NVDS, the mode might intermittently revert from legacy to snooping.
-
When you run a Nessus scan, which internally runs a SYN scan, to detect active ports on an ESXi host, the iofiltervpd service might shut down. The service cannot be restarted within the max retry attempts due to the SYN scan.
-
If a VIB containing a new kernel module is installed before you run any of the restore config procedures described in VMware knowledge base article 2042141, you see no change in the config after the ESXi host reboots. For example, if you attempt to restore the ESXi config immediately after installing the NSX VIB, the ESXi host reboots, but no changes in the config are visible. The issue occurs due to a logical fault in the updates of the bootbank partition used for ESXi host reboots after a new kernel module VIB is installed.
-
Listing files of mounted NFS 4.1 Azure storage might not work as expected, because although the files exist and you can open them by name, the
ls
command on an ESXi host by using SSH does not show any files. -
Starting with ESXi 7.0 Update 1, the
vpxa.cfg
file no longer exists and the vpxa configuration belongs to the ESXi Configuration Store (ConfigStore
). When you upgrade an ESXi host from a version earlier than 7.0 Update 1 to a later version, the fieldsVpx.Vpxa.config.nfc.loglevel
andVpx.Vpxa.config.httpNfc.accessMode
in thevpxa.cfg
file might cause the ESXi host to fail with a purple diagnostic screen during the conversion fromvpxa.cfg
format toConfigStore
format. The issue occurs due to discrepancies in the NFC log level betweenvpxa.cfg
andConfigStore
, and case-sensitive values. -
In rare cases, multiple concurrent open and close operations on datastores, for example when vSphere DRS triggers multiple migration tasks, might cause some VMs to take longer to complete in-flight disk I/Os. As a result, such virtual machines might take longer to respond or become unresponsive for a short time. The issue is more likely to occur on VMs that have multiple virtual disks over different datastores.
-
During an API call to reload a virtual machine from one ESXi host to another, the
externalId
in the VM port data file might be lost. As a result, after the VM reloads, the VM virtual NIC might not be able to connect to an LSP. -
After upgrading to ESXi 7.0 Update 3i, you might not be able to attach an existing virtual machine disk (VMDK) to a VM by using the VMware Host Client, which is used to connect to and manage single ESXi hosts. The issue does not affect the vSphere Client.
-
Due to a race condition between the Create and Delete resource pool workflows, a freed pointer in the VMFS resource pool cache might get dereferenced. As a result, ESXi hosts might fail with a purple diagnostic screen and an error such as:
@BlueScreen: #PF Exception 14 in world 2105491:res3HelperQu IP 0x420032a21f55 addr 0x0
-
With the batch
QueryUnresolvedVmfsVolume
API, you can query and list unresolved VMFS volumes or LUN snapshots. You can then use other batch APIs to perform operations, such as re-signaturing specific unresolved VMFS volumes. By default, when the APIQueryUnresolvedVmfsVolume
is invoked on a host, the system performs an additional filesystem liveness check for all unresolved volumes that are found. The liveness check detects whether the specified LUN is mounted on other hosts, whether an active VMFS heartbeat is in progress, or if there is any filesystem activity. This operation is time-consuming and requires at least 16 seconds per LUN. As a result, when your environment has a large number of snapshot LUNs, the query and listing operation might take significant time.
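For reference, the per-host data that the batch API aggregates can be inspected with a small pyVmomi sketch such as the one below. It assumes pyVmomi is installed, uses placeholder vCenter credentials, and performs read-only queries; the liveness-check cost described above applies to each call.
```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="CHANGE_ME", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        # Triggers the same per-host query (and liveness check) described above.
        for vol in host.configManager.storageSystem.QueryUnresolvedVmfsVolume():
            print(host.name, vol.vmfsLabel, vol.vmfsUuid, vol.totalBlocks)
    view.Destroy()
finally:
    Disconnect(si)
```
-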
This problem can occur when vCenter manages multiple clusters, and the vCenter VM runs on one of the clusters. If you shut down the cluster where the vCenter VM resides, the following error can occur during the shutdown operation:
'NoneType' object has no attribute '_moId'
. The vCenter VM is not powered off. -
This problem can occur when vCenter manages multiple clusters and runs several shut down and restart cluster operations in parallel. For example, if you shut down 3 clusters and restart the vSAN health service or the vCenter VM, and you restart one of the 3 clusters before the others, the Restart Cluster option might be available for the first vSAN cluster, but not for the other clusters.
-
Unexpected time drift can occur intermittently on a vSAN cluster when a single vCenter Server manages several vSAN clusters, due to internal delays in VPXA requests. In the vSphere Client, in the vSAN Health tab, you see the status of Time is synchronized across hosts and VC changing from green to yellow and back to green.
-
When you enable the
ToolsRamdisk
advanced option that makes sure the/vmtools
partition is always on a RAM disk rather than on a USB or SD-card, installing or updating thetools-light
VIB does not update the content of the RAM disk automatically. Instead, you must reboot the host so that the RAM disk updates and the new VMware Tools version is available for the VMs. -
A rare race condition caused by memory reclamation for some performance optimization processes in vSphere vMotion might cause ESXi hosts to fail with a purple diagnostic screen.
-
If your environment has both small and large file block clusters mapped to one affinity rule, when you add a cluster, the Affinity Manager might allocate more blocks for files than the actual volume of the datastore. As a result, ESXi hosts randomly fail with a purple diagnostic screen with error
Exception 14 in world #######
. -
This issue occurs when you view health details of any health check in the history view. In the vSphere Client, in the Skyline Health tab, you see a message such as
No data available for the selected time period
. The issue occurs when a query to the historical health records fails due to timestamps with an incorrect time zone. -
The logger command of some logger clients might not pass the right facility code to the ESXi syslog daemon and cause incorrect facility or severity values in ESXi syslog log messages.
-
In rare cases, in a heavy workload, two LVM tasks that perform updating and querying on the device schedule in an ESXi host might run in parallel and cause a synchronization issue. As a result, the ESXi host might fail with a purple diagnostic screen and a message similar to
[email protected]#nover+0x1018 stack: 0xXXXXX and Addr3ResolvePhysVector@esx#nover+0x68 stack: 0xXXXX
in the backtrace. -
In some cases, while an ESXi host boots, some dynamically discovered iSCSI targets might get deleted. As a result, after the boot completes, you might no longer see some iSCSI LUNs. The issue is more likely to occur in environments with Challenge-Handshake Authentication Protocol
(CHAP) enabled for iSCSI. -
In busy environments with many rescans by using a software iSCSI adapter, the VMware Host Client might intermittently stop displaying the adapter. The issue occurs only with target arrays which return target portals with both IPv4 and IPv6 addresses during discovery by using the
SendTargets
method. -
A rare issue with calculating the Max Queue Depth per datastore device might cause queue depth mismatch warnings from the underlying SCSI layer. As a result, when you disable Storage I/O Control on datastores, you might see high read I/O on all ESXi hosts.
-
Due to missing configuration for
stats-log
in the file/etc/vmware/vvold/config.xml
, you might see multiple-1.log
files filling up the /ramdisk
partition after an upgrade to ESXi 7.0 Update 3 and later. -
If a username or password in a network proxy contains any escape character, the environment fails to send network requests through the proxy.
-
If you configure an NTP server on an ESXi host with additional parameters along with
hostname
, such asmin_poll
ormax_poll
, the commandesxcli system ntp test
might fail to resolve the IP address of that server. This issue occurs because in such configurations, the NTP name resolution applies to the entire server config instead of just the host name. -
After upgrading to ESXi 7.0 Update 2 and later, a virtual machine might not discover some USB devices attached to it, such as an Eaton USB UPS, and fail to communicate with the device properly.
-
If you clone a VM, including the guest customization specs, move the VMDK file to another datastore and make it thick provisioned, and then power on the VM, the guest OS customization might not be preserved. In the hostd logs, you see an Event Type Description such as
The customization component failed to set the required parameters inside the guest operating system
.
The issue occurs because while moving the VMDK file of the cloned VM to another datastore, the newpathName
for the guest OS customization might temporarily be the same as theoldPathName
and the package gets deleted. -
Improper sizing of the resource needs for the hostd service on an ESXi host might cause some virtual machine-related tasks to fail when you use vSphere APIs for I/O Filtering (VAIO) for VMware or third-party backup applications. For example, when a VAIO filter is attached to one or more virtual disks of a virtual machine, such VMs might fail to perform backups or power-on after a backup.
-
A rare file corruption issue might occur when multiple ESXi hosts do a simultaneous file append operation on an NFSv3 datastore. As a result, if you use the Govc open source command-line utility to perform administrative actions on a vCenter instance, you might see reports with inconsistent values. For example, if you run the command
govc disk.ls
to list all FCDs in a newly created datastore, you see a different number of disks from what you expect. -
In rare cases, when a VMFS volume is closed immediately after it is opened, one of the threads in the opening process might race with a thread in the closing process and take a rare error handling path that can exit without unlocking a locked spinlock. As a result, the ESXi host might fail with a purple diagnostic screen with a message in the backtrace such as:
@BlueScreen: VERIFY bora/vmkernel/sched/cpusched.c:11154.
-
Due to reservation conflicts, WSFC applications might lose connectivity to a virtual volume-based disk after a rebind operation on that disk.
-
In rare situations, the management agent that vSphere HA deploys on ESXi hosts, Fault Domain Manager (FDM), might fail to acquire a vSAN datastore in the first attempt before an exceptional condition, such as power outage, occurs. As a result, vSphere HA failover for virtual machines on that datastore might not be successful. In the vSphere Client, you see a message such as
This virtual machine failed to become vSphere HA Protected and HA may not attempt to restart it after a failure
. Successive attempts of FDM to acquire the vSAN datastore might also fail. -
In the sustainability metrics in VMware vRealize Operations, you might see the value of Power|Total Energy (Wh) per virtual machine higher than that of the ESXi host on which the VM is hosted. The issue occurs because after measuring the total energy consumed by the VM, the power.energy performance counter is calculated and reported in millijoules instead of joules.
-
In ESXi hosts running on AMD processors, Windows virtual machines of hardware version 19 with VBS enabled might fail with a blue diagnostic screen due to a processor mode misconfiguration error. In the
vmware.log
file, you see errors such asvcpu-0 - WinBSOD: Synthetic MSR[0xx0000x00] 0xxxx
. -
Due to an internal error, even if a CPU model is listed as supported in the VMware Compatibility Guide, in the vSphere Client you might see a warning such as
The CPU on the host is not supported by the image
during a compliance scan by using a vSphere Lifecycle Manager image. The warning does not prevent you from completing an update or upgrade. -
When you migrate virtual machines with VBS enabled that reside on ESXi hosts running on AMD processors, such VMs might fail with a blue diagnostic screen on the target host due to a guest interruptibility state error.
-
When you override a networking policy such as Teaming, Security, or Shaping for a port group, and then revert the change, after a reboot of the ESXi host, you might still see the override setting. For example, if you accept promiscuous mode activation on a port group level, then revert the mode back to Reject, and reboot the ESXi host, the setting is still Accept.
-
Your remote web server might contain an HTML form with input fields of type
password
andusername
where theautocomplete
attribute is not set tooff
. As a result, you might see a warning by security scanners such asAutoComplete Attribute Not Disabled for Password in Form Based Authentication
. The setting of theautocomplete
attribute toon
does not directly pose a risk to web servers, but in certain browsers, user credentials in such forms might be saved, potentially leading to a loss of confidentiality. -
vSAN Distributed File System (vDFS) objects do not support IOPS limit. You cannot set the IOPS limit for vSAN file service objects.
-
In vSphere Client, when you navigate to Configure > Hardware > Overview, for some servers you do not see any asset tag listed.
-
When you upgrade a vSAN host from a pre-6.2 release to 7.0 or later, the VMkernel port tagging for vSAN might be lost. If this occurs, vSAN networking configuration is missing after the upgrade.
-
You cannot enable vSAN File Service if the vCenter does not have Internet access.
-
In rare cases, when a data breakpoint is enabled in the guest OS of virtual machines, during checkpoint resume or after a vSphere vMotion migration, an internal virtualization data structure might be accessed before it is allocated. As a result, the virtual machines crash with core dump and an error such as:
vcpu-1:VMM fault 14: src=MONITOR rip=0xfffffffffc0XXe00 regs=0xfffffffffcXXXdc0 LBR stack=0xfffffffffcXXXaXX
. -
When NICs using the
ntg3
driver such as Broadcom BCM5719 and BCM5720 are connected to certain models of Dell switches, including but not limited to S4128T-ON and S3048-ON, such NICs might incorrectly drop some of the received packets greater than 6700 bytes. This might cause very low network performance or loss of connectivity. -
In case of an extended header format in the NVMe to SCSI translation layer, third-party multipathing plug-ins such as PowerPath might not account for the extended header size of 4 bytes. As a result, in the
vmkernel.log
, you see anInvalid length RTPG Descriptor
warning. -
When installing ESXi or creating a VMFS datastore on an NVMe device, the native NVMe driver might need to handle a Compare and Write fused operation. According to NVMe specs, the Compare and Write commands must be inserted next to each other in the same Submission Queue, and the Submission Queue Tail doorbell pointer update must indicate both commands as part of one doorbell update. The native NVMe driver puts the two commands together in one Submission Queue, but writes the doorbell for each command separately. As a result, the device firmware might complete the fused commands with an error and fail to create a VMFS datastore. Since creating a VMFS datastore on a device is a prerequisite for successful ESXi installation, you might not be able to install ESXi on such NVMe devices.
-
If the vSAN health check indicates that an NVMe device cannot be identified, the warning might not be cleared after you correctly select the device model.
-
Profile Name | ESXi-7.0U3l-21422485-standard |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | March 30, 2023 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 3091480 |
Related CVE numbers | CVE-2023-1017, CVE-2023-1018 |
- This patch updates the following issues:
- The Python package is updated to versions 3.8.16/3.5.10.
- The ESXi userworld libxml2 library is updated to version 2.10.3.
- The cURL library is updated to version 7.86.0.
- The Expat XML parser is updated to version 2.5.0.
- The SQLite database is updated to version 3.40.1.
- The zlib library is updated to 1.2.13.
-
The following VMware Tools ISO images are bundled with ESXi 7.0 Update 3l:
-
windows.iso: VMware Tools 12.1.5 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.
-
linux.iso: VMware Tools 10.3.25 ISO image for Linux OS with glibc 2.11 or later.
The following VMware Tools ISO images are available for download:
-
VMware Tools 11.0.6:
-
windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).
-
-
VMware Tools 10.0.12:
-
winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003.
-
linuxPreGLibc25.iso: supports Linux guest operating systems earlier than Red Hat Enterprise Linux (RHEL) 5, SUSE Linux Enterprise Server (SLES) 11, Ubuntu 7.04, and other distributions with glibc version earlier than 2.5.
-
-
solaris.iso: VMware Tools image 10.3.10 for Solaris.
-
darwin.iso: Supports Mac OS X versions 10.11 and later. VMware Tools 12.1.0 was the last regular release for macOS. Refer to VMware knowledge base article 88698 for details.
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
-
Profile Name | ESXi-7.0U3l-21422485-no-tools |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | March 30, 2023 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 3091480 |
Related CVE numbers | CVE-2023-1017, CVE-2023-1018 |
- This patch updates the following issues:
- The Python package is updated to versions 3.8.16/3.5.10.
- The ESXi userworld libxml2 library is updated to version 2.10.3.
- The cURL library is updated to version 7.86.0.
- The Expat XML parser is updated to version 2.5.0.
- The SQLite database is updated to version 3.40.1.
- The zlib library is updated to 1.2.13.
-
The following VMware Tools ISO images are bundled with ESXi 7.0 Update 3l:
-
windows.iso: VMware Tools 12.1.5 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.
-
linux.iso: VMware Tools 10.3.25 ISO image for Linux OS with glibc 2.11 or later.
The following VMware Tools ISO images are available for download:
-
VMware Tools 11.0.6:
-
windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).
-
-
VMware Tools 10.0.12:
-
winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003.
-
linuxPreGLibc25.iso: supports Linux guest operating systems earlier than Red Hat Enterprise Linux (RHEL) 5, SUSE Linux Enterprise Server (SLES) 11, Ubuntu 7.04, and other distributions with glibc version earlier than 2.5.
-
-
solaris.iso: VMware Tools image 10.3.10 for Solaris.
-
darwin.iso: Supports Mac OS X versions 10.11 and later. VMware Tools 12.1.0 was the last regular release for macOS. Refer to VMware knowledge base article 88698 for details.
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
-
Name | ESXi |
Version | ESXi70U3l-21424296 |
Release Date | March 30, 2023 |
Category | Bugfix |
Affected Components |
|
PRs Fixed | 3057026, 3062185, 3040049, 3061156, 3023389, 3062861, 3089449, 3081870, 3082900, 3033157, 3090683, 3075786, 3074392, 3081275, 3080022, 3083314, 3084812, 3074360, 3092270, 3076977, 3051685, 3082477, 3082282, 3082427, 3077163, 3087219, 3074187, 3082431, 3050562, 3072430, 3077072, 3077060, 3046875, 3082991, 3083473, 3051059, 3091256, 3074912, 3058993, 3063987, 3072500, 3060661, 3076188, 3049652, 2902475, 3087946, 3116848, 3010502, 3118090, 3078875, 3069298, 3074121, 3083426, 3086841, 3104368, 3061328 |
Related CVE numbers | CVE-2023-1017, CVE-2023-1018 |
Name | ESXi |
Version | ESXi70U3sl-21422485 |
Release Date | March 30, 2023 |
Category | Security |
Affected Components |
|
PRs Fixed | 3091480 |
Related CVE numbers | CVE-2023-1017, CVE-2023-1018 |
Known Issues
The known issues are grouped as follows.
Installation, Upgrade and Migration Issues
- After an upgrade to ESXi 7.0 Update 3l, some ESXi hosts and virtual machines connected to virtual switches might lose network connectivity
After an upgrade to ESXi 7.0 Update 3l, some ESXi hosts, their VMs, and other VMkernel ports, such as ports used by vSAN and vSphere Replication, which are connected to virtual switches, might lose connectivity due to an unexpected change in the NIC teaming policy. For example, the teaming policy on a portgroup might change to Route Based on Originating Virtual Port from Route Based on IP Hash. As a result, such a portgroup might lose network connectivity and some ESXi hosts and their VMs become inaccessible.
Workaround: See VMware knowledge base article 91887.
- After upgrading the ntg3 driver to version 4.1.9.0-4vmw, Broadcom NICs with fiber physical connectivity might lose network connectivity
Changes in the
ntg3
driver version4.1.9.0-4vmw
might cause link issues for the fiber physical layer, and the link on some NICs, such as Broadcom 1Gb NICs, fails to come up.
Workaround: See VMware knowledge base article 92035.
- Corrupted VFAT partitions from a vSphere 6.7 environment might cause upgrades to ESXi 7.x to fail
Due to corrupted VFAT partitions from a vSphere 6.7 environment, repartitioning the boot disks of an ESXi host might fail during an upgrade to ESXi 7.x. As a result, you might see the following errors:
When upgrading to ESXi 7.0 Update 3l, the operation fails with a purple diagnostic screen and/or an error such as:
An error occurred while backing up VFAT partition files before re-partitioning: Failed to calculate size for temporary Ramdisk: <error>.
An error occurred while backing up VFAT partition files before re-partitioning: Failed to copy files to Ramdisk: <error>.
If you use an ISO installer, you see the errors, but no purple diagnostic screen.
When upgrading to an ESXi 7.x version earlier than 7.0 Update 3l, you might see:
- Logs such as
ramdisk (root) is full
in thevmkwarning.log
file. - Unexpected rollback to ESXi 6.5 or 6.7 on reboot.
- The
/bootbank
,/altbootbank
, and ESX-OSData partitions are not present.
Workaround: You must first remediate the corrupted partitions before completing the upgrade to ESXi 7.x. For more details, see VMware knowledge base article 91136.