VMware ESXi 8.0 Update 3b | 17 SEP 2024 | ISO Build 24280767
Check for additions and updates to these release notes.
This release provides bug and security fixes. For more information, see the Resolved Issues section.
DPU/SmartNIC
VMware vSphere Distributed Services Engine support with NVIDIA Bluefield-3 DPUs: Starting with ESXi 8.0 Update 3b, vSphere Distributed Services Engine adds support for NVIDIA Bluefield-3 DPUs. NVIDIA Bluefield-3 DPUs are supported in both single-DPU and dual-DPU configurations.
ESXi 8.0 Update 3b adds support to vSphere Quick Boot for multiple servers, including:
HPE
ProLiant DL145 Gen11
ProLiant MicroServer Gen11
Cisco Systems Inc.
UCSC-C245-M8SX
Supermicro
AS-2025HS-TNR
For the full list of supported servers, see the VMware Compatibility Guide.
Design improvements for cloud-init guestInfo variables: Starting with ESXi 8.0 Update 3b, setting cloud-init guestInfo variables as a regular user in the guest results in an error. For more information, see KB 377267.
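For context, guest-side tools set these variables through VMware Tools. The sketch below shows the kind of call that is affected; the key name and value are illustrative only, and with ESXi 8.0 Update 3b a regular (non-privileged) guest user receives an error:

```
# Run inside the guest OS. Starting with ESXi 8.0 Update 3b, a regular user
# running this command gets an error instead of silently setting the
# cloud-init guestInfo variable (key name and value are illustrative).
vmtoolsd --cmd "info-set guestinfo.userdata base64-encoded-cloud-config"
```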
New features, resolved issues, and known issues of ESXi are described in the release notes for each release. Release notes for earlier releases of ESXi 8.0 are:
For internationalization, compatibility, and open source components, see the VMware vSphere 8.0 Release Notes.
For updates to the Product Support Notices section, see the ESXi 8.0 Update 3 release notes.
Build Details
VMware vSphere Hypervisor (ESXi ISO) image
Download Filename: | VMware-VMvisor-Installer-8.0U3b-24280767.x86_64.iso |
Build: | 24280767 |
Download Size: | 607 MB |
SHA256 checksum: | 718c4fa3447dd138db64398604438ee0d700e2c2fae9276201c016e386c3812b |
Host Reboot Required: | Yes |
Virtual Machine Migration or Shutdown Required: | Yes |
VMware vSphere Hypervisor (ESXi) Offline Bundle
Download Filename: | VMware-ESXi-8.0U3b-24280767-depot.zip |
Build: | 24280767 |
Download Size: | 994.7 MB |
SHA256 checksum: | 1824dc62fb36e7e107bcf5526278436986287597c0c4f450646d8d524118782f |
Host Reboot Required: | Yes |
Virtual Machine Migration or Shutdown Required: | Yes |
Components
Component | Bulletin | Category | Severity |
---|---|---|---|
ESXi Component - core ESXi VIBs | ESXi_8.0.3-0.35.24280767 | Bugfix | Critical |
ESXi Install/Upgrade Component | esx-update_8.0.3-0.35.24280767 | Bugfix | Critical |
ESXi Install/Upgrade Component | esxio-update_8.0.3-0.35.24280767 | Bugfix | Critical |
VMware BlueField RShim Driver | VMware-rshim_0.1-12vmw.803.0.35.24280767 | Bugfix | Critical |
Networking Device Driver for NVIDIA BlueField RShim devices | VMware-rshim-net_0.1.0-1vmw.803.0.35.24280767 | Bugfix | Critical |
NVIDIA BlueField boot control driver | VMware-mlnx-bfbootctl_0.1-7vmw.803.0.35.24280767 | Bugfix | Critical |
VMware NVMe over TCP Driver | VMware-NVMeoF-TCP_1.0.1.29-1vmw.803.0.35.24280767 | Bugfix | Critical |
ESXi Component - core ESXi VIBs | ESXi_8.0.3-0.30.24262298 | Security | Critical |
ESXi Install/Upgrade Component | esx-update_8.0.3-0.30.24262298 | Security | Critical |
ESXi Install/Upgrade Component | esxio-update_8.0.3-0.30.24262298 | Security | Critical |
ESXi Tools Component | VMware-VM-Tools_12.4.5.23787635-24262298 | Security | Critical |
Rollup Bulletins
These rollup bulletins contain the latest VIBs with all the fixes after the initial release of ESXi 8.0.
Bulletin ID | Category | Severity | Detail |
---|---|---|---|
ESXi80U3b-24280767 | Bugfix | Critical | Security and Bugfix image |
ESXi80U3sb-24262298 | Security | Critical | Security only image |
Image Profiles
VMware patch and update releases contain general and critical image profiles. Apply the general release image profile to receive the new bug fixes.
Image Profile Name |
ESXi-8.0U3b-24280767-standard |
ESXi-8.0U3b-24280767-no-tools |
ESXi-8.0U3sb-24262298-standard |
ESXi-8.0U3sb-24262298-no-tools |
ESXi Images
Name and Version | Release Date | Category | Detail |
---|---|---|---|
ESXi_8.0.3-0.35.24280767 | 09/17/2024 | General | Security and Bugfix image |
ESXi_8.0.3-0.30.24262298 | 09/17/2024 | Security | Security only image |
Log in to the Broadcom Support Portal to download this patch.
For download instructions, see Download Broadcom products and software.
For details on updates and upgrades by using vSphere Lifecycle Manager, see About vSphere Lifecycle Manager and vSphere Lifecycle Manager Baselines and Images. You can also update ESXi hosts without the use of vSphere Lifecycle Manager by using an image profile. To do this, you must manually download the patch offline bundle ZIP file.
For more information, see the Upgrading Hosts by Using ESXCLI Commands and the VMware ESXi Upgrade guide.
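As a rough sketch of the ESXCLI image profile update path described above, the commands look like the following; the depot path is an example, and the linked guides remain the authoritative procedure:

```
# Copy the offline bundle to a datastore reachable by the host, enter
# maintenance mode, then apply the image profile and reboot.
esxcli system maintenanceMode set --enable true
esxcli software sources profile list \
  --depot=/vmfs/volumes/datastore1/VMware-ESXi-8.0U3b-24280767-depot.zip
esxcli software profile update \
  --depot=/vmfs/volumes/datastore1/VMware-ESXi-8.0U3b-24280767-depot.zip \
  --profile=ESXi-8.0U3b-24280767-standard
reboot
```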
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs | |
PRs Fixed |
3426946, 3421084, 3421179, 3403683, 3388844, 3424051, 3422735, 3420488, 3421971, 3422005, 3417326, 3420850, 3420421, 3420586, 3410311, 3420907, 3419241, 3409528, 3404817, 3416221, 3421665, 3415365, 3415102, 3417329, 3421434, 3419074, 3417667, 3408802, 3417224, 3420702, 3385757, 3412010, 3418878, 3408477, 3406546, 3406968, 3406999, 3398549, 3402823, 3408281, 3409108, 3406627, 3407251, 3392225, 3412138, 3406037, 3406875, 3412536, 3407952, 3410114, 3386751, 3408145, 3403240, 3389766, 3283598, 3408302, 3403639, 3407532, 3394043, 3387721, 3408300, 3396479, 3401629, 3401392, 3409085, 3408640, 3405912, 3403680, 3405106, 3407204, 3407022, 3409176, 3408739, 3408740, 3390434, 3406053, 3400702, 3392173, 3403496, 3402955, 3433092 |
CVE numbers | N/A |
The ESXi
and esx-update
bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.
This patch updates the esx-base, esxio-base, bmcal, gc, esxio-dvfilter-generic-fastpath, gc-esxio, native-misc-drivers-esxio, vdfs, cpu-microcode, vsan, trx, esx-xserver, clusterstore, esxio-combiner, vcls-pod-crx, esxio-combiner-esxio, vds-vsip, vsanhealth, native-misc-drivers, crx, esxio, infravisor, drivervm-gpu-base, esx-dvfilter-generic-fastpath, pensandoatlas
, and bmcal-esxio
VIBs.
This patch resolves the following issues:
PR 3392225: You see status Unknown for virtual machines after migration to an ESXi host with active host-local swap location
On rare occasions, with a very specific sequence of conditions, when you migrate to or power on a virtual machine on an ESXi host with an active host-local swap location, the virtual machine might hang with a status Unknown.
This issue is resolved in this release.
PR 3421434: When you migrate virtual machines with snapshots from a vSAN ESA 8.x datastore, you might see errors at the destination datastore
In vSAN ESA 8.x environments, under certain conditions, migrating VMs with snapshots from a vSAN ESA datastore by using either a Storage vMotion, cross vMotion, cold relocate or clone operation might result in errors at the destination datastore, such as VMs failing to boot up. The issue is specific to vSAN ESA and is not applicable to vSAN OSA. It can affect both the VM snapshots and the running VM.
This issue is resolved in this release.
PR 3388844: You cannot activate Kernel Direct Memory Access (DMA) Protection for Windows guest OS on ESXi hosts with Intel CPU
If an input–output memory management unit (IOMMU) is active on a Windows guest OS, the Kernel DMA Protection option under System Information might be off for VMs running on ESXi hosts with Intel CPUs. As a result, you might not be able to fulfill some security requirements for your environment.
This issue is resolved in this release. The fix deactivates Kernel DMA Protection by default. ESXi 8.0 Update 3b adds the vmx parameter acpi.dmar.enableDMAProtection; to activate Kernel DMA Protection in a Windows guest OS, you must add acpi.dmar.enableDMAProtection=TRUE to the .vmx file.
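For example, the corresponding entry in the virtual machine's .vmx configuration file looks like the following; add it while the VM is powered off:

```
# .vmx entry that exposes Kernel DMA Protection to the Windows guest OS
acpi.dmar.enableDMAProtection = "TRUE"
```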
PR 3377863: You see "Hosts are remediated" message in the upgrade precheck results
When running a precheck before an ESXi update, you see a message such as Hosts are remediated
in the precheck results, which is not clear and might be misleading.
This issue is resolved in this release. The new message is Hosts are remediated sequentially
.
PR 3417329: Virtual machine tasks might intermittently fail due to a rare issue with the memory slab
Due to a rare issue with the vSAN DOM object, where a reference count on a component object might not decrement correctly, the in-memory object might never be released from the slab and can cause the component manager slab to reach its limit. As a result, you might not be able to create VMs, migrate VMs or might encounter VM power-on failures on vSAN clusters, either OSA or ESA.
This issue is resolved in this release.
PR 3406140: Extracting a vSphere Lifecycle Manager image from an existing ESXi host might fail after a kernel configuration change
Each update of the kernel configuration also triggers an update to the /bootbank/useropts.gz
file, but due to a known issue, the basemisc.tgz
might not contain the default useropts
file after such an update. As a result, when attempting to extract a vSphere Lifecycle Manager image from an existing ESXi host, the absence of the default useropts
file leads to failure to create the esx-base.vib
file and the operation also fails.
This issue is resolved in this release.
PR 3408477: Some ESXi hosts might not have a locker directory after an upgrade from ESXi 6.x to 8.x
When you upgrade an ESXi host with a boot disk of less than 10 GB and not on USB from ESXi 6.x to 8.x, the upgrade process might not create a locker directory and the /locker
symbolic link is not active.
This issue is resolved in this release. If you already face the issue, upgrading to ESXi 8.0 Update 3b creates a locker directory but does not automatically create a VMware Tools repository. As a result, clusters that host such ESXi hosts display as non-compliant. Remediate the cluster again to create a VMware Tools repository and to become compliant.
PR 3422005: ESXi hosts of version 8.0 Update 2 and later might fail to synchronize time with certain NTP servers
A change in the ntp-4.2.8p17 package in ESXi 8.0 Update 2 might cause the NTP client to reject certain server packets as poorly formatted or invalid. For example, if a server sends packets with a ppoll
value of 0
, the NTP client on ESXi does not synchronize with the server.
This issue is resolved in this release.
PR 3408145: The vmx service might fail with a core dump due to a rare issue with the vSphere Data Protection solution running out of resources
In rare cases, if a backup operation starts during high concurrent guest I/O load, for example a VM with high write I/O intensity and a high number of overwrites, the VAIO filter component of the vSphere Data Protection solution might run out of resources to handle the guest write I/Os. As a result, the vmx service might fail with a core dump and restart.
This issue is resolved in this release. Alternatively, run backups during periods of low guest I/O activity.
PR 3405912: In the vSphere Client, you do not see the correct total vSAN storage consumption for which you have a license
Due to a rare race condition in environments with many vSAN clusters managed by a single vCenter instance, in the vSphere Client under Licensing > Licenses you might see a discrepancy between the total claimed vSAN storage capacity and the reported value for the clusters.
This issue is resolved in this release.
PR 3414588: Snapshot tasks on virtual machines on NVMe/TCP datastores might take much longer than on VMs provisioned on NVMe/FC datastores
When creating or deleting a snapshot on a VM provisioned on datastores backed by NVMe/TCP namespaces, such tasks might take much longer than on VMs provisioned on datastores backed by NVMe/FC namespaces. The issue occurs because the nvmetcp driver handles some specific NVMe commands not in the way the NVMe/TCP target systems expect.
This issue is resolved in this release.
PR 3392173: A rare issue with the Virsto vSAN component might cause failure to create vSAN objects, or unmount disk groups, or reboot ESXi hosts
In very rare cases, if a Virsto component creation task fails, it might not be properly handled and cause background deletion of virtual disks to stop. As a result, deleting virtual disks in tasks such as creating vSAN objects, or unmounting of disk groups, or rebooting ESXi hosts does not occur as expected and might cause such tasks to fail.
This issue is resolved in this release.
PR 3415365: ESXi upgrade to 8.0 Update 3 fails with an error in the vFAT bootbank partitions
ESXi 8.0 Update 3 adds a precheck in the upgrade workflow that uses the dosfsck tool to catch vFAT corruptions. One of the errors that dosfsck flags is the dirty bit set, but ESXi does not use that concept and such errors are false positives.
In the vSphere Client, you see an error such as A problem with one or more vFAT bootbank partitions was detected. Please refer to KB 91136 and run dosfsck on bootbank partitions
.
In the remediation logs on ESXi hosts, you see logs such as:
2024-07-02T16:01:16Z In(14) lifecycle[122416262]: runcommand:199 runcommand called with: args = ['/bin/dosfsck', '-V', '-n', '/dev/disks/naa.600508b1001c7d25f5336a7220b5afc1:6'], outfile = None, returnoutput= True, timeout = 10.
2024-07-02T16:01:16Z In(14) lifecycle[122416262]: upgrade_precheck:1836 dosfsck output: b'CP850//TRANSLIT: Invalid argument\nCP850: Invalid argument\nfsck.fat 4.1+git (2017-01-24)\n0x25: Dirty bit is set. Fswas not properly unmounted and some data may be corrupt.\n Automatically removing dirty bit.\nStarting check/repair pass.\nStarting verification pass.\n\nLeaving filesystem unchanged.\n/dev/disks/naa.600508b1001c7d25f5336a7220b5afc1:6: 121 files, 5665/65515 clusters\n'
This issue is resolved in this release.
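If you want to inspect a bootbank partition manually, as KB 91136 and the error message suggest, you can run the same read-only check the precheck uses; the device path below is a placeholder in the format shown in the log above:

```
# Read-only vFAT check of a bootbank partition (device path is a placeholder).
# The -n option makes no changes; on ESXi, a "Dirty bit is set" finding by
# itself is a false positive, not a corruption.
/bin/dosfsck -V -n /dev/disks/<bootbank-device>:<partition>
```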
PR 3421084: An ESXi host might become temporarily inaccessible from vCenter if an NFSv3 datastore fails to mount during reboot or bring up
During an ESXi host reboot, if an NFSv3 datastore fails to mount during the reboot or bring up in VMware Cloud Foundation environments, retries to mount the datastore continue in the background. However, while the datastore is still not available, the hostd daemon might fail with a core dump when trying to access it and cause the host to lose connectivity to the vCenter system for a short period.
This issue is resolved in this release.
PR 3407251: ESXi host fails with a purple diagnostic screen due to a rare physical CPU (PCPU) lockup
In the vSphere Client, when you use the Delete from Disk option to remove a virtual machine from a vCenter system and delete all VM files from the datastore, including the configuration file and virtual disk files, if any of the files is corrupted, a rare issue with handling corrupted files in the delete path might lead to a PCPU lockup. As a result, the ESXi host fails with a purple diagnostic screen and a message such as NMI IPI: Panic requested by another PCPU
.
This issue is resolved in this release.
PR 3406627: VMFS6 automatic UNMAP feature might fail to reclaim filesystem space beyond 250 GB
In certain cases, when you delete more than 250 GB of filesystem space on a VMFS6 volume, for example 1 TB, and the volume has no active references such as active VMs, the VMFS6 automatic UNMAP feature might fail to reclaim the filesystem space beyond 250 GB.
This issue is resolved in this release.
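For reference, you can review the automatic space reclamation settings of a VMFS6 volume from the ESXi shell; the datastore label is a placeholder and this check is informational only:

```
# Show the UNMAP (space reclamation) configuration of a VMFS6 datastore
# (volume label is a placeholder).
esxcli storage vmfs reclaim config get --volume-label=<datastore-name>
```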
PR 3419241: After deleting a snapshot or snapshot consolidation, some virtual machines intermittently fail
When a VM on an NFSv3 datastore has multiple snapshots, such as s1, s2, s3, and s4, if the VM reverts to one of the snapshots, for example s2, then powers on, and then one of the other snapshots, such as s3, is deleted, the vmx service might fail. The issue occurs because the code tries to consolidate links of a disk that is not part of the VM's current state and gets a null pointer. As a result, snapshot consolidation might also fail and cause the vmx service to fail as well.
This issue is resolved in this release. If you already face the issue, power off the VM, edit its .vmx
file to add the following setting: consolidate.upgradeNFS3Locks = "FALSE"
, and power on the VM.
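One possible way to apply this workaround from the ESXi shell is sketched below; the VM ID and paths are placeholders, and you can equally edit the .vmx file by any other method while the VM is powered off:

```
# Find the VM ID, power the VM off, append the workaround setting to its
# .vmx file, reload the configuration, and power the VM back on.
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/power.off <vmid>
echo 'consolidate.upgradeNFS3Locks = "FALSE"' >> /vmfs/volumes/<datastore>/<vm>/<vm>.vmx
vim-cmd vmsvc/reload <vmid>
vim-cmd vmsvc/power.on <vmid>
```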
PR 3407532: VMs with snapshots and active encryption experience higher I/O latency
In rare cases, encrypted VMs with snapshots might experience higher than expected latency. This issue occurs due to unaligned I/O operations that generate excessive metadata requests to the underlying storage and lead to increased latency.
This issue is resolved in this release. To optimize performance, the VMcrypt I/O filter is enhanced to allocate memory in 4K-aligned blocks for both read and write operations. This reduction in metadata overhead significantly improves overall I/O performance.
PR 3410311: You cannot log in to the Direct Console User Interface (DCUI) with regular Active Directory credentials
When you try to log in to the DCUI of an ESXi host with a regular Active Directory account by using either a remote management application such as HP Integrated Lights-Out (iLO) or Dell Remote Access Card (DRAC), or a server management system such as Lenovo XCC or Huawei iBMC, the login might fail. In the DCUI, you see an error such as Wrong user name or password
. In the vmkernel.log
file, you see logs such as:
2024-07-01T10:40:53.007Z In(182) vmkernel: cpu1:264954)VmkAccess: 106: dcui: running in dcuiDom(7): socket = /etc/likewise/lib/.lsassd (unix_stream_socket_connect): Access denied by vmkernel access control policy.
The issue occurs due to a restriction of ESXi processes to access certain resources, such as Likewise.
This issue is resolved in this release.
PR 3403706: Hot extending a non-shared disk in a Windows Server Failover Cluster might result in lost reservations on shared disks
In a WSFC cluster, due to an issue with releasing SCSI reservations, in some cases hot extending a non-shared disk might result in lost reservations on shared disks and failover of the disk resource.
This issue is resolved in this release. The fix makes sure that the release of SCSI reservations is properly handled for all types of shared disks.
PR 3394043: Creating a vSAN File Service fails when you use IPs within the 172.17.0.0/16 range as mount points
Prior to vSphere 8.0 Update 3b, when the specified file service network overlaps with the Docker default internal network 172.17.0.0/16, you see Skyline Health warnings for DNS lookup failures and you cannot create vSAN File Services unless you change your network configuration.
This issue is resolved in this release. The fix routes traffic to the correct endpoint to avoid possible conflicts.
PR 3389766: During a vSphere vMotion migration of a fault tolerant Primary VM with encryption, the migration task might fail and vSphere FT failover occurs
In rare cases, the encryption key package might not be sent or received correctly during a vSphere vMotion migration of a fault tolerant Primary VM with encryption, and as a result the migration task fails and vSphere FT failover occurs.
This issue is resolved in this release. The fix sends the encryption key at the end of the vSphere FT checkpoint to avoid errors.
PR 3403680: Mounting of vSphere Virtual Volumes stretched storage container fails with an undeclared fault
In the vSphere Client, you might see the error Undeclared fault
while mounting a newly created vSphere Virtual Volumes stretched storage container from an existing storage array. The issue occurs due to a rare race condition. vSphere Virtual Volumes generates a core dump and restarts after the failure.
This issue is resolved in this release.
PR 3408300: You cannot remove or delete a VMFS partition on a 4K native (4Kn) Software Emulation (SWE) disk
When you attempt to remove or delete a VMFS partition on a 4Kn SWE disk, in the vSphere Client you see an error such as Read-only file system during write on /dev/disks/<device name>
and the operation fails. In the vmkernel.log
, you see entries such as in-use partition <part num>, modification is not supported
.
This issue is resolved in this release.
PR 3402823: Fresh installation or creating VMFS partitions on Micron 7500 or Intel D5-P5336 NVMe drives might fail with a purple diagnostic screen
UNMAP commands enable ESXi hosts to release storage space that is mapped to data deleted from the host. In NVMe, the equivalent of UNMAP commands is a deallocate DSM request. Micron 7500 and Intel D5-P5336 devices advertise a very large value in one of the deallocate limit attributes, DMSRL, which is the maximum number of logical blocks in a single range for a Dataset Management command. This leads to an integer overflow when the ESXi unmap split code converts number of blocks to number of bytes, which in turn might cause a failure of either installation or VMFS creation. You see a purple diagnostics screen with an error such as Exception 14 or corruption in dlmalloc
. The issue affects ESXi 8.0 Update 2 and later.
This issue is resolved in this release.
PR 3396479: Standard image profiles for ESXi 8.0 Update 3 show last modified date as release date
The Release Date field of the standard image profile for ESXi 8.0 Update 3 shows the Last Modified Date value. The issue is only applicable to the image profiles used in Auto Deploy or ESXCLI. Base images used in vSphere Lifecycle Manager workflows display the release date correctly. This issue has no functional impact. The side effect is that if you search for profiles by release date, the profile does not show with the actual release date.
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs Included | |
PRs Fixed | 3412448, 3389057, 3397914, 3421179, 3426946 |
CVE numbers | N/A |
Updates the esx-update
and loadesx
VIBs to resolve the following issue:
PR 3412448: Remediation of ESXi hosts might fail due to a timeout in the update of the VMware-VM-Tools component
While upgrading ESXi hosts by using vSphere Lifecycle Manager, remediation of the VMware-VM-Tools component might occasionally fail due to a timeout. The issue occurs when the update process takes longer than the timeout setting of 30 sec for the VMware-VM-Tools component.
This issue is resolved in this release. The fix increases the timeout limit to 120 sec.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs Included | |
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the loadesxio
and esxio-update
VIBs.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs Included | |
PRs Fixed | 3421179, 3426946 |
CVE numbers | N/A |
Updates the rshim
VIB.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs Included | |
PRs Fixed | 3421179 |
CVE numbers | N/A |
Updates the rshim-net
VIB.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs Included | |
PRs Fixed | 3421179 |
CVE numbers | N/A |
Updates the mlnx-bfbootctl-esxio
VIB.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs Included | |
PRs Fixed | 3414588 |
CVE numbers | N/A |
Updates the nvmetcp
and nvmetcp-esxio
VIBs.
Patch Category | Security |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs | |
PRs Fixed | 3425039, 3423080, 3408352, 3415908, 3380359, 3404366, 3421357, 3421359, 3390663, 3404362, 3396479, 3404367, 3398130, 3398132, 3316536, 3410159, 3412279, 3395162, 3404369, 3404657, 3390662 |
CVE numbers | N/A |
The ESXi
and esx-update
bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.
This patch updates the drivervm-gpu-base, esxio-combiner, esx-xserver, vdfs, cpu-microcode, vds-vsip, vsan, bmcal, clusterstore, native-misc-drivers, infravisor, esxio-dvfilter-generic-fastpath, crx, gc, vcls-pod-crx, esxio-base, esx-base, esxio, gc-esxio, native-misc-drivers-esxio, esxio-combiner-esxio, esx-dvfilter-generic-fastpath, vsanhealth, trx, pensandoatlas,
and bmcal-esxio
VIBs.
This patch resolves the following issues:
ESXi 8.0 Update 3b includes the following Intel microcode:
Code Name | FMS | Plat ID | Servicing | MCU Rev | MCU Date | Brand Names |
---|---|---|---|---|---|---|
Nehalem EP | 0x106a5 (06/1a/5) | 0x03 | baseline | 0x1d | 5/11/2018 | Intel Xeon 35xx Series; Intel Xeon 55xx Series |
Clarkdale | 0x20652 (06/25/2) | 0x12 | baseline | 0x11 | 5/8/2018 | Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series |
Arrandale | 0x20655 (06/25/5) | 0x92 | baseline | 0x7 | 4/23/2018 | Intel Core i7-620LE Processor |
Sandy Bridge DT | 0x206a7 (06/2a/7) | 0x12 | baseline | 0x2f | 2/17/2019 | Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series |
Westmere EP | 0x206c2 (06/2c/2) | 0x03 | baseline | 0x1f | 5/8/2018 | Intel Xeon 56xx Series; Intel Xeon 36xx Series |
Sandy Bridge EP | 0x206d6 (06/2d/6) | 0x6d | baseline | 0x621 | 3/4/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series |
Sandy Bridge EP | 0x206d7 (06/2d/7) | 0x6d | baseline | 0x71a | 3/24/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series |
Nehalem EX | 0x206e6 (06/2e/6) | 0x04 | baseline | 0xd | 5/15/2018 | Intel Xeon 65xx Series; Intel Xeon 75xx Series |
Westmere EX | 0x206f2 (06/2f/2) | 0x05 | baseline | 0x3b | 5/16/2018 | Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series |
Ivy Bridge DT | 0x306a9 (06/3a/9) | 0x12 | baseline | 0x21 | 2/13/2019 | Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C |
Haswell DT | 0x306c3 (06/3c/3) | 0x32 | baseline | 0x28 | 11/12/2019 | Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series |
Ivy Bridge EP | 0x306e4 (06/3e/4) | 0xed | baseline | 0x42e | 3/14/2019 | Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series |
Ivy Bridge EX | 0x306e7 (06/3e/7) | 0xed | baseline | 0x715 | 3/14/2019 | Intel Xeon E7-8800/4800/2800-v2 Series |
Haswell EP | 0x306f2 (06/3f/2) | 0x6f | baseline | 0x49 | 8/11/2021 | Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series |
Haswell EX | 0x306f4 (06/3f/4) | 0x80 | baseline | 0x1a | 5/24/2021 | Intel Xeon E7-8800/4800-v3 Series |
Broadwell H | 0x40671 (06/47/1) | 0x22 | baseline | 0x22 | 11/12/2019 | Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series |
Avoton | 0x406d8 (06/4d/8) | 0x01 | baseline | 0x12d | 9/16/2019 | Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series |
Broadwell EP/EX | 0x406f1 (06/4f/1) | 0xef | baseline | 0xb000040 | 5/19/2021 | Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series |
Skylake SP | 0x50654 (06/55/4) | 0xb7 | baseline | 0x2007206 | 4/15/2024 | Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series |
Cascade Lake B-0 | 0x50656 (06/55/6) | 0xbf | baseline | 0x4003707 | 3/1/2024 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200 |
Cascade Lake | 0x50657 (06/55/7) | 0xbf | baseline | 0x5003707 | 3/1/2024 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200 |
Cooper Lake | 0x5065b (06/55/b) | 0xbf | baseline | 0x7002904 | 4/1/2024 | Intel Xeon Platinum 8300 Series; Intel Xeon Gold 6300/5300 |
Broadwell DE | 0x50662 (06/56/2) | 0x10 | baseline | 0x1c | 6/17/2019 | Intel Xeon D-1500 Series |
Broadwell DE | 0x50663 (06/56/3) | 0x10 | baseline | 0x700001c | 6/12/2021 | Intel Xeon D-1500 Series |
Broadwell DE | 0x50664 (06/56/4) | 0x10 | baseline | 0xf00001a | 6/12/2021 | Intel Xeon D-1500 Series |
Broadwell NS | 0x50665 (06/56/5) | 0x10 | baseline | 0xe000015 | 8/3/2023 | Intel Xeon D-1600 Series |
Skylake H/S | 0x506e3 (06/5e/3) | 0x36 | baseline | 0xf0 | 11/12/2021 | Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series |
Denverton | 0x506f1 (06/5f/1) | 0x01 | baseline | 0x3e | 10/5/2023 | Intel Atom C3000 Series |
Ice Lake SP | 0x606a6 (06/6a/6) | 0x87 | baseline | 0xd0003e7 | 4/1/2024 | Intel Xeon Platinum 8300 Series; Intel Xeon Gold 6300/5300 Series; Intel Xeon Silver 4300 Series |
Ice Lake D | 0x606c1 (06/6c/1) | 0x10 | baseline | 0x10002b0 | 4/3/2024 | Intel Xeon D-2700 Series; Intel Xeon D-1700 Series |
Snow Ridge | 0x80665 (06/86/5) | 0x01 | baseline | 0x4c000026 | 2/28/2024 | Intel Atom P5000 Series |
Snow Ridge | 0x80667 (06/86/7) | 0x01 | baseline | 0x4c000026 | 2/28/2024 | Intel Atom P5000 Series |
Tiger Lake U | 0x806c1 (06/8c/1) | 0x80 | baseline | 0xb8 | 2/15/2024 | Intel Core i3/i5/i7-1100 Series |
Tiger Lake U Refresh | 0x806c2 (06/8c/2) | 0xc2 | baseline | 0x38 | 2/15/2024 | Intel Core i3/i5/i7-1100 Series |
Tiger Lake H | 0x806d1 (06/8d/1) | 0xc2 | baseline | 0x52 | 2/15/2024 | Intel Xeon W-11000E Series |
Sapphire Rapids SP | 0x806f8 (06/8f/8) | 0x87 | baseline | 0x2b0005c0 | 2/5/2024 | Intel Xeon Platinum 8400 Series; Intel Xeon Gold 6400/5400 Series; Intel Xeon Silver 4400 Series; Intel Xeon Bronze 3400 Series |
Kaby Lake H/S/X | 0x906e9 (06/9e/9) | 0x2a | baseline | 0xf8 | 9/28/2023 | Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series |
Coffee Lake | 0x906ea (06/9e/a) | 0x22 | baseline | 0xf8 | 2/1/2024 | Intel Xeon E-2100 Series; Intel Xeon E-2200 Series (4 or 6 core) |
Coffee Lake | 0x906eb (06/9e/b) | 0x02 | baseline | 0xf6 | 2/1/2024 | Intel Xeon E-2100 Series |
Coffee Lake | 0x906ec (06/9e/c) | 0x22 | baseline | 0xf8 | 2/1/2024 | Intel Xeon E-2100 Series |
Coffee Lake Refresh | 0x906ed (06/9e/d) | 0x22 | baseline | 0x100 | 2/5/2024 | Intel Xeon E-2200 Series (8 core) |
Rocket Lake S | 0xa0671 (06/a7/1) | 0x02 | baseline | 0x62 | 3/7/2024 | Intel Xeon E-2300 Series |
Raptor Lake E/HX/S | 0xb0671 (06/b7/1) | 0x32 | baseline | 0x125 | 4/16/2024 | Intel Xeon E-2400 Series |
Emerald Rapids SP | 0xc06f2 (06/cf/2) | 0x87 | baseline | 0x21000200 | 11/20/2023 | Intel Xeon 8500 Series; Intel Xeon Gold 6500/5500 Series; Intel Xeon Silver 4500 Series; Intel Xeon Bronze 3500 Series |
ESXi 8.0 Update 3b contains the following security updates:
OpenSSH is updated to version 9.8p1.
Patch Category | Security |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs Included | |
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the esx-update
and loadesx
VIBs.
Patch Category | Security |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs Included | |
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the esxio-update
and loadesxio
VIBs.
Patch Category | Security |
Patch Severity | Critical |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs Included | |
PRs Fixed | 3409328 |
CVE numbers | N/A |
Updates the tools-light
VIB.
VMware Tools Bundling Changes in ESXi 8.0 Update 3b
The following VMware Tools ISO images are bundled with ESXi 8.0 Update 3b:
windows.iso: VMware Tools 12.4.5 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.
linux.iso: VMware Tools 10.3.26 ISO image for Linux OS with glibc 2.11 or later.
The following VMware Tools ISO images are available for download:
VMware Tools 11.0.6:
windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).
VMware Tools 10.0.12:
winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003.
linuxPreGLibc25.iso: supports Linux guest operating systems earlier than Red Hat Enterprise Linux (RHEL) 5, SUSE Linux Enterprise Server (SLES) 11, Ubuntu 7.04, and other distributions with glibc version earlier than 2.5.
solaris.iso: VMware Tools image 10.3.10 for Solaris.
darwin.iso: Supports Mac OS X versions 10.11 and later.
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
Profile Name | ESXi-8.0U3b-24280767-standard |
Build | For build information, see Patches Contained in This Release. |
Vendor | VMware by Broadcom, Inc. |
Release Date | September 17, 2024 |
Acceptance Level | Partner Supported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs | |
PRs Fixed |
3426946, 3421084, 3421179, 3403683, 3388844, 3424051, 3422735, 3420488, 3421971, 3422005, 3417326, 3420850, 3420421, 3420586, 3410311, 3420907, 3419241, 3409528, 3404817, 3416221, 3421665, 3415365, 3415102, 3417329, 3421434, 3419074, 3417667, 3408802, 3417224, 3420702, 3385757, 3412010, 3418878, 3408477, 3406546, 3406968, 3406999, 3398549, 3402823, 3408281, 3409108, 3406627, 3407251, 3392225, 3412138, 3406037, 3406875, 3412536, 3407952, 3410114, 3386751, 3408145, 3403240, 3389766, 3283598, 3408302, 3403639, 3407532, 3394043, 3387721, 3408300, 3396479, 3401629, 3401392, 3409085, 3408640, 3405912, 3403680, 3405106, 3407204, 3407022, 3409176, 3408739, 3408740, 3390434, 3406053, 3400702, 3392173, 3403496, 3402955, 3433092, 3421179, 3414588 |
Related CVE numbers | N/A |
This patch resolves the following issues:
PR 3392225: You see status Unknown for virtual machines after migration to an ESXi host with active host-local swap location
On rare occasions, with a very specific sequence of conditions, when you migrate to or power on a virtual machine on an ESXi host with an active host-local swap location, the virtual machine might hang with a status Unknown.
This issue is resolved in this release.
PR 3388844: You cannot activate Kernel Direct Memory Access (DMA) Protection for Windows guest OS on ESXi hosts with Intel CPU
If an input–output memory management unit (IOMMU) is active on a Windows guest OS, the Kernel DMA Protection option under System Information might be off for VMs running on ESXi hosts with Intel CPUs. As a result, you might not be able to fulfill some security requirements for your environment.
This issue is resolved in this release. The fix activates Kernel DMA Protection by default. The fix also adds the vmx parameter acpi.dmar.enableDMAProtection to activate Kernel DMA Protection in the Windows OS. The default value of the parameter is FALSE; to activate Kernel DMA Protection in a Windows guest OS, you must add acpi.dmar.enableDMAProtection=TRUE to the .vmx file.
PR 3412448: Remediation of ESXi hosts might fail due to a timeout in the update of the VMware-VM-Tools component
While upgrading ESXi hosts by using vSphere Lifecycle Manager, remediation of the VMware-VM-Tools component might occasionally fail due to a timeout. The issue occurs when the update process takes longer than the timeout setting of 30 sec for the VMware-VM-Tools component.
This issue is resolved in this release. The fix increases the timeout limit to 120 sec.
PR 3377863: You see "Hosts are remediated" message in the upgrade precheck results
When running a precheck before an ESXi update, you see a message such as Hosts are remediated
in the precheck results, which is not clear and might be misleading.
This issue is resolved in this release. The new message is Hosts are remediated sequentially
.
PR 3417329: Virtual machine tasks might intermittently fail due to a rare issue with the memory slab
Due to a rare issue with the vSAN DOM object, where a reference count on a component object might not decrement correctly, the in-memory object might never be released from the slab and can cause the component manager slab to reach its limit. As a result, you might not be able to create VMs, migrate VMs or might encounter VM power-on failures on vSAN clusters, either OSA or ESA.
This issue is resolved in this release.
PR 3406140: Extracting a vSphere Lifecycle Manager image from an existing ESXi host might fail after a kernel configuration change
Each update of the kernel configuration also triggers an update to the /bootbank/useropts.gz
file, but due to a known issue, the basemisc.tgz
might not contain the default useropts
file after such an update. As a result, when attempting to extract a vSphere Lifecycle Manager image from an existing ESXi host, the absence of the default useropts
file leads to failure to create the esx-base.vib
file and the operation also fails.
This issue is resolved in this release.
PR 3408477: Some ESXi hosts might not have a locker directory after an upgrade from ESXi 6.x to 8.x
When you upgrade an ESXi host with a boot disk of less than 10 GB and not on USB from ESXi 6.x to 8.x, the upgrade process might not create a locker directory and the /locker
symbolic link is not active.
This issue is resolved in this release. If you already face the issue, upgrading to ESXi 8.0 Update 3b creates a locker directory but does not automatically create a VMware Tools repository. As a result, clusters that host such ESXi hosts display as non-compliant. Remediate the cluster again to create a VMware Tools repository and to become compliant.
PR 3422005: ESXi hosts of version 8.0 Update 2 and later might fail to synchronize time with certain NTP servers
A change in the ntp-4.2.8p17 package in ESXi 8.0 Update 2 might cause the NTP client to reject certain server packets as poorly formatted or invalid. For example, if a server sends packets with a ppoll
value of 0
, the NTP client on ESXi does not synchronize with the server.
This issue is resolved in this release.
PR 3408145: The vmx service might fail with a core dump due to a rare issue with the vSphere Data Protection solution running out of resources
In rare cases, if a backup operation starts during high concurrent guest I/O load, for example a VM with high write I/O intensity and a high number of overwrites, the VAIO filter component of the vSphere Data Protection solution might run out of resources to handle the guest write I/Os. As a result, the vmx service might fail with a core dump and restart.
This issue is resolved in this release. Alternatively, run backups during periods of low guest I/O activity.
PR 3405912: In the vSphere Client, you do not see the correct total vSAN storage consumption for which you have a license
Due to a rare race condition in environments with many vSAN clusters managed by a single vCenter instance, in the vSphere Client under Licensing > Licenses you might see a discrepancy between the total claimed vSAN storage capacity and the reported value for the clusters.
This issue is resolved in this release.
PR 3414588: Snapshot tasks on virtual machines on NVMe/TCP datastores might take much longer than on VMs provisioned on NVMe/FC datastores
When creating or deleting a snapshot on a VM provisioned on datastores backed by NVMe/TCP namespaces, such tasks might take much longer than on VMs provisioned on datastores backed by NVMe/FC namespaces. The issue occurs because the nvmetcp driver handles some specific NVMe commands not in the way the NVMe/TCP target systems expect.
This issue is resolved in this release.
PR 3392173: A rare issue with the Virsto vSAN component might cause failure to create vSAN objects, or unmount disk groups, or reboot ESXi hosts
In very rare cases, if a Virsto component creation task fails, it might not be properly handled and cause background deletion of virtual disks to stop. As a result, deleting virtual disks in tasks such as creating vSAN objects, or unmounting of disk groups, or rebooting ESXi hosts does not occur as expected and might cause such tasks to fail.
This issue is resolved in this release.
PR 3415365: ESXi upgrade to 8.0 Update 3 fails with an error in the vFAT bootbank partitions
ESXi 8.0 Update 3 adds a precheck in the upgrade workflow that uses the dosfsck tool to catch vFAT corruptions. One of the errors that dosfsck flags is the dirty bit set, but ESXi does not use that concept and such errors are false positives.
In the vSphere Client, you see an error such as A problem with one or more vFAT bootbank partitions was detected. Please refer to KB 91136 and run dosfsck on bootbank partitions
.
In the remediation logs on ESXi hosts, you see logs such as:
2024-07-02T16:01:16Z In(14) lifecycle[122416262]: runcommand:199 runcommand called with: args = ['/bin/dosfsck', '-V', '-n', '/dev/disks/naa.600508b1001c7d25f5336a7220b5afc1:6'], outfile = None, returnoutput= True, timeout = 10.
2024-07-02T16:01:16Z In(14) lifecycle[122416262]: upgrade_precheck:1836 dosfsck output: b'CP850//TRANSLIT: Invalid argument\nCP850: Invalid argument\nfsck.fat 4.1+git (2017-01-24)\n0x25: Dirty bit is set. Fswas not properly unmounted and some data may be corrupt.\n Automatically removing dirty bit.\nStarting check/repair pass.\nStarting verification pass.\n\nLeaving filesystem unchanged.\n/dev/disks/naa.600508b1001c7d25f5336a7220b5afc1:6: 121 files, 5665/65515 clusters\n'
This issue is resolved in this release.
PR 3421084: An ESXi host might become temporarily inaccessible from vCenter if an NFSv3 datastore fails to mount during reboot or bring up
During an ESXi host reboot, if an NFSv3 datastore fails to mount during the reboot or bring up in VMware Cloud Foundation environments, retries to mount the datastore continue in the background. However, while the datastore is still not available, the hostd daemon might fail with a core dump when trying to access it and cause the host to lose connectivity to the vCenter system for a short period.
This issue is resolved in this release.
PR 3407251: ESXi host fails with a purple diagnostic screen due to a rare physical CPU (PCPU) lockup
In the vSphere Client, when you use the Delete from Disk option to remove a virtual machine from a vCenter system and delete all VM files from the datastore, including the configuration file and virtual disk files, if any of the files is corrupted, a rare issue with handling corrupted files in the delete path might lead to a PCPU lockup. As a result, the ESXi host fails with a purple diagnostic screen and a message such as NMI IPI: Panic requested by another PCPU
.
This issue is resolved in this release.
PR 3406627: VMFS6 automatic UNMAP feature might fail to reclaim filesystem space beyond 250 GB
In certain cases, when you delete more than 250 GB of filesystem space on a VMFS6 volume, for example 1 TB, and the volume has no active references such as active VMs, the VMFS6 automatic UNMAP feature might fail to reclaim the filesystem space beyond 250 GB.
This issue is resolved in this release.
PR 3419241: After deleting a snapshot or snapshot consolidation, some virtual machines intermittently fail
When a VM on an NFSv3 datastore has multiple snapshots, such as s1, s2, s3, and s4, if the VM reverts to one of the snapshots, for example s2, then powers on, and then one of the other snapshots, such as s3, is deleted, the vmx service might fail. The issue occurs because the code tries to consolidate links of a disk that is not part of the VM's current state and gets a null pointer. As a result, snapshot consolidation might also fail and cause the vmx service to fail as well.
This issue is resolved in this release. If you already face the issue, power off the VM, edit its .vmx
file to add the following setting: consolidate.upgradeNFS3Locks = "FALSE"
, and power on the VM.
PR 3407532: VMs with snapshots and active encryption experience higher I/O latency
In rare cases, encrypted VMs with snapshots might experience higher than expected latency. This issue occurs due to unaligned I/O operations that generate excessive metadata requests to the underlying storage and lead to increased latency.
This issue is resolved in this release. To optimize performance, the VMcrypt I/O filter is enhanced to allocate memory in 4K-aligned blocks for both read and write operations. This reduction in metadata overhead significantly improves overall I/O performance.
PR 3410311: You cannot log in to the Direct Console User Interface (DCUI) with regular Active Directory credentials
When you try to log in to the DCUI of an ESXi host with a regular Active Directory account by using either a remote management application such as HP Integrated Lights-Out (iLO) or Dell Remote Access Card (DRAC), or a server management system such as Lenovo XCC or Huawei iBMC, the login might fail. In the DCUI, you see an error such as Wrong user name or password
. In the vmkernel.log
file, you see logs such as:
2024-07-01T10:40:53.007Z In(182) vmkernel: cpu1:264954)VmkAccess: 106: dcui: running in dcuiDom(7): socket = /etc/likewise/lib/.lsassd (unix_stream_socket_connect): Access denied by vmkernel access control policy.
The issue occurs due to a restriction of ESXi processes to access certain resources, such as Likewise.
This issue is resolved in this release.
PR 3403706: Hot extending a non-shared disk in a Windows Server Failover Cluster might result in lost reservations on shared disks
In a WSFC cluster, due to an issue with releasing SCSI reservations, in some cases hot extending a non-shared disk might result in lost reservations on shared disks and failover of the disk resource.
This issue is resolved in this release. The fix makes sure that the release of SCSI reservations is properly handled for all types of shared disks.
PR 3394043: Creating a vSAN File Service fails when you use IPs within the 172.17.0.0/16 range as mount points
Prior to vSphere 8.0 Update 3b, when the specified file service network overlaps with the Docker default internal network 172.17.0.0/16, you see Skyline Health warnings for DNS lookup failures and you cannot create vSAN File Services unless you change your network configuration.
This issue is resolved in this release. The fix routes traffic to the correct endpoint to avoid possible conflicts.
PR 3389766: During a vSphere vMotion migration of a fault tolerant Primary VM with encryption, the migration task might fail and vSphere FT failover occurs
In rare cases, the encryption key package might not be sent or received correctly during a vSphere vMotion migration of a fault tolerant Primary VM with encryption, and as a result the migration task fails and vSphere FT failover occurs.
This issue is resolved in this release. The fix sends the encryption key at the end of the vSphere FT checkpoint to avoid errors.
PR 3403680: Mounting of vSphere Virtual Volumes stretched storage container fails with an undeclared fault
In the vSphere Client, you might see the error Undeclared fault
while mounting a newly created vSphere Virtual Volumes stretched storage container from an existing storage array. The issue occurs due to a rare race condition. vSphere Virtual Volumes generates a core dump and restarts after the failure.
This issue is resolved in this release.
PR 3408300: You cannot remove or delete a VMFS partition on a 4K native (4Kn) Software Emulation (SWE) disk
When you attempt to remove or delete a VMFS partition on a 4Kn SWE disk, in the vSphere Client you see an error such as Read-only file system during write on /dev/disks/<device name>
and the operation fails. In the vmkernel.log
, you see entries such as in-use partition <part num>, modification is not supported
.
This issue is resolved in this release.
PR 3402823: Fresh installation or creating VMFS partitions on Micron 7500 or Intel D5-P5336 NVMe drives might fail with a purple diagnostic screen
UNMAP commands enable ESXi hosts to release storage space that is mapped to data deleted from the host. In NVMe, the equivalent of UNMAP commands is a deallocate DSM request. Micron 7500 and Intel D5-P5336 devices advertise a very large value in one of the deallocate limit attributes, DMSRL, which is the maximum number of logical blocks in a single range for a Dataset Management command. This leads to an integer overflow when the ESXi unmap split code converts number of blocks to number of bytes, which in turn might cause a failure of either installation or VMFS creation. You see a purple diagnostics screen with an error such as Exception 14 or corruption in dlmalloc
. The issue affects ESXi 8.0 Update 2 and later.
This issue is resolved in this release.
PR 3396479: Standard image profiles for ESXi 8.0 Update 3 show last modified date as release date
The Release Date field of the standard image profile for ESXi 8.0 Update 3 shows the Last Modified Date value. The issue is only applicable to the image profiles used in Auto Deploy or ESXCLI. Base images used in vSphere Lifecycle Manager workflows display the release date correctly. This issue has no functional impact. The side effect is that if you search for profiles by release date, the profile does not show with the actual release date.
This issue is resolved in this release.
Profile Name | ESXi-8.0U3b-24280767-no-tools |
Build | For build information, see Patches Contained in This Release. |
Vendor | VMware by Broadcom, Inc. |
Release Date | September 17, 2024 |
Acceptance Level | Partner Supported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs | |
PRs Fixed |
3426946, 3421084, 3421179, 3403683, 3388844, 3424051, 3422735, 3420488, 3421971, 3422005, 3417326, 3420850, 3420421, 3420586, 3410311, 3420907, 3419241, 3409528, 3404817, 3416221, 3421665, 3415365, 3415102, 3417329, 3421434, 3419074, 3417667, 3408802, 3417224, 3420702, 3385757, 3412010, 3418878, 3408477, 3406546, 3406968, 3406999, 3398549, 3402823, 3408281, 3409108, 3406627, 3407251, 3392225, 3412138, 3406037, 3406875, 3412536, 3407952, 3410114, 3386751, 3408145, 3403240, 3389766, 3283598, 3408302, 3403639, 3407532, 3394043, 3387721, 3408300, 3396479, 3401629, 3401392, 3409085, 3408640, 3405912, 3403680, 3405106, 3407204, 3407022, 3409176, 3408739, 3408740, 3390434, 3406053, 3400702, 3392173, 3403496, 3402955, 3433092, 3421179, 3414588 |
Related CVE numbers | N/A |
This patch resolves the following issues:
PR 3388844: You cannot activate Kernel Direct Memory Access (DMA) Protection for Windows guest OS on ESXi hosts with Intel CPU
If an input–output memory management unit (IOMMU) is active on a Windows guest OS, the Kernel DMA Protection option under System Information might be off for VMs running on ESXi hosts with Intel CPUs. As a result, you might not be able to fulfill some security requirements for your environment.
This issue is resolved in this release. The fix activates Kernel DMA Protection by default. The fix also adds the vmx parameter acpi.dmar.enableDMAProtection to activate Kernel DMA Protection in the Windows OS. The default value of the parameter is FALSE; to activate Kernel DMA Protection in a Windows guest OS, you must add acpi.dmar.enableDMAProtection=TRUE to the .vmx file.
PR 3412448: Remediation of ESXi hosts might fail due to a timeout in the update of the VMware-VM-Tools component
While upgrading ESXi hosts by using vSphere Lifecycle Manager, remediation of the VMware-VM-Tools component might occasionally fail due to a timeout. The issue occurs when the update process takes longer than the timeout setting of 30 sec for the VMware-VM-Tools component.
This issue is resolved in this release. The fix increases the timeout limit to 120 sec.
PR 3377863: You see "Hosts are remediated" message in the upgrade precheck results
When running a precheck before an ESXi update, you see a message such as Hosts are remediated
in the precheck results, which is not clear and might be misleading.
This issue is resolved in this release. The new message is Hosts are remediated sequentially
.
PR 3417329: Virtual machine tasks might intermittently fail due to a rare issue with the memory slab
Due to a rare issue with the vSAN DOM object, where a reference count on a component object might not decrement correctly, the in-memory object might never be released from the slab and can cause the component manager slab to reach its limit. As a result, you might not be able to create VMs, migrate VMs or might encounter VM power-on failures on vSAN clusters, either OSA or ESA.
This issue is resolved in this release.
PR 3406140: Extracting a vSphere Lifecycle Manager image from an existing ESXi host might fail after a kernel configuration change
Each update of the kernel configuration also triggers an update to the /bootbank/useropts.gz
file, but due to a known issue, the basemisc.tgz
might not contain the default useropts
file after such an update. As a result, when attempting to extract a vSphere Lifecycle Manager image from an existing ESXi host, the absence of the default useropts
file leads to failure to create the esx-base.vib
file and the operation also fails.
This issue is resolved in this release.
PR 3408477: Some ESXi hosts might not have a locker directory after an upgrade from ESXi 6.x to 8.x
When you upgrade an ESXi host with a boot disk of less than 10 GB and not on USB from ESXi 6.x to 8.x, the upgrade process might not create a locker directory and the /locker
symbolic link is not active.
This issue is resolved in this release. If you already face the issue, upgrading to ESXi 8.0 Update 3b creates a locker directory but does not automatically create a VMware Tools repository. As a result, clusters that host such ESXi hosts display as non-compliant. Remediate the cluster again to create a VMware Tools repository and to become compliant.
PR 3422005: ESXi hosts of version 8.0 Update 2 and later might fail to synchronize time with certain NTP servers
A change in the ntp-4.2.8p17 package in ESXi 8.0 Update 2 might cause the NTP client to reject certain server packets as poorly formatted or invalid. For example, if a server sends packets with a ppoll
value of 0
, the NTP client on ESXi does not synchronize with the server.
This issue is resolved in this release.
PR 3408145: The vmx service might fail with a core dump due to a rare issue with the vSphere Data Protection solution running out of resources
In rare cases, if a backup operation starts during high concurrent guest I/O load, for example a VM with high write I/O intensity and a high number of overwrites, the VAIO filter component of the vSphere Data Protection solution might run out of resources to handle the guest write I/Os. As a result, the vmx service might fail with a core dump and restart.
This issue is resolved in this release. Alternatively, run backups during periods of low guest I/O activity.
PR 3405912: In the vSphere Client, you do not see the correct total vSAN storage consumption for which you have a license
Due to a rare race condition in environments with many vSAN clusters managed by a single vCenter instance, in the vSphere Client under Licensing > Licenses you might see a discrepancy between the total claimed vSAN storage capacity and the reported value for the clusters.
This issue is resolved in this release.
PR 3414588: Snapshot tasks on virtual machines on NVMe/TCP datastores might take much longer than on VMs provisioned on NVMe/FC datastores
When creating or deleting a snapshot on a VM provisioned on datastores backed by NVMe/TCP namespaces, such tasks might take much longer than on VMs provisioned on datastores backed by NVMe/FC namespaces. The issue occurs because the nvmetcp driver handles some specific NVMe commands not in the way the NVMe/TCP target systems expect.
This issue is resolved in this release.
PR 3392173: A rare issue with the Virsto vSAN component might cause failure to create vSAN objects, or unmount disk groups, or reboot ESXi hosts
In very rare cases, if a Virsto component creation task fails, it might not be properly handled and cause background deletion of virtual disks to stop. As a result, deleting virtual disks in tasks such as creating vSAN objects, or unmounting of disk groups, or rebooting ESXi hosts does not occur as expected and might cause such tasks to fail.
This issue is resolved in this release.
PR 3415365: ESXi upgrade to 8.0 Update 3 fails with an error in the vFAT bootbank partitions
ESXi 8.0 Update 3 adds a precheck in the upgrade workflow that uses the dosfsck tool to catch vFAT corruptions. One of the errors that dosfsck flags is a set dirty bit, but ESXi does not use that concept, so such errors are false positives.
In the vSphere Client, you see an error such as A problem with one or more vFAT bootbank partitions was detected. Please refer to KB 91136 and run dosfsck on bootbank partitions.
In the remediation logs on ESXi hosts, you see logs such as:
2024-07-02T16:01:16Z In(14) lifecycle[122416262]: runcommand:199 runcommand called with: args = ['/bin/dosfsck', '-V', '-n', '/dev/disks/naa.600508b1001c7d25f5336a7220b5afc1:6'], outfile = None, returnoutput= True, timeout = 10.
2024-07-02T16:01:16Z In(14) lifecycle[122416262]: upgrade_precheck:1836 dosfsck output: b'CP850//TRANSLIT: Invalid argument\nCP850: Invalid argument\nfsck.fat 4.1+git (2017-01-24)\n0x25: Dirty bit is set. Fswas not properly unmounted and some data may be corrupt.\n Automatically removing dirty bit.\nStarting check/repair pass.\nStarting verification pass.\n\nLeaving filesystem unchanged.\n/dev/disks/naa.600508b1001c7d25f5336a7220b5afc1:6: 121 files, 5665/65515 clusters\n'
This issue is resolved in this release.
PR 3421084: An ESXi host might become temporarily inaccessible from vCenter if an NFSv3 datastore fails to mount during reboot or bring up
During an ESXi host reboot, if an NFSv3 datastore fails to mount during the reboot or bring up in VMware Cloud Foundation environments, retries to mount the datastore continue in the background. However, while the datastore is still not available, the hostd daemon might fail with a core dump when trying to access it and cause the host to lose connectivity to the vCenter system for a short period.
This issue is resolved in this release.
PR 3407251: ESXi host fails with a purple diagnostic screen due to a rare physical CPU (PCPU) lockup
In the vSphere Client, when you use the Delete from Disk option to remove a virtual machine from a vCenter system and delete all VM files from the datastore, including the configuration file and virtual disk files, if any of the files is corrupted, a rare issue with handling corrupted files in the delete path might lead to a PCPU lockup. As a result, the ESXi host fails with a purple diagnostic screen and a message such as NMI IPI: Panic requested by another PCPU.
This issue is resolved in this release.
PR 3406627: VMFS6 automatic UNMAP feature might fail to reclaim filesystem space beyond 250 GB
In certain cases, when you delete more than 250 GB of filesystem space on a VMFS6 volume, for example 1 TB, and the volume has no active references such as active VMs, the VMFS6 automatic UNMAP feature might fail to reclaim the filesystem space beyond 250 GB.
This issue is resolved in this release.
PR 3419241: After deleting a snapshot or snapshot consolidation, some virtual machines intermittently fail
When a VM on an NFSv3 datastore has multiple snapshots, such as s1, s2, s3, and s4, if the VM reverts to one of the snapshots, for example s2, then powers on, and one of the other snapshots, such as s3, is deleted, the vmx service might fail. The issue occurs because the code tries to consolidate links of a disk that is not part of the current state of the VM and gets a null pointer. As a result, snapshot consolidation might also fail and cause the vmx service to fail as well.
This issue is resolved in this release. If you already face the issue, power off the VM, edit its .vmx file to add the setting consolidate.upgradeNFS3Locks = "FALSE", and power on the VM.
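As an illustration only, a minimal sketch of this workaround from the ESXi shell, assuming a hypothetical VM located at /vmfs/volumes/datastore1/myvm and a VM ID obtained from vim-cmd; the setting name comes from the resolution above, while all paths and IDs are placeholders:
# Find the VM ID and power the VM off
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/power.off <vmid>
# Append the setting to the VM configuration file
echo 'consolidate.upgradeNFS3Locks = "FALSE"' >> /vmfs/volumes/datastore1/myvm/myvm.vmx
# Reload the configuration and power the VM back on
vim-cmd vmsvc/reload <vmid>
vim-cmd vmsvc/power.on <vmid>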
PR 3407532: VMs with snapshots and active encryption experience higher I/O latency
In rare cases, encrypted VMs with snapshots might experience higher than expected latency. This issue occurs due to unaligned I/O operations that generate excessive metadata requests to the underlying storage and lead to increased latency.
This issue is resolved in this release. To optimize performance, the VMcrypt I/O filter is enhanced to allocate memory in 4K-aligned blocks for both read and write operations. This reduction in metadata overhead significantly improves overall I/O performance.
PR 3410311: You cannot log in to the Direct Console User Interface (DCUI) with regular Active Directory credentials
When you try to log in to the DCUI of an ESXi host with a regular Active Directory account by using either a remote management application such as HP Integrated Lights-Out (iLO) or Dell Remote Access Card (DRAC), or a server management system such as Lenovo XCC or Huawei iBMC, the login might fail. In the DCUI, you see an error such as Wrong user name or password. In the vmkernel.log file, you see logs such as:
2024-07-01T10:40:53.007Z In(182) vmkernel: cpu1:264954)VmkAccess: 106: dcui: running in dcuiDom(7): socket = /etc/likewise/lib/.lsassd (unix_stream_socket_connect): Access denied by vmkernel access control policy.
The issue occurs due to a restriction of ESXi processes to access certain resources, such as Likewise.
This issue is resolved in this release.
PR 3403706: Hot extending a non-shared disk in a Windows Server Failover Cluster might result in lost reservations on shared disks
In a WSFC cluster, due to an issue with releasing SCSI reservations, in some cases hot extending a non-shared disk might result in lost reservations on shared disks and failover of the disk resource.
This issue is resolved in this release. The fix makes sure that SCSI reservation releases are properly handled for all types of shared disks.
PR 3394043: Creating a vSAN File Service fails when you use IPs within the 172.17.0.0/16 range as mount points
Prior to vSphere 8.0 Update 3b, if the specified file service network overlaps with the Docker default internal network 172.17.0.0/16, you must change your network configuration. Otherwise, you see Skyline Health warnings for DNS lookup failures and you cannot create vSAN File Services.
This issue is resolved in this release. The fix routes traffic to the correct endpoint to avoid possible conflicts.
PR 3389766: During a vSphere vMotion migration of a fault tolerant Primary VM with encryption, the migration task might fail and vSphere FT failover occurs
In rare cases, the encryption key package might not be sent or received correctly during a vSphere vMotion migration of a fault tolerant Primary VM with encryption, and as a result the migration task fails and vSphere FT failover occurs.
This issue is resolved in this release. The fix sends the encryption key at the end of the vSphere FT checkpoint to avoid errors.
PR 3403680: Mounting of vSphere Virtual Volumes stretched storage container fails with an undeclared fault
In the vSphere Client, you might see the error Undeclared fault while mounting a newly created vSphere Virtual Volumes stretched storage container from an existing storage array. The issue occurs due to a rare race condition. vSphere Virtual Volumes generates a core dump and restarts after the failure.
This issue is resolved in this release.
PR 3408300: You cannot remove or delete a VMFS partition on a 4K native (4Kn) Software Emulation (SWE) disk
When you attempt to remove or delete a VMFS partition on a 4Kn SWE disk, in the vSphere Client you see an error such as Read-only file system during write on /dev/disks/<device name> and the operation fails. In the vmkernel.log, you see entries such as in-use partition <part num>, modification is not supported.
This issue is resolved in this release.
PR 3402823: Fresh installation or creating VMFS partitions on Micron 7500 or Intel D5-P5336 NVMe drives might fail with a purple diagnostic screen
UNMAP commands enable ESXi hosts to release storage space that is mapped to data deleted from the host. In NVMe, the equivalent of UNMAP commands is a deallocate DSM request. Micron 7500 and Intel D5-P5336 devices advertise a very large value in one of the deallocate limit attributes, DMSRL, which is the maximum number of logical blocks in a single range for a Dataset Management command. This leads to an integer overflow when the ESXi unmap split code converts the number of blocks to the number of bytes, which in turn might cause a failure of either the installation or VMFS creation. You see a purple diagnostic screen with an error such as Exception 14 or corruption in dlmalloc. The issue affects ESXi 8.0 Update 2 and later.
This issue is resolved in this release.
PR 3396479: Standard image profiles for ESXi 8.0 Update 3 show last modified date as release date
The Release Date field of the standard image profile for ESXi 8.0 Update 3 shows the Last Modified Date value. The issue is only applicable to the image profiles used in Auto Deploy or ESXCLI. Base images used in vSphere Lifecycle Manager workflows display the release date correctly. This issue has no functional impact. The side effect is that if you search for profiles by release date, the profile does not appear under its actual release date.
This issue is resolved in this release.
Profile Name |
ESXi-8.0U3sb-24262298-standard |
Build |
For build information, see Patches Contained in This Release. |
Vendor |
VMware by Broadcom, Inc. |
Release Date |
September 17, 2024 |
Acceptance Level |
Partner Supported |
Affected Hardware |
N/A |
Affected Software |
N/A |
Affected VIBs |
|
PRs Fixed |
3425039, 3423080, 3408352, 3415908, 3380359, 3404366, 3421357, 3421359, 3390663, 3404362, 3396479, 3404367, 3398130, 3398132, 3316536, 3410159, 3412279, 3395162, 3404369, 3404657, 3390662, 3409328 |
Related CVE numbers |
N/A |
This patch updates the following issues:
PR 3392225: You see status Unknown for virtual machines after migration to an ESXi host with active host-local swap location
On rare occasions with a very specific sequence of conditions, when you migrate to or power on a virtual machine on an ESXi host with an active host-local swap location, the virtual machine might hang with status Unknown.
This issue is resolved in this release.
VMware Tools Bundling Changes in ESXi 8.0 Update 3b
The following VMware Tools ISO images are bundled with ESXi 8.0 Update 3b:
windows.iso: VMware Tools 12.4.5 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.
linux.iso: VMware Tools 10.3.26 ISO image for Linux OS with glibc 2.11 or later.
The following VMware Tools ISO images are available for download:
VMware Tools 11.0.6:
windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).
VMware Tools 10.0.12:
winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003.
linuxPreGLibc25.iso: supports Linux guest operating systems earlier than Red Hat Enterprise Linux (RHEL) 5, SUSE Linux Enterprise Server (SLES) 11, Ubuntu 7.04, and other distributions with glibc version earlier than 2.5.
solaris.iso: VMware Tools image 10.3.10 for Solaris.
darwin.iso: Supports Mac OS X versions 10.11 and later.
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
ESXi 8.0 Update 3b includes the following Intel microcode:
Code Name |
FMS |
Plat ID |
Servicing |
MCU Rev |
MCU Date |
Brand Names |
---|---|---|---|---|---|---|
Nehalem EP |
0x106a5 (06/1a/5) |
0x03 |
baseline |
0x1d |
5/11/2018 |
Intel Xeon 35xx Series; Intel Xeon 55xx Series |
Clarkdale |
0x20652 (06/25/2) |
0x12 |
baseline |
0x11 |
5/8/2018 |
Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series |
Arrandale |
0x20655 (06/25/5) |
0x92 |
baseline |
0x7 |
4/23/2018 |
Intel Core i7-620LE Processor |
Sandy Bridge DT |
0x206a7 (06/2a/7) |
0x12 |
baseline |
0x2f |
2/17/2019 |
Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series |
Westmere EP |
0x206c2 (06/2c/2) |
0x03 |
baseline |
0x1f |
5/8/2018 |
Intel Xeon 56xx Series; Intel Xeon 36xx Series |
Sandy Bridge EP |
0x206d6 (06/2d/6) |
0x6d |
baseline |
0x621 |
3/4/2020 |
Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series |
Sandy Bridge EP |
0x206d7 (06/2d/7) |
0x6d |
baseline |
0x71a |
3/24/2020 |
Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series |
Nehalem EX |
0x206e6 (06/2e/6) |
0x04 |
baseline |
0xd |
5/15/2018 |
Intel Xeon 65xx Series; Intel Xeon 75xx Series |
Westmere EX |
0x206f2 (06/2f/2) |
0x05 |
baseline |
0x3b |
5/16/2018 |
Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series |
Ivy Bridge DT |
0x306a9 (06/3a/9) |
0x12 |
baseline |
0x21 |
2/13/2019 |
Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C |
Haswell DT |
0x306c3 (06/3c/3) |
0x32 |
baseline |
0x28 |
11/12/2019 |
Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series |
Ivy Bridge EP |
0x306e4 (06/3e/4) |
0xed |
baseline |
0x42e |
3/14/2019 |
Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series |
Ivy Bridge EX |
0x306e7 (06/3e/7) |
0xed |
baseline |
0x715 |
3/14/2019 |
Intel Xeon E7-8800/4800/2800-v2 Series |
Haswell EP |
0x306f2 (06/3f/2) |
0x6f |
baseline |
0x49 |
8/11/2021 |
Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series |
Haswell EX |
0x306f4 (06/3f/4) |
0x80 |
baseline |
0x1a |
5/24/2021 |
Intel Xeon E7-8800/4800-v3 Series |
Broadwell H |
0x40671 (06/47/1) |
0x22 |
baseline |
0x22 |
11/12/2019 |
Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series |
Avoton |
0x406d8 (06/4d/8) |
0x01 |
baseline |
0x12d |
9/16/2019 |
Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series |
Broadwell EP/EX |
0x406f1 (06/4f/1) |
0xef |
baseline |
0xb000040 |
5/19/2021 |
Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series |
Skylake SP |
0x50654 (06/55/4) |
0xb7 |
baseline |
0x2007206 |
4/15/2024 |
Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series |
Cascade Lake B-0 |
0x50656 (06/55/6) |
0xbf |
baseline |
0x4003707 |
3/1/2024 |
Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200 |
Cascade Lake |
0x50657 (06/55/7) |
0xbf |
baseline |
0x5003707 |
3/1/2024 |
Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200 |
Cooper Lake |
0x5065b (06/55/b) |
0xbf |
baseline |
0x7002904 |
4/1/2024 |
Intel Xeon Platinum 8300 Series; Intel Xeon Gold 6300/5300 |
Broadwell DE |
0x50662 (06/56/2) |
0x10 |
baseline |
0x1c |
6/17/2019 |
Intel Xeon D-1500 Series |
Broadwell DE |
0x50663 (06/56/3) |
0x10 |
baseline |
0x700001c |
6/12/2021 |
Intel Xeon D-1500 Series |
Broadwell DE |
0x50664 (06/56/4) |
0x10 |
baseline |
0xf00001a |
6/12/2021 |
Intel Xeon D-1500 Series |
Broadwell NS |
0x50665 (06/56/5) |
0x10 |
baseline |
0xe000015 |
8/3/2023 |
Intel Xeon D-1600 Series |
Skylake H/S |
0x506e3 (06/5e/3) |
0x36 |
baseline |
0xf0 |
11/12/2021 |
Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series |
Denverton |
0x506f1 (06/5f/1) |
0x01 |
baseline |
0x3e |
10/5/2023 |
Intel Atom C3000 Series |
Ice Lake SP |
0x606a6 (06/6a/6) |
0x87 |
baseline |
0xd0003e7 |
4/1/2024 |
Intel Xeon Platinum 8300 Series; Intel Xeon Gold 6300/5300 Series; Intel Xeon Silver 4300 Series |
Ice Lake D |
0x606c1 (06/6c/1) |
0x10 |
baseline |
0x10002b0 |
4/3/2024 |
Intel Xeon D-2700 Series; Intel Xeon D-1700 Series |
Snow Ridge |
0x80665 (06/86/5) |
0x01 |
baseline |
0x4c000026 |
2/28/2024 |
Intel Atom P5000 Series |
Snow Ridge |
0x80667 (06/86/7) |
0x01 |
baseline |
0x4c000026 |
2/28/2024 |
Intel Atom P5000 Series |
Tiger Lake U |
0x806c1 (06/8c/1) |
0x80 |
baseline |
0xb8 |
2/15/2024 |
Intel Core i3/i5/i7-1100 Series |
Tiger Lake U Refresh |
0x806c2 (06/8c/2) |
0xc2 |
baseline |
0x38 |
2/15/2024 |
Intel Core i3/i5/i7-1100 Series |
Tiger Lake H |
0x806d1 (06/8d/1) |
0xc2 |
baseline |
0x52 |
2/15/2024 |
Intel Xeon W-11000E Series |
Sapphire Rapids SP |
0x806f8 (06/8f/8) |
0x87 |
baseline |
0x2b0005c0 |
2/5/2024 |
Intel Xeon Platinum 8400 Series; Intel Xeon Gold 6400/5400 Series; Intel Xeon Silver 4400 Series; Intel Xeon Bronze 3400 Series |
Kaby Lake H/S/X |
0x906e9 (06/9e/9) |
0x2a |
baseline |
0xf8 |
9/28/2023 |
Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series |
Coffee Lake |
0x906ea (06/9e/a) |
0x22 |
baseline |
0xf8 |
2/1/2024 |
Intel Xeon E-2100 Series; Intel Xeon E-2200 Series (4 or 6 core) |
Coffee Lake |
0x906eb (06/9e/b) |
0x02 |
baseline |
0xf6 |
2/1/2024 |
Intel Xeon E-2100 Series |
Coffee Lake |
0x906ec (06/9e/c) |
0x22 |
baseline |
0xf8 |
2/1/2024 |
Intel Xeon E-2100 Series |
Coffee Lake Refresh |
0x906ed (06/9e/d) |
0x22 |
baseline |
0x100 |
2/5/2024 |
Intel Xeon E-2200 Series (8 core) |
Rocket Lake S |
0xa0671 (06/a7/1) |
0x02 |
baseline |
0x62 |
3/7/2024 |
Intel Xeon E-2300 Series |
Raptor Lake E/HX/S |
0xb0671 (06/b7/1) |
0x32 |
baseline |
0x125 |
4/16/2024 |
Intel Xeon E-2400 Series |
Emerald Rapids SP |
0xc06f2 (06/cf/2) |
0x87 |
baseline |
0x21000200 |
11/20/2023 |
Intel Xeon 8500 Series; Intel Xeon Gold 6500/5500 Series; Intel Xeon Silver 4500 Series; Intel Xeon Bronze 3500 Series |
ESXi 8.0 Update 3b contains the following security updates:
OpenSSH is updated to version 9.8p1.
Profile Name |
ESXi-8.0U3sb-24262298-no-tools |
Build |
For build information, see Patches Contained in This Release. |
Vendor |
VMware by Broadcom, Inc. |
Release Date |
September 17, 2024 |
Acceptance Level |
Partner Supported |
Affected Hardware |
N/A |
Affected Software |
N/A |
Affected VIBs |
|
PRs Fixed |
3425039, 3423080, 3408352, 3415908, 3380359, 3404366, 3421357, 3421359, 3390663, 3404362, 3396479, 3404367, 3398130, 3398132, 3316536, 3410159, 3412279, 3395162, 3404369, 3404657, 3390662, 3409328 |
Related CVE numbers |
N/A |
This patch updates the following issues:
VMware Tools Bundling Changes in ESXi 8.0 Update 3b
The following VMware Tools ISO images are bundled with ESXi 8.0 Update 3b:
windows.iso: VMware Tools 12.4.5 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.
linux.iso: VMware Tools 10.3.26 ISO image for Linux OS with glibc 2.11 or later.
The following VMware Tools ISO images are available for download:
VMware Tools 11.0.6:
windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).
VMware Tools 10.0.12:
winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003.
linuxPreGLibc25.iso: supports Linux guest operating systems earlier than Red Hat Enterprise Linux (RHEL) 5, SUSE Linux Enterprise Server (SLES) 11, Ubuntu 7.04, and other distributions with glibc version earlier than 2.5.
solaris.iso: VMware Tools image 10.3.10 for Solaris.
darwin.iso: Supports Mac OS X versions 10.11 and later.
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
ESXi 8.0 Update 3b includes the following Intel microcode:
Code Name |
FMS |
Plat ID |
Servicing |
MCU Rev |
MCU Date |
Brand Names |
---|---|---|---|---|---|---|
Nehalem EP |
0x106a5 (06/1a/5) |
0x03 |
baseline |
0x1d |
5/11/2018 |
Intel Xeon 35xx Series; Intel Xeon 55xx Series |
Clarkdale |
0x20652 (06/25/2) |
0x12 |
baseline |
0x11 |
5/8/2018 |
Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series |
Arrandale |
0x20655 (06/25/5) |
0x92 |
baseline |
0x7 |
4/23/2018 |
Intel Core i7-620LE Processor |
Sandy Bridge DT |
0x206a7 (06/2a/7) |
0x12 |
baseline |
0x2f |
2/17/2019 |
Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series |
Westmere EP |
0x206c2 (06/2c/2) |
0x03 |
baseline |
0x1f |
5/8/2018 |
Intel Xeon 56xx Series; Intel Xeon 36xx Series |
Sandy Bridge EP |
0x206d6 (06/2d/6) |
0x6d |
baseline |
0x621 |
3/4/2020 |
Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series |
Sandy Bridge EP |
0x206d7 (06/2d/7) |
0x6d |
baseline |
0x71a |
3/24/2020 |
Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series |
Nehalem EX |
0x206e6 (06/2e/6) |
0x04 |
baseline |
0xd |
5/15/2018 |
Intel Xeon 65xx Series; Intel Xeon 75xx Series |
Westmere EX |
0x206f2 (06/2f/2) |
0x05 |
baseline |
0x3b |
5/16/2018 |
Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series |
Ivy Bridge DT |
0x306a9 (06/3a/9) |
0x12 |
baseline |
0x21 |
2/13/2019 |
Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C |
Haswell DT |
0x306c3 (06/3c/3) |
0x32 |
baseline |
0x28 |
11/12/2019 |
Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series |
Ivy Bridge EP |
0x306e4 (06/3e/4) |
0xed |
baseline |
0x42e |
3/14/2019 |
Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series |
Ivy Bridge EX |
0x306e7 (06/3e/7) |
0xed |
baseline |
0x715 |
3/14/2019 |
Intel Xeon E7-8800/4800/2800-v2 Series |
Haswell EP |
0x306f2 (06/3f/2) |
0x6f |
baseline |
0x49 |
8/11/2021 |
Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series |
Haswell EX |
0x306f4 (06/3f/4) |
0x80 |
baseline |
0x1a |
5/24/2021 |
Intel Xeon E7-8800/4800-v3 Series |
Broadwell H |
0x40671 (06/47/1) |
0x22 |
baseline |
0x22 |
11/12/2019 |
Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series |
Avoton |
0x406d8 (06/4d/8) |
0x01 |
baseline |
0x12d |
9/16/2019 |
Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series |
Broadwell EP/EX |
0x406f1 (06/4f/1) |
0xef |
baseline |
0xb000040 |
5/19/2021 |
Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series |
Skylake SP |
0x50654 (06/55/4) |
0xb7 |
baseline |
0x2007206 |
4/15/2024 |
Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series |
Cascade Lake B-0 |
0x50656 (06/55/6) |
0xbf |
baseline |
0x4003707 |
3/1/2024 |
Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200 |
Cascade Lake |
0x50657 (06/55/7) |
0xbf |
baseline |
0x5003707 |
3/1/2024 |
Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200 |
Cooper Lake |
0x5065b (06/55/b) |
0xbf |
baseline |
0x7002904 |
4/1/2024 |
Intel Xeon Platinum 8300 Series; Intel Xeon Gold 6300/5300 |
Broadwell DE |
0x50662 (06/56/2) |
0x10 |
baseline |
0x1c |
6/17/2019 |
Intel Xeon D-1500 Series |
Broadwell DE |
0x50663 (06/56/3) |
0x10 |
baseline |
0x700001c |
6/12/2021 |
Intel Xeon D-1500 Series |
Broadwell DE |
0x50664 (06/56/4) |
0x10 |
baseline |
0xf00001a |
6/12/2021 |
Intel Xeon D-1500 Series |
Broadwell NS |
0x50665 (06/56/5) |
0x10 |
baseline |
0xe000015 |
8/3/2023 |
Intel Xeon D-1600 Series |
Skylake H/S |
0x506e3 (06/5e/3) |
0x36 |
baseline |
0xf0 |
11/12/2021 |
Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series |
Denverton |
0x506f1 (06/5f/1) |
0x01 |
baseline |
0x3e |
10/5/2023 |
Intel Atom C3000 Series |
Ice Lake SP |
0x606a6 (06/6a/6) |
0x87 |
baseline |
0xd0003e7 |
4/1/2024 |
Intel Xeon Platinum 8300 Series; Intel Xeon Gold 6300/5300 Series; Intel Xeon Silver 4300 Series |
Ice Lake D |
0x606c1 (06/6c/1) |
0x10 |
baseline |
0x10002b0 |
4/3/2024 |
Intel Xeon D-2700 Series; Intel Xeon D-1700 Series |
Snow Ridge |
0x80665 (06/86/5) |
0x01 |
baseline |
0x4c000026 |
2/28/2024 |
Intel Atom P5000 Series |
Snow Ridge |
0x80667 (06/86/7) |
0x01 |
baseline |
0x4c000026 |
2/28/2024 |
Intel Atom P5000 Series |
Tiger Lake U |
0x806c1 (06/8c/1) |
0x80 |
baseline |
0xb8 |
2/15/2024 |
Intel Core i3/i5/i7-1100 Series |
Tiger Lake U Refresh |
0x806c2 (06/8c/2) |
0xc2 |
baseline |
0x38 |
2/15/2024 |
Intel Core i3/i5/i7-1100 Series |
Tiger Lake H |
0x806d1 (06/8d/1) |
0xc2 |
baseline |
0x52 |
2/15/2024 |
Intel Xeon W-11000E Series |
Sapphire Rapids SP |
0x806f8 (06/8f/8) |
0x87 |
baseline |
0x2b0005c0 |
2/5/2024 |
Intel Xeon Platinum 8400 Series; Intel Xeon Gold 6400/5400 Series; Intel Xeon Silver 4400 Series; Intel Xeon Bronze 3400 Series |
Kaby Lake H/S/X |
0x906e9 (06/9e/9) |
0x2a |
baseline |
0xf8 |
9/28/2023 |
Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series |
Coffee Lake |
0x906ea (06/9e/a) |
0x22 |
baseline |
0xf8 |
2/1/2024 |
Intel Xeon E-2100 Series; Intel Xeon E-2200 Series (4 or 6 core) |
Coffee Lake |
0x906eb (06/9e/b) |
0x02 |
baseline |
0xf6 |
2/1/2024 |
Intel Xeon E-2100 Series |
Coffee Lake |
0x906ec (06/9e/c) |
0x22 |
baseline |
0xf8 |
2/1/2024 |
Intel Xeon E-2100 Series |
Coffee Lake Refresh |
0x906ed (06/9e/d) |
0x22 |
baseline |
0x100 |
2/5/2024 |
Intel Xeon E-2200 Series (8 core) |
Rocket Lake S |
0xa0671 (06/a7/1) |
0x02 |
baseline |
0x62 |
3/7/2024 |
Intel Xeon E-2300 Series |
Raptor Lake E/HX/S |
0xb0671 (06/b7/1) |
0x32 |
baseline |
0x125 |
4/16/2024 |
Intel Xeon E-2400 Series |
Emerald Rapids SP |
0xc06f2 (06/cf/2) |
0x87 |
baseline |
0x21000200 |
11/20/2023 |
Intel Xeon 8500 Series; Intel Xeon Gold 6500/5500 Series; Intel Xeon Silver 4500 Series; Intel Xeon Bronze 3500 Series |
ESXi 8.0 Update 3b contains the following security updates:
OpenSSH is updated to version 9.8p1.
Name |
ESXi |
Version |
ESXi-8.0U3b-24280767 |
Release Date |
September 17, 2024 |
Category |
Bugfix |
Affected Components |
|
PRs Fixed |
3426946, 3421084, 3421179, 3403683, 3388844, 3424051, 3422735, 3420488, 3421971, 3422005, 3417326, 3420850, 3420421, 3420586, 3410311, 3420907, 3419241, 3409528, 3404817, 3416221, 3421665, 3415365, 3415102, 3417329, 3421434, 3419074, 3417667, 3408802, 3417224, 3420702, 3385757, 3412010, 3418878, 3408477, 3406546, 3406968, 3406999, 3398549, 3402823, 3408281, 3409108, 3406627, 3407251, 3392225, 3412138, 3406037, 3406875, 3412536, 3407952, 3410114, 3386751, 3408145, 3403240, 3389766, 3283598, 3408302, 3403639, 3407532, 3394043, 3387721, 3408300, 3396479, 3401629, 3401392, 3409085, 3408640, 3405912, 3403680, 3405106, 3407204, 3407022, 3409176, 3408739, 3408740, 3390434, 3406053, 3400702, 3392173, 3403496, 3402955, 3433092, 3421179, 3414588 |
Related CVE numbers |
N/A |
This patch resolves the issues listed in ESXi_8.0.3-0.35.24280767.
Name |
ESXi |
Version |
ESXi-8.0U3sb-24262298 |
Release Date |
September 17, 2024 |
Category |
Security |
Affected Components |
|
PRs Fixed |
3425039, 3423080, 3408352, 3415908, 3380359, 3404366, 3421357, 3421359, 3390663, 3404362, 3396479, 3404367, 3398130, 3398132, 3316536, 3410159, 3412279, 3395162, 3404369, 3404657, 3390662, 3409328 |
Related CVE numbers |
N/A |
This patch resolves the issues listed in ESXi_8.0.3-0.30.24262298.
If you use an offline bundle from a vendor, remediation of ESXi hosts with a Non-Critical Host Patches baseline might fail
If you import an offline bundle from a vendor depot, remediation of ESXi hosts with a Non-Critical Host Patches baseline might fail with a profile validation error.
Workaround: Instead of using a Non-Critical Host Patches baseline, create a new rollup bulletin that contains the necessary updates.
If you update your vCenter to 8.0 Update 1, but ESXi hosts remain on an earlier version, vSphere Virtual Volumes datastores on such hosts might become inaccessible
Self-signed VASA provider certificates are no longer supported in vSphere 8.0 and the configuration option Config.HostAgent.ssl.keyStore.allowSelfSigned is set to false by default. If you update a vCenter instance to 8.0 Update 1 that introduces vSphere APIs for Storage Awareness (VASA) version 5.0, and ESXi hosts remain on an earlier vSphere and VASA version, hosts that use self-signed certificates might not be able to access vSphere Virtual Volumes datastores or cannot refresh the CA certificate.
Workaround: Update hosts to ESXi 8.0 Update 1. If you do not update to ESXi 8.0 Update 1, see VMware knowledge base article 91387.
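To check whether a host still permits self-signed VASA provider certificates, a hedged sketch using the host advanced option named above; the ESXCLI option path /Config/HostAgent/ssl/keyStore/allowSelfSigned is assumed to correspond to the Config.HostAgent.ssl.keyStore.allowSelfSigned setting, so verify it on your build before relying on it:
# Display the current value of the allowSelfSigned advanced option
esxcli system settings advanced list -o /Config/HostAgent/ssl/keyStore/allowSelfSigned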
You cannot update to ESXi 8.0 Update 2b by using esxcli software vib commands
Starting with ESXi 8.0 Update 2, upgrade or update of ESXi by using the commands esxcli software vib update or esxcli software vib install is not supported. If you use esxcli software vib update or esxcli software vib install to update your ESXi 8.0 Update 2 hosts to 8.0 Update 2b or later, the task fails. In the logs, you see an error such as:
ESXi version change is not allowed using esxcli software vib commands.
Please use a supported method to upgrade ESXi.
vib = VMware_bootbank_esx-base_8.0.2-0.20.22481015 Please refer to the log file for more details.
Workaround: If you are upgrading or updating ESXi from a depot zip bundle downloaded from the VMware website, VMware supports only the update command esxcli software profile update --depot=<depot_location> --profile=<profile_name>. For more information, see Upgrade or Update a Host with Image Profiles.
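For example, a hedged sketch with a hypothetical depot path; list the profiles in your downloaded bundle first and substitute the actual profile name:
# List the image profiles contained in the downloaded offline bundle
esxcli software sources profile list --depot=/vmfs/volumes/datastore1/esxi-depot.zip
# Apply the chosen profile from that depot
esxcli software profile update --depot=/vmfs/volumes/datastore1/esxi-depot.zip --profile=<profile_name>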
The Cancel option in an interactive ESXi installation might not work as expected
Due to an update of the Python library, the Cancel option, invoked by pressing the ESC key in an interactive ESXi installation, might not work as expected. The issue occurs only in interactive installations, not in scripted installations or upgrades.
Workaround: Press the ESC key twice and then press any other key to activate the Cancel option.
If you apply a host profile using a software FCoE configuration to an ESXi 8.0 host, the operation fails with a validation error
Starting from vSphere 7.0, software FCoE is deprecated, and in vSphere 8.0 software FCoE profiles are not supported. If you try to apply a host profile from an earlier version to an ESXi 8.0 host, for example to edit the host customization, the operation fails. In the vSphere Client, you see an error such as Host Customizations validation error.
Workaround: Disable the Software FCoE Configuration subprofile in the host profile.
You cannot use ESXi hosts of version 8.0 as a reference host for existing host profiles of earlier ESXi versions
Validation of existing host profiles for ESXi versions 7.x, 6.7.x and 6.5.x fails when only an 8.0 reference host is available in the inventory.
Workaround: Make sure you have a reference host of the respective version in the inventory. For example, use an ESXi 7.0 Update 2 reference host to update or edit an ESXi 7.0 Update 2 host profile.
VMNICs might be down after an upgrade to ESXi 8.0
If the peer physical switch of a VMNIC does not support Media Auto Detect, or Media Auto Detect is disabled, and the VMNIC link is set down and then up, the link remains down after upgrade to or installation of ESXi 8.0.
Workaround: Use either of these two options:
Enable the option media-auto-detect in the BIOS settings by navigating to System Setup Main Menu, usually by pressing F2 or opening a virtual console, and then Device Settings > <specific broadcom NIC> > Device Configuration Menu > Media Auto Detect. Reboot the host.
Alternatively, use an ESXCLI command similar to: esxcli network nic set -S <your speed> -D full -n <your nic> (see the sketch below). With this option, you also set a fixed speed on the link, and it does not require a reboot.
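For example, a sketch that pins a hypothetical uplink vmnic1 to 10 Gbps full duplex; adjust the NIC name and speed to your hardware:
# Set a fixed link speed (in Mbps) and full duplex on the uplink; no reboot required
esxcli network nic set -S 10000 -D full -n vmnic1
# Verify the resulting link state
esxcli network nic get -n vmnic1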
After upgrade to ESXi 8.0, you might lose some nmlx5_core driver module settings due to obsolete parameters
Some module parameters for the nmlx5_core driver, such as device_rss, drss, and rss, are deprecated in ESXi 8.0, and any custom values different from the default values are not kept after an upgrade to ESXi 8.0.
Workaround: Replace the values of the device_rss, drss, and rss parameters as follows (see the sketch after this list):
device_rss: Use the DRSS parameter.
drss: Use the DRSS parameter.
rss: Use the RSS parameter.
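A hedged sketch of the corresponding ESXCLI commands; the DRSS and RSS parameter names come from the list above, while the values shown are placeholders that you should replace with your previous custom settings:
# Check the current nmlx5_core module parameters
esxcli system module parameters list -m nmlx5_core
# Set the replacement parameters (example values only)
esxcli system module parameters set -m nmlx5_core -p "DRSS=4 RSS=4"
# Reboot the host for the module parameter change to take effect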
Second stage of vCenter Server restore procedure freezes at 90%
When you use the vCenter Server GUI installer or the vCenter Server Appliance Management Interface (VAMI) to restore a vCenter from a file-based backup, the restore workflow might freeze at 90% with an error 401 Unable to authenticate user, even though the task completes successfully in the backend. The issue occurs if the deployed machine has a different time than the NTP server, which requires a time sync. As a result of the time sync, clock skew might invalidate the running session of the GUI or VAMI.
Workaround: If you use the GUI installer, you can get the restore status by using the restore.job.get command from the appliancesh shell. If you use VAMI, refresh your browser.
RDMA over Converged Ethernet (RoCE) traffic might fail in Enhanced Networking Stack (ENS) and VLAN environment, and a Broadcom RDMA network interface controller (RNIC)
The VMware solution for high bandwidth, ENS, does not support MAC VLAN filters. However, an RDMA application that runs on a Broadcom RNIC in an ENS + VLAN environment requires a MAC VLAN filter. As a result, you might see some RoCE traffic disconnected. The issue is likely to occur in an NVMe over RDMA + ENS + VLAN environment, or in an ENS + VLAN + RDMA app environment, when an ESXi host reboots or an uplink goes up and down.
Workaround: None
The irdman driver might fail when you use Unreliable Datagram (UD) transport mode ULP for RDMA over Converged Ethernet (RoCE) traffic
If for some reason you choose to use the UD transport mode upper layer protocol (ULP) for RoCE traffic, the irdman driver might fail. This issue is unlikely to occur, as the irdman driver only supports iSCSI Extensions for RDMA (iSER), which uses ULPs in Reliable Connection (RC) mode.
Workaround: Use ULPs with RC transport mode.
You might see compliance errors during upgrade to ESXi 8.0 Update 2b on servers with active Trusted Platform Module (TPM) encryption and vSphere Quick Boot
If you use the vSphere Lifecycle Manager to upgrade your clusters to ESXi 8.0 Update 2b, in the vSphere Client you might see compliance errors for hosts with active TPM encryption and vSphere Quick Boot.
Workaround: Ignore the compliance errors and proceed with the upgrade.
If IPv6 is deactivated, you might see 'Jumpstart plugin restore-networking activation failed' error during ESXi host boot
In the ESXi console, during the boot-up sequence of a host, you might see the error banner Jumpstart plugin restore-networking activation failed. The banner displays only when IPv6 is deactivated and does not indicate an actual error.
Workaround: Activate IPv6 on the ESXi host or ignore the message.
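If you choose to activate IPv6, a hedged sketch using ESXCLI; a host reboot is required for the change to take effect:
# Enable IPv6 on the default TCP/IP stack
esxcli network ip set --ipv6-enabled=true
# Reboot the host to apply the change
reboot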
Reset or restore of the ESXi system configuration in a vSphere system with DPUs might cause invalid state of the DPUs
If you reset or restore the ESXi system configuration in a vSphere system with DPUs, for example, by selecting Reset System Configuration in the direct console, the operation might cause an invalid state of the DPUs. In the DCUI, you might see errors such as Failed to reset system configuration. Note that this operation cannot be performed when a managed DPU is present. A backend call to the -f force reboot option is not supported for ESXi installations with a DPU. Although ESXi 8.0 supports the -f force reboot option, if you use reboot -f on an ESXi configuration with a DPU, the forceful reboot might cause an invalid state.
Workaround: Reset System Configuration in the direct console interface is temporarily disabled. Avoid resetting the ESXi system configuration in a vSphere system with DPUs.
In a vCenter Server system with DPUs, if IPv6 is disabled, you cannot manage DPUs
Although the vSphere Client allows the operation, if you disable IPv6 on an ESXi host with DPUs, you cannot use the DPUs, because the internal communication between the host and the devices depends on IPv6. The issue affects only ESXi hosts with DPUs.
Workaround: Make sure IPv6 is enabled on ESXi hosts with DPUs.
TCP connections intermittently drop on an ESXi host with Enhanced Networking Stack
If the sender VM is on an ESXi host with Enhanced Networking Stack, TCP checksum interoperability issues when the value of the TCP checksum in a packet is calculated as 0xFFFF might cause the end system to drop or delay the TCP packet.
Workaround: Disable TCP checksum offloading on the sender VM on ESXi hosts with Enhanced Networking Stack. In Linux, you can use the command sudo ethtool -K <interface> tx off.
You might see 10 min delay in rebooting an ESXi host on HPE server with pre-installed Pensando DPU
In rare cases, if the DPU fails, HPE servers with a pre-installed Pensando DPU might take more than 10 minutes to reboot. As a result, ESXi hosts might fail with a purple diagnostic screen, and the default wait time is 10 minutes.
Workaround: None.
If you have a USB interface enabled in a remote management application that you use to install ESXi 8.0, you see an additional standard switch vSwitchBMC with uplink vusb0
Starting with vSphere 8.0, in both Integrated Dell Remote Access Controller (iDRAC) and HP Integrated Lights Out (ILO), when you have a USB interface enabled, vUSB or vNIC respectively, an additional standard switch vSwitchBMC with uplink vusb0 gets created on the ESXi host. This is expected, in view of the introduction of data processing units (DPUs) on some servers, but it might cause the VMware Cloud Foundation Bring-Up process to fail.
Workaround: Before vSphere 8.0 installation, disable the USB interface in the remote management application that you use by following vendor documentation.
After vSphere 8.0 installation, use the ESXCLI command esxcfg-advcfg -s 0 /Net/BMCNetworkEnable to prevent the creation of a virtual switch vSwitchBMC and associated portgroups on the next reboot of the host.
See this script as an example:
~# esxcfg-advcfg -s 0 /Net/BMCNetworkEnable
The value of BMCNetworkEnable is 0 and the service is disabled.
~# reboot
After the host reboots, no virtual switch, port group, or VMkernel NIC related to the remote management application network is created on the host.
If hardware offload mode is disabled on an NVIDIA BlueField DPU, virtual machines with a configured SR-IOV virtual function cannot power on
NVIDIA BlueField DPUs must have hardware offload mode enabled to allow virtual machines with a configured SR-IOV virtual function to power on and operate.
Workaround: Always keep the default hardware offload mode enabled for NVIDIA BlueField DPUs when you have VMs with a configured SR-IOV virtual function connected to a virtual switch.
In the Virtual Appliance Management Interface (VAMI), you see a warning message during the pre-upgrade stage
With the move of vSphere plug-ins to a remote plug-in architecture, vSphere 8.0 deprecates support for local plug-ins. If your vSphere 8.0 environment has local plug-ins, some breaking changes for such plug-ins might cause the pre-upgrade check by using VAMI to fail.
In the Pre-Update Check Results screen, you see an error such as:
Warning message: The compatibility of plug-in package(s) %s with the new vCenter Server version cannot be validated. They may not function properly after vCenter Server upgrade.
Resolution: Please contact the plug-in vendor and make sure the package is compatible with the new vCenter Server version.
Workaround: Refer to the VMware Compatibility Guide and VMware Product Interoperability Matrix or contact the plug-in vendors for recommendations to make sure local plug-ins in your environment are compatible with vCenter Server 8.0 before you continue with the upgrade. For more information, see the blog Deprecating the Local Plugins :- The Next Step in vSphere Client Extensibility Evolution and VMware knowledge base article 87880.
You cannot set the Maximum Transmission Unit (MTU) on a VMware vSphere Distributed Switch to a value larger than 9174 on a Pensando DPU
If you have the vSphere Distributed Services Engine feature with a Pensando DPU enabled on your ESXi 8.0 system, you cannot set the Maximum Transmission Unit (MTU) on a vSphere Distributed Switch to a value larger than 9174.
Workaround: None.
Connection-intensive RDMA workload might lead to loss of traffic on Intel Ethernet E810 Series devices with inbox driver irdman-1.4.0.1
The inbox irdman driver version 1.4.0.1 does not officially support vSAN over RDMA. In tests running 10,000 RDMA connections, which is usual for vSAN environments, Intel Ethernet E810 Series devices with NVM version 4.2 and irdman driver version 1.4.0.1 might occasionally lose all traffic.
Workaround: None.
Transfer speed in IPv6 environments with active TCP segmentation offload is slow
In environments with active IPv6 TCP segmentation offload (TSO), transfer speed for Windows virtual machines with an e1000e virtual NIC might be slow. The issue does not affect IPv4 environments.
Workaround: Deactivate TSO or use a vmxnet3 adapter instead of e1000e.
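If you choose to deactivate TSO on the host rather than switch the VM to a vmxnet3 adapter, a hedged sketch using host advanced options; the /Net/UseHwTSO6 option is assumed to be the IPv6 hardware TSO toggle on your build, so verify it before changing it:
# Check the current IPv6 hardware TSO setting
esxcli system settings advanced list -o /Net/UseHwTSO6
# Deactivate hardware TSO for IPv6 traffic
esxcli system settings advanced set -o /Net/UseHwTSO6 -i 0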
Capture of network packets by using the PacketCapture tool on ESXi does not work
Due to tightening of the rhttpproxy security policy, you can no longer use the PacketCapture tool as described in Collecting network packets using the lightweight PacketCapture on ESXi.
Workaround: Use the pktcap-uw tool. For more information, see Capture and Trace Network Packets by Using the pktcap-uw Utility.
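For example, a minimal pktcap-uw sketch that captures traffic on a hypothetical uplink vmnic0 into a .pcap file for later analysis; press Ctrl+C to stop the capture:
# Capture packets on uplink vmnic0 and write them to a pcap file
pktcap-uw --uplink vmnic0 -o /tmp/vmnic0.pcap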
You see link flapping on NICs that use the ntg3 driver of version 4.1.3 and later
When two NICs that use the ntg3 driver of version 4.1.3 or later are connected directly, not to a physical switch port, link flapping might occur. The issue does not occur on ntg3 drivers of versions earlier than 4.1.3 or on the tg3 driver. This issue is not related to the occasional Energy Efficient Ethernet (EEE) link flapping on such NICs. The fix for the EEE issue is to use an ntg3 driver of version 4.1.7 or later, or to disable EEE on physical switch ports.
Workaround: Upgrade the ntg3 driver to version 4.1.8 and set the new module parameter noPhyStateSet to 1, for example as shown in the sketch below. The noPhyStateSet parameter defaults to 0 and is not required in most environments, unless they face this issue.
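A hedged sketch of setting the parameter named above with ESXCLI; the parameter name and value come from the workaround, and the change takes effect after a reboot:
# Set the noPhyStateSet parameter on the ntg3 driver
esxcli system module parameters set -m ntg3 -p "noPhyStateSet=1"
# Confirm the configured parameter value
esxcli system module parameters list -m ntg3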
When you migrate a VM from an ESXi host with a DPU device operating in SmartNIC (ECPF) Mode to an ESXi host with a DPU device operating in traditional NIC Mode, overlay traffic might drop
When you use vSphere vMotion to migrate a VM attached to an overlay-backed segment from an ESXi host with a vSphere Distributed Switch operating in offloading mode (where traffic forwarding logic is offloaded to the DPU) to an ESXi host with a VDS operating in a non-offloading mode (where DPUs are used as a traditional NIC), the overlay traffic might drop after the migration.
Workaround: Deactivate and activate the virtual NIC on the destination ESXi host.
You cannot use Mellanox ConnectX-5, ConnectX-6 cards Model 1 Level 2 and Model 2 for Enhanced Network Stack (ENS) mode in vSphere 8.0
Due to hardware limitations, ConnectX-5 and ConnectX-6 adapter cards do not support Model 1 Level 2 and Model 2 for Enhanced Network Stack (ENS) mode in vSphere 8.0.
Workaround: Use Mellanox ConnectX-6 Lx and ConnectX-6 Dx or later cards that support ENS Model 1 Level 2, and Model 2A.
Pensando DPUs do not support Link Layer Discovery Protocol (LLDP) on physical switch ports of ESXi hosts
When you enable LLDP on an ESXi host with a DPU, the host cannot receive LLDP packets.
Workaround: None.
VASA API version does not automatically refresh after upgrade to vCenter Server 8.0
vCenter Server 8.0 supports VASA API version 4.0. However, after you upgrade your vCenter Server system to version 8.0, the VASA API version might not automatically change to 4.0. You see the issue in two cases:
If a VASA provider that supports VASA API version 4.0 is registered with a previous version of VMware vCenter, the VASA API version remains unchanged after you upgrade to VMware vCenter 8.0. For example, if you upgrade a VMware vCenter system of version 7.x with a registered VASA provider that supports both VASA API versions 3.5 and 4.0, the VASA API version does not automatically change to 4.0, even though the VASA provider supports VASA API version 4.0. After the upgrade, when you navigate to vCenter Server > Configure > Storage Providers and expand the General tab of the registered VASA provider, you still see VASA API version 3.5.
If you register a VASA provider that supports VASA API version 3.5 with a VMware vCenter 8.0 system and upgrade the VASA API version to 4.0, even after the upgrade, you still see VASA API version 3.5.
Workaround: Unregister and re-register the VASA provider on the VMware vCenter 8.0 system.
vSphere vMotion operations of virtual machines residing on Pure-backed vSphere Virtual Volumes storage might time out
vSphere vMotion operations for VMs residing on vSphere Virtual Volumes datastores depend on the vSphere API for Storage Awareness (VASA) provider and the timing of VASA operations to complete. In rare cases, and under specific conditions when the VASA provider is under heavy load, response time from a Pure VASA provider might cause ESXi to exceed the timeout limit of 120 sec for each phase of vSphere vMotion tasks. In environments with multiple stretched storage containers you might see further delays in the Pure VASA provider response. As a result, running vSphere vMotion tasks time out and cannot complete.
Workaround: Reduce parallel workflows, especially on Pure storage on vSphere Virtual Volumes datastores exposed from the same VASA provider, and retry the vSphere vMotion task.
You cannot create snapshots of virtual machines due to an error in the Content Based Read Cache (CBRC) that a digest operation has failed
A rare race condition when assigning a content ID during the update of the CBRC digest file might cause a discrepancy between the content ID in the data disk and the digest disk. As a result, you cannot create virtual machine snapshots. You see an error such as An error occurred while saving the snapshot: A digest operation has failed in the backtrace. The snapshot creation task completes upon retry.
Workaround: Retry the snapshot creation task.
vSphere Storage vMotion operations might fail in a vSAN environment due to an unauthenticated session of the Network File Copy (NFC) manager
Migrations to a vSAN datastore by using vSphere Storage vMotion of virtual machines that have at least one snapshot and more than one virtual disk with different storage policy might fail. The issue occurs due to an unauthenticated session of the NFC manager because the Simple Object Access Protocol (SOAP) body exceeds the allowed size.
Workaround: First migrate the VM home namespace and just one of the virtual disks. After the operation completes, perform a disk-only migration of the remaining two disks.
In a vSphere Virtual Volumes stretched storage cluster environment, some VMs might fail to power on after recovering from a cluster-wide APD
In high scale Virtual Volumes stretched storage cluster environments, after recovering from a cluster-wide APD, due to the high load during the recovery some VMs might fail to power on even though the datastores and protocol endpoints are online and accessible.
Workaround: Migrate the affected VMs to a different ESXi host and power on the VMs.
You see "Object or item referred not found" error for tasks on a First Class Disk (FCD)
Due to a rare storage issue, during the creation of a snapshot of an attached FCD, the disk might be deleted from the Managed Virtual Disk Catalog. If you do not reconcile the Managed Virtual Disk Catalog, all subsequent operations on such an FCD fail with the Object or item referred not found error.
Workaround: See Reconciling Discrepancies in the Managed Virtual Disk Catalog.
If you load the vSphere virtual infrastructure to more than 90%, ESXi hosts might intermittently disconnect from vCenter Server
On rare occasions, if the vSphere virtual infrastructure continuously uses more than 90% of its hardware capacity, some ESXi hosts might intermittently disconnect from the vCenter Server. The connection typically restores within a few seconds.
Workaround: If the connection to vCenter Server does not restore within a few seconds, reconnect the ESXi hosts manually by using the vSphere Client.
ESXi hosts might become unresponsive, and you see a vpxa dump file due to a rare condition of insufficient file descriptors for the request queue on vpxa
In rare cases, when requests to the vpxa service take a long time, for example while waiting for access to a slow datastore, the request queue on vpxa might exceed the limit of file descriptors. As a result, ESXi hosts might briefly become unresponsive, and you see a vpxa-zdump.00* file in the /var/core directory. The vpxa logs contain the line Too many open files.
Workaround: None. The vpxa service automatically restarts and corrects the issue.
If you use custom update repository with untrusted certificates, vCenter Server upgrade or update by using vCenter Lifecycle Manager workflows to vSphere 8.0 might fail
If you use a custom update repository with self-signed certificates that the VMware Certificate Authority (VMCA) does not trust, vCenter Lifecycle Manager fails to download files from such a repository. As a result, vCenter Server upgrade or update operations by using vCenter Lifecycle Manager workflows fail with the error Failed to load the repository manifest data for the configured upgrade.
Workaround: Use CLI, the GUI installer, or the Virtual Appliance Management Interface (VAMI) to perform the upgrade. For more information, see VMware knowledge base article 89493.
You see error messages when you try to stage vSphere Lifecycle Manager Images on ESXi hosts of versions earlier than 8.0
ESXi 8.0 introduces the option to explicitly stage desired state images, which is the process of downloading depot components from the vSphere Lifecycle Manager depot to the ESXi hosts without applying the software and firmware updates immediately. However, staging of images is only supported on ESXi 8.0 or later hosts. Attempting to stage a vSphere Lifecycle Manager image on ESXi hosts of a version earlier than 8.0 results in messages that the staging of such hosts fails, and the hosts are skipped. This is expected behavior and does not indicate any failed functionality, as all ESXi 8.0 or later hosts are staged with the specified desired image.
Workaround: None. After you confirm that the affected ESXi hosts are of version earlier than 8.0, ignore the errors.
A remediation task by using vSphere Lifecycle Manager might intermittently fail on ESXi hosts with DPUs
When you start a vSphere Lifecycle Manager remediation on an ESXi host with DPUs, the host upgrades and reboots as expected, but after the reboot, before completing the remediation task, you might see an error such as:
A general system error occurred: After host … remediation completed, compliance check reported host as 'non-compliant'. The image on the host does not match the image set for the cluster. Retry the cluster remediation operation.
This is a rare issue, caused by an intermittent timeout of the post-remediation scan on the DPU.
Workaround: Reboot the ESXi host and re-run the vSphere Lifecycle Manager compliance check operation, which includes the post-remediation scan.
VMware Host Client might display incorrect descriptions for severity event states
When you look in the VMware Host Client to see the descriptions of the severity event states of an ESXi host, they might differ from the descriptions you see by using Intelligent Platform Management Interface (IPMI) or Lenovo XClarity Controller (XCC). For example, in the VMware Host Client, the description of the severity event state for the PSU Sensors might be Transition to Non-critical from OK, while in the XCC and IPMI, the description is Transition to OK.
Workaround: Verify the descriptions for severity event states by using the ESXCLI command esxcli hardware ipmi sdr list and Lenovo XCC.
If you use an RSA key size smaller than 2048 bits, RSA signature generation fails
Starting from vSphere 8.0, ESXi uses the OpenSSL 3.0 FIPS provider. As part of the FIPS 186-4 requirement, the RSA key size must be at least 2048 bits for any signature generation, and signature generation with SHA1 is not supported.
Workaround: Use an RSA key size of 2048 bits or larger.
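For example, a hedged sketch that generates a compliant key pair and a SHA-256-signed certificate signing request with OpenSSL; the file names and subject are illustrative placeholders:
# Generate a 3072-bit RSA private key, which satisfies the 2048-bit minimum
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:3072 -out host_key.pem
# Create a certificate signing request signed with SHA-256 instead of SHA1
openssl req -new -key host_key.pem -sha256 -subj "/CN=esxi-host.example.com" -out host_key.csr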
Even though you deactivate Lockdown Mode on an ESXi host, the lockdown is still reported as active after a host reboot
Even though you deactivate Lockdown Mode on an ESXi host, you might still see it as active after a reboot of the host.
Workaround: Add the users dcui and vpxuser to the list of lockdown mode exception users and deactivate Lockdown Mode after the reboot. For more information, see Specify Lockdown Mode Exception Users and Specify Lockdown Mode Exception Users in the VMware Host Client.