VMware ESXi 8.0 Update 2b | 29 FEB 2024 | Build 23305546

Check for additions and updates to these release notes.

What's New

  • New - Solution License for VMware Cloud Foundation (VCF) and VMware vSphere Foundation (VVF):

    Starting with vSphere 8.0 Update 2b, you can apply a single Solution License to the components of VVF and VCF.

    For more information on applying the Solution License to VVF components, see VMware vSphere Foundation (VVF) Licensing.

    For more information on VCF Solution License, see VMware Cloud Foundation 5.1.1 Release Notes.

  • 100 GiB of included storage capacity per licensed core:

    Starting with vSphere 8.0 Update 2b, as part of the VMware vSphere Foundation Solution License, you can use up to 100 gibibytes (GiB) of included vSAN storage per host licensed core. For capacity larger than 100 GiB per core, you must purchase vSAN capacity per tebibyte (TiB) and apply a vSAN license key that reflects the total raw storage capacity of the vSAN cluster. For more information on capacity reporting and licensing in vSAN, see Demystifying Capacity Reporting in vSAN and Counting Cores for VMware Cloud Foundation and vSphere Foundation and TiBs for vSAN.

  • Dynamic firewall rules when configuring syslog collectors with non-standard ports:

    With ESXi 8.0 Update 2b, when you configure syslog remote hosts, or loghosts, with non-standard ports, the vmsyslogd service automatically creates persistent dynamic firewall rules. You no longer need to manually open firewall ports that differ from the defaults, 514 for TCP/UDP and 1514 for SSL, respectively. When configuring remote hosts with the standard ports, you still need to enable the syslog firewall ruleset. For example commands, see the sketch after this list.

  • ESXi 8.0 Update 2b adds support to vSphere Quick Boot for multiple servers, including: 

    • Dell

      • MX760c vSAN Ready Node

      • PowerEdge C6615

      • PowerEdge XR8610t

      • R7625 vSAN Ready Node

      • VxRail VP-760xa

      • VxRail VS-760

    • HPE

      • Alletra Storage Server 4110

      • Alletra Storage Server 4140

      • HPE Cray XD670

      • ProLiant DL110 Gen11

      • ProLiant DL20 Gen11

      • ProLiant ML30 Gen11

    • Lenovo

      • DN8848 V2

    For the full list of supported servers, see the VMware Compatibility Guide.
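
A minimal sketch of the syslog configuration described in the dynamic firewall rules item above; the collector address 10.0.0.50 and port 1515 are placeholder values, not part of this release:

  # Point vmsyslogd at a remote collector on a non-standard TCP port
  esxcli system syslog config set --loghost='tcp://10.0.0.50:1515'
  # Reload the syslog configuration; in 8.0 Update 2b the matching dynamic firewall rule is created automatically
  esxcli system syslog reload

  # If the collector uses the default port 514 instead, you still enable the syslog ruleset manually
  esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true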

Earlier Releases of ESXi 8.0

New features, resolved issues, and known issues of ESXi are described in the release notes for each release. Release notes for earlier releases of ESXi 8.0 are:

For internationalization, compatibility, and open source components, see the VMware vSphere 8.0 Release Notes.

Product Support Notices

  • Reduction in the memory usage of the ps command: In ESXi 8.0 Update 2b, an optimization of memory usage for the ps command reduces memory required to run the command by up to 96%.

Patches Contained in This Release

VMware ESXi 8.0 Update 2b

Build Details

Download Filename: VMware-ESXi-8.0U2b-23305546-depot.zip
Build: 23305546
Download Size: 987.6 MB
sha256checksum: 264ebd69044a85fc4ade78ff088f5d4fa9af3cbd23eca241f0855cb7bdeb8258
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
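
To verify the downloaded depot against the checksum above, you can run, for example on a Linux workstation (a sketch, assuming the file is in the current directory):

  sha256sum VMware-ESXi-8.0U2b-23305546-depot.zip
  # The output must match 264ebd69044a85fc4ade78ff088f5d4fa9af3cbd23eca241f0855cb7bdeb8258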

Rollup Bulletin

This rollup bulletin contains the latest VIBs with all the fixes after the initial release of ESXi 8.0.

Bulletin ID | Category | Severity | Detail
ESXi80U2b-23305546 | Bugfix | Critical | Security fixes and Bug fixes
ESXi80U2sb-23305545 | Security | Critical | Security only fixes

Components

Component | Bulletin | Category | Severity
ESXi Component - core ESXi VIBs | ESXi_8.0.2-0.30.23305546 | Bugfix | Critical
ESXi Install/Upgrade Component | esx-update_8.0.2-0.30.23305546 | Bugfix | Critical
ESXi Install/Upgrade Component | esxio-update_8.0.2-0.30.23305546 | Bugfix | Critical
Host Based Replication Server for ESX | VMware-HBR-UW_8.0.2-0.30.23305546 | Bugfix | Critical
ESXi Component - core ESXi VIBs | ESXi_8.0.2-0.25.23305545 | Security | Critical
ESXi Install/Upgrade Component | esx-update_8.0.2-0.25.23305545 | Security | Critical
ESXi Install/Upgrade Component | esxio-update_8.0.2-0.25.23305545 | Security | Critical
ESXi Tools Component | VMware-VM-Tools_12.3.5.22544099-23305545 | Security | Critical

Image Profiles

VMware patch and update releases contain general and critical image profiles. The general release image profile applies to deployments that require the new bug fixes.

Image Profile Name

ESXi-8.0U2b-23305546-standard

ESXi-8.0U2b-23305546-no-tools

ESXi-8.0U2sb-23305545-standard

ESXi-8.0U2sb-23305545-no-tools

ESXi Image

Name and Version | Release Date | Category | Detail
ESXi-8.0U2b-23305546 | 02/29/2024 | General | Security and Bugfix image
ESXi-8.0U2sb-23305545 | 02/29/2024 | Security | Security only image

For information about the individual components and bulletins, see the Resolved Issues section.

Patch Download and Installation

Log in to the Broadcom Support Portal to download this patch.

For download instructions for earlier releases, see Download Broadcom products and software.

For details on updates and upgrades by using vSphere Lifecycle Manager, see About vSphere Lifecycle Manager and vSphere Lifecycle Manager Baselines and Images. You can also update ESXi hosts without the use of vSphere Lifecycle Manager by using an image profile. To do this, you must manually download the patch offline bundle ZIP file. 

For more information, see the Upgrading Hosts by Using ESXCLI Commands and the VMware ESXi Upgrade guide.
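
As an illustration only, a minimal ESXCLI sketch of an image profile update from the ESXi Shell; the datastore path /vmfs/volumes/datastore1 is a placeholder, and the profile name is the general profile listed under Image Profiles:

  # Enter maintenance mode before patching (migrate or shut down VMs first)
  esxcli system maintenanceMode set --enable true
  # Apply the Security and Bugfix image profile from the manually downloaded offline bundle
  esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-8.0U2b-23305546-depot.zip -p ESXi-8.0U2b-23305546-standard
  # A host reboot is required for this release
  reboot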

Resolved Issues

ESXi_8.0.2-0.30.23305546

Patch Category: Bugfix
Patch Severity: Critical
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A

Affected VIBs Included

  • VMware_bootbank_vds-vsip_8.0.2-0.30.23305546

  • VMware_bootbank_crx_8.0.2-0.30.23305546

  • VMware_bootbank_esxio-combiner-esxio_8.0.2-0.30.23305546

  • VMware_bootbank_esxio-dvfilter-generic-fastpath_8.0.2-0.30.23305546

  • VMware_bootbank_vsanhealth_8.0.2-0.30.23305546

  • VMware_bootbank_gc-esxio_8.0.2-0.30.23305546

  • VMware_bootbank_clusterstore_8.0.2-0.30.23305546

  • VMware_bootbank_trx_8.0.2-0.30.23305546

  • VMware_bootbank_native-misc-drivers-esxio_8.0.2-0.30.23305546

  • VMware_bootbank_cpu-microcode_8.0.2-0.30.23305546

  • VMware_bootbank_infravisor_8.0.2-0.30.23305546

  • VMware_bootbank_esx-base_8.0.2-0.30.23305546

  • VMware_bootbank_vsan_8.0.2-0.30.23305546

  • VMware_bootbank_bmcal_8.0.2-0.30.23305546

  • VMware_bootbank_esxio_8.0.2-0.30.23305546

  • VMW_bootbank_pensandoatlas_1.46.0.E.28.1.314-2vmw.802.0.0.22939414

  • VMware_bootbank_gc_8.0.2-0.30.23305546

  • VMware_bootbank_vdfs_8.0.2-0.30.23305546

  • VMware_bootbank_drivervm-gpu-base_8.0.2-0.30.23305546

  • VMware_bootbank_esxio-combiner_8.0.2-0.30.23305546

  • VMware_bootbank_native-misc-drivers_8.0.2-0.30.23305546

  • VMware_bootbank_esx-dvfilter-generic-fastpath_8.0.2-0.30.23305546

  • VMware_bootbank_esx-xserver_8.0.2-0.30.23305546

  • VMware_bootbank_bmcal-esxio_8.0.2-0.30.23305546

  • VMware_bootbank_esxio-base_8.0.2-0.30.23305546

PRs Fixed

3326852, 3326624, 3319724, 3316967, 3307362, 3318781, 3331997, 3317441, 3317938, 3336882, 3317707, 3316767, 3306552, 3210610, 3262965, 3312008, 3304392, 3308819, 3313089, 3313912, 3313572, 3308844, 3298815, 3312006, 3311820, 3308054, 3308132, 3302896, 3308133, 3312713, 3301092, 3296088, 3311524, 3311123, 3307579, 3308347, 3309887, 3308458, 3309849, 3303232, 3311989, 3308023, 3308025, 3301665, 3302759, 3305165, 3304472, 3303651, 3293470, 3303807, 3307435, 3282049, 3293466, 3293464, 3296047, 3300173, 3294828, 3300031, 3295243, 3268113, 3284909, 3256705, 3292627, 3287767, 3289239, 3297899, 3295250, 3287747, 3222955, 3300326, 3292550, 3300705, 3287772, 3294624, 3291105, 3296694, 3291219, 3280469, 3288307, 3287769, 3287762, 3289397, 3293199, 3285514, 3275379, 3288872

CVE numbers

N/A

The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.

Updates the bootbank_vds-vsip, bootbank_crx, bootbank_esxio-combiner-esxio, bootbank_esxio-dvfilter-generic-fastpath, bootbank_vsanhealth, bootbank_gc-esxio, bootbank_clusterstore, bootbank_trx, bootbank_native-misc-drivers-esxio, bootbank_cpu-microcode, bootbank_infravisor, bootbank_esx-base, bootbank_vsan, bootbank_bmcal, bootbank_esxio, VMW_bootbank_pensandoatlas, bootbank_gc, bootbank_vdfs, bootbank_drivervm-gpu-base, bootbank_esxio-combiner, bootbank_native-misc-drivers, bootbank_esx-dvfilter-generic-fastpath, bootbank_esx-xserver, bootbank_bmcal-esxio, and bootbank_esxio-base VIBs to resolve the following issues:

  • PR 3303807: ESXi hosts with active Trusted Platform Module (TPM) encryption might fail with a purple diagnostic screen when using vSphere Quick Boot for upgrade to ESXi 8.0 Update 1 or later

    If you use vSphere Quick Boot to upgrade ESXi hosts with active TPM encryption to ESXi 8.0 Update 1 or later, such hosts might fail with a purple diagnostic screen and a message similar to security violation was detected.

    This issue is resolved in this release. With ESXi 8.0 Update 2b, vSphere Quick Boot is deactivated for upgrades of ESXi hosts with active TPM encryption.

  • PR 3316767: VM I/Os might prematurely fail or ESXi hosts crash while HPP claims a local multipath SCSI device

    Due to a rare issue with the failover handling in the High Performance Plug-in (HPP) module, if HPP claims a local multipath SCSI device in active-active mode, you might see the following issues:

    • in case of connection loss or a NO CONNECT error, VM I/Os can fail prematurely instead of retrying on another path.

    • in case of all paths down (APD) state, if a complete storage rescan is triggered from vCenter or the ESXi host, the host might fail with a purple diagnostic screen.

    This issue is resolved in this release.

  • PR 3302759: Pending I/Os might cause the ESXi host running a Primary VM to fail with a purple diagnostic screen during failover

    During a failover of a Primary VM protected by vSphere Fault Tolerance, if some I/Os between the VM and the ESXi host are still unresolved, the host might intermittently fail with a purple diagnostic screen.

    This issue is resolved in this release. The fix makes sure that no pending I/Os exist at the time an FT session info cleanup runs.

  • PR 3291105: While detaching an Enhanced Networking Stack (ENS) port from an ESXi host, the host might fail with a purple diagnostic screen

    During deactivation of ENS in scenarios such as an upgrade or a VNIC configuration change, while detaching an ENS port from an ESXi host with active ENS, the port might be removed from the port list and not be available for further reference, which might cause an invalid memory access error. As a result, the ESXi host fails with a purple diagnostic screen.

    This issue is resolved in this release.

  • PR 3297899: vSAN file server continues restarting

    In case all AD servers are inaccessible, vSAN Skyline Health might show a file server that is always in restarting state, with repeated failovers from one host to another. If this occurs, all file shares on the file server, including NFS shares with securityType=AUTH_SYS, become inaccessible.

    This issue is resolved in this release.

  • PR 3288307: Cannot upgrade vSAN File Service or enter maintenance mode

    In rare cases, if the hostd service reboots, upgrade of the vSAN File Service or entering maintenance mode might be blocked.

    This issue is resolved in this release.

  • PR 3294624: Skyline Health fails to load for file shares in an inextensible state

    When a file share is in an inextensible state, the current storage policy cannot be fulfilled. The file share might also have other issues, such as exceeding the soft quota with its used capacity. Skyline Health fails to load for objects in this state.

    This issue is resolved in this release.

  • PR 3293199: The file /var/run/log/clusterAgent.stderr on an ESXi host becomes extremely large, filling up the OSDATA partition

    In case of certain rare error conditions, a component within the clusterAgent ESXi service might start periodically writing to /var/run/log/clusterAgent.stderr instead of the regular syslog destination on an ESXi host. Since this is not expected behavior, the stderr file is not rotated or otherwise monitored. If the error conditions persist over time, for example several months, the file can expand to several GB in size, filling up the OSDATA partition. When disk space is exhausted, ESXi services and VMs might lock up.

    This issue is resolved in this release. The fix makes sure that if your environment had the issue, after upgrading to ESXi 8.0 Update 2b or later, the oversized log file is compressed. The file is not automatically deleted, but you can safely manually remove the compressed file.

  • PR 3319724: vSAN ESA VMs with large-sized VMDKs become inaccessible

    In vSAN Express Storage Architecture (ESA) environments of version 8.0 and later, VMDK objects with a large amount of used capacity might overload the object scrubber, causing such objects to become inaccessible.

    This issue is resolved in this release.

  • PR 3294828: The hostd service might intermittently fail due to an invalid sensor data record

    In rare cases, the Baseboard Management Controller (BMC) might provide an invalid sensor data record to the hostd service and cause it to fail. As a result, ESXi hosts become unresponsive or disconnect from vCenter.

    This issue is resolved in this release. The fix implements checks to validate the sensor data record before passing it to the hostd service.

  • PR 3287769: Unexpected failover of vSAN File Service containers

    The EMM monitor can trigger unexpectedly and block the File Service EPiC heartbeat update to the arbitrator host. This problem can cause an unexpected container failover, which interrupts the I/O of SMB file shares.

    This issue is resolved in this release.

  • PR 3293464: The VMX service might fail during migration of virtual machines with vSphere Virtual Volumes if the volume closure on the source takes long

    In some scenarios, such as virtual machines with vSphere Virtual Volumes and Changed Block Tracking (CBT) or Content Based Read Cache (CBRC) enabled, during storage migration, flushing pending I/Os on volume closure at the source might take a long time, for example 10 seconds. If clearing the pending I/Os is not complete by the time the VMX service tries to reach the volumes on the destination host, the service fails.

    This issue is resolved in this release. The fix makes sure that all vSphere Virtual Volumes and disks are closed on the source host before the destination host tries to open them.

  • PR 3308819: You cannot add allowed IP addresses for an ESXi host

    With ESXi 8.0 Update 2, some ESXi firewall rulesets, such as dhcp, are system-owned by default and prevent manual adding of allowed IP addresses to avoid possible break of service. With ESXi 8.0 Update 2b, you can manually add allowed IP addresses to all rulesets, except for nfsClient, nfs41Client, trusted-infrastructure-kmxd, trusted-infrastructure-kmxa, and vsanEncryption. For example commands that add an allowed IP address to a ruleset, see the sketch after this list of resolved issues.

    This issue is resolved in this release.

  • PR 3308461: When MAC learning is not active, newly added ports on a vSphere Distributed Switch (VDS) might go into blocked state

    If MAC learning on a VDS is not active and you change the MAC address of a virtual machine from the guest OS, the newly added proxy switch port might go into a blocked state due to an L2 violation error, even when the MAC address changes option is set to Accept. The issue occurs only when MAC learning on a VDS is not active.

    This issue is resolved in this release.

  • PR 3332617: vGPU VMs with NVIDIA RTX 5000 Ada or NVIDIA RTX 6000 Ada graphics card fail to start after update to ESXi 8.0 Update 2

    After an update to ESXi 8.0 Update 2, booting of a vGPU VM with an NVIDIA RTX 5000 Ada or NVIDIA RTX 6000 Ada graphics card fails with an error such as vmiop-display unable to reserve vgpu in the logs. In the vSphere Client, you see an error such as Module DevicePowerOnEarly power on failed.

    This issue is resolved in this release.

  • PR 3326624: An ESXi host might fail during upgrade due to an unmap I/O request larger than 2040 MB on a vSAN-backed object

    During an upgrade to ESXi 8.0 Update 2, in rare cases, a guest virtual machine might issue an unmap I/O request larger than 2040 MB on a vSAN-backed object and the request might split in two parts. As a result, the ESXi host receives duplicate metadata entries and might fail with a purple diagnostic screen.

    This issue is resolved in this release.

  • PR 3304472: vSphere vMotion operations might fail due to a rare issue with no namespaces present in the namespacemgr.db file

    When a virtual machine is configured with namespaces and for some reason the namespaces table in the namespacemgr.db file is empty in the ESXi host database, migration of such a VM by using vSphere vMotion might fail. In the vmware.log file, you see errors such as:

    vmx - NamespaceMgrCheckpoint: No valid queue found while restoring the namespace events. The migrate type is 1.

    This issue is resolved in this release.

  • PR 3293470: In the vSphere Client, you see a flat line in the read/write IO latency stats for a vSphere Virtual Volumes datastore

    In the vSphere Client, when you navigate to Monitor > Advanced > Datastore to see the advanced performance metrics of a virtual machine, you might see a flat line in the read/write IO latency stats for a vSphere Virtual Volumes datastore. The issue occurs due to incorrect calculation of the counters for the read/write IO latency as a cumulative average.

    This issue is resolved in this release.

  • PR 3300031: The Network Time Protocol (NTP) peer and loop statistics are not automatically captured in the vm-support bundle

    Prior to vSphere 8.0 Update 2b, unless you manually add the NTP peer and loop statistics to the vm-support bundle, NTP logs are not available for troubleshooting.

    This issue is resolved in this release. The fix adds support for automatically capturing NTP peer and loop statistics from the /var/log/ntp/ directory in the vm-support bundle.

  • PR 3293466: vSphere vMotion operations between ESXi 7.x hosts fail with the error "Destination failed to preopen disks"

    If on an ESXi 7.x host you use an older Virtual Volumes storage provider, also called a VASA provider, it might not allow binding multiple vSphere Virtual Volumes in batches to optimize the performance of migration and power on. As a result, in the vSphere Client you might see the error Destination failed to preopen disks when running vSphere vMotion operations between 7.x hosts.

    ​This issue is resolved in this release.

  • PR 3316967: Changed Block Tracking (CBT) might not work as expected on a hot extended virtual disk

    In vSphere 8.0 Update 2, to optimize the open and close process of virtual disks during hot extension, the disk remains open during hot extend operations. Due to this change, incremental backup of virtual disks with CBT enabled might be incomplete, because the CBT in-memory bitmap does not resize, and CBT cannot record the changes to the extended disk block. As a result, when you try to restore a VM from an incremental backup of virtual disks with CBT, the VM might fail to start.

    This issue is resolved in this release.

  • PR 3317441: Enhanced Network Datapath (EDP) might reserve too much memory on a large vSphere system and cause resource issues

    EDP reserves memory in proportion to the total system memory, up to 1.6% of the total. For a VM with a high memory reservation in a large vSphere system, such a memory buffer might be too big. For example, if the system has 100 TB of memory, the EDP reservation might be 1.6 TB. As a result, you might see resource issues such as not being able to hot-add VNICs to VMs with high memory reservation.

    This issue is resolved in this release. The fix sets the EDP reservation limit at 1% of the total memory and puts a 5 GB cap on a single memory reservation. You can adjust the memory reservation limit by using the VMkernel boot option ensMbufPoolMaxMinMB or by using the following command: esxcli system settings kernel set -s ensMbufPoolMaxMBPerGB -v <value-in-MB>. In either case, you must reboot the ESXi host for the change to take effect.

  • PR 3275379: VBS-enabled Windows VMs might intermittently fail with a blue diagnostic screen on ESXi hosts running on AMD processors.

    Windows virtual machines with virtualization-based security (VBS) enabled might intermittently fail with a blue diagnostic screen on ESXi hosts running on AMD processors. The BSOD has the following signature:

    SECURE_KERNEL_ERROR (18b) The secure kernel has encountered a fatal error.
    Arguments: 
    Arg1: 000000000000018c 
    Arg2: 000000000000100b 
    Arg3: 0000000000000000 
    Arg4: 000000000000409b

    ​This issue is resolved in this release.

  • PR 3311820: Creation of VMFS volumes on local software emulated 4Kn devices fails

    In some environments, after an update to ESXi 8.0 Update 2, you might not be able to create VMFS volumes on local software emulated 4Kn devices. The issue occurs when a SCSI driver fails to discover and register a 4Kn NVMe device as a 4Kn SWE device. In the vSphere On-disk Metadata Analyzer (VOMA), you see an error such as ON-DISK ERROR: Invalid disk block size 512, should be 4096.

    This issue is resolved in this release.

  • PR 3296047: The vvold service fails with Unable to Allocate Memory error

    If a vSphere API for Storage Awareness (VASA) provider is down for a long time, frequent reconnection attempts from the vvold service might exhaust its memory, because of repetitively creating and destroying session objects. The issue can also occur when an event such as a change in a VASA provider triggers an update to the vvold service. The vvold service restarts within seconds, but if the restart interrupts a running operation of the VASA provider, the operation might fail and affect related virtual machine operations as well. In the backtrace, you see an error such as Unable To Allocate Memory.

    This issue is resolved in this release.

  • PR 3309887: You cannot use host profiles to configure a path selection policy for storage disks

    In some cases, when you try to change or configure the path selection policy of storage disks by using a host profile, the operation fails. In the syslog.log file, you see lines such as:

    Errors: Unable to set SATP configuration: Invalid options provided for SATP VMW_SATP_ALUA

    This issue is resolved in this release.

  • PR 3308844: The hostd service might become unresponsive after a restart due to a rare issue with Inter Process Communication (IPC)

    If an IPC packet has an incorrect payload, for example an incorrect version in the header, a deadlock in the iSCSI daemon might cause the hostd service to become unresponsive after a hostd restart. As a result, you cannot perform any operations on the affected ESXi host and VMs by using the vSphere Client.

    This issue is resolved in this release.

  • PR 3256705: ESXi hosts might fail to boot due to a rare deadlock between open and rename operations on objects with parent-child relationship

    A rename operation locks the source and destination directories based on the object pointer address with the assumption that the parent object has the lower address. In rare cases, the assumption might not be correct and the rename lock can happen for the child first and then the parent, while the open operation locks the parent first and then the child. As a result, a deadlock between the open and rename operations occurs and ESXi hosts fail to boot.

    This issue is resolved in this release.

  • PR 3311989: Linux-based virtual machines randomly stop responding to input from the keyboard and the mouse

    A race condition on Linux-based virtual machines between the interrupt requests from the keyboard and the mouse, such as moving the mouse and typing at the same time, might cause the VM to stop responding to input from the keyboard and the mouse.

    This issue is resolved in this release.

  • PR 3307579: You see virtual machines in environments with Enhanced Data Path enabled randomly losing connectivity

    A rare memory leak might cause VMkernel NICs connected to ESXi host switches, working in the networking stack mode known as Enhanced Data Path or Enhanced Networking Stack, to randomly lose connectivity.

    This issue is resolved in this release.

  • PR 3311123: ESXi hosts randomly fail with a purple diagnostic screen and errors such as Spinlock spinout NMI or Panic requested by another PCPU

    A rare race condition between an I/O thread and a Storage I/O Control thread on an ESXi host, waiting for each other to release read/write locks, might cause the host to fail with a purple diagnostic screen. In the panic backtrace you see a call to either PsaStorSchedQUpdateMaxQDepth or PsaStorDirectSubmitOrQueueCommand and the error on the screen is either Spin count exceeded or Panic requested by another PCPU respectively.

    This issue is resolved in this release.

  • PR 3312713: An ESXi host might fail with purple diagnostic screen due to rare issue with the offline consolidation of SEsparse delta disks

    Due to slow storage, the offline consolidation of a large SEsparse delta disk, such as 3TB or more, might take longer than the default 10-hour timeout. In such cases, a clearing process stops the consolidation task before it completes. As a result, threads trying to access the consolidation task cause a PCPU lock up and lead to a failure of the ESXi host with a purple diagnostic screen with an error such as Spin count exceeded - possible deadlock.

    This issue is resolved in this release.

  • PR 3307435: The hostd service might fail after an update in environments with Virtual Shared Graphics Acceleration (vSGA) and Enhanced vMotion Compatibility (EVC)

    In EVC-enabled clusters with active vSGA to share GPU resources across multiple virtual machines, an update operation, such as VIB installation on an ESXi host, might cause the hostd service to fail and start generating frequent coredumps. The issue occurs while adding the GPU-related features to the ESXi host after exiting maintenance mode.

    This issue is resolved in this release.

  • PR 3303651: In a High Performance Plug-in (HPP) environment, a network outage might trigger host failure

    In HPP environments, which aim to improve the performance of NVMe devices on ESXi hosts, in case of a network outage, you might see two issues:

    1. In case of multi-pathed local SCSI devices in active-active mode claimed by HPP, VM I/Os might fail prematurely instead of being retried on the other path.

    2. In case of local SCSI devices claimed by HPP, in an all paths down (APD) scenario, if a complete storage rescan is triggered from vCenter, the ESXi host might fail with a purple diagnostic screen.

    Both issues occur only with SCSI local devices and when I/Os fail due to lack of network or APD.

    This issue is resolved in this release. The fix makes sure that I/Os fail only in case of CHECK CONDITION error indicating a change in a SCSI local device.
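
As referenced in PR 3308819 above, the following is a hedged sketch of restricting a firewall ruleset to specific source IP addresses from the ESXi Shell; the ruleset name sshServer and the address 192.168.10.25 are illustrative placeholders:

  # Stop accepting traffic from all addresses for the ruleset
  esxcli network firewall ruleset set --ruleset-id=sshServer --allowed-all=false
  # Add a single allowed source IP address (or a subnet in CIDR notation)
  esxcli network firewall ruleset allowedip add --ruleset-id=sshServer --ip-address=192.168.10.25
  # Confirm the allowed IP list for the ruleset
  esxcli network firewall ruleset allowedip list --ruleset-id=sshServer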

esx-update_8.0.2-0.30.23305546

Patch Category: Bugfix
Patch Severity: Critical
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A

Affected VIBs Included

  • VMware_bootbank_esx-update_8.0.2-0.30.23305546

  • VMware_bootbank_loadesx_8.0.2-0.30.23305546

PRs Fixed

3317203, 3313226, 3309606, 3261182, 3286057

CVE numbers

N/A

Updates the esx-update and loadesx VIBs to resolve the following issue:

  • PR 3309606: Using the poll() system call on character devices in some drivers might lead to an ESXi host failure with a purple diagnostic screen

    Depending on the implementation, for example in the Integrated Lights Out (ILO) driver of HPE, using the poll() system call on character devices might lead to an ESXi host failure with a purple diagnostic screen. The issue is due to an internal memory leak in some cases where the driver returns an error to the poll() system call.

    This issue is resolved in this release.

esxio-update_8.0.2-0.30.23305546

Patch Category: Bugfix
Patch Severity: Critical
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A

Affected VIBs Included

  • VMware_bootbank_esxio-update_8.0.2-0.30.23305546

  • VMware_bootbank_loadesxio_8.0.2-0.30.23305546

PRs Fixed

N/A

CVE numbers

N/A

Updates the esxio-update and loadesxio VIBs.

VMware-HBR-UW_8.0.2-0.30.23305546

Patch Category: Bugfix
Patch Severity: Critical
Host Reboot Required: No
Virtual Machine Migration or Shutdown Required: No
Affected Hardware: N/A
Affected Software: N/A

Affected VIBs Included

  • VMware_bootbank_vmware-hbrsrv_8.0.2-0.30.23305546

PRs Fixed

3312725, 3312593, 3311987, 3311956, 3311691, 3295096, 3295060, 3306346, 3306344, 3306345

CVE numbers

N/A

Updates the hbrsrv VIB.

ESXi_8.0.2-0.25.23305545

Patch Category: Security
Patch Severity: Critical
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A

Affected VIBs Included

  • VMware_bootbank_vsan_8.0.2-0.25.23305545

  • VMware_bootbank_esxio-combiner_8.0.2-0.25.23305545

  • VMware_bootbank_infravisor_8.0.2-0.25.23305545

  • VMware_bootbank_clusterstore_8.0.2-0.25.23305545

  • VMware_bootbank_native-misc-drivers_8.0.2-0.25.23305545

  • VMW_bootbank_pensandoatlas_1.46.0.E.28.1.314-2vmw.802.0.0.22939414

  • VMware_bootbank_crx_8.0.2-0.25.23305545

  • VMware_bootbank_esxio-combiner-esxio_8.0.2-0.25.23305545

  • VMware_bootbank_esxio-dvfilter-generic-fastpath_8.0.2-0.25.23305545

  • VMware_bootbank_esx-xserver_8.0.2-0.25.23305545

  • VMware_bootbank_bmcal-esxio_8.0.2-0.25.23305545

  • VMware_bootbank_trx_8.0.2-0.25.23305545

  • VMware_bootbank_esxio_8.0.2-0.25.23305545

  • VMware_bootbank_esxio-base_8.0.2-0.25.23305545

  • VMware_bootbank_gc-esxio_8.0.2-0.25.23305545

  • VMware_bootbank_esx-dvfilter-generic-fastpath_8.0.2-0.25.23305545

  • VMware_bootbank_bmcal_8.0.2-0.25.23305545

  • VMware_bootbank_gc_8.0.2-0.25.23305545

  • VMware_bootbank_vsanhealth_8.0.2-0.25.23305545

  • VMware_bootbank_vdfs_8.0.2-0.25.23305545

  • VMware_bootbank_native-misc-drivers-esxio_8.0.2-0.25.23305545

  • VMware_bootbank_drivervm-gpu-base_8.0.2-0.25.23305545

  • VMware_bootbank_vds-vsip_8.0.2-0.25.23305545

  • VMware_bootbank_esx-base_8.0.2-0.25.23305545

PRs Fixed

3333007, 3328623, 3317046, 3315810, 3309040, 3341832, 3309039, 3293189, 3309041, 3291298, 3304476, 3260854, 3291282, 3291430, 3302555, 3301475, 3291280, 3291297, 3291275, 3291295

CVE numbers

N/A

The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.

Updates the bootbank_vsan, bootbank_esxio-combiner, bootbank_infravisor, bootbank_clusterstore, bootbank_native-misc-drivers, VMW_bootbank_pensandoatlas, bootbank_crx, bootbank_esxio-combiner-esxio, bootbank_esxio-dvfilter-generic-fastpath, bootbank_esx-xserver, bootbank_bmcal-esxio, bootbank_trx, bootbank_esxio, bootbank_esxio-base, bootbank_gc-esxio, bootbank_esx-dvfilter-generic-fastpath, bootbank_bmcal, bootbank_gc, bootbank_vsanhealth, bootbank_vdfs, bootbank_native-misc-drivers-esxio, bootbank_drivervm-gpu-base, bootbank_vds-vsip, and bootbank_esx-base VIBs to resolve the following issues:

  • ESXi 8.0 Update 2b provides the following security updates:

    • OpenSSL is updated to version 3.0.11.

    • The Python library is updated to version 3.8.18.

    • The Envoy proxy is updated to version 1.25.11.

    • The cURL library is updated to version 8.3.0.

    • The c-ares C library is updated to version 1.19.1.

    • The Go-lang library is updated to version 1.19.11.

    • The pyOpenSSL library is updated to version 2.0.

    • The Libxslt library is updated to version 1.1.28.

    • The libarchive library is updated to version 3.5.1.

  • ESXi 8.0 Update 2b includes the following Intel microcode:

    Code Name | FMS | Plt ID | MCU Rev | MCU Date | Brand Names
    Nehalem EP | 0x106a5 (06/1a/5) | 0x03 | 0x1d | 5/11/2018 | Intel Xeon 35xx Series; Intel Xeon 55xx Series
    Clarkdale | 0x20652 (06/25/2) | 0x12 | 0x11 | 5/8/2018 | Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series
    Arrandale | 0x20655 (06/25/5) | 0x92 | 0x7 | 4/23/2018 | Intel Core i7-620LE Processor
    Sandy Bridge DT | 0x206a7 (06/2a/7) | 0x12 | 0x2f | 2/17/2019 | Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series
    Westmere EP | 0x206c2 (06/2c/2) | 0x03 | 0x1f | 5/8/2018 | Intel Xeon 56xx Series; Intel Xeon 36xx Series
    Sandy Bridge EP | 0x206d6 (06/2d/6) | 0x6d | 0x621 | 3/4/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
    Sandy Bridge EP | 0x206d7 (06/2d/7) | 0x6d | 0x71a | 3/24/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
    Nehalem EX | 0x206e6 (06/2e/6) | 0x04 | 0xd | 5/15/2018 | Intel Xeon 65xx Series; Intel Xeon 75xx Series
    Westmere EX | 0x206f2 (06/2f/2) | 0x05 | 0x3b | 5/16/2018 | Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series
    Ivy Bridge DT | 0x306a9 (06/3a/9) | 0x12 | 0x21 | 2/13/2019 | Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C
    Haswell DT | 0x306c3 (06/3c/3) | 0x32 | 0x28 | 11/12/2019 | Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series
    Ivy Bridge EP | 0x306e4 (06/3e/4) | 0xed | 0x42e | 3/14/2019 | Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series
    Ivy Bridge EX | 0x306e7 (06/3e/7) | 0xed | 0x715 | 3/14/2019 | Intel Xeon E7-8800/4800/2800-v2 Series
    Haswell EP | 0x306f2 (06/3f/2) | 0x6f | 0x49 | 8/11/2021 | Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series
    Haswell EX | 0x306f4 (06/3f/4) | 0x80 | 0x1a | 5/24/2021 | Intel Xeon E7-8800/4800-v3 Series
    Broadwell H | 0x40671 (06/47/1) | 0x22 | 0x22 | 11/12/2019 | Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series
    Avoton | 0x406d8 (06/4d/8) | 0x01 | 0x12d | 9/16/2019 | Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series
    Broadwell EP/EX | 0x406f1 (06/4f/1) | 0xef | 0xb000040 | 5/19/2021 | Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series
    Skylake SP | 0x50654 (06/55/4) | 0xb7 | 0x2007006 | 3/6/2023 | Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series
    Cascade Lake B-0 | 0x50656 (06/55/6) | 0xbf | 0x4003604 | 3/17/2023 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
    Cascade Lake | 0x50657 (06/55/7) | 0xbf | 0x5003604 | 3/17/2023 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
    Cooper Lake | 0x5065b (06/55/b) | 0xbf | 0x7002703 | 3/21/2023 | Intel Xeon Platinum 8300 Series; Intel Xeon Gold 6300/5300
    Broadwell DE | 0x50662 (06/56/2) | 0x10 | 0x1c | 6/17/2019 | Intel Xeon D-1500 Series
    Broadwell DE | 0x50663 (06/56/3) | 0x10 | 0x700001c | 6/12/2021 | Intel Xeon D-1500 Series
    Broadwell DE | 0x50664 (06/56/4) | 0x10 | 0xf00001a | 6/12/2021 | Intel Xeon D-1500 Series
    Broadwell NS | 0x50665 (06/56/5) | 0x10 | 0xe000014 | 9/18/2021 | Intel Xeon D-1600 Series
    Skylake H/S | 0x506e3 (06/5e/3) | 0x36 | 0xf0 | 11/12/2021 | Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series
    Denverton | 0x506f1 (06/5f/1) | 0x01 | 0x38 | 12/2/2021 | Intel Atom C3000 Series
    Ice Lake SP | 0x606a6 (06/6a/6) | 0x87 | 0xd0003b9 | 9/1/2023 | Intel Xeon Platinum 8300 Series; Intel Xeon Gold 6300/5300 Series; Intel Xeon Silver 4300 Series
    Ice Lake D | 0x606c1 (06/6c/1) | 0x10 | 0x1000268 | 9/8/2023 | Intel Xeon D-2700 Series; Intel Xeon D-1700 Series
    Snow Ridge | 0x80665 (06/86/5) | 0x01 | 0x4c000023 | 2/22/2023 | Intel Atom P5000 Series
    Snow Ridge | 0x80667 (06/86/7) | 0x01 | 0x4c000023 | 2/22/2023 | Intel Atom P5000 Series
    Tiger Lake U | 0x806c1 (06/8c/1) | 0x80 | 0xb4 | 9/7/2023 | Intel Core i3/i5/i7-1100 Series
    Tiger Lake U Refresh | 0x806c2 (06/8c/2) | 0xc2 | 0x34 | 9/7/2023 | Intel Core i3/i5/i7-1100 Series
    Tiger Lake H | 0x806d1 (06/8d/1) | 0xc2 | 0x4e | 9/7/2023 | Intel Xeon W-11000E Series
    Sapphire Rapids SP HBM | 0x806f8 (06/8f/8) | 0x10 | 0x2c000321 | 8/21/2023 | Intel Xeon Max 9400 Series
    Sapphire Rapids SP | 0x806f8 (06/8f/8) | 0x87 | 0x2b000541 | 8/21/2023 | Intel Xeon Platinum 8400 Series; Intel Xeon Gold 6400/5400 Series; Intel Xeon Silver 4400 Series; Intel Xeon Bronze 3400 Series
    Kaby Lake H/S/X | 0x906e9 (06/9e/9) | 0x2a | 0xf4 | 2/23/2023 | Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series
    Coffee Lake | 0x906ea (06/9e/a) | 0x22 | 0xf4 | 2/23/2023 | Intel Xeon E-2100 Series; Intel Xeon E-2200 Series (4 or 6 core)
    Coffee Lake | 0x906eb (06/9e/b) | 0x02 | 0xf4 | 2/23/2023 | Intel Xeon E-2100 Series
    Coffee Lake | 0x906ec (06/9e/c) | 0x22 | 0xf4 | 2/23/2023 | Intel Xeon E-2100 Series
    Coffee Lake Refresh | 0x906ed (06/9e/d) | 0x22 | 0xfa | 2/27/2023 | Intel Xeon E-2200 Series (8 core)
    Rocket Lake S | 0xa0671 (06/a7/1) | 0x02 | 0x5d | 9/3/2023 | Intel Xeon E-2300 Series
    Raptor Lake E/HX/S | 0xb0671 (06/b7/1) | 0x32 | 0x11e | 8/31/2023 | Intel Xeon E-2400 Series

esx-update_8.0.2-0.25.23305545

Patch Category: Security
Patch Severity: Critical
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A

Affected VIBs Included

  • VMware_bootbank_loadesx_8.0.2-0.25.23305545

  • VMware_bootbank_esx-update_8.0.2-0.25.23305545

PRs Fixed

N/A

CVE numbers

N/A

Updates the esx-update and loadesx VIBs.

esxio-update_8.0.2-0.25.23305545

Patch Category: Security
Patch Severity: Critical
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A

Affected VIBs Included

  • VMware_bootbank_esxio-update_8.0.2-0.25.23305545

  • VMware_bootbank_loadesxio_8.0.2-0.25.23305545

PRs Fixed

N/A

CVE numbers

N/A

Updates the esxio-update and loadesxio VIBs.

VMware-VM-Tools_12.3.5.22544099-23305545

Patch Category: Security
Patch Severity: Critical
Host Reboot Required: No
Virtual Machine Migration or Shutdown Required: No
Affected Hardware: N/A
Affected Software: N/A

Affected VIBs Included

  • VMware_locker_tools-light_12.3.5.22544099-23305545

PRs Fixed

3304809

CVE numbers

N/A

Updates the locker_tools-light VIB to resolve the following issue:

  • VMware Tools Bundling Changes in ESXi 8.0 Update 2b

      • The following VMware Tools ISO images are bundled with ESXi 8.0 Update 2b: 

        • windows.iso: VMware Tools 12.3.5 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.

        • linux.iso: VMware Tools 10.3.26 ISO image for Linux OS with glibc 2.11 or later.

        The following VMware Tools ISO images are available for download:

        • VMware Tools 11.0.6:

          • windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).

        • VMware Tools 10.0.12:

          • winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003.

          • linuxPreGLibc25.iso: supports Linux guest operating systems earlier than Red Hat Enterprise Linux (RHEL) 5, SUSE Linux Enterprise Server (SLES) 11, Ubuntu 7.04, and other distributions with glibc version earlier than 2.5.

        • solaris.iso: VMware Tools image 10.3.10 for Solaris.

        • darwin.iso: Supports Mac OS X versions 10.11 and later.

        Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:

ESXi-8.0U2b-23305546-standard

Profile Name: ESXi-8.0U2b-23305546-standard
Build: For build information, see Patches Contained in This Release.
Vendor: VMware by Broadcom, Inc.
Release Date: February 29, 2024
Acceptance Level: Partner Supported
Affected Hardware: N/A
Affected Software: N/A

Affected VIBs

  • VMware_bootbank_vds-vsip_8.0.2-0.30.23305546

  • VMware_bootbank_crx_8.0.2-0.30.23305546

  • VMware_bootbank_esxio-combiner-esxio_8.0.2-0.30.23305546

  • VMware_bootbank_esxio-dvfilter-generic-fastpath_8.0.2-0.30.23305546

  • VMware_bootbank_vsanhealth_8.0.2-0.30.23305546

  • VMware_bootbank_gc-esxio_8.0.2-0.30.23305546

  • VMware_bootbank_clusterstore_8.0.2-0.30.23305546

  • VMware_bootbank_trx_8.0.2-0.30.23305546

  • VMware_bootbank_native-misc-drivers-esxio_8.0.2-0.30.23305546

  • VMware_bootbank_cpu-microcode_8.0.2-0.30.23305546

  • VMware_bootbank_infravisor_8.0.2-0.30.23305546

  • VMware_bootbank_esx-base_8.0.2-0.30.23305546

  • VMware_bootbank_vsan_8.0.2-0.30.23305546

  • VMware_bootbank_bmcal_8.0.2-0.30.23305546

  • VMware_bootbank_esxio_8.0.2-0.30.23305546

  • VMW_bootbank_pensandoatlas_1.46.0.E.28.1.314-2vmw.802.0.0.22939414

  • VMware_bootbank_gc_8.0.2-0.30.23305546

  • VMware_bootbank_vdfs_8.0.2-0.30.23305546

  • VMware_bootbank_drivervm-gpu-base_8.0.2-0.30.23305546

  • VMware_bootbank_esxio-combiner_8.0.2-0.30.23305546

  • VMware_bootbank_native-misc-drivers_8.0.2-0.30.23305546

  • VMware_bootbank_esx-dvfilter-generic-fastpath_8.0.2-0.30.23305546

  • VMware_bootbank_esx-xserver_8.0.2-0.30.23305546

  • VMware_bootbank_bmcal-esxio_8.0.2-0.30.23305546

  • VMware_bootbank_esxio-base_8.0.2-0.30.23305546

  • VMware_bootbank_esx-update_8.0.2-0.30.23305546

  • VMware_bootbank_loadesx_8.0.2-0.30.23305546

  • VMware_bootbank_esxio-update_8.0.2-0.30.23305546

  • VMware_bootbank_loadesxio_8.0.2-0.30.23305546

  • VMware_bootbank_vmware-hbrsrv_8.0.2-0.30.23305546

  • VMware_locker_tools-light_12.3.5.22544099-23305545

PRs Fixed

3326852, 3326624, 3319724, 3316967, 3307362, 3318781, 3331997, 3317441, 3317938, 3336882, 3317707, 3316767, 3306552, 3210610, 3262965, 3312008, 3304392, 3308819, 3313089, 3313912, 3313572, 3308844, 3298815, 3312006, 3311820, 3308054, 3308132, 3302896, 3308133, 3312713, 3301092, 3296088, 3311524, 3311123, 3307579, 3308347, 3309887, 3308458, 3309849, 3303232, 3311989, 3308023, 3308025, 3301665, 3302759, 3305165, 3304472, 3303651, 3293470, 3303807, 3307435, 3282049, 3293466, 3293464, 3296047, 3300173, 3294828, 3300031, 3295243, 3268113, 3284909, 3256705, 3292627, 3287767, 3289239, 3297899, 3295250, 3287747, 3222955, 3300326, 3292550, 3300705, 3287772, 3294624, 3291105, 3296694, 3291219, 3280469, 3288307, 3287769, 3287762, 3289397, 3293199, 3285514, 3275379, 3288872, 3317203, 3313226, 3309606, 3261182, 3286057, 3312725, 3312593, 3311987, 3311956, 3311691, 3295096, 3295060, 3306346, 3306344, 3306345

Related CVE numbers

N/A

This patch updates the following issues:

  • PR 3316767: VM I/Os might prematurely fail or ESXi hosts crash while HPP claims a local multipath SCSI device

    Due to a rare issue with the failover handling in the High Performance Plug-in (HPP) module, if HPP claims a local multipath SCSI device in active-active mode, you might see the following issues:

    • in case of connection loss or a NO CONNECT error, VM I/Os can fail prematurely instead of retrying on another path.

    • in case of all paths down (APD) state, if a complete storage rescan is triggered from vCenter or the ESXi host, the host might fail with a purple diagnostic screen.

    This issue is resolved in this release.

  • PR 3302759: Pending I/Os might cause the ESXi host running a Primary VM to fail with a purple diagnostic screen during failover

    During a failover of a Primary VM protected by vSphere Fault Tolerance, if some I/Os between the VM and the ESXi host are still unresolved, the host might intermittently fail with a purple diagnostic screen.

    This issue is resolved in this release. The fix makes sure that no pending I/Os exist at the time an FT session info cleanup runs.

  • PR 3309606: Using the poll() system call on character devices in some drivers might lead to an ESXi host failure with a purple diagnostic screen

    Depending on the implementation, for example in the Integrated Lights Out (ILO) driver of HPE, using the poll() system call on character devices might lead to an ESXi host failure with a purple diagnostic screen. The issue is due to an internal memory leak in some cases where the driver returns an error to the poll() system call.

    This issue is resolved in this release.

  • PR 3291105: While detaching an Enhanced Networking Stack (ENS) port from an ESXi host, the host might fail with a purple diagnostic screen

    During deactivation of ENS in scenarios such as an upgrade or a VNIC configuration change, while detaching an ENS port from an ESXi host with active ENS, the port might be removed from the port list and not be available for further reference, which might cause an invalid memory access error. As a result, the ESXi host fails with a purple diagnostic screen.

    This issue is resolved in this release.

  • PR 3297899: vSAN file server continues restarting

    In case all AD servers are inaccessible, vSAN Skyline Health might show a file server that is always in restarting state, with repeated failovers from one host to another. If this occurs, all file shares on the file server, including NFS shares with securityType=AUTH_SYS, become inaccessible.

    This issue is resolved in this release.

  • PR 3288307: Cannot upgrade vSAN File Service or enter maintenance mode

    In rare cases, if the hostd service reboots, upgrade of the vSAN File Service or entering maintenance mode might be blocked.

    This issue is resolved in this release.

  • PR 3294624: Skyline Health fails to load for file shares in an inextensible state

    When a file share is in an inextensible state, the current storage policy cannot be fulfilled. The file share might also have other issues, such as exceeding the soft quota with its used capacity. Skyline Health fails to load for objects in this state.

    This issue is resolved in this release.

  • PR 3293199: The file /var/run/log/clusterAgent.stderr on an ESXi host becomes extremely large, filling up the OSDATA partition

    In case of certain rare error conditions, a component within the clusterAgent ESXi service might start periodically writing to /var/run/log/clusterAgent.stderr instead of the regular syslog destination on an ESXi host. Since this is not expected behavior, the stderr file is not rotated or otherwise monitored. If the error conditions persist over time, for example several months, the file can expand to several GB in size, filling up the OSDATA partition. When disk space is exhausted, ESXi services and VMs might lock up.

    This issue is resolved in this release. The fix makes sure that if your environment had the issue, after upgrading to ESXi 8.0 Update 2b or later, the oversized log file is compressed. The file is not automatically deleted, but you can safely manually remove the compressed file.

  • PR 3319724: vSAN ESA VMs with large-sized VMDKs become inaccessible

    In vSAN Express Storage Architecture (ESA) environments of version 8.0 and later, VMDK objects with a large amount of used capacity might overload the object scrubber, causing such objects to become inaccessible.

    This issue is resolved in this release.

  • PR 3294828: The hostd service might intermittently fail due to an invalid sensor data record

    In rare cases, the Baseboard Management Controller (BMC) might provide an invalid sensor data record to the hostd service and cause it to fail. As a result, ESXi hosts become unresponsive or disconnect from vCenter.

    This issue is resolved in this release. The fix implements checks to validate the sensor data record before passing it to the hostd service.

  • PR 3287769: Unexpected failover of vSAN File Service containers

    The EMM monitor can trigger unexpectedly and block the File Service EPiC heartbeat update to the arbitrator host. This problem can cause an unexpected container failover, which interrupts the I/O of SMB file shares.

    This issue is resolved in this release.

  • PR 3293464: The VMX service might fail during migration of virtual machines with vSphere Virtual Volumes if the volume closure on the source takes long

    In some scenarios, such as virtual machines with vSphere Virtual Volumes and Changed Block Tracking (CBT) or Content Based Read Cache (CBRC) enabled, during storage migration, flushing pending I/Os on volume closure at the source might take a long time, for example 10 seconds. If clearing the pending I/Os is not complete by the time the VMX service tries to reach the volumes on the destination host, the service fails.

    This issue is resolved in this release. The fix makes sure that all vSphere Virtual Volumes and disks are closed on the source host before the destination host tries to open them.

  • PR 3308819: You cannot add allowed IP addresses for an ESXi host

    With ESXi 8.0 Update 2, some ESXi firewall rulesets, such as dhcp, are system-owned by default and prevent manual adding of allowed IP addresses to avoid possible break of service. With ESXi 8.0 Update 2b, you can manually add allowed IP addresses to all rulesets, except for nfsClient, nfs41Client, trusted-infrastructure-kmxd, trusted-infrastructure-kmxa, and vsanEncryption.

    This issue is resolved in this release.

  • PR 3308461: When MAC learning is not active, newly added ports on a vSphere Distributed Switch (VDS) might go into blocked state

    If MAC learning on a VDS is not active and you change the MAC address of a virtual machine from the guest OS, the newly added proxy switch port might go into a blocked state due to an L2 violation error, even when the MAC address changes option is set to Accept. The issue occurs only when MAC learning on a VDS is not active.

    This issue is resolved in this release.

  • PR 3332617: vGPU VMs with NVIDIA RTX 5000 Ada or NVIDIA RTX 6000 Ada graphics card fail to start after update to ESXi 8.0 Update 2

    After an update to ESXi 8.0 Update 2, booting of a vGPU VM with an NVIDIA RTX 5000 Ada or NVIDIA RTX 6000 Ada graphics card fails with an error such as vmiop-display unable to reserve vgpu in the logs. In the vSphere Client, you see an error such as Module DevicePowerOnEarly power on failed.

    This issue is resolved in this release.

  • PR 3326624: An ESXi host might fail during upgrade due to an unmap I/O request larger than 2040 MB on a vSAN-backed object

    During an upgrade to ESXi 8.0 Update 2, in rare cases, a guest virtual machine might issue an unmap I/O request larger than 2040 MB on a vSAN-backed object and the request might split in two parts. As a result, the ESXi host receives duplicate metadata entries and might fail with a purple diagnostic screen.

    This issue is resolved in this release.

  • PR 3304472: vSphere vMotion operations might fail due to a rare issue with no namespaces present in the namespacemgr.db file

    When a virtual machine is configured with namespaces and for some reason the namespaces table in the namespacemgr.db file is empty in the ESXi host database, migration of such a VM by using vSphere vMotion might fail. In the vmware.log file, you see errors such as:

    vmx - NamespaceMgrCheckpoint: No valid queue found while restoring the namespace events. The migrate type is 1.

    This issue is resolved in this release.

  • PR 3293470: In the vSphere Client, you see a flat line in the read/write IO latency stats for a vSphere Virtual Volumes datastore

    In the vSphere Client, when you navigate to Monitor > Advanced > Datastore to see the advanced performance metrics of a virtual machine, you might see a flat line in the read/write IO latency stats for a vSphere Virtual Volumes datastore. The issue occurs due to incorrect calculation of the counters for the read/write IO latency as a cumulative average.

    This issue is resolved in this release.

  • PR 3300031: The Network Time Protocol (NTP) peer and loop statistics are not automatically captured in the vm-support bundle

    Prior to vSphere 8.0 Update 2b, unless you manually add the NTP peer and loop statistics to the vm-support bundle, NTP logs are not available for troubleshooting.

    This issue is resolved in this release. The fix adds support for automatically capturing NTP peer and loop statistics from the /var/log/ntp/ directory in the vm-support bundle.

  • PR 3293466: vSphere vMotion operations between ESXi 7.x hosts fail with the error "Destination failed to preopen disks"

    If on an ESXi 7.x host you use an older Virtual Volumes storage provider, also called a VASA provider, it might not allow binding multiple vSphere Virtual Volumes in batches to optimize the performance of migration and power on. As a result, in the vSphere Client you might see the error Destination failed to preopen disks when running vSphere vMotion operations between 7.x hosts.

    ​This issue is resolved in this release.

  • PR 3316967: Changed Block Tracking (CBT) might not work as expected on a hot extended virtual disk

    In vSphere 8.0 Update 2, to optimize the open and close process of virtual disks during hot extension, the disk remains open during hot extend operations. Due to this change, incremental backup of virtual disks with CBT enabled might be incomplete, because the CBT in-memory bitmap does not resize, and CBT cannot record the changes to the extended disk block. As a result, when you try to restore a VM from an incremental backup of virtual disks with CBT, the VM might fail to start.

    This issue is resolved in this release.

  • PR 3317441: Enhanced Network Datapath (EDP) might reserve too much memory on a large vSphere system and cause resource issues

    EDP reserves memory in proportion to the total system memory, up to 1.6% of the total. For a VM with a high memory reservation in a large vSphere system, such a memory buffer might be too big. For example, if the system has 100 TB of memory, the EDP reservation might be 1.6 TB. As a result, you might see resource issues such as not being able to hot-add VNICs to VMs with high memory reservation.

    This issue is resolved in this release. The fix sets the EDP reservation limit at 1% of the total memory and puts a 5 GB cap on a single memory reservation. You can adjust the memory reservation limit by using the VMkernel boot option ensMbufPoolMaxMinMB or by using the following command: esxcli system settings kernel set -s ensMbufPoolMaxMBPerGB -v <value-in-MB>. In either case, you must reboot the ESXi host for the change to take effect.

  • PR 3275379: VBS-enabled Windows VMs might intermittently fail with a blue diagnostic screen on ESXi hosts running on AMD processors.

    Windows virtual machines with virtualization-based security (VBS) enabled might intermittently fail with a blue diagnostic screen on ESXi hosts running on AMD processors. The BSOD has the following signature:

    SECURE_KERNEL_ERROR (18b): The secure kernel has encountered a fatal error.
    Arguments: 
    Arg1: 000000000000018c 
    Arg2: 000000000000100b 
    Arg3: 0000000000000000 
    Arg4: 000000000000409b

    This issue is resolved in this release.

  • PR 3311820: Creation of VMFS volumes on local software emulated 4Kn devices fails

    In some environments, after an update to ESXi 8.0 Update 2, you might not be able to create VMFS volumes on local software emulated 4Kn devices. The issue occurs when a SCSI driver fails to discover and register a 4Kn NVMe device as a 4Kn SWE device. In the vSphere On-disk Metadata Analyzer (VOMA), you see an error such as ON-DISK ERROR: Invalid disk block size 512, should be 4096.

    This issue is resolved in this release.

  • PR 3296047: The vvold service fails with Unable to Allocate Memory error

    If a vSphere API for Storage Awareness (VASA) provider is down for a long time, frequent reconnection attempts from the vvold service might exhaust its memory, because of repetitively creating and destroying session objects. The issue can also occur when an event such as a change in a VASA provider triggers an update to the vvold service. The vvold service restarts within seconds, but if the restart interrupts a running operation of the VASA provider, the operation might fail and affect related virtual machine operations as well. In the backtrace, you see an error such as Unable To Allocate Memory.

    This issue is resolved in this release.

  • PR 3309887: You cannot use host profiles to configure a path selection policy for storage disks

    In some cases, when you try to change or configure the path selection policy of storage disks by using a host profile, the operation fails. In the syslog.log file, you see lines such as:

    Errors: Unable to set SATP configuration: Invalid options provided for SATP VMW_SATP_ALUA

    This issue is resolved in this release.
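
    For reference, you can inspect or set the path selection policy of an individual device directly with ESXCLI, independently of host profiles. A minimal sketch; the device identifier is illustrative:

      # Show the current path selection policy of a device
      esxcli storage nmp device list -d naa.600508b1001c4d49
      # Set the path selection policy for that device, for example Round Robin
      esxcli storage nmp device set -d naa.600508b1001c4d49 -P VMW_PSP_RR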

  • PR 3308844: The hostd service might become unresponsive after a restart due to a rare issue with Inter Process Communication (IPC)

    If an IPC packet has an incorrect payload, for example an incorrect version in the header, a deadlock in the iSCSI daemon might cause the hostd service to become unresponsive after a hostd restart. As a result, you cannot perform any operations on the affected ESXi host and VMs by using the vSphere Client.

    This issue is resolved in this release.

  • PR 3256705: ESXi hosts might fail to boot due to a rare deadlock between open and rename operations on objects with parent-child relationship

    A rename operation locks the source and destination directories based on the object pointer address with the assumption that the parent object has the lower address. In rare cases, the assumption might not be correct and the rename lock can happen for the child first and then the parent, while the open operation locks the parent first and then the child. As a result, a deadlock between the open and rename operations occurs and ESXi hosts fail to boot.

    This issue is resolved in this release.

  • PR 3311989: Linux-based virtual machines randomly stop responding to input from the keyboard and the mouse

    A race condition on Linux-based virtual machines between the interrupt requests from the keyboard and the mouse, such as moving the mouse and typing at the same time, might cause the VM to stop responding to input from the keyboard and the mouse.

    This issue is resolved in this release.

  • PR 3307579: You see virtual machines in environments with Enhanced Data Path enabled randomly losing connectivity

    A rare memory leak might cause VMkernel NICs connected to ESXi host switches that work in the networking stack mode known as Enhanced Data Path or Enhanced Networking Stack to randomly lose connectivity.

    This issue is resolved in this release.

  • PR 3311123: ESXi hosts randomly fail with a purple diagnostic screen and errors such as Spinlock spinout NMI or Panic requested by another PCPU

    A rare race condition between an I/O thread and a Storage I/O Control thread on an ESXi host, waiting for each other to release read/write locks, might cause the host to fail with a purple diagnostic screen. In the panic backtrace you see a call to either PsaStorSchedQUpdateMaxQDepth or PsaStorDirectSubmitOrQueueCommand and the error on the screen is either Spin count exceeded or Panic requested by another PCPU respectively.

    This issue is resolved in this release.

  • PR 3312713: An ESXi host might fail with purple diagnostic screen due to rare issue with the offline consolidation of SEsparse delta disks

    Due to slow storage, the offline consolidation of a large SEsparse delta disk, such as 3 TB or more, might take longer than the default 10-hour timeout. In such cases, a clearing process stops the consolidation task before it completes. As a result, threads trying to access the consolidation task cause a PCPU lockup and lead to a failure of the ESXi host with a purple diagnostic screen with an error such as Spin count exceeded - possible deadlock.

    This issue is resolved in this release.

  • PR 3307435: The hostd service might fail after an update in environments with Virtual Shared Graphics Acceleration (vSGA) and Enhanced vMotion Compatibility (EVC)

    In EVC-enabled clusters with active vSGA to share GPU resources across multiple virtual machines, an update operation, such as VIB installation on an ESXi host, might cause the hostd service to fail and start generating frequent coredumps. The issue occurs while adding the GPU-related features to the ESXi host after exiting maintenance mode.

    This issue is resolved in this release.

  • PR 3303651: In a High Performance Plug-in (HPP) environment, a network outage might trigger host failure

    In HPP environments, which aim to improve the performance of NVMe devices on ESXi hosts, in case of a network outage, you might see two issues:

    1. In case of multi-pathed local SCSI devices in active-active mode claimed by HPP, VM I/Os might fail prematurely instead of being retried on the other path.

    2. In case of local SCSI devices claimed by HPP, in an all paths down (APD) scenario, if a complete storage rescan is triggered from vCenter, the ESXi host might fail with a purple diagnostic screen.

    Both issues occur only with SCSI local devices and when I/Os fail due to lack of network or APD.

    This issue is resolved in this release. The fix makes sure that I/Os fail only in case of a CHECK CONDITION error indicating a change in a SCSI local device.

ESXi-8.0U2b-23305546-no-tools

Profile Name

ESXi-8.0U2b-23305546-no-tools

Build

For build information, see Patches Contained in This Release.

Vendor

VMware by Broadcom, Inc.

Release Date

February 29, 2024

Acceptance Level

Partner Supported

Affected Hardware

N/A

Affected Software

N/A

Affected VIBs

  • VMware_bootbank_vds-vsip_8.0.2-0.30.23305546

  • VMware_bootbank_crx_8.0.2-0.30.23305546

  • VMware_bootbank_esxio-combiner-esxio_8.0.2-0.30.23305546

  • VMware_bootbank_esxio-dvfilter-generic-fastpath_8.0.2-0.30.23305546

  • VMware_bootbank_vsanhealth_8.0.2-0.30.23305546

  • VMware_bootbank_gc-esxio_8.0.2-0.30.23305546

  • VMware_bootbank_clusterstore_8.0.2-0.30.23305546

  • VMware_bootbank_trx_8.0.2-0.30.23305546

  • VMware_bootbank_native-misc-drivers-esxio_8.0.2-0.30.23305546

  • VMware_bootbank_cpu-microcode_8.0.2-0.30.23305546

  • VMware_bootbank_infravisor_8.0.2-0.30.23305546

  • VMware_bootbank_esx-base_8.0.2-0.30.23305546

  • VMware_bootbank_vsan_8.0.2-0.30.23305546

  • VMware_bootbank_bmcal_8.0.2-0.30.23305546

  • VMware_bootbank_esxio_8.0.2-0.30.23305546

  • VMW_bootbank_pensandoatlas_1.46.0.E.28.1.314-2

  • VMware_bootbank_gc_8.0.2-0.30.23305546

  • VMware_bootbank_vdfs_8.0.2-0.30.23305546

  • VMware_bootbank_drivervm-gpu-base_8.0.2-0.30.23305546

  • VMware_bootbank_esxio-combiner_8.0.2-0.30.23305546

  • VMware_bootbank_native-misc-drivers_8.0.2-0.30.23305546

  • VMware_bootbank_esx-dvfilter-generic-fastpath_8.0.2-0.30.23305546

  • VMware_bootbank_esx-xserver_8.0.2-0.30.23305546

  • VMware_bootbank_bmcal-esxio_8.0.2-0.30.23305546

  • VMware_bootbank_esxio-base_8.0.2-0.30.23305546

  • VMware_bootbank_esx-update_8.0.2-0.30.23305546

  • VMware_bootbank_loadesx_8.0.2-0.30.23305546

  • VMware_bootbank_esxio-update_8.0.2-0.30.23305546

  • VMware_bootbank_loadesxio_8.0.2-0.30.23305546

  • VMware_bootbank_vmware-hbrsrv_8.0.2-0.30.23305546

PRs Fixed

3326852, 3326624, 3319724, 3316967, 3307362, 3318781, 3331997, 3317441, 3317938, 3336882, 3317707, 3316767, 3306552, 3210610, 3262965, 3312008, 3304392, 3308819, 3313089, 3313912, 3313572, 3308844, 3298815, 3312006, 3311820, 3308054, 3308132, 3302896, 3308133, 3312713, 3301092, 3296088, 3311524, 3311123, 3307579, 3308347, 3309887, 3308458, 3309849, 3303232, 3311989, 3308023, 3308025, 3301665, 3302759, 3305165, 3304472, 3303651, 3293470, 3303807, 3307435, 3282049, 3293466, 3293464, 3296047, 3300173, 3294828, 3300031, 3295243, 3268113, 3284909, 3256705, 3292627, 3287767, 3289239, 3297899, 3295250, 3287747, 3222955, 3300326, 3292550, 3300705, 3287772, 3294624, 3291105, 3296694, 3291219, 3280469, 3288307, 3287769, 3287762, 3289397, 3293199, 3285514, 3275379, 3288872, 3317203, 3313226, 3309606, 3261182, 3286057, 3312725, 3312593, 3311987, 3311956, 3311691, 3295096, 3295060, 3306346, 3306344, 3306345

Related CVE numbers

N/A

This patch updates the following issues:

  • PR 3302759: Pending I/Os might cause the ESXi host running a Primary VM to fail with a purple diagnostic screen during failover

    During a failover of a Primary VM protected by vSphere Fault Tolerance, if some I/Os between the VM and the ESXi host are still unresolved, the host might intermittently fail with a purple diagnostic screen.

    This issue is resolved in this release. The fix makes sure that no pending I/Os exist at the time an FT session info cleanup runs.

  • PR 3309606: Using the poll() system call on character devices in some drivers might lead to an ESXi host failure with a purple diagnostic screen

    Depending on the implementation, for example in the Integrated Lights Out (ILO) driver of HPE, using the poll() system call on character devices might lead to an ESXi host failure with a purple diagnostic screen. The issue is due to an internal memory leak in some cases where the driver returns an error to the poll() system call.

    This issue is resolved in this release.

  • PR 3291105: While detaching an Enhanced Networking Stack (ENS) port from an ESXi host, the host might fail with a purple diagnostic screen

    During deactivation of ENS in scenarios such as an upgrade or a VNIC configuration change, while detaching an ENS port from an ESXi host with active ENS, the port might be removed from the port list and not be available for further reference, which might cause an invalid memory access error. As a result, the ESXi host fails with a purple diagnostic screen.

    This issue is resolved in this release.

  • PR 3297899: vSAN file server continues restarting

    In case all AD servers are inaccessible, vSAN Skyline Health might show a file server that is always in restarting state, with repeated failovers from one host to another. If this occurs, all file shares on the file server, including NFS shares with securityType=AUTH_SYS, become inaccessible.

    This issue is resolved in this release.

  • PR 3288307: Cannot upgrade vSAN File Service or enter maintenance mode

    In rare cases, if the hostd service reboots, upgrade of the vSAN File Service or entering maintenance mode might be blocked.

    This issue is resolved in this release.

  • PR 3294624: Skyline Health fails to load for file shares in an inextensible state

    When a file share is in an inextensible state, the current storage policy cannot be fulfilled. The file share might also have other issues, such as exceeding the soft quota with its used capacity. Skyline Health fails to load for objects in this state.

    This issue is resolved in this release.

  • PR 3293199: The file /var/run/log/clusterAgent.stderr on an ESXi host becomes extremely large, filling up the OSDATA partition

    In case of certain rare error conditions, a component within the clusterAgent ESXi service might start periodically writing to /var/run/log/clusterAgent.stderr instead of the regular syslog destination on an ESXi host. Since this is not expected behavior, the stderr file is not rotated or otherwise monitored. If the error conditions persist over time, for example several months, the file can expand to several GB in size, filling up the OSDATA partition. When disk space is exhausted, ESXi services and VMs might lock up.

    This issue is resolved in this release. The fix makes sure that if your environment had the issue, after upgrading to ESXi 8.0 Update 2b or later, the oversized log file is compressed. The file is not automatically deleted, but you can safely manually remove the compressed file.

  • PR 3319724: vSAN ESA VMs with large-sized VMDKs become inaccessible

    In vSAN Express Storage Architecture (ESA) environments of version 8.0 and later, VMDK objects with a large amount of used capacity might overload the object scrubber, causing such objects to become inaccessible.

    This issue is resolved in this release.

  • PR 3294828: The hostd service might intermittently fail due to an invalid sensor data record

    In rare cases, the Baseboard Management Controller (BMC) might provide an invalid sensor data record to the hostd service and cause it to fail. As a result, ESXi hosts become unresponsive or disconnect from vCenter.

    This issue is resolved in this release. The fix implements checks to validate the sensor data record before passing it to the hostd service.

  • PR 3287769: Unexpected failover of vSAN File Service containers

    The EMM monitor can trigger unexpectedly and block the File Service EPiC heartbeat update to the arbitrator host. This problem can cause an unexpected container failover, which interrupts the I/O of SMB file shares.

    This issue is resolved in this release.

  • PR 3293464: The VMX service might fail during migration of virtual machines with vSphere Virtual Volumes if the volume closure on the source takes long

    In some scenarios, such as virtual machines with vSphere Virtual Volumes and Changed Block Tracking (CBT) or Content Based Read Cache (CBRC) enabled, during storage migration, flushing pending I/Os on volume closure at the source might take a long time, for example 10 seconds. If clearing the pending I/Os does not complete by the time the VMX service tries to reach the volumes on the destination host, the service fails.

    This issue is resolved in this release. The fix makes sure that all vSphere Virtual Volumes and disks are closed on the source host before the destination host tries to open them.

  • PR 3308819: You cannot add allowed IP addresses for an ESXi host

    With ESXi 8.0 Update 2, some ESXi firewall rulesets, such as dhcp, are system-owned by default and prevent manual adding of allowed IP addresses to avoid possible break of service. With ESXi 8.0 Update 2b, you can manually add allowed IP addresses to all rulesets, except for nfsClient, nfs41Client, trusted-infrastructure-kmxd, trusted-infrastructure-kmxa, and vsanEncryption.

    This issue is resolved in this release.
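
    For example, with this change you can restrict a previously system-owned ruleset such as dhcp to an explicit list of allowed networks. A minimal sketch; the ruleset and subnet are illustrative and must match your environment:

      # Stop allowing all IP addresses on the ruleset
      esxcli network firewall ruleset set -r dhcp --allowed-all false
      # Add an allowed subnet to the ruleset
      esxcli network firewall ruleset allowedip add -r dhcp -i 192.168.10.0/24
      # Verify the allowed IP list
      esxcli network firewall ruleset allowedip list -r dhcp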

  • PR 3308461: When MAC learning is not active, newly added ports on a vSphere Distributed Switch (VDS) might go into blocked state

    If MAC learning on a VDS is not active, when you change the MAC address of a virtual machine from the guest OS, the newly added proxy switch port might go into a blocked state due to an L2 violation error, even when the MAC address changes option is set to Accept. The issue occurs only when MAC learning on a VDS is not active.

    This issue is resolved in this release.

  • PR 3332617: vGPU VMs with NVIDIA RTX 5000 Ada or NVIDIA RTX 6000 Ada graphics card fail to start after update to ESXi 8.0 Update 2

    After an update to ESXi 8.0 Update 2, booting of a vGPU VM with NVIDIA RTX 5000 Ada or NVIDIA RTX 6000 Ada graphics card fails with an error such as vmiop-display unable to reserve vgpu in the logs. In the vSphere Client, you see an error such as Module DevicePowerOnEarly power on failed.

    This issue is resolved in this release.

  • PR 3326624: An ESXi host might fail during upgrade due to an unmap I/O request larger than 2040 MB on a vSAN-backed object

    During an upgrade to ESXi 8.0 Update 2, in rare cases, a guest virtual machine might issue an unmap I/O request larger than 2040 MB on a vSAN-backed object and the request might split in two parts. As a result, the ESXi host receives duplicate metadata entries and might fail with a purple diagnostic screen.

    This issue is resolved in this release.

  • PR 3304472: vSphere vMotion operations might fail due to a rare issue with no namespaces present in the namespacemgr.db file

    When a virtual machine is configured with namespaces and for some reason the namespaces table in the namespacemgr.db file is empty in the ESXi host database, migration of such a VM by using vSphere vMotion might fail. In the vmware.log file, you see errors such as:

    vmx - NamespaceMgrCheckpoint: No valid queue found while restoring the namespace events. The migrate type is 1.

    This issue is resolved in this release.

  • PR 3293470: In the vSphere Client, you see a flat line in the read/write IO latency stats for a vSphere Virtual Volumes datastore

    In the vSphere Client, when you navigate to Monitor > Advanced > Datastore to see the advanced performance metrics of a virtual machine, you might see a flat line in the read/write IO latency stats for a vSphere Virtual Volumes datastore. The issue occurs due to incorrect calculation of the counters for the read/write IO latency as a cumulative average.

    This issue is resolved in this release.

  • PR 3300031: The Network Time Protocol (NTP) peer and loop statistics are not automatically captured in the vm-support bundle

    Prior to vSphere 8.0 Update 2b, unless you manually add the NTP peer and loop statistics to the vm-support bundle, NTP logs are not available for troubleshooting.

    This issue is resolved in this release. The fix adds support for automatically capturing NTP peer and loop statistics from the /var/log/ntp/ directory in the vm-support bundle.

  • PR 3293466: vSphere vMotion operations between ESXi 7.x hosts fail with the error "Destination failed to preopen disks"

    If on an ESXi 7.x host you use an older Virtual Volumes storage provider, also called a VASA provider, it might not allow binding multiple vSphere Virtual Volumes in batches to optimize the performance of migration and power on. As a result, in the vSphere Client you might see the error Destination failed to preopen disks when running vSphere vMotion operations between 7.x hosts.

    This issue is resolved in this release.

  • PR 3316967: Changed Block Tracking (CBT) might not work as expected on a hot extended virtual disk

    In vSphere 8.0 Update 2, to optimize the open and close process of virtual disks during hot extension, the disk remains open during hot extend operations. Due to this change, incremental backup of virtual disks with CBT enabled might be incomplete, because the CBT in-memory bitmap does not resize, and CBT cannot record the changes to the extended disk block. As a result, when you try to restore a VM from an incremental backup of virtual disks with CBT, the VM might fail to start.

    This issue is resolved in this release.

  • PR 3317441: Enhanced Network Datapath (EDP) might reserve too much memory on a large vSphere system and cause resource issues

    The EDP memory reservation scales linearly with the total host memory, reserving up to 1.6% of it. For a VM with a high memory reservation in a large vSphere system, such a memory buffer might be too big. For example, if the system has 100 TB of memory, the EDP reservation might be 1.6 TB. As a result, you might see resource issues such as not being able to hot-add VNICs to VMs with high memory reservation.

    This issue is resolved in this release. The fix sets the EDP reservation limit at 1% of the total memory and puts a 5 GB cap on a single memory reservation. You can adjust the memory reservation limit by using the VMkernel boot option ensMbufPoolMaxMinMB or by using the following command: esxcli system settings kernel set -s ensMbufPoolMaxMBPerGB -v <value-in-MB>. In any case, you must reboot the ESXi host to enable the change.

  • PR 3275379: VBS-enabled Windows VMs might intermittently fail with a blue diagnostic screen on ESXi hosts running on AMD processors

    Windows virtual machines with virtualization-based security (VBS) enabled might intermittently fail with a blue diagnostic screen on ESXi hosts running on AMD processors. The BSOD has the following signature:

    SECURE_KERNEL_ERROR (18b): The secure kernel has encountered a fatal error.
    Arguments: 
    Arg1: 000000000000018c 
    Arg2: 000000000000100b 
    Arg3: 0000000000000000 
    Arg4: 000000000000409b

    This issue is resolved in this release.

  • PR 3311820: Creation of VMFS volumes on local software emulated 4Kn devices fails

    In some environments, after an update to ESXi 8.0 Update 2, you might not be able to create VMFS volumes on local software emulated 4Kn devices. The issue occurs when a SCSI driver fails to discover and register a 4Kn NVMe device as a 4Kn SWE device. In the vSphere On-disk Metadata Analyzer (VOMA), you see an error such as ON-DISK ERROR: Invalid disk block size 512, should be 4096.

    This issue is resolved in this release.

  • PR 3296047: The vvold service fails with Unable to Allocate Memory error

    If a vSphere API for Storage Awareness (VASA) provider is down for a long time, frequent reconnection attempts from the vvold service might exhaust its memory, because of repetitively creating and destroying session objects. The issue can also occur when an event such as a change in a VASA provider triggers an update to the vvold service. The vvold service restarts within seconds, but if the restart interrupts a running operation of the VASA provider, the operation might fail and affect related virtual machine operations as well. In the backtrace, you see an error such as Unable To Allocate Memory.

    This issue is resolved in this release.

  • PR 3309887: You cannot use host profiles to configure a path selection policy for storage disks

    In some cases, when you try to change or configure the path selection policy of storage disks by using a host profile, the operation fails. In the syslog.log file, you see lines such as:

    Errors: Unable to set SATP configuration: Invalid options provided for SATP VMW_SATP_ALUA

    This issue is resolved in this release.

  • PR 3308844: The hostd service might become unresponsive after a restart due to a rare issue with Inter Process Communication (IPC)

    If an IPC packet has an incorrect payload, for example an incorrect version in the header, a deadlock in the iSCSI daemon might cause the hostd service to become unresponsive after a hostd restart. As a result, you cannot perform any operations on the affected ESXi host and VMs by using the vSphere Client.

    This issue is resolved in this release.

  • PR 3256705: ESXi hosts might fail to boot due to a rare deadlock between open and rename operations on objects with parent-child relationship

    A rename operation locks the source and destination directories based on the object pointer address with the assumption that the parent object has the lower address. In rare cases, the assumption might not be correct and the rename lock can happen for the child first and then the parent, while the open operation locks the parent first and then the child. As a result, a deadlock between the open and rename operations occurs and ESXi hosts fail to boot.

    This issue is resolved in this release.

  • PR 3311989: Linux-based virtual machines randomly stop responding to input from the keyboard and the mouse

    A race condition on Linux-based virtual machines between the interrupt requests from the keyboard and the mouse, such as moving the mouse and typing at the same time, might cause the VM to stop responding to input from the keyboard and the mouse.

    This issue is resolved in this release.

  • PR 3307579: You see virtual machines in environments with Enhanced Data Path enabled randomly losing connectivity

    A rare memory leak might cause VMkernel NICs connected to ESXi host switches that work in the networking stack mode known as Enhanced Data Path or Enhanced Networking Stack to randomly lose connectivity.

    This issue is resolved in this release.

  • PR 3311123: ESXi hosts randomly fail with a purple diagnostic screen and errors such as Spinlock spinout NMI or Panic requested by another PCPU

    A rare race condition between an I/O thread and a Storage I/O Control thread on an ESXi host, waiting for each other to release read/write locks, might cause the host to fail with a purple diagnostic screen. In the panic backtrace you see a call to either PsaStorSchedQUpdateMaxQDepth or PsaStorDirectSubmitOrQueueCommand and the error on the screen is either Spin count exceeded or Panic requested by another PCPU respectively.

    This issue is resolved in this release.

  • PR 3312713: An ESXi host might fail with purple diagnostic screen due to rare issue with the offline consolidation of SEsparse delta disks

    Due to slow storage, the offline consolidation of a large SEsparse delta disk, such as 3 TB or more, might take longer than the default 10-hour timeout. In such cases, a clearing process stops the consolidation task before it completes. As a result, threads trying to access the consolidation task cause a PCPU lockup and lead to a failure of the ESXi host with a purple diagnostic screen with an error such as Spin count exceeded - possible deadlock.

    This issue is resolved in this release.

  • PR 3307435: The hostd service might fail after an update in environments with Virtual Shared Graphics Acceleration (vSGA) and Enhanced vMotion Compatibility (EVC)

    In EVC-enabled clusters with active vSGA to share GPU resources across multiple virtual machines, an update operation, such as VIB installation on an ESXi host, might cause the hostd service to fail and start generating frequent coredumps. The issue occurs while adding the GPU-related features to the ESXi host after exiting maintenance mode.

    This issue is resolved in this release.

  • PR 3303651: In a High Performance Plug-in (HPP) environment, a network outage might trigger host failure

    In HPP environments, which aim to improve the performance of NVMe devices on ESXi hosts, in case of a network outage, you might see two issues:

    1. In case of multi-pathed local SCSI devices in active-active mode claimed by HPP, VM I/Os might fail prematurely instead of being retried on the other path.

    2. In case of local SCSI devices claimed by HPP, in an all paths down (APD) scenario, if a complete storage rescan is triggered from vCenter, the ESXi host might fail with a purple diagnostic screen.

    Both issues occur only with SCSI local devices and when I/Os fail due to lack of network or APD.

    This issue is resolved in this release. The fix makes sure that I/Os fail only in case of a CHECK CONDITION error indicating a change in a SCSI local device.

ESXi-8.0U2sb-23305545-standard

Profile Name

ESXi-8.0U2sb-23305545-standard

Build

For build information, see Patches Contained in This Release.

Vendor

VMware by Broadcom, Inc.

Release Date

February 29, 2024

Acceptance Level

Partner Supported

Affected Hardware

N/A

Affected Software

N/A

Affected VIBs

  • VMware_bootbank_vsan_8.0.2-0.25.23305545

  • VMware_bootbank_esxio-combiner_8.0.2-0.25.23305545

  • VMware_bootbank_infravisor_8.0.2-0.25.23305545

  • VMware_bootbank_clusterstore_8.0.2-0.25.23305545

  • VMware_bootbank_native-misc-drivers_8.0.2-0.25.23305545

  • VMW_bootbank_pensandoatlas_1.46.0.E.28.1.314

  • VMware_bootbank_crx_8.0.2-0.25.23305545

  • VMware_bootbank_esxio-combiner-esxio_8.0.2-0.25.23305545

  • VMware_bootbank_esxio-dvfilter-generic-fastpath_8.0.2-0.25.23305545

  • VMware_bootbank_esx-xserver_8.0.2-0.25.23305545

  • VMware_bootbank_bmcal-esxio_8.0.2-0.25.23305545

  • VMware_bootbank_trx_8.0.2-0.25.23305545

  • VMware_bootbank_esxio_8.0.2-0.25.23305545

  • VMware_bootbank_esxio-base_8.0.2-0.25.23305545

  • VMware_bootbank_gc-esxio_8.0.2-0.25.23305545

  • VMware_bootbank_esx-dvfilter-generic-fastpath_8.0.2-0.25.23305545

  • VMware_bootbank_bmcal_8.0.2-0.25.23305545

  • VMware_bootbank_gc_8.0.2-0.25.23305545

  • VMware_bootbank_vsanhealth_8.0.2-0.25.23305545

  • VMware_bootbank_vdfs_8.0.2-0.25.23305545

  • VMware_bootbank_native-misc-drivers-esxio_8.0.2-0.25.23305545

  • VMware_bootbank_drivervm-gpu-base_8.0.2-0.25.23305545

  • VMware_bootbank_vds-vsip_8.0.2-0.25.23305545

  • VMware_bootbank_esx-base_8.0.2-0.25.23305545

  • VMware_bootbank_loadesx_8.0.2-0.25.23305545

  • VMware_bootbank_esx-update_8.0.2-0.25.23305545

  • VMware_bootbank_esxio-update_8.0.2-0.25.23305545

  • VMware_bootbank_loadesxio_8.0.2-0.25.23305545

  • VMware_locker_tools-light_12.3.5.22544099-23305545

PRs Fixed

3333007, 3328623, 3317046, 3315810, 3309040, 3341832, 3309039, 3293189, 3309041, 3291298, 3304476, 3260854, 3291282, 3291430, 3302555, 3301475, 3291280, 3291297, 3291275, 3291295, 3304809

Related CVE numbers

N/A

This patch updates the following issues:

  • PR 3316767: VM I/Os might prematurely fail or ESXi hosts crash while HPP claims a local multipath SCSI device

    Due to a rare issue with the failover handling in the High Performance Plug-in (HPP) module, if HPP claims a local multipath SCSI device in active-active mode, you might see the following issues:

    • In case of connection loss or a NO CONNECT error, VM I/Os can fail prematurely instead of retrying on another path.

    • In case of all paths down (APD) state, if a complete storage rescan is triggered from vCenter or the ESXi host, the host might fail with a purple diagnostic screen.

    This issue is resolved in this release.

  • VMware Tools Bundling Changes in ESXi 8.0 Update 2b

      • The following VMware Tools ISO images are bundled with ESXi 8.0 Update 2b: 

        • windows.iso: VMware Tools 12.3.5 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.

        • linux.iso: VMware Tools 10.3.26 ISO image for Linux OS with glibc 2.11 or later.

        The following VMware Tools ISO images are available for download:

        • VMware Tools 11.0.6:

          • windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).

        • VMware Tools 10.0.12:

          • winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003.

          • linuxPreGLibc25.iso: supports Linux guest operating systems earlier than Red Hat Enterprise Linux (RHEL) 5, SUSE Linux Enterprise Server (SLES) 11, Ubuntu 7.04, and other distributions with glibc version earlier than 2.5.

        • solaris.iso: VMware Tools image 10.3.10 for Solaris.

        • darwin.iso: Supports Mac OS X versions 10.11 and later.

        Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:

  • ESXi 8.0 Update 2b provides the following security updates:

    • OpenSSL is updated to version 3.0.11.

    • The Python library is updated to version 3.8.18.

    • The Envoy proxy is updated to version 1.25.11.

    • The cURL library is updated to version 8.3.0.

    • The c-ares C library is updated to version 1.19.1.

    • The Go-lang library is updated to version 1.19.11.

    • The pyOpenSSL library is updated to version 2.0.

    • The Libxslt library is updated to version 1.1.28.

    • The libarchive library is updated to version 3.5.1.

  • ESXi 8.0 Update 2b includes the following Intel microcode:

    Code Name | FMS | Plt ID | MCU Rev | MCU Date | Brand Names
    --- | --- | --- | --- | --- | ---
    Nehalem EP | 0x106a5 (06/1a/5) | 0x03 | 0x1d | 5/11/2018 | Intel Xeon 35xx Series; Intel Xeon 55xx Series
    Clarkdale | 0x20652 (06/25/2) | 0x12 | 0x11 | 5/8/2018 | Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series
    Arrandale | 0x20655 (06/25/5) | 0x92 | 0x7 | 4/23/2018 | Intel Core i7-620LE Processor
    Sandy Bridge DT | 0x206a7 (06/2a/7) | 0x12 | 0x2f | 2/17/2019 | Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series
    Westmere EP | 0x206c2 (06/2c/2) | 0x03 | 0x1f | 5/8/2018 | Intel Xeon 56xx Series; Intel Xeon 36xx Series
    Sandy Bridge EP | 0x206d6 (06/2d/6) | 0x6d | 0x621 | 3/4/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
    Sandy Bridge EP | 0x206d7 (06/2d/7) | 0x6d | 0x71a | 3/24/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
    Nehalem EX | 0x206e6 (06/2e/6) | 0x04 | 0xd | 5/15/2018 | Intel Xeon 65xx Series; Intel Xeon 75xx Series
    Westmere EX | 0x206f2 (06/2f/2) | 0x05 | 0x3b | 5/16/2018 | Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series
    Ivy Bridge DT | 0x306a9 (06/3a/9) | 0x12 | 0x21 | 2/13/2019 | Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C
    Haswell DT | 0x306c3 (06/3c/3) | 0x32 | 0x28 | 11/12/2019 | Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series
    Ivy Bridge EP | 0x306e4 (06/3e/4) | 0xed | 0x42e | 3/14/2019 | Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series
    Ivy Bridge EX | 0x306e7 (06/3e/7) | 0xed | 0x715 | 3/14/2019 | Intel Xeon E7-8800/4800/2800-v2 Series
    Haswell EP | 0x306f2 (06/3f/2) | 0x6f | 0x49 | 8/11/2021 | Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series
    Haswell EX | 0x306f4 (06/3f/4) | 0x80 | 0x1a | 5/24/2021 | Intel Xeon E7-8800/4800-v3 Series
    Broadwell H | 0x40671 (06/47/1) | 0x22 | 0x22 | 11/12/2019 | Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series
    Avoton | 0x406d8 (06/4d/8) | 0x01 | 0x12d | 9/16/2019 | Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series
    Broadwell EP/EX | 0x406f1 (06/4f/1) | 0xef | 0xb000040 | 5/19/2021 | Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series
    Skylake SP | 0x50654 (06/55/4) | 0xb7 | 0x2007006 | 3/6/2023 | Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series
    Cascade Lake B-0 | 0x50656 (06/55/6) | 0xbf | 0x4003604 | 3/17/2023 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
    Cascade Lake | 0x50657 (06/55/7) | 0xbf | 0x5003604 | 3/17/2023 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
    Cooper Lake | 0x5065b (06/55/b) | 0xbf | 0x7002703 | 3/21/2023 | Intel Xeon Platinum 8300 Series; Intel Xeon Gold 6300/5300
    Broadwell DE | 0x50662 (06/56/2) | 0x10 | 0x1c | 6/17/2019 | Intel Xeon D-1500 Series
    Broadwell DE | 0x50663 (06/56/3) | 0x10 | 0x700001c | 6/12/2021 | Intel Xeon D-1500 Series
    Broadwell DE | 0x50664 (06/56/4) | 0x10 | 0xf00001a | 6/12/2021 | Intel Xeon D-1500 Series
    Broadwell NS | 0x50665 (06/56/5) | 0x10 | 0xe000014 | 9/18/2021 | Intel Xeon D-1600 Series
    Skylake H/S | 0x506e3 (06/5e/3) | 0x36 | 0xf0 | 11/12/2021 | Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series
    Denverton | 0x506f1 (06/5f/1) | 0x01 | 0x38 | 12/2/2021 | Intel Atom C3000 Series
    Ice Lake SP | 0x606a6 (06/6a/6) | 0x87 | 0xd0003b9 | 9/1/2023 | Intel Xeon Platinum 8300 Series; Intel Xeon Gold 6300/5300 Series; Intel Xeon Silver 4300 Series
    Ice Lake D | 0x606c1 (06/6c/1) | 0x10 | 0x1000268 | 9/8/2023 | Intel Xeon D-2700 Series; Intel Xeon D-1700 Series
    Snow Ridge | 0x80665 (06/86/5) | 0x01 | 0x4c000023 | 2/22/2023 | Intel Atom P5000 Series
    Snow Ridge | 0x80667 (06/86/7) | 0x01 | 0x4c000023 | 2/22/2023 | Intel Atom P5000 Series
    Tiger Lake U | 0x806c1 (06/8c/1) | 0x80 | 0xb4 | 9/7/2023 | Intel Core i3/i5/i7-1100 Series
    Tiger Lake U Refresh | 0x806c2 (06/8c/2) | 0xc2 | 0x34 | 9/7/2023 | Intel Core i3/i5/i7-1100 Series
    Tiger Lake H | 0x806d1 (06/8d/1) | 0xc2 | 0x4e | 9/7/2023 | Intel Xeon W-11000E Series
    Sapphire Rapids SP HBM | 0x806f8 (06/8f/8) | 0x10 | 0x2c000321 | 8/21/2023 | Intel Xeon Max 9400 Series
    Sapphire Rapids SP | 0x806f8 (06/8f/8) | 0x87 | 0x2b000541 | 8/21/2023 | Intel Xeon Platinum 8400 Series; Intel Xeon Gold 6400/5400 Series; Intel Xeon Silver 4400 Series; Intel Xeon Bronze 3400 Series
    Kaby Lake H/S/X | 0x906e9 (06/9e/9) | 0x2a | 0xf4 | 2/23/2023 | Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series
    Coffee Lake | 0x906ea (06/9e/a) | 0x22 | 0xf4 | 2/23/2023 | Intel Xeon E-2100 Series; Intel Xeon E-2200 Series (4 or 6 core)
    Coffee Lake | 0x906eb (06/9e/b) | 0x02 | 0xf4 | 2/23/2023 | Intel Xeon E-2100 Series
    Coffee Lake | 0x906ec (06/9e/c) | 0x22 | 0xf4 | 2/23/2023 | Intel Xeon E-2100 Series
    Coffee Lake Refresh | 0x906ed (06/9e/d) | 0x22 | 0xfa | 2/27/2023 | Intel Xeon E-2200 Series (8 core)
    Rocket Lake S | 0xa0671 (06/a7/1) | 0x02 | 0x5d | 9/3/2023 | Intel Xeon E-2300 Series
    Raptor Lake E/HX/S | 0xb0671 (06/b7/1) | 0x32 | 0x11e | 8/31/2023 | Intel Xeon E-2400 Series

ESXi-8.0U2sb-23305545-no-tools

Profile Name

ESXi-8.0U2sb-23305545-no-tools

Build

For build information, see Patches Contained in This Release.

Vendor

VMware by Broadcom, Inc.

Release Date

February 29, 2024

Acceptance Level

Partner Supported

Affected Hardware

N/A

Affected Software

N/A

Affected VIBs

  • VMware_bootbank_vsan_8.0.2-0.25.23305545

  • VMware_bootbank_esxio-combiner_8.0.2-0.25.23305545

  • VMware_bootbank_infravisor_8.0.2-0.25.23305545

  • VMware_bootbank_clusterstore_8.0.2-0.25.23305545

  • VMware_bootbank_native-misc-drivers_8.0.2-0.25.23305545

  • VMW_bootbank_pensandoatlas_1.46.0.E.28.1.314

  • VMware_bootbank_crx_8.0.2-0.25.23305545

  • VMware_bootbank_esxio-combiner-esxio_8.0.2-0.25.23305545

  • VMware_bootbank_esxio-dvfilter-generic-fastpath_8.0.2-0.25.23305545

  • VMware_bootbank_esx-xserver_8.0.2-0.25.23305545

  • VMware_bootbank_bmcal-esxio_8.0.2-0.25.23305545

  • VMware_bootbank_trx_8.0.2-0.25.23305545

  • VMware_bootbank_esxio_8.0.2-0.25.23305545

  • VMware_bootbank_esxio-base_8.0.2-0.25.23305545

  • VMware_bootbank_gc-esxio_8.0.2-0.25.23305545

  • VMware_bootbank_esx-dvfilter-generic-fastpath_8.0.2-0.25.23305545

  • VMware_bootbank_bmcal_8.0.2-0.25.23305545

  • VMware_bootbank_gc_8.0.2-0.25.23305545

  • VMware_bootbank_vsanhealth_8.0.2-0.25.23305545

  • VMware_bootbank_vdfs_8.0.2-0.25.23305545

  • VMware_bootbank_native-misc-drivers-esxio_8.0.2-0.25.23305545

  • VMware_bootbank_drivervm-gpu-base_8.0.2-0.25.23305545

  • VMware_bootbank_vds-vsip_8.0.2-0.25.23305545

  • VMware_bootbank_esx-base_8.0.2-0.25.23305545

  • VMware_bootbank_loadesx_8.0.2-0.25.23305545

  • VMware_bootbank_esx-update_8.0.2-0.25.23305545

  • VMware_bootbank_esxio-update_8.0.2-0.25.23305545

  • VMware_bootbank_loadesxio_8.0.2-0.25.23305545

PRs Fixed

3333007, 3328623, 3317046, 3315810, 3309040, 3341832, 3309039, 3293189, 3309041, 3291298, 3304476, 3260854, 3291282, 3291430, 3302555, 3301475, 3291280, 3291297, 3291275, 3291295

Related CVE numbers

N/A

This patch updates the following issues:

  • ESXi 8.0 Update 2b provides the following security updates:

    • OpenSSL is updated to version 3.0.11.

    • The Python library is updated to version 3.8.18.

    • The Envoy proxy is updated to version 1.25.11.

    • The cURL library is updated to version 8.3.0.

    • The c-ares C library is updated to version 1.19.1.

    • The Go-lang library is updated to version 1.19.11.

    • The pyOpenSSL library is updated to version 2.0.

    • The Libxslt library is updated to version 1.1.28.

    • The libarchive library is updated to version 3.5.1.

  • ESXi 8.0 Update 2b includes the following Intel microcode:

    Code Name | FMS | Plt ID | MCU Rev | MCU Date | Brand Names
    --- | --- | --- | --- | --- | ---
    Nehalem EP | 0x106a5 (06/1a/5) | 0x03 | 0x1d | 5/11/2018 | Intel Xeon 35xx Series; Intel Xeon 55xx Series
    Clarkdale | 0x20652 (06/25/2) | 0x12 | 0x11 | 5/8/2018 | Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series
    Arrandale | 0x20655 (06/25/5) | 0x92 | 0x7 | 4/23/2018 | Intel Core i7-620LE Processor
    Sandy Bridge DT | 0x206a7 (06/2a/7) | 0x12 | 0x2f | 2/17/2019 | Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series
    Westmere EP | 0x206c2 (06/2c/2) | 0x03 | 0x1f | 5/8/2018 | Intel Xeon 56xx Series; Intel Xeon 36xx Series
    Sandy Bridge EP | 0x206d6 (06/2d/6) | 0x6d | 0x621 | 3/4/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
    Sandy Bridge EP | 0x206d7 (06/2d/7) | 0x6d | 0x71a | 3/24/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
    Nehalem EX | 0x206e6 (06/2e/6) | 0x04 | 0xd | 5/15/2018 | Intel Xeon 65xx Series; Intel Xeon 75xx Series
    Westmere EX | 0x206f2 (06/2f/2) | 0x05 | 0x3b | 5/16/2018 | Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series
    Ivy Bridge DT | 0x306a9 (06/3a/9) | 0x12 | 0x21 | 2/13/2019 | Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C
    Haswell DT | 0x306c3 (06/3c/3) | 0x32 | 0x28 | 11/12/2019 | Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series
    Ivy Bridge EP | 0x306e4 (06/3e/4) | 0xed | 0x42e | 3/14/2019 | Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series
    Ivy Bridge EX | 0x306e7 (06/3e/7) | 0xed | 0x715 | 3/14/2019 | Intel Xeon E7-8800/4800/2800-v2 Series
    Haswell EP | 0x306f2 (06/3f/2) | 0x6f | 0x49 | 8/11/2021 | Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series
    Haswell EX | 0x306f4 (06/3f/4) | 0x80 | 0x1a | 5/24/2021 | Intel Xeon E7-8800/4800-v3 Series
    Broadwell H | 0x40671 (06/47/1) | 0x22 | 0x22 | 11/12/2019 | Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series
    Avoton | 0x406d8 (06/4d/8) | 0x01 | 0x12d | 9/16/2019 | Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series
    Broadwell EP/EX | 0x406f1 (06/4f/1) | 0xef | 0xb000040 | 5/19/2021 | Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series
    Skylake SP | 0x50654 (06/55/4) | 0xb7 | 0x2007006 | 3/6/2023 | Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series
    Cascade Lake B-0 | 0x50656 (06/55/6) | 0xbf | 0x4003604 | 3/17/2023 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
    Cascade Lake | 0x50657 (06/55/7) | 0xbf | 0x5003604 | 3/17/2023 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
    Cooper Lake | 0x5065b (06/55/b) | 0xbf | 0x7002703 | 3/21/2023 | Intel Xeon Platinum 8300 Series; Intel Xeon Gold 6300/5300
    Broadwell DE | 0x50662 (06/56/2) | 0x10 | 0x1c | 6/17/2019 | Intel Xeon D-1500 Series
    Broadwell DE | 0x50663 (06/56/3) | 0x10 | 0x700001c | 6/12/2021 | Intel Xeon D-1500 Series
    Broadwell DE | 0x50664 (06/56/4) | 0x10 | 0xf00001a | 6/12/2021 | Intel Xeon D-1500 Series
    Broadwell NS | 0x50665 (06/56/5) | 0x10 | 0xe000014 | 9/18/2021 | Intel Xeon D-1600 Series
    Skylake H/S | 0x506e3 (06/5e/3) | 0x36 | 0xf0 | 11/12/2021 | Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series
    Denverton | 0x506f1 (06/5f/1) | 0x01 | 0x38 | 12/2/2021 | Intel Atom C3000 Series
    Ice Lake SP | 0x606a6 (06/6a/6) | 0x87 | 0xd0003b9 | 9/1/2023 | Intel Xeon Platinum 8300 Series; Intel Xeon Gold 6300/5300 Series; Intel Xeon Silver 4300 Series
    Ice Lake D | 0x606c1 (06/6c/1) | 0x10 | 0x1000268 | 9/8/2023 | Intel Xeon D-2700 Series; Intel Xeon D-1700 Series
    Snow Ridge | 0x80665 (06/86/5) | 0x01 | 0x4c000023 | 2/22/2023 | Intel Atom P5000 Series
    Snow Ridge | 0x80667 (06/86/7) | 0x01 | 0x4c000023 | 2/22/2023 | Intel Atom P5000 Series
    Tiger Lake U | 0x806c1 (06/8c/1) | 0x80 | 0xb4 | 9/7/2023 | Intel Core i3/i5/i7-1100 Series
    Tiger Lake U Refresh | 0x806c2 (06/8c/2) | 0xc2 | 0x34 | 9/7/2023 | Intel Core i3/i5/i7-1100 Series
    Tiger Lake H | 0x806d1 (06/8d/1) | 0xc2 | 0x4e | 9/7/2023 | Intel Xeon W-11000E Series
    Sapphire Rapids SP HBM | 0x806f8 (06/8f/8) | 0x10 | 0x2c000321 | 8/21/2023 | Intel Xeon Max 9400 Series
    Sapphire Rapids SP | 0x806f8 (06/8f/8) | 0x87 | 0x2b000541 | 8/21/2023 | Intel Xeon Platinum 8400 Series; Intel Xeon Gold 6400/5400 Series; Intel Xeon Silver 4400 Series; Intel Xeon Bronze 3400 Series
    Kaby Lake H/S/X | 0x906e9 (06/9e/9) | 0x2a | 0xf4 | 2/23/2023 | Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series
    Coffee Lake | 0x906ea (06/9e/a) | 0x22 | 0xf4 | 2/23/2023 | Intel Xeon E-2100 Series; Intel Xeon E-2200 Series (4 or 6 core)
    Coffee Lake | 0x906eb (06/9e/b) | 0x02 | 0xf4 | 2/23/2023 | Intel Xeon E-2100 Series
    Coffee Lake | 0x906ec (06/9e/c) | 0x22 | 0xf4 | 2/23/2023 | Intel Xeon E-2100 Series
    Coffee Lake Refresh | 0x906ed (06/9e/d) | 0x22 | 0xfa | 2/27/2023 | Intel Xeon E-2200 Series (8 core)
    Rocket Lake S | 0xa0671 (06/a7/1) | 0x02 | 0x5d | 9/3/2023 | Intel Xeon E-2300 Series
    Raptor Lake E/HX/S | 0xb0671 (06/b7/1) | 0x32 | 0x11e | 8/31/2023 | Intel Xeon E-2400 Series

ESXi-8.0U2b-23305546

Name

ESXi

Version

ESXi-8.0U2b-23305546

Release Date

February 29, 2024

Category

Bugfix

Affected Components

  • ESXi Component

  • ESXi Install/Upgrade Component

  • Host Based Replication Server for ESX

PRs Fixed

3326852, 3326624, 3319724, 3316967, 3307362, 3318781, 3331997, 3317441, 3317938, 3336882, 3317707, 3316767, 3306552, 3210610, 3262965, 3312008, 3304392, 3308819, 3313089, 3313912, 3313572, 3308844, 3298815, 3312006, 3311820, 3308054, 3308132, 3302896, 3308133, 3312713, 3301092, 3296088, 3311524, 3311123, 3307579, 3308347, 3309887, 3308458, 3309849, 3303232, 3311989, 3308023, 3308025, 3301665, 3302759, 3305165, 3304472, 3303651, 3293470, 3303807, 3307435, 3282049, 3293466, 3293464, 3296047, 3300173, 3294828, 3300031, 3295243, 3268113, 3284909, 3256705, 3292627, 3287767, 3289239, 3297899, 3295250, 3287747, 3222955, 3300326, 3292550, 3300705, 3287772, 3294624, 3291105, 3296694, 3291219, 3280469, 3288307, 3287769, 3287762, 3289397, 3293199, 3285514, 3275379, 3288872, 3317203, 3313226, 3309606, 3261182, 3286057, 3312725, 3312593, 3311987, 3311956, 3311691, 3295096, 3295060, 3306346, 3306344, 3306345

Related CVE numbers

N/A

ESXi-8.0U2sb-23305545

Name

ESXi

Version

ESXi-8.0U2sb-23305545

Release Date

February 29, 2024

Category

Security

Affected Components

  • ESXi Component

  • ESXi Install/Upgrade Component

  • ESXi Tools Component

PRs Fixed

3333007, 3328623, 3317046, 3315810, 3309040, 3341832, 3309039, 3293189, 3309041, 3291298, 3304476, 3260854, 3291282, 3291430, 3302555, 3301475, 3291280, 3291297, 3291275, 3291295, 3304809

Related CVE numbers

N/A

Known Issues

Miscellaneous Issues

  • You might see compliance errors during upgrade to ESXi 8.0 Update 2b on servers with active Trusted Platform Module (TPM) encryption and vSphere Quick Boot

    If you use the vSphere Lifecycle Manager to upgrade your clusters to ESXi 8.0 Update 2b, in the vSphere Client you might see compliance errors for hosts with active TPM encryption and vSphere Quick Boot.

    Workaround: Ignore the compliance errors and proceed with the upgrade.

Installation and Upgrade Issues

  • You cannot update to ESXi 8.0 Update 2b by using esxcli software vib commands

    Starting with ESXi 8.0 Update 2, upgrade or update of ESXi by using the commands esxcli software vib update or esxcli software vib install is not supported. If you use esxcli software vib update or esxcli software vib install to update your ESXi 8.0 Update 2 hosts to 8.0 Update 2b or later, the task fails. In the logs, you see an error such as: 

    ESXi version change is not allowed using esxcli software vib commands.

    Please use a supported method to upgrade ESXi.       

    vib = VMware_bootbank_esx-base_8.0.2-0.20.22481015
    Please refer to the log file for more details.

    Workaround: If you are upgrading or updating ESXi from a depot zip bundle downloaded from the VMware website, VMware supports only the update command esxcli software profile update --depot=<depot_location> --profile=<profile_name>. For more information, see Upgrade or Update a Host with Image Profiles.
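
    For example, a minimal command sequence for this release might look like the following. The datastore path is illustrative and must point to the location where you uploaded the depot zip:

      # Apply the image profile from the downloaded depot zip
      esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-8.0U2b-23305546-depot.zip -p ESXi-8.0U2b-23305546-standard
      # Reboot the host to complete the update
      reboot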

Known Issues from Previous Releases

Installation, Upgrade, and Migration Issues

  • If you update your vCenter to 8.0 Update 1, but ESXi hosts remain on an earlier version, vSphere Virtual Volumes datastores on such hosts might become inaccessible

    Self-signed VASA provider certificates are no longer supported in vSphere 8.0 and the configuration option Config.HostAgent.ssl.keyStore.allowSelfSigned is set to false by default. If you update a vCenter instance to 8.0 Update 1 that introduces vSphere APIs for Storage Awareness (VASA) version 5.0, and ESXi hosts remain on an earlier vSphere and VASA version, hosts that use self-signed certificates might not be able to access vSphere Virtual Volumes datastores or cannot refresh the CA certificate.

    Workaround: Update hosts to ESXi 8.0 Update 1. If you do not update to ESXi 8.0 Update 1, see VMware knowledge base article 91387.

  • If you apply a host profile using a software FCoE configuration to an ESXi 8.0 host, the operation fails with a validation error

    Starting from vSphere 7.0, software FCoE is deprecated, and in vSphere 8.0 software FCoE profiles are not supported. If you try to apply a host profile from an earlier version to an ESXi 8.0 host, for example to edit the host customization, the operation fails. In the vSphere Client, you see an error such as Host Customizations validation error.

    Workaround: Disable the Software FCoE Configuration subprofile in the host profile.

  • You cannot use ESXi hosts of version 8.0 as a reference host for existing host profiles of earlier ESXi versions

    Validation of existing host profiles for ESXi versions 7.x, 6.7.x and 6.5.x fails when only an 8.0 reference host is available in the inventory.

    Workaround: Make sure you have a reference host of the respective version in the inventory. For example, use an ESXi 7.0 Update 2 reference host to update or edit an ESXi 7.0 Update 2 host profile.

  • VMNICs might be down after an upgrade to ESXi 8.0

    If the peer physical switch of a VMNIC does not support Media Auto Detect, or Media Auto Detect is disabled, and the VMNIC link is set down and then up, the link remains down after upgrade to or installation of ESXi 8.0.

    Workaround: Use either of these two options:

    1. Enable the option media-auto-detect in the BIOS settings by navigating to System Setup Main Menu, usually by pressing F2 or opening a virtual console, and then Device Settings > <specific Broadcom NIC> > Device Configuration Menu > Media Auto Detect. Reboot the host.

    2. Alternatively, use an ESXCLI command similar to: esxcli network nic set -S <your speed> -D full -n <your nic>. With this option, you also set a fixed speed to the link, and it does not require a reboot.
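
    For example, to force a 10 Gb/s full-duplex link on a specific NIC; the NIC name vmnic1 and the speed are illustrative and must match your hardware:

      # List physical NICs and their current link state to identify the affected vmnic
      esxcli network nic list
      # Force the link speed (in Mb/s) and duplex on the affected NIC
      esxcli network nic set -S 10000 -D full -n vmnic1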

  • After upgrade to ESXi 8.0, you might lose some nmlx5_core driver module settings due to obsolete parameters

    Some module parameters for the nmlx5_core driver, such as device_rss, drss and rss, are deprecated in ESXi 8.0 and any custom values, different from the default values, are not kept after an upgrade to ESXi 8.0.

    Workaround: Replace the values of the device_rss, drss and rss parameters as follows:

    • device_rss: Use the DRSS parameter.

    • drss: Use the DRSS parameter.

    • rss: Use the RSS parameter.

  • Second stage of vCenter Server restore procedure freezes at 90%

    When you use the vCenter Server GUI installer or the vCenter Server Appliance Management Interface (VAMI) to restore a vCenter from a file-based backup, the restore workflow might freeze at 90% with an error 401 Unable to authenticate user, even though the task completes successfully in the backend. The issue occurs if the deployed machine has a different time than the NTP server, which requires a time sync. As a result of the time sync, clock skew might cause the running session of the GUI or VAMI to fail.

    Workaround: If you use the GUI installer, you can get the restore status by using the restore.job.get command from the appliancesh shell. If you use VAMI, refresh your browser.

Miscellaneous Issues

  • RDMA over Converged Ethernet (RoCE) traffic might fail in Enhanced Networking Stack (ENS) and VLAN environment, and a Broadcom RDMA network interface controller (RNIC)

    The VMware solution for high bandwidth, ENS, does not support MAC VLAN filters. However, an RDMA application that runs on a Broadcom RNIC in an ENS + VLAN environment requires a MAC VLAN filter. As a result, you might see some RoCE traffic disconnected. The issue is likely to occur in a NVMe over RDMA + ENS + VLAN environment, or in an ENS + VLAN + RDMA app environment, when an ESXi host reboots or an uplink goes up and down.

    Workaround: None

  • If IPv6 is deactivated, you might see 'Jumpstart plugin restore-networking activation failed' error during ESXi host boot

    In the ESXi console, during the boot up sequence of a host, you might see the error banner Jumpstart plugin restore-networking activation failed. The banner displays only when IPv6 is deactivated and does not indicate an actual error.

    Workaround: Activate IPv6 on the ESXi host or ignore the message.

  • Reset or restore of the ESXi system configuration in a vSphere system with DPUs might cause invalid state of the DPUs

    If you reset or restore the ESXi system configuration in a vSphere system with DPUs, for example, by selecting Reset System Configuration in the direct console, the operation might cause an invalid state of the DPUs. In the DCUI, you might see errors such as Failed to reset system configuration. Note that this operation cannot be performed when a managed DPU is present. A backend call to the -f force reboot option is not supported for ESXi installations with a DPU. Although ESXi 8.0 supports the -f force reboot option, if you use reboot -f on an ESXi configuration with a DPU, the forceful reboot might cause an invalid state.

    Workaround: Reset System Configuration in the direct console interface is temporarily disabled. Avoid resetting the ESXi system configuration in a vSphere system with DPUs.

  • In a vCenter Server system with DPUs, if IPv6 is disabled, you cannot manage DPUs

    Although the vSphere Client allows the operation, if you disable IPv6 on an ESXi host with DPUs, you cannot use the DPUs, because the internal communication between the host and the devices depends on IPv6. The issue affects only ESXi hosts with DPUs.

    Workaround: Make sure IPv6 is enabled on ESXi hosts with DPUs.

  • TCP connections intermittently drop on an ESXi host with Enhanced Networking Stack

    If the sender VM is on an ESXi host with Enhanced Networking Stack, a TCP checksum interoperability issue might cause the end system to drop or delay TCP packets whose checksum is calculated as 0xFFFF.

    Workaround: Disable TCP checksum offloading on the sender VM on ESXi hosts with Enhanced Networking Stack. In Linux, you can use the command sudo ethtool -K <interface> tx off, as in the example below.
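
    For example, in a Linux guest where the virtual NIC is eth0 (the interface name is an assumption for illustration):

    $ sudo ethtool -K eth0 tx off

    $ sudo ethtool -k eth0 | grep tx-checksumming

    The second command verifies that TX checksum offloading is now reported as off.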

  • You might see 10 min delay in rebooting an ESXi host on HPE server with pre-installed Pensando DPU

    In rare cases, if the DPU fails, HPE servers with a pre-installed Pensando DPU might take more than 10 minutes to reboot. As a result, ESXi hosts might fail with a purple diagnostic screen, and the default wait time is 10 minutes.

    Workaround: None.

  • If you have a USB interface enabled in a remote management application that you use to install ESXi 8.0, you see an additional standard switch vSwitchBMC with uplink vusb0

    Starting with vSphere 8.0, in both Integrated Dell Remote Access Controller (iDRAC) and HP Integrated Lights Out (iLO), when you have a USB interface enabled, vUSB or vNIC respectively, an additional standard switch vSwitchBMC with uplink vusb0 is created on the ESXi host. This is expected in view of the introduction of data processing units (DPUs) on some servers, but it might cause the VMware Cloud Foundation Bring-Up process to fail.

    Workaround: Before vSphere 8.0 installation, disable the USB interface in the remote management application that you use by following vendor documentation.

    After vSphere 8.0 installation, use the command esxcfg-advcfg -s 0 /Net/BMCNetworkEnable to prevent the creation of the virtual switch vSwitchBMC and associated portgroups on the next reboot of the host.

    See this script as an example:

    ~# esxcfg-advcfg -s 0 /Net/BMCNetworkEnable

    The value of BMCNetworkEnable is 0 and the service is disabled.

    ~# reboot

    After the host reboots, no virtual switch, port group, or VMkernel NIC related to the remote management application network is created on the host.

  • If hardware offload mode is disabled on an NVIDIA BlueField DPU, virtual machines with a configured SR-IOV virtual function cannot power on

    Hardware offload mode must be enabled on NVIDIA BlueField DPUs to allow virtual machines with a configured SR-IOV virtual function to power on and operate.

    Workaround: Keep the default setting of hardware offload mode enabled for NVIDIA BlueField DPUs when you have VMs with a configured SR-IOV virtual function connected to a virtual switch.

  • In the Virtual Appliance Management Interface (VAMI), you see a warning message during the pre-upgrade stage

    With the move of vSphere plug-ins to a remote plug-in architecture, vSphere 8.0 deprecates support for local plug-ins. If your vSphere 8.0 environment has local plug-ins, some breaking changes for such plug-ins might cause the pre-upgrade check by using VAMI to fail.

    In the Pre-Update Check Results screen, you see an error such as:

    Warning message: The compatibility of plug-in package(s) %s with the new vCenter Server version cannot be validated. They may not function properly after vCenter Server upgrade.

    Resolution: Please contact the plug-in vendor and make sure the package is compatible with the new vCenter Server version.

    Workaround: Refer to the VMware Compatibility Guide and VMware Product Interoperability Matrix or contact the plug-in vendors for recommendations to make sure local plug-ins in your environment are compatible with vCenter Server 8.0 before you continue with the upgrade. For more information, see the blog Deprecating the Local Plugins: The Next Step in vSphere Client Extensibility Evolution and VMware knowledge base article 87880.

  • You cannot remove a PCI passthrough device assigned to a virtual Non-Uniform Memory Access (NUMA) node from a virtual machine with CPU Hot Add enabled

    By default, when you enable CPU Hot Add to allow the addition of vCPUs to a running virtual machine, virtual NUMA topology is deactivated. However, if you have a PCI passthrough device assigned to a NUMA node, attempts to remove the device fail with an error. In the vSphere Client, you see messages such as Invalid virtual machine configuration. Virtual NUMA cannot be configured when CPU hotadd is enabled.

    Workaround: See VMware knowledge base article 89638.

Networking Issues

  • You cannot set the Maximum Transmission Unit (MTU) on a VMware vSphere Distributed Switch to a value larger than 9174 on a Pensando DPU

    If you have the vSphere Distributed Services Engine feature with a Pensando DPU enabled on your ESXi 8.0 system, you cannot set the Maximum Transmission Unit (MTU) on a vSphere Distributed Switch to a value larger than 9174.

    Workaround: None.

  • Transfer speed in IPv6 environments with active TCP segmentation offload is slow

    In environments with active IPv6 TCP segmentation offload (TSO), transfer speed for Windows virtual machines with an e1000e virtual NIC might be slow. The issue does not affect IPv4 environments.

    Workaround: Deactivate TSO or use a vmxnet3 adapter instead of e1000e.
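
    If you choose to deactivate TSO at the host level, one possible sketch is to turn off hardware TSO for IPv6 through the /Net/UseHwTSO6 advanced option; treat this option as an assumption and confirm that it exists in your build before changing it:

    ~# esxcli system settings advanced set -o /Net/UseHwTSO6 -i 0

    ~# esxcli system settings advanced list -o /Net/UseHwTSO6

    The second command displays the current value so you can verify the change.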

  • You see link flapping on NICs that use the ntg3 driver of version 4.1.3 and later

    When two NICs that use the ntg3 driver of versions 4.1.3 and later are connected directly, not to a physical switch port, link flapping might occur. The issue does not occur on ntg3 drivers of versions earlier than 4.1.3 or on the tg3 driver. This issue is not related to the occasional Energy Efficient Ethernet (EEE) link flapping on such NICs. The fix for the EEE issue is to use an ntg3 driver of version 4.1.7 or later, or to disable EEE on physical switch ports.

    Workaround: Upgrade the ntg3 driver to version 4.1.8 and set the new module parameter noPhyStateSet to 1, as in the example below. The noPhyStateSet parameter defaults to 0 and is not required in most environments, unless they face this issue.
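
    For example, a minimal sketch of applying the parameter after you upgrade the driver (module parameter changes take effect after a reboot):

    ~# esxcli system module parameters set -m ntg3 -p "noPhyStateSet=1"

    ~# esxcli system module parameters list -m ntg3

    The second command lists the module parameters so you can verify that noPhyStateSet is set to 1.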

  • When you migrate a VM from an ESXi host with a DPU device operating in SmartNIC (ECPF) Mode to an ESXi host with a DPU device operating in traditional NIC Mode, overlay traffic might drop

    When you use vSphere vMotion to migrate a VM attached to an overlay-backed segment from an ESXi host with a vSphere Distributed Switch operating in offloading mode (where traffic forwarding logic is offloaded to the DPU) to an ESXi host with a VDS operating in a non-offloading mode (where DPUs are used as a traditional NIC), the overlay traffic might drop after the migration.

    Workaround: Deactivate and activate the virtual NIC on the destination ESXi host.

  • You cannot use Mellanox ConnectX-5 and ConnectX-6 cards with Model 1 Level 2 and Model 2 for Enhanced Network Stack (ENS) mode in vSphere 8.0

    Due to hardware limitations, ConnectX-5 and ConnectX-6 adapter cards do not support Model 1 Level 2 and Model 2 for Enhanced Network Stack (ENS) mode in vSphere 8.0.

    Workaround: Use Mellanox ConnectX-6 Lx and ConnectX-6 Dx or later cards that support ENS Model 1 Level 2 and Model 2A.

  • Pensando DPUs do not support Link Layer Discovery Protocol (LLDP) on physical switch ports of ESXi hosts

    When you enable LLDP on an ESXi host with a DPU, the host cannot receive LLDP packets.

    Workaround: None.

Storage Issues

  • VASA API version does not automatically refresh after upgrade to vCenter Server 8.0

    vCenter Server 8.0 supports VASA API version 4.0. However, after you upgrade your vCenter Server system to version 8.0, the VASA API version might not automatically change to 4.0. You see the issue in two cases:

    1. If a VASA provider that supports VASA API version 4.0 is registered with a previous version of VMware vCenter, the VASA API version remains unchanged after you upgrade to VMware vCenter 8.0. For example, if you upgrade a VMware vCenter system of version 7.x with a registered VASA provider that supports both VASA API versions 3.5 and 4.0, the VASA API version does not automatically change to 4.0, even though the VASA provider supports VASA API version 4.0. After the upgrade, when you navigate to vCenter Server > Configure > Storage Providers and expand the General tab of the registered VASA provider, you still see VASA API version 3.5.

    2. If you register a VASA provider that supports VASA API version 3.5 with a VMware vCenter 8.0 system, and then upgrade the provider to VASA API version 4.0, you still see VASA API version 3.5 even after the upgrade.

    Workaround: Unregister and re-register the VASA provider on the VMware vCenter 8.0 system.

  • vSphere Storage vMotion operations might fail in a vSAN environment due to an unauthenticated session of the Network File Copy (NFC) manager

    Migrating virtual machines that have at least one snapshot and more than one virtual disk with different storage policies to a vSAN datastore by using vSphere Storage vMotion might fail. The issue occurs due to an unauthenticated session of the NFC manager because the Simple Object Access Protocol (SOAP) body exceeds the allowed size.

    Workaround: First migrate the VM home namespace and just one of the virtual disks. After the operation completes, perform a disk-only migration of the remaining disks.

  • You cannot create snapshots of virtual machines due to an error in the Content Based Read Cache (CBRC) that a digest operation has failed

    A rare race condition when assigning a content ID during the update of the CBRC digest file might cause a discrepancy between the content ID in the data disk and the digest disk. As a result, you cannot create virtual machine snapshots. You see an error such as An error occurred while saving the snapshot: A digest operation has failed in the backtrace. The snapshot creation task completes upon retry.

    Workaround: Retry the snapshot creation task.

vCenter Server and vSphere Client Issues

  • If you load the vSphere virtual infrastructure to more than 90%, ESXi hosts might intermittently disconnect from vCenter Server

    On rare occasions, if the vSphere virtual infrastructure continuously uses more than 90% of its hardware capacity, some ESXi hosts might intermittently disconnect from vCenter Server. The connection typically restores within a few seconds.

    Workaround: If the connection to vCenter Server does not restore within a few seconds, reconnect ESXi hosts manually by using the vSphere Client.

  • You see an error for Cloud Native Storage (CNS) block volumes created by using API in a mixed vCenter environment

    If your environment has vCenter Server systems of versions 8.0 and 7.x, creating a Cloud Native Storage (CNS) block volume by using the API succeeds, but you might see an error in the vSphere Client when you navigate to the CNS volume details. You see an error such as Failed to extract the requested data. Check vSphere Client logs for details. + TypeError: Cannot read properties of null (reading 'cluster'). The issue occurs only if you review volumes managed by the 7.x vCenter Server by using the vSphere Client of an 8.0 vCenter Server.

    Workaround: Log in to vSphere Client on a vCenter Server system of version 7.x to review the volume properties.

  • ESXi hosts might become unresponsive, and you see a vpxa dump file due to a rare condition of insufficient file descriptors for the request queue on vpxa

    In rare cases, when requests to the vpxa service take a long time, for example while waiting for access to a slow datastore, the request queue on vpxa might exceed the limit of file descriptors. As a result, ESXi hosts might briefly become unresponsive, and you see a vpxa-zdump.00* file in the /var/core directory. The vpxa logs contain the line Too many open files.

    Workaround: None. The vpxa service automatically restarts and corrects the issue.

  • If you use a custom update repository with untrusted certificates, vCenter Server upgrade or update by using vCenter Lifecycle Manager workflows to vSphere 8.0 might fail

    If you use a custom update repository with self-signed certificates that the VMware Certificate Authority (VMCA) does not trust, vCenter Lifecycle Manager fails to download files from such a repository. As a result, vCenter Server upgrade or update operations by using vCenter Lifecycle Manager workflows fail with the error Failed to load the repository manifest data for the configured upgrade.

    Workaround: Use CLI, the GUI installer, or the Virtual Appliance Management Interface (VAMI) to perform the upgrade. For more information, see VMware knowledge base article 89493.

Virtual Machine Management Issues

  • When you add an existing virtual hard disk to a new virtual machine, you might see an error that the VM configuration is rejected

    When you add an existing virtual hard disk to a new virtual machine by using the VMware Host Client, the operation might fail with an error such as The VM configuration was rejected. Please see browser Console. The issue occurs because the VMware Host Client might fail to get some properties, such as the hard disk controller.

    Workaround: After you select a hard disk and go to the Ready to complete page, do not click Finish. Instead, return one step back, wait for the page to load, and then click Next > Finish.

vSphere Lifecycle Manager Issues

  • You see error messages when you try to stage vSphere Lifecycle Manager images on ESXi hosts of a version earlier than 8.0

    ESXi 8.0 introduces the option to explicitly stage desired state images, which is the process of downloading depot components from the vSphere Lifecycle Manager depot to the ESXi hosts without applying the software and firmware updates immediately. However, staging of images is supported only on ESXi 8.0 or later hosts. Attempting to stage a vSphere Lifecycle Manager image on ESXi hosts of a version earlier than 8.0 results in messages that the staging of such hosts fails, and the hosts are skipped. This is expected behavior and does not indicate any failed functionality, as all ESXi 8.0 or later hosts are staged with the specified desired image.

    Workaround: None. After you confirm that the affected ESXi hosts are of a version earlier than 8.0, ignore the errors.

  • A remediation task by using vSphere Lifecycle Manager might intermittently fail on ESXi hosts with DPUs

    When you start a vSphere Lifecycle Manager remediation on an ESXi host with DPUs, the host upgrades and reboots as expected, but after the reboot, before the remediation task completes, you might see an error such as:

    A general system error occurred: After host … remediation completed, compliance check reported host as 'non-compliant'. The image on the host does not match the image set for the cluster. Retry the cluster remediation operation.

    This is a rare issue, caused by an intermittent timeout of the post-remediation scan on the DPU.

    Workaround: Reboot the ESXi host and re-run the vSphere Lifecycle Manager compliance check operation, which includes the post-remediation scan.

VMware Host Client Issues

  • VMware Host Client might display incorrect descriptions for severity event states

    When you look in the VMware Host Client to see the descriptions of the severity event states of an ESXi host, they might differ from the descriptions you see by using Intelligent Platform Management Interface (IPMI) or Lenovo XClarity Controller (XCC). For example, in the VMware Host Client, the description of the severity event state for the PSU Sensors might be Transition to Non-critical from OK, while in the XCC and IPMI, the description is Transition to OK.

    Workaround: Verify the descriptions for severity event states by using the ESXCLI command esxcli hardware ipmi sdr list and Lenovo XCC, as in the example below.
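
    For example, to list the sensor data records on the host and filter for PSU-related entries (the grep filter is an illustrative assumption; the exact sensor names depend on your hardware):

    ~# esxcli hardware ipmi sdr list | grep -i psu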

Security Features Issues

  • If you use an RSA key size smaller than 2048 bits, RSA signature generation fails

    Starting with vSphere 8.0, ESXi uses the OpenSSL 3.0 FIPS provider. As part of the FIPS 186-4 requirement, the RSA key size must be at least 2048 bits for any signature generation, and signature generation with SHA1 is not supported.

    Workaround: Use an RSA key size of at least 2048 bits, as in the example below.
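
    For example, a minimal sketch of generating a compliant key with OpenSSL, assuming an illustrative 3072-bit size and output file name:

    $ openssl genrsa -out example-key.pem 3072

    Any RSA key of 2048 bits or larger satisfies the FIPS 186-4 requirement for signature generation.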

  • Even though you deactivate Lockdown Mode on an ESXi host, the lockdown is still reported as active after a host reboot

    Even though you deactivate Lockdown Mode on an ESXi host, you might still see it as active after a reboot of the host.

    Workaround: Add users dcui and vpxuser to the list of lockdown mode exception users and deactivate Lockdown Mode after the reboot. For more information, see Specify Lockdown Mode Exception Users and Specify Lockdown Mode Exception Users in the VMware Host Client.
