VMware ESXi 8.0 Update 2 | 21 SEP 2023 | GA ISO Build 22380479
Check for additions and updates to these release notes.
This ESXi 8.0 Update 2 release has a General Availability (GA) designation. For more information on the vSphere 8.0 IA/GA Release Model of vSphere Update releases, see The vSphere 8 Release Model Evolves.
For an overview of the new features in vSphere 8.0 Update 2, see What's New in vSphere 8 Update 2 and the following release notes:
The following list of new features and enhancements highlights some of the key updates in ESXi 8.0 Update 2:
With vSphere 8.0 Update 2, vSphere Distributed Services Engine adds support for:
NVIDIA BlueField-2 DPUs to server designs from Fujitsu (Intel Sapphire Rapids).
ESXi 8.0 Update 2 adds support to vSphere Quick Boot for multiple servers, including:
Dell
VxRail VD-4510c
VxRail VD-4520c
Fujitsu
PRIMERGY RX2540 M7
PRIMERGY RX2530 M7
Lenovo
ThinkSystem SR635 V3
ThinkSystem SR645 V3
ThinkSystem SR655 V3
ThinkSystem ST650 V3
For the full list of supported servers, see the VMware Compatibility Guide.
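To quickly verify whether a specific host supports vSphere Quick Boot, you can run the compatibility check script that ships with ESXi. This is a minimal sketch that assumes the loadESXCheckCompat.py helper documented in the VMware Quick Boot knowledge base article; the exact output wording can vary between releases.
# Run on the ESXi host shell; prints whether the host is compatible with Quick Boot
~# /usr/lib/vmware/loadesx/bin/loadESXCheckCompat.py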
In-Band Error-Correcting Code (IB ECC) support: With vSphere 8.0 Update 2, you can use IB ECC on hardware platforms that support this option to perform data integrity checks without the need for actual ECC type DDR memory.
Support for Graphics and AI/ML workloads on Intel ATS-M: vSphere 8.0 Update 2 adds support for graphics and AI/ML workloads on Intel ATS-M.
Enhanced ESXi CPU Scheduler: vSphere 8.0 Update 2 adds performance enhancements in the ESXi CPU scheduler for newer generation systems that use high core count CPUs, such as Intel Sapphire Rapids with up to 60 cores per socket.
Broadcom lpfc driver update: With vSphere 8.0 Update 2, the lpfc driver can generate and provide information about Fibre Channel Port Identifier (FPIN) reports.
Mellanox (nmlx5) driver update: With vSphere 8.0 Update 2, the nmlx5 driver supports the NVIDIA ConnectX-7 SmartNIC and adds performance improvements, including support for up to 8 Mellanox uplinks per ESXi host, offload support for up to 200G speeds, hardware offloads for inner IPv4/IPv6 checksum (CSO) and TCP segmentation (TSO) with outer IPv6 for both GENEVE and VXLAN, and enabling NetQ Receive-Side Scaling (RSS).
Marvell (qedentv) driver update: With vSphere 8.0 Update 2, the qedentv driver supports NetQ Receive-Side Scaling (RSS) and Hardware Large Receive Offload (HW LRO) to enhance performance, scalability, and efficiency.
Other driver updates:
Broadcom bcm_mpi3 - bug fixes
Broadcom bnxtnet - accumulated bnxtnet driver updates from the async driver, including the queueGetStats callback of vmk_UplinkQueueOps and enhanced debugging
Broadcom lsi_mr3 - routine update
Intel icen - enables the RSS feature in native mode for the icen NIC device driver
Microchip smartpqi - bug fixes and addition of OEM branded PCI IDs
Pensando ionic_cloud - support for cloud vendors
IPv6 driver enhancements: With vSphere 8.0 Update 2, ESXi drivers add offload capabilities for performance improvements with IPv6 when used as an overlay.
Uniform Passthrough (UPT) mode support in the nvmxnet3 driver: vSphere 8.0 Update 2 adds support for UPT to allow faster vSphere vMotion operations in nested ESXi environments.
Support for 8 ports of 100G on a single host with both Broadcom and Mellanox NICs: vSphere 8.0 Update 2 increases the number of supported 100G NIC ports per ESXi host from 4 to 8 for Broadcom and Mellanox.
CIM Services Tickets for REST Authentication: In addition to JWT-based authentication, vSphere 8.0 Update 2 adds the option to authenticate with an ESXi host by using CIM services tickets with the acquireCimServicesTicket() API for SREST plug-ins.
glibc library update: The glibc library is updated to version 2.28 to align with NIAP requirements.
Guest platform for workloads:
Virtual hardware version 21: vSphere 8.0 Update 2 introduces virtual hardware version 21 to enhance support for the latest guest operating systems and to increase maximums for vGPU and vNVMe as follows:
16 vGPU devices per VM (see Configure Virtual Graphics on vSphere)
256 vNVMe disks per VM (64 x 4 vNVMe adapters)
NVMe 1.3 support for Windows 11 and Windows Server 2022
NVMe support for Windows Server Failover Clustering (WSFC)
Hot-extend a shared vSphere Virtual Volumes Disk: vSphere 8.0 Update 2 supports hot extension of shared vSphere Virtual Volumes disks, which allows you to increase the size of a shared disk without deactivating the cluster, effectively with no downtime, and is helpful for VM clustering solutions such as Windows Server Failover Clustering (WSFC). For more information, see VMware vSphere Virtual Volumes Support for WSFC and You Can Hot Extend a Shared vVol Disk.
Use vNVMe controller for WSFC: With vSphere 8.0 Update 2, you can use a virtual NVMe controller in addition to the existing Paravirtual controller for WSFC with Clustered VMDK for Windows Server 2022 (OS Build 20348.1547) and later. To use the NVMe controller, the virtual machine hardware version must be 21 or later. For more information, see Setup for Windows Server Failover Clustering.
Use vNVME controller for disks in multi-writer mode: Starting with vSphere 8.0 Update 2, you can use virtual NVMe controllers for virtual disks with multi-writer mode, used by third party cluster applications such as Oracle RAC.
USB 3.2 Support in virtual eXtensible Host Controller Interface (xHCI): With vSphere 8.0 Update 2, the virtual xHCI controller is 20 Gbps compatible.
Read-only mode for attached virtual disks: With vSphere 8.0 Update 2, you can attach a virtual disk as read-only to a virtual machine to avoid temporary redo logs and improve performance for use cases such as VMware App Volumes.
Support for VM clone when a First Class Disk (FCD) is attached: With vSphere 8.0 Update 2, you can use the cloneVM() API to clone a VM with an FCD attached.
GPU
GPU Driver VM for any passthrough GPU card: With vSphere 8.0 Update 2, a GPU Driver VM facilitates support of new GPU vendors for the virtual SVGA device (vSGA).
Storage
Support for multiple TCP connections on a single NFS v3 volume: With vSphere 8.0 Update 2, nConnect, the capability that allows multiple TCP connections for a single NFS v3 volume by using ESXCLI, becomes fully supported for on-premises environments. For more information, see VMware knowledge base articles 91497 and 91479.
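For illustration only, the following is a hedged sketch of mounting an NFS v3 volume with multiple TCP connections by using ESXCLI. The server address, share, and volume names are placeholders, and the option that sets the number of connections (shown here as -c) is an assumption; follow knowledge base article 91497 for the authoritative syntax.
# Mount an NFS v3 volume with 4 TCP connections (hypothetical values; -c option per KB 91497)
esxcli storage nfs add -H 10.0.0.10 -s /export/datastore1 -v nfsDatastore1 -c 4
# List NFS volumes to confirm the mount
esxcli storage nfs list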
ESXCLI support for SCSI UNMAP operations for vSphere Virtual Volumes: Starting with vSphere 8.0 Update 2, you can use the ESXCLI command line for SCSI UNMAP operations on vSphere Virtual Volumes.
New features, resolved issues, and known issues of ESXi are described in the release notes for each release. Release notes for earlier releases of ESXi 8.0 are:
For internationalization, compatibility, and open source components, see the VMware vSphere 8.0 Release Notes.
Build Details
Download Filename: | VMware-ESXi-8.0U2-22380479-depot.zip |
Build: | 22380479 |
Download Size: | 615.7 MB |
sha256checksum: | 76dbd1f0077b20b765c8084e74a5fd012c0c485297f60530e1f9703bdf051b5c |
Host Reboot Required: | Yes |
Virtual Machine Migration or Shutdown Required: | Yes |
Rollup Bulletin
This rollup bulletin contains the latest VIBs with all the fixes after the initial release of ESXi 8.0.
Bulletin ID | Category | Severity |
---|---|---|
ESXi80U2-22380479 | Enhancement | Important |
Image Profiles
VMware patch and update releases contain general and critical image profiles. The general release image profile applies to new bug fixes.
Image Profile Name |
ESXi-8.0U2-22380479-standard |
ESXi-8.0U2-22380479-no-tools |
ESXi Image
Name and Version | Release Date | Category | Detail |
---|---|---|---|
ESXi_8.0.2-0.0.22380479 | 09/21/2023 | General | Bugfix image |
For information about the individual components and bulletins, see the Resolved Issues section.
You can download ESXi 8.0 Update 2 from the Broadcom Support Portal.
For download instructions for earlier releases, see Download Broadcom products and software.
For details on updates and upgrades by using vSphere Lifecycle Manager, see About vSphere Lifecycle Manager and vSphere Lifecycle Manager Baselines and Images. You can also update ESXi hosts without the use of vSphere Lifecycle Manager by using an image profile. To do this, you must manually download the patch offline bundle ZIP file.
For more information, see the Upgrading Hosts by Using ESXCLI Commands and the VMware ESXi Upgrade guide.
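For example, after you download the offline bundle, a typical ESXCLI update sequence looks like the following sketch. The depot path is an example, and the profile name is the standard image profile listed in the Image Profiles section above; adjust both to your environment.
# Place the host in maintenance mode before updating
esxcli system maintenanceMode set --enable true
# Apply the standard image profile from the downloaded offline bundle (path is an example)
esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-8.0U2-22380479-depot.zip -p ESXi-8.0U2-22380479-standard
# Reboot the host to complete the update
reboot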
Restricting the use of the ptrace system call by default: Starting with vSphere 8.0 Update 2, VMware restricts the use of the ptrace system call by default due to security concerns.
Deprecation of the Internet Wide Area RDMA Protocol (iWARP) on Intel X722 NIC hardware: With vSphere 8.0 Update 2, iWARP on Intel X722 NIC hardware is no longer supported.
Deprecation of legacy SOAP API contracts: With vSphere 8.0 Update 2, WSDL-based SOAP API contracts of vSphere 5.5 and earlier releases are deprecated and will be removed in a future vSphere release. All vSphere automation based on VMware or third-party tooling that use vSphere 5.5 or earlier contracts must be upgraded to more recent tooling.
Default NVMe version 1.3 for Windows virtual machines: Starting with vSphere 8.0 Update 2, the NVMe version of Windows Server 2022 or Windows 11 and later virtual machines with hardware version 21 and later, and a virtual NVMe controller, is set to 1.3 by default. For proper NVMe hot add, remove, and extend operations, you must upgrade Windows at least to KB5029250 (OS Build 20348.1906) for Windows Server 2022 and KB5028254 (OS Build 22621.2070) for Windows 11.
TLS 1.3 support: vSphere 8.0 Update 2 introduces initial support for TLS 1.3 in ESXi. vCenter stays on TLS 1.2.
Support of NVMe/TCP in Enhanced Networking Stack (ENS) mode for the qedentv driver: Starting with vSphere 8.0 Update 2, the VMware qedentv NIC driver from Marvell supports NVMe/TCP in ENS mode.
When updating vSphere Distributed Services Engine hosts configured with NVIDIA BlueField-2 (BF2) DPUs to ESXi 8.0 Update 2, you must follow a specific order for the ESXi and BF2 DPU firmware upgrades in the same maintenance window:
Enter the vSphere Distributed Services Engine host into maintenance mode.
Make sure that the NIC and ARM firmware of the BF2 DPU are not upgraded and they still run firmware applicable for ESXi versions earlier than 8.0 Update 2, such as NIC firmware version 24.33.1246 and ARM firmware version 18.2.0.12580.
Update ESXi to 8.0 Update 2.
Upgrade the BF2 DPU ARM (UEFI & ATF) firmware to 4.0.2.12722, which is the firmware version required for ESXi 8.0 Update 2.
Upgrade the BF2 DPU NIC firmware to 24.36.7506, which is the firmware version required for ESXi 8.0 Update 2 drivers.
Exit the vSphere Distributed Services Engine host from maintenance mode.
If you do not follow this order and the upgrade fails, ESXi rollback also fails and you cannot recover the BF2 DPU-based vSphere Distributed Services Engine host configuration.
VMware Tools Bundling Changes in ESXi 8.0 Update 2
The following VMware Tools ISO images are bundled with ESXi 8.0 Update 2:
windows.iso: VMware Tools 12.3.0 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.
linux.iso: VMware Tools 10.3.25 ISO image for Linux OS with glibc 2.11 or later.
The following VMware Tools ISO images are available for download:
VMware Tools 11.0.6:
windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).
VMware Tools 10.0.12:
winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003.
linuxPreGLibc25.iso: supports Linux guest operating systems earlier than Red Hat Enterprise Linux (RHEL) 5, SUSE Linux Enterprise Server (SLES) 11, Ubuntu 7.04, and other distributions with glibc version earlier than 2.5.
solaris.iso: VMware Tools image 10.3.10 for Solaris.
darwin.iso: Supports Mac OS X versions 10.11 and later.
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
ESXi 8.0 Update 2 includes the following Intel microcode:
Code Name | FMS | Plt ID | MCU Rev | MCU Date | Brand Names |
---|---|---|---|---|---|
Nehalem EP | 0x106a5 (06/1a/5) | 0x03 | 0x1d | 5/11/2018 | Intel Xeon 35xx Series; Intel Xeon 55xx Series |
Clarkdale | 0x20652 (06/25/2) | 0x12 | 0x11 | 5/8/2018 | Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series |
Arrandale | 0x20655 (06/25/5) | 0x92 | 0x7 | 4/23/2018 | Intel Core i7-620LE Processor |
Sandy Bridge DT | 0x206a7 (06/2a/7) | 0x12 | 0x2f | 2/17/2019 | Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series |
Westmere EP | 0x206c2 (06/2c/2) | 0x03 | 0x1f | 5/8/2018 | Intel Xeon 56xx Series; Intel Xeon 36xx Series |
Sandy Bridge EP | 0x206d6 (06/2d/6) | 0x6d | 0x621 | 3/4/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series |
Sandy Bridge EP | 0x206d7 (06/2d/7) | 0x6d | 0x71a | 3/24/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series |
Nehalem EX | 0x206e6 (06/2e/6) | 0x04 | 0xd | 5/15/2018 | Intel Xeon 65xx Series; Intel Xeon 75xx Series |
Westmere EX | 0x206f2 (06/2f/2) | 0x05 | 0x3b | 5/16/2018 | Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series |
Ivy Bridge DT | 0x306a9 (06/3a/9) | 0x12 | 0x21 | 2/13/2019 | Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C |
Haswell DT | 0x306c3 (06/3c/3) | 0x32 | 0x28 | 11/12/2019 | Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series |
Ivy Bridge EP | 0x306e4 (06/3e/4) | 0xed | 0x42e | 3/14/2019 | Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series |
Ivy Bridge EX | 0x306e7 (06/3e/7) | 0xed | 0x715 | 3/14/2019 | Intel Xeon E7-8800/4800/2800-v2 Series |
Haswell EP | 0x306f2 (06/3f/2) | 0x6f | 0x49 | 8/11/2021 | Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series |
Haswell EX | 0x306f4 (06/3f/4) | 0x80 | 0x1a | 5/24/2021 | Intel Xeon E7-8800/4800-v3 Series |
Broadwell H | 0x40671 (06/47/1) | 0x22 | 0x22 | 11/12/2019 | Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series |
Avoton | 0x406d8 (06/4d/8) | 0x01 | 0x12d | 9/16/2019 | Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series |
Broadwell EP/EX | 0x406f1 (06/4f/1) | 0xef | 0xb000040 | 5/19/2021 | Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series |
Skylake SP | 0x50654 (06/55/4) | 0xb7 | 0x2007006 | 3/6/2023 | Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series |
Cascade Lake B-0 | 0x50656 (06/55/6) | 0xbf | 0x4003604 | 3/17/2023 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200 |
Cascade Lake | 0x50657 (06/55/7) | 0xbf | 0x5003604 | 3/17/2023 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200 |
Cooper Lake | 0x5065b (06/55/b) | 0xbf | 0x7002703 | 3/21/2023 | Intel Xeon Platinum 8300 Series; Intel Xeon Gold 6300/5300 |
Broadwell DE | 0x50662 (06/56/2) | 0x10 | 0x1c | 6/17/2019 | Intel Xeon D-1500 Series |
Broadwell DE | 0x50663 (06/56/3) | 0x10 | 0x700001c | 6/12/2021 | Intel Xeon D-1500 Series |
Broadwell DE | 0x50664 (06/56/4) | 0x10 | 0xf00001a | 6/12/2021 | Intel Xeon D-1500 Series |
Broadwell NS | 0x50665 (06/56/5) | 0x10 | 0xe000014 | 9/18/2021 | Intel Xeon D-1600 Series |
Skylake H/S | 0x506e3 (06/5e/3) | 0x36 | 0xf0 | 11/12/2021 | Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series |
Denverton | 0x506f1 (06/5f/1) | 0x01 | 0x38 | 12/2/2021 | Intel Atom C3000 Series |
Ice Lake SP | 0x606a6 (06/6a/6) | 0x87 | 0xd0003a5 | 3/30/2023 | Intel Xeon Platinum 8300 Series; Intel Xeon Gold 6300/5300 Series; Intel Xeon Silver 4300 Series |
Ice Lake D | 0x606c1 (06/6c/1) | 0x10 | 0x1000230 | 1/27/2023 | Intel Xeon D-2700 Series; Intel Xeon D-1700 Series |
Snow Ridge | 0x80665 (06/86/5) | 0x01 | 0x4c000023 | 2/22/2023 | Intel Atom P5000 Series |
Snow Ridge | 0x80667 (06/86/7) | 0x01 | 0x4c000023 | 2/22/2023 | Intel Atom P5000 Series |
Tiger Lake U | 0x806c1 (06/8c/1) | 0x80 | 0xac | 2/27/2023 | Intel Core i3/i5/i7-1100 Series |
Tiger Lake U Refresh | 0x806c2 (06/8c/2) | 0xc2 | 0x2c | 2/27/2023 | Intel Core i3/i5/i7-1100 Series |
Tiger Lake H | 0x806d1 (06/8d/1) | 0xc2 | 0x46 | 2/27/2023 | Intel Xeon W-11000E Series |
Sapphire Rapids SP HBM | 0x806f8 (06/8f/8) | 0x10 | 0x2c0001d1 | 2/14/2023 | Intel Xeon Max 9400 Series |
Sapphire Rapids SP | 0x806f8 (06/8f/8) | 0x87 | 0x2b000461 | 3/13/2023 | Intel Xeon Platinum 8400 Series; Intel Xeon Gold 6400/5400 Series; Intel Xeon Silver 4400 Series; Intel Xeon Bronze 3400 Series |
Kaby Lake H/S/X | 0x906e9 (06/9e/9) | 0x2a | 0xf4 | 2/23/2023 | Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series |
Coffee Lake | 0x906ea (06/9e/a) | 0x22 | 0xf4 | 2/23/2023 | Intel Xeon E-2100 Series; Intel Xeon E-2200 Series (4 or 6 core) |
Coffee Lake | 0x906eb (06/9e/b) | 0x02 | 0xf4 | 2/23/2023 | Intel Xeon E-2100 Series |
Coffee Lake | 0x906ec (06/9e/c) | 0x22 | 0xf4 | 2/23/2023 | Intel Xeon E-2100 Series |
Coffee Lake Refresh | 0x906ed (06/9e/d) | 0x22 | 0xfa | 2/27/2023 | Intel Xeon E-2200 Series (8 core) |
Rocket Lake S | 0xa0671 (06/a7/1) | 0x02 | 0x59 | 2/26/2023 | Intel Xeon E-2300 Series |
ESXi 8.0 Update 2 includes the following AMD CPU microcode:
Code Name | FMS | MCU Rev | MCU Date | Brand Names |
---|---|---|---|---|
Bulldozer | 0x600f12 (15/01/2) | 0x0600063e | 2/7/2018 | Opteron 6200/4200/3200 Series |
Piledriver | 0x600f20 (15/02/0) | 0x06000852 | 2/6/2018 | Opteron 6300/4300/3300 Series |
Zen-Naples | 0x800f12 (17/01/2) | 0x0800126e | 11/11/2021 | EPYC 7001 Series |
Zen2-Rome | 0x830f10 (17/31/0) | 0x0830107a | 5/17/2023 | EPYC 7002/7Fx2/7Hx2 Series |
Zen3-Milan-B1 | 0xa00f11 (19/01/1) | 0x0a0011d1 | 7/10/2023 | EPYC 7003/7003X Series |
Zen3-Milan-B2 | 0xa00f12 (19/01/2) | 0x0a001234 | 7/10/2023 | EPYC 7003/7003X Series |
VMware NSX installation or upgrade in a vSphere environment with DPUs might fail with a connectivity error
An intermittent timing issue on the ESXi host side might cause NSX installation or upgrade in a vSphere environment with DPUs to fail. In the nsxapi.log file, you see logs such as Failed to get SFHC response. MessageType MT_SOFTWARE_STATUS.
This issue is resolved in this release.
You cannot mount an IPv6-based NFS 3 datastore with VMkernel port binding by using ESXCLI commands
When you try to mount an NFS 3 datastore with an IPv6 server address and VMkernel port binding by using an ESXCLI command, the task fails with an error such as:
[:~] esxcli storage nfs add -I fc00:xxx:xxx:xx::xxx:vmk1 -s share1 -v volume1
Validation of vmknic failed Instance(defaultTcpipStack, xxx:xxx:xx::xxx:vmk1) Input(): Not found:
The issue is specific for NFS 3 datastores with an IPv6 server address and VMkernel port binding.
This issue is resolved in this release.
If a PCI passthrough is active on a DPU during the shutdown or restart of an ESXi host, the host fails with a purple diagnostic screen
If an active virtual machine has a PCI passthrough to a DPU at the time of shutdown or reboot of an ESXi host, the host fails with a purple diagnostic screen. The issue is specific for systems with DPUs and only in case of VMs that use PCI passthrough to the DPU.
This issue is resolved in this release.
If you configure a VM at HW version earlier than 20 with a Vendor Device Group, such VMs might not work as expected
Vendor Device Groups, which enable binding of high-speed networking devices and the GPU, are supported only on VMs with HW version 20 and later, but you are not prevented from configuring a VM at HW version earlier than 20 with a Vendor Device Group. Such VMs might not work as expected: for example, they might fail to power on.
This issue is resolved in this release.
ESXi reboot takes long due to NFS server mount timeout
When you have multiple mounts on an NFS server that is not accessible, ESXi retries connection to each mount for 30 seconds, which might add up to minutes of ESXi reboot delay, depending on the number of mounts.
This issue is resolved in this release.
Auto discovery of NVMe Discovery Service might fail on ESXi hosts with NVMe/TCP configurations
vSphere 8.0 adds advanced NVMe-oF Discovery Service support in ESXi that enables the dynamic discovery of standards-compliant NVMe Discovery Service. ESXi uses the mDNS/DNS-SD service to obtain information such as IP address and port number of active NVMe-oF discovery services on the network. However, in ESXi servers with NVMe/TCP enabled, the auto discovery on networks configured to use vSphere Distributed Switch might fail. The issue does not affect NVMe/TCP configurations that use standard switches.
This issue is resolved in this release.
If you do not reboot an ESXi host after you enable or disable SR-IOV with the icen driver, when you configure a transport node in ENS Interrupt mode on that host, some virtual machines might not get DHCP addresses
If you enable or disable SR-IOV with the icen driver on an ESXi host and configure a transport node in ENS Interrupt mode, some Rx (receive) queues might not work if you do not reboot the host. As a result, some virtual machines might not get DHCP addresses.
This issue is resolved in this release.
If you use an ESXi host deployed from a host profile with enabled stateful install as an image to deploy other ESXi hosts in a cluster, the operation fails
If you extract an image of an ESXi host deployed from a host profile with enabled stateful install to deploy other ESXi hosts in a vSphere Lifecycle Manager cluster, the operation fails. In the vSphere Client, you see an error such as A general system error occurred: Failed to extract image from the host: no stored copy available for inactive VIB VMW_bootbank_xxx. Extraction of image from host xxx.eng.vmware.com failed.
This issue is resolved in this release.
If IPv6 is deactivated, you might see 'Jumpstart plugin restore-networking activation failed' error during ESXi host boot
In the ESXi console, during the boot up sequence of a host, you might see the error banner Jumpstart plugin restore-networking activation failed. The banner displays only when IPv6 is deactivated and does not indicate an actual error.
Workaround: Activate IPv6 on the ESXi host or ignore the message.
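To activate IPv6, you can use ESXCLI, as in this minimal sketch; a reboot of the host is required for the change to take effect.
# Enable IPv6 on the ESXi host, then reboot
esxcli network ip set --ipv6-enabled=true
reboot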
In a High Performance Plug-in (HPP) environment, a network outage might trigger host failure
In HPP environments, which aim to improve the performance of NVMe devices on ESXi hosts, in case of a network outage, you might see two issues:
In case of multi-pathed local SCSI devices (active-active presentation) claimed by HPP, VM I/Os might fail prematurely instead of being retried on the other path.
In case of local SCSI devices claimed by HPP, in an all paths down (APD) scenario, if a complete storage rescan is triggered from vCenter, the ESXi host might fail with a purple diagnostic screen.
Both issues occur only with SCSI local devices and when I/Os fail due to lack of network or APD.
Workaround: Add user claimrules to claim devices by the ESXi native multipathing plug-in (NMP) instead of HPP. For more information, see VMware knowledge base article 94377.
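The following is a minimal, illustrative sketch of adding a user claim rule so that NMP, rather than HPP, claims a local SCSI device. The rule number, vendor, and model strings are hypothetical placeholders; use the exact rules from VMware knowledge base article 94377 for your devices.
# Add a user claim rule that assigns matching devices to NMP (rule number, vendor, and model are examples)
esxcli storage core claimrule add --rule 914 --type vendor --vendor "EXAMPLE" --model "EXAMPLE_MODEL" --plugin NMP
# Load the new claim rules so they take effect
esxcli storage core claimrule load
# Verify the configured claim rules
esxcli storage core claimrule list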
TCP connections intermittently drop on an ESXi host with Enhanced Networking Stack
If the sender VM is on an ESXi host with Enhanced Networking Stack, TCP checksum interoperability issues when the value of the TCP checksum in a packet is calculated as 0xFFFF might cause the end system to drop or delay the TCP packet.
Workaround: Disable TCP checksum offloading on the sender VM on ESXi hosts with Enhanced Networking Stack. In Linux, you can use the command sudo ethtool -K <interface> tx off.
Even though you deactivate Lockdown Mode on an ESXi host, the lockdown is still reported as active after a host reboot
Even though you deactivate Lockdown Mode on an ESXi host, you might still see it as active after a reboot of the host.
Workaround: Add users dcui and vpxuser to the list of lockdown mode exception users and deactivate Lockdown Mode after the reboot. For more information, see Specify Lockdown Mode Exception Users and Specify Lockdown Mode Exception Users in the VMware Host Client.
Transfer speed in IPv6 environments with active TCP segmentation offload is slow
In environments with active IPv6 TCP segmentation offload (TSO), transfer speed for Windows virtual machines with an e1000e virtual NIC might be slow. The issue does not affect IPv4 environments.
Workaround: Deactivate TSO or use a vmxnet3 adapter instead of e1000e.
When you migrate a VM from an ESXi host with a DPU device operating in SmartNIC (ECPF) Mode to an ESXi host with a DPU device operating in traditional NIC Mode, overlay traffic might drop
When you use vSphere vMotion to migrate a VM attached to an overlay-backed segment from an ESXi host with a vSphere Distributed Switch operating in offloading mode (where traffic forwarding logic is offloaded to the DPU) to an ESXi host with a VDS operating in a non-offloading mode (where DPUs are used as a traditional NIC), the overlay traffic might drop after the migration.
Workaround: Deactivate and activate the virtual NIC on the destination ESXi host.
If you update your vCenter to 8.0 Update 1, but ESXi hosts remain on an earlier version, vSphere Virtual Volumes datastores on such hosts might become inaccessible
Self-signed VASA provider certificates are no longer supported in vSphere 8.0 and the configuration option Config.HostAgent.ssl.keyStore.allowSelfSigned is set to false by default. If you update a vCenter instance to 8.0 Update 1 that introduces vSphere APIs for Storage Awareness (VASA) version 5.0, and ESXi hosts remain on an earlier vSphere and VASA version, hosts that use self-signed certificates might not be able to access vSphere Virtual Volumes datastores or cannot refresh the CA certificate.
Workaround: Update hosts to ESXi 8.0 Update 1. If you do not update to ESXi 8.0 Update 1, see VMware knowledge base article 91387.
If you apply a host profile using a software FCoE configuration to an ESXi 8.0 host, the operation fails with a validation error
Starting from vSphere 7.0, software FCoE is deprecated, and in vSphere 8.0 software FCoE profiles are not supported. If you try to apply a host profile from an earlier version to an ESXi 8.0 host, for example to edit the host customization, the operation fails. In the vSphere Client, you see an error such as Host Customizations validation error.
Workaround: Disable the Software FCoE Configuration subprofile in the host profile.
You cannot use ESXi hosts of version 8.0 as a reference host for existing host profiles of earlier ESXi versions
Validation of existing host profiles for ESXi versions 7.x, 6.7.x and 6.5.x fails when only an 8.0 reference host is available in the inventory.
Workaround: Make sure you have a reference host of the respective version in the inventory. For example, use an ESXi 7.0 Update 2 reference host to update or edit an ESXi 7.0 Update 2 host profile.
VMNICs might be down after an upgrade to ESXi 8.0
If the peer physical switch of a VMNIC does not support the auto negotiate option, or the option is not active, and the VMNIC link is set down and then up, the link remains down after upgrade to or installation of ESXi 8.0.
Workaround: Use either of these two options:
Enable the option media-auto-detect in the BIOS settings by navigating to System Setup Main Menu, usually by pressing F2 or opening a virtual console, and then Device Settings > <specific broadcom NIC> > Device Configuration Menu > Media Auto Detect. Reboot the host.
Alternatively, use an ESXCLI command similar to esxcli network nic set -S <your speed> -D full -n <your nic>. With this option, you also set a fixed speed to the link, and it does not require a reboot.
After upgrade to ESXi 8.0, you might lose some nmlx5_core driver module settings due to obsolete parameters
Some module parameters for the nmlx5_core driver, such as device_rss, drss, and rss, are deprecated in ESXi 8.0, and any custom values, different from the default values, are not kept after an upgrade to ESXi 8.0.
Workaround: Replace the values of the device_rss, drss, and rss parameters as follows (see the example after this list):
device_rss: Use the DRSS parameter.
drss: Use the DRSS parameter.
rss: Use the RSS parameter.
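For example, a hedged sketch of restoring custom receive-side scaling settings after the upgrade by using the new parameter names; the numeric values are placeholders and must match your previous configuration.
# Set the new DRSS and RSS module parameters for nmlx5_core (values are examples)
esxcli system module parameters set -m nmlx5_core -p "DRSS=4 RSS=4"
# Confirm the parameters, then reboot the host for the change to take effect
esxcli system module parameters list -m nmlx5_core
reboot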
Second stage of vCenter Server restore procedure freezes at 90%
When you use the vCenter Server GUI installer or the vCenter Server Appliance Management Interface (VAMI) to restore a vCenter from a file-based backup, the restore workflow might freeze at 90% with an error 401 Unable to authenticate user, even though the task completes successfully in the backend. The issue occurs if the deployed machine has a different time than the NTP server, which requires a time sync. As a result of the time sync, clock skew might fail the running session of the GUI or VAMI.
Workaround: If you use the GUI installer, you can get the restore status by using the restore.job.get command from the appliancesh shell. If you use VAMI, refresh your browser.
RDMA over Converged Ethernet (RoCE) traffic might fail in Enhanced Networking Stack (ENS) and VLAN environment, and a Broadcom RDMA network interface controller (RNIC)
The VMware solution for high bandwidth, ENS, does not support MAC VLAN filters. However, an RDMA application that runs on a Broadcom RNIC in an ENS + VLAN environment requires a MAC VLAN filter. As a result, you might see some RoCE traffic disconnected. The issue is likely to occur in an NVMe over RDMA + ENS + VLAN environment, or in an ENS + VLAN + RDMA app environment, when an ESXi host reboots or an uplink goes up and down.
Workaround: None.
Reset or restore of the ESXi system configuration in a vSphere system with DPUs might cause invalid state of the DPUs
If you reset or restore the ESXi system configuration in a vSphere system with DPUs, for example, by selecting Reset System Configuration in the direct console, the operation might cause invalid state of the DPUs. In the DCUI, you might see errors such as Failed to reset system configuration. Note that this operation cannot be performed when a managed DPU is present. A backend call to the -f force reboot option is not supported for ESXi installations with a DPU. Although ESXi 8.0 supports the -f force reboot option, if you use reboot -f on an ESXi configuration with a DPU, the forceful reboot might cause an invalid state.
Workaround: Reset System Configuration in the direct console interface is temporarily disabled. Avoid resetting the ESXi system configuration in a vSphere system with DPUs.
In a vCenter Server system with DPUs, if IPv6 is disabled, you cannot manage DPUs
Although the vSphere Client allows the operation, if you disable IPv6 on an ESXi host with DPUs, you cannot use the DPUs, because the internal communication between the host and the devices depends on IPv6. The issue affects only ESXi hosts with DPUs.
Workaround: Make sure IPv6 is enabled on ESXi hosts with DPUs.
You might see a 10-minute delay in rebooting an ESXi host on HPE servers with a pre-installed Pensando DPU
In rare cases, HPE servers with a pre-installed Pensando DPU might take more than 10 minutes to reboot in case of a failure of the DPU. As a result, ESXi hosts might fail with a purple diagnostic screen, and the default wait time is 10 minutes.
Workaround: None.
If you have a USB interface enabled in a remote management application that you use to install ESXi 8.0, you see an additional standard switch vSwitchBMC with uplink vusb0
Starting with vSphere 8.0, in both Integrated Dell Remote Access Controller (iDRAC) and HP Integrated Lights Out (ILO), when you have a USB interface enabled, vUSB or vNIC respectively, an additional standard switch vSwitchBMC with uplink vusb0 gets created on the ESXi host. This is expected, in view of the introduction of data processing units (DPUs) on some servers, but might cause the VMware Cloud Foundation Bring-Up process to fail.
Workaround: Before vSphere 8.0 installation, disable the USB interface in the remote management application that you use by following vendor documentation.
After vSphere 8.0 installation, use the command esxcfg-advcfg -s 0 /Net/BMCNetworkEnable to prevent the creation of a virtual switch vSwitchBMC and associated portgroups on the next reboot of the host.
See this script as an example:
~# esxcfg-advcfg -s 0 /Net/BMCNetworkEnable
The value of BMCNetworkEnable is 0 and the service is disabled.
~# reboot
After the host reboot, no virtual switch, port group, or VMkernel NIC related to the remote management application network is created on the host.
If hardware offload mode is disabled on an NVIDIA BlueField DPU, virtual machines with a configured SR-IOV virtual function cannot power on
Hardware offload mode must be enabled on NVIDIA BlueField DPUs to allow virtual machines with a configured SR-IOV virtual function to power on and operate.
Workaround: Keep the default hardware offload mode enabled on NVIDIA BlueField DPUs when you have VMs with a configured SR-IOV virtual function connected to a virtual switch.
In the Virtual Appliance Management Interface (VAMI), you see a warning message during the pre-upgrade stage
Moving vSphere plug-ins to a remote plug-in architecture, vSphere 8.0 deprecates support for local plug-ins. If your 8.0 vSphere environment has local plug-ins, some breaking changes for such plug-ins might cause the pre-upgrade check by using VAMI to fail.
In the Pre-Update Check Results screen, you see an error such as:
Warning message: The compatibility of plug-in package(s) %s with the new vCenter Server version cannot be validated. They may not function properly after vCenter Server upgrade.
Resolution: Please contact the plug-in vendor and make sure the package is compatible with the new vCenter Server version.
Workaround: Refer to the VMware Compatibility Guide and VMware Product Interoperability Matrix or contact the plug-in vendors for recommendations to make sure local plug-ins in your environment are compatible with vCenter Server 8.0 before you continue with the upgrade. For more information, see the blog Deprecating the Local Plugins :- The Next Step in vSphere Client Extensibility Evolution and VMware knowledge base article 87880.
You cannot remove a PCI passthrough device assigned to a virtual Non-Uniform Memory Access (NUMA) node from a virtual machine with CPU Hot Add enabled
By default, when you enable CPU Hot Add to allow the addition of vCPUs to a running virtual machine, virtual NUMA topology is deactivated. However, if you have a PCI passthrough device assigned to a NUMA node, attempts to remove the device end with an error. In the vSphere Client, you see messages such as Invalid virtual machine configuration. Virtual NUMA cannot be configured when CPU hotadd is enabled.
Workaround: See VMware knowledge base article 89638.
You cannot set the Maximum Transmission Unit (MTU) on a VMware vSphere Distributed Switch to a value larger than 9174 on a Pensando DPU
If you have the vSphere Distributed Services Engine feature with a Pensando DPU enabled on your ESXi 8.0 system, you cannot set the Maximum Transmission Unit (MTU) on a vSphere Distributed Switch to a value larger than 9174.
Workaround: None.
You see link flapping on NICs that use the ntg3 driver of version 4.1.3 and later
When two NICs that use the ntg3 driver of versions 4.1.3 and later are connected directly, not to a physical switch port, link flapping might occur. The issue does not occur on ntg3 drivers of versions earlier than 4.1.3 or on the tg3 driver. This issue is not related to the occasional Energy Efficient Ethernet (EEE) link flapping on such NICs. The fix for the EEE issue is to use an ntg3 driver of version 4.1.7 or later, or to disable EEE on physical switch ports.
Workaround: Upgrade the ntg3 driver to version 4.1.8 and set the new module parameter noPhyStateSet to 1. The noPhyStateSet parameter defaults to 0 and is not required in most environments, unless they face this issue.
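A minimal sketch of setting the module parameter after you upgrade the driver; the parameter name comes from this workaround, and a reboot of the host is assumed to be needed for the setting to apply.
# Set noPhyStateSet=1 for the ntg3 driver (only needed if you hit this issue)
esxcli system module parameters set -m ntg3 -p "noPhyStateSet=1"
# Verify the parameter value
esxcli system module parameters list -m ntg3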
You cannot use Mellanox ConnectX-5, ConnectX-6 cards Model 1 Level 2 and Model 2 for Enhanced Network Stack (ENS) mode in vSphere 8.0
Due to hardware limitations, ConnectX-5 and ConnectX-6 adapter cards do not support Model 1 Level 2 and Model 2 for Enhanced Network Stack (ENS) mode in vSphere 8.0.
Workaround: Use Mellanox ConnectX-6 Lx and ConnectX-6 Dx or later cards that support ENS Model 1 Level 2, and Model 2A.
Pensando DPUs do not support Link Layer Discovery Protocol (LLDP) on physical switch ports of ESXi hosts
When you enable LLDP on an ESXi host with a DPU, the host cannot receive LLDP packets.
Workaround: None.
VASA API version does not automatically refresh after upgrade to vCenter Server 8.0
vCenter Server 8.0 supports VASA API version 4.0. However, after you upgrade your vCenter Server system to version 8.0, the VASA API version might not automatically change to 4.0. You see the issue in two cases:
If a VASA provider that supports VASA API version 4.0 is registered with a previous version of VMware vCenter, the VASA API version remains unchanged after you upgrade to VMware vCenter 8.0. For example, if you upgrade a VMware vCenter system of version 7.x with a registered VASA provider that supports both VASA API versions 3.5 and 4.0, the VASA API version does not automatically change to 4.0, even though the VASA provider supports VASA API version 4.0. After the upgrade, when you navigate to vCenter Server > Configure > Storage Providers and expand the General tab of the registered VASA provider, you still see VASA API version 3.5.
If you register a VASA provider that supports VASA API version 3.5 with a VMware vCenter 8.0 system and upgrade the VASA API version to 4.0, even after the upgrade, you still see VASA API version 3.5.
Workaround: Unregister and re-register the VASA provider on the VMware vCenter 8.0 system.
vSphere Storage vMotion operations might fail in a vSAN environment due to an unauthenticated session of the Network File Copy (NFC) manager
Migrations to a vSAN datastore by using vSphere Storage vMotion of virtual machines that have at least one snapshot and more than one virtual disk with different storage policy might fail. The issue occurs due to an unauthenticated session of the NFC manager because the Simple Object Access Protocol (SOAP) body exceeds the allowed size.
Workaround: First migrate the VM home namespace and just one of the virtual disks. After the operation completes, perform a disk-only migration of the remaining 2 disks.
You cannot create snapshots of virtual machines due to an error in the Content Based Read Cache (CBRC) that a digest operation has failed
A rare race condition when assigning a content ID during the update of the CBRC digest file might cause a discrepancy between the content ID in the data disk and the digest disk. As a result, you cannot create virtual machine snapshots. You see an error such as An error occurred while saving the snapshot: A digest operation has failed in the backtrace. The snapshot creation task completes upon retry.
Workaround: Retry the snapshot creation task.
If you load the vSphere virtual infrastructure to more than 90%, ESXi hosts might intermittently disconnect from vCenter Server
On rare occasions, if the vSphere virtual infrastructure is continuously using more than 90% of its hardware capacity, some ESXi hosts might intermittently disconnect from vCenter Server. Connection typically restores within a few seconds.
Workaround: If the connection to vCenter Server does not restore within a few seconds, reconnect the ESXi hosts manually by using the vSphere Client.
You see an error for Cloud Native Storage (CNS) block volumes created by using API in a mixed vCenter environment
If your environment has vCenter Server systems of version 8.0 and 7.x, creating a Cloud Native Storage (CNS) block volume by using the API is successful, but you might see an error in the vSphere Client when you navigate to the CNS volume details. You see an error such as Failed to extract the requested data. Check vSphere Client logs for details. + TypeError: Cannot read properties of null (reading 'cluster'). The issue occurs only if you review volumes managed by the 7.x vCenter Server by using the vSphere Client of an 8.0 vCenter Server.
Workaround: Log in to vSphere Client on a vCenter Server system of version 7.x to review the volume properties.
ESXi hosts might become unresponsive, and you see a vpxa dump file due to a rare condition of insufficient file descriptors for the request queue on vpxa
In rare cases, when requests to the vpxa service take long, for example waiting for access to a slow datastore, the request queue on vpxa might exceed the limit of file descriptors. As a result, ESXi hosts might briefly become unresponsive, and you see a vpxa-zdump.00* file in the /var/core directory. The vpxa logs contain the line Too many open files.
Workaround: None. The vpxa service automatically restarts and corrects the issue.
If you use custom update repository with untrusted certificates, vCenter Server upgrade or update by using vCenter Lifecycle Manager workflows to vSphere 8.0 might fail
If you use a custom update repository with self-signed certificates that the VMware Certificate Authority (VMCA) does not trust, vCenter Lifecycle Manager fails to download files from such a repository. As a result, vCenter Server upgrade or update operations by using vCenter Lifecycle Manager workflows fail with the error Failed to load the repository manifest data for the configured upgrade.
Workaround: Use CLI, the GUI installer, or the Virtual Appliance Management Interface (VAMI) to perform the upgrade. For more information, see VMware knowledge base article 89493.
When you add an existing virtual hard disk to a new virtual machine, you might see an error that the VM configuration is rejected
When you add an existing virtual hard disk to a new virtual machine by using the VMware Host Client, the operation might fail with an error such as The VM configuration was rejected. Please see browser Console. The issue occurs because the VMware Host Client might fail to get some properties, such as the hard disk controller.
Workaround: After you select a hard disk and go to the Ready to complete page, do not click Finish. Instead, return one step back, wait for the page to load, and then click Next > Finish.
You see error messages when you try to stage vSphere Lifecycle Manager images on ESXi hosts of version earlier than 8.0
ESXi 8.0 introduces the option to explicitly stage desired state images, which is the process of downloading depot components from the vSphere Lifecycle Manager depot to the ESXi hosts without applying the software and firmware updates immediately. However, staging of images is only supported on ESXi 8.0 or later hosts. Attempting to stage a vSphere Lifecycle Manager image on ESXi hosts of version earlier than 8.0 results in messages that the staging of such hosts has failed, and the hosts are skipped. This is expected behavior and does not indicate any failed functionality, as all ESXi 8.0 or later hosts are staged with the specified desired image.
Workaround: None. After you confirm that the affected ESXi hosts are of version earlier than 8.0, ignore the errors.
A remediation task by using vSphere Lifecycle Manager might intermittently fail on ESXi hosts with DPUs
When you start a vSphere Lifecycle Manager remediation on an ESXi host with DPUs, the host upgrades and reboots as expected, but after the reboot, before the remediation task completes, you might see an error such as:
A general system error occurred: After host … remediation completed, compliance check reported host as 'non-compliant'. The image on the host does not match the image set for the cluster. Retry the cluster remediation operation.
This is a rare issue, caused by an intermittent timeout of the post-remediation scan on the DPU.
Workaround: Reboot the ESXi host and re-run the vSphere Lifecycle Manager compliance check operation, which includes the post-remediation scan.
VMware Host Client might display incorrect descriptions for severity event states
When you look in the VMware Host Client to see the descriptions of the severity event states of an ESXi host, they might differ from the descriptions you see by using Intelligent Platform Management Interface (IPMI) or Lenovo XClarity Controller (XCC). For example, in the VMware Host Client, the description of the severity event state for the PSU Sensors might be Transition to Non-critical from OK, while in the XCC and IPMI, the description is Transition to OK.
Workaround: Verify the descriptions for severity event states by using the ESXCLI command esxcli hardware ipmi sdr list and Lenovo XCC.
If you use an RSA key size smaller than 2048 bits, RSA signature generation fails
Starting from vSphere 8.0, ESXi uses the OpenSSL 3.0 FIPS provider. As part of the FIPS 186-4 requirement, the RSA key size must be at least 2048 bits for any signature generation, and signature generation with SHA1 is not supported.
Workaround: Use an RSA key size of 2048 bits or larger.