ESXi 8.0b | FEB 14 2023 | Build 21203435 |
ESXi 8.0b supports vSphere Quick Boot on the following servers:
HPE:
ProLiant DL110 Gen10
ProLiant DL160 Gen10
ProLiant ML350 Gen10
Lenovo:
ThinkSystem SR 665
New features, resolved issues, and known issues of ESXi are described in the release notes for each release, including the release notes for earlier releases of ESXi 8.0.
For internationalization, compatibility, upgrade and product support notices, and open source components, see the VMware vSphere 8.0 Release Notes.
This release of ESXi 8.0b delivers the following patches:
Build Details
Download Filename: | VMware-ESXi-8.0b-21203435-depot.zip |
Build: | 21203435 |
Download Size: | 993.8 MB |
md5sum: | 3311032dada17653a5b017b3f2f0d3e4 |
sha256checksum: | 94c4e8fddaf8da2b127ba891b82fefbad6d672e7ac160477a159f462de29c5b6 |
Host Reboot Required: | Yes |
Virtual Machine Migration or Shutdown Required: | Yes |
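After you download the depot, you can verify its integrity against the values above. For example, from a Linux shell in the download directory:
md5sum VMware-ESXi-8.0b-21203435-depot.zip # compare with the md5sum listed above
sha256sum VMware-ESXi-8.0b-21203435-depot.zip # compare with the sha256checksum listed above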
Components
Component | Bulletin | Category | Severity |
---|---|---|---|
ESXi Component - core ESXi VIBs | ESXi_8.0.0-1.20.21203435 | Bugfix | Critical |
ESXi Install/Upgrade Component | esx-update_8.0.0-1.20.21203435, esxio-update_8.0.0-1.20.21203435 | Bugfix | Critical |
Broadcom Emulex Connectivity Division lpfc driver for FC adapters | Broadcom-ELX-lpfc_14.0.635.4-14vmw.800.1.20.21203435 | Bugfix | Critical |
Mellanox 5th generation NICs (ConnectX and BlueField DPU series) core Ethernet and RoCE Drivers for VMware ESXi | Mellanox-nmlx5_4.23.0.36-10.2vmw.800.1.20.21203435 | Bugfix | Critical |
VMware DesignWare I2C Driver | VMware-dwi2c_0.1-7vmw.800.1.20.21203435 | Bugfix | Critical |
Network driver for Intel(R) X710/XL710/XXV710/X722 Adapters | Intel-i40en_1.11.2.6-1vmw.800.1.20.21203435 | Bugfix | Critical |
Pensando Systems Native Ethernet Driver | Pensando-ionic-en_20.0.0-30vmw.800.1.20.21203435 | Bugfix | Critical |
ESXi Component - core ESXi VIBs | ESXi_8.0.0-1.15.21203431 | Security | Critical |
ESXi Install/Upgrade Component | esx-update_8.0.0-1.15.21203431, esxio-update_8.0.0-1.15.21203431 | Security | Critical |
ESXi Tools Component | VMware-VM-Tools_12.1.5.20735119-21203431 | Security | Critical |
Rollup Bulletin
This rollup bulletin contains the latest VIBs with all the fixes after the initial release of ESXi 8.0.
Bulletin ID | Category | Severity |
---|---|---|
ESXi80b-21203435 | Bugfix | Critical |
ESXi80sb-21203431 | Security | Critical |
Image Profiles
VMware patch and update releases contain general and critical image profiles. Apply the general release image profile to get the new bug fixes.
Image Profile Name |
ESXi-8.0b-21203435-standard |
ESXi-8.0b-21203435-no-tools |
ESXi-8.0sb-21203431-standard |
ESXi-8.0sb-21203431-no-tools |
ESXi Image
Name and Version | Release Date | Category | Detail |
---|---|---|---|
ESXi_8.0.0-1.20.21203435 | FEB 14 2023 | Bugfix | Bugfix image |
ESXi_8.0.0-1.15.21203431 | FEB 14 2023 | Security | Enhancement |
For information about the individual components and bulletins, see the Product Patches page and the Resolved Issues section.
For details on updates and upgrades by using vSphere Lifecycle Manager, see About vSphere Lifecycle Manager and vSphere Lifecycle Manager Baselines and Images. You can also update ESXi hosts without the use of vSphere Lifecycle Manager by using an image profile. To do this, you must manually download the patch offline bundle ZIP file from VMware Customer Connect. From the Select a Product drop-down menu, select ESXi (Embedded and Installable) and from the Select a Version drop-down menu, select 8.0. For more information, see the Upgrading Hosts by Using ESXCLI Commands and the VMware ESXi Upgrade guide.
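For reference, applying the patch from the ESXi shell with an image profile might look like the following sketch. The profile name comes from the Image Profiles section of these release notes; the datastore path is an assumption for your environment.
esxcli software sources profile list -d /vmfs/volumes/datastore1/VMware-ESXi-8.0b-21203435-depot.zip # list the image profiles in the depot
esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-8.0b-21203435-depot.zip -p ESXi-8.0b-21203435-standard
reboot # a host reboot is required to complete the update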
The resolved issues are grouped as follows.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 3053428, 3051651, 3057027, 3035652, 3065063, 3051004, 3062043, 3070831, 3065990, 3065990, 3041101, 2988813, 3047637, 3044476, 3070865, 3078203, 3055341, 3081041, 3035652, 3051140, 3041537, 3050240, 3044476, 3051933, 3057524, 3048660, 3053504, 3026095, 3026096, 3064350, 3064715, 3058714, 3031741, 3064723, 3064425, 3055431, 3064226, 3027341, 3063928, 3064308, 3064605, 3053812, 3053705, 3050596, 3062043, 3050878, 3030641, 3031898, 3057619, 3064334, 3051004, 3047546, 3064068, 3064055, 3055374, 3064059, 3064063, 3064062, 3064054, 3064053, 3062062, 3064096, 3066138, 3065063, 3064848, 3068605, 3064060, 3034879, 3051651, 3081884, 3078334, 3073025, 3070865, 3073230, 3071612, 3070831, 3068028, 3071614, 3071981, 3072190, 3065990, 3081686, 3062907, 3055084, 3069024, 3069113, 3061447, 3065694 |
CVE numbers | N/A |
Updates the esx-xserver, gc-esxio, vdfs, esxio-combiner-esxio, native-misc-drivers-esxio, trx, esx-dvfilter-generic-fastpath, esxio-base, esxio-dvfilter-generic-fastpath, esxio, cpu-microcode, clusterstore, native-misc-drivers, esxio-combiner, gc, bmcal, esx-base, vsanhealth, vsan, drivervm-gpu, bmcal-esxio, esx-ui, and crx VIBs to resolve the following issues:
PR 3053428: Direct Console User Interface (DCUI) does not display network adapters
When you select Network Adapters in the DCUI, you see no output and cannot modify the network configuration. In the /var/log/syslog.log file, you see an error such as: 2022-xx-xx:02:02.xxxx Er(11) DCUI[265583]: Scanning dvswitches failed: Unable to get the dvs name: Status(bad0001)= Failure.
This issue is resolved in this release.
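If you are still on an earlier build and suspect you are hitting this issue, you can search the syslog for the failure string from the ESXi shell, for example:
grep "Scanning dvswitches failed" /var/log/syslog.log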
PR 3051651: Virtual machines might intermittently experience soft lockups inside the guest kernel on Intel machines
Starting with 7.0 Update 2, ESXi supports Posted Interrupts (PI) on Intel CPUs for PCI passthrough devices to improve the overall system performance. In some cases, a race between PIs and the VMkernel scheduling might occur. As a result, virtual machines that are configured with PCI passthrough devices with normal or low latency sensitivity might experience soft lockups.
This issue is resolved in this release.
PR 3057027: When you change the NUMA nodes Per Socket (NPS) setting on an AMD EPYC 9004 'Genoa' server from the default of 1, you might see an incorrect number of sockets
In the vSphere Client and the ESXi Host Client, you might see an incorrect number of sockets, and consequently an incorrect number of cores per socket, when you change the NPS setting on an AMD EPYC 9004 server from the default of 1, or Auto, to another value, such as 2 or 4.
This issue is resolved in this release.
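If you are on an earlier build and want to cross-check the topology that the host itself reports, you can query the package and core counts from the ESXi shell, for example:
esxcli hardware cpu global get # reports the CPU packages, cores, and threads seen by the VMkernel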
PR 3035652: ESXi hosts might fail with a purple diagnostic screen when a VM with bus sharing set to Physical issues a SCSI-2 reserve command
Windows 2012 and later use SCSI-3 reservations for resource arbitration to support Windows Server Failover Clustering (WSFC) on ESXi for cluster-across-box (CAB) configurations. However, if you configure the bus sharing of the SCSI controller on that VM to Physical, the SCSI RESERVE command causes the ESXi host to fail with a purple diagnostic screen. SCSI RESERVE is a SCSI-2 semantic and is not supported with WSFC clusters on ESXi.
This issue is resolved in this release.
PR 3065063: Virtual machines might take longer to respond or become temporarily unresponsive when vSphere DRS triggers multiple migration operations
In rare cases, multiple concurrent open and close operations on datastores, for example when vSphere DRS triggers multiple migration tasks, might cause some VMs to take longer to respond or become unresponsive for a short time. The issue is more likely to occur on VMs that have multiple virtual disks over different datastores.
This issue is resolved in this release.
PR 3051004: You might see incorrect facility and severity values in ESXi syslog log messages
The logger command of some logger clients might not pass the right facility code to the ESXi syslog daemon and cause incorrect facility or severity values in ESXi syslog log messages.
This issue is resolved in this release.
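As a quick way to observe which facility and severity actually land in the log, you can emit a test message with an explicit priority and then inspect /var/log/syslog.log. This is a sketch that assumes the standard logger -p syntax is available in your shell:
logger -p user.err "facility and severity test message"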
PR 3062043: vSAN File Service fails to enable when an isolated witness network is configured
vSAN File Service requires hosts to communicate with each other. File Service might incorrectly use an IP address in the witness network for inter-communication. If you have configured an isolated witness network for vSAN, the host can communicate with a witness node over the witness network, but hosts cannot communicate with each other over the witness network. Communication between hosts for vSAN File Service cannot be established.
This issue is resolved in this release.
PR 3070831: ESXi hosts might fail with a purple diagnostic screen after a migration operation
A rare race condition caused by memory reclamation for some performance optimization processes in vSphere vMotion might cause ESXi hosts to fail with a purple diagnostic screen.
This issue is resolved in this release.
PR 3065990: On vSphere systems with more than 448 CPUs, ESXi hosts might fail with a purple diagnostic screen upon upgrade or reboot
In rare cases, on vSphere systems with more than 448 CPUs, the calculation of memory utilization overflows a 32-bit value and wraps around to a value lower than the expected allocation. As a result, ESXi hosts might fail with a purple diagnostic screen upon upgrade or reboot.
This issue is resolved in this release.
PR 3041101: When TPM 2.0 is enabled with TXT on an ESXi host, attempts to power-on a virtual machine might fail
When TXT is enabled on an ESXi host, attempts to power on a VM might fail with an error. In the vSphere Client, you see a message such as: This host supports Intel VT-x, but Intel VT-x is restricted. Intel VT-x might be restricted because 'trusted execution' has been enabled in the BIOS/firmware settings or because the host has not been power-cycled since changing this setting.
This issue is resolved in this release.
PR 2988813: If a parallel remediation task fails, you do not see the correct number of ESXi hosts that passed or skipped the operation
With vSphere 8.0, you can enable vSphere Lifecycle Manager to remediate all hosts that are in maintenance mode in parallel instead of in sequence. However, if a parallel remediation task fails, in the vSphere Client you might not see the correct number of hosts that passed, failed, or skipped the operation, or even not see such counts at all. The issue does not affect the vSphere Lifecycle Manager functionality, but only the reporting in the vSphere Client.
This issue is resolved in this release.
PR 3047637: ESXi hosts might fail with a purple diagnostic screen during a Fast Suspend Resume (FSR) operation on a VM with Transparent Page Sharing (TPS) enabled
Due to a rare race condition during a Fast Suspend Resume (FSR) operation on a VM with shared pages between VMs, ESXi hosts might fail with a purple diagnostic screen with an error such as PFrame_IsBackedByLPage in the backtrace.
This issue is resolved in this release.
PR 3044476: Virtual machines might fail with ESX unrecoverable error due to a rare issue with handling AVX2 instructions
Due to a rare issue with handling AVX2 instructions, a virtual machine of version ESX 8.0 might fail with an ESX unrecoverable error. In the vmware.log file, you see a message such as: MONITOR PANIC: vcpu-0:VMM fault 6: src=MONITOR ... The issue is specific to virtual machines with hardware version 12 or earlier.
This issue is resolved in this release.
PR 3041057: The guest operating system of a VM might become unresponsive due to lost communication over the Virtual Machine Communication Interface (VMCI)
In very specific circumstances, when a vSphere vMotion operation on a virtual machine runs in parallel with an operation that sends VMCI datagrams, services that use VMCI datagrams might see unexpected communication or loss of communication. Under the same conditions, the issue can also happen when restoring a memory snapshot, resuming a suspended VM or using CPU Hot Add. As a result, the guest operating system that depends on services communicating over VMCI might become unresponsive. The issue might also affect services that use vSockets over VMCI. This problem does not impact VMware Tools. The issue is specific for VMs on hardware version 20 with a Linux distribution that has specific patches introduced in Linux kernel 5.18 for a VMCI feature, including but not limited to up-to-date versions of RHEL 8.7, Ubuntu 22.04 and 22.10, and SLES15 SP3, and SP4.
This issue is resolved in this release.
PR 3041057: Linux guest operating system cannot complete booting if Direct Memory Access (DMA) remapping is enabled
If the advanced processor setting Enable IOMMU in this virtual machine is enabled on a virtual machine, and the guest operating system has enabled DMA remapping, the Linux guest operating system might fail to complete the booting process. This issue affects VMs with hardware version 20 and a Linux distribution that has specific patches introduced in Linux kernel 5.18 for a VMCI feature, including but not limited to up-to-date versions of RHEL 8.7, Ubuntu 22.04 and 22.10, and SLES15 SP3, and SP4.
This issue is resolved in this release.
For ESXi hosts in vSAN clusters of version earlier than vSAN 6.2, upgrades to ESXi 8.0 might cause loss of the VMkernel port tagging for vSAN. As a result, the vSAN networking configuration does not exist after the upgrade.
This issue is resolved in this release. For earlier releases, you can manually fix the issue by enabling VMkernel traffic for vSAN on the ESXi host after the upgrade.
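For the manual workaround on earlier releases, re-tagging the vSAN VMkernel interface from the ESXi shell might look like the following sketch; vmk1 is a placeholder for the VMkernel adapter that carries vSAN traffic in your environment:
esxcli vsan network ip add -i vmk1 # re-enable vSAN traffic on the VMkernel adapter
esxcli vsan network list # verify the vSAN network configuration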
In VMware Aria Operations for Logs, formerly vRealize Log Insight, you might see a large volume of logs generated by Storage I/O Control, such as Invalid share value: 0. Using default. and Skipping device naa.xxxx either due to VSI read error or abnormal state. The volume of logs varies depending on the number of ESXi hosts in a cluster and the number of devices in switched off state. When the issue occurs, the log volume generates quickly, within 24 hours, and VMware Aria Operations for Logs might classify the messages as critical. However, such logs are harmless and do not impact the operations on other datastores that are online.
This issue is resolved in this release. The fix moves such logs from error to trivia to prevent misleading logging.
When you enable the ToolsRamdisk advanced option that makes sure the /vmtools partition is always on a RAM disk rather than on a USB or SD-card, installing or updating the tools-light VIB does not update the content of the RAM disk automatically. Instead, you must reboot the host so that the RAM disk updates and the new VM Tools version is available for the VMs.
This issue is resolved in this release. The fix makes sure that when ToolsRamdisk is enabled, VM Tools updates automatically, without the need of manual steps or restart.
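As a reminder, the option is typically toggled from the ESXi shell. The following is a sketch that assumes the option is exposed as /UserVars/ToolsRamdisk on your build:
esxcli system settings advanced set -o /UserVars/ToolsRamdisk -i 1 # keep /vmtools on a RAM disk
esxcli system settings advanced list -o /UserVars/ToolsRamdisk # verify the current value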
When you try to export a file with a list of Inventory objects, such as VMs, Hosts, and Datastores, the task fails with an error Export Data Failure.
This issue is resolved in this release.
If the vSAN sockrelay runs out of memory, it can cause spurious errors in the vSAN File Service. The vSAN health service shows numerous file service health check errors. The following entry appears in the /var/run/log/sockrelay.log file: Failed to create thread: No space left on device.
This issue is resolved in this release.
If the network firewall rules are refreshed on the ESXi host, vSAN iSCSI firewall rules might be lost. This problem can impact vSAN iSCSI network connections.
This issue is resolved in this release.
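If you are on an earlier build, you can check whether the vSAN iSCSI target rules survive a firewall refresh by listing the rulesets from the ESXi shell and looking for the vSAN iSCSI target entry, for example:
esxcli network firewall ruleset list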
The installation and upgrade of VMware NSX by using the NSX Manager in a vSphere environment with DPUs on ESXi 8.0 hosts might fail with a Configuration State error and a message similar to:
Host configuration: Failed to send the HostConfig message. [TN=TransportNode/ce1dba65-7fa5-4a47-bf84-e897859fe0db]. Reason: Failed to send HostConfig RPC to MPA TN:ce1dba65-7fa5-4a47-bf84-e897859fe0db. Error: Unable to reach client ce1dba65-7fa5-4a47-bf84-e897859fe0db, application SwitchingVertical. LogicalSwitch full-sync: LogicalSwitch full-sync realization query skipped.
The issue occurs because a DPU scan prevents the ESXi 8.0 host from rebooting after installing the NSX image on the DPU.
This issue is resolved in this release. If you do not update to ESXi 8.0b, do the following:
VMware-ESXi-8.0a-20842819-depot.zip.
Starting from ESXi 6.0, mClock is the default I/O scheduler for ESXi, but some environments might still use legacy schedulers of ESXi versions earlier than 6.0. As a result, upgrades of such hosts to ESXi 7.0 Update 3 and later might fail with a purple diagnostic screen.
This issue is resolved in this release.
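If you want to confirm which I/O scheduler a host uses before upgrading, the following is a sketch that assumes the mClock toggle is exposed as the Disk.SchedulerWithReservation advanced option:
esxcli system settings advanced list -o /Disk/SchedulerWithReservation # a value of 1 indicates mClock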
After upgrading to ESXi 8.0, you might not be able to attach an existing virtual machine disk (VMDK) to a VM by using the VMware Host Client, which is used to connect to and manage single ESXi hosts. The issue does not affect the vSphere Client.
This issue is resolved in this release.
The NVIDIA vGPU 15.0 driver supports multiple fractional vGPU devices in a single VM, but this does not work on non-SR-IOV GPUs such as NVIDIA T4.
This issue is resolved in this release. The fix ensures multiple fractional vGPU devices can work on all supported GPU hardware.
Data-in-Transit encryption is not supported on clusters with HCI Mesh configuration. If you deactivate vSAN with DIT encryption, and then reenable vSAN without DIT encryption, the RDT counter for DIT encryption is not decremented correctly. When you enable HCI mesh, the RDT code assumes that traffic needs to be encrypted. The unencrypted traffic is dropped, causing loss of connectivity to the HCI mesh datastore.
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 3045268 |
CVE numbers | N/A |
Updates the loadesx and esx-update VIBs.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the esxio-update and loadesxio VIBs.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 3059592, 3054305, 3048127, 3064828, 3060495 |
CVE numbers | N/A |
Updates the nmlx5-rdma, nmlx5-rdma-esxio, nmlx5-core-esxio, and nmlx5-core VIBs to resolve the following issue:
If your underlying hardware allows devices to support a TCP window scale option higher than 7, you might see a reduction in TCP throughput in vSphere environments with Nvidia Bluefield 2 DPUs, because the Nvidia Bluefield 2 scale limit is 7.
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 3057364 |
CVE numbers | N/A |
Updates the dwi2c-esxio and dwi2c VIBs.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 3046937 |
CVE numbers | N/A |
Updates the i40en VIB.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 3041427 |
CVE numbers | N/A |
Updates the ionic-en-esxio and ionic-en VIBs to resolve the following issue:
Pensando Distributed Services Platform (DSC) adapters have 2 high speed ethernet controllers (for example vmnic6
and vmnic7
) and one management controller (for example vmnic8
):
:~] esxcfg-nics -l
vmnic6 0000:39:00.0 ionic_en_unstable Up 25000Mbps Full 00:ae:cd:09:c9:48 1500 Pensando Systems DSC-25 10/25G 2-port 4G RAM 8G eMMC G1 Services Card, Ethernet Controller
vmnic7 0000:3a:00.0 ionic_en_unstable Up 25000Mbps Full 00:ae:cd:09:c9:49 1500 Pensando Systems DSC-25 10/25G 2-port 4G RAM 8G eMMC G1 Services Card, Ethernet Controller
:~] esxcfg-nics -lS
vmnic8 0000:3b:00.0 ionic_en_unstable Up 1000Mbps Full 00:ae:cd:09:c9:4a 1500 Pensando Systems DSC-25 10/25G 2-port 4G RAM 8G eMMC G1 Services Card, Management Controller
The high-speed ethernet controllers vmnic6 and vmnic7 register first and operate with RSS set to 16 receive queues.
:~] localcli --plugin-dir /usr/lib/vmware/esxcli/int networkinternal nic privstats get -n vmnic6…Num of RSS-Q=16, ntxq_descs=2048, nrxq_descs=1024, log_level=3, vlan_tx_insert=1, vlan_rx_strip=1, geneve_offload=1 }
However, in rare cases, if the management controller vmnic8 registers first with the vSphere Distributed Switch, the high-speed ethernet controllers vmnic6 or vmnic7 uplink might end up operating with RSS set to 1 receive queue.
:~] localcli --plugin-dir /usr/lib/vmware/esxcli/int networkinternal nic privstats get -n vmnic6…Num of RSS-Q=1, ntxq_descs=2048, nrxq_descs=1024, log_level=3, vlan_tx_insert=1, vlan_rx_strip=1, geneve_offload=1 }
As a result, you might see slower performance in native mode.
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 3063528 |
CVE numbers | N/A |
Updates the lpfc VIB to resolve the following issue:
On servers with tens of devices attached, dump files for each of the devices might exceed the total limit. As a result, when you reboot an ESXi host or unload the lpfc driver, the host might fail with a purple diagnostic screen and an error #PF exception vmk_DumpAddFileCallback() identifying that the dump file resources are used up.
This issue is resolved in this release.
Patch Category | Security |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 3049562, 3049613, 3049619, 3049640, 3050030, 3050032, 3050037, 3050512, 3054127, 3068093, 3030692 |
CVE numbers | CVE-2020-28196 |
The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.
Updates the esx-ui, esx-xserver, cpu-microcode, trx, vsanhealth, esx-base, esx-dvfilter-generic-fastpath, gc, esxio-combiner, native-misc-drivers, bmcal, vsan, vdfs, and crx VIBs to resolve the following issue:
ESXi 8.0b provides the following security updates:
The SQLite database is updated to version 3.39.2.
The Python library is updated to version 3.8.15.
OpenSSL is updated to version 3.0.7.
The libxml2 library is updated to version 2.10.2.
The Expat XML parser is updated to version 2.5.0.
The lxml XML toolkit is updated to version 4.5.2.
cURL is updated to version 7.84.
The Go library is updated to version 1.18.6.
The tcpdump package is updated to version 4.99.1.
The cpu-microcode VIB includes the following Intel microcode:
Code Name | FMS | Plt ID | MCU Rev | MCU Date | Brand Names |
---|---|---|---|---|---|
Nehalem EP | 0x106a5 (06/1a/5) | 0x03 | 0x0000001d | 5/11/2018 | Intel Xeon 35xx Series; Intel Xeon 55xx Series |
Clarkdale | 0x20652 (06/25/2) | 0x12 | 0x00000011 | 5/8/2018 | Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series |
Arrandale | 0x20655 (06/25/5) | 0x92 | 0x00000007 | 4/23/2018 | Intel Core i7-620LE Processor |
Sandy Bridge DT | 0x206a7 (06/2a/7) | 0x12 | 0x0000002f | 2/17/2019 | Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series |
Westmere EP | 0x206c2 (06/2c/2) | 0x03 | 0x0000001f | 5/8/2018 | Intel Xeon 56xx Series; Intel Xeon 36xx Series |
Sandy Bridge EP | 0x206d6 (06/2d/6) | 0x6d | 0x00000621 | 3/4/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series |
Sandy Bridge EP | 0x206d7 (06/2d/7) | 0x6d | 0x0000071a | 3/24/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series |
Nehalem EX | 0x206e6 (06/2e/6) | 0x04 | 0x0000000d | 5/15/2018 | Intel Xeon 65xx Series; Intel Xeon 75xx Series |
Westmere EX | 0x206f2 (06/2f/2) | 0x05 | 0x0000003b | 5/16/2018 | Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series |
Ivy Bridge DT | 0x306a9 (06/3a/9) | 0x12 | 0x00000021 | 2/13/2019 | Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C |
Haswell DT | 0x306c3 (06/3c/3) | 0x32 | 0x00000028 | 11/12/2019 | Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series |
Ivy Bridge EP | 0x306e4 (06/3e/4) | 0xed | 0x0000042e | 3/14/2019 | Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series |
Ivy Bridge EX | 0x306e7 (06/3e/7) | 0xed | 0x00000715 | 3/14/2019 | Intel Xeon E7-8800/4800/2800-v2 Series |
Haswell EP | 0x306f2 (06/3f/2) | 0x6f | 0x00000049 | 8/11/2021 | Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series |
Haswell EX | 0x306f4 (06/3f/4) | 0x80 | 0x0000001a | 5/24/2021 | Intel Xeon E7-8800/4800-v3 Series |
Broadwell H | 0x40671 (06/47/1) | 0x22 | 0x00000022 | 11/12/2019 | Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series |
Avoton | 0x406d8 (06/4d/8) | 0x01 | 0x0000012d | 9/16/2019 | Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series |
Broadwell EP/EX | 0x406f1 (06/4f/1) | 0xef | 0x0b000040 | 5/19/2021 | Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series |
Skylake SP | 0x50654 (06/55/4) | 0xb7 | 0x02006e05 | 3/8/2022 | Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series |
Cascade Lake B-0 | 0x50656 (06/55/6) | 0xbf | 0x04003302 | 12/10/2021 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200 |
Cascade Lake | 0x50657 (06/55/7) | 0xbf | 0x05003302 | 12/10/2021 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200 |
Cooper Lake | 0x5065b (06/55/b) | 0xbf | 0x07002501 | 11/19/2021 | Intel Xeon Platinum 8300 Series; Intel Xeon Gold 6300/5300 |
Broadwell DE | 0x50662 (06/56/2) | 0x10 | 0x0000001c | 6/17/2019 | Intel Xeon D-1500 Series |
Broadwell DE | 0x50663 (06/56/3) | 0x10 | 0x0700001c | 6/12/2021 | Intel Xeon D-1500 Series |
Broadwell DE | 0x50664 (06/56/4) | 0x10 | 0x0f00001a | 6/12/2021 | Intel Xeon D-1500 Series |
Broadwell NS | 0x50665 (06/56/5) | 0x10 | 0x0e000014 | 9/18/2021 | Intel Xeon D-1600 Series |
Skylake H/S | 0x506e3 (06/5e/3) | 0x36 | 0x000000f0 | 11/12/2021 | Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series |
Denverton | 0x506f1 (06/5f/1) | 0x01 | 0x00000038 | 12/2/2021 | Intel Atom C3000 Series |
Ice Lake SP | 0x606a6 (06/6a/6) | 0x87 | 0x0d000375 | 4/7/2022 | Intel Xeon Silver 4300 Series; Intel Xeon Gold 6300/5300 Series; Intel Xeon Platinum 8300 Series |
Ice Lake D | 0x606c1 (06/6c/1) | 0x10 | 0x010001f0 | 6/24/2022 | Intel Xeon D Series |
Snow Ridge | 0x80665 (06/86/5) | 0x01 | 0x4c000020 | 5/10/2022 | Intel Atom P5000 Series |
Snow Ridge | 0x80667 (06/86/7) | 0x01 | 0x4c000020 | 5/10/2022 | Intel Atom P5000 Series |
Kaby Lake H/S/X | 0x906e9 (06/9e/9) | 0x2a | 0x000000f0 | 11/12/2021 | Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series |
Coffee Lake | 0x906ea (06/9e/a) | 0x22 | 0x000000f0 | 11/15/2021 | Intel Xeon E-2100 Series; Intel Xeon E-2200 Series (4 or 6 core) |
Coffee Lake | 0x906eb (06/9e/b) | 0x02 | 0x000000f0 | 11/12/2021 | Intel Xeon E-2100 Series |
Coffee Lake | 0x906ec (06/9e/c) | 0x22 | 0x000000f0 | 11/15/2021 | Intel Xeon E-2100 Series |
Coffee Lake Refresh | 0x906ed (06/9e/d) | 0x22 | 0x000000f4 | 7/31/2022 | Intel Xeon E-2200 Series (8 core) |
Rocket Lake S | 0xa0671 (06/a7/1) | 0x02 | 0x00000056 | 8/2/2022 | Intel Xeon E-2300 Series |
Patch Category | Security |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the loadesx and esx-update VIBs.
Patch Category | Security |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the esxio-update and loadesxio VIBs.
Patch Category | Security |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 3070447 |
CVE numbers | N/A |
Updates the tools-light VIB.
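To verify which VMware Tools package a host carries after patching, you can query the VIB from the ESXi shell, for example:
esxcli software vib get -n tools-light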
The following VMware Tools ISO images are bundled with ESXi 8.0b:
windows.iso: VMware Tools 12.1.5 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.
linux.iso: VMware Tools 10.3.25 ISO image for Linux OS with glibc 2.11 or later.
The following VMware Tools ISO images are available for download:
VMware Tools 11.0.6:
windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).
VMware Tools 10.0.12:
winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003.
linuxPreGLibc25.iso: supports Linux guest operating systems earlier than Red Hat Enterprise Linux (RHEL) 5, SUSE Linux Enterprise Server (SLES) 11, Ubuntu 7.04, and other distributions with glibc version earlier than 2.5.
solaris.iso: VMware Tools image 10.3.10 for Solaris.
darwin.iso: Supports Mac OS X versions 10.11 and later. VMware Tools 12.1.0 was the last regular release for macOS. Refer to VMware knowledge base article 88698 for details.
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
Profile Name | ESXi-8.0b-21203435-standard |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | February 14, 2023 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 3053428, 3051651, 3057027, 3035652, 3065063, 3051004, 3062043, 3070831, 3065990, 3065990, 3041101, 2988813, 3047637, 3044476, 3070865, 3078203, 3055341, 3081041, 3064063, 3064062, 3064054, 3064053, 3062062, 3064096, 3066138, 3065063, 3064848, 3068605, 3064060, 3034879, 3051651, 3081884, 3078334, 3073025, 3070865, 3073230, 3071612, 3070831, 3068028, 3071614, 3071981, 3072190, 3065990, 3081686, 3062907, 3055084, 3069024 |
Related CVE numbers | N/A |
When you select Network Adapters in the DCUI, you see no output and cannot modify the network configuration. In the /var/log/syslog.log file, you see an error such as: 2022-xx-xx:02:02.xxxx Er(11) DCUI[265583]: Scanning dvswitches failed: Unable to get the dvs name: Status(bad0001)= Failure.
Starting with 7.0 Update 2, ESXi supports Posted Interrupts (PI) on Intel CPUs for PCI passthrough devices to improve the overall system performance. In some cases, a race between posted interrupts and the VMkernel scheduling might occur. As a result, virtual machines that are configured with PCI passthrough devices with normal or low latency sensitivity might experience soft lockups.
In the vSphere Client and the ESXi Host Client, you might see an incorrect number of sockets, and consequently an incorrect number of cores per socket, when you change the NPS setting on an AMD EPYC 9004 server from the default of 1, or Auto, to another value, such as 2 or 4.
Windows 2012 and later use SCSI-3 reservations for resource arbitration to support Windows Server Failover Clustering (WSFC) on ESXi for cluster-across-box (CAB) configurations. However, if you configure the bus sharing of the SCSI controller on that VM to Physical, the SCSI RESERVE command causes the ESXi host to fail with a purple diagnostic screen. SCSI RESERVE is a SCSI-2 semantic and is not supported with WSFC clusters on ESXi.
In rare cases, multiple concurrent open and close operations on datastores, for example when vSphere DRS triggers multiple migration tasks, might cause some VMs to take longer to respond or become unresponsive for a short time. The issue is more likely to occur on VMs that have multiple virtual disks over different datastores.
The logger command of some logger clients might not pass the right facility code to the ESXi syslog daemon and cause incorrect facility or severity values in ESXi syslog log messages.
vSAN File Service requires hosts to communicate with each other. File Service might incorrectly use an IP address in the witness network for inter-communication. If you have configured an isolated witness network for vSAN, the host can communicate with a witness node over the witness network, but hosts cannot communicate with each other over the witness network. Communication between hosts for vSAN File Service cannot be established.
A rare race condition caused by memory reclamation for some performance optimization processes in vSphere vMotion might cause ESXi hosts to fail with a purple diagnostic screen.
In rare cases, on vSphere systems with more than 448 CPUs, the calculation of memory utilization overflows a 32-bit value and wraps around to a value lower than the expected allocation. As a result, ESXi hosts might fail with a purple diagnostic screen upon upgrade or reboot.
This issue is resolved in this release.
When TXT is enabled on an ESXi host, attempts to power on a VM might fail with an error. In the vSphere Client, you see a message such as: This host supports Intel VT-x, but Intel VT-x is restricted. Intel VT-x might be restricted because 'trusted execution' has been enabled in the BIOS/firmware settings or because the host has not been power-cycled since changing this setting.
With vSphere 8.0, you can enable vSphere Lifecycle Manager to remediate all hosts that are in maintenance mode in parallel instead of in sequence. However, if a parallel remediation task fails, in the vSphere Client you might not see the correct number of hosts that passed, failed, or skipped the operation, or even not see such counts at all. The issue does not affect the vSphere Lifecycle Manager functionality, but only the reporting in the vSphere Client.
Due to a rare race condition during a Fast Suspend Resume (FSR) operation on a VM with shared pages between VMs, ESXi hosts might fail with a purple diagnostic screen with an error such as PFrame_IsBackedByLPage in the backtrace.
Due to a rare issue with handling AVX2 instructions, a virtual machine of version ESX 8.0 might fail with an ESX unrecoverable error. In the vmware.log file, you see a message such as: MONITOR PANIC: vcpu-0:VMM fault 6: src=MONITOR ... The issue is specific to virtual machines with hardware version 12 or earlier.
For ESXi hosts in vSAN clusters of version earlier than vSAN 6.2, upgrades to ESXi 8.0 might cause loss of the VMkernel port tagging for vSAN. As a result, the vSAN networking configuration does not exist after the upgrade.
In VMware Aria Operations for Logs, formerly vRealize Log Insight, you might see a large volume of logs generated by Storage I/O Control, such as Invalid share value: 0. Using default. and Skipping device naa.xxxx either due to VSI read error or abnormal state. The volume of logs varies depending on the number of ESXi hosts in a cluster and the number of devices in switched off state. When the issue occurs, the log volume generates quickly, within 24 hours, and VMware Aria Operations for Logs might classify the messages as critical. However, such logs are harmless and do not impact the operations on other datastores that are online.
When you enable the ToolsRamdisk advanced option that makes sure the /vmtools partition is always on a RAM disk rather than on a USB or SD-card, installing or updating the tools-light VIB does not update the content of the RAM disk automatically. Instead, you must reboot the host so that the RAM disk updates and the new VM Tools version is available for the VMs.
When you try to export a file with a list of Inventory objects, such as VMs, Hosts, and Datastores, the task fails with an error Export Data Failure.
If the vSAN sockrelay runs out of memory, it can cause spurious errors in the vSAN File Service. The vSAN health service shows numerous file service health check errors. The following entry appears in the /var/run/log/sockrelay.log file: Failed to create thread: No space left on device.
If the network firewall rules are refreshed on the ESXi host, vSAN iSCSI firewall rules might be lost. This problem can impact vSAN iSCSI network connections.
The installation and upgrade of VMware NSX by using the NSX Manager in a vSphere environment with DPUs on ESXi 8.0 hosts might fail with a Configuration State error and a message similar to:
Host configuration: Failed to send the HostConfig message. [TN=TransportNode/ce1dba65-7fa5-4a47-bf84-e897859fe0db]. Reason: Failed to send HostConfig RPC to MPA TN:ce1dba65-7fa5-4a47-bf84-e897859fe0db. Error: Unable to reach client ce1dba65-7fa5-4a47-bf84-e897859fe0db, application SwitchingVertical. LogicalSwitch full-sync: LogicalSwitch full-sync realization query skipped.
The issue occurs because a DPU scan prevents the ESXi 8.0 host from rebooting after installing the NSX image on the DPU.
Starting from ESXi 6.0, mClock is the default I/O scheduler for ESXi, but some environments might still use legacy schedulers of ESXi versions earlier than 6.0. As a result, upgrades of such hosts to ESXi 7.0 Update 3 and later might fail with a purple diagnostic screen.
After upgrading to ESXi 8.0, you might not be able to attach an existing virtual machine disk (VMDK) to a VM by using the VMware Host Client, which is used to connect to and manage single ESXi hosts. The issue does not affect the vSphere Client.
The NVIDIA vGPU 15.0 driver supports multiple fractional vGPU devices in a single VM, but this does not work on non-SR-IOV GPUs such as NVIDIA T4.
Data-in-Transit encryption is not supported on clusters with HCI Mesh configuration. If you deactivate vSAN with DIT encryption, and then reenable vSAN without DIT encryption, the RDT counter for DIT encryption is not decremented correctly. When you enable HCI mesh, the RDT code assumes that traffic needs to be encrypted. The unencrypted traffic is dropped, causing loss of connectivity to the HCI mesh datastore.
If your underlying hardware allows devices to support a TCP window scale option higher than 7, you might see a reduction in TCP throughput in vSphere environments with Nvidia Bluefield 2 DPUs, because the Nvidia Bluefield 2 scale limit is 7.
Pensando Distributed Services Platform (DSC) adapters have 2 high speed ethernet controllers (for example vmnic6 and vmnic7) and one management controller (for example vmnic8):
:~] esxcfg-nics -l
vmnic6 0000:39:00.0 ionic_en_unstable Up 25000Mbps Full 00:ae:cd:09:c9:48 1500 Pensando Systems DSC-25 10/25G 2-port 4G RAM 8G eMMC G1 Services Card, Ethernet Controller
vmnic7 0000:3a:00.0 ionic_en_unstable Up 25000Mbps Full 00:ae:cd:09:c9:49 1500 Pensando Systems DSC-25 10/25G 2-port 4G RAM 8G eMMC G1 Services Card, Ethernet Controller
:~] esxcfg-nics -lS
vmnic8 0000:3b:00.0 ionic_en_unstable Up 1000Mbps Full 00:ae:cd:09:c9:4a 1500 Pensando Systems DSC-25 10/25G 2-port 4G RAM 8G eMMC G1 Services Card, Management Controller
The high-speed ethernet controllers vmnic6 and vmnic7 register first and operate with RSS set to 16 receive queues.
:~] localcli --plugin-dir /usr/lib/vmware/esxcli/int networkinternal nic privstats get -n vmnic6…Num of RSS-Q=16, ntxq_descs=2048, nrxq_descs=1024, log_level=3, vlan_tx_insert=1, vlan_rx_strip=1, geneve_offload=1 }
However, in rare cases, if the management controller vmnic8 registers first with the vSphere Distributed Switch, the high-speed ethernet controllers vmnic6 or vmnic7 uplink might end up operating with RSS set to 1 receive queue.
:~] localcli --plugin-dir /usr/lib/vmware/esxcli/int networkinternal nic privstats get -n vmnic6…Num of RSS-Q=1, ntxq_descs=2048, nrxq_descs=1024, log_level=3, vlan_tx_insert=1, vlan_rx_strip=1, geneve_offload=1 }
As a result, you might see slower performance in native mode.
On servers with tens of devices attached, dump files for each of the devices might exceed the total limit. As a result, when you reboot an ESXi host or unload the lpfc driver, the host might fail with a purple diagnostic screen and an error #PF exception vmk_DumpAddFileCallback() identifying that the dump file resources are used up.
In very specific circumstances, when a vSphere vMotion operation on a virtual machine runs in parallel with an operation that sends VMCI datagrams, services that use VMCI datagrams might see unexpected communication or loss of communication. Under the same conditions, the issue can also happen when restoring a memory snapshot, resuming a suspended VM or using CPU Hot Add. As a result, the guest operating system that depends on services communicating over VMCI might become unresponsive. The issue might also affect services that use vSockets over VMCI. This problem does not impact VMware Tools. The issue is specific for VMs on hardware version 20 with a Linux distribution that has specific patches introduced in Linux kernel 5.18 for a VMCI feature, including but not limited to up-to-date versions of RHEL 8.7, Ubuntu 22.04 and 22.10, and SLES15 SP3, and SP4.
If the advanced processor setting Enable IOMMU in this virtual machine is enabled on a virtual machine, and the guest operating system has enabled DMA remapping, the Linux guest operating system might fail to complete the booting process. This issue affects VMs with hardware version 20 and a Linux distribution that has specific patches introduced in Linux kernel 5.18 for a VMCI feature, including but not limited to up-to-date versions of RHEL 8.7, Ubuntu 22.04 and 22.10, and SLES15 SP3, and SP4.
Profile Name | ESXi-8.0b-21203435-no-tools |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | February 14, 2023 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 3053428, 3051651, 3057027, 3035652, 3065063, 3051004, 3062043, 3070831, 3065990, 3065990, 3041101, 2988813, 3047637, 3044476, 3070865, 3078203, 3055341, 3081041, 3064063, 3064062, 3064054, 3064053, 3062062, 3064096, 3066138, 3065063, 3064848, 3068605, 3064060, 3034879, 3051651, 3081884, 3078334, 3073025, 3070865, 3073230, 3071612, 3070831, 3068028, 3071614, 3071981, 3072190, 3065990, 3081686, 3062907, 3055084, 3069024, 3061447, 3065694, 3045268, 3059592, 3054305, 3048127, 3041427, 3063528 |
Related CVE numbers | N/A |
When you select Network Adapters in the DCUI, you see no output and cannot modify the network configuration. In the /var/log/syslog.log file, you see an error such as: 2022-xx-xx:02:02.xxxx Er(11) DCUI[265583]: Scanning dvswitches failed: Unable to get the dvs name: Status(bad0001)= Failure.
Starting with 7.0 Update 2, ESXi supports Posted Interrupts (PI) on Intel CPUs for PCI passthrough devices to improve the overall system performance. In some cases, a race between posted interrupts and the VMkernel scheduling might occur. As a result, virtual machines that are configured with PCI passthrough devices with normal or low latency sensitivity might experience soft lockups.
In the vSphere Client and the ESXi Host Client, you might see an incorrect number of sockets, and consequently an incorrect number of cores per socket, when you change the NPS setting on an AMD EPYC 9004 server from the default of 1, or Auto, to another value, such as 2 or 4.
Windows 2012 and later use SCSI-3 reservations for resource arbitration to support Windows Server Failover Clustering (WSFC) on ESXi for cluster-across-box (CAB) configurations. However, if you configure the bus sharing of the SCSI controller on that VM to Physical, the SCSI RESERVE command causes the ESXi host to fail with a purple diagnostic screen. SCSI RESERVE is a SCSI-2 semantic and is not supported with WSFC clusters on ESXi.
In rare cases, multiple concurrent open and close operations on datastores, for example when vSphere DRS triggers multiple migration tasks, might cause some VMs to take longer to respond or become unresponsive for a short time. The issue is more likely to occur on VMs that have multiple virtual disks over different datastores.
The logger command of some logger clients might not pass the right facility code to the ESXi syslog daemon and cause incorrect facility or severity values in ESXi syslog log messages.
vSAN File Service requires hosts to communicate with each other. File Service might incorrectly use an IP address in the witness network for inter-communication. If you have configured an isolated witness network for vSAN, the host can communicate with a witness node over the witness network, but hosts cannot communicate with each other over the witness network. Communication between hosts for vSAN File Service cannot be established.
A rare race condition caused by memory reclamation for some performance optimization processes in vSphere vMotion might cause ESXi hosts to fail with a purple diagnostic screen.
In rare cases, on vSphere systems with more than 448 CPUs, the calculation of memory utilization overflows a 32-bit value and wraps around to a value lower than the expected allocation. As a result, ESXi hosts might fail with a purple diagnostic screen upon upgrade or reboot.
This issue is resolved in this release.
When TXT is enabled on an ESXi host, attempts to power on a VM might fail with an error. In the vSphere Client, you see a message such as: This host supports Intel VT-x, but Intel VT-x is restricted. Intel VT-x might be restricted because 'trusted execution' has been enabled in the BIOS/firmware settings or because the host has not been power-cycled since changing this setting.
With vSphere 8.0, you can enable vSphere Lifecycle Manager to remediate all hosts that are in maintenance mode in parallel instead of in sequence. However, if a parallel remediation task fails, in the vSphere Client you might not see the correct number of hosts that passed, failed, or skipped the operation, or even not see such counts at all. The issue does not affect the vSphere Lifecycle Manager functionality, but only the reporting in the vSphere Client.
Due to a rare race condition during a Fast Suspend Resume (FSR) operation on a VM with shared pages between VMs, ESXi hosts might fail with a purple diagnostic screen with an error such as PFrame_IsBackedByLPage in the backtrace.
Due to a rare issue with handling AVX2 instructions, a virtual machine of version ESX 8.0 might fail with an ESX unrecoverable error. In the vmware.log file, you see a message such as: MONITOR PANIC: vcpu-0:VMM fault 6: src=MONITOR ... The issue is specific to virtual machines with hardware version 12 or earlier.
For ESXi hosts in vSAN clusters of version earlier than vSAN 6.2, upgrades to ESXi 8.0 might cause loss of the VMkernel port tagging for vSAN. As a result, the vSAN networking configuration does not exist after the upgrade.
In VMware Aria Operations for Logs, formerly vRealize Log Insight, you might see a large volume of logs generated by Storage I/O Control, such as Invalid share value: 0. Using default. and Skipping device naa.xxxx either due to VSI read error or abnormal state. The volume of logs varies depending on the number of ESXi hosts in a cluster and the number of devices in switched off state. When the issue occurs, the log volume generates quickly, within 24 hours, and VMware Aria Operations for Logs might classify the messages as critical. However, such logs are harmless and do not impact the operations on other datastores that are online.
When you enable the ToolsRamdisk advanced option that makes sure the /vmtools partition is always on a RAM disk rather than on a USB or SD-card, installing or updating the tools-light VIB does not update the content of the RAM disk automatically. Instead, you must reboot the host so that the RAM disk updates and the new VM Tools version is available for the VMs.
When you try to export a file with a list of Inventory objects, such as VMs, Hosts, and Datastores, the task fails with an error Export Data Failure.
If the vSAN sockrelay runs out of memory, it can cause spurious errors in the vSAN File Service. The vSAN health service shows numerous file service health check errors. The following entry appears in the /var/run/log/sockrelay.log file: Failed to create thread: No space left on device.
If the network firewall rules are refreshed on the ESXi host, vSAN iSCSI firewall rules might be lost. This problem can impact vSAN iSCSI network connections.
The installation and upgrade of VMware NSX by using the NSX Manager in a vSphere environment with DPUs on ESXi 8.0 hosts might fail with a Configuration State error and a message similar to:
Host configuration: Failed to send the HostConfig message. [TN=TransportNode/ce1dba65-7fa5-4a47-bf84-e897859fe0db]. Reason: Failed to send HostConfig RPC to MPA TN:ce1dba65-7fa5-4a47-bf84-e897859fe0db. Error: Unable to reach client ce1dba65-7fa5-4a47-bf84-e897859fe0db, application SwitchingVertical. LogicalSwitch full-sync: LogicalSwitch full-sync realization query skipped.
The issue occurs because a DPU scan prevents the ESXi 8.0 host from rebooting after installing the NSX image on the DPU.
Starting from ESXi 6.0, mClock is the default I/O scheduler for ESXi, but some environments might still use legacy schedulers of ESXi versions earlier than 6.0. As a result, upgrades of such hosts to ESXi 7.0 Update 3 and later might fail with a purple diagnostic screen.
After upgrading to ESXi 8.0, you might not be able to attach an existing virtual machine disk (VMDK) to a VM by using the VMware Host Client, which is used to connect to and manage single ESXi hosts. The issue does not affect the vSphere Client.
The NVIDIA vGPU 15.0 driver supports multiple fractional vGPU devices in a single VM, but this does not work on non-SR-IOV GPUs such as NVIDIA T4.
Data-in-Transit encryption is not supported on clusters with HCI Mesh configuration. If you deactivate vSAN with DIT encryption, and then reenable vSAN without DIT encryption, the RDT counter for DIT encryption is not decremented correctly. When you enable HCI mesh, the RDT code assumes that traffic needs to be encrypted. The unencrypted traffic is dropped, causing loss of connectivity to the HCI mesh datastore.
If your underlying hardware allows devices to support a TCP window scale option higher than 7, you might see a reduction in TCP throughput in vSphere environments with Nvidia Bluefield 2 DPUs, because the Nvidia Bluefield 2 scale limit is 7.
Pensando Distributed Services Platform (DSC) adapters have 2 high speed ethernet controllers (for example vmnic6 and vmnic7) and one management controller (for example vmnic8):
:~] esxcfg-nics -l
vmnic6 0000:39:00.0 ionic_en_unstable Up 25000Mbps Full 00:ae:cd:09:c9:48 1500 Pensando Systems DSC-25 10/25G 2-port 4G RAM 8G eMMC G1 Services Card, Ethernet Controller
vmnic7 0000:3a:00.0 ionic_en_unstable Up 25000Mbps Full 00:ae:cd:09:c9:49 1500 Pensando Systems DSC-25 10/25G 2-port 4G RAM 8G eMMC G1 Services Card, Ethernet Controller
:~] esxcfg-nics -lS
vmnic8 0000:3b:00.0 ionic_en_unstable Up 1000Mbps Full 00:ae:cd:09:c9:4a 1500 Pensando Systems DSC-25 10/25G 2-port 4G RAM 8G eMMC G1 Services Card, Management Controller
The high-speed ethernet controllers vmnic6 and vmnic7 register first and operate with RSS set to 16 receive queues.
:~] localcli --plugin-dir /usr/lib/vmware/esxcli/int networkinternal nic privstats get -n vmnic6…Num of RSS-Q=16, ntxq_descs=2048, nrxq_descs=1024, log_level=3, vlan_tx_insert=1, vlan_rx_strip=1, geneve_offload=1 }
However, in rare cases, if the management controller vmnic8 registers first with the vSphere Distributed Switch, the high-speed ethernet controllers vmnic6 or vmnic7 uplink might end up operating with RSS set to 1 receive queue.
:~] localcli --plugin-dir /usr/lib/vmware/esxcli/int networkinternal nic privstats get -n vmnic6…Num of RSS-Q=1, ntxq_descs=2048, nrxq_descs=1024, log_level=3, vlan_tx_insert=1, vlan_rx_strip=1, geneve_offload=1 }
As a result, you might see slower performance in native mode.
On servers with tens of devices attached, dump files for each of the devices might exceed the total limit. As a result, when you reboot an ESXi host or unload the lpfc driver, the host might fail with a purple diagnostic screen and an error #PF exception vmk_DumpAddFileCallback() identifying that the dump file resources are used up.
In very specific circumstances, when a vSphere vMotion operation on a virtual machine runs in parallel with an operation that sends VMCI datagrams, services that use VMCI datagrams might see unexpected communication or loss of communication. Under the same conditions, the issue can also happen when restoring a memory snapshot, resuming a suspended VM or using CPU Hot Add. As a result, the guest operating system that depends on services communicating over VMCI might become unresponsive. The issue might also affect services that use vSockets over VMCI. This problem does not impact VMware Tools. The issue is specific for VMs on hardware version 20 with a Linux distribution that has specific patches introduced in Linux kernel 5.18 for a VMCI feature, including but not limited to up-to-date versions of RHEL 8.7, Ubuntu 22.04 and 22.10, and SLES15 SP3, and SP4.
If the advanced processor setting Enable IOMMU in this virtual machine is enabled on a virtual machine, and the guest operating system has enabled DMA remapping, the Linux guest operating system might fail to complete the booting process. This issue affects VMs with hardware version 20 and a Linux distribution that has specific patches introduced in Linux kernel 5.18 for a VMCI feature, including but not limited to up-to-date versions of RHEL 8.7, Ubuntu 22.04 and 22.10, and SLES15 SP3, and SP4.
Profile Name | ESXi-8.0sb-21203431-standard |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | February 14, 2023 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 3049562, 3049613, 3049619, 3049640, 3050030, 3050032, 3050037, 3050512, 3054127, 3068093, 3030692, 3070447 |
Related CVE numbers | CVE-2020-28196 |
ESXi 8.0b provides the following security updates:
The SQLite database is updated to version 3.39.2.
The Python library is updated to version 3.8.15.
OpenSSL is updated to version 3.0.7.
The libxml2 library is updated to version 2.10.2.
The Expat XML parser is updated to version 2.5.0.
The lxml XML toolkit is updated to version 4.5.2.
cURL is updated to version 7.84.
The Go library is updated to version 1.18.6.
The tcpdump package is updated to version 4.99.1.
The cpu-microcode VIB includes the following Intel microcode:
Code Name | FMS | Plt ID | MCU Rev | MCU Date | Brand Names
---|---|---|---|---|---
Nehalem EP | 0x106a5 (06/1a/5) | 0x03 | 0x0000001d | 5/11/2018 | Intel Xeon 35xx Series; Intel Xeon 55xx Series
Clarkdale | 0x20652 (06/25/2) | 0x12 | 0x00000011 | 5/8/2018 | Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series
Arrandale | 0x20655 (06/25/5) | 0x92 | 0x00000007 | 4/23/2018 | Intel Core i7-620LE Processor
Sandy Bridge DT | 0x206a7 (06/2a/7) | 0x12 | 0x0000002f | 2/17/2019 | Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series
Westmere EP | 0x206c2 (06/2c/2) | 0x03 | 0x0000001f | 5/8/2018 | Intel Xeon 56xx Series; Intel Xeon 36xx Series
Sandy Bridge EP | 0x206d6 (06/2d/6) | 0x6d | 0x00000621 | 3/4/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
Sandy Bridge EP | 0x206d7 (06/2d/7) | 0x6d | 0x0000071a | 3/24/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
Nehalem EX | 0x206e6 (06/2e/6) | 0x04 | 0x0000000d | 5/15/2018 | Intel Xeon 65xx Series; Intel Xeon 75xx Series
Westmere EX | 0x206f2 (06/2f/2) | 0x05 | 0x0000003b | 5/16/2018 | Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series
Ivy Bridge DT | 0x306a9 (06/3a/9) | 0x12 | 0x00000021 | 2/13/2019 | Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C
Haswell DT | 0x306c3 (06/3c/3) | 0x32 | 0x00000028 | 11/12/2019 | Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series
Ivy Bridge EP | 0x306e4 (06/3e/4) | 0xed | 0x0000042e | 3/14/2019 | Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series
Ivy Bridge EX | 0x306e7 (06/3e/7) | 0xed | 0x00000715 | 3/14/2019 | Intel Xeon E7-8800/4800/2800-v2 Series
Haswell EP | 0x306f2 (06/3f/2) | 0x6f | 0x00000049 | 8/11/2021 | Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series
Haswell EX | 0x306f4 (06/3f/4) | 0x80 | 0x0000001a | 5/24/2021 | Intel Xeon E7-8800/4800-v3 Series
Broadwell H | 0x40671 (06/47/1) | 0x22 | 0x00000022 | 11/12/2019 | Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series
Avoton | 0x406d8 (06/4d/8) | 0x01 | 0x0000012d | 9/16/2019 | Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series
Broadwell EP/EX | 0x406f1 (06/4f/1) | 0xef | 0x0b000040 | 5/19/2021 | Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series
Skylake SP | 0x50654 (06/55/4) | 0xb7 | 0x02006e05 | 3/8/2022 | Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series
Cascade Lake B-0 | 0x50656 (06/55/6) | 0xbf | 0x04003302 | 12/10/2021 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
Cascade Lake | 0x50657 (06/55/7) | 0xbf | 0x05003302 | 12/10/2021 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
Cooper Lake | 0x5065b (06/55/b) | 0xbf | 0x07002501 | 11/19/2021 | Intel Xeon Platinum 8300 Series; Intel Xeon Gold 6300/5300
Broadwell DE | 0x50662 (06/56/2) | 0x10 | 0x0000001c | 6/17/2019 | Intel Xeon D-1500 Series
Broadwell DE | 0x50663 (06/56/3) | 0x10 | 0x0700001c | 6/12/2021 | Intel Xeon D-1500 Series
Broadwell DE | 0x50664 (06/56/4) | 0x10 | 0x0f00001a | 6/12/2021 | Intel Xeon D-1500 Series
Broadwell NS | 0x50665 (06/56/5) | 0x10 | 0x0e000014 | 9/18/2021 | Intel Xeon D-1600 Series
Skylake H/S | 0x506e3 (06/5e/3) | 0x36 | 0x000000f0 | 11/12/2021 | Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series
Denverton | 0x506f1 (06/5f/1) | 0x01 | 0x00000038 | 12/2/2021 | Intel Atom C3000 Series
Ice Lake SP | 0x606a6 (06/6a/6) | 0x87 | 0x0d000375 | 4/7/2022 | Intel Xeon Silver 4300 Series; Intel Xeon Gold 6300/5300 Series; Intel Xeon Platinum 8300 Series
Ice Lake D | 0x606c1 (06/6c/1) | 0x10 | 0x010001f0 | 6/24/2022 | Intel Xeon D Series
Snow Ridge | 0x80665 (06/86/5) | 0x01 | 0x4c000020 | 5/10/2022 | Intel Atom P5000 Series
Snow Ridge | 0x80667 (06/86/7) | 0x01 | 0x4c000020 | 5/10/2022 | Intel Atom P5000 Series
Kaby Lake H/S/X | 0x906e9 (06/9e/9) | 0x2a | 0x000000f0 | 11/12/2021 | Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series
Coffee Lake | 0x906ea (06/9e/a) | 0x22 | 0x000000f0 | 11/15/2021 | Intel Xeon E-2100 Series; Intel Xeon E-2200 Series (4 or 6 core)
Coffee Lake | 0x906eb (06/9e/b) | 0x02 | 0x000000f0 | 11/12/2021 | Intel Xeon E-2100 Series
Coffee Lake | 0x906ec (06/9e/c) | 0x22 | 0x000000f0 | 11/15/2021 | Intel Xeon E-2100 Series
Coffee Lake Refresh | 0x906ed (06/9e/d) | 0x22 | 0x000000f4 | 7/31/2022 | Intel Xeon E-2200 Series (8 core)
Rocket Lake S | 0xa0671 (06/a7/1) | 0x02 | 0x00000056 | 8/2/2022 | Intel Xeon E-2300 Series
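To relate this table to a particular host, you can compare the family/model/stepping values reported for the physical CPUs against the FMS column. A minimal sketch, assuming ESXi Shell access; note that esxcli reports these values in decimal, while the FMS column shows them in hexadecimal (for example, model 0x55 appears as 85).

```
# Report family, model, and stepping for each CPU; match them against the
# FMS column (for example, 06/55/7 corresponds to Cascade Lake).
esxcli hardware cpu list | grep -E "Id|Family|Model|Stepping"
```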
The following VMware Tools ISO images are bundled with ESXi 8.0b:
windows.iso: VMware Tools 12.1.5 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.
linux.iso: VMware Tools 10.3.25 ISO image for Linux OS with glibc 2.11 or later.
The following VMware Tools ISO images are available for download:
VMware Tools 11.0.6:
windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).
VMware Tools 10.0.12:
winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003.
linuxPreGLibc25.iso: supports Linux guest operating systems earlier than Red Hat Enterprise Linux (RHEL) 5, SUSE Linux Enterprise Server (SLES) 11, Ubuntu 7.04, and other distributions with glibc version earlier than 2.5.
solaris.iso: VMware Tools image 10.3.10 for Solaris.
darwin.iso: Supports Mac OS X versions 10.11 and later. VMware Tools 12.1.0 was the last regular release for macOS. Refer to VMware knowledge base article 88698 for details.
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
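To confirm which VMware Tools version a guest is actually running after the update, you can check from inside the guest or from the host. This is a sketch, assuming a Linux guest with VMware Tools or open-vm-tools installed and ESXi Shell access on the host.

```
# Inside a Linux guest: report the running VMware Tools version.
vmware-toolbox-cmd -v

# On the ESXi host: show the tools-light VIB that provides the bundled ISO images.
esxcli software vib get -n tools-light
```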
Profile Name | ESXi-8.0sb-21203431-no-tools |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | February 14, 2023 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs | |
PRs Fixed | 3049562, 3049613, 3049619, 3049640, 3050030, 3050032, 3050037, 3050512, 3054127, 3068093, 3030692, 3070447 |
Related CVE numbers | CVE-2020-28196 |
ESXi 8.0b provides the following security updates:
The SQLite database is updated to version 3.39.2.
The Python library is updated to version 3.8.15.
OpenSSL is updated to version 3.0.7.
The libxml2 library is updated to version 2.10.2.
The Expat XML parser is updated to version 2.5.0.
The lxml XML toolkit is updated to version 4.5.2.
cURL is updated to version 7.84.
The Go library is updated to version 1.18.6.
The tcpdump package is updated to version 4.99.1.
The cpu-microcode VIB includes the following Intel microcode:
Code Name | FMS | Plt ID | MCU Rev | MCU Date | Brand Names
---|---|---|---|---|---
Nehalem EP | 0x106a5 (06/1a/5) | 0x03 | 0x0000001d | 5/11/2018 | Intel Xeon 35xx Series; Intel Xeon 55xx Series
Clarkdale | 0x20652 (06/25/2) | 0x12 | 0x00000011 | 5/8/2018 | Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series
Arrandale | 0x20655 (06/25/5) | 0x92 | 0x00000007 | 4/23/2018 | Intel Core i7-620LE Processor
Sandy Bridge DT | 0x206a7 (06/2a/7) | 0x12 | 0x0000002f | 2/17/2019 | Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series
Westmere EP | 0x206c2 (06/2c/2) | 0x03 | 0x0000001f | 5/8/2018 | Intel Xeon 56xx Series; Intel Xeon 36xx Series
Sandy Bridge EP | 0x206d6 (06/2d/6) | 0x6d | 0x00000621 | 3/4/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
Sandy Bridge EP | 0x206d7 (06/2d/7) | 0x6d | 0x0000071a | 3/24/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
Nehalem EX | 0x206e6 (06/2e/6) | 0x04 | 0x0000000d | 5/15/2018 | Intel Xeon 65xx Series; Intel Xeon 75xx Series
Westmere EX | 0x206f2 (06/2f/2) | 0x05 | 0x0000003b | 5/16/2018 | Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series
Ivy Bridge DT | 0x306a9 (06/3a/9) | 0x12 | 0x00000021 | 2/13/2019 | Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C
Haswell DT | 0x306c3 (06/3c/3) | 0x32 | 0x00000028 | 11/12/2019 | Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series
Ivy Bridge EP | 0x306e4 (06/3e/4) | 0xed | 0x0000042e | 3/14/2019 | Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series
Ivy Bridge EX | 0x306e7 (06/3e/7) | 0xed | 0x00000715 | 3/14/2019 | Intel Xeon E7-8800/4800/2800-v2 Series
Haswell EP | 0x306f2 (06/3f/2) | 0x6f | 0x00000049 | 8/11/2021 | Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series
Haswell EX | 0x306f4 (06/3f/4) | 0x80 | 0x0000001a | 5/24/2021 | Intel Xeon E7-8800/4800-v3 Series
Broadwell H | 0x40671 (06/47/1) | 0x22 | 0x00000022 | 11/12/2019 | Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series
Avoton | 0x406d8 (06/4d/8) | 0x01 | 0x0000012d | 9/16/2019 | Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series
Broadwell EP/EX | 0x406f1 (06/4f/1) | 0xef | 0x0b000040 | 5/19/2021 | Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series
Skylake SP | 0x50654 (06/55/4) | 0xb7 | 0x02006e05 | 3/8/2022 | Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series
Cascade Lake B-0 | 0x50656 (06/55/6) | 0xbf | 0x04003302 | 12/10/2021 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
Cascade Lake | 0x50657 (06/55/7) | 0xbf | 0x05003302 | 12/10/2021 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
Cooper Lake | 0x5065b (06/55/b) | 0xbf | 0x07002501 | 11/19/2021 | Intel Xeon Platinum 8300 Series; Intel Xeon Gold 6300/5300
Broadwell DE | 0x50662 (06/56/2) | 0x10 | 0x0000001c | 6/17/2019 | Intel Xeon D-1500 Series
Broadwell DE | 0x50663 (06/56/3) | 0x10 | 0x0700001c | 6/12/2021 | Intel Xeon D-1500 Series
Broadwell DE | 0x50664 (06/56/4) | 0x10 | 0x0f00001a | 6/12/2021 | Intel Xeon D-1500 Series
Broadwell NS | 0x50665 (06/56/5) | 0x10 | 0x0e000014 | 9/18/2021 | Intel Xeon D-1600 Series
Skylake H/S | 0x506e3 (06/5e/3) | 0x36 | 0x000000f0 | 11/12/2021 | Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series
Denverton | 0x506f1 (06/5f/1) | 0x01 | 0x00000038 | 12/2/2021 | Intel Atom C3000 Series
Ice Lake SP | 0x606a6 (06/6a/6) | 0x87 | 0x0d000375 | 4/7/2022 | Intel Xeon Silver 4300 Series; Intel Xeon Gold 6300/5300 Series; Intel Xeon Platinum 8300 Series
Ice Lake D | 0x606c1 (06/6c/1) | 0x10 | 0x010001f0 | 6/24/2022 | Intel Xeon D Series
Snow Ridge | 0x80665 (06/86/5) | 0x01 | 0x4c000020 | 5/10/2022 | Intel Atom P5000 Series
Snow Ridge | 0x80667 (06/86/7) | 0x01 | 0x4c000020 | 5/10/2022 | Intel Atom P5000 Series
Kaby Lake H/S/X | 0x906e9 (06/9e/9) | 0x2a | 0x000000f0 | 11/12/2021 | Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series
Coffee Lake | 0x906ea (06/9e/a) | 0x22 | 0x000000f0 | 11/15/2021 | Intel Xeon E-2100 Series; Intel Xeon E-2200 Series (4 or 6 core)
Coffee Lake | 0x906eb (06/9e/b) | 0x02 | 0x000000f0 | 11/12/2021 | Intel Xeon E-2100 Series
Coffee Lake | 0x906ec (06/9e/c) | 0x22 | 0x000000f0 | 11/15/2021 | Intel Xeon E-2100 Series
Coffee Lake Refresh | 0x906ed (06/9e/d) | 0x22 | 0x000000f4 | 7/31/2022 | Intel Xeon E-2200 Series (8 core)
Rocket Lake S | 0xa0671 (06/a7/1) | 0x02 | 0x00000056 | 8/2/2022 | Intel Xeon E-2300 Series
The following VMware Tools ISO images are bundled with ESXi 8.0b:
windows.iso: VMware Tools 12.1.5 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.
linux.iso: VMware Tools 10.3.25 ISO image for Linux OS with glibc 2.11 or later.
The following VMware Tools ISO images are available for download:
VMware Tools 11.0.6:
windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).
VMware Tools 10.0.12:
winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003.
linuxPreGLibc25.iso: supports Linux guest operating systems earlier than Red Hat Enterprise Linux (RHEL) 5, SUSE Linux Enterprise Server (SLES) 11, Ubuntu 7.04, and other distributions with glibc version earlier than 2.5.
solaris.iso: VMware Tools image 10.3.10 for Solaris.
darwin.iso: Supports Mac OS X versions 10.11 and later. VMware Tools 12.1.0 was the last regular release for macOS. Refer to VMware knowledge base article 88698 for details.
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
Name | ESXi |
Version | ESXi_8.0.0-1.20.21203435 |
Release Date | February 14, 2023 |
Category | Bugfix |
Affected Components | |
PRs Fixed | 3003171, 3023438 |
Related CVE numbers | N/A |
Name | ESXi |
Version | ESXi_8.0.0-1.15.21203431 |
Release Date | February 14, 2023 |
Category | Security |
Affected Components | |
PRs Fixed | 3003171, 3023438 |
Related CVE numbers | CVE-2020-28196 |
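To verify which of these images and components a host is currently running, you can use the ESXCLI software namespace. This is a sketch, assuming ESXi Shell access; the exact output columns depend on the release.

```
# Show the image profile currently applied to the host.
esxcli software profile get

# List installed components; the ESXi base component version should match
# the bulletin version applied to the host.
esxcli software component list | grep -i esx
```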
The known issues are grouped as follows.
Miscellaneous Issues
In a vSphere environment with DPUs, when you use a kickstart file to automate ESXi installation, ESXi might select a DPU NIC as the uplink for the default vSwitch0 used by the management network during the installation, which leads to networking failures. For example, fetching a network IP address for the Management Network portgroup might fail. As a result, the ESXi installation also fails.
Workaround: Remove the line network --bootproto=dhcp from the kickstart file.
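For reference, the fragment below shows where that line typically appears in a minimal ESXi kickstart file; the disk options and password are placeholders for illustration only, and the workaround is simply to delete the network line shown.

```
# Minimal illustrative ESXi kickstart fragment.
# Workaround for the DPU NIC selection issue: delete the "network" line below.
vmaccepteula
install --firstdisk --overwritevmfs
network --bootproto=dhcp
rootpw MySecretPassword1!
```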