Release Date: June 23, 2020
What's in the Release Notes
The release notes cover the following topics:
What's New
- With ESXi 7.0b, Dell NVMe device serviceability features, including advanced device information and LED management capability, are enabled by default.
- With ESXi 7.0b, vSphere Quick Boot is supported on the following servers:
- Cisco HX220C-M4S
- Cisco HX220C-M5SX
- Cisco HX240C-M4SX
- Cisco HX240C-M5L
- Cisco HX240C-M5SX
- Cisco HXAF220C-M4S
- Cisco HXAF220C-M5SN
- Cisco HXAF220C-M5SX
- Cisco HXAF240C-M4SX
- Cisco HXAF240C-M5SX
- Cisco UCSB-B200-M4
- Cisco UCSB-EX-M4-1 (B260 M4 v2)
- Cisco UCSB-EX-M4-1 (B460 M4 v2)
- Cisco UCSB-EX-M4-2 (B260 M4 v3)
- Cisco UCSB-EX-M4-2 (B460 M4 v3)
- Cisco UCSB-EX-M4-3 (B260 M4 v4)
- Cisco UCSB-EX-M4-3 (B460 M4 v4)
- Cisco UCSC-480-M5
- Cisco UCSC-480-M5ML
- Cisco UCSC-C220-M4S
- Cisco UCSC-C220-M5L
- Cisco UCSC-C220-M5SN
- Cisco UCSC-C240-M4S2
- Cisco UCSC-C240-M4SX
- Cisco UCSC-C240-M5L
- Cisco UCSC-C240-M5S
- Cisco UCSC-C240-M5SN
- Cisco UCSC-C240-M5SX
- Fujitsu Primergy RX2530 M5
- HPE ProLiant DL325 Gen10
- Lenovo ThinkSystem SR650
- With ESXi 7.0b, the following servers are added back to vSphere Quick Boot support:
- HPE ProLiant DL380 Gen9
- HPE ProLiant DL560 Gen9
- HPE ProLiant DL580 Gen9
Build Details
Download Filename | VMware-ESXi-7.0b-16324942-depot |
Build | 16324942 |
Download Size | 508.5 MB |
md5sum | 18a8c2243a0bd15286c331092ab028fc |
sha1 checksum | d0a02bbf0716364fb3e799501357944c88e17401 |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
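Before importing the offline bundle depot, you can verify the download against the checksums above; a minimal sketch, assuming the bundle is saved as VMware-ESXi-7.0b-16324942-depot.zip:

md5sum VMware-ESXi-7.0b-16324942-depot.zip
sha1sum VMware-ESXi-7.0b-16324942-depot.zip

Compare the output with the md5sum and sha1 checksum values listed in the table.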
Components
Component | Bulletin | Category | Severity |
---|---|---|---|
ESXi | ESXi_7.0.0-1.25.16324942 | Bugfix | Critical |
ESXi | ESXi_7.0.0-1.20.16321839 | Security | Critical |
ESXi Install/Upgrade Component | esx-update_7.0.0-1.25.16324942 | Bugfix | Critical |
ESXi Install/Upgrade Component | esx-update_7.0.0-1.20.16321839 | Security | Critical |
VMware USB Driver | VMware-vmkusb_0.1-1vmw.700.1.25.16324942 | Bugfix | Important |
VMware NVMe PCI Express Storage Driver | VMware-NVMe-PCIe_1.2.2.14-1vmw.700.1.25.16324942 | Bugfix | Important |
VMware-VM-Tools | VMware-VM-Tools_11.1.0.16036546-16321839 | Security | Important |
IMPORTANT: Starting with vSphere 7.0, VMware uses components for packaging VIBs along with bulletins. The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.
Rollup Bulletin
This rollup bulletin contains the latest VIBs with all the fixes since the initial release of ESXi 7.0.
Bulletin ID | Category | Severity |
---|---|---|
ESXi70b-16324942 | Bugfix | Critical |
Image Profiles
VMware patch and update releases contain general and critical image profiles. Application of the general release image profile applies to new bug fixes.
Image Profile Name |
---|
ESXi-7.0b-16324942-standard |
ESXi-7.0b-16324942-no-tools |
ESXi-7.0bs-16321839-standard |
ESXi-7.0bs-16321839-no-tools |
ESXi Image
Name and Version | Release Date | Category | Detail |
---|---|---|---|
ESXi 7.0b - 16324942 | 06/16/2020 | Enhancement | Security and Bugfix image |
ESXi 7.0bs - 16321839 | 06/16/2020 | Enhancement | Security only image |
For information about the individual components and bulletins, see the Product Patches page and the Resolved Issues section.
Patch Download and Installation
In vSphere 7.0, the Update Manager plug-in, used for administering vSphere Update Manager, is replaced with the Lifecycle Manager plug-in. Administrative operations for vSphere Update Manager are still available under the Lifecycle Manager plug-in, along with new capabilities for vSphere Lifecycle Manager.
The typical way to apply patches to ESXi 7.0.x hosts is by using the Lifecycle Manager. For details, see About vSphere Lifecycle Manager and vSphere Lifecycle Manager Baselines and Images.
You can also update ESXi hosts without the use of vSphere Lifecycle Manager by using an image profile. This requires that you manually download the patch offline bundle ZIP file from the VMware download page or the Product Patches page and use the esxcli software profile command.
For more information, see the Upgrading Hosts by Using ESXCLI Commands and the VMware ESXi Upgrade guide.
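For reference, a minimal sketch of an ESXCLI image profile update run from the ESXi Shell; the depot path and datastore name below are placeholders, and the host should be in maintenance mode before patching:

esxcli system maintenanceMode set --enable true
esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-7.0b-16324942-depot.zip -p ESXi-7.0b-16324942-standard

Reboot the host after the command completes, because this patch requires a host reboot.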
Resolved Issues
The resolved issues are grouped as follows.
- ESXi_7.0.0-1.25.16324942
- esx-update_7.0.0-1.25.16324942
- VMware-NVMe-PCIe_1.2.2.14-1vmw.700.1.25.16324942
- VMware-vmkusb_0.1-1vmw.700.1.25.16324942
- ESXi_7.0.0-1.20.16321839
- esx-update_7.0.0-1.20.16321839
- VMware-VM-Tools_11.1.0.16036546-16321839
- ESXi-7.0b-16324942-standard
- ESXi-7.0b-16324942-no-tools
- ESXi-7.0bs-16321839-standard
- ESXi-7.0bs-16321839-no-tools
- ESXi Image - 7.0b – 16324942
- ESXi Image - 7.0bs – 16321839
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
PRs Fixed | 2543390, 2536681, 2486316, 2503110, 2549993, 2549419, 2550655, 2554564, 2554116, 2489840, 2548068, 2556037, 2572393, 2544236, 2555270, 2560238, 2552532, 2551359, 2541690 |
CVE numbers | N/A |
The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.
Updates the esx-dvfilter-generic-fastpath, vsanhealth, esx-ui, vdfs, vsan, esx-base, crx, native-misc-drivers, esx-xserver,
and cpu-microcode
VIBs to resolve the following issues:
This release updates the Intel microcode for ESXi supported CPUs to the revisions that were production verified as of April 29, 2020. See the following table for the microcode updates that are currently included.
Note: Not all microcode updates that Intel publicly disclosed on June 9, 2020 are included. Intel made the updates to several CPUs available too late for VMware to test and include them in this ESXi patch release. To get the latest microcode, contact your hardware vendor for a BIOS update.

Code Name | FMS | Plt ID | MCU Rev | MCU Date | Brand Names |
---|---|---|---|---|---|
Nehalem EP | 0x106a5 | 0x03 | 0x0000001d | 5/11/2018 | Intel Xeon 35xx Series; Intel Xeon 55xx Series |
Lynnfield | 0x106e5 | 0x13 | 0x0000000a | 5/8/2018 | Intel Xeon 34xx Lynnfield Series |
Clarkdale | 0x20652 | 0x12 | 0x00000011 | 5/8/2018 | Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series |
Arrandale | 0x20655 | 0x92 | 0x00000007 | 4/23/2018 | Intel Core i7-620LE Processor |
Sandy Bridge DT | 0x206a7 | 0x12 | 0x0000002f | 2/17/2019 | Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series |
Westmere EP | 0x206c2 | 0x03 | 0x0000001f | 5/8/2018 | Intel Xeon 56xx Series; Intel Xeon 36xx Series |
Sandy Bridge EP | 0x206d6 | 0x6d | 0x00000621 | 3/4/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series |
Sandy Bridge EP | 0x206d7 | 0x6d | 0x0000071a | 3/24/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series |
Nehalem EX | 0x206e6 | 0x04 | 0x0000000d | 5/15/2018 | Intel Xeon 65xx Series; Intel Xeon 75xx Series |
Westmere EX | 0x206f2 | 0x05 | 0x0000003b | 5/16/2018 | Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series |
Ivy Bridge DT | 0x306a9 | 0x12 | 0x00000021 | 2/13/2019 | Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C |
Haswell DT | 0x306c3 | 0x32 | 0x00000028 | 11/12/2019 | Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series |
Ivy Bridge EP | 0x306e4 | 0xed | 0x0000042e | 3/14/2019 | Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series |
Ivy Bridge EX | 0x306e7 | 0xed | 0x00000715 | 3/14/2019 | Intel Xeon E7-8800/4800/2800-v2 Series |
Haswell EP | 0x306f2 | 0x6f | 0x00000043 | 3/1/2019 | Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series |
Haswell EX | 0x306f4 | 0x80 | 0x00000016 | 6/17/2019 | Intel Xeon E7-8800/4800-v3 Series |
Broadwell H | 0x40671 | 0x22 | 0x00000022 | 11/12/2019 | Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series |
Avoton | 0x406d8 | 0x01 | 0x0000012d | 9/16/2019 | Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series |
Broadwell EP/EX | 0x406f1 | 0xef | 0x0b000038 | 6/18/2019 | Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series |
Skylake SP | 0x50654 | 0xb7 | 0x02006901 | 2/12/2020 | Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series |
Cascade Lake B-0 | 0x50656 | 0xbf | 0x04002f00 | 1/14/2020 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200 |
Cascade Lake | 0x50657 | 0xbf | 0x05002f00 | 1/14/2020 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200 |
Broadwell DE | 0x50662 | 0x10 | 0x0000001c | 6/17/2019 | Intel Xeon D-1500 Series |
Broadwell DE | 0x50663 | 0x10 | 0x07000019 | 6/17/2019 | Intel Xeon D-1500 Series |
Broadwell DE | 0x50664 | 0x10 | 0x0f000017 | 6/17/2019 | Intel Xeon D-1500 Series |
Broadwell NS | 0x50665 | 0x10 | 0x0e00000f | 6/17/2019 | Intel Xeon D-1600 Series |
Skylake H/S | 0x506e3 | 0x36 | 0x000000d6 | 10/3/2019 | Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series |
Denverton | 0x506f1 | 0x01 | 0x0000002e | 3/21/2019 | Intel Atom C3000 Series |
Kaby Lake H/S/X | 0x906e9 | 0x2a | 0x000000ca | 10/3/2019 | Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series |
Coffee Lake | 0x906ea | 0x22 | 0x000000cc | 12/8/2019 | Intel Xeon E-2100 Series; Intel Xeon E-2200 Series (4 or 6 core) |
Coffee Lake | 0x906eb | 0x02 | 0x000000ca | 10/3/2019 | Intel Xeon E-2100 Series |
Coffee Lake | 0x906ec | 0x22 | 0x000000ca | 10/3/2019 | Intel Xeon E-2100 Series |
Coffee Lake Refresh | 0x906ed | 0x22 | 0x000000cc | 12/12/2019 | Intel Xeon E-2200 Series (8 core) |

- PR 2536681: Device aliases might not be in the expected order on ESXi hosts that conform to SMBIOS 3.2 or later
In releases earlier than ESXi 7.0b, the ESXi parsing of the SMBIOS table in the host firmware to determine the order in which device aliases are assigned is not fully compatible with SMBIOS 3.2 or later. As a result, ESXi hosts might assign device aliases in the incorrect order.
This issue is resolved in this release. For more information on correcting the order of device aliases, see VMware knowledge base article 2091560.
- PR 2486316: ESXi hosts might fail with a purple diagnostic screen during a timerCallback function execution
ESXi hosts might fail with a purple diagnostic screen while the kernel storage stack executes a timerCallback function. The timerCallback function runs in a non-preemptive manner. However, in certain cases, the kernel storage stack might try to enable preemption on a CPU, while executing a timerCallback function. As a result, the ESXi host fails with an error similar to:
PSOD : Assert bora/vmkernel/main/bh.c:981
This issue is resolved in this release.
- PR 2503110: Setting the SIOControlLoglevel parameter after enabling Storage I/O Control logging might not change log levels
Logging for Storage I/O Control is disabled by default. To enable logging and set a log level, you must use a value of 1 to 7 for the SIOControlLoglevel parameter. However, in some cases, changing the parameter value does not change the log level.
This issue is resolved in this release.
- PR 2503110: Enabling the VMware vSphere Storage I/O Control log might result in flooding the syslog and rsyslog servers
Some Storage I/O Control logs might cause a log spew in the storagerm.log and sdrsinjector.log files. This condition might lead to a rapid log rotation.
This issue is fixed in this release. The fix moves some logs from regular Log to Log_Trivia and prevents additional logging.
- PR 1221281: Quiescing operations of virtual machines on volumes backed by LUNs supporting unmap granularity value greater than 1 MB might take longer than usual
VMFS datastores backed by LUNs that have optimal unmap granularity greater than 1 MB might get into repeated on-disk locking during the automatic unmap processing. As a result, the quiescing operations for virtual machines on such datastores might take longer than usual.
This issue is resolved in this release.
- PR 2549419: Setting space reclamation priority on a VMFS datastore might not work for all ESXi hosts that use the datastore
You can change the default space reclamation priority of a VMFS datastore by using the ESXCLI command
esxcli storage vmfs reclaim config set --reclaim-priority
. For example, to change the priority of space reclamation, which is unmapping unused blocks from the datastore to the LUN backing that datastore, to none from the default low rate, run the command:
esxcli storage vmfs reclaim config set --volume-label datastore_name --reclaim-priority none
The change might only take effect on the ESXi host on which you run the command and not on other hosts that use the same datastore.
This issue is resolved in this release. For a verification sketch, see the example commands after this list.
- PR 2550655: You might see a VMkernel log spew when VMFS volumes are frequently opened and closed
If VMFS volumes are frequently opened and closed, you might see messages in the VMkernel logs such as `does not support unmap` when a volume is opened and `Exiting async journal replay manager world` when a volume is closed.
This issue is resolved in this release. The logs `does not support unmap` when a VMFS volume is opened and `Exiting async journal replay manager world` when a volume is closed are no longer visible.
- PR 2554564: ESXi hosts with 4 or more vSAN disk groups might fail with a purple diagnostic screen due to an out of memory condition in NUMA configurations with 4 or more nodes
A vSAN configuration on an ESXi host with 4 or more disk groups running on a NUMA configuration with 4 or more nodes might exhaust the slab memory resources of the VMkernel on that host dedicated to a given NUMA node. Other NUMA nodes on the same host might have excess of slab memory resources. However, the VMkernel might prematurely generate an out-of-memory condition instead of utilizing the excess slab memory capacity of the other NUMA nodes. As a result, the out of memory condition might cause the ESXi host to fail with a purple diagnostic screen or a vSAN disk group on that host to fail. After rebooting the host, in the
vmkernel.log
you can see an error such as `Throttled: BlkAttr not ready for disk`.
This issue is resolved in this release.
- PR 2554116: ESXi hosts might fail with a purple diagnostic screen while loading a kernel module during boot
While loading kernel modules, the VMkernel Module Loader command `vmkload_mod`, which is used to load device driver and network shaper modules into the VMkernel, might be migrated to another NUMA node. In such a case, a checksum mismatch across the NUMA nodes might occur. This can cause an ESXi host to fail with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2489840: With VMware NSX-T and lockdown mode enabled, a stateless ESXi host might fail boot time remediation and stay in maintenance mode
Some user accounts, related to NSX-T, are available only after a boot. Because they do not exist at boot time, the boot time remediation of a stateless ESXi host might fail when such accounts get into the lockdown mode exception user list. In the
syslog.log
, you see a similar line:
key = 'POST_BOOT_CONFIG_FAILED', value = 'Failed to update user accounts exempted from Lockdown Mode. Invalid user specified: mux_user'
This issue is resolved in this release.
- PR 2548068: Some files might be missing in ProductLocker when performing a live upgrade to ESXi 7.0 from an earlier disk dump image version of an ESXi host
If you try a live upgrade to ESXi 7.0 from an earlier ESXi version, installed as a disk dump image, by using the
esxiso2dd
command and VMware Tools version 4.0 or later, some VMware Tools files might be missing from the `/productLocker` location. Earlier versions of ESXi use a case-insensitive VFAT file system. However, ESXi 7.0 uses VMFS-L, which makes case-sensitive filename checks, and files that fail this check are not added to ProductLocker.
This issue is resolved in this release.
- PR 2556037: If one of the address families on a dual stack domain controller is not enabled, adding ESXi hosts to the domain might randomly fail
If one of the address families on a dual stack domain controller is not enabled, for example IPv6, a CLDAP ping to an IPv6 address is still possible even after disabling IPv6 on the controller. This might lead to a timeout and trigger an error that the data center is not available, such as `Error: NERR_DCNotFound [code 0x00000995]`. As a result, you might fail to add ESXi hosts to the domain.
This issue is resolved in this release.
- PR 2572393: Upgrades to ESXi 7.0 from an ESXi 6.x host with iSCSI devices might fail with a fatal error message during ESXi booting
Upgrades to ESXi 7.0 from an ESXi 6.x custom or standard image with an async version of offload iSCSI drivers might fail with messages similar to `Fatal Error: 15 (Not Found)` or `Error loading /vmfs/volumes//b.b00`. The affected offload iSCSI driver sets are `bnx2i`/`qfle3i`, `be2iscsi`/`elxiscsi`, and `qedil`/`qedi`. The issue might also occur in environments with software iSCSI and iSCSI Management API plug-ins.
You see the messages in the ESXi loading page during the reboot phase of the upgrade that you can perform by using vSphere Lifecycle Manager or ESXCLI. If you use vSphere Lifecycle Manager, your vCenter Server system rolls back to the previous image upon reset. This issue affects mostly systems with iSCSI enabled QLogic 578XX converged network adapters (CNA), iSCSI enabled Emulex OneConnect CNAs, QLogic FastLinQ QL4xxxx CNAs, or software iSCSI configurations. The problem does not occur if the CNA is only configured for FCoE.
This issue is resolved in this release.
- PR 2544236: If you run CIM queries to a newly added VMkernel port on an ESXi host, the small footprint CIM broker (SFCB) service might fail
If you run a CIM query, such as
enum_instances
, to a newly added VMkernel port on an ESXi host, the SFCB service might fail because it cannot validate the IP address of the new instance. For example, if IPv6 is enabled and configured as static, but the IPv6 address is blank, querying for either of the CIM classes `CIM_IPProtocolEndpoint` or `VMware_KernelIPv6ProtocolEndpoint` generates an SFCB coredump.
This issue is resolved in this release.
- PR 2555270: If you restore a storage device configuration backed up after more than 7 days, some settings might be lost
If you restore a configuration backed up by using
backup.sh
after more than 7 days, storage device settings, such as Is perennially reserved, might be lost. This happens because the last seen timestamps of devices also get backed up, and if a device has not been active for more than 7 days, its entries are deleted from `/etc/vmware/esx.conf`. As a result, the restore operation might restore older timestamps.
This issue is resolved in this release.
- PR 2552532: An ESXi host might not pick up remapped namespaces
In some cases, when namespaces are unmapped on a target site and then remapped, the ESXi host might not pick up the remapped namespace. This occurs when the cleanup of the old namespace instance is not complete. In such a case, to rediscover the namespaces, you must reset the NVMe controller.
This issue is resolved in this release.
- PR 2551359: vSAN file service configuration allows more than 8 IP addresses
The vSAN file service supports up to 8 IP addresses in the static IP address pool. If a vSAN cluster has more than 8 hosts, you can enter more than 8 IP addresses during the file service configuration. However, additional IP addresses cannot be used.
This issue is resolved in this release. You cannot enter more than 8 IP addresses in the File Service configuration.
- PR 2541690: vSAN hosts with deduplication run out of space while processing deletes
When a vSAN host uses deduplication and compression, delete operations cannot complete, causing the file system to run out of space.
This issue is resolved in this release.
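As a follow-up to the space reclamation issue above (PR 2549419), you can confirm that a priority change is visible on every host that mounts the datastore; a minimal sketch, where datastore_name is a placeholder:

esxcli storage vmfs reclaim config get --volume-label datastore_name

Run the command on each host that uses the datastore and compare the reported reclaim priority with the value you set.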
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
PRs Fixed | N/A |
CVE numbers | N/A |
The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.
Updates the loadesx and esx-update
VIBs.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
PRs Fixed | 2554114 |
CVE numbers | N/A |
Updates the
nvme-pcie
VIB to resolve the following issue:
- PR 2554114: If an NVMe device is hot removed and hot added in less than a minute, the ESXi host might fail with a purple diagnostic screen
If an NVMe device is hot removed and hot added before the storage stack cleanup completes, typically within around 60 seconds, the PCIe driver fails to print `Disabled hotplug`. As a result, you might see a `Device already present` message. Eventually, the ESXi host might fail with a purple diagnostic screen with an error such as `LINT1/NMI (motherboard nonmaskable interrupt), undiagnosed`.
This issue is resolved in this release. The fix reduces the time the PSA layer takes to quiesce I/Os on a hot add or removal from about a minute to 1 ms.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the vmkusb VIB.
Patch Category | Security |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
PRs Fixed | 2550705, 2550688, 2550702, 2560224, 2544206, 2536240, 2558119, 2558150, 2558534, 2536327, 2536334, 2536337, 2555190, 2465417 |
CVE numbers | CVE-2020-3962, CVE-2020-3963, CVE-2020-3964, CVE-2020-3965, CVE-2020-3966, CVE-2020-3967, CVE-2020-3968, CVE-2020-3969, CVE-2020-3970 |
The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.
Updates the vdfs, esx-xserver, vsan, esx-base, esx-ui, esx-dvfilter-generic-fastpath, native-misc-drivers, vsanhealth, cpu-microcode
, and crx
VIBs to resolve the following issues:
- Update to the libsqlite3 library
The ESXi libsqlite3 library is updated to version 3.31.1.
- Update to the libxml2 library
The ESXi userworld libxml2 library is updated to version 2.9.10.
- Update to the libcurl library
The ESXi userworld libcurl library is updated to version 7.69.1.
- Update to the Python library
The ESXi Python library is updated to version 3.5.9 and Freetype Python to 2.9.
- Update to the NTP library
The ESXi NTP library is updated to version 4.2.8p14.
-
VMware ESXi contains a use-after-free vulnerability in the SVGA device. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3962 to this issue. For more information, see VMSA-2020-0015.
-
VMware ESXi contains a use-after-free vulnerability in PVNVRAM. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3963 to this issue. For more information, see VMSA-2020-0015.
-
VMware ESXi contains an information leak in the EHCI USB controller. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3964 to this issue. For more information, see VMSA-2020-0015.
-
VMware ESXi contains an information leak in the XHCI USB controller. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3965 to this issue. For more information, see VMSA-2020-0015.
-
VMware ESXi contains a heap-overflow due to a race condition issue in the USB 2.0 controller (EHCI). The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3966 to this issue. For more information, see VMSA-2020-0015.
-
VMware ESXi contains a heap-overflow vulnerability in the USB 2.0 controller (EHCI). The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3967 to this issue. For more information, see VMSA-2020-0015.
-
VMware ESXi contains an out-of-bounds write vulnerability in the USB 3.0 controller (xHCI). The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3968 to this issue. For more information, see VMSA-2020-0015.
-
VMware ESXi contains an off-by-one heap-overflow vulnerability in the SVGA device. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3969 to this issue. For more information, see VMSA-2020-0015.
-
VMware ESXi contains an out-of-bounds read vulnerability in the Shader functionality. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3970 to this issue. For more information, see VMSA-2020-0015.
Patch Category | Security |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
PRs Fixed | N/A |
CVE numbers | N/A |
The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.
Updates the
loadesx and esx-update
VIBs.
Patch Category | Security |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
PRs Fixed | 2564270 |
CVE numbers | N/A |
Updates the tools-light
VIB to resolve the following issue:
In ESXi 7.0b, a subset of VMware Tools 11.1.0 ISO images are bundled with the ESXi 7.0b host.
The following VMware Tools 11.1.0 ISO images are bundled with ESXi:
- windows.iso: VMware Tools image for Windows 7 SP1 or Windows Server 2008 R2 SP1 or later
- linux.iso: VMware Tools 10.3.22 image for older versions of Linux OS with glibc 2.5 or later
The following VMware Tools 10.3.10 ISO images are available for download:
- solaris.iso: VMware Tools image for Solaris
- darwin.iso: VMware Tools image for OSX
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
Profile Name | ESXi-7.0b-16324942-standard |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | June 16, 2020 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
PRs Fixed | 2543390, 2536681, 2486316, 2503110, 2549993, 2549419, 2550655, 2554564, 2554116, 2489840, 2548068, 2556037, 2572393, 2544236, 2555270, 2560238, 2552532, 2551359, 2541690, 2554114 |
Related CVE numbers | N/A |
Updates the loadesx, esx-dvfilter-generic-fastpath, vsanhealth, esx-update, tools-light, esx-ui, vdfs, vsan, esx-base, nvme-pcie, crx, native-misc-drivers, esx-xserver, vmkusb, and cpu-microcode VIBs.
- This patch updates the following issues:
-
This release updates the Intel microcode for ESXi supported CPUs to the revisions that were production verified as of April 29, 2020. For the list of microcode updates that are currently included, see the microcode table in the ESXi_7.0.0-1.25.16324942 bulletin above.
Note: Not all microcode updates that Intel publicly disclosed on June 9, 2020 are included. Intel made the updates to several CPUs available too late for VMware to test and include them in this ESXi patch release. To get the latest microcode, contact your hardware vendor for a BIOS update.
-
In releases earlier than ESXi 7.0b, the ESXi parsing of the SMBIOS table in the host firmware to determine the order in which device aliases are assigned is not fully compatible with SMBIOS 3.2 or later. As a result, ESXi hosts might assign device aliases in the incorrect order.
-
ESXi hosts might fail with a purple diagnostic screen while the kernel storage stack executes a timerCallback function. The timerCallback function runs in a non-preemptive manner. However, in certain cases, the kernel storage stack might try to enable preemption on a CPU, while executing a timerCallback function. As a result, the ESXi host fails with an error similar to:
PSOD : Assert bora/vmkernel/main/bh.c:981
-
Logging for Storage I/O Control is disabled by default. To enable logging and set a log level, you must use a value of 1 to 7 for the SIOControlLoglevel parameter. However, in some cases, changing the parameter value does not change the log level.
-
Some Storage I/O Control logs might cause a log spew in the storagerm.log and sdrsinjector.log files. This condition might lead to a rapid log rotation.
-
VMFS datastores backed by LUNs that have optimal unmap granularity greater than 1 MB might get into repeated on-disk locking during the automatic unmap processing. As a result, the quiescing operations for virtual machines on such datastores might take longer than usual.
-
You can change the default space reclamation priority of a VMFS datastore by using the ESXCLI command
esxcli storage vmfs reclaim config set --reclaim-priority
. For example, to change the priority of space reclamation, which is unmapping unused blocks from the datastore to the LUN backing that datastore, to none from the default low rate, run the command:
esxcli storage vmfs reclaim config set --volume-label datastore_name --reclaim-priority none
The change might only take effect on the ESXi host on which you run the command and not on other hosts that use the same datastore. -
If VMFS volumes are frequently opened and closed, you might see messages in the VMkernel logs such as `does not support unmap` when a volume is opened and `Exiting async journal replay manager world` when a volume is closed.
-
A vSAN configuration on an ESXi host with 4 or more disk groups running on a NUMA configuration with 4 or more nodes might exhaust the slab memory resources of the VMkernel on that host dedicated to a given NUMA node. Other NUMA nodes on the same host might have excess of slab memory resources. However, the VMkernel might prematurely generate an out-of-memory condition instead of utilizing the excess slab memory capacity of the other NUMA nodes. As a result, the out of memory condition might cause the ESXi host to fail with a purple diagnostic screen or a vSAN disk group on that host to fail. After rebooting the host, in the
vmkernel.log
you can see an error such as `Throttled: BlkAttr not ready for disk`.
-
While loading kernel modules, the VMkernel Module Loader command `vmkload_mod`, which is used to load device driver and network shaper modules into the VMkernel, might be migrated to another NUMA node. In such a case, a checksum mismatch across the NUMA nodes might occur. This can cause an ESXi host to fail with a purple diagnostic screen.
-
Some user accounts, related to NSX-T, are available only after a boot. Because they do not exist at boot time, the boot time remediation of a stateless ESXi host might fail when such accounts get into the lockdown mode exception user list. In the
syslog.log
, you see a similar line:
key = 'POST_BOOT_CONFIG_FAILED', value = 'Failed to update user accounts exempted from Lockdown Mode. Invalid user specified: mux_user'
-
If you try a live upgrade to ESXi 7.0 from an earlier ESXi version, installed as a disk dump image, by using the
esxiso2dd
command and VMware Tools version 4.0 or later, some VMware Tools files might be missing from the `/productLocker` location. Earlier versions of ESXi use a case-insensitive VFAT file system. However, ESXi 7.0 uses VMFS-L, which makes case-sensitive filename checks, and files that fail this check are not added to ProductLocker.
-
If one of the address families on a dual stack domain controller is not enabled, for example IPv6, a CLDAP ping to an IPv6 address is still possible even after disabling IPv6 on the controller. This might lead to a timeout and trigger an error that the data center is not available, such as `Error: NERR_DCNotFound [code 0x00000995]`. As a result, you might fail to add ESXi hosts to the domain.
-
Upgrades to ESXi 7.0 from an ESXi 6.x custom or standard image with an async version of offload iSCSI drivers might fail with messages similar to `Fatal Error: 15 (Not Found)` or `Error loading /vmfs/volumes//b.b00`. The affected offload iSCSI driver sets are `bnx2i`/`qfle3i`, `be2iscsi`/`elxiscsi`, and `qedil`/`qedi`. The issue might also occur in environments with software iSCSI and iSCSI Management API plug-ins.
You see the messages in the ESXi loading page during the reboot phase of the upgrade that you can perform by using vSphere Lifecycle Manager or ESXCLI. If you use vSphere Lifecycle Manager, your vCenter Server system rolls back to the previous image upon reset. This issue affects mostly systems with iSCSI enabled QLogic 578XX converged network adapters (CNA), iSCSI enabled Emulex OneConnect CNAs, QLogic FastLinQ QL4xxxx CNAs, or software iSCSI configurations. The problem does not occur if the CNA is only configured for FCoE.
-
If you run a CIM query, such as `enum_instances`, to a newly added VMkernel port on an ESXi host, the SFCB service might fail because it cannot validate the IP address of the new instance. For example, if IPv6 is enabled and configured as static, but the IPv6 address is blank, querying for either of the CIM classes `CIM_IPProtocolEndpoint` or `VMware_KernelIPv6ProtocolEndpoint` generates an SFCB coredump.
-
If you restore a configuration backed up by using
backup.sh
after more than 7 days, storage device settings, such as Is perennially reserved, might be lost. This happens because the last seen timestamps of devices also get backed up, and if a device has not been active for more than 7 days, its entries are deleted from `/etc/vmware/esx.conf`. As a result, the restore operation might restore older timestamps.
-
In some cases, when namespaces are unmapped on a target site and then remapped, the ESXi host might not pick up the remapped namespace. This occurs when the cleanup of the old namespace instance is not complete. In such a case, to rediscover the namespaces, you must reset the NVMe controller.
-
The vSAN file service supports up to 8 IP addresses in the static IP address pool. If a vSAN cluster has more than 8 hosts, you can enter more than 8 IP addresses during the file service configuration. However, additional IP addresses cannot be used.
-
When a vSAN host uses deduplication and compression, delete operations cannot complete, causing the file system to run out of space.
-
If an NVMe device is hot removed and hot added before the storage stack cleanup completes, typically within around 60 seconds, the PCIe driver fails to print `Disabled hotplug`. As a result, you might see a `Device already present` message. Eventually, the ESXi host might fail with a purple diagnostic screen with an error such as `LINT1/NMI (motherboard nonmaskable interrupt), undiagnosed`.
Profile Name | ESXi-7.0b-16324942-no-tools |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | June 16, 2020 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
PRs Fixed | 2543390, 2536681, 2486316, 2503110, 2549993, 2549419, 2550655, 2554564, 2554116, 2489840, 2548068, 2556037, 2572393, 2544236, 2555270, 2560238, 2552532, 2551359, 2541690, 2554114 |
Related CVE numbers | N/A |
Updates the loadesx, esx-dvfilter-generic-fastpath, vsanhealth, esx-update, esx-ui, vdfs, vsan, esx-base, nvme-pcie, crx, native-misc-drivers, esx-xserver, vmkusb, and cpu-microcode VIBs.
- This patch updates the following issues:
-
This release updates the Intel microcode for ESXi supported CPUs to the revisions that were production verified as of April 29, 2020. For the list of microcode updates that are currently included, see the microcode table in the ESXi_7.0.0-1.25.16324942 bulletin above.
Note: Not all microcode updates that Intel publicly disclosed on June 9, 2020 are included. Intel made the updates to several CPUs available too late for VMware to test and include them in this ESXi patch release. To get the latest microcode, contact your hardware vendor for a BIOS update.
-
In releases earlier than ESXi 7.0b, the ESXi parsing of the SMBIOS table in the host firmware to determine the order in which device aliases are assigned is not fully compatible with SMBIOS 3.2 or later. As a result, ESXi hosts might assign device aliases in the incorrect order.
-
ESXi hosts might fail with a purple diagnostic screen while the kernel storage stack executes a timerCallback function. The timerCallback function runs in a non-preemptive manner. However, in certain cases, the kernel storage stack might try to enable preemption on a CPU, while executing a timerCallback function. As a result, the ESXi host fails with an error similar to:
PSOD : Assert bora/vmkernel/main/bh.c:981
-
Logging for Storage I/O Control is disabled by default. To enable logging and set a log level, you must use a value of 1 to 7 for the SIOControlLoglevel parameter. However, in some cases, changing the parameter value does not change the log level.
-
Some Storage I/O Control logs might cause a log spew in the storagerm.log and sdrsinjector.log files. This condition might lead to a rapid log rotation.
-
VMFS datastores backed by LUNs that have optimal unmap granularity greater than 1 MB might get into repeated on-disk locking during the automatic unmap processing. As a result, the quiescing operations for virtual machines on such datastores might take longer than usual.
-
You can change the default space reclamation priority of a VMFS datastore by using the ESXCLI command
esxcli storage vmfs reclaim config set --reclaim-priority
. For example, to change the priority of space reclamation, which is unmapping unused blocks from the datastore to the LUN backing that datastore, to none from the default low rate, run the command:
esxcli storage vmfs reclaim config set --volume-label datastore_name --reclaim-priority none
The change might only take effect on the ESXi host on which you run the command and not on other hosts that use the same datastore. -
If VMFS volumes are frequently opened and closed, you might see messages in the VMkernel logs such as `does not support unmap` when a volume is opened and `Exiting async journal replay manager world` when a volume is closed.
-
A vSAN configuration on an ESXi host with 4 or more disk groups running on a NUMA configuration with 4 or more nodes might exhaust the slab memory resources of the VMkernel on that host dedicated to a given NUMA node. Other NUMA nodes on the same host might have excess of slab memory resources. However, the VMkernel might prematurely generate an out-of-memory condition instead of utilizing the excess slab memory capacity of the other NUMA nodes. As a result, the out of memory condition might cause the ESXi host to fail with a purple diagnostic screen or a vSAN disk group on that host to fail. After rebooting the host, in the
vmkernel.log
you can see an error such as `Throttled: BlkAttr not ready for disk`.
-
While loading kernel modules, the VMkernel Module Loader command `vmkload_mod`, which is used to load device driver and network shaper modules into the VMkernel, might be migrated to another NUMA node. In such a case, a checksum mismatch across the NUMA nodes might occur. This can cause an ESXi host to fail with a purple diagnostic screen.
-
Some user accounts, related to NSX-T, are available only after a boot. Because they do not exist at boot time, the boot time remediation of a stateless ESXi host might fail when such accounts get into the lockdown mode exception user list. In the
syslog.log
, you see a similar line:
key = 'POST_BOOT_CONFIG_FAILED', value = 'Failed to update user accounts exempted from Lockdown Mode. Invalid user specified: mux_user'
-
If you try a live upgrade to ESXi 7.0 from an earlier ESXi version, installed as a disk dump image, by using the
esxiso2dd
command and VMware Tools version 4.0 or later, some VMware Tools files might be missing from the `/productLocker` location. Earlier versions of ESXi use a case-insensitive VFAT file system. However, ESXi 7.0 uses VMFS-L, which makes case-sensitive filename checks, and files that fail this check are not added to ProductLocker.
-
If one of the address families on a dual stack domain controller is not enabled, for example IPv6, a CLDAP ping to an IPv6 address is still possible even after disabling IPv6 on the controller. This might lead to a timeout and trigger an error that the data center is not available, such as `Error: NERR_DCNotFound [code 0x00000995]`. As a result, you might fail to add ESXi hosts to the domain.
-
Upgrades to ESXi 7.0 from an ESXi 6.x custom or standard image with an async version of offload iSCSI drivers might fail with messages similar to `Fatal Error: 15 (Not Found)` or `Error loading /vmfs/volumes//b.b00`. The affected offload iSCSI driver sets are `bnx2i`/`qfle3i`, `be2iscsi`/`elxiscsi`, and `qedil`/`qedi`. The issue might also occur in environments with software iSCSI and iSCSI Management API plug-ins.
You see the messages in the ESXi loading page during the reboot phase of the upgrade that you can perform by using vSphere Lifecycle Manager or ESXCLI. If you use vSphere Lifecycle Manager, your vCenter Server system rolls back to the previous image upon reset. This issue affects mostly systems with iSCSI enabled QLogic 578XX converged network adapters (CNA), iSCSI enabled Emulex OneConnect CNAs, QLogic FastLinQ QL4xxxx CNAs, or software iSCSI configurations. The problem does not occur if the CNA is only configured for FCoE.
-
If you run a CIM query, such as `enum_instances`, to a newly added VMkernel port on an ESXi host, the SFCB service might fail because it cannot validate the IP address of the new instance. For example, if IPv6 is enabled and configured as static, but the IPv6 address is blank, querying for either of the CIM classes `CIM_IPProtocolEndpoint` or `VMware_KernelIPv6ProtocolEndpoint` generates an SFCB coredump.
-
If you restore a configuration backed up by using
backup.sh
after more than 7 days, storage device settings, such as Is perennially reserved, might be lost. This happens because the last seen timestamps of devices also get backed up, and if a device has not been active for more than 7 days, its entries are deleted from `/etc/vmware/esx.conf`. As a result, the restore operation might restore older timestamps.
-
In some cases, when namespaces are unmapped on a target site and then remapped, the ESXi host might not pick up the remapped namespace. This occurs when the cleanup of the old namespace instance is not complete. In such a case, to rediscover the namespaces, you must reset the NVMe controller.
-
The vSAN file service supports up to 8 IP addresses in the static IP address pool. If a vSAN cluster has more than 8 hosts, you can enter more than 8 IP addresses during the file service configuration. However, additional IP addresses cannot be used.
-
When a vSAN host uses deduplication and compression, delete operations cannot complete, causing the file system to run out of space.
-
If an NVMe device is hot removed and hot added before the storage stack cleanup completes, typically within around 60 seconds, the PCIe driver fails to print `Disabled hotplug`. As a result, you might see a `Device already present` message. Eventually, the ESXi host might fail with a purple diagnostic screen with an error such as `LINT1/NMI (motherboard nonmaskable interrupt), undiagnosed`.
Profile Name | ESXi-7.0bs-16321839-standard |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | June 16, 2020 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
PRs Fixed | 2550705, 2550688, 2550702, 2560224, 2544206, 2536240, 2558119, 2558150, 2558534, 2536327, 2536334, 2536337, 2555190, 2465417, 2564270 |
Related CVE numbers | CVE-2020-3962, CVE-2020-3963, CVE-2020-3964, CVE-2020-3965, CVE-2020-3966, CVE-2020-3967, CVE-2020-3968, CVE-2020-3969, CVE-2020-3970 |
Updates the vdfs, esx-xserver, vsan, esx-base, esx-ui, esx-dvfilter-generic-fastpath, native-misc-drivers, vsanhealth, cpu-microcode
, and crx
VIBs to resolve the following issues:
- This patch updates the following issues:
-
The ESXi libsqlite3 library is updated to version 3.31.1.
-
The ESXi userworld libxml2 library is updated to version 2.9.10.
-
The ESXi userworld libcurl library is updated to version 7.69.1.
-
The ESXi Python library is updated to version 3.5.9 and Freetype Python to 2.9.
-
The ESXi NTP library is updated to version 4.2.8p14.
-
VMware ESXi contains a use-after-free vulnerability in the SVGA device. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3962 to this issue. For more information, see VMSA-2020-0015.
-
VMware ESXi contains a use-after-free vulnerability in PVNVRAM. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3963 to this issue. For more information, see VMSA-2020-0015.
-
VMware ESXi contains an information leak in the EHCI USB controller. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3964 to this issue. For more information, see VMSA-2020-0015.
-
VMware ESXi contains an information leak in the XHCI USB controller. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3965 to this issue. For more information, see VMSA-2020-0015.
-
VMware ESXi contains a heap-overflow due to a race condition issue in the USB 2.0 controller (EHCI). The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3966 to this issue. For more information, see VMSA-2020-0015.
-
VMware ESXi contains a heap-overflow vulnerability in the USB 2.0 controller (EHCI). The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3967 to this issue. For more information, see VMSA-2020-0015.
-
VMware ESXi contains an out-of-bounds write vulnerability in the USB 3.0 controller (xHCI). The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3968 to this issue. For more information, see VMSA-2020-0015.
-
VMware ESXi contains an off-by-one heap-overflow vulnerability in the SVGA device. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3969 to this issue. For more information, see VMSA-2020-0015.
-
VMware ESXi contains an out-of-bounds read vulnerability in the Shader functionality. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3970 to this issue. For more information, see VMSA-2020-0015.
-
In ESXi 7.0b, a subset of VMware Tools 11.1.0 ISO images are bundled with the ESXi 7.0b host.
The following VMware Tools 11.1.0 ISO images are bundled with ESXi:
- windows.iso: VMware Tools image for Windows 7 SP1 or Windows Server 2008 R2 SP1 or later
- linux.iso: VMware Tools 10.3.22 image for older versions of Linux OS with glibc 2.5 or later
The following VMware Tools 10.3.10 ISO images are available for download:
- solaris.iso: VMware Tools image for Solaris
- darwin.iso: VMware Tools image for OSX
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
Profile Name | ESXi-7.0bs-16321839-no-tools |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | June 16, 2020 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
PRs Fixed | 2550705, 2550688, 2550702, 2560224, 2544206, 2536240, 2558119, 2558150, 2558534, 2536327, 2536334, 2536337, 2555190, 2465417 |
Related CVE numbers | CVE-2020-3962, CVE-2020-3963, CVE-2020-3964, CVE-2020-3965, CVE-2020-3966, CVE-2020-3967, CVE-2020-3968, CVE-2020-3969, CVE-2020-3970 |
Updates the vdfs, esx-xserver, vsan, esx-base, esx-ui, esx-dvfilter-generic-fastpath, native-misc-drivers, vsanhealth, cpu-microcode
, and crx
VIBs to resolve the following issues:
- This patch updates the following issues:
-
The ESXi libsqlite3 library is updated to version 3.31.1.
-
The ESXi userworld libxml2 library is updated to version 2.9.10.
-
The ESXi userworld libcurl library is updated to version 7.69.1.
-
The ESXi Python library is updated to version 3.5.9 and Freetype Python to 2.9.
-
The ESXi NTP library is updated to version 4.2.8p14.
-
VMware ESXi contains a use-after-free vulnerability in the SVGA device. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3962 to this issue. For more information, see VMSA-2020-0015.
-
VMware ESXi contains a use-after-free vulnerability in PVNVRAM. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3963 to this issue. For more information, see VMSA-2020-0015.
-
VMware ESXi contains an information leak in the EHCI USB controller. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3964 to this issue. For more information, see VMSA-2020-0015.
-
VMware ESXi contains an information leak in the XHCI USB controller. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3965 to this issue. For more information, see VMSA-2020-0015.
-
VMware ESXi contains a heap-overflow due to a race condition issue in the USB 2.0 controller (EHCI). The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3966 to this issue. For more information, see VMSA-2020-0015.
-
VMware ESXi contains a heap-overflow vulnerability in the USB 2.0 controller (EHCI). The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3967 to this issue. For more information, see VMSA-2020-0015.
-
VMware ESXi contains an out-of-bounds write vulnerability in the USB 3.0 controller (xHCI). The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3968 to this issue. For more information, see VMSA-2020-0015.
-
VMware ESXi contains an off-by-one heap-overflow vulnerability in the SVGA device. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3969 to this issue. For more information, see VMSA-2020-0015.
-
VMware ESXi contains an out-of-bounds read vulnerability in the Shader functionality. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3970 to this issue. For more information, see VMSA-2020-0015.
-
In ESXi 7.0b, a subset of VMware Tools 11.1.0 ISO images are bundled with the ESXi 7.0b host.
The following VMware Tools 11.1.0 ISO images are bundled with ESXi:
- windows.iso: VMware Tools image for Windows 7 SP1 or Windows Server 2008 R2 SP1 or later
- linux.iso: VMware Tools 10.3.22 image for older versions of Linux OS with glibc 2.5 or later
The following VMware Tools 10.3.10 ISO images are available for download:
- solaris.iso: VMware Tools image for Solaris
- darwin.iso: VMware Tools image for OSX
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
Name | ESXi |
Version | 7.0b - 16324942 |
Release Date | June 16, 2020 |
Category | Enhancement |
Affected Components |
PRs Fixed | 2543390, 2536681, 2486316, 2503110, 2549993, 2549419, 2550655, 2554564, 2554116, 2489840, 2548068, 2556037, 2572393, 2544236, 2555270, 2560238, 2552532, 2551359, 2541690, 2554114 |
Name | ESXi |
Version | 7.0bs - 16321839 |
Release Date | June 16, 2020 |
Category | Enhancement |
Affected Components |
PRs Fixed | 2550705, 2550688, 2550702, 2560224, 2544206, 2536240, 2558119, 2558150, 2558534, 2536327, 2536334, 2536337, 2555190, 2465417 |
Known Issues
The known issues are grouped as follows.
Upgrade Issues
- PR 2579437: Upgrade to ESXi 7.0b fails with the error The operation cannot continue due to downgrade of the following Components: VMware-VM-Tools
When upgrading ESXi by using the vSphere Lifecycle Manager, if you select a new ESXi base image from the ESXi Version drop-down menu and you do not remove the async VMware Tools component from the Components section, the upgrade fails. In the vSphere Client, you see an error such as
The operation cannot continue due to downgrade of the following Components: VMware-VM-Tools.
The issue occurs because the async VMware Tools component from ESXi 7.0 overrides the component in the selected base image.
Workaround: Remove the async VMware Tools component from the Components section in your image. For more details, see VMware knowledge base article 79442.
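After remediation, you can confirm which VMware Tools component the host runs; a minimal sketch from the ESXi Shell, assuming your build provides the esxcli software component namespace introduced in vSphere 7.0:

esxcli software component list | grep -i VMware-VM-Tools

The output should show the VMware-VM-Tools component version from the image you applied rather than the older async component.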