ESXi 7.0 Update 2 | 09 MAR 2021 | ISO Build 17630552 |
Check for additions and updates to these release notes.
What's in the Release Notes
The release notes cover the following topics:
- What's New
- Earlier Releases of ESXi 7.0
- Patches Contained in this Release
- Product Support Notices
- Resolved Issues
- Known Issues
What's New
- vSphere Fault Tolerance supports vSphere Virtual Machine Encryption: Starting with vSphere 7.0 Update 2, vSphere FT supports VM encryption. In-guest and array-based encryption do not depend on or interfere with VM encryption, but having multiple encryption layers uses additional compute resources, which might impact virtual machine performance. The impact varies with the hardware as well as the amount and type of I/O, therefore VMware cannot quantify it, but overall performance impact is negligible for most workloads. The effectiveness and compatibility of back-end storage features such as deduplication, compression, and replication might also be affected by VM encryption and you should take into consideration storage tradeoffs. For more information, see Virtual Machine Encryption and Virtual Machine Encryption Best Practices.
- ESXi 7.0 Update 2 supports vSphere Quick Boot on the following servers:
- Dell Inc.
- PowerEdge M830
- PowerEdge R830
- HPE
- ProLiant XL675d Gen10 Plus
- Lenovo
- ThinkSystem SR 635
- ThinkSystem SR 655
- Some ESXi configuration files become read-only: As of ESXi 7.0 Update 2, configuration formerly stored in the files /etc/keymap, /etc/vmware/welcome, /etc/sfcb/sfcb.cfg, /etc/vmware/snmp.xml, /etc/vmware/logfilters, /etc/vmsyslog.conf, and /etc/vmsyslog.conf.d/*.conf now resides in the ConfigStore database. You can modify this configuration only by using ESXCLI commands, and not by editing files (see the sketch after this list). For more information, see VMware knowledge base articles 82637 and 82638.
- VMware vSphere Virtual Volumes statistics for better debugging: With ESXi 7.0 Update 2, you can track performance statistics for vSphere Virtual Volumes to quickly identify issues such as latency in third-party VASA provider responses. By using a set of commands, you can get statistics for all VASA providers in your system, or for a specified namespace or entity in the given namespace, or enable statistics tracking for the complete namespace. For more information, see Collecting Statistical Information for vVols.
- NVIDIA Ampere architecture support: vSphere 7.0 Update 2 adds support for the NVIDIA Ampere architecture that enables you to perform high-end AI/ML training and ML inference workloads by using the accelerated capacity of the A100 GPU. In addition, vSphere 7.0 Update 2 improves GPU sharing and utilization by supporting the Multi-Instance GPU (MIG) technology. With vSphere 7.0 Update 2, you also see enhanced performance of device-to-device communication, building on the existing NVIDIA GPUDirect functionality, by enabling Address Translation Services (ATS) and Access Control Services (ACS) at the PCIe bus layer in the ESXi kernel.
- Support for Mellanox ConnectX-6 200G NICs: ESXi 7.0 Update 2 supports Mellanox Technologies MT28908 Family (ConnectX-6) and Mellanox Technologies MT2892 Family (ConnectX-6 Dx) 200G NICs.
- Performance improvements for AMD Zen CPUs: With ESXi 7.0 Update 2, out-of-the-box optimizations can increase AMD Zen CPU performance by up to 30% in various benchmarks. The updated ESXi scheduler takes full advantage of the AMD NUMA architecture to make the most appropriate placement decisions for virtual machines and containers. AMD Zen CPU optimizations allow a higher number of VMs or container deployments with better performance.
- Reduced compute and I/O latency, and jitter for latency sensitive workloads: Latency sensitive workloads, such as in financial and telecom applications, can see significant performance benefit from I/O latency and jitter optimizations in ESXi 7.0 Update 2. The optimizations reduce interference and jitter sources to provide a consistent runtime environment. With ESXi 7.0 Update 2, you can also see higher speed in interrupt delivery for passthrough devices.
- Confidential vSphere Pods on a Supervisor Cluster in vSphere with Tanzu: Starting with vSphere 7.0 Update 2, you can run confidential vSphere Pods, keeping guest OS memory encrypted and protected against access from the hypervisor, on a Supervisor Cluster in vSphere with Tanzu. You can configure confidential vSphere Pods by adding Secure Encrypted Virtualization-Encrypted State (SEV-ES) as an extra security enhancement. For more information, see Deploy a Confidential vSphere Pod.
- vSphere Lifecycle Manager fast upgrades: Starting with vSphere 7.0 Update 2, you can significantly reduce upgrade time and system downtime, and minimize system boot time, by suspending virtual machines to memory and using the Quick Boot functionality. You can configure vSphere Lifecycle Manager to suspend virtual machines to memory instead of migrating them, powering them off, or suspending them to disk when you update an ESXi host. For more information, see Configuring vSphere Lifecycle Manager for Fast Upgrades.
- Encrypted Fault Tolerance log traffic: Starting with vSphere 7.0 Update 2, you can encrypt Fault Tolerance log traffic to get enhanced security. vSphere Fault Tolerance performs frequent checks between the primary and secondary VMs to enable quick resumption from the last successful checkpoint. The checkpoint contains the VM state that has been modified since the previous checkpoint. Encrypting the log traffic prevents malicious access or network attacks.
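For the read-only configuration files item above, the following is a minimal sketch of the ESXCLI-based workflow that replaces direct file edits. The loghost, message, and community values are placeholders; the exact ESXCLI namespace for each setting is documented in the knowledge base articles referenced in that item.
# Welcome banner, formerly /etc/vmware/welcome
esxcli system welcomemsg get
esxcli system welcomemsg set -m "Authorized users only"
# Syslog configuration, formerly /etc/vmsyslog.conf
esxcli system syslog config get
esxcli system syslog config set --loghost=tcp://syslog.example.com:514
esxcli system syslog reload
# SNMP agent configuration, formerly /etc/vmware/snmp.xml
esxcli system snmp set --communities public --enable true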
Earlier Releases of ESXi 7.0
New features, resolved issues, and known issues of ESXi are described in the release notes for each release. Release notes for earlier releases of ESXi 7.0 are:
- VMware ESXi 7.0, Patch Release ESXi 7.0 Update 1d
- VMware ESXi 7.0, Patch Release ESXi 7.0 Update 1c
- VMware ESXi 7.0, Patch Release ESXi 7.0 Update 1b
- VMware ESXi 7.0, Patch Release ESXi 7.0 Update 1a
- VMware ESXi 7.0, Patch Release ESXi 7.0 Update 1
- VMware ESXi 7.0, Patch Release ESXi 7.0b
For internationalization, compatibility, and open source components, see the VMware vSphere 7.0 Release Notes.
Patches Contained in This Release
This release of ESXi 7.0 Update 2 delivers the following patches:
Build Details
Download Filename: | VMware-ESXi-7.0U2-17630552-depot |
Build: | 17630552 |
Download Size: | 390.9 MB |
md5sum: | 4eae7823678cc7c57785e4539fe89d81 |
sha1checksum: | 7c6b70a0190bd78bcf118f856cf9c60b4ad7d4b5 |
Host Reboot Required: | Yes |
Virtual Machine Migration or Shutdown Required: | Yes |
IMPORTANT:
- In the Lifecycle Manager plug-in of the vSphere Client, the release date for the ESXi 7.0.2 base image, profiles, and components is 2021-02-17. This is expected. Only the release date of the rollup bulletin is 2021-03-09, which ensures that you can correctly filter patches by release date.
- Starting with vSphere 7.0, VMware uses components for packaging VIBs along with bulletins. The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline, or include the rollup bulletin in the baseline, to avoid failure during host patching.
- When patching ESXi hosts by using VMware Update Manager from a version prior to ESXi 7.0 Update 2, it is strongly recommended to use the rollup bulletin in the patch baseline. If you cannot use the rollup bulletin, be sure to include all of the following packages in the patching baseline (a quick version check is sketched after this list). If the following packages are not included in the baseline, the update operation fails:
- VMware-vmkusb_0.1-1vmw.701.0.0.16850804 or higher
- VMware-vmkata_0.1-1vmw.701.0.0.16850804 or higher
- VMware-vmkfcoe_1.0.0.2-1vmw.701.0.0.16850804 or higher
- VMware-NVMeoF-RDMA_1.0.1.2-1vmw.701.0.0.16850804 or higher
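As a quick sanity check before patching, you can list the currently installed versions of these packages from the ESXi Shell. This is only an illustrative check; the grep pattern approximates the VIB names of the packages listed above and might need adjusting for your image.
esxcli software vib list | grep -iE 'vmkusb|vmkata|vmkfcoe|nvmerdma'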
Rollup Bulletin
This rollup bulletin contains the latest VIBs with all the fixes after the initial release of ESXi 7.0.
Bulletin ID | Category | Severity |
ESXi70U2-17630552 | Enhancement | Important |
Image Profiles
VMware patch and update releases contain general and critical image profiles. The general release image profile applies to new bug fixes.
Image Profile Name |
ESXi-7.0.2-17630552-standard |
ESXi-7.0.2-17630552-no-tools |
ESXi Image
Name and Version | Release Date | Category | Detail |
ESXi_7.0.2-0.0.17630552 | 03/09/2021 | General | Bugfix image |
For information about the individual components and bulletins, see the Product Patches page and the Resolved Issues section.
Patch Download and Installation
In vSphere 7.x, the Update Manager plug-in, used for administering vSphere Update Manager, is replaced with the Lifecycle Manager plug-in. Administrative operations for vSphere Update Manager are still available under the Lifecycle Manager plug-in, along with new capabilities for vSphere Lifecycle Manager.
The typical way to apply patches to ESXi 7.x hosts is by using the vSphere Lifecycle Manager. For details, see About vSphere Lifecycle Manager and vSphere Lifecycle Manager Baselines and Images.
You can also update ESXi hosts without using the Lifecycle Manager plug-in, and use an image profile instead. To do this, you must manually download the patch offline bundle ZIP file from the VMware download page or the Product Patches page and use the esxcli software profile update command, as sketched below.
For more information, see the Upgrading Hosts by Using ESXCLI Commands and the VMware ESXi Upgrade guide.
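A minimal sketch of that ESXCLI workflow, assuming the offline bundle has already been uploaded to a datastore; the datastore path is a placeholder:
# List the image profiles contained in the downloaded offline bundle
esxcli software sources profile list -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U2-17630552-depot.zip
# Apply the standard image profile, then reboot the host
esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U2-17630552-depot.zip -p ESXi-7.0.2-17630552-standard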
Product Support Notices
- The inbox i40en network driver changes name: Starting with vSphere 7.0 Update 2, the inbox i40en network driver for ESXi changes name to i40enu.
- Removal of SHA1 from Secure Shell (SSH): In vSphere 7.0 Update 2, the SHA-1 cryptographic hashing algorithm is removed from the SSHD default configuration.
- Deprecation of vSphere 6.0 to 6.7 REST APIs: VMware deprecates the REST APIs from vSphere 6.0 to 6.7 that were served under /rest and are referred to as old REST APIs. With vSphere 7.0 Update 2, REST APIs are served under /api and referred to as new REST APIs (see the example after this list). For more information, see the blog post vSphere 7 Update 2 - REST API Modernization and VMware knowledge base article 83022.
- Intent to deprecate SHA-1: The SHA-1 cryptographic hashing algorithm will be deprecated in a future release of vSphere. SHA-1 and the already-deprecated MD5 have known weaknesses, and practical attacks against them have been demonstrated.
- Standard formats of log files and syslog transmissions: In a future major ESXi release, VMware plans to standardize the formats of all ESXi log files and syslog transmissions. This standardization affects the metadata associated with each log file line or syslog transmission. For example, the time stamp, programmatic source identifier, message severity, and operation identifier data. For more information, visit https://core.vmware.com/esxi-log-message-formats.
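As an illustration of the /rest to /api change noted above, the same session-creation request in the old and the new form; the hostname and credentials are placeholders:
# Old REST API, deprecated, served under /rest
curl -k -X POST -u 'administrator@vsphere.local:password' https://vcenter.example.com/rest/com/vmware/cis/session
# New REST API, served under /api as of vSphere 7.0 Update 2
curl -k -X POST -u 'administrator@vsphere.local:password' https://vcenter.example.com/api/session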
Resolved Issues
The resolved issues are grouped as follows.
- Intel Microcode Updates
- Storage Issues
- Auto Deploy Issues
- Networking Issues
- Miscellaneous Issues
- Upgrade Issues
- ESXi 7.0 Update 2 includes the following Intel microcode updates:
Code Name | FMS | Plt ID | MCU Rev | MCU Date | Brand Names |
Nehalem EP | 0x106a5 | 0x03 | 0x0000001d | 5/11/2018 | Intel Xeon 35xx Series; Intel Xeon 55xx Series |
Lynnfield | 0x106e5 | 0x13 | 0x0000000a | 5/8/2018 | Intel Xeon 34xx Lynnfield Series |
Clarkdale | 0x20652 | 0x12 | 0x00000011 | 5/8/2018 | Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series |
Arrandale | 0x20655 | 0x92 | 0x00000007 | 4/23/2018 | Intel Core i7-620LE Processor |
Sandy Bridge DT | 0x206a7 | 0x12 | 0x0000002f | 2/17/2019 | Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series |
Westmere EP | 0x206c2 | 0x03 | 0x0000001f | 5/8/2018 | Intel Xeon 56xx Series; Intel Xeon 36xx Series |
Sandy Bridge EP | 0x206d6 | 0x6d | 0x00000621 | 3/4/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series |
Sandy Bridge EP | 0x206d7 | 0x6d | 0x0000071a | 3/24/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series |
Nehalem EX | 0x206e6 | 0x04 | 0x0000000d | 5/15/2018 | Intel Xeon 65xx Series; Intel Xeon 75xx Series |
Westmere EX | 0x206f2 | 0x05 | 0x0000003b | 5/16/2018 | Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series |
Ivy Bridge DT | 0x306a9 | 0x12 | 0x00000021 | 2/13/2019 | Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C |
Haswell DT | 0x306c3 | 0x32 | 0x00000028 | 11/12/2019 | Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series |
Ivy Bridge EP | 0x306e4 | 0xed | 0x0000042e | 3/14/2019 | Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series |
Ivy Bridge EX | 0x306e7 | 0xed | 0x00000715 | 3/14/2019 | Intel Xeon E7-8800/4800/2800-v2 Series |
Haswell EP | 0x306f2 | 0x6f | 0x00000044 | 5/27/2020 | Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series |
Haswell EX | 0x306f4 | 0x80 | 0x00000016 | 6/17/2019 | Intel Xeon E7-8800/4800-v3 Series |
Broadwell H | 0x40671 | 0x22 | 0x00000022 | 11/12/2019 | Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series |
Avoton | 0x406d8 | 0x01 | 0x0000012d | 9/16/2019 | Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series |
Broadwell EP/EX | 0x406f1 | 0xef | 0x0b000038 | 6/18/2019 | Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series |
Skylake SP | 0x50654 | 0xb7 | 0x02006a08 | 6/16/2020 | Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series |
Cascade Lake B-0 | 0x50656 | 0xbf | 0x04003003 | 6/18/2020 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200 |
Cascade Lake | 0x50657 | 0xbf | 0x05003003 | 6/18/2020 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200 |
Cooper Lake | 0x5065b | 0xbf | 0x0700001f | 9/17/2020 | Intel Xeon Platinum 8300 Series; Intel Xeon Gold 6300/5300 |
Broadwell DE | 0x50662 | 0x10 | 0x0000001c | 6/17/2019 | Intel Xeon D-1500 Series |
Broadwell DE | 0x50663 | 0x10 | 0x07000019 | 6/17/2019 | Intel Xeon D-1500 Series |
Broadwell DE | 0x50664 | 0x10 | 0x0f000017 | 6/17/2019 | Intel Xeon D-1500 Series |
Broadwell NS | 0x50665 | 0x10 | 0x0e00000f | 6/17/2019 | Intel Xeon D-1600 Series |
Skylake H/S | 0x506e3 | 0x36 | 0x000000e2 | 7/14/2020 | Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series |
Denverton | 0x506f1 | 0x01 | 0x00000032 | 3/7/2020 | Intel Atom C3000 Series |
Snow Ridge | 0x80665 | 0x01 | 0x0b000007 | 2/25/2020 | Intel Atom P5000 Series |
Kaby Lake H/S/X | 0x906e9 | 0x2a | 0x000000de | 5/26/2020 | Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series |
Coffee Lake | 0x906ea | 0x22 | 0x000000de | 5/25/2020 | Intel Xeon E-2100 Series; Intel Xeon E-2200 Series (4 or 6 core) |
Coffee Lake | 0x906eb | 0x02 | 0x000000de | 5/25/2020 | Intel Xeon E-2100 Series |
Coffee Lake | 0x906ec | 0x22 | 0x000000de | 6/3/2020 | Intel Xeon E-2100 Series |
Coffee Lake Refresh | 0x906ed | 0x22 | 0x000000de | 5/24/2020 | Intel Xeon E-2200 Series (8 core) |
- After recovering from APD or PDL conditions, VMFS datastore with enabled support for clustered virtual disks might remain inaccessible
You can encounter this problem only on datastores where the clustered virtual disk support is enabled. When the datastore recovers from an All Paths Down (APD) or Permanent Device Loss (PDL) condition, it remains inaccessible. The VMkernel log might show multiple SCSI3 reservation conflict messages similar to the following:
2020-02-18T07:41:10.273Z cpu22:1001391219)ScsiDeviceIO: vm 1001391219: SCSIDeviceCmdCompleteCB:2972: Reservation conflict retries 544 for command 0x45ba814b8340 (op: 0x89) to device "naa.624a9370b97601e346f64ba900024d53"
The problem can occur because the ESXi host participating in the cluster loses SCSI reservations for the datastore and cannot always reacquire them automatically after the datastore recovers.
This issue is resolved in this release.
- PR 2710383: If you deploy an ESXi host by using the vSphere Auto Deploy stateful install, ESXi configurations migrated to the ConfigStore database are lost during upgrade
If you deploy an ESXi host by using the Auto Deploy stateful install feature, an option indicating the stateful install boot in the boot.cfg file is not removed and conflicts with the persistent state of the ConfigStore database. As a result, ESXi configurations migrated to the ConfigStore database are lost during upgrade to ESXi 7.x.
This issue is resolved in this release.
- PR 2696435: You cannot use virtual guest tagging (VGT) by default in an SR-IOV environment
With the i40enu driver in ESXi 7.0 Update 2, you cannot use VGT by default, which prevents untrusted SR-IOV virtual functions (VFs) from transmitting or receiving packets tagged with a VLAN ID different from the port VLAN ID.
You must set all VFs to trusted mode by using the following module parameter:
esxcli system module parameters set -a -m i40enu -p "trust_all_vfs=1,1,1,...
This issue is resolved in this release.
- NEW: If the Xorg process fails to restart while an ESXi host exits maintenance mode, the hostd service might become unresponsive
If the Xorg process fails to restart while an ESXi host exits maintenance mode, the hostd service might become unresponsive as it cannot complete the exit operation.
This issue is resolved in this release.
- NEW: If a vCenter Server system is of version 7.0, ESXi host upgrades to a later version by using the vSphere Lifecycle Manager and an ISO image fail
If you use an ISO image to upgrade ESXi hosts to a version later than 7.0 by using the vSphere Lifecycle Manager, and the vCenter Server system is still on version 7.0, the upgrade fails. In the vSphere Client, you see an error such as Upgrade is not supported for host.
This issue is resolved in this release.
Known Issues
The known issues are grouped as follows.
- Networking Issues
- Security Issues
- Miscellaneous Issues
- Storage Issues
- Virtual Machines Management Issues
- vSAN Issues
- Installation, Upgrade and Migration Issues
- Known Issues from Earlier Releases
- NEW: After an upgrade to ESXi 7.0 Update 2, VIBs of the async i40en network drivers for ESXi are skipped or reverted to the VMware inbox driver i40enu
Starting with vSphere 7.0 Update 2, the inbox i40en driver was renamed to i40enu. As a result, if you attempt to install an i40en partner async driver, the i40en VIB is either skipped or reverted to the VMware i40enu inbox driver.
Workaround: To complete the upgrade to ESXi 7.0 Update 2, you must create a custom image replacing the i40en component version 1.8.1.136-1vmw.702.0.0.17630552 with the i40en component version Intel-i40en_1.10.9.0-1OEM.700.1.0.15525992 or greater. For more information, see Customizing Installations with vSphere ESXi Image Builder.
- NEW: When you set auto-negotiation on a network adapter, the device might fail
In some environments, if you set the link speed to auto-negotiation for network adapters by using the command esxcli network nic set -a -n vmnicX, the devices might fail and a reboot does not recover connectivity. The issue is specific to a combination of some Intel X710/X722 network adapters, an SFP+ module, and a physical switch, where the auto-negotiate speed/duplex scenario is not supported.
Workaround: Make sure you use an Intel-branded SFP+ module. Alternatively, use a Direct Attach Copper (DAC) cable.
- Paravirtual RDMA (PVRDMA) network adapters do not support NSX networking policies
If you configure an NSX distributed virtual port for use in PVRDMA traffic, the RDMA protocol traffic over the PVRDMA network adapters does not comply with the NSX network policies.
Workaround: Do not configure NSX distributed virtual ports for use in PVRDMA traffic.
- Solarflare x2542 and x2541 network adapters configured in 1x100G port mode achieve throughput of up to 70Gbps in a vSphere environment
vSphere 7.0 Update 2 supports Solarflare x2542 and x2541 network adapters configured in 1x100G port mode. However, a hardware limitation in the devices might cap the actual throughput at about 70 Gbps in a vSphere environment.
Workaround: None
- VLAN traffic might fail after a NIC reset
A NIC with PCI device ID 8086:1537 might stop sending and receiving VLAN tagged packets after a reset, for example, with the command vsish -e set /net/pNics/vmnic0/reset 1.
Workaround: Avoid resetting the NIC. If you already face the issue, use the following commands to restore the VLAN capability, for example at vmnic0:
# esxcli network nic software set --tagging=1 -n vmnic0
# esxcli network nic software set --tagging=0 -n vmnic0
# esxcli network nic software set --tagging=0 -n vmnic0 - Any change in the NetQueue balancer settings causes NetQueue to be disabled after an ESXi host reboot
Any change in the NetQueue balancer settings by using the command esxcli/localcli network nic queue loadbalancer set -n <nicname> --<lb_setting> causes NetQueue, which is enabled by default, to be disabled after an ESXi host reboot.
Workaround: After a change in the NetQueue balancer settings and a host reboot, use the command configstorecli config current get -c esx -g network -k nics to retrieve the ConfigStore data and verify whether /esx/network/nics/net_queue/load_balancer/enable is working as expected.
After you run the command, you see output similar to:
{
"mac": "02:00:0e:6d:14:3e",
"name": "vmnic1",
"net_queue": {
"load_balancer": {
"dynamic_pool": true,
"enable": true
}
},
"virtual_mac": "00:50:56:5a:21:11"
}
If the output is not as expected, for example "load_balancer": "enable": false, run the following command:
esxcli/localcli network nic queue loadbalancer state set -n <nicname> -e true
- Turn off the Service Location Protocol service in ESXi, slpd, to prevent potential security vulnerabilities
Some services in ESXi that run on top of the host operating system, including slpd, the CIM object broker, sfcbd, and the related openwsmand service, have proven security vulnerabilities. VMware has addressed all known vulnerabilities in VMSA-2019-0022 and VMSA-2020-0023, and the fixes are part of the vSphere 7.0 Update 2 release. While sfcbd and openwsmand are disabled by default in ESXi, slpd is enabled by default and you must turn it off, if not necessary, to prevent exposure to a future vulnerability after an upgrade.
Workaround: To turn off the slpd service, run the following PowerCLI commands:
$ Get-VMHost | Get-VMHostService | Where-Object {$_.key -eq "slpd"} | Set-VMHostService -policy "off"
$ Get-VMHost | Get-VMHostService | Where-Object {$_.key -eq "slpd"} | Stop-VMHostService -Confirm:$false
Alternatively, you can use the command chkconfig slpd off && /etc/init.d/slpd stop.
The openwsmand service is not on the ESXi services list and you can check the service state by using the following PowerCLI commands:
$esx=(Get-EsxCli -vmhost xx.xx.xx.xx -v2)
$esx.system.process.list.invoke() | where CommandLine -like '*openwsman*' | select commandline
In the ESXi services list, the sfcbd service appears as sfcbd-watchdog.
For more information, see VMware knowledge base articles 76372 and 1025757.
- You cannot create snapshots of virtual machines due to an error that a digest operation has failed
A rare race condition when an All-Paths-Down (APD) state occurs during the update of the Content Based Read Cache (CBRC) digest file might cause inconsistencies in the digest file. As a result, you cannot create virtual machine snapshots. You see an error such as An error occurred while saving the snapshot: A digest operation has failed in the backtrace.
Workaround: Power cycle the virtual machines to trigger a recompute of the CBRC hashes and clear the inconsistencies in the digest file.
- An ESXi host might fail with a purple diagnostic screen due to a rare race condition in the qedentv driver
A rare race condition in the qedentv driver might cause an ESXi host to fail with a purple diagnostic screen. The issue occurs when an Rx complete interrupt arrives just after a General Services Interface (GSI) queue pair (QP) is destroyed, for example during a qedentv driver unload or a system shut down. In such a case, the qedentv driver might access an already freed QP address that leads to a PF exception. The issue might occur in ESXi hosts that are connected to a busy physical switch with heavy unsolicited GSI traffic. In the backtrace, you see messages such as:
cpu4:2107287)0x45389609bcb0:[0x42001d3e6f72]qedrntv_ll2_rx_cb@(qedrntv)#<None>+0x1be stack: 0x45b8f00a7740, 0x1e146d040, 0x432d65738d40, 0x0, 0x
2021-02-11T03:31:53.882Z cpu4:2107287)0x45389609bd50:[0x42001d421d2a]ecore_ll2_rxq_completion@(qedrntv)#<None>+0x2ab stack: 0x432bc20020ed, 0x4c1e74ef0, 0x432bc2002000,
2021-02-11T03:31:53.967Z cpu4:2107287)0x45389609bdf0:[0x42001d1296d0]ecore_int_sp_dpc@(qedentv)#<None>+0x331 stack: 0x0, 0x42001c3bfb6b, 0x76f1e5c0, 0x2000097, 0x14c2002
2021-02-11T03:31:54.039Z cpu4:2107287)0x45389609be60:[0x42001c0db867]IntrCookieBH@vmkernel#nover+0x17c stack: 0x45389609be80, 0x40992f09ba, 0x43007a436690, 0x43007a43669
2021-02-11T03:31:54.116Z cpu4:2107287)0x45389609bef0:[0x42001c0be6b0]BH_Check@vmkernel#nover+0x121 stack: 0x98ba, 0x33e72f6f6e20, 0x0, 0x8000000000000000, 0x430000000001
2021-02-11T03:31:54.187Z cpu4:2107287)0x45389609bf70:[0x42001c28370c]NetPollWorldCallback@vmkernel#nover+0x129 stack: 0x61, 0x42001d0e0000, 0x42001c283770, 0x0, 0x0
2021-02-11T03:31:54.256Z cpu4:2107287)0x45389609bfe0:[0x42001c380bad]CpuSched_StartWorld@vmkernel#nover+0x86 stack: 0x0, 0x42001c0c2b44, 0x0, 0x0, 0x0
2021-02-11T03:31:54.319Z cpu4:2107287)0x45389609c000:[0x42001c0c2b43]Debug_IsInitialized@vmkernel#nover+0xc stack: 0x0, 0x0, 0x0, 0x0, 0x0
2021-02-11T03:31:54.424Z cpu4:2107287)^[[45m^[[33;1mVMware ESXi 7.0.2 [Releasebuild-17435195 x86_64]^[[0m
#PF Exception 14 in world 2107287:vmnic7-pollW IP 0x42001d3e6f72 addr 0x1c
Workaround: None
- NEW: If you use a USB as a boot device, ESXi hosts might become unresponsive and you see host not-responding and boot bank is not found alerts
USB devices have a small queue depth and, due to a race condition in the ESXi storage stack, some I/O operations might not get to the device. Such I/Os queue in the ESXi storage stack and ultimately time out. As a result, ESXi hosts become unresponsive. In the vSphere Client, you see alerts such as Alert: /bootbank not to be found at path '/bootbank' and Host not-responding.
In vmkernel logs, you see errors such as:
2021-04-12T04:47:44.940Z cpu0:2097441)ScsiPath: 8058: Cancelled Cmd(0x45b92ea3fd40) 0xa0, cmdId.initiator=0x4538c859b8f8 CmdSN 0x0 from world 0 to path "vmhba32:C0:T0:L0". Cmd count Active:0 Queued:1.
2021-04-12T04:48:50.527Z cpu2:2097440)ScsiDeviceIO: 4315: Cmd(0x45b92ea76d40) 0x28, cmdId.initiator=0x4305f74cc780 CmdSN 0x1279 from world 2099370 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x5 D:0x0 P:0x0 Cancelled from path layer. Cmd count Active:1
2021-04-12T04:48:50.527Z cpu2:2097440)Queued:4
Workaround: None.
- If an NVMe device is hot added and hot removed in a short interval, the ESXi host might fail with a purple diagnostic screen
If an NVMe device is hot added and hot removed in a short interval, the NVMe driver might fail to initialize the NVMe controller due to a command timeout. As a result, the driver might access memory that is already freed in a cleanup process. In the backtrace, you see a message such as WARNING: NVMEDEV: NVMEInitializeController:4045: Failed to get controller identify data, status: Timeout.
Eventually, the ESXi host might fail with a purple diagnostic screen with an error similar to #PF Exception ... in world ...:vmkdevmgr.
.Workaround: Perform hot-plug operations on a slot only after the previous hot-plug operation on the slot is complete. For example, if you want to run a hot-remove after a hot-add operation, wait until the HBAs are created and LUNs are discovered. For the alternative scenario, hot-add after a hot-remove operation, wait until all the LUNs and HBAs are removed.
- UEFI HTTP booting of virtual machines on ESXi hosts of version earlier than 7.0 Update 2 fails
UEFI HTTP booting of virtual machines is supported only on hosts of version ESXi 7.0 Update 2 and later and VMs with HW version 19 or later.
Workaround: Use UEFI HTTP booting only in virtual machines with HW version 19 or later. Using HW version 19 ensures the virtual machines are placed only on hosts with ESXi version 7.0 Update 2 or later.
- If you change the preferred site in a VMware vSAN Stretched Cluster, some objects might incorrectly appear as compliant
If you change the preferred site in a stretched cluster, some objects might incorrectly appear as compliant, because their policy settings might not automatically change. For example, if you configure a virtual machine to keep data at the preferred site, when you change the preferred site, data might remain on the nonpreferred site.
Workaround: Before you change a preferred site, in Advanced Settings, lower the ClomMaxCrawlCycleMinutes setting to 15 minutes to make sure the object policies are updated. After the change, revert the ClomMaxCrawlCycleMinutes option to the earlier value.
- UEFI booting of ESXi hosts might stop with an error during an update to ESXi 7.0 Update 2 from an earlier version of ESXi 7.0
If you attempt to update your environment to 7.0 Update 2 from an earlier version of ESXi 7.0 by using vSphere Lifecycle Manager patch baselines, UEFI booting of ESXi hosts might stop with an error such as:
Loading /boot.cfg
Failed to load crypto64.efi
Fatal error: 15 (Not found)Workaround: For more information, see VMware knowledge base articles 83063 and 83107 .
- If legacy VIBs are in use on an ESXi host, vSphere Lifecycle Manager cannot extract a desired software specification to seed to a new cluster
With vCenter Server 7.0 Update 2, you can create a new cluster by importing the desired software specification from a single reference host. However, if legacy VIBs are in use on an ESXi host, vSphere Lifecycle Manager cannot extract a reference software specification from such a host in the vCenter Server instance where you create the cluster. In the /var/log/lifecycle.log, you see messages such as:
2020-11-11T06:54:03Z lifecycle: 1000082644: HostSeeding:499 ERROR Extract depot failed: Checksum doesn't match. Calculated 5b404e28e83b1387841bb417da93c8c796ef2497c8af0f79583fd54e789d8826, expected: 0947542e30b794c721e21fb595f1851b247711d0619c55489a6a8cae6675e796
2020-11-11T06:54:04Z lifecycle: 1000082644: imagemanagerctl:366 ERROR Extract depot failed.
2020-11-11T06:54:04Z lifecycle: 1000082644: imagemanagerctl:145 ERROR [VibChecksumError]
Workaround: Follow the steps described in VMware knowledge base article 83042.
- You see a short burst of log messages in the syslog.log after every ESXi boot
After updating to ESXi 7.0 Update 2, you might see a short burst of log messages after every ESXi boot.
Such logs do not indicate any issue with ESXi and you can ignore these messages. For example:
2021-01-19T22:44:22Z watchdog-vaai-nasd: '/usr/lib/vmware/nfs/bin/vaai-nasd -f' exited after 0 seconds (quick failure 127) 1
2021-01-19T22:44:22Z watchdog-vaai-nasd: Executing '/usr/lib/vmware/nfs/bin/vaai-nasd -f'
2021-01-19T22:44:22.990Z aainasd[1000051135]: Log for VAAI-NAS Daemon for NFS version=1.0 build=build-00000 option=DEBUG
2021-01-19T22:44:22.990Z vaainasd[1000051135]: DictionaryLoadFile: No entries loaded by dictionary.
2021-01-19T22:44:22.990Z vaainasd[1000051135]: DictionaryLoad: Cannot open file "/usr/lib/vmware/config": No such file or directory.
2021-01-19T22:44:22.990Z vaainasd[1000051135]: DictionaryLoad: Cannot open file "//.vmware/config": No such file or directory.
2021-01-19T22:44:22.990Z vaainasd[1000051135]: DictionaryLoad: Cannot open file "//.vmware/preferences": No such file or directory.
2021-01-19T22:44:22.990Z vaainasd[1000051135]: Switching to VMware syslog extensions
2021-01-19T22:44:22.992Z vaainasd[1000051135]: Loading VAAI-NAS plugin(s).
2021-01-19T22:44:22.992Z vaainasd[1000051135]: DISKLIB-PLUGIN : Not loading plugin /usr/lib/vmware/nas_plugins/lib64: Not a shared library.
Workaround: None
- You see warning messages for missing VIBs in vSphere Quick Boot compatibility check reports
After you upgrade to ESXi 7.0 Update 2, if you check vSphere Quick Boot compatibility of your environment by using the /usr/lib/vmware/loadesx/bin/loadESXCheckCompat.py command, you might see some warning messages for missing VIBs in the shell. For example:
Cannot find VIB(s) ... in the given VIB collection.
Ignoring missing reserved VIB(s) ..., they are removed from reserved VIB IDs.
Such warnings do not indicate a compatibility issue.
Workaround: The missing VIB messages can be safely ignored and do not affect the reporting of vSphere Quick Boot compatibility. The final output line of the loadESXCheckCompat command unambiguously indicates if the host is compatible.
- Auto bootstrapping a cluster that you manage with a vSphere Lifecycle Manager image fails with an error
If you attempt to auto bootstrap a cluster that you manage with a vSphere Lifecycle Manager image to perform a stateful install and overwrite the VMFS partitions, the operation fails with an error. In the support bundle, you see messages such as:
2021-02-11T19:37:43Z Host Profiles[265671 opID=MainThread]: ERROR: EngineModule::ApplyHostConfig. Exception: [Errno 30] Read-only file system
Workaround: Follow vendor guidance to clean the VMFS partition in the target host and retry the operation. Alternatively, use an empty disk. For more information on the disk-partitioning utility on ESXi, see VMware knowledge base article 1036609.
- Upgrades to ESXi 7.x from 6.5.x and 6.7.0 by using ESXCLI might fail due to a space limitation
Upgrades to ESXi 7.x from 6.5.x and 6.7.0 by using the esxcli software profile update or esxcli software profile install ESXCLI commands might fail, because the ESXi bootbank might be less than the size of the image profile. In the ESXi Shell or the PowerCLI shell, you see an error such as:
[InstallationError] The pending transaction requires 244 MB free space, however the maximum supported size is 239 MB. Please refer to the log file for more details.
The issue also occurs when you attempt an ESXi host upgrade by using the ESXCLI commands esxcli software vib update or esxcli software vib install.
Workaround: You can perform the upgrade in two steps, by using the esxcli software profile update command to update ESXi hosts to ESXi 6.7 Update 1 or later, and then update to 7.0 Update 1c. Alternatively, you can run an upgrade by using an ISO image and the vSphere Lifecycle Manager.