ESXi 7.0 Update 2 | 09 MAR 2021 | ISO Build 17630552

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

  • What's New
  • Earlier Releases of ESXi 7.0
  • Patches Contained in This Release
  • Product Support Notices
  • Resolved Issues
  • Known Issues
  • Known Issues from Earlier Releases

What's New

  • ESXi 7.0 Update 2 supports vSphere Quick Boot on the following servers:
    • Dell Inc.
      • PowerEdge M830
      • PowerEdge R830
    • HPE
      • ProLiant XL675d Gen10 Plus
    • Lenovo 
      • ThinkSystem SR 635
      • ThinkSystem SR 655
  • Some ESXi configuration files become read-only: As of ESXi 7.0 Update 2, configuration formerly stored in the files /etc/keymap, /etc/vmware/welcome, /etc/sfcb/sfcb.cfg, /etc/vmware/snmp.xml, /etc/vmware/logfilters, /etc/vmsyslog.conf, and /etc/vmsyslog.conf.d/*.conf now resides in the ConfigStore database. You can modify this configuration only by using ESXCLI commands, and not by editing files. For more information, see VMware knowledge base articles 82637 and 82638, and the example after this list.
     
  • VMware vSphere Virtual Volumes statistics for better debugging: With ESXi 7.0 Update 2, you can track performance statistics for vSphere Virtual Volumes to quickly identify issues such as latency in third-party VASA provider responses. By using a set of commands, you can get statistics for all VASA providers in your system, or for a specified namespace or entity in the given namespace, or enable statistics tracking for the complete namespace. For more information, see Collecting Statistical Information for vVols.
     
  • NVIDIA Ampere architecture support: vSphere 7.0 Update 2 adds support for the NVIDIA Ampere architecture, which enables you to perform high-end AI/ML training and ML inference workloads by using the accelerated capacity of the A100 GPU. In addition, vSphere 7.0 Update 2 improves GPU sharing and utilization by supporting the Multi-Instance GPU (MIG) technology. With vSphere 7.0 Update 2, you also see enhanced performance of device-to-device communication, building on the existing NVIDIA GPUDirect functionality, by enabling Address Translation Services (ATS) and Access Control Services (ACS) at the PCIe bus layer in the ESXi kernel.
     
  • Support for Mellanox ConnectX-6 200G NICs: ESXi 7.0 Update 2 supports Mellanox Technologies MT28908 Family (ConnectX-6) and Mellanox Technologies MT2892 Family (ConnectX-6 Dx) 200G NICs.
     
  • Performance improvements for AMD Zen CPUs: With ESXi 7.0 Update 2, out-of-the-box optimizations can increase AMD Zen CPU performance by up to 30% in various benchmarks. The updated ESXi scheduler takes full advantage of the AMD NUMA architecture to make the most appropriate placement decisions for virtual machines and containers. AMD Zen CPU optimizations allow a higher number of VMs or container deployments with better performance.
     
  • Reduced compute and I/O latency, and jitter for latency sensitive workloads: Latency sensitive workloads, such as in financial and telecom applications, can see significant performance benefit from I/O latency and jitter optimizations in ESXi 7.0 Update 2. The optimizations reduce interference and jitter sources to provide a consistent runtime environment. With ESXi 7.0 Update 2, you can also see higher speed in interrupt delivery for passthrough devices.
     
  • Confidential vSphere Pods on a Supervisor Cluster in vSphere with Tanzu: Starting with vSphere 7.0 Update 2, you can run confidential vSphere Pods, keeping guest OS memory encrypted and protected against access from the hypervisor, on a Supervisor Cluster in vSphere with Tanzu. You can configure confidential vSphere Pods by adding Secure Encrypted Virtualization-Encrypted State (SEV-ES) as an extra security enhancement. For more information, see Deploy a Confidential vSphere Pod.
     
  • vSphere Lifecycle Manager fast upgrades: Starting with vSphere 7.0 Update 2, you can significantly reduce upgrade time and system downtime, and minimize system boot time, by suspending virtual machines to memory and using the Quick Boot functionality. You can configure vSphere Lifecycle Manager to suspend virtual machines to memory instead of migrating them, powering them off, or suspending them to disk when you update an ESXi host. For more information, see Configuring vSphere Lifecycle Manager for Fast Upgrades.
     
  • Encrypted Fault Tolerance log traffic: Starting with vSphere 7.0 Update 2, you can encrypt Fault Tolerance log traffic to get enhanced security. vSphere Fault Tolerance performs frequent checks between the primary and secondary VMs to enable quick resumption from the last successful checkpoint. The checkpoint contains the VM state that has been modified since the previous checkpoint. Encrypting the log traffic prevents malicious access or network attacks.
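
For the configuration that moved to the ConfigStore database (see the read-only configuration files item above), the following is a minimal illustrative sketch of reading and changing a few of these settings with ESXCLI instead of editing the former files. The welcome message text, keyboard layout name, and remote syslog target are example values only, and the exact ESXCLI namespaces available can vary by build:

    esxcli system welcomemsg get
    esxcli system welcomemsg set --message="Authorized users only"       # formerly /etc/vmware/welcome
    esxcli system settings keyboard layout list
    esxcli system settings keyboard layout set --layout="US Default"     # formerly /etc/keymap
    esxcli system syslog config get
    esxcli system syslog config set --loghost=udp://10.0.0.1:514         # formerly /etc/vmsyslog.conf
    esxcli system syslog reload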

Earlier Releases of ESXi 7.0

New features, resolved issues, and known issues of ESXi are described in the release notes for each earlier release of ESXi 7.0.

For internationalization, compatibility, and open source components, see the VMware vSphere 7.0 Release Notes.

Patches Contained in This Release

This release of ESXi 7.0 Update 2 delivers the following patches:

Build Details

Download Filename: VMware-ESXi-7.0U2-17630552-depot
Build: 17630552
Download Size: 390.9 MB
md5sum: 4eae7823678cc7c57785e4539fe89d81
sha1checksum: 7c6b70a0190bd78bcf118f856cf9c60b4ad7d4b5
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
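
To verify the integrity of a downloaded depot before use, you can compare the published checksums above against locally computed values, for example on a Linux workstation; the .zip extension below is an assumption about how the depot file is saved:

    md5sum VMware-ESXi-7.0U2-17630552-depot.zip
    sha1sum VMware-ESXi-7.0U2-17630552-depot.zip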

IMPORTANT:

  • In the Lifecycle Manager plug-in of the vSphere Client, the release date for the ESXi 7.0.2 base image, profiles, and components is 2021-02-17. This is expected. To ensure that you can filter correctly by release date, only the rollup bulletin has a release date of 2021-03-09.
     
  • Starting with vSphere 7.0, VMware uses components for packaging VIBs along with bulletins. The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching. 
  • When patching ESXi hosts by using VMware Update Manager from a version prior to ESXi 7.0 Update 2, it is strongly recommended to use the rollup bulletin in the patch baseline. If you cannot use the rollup bulletin, be sure to include all of the following packages in the patching baseline. If the following packages are not included in the baseline, the update operation fails:
    • VMware-vmkusb_0.1-1vmw.701.0.0.16850804 or higher
    • VMware-vmkata_0.1-1vmw.701.0.0.16850804 or higher
    • VMware-vmkfcoe_1.0.0.2-1vmw.701.0.0.16850804 or higher
    • VMware-NVMeoF-RDMA_1.0.1.2-1vmw.701.0.0.16850804 or higher


Rollup Bulletin

This rollup bulletin contains the latest VIBs with all the fixes after the initial release of ESXi 7.0.

Bulletin ID: ESXi70U2-17630552
Category: Enhancement
Severity: Important

Image Profiles

VMware patch and update releases contain general and critical image profiles. The general release image profile applies to new bug fixes.

Image Profile Name
ESXi-7.0.2-17630552-standard
ESXi-7.0.2-17630552-no-tools

ESXi Image

Name and Version: ESXi_7.0.2-0.0.17630552
Release Date: 03/09/2021
Category: General
Detail: Bugfix image

For information about the individual components and bulletins, see the Product Patches page and the Resolved Issues section.

Patch Download and Installation

In vSphere 7.x, the Update Manager plug-in, used for administering vSphere Update Manager, is replaced with the Lifecycle Manager plug-in. Administrative operations for vSphere Update Manager are still available under the Lifecycle Manager plug-in, along with new capabilities for vSphere Lifecycle Manager.
The typical way to apply patches to ESXi 7.x hosts is by using the vSphere Lifecycle Manager. For details, see About vSphere Lifecycle Manager and vSphere Lifecycle Manager Baselines and Images.
You can also update ESXi hosts without using the Lifecycle Manager plug-in, and use an image profile instead. To do this, you must manually download the patch offline bundle ZIP file from the VMware download page or the Product Patches page and use the esxcli software profile command.
For more information, see the Upgrading Hosts by Using ESXCLI Commands and the VMware ESXi Upgrade guide.
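
As an illustrative sketch of the image profile workflow, assuming the offline bundle has been uploaded to a datastore path such as /vmfs/volumes/datastore1/ (the path and file name are examples) and that you target the standard image profile listed in the Image Profiles section, the commands might look like the following. Place the host in maintenance mode first and reboot it after the update completes:

    esxcli software sources profile list -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U2-17630552-depot.zip
    esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U2-17630552-depot.zip -p ESXi-7.0.2-17630552-standard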

Product Support Notices

  • Removal of SHA1 from Secure Shell (SSH): In vSphere 7.0 Update 2, the SHA-1 cryptographic hashing algorithm is removed from the SSHD default configuration.
  • Deprecation of vSphere 6.0 to 6.7 REST APIs: VMware deprecates REST APIs from vSphere 6.0 to 6.7 that were served under /rest and are referred to as old REST APIs. With vSphere 7.0 Update 2, REST APIs are served under /api and referred to as new REST APIs. For more information, see the blog vSphere 7 Update 2 - REST API Modernization and vSphere knowledge base article 83022.
  • Intent to deprecate SHA-1: The SHA-1 cryptographic hashing algorithm will be deprecated in a future release of vSphere. SHA-1 and the already-deprecated MD5 have known weaknesses, and practical attacks against them have been demonstrated.
  • Standard formats of log files and syslog transmissions: In a future major ESXi release, VMware plans to standardize the formats of all ESXi log files and syslog transmissions. This standardization affects the metadata associated with each log file line or syslog transmission, such as the time stamp, programmatic source identifier, message severity, and operation identifier data. For more information, visit https://core.vmware.com/esxi-log-message-formats.

Resolved Issues

The resolved issues are grouped as follows.

Storage Issues
  • After recovering from APD or PDL conditions, VMFS datastore with enabled support for clustered virtual disks might remain inaccessible

    You can encounter this problem only on datastores where the clustered virtual disk support is enabled. When the datastore recovers from an All Paths Down (APD) or Permanent Device Loss (PDL) condition, it remains inaccessible. The VMkernel log might show multiple SCSI3 reservation conflict messages similar to the following:

    2020-02-18T07:41:10.273Z cpu22:1001391219)ScsiDeviceIO: vm 1001391219: SCSIDeviceCmdCompleteCB:2972: Reservation conflict retries 544 for command 0x45ba814b8340 (op: 0x89) to device "naa.624a9370b97601e346f64ba900024d53"

    The problem can occur because the ESXi host participating in the cluster loses SCSI reservations for the datastore and cannot always reacquire them automatically after the datastore recovers.

    This issue is resolved in this release.

Auto Deploy Issues
  • PR 2710383: If you deploy an ESXi host by using the vSphere Auto Deploy stateful install, ESXi configurations migrated to the ConfigStore database are lost during upgrade

    If you deploy an ESXi host by using the Auto Deploy stateful install feature, an option indicating the stateful install boot in the boot.cfg file is not removed and conflicts with the persistent state of the ConfigStore database. As a result, ESXi configurations migrated to the ConfigStore database are lost during upgrade to ESXi 7.x.

    This issue is resolved in this release.

Networking Issues
  • PR 2696435: You cannot use virtual guest tagging (VGT) by default in an SR-IOV environment

    With the i40enu driver in ESXi 7.0 Update 2, you cannot use VGT by default, to prevent untrusted SR-IOV virtual functions (VFs) from transmitting or receiving packets tagged with a VLAN ID different from the port VLAN ID.
    You must set all VFs to trusted mode by using the following module parameter:
    esxcli system module parameters set -a -m i40enu -p "trust_all_vfs=1,1,1,..."

    This issue is resolved in this release.

Miscellaneous Issues
  • NEW: If the Xorg process fails to restart while an ESXi host exits maintenance mode, the hostd service might become unresponsive

    If the Xorg process fails to restart while an ESXi host exits maintenance mode, the hostd service might become unresponsive as it cannot complete the exit operation.

    This issue is resolved in this release.

Known Issues

The known issues are grouped as follows.

Networking Issues
  • NEW: When you set auto-negotiation on a network adapter, the device might fail

    In some environments, if you set the link speed to auto-negotiation for network adapters by using the command esxcli network nic set -a -n vmnicX, the devices might fail, and a reboot does not recover connectivity. The issue is specific to a combination of some Intel X710/X722 network adapters, an SFP+ module, and a physical switch, where the auto-negotiate speed/duplex scenario is not supported.

    Workaround: Make sure you use an Intel-branded SFP+ module. Alternatively, use a Direct Attach Copper (DAC) cable.

  • Paravirtual RDMA (PVRDMA) network adapters do not support NSX networking policies

    If you configure an NSX distributed virtual port for use in PVRDMA traffic, the RDMA protocol traffic over the PVRDMA network adapters does not comply with the NSX network policies.

    Workaround: Do not configure NSX distributed virtual ports for use in PVRDMA traffic.

  • Solarflare x2542 and x2541 network adapters configured in 1x100G port mode achieve throughput of up to 70Gbps in a vSphere environment

    vSphere 7.0 Update 2 supports Solarflare x2542 and x2541 network adapters configured in 1x100G port mode. However, a hardware limitation in the devices might cap the actual throughput at approximately 70 Gbps in a vSphere environment.

    Workaround: None

  • VLAN traffic might fail after a NIC reset

    A NIC with PCI device ID 8086:1537 might stop sending and receiving VLAN-tagged packets after a reset, for example, with the command vsish -e set /net/pNics/vmnic0/reset 1.

    Workaround: Avoid resetting the NIC. If you already face the issue, use the following commands to restore the VLAN capability, for example at vmnic0:
    # esxcli network nic software set --tagging=1 -n vmnic0
    # esxcli network nic software set --tagging=0 -n vmnic0

  • Any change in the NetQueue balancer settings causes NetQueue to be disabled after an ESXi host reboot

    Any change in the NetQueue balancer settings by using the command esxcli/localcli network nic queue loadbalancer set -n <nicname> --<lb_setting> causes NetQueue, which is enabled by default, to be disabled after an ESXi host reboot.

    Workaround: After a change in the NetQueue balancer settings and a host reboot, use the command configstorecli config current get -c esx -g network -k nics to retrieve the ConfigStore data and verify whether the /esx/network/nics/net_queue/load_balancer/enable setting has the expected value.

    After you run the command, you see output similar to:
    {
      "mac": "02:00:0e:6d:14:3e",
      "name": "vmnic1",
      "net_queue": {
        "load_balancer": {
          "dynamic_pool": true,
          "enable": true
        }
      },
      "virtual_mac": "00:50:56:5a:21:11"
    }

    If the output is not as expected, for example "enable": false under "load_balancer", run the following command:
    esxcli/localcli network nic queue loadbalancer state set -n <nicname> -e true

  • Read latency of QLogic 16Gb Fibre Channel adapters supported by the ESXi 7.0 Update 2 qlnativefc driver increases in certain conditions

    Due to a regression, you see increased read latency on virtual machines placed on storage devices connected to a QLogic 16Gb Fibre Channel adapter supported by the ESXi 7.0 Update 2 qlnativefc driver. If a VM is encrypted, sequential read latency increases by 8% in your ESXi 7.0 Update 2 environment, compared to the ESXi 7.0 Update 1 environment. If the VM is not encrypted, latency increases by between 15% and 23%.

    Workaround: None

Security Issues
  • Turn off the Service Location Protocol service in ESXi, slpd, to prevent potential security vulnerabilities

    Some services in ESXi that run on top of the host operating system, including slpd, the CIM object broker, sfcbd, and the related openwsmand service, have proven security vulnerabilities. VMware has addressed all known vulnerabilities in VMSA-2019-0022 and VMSA-2020-0023, and the fixes are part of the vSphere 7.0 Update 2 release. While sfcbd and openwsmand are disabled by default in ESXi, slpd is enabled by default, and you must turn it off if it is not needed, to prevent exposure to a future vulnerability after an upgrade.

    Workaround: To turn off the slpd service, run the following PowerCLI commands:
    $ Get-VMHost | Get-VMHostService | Where-Object {$_.Key -eq "slpd"} | Set-VMHostService -Policy "off"
    $ Get-VMHost | Get-VMHostService | Where-Object {$_.Key -eq "slpd"} | Stop-VMHostService -Confirm:$false


    Alternatively, you can use the command chkconfig slpd off && /etc/init.d/slpd stop.

    The openwsmand service is not on the ESXi services list and you can check the service state by using the following PowerCLI commands:
    $esx=(Get-EsxCli -vmhost xx.xx.xx.xx -v2)
    $esx.system.process.list.invoke() | where CommandLine -like '*openwsman*' | select commandline

    In the ESXi services list, the sfcbd service appears as sfcbd-watchdog.

    For more information, see VMware knowledge base articles 76372 and 1025757.
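
    As a quick check after applying the workaround, you can verify that slpd is stopped and that nothing is listening on the SLP port (427 with the default configuration); the status action of the init script is assumed to be available in your build:

    /etc/init.d/slpd status
    esxcli network ip connection list | grep ':427'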

Miscellaneous Issues
  • You cannot create snapshots of virtual machines due to an error that a digest operation has failed

    A rare race condition when an All-Paths-Down (APD) state occurs during the update of the Content Based Read Cache (CBRC) digest file might cause inconsistencies in the digest file. As a result, you cannot create virtual machine snapshots. You see an error such as An error occurred while saving the snapshot: A digest operation has failed in the backtrace.

    Workaround: Power cycle the virtual machines to trigger a recompute of the CBRC hashes and clear the inconsistencies in the digest file.

  • An ESXi host might fail with a purple diagnostic screen due to a rare race condition in the qedentv driver

    A rare race condition in the qedentv driver might cause an ESXi host to fail with a purple diagnostic screen. The issue occurs when an Rx complete interrupt arrives just after a General Services Interface (GSI) queue pair (QP) is destroyed, for example during a qedentv driver unload or a system shut down. In such a case, the qedentv driver might access an already freed QP address that leads to a PF exception. The issue might occur in ESXi hosts that are connected to a busy physical switch with heavy unsolicited GSI traffic. In the backtrace, you see messages such as:

    cpu4:2107287)0x45389609bcb0:[0x42001d3e6f72]qedrntv_ll2_rx_cb@(qedrntv)#<None>+0x1be stack: 0x45b8f00a7740, 0x1e146d040, 0x432d65738d40, 0x0, 0x
    2021-02-11T03:31:53.882Z cpu4:2107287)0x45389609bd50:[0x42001d421d2a]ecore_ll2_rxq_completion@(qedrntv)#<None>+0x2ab stack: 0x432bc20020ed, 0x4c1e74ef0, 0x432bc2002000,
    2021-02-11T03:31:53.967Z cpu4:2107287)0x45389609bdf0:[0x42001d1296d0]ecore_int_sp_dpc@(qedentv)#<None>+0x331 stack: 0x0, 0x42001c3bfb6b, 0x76f1e5c0, 0x2000097, 0x14c2002
    2021-02-11T03:31:54.039Z cpu4:2107287)0x45389609be60:[0x42001c0db867]IntrCookieBH@vmkernel#nover+0x17c stack: 0x45389609be80, 0x40992f09ba, 0x43007a436690, 0x43007a43669
    2021-02-11T03:31:54.116Z cpu4:2107287)0x45389609bef0:[0x42001c0be6b0]BH_Check@vmkernel#nover+0x121 stack: 0x98ba, 0x33e72f6f6e20, 0x0, 0x8000000000000000, 0x430000000001
    2021-02-11T03:31:54.187Z cpu4:2107287)0x45389609bf70:[0x42001c28370c]NetPollWorldCallback@vmkernel#nover+0x129 stack: 0x61, 0x42001d0e0000, 0x42001c283770, 0x0, 0x0
    2021-02-11T03:31:54.256Z cpu4:2107287)0x45389609bfe0:[0x42001c380bad]CpuSched_StartWorld@vmkernel#nover+0x86 stack: 0x0, 0x42001c0c2b44, 0x0, 0x0, 0x0
    2021-02-11T03:31:54.319Z cpu4:2107287)0x45389609c000:[0x42001c0c2b43]Debug_IsInitialized@vmkernel#nover+0xc stack: 0x0, 0x0, 0x0, 0x0, 0x0
    2021-02-11T03:31:54.424Z cpu4:2107287)^[[45m^[[33;1mVMware ESXi 7.0.2 [Releasebuild-17435195 x86_64]^[[0m
    #PF Exception 14 in world 2107287:vmnic7-pollW IP 0x42001d3e6f72 addr 0x1c

    Workaround: None

Storage Issues
  • If an NVMe device is hot added and hot removed in a short interval, the ESXi host might fail with a purple diagnostic screen

    If an NVMe device is hot added and hot removed in a short interval, the NVMe driver might fail to initialize the NVMe controller due to a command timeout. As a result, the driver might access memory that is already freed in a cleanup process. In the backtrace, you see a message such as WARNING: NVMEDEV: NVMEInitializeController:4045: Failed to get controller identify data, status: Timeout.
    Eventually, the ESXi host might fail with a purple diagnostic screen with an error similar to #PF Exception ... in world ...:vmkdevmgr.

    Workaround: Perform hot-plug operations on a slot only after the previous hot-plug operation on the slot is complete. For example, if you want to run a hot-remove after a hot-add operation, wait until the HBAs are created and LUNs are discovered. For the alternative scenario, hot-add after a hot-remove operation, wait until all the LUNs and HBAs are removed.

Virtual Machines Management Issues
  • UEFI HTTP booting of virtual machines on ESXi hosts of version earlier than 7.0 Update 2 fails

    UEFI HTTP booting of virtual machines is supported only on hosts of version ESXi 7.0 Update 2 and later and VMs with HW version 19 or later.

    Workaround: Use UEFI HTTP booting only in virtual machines with HW version 19 or later. Using HW version 19 ensures the virtual machines are placed only on hosts with ESXi version 7.0 Update 2 or later.

vSAN Issues
  • If you change the preferred site in a VMware vSAN Stretched Cluster, some objects might incorrectly appear as compliant

    If you change the preferred site in a stretched cluster, some objects might incorrectly appear as compliant, because their policy settings might not automatically change. For example, if you configure a virtual machine to keep data at the preferred site, when you change the preferred site, data might remain on the nonpreferred site.

    Workaround: Before you change a preferred site, in Advanced Settings, lower the ClomMaxCrawlCycleMinutes setting to 15 minutes to make sure object policies are updated. After the change, revert the ClomMaxCrawlCycleMinutes option to the earlier value.

Installation, Upgrade and Migration Issues
  • UEFI booting of ESXi hosts might stop with an error during an update to ESXi 7.0 Update 2 from an earlier version of ESXi 7.0

    If you attempt to update your environment to 7.0 Update 2 from an earlier version of ESXi 7.0 by using vSphere Lifecycle Manager patch baselines, UEFI booting of ESXi hosts might stop with an error such as:
    Loading /boot.cfg
    Failed to load crypto64.efi
    Fatal error: 15 (Not found)

    Workaround: For more information, see VMware knowledge base articles 83063 and 83107.

  • If legacy VIBs are in use on an ESXi host, vSphere Lifecycle Manager cannot extract a desired software specification to seed to a new cluster

    With vCenter Server 7.0 Update 2, you can create a new cluster by importing the desired software specification from a single reference host. However, if legacy VIBs are in use on an ESXi host, vSphere Lifecycle Manager cannot extract a reference software specification from such a host in the vCenter Server instance where you create the cluster. In the /var/log/lifecycle.log, you see messages such as:
    2020-11-11T06:54:03Z lifecycle: 1000082644: HostSeeding:499 ERROR Extract depot failed: Checksum doesn't match. Calculated 5b404e28e83b1387841bb417da93c8c796ef2497c8af0f79583fd54e789d8826, expected: 0947542e30b794c721e21fb595f1851b247711d0619c55489a6a8cae6675e796
    2020-11-11T06:54:04Z lifecycle: 1000082644: imagemanagerctl:366 ERROR Extract depot failed.
    2020-11-11T06:54:04Z lifecycle: 1000082644: imagemanagerctl:145 ERROR [VibChecksumError]

    Workaround: Follow the steps described in VMware knowledge base article 83042.

  • You see a short burst of log messages in the syslog.log after every ESXi boot

    After updating to ESXi 7.0 Update 2, you might see a short burst of log messages after every ESXi boot.
    Such logs do not indicate any issue with ESXi and you can ignore these messages. For example:
    2021-01-19T22:44:22Z watchdog-vaai-nasd: '/usr/lib/vmware/nfs/bin/vaai-nasd -f' exited after 0 seconds (quick failure 127) 1
    2021-01-19T22:44:22Z watchdog-vaai-nasd: Executing '/usr/lib/vmware/nfs/bin/vaai-nasd -f'
    2021-01-19T22:44:22.990Z aainasd[1000051135]: Log for VAAI-NAS Daemon for NFS version=1.0 build=build-00000 option=DEBUG
    2021-01-19T22:44:22.990Z vaainasd[1000051135]: DictionaryLoadFile: No entries loaded by dictionary.
    2021-01-19T22:44:22.990Z vaainasd[1000051135]: DictionaryLoad: Cannot open file "/usr/lib/vmware/config": No such file or directory.
    2021-01-19T22:44:22.990Z vaainasd[1000051135]: DictionaryLoad: Cannot open file "//.vmware/config": No such file or directory.
    2021-01-19T22:44:22.990Z vaainasd[1000051135]: DictionaryLoad: Cannot open file "//.vmware/preferences": No such file or directory.
    2021-01-19T22:44:22.990Z vaainasd[1000051135]: Switching to VMware syslog extensions
    2021-01-19T22:44:22.992Z vaainasd[1000051135]: Loading VAAI-NAS plugin(s).
    2021-01-19T22:44:22.992Z vaainasd[1000051135]: DISKLIB-PLUGIN : Not loading plugin /usr/lib/vmware/nas_plugins/lib64: Not a shared library.

    Workaround: None

  • You see warning messages for missing VIBs in vSphere Quick Boot compatibility check reports

    After you upgrade to ESXi 7.0 Update 2, if you check vSphere Quick Boot compatibility of your environment by using the /usr/lib/vmware/loadesx/bin/loadESXCheckCompat.py command, you might see some warning messages for missing VIBs in the shell. For example:
    Cannot find VIB(s) ... in the given VIB collection.
    Ignoring missing reserved VIB(s) ..., they are removed from reserved VIB IDs.

    Such warnings do not indicate a compatibility issue.

    Workaround: The missing VIB messages can be safely ignored and do not affect the reporting of vSphere Quick Boot compatibility. The final output line of the loadESXCheckCompat command unambiguously indicates if the host is compatible.

  • Auto bootstrapping a cluster that you manage with a vSphere Lifecycle Manager image fails with an error

    If you attempt auto bootstrapping a cluster that you manage with a vSphere Lifecycle Manager image to perform a stateful install and overwrite the VMFS partitions, the operation fails with an error. In the support bundle, you see messages such as:
    2021-02-11T19:37:43Z Host Profiles[265671 opID=MainThread]: ERROR: EngineModule::ApplyHostConfig. Exception: [Errno 30] Read-only file system

    Workaround: Follow vendor guidance to clean the VMFS partition in the target host and retry the operation. Alternatively, use an empty disk. For more information on the disk-partitioning utility on ESXi, see VMware knowledge base article 1036609.

  • Upgrades to ESXi 7.x from 6.5.x and 6.7.0 by using ESXCLI might fail due to a space limitation

    Upgrades to ESXi 7.x from 6.5.x and 6.7.0 by using the esxcli software profile update or esxcli software profile install ESXCLI commands might fail, because the ESXi bootbank might be less than the size of the image profile. In the ESXi Shell or the PowerCLI shell, you see an error such as:

    [InstallationError]
     The pending transaction requires 244 MB free space, however the maximum supported size is 239 MB.
     Please refer to the log file for more details.

    The issue also occurs when you attempt an ESXi host upgrade by using the ESXCLI commands esxcli software vib update or esxcli software vib install.

    Workaround: You can perform the upgrade in two steps, by using the esxcli software profile update command to update ESXi hosts to ESXi 6.7 Update 1 or later, and then update to 7.0 Update 1c. Alternatively, you can run an upgrade by using an ISO image and the vSphere Lifecycle Manager.
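
    A minimal sketch of this two-step ESXCLI approach follows; the depot paths and image profile names in angle brackets are placeholders that you must replace with the actual values for your target versions:

    # Step 1: update to an intermediate ESXi 6.7 Update 1 or later image profile, then reboot
    esxcli software profile update -d /vmfs/volumes/datastore1/<ESXi-6.7-offline-bundle>.zip -p <6.7-image-profile-name>
    # Step 2: after the reboot, update to the target ESXi 7.x image profile
    esxcli software profile update -d /vmfs/volumes/datastore1/<ESXi-7.x-offline-bundle>.zip -p <7.x-image-profile-name>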

Known Issues from Earlier Releases

To view a list of known issues from previous releases of ESXi 7.0, see the release notes for each earlier release.
