ESXi 7.0 Update 3 | 05 OCT 2021 | ISO Build 18644231

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

  • What's New
  • Earlier Releases of ESXi 7.0
  • Patches Contained in This Release
  • Product Support Notices
  • Resolved Issues
  • Known Issues
  • Known Issues from Earlier Releases

What's New

  • vSphere Memory Monitoring and Remediation, and support for snapshots of PMem VMs: vSphere Memory Monitoring and Remediation collects data and provides visibility of performance statistics to help you determine if your application workload is regressed due to Memory Mode. vSphere 7.0 Update 3 also adds support for snapshots of PMem VMs. For more information, see vSphere Memory Monitoring and Remediation.

  • Extended support for disk drive types: Starting with vSphere 7.0 Update 3, vSphere Lifecycle Manager validates the following types of disk drives and storage device configurations:
    • HDD (SAS/SATA)
    • SSD (SAS/SATA)
    • SAS/SATA disk drives behind single-disk RAID-0 logical volumes
    For more information, see Cluster-Level Hardware Compatibility Checks.

  • Use vSphere Lifecycle Manager images to manage a vSAN stretched cluster and its witness host: Starting with vSphere 7.0 Update 3, you can use vSphere Lifecycle Manager images to manage a vSAN stretched cluster and its witness host. For more information, see Using vSphere Lifecycle Manager Images to Remediate vSAN Stretched Clusters.

  • vSphere Cluster Services (vCLS) enhancements: With vSphere 7.0 Update 3, vSphere admins can configure vCLS virtual machines to run on specific datastores by configuring the vCLS VM datastore preference per cluster. Admins can also define compute policies to specify how the vSphere Distributed Resource Scheduler (DRS) should place vCLS agent virtual machines (vCLS VMs) and other groups of workload VMs. 

  • Improved interoperability between vCenter Server and ESXi versions: Starting with vSphere 7.0 Update 3, vCenter Server can manage ESXi hosts from the previous two major releases, as well as any ESXi host of version 7.0 and its update releases. For example, vCenter Server 7.0 Update 3 can manage ESXi hosts of versions 6.5, 6.7, and 7.0, all 7.0 update releases, including those later than Update 3, and a mixture of hosts across major and update versions.

  • New VMNIC tag for NVMe-over-RDMA storage traffic: ESXi 7.0 Update 3 adds a new VMNIC tag for NVMe-over-RDMA storage traffic. This VMkernel port setting enables NVMe-over-RDMA traffic to be routed over the tagged interface. You can also use the ESXCLI command esxcli network ip interface tag add -i <interface name> -t NVMeRDMA to enable the NVMeRDMA VMNIC tag (see the example after this list).

  • NVMe over TCP support: vSphere 7.0 Update 3 extends the NVMe-oF suite with the NVMe over TCP storage protocol to enable high performance and parallelism of NVMe devices over a wide deployment of TCP/IP networks.

  • Zero downtime, zero data loss for mission critical VMs in case of Machine Check Exception (MCE) hardware failure: With vSphere 7.0 Update 3, mission critical VMs protected by VMware vSphere Fault Tolerance can achieve zero downtime and zero data loss in case of Machine Check Exception (MCE) hardware failure, because VMs fall back to the secondary VM instead of failing. For more information, see How Fault Tolerance Works.
     
  • Microsecond-level time accuracy for workloads: ESXi 7.0 Update 3 adds support for hardware timestamps with the Precision Time Protocol (PTP) to enable microsecond-level time accuracy. For more information, see Use PTP for Time and Date Synchronization of a Host.
     
  • Improved ESXi host timekeeping configuration:  ESXi 7.0 Update 3 enhances the workflow and user experience for setting an ESXi host timekeeping configuration. For more information, see Editing the Time Configuration Settings of a Host.
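
For the NVMe-over-RDMA VMNIC tag above, a minimal ESXCLI sequence might look as follows. This is only a sketch: the interface name vmk2 is an example and must match the VMkernel port that you created for NVMe-over-RDMA storage traffic.

    # Tag the VMkernel interface for NVMe-over-RDMA storage traffic
    esxcli network ip interface tag add -i vmk2 -t NVMeRDMA
    # Verify the tags currently assigned to the interface
    esxcli network ip interface tag get -i vmk2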

Earlier Releases of ESXi 7.0

New features, resolved issues, and known issues of ESXi are described in the release notes for each release. Release notes for earlier releases of ESXi 7.0 are available on the VMware documentation site.

For internationalization, compatibility, and open source components, see the VMware vSphere 7.0 Release Notes.

Patches Contained in This Release

This release of ESXi 7.0 Update 3 delivers the following patches:

Build Details

Download Filename: VMware-ESXi-7.0U3-18644231-depot
Build: 18644231
Download Size: 395.7 MB
md5sum: 0a98ba035ef0958989faacba3a294b76
sha256checksum: 078cc6230d5df379b398ae41d0ef961cdf5b450e115d1ea93deb14e849141d53
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes

IMPORTANT:

  • The docURL in the ESXi base-image metadata, ESXi_7.0.3-0.0.18644231, points to the vCenter Server 7.0 Update 3 Release Notes instead of the ESXi 7.0 Update 3 Release Notes. 
  • Starting with vSphere 7.0, VMware uses components for packaging VIBs along with bulletins. The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching. 
  • When patching ESXi hosts by using VMware Update Manager from a version prior to ESXi 7.0 Update 2, it is strongly recommended to use the rollup bulletin in the patch baseline. If you cannot use the rollup bulletin, be sure to include all of the following packages in the patching baseline. If the following packages are not included in the baseline, the update operation fails (see the verification example after this list):
    • VMware-vmkusb_0.1-1vmw.701.0.0.16850804 or higher
    • VMware-vmkata_0.1-1vmw.701.0.0.16850804 or higher
    • VMware-vmkfcoe_1.0.0.2-1vmw.701.0.0.16850804 or higher
    • VMware-NVMeoF-RDMA_1.0.1.2-1vmw.701.0.0.16850804 or higher
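
As a quick check before patching, you can list the versions of these packages that are currently installed on a host. This is only a sketch; the pattern assumes the on-host VIB names vmkusb, vmkata, vmkfcoe, and nvmerdma, which might differ in your environment.

    # List the installed versions of the VIBs required in the patch baseline
    esxcli software vib list | grep -E 'vmkusb|vmkata|vmkfcoe|nvmerdma'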


Rollup Bulletin

This rollup bulletin contains the latest VIBs with all the fixes after the initial release of ESXi 7.0.

Bulletin ID: ESXi70U3-18644231
Category: Enhancement
Severity: Important

Image Profiles

VMware patch and update releases contain general and critical image profiles. The general release image profile applies to new bug fixes.

Image Profile Name
ESXi-7.0U3-18644231-standard
ESXi-7.0U3-18644231-no-tools

ESXi Image

Name and Version: ESXi_7.0.3-0.0.18644231
Release Date: 05 OCT 2021
Category: General
Detail: Bugfix image

For information about the individual components and bulletins, see the Product Patches page and the Resolved Issues section.

Patch Download and Installation

In vSphere 7.x, the Update Manager plug-in, used for administering vSphere Update Manager, is replaced with the Lifecycle Manager plug-in. Administrative operations for vSphere Update Manager are still available under the Lifecycle Manager plug-in, along with new capabilities for vSphere Lifecycle Manager.
The typical way to apply patches to ESXi 7.x hosts is by using the vSphere Lifecycle Manager. For details, see About vSphere Lifecycle Manager and vSphere Lifecycle Manager Baselines and Images.
You can also update ESXi hosts without using the Lifecycle Manager plug-in, and use an image profile instead. To do this, you must manually download the patch offline bundle ZIP file from VMware Customer Connect. Navigate to Products and Accounts > Product Patches. From the Select a Product drop-down menu, select ESXi (Embedded and Installable) and from the Select a Version drop-down menu, select 7.0. For more information, see the Upgrading Hosts by Using ESXCLI Commands and the VMware ESXi Upgrade guide.
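
If you update a host with ESXCLI and an image profile, a minimal sketch might look as follows. The datastore path is an assumption, and the offline bundle file name must match the file that you downloaded from VMware Customer Connect.

    # List the image profiles available in the downloaded offline bundle
    esxcli software sources profile list -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U3-18644231-depot.zip
    # Apply the standard image profile from the bundle, then reboot the host
    esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U3-18644231-depot.zip -p ESXi-7.0U3-18644231-standard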

Product Support Notices

  • Merging the lpfc and brcmnvmefc drivers: Starting with vSphere 7.0 Update 3, the brcmnvmefc driver is no longer available. The NVMe over Fibre Channel functionality previously delivered with the brcmnvmefc driver is now included in the lpfc driver.
     
  • The inbox i40enu network driver changes name: Starting with vSphere 7.0 Update 3, the inbox i40enu network driver for ESXi changes name back to i40en. The i40en driver was renamed to i40enu in vSphere 7.0 Update 2, but the name change impacted some upgrade paths. For example, a rollup upgrade of ESXi hosts that you manage with baselines and baseline groups from 7.0 Update 2 or 7.0 Update 2a to 7.0 Update 3 fails. In most cases, the i40enu driver upgrades to ESXi 7.0 Update 3 without any additional steps. However, if the driver upgrade fails, you cannot update ESXi hosts that you manage with baselines and baseline groups. You also cannot use host seeding or a vSphere Lifecycle Manager single image to manage the ESXi hosts. If you have already made changes related to the i40enu driver and devices in your system, before upgrading to ESXi 7.0 Update 3, you must uninstall the i40enu VIB or Component on ESXi (see the sketch after this list), or first upgrade ESXi to ESXi 7.0 Update 2c. For more information, see VMware knowledge base article 85982.
     
  • Deprecation of RDMA over Converged Ethernet (RoCE) v1: VMware intends in a future major vSphere release to discontinue support for the network protocol RoCE v1. You must migrate drivers that rely on the RoCEv1 protocol to RoCEv2. In addition, you must migrate paravirtualized remote direct memory access (PVRDMA) network adapters for virtual machines and guest operating systems to an adapter that supports RoCEv2.
     
  • Deprecation of SD and USB devices for the ESX-OSData partition: The use of SD and USB devices for storing the ESX-OSData partition, which consolidates the legacy scratch partition, locker partition for VMware Tools, and core dump destinations, is being deprecated. SD and USB devices are still supported for boot bank partitions. For warnings related to the use of SD and USB devices during ESXi 7.0 Update 3 update or installation, see VMware knowledge base article 85615. For more information, see VMware knowledge base article 85685.
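
For the i40enu driver notice above, a minimal sketch for checking and removing the driver before the upgrade might look as follows. Run it only if VMware knowledge base article 85982 applies to your environment; the exact VIB or Component name on your host might differ.

    # Check which i40en or i40enu VIBs are currently installed
    esxcli software vib list | grep i40en
    # Remove the i40enu VIB before upgrading to ESXi 7.0 Update 3, then reboot the host
    esxcli software vib remove -n i40enu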

Resolved Issues

The resolved issues are grouped as follows.

Installation, Upgrade and Migration Issues
  • The /locker partition might be corrupted when the partition is stored on a USB or SD device

    Due to the I/O sensitivity of USB and SD devices, the VMFS-L locker partition on such devices that stores VMware Tools and core dump files might get corrupted.

    This issue is resolved in this release. By default, ESXi loads the locker packages to the RAM disk during boot. 

  • ESXi hosts might lose connectivity after brcmfcoe driver upgrade on Hitachi storage arrays

    After an upgrade of the brcmfcoe driver on Hitachi storage arrays, ESXi hosts might fail to boot and lose connectivity.

    This issue is resolved in this release.

  • After upgrading to ESXi 7.0 Update 2, you see excessive storage read I/O load

    ESXi 7.0 Update 2 introduced a system statistics provider interface that reads the datastore stats of every ESXi host every 5 minutes. If a datastore is shared by multiple ESXi hosts, such frequent reads might cause read latency on the storage array and lead to excessive storage read I/O load.

    This issue is resolved in this release.

Virtual Machine Management Issues
  • Virtual machines with enabled AMD Secure Encrypted Virtualization-Encrypted State (SEV-ES) cannot create Virtual Machine Communication Interface (VMCI) sockets

    Performance and functionality of features that require VMCI might be affected on virtual machines with enabled AMD SEV-ES, because such virtual machines cannot create VMCI sockets.

    This issue is resolved in this release.

  • Virtual machines might fail when rebooting a heavily loaded guest OS

    In rare cases, when a guest OS reboot is initiated outside the guest, for example from the vSphere Client, virtual machines might fail, generating a VMX dump. The issue might occur when the guest OS is heavily loaded and, as a result, responses from the guest to VMX requests are delayed prior to the reboot. In such cases, the vmware.log file of the virtual machines includes messages such as:

    I125: Tools: Unable to send state change 3: TCLO error.
    E105: PANIC: NOT_REACHED bora/vmx/tools/toolsRunningStatus.c:953

     This issue is resolved in this release.

Networking Issues
  • RDMA traffic by using the iWARP protocol might not complete

    RDMA traffic by using the iWARP protocol on Intel x722 cards might time out and not complete.

    This issue is resolved in this release.

Miscellaneous Issues
  • Asynchronous read I/O containing a SCATTER_GATHER_ELEMENT array of more than 16 members with at least 1 member falling in the last partial block of a file might lead to ESXi host panic

    In rare cases, in an asynchronous read I/O containing a SCATTER_GATHER_ELEMENT array of more than 16 members, at least 1 member might fall in the last partial block of a file. This might corrupt the VMFS memory heap, which in turn causes ESXi hosts to fail with a purple diagnostic screen.

    This issue is resolved in this release.

Known Issues

The known issues are grouped as follows.

vSphere Cluster Services Issues
  • You see compatibility issues with new vCLS VMs deployed in a vSphere 7.0 Update 3 environment

    The default name for new vCLS VMs deployed in a vSphere 7.0 Update 3 environment uses the pattern vCLS-UUID. vCLS VMs created in earlier vCenter Server versions continue to use the pattern vCLS (n). Because the use of parentheses () is not supported by many solutions that interoperate with vSphere, you might see compatibility issues.

    Workaround: Reconfigure vCLS by using retreat mode after updating to vSphere 7.0 Update 3.

Miscellaneous Issues
  • NEW: The VMkernel might shut down virtual machines due to a vCPU timer issue

    On rare occasions, the VMkernel might consider a virtual machine unresponsive because it fails to send PCPU heartbeats properly, and shuts the VM down. In the vmkernel.log file, you see messages such as:
    2021-05-28T21:39:59.895Z cpu68:1001449770)ALERT: Heartbeat: HandleLockup:827: PCPU 8 didn't have a heartbeat for 5 seconds, timeout is 14, 1 IPIs sent; *may* be locked up.

    2021-05-28T21:39:59.895Z cpu8:1001449713)WARNING: World: vm 1001449713: PanicWork:8430: vmm3:VM_NAME:vcpu-3:Received VMkernel NMI IPI, possible CPU lockup while executing HV VT VM
    The issue is due to a rare race condition in vCPU timers. Because the race is per-vCPU, larger VMs are more exposed to the issue.

    Workaround: Disable PCPU heartbeat by using the command vsish -e set /reliability/heartbeat/status 0.

  • If you upgrade your ESXi hosts to version 7.0 Update 3, but your vCenter Server is of an earlier version, Trusted Platform Module (TPM) attestation of the ESXi hosts fails

    If you upgrade your ESXi hosts to version 7.0 Update 3, but your vCenter Server is of an earlier version, and you enable TPM, ESXi hosts fail to pass attestation. In the vSphere Client, you see the warning Host TPM attestation alarm. The Elliptic Curve Digital Signature Algorithm (ECDSA) introduced with ESXi 7.0 Update 3 causes the issue when vCenter Server is not of version 7.0 Update 3.

    Workaround: Upgrade your vCenter Server to 7.0 Update 3 or acknowledge the alarm.

  • You see warnings in the boot loader screen about TPM asset tags

    If a TPM-enabled ESXi host has no asset tag set, you might see warning messages in the boot loader screen such as:
    Failed to determine TPM asset tag size: Buffer too small
    Failed to measure asset tag into TPM: Buffer too small

    Workaround: Ignore the warnings or set an asset tag by using the command $ esxcli hardware tpm tag set -d

  • The sensord daemon fails to report ESXi host hardware status

    A logic error in the IPMI SDR validation might cause sensord to fail to identify a source for power supply information. As a result, when you run the command vsish -e get /power/hostStats, you might not see any output.

    Workaround: None

  • If an ESXi host fails with a purple diagnostic screen, the netdump service might stop working

    In rare cases, if an ESXi host fails with a purple diagnostic screen, the netdump service might fail with an error such as NetDump FAILED: Couldn't attach to dump server at IP x.x.x.x.

    Workaround: Configure the VMkernel core dump to use local storage (see the sketch after this list).

  • You see frequent VMware Fault Domain Manager (FDM) core dumps on multiple ESXi hosts

    In some environments, the number of datastores might exceed the FDM file descriptor limit. As a result, you see frequent core dumps on multiple ESXi hosts indicating FDM failure.

    Workaround: Increase the FDM file descriptor limit to 2048. You can use the setting das.config.fdm.maxFds from the vSphere HA advanced options in the vSphere Client. For more information, see Set Advanced Options.

  • Virtual machines on a vSAN cluster with enabled NSX-T and a converged vSphere Distributed Switch (CVDS) in a VLAN transport zone cannot power on after a power off

    If a secondary site is 95% disk full and VMs are powered off before simulating a secondary site failure, some of the virtual machines fail to power on during recovery. As a result, virtual machines become unresponsive. The issue occurs regardless of whether site recovery includes adding disks, ESXi hosts, or CPU capacity.

    Workaround: Select the virtual machines that do not power on and change the network to VM Network from Edit Settings on the VM context menu.

  • ESXi hosts might fail with a purple diagnostic screen with an error Assert at bora/modules/vmkernel/vmfs/fs6Journal.c:835

    In rare cases, for example when running SESparse tests, the number of locks per transaction in a VMFS datastore might exceed the limit of 50 for the J6_MAX_TXN_LOCKACTIONS parameter. As a result, ESXi hosts might fail with a purple diagnostic screen with an error Assert at bora/modules/vmkernel/vmfs/fs6Journal.c:835.

    Workaround: None

  • If you modify the netq_rss_ens parameter of the nmlx5_core driver, ESXi hosts might fail with a purple diagnostic screen

    If you try to enable the netq_rss_ens parameter when you configure an enhanced data path on the nmlx5_core driver, ESXi hosts might fail with a purple diagnostic screen. The netq_rss_ens parameter, which enables NetQ RSS, is disabled by default with a value of 0.

    Workaround: Keep the default value for the netq_rss_ens module parameter in the nmlx5_core driver. 
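
For the netdump workaround above, a minimal sketch for switching the VMkernel core dump target to a local dump file might look as follows. The datastore name is an assumption, and your host might already have a suitable dump file or dump partition configured.

    # Create a core dump file on a local datastore
    esxcli system coredump file add -d datastore1
    # Activate the best available configured dump file
    esxcli system coredump file set --smart --enable true
    # Confirm which dump file is active
    esxcli system coredump file list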

Virtual Machine Management Issues
  • Virtual machine snapshot operations fail in vSphere Virtual Volumes datastores on Purity version 5.3.10

    Virtual machine snapshot operations fail in vSphere Virtual Volumes datastores on Purity version 5.3.10 with an error such as An error occurred while saving the snapshot: The VVol target encountered a vendor specific error. The issue is specific to Purity version 5.3.10.

    Workaround: Upgrade to Purity version 6.1.7 or follow vendor recommendations. 

Installation, Upgrade and Migration Issues
  • You cannot migrate linked clones across vCenter Servers

    If you migrate a linked clone across vCenter Servers, operations such as power on and delete might fail for the source virtual machine with an Invalid virtual machine state error.

    Workaround: Keep linked clones on the same vCenter Server as the source VM. Alternatively, promote the linked clone to full clone before migration.

  • Migration across vCenter Servers of virtual machines with many virtual disks and snapshot levels to a datastore on NVMe over TCP storage might fail

    Migration across vCenter Servers of virtual machines with more than 180 virtual disks and 32 snapshot levels to a datastore on NVMe over TCP storage might fail. The ESXi host preemptively fails with an error such as The migration has exceeded the maximum switchover time of 100 second(s).

    Workaround: None

  • A virtual machine with enabled Virtual Performance Monitoring Counters (VPMC) might fail to migrate between ESXi hosts

    If you try to migrate a virtual machine with enabled VPMC by using vSphere vMotion, the operation might fail if the target host is using some of the counters to compute memory or performance statistics. The operation fails with an error such as A performance counter used by the guest is not available on the host CPU.

    Workaround: Power off the virtual machine and use cold migration. For more information, see VMware knowledge base article 81191.

  • If a live VIB install, upgrade, or remove operation immediately precedes an interactive or scripted upgrade to ESXi 7.0 Update 3 by using the installer ISO, the upgrade fails

    When a VIB install, upgrade, or remove operation immediately precedes an interactive or scripted upgrade to ESXi 7.0 Update 3 by using the installer ISO, the ConfigStore might not keep some configurations of the upgrade. As a result, ESXi hosts become inaccessible after the upgrade operation, although the upgrade seems successful. To prevent this issue, the ESXi 7.0 Update 3 installer adds a temporary check to block such scenarios. In the ESXi installer console, you see the following error message: Live VIB installation, upgrade or removal may cause subsequent ESXi upgrade to fail when using the ISO installer.

    Workaround: Use an alternative upgrade method to avoid the issue, such as using ESXCLI or the vSphere Lifecycle Manager.

Networking Issues
  • Rollback from converged vSphere Distributed Switch (VDS) to NSX-T VDS is not supported in vSphere 7.0 Update 3

    Rollback from converged VDS that supports both vSphere 7 traffic and NSX-T 3 traffic on the same VDS to one N-VDS for NSX-T traffic is not supported in vSphere 7.0 Update 3.

    Workaround: None 

  • If you do not set the nmlx5 network driver module parameter, you might lose network connectivity or ESXi hosts might fail

    If you do not set the supported_num_ports module parameter for the nmlx5_core driver on an ESXi host with multiple Mellanox ConnectX-4, ConnectX-5, and ConnectX-6 network adapters, the driver might not allocate sufficient memory for operating all the NIC ports on the host. As a result, you might experience network loss or ESXi host failure with a purple diagnostic screen, or both.

    Workaround: Set the supported_num_ports module parameter value in the nmlx5_core network driver equal to the total number of Mellanox ConnectX-4, ConnectX-5, and ConnectX-6 network adapter ports on the ESXi host, as shown in the example below.
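
A minimal sketch of setting the parameter with ESXCLI follows. The value 4 is only an example and must equal the total number of ConnectX-4, ConnectX-5, and ConnectX-6 ports on the host; a reboot is required for the module parameter to take effect.

    # Set the supported_num_ports parameter for the nmlx5_core driver (example value: 4)
    esxcli system module parameters set -m nmlx5_core -p "supported_num_ports=4"
    # Verify the configured module parameters
    esxcli system module parameters list -m nmlx5_core | grep supported_num_ports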

Known Issues from Earlier Releases

To view a list of known issues from earlier releases, see the release notes for the earlier ESXi 7.0 releases.
