ESXi 7.0 Update 3c | 27 JAN 2022 | ISO Build 19193900

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

IMPORTANT: VMware removed ESXi 7.0 Update 3, 7.0 Update 3a and 7.0 Update 3b from all sites on November 19, 2021 due to an upgrade-impacting issue. Build 19193900 for ESXi 7.0 Update 3c ISO replaces build 18644231, 18825058, and 18905247 for ESXi 7.0 Update 3, 7.0 Update 3a, and 7.0 Update 3b respectively. To make sure you run a smooth upgrade to vSphere 7.0 Update 3c, see VMware knowledge base articles 86447 and 87327.

What's New

For the new features in the rolled-back releases, see the following list:

  • vSphere Memory Monitoring and Remediation, and support for snapshots of PMem VMs: vSphere Memory Monitoring and Remediation collects data and provides visibility of performance statistics to help you determine if your application workload is regressed due to Memory Mode. vSphere 7.0 Update 3 also adds support for snapshots of PMem VMs. For more information, see vSphere Memory Monitoring and Remediation.

  • Extended support for disk drives types: Starting with vSphere 7.0 Update 3, vSphere Lifecycle Manager validates the following types of disk drives and storage device configurations:
    • HDD (SAS/SATA)
    • SSD (SAS/SATA)
    • SAS/SATA disk drives behind single-disk RAID-0 logical volumes
    For more information, see Cluster-Level Hardware Compatibility Checks.

  • Use vSphere Lifecycle Manager images to manage a vSAN stretched cluster and its witness host: Starting with vSphere 7.0 Update 3, you can use vSphere Lifecycle Manager images to manage a vSAN stretched cluster and its witness host. For more information, see Using vSphere Lifecycle Manager Images to Remediate vSAN Stretched Clusters.

  • vSphere Cluster Services (vCLS) enhancements: With vSphere 7.0 Update 3, vSphere admins can configure vCLS virtual machines to run on specific datastores by configuring the vCLS VM datastore preference per cluster. Admins can also define compute policies to specify how the vSphere Distributed Resource Scheduler (DRS) should place vCLS agent virtual machines (vCLS VMs) and other groups of workload VMs. 

  • Improved interoperability between vCenter Server and ESXi versions: Starting with vSphere 7.0 Update 3, vCenter Server can manage ESXi hosts from the previous two major releases and any ESXi host from version 7.0 and 7.0 updates. For example, vCenter Server 7.0 Update 3 can manage ESXi hosts of versions 6.5, 6.7 and 7.0, all 7.0 update releases, including later than Update 3, and a mixture of hosts between major and update versions.

  • New VMNIC tag for NVMe-over-RDMA storage traffic: ESXi 7.0 Update 3 adds a new VMNIC tag for NVMe-over-RDMA storage traffic. This VMkernel port setting enables NVMe-over-RDMA traffic to be routed over the tagged interface. You can also use the ESXCLI command esxcli network ip interface tag add -i <interface name> -t NVMeRDMA to enable the NVMeRDMA VMNIC tag, as shown in the sketch after this list.

  • NVMe over TCP support: vSphere 7.0 Update 3 extends the NVMe-oF suite with the NVMe over TCP storage protocol to enable high performance and parallelism of NVMe devices over a wide deployment of TCP/IP networks. A short verification sketch follows this list.

  • Zero downtime, zero data loss for mission critical VMs in case of Machine Check Exception (MCE) hardware failure: With vSphere 7.0 Update 3, mission critical VMs protected by VMware vSphere Fault Tolerance can achieve zero downtime, zero data loss in case of Machine Check Exception (MCE) hardware failure, because VMs fall back to the secondary VM instead of failing. For more information, see How Fault Tolerance Works.
     
  • Micro-second level time accuracy for workloads: ESXi 7.0 Update 3 adds the hardware timestamp Precision Time Protocol (PTP) to enable micro-second level time accuracy. For more information, see Use PTP for Time and Date Synchronization of a Host.
     
  • Improved ESXi host timekeeping configuration:  ESXi 7.0 Update 3 enhances the workflow and user experience for setting an ESXi host timekeeping configuration. For more information, see Editing the Time Configuration Settings of a Host.
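
A minimal command-line sketch for the two NVMe-oF items above. The VMkernel interface name vmk1 is a placeholder, and the listing commands are shown only as a verification aid under that assumption; the tag command itself is the one quoted in the NVMe-over-RDMA note.

  # Tag a VMkernel interface for NVMe-over-RDMA storage traffic (vmk1 is a placeholder)
  esxcli network ip interface tag add -i vmk1 -t NVMeRDMA
  # Confirm that the tag is applied to the interface
  esxcli network ip interface tag get -i vmk1
  # List the NVMe adapters and controllers visible to the host, for example after
  # configuring an NVMe over RDMA or NVMe over TCP software adapter
  esxcli nvme adapter list
  esxcli nvme controller list

This is not a complete NVMe-oF configuration procedure; see the vSphere storage documentation for the full setup steps.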

Earlier Releases of ESXi 7.0

New features, resolved issues, and known issues of ESXi are described in the release notes for each release. Release notes for earlier releases of ESXi 7.0 are:

For internationalization, compatibility, and open source components, see the VMware vSphere 7.0 Release Notes.

Patches Contained in This Release

This release of ESXi 7.0 Update 3c delivers the following patches:

Build Details

Download Filename: VMware-ESXi-7.0U3c-19193900-depot
Build: 19193900
Download Size: 395.8 MB
md5sum: e39a951f4e96e92eae41c94947e046ec
sha256checksum: 20cdcd6fd8f22f5f8a848b45db67316a3ee630b31a152312f4beab737f2b3cdc
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes

For a table of build numbers and versions of VMware ESXi, see VMware knowledge base article 2143832.
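
To confirm that the downloaded depot is intact, you can compare its checksums against the values above. A minimal sketch, run on the machine where the file was downloaded; the .zip extension and the working directory are assumptions:

  # Compute checksums of the offline depot and compare them with the md5sum and
  # sha256checksum values listed under Build Details
  md5sum VMware-ESXi-7.0U3c-19193900-depot.zip
  sha256sum VMware-ESXi-7.0U3c-19193900-depot.zip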

IMPORTANT:

  • Starting with vSphere 7.0, VMware uses components for packaging VIBs along with bulletins. The ESXi and esx-update bulletins depend on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching. 
  • When patching ESXi hosts by using VMware Update Manager from a version prior to ESXi 7.0 Update 2, it is strongly recommended to use the rollup bulletin in the patch baseline. If you cannot use the rollup bulletin, be sure to include all of the following packages in the patching baseline. If the following packages are not included in the baseline, the update operation fails (a quick check of the currently installed versions is sketched after this list):
    • VMware-vmkusb_0.1-1vmw.701.0.0.16850804 or higher
    • VMware-vmkata_0.1-1vmw.701.0.0.16850804 or higher
    • VMware-vmkfcoe_1.0.0.2-1vmw.701.0.0.16850804 or higher
    • VMware-NVMeoF-RDMA_1.0.1.2-1vmw.701.0.0.16850804 or higher
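
A minimal sketch for checking the versions currently installed on a host before patching; the grep pattern is an assumption meant to match the package names above, so adjust it if the VIB names differ in your environment:

  # List installed VIBs and filter for the packages that must meet the minimum versions
  esxcli software vib list | grep -iE 'vmkusb|vmkata|vmkfcoe|rdma'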

Rollup Bulletin

This rollup bulletin contains the latest VIBs with all the fixes after the initial release of ESXi 7.0.

Bulletin ID              Category    Severity
ESXi70U3c-19193900       Bugfix      Critical

Image Profiles

VMware patch and update releases contain general and critical image profiles. Apply the general release image profile to receive new bug fixes.

Image Profile Name
ESXi70U3c-19193900-standard
ESXi70U3c-19193900-no-tools

ESXi Image

Name and Version         Release Date    Category    Detail
ESXi70U3c-19193900       27 JAN 2022     Bugfix      Bugfix image

For information about the individual components and bulletins, see the Product Patches page and the Resolved Issues section.

Patch Download and Installation

In vSphere 7.x, the Update Manager plug-in, used for administering vSphere Update Manager, is replaced with the Lifecycle Manager plug-in. Administrative operations for vSphere Update Manager are still available under the Lifecycle Manager plug-in, along with new capabilities for vSphere Lifecycle Manager.
The typical way to apply patches to ESXi 7.x hosts is by using the vSphere Lifecycle Manager. For details, see About vSphere Lifecycle Manager and vSphere Lifecycle Manager Baselines and Images.
You can also update ESXi hosts without using the Lifecycle Manager plug-in, and use an image profile instead. To do this, you must manually download the patch offline bundle ZIP file after you log in to VMware Customer Connect. From the Select a Product drop-down menu, select ESXi (Embedded and Installable) and from the Select a Version drop-down menu, select 7.0. For more information, see the Upgrading Hosts by Using ESXCLI Commands and the VMware ESXi Upgrade guide.
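
The following sketch outlines the image-profile update path described above. The datastore path and the .zip extension are assumptions, and the profile name is taken from the Image Profiles section of these notes; confirm the exact name with the sources profile list command before applying it:

  # Place the host in maintenance mode before patching
  esxcli system maintenanceMode set --enable true
  # List the image profiles available in the uploaded offline depot
  esxcli software sources profile list -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U3c-19193900-depot.zip
  # Apply the standard image profile from the depot
  esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U3c-19193900-depot.zip -p ESXi70U3c-19193900-standard
  # Reboot the host to complete the update
  reboot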

Product Support Notices

  • Deprecation of localos accounts: Support for use of localos accounts as an identity source is deprecated. VMware plans to discontinue support for use of the local operating system as an identity source. This functionality will be removed in a future release of vSphere.

  • The cURL version in ESXi650-202110001 and ESXi670-202111001 is later than the cURL version in ESXi 7.0 Update 3c: The cURL version in ESXi 7.0 Update 3c is 7.77.0, while ESXi650-202110001 and ESXi670-202111001 have the newer fixed version 7.78.0. As a result, if you upgrade from ESXi650-202110001 or ESXi670-202111001 to ESXi 7.0 Update 3c, cURL 7.77.0 might expose your system to the following vulnerabilities:
    CVE-2021-22926: CVSS 7.5
    CVE-2021-22925: CVSS 5.3
    CVE-2021-22924: CVSS 3.7
    CVE-2021-22923: CVSS 5.3
    CVE-2021-22922: CVSS 6.5
    cURL version 7.78.0 comes with a future ESXi 7.x release.
     
  • Merging the lpfc and brcmnvmefc drivers: Starting with vSphere 7.0 Update 3c, the brcmnvmefc driver is no longer available. The NVMe over Fibre Channel functionality previously delivered with the brcmnvmefc driver is now included in the lpfc driver.
     
  • Deprecation of RDMA over Converged Ethernet (RoCE) v1: VMware intends in a future major vSphere release to discontinue support for the network protocol RoCE v1. You must migrate drivers that rely on the RoCEv1 protocol to RoCEv2. In addition, you must migrate paravirtualized remote direct memory access (PVRDMA) network adapters for virtual machines and guest operating systems to an adapter that supports RoCEv2.
     
  • Deprecation of SD and USB devices for the ESX-OSData partition: The use of SD and USB devices for storing the ESX-OSData partition, which consolidates the legacy scratch partition, locker partition for VMware Tools, and core dump destinations, is being deprecated. SD and USB devices are supported for boot bank partitions. For warnings related to the use of SD and USB devices during ESXi 7.0 Update 3c update or installation, see VMware knowledge base article 85615. For more information, see VMware knowledge base article 85685.

Resolved Issues

The resolved issues are grouped as follows.

Miscellaneous Issues
  • NEW: In the vSphere Client, you might see the Host connection and power state alarm on host xxx change from green to red

    Due to a rare issue with handling Asynchronous Input/Output (AIO) calls, hostd and vpxa services on an ESXi host might fail and trigger alarms in the vSphere Client. In the backtrace, you see errors such as:
    #0 0x0000000bd09dcbe5 in __GI_raise (sig=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
    #1 0x0000000bd09de05b in __GI_abort () at abort.c:90
    #2 0x0000000bc7d00b65 in Vmacore::System::SignalTerminateHandler (info=, ctx=) at bora/vim/lib/vmacore/posix/defSigHandlers.cpp:62
    #3 <signal handler called>
    #4 NfcAioProcessCloseSessionMsg (closeMsg=0xbd9280420, session=0xbde2c4510) at bora/lib/nfclib/nfcAioServer.c:935
    #5 NfcAioProcessMsg (session=session@entry=0xbde2c4510, aioMsg=aioMsg@entry=0xbd92804b0) at bora/lib/nfclib/nfcAioServer.c:4206
    #6 0x0000000bd002cc8b in NfcAioGetAndProcessMsg (session=session@entry=0xbde2c4510) at bora/lib/nfclib/nfcAioServer.c:4324
    #7 0x0000000bd002d5bd in NfcAioServerProcessMain (session=session@entry=0xbde2c4510, netCallback=netCallback@entry=0 '\000') at bora/lib/nfclib/nfcAioServer.c:4805
    #8 0x0000000bd002ea38 in NfcAioServerProcessClientMsg (session=session@entry=0xbde2c4510, done=done@entry=0xbd92806af "") at bora/lib/nfclib/nfcAioServer.c:5166
       

    This issue is resolved in this release. The fix makes sure the AioSession object works as expected. 

  • A very rare issue with NVIDIA vGPU-powered virtual machines might cause ESXi hosts to fail with a purple diagnostic screen

    In very rare conditions, ESXi hosts with NVIDIA vGPU-powered virtual machines might intermittently fail with a purple diagnostic screen with a kernel panic error. The issue might affect multiple ESXi hosts, but not at the same time. In the backtrace, you see kernel reports about heartbeat timeouts against CPU for x seconds, and the stack trace refers to a P2M cache.

    This issue is resolved in this release.

  • ESXi hosts with virtual machines with Latency Sensitivity enabled might randomly become unresponsive due to CPU starvation  

    When you enable Latency Sensitivity on virtual machines, some threads of the Likewise Service Manager (lwsmd), which sets CPU affinity explicitly, might compete for CPU resources on such virtual machines. As a result, the ESXi host and the hostd service might become unresponsive.

    This issue is resolved in this release. The fix makes sure lwsmd does not set CPU affinity explicitly. 

  • In very rare cases, the virtual NVME adapter (VNVME) retry logic in ESXi 7.0 Update 3 might potentially cause silent data corruption

    The VNVME retry logic in ESXi 7.0 Update 3 has an issue that might potentially cause silent data corruption. Retries occur rarely and they can potentially, though not always, cause data errors. The issue affects only ESXi 7.0 Update 3.

    This issue is resolved in this release.

  • ESXi hosts might fail with a purple diagnostic screen during shutdown due to stale metadata

    In rare cases, when you delete a large component in an ESXi host, followed by a reboot, the reboot might start before all metadata of the component gets deleted. The stale metadata might cause the ESXi host to fail with a purple diagnostic screen. 

    This issue is resolved in this release. The fix makes sure no pending metadata remains before a reboot of ESXi hosts.

  • Virtual desktop infrastructure (VDI) might become unresponsive due to a race condition in the VMKAPI driver

    Event delivery to applications might delay indefinitely due to a race condition in the VMKAPI driver. As a result, the virtual desktop infrastructure in some environments, such as systems using NVIDIA graphic cards, might become unresponsive or lose connection to the VDI client.

    This issue is resolved in this release.

  • ESXi hosts might fail with a purple diagnostic screen due to issues with ACPI Component Architecture (ACPICA) semaphores

    Several issues in the implementation of ACPICA semaphores in ESXi 7.0 Update 3 and earlier can result in VMKernel panics, typically during boot. An issue in the semaphore implementation can cause starvation, and on several call paths the VMKernel might improperly try to acquire an ACPICA semaphore or to sleep within ACPICA while holding a spinlock. Whether these issues cause problems on a specific machine depends on details of the ACPI firmware of the machine.

    These issues are resolved in this release. The fix involves a rewrite of the ACPICA semaphores in ESXi, and correction of the code paths that try to enter ACPICA while holding a spinlock.

  • ESXi hosts might fail with a purple diagnostic screen when I/O operations run on a software iSCSI adapter

    I/O operations on a software iSCSI adapter might cause a rare race condition inside the iscsi_vmk driver. As a result, ESXi hosts might intermittently fail with a purple diagnostic screen.

    This issue is resolved in this release.

Networking Issues
  • If you use a vSphere Distributed Switch (VDS) of version earlier than 6.6 and change the LAG hash algorithm, ESXi hosts might fail with a purple diagnostic screen

    If you use a VDS of version earlier than 6.6 on a vSphere 7.0 Update 1 or later system, and you change the LAG hash algorithm, for example from L3 to L2 hashes, ESXi hosts might fail with a purple diagnostic screen.

    This issue is resolved in this release.

  • You see packet drops for virtual machines with VMware Network Extensibility (NetX) redirection enabled

    In vCenter Server advanced performance charts, you see an increasing packet drop count for all virtual machines that have NetX redirection enabled. However, if you disable NetX redirection, the count becomes 0.

    This issue is resolved in this release.

  • An ESXi host might fail with a purple diagnostic screen during booting due to incorrect CQ to EQ mapping in an Emulex FC HBA

    In rare cases, when the total number of I/O channels of an Emulex FC HBA is not an exact multiple of the number of event queues (EQ), incorrect mapping of completion queues (CQ) might cause the boot of an ESXi host to fail with a purple diagnostic screen. In the backtrace, you can see an error in the lpfc_cq_create() method.

    This issue is resolved in this release. The fix ensures correct mapping of CQs to EQs.

  • ESXi hosts might fail with a purple diagnostic screen due to memory allocation issue in the UNIX domain sockets

    During internal communication between UNIX domain sockets, a heap allocation might occur instead of cleaning ancillary data such as file descriptors. As a result, in some cases, the ESXi host might report an out of memory condition and fail with a purple diagnostic screen with #PF Exception 14 and errors similar to UserDuct_ReadAndRecvMsg().

    This issue is resolved in this release. The fix cleans ancillary data to avoid buffer memory allocations.

  • NTP optional configurations do not persist on ESXi host reboot

    When you set up optional configurations for NTP by using ESXCLI commands, the settings might not persist after the ESXi host reboots.

    This issue is resolved in this release. The fix makes sure that optional configurations are restored into the local cache from ConfigStore during ESXi host bootup. A verification sketch follows at the end of this Networking Issues section.

  • When you change the LACP hashing algorithm in systems with vSphere Distributed Switch of version 6.5.0, multiple ESXi hosts might fail with a purple diagnostic screen

    In systems with vSphere Distributed Switch of version 6.5.0 and ESXi hosts of version 7.0 or later, changing the LACP hashing algorithm might cause an unsupported LACP event error due to a temporary string array used to save the event type name. As a result, multiple ESXi hosts might fail with a purple diagnostic screen.

    This issue is resolved in this release. To avoid the issue, in vCenter Server systems of version 7.0 and later, make sure that you use a vSphere Distributed Switch of a version later than 6.5.0.
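
  Related to the NTP persistence fix above, the following sketch shows one way to set an NTP configuration and re-check it after a reboot. The server name is a placeholder and the flag names are assumptions based on the standard esxcli system ntp namespace; check esxcli system ntp set --help on the host if they differ:

    # Configure and enable NTP (time.example.org is a placeholder server)
    esxcli system ntp set --server time.example.org --enabled true
    # After the host reboots, confirm that the configuration was restored
    esxcli system ntp get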

Installation, Upgrade and Migration Issues
  • Remediation of clusters that you manage with vSphere Lifecycle Manager baselines might take a long time

    Remediation of clusters that you manage with vSphere Lifecycle Manager baselines might take a long time after updates from ESXi 7.0 Update 2d and earlier to a version later than ESXi 7.0 Update 2d.

    This issue is resolved in this release.

  • After updating to ESXi 7.0 Update 3, virtual machines with physical RDM disks fail to migrate to destination ESXi hosts or to power on

    In certain cases, for example virtual machines with RDM devices running on servers with SNMP, a race condition between device open requests might lead to failing vSphere vMotion operations.

    This issue is resolved in this release. The fix makes sure that device open requests are sequenced to avoid race conditions. For more information, see VMware knowledge base article 86158.

  • After upgrading to ESXi 7.0 Update 2d and later, you see an NTP time sync error

    In some environments, after upgrading to ESXi 7.0 Update 2d and later, in the vSphere Client you might see the error Host has lost time synchronization. However, the alarm might not indicate an actual issue.

    This issue is resolved in this release. The fix replaces the error message with a log function for backtracing, which prevents false alarms.

Security Issues
  • Update to OpenSSL

    The OpenSSL package is updated to version openssl-1.0.2zb.

  • Update to the Python package

    The Python package is updated to address CVE-2021-29921.

  • You can connect to port 9080 by using restricted DES/3DES ciphers

    With the OPENSSL command openssl s_client -cipher <CIPHER> -connect localhost:9080 you can connect to port 9080 by using restricted DES/3DES ciphers.

    This issue is resolved in this release. You cannot connect to port 9080 by using the following ciphers: DES-CBC3-SHA, EDH-RSA-DES-CBC3-SHA, ECDHE-RSA-DES-CBC3-SHA, and AECDH-DES-CBC3-SHA. A quick re-check sketch is included at the end of this section.

  • The following VMware Tools ISO images are bundled with ESXi 7.0 Update 3c:

    • windows.iso: VMware Tools 11.3.5 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.
    • linux.iso: VMware Tools 10.3.23 ISO image for Linux OS with glibc 2.11 or later.

    The following VMware Tools ISO images are available for download:

    • VMware Tools 11.0.6:
      • windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).
         
    • VMware Tools 10.0.12:
      • winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003.
      • linuxPreGLibc25.iso: supports Linux guest operating systems earlier than Red Hat Enterprise Linux (RHEL) 5, SUSE Linux Enterprise Server (SLES) 11, Ubuntu 7.04, and other distributions with glibc version earlier than 2.5.
         
    • solaris.iso: VMware Tools image 10.3.10 for Solaris.
    • darwin.iso: Supports Mac OS X versions 10.11 and later.

    Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
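
  To re-check the port 9080 cipher restriction described earlier in this section, a minimal sketch run from the ESXi Shell; DES-CBC3-SHA stands in for any of the listed ciphers:

    # Attempt a TLS handshake on port 9080 with a restricted 3DES cipher;
    # after this release the handshake is expected to fail
    openssl s_client -cipher DES-CBC3-SHA -connect localhost:9080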

vSphere Client Issues
  • Virtual machines appear as inaccessible in the vSphere Client and you might see some downtime for applications

    In rare cases, hardware issues might cause an SQLite database corruption that makes multiple VMs inaccessible and leads to some downtime for applications.

    This issue is resolved in this release.

Storage Issues
  • Virtual machine operations fail with an error for insufficient disk space on datastore

    A new datastore normally has a high number of large file block (LFB) resources and a smaller number of small file block (SFB) resources. For workflows that consume SFBs, such as virtual machine operations, LFBs convert to SFBs. However, due to a delay in updating the conversion status, newly converted SFBs might not be recognized as available for allocation. As a result, you see an error such as Insufficient disk space on datastore when you try to power on, clone, or migrate a virtual machine.

    This issue is resolved in this release.

  • vSphere Virtual Volume snapshot operations might fail on the source volume or the snapshot volume on Pure storage

    Due to an issue that allows the duplication of the unique ID of vSphere Virtual Volumes, virtual machine snapshot operations might fail, or the source volume might get deleted. The issue is specific to Pure storage and affects Purity release lines 5.3.13 and earlier, 6.0.5 and earlier, and 6.1.1 and earlier.

    This issue is resolved in this release.

vSAN Issues
  • You might see vSAN health errors for cluster partition when data-in-transit encryption is enabled

    In the vSphere Client, you might see vSAN health errors such as vSAN cluster partition or vSAN object health when data-in-transit encryption is enabled. The issue occurs because when a rekey operation starts in a vSAN cluster, a temporary resource issue might cause key exchange between peers to fail.

    This issue is resolved in this release.

Virtual Machine Management Issues
  • A race condition between live migration operations might cause the ESXi host to fail with a purple diagnostic screen

    In environments with VMs of 575 GB or more reserved memory that do not use Encrypted vSphere vMotion, a live migration operation might race with another live migration and cause the ESXi host to fail with a purple diagnostic screen.

    This issue is resolved in this release. However, in very rare cases, the migration operation might still fail, even though the root cause of the purple diagnostic screen condition is fixed. In such cases, retry the migration when no other live migration is in progress on the source host, or enable Encrypted vSphere vMotion on the virtual machines.

Resolved Issues from Previous Releases

    Networking Issues

    • RDMA traffic by using the iWARP protocol might not complete

      RDMA traffic by using the iWARP protocol on Intel x722 cards might time out and not complete.

      This issue is resolved in this release.

    Installation, Upgrade and Migration Issues

    • The /locker partition might be corrupted when the partition is stored on a USB or SD device

      Due to the I/O sensitivity of USB and SD devices, the VMFS-L locker partition on such devices that stores VMware Tools and core dump files might get corrupted.

      This issue is resolved in this release. By default, ESXi loads the locker packages to the RAM disk during boot. 

    • ESXi hosts might lose connectivity after brcmfcoe driver upgrade on Hitachi storage arrays

      After an upgrade of the brcmfcoe driver on Hitachi storage arrays, ESXi hosts might fail to boot and lose connectivity.

      This issue is resolved in this release.

    • After upgrading to ESXi 7.0 Update 2, you see excessive storage read I/O load

      ESXi 7.0 Update 2 introduced a system statistics provider interface that requires reading the datastore stats for every ESXi host every 5 minutes. If a datastore is shared by multiple ESXi hosts, such frequent reads might cause a read latency on the storage array and lead to excessive storage read I/O load.

      This issue is resolved in this release.

    Virtual Machine Management Issues

    • Virtual machines with AMD Secure Encrypted Virtualization-Encrypted State (SEV-ES) enabled cannot create Virtual Machine Communication Interface (VMCI) sockets

      Performance and functionality of features that require VMCI might be affected on virtual machines with AMD SEV-ES enabled, because such virtual machines cannot create VMCI sockets.

      This issue is resolved in this release.

    • Virtual machines might fail when rebooting a heavily loaded guest OS

      In rare cases, when a guest OS reboot is initiated outside the guest, for example from the vSphere Client, virtual machines might fail, generating a VMX dump. The issue might occur when the guest OS is heavily loaded. As a result, responses from the guest to VMX requests are delayed prior to the reboot. In such cases, the vmware.log file of the virtual machines includes messages such as: I125: Tools: Unable to send state change 3: TCLO error. E105: PANIC: NOT_REACHED bora/vmx/tools/toolsRunningStatus.c:953.

      This issue is resolved in this release.

    Miscellaneous Issues

    • Asynchronous read I/O containing a SCATTER_GATHER_ELEMENT array of more than 16 members with at least 1 member falling in the last partial block of a file might lead to ESXi host panic

      In rare cases, in an asynchronous read I/O containing a SCATTER_GATHER_ELEMENT array of more than 16 members, at least 1 member might fall in the last partial block of a file. This might corrupt the VMFS memory heap, which in turn causes ESXi hosts to fail with a purple diagnostic screen.

      This issue is resolved in this release.

    • If a guest OS issues UNMAP requests with large size on thin provisioned VMDKs, ESXi hosts might fail with a purple diagnostic screen

      ESXi 7.0 Update 3 introduced a uniform UNMAP granularity for VMFS and SEsparse snapshots, and set the maximum UNMAP granularity reported by VMFS to 2 GB. However, in certain environments, when the guest OS makes a trim or unmap request of 2 GB, such a request might require the VMFS metadata transaction to do lock acquisition of more than 50 resource clusters. VMFS might not handle such requests correctly. As a result, an ESXi host might fail with a purple diagnostic screen. A VMFS metadata transaction requiring lock actions on more than 50 resource clusters is rare and can happen only on aged datastores. The issue impacts only thin-provisioned VMDKs. Thick and eager zeroed thick VMDKs are not impacted.
      Along with the purple diagnostic screen, in the /var/run/log/vmkernel file you see errors such as:
      2021-10-20T03:11:41.679Z cpu0:2352732)@BlueScreen: NMI IPI: Panic requested by another PCPU. RIPOFF(base):RBP:CS [0x1404f8(0x420004800000):0x12b8:0xf48] (Src 0x1, CPU0)
      2021-10-20T03:11:41.689Z cpu0:2352732)Code start: 0x420004800000 VMK uptime: 11:07:27:23.196
      2021-10-20T03:11:41.697Z cpu0:2352732)Saved backtrace from: pcpu 0 Heartbeat NMI
      2021-10-20T03:11:41.715Z cpu0:2352732)0x45394629b8b8:[0x4200049404f7]HeapVSIAddChunkInfo@vmkernel#nover+0x1b0 stack: 0x420005bd611e

      This issue is resolved in this release.

    • The hostd service might fail due to a time service event monitoring issue

      An issue in the time service event monitoring service, which is enabled by default, might cause the hostd service to fail. In the vobd.log file, you see errors such as:
      2021-10-21T18:04:28.251Z: [UserWorldCorrelator] 304957116us: [esx.problem.hostd.core.dumped] /bin/hostd crashed (1 time(s) so far) and a core file may have been created at /var/core/hostd-zdump.000. This may have caused connections to the host to be dropped.
      2021-10-21T18:04:28.251Z: An event (esx.problem.hostd.core.dumped) could not be sent immediately to hostd; queueing for retry. 2021-10-21T18:04:32.298Z: [UserWorldCorrelator] 309002531us: [vob.uw.core.dumped] /bin/hostd(2103800) /var/core/hostd-zdump.001
      2021-10-21T18:04:36.351Z: [UserWorldCorrelator] 313055552us: [vob.uw.core.dumped] /bin/hostd(2103967) /var/core/hostd-zdump.002
      .

      This issue is resolved in this release. 

Known Issues

The known issues are grouped as follows.

    Networking Issues
    • Stale NSX for vSphere properties in vSphere Distributed Switch 7.0 (VDS) or ESXi 7.x hosts might fail host updates

      If you had NSX for vSphere with VXLAN enabled on a vSphere Distributed Switch (VDS) of version 7.0 and migrated to NSX-T Data Center by using NSX V2T migration, stale NSX for vSphere properties in the VDS or some hosts might prevent ESXi 7.x host updates. The host update fails with a platform configuration error.

      Workaround: Upload the CleanNSXV.py script to the /tmp directory on vCenter Server. Log in to the appliance shell as a user with super administrative privileges (for example, root) and follow these steps:

      1. Run CleanNSXV.py by using the command PYTHONPATH=$VMWARE_PYTHON_PATH python /tmp/CleanNSXV.py --user <vc_admin_user> --password <passwd>. The <vc_admin_user> parameter is a vCenter Server user with super administrative privileges and the <passwd> parameter is the user password. 
        For example: 
        PYTHONPATH=$VMWARE_PYTHON_PATH python /tmp/CleanNSXV.py --user 'administrator@vsphere.local' --password 'Admin123'
      2. Verify if the following NSX for vSphere properties, com.vmware.netoverlay.layer0 and com.vmware.net.vxlan.udpport, are removed from the ESXi hosts:
        1. Connect to a random ESXi host by using an SSH client.
        2. Run the command net-dvs -l | grep "com.vmware.netoverlay.layer0\|com.vmware.net.vxlan.udpport".
          If you see no output, then the stale properties are removed.

      To download the CleanNSXV.py script and for more details, see VMware knowledge base article 87423.

    Security Issues
    • The cURL version in ESXi650-202110001 and ESXi670-202111001 is later than the cURL version in ESXi 7.0 Update 3c

      The cURL version in ESXi 7.0 Update 3c is 7.77.0, while ESXi650-202110001 and ESXi670-202111001 have the newer fixed version 7.78.0. As a result, if you upgrade from ESXi650-202110001 or ESXi670-202111001 to ESXi 7.0 Update 3c, cURL 7.77.0 might expose your system to the following vulnerabilities:
      CVE-2021-22926: CVSS 7.5
      CVE-2021-22925: CVSS 5.3
      CVE-2021-22924: CVSS 3.7
      CVE-2021-22923: CVSS 5.3
      CVE-2021-22922: CVSS 6.5

      Workaround: None. cURL version 7.78.0 comes with a future ESXi 7.x release.

    Known Issues from Earlier Releases

    To view a list of previous known issues, click here.
