ESXi 7.0 Update 1c | 17 DEC 2020 | ISO Build 17325551
Check for additions and updates to these release notes.
What's in the Release Notes
The release notes cover the following topics:
- What's New
- Earlier Releases of ESXi 7.0
- Patches Contained in this Release
- Product Support Notices
- Resolved Issues
- Known Issues
What's New
- ESXi 7.0 Update 1c supports vSphere Quick Boot on the following servers:
- Cisco Systems Inc:
- HX240C-M5SD
- HXAF240C-M5SD
- UCSC-C240-M5SD
- Dell Inc:
- PowerEdge C6420
- PowerEdge C6525
- PowerEdge FC640
- PowerEdge M640
- PowerEdge MX740c
- PowerEdge MX840c
- PowerEdge R540
- PowerEdge R6515
- PowerEdge R6525
- PowerEdge R7515
- PowerEdge R7525
- PowerEdge R840
- PowerEdge R930
- PowerEdge R940
- PowerEdge R940xa
- HPE:
- ProLiant DL385 Gen10
- ESXi 7.0 Update 1c adds five physical NIC statistics, droppedRx, droppedTx, errorsRx, RxCRCErrors, and errorsTx, to the hostd.log file at /var/run/log/hostd.log to enable you to detect uncorrected networking errors and take the necessary corrective action.
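A minimal sketch of one way to check whether these counters are being reported for a host; the log path and counter names come from this release note, while the use of grep is only an assumed convenience:
```
# Look for the new physical NIC statistics in the hostd log
grep -E 'droppedRx|droppedTx|errorsRx|RxCRCErrors|errorsTx' /var/run/log/hostd.log
```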
- With ESXi 7.0 Update 1c, you can use the --remote-host-max-msg-len parameter to set the maximum length of syslog messages, up to 16 KiB, before they must be split. By default, the ESXi syslog daemon (vmsyslogd) strictly adheres to the maximum message length of 1 KiB set by RFC 3164, and longer messages are split into multiple parts. Set the maximum message length to no more than the smallest length supported by any of the syslog receivers or relays involved in the syslog infrastructure.
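A minimal sketch of setting the parameter, assuming it is exposed through the esxcli syslog configuration namespace on ESXi 7.0 Update 1c (verify the exact option with esxcli system syslog config set --help on your host); the value 4096 is only an example:
```
# Allow remote syslog messages of up to 4096 bytes before they are split (example value)
esxcli system syslog config set --remote-host-max-msg-len=4096
# Reload the syslog daemon so the new setting takes effect
esxcli system syslog reload
```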
- With ESXi 7.0 Update 1c, you can use the installer boot option systemMediaSize to limit the size of system storage partitions on the boot media. If your system has a small footprint that does not require the maximum 138 GB system-storage size, you can limit it to the minimum of 33 GB. The systemMediaSize parameter accepts the following values:
- min (33 GB, for single disk or embedded servers)
- small (69 GB, for servers with at least 512 GB RAM)
- default (138 GB)
- max (consume all available space, for multi-terabyte servers)
The selected value must fit the purpose of your system. For example, a system with 1 TB of memory must use a minimum of 69 GB for system storage. To set the boot option at install time, for example systemMediaSize=small, refer to Enter Boot Options to Start an Installation or Upgrade Script and see the sketch after this list. For more information, see VMware knowledge base article 81166.
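A minimal sketch of setting the option during an interactive installation; the surrounding boot command varies by media, so the placeholder below stands for whatever options are already displayed:
```
# At the ESXi installer boot screen, press Shift+O to edit the boot options,
# then append the parameter to the line that is already shown:
<existing boot options> systemMediaSize=small
```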
Earlier Releases of ESXi 7.0
Features, resolved and known issues of ESXi are described in the release notes for each release. Release notes for earlier releases of ESXi 7.0 are:
- VMware ESXi 7.0, Patch Release ESXi 7.0 Update 1b
- VMware ESXi 7.0, Patch Release ESXi 7.0 Update 1a
- VMware ESXi 7.0, Patch Release ESXi 7.0 Update 1
- VMware ESXi 7.0, Patch Release ESXi 7.0b
For internationalization, compatibility, and open source components, see the VMware vSphere 7.0 Release Notes.
Patches Contained in This Release
This release of ESXi 7.0 Update 1c delivers the following patches:
Build Details
Download Filename: | VMware-ESXi-7.0U1c-17325551-depot.zip |
Build: | 17325551 |
Download Size: | 523.2 MB |
md5sum: | d1410e6c741ada23c3570e07b94bd8c7 |
sha1checksum: | a70defe8353b39f74339b158697ed1a12df6c55d |
Host Reboot Required: | Yes |
Virtual Machine Migration or Shutdown Required: | Yes |
IMPORTANT:
- Starting with vSphere 7.0, VMware uses components for packaging VIBs along with bulletins. The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.
- When patching ESXi hosts by using the vSphere Lifecycle Manager from a version earlier than ESXi 7.0 Update 1, it is strongly recommended to use the rollup bulletin in the patch baseline. If you cannot use the rollup bulletin, make sure to include all of the following packages in the patching baseline. If the following packages are not included in the baseline, the update operation fails:
- VMware-vmkusb_0.1-1vmw.701.0.0.16850804 or higher
- VMware-vmkata_0.1-1vmw.701.0.0.16850804 or higher
- VMware-vmkfcoe_1.0.0.2-1vmw.701.0.0.16850804 or higher
- VMware-NVMeoF-RDMA_1.0.1.2-1vmw.701.0.0.16850804 or higher
Components
Component | Bulletin ID | Category | Severity |
ESXi | ESXi_7.0.1-0.25.17325551 | Bugfix | Critical |
ESXi Install/Upgrade Component | esx-update_7.0.1-0.25.17325551 | Bugfix | Critical |
VMware ATA Storage Controller Driver | VMware-vmkata_0.1-1vmw.701.0.25.17325551 | Bugfix | Moderate |
HPE Smart Array Controller Driver | HPE-nhpsa_70.0051.0.100-2vmw.701.0.25.17325551 | Bugfix | Moderate |
VMware USB Driver | VMware-vmkusb_0.1-1vmw.701.0.25.17325551 | Bugfix | Moderate |
Microsemi Storage Solution Smart Array Storage Controller Driver | Microchip-smartpqi_70.4000.0.100-4vmw.701.0.25.17325551 | Bugfix | Moderate |
ESXi | ESXi_7.0.1-0.20.17325020 | Security | Critical |
ESXi Install/Upgrade Component | esx-update_7.0.1-0.20.17325020 | Security | Critical |
VMware ATA Storage Controller Driver | VMware-vmkata_0.1-1vmw.701.0.20.17325020 | Security | Important |
VMware NVMe over Fabric - RDMA Driver | VMware-NVMeoF-RDMA_1.0.1.2-1vmw.701.0.20.17325020 | Security | Important |
VMware native Software FCoE Driver | VMware-vmkfcoe_1.0.0.2-1vmw.701.0.20.17325020 | Security | Important |
VMware USB Driver | VMware-vmkusb_0.1-1vmw.701.0.20.17325020 | Security | Important |
Rollup Bulletin
This rollup bulletin contains the latest VIBs with all the fixes after the initial release of ESXi 7.0.
Bulletin ID | Category | Severity |
ESXi70U1c-17325551 | Bugfix | Critical |
Image Profiles
VMware patch and update releases contain general and critical image profiles. The general release image profile applies to new bug fixes.
Image Profile Name |
ESXi-7.0U1c-17325551-standard |
ESXi-7.0U1c-17325551-no-tools |
ESXi-7.0U1sc-17325020-standard |
ESXi-7.0U1sc-17325020-no-tools |
ESXi Image
Name and Version | Release Date | Category | Detail |
---|---|---|---|
ESXi70U1c-17325551 | 12/17/2020 | Enhancement | Security and Bugfix image |
ESXi70U1sc-17325020 | 12/17/2020 | Enhancement | Security only image |
For information about the individual components and bulletins, see the Product Patches page and the Resolved Issues section.
Patch Download and Installation
In vSphere 7.x, the Update Manager plug-in, used for administering vSphere Update Manager, is replaced with the Lifecycle Manager plug-in. Administrative operations for vSphere Update Manager are still available under the Lifecycle Manager plug-in, along with new capabilities for vSphere Lifecycle Manager.
The typical way to apply patches to ESXi 7.x hosts is by using the vSphere Lifecycle Manager. For details, see About vSphere Lifecycle Manager and vSphere Lifecycle Manager Baselines and Images.
You can also update ESXi hosts without using the Lifecycle Manager plug-in, and use an image profile instead. To do this, you must manually download the patch offline bundle ZIP file from the VMware download page or the Product Patches page and use the esxcli software profile update command.
For more information, see the Upgrading Hosts by Using ESXCLI Commands and the VMware ESXi Upgrade guide.
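For reference, a minimal sketch of the image-profile update flow from the ESXi Shell; the depot file name and profile name are taken from this release, but the datastore path shown is only an assumed example:
```
# Put the host in maintenance mode before patching
esxcli system maintenanceMode set --enable true
# Apply the image profile from the offline bundle copied to a datastore on the host
esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U1c-17325551-depot.zip -p ESXi-7.0U1c-17325551-standard
# Reboot the host to complete the update
reboot
```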
Product Support Notices
VMware Tools 9.10.x and 10.0.x have reached End of General Support. For more details, refer to VMware Tools listed under the VMware Product Lifecycle Matrix.
Resolved Issues
The resolved issues are grouped as follows.
- ESXi_7.0.1-0.25.17325551
- esx-update_7.0.1-0.25.17325551
- VMware-vmkata_0.1-1vmw.701.0.25.17325551
- HPE-nhpsa_70.0051.0.100-2vmw.701.0.25.17325551
- VMware-vmkusb_0.1-1vmw.701.0.25.17325551
- Microchip-smartpqi_70.4000.0.100-4vmw.701.0.25.17325551
- ESXi_7.0.1-0.20.17325020
- esx-update_7.0.1-0.20.17325020
- VMware-vmkata_0.1-1vmw.701.0.20.17325020
- VMware-NVMeoF-RDMA_1.0.1.2-1vmw.701.0.20.17325020
- VMware-vmkfcoe_1.0.0.2-1vmw.701.0.20.17325020
- VMware-vmkusb_0.1-1vmw.701.0.20.17325020
- ESXi-7.0U1c-17325551-standard
- ESXi-7.0U1c-17325551-no-tools
- ESXi-7.0U1sc-17325020-standard
- ESXi-7.0U1sc-17325020-no-tools
- ESXi Image - ESXi70U1c-17325551
- ESXi Image - ESXi70U1sc-17325020
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2656093, 2652863, 2653874, 2643508, 2644189, 2652344, 2661064, 2667291, 2662512, 2644221, 2662606, 2661153, 2655181, 2657411, 2675442, 2657649, 2662558, 2661818, 2664084, 2625155, 2658647, 2659015, 2654686, 2664278, 2676632, 2647557, 2647557, 2628899, 2663717, 2633194, 2661808, 2670891, 2665031, 2644003, 2664045 |
CVE numbers | N/A |
The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.
Updates the esx-dvfilter-generic-fastpath, vsanhealth, vdfs, vsan, esx-base, crx, native-misc-drivers, esx-xserver, gc, and cpu-microcode VIBs to resolve the following issues:
- PR 2656093: You might see loss of network connectivity as a result of a physical switch reboot
The parameter for network teaming failback delay on ESXi hosts, Net.TeamPolicyUpDelay, is currently set at 10 minutes, but in certain environments, a physical switch might take more than 10 minutes to be ready to receive or transmit data after a reboot. As a result, you might see loss of network connectivity.
This issue is resolved in this release. The fix increases the Net.TeamPolicyUpDelay parameter to up to 30 minutes. You can set the parameter by selecting the ESXi host and navigating to Configure > System > Advanced System Settings > Net.TeamPolicyUpDelay. Alternatively, you can use the command esxcfg-advcfg -s <value> /Net/TeamPolicyUpDelay, as shown in the sketch below.
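A minimal sketch, assuming you run it in the ESXi Shell; the value 1800000 is only an assumed example (the option takes an integer, so check its description and allowed maximum on your host before changing it):
```
# Show the current value and description of the failback delay option
esxcli system settings advanced list -o /Net/TeamPolicyUpDelay
# Set a new value with the command referenced in this note (example value only)
esxcfg-advcfg -s 1800000 /Net/TeamPolicyUpDelay
```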
- PR 2652863: Changes in the Distributed Firewall (DFW) filter configuration might cause virtual machines to lose network connectivity
Any DFW filter reconfiguration activity, such as adding or removing filters, might cause some filters to start dropping packets. As a result, virtual machines lose network connectivity and you need to reset the vmnic, change the port group, or reboot the virtual machine to restore traffic. In the output of the summarize-dvfilter command, you see state: IOChain Detaching for the failed filter.
This issue is resolved in this release.
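A quick way to look for affected filters from the ESXi Shell; the grep usage is only an assumed convenience around the command named in this note:
```
# Print dvfilter entries whose state indicates the failure described above
summarize-dvfilter | grep -B 5 "IOChain Detaching"
```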
- PR 2653874: A virtual machine might fail with a SIGSEGV error during 3D rendering
A buffer over-read during some rendering operations might cause a 3D-enabled virtual machine to fail with a SIGSEGV error during interaction with graphics applications that use 3D acceleration.
This issue is resolved in this release.
- PR 2643508: If a virtual machine restarts or resets during a hot-plug operation, logs might cause available disk space to fill up and the machine to stop responding
If a virtual machine restarts or resets during a hot-plug operation, logs in the vmware.log file of the virtual machine might fill up the available disk space and make the virtual machine unresponsive. The log messages are identical, such as: acpiNotifyQueue: Spurious ACPI event completion, data 0xFFFFFFFF.
This issue is resolved in this release. If you cannot apply this patch, do not perform a reset or restart of the virtual machine before hot-plug operations or driver installations finish. If you already face the issue, power cycle the virtual machine.
- PR 2644189: smpboot fails for Linux virtual machines with enabled Secure Encrypted Virtualization-Encrypted State (SEV-ES)
When a Linux virtual machine with multiple virtual CPUs and SEV-ES enabled boots, all CPUs except CPU0 are offline. You cannot bring the remaining CPUs online. The dmesg command returns an error such as smpboot: do_boot_cpu failed(-1) to wakeup CPU#1.
This issue is resolved in this release.
- PR 2652344: If you have .vswp swap files in a virtual machine directory, you see Device or resource busy error messages when scanning all files in the directory
If you have .vswp swap files in a virtual machine directory, you see Device or resource busy error messages when scanning all files in the directory. You also see extra I/O flow on vSAN namespace objects and a slowdown in the hostd service. The issue occurs if you attempt to open a file with the .vswp extension as an object descriptor. The swap files of the VMX process and the virtual machine main memory have the same extension, .vswp, but the swap files of the VMX process must not be opened as object descriptors.
This issue is resolved in this release.
- PR 2661064: The hostd service intermittently becomes unresponsive
In rare cases, a race condition of multiple threads attempting to create a file and remove the directory at the same directory might cause a deadlock that fails the hostd service. The service restores only after a restart of the ESXi host. In the vmkernel logs, you see alerts such as:
2020-03-31T05:20:00.509Z cpu12:4528223)ALERT: hostd detected to be non-responsive.
Such a deadlock might affect other services as well, but the race condition window is small, and the issue is not frequent.
This issue is resolved in this release.
- PR 2667291: Virtual machine encryption takes a long time and ultimately fails with an error
Virtual machine encryption might take several hours and ultimately fail with The file already exists errors in the logs of the hostd service. The issue occurs if an orphaned or unused <vm name>.nvram file exists in the VM configuration files. If the virtual machine has an entry such as NVRAM = "nvram" in the .vmx file, the encryption operation creates an encrypted file with the .nvram file extension, which the system considers a duplicate of the existing orphaned file.
This issue is resolved in this release. If you already face the issue, manually delete the orphaned <vm name>.nvram file before encryption.
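If you hit the issue before patching, a minimal sketch of the manual cleanup; the datastore and directory names below are illustrative placeholders:
```
# Remove the orphaned NVRAM file from the virtual machine's directory before encrypting
# (verify it is the unused, orphaned copy described above before deleting)
rm /vmfs/volumes/<datastore>/<vm name>/<vm name>.nvram
```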
- PR 2662512: vSAN host failure while disabling large client cache
If vSAN attempts to disable a memory cache of 256 GB or more, the operation might cause ESXi host failure with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2644221: If you enable LiveCoreDump as an option to collect system logs on an ESXi host, the host might become unresponsive
If you enable LiveCoreDump as an option to collect system logs on an ESXi host, the host might become unresponsive. You see an error such as #PF Exception 14 in world 2125468 on a purple diagnostic screen.
This issue is resolved in this release.
- PR 2662606: You see health alarms for sensor entity ID 44 after upgrading the firmware of HPE Gen10 servers
After upgrading the firmware version on HPE Gen10 servers, you might see health alarms for the I/O Module 2 ALOM_Link_P2 and NIC_Link_02P2 sensors, related to the sensor entity ID 44.x. The alarms do not indicate an actual health issue and you can ignore them irrespective of the firmware version.
This issue is resolved in this release.
- PR 2661153: If you disable RC4, Active Directory user authentication on ESXi hosts might fail
If you disable RC4 from your Active Directory configuration, user authentication to ESXi hosts might start to fail with Failed to authenticate user errors.
This issue is resolved in this release.
- PR 2655181: Virtual machines on NFS 4.1 datastore might become unresponsive after an NFS server failover or failback
If a reclaim request repeats during an NFS server failover or failback operation, the open reclaim fails and causes virtual machines on NFS 4.1 datastores to become unresponsive.
This issue is resolved in this release. The fix analyzes reclaim responses in detail and allows retries only when necessary.
- PR 2657411: The time that the processor is in system mode (%SYS) increases for certain disk I/O workloads in virtual machines
When virtual machines resume from a snapshot, after migration operations by using vSphere vMotion, resuming a suspended virtual machine, or a hot-plug operation, you might see a small increase in the reported %SYS time for certain disk I/O workloads. The issue occurs because the PVSCSI virtual storage adapter might stop dynamically sizing its internal queue to the guest workload. As a result, driver overhead increases during high I/O activity and might cause a minor increase in %SYS time for certain disk I/O workloads. The issue does not affect the NVMe and LSI virtual devices.
This issue is resolved in this release.
- PR 2675442: Java applications on an AMD host in an Enhanced vMotion Compatibility (EVC) cluster might fail to start with an error that SSE2 is not supported
Java applications on an AMD host in an EVC cluster might fail to start and report the error: Unknown x64 processor: SSE2 not supported. The issue occurs because the CPUID field family (leaf 1, EAX, bits 11-8) has an incorrect value. As a result, the Lookup Service and other Java-based services on the vCenter Server Appliance cannot start.
This issue is resolved in this release.
- PR 2657649: After upgrade of HPE servers to HPE Integrated Lights-Out 5 (iLO 5) firmware version 2.30, you see memory sensor health alerts
After upgrading HPE servers, such as HPE ProLiant Gen10 and Gen10 Plus, to iLO 5 firmware version 2.30, the vSphere Client displays health alerts for the memory sensors. The issue occurs because the hardware health monitoring system does not appropriately decode the Mem_Stat_* sensors when the first LUN is enabled after the upgrade.
This issue is resolved in this release.
- PR 2662558: If an SD card does not support Read Capacity 16, you see numerous errors in the logs
On ESXi hosts that use a VID:PID/0bda:0329 Realtek Semiconductor Corp USB 3.0 SD card reader device that does not support Read Capacity 16, you might see numerous errors in the vmkernel logs, such as:
2020-06-30T13:26:06.141Z cpu0:2097243)ScsiDeviceIO: 3449: Cmd(0x459ac1350600) 0x9e, CmdSN 0x2452e from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x7 D:0x0 P:0x0 Invalid sense data: 0x0 0x6e 0x73.
and
2020-06-30T14:23:18.280Z cpu0:2097243)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...
This issue is resolved in this release.
- PR 2661818: In case of a non-UTF8 string in the name property of numeric sensors, the vpxa service fails
The vpxa service fails in case of a non-UTF8 string in the name property of numeric sensors, and ESXi hosts disconnect from the vCenter Server system.
This issue is resolved in this release.
- PR 2664084: The Managed Object Browser might display CPU and memory sensors status incorrectly
Due to an error in processing sensor entries, memoryStatusInfo and cpuStatusInfo data might incorrectly include the status for non-CPU and non-memory sensors as well. This leads to incorrect status for the CPU and memory sensors in the Managed Object Browser.
This issue is resolved in this release.
- PR 2625155: ESXi installation might fail on media with unformatted or corrupt datastores
ESXi installation might fail on media with datastores that cannot mount because a VMFS partition is either corrupt, unformatted, or has a different system ownership. The ESXi Installer might also remove VMFS partitions on attached USB boot media during upgrade or install.
This issue is resolved in this release.
- PR 2658647: Audit records configured to the scratch partition are lost when upgrading to ESXi 7.x from 6.7 Update 2 and later
Audit records configured to the scratch partition on the boot device are not retained when upgrading to ESXi 7.x from 6.7 Update 2 and later. National Information Assurance Partnership (NIAP) certification requirements for ESXi post 6.7 Update 2 are that audit records located in /scratch persist after an upgrade.
This issue is resolved in this release. For earlier ESXi 7.x releases, back up the audit records in /scratch to another filesystem or partition before upgrading.
- PR 2659015: After a backup operation, identical error messages flood the hostd.log file
After a backup operation, identical error messages, such as Block list: Cannot convert disk path <vmdk file> to real path, skipping., might flood the hostd.log file. This issue prevents other hostd service logs from being written and might fill up the log memory.
This issue is resolved in this release.
- PR 2654686: The vSphere Virtual Volumes algorithm might not pick out the first Config-VVol that an ESXi host requests
In a vSphere HA environment, the vSphere Virtual Volumes algorithm uses a UUID to pick a Config-VVol when multiple ESXi hosts compete to create and mount a Config-VVol with the same friendly name at the same time. However, the Config-VVol picked by UUID might not be the first one that the ESXi host requests and this might create issues in the vSphere Virtual Volumes datastores.
This issue is resolved in this release. The vSphere Virtual Volumes algorithm now uses a timestamp rather than a UUID to pick a Config-VVol when multiple ESXi hosts compete to create and mount a Config-VVol with the same friendly name at the same time.
- PR 2664278: In the vSphere Client, you cannot change the log level configuration of the vpxa service after an upgrade of your vCenter Server system
In the vSphere Client or by using API, you might not be able to change the log level configuration of the vpxa service on an ESX host due to a missing or invalid Vpx.Vpxa.config.log.level option after an upgrade of your vCenter Server system.
This issue is resolved in this release. The vpxa service automatically sets a valid value for the Vpx.Vpxa.config.log.level option and exposes it to the vSphere Client or an API call.
- PR 2676632: ESXi on USB or FCoE devices cannot find boot device paths
If ESXi is installed on a slow boot device, such as USB or FCoE, ESXi might not be able to detect a boot device storage path. As a result, the bootbank partition and other partitions, such as /scratch, are not identified, and configuration changes are not saved. When ESXi boots, the /bootbank and /scratch symbolic links reference a temporary in-memory path.
This issue is resolved in this release. However, the fix works for boot devices that ESXi detects in less than 2 minutes. On rare occasions, when boot device detection takes longer than 2 minutes, you must follow the steps in VMware knowledge base article 2149444 to manually set the boot option devListStabilityCount.
- PR 2647557: The VMware vSphere High Availability agent might become unresponsive due to a memory issue
Under vSphere Lifecycle Manager cluster image management, the Lifecycle Manager agent might not be able to contact an ESXi host to start a task, such as installation. As a result, the vSphere Client displays an error such as The vSphere HA agent is not reachable from vCenter Server. The failure occurs when the vSphere Lifecycle Manager task status database on the ESXi host is not frequently cleaned up and the Lifecycle Manager agent does not have enough memory allocated to handle the size of the database file.
This issue is resolved in this release.
- PR 2647557: vSphere Lifecycle Manager image compliance checks might fail with an unknown error
During a cluster image management operation, the vSphere Lifecycle Manager might fail to contact the Lifecycle Manager agent on an ESXi host. In the vSphere Client, you see an error such as Unknown error occurred when invoking host API. The failure occurs when the vSphere Lifecycle Manager task status database on the ESXi host is not frequently cleaned up and the Lifecycle Manager agent does not have enough memory allocated to handle the size of the database file.
This issue is resolved in this release.
- PR 2628899: Enabling vSphere HA fails with a configuration error
Under vSphere Lifecycle Manager cluster image management, the vSphere HA agent might not be able to configure an ESXi host. As a result, in the vSphere Client you see an error such as:
Cannot complete the configuration of the vSphere HA agent on the host. Applying HA VIBs on the cluster encountered a failure.
The failure occurs because the process of the vSphere HA agent installation on the ESXi host might consume more memory than the allocated quota.
This issue is resolved in this release.
- PR 2663717: In the vSphere Client, you see hardware health warnings with status Unknown
In the vSphere Client, you see hardware health warnings with status Unknown for some sensors on ESXi hosts.
This issue is resolved in this release. Sensors that are not supported for decoding are ignored, and are not included in system health reports.
- PR 2633194: If you only open and close the Edit NTP Settings dialog box, the Start and stop with host option switches off
If you only open and close the Edit NTP Settings dialog box, even without making any changes, the Start and stop with host option switches off. The NTP service does not start when an ESXi host powers on. In the vSphere Client, you see the NTP Service Startup Policy option Start and stop with host as selected. However, in the ESXi Shell, when you run the command chkconfig --list, the NTP service is off.
This issue is resolved in this release.
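A quick way to verify the reported state from the ESXi Shell; the grep filter is only an assumed convenience:
```
# List service startup flags and check whether the NTP daemon is enabled
chkconfig --list | grep ntpd
```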
- PR 2670891: Snapshot operations for virtual machines with Change Block Tracking (CBT) enabled fail
If you disable a software FCoE adapter as a means to access Fibre Channel storage, the CBT module might fail to load and ESXi might not be able to detect a boot device. As a result, snapshot operations, such as creating and consolidating snapshots, fail.
This issue is resolved in this release.
- PR 2665031: The vSAN health service times out if there are issues with internet connectivity or DNS resolution
In case of connectivity problems, vSAN health checks that require connectivity to VMware might time out. The vSphere Client displays an error message such as:
Unable to query vSAN health information. Check vSphere Client logs for details.
This issue can affect online health checks, HCL updates, and vSphere Lifecycle Manager baseline recommendations. If vCenter Server cannot resolve the problem, vSAN health checks might time out while querying DNS entries.
This issue is resolved in this release.
- PR 2644003: vSAN host fails during disk removal
While decommissioning a disk for removal, an ESXi host might fail with a purple diagnostic screen. You see entries similar to the following in the backtrace:
2020-09-01T04:22:47.112Z cpu7:2099790)@BlueScreen: Failed at bora/modules/vmkernel/lsomcommon/ssdlog/ssdopslog.c:398 -- NOT REACHED
2020-09-01T04:22:47.112Z cpu7:2099790)Code start: 0x418037400000 VMK uptime: 0:00:39:25.026
2020-09-01T04:22:47.112Z cpu7:2099790)0x451a8049b9e0:[0x41803750bb65]PanicvPanicInt@vmkernel#nover+0x439 stack: 0x44a00000001
2020-09-01T04:22:47.112Z cpu7:2099790)0x451a8049ba80:[0x41803750c0a2]Panic_vPanic@vmkernel#nover+0x23 stack: 0x121
2020-09-01T04:22:47.113Z cpu7:2099790)0x451a8049baa0:[0x4180375219c0]vmk_PanicWithModuleID@vmkernel#nover+0x41 stack: 0x451a8049bb00
2020-09-01T04:22:47.113Z cpu7:2099790)0x451a8049bb00:[0x41803874707a]SSDLOG_FreeLogEntry@LSOMCommon#1+0x32b stack: 0x800000
2020-09-01T04:22:47.113Z cpu7:2099790)0x451a8049bb70:[0x4180387ae4d1][email protected]#0.0.0.1+0x2e stack: 0x712d103
2020-09-01T04:22:47.114Z cpu7:2099790)0x451a8049bbe0:[0x4180387dd08d][email protected]#0.0.0.1+0x72 stack: 0x431b19603150
2020-09-01T04:22:47.114Z cpu7:2099790)0x451a8049bcd0:[0x4180387dd2d3][email protected]#0.0.0.1+0x80 stack: 0x4318102ba7a0
2020-09-01T04:22:47.115Z cpu7:2099790)0x451a8049bd00:[0x4180387dabb5][email protected]#0.0.0.1+0x2ce stack: 0x800000
2020-09-01T04:22:47.115Z cpu7:2099790)0x451a8049beb0:[0x4180386de2db][email protected]#0.0.0.1+0x590 stack: 0x43180fe83380
2020-09-01T04:22:47.115Z cpu7:2099790)0x451a8049bf90:[0x4180375291ce]vmkWorldFunc@vmkernel#nover+0x4f stack: 0x4180375291ca
2020-09-01T04:22:47.116Z cpu7:2099790)0x451a8049bfe0:[0x4180377107da]CpuSched_StartWorld@vmkernel#nover+0x77 stack: 0x0
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the loadesx and esx-update VIBs.
Patch Category | Bugfix |
Patch Severity | Moderate |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the vmkata VIB.
Patch Category | Bugfix |
Patch Severity | Moderate |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2655992 |
CVE numbers | N/A |
Updates the nhpsa VIB to resolve the following issue:
- Update to the nhpsa driver
The disk serviceability plug-in of the ESXi native driver for HPE Smart Array controllers, nhpsa, is updated to version 70.0051.0.100 to resolve several known issues.
Patch Category | Bugfix |
Patch Severity | Moderate |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the vmkusb VIB.
Patch Category | Bugfix |
Patch Severity | Moderate |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2661584 |
CVE numbers | N/A |
Updates the smartpqi VIB to resolve the following issue:
- Update to the smartpqi driver
The disk serviceability plug-in of the ESXi native SCSI driver for Microsemi Smart Family controllers, smartpqi, is updated to version 70.4000.0.100-5 to restore several missing device IDs.
Patch Category | Security |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2661006, 2671485, 2636149 |
CVE numbers | CVE-2020-3999 |
The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.
Updates the esx-dvfilter-generic-fastpath, vsanhealth, vdfs, vsan, esx-base, crx, native-misc-drivers, esx-xserver, gc, and cpu-microcode VIBs to resolve the following issues:
ESXi 7.0 Update 1c addresses a denial of service vulnerability due to improper input validation in GuestInfo variables. A malicious actor with normal user privilege access to a virtual machine can cause failure in the virtual machine’s VMX process, leading to a denial of service condition. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3999 to this issue. For more information, see VMSA-2020-0029.
- Update of the SQLite database
The SQLite database is updated to version 3.33.0.
- Update to OpenSSL library
The ESXi userworld OpenSSL library is updated to version openssl-1.0.2w.
- Update to OpenSSH
The OpenSSH version is updated to 8.3p1.
- Update to the Network Time Protocol (NTP) daemon
The NTP daemon is updated to version ntp-4.2.8p15.
- Update to the libcurl library
The ESXi userworld libcurl library is updated to version 7.72.0.
The following VMware Tools ISO images are bundled with ESXi 7.0 Update 1c:
- windows.iso: VMware Tools 11.1.5 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.
- linux.iso: VMware Tools 10.3.22 ISO image for Linux OS with glibc 2.5 or later.
The following VMware Tools ISO images are available for download:
- VMware Tools 10.0.12:
- winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003.
- linuxPreGLibc25.iso: for Linux OS with a glibc version less than 2.5.
- VMware Tools 11.0.6:
- windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).
- solaris.iso: VMware Tools image for Solaris.
- darwin.iso: VMware Tools image for OS X.
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
Patch Category | Security |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the loadesx and esx-update VIBs.
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the vmkata VIB.
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the nvmerdma VIB.
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the vmkfcoe VIB.
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the vmkusb VIB.
Profile Name | ESXi-7.0U1c-17325551-standard |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | December 17, 2020 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2656093, 2652863, 2653874, 2643508, 2644189, 2652344, 2661064, 2667291, 2662512, 2644221, 2662606, 2661153, 2655181, 2657411, 2675442, 2657649, 2662558, 2661818, 2664084, 2625155, 2658647, 2659015, 2654686, 2664278, 2676632, 2647557, 2647557, 2628899, 2663717, 2633194, 2661808, 2670891, 2665031, 2644003, 2664045, 2655992, 2661584 |
Related CVE numbers | CVE-2020-3999 |
- This patch updates the following issues:
- The parameter for network teaming failback delay on ESXi hosts, Net.TeamPolicyUpDelay, is currently set at 10 minutes, but in certain environments, a physical switch might take more than 10 minutes to be ready to receive or transmit data after a reboot. As a result, you might see loss of network connectivity.
- Any DFW filter reconfiguration activity, such as adding or removing filters, might cause some filters to start dropping packets. As a result, virtual machines lose network connectivity and you need to reset the vmnic, change the port group, or reboot the virtual machine to restore traffic. In the output of the summarize-dvfilter command, you see state: IOChain Detaching for the failed filter.
- A buffer over-read during some rendering operations might cause a 3D-enabled virtual machine to fail with a SIGSEGV error during interaction with graphics applications that use 3D acceleration.
- If a virtual machine restarts or resets during a hot-plug operation, logs in the vmware.log file of the virtual machine might fill up the available disk space and make the virtual machine unresponsive. The log messages are identical, such as: acpiNotifyQueue: Spurious ACPI event completion, data 0xFFFFFFFF.
- When a Linux virtual machine with multiple virtual CPUs and SEV-ES enabled boots, all CPUs except CPU0 are offline. You cannot bring the remaining CPUs online. The dmesg command returns an error such as smpboot: do_boot_cpu failed(-1) to wakeup CPU#1.
- If you have .vswp swap files in a virtual machine directory, you see Device or resource busy error messages when scanning all files in the directory. You also see extra I/O flow on vSAN namespace objects and a slowdown in the hostd service. The issue occurs if you attempt to open a file with the .vswp extension as an object descriptor. The swap files of the VMX process and the virtual machine main memory have the same extension, .vswp, but the swap files of the VMX process must not be opened as object descriptors.
- In rare cases, a race condition of multiple threads attempting to create a file and remove the directory at the same directory might cause a deadlock that fails the hostd service. The service restores only after a restart of the ESXi host. In the vmkernel logs, you see alerts such as: 2020-03-31T05:20:00.509Z cpu12:4528223)ALERT: hostd detected to be non-responsive. Such a deadlock might affect other services as well, but the race condition window is small, and the issue is not frequent.
- Virtual machine encryption might take several hours and ultimately fail with The file already exists errors in the logs of the hostd service. The issue occurs if an orphaned or unused <vm name>.nvram file exists in the VM configuration files. If the virtual machine has an entry such as NVRAM = "nvram" in the .vmx file, the encryption operation creates an encrypted file with the .nvram file extension, which the system considers a duplicate of the existing orphaned file.
- If vSAN attempts to disable a memory cache of 256 GB or more, the operation might cause ESXi host failure with a purple diagnostic screen.
- If you enable LiveCoreDump as an option to collect system logs on an ESXi host, the host might become unresponsive. You see an error such as #PF Exception 14 in world 2125468 on a purple diagnostic screen.
- After upgrading the firmware version on HPE Gen10 servers, you might see health alarms for the I/O Module 2 ALOM_Link_P2 and NIC_Link_02P2 sensors, related to the sensor entity ID 44.x. The alarms do not indicate an actual health issue and you can ignore them irrespective of the firmware version.
- If you disable RC4 from your Active Directory configuration, user authentication to ESXi hosts might start to fail with Failed to authenticate user errors.
- If a reclaim request repeats during an NFS server failover or failback operation, the open reclaim fails and causes virtual machines on NFS 4.1 datastores to become unresponsive.
- When virtual machines resume from a snapshot, after migration operations by using vSphere vMotion, resuming a suspended virtual machine, or a hot-plug operation, you might see a small increase in the reported %SYS time for certain disk I/O workloads. The issue occurs because the PVSCSI virtual storage adapter might stop dynamically sizing its internal queue to the guest workload. As a result, driver overhead increases during high I/O activity and might cause a minor increase in %SYS time for certain disk I/O workloads. The issue does not affect the NVMe and LSI virtual devices.
- Java applications on an AMD host in an EVC cluster might fail to start and report the error: Unknown x64 processor: SSE2 not supported. The issue occurs because the CPUID field family (leaf 1, EAX, bits 11-8) has an incorrect value. As a result, the Lookup Service and other Java-based services on the vCenter Server Appliance cannot start.
- After upgrading HPE servers, such as HPE ProLiant Gen10 and Gen10 Plus, to iLO 5 firmware version 2.30, the vSphere Client displays health alerts for the memory sensors. The issue occurs because the hardware health monitoring system does not appropriately decode the Mem_Stat_* sensors when the first LUN is enabled after the upgrade.
- On ESXi hosts that use a VID:PID/0bda:0329 Realtek Semiconductor Corp USB 3.0 SD card reader device that does not support Read Capacity 16, you might see numerous errors in the vmkernel logs, such as:
2020-06-30T13:26:06.141Z cpu0:2097243)ScsiDeviceIO: 3449: Cmd(0x459ac1350600) 0x9e, CmdSN 0x2452e from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x7 D:0x0 P:0x0 Invalid sense data: 0x0 0x6e 0x73.
and
2020-06-30T14:23:18.280Z cpu0:2097243)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...
- The vpxa service fails in case of a non-UTF8 string in the name property of numeric sensors, and ESXi hosts disconnect from the vCenter Server system.
- Due to an error in processing sensor entries, memoryStatusInfo and cpuStatusInfo data might incorrectly include the status for non-CPU and non-memory sensors as well. This leads to incorrect status for the CPU and memory sensors in the Managed Object Browser.
- ESXi installation might fail on media with datastores that cannot mount because a VMFS partition is either corrupt, unformatted, or has a different system ownership. The ESXi Installer might also remove VMFS partitions on attached USB boot media during upgrade or install.
- Audit records configured to the scratch partition on the boot device are not retained when upgrading to ESXi 7.x from 6.7 Update 2 and later. National Information Assurance Partnership (NIAP) certification requirements for ESXi post 6.7 Update 2 are that audit records located in /scratch persist after an upgrade.
- After a backup operation, identical error messages, such as Block list: Cannot convert disk path <vmdk file> to real path, skipping., might flood the hostd.log file. This issue prevents other hostd service logs from being written and might fill up the log memory.
- In a vSphere HA environment, the vSphere Virtual Volumes algorithm uses a UUID to pick a Config-VVol when multiple ESXi hosts compete to create and mount a Config-VVol with the same friendly name at the same time. However, the Config-VVol picked by UUID might not be the first one that the ESXi host requests and this might create issues in the vSphere Virtual Volumes datastores.
- In the vSphere Client or by using API, you might not be able to change the log level configuration of the vpxa service on an ESX host due to a missing or invalid Vpx.Vpxa.config.log.level option after an upgrade of your vCenter Server system.
- If ESXi is installed on a slow boot device, such as USB or FCoE, ESXi might not be able to detect a boot device storage path. As a result, the bootbank partition and other partitions, such as /scratch, are not identified, and configuration changes are not saved. When ESXi boots, the /bootbank and /scratch symbolic links reference a temporary in-memory path.
- Under vSphere Lifecycle Manager cluster image management, the Lifecycle Manager agent might not be able to contact an ESXi host to start a task, such as installation. As a result, the vSphere Client displays an error such as The vSphere HA agent is not reachable from vCenter Server. The failure occurs when the vSphere Lifecycle Manager task status database on the ESXi host is not frequently cleaned up and the Lifecycle Manager agent does not have enough memory allocated to handle the size of the database file.
- During a cluster image management operation, the vSphere Lifecycle Manager might fail to contact the Lifecycle Manager agent on an ESXi host. In the vSphere Client, you see an error such as Unknown error occurred when invoking host API. The failure occurs when the vSphere Lifecycle Manager task status database on the ESXi host is not frequently cleaned up and the Lifecycle Manager agent does not have enough memory allocated to handle the size of the database file.
- Under vSphere Lifecycle Manager cluster image management, the vSphere HA agent might not be able to configure an ESXi host. As a result, in the vSphere Client you see an error such as: Cannot complete the configuration of the vSphere HA agent on the host. Applying HA VIBs on the cluster encountered a failure. The failure occurs because the process of the vSphere HA agent installation on the ESXi host might consume more memory than the allocated quota.
- In the vSphere Client, you see hardware health warnings with status Unknown for some sensors on ESXi hosts.
- If you only open and close the Edit NTP Settings dialog box, even without making any changes, the Start and stop with host option switches off. The NTP service does not start when an ESXi host powers on. In the vSphere Client, you see the NTP Service Startup Policy option Start and stop with host as selected. However, in the ESXi Shell, when you run the command chkconfig --list, the NTP service is off.
- If you disable a software FCoE adapter as a means to access Fibre Channel storage, the CBT module might fail to load and ESXi might not be able to detect a boot device. As a result, snapshot operations, such as creating and consolidating snapshots, fail.
- In case of connectivity problems, vSAN health checks that require connectivity to VMware might time out. The vSphere Client displays an error message such as: Unable to query vSAN health information. Check vSphere Client logs for details. This issue can affect online health checks, HCL updates, and vSphere Lifecycle Manager baseline recommendations. If vCenter Server cannot resolve the problem, vSAN health checks might time out while querying DNS entries.
- While decommissioning a disk for removal, an ESXi host might fail with a purple diagnostic screen. You see entries similar to the following in the backtrace:
2020-09-01T04:22:47.112Z cpu7:2099790)@BlueScreen: Failed at bora/modules/vmkernel/lsomcommon/ssdlog/ssdopslog.c:398 -- NOT REACHED 2020-09-01T04:22:47.112Z cpu7:2099790)Code start: 0x418037400000 VMK uptime: 0:00:39:25.026 2020-09-01T04:22:47.112Z cpu7:2099790)0x451a8049b9e0:[0x41803750bb65]PanicvPanicInt@vmkernel#nover+0x439 stack: 0x44a00000001 2020-09-01T04:22:47.112Z cpu7:2099790)0x451a8049ba80:[0x41803750c0a2]Panic_vPanic@vmkernel#nover+0x23 stack: 0x121 2020-09-01T04:22:47.113Z cpu7:2099790)0x451a8049baa0:[0x4180375219c0]vmk_PanicWithModuleID@vmkernel#nover+0x41 stack: 0x451a8049bb00 2020-09-01T04:22:47.113Z cpu7:2099790)0x451a8049bb00:[0x41803874707a]SSDLOG_FreeLogEntry@LSOMCommon#1+0x32b stack: 0x800000 2020-09-01T04:22:47.113Z cpu7:2099790)0x451a8049bb70:[0x4180387ae4d1][email protected]#0.0.0.1+0x2e stack: 0x712d103 2020-09-01T04:22:47.114Z cpu7:2099790)0x451a8049bbe0:[0x4180387dd08d][email protected]#0.0.0.1+0x72 stack: 0x431b19603150 2020-09-01T04:22:47.114Z cpu7:2099790)0x451a8049bcd0:[0x4180387dd2d3][email protected]#0.0.0.1+0x80 stack: 0x4318102ba7a0 2020-09-01T04:22:47.115Z cpu7:2099790)0x451a8049bd00:[0x4180387dabb5][email protected]#0.0.0.1+0x2ce stack: 0x800000 2020-09-01T04:22:47.115Z cpu7:2099790)0x451a8049beb0:[0x4180386de2db][email protected]#0.0.0.1+0x590 stack: 0x43180fe83380 2020-09-01T04:22:47.115Z cpu7:2099790)0x451a8049bf90:[0x4180375291ce]vmkWorldFunc@vmkernel#nover+0x4f stack: 0x4180375291ca 2020-09-01T04:22:47.116Z cpu7:2099790)0x451a8049bfe0:[0x4180377107da]CpuSched_StartWorld@vmkernel#nover+0x77 stack: 0x0
- If an NVMe device has multiple namespaces configured, support for vSAN secure disk wipe operations can change dynamically, based on the number of namespaces. If a new namespace is configured while a secure wipe operation is in progress, for example in VxRail or the Google Cloud Platform, the data on the newly added namespace might be erased.
- The disk serviceability plug-in of the ESXi native driver for HPE Smart Array controllers, nhpsa, is updated to version 70.0051.0.100 to resolve several known issues.
- The disk serviceability plug-in of the ESXi native SCSI driver for Microsemi Smart Family controllers, smartpqi, is updated to version 70.4000.0.100-5 to restore several missing device IDs.
Profile Name | ESXi-7.0U1c-17325551-no-tools |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | December 17, 2020 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2656093, 2652863, 2653874, 2643508, 2644189, 2652344, 2661064, 2667291, 2662512, 2644221, 2662606, 2661153, 2655181, 2657411, 2675442, 2657649, 2662558, 2661818, 2664084, 2625155, 2658647, 2659015, 2654686, 2664278, 2676632, 2647557, 2647557, 2628899, 2663717, 2633194, 2661808, 2670891, 2665031, 2644003, 2664045, 2655992, 2661584 |
Related CVE numbers | CVE-2020-3999 |
- This patch updates the following issues:
-
The parameter for network teaming failback delay on ESXi hosts,
Net.TeamPolicyUpDelay
, is currently set at 10 minutes, but in certain environments, a physical switch might take more than 10 minutes to be ready to receive or transmit data after a reboot. As a result, you might see loss of network connectivity. -
Any DFW filter reconfiguration activity, such as adding or removing filters, might cause some filters to start dropping packets. As a result, virtual machines lose network connectivity and you need to reset the vmnic, change the port group, or reboot the virtual machine to restore traffic. In the output of the
summarize-dvfilter
command, you seestate: IOChain Detaching
for the failed filter. -
A buffer over-read during some rendering operations might cause a 3D-enabled virtual machine to fail with a SIGSEGV error during interaction with graphics applications that use 3D acceleration.
-
If a virtual machine restarts or resets during a hot-plug operation, logs in the
vmware.log
file of the virtual machine might fill up the available disk space and make the virtual machine unresponsive. The log messages are identical, such as:acpiNotifyQueue: Spurious ACPI event completion, data 0xFFFFFFFF
. -
When a Linux virtual machine with multiple virtual CPUs and enabled SEV-ES boots, all CPUs except CPU0 are offline. You cannot bring the remaining CPUs online. The
dmesg
command returns an error such assmpboot: do_boot_cpu failed(-1) to wakeup CPU#1
. -
If you have
.vswp
swap files in a virtual machine directory, you seeDevice or resource busy
error messages when scanning all files in the directory. You also see extra I/O flow on vSAN namespace objects and a slow down in the hostd service.
The issue occurs if you attempt to open a file with.vswp
extension as an object descriptor. The swap files of the VMX process and the virtual machine main memory have the same extension.vswp
, but the swap files of the VMX process must not be opened as object descriptors. -
In rare cases, a race condition of multiple threads attempting to create a file and remove the directory at the same directory might cause a deadlock that fails the hostd service. The service restores only after a restart of the ESXi host. In the vmkernel logs, you see alerts such as:
2020-03-31T05:20:00.509Z cpu12:4528223)ALERT: hostd detected to be non-responsive.
Such a deadlock might affect other services as well, but the race condition window is small, and the issue is not frequent.
-
Virtual machines encryption might take several hours and ultimately fail with
The file already exists
error in the logs of the hostd service. The issue occurs if an orphaned or unused file <vm name="">.nvram
exists in the VM configuration files. If the virtual machines have an entry such asNVRAM = “nvram”
in the.vmx
file, the encryption operation creates an encrypted file with the.nvram
file extension, which the system considers a duplicate of the existing orphaned file. -
If vSAN attempts to disable a memory cache of 256 GB or more, the operation might cause ESXi host failure with a purple diagnostic screen.
-
If you enable LiveCoreDump as an option to collect system logs on an ESXi host, the host might become unresponsive. You see an error such as
#PF Exception 14 in world 2125468
on a purple diagnostic screen. -
After upgrading the firmware version on HP Gen10 servers, you might see health alarms for the I/O Module 2
ALOM_Link_P2
andNIC_Link_02P2
sensors, related to the sensor entityID 44.x
. The alarms do not indicate an actual health issue and you can ignore them irrespective of the firmware version. -
If you disable RC4 from your Active Directory configuration, user authentication to ESXi hosts might start to fail with
Failed to authenticate user
errors. -
If a reclaim request repeats during an NFS server failover or failback operation, the open reclaim fails and causes virtual machines on NFS 4.1 datastores to become unresponsive.
-
When virtual machines resume from a snapshot, after migration operations by using vSphere vMotion, resuming a suspended virtual machine, or a hot-plug operation, you might see a small increase in the reported %SYS time for certain disk I/O workloads. The issue occurs because the PVSCSI virtual storage adapter might stop dynamically sizing its internal queue to the guest workload. As a result, driver overhead increases during high I/O activity and might cause a minor increase in %SYS time for certain disk I/O workloads. The issue does not affect the NVMe and LSI virtual devices.
-
Java applications on an AMD host in an EVC cluster might fail to start and report the error:
Unknown x64 processor: SSE2 not supported
. The issue occurs because the CPUID fieldfamily (leaf 1, EAX, bits 11-8)
has an incorrect value. As a result, the Lookup Service and other Java-based services on the vCenter Server Appliance cannot start. -
After upgrading HPE servers, such as HPE ProLiant Gen10 and Gen10 Plus, to iLO 5 firmware version 2.30, the vSphere Client displays health alerts for the memory sensors. The issue occurs because the hardware health monitoring system does not appropriately decode the
Mem_Stat_*
sensors when the first LUN is enabled after the upgrade. -
On ESXi hosts that use a VID:PID/0bda:0329 Realtek Semiconductor Corp USB 3.0 SD card reader device that does not support Read Capacity 16, you might see numerous errors in the vmkernel logs, such as:
2020-06-30T13:26:06.141Z cpu0:2097243)ScsiDeviceIO: 3449: Cmd(0x459ac1350600) 0x9e, CmdSN 0x2452e from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x7 D:0x0 P:0x0 Invalid sense data: 0x0 0x6e 0x73.
and
2020-06-30T14:23:18.280Z cpu0:2097243)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...
-
The vpxa service fails in case of a non-UTF8 string in the name property of numeric sensors, and ESXi hosts disconnect from the vCenter Server system.
-
Due to an error in processing sensor entries,
memoryStatusInfo
andcpuStatusInfo
data might incorrectly include the status for non-CPU and non-memory sensors as well. This leads to incorrect status for the CPU and memory sensors in the Managed Object Browser. -
ESXi installation might fail on media with datastores that cannot mount because a VMFS partition is either corrupt, unformatted, or has a different system ownership. The ESXi Installer might also remove VMFS partitions on attached USB boot media during upgrade or install.
-
Audit records configured to the scratch partition on the boot device are not retained when upgrading to ESXi 7.x from 6.7 Update 2 and later. National Information Assurance Partnership (NIAP) certification requirements for ESXi post 6.7 Update 2 are that audit records located in
/scratch
persist after an upgrade. -
After a backup operation, identical error messages, such as
Block list: Cannot convert disk path <vmdk file> to real path, skipping.
, might flood thehostd.log
file. This issue prevents other hostd service logs and might fill up the log memory. -
In a vSphere HA environment, the vSphere Virtual Volumes algorithm uses UUID to pick out when multiple ESXi hosts might compete to create and mount a Config-VVol with the same friendly name at the same time. However, the Config-VVol picked by UUID might not be the first one that the ESXi host requests and this might create issues in the vSphere Virtual Volumes datastores.
-
In the vSphere Client or by using API, you might not be able to change the log level configuration of the vpxa service on an ESX host due to a missing or invalid Vpx.Vpxa.config.log.level option after an upgrade of your vCenter Server system.
-
If ESXi is installed on a slow boot device, such as USB or FCoE, ESXi might not be able to detect a boot device storage path. As a result, the bootbank partition and other partitions, such as
/scratch
, are not identified, and configuration changes are not saved. When ESXi boots, the/bootbank
and/scratch
symbolic links reference a temporary in-memory path. -
Under vSphere Lifecycle Manager cluster image management, the Lifecycle Manager agent might not be able to contact an ESXi host to start a task, such as installation. As a result, the vSphere Client displays an error such as
The vSphere HA agent is not reachable from vCenter Server
. The failure occurs when the vSphere Lifecycle Manager task status database on the ESXi host is not frequently cleaned up and the Lifecycle Manager agent does not have enough memory allocated to handle the size of the database file. -
- During a cluster image management operation, vSphere Lifecycle Manager might fail to contact the Lifecycle Manager agent on an ESXi host. In the vSphere Client, you see an error such as `Unknown error occurred when invoking host API`. The failure occurs when the vSphere Lifecycle Manager task status database on the ESXi host is not cleaned up frequently and the Lifecycle Manager agent does not have enough memory allocated to handle the size of the database file.
- Under vSphere Lifecycle Manager cluster image management, the vSphere HA agent might not be able to configure an ESXi host. As a result, in the vSphere Client you see an error such as: `Cannot complete the configuration of the vSphere HA agent on the host. Applying HA VIBs on the cluster encountered a failure.` The failure occurs because the installation of the vSphere HA agent on the ESXi host might consume more memory than the allocated quota.
- In the vSphere Client, you see hardware health warnings with status Unknown for some sensors on ESXi hosts.
- If you only open and close the Edit NTP Settings dialog box, even without making any changes, the Start and stop with host option switches off, and the NTP service does not start when an ESXi host powers on. In the vSphere Client, the NTP Service Startup Policy option Start and stop with host appears selected, but in the ESXi Shell, when you run the command `chkconfig --list`, the NTP service is off.
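As a quick check from the ESXi Shell, you can confirm both the boot-time activation flag and the current state of the NTP daemon; this is a minimal sketch that uses only the command the issue description already references plus the standard service script:

```
# Show whether ntpd is set to start with the host
chkconfig --list | grep ntpd

# Show whether the NTP daemon is currently running
/etc/init.d/ntpd status
```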
- If you disable a software FCoE adapter that is used as a means to access Fibre Channel storage, the CBT module might fail to load and ESXi might not be able to detect a boot device. As a result, snapshot operations, such as creating and consolidating snapshots, fail.
- In case of connectivity problems, vSAN health checks that require connectivity to VMware might time out. The vSphere Client displays an error message such as: `Unable to query vSAN health information. Check vSphere Client logs for details.` This issue can affect online health checks, HCL updates, and vSphere Lifecycle Manager baseline recommendations. If the vCenter Server system cannot resolve the required DNS entries, vSAN health checks might time out while querying them.
- While decommissioning a disk for removal, an ESXi host might fail with a purple diagnostic screen. You see entries similar to the following in the backtrace:
2020-09-01T04:22:47.112Z cpu7:2099790)@BlueScreen: Failed at bora/modules/vmkernel/lsomcommon/ssdlog/ssdopslog.c:398 -- NOT REACHED
2020-09-01T04:22:47.112Z cpu7:2099790)Code start: 0x418037400000 VMK uptime: 0:00:39:25.026
2020-09-01T04:22:47.112Z cpu7:2099790)0x451a8049b9e0:[0x41803750bb65]PanicvPanicInt@vmkernel#nover+0x439 stack: 0x44a00000001
2020-09-01T04:22:47.112Z cpu7:2099790)0x451a8049ba80:[0x41803750c0a2]Panic_vPanic@vmkernel#nover+0x23 stack: 0x121
2020-09-01T04:22:47.113Z cpu7:2099790)0x451a8049baa0:[0x4180375219c0]vmk_PanicWithModuleID@vmkernel#nover+0x41 stack: 0x451a8049bb00
2020-09-01T04:22:47.113Z cpu7:2099790)0x451a8049bb00:[0x41803874707a]SSDLOG_FreeLogEntry@LSOMCommon#1+0x32b stack: 0x800000
2020-09-01T04:22:47.113Z cpu7:2099790)0x451a8049bb70:[0x4180387ae4d1][email protected]#0.0.0.1+0x2e stack: 0x712d103
2020-09-01T04:22:47.114Z cpu7:2099790)0x451a8049bbe0:[0x4180387dd08d][email protected]#0.0.0.1+0x72 stack: 0x431b19603150
2020-09-01T04:22:47.114Z cpu7:2099790)0x451a8049bcd0:[0x4180387dd2d3][email protected]#0.0.0.1+0x80 stack: 0x4318102ba7a0
2020-09-01T04:22:47.115Z cpu7:2099790)0x451a8049bd00:[0x4180387dabb5][email protected]#0.0.0.1+0x2ce stack: 0x800000
2020-09-01T04:22:47.115Z cpu7:2099790)0x451a8049beb0:[0x4180386de2db][email protected]#0.0.0.1+0x590 stack: 0x43180fe83380
2020-09-01T04:22:47.115Z cpu7:2099790)0x451a8049bf90:[0x4180375291ce]vmkWorldFunc@vmkernel#nover+0x4f stack: 0x4180375291ca
2020-09-01T04:22:47.116Z cpu7:2099790)0x451a8049bfe0:[0x4180377107da]CpuSched_StartWorld@vmkernel#nover+0x77 stack: 0x0
- If an NVMe device has multiple namespaces configured, support for vSAN secure disk wipe operations can change dynamically, based on the number of namespaces. If a new namespace is configured while a secure wipe operation is in progress, for example in VxRail or the Google Cloud Platform, the data on the newly added namespace might be erased.
- The disk serviceability plug-in of the ESXi native driver for HPE Smart Array controllers, nhpsa, is updated to version 70.0051.0.100 to resolve several known issues.
- The disk serviceability plug-in of the ESXi native SCSI driver for Microsemi Smart Family controllers, smartpqi, is updated to version 70.4000.0.100-5 to restore several missing device IDs.
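To confirm which versions of these driver components are installed after applying the update, you can list the VIBs on the host and filter for the driver names; this is a minimal sketch, and the exact VIB names of the serviceability plug-ins can differ by release, so adjust the filter as needed:

```
# List installed VIBs and show only the HPE Smart Array and Microsemi Smart Family driver entries
esxcli software vib list | grep -E "nhpsa|smartpqi"
```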
Profile Name | ESXi-7.0U1sc-17325020-standard |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | December 17, 2020 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2661006, 2671485, 2636149 |
Related CVE numbers | CVE-2020-3999 |
- This patch updates the following issues:
- ESXi 7.0 Update 1c addresses a denial of service vulnerability due to improper input validation in GuestInfo variables. A malicious actor with normal user privilege access to a virtual machine can cause failure in the virtual machine’s VMX process, leading to a denial of service condition. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3999 to this issue. For more information, see VMSA-2020-0029.
- The SQLite database is updated to version 3.33.0.
- The ESXi userworld OpenSSL library is updated to version openssl-1.0.2w.
- The OpenSSH version is updated to 8.3p1.
- The NTP daemon is updated to version ntp-4.2.8p15.
- The ESXi userworld libcurl library is updated to version 7.72.0.
- The following VMware Tools ISO images are bundled with ESXi 7.0 Update 1c:
  - `windows.iso`: VMware Tools 11.1.5 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.
  - `linux.iso`: VMware Tools 10.3.22 ISO image for Linux OS with glibc 2.5 or later.
The following VMware Tools ISO images are available for download:
- VMware Tools 10.0.12:
  - `winPreVista.iso`: for Windows 2000, Windows XP, and Windows 2003.
  - `linuxPreGLibc25.iso`: for Linux OS with a glibc version less than 2.5.
- VMware Tools 11.0.6:
  - `windows.iso`: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).
- `solaris.iso`: VMware Tools image for Solaris.
- `darwin.iso`: VMware Tools image for OSX.
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
Profile Name | ESXi-7.0U1sc-17325020-no-tools |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | December 17, 2020 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2661006, 2671485, 2636149 |
Related CVE numbers | CVE-2020-3999 |
- This patch updates the following issues:
- ESXi 7.0 Update 1c addresses a denial of service vulnerability due to improper input validation in GuestInfo variables. A malicious actor with normal user privilege access to a virtual machine can cause failure in the virtual machine’s VMX process, leading to a denial of service condition. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3999 to this issue. For more information, see VMSA-2020-0029.
- The SQLite database is updated to version 3.33.0.
- The ESXi userworld OpenSSL library is updated to version openssl-1.0.2w.
- The OpenSSH version is updated to 8.3p1.
- The NTP daemon is updated to version ntp-4.2.8p15.
- The ESXi userworld libcurl library is updated to version 7.72.0.
- The following VMware Tools ISO images are bundled with ESXi 7.0 Update 1c:
  - `windows.iso`: VMware Tools 11.1.5 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.
  - `linux.iso`: VMware Tools 10.3.22 ISO image for Linux OS with glibc 2.5 or later.
The following VMware Tools ISO images are available for download:
- VMware Tools 10.0.12:
  - `winPreVista.iso`: for Windows 2000, Windows XP, and Windows 2003.
  - `linuxPreGLibc25.iso`: for Linux OS with a glibc version less than 2.5.
- VMware Tools 11.0.6:
  - `windows.iso`: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).
- `solaris.iso`: VMware Tools image for Solaris.
- `darwin.iso`: VMware Tools image for OSX.
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
Name | ESXi |
Version | 7.0 U1c-17325551 |
Release Date | December 17, 2020 |
Category | Enhancement |
Affected Components |
|
PRs Fixed | 2656093, 2652863, 2653874, 2643508, 2644189, 2652344, 2661064, 2667291, 2662512, 2644221, 2662606, 2661153, 2655181, 2657411, 2675442, 2657649, 2662558, 2661818, 2664084, 2625155, 2658647, 2659015, 2654686, 2664278, 2676632, 2647557, 2647557, 2628899, 2663717, 2633194, 2661808, 2670891, 2665031, 2644003, 2664045, 2655992, 2661584 |
Related CVE numbers | CVE-2020-3999 |
Name | ESXi |
Version | 7.0 U1sc-17325020 |
Release Date | December 17, 2020 |
Category | Enhancement |
Affected Components |
|
PRs Fixed | 2661006, 2671485, 2636149 |
Related CVE numbers | CVE-2020-3999 |
Known Issues
The known issues are grouped as follows.
Upgrade Issues
- Upgrades to ESXi 7.x from 6.5.x and 6.7.0 by using ESXCLI might fail due to a space limitation
Upgrades to ESXi 7.x from 6.5.x and 6.7.0 by using the `esxcli software profile update` or `esxcli software profile install` ESXCLI commands might fail, because the ESXi bootbank might be less than the size of the image profile. In the ESXi Shell or the PowerCLI shell, you see an error such as:
[InstallationError]
The pending transaction requires 244 MB free space, however the maximum supported size is 239 MB.
Please refer to the log file for more details.
The issue also occurs when you attempt an ESXi host upgrade by using the ESXCLI commands `esxcli software vib update` or `esxcli software vib install`.
Workaround: You can perform the upgrade in two steps, by using the `esxcli software profile update` command to update ESXi hosts to ESXi 6.7 Update 1 or later, and then update to 7.0 Update 1c. Alternatively, you can run an upgrade by using an ISO image and vSphere Lifecycle Manager. A hedged sketch of the two-step ESXCLI flow follows this item.
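The commands below are a minimal sketch of that two-step flow, not an exact procedure: the datastore path and the profile names in angle brackets are placeholders you must replace with the offline bundles and image profiles that match your environment.

```
# Step 1: update the host to an intermediate release (ESXi 6.7 Update 1 or later)
esxcli software profile update -d /vmfs/volumes/datastore1/<ESXi-6.7-depot>.zip -p <6.7-profile-name>

# Reboot the host, then step 2: update to ESXi 7.0 Update 1c using its offline bundle
esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U1c-17325551-depot.zip -p <7.0U1c-profile-name>
```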
- vSAN secure disk wipe not supported on NVMe devices with multiple namespaces
If an NVMe device has multiple namespaces configured, vSAN secure disk wipe operations are not supported.
If a new namespace is configured while a secure wipe operation is in progress, for example in VMware Tanzu Architecture for Dell EMC VxRail or the Google Cloud Platform, the data on the newly added namespace might be erased. For this reason, vSAN secure disk wipe operations are supported only for NVMe devices with a single namespace.

Workaround: None