Release Date: DEC 5, 2019
Build Details
Download Filename: | ESXi670-201912001.zip |
Build: | 15160138 |
Download Size: | 473.7 MB |
md5sum: | 153ea9de288d1cc2518e747f3806f929 |
sha1checksum: | e9761a1a8148d13af8a920decd9d729658d59f1c |
Host Reboot Required: | Yes |
Virtual Machine Migration or Shutdown Required: | Yes |
Bulletins
Bulletin ID | Category | Severity |
ESXi670-201912401-BG | Bugfix | Critical |
ESXi670-201912402-BG | Bugfix | Important |
ESXi670-201912403-BG | Bugfix | Important |
ESXi670-201912404-BG | Bugfix | Important |
ESXi670-201912405-BG | Bugfix | Important |
ESXi670-201912101-SG | Security | Critical |
ESXi670-201912102-SG | Security | Important |
Rollup Bulletin
This rollup bulletin contains the latest VIBs with all the fixes since the initial release of ESXi 6.7.
Bulletin ID | Category | Severity |
ESXi670-201912001 | Bugfix | Important |
Image Profiles
VMware patch and update releases contain general and critical image profiles. Apply the general release image profile to deliver the new bug fixes. A sketch of listing the profiles in a downloaded depot follows below.
Image Profile Name |
ESXi-6.7.0-20191204001-standard |
ESXi-6.7.0-20191204001-no-tools |
ESXi-6.7.0-20191201001s-standard |
ESXi-6.7.0-20191201001s-no-tools |
For more information about the individual bulletins, see the Download Patches page and the Resolved Issues section.
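To check which image profiles a downloaded patch depot contains before applying it, you can query the depot with esxcli. This is a minimal sketch; the datastore path is hypothetical and should be replaced with the location where you uploaded the ZIP file.
esxcli software sources profile list -d /vmfs/volumes/datastore1/ESXi670-201912001.zip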
Patch Download and Installation
The typical way to apply patches to ESXi hosts is by using VMware vSphere Update Manager. For details, see About Installing and Administering VMware vSphere Update Manager.
You can also update ESXi hosts by manually downloading the patch ZIP file from the VMware download page and installing the VIBs by using the esxcli software vib update command. Additionally, you can update the system by using the image profile and the esxcli software profile update command, as sketched in the example below.
For more information, see the vSphere Command-Line Interface Concepts and Examples and the vSphere Upgrade Guide.
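The following is a minimal sketch of both approaches, assuming the patch ZIP has been uploaded to a hypothetical datastore path such as /vmfs/volumes/datastore1/; substitute your own depot location and, if needed, another image profile name from the Image Profiles list above.
# Update only the VIBs contained in the depot:
esxcli software vib update -d /vmfs/volumes/datastore1/ESXi670-201912001.zip
# Or apply the complete image profile from the same depot:
esxcli software profile update -d /vmfs/volumes/datastore1/ESXi670-201912001.zip -p ESXi-6.7.0-20191204001-standard
Because this patch requires a host reboot, reboot the host after either command completes.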
Resolved Issues
The resolved issues are grouped as follows.
- ESXi670-201912401-BG
- ESXi670-201912402-BG
- ESXi670-201912403-BG
- ESXi670-201912404-BG
- ESXi670-201912405-BG
- ESXi670-201912101-SG
- ESXi670-201912102-SG
- ESXi-6.7.0-20191204001-standard
- ESXi-6.7.0-20191204001-no-tools
- ESXi-6.7.0-20191201001s-standard
- ESXi-6.7.0-20191201001s-no-tools
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2403863, 2379544, 2400052, 2396958, 2411015, 2430010, 2421035, 2388157, 2407141, 2387638, 2448171, 2423301, 2367001, 2419339, 2445066, 2432096, 2382664, 2432530, 2240272, 2382662, 2389011, 2400262, 2409342, 2411738, 2411907, 2412159, 2413837, 2380198, 2417593, 2418327, 2423588, 2430947, 2434152, 2349230, 2311565, 2412845, 2409136, 2340752, 2444667, 2398163, 2416514, 2412475, 2435882, 2386978, 2436227, 2411494, 2385716, 2390792 |
CVE numbers | N/A |
This patch updates the esx-base, esx-update, vsan,
and vsanhealth
VIBs to resolve the following issues:
- PR 2403863: Manually triggering a non-maskable interrupt (NMI) might not work on a vSphere system with an AMD EPYC 7002 series processor
Requesting an NMI from the hardware management console (BMC) or by pressing a physical NMI button must cause ESXi hosts to fail with a purple diagnostic screen and dump core. Instead, nothing happens and ESXi continues running.
This issue is resolved in this release.
- PR 2379544: You see MULTIPROCESSOR CONFIGURATION NOT SUPPORTED error message on a blue screen on Windows virtual machines while migrating to a newer version of ESXi
Windows virtual machines might fail while migrating to a newer version of ESXi after a reboot initiated by the guest OS. You see a
MULTIPROCESSOR CONFIGURATION NOT SUPPORTED
error message on a blue screen. The fix prevents the x2APIC id
field of the guest CPUID from being modified during the migration. This issue is resolved in this release.
- Update of the SQLite database
The SQLite database is updated to version 3.29.0.
- PR 2396958: DNS short name of ESXi hosts cannot be resolved
The DNS short name of ESXi hosts cannot be resolved. You see an error similar to:
nslookup <shortname>
** server can't find <shortname>: SERVFAIL
Fully Qualified Domain Names (FQDNs) resolve as expected. This issue is resolved in this release. The sketch below shows how you might inspect the DNS configuration of an affected host.
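As a minimal troubleshooting sketch, the following standard esxcli commands show the DNS servers and search domains configured on the host, which is the configuration involved in short-name resolution; they are not new commands added by this patch.
esxcli network ip dns server list
esxcli network ip dns search list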
- PR 2411015: Queries by using the CIM client or CLI to return an IPv4 VMkernel network endpoint fail
If you run a query to the class
VMware_KernelIPv4ProtocolEndpoint
by using the CIM client or CLI, the query does not return VMkernel NIC instances. The issue is seen when IP ranges are 128.x.x.x and above. This issue is resolved in this release.
- PR 2430010: Notifications for Permanent Device Loss (PDL) events might cause ESXi hosts to fail with a purple diagnostic screen
In ESXi 6.7, notifications for a PDL exit are no longer supported, but the Pluggable Storage Architecture (PSA) might still send notifications to the VMFS layer for such events. This might cause ESXi hosts to fail with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2421035: Virtual machine I/O requests might fail and cause virtual machine reboots due to timeouts
If you enable implicit Asymmetric Logical Unit Access (ALUA) for target devices, the
action_OnRetryErrors
method takes 40 tries to pass I/O requests before dropping a path. If a target is in the process of controller reset, and the time to switch path is greater than the time that the 40 retries take, the path is marked as dead. This can cause All-Paths-Down (APD) for the device. This issue is resolved in this release. The fix disables the
action_OnRetryErrors
method for implicit ALUA target devices.
- PR 2388157: vSAN permissions allow access to other datastores
The
+Host.Config.Storage
permission is required to create a vSAN datastore by using vCenter Server. This permission also provides access to other datastores managed by the vCenter Server system, including the ability to unmount those datastores. This issue is resolved in this release.
- PR 2407141: You cannot provision virtual disks to be shared in multi-writer mode as eager zeroed thick-provisioned by using the vSphere Client
A shared virtual disk on a vSAN datastore for use in multi-writer mode, such as for Oracle RAC, must be eager zeroed thick-provisioned. However, the vSphere Client does not allow you to provision the virtual disk as eager zeroed thick-provisioned.
This issue is resolved in this release. You can share any type of virtual disks on the vSAN datastore in multi-writer mode.
- PR 2387638: You see an unexpected failover or a blue diagnostic screen when both vSphere Fault Tolerance (FT) and a graphics address remapping table (GART) are enabled in a guest OS
You might see an unexpected failover or a blue diagnostic screen when both vSphere FT and a GART are enabled in a guest OS due to a race condition. vSphere FT scans the guest page table to find the dirty pages and generate a bitmap. To avoid a conflict, each vCPU scans a separate range of pages. However, if a GART is also enabled, it might map a guest physical page number (PPN) to an already mapped region. Also, multiple PPNs might be mapped to the same BusMem page number (BPN). This causes two vCPUs to write on the same QWORD in the bitmap when they are processing two PPNs in different regions.
This issue is resolved in this release. To avoid a race condition, the fix forces the use of atomic operations for bitmap write operations and enables
physmem
support for GART.
- PR 2448171: If you add end-entity certificates to the root CA of your vCenter Server system, you cannot add ESXi hosts
TLS certificates are usually arranged with a signing chain of Root CA, Intermediate CA and then a leaf certificate, where the leaf certificate names a specific server. A vCenter Server system expects the root CA to contain only certificates marked as capable of signing other certificates but does not enforce this requirement. As a result, you can add non-CA leaf certificates to the Root CA list. While previous releases ignore non-CA leaf certificates, ESXi 6.7 Update 3 throws an error for an invalid CA chain and prevents vCenter Server from completing the Add Host workflow.
This issue is resolved in this release. The fix silently discards non-CA certificates instead of throwing an error. There is no security impact from this change. ESXi670-201912001 also adds the configuration options
Config.HostAgent.ssl.keyStore.allowSelfSigned
, Config.HostAgent.ssl.keyStore.allowAny
, and Config.HostAgent.ssl.keyStore.discardLeaf
to allow customizing the root CA. For more information, see VMware knowledge base article 1038578. A hedged example of changing one of these options is sketched below.
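The following is a hedged sketch of changing one of these options with esxcli. It assumes that the dotted option names map to the usual slash-separated advanced-option paths and take integer values of 0 or 1; this mapping is an assumption and is not stated in this release note, so verify it against VMware knowledge base article 1038578 before making changes.
esxcli system settings advanced list -o /Config/HostAgent/ssl/keyStore/discardLeaf
esxcli system settings advanced set -o /Config/HostAgent/ssl/keyStore/discardLeaf -i 1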
- PR 2423301: After you revert a virtual machine to a snapshot, change block tracking (CBT) data might be corrupted
When reverting a virtual machine that has CBT enabled to a snapshot which is not a memory snapshot, and if you use the
QueryChangedDiskAreas()
API call, you might see an InvalidArgument
error. This issue is resolved in this release. With ESXi670-201912001, the output of the
QueryChangedDiskAreas()
call changes to FileFault
and adds the message Change tracking is not active on the disk <disk_path>
to provide more details on the issue.
With the fix, you must power on or reconfigure the virtual machine to enable CBT after reverting it to a snapshot and then take a snapshot to make a full backup.
To reconfigure the virtual machine, you must complete the following steps:
- In the Managed Object Browser graphical interface, run ReconfigVM_Task by using a URL such as https://<vc or host ip>/mob/?moid=<the virtual machine Managed Object ID>&method=reconfigure.
- In the <spec> tag, add <ChangeTrackingEnabled>true</ChangeTrackingEnabled>.
- Click Invoke Method.
- PR 2419339: ESXi hosts might fail with a purple diagnostic screen during power off or deletion of multiple virtual machines on a vSphere Virtual Volumes datastore
ESXi hosts might fail with a purple diagnostic screen during power off or deletion of multiple virtual machines on a vSphere Virtual Volumes datastore. You see a message indicating a PF Exception 14 on the screen. This issue might affect multiple hosts.
This issue is resolved in this release.
- PR 2445066: HPE servers with AMD processors might fail with a purple diagnostic screen due to a physical CPU lockup
Certain HPE servers with AMD processors might fail with a purple diagnostic screen due to a physical CPU lockup. The issue occurs when HPE servers run HPE modules and management agents installed by using HPE VIBs. You can see messages similar to:
2019-05-22T09:04:01.510Z cpu21:65700)WARNING: Heartbeat: 794: PCPU 0 didn't have a heartbeat for 7 seconds; *may* be locked up.
2019-05-22T09:04:01.510Z cpu0:65575)ALERT: NMI: 689: NMI IPI: RIPOFF(base):RBP:CS [0x8a46f2(0x418017e00000):0x43041572e040:0x4010] (Src 0x1, CPU0)
This issue is resolved in this release.
- PR 2432096: PCI Express (PCIe) Advanced Error Reporting (AER) register settings might be missing or incorrect for hot-added PCIe devices
When you hot add a device under a PCI hot plug slot that has only
PCIe _HPX
record settings, some PCIe registers in the hot-added device might not be properly set. This results in missing or incorrect PCIe AER register settings. For example, the AER driver control or AER mask register might not be initialized. This issue is resolved in this release.
- PR 2382664: vSAN health service times out when a vCenter Server system is not accessible over the Internet
If the HTTPS proxy is configured in
/etc/sysconfig/proxy
, but the vCenter Server system does not have Internet access, the vSAN health service times out and cannot be accessed. This issue is resolved in this release.
- PR 2432530: You cannot use a batch mode to unbind VMware vSphere Virtual Volumes
ESXi670-201912001 implements the
UnbindVirtualVolumes ()
method in batch mode to unbind VMware vSphere Virtual Volumes. Previously, unbinding took one connection per vSphere Virtual Volume. This sometimes led to consuming all available connections to a vStorage APIs for Storage Awareness (VASA) provider, delaying responses to or completely failing other API calls. This issue is resolved in this release.
- PR 2240272: If a two host vSAN cluster has a network partition, one or more vSAN objects might become inaccessible
One or more vSAN objects might become temporarily inaccessible for about 30 seconds during a network partition on a two host vSAN cluster. A rare race condition which might occur when a preferred host goes down causes the problem.
This issue is resolved in this release.
- PR 2382662: The vSAN performance service health check displays a warning on stretched cluster: Hosts Not Contributing Stats
If a stretched cluster has no route for witness traffic, or the firewall settings block port 80 for witness traffic, the vSAN performance service cannot collect performance statistics from the ESXi hosts. When this happens, the performance service health check displays a warning:
Hosts Not Contributing Stats
. This issue is resolved in this release.
- PR 2389011: Adding an ESXi host to an Active Directory domain by using a vSphere Authentication Proxy might fail intermittently
Adding an ESXi host to an AD domain by using a vSphere Authentication Proxy might fail intermittently with error code
41737
, which corresponds to an error messageLW_ERROR_KRB5KDC_ERR_C_PRINCIPAL_UNKNOWN
. This issue is resolved in this release. If a host is temporarily unable to find its newly created machine account in the AD environment, the fix adds retry logic for adding ESXi hosts to AD domains by using the Authentication Proxy.
- PR 2400262: In a vCenter Server system using AMD EPYC 7002 series processors, a virtual machine that has a PCI passthrough device might fail to power on
In a vCenter Server system using AMD EPYC 7002 series processors, a virtual machine that has a PCI passthrough device might fail to power on. In the
vmkernel.log
, you see messages similar to:
4512 2019-08-06T06:09:55.058Z cpu24:1001397137)AMDIOMMU: 611: IOMMU 0000:20:00.2: Failed to allocate IRTE for IOAPIC ID 243 vector 0x3f
4513 2019-08-06T06:09:55.058Z cpu24:1001397137)WARNING: IOAPIC: 1238: IOAPIC Id 243: Failed to allocate IRTE for vector 0x3f
In the AMD IOMMU interrupt remapper, IOAPIC interrupts use an IRTE index equal to the vector number. In certain cases, a non-IOAPIC interrupt might take the index that an IOAPIC interrupt needs.
This issue is resolved in this release.
- PR 2409342: You cannot disable the Maximum Transmission Unit (MTU) check in the vmxnet3 backend that verifies packet length does not exceed the vNIC MTU
With ESXi670-201912001, you can disable the Maximum Transmission Unit (MTU) check in the vmxnet3 backend that verifies packet length does not exceed the vNIC MTU. The default behavior is to perform the MTU check. However, with vmxnet3, this check might cause an increase in dropped packets. For more information, see VMware knowledge base article 75213.
This issue is resolved in this release.
- PR 2411738: Content-Based Read Cache (CBRC) digest recompute operations might take a long time for Horizon linked clone desktops
When Horizon linked clone desktops are refreshed or recomposed, a recompute operation follows and it might take a long time. The delay in the recompute operation might cause a misconfiguration of the desktop digest files. This results in all the recompute I/O ending up in the replica disk. I/O congestion in the replica causes longer digest recompute times.
This issue is resolved in this release.
- PR 2411907: Virtual machines might power on from older chipsets instead of newer chipsets
During a host profile remediation, blocked parameters such as Enhanced vMotion Compatibility (EVC) parameters at
/etc/vmware/config
might be lost. This results in virtual machines powering on from older chipsets such as Haswell instead of newer chipsets such as Broadwell. This issue is resolved in this release.
- PR 2412159: With multi-vMotion vmknics configured, normal ping causes an error in vSAN health vMotion: Basic (unicast) connectivity check
A vSAN cluster that has multi-vMotion VMNICs configured might report a false alarm raised by vSAN health
vMotion: Basic (unicast) connectivity check
. This issue is resolved in this release.
- PR 2413837: The cmmdsTimeMachine service fails on all the ESXi hosts in a vSAN cluster
In some vSAN environments, the cmmdsTimeMachine service running on ESXi hosts fails soon after starting. This problem occurs when excessive memory is consumed by the watchdog processes.
This issue is resolved in this release.
- Update to the libPNG library
The libPNG library is updated to libpng-1.6.37.
- PR 2417593: For a vSAN cluster that is part of an environment using a custom certificate chain, you might see a performance service warning such as Hosts Not Contributing Stats
The problem occurs in some environments that use custom SSL certificate chains. The vSAN performance service cannot collect vSAN performance metrics from one or more ESXi hosts. The health service issues a warning such as
Hosts Not Contributing Stats.
This issue is resolved in this release.
- PR 2418327: Host name or IP Network uplink redundancy lost alarm resets to Green even if a VMNIC is still down
The Host name or IP Network uplink redundancy alarm reports the loss of uplink redundancy on a vSphere standard or a distributed switch for an ESXi host. The redundant physical NICs are either down or are not assigned to the switch. In some cases, when more than one VMNIC is down, the alarm resets to Green even when one of the VMNICs is up, while others might still be down.
This issue is resolved in this release. The fix aggregates all the restored dvPort and redundancy events at the net correlator layer, and reports them to the vCenter Server system only when all uplinks are restored.
- PR 2423588: If the allocated memory for a large number of virtual distributed switch ports exceeds the heap limit, the hostd service might fail
In some cases, the allocated memory for a large number of virtual distributed switch ports exceeds the
dvsLargeHeap
parameter. This might cause the hostd service or running commands to fail. This issue is fixed in this release. You can use the following configuration setting:
esxcfg-advcfg -s X /Net/DVSLargeHeapMBPERGB
to align the dvsLargeHeap
parameter with the system physical memory. Here, X
is an integer value from 1 to 20 that defines the heap limit relative to the physical memory of an ESXi host. For example, if X
is 5 and the physical memory is 40 GB, the heap limit is set to 200 MB.
To use this setting, you must reboot the ESXi host. A hypothetical session is sketched below.
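For example, on the hypothetical 40 GB host described above, the following commands would set the heap limit to 200 MB and then read the value back. The -s form is taken from this release note; the -g form for reading the option back is an assumption based on common esxcfg-advcfg usage.
esxcfg-advcfg -s 5 /Net/DVSLargeHeapMBPERGB
esxcfg-advcfg -g /Net/DVSLargeHeapMBPERGB
Reboot the ESXi host afterward for the new heap limit to take effect.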
- PR 2430947: Virtual machines might fail during a passthrough of an iLOK USB key device
Virtual machines might fail with a VMX panic error message during a passthrough of an iLOK USB key device. You see an error similar to
PANIC: Unexpected signal: 11
in the virtual machine log file, vmware.log
. This issue is resolved in this release.
- PR 2434152: An ESXi host might fail with a purple diagnostic screen during the creation or mounting of a disk group
During the creation or mounting of a vSAN disk group, the ESXi host might fail with a purple diagnostic screen. This problem occurs due to a
NULL
pointer dereference. You can see similar information in the backtrace:
[email protected]#0.0.0.1+0x203
[email protected]#0.0.0.1+0x358
[email protected]#0.0.0.1+0x1a4
[email protected]#0.0.0.1+0x590
vmkWorldFunc@vmkernel#nover+0x4f
CpuSched_StartWorld@vmkernel#nover+0x77
This issue is resolved in this release.
- PR 2349230: You lose subscription to CIM indications without a warning
If a CIM provider fails for some reason, the small footprint CIM broker (SFCB) service restarts it, but the provider might not keep all existing indication subscriptions. As a result, you might not receive a CIM indication for a hardware-related error event.
This issue is resolved in this release. The fix calls the
enableindications
method to reconfigure indication subscriptions after a CIM provider restart.
- PR 2311565: An ESXi host detects only a few LUNs after booting
If you try to rescan HBA and VMFS volumes or get a support bundle, ESXi hosts might detect a random few LUNs after booting and might lose connectivity. The issue is caused by a deadlock between the helper threads that serve to find SCSI paths and read capacity. As a result, the device discovery fails.
This issue is resolved in this release.
- PR 2412845: After a fresh installation or upgrade to ESXi 6.7 Update 3, ESXi hosts might fail to boot in UEFI mode
After a fresh installation or upgrade to ESXi 6.7 Update 3, due to incompatibility between the ESXi bootloader and the UEFI firmware on certain machines, ESXi hosts might fail to boot in UEFI mode. You see messages similar to the following appear in white and red on a black background:
Shutting down firmware services...
Page allocation error: Out of resources
Failed to shutdown the boot services.
Unrecoverable error
If you upgrade to ESXi 6.7 Update 3 from a previous release and 6.7 Update 3 has never booted successfully, the failure causes ESXi Hypervisor Recovery to automatically roll back to the installation that you upgraded from. However, the machine still fails to boot because Hypervisor Recovery is unable to roll back the bootloader. This issue is resolved in this release.
- PR 2409136: Virtual machines with PCI passthrough devices fail to power on after a reset or reboot
Stale parameters might cause incorrect handling of interrupt info from passthrough devices during a reset or reboot. As a result, virtual machines with PCI passthrough devices might fail to power on after a reset or reboot.
This issue is resolved in this release.
- PR 2340752: HPE servers with firmware version 1.30 might trigger hardware status warnings for I/O module sensors
HPE servers with firmware version 1.30 might report the status of I/O module sensors as warnings. You might see similar messages:
[Device] I/O Module n ALOM_Link_Pn
or [Device] I/O module n NIC_Link_Pn
. This issue is resolved in this release. For more information, see VMware knowledge base article 53134.
- PR 2444667: If you configure a small limit for the vmx.log.rotateSize parameter, the VMX process might fail while powering on or during vSphere vMotion
If you configure a small limit for the
vmx.log.rotateSize
parameter, the VMX process might fail while powering on or during vSphere vMotion. If you use a value of less than 100 KB for the vmx.log.rotateSize
parameter, the chances that these issues occur increase. This issue is resolved in this release.
- PR 2398163: An ESXi host might fail with a purple diagnostic screen during shutdown if both MLDv1 and MLDv2 devices are present in the network
An ESXi host might fail with a purple diagnostic screen during shutdown due to a very rare race condition in which the host tries to access a memory region in the short window between when it is freed and when it is allocated to another task, if both MLDv1 and MLDv2 devices are present in the network and global IPv6 addresses are disabled. This issue was fixed in ESXi 6.7 Update 2. However, the fix revealed another issue during ESXi host shutdown, a
NULL
pointer dereference while IPv6 was disabled. This issue is resolved in this release.
- PR 2416514: You see IOMMU warnings from AMD servers in the vmkernel logs
You might see multiple IOMMU warnings from AMD servers in the vmkernel logs, similar to:
WARNING: AMDIOMMU: 222: completion wait bit is not set after a while!
AMD IOMMU hardware might be slow to process COMPLETION_WAIT
commands. As a result, invalidation requests that have not completed might be propagated, causing stale mappings in the IOMMU TLB during DMA transactions. This issue is resolved in this release.
- PR 2412475: You see Sensor -1 type hardware health alarms on ESXi hosts and receive excessive mail alerts
After upgrading to ESXi 6.7 Update 3, you might see
Sensor -1
type hardware health alarms on ESXi hosts being triggered without an actual problem. If you have configured email notifications for hardware sensor state alarms in your vCenter Server system, this can result in excessive email alerts. These emails might cause storage issues in the vCenter Server database if the Stats, Events, Alarms, and Tasks (SEAT) directory goes above the 95% threshold. This issue is resolved in this release.
- PR 2435882: ESXi 6.7 Update 3 hosts take a long time to complete tasks such as exiting maintenance mode
In some cases, ESXi hosts running ESXi 6.7 Update 3 might take a long time to complete tasks such as entering or exiting maintenance mode or connecting to a vCenter Server system. The delay in response might be up to 30 minutes. This happens when the CIM service tries to periodically refresh the storage and numeric sensor data under a common lock, causing hostd threads to wait for a response.
This issue is resolved in this release. The fix uses a different lock to refresh the sensor data for the CIM service, so that other threads do not wait for the common lock to be released.
- PR 2386978: ESXi hosts might fail with a purple diagnostic screen during a SEsparse operation
In some scenarios, SEsparse I/O threads might either pause or be blocked in a non-blocking thread context. As a result, the ESXi host goes into a panic state and fails with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2436227: If an ESXi host cannot allocate memory to the filters in Netqueue, the host might fail with a purple diagnostic screen
If an ESXi host cannot allocate memory to the filters in Netqueue for some reason, the host might fail with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2411494: VMFS6 automatic asynchronous reclamation of free space might work at a higher space reclamation priority than configured
In certain cases, automatic asynchronous reclamation of free space in VMFS6 datastores, also called unmapping, might work at a higher space reclamation priority than configured. Removal of Eager Zero Thick (EZT) and Lazy Zero Thick (LZT) VMDKs from the datastore might trigger higher space reclamation than configured. For space reclamation priority that is less than 1 GBps, you also might see higher unmapping rates, depending on the fragmentation in the volume.
This issue is resolved in this release.
- PR 2390792: Enabling the VMware vSphere Storage I/O Control log might result in flooding syslog and rsyslog servers
Some Storage I/O Control logs might cause a log spew in the
storagerm.log
andsdrsinjector.log
files. This condition might lead to rapid log rotation. This issue is fixed in this release. The fix moves some logs from regular Log to Log_Trivia to prevent additional logging.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2424231 |
Related CVE numbers | N/A |
This patch updates the vmkusb
VIB to resolve the following issue:
- PR 2424231: You cannot update ESXi hosts due to duplicate IDs in USB storage devices
Some USB storage devices do not support Device Identification Inquiry requests and use the same value as the Serial Number Inquiry, or even the same serial descriptor. Multiple LUNs using such devices might not be able to access the bootbank partition of the ESXi host and default to the
/tmp
directory instead. As a result, ESXi host updates fail. This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
Related CVE numbers | N/A |
This patch updates the net-vmxnet3
VIB.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2438108 |
Related CVE numbers | N/A |
This patch updates the elx-esx-libelxima.so
VIB to resolve the following issue:
- PR 2438108: Emulex driver logs might fill up the /var file system if the /scratch/log/ folder is temporarily unavailable
Emulex drivers might write logs to
/var/log/EMU/mili/mili2d.log
and fill up the 40 MB /var
file system logs of RAM drives. A previous fix changed writes of Emulex drivers to the /scratch/log/
folder instead of to the /var/log/
folder to prevent the issue. However, when the /scratch/log/
folder is temporarily unavailable, the /var/log/EMU/mili/mili2d.log
file is still periodically used for logging. This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
Related CVE numbers | N/A |
This patch updates the native-misc-drivers
VIB.
Patch Category | Security |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | CVE-2019-5544 |
This patch updates the esx-base, esx-update, vsan and vsanhealth
VIBs.
OpenSLP as used in ESXi has a heap overwrite issue. This issue may allow a malicious actor with network access to port 427 on an ESXi host to overwrite the heap of the OpenSLP service resulting in remote code execution. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2019-5544 to this issue. For more information, see VMware Security Advisory VMSA-2019-0022.
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2377204 |
CVE numbers | N/A |
This patch updates the tools-light
VIB.
The following VMware Tools ISO images are bundled with ESXi670-201912001:
windows.iso
: VMware Tools 11.0.1 ISO image for Windows Vista (SP2) or later
linux.iso
: VMware Tools 10.3.21 ISO image for Linux OS with glibc 2.5 or later
The following VMware Tools 10.3.10 ISO images are available for download:
solaris.iso
: VMware Tools image for Solaris
darwin.iso
: VMware Tools image for OS X
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
Profile Name | ESXi-6.7.0-20191204001-standard |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | December 5, 2019 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2403863, 2379544, 2400052, 2396958, 2411015, 2430010, 2421035, 2388157, 2407141, 2387638, 2448171, 2423301, 2367001, 2419339, 2445066, 2432096, 2382664, 2432530, 2240272, 2382662, 2389011, 2400262, 2409342, 2411738, 2411907, 2412159, 2413837, 2380198, 2417593, 2418327, 2423588, 2430947, 2434152, 2349230, 2311565, 2412845, 2409136, 2340752, 2444667, 2398163, 2416514, 2412475, 2435882, 2386978, 2436227, 2411494, 2424231, 2438108, 2390792 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
Requesting an NMI from the hardware management console (BMC) or by pressing a physical NMI button must cause ESXi hosts to fail with a purple diagnostic screen and dump core. Instead, nothing happens and ESXi continues running.
-
Windows virtual machines might fail while migrating to a newer version of ESXi after a reboot initiated by the guest OS. You see a
MULTIPROCESSOR CONFIGURATION NOT SUPPORTED
error message on a blue screen. The fix prevents the x2APIC id
field of the guest CPUID from being modified during the migration. -
The SQLite database is updated to version 3.29.0.
-
The DNS short name of ESXi hosts cannot be resolved. You see an error similar to:
nslookup <shortname>
** server can't find <shortname>: SERVFAIL
Fully Qualified Domain Names (FQDN) resolve as expected. -
If you run a query to the class
VMware_KernelIPv4ProtocolEndpoint
by using the CIM client or CLI, the query does not return VMkernel NIC instances. The issue is seen when IP ranges are 128.x.x.x and above. -
In ESXi 6.7, notifications for a PDL exit are no longer supported, but the Pluggable Storage Architecture (PSA) might still send notifications to the VMFS layer for such events. This might cause ESXi hosts to fail with a purple diagnostic screen.
-
If you enable implicit Asymmetric Logical Unit Access (ALUA) for target devices, the
action_OnRetryErrors
method takes 40 tries to pass I/O requests before dropping a path. If a target is in the process of controller reset, and the time to switch path is greater than the time that the 40 retries take, the path is marked as dead. This can cause All-Paths-Down (APD) for the device. -
The
+Host.Config.Storage
permission is required to create a vSAN datastore by using vCenter Server. This permission also provides access to other datastores managed by the vCenter Server system, including the ability to unmount those datastores. -
A shared virtual disk on a vSAN datastore for use in multi-writer mode, such as for Oracle RAC, must be eager zeroed thick-provisioned. However, the vSphere Client does not allow you to provision the virtual disk as eager zeroed thick-provisioned.
-
You might see an unexpected failover or a blue diagnostic screen when both vSphere FT and a GART are enabled in a guest OS due to a race condition. vSphere FT scans the guest page table to find the dirty pages and generate a bitmap. To avoid a conflict, each vCPU scans a separate range of pages. However, if a GART is also enabled, it might map a guest physical page number (PPN) to an already mapped region. Also, multiple PPNs might be mapped to the same BusMem page number (BPN). This causes two vCPUs to write on the same QWORD in the bitmap when they are processing two PPNs in different regions.
-
TLS certificates are usually arranged with a signing chain of Root CA, Intermediate CA and then a leaf certificate, where the leaf certificate names a specific server. A vCenter Server system expects the root CA to contain only certificates marked as capable of signing other certificates but does not enforce this requirement. As a result, you can add non-CA leaf certificates to the Root CA list. While previous releases ignore non-CA leaf certificates, ESXi 6.7 Update 3 throws an error for an invalid CA chain and prevents vCenter Server from completing the Add Host workflow.
-
When reverting a virtual machine that has CBT enabled to a snapshot which is not a memory snapshot, and if you use the
QueryChangedDiskAreas()
API call, you might see an InvalidArgument
error. -
ESXi hosts might fail with a purple diagnostic screen during power off or deletion of multiple virtual machines on a vSphere Virtual Volumes datastore. You see a message indicating a PF Exception 14 on the screen. This issue might affect multiple hosts.
-
Certain HPE servers with AMD processors might fail with a purple diagnostic screen due to a physical CPU lockup. The issue occurs when HPE servers run HPE modules and management agents installed by using HPE VIBs. You can see messages similar to:
2019-05-22T09:04:01.510Z cpu21:65700)WARNING: Heartbeat: 794: PCPU 0 didn't have a heartbeat for 7 seconds; *may* be locked up.
2019-05-22T09:04:01.510Z cpu0:65575)ALERT: NMI: 689: NMI IPI: RIPOFF(base):RBP:CS [0x8a46f2(0x418017e00000):0x43041572e040:0x4010] (Src 0x1, CPU0) -
When you hot add a device under a PCI hot plug slot that has only
PCIe _HPX
record settings, some PCIe registers in the hot added device might not be properly set. This results in missing or incorrect PCIe AER register settings. For example, AER driver control or AER mask register might not be initialized. -
If the HTTPS proxy is configured in
/etc/sysconfig/proxy
, but the vCenter Server system does not have Internet access, the vSAN health service times out and cannot be accessed. -
ESXi670-201912001 implements the
UnbindVirtualVolumes ()
method in batch mode to unbind VMware vSphere Virtual Volumes. Previously, unbinding took one connection per vSphere Virtual Volume. This sometimes led to consuming all available connections to a vStorage APIs for Storage Awareness (VASA) provider and delayed response from or completely failed other API calls. -
One or more vSAN objects might become temporarily inaccessible for about 30 seconds during a network partition on a two host vSAN cluster. A rare race condition which might occur when a preferred host goes down causes the problem.
-
If a stretched cluster has no route for witness traffic, or the firewall settings block port 80 for witness traffic, the vSAN performance service cannot collect performance statistics from the ESXi hosts. When this happens, the performance service health check displays a warning:
Hosts Not Contributing Stats
. -
Adding an ESXi host to an AD domain by using a vSphere Authentication Proxy might fail intermittently with error code
41737
, which corresponds to an error messageLW_ERROR_KRB5KDC_ERR_C_PRINCIPAL_UNKNOWN
. -
In a vCenter Server system using AMD EPYC 7002 series processors, a virtual machine that has a PCI passthrough device might fail to power on. In the
vmkernel.log
, you see messages similar to:
4512 2019-08-06T06:09:55.058Z cpu24:1001397137)AMDIOMMU: 611: IOMMU 0000:20:00.2: Failed to allocate IRTE for IOAPIC ID 243 vector 0x3f
4513 2019-08-06T06:09:55.058Z cpu24:1001397137)WARNING: IOAPIC: 1238: IOAPIC Id 243: Failed to allocate IRTE for vector 0x3f
In the AMD IOMMU interrupt remapper, IOAPIC interrupts use an IRTE index equal to the vector number. In certain cases, a non-IOAPIC interrupt might take the index that an IOAPIC interrupt needs.
-
With ESXi670-201912001, you can disable the Maximum Transmission Unit (MTU) check in the vmxnet3 backend that verifies packet length does not exceed the vNIC MTU. The default behavior is to perform the MTU check. However, with vmxnet3, this check might cause an increase in dropped packets. For more information, see VMware knowledge base article 75213.
-
When Horizon linked clone desktops are refreshed or recomposed, a recompute operation follows and it might take a long time. The delay in the recompute operation might cause a misconfiguration of the desktop digest files. This results in all the recompute I/O ending up in the replica disk. I/O congestion in the replica causes longer digest recompute times.
-
During a host profile remediation, blocked parameters such as Enhanced vMotion Compatibility (EVC) parameters at
/etc/vmware/config
might be lost. This results in virtual machines powering on from older chipsets such as Haswell instead of newer chipsets such as Broadwell. -
A vSAN cluster that has multi-vMotion VMNICs configured might report a false alarm raised by vSAN health
vMotion: Basic (unicast) connectivity check
. -
In some vSAN environments, the cmmdsTimeMachine service running on ESXi hosts fails soon after starting. This problem occurs when excessive memory is consumed by the watchdog processes.
-
The libPNG library is updated to libpng-1.6.37.
-
The problem occurs in some environments that use custom SSL certificate chains. The vSAN performance service cannot collect vSAN performance metrics from one or more ESXi hosts. The health service issues a warning such as
Hosts Not Contributing Stats.
-
The Host name or IP Network uplink redundancy alarm reports the loss of uplink redundancy on a vSphere standard or a distributed switch for an ESXi host. The redundant physical NICs are either down or are not assigned to the switch. In some cases, when more than one VMNIC is down, the alarm resets to Green even when one of the VMNICs is up, while others might still be down.
-
In some cases, the allocated memory for a large number of virtual distributed switch ports exceeds the
dvsLargeHeap
parameter. This might cause the hostd service or running commands to fail. -
Virtual machines might fail with a VMX panic error message during a passthrough of an iLOK USB key device. You see an error similar to
PANIC: Unexpected signal: 11
in the virtual machine log file, vmware.log
. -
During the creation or mounting of a vSAN disk group, the ESXi host might fail with a purple diagnostic screen. This problem occurs due to a
NULL
pointer dereference. You can see similar information in the backtrace:
[email protected]#0.0.0.1+0x203
[email protected]#0.0.0.1+0x358
[email protected]#0.0.0.1+0x1a4
[email protected]#0.0.0.1+0x590
vmkWorldFunc@vmkernel#nover+0x4f
CpuSched_StartWorld@vmkernel#nover+0x77 -
If a CIM provider fails for some reason, the small footprint CIM broker (SFCB) service restarts it, but the provider might not keep all existing indication subscriptions. As a result, you might not receive a CIM indication for a hardware-related error event.
-
If you try to rescan HBA and VMFS volumes or get a support bundle, ESXi hosts might detect a random few LUNs after booting and might lose connectivity. The issue is caused by a deadlock between the helper threads that serve to find SCSI paths and read capacity. As a result, the device discovery fails.
-
After a fresh installation or upgrade to ESXi 6.7 Update 3, due to incompatibility between the ESXi bootloader and the UEFI firmware on certain machines, ESXi hosts might fail to boot in UEFI mode. You see messages similar to the following appear in white and red on a black background:
Shutting down firmware services...
Page allocation error: Out of resources
Failed to shutdown the boot services.
Unrecoverable error
If you upgrade to ESXi 6.7 Update 3 from a previous release and 6.7 Update 3 has never booted successfully, the failure causes ESXi Hypervisor Recovery to automatically roll back to the installation that you upgraded from. However, the machine still fails to boot because Hypervisor Recovery is unable to roll back the bootloader. -
Stale parameters might cause incorrect handling of interrupt info from passthrough devices during a reset or reboot. As a result, virtual machines with PCI passthrough devices might fail to power on after a reset or reboot.
-
HPE servers with firmware version 1.30 might report the status of I/O module sensors as warnings. You might see similar messages:
[Device] I/O Module n ALOM_Link_Pn
or [Device] I/O module n NIC_Link_Pn
. -
If you configure a small limit for the
vmx.log.rotateSize
parameter, the VMX process might fail while powering on or during vSphere vMotion. If you use a value of less than 100 KB for the vmx.log.rotateSize
parameter, the chances that these issues occur increase. -
An ESXi host might fail with a purple diagnostic screen during shutdown due to a very rare race condition in which the host tries to access a memory region in the short window between when it is freed and when it is allocated to another task, if both MLDv1 and MLDv2 devices are present in the network and global IPv6 addresses are disabled. This issue was fixed in ESXi 6.7 Update 2. However, the fix revealed another issue during ESXi host shutdown, a
NULL
pointer dereference while IPv6 was disabled. -
You might see multiple IOMMU warnings from AMD servers in the vmkernel logs, similar to:
WARNING: AMDIOMMU: 222: completion wait bit is not set after a while!
AMD IOMMU hardware might be slow to process COMPLETION_WAIT
commands. As a result, invalidation requests that have not completed might be propagated, causing stale mappings in the IOMMU TLB during DMA transactions. -
After upgrading to ESXi 6.7 Update 3, you might see
Sensor -1
type hardware health alarms on ESXi hosts being triggered without an actual problem. If you have configured email notifications for hardware sensor state alarms in your vCenter Server system, this can result in excessive email alerts. These emails might cause storage issues in the vCenter Server database if the Stats, Events, Alarms, and Tasks (SEAT) directory goes above the 95% threshold. -
In some cases, ESXi hosts running ESXi 6.7 Update 3 might take a long time to complete tasks such as entering or exiting maintenance mode or connecting to a vCenter Server system. The delay in response might be up to 30 minutes. This happens when the CIM service tries to periodically refresh the storage and numeric sensor data under a common lock, causing hostd threads to wait for a response.
-
In some scenarios, SEsparse I/O threads might either pause or be blocked in a non-blocking thread context. As a result, the ESXi host goes into a panic state and fails with a purple diagnostic screen.
-
If an ESXi host cannot allocate memory to the filters in Netqueue for some reason, the host might fail with a purple diagnostic screen.
-
In certain cases, automatic asynchronous reclamation of free space in VMFS6 datastores, also called unmapping, might work at a higher space reclamation priority than configured. As a result, you might lose connectivity to datastores or see Eager Zero Thick (EZT) and Lazy Zero Thick (LZT) VMDKs removed from new volumes. For space reclamation priority that is less than 1 GBps, you also might see higher unmapping rates, depending on the fragmentation in the volume.
-
Some USB storage devices do not support Device Identification Inquiry requests and use the same value as the Serial Number Inquiry, or even the same serial descriptor. Multiple LUNs using such devices might not be able to access the bootbank partition of the ESXi host and default to the
/tmp
directory instead. As a result, ESXi host updates fail. -
Emulex drivers might write logs to
/var/log/EMU/mili/mili2d.log
and fill up the 40 MB/var
file system logs of RAM drives. A previous fix changed writes of Emulex drivers to the/scratch/log/
folder instead of to the/var/log/
folder to prevent the issue. However, when the/scratch/log/
folder is temporarily unavailable, the/var/log/EMU/mili/mili2d.log
is still periodically used for logging. -
Some Storage I/O Control logs might cause a log spew in the
storagerm.log
andsdrsinjector.log
files. This condition might lead to rapid log rotation.
-
Profile Name | ESXi-6.7.0-20191204001-no-tools |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | December 5, 2019 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2403863, 2379544, 2400052, 2396958, 2411015, 2430010, 2421035, 2388157, 2407141, 2387638, 2448171, 2423301, 2367001, 2419339, 2445066, 2432096, 2382664, 2432530, 2240272, 2382662, 2389011, 2400262, 2409342, 2411738, 2411907, 2412159, 2413837, 2380198, 2417593, 2418327, 2423588, 2430947, 2434152, 2349230, 2311565, 2412845, 2409136, 2340752, 2444667, 2398163, 2416514, 2412475, 2435882, 2386978, 2436227, 2411494, 2424231, 2438108, 2390792 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
Requesting an NMI from the hardware management console (BMC) or by pressing a physical NMI button must cause ESXi hosts to fail with a purple diagnostic screen and dump core. Instead, nothing happens and ESXi continues running.
-
Windows virtual machines might fail while migrating to a newer version of ESXi after a reboot initiated by the guest OS. You see a
MULTIPROCESSOR CONFIGURATION NOT SUPPORTED
error message on a blue screen. The fix prevents the x2APIC id
field of the guest CPUID from being modified during the migration. -
The SQLite database is updated to version 3.29.0.
-
The DNS short name of ESXi hosts cannot be resolved. You see an error similar to:
nslookup <shortname>
** server can't find <shortname>: SERVFAIL
Fully Qualified Domain Names (FQDN) resolve as expected. -
If you run a query to the class
VMware_KernelIPv4ProtocolEndpoint
by using the CIM client or CLI, the query does not return VMkernel NIC instances. The issue is seen when IP ranges are 128.x.x.x and above. -
In ESXi 6.7, notifications for a PDL exit are no longer supported, but the Pluggable Storage Architecture (PSA) might still send notifications to the VMFS layer for such events. This might cause ESXi hosts to fail with a purple diagnostic screen.
-
If you enable implicit Asymmetric Logical Unit Access (ALUA) for target devices, the
action_OnRetryErrors
method takes 40 tries to pass I/O requests before dropping a path. If a target is in the process of controller reset, and the time to switch path is greater than the time that the 40 retries take, the path is marked as dead. This can cause All-Paths-Down (APD) for the device. -
The
+Host.Config.Storage
permission is required to create a vSAN datastore by using vCenter Server. This permission also provides access to other datastores managed by the vCenter Server system, including the ability to unmount those datastores. -
A shared virtual disk on a vSAN datastore for use in multi-writer mode, such as for Oracle RAC, must be eager zeroed thick-provisioned. However, the vSphere Client does not allow you to provision the virtual disk as eager zeroed thick-provisioned.
-
You might see an unexpected failover or a blue diagnostic screen when both vSphere FT and a GART are enabled in a guest OS due to a race condition. vSphere FT scans the guest page table to find the dirty pages and generate a bitmap. To avoid a conflict, each vCPU scans a separate range of pages. However, if a GART is also enabled, it might map a guest physical page number (PPN) to an already mapped region. Also, multiple PPNs might be mapped to the same BusMem page number (BPN). This causes two vCPUs to write on the same QWORD in the bitmap when they are processing two PPNs in different regions.
-
TLS certificates are usually arranged with a signing chain of Root CA, Intermediate CA and then a leaf certificate, where the leaf certificate names a specific server. A vCenter Server system expects the root CA to contain only certificates marked as capable of signing other certificates but does not enforce this requirement. As a result, you can add non-CA leaf certificates to the Root CA list. While previous releases ignore non-CA leaf certificates, ESXi 6.7 Update 3 throws an error for an invalid CA chain and prevents vCenter Server from completing the Add Host workflow.
-
When reverting a virtual machine that has CBT enabled to a snapshot which is not a memory snapshot, and if you use the
QueryChangedDiskAreas()
API call, you might see an InvalidArgument
error. -
ESXi hosts might fail with a purple diagnostic screen during power off or deletion of multiple virtual machines on a vSphere Virtual Volumes datastore. You see a message indicating a PF Exception 14 on the screen. This issue might affect multiple hosts.
-
Certain HPE servers with AMD processors might fail with a purple diagnostic screen due to a physical CPU lockup. The issue occurs when HPE servers run HPE modules and management agents installed by using HPE VIBs. You can see messages similar to:
2019-05-22T09:04:01.510Z cpu21:65700)WARNING: Heartbeat: 794: PCPU 0 didn't have a heartbeat for 7 seconds; *may* be locked up.
2019-05-22T09:04:01.510Z cpu0:65575)ALERT: NMI: 689: NMI IPI: RIPOFF(base):RBP:CS [0x8a46f2(0x418017e00000):0x43041572e040:0x4010] (Src 0x1, CPU0) -
When you hot add a device under a PCI hot plug slot that has only
PCIe _HPX
record settings, some PCIe registers in the hot added device might not be properly set. This results in missing or incorrect PCIe AER register settings. For example, AER driver control or AER mask register might not be initialized. -
If the HTTPS proxy is configured in
/etc/sysconfig/proxy
, but the vCenter Server system does not have Internet access, the vSAN health service times out and cannot be accessed. -
ESXi670-201912001 implements the
UnbindVirtualVolumes ()
method in batch mode to unbind VMware vSphere Virtual Volumes. Previously, unbinding took one connection per vSphere Virtual Volume. This sometimes led to consuming all available connections to a vStorage APIs for Storage Awareness (VASA) provider and delayed response from or completely failed other API calls. -
One or more vSAN objects might become temporarily inaccessible for about 30 seconds during a network partition on a two host vSAN cluster. A rare race condition which might occur when a preferred host goes down causes the problem.
-
If a stretched cluster has no route for witness traffic, or the firewall settings block port 80 for witness traffic, the vSAN performance service cannot collect performance statistics from the ESXi hosts. When this happens, the performance service health check displays a warning:
Hosts Not Contributing Stats
. -
Adding an ESXi host to an AD domain by using a vSphere Authentication Proxy might fail intermittently with error code
41737
, which corresponds to an error messageLW_ERROR_KRB5KDC_ERR_C_PRINCIPAL_UNKNOWN
. -
In a vCenter Server system using AMD EPYC 7002 series processors, a virtual machine that has a PCI passthrough device might fail to power on. In the
vmkernel.log
, you see messages similar to:
4512 2019-08-06T06:09:55.058Z cpu24:1001397137)AMDIOMMU: 611: IOMMU 0000:20:00.2: Failed to allocate IRTE for IOAPIC ID 243 vector 0x3f
4513 2019-08-06T06:09:55.058Z cpu24:1001397137)WARNING: IOAPIC: 1238: IOAPIC Id 243: Failed to allocate IRTE for vector 0x3f
In the AMD IOMMU interrupt remapper, IOAPIC interrupts use an IRTE index equal to the vector number. In certain cases, a non-IOAPIC interrupt might take the index that an IOAPIC interrupt needs.
-
With ESXi670-201912001, you can disable the Maximum Transmission Unit (MTU) check in the vmxnet3 backend that verifies packet length does not exceed the vNIC MTU. The default behavior is to perform the MTU check. However, with vmxnet3, this check might cause an increase in dropped packets. For more information, see VMware knowledge base article 75213.
-
When Horizon linked clone desktops are refreshed or recomposed, a recompute operation follows and it might take a long time. The delay in the recompute operation might cause a misconfiguration of the desktop digest files. This results in all the recompute I/O ending up in the replica disk. I/O congestion in the replica causes longer digest recompute times.
-
During a host profile remediation, blocked parameters such as Enhanced vMotion Compatibility (EVC) parameters at
/etc/vmware/config
might be lost. This results in virtual machines powering on from older chipsets such as Haswell instead of newer chipsets such as Broadwell. -
A vSAN cluster that has multi-vMotion VMNICs configured might report a false alarm raised by vSAN health
vMotion: Basic (unicast) connectivity check
. -
In some vSAN environments, the cmmdsTimeMachine service running on ESXi hosts fails soon after starting. This problem occurs when excessive memory is consumed by the watchdog processes.
-
The libPNG library is updated to libpng-1.6.37.
-
The problem occurs in some environments that use custom SSL certificate chains. The vSAN performance service cannot collect vSAN performance metrics from one or more ESXi hosts. The health service issues a warning such as
Hosts Not Contributing Stats.
-
The Host name or IP Network uplink redundancy alarm reports the loss of uplink redundancy on a vSphere standard or a distributed switch for an ESXi host. The redundant physical NICs are either down or are not assigned to the switch. In some cases, when more than one VMNIC is down, the alarm resets to Green even when one of the VMNICs is up, while others might still be down.
-
In some cases, the allocated memory for a large number of virtual distributed switch ports exceeds the
dvsLargeHeap
parameter. This might cause the hostd service or running commands to fail. -
Virtual machines might fail with a VMX panic error message during a passthrough of an iLOK USB key device. You see an error similar to
PANIC: Unexpected signal: 11
in the virtual machine log file, vmware.log
. -
During the creation or mounting of a vSAN disk group, the ESXi host might fail with a purple diagnostic screen. This problem occurs due to a
NULL
pointer dereference. You can see similar information in the backtrace:
[email protected]#0.0.0.1+0x203
[email protected]#0.0.0.1+0x358
[email protected]#0.0.0.1+0x1a4
[email protected]#0.0.0.1+0x590
vmkWorldFunc@vmkernel#nover+0x4f
CpuSched_StartWorld@vmkernel#nover+0x77 -
If a CIM provider fails for some reason, the small footprint CIM broker (SFCB) service restarts it, but the provider might not keep all existing indication subscriptions. As a result, you might not receive a CIM indication for a hardware-related error event.
-
- ESXi hosts might detect only a random few LUNs after booting and might lose connectivity if you try to rescan HBAs and VMFS volumes or collect a support bundle. The issue is caused by a deadlock between the helper threads that discover SCSI paths and read device capacity. As a result, device discovery fails.
- After a fresh installation of, or an upgrade to, ESXi 6.7 Update 3, ESXi hosts might fail to boot in UEFI mode due to an incompatibility between the ESXi bootloader and the UEFI firmware on certain machines. You see messages similar to the following in white and red on a black background:
Shutting down firmware services...
Page allocation error: Out of resources
Failed to shutdown the boot services.
Unrecoverable error
If you upgrade to ESXi 6.7 Update 3 from a previous release and 6.7 Update 3 has never booted successfully, the failure causes ESXi Hypervisor Recovery to automatically roll back to the installation that you upgraded from. However, the machine still fails to boot, because Hypervisor Recovery cannot roll back the bootloader.
- Stale parameters might cause incorrect handling of interrupt info from passthrough devices during a reset or reboot. As a result, virtual machines with PCI passthrough devices might fail to power on after a reset or reboot.
- HPE servers with firmware version 1.30 might report the status of I/O module sensors as warnings. You might see messages similar to `[Device] I/O Module n ALOM_Link_Pn` or `[Device] I/O module n NIC_Link_Pn`.
- If you configure a small limit for the `vmx.log.rotateSize` parameter, the VMX process might fail while powering on or during vSphere vMotion. The chances that these issues occur increase if you use a value of less than 100 KB for the `vmx.log.rotateSize` parameter. A hedged configuration sketch follows this list.
- An ESXi host might fail with a purple diagnostic screen during shutdown due to a very rare race condition, in which the host tries to access a memory region in the brief window between the moment it is freed and the moment it is allocated to another task. The condition occurs when both MLDv1 and MLDv2 devices are present in the network and global IPv6 addresses are disabled. This issue was fixed in ESXi 6.7 Update 2. However, the fix revealed another issue during ESXi host shutdown: a `NULL` pointer dereference when IPv6 is disabled.
- You might see multiple IOMMU warnings from AMD servers in the vmkernel logs, similar to:
WARNING: AMDIOMMU: 222: completion wait bit is not set after a while!
AMD IOMMU hardware might be slow to process `COMPLETION_WAIT` commands. As a result, invalidation requests that have not completed might be propagated, causing stale mappings in the IOMMU TLB during DMA transactions.
- After upgrading to ESXi 6.7 Update 3, you might see `Sensor -1` type hardware health alarms triggered on ESXi hosts without an actual problem. If you have configured email notifications for hardware sensor state alarms in your vCenter Server system, this can result in excessive email alerts. These emails might cause storage issues in the vCenter Server database if the Stats, Events, Alarms, and Tasks (SEAT) directory grows above the 95% threshold.
- In some cases, ESXi hosts running ESXi 6.7 Update 3 might take a long time, up to 30 minutes, to complete tasks such as entering or exiting maintenance mode or connecting to a vCenter Server system. This happens when the CIM service periodically tries to refresh the storage and numeric sensor data under a common lock, causing hostd threads to wait for a response.
- In some scenarios, SEsparse I/O threads might either pause or be blocked in a non-blocking thread context. As a result, the ESXi host panics and fails with a purple diagnostic screen.
- If an ESXi host cannot allocate memory to the filters in Netqueue for some reason, the host might fail with a purple diagnostic screen.
- In certain cases, automatic asynchronous reclamation of free space in VMFS6 datastores, also called unmapping, might work at a higher space reclamation priority than configured. As a result, you might lose connectivity to datastores or see Eager Zero Thick (EZT) and Lazy Zero Thick (LZT) VMDKs removed from new volumes. For a space reclamation priority of less than 1 GBps, you might also see higher unmapping rates, depending on the fragmentation of the volume. A hedged sketch for checking the reclamation settings follows this list.
- Some USB storage devices do not support Device Identification Inquiry requests and return the same value as the Serial Number Inquiry, or even the same serial descriptor. If multiple LUNs use such devices, the ESXi host might not be able to access its bootbank partition and defaults to the `/tmp` directory instead. As a result, ESXi host updates fail.
- Emulex drivers might write logs to `/var/log/EMU/mili/mili2d.log` and fill up the 40 MB `/var` file system of the RAM drives. A previous fix changed the Emulex driver writes to the `/scratch/log/` folder instead of the `/var/log/` folder to prevent the issue. However, when the `/scratch/log/` folder is temporarily unavailable, `/var/log/EMU/mili/mili2d.log` is still periodically used for logging.
- Some Storage I/O Control logs might cause a log spew in the `storagerm.log` and `sdrsinjector.log` files. This condition might lead to rapid log rotation.
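For the `dvsLargeHeap` item above, the following is a minimal sketch, not taken from this article, of how you might inspect and raise the heap used for virtual distributed switch ports from the ESXi Shell. It assumes the advanced option `/Net/DVSLargeHeapMaxSize` is present on your build and that 100 MB is an appropriate value for your environment; verify both against your own host before applying.

# Check the current maximum size (in MB) of the DVS large heap (assumed option name)
esxcli system settings advanced list -o /Net/DVSLargeHeapMaxSize
# Raise the maximum heap size, for example to 100 MB, if many DVS ports are in use
esxcli system settings advanced set -o /Net/DVSLargeHeapMaxSize -i 100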
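For the `vmx.log.rotateSize` item above, this is a minimal sketch of per-virtual-machine logging settings as they would appear in the VM's .vmx configuration file. The values are illustrative assumptions, not recommendations from this article; the point is simply to keep the rotation size well above 100 KB. `vmx.log.keepOld` is shown only as a commonly paired option and is not mentioned in this release note.

vmx.log.rotateSize = "2048000"
vmx.log.keepOld = "10"

Changes to these parameters typically take effect the next time the virtual machine powers on.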
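For the VMFS6 space-reclamation item above, this is a minimal sketch, assuming the standard `esxcli storage vmfs reclaim config` namespace is available on your 6.7 host, of how you might review and lower the automatic reclamation priority for a datastore. The datastore label `Datastore01` is a placeholder.

# Show the current space reclamation settings for a VMFS6 datastore (label is a placeholder)
esxcli storage vmfs reclaim config get --volume-label=Datastore01
# Set the automatic unmap priority back to low, the default setting
esxcli storage vmfs reclaim config set --volume-label=Datastore01 --reclaim-priority=low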
Profile Name | ESXi-6.7.0-20191201001s-standard |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | December 5, 2019 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2377204 |
Related CVE numbers | CVE-2019-5544 |
- This patch updates the following issues:
- OpenSLP as used in ESXi has a heap overwrite issue. This issue might allow a malicious actor with network access to port 427 on an ESXi host to overwrite the heap of the OpenSLP service, resulting in remote code execution. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2019-5544 to this issue. For more information, see VMware Security Advisory VMSA-2019-0022. A hedged mitigation sketch appears after the VMware Tools list below.
- The following VMware Tools ISO images are bundled with ESXi670-201912001:
  - `windows.iso`: VMware Tools 11.0.1 ISO image for Windows Vista (SP2) or later
  - `linux.iso`: VMware Tools 10.3.21 ISO image for Linux OS with glibc 2.5 or later

  The following VMware Tools 10.3.10 ISO images are available for download:
  - `solaris.iso`: VMware Tools image for Solaris
  - `darwin.iso`: VMware Tools image for OSX

  Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
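If you cannot apply this patch immediately, VMware's advisory for CVE-2019-5544 describes disabling the SLP service as a workaround. The following is a hedged sketch of that approach from the ESXi Shell; the service name `slpd` and the firewall ruleset name `CIMSLP` are assumptions that you should confirm against VMSA-2019-0022 and its linked knowledge base article before use.

# Stop the SLP daemon on the host (the service can only be stopped when it is not in use)
/etc/init.d/slpd stop
# Block incoming SLP traffic by disabling the corresponding firewall ruleset (assumed name)
esxcli network firewall ruleset set --ruleset-id=CIMSLP --enabled=false
# Prevent the SLP daemon from starting again after a reboot
chkconfig slpd off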
Profile Name | ESXi-6.7.0-20191201001s-no-tools |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | December 5, 2019 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2377204 |
Related CVE numbers | CVE-2019-5544 |
- This patch updates the following issues:
- OpenSLP as used in ESXi has a heap overwrite issue. This issue might allow a malicious actor with network access to port 427 on an ESXi host to overwrite the heap of the OpenSLP service, resulting in remote code execution. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2019-5544 to this issue. For more information, see VMware Security Advisory VMSA-2019-0022.
- The following VMware Tools ISO images are bundled with ESXi670-201912001:
  - `windows.iso`: VMware Tools 11.0.1 ISO image for Windows Vista (SP2) or later
  - `linux.iso`: VMware Tools 10.3.21 ISO image for Linux OS with glibc 2.5 or later

  The following VMware Tools 10.3.10 ISO images are available for download:
  - `solaris.iso`: VMware Tools image for Solaris
  - `darwin.iso`: VMware Tools image for OSX

  Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi: