Release Date: DEC 19, 2019
Build Details
Download Filename: | ESXi650-201912002.zip |
Build: | 15256549 |
Download Size: | 482.4 MB |
md5sum: | b3d3d2c6c92648dc3d13b03aecda1faf |
sha1checksum: | fffdf762801e389046242b7b8a42e5d8adacb10e |
Host Reboot Required: | Yes |
Virtual Machine Migration or Shutdown Required: | Yes |
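You can verify the integrity of the downloaded ZIP file before importing it by comparing its checksums against the values above, for example on a Linux workstation (the file is assumed to be in the current directory):
md5sum ESXi650-201912002.zip    # output must match the md5sum value listed above
sha1sum ESXi650-201912002.zip   # output must match the sha1checksum value listed above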
Bulletins
Bulletin ID | Category | Severity |
ESXi650-201912401-BG | Bugfix | Critical |
ESXi650-201912402-BG | Bugfix | Important |
ESXi650-201912403-BG | Bugfix | Important |
ESXi650-201912404-BG | Bugfix | Important |
ESXi650-201912101-SG | Security | Important |
ESXi650-201912102-SG | Security | Important |
ESXi650-201912103-SG | Security | Important |
ESXi650-201912104-SG | Security | Important |
Rollup Bulletin
This rollup bulletin contains the latest VIBs with all the fixes since the initial release of ESXi 6.5.
Bulletin ID | Category | Severity |
ESXi650-201912002 | Bugfix | Critical |
Image Profiles
VMware patch and update releases contain general and critical image profiles. Applying the general release image profile delivers the new bug fixes.
Image Profile Name |
ESXi-6.5.0-20191204001-standard |
ESXi-6.5.0-20191204001-no-tools |
ESXi-6.5.0-20191201001s-standard |
ESXi-6.5.0-20191201001s-no-tools |
For more information about the individual bulletins, see the Download Patches page and the Resolved Issues section.
Patch Download and Installation
The typical way to apply patches to ESXi hosts is by using the VMware vSphere Update Manager. For details, see About Installing and Administering VMware vSphere Update Manager.
ESXi hosts can be updated by manually downloading the patch ZIP file from the VMware download page and installing the VIBs by using the esxcli software vib command. Additionally, the system can be updated by using the image profile and the esxcli software profile command.
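For example, after copying the offline bundle to a datastore on the host, either of the following commands applies the update; the datastore path is an assumption for illustration, and the image profile name is taken from the Image Profiles section above:
esxcli software vib update -d /vmfs/volumes/datastore1/ESXi650-201912002.zip
esxcli software profile update -d /vmfs/volumes/datastore1/ESXi650-201912002.zip -p ESXi-6.5.0-20191204001-standard
Because this rollup requires a host reboot, place the host in maintenance mode before you run the command and reboot the host afterward.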
For more information, see vSphere Command-Line Interface Concepts and Examples and vSphere Upgrade Guide.
Resolved Issues
The resolved issues are grouped as follows.
- ESXi650-201912401-BG
- ESXi650-201912402-BG
- ESXi650-201912403-BG
- ESXi650-201912404-BG
- ESXi650-201912101-SG
- ESXi650-201912102-SG
- ESXi650-201912103-SG
- ESXi650-201912104-SG
- ESXi-6.5.0-20191204001-standard
- ESXi-6.5.0-20191204001-no-tools
- ESXi-6.5.0-20191201001s-standard
- ESXi-6.5.0-20191201001s-no-tools
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2360407, 2403863, 2368281, 2399214, 2399723, 2337888, 2379543, 2389006, 2370145, 2410508, 2394253, 2425066, 2389128, 2444667, 2448025, 2437368, 2271176, 2423302, 2405783, 2393379, 2412737, 2357384, 2381549, 2443938, 2445628, 2170024, 2334089, 2343912, 2372211, 2388141, 2409981, 2412640, 2406038, 2367194, 2430956, 2367003, 2397504, 2432248, 2404787, 2407598, 2453909, 2430178, 2423695, 2359950, 2334733, 2449875, 2436649, 2349289, 2385716, 2340751, 2386848, 2439250, 2423391 |
Related CVE numbers | N/A |
This patch updates the esx-base, esx-tboot, vsan, and vsanhealth VIBs to resolve the following issues:
- PR 2360407: When some applications that use 3D software acceleration run on a virtual machine, the virtual machine might stop responding
If a virtual machine runs applications that use particular 3D state and shaders, and if software 3D acceleration is enabled, the virtual machine might stop responding.
This issue is resolved in this release.
- PR 2403863: Manually triggering a non-maskable interrupt (NMI) might not work on an ESXi host with an AMD EPYC 7002 series processor
Requesting an NMI from the hardware management console (BMC) or by pressing a physical NMI button must cause ESXi hosts to fail with a purple diagnostic screen and dump core. Instead, nothing happens and the ESXi host continues running.
This issue is resolved in this release.
- PR 2368281: An API call to configure the number of queues and worlds of a driver might cause an ESXi host to fail with a purple diagnostic screen
You can use the SCSIBindCompletionWorlds() method to set the number of queues and worlds of a driver. However, if you set the numQueues parameter to a value greater than 1 and the numWorlds parameter to a value equal to or less than 1, the API call might return without releasing the lock held. This results in a deadlock and the ESXi host might fail with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2399214: In a vCenter Server system using AMD EPYC 7002 series processors, a virtual machine that has a PCI passthrough device might fail to power on
In a vCenter Server system using AMD EPYC 7002 series processors, a virtual machine that has a PCI passthrough device might fail to power on. In the vmkernel.log, you see messages similar to:
4512 2019-08-06T06:09:55.058Z cpu24:1001397137)AMDIOMMU: 611: IOMMU 0000:20:00.2: Failed to allocate IRTE for IOAPIC ID 243 vector 0x3f
4513 2019-08-06T06:09:55.058Z cpu24:1001397137)WARNING: IOAPIC: 1238: IOAPIC Id 243: Failed to allocate IRTE for vector 0x3f
In the AMD IOMMU interrupt remapper, IOAPIC interrupts use an IRTE index equal to the vector number. In certain cases, a non-IOAPIC interrupt might take the index that an IOAPIC interrupt needs.
This issue is resolved in this release.
- PR 2399723: When a virtual machine client needs an extra memory reservation, ESXi hosts might fail with a purple diagnostic screen
When a virtual machine client needs an extra memory reservation, if the ESXi host has no available memory, the host might fail with a purple diagnostic screen. You see a similar backtrace:
@BlueScreen: #PF Exception 14 in world 57691007:vmm0:LGS-000 IP 0x41802601d987 addr 0x88
PTEs:0x2b12a6c027;0x21f0480027;0xbfffffffff001;
0x43935bf9bd48:[0x41802601d987]MemSchedReapSuperflousOverheadInt@vmkernel#nover+0x1b stack: 0x0
0x43935bf9bd98:[0x41802601daad]MemSchedReapSuperflousOverhead@vmkernel#nover+0x31 stack: 0x4306a812
0x43935bf9bdc8:[0x41802601dde4]MemSchedGroupAllocAllowed@vmkernel#nover+0x300 stack: 0x4300914eb120
0x43935bf9be08:[0x41802601e46e]MemSchedGroupSetAllocInt@vmkernel#nover+0x52 stack: 0x17f7c
0x43935bf9be58:[0x418026020a72]MemSchedManagedKernelGroupSetAllocInt@vmkernel#nover+0xae stack: 0x1
0x43935bf9beb8:[0x418026025e11]MemSched_ManagedKernelGroupSetAlloc@vmkernel#nover+0x7d stack: 0x1bf
0x43935bf9bee8:[0x41802602649b]MemSched_ManagedKernelGroupIncAllocMin@vmkernel#nover+0x3f stack: 0x
0x43935bf9bf28:[0x418025ef2eed]VmAnonUpdateReservedOvhd@vmkernel#nover+0x189 stack: 0x114cea4e1c
0x43935bf9bfb8:[0x418025eabc29]VMMVMKCall_Call@vmkernel#nover+0x139 stack: 0x418025eab778
This issue is resolved in this release.
- PR 2337888: If you do not configure a default route in an ESXi host, the SNMP traps are sent with a payload in which the ESXi SNMP agent IP address sequence is in reverse order
If you do not configure a default route in an ESXi host, the IP address of the ESXi SNMP agent might be in reverse order in the payload of the sent SNMP traps. For example, if the SNMP agent has an IP address of 172.16.0.10, the IP address in the payload is 10.0.16.172. As a result, the SNMP traps reach the target with an incorrect IP address of the ESXi SNMP agent in the payload.
This issue is resolved in this release.
- PR 2379543: You see MULTIPROCESSOR CONFIGURATION NOT SUPPORTED error message on a blue screen on Windows virtual machines while migrating to ESXi 6.5
Windows virtual machines might fail while migrating to ESXi 6.5 after a reboot initiated by the guest OS. You see a MULTIPROCESSOR CONFIGURATION NOT SUPPORTED error message on a blue screen. The fix prevents the x2APIC ID field of the guest CPUID from being modified during the migration.
This issue is resolved in this release.
- PR 2389006: Adding an ESXi host to an Active Directory domain by using a vSphere Authentication Proxy might fail intermittently
Adding an ESXi host to an AD domain by using a vSphere Authentication Proxy might fail intermittently with error code 41737, which corresponds to the error message LW_ERROR_KRB5KDC_ERR_C_PRINCIPAL_UNKNOWN, when a host is temporarily unable to find its created machine account in the AD environment. This fix adds retry logic for adding ESXi hosts to AD domains by using the Authentication Proxy.
This issue is resolved in this release.
- PR 2370145: Virtual machines failing with Unrecoverable Memory Allocation Failure error
When 3D software acceleration is enabled, the virtual machine executable (VMX) process might fail if the software renderer uses more memory than is reserved for it. You see an Unrecoverable Memory Allocation Failure error. Windows 10 OS virtual machines are prone to this issue.
This issue is resolved in this release.
- PR 2410508: An ESXi host might fail with a purple diagnostic screen during shutdown if both MLDv1 and MLDv2 devices are present in the network
An ESXi host might fail with a purple diagnostic screen during shutdown due to a very rare race condition in which the host tries to access a memory region in the narrow window between the time it is freed and the time it is allocated to another task. This issue occurs when both MLDv1 and MLDv2 devices are present in the network and global IPv6 addresses are disabled. The issue was fixed in ESXi650-201811002. However, that fix revealed another issue during ESXi host shutdown that resulted in a NULL pointer dereference while IPv6 was disabled.
This issue is resolved in this release.
- PR 2394253: You cannot set virtual machines to power cycle when the guest OS reboots
After a microcode update, it is sometimes necessary to re-enumerate the CPUID for the virtual machines on an ESXi server. By using the configuration parameter vmx.reboot.powerCycle = TRUE, you can schedule virtual machines for a power cycle when necessary.
This issue is resolved in this release.
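A minimal sketch of the corresponding entry in the virtual machine's advanced configuration (the .vmx file); the quoting shown follows the usual .vmx convention and is the only assumption beyond the parameter named above:
vmx.reboot.powerCycle = "TRUE"
With this setting, the virtual machine performs a full power cycle instead of a soft reset the next time the guest OS reboots.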
- PR 2425066: If you disable checksums for vSphere Replication, an ESXi host might fail with a purple diagnostic screen
Sometimes, if you disable checksums for vSphere Replication by setting the parameter HBR.ChecksumUseChecksumInfo = 0 in the ESXi Advanced settings, the ESXi host might fail with a purple diagnostic screen.
This issue is resolved in this release.
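If you have disabled the checksums in this way, you can review the current value before re-enabling it; the advanced option path below is an assumption derived from the parameter name above:
esxcli system settings advanced list -o /HBR/ChecksumUseChecksumInfo    # assumed path; a value of 0 corresponds to the disabled state described above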
- PR 2389128: You see an unexpected failover or a blue diagnostic screen when both vSphere Fault Tolerance (vSphere FT) and a graphics address remapping table (GART) are enabled in a guest OS
You might see an unexpected failover or a blue diagnostic screen when both vSphere FT and a GART are enabled in a guest OS due to a race condition. vSphere FT scans the guest page table to find the dirty pages and generate a bitmap. To avoid a conflict, each vCPU scans a separate range of pages. However, if a GART is also enabled, it might map a guest physical page number (PPN) to an already mapped region. Also, multiple PPNs might be mapped to the same BusMem page number (BPN). This causes two vCPUs to write on the same QWORD in the bitmap when they are processing two PPNs in different regions.
This issue is resolved in this release. To avoid a race condition, the fix forces the use of atomic operations for bitmap write operations and enables physmem support for GART.
- PR 2444667: If you configure a small limit for the vmx.log.rotateSize parameter, the VMX process might fail while powering on or during vSphere vMotion
If you configure a small limit for the vmx.log.rotateSize parameter, the VMX process might fail while powering on or during vSphere vMotion. The chances that these issues occur increase if you use a value of less than 100 KB for the vmx.log.rotateSize parameter.
This issue is resolved in this release.
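A hedged sketch of a safer value in the virtual machine's advanced configuration, assuming the parameter takes a size in bytes; the number is illustrative and simply stays well above the 100 KB threshold mentioned above:
vmx.log.rotateSize = "1024000"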
- PR 2448025: PCI Express (PCIe) Advanced Error Reporting (AER) register settings might be missing or are not correct for hot added PCIe devices
When you hot add a device under a PCI hot plug slot that has only PCIe _HPX record settings, some PCIe registers in the hot added device might not be properly set. This results in missing or incorrect PCIe AER register settings. For example, AER driver control or AER mask register might not be initialized.
This issue is resolved in this release.
- PR 2437368: Host Profile remediation might end with deleting a virtual network adapter from the VMware vSphere Distributed Switch
If you extract a host profile from an ESXi host with disabled IPv6 support, and the IPv4 default gateway is overridden while remediating that host profile, you might see a message that a virtual network adapter, such as DSwitch0, is to be created on the vSphere Distributed Switch (VDS). However, the adapter is actually removed from the VDS. This is due to a "::" value of the ipV6DefaultGateway parameter.
This issue is resolved in this release.
- PR 2271176: ESXi hosts might fail with a purple diagnostic screen during power off or deletion of multiple virtual machines on a vSphere Virtual Volumes datastore
ESXi hosts might fail with a purple diagnostic screen during power off or deletion of multiple virtual machines on a vSphere Virtual Volumes datastore. You see a message indicating a PF Exception 14 on the screen. This issue might affect multiple hosts.
This issue is resolved in this release.
- PR 2423302: After you revert a virtual machine to a snapshot, change block tracking (CBT) data might be corrupted
When reverting a virtual machine on which CBT is enabled to a snapshot that is not a memory snapshot, and if you use the QueryChangedDiskAreas() API call, you might see an InvalidArgument error.
This issue is resolved in this release. With ESXi650-201912002, the output of the QueryChangedDiskAreas() call changes to FileFault and adds the message Change tracking is not active on the disk <disk_path> to provide more details on the issue.
With the fix, you must power on or reconfigure the virtual machine to enable CBT after reverting it to a snapshot, and then take a snapshot to make a full backup.
To reconfigure the virtual machine:
- In the Managed Object Browser graphical interface, run ReconfigVM_Task by using a URL such as https://vc or host ip/mob/?moid=the virtual machine Managed Object ID&method=reconfigure.
- In the <spec> tag, add <ChangeTrackingEnabled>true</ChangeTrackingEnabled>.
- Click Invoke Method.
- PR 2405783: You see random health check alerts for ESXi hosts with connectivity issues
ESXi host queries by using API sometimes experience slow response, leading to improper health check warnings about the host connectivity.
This issue is resolved in this release.
- PR 2393379: Warnings for vSAN: Basic (unicast) connectivity check in large cluster
For large clusters with 33 or more ESXi hosts, the vSAN: Basic (unicast) connectivity check might report warnings due to the large number of hosts to ping.
This issue is resolved in this release.
- PR 2412737: If cluster takeover does not complete, virtual machines on a vSAN stretched cluster might stop responding
In a vSAN stretched cluster, a cluster primary host is elected from the preferred fault domain and if no hosts are available, the cluster primary host is elected from the secondary fault domain. However, if a node from the preferred fault domain joins the cluster, then that host is elected as primary by using a process called cluster takeover, which might get the cluster into a bad state. If, for some reason, the cluster takeover does not complete, the virtual machines on this cluster stop responding.
This issue is resolved in this release. For the fix to take effect, all hosts in a vSAN stretched cluster must be upgraded to this patch.
- PR 2357384: The vSAN health service fails to start after patching a vCenter Server Appliance that has a custom HTTP port
After patching a vCenter Server Appliance to 6.5 Update 2 or later, the vSAN health service might fail to start. This problem might occur because the following file does not correctly update custom port assignments for HTTP and HTTPS: /etc/vmware-vsan-health/config.conf.
This issue is resolved in this release.
- PR 2381549: A vSAN VASA provider might go offline, causing the VASA vendor provider updates to fail
The vsanvpd process on an ESXi host might stop after running QRadar Vulnerability Manager to scan ESXi hosts. This problem occurs when communication to the vsanvpd does not follow the expected protocol. The vsanvpd process denies the connection and exits. The result is that registration or unregistration of the VASA vendor provider information fails.
This issue is resolved in this release.
- PR 2443938: The SNMP service might fail if many IPv6 addresses are assigned by using SLAAC
If IPv6 is enabled on an ESXi host and many IPv6 addresses are assigned by using SLAAC, the SNMP service might fail. An error in the logic for looping over IPv6 addresses causes the issue. As a result, you cannot monitor the ESXi host status.
This issue is resolved in this release.
- PR 2445628: Virtual machines with PCI passthrough devices fail to power on after a reset or reboot
Stale parameters might cause incorrect handling of interrupt information from passthrough devices during a reset or reboot. As a result, virtual machines with PCI passthrough devices might fail to power on after a reset or reboot.
This issue is resolved in this release.
- PR 2170024: The small footprint CIM broker (SFCB) core dumps due to memory reallocation failure
SFCB might fail and core dump due to a memory reallocation failure. You see error messages similar to:
sfcb-vmware_raw[68518]: tool_mm_realloc_or_die: memory re-allocation failed(orig=1600000 new=3200000 msg=Cannot allocate memory, aborting.
This issue is resolved in this release.
- PR 2334089: The hostd service becomes unresponsive and the core dumps
During a save operation to a distributed portgroup, the hostd service might become temporarily unresponsive due to an internal error. You see a hostd-worker-zdump file in the /var/core/ directory.
This issue is resolved in this release.
- PR 2343912: You must manually add the claim rules to an ESXi host
You must manually add the claim rules to an ESXi host for Tegile Storage Arrays to set the I/O Operations Per Second to 1.
This issue is resolved in this release. The fix sets Storage Array Type Plug-in (SATP) in ESXi hosts version 6.5.x for Tegile Storage Arrays to:
esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V TEGILE -M "INTELLIFLASH" -c tpgs_on --psp="VMW_PSP_RR" -O "iops=1" -e "Tegile arrays with ALUA support"
esxcli storage nmp satp rule add -s VMW_SATP_DEFAULT_AA -V TEGILE -M "INTELLIFLASH" -c tpgs_off --psp="VMW_PSP_RR" -O "iops=1" -e "Tegile arrays without ALUA support"
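After applying the patch, or after adding the rules manually on earlier builds, you can confirm that the rules are present by listing the SATP rules and filtering for the vendor string:
esxcli storage nmp satp rule list | grep -i TEGILE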
- PR 2372211: If the allocated memory for a large number of virtual distributed switch ports exceeds the heap limit, the hostd service might fail
Sometimes, the allocated memory for a large number of virtual distributed switch ports exceeds the dvsLargeHeap parameter. This might cause the hostd service or running commands to fail.
This issue is fixed in this release. You can use the following configuration setting:
esxcfg-advcfg -s X /Net/DVSLargeHeapMBPERGB
to align the dvsLargeHeap parameter with the system physical memory. Here, X is an integer value from 1 to 20 that defines the heap limit relative to the physical memory of an ESXi host. For example, if X is 5 and the physical memory is 40 GB, the heap limit is set to 200 MB.
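A worked sketch of the example above; the value 5 is illustrative:
esxcfg-advcfg -s 5 /Net/DVSLargeHeapMBPERGB    # on a host with 40 GB of physical memory this yields a 200 MB heap limit
esxcfg-advcfg -g /Net/DVSLargeHeapMBPERGB      # verify the configured value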
To use this setting, you must reboot the ESXi host.
- PR 2388141: A vSAN permission granted at the root level in vCenter Server might impact other vSphere datastores
Administrators must have the following permission to create a vSAN datastore: +Host.Config.Storage.
This permission also enables the administrator to unmount any datastores that reside in vCenter Server.
This issue is resolved in this release.
- PR 2409981: Content-Based Read Cache (CBRC) digest recompute operations might take long for the Horizon linked clone desktops
When Horizon linked clone desktops are refreshed or recomposed, a recompute operation follows and it might take long. The delay in the recompute operation might cause a misconfiguration of the desktop digest files. This results in all the recompute I/O ending up in the replica disk. I/O congestion in the replica causes longer digest recompute times.
This issue is resolved in this release.
- PR 2412640: If you remove and add a SR-IOV network adapter during the same reconfigure operation, the adapter displays as an unknown PCIe device
In some cases, when you remove and add a SR-IOV network adapter during the same reconfigure operation, the spec is generated in such a way that the properties of the new adapter are overwritten, and it displays as an unknown PCIe device.
This issue is resolved in this release.
- PR 2406038: Provisioning of instant clone pools by using VMware Horizon View sometimes fails with the error message Module 'CheckpointLate' power on failed
Due to a rare race condition, the provisioning of virtual machines by using View might fail with an error Module 'CheckpointLate' power on failed.
This issue is resolved in this release.
- PR 2367194: When updating a host profile from a reference host, per-host based kernel parameters and the Challenge-Handshake Authentication Protocol (CHAP) secrets are lost
When updating a host profile from a reference host, since the default policy option for kernel module parameters is a fixed value, all per-host kernel parameters also become fixed values and cannot be re-used. Also, CHAP secrets cannot be extracted from the host and are effectively lost.
This issue is resolved in this release. To avoid re-entering the per-host based kernel module parameters and CHAP secrets on each update of an ESXi host settings, ESXi650-201912002 adds a new feature to preserve this data.
- PR 2430956: Virtual machines might fail during a passthrough of an iLOK USB key device
Virtual machines might fail with a VMX panic error message during a passthrough of an iLOK USB key device. You see an error similar to PANIC: Unexpected signal: 11 in the virtual machine log file, vmware.log.
This issue is resolved in this release.
- PR 2367003: You must manually add the claim rules to an ESXi host for Dell EMC Trident Storage Arrays
You must manually add the claim rules to an ESXi host for Dell EMC Trident Storage Arrays to set the I/O Operations Per Second to 1.
This issue is resolved in this release. This fix sets Storage Array Type Plugin (SATP) to VMW_SATP_ALUA, Path Selection Policy (PSP) to VMW_PSP_RR, and Claim Options to tpgs_on as default for Trident Storage Arrays.
- PR 2397504: Notifications for Permanent Device Loss (PDL) events might cause ESXi hosts to fail with a purple diagnostic screen
In ESXi 6.5, notifications for a PDL exit are no longer supported, but the Pluggable Storage Architecture (PSA) might still send notifications to the VMFS layer for such events. This might cause ESXi hosts to fail with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2432248: The small footprint CIM broker (SFCB) fails while fetching the class information of third-party provider classes
SFCB fails due to a segmentation fault while querying with the getClass command third-party provider classes, such as OSLS_InstCreation, OSLS_InstDeletion, and OSLS_InstModification, under the root/emc/host namespace.
This issue is resolved in this release.
- Update to OpenSSL
The OpenSSL package is updated to version openssl-1.0.2t.
- PR 2407598: You cannot use a virtual function (VF) spawned on a bus different than the parent physical function (PF)
If a parent PF is at SBDF, such as C1:00.0, and spawns VFs starting at, for example, C2:00.0, you cannot use that VF.
This issue is resolved in this release.
- PR 2453909: Log spam in the syslog.log starts as soon as the CIM Agent is enabled
As soon as the CIM Agent is enabled, you might start receiving multiple logs per minute in the
syslog.log
of ESXi hosts.
Logs are similar to:
2019-09-16T12:59:55Z sfcb-vmware_base[2100102]: VMwareHypervisorStorageExtent::fillVMwareHypervisorStorageExtentInstance - durable name length is too large: 256
This issue is resolved in this release.
- PR 2430178: An ESXi host with 3D virtual machines might fail with a blue screen error due to xmap allocation failure
During operations, such as migrating 3D virtual machines by using vSphere vMotion, the memory allocation process might return a NULL value. As a result, the VMkernel components might not get access to the memory and cause failure of the ESXi host.
This issue is resolved in this release.
- PR 2423695: Running the esxcfg-info command returns an output with errors hidden in it
When running the
esxcfg-info
command, you might see output with similar errors hidden in it:
ResourceGroup: Skipping CPU times for: Vcpu Id 3295688 Times due to error: max # of processors: 4 < 3295688
ResourceGroup: Skipping VCPU stats for: Vcpu Id 3295688 Times due to error: max # of processors: 4 < 3295688
This issue is resolved in this release.
- PR 2359950: A CTK file might be unexpectedly removed after backing up independent nonpersistent VMDK files by using HotAdd Transport Mode
If you hot remove the disk from the proxy virtual machine, the CTK file of an independent nonpersistent VMDK file might be deleted. The issue occurs regardless of whether the proxy virtual machine has CBT enabled. If you delete the last snapshot after a backup, the CTK file is restored and the issue does not affect backup workflows.
This issue is resolved in this release.
- PR 2334733: An ESXi host might fail with a purple diagnostic screen and an error similar to: PF Exception 14 in world XXX:Unmap Helper IP XXX
If unmapped data structures are accessed while a connection to a VMFS volume closes, an ESXi host might fail with a purple diagnostic screen. You see an error similar to PF Exception 14 in world XXX:Unmap Helper IP XXX in the backtrace.
This issue is resolved in this release.
- PR 2449875: Connection failure to vCenter Server might cause a vSAN cluster partition
When an ESXi host in a vSAN cluster loses connection to a vCenter Server system, the host cannot retrieve the latest vSAN vmodl version, and defaults to a standard vim. This might cause the ESXi host to clear its list of unicast addresses, and lead to a cluster partition.
This issue is resolved in this release.
- PR 2436649: You might not see software-based iSCSI adapters in the vSphere Client or the vSphere Web Client
Due to a memory leak in the iSCSI module, software-based iSCSI adapters might not display in the vSphere Client or the vSphere Web Client during the target discovery process.
This issue is resolved in this release.
- PR 2349289: ESXi hosts might fail during cluster rejoin
During a cluster rejoin event, an ESXi host running ESXi650-201811002 might fail with a purple diagnostic screen. The failure happens when the CMMDS module improperly attempts to free the host CMMDS utility.
This issue is resolved in this release.
- PR 2385716: Virtual machines with Changed Block Tracking (CBT) enabled might report long waiting times during snapshot creation
Virtual machines with CBT enabled might report long waiting times during snapshot creation due to the 8 KB buffer used for CBT file copying. With this fix, the buffer size is increased to 1 MB to overcome multiple reads and writes of a large CBT file copy, and reduce waiting time.
This issue is resolved in this release.
- PR 2340751: HPE servers with firmware version 1.30 might trigger hardware status warnings for I/O module sensors
HPE servers with firmware version 1.30 might report the status of I/O module sensors as warnings. You might see similar messages: [Device] I/O Module n ALOM_Link_Pn or [Device] I/O module n NIC_Link_Pn.
This issue is resolved in this release. For more information, see VMware knowledge base article 53134.
- PR 2386848: An ESXi host might lose connectivity due to network issues caused by a race condition in the transmission queues
A race condition causes transmission queues of physical NICs to become inactive whenever a virtual switch uplink port is disconnected from one virtual switch or distributed virtual switch and connected to another virtual switch or distributed virtual switch. As a result, network connectivity between virtual machines might fail.
This issue is resolved in this release.
- PR 2439250: VMFS6 volumes greater than 1 PB might cause ESXi hosts to fail with a purple diagnostic screen
By default, the configuration limit for the memory pool of VMFS6 volumes on an ESXi host is 1 PB, but you can extend the pool to support up to 4 PB. However, a total datastore size greater than 1 PB might cause an ESXi host to fail with a purple diagnostic screen due to a memory outage. This fix allows changes in the configuration for up to 16 PB. You must set a limit for the LFBCSlabSizeMaxMB setting, scaling it to the total size of all datastores on an ESXi host. The default is 8, which covers a total capacity of 1PB, but you can increase the limit to up to 128, or 16 PB. You must reboot the ESXi host to enable the change.
This issue is resolved in this release.
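A hedged sketch of raising the limit to its maximum; the advanced option path /VMFS3/LFBCSlabSizeMaxMB is an assumption based on the setting name above:
esxcli system settings advanced set -o /VMFS3/LFBCSlabSizeMaxMB -i 128    # assumed option path; reboot the host for the change to take effect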
- PR 2423391: Under rare conditions an ESXi host might fail with a purple diagnostic screen when collecting CPU performance counters for a virtual machine
Due to a race between the power off operation of a virtual machine and a running query to collect CPU performance counters, a vCenter Server system might dereference a NULL pointer. This might result in ESXi hosts failing with a purple diagnostic screen.
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included | VMware_bootbank_esx-xserver_6.5.0-3.120.15256549 |
PRs Fixed | 2364560 |
Related CVE numbers | N/A |
This patch updates the esx-xserver VIB to resolve the following issue:
- PR 2364560: Xorg might fail to start on systems with multiple GPUs
More than one Xorg script might run in parallel and interfere with each other. As a result, the Xorg script might fail to start on systems with multiple GPUs.
This issue is resolved in this release. The fix ensures that a single Xorg script runs during start operations.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included | VMW_bootbank_vmkusb_0.1-1vmw.650.3.120.15256549 |
PRs Fixed | N/A |
Related CVE numbers | N/A |
This patch updates the vmkusb VIB.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included | VMware_bootbank_native-misc-drivers_6.5.0-3.120.15256549 |
PRs Fixed | N/A |
Related CVE numbers | N/A |
This patch updates the native-misc-drivers VIB.
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the esx-base, esx-tboot, vsan, and vsanhealth VIBs.
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included | VMware_bootbank_cpu-microcode_6.5.0-3.116.15256468 |
PRs Fixed | 2363675 |
CVE numbers | N/A |
This patch updates the cpu-microcode VIB.
- The cpu-microcode VIB includes the following Intel microcode:
Code Name | FMS | Plt ID | MCU Rev | MCU Date | Brand Names |
Nehalem EP | 0x106a5 | 0x03 | 0x0000001d | 5/11/2018 | Intel Xeon 35xx Series; Intel Xeon 55xx Series |
Lynnfield | 0x106e5 | 0x13 | 0x0000000a | 5/8/2018 | Intel Xeon 34xx Lynnfield Series |
Clarkdale | 0x20652 | 0x12 | 0x00000011 | 5/8/2018 | Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series |
Arrandale | 0x20655 | 0x92 | 0x00000007 | 4/23/2018 | Intel Core i7-620LE Processor |
Sandy Bridge DT | 0x206a7 | 0x12 | 0x0000002f | 2/17/2019 | Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series |
Westmere EP | 0x206c2 | 0x03 | 0x0000001f | 5/8/2018 | Intel Xeon 56xx Series; Intel Xeon 36xx Series |
Sandy Bridge EP | 0x206d7 | 0x6d | 0x00000718 | 5/21/2019 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series |
Nehalem EX | 0x206e6 | 0x04 | 0x0000000d | 5/15/2018 | Intel Xeon 65xx Series; Intel Xeon 75xx Series |
Westmere EX | 0x206f2 | 0x05 | 0x0000003b | 5/16/2018 | Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series |
Ivy Bridge DT | 0x306a9 | 0x12 | 0x00000021 | 2/13/2019 | Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C |
Haswell DT | 0x306c3 | 0x32 | 0x00000027 | 2/26/2019 | Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series |
Ivy Bridge EP | 0x306e4 | 0xed | 0x0000042e | 3/14/2019 | Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series |
Ivy Bridge EX | 0x306e7 | 0xed | 0x00000715 | 3/14/2019 | Intel Xeon E7-8800/4800/2800-v2 Series |
Haswell EP | 0x306f2 | 0x6f | 0x00000043 | 3/1/2019 | Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series |
Haswell EX | 0x306f4 | 0x80 | 0x00000016 | 6/17/2019 | Intel Xeon E7-8800/4800-v3 Series |
Broadwell H | 0x40671 | 0x22 | 0x00000021 | 6/13/2019 | Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series |
Avoton | 0x406d8 | 0x01 | 0x0000012d | 9/16/2019 | Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series |
Broadwell EP/EX | 0x406f1 | 0xef | 0x0b000038 | 6/18/2019 | Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series |
Skylake SP | 0x50654 | 0xb7 | 0x02000065 | 9/5/2019 | Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series |
Cascade Lake B-0 | 0x50656 | 0xbf | 0x0400002c | 9/5/2019 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200 |
Cascade Lake | 0x50657 | 0xbf | 0x0500002c | 9/5/2019 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200 |
Broadwell DE | 0x50662 | 0x10 | 0x0000001c | 6/17/2019 | Intel Xeon D-1500 Series |
Broadwell DE | 0x50663 | 0x10 | 0x07000019 | 6/17/2019 | Intel Xeon D-1500 Series |
Broadwell DE | 0x50664 | 0x10 | 0x0f000017 | 6/17/2019 | Intel Xeon D-1500 Series |
Broadwell NS | 0x50665 | 0x10 | 0x0e00000f | 6/17/2019 | Intel Xeon D-1600 Series |
Skylake H/S | 0x506e3 | 0x36 | 0x000000cc | 4/1/2019 | Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series |
Denverton | 0x506f1 | 0x01 | 0x0000002e | 3/21/2019 | Intel Atom C3000 Series |
Kaby Lake H/S/X | 0x906e9 | 0x2a | 0x000000c6 | 8/14/2019 | Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series |
Coffee Lake H/S | 0x906ea | 0x22 | 0x000000c6 | 8/14/2019 | Intel Xeon E-2100 Series |
Coffee Lake H/S | 0x906eb | 0x02 | 0x000000c6 | 8/14/2019 | Intel Xeon E-2100 Series |
Coffee Lake H/S | 0x906ec | 0x22 | 0x000000c6 | 8/14/2019 | Intel Xeon E-2100 Series |
Coffee Lake Refresh | 0x906ed | 0x22 | 0x000000c6 | 8/14/2019 | Intel Xeon E-2200 Series |
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included | VMware_locker_tools-light_6.5.0-3.116.15256468 |
PRs Fixed | 2377205 |
CVE numbers | N/A |
This patch updates the tools-light VIB.
The following VMware Tools ISO images are bundled with ESXi650-201912002:
- windows.iso: VMware Tools 11.0.1 ISO image for Windows Vista (SP2) or later
- linux.iso: VMware Tools 10.3.21 ISO image for Linux OS with glibc 2.5 or later
The following VMware Tools 10.3.10 ISO images are available for download:
- solaris.iso: VMware Tools image for Solaris
- darwin.iso: VMware Tools image for OSX
Follow the procedures listed in the following documents to download VMware Tools for platforms that are not bundled with ESXi:
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included | VMware_bootbank_esx-ui_1.33.5-15102916 |
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the esx-ui VIB.
Profile Name | ESXi-6.5.0-20191204001-standard |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | December 19, 2019 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2360407, 2403863, 2368281, 2399214, 2399723, 2337888, 2379543, 2389006, 2370145, 2410508, 2394253, 2425066, 2389128, 2444667, 2448025, 2437368, 2271176, 2423302, 2405783, 2393379, 2412737, 2357384, 2381549, 2443938, 2445628, 2170024, 2334089, 2343912, 2372211, 2388141, 2409981, 2412640, 2406038, 2367194, 2430956, 2367003, 2397504, 2432248, 2404787, 2407598, 2453909, 2430178, 2423695, 2359950, 2334733, 2449875, 2436649, 2349289, 2385716, 2340751, 2386848, 2439250, 2364560 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
If a virtual machine runs applications that use particular 3D state and shaders, and if software 3D acceleration is enabled, the virtual machine might stop responding.
-
Requesting an NMI from the hardware management console (BMC) or by pressing a physical NMI button must cause ESXi hosts to fail with a purple diagnostic screen and dump core. Instead, nothing happens and the ESXi host continues running.
-
You can use the SCSIBindCompletionWorlds() method to set the number of queues and worlds of a driver. However, if you set the numQueues parameter to a value greater than 1 and the numWorlds parameter to a value equal to or less than 1, the API call might return without releasing the lock held. This results in a deadlock and the ESXi host might fail with a purple diagnostic screen.
-
In a vCenter Server system using AMD EPYC 7002 series processors, a virtual machine that has a PCI passthrough device might fail to power on. In the vmkernel.log, you see messages similar to:
4512 2019-08-06T06:09:55.058Z cpu24:1001397137)AMDIOMMU: 611: IOMMU 0000:20:00.2: Failed to allocate IRTE for IOAPIC ID 243 vector 0x3f
4513 2019-08-06T06:09:55.058Z cpu24:1001397137)WARNING: IOAPIC: 1238: IOAPIC Id 243: Failed to allocate IRTE for vector 0x3f
In the AMD IOMMU interrupt remapper, IOAPIC interrupts use an IRTE index equal to the vector number. In certain cases, a non-IOAPIC interrupt might take the index that an IOAPIC interrupt needs.
-
When a virtual machine client needs an extra memory reservation, if the ESXi host has no available memory, the host might fail with a purple diagnostic screen. You see a similar backtrace:
@BlueScreen: #PF Exception 14 in world 57691007:vmm0:LGS-000 IP 0x41802601d987 addr 0x88
PTEs:0x2b12a6c027;0x21f0480027;0xbfffffffff001;
0x43935bf9bd48:[0x41802601d987]MemSchedReapSuperflousOverheadInt@vmkernel#nover+0x1b stack: 0x0
0x43935bf9bd98:[0x41802601daad]MemSchedReapSuperflousOverhead@vmkernel#nover+0x31 stack: 0x4306a812
0x43935bf9bdc8:[0x41802601dde4]MemSchedGroupAllocAllowed@vmkernel#nover+0x300 stack: 0x4300914eb120
0x43935bf9be08:[0x41802601e46e]MemSchedGroupSetAllocInt@vmkernel#nover+0x52 stack: 0x17f7c
0x43935bf9be58:[0x418026020a72]MemSchedManagedKernelGroupSetAllocInt@vmkernel#nover+0xae stack: 0x1
0x43935bf9beb8:[0x418026025e11]MemSched_ManagedKernelGroupSetAlloc@vmkernel#nover+0x7d stack: 0x1bf
0x43935bf9bee8:[0x41802602649b]MemSched_ManagedKernelGroupIncAllocMin@vmkernel#nover+0x3f stack: 0x
0x43935bf9bf28:[0x418025ef2eed]VmAnonUpdateReservedOvhd@vmkernel#nover+0x189 stack: 0x114cea4e1c
0x43935bf9bfb8:[0x418025eabc29]VMMVMKCall_Call@vmkernel#nover+0x139 stack: 0x418025eab778 -
If you do not configure a default route in an ESXi host, the IP address of the ESXi SNMP agent might be in reverse order in the payload of the sent SNMP traps. For example, if the SNMP agent has an IP address of 172.16.0.10, the IP address in the payload is 10.0.16.172. As a result, the SNMP traps reach the target with an incorrect IP address of the ESXi SNMP agent in the payload.
-
Windows virtual machines might fail while migrating to ESXi 6.5 after a reboot initiated by the guest OS. You see a MULTIPROCESSOR CONFIGURATION NOT SUPPORTED error message on a blue screen. The fix prevents the x2APIC ID field of the guest CPUID from being modified during the migration.
-
Adding an ESXi host to an AD domain by using a vSphere Authentication Proxy might fail intermittently with error code 41737, which corresponds to the error message LW_ERROR_KRB5KDC_ERR_C_PRINCIPAL_UNKNOWN, when a host is temporarily unable to find its created machine account in the AD environment. This fix adds retry logic for adding ESXi hosts to AD domains by using the Authentication Proxy.
-
When 3D software acceleration is enabled, the virtual machine executable (VMX) process might fail if the software renderer uses more memory than is reserved for it. You see an Unrecoverable Memory Allocation Failure error. Windows 10 OS virtual machines are prone to this issue.
-
An ESXi host might fail with a purple diagnostic screen during shutdown due to a very rare race condition in which the host tries to access a memory region in the narrow window between the time it is freed and the time it is allocated to another task. This issue occurs when both MLDv1 and MLDv2 devices are present in the network and global IPv6 addresses are disabled. The issue was fixed in ESXi650-201811002. However, that fix revealed another issue during ESXi host shutdown that resulted in a NULL pointer dereference while IPv6 was disabled.
-
After a microcode update, it is sometimes necessary to re-enumerate the CPUID for the virtual machines on an ESXi server. By using the configuration parameter vmx.reboot.powerCycle = TRUE, you can schedule virtual machines for a power cycle when necessary.
-
Sometimes, if you disable checksums for vSphere Replication by setting the parameter HBR.ChecksumUseChecksumInfo = 0 in the ESXi Advanced settings, the ESXi host might fail with a purple diagnostic screen.
-
You might see an unexpected failover or a blue diagnostic screen when both vSphere FT and a GART are enabled in a guest OS due to a race condition. vSphere FT scans the guest page table to find the dirty pages and generate a bitmap. To avoid a conflict, each vCPU scans a separate range of pages. However, if a GART is also enabled, it might map a guest physical page number (PPN) to an already mapped region. Also, multiple PPNs might be mapped to the same BusMem page number (BPN). This causes two vCPUs to write on the same QWORD in the bitmap when they are processing two PPNs in different regions.
-
If you configure a small limit for the vmx.log.rotateSize parameter, the VMX process might fail while powering on or during vSphere vMotion. The chances that these issues occur increase if you use a value of less than 100 KB for the vmx.log.rotateSize parameter.
-
When you hot add a device under a PCI hot plug slot that has only PCIe _HPX record settings, some PCIe registers in the hot added device might not be properly set. This results in missing or incorrect PCIe AER register settings. For example, AER driver control or AER mask register might not be initialized.
-
If you extract a host profile from an ESXi host with disabled IPv6 support, and the IPv4 default gateway is overridden while remediating that host profile, you might see a message that a virtual network adapter, such as DSwitch0, is to be created on the vSphere Distributed Switch (VDS). However, the adapter is actually removed from the VDS. This is due to a "::" value of the ipV6DefaultGateway parameter.
-
ESXi hosts might fail with a purple diagnostic screen during power off or deletion of multiple virtual machines on a vSphere Virtual Volumes datastore. You see a message indicating a PF Exception 14 on the screen. This issue might affect multiple hosts.
-
When reverting a virtual machine on which CBT is enabled to a snapshot that is not a memory snapshot, and if you use the QueryChangedDiskAreas() API call, you might see an InvalidArgument error.
-
ESXi host queries by using API sometimes experience slow response, leading to improper health check warnings about the host connectivity.
-
For large clusters with 33 or more ESXi hosts, the vSAN: Basic (unicast) connectivity check might report warnings due to the large number of hosts to ping.
-
In a vSAN stretched cluster, a cluster primary host is elected from the preferred fault domain and if no hosts are available, the cluster primary host is elected from the secondary fault domain. However, if a node from the preferred fault domain joins the cluster, then that host is elected as primary by using a process called cluster takeover, which might get the cluster into a bad state. If, for some reason, the cluster takeover does not complete, the virtual machines on this cluster stop responding.
-
After patching a vCenter Server Appliance to 6.5 Update 2 or later, the vSAN health service might fail to start. This problem might occur because the following file does not correctly update custom port assignments for HTTP and HTTPS: /etc/vmware-vsan-health/config.conf.
-
The vsanvpd process on an ESXi host might stop after running QRadar Vulnerability Manager to scan ESXi hosts. This problem occurs when communication to the vsanvpd does not follow the expected protocol. The vsanvpd process denies the connection and exits. The result is that registration or unregistration of the VASA vendor provider information fails.
-
If IPv6 is enabled on an ESXi host and many IPv6 addresses are assigned by using SLAAC, the SNMP service might fail. An error in the logic for looping over IPv6 addresses causes the issue. As a result, you cannot monitor the ESXi host status.
-
Stale parameters might cause incorrect handling of interrupt information from passthrough devices during a reset or reboot. As a result, virtual machines with PCI passthrough devices might fail to power on after a reset or reboot.
-
SFCB might fail and core dump due to a memory reallocation failure. You see error messages similar to:
sfcb-vmware_raw[68518]: tool_mm_realloc_or_die: memory re-allocation failed(orig=1600000 new=3200000 msg=Cannot allocate memory, aborting.
-
During a save operation to a distributed portgroup, the hostd service might become temporarily unresponsive due to an internal error. You see a hostd-worker-zdump file in the /var/core/ directory.
-
You must manually add the claim rules to an ESXi host for Tegile Storage Arrays to set the I/O Operations Per Second to 1.
-
Sometimes, the allocated memory for a large number of virtual distributed switch ports exceeds the
dvsLargeHeap
parameter. This might cause the hostd service or running commands to fail. -
Administrators must have the following permission to create a vSAN datastore: +Host.Config.Storage.
This permission also enables the administrator to unmount any datastores that reside in vCenter Server.
-
When Horizon linked clone desktops are refreshed or recomposed, a recompute operation follows and it might take long. The delay in the recompute operation might cause a misconfiguration of the desktop digest files. This results in all the recompute I/O ending up in the replica disk. I/O congestion in the replica causes longer digest recompute times.
-
In some cases, when you remove and add a SR-IOV network adapter during the same reconfigure operation, the spec is generated in such a way that the properties of the new adapter are overwritten, and it displays as an unknown PCIe device.
-
Due to a rare race condition, the provisioning of virtual machines by using View might fail with an error Module 'CheckpointLate' power on failed.
-
When updating a host profile from a reference host, since the default policy option for kernel module parameters is a fixed value, all per-host kernel parameters also become fixed values and cannot be re-used. Also, CHAP secrets cannot be extracted from the host and are effectively lost.
-
Virtual machines might fail with a VMX panic error message during a passthrough of an iLOK USB key device. You see an error similar to PANIC: Unexpected signal: 11 in the virtual machine log file, vmware.log.
-
You must manually add the claim rules to an ESXi host for Dell EMC Trident Storage Arrays to set the I/O Operations Per Second to 1.
-
In ESXi 6.5, notifications for a PDL exit are no longer supported, but the Pluggable Storage Architecture (PSA) might still send notifications to the VMFS layer for such events. This might cause ESXi hosts to fail with a purple diagnostic screen.
-
SFCB fails due to a segmentation fault while querying with the getClass command third-party provider classes, such as OSLS_InstCreation, OSLS_InstDeletion, and OSLS_InstModification, under the root/emc/host namespace.
-
The OpenSSL package is updated to version openssl-1.0.2t.
-
If a parent PF is at SBDF, such as C1:00.0, and spawns VFs starting at, for example, C2:00.0, you cannot use that VF.
-
As soon as the CIM Agent is enabled, you might start receiving multiple logs per minute in the
syslog.log
of ESXi hosts.
Logs are similar to:
2019-09-16T12:59:55Z sfcb-vmware_base[2100102]: VMwareHypervisorStorageExtent::fillVMwareHypervisorStorageExtentInstance - durable name length is too large: 256
-
During operations, such as migrating 3D virtual machines by using vSphere vMotion, the memory allocation process might return a NULL value. As a result, the VMkernel components might not get access to the memory and cause failure of the ESXi host.
-
When running the
esxcfg-info
command, you might see output with similar errors hidden in it:
ResourceGroup: Skipping CPU times for: Vcpu Id 3295688 Times due to error: max # of processors: 4 < 3295688
ResourceGroup: Skipping VCPU stats for: Vcpu Id 3295688 Times due to error: max # of processors: 4 < 3295688
-
If you hot remove the disk from the proxy virtual machine, the CTK file of an independent nonpersistent VMDK file might be deleted. The issue occurs regardless of whether the proxy virtual machine has CBT enabled. If you delete the last snapshot after a backup, the CTK file is restored and the issue does not affect backup workflows.
-
If unmapped data structures are accessed while a connection to a VMFS volume closes, an ESXi host might fail with a purple diagnostic screen. You see an error similar to PF Exception 14 in world XXX:Unmap Helper IP XXX in the backtrace.
-
When an ESXi host in a vSAN cluster loses connection to a vCenter Server system, the host cannot retrieve the latest vSAN vmodl version, and defaults to a standard vim. This might cause the ESXi host to clear its list of unicast addresses, and lead to a cluster partition.
-
Due to a memory leak in the iSCSI module, software-based iSCSI adapters might not display in the vSphere Client or the vSphere Web Client during the target discovery process.
-
During a cluster rejoin event, an ESXi host running ESXi650-201811002 might fail with a purple diagnostic screen. The failure happens when the CMMDS module improperly attempts to free the host CMMDS utility.
-
Virtual machines with CBT enabled might report long waiting times during snapshot creation due to the 8 KB buffer used for CBT file copying. With this fix, the buffer size is increased to 1 MB to overcome multiple reads and writes of a large CBT file copy, and reduce waiting time.
-
HPE servers with firmware version 1.30 might report the status of I/O module sensors as warnings. You might see similar messages: [Device] I/O Module n ALOM_Link_Pn or [Device] I/O module n NIC_Link_Pn.
-
A race condition causes transmission queues of physical NICs to become inactive whenever a virtual switch uplink port is disconnected from one virtual switch or distributed virtual switch and connected to another virtual switch or distributed virtual switch. As a result, network connectivity between virtual machines might fail.
-
By default, the configuration limit for the memory pool of VMFS6 volumes on an ESXi host is 1 PB, but you can extend the pool to support up to 4 PB. However, a total datastore size greater than 1 PB might cause an ESXi host to fail with a purple diagnostic screen due to a memory outage. This fix allows changes in the configuration for up to 16 PB. You must set a limit for the LFBCSlabSizeMaxMB setting, scaling it to the total size of all datastores on an ESXi host. The default is 8, which covers a total capacity of 1PB, but you can increase the limit to up to 128, or 16 PB. You must reboot the ESXi host to enable the change.
- Due to a race between the power off operation of a virtual machine and a running query to collect CPU performance counters, a vCenter Server system might dereference a NULL pointer. This might result in ESXi hosts failing with a purple diagnostic screen.
-
Profile Name | ESXi-6.5.0-20191204001-no-tools |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | December 19, 2019 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2360407, 2403863, 2368281, 2399214, 2399723, 2337888, 2379543, 2389006, 2370145, 2410508, 2394253, 2425066, 2389128, 2444667, 2448025, 2437368, 2271176, 2423302, 2405783, 2393379, 2412737, 2357384, 2381549, 2443938, 2445628, 2170024, 2334089, 2343912, 2372211, 2388141, 2409981, 2412640, 2406038, 2367194, 2430956, 2367003, 2397504, 2432248, 2404787, 2407598, 2453909, 2430178, 2423695, 2359950, 2334733, 2449875, 2436649, 2349289, 2385716, 2340751, 2386848, 2439250, 2364560 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
If a virtual machine runs applications that use particular 3D state and shaders, and if software 3D acceleration is enabled, the virtual machine might stop responding.
-
Requesting an NMI from the hardware management console (BMC) or by pressing a physical NMI button must cause ESXi hosts to fail with a purple diagnostic screen and dump core. Instead, nothing happens and the ESXi host continues running.
-
You can use the SCSIBindCompletionWorlds() method to set the number of queues and worlds of a driver. However, if you set the numQueues parameter to a value greater than 1 and the numWorlds parameter to a value equal to or less than 1, the API call might return without releasing the lock held. This results in a deadlock and the ESXi host might fail with a purple diagnostic screen.
-
In a vCenter Server system using AMD EPYC 7002 series processors, a virtual machine that has a PCI passthrough device might fail to power on. In the vmkernel.log, you see messages similar to:
4512 2019-08-06T06:09:55.058Z cpu24:1001397137)AMDIOMMU: 611: IOMMU 0000:20:00.2: Failed to allocate IRTE for IOAPIC ID 243 vector 0x3f
4513 2019-08-06T06:09:55.058Z cpu24:1001397137)WARNING: IOAPIC: 1238: IOAPIC Id 243: Failed to allocate IRTE for vector 0x3f
In the AMD IOMMU interrupt remapper, IOAPIC interrupts use an IRTE index equal to the vector number. In certain cases, a non-IOAPIC interrupt might take the index that an IOAPIC interrupt needs.
-
When a virtual machine client needs an extra memory reservation, if the ESXi host has no available memory, the host might fail with a purple diagnostic screen. You see a similar backtrace:
@BlueScreen: #PF Exception 14 in world 57691007:vmm0:LGS-000 IP 0x41802601d987 addr 0x88
PTEs:0x2b12a6c027;0x21f0480027;0xbfffffffff001;
0x43935bf9bd48:[0x41802601d987]MemSchedReapSuperflousOverheadInt@vmkernel#nover+0x1b stack: 0x0
0x43935bf9bd98:[0x41802601daad]MemSchedReapSuperflousOverhead@vmkernel#nover+0x31 stack: 0x4306a812
0x43935bf9bdc8:[0x41802601dde4]MemSchedGroupAllocAllowed@vmkernel#nover+0x300 stack: 0x4300914eb120
0x43935bf9be08:[0x41802601e46e]MemSchedGroupSetAllocInt@vmkernel#nover+0x52 stack: 0x17f7c
0x43935bf9be58:[0x418026020a72]MemSchedManagedKernelGroupSetAllocInt@vmkernel#nover+0xae stack: 0x1
0x43935bf9beb8:[0x418026025e11]MemSched_ManagedKernelGroupSetAlloc@vmkernel#nover+0x7d stack: 0x1bf
0x43935bf9bee8:[0x41802602649b]MemSched_ManagedKernelGroupIncAllocMin@vmkernel#nover+0x3f stack: 0x
0x43935bf9bf28:[0x418025ef2eed]VmAnonUpdateReservedOvhd@vmkernel#nover+0x189 stack: 0x114cea4e1c
0x43935bf9bfb8:[0x418025eabc29]VMMVMKCall_Call@vmkernel#nover+0x139 stack: 0x418025eab778 -
If you do not configure a default route in an ESXi host, the IP address of the ESXi SNMP agent might be in reverse order in the payload of the sent SNMP traps. For example, if the SNMP agent has an IP address of 172.16.0.10, the IP address in the payload is 10.0.16.172. As a result, the SNMP traps reach the target with an incorrect IP address of the ESXi SNMP agent in the payload.
-
Windows virtual machines might fail while migrating to ESXi 6.5 after a reboot initiated by the guest OS. You see a MULTIPROCESSOR CONFIGURATION NOT SUPPORTED error message on a blue screen. The fix prevents the x2APIC ID field of the guest CPUID from being modified during the migration. -
Adding an ESXi host to an AD domain by using a vSphere Authentication Proxy might fail intermittently with error code 41737, which corresponds to the error message LW_ERROR_KRB5KDC_ERR_C_PRINCIPAL_UNKNOWN, when a host is temporarily unable to find its newly created machine account in the AD environment. This fix adds retry logic for adding ESXi hosts to AD domains by using the Authentication Proxy. -
When 3D software acceleration is enabled, the virtual machine executable (VMX) process might fail if the software renderer uses more memory than is reserved for it. You see an Unrecoverable Memory Allocation Failure error. Virtual machines running Windows 10 are prone to this issue. -
An ESXi host might fail with a purple diagnostic screen during shutdown due to a very rare race condition, when the host tries to access a memory region in the short interval between the moment it is freed and the moment it is allocated to another task. This issue occurs when both MLDv1 and MLDv2 devices are present in the network and global IPv6 addresses are disabled. The issue was fixed in ESXi650-201811002. However, that fix revealed another issue during ESXi host shutdown, which resulted in a NULL pointer dereference while IPv6 was disabled. -
After a microcode update, it is sometimes necessary to re-enumerate the CPUID for the virtual machines on an ESXi host. By using the configuration parameter vmx.reboot.powerCycle = TRUE, you can schedule virtual machines for a power cycle when necessary.
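A minimal sketch of how this could look, assuming you edit the .vmx configuration file of a powered-off virtual machine; the file path below is illustrative:
# Hypothetical excerpt from /vmfs/volumes/datastore1/example-vm/example-vm.vmx
# With this option set, the next guest-initiated reboot becomes a full power cycle,
# so the virtual machine re-enumerates the CPUID after the microcode update.
vmx.reboot.powerCycle = "TRUE"
The same parameter can typically also be added as an advanced configuration option of the virtual machine instead of editing the file directly.
-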
Sometimes, if you disable checksums for vSphere Replication by setting the parameter
HBR.ChecksumUseChecksumInfo = 0
in the ESXi Advanced settings, the ESXi host might fail with a purple diagnostic screen. -
You might see an unexpected failover or a blue diagnostic screen when both vSphere FT and a GART are enabled in a guest OS due to a race condition. vSphere FT scans the guest page table to find the dirty pages and generate a bitmap. To avoid a conflict, each vCPU scans a separate range of pages. However, if a GART is also enabled, it might map a guest physical page number (PPN) to an already mapped region. Also, multiple PPNs might be mapped to the same BusMem page number (BPN). This causes two vCPUs to write on the same QWORD in the bitmap when they are processing two PPNs in different regions.
-
If you configure a small limit for the vmx.log.rotateSize parameter, the VMX process might fail while powering on or during vSphere vMotion. The chances that these issues occur increase if you use a value of less than 100 KB for the vmx.log.rotateSize parameter.
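For illustration only, a .vmx entry well above that threshold might look as follows; the exact value is an assumption and should be sized to your logging needs:
# Hypothetical excerpt from a virtual machine's .vmx file: rotate the log at
# roughly 2 MB (the value is in bytes), comfortably above the 100 KB threshold.
vmx.log.rotateSize = "2048000"
-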
When you hot add a device under a PCI hot plug slot that has only
PCIe _HPX
record settings, some PCIe registers in the hot added device might not be properly set. This results in missing or incorrect PCIe AER register settings. For example, AER driver control or AER mask register might not be initialized. -
If you extract a host profile from an ESXi host with disabled IPv6 support, and the IPv4 default gateway is overridden while remediating that host profile, you might see a message that a virtual network adapter, such as DSwitch0, is to be created on the vSphere Distributed Switch (VDS). However, the adapter is actually removed from the VDS. This is due to a
"::"
value of theipV6DefaultGateway
parameter. -
ESXi hosts might fail with a purple diagnostic screen during power off or deletion of multiple virtual machines on a vSphere Virtual Volumes datastore. You see a message indicating a PF Exception 14 on the screen. This issue might affect multiple hosts.
-
If you revert a virtual machine with CBT enabled to a snapshot that is not a memory snapshot, and you then use the QueryChangedDiskAreas() API call, you might see an InvalidArgument error.
API queries to ESXi hosts sometimes respond slowly, leading to incorrect health check warnings about host connectivity.
-
For large clusters with 33 or more ESXi hosts, the vSAN: Basic (unicast) connectivity check might report warnings due to the large number of hosts to ping.
-
In a vSAN stretched cluster, the cluster primary host is elected from the preferred fault domain, and if no hosts are available there, the primary host is elected from the secondary fault domain. However, if a node from the preferred fault domain then joins the cluster, that host is elected as primary by using a process called cluster takeover, which might put the cluster into a bad state. If, for some reason, the cluster takeover does not complete, the virtual machines on this cluster stop responding.
-
After patching a vCenter Server Appliance to 6.5 Update 2 or later, the vSAN health service might fail to start. This problem might occur because the following file does not correctly update custom port assignments for HTTP and HTTPS:
/etc/vmware-vsan-health/config.conf
. -
The vsanvpd process on an ESXi host might stop after running QRadar Vulnerability Manager to scan ESXi hosts. This problem occurs when communication to the vsanvpd does not follow the expected protocol. The vsanvpd process denies the connection and exits. The result is that registration or unregistration of the VASA vendor provider information fails.
-
If IPv6 is enabled on an ESXi host and many IPv6 addresses are assigned by using SLAAC, the SNMP service might fail. An error in the logic for looping over IPv6 addresses causes the issue. As a result, you cannot monitor the status of ESXi hosts.
-
Stale parameters might cause incorrect handling of interrupt information from passthrough devices during a reset or reboot. As a result, virtual machines with PCI passthrough devices might fail to power on after a reset or reboot.
-
SFCB might fail and core dump due to a memory reallocation failure. You see error messages similar to:
sfcb-vmware_raw[68518]: tool_mm_realloc_or_die: memory re-allocation failed(orig=1600000 new=3200000 msg=Cannot allocate memory, aborting.
-
During a save operation to a distributed portgroup, the hostd service might become temporarily unresponsive due to an internal error. You see a
hostd-worker-zdump
file in the/var/core/
directory. -
You must manually add the claim rules to an ESXi host for Tegile Storage Arrays to set the I/O Operations Per Second to 1.
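For illustration only, a claim rule of this kind could be added manually with esxcli before applying this fix; the SATP as well as the vendor and model strings below are assumptions and must match what the array actually reports:
# Hypothetical claim rule: use Round Robin with one I/O operation per path switch.
esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V "TEGILE" -M "INTELLIFLASH" -P VMW_PSP_RR -O "iops=1" -e "Tegile array, iops=1"
# Verify that the rule is present.
esxcli storage nmp satp rule list | grep -i tegile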
-
Sometimes, the allocated memory for a large number of virtual distributed switch ports exceeds the
dvsLargeHeap
parameter. This might cause the hostd service or running commands to fail. -
Administrators must have the following permission to create a vSAN datastore:
+Host.Config.Storage
.
This permission also enables the administrator to unmount any datastores that reside in vCenter Server. -
When Horizon linked clone desktops are refreshed or recomposed, a digest recompute operation follows, and it might take a long time. The delay in the recompute operation might cause a misconfiguration of the desktop digest files. This results in all the recompute I/O ending up in the replica disk. I/O congestion in the replica causes longer digest recompute times.
-
In some cases, when you remove and add a SR-IOV network adapter during the same reconfigure operation, the spec is generated in such a way that the properties of the new adapter are overwritten, and it displays as an unknown PCIe device.
-
Due to a rare race condition, the provisioning of virtual machines by using View might fail with an error
Module 'CheckpointLate' power on failed
. -
When updating a host profile from a reference host, since the default policy option for kernel module parameters is a fixed value, all per-host kernel parameters also become fixed values and cannot be re-used. Also, CHAP secrets cannot be extracted from the host and are effectively lost.
-
Virtual machines might fail with a VMX panic error message during a passthrough of an iLOK USB key device. You see an error similar to
PANIC: Unexpected signal: 11
in the virtual machine log file,vmware.log
. -
You must manually add the claim rules to an ESXi host for Dell EMC Trident Storage Arrays to set the I/O Operations Per Second to 1.
-
In ESXi 6.5, notifications for a PDL exit are no longer supported, but the Pluggable Storage Architecture (PSA) might still send notifications to the VMFS layer for such events. This might cause ESXi hosts to fail with a purple diagnostic screen.
-
SFCB might fail due to a segmentation fault when you query third-party provider classes, such as OSLS_InstCreation, OSLS_InstDeletion, and OSLS_InstModification, under the root/emc/host namespace by using the getClass command.
The OpenSSL package is updated to version openssl-1.0.2t.
-
If a parent physical function (PF) is at an SBDF address such as C1:00.0 and spawns virtual functions (VFs) starting at, for example, C2:00.0, you cannot use those VFs.
-
As soon as the CIM agent is enabled, you might start receiving multiple log entries per minute in the syslog.log file of ESXi hosts. The entries are similar to:
2019-09-16T12:59:55Z sfcb-vmware_base[2100102]: VMwareHypervisorStorageExtent::fillVMwareHypervisorStorageExtentInstance - durable name length is too large: 256
-
During operations, such as migrating 3D virtual machines by using vSphere vMotion, the memory allocation process might return a
NULL
value. As a result, the VMkernel components might not get access to the memory and cause failure of the ESXi host. -
When running the esxcfg-info command, you might see errors similar to the following hidden in the output:
ResourceGroup: Skipping CPU times for: Vcpu Id 3295688 Times due to error: max # of processors: 4 < 3295688
ResourceGroup: Skipping VCPU stats for: Vcpu Id 3295688 Times due to error: max # of processors: 4 < 3295688
-
If you hot remove a disk from the proxy virtual machine, the CTK file of an independent nonpersistent VMDK file might be deleted. The issue occurs regardless of whether the proxy virtual machine has CBT enabled. If you delete the last snapshot after a backup, the CTK file is restored and the issue does not affect backup workflows.
-
If unmapped data structures are accessed while a connection to a VMFS volume closes, an ESXi host might fail with a purple diagnostic screen. You see an error similar to
PF Exception 14 in world XXX:Unmap Helper IP XXX
in the backtrace. -
When an ESXi host in a vSAN cluster loses connection to a vCenter Server system, the host cannot retrieve the latest vSAN vmodl version, and defaults to a standard vim. This might cause the ESXi host to clear its list of unicast addresses, and lead to a cluster partition.
-
Due to a memory leak in the iSCSI module, software-based iSCSI adapters might not display in the vSphere Client or the vSphere Web Client during the target discovery process.
-
During a cluster rejoin event, an ESXi host running ESXi650-201811002 might fail with a purple diagnostic screen. The failure happens when the CMMDS module improperly attempts to free the host CMMDS utility.
-
Virtual machines with CBT enabled might report long waiting times during snapshot creation due to the 8 KB buffer used for CBT file copying. With this fix, the buffer size is increased to 1 MB to overcome multiple reads and writes of a large CBT file copy, and reduce waiting time.
-
HPE servers with firmware version 1.30 might report the status of I/O module sensors as warnings. You might see messages similar to [Device] I/O Module n ALOM_Link_Pn or [Device] I/O module n NIC_Link_Pn. -
A race condition causes transmission queues of physical NICs to become inactive whenever a virtual switch uplink port is disconnected from one virtual switch or distributed virtual switch and connected to another virtual switch or distributed virtual switch. As a result, network connectivity between virtual machines might fail.
-
By default, the configuration limit for the memory pool of VMFS6 volumes on an ESXi host is 1 PB, but you can extend the pool to support up to 4 PB. However, a total datastore size greater than 1 PB might cause an ESXi host to fail with a purple diagnostic screen due to a memory outage. This fix allows configuration changes for up to 16 PB. You must set a limit for the LFBCSlabSizeMaxMB setting, scaling it to the total size of all datastores on an ESXi host. The default is 8, which covers a total capacity of 1 PB, but you can increase the value to up to 128, which covers 16 PB. You must reboot the ESXi host to enable the change.
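A minimal sketch of how the limit could be inspected and raised with esxcli, assuming the setting is exposed as the advanced option /VMFS3/LFBCSlabSizeMaxMB; the option path and the example value are assumptions, so confirm them with the list command on your build first:
# Show the current value and allowed range of the assumed option.
esxcli system settings advanced list -o /VMFS3/LFBCSlabSizeMaxMB
# Example: raise the limit to 32, which by the scaling above would cover about 4 PB.
esxcli system settings advanced set -o /VMFS3/LFBCSlabSizeMaxMB -i 32
# Reboot the host for the change to take effect.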
- Due to a race between the power off operation of a virtual machine and a running query to collect CPU performance counters, a vCenter Server system might dereference a NULL pointer. This might result in ESXi hosts failing with a purple diagnostic screen.
-
Profile Name | ESXi-6.5.0-20191201001s-standard |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | December 19, 2019 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2363675, 2377205 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
Code Name | FMS | Plt ID | MCU Rev | MCU Date | Brand Names |
Nehalem EP | 0x106a5 | 0x03 | 0x0000001d | 5/11/2018 | Intel Xeon 35xx Series; Intel Xeon 55xx Series |
Lynnfield | 0x106e5 | 0x13 | 0x0000000a | 5/8/2018 | Intel Xeon 34xx Lynnfield Series |
Clarkdale | 0x20652 | 0x12 | 0x00000011 | 5/8/2018 | Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series |
Arrandale | 0x20655 | 0x92 | 0x00000007 | 4/23/2018 | Intel Core i7-620LE Processor |
Sandy Bridge DT | 0x206a7 | 0x12 | 0x0000002f | 2/17/2019 | Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series |
Westmere EP | 0x206c2 | 0x03 | 0x0000001f | 5/8/2018 | Intel Xeon 56xx Series; Intel Xeon 36xx Series |
Sandy Bridge EP | 0x206d7 | 0x6d | 0x00000718 | 5/21/2019 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series |
Nehalem EX | 0x206e6 | 0x04 | 0x0000000d | 5/15/2018 | Intel Xeon 65xx Series; Intel Xeon 75xx Series |
Westmere EX | 0x206f2 | 0x05 | 0x0000003b | 5/16/2018 | Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series |
Ivy Bridge DT | 0x306a9 | 0x12 | 0x00000021 | 2/13/2019 | Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C |
Haswell DT | 0x306c3 | 0x32 | 0x00000027 | 2/26/2019 | Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series |
Ivy Bridge EP | 0x306e4 | 0xed | 0x0000042e | 3/14/2019 | Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series |
Ivy Bridge EX | 0x306e7 | 0xed | 0x00000715 | 3/14/2019 | Intel Xeon E7-8800/4800/2800-v2 Series |
Haswell EP | 0x306f2 | 0x6f | 0x00000043 | 3/1/2019 | Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series |
Haswell EX | 0x306f4 | 0x80 | 0x00000016 | 6/17/2019 | Intel Xeon E7-8800/4800-v3 Series |
Broadwell H | 0x40671 | 0x22 | 0x00000021 | 6/13/2019 | Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series |
Avoton | 0x406d8 | 0x01 | 0x0000012d | 9/16/2019 | Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series |
Broadwell EP/EX | 0x406f1 | 0xef | 0x0b000038 | 6/18/2019 | Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series |
Skylake SP | 0x50654 | 0xb7 | 0x02000065 | 9/5/2019 | Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series |
Cascade Lake B-0 | 0x50656 | 0xbf | 0x0400002c | 9/5/2019 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200 |
Cascade Lake | 0x50657 | 0xbf | 0x0500002c | 9/5/2019 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200 |
Broadwell DE | 0x50662 | 0x10 | 0x0000001c | 6/17/2019 | Intel Xeon D-1500 Series |
Broadwell DE | 0x50663 | 0x10 | 0x07000019 | 6/17/2019 | Intel Xeon D-1500 Series |
Broadwell DE | 0x50664 | 0x10 | 0x0f000017 | 6/17/2019 | Intel Xeon D-1500 Series |
Broadwell NS | 0x50665 | 0x10 | 0x0e00000f | 6/17/2019 | Intel Xeon D-1600 Series |
Skylake H/S | 0x506e3 | 0x36 | 0x000000cc | 4/1/2019 | Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series |
Denverton | 0x506f1 | 0x01 | 0x0000002e | 3/21/2019 | Intel Atom C3000 Series |
Kaby Lake H/S/X | 0x906e9 | 0x2a | 0x000000c6 | 8/14/2019 | Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series |
Coffee Lake H/S | 0x906ea | 0x22 | 0x000000c6 | 8/14/2019 | Intel Xeon E-2100 Series |
Coffee Lake H/S | 0x906eb | 0x02 | 0x000000c6 | 8/14/2019 | Intel Xeon E-2100 Series |
Coffee Lake H/S | 0x906ec | 0x22 | 0x000000c6 | 8/14/2019 | Intel Xeon E-2100 Series |
Coffee Lake Refresh | 0x906ed | 0x22 | 0x000000c6 | 8/14/2019 | Intel Xeon E-2200 Series |
-
The following VMware Tools ISO images are bundled with ESXi650-201912002:
windows.iso: VMware Tools 11.0.1 ISO image for Windows Vista (SP2) or later
linux.iso: VMware Tools 10.3.21 ISO image for Linux OS with glibc 2.5 or later
The following VMware Tools 10.3.10 ISO images are available for download:
solaris.iso: VMware Tools image for Solaris
darwin.iso: VMware Tools image for OSX
Follow the procedures listed in the following documents to download VMware Tools for platforms that are not bundled with ESXi:
-
Profile Name | ESXi-6.5.0-20191201001s-no-tools |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | December 19, 2019 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2363675, 2377205 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
Code Name | FMS | Plt ID | MCU Rev | MCU Date | Brand Names |
Nehalem EP | 0x106a5 | 0x03 | 0x0000001d | 5/11/2018 | Intel Xeon 35xx Series; Intel Xeon 55xx Series |
Lynnfield | 0x106e5 | 0x13 | 0x0000000a | 5/8/2018 | Intel Xeon 34xx Lynnfield Series |
Clarkdale | 0x20652 | 0x12 | 0x00000011 | 5/8/2018 | Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series |
Arrandale | 0x20655 | 0x92 | 0x00000007 | 4/23/2018 | Intel Core i7-620LE Processor |
Sandy Bridge DT | 0x206a7 | 0x12 | 0x0000002f | 2/17/2019 | Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series |
Westmere EP | 0x206c2 | 0x03 | 0x0000001f | 5/8/2018 | Intel Xeon 56xx Series; Intel Xeon 36xx Series |
Sandy Bridge EP | 0x206d7 | 0x6d | 0x00000718 | 5/21/2019 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series |
Nehalem EX | 0x206e6 | 0x04 | 0x0000000d | 5/15/2018 | Intel Xeon 65xx Series; Intel Xeon 75xx Series |
Westmere EX | 0x206f2 | 0x05 | 0x0000003b | 5/16/2018 | Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series |
Ivy Bridge DT | 0x306a9 | 0x12 | 0x00000021 | 2/13/2019 | Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C |
Haswell DT | 0x306c3 | 0x32 | 0x00000027 | 2/26/2019 | Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series |
Ivy Bridge EP | 0x306e4 | 0xed | 0x0000042e | 3/14/2019 | Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series |
Ivy Bridge EX | 0x306e7 | 0xed | 0x00000715 | 3/14/2019 | Intel Xeon E7-8800/4800/2800-v2 Series |
Haswell EP | 0x306f2 | 0x6f | 0x00000043 | 3/1/2019 | Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series |
Haswell EX | 0x306f4 | 0x80 | 0x00000016 | 6/17/2019 | Intel Xeon E7-8800/4800-v3 Series |
Broadwell H | 0x40671 | 0x22 | 0x00000021 | 6/13/2019 | Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series |
Avoton | 0x406d8 | 0x01 | 0x0000012d | 9/16/2019 | Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series |
Broadwell EP/EX | 0x406f1 | 0xef | 0x0b000038 | 6/18/2019 | Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series |
Skylake SP | 0x50654 | 0xb7 | 0x02000065 | 9/5/2019 | Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series |
Cascade Lake B-0 | 0x50656 | 0xbf | 0x0400002c | 9/5/2019 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200 |
Cascade Lake | 0x50657 | 0xbf | 0x0500002c | 9/5/2019 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200 |
Broadwell DE | 0x50662 | 0x10 | 0x0000001c | 6/17/2019 | Intel Xeon D-1500 Series |
Broadwell DE | 0x50663 | 0x10 | 0x07000019 | 6/17/2019 | Intel Xeon D-1500 Series |
Broadwell DE | 0x50664 | 0x10 | 0x0f000017 | 6/17/2019 | Intel Xeon D-1500 Series |
Broadwell NS | 0x50665 | 0x10 | 0x0e00000f | 6/17/2019 | Intel Xeon D-1600 Series |
Skylake H/S | 0x506e3 | 0x36 | 0x000000cc | 4/1/2019 | Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series |
Denverton | 0x506f1 | 0x01 | 0x0000002e | 3/21/2019 | Intel Atom C3000 Series |
Kaby Lake H/S/X | 0x906e9 | 0x2a | 0x000000c6 | 8/14/2019 | Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series |
Coffee Lake H/S | 0x906ea | 0x22 | 0x000000c6 | 8/14/2019 | Intel Xeon E-2100 Series |
Coffee Lake H/S | 0x906eb | 0x02 | 0x000000c6 | 8/14/2019 | Intel Xeon E-2100 Series |
Coffee Lake H/S | 0x906ec | 0x22 | 0x000000c6 | 8/14/2019 | Intel Xeon E-2100 Series |
Coffee Lake Refresh | 0x906ed | 0x22 | 0x000000c6 | 8/14/2019 | Intel Xeon E-2200 Series |
-
The following VMware Tools ISO images are bundled with ESXi650-201912002:
windows.iso: VMware Tools 11.0.1 ISO image for Windows Vista (SP2) or later
linux.iso: VMware Tools 10.3.21 ISO image for Linux OS with glibc 2.5 or later
The following VMware Tools 10.3.10 ISO images are available for download:
solaris.iso: VMware Tools image for Solaris
darwin.iso: VMware Tools image for OSX
Follow the procedures listed in the following documents to download VMware Tools for platforms that are not bundled with ESXi:
-