Release Date: AUG 20, 2019
Build Details
| Download Filename | ESXi650-201908001.zip |
| Build | 14320405 |
| Download Size | 320.9 MB |
| md5sum | f2831247a52f05e71a493ccb23456af1 |
| sha1checksum | 98393794e1520f768660851c153ef1f8b0bf11fd |
| Host Reboot Required | Yes |
| Virtual Machine Migration or Shutdown Required | Yes |
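Before installation, the downloaded bundle can be verified against the checksums above. A minimal sketch, assuming the ZIP file sits in the current directory:

```shell
# Compute checksums of the downloaded bundle and compare
# them with the md5sum/sha1checksum values in the table above
md5sum ESXi650-201908001.zip
# expected: f2831247a52f05e71a493ccb23456af1
sha1sum ESXi650-201908001.zip
# expected: 98393794e1520f768660851c153ef1f8b0bf11fd
```

If either digest differs from the table, re-download the bundle before proceeding.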
Bulletins
| Bulletin ID | Category | Severity |
| ESXi650-201908401-BG | Bugfix | Important |
Rollup Bulletin
This rollup bulletin contains the latest VIBs with all the fixes since the initial release of ESXi 6.5.
| Bulletin ID | Category | Severity |
| ESXi650-201908001 | Bugfix | Important |
Image Profiles
VMware patch and update releases contain general and critical image profiles. Applying the general release image profile installs all new bug fixes.
| Image Profile Name |
| ESXi-6.5.0-20190804001-standard |
| ESXi-6.5.0-20190804001-no-tools |
For more information about the individual bulletins, see the Download Patches page and the Resolved Issues section.
Patch Download and Installation
The typical way to apply patches to ESXi hosts is through VMware vSphere Update Manager. For details, see About Installing and Administering VMware vSphere Update Manager.
ESXi hosts can be updated by manually downloading the patch ZIP file from the VMware download page and installing the VIB by using the esxcli software vib command. Additionally, the system can be updated using the image profile and the esxcli software profile command.
For more information, see the vSphere Command-Line Interface Concepts and Examples and the vSphere Upgrade Guide.
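As a sketch of the manual path described above, assuming the bundle has been copied to a datastore (the datastore path below is a placeholder for your environment):

```shell
# List the image profiles contained in the downloaded bundle
esxcli software sources profile list -d /vmfs/volumes/datastore1/ESXi650-201908001.zip

# Option 1: update all VIBs in the bundle
esxcli software vib update -d /vmfs/volumes/datastore1/ESXi650-201908001.zip

# Option 2: apply a specific image profile from the bundle
esxcli software profile update -d /vmfs/volumes/datastore1/ESXi650-201908001.zip \
    -p ESXi-6.5.0-20190804001-standard

# This patch requires a host reboot to complete
reboot
```

Place the host in maintenance mode before updating, and prefer `update` over `install` so that VIBs not present in the bundle are left untouched.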
Resolved Issues
The resolved issues are grouped as follows.
ESXi650-201908401-BG

| Patch Category | Bugfix |
| Patch Severity | Important |
| Host Reboot Required | Yes |
| Virtual Machine Migration or Shutdown Required | Yes |
| Affected Hardware | N/A |
| Affected Software | N/A |
| VIBs Included | esx-base, esx-tboot, vsan, vsanhealth |
| PRs Fixed | 2379650 |
| Related CVE numbers | N/A |
This patch updates the esx-base, esx-tboot, vsan, and vsanhealth VIBs to resolve the following issue:

- PR 2379650: An API call to configure the number of queues and worlds of a driver might cause an ESXi host to fail with a purple diagnostic screen

  You can use the SCSIBindCompletionWorlds() method to set the number of queues and worlds of a driver. However, if you set the numQueues parameter to a value higher than 1 and the numWorlds parameter to a value equal to or lower than 1, the API call might return without releasing the lock it holds. This results in a deadlock, and the ESXi host might fail with a purple diagnostic screen.

  This issue is resolved in this release.
| Profile Name | ESXi-6.5.0-20190804001-standard |
| Build | For build information, see the top of the page. |
| Vendor | VMware, Inc. |
| Release Date | August 20, 2019 |
| Acceptance Level | PartnerSupported |
| Affected Hardware | N/A |
| Affected Software | N/A |
| Affected VIBs | esx-base, esx-tboot, vsan, vsanhealth |
| PRs Fixed | 2379650 |
| Related CVE numbers | N/A |
This patch resolves the following issue:

- You can use the SCSIBindCompletionWorlds() method to set the number of queues and worlds of a driver. However, if you set the numQueues parameter to a value higher than 1 and the numWorlds parameter to a value equal to or lower than 1, the API call might return without releasing the lock it holds. This results in a deadlock, and the ESXi host might fail with a purple diagnostic screen.
| Profile Name | ESXi-6.5.0-20190804001-no-tools |
| Build | For build information, see the top of the page. |
| Vendor | VMware, Inc. |
| Release Date | August 20, 2019 |
| Acceptance Level | PartnerSupported |
| Affected Hardware | N/A |
| Affected Software | N/A |
| Affected VIBs | esx-base, esx-tboot, vsan, vsanhealth |
| PRs Fixed | 2379650 |
| Related CVE numbers | N/A |
This patch resolves the following issue:

- You can use the SCSIBindCompletionWorlds() method to set the number of queues and worlds of a driver. However, if you set the numQueues parameter to a value higher than 1 and the numWorlds parameter to a value equal to or lower than 1, the API call might return without releasing the lock it holds. This results in a deadlock, and the ESXi host might fail with a purple diagnostic screen.
Known Issues
The known issues are grouped as follows.
Miscellaneous Issues

- A virtual machine that has a PCI passthrough device assigned to it might fail to power on in a vCenter Server system with an AMD EPYC 7002 series processor

  In specific vCenter Server system configurations and devices, such as AMD EPYC 7002 series processors, a virtual machine that has a PCI passthrough device assigned to it might fail to power on. In the vmkernel log, you can see messages similar to:

  2019-08-06T06:09:55.058Z cpu24:1001397137)AMDIOMMU: 611: IOMMU 0000:20:00.2: Failed to allocate IRTE for IOAPIC ID 243 vector 0x3f
  2019-08-06T06:09:55.058Z cpu24:1001397137)WARNING: IOAPIC: 1238: IOAPIC Id 243: Failed to allocate IRTE for vector 0x3f

  Workaround: Disable the use of the interrupt remapper by setting the kernel boot option iovDisableIR to TRUE:

  1. Set iovDisableIR=TRUE by using this command:
     # esxcli system settings kernel set -s iovDisableIR -v TRUE
  2. Reboot the ESXi host.
  3. After the reboot, verify that iovDisableIR is set to TRUE:
     # esxcli system settings kernel list | grep iovDisableIR

  Do not apply this workaround unless you need it to solve this specific problem.
- Manually triggering a non-maskable interrupt (NMI) might not work on a vCenter Server system with an AMD EPYC 7002 series processor

  Requesting an NMI from the hardware management console (BMC) or by pressing a physical NMI button should cause ESXi hosts to fail with a purple diagnostic screen and dump core. Instead, nothing happens and ESXi continues running.

  Workaround: Disable the use of the interrupt remapper by setting the kernel boot option iovDisableIR to TRUE:

  1. Set iovDisableIR=TRUE by using this command:
     # esxcli system settings kernel set -s iovDisableIR -v TRUE
  2. Reboot the ESXi host.
  3. After the reboot, verify that iovDisableIR is set to TRUE:
     # esxcli system settings kernel list | grep iovDisableIR

  Do not apply this workaround unless you need it to solve this specific problem.