ESXi 6.7 Update 3 | AUG 20 2019 | ISO Build 14320388
Check for additions and updates to these release notes.
What's in the Release Notes
The release notes cover the following topics:
- What's New
- Earlier Releases of ESXi 6.7
- Installation Notes for This Release
- Patches Contained in this Release
- Resolved Issues
- Known Issues
What's New
- ixgben driver enhancements: The ixgben driver adds queue pairing to optimize CPU efficiency.
- VMXNET3 enhancements: ESXi 6.7 Update 3 adds guest encapsulation offload, and UDP and ESP RSS support, to the Enhanced Networking Stack (ENS). Checksum calculations are offloaded from encapsulated packets to the virtual device emulation, and you can run RSS on UDP and ESP packets on demand. UDP RSS supports both IPv4 and IPv6, while ESP RSS supports only IPv4. The feature requires a corresponding VMXNET3 v4 driver.
- NVIDIA virtual GPU (vGPU) enhancements: With vCenter Server 6.7 Update 3, you can configure virtual machines and templates with up to four vGPU devices to cover use cases requiring multiple GPU accelerators attached to a virtual machine. To use the vMotion vGPU feature, you must set the vgpu.hotmigrate.enabled advanced setting to true and make sure that both your vCenter Server and ESXi hosts are running vSphere 6.7 Update 3.
vMotion of multi-vGPU virtual machines might fail gracefully under heavy GPU workload due to the maximum switchover time of 100 secs. To avoid this failure, either increase the maximum allowable switchover time or wait until the virtual machine is performing a less intensive GPU workload.
- bnxtnet driver enhancements: ESXi 6.7 Update 3 adds support for Broadcom 100G Network Adapters and multi-RSS feed to the bnxtnet driver.
- Quick Boot support enhancements: ESXi 6.7 Update 3 whitelists Dell EMC vSAN Ready Nodes servers R740XD and R640 for Quick Boot support.
- Configurable shutdown time for the sfcbd service: With ESXi 6.7 Update 3, you can configure the shutdown time of the sfcbd service. You can set a shutdown time that suits the third-party CIM provider that you use, or keep the default setting of 10 seconds.
- New SandyBridge microcode: ESXi 6.7 Update 3 adds new SandyBridge microcode to the cpu-microcode VIB to bring SandyBridge security up to par with other CPUs and fix per-VM Enhanced vMotion Compatibility (EVC) support. For more information, see VMware knowledge base article 1003212.
Earlier Releases of ESXi 6.7
Features and known issues of ESXi are described in the release notes for each release. Release notes for earlier releases of ESXi 6.7 are:
- VMware ESXi 6.7, Patch Release ESXi670-201906002
- VMware ESXi 6.7, Patch Release ESXi670-201905001
- VMware ESXi 6.7, Patch Release ESXi670-201904001
- VMware ESXi 6.7 Update 2 Release Notes
- VMware ESXi 6.7, Patch Release ESXi670-201903001
- VMware ESXi 6.7, Patch Release ESXi670-201901001
- VMware ESXi 6.7, Patch Release ESXi670-201811001
- VMware vSphere 6.7 Update 1 Release Notes
- VMware vSphere 6.7 Release Notes
For internationalization, compatibility, installation and upgrades, open source components, and product support notices, see the VMware vSphere 6.7 Update 1 Release Notes.
Installation Notes for This Release
VMware Tools Bundling Changes in ESXi 6.7 Update 3
In ESXi 6.7 Update 3, a subset of the VMware Tools 10.3.10 ISO images is bundled with the ESXi 6.7 Update 3 host.
The following VMware Tools 10.3.10 ISO images are bundled with ESXi:
windows.iso: VMware Tools image for Windows Vista or higher
linux.iso: VMware Tools image for Linux OS with glibc 2.5 or higher
The following VMware Tools 10.3.10 ISO images are available for download:
solaris.iso: VMware Tools image for Solaris
freebsd.iso: VMware Tools image for FreeBSD
darwin.iso: VMware Tools image for OS X
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
- VMware Tools 10.3.10 Release Notes
- Updating to VMware Tools 10 - Must Read
- VMware Tools for hosts provisioned with Auto Deploy
- Updating VMware Tools
Patches Contained in this Release
This release contains all bulletins for ESXi that were released prior to the release date of this product.
Build Details
Download Filename: | update-from-esxi6.7-6.7_update03.zip |
Build: | 14320388 (Security-only: 14141615) |
Download Size: | 454.6 MB |
md5sum: | b3a89ab239caf1e10bac7d60277ad995 |
sha1checksum: | 590aabc3358c45ca7ee3702d50995ce86ac43bf8 |
Host Reboot Required: | Yes |
Virtual Machine Migration or Shutdown Required: | Yes |
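Before you import the bundle, you can verify that the download is intact by comparing its checksums with the values above. This is a minimal sketch and assumes the ZIP file is in the current directory of a Linux workstation or the ESXi shell:
md5sum update-from-esxi6.7-6.7_update03.zip
sha1sum update-from-esxi6.7-6.7_update03.zip
The output must match the md5sum and sha1checksum values listed in the Build Details table.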
Bulletins
This release contains general and security-only bulletins. Security-only bulletins are applicable to new security fixes only. No new bug fixes are included, but bug fixes from earlier patch and update releases are included.
If the installation of all new security and bug fixes is required, you must apply all bulletins in this release. In some cases, the general release bulletin will supersede the security-only bulletin. This is not an issue as the general release bulletin contains both the new security and bug fixes.
The security-only bulletins are identified by bulletin IDs that end in "SG". For information on patch and update classification, see KB 2014447.
For more information about the individual bulletins, see the My VMware page and the Resolved Issues section.
Bulletin ID | Category | Severity |
ESXi670-201908201-UG | Bugfix | Critical |
ESXi670-201908202-UG | Bugfix | Important |
ESXi670-201908203-UG | Enhancement | Important |
ESXi670-201908204-UG | Enhancement | Important |
ESXi670-201908205-UG | Enhancement | Important |
ESXi670-201908206-UG | Enhancement | Important |
ESXi670-201908207-UG | Enhancement | Important |
ESXi670-201908208-UG | Enhancement | Important |
ESXi670-201908209-UG | Enhancement | Important |
ESXi670-201908210-UG | Enhancement | Important |
ESXi670-201908211-UG | Enhancement | Important |
ESXi670-201908212-UG | Enhancement | Important |
ESXi670-201908213-UG | Bugfix | Important |
ESXi670-201908214-UG | Enhancement | Important |
ESXi670-201908215-UG | Enhancement | Important |
ESXi670-201908216-UG | Enhancement | Important |
ESXi670-201908217-UG | Bugfix | Important |
ESXi670-201908218-UG | Bugfix | Important |
ESXi670-201908219-UG | Enhancement | Important |
ESXi670-201908220-UG | Bugfix | Important |
ESXi670-201908221-UG | Bugfix | Important |
ESXi670-201908101-SG | Security | Important |
ESXi670-201908102-SG | Security | Important |
ESXi670-201908103-SG | Security | Important |
ESXi670-201908104-SG | Security | Important |
IMPORTANT: For clusters using VMware vSAN, you must first upgrade the vCenter Server system. Upgrading only ESXi is not supported.
Before an upgrade, always verify compatible upgrade paths from earlier versions of ESXi, vCenter Server, and vSAN to the current version in the VMware Product Interoperability Matrix.
Image Profiles
VMware patch and update releases contain general and critical image profiles. The general release image profile applies to new bug fixes.
Image Profile Name |
ESXi-6.7.0-20190802001-standard |
ESXi-6.7.0-20190802001-no-tools |
ESXi-6.7.0-20190801001s-standard |
ESXi-6.7.0-20190801001s-no-tools |
Patch Download and Installation
The typical way to apply patches to ESXi hosts is through the VMware vSphere Update Manager. For details, see the About Installing and Administering VMware vSphere Update Manager documentation.
ESXi hosts can be updated by manually downloading the patch ZIP file from the VMware download page and installing the VIB by using the esxcli software vib command. Additionally, the system can be updated using the image profile and the esxcli software profile command.
For more information, see the vSphere Command-Line Interface Concepts and Examples and the vSphere Upgrade Guide.
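For example, after you copy the offline bundle to a datastore, you can place the host in maintenance mode and apply the update with either the VIB method or the image profile method. The following is a minimal sketch; the datastore name datastore1 is only an example path, and you run one of the two update commands, not both:
esxcli system maintenanceMode set --enable true
esxcli software vib update -d /vmfs/volumes/datastore1/update-from-esxi6.7-6.7_update03.zip
esxcli software profile update -d /vmfs/volumes/datastore1/update-from-esxi6.7-6.7_update03.zip -p ESXi-6.7.0-20190802001-standard
Reboot the host after the command completes, and then exit maintenance mode with esxcli system maintenanceMode set --enable false.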
Resolved Issues
The resolved issues are grouped as follows.
- ESXi670-201908201-UG
- ESXi670-201908202-UG
- ESXi670-201908203-UG
- ESXi670-201908204-UG
- ESXi670-201908205-UG
- ESXi670-201908206-UG
- ESXi670-201908207-UG
- ESXi670-201908208-UG
- ESXi670-201908209-UG
- ESXi670-201908210-UG
- ESXi670-201908211-UG
- ESXi670-201908212-UG
- ESXi670-201908213-UG
- ESXi670-201908214-UG
- ESXi670-201908215-UG
- ESXi670-201908216-UG
- ESXi670-201908217-UG
- ESXi670-201908218-UG
- ESXi670-201908219-UG
- ESXi670-201908220-UG
- ESXi670-201908221-UG
- ESXi670-201908101-SG
- ESXi670-201908102-SG
- ESXi670-201908103-SG
- ESXi670-201908104-SG
- ESXi-6.7.0-20190802001-standard
- ESXi-6.7.0-20190802001-no-tools
- ESXi-6.7.0-20190801001s-standard
- ESXi-6.7.0-20190801001s-no-tools
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2288428, 2282206, 2305377, 2213824, 2305719, 2301374, 2272250, 2220008, 2262288, 2290043, 2295425, 2300124, 2303227, 2301818, 2294347, 2338914, 2327317, 2312215, 2331247, 2340952, 2366745, 2331525, 2320309, 2304092, 2241121, 2306836, 2343913, 2361711, 2350171, 2297765, 2355516, 2287232, 2310826, 2332785, 2280742, 2386930, 2333442, 2346582, 2214222, 2335847, 2292419, 2132634, 2363202, 2367669, 2303016, 2337889, 2360408, 2344159, 2331369, 2370239, 2367142, 2336379, 2301231, 2354762, 2305414, 2305410, 2305404, 2305406, 2305405, 2335120, 2305407, 2315656, 2315652, 2305412, 2315684, 2323424, 2328263, 2332785, 2338914, 2214222, 2394247, 2402409, 2322882, 2482603 |
CVE numbers | CVE-2019-5528 |
This patch updates the esx-base, esx-update, vsan, and vsanhealth VIBs to resolve the following issues:
- PR 2288428: Virtual machines might become unresponsive due to repetitive failures of third-party device drivers to process commands
Virtual machines might become unresponsive due to repetitive failures in some third-party device drivers to process commands. When you open the virtual machine console, you might see the following error:
Error: Unable to connect to the MKS: Could not connect to pipe \\.\pipe\vmware-authdpipe within retry period
This issue is resolved in this release. This fix recovers commands to unresponsive third party device drivers and ensures that failed commands are stopped and retried until success.
- PR 2282206: After an ESXi host reboot, you might not see NFS datastores using a fault tolerance solution by Nutanix mounted in the vCenter Server
After an ESXi host reboot, you might not see NFS datastores using a fault tolerance solution by Nutanix mounted in the vCenter Server system. However, you can see the volumes in the ESXi host.
This issue is resolved in this release.
- PR 2305377: Virtual machines that run on NFS version 3 datastores might start performing poorly over time
A virtual machine that runs on an NFS version 3 datastore might start performing poorly over time. After you reboot the ESXi host, the virtual machine starts performing normally again.
This issue is resolved in this release.
- PR 2213824: An ESXi host might fail with a purple diagnostic screen because of a physical CPU heartbeat failure
If a virtual machine has an SEsparse snapshot and the base VMDK file size is not a multiple of 4 KB, a physical CPU lockup might occur when you query the physical layout of the VMDK from the guest operating system or third-party applications. As a result, the ESXi host fails with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2305719: The migration of virtual machines by using vSphere Storage vMotion with container ports might cause the ESXi host to fail with a purple diagnostic screen
If a container port has a duplicate virtual interface (VIF) attachment ID, during the migration of virtual machines by using vSphere Storage vMotion with container ports, the migration might cause the ESXi host to fail with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2301374: An ESXi host might fail with a purple diagnostic screen at an UplinkRcv capture point on a converged network adapter (CNA)
When you use the pktcap-uw syntax, an ESXi host might fail with a purple diagnostic screen at an UplinkRcv capture point on a CNA. This issue occurs as a result of a rare condition when a device does not have an uplink object associated when capturing a packet on the uplink.
This issue is resolved in this release. The fix is to move the capture point if no associated uplink object is available.
- PR 2272250: The Route Based on Physical NIC Load policy does not work on ESXi 6.7 hosts that use vSphere Distributed Switch version 6.6
The Route Based on Physical NIC Load policy was not supported on vSphere Distributed Switch version 6.6 prior to ESXi 6.7 Update 3.
This issue is resolved in this release.
- PR 2220008: Network booting of virtual machines in a large network might fail
Network booting of virtual machines restricts the distance between a virtual machine and the network boot server to 15 routers. If your system has more routers on the path between a virtual machine and any server necessary for booting (PXE, DHCP, DNS, TFTP), the booting fails, because the booting request does not reach the server.
This issue is resolved in this release.
- PR 2262288: An ESXi host might stop responding due to SNMPv3 object identifier GET requests
The ESXi SNMP agent might answer GET requests for any SNMPv3 object identifier even when user authentication fails. As a result, the ESXi host might stop responding.
This issue is resolved in this release.
- PR 2290043: The vSphere Web Client or vSphere Client might display a host profile as compliant even when it has a non-compliant value
The vSphere Web Client or vSphere Client might display a host profile as compliant even when it has a non-compliant value such as the size of the maximum transmission unit (MTU). Remediation of the profile works as expected, but it is not reflected in the vSphere Web Client or vSphere Client.
This issue is resolved in this release.
- PR 2295425: Performance charts of ESXi hosts might intermittently disappear from any web-based application that connects to the vCenter Server system
You might intermittently stop seeing performance charts of ESXi hosts in the embedded VMware Host Client or any web-based application that connects to the vCenter Server system, including the vSphere Client and vSphere Web Client. An internal issue causes hostd to delete available performance counters information for some ESXi hosts.
This issue is resolved in this release.
- PR 2300124: An ESXi host does not use the C2 low-power state on AMD EPYC CPUs
Even if C-states are enabled in the firmware setup of an ESXi host, the VMkernel does not detect all the C-states correctly. The power screen of the esxtop tool shows columns %C0 (percentage of time spent in C-state 0) and %C1 (percentage of time spent in C-state 1), but does not show column %C2. As a result, system performance per watt of power is not maximized.
This issue is resolved in this release.
- PR 2303227: With certain BIOS versions, an assertion failure might be triggered early during the start phase of an ESXi host
With certain BIOS versions, an assertion failure might be triggered early during the start phase of an ESXi host when Intel Trusted Execution Technology (TXT) is enabled in the BIOS. The assertion failure is of the type:
Panic: ASSERT bora/vmkernel/vmkBoot/x86/vmkBootTxt.c:2235
This issue is resolved in this release. For versions earlier than 6.7 Update 3, disable Intel TXT in the BIOS if it is not required. An upgrade of the BIOS might also work.
- PR 2301818: An ESXi host might fail with a purple diagnostic screen due to a race condition
An ESXi host might fail with a purple diagnostic screen due to a rare race condition when the host tries to access a memory region in the short window between the moment the region is freed and the moment it is allocated to another task.
This issue is resolved in this release.
- PR 2294347: An ESXi host might fail with a purple diagnostic screen when you use the Software iSCSI adapter
When you use the Software iSCSI adapter, the ESXi host might fail with a purple diagnostic screen due to a race condition.
This issue is resolved in this release.
- PR 2338914: Discrepancies between the AHCI specification handling of the vmw_ahci driver and AHCI controller firmware might cause multiple errors
In heavy I/O workloads, discrepancies between the AHCI specification handling of the ESXi native vmw_ahci driver and third-party AHCI controller firmware, such as that of the DELL BOSS-S1 adapter, might cause multiple errors, including storage outage errors or a purple diagnostic screen.
This issue is resolved in this release.
- PR 2327317: Virtual machines might lose connectivity during cross-cluster migrations by using vSphere vMotion in an NSX setup
Virtual machines might lose connectivity during cross-cluster migrations by using vSphere vMotion in an NSX setup, because NetX filters might be blocking traffic to ports.
This issue is resolved in this release.
- PR 2312215: A virtual machine with VMDK files backed by vSphere Virtual Volumes might fail to power on when you revert it to a snapshot
This issue is specific to vSphere Virtual Volumes datastores when a VMDK file is assigned to different SCSI targets across snapshots. The lock file of the VMDK is reassigned across different snapshots and might be incorrectly deleted when you revert the virtual machine to a snapshot. Due to the missing lock file, the disk does not open, and the virtual machine fails to power on.
This issue is resolved in this release.
- PR 2331247: PXE booting of a virtual machine with a VMXNET3 virtual network device from a Citrix Provisioning Services (PVS) server might fail
A virtual machine with VMXNET3 vNICs cannot start by using Citrix PVS bootstrap. This is caused by pending interrupts on the virtual network device that are not handled properly during the transition from the PXE boot to the start of the guest operating system. As a result, the guest operating system cannot start the virtual network device and the virtual machine also fails to start.
This issue is resolved in this release. The fix clears any pending interrupts and any mask INTx interrupts when the virtual device starts.
- PR 2340952: Custom Rx and Tx ring sizes of physical NICs might not be persistent across ESXi host reboots
If you customize the Rx and Tx ring sizes of physical NICs to boost network performance by using the following commands:
esxcli network nic ring current set -n <vmnicX> -t <tx ring size>
esxcli network nic ring current set -n <vmnicX> -r <rx ring size>
the settings might not be persistent across ESXi host reboots.
This issue is resolved in this release. The fix makes these ESXCLI configurations persistent across reboots by writing them to the ESXi configuration file. Ring-size configuration must be allowed on the physical NICs before you use the CLI.
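To confirm what the NIC supports and what is configured after a reboot, you can use the following read-only commands; this is a sketch, with vmnicX standing in for the physical NIC name:
esxcli network nic ring preset get -n vmnicX
esxcli network nic ring current get -n vmnicX
The preset values show the maximum ring sizes the NIC supports, and the current values show the settings that are in effect.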
- PR 2366745: macOS 10.15 Developer Preview might fail to start on a virtual machine
When you start macOS 10.15 Developer Preview in a virtual machine, the operating system might fail with a blank screen displaying a white Apple logo. A power-on operation in verbose mode stops responding after the following error message is displayed:
Error unserializing prelink plist: OSUnserializeXML: syntax error near line 2
panic(...): "Unable to find driver for this platform: \"ACPI\".\n"@...
This issue is resolved in this release. For the fix to take effect, you must power off the existing virtual machines before you upgrade to macOS 10.15 Developer Preview.
- PR 2331525: If you set the NetworkNicTeamingPolicy or SecurityPolicy to use the default settings, you might fail to apply a host profile
If you modify the NetworkNicTeamingPolicy or SecurityPolicy of a host profile and change it to the default settings, you might fail to apply the host profile to an ESXi host and receive an error such as:
Error: A specified parameter was not correct.
This issue is resolved in this release.
- PR 2320309: When a vMotion operation fails and is immediately followed by a hot-add or a Storage vMotion operation, the ESXi host might fail with a purple diagnostic screen
If the Maximum Transmission Unit (MTU) size of a virtual switch is less than the configured MTU size for the VMkernel port, the vMotion operations might fail. If a failed vMotion operation is immediately followed by a hot-add or a Storage vMotion operation, the ESXi host might fail with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2304092: A VMXNET3 virtual network device might get null properties for MAC addresses of some virtual machine interfaces
The maximum memory heap size for any module is 3 GB. If the requested total memory size is more than 4 GB, an integer overflow occurs. As a result, the MAC addresses of the virtual machine interfaces are set to 00:00:00:00:00:00 and the VMXNET3 device might fail to start. In the VMkernel log, you might see an error similar to:
Vmxnet3: 15204: Failed to allocate memory for tq 0
This issue is resolved in this release.
- PR 2241121: When exporting virtual machines as OVF files by using the VMware Host Client, VMDK disk files for different virtual machines have the same prefix
When exporting multiple virtual machines to one folder, the VMDK names might be conflicting and get auto-renamed by the Web browser. This makes the exported OVF files invalid.
This issue is resolved in this release. Exported VMDK file names are now based on the virtual machine name.
- PR 2306836: An ESXi host upgrade or migration operation might fail with a purple diagnostic screen
If you use a vSphere Distributed Switch or an NSX-T managed vSphere Distributed Switch with the container feature enabled, and your ESXi host is of version 6.7 or earlier, the ESXi host might fail with a purple diagnostic screen during an upgrade or migration operation.
This issue is resolved in this release.
- PR 2343913: You must manually add the claim rules to an ESXi host
You must manually add the claim rules to an ESXi host for Tegile Storage Arrays to set the I/O Operations Per Second to 1.
This issue is resolved in this release. The fix sets the Storage Array Type Plug-in (SATP) claim rules in ESXi hosts version 6.5 and 6.7 for Tegile Storage Arrays to:
esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V TEGILE -M "INTELLIFLASH" -c tpgs_on --psp="VMW_PSP_RR" -O "iops=1" -e "Tegile arrays with ALUA support"
esxcli storage nmp satp rule add -s VMW_SATP_DEFAULT_AA -V TEGILE -M "INTELLIFLASH" -c tpgs_off --psp="VMW_PSP_RR" -O "iops=1" -e "Tegile arrays without ALUA support"
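After you apply the update, you can confirm that the new default claim rules are present by listing the NMP SATP rules and filtering for the Tegile entries. A minimal sketch (the grep filter is only an example):
esxcli storage nmp satp rule list | grep -i TEGILE
Devices that were claimed before the update typically keep their previous settings until their paths are reclaimed or the host is rebooted.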
- PR 2361711: When provisioning instant clones, you might see an error and virtual machines fail to power on
When provisioning instant clones by using the vSphere Web Client or vSphere Client, you might see the error:
Disconnected from virtual machine.
The virtual machines cannot power on because the virtual machine executable (VMX) process fails.
This issue is resolved in this release. The fix corrects the VMX panic and returns an error.
- PR 2350171: If you run the kdump utility while vIOMMU is enabled, Linux guest OS in an ESXi host might become unresponsive
If vIOMMU is enabled, Linux guest OS might become unresponsive while initializing the kdump kernel.
This issue is resolved in this release.
- PR 2297765: Removal of disks does not trigger the StorageHealthAlarm on the vCenter Server system
An ESXi host might fail to monitor the health of the LSI Host Bus Adapter (HBA) and the attached storage. As a result, the vCenter Server system cannot display an alarm when the HBA health degrades. You must install the third-party LSI SMI-S CIM provider. For more information, see VMware knowledge base article 2001549.
This issue is resolved in this release.
- PR 2355516: An API call to configure the number of queues and worlds of a driver might cause an ESXi hosts to fail with a purple diagnostic screen
You can use the SCSIBindCompletionWorlds() method to set the number of queues and worlds of a driver. However, if you set the numQueues parameter to a value higher than 1 and the numWorlds parameter to a value equal to or lower than 1, the API call might return without releasing the lock held. This results in a deadlock and the ESXi host might fail with a purple diagnostic screen.
The issue is resolved in this release.
- PR 2287232: Virtual machines with Microsoft Windows 10 Version 1809 might start slowly or stop responding during the start phase if they are running on a VMFS6 datastore
If a virtual machine is with Windows 10 Version 1809, has snapshots, and runs on a VMFS6 datastore, the virtual machine might either start slowly or stop responding during the start phase.
This issue is resolved in this release.
- PR 2310826: An ESXi host might fail with a purple diagnostic screen if connectivity to NFS41 datastores intermittently fails
An ESXi host might fail with a purple diagnostic screen because of a NullPointerException in the I/O completion when connectivity to NFS41 datastores intermittently fails.
This issue is resolved in this release.
- PR 2332785: An ESXi host might fail with a purple diagnostic screen due to lack of physical CPU heartbeat
If you use an ESXi host with a CD-ROM drive model DU-8A5LH, the CD-ROM drive might report an unknown Frame Information Structure (FIS) exception. The vmw_ahci driver does not handle the exception properly and creates repeated PORT_IRQ_UNK_FIS exception logs in the kernel. The repeated logs cause lack of physical CPU heartbeat and the ESXi host might fail with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2280742: Corruption in the heap of a DvFilter virtual switch might cause an ESXi host to fail with a purple diagnostic screen
A race condition in the get-set firewall rule operations for DvFilter virtual switches might lead to a buffer overflow and corruption of the heap. As a result, an ESXi host might fail with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2386930: Applications might not be available on destination ESXi hosts after migration of virtual machines with configured NVIDIA GRID vGPUs
Memory corruption might occur during the live migration of both compute and storage of virtual machines with NVIDIA GRID vGPUs. Migration succeeds, but running applications might not resume on the destination host.
This issue is resolved in this release.
- PR 2333442: Virtual machines might fail to power on when the number of virtual devices exceeds the maximum limit
Virtual machines configured with a maximum number of virtual devices along with PCI passthrough might fail to power on if the limit is exceeded.
The issue is identified with a panic log in the vmware.log file similar to:
vmx| E105: PANIC: VERIFY bora/devices/pci/pci.c:1529.
This issue is resolved in this release.
- PR 2346582: When you update your ESXi hosts from version 6.7 or 6.7 Update 1, including patch releases, to version 6.7 Update 2 or any 6.7 Update 2 patch release, and if a server is using Intel Skylake CPUs, Quick Boot is cancelled and the ESXi host performs a full system reboot
ESXi 6.7 Update 2 introduces a fix related to Intel errata for Skylake CPUs. The fix impacts the hardware configuration and some sanity checks are triggered, so the ESXi host performs a full system reboot instead of Quick Boot. This issue occurs only once, during an update of ESXi hosts from a version without the fix to a version that includes the fix, and only when Quick Boot is enabled for this update.
This issue is resolved in this release. With this fix, when you update your ESXi hosts from version 6.7 or 6.7 Update 1 to version 6.7 Update 3, the ESXi hosts perform Quick Boot on servers with Intel Skylake CPUs.
- PR 2214222: Xorg might fail to start on systems with multiple GPUs
More than one Xorg script might run in parallel and interfere with each other. As a result, the Xorg script might fail to start on systems with multiple GPUs.
This issue is resolved in this release. The fix ensures that a single Xorg script runs during start operations.
- PR 2335847: Mellanox ConnectX5 devices might report that Auto Negotiation is not active although the feature is turned on
When you use the esxcli network nic get -n vmnicX command to get the Auto Negotiation status on Mellanox ConnectX5 devices, you might see the status as False even if the feature is enabled on the physical switch.
This issue is resolved in this release.
- PR 2334911: When you configure the Link Aggregation Group (LAG), if a bundle member flaps, a wrong VMkernel observation message might appear
When you configure the LAG, if a bundle member flaps, an incorrect VMkernel observation message might appear for the LAG:
Failed Criteria: 0
This issue is resolved in this release.
- PR 2366640: If the uplink name is not in a format such as vmnicX, where X is a number, the Link Aggregation Control Protocol (LACP) does not work
If the uplink name is not in a format such as vmnicX, where X is a number, LACP does not work.
This issue is resolved in this release.
- PR 2292419: When Site Recovery Manager test recovery triggers a vSphere Replication synchronization phase, hostd might become unresponsive
When you quiesce virtual machines that run Microsoft Windows Server 2008 or later, application-quiesced snapshots are created. The maximum number of concurrent snapshots is 32, which is too high, because too many threads are used in the task tracking of the snapshot operation. As a result, the hostd service might become unresponsive.
This issue is resolved in this release. The fix reduces the maximum number of concurrent snapshots to 8.
- PR 2132634: The hostd service sometimes fails while recomposing VDI Pools and creates a hostd-worker-zdump
The hostd service sometimes fails while recomposing VDI Pools, with the error message VmfsExtentCommonOpen: VmfsExtentParseExtentLine failed. You can see hostd-worker-zdump files generated in the /var/core/ directory of the ESXi host.
This issue is resolved in this release.
- PR 2363202: The monitoring services show that the virtual machines on a vSphere Virtual Volumes datastore are in a critical state
In the vSphere Web Client, incorrect Read or Write latency is displayed for the performance graphs of the vSphere Virtual Volumes datastores at a virtual machine level. As a result, the monitoring service shows that the virtual machines are in a critical state.
This issue is resolved in this release.
- PR 2367669: During decompression of its memory page, a virtual machine might stop responding and restart
During the decompression of the memory page of a virtual machine running on ESXi 6.7, the check of the compression window size might fail. As a result, the virtual machine might stop responding and restart. In the backtrace of the VMkernel log, you might see entries similar to:
Unable to decompress BPN(0x1005ef842) from frameIndex(0x32102f0) block 0 for VM(6609526)
0x451a55f1baf0:[0x41802ed3d040]WorldPanicWork
0x451a55f1bb50:[0x41802ed3d285]World_Panic
0x451a55f1bcc0:[0x41802edb3002]VmMemIO_FaultCompressedPage
0x451a55f1bd20:[0x41802edbf72f]VmMemPfCompressed
0x451a55f1bd70:[0x41802edc033b]VmMemPfInt
0x451a55f1be10:[0x41802edc0d4f]VmMemPf
0x451a55f1be80:[0x41802edc108e]VmMemPfLockPageInt
0x451a55f1bef0:[0x41802edc3517]VmMemPf_LockPage
0x451a55f1bfa0:[0x41802ed356e6]VMMVMKCall_Call
0x451a55f1bfe0:[0x41802ed59e5d]VMKVMM_ArchEnterVMKernel
This issue is resolved in this release.
- PR 2303016: Test recovery with VMware Site Recovery Manager might fail with an error DiskQueue is full
The number of concurrent contexts of the resource manager of an ESXi host might exceed the maximum of 512 due to an error in the dispatch logic. In case of a slow secondary host or network problems, this might result in DiskQueue is full errors and fail the synchronization of virtual machines in operations run by the Site Recovery Manager.
This issue is resolved in this release.
- PR 2337889: If you do not configure a default route in an ESXi host, the SNMP traps are sent with a payload in which the ESXi SNMP agent IP address sequence is in reverse order
If you do not configure a default route in the ESXi host, the IP address of the ESXi SNMP agent might be in reverse order in the payload of the sent SNMP traps. For example, if the SNMP agent has the IP address 172.16.0.10, in the payload the IP address is 10.0.16.172. As a result, the SNMP traps reach the target with an incorrect IP address of the ESXi SNMP agent in the payload.
This issue is resolved in this release.
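To check whether a default route is already configured on the host, you can list the VMkernel routing table; this is only a read-only sketch:
esxcli network ip route ipv4 list
If no default entry is present, configure a default gateway for the VMkernel network, for example through the DCUI or the host networking settings in the vSphere Client, so that SNMP traps carry the correct agent address.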
- PR 2360408: When some applications that use 3D software acceleration run on a virtual machine, the virtual machine might stop responding
If a virtual machine runs applications which use particular 3D state and shaders, and if software 3D acceleration is used, the virtual machine might stop responding.
This issue is resolved in this release.
- PR 2344159: When IPv6 checksum offload is turned on, and an ESXi host receives an IPv6 packet with routing extension headers, the ESXi host might fail with a purple diagnostic screen
When the guest sends an IPv6 packet with routing extension headers and requests checksum offload for it, the ESXi host might fail with a purple diagnostic screen.
This issue is resolved in this release. If you already face the issue, disable the checksum offload on the vNIC, as shown in the sketch below.
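As a temporary workaround in a Linux guest, you can turn off checksum offload on the affected vNIC with ethtool. This is a sketch and assumes the interface is named eth0; the command runs in the guest, not on the ESXi host:
ethtool -K eth0 tx off rx off
For Windows guests, the equivalent settings are in the advanced properties of the network adapter. Re-enable the offloads after you apply this patch.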
- PR 2331369: When an ESXi host removes a PShare Hint with the function VmMemCow_PShareRemoveHint, the ESXi host might fail with a purple diagnostic screen
When an ESXi host removes a PShare Hint from a PShare chain, if the PShare chain is corrupted, the ESXi host might fail with a purple diagnostic screen and an error similar to:
0x43920bd9bdc0:[0x41800c5930d6]VmMemCow_PShareRemoveHint
0x43920bd9be00:[0x41800c593172]VmMemCowPFrameRemoveHint
0x43920bd9be30:[0x41800c594fc8]VmMemCowPShareFn@vmkernel
0x43920bd9bf80:[0x41800c500ef4]VmAssistantProcessTasks@vmkernel
0x43920bd9bfe0:[0x41800c6cae05]CpuSched_StartWorld@vmkernel
This issue is resolved in this release.
- PR 2370239: When a virtual machine uses a remote serial port, if a DNS outage occurs during a migration, the virtual machine might power off
When a virtual machine uses a remote serial port connected to a virtual serial port concentrator (vSPC), where a DNS name is the address of the vSPC, if a DNS outage occurs during a migration by using vMotion, the virtual machine might power off.
This issue is resolved in this release.
- PR 2367142: If you change the hardware compatibility of a virtual machine, the PCI passthrough devices might be removed
When you change the hardware compatibility level of a virtual machine with a PCI passthrough device, the device might be removed from the virtual machine during the procedure.
The issue is resolved in this release.
- PR 2336379: When you set the physical NIC (pNIC) alias, the mapping of the uplinks in the ESXi configuration file might be malformed
When you use the command line to set the pNIC alias, the MAC address that is saved in esx.conf for the corresponding pNIC is malformed. The command that you use is the following:
localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias store --bus-type pci --alias vmnicX --bus-address XX
This issue is resolved in this release.
- PR 2301231: An ESXi host fails to discover local devices after upgrade from vSphere 6.0 to a later release
This problem occurs only when the SATP is VMW_SATP_LOCAL. In ESXi 6.0, if a claim rule with SATP VMW_SATP_LOCAL is added with an incorrectly formatted config option, the NMP/SATP plug-ins do not recognize the option and fail to discover the device when the host is upgraded to a later ESXi release.
You might see log entries similar to the following:
2018-11-20T02:10:36.333Z cpu1:66089)WARNING: NMP: nmp_DeviceAlloc:2048: nmp_AddPathToDevice failed Bad parameter (195887111).
2018-11-20T02:10:36.333Z cpu1:66089)WARNING: ScsiPath: 6615: Plugin 'NMP' had an error (Bad parameter) while claiming path 'vmhba0:C0:T2:L0'. Skipping the path.
This issue is resolved in this release.
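To locate an incorrectly formatted claim rule before or after the upgrade, you can review the user-defined NMP SATP rules on the host. A minimal sketch, filtering on the SATP named in this issue:
esxcli storage nmp satp rule list | grep VMW_SATP_LOCAL
If a malformed user rule is found, you can remove it with esxcli storage nmp satp rule remove and re-add it with esxcli storage nmp satp rule add, using a correctly formatted config option.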
- PR 2280742: Corruption in the heap of a DvFilter virtual switch might cause an ESXi host to fail with a purple diagnostic screen
A race condition in the get-set firewall rule operations for DvFilter virtual switches can lead to a buffer overflow and corruption of the heap. As a result, an ESXi host might fail with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2354762: A vCenter Server system hardware health status might not display failures on DIMMs
A vCenter Server system hardware health status might not display failures on DIMMs even though the server controllers display a warning. For instance, the Dell Remote Access Controller in a Dell M630 server might display a warning for a DIMM, but you see no alert or alarm in your vCenter Server system.
This issue is resolved in this release.
- PR 2394247: You cannot set virtual machines to power cycle when the guest OS reboots
After a microcode update, sometimes it is necessary to re-enumerate the CPUID for virtual machines on an ESXi server. By using the configuration parameter vmx.reboot.powerCycle = TRUE, you can schedule virtual machines for a power cycle when necessary.
This issue is resolved in this release.
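A minimal sketch of how you can apply the parameter to an individual virtual machine: power off the virtual machine, add the following line to its .vmx file (or add the same key through Edit Settings > VM Options > Advanced > Configuration Parameters), and power it on again:
vmx.reboot.powerCycle = "TRUE"
With this setting, the next reboot initiated from the guest results in a full power cycle, so the virtual machine picks up the re-enumerated CPUID information.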
- PR 2402409: Virtual machines with enabled Changed Block Tracking (CBT) might fail while a snapshot is created due to lack of allocated memory for the CBT bit map
While a snapshot is being created, a virtual machine might power off and fail with an error similar to:
2019-01-01T01:23:40.047Z| vcpu-0| I125: DISKLIB-CTK : Failed to mmap change bitmap of size 167936: Cannot allocate memory.
2019-01-01T01:23:40.217Z| vcpu-0| I125: DISKLIB-LIB_BLOCKTRACK : Could not open change tracker /vmfs/volumes/DATASTORE_UUID/VM_NAME/VM_NAME_1-ctk.vmdk: Not enough memory for change tracking.
The error is a result of lack of allocated memory for the CBT bit map.
This issue is resolved in this release.
- PR 2322882: An ESXi host might stop responding due to a memory problem in vSphere Storage I/O Control in VMware vSphere 6.7
Due to a memory problem in vSphere Storage I/O Control, an ESXi host might stop responding and the /var/log/vmkernel.log file contains entries similar to the following:
MemSchedAdmit: 470: Admission failure in path: sioc/storageRM.13011002/uw.13011002
MemSchedAdmit: 477: uw.13011002 (68246725) extraMin/extraFromParent: 256/256, sioc (840) childEmin/eMinLimit: 14059/14080
This issue is resolved in this release.
- PR 2482603: If you have physical NICs that support SR-IOV and others that do not on the same ESXi host, you might see different information from the ESXCLI command set and the vSphere Client
If the same NIC driver on an ESXi host manages physical NICs that support SR-IOV and others that do not, you might see different information from the ESXCLI command set and the vSphere Client or vSphere Web Client. For example, from the ESXCLI command set you might see that vmnics 16, 17, and 18 have SR-IOV configured, while in the vSphere Client you might see that vmnics 6, 7, and 8 have SR-IOV configured.
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the igbn
VIB.
Patch Category | Enhancement |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2305414 |
CVE numbers | N/A |
This patch updates the nvme
VIB to resolve the following issue:
- Update to the NVMe driver
The update to the NVMe driver fixes the performance overhead issue on devices that run on Intel SSD DC P4600 Series.
Patch Category | Enhancement |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2305410 |
CVE numbers | N/A |
This patch updates the qlnativefc
VIB to resolve the following issue:
- Update to the qlnativefc driver
The qlnativefc driver is updated to version 3.1.8.0.
Patch Category | Enhancement |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2305404 |
CVE numbers | N/A |
This patch updates the lsi-mr3
VIB to resolve the following issue:
- Update to the lsi_mr3 driver
The lsi_mr3 driver is updated to version MR 7.8.
Patch Category | Enhancement |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2305406 |
CVE numbers | N/A |
This patch updates the lsi-msgpt35
VIB to resolve the following issue:
- Update to the lsi_msgpt35 driver
The lsi_msgpt35 driver is updated to version 09.00.00.00-5vmw.
Patch Category | Enhancement |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2335120 |
CVE numbers | N/A |
This patch updates the lsi-msgpt3
VIB to resolve the following issue:
- Update to the lsi_msgpt3 driver
The lsi_msgpt3 driver is updated to version 17.00.02.00-1vmw.
Patch Category | Enhancement |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2305407 |
CVE numbers | N/A |
This patch updates the lsi-msgpt2
VIB to resolve the following issue:
- Update to the lsi_msgpt2 driver
The lsi_msgpt2 driver is updated to version 20.00.06.00-2.
Patch Category | Enhancement |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2315656 |
CVE numbers | N/A |
This patch updates the i40en
VIB.
- Update to the i40en driver
The i40en driver is updated to version 1.8.1.9-2.
Patch Category | Enhancement |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2315652 |
CVE numbers | N/A |
This patch updates the ixgben
VIB to resolve the following issue:
The ixgben driver adds queue pairing to optimize CPU efficiency.
Patch Category | Enhancement |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2305412 |
CVE numbers | N/A |
This patch updates the smartpqi
VIB to resolve the following issue:
- Update to the smartpqi driver
The smartpqi driver is updated to version 1.0.1.553-28.
Patch Category | Enhancement |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2315684 |
CVE numbers | N/A |
This patch updates the bnxtnet
VIB to resolve the following issue:
ESXi 6.7 Update 3 adds support for Broadcom 100G Network Adapters and multi-RSS feed to the bnxtnet driver.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2323424 |
CVE numbers | N/A |
This patch updates the elxnet
VIB to resolve the following issue:
- PR 2323424: In rare cases, NICs that use an elxnet driver might lose connectivity
An issue with the elxnet driver might cause NICs that use this driver to lose connectivity. In the ESXi host kernel logs, you might see warnings similar to:
WARNING: elxnet: elxnet_workerThread:2122: [vmnic3] GetStats: MCC cmd timed out. timeout:36
WARNING: elxnet: elxnet_detectDumpUe:416: [vmnic3] Recoverable HW error detected
This issue is resolved in this release.
Patch Category | Enhancement |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the brcmfcoe
VIB.
Patch Category | Enhancement |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the lpfc
VIB.
Patch Category | Enhancement |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2328263 |
CVE numbers | N/A |
This patch updates the nenic
VIB to resolve the following issue:
- Update to the nenic driver
The nenic driver is updated to version 1.0.29.0.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2332785, 2338914 |
CVE numbers | N/A |
This patch updates the vmw-ahci
VIB.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the nmlx5-core
VIB.
Patch Category | Enhancement |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the nfnic
VIB.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2214222 |
CVE numbers | N/A |
This patch updates the esx-xserver
VIB.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the sfvmk
VIB.
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2307421, 2360047, 2300903, 2262288, 2313239 |
CVE numbers | N/A |
This patch updates the esx-base, esx-update, vsan, and vsanhealth VIBs to resolve the following issues:
- Update to the Network Time Protocol (NTP) daemon
The NTP daemon is updated to version ntp-4.2.8p13.
- Update to the libcurl library
The ESXi userworld libcurl library is updated to version 7.64.1.
- PR 2262288: ESXi hosts might answer SNMPv3 GET requests from an unauthenticated user for any object identifier (OID)
ESXi hosts might answer SNMPv3 GET requests from the special user "", which is used for engine ID discovery, even if the user is not authenticated or is incorrectly authenticated.
The issue is resolved in this release.
- Update to the libxml2 library
The ESXi userworld libxml2 library is updated to version 2.9.9.
- Update to the Python library
The Python third-party library is updated to version 3.5.7.
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2313239 |
CVE numbers | N/A |
This patch updates the tools-light
VIB.
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the cpu-microcode
VIB.
- The cpu-microcode VIB includes the following Intel microcode:
Code Name | FMS | Plt ID | MCU Rev | MCU Date | Brand Names |
Nehalem EP | 0x106a5 | 0x03 | 0x0000001d | 5/11/2018 | Intel Xeon 35xx Series; Intel Xeon 55xx Series |
Lynnfield | 0x106e5 | 0x13 | 0x0000000a | 5/8/2018 | Intel Xeon 34xx Lynnfield Series |
Clarkdale | 0x20652 | 0x12 | 0x00000011 | 5/8/2018 | Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series |
Arrandale | 0x20655 | 0x92 | 0x00000007 | 4/23/2018 | Intel Core i7-620LE Processor |
Sandy Bridge DT | 0x206a7 | 0x12 | 0x0000002f | 2/17/2019 | Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series |
Westmere EP | 0x206c2 | 0x03 | 0x0000001f | 5/8/2018 | Intel Xeon 56xx Series; Intel Xeon 36xx Series |
Sandy Bridge EP | 0x206d7 | 0x6d | 0x00000718 | 5/21/2019 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series |
Nehalem EX | 0x206e6 | 0x04 | 0x0000000d | 5/15/2018 | Intel Xeon 65xx Series; Intel Xeon 75xx Series |
Westmere EX | 0x206f2 | 0x05 | 0x0000003b | 5/16/2018 | Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series |
Ivy Bridge DT | 0x306a9 | 0x12 | 0x00000021 | 2/13/2019 | Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C |
Haswell DT | 0x306c3 | 0x32 | 0x00000027 | 2/26/2019 | Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series |
Ivy Bridge EP | 0x306e4 | 0xed | 0x0000042e | 3/14/2019 | Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series |
Ivy Bridge EX | 0x306e7 | 0xed | 0x00000715 | 3/14/2019 | Intel Xeon E7-8800/4800/2800-v2 Series |
Haswell EP | 0x306f2 | 0x6f | 0x00000043 | 3/1/2019 | Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series |
Haswell EX | 0x306f4 | 0x80 | 0x00000014 | 3/1/2019 | Intel Xeon E7-8800/4800-v3 Series |
Broadwell H | 0x40671 | 0x22 | 0x00000020 | 3/7/2019 | Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series |
Avoton | 0x406d8 | 0x01 | 0x0000012a | 1/4/2018 | Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series |
Broadwell EP/EX | 0x406f1 | 0xef | 0x0b000036 | 3/2/2019 | Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series |
Skylake SP | 0x50654 | 0xb7 | 0x0200005e | 4/2/2019 | Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series |
Cascade Lake B-0 | 0x50656 | 0xbf | 0x04000024 | 4/7/2019 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200 |
Cascade Lake | 0x50657 | 0xbf | 0x05000024 | 4/7/2019 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200 |
Broadwell DE | 0x50662 | 0x10 | 0x0000001a | 3/23/2019 | Intel Xeon D-1500 Series |
Broadwell DE | 0x50663 | 0x10 | 0x07000017 | 3/23/2019 | Intel Xeon D-1500 Series |
Broadwell DE | 0x50664 | 0x10 | 0x0f000015 | 3/23/2019 | Intel Xeon D-1500 Series |
Broadwell NS | 0x50665 | 0x10 | 0x0e00000d | 3/23/2019 | Intel Xeon D-1600 Series |
Skylake H/S | 0x506e3 | 0x36 | 0x000000cc | 4/1/2019 | Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series |
Denverton | 0x506f1 | 0x01 | 0x0000002e | 3/21/2019 | Intel Atom C3000 Series |
Kaby Lake H/S/X | 0x906e9 | 0x2a | 0x000000b4 | 4/1/2019 | Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series |
Coffee Lake H/S | 0x906ea | 0x22 | 0x000000b4 | 4/1/2019 | Intel Xeon E-2100 Series |
Coffee Lake H/S | 0x906eb | 0x02 | 0x000000b4 | 4/1/2019 | Intel Xeon E-2100 Series |
Coffee Lake H/S | 0x906ec | 0x22 | 0x000000b8 | 3/17/2019 | Intel Xeon E-2100 Series |
Coffee Lake Refresh | 0x906ed | 0x22 | 0x000000b8 | 3/17/2019 | Intel Xeon E-2200 Series |
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the esx-ui
VIB.
Profile Name | ESXi-6.7.0-20190802001-standard |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | August 20, 2019 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2288428, 2282206, 2305377, 2213824, 2305719, 2301374, 2272250, 2220008, 2262288, 2290043, 2295425, 2300124, 2303227, 2301818, 2294347, 2338914, 2327317, 2312215, 2331247, 2340952, 2366745, 2331525, 2320309, 2304092, 2241121, 2306836, 2343913, 2361711, 2350171, 2297765, 2355516, 2287232, 2310826, 2332785, 2280742, 2386930, 2333442, 2346582, 2214222, 2335847, 2292419, 2132634, 2363202, 2367669, 2303016, 2337889, 2360408, 2344159, 2331369, 2370239, 2367142, 2336379, 2301231, 2354762, 2305414, 2305410, 2305404, 2305406, 2305405, 2335120, 2305407, 2315656, 2315652, 2305412, 2315684, 2323424, 2328263, 2332785, 2338914, 2214222, 2394247, 2402409, 2482603 |
Related CVE numbers | N/A |
- This patch resolves the following issues:
-
Virtual machines might become unresponsive due to repetitive failures in some third-party device drivers to process commands. When you open the virtual machine console, you might see the following error:
Error: Unable to connect to the MKS: Could not connect to pipe \\.\pipe\vmware-authdpipe within retry period
-
After an ESXi host reboot, you might not see NFS datastores using a fault tolerance solution by Nutanix mounted in the vCenter Server system. However, you can see the volumes in the ESXi host.
-
A virtual machine that runs on an NFS version 3 datastore might start performing poorly over time. After you reboot the ESXi host, the virtual machine starts performing normally again.
-
If a virtual machine has an SEsparse snapshot and the base VMDK file size is not a multiple of 4 KB, a physical CPU lockup might occur when you query the physical layout of the VMDK from the guest operating system or third-party applications. As a result, the ESXi host fails with a purple diagnostic screen.
-
If a container port has a duplicate virtual interface (VIF) attachment ID, during the migration of virtual machines by using vSphere Storage vMotion with container ports, the migration might cause the ESXi host to fail with a purple diagnostic screen.
-
When you use the pktcap-uw syntax, an ESXi host might fail with a purple diagnostic screen at an UplinkRcv capture point on a CNA. This issue occurs as a result of a rare condition when a device does not have an uplink object associated when capturing a packet on the uplink.
-
The Route Based on Physical NIC Load policy was not supported on vSphere Distributed Switch version 6.6 prior to ESXi 6.7 Update 3.
-
Network booting of virtual machines restricts the distance between a virtual machine and the network boot server to 15 routers. If your system has more routers on the path between a virtual machine and any server necessary for booting (PXE, DHCP, DNS, TFTP), the booting fails, because the booting request does not reach the server.
-
The ESXi SNMP agent might answer GET requests for any SNMPv3 object identifier even when user authentication fails. As a result, the ESXi host might stop responding.
-
The vSphere Web Client or vSphere Client might display a host profile as compliant even when it has a non-compliant value such as the size of the maximum transmission unit (MTU). Remediation of the profile works as expected, but it is not reflected in the vSphere Web Client or vSphere Client.
-
You might intermittently stop seeing performance charts of ESXi hosts in the embedded VMware Host Client or any web-based application that connects to the vCenter Server system, including the vSphere Client and vSphere Web Client. An internal issue causes hostd to delete available performance counters information for some ESXi hosts.
-
Even if C-states are enabled in the firmware setup of an ESXi host, the VMkernel does not detect all the C-states correctly. The power screen of the esxtop tool shows columns %C0 (percentage of time spent in C-state 0) and %C1 (percentage of time spent in C-state 1), but does not show column %C2. As a result, system performance per watt of power is not maximized.
-
With certain BIOS versions, an assertion failure might be triggered early during the start phase of an ESXi host when Intel Trusted Execution Technology (TXT) is enabled in the BIOS. The assertion failure is of the type:
Panic: ASSERT bora/vmkernel/vmkBoot/x86/vmkBootTxt.c:2235
-
An ESXi host might fail with a purple diagnostic screen due to a rare race condition when the host tries to access a memory region in the short window between the moment the region is freed and the moment it is allocated to another task.
-
When you use the Software iSCSI adapter, the ESXi host might fail with a purple diagnostic screen due to a race condition.
-
Under heavy I/O workloads, discrepancies between the AHCI specification implementation of the ESXi native vmw_ahci driver and third-party AHCI controller firmware, such as that of the DELL BOSS-S1 adapter, might cause multiple errors, including storage outages or a purple diagnostic screen.
-
Virtual machines might lose connectivity during cross-cluster migrations by using vSphere vMotion in an NSX setup, because NetX filters might be blocking traffic to ports.
-
This issue is specific to vSphere Virtual Volumes datastores when a VMDK file is assigned to different SCSI targets across snapshots. The lock file of the VMDK is reassigned across different snapshots and might be incorrectly deleted when you revert the virtual machine to a snapshot. Due to the missing lock file, the disk does not open, and the virtual machine fails to power on.
-
A virtual machine with VMXNET3 vNICs cannot start by using Citrix PVS bootstrap. This is caused by pending interrupts on the virtual network device that are not handled properly during the transition from the PXE boot to the start of the guest operating system. As a result, the guest operating system cannot start the virtual network device and the virtual machine also fails to start.
-
If you customize the Rx and Tx ring sizes of physical NICs to boost network performance by using the following commands:
esxcli network nic ring current set -n <vmnicX> -t <value of the Tx ring size>
esxcli network nic ring current set -n <vmnicX> -r <value of the Rx ring size>
the settings might not be persistent across ESXi host reboots.
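For illustration, a minimal sketch of the full syntax of the commands above, assuming a hypothetical uplink vmnic0 and example ring sizes of 1024 (Tx) and 4096 (Rx); supported values depend on the NIC driver, and the corresponding get command reads the active values back:
esxcli network nic ring current set -n vmnic0 -t 1024
esxcli network nic ring current set -n vmnic0 -r 4096
esxcli network nic ring current get -n vmnic0
-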
When you start macOS 10.15 Developer Preview in a virtual machine, the operating system might fail with a blank screen displaying a white Apple logo. A power-on operation in verbose mode stops responding after the following error message is displayed:
Error unserializing prelink plist: OSUnserializeXML: syntax error near line 2
panic(...): "Unable to find driver for this platform: \"ACPI\".\n"@...
-
If you modify the NetworkNicTeamingPolicy or SecurityPolicy of a host profile and change it to the default settings, you might fail to apply the host profile to an ESXi host and receive an error such as:
Error: A specified parameter was not correct.
-
If the Maximum Transmission Unit (MTU) size of a virtual switch is less than the configured MTU size for the VMkernel port, the vMotion operations might fail. If a failed vMotion operation is immediately followed by a hot-add or a Storage vMotion operation, the ESXi host might fail with a purple diagnostic screen.
-
The maximum memory heap size for any module is 3 GB. If the requested total memory size is more than 4 GB, an integer overflow occurs. As a result, the MAC addresses of the virtual machine interfaces are set to
00:00:00:00:00:00
and the VMXNET3 device might fail to start. In the VMkernel log, you might see an error similar to: Vmxnet3: 15204: Failed to allocate memory for tq 0
. -
When you export multiple virtual machines to one folder, the VMDK file names might conflict and be automatically renamed by the Web browser. This makes the exported OVF files invalid.
-
If you use a vSphere Distributed Switch or an NSX-T managed vSphere Distributed Switch with the container feature enabled, and your ESXi host is of version 6.7 or earlier, the ESXi host might fail with a purple diagnostic screen during an upgrade or migration operation.
-
You must manually add claim rules to an ESXi host for Tegile Storage Arrays to set the I/O Operations Per Second limit to 1.
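The exact rules for a given array come from the storage vendor, but as a hedged sketch of the general form, a claim rule that applies the Round Robin path selection policy with an I/O operation limit of 1 might look like the following, where the SATP name and the vendor and model strings are placeholders that must match your Tegile model:
esxcli storage nmp satp rule add -s <SATP_NAME> -V <VENDOR> -M <MODEL> -P VMW_PSP_RR -O iops=1 -e "Tegile array, IOPS limit 1"
esxcli storage nmp satp rule list | grep -i <VENDOR>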
-
When provisioning instant clones by using the vSphere Web Client or vSphere Client, you might see the error
Disconnected from virtual machine.
The virtual machines cannot power on because the virtual machine executable (VMX) process fails. -
If vIOMMU is enabled, Linux guest OS might become unresponsive while initializing the
kdump
kernel. -
An ESXi host might fail to monitor the health of the LSI Host Bus Adapter (HBA) and the attached storage. As a result, the vCenter Server system cannot display an alarm when the HBA health degrades. You must install the third-party provider LSI SMI-S CIM. For more information, see VMware knowledge base article 2001549.
-
You can use the
SCSIBindCompletionWorlds()
method to set the number of queues and worlds of a driver. However, if you set the numQueues parameter to a value higher than 1 and the numWorlds parameter to a value equal to or lower than 1, the API call might return without releasing the lock it holds. This results in a deadlock and the ESXi host might fail with a purple diagnostic screen. -
If a virtual machine runs Windows 10 Version 1809, has snapshots, and runs on a VMFS6 datastore, the virtual machine might either start slowly or stop responding during the start phase.
-
An ESXi host might fail with a purple diagnostic screen because of
NullPointerException
in the I/O completion when connectivity to NFS41 datastores intermittently fails. -
If you use an ESXi host with a CD-ROM drive model DU-8A5LH, the CD-ROM drive might report an unknown
Frame Information Structure
(FIS) exception. The vmw_ahci driver does not handle the exception properly and creates repeated PORT_IRQ_UNK_FIS exception logs in the kernel. The repeated logs cause a lack of physical CPU heartbeat and the ESXi host might fail with a purple diagnostic screen. -
A race condition in the get-set firewall rule operations for DvFilter virtual switches might lead to a buffer overflow and corruption of the heap. As a result, an ESXi host might fail with a purple diagnostic screen.
-
Memory corruption might occur during the live migration of both compute and storage of virtual machines with NVIDIA GRID vGPUs. Migration succeeds, but running applications might not resume on the destination host.
-
Virtual machines configured with a maximum number of virtual devices along with PCI passthrough might fail to power on if the limit is exceeded.
The issue is identified with a panic log in the vmware.log file similar to: vmx| E105: PANIC: VERIFY bora/devices/pci/pci.c:1529.
-
ESXi 6.7 Update 2 introduces a fix related to Intel errata for Skylake CPUs. The fix impacts the hardware configuration and some sanity checks are triggered, so the ESXi host performs a full system reboot instead of Quick Boot. This issue occurs only once, during an update of the ESXi hosts from a version without the fix to a version that includes the fix, and when Quick Boot is enabled for this update.
-
More than one Xorg script might run in parallel and interfere with each other. As a result, the Xorg script might fail to start on systems with multiple GPUs.
-
When you use the
esxcli network nic get -n vmnicX
command to get Auto Negotiation status on Mellanox ConnectX5 devices, you might see the status as False even if the feature is enabled on the physical switch. -
When you quiesce virtual machines that run Microsoft Windows Server 2008 or later, application-quiesced snapshots are created. The maximum number of concurrent snapshots is 32, which is too high, because too many threads are used in the task tracking of the snapshot operation. As a result, the hostd service might become unresponsive.
-
The hostd service sometimes fails while recomposing VDI Pools, with the error message
VmfsExtentCommonOpen: VmfsExtentParseExtentLine failed
. You can see hostd-worker-zdump files generated in the /var/core/
directory of the ESXi host. -
In the vSphere Web Client, incorrect Read or Write latency is displayed for the performance graphs of the vSphere Virtual Volumes datastores at a virtual machine level. As a result, the monitoring service shows that the virtual machines are in a critical state.
-
During the decompression of the memory page of a virtual machine running on ESXi 6.7, the check of the compression window size might fail. As a result, the virtual machine might stop responding and restart. In the backtrace of the VMkernel log, you might see entries similar to:
Unable to decompress BPN(0x1005ef842) from frameIndex(0x32102f0) block 0 for VM(6609526)
0x451a55f1baf0:[0x41802ed3d040]WorldPanicWork
0x451a55f1bb50:[0x41802ed3d285]World_Panic
0x451a55f1bcc0:[0x41802edb3002]VmMemIO_FaultCompressedPage
0x451a55f1bd20:[0x41802edbf72f]VmMemPfCompressed
0x451a55f1bd70:[0x41802edc033b]VmMemPfInt
0x451a55f1be10:[0x41802edc0d4f]VmMemPf
0x451a55f1be80:[0x41802edc108e]VmMemPfLockPageInt
0x451a55f1bef0:[0x41802edc3517]VmMemPf_LockPage
0x451a55f1bfa0:[0x41802ed356e6]VMMVMKCall_Call
0x451a55f1bfe0:[0x41802ed59e5d]VMKVMM_ArchEnterVMKernel -
The number of concurrent contexts of the resource manager of an ESXi host might exceed the maximum of 512 due to an error in the dispatch logic. In case of a slow secondary host or network problems, this might result in
DiskQueue is full
errors and fail synchronization of virtual machines in operations run by the Site Recovery Manager. -
If you do not configure a default route on the ESXi host, the IP address of the ESXi SNMP agent might be in reverse order in the payload of the sent SNMP traps. For example, if the SNMP agent has the IP address 172.16.0.10, the IP address in the payload is 10.0.16.172. As a result, the SNMP traps reach the target with an incorrect IP address of the ESXi SNMP agent in the payload.
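For reference, a default route can be configured from the ESXi command line as in the following sketch; the gateway address is a placeholder for your environment:
esxcli network ip route ipv4 add --gateway <gateway-ip> --network default
esxcli network ip route ipv4 list
-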
If a virtual machine runs applications which use particular 3D state and shaders, and if software 3D acceleration is used, the virtual machine might stop responding.
-
When the guest sends IPv6 packets with checksum offloading and routing headers enabled, the ESXi host might fail with a purple diagnostic screen.
-
When an ESXi host removes a PShare Hint from a PShare chain, if the PShare chain is corrupted, the ESXi host might fail with a purple diagnostic screen and an error similar to:
0x43920bd9bdc0:[0x41800c5930d6]VmMemCow_PShareRemoveHint
0x43920bd9be00:[0x41800c593172]VmMemCowPFrameRemoveHint
0x43920bd9be30:[0x41800c594fc8]VmMemCowPShareFn@vmkernel
0x43920bd9bf80:[0x41800c500ef4]VmAssistantProcessTasks@vmkernel
0x43920bd9bfe0:[0x41800c6cae05]CpuSched_StartWorld@vmkernel -
When a virtual machine uses a remote serial port connected to a virtual serial port concentrator (vSPC), where a DNS name is the address of the vSPC, if a DNS outage occurs during a migration by using vMotion, the virtual machine might power off.
-
When you change the hardware compatibility level of a virtual machine with a PCI passthrough device, the device might be removed from the virtual machine during the procedure.
-
When you use the command-line and run the command to set the pNIC alias, the MAC address that is saved in esx.conf for the corresponding pNIC is malformed. The command that you use is the following:
localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias store --bus-type pci --alias vmnicX --bus-address XX
. -
This problem occurs only when the SATP is
VMW_SATP_LOCAL
. In ESXi 6.0, if a claimrule with SATP asVMW_SATP_LOCAL
is added with an incorrectly formatted config option, then NMP/SATP plug-ins do not recognize the option and fail to discover the device when upgraded to a later ESXi release.
You might see log entries similar to the following:
2018-11-20T02:10:36.333Z cpu1:66089)WARNING: NMP: nmp_DeviceAlloc:2048: nmp_AddPathToDevice failed Bad parameter (195887111).
2018-11-20T02:10:36.333Z cpu1:66089)WARNING: ScsiPath: 6615: Plugin 'NMP' had an error (Bad parameter) while claiming path 'vmhba0:C0:T2:L0'. Skipping the path
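As an illustration only, a claim rule for VMW_SATP_LOCAL with a correctly formatted config option typically takes a form like the sketch below; the device identifier is a placeholder, and the enable_ssd option is shown purely as an example of a well-formed option string:
esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d <device-id> -o enable_ssd
esxcli storage nmp satp rule list | grep VMW_SATP_LOCAL
-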
A race condition in the get-set firewall rule operations for DvFilter virtual switches can lead to a buffer overflow and corruption of the heap. As a result, an ESXi host might fail with a purple diagnostic screen.
-
A vCenter Server system hardware health status might not display failures on DIMMs, even though the server controllers display a warning. For instance, the Dell Remote Access Controller in a Dell M630 server might display a warning for a DIMM, but you see no alert or alarm in your vCenter Server system.
-
The update to the NVMe driver fixes the performance overhead issue on devices that run on Intel SSD DC P4600 Series.
-
The qlnativefc driver is updated to version 3.1.8.0.
-
The lsi_mr3 driver is updated to version MR 7.8.
-
The lsi_msgpt35 driver is updated to version 09.00.00.00-5vmw.
-
The i40en driver is updated to version 1.8.1.9-2.
-
The lsi_msgpt3 driver is updated to version 17.00.02.00-1vmw.
-
The lsi_msgpt2 driver is updated to version 20.00.06.00-2.
-
The ixgben driver adds queue pairing to optimize CPU efficiency.
-
The smartpqi driver is updated to version 1.0.1.553-28.
-
ESXi 6.7 Update 3 adds support for Broadcom 100G Network Adapters and multi-RSS feed to the bnxtnet driver.
-
An issue with the elxnet driver might cause NICs that use this driver to lose connectivity. In the ESXi host kernel logs, you might see warnings similar to:
WARNING: elxnet: elxnet_workerThread:2122: [vmnic3] GetStats: MCC cmd timed out. timeout:36
WARNING: elxnet: elxnet_detectDumpUe:416: [vmnic3] Recoverable HW error detected
-
The nenic driver is updated to version 1.0.29.0.
-
If you modify the NetworkNicTeamingPolicy or SecurityPolicy of a host profile and change it to the default settings, you might fail to apply the host profile to an ESXi host and receive an error such as:
Error: A specified parameter was not correct.
-
If you customize the Rx and Tx ring sizes of physical NICs to boost network performance by using the following commands:
esxcli network nic ring current set -n <vmnicX> -t <value of the Tx ring size>
esxcli network nic ring current set -n <vmnicX> -r <value of the Rx ring size>
the settings might not be persistent across ESXi host reboots. -
If you use a vSphere Distributed Switch or an NSX-T managed vSphere Distributed Switch with the container feature enabled, and your ESXi host is of version 6.7 or earlier, the ESXi host might fail with a purple diagnostic screen during an upgrade or migration operation.
-
When the guest sends IPv6 packets with checksum offloading and routing headers enabled, the ESXi host might fail with a purple diagnostic screen.
-
When you use the command-line and run the command to set the pNIC alias, the MAC address that is saved in esx.conf for the corresponding pNIC is malformed. The command that you use is the following:
localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias store --bus-type pci --alias vmnicX --bus-address XX
. -
If the uplink name is not in a format such as vmnicX, where X is a number, LACP does not work.
-
When you configure the LAG, if a bundle member flaps, a wrong VMkernel observation message might appear for the LAG:
Failed Criteria: 0
. -
After a microcode update, sometimes it is necessary to re-enumerate the CPUID for virtual machines on an ESXi server. By using the configuration parameter
vmx.reboot.powerCycle = TRUE
you can schedule virtual machines for a power cycle when necessary.
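As a sketch of how the parameter can be applied, you can add the entry to the advanced configuration of the virtual machine, which corresponds to the following line in its .vmx file; the quoting shown here follows the usual .vmx syntax:
vmx.reboot.powerCycle = "TRUE"
At the next guest reboot, the virtual machine is scheduled for a power cycle instead of a soft reset, which forces the CPUID to be re-enumerated.
-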
While a snapshot is being created, a virtual machine might power off and fail with an error similar to:
2019-01-01T01:23:40.047Z| vcpu-0| I125: DISKLIB-CTK : Failed to mmap change bitmap of size 167936: Cannot allocate memory.
2019-01-01T01:23:40.217Z| vcpu-0| I125: DISKLIB-LIB_BLOCKTRACK : Could not open change tracker /vmfs/volumes/DATASTORE_UUID/VM_NAME/VM_NAME_1-ctk.vmdk: Not enough memory for change tracking.
The error is a result of insufficient memory allocated for the CBT bitmap. - If the same NIC driver on an ESXi host has some physical NICs that do not support SR-IOV and others that do, you might see different information from the ESXCLI command set and the vSphere Client or vSphere Web Client. For example, from the ESXCLI command set you might see that vmnics 16, 17 and 18 have SR-IOV configured, while in the vSphere Client you might see that vmnics 6, 7 and 8 have SR-IOV configured.
-
Profile Name | ESXi-6.7.0-20190802001-no-tools |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | August 20, 2019 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2288428, 2282206, 2305377, 2213824, 2305719, 2301374, 2272250, 2220008, 2262288, 2290043, 2295425, 2300124, 2303227, 2301818, 2294347, 2338914, 2327317, 2312215, 2331247, 2340952, 2366745, 2331525, 2320309, 2304092, 2241121, 2306836, 2343913, 2361711, 2350171, 2297765, 2355516, 2287232, 2310826, 2332785, 2280742, 2386930, 2333442, 2346582, 2214222, 2335847, 2292419, 2132634, 2363202, 2367669, 2303016, 2337889, 2360408, 2344159, 2331369, 2370239, 2367142, 2336379, 2301231, 2354762, 2305414, 2305410, 2305404, 2305406, 2305405, 2335120, 2305407, 2315656, 2315652, 2305412, 2315684, 2323424, 2328263, 2332785, 2338914, 2214222, 2394247, 2402409, 2482603 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
Virtual machines might become unresponsive because some third-party device drivers repeatedly fail to process commands. When you open the virtual machine console, you might see the following error:
Error: Unable to connect to the MKS: Could not connect to pipe \\.\pipe\vmware-authdpipe within retry period
-
After an ESXi host reboot, NFS datastores that use a Nutanix fault tolerance solution might not appear as mounted in the vCenter Server system. However, the volumes are visible on the ESXi host.
-
A virtual machine that runs on an NFS version 3 datastore might start performing poorly over time. After you reboot the ESXi host, the virtual machine starts performing normally again.
-
If a virtual machine has an SEsparse snapshot and the base VMDK file size is not a multiple of 4K, a physical CPU lockup might occur when you query the physical layout of the VMDK from the guest operating system or third-party applications. As a result, the ESXi host fails with a purple diagnostic screen.
-
If a container port has a duplicate virtual interface (VIF) attachment ID, during the migration of virtual machines by using vSphere Storage vMotion with container ports, the migration might cause the ESXi host to fail with a purple diagnostic screen.
-
When you use the pktcap-uw utility, an ESXi host might fail with a purple diagnostic screen at an UplinkRcv capture point on a CNA. This issue occurs as a result of a rare condition when a device does not have an associated uplink object while capturing a packet on the uplink.
-
The Route Based on Physical NIC Load policy was not supported on vSphere Distributed Switch version 6.6 prior to ESXi 6.7 Update 3.
-
Network booting of virtual machines restricts the distance between a virtual machine and the network boot server to 15 routers. If your system has more routers on the path between a virtual machine and any server necessary for booting (PXE, DHCP, DNS, TFTP), booting fails because the boot request does not reach the server.
-
Each SNMPv3 object identifier might receive GET requests regardless of user authentication. As a result, the ESXi host might stop responding.
-
The vSphere Web Client or vSphere Client might display a host profile as compliant even when it has a non-compliant value such as the size of the maximum transmission unit (MTU). Remediation of the profile works as expected, but it is not reflected in the vSphere Web Client or vSphere Client.
-
You might intermittently stop seeing performance charts of ESXi hosts in the embedded VMware Host Client or any web-based application that connects to the vCenter Server system, including the vSphere Client and vSphere Web Client. An internal issue causes hostd to delete available performance counters information for some ESXi hosts.
-
Even if C-states are enabled in the firmware setup of an ESXi host, the VMkernel does not detect all the C-states correctly. The power screen of the
esxtop
tool shows columns %C0 (percentage of time spent in C-state 0) and %C1 (percentage of time spent in C-state 1), but does not show column %C2. As a result, system performance per watt of power is not maximized. -
If an ESXi host runs a certain BIOS version and Intel Trusted Execution Technology (TXT) is enabled in the BIOS, an assertion failure might be triggered early during the boot phase of the host. The assertion failure is of the type:
Panic: ASSERT bora/vmkernel/vmkBoot/x86/vmkBootTxt.c:2235
. -
An ESXi host might fail with a purple diagnostic screen due to a rare race condition when the host tries to access a memory region in the brief window between the region being freed and being reallocated to another task.
-
When you use the Software iSCSI adapter, the ESXi host might fail with a purple diagnostic screen due to a race condition.
-
Under heavy I/O workloads, discrepancies between the AHCI specification implementation of the ESXi native vmw_ahci driver and third-party AHCI controller firmware, such as that of the DELL BOSS-S1 adapter, might cause multiple errors, including storage outages or a purple diagnostic screen.
-
Virtual machines might lose connectivity during cross-cluster migrations by using vSphere vMotion in an NSX setup, because NetX filters might be blocking traffic to ports.
-
This issue is specific to vSphere Virtual Volumes datastores when a VMDK file is assigned to different SCSI targets across snapshots. The lock file of the VMDK is reassigned across different snapshots and might be incorrectly deleted when you revert the virtual machine to a snapshot. Due to the missing lock file, the disk does not open, and the virtual machine fails to power on.
-
A virtual machine with VMXNET3 vNICs cannot start by using Citrix PVS bootstrap. This is caused by pending interrupts on the virtual network device that are not handled properly during the transition from the PXE boot to the start of the guest operating system. As a result, the guest operating system cannot start the virtual network device and the virtual machine also fails to start.
-
If you customize the Rx and Tx ring sizes of physical NICs to boost network performance by using the following commands:
esxcli network nic ring current set -n <vmnicX> -t <value of the Tx ring size>
esxcli network nic ring current set -n <vmnicX> -r <value of the Rx ring size>
the settings might not be persistent across ESXi host reboots. -
When you start macOS 10.15 Developer Preview in a virtual machine, the operating system might fail with a blank screen displaying a white Apple logo. A power-on operation in verbose mode stops responding after the following error message is displayed:
Error unserializing prelink plist: OSUnserializeXML: syntax error near line 2
panic(...): "Unable to find driver for this platform: \"ACPI\".\n"@...
-
If you modify the NetworkNicTeamingPolicy or SecurityPolicy of a host profile and change it to the default settings, you might fail to apply the host profile to an ESXi host and receive an error such as:
Error: A specified parameter was not correct.
-
If the Maximum Transmission Unit (MTU) size of a virtual switch is less than the configured MTU size for the VMkernel port, the vMotion operations might fail. If a failed vMotion operation is immediately followed by a hot-add or a Storage vMotion operation, the ESXi host might fail with a purple diagnostic screen.
-
The maximum memory heap size for any module is 3 GB. If the requested total memory size is more than 4 GB, an integer overflow occurs. As a result, the MAC addresses of the virtual machine interfaces are set to
00:00:00:00:00:00
and the VMXNET3 device might fail to start. In the VMkernel log, you might see an error similar to: Vmxnet3: 15204: Failed to allocate memory for tq 0
. -
When you export multiple virtual machines to one folder, the VMDK file names might conflict and be automatically renamed by the Web browser. This makes the exported OVF files invalid.
-
If you use a vSphere Distributed Switch or an NSX-T managed vSphere Distributed Switch with the container feature enabled, and your ESXi host is of version 6.7 or earlier, the ESXi host might fail with a purple diagnostic screen during an upgrade or migration operation.
-
You must manually add claim rules to an ESXi host for Tegile Storage Arrays to set the I/O Operations Per Second limit to 1.
-
When provisioning instant clones by using the vSphere Web Client or vSphere Client, you might see the error
Disconnected from virtual machine.
The virtual machines cannot power on because the virtual machine executable (VMX) process fails. -
If vIOMMU is enabled, Linux guest OS might become unresponsive while initializing the
kdump
kernel. -
An ESXi host might fail to monitor the health of the LSI Host Bus Adapter (HBA) and the attached storage. As a result, the vCenter Server system cannot display an alarm when the HBA health degrades. You must install the third-party provider LSI SMI-S CIM. For more information, see VMware knowledge base article 2001549.
-
You can use the
SCSIBindCompletionWorlds()
method to set the number of queues and worlds of a driver. However, if you set the numQueues parameter to a value higher than 1 and the numWorlds parameter to a value equal to or lower than 1, the API call might return without releasing the lock it holds. This results in a deadlock and the ESXi host might fail with a purple diagnostic screen. -
If a virtual machine runs Windows 10 Version 1809, has snapshots, and runs on a VMFS6 datastore, the virtual machine might either start slowly or stop responding during the start phase.
-
An ESXi host might fail with a purple diagnostic screen because of
NullPointerException
in the I/O completion when connectivity to NFS41 datastores intermittently fails. -
If you use an ESXi host with a CD-ROM drive model DU-8A5LH, the CD-ROM drive might report an unknown
Frame Information Structure
(FIS) exception. The vmw_ahci driver does not handle the exception properly and creates repeated PORT_IRQ_UNK_FIS exception logs in the kernel. The repeated logs cause a lack of physical CPU heartbeat and the ESXi host might fail with a purple diagnostic screen. -
A race condition in the get-set firewall rule operations for DvFilter virtual switches might lead to a buffer overflow and corruption of the heap. As a result, an ESXi host might fail with a purple diagnostic screen.
-
Memory corruption might occur during the live migration of both compute and storage of virtual machines with NVIDIA GRID vGPUs. Migration succeeds, but running applications might not resume on the destination host.
-
Virtual machines configured with a maximum number of virtual devices along with PCI passthrough might fail to power on if the limit is exceeded.
The issue is identified with a panic log in the vmware.log file similar to: vmx| E105: PANIC: VERIFY bora/devices/pci/pci.c:1529.
-
ESXi 6.7 Update 2 introduces a fix related to Intel errata for Skylake CPUs. The fix impacts the hardware configuration and some sanity checks are triggered, so the ESXi host performs a full system reboot instead of Quick Boot. This issue occurs only once, during an update of the ESXi hosts from a version without the fix to a version that includes the fix, and when Quick Boot is enabled for this update.
-
More than one Xorg script might run in parallel and interfere with each other. As a result, the Xorg script might fail to start on systems with multiple GPUs.
-
When you use the
esxcli network nic get -n vmnicX
command to get Auto Negotiation status on Mellanox ConnectX5 devices, you might see the status as False even if the feature is enabled on the physical switch. -
When you quiesce virtual machines that run Microsoft Windows Server 2008 or later, application-quiesced snapshots are created. The maximum number of concurrent snapshots is 32, which is too high, because too many threads are used in the task tracking of the snapshot operation. As a result, the hostd service might become unresponsive.
-
The hostd service sometimes fails while recomposing VDI Pools, with the error message
VmfsExtentCommonOpen: VmfsExtentParseExtentLine failed
. You can see hostd-worker-zdump files generated in the /var/core/
directory of the ESXi host. -
In the vSphere Web Client, incorrect Read or Write latency is displayed for the performance graphs of the vSphere Virtual Volumes datastores at a virtual machine level. As a result, the monitoring service shows that the virtual machines are in a critical state.
-
During the decompression of the memory page of a virtual machine running on ESXi 6.7, the check of the compression window size might fail. As a result, the virtual machine might stop responding and restart. In the backtrace of the VMkernel log, you might see entries similar to:
Unable to decompress BPN(0x1005ef842) from frameIndex(0x32102f0) block 0 for VM(6609526)
0x451a55f1baf0:[0x41802ed3d040]WorldPanicWork
0x451a55f1bb50:[0x41802ed3d285]World_Panic
0x451a55f1bcc0:[0x41802edb3002]VmMemIO_FaultCompressedPage
0x451a55f1bd20:[0x41802edbf72f]VmMemPfCompressed
0x451a55f1bd70:[0x41802edc033b]VmMemPfInt
0x451a55f1be10:[0x41802edc0d4f]VmMemPf
0x451a55f1be80:[0x41802edc108e]VmMemPfLockPageInt
0x451a55f1bef0:[0x41802edc3517]VmMemPf_LockPage
0x451a55f1bfa0:[0x41802ed356e6]VMMVMKCall_Call
0x451a55f1bfe0:[0x41802ed59e5d]VMKVMM_ArchEnterVMKernel -
The number of concurrent contexts of the resource manager of an ESXi host might exceed the maximum of 512 due to an error in the dispatch logic. In case of a slow secondary host or network problems, this might result in
DiskQueue is full
errors and fail synchronization of virtual machines in operations run by the Site Recovery Manager. -
If you do not configure a default route on the ESXi host, the IP address of the ESXi SNMP agent might be in reverse order in the payload of the sent SNMP traps. For example, if the SNMP agent has the IP address 172.16.0.10, the IP address in the payload is 10.0.16.172. As a result, the SNMP traps reach the target with an incorrect IP address of the ESXi SNMP agent in the payload. -
If a virtual machine runs applications which use particular 3D state and shaders, and if software 3D acceleration is used, the virtual machine might stop responding.
-
When the guest sends IPv6 packets with checksum offloading and routing headers enabled, the ESXi host might fail with a purple diagnostic screen.
-
When an ESXi host removes a PShare Hint from a PShare chain, if the PShare chain is corrupted, the ESXi host might fail with a purple diagnostic screen and an error similar to:
0x43920bd9bdc0:[0x41800c5930d6]VmMemCow_PShareRemoveHint
0x43920bd9be00:[0x41800c593172]VmMemCowPFrameRemoveHint
0x43920bd9be30:[0x41800c594fc8]VmMemCowPShareFn@vmkernel
0x43920bd9bf80:[0x41800c500ef4]VmAssistantProcessTasks@vmkernel
0x43920bd9bfe0:[0x41800c6cae05]CpuSched_StartWorld@vmkernel -
When a virtual machine uses a remote serial port connected to a virtual serial port concentrator (vSPC), where a DNS name is the address of the vSPC, if a DNS outage occurs during a migration by using vMotion, the virtual machine might power off.
-
When you change the hardware compatibility level of a virtual machine with a PCI passthrough device, the device might be removed from the virtual machine during the procedure.
-
When you use the command-line and run the command to set the pNIC alias, the MAC address that is saved in esx.conf for the corresponding pNIC is malformed. The command that you use is the following:
localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias store --bus-type pci --alias vmnicX --bus-address XX
. -
This problem occurs only when the SATP is
VMW_SATP_LOCAL
. In ESXi 6.0, if a claimrule with SATP asVMW_SATP_LOCAL
is added with an incorrectly formatted config option, then NMP/SATP plug-ins do not recognize the option and fail to discover the device when upgraded to a later ESXi release.
You might see log entries similar to the following:
2018-11-20T02:10:36.333Z cpu1:66089)WARNING: NMP: nmp_DeviceAlloc:2048: nmp_AddPathToDevice failed Bad parameter (195887111).
2018-11-20T02:10:36.333Z cpu1:66089)WARNING: ScsiPath: 6615: Plugin 'NMP' had an error (Bad parameter) while claiming path 'vmhba0:C0:T2:L0'. Skipping the path -
A race condition in the get-set firewall rule operations for DvFilter virtual switches can lead to a buffer overflow and corruption of the heap. As a result, an ESXi host might fail with a purple diagnostic screen.
-
A vCenter Server system hardware health status might not display failures on DIMMs, even though the server controllers display a warning. For instance, the Dell Remote Access Controller in a Dell M630 server might display a warning for a DIMM, but you see no alert or alarm in your vCenter Server system.
-
The update to the NVMe driver fixes the performance overhead issue on devices that run on Intel SSD DC P4600 Series.
-
The qlnativefc driver is updated to version 3.1.8.0.
-
The lsi_mr3 driver is updated to version MR 7.8.
-
The lsi_msgpt35 driver is updated to version 09.00.00.00-5vmw.
-
The i40en driver is updated to version 1.8.1.9-2.
-
The lsi_msgpt3 driver is updated to version 17.00.02.00-1vmw.
-
The lsi_msgpt2 driver is updated to version 20.00.06.00-2.
-
The ixgben driver adds queue pairing to optimize CPU efficiency.
-
The smartpqi driver is updated to version 1.0.1.553-28.
-
ESXi 6.7 Update 3 adds support for Broadcom 100G Network Adapters and multi-RSS feed to the bnxtnet driver.
-
An issue with the elxnet driver might cause NICs that use this driver to lose connectivity. In the ESXi host kernel logs, you might see warnings similar to:
WARNING: elxnet: elxnet_workerThread:2122: [vmnic3] GetStats: MCC cmd timed out. timeout:36
WARNING: elxnet: elxnet_detectDumpUe:416: [vmnic3] Recoverable HW error detected
-
The nenic driver is updated to version 1.0.29.0.
-
If you modify the NetworkNicTeamingPolicy or SecurityPolicy of a host profile and change it to the default settings, you might fail to apply the host profile to an ESXi host and receive an error such as:
Error: A specified parameter was not correct.
-
If you customize the Rx and Tx ring sizes of physical NICs to boost network performance by using the following commands:
esxcli network nic ring current set -n <vmnicX> -t <value of the Tx ring size>
esxcli network nic ring current set -n <vmnicX> -r <value of the Rx ring size>
the settings might not be persistent across ESXi host reboots. -
If you use a vSphere Distributed Switch or an NSX-T managed vSphere Distributed Switch with the container feature enabled, and your ESXi host is of version 6.7 or earlier, the ESXi host might fail with a purple diagnostic screen during an upgrade or migration operation.
-
When the guest sends IPv6 packets with checksum offloading and routing headers enabled, the ESXi host might fail with a purple diagnostic screen.
-
When you use the command-line and run the command to set the pNIC alias, the MAC address that is saved in esx.conf for the corresponding pNIC is malformed. The command that you use is the following:
localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias store --bus-type pci --alias vmnicX --bus-address XX
. -
If the uplink name is not in a format such as vmnicX, where X is a number, LACP does not work.
-
When you configure the LAG, if a bundle member flaps, a wrong VMkernel observation message might appear for the LAG:
Failed Criteria: 0
. -
After a microcode update, sometimes it is necessary to re-enumerate the CPUID for virtual machines on an ESXi server. By using the configuration parameter
vmx.reboot.powerCycle = TRUE
you can schedule virtual machines for a power cycle when necessary. -
While a snapshot is being created, a virtual machine might power off and fail with an error similar to:
2019-01-01T01:23:40.047Z| vcpu-0| I125: DISKLIB-CTK : Failed to mmap change bitmap of size 167936: Cannot allocate memory.
2019-01-01T01:23:40.217Z| vcpu-0| I125: DISKLIB-LIB_BLOCKTRACK : Could not open change tracker /vmfs/volumes/DATASTORE_UUID/VM_NAME/VM_NAME_1-ctk.vmdk: Not enough memory for change tracking.
The error is a result of insufficient memory allocated for the CBT bitmap. - If the same NIC driver on an ESXi host has some physical NICs that do not support SR-IOV and others that do, you might see different information from the ESXCLI command set and the vSphere Client or vSphere Web Client. For example, from the ESXCLI command set you might see that vmnics 16, 17 and 18 have SR-IOV configured, while in the vSphere Client you might see that vmnics 6, 7 and 8 have SR-IOV configured.
-
Profile Name | ESXi-6.7.0-20190801001s-standard |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | August 20, 2019 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2307421, 2360047, 2300903, 2262288, 2313239 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
The NTP daemon is updated to version ntp-4.2.8p13.
-
The ESXi userworld libcurl library is updated to version 7.64.1.
-
ESXi hosts might answer any SNMPv3 GET request from the special user "", which is used for engine ID discovery, even if the user is not authenticated or is wrongly authenticated.
-
The ESXi userworld libxml2 library is updated to version 2.9.9.
-
The Python third-party library is updated to version 3.5.7.
-
Profile Name | ESXi-6.7.0-20190801001s-no-tools |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | August 20, 2019 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2307421, 2360047, 2300903, 2262288, 2313239 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
The NTP daemon is updated to version ntp-4.2.8p13.
-
The ESXi userworld libcurl library is updated to version 7.64.1.
-
ESXi hosts might answer any SNMPv3 GET request from the special user "", which is used for engine ID discovery, even if the user is not authenticated or is wrongly authenticated.
-
The ESXi userworld libxml2 library is updated to version 2.9.9.
-
The Python third-party library is updated to version 3.5.7.
-
Known Issues
The known issues are grouped as follows.
Miscellaneous Issues
- Manually triggering a non-maskable interrupt (NMI) might not work on a vCenter Server system with an AMD EPYC 7002 series processor
Requesting an NMI from the hardware management console (BMC) or by pressing a physical NMI button should cause ESXi hosts to fail with a purple diagnostic screen and dump core. Instead, nothing happens and ESXi continues running.
Workaround: Disable the use of the interrupt remapper by setting the kernel boot option
iovDisableIR
to TRUE:
- Set
iovDisableIR=TRUE
by using this command: # esxcli system settings kernel set -s iovDisableIR -v TRUE
- Reboot the ESXi host.
- After the reboot, verify that
iovDisableIR
is set to TRUE:
# esxcli system settings kernel list | grep iovDisableIR
Do not apply this workaround unless you need it to solve this specific problem.
- Set
- A virtual machine that has a PCI passthrough device assigned to it might fail to power on in a vCenter Server system with an AMD EPYC 7002 series processor
In specific vCenter Server system configurations and devices, such as AMD EPYC 7002 series processors, a virtual machine that has a PCI passthrough device assigned to it might fail to power on. In the vmkernel log, you can see a similar message:
4512 2019-08-06T06:09:55.058Z cpu24:1001397137)AMDIOMMU: 611: IOMMU 0000:20:00.2: Failed to allocate IRTE for IOAPIC ID 243 vector 0x3f
4513 2019-08-06T06:09:55.058Z cpu24:1001397137)WARNING: IOAPIC: 1238: IOAPIC Id 243: Failed to allocate IRTE for vector 0x3f
Workaround: Disable the use of the interrupt remapper by setting the kernel boot option
iovDisableIR
to TRUE:
- Set
iovDisableIR=TRUE
by using this command: # esxcli system settings kernel set -s iovDisableIR -v TRUE
- Reboot the ESXi host.
- After the reboot, verify that
iovDisableIR
is set to TRUE:
# esxcli system settings kernel list | grep iovDisableIR
Do not apply this workaround unless you need it to solve this specific problem.
- Set
- ESXi 6.7 host profiles might not support PEERDNS from vmknics that are attached to a VMware NSX-T based opaque network
In certain cases, host profiles on ESXi 6.7 might not support PEERDNS from vmknics that are attached to an NSX-T based opaque network. The issue is specific to the scenario in which you navigate to Host > Configure > TCP/IP configuration > Default > DNS configuration > Obtain settings automatically from a VMkernel network adapter. If you select a vmknic that is connected to an NSX-T based opaque network, host profiles extracted from this host do not work. Such host profiles lack the
Portgroup backing the vmkernel network adapter to be used for DNS
policy in the DNS subprofile. Editing the host profile gives a validation error.
Workaround: None. Do not select PEERDNS from a vmknic that is connected to an NSX-T based opaque network.
- Some Intel network adapters might not be able to receive LLDP packets
Intel network adapters of the series X710, XL710, XXV710, and X722 might not be able to receive LLDP packets. This is due to an on-chip agent running in the firmware that consumes LLDP frames received on the LAN ports. The consumption prevents software that depends on LLDP frames, such as virtual distributed switches, from receiving LLDP packets.
Workaround: To enable Intel network adapters to receive LLDP packets, you must:
- Use a firmware version >= 6.0
- Use a driver version >= 1.7.x
- Configure the LLDP module parameter of i40en to 0.
For example, to disable LLDP agent on all 4 NICs, run the command:
# esxcli system module parameters set -p "LLDP=0,0,0,0" i40en
The number of 0 parameters should be equal to the number of i40en uplinks on the ESXi host. - If you use Single Root I/O Virtualization (SR-IOV) and modify a physical function (PF) for the ixgben driver, some virtual machines might lose connectivity
If you use SR-IOV and reset a PF, or first down a PF and then up the PF, virtual machines that use an SR-IOV virtual function for networking might lose connectivity.
Workaround: Reboot the ESXi server and then power on the virtual machines. However, if you use DHCP, the IP addresses of the virtual machines might change.
- If you set identical MAC and uplink port addresses on devices using the i40en driver, vmknics might receive duplicate packets
If you manually set the MAC address of a vmknic to be the same as the uplink port address on devices using the i40en driver, the vmknic might receive duplicate packets under heavy traffic.
Workaround: Do not set the MAC address of a vmknic to be the same as the uplink port address. For the vmk0 management network, where vmnics use the same MAC address and uplink port address by default, do not use these vmnics for heavy workloads.