ESXi 6.7 Update 1 | 16 OCT 2018 | ISO Build 10302608
Check for additions and updates to these release notes.
What's in the Release Notes
The release notes cover the following topics:
- What's New
- Earlier Releases of ESXi 6.7
- Installation Notes for This Release
- Upgrade Notes for This Release
- Patches Contained in this Release
- Resolved Issues
- Known Issues
What's New
- ESXi 6.7 Update 1 adds a Microsemi Smart PQI (smartpqi) LSU plug-in to support attached disk management operations on the HPE ProLiant Gen10 Smart Array Controller.
- ESXi 6.7 Update 1 adds Quick Boot support for Intel i40en and ixgben Enhanced Network Stack (ENS) drivers, and extends support for HPE ProLiant and Synergy servers. For more information, see VMware knowledge base article 52477.
- ESXi 6.7 Update 1 enables a precheck when upgrading ESXi hosts by using the software profile commands of the ESXCLI command set to ensure upgrade compatibility (see the command sketch after this list).
- ESXi 6.7 Update 1 adds nfnic driver support for Cisco UCS Fibre Channel over Ethernet (FCoE).
- ESXi 6.7 Update 1 adds support for Namespace Globally Unique Identifier (NGUID) in the NVMe driver.
- With ESXi 6.7 Update 1, you can run the following storage ESXCLI commands to enable status LED on Intel VMD based NVMe SSDs without downloading Intel CLI:
esxcli storage core device set -l locator -d
esxcli storage core device set -l error -d
esxcli storage core device set -l off -d
- ESXi 6.7 Update 1 adds APIs to avoid ESXi host reboot while configuring ProductLocker and to enable the management of the VMware Tools configuration by the CloudAdmin role in cloud SDDC without needing access to Host Profiles.
- ESXi 6.7 Update 1 adds the advanced configuration option EnablePSPLatencyPolicy to claim devices with the latency based Round Robin path selection policy. The EnablePSPLatencyPolicy configuration option does not affect existing device or vendor-specific claim rules. You can also enable logging to display paths configuration when using the EnablePSPLatencyPolicy option.
- With ESXi 6.7 Update 1, you can use vSphere vMotion to migrate virtual machines configured with NVIDIA virtual GPU types to other hosts with compatible NVIDIA Tesla GPUs. For more information on the supported NVIDIA versions, see the VMware Compatibility Guide.
- With ESXi 6.7 Update 1, Update Manager Download Service (UMDS) does not require a Database and the installation procedure is simplified. For more information, see the vSphere Update Manager Installation and Administration Guide.
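For reference, the following is a minimal command sketch of an upgrade that triggers the new ESXCLI precheck, assuming the offline bundle has been uploaded to a datastore path of your choosing (the path and datastore name below are hypothetical) and using the standard image profile name from this release; the compatibility precheck runs as part of the profile update:
# Path and datastore name are placeholders; adjust to your environment:
esxcli software profile update -d /vmfs/volumes/datastore1/update-from-esxi6.7-6.7_update01.zip -p ESXi-6.7.0-20181002001-standard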
For more information on VMware vSAN issues, see VMware vSAN 6.7 Update 1 Release Notes.
Earlier Releases of ESXi 6.7
Features and known issues of ESXi are described in the release notes for each release. Release notes for earlier releases of ESXi 6.7 are:
For internationalization, compatibility, installation and upgrades, open source components, and product support notices, see the VMware vSphere 6.7 Release Notes.
Installation Notes for This Release
VMware Tools Bundling Changes in ESXi 6.7 Update 1
In ESXi 6.7 Update 1, a subset of VMware Tools 10.3.2 ISO images is bundled with the ESXi 6.7 Update 1 host.
The following VMware Tools 10.3.2 ISO images are bundled with ESXi:
- windows.iso: VMware Tools image for Windows Vista or higher
- linux.iso: VMware Tools image for Linux OS with glibc 2.5 or higher
The following VMware Tools 10.3.2 ISO images are available for download:
- solaris.iso: VMware Tools image for Solaris
- freebsd.iso: VMware Tools image for FreeBSD
- darwin.iso: VMware Tools image for OS X
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
- VMware Tools 10.3.2 Release Notes
- Updating to VMware Tools 10 - Must Read
- VMware Tools for hosts provisioned with Auto Deploy
- Updating VMware Tools
Upgrade Notes for This Release
IMPORTANT: Upgrade from ESXi650-201811002 to ESXi 6.7 Update 1 is not supported, because that patch was released after ESXi 6.7 Update 1 and the upgrade is considered a back-in-time upgrade.
Patches Contained in this Release
This release contains all bulletins for ESXi that were released prior to the release date of this product.
Build Details
Download Filename: | update-from-esxi6.7-6.7_update01.zip |
Build: | 10302608 (Security-only: 10176879) |
Download Size: | 450.5 MB |
md5sum: | 0ae2ab210d1ece9b3e22b5db2fa3a08e |
sha1checksum: | b11a856121d2498453cef7447294b461f62745a3 |
Host Reboot Required: | Yes |
Virtual Machine Migration or Shutdown Required: | Yes |
Bulletins
This release contains general and security-only bulletins. Security-only bulletins are applicable to new security fixes only. No new bug fixes are included, but bug fixes from earlier patch and update releases are included.
If the installation of all new security and bug fixes is required, you must apply all bulletins in this release. In some cases, the general release bulletin will supersede the security-only bulletin. This is not an issue as the general release bulletin contains both the new security and bug fixes.
The security-only bulletins are identified by bulletin IDs that end in "SG". For information on patch and update classification, see KB 2014447.
For more information about the individual bulletins, see the My VMware page and the Resolved Issues section.
Bulletin ID | Category | Severity |
---|---|---|
ESXi670-201810201-UG | Bugfix | Critical |
ESXi670-201810202-UG | Bugfix | Important |
ESXi670-201810203-UG | Bugfix | Important |
ESXi670-201810204-UG | Bugfix | Critical |
ESXi670-201810205-UG | Bugfix | Moderate |
ESXi670-201810206-UG | Bugfix | Critical |
ESXi670-201810207-UG | Enhancement | Important |
ESXi670-201810208-UG | Enhancement | Important |
ESXi670-201810209-UG | Bugfix | Important |
ESXi670-201810210-UG | Bugfix | Important |
ESXi670-201810211-UG | Enhancement | Important |
ESXi670-201810212-UG | Bugfix | Important |
ESXi670-201810213-UG | Bugfix | Critical |
ESXi670-201810214-UG | Bugfix | Critical |
ESXi670-201810215-UG | Bugfix | Important |
ESXi670-201810216-UG | Bugfix | Important |
ESXi670-201810217-UG | Bugfix | Important |
ESXi670-201810218-UG | Bugfix | Important |
ESXi670-201810219-UG | Bugfix | Moderate |
ESXi670-201810220-UG | Bugfix | Important |
ESXi670-201810221-UG | Bugfix | Important |
ESXi670-201810222-UG | Bugfix | Important |
ESXi670-201810223-UG | Bugfix | Important |
ESXi670-201810224-UG | Bugfix | Important |
ESXi670-201810225-UG | Bugfix | Important |
ESXi670-201810226-UG | Bugfix | Important |
ESXi670-201810227-UG | Bugfix | Important |
ESXi670-201810228-UG | Bugfix | Important |
ESXi670-201810229-UG | Bugfix | Important |
ESXi670-201810230-UG | Bugfix | Important |
ESXi670-201810231-UG | Bugfix | Important |
ESXi670-201810232-UG | Bugfix | Important |
ESXi670-201810233-UG | Bugfix | Important |
ESXi670-201810234-UG | Bugfix | Important |
ESXi670-201810101-SG | Security | Important |
ESXi670-201810102-SG | Security | Moderate |
ESXi670-201810103-SG | Security | Important |
IMPORTANT: For clusters using VMware vSAN, you must first upgrade the vCenter Server system. Upgrading only ESXi is not supported.
Before an upgrade, always verify compatible upgrade paths from earlier versions of ESXi and vCenter Server to the current version in the VMware Product Interoperability Matrix.
Image Profiles
VMware patch and update releases contain general and critical image profiles.
The general release image profile applies to new bug fixes.
Image Profile Name |
ESXi-6.7.0-20181002001-standard |
ESXi-6.7.0-20181002001-no-tools |
ESXi-6.7.0-20181001001s-standard |
ESXi-6.7.0-20181001001s-no-tools |
Resolved Issues
The resolved issues are grouped as follows.
- ESXi670-201810201-UG
- ESXi670-201810202-UG
- ESXi670-201810203-UG
- ESXi670-201810204-UG
- ESXi670-201810205-UG
- ESXi670-201810206-UG
- ESXi670-201810207-UG
- ESXi670-201810208-UG
- ESXi670-201810209-UG
- ESXi670-201810210-UG
- ESXi670-201810211-UG
- ESXi670-201810212-UG
- ESXi670-201810213-UG
- ESXi670-201810214-UG
- ESXi670-201810215-UG
- ESXi670-201810216-UG
- ESXi670-201810217-UG
- ESXi670-201810218-UG
- ESXi670-201810219-UG
- ESXi670-201810220-UG
- ESXi670-201810221-UG
- ESXi670-201810222-UG
- ESXi670-201810223-UG
- ESXi670-201810224-UG
- ESXi670-201810225-UG
- ESXi670-201810226-UG
- ESXi670-201810227-UG
- ESXi670-201810228-UG
- ESXi670-201810229-UG
- ESXi670-201810230-UG
- ESXi670-201810231-UG
- ESXi670-201810232-UG
- ESXi670-201810233-UG
- ESXi670-201810234-UG
- ESXi670-201810101-SG
- ESXi670-201810102-SG
- ESXi670-201810103-SG
- ESXi-6.7.0-20181002001-standard
- ESXi-6.7.0-20181002001-no-tools
- ESXi-6.7.0-20181001001s-standard
- ESXi-6.7.0-20181001001s-no-tools
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 1220910, 2036262, 2036302, 2039186, 2046226, 2057603, 2058908, 2066026, 2071482, 2072977, 2078138, 2078782, 2078844, 2079807, 2080427, 2082405, 2083581, 2084722, 2085117, 2086803, 2086807, 2089047, 2096126, 2096312, 2096947, 2096948, 2097791, 2098170, 2098868, 2103579, 2103981, 2106747, 2107087, 2107333, 2110971, 2118588, 2119610, 2119663, 2120346, 2126919, 2128933, 2129130, 2129181, 2131393, 2131407, 2133153, 2133588, 2136004, 2137261, 2145089, 2146206, 2146535, 2149518, 2152380, 2154913, 2156841, 2157501, 2157817, 2163734, 2165281, 2165537, 2165567, 2166114, 2167098, 2167878, 2173810, 2186253, 2187008 |
CVE numbers | N/A |
This patch updates the esx-base, esx-update, vsan, and vsanhealth VIBs to resolve the following issues:
- PR 2084722: If you delete the support bundle folder of a virtual machine from an ESXi host, the hostd service might fail
If you manually delete the support bundle folder of a virtual machine downloaded in the /scratch/downloads directory of an ESXi host, the hostd service might fail. The failure occurs when hostd automatically tries to delete folders in the /scratch/downloads directory one hour after the files are created.
This issue is resolved in this release.
- PR 2083581: You might not be able to use a smart card reader as a passthrough device by using the feature Support vMotion while device is connected
If you edit the configuration file of an ESXi host to enable a smart card reader as a passthrough device, you might not be able to use the reader if you have also enabled the feature Support vMotion while a device is connected.
This issue is resolved in this release.
- PR 2058908: VMkernel logs in a VMware vSAN environment might be flooded with the message Unable to register file system
In vSAN environments, VMkernel logs that record activities related to virtual machines and ESXi might be flooded with the message Unable to register file system.
This issue is resolved in this release.
- PR 2106747: The vSphere Web Client might display incorrect storage allocation after you increase the disk size of a virtual machine
If you use the vSphere Web Client to increase the disk size of a virtual machine, the vSphere Web Client might not fully reflect the reconfiguration, and display the storage allocation of the virtual machine as unchanged.
This issue is resolved in this release.
- PR 2089047: Using the variable windows extension option in Windows Deployment Services (WDS) on virtual machines with EFI firmware might result in slow PXE booting
If you attempt to PXE boot a virtual machine that uses EFI firmware, with a vmxnet3 network adapter and WDS, and you have not disabled the variable windows extension option in WDS, the virtual machine might boot extremely slowly.
This issue is resolved in this release.
- PR 2103579: Apple devices with an iOS version later than iOS 11 cannot connect to virtual machines with an OS X version later than OS X 10.12
Due to issues with Apple USB, devices with an iOS version later than iOS 11 cannot connect to virtual machines with an OS X version later than OS X 10.12.
This issue is resolved in this release. To enable connection in earlier releases of ESXi, append usb.quirks.darwinVendor45 = TRUE as a new option to the .vmx configuration file.
- PR 2096126: Applying a host profile with enabled Stateful Install to an ESXi 6.7 host by using vSphere Auto Deploy might fail
If you enable the Stateful Install feature on a host profile, and the management VMkernel NIC is connected to a distributed virtual switch, applying the host profile to another ESXi 6.7 host by using vSphere Auto Deploy might fail during a PXE boot. The host remains in maintenance mode.
This issue is resolved in this release.
- PR 2072977: Virtual machine disk consolidation might fail if the virtual machine has snapshots taken with Content Based Read Cache (CBRC) enabled and then the feature is disabled
If a virtual machine has snapshots taken by using CBRC and CBRC is later disabled, disk consolidation operations might fail with the error A specified parameter was not correct: spec.deviceChange.device, due to a digest file deleted after CBRC was disabled. An alert Virtual machine disks consolidation is needed. is displayed until the issue is resolved. This fix prevents the issue, but it might still exist for virtual machines that have snapshots taken with CBRC enabled and later disabled.
This issue is resolved in this release.
- PR 2107087: An ESXi host might fail with a purple diagnostic screen
An ESXi host might fail with a purple diagnostic screen due to a memory allocation problem.
This issue is resolved in this release.
- PR 2066026: The esxtop utility might report incorrect statistics for DAVG/cmd and KAVG/cmd on VAAI-supported LUNs
An incorrect calculation in the VMkernel causes the esxtop utility to report incorrect statistics for the average device latency per command (DAVG/cmd) and the average ESXi VMkernel latency per command (KAVG/cmd) on LUNs with VMware vSphere Storage APIs Array Integration (VAAI).
This issue is resolved in this release.
- PR 2036262: Claim rules must be manually added to ESXi for the following DELL MD Storage array models: MD32xx, MD32xxi, MD36xxi, MD36xxf, MD34xx, MD38xxf, and MD38xxi
This fix sets Storage Array Type Plug-in (SATP) to VMW_SATP_ALUA, Path Selection Policy (PSP) to VMW_PSP_RR, and Claim Options to tpgs_on as default for the following DELL MD Storage array models: MD32xx, MD32xxi, MD36xxi, MD36xxf, MD34xx, MD38xxf, and MD38xxi.
This issue is resolved in this release.
- PR 2082405: A virtual machine cannot connect to a distributed port group after cold migration or VMware vSphere High Availability failover
After cold migration or a vSphere High Availability failover, a virtual machine might fail to connect to a distributed port group, because the port is deleted before the virtual machine powers on.
This issue is resolved in this release.
- PR 2078782: I/O commands might fail with INVALID FIELD IN CDB error
ESXi hosts might not reflect the MAXIMUM TRANSFER LENGTH parameter reported by the SCSI device in the Block Limits VPD page. As a result, I/O commands issued with a transfer size greater than the limit might fail with a similar log:
2017-01-24T12:09:40.065Z cpu6:1002438588)ScsiDeviceIO: SCSICompleteDeviceCommand:3033: Cmd(0x45a6816299c0) 0x2a, CmdSN 0x19d13f from world 1001390153 to dev "naa.514f0c5d38200035" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
This issue is resolved in this release.
- PR 2107333: Claim rules must be manually added to ESXi for HITACHI OPEN-V storage arrays
This fix sets Storage Array Type Plug-in (SATP) to VMW_SATP_ALUA, Path Selection Policy (PSP) to VMW_PSP_RR, and Claim Options to tpgs_on as default for HITACHI OPEN-V type storage arrays with Asymmetric Logical Unit Access (ALUA) support.
The fix also sets Storage Array Type Plug-in (SATP) to VMW_SATP_DEFAULT_AA, Path Selection Policy (PSP) to VMW_PSP_RR, and Claim Options to tpgs_off as default for HITACHI OPEN-V type storage arrays without ALUA support.
This issue is resolved in this release.
- PR 2078138: The backup process of a virtual machine with Windows Server guest OS might fail because the quiesced snapshot disk cannot be hot-added to a proxy virtual machine that is enabled for CBRC
If both the target virtual machine and the proxy virtual machine are enabled for CBRC, the quiesced snapshot disk cannot be hot-added to the proxy virtual machine, because the digest of the snapshot disk in the target virtual machine is disabled during the snapshot creation. As a result, the virtual machine backup process fails.
This issue is resolved in this release.
- PR 2098868: Calls to the HostImageConfigManager managed object might cause increased log activity in the syslog.log file
The syslog.log file might be repeatedly populated with messages related to calls to the HostImageConfigManager managed object.
This issue is resolved in this release.
- PR 2096312: While extending the coredump partition of an ESXi host, vCenter Server might report an alarm that no coredump target is configured
While extending the coredump partition of an ESXi host, vCenter Server might report an alarm that no coredump target is configured, because before increasing the partition size, the existing coredump partition is temporarily deactivated.
This issue is resolved in this release. For vCenter Server 6.7.0, work around this issue by running the following command:
# /bin/esxcfg-dumppart -C -D active --stdout 2> /dev/null
- PR 2057603: An ESXi host might fail with a purple diagnostic screen during shutdown or power off of virtual machines if you use EMC RecoverPoint
Due to a race condition in the VSCSI, if you use EMC RecoverPoint, an ESXi host might fail with a purple diagnostic screen during the shutdown or power off of a virtual machine.
This issue is resolved in this release.
- PR 2039186: VMware vSphere Virtual Volumes metadata might not be updated with associated virtual machines and make virtual disk containers untraceable
vSphere Virtual Volumes set with the VMW_VVolType metadata key Other and the VMW_VVolTypeHint metadata key Sidecar might not get the VMW_VmID metadata key of the associated virtual machines and cannot be tracked by using IDs.
This issue is resolved in this release.
- PR 2036302: SolidFire arrays might not get optimal performance without re-configuration of SATP claim rules
This fix sets Storage Array Type Plug-in (SATP) claim rules for SolidFire SSD SAN storage arrays to VMW_SATP_DEFAULT_AA and the Path Selection Policy (PSP) to VMW_PSP_RR with 10 I/O operations per second by default to achieve optimal performance.
This issue is resolved in this release.
- PR 2112769: A fast reboot of a system with an LSI controller or a reload of an LSI driver might result in unresponsive or inaccessible datastores
A fast reboot of a system with an LSI controller or a reload of an LSI driver, such as lsi_mr3, might put the disks behind the LSI controller in an offline state. The LSI controller firmware sends a SCSI STOP UNIT command during unload, but a corresponding SCSI START UNIT command might not be issued during reload. If disks go offline, all datastores hosted on these disks become unresponsive or inaccessible.
This issue is resolved in this release.
- PR 2119610: Migration of a virtual machine with a Filesystem Device Switch (FDS) on a vSphere Virtual Volumes datastore by using VMware vSphere vMotion might cause multiple issues
If you use vSphere vMotion to migrate a virtual machine with file device filters from a vSphere Virtual Volumes datastore to another host, and the virtual machine has any of Changed Block Tracking (CBT), VMware vSphere Flash Read Cache (VFRC), or I/O filters enabled, the migration might cause issues with any of these features. During the migration, the file device filters might not be correctly transferred to the host. As a result, you might see corrupted incremental backups in CBT, performance degradation of VFRC and cache I/O filters, corrupted replication I/O filters, and disk corruption when cache I/O filters are configured in write-back mode. You might also see issues with virtual machine encryption.
This issue is resolved in this release.
- PR 2128933: I/O submitted to a VMFSsparse snapshot might fail without an error
If a virtual machine is on a VMFSsparse snapshot, I/Os issued to the virtual machine might only partially be processed at VMFSsparse level, but upper layers, such as I/O Filters, might presume the transfer is successful. This might lead to data inconsistency. This fix sets a transient error status for reference from upper layers if an I/O is complete.
This issue is resolved in this release.
- PR 2046226: SCSI INQUIRY commands on Raw Device Mapping (RDM) LUNs might return data from the cache instead of querying the LUN
SCSI INQUIRY data is cached at the Pluggable Storage Architecture (PSA) layer for RDM LUNs, and responses to subsequent SCSI INQUIRY commands might be returned from the cache instead of querying the LUN. To avoid fetching cached SCSI INQUIRY data, apart from modifying the .vmx file of a virtual machine with RDM, with ESXi 6.7 Update 1 you can also ignore the SCSI INQUIRY cache by using the ESXCLI command esxcli storage core device inquirycache set --device <device-id> --ignore true.
With the ESXCLI option, a reboot of the virtual machine is not necessary.
This issue is resolved in this release.
- PR 2133588: Claim rules must be manually added to ESXi for Tegile IntelliFlash storage arrays
This fix sets Storage Array Type Plug-in (SATP) to VMW_SATP_ALUA, Path Selection Policy (PSP) to VMW_PSP_RR, and Claim Options to tpgs_on as default for Tegile IntelliFlash storage arrays with Asymmetric Logical Unit Access (ALUA) support. The fix also sets Storage Array Type Plug-in (SATP) to VMW_SATP_DEFAULT_AA, Path Selection Policy (PSP) to VMW_PSP_RR, and Claim Options to tpgs_off as default for Tegile IntelliFlash storage arrays without ALUA support.
This issue is resolved in this release.
- PR 2129181: If the state of the paths to a LUN changes while an ESXi host boots, booting might take longer
During an ESXi boot, if the commands issued during the initial device discovery fail with ASYMMETRIC ACCESS STATE CHANGE UA, the path failover might take longer because the commands are blocked within the ESXi host. This might lead to a longer ESXi boot.
You might see logs similar to:
2018-05-14T01:26:28.464Z cpu1:2097770)NMP: nmp_ThrottleLogForDevice:3689: Cmd 0x1a (0x459a40bfbec0, 0) to dev "eui.0011223344550003" on path "vmhba64:C0:T0:L3" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x6 0x2a 0x6. Act:FAILOVER
2018-05-14T01:27:08.453Z cpu5:2097412)ScsiDeviceIO: 3029: Cmd(0x459a40bfbec0) 0x1a, CmdSN 0x29 from world 0 to dev "eui.0011223344550003" failed H:0x5 D:0x0 P:0x0 Invalid sense data: 0x0 0x0 0x0.
2018-05-14T01:27:48.911Z cpu4:2097181)ScsiDeviceIO: 3029: Cmd(0x459a40bfd540) 0x25, CmdSN 0x2c from world 0 to dev "eui.0011223344550003" failed H:0x5 D:0x0 P:0x0 Invalid sense data: 0x0 0x0 0x0.
2018-05-14T01:28:28.457Z cpu1:2097178)ScsiDeviceIO: 3029: Cmd(0x459a40bfbec0) 0x9e, CmdSN 0x2d from world 0 to dev "eui.0011223344550003" failed H:0x5 D:0x0 P:0x0 Invalid sense data: 0x0 0x0 0x0.
2018-05-14T01:28:28.910Z cpu2:2097368)WARNING: ScsiDevice: 7641: GetDeviceAttributes during periodic probe'eui.0011223344550003': failed with I/O error
This issue is resolved in this release.
- PR 2119663: A VMFS6 datastore might report an incorrect out-of-space message
A VMFS6 datastore might incorrectly report that it is out of space, due to stale cache entries. Space allocation reports are corrected with the automatic updates of the cache entries, but this fix prevents the error even before an update.
This issue is resolved in this release.
- PR 2129130: NTP servers removed by using the vSphere Web Client might remain in the NTP configuration file
If you remove an NTP server from your configuration by using the vSphere Web Client, the server settings might remain in the /etc/ntp.conf file.
This issue is resolved in this release.
- PR 2078844: Virtual machines might stop responding during migration if you select Intel Merom or Penryn microprocessors from the VMware EVC Mode drop-down menu
If you select Intel Merom or Penryn microprocessors from the VMware EVC Mode drop-down menu before migration of virtual machines from an ESXi 6.0 host to an ESXi 6.5 or ESXi 6.7 host, the virtual machines might stop responding.
This issue is resolved in this release.
- PR 2080427: VMkernel Observations (VOB) events might generate unnecessary device performance warnings
The following two VOB events might be generated due to variations in the I/O latency in a storage array, but they do not report an actual problem in virtual machines:
1. Device naa.xxx performance has deteriorated. I/O latency increased from average value of 4114 microseconds to 84518 microseconds.
2. Device naa.xxx performance has improved. I/O latency reduced from 346115 microseconds to 67046 microseconds.
This issue is resolved in this release.
- PR 2137261: Exports of a large virtual machine by using the VMware Host Client might fail
The export of a large virtual machine by using the VMware Host Client might fail or end with incomplete VMDK files, because the lease issued by the ExportVm method might expire before the file transfer finishes.
This issue is resolved in this release.
- PR 2149518: The hostd process might intermittently fail due to a high number of networking tasks
A high number of networking tasks, specifically multiple calls to QueryNetworkHint(), might exceed the memory limit of the hostd process and cause it to fail intermittently.
This issue is resolved in this release.
- PR 2146535: Firmware event code logs might flood the vmkernel.log
Drives that do not support Block Limits VPD page 0xb0 might generate event code logs that flood the vmkernel.log.
This issue is resolved in this release.
- PR 2071482: Dell OpenManage Integration for VMware vCenter (OMIVV) might fail to identify some Dell modular servers from the Integrated Dell Remote Access Controller (iDRAC)
OMIVV relies on information from the iDRAC property hardware.systemInfo.otherIdentifyingInfo.ServiceTag to fetch the SerialNumber parameter for identifying some Dell modular servers. A mismatch in the serviceTag property might cause this integration to fail.
This issue is resolved in this release.
- PR 2145089: vSphere Virtual Volumes might become unresponsive if an API for Storage Awareness (VASA) provider loses binding information from the database
vSphere Virtual Volumes might become unresponsive due to an infinite loop that loads CPUs at 100% if a VASA provider loses binding information from the database. Hostd might also stop responding. You might see a fatal error message.
This issue is resolved in this release. This fix prevents infinite loops in case of database binding failures.
- PR 2146206: vSphere Virtual Volumes metadata might not be available to storage array vendor software
vSphere Virtual Volumes metadata might be available only when a virtual machine starts running. As a result, storage array vendor software might fail to apply policies that impact the optimal layout of volumes during regular use and after a failover.
This issue is resolved in this release. This fix makes vSphere Virtual Volumes metadata available at the time vSphere Virtual Volumes are configured, not when a virtual machine starts running.
- PR 2096947: getTaskUpdate API calls to deleted task IDs might cause log spew and higher API bandwidth consumption
If you use a VASA provider, you might see multiple getTaskUpdate calls to cancelled or deleted tasks. As a result, you might see a higher consumption of VASA bandwidth and a log spew.
This issue is resolved in this release.
- PR 2156841: Virtual machines using EFI and running Windows Server 2016 on AMD processors might stop responding during reboot
Virtual machines with hardware version 10 or earlier, using EFI, and running Windows Server 2016 on AMD processors, might stop responding during reboot. The issue does not occur if a virtual machine uses BIOS, or if the hardware version is 11 or later, or the guest OS is not Windows, or if the processors are Intel.
This issue is resolved in this release.
- PR 2163734: Enabling NetFlow might lead to high network latency
If you enable the NetFlow network analysis tool to sample every packet on a vSphere Distributed Switch port group, by setting the Sampling rate to 0, the network latency might reach 1000 ms if flows exceed 1 million.
This issue is resolved in this release. You can further optimize NetFlow performance by setting the ipfixHashTableSize IPFIX parameter to -p "ipfixHashTableSize=65536" -m ipfix by using the CLI (see the command sketch at the end of this list). To complete the task, reboot the ESXi host.
- PR 2136004: Stale tickets in RAM disks might cause some ESXi hosts to stop responding
Stale tickets, which are not deleted before hostd generates a new ticket, might exhaust RAM disks inodes. This might cause some ESXi hosts to stop responding.
This issue is resolved in this release.
- PR 2152380: The esxtop command-line utility might not display the queue depth of devices correctly
The esxtop command-line utility might not display an updated value of the queue depth of devices if the corresponding device path queue depth changes.
This issue is resolved in this release.
- PR 2133153: Reset to green functionality might not work as expected for hardware health alarms
Manual resets of hardware health alarms to return to a normal state, by selecting Reset to green after right-clicking the Alarms sidebar pane, might not work as expected. The alarms might reappear after 90 seconds.
This issue is resolved in this release. With this fix, you can customize the time after which the system generates a new alarm if the problem is not fixed. By using the advanced option esxcli system settings advanced list -o /UserVars/HardwareHeathSyncTime, you can disable the sync interval, or set it to your preferred time (see the command sketch at the end of this list).
- PR 2131393: Presence Sensors in the Hardware Status tab might display status Unknown
Previously, if a component such as a processor or a fan was missing from a vCenter Server system, the Presence Sensors displayed the status Unknown. However, the Presence Sensors do not have a health state associated with them.
This issue is resolved in this release. This fix filters components with Unknown status.
- PR 2154913: VMware Tools might display incorrect status if you configure the /productLocker directory on a shared VMFS datastore
If you configure the /productLocker directory on a shared VMFS datastore, when you migrate a virtual machine by using vSphere vMotion, VMware Tools in the virtual machine might display an incorrect status Unsupported.
This issue is resolved in this release.
- PR 2137041: Encrypted vSphere vMotion might fail due to insufficient migration heap space
For large virtual machines, Encrypted vSphere vMotion might fail due to insufficient migration heap space.
This issue is resolved in this release.
- PR 2165537: Backup proxy virtual machines might go to invalid state during backup
A virtual machine that does hundreds of disk hot add or remove operations without powering off or migrating might be terminated and become invalid. This affects backup solutions, where the backup proxy virtual machine might be terminated because of this issue.
In the hostd log, you might see content similar to:
2018-06-08T10:33:14.150Z info hostd[15A03B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datatore/path/to/proxyVM/proxyVM.vmx opID=5953cf5e-3-a90a user=vpxuser:ADMIN\admin] State Transition (VM_STATE_ON -> VM_STATE_RECONFIGURING) ...
2018-06-08T10:33:14.167Z error hostd[15640B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datastore/path/to/proxyVM/proxyVM.vmx opID=5953cf5e-3-a90a user=vpxuser:ADMIN\admin] Could not apply pre-reconfigure domain changes: Failed to add file policies to domain :171: world ID :0:Cannot allocate memory ...
2018-06-08T10:33:14.826Z info hostd[15640B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datastore/path/to/proxyVM/proxyVM.vmx opID=5953cf5e-3-a90a user=vpxuser:ADMIN\admin] State Transition (VM_STATE_RECONFIGURING -> VM_STATE_ON) ...
2018-06-08T10:35:53.120Z error hostd[15A44B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datastore/path/to/proxyVM/proxyVM.vmx] Expected permission (3) for /vmfs/volumes/path/to/backupVM not found in domain 171
In the vmkernel log, the content is similar to:
2018-06-08T05:40:53.264Z cpu49:68763 opID=4c6a367c)World: 12235: VC opID 5953cf5e-3-a90a maps to vmkernel opID 4c6a367c
2018-06-08T05:40:53.264Z cpu49:68763 opID=4c6a367c)WARNING: Heap: 3534: Heap domainHeap-54 already at its maximum size. Cannot expand.
This issue is resolved in this release.
- PR 2157501: You might see false hardware health alarms due to disabled or idle Intelligent Platform Management Interface (IPMI) sensors
Disabled IPMI sensors, or sensors that do not report any data, might generate false hardware health alarms.
This issue is resolved in this release. This fix filters out such alarms.
- PR 2167878: An ESXi host might fail with a purple diagnostic screen if you enable IPFIX in continuous heavy traffic
When you enable IPFIX and traffic is heavy with different flows, the system heartbeat might fail to preempt CPU from IPFIX for a long time and trigger a purple diagnostic screen.
This issue is resolved in this release.
- PR 2165567: Performance of long distance vSphere vMotion operations at high latency might deteriorate due to the max socket buffer size limit
You might see poor performance of long distance vSphere vMotion operations at high latency, such as 100 ms and higher, with high speed network links, such as 10 GbE, due to the hard-coded socket buffer limit of 16 MB.
This issue is resolved in this release. With this fix, you can configure the max socket buffer size parameter SB_MAX_ADJ.
- PR 2082940: An ESXi host might become unresponsive when a user creates and adds a vmknic in the vSphereProvisioning netstack and uses it for NFC traffic
When a user creates and adds a VMkernel NIC in the vSphereProvisioning netstack to use it for NFC traffic, the daemon that manages the NFC connections might fail to clean up old connections. This leads to the exhaustion of the allowed process limit. As a result, the host becomes unresponsive and unable to create processes for incoming SSH connections.
This issue is resolved in this release. If you already face the issue, you must restart the NFCD daemon.
- PR 2157817: You might not be able to create Virtual Flash File (VFFS) volumes by using a vSphere standard license
An attempt to create VFFS volumes by using a vSphere standard license might fail with the error License not available to perform the operation as feature 'vSphere Flash Read Cache' is not licensed with this edition. This is because the system uses VMware vSphere Flash Read Cache permissions to check if provisioning of VFFS is allowed.
This issue is resolved in this release. Since VFFS can be used outside of the Flash Read Cache, the check is removed.
- PR 1220910: Multi-Writer Locks cannot be set for more than 8 ESXi hosts in a shared environment
In a shared environment such as Raw Device Mapping (RDM) disks, you cannot use Multi-Writer Locks for virtual machines on more than 8 ESXi hosts. If you migrate a virtual machine to a ninth host, it might fail to power on with the error message Could not open xxxx.vmdk or one of the snapshots it depends on. (Too many users). This fix makes the advanced configuration option /VMFS3/GBLAllowMW visible (see the command sketch at the end of this list). You can manually enable or disable multi-writer locks for more than 8 hosts by using generation-based locking.
This issue is resolved in this release. For more details, see VMware knowledge base article 1034165.
- PR 2187719: Third Party CIM providers installed in an ESXi host might fail to work properly with their external applications
Third party CIM providers, by HPE and Stratus for instance, might start normally but lack some expected functionality when working with their external applications.
This issue is resolved in this release.
- PR 2096948: An ESXi host might become unresponsive if datastore heartbeating stops prematurely while closing VMware vSphere VMFS
An ESXi host might become unresponsive if datastore heartbeating stops prematurely while closing VMFS. As a result, the affinity manager cannot exit gracefully.
This issue is resolved in this release.
- PR 2166824: SNMP monitoring systems might report incorrect memory statistics on ESXi hosts
SNMP monitoring systems might report memory statistics on ESXi hosts that differ from the values that the free, top, or vmstat commands report.
This issue is resolved in this release. The fix aligns the formula that SNMP agents use with the other command-line tools.
- PR 2165281: If the target connected to an ESXi host supports only implicit ALUA and has only standby paths, the host might fail with a purple diagnostic screen at the time of device discovery
If the target connected to an ESXi host supports only implicit ALUA and has only standby paths, the host might fail with a purple diagnostic screen at the time of device discovery due to a race condition. You might see a backtrace similar to:
SCSIGetMultipleDeviceCommands (vmkDevice=0x0, result=0x451a0fc1be98, maxCommands=1, pluginDone=<optimized out>) at bora/vmkernel/storage/device/scsi_device_io.c:2476
0x00004180171688bc in vmk_ScsiGetNextDeviceCommand (vmkDevice=0x0) at bora/vmkernel/storage/device/scsi_device_io.c:2735
0x0000418017d8026c in nmp_DeviceStartLoop (device=device@entry=0x43048fc37350) at bora/modules/vmkernel/nmp/nmp_core.c:732
0x0000418017d8045d in nmpDeviceStartFromTimer (data=..., data@entry=...) at bora/modules/vmkernel/nmp/nmp_core.c:807
This issue is resolved in this release.
- PR 2166114: I/Os might fail on some paths of a device due to faulty switch error
I/Os might fail with an error FCP_DATA_CNT_MISMATCH, which translates into a HOST_ERROR in the ESXi host, and indicates link errors or faulty switches on some paths of the device.
This issue is resolved in this release. The fix adds a configuration option PSPDeactivateFlakyPath to PSP_RR to disable paths if I/Os are continuously failing with HOST_ERROR and to allow the active paths to handle the operations. The configuration is enabled by default, with the option to be disabled if not necessary.
- PR 2167098: All Paths Down (APD) is not triggered for LUNs behind IBM SAN Volume Controller (SVC) target even when no paths can service I/Os
In an ESXi configuration with multiple paths leading to LUNs behind IBM SVC targets, in case of connection loss on active paths and if at the same time the other connected paths are not in a state to service I/Os, the ESXi host might not detect this condition as APD even as no paths are actually available to service I/Os. As a result, I/Os to the device are not fast failed.
This issue is resolved in this release. The fix is disabled by default. To enable the fix, set the ESXi configuration option /Scsi/ExtendAPDCondition by running esxcfg-advcfg -s 1 /Scsi/ExtendAPDCondition.
- PR 2120346: Slow network performance of NUMA servers for devices using VMware NetQueue
You might see a slowdown in the network performance of NUMA servers for devices using VMware NetQueue, due to a pinning threshold. It is observed when Rx queues are pinned to a NUMA node by changing the default value of the advanced configuration NetNetqNumaIOCpuPinThreshold.
This issue is resolved in this release.
- After running the ESXCLI diskgroup rebuild command, you might see the error message Throttled: BlkAttr not ready for disk
vSAN creates blk attribute components after it creates a disk group. If the blk attrib component is missing from the API that supports the diskgroup rebuild command, you might see the following warning message in the vmkernel log: Throttled: BlkAttr not ready for disk.
This issue is resolved in this release.
- PR 2204439: Heavy I/Os issued to snapshot virtual machines using the SEsparse format might cause guest OS file system corruption
Heavy I/Os issued to snapshot virtual machines using the SEsparse format might cause guest OS file system corruption, or data corruption in applications.
This issue is resolved in this release.
- PR 1996879: Unable to perform some host-related operations, such as place hosts into maintenance mode
In previous releases, vSAN Observer ran under the init group. This group can occupy other groups' memory and starve the memory required for other host-related operations. To resolve the problem, you can run vSAN Observer under its own resource group.
This issue is resolved in this release.
- PR 2000367: Health check is unavailable: All hosts are in same network subnet
In vSphere 6.7 Update 1, some vSAN stretched clusters can have hosts in different L3 networks. Therefore, the following health check is no longer valid, and has been removed:
All hosts are in same network subnet.
This issue is resolved in this release.
- PR 2144043: API returns SystemError exception when calling querySyncingVsanObjects API on vmodl VsanSystemEx
You might get a SystemError exception when calling the querySyncingVsanObjects API on vmodl VsanSystemEx, due to memory pressure caused by a large number of synchronizing objects. You might see an error message in the Recent Tasks pane of the vSphere Client.
This issue is resolved in this release.
- PR 2074369: vSAN does not mark a disk as degraded even after failed I/Os reported on the disk
In some cases, vSAN takes a long time to mark a disk as degraded, even though I/O failures are reported by the disk and vSAN has stopped servicing any further I/Os from that disk.
This issue is resolved in this release.
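For reference, the following command sketches show one possible way to apply the configuration options mentioned in PR 2163734, PR 2133153, and PR 1220910 above. The esxcli namespaces used here are standard, but the specific values are assumptions or placeholders; verify option names, units, and values in your environment before use.
# IPFIX hash table size (module and parameter names taken from PR 2163734); reboot the host afterward:
esxcli system module parameters set -m ipfix -p "ipfixHashTableSize=65536"
# Hardware health sync interval (option name taken from PR 2133153); <interval> is a placeholder value:
esxcli system settings advanced set -o /UserVars/HardwareHeathSyncTime -i <interval>
# Multi-writer locks for more than 8 hosts (option name taken from PR 1220910); the value 1 as "enabled" is an assumption, see KB 1034165:
esxcli system settings advanced set -o /VMFS3/GBLAllowMW -i 1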
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the vmw-ahci
VIB.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the vmkusb
VIB.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2095671, 2095681 |
CVE numbers | N/A |
This patch updates the lpfc VIB to resolve the following issues:
- PR 2095671: ESXi hosts might stop responding or fail with a purple diagnostic screen if you unmap LUNs from an EMC VPLEX storage system
If you unmap a LUN from an EMC VPLEX storage system, ESXi hosts might stop responding or fail with a purple diagnostic screen. You might see logs similar to:
2018-01-26T18:53:28.002Z cpu6:33514)WARNING: iodm: vmk_IodmEvent:192: vmhba1: FRAME DROP event has been observed 400 times in the last one minute.
2018-01-26T18:53:37.271Z cpu24:33058)WARNING: Unable to deliver event to user-space.
This issue is resolved in this release.
- PR 2095681: ESXi hosts might disconnect from an EMC VPLEX storage system upon heavy I/O load
The HBA driver of an ESXi host might disconnect from an EMC VPLEX storage system upon heavy I/O load and not reconnect when I/O paths are available.
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Moderate |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2124061 |
CVE numbers | N/A |
This patch updates the lsu-hp-hpsa-plugin VIB to resolve the following issue:
- PR 2124061: VMware vSAN with HPE ProLiant Gen9 Smart Array Controllers might not light locator LEDs on the correct disk
In a vSAN cluster with HPE ProLiant Gen9 Smart Array Controllers, such as P440 and P840, the locator LEDs might not be lighted on the correct failed device.
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2095671, 2095681 |
CVE numbers | N/A |
This patch updates the brcmfcoe VIB to resolve the following issues:
- PR 2095671: ESXi hosts might stop responding or fail with a purple diagnostic screen if you unmap LUNs from an EMC VPLEX storage system
If you unmap a LUN from an EMC VPLEX storage system, ESXi hosts might stop responding or fail with a purple diagnostic screen. You might see logs similar to:
2018-01-26T18:53:28.002Z cpu6:33514)WARNING: iodm: vmk_IodmEvent:192: vmhba1: FRAME DROP event has been observed 400 times in the last one minute.
2018-01-26T18:53:37.271Z cpu24:33058)WARNING: Unable to deliver event to user-space.
This issue is resolved in this release.
- PR 2095681: ESXi hosts might disconnect from an EMC VPLEX storage system upon heavy I/O load
The HBA driver of an ESXi host might disconnect from an EMC VPLEX storage system upon heavy I/O load and not reconnect when I/O paths are available.
This issue is resolved in this release.
Patch Category | Enhancement |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the lsu-smartpqi-plugin
VIB.
ESXi 6.7 Update 1 enables the Microsemi Smart PQI (smartpqi) LSU plug-in to support attached disk management operations on the HPE ProLiant Gen10 Smart Array Controller.
Patch Category | Enhancement |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the lsu-intel-vmd-plugin
VIB.
With ESXi 6.7 Update 1, you can run the following storage ESXCLI commands to enable status LED on Intel VMD based NVMe SSDs without downloading Intel CLI:
esxcli storage core device set -l locator -d
esxcli storage core device set -l error -d
esxcli storage core device set -l off -d
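The -d flag takes the device identifier. A minimal usage sketch, assuming a hypothetical NVMe device name obtained from the device list:
# Find the identifier of the Intel VMD based NVMe SSD (the device name below is hypothetical):
esxcli storage core device list
esxcli storage core device set -l locator -d t10.NVMe____EXAMPLE_DEVICE_ID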
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the cpu-microcode
VIB.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the elxnet
VIB.
Patch Category | Enhancement |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the nfnic
VIB.
ESXi 6.7 Update 1 enables nfnic driver support for Cisco UCS Fibre Channel over Ethernet (FCoE).
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the ne1000
VIB.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the nvme
VIB.
ESXi 6.7 Update 1 adds support for NGUID in the NVMe driver.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2112193 |
CVE numbers | N/A |
This patch updates the lsu-lsi-lsi-mr3-plugin VIB to resolve the following issue:
- PR 2112193: An ESXi host and hostd might become unresponsive while using the lsi_mr3 driver
An ESXi host and hostd might become unresponsive due to memory exhaustion caused by the lsi_mr3 disk serviceability plug-in.
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the smartpqi
VIB.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the nvmxnet3
VIB.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2153838 |
CVE numbers | N/A |
This patch updates the i40en
VIB.
- PR 2153838: Wake-on-LAN (WOL) might not work for NICs of the Intel X722 series in IPv6 networks
The Intel native i40en driver in ESXi might not work with NICs of the X722 series in IPv6 networks and the Wake-on-LAN service might fail.
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the ipmi-ipmi-devintf
VIB.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2142221 |
CVE numbers | N/A |
This patch updates the ixgben
VIB.
- PR 2142221: You might see idle alert logs Device 10fb does not support flow control autoneg
You might see colored logs Device 10fb does not support flow control autoneg as alerts, but this is a regular log that reflects the status of flow control support of certain NICs. On some OEM images, this log might display frequently, but it does not indicate any issues with the ESXi host.
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2106647 |
CVE numbers | N/A |
This patch updates the qedentv VIB to resolve the following issue:
- PR 2106647: QLogic FastLinQ QL41xxx ethernet adapters might not create virtual functions after the single root I/O virtualization (SR-IOV) interface is enabled
Some QLogic FastLinQ QL41xxx adapters might not create virtual functions after SR-IOV is enabled on the adapter and the host is rebooted.
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the nenic
VIB.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the bnxtroce
VIB.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the ipmi-ipmi-msghandler
VIB.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the ipmi-ipmi-si-drv
VIB.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the iser
VIB.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the vmkfcoe
VIB.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the lsi-mr3
VIB.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the lsi-msgpt35
VIB.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the vmware-esx-esxcli-nvme-plugin
VIB.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2156445 |
CVE numbers | N/A |
This patch updates the ntg3
VIB.
- PR 2156445: Oversize packets might cause NICs using the ntg3 driver to temporarily stop sending packets
On rare occasions, NICs using the ntg3 driver, such as the Broadcom BCM5719 and 5720 GbE NICs, might temporarily stop sending packets after a failed attempt to send an oversize packet. The ntg3 driver version 4.1.3.2 resolves this problem.
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the nhpsa
VIB.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the lsi-msgpt3
VIB.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the lsi-msgpt2
VIB.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the mtip32xx-native
VIB.
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2093391, 2099951, 2093433, 2098003, 2155702 |
CVE numbers | CVE-2018-6974 |
This patch updates the esx-base, vsan, and vsanhealth VIBs to resolve the following issues:
- Update to libxml2 library
The ESXi userworld libxml2 library is updated to version 2.9.7.
- Update to the Network Time Protocol (NTP) daemon
The NTP daemon is updated to version ntp-4.2.8p11.
- Update of the SQLite database
The SQLite database is updated to version 3.23.1.
- Update to the libcurl library
The ESXi userworld libcurl library is updated to libcurl-7.59.0.
- Update to OpenSSH
The OpenSSH version is updated to 7.7p1.
- Update to OpenSSL library
The ESXi userworld OpenSSL library is updated to version openssl-1.0.2o.
- Update to the Python library
The Python third-party library is updated to version 3.5.5.
ESXi has an out-of-bounds read vulnerability in the SVGA device that might allow a guest to execute code on the host. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2018-6974 to this issue.
Patch Category | Security |
Patch Severity | Moderate |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the tools-light
VIB.
The Windows pre-Vista ISO image for VMware Tools is no longer packaged with ESXi. The Windows pre-Vista ISO image is available for download by users who require it. For download information, see the Product Download page.
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the esx-ui
VIB.
Profile Name | ESXi-6.7.0-20181002001-standard |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | October 16, 2018 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 1220910 , 2036262 , 2036302 , 2039186 , 2046226 , 2057603 , 2058908 , 2066026 , 2071482 , 2072977 , 2078138 , 2078782 , 2078844 , 2079807 , 2080427 , 2082405 , 2083581 , 2084722 , 2085117 , 2086803 , 2086807 , 2089047 , 2096126 , 2096312 , 2096947 , 2096948 , 2097791 , 2098170 , 2098868 , 2103579 , 2103981 , 2106747 , 2107087 , 2107333 , 2110971 , 2118588 , 2119610 , 2119663 , 2120346 , 2126919 , 2128933 , 2129130 , 2129181 , 2131393 , 2131407 , 2133153 , 2133588 , 2136004 , 2137261 , 2139127 , 2145089 , 2146206 , 2146535 , 2149518 , 2152380 , 2154913 , 2156841 , 2157501 , 2157817 , 2163734 , 2165281 , 2165537 , 2165567 , 2166114 , 2167098 , 2167878 , 2173810 , 2186253 , 2187008 , 2204439 , 2095671 , 2095681 , 2124061 , 2095671 , 2095681 , 2093493 , 2103337 , 2112361 , 2099772 , 2137374 , 2112193 , 2153838 , 2142221 , 2106647 , 2137374 , 2156445 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
If you manually delete the support bundle folder of a virtual machine downloaded in the
/scratch/downloads
directory of an ESXi host, the hostd service might fail. The failure occurs when hostd automatically tries to delete folders in the /scratch/downloads
directory one hour after the files are created. -
If you edit the configuration file of an ESXi host to enable a smart card reader as a passthrough device, you might not be able to use the reader if you have also enabled the feature Support vMotion while a device is connected.
-
In vSAN environments, VMkernel logs that record activities related to virtual machines and ESXi might be flooded with the message
Unable to register file system
. -
If you use the vSphere Web Client to increase the disk size of a virtual machine, the vSphere Web Client might not fully reflect the reconfiguration, and display the storage allocation of the virtual machine as unchanged.
-
If you attempt to PXE boot a virtual machine that uses EFI firmware, with a vmxnet3 network adapter and WDS, and you have not disabled the variable windows extension option in WDS, the virtual machine might boot extremely slowly.
-
Due to issues with Apple USB, devices with an iOS version later than iOS 11 cannot connect to virtual machines with an OS X version later than OS X 10.12.
-
If you enable the Stateful Install feature on a host profile, and the management VMkernel NIC is connected to a distributed virtual switch, applying the host profile to another ESXi 6.7 host by using vSphere Auto Deploy might fail during a PXE boot. The host remains in maintenance mode.
-
If a virtual machine has snapshots taken by using CBRC and later CBRC is disabled, disk consolidation operations might fail with an error
A specified parameter was not correct: spec.deviceChange.device
due to a deleted digest file after CBRC was disabled. An alert Virtual machine disks consolidation is needed.
is displayed until the issue is resolved. This fix prevents the issue, but it might still exist for virtual machines that have snapshots taken with CBRC enabled and later disabled. -
An ESXi host might fail with a purple diagnostic screen due to a memory allocation problem.
-
An incorrect calculation in the VMkernel causes the
esxtop
utility to report incorrect statistics for the average device latency per command (DAVG/cmd) and the average ESXi VMkernel latency per command (KAVG/cmd) on LUNs with VMware vSphere Storage APIs Array Integration (VAAI). -
This fix sets Storage Array Type Plug-in (SATP) to VMW_SATP_ALUA, Path Selection Policy (PSP) to VMW_PSP_RR, and Claim Options to tpgs_on as default for the following DELL MD Storage array models: MD32xx, MD32xxi, MD36xxi, MD36xxf, MD34xx, MD38xxf, and MD38xxi.
After cold migration or a vSphere High Availability failover, a virtual machine might fail to connect to a distributed port group, because the port is deleted before the virtual machine powers on.
-
ESXi hosts might not reflect the
MAXIMUM TRANSFER LENGTH
parameter reported by the SCSI device in the Block Limits VPD page. As a result, I/O commands issued with a transfer size greater than the limit might fail with a similar log:
2017-01-24T12:09:40.065Z cpu6:1002438588)ScsiDeviceIO: SCSICompleteDeviceCommand:3033: Cmd(0x45a6816299c0) 0x2a, CmdSN 0x19d13f from world 1001390153 to dev "naa.514f0c5d38200035" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
-
This fix sets Storage Array Type Plug-in (SATP) to VMW_SATP_ALUA, Path Selection Policy (PSP) to VMW_PSP_RR, and Claim Options to tpgs_on as default for HITACHI OPEN-v type storage arrays with Asymmetric Logical Unit Access (ALUA) support. The fix also sets Storage Array Type Plug-in (SATP) to VMW_SATP_DEFAULT_AA, Path Selection Policy (PSP) to VMW_PSP_RR, and Claim Options to tpgs_off as default for HITACHI OPEN-v type storage arrays without ALUA support.
If both the target virtual machine and the proxy virtual machine are enabled for CBRC, the quiesced snapshot disk cannot be hot-added to the proxy virtual machine, because the digest of the snapshot disk in the target virtual machine is disabled during the snapshot creation. As a result, the virtual machine backup process fails.
-
The
syslog.log
file might be repeatedly populated with messages related to calls to the HostImageConfigManager
managed object. -
While extending the coredump partition of an ESXi host, vCenter Server might raise an alarm that no coredump target is configured, because the existing coredump partition is temporarily deactivated before its size is increased.
-
Due to a race condition in the VSCSI, if you use EMC RecoverPoint, an ESXi host might fail with a purple diagnostic screen during the shutdown or power off of a virtual machine.
-
vSphere Virtual Volumes set with the VMW_VVolType metadata key Other and the VMW_VVolTypeHint metadata key Sidecar might not get the VMW_VmID metadata key to the associated virtual machines and cannot be tracked by using IDs.
This fix sets Storage Array Type Plugin (SATP) claim rules for Solidfire SSD SAN storage arrays to VMW_SATP_DEFAULT_AA and the Path Selection Policy (PSP) to VMW_PSP_RR with 10 I/O operations per second by default to achieve optimal performance.
A fast reboot of a system with an LSI controller or a reload of an LSI driver, such as lsi_mr3, might put the disks behind the LSI controller in an offline state. The LSI controller firmware sends a
SCSI STOP UNIT
command during unload, but a corresponding SCSI START UNIT
command might not be issued during reload. If disks go offline, all datastores hosted on these disks become unresponsive or inaccessible. -
If you use vSphere vMotion to migrate a virtual machine with file device filters from a vSphere Virtual Volumes datastore to another host, and the virtual machine has either of the Changed Block Tracking (CBT), VMware vSphere Flash Read Cache (VFRC) or I/O filters enabled, the migration might cause issues with any of the features. During the migration, the file device filters might not be correctly transferred to the host. As a result, you might see corrupted incremental backups in CBT, performance degradation of VFRC and cache I/O filters, corrupted replication I/O filters, and disk corruption, when cache I/O filters are configured in write-back mode. You might also see issues with the virtual machine encryption.
-
If a virtual machine is on a VMFSsparse snapshot, I/Os issued to the virtual machine might only partially be processed at VMFSsparse level, but upper layers, such as I/O Filters, might presume the transfer is successful. This might lead to data inconsistency. This fix sets a transient error status for reference from upper layers if an I/O is complete.
-
SCSI INQUIRY data is cached at the Pluggable Storage Architecture (PSA) layer for RDM LUNs and response for subsequent SCSI INQUIRY commands might be returned from the cache instead of querying the LUN. To avoid fetching cached SCSI INQUIRY data, apart from modifying the
.vmx
file of a virtual machine with RDM, with ESXi 6.7 Update 1 you can also ignore the cached SCSI INQUIRY data by using the ESXCLI command esxcli storage core device inquirycache set --device <device-id> --ignore true.
With the ESXCLI option, a reboot of the virtual machine is not necessary.
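As an illustration, a minimal sketch of using this command from the ESXi Shell; the device identifier shown is a hypothetical placeholder, and the list command is included only to show how you might look up the identifier of the RDM LUN:
# Find the identifier of the RDM LUN (the ID used below is a placeholder)
esxcli storage core device list
# Ignore the cached SCSI INQUIRY data for that device; no virtual machine reboot is needed
esxcli storage core device inquirycache set --device naa.60000000000000000000000000000001 --ignore true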
-
This fix sets Storage Array Type Plugin (SATP) to VMW_SATP_ALUA, Path Selection Policy (PSP) to VMW_PSP_RR, and Claim Options to tpgs_on as default for Tegile IntelliFlash storage arrays with Asymmetric Logical Unit Access (ALUA) support. The fix also sets Storage Array Type Plugin (SATP) to VMW_SATP_DEFAULT_AA, Path Selection Policy (PSP) to VMW_PSP_RR, and Claim Options to tpgs_off as default for Tegile IntelliFlash storage arrays without ALUA support.
During ESXi boot, if the commands issued during the initial device discovery fail with
ASYMMETRIC ACCESS STATE CHANGE UA
, the path failover might take longer because the commands are blocked within the ESXi host. This might result in a longer ESXi boot time.
You might see logs similar to:
2018-05-14T01:26:28.464Z cpu1:2097770)NMP: nmp_ThrottleLogForDevice:3689: Cmd 0x1a (0x459a40bfbec0, 0) to dev "eui.0011223344550003" on path "vmhba64:C0:T0:L3" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x6 0x2a 0x6. Act:FAILOVER 2018-05-14T01:27:08.453Z cpu5:2097412)ScsiDeviceIO: 3029: Cmd(0x459a40bfbec0) 0x1a, CmdSN 0x29 from world 0 to dev "eui.0011223344550003" failed H:0x5 D:0x0 P:0x0 Invalid sense data: 0x0 0x0 0x0. 2018-05-14T01:27:48.911Z cpu4:2097181)ScsiDeviceIO: 3029: Cmd(0x459a40bfd540) 0x25, CmdSN 0x2c from world 0 to dev "eui.0011223344550003" failed H:0x5 D:0x0 P:0x0 Invalid sense data: 0x0 0x0 0x0. 2018-05-14T01:28:28.457Z cpu1:2097178)ScsiDeviceIO: 3029: Cmd(0x459a40bfbec0) 0x9e, CmdSN 0x2d from world 0 to dev "eui.0011223344550003" failed H:0x5 D:0x0 P:0x0 Invalid sense data: 0x0 0x0 0x0. 2018-05-14T01:28:28.910Z cpu2:2097368)WARNING: ScsiDevice: 7641: GetDeviceAttributes during periodic probe'eui.0011223344550003': failed with I/O error
-
A VMFS6 datastore might report that it is out of space incorrectly, due to stale cache entries. Space allocation reports are corrected with the automatic updates of the cache entries, but this fix prevents the error even before an update.
-
If you remove an NTP server from your configuration by using the vSphere Web Client, the server settings might remain in the
/etc/ntp.conf
file. -
If you select Intel Merom or Penryn microprocessors from the VMware EVC Mode drop-down menu before migration of virtual machines from an ESXi 6.0 host to an ESXi 6.5 or ESXi 6.7 host, the virtual machines might stop responding.
-
The following two VOB events might be generated due to variations in the I/O latency in a storage array, but they do not report an actual problem in virtual machines:
- 1.
Device naa.xxx performance has deteriorated. I/O latency increased from average value of 4114 microseconds to 84518 microseconds.
- 2.
Device naa.xxx performance has improved. I/O latency reduced from 346115 microseconds to 67046 microseconds.
-
The export of a large virtual machine by using the VMware Host Client might fail or end with incomplete VMDK files, because the lease issued by the
ExportVm
method might expire before the file transfer finishes. -
A high number of networking tasks, specifically multiple calls to
QueryNetworkHint()
, might exceed the memory limit of the hostd process and make it fail intermittently. -
Drives that do not support Block Limits VPD page
0xb0
might generate event code logs that flood the vmkernel.log
. -
OMIVV relies on information from the iDRAC property hardware.systemInfo.otherIdentifyingInfo.ServiceTag to fetch the SerialNumber parameter for identifying some Dell modular servers. A mismatch in the serviceTag property might cause this integration to fail.
vSphere Virtual Volumes might become unresponsive due to an infinite loop that loads CPUs at 100% if a VASA provider loses binding information from the database. Hostd might also stop responding. You might see a fatal error message.
-
vSphere Virtual Volumes metadata might be available only when a virtual machine starts running. As a result, storage array vendor software might fail to apply policies that impact the optimal layout of volumes during regular use and after a failover.
-
If you use a VASA provider, you might see multiple
getTaskUpdate
calls to cancelled or deleted tasks. As a result, you might see a higher consumption of VASA bandwidth and a log spew. -
Virtual machines with hardware version 10 or earlier, using EFI, and running Windows Server 2016 on AMD processors, might stop responding during reboot. The issue does not occur if a virtual machine uses BIOS, or if the hardware version is 11 or later, or the guest OS is not Windows, or if the processors are Intel.
-
If you enable the NetFlow network analysis tool to sample every packet on a vSphere Distributed Switch port group, by setting the Sampling rate to 0, the network latency might reach 1000 ms in case flows exceed 1 million.
-
Stale tickets, which are not deleted before hostd generates a new ticket, might exhaust the inodes of RAM disks. This might cause some ESXi hosts to stop responding.
-
The
esxtop
command-line utility might not display an updated value of the queue depth of devices if the corresponding device path queue depth changes. -
Manual resets of hardware health alarms to return to a normal state, by selecting Reset to green after right-clicking the Alarms sidebar pane, might not work as expected. The alarms might reappear after 90 seconds.
-
Previously, if a component such as a processor or a fan was missing from a vCenter Server system, the Presence Sensors displayed a status of Unknown. However, Presence Sensors do not have a health state associated with them.
-
If you configure the
/productLocker
directory on a shared VMFS datastore, when you migrate a virtual machine by using vSphere vMotion, VMware Tools in the virtual machine might display an incorrect status Unsupported. -
For large virtual machines, Encrypted vSphere vMotion might fail due to insufficient migration heap space.
-
A virtual machine that does hundreds of disk hot add or remove operations without powering off or migrating might be terminated and become invalid. This affects backup solutions, where the backup proxy virtual machine might be terminated because of this issue.
In the hostd log, you might see content similar to:
2018-06-08T10:33:14.150Z info hostd[15A03B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datatore/path/to/proxyVM/proxyVM.vmx opID=5953cf5e-3-a90a user=vpxuser:ADMIN\admin] State Transition (VM_STATE_ON -> VM_STATE_RECONFIGURING) ...
2018-06-08T10:33:14.167Z error hostd[15640B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datastore/path/to/proxyVM/proxyVM.vmx opID=5953cf5e-3-a90a user=vpxuser:ADMIN\admin] Could not apply pre-reconfigure domain changes: Failed to add file policies to domain :171: world ID :0:Cannot allocate memory ...
2018-06-08T10:33:14.826Z info hostd[15640B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datastore/path/to/proxyVM/proxyVM.vmx opID=5953cf5e-3-a90a user=vpxuser:ADMIN\admin] State Transition (VM_STATE_RECONFIGURING -> VM_STATE_ON) ...
2018-06-08T10:35:53.120Z error hostd[15A44B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datastore/path/to/proxyVM/proxyVM.vmx] Expected permission (3) for /vmfs/volumes/path/to/backupVM not found in domain 171
In the vmkernel log, the content is similar to:
2018-06-08T05:40:53.264Z cpu49:68763 opID=4c6a367c)World: 12235: VC opID 5953cf5e-3-a90a maps to vmkernel opID 4c6a367c
2018-06-08T05:40:53.264Z cpu49:68763 opID=4c6a367c)WARNING: Heap: 3534: Heap domainHeap-54 already at its maximum size. Cannot expand. -
Disabled IPMI sensors, or sensors that do not report any data, might generate false hardware health alarms.
-
When you enable IPFIX and traffic is heavy with different flows, the system heartbeat might fail to preempt CPU from IPFIX for a long time and trigger a purple diagnostic screen.
-
You might see poor performance of long distance vSphere vMotion operations at high latency, such as 100 ms and higher, with high speed network links, such as 1 GbE and higher, due to the hard-coded socket buffer limit of 16 MB.
-
An ESXi host might fail with a purple diagnostic screen due to a race condition in a query by the Multicast Listener Discovery (MLD) version 1 in IPv6 environments. You might see an error message similar to:
#PF Exception 14 in world 2098376:vmk0-rx-0 IP 0x41802e62abc1 addr 0x40
...
0x451a1b81b9d0:[0x41802e62abc1]mld_set_version@(tcpip4)#+0x161 stack: 0x430e0593c9e8
0x451a1b81ba20:[0x41802e62bb57]mld_input@(tcpip4)#+0x7fc stack: 0x30
0x451a1b81bb20:[0x41802e60d7f8]icmp6_input@(tcpip4)#+0xbe1 stack: 0x30
0x451a1b81bcf0:[0x41802e621d3b]ip6_input@(tcpip4)#+0x770 stack: 0x451a00000000 -
When a user creates and adds a VMkernel NIC in the vSphereProvisioning netstack to use it for NFC traffic, the daemon that manages the NFC connections might fail to clean up old connections. This leads to the exhaustion of the allowed process limit. As a result, the host becomes unresponsive and unable to create processes for incoming SSH connections.
-
An attempt to create VFFS volumes by using a vSphere standard license might fail with the error
License not available to perform the operation as feature 'vSphere Flash Read Cache' is not licensed with this edition
. This is because the system uses VMware vSphere Flash Read Cache permissions to check if provisioning of VFFS is allowed. -
In a shared environment such as Raw Device Mapping (RDM) disks, you cannot use Multi-Writer Locks for virtual machines on more than 8 ESXi hosts. If you migrate a virtual machine to a ninth host, it might fail to power on with the error message
Could not open xxxx.vmdk or one of the snapshots it depends on. (Too many users)
. This fix makes the advanced configuration option /VMFS3/GBLAllowMW
visible. You can manually enable or disable multi-writer locks for more than 8 hosts by using generation-based locking. -
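As a minimal sketch, assuming the host runs ESXi 6.7 Update 1 (where the option is visible) and that /VMFS3/GBLAllowMW is an integer 0/1 option, you might toggle it from the ESXi Shell as follows:
# Show the current value of the advanced option
esxcli system settings advanced list -o /VMFS3/GBLAllowMW
# Enable generation-based multi-writer locking for more than 8 hosts (use -i 0 to disable)
esxcli system settings advanced set -o /VMFS3/GBLAllowMW -i 1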
Third party CIM providers, by HPE and Stratus for instance, might start normally but lack some expected functionality when working with their external applications.
-
An ESXi host might become unresponsive if datastore heartbeating stops prematurely while closing VMFS. As a result, the affinity manager cannot exit gracefully.
-
SNMP monitoring systems might report memory statistics on ESXi hosts different from the values that the free, top, or vmstat commands report.
If the target connected to an ESXi host supports only implicit ALUA and has only standby paths, the host might fail with a purple diagnostic screen at the time of device discovery due to a race condition. You might see a backtrace similar to:
SCSIGetMultipleDeviceCommands (vmkDevice=0x0, result=0x451a0fc1be98, maxCommands=1, pluginDone=<optimized out>) at bora/vmkernel/storage/device/scsi_device_io.c:2476
0x00004180171688bc in vmk_ScsiGetNextDeviceCommand (vmkDevice=0x0) at bora/vmkernel/storage/device/scsi_device_io.c:2735
0x0000418017d8026c in nmp_DeviceStartLoop (device=device@entry=0x43048fc37350) at bora/modules/vmkernel/nmp/nmp_core.c:732
0x0000418017d8045d in nmpDeviceStartFromTimer (data=..., data@entry=...) at bora/modules/vmkernel/nmp/nmp_core.c:807 -
The Cluster Compliance status of a vSAN enabled cluster might display as Not compliant, because the compliance check might not recognize vSAN datastores as shared datastores.
-
I/Os might fail with an error
FCP_DATA_CNT_MISMATCH
, which translates into a HOST_ERROR
in the ESXi host, and indicates link errors or faulty switches on some paths of the device. -
In an ESXi configuration with multiple paths leading to LUNs behind IBM SVC targets, if the connection is lost on active paths and, at the same time, the other connected paths are not in a state to service I/Os, the ESXi host might not detect this condition as APD even though no paths are actually available to service I/Os. As a result, I/Os to the device are not fast failed.
-
You might see a slowdown in the network performance of NUMA servers for devices using VMware NetQueue, due to a pinning threshold. It is observed when Rx queues are pinned to a NUMA node by changing the default value of the advanced configuration
NetNetqNumaIOCpuPinThreshold
. -
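For illustration only, the generic advanced-settings commands can be used to inspect or reset this value; the /Net/ prefix below is an assumption about where the option lives, so verify the exact path on your host before changing anything:
# Inspect the current pinning threshold (option path assumed to be under /Net/)
esxcli system settings advanced list -o /Net/NetNetqNumaIOCpuPinThreshold
# Reset the option to its system default if it was changed
esxcli system settings advanced set -o /Net/NetNetqNumaIOCpuPinThreshold --default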
Some hosts might fail to boot after an upgrade to vSAN 6.7 Update 1 from a previous release. This problem occurs on NUMA-enabled servers in a vSAN cluster with 3, 4, or 5 disk groups per host. The host might time out while creating the last disk group.
-
vSAN creates
blk
attribute components after it creates a disk group. If the blk attribute component is missing from the API that supports the diskgroup rebuild command, you might see the following warning message in the vmkernel log: Throttled: BlkAttr not ready for disk.
-
When all disk groups are removed from a vSAN cluster, the vSphere Client displays a warning similar to the following:
VMware vSAN cluster in datacenter does not have capacity
After you disable vSAN on the cluster, the warning message persists. -
If you unmap a LUN from an EMC VPLEX storage system, ESXi hosts might stop responding or fail with a purple diagnostic screen. You might see logs similar to:
2018-01-26T18:53:28.002Z cpu6:33514)WARNING: iodm: vmk_IodmEvent:192: vmhba1: FRAME DROP event has been observed 400 times in the last one minute.
2018-01-26T18:53:37.271Z cpu24:33058)WARNING: Unable to deliver event to user-space.
-
The HBA driver of an ESXi host might disconnect from an EMC VPLEX storage system upon heavy I/O load and not reconnect when I/O paths are available.
-
In a vSAN cluster with HPE ProLiant Gen9 Smart Array Controllers, such as P440 and P840, the locator LEDs might not be lighted on the correct failed device.
-
If you unmap a LUN from an EMC VPLEX storage system, ESXi hosts might stop responding or fail with a purple diagnostic screen. You might see logs similar to:
2018-01-26T18:53:28.002Z cpu6:33514)WARNING: iodm: vmk_IodmEvent:192: vmhba1: FRAME DROP event has been observed 400 times in the last one minute.
2018-01-26T18:53:37.271Z cpu24:33058)WARNING: Unable to deliver event to user-space.
-
The HBA driver of an ESXi host might disconnect from an EMC VPLEX storage system upon heavy I/O load and not reconnect when I/O paths are available.
-
ESXi 6.7 Update 1 enables the Microsemi Smart PQI (smartpqi) LSU plug-in to support attached disk management operations on the HPE ProLiant Gen10 Smart Array Controller.
-
With ESXi 6.7 Update 1, you can run the following storage ESXCLI commands to enable status LED on Intel VMD based NVMe SSDs without downloading Intel CLI:
esxcli storage core device set -l locator -d
esxcli storage core device set -l error -d
esxcli storage core device set -l off -d
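For instance, with a hypothetical NVMe device identifier (the ID below is a placeholder, not a real device), switching the locator LED on and back off might look like this:
# Turn on the locator LED of an Intel VMD based NVMe SSD (placeholder device ID)
esxcli storage core device set -l locator -d eui.0000000000000001
# Turn the LED off again
esxcli storage core device set -l off -d eui.0000000000000001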
- ESXi 6.7 Update 1 enables nfnic driver support for Cisco UCS Fibre Channel over Ethernet (FCoE).
- ESXi 6.7 Update 1 adds support for NGUID in the NVMe driver.
- An ESXi host and hostd might become unresponsive due to memory exhaustion caused by the lsi_mr3 disk serviceability plug-in.
- The Intel native i40en driver in ESXi might not work with NICs of the X722 series in IPv6 networks and the Wake-on-LAN service might fail.
- You might see colored logs
Device 10fb does not support flow control autoneg
as alerts, but this is a regular log that reflects the status of flow control support of certain NICs. On some OEM images, this log might display frequently, but it does not indicate any issues with the ESXi host.
- Some QLogic FastLinQ QL41xxx adapters might not create virtual functions after SR-IOV is enabled on the adapter and the host is rebooted.
- In rare occasions, NICs using the ntg3 driver, such as the Broadcom BCM5719 and 5720 GbE NICs, might temporarily stop sending packets after a failed attempt to send an oversize packet. The ntg3 driver version 4.1.3.2 resolves this problem.
-
Profile Name | ESXi-6.7.0-20181002001-no-tools |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | October 16, 2018 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 1220910 , 2036262 , 2036302 , 2039186 , 2046226 , 2057603 , 2058908 , 2066026 , 2071482 , 2072977 , 2078138 , 2078782 , 2078844 , 2079807 , 2080427 , 2082405 , 2083581 , 2084722 , 2085117 , 2086803 , 2086807 , 2089047 , 2096126 , 2096312 , 2096947 , 2096948 , 2097791 , 2098170 , 2098868 , 2103579 , 2103981 , 2106747 , 2107087 , 2107333 , 2110971 , 2118588 , 2119610 , 2119663 , 2120346 , 2126919 , 2128933 , 2129130 , 2129181 , 2131393 , 2131407 , 2133153 , 2133588 , 2136004 , 2137261 , 2139127 , 2145089 , 2146206 , 2146535 , 2149518 , 2152380 , 2154913 , 2156841 , 2157501 , 2157817 , 2163734 , 2165281 , 2165537 , 2165567 , 2166114 , 2167098 , 2167878 , 2173810 , 2186253 , 2187008 , 2204439 , 2095671 , 2095681 , 2124061 , 2095671 , 2095681 , 2093493 , 2103337 , 2112361 , 2099772 , 2137374 , 2112193 , 2153838 , 2142221 , 2106647 , 2137374 , 2156445 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
If you manually delete the support bundle folder of a virtual machine downloaded in the
/scratch/downloads
directory of an ESXi host, the hostd service might fail. The failure occurs when hostd automatically tries to delete folders in the /scratch/downloads
directory one hour after the files are created. -
If you edit the configuration file of an ESXi host to enable a smart card reader as a passthrough device, you might not be able to use the reader if you have also enabled the feature Support vMotion while a device is connected.
-
In vSAN environments, VMkernel logs that record activities related to virtual machines and ESXi might be flooded with the message
Unable to register file system
. -
If you use the vSphere Web Client to increase the disk size of a virtual machine, the vSphere Web Client might not fully reflect the reconfiguration, and display the storage allocation of the virtual machine as unchanged.
-
If you attempt to PXE boot a virtual machine that uses EFI firmware, with a vmxnet3 network adapter and WDS, and you have not disabled the variable windows extension option in WDS, the virtual machine might boot extremely slowly.
-
Due to issues with Apple USB, devices with an iOS version later than iOS 11 cannot connect to virtual machines with an OS X version later than OS X 10.12.
-
If you enable the Stateful Install feature on a host profile, and the management VMkernel NIC is connected to a distributed virtual switch, applying the host profile to another ESXi 6.7 host by using vSphere Auto Deploy might fail during a PXE boot. The host remains in maintenance mode.
-
If a virtual machine has snapshots taken by using CBRC and later CBRC is disabled, disk consolidation operations might fail with an error
A specified parameter was not correct: spec.deviceChange.device
due to a deleted digest file after CBRC was disabled. An alert Virtual machine disks consolidation is needed.
is displayed until the issue is resolved. This fix prevents the issue, but it might still exist for virtual machines that have snapshots taken with CBRC enabled and later disabled. -
An ESXi host might fail with a purple diagnostic screen due to a memory allocation problem.
-
An incorrect calculation in the VMkernel causes the
esxtop
utility to report incorrect statistics for the average device latency per command (DAVG/cmd) and the average ESXi VMkernel latency per command (KAVG/cmd) on LUNs with VMware vSphere Storage APIs Array Integration (VAAI). -
This fix sets Storage Array Type Plug-in (SATP) to VMW_SATP_ALUA, Path Selection Policy (PSP) to VMW_PSP_RR, and Claim Options to tpgs_on as default for the following DELL MD Storage array models: MD32xx, MD32xxi, MD36xxi, MD36xxf, MD34xx, MD38xxf, and MD38xxi.
After cold migration or a vSphere High Availability failover, a virtual machine might fail to connect to a distributed port group, because the port is deleted before the virtual machine powers on.
-
ESXi hosts might not reflect the
MAXIMUM TRANSFER LENGTH
parameter reported by the SCSI device in the Block Limits VPD page. As a result, I/O commands issued with a transfer size greater than the limit might fail with a similar log:
2017-01-24T12:09:40.065Z cpu6:1002438588)ScsiDeviceIO: SCSICompleteDeviceCommand:3033: Cmd(0x45a6816299c0) 0x2a, CmdSN 0x19d13f from world 1001390153 to dev "naa.514f0c5d38200035" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
-
This fix sets Storage Array Type Plug-in (SATP) to VMW_SATP_ALUA, Path Selection Policy (PSP) to VMW_PSP_RR, and Claim Options to tpgs_on as default for HITACHI OPEN-v type storage arrays with Asymmetric Logical Unit Access (ALUA) support. The fix also sets Storage Array Type Plug-in (SATP) to VMW_SATP_DEFAULT_AA, Path Selection Policy (PSP) to VMW_PSP_RR, and Claim Options to tpgs_off as default for HITACHI OPEN-v type storage arrays without ALUA support.
If both the target virtual machine and the proxy virtual machine are enabled for CBRC, the quiesced snapshot disk cannot be hot-added to the proxy virtual machine, because the digest of the snapshot disk in the target virtual machine is disabled during the snapshot creation. As a result, the virtual machine backup process fails.
-
The
syslog.log
file might be repeatedly populated with messages related to calls to the HostImageConfigManager
managed object. -
While extending the coredump partition of an ESXi host, vCenter Server might raise an alarm that no coredump target is configured, because the existing coredump partition is temporarily deactivated before its size is increased.
-
Due to a race condition in the VSCSI, if you use EMC RecoverPoint, an ESXi host might fail with a purple diagnostic screen during the shutdown or power off of a virtual machine.
-
vSphere Virtual Volumes set with the VMW_VVolType metadata key Other and the VMW_VVolTypeHint metadata key Sidecar might not get the VMW_VmID metadata key to the associated virtual machines and cannot be tracked by using IDs.
This fix sets Storage Array Type Plugin (SATP) claim rules for Solidfire SSD SAN storage arrays to VMW_SATP_DEFAULT_AA and the Path Selection Policy (PSP) to VMW_PSP_RR with 10 I/O operations per second by default to achieve optimal performance.
A fast reboot of a system with an LSI controller or a reload of an LSI driver, such as lsi_mr3, might put the disks behind the LSI controller in an offline state. The LSI controller firmware sends a
SCSI STOP UNIT
command during unload, but a corresponding SCSI START UNIT
command might not be issued during reload. If disks go offline, all datastores hosted on these disks become unresponsive or inaccessible. -
If you use vSphere vMotion to migrate a virtual machine with file device filters from a vSphere Virtual Volumes datastore to another host, and the virtual machine has either of the Changed Block Tracking (CBT), VMware vSphere Flash Read Cache (VFRC) or I/O filters enabled, the migration might cause issues with any of the features. During the migration, the file device filters might not be correctly transferred to the host. As a result, you might see corrupted incremental backups in CBT, performance degradation of VFRC and cache I/O filters, corrupted replication I/O filters, and disk corruption, when cache I/O filters are configured in write-back mode. You might also see issues with the virtual machine encryption.
-
If a virtual machine is on a VMFSsparse snapshot, I/Os issued to the virtual machine might only partially be processed at VMFSsparse level, but upper layers, such as I/O Filters, might presume the transfer is successful. This might lead to data inconsistency. This fix sets a transient error status for reference from upper layers if an I/O is complete.
-
SCSI INQUIRY data is cached at the Pluggable Storage Architecture (PSA) layer for RDM LUNs and response for subsequent SCSI INQUIRY commands might be returned from the cache instead of querying the LUN. To avoid fetching cached SCSI INQUIRY data, apart from modifying the
.vmx
file of a virtual machine with RDM, with ESXi 6.7 Update 1 you can also ignore the cached SCSI INQUIRY data by using the ESXCLI command esxcli storage core device inquirycache set --device <device-id> --ignore true.
With the ESXCLI option, a reboot of the virtual machine is not necessary.
-
This fix sets Storage Array Type Plugin (SATP) to VMW_SATP_ALUA, Path Selection Policy (PSP) to VMW_PSP_RR, and Claim Options to tpgs_on as default for Tegile IntelliFlash storage arrays with Asymmetric Logical Unit Access (ALUA) support. The fix also sets Storage Array Type Plugin (SATP) to VMW_SATP_DEFAULT_AA, Path Selection Policy (PSP) to VMW_PSP_RR, and Claim Options to tpgs_off as default for Tegile IntelliFlash storage arrays without ALUA support.
During ESXi boot, if the commands issued during the initial device discovery fail with
ASYMMETRIC ACCESS STATE CHANGE UA
, the path failover might take longer because the commands are blocked within the ESXi host. This might result in a longer ESXi boot time.
You might see logs similar to:
2018-05-14T01:26:28.464Z cpu1:2097770)NMP: nmp_ThrottleLogForDevice:3689: Cmd 0x1a (0x459a40bfbec0, 0) to dev "eui.0011223344550003" on path "vmhba64:C0:T0:L3" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x6 0x2a 0x6. Act:FAILOVER 2018-05-14T01:27:08.453Z cpu5:2097412)ScsiDeviceIO: 3029: Cmd(0x459a40bfbec0) 0x1a, CmdSN 0x29 from world 0 to dev "eui.0011223344550003" failed H:0x5 D:0x0 P:0x0 Invalid sense data: 0x0 0x0 0x0. 2018-05-14T01:27:48.911Z cpu4:2097181)ScsiDeviceIO: 3029: Cmd(0x459a40bfd540) 0x25, CmdSN 0x2c from world 0 to dev "eui.0011223344550003" failed H:0x5 D:0x0 P:0x0 Invalid sense data: 0x0 0x0 0x0. 2018-05-14T01:28:28.457Z cpu1:2097178)ScsiDeviceIO: 3029: Cmd(0x459a40bfbec0) 0x9e, CmdSN 0x2d from world 0 to dev "eui.0011223344550003" failed H:0x5 D:0x0 P:0x0 Invalid sense data: 0x0 0x0 0x0. 2018-05-14T01:28:28.910Z cpu2:2097368)WARNING: ScsiDevice: 7641: GetDeviceAttributes during periodic probe'eui.0011223344550003': failed with I/O error
-
A VMFS6 datastore might report that it is out of space incorrectly, due to stale cache entries. Space allocation reports are corrected with the automatic updates of the cache entries, but this fix prevents the error even before an update.
-
If you remove an NTP server from your configuration by using the vSphere Web Client, the server settings might remain in the
/etc/ntp.conf
file. -
If you select Intel Merom or Penryn microprocessors from the VMware EVC Mode drop-down menu before migration of virtual machines from an ESXi 6.0 host to an ESXi 6.5 or ESXi 6.7 host, the virtual machines might stop responding.
-
The following two VOB events might be generated due to variations in the I/O latency in a storage array, but they do not report an actual problem in virtual machines:
- 1.
Device naa.xxx performance has deteriorated. I/O latency increased from average value of 4114 microseconds to 84518 microseconds.
- 2.
Device naa.xxx performance has improved. I/O latency reduced from 346115 microseconds to 67046 microseconds.
-
The export of a large virtual machine by using the VMware Host Client might fail or end with incomplete VMDK files, because the lease issued by the
ExportVm
method might expire before the file transfer finishes. -
A high number of networking tasks, specifically multiple calls to
QueryNetworkHint()
, might exceed the memory limit of the hostd process and make it fail intermittently. -
Drives that do not support Block Limits VPD page
0xb0
might generate event code logs that flood the vmkernel.log
. -
OMIVV relies on information from the iDRAC property hardware.systemInfo.otherIdentifyingInfo.ServiceTag to fetch the SerialNumber parameter for identifying some Dell modular servers. A mismatch in the serviceTag property might cause this integration to fail.
vSphere Virtual Volumes might become unresponsive due to an infinite loop that loads CPUs at 100% if a VASA provider loses binding information from the database. Hostd might also stop responding. You might see a fatal error message.
-
vSphere Virtual Volumes metadata might be available only when a virtual machine starts running. As a result, storage array vendor software might fail to apply policies that impact the optimal layout of volumes during regular use and after a failover.
-
If you use a VASA provider, you might see multiple
getTaskUpdate
calls to cancelled or deleted tasks. As a result, you might see a higher consumption of VASA bandwidth and a log spew. -
Virtual machines with hardware version 10 or earlier, using EFI, and running Windows Server 2016 on AMD processors, might stop responding during reboot. The issue does not occur if a virtual machine uses BIOS, or if the hardware version is 11 or later, or the guest OS is not Windows, or if the processors are Intel.
-
If you enable the NetFlow network analysis tool to sample every packet on a vSphere Distributed Switch port group, by setting the Sampling rate to 0, the network latency might reach 1000 ms in case flows exceed 1 million.
-
Stale tickets, which are not deleted before hostd generates a new ticket, might exhaust the inodes of RAM disks. This might cause some ESXi hosts to stop responding.
-
The
esxtop
command-line utility might not display an updated value of the queue depth of devices if the corresponding device path queue depth changes. -
Manual resets of hardware health alarms to return to a normal state, by selecting Reset to green after right-clicking the Alarms sidebar pane, might not work as expected. The alarms might reappear after 90 seconds.
-
Previously, if a component such as a processor or a fan was missing from a vCenter Server system, the Presence Sensors displayed a status of Unknown. However, Presence Sensors do not have a health state associated with them.
-
If you configure the
/productLocker
directory on a shared VMFS datastore, when you migrate a virtual machine by using vSphere vMotion, VMware Tools in the virtual machine might display an incorrect status Unsupported. -
For large virtual machines, Encrypted vSphere vMotion might fail due to insufficient migration heap space.
-
A virtual machine that does hundreds of disk hot add or remove operations without powering off or migrating might be terminated and become invalid. This affects backup solutions, where the backup proxy virtual machine might be terminated because of this issue.
In the hostd log, you might see content similar to:
2018-06-08T10:33:14.150Z info hostd[15A03B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datatore/path/to/proxyVM/proxyVM.vmx opID=5953cf5e-3-a90a user=vpxuser:ADMIN\admin] State Transition (VM_STATE_ON -> VM_STATE_RECONFIGURING) ...
2018-06-08T10:33:14.167Z error hostd[15640B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datastore/path/to/proxyVM/proxyVM.vmx opID=5953cf5e-3-a90a user=vpxuser:ADMIN\admin] Could not apply pre-reconfigure domain changes: Failed to add file policies to domain :171: world ID :0:Cannot allocate memory ...
2018-06-08T10:33:14.826Z info hostd[15640B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datastore/path/to/proxyVM/proxyVM.vmx opID=5953cf5e-3-a90a user=vpxuser:ADMIN\admin] State Transition (VM_STATE_RECONFIGURING -> VM_STATE_ON) ...
2018-06-08T10:35:53.120Z error hostd[15A44B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datastore/path/to/proxyVM/proxyVM.vmx] Expected permission (3) for /vmfs/volumes/path/to/backupVM not found in domain 171
In the vmkernel log, the content is similar to:
2018-06-08T05:40:53.264Z cpu49:68763 opID=4c6a367c)World: 12235: VC opID 5953cf5e-3-a90a maps to vmkernel opID 4c6a367c
2018-06-08T05:40:53.264Z cpu49:68763 opID=4c6a367c)WARNING: Heap: 3534: Heap domainHeap-54 already at its maximum size. Cannot expand. -
Disabled IPMI sensors, or sensors that do not report any data, might generate false hardware health alarms.
-
When you enable IPFIX and traffic is heavy with different flows, the system heartbeat might fail to preempt CPU from IPFIX for a long time and trigger a purple diagnostic screen.
-
You might see poor performance of long distance vSphere vMotion operations at high latency, such as 100 ms and higher, with high speed network links, such as 1 GbE and higher, due to the hard-coded socket buffer limit of 16 MB.
-
An ESXi host might fail with a purple diagnostic screen due to a race condition in a query by the Multicast Listener Discovery (MLD) version 1 in IPv6 environments. You might see an error message similar to:
#PF Exception 14 in world 2098376:vmk0-rx-0 IP 0x41802e62abc1 addr 0x40
...
0x451a1b81b9d0:[0x41802e62abc1]mld_set_version@(tcpip4)#+0x161 stack: 0x430e0593c9e8
0x451a1b81ba20:[0x41802e62bb57]mld_input@(tcpip4)#+0x7fc stack: 0x30
0x451a1b81bb20:[0x41802e60d7f8]icmp6_input@(tcpip4)#+0xbe1 stack: 0x30
0x451a1b81bcf0:[0x41802e621d3b]ip6_input@(tcpip4)#+0x770 stack: 0x451a00000000 -
When a user creates and adds a VMkernel NIC in the vSphereProvisioning netstack to use it for NFC traffic, the daemon that manages the NFC connections might fail to clean up old connections. This leads to the exhaustion of the allowed process limit. As a result, the host becomes unresponsive and unable to create processes for incoming SSH connections.
-
An attempt to create VFFS volumes by using a vSphere standard license might fail with the error
License not available to perform the operation as feature 'vSphere Flash Read Cache' is not licensed with this edition
. This is because the system uses VMware vSphere Flash Read Cache permissions to check if provisioning of VFFS is allowed. -
In a shared environment such as Raw Device Mapping (RDM) disks, you cannot use Multi-Writer Locks for virtual machines on more than 8 ESXi hosts. If you migrate a virtual machine to a ninth host, it might fail to power on with the error message
Could not open xxxx.vmdk or one of the snapshots it depends on. (Too many users)
. This fix makes the advanced configuration option /VMFS3/GBLAllowMW
visible. You can manually enable or disable multi-writer locks for more than 8 hosts by using generation-based locking. -
Third party CIM providers, by HPE and Stratus for instance, might start normally but lack some expected functionality when working with their external applications.
-
An ESXi host might become unresponsive if datastore heartbeating stops prematurely while closing VMFS. As a result, the affinity manager cannot exit gracefully.
-
SNMP monitoring systems might report memory statistics on ESXi hosts different from the values that the free, top, or vmstat commands report.
If the target connected to an ESXi host supports only implicit ALUA and has only standby paths, the host might fail with a purple diagnostic screen at the time of device discovery due to a race condition. You might see a backtrace similar to:
SCSIGetMultipleDeviceCommands (vmkDevice=0x0, result=0x451a0fc1be98, maxCommands=1, pluginDone=<optimized out>) at bora/vmkernel/storage/device/scsi_device_io.c:2476
0x00004180171688bc in vmk_ScsiGetNextDeviceCommand (vmkDevice=0x0) at bora/vmkernel/storage/device/scsi_device_io.c:2735
0x0000418017d8026c in nmp_DeviceStartLoop (device=device@entry=0x43048fc37350) at bora/modules/vmkernel/nmp/nmp_core.c:732
0x0000418017d8045d in nmpDeviceStartFromTimer (data=..., data@entry=...) at bora/modules/vmkernel/nmp/nmp_core.c:807 -
The Cluster Compliance status of a vSAN enabled cluster might display as Not compliant, because the compliance check might not recognize vSAN datastores as shared datastores.
-
I/Os might fail with an error
FCP_DATA_CNT_MISMATCH
, which translates into a HOST_ERROR
in the ESXi host, and indicates link errors or faulty switches on some paths of the device. -
In an ESXi configuration with multiple paths leading to LUNs behind IBM SVC targets, if the connection is lost on active paths and, at the same time, the other connected paths are not in a state to service I/Os, the ESXi host might not detect this condition as APD even though no paths are actually available to service I/Os. As a result, I/Os to the device are not fast failed.
-
You might see a slowdown in the network performance of NUMA servers for devices using VMware NetQueue, due to a pinning threshold. It is observed when Rx queues are pinned to a NUMA node by changing the default value of the advanced configuration
NetNetqNumaIOCpuPinThreshold
. -
Some hosts might fail to boot after an upgrade to vSAN 6.7 Update 1 from a previous release. This problem occurs on NUMA-enabled servers in a vSAN cluster with 3, 4, or 5 disk groups per host. The host might time out while creating the last disk group.
-
vSAN creates
blk
attribute components after it creates a disk group. If the blk attribute component is missing from the API that supports the diskgroup rebuild command, you might see the following warning message in the vmkernel log: Throttled: BlkAttr not ready for disk.
-
When all disk groups are removed from a vSAN cluster, the vSphere Client displays a warning similar to the following:
VMware vSAN cluster in datacenter does not have capacity
After you disable vSAN on the cluster, the warning message persists. -
If you unmap a LUN from an EMC VPLEX storage system, ESXi hosts might stop responding or fail with a purple diagnostic screen. You might see logs similar to:
2018-01-26T18:53:28.002Z cpu6:33514)WARNING: iodm: vmk_IodmEvent:192: vmhba1: FRAME DROP event has been observed 400 times in the last one minute.
2018-01-26T18:53:37.271Z cpu24:33058)WARNING: Unable to deliver event to user-space.
-
The HBA driver of an ESXi host might disconnect from an EMC VPLEX storage system upon heavy I/O load and not reconnect when I/O paths are available.
-
In a vSAN cluster with HPE ProLiant Gen9 Smart Array Controllers, such as P440 and P840, the locator LEDs might not be lighted on the correct failed device.
-
If you unmap a LUN from an EMC VPLEX storage system, ESXi hosts might stop responding or fail with a purple diagnostic screen. You might see logs similar to:
2018-01-26T18:53:28.002Z cpu6:33514)WARNING: iodm: vmk_IodmEvent:192: vmhba1: FRAME DROP event has been observed 400 times in the last one minute.
2018-01-26T18:53:37.271Z cpu24:33058)WARNING: Unable to deliver event to user-space.
-
The HBA driver of an ESXi host might disconnect from an EMC VPLEX storage system upon heavy I/O load and not reconnect when I/O paths are available.
-
ESXi 6.7 Update 1 enables the Microsemi Smart PQI (smartpqi) LSU plug-in to support attached disk management operations on the HPE ProLiant Gen10 Smart Array Controller.
-
With ESXi 6.7 Update 1, you can run the following storage ESXCLI commands to enable status LED on Intel VMD based NVMe SSDs without downloading Intel CLI:
esxcli storage core device set -l locator -d
esxcli storage core device set -l error -d
esxcli storage core device set -l off -d
- ESXi 6.7 Update 1 enables nfnic driver support for Cisco UCS Fibre Channel over Ethernet (FCoE).
- ESXi 6.7 Update 1 adds support for NGUID in the NVMe driver.
- An ESXi host and hostd might become unresponsive due to memory exhaustion caused by the lsi_mr3 disk serviceability plug-in.
- The Intel native i40en driver in ESXi might not work with NICs of the X722 series in IPv6 networks and the Wake-on-LAN service might fail.
- You might see colored logs
Device 10fb does not support flow control autoneg
as alerts, but this is a regular log that reflects the status of flow control support of certain NICs. On some OEM images, this log might display frequently, but it does not indicate any issues with the ESXi host.
- Some QLogic FastLinQ QL41xxx adapters might not create virtual functions after SR-IOV is enabled on the adapter and the host is rebooted.
- In rare occasions, NICs using the ntg3 driver, such as the Broadcom BCM5719 and 5720 GbE NICs, might temporarily stop sending packets after a failed attempt to send an oversize packet. The ntg3 driver version 4.1.3.2 resolves this problem.
-
Profile Name | ESXi-6.7.0-20181001001s-standard |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | October 16, 2018 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 1804719, 2093433, 2099951, 2168471 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
The ESXi userworld libxml2 library is updated to version 2.9.7.
-
The NTP daemon is updated to version ntp-4.2.8p11.
-
The SQLite database is updated to version 3.23.1.
-
The ESXi userworld libcurl library is updated to libcurl-7.59.0.
-
The OpenSSH version is updated to 7.7p1.
-
The ESXi userworld OpenSSL library is updated to version openssl-1.0.2o.
-
The Python third-party library is updated to version 3.5.5.
-
The Windows pre-Vista ISO image for VMware Tools is no longer packaged with ESXi. The Windows pre-Vista ISO image is available for download by users who require it. For download information, see the Product Download page.
-
Profile Name | ESXi-6.7.0-20181001001s-no-tools |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | October 16, 2018 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 1804719, 2093433, 2099951, 2168471 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
The ESXi userworld libxml2 library is updated to version 2.9.7.
-
The NTP daemon is updated to version ntp-4.2.8p11.
-
The SQLite database is updated to version 3.23.1.
-
The ESXi userworld libcurl library is updated to libcurl-7.59.0.
-
The OpenSSH version is updated to 7.7p1.
-
The ESXi userworld OpenSSL library is updated to version openssl-1.0.2o.
-
The Python third-party library is updated to version 3.5.5.
-
The Windows pre-Vista ISO image for VMware Tools is no longer packaged with ESXi. The Windows pre-Vista ISO image is available for download by users who require it. For download information, see the Product Download page.
-
Known Issues
The known issues are grouped as follows.
ESXi670-201810201-UG
- After you upgrade a host to ESXi 6.7 Update 1, remediating a host profile might fail compliance checks if the host has NVMe devices that support only NGUID as the device identifier
After you upgrade to ESXi 6.7 Update 1, if you try to remediate a host profile configured for stateful installs on a host with NVMe devices that support only NGUID as the device identifier, the compliance checks might fail. This is because the device identifier presented by the NVMe driver in ESXi 6.7 Update 1 is the NGUID itself, instead of an ESXi-generated t10 identifier.
Workaround: Update the Host Profile configuration:
- Navigate to the Host Profile.
- Click Copy Settings from Host.
- Select the host from which you want to copy the configuration settings.
- Click OK.
- Right-click the host and select Host Profiles > Reset Host Customizations.
- Host failure when converting a data host into a witness host
When you convert a vSAN cluster into a stretched cluster, you must provide a witness host. You can convert a data host into the witness host, but you must use maintenance mode with full data migration during the process. If you place the host into maintenance mode, enable the Ensure accessibility option, and then configure the host as the witness host, the host might fail with a purple diagnostic screen.
Workaround: Remove the disk group on the witness host and then re-create the disk group.
- Tagged unicast packets from a Virtual Function (VF) cannot arrive at its Physical Function (PF) when the port groups connected to the VF and PF are set to Virtual Guest Tagging (VGT)
If you configure VGT in trunk mode on both VF and PF port groups, the unicast traffic might not pass between the VF and the PF.
Workaround: Do not use the PF for VGT when its VFs are used for VGT. When you must use VGT on a PF and a VF for any reason, the VF and the PF must be on different physical NICs.
- Physical NIC binding might be lost after a PXE boot
If the vmknic is connected to an NSX-T logical switch with physical NIC binding configured, the configuration might be lost after a PXE boot. If a software iSCSI adapter is configured, you might see a host compliance error similar to
The iSCSI adapter vmhba65 does not have the vnic vmkX that is configured in the profile
.
Workaround: Configure the physical NIC binding and software iSCSI adapter manually after the host boots.
- The vmk0 management network MAC address might be deleted while remediating a host profile
Remediating a host profile with a VMkernel interface created on a VMware NSX-T logical switch might remove vmk0 from the ESXi hosts. Such host profiles cannot be used in an NSX-T environment.
Workaround: Configure the hosts manually.
- The SNMP service fails frequently after upgrade to ESXi 6.7
The SNMP service might fail frequently, at intervals as short as 30 minutes, after an upgrade to ESXi 6.7, if the execution of the main thread is not complete when a child thread is called. The service generates snmpd-zdump core dumps. If the SNMP service fails, you can restart it with the following commands: esxcli system snmp set -e false and esxcli system snmp set -e true.
Workaround: None.
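Although there is no workaround for the underlying issue, a minimal sketch of the recovery sequence mentioned above, run from the ESXi Shell, is:
# Disable and then re-enable the SNMP agent to restart it after a failure
esxcli system snmp set -e false
esxcli system snmp set -e true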
- When rebalancing disks, the amount of data to move displayed by vSAN health service does not match the amount displayed by the Ruby vSphere Console (RVC)
RVC performs a rough calculation to determine the amount of data to move when rebalancing disks. The value displayed by the vSAN health service is more accurate. When rebalancing disks, refer to the vSAN health service to check the amount of data to move.
Workaround: None.
- Host with three or more disk groups fails to boot after upgrade to vSAN 6.7 Update 1
In some cases, a host might fail to boot after an upgrade to vSAN 6.7 Update 1 from a previous release. This rare condition occurs on NUMA-enabled servers in a vSAN cluster with 3, 4, or 5 disk groups per host.
Workaround: If a host with three or more disk groups failed to boot during an upgrade to vSAN 6.7 Update 1, change the ESXi configuration on each host as follows:
Run the following commands on each vSAN host:
esxcfg-advcfg --set 0 /LSOM/blPLOGRecovCacheLines
auto-backup.sh
- An ESXi host might fail with a purple diagnostic screen while shutting down
An ESXi host might fail with a purple diagnostic screen due to a race condition in a query by the Multicast Listener Discovery (MLD) version 1 in IPv6 environments. You might see an error message similar to:
#PF Exception 14 in world 2098376:vmk0-rx-0 IP 0x41802e62abc1 addr 0x40
...
0x451a1b81b9d0:[0x41802e62abc1]mld_set_version@(tcpip4)#+0x161 stack: 0x430e0593c9e8
0x451a1b81ba20:[0x41802e62bb57]mld_input@(tcpip4)#+0x7fc stack: 0x30
0x451a1b81bb20:[0x41802e60d7f8]icmp6_input@(tcpip4)#+0xbe1 stack: 0x30
0x451a1b81bcf0:[0x41802e621d3b]ip6_input@(tcpip4)#+0x770 stack: 0x451a00000000
Workaround: Disable IPv6. If you cannot disable IPv6 for some reason, disable MLDv1 from all other devices in your network.