Release Date: November 29, 2018
IMPORTANT: If you are a vSAN customer, you have not yet upgraded to ESXi650-201810001, and an expansion of a VMDK is likely, follow these steps before you upgrade to ESXi650-201811002:
- Log in to the ESXi host through SSH as root.
- Set the VSAN.ClomEnableInplaceExpansion advanced configuration option to 0 on all hosts.
No reboot is required. The VSAN.ClomEnableInplaceExpansion setting does not affect host reboots or running workloads.
After you upgrade all hosts to ESXi650-201810001, you can set VSAN.ClomEnableInplaceExpansion back to 1 to re-enable it.
If you have upgraded to ESXi650-201810001, no additional steps are required.
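For reference, the following is a minimal sketch of how the option can be set from the ESXi Shell. It assumes that the advanced option VSAN.ClomEnableInplaceExpansion is exposed on the command line as the path /VSAN/ClomEnableInplaceExpansion:
# Disable in-place expansion on the host before the upgrade
esxcli system settings advanced set -o /VSAN/ClomEnableInplaceExpansion -i 0
# Verify the current value
esxcli system settings advanced list -o /VSAN/ClomEnableInplaceExpansion
Repeat the commands on every host in the cluster. After the upgrade, the same set command with -i 1 re-enables the option.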
Download Filename:
ESXi650-201811002.zip
Build:
10884925
Download Size:
457.4 MB
md5sum:
7d3cd75ace63f381d8c8f34e24c9c413
sha1checksum:
882b1fb4bc139dd65956a3e9380f4f03eeb2538a
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Bulletins
Bulletin ID | Category | Severity |
ESXi650-201811401-BG | Bugfix | Critical |
ESXi650-201811402-BG | Bugfix | Important |
ESXi650-201811403-BG | Bugfix | Important |
ESXi650-201811404-BG | Bugfix | Critical |
ESXi650-201811405-BG | Bugfix | Important |
ESXi650-201811406-BG | Bugfix | Moderate |
ESXi650-201811101-SG | Security | Critical |
ESXi650-201811102-SG | Security | Important |
ESXi650-201811103-SG | Security | Moderate |
Rollup Bulletin
This rollup bulletin contains the latest VIBs with all the fixes since the initial release of ESXi 6.5.
Bulletin ID | Category | Severity |
ESXi650-201811002 | Bugfix | Critical |
Image Profiles
VMware patch and update releases contain general and critical image profiles. The general release image profile applies to new bug fixes.
Image Profile Name |
ESXi-6.5.0-20181104001-standard |
ESXi-6.5.0-20181104001-no-tools |
ESXi-6.5.0-20181101001s-standard |
ESXi-6.5.0-20181101001s-no-tools |
For more information about the individual bulletins, see the My VMware page and the Resolved Issues and Known Issues sections.
Patch Download and Installation
The typical way to apply patches to ESXi hosts is by using the VMware vSphere Update Manager. For details, see About Installing and Administering VMware vSphere Update Manager.
ESXi hosts can be updated by manually downloading the patch ZIP file from the VMware download page and installing the VIB by using the esxcli software vib command. Additionally, the system can be updated by using the image profile and the esxcli software profile command.
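For example, after uploading the offline bundle to a datastore, you might run commands similar to the following from the ESXi Shell; the datastore path is illustrative and the host should be in maintenance mode:
# Update all VIBs on the host from the downloaded patch bundle
esxcli software vib update -d /vmfs/volumes/datastore1/ESXi650-201811002.zip
# Alternatively, apply one of the image profiles listed above from the same bundle
esxcli software profile update -d /vmfs/volumes/datastore1/ESXi650-201811002.zip -p ESXi-6.5.0-20181104001-standard
Reboot the host after the update completes.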
For more information, see vSphere Command-Line Interface Concepts and Examples and vSphere Upgrade Guide.
Resolved Issues
The resolved issues are grouped as follows.
- ESXi650-201811401-BG
- ESXi650-201811402-BG
- ESXi650-201811403-BG
- ESXi650-201811404-BG
- ESXi650-201811405-BG
- ESXi650-201811406-BG
- ESXi650-201811101-SG
- ESXi650-201811102-SG
- ESXi650-201811103-SG
- ESXi-6.5.0-20181104001-standard
- ESXi-6.5.0-20181104001-no-tools
- ESXi-6.5.0-20181101001s-standard
- ESXi-6.5.0-20181101001s-no-tools
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 1847663 , 1863313 , 1894568 , 1949789 , 2021943 , 2022147 , 2029579 , 2037849 , 2037926 , 2057600 , 2063154 , 2071506 , 2071610 , 2071817 , 2072971 , 2073175 , 2078843 , 2079002 , 2083594 , 2083627 , 2084723 , 2088951 , 2089048 , 2096942 , 2097358 , 2098928 , 2102135 , 2102137 , 2107335 , 2113432 , 2119609 , 2122259 , 2122523 , 2128932 , 2130371 , 2133589 , 2136002 , 2139317 , 2139940 , 2142767 , 2144766 , 2152381 , 2153867 , 2154912 , 2187127, 2128759, 2155840 , 2155858 , 2156840 , 2157503 , 2158561 , 2164733 , 2167294 , 2167877 , 2170126 , 2171799 , 2173856 , 2179262 , 2180962 , 2182211 , 2186065 , 2191349 , 2192629 , 2192836 , 2193829 , 2187136, 2194304 , 2197789 , 2203385 , 2203836 , 2204024 , 2204028 , 2204507 , 2209900 , 2209919 , 2211285 , 2211639 , 2213917 , 2225439 , 2225471, 2133634, 2139131 |
CVE numbers | N/A |
This patch updates the esx-base, esx-tboot, vsan, and vsanhealth VIBs to resolve the following issues:
Intelligent site continuity for stretched clusters. In the case of a partition between the preferred and secondary data sites, ESXi650-201811002 enables vSAN to intelligently determine which site leads to maximum data availability before automatically forming a quorum with the witness. The secondary site can operate as the active site until the preferred site has the latest copy of the data. This prevents the virtual machines from migrating back to the preferred site and losing locality of data reads.
Adaptive resync for dynamic management of resynchronization traffic. With ESXi650-201811002, adaptive resynchronization speeds up time to compliance, restoring an object back to its provisioned failures to tolerate, by allocating dedicated bandwidth to resynchronization I/O. Resynchronization I/O is generated by vSAN to bring an object back to compliance. While minimum bandwidth is guaranteed for resynchronization I/Os, the bandwidth can be increased dynamically if there is no contention from the client I/O. Conversely, if there are no resynchronization I/Os, client I/Os can use the additional bandwidth.
- PR 1949789: Upgrade to ESXi 6.5 and later using vSphere Update Manager might fail due to an issue with the libparted library
Upgrade to ESXi 6.5 and later using vSphere Update Manager might fail due to an issue with the libparted open source library. You might see the following backtrace:
[root@hidrogenio07:~] partedUtil getptbl /vmfs/devices/disks/naa.60000970000592600166533031453135
Backtrace has 12 calls on stack:
12: /lib/libparted.so.0(ped_assert+0x2a) [0x9e524ea]
11: /lib/libparted.so.0(ped_geometry_read+0x117) [0x9e5be77]
10: /lib/libparted.so.0(ped_geometry_read_alloc+0x75) [0x9e5bf45]
9: /lib/libparted.so.0(nilfs2_probe+0xb5) [0x9e82fe5]
8: /lib/libparted.so.0(ped_file_system_probe_specific+0x5e) [0x9e53efe]
7: /lib/libparted.so.0(ped_file_system_probe+0x69) [0x9e54009]
6: /lib/libparted.so.0(+0x4a064) [0x9e8f064]
5: /lib/libparted.so.0(ped_disk_new+0x67) [0x9e5a407]
4: partedUtil() [0x804b309]
3: partedUtil(main+0x79e) [0x8049e6e]
2: /lib/libc.so.6(__libc_start_main+0xe7) [0x9edbb67]
1: partedUtil() [0x804ab4d] Aborted
This issue is resolved in this release.
- PR 2169914: SolidFire arrays might not get optimal performance
SolidFire arrays might not get optimal performance without reconfiguration of SATP claim rules.
This issue is resolved in this release. This fix sets Storage Array Type Plug-in (SATP) claim rules for SolidFire SSD SAN storage arrays to VMW_SATP_DEFAULT_AA and the Path Selection Policy (PSP) to VMW_PSP_RR with 10 I/O operations per second by default to achieve optimal performance.
- PR 2169924: Claim rules must be manually added to ESXi
Claim rules must be manually added to ESXi for the following DELL MD Storage array models: MD32xx, MD32xxi, MD36xxi, MD36xxf, MD34xx, MD38xxf, and MD38xxi.
This issue is resolved in this release. The fix sets SATP to VMW_SATP_ALUA, PSP to VMW_PSP_RR, and Claim Options to tpgs_on as default for the following DELL MD Storage array models: MD32xx, MD32xxi, MD36xxi, MD36xxf, MD34xx, MD38xxf, and MD38xxi.
- PR 2170118: An ESXi host might fail with a purple diagnostic screen during shutdown or power off of virtual machines if you use EMC RecoverPoint
An ESXi host might fail with a purple diagnostic screen during shutdown or power off of a virtual machine if you use EMC RecoverPoint because of a race condition in the vSCSI filter tool.
This issue is resolved in this release.
- PR 2170120: Some paths to a storage device might become unavailable after a non-disruptive upgrade of the storage array controllers
During a non-disruptive upgrade, some paths might enter a permanent device loss (PDL) state. Such paths remain unavailable even after the upgrade. As a result, the device might lose connectivity.
This issue is resolved in this release.
- PR 2071817: ESXi hosts might fail due to a corrupt data structure of the page cache
ESXi hosts might fail with a purple diagnostic screen while removing a page from the page cache due to a corrupt data structure. You might see the following backtrace:
2018-01-01T04:02:47.859Z cpu13:33232)Backtrace for current CPU #13, worldID=33232, rbp=0xd
2018-01-01T04:02:47.859Z cpu13:33232)0x43914e81b720:[0x41800501883b]PageCacheRemoveFirstPageLocked@vmkernel#nover+0x2f stack: 0x4305dd30
2018-01-01T04:02:47.859Z cpu13:33232)0x43914e81b740:[0x418005019144]PageCacheAdjustSize@vmkernel#nover+0x260 stack: 0x0, 0x3cb418317a909
2018-01-01T04:02:47.859Z cpu13:33232)0x43914e81bfd0:[0x41800521746e]CpuSched_StartWorld@vmkernel#nover+0xa2 stack: 0x0, 0x0, 0x0, 0x0, 0
2018-01-01T04:02:47.877Z cpu13:33232)^[[45m^[[33;1mVMware ESXi 6.0.0 [Releasebuild-6921384 x86_64]^[[0m
#GP Exception 13 in world 33232:memMap-13 @ 0x41800501883b
This issue is resolved in this release. For more information, see VMware Knowledge Base article 53511.
- PR 2107335: Claim rules must be manually added to the ESXi host
Claim rules must be manually added to the ESXi host for HITACHI OPEN-V storage arrays.
This issue is resolved in this release. This fix sets SATP to VMW_SATP_ALUA, PSP to VMW_PSP_RR, and Claim Options to tpgs_on as default for HITACHI OPEN-V storage arrays with Asymmetric Logical Unit Access (ALUA) support. The fix also sets SATP to VMW_SATP_DEFAULT_AA, PSP to VMW_PSP_RR, and Claim Options to tpgs_off as default for HITACHI OPEN-V storage arrays without ALUA support.
- PR 2122259: Manually added settings to the NTP configuration might be lost after an NTP update
If you manually add settings to the NTP configuration, they might be deleted from the ntp.conf file if you update NTP by using the vSphere Web Client. With this fix, NTP updates preserve all settings, along with restrict options, the driftfile, and manually added servers, in the ntp.conf file. If you manually modify the ntp.conf file, you must restart hostd to propagate the updates.
This issue is resolved in this release.
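If you do modify ntp.conf manually, the changes can be propagated from the ESXi Shell with commands similar to the following sketch:
# Edit the NTP configuration manually
vi /etc/ntp.conf
# Restart the NTP daemon and hostd so that the changes take effect
/etc/init.d/ntpd restart
/etc/init.d/hostd restart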
- PR 2136002: Stale tickets in RAM disks might cause ESXi hosts to stop responding
Stale tickets, which are not deleted before hostd generates a new ticket, might exhaust the nodes of RAM disks. This might cause ESXi hosts to stop responding.
This issue is resolved in this release.
- PR 2083594: You might be unable to use a connected USB device as a passthrough to migrate virtual machines by using vSphere vMotion
You might be unable to use a connected USB device as a passthrough to migrate virtual machines by using vSphere vMotion due to a redundant condition check.
This issue is resolved in this release. If you already face the issue, add the following line to the .vmx file: usb.generic.allowCCID = "TRUE". For more information, see VMware Knowledge Base article 55789.
- PR 2128932: I/O submitted to a VMFSsparse snapshot might fail without an error
If a virtual machine is on a VMFSsparse snapshot, I/Os issued to the virtual machine might be processed only partially at VMFSsparse level, but upper layers, such as I/O Filters, could presume the transfer is successful. This might lead to data inconsistency.
This issue is resolved in this release. This fix sets a transient error status for reference from upper layers if an I/O is complete.
- PR 2144766: I/O commands might fail with INVALID FIELD IN CDB error
ESXi hosts might not reflect the MAXIMUM TRANSFER LENGTH parameter reported by the SCSI device in the Block Limits VPD page. As a result, I/O commands issued with a transfer size greater than the limit might fail with a similar log:
2017-01-24T12:09:40.065Z cpu6:1002438588)ScsiDeviceIO: SCSICompleteDeviceCommand:3033: Cmd(0x45a6816299c0) 0x2a, CmdSN 0x19d13f from world 1001390153 to dev "naa.514f0c5d38200035" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
This issue is resolved in this release.
- PR 2152381: The esxtop command-line utility might not display the queue depth of devices correctly
The esxtop command-line utility might not display an updated value of queue depth of devices if the corresponding device path queue depth changes.
This issue is resolved in this release.
- PR 2089048: Using the variable Windows extension option in Windows Deployment Services (WDS) on virtual machines with EFI firmware might result in slow PXE booting
If you attempt to PXE boot a virtual machine that uses EFI firmware, with a vmxnet3 network adapter and WDS, and you have not disabled the variable Windows extension option in WDS, the virtual machine might boot extremely slowly.
This issue is resolved in this release.
- PR 2102135: Apple devices with an iOS version later than iOS 11 cannot connect to virtual machines with an OS X version later than OS X 10.12
Due to issues with Apple USB, devices with an iOS version later than iOS 11 cannot connect to virtual machines with an OS X version later than OS X 10.12.
This issue is resolved in this release. To enable connection in earlier releases of ESXi, add usb.quirks.darwinVendor45 = TRUE as a new option to the .vmx configuration file.
- PR 2139940: IDs for sending vmware.log data to the Syslog might not work as expected
IDs for sending vmware.log data to the Syslog, defined with vmx.log.syslogID, might not work as expected because the string specified in the variable is ignored.
This issue is resolved in this release.
- PR 2088951: ESXi hosts might intermittently fail if you disable global IPv6 addresses
ESXi hosts might intermittently fail if you disable global IPv6 addresses, because a code path still uses IPv6.
This issue is resolved in this release. If you already face the issue, re-enable the global IPv6 addresses to avoid failure of hosts. If you need to disable IPv6 addresses for some reason, you must disable IPv6 on individual vmknics, not global IPv6 addresses.
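For example, global IPv6 support can be re-enabled with commands similar to the following minimal sketch; a host reboot is required for the change to take effect. Per-vmknic IPv6 settings are managed through the esxcli network ip interface ipv6 namespace:
# Re-enable global IPv6 support on the host
esxcli network ip set --ipv6-enabled=true
# Confirm the setting
esxcli network ip get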
- PR 2098928: Hostd might fail if an HTTP request is made during hostd initialization
In very rare situations, when an HTTP request is made during hostd initialization, the agent might fail.
This issue is resolved in this release.
- PR 2097358: Hostd fails when certain invalid parameters are present in the VMX file of a virtual machine
Hostd might fail when certain invalid parameters are present in the .vmx file of a virtual machine, for example, when a parameter such as ideX:Y.present with the value True is manually added to the .vmx file.
This issue is resolved in this release.
- PR 2078843: Migration by using vSphere vMotion might fail between hosts of different ESXi versions
When you have hosts of different ESXi versions in a cluster of Linux virtual machines, and the Enhanced vMotion Compatibility (EVC) level is set to L0 or L1 for Merom and Penryn processors, the virtual machines might stop responding if you attempt migration by using vSphere vMotion from an ESXi 6.0 host to a host with a newer ESXi version.
This issue is resolved in this release.
- PR 2153867: An ESXi host might become unresponsive while closing VMware vSphere VMFS6 volumes
An ESXi host might become unresponsive because datastore heartbeating might stop prematurely while closing VMware vSphere VMFS6 volumes. As a result, the affinity manager cannot exit gracefully.
This issue is resolved in this release.
- PR 2133589: Claim rules must be manually added to ESXi for Tegile IntelliFlash storage arrays
Claim rules must be manually added to ESXi for Tegile IntelliFlash storage arrays.
This issue is resolved in this release. This fix sets SATP to VMW_SATP_ALUA, PSP to VMW_PSP_RR, and Claim Options to tpgs_on as default for Tegile IntelliFlash storage arrays with ALUA support. The fix also sets SATP to VMW_SATP_DEFAULT_AA, PSP to VMW_PSP_RR, and Claim Options to tpgs_off as default for Tegile IntelliFlash storage arrays without ALUA support.
- PR 2113432: VMkernel Observations (VOB) events might generate unnecessary device performance warnings
The following two VOB events might be generated due to variations in the I/O latency in a storage array, but they do not report an actual problem in virtual machines:
Device naa.xxx performance has deteriorated. I/O latency increased from average value of 4114 microseconds to 84518 microseconds.
Device naa.xxx performance has improved. I/O latency reduced from 346115 microseconds to 67046 microseconds.
This issue is resolved in this release.
- PR 2119609: Migration of a virtual machine with a Filesystem Device Switch (FDS) on a VMware vSphere Virtual Volumes datastore by using VMware vSphere vMotion might cause multiple issues
If you use vSphere vMotion to migrate a virtual machine with file device filters from a vSphere Virtual Volumes datastore to another host, and the virtual machine has either of the Changed Block Tracking (CBT), VMware vSphere Flash Read Cache (VFRC), or IO filters enabled, the migration might cause issues with any of the features. During the migration, the file device filters might not be correctly transferred to the host. As a result, you might see corrupted incremental backups in CBT, performance degradation of VFRC and cache IO filters, corrupted replication IO filters and disk corruption, when cache IO filters are configured in write-back mode. You might also see issues with the virtual machine encryption.
This issue is resolved in this release.
- PR 2096942: getTaskUpdate API calls to deleted task IDs might cause log spew and higher API bandwidth consumption
If you use a VMware vSphere APIs for Storage Awareness provider, you might see multiple getTaskUpdate calls to cancelled or deleted tasks. As a result, you might see a higher consumption of vSphere API for Storage Awareness bandwidth and a log spew.
This issue is resolved in this release.
- PR 2142767: VMware vSphere Virtual Volumes might become unresponsive if a vSphere API for Storage Awareness provider loses binding information from its database
vSphere Virtual Volumes might become unresponsive due to an infinite loop that loads CPUs at 100% if a vSphere API for Storage Awareness provider loses binding information from its database. Hostd might also stop responding. You might see a fatal error message. This fix prevents infinite loops in case of database binding failures.
This issue is resolved in this release.
- PR 2155840: Hostd might become unresponsive during log stress
If the logging resource group of an ESXi host is under stress due to heavy logging, hostd might become unresponsive while running esxcfg-syslog commands. Since this fix moves the esxcfg-syslog related commands from the logging resource group to the ESXi shell, you might see higher usage of memory under the shell default resource group.
This issue is resolved in this release.
- PR 2088424: You might see false hardware health alarms due to disabled or idle Intelligent Platform Management Interface (IPMI) sensors
Disabled IPMI sensors, or sensors that do not report any data, might generate false hardware health alarms.
This issue is resolved in this release. This fix filters out such alarms.
- PR 2156458: Unregistering a vSphere API for Storage Awareness provider in a disconnected state might result in an error
If you unregister a vSphere API for Storage Awareness provider in a disconnected state, you might see a java.lang.NullPointerException error. Event and alarm managers are not initialized when a provider is in a disconnected state, which causes the error.
This issue is resolved in this release.
- PR 2122523: A VMFS6 datastore might report out of space incorrectly
A VMFS6 datastore might report out of space incorrectly due to stale cache entries.
This issue is resolved in this release. Space allocation reports are corrected with the automatic updates of the cache entries, but this fix prevents the error even before an update.
- PR 2052171: Virtual machine disk consolidation might fail if the virtual machine has snapshots taken with Content Based Read Cache (CBRC) enabled and then disabled
If a virtual machine has snapshots taken by using CBRC and later CBRC is disabled, disk consolidation operations might fail with an error A specified parameter was not correct: spec.deviceChange.device due to a deleted digest file after CBRC was disabled. An alert Virtual machine disks consolidation is needed. is displayed until the issue is resolved.
This issue is resolved in this release. This fix prevents the issue, but it might still exist for virtual machines that have snapshots taken with CBRC enabled and later disabled. If you face the issue, enable digest on the whole disk chain to delete and recreate all digest files.
- PR 2071506: Dell OpenManage Integration for VMware vCenter (OMIVV) might fail to identify some Dell modular servers from the Integrated Dell Remote Access Controller (iDRAC)
OMIVV relies on information from the iDRAC property hardware.systemInfo.otherIdentifyingInfo.ServiceTag to fetch the SerialNumber parameter for identifying some Dell modular servers. A mismatch in the serviceTag property might fail this integration.
This issue is resolved in this release.
- PR 2137711: Presence Sensors in the Hardware Status tab might display status Unknown
Previously, if a component such as a processor or a fan was missing from a vCenter Server system, the Presence Sensors displayed a status Unknown. However, the Presence Sensors do not have a health state associated with them.
This issue is resolved in this release. This fix filters components with Unknown status.
- PR 2192836: If the target connected to an ESXi host supports only implicit ALUA and has only standby paths, the host might fail with a purple diagnostic screen at the time of device discovery
If the target connected to an ESXi host supports only implicit ALUA and has only standby paths, the host might fail with a purple diagnostic screen at the time of device discovery due to a race condition. You might see a backtrace similar to:
SCSIGetMultipleDeviceCommands (vmkDevice=0x0, result=0x451a0fc1be98, maxCommands=1, pluginDone=<optimized out>) at bora/vmkernel/storage/device/scsi_device_io.c:2476
0x00004180171688bc in vmk_ScsiGetNextDeviceCommand (vmkDevice=0x0) at bora/vmkernel/storage/device/scsi_device_io.c:2735
0x0000418017d8026c in nmp_DeviceStartLoop (device=device@entry=0x43048fc37350) at bora/modules/vmkernel/nmp/nmp_core.c:732
0x0000418017d8045d in nmpDeviceStartFromTimer (data=..., data@entry=...) at bora/modules/vmkernel/nmp/nmp_core.c:807
This issue is resolved in this release.
- PR 2130371: Upgrade of VMware Tools might fail with VIX error code 21000
If an ESXi host has no available VMware Tools ISO image and you try to upgrade VMware Tools on an active virtual machine, the operation might fail with VIX error code 21000. You might not be able to upgrade VMware Tools by using either API, the vSphere Client, or the vSphere Web Client, even after an ISO image is available. This is because the ESXi host caches the first availability check at VM power on and does not update it.
This issue is resolved in this release. The fix sets a 5-minute default expiration of the VMware Tools ISO image availability check. If an upgrade fails, you can retry the operation 5 minutes after mounting an ISO image.
- PR 2021943: Logs of the hostd service and the syslog might collect unnecessary debug exception logs
Logs of the hostd service and the Syslog might collect unnecessary debug exception logs across all ESXi hosts in a cluster.
In the hostd log, messages are similar to:
No match, sensor_health file missing sensorNumer 217: class 3 sensor type 35 offset 0
In the syslog, /etc/sfcb/omc/sensor_health messages are similar to:
Missing expected value to check for
This issue is resolved in this release.
- PR 2167294: All Paths Down (APD) is not triggered for LUNs behind IBM SAN Volume Controller (SVC) target even when no paths can service I/Os
In an ESXi configuration with multiple paths leading to LUNs behind IBM SVC targets, if connection is lost on the active paths and at the same time the other connected paths are not in a state to service I/Os, the ESXi host might not detect this condition as APD even though no paths are actually available to service I/Os. As a result, I/Os to the device are not fast failed.
This issue is resolved in this release. The fix is disabled by default. To enable the fix, set the ESXi configuration option /Scsi/ExtendAPDCondition by running esxcfg-advcfg -s 1 /Scsi/ExtendAPDCondition.
- PR 2022147: Custom Rx and Tx ring sizes of physical NICs might not be persistent across ESXi host reboots
If you customize the Rx and Tx ring sizes of physical NICs to boost network performance, by using the following commands:
esxcli network nic ring current set -n <vmnicX> -t <value>
esxcli network nic ring current set -n <vmnicX> -r <value>
the settings might not be persistent across ESXi host reboots.
This issue is resolved in this release. The fix makes these ESXCLI configurations persistent across reboots by writing them to the ESXi configuration file. Ring size configuration must be allowed on the physical NICs before you use the CLI.
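To confirm that customized ring sizes survive a reboot, you can read them back with commands similar to the following sketch, assuming the get counterparts of the ring commands shown above are available in your build; vmnic0 is an illustrative NIC name:
# Show the currently configured RX and TX ring sizes
esxcli network nic ring current get -n vmnic0
# Show the maximum ring sizes supported by the NIC
esxcli network nic ring preset get -n vmnic0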
- PR 2171799: A soft lockup of physical CPUs might cause an ESXi host to fail with a purple diagnostic screen
A large number of I/Os timing out on a heavily loaded system might cause a soft lockup of physical CPUs. As a result, an ESXi host might fail with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2173856: Performance of long distance vSphere vMotion operations at high latency might deteriorate due to the max socket buffer size limit
You might see poor performance of long distance vSphere vMotion operations at high latency, such as 100 ms and higher, with high speed network links, such as 10 GbE, due to the hard-coded socket buffer limit of 16 MB. With this fix, you can configure the max socket buffer size parameter SB_MAX_ADJ.
This issue is resolved in this release.
- PR 2193829: The virtual machine executable process might fail and shut down virtual machines during a reboot of the guest OS
Due to a race condition, the virtual machine executable process might fail and shut down virtual machines during a reboot of the guest OS.
This issue is resolved in this release.
- PR 1894568: Log spew in the syslog.log
You might see multiple messages such as sfcb-vmware_raw[69194]: IpmiIfcFruGetInv: Failed during send cc = 0xc9 in the syslog.log. These are regular information logs, not error or warning logs.
This issue is resolved in this release.
- PR 2203385: After a successful quiesced snapshot of a Linux virtual machine, the Snapshot Manager might still display the snapshot as not quiesced
After taking a successful quiesced snapshot of a Linux virtual machine, the Snapshot Manager might still display the snapshot as not quiesced.
This issue is resolved in this release.
- PR 2191349: The SMART disk monitoring daemon, smartd, might flood the Syslog service logs of release builds with debugging and info messages
In release builds, smartd might generate a lot of debugging and info messages into the Syslog service logs.
This issue is resolved in this release. The fix removes debug messages from release builds.
- PR 2179262: Virtual machines with a virtual SATA CD-ROM might fail due to invalid commands
Invalid commands to a virtual SATA CD-ROM might trigger errors and increase the memory usage of virtual machines. This might lead to a failure of virtual machines if they are unable to allocate memory.
You might see the following logs for invalid commands:
YYYY-MM-DDTHH:MM:SS.mmmZ| vcpu-1| I125: AHCI-USER: Invalid size in command
YYYY-MM-DDTHH:MM:SS.mmmZ| vcpu-1| I125: AHCI-VMM: sata0:0: Halted the port due to command abort.
And a similar panic message:
YYYY-MM-DDTHH:MM:SS.mmmZ| vcpu-1| E105: PANIC: Unrecoverable memory allocation failure
This issue is resolved in this release.
- PR 2182211: The Physical Location column on the Disk Management table of vSphere Web Client remains empty
When you select the Physical Location column from the Disk Management table on Cluster Configuration Dashboard of vSphere Web Client, the column might remain empty and display only information about the hpsa driver.
This issue is resolved in this release.
- PR 2194304: ESXi host smartd reports some warnings
Some critical flash device parameters, including Temperature and Reallocated Sector Count, do not provide threshold values. As a result, the ESXi host smartd daemon might report some warnings.
This issue is resolved in this release.
- PR 2209900: An ESXi host might become unresponsive
The hostd service might stop responding while the external process esxcfg-syslog remains stuck. As a result, the ESXi host might become unresponsive.
This issue is resolved in this release.
- PR 2167877: An ESXi host might fail with a purple diagnostic screen if you enable IPFIX in continuous heavy traffic
When you enable IPFIX and traffic is heavy with different flows, the system heartbeat might fail to preempt CPU from IPFIX for a long time and trigger a purple diagnostic screen.
This issue is resolved in this release.
- PR 2211285: A virtual machine port might be blocked after a VMware vSphere High Availability (vSphere HA) failover
By default, the Isolation Response feature is disabled on an ESXi host with vSphere HA enabled. When the Isolation Response feature is enabled, a virtual machine port, which is connected to an NSX-T logical switch, might be blocked after a vSphere HA failover.
This issue is resolved in this release.
- PR 2211639: You might see out of order packets with infrastructure traffic when the VMware vSphere Network I/O Control and queue pairing are disabled
If the vSphere Network I/O Control and queue pairing of NICs are disabled, you might see out of order packets with infrastructure traffic, such as management, ISCSI, and NFS. This is because when vSphere Network I/O Control is not enabled, queue pairing is also disabled, and NICs work with multiple queues.
This issue is resolved in this release.
- PR 2186065: Firmware event code logs might flood the vmkernel.log
Drives that do not support Block Limits VPD page 0xb0 might generate event code logs that flood the vmkernel.log.
This issue is resolved in this release.
- PR 2209919: An ESXi host might fail with a purple diagnostic screen and report the error: PANIC bora/vmkernel/main/dlmalloc.c:4924 - Usage error in dlmalloc
When you replicate virtual machines using VMware vSphere Replication, the ESXi host might fail with a purple diagnostic screen immediately or within 24 hours and report the error PANIC bora/vmkernel/main/dlmalloc.c:4924 - Usage error in dlmalloc.
This issue is resolved in this release.
- PR 2204024: Virtual machines might become unresponsive due to repetitive failures of third party device drivers to process commands
Virtual machines might become unresponsive due to repetitive failures in some third party device drivers to process commands. You might see the following error when opening the virtual machine console:
Error: "Unable to connect to the MKS: Could not connect to pipe \\.\pipe\vmware-authdpipe within retry period"
This issue is resolved in this release. This fix recovers commands to unresponsive third party device drivers and ensures that failed commands are stopped and retried until success.
- PR 2164733: Migration of virtual machines by using VMware vSphere vMotion might fail with a NamespaceDb compatibility error if Guest Introspection service is on
If the Guest Introspection service in a vSphere 6.5 environment with more than 150 virtual machines is active, migration of virtual machines by using vSphere vMotion might fail with an error in the vSphere Web Client similar to
The source detected that the destination failed to resume.
The destination vmware.log contains error messages similar to:
2018-07-18T02:41:32.035Z| vmx| I125: MigrateSetState: Transitioning from state 11 to 12.
2018-07-18T02:41:32.035Z| vmx| I125: Migrate: Caching migration error message list:
2018-07-18T02:41:32.035Z| vmx| I125: [msg.checkpoint.migration.failedReceive] Failed to receive migration.
2018-07-18T02:41:32.035Z| vmx| I125: [msg.namespaceDb.badVersion] Incompatible version -1 (expect 2).
2018-07-18T02:41:32.035Z| vmx| I125: [msg.checkpoint.mrestoregroup.failed] An error occurred restoring the virtual machine state during migration.
The vmkernel log has error messages such as:
2018-07-18T02:32:43.011Z cpu5:66134)WARNING: Heap: 3534: Heap fcntlHeap-1 already at its maximum size. Cannot expand.
2018-07-18T02:41:35.613Z cpu2:66134)WARNING: Heap: 4169: Heap_Align(fcntlHeap-1, 200/200 bytes, 8 align) failed. caller: 0x41800aaca9a3
This issue is resolved in this release.
- PR 2063154: An ESXi host might become unresponsive when it disconnects from an NFS datastore that is configured for logging
If an NFS datastore is configured as a Syslog datastore and an ESXi host disconnects from it, logging to the datastore stops and the ESXi host might become unresponsive.
This issue is resolved in this release.
- PR 2203836: Claim rules must be manually added to ESXi for Lenovo DE series storage arrays
This fix sets SATP to VMW_SATP_ALUA, PSP to VMW_PSP_RR, and Claim Options to tpgs_on as default for Lenovo DE series storage arrays.
This issue is resolved in this release.
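On hosts without this patch, a similar claim rule can be added and inspected manually with commands like the following sketch. The vendor and model strings are illustrative placeholders, not values taken from this release:
# Add a SATP claim rule for an ALUA-capable array (placeholder vendor and model)
esxcli storage nmp satp rule add -s VMW_SATP_ALUA -P VMW_PSP_RR -c tpgs_on -V "VendorName" -M "ModelName" -e "Example claim rule"
# List the configured SATP claim rules to verify
esxcli storage nmp satp rule list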
- PR 2110196: Capacity disk failure in vSAN cluster with deduplication, causing host failure
In a vSAN cluster with deduplication enabled, an ESXi host might fail when the I/O is directed to a disk that is in Permanent Device Loss (PDL) state.
This issue is resolved in this release.
- PR 2084723: Hostd might fail if you delete from an ESXi host the support bundle folder of a virtual machine
If you manually delete the support bundle folder of a virtual machine, downloaded in the /scratch/downloads directory of an ESXi host, hostd might fail when it automatically tries to delete folders in this path one hour after their creation.
This issue is resolved in this release.
- PR 2213917: An ESXi host might fail with purple diagnostic screen due to a race condition
Although the cache entry is already evicted, it might be evicted again due to a race condition. This leads to a null pointer dereference in the ESXi host, which causes the host to fail.
This issue is resolved in this release.
- PR 2197789: The NIOC scheduler might reset the uplink
The NIOC hClock scheduler might reset the uplink network device when the uplink is periodically used, and the reset is unpredictable.
This issue is resolved in this release.
- PR 1863313: Applying a host profile with enabled Stateful Install to an ESXi 6.5 host by using vSphere Auto Deploy might fail
If you enable the Stateful Install feature on a host profile, and the management VMkernel NIC is connected to a distributed virtual switch, applying the host profile to another ESXi 6.5 host by using vSphere Auto Deploy might fail during a PXE boot. The host remains in maintenance mode.
This issue is resolved in this release.
- PR 2029579: Continuous power on and off of a virtual machine with SR-IOV vNICs by using scripts might cause the ESXi host to fail
Power cycling of a virtual machine with SR-IOV vNICs by using scripts might cause the ESXi host to fail with a purple diagnostic screen.
This issue is resolved in this release. Run power cycle operations by using the vSphere Web Client or vSphere Client to avoid similar issues.
- PR 2139317: The ESXi host agent service might fail if an entity is no longer on the stats database of an ESXi host
If an entity, such as a virtual machine or a datastore, is no longer in the stats database of an ESXi host, but vim.PerformanceManager issues a request for performance data for this entity, it is possible to hit a code path that fails the host agent process. As a result, the host might become temporarily unavailable to the vCenter Server system.
This issue is resolved in this release.
- PR 2187136: Enabling NetFlow might lead to high network latency
If you enable the NetFlow network analysis tool to sample every packet on a vSphere Distributed Switch port group, by setting the Sampling rate to 0, the network latency might reach 1000 ms if flows exceed 1 million.
This issue is resolved in this release. You can further optimize NetFlow performance by setting the ipfixHashTableSize parameter of the IPFIX module, for example to ipfixHashTableSize=65536, by using the CLI. To complete the task, reboot the ESXi host.
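Module parameters such as this one are typically set with the esxcli system module parameters commands; the following sketch uses the value from the text above:
# Set the IPFIX hash table size module parameter
esxcli system module parameters set -m ipfix -p "ipfixHashTableSize=65536"
# Verify the configured parameters
esxcli system module parameters list -m ipfix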
- PR 2170126: Backup proxy virtual machines might go to invalid state during backup
A virtual machine that does hundreds of disk hot add or remove operations without powering off or migrating might be terminated and become invalid. This affects backup solutions where the backup proxy virtual machine might be terminated.
In the hostd log, you might see content similar to:
2018-06-08T10:33:14.150Z info hostd[15A03B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datatore/path/to/proxyVM/proxyVM.vmx opID=5953cf5e-3-a90a user=vpxuser:ADMIN\admin] State Transition (VM_STATE_ON -> VM_STATE_RECONFIGURING) ...
2018-06-08T10:33:14.167Z error hostd[15640B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datastore/path/to/proxyVM/proxyVM.vmx opID=5953cf5e-3-a90a user=vpxuser:ADMIN\admin] Could not apply pre-reconfigure domain changes: Failed to add file policies to domain :171: world ID :0:Cannot allocate memory ...
2018-06-08T10:33:14.826Z info hostd[15640B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datastore/path/to/proxyVM/proxyVM.vmx opID=5953cf5e-3-a90a user=vpxuser:ADMIN\admin] State Transition (VM_STATE_RECONFIGURING -> VM_STATE_ON) ...
2018-06-08T10:35:53.120Z error hostd[15A44B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datastore/path/to/proxyVM/proxyVM.vmx] Expected permission (3) for /vmfs/volumes/path/to/backupVM not found in domain 171
In the vmkernel log, the content is similar to:
2018-06-08T05:40:53.264Z cpu49:68763 opID=4c6a367c)World: 12235: VC opID 5953cf5e-3-a90a maps to vmkernel opID 4c6a367c
2018-06-08T05:40:53.264Z cpu49:68763 opID=4c6a367c)WARNING: Heap: 3534: Heap domainHeap-54 already at its maximum size. Cannot expand.
This issue is resolved in this release.
- PR 2155858: Exclusive affinity might not be unset after a virtual machine powers off
When powering off the last virtual machine on an ESXi host, if this virtual machine is using the sysContexts configuration, the scheduler might not remove the exclusive affinity set by the sysContexts configuration.
This issue is resolved in this release.
- PR 2095984: The SNMP agent might deliver an incorrect trap vmwVmPoweredOn
The SNMP agent might deliver an incorrect trap vmwVmPoweredOn when a virtual machine is selected under the Summary tab and the Snapshot Manager tab in the vSphere Web Client.
This issue is resolved in this release.
- PR 2204507: Advanced performance charts might stop drawing graphs for some virtual machine statistics after a restart of hostd
Advanced performance charts might stop drawing graphs for some virtual machine statistics after a restart of hostd due to a division by zero error in some counters.
This issue is resolved in this release.
- PR 2225439: The NSX opsAgent might fail when you deploy NSX-T appliance on an ESXi host
The NSX opsAgent might fail and you might see the core dump file located at /var/core when you deploy NSX-T appliance on an ESXi host. The failure is due to a race condition in the library code which is provided by the host.
This issue is resolved in this release.
- PR 2154912: VMware Tools might display incorrect status if you configure the /productLocker directory on a shared VMFS datastore
If you configure the /productLocker directory on a shared VMFS datastore, when you migrate a virtual machine by using vSphere vMotion, VMware Tools on the virtual machine might display an incorrect status Unsupported.
This issue is resolved in this release.
- PR 2180890: The MSINFO32 command might not display installed physical RAM on virtual machines running Windows 7 or Windows Server 2008 R2
The MSINFO32 command might not display installed physical RAM on virtual machines running Windows 7 or Windows Server 2008 R2. If two or more vCPUs are assigned to a virtual machine, the Installed Physical RAM field displays not available.
This issue is resolved in this release.
- PR 2085546: I/O latency in a vSAN cluster might increase when unaligned overlapping I/Os occur
In some cases, in which unaligned overlapping I/Os occur, the vSAN stack two phase commit engine's commit scheduler might not run immediately. This delay can add latency to I/O operations.
This issue is resolved in this release.
- PR 2051108: Readdir API always returns EOF, resulting in unexpected behavior for listdir commands
OSFS_Readdir API populates directory entries limited by the buffer size provided, and always returns End Of File (EOF) as Yes. You might have to call FSSReadDir multiple times to read the entire directory.
This issue is resolved in this release.
- PR 2164137: ESXi hosts in a vSAN stretched cluster might fail during upgrade to ESXi 6.5
While upgrading a vSAN stretched cluster from ESXi 6.0 to 6.5, a host might fail with a purple diagnostic screen. The following stack trace identifies this problem.
#0 DOMUtil_HashFromUUID
#1 DOMServer_GetServerIndexFromUUID
#2 DOMOwnerGetRdtMuxGroupInt
#3 DOMOwner_GetRdtMuxGroupUseNumServers
#4 DOMAnchorObjectGetOwnerVersionAndRdtMuxGroup
#5 DOMAnchorObjectCreateResolverAndSetMuxGroup
#6 DOMObject_InitServerAssociation
#7 DOMAnchorObjectInitAssociationToProxyOwnerStartTask
#8 DOMOperationStartTask
#9 DOMOperationDispatch
#10 VSANServerExecuteOperation
#11 VSANServerMainLoop
- PR 2112683: vSAN does not mark a disk as degraded even after failed I/Os are reported on the disk
In some cases, vSAN takes a long time to mark a disk as degraded, even though I/O failures are reported by the disk and vSAN has stopped servicing any further I/Os from that disk.
This issue is resolved in this release.
- PR 1814870: Fans on Dell R730 servers might appear in wrong hardware health monitoring group or not be displayed
Fans on Dell R730 servers might not appear under the Fan section in the Hardware Status tab of the vSphere Client or the vSphere Web Client, but appear in other sections, or might not be displayed at all.
This issue is resolved in this release.
- PR 2140766: vSAN capacity monitor displays incorrect used capacity
After upgrading vCenter Server to 6.5 Update 2, the vSAN capacity monitor does not include the capacity for virtual machines deployed in ESXi hosts with versions older than 6.5 Update 2. The monitor might show less used capacity than the amount actually used in the cluster.
This issue is resolved in this release.
- PR 2172723: Low guest I/O bandwidth in a vSAN cluster due to component congestion during large writes
In some cases, higher latency occurs during very large sequential write operations. You might notice a drop in IOPS when entering maintenance mode.
This issue is resolved in this release.
- PR 2073175: Connectivity to NFS datastores might be lost if you name a new datastore with the old name of a renamed existing datastore
Connectivity to NFS datastores might be lost if you name a new datastore with the old name of a renamed existing datastore. For example, this happens if you rename an existing datastore from NFS-01 to NFS-01-renamed and then create a new NFS datastore with the name NFS-01. As a result, the ESXi host loses connectivity to the renamed datastore and fails to mount the new one.
This issue is resolved in this release.
- PR 2079002: You cannot set IPv6 hostnames in pure IPv6 mode if you disable IPv4
If you disable IPv4 on an ESXi host, you might not be able to set IPv6 hostnames, because the system requires an IPv4 address.
This issue is resolved in this release. If both IPv4 and IPv6 are enabled on a host, IPv4 hostnames take precedence.
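For example, once the fix is applied, the host name can be set and verified from the CLI with commands similar to the following; the FQDN is an illustrative placeholder:
# Set the fully qualified host name
esxcli system hostname set --fqdn=esxi01.example.com
# Verify the configured host name
esxcli system hostname get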
- PR 2192629: The number of generated syslog.log archive files might be less than the configured default, and different between ESXi versions 6.0 and 6.5
The number of generated syslog.log archive files might be one less than the default value set using the syslog.global.defaultRotate parameter. The number of syslog.log archive files might also be different between ESXi versions 6.0 and 6.5. For instance, if syslog.global.defaultRotate is set to 8 by default, ESXi 6.0 creates syslog.0.gz to syslog.7.gz, while ESXi 6.5 creates syslog.0.gz to syslog.6.gz.
This issue is resolved in this release.
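The rotation count itself can be inspected and adjusted with commands similar to the following sketch:
# Show the current syslog configuration, including the rotation count
esxcli system syslog config get
# Set the number of rotated archives explicitly, then reload the syslog service
esxcli system syslog config set --default-rotate=8
esxcli system syslog reload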
- PR 2150648: Rebalance cannot free up space from an imbalanced disk if all components are under a concat node
When an object is expanded, a concatenation is added to the replicas to support the increased size requirement. Over a period of time, as vSAN creates new replicas to include the concatenation, the original replicas and concatenation are discarded. As vSAN merges the original replica and concatenation into a new replica, the operation might fail if vSAN does not have enough space to put the new components. If the disk becomes imbalanced, and all components on the disk are under concat nodes, the rebalance operation cannot move any components to balance the disk.
This issue is resolved in this release.
- PR 2039320: VMDK expansion fails in vSAN stretched cluster if the storage policy has set Data locality to Preferred or Secondary
You cannot expand a VMDK while also changing the storage policy of the VMDK, but expansion might still fail without any change in storage policy if the existing policy has set the Data locality rule.
This issue is resolved in this release.
- PR 2158561: You might not be able to use the VMware DirectPath I/O feature on virtual machines running on AMD-based hosts
The upper Reserved Memory Range Reporting limit might not be correctly computed for virtual machines running on AMD based hosts. This might prevent virtual machines using the VMware DirectPath I/O feature from powering on.
This issue is resolved in this release.
- PR 2180962: An ESXi host might fail with a purple diagnostic screen if a call in the Native Multipathing Plug-In (NMP) tries to unlock an already released lock
An error in the NMP device quiesce code path might lead to calls to unlock an already released lock, which might cause an ESXi host to fail with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2156840: Virtual machines using EFI and running Windows Server 2016 on AMD processors might stop responding during reboot
Virtual machines with hardware version 10 or earlier, using EFI, and running Windows Server 2016 on AMD processors, might stop responding during reboot. The issue does not occur if a virtual machine uses BIOS, or its hardware version is 11 or later, or the guest OS is not Windows, or the processors are Intel.
This issue is resolved in this release.
- PR 2204028: An ESXi host might fail after repeated attempts to re-register character device with the same name
If you attempt to re-register a character device with the same name, you might see a warning similar to the following:
2018-09-14T04:49:27.441Z cpu33:4986380)WARNING: CharDriver: 291: Driver with name XXXX is already using slot XX
Heap memory needed to register the device does not get freed for a duplicate attempt. This causes a memory leak, and over time the host might fail with a purple diagnostic screen.
This issue is resolved in this release.
- PR 1847663: The NMP SATP module might fail to get changed parameters while loading
When loading the NMP SATP module after a system reboot, the vmk_ModuleLoad() method might fail to read changed module specific parameters. The issue affects mostly third-party drivers.
This issue is resolved in this release.
- PR 2031499: Disk rebalance might not work as expected in a stretched cluster if disk usage across sites is uneven
vSAN uses disk fullness value across all disks in the cluster to calculate the mean fullness value. This value is used during the rebalance operation to ensure moving data does not cause an imbalance on the destination disk. If you have a vSAN stretched cluster, storage utilization can be different across sites. In such cases, rebalance might not work as expected if the mean fullness value on one site is higher than the other site.
This issue is resolved in this release.
- PR 2128759: Virtual machine directory remains in your NSX-T environment after the virtual machine is deleted
When you delete a virtual machine, the virtual machine directory might remain in your NSX-T environment due to the wrong order in which the port files in the .dvsData folder are deleted.
This issue is resolved in this release.
- PR 2187127: Performance of a VXLAN environment might degrade if IPFIX has a high sampling rate and traffic is heavy
When you enable IPFIX with a high sampling rate and traffic is heavy, you might see degraded performance of a VXLAN environment.
This issue is resolved in this release. Use lower sampling rates to optimize performance.
- PR 2139131: An ESXi host might fail with a purple diagnostic screen while shutting down
An ESXi host might fail with a purple diagnostic screen due to a race condition in a query by the Multicast Listener Discovery (MLD) version 1 in IPv6 environments. You might see an error message similar to:
#PF Exception 14 in world 2098376:vmk0-rx-0 IP 0x41802e62abc1 addr 0x40
...
0x451a1b81b9d0:[0x41802e62abc1]mld_set_version@(tcpip4)#+0x161 stack: 0x430e0593c9e8
0x451a1b81ba20:[0x41802e62bb57]mld_input@(tcpip4)#+0x7fc stack: 0x30
0x451a1b81bb20:[0x41802e60d7f8]icmp6_input@(tcpip4)#+0xbe1 stack: 0x30
0x451a1b81bcf0:[0x41802e621d3b]ip6_input@(tcpip4)#+0x770 stack: 0x451a00000000
This issue is resolved in this release.
- PR 2133634: An ESXi host might fail with a purple diagnostic screen during migration of clustered virtual machines by using vSphere vMotion
An ESXi host might fail with a purple diagnostic screen during migration of clustered virtual machines by using vSphere vMotion. The issue affects virtual machines in clusters containing shared non-RDM disk, such as VMDK or vSphere Virtual Volumes, in a physical bus sharing mode.
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the ne1000 VIB.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the vmkusb VIB.
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2096875 |
CVE numbers | N/A |
This patch updates the lsu-lsi-lsi-mr3-plugin VIB to resolve the following issue:
- PR 2096875: ESXi hosts and hostd might stop responding due to a memory allocation issue in the lsu-lsi-lsi-mr3-plugin
ESXi hosts and hostd might stop responding due to a memory allocation issue in the vSAN disk serviceability plug-in lsu-lsi-lsi-mr3-plugin. You might see the error messages Out of memory or Error cloning thread.
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2151342 |
CVE numbers | N/A |
This patch updates the ntg3 VIB to resolve the following issue:
- PR 2151342: Oversize packets might cause NICs using the ntg3 driver to temporarily stop sending packets
On rare occasions, NICs using the ntg3 driver, such as the Broadcom BCM5719 and 5720 GbE NICs, might temporarily stop sending packets after a failed attempt to send an oversize packet. The ntg3 driver version 4.1.3.2 resolves this problem.
This issue is resolved in this release.
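To check which ntg3 driver version a NIC is currently using, you can run a command similar to the following; vmnic0 is an illustrative NIC name:
# Display driver name and version information for the NIC
esxcli network nic get -n vmnic0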
Patch Category | Bugfix |
Patch Severity | Moderate |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2064111 |
CVE numbers | N/A |
This patch updates the lsu-hp-hpsa-plugin VIB to resolve the following issue:
- PR 2064111: VMware vSAN with HPE ProLiant Gen9 Smart Array Controllers might not light locator LEDs on the correct disk
In a vSAN cluster with HPE ProLiant Gen9 Smart Array Controllers, such as P440 and P840, the locator LEDs might not be lit on the correct failed device.
This issue is resolved in this release.
Patch Category | Security |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2094558, 2169094, 2020984, 2025909, 2109023, 2189347, 2154394 |
CVE numbers | N/A |
This patch updates the esx-base, esx-tboot, vsan, and vsanhealth VIBs to resolve the following issues:
The Windows pre-Vista ISO image for VMware Tools is no longer packaged with ESXi. The Windows pre-Vista ISO image is available for download by users who require it. For download information, see the Product Download page.
- Update the NTP daemon to 4.2.8p12
The NTP daemon is updated to 4.2.8p12.
- PR 2154394: Encrypted vSphere vMotion might fail due to insufficient migration heap space
For large virtual machines, Encrypted vSphere vMotion might fail due to insufficient migration heap space.
This issue is resolved in this release.
- Update to the Python package
The Python package is updated to version 3.5.5.
- Update to the libxml2 library
The ESXi userworld libxml2 library is updated to version 2.9.8.
- Update to the OpenSSL package
The OpenSSL package is updated to version 1.0.2p.
- Update to the OpenSSH package
The OpenSSH package is updated to version 7.7p1.
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the esxi-ui VIB.
Patch Category | Security |
Patch Severity | Moderate |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the tools-light VIB.
Profile Name | ESXi-6.5.0-20181104001-standard |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | November 29, 2018 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 1847663 , 1863313 , 1894568 , 1949789 , 2021943 , 2022147 , 2029579 , 2037849 , 2037926 , 2057600 , 2063154 , 2071506 , 2071610 , 2071817 , 2072971 , 2073175 , 2078843 , 2079002 , 2083594 , 2083627 , 2084723 , 2088951 , 2089048 , 2096942 , 2097358 , 2098928 , 2102135 , 2102137 , 2107335 , 2113432 , 2119609 , 2122259 , 2122523 , 2128932 , 2130371 , 2133589 , 2136002 , 2139317 , 2139940 , 2142767 , 2144766 , 2152381 , 2153867 , 2154912 , 2187127, 2128759, 2187136, 2155840 , 2155858 , 2156840 , 2157503 , 2158561 , 2164733 , 2167294 , 2167877 , 2170126 , 2171799 , 2173856 , 2179262 , 2180962 , 2182211 , 2186065 , 2191349 , 2192629 , 2192836 , 2193829 , 2194304 , 2197789 , 2203385 , 2203836 , 2204024 , 2204028 , 2204507 , 2209900 , 2209919 , 2211285 , 2211639 , 2213917 , 2225439 , 2225471, 2096875, 2151342, 2064111 |
Related CVE numbers | N/A |
- This patch resolves the following issues:
-
Upgrade to ESXi 6.5 and later using vSphere Update Manager might fail due to an issue with the libparted open source library. You might see the following backtrace:
[root@hidrogenio07:~] partedUtil getptbl /vmfs/devices/disks/naa.60000970000592600166533031453135
Backtrace has 12 calls on stack:
12: /lib/libparted.so.0(ped_assert+0x2a) [0x9e524ea]
11: /lib/libparted.so.0(ped_geometry_read+0x117) [0x9e5be77]
10: /lib/libparted.so.0(ped_geometry_read_alloc+0x75) [0x9e5bf45]
9: /lib/libparted.so.0(nilfs2_probe+0xb5) [0x9e82fe5]
8: /lib/libparted.so.0(ped_file_system_probe_specific+0x5e) [0x9e53efe]
7: /lib/libparted.so.0(ped_file_system_probe+0x69) [0x9e54009]
6: /lib/libparted.so.0(+0x4a064) [0x9e8f064]
5: /lib/libparted.so.0(ped_disk_new+0x67) [0x9e5a407]
4: partedUtil() [0x804b309]
3: partedUtil(main+0x79e) [0x8049e6e]
2: /lib/libc.so.6(__libc_start_main+0xe7) [0x9edbb67]
1: partedUtil() [0x804ab4d] Aborted
-
SolidFire arrays might not get optimal performance without reconfiguration of SATP claim rules.
-
Claim rules must be manually added to ESXi for the following DELL MD Storage array models: MD32xx, MD32xxi, MD36xxi, MD36xxf, MD34xx, MD38xxf, and MD38xxi.
-
An ESXi host might fail with a purple diagnostic screen during shutdown or power off of a virtual machine if you use EMC RecoverPoint because of a race condition in the vSCSI filter tool.
-
During a non-disruptive upgrade, some paths might enter a permanent device loss (PDL) state. Such paths remain unavailable even after the upgrade. As a result, the device might lose connectivity.
-
ESXi hosts might fail with a purple diagnostic screen while removing a page from the page cache due to a corrupt data structure. You might see the following backtrace:
2018-01-01T04:02:47.859Z cpu13:33232)Backtrace for current CPU #13, worldID=33232, rbp=0xd
2018-01-01T04:02:47.859Z cpu13:33232)0x43914e81b720:[0x41800501883b]PageCacheRemoveFirstPageLocked@vmkernel#nover+0x2f stack: 0x4305dd30
2018-01-01T04:02:47.859Z cpu13:33232)0x43914e81b740:[0x418005019144]PageCacheAdjustSize@vmkernel#nover+0x260 stack: 0x0, 0x3cb418317a909
2018-01-01T04:02:47.859Z cpu13:33232)0x43914e81bfd0:[0x41800521746e]CpuSched_StartWorld@vmkernel#nover+0xa2 stack: 0x0, 0x0, 0x0, 0x0, 0
2018-01-01T04:02:47.877Z cpu13:33232)^[[45m^[[33;1mVMware ESXi 6.0.0 [Releasebuild-6921384 x86_64]^[[0m
#GP Exception 13 in world 33232:memMap-13 @ 0x41800501883b -
Claim rules must be manually added to the ESXi host for HITACHI OPEN-V storage arrays.
-
If you manually add settings to the NTP configuration, they might be deleted from the ntp.conf file if you update NTP by using the vSphere Web Client. With this fix, NTP updates preserve all settings, including restrict options, the driftfile, and manually added servers, in the ntp.conf file. If you manually modify the ntp.conf file, you must restart hostd to propagate the updates.
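As a rough sketch, a manually maintained ntp.conf might contain entries such as the following, where the restrict options, drift file path, and server names are illustrative assumptions, and hostd can then be restarted from the ESXi shell:
restrict default nomodify notrap nopeer noquery
driftfile /etc/ntp.drift
server 0.pool.ntp.org
server 1.pool.ntp.org
/etc/init.d/hostd restart
-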
Stale tickets, which are not deleted before hostd generates a new ticket, might exhaust RAM disks nodes. This might cause ESXi hosts to stop responding.
-
You might be unable to use a connected USB device as a passthrough to migrate virtual machines by using vSphere vMotion due to a redundant condition check.
-
If a virtual machine is on a VMFSsparse snapshot, I/Os issued to the virtual machine might be processed only partially at VMFSsparse level, but upper layers, such as I/O Filters, could presume the transfer is successful. This might lead to data inconsistency.
-
ESXi hosts might not reflect the
MAXIMUM TRANSFER LENGTH
parameter reported by the SCSI device in the Block Limits VPD page. As a result, I/O commands issued with a transfer size greater than the limit might fail with a similar log:2017-01-24T12:09:40.065Z cpu6:1002438588)ScsiDeviceIO: SCSICompleteDeviceCommand:3033: Cmd(0x45a6816299c0) 0x2a, CmdSN 0x19d13f from world 1001390153 to dev "naa.514f0c5d38200035" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
-
The esxtop command-line utility might not display an updated value of queue depth of devices if the corresponding device path queue depth changes.
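For reference, two ways to check the effective queue depth of a device from the ESXi shell (a sketch; the device identifier is a placeholder): in interactive esxtop, press u for the disk device view and read the DQLEN column, or run:
esxcli storage core device list -d naa.xxx | grep -i "queue depth"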
-
If you attempt to PXE boot a virtual machine that uses EFI firmware, with a vmxnet3 network adapter and WDS, and you have not disabled the variable Windows extension option in WDS, the virtual machine might boot extremely slowly.
-
Due to issues with Apple USB, devices with an iOS version later than iOS 11 cannot connect to virtual machines with an OS X version later than OS X 10.12.
-
IDs for sending vmware.log data to the Syslog, defined with
vmx.log.syslogID
, might not work as expected because the string specified in the variable is ignored. -
ESXi hosts might intermittently fail if you disable global IPv6 addresses, because a code path still uses IPv6.
-
In very rare situations, when an HTTP request is made during hostd initialization, the agent might fail.
-
Hostd might fail when certain invalid parameters are present in the
.vmx
file of a virtual machine. For example, manually adding a parameter such asideX:Y.present
with valueTrue
to the.vmx
file. -
When you have hosts of different ESXi versions in a cluster of Linux virtual machines, and the Enhanced vMotion Compatibility (EVC) level is set to L0 or L1 for Merom and Penryn processors, the virtual machines might stop responding if you attempt migration by using vSphere vMotion from an ESXi 6.0 host to a host with a newer ESXi version.
-
An ESXi host might become unresponsive because datastore heartbeating might stop prematurely while closing VMware vSphere VMFS6. As a result, the affinity manager cannot exit gracefully.
-
Claim rules must be manually added to ESXi for Tegile IntelliFlash storage arrays.
-
The following two VOB events might be generated due to variations in the I/O latency in a storage array, but they do not report an actual problem in virtual machines:
Device naa.xxx performance has deteriorated. I/O latency increased from average value of 4114 microseconds to 84518 microseconds.
Device naa.xxx performance has improved. I/O latency reduced from 346115 microseconds to 67046 microseconds.
-
If you use vSphere vMotion to migrate a virtual machine with file device filters from a vSphere Virtual Volumes datastore to another host, and the virtual machine has any of Changed Block Tracking (CBT), VMware vSphere Flash Read Cache (VFRC), or IO filters enabled, the migration might cause issues with any of these features. During the migration, the file device filters might not be correctly transferred to the host. As a result, you might see corrupted incremental backups in CBT, performance degradation of VFRC and cache IO filters, corrupted replication IO filters, and disk corruption when cache IO filters are configured in write-back mode. You might also see issues with the virtual machine encryption.
-
If you use VMware vSphere APIs for Storage Awareness provider, you might see multiple
getTaskUpdate
calls to cancelled or deleted tasks. As a result, you might see a higher consumption of vSphere API for Storage Awareness bandwidth and a log spew. -
vSphere Virtual Volumes might become unresponsive due to an infinite loop that loads CPUs at 100% if a vSphere API for Storage Awareness provider loses binding information from its database. Hostd might also stop responding. You might see a fatal error message. This fix prevents infinite loops in case of database binding failures.
-
If the logging resource group of an ESXi host is under stress due to heavy logging, hostd might become unresponsive while running
esxcfg-syslog
commands. Since this fix moves theesxcfg-syslog
related commands from the logging resource group to the ESXi shell, you might see higher usage of memory under the shell default resource group. -
Disabled IPMI sensors, or sensors that do not report any data, might generate false hardware health alarms.
-
If you unregister a vSphere API for Storage Awareness provider in a disconnected state, you might see a
java.lang.NullPointerException
error. Event and alarm managers are not initialized when a provider is in a disconnected state, which causes the error. -
A VMFS6 datastore might report out of space incorrectly due to stale cache entries.
-
If a virtual machine has snapshots taken by using CBRC and later CBRC is disabled, disk consolidation operations might fail with an error
A specified parameter was not correct: spec.deviceChange.device
due to a deleted digest file after CBRC was disabled. An alertVirtual machine disks consolidation is needed.
is displayed until the issue is resolved. -
OMIVV relies on information from the iDRAC property
hardware.systemInfo.otherIdentifyingInfo.ServiceTag
to fetch theSerialNumber
parameter for identifying some Dell modular servers. A mismatch in theserviceTag
property might fail this integration. -
Previously, if a component such as a processor or a fan was missing from a vCenter Server system, the Presence Sensors displayed a status Unknown. However, the Presence Sensors do not have health state associated with them.
-
If the target connected to an ESXi host supports only implicit ALUA and has only standby paths, the host might fail with a purple diagnostic screen at the time of device discovery due to a race condition. You might see a backtrace similar to:
SCSIGetMultipleDeviceCommands (vmkDevice=0x0, result=0x451a0fc1be98, maxCommands=1, pluginDone=<optimized out>) at bora/vmkernel/storage/device/scsi_device_io.c:2476
0x00004180171688bc in vmk_ScsiGetNextDeviceCommand (vmkDevice=0x0) at bora/vmkernel/storage/device/scsi_device_io.c:2735
0x0000418017d8026c in nmp_DeviceStartLoop (device=device@entry=0x43048fc37350) at bora/modules/vmkernel/nmp/nmp_core.c:732
0x0000418017d8045d in nmpDeviceStartFromTimer (data=..., data@entry=...) at bora/modules/vmkernel/nmp/nmp_core.c:807 -
If an ESXi host has no available VMware Tools ISO image and you try to upgrade VMware Tools on an active virtual machine, the operation might fail with
VIX error code 21000
. You might not be able to upgrade VMware Tools by using either API, the vSphere Client, or vSphere Web Client, even after an ISO image is available. This is because the ESXi host caches the first availability check at VM power on and does not update it. -
Logs of the hostd service and the Syslog might collect unnecessary debug exception logs across all ESXi hosts in a cluster.
In the hostd log, messages are similar to:
No match, sensor_health file missing sensorNumer 217: class 3 sensor type 35 offset 0
In the
syslog
,/etc/sfcb/omc/sensor_health,
messages are similar to:
Missing expected value to check for
-
In an ESXi configuration with multiple paths leading to LUNs behind IBM SVC targets, if the active paths lose connection and, at the same time, the remaining connected paths are not in a state to service I/Os, the ESXi host might not detect this condition as all paths down (APD), even though no paths are actually available to service I/Os. As a result, I/Os to the device are not fast-failed.
-
If you customize the Rx and Tx ring sizes of physical NICs to boost network performance, by using the following commands:
esxcli network nic ring current set -n <vmnicX> -t <value>
esxcli network nic ring current set -n <vmnicX> -r <value>
the settings might not be persistent across ESXi host reboots.
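As a quick check, the configured and supported ring sizes can be read back with the following commands (a sketch, assuming the vmnic name used above):
esxcli network nic ring current get -n <vmnicX>
esxcli network nic ring preset get -n <vmnicX>
-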
A large number of I/O timeouts on a heavily loaded system might cause a soft lockup of physical CPUs. As a result, an ESXi host might fail with a purple diagnostic screen.
-
You might see poor performance of long distance vSphere vMotion operations at high latency, such as 100 ms and higher, with high speed network links, such as 10 GbE, due to the hard-coded socket buffer limit of 16 MB. With this fix, you can configure the max socket buffer size parameter
SB_MAX_ADJ
. -
Due to a race condition, the virtual machine executable process might fail and shut down virtual machines during a reboot of the guest OS.
-
You might see multiple messages such as
sfcb-vmware_raw[69194]: IpmiIfcFruGetInv: Failed during send cc = 0xc9
in thesyslog.log
. These are regular information logs, not error or warning logs. -
After taking a successful quiesced snapshot of a Linux virtual machine, the Snapshot Manager might still display the snapshot as not quiesced.
-
In release builds, smartd might generate a lot of debugging and info messages into the Syslog service logs.
-
Invalid commands to a virtual SATA CD-ROM might trigger errors and increase the memory usage of virtual machines. This might lead to a failure of virtual machines if they are unable to allocate memory.
You might see the following logs for invalid commands:
YYYY-MM-DDTHH:MM:SS.mmmZ| vcpu-1| I125: AHCI-USER: Invalid size in command
YYYY-MM-DDTHH:MM:SS.mmmZ| vcpu-1| I125: AHCI-VMM: sata0:0: Halted the port due to command abort.
And a similar panic message:
YYYY-MM-DDTHH:MM:SS.mmmZ| vcpu-1| E105: PANIC: Unrecoverable memory allocation failure
-
When you select the Physical Location column from the Disk Management table on the Cluster Configuration Dashboard of the vSphere Web Client, the column might remain empty and display only information about the hpsa driver.
-
Some critical flash device parameters, including Temperature and Reallocated Sector Count, do not provide threshold values. As a result, the ESXi host smartd daemon might report some warnings.
-
The hostd service might stop responding while the external process esxcfg-syslog remains stuck. As a result, the ESXi host might become unresponsive.
-
When you enable IPFIX and traffic is heavy with different flows, the system heartbeat might fail to preempt CPU from IPFIX for a long time and trigger a purple diagnostic screen.
-
By default, the Isolation Response feature in an ESXi host with enabled vSphere HA is disabled. When the Isolation Response feature is enabled, a virtual machine port, which is connected to an NSX-T logical switch, might be blocked after a vSphere HA failover.
-
If the vSphere Network I/O Control and queue pairing of NICs are disabled, you might see out of order packets with infrastructure traffic, such as management, iSCSI, and NFS. This is because when vSphere Network I/O Control is not enabled, queue pairing is also disabled, and NICs work with multiple queues.
-
Drives that do not support Block Limits VPD page
0xb0
might generate event code logs that flood thevmkernel.log
. -
When you replicate virtual machines by using VMware vSphere Replication, the ESXi host might fail with a purple diagnostic screen immediately or within 24 hours and report the error:
PANIC bora/vmkernel/main/dlmalloc.c:4924 - Usage error in dlmalloc
. -
Virtual machines might become unresponsive due to repetitive failures in some third party device drivers to process commands. You might see the following error when opening the virtual machine console:
Error: "Unable to connect to the MKS: Could not connect to pipe \\.\pipe\vmware-authdpipe within retry period"
-
If the Guest Introspection service in a vSphere 6.5 environment with more than 150 virtual machines is active, migration of virtual machines by using vSphere vMotion might fail with an error in the vSphere Web Client similar to
The source detected that the destination failed to resume.
The destination vmware.log contains error messages similar to:
2018-07-18T02:41:32.035Z| vmx| I125: MigrateSetState: Transitioning from state 11 to 12.
2018-07-18T02:41:32.035Z| vmx| I125: Migrate: Caching migration error message list:
2018-07-18T02:41:32.035Z| vmx| I125: [msg.checkpoint.migration.failedReceive] Failed to receive migration.
2018-07-18T02:41:32.035Z| vmx| I125: [msg.namespaceDb.badVersion] Incompatible version -1 (expect 2).
2018-07-18T02:41:32.035Z| vmx| I125: [msg.checkpoint.mrestoregroup.failed] An error occurred restoring the virtual machine state during migration.The vmkernel log has error messages such as:
2018-07-18T02:32:43.011Z cpu5:66134)WARNING: Heap: 3534: Heap fcntlHeap-1 already at its maximum size. Cannot expand.
2018-07-18T02:41:35.613Z cpu2:66134)WARNING: Heap: 4169: Heap_Align(fcntlHeap-1, 200/200 bytes, 8 align) failed. caller: 0x41800aaca9a3 -
If an NFS datastore is configured as a Syslog datastore and an ESXi host disconnects from it, logging to the datastore stops and the ESXi host might become unresponsive.
-
This fix sets SATP to
VMW_SATP_ALUA
, PSP toVMW_PSP_RR
and Claim Options totpgs_on
as default for Lenovo DE series storage arrays. -
In a vSAN cluster with deduplication enabled, I/Os destined for a capacity disk might be re-routed to another disk. The new destination device might suffer a Permanent Device Loss (PDL). If I/O requests to the failed disk arrive during this interval, the host might fail with a purple diagnostic screen.
-
If you manually delete from an ESXi host the support bundle folder of a virtual machine, downloaded in the
/scratch/downloads
directory of the host, hostd might fail when it automatically tries to delete folders in this path one hour after their creation. -
Due to a race condition, a cache entry that is already evicted might be evicted again. This leads to a null pointer dereference that causes the ESXi host to fail.
-
During the migration or upgrade to vCenter Server Appliance 6.5, some deployment sizes are not available for selection in the information table if the required size of your vCenter Server Appliance is greater than the threshold for that deployment size.
-
The NIOC hClock scheduler might reset the uplink network device when the uplink is periodically used, and the reset is non-predictable.
-
If you enable the Stateful Install feature on a host profile, and the management VMkernel NIC is connected to a distributed virtual switch, applying the host profile to another ESXi 6.5 host by using vSphere Auto Deploy might fail during a PXE boot. The host remains in maintenance mode.
-
Power cycling of a virtual machine with SR-IOV vNICs by using scripts might cause the ESXi host to fail with a purple diagnostic screen.
-
If an entity, such as a virtual machine or a datastore, is no longer in the stats database of an ESXi host, but vim.PerformanceManager issues a request for performance data for this entity, it is possible to hit a code path that fails the host agent process. As a result, the host might become temporarily unavailable to the vCenter Server system.
-
If you enable the NetFlow network analysis tool to sample every packet on a vSphere Distributed Switch port group, by setting the Sampling rate to 0, the network latency might reach 1000 ms in case flows exceed 1 million.
-
A virtual machine that does hundreds of disk hot add or remove operations without powering off or migrating might be terminated and become invalid. This affects backup solutions where the backup proxy virtual machine might be terminated.
In the hostd log, you might see content similar to:
2018-06-08T10:33:14.150Z info hostd[15A03B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datatore/path/to/proxyVM/proxyVM.vmx opID=5953cf5e-3-a90a user=vpxuser:ADMIN\admin] State Transition (VM_STATE_ON -> VM_STATE_RECONFIGURING) ...
2018-06-08T10:33:14.167Z error hostd[15640B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datastore/path/to/proxyVM/proxyVM.vmx opID=5953cf5e-3-a90a user=vpxuser:ADMIN\admin] Could not apply pre-reconfigure domain changes: Failed to add file policies to domain :171: world ID :0:Cannot allocate memory ...
2018-06-08T10:33:14.826Z info hostd[15640B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datastore/path/to/proxyVM/proxyVM.vmx opID=5953cf5e-3-a90a user=vpxuser:ADMIN\admin] State Transition (VM_STATE_RECONFIGURING -> VM_STATE_ON) ...
2018-06-08T10:35:53.120Z error hostd[15A44B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datastore/path/to/proxyVM/proxyVM.vmx] Expected permission (3) for /vmfs/volumes/path/to/backupVM not found in domain 171In the vmkernel log, the content is similar to:
2018-06-08T05:40:53.264Z cpu49:68763 opID=4c6a367c)World: 12235: VC opID 5953cf5e-3-a90a maps to vmkernel opID 4c6a367c
2018-06-08T05:40:53.264Z cpu49:68763 opID=4c6a367c)WARNING: Heap: 3534: Heap domainHeap-54 already at its maximum size. Cannot expand. -
When powering off the last virtual machine on an ESXi host, if this virtual machine is using the sysContexts configuration, the scheduler might not remove the exclusive affinity set by the sysContexts configuration.
-
The SNMP agent might deliver an incorrect trap
vmwVmPoweredOn
when a virtual machine is selected under the Summary tab and the Snapshot Manager tab in the vSphere Web Client. -
Advanced performance charts might stop drawing graphs for some virtual machine statistics after a restart of hostd due to a division by zero error in some counters.
-
The NSX opsAgent might fail and you might see the core dump file located at
/var/core
when you deploy NSX-T appliance on an ESXi host. The failure is due to a race condition in the library code which is provided by the host. -
If you configure the
/productLocker
directory on a shared VMFS datastore, when you migrate a virtual machine by using vSphere vMotion, VMware Tools on the virtual machine might display an incorrect status Unsupported.
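For context, the product locker path is typically redirected to a shared datastore through the UserVars.ProductLockerLocation advanced option; a minimal sketch of setting it from the ESXi shell, with an assumed datastore path:
esxcli system settings advanced set -o /UserVars/ProductLockerLocation -s "/vmfs/volumes/shared-datastore/productLocker"
-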
The
MSINFO32
command might not display installed physical RAM on virtual machines running Windows 7 or Windows Server 2008 R2. If two or more vCPUs are assigned to a virtual machine, the Installed Physical RAM field displays not available
. -
In some cases, in which unaligned overlapping I/Os occur, the vSAN stack two phase commit engine's commit scheduler might not run immediately. This delay can add latency to I/O operations.
-
OSFS_Readdir API populates directory entries limited by the buffer size provided, and always returns End Of File (EOF) as Yes. You might have to call FSSReadDir multiple times to read the entire directory.
-
While upgrading a vSAN stretched cluster from ESXi 6.0 to 6.5, a host might fail with a purple diagnostic screen. The following stack trace identifies this problem.
#0 DOMUtil_HashFromUUID
#1 DOMServer_GetServerIndexFromUUID
#2 DOMOwnerGetRdtMuxGroupInt
#3 DOMOwner_GetRdtMuxGroupUseNumServers
#4 DOMAnchorObjectGetOwnerVersionAndRdtMuxGroup
#5 DOMAnchorObjectCreateResolverAndSetMuxGroup
#6 DOMObject_InitServerAssociation
#7 DOMAnchorObjectInitAssociationToProxyOwnerStartTask
#8 DOMOperationStartTask
#9 DOMOperationDispatch
#10 VSANServerExecuteOperation
#11 VSANServerMainLoop -
In some cases, vSAN takes a long time to mark a disk as degraded, even though I/O failures are reported by the disk and vSAN has stopped servicing any further I/Os from that disk.
-
Fans on Dell R730 servers might not appear under the Fan section in the Hardware Status tab of the vSphere Client or the vSphere Web Client, but in other sections, or not display at all.
-
After upgrading vCenter Server to 6.5 Update 2, the vSAN capacity monitor does not include the capacity for virtual machines deployed in ESXi hosts with versions older than 6.5 Update 2. The monitor might show less used capacity than the amount actually used in the cluster.
-
Component congestion, originating from the logging layer of vSAN, might lower guest I/O bandwidth due to internal I/O throttling by vSAN. Large I/Os that get divided are not properly leveraged for faster destaging. This issue results in log buildup, which can cause congestion. You might notice a drop in IOPS when entering maintenance mode.
-
An ESXi host might fail with a purple diagnostic screen due to a race condition in a query by the Multicast Listener Discovery (MLD) version 1 in IPv6 environments. You might see an error message similar to:
#PF Exception 14 in world 2098376:vmk0-rx-0 IP 0x41802e62abc1 addr 0x40
...
0x451a1b81b9d0:[0x41802e62abc1]mld_set_version@(tcpip4)#+0x161 stack: 0x430e0593c9e8
0x451a1b81ba20:[0x41802e62bb57]mld_input@(tcpip4)#+0x7fc stack: 0x30
0x451a1b81bb20:[0x41802e60d7f8]icmp6_input@(tcpip4)#+0xbe1 stack: 0x30
0x451a1b81bcf0:[0x41802e621d3b]ip6_input@(tcpip4)#+0x770 stack: 0x451a00000000 -
Connectivity to NFS datastores might be lost if you name a new datastore with the old name of a renamed existing datastore. For example, if you rename an existing datastore from NFS-01 to NFS-01-renamed and then create a new NFS datastore with the NFS-01 name. As a result, the ESXi host loses connectivity to the renamed datastore and fails to mount the new one.
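For reference, a minimal sketch of removing and mounting an NFS datastore from the ESXi shell; the host, share, and volume names are illustrative assumptions:
esxcli storage nfs list
esxcli storage nfs remove -v NFS-01
esxcli storage nfs add -H nfs01.example.com -s /export/vols/nfs01 -v NFS-01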
-
If you disable IPv4 on an ESXi host, you might not be able to set IPv6 hostnames, because the system requires an IPv4 address.
-
The number of generated syslog.log archive files might be one less than the default value set using the syslog.global.defaultRotate parameter. The number of syslog.log archive files might also be different between ESXi versions 6.0 and 6.5. For instance, if syslog.global.defaultRotate is set to 8 by default, ESXi 6.0 creates syslog.0.gz to syslog.7.gz, while ESXi 6.5 creates syslog.0.gz to syslog.6.gz.
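As a sketch, the rotation count can be inspected and adjusted from the ESXi shell, for example:
esxcli system syslog config get | grep -i rotate
esxcli system syslog config set --default-rotate=8
esxcli system syslog reload
-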
When an object is expanded, a concatenation is added to the replicas to support the increased size requirement. Over a period of time, as vSAN creates new replicas to include the concatenation, the original replicas and concatenation are discarded. As vSAN merges the original replica and concatenation into a new replica, the operation might fail if vSAN does not have enough space to put the new components. If the disk becomes imbalanced, and all components on the disk are under concat nodes, the rebalance operation cannot move any components to balance the disk.
-
You cannot expand a VMDK while also changing the storage policy of the VMDK, but expansion might still fail without any change in storage policy if the existing policy has set the Data locality rule.
-
The upper Reserved Memory Range Reporting limit might not be correctly computed for virtual machines running on AMD based hosts. This might prevent virtual machines using the VMware DirectPath I/O feature from powering on.
-
An error in the NMP device quiesce code path might lead to calls to unlock an already released lock, which might cause an ESXi host to fail with a purple diagnostic screen.
-
Virtual machines with hardware version 10 or earlier, using EFI, and running Windows Server 2016 on AMD processors, might stop responding during reboot. The issue does not occur if a virtual machine uses BIOS, or its hardware version is 11 or later, or the guest OS is not Windows, or the processors are Intel.
-
If you attempt to re-register a character device with the same name, you might see a warning similar to the following:
2018-09-14T04:49:27.441Z cpu33:4986380)WARNING: CharDriver: 291: Driver with name XXXX is already using slot XX
Heap memory needed to register the device does not get freed for a duplicate attempt. This causes a memory leak, and over time the host might fail with a purple diagnostic screen. -
When loading the NMP SATP module after a system reboot, the
vmk_ModuleLoad()
method might fail to read changed module specific parameters. The issue affects mostly third-party drivers. -
vSAN uses disk fullness value across all disks in the cluster to calculate the mean fullness value. This value is used during the rebalance operation to ensure moving data does not cause an imbalance on the destination disk. If you have a vSAN stretched cluster, storage utilization can be different across sites. In such cases, rebalance might not work as expected if the mean fullness value on one site is higher than the other site.
-
ESXi hosts and hostd might stop responding due to a memory allocation issue in the vSAN disk serviceability plugin lsu-lsi-lsi-mr3-plug-in. You might see the error messages
Out of memory
orError cloning thread
. -
On rare occasions, NICs using the ntg3 driver, such as the Broadcom BCM5719 and 5720 GbE NICs, might temporarily stop sending packets after a failed attempt to send an oversize packet. The ntg3 driver version 4.1.3.2 resolves this problem.
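To confirm which driver and firmware a NIC is currently using, a quick check from the ESXi shell (assuming vmnic0 is the Broadcom port in question):
esxcli network nic get -n vmnic0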
-
In a vSAN cluster with HPE ProLiant Gen9 Smart Array Controllers, such as P440 and P840, the locator LEDs might not be lit on the correct failed device.
-
When you delete a virtual machine, the virtual machine directory might remain in your NSX-T environment due to the wrong order in which the port files in the
.dvsData
folder are deleted. -
When you enable IPFIX with a high sampling rate and traffic is heavy, you might see degraded performance of a VXLAN environment.
-
Profile Name | ESXi-6.5.0-20181104001-no-tools |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | November 29, 2018 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 1847663 , 1863313 , 1894568 , 1949789 , 2021943 , 2022147 , 2029579 , 2037849 , 2037926 , 2057600 , 2063154 , 2071506 , 2071610 , 2071817 , 2072971 , 2073175 , 2078843 , 2079002 , 2083594 , 2083627 , 2084723 , 2088951 , 2089048 , 2096942 , 2097358 , 2098928 , 2102135 , 2102137 , 2107335 , 2113432 , 2119609 , 2122259 , 2122523 , 2128932 , 2130371 , 2133589 , 2136002 , 2139317 , 2139940 , 2142767 , 2144766 , 2152381 , 2153867 , 2154912 , 2187136, 2155840 , 2155858 , 2156840 , 2157503 ,2187127, 2128759, 2158561 , 2164733 , 2167294 , 2167877 , 2170126 , 2171799 , 2173856 , 2179262 , 2180962 , 2182211 , 2186065 , 2191349 , 2192629 , 2192836 , 2193829 , 2194304 , 2197789 , 2203385 , 2203836 , 2204024 , 2204028 , 2204507 , 2209900 , 2209919 , 2211285 , 2211639 , 2213917 , 2225439 , 2225471, 2096875, 2151342, 2064111 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
Upgrade to ESXi 6.5 and later using vSphere Update Manager might fail due to an issue with the libparted open source library. You might see the following backtrace:
[root@hidrogenio07:~] partedUtil getptbl /vmfs/devices/disks/naa.60000970000592600166533031453135
Backtrace has 12 calls on stack:
12: /lib/libparted.so.0(ped_assert+0x2a) [0x9e524ea]
11: /lib/libparted.so.0(ped_geometry_read+0x117) [0x9e5be77]
10: /lib/libparted.so.0(ped_geometry_read_alloc+0x75) [0x9e5bf45]
9: /lib/libparted.so.0(nilfs2_probe+0xb5) [0x9e82fe5]
8: /lib/libparted.so.0(ped_file_system_probe_specific+0x5e) [0x9e53efe]
7: /lib/libparted.so.0(ped_file_system_probe+0x69) [0x9e54009]
6: /lib/libparted.so.0(+0x4a064) [0x9e8f064]
5: /lib/libparted.so.0(ped_disk_new+0x67) [0x9e5a407]
4: partedUtil() [0x804b309]
3: partedUtil(main+0x79e) [0x8049e6e]
2: /lib/libc.so.6(__libc_start_main+0xe7) [0x9edbb67]
1: partedUtil() [0x804ab4d] Aborted -
SolidFire arrays might not get optimal performance without reconfiguration of SATP claim rules.
-
Claim rules must be manually added to ESXi for the following DELL MD Storage array models: MD32xx, MD32xxi, MD36xxi, MD36xxf, MD34xx, MD38xxf, and MD38xxi.
-
An ESXi host might fail with a purple diagnostic screen during shutdown or power off of a virtual machine if you use EMC RecoverPoint because of a race condition in the vSCSI filter tool.
-
During a non-disruptive upgrade, some paths might enter a permanent device loss (PDL) state. Such paths remain unavailable even after the upgrade. As a result, the device might lose connectivity.
-
ESXi hosts might fail with a purple diagnostic screen while removing a page from the page cache due to a corrupt data structure. You might see the following backtrace:
2018-01-01T04:02:47.859Z cpu13:33232)Backtrace for current CPU #13, worldID=33232, rbp=0xd
2018-01-01T04:02:47.859Z cpu13:33232)0x43914e81b720:[0x41800501883b]PageCacheRemoveFirstPageLocked@vmkernel#nover+0x2f stack: 0x4305dd30
2018-01-01T04:02:47.859Z cpu13:33232)0x43914e81b740:[0x418005019144]PageCacheAdjustSize@vmkernel#nover+0x260 stack: 0x0, 0x3cb418317a909
2018-01-01T04:02:47.859Z cpu13:33232)0x43914e81bfd0:[0x41800521746e]CpuSched_StartWorld@vmkernel#nover+0xa2 stack: 0x0, 0x0, 0x0, 0x0, 0
2018-01-01T04:02:47.877Z cpu13:33232)^[[45m^[[33;1mVMware ESXi 6.0.0 [Releasebuild-6921384 x86_64]^[[0m
#GP Exception 13 in world 33232:memMap-13 @ 0x41800501883b -
Claim rules must be manually added to the ESXi host for HITACHI OPEN-V storage arrays.
-
If you manually add settings to the NTP configuration, they might be deleted from the ntp.conf file if you update NTP by using the vSphere Web Client. With this fix, NTP updates preserve all settings, including restrict options, the driftfile, and manually added servers, in the ntp.conf file. If you manually modify the ntp.conf file, you must restart hostd to propagate the updates. -
Stale tickets, which are not deleted before hostd generates a new ticket, might exhaust RAM disks nodes. This might cause ESXi hosts to stop responding.
-
You might be unable to use a connected USB device as a passthrough to migrate virtual machines by using vSphere vMotion due to a redundant condition check.
-
If a virtual machine is on a VMFSsparse snapshot, I/Os issued to the virtual machine might be processed only partially at VMFSsparse level, but upper layers, such as I/O Filters, could presume the transfer is successful. This might lead to data inconsistency.
-
ESXi hosts might not reflect the
MAXIMUM TRANSFER LENGTH
parameter reported by the SCSI device in the Block Limits VPD page. As a result, I/O commands issued with a transfer size greater than the limit might fail with a similar log:2017-01-24T12:09:40.065Z cpu6:1002438588)ScsiDeviceIO: SCSICompleteDeviceCommand:3033: Cmd(0x45a6816299c0) 0x2a, CmdSN 0x19d13f from world 1001390153 to dev "naa.514f0c5d38200035" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
-
The esxtop command-line utility might not display an updated value of queue depth of devices if the corresponding device path queue depth changes.
-
If you attempt to PXE boot a virtual machine that uses EFI firmware, with a vmxnet3 network adapter and WDS, and you have not disabled the variable Windows extension option in WDS, the virtual machine might boot extremely slowly.
-
Due to issues with Apple USB, devices with an iOS version later than iOS 11 cannot connect to virtual machines with an OS X version later than OS X 10.12.
-
IDs for sending vmware.log data to the Syslog, defined with
vmx.log.syslogID
, might not work as expected because the string specified in the variable is ignored. -
ESXi hosts might intermittently fail if you disable global IPv6 addresses, because a code path still uses IPv6.
-
In very rare situations, when an HTTP request is made during hostd initialization, the agent might fail.
-
Hostd might fail when certain invalid parameters are present in the
.vmx
file of a virtual machine. For example, manually adding a parameter such asideX:Y.present
with valueTrue
to the.vmx
file. -
When you have hosts of different ESXi versions in a cluster of Linux virtual machines, and the Enhanced vMotion Compatibility (EVC) level is set to L0 or L1 for Merom and Penryn processors, the virtual machines might stop responding if you attempt migration by using vSphere vMotion from an ESXi 6.0 host to a host with a newer ESXi version.
-
An ESXi host might become unresponsive because datastore heartbeating might stop prematurely while closing VMware vSphere VMFS6. As a result, the affinity manager cannot exit gracefully.
-
Claim rules must be manually added to ESXi for Tegile IntelliFlash storage arrays.
-
The following two VOB events might be generated due to variations in the I/O latency in a storage array, but they do not report an actual problem in virtual machines:
Device naa.xxx performance has deteriorated. I/O latency increased from average value of 4114 microseconds to 84518 microseconds.
Device naa.xxx performance has improved. I/O latency reduced from 346115 microseconds to 67046 microseconds.
-
If you use vSphere vMotion to migrate a virtual machine with file device filters from a vSphere Virtual Volumes datastore to another host, and the virtual machine has any of Changed Block Tracking (CBT), VMware vSphere Flash Read Cache (VFRC), or IO filters enabled, the migration might cause issues with any of these features. During the migration, the file device filters might not be correctly transferred to the host. As a result, you might see corrupted incremental backups in CBT, performance degradation of VFRC and cache IO filters, corrupted replication IO filters, and disk corruption when cache IO filters are configured in write-back mode. You might also see issues with the virtual machine encryption.
-
If you use VMware vSphere APIs for Storage Awareness provider, you might see multiple
getTaskUpdate
calls to cancelled or deleted tasks. As a result, you might see a higher consumption of vSphere API for Storage Awareness bandwidth and a log spew. -
vSphere Virtual Volumes might become unresponsive due to an infinite loop that loads CPUs at 100% if a vSphere API for Storage Awareness provider loses binding information from its database. Hostd might also stop responding. You might see a fatal error message. This fix prevents infinite loops in case of database binding failures.
-
If the logging resource group of an ESXi host is under stress due to heavy logging, hostd might become unresponsive while running
esxcfg-syslog
commands. Since this fix moves theesxcfg-syslog
related commands from the logging resource group to the ESXi shell, you might see higher usage of memory under the shell default resource group. -
Disabled IPMI sensors, or sensors that do not report any data, might generate false hardware health alarms.
-
If you unregister a vSphere API for Storage Awareness provider in a disconnected state, you might see a
java.lang.NullPointerException
error. Event and alarm managers are not initialized when a provider is in a disconnected state, which causes the error. -
A VMFS6 datastore might report out of space incorrectly due to stale cache entries.
-
If a virtual machine has snapshots taken by using CBRC and later CBRC is disabled, disk consolidation operations might fail with an error
A specified parameter was not correct: spec.deviceChange.device
due to a deleted digest file after CBRC was disabled. An alertVirtual machine disks consolidation is needed.
is displayed until the issue is resolved. -
OMIVV relies on information from the iDRAC property
hardware.systemInfo.otherIdentifyingInfo.ServiceTag
to fetch theSerialNumber
parameter for identifying some Dell modular servers. A mismatch in theserviceTag
property might fail this integration. -
Previously, if a component such as a processor or a fan was missing from a vCenter Server system, the Presence Sensors displayed a status Unknown. However, the Presence Sensors do not have health state associated with them.
-
If the target connected to an ESXi host supports only implicit ALUA and has only standby paths, the host might fail with a purple diagnostic screen at the time of device discovery due to a race condition. You might see a backtrace similar to:
SCSIGetMultipleDeviceCommands (vmkDevice=0x0, result=0x451a0fc1be98, maxCommands=1, pluginDone=<optimized out>) at bora/vmkernel/storage/device/scsi_device_io.c:2476
0x00004180171688bc in vmk_ScsiGetNextDeviceCommand (vmkDevice=0x0) at bora/vmkernel/storage/device/scsi_device_io.c:2735
0x0000418017d8026c in nmp_DeviceStartLoop (device=device@entry=0x43048fc37350) at bora/modules/vmkernel/nmp/nmp_core.c:732
0x0000418017d8045d in nmpDeviceStartFromTimer (data=..., data@entry=...) at bora/modules/vmkernel/nmp/nmp_core.c:807 -
If an ESXi host has no available VMware Tools ISO image and you try to upgrade VMware Tools on an active virtual machine, the operation might fail with
VIX error code 21000
. You might not be able to upgrade VMware Tools by using either API, the vSphere Client, or vSphere Web Client, even after an ISO image is available. This is because the ESXi host caches the first availability check at VM power on and does not update it. -
Logs of the hostd service and the Syslog might collect unnecessary debug exception logs across all ESXi hosts in a cluster.
In the hostd log, messages are similar to:
No match, sensor_health file missing sensorNumer 217: class 3 sensor type 35 offset 0
In the
syslog
,/etc/sfcb/omc/sensor_health,
messages are similar to:
Missing expected value to check for
-
In an ESXi configuration with multiple paths leading to LUNs behind IBM SVC targets, if the active paths lose connection and, at the same time, the remaining connected paths are not in a state to service I/Os, the ESXi host might not detect this condition as all paths down (APD), even though no paths are actually available to service I/Os. As a result, I/Os to the device are not fast-failed.
-
If you customize the Rx and Tx ring sizes of physical NICs to boost network performance, by using the following commands:
esxcli network nic ring current set -n <vmnicX> -t <value>
esxcli network nic ring current set -n <vmnicX> -r <value>
the settings might not be persistent across ESXi host reboots. -
A large number of I/O timeouts on a heavily loaded system might cause a soft lockup of physical CPUs. As a result, an ESXi host might fail with a purple diagnostic screen.
-
You might see poor performance of long distance vSphere vMotion operations at high latency, such as 100 ms and higher, with high speed network links, such as 10 GbE, due to the hard-coded socket buffer limit of 16 MB. With this fix, you can configure the max socket buffer size parameter
SB_MAX_ADJ
. -
Due to a race condition, the virtual machine executable process might fail and shut down virtual machines during a reboot of the guest OS.
-
You might see multiple messages such as
sfcb-vmware_raw[69194]: IpmiIfcFruGetInv: Failed during send cc = 0xc9
in thesyslog.log
. These are regular information logs, not error or warning logs. -
After taking a successful quiesced snapshot of a Linux virtual machine, the Snapshot Manager might still display the snapshot as not quiesced.
-
In release builds, smartd might generate a lot of debugging and info messages into the Syslog service logs.
-
Invalid commands to a virtual SATA CD-ROM might trigger errors and increase the memory usage of virtual machines. This might lead to a failure of virtual machines if they are unable to allocate memory.
You might see the following logs for invalid commands:
YYYY-MM-DDTHH:MM:SS.mmmZ| vcpu-1| I125: AHCI-USER: Invalid size in command
YYYY-MM-DDTHH:MM:SS.mmmZ| vcpu-1| I125: AHCI-VMM: sata0:0: Halted the port due to command abort.
And a similar panic message:
YYYY-MM-DDTHH:MM:SS.mmmZ| vcpu-1| E105: PANIC: Unrecoverable memory allocation failure
-
When you select the Physical Location column from the Disk Management table on the Cluster Configuration Dashboard of the vSphere Web Client, the column might remain empty and display only information about the hpsa driver.
-
Some critical flash device parameters, including Temperature and Reallocated Sector Count, do not provide threshold values. As a result, the ESXi host smartd daemon might report some warnings.
-
The hostd service might stop responding while the external process esxcfg-syslog remains stuck. As a result, the ESXi host might become unresponsive.
-
When you enable IPFIX and traffic is heavy with different flows, the system heartbeat might fail to preempt CPU from IPFIX for a long time and trigger a purple diagnostic screen.
-
By default, the Isolation Response feature in an ESXi host with enabled vSphere HA is disabled. When the Isolation Response feature is enabled, a virtual machine port, which is connected to an NSX-T logical switch, might be blocked after a vSphere HA failover.
-
If the vSphere Network I/O Control and queue pairing of NICs are disabled, you might see out of order packets with infrastructure traffic, such as management, iSCSI, and NFS. This is because when vSphere Network I/O Control is not enabled, queue pairing is also disabled, and NICs work with multiple queues.
-
Drives that do not support Block Limits VPD page
0xb0
might generate event code logs that flood thevmkernel.log
. -
When you replicate virtual machines by using VMware vSphere Replication, the ESXi host might fail with a purple diagnostic screen immediately or within 24 hours and report the error:
PANIC bora/vmkernel/main/dlmalloc.c:4924 - Usage error in dlmalloc
. -
Virtual machines might become unresponsive due to repetitive failures in some third party device drivers to process commands. You might see the following error when opening the virtual machine console:
Error: "Unable to connect to the MKS: Could not connect to pipe \\.\pipe\vmware-authdpipe within retry period"
-
If the Guest Introspection service in a vSphere 6.5 environment with more than 150 virtual machines is active, migration of virtual machines by using vSphere vMotion might fail with an error in the vSphere Web Client similar to
The source detected that the destination failed to resume.
The destination vmware.log contains error messages similar to:
2018-07-18T02:41:32.035Z| vmx| I125: MigrateSetState: Transitioning from state 11 to 12.
2018-07-18T02:41:32.035Z| vmx| I125: Migrate: Caching migration error message list:
2018-07-18T02:41:32.035Z| vmx| I125: [msg.checkpoint.migration.failedReceive] Failed to receive migration.
2018-07-18T02:41:32.035Z| vmx| I125: [msg.namespaceDb.badVersion] Incompatible version -1 (expect 2).
2018-07-18T02:41:32.035Z| vmx| I125: [msg.checkpoint.mrestoregroup.failed] An error occurred restoring the virtual machine state during migration.The vmkernel log has error messages such as:
2018-07-18T02:32:43.011Z cpu5:66134)WARNING: Heap: 3534: Heap fcntlHeap-1 already at its maximum size. Cannot expand.
2018-07-18T02:41:35.613Z cpu2:66134)WARNING: Heap: 4169: Heap_Align(fcntlHeap-1, 200/200 bytes, 8 align) failed. caller: 0x41800aaca9a3 -
If an NFS datastore is configured as a Syslog datastore and an ESXi host disconnects from it, logging to the datastore stops and the ESXi host might become unresponsive.
-
This fix sets SATP to
VMW_SATP_ALUA
, PSP toVMW_PSP_RR
and Claim Options totpgs_on
as default for Lenovo DE series storage arrays. -
In a vSAN cluster with deduplication enabled, I/Os destined for a capacity disk might be re-routed to another disk. The new destination device might suffer a Permanent Device Loss (PDL). If I/O requests to the failed disk arrive during this interval, the host might fail with a purple diagnostic screen.
-
If you manually delete from an ESXi host the support bundle folder of a virtual machine, downloaded in the
/scratch/downloads
directory of the host, hostd might fail when it automatically tries to delete folders in this path one hour after their creation. -
Due to a race condition, a cache entry that is already evicted might be evicted again. This leads to a null pointer dereference that causes the ESXi host to fail.
-
During the migration or upgrade to vCenter Server Appliance 6.5, some deployment sizes are not available for selection in the information table if the required size of your vCenter Server Appliance is greater than the threshold for that deployment size.
-
The NIOC hClock scheduler might reset the uplink network device when the uplink is periodically used, and the reset is non-predictable.
-
If you enable the Stateful Install feature on a host profile, and the management VMkernel NIC is connected to a distributed virtual switch, applying the host profile to another ESXi 6.5 host by using vSphere Auto Deploy might fail during a PXE boot. The host remains in maintenance mode.
-
Power cycling of a virtual machine with SR-IOV vNICs by using scripts might cause the ESXi host to fail with a purple diagnostic screen.
-
If an entity, such as a virtual machine or a datastore, is no longer in the stats database of an ESXi host, but vim.PerformanceManager issues a request for performance data for this entity, it is possible to hit a code path that fails the host agent process. As a result, the host might become temporarily unavailable to the vCenter Server system.
-
If you enable the NetFlow network analysis tool to sample every packet on a vSphere Distributed Switch port group, by setting the Sampling rate to 0, the network latency might reach 1000 ms in case flows exceed 1 million.
-
A virtual machine that does hundreds of disk hot add or remove operations without powering off or migrating might be terminated and become invalid. This affects backup solutions where the backup proxy virtual machine might be terminated.
In the hostd log, you might see content similar to:
2018-06-08T10:33:14.150Z info hostd[15A03B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datatore/path/to/proxyVM/proxyVM.vmx opID=5953cf5e-3-a90a user=vpxuser:ADMIN\admin] State Transition (VM_STATE_ON -> VM_STATE_RECONFIGURING) ...
2018-06-08T10:33:14.167Z error hostd[15640B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datastore/path/to/proxyVM/proxyVM.vmx opID=5953cf5e-3-a90a user=vpxuser:ADMIN\admin] Could not apply pre-reconfigure domain changes: Failed to add file policies to domain :171: world ID :0:Cannot allocate memory ...
2018-06-08T10:33:14.826Z info hostd[15640B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datastore/path/to/proxyVM/proxyVM.vmx opID=5953cf5e-3-a90a user=vpxuser:ADMIN\admin] State Transition (VM_STATE_RECONFIGURING -> VM_STATE_ON) ...
2018-06-08T10:35:53.120Z error hostd[15A44B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datastore/path/to/proxyVM/proxyVM.vmx] Expected permission (3) for /vmfs/volumes/path/to/backupVM not found in domain 171In the vmkernel log, the content is similar to:
2018-06-08T05:40:53.264Z cpu49:68763 opID=4c6a367c)World: 12235: VC opID 5953cf5e-3-a90a maps to vmkernel opID 4c6a367c
2018-06-08T05:40:53.264Z cpu49:68763 opID=4c6a367c)WARNING: Heap: 3534: Heap domainHeap-54 already at its maximum size. Cannot expand. -
When powering off the last virtual machine on an ESXi host, if this virtual machine is using the sysContexts configuration, the scheduler might not remove the exclusive affinity set by the sysContexts configuration.
-
The SNMP agent might deliver an incorrect trap
vmwVmPoweredOn
when a virtual machine is selected under the Summary tab and the Snapshot Manager tab in the vSphere Web Client. -
Advanced performance charts might stop drawing graphs for some virtual machine statistics after a restart of hostd due to a division by zero error in some counters.
-
The NSX opsAgent might fail and you might see the core dump file located at
/var/core
when you deploy NSX-T appliance on an ESXi host. The failure is due to a race condition in the library code which is provided by the host. -
If you configure the
/productLocker
directory on a shared VMFS datastore, when you migrate a virtual machine by using vSphere vMotion, VMware Tools on the virtual machine might display an incorrect status Unsupported. -
The
MSINFO32
command might not display installed physical RAM on virtual machines running Windows 7 or Windows Server 2008 R2. If two or more vCPUs are assigned to a virtual machine, the Installed Physical RAM field displays not available
. -
In some cases, in which unaligned overlapping I/Os occur, the vSAN stack two phase commit engine's commit scheduler might not run immediately. This delay can add latency to I/O operations.
-
OSFS_Readdir API populates directory entries limited by the buffer size provided, and always returns End Of File (EOF) as Yes. You might have to call FSSReadDir multiple times to read the entire directory.
-
While upgrading a vSAN stretched cluster from ESXi 6.0 to 6.5, a host might fail with a purple diagnostic screen. The following stack trace identifies this problem.
#0 DOMUtil_HashFromUUID
#1 DOMServer_GetServerIndexFromUUID
#2 DOMOwnerGetRdtMuxGroupInt
#3 DOMOwner_GetRdtMuxGroupUseNumServers
#4 DOMAnchorObjectGetOwnerVersionAndRdtMuxGroup
#5 DOMAnchorObjectCreateResolverAndSetMuxGroup
#6 DOMObject_InitServerAssociation
#7 DOMAnchorObjectInitAssociationToProxyOwnerStartTask
#8 DOMOperationStartTask
#9 DOMOperationDispatch
#10 VSANServerExecuteOperation
#11 VSANServerMainLoop -
In some cases, vSAN takes a long time to mark a disk as degraded, even though I/O failures are reported by the disk and vSAN has stopped servicing any further I/Os from that disk.
-
Fans on Dell R730 servers might not appear under the Fan section in the Hardware Status tab of the vSphere Client or the vSphere Web Client, but in other sections, or not display at all.
-
After upgrading vCenter Server to 6.5 Update 2, the vSAN capacity monitor does not include the capacity for virtual machines deployed in ESXi hosts with versions older than 6.5 Update 2. The monitor might show less used capacity than the amount actually used in the cluster.
-
Component congestion, originating from the logging layer of vSAN, might lower guest I/O bandwidth due to internal I/O throttling by vSAN. Large I/Os that get divided are not properly leveraged for faster destaging. This issue results in log buildup, which can cause congestion. You might notice a drop in IOPS when entering maintenance mode.
-
An ESXi host might fail with a purple diagnostic screen due to a race condition in a query by the Multicast Listener Discovery (MLD) version 1 in IPv6 environments. You might see an error message similar to:
#PF Exception 14 in world 2098376:vmk0-rx-0 IP 0x41802e62abc1 addr 0x40
...
0x451a1b81b9d0:[0x41802e62abc1]mld_set_version@(tcpip4)#+0x161 stack: 0x430e0593c9e8
0x451a1b81ba20:[0x41802e62bb57]mld_input@(tcpip4)#+0x7fc stack: 0x30
0x451a1b81bb20:[0x41802e60d7f8]icmp6_input@(tcpip4)#+0xbe1 stack: 0x30
0x451a1b81bcf0:[0x41802e621d3b]ip6_input@(tcpip4)#+0x770 stack: 0x451a00000000 -
Connectivity to NFS datastores might be lost if you name a new datastore with the old name of a renamed existing datastore. For example, if you rename an existing datastore from NFS-01 to NFS-01-renamed and then create a new NFS datastore with the NFS-01 name. As a result, the ESXi host loses connectivity to the renamed datastore and fails to mount the new one.
-
If you disable IPv4 on an ESXi host, you might not be able to set IPv6 hostnames, because the system requires an IPv4 address.
-
The number of generated syslog.log archive files might be one less than the default value set using the syslog.global.defaultRotate parameter. The number of syslog.log archive files might also be different between ESXi versions 6.0 and 6.5. For instance, if syslog.global.defaultRotate is set to 8 by default, ESXi 6.0 creates syslog.0.gz to syslog.7.gz, while ESXi 6.5 creates syslog.0.gz to syslog.6.gz. -
When an object is expanded, a concatenation is added to the replicas to support the increased size requirement. Over a period of time, as vSAN creates new replicas to include the concatenation, the original replicas and concatenation are discarded. As vSAN merges the original replica and concatenation into a new replica, the operation might fail if vSAN does not have enough space to put the new components. If the disk becomes imbalanced, and all components on the disk are under concat nodes, the rebalance operation cannot move any components to balance the disk.
-
You cannot expand a VMDK while also changing the storage policy of the VMDK, but expansion might still fail without any change in storage policy if the existing policy has set the Data locality rule.
-
The upper Reserved Memory Range Reporting limit might not be correctly computed for virtual machines running on AMD based hosts. This might prevent virtual machines using the VMware DirectPath I/O feature from powering on.
-
An error in the NMP device quiesce code path might lead to calls to unlock an already released lock, which might cause an ESXi host to fail with a purple diagnostic screen.
-
Virtual machines with hardware version 10 or earlier, using EFI, and running Windows Server 2016 on AMD processors, might stop responding during reboot. The issue does not occur if a virtual machine uses BIOS, or its hardware version is 11 or later, or the guest OS is not Windows, or the processors are Intel.
-
If you attempt to re-register a character device with the same name, you might see a warning similar to the following:
2018-09-14T04:49:27.441Z cpu33:4986380)WARNING: CharDriver: 291: Driver with name XXXX is already using slot XX
Heap memory needed to register the device does not get freed for a duplicate attempt. This causes a memory leak, and over time the host might fail with a purple diagnostic screen. -
When loading the NMP SATP module after a system reboot, the
vmk_ModuleLoad()
method might fail to read changed module specific parameters. The issue affects mostly third-party drivers. -
vSAN uses disk fullness value across all disks in the cluster to calculate the mean fullness value. This value is used during the rebalance operation to ensure moving data does not cause an imbalance on the destination disk. If you have a vSAN stretched cluster, storage utilization can be different across sites. In such cases, rebalance might not work as expected if the mean fullness value on one site is higher than the other site.
-
ESXi hosts and hostd might stop responding due to a memory allocation issue in the vSAN disk serviceability plugin lsu-lsi-lsi-mr3-plug-in. You might see the error messages
Out of memory
orError cloning thread
. -
On rare occasions, NICs using the ntg3 driver, such as the Broadcom BCM5719 and 5720 GbE NICs, might temporarily stop sending packets after a failed attempt to send an oversize packet. The ntg3 driver version 4.1.3.2 resolves this problem.
-
In a vSAN cluster with HPE ProLiant Gen9 Smart Array Controllers, such as P440 and P840, the locator LEDs might not be lit on the correct failed device.
-
When you delete a virtual machine, the virtual machine directory might remain in your NSX-T environment due to the wrong order in which the port files in the
.dvsData
folder are deleted. -
When you enable IPFIX with a high sampling rate and traffic is heavy, you might see degraded performance of a VXLAN environment.
-
Profile Name | ESXi-6.5.0-20181101001s-standard |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | November 29, 2018 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2094558, 2169094, 2020984, 2025909, 2109023, 2189347, 2154394 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
The Windows pre-Vista iso image for VMware Tools is no longer packaged with ESXi. The Windows pre-Vista iso image is available for download by users who require it. For download information, see the Product Download page.
-
The NTP daemon is updated to 4.2.8p12.
-
For large virtual machines, Encrypted vSphere vMotion might fail due to insufficient migration heap space.
-
The Python package is updated to version 3.5.5.
-
The ESXi userworld libxml2 library is updated to version 2.9.8.
-
The OpenSSL package is updated to version 1.0.2p.
-
The OpenSSH package is updated to version 7.7p1.
-
Profile Name | ESXi-6.5.0-20181101001s-no-tools |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | November 29, 2018 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2094558, 2169094, 2020984, 2025909, 2109023, 2189347, 2154394 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
The Windows pre-Vista ISO image for VMware Tools is no longer packaged with ESXi. The ISO image is available for download by users who require it. For download information, see the Product Download page.
-
The NTP daemon is updated to 4.2.8p12.
-
For large virtual machines, Encrypted vSphere vMotion might fail due to insufficient migration heap space.
-
The Python package is updated to version 3.5.5.
-
The ESXi userworld libxml2 library is updated to version 2.9.8.
-
The OpenSSL package is updated to version 1.0.2p.
-
The OpenSSH package is updated to version 7.7p1.
-
Known Issues
The known issues are grouped as follows.
ESXi650-201811401-BG
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs | N/A |
CVE numbers | N/A |
This patch updates the esx-base, esx-tboot, vsan, and vsanhealth VIBs to resolve the following issues:
- Migration of virtual machines by using vSphere vMotion on ESXi hosts of versions earlier than 5.0 might fail with a Monitor Panic error in the destination site
You might fail to migrate virtual machines by using vSphere vMotion from ESXi hosts of versions earlier than 5.0 to version 6.5 due to a missing parameter. Cross vCenter Server vMotion migration might also fail. You might see a Monitor Panic error at the destination site.
Workaround: Power off and power on the virtual machine.
- Some hardware health sensors might display status Unknown in the vSphere Web Client and vSphere Client due to a decoding issue
Some IPMI sensors might report status Unknown in the Hardware Health tab of the vSphere Web Client and vSphere Client due to a decoding issue.
Workaround: Consult your hardware vendor for the particular sensor to understand its use. For more information, see VMware knowledge base article 53134.
- The ESXi Syslog service might display inaccurate timestamps at start or rotation
The ESXi Syslog service might display inaccurate timestamps when the service starts or restarts, or when a log reaches its configured maximum size and rotates.
Workaround: This issue is resolved for fresh installs of ESXi. To fix the issue in an existing configuration for the hostd.log, you must:
- Open the /etc/vmsyslog.conf.d/hostd.conf file.
- Replace onrotate = logger -t Hostd < /var/run/vmware/hostdLogHeader.txt with onrotate = printf '%%s - last log rotation time, %%s\n' "$(date --utc +%%FT%%T.%%3NZ)" "$(cat /var/run/vmware/hostdLogHeader.txt)" | logger -t Hostd
- Save the changes.
- Restart the vmsyslogd service by running /etc/init.d/vmsyslogd restart.
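If you prefer to script this edit across multiple hosts, the following Python sketch performs the same replacement. It is illustrative only and not part of the documented workaround; it assumes that hostd.conf contains a single onrotate directive, and you should back up the file before changing it.
# Illustrative sketch only -- automates the manual hostd.conf edit described above.
# Assumes a single "onrotate" directive in the file; back up the file first.
CONF_PATH = "/etc/vmsyslog.conf.d/hostd.conf"
NEW_ONROTATE = (
    "onrotate = printf '%%s - last log rotation time, %%s\\n' "
    '"$(date --utc +%%FT%%T.%%3NZ)" '
    '"$(cat /var/run/vmware/hostdLogHeader.txt)" | logger -t Hostd\n'
)

with open(CONF_PATH) as f:
    lines = f.readlines()

with open(CONF_PATH, "w") as f:
    for line in lines:
        # Swap the old onrotate directive for the new one; keep all other lines.
        f.write(NEW_ONROTATE if line.lstrip().startswith("onrotate") else line)

# Then restart the syslog service: /etc/init.d/vmsyslogd restart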
- VMware vSphere vApp or virtual machine power-on operations might fail with an error message
vSphere vApp or virtual machine power-on operations might fail in a DRS-managed cluster if the vApp or the virtual machines use a resource pool with a non-expandable memory reservation. The following error message is displayed: The available Memory resources in the parent resource pool are insufficient for the operation.
Workaround: See VMware knowledge base article 1003638.