Release Date: SEP 12, 2019

Download Filename:

ESXi600-201909001.zip

Build Details

Download Filename: ESXi600-201909001.zip
Build: 14513180
Security-only build: 14516143
Download Size: 566.1 MB
md5sum: 170ff5e815e7ac357f0c06daeec667c6
sha1checksum: 0a5f04790797eeb5c0708dff11a9e0a515c3fd8c
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
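
To verify the integrity of the downloaded bundle before installation, you can compare its checksums with the values above. For example, on a Linux workstation with the standard checksum utilities, run the following commands from the directory where you saved the file:

    md5sum ESXi600-201909001.zip
    sha1sum ESXi600-201909001.zip

The output of each command should match the md5sum and sha1checksum values listed in the Build Details.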

Bulletins

Bulletin ID Category Severity
ESXi600-201909401-BG Bugfix Critical
ESXi600-201909402-BG Bugfix Important
ESXi600-201909403-BG Bugfix Important
ESXi600-201909404-BG Bugfix Important
ESXi600-201909405-BG Bugfix Moderate
ESXi600-201909101-SG Security Important
ESXi600-201909102-SG Security Important
ESXi600-201909103-SG Security Important
ESXi600-201909104-SG Security Important

Rollup Bulletin

This rollup bulletin contains the latest VIBs with all the fixes since the initial release of ESXi 6.0.

Bulletin ID Category Severity
ESXi600-201909001 Bugfix Critical

Image Profiles

VMware patch and update releases contain general and critical image profiles. Apply the general release image profile to receive the new bug fixes.

Image Profile Name
ESXi-6.0.0-20190904001-standard
ESXi-6.0.0-20190904001-no-tools
ESXi-6.0.0-20190901001s-standard
ESXi-6.0.0-20190901001s-no-tools

For more information about the individual bulletins, see the Download Patches page and the Resolved Issues section.

Patch Download and Installation

The typical way to apply patches to ESXi hosts is through the VMware vSphere Update Manager. For details, see the Installing and Administering VMware vSphere Update Manager documentation.

ESXi hosts can be updated by manually downloading the patch ZIP file from the VMware download page and installing the VIB by using the esxcli software vib command. Additionally, the system can be updated using the image profile and the esxcli software profile command.
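
For example, after the ESXi600-201909001.zip offline bundle has been uploaded to a datastore that the host can access, the rollup bulletin or one of the image profiles listed above can be applied from the ESXi shell with commands similar to the following. This is only a sketch: the datastore path is a placeholder, and the host should typically be placed in maintenance mode before patching.

    esxcli software vib update -d /vmfs/volumes/datastore1/ESXi600-201909001.zip
    esxcli software profile update -d /vmfs/volumes/datastore1/ESXi600-201909001.zip -p ESXi-6.0.0-20190904001-standard

Run only one of the two commands. Because this patch requires a host reboot, reboot the host after the update completes.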

For more information, see the vSphere Command-Line Interface Concepts and Example Guide and the vSphere Upgrade Guide.

Resolved Issues

The resolved issues are grouped as follows.

ESXi600-201909401-BG
Patch Category Bugfix
Patch Severity Critical
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMware_bootbank_vsanhealth_6.0.0-3000000.3.0.3.129.14292922
  • VMware_bootbank_esx-base_6.0.0-3.129.14513180
  • VMware_bootbank_vsan_6.0.0-3.129.14292921
PRs Fixed  2072964, 2193583, 2195026, 2199258, 2205117, 2212177, 2331370, 2206616, 2222553, 2298989, 2144765, 2246706, 2158355, 2219532, 2145513, 2298926, 2154883, 2175314, 2244215, 2087171, 2224239, 2203838, 2186066, 2167099, 2155337, 2197793, 2206216, 2004022, 2166555, 2292414, 2112682, 2280731, 2034812, 2115943, 2128545, 2156838, 2244863, 2274774, 2320290, 2327967, 2394252, 1863888, 2326269, 2361713, 2245346, 2398343, 2226560, 2139134
CVE numbers N/A

This patch updates esx-base, vsanhealth and vsan VIBs to resolve the following issues:

  • PR 2072964: Virtual machine disk consolidation might fail if the virtual machine has snapshots taken with Content Based Read Cache (CBRC) enabled and then disabled

    If a virtual machine has snapshots taken by using CBRC and later CBRC is disabled, disk consolidation operations might fail with an error A specified parameter was not correct: spec.deviceChange.device due to a deleted digest file after CBRC was disabled. An alert Virtual machine disks consolidation is needed. is displayed until the issue is resolved. This fix prevents the issue, but it might still exist for virtual machines that have snapshots taken with CBRC enabled and later disabled.

    This issue is resolved in this release.

  • PR 2193583: The SMART disk monitoring daemon, smartd, might flood the syslog service logs of release builds with debugging and info messages

    In release builds, smartd might generate a lot of debugging and info messages into the syslog service logs.

    This issue is resolved in this release. The fix removes debug messages from release builds.

  • PR 2195026: ESXi host smartd reports some warnings

    Some critical flash device parameters, including Temperature and Reallocated Sector Count, do not provide threshold values. As a result, the ESXi host smartd daemon might report some warnings.

    This issue is resolved in this release.

  • PR 2199258: Virtual machines might become unresponsive due to repetitive failures of third-party device drivers to process commands

    Virtual machines might become unresponsive due to repetitive failures in some third-party device drivers to process commands. You might see the following error when opening the virtual machine console: Error: "Unable to connect to the MKS: Could not connect to pipe \\.\pipe\vmware-authdpipe within retry period". 

    This issue is resolved in this release. This fix recovers commands to unresponsive third-party device drivers and ensures that failed commands are cancelled and retried until success.

  • PR 2205117: After a successful quiesced snapshot of a Linux virtual machine, the Snapshot Manager might still display the snapshot as not quiesced

    After taking a successful quiesced snapshot of a Linux virtual machine, the Snapshot Manager might still display the snapshot as not quiesced.

    This issue is resolved in this release.

  • PR 2212177: Advanced performance charts might stop drawing graphs for some virtual machine statistics after a restart of the hostd service

    Advanced performance charts might stop drawing graphs for some virtual machine statistics after a restart of the hostd service due to a division by zero error in some counters.

    This issue is resolved in this release.

  • PR 2331370: An ESXi host might fail with a purple diagnostic screen when it removes a PShare Hint with the function VmMemCow_PShareRemoveHint

    When an ESXi host removes a PShare Hint from a PShare chain, if the PShare chain is corrupted, the ESXi host might fail with a purple diagnostic screen and an error similar to:
    0x43920bd9bdc0:[0x41800c5930d6]VmMemCow_PShareRemoveHint
    0x43920bd9be00:[0x41800c593172]VmMemCowPFrameRemoveHint
    0x43920bd9be30:[0x41800c594fc8]VmMemCowPShareFn@vmkernel
    0x43920bd9bf80:[0x41800c500ef4]VmAssistantProcessTasks@vmkernel
    0x43920bd9bfe0:[0x41800c6cae05]CpuSched_StartWorld@vmkernel

    This issue is resolved in this release.

  • PR 2206616: Migrating virtual machines by using VMware vMotion might fail due to random errors in the tagging of traffic types of VMkernel interfaces

    In earlier releases of ESXi, a VMkernel interface could transport three types of traffic: Management, vMotion and Fault Tolerance. Since ESXi 5.1, the traffic types configuration is stored in vmknic tags and not in the esx.conf file as advanced option strings. During conversion of the vmknic tagging configuration from one format to another, in some cases, the wrong traffic type is enabled. As a result, you might see migration of virtual machines by using VMware vMotion to fail, because the vmk0 interface, which contains the default Management traffic type, is being incorrectly tagged.

    This issue is resolved in this release. This fix removes the processing of vmknic-related advanced options from hostd and simplifies vmknic event handling.

  • PR 2222553: Virtual machines using 3D software might intermittently lose connectivity

    Virtual machines using 3D software might intermittently lose connectivity with a VMX panic error displayed on a blue screen.

    This issue is resolved in this release. The fix adds checks for texture coordinates with values Infinite or NaN.

  • PR 2298989: An ESXi host might fail with the purple diagnostic screen when you use the software iSCSI adapter

    When you use the software iSCSI adapter, the ESXi host might fail with a purple diagnostic screen due to a race condition.

    This issue is resolved in this release.

  • PR 2144765: I/O commands might fail with INVALID FIELD IN CDB error

    ESXi hosts might not reflect the MAXIMUM TRANSFER LENGTH parameter reported by the SCSI device in the Block Limits VPD page. As a result, I/O commands issued with a transfer size greater than the limit might fail with a similar log:
    2017-01-24T12:09:40.065Z cpu6:1002438588)ScsiDeviceIO: SCSICompleteDeviceCommand:3033: Cmd(0x45a6816299c0) 0x2a, CmdSN 0x19d13f from world 1001390153 to dev "naa.514f0c5d38200035" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

    This issue is resolved in this release.

  • PR 2246706: The hostd service might fail with an error message reporting too many locks

    Larger configurations might exceed the limited number of locks available, causing the hostd service to fail with an error similar to: hostd Panic: MXUserAllocSerialNumber: too many locks!

    This issue is resolved in this release. The fix removes the limit on the number of locks.

  • PR 2158355: Enabling NetFlow might lead to high network latency

    If you enable the NetFlow network analysis tool to sample every packet on a vSphere Distributed Switch port group, by setting the Sampling rate to 0, the network latency might reach 1000 ms in case flows exceed 1 million.

    This issue is resolved in this release.

  • PR 2219532: guestinfo settings do not persist across virtual machine power on and off sequences

    Any guestinfo settings, values provided by the guest virtual machine that are recorded while the virtual machine is running, and designed to persist across power on and off sequences, might be lost after the virtual machine is powered off.

    This issue is resolved in this release.

  • PR 2145513: The Direct Console User Interface (DCUI) might display incorrect manufacturer names of white box servers

    DCUI might display junk characters or incorrect manufacturer names of white box servers.

    This issue is resolved in this release.

  • PR 2298926: When a VNC connection is interrupted, virtual machines might intermittently fail due to dereference of a NULL pointer

    Under certain conditions, when a Virtual Network Computing (VNC) connection is interrupted, virtual machines might fail due to dereference of a NULL pointer in the VNC backend.

    This issue is resolved in this release.

  • PR 2154883: VMware Tools status might incorrectly display as unsupported when you have configured ProductLocker on a shared VMFS datastore

    Virtual machines running on different ESXi hosts that use the same VMFS datastore as ProductLocker might get wrong VMware Tools status if a virtual machine on one host mounts the VMware Tools ISO image and other virtual machines cannot access that image. The problem continues as long as one of the virtual machines has the VMware Tools ISO image mounted and prevents others from using the image to compute their VMware Tools version status.

    This issue is resolved in this release.

  • PR 2175314: Update of BusyBox

    BusyBox is updated to version 1.29.3 to resolve security issues with identifier FG-VD-18-127.

  • PR 2087171: An ESX host might fail with a purple diagnostic screen if the list of allowed IPs in the esxfw module changes frequently

    If the esxfw module is enabled in your environment and you frequently reconfigure the list of allowed IPs, this might trigger a race condition that causes the ESX host to fail with a purple diagnostic screen.

    This issue is resolved in this release.

  • PR 2224239: VMware vSphere Virtual Volumes might become unresponsive if an API for Storage Awareness (VASA) provider loses binding information from its database

    vSphere Virtual Volumes might become unresponsive due to an infinite loop that loads CPUs at 100% if a VASA provider loses binding information from its database. The hostd service might also stop responding. You might see a fatal error message. This fix prevents infinite loops in case of database binding failures.

    This issue is resolved in this release.

  • PR 2203838: Claim rules must be manually added to an ESXi host

    You must manually add the claim rules to an ESXi host for Lenovo ThinkSystem DE Series Storage Arrays.

    This issue is resolved in this release. This fix sets the Storage Array Type Plugin (SATP) to VMW_SATP_ALUA, the Path Selection Policy (PSP) to VMW_PSP_RR, and the Claim Options to tpgs_on as the default for Lenovo ThinkSystem DE Series Storage Arrays (see the verification example after this list).

  • PR 2186066: Firmware event code logs might flood the vmkernel.log

    Drives that do not support the Block Limits VPD page 0xb0 might generate event code logs that flood the vmkernel.log.

    This issue is resolved in this release.

  • PR 2167099: All Paths Down (APD) is not triggered for LUNs behind IBM SAN Volume Controller (SVC) target even when no paths can service I/Os

    In an ESXi configuration with multiple paths leading to LUNs behind IBM SVC targets, in case of connection loss on active paths and if at the same time the other connected paths are not in a state to service I/Os, the ESXi host might not detect this condition as APD even as no paths are actually available to service I/Os. As a result, I/Os to the device are not fast failed.

    This issue is resolved in this release. The fix is disabled by default. To enable it, set the ESXi advanced configuration option /Scsi/ExtendAPDCondition to 1, for example with the command esxcfg-advcfg -s 1 /Scsi/ExtendAPDCondition (see the command example after this list).

  • PR 2155337: The esxtop command-line utility might not display the queue depth of devices correctly

    The esxtop command-line utility might not display an updated value of the queue depth of devices if the corresponding device path queue depth changes.

    This issue is resolved in this release. The fix enables curQDepth updates if queue throttling is inactive.

  • PR 2197793: The scheduler of the VMware vSphere Network I/O Control (NIOC) might intermittently reset the uplink network device

    The NIOC scheduler might reset the uplink network device if the uplink is rarely used. The reset is unpredictable.

    This issue is resolved in this release.

  • PR 2206216: The hostd service fails if the link speed mode of a physical NIC is UNKNOWN and the link speed is AUTO

    The hostd service cannot support a state when the link speed mode of a physical NIC is UNKNOWN and the link speed is AUTO, and fails.

    This issue is resolved in this release.

  • PR 2004022: Virtual machines not responding due to disconnected I/O from Guest OS

    Restarting the hostd service causes reconfiguration of the network interfaces. When this occurs, the ESXi host disconnects from the cluster. This reconfiguration might lead to unresponsive virtual machines.

    This issue is resolved in this release.

  • PR 2166555: When virtual machines are deleted, the hostd service might fail and core dump on an ESXi host

    When you delete multiple virtual machines, a virtual machine with a pending reload operation thread might cause the hostd service to crash. This problem can occur when a reload thread does not capture the Managed Object not found exception.

    This issue is resolved in this release.

  • PR 2112682: vSAN does not mark a disk as degraded even after the disk reports failed I/Os

    In some cases, vSAN takes a long time to mark a disk as degraded, even though the disk reports I/O failures and vSAN has stopped servicing I/Os from that disk.

    This issue is resolved in this release.

  • PR 2280731: Corruption in the DvFilter heap might cause an ESXi host to fail with a purple diagnostic screen

    A race condition in the get-set firewall rule operations for a DvFilter might lead to a buffer overflow and corruption of the heap. As a result, an ESXi host might fail with a purple diagnostic screen.

    This issue is resolved in this release.

  • PR 2034812: The SNMP monitoring tool might report incorrect values for the polling network bandwidth

    The SNMP monitoring tool might report incorrect values while configuring the polling network bandwidth due to improper 64bit counters. The Network Node Manager i (NNMi) relies on the managed device to provide values for the calculation. While configuring the polling network bandwidth, the queries for ifHCOutOctets OID in the ifmib SNMP MIB module return values not as expected. This fix enhances 64bit counts.

    This issue is resolved in this release.

  • PR 2115943: An ESXi host might fail with a purple diagnostic screen due to a rare error in the TCP/IP stack

    In very rare cases an error occurs while the TCP connection switches between established and not established states and an ESXi host fails with a purple diagnostic screen.

    This issue is resolved in this release.

  • PR 2128545: If you use the fixed option of the Path Selection policy, you cannot set a selected path as the preferred path after upgrading to ESXi600-201711001 or later

    After an upgrade to ESXi600-201711001 or later, you might not be able to set a selected path as the preferred path to a device with the Path Selection policy option set to fixed via the user interface.

    This issue is resolved in this release.

  • PR 2156838: Virtual machines using EFI and running Windows Server 2016 on AMD processors might stop responding during reboot

    Virtual machines with hardware version 10 or older, using EFI, and running Windows Server 2016 on AMD processors, might stop responding during a reboot. The issue does not occur if a virtual machine uses BIOS, or its hardware version is 11 or newer, or the guest OS is not Windows, or the processors are Intel.

    This issue is resolved in this release.

  • PR 2244863: Virtual machines might fail during a long-running quiesced snapshot operation

    If the hostd service restarts during a long-running quiesced snapshot operation, hostd might automatically run the snapshot Consolidation command to remove redundant disks and improve virtual machine performance. However, the Consolidation command might race with the running quiesced snapshot operation and cause failure of virtual machines.

    This issue is resolved in this release.

  • PR 2274774: CLOMD crashes during rebalance operation

    When object creation fails, vSAN might retain a component from the failed create object operation. These components can cause the Cluster Level Object Manager Daemon (CLOMD) to crash during disk rebalancing.

    This issue is resolved in this release.

  • PR 2320290: When a vMotion operation fails and is immediately followed by a hot-add or a Storage vMotion operation, the ESXi host might fail with a purple diagnostic screen in some cases

    If the Maximum Transmission Unit (MTU) size is configured incorrectly and the MTU size of the virtual switch is less than the configured MTU size for the VMkernel port, the vMotion operation might fail. If the failed vMotion operation is immediately followed by a hot-add or a Storage vMotion operation, this causes the ESXi host to fail with a purple diagnostic screen.

    This issue is resolved in this release.

  • PR 2327967: If a resource group reaches its max memory limit then starting a new process under it might cause a server to fail with a purple diagnostic screen

    If a resource group reaches its max memory limit, then starting a new process under it might fail to completely initialize the counters of the performance monitoring utility vmkperf. The partially created vmkperf counters are not properly freed while recovering from the memory limit error. As a result, a server might fail with a purple diagnostic screen due to a timeout.

    This issue is resolved in this release. The fix properly frees allocated memory for vmkperf counters.

  • PR 2394252: You cannot set virtual machines to power cycle when the guest OS reboots

    After a microcode update, it is sometimes necessary to re-enumerate the CPUID for virtual machines on an ESXi server. By using the configuration parameter vmx.reboot.powerCycle = TRUE, you can schedule virtual machines for a power cycle when necessary (see the example entry after this list).

    This issue is resolved in this release.

  • PR 1863888: An ESXi host might fail with a purple diagnostic screen when running HBR + CBT on a datastore that supports unmap

    The ESXi functionality that allows unaligned unmap requests did not account for the fact that the unmap request might occur in a non-blocking context. If the unmap request is unaligned, and the requesting context is non-blocking, it could result in a purple diagnostic screen. Common unaligned unmap requests in non-blocking context typically occur in HBR environments.

    This issue is resolved in this release.

  • PR 2326269: An ESXi host might fail when you power off a virtual machine with a SR-IOV passthrough adapter

    An ESXi host might fail when you power off a virtual machine with a SR-IOV passthrough adapter before it finishes booting.

    This issue is resolved in this release.

  • PR 2361713: Deployment of virtual machines fails with an error Disconnected from virtual machine

    Due to a rare race condition, deployment of virtual machines might fail with an error Disconnected from virtual machine.

    This issue is resolved in this release.

  • PR 2245346: You cannot use third-party products for hardware monitoring

    You might not be able to use third-party products for hardware monitoring, because of a memory leak issue with the openwsmand service.

    This issue is resolved in this release.

  • PR 2398343: When a virtual machine client needs an extra memory reservation, ESXi hosts might fail with a purple diagnostic screen

    When a virtual machine client needs an extra memory reservation, if the ESXi host has no available memory, the host might fail with a purple diagnostic screen. You see a similar backtrace:
    @BlueScreen: #PF Exception 14 in world 57691007:vmm0:LGS-000 IP 0x41802601d987 addr 0x88
    PTEs:0x2b12a6c027;0x21f0480027;0xbfffffffff001;
    0x43935bf9bd48:[0x41802601d987]MemSchedReapSuperflousOverheadInt@vmkernel#nover+0x1b stack: 0x0
    0x43935bf9bd98:[0x41802601daad]MemSchedReapSuperflousOverhead@vmkernel#nover+0x31 stack: 0x4306a812
    0x43935bf9bdc8:[0x41802601dde4]MemSchedGroupAllocAllowed@vmkernel#nover+0x300 stack: 0x4300914eb120
    0x43935bf9be08:[0x41802601e46e]MemSchedGroupSetAllocInt@vmkernel#nover+0x52 stack: 0x17f7c
    0x43935bf9be58:[0x418026020a72]MemSchedManagedKernelGroupSetAllocInt@vmkernel#nover+0xae stack: 0x1
    0x43935bf9beb8:[0x418026025e11]MemSched_ManagedKernelGroupSetAlloc@vmkernel#nover+0x7d stack: 0x1bf
    0x43935bf9bee8:[0x41802602649b]MemSched_ManagedKernelGroupIncAllocMin@vmkernel#nover+0x3f stack: 0x
    0x43935bf9bf28:[0x418025ef2eed]VmAnonUpdateReservedOvhd@vmkernel#nover+0x189 stack: 0x114cea4e1c
    0x43935bf9bfb8:[0x418025eabc29]VMMVMKCall_Call@vmkernel#nover+0x139 stack: 0x418025eab778

    This issue is resolved in this release.

  • PR 2226560: An ESXi host might fail with a purple diagnostic screen due to a spin lock timeout caused by an infinite loop of the memory scheduler

    Concurrent updates might cause an infinite loop of the memory scheduler and lead to a spin lock timeout. As a result, ESXi hosts might fail with a purple diagnostic screen and a similar backtrace:
    0x43917739baa8:[0x418005a154e5]MemSchedUpdateFreeStateInt
    0x43917739bad8:[0x418005a1aefb]MemSched_UpdateFreeState
    0x43917739baf8:[0x418005868590]MemMapFreeAndAccountPages
    0x43917739bb58:[0x418005818f3c]PageCache_Free
    0x43917739bb88:[0x41800586bc62]MemMap_FreePages
    0x43917739bbc8:[0x418005b4362a]MemDistribute_Free
    0x43917739bbe8:[0x4180058f583b]VmMem_FreePageNoBackmap
    0x43917739bc28:[0x418005900637]VmMemCowPShareDone
    0x43917739bc88:[0x418005900cf4]VmMemCowSharePageInt
    0x43917739bd18:[0x418005900fb6]VmMemCowSharePages
    0x43917739bf88:[0x418005901907]VmMemCow_SharePages
    0x43917739bfb8:[0x4180058ab029]VMMVMKCall_Call

    This issue is resolved in this release.

  • PR 2139134: An ESXi host might fail with a purple diagnostic screen while shutting down

    An ESXi host might fail with a purple diagnostic screen due to a race condition in a query by the Multicast Listener Discovery (MLD) version 1 in IPv6 environments. You might see an error message similar to:
    #PF Exception 14 in world 2098376:vmk0-rx-0 IP 0x41802e62abc1 addr 0x40
    ...
    0x451a1b81b9d0:[0x41802e62abc1]mld_set_version@(tcpip4)#+0x161 stack: 0x430e0593c9e8
    0x451a1b81ba20:[0x41802e62bb57]mld_input@(tcpip4)#+0x7fc stack: 0x30
    0x451a1b81bb20:[0x41802e60d7f8]icmp6_input@(tcpip4)#+0xbe1 stack: 0x30
    0x451a1b81bcf0:[0x41802e621d3b]ip6_input@(tcpip4)#+0x770 stack: 0x451a00000000

    This issue is resolved in this release.

  • PR 2292414: When Site Recovery Manager test recovery triggers a vSphere Replication synchronization phase, hostd might become unresponsive

    When you quiesce virtual machines that run Microsoft Windows Server 2008 or later, application-quiesced snapshots are created. The number of possible concurrent snapshots is 32, which might generate a lot of parallel threads to track tasks in the snapshot operations. As a result, the hostd service might become unresponsive.

    This issue is resolved in this release. The fix reduces the maximum number of concurrent snapshots to 8.
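
As referenced in PR 2203838, after applying this patch you can check whether a default SATP claim rule for Lenovo ThinkSystem DE Series Storage Arrays is present on the host. This is only a sketch; the grep filter is an assumption about how the rule entry is labeled:

    esxcli storage nmp satp rule list | grep -i lenovo

The matching entry, if present, should show VMW_SATP_ALUA with the VMW_PSP_RR default path selection policy and the tpgs_on claim option.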
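
As referenced in PR 2167099, the APD detection fix ships disabled. A minimal sketch of checking and then enabling the /Scsi/ExtendAPDCondition advanced option from the ESXi shell:

    esxcfg-advcfg -g /Scsi/ExtendAPDCondition
    esxcfg-advcfg -s 1 /Scsi/ExtendAPDCondition

The first command reports the current value; the second sets the option to 1 to enable the extended APD condition handling.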
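
As referenced in PR 2394252, vmx.reboot.powerCycle is a per-virtual-machine setting. A sketch of the corresponding entry as it would appear in a virtual machine's .vmx configuration file (it can typically also be added through the virtual machine's advanced configuration parameters while the virtual machine is powered off):

    vmx.reboot.powerCycle = "TRUE"

With this parameter set, the next guest-initiated reboot results in a full power cycle of the virtual machine instead of a soft reset, allowing the updated CPUID to be re-enumerated as described above.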

ESXi600-201909402-BG
Patch Category Bugfix
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMware_bootbank_xhci-xhci_1.0-3vmw.600.3.129.14513180
PRs Fixed  2198386
CVE numbers N/A

This patch updates the xhci-xhci VIB to resolve the following issue:

  • PR 2198386: The ESXi VMKernel fails in a legacy USB stack

    Under low memory conditions and after a failure to allocate memory for the endpoint XHCI ring, the endpoint initialization routine tries to use a free element from the XHCI rings cache, if available. The access to the XHCI rings cache is done with an incorrect index, which causes the ESXi VMKernel failure.

    This issue is resolved in this release.

ESXi600-201909403-BG
Patch Category Bugfix
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMware_bootbank_misc-drivers_6.0.0-3.129.14513180
PRs Fixed  N/A
CVE numbers N/A

This patch updates the misc-drivers VIB.

ESXi600-201909404-BG
Patch Category Bugfix
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMware_bootbank_lsi-msgpt3_06.255.12.00-9vmw.600.3.129.14513180
PRs Fixed  2176751
CVE numbers N/A

This patch updates the lsi-msgpt3 VIB to resolve the following issue:

  • PR 2176751: The hostd service might lose connectivity if the lsi_msgpt3 driver delays response to IOCTL calls

    The lsu-lsi-lsi-msgpt3 plug-in might send lots of IOCTL calls to the lsi_msgpt3 driver to get device info. However, if the driver delays the response to any of the calls, the hostd service loses connectivity.

    This issue is resolved in this release.

ESXi600-201909405-BG
Patch Category Bugfix
Patch Severity Moderate
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMware_bootbank_lsu-hp-hpsa-plugin_2.0.0-14vmw.600.3.129.14513180
PRs Fixed  2280793
CVE numbers N/A

This patch updates the lsu-hp-hpsa-plugin VIB to resolve the following issue:

  • PR 2280793: HPE ProLiant Gen9 Smart Array Controllers might not light locator LEDs on the correct disk

    HPE ProLiant Gen9 Smart Array Controllers, such as P440 and P840, might not light the locator LEDs on the correct failed devices.

    This issue is resolved in this release.

ESXi600-201909101-SG
Patch Category Security
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMware_bootbank_vsan_6.0.0-3.125.14292904
  • VMware_bootbank_esx-base_6.0.0-3.125.14475122
  • VMware_bootbank_vsanhealth_6.0.0-3000000.3.0.3.125.14292905
PRs Fixed  1923483, 2153227, 2175314, 2377657, 2379129
CVE numbers N/A

This patch updates esx-base, vsanhealth and vsan VIBs to resolve the following issues:

  • Update to the NTP daemon

    The NTP daemon is updated to version 4.2.8p13.

  • Update to OpenSSL

    The OpenSSL package is updated to version openssl-1.0.2s.

  • Update to the Python library

    The Python third-party library is updated to version 2.7.16.

  • Update to the libxml2 library

    The ESXi userworld libxml2 library is updated to version 2.9.9.

ESXi600-201909102-SG
Patch Category Security
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMware_bootbank_cpu-microcode_6.0.0-3.125.14475122
PRs Fixed  N/A
CVE numbers N/A

This patch updates the cpu-microcode VIB.

ESXi600-201909103-SG
Patch Category Security
Patch Severity Important
Host Reboot Required No
Virtual Machine Migration or Shutdown Required No
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMware_bootbank_esx-ui_1.33.4-14110286
PRs Fixed  N/A
CVE numbers N/A

This patch updates the esx-ui VIB.

ESXi600-201909104-SG
Patch Category Security
Patch Severity Important
Host Reboot Required No
Virtual Machine Migration or Shutdown Required No
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMware_locker_tools-light_6.0.0-3.125.14475122
PRs Fixed  N/A
CVE numbers N/A

This patch updates the tools-light VIB.

          ESXi-6.0.0-20190904001-standard
          Profile Name ESXi-6.0.0-20190904001-standard
          Build For build information, see Patches Contained in this Release.
          Vendor VMware, Inc.
          Release Date September 12, 2019
          Acceptance Level PartnerSupported
          Affected Hardware N/A
          Affected Software N/A
          Affected VIBs
          • VMware_bootbank_vsanhealth_6.0.0-3000000.3.0.3.129.14292922
          • VMware_bootbank_esx-base_6.0.0-3.129.14513180
          • VMware_bootbank_vsan_6.0.0-3.129.14292921
          • VMware_bootbank_xhci-xhci_1.0-3vmw.600.3.129.14513180
          • VMware_bootbank_misc-drivers_6.0.0-3.129.14513180
          • VMware_bootbank_lsi-msgpt3_06.255.12.00-9vmw.600.3.129.14513180
          • VMware_bootbank_lsu-hp-hpsa-plugin_2.0.0-14vmw.600.3.129.14513180
          PRs Fixed 2072964, 2193583, 2195026, 2199258, 2205117, 2212177, 2331370, 2206616, 2222553, 2298989, 2144765, 2246706, 2158355, 2219532, 2145513, 2298926, 2154883, 2292414, 2175314, 2244215, 2087171, 2224239, 2203838, 2186066, 2167099, 2155337, 2197793, 2206216, 2004022, 2166555, 2112682, 2280731, 2034812, 2115943, 2128545, 2156838, 2244863, 2274774, 2320290, 2327967, 2394252, 1863888, 2326269, 2361713, 2245346, 2398343, 2226560, 2139134
          Related CVE numbers N/A
          • This patch updates the following issues:
            • If a virtual machine has snapshots taken by using CBRC and later CBRC is disabled, disk consolidation operations might fail with an error A specified parameter was not correct: spec.deviceChange.device due to a deleted digest file after CBRC was disabled. An alert Virtual machine disks consolidation is needed. is displayed until the issue is resolved. This fix prevents the issue, but it might still exist for virtual machines that have snapshots taken with CBRC enabled and later disabled.

            • In release builds, smartd might generate a lot of debugging and info messages into the syslog service logs.

            • Some critical flash device parameters, including Temperature and Reallocated Sector Count, do not provide threshold values. As a result, the ESXi host smartd daemon might report some warnings.

            • Virtual machines might become unresponsive due to repetitive failures in some third-party device drivers to process commands. You might see the following error when opening the virtual machine console: Error: "Unable to connect to the MKS: Could not connect to pipe \\.\pipe\vmware-authdpipe within retry period". 

            • After taking a successful quiesced snapshot of a Linux virtual machine, the Snapshot Manager might still display the snapshot as not quiesced.

            • Advanced performance charts might stop drawing graphs for some virtual machine statistics after a restart of the hostd service due to a division by zero error in some counters.

            • When an ESXi host removes a PShare Hint from a PShare chain, if the PShare chain is corrupted, the ESXi host might fail with a purple diagnostic screen and an error similar to:
              0x43920bd9bdc0:[0x41800c5930d6]VmMemCow_PShareRemoveHint
              0x43920bd9be00:[0x41800c593172]VmMemCowPFrameRemoveHint
              0x43920bd9be30:[0x41800c594fc8]VmMemCowPShareFn@vmkernel
              0x43920bd9bf80:[0x41800c500ef4]VmAssistantProcessTasks@vmkernel
              0x43920bd9bfe0:[0x41800c6cae05]CpuSched_StartWorld@vmkernel

            • In earlier releases of ESXi, a VMkernel interface could transport three types of traffic: Management, vMotion and Fault Tolerance. Since ESXi 5.1, the traffic types configuration is stored in vmknic tags and not in the esx.conf file as advanced option strings. During conversion of the vmknic tagging configuration from one format to another, in some cases, the wrong traffic type is enabled. As a result, you might see migration of virtual machines by using VMware vMotion to fail, because the vmk0 interface, which contains the default Management traffic type, is being incorrectly tagged.

            • Virtual machines using 3D software might intermittently lose connectivity with a VMX panic error displayed on a blue screen.

            • When you use the software iSCSI adapter, the ESXi host might fail with a purple diagnostic screen due to a race condition.

            • ESXi hosts might not reflect the MAXIMUM TRANSFER LENGTH parameter reported by the SCSI device in the Block Limits VPD page. As a result, I/O commands issued with a transfer size greater than the limit might fail with a similar log:
              2017-01-24T12:09:40.065Z cpu6:1002438588)ScsiDeviceIO: SCSICompleteDeviceCommand:3033: Cmd(0x45a6816299c0) 0x2a, CmdSN 0x19d13f from world 1001390153 to dev "naa.514f0c5d38200035" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

            • Larger configurations might exceed the limited number of locks available, causing the hostd service to fail with an error similar to: hostd Panic: MXUserAllocSerialNumber: too many locks!

            • If you enable the NetFlow network analysis tool to sample every packet on a vSphere Distributed Switch port group, by setting the Sampling rate to 0, the network latency might reach 1000 ms in case flows exceed 1 million.

            • Any guestinfo settings, values provided by the guest virtual machine that are recorded while the virtual machine is running, and designed to persist across power on and off sequences, might be lost after the virtual machine is powered off.

            • DCUI might display junk characters or incorrect manufacturer names of white box servers.

            • Under certain conditions, when a Virtual Network Computing (VNC) connection is interrupted, virtual machines might fail due to dereference of a NULL pointer in the VNC backend.

            • Virtual machines running on different ESXi hosts that use the same VMFS datastore as ProductLocker might get wrong VMware Tools status if a virtual machine on one host mounts the VMware Tools ISO image and other virtual machines cannot access that image. The problem continues as long as one of the virtual machines has the VMware Tools ISO image mounted and prevents others from using the image to compute their VMware Tools version status.

            • BusyBox is updated to version 1.29.3 to resolve security issues with identifier FG-VD-18-127.

            • The vSAN health configuration file might become corrupted, due to no disk quota or if the thread is stopped. When this problem occurs while setting vSAN health configuration, the health service cannot start.

            • If the esxfw module is enabled in your environment and you frequently reconfigure the list of allowed IPs, this might trigger a race condition that causes the ESX host to fail with a purple diagnostic screen.

            • vSphere Virtual Volumes might become unresponsive due to an infinite loop that loads CPUs at 100% if a VASA provider loses binding information from its database. The hostd service might also stop responding. You might see a fatal error message. This fix prevents infinite loops in case of database binding failures.

            • You must manually add the claim rules to an ESXi host for Lenovo ThinkSystem DE Series Storage Arrays.

            • Drives that do not support the Block Limits VPD page 0xb0 might generate event code logs that flood the vmkernel.log.

            • In an ESXi configuration with multiple paths leading to LUNs behind IBM SVC targets, in case of connection loss on active paths and if at the same time the other connected paths are not in a state to service I/Os, the ESXi host might not detect this condition as APD even as no paths are actually available to service I/Os. As a result, I/Os to the device are not fast failed.

            • The esxtop command-line utility might not display an updated value of the queue depth of devices if the corresponding device path queue depth changes.

            • The NIOC scheduler might reset the uplink network device if the uplink is rarely used. The reset is unpredictable.

            • The hostd service cannot support a state when the link speed mode of a physical NIC is UNKNOWN and the link speed is AUTO, and fails.

            • Restarting the hostd service causes reconfiguration of the network interfaces. When this occurs, the ESXi host disconnects from the cluster. This reconfiguration might lead to unresponsive virtual machines.

            • When you delete multiple virtual machines, a virtual machine with a pending reload operation thread might cause the hostd service to crash. This problem can occur when a reload thread does not capture the Managed Object not found exception.

            • In some cases, vSAN takes a long time to mark a disk as degraded, even though the disk reports I/O failures and vSAN has stopped servicing I/Os from that disk.

            • A race condition in the get-set firewall rule operations for a DvFilter might lead to a buffer overflow and corruption of the heap. As a result, an ESXi host might fail with a purple diagnostic screen.

            • The SNMP monitoring tool might report incorrect values while configuring the polling network bandwidth due to improper 64bit counters. The Network Node Manager i (NNMi) relies on the managed device to provide values for the calculation. While configuring the polling network bandwidth, the queries for ifHCOutOctets OID in the ifmib SNMP MIB module return values not as expected. This fix enhances 64bit counts.

            • In very rare cases an error occurs while the TCP connection switches between established and not established states and an ESXi host fails with a purple diagnostic screen.

            • After an upgrade to ESXi600-201711001 or later, you might not be able to set a selected path as the preferred path to a device with the Path Selection policy option set to fixed via the user interface.

            • Virtual machines with hardware version 10 or older, using EFI, and running Windows Server 2016 on AMD processors, might stop responding during a reboot. The issue does not occur if a virtual machine uses BIOS, or its hardware version is 11 or newer, or the guest OS is not Windows, or the processors are Intel.

            • If the hostd service restarts during a long-running quiesced snapshot operation, hostd might automatically run the snapshot Consolidation command to remove redundant disks and improve virtual machine performance. However, the Consolidation command might race with the running quiesced snapshot operation and cause failure of virtual machines.

            • When object creation fails, vSAN might retain a component from the failed create object operation. These components can cause the Cluster Level Object Manager Daemon (CLOMD) to crash during disk rebalancing.

            • If the Maximum Transmission Unit (MTU) size is configured incorrectly and the MTU size of the virtual switch is less than the configured MTU size for the VMkernel port, the vMotion operation might fail. If the failed vMotion operation is immediately followed by a hot-add or a Storage vMotion operation, this causes the ESXi host to fail with a purple diagnostic screen.

            • If a resource group reaches its max memory limit, then starting a new process under it might fail to completely initialize the counters of the performance monitoring utility vmkperf. The partially created vmkperf counters are not properly freed while recovering from the memory limit error. As a result, a server might fail with a purple diagnostic screen due to a timeout.

            • After a microcode update, sometimes it is necessary to re-enumerate the CPUID for virtual machines on an ESXi server. By using the configuration parameter vmx.reboot.powerCycle = TRUE, you can schedule virtual machines for power-cycle when necessary.

            • The ESXi functionality that allows unaligned unmap requests did not account for the fact that the unmap request might occur in a non-blocking context. If the unmap request is unaligned, and the requesting context is non-blocking, it could result in a purple diagnostic screen. Common unaligned unmap requests in non-blocking context typically occur in HBR environments.

            • Due to a rare race condition, deployment of virtual machines might fail with an error Disconnected from virtual machine.

            • You might not be able to use third-party products for hardware monitoring, because of a memory leak issue with the openwsmand service.

            • When a virtual machine client needs an extra memory reservation, if the ESXi host has no available memory, the host might fail with a purple diagnostic screen. You see a similar backtrace:
              @BlueScreen: #PF Exception 14 in world 57691007:vmm0:LGS-000 IP 0x41802601d987 addr 0x88
              PTEs:0x2b12a6c027;0x21f0480027;0xbfffffffff001;
              0x43935bf9bd48:[0x41802601d987]MemSchedReapSuperflousOverheadInt@vmkernel#nover+0x1b stack: 0x0
              0x43935bf9bd98:[0x41802601daad]MemSchedReapSuperflousOverhead@vmkernel#nover+0x31 stack: 0x4306a812
              0x43935bf9bdc8:[0x41802601dde4]MemSchedGroupAllocAllowed@vmkernel#nover+0x300 stack: 0x4300914eb120
              0x43935bf9be08:[0x41802601e46e]MemSchedGroupSetAllocInt@vmkernel#nover+0x52 stack: 0x17f7c
              0x43935bf9be58:[0x418026020a72]MemSchedManagedKernelGroupSetAllocInt@vmkernel#nover+0xae stack: 0x1
              0x43935bf9beb8:[0x418026025e11]MemSched_ManagedKernelGroupSetAlloc@vmkernel#nover+0x7d stack: 0x1bf
              0x43935bf9bee8:[0x41802602649b]MemSched_ManagedKernelGroupIncAllocMin@vmkernel#nover+0x3f stack: 0x
              0x43935bf9bf28:[0x418025ef2eed]VmAnonUpdateReservedOvhd@vmkernel#nover+0x189 stack: 0x114cea4e1c
              0x43935bf9bfb8:[0x418025eabc29]VMMVMKCall_Call@vmkernel#nover+0x139 stack: 0x418025eab778

            • Concurrent updates might cause an infinite loop of the memory scheduler and lead to a spin lock timeout. As a result, ESXi hosts might fail with a purple diagnostic screen and a similar backtrace:
              0x43917739baa8:[0x418005a154e5]MemSchedUpdateFreeStateInt
              0x43917739bad8:[0x418005a1aefb]MemSched_UpdateFreeState
              0x43917739baf8:[0x418005868590]MemMapFreeAndAccountPages
              0x43917739bb58:[0x418005818f3c]PageCache_Free
              0x43917739bb88:[0x41800586bc62]MemMap_FreePages
              0x43917739bbc8:[0x418005b4362a]MemDistribute_Free
              0x43917739bbe8:[0x4180058f583b]VmMem_FreePageNoBackmap
              0x43917739bc28:[0x418005900637]VmMemCowPShareDone
              0x43917739bc88:[0x418005900cf4]VmMemCowSharePageInt
              0x43917739bd18:[0x418005900fb6]VmMemCowSharePages
              0x43917739bf88:[0x418005901907]VmMemCow_SharePages
              0x43917739bfb8:[0x4180058ab029]VMMVMKCall_Call

            • An ESXi host might fail with a purple diagnostic screen due to a race condition in a query by the Multicast Listener Discovery (MLD) version 1 in IPv6 environments. You might see an error message similar to:
              #PF Exception 14 in world 2098376:vmk0-rx-0 IP 0x41802e62abc1 addr 0x40
              ...
              0x451a1b81b9d0:[0x41802e62abc1]mld_set_version@(tcpip4)#+0x161 stack: 0x430e0593c9e8
              0x451a1b81ba20:[0x41802e62bb57]mld_input@(tcpip4)#+0x7fc stack: 0x30
              0x451a1b81bb20:[0x41802e60d7f8]icmp6_input@(tcpip4)#+0xbe1 stack: 0x30
              0x451a1b81bcf0:[0x41802e621d3b]ip6_input@(tcpip4)#+0x770 stack: 0x451a00000000

            • An ESXi host might fail when you power off a virtual machine with a SR-IOV passthrough adapter before it finishes booting.

            • Under low memory conditions and after a failure to allocate memory for the endpoint XHCI ring, the endpoint initialization routine tries to use a free element from the XHCI rings cache, if available. The access to the XHCI rings cache is done with an incorrect index, which causes the ESXi VMKernel failure.

            • The lsu-lsi-lsi-msgpt3 plug-in might send lots of IOCTL calls to the lsi_msgpt3 driver to get device info. However, if the driver delays the response to any of the calls, the hostd service loses connectivity.

            • HPE ProLiant Gen9 Smart Array Controllers, such as P440 and P840, might not light the locator LEDs on the correct failed devices.

            • When you quiesce virtual machines that run Microsoft Windows Server 2008 or later, application-quiesced snapshots are created. The number of possible concurrent snapshots is 32, which might generate a lot of parallel threads to track tasks in the snapshot operations. As a result, the hostd service might become unresponsive.

          ESXi-6.0.0-20190904001-no-tools
          Profile Name ESXi-6.0.0-20190904001-no-tools
          Build For build information, see Patches Contained in this Release.
          Vendor VMware, Inc.
          Release Date September 12, 2019
          Acceptance Level PartnerSupported
          Affected Hardware N/A
          Affected Software N/A
          Affected VIBs
          • VMware_bootbank_vsanhealth_6.0.0-3000000.3.0.3.129.14292922
          • VMware_bootbank_esx-base_6.0.0-3.129.14513180
          • VMware_bootbank_vsan_6.0.0-3.129.14292921
          • VMware_bootbank_xhci-xhci_1.0-3vmw.600.3.129.14513180
          • VMware_bootbank_misc-drivers_6.0.0-3.129.14513180
          • VMware_bootbank_lsi-msgpt3_06.255.12.00-9vmw.600.3.129.14513180
          • VMware_bootbank_lsu-hp-hpsa-plugin_2.0.0-14vmw.600.3.129.14513180
          PRs Fixed 2072964, 2193583, 2195026, 2199258, 2205117, 2212177, 2331370, 2206616, 2222553, 2298989, 2144765, 2246706, 2158355, 2219532, 2145513, 2298926, 2154883, 2175314, 2244215, 2292414, 2087171, 2224239, 2203838, 2186066, 2167099, 2155337, 2197793, 2206216, 2004022, 2166555, 2112682, 2280731, 2034812, 2115943, 2128545, 2156838, 2244863, 2274774, 2320290, 2327967, 2394252, 1863888, 2326269, 2361713, 2245346, 2398343, 2226560, 2139134
          Related CVE numbers N/A
          • This patch updates the following issues:
            • If a virtual machine has snapshots taken by using CBRC and later CBRC is disabled, disk consolidation operations might fail with an error A specified parameter was not correct: spec.deviceChange.device due to a deleted digest file after CBRC was disabled. An alert Virtual machine disks consolidation is needed. is displayed until the issue is resolved. This fix prevents the issue, but it might still exist for virtual machines that have snapshots taken with CBRC enabled and later disabled.

            • In release builds, smartd might generate a lot of debugging and info messages into the syslog service logs.

            • Some critical flash device parameters, including Temperature and Reallocated Sector Count, do not provide threshold values. As a result, the ESXi host smartd daemon might report some warnings.

            • Virtual machines might become unresponsive due to repetitive failures in some third-party device drivers to process commands. You might see the following error when opening the virtual machine console: Error: "Unable to connect to the MKS: Could not connect to pipe \\.\pipe\vmware-authdpipe within retry period". 

            • After taking a successful quiesced snapshot of a Linux virtual machine, the Snapshot Manager might still display the snapshot as not quiesced.

            • Advanced performance charts might stop drawing graphs for some virtual machine statistics after a restart of the hostd service due to a division by zero error in some counters.

            • When an ESXi host removes a PShare Hint from a PShare chain, if the PShare chain is corrupted, the ESXi host might fail with a purple diagnostic screen and an error similar to:
              0x43920bd9bdc0:[0x41800c5930d6]VmMemCow_PShareRemoveHint
              0x43920bd9be00:[0x41800c593172]VmMemCowPFrameRemoveHint
              0x43920bd9be30:[0x41800c594fc8]VmMemCowPShareFn@vmkernel
              0x43920bd9bf80:[0x41800c500ef4]VmAssistantProcessTasks@vmkernel
              0x43920bd9bfe0:[0x41800c6cae05]CpuSched_StartWorld@vmkernel

            • In earlier releases of ESXi, a VMkernel interface could transport three types of traffic: Management, vMotion and Fault Tolerance. Since ESXi 5.1, the traffic types configuration is stored in vmknic tags and not in the esx.conf file as advanced option strings. During conversion of the vmknic tagging configuration from one format to another, in some cases, the wrong traffic type is enabled. As a result, you might see migration of virtual machines by using VMware vMotion to fail, because the vmk0 interface, which contains the default Management traffic type, is being incorrectly tagged.

            • Virtual machines using 3D software might intermittently lose connectivity with a VMX panic error displayed on a blue screen.

            • When you use the software iSCSI adapter, the ESXi host might fail with a purple diagnostic screen due to a race condition.

            • ESXi hosts might not reflect the MAXIMUM TRANSFER LENGTH parameter reported by the SCSI device in the Block Limits VPD page. As a result, I/O commands issued with a transfer size greater than the limit might fail with a similar log:
              2017-01-24T12:09:40.065Z cpu6:1002438588)ScsiDeviceIO: SCSICompleteDeviceCommand:3033: Cmd(0x45a6816299c0) 0x2a, CmdSN 0x19d13f from world 1001390153 to dev "naa.514f0c5d38200035" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

            • Larger configurations might exceed the limited number of locks available, causing the hostd service to fail with an error similar to: hostd Panic: MXUserAllocSerialNumber: too many locks!

            • If you enable the NetFlow network analysis tool to sample every packet on a vSphere Distributed Switch port group, by setting the Sampling rate to 0, the network latency might reach 1000 ms in case flows exceed 1 million.

            • Any guestinfo settings, values provided by the guest virtual machine that are recorded while the virtual machine is running, and designed to persist across power on and off sequences, might be lost after the virtual machine is powered off.

            • DCUI might display junk characters or incorrect manufacturer names of white box servers.

            • Under certain conditions, when a Virtual Network Computing (VNC) connection is interrupted, virtual machines might fail due to dereference of a NULL pointer in the VNC backend.

            • Virtual machines running on different ESXi hosts that use the same VMFS datastore as ProductLocker might get wrong VMware Tools status if a virtual machine on one host mounts the VMware Tools ISO image and other virtual machines cannot access that image. The problem continues as long as one of the virtual machines has the VMware Tools ISO image mounted and prevents others from using the image to compute their VMware Tools version status.

            • BusyBox is updated to version 1.29.3 to resolve security issues with identifier FG-VD-18-127.

            • The vSAN health configuration file might become corrupted, due to no disk quota or if the thread is stopped. When this problem occurs while setting vSAN health configuration, the health service cannot start.

            • If the esxfw module is enabled in your environment and you frequently reconfigure the list of allowed IPs, this might trigger a race condition that causes the ESX host to fail with a purple diagnostic screen.

            • vSphere Virtual Volumes might become unresponsive due to an infinite loop that loads CPUs at 100% if a VASA provider loses binding information from its database. The hostd service might also stop responding. You might see a fatal error message. This fix prevents infinite loops in case of database binding failures.

            • You must manually add the claim rules to an ESXi host for Lenovo ThinkSystem DE Series Storage Arrays.

            • Drives that do not support the Block Limits VPD page 0xb0 might generate event code logs that flood the vmkernel.log.

            • In an ESXi configuration with multiple paths leading to LUNs behind IBM SVC targets, in case of connection loss on active paths and if at the same time the other connected paths are not in a state to service I/Os, the ESXi host might not detect this condition as APD even as no paths are actually available to service I/Os. As a result, I/Os to the device are not fast failed.

            • The esxtop command-line utility might not display an updated value of the queue depth of devices if the corresponding device path queue depth changes.

            • The NIOC scheduler might reset the uplink network device if the uplink is rarely used. The reset is unpredictable.

            • The hostd service cannot support a state when the link speed mode of a physical NIC is UNKNOWN and the link speed is AUTO, and fails.

            • Restarting the hostd service causes reconfiguration of the network interfaces. When this occurs, the ESXi host disconnects from the cluster. This reconfiguration might lead to unresponsive virtual machines.

            • When you delete multiple virtual machines, a virtual machine with a pending reload operation thread might cause the hostd service to crash. This problem can occur when a reload thread does not capture the Managed Object not found exception.

            • In some cases, vSAN takes a long time to mark a disk as degraded, even though the disk reports I/O failures and vSAN has stopped servicing I/Os from that disk.

            • A race condition in the get-set firewall rule operations for a DvFilter might lead to a buffer overflow and corruption of the heap. As a result, an ESXi host might fail with a purple diagnostic screen.

            • The SNMP monitoring tool might report incorrect values while configuring the polling network bandwidth due to improper 64bit counters. The Network Node Manager i (NNMi) relies on the managed device to provide values for the calculation. While configuring the polling network bandwidth, the queries for ifHCOutOctets OID in the ifmib SNMP MIB module return values not as expected. This fix enhances 64bit counts.

            • In very rare cases an error occurs while the TCP connection switches between established and not established states and an ESXi host fails with a purple diagnostic screen.

            • After an upgrade to ESXi600-201711001 or later, you might not be able to set a selected path as the preferred path to a device with the Path Selection policy option set to fixed via the user interface.

            • Virtual machines with hardware version 10 or older, using EFI, and running Windows Server 2016 on AMD processors, might stop responding during a reboot. The issue does not occur if a virtual machine uses BIOS, or its hardware version is 11 or newer, or the guest OS is not Windows, or the processors are Intel.

            • If the hostd service restarts during a long-running quiesced snapshot operation, hostd might automatically run the snapshot Consolidation command to remove redundant disks and improve virtual machine performance. However, the Consolidation command might race with the running quiesced snapshot operation and cause failure of virtual machines.

            • When object creation fails, vSAN might retain a component from the failed create object operation. These components can cause the Cluster Level Object Manager Daemon (CLOMD) to crash during disk rebalancing.

            • If the Maximum Transmission Unit (MTU) size is configured incorrectly and the MTU size of the virtual switch is less than the configured MTU size for the VMkernel port, the vMotion operation might fail. If the failed vMotion operation is immediately followed by a hot-add or a Storage vMotion operation, this causes the ESXi host to fail with a purple diagnostic screen.
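              One way to spot such a mismatch before migration is to compare the MTU of the virtual switch with the MTU of the VMkernel port, and to test the path with don't-fragment packets. A minimal sketch, using a placeholder destination address and a payload sized for a 9000-byte MTU:
              # MTU configured on standard virtual switches
              esxcli network vswitch standard list
              # MTU configured on VMkernel interfaces
              esxcli network ip interface list
              # Sketch: test the path end to end with the don't-fragment bit set
              vmkping -d -s 8972 192.0.2.10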

            • If a resource group reaches its maximum memory limit, starting a new process under it might fail to completely initialize the counters of the performance monitoring utility vmkperf. The partially created vmkperf counters are not properly freed during recovery from the memory limit error. As a result, the ESXi host might fail with a purple diagnostic screen due to a timeout.

            • After a microcode update, it is sometimes necessary to re-enumerate the CPUID for virtual machines on an ESXi host. By using the configuration parameter vmx.reboot.powerCycle = TRUE, you can schedule virtual machines for a power cycle when necessary.
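              The parameter is set per virtual machine, either through the advanced configuration options in the vSphere Client or directly in the .vmx file while the virtual machine is powered off. A minimal sketch of the .vmx entry:
              vmx.reboot.powerCycle = "TRUE"
              With this parameter set, the next guest-initiated reboot is turned into a full power cycle, so the virtual machine re-enumerates CPUID at the following power-on.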

            • The ESXi functionality that allows unaligned unmap requests does not account for the fact that an unmap request might occur in a non-blocking context. If the unmap request is unaligned and the requesting context is non-blocking, the ESXi host might fail with a purple diagnostic screen. Unaligned unmap requests in a non-blocking context typically occur in host-based replication (HBR) environments.

            • Due to a rare race condition, deployment of virtual machines might fail with an error Disconnected from virtual machine.

            • You might not be able to use third-party products for hardware monitoring, because of a memory leak issue with the openwsmand service.

            • When a virtual machine client needs an extra memory reservation and the ESXi host has no available memory, the host might fail with a purple diagnostic screen. You might see a backtrace similar to:
              @BlueScreen: #PF Exception 14 in world 57691007:vmm0:LGS-000 IP 0x41802601d987 addr 0x88
              PTEs:0x2b12a6c027;0x21f0480027;0xbfffffffff001;
              0x43935bf9bd48:[0x41802601d987]MemSchedReapSuperflousOverheadInt@vmkernel#nover+0x1b stack: 0x0
              0x43935bf9bd98:[0x41802601daad]MemSchedReapSuperflousOverhead@vmkernel#nover+0x31 stack: 0x4306a812
              0x43935bf9bdc8:[0x41802601dde4]MemSchedGroupAllocAllowed@vmkernel#nover+0x300 stack: 0x4300914eb120
              0x43935bf9be08:[0x41802601e46e]MemSchedGroupSetAllocInt@vmkernel#nover+0x52 stack: 0x17f7c
              0x43935bf9be58:[0x418026020a72]MemSchedManagedKernelGroupSetAllocInt@vmkernel#nover+0xae stack: 0x1
              0x43935bf9beb8:[0x418026025e11]MemSched_ManagedKernelGroupSetAlloc@vmkernel#nover+0x7d stack: 0x1bf
              0x43935bf9bee8:[0x41802602649b]MemSched_ManagedKernelGroupIncAllocMin@vmkernel#nover+0x3f stack: 0x
              0x43935bf9bf28:[0x418025ef2eed]VmAnonUpdateReservedOvhd@vmkernel#nover+0x189 stack: 0x114cea4e1c
              0x43935bf9bfb8:[0x418025eabc29]VMMVMKCall_Call@vmkernel#nover+0x139 stack: 0x418025eab778

            • Concurrent updates might cause an infinite loop in the memory scheduler and lead to a spinlock timeout. As a result, ESXi hosts might fail with a purple diagnostic screen and a backtrace similar to:
              0x43917739baa8:[0x418005a154e5]MemSchedUpdateFreeStateInt
              0x43917739bad8:[0x418005a1aefb]MemSched_UpdateFreeState
              0x43917739baf8:[0x418005868590]MemMapFreeAndAccountPages
              0x43917739bb58:[0x418005818f3c]PageCache_Free
              0x43917739bb88:[0x41800586bc62]MemMap_FreePages
              0x43917739bbc8:[0x418005b4362a]MemDistribute_Free
              0x43917739bbe8:[0x4180058f583b]VmMem_FreePageNoBackmap
              0x43917739bc28:[0x418005900637]VmMemCowPShareDone
              0x43917739bc88:[0x418005900cf4]VmMemCowSharePageInt
              0x43917739bd18:[0x418005900fb6]VmMemCowSharePages
              0x43917739bf88:[0x418005901907]VmMemCow_SharePages
              0x43917739bfb8:[0x4180058ab029]VMMVMKCall_Call

            • An ESXi host might fail with a purple diagnostic screen due to a race condition in a Multicast Listener Discovery (MLD) version 1 query in IPv6 environments. You might see an error message similar to:
              #PF Exception 14 in world 2098376:vmk0-rx-0 IP 0x41802e62abc1 addr 0x40
              ...
              0x451a1b81b9d0:[0x41802e62abc1]mld_set_version@(tcpip4)#+0x161 stack: 0x430e0593c9e8
              0x451a1b81ba20:[0x41802e62bb57]mld_input@(tcpip4)#+0x7fc stack: 0x30
              0x451a1b81bb20:[0x41802e60d7f8]icmp6_input@(tcpip4)#+0xbe1 stack: 0x30
              0x451a1b81bcf0:[0x41802e621d3b]ip6_input@(tcpip4)#+0x770 stack: 0x451a00000000

            • An ESXi host might fail when you power off a virtual machine with an SR-IOV passthrough adapter before the virtual machine finishes booting.

            • Under low-memory conditions, after a failure to allocate memory for the endpoint XHCI ring, the endpoint initialization routine tries to use a free element from the XHCI rings cache, if available. However, the XHCI rings cache is accessed with an incorrect index, which causes the ESXi VMkernel to fail.

            • The lsu-lsi-lsi-msgpt3 plug-in might send a large number of IOCTL calls to the lsi_msgpt3 driver to get device information. If the driver delays the response to any of these calls, the hostd service loses connectivity.

            • HPE ProLiant Gen9 Smart Array Controllers, such as P440 and P840, might not light the locator LEDs on the correct failed devices.

            • When you quiesce virtual machines that run Microsoft Windows Server 2008 or later, application-quiesced snapshots are created. Up to 32 concurrent snapshots are possible, which might generate many parallel threads to track tasks in the snapshot operations. As a result, the hostd service might become unresponsive.

          ESXi-6.0.0-20190901001s-standard
          Profile Name ESXi-6.0.0-20190901001s-standard
          Build For build information, see Patches Contained in this Release.
          Vendor VMware, Inc.
          Release Date September 12, 2019
          Acceptance Level PartnerSupported
          Affected Hardware N/A
          Affected Software N/A
          Affected VIBs
          • VMware_bootbank_vsan_6.0.0-3.125.14292904
          • VMware_bootbank_esx-base_6.0.0-3.125.14475122
          • VMware_bootbank_vsanhealth_6.0.0-3000000.3.0.3.125.14292905
          • VMware_bootbank_cpu-microcode_6.0.0-3.125.14475122
          • VMware_bootbank_esx-ui_1.33.4-14110286
          • VMware_locker_tools-light_6.0.0-3.125.14475122
          PRs Fixed 1923483, 2153227, 2175314, 2377657, 2379129
          Related CVE numbers N/A
          • This patch includes the following updates:
            • The NTP daemon is updated to version 4.2.8p13.

            • The OpenSSL package is updated to version openssl-1.0.2s.

            • The Python third-party library is updated to version 2.7.16.

            • The ESXi userworld libxml2 library is updated to version 2.9.9.
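            The updated userworld components can be spot-checked from the ESXi shell after the host is patched. A minimal sketch, assuming the openssl and python binaries are available in the shell:
            # Sketch: verify the updated component versions
            openssl version    # expected to report OpenSSL 1.0.2s
            python -V          # expected to report Python 2.7.16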

          ESXi-6.0.0-20190901001s-no-tools
          Profile Name ESXi-6.0.0-20190901001s-no-tools
          Build For build information, see Patches Contained in this Release.
          Vendor VMware, Inc.
          Release Date September 12, 2019
          Acceptance Level PartnerSupported
          Affected Hardware N/A
          Affected Software N/A
          Affected VIBs
          • VMware_bootbank_vsan_6.0.0-3.125.14292904
          • VMware_bootbank_esx-base_6.0.0-3.125.14475122
          • VMware_bootbank_vsanhealth_6.0.0-3000000.3.0.3.125.14292905
          • VMware_bootbank_cpu-microcode_6.0.0-3.125.14475122
          • VMware_bootbank_esx-ui_1.33.4-14110286
          • VMware_locker_tools-light_6.0.0-3.125.14475122
          PRs Fixed 1923483, 2153227, 2175314, 2377657, 2379129
          Related CVE numbers N/A
          • This patch includes the following updates:
            • The NTP daemon is updated to version 4.2.8p13.

            • The OpenSSL package is updated to version openssl-1.0.2s.

            • The Python third-party library is updated to version 2.7.16.

            • The ESXi userworld libxml2 library is updated to version 2.9.9.

          Known Issues

          The known issues are grouped as follows.

          vMotion Issues
          • An ESXi host might fail with a purple diagnostic screen during migration of clustered virtual machines by using vSphere vMotion

            An ESXi host might fail with a purple diagnostic screen during migration of clustered virtual machines by using vSphere vMotion. The issue affects virtual machines in clusters that contain shared non-RDM disks, such as VMDK or vSphere Virtual Volumes disks, in physical bus sharing mode.

            Workaround: Check your configuration. Do not use physical bus sharing mode for VMDK. For more information, see the Setup for Failover Clustering and Microsoft Cluster Service guide.
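            One way to check whether a clustered virtual machine uses physical bus sharing is to inspect the SCSI controller sharing setting, either in the virtual machine settings in the vSphere Client or in its .vmx file. A minimal sketch, using a placeholder datastore path; the controller number varies per virtual machine:
            # Sketch: path and controller number are placeholders
            grep -i sharedBus "/vmfs/volumes/datastore1/example-vm/example-vm.vmx"
            # An affected clustered configuration shows, for example:
            # scsi1.sharedBus = "physical"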

          Known Issues from Earlier Releases

          To view a list of previous known issues, click here.