
ESXi 6.7 Update 2 | APR 11 2019 | ISO Build 13006603

What's in the Release Notes

The release notes cover the following topics:

What's New

  • Solarflare native driver: ESXi 6.7 Update 2 adds Solarflare native driver (sfvmk) support for Solarflare 10G and 40G network adapter devices, such as SFN8542 and SFN8522.
  • Virtual Hardware Version 15: ESXi 6.7 Update 2 introduces Virtual Hardware Version 15, which adds support for creating virtual machines with up to 256 virtual CPUs. For more information, see VMware knowledge base articles 1003746 and 2007240.
  • Standalone ESXCLI command package: ESXi 6.7 Update 2 provides a new Standalone ESXCLI package for Linux, separate from the vSphere Command Line Interface (vSphere CLI) installation package. The ESXCLI that is part of the vSphere CLI is not updated for ESXi 6.7 Update 2. Although the vSphere CLI installation package is deprecated for this release, it is still available for download; however, you must not install it together with the new Standalone ESXCLI for Linux package. For information about downloading and installing the Standalone ESXCLI package, see VMware {code}.
  • In ESXi 6.7 Update 2, the Side-Channel-Aware Scheduler is updated to enhance the compute performance for ESXi hosts that are mitigated for speculative execution hardware vulnerabilities. For more information, see VMware knowledge base article 55806.
  • ESXi 6.7 Update 2 adds support for VMFS6 automatic unmap processing on storage arrays and devices that report to ESXi hosts an unmap granularity value greater than 1 MB. On arrays that report a granularity of 1 MB or less, the unmap operation is supported if the granularity is a factor of 1 MB.
  • ESXi 6.7 Update 2 adds VMFS6 to the list of file systems supported by the vSphere On-disk Metadata Analyzer (VOMA), so that you can check and fix issues with VMFS volume metadata, LVM metadata, and partition table inconsistencies. See the example command after this item.
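    For example, a minimal VOMA check of the VMFS metadata on a datastore's backing device might look like the following sketch. The device name and partition number are placeholders that you must replace with the values from your environment, and VOMA is intended to be run against datastores that are not actively in use:

    voma -m vmfs -f check -d /vmfs/devices/disks/<naa.id>:1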

Earlier Releases of ESXi 6.7

Features and known issues of ESXi are described in the release notes for each release. Release notes for earlier releases of ESXi 6.7 are:

For internationalization, compatibility, installation and upgrade, and open source components, see the VMware vSphere 6.7 Release Notes.

Product Support Notices

  • VMware vSphere Flash Read Cache is being deprecated. While this feature continues to be supported in the vSphere 6.7 generation, it will be discontinued in a future vSphere release. As an alternative, you can use the vSAN caching mechanism or any VMware certified third-party I/O acceleration software listed in the VMware Compatibility Guide.

Upgrade Notes for This Release

For more information on ESXi versions that support upgrade to ESXi 6.7 Update 2, see VMware knowledge base article 67077.

Patches Contained in this Release

This release contains all bulletins for ESXi that were released prior to the release date of this product.

Build Details

Download Filename: update-from-esxi6.7-6.7_update02.zip
Build: 13006603
       12986307 (Security-only)
Download Size: 453.0 MB
md5sum: afb6642c5f78212683d9d9a1e9dde6bc
sha1checksum: 783fc0a2a457677e69758969dadc6421e5a7c17a
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes

Bulletins

This release contains general and security-only bulletins. Security-only bulletins are applicable to new security fixes only. No new bug fixes are included, but bug fixes from earlier patch and update releases are included.
If you require all new security and bug fixes, you must apply all bulletins in this release. In some cases, the general release bulletin supersedes the security-only bulletin. This is not a problem, because the general release bulletin contains both the new security fixes and the bug fixes.
The security-only bulletins are identified by bulletin IDs that end in "SG". For information on patch and update classification, see KB 2014447.
For more information about the individual bulletins, see the My VMware page and the Resolved Issues section.

Bulletin ID Category Severity
ESXi670-201904201-UG Bugfix Critical
ESXi670-201904202-UG Bugfix Important
ESXi670-201904203-UG Bugfix Important
ESXi670-201904204-UG Bugfix Important
ESXi670-201904205-UG Bugfix Important
ESXi670-201904206-UG Bugfix Important
ESXi670-201904207-UG Bugfix Moderate
ESXi670-201904208-UG Bugfix Important
ESXi670-201904209-UG Enhancement Important
ESXi670-201904210-UG Enhancement Important
ESXi670-201904211-UG Bugfix Critical
ESXi670-201904212-UG Bugfix Important
ESXi670-201904213-UG Bugfix Important
ESXi670-201904214-UG Bugfix Important
ESXi670-201904215-UG Bugfix Moderate
ESXi670-201904216-UG Bugfix Important
ESXi670-201904217-UG Bugfix Important
ESXi670-201904218-UG Bugfix Important
ESXi670-201904219-UG Bugfix Important
ESXi670-201904220-UG Bugfix Important
ESXi670-201904221-UG Bugfix Important
ESXi670-201904222-UG Bugfix Moderate
ESXi670-201904223-UG Bugfix Important
ESXi670-201904224-UG Bugfix Important
ESXi670-201904225-UG Enhancement Important
ESXi670-201904226-UG Bugfix Important
ESXi670-201904227-UG Bugfix Important
ESXi670-201904228-UG Bugfix Important
ESXi670-201904229-UG Bugfix Important
ESXi670-201904101-SG Security Important
ESXi670-201904102-SG Security Important
ESXi670-201904103-SG Security Important

IMPORTANT: For clusters using VMware vSAN, you must first upgrade the vCenter Server system. Upgrading only ESXi is not supported.
Before an upgrade, always verify compatible upgrade paths from earlier versions of ESXi and vCenter Server to the current version in the VMware Product Interoperability Matrix.

Image Profiles

VMware patch and update releases contain general and critical image profiles.

Applying the general release image profile installs the new bug fixes.

Image Profile Name
ESXi-6.7.0-20190402001-standard
ESXi-6.7.0-20190402001-no-tools
ESXi-6.7.0-20190401001s-standard
ESXi-6.7.0-20190401001s-no-tools

Patch Download and Installation

The typical way to apply patches to ESXi hosts is through the VMware vSphere Update Manager. For details, see the Installing and Administering VMware vSphere Update Manager documentation.

ESXi hosts can also be updated by manually downloading the patch ZIP file from the VMware download page and installing the VIBs by using the esxcli software vib command. Additionally, the system can be updated by using the image profile and the esxcli software profile command, as shown in the example below.
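For example, assuming the offline bundle from this release has been copied to a datastore on the host, an image profile update might look like the following sketch. The datastore path is a placeholder; the bundle file name and image profile name come from the Build Details and Image Profiles sections of these release notes.

    esxcli software sources profile list -d /vmfs/volumes/<datastore>/update-from-esxi6.7-6.7_update02.zip
    esxcli software profile update -d /vmfs/volumes/<datastore>/update-from-esxi6.7-6.7_update02.zip -p ESXi-6.7.0-20190402001-standard

The first command lists the image profiles contained in the bundle; the second applies the standard image profile. A host reboot is required after the update.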

For more information, see the vSphere Command-Line Interface Concepts and Examples and the vSphere Upgrade Guide.

Resolved Issues

The resolved issues are grouped as follows.

ESXi670-201904201-UG
Patch Category Bugfix
Patch Severity Critical
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMware_bootbank_vsanhealth_6.7.0-2.48.12775454
  • VMware_bootbank_vsan_6.7.0-2.48.12775451
  • VMware_bootbank_esx-base_6.7.0-2.48.13006603
  • VMware_bootbank_esx-update_6.7.0-2.48.13006603
PRs Fixed  2227123, 2250472, 2273186, 2282052, 2267506, 2256771, 2269616, 2250697, 2193516, 2203431, 2264996, 2267158, 2231707, 2191342, 2197795, 2221935, 2193830, 2210364, 2177107, 2179263, 2185906, 2204508, 2211350, 2204204, 2226565, 2240385, 2163397, 2238134, 2213883, 2141221, 2240444, 2249713, 2184068, 2257480, 2235031, 2257354, 2212140, 2279897, 2227623, 2277648, 2221778, 2287560, 2268541, 2203837, 2236531, 2242656, 2250653, 2260748, 2256001, 2266762, 2263849, 2267698, 2271653, 2232538, 2210056, 2271322, 2272737, 2286563, 2178119, 2223879, 2221256, 2258272, 2219661, 2268826, 2224767, 2257590, 2241917, 2246891, 2269308
CVE numbers N/A

This patch updates the esx-base, esx-update, vsan and vsanhealth VIBs to resolve the following issues:

  • PR 2227123: Very large values of counters for read or write disk latency might trigger alarms

    You might intermittently see alarms for very large values of counters for read or write disk latency, such as datastore.totalReadLatency and datastore.totalWriteLatency.

    This issue is resolved in this release.

  • PR 2250472: Linux virtual machines might stop responding while powering on due to some memory size configurations

    When a Linux virtual machine is configured with a specific memory size, such as 2052 MB or 2060 MB, it might display a blank screen instead of powering on.

    This issue is resolved in this release.

  • PR 2273186: An ESXi host might fail when you enable a vSphere Distributed Switch health check

    When you set up the network and enable vSphere Distributed Switch health check to perform configuration checks, the ESXi host might fail with a purple diagnostic screen.

    This issue is resolved in this release.

  • PR 2282052: The SNMP agent might deliver an incorrect trap vmwVmPoweredOn

    The SNMP agent might deliver an incorrect trap vmwVmPoweredOn when a virtual machine is selected under the Summary tab and the Snapshot Manager tab in the vSphere Web Client.

    This issue is resolved in this release.

  • PR 2267506: An ESXi host might fail with purple diagnostic screen at DVFilter level

    The DVFilter might receive unexpected or corrupt values in the shared memory ring buffer, causing the internal function to return NULL. If this NULL value is not handled gracefully, an ESXi host might fail with purple diagnostic screen at DVFilter level.

    This issue is resolved in this release.

  • PR 2256771: You might see an unnecessary event message for network connectivity restored on virtual switch when powering on or off virtual machines

    In the Events tab of the vSphere Client or the vSphere Web Client, when you power on or off virtual machines, you might see events similar to Network connectivity restored on virtual switch XXX, portgroups: XXX. Physical NIC vmnicX is up. When checking uplink status, the system report might include port groups that are not affected by the virtual machine powering on or off operation, which triggers the event. However, the event does not indicate a problem or prompt an action.

    This issue is resolved in this release. The event message is removed.

  • PR 2250697: Windows Server Failover Cluster validation might fail if you configure Virtual Volumes with a Round Robin path policy

    If during the Windows Server Failover Cluster setup you change the default path policy from Fixed or Most Recently Used to Round Robin, the I/O of the cluster might fail and the cluster might stop responding.

    This issue is resolved in this release.

  • PR 2193516: Host Profile remediation might end with deleting a virtual network adapter from the VMware vSphere Distributed Switch

    If you extract a Host Profile from an ESXi host with disabled IPv6 support, and the IPv4 default gateway is overridden while remediating that Host Profile, you might see a message that a virtual network adapter, such as DSwitch0, is to be created on the vSphere Distributed Switch (VDS), but the adapter is actually removed from the VDS. This is due to a "::" value of the ipV6DefaultGateway parameter.

    This issue is resolved in this release.

  • PR 2203431: Some guest virtual machines might report their identity as an empty string

    Some guest virtual machines might report their identity as an empty string if the guest is running a later version of VMware Tools than the ESXi host was released with, or if the host cannot recognize the guestID parameter of the GuestInfo object. The issue affects mainly virtual machines with CentOS and VMware Photon OS.

    This issue is resolved in this release.

  • PR 2264996: You see repetitive messages in the vCenter Web Client prompting you to delete host sub specifications nsx

    You might see repetitive messages in the vCenter Web Client, similar to Delete host sub specification nsx if you use network booting for ESXi hosts, due to some dummy code in hostd.

    This issue is resolved in this release.

  • PR 2231707: The virtual machine might stop responding if the AutoInstalls ISO image file is mounted

    The installation of VMware Tools unmounts an ISO image file whose name ends with autoinst.iso if such an image is already mounted to the virtual machine. To mount the VMware Tools ISO image file, you must click Install VMware Tools again. As a result, the virtual machine might stop responding.

    This issue is resolved in this release.

  • PR 2267158: ESXi hosts might lose connectivity to the vCenter Server system due to failure of the vpxa agent service

    ESXi hosts might lose connectivity to the vCenter Server system due to a failure of the vpxa agent service, caused by an invalid property update generated by the PropertyCollector object. A race condition in hostd leads to a malformed sequence of property update notification events that causes the invalid property update.

    This issue is resolved in this release.

  • PR 2191342: The SMART disk monitoring daemon, smartd, might flood the syslog service logs of release builds with debugging and info messages

    In release builds, the smartd daemon might write a large number of debugging and info messages to the syslog service logs.

    This issue is resolved in this release. The fix removes debug messages from release builds.

  • PR 2197795: The scheduler of the VMware vSphere Network I/O Control (NIOC) might intermittently reset the uplink network device

    The NIOC scheduler might reset the uplink network device if the uplink is rarely used. The reset is unpredictable.

    This issue is resolved in this release.

  • PR 2221935: ESXi hosts might intermittently fail if you disable global IPv6 addresses

    ESXi hosts might intermittently fail if you disable global IPv6 addresses, because a code path still uses IPv6.

    This issue is resolved in this release. If you already face the issue, re-enable global IPv6 addresses to avoid host failures. If you need to disable IPv6 for some reason, disable it on individual vmknics rather than globally, as in the sketch below.
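    For illustration only, re-enabling global IPv6 from the ESXi shell might look like the following; this is an assumption based on common esxcli usage, the exact option name can differ between ESXi versions, and a host reboot is typically required for the change to take effect:

    esxcli network ip set --ipv6-enabled=true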

  • PR 2193830: The virtual machine executable process might fail and shut down virtual machines during a reboot of the guest OS

    Due to a race condition, the virtual machine executable process might fail and shut down virtual machines during a reboot of the guest OS.

    This issue is resolved in this release.

  • PR 2210364: The ESXi daemon hostd might become unresponsive while waiting for an external process to finish

    Hostd might become unresponsive while waiting for an external process initiated with the esxcfg-syslog command to finish.

    This issue is resolved in this release.

  • PR 2177107: A soft lockup of physical CPUs might cause an ESXi host to fail with a purple diagnostic screen

    A large number of I/Os timing out on a heavily loaded system might cause a soft lockup of physical CPUs. As a result, an ESXi host might fail with a purple diagnostic screen.

    This issue is resolved in this release.

  • PR 2179263: Virtual machines with a virtual SATA CD-ROM might fail due to invalid commands

    Invalid commands to a virtual SATA CD-ROM might trigger errors and increase the memory usage of virtual machines. This might lead to a failure of virtual machines if they are unable to allocate memory. You might see the following logs for invalid commands:

    YYYY-MM-DDTHH:MM:SS.mmmZ| vcpu-1| I125: AHCI-USER: Invalid size in command
    YYYY-MM-DDTHH:MM:SS.mmmZ| vcpu-1| I125: AHCI-VMM: sata0:0: Halted the port due to command abort.

    And a similar panic message:
    YYYY-MM-DDTHH:MM:SS.mmmZ| vcpu-1| E105: PANIC: Unrecoverable memory allocation failure

    This issue is resolved in this release.

  • PR 2185906: Migration of virtual machines by using VMware vSphere vMotion might fail with a NamespaceDb compatibility error if Guest Introspection service is on

    If the Guest Introspection service in a vSphere 6.7 environment with more than 150 virtual machines is active, migration of virtual machines by using vSphere vMotion might fail with an error in the vSphere Web Client similar to The source detected that the destination failed to resume.
    The destination vmware.log contains error messages similar to:

    2018-07-18T02:41:32.035Z| vmx| I125: MigrateSetState: Transitioning from state 11 to 12.
    2018-07-18T02:41:32.035Z| vmx| I125: Migrate: Caching migration error message list:
    2018-07-18T02:41:32.035Z| vmx| I125: [msg.checkpoint.migration.failedReceive] Failed to receive migration.
    2018-07-18T02:41:32.035Z| vmx| I125: [msg.namespaceDb.badVersion] Incompatible version -1 (expect 2).
    2018-07-18T02:41:32.035Z| vmx| I125: [msg.checkpoint.mrestoregroup.failed] An error occurred restoring the virtual machine state during migration.

    The vmkernel log contains error messages such as:

    2018-07-18T02:32:43.011Z cpu5:66134)WARNING: Heap: 3534: Heap fcntlHeap-1 already at its maximum size. Cannot expand.
    2018-07-18T02:41:35.613Z cpu2:66134)WARNING: Heap: 4169: Heap_Align(fcntlHeap-1, 200/200 bytes, 8 align) failed. caller: 0x41800aaca9a3

    This issue is resolved in this release.

  • PR 2204508: The advanced performance chart might stop displaying data after a hostd restart

    You can edit the statistical data to be collected only when the virtual machine is in a particular power state. For example, you cannot edit the memory data when the virtual machine is powered on. When you restart the hostd service, the virtual machine might not change its power state, which leads to false data initialization. The calculation of the host consumed memory counter might end up with division by zero and the chart might stop drawing graphs.

    This issue is resolved in this release.

  • PR 2211350: A virtual machine port might be blocked after a VMware vSphere High Availability (vSphere HA) failover

    By default, the Isolation Response feature in an ESXi host with enabled vSphere HA is disabled. When the Isolation Response feature is enabled, a virtual machine port, which is connected to an NSX-T logical switch, might be blocked after a vSphere HA failover.

    This issue is resolved in this release. 

  • PR 2204204: e1000 vNICs might intermittently drop unrecognized GRE packets

    e1000 vNICs might intermittently drop unrecognized GRE packets due to a parsing issue.

    This issue is resolved in this release.

  • PR 2226565: Booting ESXi on Apple hardware might fail with an error that the Multiboot buffer is too small

    While booting an Apple system, the boot process might stop with the following error message:
    Shutting down firmware services...
    Multiboot buffer is too small.
    Unrecoverable error

    The issue affects only certain Apple firmware versions.

    This issue is resolved in this release.

  • PR 2240385: Virtual machines might fail during a long-running quiesced snapshot operation

    If hostd restarts during a long-running quiesced snapshot operation, hostd might automatically run the snapshot Consolidation command to remove redundant disks and improve virtual machine performance. However, the Consolidation command might race with the running quiesced snapshot operation and cause failure of virtual machines.

    This issue is resolved in this release.

  • PR 2163397: The SNMP agent might display D_Failed battery status

    The SNMP agent might display the status of all batteries as D_Failed when you run the snmpwalk command. This issue occurs because the battery status is not checked properly and compact sensors with sensor-specific codes are misinterpreted.

    This issue is resolved in this release.

  • PR 2238134: When you use an IP hash or source MAC hash teaming policy, the packets might drop or go through the wrong uplink

    When you use an IP hash or source MAC hash teaming policy, some packets from the packet list might use a different uplink. As a result, some of them might drop or not be sent out through the uplink determined by the teaming policy.

    This issue is resolved in this release.

  • PR 2213883: An ESXi host becomes unresponsive when it disconnects from an NFS datastore that is configured for logging

    If an NFS datastore is configured as a syslog datastore and if the ESXi host disconnects from it, logging to the datastore stops and the ESXi host might become unresponsive.

    This issue is resolved in this release.

  • PR 2141221: An ESXi host might fail during replication of virtual machines by using VMware vSphere Replication with VMware Site Recovery Manager

    When you replicate virtual machines by using vSphere Replication with Site Recovery Manager, the ESXi host might fail with a purple diagnostic screen immediately or within 24 hours. You might see a similar error: PANIC bora/vmkernel/main/dlmalloc.c:4924 - Usage error in dlmalloc.

    This issue is resolved in this release.

  • PR 2240444: The hostd daemon might fail with an error due to too many locks

    Larger configurations might exceed the limit for the number of locks and hostd starts failing with an error similar to:
    hostd Panic: MXUserAllocSerialNumber: too many locks!

    This issue is resolved in this release. The fix removes the limit for the number of locks.

  • PR 2249713: The esxcli network plug-in command might return a false value as a result

    The esxcli network command $esxcli.network.nic.coalesce.set.Invoke might return a false value as a result even though the value is correctly set. This might impact your automation scripts.

    This issue is resolved in this release.

  • PR 2184068: An ESXi host might fail with a purple diagnostic screen when accessing a corrupted VMFS datastore

    When a virtual machine tries to access a VMFS datastore with a corrupted directory, the ESXi host might fail with a purple diagnostic screen and a DirEntry corruption error message.

    This issue is resolved in this release.

  • PR 2257480: The LACPPDU Actor key value might change to 0

    The LACPPDU Actor key might change its value to 0 during link status flapping.

    This issue is resolved in this release.

  • PR 2235031: An ESXi host becomes unresponsive and you see warnings for reached maximum heap size in the vmkernel.log

    Due to a timing issue in the VMkernel, buffers might not be flushed and the heap gets exhausted. As a result, services such as hostd, vpxa, and vmsyslogd might not be able to write logs on the ESXi host, and the host becomes unresponsive. In the /var/log/vmkernel.log, you might see a warning similar to: WARNING: Heap: 3571: Heap vfat already at its maximum size. Cannot expand.

    This issue is resolved in this release.

  • PR 2257354: Some virtual machines might become invalid when trying to install or update VMware Tools

    If you have modified the type of a virtual CD-ROM device of a virtual machine in a previous version of ESXi, after you update to a later version, when you try to install or update VMware Tools, the virtual machine might be terminated and marked as invalid.

    This issue is resolved in this release.

  • PR 2212140: Renewing a host certificate might not push the full chain of trust to the ESXi host

    When you renew a certificate, only the first certificate from the supplied chain of trust might be stored on the ESXi host. Any intermediate CA certificates are truncated. Because of the missing certificates, the chain to the root CA cannot be built. This leads to a warning for an untrusted connection.

    This issue is resolved in this release.
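    For illustration, one way to inspect the certificate currently stored on a host is to examine it with openssl from the ESXi shell. The path below is the default location of the host certificate and is stated as an assumption:

    openssl x509 -noout -subject -issuer -in /etc/vmware/ssl/rui.crt

    If the issuer shown is an intermediate CA that clients do not already trust, the truncated chain described above is a likely cause of the untrusted connection warning.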

  • PR 2279897: Creating a snapshot of a virtual machine might fail due to a null VvolId parameter

    If a vSphere API for Storage Awareness provider modifies the vSphere Virtual Volumes policy unattended, a null VvolId parameter might update the vSphere Virtual Volumes metadata. This results in a VASA call with a null VvolId parameter and a failure when creating a virtual machine snapshot.

    This issue is resolved in this release. The fix handles the policy modification failure and prevents the null VvolId parameter.

  • PR 2227623: Parallel cloning of multiple virtual machines on a vSphere Virtual Volumes datastore might fail with an error message for failed file creation

    If a call from a vSphere API for Storage Awareness provider fails due to all connections to the virtual provider being busy, operations for parallel cloning of multiple virtual machines on a vSphere Virtual Volumes datastore might become unresponsive or fail with an error message similar to Cannot complete file creation operation.

    This issue is resolved in this release.

  • PR 2277648: An ESXi host might fail with a purple diagnostic screen displaying a sbflush_internal panic message

    An ESXi host might fail with a purple diagnostic screen displaying a sbflush_internal panic message due to some discrepancies in the internal statistics.

    This issue is resolved in this release. The fix converts the panic message into an assert.

  • PR 2221778: You cannot migrate virtual machines by using vSphere vMotion between ESXi hosts with NSX managed virtual distributed switches (N-VDS) and vSphere Standard Switches

    With ESXi 6.7 Update 2, you can migrate virtual machines by using vSphere vMotion between ESXi hosts with N-VDS and vSphere Standard Switches. To enable the feature, you must upgrade your vCenter Server system to vCenter Server 6.7 Update 2 and your ESXi hosts to ESXi 6.7 Update 2 on both the source and destination sites.

    This issue is resolved in this release.

  • PR 2287560: ESXi hosts might fail with a purple diagnostic screen due to a page fault exception

    ESXi hosts might fail with a purple diagnostic screen due to a page fault exception. The error comes from a missing flag that could trigger cache eviction on an object that no longer exists, resulting in a NULL pointer dereference.

    This issue is resolved in this release.

  • PR 2268541: You might see redundant VOB messages in the vCenter Server system, such as Lost uplink redundancy on virtual switch "xxx"

    When checking the uplink status, the teaming policy might not check if the uplink belongs to a port group and the affected port group might not be correct. As a result, you might see redundant VOB messages in the /var/run/log/vobd.log file, such as:

    Lost uplink redundancy on virtual switch "xxx". Physical NIC vmnicX is down. Affected portgroups:"xxx".

    This issue is resolved in this release.

  • PR 2203837: Claim rules must be manually added to an ESXi host

    You must manually add the claim rules to an ESXi host for Lenovo ThinkSystem DE Series Storage Arrays.

    This issue is resolved in this release. This fix sets Storage Array Type Plugin (SATP) to VMW_SATP_ALUA, Path Selection Policy (PSP) to VMW_PSP_RR, and Claim Options to tpgs_on as default for Lenovo ThinkSystem DE Series Storage Arrays.
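    For reference, a claim rule equivalent to this default could previously be added manually with a command along the lines of the following sketch. The vendor and model strings are placeholders that must match the values the array actually reports; this is an illustration, not the exact rule shipped with the fix:

    esxcli storage nmp satp rule add -s VMW_SATP_ALUA -P VMW_PSP_RR -c tpgs_on -V "<vendor>" -M "<model>" -e "Lenovo ThinkSystem DE Series"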

  • PR 2236531: An ESXi host might get disconnected from a vCenter Server system

    The vpxa service might fail due to excessive memory usage when iterating on a large directory tree. As a result, an ESXi host might get disconnected from a vCenter Server system.

    This issue is resolved in this release.

  • PR 2242656: Updates with ESXCLI commands might fail when SecureBoot is enabled

    The commands esxcli software profile update or esxcli software profile install might fail on hosts running ESXi 6.7 Update 1 with enabled SecureBoot. You might see error messages such as Failed to setup upgrade using esx-update VIB and Failed to mount tardisk in ramdisk: [Errno 1] Operation not permitted.

    This issue is resolved in this release. If you already face the issue, run the esxcli software vib update command to update your system to ESXi 6.7 Update 2. You can also disable SecureBoot, run the ESXCLI commands, and then re-enable SecureBoot.
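    For example, the workaround update with the offline bundle might look like the following sketch; the datastore path is a placeholder:

    esxcli software vib update -d /vmfs/volumes/<datastore>/update-from-esxi6.7-6.7_update02.zip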

  • PR 2250653: NFS volume mount might not persist after reboot of an ESXi host

    NFS volume mount might not persist after a reboot of an ESXi host due to intermittent failure in resolving the host name of the NFS server.

    This issue is resolved in this release. The fix adds retry logic if a host name resolution fails.

  • PR 2260748: You might intermittently lose connectivity while using the Route Based on Physical NIC Load option in the vSphere Distributed Switch

    This issue happens if you create a VMkernel or vNIC port by using the Route Based on Physical NIC Load option when the physical NIC assigned to that port group is temporarily down. In such cases, the port team uplink is not set because there are no active uplinks in the port group. When the physical NIC becomes active again, the teaming code fails to update the port data, resulting in packet loss.

    This issue is resolved in this release.

  • PR 2256001: Host fails with Bucketlist_LowerBound or PLOGRelogRetireLsns in the backtrace

    A race condition can occur when Plog Relog runs along with DecommissionMD, causing Relog to access freed Plog device state tables. This problem can cause a host failure with a purple diagnostic screen.

    This issue is resolved in this release.

  • PR 2266762: You might see many warnings for unsupported physical block sizes in the vmkernel logs

    If you use devices with a physical block size other than 4096 or 512 bytes, in the vmkernel logs you might see many warnings similar to:
    ScsiPath: 4395: The Physical block size "8192" reported by the path vmhba3:C0:T1:L0 is not supported.

    This issue is resolved in this release. The fix converts the warnings into debug log messages.

  • PR 2263849: Operations on stretched cluster failed with name '_VSAN_VER3' is not defined

    After configuring or upgrading a vSAN stretched cluster, all cluster level requests might fail. You might see the following error message:
    name '_VSAN_VER3' is not defined.

    This issue is resolved in this release.

  • PR 2267698: vSAN performance charts do not show VMDK metrics properly

    If the virtual machine folder is created using a name instead of UUID, the vSAN performance charts do not show the following VMDK metrics properly:
    - IOPS and IOPS Limit
    - Delayed Normalized IOPS

    This issue is resolved in this release.

  • PR 2271653: You might see wrong reports for dropped packets when using IOChain and the Large Receive Offload (LRO) feature is enabled

    In Host > Monitor > Performance > Physical Adapters > pNIC vSwitch Port Drop Rate you might see a counter showing dropped packets, while no packets are dropped. This is because the IOChain framework treats the reduced number of packets, when using LRO, as packet drops.

    This issue is resolved in this release.

  • PR 2232538: An ESXi host might fail while powering on a virtual machine with a name that starts with “SRIOV”

    An ESXi host might fail with a purple diagnostic screen while powering on a virtual machine. This happens if the virtual machine name starts with “SRIOV” and has an uplink connected to a vSphere Distributed Switch.

    This issue is resolved in this release.

  • PR 2210056: If SNMP disables dynamic rulesets, but they are active in a Host Profile, you might see an unexpected compliance error

    In a rare race condition, when SNMP disables dynamic rulesets, but they are active in a Host Profile, you might see an unexpected compliance error such as Ruleset dynamicruleset not found.

    This issue is resolved in this release.

  • PR 2271322: Recovery of a virtual machine running Windows 2008 or later by using vSphere Replication with application quiescing enabled might result in a corrupt replica

    Recovery of a virtual machine running Windows 2008 or later by using vSphere Replication with application quiescing enabled might result in a corrupt replica. Corruption only happens when application quiescing is enabled. If VMware Tools on the virtual machine is not configured for application quiescing, vSphere Replication uses a lower level of consistency, such as file system quiescing, and replicas are not corrupted.

    This issue is resolved in this release.

  • PR 2272737: When you perform CIM requests larger than 2 MB, the SFCB service might fail on an ESXi host

    When you perform CIM requests larger than 2 MB, such as a firmware download, the CIM service might fail on an ESXi host due to running out of stack space, and an SFCB zdump file appears in /var/core.

    This issue is resolved in this release.

  • PR 2286563: The number of syslog.log archive files generated might be less than the configured default and different between ESXi versions 6.0 and 6.7

    The number of syslog.log archive files generated might be one less than the default value set by using the syslog.global.defaultRotate parameter. The number of syslog.log archive files might also be different between ESXi versions 6.0 and 6.7. For instance, if syslog.global.defaultRotate is set to 8 by default, ESXi 6.0 creates syslog.0.gz to syslog.7.gz, while ESXi 6.7 creates syslog.0.gz to syslog.6.gz.

    This issue is resolved in this release.
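    For reference, the rotation count mentioned above can be inspected and adjusted from the ESXi shell; the value 8 below is only an illustration:

    esxcli system syslog config get
    esxcli system syslog config set --default-rotate=8
    esxcli system syslog reload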

  • PR 2178119: A large value of the advanced config option /Misc/MCEMonitorInterval might cause ESXi hosts to become unresponsive

    If you set the advanced config option /Misc/MCEMonitorInterval to a very large value, ESXi hosts might become unresponsive in around 10 days of uptime, due to a timer overflow.

    This issue is resolved in this release.

  • PR 2223879: An ESXi host might fail with a purple diagnostic screen and an error for a possible deadlock

    The P2MCache lock is a spin lock, and slow CPU performance while the lock is held might cause starvation of other CPUs. As a result, an ESXi host might fail with a purple diagnostic screen and a Spin count exceeded - possible deadlock, PVSCSIDoIOComplet, NFSAsyncIOComplete error.

    This issue is resolved in this release. 

  • PR 2221256: An ESXi host might fail with a purple diagnostic screen during a Storage vMotion operation between NFS datastores

    An ESXi host might fail with a purple diagnostic screen during a Storage vMotion operation between NFS datastores, due to a race condition in the request completion path. The race condition happens under heavy I/O load in the NFS stack.

    This issue is resolved in this release.

  • PR 2258272: ESXi host with vSAN enabled fails due to race condition in decommission workflow

    During disk decommission operation, a race condition in the decommission workflow might cause the ESXi host to fail with a purple diagnostic screen.

    This issue is resolved in this release.

  • PR 2219661: An ESXi host might fail with a purple diagnostic screen if RSS is enabled on a physical NIC

    An ESXi host might fail with a purple diagnostic screen if RSS is enabled on a physical NIC. You might see a similar backtrace:
    yyyy-mm-dd cpu9:2097316)0x451a8521bc70:[0x41801a68ad30]RSSPlugCleanupRSSEngine@(lb_netqueue_bal)# +0x7d stack: 0x4305e4c1a8e0, 0x41801a68af2b, 0x430db8c6b188, 0x4305e4c1a8e0, 0x0
    yyyy-mm-dd cpu9:2097316)0x451a8521bc90:[0x41801a68af2a]RSSPlugInitRSSEngine@(lb_netqueue_bal)# +0x127 stack: 0x0, 0x20c49ba5e353f7cf, 0x4305e4c1a930, 0x4305e4c1a800, 0x4305e4c1a9f0
    yyyy-mm-dd cpu9:2097316)0x451a8521bcd0:[0x41801a68b21c]RSSPlug_PreBalanceWork@(lb_netqueue_bal)# +0x1cd stack: 0x32, 0x32, 0x0, 0xe0, 0x4305e4c1a800
    yyyy-mm-dd cpu9:2097316)0x451a8521bd30:[0x41801a687752]Lb_PreBalanceWork@(lb_netqueue_bal)# +0x21f stack: 0x4305e4c1a800, 0xff, 0x0, 0x43057f38ea00, 0x4305e4c1a800
    yyyy-mm-dd cpu9:2097316)0x451a8521bd80:[0x418019c13b18]UplinkNetqueueBal_BalanceCB@vmkernel#nover+0x6f1 stack: 0x4305e4bc6088, 0x4305e4c1a840, 0x4305e4c1a800, 0x4305e435fb30, 0x4305e4bc6088
    yyyy-mm-dd cpu9:2097316)0x451a8521bf00:[0x418019cd1c89]UplinkAsyncProcessCallsHelperCB@vmkernel#nover+0x116 stack: 0x430749e60870, 0x430196737070, 0x451a85223000, 0x418019ae832b, 0x430749e60088
    yyyy-mm-dd cpu9:2097316)0x451a8521bf30:[0x418019ae832a]HelperQueueFunc@vmkernel#nover+0x157 stack: 0x430749e600b8, 0x430749e600a8, 0x430749e600e0, 0x451a85223000, 0x430749e600b8
    yyyy-mm-dd cpu9:2097316)0x451a8521bfe0:[0x418019d081f2]CpuSched_StartWorld@vmkernel#nover+0x77 stack: 0x0, 0x0, 0x0, 0x0, 0x0

    This issue is resolved in this release. The fix cleans data in the RSS engine to avoid overloading of the dedicated heap of the NetQueue balancer.

  • PR 2268826: An ESXi host might fail with a purple diagnostic screen when the VMware APIs for Storage Awareness (VASA) provider sends a rebind request to switch the protocol endpoint for a vSphere Virtual Volume

    When the VASA provider sends a rebind request to an ESXi host to switch the binding for a particular vSphere Virtual Volume, the ESXi host switches the protocol endpoint and other resources to change the binding without any I/O disruption. During this operation, the ESXi host might fail with a purple diagnostic screen.

    This issue is resolved in this release.

  • PR 2224767: You might see false hardware health alarms from Intelligent Platform Management Interface (IPMI) sensors

    Some IPMI sensors might intermittently change from green to red and the reverse, which creates false hardware health alarms.

    This issue is resolved in this release. With this fix, you can use the advanced option command esxcfg-advcfg -s {sensor ID},{sensor ID} /UserVars/HardwareHealthIgnoredSensors to ignore hardware health alarms from selected sensors.
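    For example, to ignore two sensors with hypothetical IDs 52 and 53, the command given above would look like the following; the IDs are placeholders for the sensor IDs reported in your environment:

    esxcfg-advcfg -s 52,53 /UserVars/HardwareHealthIgnoredSensors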

  • PR 2257590: An ESXi host might fail with a memory admission error if you install an async 32-bit iSCSI Management API (IMA) plug-in

    If you install an async 32-bit IMA plug-in and there are already installed 32-bit IMA plug-ins, the system might try to load these plug-ins and check if they are loaded from the QLogic independent iSCSI adapter. During this process the ESXi host might fail with a memory admission error.

    This issue is resolved in this release. This fix loads each 32-bit IMA plug-in only from the QLogic adapter.

  • PR 2241917: A reprotect operation might fail with an error for vSphere Replication synchronization

    If you run a reprotect operation by using the Site Recovery Manager, the operation might fail if a virtual machine is late with the initial synchronization. You might see an error similar to:
    VR synchronization failed for VRM group {…}. Operation timed out: 10800 seconds

    This issue is resolved in this release.

  • PR 2246891: An I/O error TASK_SET_FULL on a secondary LUN might slow down the I/O rate on all secondary LUNs behind the protocol endpoint of Virtual Volumes on HPE 3PAR storage if I/O throttling is enabled

    When I/O throttling is enabled on a protocol endpoint of Virtual Volumes on HPE 3PAR storage and if an I/O on a secondary LUN fails with an error TASK_SET_FULL, the I/O rate on all secondary LUNs that are associated with the protocol endpoint slows down.

    This issue is resolved in this release. With this fix, you can enable I/O throttling on individual Virtual Volumes to avoid the slowdown of all secondary LUNs behind the protocol endpoint if the TASK_SET_FULL error occurs.

  • PR 2269308: ESXi hosts with visibility to RDM LUNs might take a long time to start or during LUN rescan

    A large number of RDM LUNs might cause an ESXi host to take a long time to start or to perform a LUN rescan. If you use APIs such as MarkPerenniallyReserved or MarkPerenniallyReservedEx, you can mark a specific LUN as perennially reserved, which improves the start time and rescan time of the ESXi hosts, as shown in the example below.

    This issue is resolved in this release.
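    For illustration, the perennially reserved flag can also be set per device from the ESXi shell; the device identifier below is a placeholder:

    esxcli storage core device setconfig -d <naa.id> --perennially-reserved=true
    esxcli storage core device list -d <naa.id>

    The second command lets you verify the perennially reserved state of the device.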

ESXi670-201904202-UG
Patch Category Bugfix
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMW_bootbank_i40en_1.3.1-23vmw.670.2.48.13006603
PRs Fixed  N/A
CVE numbers N/A

This patch updates the i40en VIB.

ESXi670-201904203-UG
Patch Category Bugfix
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMW_bootbank_igbn_0.1.1.0-4vmw.670.2.48.13006603
PRs Fixed  N/A
CVE numbers N/A

This patch updates the igbn VIB.

ESXi670-201904204-UG
Patch Category Bugfix
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMW_bootbank_ixgben_1.4.1-18vmw.670.2.48.13006603
PRs Fixed  N/A
CVE numbers N/A

This patch updates the ixgben VIB.

ESXi670-201904205-UG
Patch Category Bugfix
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMW_bootbank_ne1000_0.8.4-2vmw.670.2.48.13006603
PRs Fixed  N/A
CVE numbers N/A

This patch updates the ne1000 VIB.

ESXi670-201904206-UG
Patch Category Bugfix
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMW_bootbank_vmkusb_0.1-1vmw.670.2.48.13006603
PRs Fixed  N/A
CVE numbers N/A

This patch updates the vmkusb VIB.

ESXi670-201904207-UG
Patch Category Bugfix
Patch Severity Moderate
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMware_bootbank_qlnativefc_3.1.8.0-4vmw.670.2.48.13006603
PRs Fixed  2228529
CVE numbers N/A

This patch updates the qlnativefc VIB.

  • Update to the qlnativefc driver

    The qlnativefc driver is updated to version 3.1.8.0.

ESXi670-201904208-UG
Patch Category Bugfix
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMW_bootbank_nfnic_4.0.0.17-0vmw.670.2.48.13006603
PRs Fixed  2231435
CVE numbers N/A

This patch updates the nfnic VIB.

  • Update to the nfnic driver

    The nfnic driver is updated to version 4.0.0.17.

ESXi670-201904209-UG
Patch Category Enhancement
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMW_bootbank_brcmfcoe_11.4.1078.19-12vmw.670.2.48.13006603
PRs Fixed  N/A
CVE numbers N/A

This patch updates the brcmfcoe VIB.

ESXi670-201904210-UG
Patch Category Enhancement
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMW_bootbank_lpfc_11.4.33.18-12vmw.670.2.48.13006603
PRs Fixed  N/A
CVE numbers N/A

This patch updates the lpfc VIB.

ESXi670-201904211-UG
Patch Category Bugfix
Patch Severity Critical
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMware_bootbank_elx-esx-libelxima.so_11.4.1184.1-2.48.13006603
PRs Fixed  2226688
CVE numbers N/A

This patch updates the elx-esx-libelxima.so VIB to resolve the following issue:

  • PR 2226688: Emulex driver logs might fill up the /var file system

    Emulex drivers might write logs to /var/log/EMU/mili/mili2d.log and fill up the 40 MB /var file system of the RAM drive.

    This issue is resolved in this release. The fix redirects Emulex driver log writes to /scratch/log/ instead of /var/log/.

ESXi670-201904212-UG
Patch Category Bugfix
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required No
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMW_bootbank_misc-drivers_6.7.0-2.48.13006603
PRs Fixed  N/A
CVE numbers N/A

This patch updates the misc-drivers VIB.

ESXi670-201904213-UG
Patch Category Bugfix
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required No
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMware_bootbank_lsu-lsi-lsi-msgpt3-plugin_1.0.0-9vmw.670.2.48.13006603
PRs Fixed  N/A
CVE numbers N/A

This patch updates the lsu-lsi-lsi-msgpt3-plugin VIB.

ESXi670-201904214-UG
Patch Category Bugfix
Patch Severity Important
Host Reboot Required No
Virtual Machine Migration or Shutdown Required No
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMW_bootbank_sfvmk_1.0.0.1003-6vmw.670.2.48.13006603
PRs Fixed  2234599
CVE numbers N/A

This patch updates the sfvmk VIB.

  • ESXi 6.7 Update 2 adds Solarflare native driver (sfvmk) support for Solarflare 10G and 40G network adapter devices, such as SFN8542 and SFN8522.

ESXi670-201904215-UG
Patch Category Bugfix
Patch Severity Moderate
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMW_bootbank_lsi-msgpt2_20.00.05.00-1vmw.670.2.48.13006603
PRs Fixed  2256555
CVE numbers N/A

This patch updates the lsi-msgpt2 VIB.

  • Update to the lsi-msgpt2 driver

    The lsi-msgpt2 driver is updated to version 20.00.05.00 to address task management enhancements and critical bug fixes.

ESXi670-201904216-UG
Patch Category Bugfix
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMW_bootbank_nmlx4-core_3.17.13.1-1vmw.670.2.48.13006603
PRs Fixed  N/A
CVE numbers N/A

This patch updates the nmlx4-core VIB.

ESXi670-201904217-UG
Patch Category Bugfix
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMW_bootbank_nmlx4-en_3.17.13.1-1vmw.670.2.48.13006603
PRs Fixed  N/A
CVE numbers N/A

This patch updates the nmlx4-en VIB.

ESXi670-201904218-UG
Patch Category Bugfix
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMW_bootbank_nmlx4-rdma_3.17.13.1-1vmw.670.2.48.13006603
PRs Fixed  N/A
CVE numbers N/A

This patch updates the nmlx4-rdma VIB.

ESXi670-201904219-UG
Patch Category Bugfix
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMW_bootbank_nmlx5-core_4.17.13.1-1vmw.670.2.48.13006603
PRs Fixed  N/A
CVE numbers N/A

This patch updates the nmlx5-core VIB.

ESXi670-201904220-UG
Patch Category Bugfix
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMW_bootbank_nmlx5-rdma_4.17.13.1-1vmw.670.2.48.13006603
PRs Fixed  N/A
CVE numbers N/A

This patch updates the nmlx5-rdma VIB.

ESXi670-201904221-UG
Patch Category Bugfix
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMW_bootbank_bnxtnet_20.6.101.7-21vmw.670.2.48.13006603
PRs Fixed  N/A
CVE numbers N/A

This patch updates the bnxtnet VIB.

ESXi670-201904222-UG
Patch Category Bugfix
Patch Severity Moderate
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required No
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMware_bootbank_lsu-lsi-drivers-plugin_1.0.0-1vmw.670.2.48.13006603
PRs Fixed  2242441
CVE numbers N/A

This patch updates the lsu-lsi-drivers-plugin VIB to resolve the following issue:

  • PR 2242441: You might not be able to locate and turn on the LED for disks under lsi-msgpt35 controllers

    You might not be able to locate and turn on the LED for disks under an lsi-msgpt35 controller.

    This issue is resolved in this release. The fix introduces a new lsu-lsi-drivers-plugin that works with both async and inbox lsi-msgpt35 drivers.

ESXi670-201904223-UG
Patch Category Bugfix
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMW_bootbank_nvme_1.2.2.27-1vmw.670.2.48.13006603
PRs Fixed  2270098
CVE numbers N/A

This patch updates the nvme VIB to resolve the following issues:

  • PR 2270098: ESXi hosts might fail with a purple diagnostic screen when using Intel NVMe devices

    If a split I/O request is cancelled in the inbox nvme driver, the command completion path might unlock the queue lock due to an incorrect check of the command type. This might cause ESXi hosts to fail with a purple diagnostic screen.

    This issue is resolved in this release.

  • PR 2270098: You cannot see NVMe devices with a doorbell stride different from 0h in ESXi hosts

    The VMware inbox nvme driver supports only a doorbell stride value of 0h, so you cannot use any other value or see NVMe devices with a doorbell stride different from 0h.

    This issue is resolved in this release.

ESXi670-201904224-UG
Patch Category Bugfix
Patch Severity Important
Host Reboot Required No
Virtual Machine Migration or Shutdown Required No
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.36-2.48.13006603
PRs Fixed  2270098
CVE numbers N/A

This patch updates the vmware-esx-esxcli-nvme-plugin VIB to resolve the following issue:

  • PR 2270098: You might see incorrect NVMe device information by running the esxcfg-scsidevs -a command

    For some Dell-branded NVMe devices, and for Marvell NR2241, you might see incorrect NVMe device information by running the esxcfg-scsidevs -a command or by looking at the interface.

    This issue is resolved in this release.

ESXi670-201904225-UG
Patch Category Enhancement
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMW_bootbank_lsi-mr3_7.708.07.00-2vmw.670.2.48.13006603
PRs Fixed  2272901
CVE numbers N/A

This patch updates the lsi-mr3 VIB.

  • Upgrade to the Broadcom lsi-mr3 driver

    The Broadcom lsi-mr3 driver is upgraded to version MR 7.8.

ESXi670-201904226-UG
Patch Category Bugfix
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMW_bootbank_lsi-msgpt35_09.00.00.00-1vmw.670.2.48.13006603
PRs Fixed  2275290
CVE numbers N/A

This patch updates the lsi-msgpt35 VIB.

  • Update to the lsi-msgpt35 driver

    The lsi-msgpt35 driver is updated to version 09.00.00.00-1vmw.

ESXi670-201904227-UG
Patch Category Bugfix
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMW_bootbank_lsi-msgpt3_17.00.01.00-3vmw.670.2.48.13006603
PRs Fixed  2275290
CVE numbers N/A

This patch updates the lsi-msgpt3 VIB.

  • Update to the lsi-msgpt3 driver

    The lsi-msgpt3 driver is updated to version 17.00.01.00-3vmw.

                                ESXi670-201904228-UG
                                Patch Category Bugfix
                                Patch Severity Important
                                Host Reboot Required Yes
                                Virtual Machine Migration or Shutdown Required Yes
                                Affected Hardware N/A
                                Affected Software N/A
                                VIBs Included
                                • VMW_bootbank_net-vmxnet3_1.1.3.0-3vmw.670.2.48.13006603
                                PRs Fixed  N/A
                                CVE numbers N/A

                                This patch updates the net-vmxnet3 VIB.

                                  ESXi670-201904229-UG
                                  Patch Category Bugfix
                                  Patch Severity Important
                                  Host Reboot Required Yes
                                  Virtual Machine Migration or Shutdown Required Yes
                                  Affected Hardware N/A
                                  Affected Software N/A
                                  VIBs Included
                                  • VMware_bootbank_native-misc-drivers_6.7.0-2.48.13006603
                                  PRs Fixed  N/A
                                  CVE numbers N/A

                                  This patch updates the native-misc-drivers VIB.

                                    ESXi670-201904101-SG
                                    Patch Category Security
                                    Patch Severity Important
                                    Host Reboot Required Yes
                                    Virtual Machine Migration or Shutdown Required Yes
                                    Affected Hardware N/A
                                    Affected Software N/A
                                    VIBs Included
                                    • VMware_bootbank_vsan_6.7.0-1.44.11399678
                                    • VMware_bootbank_esx-update_6.7.0-1.44.12986307
                                    • VMware_bootbank_esx-base_6.7.0-1.44.12986307
                                    • VMware_bootbank_vsanhealth_6.7.0-1.44.11399680
                                    PRs Fixed   2228005, 2301046, 2232983, 2222019, 2224776, 2260077 
                                    CVE numbers CVE-2019-5516, CVE-2019-5517, CVE-2019-5520

                                    This patch updates the esx-base, vsan, and vsanhealth VIBs to resolve the following issues:

                                    • PR 2224776: ESXi hosts might become unresponsive if the sfcbd service fails to process all forked processes

                                      You might see many error messages such as Heap globalCartel-1 already at its maximum size. Cannot expand. in the vmkernel.log, because the sfcbd service fails to process all forked processes. As a result, the ESXi host might become unresponsive and some operations might fail.

                                      This issue is resolved in this release.

                                    • Update to the Network Time Protocol (NTP) daemon

                                      The NTP daemon is updated to version ntp-4.2.8p12.

                                    • Update to the libxml2 library

                                      The ESXi userworld libxml2 library is updated to version 2.9.8.

                                    • Update to the Python library

                                      The Python third party library is updated to version 3.5.6.

                                    • Update to OpenSSH

                                      OpenSSH is updated to version 7.9 to resolve a security issue with identifier CVE-2018-15473.

                                    • Update to OpenSSL

                                      The OpenSSL package is updated to version openssl-1.0.2r.

                                    • ESXi updates address an out-of-bounds vulnerability with the vertex shader functionality. Exploitation of this issue requires an attacker to have access to a virtual machine with 3D graphics enabled.  Successful exploitation may lead to information disclosure or may allow attackers with normal user privileges to create a denial-of-service condition on their own VM. The workaround for this issue involves disabling the 3D-acceleration feature. This feature is not enabled by default on ESXi. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2019-5516 to this issue. See VMSA-2019-0006 for further information.

                                    • ESXi contains multiple out-of-bounds read vulnerabilities in the shader translator. Exploitation of these issues requires an attacker to have access to a virtual machine with 3D graphics enabled.  Successful exploitation of these issues may lead to information disclosure or may allow attackers with normal user privileges to create a denial-of-service condition on their own VM. The workaround for these issues involves disabling the 3D-acceleration feature. This feature is not enabled by default on ESXi. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2019-5517 to these issues. See VMSA-2019-0006 for further information.

                                    • ESXi updates address an out-of-bounds read vulnerability. Exploitation of this issue requires an attacker to have access to a virtual machine with 3D graphics enabled.  Successful exploitation of this issue may lead to information disclosure. The workaround for this issue involves disabling the 3D-acceleration feature. This feature is not enabled by default on ESXi. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2019-5520 to this issue. See VMSA-2019-0006 for further information.
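
                                    As a rough sketch of the workaround described above, and assuming that per-virtual machine 3D support is exposed through the mks.enable3d configuration parameter (the datastore and virtual machine names below are placeholders), you can check whether a virtual machine has 3D support enabled from the ESXi Shell:

                                      # Inspect the virtual machine configuration file for the 3D setting;
                                      # mks.enable3d = "TRUE" indicates 3D support is enabled, absent or "FALSE" means it is disabled
                                      grep -i "mks.enable3d" /vmfs/volumes/datastore1/example-vm/example-vm.vmx

                                    To disable the feature, power off the virtual machine and deselect Enable 3D Support in its video card settings, or set mks.enable3d to FALSE.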

                                    ESXi670-201904102-SG
                                    Patch Category Security
                                    Patch Severity Important
                                    Host Reboot Required No
                                    Virtual Machine Migration or Shutdown Required No
                                    Affected Hardware N/A
                                    Affected Software N/A
                                    VIBs Included
                                    • VMware_locker_tools-light_10.3.5.10430147-12986307
                                    PRs Fixed  N/A
                                    CVE numbers N/A

                                    This patch updates the tools-light VIB.

                                      ESXi670-201904103-SG
                                      Patch Category Security
                                      Patch Severity Important
                                      Host Reboot Required No
                                      Virtual Machine Migration or Shutdown Required No
                                      Affected Hardware N/A
                                      Affected Software N/A
                                      VIBs Included
                                      • VMware_bootbank_esx-ui_1.33.3-12923304
                                      PRs Fixed  N/A
                                      CVE numbers N/A

                                      This patch updates the esx-ui VIB.

                                        ESXi-6.7.0-20190402001-standard
                                        Profile Name ESXi-6.7.0-20190402001-standard
                                        Build For build information, see Patches Contained in this Release.
                                        Vendor VMware, Inc.
                                        Release Date April 11, 2019
                                        Acceptance Level PartnerSupported
                                        Affected Hardware N/A
                                        Affected Software N/A
                                        Affected VIBs
                                        • VMware_bootbank_vsanhealth_6.7.0-2.48.12775454
                                        • VMware_bootbank_vsan_6.7.0-2.48.12775451
                                        • VMware_bootbank_esx-base_6.7.0-2.48.13006603
                                        • VMware_bootbank_esx-update_6.7.0-2.48.13006603
                                        • VMW_bootbank_i40en_1.3.1-23vmw.670.2.48.13006603
                                        • VMW_bootbank_igbn_0.1.1.0-4vmw.670.2.48.13006603
                                        • VMW_bootbank_ixgben_1.4.1-18vmw.670.2.48.13006603
                                        • VMW_bootbank_ne1000_0.8.4-2vmw.670.2.48.13006603
                                        • VMW_bootbank_vmkusb_0.1-1vmw.670.2.48.13006603
                                        • VMware_bootbank_qlnativefc_3.1.8.0-4vmw.670.2.48.13006603
                                        • VMW_bootbank_nfnic_4.0.0.17-0vmw.670.2.48.13006603
                                        • VMW_bootbank_brcmfcoe_11.4.1078.19-12vmw.670.2.48.13006603
                                        • VMW_bootbank_lpfc_11.4.33.18-12vmw.670.2.48.13006603
                                        • VMware_bootbank_elx-esx-libelxima.so_11.4.1184.1-2.48.13006603
                                        • VMW_bootbank_misc-drivers_6.7.0-2.48.13006603
                                        • VMware_bootbank_lsu-lsi-lsi-msgpt3-plugin_1.0.0-9vmw.670.2.48.13006603
                                        • VMW_bootbank_sfvmk_1.0.0.1003-6vmw.670.2.48.13006603
                                        • VMW_bootbank_lsi-msgpt2_20.00.05.00-1vmw.670.2.48.13006603
                                        • VMW_bootbank_nmlx4-core_3.17.13.1-1vmw.670.2.48.13006603
                                        • VMW_bootbank_nmlx4-en_3.17.13.1-1vmw.670.2.48.13006603
                                        • VMW_bootbank_nmlx4-rdma_3.17.13.1-1vmw.670.2.48.13006603
                                        • VMW_bootbank_nmlx5-core_4.17.13.1-1vmw.670.2.48.13006603
                                        • VMW_bootbank_nmlx5-rdma_4.17.13.1-1vmw.670.2.48.13006603
                                        • VMW_bootbank_bnxtnet_20.6.101.7-21vmw.670.2.48.13006603
                                        • VMware_bootbank_lsu-lsi-drivers-plugin_1.0.0-1vmw.670.2.48.13006603
                                        • VMW_bootbank_nvme_1.2.2.27-1vmw.670.2.48.13006603
                                        • VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.36-2.48.13006603
                                        • VMW_bootbank_lsi-mr3_7.708.07.00-2vmw.670.2.48.13006603
                                        • VMW_bootbank_lsi-msgpt35_09.00.00.00-1vmw.670.2.48.13006603
                                        • VMW_bootbank_lsi-msgpt3_17.00.01.00-3vmw.670.2.48.13006603
                                        • VMW_bootbank_net-vmxnet3_1.1.3.0-3vmw.670.2.48.13006603
                                        • VMware_bootbank_native-misc-drivers_6.7.0-2.48.13006603
                                        • VMware_locker_tools-light_10.3.5.10430147-12986307
                                        • VMware_bootbank_esx-ui_1.33.3-12923304
                                        PRs Fixed 2227123, 2250472, 2273186, 2282052, 2267506, 2256771, 2269616, 2250697, 2193516, 2203431, 2264996, 2267158, 2231707, 2191342, 2197795, 2221935, 2193830, 2210364, 2177107, 2179263, 2185906, 2204508, 2211350, 2204204, 2226565, 2240385, 2163397, 2238134, 2213883, 2141221, 2240444, 2249713, 2184068, 2257480, 2235031, 2257354, 2212140, 2279897, 2227623, 2277648, 2221778, 2287560, 2268541, 2203837, 2236531, 2242656, 2250653, 2260748, 2256001, 2266762, 2263849, 2267698, 2271653, 2232538, 2210056, 2271322, 2272737, 2286563, 2178119, 2223879, 2221256, 2258272, 2219661, 2268826, 2224767, 2257590, 2241917, 2246891, 2228529, 2231435, 2226688, 2234599, 2256555, 2242441, 2270098, 2272901, 2275290
                                        Related CVE numbers N/A
                                        • This patch resolves the following issues:
                                          • You might intermittently see alarms for very large values of counters for read or write disk latency, such as datastore.totalReadLatency and datastore.totalWriteLatency.

                                          • When a Linux virtual machine is configured with a specific memory size, such as 2052 MB or 2060 MB, it might display a blank screen instead of powering on.

                                          • When you set up the network and enable vSphere Distributed Switch health check to perform configuration checks, the ESXi host might fail with a purple diagnostic screen.

                                          • The SNMP agent might deliver an incorrect trap vmwVmPoweredOn when a virtual machine is selected under the Summary tab and the Snapshot Manager tab in the vSphere Web Client.

                                          • The DVFilter might receive unexpected or corrupt values in the shared memory ring buffer, causing the internal function to return NULL. If this NULL value is not handled gracefully, an ESXi host might fail with a purple diagnostic screen at the DVFilter level.

                                          • In the Events tab of the vSphere Client or the vSphere Web Client, when you power on or off virtual machines, you might see events similar to Network connectivity restored on virtual switch XXX, portgroups: XXX. Physical NIC vmnicX is up. When checking uplink status, the system report might include port groups that are not affected by the virtual machine powering on or off operation, which triggers the event. However, the event does not indicate a problem or prompt an action.

                                          • If during the Windows Server Failover Cluster setup you change the default path policy from Fixed or Most Recently Used to Round Robin, the I/O of the cluster might fail and the cluster might stop responding.

                                          • In release builds, the smartd daemon might write a large number of debug and info messages to the syslog service logs.

                                          • The NIOC scheduler might reset the uplink network device if the uplink is rarely used. The reset is unpredictable.

                                          • ESXi hosts might intermittently fail if you disable global IPv6 addresses, because a code path still uses IPv6.

                                          • Due to a race condition, the virtual machine executable process might fail and shut down virtual machines during a reboot of the guest OS.

                                          • Hostd might become unresponsive while waiting for an external process initiated with the esxcfg-syslog command to finish.

                                          • A large number of I/Os timing out on a heavily loaded system might cause a soft lockup of physical CPUs. As a result, an ESXi host might fail with a purple diagnostic screen.

                                          • Invalid commands to a virtual SATA CD-ROM might trigger errors and increase the memory usage of virtual machines. This might lead to a failure of virtual machines if they are unable to allocate memory. You might see the following logs for invalid commands:

                                            YYYY-MM-DDTHH:MM:SS.mmmZ| vcpu-1| I125: AHCI-USER: Invalid size in command
                                            YYYY-MM-DDTHH:MM:SS.mmmZ| vcpu-1| I125: AHCI-VMM: sata0:0: Halted the port due to command abort.

                                            And a similar panic message:
                                            YYYY-MM-DDTHH:MM:SS.mmmZ| vcpu-1| E105: PANIC: Unrecoverable memory allocation failure

                                          • If the Guest Introspection service in a vSphere 6.7 environment with more than 150 virtual machines is active, migration of virtual machines by using vSphere vMotion might fail with an error in the vSphere Web Client similar to The source detected that the destination failed to resume.
                                            The destination vmware.log contains error messages similar to:

                                            2018-07-18T02:41:32.035Z| vmx| I125: MigrateSetState: Transitioning from state 11 to 12.
                                            2018-07-18T02:41:32.035Z| vmx| I125: Migrate: Caching migration error message list:
                                            2018-07-18T02:41:32.035Z| vmx| I125: [msg.checkpoint.migration.failedReceive] Failed to receive migration.
                                            2018-07-18T02:41:32.035Z| vmx| I125: [msg.namespaceDb.badVersion] Incompatible version -1 (expect 2).
                                            2018-07-18T02:41:32.035Z| vmx| I125: [msg.checkpoint.mrestoregroup.failed] An error occurred restoring the virtual machine state during migration.

                                            The vmkernel log contains error messages such as:

                                            2018-07-18T02:32:43.011Z cpu5:66134)WARNING: Heap: 3534: Heap fcntlHeap-1 already at its maximum size. Cannot expand.
                                            2018-07-18T02:41:35.613Z cpu2:66134)WARNING: Heap: 4169: Heap_Align(fcntlHeap-1, 200/200 bytes, 8 align) failed. caller: 0x41800aaca9a3

                                          • You can edit the statistical data to be collected only when the virtual machine is in a particular power state. For example, you cannot edit the memory data when the virtual machine is powered on. When you restart the hostd service, the virtual machine might not change its power state, which leads to false data initialization. The calculation of the host consumed memory counter might end up with division by zero and the chart might stop drawing graphs.

                                          • By default, the Isolation Response feature in an ESXi host with enabled vSphere HA is disabled. When the Isolation Response feature is enabled, a virtual machine port, which is connected to an NSX-T logical switch, might be blocked after a vSphere HA failover.

                                          • e1000 vNICs might intermittently drop unrecognized GRE packets due to a parsing issue.

                                          • While booting an Apple system, the boot process might stop with the following error message:
                                            Shutting down firmware services...
                                            Multiboot buffer is too small.
                                            Unrecoverable error

                                            The issue affects only certain Apple firmware versions.

                                          • If hostd restarts during a long-running quiesced snapshot operation, hostd might automatically run the snapshot Consolidation command to remove redundant disks and improve virtual machine performance. However, the Consolidation command might race with the running quiesced snapshot operation and cause failure of virtual machines.

                                          • The SNMP agent might display the status of all batteries as D_Failed when you run the snmpwalk command. This issue occurs because the code might not check the status properly and misinterprets compact sensors that use sensor-specific code.

                                          • When you use an IP hash or source MAC hash teaming policy, some packets from the packet list might use a different uplink. As a result, some of them might drop or not be sent out through the uplink determined by the teaming policy.

                                          • If an NFS datastore is configured as a syslog datastore and if the ESXi host disconnects from it, logging to the datastore stops and the ESXi host might become unresponsive.

                                          • When you replicate virtual machines by using vSphere Replication with Site Recovery Manager, the ESXi host might fail with a purple diagnostic screen immediately or within 24 hours. You might see an error similar to: PANIC bora/vmkernel/main/dlmalloc.c:4924 - Usage error in dlmalloc.

                                          • Larger configurations might exceed the limit for the number of locks and hostd starts failing with an error similar to:
                                            hostd Panic: MXUserAllocSerialNumber: too many locks!

                                          • The esxcli network command $esxcli.network.nic.coalesce.set.Invoke might return a false value even though the value is set correctly. This might impact your automation scripts.

                                          • When a virtual machine tries to access a VMFS datastore with corrupted directory, the ESXi host might fail with a purple diagnostic screen and the DirEntry corruption error message.

                                          • The LACPPDU Actor key might change its value to 0 during the link status flapping.

                                          • Due to a timing issue in the VMkernel, buffers might not be flushed and the heap gets exhausted. As a result, services such as hostd, vpxa, and vmsyslogd might not be able to write logs on the ESXi host, and the host becomes unresponsive. In the /var/log/vmkernel.log, you might see a warning similar to: WARNING: Heap: 3571: Heap vfat already at its maximum size. Cannot expand.

                                          • If you have modified the type of a virtual CD-ROM device of a virtual machine in a previous version of ESXi, after you update to a later version, when you try to install or update VMware Tools, the virtual machine might be terminated and marked as invalid.

                                          • When you renew a certificate, only the first certificate from the supplied chain of trust might be stored on the ESXi host. Any intermediate CA certificates are truncated. Because of the missing certificates, the chain to the root CA cannot be built. This leads to a warning for an untrusted connection.

                                          • If a vSphere API for Storage Awareness provider modifies the vSphere Virtual Volumes policy unattended, a null VvolId parameter might update the vSphere Virtual Volumes metadata. This results in a VASA call with a null VvolId parameter and a failure when creating a virtual machine snapshot.

                                          • If a call from a vSphere API for Storage Awareness provider fails due to all connections to the virtual provider being busy, operations for parallel cloning of multiple virtual machines on a vSphere Virtual Volumes datastore might become unresponsive or fail with an error message similar to Cannot complete file creation operation.

                                          • An ESXi host might fail with a purple diagnostic screen displaying a sbflush_internal panic message due to some discrepancies in the internal statistics.

                                          • With ESXi 6.7 Update 2, you can migrate virtual machines by using vSphere vMotion between ESXi hosts with N-VDS and vSphere Standard Switches. To enable the feature, you must upgrade your vCenter Server system to vCenter Server 6.7 Update 2 and ESXi 6.7 Update 2 on both source and destination sites.

                                          • ESXi hosts might fail with a purple diagnostic screen due to a page fault exception. The error comes from a missing flag that could trigger cache eviction on an object that no longer exists, and result in a NULL pointer reference.

                                          • When checking the uplink status, the teaming policy might not check if the uplink belongs to a port group and the affected port group might not be correct. As a result, you might see redundant VOB messages in the /var/run/log/vobd.log file, such as:

                                            Lost uplink redundancy on virtual switch "xxx". Physical NIC vmnicX is down. Affected portgroups:"xxx".

                                          • You must manually add claim rules to an ESXi host for Lenovo ThinkSystem DE Series Storage Arrays. A command-line sketch for adding such a rule appears after this list.

                                          • The vpxa service might fail due to excessive memory usage when iterating on a large directory tree. As a result, an ESXi host might get disconnected from a vCenter Server system.

                                          • The commands esxcli software profile update or esxcli software profile install might fail on hosts running ESXi 6.7 Update 1 with Secure Boot enabled. You might see error messages such as Failed to setup upgrade using esx-update VIB and Failed to mount tardisk in ramdisk: [Errno 1] Operation not permitted. An example invocation appears after this list.

                                          • NFS volume mount might not persist after a reboot of an ESXi host due to intermittent failure in resolving the host name of the NFS server.

                                          • This issue happens if you create a VMK or vNIC port by using the Route Based on Physical NIC Load option when the physical NIC assigned to that port group is temporarily down. In such cases, the port team uplink is not set because there are no active uplinks in the portgroup. When the physical NIC becomes active again, teaming code fails to update the port data, resulting in a packet loss.

                                          • A race condition can occur when Plog Relog runs along with DecommissionMD, causing Relog to access freed Plog device state tables. This problem can cause a host failure with a purple diagnostic screen.

                                          • If you use devices with a physical block size other than 4096 or 512 bytes, in the vmkernel logs you might see many warnings similar to:
                                            ScsiPath: 4395: The Physical block size "8192" reported by the path vmhba3:C0:T1:L0 is not supported.

                                          • After configuring or upgrading a vSAN stretched cluster, all cluster level requests might fail. You might see the following error message:
                                            name '_VSAN_VER3' is not defined.

                                          • If the virtual machine folder is created using a name instead of UUID, the vSAN performance charts do not show the following VMDK metrics properly:
                                            - IOPS and IOPS Limit
                                            - Delayed Normalized IOPS

                                          • In Host > Monitor > Performance > Physical Adapters > pNIC vSwitch Port Drop Rate, you might see a counter showing dropped packets while no packets are dropped. This is because the IOChain framework treats the reduced number of packets when using LRO as packet drops.

                                          • An ESXi host might fail with a purple diagnostic screen while powering on a virtual machine. This happens if the virtual machine name starts with “SRIOV” and has an uplink connected to a vSphere Distributed Switch.

                                          • In a rare race condition, when SNMP disables dynamic rulesets, but they are active in a Host Profile, you might see an unexpected compliance error such as Ruleset dynamicruleset not found.

                                          • Recovery of a virtual machine running Windows 2008 or later by using vSphere Replication with application quiescing enabled might result in a corrupt replica. Corruption only happens when application quiescing is enabled. If VMware Tools on the virtual machine is not configured for application quiescing, vSphere Replication uses a lower level of consistency, such as file system quiescing, and replicas are not corrupted.

                                          • When you perform CIM requests larger than 2 MB, such as a firmware download, the CIM service might fail on an ESXi host because it runs out of stack space, and an SFCB zdump file appears in /var/core.

                                          • The number of syslog.log archive files generated might be one less than the default value set by using the syslog.global.defaultRotate parameter. The number of syslog.log archive files might also be different between ESXi versions 6.0 and 6.7. For instance, if syslog.global.defaultRotate is set to 8 by default, ESXi 6.0 creates syslog.0.gz to syslog.7.gz, while ESXi 6.7 creates syslog.0.gz to syslog.6.gz. An example of checking this setting appears after this list.

                                          • If you set the advanced configuration option /Misc/MCEMonitorInterval to a very large value, ESXi hosts might become unresponsive after around 10 days of uptime, due to a timer overflow. An example of inspecting this option appears after this list.

                                          • The P2MCache lock is a spin lock, and contention on it might slow CPU performance and starve other CPUs. As a result, an ESXi host might fail with a purple diagnostic screen and a Spin count exceeded - possible deadlock, PVSCSIDoIOComplet, NFSAsyncIOComplete error.

                                          • An ESXi host might fail with a purple diagnostic screen during a Storage vMotion operation between NFS datastores, due to a race condition in the request completion path. The race condition happens under heavy I/O load in the NFS stack.

                                          • During disk decommission operation, a race condition in the decommission workflow might cause the ESXi host to fail with a purple diagnostic screen.

                                          • An ESXi host might fail with a purple diagnostic screen if RSS is enabled on a physical NIC. You might see a similar backtrace:
                                            yyyy-mm-dd cpu9:2097316)0x451a8521bc70:[0x41801a68ad30]RSSPlugCleanupRSSEngine@(lb_netqueue_bal)#+0x7d stack: 0x4305e4c1a8e0, 0x41801a68af2b, 0x430db8c6b188, 0x4305e4c1a8e0,
                                            0x0yyyy-mm-dd cpu9:2097316)0x451a8521bc90:[0x41801a68af2a]RSSPlugInitRSSEngine@(lb_netqueue_bal)#+0x127 stack: 0x0, 0x20c49ba5e353f7cf, 0x4305e4c1a930, 0x4305e4c1a800, 0x4305e4c1a9f0
                                            yyyy-mm-dd cpu9:2097316)0x451a8521bcd0:[0x41801a68b21c]RSSPlug_PreBalanceWork@(lb_netqueue_bal)#+0x1cd stack: 0x32, 0x32, 0x0, 0xe0, 0x4305e4c1a800
                                            yyyy-mm-dd cpu9:2097316)0x451a8521bd30:[0x41801a687752]Lb_PreBalanceWork@(lb_netqueue_bal)#+0x21f stack: 0x4305e4c1a800, 0xff, 0x0, 0x43057f38ea00, 0x4305e4c1a800
                                            yyyy-mm-dd cpu9:2097316)0x451a8521bd80:[0x418019c13b18]UplinkNetqueueBal_BalanceCB@vmkernel#nover+0x6f1 stack: 0x4305e4bc6088, 0x4305e4c1a840, 0x4305e4c1a800, 0x4305e435fb30, 0x4305e4bc6088
                                            yyyy-mm-dd cpu9:2097316)0x451a8521bf00:[0x418019cd1c89]UplinkAsyncProcessCallsHelperCB@vmkernel#nover+0x116 stack: 0x430749e60870, 0x430196737070, 0x451a85223000, 0x418019ae832b, 0x430749e60088
                                            yyyy-mm-dd cpu9:2097316)0x451a8521bf30:[0x418019ae832a]HelperQueueFunc@vmkernel#nover+0x157 stack: 0x430749e600b8, 0x430749e600a8, 0x430749e600e0, 0x451a85223000, 0x430749e600b8
                                            yyyy-mm-dd cpu9:2097316)0x451a8521bfe0:[0x418019d081f2]CpuSched_StartWorld@vmkernel#nover+0x77 stack: 0x0, 0x0, 0x0, 0x0, 0x0

                                          • When the VASA provider sends a rebind request to an ESXi host to switch the binding for a particular vSphere Virtual Volume, the ESXi host switches the protocol endpoint and other resources to change the binding, which is expected to happen without any I/O disturbance. However, during this switch the ESXi host might fail with a purple diagnostic screen.

                                          • Some IPMI sensors might intermittently change from green to red and the reverse, which creates false hardware health alarms.

                                          • If you install an async 32-bit IMA plug-in and there are already installed 32-bit IMA plug-ins, the system might try to load these plug-ins and check if they are loaded from the QLogic independent iSCSI adapter. During this process the ESXi host might fail with a memory admission error.

                                          • If you run a reprotect operation by using the Site Recovery Manager, the operation might fail if a virtual machine is late with the initial synchronization. You might see an error similar to:
                                            VR synchronization failed for VRM group {…}. Operation timed out: 10800 seconds

                                          • When I/O throttling is enabled on a protocol endpoint of Virtual Volumes on HPE 3PAR storage and if an I/O on a secondary LUN fails with an error TASK_SET_FULL, the I/O rate on all secondary LUNs that are associated with the protocol endpoint slows down.

                                          • The qlnativefc driver is updated to version 3.1.8.0.

                                          • The nfnic driver is updated to version 4.0.0.17.

                                          • Emulex drivers might write logs at /var/log/EMU/mili/mili2d.log and fill up the 40 MB /var file system logs of RAM drives.

                                          • ESXi 6.7 Update 2 adds Solarflare native driver (sfvmk) support for Solarflare 10G and 40G network adaptor devices, such as SFN8542 and SFN8522.

                                          • The lsi-msgpt2 driver is updated to version 20.00.05.00 to address task management enhancements and critical bug fixes.

                                          • You might not be able to locate and turn on the LED for disks under a lsi-msgpt35 controller.

                                          • If a split I/O request is cancelled in the inbox nvme driver, the command completion path might unlock the queue lock due to an incorrect check of the command type. This might cause ESXi hosts to fail with a purple diagnostic screen.

                                          • VMware inbox nvme driver supports only doorbell stride with value 0h and you cannot use any other value or see NVMe devices with doorbell stride different from 0h.

                                          • For some Dell-branded NVMe devices, and for Marvell NR2241, you might see incorrect NVMe device information by running the esxcfg-scsidevs -a command or looking at the interface.

                                          • The Broadcom lsi-mr3 driver is upgraded to version MR 7.8.

                                          • The lsi-msgpt35 driver is updated to version 09.00.00.00-1vmw.

                                          • The lsi-msgpt3 driver is updated to version 17.00.01.00-3vmw.

                                          • If you extract a Host Profile from an ESXi host with disabled IPv6 support, and the IPv4 default gateway is overridden while remediating that Host Profile, you might see a message that a virtual network adapter, such as DSwitch0, is to be created on the vSphere Distributed Switch (VDS), but the adapter is actually removed from the VDS. This is due to a "::" value of the ipV6DefaultGateway parameter.

                                          • Some guest virtual machines might report their identity as an empty string if the guest is running a later version of VMware Tools than the ESXi host was released with, or if the host cannot recognize the guestID parameter of the GuestInfo object. The issue affects mainly virtual machines with CentOS and VMware Photon OS.

                                          • If you use network booting for ESXi hosts, you might see repetitive messages in the vSphere Web Client similar to Delete host sub specification nsx, due to some dummy code in hostd.

                                          • The installation of VMware Tools unmounts an ISO image file with a name ending in autoinst.iso if such a file is already mounted to the virtual machine. To mount the VMware Tools ISO image file, you must click Install VMware Tools again. As a result, the virtual machine might stop responding.

                                          • ESXi hosts might lose connectivity to the vCenter Server system due to a failure of the vpxa agent service caused by an invalid property update generated by the PropertyCollector object. A race condition in hostd leads to a malformed sequence of property update notification events that causes the invalid property update.
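
                                          As referenced in the Lenovo ThinkSystem DE Series item above, the following is a minimal sketch of how a claim rule can be added manually from the ESXi Shell. The SATP, vendor and model strings, and path selection policy below are illustrative assumptions only; consult the array documentation for the exact values required.

                                            # Check which SATP claim rules already exist for the array vendor
                                            esxcli storage nmp satp rule list | grep -i LENOVO

                                            # Add a claim rule (illustrative values; verify the vendor/model strings and policies for your array)
                                            esxcli storage nmp satp rule add --satp=VMW_SATP_ALUA --vendor=LENOVO --model="DE_Series" --claim-option=tpgs_on --psp=VMW_PSP_RR --description="Lenovo ThinkSystem DE Series"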
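
                                          As referenced in the esxcli software profile item above, the following is a sketch of updating a host to the image profile delivered with this release. The depot path is a placeholder for the location of your downloaded offline bundle.

                                            # Update the host to the ESXi 6.7 Update 2 standard image profile from an offline bundle
                                            esxcli software profile update -d /vmfs/volumes/datastore1/ESXi-offline-bundle.zip -p ESXi-6.7.0-20190402001-standard

                                            # Reboot the host to complete the update
                                            reboot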
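
                                          As referenced in the syslog.global.defaultRotate item above, the following is a sketch of how the rotation setting can be checked and changed with esxcli. The value 8 matches the example in the issue description.

                                            # Show the current syslog configuration, including the default rotation count
                                            esxcli system syslog config get

                                            # Set the number of rotated log files to keep, then reload the syslog service
                                            esxcli system syslog config set --default-rotate=8
                                            esxcli system syslog reload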
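
                                          As referenced in the /Misc/MCEMonitorInterval item above, the following is a sketch of how the advanced option can be inspected and reset with esxcli; it does not assume any particular default value.

                                            # Show the current value, default value, and description of the option
                                            esxcli system settings advanced list -o /Misc/MCEMonitorInterval

                                            # Reset the option to its default value (-d restores the default)
                                            esxcli system settings advanced set -o /Misc/MCEMonitorInterval -d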

                                        ESXi-6.7.0-20190402001-no-tools
                                        Profile Name ESXi-6.7.0-20190402001-no-tools
                                        Build For build information, see Patches Contained in this Release.
                                        Vendor VMware, Inc.
                                        Release Date April 11, 2019
                                        Acceptance Level PartnerSupported
                                        Affected Hardware N/A
                                        Affected Software N/A
                                        Affected VIBs
                                        • VMware_bootbank_vsanhealth_6.7.0-2.48.12775454
                                        • VMware_bootbank_vsan_6.7.0-2.48.12775451
                                        • VMware_bootbank_esx-base_6.7.0-2.48.13006603
                                        • VMware_bootbank_esx-update_6.7.0-2.48.13006603
                                        • VMW_bootbank_i40en_1.3.1-23vmw.670.2.48.13006603
                                        • VMW_bootbank_igbn_0.1.1.0-4vmw.670.2.48.13006603
                                        • VMW_bootbank_ixgben_1.4.1-18vmw.670.2.48.13006603
                                        • VMW_bootbank_ne1000_0.8.4-2vmw.670.2.48.13006603
                                        • VMW_bootbank_vmkusb_0.1-1vmw.670.2.48.13006603
                                        • VMware_bootbank_qlnativefc_3.1.8.0-4vmw.670.2.48.13006603
                                        • VMW_bootbank_nfnic_4.0.0.17-0vmw.670.2.48.13006603
                                        • VMW_bootbank_brcmfcoe_11.4.1078.19-12vmw.670.2.48.13006603
                                        • VMW_bootbank_lpfc_11.4.33.18-12vmw.670.2.48.13006603
                                        • VMware_bootbank_elx-esx-libelxima.so_11.4.1184.1-2.48.13006603
                                        • VMW_bootbank_misc-drivers_6.7.0-2.48.13006603
                                        • VMware_bootbank_lsu-lsi-lsi-msgpt3-plugin_1.0.0-9vmw.670.2.48.13006603
                                        • VMW_bootbank_sfvmk_1.0.0.1003-6vmw.670.2.48.13006603
                                        • VMW_bootbank_lsi-msgpt2_20.00.05.00-1vmw.670.2.48.13006603
                                        • VMW_bootbank_nmlx4-core_3.17.13.1-1vmw.670.2.48.13006603
                                        • VMW_bootbank_nmlx4-en_3.17.13.1-1vmw.670.2.48.13006603
                                        • VMW_bootbank_nmlx4-rdma_3.17.13.1-1vmw.670.2.48.13006603
                                        • VMW_bootbank_nmlx5-core_4.17.13.1-1vmw.670.2.48.13006603
                                        • VMW_bootbank_nmlx5-rdma_4.17.13.1-1vmw.670.2.48.13006603
                                        • VMW_bootbank_bnxtnet_20.6.101.7-21vmw.670.2.48.13006603
                                        • VMware_bootbank_lsu-lsi-drivers-plugin_1.0.0-1vmw.670.2.48.13006603
                                        • VMW_bootbank_nvme_1.2.2.27-1vmw.670.2.48.13006603
                                        • VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.36-2.48.13006603
                                        • VMW_bootbank_lsi-mr3_7.708.07.00-2vmw.670.2.48.13006603
                                        • VMW_bootbank_lsi-msgpt35_09.00.00.00-1vmw.670.2.48.13006603
                                        • VMW_bootbank_lsi-msgpt3_17.00.01.00-3vmw.670.2.48.13006603
                                        • VMW_bootbank_net-vmxnet3_1.1.3.0-3vmw.670.2.48.13006603
                                        • VMware_bootbank_native-misc-drivers_6.7.0-2.48.13006603
                                        • VMware_locker_tools-light_10.3.5.10430147-12986307
                                        • VMware_bootbank_esx-ui_1.33.3-12923304
                                        PRs Fixed 2227123, 2250472, 2273186, 2282052, 2267506, 2256771, 2269616, 2250697, 2193516, 2203431, 2264996, 2267158, 2231707, 2191342, 2197795, 2221935, 2193830, 2210364, 2177107, 2179263, 2185906, 2204508, 2211350, 2204204, 2226565, 2240385, 2163397, 2238134, 2213883, 2141221, 2240444, 2249713, 2184068, 2257480, 2235031, 2257354, 2212140, 2279897, 2227623, 2277648, 2221778, 2287560, 2268541, 2203837, 2236531, 2242656, 2250653, 2260748, 2256001, 2266762, 2263849, 2267698, 2271653, 2232538, 2210056, 2271322, 2272737, 2286563, 2178119, 2223879, 2221256, 2258272, 2219661, 2268826, 2224767, 2257590, 2241917, 2246891, 2228529, 2231435, 2226688, 2234599, 2256555, 2242441, 2270098, 2272901, 2275290
                                        Related CVE numbers N/A
                                        • This patch resolves the following issues:
                                          • You might intermittently see alarms for very large values of counters for read or write disk latency, such as datastore.totalReadLatency and datastore.totalWriteLatency.

                                          • When a Linux virtual machine is configured with a specific memory size, such as 2052 MB or 2060 MB, it might display a blank screen instead of powering on.

                                          • When you set up the network and enable vSphere Distributed Switch health check to perform configuration checks, the ESXi host might fail with a purple diagnostic screen.

                                          • The SNMP agent might deliver an incorrect trap vmwVmPoweredOn when a virtual machine is selected under the Summary tab and the Snapshot Manager tab in the vSphere Web Client.

                                          • The DVFilter might receive unexpected or corrupt values in the shared memory ring buffer, causing the internal function to return NULL. If this NULL value is not handled gracefully, an ESXi host might fail with a purple diagnostic screen at the DVFilter level.

                                          • In the Events tab of the vSphere Client or the vSphere Web Client, when you power on or off virtual machines, you might see events similar to Network connectivity restored on virtual switch XXX, portgroups: XXX. Physical NIC vmnicX is up. When checking uplink status, the system report might include port groups that are not affected by the virtual machine powering on or off operation, which triggers the event. However, the event does not indicate a problem or prompt an action.

                                          • If during the Windows Server Failover Cluster setup you change the default path policy from Fixed or Most Recently Used to Round Robin, the I/O of the cluster might fail and the cluster might stop responding.

                                          • In release builds, the smartd daemon might write a large number of debug and info messages to the syslog service logs.

                                          • The NIOC scheduler might reset the uplink network device if the uplink is rarely used. The reset is unpredictable.

                                          • ESXi hosts might intermittently fail if you disable global IPv6 addresses, because a code path still uses IPv6.

                                          • Due to a race condition, the virtual machine executable process might fail and shut down virtual machines during a reboot of the guest OS.

                                          • Hostd might become unresponsive while waiting for an external process initiated with the esxcfg-syslog command to finish.

                                          • A large number of I/Os timing out on a heavily loaded system might cause a soft lockup of physical CPUs. As a result, an ESXi host might fail with a purple diagnostic screen.

                                          • Invalid commands to a virtual SATA CD-ROM might trigger errors and increase the memory usage of virtual machines. This might lead to a failure of virtual machines if they are unable to allocate memory. You might see the following logs for invalid commands:

                                            YYYY-MM-DDTHH:MM:SS.mmmZ| vcpu-1| I125: AHCI-USER: Invalid size in command
                                            YYYY-MM-DDTHH:MM:SS.mmmZ| vcpu-1| I125: AHCI-VMM: sata0:0: Halted the port due to command abort.

                                            And a similar panic message:
                                            YYYY-MM-DDTHH:MM:SS.mmmZ| vcpu-1| E105: PANIC: Unrecoverable memory allocation failure

                                          • If the Guest Introspection service in a vSphere 6.7 environment with more than 150 virtual machines is active, migration of virtual machines by using vSphere vMotion might fail with an error in the vSphere Web Client similar to The source detected that the destination failed to resume.
                                            The destination vmware.log contains error messages similar to:

                                            2018-07-18T02:41:32.035Z| vmx| I125: MigrateSetState: Transitioning from state 11 to 12.
                                            2018-07-18T02:41:32.035Z| vmx| I125: Migrate: Caching migration error message list:
                                            2018-07-18T02:41:32.035Z| vmx| I125: [msg.checkpoint.migration.failedReceive] Failed to receive migration.
                                            2018-07-18T02:41:32.035Z| vmx| I125: [msg.namespaceDb.badVersion] Incompatible version -1 (expect 2).
                                            2018-07-18T02:41:32.035Z| vmx| I125: [msg.checkpoint.mrestoregroup.failed] An error occurred restoring the virtual machine state during migration.

                                            The vmkernel log contains error messages such as:

                                            2018-07-18T02:32:43.011Z cpu5:66134)WARNING: Heap: 3534: Heap fcntlHeap-1 already at its maximum size. Cannot expand.
                                            2018-07-18T02:41:35.613Z cpu2:66134)WARNING: Heap: 4169: Heap_Align(fcntlHeap-1, 200/200 bytes, 8 align) failed. caller: 0x41800aaca9a3

                                          • You can edit the statistical data to be collected only when the virtual machine is in a particular power state. For example, you cannot edit the memory data when the virtual machine is powered on. When you restart the hostd service, the virtual machine might not change its power state, which leads to false data initialization. The calculation of the host consumed memory counter might end up with division by zero and the chart might stop drawing graphs.

                                          • By default, the Isolation Response feature in an ESXi host with enabled vSphere HA is disabled. When the Isolation Response feature is enabled, a virtual machine port, which is connected to an NSX-T logical switch, might be blocked after a vSphere HA failover.

                                          • e1000 vNICs might intermittently drop unrecognized GRE packets due to a parsing issue.

                                          • While booting an Apple system, the boot process might stop with the following error message:
                                            Shutting down firmware services...
                                            Multiboot buffer is too small.
                                            Unrecoverable error

                                            The issue affects only certain Apple firmware versions.

                                          • If hostd restarts during a long-running quiesced snapshot operation, hostd might automatically run the snapshot Consolidation command to remove redundant disks and improve virtual machine performance. However, the Consolidation command might race with the running quiesced snapshot operation and cause failure of virtual machines.

                                          • The SNMP agent might display the status of all batteries as D_Failed when you run the snmpwalk command. This issue occurs because the code might not check the status properly and misinterprets compact sensors that use sensor-specific code.

                                          • When you use an IP hash or source MAC hash teaming policy, some packets from the packet list might use a different uplink. As a result, some of them might drop or not be sent out through the uplink determined by the teaming policy.

                                          • If an NFS datastore is configured as a syslog datastore and if the ESXi host disconnects from it, logging to the datastore stops and the ESXi host might become unresponsive.

                                          • When you replicate virtual machines by using vSphere Replication with Site Recovery Manager, the ESXi host might fail with a purple diagnostic screen immediately or within 24 hours. You might see an error similar to: PANIC bora/vmkernel/main/dlmalloc.c:4924 - Usage error in dlmalloc.

                                          • Larger configurations might exceed the limit for the number of locks and hostd starts failing with an error similar to:
                                            hostd Panic: MXUserAllocSerialNumber: too many locks!

                                          • The esxcli network command $esxcli.network.nic.coalesce.set.Invoke might return a false value even though the value is set correctly. This might impact your automation scripts.

                                          • When a virtual machine tries to access a VMFS datastore with corrupted directory, the ESXi host might fail with a purple diagnostic screen and the DirEntry corruption error message.

                                          • The LACPPDU Actor key might change its value to 0 during the link status flapping.

                                          • Due to a timing issue in the VMkernel, buffers might not be flushed and the heap gets exhausted. As a result, services such as hostd, vpxa, and vmsyslogd might not be able to write logs on the ESXi host, and the host becomes unresponsive. In the /var/log/vmkernel.log, you might see a warning similar to: WARNING: Heap: 3571: Heap vfat already at its maximum size. Cannot expand.

                                          • If you have modified the type of a virtual CD-ROM device of a virtual machine in a previous version of ESXi, after you update to a later version, when you try to install or update VMware Tools, the virtual machine might be terminated and marked as invalid.

                                          • When you renew a certificate, only the first certificate from the supplied chain of trust might be stored on the ESXi host. Any intermediate CA certificates are truncated. Because of the missing certificates, the chain to the root CA cannot be built. This leads to a warning for an untrusted connection.

                                          • If a vSphere API for Storage Awareness provider modifies the vSphere Virtual Volumes policy unattended, a null VvolId parameter might update the vSphere Virtual Volumes metadata. This results in a VASA call with a null VvolId parameter and a failure when creating a virtual machine snapshot.

                                          • If a call from a vSphere API for Storage Awareness provider fails due to all connections to the virtual provider being busy, operations for parallel cloning of multiple virtual machines on a vSphere Virtual Volumes datastore might become unresponsive or fail with an error message similar to Cannot complete file creation operation.

                                          • An ESXi host might fail with a purple diagnostic screen displaying a sbflush_internal panic message due to some discrepancies in the internal statistics.

                                          • With ESXi 6.7 Update 2, you can migrate virtual machines by using vSphere vMotion between ESXi hosts with N-VDS and vSphere Standard Switches. To enable the feature, you must upgrade your vCenter Server system to vCenter Server 6.7 Update 2 and ESXi 6.7 Update 2 on both source and destination sites.

                                          • ESXi hosts might fail with a purple diagnostic screen due to a page fault exception. The error comes from a missing flag that could trigger cache eviction on an object that no longer exists, and result in a NULL pointer reference.

                                          • When checking the uplink status, the teaming policy might not check if the uplink belongs to a port group and the affected port group might not be correct. As a result, you might see redundant VOB messages in the /var/run/log/vobd.log file, such as:

                                            Lost uplink redundancy on virtual switch "xxx". Physical NIC vmnicX is down. Affected portgroups:"xxx".

                                          • You must manually add the claim rules to an ESXi host for Lenovo ThinkSystem DE Series Storage Arrays.

                                          • The vpxa service might fail due to excessive memory usage when iterating on a large directory tree. As a result, an ESXi host might get disconnected from a vCenter Server system.

                                          • The commands esxcli software profile update or esxcli software profile install might fail on hosts running ESXi 6.7 Update 1 with Secure Boot enabled. You might see error messages such as Failed to setup upgrade using esx-update VIB and Failed to mount tardisk in ramdisk: [Errno 1] Operation not permitted.

                                          • NFS volume mount might not persist after a reboot of an ESXi host due to intermittent failure in resolving the host name of the NFS server.

                                          • This issue happens if you create a VMK or vNIC port by using the Route Based on Physical NIC Load option when the physical NIC assigned to that port group is temporarily down. In such cases, the port team uplink is not set because there are no active uplinks in the portgroup. When the physical NIC becomes active again, teaming code fails to update the port data, resulting in a packet loss.

                                          • A race condition can occur when Plog Relog runs along with DecommissionMD, causing Relog to access freed Plog device state tables. This problem can cause a host failure with a purple diagnostic screen.

                                          • If you use devices with a physical block size other than 4096 or 512 bytes, in the vmkernel logs you might see many warnings similar to:
                                            ScsiPath: 4395: The Physical block size "8192" reported by the path vmhba3:C0:T1:L0 is not supported.

                                          • After configuring or upgrading a vSAN stretched cluster, all cluster level requests might fail. You might see the following error message:
                                            name '_VSAN_VER3' is not defined.

                                          • If the virtual machine folder is created using a name instead of UUID, the vSAN performance charts do not show the following VMDK metrics properly:
                                            - IOPS and IOPS Limit
                                            - Delayed Normalized IOPS

                                          • In Host > Monitor > Performance > Physical Adapters > pNIC vSwitch Port Drop Rate, you might see a counter showing dropped packets, while no packets are dropped. This is because the IOChain framework treats the reduced number of packets when using LRO as packet drops.

                                          • An ESXi host might fail with a purple diagnostic screen while powering on a virtual machine. This happens if the virtual machine name starts with “SRIOV” and has an uplink connected to a vSphere Distributed Switch.

                                          • In a rare race condition, when SNMP disables dynamic rulesets, but they are active in a Host Profile, you might see an unexpected compliance error such as Ruleset dynamicruleset not found.

                                          • Recovery of a virtual machine running Windows 2008 or later by using vSphere Replication with application quiescing enabled might result in a corrupt replica. Corruption only happens when application quiescing is enabled. If VMware Tools on the virtual machine is not configured for application quiescing, vSphere Replication uses a lower level of consistency, such as file system quiescing, and replicas are not corrupted.

                                          • When you perform CIM requests larger than 2 MB, such as a firmware download, the CIM service might fail on an ESXi host because it runs out of stack space, and an SFCB zdump file appears in /var/core.

                                          • The number of syslog.log archive files generated might be one less than the value set by using the syslog.global.defaultRotate parameter. The number of syslog.log archive files might also be different between ESXi versions 6.0 and 6.7. For instance, if syslog.global.defaultRotate is set to 8 by default, ESXi 6.0 creates syslog.0.gz to syslog.7.gz, while ESXi 6.7 creates syslog.0.gz to syslog.6.gz. For a sketch of how to check and adjust this setting, see the example commands after this list.

                                          • If you set the advanced config option /Misc/MCEMonitorInterval to a very large value, ESXi hosts might become unresponsive after around 10 days of uptime, due to a timer overflow. For a sketch of how to inspect and change this option, see the example commands after this list.

                                          • The P2MCache lock is a spin lock, and contention on it can slow CPU performance and starve other CPUs. As a result, an ESXi host might fail with a purple diagnostic screen and a Spin count exceeded - possible deadlock, PVSCSIDoIOComplet, NFSAsyncIOComplete error.

                                          • An ESXi host might fail with a purple diagnostic screen during a Storage vMotion operation between NFS datastores, due to a race condition in the request completion path. The race condition happens under heavy I/O load in the NFS stack.

                                          • During disk decommission operation, a race condition in the decommission workflow might cause the ESXi host to fail with a purple diagnostic screen.

                                          • An ESXi host might fail with a purple diagnostic screen if RSS is enabled on a physical NIC. You might see a similar backtrace:
                                            yyyy-mm-dd cpu9:2097316)0x451a8521bc70:[0x41801a68ad30]RSSPlugCleanupRSSEngine@(lb_netqueue_bal)#+0x7d stack: 0x4305e4c1a8e0, 0x41801a68af2b, 0x430db8c6b188, 0x4305e4c1a8e0, 0x0
                                            yyyy-mm-dd cpu9:2097316)0x451a8521bc90:[0x41801a68af2a]RSSPlugInitRSSEngine@(lb_netqueue_bal)#+0x127 stack: 0x0, 0x20c49ba5e353f7cf, 0x4305e4c1a930, 0x4305e4c1a800, 0x4305e4c1a9f0
                                            yyyy-mm-dd cpu9:2097316)0x451a8521bcd0:[0x41801a68b21c]RSSPlug_PreBalanceWork@(lb_netqueue_bal)#+0x1cd stack: 0x32, 0x32, 0x0, 0xe0, 0x4305e4c1a800
                                            yyyy-mm-dd cpu9:2097316)0x451a8521bd30:[0x41801a687752]Lb_PreBalanceWork@(lb_netqueue_bal)#+0x21f stack: 0x4305e4c1a800, 0xff, 0x0, 0x43057f38ea00, 0x4305e4c1a800
                                            yyyy-mm-dd cpu9:2097316)0x451a8521bd80:[0x418019c13b18]UplinkNetqueueBal_BalanceCB@vmkernel#nover+0x6f1 stack: 0x4305e4bc6088, 0x4305e4c1a840, 0x4305e4c1a800, 0x4305e435fb30, 0x4305e4bc6088
                                            yyyy-mm-dd cpu9:2097316)0x451a8521bf00:[0x418019cd1c89]UplinkAsyncProcessCallsHelperCB@vmkernel#nover+0x116 stack: 0x430749e60870, 0x430196737070, 0x451a85223000, 0x418019ae832b, 0x430749e60088
                                            yyyy-mm-dd cpu9:2097316)0x451a8521bf30:[0x418019ae832a]HelperQueueFunc@vmkernel#nover+0x157 stack: 0x430749e600b8, 0x430749e600a8, 0x430749e600e0, 0x451a85223000, 0x430749e600b8
                                            yyyy-mm-dd cpu9:2097316)0x451a8521bfe0:[0x418019d081f2]CpuSched_StartWorld@vmkernel#nover+0x77 stack: 0x0, 0x0, 0x0, 0x0, 0x0

                                          • When the VASA provider sends a rebind request to an ESXi host to switch the binding for a particular vSphere Virtual Volume, the ESXi host attempts to switch the protocol endpoint and other resources to change the binding without interrupting I/O. This operation might fail and cause the ESXi host to fail with a purple diagnostic screen.

                                          • Some IPMI sensors might intermittently change from green to red and the reverse, which creates false hardware health alarms.

                                          • If you install an async 32-bit IMA plug-in while other 32-bit IMA plug-ins are already installed, the system might try to load all of these plug-ins and check whether they are loaded by the QLogic independent iSCSI adapter. During this process, the ESXi host might fail with a memory admission error.

                                          • If you run a reprotect operation by using the Site Recovery Manager, the operation might fail if a virtual machine is late with the initial synchronization. You might see an error similar to:
                                            VR synchronization failed for VRM group {…}. Operation timed out: 10800 seconds

                                          • When I/O throttling is enabled on a protocol endpoint of Virtual Volumes on HPE 3PAR storage, and an I/O on a secondary LUN fails with a TASK_SET_FULL error, the I/O rate on all secondary LUNs that are associated with the protocol endpoint slows down.

                                          • The qlnativefc driver is updated to version 3.1.8.0.

                                          • The nfnic driver is updated to version 4.0.0.17.

                                          • Emulex drivers might write logs to /var/log/EMU/mili/mili2d.log and fill up the 40 MB /var file system of the RAM drive with logs.

                                          • ESXi 6.7 Update 2 adds Solarflare native driver (sfvmk) support for Solarflare 10G and 40G network adaptor devices, such as SFN8542 and SFN8522.

                                          • The lsi-msgpt2 driver is updated to version 20.00.05.00 to address task management enhancements and critical bug fixes.

                                          • You might not be able to locate and turn on the LED for disks under an lsi-msgpt35 controller.

                                          • If a split I/O request is cancelled in the inbox nvme driver, the command completion path might unlock the queue lock due to an incorrect check of the command type. This might cause ESXi hosts to fail with a purple diagnostic screen.

                                          • The VMware inbox nvme driver supports only a doorbell stride value of 0h. You cannot use NVMe devices with a doorbell stride other than 0h, and such devices are not visible.

                                          • For some Dell-branded NVMe devices, and for Marvell NR2241, you might see incorrect NVMe device information when running the esxcfg-scsidevs -a command or looking at the user interface.

                                          • The Broadcom lsi-mr3 driver is upgraded to version MR 7.8.

                                          • The lsi-msgpt35 driver is updated to version 09.00.00.00-1vmw.

                                          • The lsi-msgpt3 driver is updated to version 17.00.01.00-3vmw.

                                          • If you extract a Host Profile from an ESXi host with IPv6 support disabled, and the IPv4 default gateway is overridden while remediating that Host Profile, you might see a message that a virtual network adapter is to be created on a vSphere Distributed Switch (VDS), such as DSwitch0, but the adapter is actually removed from the VDS. This is due to a "::" value of the ipV6DefaultGateway parameter.

                                          • Some guest virtual machines might report their identity as an empty string if the guest is running a later version of VMware Tools than the ESXi host was released with, or if the host cannot recognize the guestID parameter of the GuestInfo object. The issue affects mainly virtual machines with CentOS and VMware Photon OS.

                                          • You might see repetitive messages in the vCenter Web Client, similar to Delete host sub specification nsx if you use network booting for ESXi hosts, due to some dummy code in hostd.

                                          • The installation of VMware Tools unmounts an ISO image file with a name ending in autoinst.iso if that file is already mounted to the virtual machine, and you must click Install VMware Tools again to mount the VMware Tools ISO image file. As a result, the virtual machine might stop responding.

                                          • ESXi hosts might lose connectivity to the vCenter Server system due to a failure of the vpxa agent service as a result of an invalid property update generated by the PropertyCollector object. A race condition in hostd leads to a malformed sequence of property update notification events that causes the invalid property update.
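
                                          The following commands, run from the ESXi Shell, are a minimal sketch of the settings and operations referenced in several of the fixed issues above: the syslog.global.defaultRotate parameter, the /Misc/MCEMonitorInterval advanced option, a manually added NMP claim rule for Lenovo ThinkSystem DE Series arrays, and the esxcli software profile update command. The rotation count, the interval value, the vendor, model, claim option, and description strings, and the datastore path and bundle name are illustrative assumptions; take the actual values from your environment and from the storage vendor's documentation.

                                            # Check and adjust the syslog rotation count (syslog.global.defaultRotate).
                                            esxcli system syslog config get
                                            esxcli system syslog config set --default-rotate=8
                                            esxcli system syslog reload

                                            # Inspect and, if needed, change the /Misc/MCEMonitorInterval advanced option (250 is an illustrative value).
                                            esxcli system settings advanced list -o /Misc/MCEMonitorInterval
                                            esxcli system settings advanced set -o /Misc/MCEMonitorInterval -i 250

                                            # Add an NMP SATP claim rule for a Lenovo ThinkSystem DE Series array
                                            # (vendor, model, claim option, and description are placeholders; use the vendor-documented values).
                                            esxcli storage nmp satp rule add -s VMW_SATP_ALUA -P VMW_PSP_RR -V "LENOVO" -M "DE_Series" -c tpgs_on -e "Lenovo ThinkSystem DE Series"

                                            # Apply an image profile from an offline bundle (the depot path is a placeholder).
                                            esxcli software profile update -d /vmfs/volumes/datastore1/ESXi670-offline-bundle.zip -p ESXi-6.7.0-20190401001s-standard
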

                                        ESXi-6.7.0-20190401001s-standard
                                        Profile Name ESXi-6.7.0-20190401001s-standard
                                        Build For build information, see Patches Contained in this Release.
                                        Vendor VMware, Inc.
                                        Release Date April 11, 2019
                                        Acceptance Level PartnerSupported
                                        Affected Hardware N/A
                                        Affected Software N/A
                                        Affected VIBs
                                        • VMware_bootbank_vsan_6.7.0-1.44.11399678
                                        • VMware_bootbank_esx-update_6.7.0-1.44.12986307
                                        • VMware_bootbank_esx-base_6.7.0-1.44.12986307
                                        • VMware_bootbank_vsanhealth_6.7.0-1.44.11399680
                                        • VMware_locker_tools-light_10.3.5.10430147-12986307
                                        • VMware_bootbank_esx-ui_1.33.3-12923304
                                        PRs Fixed 2228005, 2301046, 2232983, 2222019, 2224776, 2260077, 2232955
                                        Related CVE numbers CVE-2019-5516, CVE-2019-5517, CVE-2019-5520

                                        This patch updates the following issues:

                                          • You might see many error messages such as Heap globalCartel-1 already at its maximum size. Cannot expand. in the vmkernel.log, because the sfcbd service fails to process all forked processes. As a result, the ESXi host might become unresponsive and some operations might fail.

                                          • The NTP daemon is updated to version ntp-4.2.8p12.

                                          • The ESXi userworld libxml2 library is updated to version 2.9.8.

                                          • The Python third party library is updated to version 3.5.6.

                                          • OpenSSH is updated to version 7.9 to resolve the security issue with identifier CVE-2018-15473.

                                          • The OpenSSL package is updated to version openssl-1.0.2r.

                                          • ESXi updates address an out-of-bounds vulnerability with the vertex shader functionality. Exploitation of this issue requires an attacker to have access to a virtual machine with 3D graphics enabled.  Successful exploitation may lead to information disclosure or may allow attackers with normal user privileges to create a denial-of-service condition on their own VM. The workaround for this issue involves disabling the 3D-acceleration feature. This feature is not enabled by default on ESXi. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2019-5516 to this issue. See VMSA-2019-0006 for further information.

                                          • ESXi contains multiple out-of-bounds read vulnerabilities in the shader translator. Exploitation of these issues requires an attacker to have access to a virtual machine with 3D graphics enabled.  Successful exploitation of these issues may lead to information disclosure or may allow attackers with normal user privileges to create a denial-of-service condition on their own VM. The workaround for these issues involves disabling the 3D-acceleration feature. This feature is not enabled by default on ESXi. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2019-5517 to these issues. See VMSA-2019-0006 for further information.

                                          • ESXi updates address an out-of-bounds read vulnerability. Exploitation of this issue requires an attacker to have access to a virtual machine with 3D graphics enabled.  Successful exploitation of this issue may lead to information disclosure. The workaround for this issue involves disabling the 3D-acceleration feature. This feature is not enabled by default on ESXi. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2019-5520 to this issue. See VMSA-2019-0006 for further information.

                                        ESXi-6.7.0-20190401001s-no-tools
                                        Profile Name ESXi-6.7.0-20190401001s-no-tools
                                        Build For build information, see Patches Contained in this Release.
                                        Vendor VMware, Inc.
                                        Release Date April 11, 2019
                                        Acceptance Level PartnerSupported
                                        Affected Hardware N/A
                                        Affected Software N/A
                                        Affected VIBs
                                        • VMware_bootbank_vsan_6.7.0-1.44.11399678
                                        • VMware_bootbank_esx-update_6.7.0-1.44.12986307
                                        • VMware_bootbank_esx-base_6.7.0-1.44.12986307
                                        • VMware_bootbank_vsanhealth_6.7.0-1.44.11399680
                                        • VMware_bootbank_esx-ui_1.33.3-12923304
                                        PRs Fixed 2228005, 2301046, 2232983, 2222019, 2224776, 2260077, 2232955
                                        Related CVE numbers CVE-2019-5516, CVE-2019-5517, CVE-2019-5520

                                        This patch updates the following issues:

                                          • You might see many error messages such as Heap globalCartel-1 already at its maximum size. Cannot expand. in the vmkernel.log, because the sfcbd service fails to process all forked processes. As a result, the ESXi host might become unresponsive and some operations might fail.

                                          • The NTP daemon is updated to version ntp-4.2.8p12.

                                          • The ESXi userworld libxml2 library is updated to version 2.9.8.

                                          • The Python third party library is updated to version 3.5.6.

                                          • OpenSSH is updated to version 7.9 to resolve the security issue with identifier CVE-2018-15473.

                                          • The OpenSSL package is updated to version openssl-1.0.2r.

                                          • ESXi updates address an out-of-bounds vulnerability with the vertex shader functionality. Exploitation of this issue requires an attacker to have access to a virtual machine with 3D graphics enabled.  Successful exploitation may lead to information disclosure or may allow attackers with normal user privileges to create a denial-of-service condition on their own VM. The workaround for this issue involves disabling the 3D-acceleration feature. This feature is not enabled by default on ESXi. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2019-5516 to this issue. See VMSA-2019-0006 for further information.

                                          • ESXi contains multiple out-of-bounds read vulnerabilities in the shader translator. Exploitation of these issues requires an attacker to have access to a virtual machine with 3D graphics enabled.  Successful exploitation of these issues may lead to information disclosure or may allow attackers with normal user privileges to create a denial-of-service condition on their own VM. The workaround for these issues involves disabling the 3D-acceleration feature. This feature is not enabled by default on ESXi. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2019-5517 to these issues. See VMSA-2019-0006 for further information.

                                          • ESXi updates address an out-of-bounds read vulnerability. Exploitation of this issue requires an attacker to have access to a virtual machine with 3D graphics enabled.  Successful exploitation of this issue may lead to information disclosure. The workaround for this issue involves disabling the 3D-acceleration feature. This feature is not enabled by default on ESXi. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2019-5520 to this issue. See VMSA-2019-0006 for further information.

                                        Known Issues

                                        The known issues are grouped as follows.

                                        Miscellaneous
                                        • The ESXi Syslog service might display inaccurate timestamps at start or rotation

                                          The ESXi Syslog service might display inaccurate timestamps when the service starts or restarts, or when the log reaches its maximum size by configuration and rotates.

                                          Workaround: This issue is resolved for fresh installs of ESXi. To fix the issue on the existing configuration for the hostd.log, you must:

                                          1. Open the /etc/vmsyslog.conf.d/hostd.conf file.
                                          2. Replace onrotate = logger -t Hostd < /var/run/vmware/hostdLogHeader.txt with
                                            onrotate = printf '%%s - last log rotation time, %%s\n' "$(date --utc +%%FT%%T.%%3NZ)" "$(cat /var/run/vmware/hostdLogHeader.txt)" | logger -t Hostd
                                          3. Save the changes.
                                          4. Restart the vmsyslogd service by running /etc/init.d/vmsyslogd restart.
                                        • You cannot migrate virtual machines by using vSphere vMotion on different NSX Logical Switches

                                          In ESXi 6.7 Update 2, migration of virtual machines using vSphere vMotion between two different NSX Logical Switches is not supported. You can still migrate virtual machines with vSphere vMotion on the same NSX Logical Switch.

                                          Workaround: None

                                        • When you upgrade to VMware ESXi 6.7 Update 2, the SSH service might stop responding

                                          When you upgrade to VMware ESXi 6.7 Update 2 and boot an ESXi host, the /etc/ssh/sshd_config file might contain Unicode characters. As a result, the SSH configuration might not be updated and the SSH service might stop responding, refusing incoming connections.

                                          Workaround: You must disable and then enable the SSH service manually, following the steps in VMware knowledge base article 2004746.
                                          When you upgrade to VMware ESXi 6.7 Update 2, the SSH update is successful if the first line in the /etc/ssh/sshd_config file displays: # Version 6.7.2.0. For a sketch of how to verify this line and restart the service from the ESXi Shell, see the example after this list.

                                        • Linux guest OS in ESXi hosts might become unresponsive if you run the kdump utility while vIOMMU is enabled

                                          Some Linux guest operating systems might become unresponsive while initializing the kdump kernel if vIOMMU is enabled.

                                          Workaround: If kdump is required, turn off interrupt remapping in the guest OS by appending intremap=off to the kernel boot options in the GRUB configuration, or disable vIOMMU for the virtual machine entirely. For a sketch of the GRUB change, see the example after this list.
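
                                          The following is a minimal sketch, run from the ESXi Shell, of how you might confirm that the SSH configuration was updated after the upgrade and restart the service; the supported procedure remains the one in VMware knowledge base article 2004746.

                                            # The first line of the file should read: # Version 6.7.2.0
                                            head -n 1 /etc/ssh/sshd_config

                                            # Restart the SSH service so that it reloads its configuration
                                            # (assumes the ESXi Shell or DCUI is accessible).
                                            /etc/init.d/SSH restart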

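                                          As a sketch of the kdump workaround above, on a RHEL- or CentOS-style guest you might append intremap=off to the kernel boot options and regenerate the GRUB configuration; file paths and tools differ between distributions, so adjust them for your guest OS.

                                            # Edit /etc/default/grub and append intremap=off to GRUB_CMDLINE_LINUX, for example:
                                            #   GRUB_CMDLINE_LINUX="... crashkernel=auto intremap=off"
                                            # Then regenerate the GRUB configuration and reboot the guest.
                                            grub2-mkconfig -o /boot/grub2/grub.cfg
                                            reboot
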
                                        Known Issues from Earlier Releases

                                        To view a list of previous known issues, click here.
