Release Date: APR 28, 2020
Build Details
Download Filename: | ESXi670-202004002.zip |
Build: | 16075168 |
Download Size: | 474.3 MB |
md5sum: | 7e8314662d52ce4b907741d9dfca3eed |
sha1checksum: | b794622ada10d8cbdd7361d554adb494c3ef6754 |
Host Reboot Required: | Yes |
Virtual Machine Migration or Shutdown Required: | Yes |
Bulletins
Bulletin ID | Category | Severity |
ESXi670-202004401-BG | Bugfix | Critical |
ESXi670-202004402-BG | Bugfix | Important |
ESXi670-202004403-BG | Bugfix | Important |
ESXi670-202004404-BG | Bugfix | Important |
ESXi670-202004405-BG | Bugfix | Important |
ESXi670-202004406-BG | Bugfix | Important |
ESXi670-202004407-BG | Bugfix | Important |
ESXi670-202004408-BG | Bugfix | Important |
ESXi670-202004101-SG | Security | Important |
ESXi670-202004102-SG | Security | Important |
ESXi670-202004103-SG | Security | Important |
Rollup Bulletin
This rollup bulletin contains the latest VIBs with all the fixes since the initial release of ESXi 6.7.
Bulletin ID | Category | Severity |
ESXi670-202004002 | Bugfix | Important |
IMPORTANT: For clusters using VMware vSAN, you must first upgrade the vCenter Server system. Upgrading only the ESXi hosts is not supported.
Before an upgrade, always verify in the VMware Product Interoperability Matrix the compatible upgrade paths from earlier versions of ESXi, vCenter Server, and vSAN to the current version.
Image Profiles
VMware patch and update releases contain general and critical image profiles. Apply the general release image profile to receive all new bug fixes.
Image Profile Name |
ESXi-6.7.0-20200404001-standard |
ESXi-6.7.0-20200404001-no-tools |
ESXi-6.7.0-20200401001s-standard |
ESXi-6.7.0-20200401001s-no-tools |
For more information about the individual bulletins, see the Download Patches page and the Resolved Issues section.
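If you want to confirm which image profiles a downloaded depot contains before applying it, you can list them from the ESXi shell. The following is a minimal sketch; the datastore path is only an illustration of where the ZIP file might be uploaded.
# List the image profiles available in the offline depot (path is illustrative)
esxcli software sources profile list -d /vmfs/volumes/datastore1/ESXi670-202004002.zip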
Patch Download and Installation
The typical way to apply patches to ESXi hosts is by using VMware vSphere Update Manager. For details, see the Installing and Administering VMware vSphere Update Manager documentation.
You can update ESXi hosts by manually downloading the patch ZIP file from the VMware download page and installing the VIBs by using the esxcli software vib command. Additionally, you can update the system by using the image profile and the esxcli software profile command.
For more information, see the vSphere Command-Line Interface Concepts and Examples and the vSphere Upgrade Guide.
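As a rough sketch of the manual workflow described above, the following commands apply the patch from the ESXi shell either by updating VIBs or by applying an image profile. The datastore path is a placeholder; maintenance mode and a reboot are included because this patch requires a host reboot.
# Place the host in maintenance mode (virtual machines must be migrated or shut down first)
esxcli system maintenanceMode set --enable true
# Option 1: update the installed VIBs from the downloaded patch depot (path is illustrative)
esxcli software vib update -d /vmfs/volumes/datastore1/ESXi670-202004002.zip
# Option 2: apply a specific image profile from the same depot
esxcli software profile update -d /vmfs/volumes/datastore1/ESXi670-202004002.zip -p ESXi-6.7.0-20200404001-standard
# Reboot to complete the update, then exit maintenance mode
reboot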
Product Support Notices
Intel Memory Protection Extensions (MPX) is being deprecated with the introduction of Ice Lake CPUs. While this feature continues to be supported, it is not exposed by default to virtual machines at power on. For more information, see VMware knowledge base article 76799 and Set Advanced Virtual Machine Attributes.
Resolved Issues
The resolved issues are grouped as follows.
- ESXi670-202004401-BG
- ESXi670-202004402-BG
- ESXi670-202004403-BG
- ESXi670-202004404-BG
- ESXi670-202004405-BG
- ESXi670-202004406-BG
- ESXi670-202004407-BG
- ESXi670-202004408-BG
- ESXi670-202004101-SG
- ESXi670-202004102-SG
- ESXi670-202004103-SG
- ESXi-6.7.0-20200404001-standard
- ESXi-6.7.0-20200404001-no-tools
- ESXi-6.7.0-20200401001s-standard
- ESXi-6.7.0-20200401001s-no-tools
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2449111, 2436641, 2447585, 2486154, 2466300, 2451413, 2406230, 2443942, 2452877, 2431310, 2407597, 2481222, 2496838, 2320980, 2458186, 2374140, 2504887, 2492286, 2522494, 2486909, 2512739, 2387296, 2481899, 2443483, 2458201, 2515102, 2424969, 2424363, 2495662, 2499073, 2462098, 2482303, 2500832, 2497099, 2438978, 2446482, 2449082, 2449462, 2454662, 2460368, 2465049, 2467765, 2475723, 2422471, 2477972, 2466965, 2467517, 2473823, 2460088, 2467615, 2521131, 2486433, 2487787, 2491652, 2458918, 2443170, 2498721, 2500197, 2498114, 2465248, 2517910, 2521345, 2525720, 2513441, 2490140, 2489409, 2448039, 2516450, 2513433, 2483575, 2497570, 2468784, 2531167, 2444428 |
CVE numbers | N/A |
Updates the esx-base, esx-update, vsan, and vsanhealth VIBs.
- PR 2449111: Under rare conditions an ESXi host might fail with a purple diagnostic screen when collecting CPU performance counters for a virtual machine
Due to a race between the power off operation of a virtual machine and a running query to collect CPU performance counters, a vCenter Server system might dereference a NULL pointer. This might result in ESXi hosts failing with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2436641: Software iSCSI adapters might not appear in the vSphere Client or vSphere Web Client
Due to a memory leak in the iSCSI module, software iSCSI adapters might not appear in the vSphere Client or vSphere Web Client during the target discovery process.
This issue is resolved in this release.
- PR 2447585: An ESXi host with 3D virtual machines might fail with a blue screen error due to an xmap allocation failure
During operations such as migrating 3D virtual machines by using vSphere vMotion, the memory allocation process might return a NULL value. As a result, the VMkernel components might not get access to the memory and cause a failure of the ESXi host.
This issue is resolved in this release.
- PR 2486154: Smart card authentication on an ESXi host might stop working after the ESXi host reboots
After an ESXi host reboots, smart card authentication might stop working, because you might need to re-upload the root certificate of the Active Directory domain to which the ESXi host is joined.
This issue is resolved in this release.
- PR 2466300: Teaming failback policy with beacon detection does not take effect
On rare occasions, a vmnic that recovers from a failure under the teaming and failover policy by using beacon probing might not fall back to the active vmnic.
This issue is resolved in this release.
- PR 2451413: Running the esxcfg-info command returns an output with errors hidden in it
When you run the esxcfg-info command, you might see an output with similar errors hidden in it:
ResourceGroup: Skipping CPU times for: Vcpu Id 3295688 Times due to error: max # of processors: 4 < 3295688
ResourceGroup: Skipping VCPU stats for: Vcpu Id 3295688 Times due to error: max # of processors: 4 < 3295688
This issue is resolved in this release.
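To check whether a host produces these errors without reading the full output interactively, one approach (a sketch; the temporary file path is arbitrary) is to capture the esxcfg-info output and count the error lines:
# Capture the full output and count the spurious resource group errors
esxcfg-info > /tmp/esxcfg-info.out
grep -c "Skipping CPU times" /tmp/esxcfg-info.out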
- PR 2406230: When you configure HTTP and HTTPS proxy settings with a user name and password, adding an online software depot for vSphere Auto Deploy might fail
When you configure HTTP and HTTPS proxy settings with a user name and password, adding an online software depot under Home > Auto Deploy might fail with an error similar to invalid literal for int() with base 10:<proxy>. The colon (:) that introduces the user name and password in the proxy URL might prevent the URL from being parsed correctly.
This issue is resolved in this release.
- PR 2443942: The SNMP service might fail if many IPv6 addresses are assigned by using SLAAC
The SNMP service might fail if IPv6 is enabled on an ESXi host and many IPv6 addresses are assigned by using SLAAC. An error in the logic for looping IPv6 addresses causes the issue. As a result, you cannot monitor the status of ESXi hosts.
This issue is resolved in this release.
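If you monitor hosts over SNMP, you can confirm the agent configuration and whether it is enabled after patching. A minimal check from the ESXi shell; the exact output fields can vary by build:
# Show the current SNMP agent configuration, including the enable state
esxcli system snmp get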
- PR 2452877: Attempt to get the block map of an offline storage might fail the hostd service
When the hostd service tries to get the block map of storage that is currently offline, the service might fail.
This issue is resolved in this release. Part of the fix is to return a correct error message instead of failing the service.
- PR 2431310: If you migrate virtual machines with NVIDIA virtual GPU (vGPU) devices to an ESXi host that has an incompatible vGPU driver, the virtual machines might shut down unexpectedly
If you migrate virtual machines with vGPU devices by using vSphere vMotion to an ESXi host that has an incompatible vGPU driver, the virtual machines might shut down unexpectedly. The issue occurs because NVIDIA disallows certain combinations of their host and guest vGPU drivers.
This issue is resolved in this release. The fix detects potential vGPU incompatibilities early and prevents migrations by using vSphere vMotion so that the virtual machine can continue to run on the source.
- PR 2407597: You cannot use a virtual function (VF) spawned on a bus different than the parent physical function (PF)
If a parent PF is at an SBDF address such as C1:00.0 and spawns VFs starting at, for example, C2:00.0, you cannot use that VF.
This issue is resolved in this release.
- PR 2481222: The hostd service might fail if many retrieveData calls in the virtual machine namespace run within a short period
If many NamespaceManager.retrieveData calls run within a short period of time, the hostd service might fail with an out of memory error. This issue occurs because the result of such calls can be large and the hostd service keeps them for 10 minutes by default.
This issue is resolved in this release. For previous ESXi versions, you can either avoid running many NamespaceManager.retrieveData calls in quick succession, or lower the value of the taskRetentionInMins option in the config.xml file of the hostd service.
- PR 2496838: Secondary virtual machines in vSphere Fault Tolerance might get stuck in VM_STATE_CREATE_SCREENSHOT state and consecutive operations fail
If you attempt to take a screenshot of a secondary virtual machine in vSphere FT, the request might never get a response and the virtual machine remains in a VM_STATE_CREATE_SCREENSHOT state. As a result, any consecutive operation, such as reconfiguration or migration, fails for such virtual machines with an InvalidState error until the hostd service is restarted to clear the transient state.
This issue is resolved in this release. With the fix, invalid screenshot requests for secondary virtual machines in vSphere FT fail directly.
- PR 2320980: After changing an ESXi host GPU mode, virtual machines that require 3D graphics hardware might not power on
If you switch the GPU mode of an ESXi host between virtual shared graphics acceleration (vSGA) and virtual shared passthrough graphics acceleration (vGPU), virtual machines that require 3D graphics hardware might not power on. During such operations, the accel3dSupported property of the HostCapability managed object in the vCenter Server system is not automatically set to TRUE, which causes the issue.
This issue is resolved in this release.
- PR 2458186: NFS 4.1 datastores might become inaccessible after failover or failback operations of storage arrays
When storage array failover or failback operations take place, NFS 4.1 datastores fall into an All-Paths-Down (APD) state. However, after the operations complete, the datastores might remain in APD state and become inaccessible.
This issue is resolved in this release.
- PR 2374140: A small race window during vSphere Distributed Switch (VDS) upgrades might cause ESXi hosts to fail with a purple diagnostic screen
During upgrades of VDS to version 6.6, a small race window might cause ESXi hosts to fail with a purple diagnostic screen.
This issue is resolved in this release. The fix adds a check to handle the race window.
- PR 2504887: Setting space reclamation priority on a VMFS datastore might not work for all ESXi hosts using the datastore
You can change the default space reclamation priority of a VMFS datastore by running the ESXCLI command esxcli storage vmfs reclaim config set with a value for the --reclaim-priority parameter. For example, esxcli storage vmfs reclaim config set --volume-label datastore_name --reclaim-priority none changes the priority of space reclamation, which unmaps unused blocks from the datastore to the LUN backing that datastore, from the default low rate to none. However, the change might only take effect on the ESXi host on which you run the command and not on other hosts that use the same datastore.
This issue is resolved in this release.
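Because the setting previously took effect only on the host where the command ran, it can be worth confirming the effective value on every host that mounts the datastore. A minimal check, reusing the datastore_name placeholder from the example above:
# Run on each host that uses the datastore and confirm the reclaim settings match
esxcli storage vmfs reclaim config get --volume-label datastore_name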
- PR 2492286: Virtual machines stop running and a space error appears on the Summary tab
NFS datastores might have space limits per user directory and if one directory exceeds the quota, I/O operations to other user directories also fail with a
NO_SPACE
error. As a result, some virtual machines stop running. On the Summary tab, you see a message similar to:
There is no more space for virtual disk '<>'. You might be able to continue this session by freeing disk space on the relevant volume, and clicking Retry. Click Cancel to terminate this session.
This issue is resolved in this release.
- PR 2522494: ESXi hosts with 4 or more vSAN disk groups might fail with a purple diagnostic screen due to an out of memory condition in NUMA configurations with 4 or more nodes
A vSAN configuration on an ESXi host with 4 or more disk groups running on a NUMA configuration with 4 or more nodes might exhaust the slab memory resources of the VMkernel on that host dedicated to a given NUMA node. Other NUMA nodes on the same host might have excess of slab memory resources. However, the VMkernel might prematurely generate an out-of-memory condition instead of utilizing the excess slab memory capacity of the other NUMA nodes. As a result, the out of memory condition might cause the ESXi host to fail with a purple diagnostic screen or a vSAN disk group on that host to fail.
This issue is resolved in this release.
- PR 2486909: A rare race condition might cause cloning a virtual machine with a virtual USB controller by using Instant Clone to fail
If an operation for cloning a virtual machine with a configured virtual USB controller by using Instant Clone coincides with a reconfiguration of USB devices running from the guest operating system side, the operation might fail. The vmx process for the cloned virtual machine ends with a core dump. The instant clone task appears as unsuccessful in the vSphere Client and you might see a message similar to The source detected that the destination failed to resume.
This issue is resolved in this release.
- PR 2512739: A rare race condition in a VMFS workflow might cause unresponsiveness of ESXi hosts
A rare race condition between the volume close and unmap paths in the VMFS workflow might cause a deadlock, eventually leading to unresponsiveness of ESXi hosts.
This issue is resolved in this release.
- PR 2387296: VMFS6 datastores fail to open with space error for journal blocks and virtual machines cannot power on
When a large number of ESXi hosts access a VMFS6 datastore and a storage or power outage occurs, all journal blocks in the cluster might experience a memory leak. This results in failures to open or mount VMFS volumes.
This issue is resolved in this release.
- PR 2481899: If a peer physical port does not send sync bits, LACP NICs might go down after a reboot of an ESXi host
After a reboot of an ESXi host, LACP packets might be blocked if a peer physical port does not send sync bits for some reason. As a result, all LACP NICs on the host are down and LAG Management fails.
This issue is resolved in this release. The fix adds the configuration option esxcfg-advcfg -s 1 /Net/LACPActorSystemPriority to unblock LACP packets. This option must be used only if you face the issue. The command does not work on stateless ESXi hosts.
- PR 2443483: A race condition might cause the Auto Deploy service to stop working every several hours
A race condition in the python subprocess module in Auto Deploy might cause the service to stop working every several hours. As a result, you might not be able to register ESXi hosts in the inventory. On restart, Auto Deploy works as expected, but soon fails again.
This issue is resolved in this release.
- PR 2458201: Some vSphere Virtual Volumes snapshot objects might not get a virtual machine UUID metadata tag
During snapshot operations, especially a fast sequence of creating and deleting snapshots, a refresh of the virtual machine configuration might start prematurely. This might cause incorrect updates of the vSphere Virtual Volumes metadata. As a result, some vSphere Virtual Volumes objects, which are part of a newly created snapshot, might remain untagged or get tagged and then untagged with the virtual machine UUID.
This issue is resolved in this release.
- PR 2515102: If a trusted domain goes offline, Active Directory authentications for some user groups might fail
If any of the trusted domains goes offline, Likewise returns no or only a partial set of group memberships for users who are part of any of the groups on the offline domain. As a result, Active Directory authentications for some user groups fail.
This issue is resolved in this release. This fix makes sure Likewise lists all the groups from all the other online domains.
- PR 2424969: If the first attempt of an ESXi host to contact a VASA provider fails, vSphere Virtual Volumes datastores might remain inaccessible
If a VASA provider is not reachable or not responding at the time an ESXi host boots up and tries to mount vSphere Virtual Volumes datastores, the mount operation fails. However, if after some time a VASA provider is available, the ESXi host does not attempt to reconnect to a provider and datastores remain inaccessible.
This issue is resolved in this release.
- PR 2424363: During rebind operations, I/Os might fail with NOT_BOUND error
During rebind operations, the source protocol endpoint of a virtual volume might start failing I/Os with a NOT_BOUND error even when the target protocol endpoint is busy. If the target protocol endpoint is in WAIT_RBZ state and returns a PE_NOT_READY status, the source protocol endpoint must retry the I/Os instead of failing them.
This issue is resolved in this release. With the fix, the upstream relays a BUSY status to the virtual SCSI disk (vSCSI) and the ESXi host operating system to ensure a retry of the I/O.
- PR 2495662: Virtual Objects view displays inaccessible objects for vSAN VM namespace
If the VM namespace path does not include a valid vSAN UUID, the Virtual Objects view displays inaccessible objects for the vSAN VM namespace.
This issue is resolved in this release. Objects with an invalid vSAN UUID are displayed as unknown objects.
- PR 2499073: You might see a VMkernel log spew when VMFS volumes are frequently opened and closed
If VMFS volumes are frequently opened and closed, this might result in a spew of VMkernel logs such as does not support unmap when a volume is opened and Exiting async journal replay manager world when a volume is closed.
This issue is resolved in this release. The logs does not support unmap when a VMFS volume is opened and Exiting async journal replay manager world when a volume is closed are no longer visible.
- PR 2462098: XCOPY requests to Dell/EMC VMAX storage arrays might cause VMFS datastore corruption
Following XCOPY requests to Dell/EMC VMAX storage arrays for migration or cloning of virtual machines, the destination VMFS datastore might become corrupt and go offline.
This issue is resolved in this release. For more information, see VMware knowledge base article 74595.
- PR 2482303: Virtual disk thick provisioned when object space reservation is set to 0
When the provision type of a virtual disk is unset in the provision specification, vSAN creates a thick provisioned VMDK file on the vSAN datastore. This can occur if you provision a VMDK by using the VMODL API, with isThinProvisioned unset in the backingInfo of the VMDK. The value set for object space reservation in the storage profile is ignored.
This issue is resolved in this release. Re-apply the storage profile to existing virtual disks.
- PR 2500832: vCenter Server unresponsive during cluster reconfiguration
vCenter Server might fail while performing cluster reconfiguration tasks.
This issue is resolved in this release.
- PR 2497099: vSAN tasks blocked until Storage vMotion is complete
Some vSAN tasks, such as Reconfigure policy, vCenter Server High Availability, and vSphere DRS enablement operations cannot proceed while Storage vMotion is in progress.
This issue is resolved in this release.
- PR 2438978: Unused port groups might cause high response times of ESXi hosts
Unused port groups might cause high response times of all tasks running on ESXi hosts, such as loading the vSphere Client, powering on virtual machines, and editing settings.
This issue is resolved in this release. The fix optimizes the NetworkSystemVmkImplProvider::GetNetworkInfo method to avoid looping over port groups.
- PR 2446482: The nicmgmtd daemon might run out of resource pool memory and eventually stop responding
The nicmgmtd daemon does frequent mmap and munmap operations with anonymous memory and in certain cases it does not reuse addresses from recently freed memory ranges. This might cause mmaped regions to always have an increasing start address and consume additional L2 and L3 page tables. As a result, nicmgmtd might run out of resource pool memory over time and eventually stop responding.
This issue is resolved in this release.
- PR 2449082: When CIM service is enabled, the length of inquiry response data might exceed the 255 byte limit and cause log spew
In some cases, when the CIM service is enabled, the length of inquiry response data might exceed the 255 byte limit and cause log spew. In the syslog, you might see messages similar to VMwareHypervisorStorageExtent::fillVMwareHypervisorStorageExtentInstance - durable name length is too large: 256.
This issue is resolved in this release.
- PR 2449462: You might not be able to mount a Virtual Volumes storage container due to a stale mount point
If the mount point was busy and a previous unmount operation has failed silently, attempts to mount a Virtual Volumes storage container might fail with an error that the container already exists.
This issue is resolved in this release.
- PR 2454662: If you frequently add and remove uplink ports on a vSphere Distributed Switch, the ESXi host might eventually lose connectivity
If adding an uplink port to a VDS fails for some reason, an ESXi host might become unresponsive while trying to restore the MTU on the uplink. This might happen if you frequently add and remove uplink ports on a VDS.
This issue is resolved in this release.
- PR 2460368: If you upgrade a vSphere Distributed Switch with an active port mirroring configuration, the ESXi host might fail with a purple diagnostic screen
If you upgrade a VDS with an active port mirroring configuration from 6.5 to a later version, the ESXi host might fail with a purple diagnostic screen. You see an error similar to PF Exception 14 in world XXX.
This issue is resolved in this release.
- PR 2465049: The hostd service fails repeatedly with an error message that memory exceeds the hard limit
The hostd service might start failing repeatedly with an error message similar to Panic: Memory exceeds hard limit. The issue occurs if a corrupted Windows ISO image of VMware Tools is active in the productLocker/vmtools folder.
This issue is resolved in this release. With the fix, hostd checks the manifest file of VMware Tools for the currently installed versions and prevents failures due to leaking memory on each check. However, to resolve the root cause of the issue, you must:
- Put the problematic ESXi host in maintenance mode.
- Enable SSH on both an ESXi host that is not affected by the issue and on the affected host.
- Log in as a root user.
- Copy the entire contents of the vmtools folder from the unaffected host to the affected host.
- Run the md5sum command on each copied file on the unaffected host and on the affected host. The results for each pair of files must be the same.
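A minimal way to perform the checksum comparison in the last step, assuming the default /productLocker location on both hosts (the path can differ in your environment):
# Run on the unaffected host and record the checksums of every file in the Tools folder
md5sum /productLocker/vmtools/*
# Run the same command on the affected host after the copy and compare the two outputs;
# every pair of files must have identical checksums
md5sum /productLocker/vmtools/*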
- PR 2467765: Upon failure to bind volumes to protocol endpoint LUNs on an ESXi host, virtual machines on vSphere Virtual Volumes might become inaccessible
If a VASA provider fails to register protocol endpoint IDs discovered on an ESXi host, virtual machines on vSphere Virtual Volumes datastores on this host might become inaccessible. You might see an error similar to vim.fault.CannotCreateFile. A possible reason for failing to register protocol endpoint IDs from an ESXi host is that the SetPEContext() request to the VASA provider fails for some reason. This results in failing any subsequent request for binding virtual volumes, and losing access to data and virtual machines on vSphere Virtual Volumes datastores.
This issue is resolved in this release. The fix reschedules SetPEContext() calls to the VASA provider if a SetPEContext() request on a VASA provider fails. This allows the ESXi host to eventually register discovered protocol endpoint IDs and ensures that volumes on vSphere Virtual Volumes datastores remain accessible.
- PR 2475723: Virtual machines might fail during screenshot capture
A race condition in the screenshot capture code might cause intermittent failures of virtual machines.
This issue is resolved in this release.
- PR 2422471: iSCSI target service fails to add target with auto-generated IQN
If you use the vSphere Client to configure an iSCSI target, and allow the system to auto-generate the IQN, the iSCSI initiator fails to add the target. This problem occurs because the format of the auto-generated IQN is incorrect.
This issue is resolved in this release.
- PR 2477972: If another service requests sensor data from the hostd service during a hardware health check, hostd might fail
If another service requests sensor data from the hostd service during a hardware health check, the hostd service might fail with an error similar to IpmiIfcSdrReadRecordId: retry expired. As a result, you cannot access the ESXi host from the vCenter Server system.
This issue is resolved in this release.
- PR 2466965: ChecksumError exception messages displayed in vsanmgmt.log
Inappropriate error messages might get logged in the vsanmgmt.log file. The following messages in /var/log/vsanmgmt.log can be ignored:
vsi.get /vmkModules/lsom/disks/<cache-disk-uuid>/virstoStats exception
vsi.get /vmkModules/lsom/disks/<cache-disk-uuid>/CFStats exception
vsi.get /vmkModules/lsom/disks/<cache-disk-uuid>/checksumErrors exception
This issue is resolved in this release.
- PR 2467517: vCenter Server fails to install or upgrade due to failed vSAN health service
While evaluating an auto proxy configuration, the vSAN health service might fail. If this occurs, vCenter Server installation or upgrade fails.
This issue is resolved in this release.
- PR 2473823: vSAN performance service is not working
If the vsantop utility fails to exit normally, or the configuration is incorrect, the utility might not start again. This problem might keep the vSAN management daemon from starting. Some vSAN services, such as the performance service, are unavailable.
This issue is resolved in this release.
- PR 2460088: Due to failing I/Os to NFS datastores, virtual machines might become unresponsive
Virtual machine I/Os sent to NFS datastores might fail with an out of memory error and make the virtual machines unresponsive. In the vmkernel.log file, you can see a message similar to: NFS: Failed to convert sgArr to NFSIoInfo: Out of memory.
This issue is resolved in this release.
- PR 2467615: Deleting files from the Content Library might cause the vCenter Server Agent (vpxa) to fail
Deleting files from the Content Library might cause vpxa to fail and the delete operation to be unsuccessful.
This issue is resolved in this release.
- PR 2521131: If you use the Copy Settings from Host option, kernel module settings might not sustain, or iSCSI CHAP secret is lost
When a host profile is updated by using the Copy Settings from Host option, the kernel module parameter configuration policy might change, or an iSCSI CHAP secret might be lost, or both. As a result, you might need to edit the host profile again to modify the kernel module parameter configuration policy to per-host based, or reenter the iSCSI CHAP secret, or both.
This issue is resolved in this release. The kernel module parameter configuration policy persists after updating a host profile, and the CHAP secret is preserved from the old host profile document.
- PR 2486433: ESXi hosts upgrading from vSAN 6.5 run out of free space
In a cluster enabled for deduplication and compression, ESXi hosts that are being upgraded from vSAN 6.5 might run out of space. New provisions and reconfigurations fail due to lack of free space.
This issue is resolved in this release.
- PR 2487787: vSAN health service fails due to an empty log file
If the vSAN health log file is empty or corrupted, the vSAN health service might fail.
This issue is resolved in this release.
- PR 2491652: Setting the SIOControlLoglevel parameter after enabling Storage I/O Control logging might not change log levels
Logging for Storage I/O Control is disabled by default and you must enable it by setting the SIOControlLoglevel parameter to a value from 1 to 7. However, in some cases, changing the parameter value does not change the log level.
This issue is resolved in this release.
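For reference, the log level is changed through the host advanced settings. The sketch below assumes the option lives under the /Misc/ path, which you should verify with the get form on your build before setting a value:
# Read the current Storage I/O Control log level (the /Misc/ path prefix is an assumption)
esxcfg-advcfg -g /Misc/SIOControlLoglevel
# Raise the log level to 7, the most verbose setting; lower it again when done
esxcfg-advcfg -s 7 /Misc/SIOControlLoglevel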
- PR 2458918: If you run CIM queries to a newly added VMkernel port on an ESXi host, the small footprint CIM broker (SFCB) service might fail
If you run a CIM query, such as the enum_instances call, to a newly added VMkernel port on an ESXi host, the SFCB service might fail because it cannot validate the IP address of the new instance. For example, if IPv6 is enabled and configured as static, but the IPv6 address is blank, querying for either of the CIM classes CIM_IPProtocolEndpoint or VMware_KernelIPv6ProtocolEndpoint generates an SFCB coredump.
This issue is resolved in this release.
- PR 2443170: If you use virtualization-based security (VBS) along with export address filtering (EAF), performance of applications on Windows 10 virtual machines might degrade
If you use VBS along with the EAF mitigation of the exploit protection of Windows Defender, performance of any application protected by EAF on Windows 10 virtual machines might degrade.
This issue is resolved in this release.
- PR 2444428: The Advanced Settings option TeamPolicyUpDelay does not work as expected on vSphere Distributed Switch 6.6
You might be able to configure the Advanced Settings option TeamPolicyUpDelay by using the esxcfg-advcfg command in environments using vSphere Distributed Switch 6.6, but the setting does not work as expected.
This issue is resolved in this release.
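For reference, the option can be read and set per host with esxcfg-advcfg, as sketched below; the value 30000 milliseconds is only an example, not a recommendation:
# Show the current teaming failback delay in milliseconds
esxcfg-advcfg -g /Net/TeamPolicyUpDelay
# Set the delay to 30 seconds (example value)
esxcfg-advcfg -s 30000 /Net/TeamPolicyUpDelay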
- PR 2498721: If a hot-plug event occurs on a device during a rescan, ESXi hosts might fail
If an intermittent hot-plug event occurs on a device which is under rescan, the ESXi host might fail with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2500197: You cannot configure Windows virtual machines with more than 128 GB of RAM
Attempts to create or reconfigure Windows virtual machines with more than 128 GB of RAM might fail for the 64-bit versions of Windows 7, Windows 8, and Windows 10.
This issue is resolved in this release.
- PR 2498114: If one of the address families on a dual stack domain controller is not enabled, adding ESXi hosts to the domain might randomly fail
If one of the address families on a dual stack domain controller is not enabled, for example IPv6, a CLDAP ping to an IPv6 address is still possible even after IPv6 is disabled on the controller. This might lead to a timeout and trigger an error that the datacenter is not available, similar to Error: NERR_DCNotFound [code 0x00000995]. As a result, you might fail to add ESXi hosts to the domain.
This issue is resolved in this release.
- PR 2465248: ESXi upgrades fail due to an all-paths-down error of the vmkusb driver
In certain environments, ESXi upgrades might fail due to an all-paths-down error of the vmkusb driver that prevents mounting images from external devices while running scripted installations. If you try to use a legacy driver, the paths might display as active but the script still does not run.
This issue is resolved in this release. The fix ensures that no read or write operations run on LUNs backed by external drives with no media currently available until booting is complete.
- PR 2517910: ESXi hosts might fail with a purple diagnostic screen while loading a kernel module during boot
While loading kernel modules, the VMkernel Module Loader command vmkload_mod, which is used to load device driver and network shaper modules into the VMkernel, might be migrated to another NUMA node for some reason. If this happens, a checksum mismatch across NUMA nodes is possible during the code section initialization. This can cause an ESXi host to fail with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2521345: An ESXi host fails while mounting disk group due to lack of memory
When a disk group is mounted to an ESXi host, PLOG log recovery might consume all of the available memory. The host might fail with a purple diagnostic screen. The vmkernel log might display a message similar to the following:
BucketlistGetNode@LSOMCommon
Bucketlist_InsertWithDiskGroup@LSOMCommon
Rangemap_Insert@LSOMCommon
LSOMLbaTable_Insert@LSOMCommon
[email protected]
SSDLOGEnumLog@LSOMCommon
SSDLOG_EnumLogHelper@LSOMCommon
helpFunc@vmkernel
CpuSched_StartWorld@vmkernel
This issue is resolved in this release.
- PR 2525720: vSphere Web Client virtual objects view displays an error message
If the rhttpproxy port is configured with a value other than the default value of 443, the virtual objects view in the vSphere Web Client displays a blank page with an error message.
This issue is resolved in this release.
- PR 2513441: The vSAN health service reports Unknown disk health state for transient disk error
When the vSAN health service cannot identify a transient disk error, it might trigger an alarm in the Physical disk - Operational health check: Unknown disk health state.
This issue is resolved in this release. Transient disk errors are reported as yellow, not red.
- PR 2490140: The command enum_instances OMC_IpmiLogRecord fails with an error
The command enum_instances OMC_IpmiLogRecord, which returns an instance of the CIM class OMC_IpmiLogRecord, might not work as expected and result in a no instances found error. This happens when RawIpmiProvider is not loaded and fails to respond to the query.
This issue is resolved in this release.
- PR 2489409: CMNewInstance calls from a live installed CIM provider might take time to respond or fail
After live installing a CIM provider, CMNewInstance calls from the provider might take a long time to respond or fail. You must restart WBEM to retry the call.
This issue is resolved in this release.
- PR 2448039: The SNMP service might core dump and fail due to a memory leak if several Link Layer Discovery Protocol (LLDP) peers are reported to the VMkernel sysinfo interface
If several LLDP peers are reported to the VMkernel sysinfo interface, a memory leak might cause the SNMP service to fail and dump a snmpd-zdump core file. As a result, you cannot monitor the status of ESXi hosts. If LLDP is turned off, the SNMP service does not fail.
This issue is resolved in this release.
- PR 2516450: vSAN disk group creation fails to add disks due to memory issues
When you create a disk group in vSAN, internal memory resource limits can be reached. One or more disks might not be added to the disk group. You might see the following error messages in the vmkernel.log:
WARNING: LSOMCommon: IORETRYCreateSlab:3150: Unable to create IORetry slab Out of memory
WARNING: PLOG: PLOG_OpenDevHandles:1858: Unable to create IOretry handle for device 522984ce-343d-2533-930d-05f790e54bf6 : Out of memory
WARNING: Service: VirstoCreateVDiskV4:1506: Could not allocate memory for vDisk id=1
This issue is resolved in this release.
- PR 2513433: Transient errors cause vSAN objects to be marked as degraded
When a storage device experiences a transient error, vSAN might incorrectly mark the device as Failed. vSAN components on the device are marked as Degraded, which eventually triggers a full rebuild of the objects in the cluster.
This issue is resolved in this release.
- PR 2483575: When an ESXi host is joined into multiple transport zones, fetching data for opaque networks of the type nsx.LogicalSwitch might fail
When an ESXi host is joined into multiple transport zones, fetching data for opaque networks of the type nsx.LogicalSwitch might fail, because the esxcli and localcli commands to get VMkernel interface info return only the first found opaque switch. Providing all data does not work when multiple opaque switches exist on the ESXi host.
This issue is resolved in this release.
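After applying the fix, you can check which switch, including an opaque switch, each VMkernel interface reports by using the standard listing command. A minimal sketch; the columns shown depend on the NSX configuration:
# List VMkernel interfaces together with the switch they are attached to
esxcli network ip interface list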
- PR 2497570: The hostd service might fail due to a race condition caused by a heavy use of the NamespaceManager API
A race condition when a virtual machine is a target of a namespace operation but at the same time another call is running to delete or unregister that virtual machine might cause the hostd service to fail. This issue can happen in environments where the NamespaceManager API is heavily used to communicate with a virtual machine guest OS agent.
This issue is resolved in this release.
- PR 2468784: You see Sensor -1 type storage health alarms on ESXi hosts
You might see excessive health events for storage sensors on ESXi hosts even though the state of the sensors has not changed. The health events are triggered in every polling cycle and report the same state of storage sensors.
This issue is resolved in this release.
- PR 2442355: Migrating virtual machines by using vSphere vMotion takes long and eventually fails
When a vSphere vMotion operation for migrating virtual machines is triggered, either manually or automatically, you can see the monitor going into a sleep mode waiting for an event, but this state might take long and never change. The reason is that vSphere vMotion might trigger another event, in parallel with the one the system is waiting for, which causes the long sleep mode.
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the qfle3f VIB.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the sfvmk VIB.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2429068 |
CVE numbers | N/A |
Updates the nfnic VIB to resolve the following issue:
- PR 2429068: Virtual machines might become inaccessible due to wrongly assigned second level LUN IDs (SLLID)
The nfnic driver might intermittently assign wrong SLLID to virtual machines and as a result, Windows and Linux virtual machines might become inaccessible.
This issue is resolved in this release. Make sure that you upgrade the nfnic driver to version 4.0.0.44.
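To confirm which nfnic driver version a host runs before and after the update, a quick check from the ESXi shell (a sketch):
# Show the installed nfnic VIB and its version
esxcli software vib list | grep nfnic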
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the ixgben VIB.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2481771 |
CVE numbers | N/A |
Updates the vmkusb VIB to resolve the following issue:
- PR 2481771: If an ESXi host uses a USB network driver, the host might fail with a purple diagnostic screen due to duplicate TX buffers
On rare occasions, if an ESXi host uses a USB network driver, the host might fail with a purple diagnostic screen due to duplicate TX buffers. You might see an error similar to PF Exception 14 in world 66160:usbus0 cdce_bulk_write_callback.
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2511720 |
CVE numbers | N/A |
Updates the lpfc VIB to resolve the following issue:
- PR 2511720: If an external Data Integrity Field (DIF) support for FC HBAs is enabled, an ESXi host might fail with a purple diagnostic screen due to a race condition
If an ESXi host is connecting to a storage array by using an external DIF and a node path is destroyed at the same time, the host might fail with a purple diagnostic screen. You can see an error for the lpfc_external_dif_cmpl function in the logs.
This issue is resolved in this release.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
Updates the net-vmxnet3 VIB.
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2495187, 2485387, 2485381, 2485379, 2485383, 2485390, 2485385 |
CVE numbers | N/A |
Updates the esx-base, esx-update, vsan, and vsanhealth VIBs to resolve the following issues:
- Update to libxml2 library
The ESXi userworld libxml2 library is updated to version 2.9.10.
- Update to the libcurl library
The ESXi userworld libcurl library is updated to libcurl-7.67.0.
- Update to the Python library
The Python third-party library is updated to version 3.5.8.
- Update to OpenSSL
The OpenSSL package is updated to version openssl-1.0.2u.
- Update to the OpenSSH version
The OpenSSH version is updated to version 8.1p1.
- Update to the Network Time Protocol (NTP) daemon
The NTP daemon is updated to version ntp-4.2.8p13.
- Update of the SQLite database
The SQLite database is updated to version 3.30.1.
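After applying this bulletin, you can sanity-check the host build and the updated userworld OpenSSL from the ESXi shell. A small sketch; it assumes the openssl binary is available in the shell PATH:
# Confirm the ESXi version and build number after patching
vmware -vl
# Report the userworld OpenSSL version, expected to be 1.0.2u after this patch
openssl version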
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2489776 |
CVE numbers | N/A |
This patch updates the tools-light VIB to resolve the following issue:
The following VMware Tools ISO images are bundled with ESXi670-202004002:
windows.iso: VMware Tools 11.0.5 ISO image for Windows Vista (SP2) or later
winPreVista.iso: VMware Tools 10.0.12 ISO image for Windows 2000, Windows XP, and Windows 2003
linux.iso: VMware Tools 10.3.21 ISO image for Linux OS with glibc 2.5 or later
The following VMware Tools 10.3.10 ISO images are available for download:
solaris.iso: VMware Tools image for Solaris
darwin.iso: VMware Tools image for OS X
To download VMware Tools for platforms not bundled with ESXi, follow the procedures listed in the following documents:
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the esx-ui VIB.
Profile Name | ESXi-6.7.0-20200404001-standard |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | April 28, 2020 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2449111, 2436641, 2447585, 2486154, 2466300, 2451413, 2406230, 2443942, 2452877, 2431310, 2407597, 2481222, 2496838, 2320980, 2458186, 2374140, 2504887, 2492286, 2522494, 2486909, 2512739, 2387296, 2481899, 2443483, 2458201, 2515102, 2424969, 2424363, 2495662, 2499073, 2462098, 2482303, 2500832, 2497099, 2438978, 2446482, 2449082, 2449462, 2454662, 2460368, 2465049, 2467765, 2475723, 2422471, 2477972, 2466965, 2467517, 2473823, 2460088, 2467615, 2521131, 2486433, 2487787, 2491652, 2458918, 2443170, 2498721, 2500197, 2498114, 2465248, 2517910, 2521345, 2525720, 2513441, 2490140, 2489409, 2448039, 2516450, 2513433, 2483575, 2497570, 2468784, 2531167, 2444428 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
Due to a race between the power off operation of a virtual machine and a running query to collect CPU performance counters, a vCenter Server system might dereference a NULL pointer. This might result in ESXi hosts failing with a purple diagnostic screen.
-
Due to a memory leak in the iSCSI module, software iSCSI adapters might not appear in the vSphere Client or vSphere Web Client during the target discovery process.
-
During operations such as migrating 3D virtual machines by using vSphere vMotion, the memory allocation process might return a NULL value. As a result, the VMkernel components might not get access to the memory and cause a failure of the ESXi host.
-
After an ESXi host reboots, smart card authentication might stop working, because you might need to re-upload the root certificate of the Active Directory domain to which the ESXi host is joined.
-
On rare occasions, a vmnic that recovers from a failure under the teaming and failover policy by using beacon probing might not fall back to the active vmnic.
-
When you run the esxcfg-info command, you might see an output with similar errors hidden in it:
ResourceGroup: Skipping CPU times for: Vcpu Id 3295688 Times due to error: max # of processors: 4 < 3295688
ResourceGroup: Skipping VCPU stats for: Vcpu Id 3295688 Times due to error: max # of processors: 4 < 3295688
-
When you configure HTTP and HTTPS proxy settings with a user name and password, adding an online software depot under Home > Auto Deploy might fail with an error similar to invalid literal for int() with base 10:<proxy>. The colon (:) that introduces the user name and password in the proxy URL might prevent the URL from being parsed correctly.
-
The SNMP service might fail if IPv6 is enabled on an ESXi host and many IPv6 addresses are assigned by using SLAAC. An error in the logic for looping IPv6 addresses causes the issue. As a result, you cannot monitor the status of ESXi hosts.
-
When the hostd service tries to get the block map of storage that is currently offline, the service might fail.
-
If you migrate virtual machines with vGPU devices by using vSphere vMotion to an ESXi host that has an incompatible vGPU driver, the virtual machines might shut down unexpectedly. The issue occurs because NVIDIA disallows certain combinations of their host and guest vGPU drivers.
-
If you use an FCH SATA controller for direct I/O or passthrough operations on AMD Zen platforms, the default reset method of the controller might cause unexpected platform reboots.
-
If many NamespaceManager.retrieveData calls run within a short period of time, the hostd service might fail with an out of memory error. This issue occurs because the result of such calls can be large and the hostd service keeps them for 10 minutes by default.
-
If you attempt to take a screenshot of a secondary virtual machine in vSphere FT, the request might never get a response and the virtual machine remains in a VM_STATE_CREATE_SCREENSHOT state. As a result, any consecutive operation, such as reconfiguration or migration, fails for such virtual machines with an InvalidState error until the hostd service is restarted to clear the transient state.
-
If you switch the GPU mode of an ESXi host between virtual shared graphics acceleration (vSGA) and virtual shared passthrough graphics acceleration (vGPU), virtual machines that require 3D graphics hardware might not power on. During such operations, the accel3dSupported property of the HostCapability managed object in the vCenter Server system is not automatically set to TRUE, which causes the issue.
-
When storage array failover or failback operations take place, NFS 4.1 datastores fall into an All-Paths-Down (APD) state. However, after the operations complete, the datastores might remain in APD state and become inaccessible.
-
During upgrades of VDS to version 6.6, a small race window might cause ESXi hosts to fail with a purple diagnostic screen.
-
You can change the default space reclamation priority of a VMFS datastore by running the ESXCLI command esxcli storage vmfs reclaim config set with a value for the --reclaim-priority parameter. For example, esxcli storage vmfs reclaim config set --volume-label datastore_name --reclaim-priority none changes the priority of space reclamation, which unmaps unused blocks from the datastore to the LUN backing that datastore, from the default low rate to none. However, the change might only take effect on the ESXi host on which you run the command and not on other hosts that use the same datastore.
-
NFS datastores might have space limits per user directory and if one directory exceeds the quota, I/O operations to other user directories also fail with a
NO_SPACE
error. As a result, some virtual machines stop running. On the Summary tab, you see a message similar to:
There is no more space for virtual disk '<>'. You might be able to continue this session by freeing disk space on the relevant volume, and clicking Retry. Click Cancel to terminate this session.
-
A vSAN configuration on an ESXi host with 4 or more disk groups running on a NUMA configuration with 4 or more nodes might exhaust the slab memory resources of the VMkernel on that host dedicated to a given NUMA node. Other NUMA nodes on the same host might have excess of slab memory resources. However, the VMkernel might prematurely generate an out-of-memory condition instead of utilizing the excess slab memory capacity of the other NUMA nodes. As a result, the out of memory condition might cause the ESXi host to fail with a purple diagnostic screen or a vSAN disk group on that host to fail.
-
If an operation for cloning a virtual machine with a configured virtual USB controller by using Instant Clone coincides with a reconfiguration of USB devices running from the guest operating system side, the operation might fail. The vmx process for the cloned virtual machine ends with a core dump. The instant clone task appears as unsuccessful in the vSphere Client and you might see a message similar to The source detected that the destination failed to resume.
-
A rare race condition between the volume close and unmap paths in the VMFS workflow might cause a deadlock, eventually leading to unresponsiveness of ESXi hosts.
-
When a large number of ESXi hosts access a VMFS6 datastore and a storage or power outage occurs, all journal blocks in the cluster might experience a memory leak. This results in failures to open or mount VMFS volumes.
-
After a reboot of an ESXi host, LACP packets might be blocked if a peer physical port does not send sync bits for some reason. As a result, all LACP NICs on the host are down and LAG Management fails.
-
A race condition in the python subprocess module in Auto Deploy might cause the service to stop working every several hours. As a result, you might not be able to register ESXi hosts in the inventory. On restart, Auto Deploy works as expected, but soon fails again.
-
During snapshot operations, especially a fast sequence of creating and deleting snapshots, a refresh of the virtual machine configuration might start prematurely. This might cause incorrect updates of the vSphere Virtual Volumes metadata. As a result, some vSphere Virtual Volumes objects, which are part of a newly created snapshot, might remain untagged or get tagged and then untagged with the virtual machine UUID.
-
If any of the trusted domains goes offline, Likewise returns no or only a partial set of group memberships for users who are part of any of the groups on the offline domain. As a result, Active Directory authentications for some user groups fail.
-
If a VASA provider is not reachable or not responding at the time an ESXi host boots up and tries to mount vSphere Virtual Volumes datastores, the mount operation fails. However, if after some time a VASA provider is available, the ESXi host does not attempt to reconnect to a provider and datastores remain inaccessible.
-
During rebind operations, the source protocol endpoint of a virtual volume might start failing I/Os with a NOT_BOUND error even when the target protocol endpoint is busy. If the target protocol endpoint is in WAIT_RBZ state and returns a PE_NOT_READY status, the source protocol endpoint must retry the I/Os instead of failing them.
-
If the VM namespace path does not include a valid vSAN UUID, the Virtual Objects view displays inaccessible objects for the vSAN VM namespace.
-
If VMFS volumes are frequently opened and closed, this might result in a spew of VMkernel logs such as does not support unmap when a volume is opened and Exiting async journal replay manager world when a volume is closed.
-
Following XCOPY requests to Dell/EMC VMAX storage arrays for migration or cloning of virtual machines, the destination VMFS datastore might become corrupt and go offline.
-
When the provision type of a virtual disk is unset in the provision specification, vSAN creates a thick provisioned VMDK file on the vSAN datastore. This can occur if you provision a VMDK by using the VMODL API, with isThinProvisioned unset in the backingInfo of the VMDK. The value set for object space reservation in the storage profile is ignored.
-
vCenter Server might fail while performing cluster reconfiguration tasks.
-
Some vSAN tasks, such as Reconfigure policy, vCenter Server High Availability, and vSphere DRS enablement operations cannot proceed while Storage vMotion is in progress.
-
Unused port groups might cause high response times of all tasks running on ESXi hosts, such as loading the vSphere Client, powering on virtual machines, and editing settings.
-
The nicmgmtd daemon does frequent mmap and munmap operations with anonymous memory and in certain cases it does not reuse addresses from recently freed memory ranges. This might cause mmaped regions to always have an increasing start address and consume additional L2 and L3 page tables. As a result, nicmgmtd might run out of resource pool memory over time and eventually stop responding.
-
In some cases, when the CIM service is enabled, the length of inquiry response data might exceed the 255 byte limit and cause log spew. In the syslog, you might see messages similar to VMwareHypervisorStorageExtent::fillVMwareHypervisorStorageExtentInstance - durable name length is too large: 256.
-
If the mount point was busy and a previous unmount operation has failed silently, attempts to mount a Virtual Volumes storage container might fail with an error that the container already exists.
-
If adding an uplink port to a VDS fails for some reason, an ESXi host might become unresponsive while trying to restore the MTU on the uplink. This might happen if you frequently add and remove uplink ports on a VDS.
-
If you upgrade a VDS with an active port mirroring configuration from 6.5 to a later version, the ESXi host might fail with a purple diagnostic screen. You see an error similar to PF Exception 14 in world XXX.
-
The hostd service might start failing repeatedly with an error message similar to Panic: Memory exceeds hard limit. The issue occurs if a corrupted Windows ISO image of VMware Tools is active in the productLocker/vmtools folder.
-
If a VASA provider fails to register protocol endpoint IDs discovered on an ESXi host, virtual machines on vSphere Virtual Volumes datastores on this host might become inaccessible. You might see an error similar to vim.fault.CannotCreateFile. A possible reason for failing to register protocol endpoint IDs from an ESXi host is that the SetPEContext() request to the VASA provider fails for some reason. This results in failing any subsequent request for binding virtual volumes, and losing access to data and virtual machines on vSphere Virtual Volumes datastores.
-
A race condition in the screenshot capture code might cause intermittent failures of virtual machines.
-
If you use the vSphere Client to configure an iSCSI target, and allow the system to auto-generate the IQN, the iSCSI initiator fails to add the target. This problem occurs because the format of the auto-generated IQN is incorrect.
-
If another service requests sensor data from the hostd service during a hardware health check, the hostd service might fail with an error similar to IpmiIfcSdrReadRecordId: retry expired. As a result, you cannot access the ESXi host from the vCenter Server system.
-
Inappropriate error messages might get logged in the vsanmgmt.log file. The following messages in /var/log/vsanmgmt.log can be ignored:
vsi.get /vmkModules/lsom/disks/<cache-disk-uuid>/virstoStats exception
vsi.get /vmkModules/lsom/disks/<cache-disk-uuid>/CFStats exception
vsi.get /vmkModules/lsom/disks/<cache-disk-uuid>/checksumErrors exception -
While evaluating an auto proxy configuration, the vSAN health service might fail. If this occurs, vCenter Server installation or upgrade fails.
-
If the
vsantop
utility fails to exit normally, or the configuration is incorrect, the utility might not start again. This problem might keep the vSAN management daemon from starting. Some vSAN services, such as the performance service, are unavailable. A restart sketch follows this item. -
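A minimal recovery sketch, assuming ESXi Shell access and that the vSAN management daemon runs as the vsanmgmtd service, which is typical for vSAN 6.7 hosts but should be verified on your build:
# Check whether the vSAN management daemon is running
/etc/init.d/vsanmgmtd status
# Restart the daemon so that dependent services, such as the performance service, can recover
/etc/init.d/vsanmgmtd restart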
Virtual machine I/Os sent to NFS datastores might fail with an out of memory error and make the virtual machines unresponsive. In the
vmkernel.log
file, you can see a message similar to: NFS: Failed to convert sgArr to NFSIoInfo: Out of memory
. -
Deleting files from the Content Library might cause vpxa to fail and the delete operation to be unsuccessful.
-
When a host profile is updated by using the Copy Settings from Host option, the kernel module parameter configuration policy might change, or an iSCSI CHAP secret might be lost, or both. As a result, you might need to edit the host profile again to modify the kernel module parameter configuration policy to per-host based, or reenter the iSCSI CHAP secret, or both.
-
In a cluster enabled for deduplication and compression, ESXi hosts that are being upgraded from vSAN 6.5 might run out of space. New provisions and reconfigurations fail due to lack of free space.
-
If the vSAN health log file is empty or corrupted, the vSAN health service might fail.
-
Logging for Storage I/O Control is disabled by default, and you must enable it by setting the SIOControlLoglevel parameter to a log level from 1 to 7. However, in some cases, changing the parameter value does not change the log level. A hedged command sketch follows this item.
-
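A hedged sketch for changing the Storage I/O Control log level from the ESXi Shell; the /Misc/SIOControlLoglevel path is assumed to be the location of this advanced option, so confirm it with the list command first:
# Show the current value of the SIOControlLoglevel advanced option
esxcli system settings advanced list -o /Misc/SIOControlLoglevel
# Set the log level to 7, the most verbose setting; a value of 0 disables logging
esxcli system settings advanced set -o /Misc/SIOControlLoglevel -i 7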
If you run a CIM query, such as the
enum_instances
call, to a newly added VMkernel port on an ESXi host, the SFCB service might fail because it cannot validate the IP address of the new instance. For example, if IPv6 is enabled and configured as static, but the IPv6 address is blank, querying for either of CIM classes CIM_IPProtocolEndpoint
or VMware_KernelIPv6ProtocolEndpoint
generates an SFCB coredump. -
If you use VBS along with the EAF mitigation of the exploit protection of Windows Defender, performance of any application protected by EAF on Windows 10 virtual machines might degrade.
-
If an intermittent hot-plug event occurs on a device which is under rescan, the ESXi host might fail with a purple diagnostic screen.
-
Attempts to create or reconfigure Windows virtual machines with more than 128 GB of RAM might fail for the 64-bit versions of Windows 7, Windows 8, and Windows 10.
-
If one of the address families on a dual-stack domain controller is not enabled, for example IPv6, a CLDAP ping to an IPv6 address is still possible even after disabling IPv6 on the controller. This might lead to a timeout and trigger an error that the datacenter is not available, similar to Error:
NERR_DCNotFound [code 0x00000995]
. As a result, you might fail to add ESXi hosts to the domain. -
In certain environments, ESXi upgrades might fail due to an all-paths-down error of the vmkusb driver that prevents mounting images from external devices during scripted installations. If you try to use a legacy driver, the paths might display as active, but the script still does not run.
-
While loading kernel modules, the VMkernel Module Loader command
vmkload_mod
that is used to load device driver and network shaper modules into the VMkernel might be migrated to another NUMA node for some reason. If this happens, a checksum mismatch across NUMA nodes is possible during the code section initialization. This can cause an ESXi host to fail with a purple diagnostic screen. -
When a disk group is mounted to an ESXi host, PLOG log recovery might consume all of the available memory. The host might fail with a purple diagnostic screen. The vmkernel log might display a message similar to the following:
BucketlistGetNode@LSOMCommon
Bucketlist_InsertWithDiskGroup@LSOMCommon
Rangemap_Insert@LSOMCommon
LSOMLbaTable_Insert@LSOMCommon
[email protected]
SSDLOGEnumLog@LSOMCommon
SSDLOG_EnumLogHelper@LSOMCommon
helpFunc@vmkernel
CpuSched_StartWorld@vmkernel -
If the rhttpproxy port is configured with a value other than the default value of 443, the virtual objects view in the vSphere Web Client displays a blank page with an error message.
-
When the vSAN health service cannot identify a transient disk error, it might trigger an alarm in Physical disk - Operational health check:
Unknown disk health state
. -
The command
enum_instances OMC_IpmiLogRecord
that returns an instance of the CIM class OMC_IpmiLogRecord
, might not work as expected and result in a no instances found
error. This happens when RawIpmiProvider is not loaded and fails to respond to the query. -
After live installing a CIM provider,
CMNewInstance
calls from the provider might take a long time to respond or fail. You must restart WBEM to retry the call. -
If several LLDP peers are reported to the VMkernel sysinfo interface, a memory leak might cause the SNMP service to fail and dump a
snmpd-zdump
core file. As a result, you cannot monitor the status of ESXi hosts. If LLDP is turned off, the SNMP service does not fail. -
When you create a disk group in vSAN, internal memory resource limits can be reached. One or more disks might not be added to the disk group. You might see the following error messages in the vmkernel.log:
WARNING: LSOMCommon: IORETRYCreateSlab:3150: Unable to create IORetry slab Out of memory
WARNING: PLOG: PLOG_OpenDevHandles:1858: Unable to create IOretry handle for device 522984ce-343d-2533-930d-05f790e54bf6 : Out of memory
WARNING: Service: VirstoCreateVDiskV4:1506: Could not allocate memory for vDisk id=1 -
When a storage device experiences a transient error, vSAN might incorrectly mark the device as Failed. vSAN components on the device are marked as Degraded, which eventually triggers a full rebuild of the objects in the cluster.
-
When an ESXi host is joined into multiple transport zones, fetching data for opaque networks of the type nsx.LogicalSwitch might fail, because the
esxcli
and localcli
commands that retrieve VMkernel interface information return only the first opaque switch found. As a result, complete data cannot be provided when multiple opaque switches exist on the ESXi host. -
A race condition when a virtual machine is a target of a namespace operation but at the same time another call is running to delete or unregister that virtual machine might cause the hostd service to fail. This issue can happen in environments where the NamespaceManager API is heavily used to communicate with a virtual machine guest OS agent.
-
You might see excessive health events for storage sensors on ESXi hosts even though the state of the sensors has not changed. The health events are triggered in every polling cycle and report the same state of storage sensors.
-
When a vSphere vMotion operation for migrating virtual machines is triggered, either manually or automatically, the monitor might go into a sleep mode while waiting for an event, and this mode might take long or never end. The reason is that vSphere vMotion might trigger another event in parallel with the one the system is waiting for, which causes the prolonged sleep mode.
-
The nfnic driver might intermittently assign wrong SLLID to virtual machines and as a result, Windows and Linux virtual machines might become inaccessible.
-
In rare occasions, if an ESXi host uses a USB network driver, the host might fail with a purple diagnostic screen due to duplicate TX buffers. You might see an error similar to
PF Exception 14 in world 66160:usbus0 cdce_bulk_write_callback
. -
If an ESXi host is connecting to a storage array by using an external DIF and a node path is destroyed at the same time, the host might fail with a purple diagnostic screen. You can see an error for the
lpfc_external_dif_cmpl
function in the logs. -
You might be able to configure the Advanced Settings option TeamPolicyUpDelay by using the esxcfg-advcfg command in environments using vSphere Distributed Switch 6.6, but the setting does not work as expected. A hedged command sketch follows this item.
-
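A hedged sketch of reading and setting TeamPolicyUpDelay from the ESXi Shell; the /Net/TeamPolicyUpDelay path and the 30000 ms value are assumptions to verify for your environment, and as described above the setting might not take effect on vSphere Distributed Switch 6.6 without this patch:
# Read the current teaming failback delay, in milliseconds
esxcli system settings advanced list -o /Net/TeamPolicyUpDelay
# Set the delay to 30000 ms (example value)
esxcli system settings advanced set -o /Net/TeamPolicyUpDelay -i 30000
# Equivalent legacy syntax
esxcfg-advcfg -s 30000 /Net/TeamPolicyUpDelay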
Profile Name | ESXi-6.7.0-20200404001-no-tools |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | April 28, 2020 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2449111, 2436641, 2447585, 2486154, 2466300, 2451413, 2406230, 2443942, 2452877, 2431310, 2407597, 2481222, 2496838, 2320980, 2458186, 2374140, 2504887, 2492286, 2522494, 2486909, 2512739, 2387296, 2481899, 2443483, 2458201, 2515102, 2424969, 2424363, 2495662, 2499073, 2462098, 2482303, 2500832, 2497099, 2438978, 2446482, 2449082, 2449462, 2454662, 2460368, 2465049, 2467765, 2475723, 2422471, 2477972, 2466965, 2467517, 2473823, 2460088, 2467615, 2521131, 2486433, 2487787, 2491652, 2458918, 2443170, 2498721, 2500197, 2498114, 2465248, 2517910, 2521345, 2525720, 2513441, 2490140, 2489409, 2448039, 2516450, 2513433, 2483575, 2497570, 2468784, 2531167, 2444428 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
Due to a race between the power off operation of a virtual machine and a running query to collect CPU performance counters, a vCenter Server system might dereference a NULL pointer. This might result in ESXi hosts failing with a purple diagnostic screen.
-
Due to a memory leak in the iSCSI module, software iSCSI adapters might not appear in the vSphere Client or vSphere Web Client during the target discovery process.
-
During operations, such as migrating 3D virtual machines by using vSphere vMotion, the memory allocation process might return a
NULL
value. As a result, the VMkernel components might not get access to the memory and cause a failure of the ESXi host. -
After an ESXi host reboots, smart card authentication might stop working, because you might need to re-upload the root certificate of the Active Directory domain to which the ESXi host is joined.
-
In rare occasions, a vmnic that recovers from a failure under a teaming and failover policy that uses beacon probing might not fall back to being the active vmnic.
-
When you run the
esxcfg-info
command, you might see errors similar to the following hidden in the output:
ResourceGroup: Skipping CPU times for: Vcpu Id 3295688 Times due to error: max # of processors: 4 < 3295688
ResourceGroup: Skipping VCPU stats for: Vcpu Id 3295688 Times due to error: max # of processors: 4 < 3295688
-
When you configure HTTP and HTTPs proxy settings with a user name and password, adding an online software depot under Home > Auto Deploy might fail with an error similar to
invalid literal for int() with base 10:<proxy>
. The colon (:) that introduces the user name and password in the proxy URL might prevent the URL from being parsed correctly. -
The SNMP service might fail if IPv6 is enabled on an ESXi host and many IPv6 addresses are assigned by using SLAAC. An error in the logic for looping IPv6 addresses causes the issue. As a result, you cannot monitor the status of ESXi hosts.
-
When the hostd service tries to get the block map of storage that is currently offline, the service might fail.
-
If you migrate virtual machines with vGPU devices by using vSphere vMotion to an ESXi host that has an incompatible vGPU driver, the virtual machines might shut down unexpectedly. The issue occurs because NVIDIA disallows certain combinations of their host and guest vGPU drivers.
-
If you use an FCH SATA controller for direct I/O or passthrough operations on AMD Zen platforms, the default reset method of the controller might cause unexpected platform reboots.
-
If many
NamespaceManager.retrieveData
calls run within a short period of time, the hostd service might fail with an out of memory error. This issue occurs, because the result of such calls can be large and the hostd service keeps them for 10 minutes by default. -
If you attempt to take a screenshot of a secondary virtual machine in vSphere FT, the request might never get a response and the virtual machine remains in a
VM_STATE_CREATE_SCREENSHOT
status. As a result, any subsequent operation, such as reconfiguration or migration, fails for such virtual machines with an InvalidState
error until the hostd service is restarted to clear the transient state. -
If you switch the GPU mode of an ESXi host between virtual shared graphics acceleration (vSGA) and virtual shared passthrough graphics acceleration (vGPU), virtual machines that require 3D graphics hardware might not power on. During such operations, the
accel3dSupported
property of the HostCapability
managed object in the vCenter Server system is not automatically set to TRUE, which causes the issue. -
When storage array failover or failback operations take place, NFS 4.1 datastores fall into an All-Paths-Down (APD) state. However, after the operations complete, the datastores might remain in the APD state and become inaccessible. A status-check sketch follows this item.
-
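A minimal status-check sketch, assuming ESXi Shell access, for confirming whether NFS 4.1 datastores are mounted and accessible after the array failover or failback completes:
# List NFS 4.1 mounts and whether each one is accessible
esxcli storage nfs41 list
# Cross-check the datastore state as seen by the host
esxcli storage filesystem list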
During upgrades of VDS to version 6.6, a small race window might cause ESXi hosts to fail with a purple diagnostic screen.
-
You can change the default space reclamation priority of a VMFS datastore by running the following ESXCLI command
esxcli storage vmfs reclaim config set
with a value for the --reclaim-priority parameter. For example, esxcli storage vmfs reclaim config set --volume-label datastore_name --reclaim-priority none
changes the priority of space reclamation, which is unmapping unused blocks from the datastore to the LUN backing that datastore, to none from the default low rate. However, the change might take effect only on the ESXi host on which you run the command and not on other hosts that use the same datastore. A per-host sketch follows this item. -
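Because the change applies only to the host on which the command runs, a minimal sketch of verifying and setting the reclamation priority on every ESXi host that mounts the datastore; the datastore name is a placeholder:
# Repeat on each ESXi host that mounts the datastore
# Show the current space reclamation settings
esxcli storage vmfs reclaim config get --volume-label datastore_name
# Set the reclamation priority on this host
esxcli storage vmfs reclaim config set --volume-label datastore_name --reclaim-priority none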
NFS datastores might have space limits per user directory and if one directory exceeds the quota, I/O operations to other user directories also fail with a
NO_SPACE
error. As a result, some virtual machines stop running. On the Summary tab, you see a message similar to:
There is no more space for virtual disk '<>'. You might be able to continue this session by freeing disk space on the relevant volume, and clicking _Retry. Click Cancel to terminate this session.
-
A vSAN configuration on an ESXi host with 4 or more disk groups, running on a NUMA configuration with 4 or more nodes, might exhaust the slab memory resources of the VMkernel dedicated to a given NUMA node on that host. Other NUMA nodes on the same host might have an excess of slab memory resources. However, the VMkernel might prematurely generate an out-of-memory condition instead of utilizing the excess slab memory capacity of the other NUMA nodes. As a result, the out-of-memory condition might cause the ESXi host to fail with a purple diagnostic screen, or a vSAN disk group on that host to fail.
-
If an operation for cloning a virtual machine with a configured virtual USB controller by using Instant Clone coincides with a reconfiguration of USB devices running from the guest operating system side, the operation might fail. The vmx process for the cloned virtual machine ends with a core dump. The instant clone task appears as unsuccessful in the vSphere Client and you might see a message similar to
The source detected that the destination failed to resume
. -
A rare race condition between the volume close and unmap paths in the VMFS workflow might cause a deadlock, eventually leading to unresponsiveness of ESXi hosts.
-
When a large number of ESXi hosts access a VMFS6 datastore, in case of a storage or power outage, all journal blocks in the cluster might experience a memory leak. This results in failures to open or mount VMFS volumes.
-
After a reboot of an ESXi host, LACP packets might be blocked if a peer physical port does not send sync bits for some reason. As a result, all LACP NICs on the host are down and LAG management fails. A status-check sketch follows this item.
-
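A minimal sketch, assuming ESXi Shell access, for checking the LACP configuration and negotiated status on the host after a reboot:
# Show the LACP configuration of the link aggregation groups on this host
esxcli network vswitch dvs vmware lacp config get
# Show the negotiated LACP status, including peer sync information, per uplink
esxcli network vswitch dvs vmware lacp status get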
A race condition in the python subprocess module in Auto Deploy might cause the service to stop working every several hours. As a result, you might not be able to register ESXi hosts in the inventory. On restart, Auto Deploy works as expected, but soon fails again.
-
During snapshot operations, especially a fast sequence of creating and deleting snapshots, a refresh of the virtual machine configuration might start prematurely. This might cause incorrect updates of the vSphere Virtual Volumes metadata. As a result, some vSphere Virtual Volumes objects, which are part of a newly created snapshot, might remain untagged or get tagged and then untagged with the virtual machine UUID.
-
If any of the trusted domains goes offline, Likewise returns no or only a partial set of group memberships for users who are part of any of the groups on the offline domain. As a result, Active Directory authentication for some user groups fails.
-
If a VASA provider is not reachable or not responding at the time an ESXi host boots up and tries to mount vSphere Virtual Volumes datastores, the mount operation fails. However, if after some time a VASA provider is available, the ESXi host does not attempt to reconnect to a provider and datastores remain inaccessible.
-
During rebind operations, the source protocol endpoint of a virtual volume might start failing I/Os with a
NOT_BOUND
error even when the target protocol endpoint is busy. If the target protocol endpoint is in WAIT_RBZ
state and returns a status PE_NOT_READY
, the source protocol endpoint must retry the I/Os instead of failing them. -
If the VM namespace path does not include a valid vSAN UUID, the Virtual Objects view displays inaccessible objects for the vSAN VM namespace.
-
If VMFS volumes are frequently opened and closed, this might result in a spew of VMkernel logs such as
does not support unmap
when a volume is opened, and
Exiting async journal replay manager world
when a volume is closed. -
Following XCOPY requests to Dell/EMC VMAX storage arrays for migration or cloning of virtual machines, the destination VMFS datastore might become corrupt and go offline.
-
When the provision type of a virtual disk is unset in the provision specification, vSAN creates a thick provisioned VMDK file on the vSAN datastore. This can occur if you provision a VMDK by using VMODL API, with
isThinProvisioned
unset in backingInfo of the VMDK. The value set for object space reservation in the storage profile is ignored. -
vCenter Server might fail while performing cluster reconfiguration tasks.
-
Some vSAN tasks, such as Reconfigure policy, vCenter Server High Availability, and vSphere DRS enablement operations cannot proceed while Storage vMotion is in progress.
-
Unused port groups might cause high response times of all tasks running on ESXi hosts, such as loading the vSphere Client, powering on virtual machines, and editing settings.
-
The nicmgmtd daemon performs frequent mmap and munmap operations with anonymous memory, and in certain cases it does not reuse addresses from recently freed memory ranges. This might cause memory-mapped regions to always have an increasing start address and consume additional L2 and L3 page tables. As a result, nicmgmtd might run out of resource pool memory over time and eventually stop responding.
-
In some cases, when the CIM service is enabled, the length of inquiry response data might exceed the 255 byte limit and cause log spew. In the syslog, you might see messages similar to
VMwareHypervisorStorageExtent::fillVMwareHypervisorStorageExtentInstance - durable name length is too large: 256
. -
If the mount point is busy and a previous unmount operation failed silently, attempts to mount a Virtual Volumes storage container might fail with an error that the container already exists.
-
If adding an uplink port to a VDS fails for some reason, an ESXi host might become unresponsive while trying to restore the MTU on the uplink. This might happen if you frequently add and remove uplink ports on a VDS.
-
If you upgrade a VDS with an active port mirroring configuration from 6.5 to a later version, the ESXi host might fail with a purple diagnostic screen. You see an error similar to
PF Exception 14 in world XXX
. -
The hostd service might start failing repeatedly with an error message similar to
Panic: Memory exceeds hard limit
. The issue occurs if a corrupted Windows ISO image of VMware Tools is active in the productLocker/vmtools
folder. -
If a VASA provider fails to register protocol endpoint IDs discovered on an ESXi host, virtual machines on vSphere Virtual Volumes datastores on this host might become inaccessible. You might see an error similar to
vim.fault.CannotCreateFile
. A possible reason for failing to register protocol endpoint IDs from an ESXi host is that the SetPEContext()
request to the VASA provider fails for some reason. This results in failing any subsequent request for binding virtual volumes, and losing accessibility to data and virtual machines on vSphere Virtual Volumes datastores. -
A race condition in the screenshot capture code might cause intermittent failures of virtual machines.
-
If you use the vSphere Client to configure an iSCSI target, and allow the system to auto-generate the IQN, the iSCSI initiator fails to add the target. This problem occurs because the format of the auto-generated IQN is incorrect.
-
If another service requests sensor data from the hostd service during a hardware health check, the hostd service might fail with an error similar to
IpmiIfcSdrReadRecordId: retry
expired
. As a result, you cannot access the ESXi host from the vCenter Server system. -
Inappropriate error messages might get logged in the
vsanmgmt.log
file. The following messages in /var/log/vsanmgmt.log
can be ignored:
vsi.get /vmkModules/lsom/disks/<cache-disk-uuid>/virstoStats exception
vsi.get /vmkModules/lsom/disks/<cache-disk-uuid>/CFStats exception
vsi.get /vmkModules/lsom/disks/<cache-disk-uuid>/checksumErrors exception -
While evaluating an auto proxy configuration, the vSAN health service might fail. If this occurs, vCenter Server installation or upgrade fails.
-
If the
vsantop
utility fails to exit normally, or the configuration is incorrect, the utility might not start again. This problem might keep the vSAN management daemon from starting. Some vSAN services, such as the performance service, are unavailable. -
Virtual machine I/Os sent to NFS datastores might fail with an out of memory error and make the virtual machines unresponsive. In the
vmkernel.log
file, you can see a message similar to: NFS: Failed to convert sgArr to NFSIoInfo: Out of memory
. -
Deleting files from the Content Library might cause vpxa to fail and the delete operation to be unsuccessful.
-
When a host profile is updated by using the Copy Settings from Host option, the kernel module parameter configuration policy might change, or an iSCSI CHAP secret might be lost, or both. As a result, you might need to edit the host profile again to modify the kernel module parameter configuration policy to per-host based, or reenter the iSCSI CHAP secret, or both.
-
In a cluster enabled for deduplication and compression, ESXi hosts that are being upgraded from vSAN 6.5 might run out of space. New provisions and reconfigurations fail due to lack of free space.
-
If the vSAN health log file is empty or corrupted, the vSAN health service might fail.
-
Logging for Storage I/O Control is disabled by default, and you must enable it by setting the SIOControlLoglevel parameter to a log level from 1 to 7. However, in some cases, changing the parameter value does not change the log level.
-
If you run a CIM query, such as the
enum_instances
call, to a newly added VMkernel port on an ESXi host, the SFCB service might fail because it cannot validate the IP address of the new instance. For example, if IPv6 is enabled and configured as static, but the IPv6 address is blank, querying for either of CIM classes CIM_IPProtocolEndpoint
or VMware_KernelIPv6ProtocolEndpoint
generates an SFCB coredump. -
If you use VBS along with the EAF mitigation of the exploit protection of Windows Defender, performance of any application protected by EAF on Windows 10 virtual machines might degrade.
-
If an intermittent hot-plug event occurs on a device which is under rescan, the ESXi host might fail with a purple diagnostic screen.
-
Attempts to create or reconfigure Windows virtual machines with more than 128 GB of RAM might fail for the 64-bit versions of Windows 7, Windows 8, and Windows 10.
-
If one of the address families on a dual-stack domain controller is not enabled, for example IPv6, a CLDAP ping to an IPv6 address is still possible even after disabling IPv6 on the controller. This might lead to a timeout and trigger an error that the datacenter is not available, similar to Error:
NERR_DCNotFound [code 0x00000995]
. As a result, you might fail to add ESXi hosts to the domain. -
In certain environments, ESXi upgrades might fail due to an all-paths-down error of the vmkusb driver that prevents mounting images from external devices during scripted installations. If you try to use a legacy driver, the paths might display as active, but the script still does not run.
-
While loading kernel modules, the VMkernel Module Loader command
vmkload_mod
that is used to load device driver and network shaper modules into the VMkernel might be migrated to another NUMA node for some reason. If this happens, a checksum mismatch across NUMA nodes is possible during the code section initialization. This can cause an ESXi host to fail with a purple diagnostic screen. -
When a disk group is mounted to an ESXi host, PLOG log recovery might consume all of the available memory. The host might fail with a purple diagnostic screen. The vmkernel log might display a message similar to the following:
BucketlistGetNode@LSOMCommon
Bucketlist_InsertWithDiskGroup@LSOMCommon
Rangemap_Insert@LSOMCommon
LSOMLbaTable_Insert@LSOMCommon
[email protected]
SSDLOGEnumLog@LSOMCommon
SSDLOG_EnumLogHelper@LSOMCommon
helpFunc@vmkernel
CpuSched_StartWorld@vmkernel -
If the rhttpproxy port is configured with a value other than the default value of 443, the virtual objects view in the vSphere Web Client displays a blank page with an error message.
-
When the vSAN health service cannot identify a transient disk error, it might trigger an alarm in Physical disk - Operational health check:
Unknown disk health state
. -
The command
enum_instances OMC_IpmiLogRecord
that returns an instance of the CIM class OMC_IpmiLogRecord
, might not work as expected and result in a no instances found
error. This happens when RawIpmiProvider is not loaded and fails to respond to the query. -
After live installing a CIM provider,
CMNewInstance
calls from the provider might take a long time to respond or fail. You must restart WBEM to retry the call. -
If several LLDP peers are reported to the VMkernel sysinfo interface, a memory leak might cause the SNMP service to fail and dump a
snmpd-zdump
core file. As a result, you cannot monitor the status of ESXi hosts. If LLDP is turned off, the SNMP service does not fail. -
When you create a disk group in vSAN, internal memory resource limits can be reached. One or more disks might not be added to the disk group. You might see the following error messages in the vmkernel.log:
WARNING: LSOMCommon: IORETRYCreateSlab:3150: Unable to create IORetry slab Out of memory
WARNING: PLOG: PLOG_OpenDevHandles:1858: Unable to create IOretry handle for device 522984ce-343d-2533-930d-05f790e54bf6 : Out of memory
WARNING: Service: VirstoCreateVDiskV4:1506: Could not allocate memory for vDisk id=1 -
When a storage device experiences a transient error, vSAN might incorrectly mark the device as Failed. vSAN components on the device are marked as Degraded, which eventually triggers a full rebuild of the objects in the cluster.
-
When an ESXi host is joined into multiple transport zones, fetching data for opaque networks of the type nsx.LogicalSwitch might fail, because the
esxcli
and localcli
commands that retrieve VMkernel interface information return only the first opaque switch found. As a result, complete data cannot be provided when multiple opaque switches exist on the ESXi host. -
A race condition when a virtual machine is a target of a namespace operation but at the same time another call is running to delete or unregister that virtual machine might cause the hostd service to fail. This issue can happen in environments where the NamespaceManager API is heavily used to communicate with a virtual machine guest OS agent.
-
You might see excessive health events for storage sensors on ESXi hosts even though the state of the sensors has not changed. The health events are triggered in every polling cycle and report the same state of storage sensors.
-
When a vSphere vMotion operation for migrating virtual machines is triggered, either manually or automatically, the monitor might go into a sleep mode while waiting for an event, and this mode might take long or never end. The reason is that vSphere vMotion might trigger another event in parallel with the one the system is waiting for, which causes the prolonged sleep mode.
-
The nfnic driver might intermittently assign wrong SLLID to virtual machines and as a result, Windows and Linux virtual machines might become inaccessible.
-
In rare occasions, if an ESXi host uses a USB network driver, the host might fail with a purple diagnostic screen due to duplicate TX buffers. You might see an error similar to
PF Exception 14 in world 66160:usbus0 cdce_bulk_write_callback
. -
If an ESXi host is connecting to a storage array by using an external DIF and a node path is destroyed at the same time, the host might fail with a purple diagnostic screen. You can see an error for the
lpfc_external_dif_cmpl
function in the logs. -
You might be able to configure the Advanced Settings option TeamPolicyUpDelay by using the esxcfg-advcfg command in environments using vSphere Distributed Switch 6.6, but the setting does not work as expected.
-
Profile Name | ESXi-6.7.0-20200401001s-standard |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | April 28, 2020 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2495187, 2485387, 2485381, 2485379, 2485383, 2485390, 2485385, 2489776 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
The ESXi userworld libxml2 library is updated to version 2.9.10.
-
The ESXi userworld libcurl library is updated to libcurl-7.67.0.
-
The Python third-party library is updated to version 3.5.8.
-
The OpenSSL package is updated to version openssl-1.0.2u.
-
The OpenSSH version is updated to version 8.1p1.
-
The NTP daemon is updated to version ntp-4.2.8p13.
-
The SQLite database is updated to version 3.30.1.
The following VMware Tools ISO images are bundled with ESXi670-202004002:
windows.iso
: VMware Tools 11.0.5 ISO image for Windows Vista (SP2) or laterwinPreVista.iso:
VMware Tools 10.0.12 ISO image for Windows 2000, Windows XP, and Windows 2003linux.iso
: VMware Tools 10.3.21 ISO image for Linux OS with glibc 2.5 or later
The following VMware Tools 10.3.10 ISO images are available for download:
solaris.iso
: VMware Tools image for Solarisdarwin.iso
: VMware Tools image for OS X
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
-
Profile Name | ESXi-6.7.0-20200401001s-no-tools |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | April 28, 2020 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2495187, 2485387, 2485381, 2485379, 2485383, 2485390, 2485385 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
The ESXi userworld libxml2 library is updated to version 2.9.10.
-
The ESXi userworld libcurl library is updated to libcurl-7.67.0.
-
The Python third-party library is updated to version 3.5.8.
-
The OpenSSL package is updated to version openssl-1.0.2u.
-
The OpenSSH version is updated to version 8.1p1.
-
The NTP daemon is updated to version ntp-4.2.8p13.
-
The SQLite database is updated to version 3.30.1.
-
Known Issues
The known issues are grouped as follows.
CIM Issues
- NEW: After an upgrade to ESXi670-202004002, you might see a critical warning in the vSphere Client for the fan health of HP Gen10 servers
After an upgrade to ESXi670-202004002, you might see a critical warning in the vSphere Client for the fan health of HP Gen10 servers due to a false positive validation.
Workaround: Follow the steps described in VMware knowledge base article 78989.
- Quiescing operations of virtual machines on volumes backed by LUNs supporting unmap granularity value greater than 1 MB might take longer than usual
VMFS datastores backed by LUNs that have optimal unmap granularity greater than 1 MB might get into repeated on-disk locking during the automatic unmap processing. This can result in longer running quiescing operations for virtual machines on such datastores.
Workaround: Disable automatic unmapping on volumes backed by LUNs that have an optimal unmap granularity greater than 1 MB. A hedged command sketch follows.
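A hedged sketch of one way to apply the workaround from the ESXi Shell; the datastore name is a placeholder, and the command must be run on each host that mounts the affected datastore:
# Show the current automatic unmap settings for the datastore
esxcli storage vmfs reclaim config get --volume-label my_vmfs_datastore
# Disable automatic unmap by setting the reclamation priority to none
esxcli storage vmfs reclaim config set --volume-label my_vmfs_datastore --reclaim-priority none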
- Due to unavailable P2M slots during migration by using vSphere vMotion, virtual machines might fail or power off
Memory resourcing for virtual machines that require more memory, such as 3D devices, might cause an overflow of the P2M buffer during migration by using vSphere vMotion. As a result, shared memory pages might break. The virtual machines might fail or power off. You might see an error message similar to
P2M reservation failed after max retries
.
Workaround: To manually configure the P2M buffer, follow the steps from VMware knowledge base article 76387.
- If you restore a storage device configuration backed up by using backup.sh after more than 7 days, some settings might be lost
If you restore a backup after more than 7 days, storage device settings, such as
Is perennially reserved,
might be lost. This happens because the last seen timestamps of devices also get backed up, and if a device has not been active for more than 7 days, its entries are deleted from /etc/vmware/esx.conf. As a result, the restore operation might restore older timestamps.
Workaround: None.
- Upgrades to ESXi670-202004002 might fail due to the replacement of an expired digital signing certificate and key
Upgrades to ESXi670-202004002 from some earlier versions of ESXi might fail due to the replacement of an expired digital signing certificate and key. If you use ESXCLI, you see a message
Could not find a trusted signer
. If you use vSphere Update Manager, you see a message similar to cannot execute upgrade script on host.
Workaround: For more details on the issue and workaround, see VMware knowledge base article 76555.
- Attempts to enable vSphere HA on a UEFI Secure Boot enabled cluster might fail with a timeout error
Configuring vSphere HA on a UEFI Secure Boot enabled cluster might fail with
Operation Timed Out
error due to a change in the ESXi VIB certificates.
Workaround: The Fault Domain Manager VIB is signed with a new ESXi VIB certificate, and you must upgrade the ESXi hosts to a later version. For more information on the impacted ESXi versions, see VMware knowledge base article 76555.