Release Date: FEB 23, 2021
What's in the Release Notes
The release notes cover the following topics:
Build Details
Download Filename | ESXi650-202102001.zip |
Build | 17477841 |
Download Size | 483.8 MB |
md5sum | b7dbadd21929f32b5c7e19c7eb60c5ca |
sha1checksum | f13a79ea106e38f41b731c19758a24023c4449ca |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Bulletins
Bulletin ID | Category | Severity |
ESXi650-202102401-BG | Bugfix | Critical |
ESXi650-202102402-BG | Bugfix | Important |
ESXi650-202102101-SG | Security | Important |
ESXi650-202102102-SG | Security | Important |
ESXi650-202102103-SG | Security | Moderate |
Rollup Bulletin
This rollup bulletin contains the latest VIBs with all the fixes since the initial release of ESXi 6.5.
Bulletin ID | Category | Severity |
ESXi650-202102001 | Bugfix | Critical |
IMPORTANT: For clusters using VMware vSAN, you must first upgrade the vCenter Server system. Upgrading only ESXi is not supported.
Before an upgrade, always verify the compatible upgrade paths from earlier versions of ESXi, vCenter Server, and vSAN to the current version in the VMware Product Interoperability Matrix.
Image Profiles
VMware patch and update releases contain general and critical image profiles. The general release image profile applies the new bug fixes.
Image Profile Name |
ESXi-6.5.0-20210204001-standard |
ESXi-6.5.0-20210204001-no-tools |
ESXi-6.5.0-20210201001s-standard |
ESXi-6.5.0-20210201001s-no-tools |
For more information about the individual bulletins, see the Download Patches page and the Resolved Issues section.
Patch Download and Installation
The typical way to apply patches to ESXi hosts is by using VMware vSphere Update Manager. For details, see About Installing and Administering VMware vSphere Update Manager.
ESXi hosts can also be updated by manually downloading the patch ZIP file from the VMware download page and installing the VIBs by using the esxcli software vib command. Additionally, the system can be updated by using the image profile and the esxcli software profile command.
For more information, see vSphere Command-Line Interface Concepts and Examples and the vSphere Upgrade Guide.
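As an illustration, a typical ESXCLI patch session might look like the following sketch. The depot path under /vmfs/volumes/datastore1/ is an assumption for the example; substitute the location where you uploaded the ZIP file.

```
# Enter maintenance mode first; this patch requires VM migration or
# shutdown and a host reboot, as noted in Build Details.
esxcli system maintenanceMode set --enable true

# Option 1: update all installed VIBs from the downloaded depot ZIP.
esxcli software vib update -d /vmfs/volumes/datastore1/ESXi650-202102001.zip

# Option 2: apply a specific image profile from the same depot.
esxcli software profile update -d /vmfs/volumes/datastore1/ESXi650-202102001.zip \
    -p ESXi-6.5.0-20210204001-standard

# Reboot the host to complete the update.
esxcli system shutdown reboot -r "Applying ESXi650-202102001"
```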
Resolved Issues
The resolved issues are grouped as follows.
- ESXi650-202102401-BG
- ESXi650-202102402-BG
- ESXi650-202102101-SG
- ESXi650-202102102-SG
- ESXi650-202102103-SG
- ESXi-6.5.0-20210204001-standard
- ESXi-6.5.0-20210204001-no-tools
- ESXi-6.5.0-20210201001s-standard
- ESXi-6.5.0-20210201001s-no-tools
ESXi650-202102401-BG
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included | |
PRs Fixed | 2571797, 2614876, 2641918, 2649146, 2665433, 2641030, 2643261, 2600691, 2609377, 2663642, 2686646, 2657658, 2610707, 2102346, 2601293, 2678799, 2666929, 2672518, 2677274, 2651994, 2683360, 2634074, 2621142, 2587398, 2649283, 2647832 |
Related CVE numbers | N/A |
This patch updates the esx-base, esx-tboot, vsan, and vsanhealth VIBs to resolve the following issues:
- PR 2571797: Setting a virtual CD/DVD drive to be a client device might cause a virtual machine to power off and get into an invalid state
If you edit the settings of a running virtual machine to change an existing virtual CD/DVD drive to become a client device, in some cases, the virtual machine powers off and gets into an invalid state. You cannot power on or operate the virtual machine after the failure. In the hostd.log, you see an error such as: Expected permission (3) for /dev/cdrom/mpx.vmhba2:C0:T7:L0 not found. If the Virtual Device Node is set to SATA(0:0), in the virtual machine configuration file you see an entry such as: sata0:0.fileName = "/vmfs/devices/cdrom/mpx.vmhba2:C0:T7:L0".
This issue is resolved in this release.
- PR 2614876: You might see loss of network connectivity as a result of a physical switch reboot
The parameter for network teaming failback delay on ESXi hosts, Net.TeamPolicyUpDelay, is currently set at 10 minutes, but in certain environments, a physical switch might take more than 10 minutes to be ready to receive or transmit data after a reboot. As a result, you might see loss of network connectivity.
This issue is resolved in this release. The fix increases the Net.TeamPolicyUpDelay parameter to 30 minutes. You can set the parameter by selecting the host in the vCenter Server interface and navigating to Configure > System > Advanced System Settings > Net.TeamPolicyUpDelay. Alternatively, you can use the command esxcfg-advcfg -s <value> /Net/TeamPolicyUpDelay. A hedged command sketch appears after this issue list.
- PR 2641918: Virtual machine encryption takes a long time and ultimately fails with an error
Virtual machine encryption might take several hours and ultimately fail with the error The file already exists in the hostd logs. The issue occurs if an orphaned or unused file <VM name>.nvram exists in the VM configuration files. If the virtual machines have an entry such as NVRAM = "nvram" in the .vmx file, the encryption operation creates an encrypted file with the NVRAM file extension, which the system considers a duplicate of the existing orphaned file.
This issue is resolved in this release. If you already face the issue, manually delete the orphaned .nvram file before encryption.
- PR 2649146: The vpxa service might intermittently fail due to a malformed string and ESXi hosts lose connectivity to the vCenter Server system
A malformed UTF-8 string might lead to failure of the vpxa service and cause ESXi hosts to lose connectivity to the vCenter Server system. In the hostd logs, you can see records of a completed vim.SimpleCommand task that indicate the issue:
hostd.log:34139:2020-09-17T02:38:19.798Z info hostd[3408470] [Originator@6876 sub=Vimsvc.TaskManager opID=6ba8e50e-90-60f9 user=vpxuser:VSPHERE.LOCAL\Administrator] Task Completed : haTask--vim.SimpleCommand.Execute-853061619 Status success
In the vpxa logs, you see messages such as:
vpxa.log:7475:2020-09-17T02:38:19.804Z info vpxa[3409126] [Originator@6876 sub=Default opID=WFU-53423ccc] [Vpxa] Shutting down now
This issue is resolved in this release.
- PR 2665433: If you disable RC4, Active Directory user authentication on ESXi hosts might fail
If you disable RC4 from your Active Directory configuration, user authentication to ESXi hosts might start to fail with Failed to authenticate user errors.
This issue is resolved in this release.
- PR 2641030: Changes in the Distributed Firewall (DFW) filter configuration might cause virtual machines to lose network connectivity
Any DFW filter reconfiguration activity, such as adding or removing filters, might cause some filters to start dropping packets. As a result, virtual machines lose network connectivity and you need to reset the vmnic, change the port group, or reboot the virtual machine to restore traffic. In the output of the summarize-dvfilter command, you see the state IOChain Detaching for the failed filter.
This issue is resolved in this release.
- PR 2643261: Virtual machines lose connectivity to other virtual machines and the network
A rare failure of parsing strings in the vSphere Network Appliance (DVFilter) properties of a vSphere Distributed Switch might cause all traffic to and from virtual machines on a given logical switch to fail.
This issue is resolved in this release.
- PR 2600691: An ESXi host unexpectedly goes into maintenance mode due to a hardware issue reported by Proactive HA
If you configure a filter on your vCenter Server system by using the Proactive HA feature to bypass certain health updates, such as to block failure conditions, the filter might not take effect after a restart of the vpxd service. As a result, an ESXi host might unexpectedly go into maintenance mode due to a hardware issue reported by Proactive HA.
This issue is resolved in this release.
- PR 2609377: ESXi hosts might fail with a purple diagnostic screen during a DVFilter configuration operation
Due to a rare race condition, while creating an instance of the DVFilter agent, the agent might receive a routine configuration change message before it is completely set up. The race condition might cause the ESXi host to fail with a purple diagnostic screen. You see an error such as Exception 14 (Page Fault) in the debug logs.
This issue is resolved in this release.
- PR 2663642: Virtual machines lose network connectivity after migration operations by using vSphere vMotion
If the UUID of a virtual machine changes, such as after a migration by using vSphere vMotion, and the virtual machine has a vNIC on an NSX-T managed switch, the VM loses network connectivity. The vNIC cannot reconnect.
This issue is resolved in this release.
- PR 2686646: ESXi hosts fail with a purple diagnostic screen displaying a vmkernel error of type Spin count exceeded / Possible deadlock
ESXi hosts fail with a purple diagnostic screen displaying vmkernel errors such as:
Panic Details: Crash at xxxx-xx-xx:xx:xx.xxxx on CPU xx running world xxx - HBR[xx.xx.xx.xx]:xxx:xxx
Panic Message: @BlueScreen: Spin count exceeded - possible deadlock with PCPU 24
This issue is resolved in this release.
- PR 2657658: After upgrade of HPE servers to HPE Integrated Lights-Out 5 (iLO 5) firmware version 2.30, you see memory sensor health alerts
After upgrading HPE servers, such as HPE ProLiant Gen10 and Gen10 Plus, to iLO 5 firmware version 2.30, in the vSphere Web Client you see memory sensor health alerts. The issue occurs because the hardware health monitoring system does not appropriately decode the Mem_Stat_* sensors when the first LUN is enabled after the upgrade.
This issue is resolved in this release.
- PR 2610707: An ESXi host becomes unresponsive and you see an error such as hostd detected to be non-responsive in the Direct Console User Interface (DCUI)
An ESXi host might lose connectivity to your vCenter Server system and you cannot access the host by using either the vSphere Client or the vSphere Web Client. In the DCUI, you see the message ALERT: hostd detected to be non-responsive. The issue occurs due to memory corruption in the CIM plug-in while it fetches sensor data for periodic hardware health checks.
This issue is resolved in this release.
- PR 2102346: You might not see NFS datastores using a fault tolerance solution by Nutanix mounted in the vCenter Server after an ESXi host reboot
You might not see NFS datastores using a fault tolerance solution by Nutanix mounted on the vCenter Server system after an ESXi host reboot. However, you can see the volumes in the ESXi host.
This issue is resolved in this release.
- PR 2601293: Operations with stateless ESXi hosts might not pick the expected remote disk for system cache, which causes remediation or compliance issues
Operations with stateless ESXi hosts, such as storage migration, might not pick the expected remote disk for system cache. For example, you want to keep the new boot LUN as LUN 0, but vSphere Auto Deploy picks LUN 1.
This issue is resolved in this release. The fix provides a consistent way to sort the remote disks and always picks the disk with the lowest LUN ID. To make sure you enable the fix, follow these steps:
1. On the Edit host profile page of the Auto Deploy wizard, select Advanced Configuration Settings > System Image Cache Configuration > System Image Cache Configuration.
2. In the System Image Cache Profile Settings drop-down menu, select Enable stateless caching on the host.
3. Edit Arguments for first disk by replacing remote with sortedremote and/or remoteesx with sortedremoteesx.
- PR 2678799: An ESXi host might become unresponsive due to a failure of the hostd service
A missing NULL check in a vim.VirtualDiskManager.revertToChildDisk operation, triggered by VMware vSphere Replication on virtual disks that do not support this operation, might cause the hostd service to fail. As a result, the ESXi host loses connectivity to the vCenter Server system.
This issue is resolved in this release.
- PR 2666929: Hidden portsets, such as pps and DVFilter Coalesce Portset, might become visible after an ESXi upgrade
On rare occasions, when the hostd service refreshes the networking configuration after an ESXi upgrade, in the vSphere Web Client or the vSphere Client you might see hidden portsets, such as pps and DVFilter Coalesce Portset, that are not part of the configuration.
This issue is resolved in this release.
- PR 2672518: Virtual machine operations on NFS3 datastores might fail due to a file system error
Virtual machine operations, such as power on and backup, might fail on NFS3 datastores due to a JUKEBOX error returned by the NFS server.
This issue is resolved in this release. The fix retries NFS operations in case of a JUKEBOX error.
- PR 2677274: If the Xorg process fails to restart while an ESXi host exits maintenance mode, the hostd service might become unresponsive
If the Xorg process fails to restart while an ESXi host exits maintenance mode, the hostd service might become unresponsive as it cannot complete the exit operation.
This issue is resolved in this release.
- PR 2651994: A storage outage might cause errors in your environment due to a disk layout issue in virtual machines
After a brief storage outage, it is possible that upon recovery of the virtual machines, the disk layout is not refreshed and remains incomplete. As a result, you might see errors in your environment. For example, in the View Composer logs in a VMware Horizon environment, you might see a repeating error such as InvalidSnapshotDiskConfiguration.
This issue is resolved in this release.
- PR 2683360: If the LACP uplink of a destination ESXi host is down, vSphere vMotion operations might fail
If you have more than one LAG in your environment and the LACP uplink of a destination ESXi host is down, vSphere vMotion operations might fail with a compatibility error. In the vSphere Web Client or vSphere Client, you might see an error such as: Currently connected network interface 'Network Adapter 1' uses network 'xxxx', which is not accessible.
This issue is resolved in this release.
- PR 2634074: If you change the swap file location of a virtual machine, the virtual machine might become invalid after a hard reset
If you enable host-local swap for a standalone ESXi host, or if you change the swap file datastore to another shared storage location while a virtual machine is running, the virtual machine might be terminated and marked as invalid after a hard reset.
This issue is resolved in this release.
- PR 2621142: In the vSphere Web Client, you cannot change the log level configuration of the VPXA service after an upgrade of your vCenter Server system
In the vSphere Web Client, you might not be able to change the log level configuration of the VPXA service on an ESXi host due to a missing or invalid Vpx.Vpxa.config.log.level option after an upgrade of your vCenter Server system.
This issue is resolved in this release. The VPXA service automatically sets a valid value for the Vpx.Vpxa.config.log.level option and exposes it to the vSphere Web Client.
- PR 2587398: Virtual machines become unresponsive after power-on, with the VMware logo on screen
If a network adapter is replaced or the network adapter address is changed, Linux virtual machines that use EFI firmware and iPXE to boot from a network might become unresponsive. For example, the issue occurs when you convert such a virtual machine to a virtual machine template, and then deploy other virtual machines from that template.
This issue is resolved in this release.
- PR 2649283: Insufficient heap of a DVFilter agent might cause virtual machine migration by using vSphere vMotion to fail
The DVFilter agent uses a common heap to allocate space for both its internal structures and buffers, as well as for the temporary allocations used for moving the state of client agents during vSphere vMotion operations. In some deployments and scenarios, the filter states can be very large and exhaust the heap during vSphere vMotion operations. In the vmware.log file, you see an error such as Failed waiting for data. Error bad0014. Out of memory.
This issue is resolved in this release.
- PR 2647832: The Ruby vSphere Console (RVC) fails to print vSAN disk object information for an ESXi host
The RVC command vsan.disk_object_info might fail to print information for one or more ESXi hosts. The command returns the following error: Server failed to query vSAN objects-on-disk.
This issue is resolved in this release.
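As referenced in PR 2614876 above, you can inspect and adjust the teaming failback delay from the ESXi Shell. The following is a minimal sketch; the value 1800000 assumes the option is expressed in milliseconds (30 minutes), so confirm the unit for your build before applying it.

```
# Show the current teaming failback delay.
esxcli system settings advanced list -o /Net/TeamPolicyUpDelay

# Set the delay; 1800000 assumes the option takes milliseconds (30 minutes).
esxcli system settings advanced set -o /Net/TeamPolicyUpDelay -i 1800000

# Equivalent legacy form cited in the fix description:
esxcfg-advcfg -s 1800000 /Net/TeamPolicyUpDelay
```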
ESXi650-202102402-BG
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included | VMW_bootbank_nvme_1.2.2.28-4vmw.650.3.157.17477841 |
PRs Fixed | 2658926 |
Related CVE numbers | N/A |
This patch updates the nvme VIB to resolve the following issue:
- PR 2658926: SSD firmware updates by using third-party tools might fail because the ESXi NVMe driver does not permit vendor-specific NVMe opcodes
The NVMe driver provides a management interface that allows you to build tools to pass through NVMe admin commands. However, the data transfer direction for vendor-specific commands is set to device-to-host. As a result, some vendor-specific write commands cannot complete successfully. For example, a flash command from the Marvell CLI utility fails to flash the firmware of an NVMe SSD on an HPE Tinker controller.
This issue is resolved in this release.
ESXi650-202102101-SG
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included | |
PRs Fixed | 2686295, 2686296, 2689538 |
Related CVE numbers | CVE-2021-21974 |
This patch updates the esx-base, esx-tboot, vsan, and vsanhealth VIBs to resolve the following issues:
OpenSLP as used in ESXi has a heap-overflow vulnerability. A malicious actor residing within the same network segment as ESXi, who has access to port 427, might trigger the heap-overflow issue in the OpenSLP service, resulting in remote code execution. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2021-21974 to this issue. For more information, see VMware Security Advisory VMSA-2021-0002.
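For hosts that cannot take the patch immediately, VMSA-2021-0002 references a workaround that disables the SLP service. The steps below are a sketch of that approach based on the advisory's guidance; consult the advisory and its linked KB article for the authoritative procedure.

```
# Stop the SLP service (workaround only; this patch is the actual fix).
/etc/init.d/slpd stop

# Disable the CIMSLP firewall ruleset so port 427 is no longer reachable.
esxcli network firewall ruleset set -r CIMSLP -e 0

# Prevent slpd from starting again after a reboot.
chkconfig slpd off
```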
- Update of the SQLite database
The SQLite database is updated to version 3.33.0.
- Update to OpenSSL
The OpenSSL package is updated to version openssl-1.0.2x.
- Update to cURL
The cURL library is updated to 7.72.0.
- Update to the Python library
The Python third-party library is updated to version 3.5.10.
- Update to the Network Time Protocol (NTP) daemon
The NTP daemon is updated to version ntp-4.2.8p15.
ESXi650-202102102-SG
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included | VMW_bootbank_net-e1000_8.0.3.1-5vmw.650.3.153.17459147 |
PRs Fixed | N/A |
Related CVE numbers | N/A |
This patch updates the net-e1000 VIB.
ESXi650-202102103-SG
Patch Category | Security |
Patch Severity | Moderate |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included | VMware_locker_tools-light_6.5.0-3.153.17459147 |
PRs Fixed | 2687436 |
Related CVE numbers | N/A |
This patch updates the tools-light VIB to resolve the following issue:
The following VMware Tools ISO images are bundled with ESXi650-202102001:
- windows.iso: VMware Tools 11.2.1 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.
- linux.iso: VMware Tools 10.3.23 ISO image for Linux OS with glibc 2.11 or later.
The following VMware Tools ISO images are available for download:
- VMware Tools 10.0.12:
  - winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003.
  - linuxPreGLibc25.iso: for Linux OS with a glibc version less than 2.5.
- VMware Tools 11.0.6:
  - windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).
- solaris.iso: VMware Tools image 10.3.10 for Solaris.
- darwin.iso: Supports Mac OS X versions 10.11 and later.
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
- VMware Tools 11.2.1 Release Notes
- Earlier versions of VMware Tools
- What Every vSphere Admin Must Know About VMware Tools
- VMware Tools for hosts provisioned with Auto Deploy
- Updating VMware Tools
Profile Name | ESXi-6.5.0-20210204001-standard |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | February 4, 2021 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs | |
PRs Fixed | 2571797, 2614876, 2641918, 2649146, 2665433, 2641030, 2643261, 2600691, 2609377, 2663642, 2686646, 2657658, 2610707, 2102346, 2601293, 2678799, 2666929, 2672518, 2677274, 2651994, 2683360, 2634074, 2621142, 2587398, 2649283, 2647832, 2658926 |
Related CVE numbers | N/A |
This patch resolves the following issues:
- If you edit the settings of a running virtual machine to change an existing virtual CD/DVD drive to become a client device, in some cases, the virtual machine powers off and gets into an invalid state. You cannot power on or operate the virtual machine after the failure. In the hostd.log, you see an error such as: Expected permission (3) for /dev/cdrom/mpx.vmhba2:C0:T7:L0 not found. If the Virtual Device Node is set to SATA(0:0), in the virtual machine configuration file you see an entry such as: sata0:0.fileName = "/vmfs/devices/cdrom/mpx.vmhba2:C0:T7:L0".
- The parameter for network teaming failback delay on ESXi hosts, Net.TeamPolicyUpDelay, is currently set at 10 minutes, but in certain environments, a physical switch might take more than 10 minutes to be ready to receive or transmit data after a reboot. As a result, you might see loss of network connectivity.
- Virtual machine encryption might take several hours and ultimately fail with the error The file already exists in the hostd logs. The issue occurs if an orphaned or unused file <VM name>.nvram exists in the VM configuration files. If the virtual machines have an entry such as NVRAM = "nvram" in the .vmx file, the encryption operation creates an encrypted file with the NVRAM file extension, which the system considers a duplicate of the existing orphaned file.
- A malformed UTF-8 string might lead to failure of the vpxa service and cause ESXi hosts to lose connectivity to the vCenter Server system. In the hostd logs, you can see records of a completed vim.SimpleCommand task that indicate the issue:
  hostd.log:34139:2020-09-17T02:38:19.798Z info hostd[3408470] [Originator@6876 sub=Vimsvc.TaskManager opID=6ba8e50e-90-60f9 user=vpxuser:VSPHERE.LOCAL\Administrator] Task Completed : haTask--vim.SimpleCommand.Execute-853061619 Status success
  In the vpxa logs, you see messages such as:
  vpxa.log:7475:2020-09-17T02:38:19.804Z info vpxa[3409126] [Originator@6876 sub=Default opID=WFU-53423ccc] [Vpxa] Shutting down now
- If you disable RC4 from your Active Directory configuration, user authentication to ESXi hosts might start to fail with Failed to authenticate user errors.
- Any DFW filter reconfiguration activity, such as adding or removing filters, might cause some filters to start dropping packets. As a result, virtual machines lose network connectivity and you need to reset the vmnic, change the port group, or reboot the virtual machine to restore traffic. In the output of the summarize-dvfilter command, you see the state IOChain Detaching for the failed filter.
- A rare failure of parsing strings in the vSphere Network Appliance (DVFilter) properties of a vSphere Distributed Switch might cause all traffic to and from virtual machines on a given logical switch to fail.
- If you configure a filter on your vCenter Server system by using the Proactive HA feature to bypass certain health updates, such as to block failure conditions, the filter might not take effect after a restart of the vpxd service. As a result, an ESXi host might unexpectedly go into maintenance mode due to a hardware issue reported by Proactive HA.
- Due to a rare race condition, while creating an instance of the DVFilter agent, the agent might receive a routine configuration change message before it is completely set up. The race condition might cause the ESXi host to fail with a purple diagnostic screen. You see an error such as Exception 14 (Page Fault) in the debug logs.
- If the UUID of a virtual machine changes, such as after a migration by using vSphere vMotion, and the virtual machine has a vNIC on an NSX-T managed switch, the VM loses network connectivity. The vNIC cannot reconnect.
- ESXi hosts fail with a purple diagnostic screen displaying vmkernel errors such as:
  Panic Details: Crash at xxxx-xx-xx:xx:xx.xxxx on CPU xx running world xxx - HBR[xx.xx.xx.xx]:xxx:xxx
  Panic Message: @BlueScreen: Spin count exceeded - possible deadlock with PCPU 24
- After upgrading HPE servers, such as HPE ProLiant Gen10 and Gen10 Plus, to iLO 5 firmware version 2.30, in the vSphere Web Client you see memory sensor health alerts. The issue occurs because the hardware health monitoring system does not appropriately decode the Mem_Stat_* sensors when the first LUN is enabled after the upgrade.
- An ESXi host might lose connectivity to your vCenter Server system and you cannot access the host by using either the vSphere Client or the vSphere Web Client. In the DCUI, you see the message ALERT: hostd detected to be non-responsive. The issue occurs due to memory corruption in the CIM plug-in while it fetches sensor data for periodic hardware health checks.
- You might not see NFS datastores using a fault tolerance solution by Nutanix mounted on the vCenter Server system after an ESXi host reboot. However, you can see the volumes in the ESXi host.
- Operations with stateless ESXi hosts, such as storage migration, might not pick the expected remote disk for system cache. For example, you want to keep the new boot LUN as LUN 0, but vSphere Auto Deploy picks LUN 1.
- A missing NULL check in a vim.VirtualDiskManager.revertToChildDisk operation, triggered by VMware vSphere Replication on virtual disks that do not support this operation, might cause the hostd service to fail. As a result, the ESXi host loses connectivity to the vCenter Server system.
- On rare occasions, when the hostd service refreshes the networking configuration after an ESXi upgrade, in the vSphere Web Client or the vSphere Client you might see hidden portsets, such as pps and DVFilter Coalesce Portset, that are not part of the configuration.
- Virtual machine operations, such as power on and backup, might fail on NFS3 datastores due to a JUKEBOX error returned by the NFS server.
- If the Xorg process fails to restart while an ESXi host exits maintenance mode, the hostd service might become unresponsive as it cannot complete the exit operation.
- After a brief storage outage, it is possible that upon recovery of the virtual machines, the disk layout is not refreshed and remains incomplete. As a result, you might see errors in your environment. For example, in the View Composer logs in a VMware Horizon environment, you might see a repeating error such as InvalidSnapshotDiskConfiguration.
- If you have more than one LAG in your environment and the LACP uplink of a destination ESXi host is down, vSphere vMotion operations might fail with a compatibility error. In the vSphere Web Client or vSphere Client, you might see an error such as: Currently connected network interface 'Network Adapter 1' uses network 'xxxx', which is not accessible.
- If you enable host-local swap for a standalone ESXi host, or if you change the swap file datastore to another shared storage location while a virtual machine is running, the virtual machine might be terminated and marked as invalid after a hard reset.
- In the vSphere Web Client, you might not be able to change the log level configuration of the VPXA service on an ESXi host due to a missing or invalid Vpx.Vpxa.config.log.level option after an upgrade of your vCenter Server system.
- If a network adapter is replaced or the network adapter address is changed, Linux virtual machines that use EFI firmware and iPXE to boot from a network might become unresponsive. For example, the issue occurs when you convert such a virtual machine to a virtual machine template, and then deploy other virtual machines from that template.
- The DVFilter agent uses a common heap to allocate space for both its internal structures and buffers, as well as for the temporary allocations used for moving the state of client agents during vSphere vMotion operations. In some deployments and scenarios, the filter states can be very large and exhaust the heap during vSphere vMotion operations. In the vmkernel logs, you see an error such as Failed waiting for data. Error bad0014. Out of memory.
- The RVC command vsan.disk_object_info might fail to print information for one or more ESXi hosts. The command returns the following error: Server failed to query vSAN objects-on-disk.
- The NVMe driver provides a management interface that allows you to build tools to pass through NVMe admin commands. However, the data transfer direction for vendor-specific commands is set to device-to-host. As a result, some vendor-specific write commands cannot complete successfully. For example, a flash command from the Marvell CLI utility fails to flash the firmware of an NVMe SSD on an HPE Tinker controller.
Profile Name | ESXi-6.5.0-20210204001-no-tools |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | February 4, 2021 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs | |
PRs Fixed | 2571797, 2614876, 2641918, 2649146, 2665433, 2641030, 2643261, 2600691, 2609377, 2663642, 2686646, 2657658, 2610707, 2102346, 2601293, 2678799, 2666929, 2672518, 2677274, 2651994, 2683360, 2634074, 2621142, 2587398, 2649283, 2647832, 2658926 |
Related CVE numbers | N/A |
This patch resolves the following issues:
- If you edit the settings of a running virtual machine to change an existing virtual CD/DVD drive to become a client device, in some cases, the virtual machine powers off and gets into an invalid state. You cannot power on or operate the virtual machine after the failure. In the hostd.log, you see an error such as: Expected permission (3) for /dev/cdrom/mpx.vmhba2:C0:T7:L0 not found. If the Virtual Device Node is set to SATA(0:0), in the virtual machine configuration file you see an entry such as: sata0:0.fileName = "/vmfs/devices/cdrom/mpx.vmhba2:C0:T7:L0".
- The parameter for network teaming failback delay on ESXi hosts, Net.TeamPolicyUpDelay, is currently set at 10 minutes, but in certain environments, a physical switch might take more than 10 minutes to be ready to receive or transmit data after a reboot. As a result, you might see loss of network connectivity.
- Virtual machine encryption might take several hours and ultimately fail with the error The file already exists in the hostd logs. The issue occurs if an orphaned or unused file <VM name>.nvram exists in the VM configuration files. If the virtual machines have an entry such as NVRAM = "nvram" in the .vmx file, the encryption operation creates an encrypted file with the NVRAM file extension, which the system considers a duplicate of the existing orphaned file.
- A malformed UTF-8 string might lead to failure of the vpxa service and cause ESXi hosts to lose connectivity to the vCenter Server system. In the hostd logs, you can see records of a completed vim.SimpleCommand task that indicate the issue:
  hostd.log:34139:2020-09-17T02:38:19.798Z info hostd[3408470] [Originator@6876 sub=Vimsvc.TaskManager opID=6ba8e50e-90-60f9 user=vpxuser:VSPHERE.LOCAL\Administrator] Task Completed : haTask--vim.SimpleCommand.Execute-853061619 Status success
  In the vpxa logs, you see messages such as:
  vpxa.log:7475:2020-09-17T02:38:19.804Z info vpxa[3409126] [Originator@6876 sub=Default opID=WFU-53423ccc] [Vpxa] Shutting down now
- If you disable RC4 from your Active Directory configuration, user authentication to ESXi hosts might start to fail with Failed to authenticate user errors.
- Any DFW filter reconfiguration activity, such as adding or removing filters, might cause some filters to start dropping packets. As a result, virtual machines lose network connectivity and you need to reset the vmnic, change the port group, or reboot the virtual machine to restore traffic. In the output of the summarize-dvfilter command, you see the state IOChain Detaching for the failed filter.
- A rare failure of parsing strings in the vSphere Network Appliance (DVFilter) properties of a vSphere Distributed Switch might cause all traffic to and from virtual machines on a given logical switch to fail.
- If you configure a filter on your vCenter Server system by using the Proactive HA feature to bypass certain health updates, such as to block failure conditions, the filter might not take effect after a restart of the vpxd service. As a result, an ESXi host might unexpectedly go into maintenance mode due to a hardware issue reported by Proactive HA.
- Due to a rare race condition, while creating an instance of the DVFilter agent, the agent might receive a routine configuration change message before it is completely set up. The race condition might cause the ESXi host to fail with a purple diagnostic screen. You see an error such as Exception 14 (Page Fault) in the debug logs.
- If the UUID of a virtual machine changes, such as after a migration by using vSphere vMotion, and the virtual machine has a vNIC on an NSX-T managed switch, the VM loses network connectivity. The vNIC cannot reconnect.
- ESXi hosts fail with a purple diagnostic screen displaying vmkernel errors such as:
  Panic Details: Crash at xxxx-xx-xx:xx:xx.xxxx on CPU xx running world xxx - HBR[xx.xx.xx.xx]:xxx:xxx
  Panic Message: @BlueScreen: Spin count exceeded - possible deadlock with PCPU 24
- After upgrading HPE servers, such as HPE ProLiant Gen10 and Gen10 Plus, to iLO 5 firmware version 2.30, in the vSphere Web Client you see memory sensor health alerts. The issue occurs because the hardware health monitoring system does not appropriately decode the Mem_Stat_* sensors when the first LUN is enabled after the upgrade.
- An ESXi host might lose connectivity to your vCenter Server system and you cannot access the host by using either the vSphere Client or the vSphere Web Client. In the DCUI, you see the message ALERT: hostd detected to be non-responsive. The issue occurs due to memory corruption in the CIM plug-in while it fetches sensor data for periodic hardware health checks.
- You might not see NFS datastores using a fault tolerance solution by Nutanix mounted on the vCenter Server system after an ESXi host reboot. However, you can see the volumes in the ESXi host.
- Operations with stateless ESXi hosts, such as storage migration, might not pick the expected remote disk for system cache. For example, you want to keep the new boot LUN as LUN 0, but vSphere Auto Deploy picks LUN 1.
- A missing NULL check in a vim.VirtualDiskManager.revertToChildDisk operation, triggered by VMware vSphere Replication on virtual disks that do not support this operation, might cause the hostd service to fail. As a result, the ESXi host loses connectivity to the vCenter Server system.
- On rare occasions, when the hostd service refreshes the networking configuration after an ESXi upgrade, in the vSphere Web Client or the vSphere Client you might see hidden portsets, such as pps and DVFilter Coalesce Portset, that are not part of the configuration.
- Virtual machine operations, such as power on and backup, might fail on NFS3 datastores due to a JUKEBOX error returned by the NFS server.
- If the Xorg process fails to restart while an ESXi host exits maintenance mode, the hostd service might become unresponsive as it cannot complete the exit operation.
- After a brief storage outage, it is possible that upon recovery of the virtual machines, the disk layout is not refreshed and remains incomplete. As a result, you might see errors in your environment. For example, in the View Composer logs in a VMware Horizon environment, you might see a repeating error such as InvalidSnapshotDiskConfiguration.
- If you have more than one LAG in your environment and the LACP uplink of a destination ESXi host is down, vSphere vMotion operations might fail with a compatibility error. In the vSphere Web Client or vSphere Client, you might see an error such as: Currently connected network interface 'Network Adapter 1' uses network 'xxxx', which is not accessible.
- If you enable host-local swap for a standalone ESXi host, or if you change the swap file datastore to another shared storage location while a virtual machine is running, the virtual machine might be terminated and marked as invalid after a hard reset.
- In the vSphere Web Client, you might not be able to change the log level configuration of the VPXA service on an ESXi host due to a missing or invalid Vpx.Vpxa.config.log.level option after an upgrade of your vCenter Server system.
- If a network adapter is replaced or the network adapter address is changed, Linux virtual machines that use EFI firmware and iPXE to boot from a network might become unresponsive. For example, the issue occurs when you convert such a virtual machine to a virtual machine template, and then deploy other virtual machines from that template.
- The DVFilter agent uses a common heap to allocate space for both its internal structures and buffers, as well as for the temporary allocations used for moving the state of client agents during vSphere vMotion operations. In some deployments and scenarios, the filter states can be very large and exhaust the heap during vSphere vMotion operations. In the vmkernel logs, you see an error such as Failed waiting for data. Error bad0014. Out of memory.
- The RVC command vsan.disk_object_info might fail to print information for one or more ESXi hosts. The command returns the following error: Server failed to query vSAN objects-on-disk.
- The NVMe driver provides a management interface that allows you to build tools to pass through NVMe admin commands. However, the data transfer direction for vendor-specific commands is set to device-to-host. As a result, some vendor-specific write commands cannot complete successfully. For example, a flash command from the Marvell CLI utility fails to flash the firmware of an NVMe SSD on an HPE Tinker controller.
Profile Name | ESXi-6.5.0-20210201001s-standard |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | February 4, 2021 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs | |
PRs Fixed | 2686295, 2686296, 2689538, 2687436 |
Related CVE numbers | CVE-2021-21974 |
This patch resolves the following issues:
- OpenSLP as used in ESXi has a heap-overflow vulnerability. A malicious actor residing within the same network segment as ESXi, who has access to port 427, might trigger the heap-overflow issue in the OpenSLP service, resulting in remote code execution. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2021-21974 to this issue. For more information, see VMware Security Advisory VMSA-2021-0002.
- The SQLite database is updated to version 3.33.0.
- The OpenSSL package is updated to version openssl-1.0.2x.
- The cURL library is updated to version 7.72.0.
- The Python third-party library is updated to version 3.5.10.
- The NTP daemon is updated to version ntp-4.2.8p15.
- The following VMware Tools ISO images are bundled with ESXi650-202102001:
  - windows.iso: VMware Tools 11.2.1 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.
  - linux.iso: VMware Tools 10.3.23 ISO image for Linux OS with glibc 2.11 or later.
- The following VMware Tools ISO images are available for download:
  - VMware Tools 10.0.12:
    - winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003.
    - linuxPreGLibc25.iso: for Linux OS with a glibc version less than 2.5.
  - VMware Tools 11.0.6:
    - windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).
  - solaris.iso: VMware Tools image 10.3.10 for Solaris.
  - darwin.iso: Supports Mac OS X versions 10.11 and later.
- Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
  - VMware Tools 11.2.1 Release Notes
  - Earlier versions of VMware Tools
  - What Every vSphere Admin Must Know About VMware Tools
  - VMware Tools for hosts provisioned with Auto Deploy
  - Updating VMware Tools
Profile Name | ESXi-6.5.0-20210201001s-no-tools |
Build | For build information, see the top of the page. |
Vendor | VMware, Inc. |
Release Date | February 4, 2021 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs | |
PRs Fixed | 2686295, 2686296, 2689538, 2687436 |
Related CVE numbers | CVE-2021-21974 |
This patch resolves the following issues:
- OpenSLP as used in ESXi has a heap-overflow vulnerability. A malicious actor residing within the same network segment as ESXi, who has access to port 427, might trigger the heap-overflow issue in the OpenSLP service, resulting in remote code execution. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2021-21974 to this issue. For more information, see VMware Security Advisory VMSA-2021-0002.
- The SQLite database is updated to version 3.33.0.
- The OpenSSL package is updated to version openssl-1.0.2x.
- The cURL library is updated to version 7.72.0.
- The Python third-party library is updated to version 3.5.10.
- The NTP daemon is updated to version ntp-4.2.8p15.
- The following VMware Tools ISO images are bundled with ESXi650-202102001:
  - windows.iso: VMware Tools 11.2.1 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.
  - linux.iso: VMware Tools 10.3.23 ISO image for Linux OS with glibc 2.11 or later.
- The following VMware Tools ISO images are available for download:
  - VMware Tools 10.0.12:
    - winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003.
    - linuxPreGLibc25.iso: for Linux OS with a glibc version less than 2.5.
  - VMware Tools 11.0.6:
    - windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).
  - solaris.iso: VMware Tools image 10.3.10 for Solaris.
  - darwin.iso: Supports Mac OS X versions 10.11 and later.
- Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
  - VMware Tools 11.2.1 Release Notes
  - Earlier versions of VMware Tools
  - What Every vSphere Admin Must Know About VMware Tools
  - VMware Tools for hosts provisioned with Auto Deploy
  - Updating VMware Tools
Known Issues
The known issues are grouped as follows.
Upgrade Issues
- Upgrades to ESXi 7.x from ESXi650-202102001 by using ESXCLI might fail due to a space limitation
Upgrades to ESXi 7.x from ESXi650-202102001 by using the esxcli software profile update or esxcli software profile install ESXCLI commands might fail, because the size of the ESXi bootbank might be less than the size of the image profile. In the ESXi Shell or the PowerCLI shell, you see an error such as:
[InstallationError] The pending transaction requires 244 MB free space, however the maximum supported size is 239 MB. Please refer to the log file for more details.
The issue also occurs when you attempt an ESXi host upgrade by using the ESXCLI commands esxcli software vib update or esxcli software vib install.
Workaround: You can perform the upgrade in two steps, by using the esxcli software profile update command to update ESXi hosts to ESXi 6.7 Update 1 or later, and then update to 7.x. Alternatively, you can run the upgrade by using an ISO image and vSphere Lifecycle Manager. A hedged sketch of the two-step approach follows.
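A minimal sketch of the two-step workaround is shown below. The depot paths and profile names are placeholders, not part of this release; substitute the 6.7 Update 1 (or later) and 7.x bundles you actually downloaded.

```
# Step 1: update the host to ESXi 6.7 Update 1 or later.
# Depot path and profile name are placeholders for your 6.7 bundle.
esxcli software profile update \
    -d /vmfs/volumes/datastore1/<esxi-6.7-depot>.zip \
    -p <6.7-image-profile-name>

# Reboot, then step 2: update from 6.7 to the target 7.x image profile.
esxcli software profile update \
    -d /vmfs/volumes/datastore1/<esxi-7.x-depot>.zip \
    -p <7.x-image-profile-name>
```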