Release Date: NOV 19, 2020

Build Details

Download Filename: ESXi670-202011002.zip
Build: 17167734
Download Size: 475.3 MB
md5sum: ba6c848559c291809bcb0e81d2c60a0c
sha1checksum: 073ab9895c9db6e75ab38e41e5d79cbb37a5f760
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes

Bulletins

Bulletin ID Category Severity
ESXi670-202011401-BG Bugfix Critical
ESXi670-202011402-BG Bugfix Important
ESXi670-202011403-BG Bugfix Moderate
ESXi670-202011404-BG Bugfix Important
ESXi670-202011101-SG Security Critical

Rollup Bulletin

This rollup bulletin contains the latest VIBs with all the fixes since the initial release of ESXi 6.7.

Bulletin ID Category Severity
ESXi670-202011002 Bugfix Critical

IMPORTANT: For clusters using VMware vSAN, you must first upgrade the vCenter Server system. Upgrading only the ESXi hosts is not supported.
Before an upgrade, always verify compatible upgrade paths from earlier versions of ESXi, vCenter Server, and vSAN to the current version in the VMware Product Interoperability Matrix.

Image Profiles

VMware patch and update releases contain general and critical image profiles. The general release image profile applies to new bug fixes.

Image Profile Name
ESXi-6.7.0-20201104001-standard
ESXi-6.7.0-20201104001-no-tools
ESXi-6.7.0-20201101001s-standard
ESXi-6.7.0-20201101001s-no-tools

For more information about the individual bulletins, see the Download Patches page and the Resolved Issues section.

Patch Download and Installation

The typical way to apply patches to ESXi hosts is by using VMware vSphere Update Manager. For details, see the About Installing and Administering VMware vSphere Update Manager documentation.

You can update ESXi hosts by manually downloading the patch ZIP file from the VMware download page and installing the VIBs by using the esxcli software vib update command. Additionally, you can update the system by using the image profile and the esxcli software profile update command.
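For example, a minimal sketch of the manual workflow, assuming the patch ZIP has been uploaded to a datastore path such as /vmfs/volumes/datastore1/ (the path and the choice of image profile are illustrative):

    # List the image profiles contained in the offline bundle
    esxcli software sources profile list -d /vmfs/volumes/datastore1/ESXi670-202011002.zip
    # Apply an image profile (place the host in maintenance mode first)
    esxcli software profile update -d /vmfs/volumes/datastore1/ESXi670-202011002.zip -p ESXi-6.7.0-20201104001-standard
    # Or update only the VIBs contained in the bundle
    esxcli software vib update -d /vmfs/volumes/datastore1/ESXi670-202011002.zip

Reboot the host after the update completes.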

For more information, see the vSphere Command-Line Interface Concepts and Examples and the vSphere Upgrade Guide.

Resolved Issues

The resolved issues are grouped as follows.

ESXi670-202011401-BG
Patch Category Bugfix
Patch Severity Critical
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMware_bootbank_vsanhealth_6.7.0-3.132.17135221
  • VMware_bootbank_vsan_6.7.0-3.132.17135222 
  • VMware_bootbank_esx-base_6.7.0-3.132.17167734
  • VMware_bootbank_esx-update_6.7.0-3.132.17167734
PRs Fixed  2623890, 2587530, 2614441, 2600239, 2613897, 2657657, 2652346, 2625293, 2641914, 2653741, 2649677, 2644214, 2624574, 2531669, 2659304, 2640971, 2647710, 2587397, 2622858, 2639827, 2643255, 2643507, 2645723, 2617315, 2637122, 2621143, 2656056, 2603460, 2630579, 2656196, 2630045, 2645428, 2638586, 2539704, 2641029, 2643094
CVE numbers N/A

Updates esx-base, esx-update, vsan, and vsanhealth VIBs to resolve the following issues:

  • NEW PR 2643094: If an All-Paths-Down State (APD) is not processed on all devices when the disk group storage controller goes down, a vSAN cluster might become unresponsive

    If the storage controller behind a disk group goes down, it is possible that not all devices in the group, affected by the APD, process the state. As a result, I/O traffic piles up to an extent that might cause an entire vSAN cluster to become unresponsive.

    This issue is resolved in this release. 

  • PR 2623890: A virtual machine might fail with a SIGSEGV error during 3D rendering

    A buffer over-read during some rendering operations might cause a 3D-enabled virtual machine to fail with a SIGSEGV error during interaction with graphics applications that use 3D acceleration.

    This issue is resolved in this release.

  • PR 2587530: Setting a virtual CD/DVD drive to be a client device might cause a virtual machine to power off and get into an invalid state

    If you edit the settings of a running virtual machine to change an existing virtual CD/DVD drive to become a client device, in some cases, the virtual machine powers off and gets into an invalid state. You cannot power on or operate the virtual machine after the failure.
    In the hostd.log, you can see an error similar to Expected permission (3) for /dev/cdrom/mpx.vmhba2:C0:T7:L0 not found.
    If the Virtual Device Node is set to SATA(0:0), in the virtual machine configuration file, you see an entry such as: sata0:0.fileName = "/vmfs/devices/cdrom/mpx.vmhba2:C0:T7:L0".

    This issue is resolved in this release.

  • PR 2614441: The hostd service intermittently becomes unresponsive

    In rare cases, a race condition between threads that attempt to create a file in a directory while another thread removes that same directory might cause a deadlock that fails the hostd service. The service is restored only after a restart of the ESXi host.
    In the vmkernel logs, you see alerts such as:
    2020-03-31T05:20:00.509Z cpu12:4528223)ALERT: hostd detected to be non-responsive
    Such a deadlock might affect other services as well, but the race condition window is small, and the issue is not frequent.

    This issue is resolved in this release.

  • PR 2600239: You might see loss of network connectivity as a result of a physical switch reboot

    The parameter for network teaming failback delay on ESXi hosts, Net.TeamPolicyUpDelay, is currently set at 10 minutes, but in certain environments, a physical switch might take more than 10 minutes to be ready to receive or transmit data after a reboot. As a result, you might see loss of network connectivity.

    This issue is resolved in this release. The fix increases the Net.TeamPolicyUpDelay parameter to 30 minutes.
    You can set the parameter by selecting the ESXi host in the vCenter Server interface and navigating to Configure > System > Advanced System Settings > Net.TeamPolicyUpDelay. Alternatively, you can use the command esxcfg-advcfg -s <value> Net/TeamPolicyUpDelay.
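
    For example, a minimal sketch from the ESXi Shell that reads the current value before changing it (the leading-slash form of the option path is also accepted by esxcfg-advcfg; verify the current value and its unit on your host before setting it):

        esxcfg-advcfg -g /Net/TeamPolicyUpDelay
        esxcfg-advcfg -s <value> /Net/TeamPolicyUpDelay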

  • PR 2613897: ESXi hosts fail with purple diagnostic screen during vSphere vMotion operations in environments with enabled Distributed Firewall (DFW) or NSX Distributed IDS/IPS

    The source ESXi host in a vSphere vMotion operation might fail with a purple diagnostic screen due to a race condition in environments with either Distributed Firewall (DFW) or NSX Distributed IDS/IPS enabled. The issue occurs when vSphere vMotion operations start soon after you add ESXi hosts to a vSphere Distributed Switch.

    This issue is resolved in this release.

  • PR 2657657: After upgrade of HPE servers to HPE Integrated Lights-Out 5 (iLO 5) firmware version 2.30, you see memory sensor health alerts

    After upgrading HPE servers, such as HPE ProLiant Gen10 and Gen10 Plus, to iLO 5 firmware version 2.30, in the vSphere Web Client you see memory sensor health alerts. The issue occurs because the hardware health monitoring system does not appropriately decode the Mem_Stat_* sensors when the first LUN is enabled after the upgrade.

    This issue is resolved in this release.

  • PR 2652346: If you have .vswp swap files in a virtual machine directory, you see Device or resource busy error messages when scanning all files in the directory

    If you have .vswp swap files in a virtual machine directory, you see Device or resource busy error messages when scanning all files in the directory. You also see extra I/O flow on vSAN namespace objects and a slowdown in the hostd service. The issue occurs if you attempt to open a file with the .vswp extension as an object descriptor. The swap files of the VMX process and the virtual machine main memory have the same .vswp extension, but the swap files of the VMX process must not be opened as object descriptors.

    This issue is resolved in this release.

  • PR 2625293: After a backup operation, identical error messages flood the hostd.log file

    After a backup operation, identical error messages, such as Block list: Cannot convert disk path <vmdk file> to real path, skipping., might flood the hostd.log file. This issue prevents other hostd logs from being recorded and might fill up the log memory.

    This issue is resolved in this release.

  • PR 2641914: Virtual machine encryption takes a long time and ultimately fails with an error

    Virtual machine encryption might take several hours and ultimately fail with The file already exists error in the hostd logs. The issue occurs if an orphaned or unused file <VM name>.nvram exists in the VM configuration files. If the virtual machine has an entry such as NVRAM = “nvram” in the .vmx file, the encryption operation creates an encrypted file with the NVRAM file extension, which the system considers a duplicate of the existing orphaned file.

    This issue is resolved in this release. If you already face the issue, manually delete the orphaned <VM name>.nvram file before encryption.
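
    For example, a minimal sketch of the manual cleanup from the ESXi Shell (the datastore and VM names are placeholders; confirm the .nvram file is truly orphaned before deleting it):

        ls /vmfs/volumes/<datastore>/<VM name>/*.nvram
        rm /vmfs/volumes/<datastore>/<VM name>/<VM name>.nvram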

  • PR 2653741: Virtual machines on NFS 4.1 datastore might become unresponsive after an NFS server failover or failback

    If a reclaim request repeats during an NFS server failover or failback operation, the open reclaim fails and causes virtual machines on NFS 4.1 datastores to become unresponsive.

    This issue is resolved in this release. The fix analyzes reclaim responses in detail and allows retries only when necessary.

  • PR 2649677: You cannot access or power on virtual machines on a vSphere Virtual Volumes datastore

    In rare cases, an ESXi host is unable to report protocol endpoint LUNs to the vSphere API for Storage Awareness (VASA) provider while a vSphere Virtual Volumes datastore is being provisioned. As a result, you cannot access or power on virtual machines on the vSphere Virtual Volumes datastore. This issue occurs only when a networking error or a timeout of the VASA provider happens exactly at the time when the ESXi host attempts to report the protocol endpoint LUNs to the VASA provider.

    This issue is resolved in this release.

  • PR 2644214: If you enable LiveCoreDump as an option to collect system logs on an ESXi host, the host might become unresponsive

    If you enable LiveCoreDump as an option to collect system logs on an ESXi host, the host might become unresponsive. You see an error such as #PF Exception 14 in world 2125468 on a purple diagnostic screen.

    This issue is resolved in this release.

  • PR 2624574: The sfcb-CIMXML-Processor service might fail while creating an instance request

    If the value of the PersistenceType property in a createInstance query by sfcb is null, the sfcb-CIMXML-Processor service might fail with a core dump.

    This issue is resolved in this release.

  • PR 2531669: The sfcb-vmware_base process core dumps due to a memory reallocation failure

    If IPv6 is not enabled in your environment, the sfcb-vmware_base process might leak memory while enumerating instances of the CIM_IPProtocolEndpoint class. As a result, the process might eventually core dump with an error such as:
    tool_mm_realloc_or_die: memory re-allocation failed(orig=400000 new=800000 msg=Cannot allocate memory, aborting.

    This issue is resolved in this release. 

  • PR 2659304: You see health alarms for sensor entity ID 44 after upgrading the firmware of HPE Gen10 servers

    After upgrading the firmware version on HPE Gen10 servers, you might see health alarms for the I/O Module 2 ALOM_Link_P2 and NIC_Link_02P2 sensors, related to the sensor entity ID 44.x. The alarms do not indicate an actual health issue and you can ignore them irrespective of the firmware version.

    This issue is resolved in this release.

  • PR 2640971: vSAN host fails during shutdown

    If a shutdown operation is performed while a vSAN host has pending data to send, the host might fail with a purple diagnostic screen.

    This issue is resolved in this release.

  • PR 2647710: The advanced config option UserVars/HardwareHealthIgnoredSensors fails to ignore some sensors

    If you use the advanced config option UserVars/HardwareHealthIgnoredSensors to ignore sensors with consecutive entries in a numeric list, such as 0.52 and 0.53, the operation might fail to ignore some sensors. For example, if you run the command esxcfg-advcfg -s 52,53 /UserVars/HardwareHealthIgnoredSensors, only the sensor 0.53 might be ignored.

    This issue is resolved in this release.
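
    For example, a minimal sketch that sets the list and then reads it back to confirm the applied value (sensor IDs 52 and 53 are the same illustrative values used above):

        esxcfg-advcfg -s 52,53 /UserVars/HardwareHealthIgnoredSensors
        esxcfg-advcfg -g /UserVars/HardwareHealthIgnoredSensors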

  • PR 2587397: Virtual machines become unresponsive after power-on, with VMware logo on screen

    If a network adapter is replaced or the network adapter address is changed, Linux virtual machines that use EFI firmware and iPXE to boot from a network might become unresponsive. For example, the issue occurs when you convert such a virtual machine to a virtual machine template, and then deploy other virtual machines from that template.

    This issue is resolved in this release.

  • PR 2622858: Automatic recreation of vSAN disk group fails

    If /LSOM/lsomEnableRebuildOnLSE is enabled on a disk, and the device's unmap granularity is not set to a multiple of 64K, the rebuild operation might fail. The disk group is removed, but not recreated.

    This issue is resolved in this release.
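
    For example, a read-only sketch for checking whether the option is enabled on a host (the option path is taken from the description above; this check is illustrative and not part of the fix):

        esxcfg-advcfg -g /LSOM/lsomEnableRebuildOnLSE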

  • PR 2639827: The OCFlush process might cause ESXi hosts to fail with a purple diagnostic screen

    The OCFlush process is non-preemptable, which might lead to a heartbeat issue. Under heavy workload, the heartbeat issue might even cause ESXi hosts to fail with a purple diagnostic screen.

    This issue is resolved in this release.

  • PR 2643255: Virtual machines lose connectivity to other virtual machines and network

    A rare failure of parsing strings in the vSphere Network Appliance (DVFilter) properties of a vSphere Distributed Switch might cause all traffic to and from virtual machines on a given logical switch to fail.

    This issue is resolved in this release.

  • PR 2643507: If a virtual machine restarts or resets during a hot-plug operation, vmware.log spam might fill up the available disk space and make the virtual machine unresponsive

    If a virtual machine restarts or resets during a hot-plug operation, logs in the vmware.log file of the virtual machine might fill up the available disk space and make the virtual machine unresponsive. The logs are identical: acpiNotifyQueue: Spurious ACPI event completion, data 0xFFFFFFFF.

    This issue is resolved in this release. If you cannot apply this patch, do not perform a reset or restart of the virtual machine before hot-plug operations or driver installations complete. If you already face the issue, power cycle the virtual machine.

  • PR 2645723: The vpxa service might intermittently fail due to a malformed string and ESXi hosts lose connectivity to the vCenter Server system

    A malformed UTF-8 string might cause a failure of the vpxa service and ESXi hosts might lose connectivity to the vCenter Server system as a result.
    In the hostd logs, you can see records of a completed vim.SimpleCommand task that indicate the issue:
    hostd.log:34139:2020-09-17T02:38:19.798Z info hostd[3408470] [Originator@6876 sub=Vimsvc.TaskManager opID=6ba8e50e-90-60f9 user=vpxuser:VSPHERE.LOCAL\Administrator] Task Completed : haTask--vim.SimpleCommand.Execute-853061619 Status success
    In the vpxa logs, you see messages such as:
    vpxa.log:7475:2020-09-17T02:38:19.804Z info vpxa[3409126] [Originator@6876 sub=Default opID=WFU-53423ccc] [Vpxa] Shutting down now

    This issue is resolved in this release.

  • PR 2617315: vSphere Events log shows multiple instances of the following: [email protected] logged in

    If vSAN is disabled on a cluster, the vSAN plugin might still attempt to retrieve information from all the hosts from the cluster. These data requests can lead to multiple login operations on the cluster hosts. The vSphere Client Events log shows multiple instances of the following event: [email protected] logged in.

    This issue is resolved in this release.

  • PR 2637122: Replacing the certificate of an ESXi host causes I/O filter storage providers to go offline

    When you navigate to Host > Certificate > Renew Certificate to replace the certificate of an ESXi 6.5.x host, I/O filter storage providers go offline.
    In the sps logs, you see messages such as:
    2017-03-10T11:31:46.694Z [pool-12-thread-2] ERROR opId=SWI-457448e1 com.vmware.vim.sms.provider.vasa.alarm.AlarmDispatcher - Error: https://cs-dhcp34-75.csl.vmware.com:9080/version.xml occured as provider: org.apache.axis2.AxisFault: self signed certificate is offline
    and
    2017-03-10T11:31:56.693Z [pool-12-thread-3] ERROR opId=sps-Main-135968-406 com.vmware.vim.sms.provider.vasa.event.EventDispatcher - Error occurred while polling events provider: https://cs-dhcp34-75.csl.vmware.com:9080/version.xml
    In the iofiltervpd.log reports, you see a message such as:
    2017-03-10T11:50:56Z iofiltervpd[66456]: IOFVPSSL_VerifySSLCertificate:150:Client certificate can't be verified

    This issue is resolved in this release. However, the fix works only for new ESXi nodes. For existing nodes that experience the issue, you must either remove and re-add the ESXi host to the vCenter Server system, as described in Remove a Host from vCenter Server and Add a Host to a Folder or a Data Center, or use the workarounds described in VMware knowledge base article 76633.

  • PR 2621143: In the vSphere Web Client, you cannot change the log level configuration of the vpxa service after an upgrade of your vCenter Server system

    In the vSphere Web Client, you might not be able to change the log level configuration of the vpxa service on an ESXi host due to a missing or invalid Vpx.Vpxa.config.log.level option after an upgrade of your vCenter Server system.

    This issue is resolved in this release. The vpxa service automatically sets a valid value for the Vpx.Vpxa.config.log.level option and exposes it to the vSphere Web Client.

  • PR 2656056: vSAN all-flash experiences random high write latency spikes

    In an all-flash vSAN cluster, guest virtual machines and applications might experience random high write latency spikes. These latency spikes might happen during a rescan or similar vSAN control operation that occurs during the I/O workload.

    This issue is resolved in this release.

  • PR 2603460: In case of a non-UTF8 string in the name property of numeric sensors, the vpxa service fails

    The vpxa service fails in case of a non-UTF8 string in the name property of numeric sensors, and ESXi hosts disconnect from the vCenter Server system.

    This issue is resolved in this release.

  • PR 2630579: The Managed Object Browser might display CPU and Memory sensors status incorrectly

    Due to an error in processing sensor entries, memoryStatusInfo and cpuStatusInfo data might incorrectly include the status for non-cpu and non-memory sensors as well. This leads to incorrect status for the CPU and Memory sensors in the Managed Object Browser.

    This issue is resolved in this release. 

  • PR 2656196: You cannot use a larger batch size than the default for vSphere API for Storage Awareness calls

    If a vendor provider does not publish or define a max batch size, the default max batch size for vSphere API for Storage Awareness calls is 16. This fix increases the default batch size to 1024.

    This issue is resolved in this release.

  • PR 2630045: The vSphere Virtual Volumes algorithm might not pick out the first Config-VVol that an ESXi host requests

    In a vSphere HA environment, the vSphere Virtual Volumes algorithm uses a UUID to pick a Config-VVol when multiple ESXi hosts compete to create and mount a Config-VVol with the same friendly name at the same time. However, the Config-VVol picked by UUID might not be the first one that the ESXi hosts in the cluster requested, and this might create issues in the vSphere Virtual Volumes datastores.

    This issue is resolved in this release. The vSphere Virtual Volumes algorithm now uses a timestamp rather than a UUID to pick a Config-VVol when multiple ESXi hosts compete to create and mount a Config-VVol with the same friendly name at the same time.

  • PR 2625439: Operations with stateless ESXi hosts might not pick the expected remote disk for system cache, which causes remediation or compliance issues

    Operations with stateless ESXi hosts, such as storage migration, might not pick the expected remote disk for system cache. For example, you want to keep the new boot LUN as LUN 0, but vSphere Auto Deploy picks LUN 1.

    This issue is resolved in this release. The fix provides a consistent way to sort the remote disks and always pick the disk with the lowest LUN ID. To make sure you enable the fix, follow these steps:
    1. On the Edit host profile page of the Auto Deploy wizard, select Advanced Configuration Settings > System Image Cache Configuration > System Image Cache Configuration.
    2. In the System Image Cache Profile Settings drop-down menu, select Enable stateless caching on the host.
    3. Edit Arguments for first disk by replacing remote with sortedremote and/or remoteesx with sortedremoteesx.
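
    For example, a hedged sketch of step 3 (the exact argument string depends on your host profile; only the remote or remoteesx keyword changes, and the rest of the arguments stay as configured):

        Arguments for first disk, before: remote
        Arguments for first disk, after:  sortedremote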

  • PR 2643670: vSAN host fails when removing disk group

    When you decommission a vSAN disk group, the ESXi host might fail with a purple diagnostic screen. You might see the following entries in the backtrace:

    0x451a8049b9e0:[0x41803750bb65]PanicvPanicInt@vmkernel#nover+0x439 stack: 0x44a00000001, 0x418038764878, 0x451a8049baa8, 0x0, 0x418000000001 0x451a8049ba80:[0x41803750c0a2]Panic_vPanic@vmkernel#nover+0x23 stack: 0x121, 0x4180375219c1, 0x712d103, 0x10, 0x451a8049bb00 0x451a8049baa0:[0x4180375219c0]vmk_PanicWithModuleID@vmkernel#nover+0x41 stack: 0x451a8049bb00, 0x451a8049bac0, 0x0, 0x1, 0x418038764e48 0x451a8049bb00:[0x41803874707a]SSDLOG_FreeLogEntry@LSOMCommon#1+0x32b stack: 0x800000, 0x801000, 0xffffffffffffffff, 0x1000, 0x1d7ba0a 0x451a8049bb70:[0x4180387ae4d1][email protected]#0.0.0.1+0x2e stack: 0x712d103, 0x121b103, 0x0, 0x0, 0x0 0x451a8049bbe0:[0x4180387dd08d][email protected]#0.0.0.1+0x72 stack: 0x431b19603150, 0x451a8049bc20, 0x0, 0x80100000000000, 0x100000000000 0x451a8049bcd0:[0x4180387dd2d3][email protected]#0.0.0.1+0x80 stack: 0x4318102ba7a0, 0x1, 0x45aad11f81c0, 0x4180387dabb6, 0x4180387d9fbc 0x451a8049bd00:[0x4180387dabb5][email protected]#0.0.0.1+0x2ce stack: 0x800000, 0x1000, 0x712d11a, 0x0, 0x0 0x451a8049beb0:[0x4180386de2db][email protected]#0.0.0.1+0x590 stack: 0x43180fe83380, 0x2, 0x451a613a3780, 0x41803770cf65, 0x0 0x451a8049bf90:[0x4180375291ce]vmkWorldFunc@vmkernel#nover+0x4f stack: 0x4180375291ca, 0x0, 0x451a613a3100, 0x451a804a3000, 0x451a613a3100 0x451a8049bfe0:[0x4180377107da]CpuSched_StartWorld@vmkernel#nover+0x77 stack: 0x0, 0x0, 0x0, 0x0, 0x0

    This issue is resolved in this release.

  • PR 2645428: You cannot split syslog messages in length of more than 1 KiB

    With ESXi670-202011002, you can use the --remote-host-max-msg-len parameter to set the maximum length of syslog messages to up to 16 KiB before they must be split. By default, the ESXi syslog daemon (vmsyslogd) strictly adheres to the maximum message length of 1 KiB set by RFC 3164. Longer messages are split into multiple parts.
    The maximum message length should be set to the smallest length supported by any of the syslog receivers or relays involved in the syslog infrastructure.

    This issue is resolved in this release.
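
    For example, a minimal sketch, assuming the parameter is exposed through esxcli system syslog config set on a patched host (verify the exact option with esxcli system syslog config set --help; the 4096-byte value is illustrative):

        esxcli system syslog config set --remote-host-max-msg-len=4096
        esxcli system syslog reload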

  • PR 2638586: You cannot remove large pages backing on a per-VM basis

    With ESXi670-202011002, you can use the .vmx option monitor_control.disable_mmu_largepages = TRUE to define whether to use large pages backing on a per-VM basis.

    This issue is resolved in this release.
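
    For example, a minimal sketch of the per-VM setting (add it to the .vmx file or as a configuration parameter in the vSphere Client while the virtual machine is powered off; the quoting shown follows the usual .vmx convention and is an assumption here):

        monitor_control.disable_mmu_largepages = "TRUE"

    The setting takes effect the next time the virtual machine powers on.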

  • PR 2539704: If the port index in an Enhanced Networking Stack environment is higher than 128, ESXi hosts might fail with a purple diagnostic screen

    After you enable the Enhanced Networking Stack in your environment and the port index happens to exceed 128, ESXi hosts might fail with a purple diagnostic screen. In the screen, you see a message such as:
    VMware ESXi 6.7.0 [Releasebuild-xxxx x86_64] #PF Exception 14 in world 2097438:HELPER_NETWO IP 0x4180172d4365 addr 0x430e42fc3018 PTEs:0x10018f023;0x17a121063:0x193c11063;0x0;
    In the vmkernel-log.1 file, you see: 2020-03-30T18:10:16.203Z cpu32:2097438)@BlueScreen: #PF Exception 14 in world 2097438:HELPER_NETWO IP 0x4180172d4365 addr 0x430e42fc3018 PTEs:0x10018f023;0x17a121063;0x193c11063;0x0;.

    This issue is resolved in this release.

  • PR 2641029: Changes in the Distributed Firewall (DFW) filter configuration might cause virtual machines to lose network connectivity

    Any DFW filter reconfiguration activity, such as adding or removing filters, might cause some filters to start dropping packets. As a result, virtual machines lose network connectivity and you need to reset the vmnic, change the port group or reboot the virtual machine to restore traffic. In the output of the summarize-dvfilter command, you see state: IOChain Detaching for the failed filter.

    This issue is resolved in this release.
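
    For example, a minimal sketch for spotting affected filters from the ESXi Shell (the grep filter is illustrative; the plain summarize-dvfilter output shows the full per-filter state):

        summarize-dvfilter | grep -i "IOChain Detaching"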

ESXi670-202011402-BG
Patch Category Bugfix
Patch Severity Important
Host Reboot Required Yes
Virtual Machine Migration or Shutdown Required Yes
Affected Hardware N/A
Affected Software N/A
VIBs Included
  • VMW_bootbank_nvme_1.2.2.28-4vmw.670.3.132.17167734
PRs Fixed  N/A
CVE numbers N/A

Updates the nvme VIB.

    ESXi670-202011403-BG
    Patch Category Bugfix
    Patch Severity Moderate
    Host Reboot Required Yes
    Virtual Machine Migration or Shutdown Required Yes
    Affected Hardware N/A
    Affected Software N/A
    VIBs Included
    • VMW_bootbank_vmkusb_0.1-1vmw.670.3.132.17167734
    PRs Fixed  2643589
    CVE numbers N/A

    Updates the vmkusb VIB to resolve the following issue:

    • PR 2643589: If an SD card does not support Read Capacity 16, you see numerous errors in the logs

      On ESXi hosts that use a VID:PID/0bda:0329 Realtek Semiconductor Corp USB 3.0 SD card reader device that does not support Read Capacity 16, you might see numerous errors in the vmkernel log such as:
      2020-06-30T13:26:06.141Z cpu0:2097243)ScsiDeviceIO: 3449: Cmd(0x459ac1350600) 0x9e, CmdSN 0x2452e from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x7 D:0x0 P:0x0 Invalid sense data: 0x0 0x6e 0x73.
      and
      2020-06-30T14:23:18.280Z cpu0:2097243)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

      This issue is resolved in this release.

    ESXi670-202011404-BG
    Patch Category Bugfix
    Patch Severity Important
    Host Reboot Required Yes
    Virtual Machine Migration or Shutdown Required Yes
    Affected Hardware N/A
    Affected Software N/A
    VIBs Included
    • VMW_bootbank_vmw-ahci_2.0.5-2vmw.670.3.132.17167734
    PRs Fixed  N/A
    CVE numbers N/A

    Updates the vmw-ahci VIB.

      ESXi670-202011101-SG
      Patch Category Security
      Patch Severity Critical
      Host Reboot Required Yes
      Virtual Machine Migration or Shutdown Required Yes
      Affected Hardware N/A
      Affected Software N/A
      VIBs Included
      • VMware_bootbank_esx-update_6.7.0-3.128.17167699
      • VMware_bootbank_vsanhealth_6.7.0-3.128.17098397
      • VMware_bootbank_esx-base_6.7.0-3.128.17167699
      • VMware_bootbank_vsan_6.7.0-3.128.17098396
      PRs Fixed  2633870, 2671479
      CVE numbers CVE-2020-4004, CVE-2020-4005

      Updates esx-base, esx-update, vsan, and vsanhealth VIBs to resolve the following issues:

        • Update of the SQLite database

          The SQLite database is updated to version 3.33.0.

        • Update to the libcurl library

          The ESXi userworld libcurl library is updated to libcurl-7.72.0.

        • Update to the OpenSSH

          The OpenSSH version is updated to 8.3p1.

        • Update to OpenSSL library

          The ESXi userworld OpenSSL library is updated to version openssl-1.0.2w.

      • The following VMware Tools ISO images are bundled with ESXi670-202011002:

        • windows.iso: VMware Tools 11.1.1 ISO image for Windows Vista (SP2) or later
        • linux.iso: VMware Tools 10.3.22 ISO image for Linux OS with glibc 2.5 or later

        The following VMware Tools ISO images are available for download:

        VMware Tools 11.0.6

        • windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2)

        VMware Tools 10.0.12

        • winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003

        VMware Tools 10.3.22

        • linux.iso: for Linux OS with glibc 2.5 or later

        VMware Tools 10.3.10

        • solaris.iso: VMware Tools image for Solaris
        • darwin.iso: VMware Tools image for OS X

        Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:

      • VMware ESXi contains a use-after-free vulnerability in the XHCI USB controller. A malicious actor with local administrative privileges on a virtual machine might exploit this issue to execute code as the virtual machine's VMX process running on the host. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-4004 to this issue. For more information, see VMSA-2020-0026.

      • VMware ESXi contains a privilege-escalation vulnerability that exists in the way certain system calls are being managed. A malicious actor with privileges within the VMX process only might escalate their privileges on the affected system. Successful exploitation of this issue is only possible when chained with another vulnerability. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-4005 to this issue. For more information, see VMSA-2020-0026.

      ESXi-6.7.0-20201104001-standard
      Profile Name ESXi-6.7.0-20201104001-standard
      Build For build information, see Patches Contained in this Release.
      Vendor VMware, Inc.
      Release Date November 19, 2020
      Acceptance Level PartnerSupported
      Affected Hardware N/A
      Affected Software N/A
      Affected VIBs
      • VMware_bootbank_vsanhealth_6.7.0-3.132.17135221
      • VMware_bootbank_vsan_6.7.0-3.132.17135222 
      • VMware_bootbank_esx-base_6.7.0-3.132.17167734
      • VMware_bootbank_esx-update_6.7.0-3.132.17167734
      • VMW_bootbank_nvme_1.2.2.28-4vmw.670.3.132.17167734
      • VMW_bootbank_vmkusb_0.1-1vmw.670.3.132.17167734
      • VMW_bootbank_vmw-ahci_2.0.5-2vmw.670.3.132.17167734
      PRs Fixed 2623890, 2587530, 2614441, 2600239, 2613897, 2657657, 2652346, 2625293, 2641914, 2653741, 2649677, 2644214, 2624574, 2531669, 2659304, 2640971, 2647710, 2587397, 2622858, 2639827, 2643255, 2643507, 2645723, 2617315, 2637122, 2621143, 2656056, 2603460, 2630579, 2656196, 2630045, 2645428, 2638586, 2539704, 2641029, 2643589, 2643094
      Related CVE numbers N/A
      • This patch updates the following issues:
        • If the storage controller behind a disk group goes down, it is possible that not all devices in the group, affected by the APD, process the state. As a result, I/O traffic piles up to an extent that might cause an entire vSAN cluster to become unresponsive.

        • A buffer over-read during some rendering operations might cause a 3D-enabled virtual machine to fail with a SIGSEGV error during interaction with graphics applications that use 3D acceleration.

        • If you edit the settings of a running virtual machine to change an existing virtual CD/DVD drive to become a client device, in some cases, the virtual machine powers off and gets into an invalid state. You cannot power on or operate the virtual machine after the failure.
          In the hostd.log, you can see an error similar to Expected permission (3) for /dev/cdrom/mpx.vmhba2:C0:T7:L0 not found.
          If the Virtual Device Node is set to SATA(0:0), in the virtual machine configuration file, you see an entry such as: sata0:0.fileName = "/vmfs/devices/cdrom/mpx.vmhba2:C0:T7:L0".

        • In rare cases, a race condition between threads that attempt to create a file in a directory while another thread removes that same directory might cause a deadlock that fails the hostd service. The service is restored only after a restart of the ESXi host.
          In the vmkernel logs, you see alerts such as:
          2020-03-31T05:20:00.509Z cpu12:4528223)ALERT: hostd detected to be non-responsive
          Such a deadlock might affect other services as well, but the race condition window is small, and the issue is not frequent.

        • The parameter for network teaming failback delay on ESXi hosts, Net.TeamPolicyUpDelay, is currently set at 10 minutes, but in certain environments, a physical switch might take more than 10 minutes to be ready to receive or transmit data after a reboot. As a result, you might see loss of network connectivity.

        • The source ESXi host in a vSphere vMotion operation might fail with a purple diagnostic screen due to a race condition in environments with either Distributed Firewall (DFW) or NSX Distributed IDS/IPS enabled. The issue occurs when vSphere vMotion operations start soon after you add ESXi hosts to a vSphere Distributed Switch.

        • After upgrading HPE servers, such as HPE ProLiant Gen10 and Gen10 Plus, to iLO 5 firmware version 2.30, in the vSphere Web Client you see memory sensor health alerts. The issue occurs because the hardware health monitoring system does not appropriately decode the Mem_Stat_* sensors when the first LUN is enabled after the upgrade.

        • If you have .vswp swap files in a virtual machine directory, you see Device or resource busy error messages when scanning all files in the directory. You also see extra I/O flow on vSAN namespace objects and a slowdown in the hostd service. The issue occurs if you attempt to open a file with the .vswp extension as an object descriptor. The swap files of the VMX process and the virtual machine main memory have the same .vswp extension, but the swap files of the VMX process must not be opened as object descriptors.

        • After a backup operation, identical error messages, such as Block list: Cannot convert disk path <vmdk file> to real path, skipping., might flood the hostd.log file. This issue prevents other hostd logs from being recorded and might fill up the log memory.

        • Virtual machine encryption might take several hours and ultimately fail with The file already exists error in the hostd logs. The issue occurs if an orphaned or unused file <VM name>.nvram exists in the VM configuration files. If the virtual machine has an entry such as NVRAM = “nvram” in the .vmx file, the encryption operation creates an encrypted file with the NVRAM file extension, which the system considers a duplicate of the existing orphaned file.

        • If a reclaim request repeats during an NFS server failover or failback operation, the open reclaim fails and causes virtual machines on NFS 4.1 datastores to become unresponsive.

        • In rare cases, an ESXi host is unable to report protocol endpoint LUNs to the vSphere API for Storage Awareness (VASA) provider while a vSphere Virtual Volumes datastore is being provisioned. As a result, you cannot access or power on virtual machines on the vSphere Virtual Volumes datastore. This issue occurs only when a networking error or a timeout of the VASA provider happens exactly at the time when the ESXi host attempts to report the protocol endpoint LUNs to the VASA provider.

        • If you enable LiveCoreDump as an option to collect system logs on an ESXi host, the host might become unresponsive. You see an error such as #PF Exception 14 in world 2125468 on a purple diagnostic screen.

        • If the value of the PersistenceType property in a createInstance query by sfcb is null, the sfcb-CIMXML-Processor service might fail with a core dump.

        • If IPv6 is not enabled in your environment, the sfcb-vmware_base process might leak memory while enumerating instances of the CIM_IPProtocolEndpoint class. As a result, the process might eventually core dump with an error such as:
          tool_mm_realloc_or_die: memory re-allocation failed(orig=400000 new=800000 msg=Cannot allocate memory, aborting.

        • After upgrading the firmware version on HPE Gen10 servers, you might see health alarms for the I/O Module 2 ALOM_Link_P2 and NIC_Link_02P2 sensors, related to the sensor entity ID 44.x. The alarms do not indicate an actual health issue and you can ignore them irrespective of the firmware version.

        • If a shutdown operation is performed while a vSAN host has pending data to send, the host might fail with a purple diagnostic screen.

        • If you use the advanced config option UserVars/HardwareHealthIgnoredSensors to ignore sensors with consecutive entries in a numeric list, such as 0.52 and 0.53, the operation might fail to ignore some sensors. For example, if you run the command esxcfg-advcfg -s 52,53 /UserVars/HardwareHealthIgnoredSensors, only the sensor 0.53 might be ignored.

        • If a network adapter is replaced or the network adapter address is changed, Linux virtual machines that use EFI firmware and iPXE to boot from a network might become unresponsive. For example, the issue occurs when you convert such a virtual machine to a virtual machine template, and then deploy other virtual machines from that template.

        • If /LSOM/lsomEnableRebuildOnLSE is enabled on a disk, and the device's unmap granularity is not set to a multiple of 64K, the rebuild operation might fail. The disk group is removed, but not recreated.

        • The OCFlush process is non-preemptable, which might lead to a heartbeat issue. Under heavy workload, the heartbeat issue might even cause ESXi hosts to fail with a purple diagnostic screen.

        • A rare failure of parsing strings in the vSphere Network Appliance (DVFilter) properties of a vSphere Distributed Switch might cause all traffic to and from virtual machines on a given logical switch to fail.

        • If a virtual machine restarts or resets during a hot-plug operation, logs in the vmware.log file of the virtual machine might fill up the available disk space and make the virtual machine unresponsive. The logs are identical: acpiNotifyQueue: Spurious ACPI event completion, data 0xFFFFFFFF.

        • A malformed UTF-8 string might cause a failure of the vpxa service and ESXi hosts might lose connectivity to the vCenter Server system as a result.
          In the hostd logs, you can see records of a completed vim.SimpleCommand task that indicate the issue:
          hostd.log:34139:2020-09-17T02:38:19.798Z info hostd[3408470] [Originator@6876 sub=Vimsvc.TaskManager opID=6ba8e50e-90-60f9 user=vpxuser:VSPHERE.LOCAL\Administrator] Task Completed : haTask--vim.SimpleCommand.Execute-853061619 Status success
          In the vpxa logs, you see messages such as:
          vpxa.log:7475:2020-09-17T02:38:19.804Z info vpxa[3409126] [Originator@6876 sub=Default opID=WFU-53423ccc] [Vpxa] Shutting down now

        • If vSAN is disabled on a cluster, the vSAN plugin might still attempt to retrieve information from all the hosts from the cluster. These data requests can lead to multiple login operations on the cluster hosts. The vSphere Client Events log shows multiple instances of the following event: [email protected] logged in.

        • When you navigate to Host > Certificate > Renew Certificate to replace the certificate of an ESXi 6.5.x host, I/O filter storage providers go offline.
          In the sps logs, you see messages such as:
          2017-03-10T11:31:46.694Z [pool-12-thread-2] ERROR opId=SWI-457448e1 com.vmware.vim.sms.provider.vasa.alarm.AlarmDispatcher - Error: https://cs-dhcp34-75.csl.vmware.com:9080/version.xml occured as provider: org.apache.axis2.AxisFault: self signed certificate is offline
          and
          2017-03-10T11:31:56.693Z [pool-12-thread-3] ERROR opId=sps-Main-135968-406 com.vmware.vim.sms.provider.vasa.event.EventDispatcher - Error occurred while polling events provider: https://cs-dhcp34-75.csl.vmware.com:9080/version.xml
          In the iofiltervpd.log reports, you see a message such as:
          2017-03-10T11:50:56Z iofiltervpd[66456]: IOFVPSSL_VerifySSLCertificate:150:Client certificate can't be verified

        • In the vSphere Web Client, you might not be able to change the log level configuration of the vpxa service on an ESXi host due to a missing or invalid Vpx.Vpxa.config.log.level option after an upgrade of your vCenter Server system.

        • In an all-flash vSAN cluster, guest virtual machines and applications might experience random high write latency spikes. These latency spikes might happen during a rescan or similar vSAN control operation that occurs during the I/O workload.

        • The vpxa service fails in case of a non-UTF8 string in the name property of numeric sensors, and ESXi hosts disconnect from the vCenter Server system.

        • Due to an error in processing sensor entries, memoryStatusInfo and cpuStatusInfo data might incorrectly include the status for non-cpu and non-memory sensors as well. This leads to incorrect status for the CPU and Memory sensors in the Managed Object Browser.

        • If a vendor provider does not publish or define a max batch size, the default max batch size for vSphere API for Storage Awareness calls is 16. This fix increases the default batch size to 1024.

        • In a vSphere HA environment, the vSphere Virtual Volumes algorithm uses a UUID to pick a Config-VVol when multiple ESXi hosts compete to create and mount a Config-VVol with the same friendly name at the same time. However, the Config-VVol picked by UUID might not be the first one that the ESXi hosts in the cluster requested, and this might create issues in the vSphere Virtual Volumes datastores.

        • Operations with stateless ESXi hosts, such as storage migration, might not pick the expected remote disk for system cache. For example, you want to keep the new boot LUN as LUN 0, but vSphere Auto Deploy picks LUN 1.

        • When you decommission a vSAN disk group, the ESXi host might fail with a purple diagnostic screen. You might see the following entries in the backtrace:

          0x451a8049b9e0:[0x41803750bb65]PanicvPanicInt@vmkernel#nover+0x439 stack: 0x44a00000001, 0x418038764878, 0x451a8049baa8, 0x0, 0x418000000001 0x451a8049ba80:[0x41803750c0a2]Panic_vPanic@vmkernel#nover+0x23 stack: 0x121, 0x4180375219c1, 0x712d103, 0x10, 0x451a8049bb00 0x451a8049baa0:[0x4180375219c0]vmk_PanicWithModuleID@vmkernel#nover+0x41 stack: 0x451a8049bb00, 0x451a8049bac0, 0x0, 0x1, 0x418038764e48 0x451a8049bb00:[0x41803874707a]SSDLOG_FreeLogEntry@LSOMCommon#1+0x32b stack: 0x800000, 0x801000, 0xffffffffffffffff, 0x1000, 0x1d7ba0a 0x451a8049bb70:[0x4180387ae4d1][email protected]#0.0.0.1+0x2e stack: 0x712d103, 0x121b103, 0x0, 0x0, 0x0 0x451a8049bbe0:[0x4180387dd08d][email protected]#0.0.0.1+0x72 stack: 0x431b19603150, 0x451a8049bc20, 0x0, 0x80100000000000, 0x100000000000 0x451a8049bcd0:[0x4180387dd2d3][email protected]#0.0.0.1+0x80 stack: 0x4318102ba7a0, 0x1, 0x45aad11f81c0, 0x4180387dabb6, 0x4180387d9fbc 0x451a8049bd00:[0x4180387dabb5][email protected]#0.0.0.1+0x2ce stack: 0x800000, 0x1000, 0x712d11a, 0x0, 0x0 0x451a8049beb0:[0x4180386de2db][email protected]#0.0.0.1+0x590 stack: 0x43180fe83380, 0x2, 0x451a613a3780, 0x41803770cf65, 0x0 0x451a8049bf90:[0x4180375291ce]vmkWorldFunc@vmkernel#nover+0x4f stack: 0x4180375291ca, 0x0, 0x451a613a3100, 0x451a804a3000, 0x451a613a3100 0x451a8049bfe0:[0x4180377107da]CpuSched_StartWorld@vmkernel#nover+0x77 stack: 0x0, 0x0, 0x0, 0x0, 0x0

        • With ESXi670-202011002, you can use the --remote-host-max-msg-len parameter to set the maximum length of syslog messages to up to 16 KiB before they must be split. By default, the ESXi syslog daemon (vmsyslogd) strictly adheres to the maximum message length of 1 KiB set by RFC 3164. Longer messages are split into multiple parts.
          The maximum message length should be set to the smallest length supported by any of the syslog receivers or relays involved in the syslog infrastructure.

        • With ESXi670-202011002, you can use the .vmx option monitor_control.disable_mmu_largepages = TRUE to define whether to use large pages backing on a per-VM basis.

        • After you enable the Enhanced Networking Stack in your environment and the port index happens to exceed 128, ESXi hosts might fail with a purple diagnostic screen. In the screen, you see a message such as:
          VMware ESXi 6.7.0 [Releasebuild-xxxx x86_64] #PF Exception 14 in world 2097438:HELPER_NETWO IP 0x4180172d4365 addr 0x430e42fc3018 PTEs:0x10018f023;0x17a121063:0x193c11063;0x0;
          In the vmkernel-log.1 file, you see: 2020-03-30T18:10:16.203Z cpu32:2097438)@BlueScreen: #PF Exception 14 in world 2097438:HELPER_NETWO IP 0x4180172d4365 addr 0x430e42fc3018 PTEs:0x10018f023;0x17a121063;0x193c11063;0x0;.

        • Any DFW filter reconfiguration activity, such as adding or removing filters, might cause some filters to start dropping packets. As a result, virtual machines lose network connectivity and you need to reset the vmnic, change the port group or reboot the virtual machine to restore traffic. In the output of the summarize-dvfilter command, you see state: IOChain Detaching for the failed filter.

        • On ESXi hosts that use a VID:PID/0bda:0329 Realtek Semiconductor Corp USB 3.0 SD card reader device that does not support Read Capacity 16, you might see numerous errors in the vmkernel log such as:
          2020-06-30T13:26:06.141Z cpu0:2097243)ScsiDeviceIO: 3449: Cmd(0x459ac1350600) 0x9e, CmdSN 0x2452e from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x7 D:0x0 P:0x0 Invalid sense data: 0x0 0x6e 0x73.
          and
          2020-06-30T14:23:18.280Z cpu0:2097243)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

      ESXi-6.7.0-20201104001-no-tools
      Profile Name ESXi-6.7.0-20201104001-no-tools
      Build For build information, see Patches Contained in this Release.
      Vendor VMware, Inc.
      Release Date November 19, 2020
      Acceptance Level PartnerSupported
      Affected Hardware N/A
      Affected Software N/A
      Affected VIBs
      • VMware_bootbank_vsanhealth_6.7.0-3.132.17135221
      • VMware_bootbank_vsan_6.7.0-3.132.17135222 
      • VMware_bootbank_esx-base_6.7.0-3.132.17167734
      • VMware_bootbank_esx-update_6.7.0-3.132.17167734
      • VMW_bootbank_nvme_1.2.2.28-4vmw.670.3.132.17167734
      • VMW_bootbank_vmkusb_0.1-1vmw.670.3.132.17167734
      • VMW_bootbank_vmw-ahci_2.0.5-2vmw.670.3.132.17167734
      PRs Fixed 2623890, 2587530, 2614441, 2600239, 2613897, 2657657, 2652346, 2625293, 2641914, 2653741, 2649677, 2644214, 2624574, 2531669, 2659304, 2640971, 2647710, 2587397, 2622858, 2639827, 2643255, 2643507, 2645723, 2617315, 2637122, 2621143, 2656056, 2603460, 2630579, 2656196, 2630045, 2645428, 2638586, 2539704, 2641029, 2643589, 2643094
      Related CVE numbers N/A
      • This patch updates the following issues:
        • If the storage controller behind a disk group goes down, it is possible that not all devices in the group, affected by the APD, process the state. As a result, I/O traffic piles up to an extent that might cause an entire vSAN cluster to become unresponsive.

        • A buffer over-read during some rendering operations might cause a 3D-enabled virtual machine to fail with a SIGSEGV error during interaction with graphics applications that use 3D acceleration.

        • If you edit the settings of a running virtual machine to change an existing virtual CD/DVD drive to become a client device, in some cases, the virtual machine powers off and gets into an invalid state. You cannot power on or operate the virtual machine after the failure.
          In the hostd.log, you can see an error similar to Expected permission (3) for /dev/cdrom/mpx.vmhba2:C0:T7:L0 not found.
          If the Virtual Device Node is set to SATA(0:0), in the virtual machine configuration file, you see an entry such as: sata0:0.fileName = "/vmfs/devices/cdrom/mpx.vmhba2:C0:T7:L0".

        • In rare cases, a race condition between threads that attempt to create a file in a directory while another thread removes that same directory might cause a deadlock that fails the hostd service. The service is restored only after a restart of the ESXi host.
          In the vmkernel logs, you see alerts such as:
          2020-03-31T05:20:00.509Z cpu12:4528223)ALERT: hostd detected to be non-responsive
          Such a deadlock might affect other services as well, but the race condition window is small, and the issue is not frequent.

        • The parameter for network teaming failback delay on ESXi hosts, Net.TeamPolicyUpDelay, is currently set at 10 minutes, but in certain environments, a physical switch might take more than 10 minutes to be ready to receive or transmit data after a reboot. As a result, you might see loss of network connectivity.

        • The source ESXi host in a vSphere vMotion operation might fail with a purple diagnostic screen due to a race condition in environments with either Distributed Firewall (DFW) or NSX Distributed IDS/IPS enabled. The issue occurs when vSphere vMotion operations start soon after you add ESXi hosts to a vSphere Distributed Switch.

        • After upgrading HPE servers, such as HPE ProLiant Gen10 and Gen10 Plus, to iLO 5 firmware version 2.30, in the vSphere Web Client you see memory sensor health alerts. The issue occurs because the hardware health monitoring system does not appropriately decode the Mem_Stat_* sensors when the first LUN is enabled after the upgrade.

        • If you have .vswp swap files in a virtual machine directory, you see Device or resource busy error messages when scanning all files in the directory. You also see extra I/O flow on vSAN namespace objects and a slowdown in the hostd service. The issue occurs if you attempt to open a file with the .vswp extension as an object descriptor. The swap files of the VMX process and the virtual machine main memory have the same .vswp extension, but the swap files of the VMX process must not be opened as object descriptors.

        • After a backup operation, identical error messages, such as Block list: Cannot convert disk path <vmdk file> to real path, skipping., might flood the hostd.log file. This issue prevents other hostd logs from being recorded and might fill up the log memory.

        • Virtual machine encryption might take several hours and ultimately fail with The file already exists error in the hostd logs. The issue occurs if an orphaned or unused file <VM name>.nvram exists in the VM configuration files. If the virtual machine has an entry such as NVRAM = “nvram” in the .vmx file, the encryption operation creates an encrypted file with the NVRAM file extension, which the system considers a duplicate of the existing orphaned file.

        • If a reclaim request repeats during an NFS server failover or failback operation, the open reclaim fails and causes virtual machines on NFS 4.1 datastores to become unresponsive.

        • In rare cases, an ESXi host is unable to report protocol endpoint LUNs to the vSphere API for Storage Awareness (VASA) provider while a vSphere Virtual Volumes datastore is being provisioned. As a result, you cannot access or power on virtual machines on the vSphere Virtual Volumes datastore. This issue occurs only when a networking error or a timeout of the VASA provider happens exactly at the time when the ESXi host attempts to report the protocol endpoint LUNs to the VASA provider.

        • If you enable LiveCoreDump as an option to collect system logs on an ESXi host, the host might become unresponsive. You see an error such as #PF Exception 14 in world 2125468 on a purple diagnostic screen.

        • If the value of the PersistenceType property in a createInstance query by sfcb is null, the sfcb-CIMXML-Processor service might fail with a core dump.

        • If IPv6 is not enabled in your environment, the sfcb-vmware_base process might leak memory while enumerating instances of the CIM_IPProtocolEndpoint class. As a result, the process might eventually core dump with an error such as:
          tool_mm_realloc_or_die: memory re-allocation failed(orig=400000 new=800000 msg=Cannot allocate memory, aborting.

        • After upgrading the firmware version on HPE Gen10 servers, you might see health alarms for the I/O Module 2 ALOM_Link_P2 and NIC_Link_02P2 sensors, related to the sensor entity ID 44.x. The alarms do not indicate an actual health issue and you can ignore them irrespective of the firmware version.

        • If a shutdown operation is performed while a vSAN host has pending data to send, the host might fail with a purple diagnostic screen.

        • If you use the advanced config option UserVars/HardwareHealthIgnoredSensors to ignore sensors with consecutive entries in a numeric list, such as 0.52 and 0.53, the operation might fail to ignore some sensors. For example, if you run the command esxcfg-advcfg -s 52,53 /UserVars/HardwareHealthIgnoredSensors, only the sensor 0.53 might be ignored.

        • If a network adapter is replaced or the network adapter address is changed, Linux virtual machines that use EFI firmware and iPXE to boot from a network might become unresponsive. For example, the issue occurs when you convert such a virtual machine to a virtual machine template, and then deploy other virtual machines from that template.

        • If /LSOM/lsomEnableRebuildOnLSE is enabled on a disk, and the device's unmap granularity is not set to a multiple of 64K, the rebuild operation might fail. The disk group is removed, but not recreated.

        • The OCFlush process is non-preemptable, which might lead to a heartbeat issue. Under heavy workload, the heartbeat issue might even cause ESXi hosts to fail with a purple diagnostic screen.

        • A rare failure of parsing strings in the vSphere Network Appliance (DVFilter) properties of a vSphere Distributed Switch might cause all traffic to and from virtual machines on a given logical switch to fail.

        • If a virtual machine restarts or resets during a hot-plug operation, logs in the vmware.log file of the virtual machine might fill up the available disk space and make the virtual machine unresponsive. The logs are identical: acpiNotifyQueue: Spurious ACPI event completion, data 0xFFFFFFFF.

        • A malformed UTF-8 string might cause a failure of the vpxa service and ESXi hosts might lose connectivity to the vCenter Server system as a result.
          In the hostd logs, you can see records of a completed vim.SimpleCommand task that indicate the issue:
          hostd.log:34139:2020-09-17T02:38:19.798Z info hostd[3408470] [Originator@6876 sub=Vimsvc.TaskManager opID=6ba8e50e-90-60f9 user=vpxuser:VSPHERE.LOCAL\Administrator] Task Completed : haTask--vim.SimpleCommand.Execute-853061619 Status success
          In the vpxa logs, you see messages such as:
          vpxa.log:7475:2020-09-17T02:38:19.804Z info vpxa[3409126] [Originator@6876 sub=Default opID=WFU-53423ccc] [Vpxa] Shutting down now

        • If vSAN is disabled on a cluster, the vSAN plugin might still attempt to retrieve information from all the hosts from the cluster. These data requests can lead to multiple login operations on the cluster hosts. The vSphere Client Events log shows multiple instances of the following event: [email protected] logged in.

        • When you navigate to Host > Certificate > Renew Certificate to replace the certificate of an ESXi 6.5.x host, I/O filter storage providers go offline.
          In the sps logs, you see messages such as:
          2017-03-10T11:31:46.694Z [pool-12-thread-2] ERROR opId=SWI-457448e1 com.vmware.vim.sms.provider.vasa.alarm.AlarmDispatcher - Error: https://cs-dhcp34-75.csl.vmware.com:9080/version.xml occured as provider: org.apache.axis2.AxisFault: self signed certificate is offline
          and
          2017-03-10T11:31:56.693Z [pool-12-thread-3] ERROR opId=sps-Main-135968-406 com.vmware.vim.sms.provider.vasa.event.EventDispatcher - Error occurred while polling events provider: https://cs-dhcp34-75.csl.vmware.com:9080/version.xml
          In the iofiltervpd.log reports, you see a message such as:
          2017-03-10T11:50:56Z iofiltervpd[66456]: IOFVPSSL_VerifySSLCertificate:150:Client certificate can't be verified

        • In the vSphere Web Client, you might not be able to change the log level configuration of the vpxa service on an ESXi host due to a missing or invalid Vpx.Vpxa.config.log.level option after an upgrade of your vCenter Server system.

        • In an all-flash vSAN cluster, guest virtual machines and applications might experience random high write latency spikes. These latency spikes might happen during a rescan or similar vSAN control operation that occurs during the I/O workload.

        • The vpxa service fails in case of a non-UTF8 string in the name property of numeric sensors, and ESXi hosts disconnect from the vCenter Server system.

        • Due to an error in processing sensor entries, memoryStatusInfo and cpuStatusInfo data might incorrectly include the status for non-cpu and non-memory sensors as well. This leads to incorrect status for the CPU and Memory sensors in the Managed Object Browser.

        • If a vendor provider does not publish or define a max batch size, the default max batch size for vSphere API for Storage Awareness calls is 16. This fix increases the default batch size to 1024.

        • In a vSphere HA environment, when multiple ESXi hosts compete to create and mount a Config-VVol with the same friendly name at the same time, the vSphere Virtual Volumes algorithm uses the UUID to pick a winner. However, the Config-VVol picked by UUID might not be the one that the ESXi hosts in the cluster requested first, which might create issues in the vSphere Virtual Volumes datastores.

        • Operations with stateless ESXi hosts, such as storage migration, might not pick the expected remote disk for system cache. For example, you want to keep the new boot LUN as LUN 0, but vSphere Auto Deploy picks LUN 1.

        • When you decommission a vSAN disk group, the ESXi host might fail with a purple diagnostic screen. You might see the following entries in the backtrace:

          0x451a8049b9e0:[0x41803750bb65]PanicvPanicInt@vmkernel#nover+0x439 stack: 0x44a00000001, 0x418038764878, 0x451a8049baa8, 0x0, 0x418000000001
          0x451a8049ba80:[0x41803750c0a2]Panic_vPanic@vmkernel#nover+0x23 stack: 0x121, 0x4180375219c1, 0x712d103, 0x10, 0x451a8049bb00
          0x451a8049baa0:[0x4180375219c0]vmk_PanicWithModuleID@vmkernel#nover+0x41 stack: 0x451a8049bb00, 0x451a8049bac0, 0x0, 0x1, 0x418038764e48
          0x451a8049bb00:[0x41803874707a]SSDLOG_FreeLogEntry@LSOMCommon#1+0x32b stack: 0x800000, 0x801000, 0xffffffffffffffff, 0x1000, 0x1d7ba0a
          0x451a8049bb70:[0x4180387ae4d1][email protected]#0.0.0.1+0x2e stack: 0x712d103, 0x121b103, 0x0, 0x0, 0x0
          0x451a8049bbe0:[0x4180387dd08d][email protected]#0.0.0.1+0x72 stack: 0x431b19603150, 0x451a8049bc20, 0x0, 0x80100000000000, 0x100000000000
          0x451a8049bcd0:[0x4180387dd2d3][email protected]#0.0.0.1+0x80 stack: 0x4318102ba7a0, 0x1, 0x45aad11f81c0, 0x4180387dabb6, 0x4180387d9fbc
          0x451a8049bd00:[0x4180387dabb5][email protected]#0.0.0.1+0x2ce stack: 0x800000, 0x1000, 0x712d11a, 0x0, 0x0
          0x451a8049beb0:[0x4180386de2db][email protected]#0.0.0.1+0x590 stack: 0x43180fe83380, 0x2, 0x451a613a3780, 0x41803770cf65, 0x0
          0x451a8049bf90:[0x4180375291ce]vmkWorldFunc@vmkernel#nover+0x4f stack: 0x4180375291ca, 0x0, 0x451a613a3100, 0x451a804a3000, 0x451a613a3100
          0x451a8049bfe0:[0x4180377107da]CpuSched_StartWorld@vmkernel#nover+0x77 stack: 0x0, 0x0, 0x0, 0x0, 0x0

        • With ESXi670-202011002, you can use the --remote-host-max-msg-len parameter to set the maximum length of syslog messages to up to 16 KiB before they must be split. By default, the ESXi syslog daemon (vmsyslogd) strictly adheres to the maximum message length of 1 KiB set by RFC 3164, and longer messages are split into multiple parts.
          Set the maximum message length to the smallest length supported by any of the syslog receivers or relays involved in the syslog infrastructure. A hedged configuration sketch follows this list.

        • With ESXi670-202011002, you can use the .vmx option monitor_control.disable_mmu_largepages = TRUE to control whether large-page backing of guest memory is used, on a per-VM basis. A sample .vmx excerpt follows this list.

        • If you enable the Enhanced Networking Stack in your environment and the port index exceeds 128, ESXi hosts might fail with a purple diagnostic screen. On the screen, you see a message such as:
          VMware ESXi 6.7.0 [Releasebuild-xxxx x86_64] #PF Exception 14 in world 2097438:HELPER_NETWO IP 0x4180172d4365 addr 0x430e42fc3018 PTEs:0x10018f023;0x17a121063:0x193c11063;0x0;
          In the vmkernel-log.1 file, you see: 2020-03-30T18:10:16.203Z cpu32:2097438)@BlueScreen: #PF Exception 14 in world 2097438:HELPER_NETWO IP 0x4180172d4365 addr 0x430e42fc3018 PTEs:0x10018f023;0x17a121063;0x193c11063;0x0;.

        • Any DFW filter reconfiguration activity, such as adding or removing filters, might cause some filters to start dropping packets. As a result, virtual machines lose network connectivity, and you need to reset the vmnic, change the port group, or reboot the virtual machine to restore traffic. In the output of the summarize-dvfilter command, you see state: IOChain Detaching for the failed filter. A hedged command example for spotting this state follows this list.

        • On ESXi hosts that use a Realtek Semiconductor Corp. USB 3.0 SD card reader (VID:PID 0bda:0329) that does not support Read Capacity 16, you might see numerous errors in the vmkernel log, such as:
          2020-06-30T13:26:06.141Z cpu0:2097243)ScsiDeviceIO: 3449: Cmd(0x459ac1350600) 0x9e, CmdSN 0x2452e from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x7 D:0x0 P:0x0 Invalid sense data: 0x0 0x6e 0x73.
          and
          2020-06-30T14:23:18.280Z cpu0:2097243)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...
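
        The following is a hedged configuration sketch for the syslog message-length setting described earlier in this list. It assumes the parameter is exposed through the esxcli system syslog config namespace, where the other vmsyslogd settings are managed, and that a reload is required for the change to take effect; verify the exact option name with esxcli system syslog config get on your build.

          # Raise the maximum syslog message length to 4 KiB (4096 bytes) before messages are split.
          # Use a value no larger than the smallest length supported by your receivers or relays.
          esxcli system syslog config set --remote-host-max-msg-len=4096

          # Reload vmsyslogd so the new setting takes effect, then confirm the configuration.
          esxcli system syslog reload
          esxcli system syslog config get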
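
        The following is a minimal .vmx excerpt illustrating the per-VM large-page option mentioned earlier in this list. Only the monitor_control line comes from this release; the .encoding and displayName keys are generic placeholders, and .vmx values are typically quoted. Add the entry while the virtual machine is powered off, for example through the virtual machine's advanced configuration parameters.

          .encoding = "UTF-8"
          displayName = "example-vm"
          monitor_control.disable_mmu_largepages = "TRUE"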
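
        The following is a hedged way to check for the DFW filter symptom described earlier in this list from the ESXi Shell; the grep pattern simply matches the state string quoted in that item.

          # List dvfilter agents and look for filters stuck in the detaching state.
          summarize-dvfilter | grep "IOChain Detaching"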

      ESXi-6.7.0-20201101001s-standard
      Profile Name ESXi-6.7.0-20201101001s-standard
      Build For build information, see Patches Contained in this Release.
      Vendor VMware, Inc.
      Release Date November 19, 2020
      Acceptance Level PartnerSupported
      Affected Hardware N/A
      Affected Software N/A
      Affected VIBs
      • VMware_bootbank_esx-update_6.7.0-3.128.17167699
      • VMware_bootbank_vsanhealth_6.7.0-3.128.17098397
      • VMware_bootbank_esx-base_6.7.0-3.128.17167699
      • VMware_bootbank_vsan_6.7.0-3.128.17098396
      PRs Fixed 2633870, 2671479
      Related CVE numbers CVE-2020-4004, CVE-2020-4005
      • This patch updates the following issues:
          • The SQLite database is updated to version 3.33.0.
          • The ESXi userworld libcurl library is updated to libcurl-7.72.0.
          • The OpenSSH version is updated to 8.3p1.
          • The ESXi userworld OpenSSL library is updated to version openssl-1.0.2w.
        • The following VMware Tools ISO images are bundled with ESXi670-202011002:

          • windows.iso: VMware Tools 11.1.1 ISO image for Windows Vista (SP2) or later
          • linux.iso: VMware Tools 10.3.22 ISO image for Linux OS with glibc 2.5 or later

          The following VMware Tools ISO images are available for download:

          VMware Tools 11.0.6

          • windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2)

          VMware Tools 10.0.12

          • winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003

          VMware Tools 10.3.22

          • linux.iso: for Linux OS with glibc 2.5 or later

          VMware Tools 10.3.10

          • solaris.iso: VMware Tools image for Solaris
          • darwin.iso: VMware Tools image for OS X

          Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:

        • VMware ESXi contains a use-after-free vulnerability in the XHCI USB controller. A malicious actor with local administrative privileges on a virtual machine might exploit this issue to execute code as the virtual machine's VMX process running on the host. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-4004 to this issue. For more information, see VMSA-2020-0026.

        • VMware ESXi contains a privilege-escalation vulnerability that exists in the way certain system calls are being managed. A malicious actor with privileges within the VMX process only might escalate their privileges on the affected system. Successful exploitation of this issue is only possible when chained with another vulnerability. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-4005 to this issue. For more information, see VMSA-2020-0026.

      ESXi-6.7.0-20201101001s-no-tools
      Profile Name ESXi-6.7.0-20201101001s-no-tools
      Build For build information, see Patches Contained in this Release.
      Vendor VMware, Inc.
      Release Date November 19, 2020
      Acceptance Level PartnerSupported
      Affected Hardware N/A
      Affected Software N/A
      Affected VIBs
      • VMware_bootbank_esx-update_6.7.0-3.128.17167699
      • VMware_bootbank_vsanhealth_6.7.0-3.128.17098397
      • VMware_bootbank_esx-base_6.7.0-3.128.17167699
      • VMware_bootbank_vsan_6.7.0-3.128.17098396
      PRs Fixed 2633870, 2671479
      Related CVE numbers CVE-2020-4004, CVE-2020-4005
      • This patch updates the following issues:
          • The SQLite database is updated to version 3.33.0.
          • The ESXi userworld libcurl library is updated to libcurl-7.72.0.
          • The OpenSSH version is updated to 8.3p1.
          • The ESXi userworld OpenSSL library is updated to version openssl-1.0.2w.
        • The following VMware Tools ISO images are bundled with ESXi670-202011002:

          • windows.iso: VMware Tools 11.1.1 ISO image for Windows Vista (SP2) or later
          • linux.iso: VMware Tools 10.3.22 ISO image for Linux OS with glibc 2.5 or later

          The following VMware Tools ISO images are available for download:

          VMware Tools 11.0.6

          • windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2)

          VMware Tools 10.0.12

          • winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003

          VMware Tools 10.3.22

          • linux.iso: for Linux OS with glibc 2.5 or later

          VMware Tools 10.3.10

          • solaris.iso: VMware Tools image for Solaris
          • darwin.iso: VMware Tools image for OS X

          Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:

        • VMware ESXi contains a use-after-free vulnerability in the XHCI USB controller. A malicious actor with local administrative privileges on a virtual machine might exploit this issue to execute code as the virtual machine's VMX process running on the host. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-4004 to this issue. For more information, see VMSA-2020-0026.

        • VMware ESXi contains a privilege-escalation vulnerability that exists in the way certain system calls are being managed. A malicious actor with privileges within the VMX process only might escalate their privileges on the affected system. Successful exploitation of this issue is only possible when chained with another vulnerability. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-4005 to this issue. For more information, see VMSA-2020-0026.

      Known Issues from Earlier Releases

      To view a list of previous known issues, click here.
