VMware ESXi 7.0 Update 3p | 11 APR 2024 | Build 23307199

Check for additions and updates to these release notes.

IMPORTANT: If your source system contains hosts of versions between ESXi 7.0 Update 2 and Update 3c, and Intel drivers, before upgrading to ESXi 7.0 Update 3p, see the What's New section of the VMware vCenter Server 7.0 Update 3c Release Notes, because all content in that section also applies to vSphere 7.0 Update 3p. Also, see the related VMware knowledge base articles: 86447, 87258, and 87308.

What's New

  • This release resolves CVE-2024-22252, CVE-2024-22253, CVE-2024-22254, and CVE-2024-22255. For more information on these vulnerabilities and their impact on VMware products, see VMSA-2024-0006.

Earlier Releases of ESXi 7.0

New features, resolved issues, and known issues of ESXi are described in the release notes for each release; see the release notes for the respective earlier releases of ESXi 7.0.

For internationalization, compatibility, and open source components, see the VMware vSphere 7.0 Release Notes.

Patches Contained in This Release

This release of ESXi 7.0 Update 3p delivers the following patches:

Build Details

Download Filename: VMware-ESXi-7.0U3p-23307199-depot.zip
Build: 23307199
Download Size: 381.3 MB
md5sum: 9b04d3fb0cbda95ba2dd06a042971d56
sha256checksum: f407cf873d191c453ff805b04f75398874bce63120443ce835a2da28977977b1
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
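
Before installation, you can verify the downloaded depot against the checksums above; a minimal sketch, assuming a Linux or macOS system with md5sum and sha256sum available:

md5sum VMware-ESXi-7.0U3p-23307199-depot.zip
sha256sum VMware-ESXi-7.0U3p-23307199-depot.zip

The output must match the md5sum and sha256checksum values listed above.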

Components

Component: ESXi Component - core ESXi VIBs
Bulletin: ESXi_7.0.3-0.110.23307199
Category: Security
Severity: Critical

Component: ESXi Install/Upgrade Component
Bulletin: esx-update_7.0.3-0.110.23307199
Category: Security
Severity: Critical

IMPORTANT:

  • Starting with vSphere 7.0, VMware uses components for packaging VIBs along with bulletins. The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.

  • When patching ESXi hosts by using VMware Update Manager from a version prior to ESXi 7.0 Update 2, it is strongly recommended to use the rollup bulletin in the patch baseline. If you cannot use the rollup bulletin, be sure to include all of the following packages in the patching baseline. If the following packages are not included in the baseline, the update operation fails (a quick way to check the installed versions is shown after this list):

    • VMware-vmkusb_0.1-1vmw.701.0.0.16850804 or higher

    • VMware-vmkata_0.1-1vmw.701.0.0.16850804 or higher

    • VMware-vmkfcoe_1.0.0.2-1vmw.701.0.0.16850804 or higher

    • VMware-NVMeoF-RDMA_1.0.1.2-1vmw.701.0.0.16850804 or higher
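
    A minimal ESXi Shell sketch to list the installed versions of these driver VIBs before patching (the grep pattern is illustrative, keyed to the package names above):

    esxcli software vib list | grep -iE 'vmkusb|vmkata|vmkfcoe|nvme'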

Rollup Bulletin

This rollup bulletin contains the latest VIBs with all the fixes after the initial release of ESXi 7.0.

Bulletin ID: ESXi70U3p-23307199
Category: Security
Severity: Critical
Details: Security only

Image Profiles

VMware patch and update releases contain general and critical image profiles. Apply the general release image profile to get new bug fixes.

Image Profile Name

  • ESXi-7.0U3p-23307199-standard

  • ESXi-7.0U3p-23307199-no-tools

ESXi Image

Name and Version: ESXi7.0U3p - 23307199
Release Date: March 05, 2024
Category: Security
Details: Security only image

For information about the individual components and bulletins, see the Resolved Issues section.

Patch Download and Installation

Log in to the Broadcom Support Portal to download this patch.

For download instructions for earlier releases, see Download Broadcom products and software.

In vSphere 7.x, the Update Manager plug-in, used for administering vSphere Update Manager, is replaced with the Lifecycle Manager plug-in. Administrative operations for vSphere Update Manager are still available under the Lifecycle Manager plug-in, along with new capabilities for vSphere Lifecycle Manager.

The typical way to apply patches to ESXi 7.x hosts is by using the vSphere Lifecycle Manager. For details, see About vSphere Lifecycle Manager and vSphere Lifecycle Manager Baselines and Images.

You can also update ESXi hosts without using the Lifecycle Manager plug-in, and use an image profile instead. To do this, you must manually download the patch offline bundle ZIP file.

For details, see Upgrading Hosts by Using ESXCLI Commands, and the VMware ESXi Upgrade guide.
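
For example, a minimal ESXCLI sketch for applying this patch from the offline bundle (the datastore path is a placeholder; place the host in maintenance mode first):

esxcli system maintenanceMode set --enable true
esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U3p-23307199-depot.zip -p ESXi-7.0U3p-23307199-standard
reboot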

Resolved Issues

The resolved issues are grouped as follows:

ESXi_7.0.3-0.110.23307199

Patch Category: Security
Patch Severity: Critical
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A

Affected VIBs Included

  • VMware_bootbank_vsan_7.0.3-0.110.23307199

  • VMware_bootbank_esxio-combiner_7.0.3-0.110.23307199

  • VMware_bootbank_crx_7.0.3-0.110.23307199

  • VMware_bootbank_vsanhealth_7.0.3-0.110.23307199

  • VMware_bootbank_trx_7.0.3-0.110.23307199

  • VMware_bootbank_vdfs_7.0.3-0.110.23307199

  • VMware_bootbank_native-misc-drivers_7.0.3-0.110.23307199

  • VMware_bootbank_esx-dvfilter-generic-fastpath_7.0.3-0.110.23307199

  • VMware_bootbank_bmcal_7.0.3-0.110.23307199

  • VMware_bootbank_esx-xserver_7.0.3-0.110.23307199

  • VMware_bootbank_esx-base_7.0.3-0.110.23307199

  • VMware_bootbank_cpu-microcode_7.0.3-0.110.23307199

  • VMware_bootbank_gc_7.0.3-0.110.23307199

PRs Fixed: N/A
CVE numbers: CVE-2024-22252, CVE-2024-22253, CVE-2024-22254, CVE-2024-22255

The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.

This patch updates the esx-base VIB. Due to their dependency on the esx-base VIB, the following VIBs are updated with build number and patch version changes, but deliver no fixes: trx, vdfs, native-misc-drivers, esx-dvfilter-generic-fastpath, bmcal, esx-xserver, cpu-microcode, and gc. This patch resolves the following issue:

  • This release resolves CVE-2024-22252, CVE-2024-22253, CVE-2024-22254, and CVE-2024-22255. For more information on these vulnerabilities and their impact on VMware products, see VMSA-2024-0006.

esx-update_7.0.3-0.110.23307199

Patch Category: Security
Patch Severity: Critical
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A

Affected VIBs

  • VMware_bootbank_loadesx_7.0.3-0.110.23307199

  • VMware_bootbank_esx-update_7.0.3-0.110.23307199

PRs Fixed: N/A
CVE numbers: N/A

Due to their dependency on the esx-base VIB, the following VIBs are updated with build number and patch version changes, but deliver no fixes: loadesx and esx-update.

ESXi-7.0U3p-23307199-standard

Profile Name: ESXi-7.0U3p-23307199-standard
Build: For build information, see Patches Contained in This Release.
Vendor: VMware by Broadcom, Inc.
Release Date: March 05, 2024
Acceptance Level: PartnerSupported
Affected Hardware: N/A
Affected Software: N/A

Affected VIBs

  • VMware_bootbank_vsan_7.0.3-0.110.23307199

  • VMware_bootbank_esxio-combiner_7.0.3-0.110.23307199

  • VMware_bootbank_crx_7.0.3-0.110.23307199

  • VMware_bootbank_vsanhealth_7.0.3-0.110.23307199

  • VMware_bootbank_trx_7.0.3-0.110.23307199

  • VMware_bootbank_vdfs_7.0.3-0.110.23307199

  • VMware_bootbank_native-misc-drivers_7.0.3-0.110.23307199

  • VMware_bootbank_esx-dvfilter-generic-fastpath_7.0.3-0.110.23307199

  • VMware_bootbank_bmcal_7.0.3-0.110.23307199

  • VMware_bootbank_esx-xserver_7.0.3-0.110.23307199

  • VMware_bootbank_esx-base_7.0.3-0.110.23307199

  • VMware_bootbank_cpu-microcode_7.0.3-0.110.23307199

  • VMware_bootbank_gc_7.0.3-0.110.23307199

  • VMware_bootbank_loadesx_7.0.3-0.110.23307199

  • VMware_bootbank_esx-update_7.0.3-0.110.23307199

PRs Fixed: N/A
Related CVE numbers: CVE-2024-22252, CVE-2024-22253, CVE-2024-22254, CVE-2024-22255

This patch resolves the following issue:

  • This release resolves CVE-2024-22252, CVE-2024-22253, CVE-2024-22254, and CVE-2024-22255. For more information on these vulnerabilities and their impact on VMware products, see VMSA-2024-0006.

ESXi-7.0U3p-23307199-no-tools

Profile Name: ESXi-7.0U3p-23307199-no-tools
Build: For build information, see Patches Contained in This Release.
Vendor: VMware by Broadcom, Inc.
Release Date: March 05, 2024
Acceptance Level: PartnerSupported
Affected Hardware: N/A
Affected Software: N/A

Affected VIBs

  • VMware_bootbank_vsan_7.0.3-0.110.23307199

  • VMware_bootbank_esxio-combiner_7.0.3-0.110.23307199

  • VMware_bootbank_crx_7.0.3-0.110.23307199

  • VMware_bootbank_vsanhealth_7.0.3-0.110.23307199

  • VMware_bootbank_trx_7.0.3-0.110.23307199

  • VMware_bootbank_vdfs_7.0.3-0.110.23307199

  • VMware_bootbank_native-misc-drivers_7.0.3-0.110.23307199

  • VMware_bootbank_esx-dvfilter-generic-fastpath_7.0.3-0.110.23307199

  • VMware_bootbank_bmcal_7.0.3-0.110.23307199

  • VMware_bootbank_esx-xserver_7.0.3-0.110.23307199

  • VMware_bootbank_esx-base_7.0.3-0.110.23307199

  • VMware_bootbank_cpu-microcode_7.0.3-0.110.23307199

  • VMware_bootbank_gc_7.0.3-0.110.23307199

  • VMware_bootbank_loadesx_7.0.3-0.110.23307199

  • VMware_bootbank_esx-update_7.0.3-0.110.23307199

PRs Fixed: N/A
Related CVE numbers: CVE-2024-22252, CVE-2024-22253, CVE-2024-22254, CVE-2024-22255

This patch resolves the following issue:

  • This release resolves CVE-2024-22252, CVE-2024-22253, CVE-2024-22254, and CVE-2024-22255. For more information on these vulnerabilities and their impact on VMware products, see VMSA-2024-0006.

ESXi7.0U3p - 23307199

Name: ESXi
Version: ESXi7.0U3p - 23307199
Release Date: March 05, 2024
Category: Security

Affected Components

  • ESXi Component - core ESXi

  • ESXi Install/Upgrade

PRs Fixed: N/A
Related CVE numbers: CVE-2024-22252, CVE-2024-22253, CVE-2024-22254, CVE-2024-22255

This patch resolves the issues listed in ESXi-7.0U3p-23307199-standard.

Known Issues from Previous Releases

vSphere Cluster Services Issues

  • You see compatibility issues in new vCLS VMs deployed in a vSphere 7.0 Update 3 environment

    The default name for new vCLS VMs deployed in a vSphere 7.0 Update 3 environment uses the pattern vCLS-UUID. vCLS VMs created in earlier vCenter Server versions continue to use the pattern vCLS (n). Because many solutions that interoperate with vSphere do not support the use of parentheses () in names, you might see compatibility issues.

    Workaround: Reconfigure vCLS by using retreat mode after updating to vSphere 7.0 Update 3.

Installation, Upgrade, and Migration Issues

  • The vCenter Upgrade/Migration pre-checks fail with "Unexpected error 87"

    The vCenter Server Upgrade/Migration pre-checks fail when the Security Token Service (STS) certificate does not contain a Subject Alternative Name (SAN) field. This situation occurs when you have replaced the vCenter 5.5 Single Sign-On certificate with a custom certificate that has no SAN field, and you attempt to upgrade to vCenter Server 7.0. The upgrade considers the STS certificate invalid and the pre-checks prevent the upgrade process from continuing.

    Workaround: Replace the STS certificate with a valid certificate that contains a SAN field, and then proceed with the vCenter Server 7.0 upgrade or migration.

  • Corrupted VFAT partitions from a vSphere 6.7 environment might cause upgrades to ESXi 7.x to fail

    Due to corrupted VFAT partitions from a vSphere 6.7 environment, repartitioning the boot disks of an ESXi host might fail during an upgrade to ESXi 7.x. As a result, you might see the following errors:

    When upgrading to ESXi 7.0 Update 3l, the operation fails with a purple diagnostic screen and/or an error such as:

    • An error occurred while backing up VFAT partition files before re-partitioning: Failed to calculate size for temporary Ramdisk: <error>.

    • An error occurred while backing up VFAT partition files before re-partitioning: Failed to copy files to Ramdisk: <error>.

      If you use an ISO installer, you see the errors, but no purple diagnostic screen.

    When upgrading to an ESXi 7.x version earlier than 7.0 Update 3l, you might see:

    • Logs such as ramdisk (root) is full in the vmkwarning.log file.

    • Unexpected rollback to ESXi 6.5 or 6.7 on reboot.

    • The /bootbank,/altbootbank, and ESX-OSData partitions are not present.

    Workaround: You must first remediate the corrupted partitions before completing the upgrade to ESXi 7.x. For more details, see VMware knowledge base article 91136.

  • Problems upgrading to vSphere 7.0 with pre-existing CIM providers

    After the upgrade, previously installed 32-bit CIM providers stop working because ESXi requires 64-bit CIM providers. Customers might lose management API functions related to CIMPDK, NDDK (native DDK), HEXDK, and VAIODK (IO filters), and see errors related to the uwglibc dependency.

    The syslog reports a missing module: "32 bit shared libraries not loaded."

    Workaround: There is no workaround. The fix is to download new 64-bit CIM providers from your vendor.

  • Installation of 7.0 Update 1 drivers on ESXi 7.0 hosts might fail

    You cannot install drivers applicable to ESXi 7.0 Update 1 on hosts that run ESXi 7.0 or 7.0b.

    The operation fails with an error, such as:

    VMW_bootbank_qedrntv_3.40.4.0-12vmw.701.0.0.xxxxxxx requires vmkapi_2_7_0_0, but the requirement cannot be satisfied within the ImageProfile. Please refer to the log file for more details.

    Workaround: Update the ESXi host to 7.0 Update 1. Retry the driver installation.

  • UEFI booting of ESXi hosts might stop with an error during an update to ESXi 7.0 Update 2 from an earlier version of ESXi 7.0

    If you attempt to update your environment to 7.0 Update 2 from an earlier version of ESXi 7.0 by using vSphere Lifecycle Manager patch baselines, UEFI booting of ESXi hosts might stop with an error such as:

    Loading /boot.cfg
    Failed to load crypto64.efi
    Fatal error: 15 (Not found)

    Workaround: For more information, see VMware knowledge base articles 83063 and 83107.

  • If legacy VIBs are in use on an ESXi host, vSphere Lifecycle Manager cannot extract a desired software specification to seed to a new cluster

    With vCenter Server 7.0 Update 2, you can create a new cluster by importing the desired software specification from a single reference host. However, if legacy VIBs are in use on an ESXi host, vSphere Lifecycle Manager cannot extract a reference software specification from such a host in the vCenter Server instance where you create the cluster. In the /var/log/lifecycle.log, you see messages such as:

    2020-11-11T06:54:03Z lifecycle: 1000082644: HostSeeding:499 ERROR Extract depot failed: Checksum doesn't match. Calculated 5b404e28e83b1387841bb417da93c8c796ef2497c8af0f79583fd54e789d8826, expected: 0947542e30b794c721e21fb595f1851b247711d0619c55489a6a8cae6675e796
    2020-11-11T06:54:04Z lifecycle: 1000082644: imagemanagerctl:366 ERROR Extract depot failed.
    2020-11-11T06:54:04Z lifecycle: 1000082644: imagemanagerctl:145 ERROR [VibChecksumError]

    Workaround: Follow the steps described in VMware knowledge base article 83042.

  • You see a short burst of log messages in the syslog.log after every ESXi boot

    After updating to ESXi 7.0 Update 2, you might see a short burst of log messages after every ESXi boot.

    Such logs do not indicate any issue with ESXi and you can ignore these messages. For example:

    2021-01-19T22:44:22Z watchdog-vaai-nasd: '/usr/lib/vmware/nfs/bin/vaai-nasd -f' exited after 0 seconds (quick failure 127) 1
    2021-01-19T22:44:22Z watchdog-vaai-nasd: Executing '/usr/lib/vmware/nfs/bin/vaai-nasd -f'
    2021-01-19T22:44:22.990Z vaainasd[1000051135]: Log for VAAI-NAS Daemon for NFS version=1.0 build=build-00000 option=DEBUG
    2021-01-19T22:44:22.990Z vaainasd[1000051135]: DictionaryLoadFile: No entries loaded by dictionary.
    2021-01-19T22:44:22.990Z vaainasd[1000051135]: DictionaryLoad: Cannot open file "/usr/lib/vmware/config": No such file or directory.
    2021-01-19T22:44:22.990Z vaainasd[1000051135]: DictionaryLoad: Cannot open file "//.vmware/config": No such file or directory.
    2021-01-19T22:44:22.990Z vaainasd[1000051135]: DictionaryLoad: Cannot open file "//.vmware/preferences": No such file or directory.
    2021-01-19T22:44:22.990Z vaainasd[1000051135]: Switching to VMware syslog extensions
    2021-01-19T22:44:22.992Z vaainasd[1000051135]: Loading VAAI-NAS plugin(s).
    2021-01-19T22:44:22.992Z vaainasd[1000051135]: DISKLIB-PLUGIN : Not loading plugin /usr/lib/vmware/nas_plugins/lib64: Not a shared library.

    Workaround: None

  • You see warning messages for missing VIBs in vSphere Quick Boot compatibility check reports

    After you upgrade to ESXi 7.0 Update 2, if you check vSphere Quick Boot compatibility of your environment by using the /usr/lib/vmware/loadesx/bin/loadESXCheckCompat.py command, you might see some warning messages for missing VIBs in the shell. For example:

    Cannot find VIB(s) ... in the given VIB collection.
    Ignoring missing reserved VIB(s) ..., they are removed from reserved VIB IDs.

    Such warnings do not indicate a compatibility issue.

    Workaround: The missing VIB messages can be safely ignored and do not affect the reporting of vSphere Quick Boot compatibility. The final output line of the loadESXCheckCompat command unambiguously indicates if the host is compatible.

  • Auto bootstrapping a cluster that you manage with a vSphere Lifecycle Manager image fails with an error

    If you attempt auto bootstrapping a cluster that you manage with a vSphere Lifecycle Manager image to perform a stateful install and overwrite the VMFS partitions, the operation fails with an error. In the support bundle, you see messages such as:

    2021-02-11T19:37:43Z Host Profiles[265671 opID=MainThread]: ERROR: EngineModule::ApplyHostConfig. Exception: [Errno 30] Read-only file system

    Workaround: Follow vendor guidance to clean the VMFS partition in the target host and retry the operation. Alternatively, use an empty disk. For more information on the disk-partitioning utility on ESXi, see VMware knowledge base article 1036609.

  • Upgrades to ESXi 7.x from 6.5.x and 6.7.0 by using ESXCLI might fail due to a space limitation

    Upgrades to ESXi 7.x from 6.5.x and 6.7.0 by using the esxcli software profile update or esxcli software profile install ESXCLI commands might fail, because the ESXi bootbank might be less than the size of the image profile. In the ESXi Shell or the PowerCLI shell, you see an error such as:

    [InstallationError]
     The pending transaction requires 244 MB free space, however the maximum supported size is 239 MB.
     Please refer to the log file for more details.

    The issue also occurs when you attempt an ESXi host upgrade by using the ESXCLI commands esxcli software vib update or esxcli software vib install.

    Workaround: You can perform the upgrade in two steps, by using the esxcli software profile update command to update ESXi hosts to ESXi 6.7 Update 1 or later, and then update to 7.0 Update 1c. Alternatively, you can run an upgrade by using an ISO image and the vSphere Lifecycle Manager.
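
    A sketch of the two-step ESXCLI flow (the depot paths and the intermediate profile name are placeholders; place the host in maintenance mode first):

    esxcli software profile update -d /vmfs/volumes/datastore1/<ESXi-6.7U1-or-later-depot>.zip -p <intermediate-profile-name>
    reboot
    esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U3p-23307199-depot.zip -p ESXi-7.0U3p-23307199-standard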

  • You cannot migrate linked clones across vCenter Servers

    If you migrate a linked clone across vCenter Servers, operations such as power on and delete might fail for the source virtual machine with an Invalid virtual machine state error.

    Workaround: Keep linked clones on the same vCenter Server as the source VM. Alternatively, promote the linked clone to full clone before migration.

  • Migration across vCenter Servers of virtual machines with many virtual disks and snapshot levels to a datastore on NVMe over TCP storage might fail

    Migration across vCenter Servers of virtual machines with more than 180 virtual disks and 32 snapshot levels to a datastore on NVMe over TCP storage might fail. The ESXi host preemptively fails with an error such as The migration has exceeded the maximum switchover time of 100 second(s).

    Workaround: None

  • A virtual machine with Virtual Performance Monitoring Counters (VPMC) enabled might fail to migrate between ESXi hosts

    If you try to migrate a virtual machine with VPMC enabled by using vSphere vMotion, the operation might fail if the target host is using some of the counters to compute memory or performance statistics. The operation fails with an error such as A performance counter used by the guest is not available on the host CPU.

    Workaround: Power off the virtual machine and use cold migration. For more information, see VMware knowledge base article 81191.

  • If a live VIB install, upgrade, or remove operation immediately precedes an interactive or scripted upgrade to ESXi 7.0 Update 3 by using the installer ISO, the upgrade fails

    When a VIB install, upgrade, or remove operation immediately precedes an interactive or scripted upgrade to ESXi 7.0 Update 3 by using the installer ISO, the ConfigStore might not keep some configurations of the upgrade. As a result, ESXi hosts become inaccessible after the upgrade operation, although the upgrade seems successful. To prevent this issue, the ESXi 7.0 Update 3 installer adds a temporary check to block such scenarios. In the ESXi installer console, you see the following error message: Live VIB installation, upgrade or removal may cause subsequent ESXi upgrade to fail when using the ISO installer.

    Workaround: Use an alternative upgrade method to avoid the issue, such as using ESXCLI or the vSphere Lifecycle Manager.

  • Smart Card and RSA SecurID authentication might stop working after upgrading to vCenter Server 7.0

    If you have configured vCenter Server for either Smart Card or RSA SecurID authentication, see the VMware knowledge base article at https://kb.vmware.com/s/article/78057 before starting the vSphere 7.0 upgrade process. If you do not perform the workaround as described in the KB, you might see the following error messages, and Smart Card or RSA SecurID authentication might stop working.

    "Smart card authentication may stop working. Smart card settings may not be preserved, and smart card authentication may stop working."

    or

    "RSA SecurID authentication may stop working. RSA SecurID settings may not be preserved, and RSA SecurID authentication may stop working."

    Workaround: Before upgrading to vSphere 7.0, see the VMware knowledge base article at https://kb.vmware.com/s/article/78057.

  • The vlanid property in custom installation scripts might not work

    If you use a custom installation script that sets the vlanid property to specify a desired VLAN, the property might not take effect on newly installed ESXi hosts. The issue occurs only when a physical NIC is already connected to DHCP when the installation starts. The vlanid property works properly when you use a newly connected NIC.

    Workaround: Manually set the VLAN from the Direct Console User Interface after you boot the ESXi host. Alternatively, disable the physical NIC and then boot the host.
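
    If you have shell access, you can alternatively set the VLAN with ESXCLI (a sketch for a standard switch; the port group name and VLAN ID are examples):

    esxcli network vswitch standard portgroup set -p "Management Network" -v 100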

  • HPE servers with Trusted Platform Module (TPM) boot, but remote attestation fails

    Some HPE servers do not have enough event log space to properly finish TPM remote attestation. As a result, the VMkernel boots, but remote attestation fails due to the truncated log.

    Workaround: None.

  • Upgrading a vCenter Server with an external Platform Services Controller from 6.7u3 to 7.0 fails with VMAFD error

    When you upgrade a vCenter Server deployment using an external Platform Services Controller, you converge the Platform Services Controller into a vCenter Server appliance. If the upgrade fails with the error install.vmafd.vmdir_vdcpromo_error_21, the VMAFD firstboot process has failed. The VMAFD firstboot process copies the VMware Directory Service Database (data.mdb) from the source Platform Services Controller and replication partner vCenter Server appliance.

    Workaround: Disable TCP Segmentation Offload (TSO) and Generic Segmentation Offload (GSO) on the Ethernet adapter of the source Platform Services Controller or replication partner vCenter Server appliance before upgrading a vCenter Server with an external Platform Services Controller. See Knowledge Base article: https://kb.vmware.com/s/article/74678

  • Smart card and RSA SecurID settings may not be preserved during vCenter Server upgrade

    Authentication using RSA SecurID will not work after upgrading to vCenter Server 7.0. An error message alerts you to this issue when you attempt to log in by using RSA SecurID.

    Workaround: Reconfigure the smart card or RSA SecurID.

  • Migration of vCenter Server for Windows to vCenter Server appliance 7.0 fails with network error message

    Migration of vCenter Server for Windows to vCenter Server appliance 7.0 fails with the error message IP already exists in the network. This prevents the migration process from configuring the network parameters on the new vCenter Server appliance. For more information, examine the log file: /var/log/vmware/upgrade/UpgradeRunner.log

    Workaround:

    1. Verify that all Windows Updates have been completed on the source vCenter Server for Windows instance, or disable automatic Windows Updates until after the migration finishes.

    2. Retry the migration of vCenter Server for Windows to vCenter Server appliance 7.0.

  • When you configure the number of virtual functions for an SR-IOV device by using the max_vfs module parameter, the changes might not take effect

    In vSphere 7.0, you can configure the number of virtual functions for an SR-IOV device by using the Virtual Infrastructure Management (VIM) API, for example, through the vSphere Client. The task does not require reboot of the ESXi host. After you use the VIM API configuration, if you try to configure the number of SR-IOV virtual functions by using the max_vfs module parameter, the changes might not take effect because they are overridden by the VIM API configuration.

    Workaround: None. To configure the number of virtual functions for an SR-IOV device, use the same method every time. Use the VIM API or use the max_vfs module parameter and reboot the ESXi host.
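
    For reference, a minimal sketch of the module-parameter method (the ixgben driver name and per-port VF counts are examples; check your NIC driver's documentation for the exact parameter format, then reboot the host):

    esxcli system module parameters set -m ixgben -p "max_vfs=8,8"
    esxcli system module parameters list -m ixgben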

  • Upgraded vCenter Server appliance instance does not retain all the secondary networks (NICs) from the source instance

    During a major upgrade, if the source instance of the vCenter Server appliance is configured with multiple secondary networks other than the VCHA NIC, the target vCenter Server instance will not retain secondary networks other than the VCHA NIC. If the source instance is configured with multiple NICs that are part of VDS port groups, the NIC configuration will not be preserved during the upgrade. Configurations for vCenter Server appliance instances that are part of the standard port group will be preserved.

    Workaround: None. Manually configure the secondary network in the target vCenter Server appliance instance.

  • After upgrading or migrating a vCenter Server with an external Platform Services Controller, users authenticating using Active Directory lose access to the newly upgraded vCenter Server instance

    After upgrading or migrating a vCenter Server with an external Platform Services Controller, if the newly upgraded vCenter Server is not joined to an Active Directory domain, users authenticating using Active Directory will lose access to the vCenter Server instance.

    Workaround: Verify that the new vCenter Server instance has been joined to an Active Directory domain. See Knowledge Base article: https://kb.vmware.com/s/article/2118543

  • Migrating a vCenter Server for Windows with an external Platform Services Controller using an Oracle database fails

    If there are non-ASCII strings in the Oracle events and tasks table, the migration can fail when exporting events and tasks data. The following error message is provided: UnicodeDecodeError

    Workaround: None.

  • After an ESXi host upgrade, a Host Profile compliance check shows non-compliant status while host remediation tasks fail

    The non-compliant status indicates an inconsistency between the profile and the host.

    This inconsistency might occur because ESXi 7.0 does not allow duplicate claim rules, but the profile you use contains duplicate rules. For example, if you attempt to use the Host Profile that you extracted from the host before upgrading ESXi 6.5 or ESXi 6.7 to version 7.0, and the Host Profile contains any duplicate claim rules of system default rules, you might experience these problems.

    Workaround:

    1. Remove any duplicate claim rules of the system default rules from the Host Profile document.

    2. Check the compliance status.

    3. Remediate the host.

    4. If the previous steps do not help, reboot the host.

  • Error message displays in the vCenter Server Management Interface

    After installing or upgrading to vCenter Server 7.0, when you navigate to the Update panel within the vCenter Server Management Interface, the error message "Check the URL and try again" displays. The error message does not prevent you from using the functions within the Update panel, and you can view, stage, and install any available updates.

    Workaround: None.

Security Features Issues

  • Turn off the Service Location Protocol service in ESXi, slpd, to prevent potential security vulnerabilities

    Some services in ESXi that run on top of the host operating system, including slpd, the CIM object broker, sfcbd, and the related openwsmand service, have proven security vulnerabilities. VMware has addressed all known vulnerabilities in VMSA-2019-0022 and VMSA-2020-0023, and the fixes are part of the vSphere 7.0 Update 2 release. While sfcbd and openwsmand are disabled by default in ESXi, slpd is enabled by default; if you do not need it, turn it off to prevent exposure to a future vulnerability after an upgrade.

    Workaround: To turn off the slpd service, run the following PowerCLI commands:

    $ Get-VMHost | Get-VMHostService | Where-Object {$_.Key -eq "slpd"} | Set-VMHostService -Policy "off"
    $ Get-VMHost | Get-VMHostService | Where-Object {$_.Key -eq "slpd"} | Stop-VMHostService -Confirm:$false

    Alternatively, you can use the command chkconfig slpd off && /etc/init.d/slpd stop.

    The openwsmand service is not on the ESXi services list and you can check the service state by using the following PowerCLI commands:

    $esx = Get-EsxCli -VMHost xx.xx.xx.xx -V2
    $esx.system.process.list.invoke() | where CommandLine -like '*openwsman*' | select CommandLine

    In the ESXi services list, the sfcbd service appears as sfcbd-watchdog.

    For more information, see VMware knowledge base articles 76372 and 1025757.

  • Encrypted virtual machine fails to power on when HA-enabled Trusted Cluster contains an unattested host

    In VMware® vSphere Trust Authority™, if you have enabled HA on the Trusted Cluster and one or more hosts in the cluster fails attestation, an encrypted virtual machine cannot power on.

    Workaround: Either remove or remediate all hosts that failed attestation from the Trusted Cluster.

  • Encrypted virtual machine fails to power on when DRS-enabled Trusted Cluster contains an unattested host

    In VMware® vSphere Trust Authority™, if you have enabled DRS on the Trusted Cluster and one or more hosts in the cluster fails attestation, DRS might try to power on an encrypted virtual machine on an unattested host in the cluster. This operation puts the virtual machine in a locked state.

    Workaround: Either remove or remediate all hosts that failed attestation from the Trusted Cluster.

  • Migrating or cloning encrypted virtual machines across vCenter Server instances fails when attempting to do so using the vSphere Client

    If you try to migrate or clone an encrypted virtual machine across vCenter Server instances using the vSphere Client, the operation fails with the following error message: "The operation is not allowed in the current state."

    Workaround: You must use the vSphere APIs to migrate or clone encrypted virtual machines across vCenter Server instances.

Networking Issues

  • Reduced throughput in networking performance on Intel 82599/X540/X550 NICs

    The new queue-pair feature added to the ixgben driver to improve networking performance on Intel 82599EB/X540/X550 series NICs might reduce throughput under some workloads in vSphere 7.0 as compared to vSphere 6.7.

    Workaround: To achieve the same networking performance as vSphere 6.7, you can disable the queue-pair with a module parameter. To disable the queue-pair, run the command:

    # esxcli system module parameters set -p "QPair=0,0,0,0..." -m ixgben

    After running the command, reboot.

  • One or more I/O devices do not generate interrupts when the AMD IOMMU is in use

    If the I/O devices on your ESXi host provide more than a total of 512 distinct interrupt sources, some sources are erroneously assigned an interrupt-remapping table entry (IRTE) index in the AMD IOMMU that is greater than the maximum value. Interrupts from such a source are lost, so the corresponding I/O device behaves as if interrupts are disabled.

    Workaround: Use the ESXCLI command esxcli system settings kernel set -s iovDisableIR -v true to disable the AMD IOMMU interrupt remapper. Reboot the ESXi host so that the command takes effect.

  • When you set auto-negotiation on a network adapter, the device might fail

    In some environments, if you set the link speed to auto-negotiation for network adapters by using the command esxcli network nic set -a -n vmnicX, the devices might fail, and a reboot does not recover connectivity. The issue is specific to a combination of some Intel X710/X722 network adapters, an SFP+ module, and a physical switch, where the auto-negotiate speed/duplex scenario is not supported.

    Workaround: Make sure you use an Intel-branded SFP+ module. Alternatively, use a Direct Attach Copper (DAC) cable.

  • Solarflare x2542 and x2541 network adapters configured in 1x100G port mode achieve throughput of up to 70Gbps in a vSphere environment

    vSphere 7.0 Update 2 supports Solarflare x2542 and x2541 network adapters configured in 1x100G port mode. However, a hardware limitation in the devices causes the actual throughput to top out at approximately 70 Gbps in a vSphere environment.

    Workaround: None

  • VLAN traffic might fail after a NIC reset

    A NIC with PCI device ID 8086:1537 might stop sending and receiving VLAN-tagged packets after a reset, for example, with a command such as vsish -e set /net/pNics/vmnic0/reset 1.

    Workaround: Avoid resetting the NIC. If you already face the issue, use the following commands to restore the VLAN capability, for example at vmnic0:

    # esxcli network nic software set --tagging=1 -n vmnic0
    # esxcli network nic software set --tagging=0 -n vmnic0

  • Any change in the NetQueue balancer settings causes NetQueue to be disabled after an ESXi host reboot

    Any change in the NetQueue balancer settings by using the command esxcli/localcli network nic queue loadbalancer set -n <nicname> --<lb_setting> causes NetQueue, which is enabled by default, to be disabled after an ESXi host reboot.

    Workaround: After a change in the NetQueue balancer settings and a host reboot, use the command configstorecli config current get -c esx -g network -k nics to retrieve the ConfigStore data and verify whether /esx/network/nics/net_queue/load_balancer/enable is set as expected.

    After you run the command, you see output similar to:

    {"mac": "02:00:0e:6d:14:3e","name": "vmnic1","net_queue": { "load_balancer": { "dynamic_pool": true, "enable": true }},"virtual_mac": "00:50:56:5a:21:11"}

    If the output is not as expected, for example "enable": false, run the following command:

    esxcli/localcli network nic queue loadbalancer state set -n <nicname> -e true

  • Paravirtual RDMA (PVRDMA) network adapters do not support NSX networking policies

    If you configure an NSX distributed virtual port for use in PVRDMA traffic, the RDMA protocol traffic over the PVRDMA network adapters does not comply with the NSX network policies.

    Workaround: Do not configure NSX distributed virtual ports for use in PVRDMA traffic.

  • Rollback from converged vSphere Distributed Switch (VDS) to NSX-T VDS is not supported in vSphere 7.0 Update 3

    Rollback from converged VDS that supports both vSphere 7 traffic and NSX-T 3 traffic on the same VDS to one N-VDS for NSX-T traffic is not supported in vSphere 7.0 Update 3.

    Workaround: None

  • If you do not set the nmlx5 network driver module parameter, network connectivity or ESXi hosts might fail

    If you do not set the supported_num_ports module parameter for the nmlx5_core driver on an ESXi host with multiple network adapters of versions Mellanox ConnectX-4, Mellanox ConnectX-5 and Mellanox ConnectX-6, the driver might not allocate sufficient memory for operating all the NIC ports for the host. As a result, you might experience network loss or ESXi host failure with purple diagnostic screen, or both.

    Workaround: Set the supported_num_ports module parameter value of the nmlx5_core network driver equal to the total number of Mellanox ConnectX-4, Mellanox ConnectX-5, and Mellanox ConnectX-6 network adapter ports on the ESXi host, as in the sketch below.
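
    For example, on a host with a total of eight such ports, a minimal ESXCLI sketch (the port count is an example; reboot the host for the parameter to take effect):

    esxcli system module parameters set -m nmlx5_core -p "supported_num_ports=8"
    esxcli system module parameters list -m nmlx5_core | grep supported_num_ports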

  • High throughput virtual machines may experience degradation in network performance when Network I/O Control (NetIOC) is enabled

    Virtual machines requiring high network throughput can experience throughput degradation when upgrading from vSphere 6.7 to vSphere 7.0 with NetIOC enabled.

    Workaround: Adjust the ethernetX.ctxPerDev setting to enable multiple worlds, as in the sketch below.
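
    A minimal sketch of the per-vNIC .vmx entry (the vNIC index and value are examples; based on common VMware tuning guidance, the value "1" dedicates a transmit context to the vNIC, and the change is applied while the VM is powered off):

    ethernet0.ctxPerDev = "1"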

  • IPv6 traffic fails to pass through VMkernel ports using IPsec

    When you migrate VMkernel ports from one port group to another, IPv6 traffic does not pass through VMkernel ports using IPsec.

    Workaround: Remove the IPsec security association (SA) from the affected server, and then reapply the SA. To learn how to set and remove an IPsec SA, see the vSphere Security documentation.

  • Higher ESX network performance with an increase in CPU usage

    ESX network performance might increase at the cost of additional CPU usage.

    Workaround: Remove and add the network interface with only 1 rx dispatch queue. For example:

    esxcli network ip interface remove --interface-name=vmk1

    esxcli network ip interface add --interface-name=vmk1 --num-rxqueue=1

  • VM might lose Ethernet traffic after hot-add, hot-remove or storage vMotion

    A VM might stop receiving Ethernet traffic after a hot-add, hot-remove or storage vMotion. This issue affects VMs where the uplink of the VNIC has SR-IOV enabled. PVRDMA virtual NIC exhibits this issue when the uplink of the virtual network is a Mellanox RDMA capable NIC and RDMA namespaces are configured.

    Workaround: You can hot-remove and hot-add the affected Ethernet NICs of the VM to restore traffic. On Linux guest operating systems, restarting the network might also resolve the issue. If these workarounds have no effect, you can reboot the VM to restore network connectivity.

  • Change of IP address for a VCSA deployed with static IP address requires that you create the DNS records in advance

    With the introduction of the DDNS, the DNS record update only works for VCSA deployed with DHCP configured networking. While changing the IP address of the vCenter server via VAMI, the following error is displayed:

    The specified IP address does not resolve to the specified hostname.

    Workaround: There are two possible workarounds.

    1. Create an additional DNS entry with the same FQDN and desired IP address. Log in to the VAMI and follow the steps to change the IP address.

    2. Log in to the VCSA using ssh. Execute the following script:

      /opt/vmware/share/vami/vami_config_net

      Use option 6 to change the IP address of eth0. Once changed, execute the following script:

      /opt/likewise/bin/lw-update-dns

      Restart all the services on the VCSA to update the IP information on the DNS server.

  • It may take several seconds for the NSX Distributed Virtual Port Group (NSX DVPG) to be removed after deleting the corresponding logical switch in NSX Manager.

    As the number of logical switches increases, it may take more time for the NSX DVPG in vCenter Server to be removed after deleting the corresponding logical switch in NSX Manager. In an environment with 12000 logical switches, it takes approximately 10 seconds for an NSX DVPG to be deleted from vCenter Server.

    Workaround: None.

  • Hostd runs out of memory and fails if a large number of NSX Distributed Virtual port groups are created.

    In vSphere 7.0, NSX Distributed Virtual port groups consume significantly larger amounts of memory than opaque networks. For this reason, NSX Distributed Virtual port groups cannot support the same scale as an opaque network, given the same amount of memory.

    Workaround: To support the use of NSX Distributed Virtual port groups, increase the amount of memory in your ESXi hosts. If you verify that your system has adequate memory to support your VMs, you can directly increase the memory of hostd by using the following command.

    localcli --plugin-dir /usr/lib/vmware/esxcli/int/ sched group setmemconfig --group-path host/vim/vmvisor/hostd --units mb --min 2048 --max 2048

    Note that this will cause hostd to use memory normally reserved for your environment's VMs. This may have the effect of reducing the number of VMs your ESXi host can support.

  • DRS may incorrectly launch vMotion if the network reservation is configured on a VM

    If the network reservation is configured on a VM, it is expected that DRS only migrates the VM to a host that meets the specified requirements. In a cluster with NSX transport nodes, if some of the transport nodes join the transport zone by NSX-T Virtual Distributed Switch (N-VDS), and others by vSphere Distributed Switch (VDS) 7.0, DRS may incorrectly launch vMotion. You might encounter this issue when:

    • The VM connects to an NSX logical switch configured with a network reservation.

    • Some transport nodes join transport zone using N-VDS, and others by VDS 7.0, or, transport nodes join the transport zone through different VDS 7.0 instances.

    Workaround: Make all transport nodes join the transport zone by N-VDS or the same VDS 7.0 instance.

  • When adding a VMkernel NIC (vmknic) to an NSX portgroup, vCenter Server reports the error "Connecting VMKernel adapter to a NSX Portgroup on a Stateless host is not a supported operation. Please use Distributed Port Group instead."

    • For stateless ESXi on Distributed Virtual Switch (VDS), the vmknic on an NSX port group is blocked. You must use a Distributed Port Group instead.

    • For stateful ESXi on VDS, a vmknic on an NSX port group is supported, but vSAN might have an issue if it uses a vmknic on an NSX port group.

    Workaround: Use a Distributed Port Group on the same VDS.

  • Enabling SRIOV from vCenter for QLogic 4x10GE QL41164HFCU CNA might fail

    If you navigate to the Edit Settings dialog for physical network adapters and attempt to enable SR-IOV, the operation might fail when using QLogic 4x10GE QL41164HFCU CNA. Attempting to enable SR-IOV might lead to a network outage of the ESXi host.

    Workaround: Use the esxcfg-module command on the ESXi host to enable SR-IOV instead.

  • vCenter Server fails if the hosts in a cluster using Distributed Resource Scheduler (DRS) join NSX-T networking by a different Virtual Distributed Switch (VDS) or combination of NSX-T Virtual Distributed Switch (NVDS) and VDS

    In vSphere 7.0, when using NSX-T networking on vSphere VDS with a DRS cluster, if the hosts do not join the NSX transport zone by the same VDS or NVDS, it can cause vCenter Server to fail.

    Workaround: Have hosts in a DRS cluster join the NSX transport zone using the same VDS or NVDS.

Storage Issues

  • VMFS datastores are not mounted automatically after disk hot remove and hot insert on HPE Gen10 servers with SmartPQI controllers

    When SATA disks on HPE Gen10 servers with SmartPQI controllers without expanders are hot removed and hot inserted back to a different disk bay of the same machine, or when multiple disks are hot removed and hot inserted back in a different order, sometimes a new local name is assigned to the disk. The VMFS datastore on that disk appears as a snapshot and will not be mounted back automatically because the device name has changed.

    Workaround: None. SmartPQI controller does not support unordered hot remove and hot insert operations.

  • VOMA check on NVMe based VMFS datastores fails with error

    VOMA check is not supported for NVMe based VMFS datastores and will fail with the error:

    ERROR: Failed to reserve device. Function not implemented 

    Example:

    # voma -m vmfs -f check -d /vmfs/devices/disks/:<partition#>
    Running VMFS Checker version 2.1 in check mode
    Initializing LVM metadata, Basic Checks will be done
    
    Checking for filesystem activity
    Performing filesystem liveness check..|Scanning for VMFS-6 host activity (4096 bytes/HB, 1024 HBs).
    ERROR: Failed to reserve device. Function not implemented
    Aborting VMFS volume check
    VOMA failed to check device : General Error

    Workaround: None. If you need to analyze VMFS metadata, collect it by using the -l option and pass it to VMware customer support. The command for collecting the dump is:

    voma -l -f dump -d /vmfs/devices/disks/:<partition#>

  • Using the VM reconfigure API to attach an encrypted First Class Disk to an encrypted virtual machine might fail with error

    If an FCD and a VM are encrypted with different crypto keys, your attempts to attach the encrypted FCD to the encrypted VM using the VM reconfigure API might fail with the error message:

    Cannot decrypt disk because key or password is incorrect.

    Workaround: Use the attachDisk API rather than the VM reconfigure API to attach an encrypted FCD to an encrypted VM.

  • An ESXi host might become unresponsive if a non-head extent of its spanned VMFS datastore enters the Permanent Device Loss (PDL) state

    This problem does not occur when a non-head extent of the spanned VMFS datastore fails along with the head extent. In this case, the entire datastore becomes inaccessible and no longer allows I/Os.

    In contrast, when only a non-head extent fails but the head extent remains accessible, the datastore heartbeat appears to be normal, and the I/Os between the host and the datastore continue. However, any I/Os that depend on the failed non-head extent start failing as well. Other I/O transactions might accumulate while waiting for the failing I/Os to resolve and cause the host to become unresponsive.

    Workaround: Fix the PDL condition of the non-head extent to resolve this issue.

  • Virtual NVMe Controller is the default disk controller for Windows 10 guest operating systems

    The Virtual NVMe Controller is the default disk controller for the following guest operating systems when using Hardware Version 15 or later:

    • Windows 10

    • Windows Server 2016

    • Windows Server 2019

    Some features might not be available when using a Virtual NVMe Controller. For more information, see https://kb.vmware.com/s/article/2147714

    Note: Some clients use the previous default of LSI Logic SAS. This includes the ESXi host client and PowerCLI.

    Workaround: If you need features not available on Virtual NVMe, switch to VMware Paravirtual SCSI (PVSCSI) or LSI Logic SAS. For information on using VMware Paravirtual SCSI (PVSCSI), see https://kb.vmware.com/s/article/1010398

  • After an ESXi host upgrade to vSphere 7.0, presence of duplicate core claim rules might cause unexpected behavior

    Claim rules determine which multipathing plugin, such as NMP or HPP, owns paths to a particular storage device. ESXi 7.0 does not support duplicate claim rules. However, the ESXi 7.0 host does not alert you if you add duplicate rules to the existing claim rules inherited through an upgrade from a legacy release. As a result of using duplicate rules, storage devices might be claimed by unintended plugins, which can cause unexpected outcomes.

    Workaround: Do not use duplicate core claim rules. Before adding a new claim rule, delete any existing matching claim rule.

  • A CNS query with the compliance status filter set might take unusually long time to complete

    The CNS QueryVolume API enables you to obtain information about the CNS volumes, such as volume health and compliance status. When you check the compliance status of individual volumes, the results are obtained quickly. However, when you invoke the CNS QueryVolume API to check the compliance status of multiple volumes, several tens or hundreds, the query might perform slowly.

    Workaround: Avoid using bulk queries. When you need to get compliance status, query one volume at a time or limit the number of volumes in the query API to 20 or fewer. While using the query, avoid running other CNS operations to get the best performance.

  • A VMFS datastore backed by an NVMe over Fabrics namespace or device might become permanently inaccessible after recovering from an APD or PDL failure

    If a VMFS datastore on an ESXi host is backed by an NVMe over Fabrics namespace or device, in case of an all paths down (APD) or permanent device loss (PDL) failure, the datastore might be inaccessible even after recovery. You cannot access the datastore from either the ESXi host or the vCenter Server system.

    Workaround: To recover from this state, perform a rescan on a host or cluster level. For more information, see Perform Storage Rescan.
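
    You can trigger the rescan from the vSphere Client or, as a minimal sketch, from the ESXi Shell:

    esxcli storage core adapter rescan --all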

  • Deleted CNS volumes might temporarily appear as existing in the CNS UI

    After you delete an FCD disk that backs a CNS volume, the volume might still show up as existing in the CNS UI. However, your attempts to delete the volume fail. You might see an error message similar to the following:

    The object or item referred to could not be found.

    Workaround: The next full synchronization will resolve the inconsistency and correctly update the CNS UI.

  • Attempts to attach multiple CNS volumes to the same pod might occasionally fail with an error

    When you attach multiple volumes to the same pod simultaneously, the attach operation might occasionally choose the same controller slot. As a result, only one of the operations succeeds, while other volume mounts fail.

    Workaround: After Kubernetes retries the failed operation, the operation succeeds if a controller slot is available on the node VM.

  • Under certain circumstances, while a CNS operation fails, the task status appears as successful in the vSphere Client

    This might occur when, for example, you use an incompliant storage policy to create a CNS volume. The operation fails, while the vSphere Client shows the task status as successful.

    Workaround: The successful task status in the vSphere Client does not guarantee that the CNS operation succeeded. To make sure the operation succeeded, verify its results.

  • Unsuccessful delete operation for a CNS persistent volume might leave the volume undeleted on the vSphere datastore

    This issue might occur when the CNS Delete API attempts to delete a persistent volume that is still attached to a pod. For example, when you delete the Kubernetes namespace where the pod runs. As a result, the volume gets cleared from CNS and the CNS query operation does not return the volume. However, the volume continues to reside on the datastore and cannot be deleted through the repeated CNS Delete API operations.

    Workaround: None.

vCenter Server and vSphere Client Issues

  • Vendor providers go offline after a PNID change

    When you change the vCenter IP address (PNID change), the registered vendor providers go offline.

    Workaround: Re-register the vendor providers.

  • Cross vCenter migration of a virtual machine fails with an error

    When you use cross vCenter vMotion to move a VM's storage and host to a different vCenter server instance, you might receive the error The operation is not allowed in the current state.

    This error appears in the UI wizard after the Host Selection step and before the Datastore Selection step, in cases where the VM has an assigned storage policy containing host-based rules such as encryption or any other IO filter rule.

    Workaround: Assign the VM and its disks to a storage policy without host-based rules. You might need to decrypt the VM if the source VM is encrypted. Then retry the cross vCenter vMotion action.

  • Storage Sensors information in Hardware Health tab shows incorrect values on vCenter UI, host UI, and MOB

    When you navigate to Host > Monitor > Hardware Health > Storage Sensors on the vCenter UI, the storage information displays either incorrect or unknown values. The same issue is observed on the host UI and the MOB path "runtime.hardwareStatusInfo.storageStatusInfo" as well.

    Workaround: None.

  • vSphere UI host advanced settings shows the current product locker location as empty with an empty default

    vSphere UI host advanced settings shows the current product locker location as empty with an empty default. This is inconsistent, because the actual product locker location symlink is created and valid, which causes confusion. The default cannot be corrected from the UI.

    Workaround: You can use the esxcli commands on the host to correct the current product locker location default, as follows.

    1. Remove the existing Product Locker Location setting with: "esxcli system settings advanced remove -o ProductLockerLocation"

    2. Re-add the Product Locker Location setting with the appropriate default:

    2.a. If the ESXi is a full installation, the default value is "/locker/packages/vmtoolsRepo":

    export PRODUCT_LOCKER_DEFAULT="/locker/packages/vmtoolsRepo"

    2.b. If the ESXi is a PXEboot configuration such as autodeploy, the default value is "/vmtoolsRepo":

    export PRODUCT_LOCKER_DEFAULT="/vmtoolsRepo"

    Run the following command to automatically figure out the location: export PRODUCT_LOCKER_DEFAULT=`readlink /productLocker`

    Add the setting: esxcli system settings advanced add -d "Path to VMware Tools repository" -o ProductLockerLocation -t string -s $PRODUCT_LOCKER_DEFAULT

    You can combine all of the steps in step 2 by issuing the single command:

    esxcli system settings advanced add -d "Path to VMware Tools repository" -o ProductLockerLocation -t string -s `readlink /productLocker`

  • Linked Software-Defined Data Center (SDDC) vCenter Server instances appear in the on-premises vSphere Client if a vCenter Cloud Gateway is linked to the SDDC.

    When a vCenter Cloud Gateway is deployed in the same environment as an on-premises vCenter Server, and linked to an SDDC, the SDDC vCenter Server will appear in the on-premises vSphere Client. This is unexpected behavior and the linked SDDC vCenter Server should be ignored. All operations involving the linked SDDC vCenter Server should be performed on the vSphere Client running within the vCenter Cloud Gateway.

    Workaround: None.

Virtual Machine Management Issues

  • UEFI HTTP booting of virtual machines on ESXi hosts of version earlier than 7.0 Update 2 fails

    UEFI HTTP booting of virtual machines is supported only on hosts of version ESXi 7.0 Update 2 and later and VMs with HW version 19 or later.

    Workaround: Use UEFI HTTP booting only in virtual machines with HW version 19 or later. Using HW version 19 ensures the virtual machines are placed only on hosts with ESXi version 7.0 Update 2 or later.

  • Virtual machine snapshot operations fail in vSphere Virtual Volumes datastores on Purity version 5.3.10

    Virtual machine snapshot operations fail in vSphere Virtual Volumes datastores on Purity version 5.3.10 with an error such as An error occurred while saving the snapshot: The VVol target encountered a vendor specific error. The issue is specific to Purity version 5.3.10.

    Workaround: Upgrade to Purity version 6.1.7 or follow vendor recommendations.

  • The postcustomization section of the customization script runs before the guest customization

    When you run the guest customization script for a Linux guest operating system, the precustomization section of the customization script that is defined in the customization specification runs before the guest customization and the postcustomization section runs after that. If you enable Cloud-Init in the guest operating system of a virtual machine, the postcustomization section runs before the customization due to a known issue in Cloud-Init.

    Workaround: Disable Cloud-Init and use the standard guest customization.

  • Group migration operations in vSphere vMotion, Storage vMotion, and vMotion without shared storage fail with error

    When you perform group migration operations on VMs with multiple disks and multi-level snapshots, the operations might fail with the error com.vmware.vc.GenericVmConfigFault Failed waiting for data. Error 195887167. Connection closed by remote host, possibly due to timeout.

    Workaround: Retry the migration operation on the failed VMs one at a time.

  • Deploying an OVF or OVA template from a URL fails with a 403 Forbidden error

    URLs that contain an HTTP query parameter are not supported, for example, http://webaddress.com?file=abc.ovf or Amazon pre-signed S3 URLs.

    Workaround: Download the files and deploy them from your local file system.

  • The third level of nested objects in a virtual machine folder is not visible

    Perform the following steps:

    1. Navigate to a data center and create a virtual machine folder.

    2. In the virtual machine folder, create a nested virtual machine folder.

    3. In the second folder, create a virtual machine, virtual machine folder, vApp, or VM template.

    As a result, from the VMs and Templates inventory tree you cannot see the objects in the third nested folder.

    Workaround: To see the objects in the third nested folder, navigate to the second nested folder and select the VMs tab.

vSphere HA and Fault Tolerance Issues

  • VMs in a cluster might be orphaned after recovering from storage inaccessibility such as a cluster-wide APD

    Some VMs might be in an orphaned state after a cluster-wide APD recovers, even if HA and VMCP are enabled on the cluster.

    This issue might be encountered when the following conditions occur simultaneously:

    • All hosts in the cluster experience APD and do not recover until VMCP timeout is reached.

    • HA primary initiates failover due to APD on a host.

    • The power-on API during HA failover fails due to one of the following:

      • APD across the same host

      • Cascading APD across the entire cluster

      • Storage issues

      • Resource unavailability

    • FDM unregistration and the vCenter Server steal-VM logic might run in a window where FDM has not yet unregistered the failed VM, and vCenter Server host synchronization reports that multiple hosts are registering the same VM. FDM and vCenter Server then each unregister a different registered copy of the same VM from different hosts, causing the VM to be orphaned.

    Workaround: You must unregister and reregister the orphaned VMs manually within the cluster after the APD recovers, for example from the ESXi Shell as sketched below.

    If you do not manually reregister the orphaned VMs, HA attempts failover of the orphaned VMs, but it might take 5 to 10 hours, depending on when the APD recovers.

    The overall functionality of the cluster is not affected in these cases, and HA continues to protect the VMs. This is an anomaly in what vCenter Server displays for the duration of the problem.
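    One way to do the manual reregistration is from the ESXi Shell with vim-cmd; a minimal sketch, where the VM ID, datastore, and .vmx path are placeholders you look up in your environment:

    # List registered VMs and note the ID of the orphaned VM
    vim-cmd vmsvc/getallvms

    # Unregister the orphaned copy by its VM ID
    vim-cmd vmsvc/unregister <vmid>

    # Reregister the VM from its configuration file
    vim-cmd solo/registervm /vmfs/volumes/<datastore>/<vm-name>/<vm-name>.vmx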

vSphere Lifecycle Manager Issues

  • vSphere Lifecycle Manager and vSAN File Services cannot be simultaneously enabled on a vSAN cluster in the vSphere 7.0 release

    If vSphere Lifecycle Manager is enabled on a cluster, vSAN File Services cannot be enabled on the same cluster, and vice versa. To enable vSphere Lifecycle Manager on a cluster that already has vSAN File Services enabled, first disable vSAN File Services and retry the operation. Note that if you transition to a cluster that is managed by a single image, vSphere Lifecycle Manager cannot be disabled on that cluster.

    Workaround: None.

  • When a hardware support manager is unavailable, vSphere High Availability (HA) functionality is impacted

    If the hardware support manager is unavailable for a cluster that you manage with a single image where a firmware and drivers addon is selected and vSphere HA is enabled, vSphere HA functionality is impacted. You might experience the following errors.

    • Configuring vSphere HA on a cluster fails.

    • Cannot complete the configuration of the vSphere HA agent on a host: Applying HA VIBs on the cluster encountered a failure.

    • Remediating vSphere HA fails: A general system error occurred: Failed to get Effective Component map.

    • Disabling vSphere HA fails: Delete Solution task failed. A general system error occurred: Cannot find hardware support package from depot or hardware support manager.

    Workaround:

    • If the hardware support manager is temporarily unavailable, perform the following steps.

    1. Reconnect the hardware support manager to vCenter Server.

    2. Select a cluster from the Hosts and Cluster menu.

    3. Select the Configure tab.

    4. Under Services, click vSphere Availability.

    5. Re-enable vSphere HA.

    • If the hardware support manager is permanently unavailable, perform the following steps.

    1. Remove the hardware support manager and the hardware support package from the image specification:

       a. Select a cluster from the Hosts and Cluster menu.

       b. Select the Updates tab.

       c. Click Edit.

       d. Remove the firmware and drivers addon and click Save.

    2. Re-enable vSphere HA:

       a. Select the Configure tab.

       b. Under Services, click vSphere Availability.

       c. Re-enable vSphere HA.

  • I/OFilter is not removed from a cluster after a remediation process in vSphere Lifecycle Manager

    Removing I/OFilter from a cluster by remediating the cluster in vSphere Lifecycle Manager fails with the following error message: iofilter XXX already exists. The iofilter remains listed as installed.

    Workaround:

    1. Call IOFilter API UninstallIoFilter_Task from the vCenter Server managed object (IoFilterManager).

    2. Remediate the cluster in vSphere Lifecycle Manager.

    3. Call IOFilter API ResolveInstallationErrorsOnCluster_Task from the vCenter Server managed object (IoFilterManager) to update the database.

  • While remediating a vSphere HA enabled cluster in vSphere Lifecycle Manager, adding hosts causes a vSphere HA error state

    Adding one or more ESXi hosts during the remediation process of a vSphere HA enabled cluster results in the following error message: Applying HA VIBs on the cluster encountered a failure.

    Workaround: After the cluster remediation operation has finished, perform one of the following tasks.

    • Right-click the failed ESXi host and select Reconfigure for vSphere HA.

    • Disable and re-enable vSphere HA for the cluster.

  • While remediating a vSphere HA enabled cluster in vSphere Lifecycle Manager, disabling and re-enabling vSphere HA causes a vSphere HA error state

    Disabling and re-enabling vSphere HA during the remediation process of a cluster might cause the remediation process to fail, because the vSphere HA health checks report that hosts do not have vSphere HA VIBs installed. You might see the following error message: Setting desired image spec for cluster failed.

    Workaround: After the cluster remediation operation has finished, disable and re-enable vSphere HA for the cluster.

  • Checking for recommended images in vSphere Lifecycle Manager has slow performance in large clusters

    In large clusters with more than 16 hosts, the recommendation generation task might take more than an hour to finish or might appear to hang. The completion time for the recommendation task depends on the number of devices configured on each host and the number of image candidates from the depot that vSphere Lifecycle Manager needs to process before obtaining a valid image to recommend.

    Workaround: None.

  • Checking for hardware compatibility in vSphere Lifecycle Manager has slow performance in large clusters

    In large clusters with more than 16 hosts, the validation report generation task might take up to 30 minutes to finish or might appear to hang. The completion time depends on the number of devices configured on each host and the number of hosts configured in the cluster.

    Workaround: None.

  • Incomplete error messages in non-English languages are displayed when remediating a cluster in vSphere Lifecycle Manager

    You can encounter incomplete error messages for localized languages in the vCenter Server user interface. The messages are displayed after a cluster remediation process in vSphere Lifecycle Manager fails. For example, you can observe the following error message.

    The error message in English language: Virtual machine 'VMC on DELL EMC -FileServer' that runs on cluster 'Cluster-1' reported an issue which prevents entering maintenance mode: Unable to access the virtual machine configuration: Unable to access file[local-0] VMC on Dell EMC - FileServer/VMC on Dell EMC - FileServer.vmx

    The error message in French language: La VM « VMC on DELL EMC -FileServer », située sur le cluster « {Cluster-1} », a signalé un problème empêchant le passage en mode de maintenance : Unable to access the virtual machine configuration: Unable to access file[local-0] VMC on Dell EMC - FileServer/VMC on Dell EMC - FileServer.vmx

    Workaround: None.

  • Importing an image with no vendor addon, components, or firmware and drivers addon to a cluster whose image contains such elements does not remove the image elements of the existing image

    Only the ESXi base image is replaced with the one from the imported image.

    Workaround: After the import process finishes, edit the image, and if needed, remove the vendor addon, components, and firmware and drivers addon.

  • When you convert a cluster that uses baselines to a cluster that uses a single image, a warning is displayed that vSphere HA VIBs will be removed

    Converting a vSphere HA enabled cluster that uses baselines to a cluster that uses a single image might result in a warning message that the vmware-fdm component will be removed.

    Workaround: This message can be ignored. The conversion process installs the vmware-fdm component.

  • If vSphere Update Manager is configured to download patch updates from the Internet through a proxy server, after upgrade to vSphere 7.0 that converts Update Manager to vSphere Lifecycle Manager, downloading patches from VMware patch repository might fail

    In earlier releases of vCenter Server, you could configure independent proxy settings for vCenter Server and vSphere Update Manager. After an upgrade to vSphere 7.0, the vSphere Update Manager service becomes part of the vSphere Lifecycle Manager service. For the vSphere Lifecycle Manager service, the proxy settings are configured from the vCenter Server appliance settings. If you had configured Update Manager to download patch updates from the Internet through a proxy server, but the vCenter Server appliance had no proxy setting configured, after a vCenter Server upgrade to version 7.0, vSphere Lifecycle Manager fails to connect to the VMware depot and cannot download patches or updates.

    Workaround: Log in to the vCenter Server Appliance Management Interface, https://vcenter-server-appliance-FQDN-or-IP-address:5480, to configure proxy settings for the vCenter Server appliance and enable vSphere Lifecycle Manager to use proxy.

Miscellaneous Issues

  • When applying a host profile with version 6.5 to an ESXi host with version 7.0, the compliance check fails

    Applying a host profile with version 6.5 to an ESXi host with version 7.0 results in the Coredump file profile being reported as not compliant with the host.

    Workaround: There are two possible workarounds.

    1. When you create a host profile with version 6.5, set the advanced configuration option VMkernel.Boot.autoCreateDumpFile to false on the ESXi host.

    2. When you apply an existing host profile with version 6.5, add the advanced configuration option VMkernel.Boot.autoCreateDumpFile in the host profile, configure the option to a fixed policy, and set its value to false.
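    If you prefer to set the boot option directly on each ESXi host from the ESXi Shell, a minimal sketch, assuming the kernel setting name matches your build:

    # Disable automatic creation of the dump file (VMkernel.Boot.autoCreateDumpFile)
    esxcli system settings kernel set -s autoCreateDumpFile -v FALSE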

  • Mellanox ConnectX-4 or ConnectX-5 native ESXi drivers might exhibit minor throughput degradation when Dynamic Receive Side Scaling (DYN_RSS) or Generic RSS (GEN_RSS) feature is turned on

    Mellanox ConnectX-4 or ConnectX-5 native ESXi drivers might exhibit less than 5 percent throughput degradation when the DYN_RSS or GEN_RSS feature is turned on, which is unlikely to impact normal workloads.

    Workaround: You can disable the DYN_RSS and GEN_RSS features with the following commands:

    # esxcli system module parameters set -m nmlx5_core -p "DYN_RSS=0 GEN_RSS=0"

    # reboot
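    After the reboot, you can confirm that the parameter values took effect; a quick check, assuming the nmlx5_core module is loaded:

    # esxcli system module parameters list -m nmlx5_core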

  • RDMA traffic between two VMs on the same host might fail in PVRDMA environment

    In a vSphere 7.0 implementation of a PVRDMA environment, VMs pass traffic through the HCA for local communication if an HCA is present. However, loopback of RDMA traffic does not work with the qedrntv driver. For instance, RDMA queue pairs running on VMs that are configured under the same uplink port cannot communicate with each other.

    In vSphere 6.7 and earlier, the HCA was used for local RDMA traffic if SRQ was enabled. vSphere 7.0 uses HCA loopback for VMs that run versions of PVRDMA with SRQ enabled, HW version 14 or later, and RoCE v2.

    The current version of Marvell FastLinQ adapter firmware does not support loopback traffic between QPs of the same PF or port.

    Workaround: Required support is being added in the out-of-box driver certified for vSphere 7.0. If you are using the inbox qedrntv driver, you must use a 3-host configuration and migrate VMs to the third host.

  • Unreliable Datagram traffic QP limitations in qedrntv driver

    There are limitations with the Marvell FastLinQ qedrntv RoCE driver and Unreliable Datagram (UD) traffic. UD applications involving bulk traffic might fail with qedrntv driver. Additionally, UD QPs can only work with DMA Memory Regions (MR). Physical MRs or FRMR are not supported. Applications attempting to use physical MR or FRMR along with UD QP fail to pass traffic when used with qedrntv driver. Known examples of such test applications are ibv_ud_pingpong and ib_send_bw.

    Standard RoCE and RoCEv2 use cases in a VMware ESXi environment such as iSER, NVMe-oF (RoCE) and PVRDMA are not impacted by this issue. Use cases for UD traffic are limited and this issue impacts a small set of applications requiring bulk UD traffic.

    Marvell FastLinQ hardware does not support RDMA UD traffic offload. In order to meet the VMware PVRDMA requirement to support GSI QP, a restricted software-only implementation of UD QP support was added to the qedrntv driver. The goal of the implementation is to provide support for control-path GSI communication; it is not a complete implementation of UD QP supporting bulk traffic and advanced features.

    Since UD support is implemented in software, the implementation might not keep up with heavy traffic and packets might be dropped. This can result in failures with bulk UD traffic.

    Workaround: Bulk UD QP traffic is not supported with the qedrntv driver and there is no workaround at this time. VMware ESXi RDMA (RoCE) use cases such as iSER, NVMe-oF (RoCE), and PVRDMA are unaffected by this issue.

  • Servers equipped with QLogic 578xx NIC might fail when frequently connecting or disconnecting iSCSI LUNs

    If you trigger QLogic 578xx NIC iSCSI connections or disconnections frequently in a short time, the server might fail due to an issue with the qfle3 driver. This is caused by a known defect in the device's firmware.

    Workaround: None.

  • ESXi might fail during driver unload or controller disconnect operation in Broadcom NVMe over FC environment

    In a Broadcom NVMe over FC environment, ESXi might fail during a driver unload or controller disconnect operation and display an error message such as: @BlueScreen: #PF Exception 14 in world 2098707:vmknvmeGener IP 0x4200225021cc addr 0x19

    Workaround: None.

  • ESXi does not display OEM firmware version number of i350/X550 NICs on some Dell servers

    The inbox ixgben driver only recognizes firmware data version or signature for i350/X550 NICs. On some Dell servers the OEM firmware version number is programmed into the OEM package version region, and the inbox ixgben driver does not read this information. Only the 8-digit firmware signature is displayed.

    Workaround: To display the OEM firmware version number, install async ixgben driver version 1.7.15 or later.
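    If you stage the async driver as an offline bundle on the host, the install is a single command followed by a reboot; a sketch, where the depot path is a placeholder for wherever you stage the bundle:

    esxcli software vib install -d /vmfs/volumes/datastore1/ixgben-async-offline-bundle.zip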

  • X710 or XL710 NICs might fail in ESXi

    When you initiate certain destructive operations on X710 or XL710 NICs, such as resetting the NIC or manipulating the VMkernel's internal device tree, the NIC hardware might read data from non-packet memory.

    Workaround: Do not reset the NIC or manipulate the VMkernel internal device state.

  • NVMe-oF does not guarantee persistent VMHBA name after system reboot

    NVMe-oF is a new feature in vSphere 7.0. If your server has a USB storage installation that uses vmhba30+ and also has an NVMe over RDMA configuration, the VMHBA name might change after a system reboot. This is because the VMHBA name assignment for NVMe over RDMA differs from that for PCIe devices. ESXi does not guarantee persistence.

    Workaround: None.
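    To see how adapter names were assigned after a reboot, you can list the storage adapters; a quick check:

    esxcli storage core adapter list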

  • Backup fails for vCenter database size of 300 GB or greater

    If the vCenter database size is 300 GB or greater, the file-based backup will fail with a timeout. The following error message is displayed: Timeout! Failed to complete in 72000 seconds

    Workaround: None.

  • A restore of a vCenter Server 7.0 instance that was upgraded from vCenter Server 6.x with an External Platform Services Controller might fail

    When you restore a vCenter Server 7.0 instance that was upgraded from vCenter Server 6.x with an External Platform Services Controller, the restore might fail and display the following error: Failed to retrieve appliance storage list

    Workaround: During the first stage of the restore process, increase the storage level of the vCenter Server 7.0. For example, if the vCenter Server 6.7 External Platform Services Controller setup storage type is small, select storage type large for the restore process.

  • Enabled SSL protocols configuration parameter is not configured during a host profile remediation process

    The Enabled SSL protocols configuration parameter is not configured during a host profile remediation, and only the system default protocol tlsv1.2 is enabled. This behavior is observed for host profiles with version 7.0 and earlier in a vCenter Server 7.0 environment.

    Workaround: To enable TLSV 1.0 or TLSV 1.1 SSL protocols for SFCB, log in to an ESXi host by using SSH, and run the following ESXCLI command: esxcli system wbem -P <protocol_name>

  • Unable to configure Lockdown Mode settings by using Host Profiles

    Lockdown Mode cannot be configured by using a security host profile and cannot be applied to multiple ESXi hosts at once. You must manually configure each host.

    Workaround: In vCenter Server 7.0, you can configure Lockdown Mode and manage Lockdown Mode exception user list by using a security host profile.

  • When a host profile is applied to a cluster, Enhanced vMotion Compatibility (EVC) settings are missing from the ESXi hosts

    Some settings in the VMware config file /etc/vmware/config are not managed by Host Profiles and are blocked when the config file is modified. As a result, when the host profile is applied to a cluster, the EVC settings are lost, which causes loss of EVC functionality. For example, unmasked CPUs can be exposed to workloads.

    Workaround: Reconfigure the relevant EVC baseline on the cluster to recover the EVC settings.

  • Using a host profile that defines a core dump partition in vCenter Server 7.0 results in an error

    In vCenter Server 7.0, configuring and managing a core dump partition in a host profile is not available. Attempting to apply a host profile that defines a core dump partition results in the following error: No valid coredump partition found.

    Workaround: None. In vCenter Server 7.0, Host Profiles supports only file-based core dumps.

  • If you run the ESXCLI command to unload the firewall module, the hostd service fails and ESXi hosts lose connectivity

    If you automate the firewall configuration in an environment that includes multiple ESXi hosts, and run the ESXCLI command esxcli network firewall unload that destroys filters and unloads the firewall module, the hostd service fails and ESXi hosts lose connectivity.

    Workaround: Unloading the firewall module is not recommended at any time. If you must unload the firewall module, use the following steps:

    1. Stop the hostd service by using the command:

      /etc/init.d/hostd stop

    2. Unload the firewall module by using the command:

      esxcli network firewall unload

    3. Perform the required operations.

    4. Load the firewall module by using the command:

      esxcli network firewall load

    5. Start the hostd service by using the command:

      /etc/init.d/hostd start
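    Afterward, you can confirm that the firewall module is loaded and enabled; a quick sanity check:

      esxcli network firewall get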

  • vSphere Storage vMotion operations might fail in a vSAN environment due to an unauthenticated session of the Network File Copy (NFC) manager

    Migrations to a vSAN datastore by using vSphere Storage vMotion of virtual machines that have at least one snapshot and more than one virtual disk with different storage policies might fail. The issue occurs due to an unauthenticated session of the NFC manager, because the Simple Object Access Protocol (SOAP) body exceeds the allowed size.

    Workaround: First migrate the VM home namespace and just one of the virtual disks. After the operation completes, perform a disk-only migration of the remaining disks.

  • Changes in the properties and attributes of the devices and storage on an ESXi host might not persist after a reboot

    If the device discovery routine during a reboot of an ESXi host times out, the jumpstart plug-in might not receive all configuration changes of the devices and storage from all the registered devices on the host. As a result, the process might restore the properties of some devices or storage to the default values after the reboot.

    Workaround: Manually restore the changes in the properties of the affected device or storage.

  • If you use a beta build of ESXi 7.0, ESXi hosts might fail with a purple diagnostic screen during some lifecycle operations

    If you use a beta build of ESXi 7.0, ESXi hosts might fail with a purple diagnostic screen during some lifecycle operations, such as unloading a driver or switching between ENS mode and native driver mode. For example, if you try to change the ENS mode, in the backtrace you see an error message similar to: case ENS::INTERRUPT::NoVM_DeviceStateWithGracefulRemove hit BlueScreen: ASSERT bora/vmkernel/main/dlmalloc.c:2733. This issue is specific to beta builds and does not affect release builds such as ESXi 7.0.

    Workaround: Update to ESXi 7.0 GA.

  • You cannot create snapshots of virtual machines due to an error that a digest operation has failed

    A rare race condition when an All-Paths-Down (APD) state occurs during the update of the Content Based Read Cache (CBRC) digest file might cause inconsistencies in the digest file. As a result, you cannot create virtual machine snapshots. You see an error such as An error occurred while saving the snapshot: A digest operation has failed in the backtrace.

    Workaround: Power cycle the virtual machines to trigger a recompute of the CBRC hashes and clear the inconsistencies in the digest file.

  • If you upgrade your ESXi hosts to version 7.0 Update 3, but your vCenter Server is of an earlier version, Trusted Platform Module (TPM) attestation of the ESXi hosts fails

    If you upgrade your ESXi hosts to version 7.0 Update 3, but your vCenter Server is of an earlier version, and you enable TPM, ESXi hosts fail to pass attestation. In the vSphere Client, you see the warning Host TPM attestation alarm. The Elliptic Curve Digital Signature Algorithm (ECDSA) introduced with ESXi 7.0 Update 3 causes the issue when vCenter Server is not of version 7.0 Update 3.

    Workaround: Upgrade your vCenter Server to 7.0 Update 3 or acknowledge the alarm.

  • You see warnings in the boot loader screen about TPM asset tags

    If a TPM-enabled ESXi host has no asset tag set, you might see idle warning messages in the boot loader screen such as:

    Failed to determine TPM asset tag size: Buffer too small
    Failed to measure asset tag into TPM: Buffer too small

    Workaround: Ignore the warnings or set an asset tag by using the command $ esxcli hardware tpm tag set -d

  • The sensord daemon fails to report ESXi host hardware status

    A logic error in the IPMI SDR validation might cause sensord to fail to identify a source for power supply information. As a result, when you run the command vsish -e get /power/hostStats, you might not see any output.

    Workaround: None.

  • If an ESXi host fails with a purple diagnostic screen, the netdump service might stop working

    In rare cases, if an ESXi host fails with a purple diagnostic screen, the netdump service might fail with an error such as NetDump FAILED: Couldn't attach to dump server at IP x.x.x.x.

    Workaround: Configure the VMkernel core dump to use local storage.
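    A minimal sketch of configuring a file-based core dump target on local storage from the ESXi Shell, assuming a local datastore with sufficient free space:

    # Create a file-based core dump target on a local datastore
    esxcli system coredump file add

    # Activate it, letting ESXi select the best available dump file
    esxcli system coredump file set --smart --enable true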

  • You see frequent VMware Fault Domain Manager (FDM) core dumps on multiple ESXi hosts

    In some environments, the number of datastores might exceed the FDM file descriptor limit. As a result, you see frequent core dumps on multiple ESXi hosts indicating FDM failure.

    Workaround: Increase the FDM file descriptor limit to 2048. You can use the setting das.config.fdm.maxFds from the vSphere HA advanced options in the vSphere Client. For more information, see Set Advanced Options.

  • Virtual machines on a vSAN cluster with enabled NSX-T and a converged vSphere Distributed Switch (CVDS) in a VLAN transport zone cannot power on after a power off

    If a secondary site is 95% disk full and VMs are powered off before simulating a secondary site failure, some of the virtual machines fail to power on during recovery. As a result, virtual machines become unresponsive. The issue occurs regardless of whether site recovery includes adding disks, ESXi hosts, or CPU capacity.

    Workaround: Select the virtual machines that do not power on and, from Edit Settings on the VM context menu, change the network to VM Network.

  • ESXi hosts might fail with a purple diagnostic screen with an error Assert at bora/modules/vmkernel/vmfs/fs6Journal.c:835

    In rare cases, for example when running SESparse tests, the number of locks per transaction in a VMFS datastore might exceed the limit of 50 set by the J6_MAX_TXN_LOCKACTIONS parameter. As a result, ESXi hosts might fail with a purple diagnostic screen with an error Assert at bora/modules/vmkernel/vmfs/fs6Journal.c:835.

    Workaround: None.

  • If you modify the netq_rss_ens parameter of the nmlx5_core driver, ESXi hosts might fail with a purple diagnostic screen

    If you try to enable the netq_rss_ens parameter when you configure an enhanced data path on the nmlx5_core driver, ESXi hosts might fail with a purple diagnostic screen. The netq_rss_ens parameter, which enables NetQ RSS, is disabled by default with a value of 0.

    Workaround: Keep the default value for the netq_rss_ens module parameter in the nmlx5_core driver.

  • ESXi might terminate I/O to NVMeOF devices due to errors on all active paths

    Occasionally, all active paths to an NVMeOF device register I/O errors due to link issues or controller state. If the status of one of the paths changes to Dead, the High Performance Plug-in (HPP) might not select another path if it shows a high volume of errors. As a result, the I/O fails.

    Workaround: Disable the configuration option /Misc/HppManageDegradedPaths to unblock the I/O.
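    A minimal sketch of disabling the option from the ESXi Shell, assuming the option takes an integer value:

    esxcli system settings advanced set -o /Misc/HppManageDegradedPaths -i 0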

  • Upgrade to ESXi 7.0 Update 3 might fail due to the changed name of the inbox i40enu network driver

    Starting with vSphere 7.0 Update 3, the inbox i40enu network driver for ESXi changes its name back to i40en. The i40en driver was renamed to i40enu in vSphere 7.0 Update 2, but the name change impacted some upgrade paths. For example, a rollup upgrade of ESXi hosts that you manage with baselines and baseline groups from 7.0 Update 2 or 7.0 Update 2a to 7.0 Update 3 fails. In most cases, the i40enu driver upgrades to ESXi 7.0 Update 3 without any additional steps. However, if the driver upgrade fails, you cannot update ESXi hosts that you manage with baselines and baseline groups. You also cannot use host seeding or a vSphere Lifecycle Manager single image to manage the ESXi hosts. If you have already made changes related to the i40enu driver and devices in your system, before upgrading to ESXi 7.0 Update 3, you must uninstall the i40enu VIB or component on ESXi, or first upgrade ESXi to ESXi 7.0 Update 2c.

    Workaround: For more information, see VMware knowledge base article 85982.

  • SSH access fails after you upgrade to ESXi 7.0 Update 3d

    After you upgrade to ESXi 7.0 Update 3d, SSH access might fail in certain conditions due to an update of OpenSSH to version 8.8.

    Workaround: For more information, see VMware knowledge base article 88055.

  • USB device passthrough from ESXi hosts to virtual machines might fail

    The VMkernel might simultaneously claim multiple interfaces of a USB modem device, which blocks the device passthrough to VMs.

    Workaround: You must apply the USB.quirks advanced configuration on the ESXi host so that the VMkernel ignores the NET interface and allows the USB modem to pass through to VMs. You can apply the configuration in 3 alternative ways:

    1. Access the ESXi shell and run the following command, where 0xvvvv is the device vendor ID and 0xpppp is the device product ID:

      esxcli system settings advanced set -o /USB/quirks -s 0xvvvv:0xpppp:0:0xffff:UQ_NET_IGNORE

      For example, for the Gemalto M2M GmbH Zoom 4625 Modem (vid:pid 1e2d:005b), you can use the command:

      esxcli system settings advanced set -o /USB/quirks -s 0x1e2d:0x005b:0:0xffff:UQ_NET_IGNORE

      Reboot the ESXi host.

    2. Set the advanced configuration from the vSphere Client or the vSphere Web Client and reboot the ESXi host.

    3. Use a Host Profile to apply the advanced configuration.

    For more information on the steps, see VMware knowledge base article 80416.
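    Before rebooting, you can verify that the quirk is set; a quick check:

    esxcli system settings advanced list -o /USB/quirks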

  • HTTP requests from certain libraries to vSphere might be rejected

    The HTTP reverse proxy in vSphere 7.0 enforces stricter standard compliance than in previous releases. This might expose pre-existing problems in some third-party libraries used by applications for SOAP calls to vSphere.

    If you develop vSphere applications that use such libraries or include applications that rely on such libraries in your vSphere stack, you might experience connection issues when these libraries send HTTP requests to VMOMI. For example, HTTP requests issued from vijava libraries can take the following form:

    POST /sdk HTTP/1.1
    SOAPAction
    Content-Type: text/xml; charset=utf-8
    User-Agent: Java/1.8.0_221

    The syntax in this example violates an HTTP protocol header field requirement that mandates a colon after SOAPAction. Hence, the request is rejected in flight.
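    For comparison, a compliant request places a colon after the header name; a sketch using an empty quoted SOAPAction value, which SOAP 1.1 permits (the actual value your library sends may differ):

    POST /sdk HTTP/1.1
    SOAPAction: ""
    Content-Type: text/xml; charset=utf-8
    User-Agent: Java/1.8.0_221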

    Workaround: Developers leveraging noncompliant libraries in their applications can consider using a library that follows HTTP standards instead. For example, developers who use the vijava library can consider using the latest version of the yavijava library instead.

  • You might see a dump file when using the Broadcom drivers lsi_msgpt3, lsi_msgpt35, and lsi_mr3

    When using the lsi_msgpt3, lsi_msgpt35, or lsi_mr3 controllers, you might see the dump file lsuv2-lsi-drivers-plugin-util-zdump. The issue occurs when exiting the storelib used in this plugin utility. There is no impact on ESXi operations, and you can ignore the dump file.

    Workaround: You can safely ignore this message. Optionally, you can remove the lsuv2-lsi-drivers-plugin with the following command:

    esxcli software vib remove -n lsuv2-lsiv2-drivers-plugin

  • A reboot might be reported as not required after you configure SR-IOV for a PCI device in vCenter Server, but device configurations made by third-party extensions might be lost and require a reboot to be re-applied

    In ESXi 7.0, SR-IOV configuration is applied without a reboot and the device driver is reloaded. ESXi hosts might have third-party extensions that perform device configurations that need to run after the device driver is loaded during boot. A reboot is required for those third-party extensions to re-apply the device configuration.

    Workaround: You must reboot the ESXi host after configuring SR-IOV to apply third-party device configurations.

vSphere Client Issues

  • BIOS manufacturer displays as "--" in the vSphere Client

    In the vSphere Client, when you select an ESXi host and navigate to Configure > Hardware > Firmware, you see -- instead of the BIOS manufacturer name.

    Workaround: For more information, see VMware knowledge base article 88937.

vSAN Issues

  • PR 2962316: vSAN File Service does not support NFSv4 delegations

    vSAN File Service does not support NFSv4 delegations in this release.

    Workaround: None.

  • PR 3235496: Applications on a vSAN cluster become intermittently unresponsive

    If the advanced option preferHT is enabled on an AMD server in a vSAN cluster, the virtual CPUs of a virtual machine run on the same last-level cache. When the virtual CPUs are all very busy and an NVMe thread that handles I/Os happens to run on the same last-level cache, the NVMe thread competes for CPU resources with the busy VMs and takes very long to complete. As a result, the I/O latency is very high and might lead to intermittent unresponsiveness of storage and applications on the vSAN cluster. In the logs, you see errors such as Lost access to volume <uuid> due to connectivity issues. Recovery attempt is in progress and outcome will be reported shortly.

    Workaround: Use the command esxcli system settings advanced set -o /Misc/vmknvmeCwYield -i 0 to avoid competition between the NVMe completion queue and the busy virtual CPUs.
