
VMware ESXi 7.0 Update 3o | 28 SEP 2023 | Build 22348816

Check for additions and updates to these release notes.

IMPORTANT: If your source system contains hosts running versions between ESXi 7.0 Update 2 and Update 3c and Intel drivers, before upgrading to ESXi 7.0 Update 3o, see the What's New section of the VMware vCenter Server 7.0 Update 3c Release Notes, because all content in that section also applies to vSphere 7.0 Update 3o. Also, see the related VMware knowledge base articles: 86447, 87258, and 87308.

What's New

  • ESXi 7.0 Update 3o adds support for vSphere Quick Boot to multiple servers, including: 

    • Dell  

      • PowerEdge XR4510c  

      • PowerEdge XR4520c  

      • PowerEdge XR5610  

      • PowerEdge XR7620  

      • PowerEdge XR8620t  

      • R6615 vSAN Ready Node  

      • R7615 vSAN Ready Node  

      • VxRail VD-4510c  

      • VxRail VD-4520c

    • Lenovo  

      • ThinkSystem SR635 V3  

      • ThinkSystem SR645 V3  

      • ThinkSystem ST650 V3  

      • ThinkSystem SR655 V3

    • HPE       

      • Alletra 4120

      • Cray XD220v

      • ProLiant DL320 Gen11

      • ProLiant DL380a Gen11

      • ProLiant DL560 Gen11

      • ProLiant ML110 Gen11

      • ProLiant DL360 Gen11       

      • ProLiant DL380 Gen11       

      • ProLiant ML350 Gen11

  • ESXi 7.0 Update 3o adds support for vSphere Quick Boot to several drivers, including: 

    • Intel  

      • QuickAssist Technology (QAT)  

      • Dynamic Load Balancer (DLB) 

    • Cisco  

      • NENIC_ENS

    For the full list of supported servers and drivers, see the VMware Compatibility Guide.

Product Support Notices

Support for spaces in organizational unit (OU) names: With ESXi 7.0 Update 3o, vSAN File Service supports spaces in organizational unit (OU) names.

Patches Contained in This Release

This release of ESXi 7.0 Update 3o delivers the following patches:

Build Details

Download Filename: VMware-ESXi-7.0U3o-22348816-depot.zip

Build: 22348816

Download Size: 574.5 MB

md5sum: 2dc3fe97405d94a12c3a7b14570e24c1

sha256checksum: 42594bd42b9cf2f7d002780d8fee9062a61567a9f3f1a0675726a6f29075d607

Host Reboot Required: Yes

Virtual Machine Migration or Shutdown Required: Yes

Components

Component | Bulletin | Category | Severity
ESXi Component - core ESXi VIBs | ESXi_7.0.3-0.105.22348816 | Bugfix | Critical
ESXi Install/Upgrade Component | esx-update_7.0.3-0.105.22348816 | Bugfix | Critical
VMware Native AHCI Driver | VMware-ahci_2.0.11-2vmw.703.0.105.22348816 | Bugfix | Critical
SMARTPQI LSU Management Plugin | Microchip-smartpqiv2-plugin_1.0.0-9vmw.703.0.105.22348816 | Bugfix | Critical
Non-Volatile memory controller driver | VMware-NVMe-PCIe_1.2.3.16-3vmw.703.0.105.22348816 | Bugfix | Critical
Avago (LSI) Native 12Gbps SAS MPT Driver | Broadcom-lsi-msgpt3_17.00.12.00-2vmw.703.0.105.22348816 | Bugfix | Critical
Intel NVME Driver with VMD Technology | Intel-Volume-Mgmt-Device_2.7.0.1157-3vmw.703.0.105.22348816 | Bugfix | Critical
ESXi Component - core ESXi VIBs | ESXi_7.0.3-0.100.22348808 | Security | Critical
ESXi Install/Upgrade Component | esx-update_7.0.3-0.100.22348808 | Security | Critical
ESXi Tools Component | VMware-VM-Tools_12.2.6.22229486-22348808 | Security | Critical

IMPORTANT:

  • Starting with vSphere 7.0, VMware uses components for packaging VIBs along with bulletins. The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.

  • When patching ESXi hosts by using VMware Update Manager from a version prior to ESXi 7.0 Update 2, it is strongly recommended to use the rollup bulletin in the patch baseline. If you cannot use the rollup bulletin, be sure to include all of the following packages in the patching baseline. If the following packages are not included in the baseline, the update operation fails:

    • VMware-vmkusb_0.1-1vmw.701.0.0.16850804 or higher

    • VMware-vmkata_0.1-1vmw.701.0.0.16850804 or higher

    • VMware-vmkfcoe_1.0.0.2-1vmw.701.0.0.16850804 or higher

    • VMware-NVMeoF-RDMA_1.0.1.2-1vmw.701.0.0.16850804 or higher

Rollup Bulletin

This rollup bulletin contains the latest VIBs with all the fixes after the initial release of ESXi 7.0.

Bulletin ID | Category | Severity | Details
ESXi70U3o-22348816 | Bugfix | Critical | Security and Bugfix
ESXi70U3so-22348808 | Security | Critical | Security only

Image Profiles

VMware patch and update releases contain general and critical image profiles. The general release image profile applies to new bug fixes.

Image Profile Name

ESXi-7.0U3o-22348816-standard

ESXi-7.0U3o-22348816-no-tools

ESXi-7.0U3so-22348808-standard

ESXi-7.0U3so-22348808-no-tools

ESXi Image

Name and Version | Release Date | Category | Details
ESXi7.0U3o - 22348816 | SEP 28 2023 | Bugfix | Security and Bugfix image
ESXi7.0U3so - 22348808 | SEP 28 2023 | Security | Security only image

For information about the individual components and bulletins, see the Resolved Issues section.

Patch Download and Installation

Download this patch from the Broadcom Support Portal.

For download instructions for earlier releases, see Download Broadcom products and software.

In vSphere 7.x, the Update Manager plug-in, used for administering vSphere Update Manager, is replaced with the Lifecycle Manager plug-in. Administrative operations for vSphere Update Manager are still available under the Lifecycle Manager plug-in, along with new capabilities for vSphere Lifecycle Manager.

The typical way to apply patches to ESXi 7.x hosts is by using the vSphere Lifecycle Manager. For details, see About vSphere Lifecycle Manager and vSphere Lifecycle Manager Baselines and Images.

You can also update ESXi hosts without using the Lifecycle Manager plug-in, and use an image profile instead. To do this, you must manually download the patch offline bundle ZIP file.

For details, see How to Download a ZIP file for ESXi Patches and Updates, Upgrading Hosts by Using ESXCLI Commands, and the VMware ESXi Upgrade guide.
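
For example, applying this patch with ESXCLI typically looks like the following sketch, run with the host in maintenance mode. The datastore path below is only an example placeholder; substitute the location where you uploaded the depot ZIP file and confirm the exact profile name with the list command first:

    # List the image profiles contained in the downloaded depot (path is an example)
    esxcli software sources profile list -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U3o-22348816-depot.zip

    # Apply the standard image profile from the depot, then reboot the host
    esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U3o-22348816-depot.zip -p ESXi-7.0U3o-22348816-standard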

Resolved Issues

The resolved issues are grouped as follows:

ESXi_7.0.3-0.105.22348816

Patch Category: Bugfix
Patch Severity: Critical
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A

Affected VIBs Included

  • VMware_bootbank_gc_7.0.3-0.105.22348816

  • VMware_bootbank_native-misc-drivers_7.0.3-0.105.22348816

  • VMware_bootbank_vsanhealth_7.0.3-0.105.22348816

  • VMware_bootbank_crx_7.0.3-0.105.22348816

  • VMware_bootbank_esx-xserver_7.0.3-0.105.22348816

  • VMware_bootbank_vdfs_7.0.3-0.105.22348816

  • VMware_bootbank_cpu-microcode_7.0.3-0.105.22348816

  • VMware_bootbank_esx-base_7.0.3-0.105.22348816

  • VMware_bootbank_vsan_7.0.3-0.105.22348816

  • VMware_bootbank_esxio-combiner_7.0.3-0.105.22348816

  • VMware_bootbank_esx-ui_2.11.2-21988676

  • VMware_bootbank_bmcal_7.0.3-0.105.22348816

  • VMware_bootbank_esx-dvfilter-generic-fastpath_7.0.3-0.105.22348816

  • VMware_bootbank_trx_7.0.3-0.105.22348816

PRs Fixed

3244098, 3216958, 3245763, 3242021, 3246132, 3253205, 3256804, 3239170, 3251981, 3215370, 3117615, 3248478, 3252676, 3235496, 3238026, 3236064, 3224739, 3224306, 3223755, 3216958, 3233958, 3185125, 3219264, 3221620, 3218835, 3185560, 3221099, 3211625, 3221860, 3228586, 3224604, 3223539, 3222601, 3217258, 3213041, 3216522, 3221591, 3216389, 3220004, 3217633, 3216548, 3216449, 3156666, 3181574, 3180746, 3155476, 3211807, 3154090, 3184008, 3183519, 3183519, 3209853, 3100552, 3187846, 3113263, 3176350, 3185827, 3095511, 3184425, 3186351, 3186367, 3154090, 3158524, 3181601, 3180391, 3180283, 3184608, 3181774, 3155476, 3163361, 3182978, 3184368, 3160456, 3164575, 3181901, 3184871, 3166818, 3157195, 3164355, 3163271, 3112194, 3161690, 3261925, 3178589, 3178721, 3162963, 3168950, 3159168, 3158491, 3158866, 3096769, 3165374, 3122037, 3098760, 3164439, 3161473, 3162790, 3100030, 3096974, 3161473, 3165651, 3083007, 3118240, 3151076, 3118402, 3160480, 3156627, 3158531, 3162499, 3158512, 3053430, 3153395, 3117615, 3099192, 3158508, 3115870, 3119834, 3158220, 3110401, 2625439, 3099357, 3152811, 3108979, 3120165

CVE numbers: N/A

The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.

Updates the gc, native-misc-drivers, vsanhealth, crx, esx-xserver, vdfs, cpu-microcode, esx-base, vsan, esxio-combiner, esx-ui, bmcal, esx-dvfilter-generic-fastpath and trx VIBs to resolve the following issues:

  • PR 3253205: Static IPv6 gateway disappears in 18 hours

    If you configure a static IPv6 gateway address in your vSphere environment, the gateway might disappear after up to 18 hours due to a timeout.

    This issue is resolved in this release. The fix removes the existing timeout for static gateways.
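
    As a reference, a static IPv6 default gateway is typically added and verified with ESXCLI similar to the following sketch; the gateway address is an example placeholder for your own configuration:

    # Add a static IPv6 default route (gateway address is an example placeholder)
    esxcli network ip route ipv6 add --gateway 2001:db8::1 --network default

    # Verify that the route is still present
    esxcli network ip route ipv6 list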

  • PR 3272532: You see speed as Half Duplex after changing the speed from Auto to Full on a VMNIC

    In some cases, length checks during the parsing of duplex TLVs in an environment with active Cisco Discovery Protocol (CDP) might fail. As a result, when you change the speed from Auto to Full on a physical NIC, peer devices do not show the real duplex value in the TLV under the neighbor information, but the default Half Duplex value.

    This issue is resolved in this release.

  • PR 3251981: An NFSv4.1 file might appear to be empty even though it contains data

    When you open an existing NFSv4.1 file with write-only access, and the NFS client then reopens the same file with read-only access, read operations from the client might return no data although the file is not empty.

    This issue is resolved in this release.

  • PR 3116601: When you override the default gateway for the vSAN VMkernel adapter on an ESXi host, the vSphere HA agent on the host displays as inaccessible

    In some cases, when you override the default gateway for the vSAN VMkernel adapter on an ESXi host, the Fault Domain Manager (FDM), which is the agent that vSphere HA deploys on ESX hosts, might stop receiving Internet Control Message Protocol (ICMP) pings. As a result, FDM might issue false cluster alarms that the vSphere HA agent on the host cannot reach some management network addresses of other hosts.

    This issue is resolved in this release.

  • PR 3251801: vSphere vMotion operation from a 6.7.x ESXi host with an Intel Ice Lake CPU fails with msg.checkpoint.cpucheck.fail

    vSphere vMotion operations by using either the vSphere Client or VMware Hybrid Cloud Extension (HCX) from an Intel Ice Lake CPU host running ESXi 6.7.x fail with an error such as msg.checkpoint.cpucheck.fail. In the vSphere Client, you see a message that cpuid.PSFD is not supported on the target host. In HCX, you see a report such as A general system error occurred: vMotion failed:.

    This issue is resolved in this release.

  • PR 3224739: You see alarms for dropped syslog messages

    If the network throughput of your vSphere system does not align with the speed at which log messages are generated, some of the log messages sent to the remote syslog server might drop. As a result, in the vSphere Client you see host errors of type Triggered Alarm and in the logs, you see warnings such as ALERT: vmsyslog logger xxxx:514 lost yyyy log messages.

    This issue is resolved in this release. The fix improves the network logging performance of the logging service to reduce the number of such lost log messages.

  • PR 3247027: If any failure occurs during a vSphere vMotion migration of NVIDIA virtual GPU (vGPU) VM, the destination ESXi host might fail with a purple diagnostic screen

    In very rare cases, if any type of failure occurs during a vSphere vMotion migration of a vGPU VM, the vMotion operation is marked as a failure while a specific internal operation is in progress. As a result, the destination ESXi host might fail with a purple diagnostic screen.

    This issue is resolved in this release.

  • PR 3095511: The SFCB service intermittently fails and generates "sfcb-vmware_bas-zdump" files on multiple ESXi hosts

    In very rare cases, when the SFCB service tries to access an uninitialized variable, the service might fail with a dump file such as sfcb-vmware_bas-zdump.000. The issue occurs because accessing an uninitialized variable can cause the SFCB process to keep requesting memory until it exceeds the heap allocation.

    This issue is resolved in this release.

  • PR 3256804: File server loses connectivity after failover when running vSAN File Service on NSX overlay network

    If vSAN File Service is running on an NSX overlay network, the file server might lose connectivity after it fails over from one agent VM to another. The file server can fail over when you reconfigure the file service domain from one without Active Directory (AD) to one with AD, or if vSAN detects unhealthy behavior in the file server, file share, or AD server. If file server connectivity is lost after a failover, File Service clients cannot access the file share. The following health warning might be reported:

    One or more DNS servers is not reachable.

    This issue is resolved in this release.

  • PR 3217633: vSAN management service on the orchestration host malfunctions during vCenter restart

    This issue can occur on vSAN clusters where vCenter is deployed. If multiple cluster shutdown operations are performed, the /etc/vmware/vsan/vsanperf.conf file on the orchestration host can contain two versions of the following management VM option: vc_vm_moId and vc_vm_moid. These configuration options conflict with each other and can cause the vSAN management service to malfunction.

    This issue is resolved in this release.

  • PR 3224306: When you change the ESXi syslog setting syslog.global.logDir, if syslog.global.logDirUnique is active, you might see 2 levels of <hostname> subdir under the logdir path

    After upgrading an ESXi host to 7.0 Update 3c and later, if you change the syslog setting syslog.global.logDir and the syslog.global.logDirUnique setting is active, you might see two levels of <hostname> subdirectories under the logdir path on the host. The syslog.global.logDirUnique setting is useful if the same NFS directory is configured as the Syslog.global.LogDir by multiple ESXi hosts, as it creates a subdirectory with the name of the ESXi host under logdir.

    This issue is resolved in this release.
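
    For reference, the two settings involved can be reviewed and changed from ESXCLI as in the following sketch; the NFS datastore path is an example placeholder:

    # Point the log directory to a shared datastore and keep per-host subdirectories
    esxcli system syslog config set --logdir=/vmfs/volumes/NFS-logs/esxi-logs --logdir-unique=true

    # Reload the syslog daemon and confirm the effective configuration
    esxcli system syslog reload
    esxcli system syslog config get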

  • PR 3223755: The ESXi syslog daemon fails to resume transmitting logs to the configured SSL remote syslog server after network connectivity is restored

    If a remote SSL syslog server temporarily loses connectivity, the ESXi syslog daemon might fail to resume transmitting logs after connectivity is restored, and you must restart the service to re-activate transmissions. The issue occurs due to some unhandled exceptions while the ESXi syslog daemon tries to restore the connection to the SSL remote syslog server.

    This issue is resolved in this release. The fix adds a generic catch for all unhandled exceptions.

  • PR 3185536: vSAN cluster remediation error after vCenter upgrade to 7.0 Update 3

    When you upgrade vCenter from version 6.5.x to 7.0 Update 3, and reconfigure the vSAN cluster, a vSAN cluster remediation is triggered, but in some cases fails.

    This issue is resolved in this release.

  • PR 3110401: The Explicit Congestion Notification (ECN) setting is not persistent on ESXi host reboot

    ECN, specified in RFC 3168, allows a TCP sender to reduce the transmission rate to avoid packet drops and is activated by default. However, if you change the default setting, or manually deactivate ECN, such changes to the ESXi configuration might not persist after a reboot of the host.

    This issue is resolved in this release.

  • PR 3153395: Upon refresh, some dynamic firewall rules might not persist and lead to vSAN iSCSI Target service failure

    If you frequently run the command esxcli network firewall refresh, a race condition might cause some dynamic firewall rules to be removed upon refresh or load. As a result, some services such as the vSAN iSCSI Target daemon, which provides vSAN storage with iSCSI protocol, might fail.

    This issue is resolved in this release.

  • PR 3096769: You see high latency for VM I/O operations in a vSAN cluster with Unmap activated

    This issue affects vSAN clusters with Unmap activated. A problem in Unmap handling in LSOM creates log congestion. The log congestion can cause high VM I/O latency.

    This issue is resolved in this release.

  • PR 3163361: The Audit Record Storage Capacity parameter does not persist across ESXi host reboots

    After an ESXi host reboot, you see the Audit Record Storage Capacity parameter restored to the default value of 4 MiB, regardless of any previous changes.

    This issue is resolved in this release.

  • PR 3162496: If the Internet Control Message Protocol (ICMP) is not active, ESXi host reboot might take long after upgrading to vSphere 8.0 and later

    If ICMP is not active on the NFS servers in your environment, after upgrading your system to vSphere 8.0 and later, ESXi host reboots might take an hour to complete, because restore operations for NFS datastores fail. NFS uses the vmkping utility to identify reachable IPs of the NFS servers before executing a mount operation, and when ICMP is not active, mount operations fail.

    This issue is resolved in this release. To remove dependency on the ICMP protocol to find reachable IPs, the fix adds socket APIs to ensure that IPs on a given NFS server are available.
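
    For reference, the reachability check that NFS relies on can be reproduced manually with vmkping over the VMkernel interface that carries NFS traffic; the interface name and server IP below are example placeholders:

    # Ping the NFS server through a specific VMkernel interface
    vmkping -I vmk0 192.0.2.10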

  • PR 3220004: VBS-enabled Windows VMs might fail with a blue diagnostic screen when an ESXi host is under memory pressure

    Windows VMs with virtualization-based security (VBS) enabled might intermittently fail with a blue diagnostic screen during memory-heavy operations such as deleting snapshots or consolidating disks. The BSOD has the following signature:

    DRIVER_VERIFIER_DMA_VIOLATION (e6)

    An illegal DMA operation was attempted by a driver being verified.

    Arguments:

    Arg1: 0000000000000026, IOMMU detected DMA violation.

    Arg2: 0000000000000000, Device Object of faulting device.

    Arg3: xxxxxxxxxxxxxxxx, Faulting information (usually faulting physical address).

    Arg4: 0000000000000008, Fault type (hardware specific).

    This issue is resolved in this release.

  • PR 3162905: ESXi hosts might intermittently disconnect from vCenter in case of memory outage

    In rare cases, if vCenter fails to allocate or transfer active bitmaps due to an alert for insufficient memory for any reason, the hostd service might repeatedly fail and you cannot reconnect the host to vCenter.

    This issue is resolved in this release. The fix makes sure that in such cases you see an error rather than the hostd service failure.

  • PR 3108979: Staging ESXi patches and ESXi upgrade tasks might become indefinitely unresponsive due to a log buffer limit

    If a large string or log exceeds the Python log buffer limit of 16K, the logger becomes unresponsive. As a result, when you stage ESXi patches on a host by using the vSphere Client, the process responsible for completing the staging task on ESXi might become indefinitely unresponsive. The task times out and reports an error. The issue occurs also when you remediate ESXi hosts by using a patch baseline.

    This issue is resolved in this release. The fix breaks the large input string into smaller chunks for logging.

  • PR 3219264: vSAN precheck for maintenance mode or disk decommission doesn't list objects that might lose accessibility

    This issue affects objects with resyncing components, and some components reside on a device to be removed or placed into maintenance mode. When you run a precheck with the No-Action option, the precheck does not evaluate the object correctly to report it in the inaccessibleObjects list.

    This issue is resolved in this release. Precheck includes all affected objects in the inaccessibleObjects list.

  • PR 3186545: Vulnerability scans might report the HTTP TRACE method on vCenter ports 9084 and 9087 as vulnerable

    Some third-party tools for vulnerability scans might report the HTTP TRACE method on vCenter ports 9084 and 9087 as vulnerable.

    This issue is resolved in this release.

  • PR 3221591: ESXi NVMe/TCP initiator fails to recover paths after target failure recovery

    When an NVMe/TCP target recovers from a failure, ESXi cannot recover the path.

    This issue is resolved in this release.

  • PR 3186351: You see an override flag of NIC teaming, security, or traffic shaping policy on a portgroup unexpectedly enabled after an ESXi host reboot

    In some cases, a networking configuration might not persist in the ESXi ConfigStore and you see an override flag of NIC teaming, security, or traffic shaping policy on a portgroup unexpectedly enabled after an ESXi host reboot.

    This issue is resolved in this release.

  • PR 3162790: The sfcb daemon might fail with a core dump while installing a third-party CIM provider

    The sfcb daemon might try to access already freed memory when registering a CIM provider and fail with a core dump while installing a third-party CIM provider, such as Dell OpenManage Server Administrator for example.

    This issue is resolved in this release.

  • PR 3157195: ESX hosts might fail with a purple diagnostic screen and an error NMI IPI: Panic requested by another PCPU

    The resource pool cache is a VMFS specific volume level cache that stores the resource clusters corresponding to the VMFS volume. While searching for priority clusters, the cache flusher workflow iterates through a large list of cached resource clusters, which can cause lockup of the physical CPUs. As a result, ESX hosts might fail with a purple diagnostic screen. In the logDump file, you see an error such as:

    2022-10-22T07:56:47.322Z cpu13:2101160)WARNING: Heartbeat: 827: PCPU 0 didn't have a heartbeat for 7 seconds, timeout is 14, 1 IPIs sent; *may* be locked up.

    2022-10-22T07:56:47.322Z cpu0:2110633)ALERT: NMI: 710: NMI IPI: RIPOFF(base):RBP:CS

    This issue is resolved in this release.

  • PR 3230493: If a delta sync runs while the target vSphere Replication server is not active, Windows virtual machines might become unresponsive

    If the target vSphere Replication server is not active during a delta sync, the synchronization process cannot complete and Windows virtual machines become unresponsive.

    This issue is resolved in this release. The fix makes sure that no delta sync jobs start when a vSphere Replication server is not active.

  • PR 3122037: ESXi ConfigStore database fills up and writes fail

    Stale data related to block devices might not be deleted in time from the ESXi ConfigStore database and cause an out of space condition. As a result, write operations to ConfigStore start to fail. In the backtrace, you see logs such as:

    2022-12-19T03:51:42.733Z cpu53:26745174)WARNING: VisorFSRam: 203: Cannot extend visorfs file /etc/vmware/configstore/current-store-1-journal because its ramdisk (configstore) is full.

    This issue is resolved in this release.

  • PR 3246132: You see a system clock drift on an ESXi host after synchronization with the NTP server fails

    If connectivity between the NTP server and an ESXi host is delayed for some reason, the system clock on the ESXi host might experience a drift. When you run the vsish command vsish -e get /system/ntpclock/clockData, you might see a large negative value for the adjtime field. For example:

    NTP clock data {  
    ...  adjtime() (usec):-309237290312 <<<<<<  
    ...
    }

    This issue is resolved in this release.

  • PR 3185560: vSphere vMotion operations for virtual machines with swap files on a vSphere Virtual Volumes datastore intermittently fail after a hot extend of the VM disk

    vSphere vMotion operations for virtual machines with swap files on a vSphere Virtual Volumes datastore might fail under the following circumstances:

    A VM that runs on ESXi host A is migrated to ESXi host B, then the VM disk is hot extended, and the VM is migrated back to ESXi host A. The issue occurs due to stale statistics about the swap virtual volume.

    This issue is resolved in this release.

  • PR 3236207: You cannot unmount an NFS v3 datastore from a vCenter system because the datastore displays as inaccessible

    While unmounting an NFSv3 datastore from a vCenter system by using either the vSphere Client or ESXCLI, you might see the datastore either as inaccessible, or get an Unable to Refresh error. The issue occurs when a hostd cache refresh happens before NFS deletes the mount details from ConfigStore, which creates a rare race condition.

    This issue is resolved in this release.

  • PR 3236064: Compliance or remediation scans by using vSphere Lifecycle Manager might take long on ESXi hosts with a large number of datastores

    Compliance or remediation scans by using vSphere Lifecycle Manager include an upgrade precheck that lists all volumes attached to an ESXi host, their free space, versions, and other details. If a host has many datastores attached, each with a large capacity, the precheck can take long. The time is multiplied by the number of baselines attached to the cluster or host.

    This issue is resolved in this release. If you already face the issue, disconnect the attached datastores while performing the remediation or compliance scans and reattach them after the operations complete.

  • PR 3245763: During a vSphere vMotion operation, destination ESXi hosts with a Distributed Firewall (DFW) enabled might fail with a purple diagnostic screen

    If the destination host in a vSphere vMotion operation has a DFW enabled, a rare race condition might cause the host to fail with a purple diagnostic screen.

    This issue is resolved in this release.

  • PR 3158866: Virtual machines might become unresponsive during VMFS volume expansion due to a Logical Volume Manager (LVM) check

    If a VMFS volume expansion operation happens during an LVM probe to update volume attributes, such as the size of the VMFS partition backing the VMFS volume, the current size of the VMFS volume might not match the LVM metadata on the disk. As a result, LVM marks such a volume offline and virtual machines on that volume become unresponsive.

    This issue is resolved in this release. The fix improves Input Output Control (IOCTL) to speed up the update of device attributes and avoid such cases.

  • PR 3152717: vSphere Auto Deploy workflows might fail after an upgrade to vCenter Server 7.0 Update 3j or later

    Due to a networking issue in reverse proxy configuration environments, vSphere Auto Deploy workflows might fail after an upgrade to vCenter Server 7.0 Update 3j or later. In the ESXi management console and in the /var/log/vmware/rbd/rbd-cgi.log file you see an HTTP client error.

    This issue is resolved in this release. The fix makes sure Auto Deploy workflows run successfully in reverse proxy configuration environments.

  • PR 3185125: Virtual SLIT table not populated for the guest OS when using sched.nodeX.affinity

    When you specify virtual NUMA node affinity, the vSLIT table might not be populated and you see the following error in vmware.log: vSLIT: NumaGetLatencyInfo failed with status: 195887107.

    This happens when setting node affinity as follows:

    numa.slit.enable = "TRUE"

    sched.node0.affinity = 0

    sched.node1.affinity = 1

    ...sched.nodeN.affinity = N

    This issue is resolved in this release. If you already face the issue, specify VCPU affinity to physical CPUs on the associated NUMA nodes, instead of node affinity.

    For example:

    numa.slit.enable = "TRUE"

    sched.vcpu0.affinity = "0-7"

    sched.vcpu1.affinity = "0-7"

    ...sched.vcpuN.affinity = "Y-Z"

  • PR 3181774: You do not see an error when you attach a CBT enabled FCD to a VM with CBT disabled

    Attaching a First Class Disk (FCD) with Changed Block Tracking (CBT) to a VM with CBT disabled does not throw a detailed error, but the operation cannot take effect. In the backtrace, you might see an error message such as InvalidState, which is generic and does not provide details on the issue.

    This issue is resolved in this release. The fix adds the error message Cannot attach a CBT enabled fcd: {path} to CBT disabled VM. to the logs of the hostd service and adds a status message for the task in the vSphere Client.

  • PR 3185827: You see trap files in a SNMP directory under /var/spool even though SNMP is not enabled

    After the hostd service starts, for example after an ESXi host reboot, it might create an SNMP directory under /var/spool and you might see many .trp files pile up in this directory.

    This issue is resolved in this release. The fix makes sure that the directory /var/spool/snmp exists only when SNMP is enabled.

  • PR 3082867: An ESXi host might fail with a purple diagnostic screen after a replicated virtual machine migrates to the host

    In some environments, when a replicated virtual machine migrates to a certain ESXi host, during a full sync, some SCSI WRITE commands might target address ranges past the end of the transfer map. As a result, the ESXi host might fail with a purple diagnostic screen and a backtrace such as:

    #0 DLM_free (msp=0x43238f2c9cc0, mem=mem@entry=0x43238f6db860, allowTrim=allowTrim@entry=1 '\001') at bora/vmkernel/main/dlmalloc.c:4924

    #1 0x000041802a151e01 in Heap_Free (heap=0x43238f2c9000, mem=<optimized out>) at bora/vmkernel/main/heap.c:4217

    #2 0x000041802a3b0fa1 in BitVector_Free (heap=<optimized out>, bv=<optimized out>) at bora/vmkernel/util/bitvector.c:94

    #3 0x000041802b9f4f3f in HbrBitmapFree (bitmap=<optimized out>) at bora/modules/vmkernel/hbr_filter/hbr_bitmap.c:91

    This issue is resolved in this release.

  • PR 3162499: During service insertion for NSX-managed workload VMs, some VMs might become intermittently unresponsive, and the virtual device might reset

    During service insertion for NSX-managed workload VMs, a packet list might be reinjected from the input chain of one switchport to another switchport. In such cases, the source switchport does not correspond to the actual portID of the input chain and the virtual device does not get completion status for the transmitted frame. As a result, when you run a service insertion task, some VMs might become intermittently unresponsive due to network connectivity issues.

    This issue is resolved in this release.

  • PR 3219441: You cannot send keys to a guest OS by using the browser console in the ESXi Host Client

    In the ESXi Host Client, when you open the browser console of a virtual machine and select Guest OS > Send keys, when you select a key input, for example Ctrl-Alt-Delete, the selection is not sent to the guest OS.

    This issue is resolved in this release.

  • PR 3209853: Windows VMs might fail with a blue diagnostic screen with 0x5c signature due to platform delays

    Windows VMs might fail with a blue diagnostic screen with the following signature due to a combination of factors involving the Windows timer code and platform delays, such as storage or network delays:

    HAL_INITIALIZATION_FAILED (0x5c) (This indicates that the HAL initialization failed.)

    Arguments:

    Arg1: 0000000000000115, HAL_TIMER_INITIALIZATION_FAILURE

    Arg2: fffff7ea800153e0, Timer address

    Arg3: 00000000000014da, Delta in QPC units (delta to program the timer with)

    Arg4: ffffffffc0000001, invalid status code (STATUS_UNSUCCESSFUL)

    Checks done by HPET timers in Windows might fail due to such platform delays and cause the issue.

    This issue is resolved in this release.

  • PR 3178109: vSphere Lifecycle Manager compliance check fails with "An unknown error occurred while performing the operation"

    The base ESXi image contains driver components that can be overridden by higher version async driver components packaged by an OEM add-on. If such a component is manually removed on a host, the compliance check of the vSphere Lifecycle Manager might fail unexpectedly. In the vSphere Client, you see errors such as Host status is unknown and An unknown error occurred while performing the operation.

    This issue is resolved in this release.

  • PR 3180634: Performance of certain nested virtual machines on AMD CPUs might degrade

    Nested virtual machines on AMD CPUs with operating systems such as Windows with virtualization-based security (VBS) might experience performance degradation, timeouts, or unresponsiveness due to an issue with the virtualization of AMD's Rapid Virtualization Indexing (RVI), also known as Nested Page Tables (NPT).

    This issue is resolved in this release.

  • PR 3223539: Some vmnics might not be visible after an ESXi host reboot due to a faulty NVMe device

    If an NVMe device fails during the attach phase, the NVMe driver disables the NVMe controller and resets the hardware queue resources. When you try to re-attach the device, a mismatch between the hardware and driver queue pointers might cause an IOMMU fault on the NVMe device, which leads to memory corruption and failure of the vmkdevmgr service. As a result, you do not see some vmnics in your network.

    This issue is resolved in this release.

  • PR 3218835: When you disable or suspend vSphere Fault Tolerance, virtual machines might become unresponsive for about 10 seconds

    When you disable or suspend vSphere FT, some virtual machines might take about 10 seconds to release vSphere FT resources. As a result, such VMs become temporarily unresponsive to network requests or console operations.

    This issue is resolved in this release.

  • PR 3181901: ESXi host becomes unresponsive and you cannot put the host in Maintenance Mode or migrate VMs from that host

    Asynchronous reads of metadata on a VMFS volume attached to an ESXi host might cause a race condition with other threads on the host and make the host unresponsive. As a result, you cannot put the host in Maintenance Mode or migrate VMs from that host.

    This issue is resolved in this release.

  • PR 3156666: Packets with length less than 60 bytes might drop

    An ESXi host might add invalid bytes, other than zero, to packets with less than 60 bytes. As a result, packets with such invalid bytes drop.

    This issue is resolved in this release.

  • PR 3180283: When you migrate a VM with recently hot-added memory, an ESXi host might repeatedly fail with a purple diagnostic screen

    Due to a race condition while the memory hotplug module recomputes the NUMA memory layout of a VM on a destination host after migration, an ESXi host might repeatedly fail with a purple diagnostic screen. In the backtrace, you see errors such as:

    0x452900262cf0:[0x4200138fee8b]PanicvPanicInt@vmkernel#nover+0x327 stack: 0x452900262dc8, 0x4302f6c06508, 0x4200138fee8b, 0x420013df1300, 0x452900262cf0  0x452900262dc0:[0x4200138ff43d]Panic_WithBacktrace@vmkernel#nover+0x56 stack: 0x452900262e30, 0x452900262de0, 0x452900262e40, 0x452900262df0, 0x3e7514  0x452900262e30:[0x4200138fbb90]NMI_Interrupt@vmkernel#nover+0x561 stack: 0x0, 0xf48, 0x0, 0x0, 0x0  0x452900262f00:[0x420013953392]IDTNMIWork@vmkernel#nover+0x7f stack: 0x420049800000, 0x4200139546dd, 0x0, 0x452900262fd0, 0x0  0x452900262f20:[0x4200139546dc]Int2_NMI@vmkernel#nover+0x19 stack: 0x0, 0x42001394e068, 0xf50, 0xf50, 0x0  0x452900262f40:[0x42001394e067]gate_entry@vmkernel#nover+0x68 stack: 0x0, 0x43207bc02088, 0xd, 0x0, 0x43207bc02088  0x45397b61bd30:[0x420013be7514]NUMASched_PageNum2PhysicalDomain@vmkernel#nover+0x58 stack: 0x1, 0x420013be34c3, 0x45396f79f000, 0x1, 0x100005cf757  0x45397b61bd50:[0x420013be34c2]NUMASched_UpdateAllocStats@vmkernel#nover+0x4b stack: 0x100005cf757, 0x0, 0x0, 0x4200139b36d9, 0x0  0x45397b61bd80:[0x4200139b36d8]VmMem_NodeStatsSub@vmkernel#nover+0x59 stack: 0x39, 0x45396f79f000, 0xbce0dbf, 0x100005cf757, 0x0  0x45397b61bdc0:[0x4200139b4372]VmMem_FreePageNoBackmap@vmkernel#nover+0x8b stack: 0x465ec0001be0, 0xa, 0x465ec18748b0, 0x420014e7685f, 0x465ec14437d0

    This issue is resolved in this release.

  • PR 3100552: ESXi booting times out after five minutes and the host automatically reboots

    During initial ESXi booting, when you see the Loading VMware ESXi progress bar, if the bootloader takes more than a total of five minutes to load all the boot modules before moving on to the next phase of booting, the host firmware times out the boot process and resets the system.

    This issue is resolved in this release. With ESXi 7.0 Update 3o, the ESXi bootloader resets the five-minute timeout after each boot module loads.

  • PR 3184368: The durable name of a SCSI LUN might not be set

    The durable name property for a SCSI-3 compliant device comes from pages 80h and 83h of the Vital Product Data (VPD) as defined by the T10 and SMI standards. To populate the durable name, ESXi first sends an inquiry command to get a list of VPD pages supported by the device. Then ESXi issues commands to get data for all supported VPD pages. Due to an issue with the target array, the device might fail a command to get VPD page data for a page in the list with a not supported error. As a result, ESXi cannot populate the durable name property for the device.

    This issue is resolved in this release. The fix ignores the error on the command to get VPD page data, except for pages 80h and 83h, if that data is not required for the generation of the durable name.

  • PR 3184425: Port scan of a VXLAN Tunnel End Point (VTEP) on an ESXi host might result in an intermittent connectivity loss

    Port scan of a VTEP on an ESXi host might result in an intermittent connectivity loss, but only under the following conditions:        

    1. Your environment has many VTEPs

    2. All VTEPs are in the same IP subnet     

    3. The upstream switch is Cisco ACI

    This issue is resolved in this release.

  • PR 3156627: Changing the mode of the virtual disk on a running virtual machine might cause the VM to fail

    If you use the VMware Host Client to edit the disk mode of a running virtual machine, for example from Independent - Nonpersistent to Dependent or Independent - Persistent, the operation fails and might cause the VM to fail. In the vmware.log, you see errors such as:

    msg.disk.notConfigured2] Failed to configure disk 'scsi0:4'. The virtual machine cannot be powered on with an unconfigured disk.

    [msg.checkpoint.continuesync.error] An operation required the virtual machine to quiesce and the virtual machine was unable to continue running.

    This issue is resolved in this release. The fix blocks changing the mode of an Independent - Nonpersistent disk on a running virtual machine by using the VMware Host Client. The vSphere Client already blocks such operations.

  • PR 3164477: In VMware Skyline Health Diagnostics, you see multiple warnings for vSAN memory pools

    The free heap memory estimation logic for some vSAN memory pools might consider more memory heap than actual and trigger warnings for insufficient memory. As a result, you see Memory pools (heap) health warnings for many hosts under the Physical disks section in Skyline Health.

    This issue is resolved in this release.

  • PR 3168950: ESXi hosts fail with a purple diagnostic screen while installing VMware NSX-T Data Center due to insufficient TCP/IP heap space

    VMware NSX-T Data Center has multiple net stacks and the default TCP/IP heap space might not be sufficient for installation. As a result, ESXi hosts might fail with a purple diagnostic screen.

    This issue is resolved in this release. The fix increases the default TcpipHeapSize setting to 8 MB from 0 MB and the maximum size to 128 MB from 32 MB. In the vSphere Client, you can change the TcpipHeapSize value by navigating to Hosts and Clusters > Configure > System > Advanced System Settings > TcpipHeapSize. In vCenter systems with VMware NSX-T Data Center, set the value to 128 MB.
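
    The same advanced option can also be checked and set from ESXCLI, as in this sketch; note that changes to TcpipHeapSize typically require a host reboot to take effect:

    # Show the current TCP/IP heap size setting
    esxcli system settings advanced list -o /Net/TcpipHeapSize

    # Set the heap size to 128 MB for hosts running VMware NSX-T Data Center
    esxcli system settings advanced set -o /Net/TcpipHeapSize -i 128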

  • PR 3096974: Rare race condition in I/O filters might cause an ESXi host to fail with a purple diagnostics screen

    ESXi hosts in which virtual machines use I/O filters might randomly fail with a purple diagnostics screen and an error such as #DF Exception 8 IP 0x42002c54b19f SP 0x453a9e8c2000 due to a rare race condition.

    This issue is resolved in this release.

  • PR 3112194: Spherelet might fail to start on an ESXi host due to the execInstalledOnly setting

    The execInstalledOnly setting is a run time parameter which limits execution of binaries such as applications and vmkernel modules to improve security and guard against breaches and compromises. Some execInstalledOnly security checks might interfere with Spherelet, the ESXi UserWorld agent that acts as an extension to the Kubernetes Control Plane, and prevent it from starting, even when all files are installed.

    This issue is resolved in this release.

  • PR 3115870: VMware VIB installation might fail during concurrent vendor package installations

    When you install update packages from several vendors, such as JetStream Software, Microsoft, and VMware, multiple clients call the same PatchManager APIs and might lead to a race condition. As a result, VMware installation packages (VIBs) might fail to install. In the logs, you see an error such as vim.fault.PlatformConfigFault, which is a catch-all fault indicating that some error has occurred regarding the configuration of the ESXi host. In the vSphere Client, you see a message such as An error occurred during host configuration.

    This issue is resolved in this release. The fix is to return a TaskInProgress warning instead of PlatformConfigFault, so that you are aware of the actual issue and retry the installation.

  • PR 3164439: Certain applications might consume too many ESXi file handles and cause performance degradation

    In very rare cases, applications such as NVIDIA virtual GPU (vGPU) might consume so many file handles that ESXi fails to process other services or VMs. As a result, you might see GPUs on some nodes disappear, report zero GPU memory, or show performance degradation.

    This issue is resolved in this release. The fix reduces the number of file handles a vGPU VM can consume.

  • PR 3162963: If parallel volume expand and volume refresh operations on the same VMFS volume run on two ESXi hosts in the same cluster, the VMFS volume might go offline

    While a VMFS volume expand operation is in progress on an ESXi host in a vCenter cluster, if on another host a user or vCenter initiates a refresh of the same VMFS volume capacity, such a volume might go offline. The issue occurs due to a possible mismatch in the device size, which is stamped on the disk in the volume metadata during a device rescan, and the device size value in the Pluggable Storage Architecture (PSA) layer on the host, which might not be updated if the device rescan is not complete.

    This issue is resolved in this release. The fix improves the resiliency of the volume manager code to force a consecutive refresh of the device attributes and comparison of the device sizes again if vCenter reports a mismatch in the device size.

  • PR 3161690: CPU usage of ESXi hosts might intermittently increase in some environments

    Due to a rare race condition, when a VM power off command conflicts with a callback function, you might see increased CPU usage on ESXi hosts, for example, after an upgrade.

    This issue is resolved in this release.

  • PR 3165374: ESXi hosts might become unresponsive and fail with a purple diagnostic screen during TCP TIME_WAIT

    The TCP slow timer might starve TCP input processing while it covers the list of connections in TIME_WAIT closing expired connections due to contention on the global TCP pcbinfo lock. As a result, the VMkernel might fail with a purple diagnostic screen and the error Spin count exceeded - possible deadlock while cleaning up TCP TIME_WAIT sockets. The backtrace points to tcpip functions such as tcp_slowtimo() or tcp_twstart().

    This issue is resolved in this release. The fix adds a new global lock to protect the list of connections in TIME_WAIT and to acquire the TCP pcbinfo lock only when closing an expired connection.

  • PR 3166818: If any of the mount points that use the same NFS server IP returns an error, VAAI-NAS might mark all mount points as not supported

    When you use a VAAI-NAS vendor plug-in, multiple file systems on an ESXi host use the same NFS server IP to create a datastore. For certain NAS providers, if one of the mounts returns a VIX_E_OBJECT_NOT_FOUND (25) error during the startSession call, hardware acceleration for all file systems with the same NFS server IP might become unsupported. This issue occurs when an ESXi host mounts a directory on an NFS server, but that directory is not available at the time of the startSession call, because, for example, it has been moved from that NFS server.

    This issue is resolved in this release. The fix makes sure that VAAI-NAS marks as not supported only mounts that report an error.

  • PR 3100030: An ESXi host fails with a purple diagnostic screen indicating VMFS heap corruption

    A race condition in the host cache of a VMFS datastore might cause heap corruption and the ESXi host fails with a purple diagnostic screen and a message such as:

    PF Exception 14 in world xxxxx:res3HelperQu IP 0xxxxxxe2 addr 0xXXXXXXXXX

    This issue is resolved in this release.

  • PR 3121216: If the Internet Control Message Protocol (ICMP) is not active, ESXi host reboot might take long after an upgrade

    If ICMP is not active on the NFS servers in your environment, after upgrading your system, ESXi host reboots might take an hour to complete, because restore operations for NFS datastores fail. NFS uses the vmkping utility to identify reachable IPs of the NFS servers before executing a mount operation, and when ICMP is not active, mount operations fail.

    This issue is resolved in this release. To remove dependency on the ICMP protocol to find reachable IPs, the fix adds socket APIs to ensure that IPs on a given NFS server are available.

  • PR 3098760: ESXi hosts randomly disconnect from the Active Directory domain or vCenter due to Likewise memory exhaustion

    Memory leaks in Active Directory operations and related libraries, or when smart card authentication is enabled on an ESXi host, might lead to Likewise memory exhaustion.

    This issue is partially resolved in this release. For more information, see VMware knowledge base article 78968.

  • PR 3118402: The NTP monitoring function returns intermittent time sync failure in the time services test report even when the ESXi host system clock is actually in sync

    The time services monitoring infrastructure periodically queries the NTP daemon, ntpd, for time sync status on ESXi hosts. Intermittent query failures might occur due to timeouts in the socket call to read such status information. As a result, you might see time sync failure alerts in the NTP test reports.

    This issue is resolved in this release. The fix makes sure that NTP monitoring queries do not fail.

  • PR 3159168: You see multiple "End path evaluation for device" messages in the vmkernel.log

    A periodic device probe that evaluates the state of all the paths to a device might create multiple unnecessary End path evaluation for device messages in the vmkernel.log file on a release build.

    This issue is resolved in this release. The fix restricts End path evaluation for device messages to only non-release builds, such as beta. You must enable the VERBOSE log level to see such logs.

  • PR 3101512: You see a warning for lost uplink redundancy even when the uplink is active

    If you quickly remove the network cable from a physical NIC and reinsert it, the ESXi host agent, hostd, triggers an event to restore uplink redundancy, but in some cases, although the uplink restores in a second, you still see a misleading alarm for lost uplink redundancy.

    This issue is resolved in this release.

  • PR 3161473: Operations with stateless ESXi hosts might not pick the expected remote disk for system cache, which causes remediation or compliance issues

    Operations with stateless ESXi hosts, such as storage migration, might not pick the expected remote disk for system cache. For example, you want to keep the new boot LUN as LUN 0, but vSphere Auto Deploy picks LUN 1.

    This issue is resolved in this release. The fix provides a consistent way to sort the remote disks and always pick the disk with the lowest LUN ID. To make sure you enable the fix, follow these steps:

    1. On the Edit host profile page of the Auto Deploy wizard, select Advanced Configuration Settings > System Image Cache Configuration > System Image Cache Configuration

    2. In the System Image Cache Profile Settings drop-down menu, select Enable stateless caching on the host.

    3. Edit Arguments for first disk by replacing remote with sortedremote and/or remoteesx with sortedremoteesx.

  • PR 3187846: Setting the screen resolution of a powered off, encrypted virtual machine does not always work when manually editing the VMX file

    If you manually specify the screen resolution of a powered off, encrypted virtual machine by editing the VMX file, the change might not take effect.

    This issue is resolved in this release.

esx-update_7.0.3-0.105.22348816

Patch Category: Bugfix
Patch Severity: Critical
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A

Affected VIBs

  • VMware_bootbank_loadesx_7.0.3-0.105.22348816

  • VMware_bootbank_esx-update_7.0.3-0.105.22348816

PRs Fixed: 3245953, 3178109, 3211395, 3164477, 3119959, 3164462

CVE numbers: N/A

Updates the loadesx and esx-update VIBs.

Broadcom-lsi-msgpt3_17.00.12.00-2vmw.703.0.105.22348816

Patch Category: Bugfix
Patch Severity: Critical
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A

Affected VIBs

  • VMW_bootbank_lsi-msgpt3_17.00.12.00-2vmw.703.0.105.22348816

PRs Fixed: 3036883

CVE numbers: N/A

Updates the lsi-msgpt3 VIB to resolve the following issue:

  • PR 3036883: A SAS path might be lost during upgrade of lsi_msgpt3 drivers

    Upgrading lsi_msgpt3 drivers of version 17.00.12.00-1 and earlier might cause storage connections and VMs on the ESXi host to become unresponsive due to a lost SAS path. In the VMkernel logs, you see error messages such as:

    2022-06-19T05:25:51.949Z cpu26:2097947)WARNING: NMP: nmpDeviceAttemptFailover:640: Retry world failover device "naa.6000d310045e2c000000000000000043" - issuing command 0x459acc1f5dc0

    2022-06-19T05:25:51.949Z cpu26:2097947)WARNING: vmw_psp_rr: psp_rrSelectPath:2177: Could not select path for device "naa.6000d310045e2c000000000000000043".

    2022-06-19T05:25:51.949Z cpu26:2097947)WARNING: NMP: nmpDeviceAttemptFailover:715: Retry world failover device "naa.6000d310045e2c000000000000000043" - failed to issue command due to Not found (APD), try again...

    This issue is resolved in this release. If you already face the issue, rescan your storage after failing back on the storage controller to recover the paths and storage connections.
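
    For reference, a manual rescan after failing back on the storage controller can be done from ESXCLI as in this sketch; the device identifier is the example NAA ID from the log excerpt above:

    # Rescan all storage adapters to rediscover the lost SAS paths
    esxcli storage core adapter rescan --all

    # Confirm the paths to the affected device are back
    esxcli storage core path list -d naa.6000d310045e2c000000000000000043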

Microchip-smartpqiv2-plugin_1.0.0-9vmw.703.0.105.22348816

Patch Category: Bugfix
Patch Severity: Critical
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: No
Affected Hardware: N/A
Affected Software: N/A

Affected VIBs

  • VMware_bootbank_lsuv2-smartpqiv2-plugin_1.0.0-9vmw.703.0.105.22348816

PRs Fixed: 3112692

CVE numbers: N/A

Updates the lsuv2-smartpqiv2-plugin VIB to resolve the following issue:

  • PR 3112692: The smartpqi driver cannot get the location of ThinkSystem PM1645a SAS SSD devices

    In rare cases, the smartpqi plugin in LSU service might get a longer serial number string from the VPD page 0x80 than the actual serial number, and the device lookup in the cache fails. As a result, when you run the command esxcli storage core device physical get -d <device_id> to get a device location, you see an error such as Unable to get location for device. Device is not found..

    This issue is resolved in this release. The fix makes sure that the smartpqi plugin in LSU service always checks the correct serial number length and finds the device location.

VMware-NVMe-PCIe_1.2.3.16-3vmw.703.0.105.22348816

Patch Category: Bugfix
Patch Severity: Critical
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A

Affected VIBs

  • VMW_bootbank_nvme-pcie_1.2.3.16-3vmw.703.0.105.22348816

PRs Fixed: 3218578

CVE numbers: N/A

Updates the nvme-pcie VIB to resolve the following issue:

  • PR 3218578: You see wrong device descriptions of some Lenovo ThinkSystem boot RAID adapters

    In a vSphere client interface, such as the vSphere Client, or when you run commands such as lspci and esxcfg-scsidevs -a from the ESXi shell, you might see the device names of some Lenovo-branded boot SATA/NVMe adapters as the generic names of the controller chips they use. For example, Lenovo ThinkSystem M.2 with Mirroring Enablement Kit is displayed as 88SE9230 PCIe SATA 6GB/s controller.

    This issue is resolved in this release.

VMware-ahci_2.0.11-2vmw.703.0.105.22348816

Patch Category: Bugfix
Patch Severity: Critical
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A

Affected VIBs

  • VMW_bootbank_vmw-ahci_2.0.11-2vmw.703.0.105.22348816

PRs Fixed: 3218578

CVE numbers: N/A

Updates the vmw-ahci VIB.

Intel-Volume-Mgmt-Device_2.7.0.1157-3vmw.703.0.105.22348816

Patch Category: Bugfix
Patch Severity: Critical
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: No
Affected Hardware: N/A
Affected Software: N/A

Affected VIBs

  • VMW_bootbank_iavmd_2.7.0.1157-3vmw.703.0.105.22348816

PRs Fixed: 3224847

CVE numbers: N/A

Updates the iavmd VIB.

ESXi_7.0.3-0.100.22348808

Patch Category: Security
Patch Severity: Critical
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A

Affected VIBs

  • VMware_bootbank_vsanhealth_7.0.3-0.100.22348808

  • VMware_bootbank_crx_7.0.3-0.100.22348808

  • VMware_bootbank_vdfs_7.0.3-0.100.22348808

  • VMware_bootbank_esxio-combiner_7.0.3-0.100.22348808

  • VMware_bootbank_native-misc-drivers_7.0.3-0.100.22348808

  • VMware_bootbank_esx-base_7.0.3-0.100.22348808

  • VMware_bootbank_esx-dvfilter-generic-fastpath_7.0.3-0.100.22348808

  • VMware_bootbank_cpu-microcode_7.0.3-0.100.22348808

  • VMware_bootbank_bmcal_7.0.3-0.100.22348808

  • VMware_bootbank_esx-ui_2.11.2-21988676

  • VMware_bootbank_gc_7.0.3-0.100.22348808

  • VMware_bootbank_trx_7.0.3-0.100.22348808

  • VMware_bootbank_vsan_7.0.3-0.100.22348808

  • VMware_bootbank_esx-xserver_7.0.3-0.100.22348808

PRs Fixed

3219441, 3239369, 3236018, 3232099, 3232034, 3222887, 3217141, 3213025, 3187868, 3186342, 3164962, 3089785, 3160789, 3098679, 3089785, 3256457

CVE numbers: N/A

The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.

Updates the vsanhealth, crx, vdfs, esxio-combiner, native-misc-drivers, esx-base, esx-dvfilter-generic-fastpath, cpu-microcode, bmcal, esx-ui, gc, trx, vsan, and esx-xserver VIBs to resolve the following issues:

  • ESXi 7.0 Update 3o provides the following security updates:

    • The cURL library is updated to version 8.1.2.

    • The ESXi userworld libxml2 library is updated to version 2.10.4.

    • The SQLite library is updated to version 3.42.0.

    • The OpenSSL package is updated to version 1.0.2zh.

  • The cpu-microcode VIB includes the following AMD microcode:

    Code Name | FMS | MCU Rev | MCU Date | Brand Names
    Bulldozer | 0x600f12 (15/01/2) | 0x0600063e | 2/7/2018 | Opteron 6200/4200/3200 Series
    Piledriver | 0x600f20 (15/02/0) | 0x06000852 | 2/6/2018 | Opteron 6300/4300/3300 Series
    Zen-Naples | 0x800f12 (17/01/2) | 0x0800126e | 11/11/2021 | EPYC 7001 Series
    Zen2-Rome | 0x830f10 (17/31/0) | 0x0830107a | 5/17/2023 | EPYC 7002/7Fx2/7Hx2 Series
    Zen3-Milan-B1 | 0xa00f11 (19/01/1) | 0x0a0011d1 | 7/10/2023 | EPYC 7003/7003X Series
    Zen3-Milan-B2 | 0xa00f12 (19/01/2) | 0x0a001234 | 7/10/2023 | EPYC 7003/7003X Series

  • The cpu-microcode VIB includes the following Intel microcode:

    Code Name | FMS | Plt ID | MCU Rev | MCU Date | Brand Names
    Nehalem EP | 0x106a5 (06/1a/5) | 0x03 | 0x1d | 5/11/2018 | Intel Xeon 35xx Series; Intel Xeon 55xx Series
    Clarkdale | 0x20652 (06/25/2) | 0x12 | 0x11 | 5/8/2018 | Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series
    Arrandale | 0x20655 (06/25/5) | 0x92 | 0x7 | 4/23/2018 | Intel Core i7-620LE Processor
    Sandy Bridge DT | 0x206a7 (06/2a/7) | 0x12 | 0x2f | 2/17/2019 | Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series
    Westmere EP | 0x206c2 (06/2c/2) | 0x03 | 0x1f | 5/8/2018 | Intel Xeon 56xx Series; Intel Xeon 36xx Series
    Sandy Bridge EP | 0x206d6 (06/2d/6) | 0x6d | 0x621 | 3/4/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
    Sandy Bridge EP | 0x206d7 (06/2d/7) | 0x6d | 0x71a | 3/24/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
    Nehalem EX | 0x206e6 (06/2e/6) | 0x04 | 0xd | 5/15/2018 | Intel Xeon 65xx Series; Intel Xeon 75xx Series
    Westmere EX | 0x206f2 (06/2f/2) | 0x05 | 0x3b | 5/16/2018 | Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series
    Ivy Bridge DT | 0x306a9 (06/3a/9) | 0x12 | 0x21 | 2/13/2019 | Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C
    Haswell DT | 0x306c3 (06/3c/3) | 0x32 | 0x28 | 11/12/2019 | Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series
    Ivy Bridge EP | 0x306e4 (06/3e/4) | 0xed | 0x42e | 3/14/2019 | Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series
    Ivy Bridge EX | 0x306e7 (06/3e/7) | 0xed | 0x715 | 3/14/2019 | Intel Xeon E7-8800/4800/2800-v2 Series
    Haswell EP | 0x306f2 (06/3f/2) | 0x6f | 0x49 | 8/11/2021 | Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series
    Haswell EX | 0x306f4 (06/3f/4) | 0x80 | 0x1a | 5/24/2021 | Intel Xeon E7-8800/4800-v3 Series
    Broadwell H | 0x40671 (06/47/1) | 0x22 | 0x22 | 11/12/2019 | Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series
    Avoton | 0x406d8 (06/4d/8) | 0x01 | 0x12d | 9/16/2019 | Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series
    Broadwell EP/EX | 0x406f1 (06/4f/1) | 0xef | 0xb000040 | 5/19/2021 | Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series
    Skylake SP | 0x50654 (06/55/4) | 0xb7 | 0x2007006 | 3/6/2023 | Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series
    Cascade Lake B-0 | 0x50656 (06/55/6) | 0xbf | 0x4003604 | 3/17/2023 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
    Cascade Lake | 0x50657 (06/55/7) | 0xbf | 0x5003604 | 3/17/2023 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
    Cooper Lake | 0x5065b (06/55/b) | 0xbf | 0x7002703 | 3/21/2023 | Intel Xeon Platinum 8300 Series; Intel Xeon Gold 6300/5300
    Broadwell DE | 0x50662 (06/56/2) | 0x10 | 0x1c | 6/17/2019 | Intel Xeon D-1500 Series
    Broadwell DE | 0x50663 (06/56/3) | 0x10 | 0x700001c | 6/12/2021 | Intel Xeon D-1500 Series
    Broadwell DE | 0x50664 (06/56/4) | 0x10 | 0xf00001a | 6/12/2021 | Intel Xeon D-1500 Series
    Broadwell NS | 0x50665 (06/56/5) | 0x10 | 0xe000014 | 9/18/2021 | Intel Xeon D-1600 Series
    Skylake H/S | 0x506e3 (06/5e/3) | 0x36 | 0xf0 | 11/12/2021 | Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series
    Denverton | 0x506f1 (06/5f/1) | 0x01 | 0x38 | 12/2/2021 | Intel Atom C3000 Series
    Ice Lake SP | 0x606a6 (06/6a/6) | 0x87 | 0xd0003a5 | 3/30/2023 | Intel Xeon Platinum 8300 Series; Intel Xeon Gold 6300/5300 Series; Intel Xeon Silver 4300 Series
    Ice Lake D | 0x606c1 (06/6c/1) | 0x10 | 0x1000230 | 1/27/2023 | Intel Xeon D-2700 Series; Intel Xeon D-1700 Series
    Snow Ridge | 0x80665 (06/86/5) | 0x01 | 0x4c000023 | 2/22/2023 | Intel Atom P5000 Series
    Snow Ridge | 0x80667 (06/86/7) | 0x01 | 0x4c000023 | 2/22/2023 | Intel Atom P5000 Series
    Tiger Lake U | 0x806c1 (06/8c/1) | 0x80 | 0xac | 2/27/2023 | Intel Core i3/i5/i7-1100 Series
    Tiger Lake U Refresh | 0x806c2 (06/8c/2) | 0xc2 | 0x2c | 2/27/2023 | Intel Core i3/i5/i7-1100 Series
    Tiger Lake H | 0x806d1 (06/8d/1) | 0xc2 | 0x46 | 2/27/2023 | Intel Xeon W-11000E Series
    Sapphire Rapids SP HBM | 0x806f8 (06/8f/8) | 0x10 | 0x2c0001d1 | 2/14/2023 | Intel Xeon Max 9400 Series
    Sapphire Rapids SP | 0x806f8 (06/8f/8) | 0x87 | 0x2b000461 | 3/13/2023 | Intel Xeon Platinum 8400 Series; Intel Xeon Gold 6400/5400 Series; Intel Xeon Silver 4400 Series; Intel Xeon Bronze 3400 Series
    Kaby Lake H/S/X | 0x906e9 (06/9e/9) | 0x2a | 0xf4 | 2/23/2023 | Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series
    Coffee Lake | 0x906ea (06/9e/a) | 0x22 | 0xf4 | 2/23/2023 | Intel Xeon E-2100 Series; Intel Xeon E-2200 Series (4 or 6 core)
    Coffee Lake | 0x906eb (06/9e/b) | 0x02 | 0xf4 | 2/23/2023 | Intel Xeon E-2100 Series
    Coffee Lake | 0x906ec (06/9e/c) | 0x22 | 0xf4 | 2/23/2023 | Intel Xeon E-2100 Series
    Coffee Lake Refresh | 0x906ed (06/9e/d) | 0x22 | 0xfa | 2/27/2023 | Intel Xeon E-2200 Series (8 core)
    Rocket Lake S | 0xa0671 (06/a7/1) | 0x02 | 0x59 | 2/26/2023 | Intel Xeon E-2300 Series

esx-update_7.0.3-0.100.22348808

Patch Category

Security

Patch Severity

Critical

Host Reboot Required

Yes

Virtual Machine Migration or Shutdown Required

Yes

Affected Hardware

N/A

Affected Software

N/A

Affected VIBs

  • VMware_bootbank_loadesx_7.0.3-0.100.22348808

  • VMware_bootbank_esx-update_7.0.3-0.100.22348808

PRs Fixed

N/A

CVE numbers

N/A

Updates the loadesx and esx-update VIBs.

VMware-VM-Tools_12.2.6.22229486-22348808

Patch Category

Security

Patch Severity

Critical

Host Reboot Required

No

Virtual Machine Migration or Shutdown Required

No

Affected Hardware

N/A

Affected Software

N/A

Affected VIBs

  • VMware_locker_tools-light_12.2.6.22229486-22348808

PRs Fixed

3178522, 3178519

CVE numbers

N/A

Updates the tools-light VIB.

  • The following VMware Tools ISO images are bundled with ESXi 7.0 Update 3o:

    • windows.iso: VMware Tools 12.2.6 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.

    • linux.iso: VMware Tools 10.3.25 ISO image for Linux OS with glibc 2.11 or later.

    The following VMware Tools ISO images are available for download:

    • VMware Tools 11.0.6:

      • windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).

    • VMware Tools 10.0.12:

      • winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003.

      • linuxPreGLibc25.iso: supports Linux guest operating systems earlier than Red Hat Enterprise Linux (RHEL) 5, SUSE Linux Enterprise Server (SLES) 11, Ubuntu 7.04, and other distributions with glibc version earlier than 2.5.

    • solaris.iso: VMware Tools image 10.3.10 for Solaris.

    • darwin.iso: Supports Mac OS X versions 10.11 and later. VMware Tools 12.1.0 was the last regular release for macOS. For more details, see VMware knowledge base article 88698.

    Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:

ESXi-7.0U3o-22348816-standard

Profile Name

ESXi-7.0U3o-22348816-standard

Build

For build information, see Patches Contained in This Release.

Vendor

VMware, Inc.

Release Date

September 28, 2023

Acceptance Level

PartnerSupported

Affected Hardware

N/A

Affected Software

N/A

Affected VIBs

  • VMware_bootbank_gc_7.0.3-0.105.22348816

  • VMware_bootbank_native-misc-drivers_7.0.3-0.105.22348816

  • VMware_bootbank_vsanhealth_7.0.3-0.105.22348816

  • VMware_bootbank_crx_7.0.3-0.105.22348816

  • VMware_bootbank_esx-xserver_7.0.3-0.105.22348816

  • VMware_bootbank_vdfs_7.0.3-0.105.22348816

  • VMware_bootbank_cpu-microcode_7.0.3-0.105.22348816

  • VMware_bootbank_esx-base_7.0.3-0.105.22348816

  • VMware_bootbank_vsan_7.0.3-0.105.22348816

  • VMware_bootbank_esxio-combiner_7.0.3-0.105.22348816

  • VMware_bootbank_esx-ui_2.11.2-21988676

  • VMware_bootbank_bmcal_7.0.3-0.105.22348816

  • VMware_bootbank_esx-dvfilter-generic-fastpath_7.0.3-0.105.22348816

  • VMware_bootbank_trx_7.0.3-0.105.22348816

  • VMware_bootbank_loadesx_7.0.3-0.105.22348816

  • VMware_bootbank_esx-update_7.0.3-0.105.22348816

  • VMW_bootbank_lsi-msgpt3_17.00.12.00-2vmw.703.0.105.22348816

  • VMware_bootbank_lsuv2-smartpqiv2-plugin_1.0.0-9vmw.703.0.105.22348816

  • VMW_bootbank_nvme-pcie_1.2.3.16-3vmw.703.0.105.22348816

  • VMW_bootbank_vmw-ahci_2.0.11-2vmw.703.0.105.22348816

  • VMW_bootbank_iavmd_2.7.0.1157-3vmw.703.0.105.22348816

  • VMware_locker_tools-light_12.2.6.22229486-22233486

PRs Fixed

3244098, 3216958, 3245763, 3242021, 3246132, 3253205, 3256804, 3239170, 3251981, 3215370, 3117615, 3248478, 3252676, 3235496, 3238026, 3236064, 3224739, 3224306, 3223755, 3216958, 3233958, 3185125, 3219264, 3221620, 3218835, 3185560, 3221099, 3211625, 3221860, 3228586, 3224604, 3223539, 3222601, 3217258, 3213041, 3216522, 3221591, 3216389, 3220004, 3217633, 3216548, 3216449, 3156666, 3181574, 3180746, 3155476, 3211807, 3154090, 3184008, 3183519, 3183519, 3209853, 3100552, 3187846, 3113263, 3176350, 3185827, 3095511, 3184425, 3186351, 3186367, 3154090, 3158524, 3181601, 3180391, 3180283, 3184608, 3181774, 3155476, 3163361, 3182978, 3184368, 3160456, 3164575, 3181901, 3184871, 3166818, 3157195, 3164355, 3163271, 3112194, 3161690, 3261925, 3178589, 3178721, 3162963, 3168950, 3159168, 3158491, 3158866, 3096769, 3165374, 3122037, 3098760, 3164439, 3161473, 3162790, 3100030, 3096974, 3161473, 3165651, 3083007, 3118240, 3151076, 3118402, 3160480, 3156627, 3158531, 3162499, 3158512, 3053430, 3153395, 3117615, 3099192, 3158508, 3115870, 3119834, 3158220, 3110401, 2625439, 3099357, 3152811, 3108979, 3120165, 3245953, 3178109, 3211395, 3164477, 3119959, 3164462, 3036883, 3112692, 3218578, 3224847

Related CVE numbers

N/A

This patch updates the following issues:

  • PR 3187846: Setting the screen resolution of a powered off, encrypted virtual machine does not always work when manually editing the VMX file

    If you manually specify the screen resolution of a powered off, encrypted virtual machine by editing the VMX file, the change might not take effect.

    This issue is resolved in this release.
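
    As an illustration only, display-resolution entries in a VMX file typically look like the following; the specific keys and values shown here are an assumption for this sketch and are not taken from this release note:

    svga.autodetect = "FALSE"
    svga.maxWidth = "1920"
    svga.maxHeight = "1080"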

  • PR 3253205: Static IPv6 gateway disappears in 18 hours

    If you configure a static IPv6 gateway address in your vSphere environment, such gateways might disappear after up to 18 hours due to a timeout.

    This issue is resolved in this release. The fix removes the existing timeout for static gateways.

  • PR 3272532: You see speed as Half Duplex after changing the speed from Auto to Full on a VMNIC

    In some cases, length checks during the parsing of duplex TLVs in an environment with active Cisco Discovery Protocol (CDP) might fail. As a result, when you change the speed from Auto to Full on a physical NIC, on peer devices you do not see the real duplex value in the TLV under the neighbor information, but the default Half Duplex value.

    This issue is resolved in this release.

  • PR 3251981: An NFSv4.1 file might appear to be empty even though it contains data

    When you open an existing NFSv4.1 file with write-only access and the NFS client then reopens the same file with read-only access, read operations from the client might return no data although the file is not empty.

    This issue is resolved in this release.

  • PR 3116601: When you override the default gateway for the vSAN VMkernel adapter on an ESXi host, the vSphere HA agent on the host displays as inaccessible

    In some cases, when you override the default gateway for the vSAN VMkernel adapter on an ESXi host, the Fault Domain Manager (FDM), which is the agent that vSphere HA deploys on ESX hosts, might stop receiving Internet Control Message Protocol (ICMP) pings. As a result, FDM might issue false cluster alarms that the vSphere HA agent on the host cannot reach some management network addresses of other hosts.

    This issue is resolved in this release.

  • PR 3251801: vSphere vMotion operation from a 6.7.x ESXi host with an Intel Ice Lake CPU fails with msg.checkpoint.cpucheck.fail

    vSphere vMotion operations by using either the vSphere Client or VMware Hybrid Cloud Extension (HCX) from an Intel Ice Lake CPU host running ESXi 6.7.x fail with an error such as msg.checkpoint.cpucheck.fail. In the vSphere Client, you see a message that cpuid.PSFD is not supported on the target host. In HCX, you see a report such as A general system error occurred: vMotion failed:.

    This issue is resolved in this release.

  • PR 3036883: A SAS path might be lost during upgrade of lsi_msgpt3 drivers

    Upgrading lsi_msgpt3 drivers of version 17.00.12.00-1 and earlier might cause storage connections and VMs on the ESXi host to become unresponsive due to a lost SAS path. In the VMkernel logs, you see error messages such as:

    2022-06-19T05:25:51.949Z cpu26:2097947)WARNING: NMP: nmpDeviceAttemptFailover:640: Retry world failover device "naa.6000d310045e2c000000000000000043" - issuing command 0x459acc1f5dc0

    2022-06-19T05:25:51.949Z cpu26:2097947)WARNING: vmw_psp_rr: psp_rrSelectPath:2177: Could not select path for device "naa.6000d310045e2c000000000000000043".

    2022-06-19T05:25:51.949Z cpu26:2097947)WARNING: NMP: nmpDeviceAttemptFailover:715: Retry world failover device "naa.6000d310045e2c000000000000000043" - failed to issue command due to Not found (APD), try again...

    This issue is resolved in this release. If you already face the issue, rescan your storage after failing back the storage controller to recover the paths and storage connections.
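
    If you need to trigger the rescan manually, a minimal ESXCLI sketch follows; the device identifier is the one from the log excerpt above:

    # Rescan all storage adapters on the host
    esxcli storage core adapter rescan --all
    # Verify that the paths to the affected device are back
    esxcli storage core path list -d naa.6000d310045e2c000000000000000043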

  • PR 3224739: You see alarms for dropped syslog messages

    If the network throughput of your vSphere system does not align with the speed at which log messages are generated, some of the log messages sent to the remote syslog server might drop. As a result, in the vSphere Client you see host errors of type Triggered Alarm and in the logs, you see warnings such as ALERT: vmsyslog logger xxxx:514 lost yyyy log messages.

    This issue is resolved in this release. The fix improves the network logging performance of the logging service to reduce the number of such lost log messages.
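
    For reference, you can review and adjust the remote syslog configuration of a host with ESXCLI; the log host address below is a placeholder:

    # Show the current syslog configuration, including the configured remote log hosts
    esxcli system syslog config get
    # Point the host at a remote syslog server (placeholder address) and reload the configuration
    esxcli system syslog config set --loghost='udp://syslog.example.com:514'
    esxcli system syslog reload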

  • PR 3247027: If any failure occurs during a vSphere vMotion migration of NVIDIA virtual GPU (vGPU) VM, the destination ESXi host might fail with a purple diagnostic screen

    In very rare cases, if any type of failure occurs during a vSphere vMotion migration of a vGPU VM, the vMotion operation is marked as a failure while a specific internal operation is in progress. As a result, the destination ESXi host might fail with a purple diagnostic screen.

    This issue is resolved in this release.

  • PR 3095511: The SFCB service intermittently fails and generates "sfcb-vmware_bas-zdump" files on multiple ESXi hosts

    In very rare cases, when the SFCB service tries to access an uninitialized variable, the service might fail with a dump file such as sfcb-vmware_bas-zdump.000. The issue occurs because accessing an uninitialized variable can cause the SFCB process to keep requesting memory until it exceeds the heap allocation.

    This issue is resolved in this release.

  • PR 3218578: You see wrong device descriptions of some Lenovo ThinkSystem boot RAID adapters

    In a client interface such as the vSphere Client, or when you run commands such as lspci and esxcfg-scsidevs -a from the ESXi shell, you might see the device names of some Lenovo-branded boot SATA/NVMe adapters displayed as the generic names of the controller chips they use. For example, Lenovo ThinkSystem M.2 with Mirroring Enablement Kit is displayed as 88SE9230 PCIe SATA 6GB/s controller.

    This issue is resolved in this release.

  • PR 3256804: File server loses connectivity after failover when running vSAN File Service on NSX overlay network

    If vSAN File Service is running on an NSX overlay network, the file server might lose connectivity after it fails over from one agent VM to another agent VM. The file server can fail over when you reconfigure the file service domain from one without Active Directory (AD) to one with AD, or when vSAN detects unhealthy behavior in the file server, file share, or AD server. If file server connectivity is lost after a failover, File Service clients cannot access the file share. The following health warning might be reported:

    One or more DNS servers is not reachable.

    This issue is resolved in this release.

  • PR 3217633: vSAN management service on the orchestration host malfunctions during vCenter restart

    This issue can occur on vSAN clusters where vCenter is deployed. If multiple cluster shutdown operations are performed, the /etc/vmware/vsan/vsanperf.conf file on the orchestration host can contain two versions of the following management VM option: vc_vm_moId and vc_vm_moid. These configuration options conflict with each other and can cause the vSAN management service to malfunction.

    This issue is resolved in this release.

  • PR 3224306: When you change the ESXi syslog setting syslog.global.logDir, if syslog.global.logDirUnique is active, you might see 2 levels of <hostname> subdir under the logdir path

    After upgrading an ESXi host to 7.0 Update 3c and later, if you change the syslog setting syslog.global.logDir and the syslog.global.logDirUnique setting is active, you might see 2 levels of <hostname> subdirs under the logdir path on the host. The syslog.global.logDirUnique setting is useful if the same NFS directory is configured as the Syslog.global.LogDir by multiple ESXi hosts, as it creates a subdirectory with the name of the ESXi host under logdir.

    This issue is resolved in this release.
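
    A minimal ESXCLI sketch of the settings involved, assuming a shared NFS datastore path as a placeholder:

    # Send logs to a shared directory and create a per-host subdirectory under it
    esxcli system syslog config set --logdir=/vmfs/volumes/nfs-logs/esxi-logs --logdir-unique=true
    esxcli system syslog reload
    # Confirm the effective values of Syslog.global.logDir and Syslog.global.logDirUnique
    esxcli system syslog config get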

  • PR 3223755: ESXi syslog daemon fails to resume transmitting logs to the configured SSL remote syslog server after a drop in network connectivity is restored

    If a remote SSL syslog server temporarily loses connectivity, the ESXi syslog daemon might fail to resume transmitting logs after connectivity is restored, and you must restart the service to re-activate transmissions. The issue occurs due to some unhandled exceptions while the ESXi syslog daemon tries to restore the connection to the SSL remote syslog server.

    This issue is resolved in this release. The fix adds a generic catch for all unhandled exceptions.

  • PR 3185536: vSAN cluster remediation error after vCenter upgrade to 7.0 Update 3

    When you upgrade vCenter from version 6.5.x to 7.0 Update 3, and reconfigure the vSAN cluster, a vSAN cluster remediation is triggered, but in some cases fails.

    This issue is resolved in this release.

  • PR 3110401: The Explicit Congestion Notification (ECN) setting is not persistent on ESXi host reboot

    ECN, specified in RFC 3168, allows a TCP sender to reduce the transmission rate to avoid packet drops and is activated by default. However, if you change the default setting, or manually deactivate ECN, such changes to the ESXi configuration might not persist after a reboot of the host.

    This issue is resolved in this release.

  • PR 3153395: Upon refresh, some dynamic firewall rules might not persist and lead to vSAN iSCSI Target service failure

    If you frequently run the command esxcli network firewall refresh, a race condition might cause some dynamic firewall rules to be removed upon refresh or load. As a result, some services such as the vSAN iSCSI Target daemon, which provides vSAN storage with the iSCSI protocol, might fail.

    This issue is resolved in this release.
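
    For reference, the command involved and a quick way to check the ruleset state after a refresh:

    # Reload dynamic firewall rules (the command affected by the race condition)
    esxcli network firewall refresh
    # List all rulesets and confirm that the ruleset used by the vSAN iSCSI Target service is still enabled
    esxcli network firewall ruleset list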

  • PR 3096769: You see high latency for VM I/O operations in a vSAN cluster with Unmap activated

    This issue affects vSAN clusters with Unmap activated. A problem in Unmap handling in LSOM creates log congestion. The log congestion can cause high VM I/O latency.

    This issue is resolved in this release.

  • PR 3163361: The Audit Record Storage Capacity parameter does not persist across ESXi host reboots

    After an ESXi host reboot, you see the Audit Record Storage Capacity parameter restored to the default value of 4 MiB, regardless of any previous changes.

    This issue is resolved in this release.

  • PR 3162496: If the Internet Control Message Protocol (ICMP) is not active, ESXi host reboot might take long after upgrading to vSphere 8.0 and later

    If ICMP is not active on the NFS servers in your environment, after upgrading your system to vSphere 8.0 and later, the reboot of ESXi hosts might take an hour to complete, because restore operations for NFS datastores fail. NFS uses the vmkping utility to identify reachable IPs of the NFS servers before executing a mount operation, and when ICMP is not active, mount operations fail.

    This issue is resolved in this release. To remove dependency on the ICMP protocol to find reachable IPs, the fix adds socket APIs to ensure that IPs on a given NFS server are available.
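
    If you want to verify NFS server reachability yourself, a minimal sketch follows; the VMkernel interface name and server IP are placeholders:

    # Ping the NFS server over the VMkernel interface used for NFS traffic
    vmkping -I vmk1 192.0.2.10
    # List the NFS v3 datastores and their mount state after the host boots
    esxcli storage nfs list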

  • PR 3220004: VBS-enabled Windows VMs might fail with a blue diagnostic screen when an ESXi host is under memory pressure

    Windows VMs with virtualization-based security (VBS) enabled might intermittently fail with a blue diagnostic screen during memory-heavy operations such as deleting snapshots or consolidating disks. The BSOD has the following signature:

    DRIVER_VERIFIER_DMA_VIOLATION (e6)

    An illegal DMA operation was attempted by a driver being verified.

    Arguments:

    Arg1: 0000000000000026, IOMMU detected DMA violation.

    Arg2: 0000000000000000, Device Object of faulting device.

    Arg3: xxxxxxxxxxxxxxxx, Faulting information (usually faulting physical address).

    Arg4: 0000000000000008, Fault type (hardware specific).

    This issue is resolved in this release.

  • PR 3162905: ESXi hosts might intermittently disconnect from vCenter in case of memory outage

    In rare cases, if vCenter fails to allocate or transfer active bitmaps due to an alert for insufficient memory for any reason, the hostd service might repeatedly fail and you cannot reconnect the host to vCenter.

    This issue is resolved in this release. The fix makes sure that in such cases you see an error rather than the hostd service failure.

  • PR 3108979: Staging ESXi patches and ESXi upgrade tasks might become indefinitely unresponsive due to a log buffer limit

    If a large string or log exceeds the Python log buffer limit of 16K, the logger becomes unresponsive. As a result, when you stage ESXi patches on a host by using the vSphere Client, the process responsible for completing the staging task on ESXi might become indefinitely unresponsive. The task times out and reports an error. The issue also occurs when you remediate ESXi hosts by using a patch baseline.

    This issue is resolved in this release. The fix breaks the large input string into smaller chunks for logging.

  • PR 3219264: vSAN precheck for maintenance mode or disk decommission doesn't list objects that might lose accessibility

    This issue affects objects with resyncing components, and some components reside on a device to be removed or placed into maintenance mode. When you run a precheck with the No-Action option, the precheck does not evaluate the object correctly to report it in the inaccessibleObjects list.

    This issue is resolved in this release. Precheck includes all affected objects in the inaccessibleObjects list.

  • PR 3186545: Vulnerability scans might report the HTTP TRACE method on vCenter ports 9084 and 9087 as vulnerable

    Some third-party tools for vulnerability scans might report the HTTP TRACE method on vCenter ports 9084 and 9087 as vulnerable.

    This issue is resolved in this release.

  • PR 3221591: ESXi NVMe/TCP initiator fails to recover paths after target failure recovery

    When an NVMe/TCP target recovers from a failure, ESXi cannot recover the path.

    This issue is resolved in this release.

  • PR 3186351: You see an override flag of a NIC teaming, security, or traffic shaping policy on a portgroup unexpectedly enabled after an ESXi host reboot

    In some cases, a networking configuration might not persist in the ESXi ConfigStore, and you see an override flag of a NIC teaming, security, or traffic shaping policy on a portgroup unexpectedly enabled after an ESXi host reboot.

    This issue is resolved in this release.
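
    To review the policies applied to a standard portgroup after a reboot, a hedged example; the portgroup name is a placeholder:

    # Show the teaming and failover policy currently applied to the portgroup
    esxcli network vswitch standard portgroup policy failover get -p "VM Network"
    # Show the security policy currently applied to the portgroup
    esxcli network vswitch standard portgroup policy security get -p "VM Network"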

  • PR 3162790: The sfcb daemon might fail with a core dump while installing a third-party CIM provider

    The sfcb daemon might try to access already freed memory when registering a CIM provider and fail with a core dump while installing a third-party CIM provider, such as Dell OpenManage Server Administrator for example.

    This issue is resolved in this release.

  • PR 3157195: ESX hosts might fail with a purple diagnostic screen and an error NMI IPI: Panic requested by another PCPU

    The resource pool cache is a VMFS specific volume level cache that stores the resource clusters corresponding to the VMFS volume. While searching for priority clusters, the cache flusher workflow iterates through a large list of cached resource clusters, which can cause lockup of the physical CPUs. As a result, ESX hosts might fail with a purple diagnostic screen. In the logDump file, you see an error such as:

    ^[[7m2022-10-22T07:56:47.322Z cpu13:2101160)WARNING: Heartbeat: 827: PCPU 0 didn't have a heartbeat for 7 seconds, timeout is 14, 1 IPIs sent; *may* be locked up.^[[0m^

    [[31;1m2022-10-22T07:56:47.322Z cpu0:2110633)ALERT: NMI: 710: NMI IPI: RIPOFF(base):RBP:CS

    This issue is resolved in this release.

  • PR 3230493: If a delta sync runs while the target vSphere Replication server is not active, Windows virtual machines might become unresponsive

    If the target vSphere Replication server is not active during a delta sync, the synchronization process cannot complete and Windows virtual machines become unresponsive.

    This issue is resolved in this release. The fix makes sure that no delta sync jobs start when a vSphere Replication server is not active.

  • PR 3122037: ESXi ConfigStore database fills up and writes fail

    Stale data related to block devices might not be deleted in time from the ESXi ConfigStore database and cause an out of space condition. As a result, write operations to ConfigStore start to fail. In the backtrace, you see logs such as:

    2022-12-19T03:51:42.733Z cpu53:26745174)WARNING: VisorFSRam: 203: Cannot extend visorfs file /etc/vmware/configstore/current-store-1-journal because its ramdisk (configstore) is full.

    This issue is resolved in this release.

  • PR 3246132: You see a system clock drift on an ESXi host after synchronization with the NTP server fails

    If connectivity between the NTP server and an ESXi host is delayed for some reason, the system clock on the ESXi host might experience a drift. When you run the vsish command vsish -e get /system/ntpclock/clockData, you might see a large negative value for the adjtime field. For example:

    NTP clock data {  
    ...  adjtime() (usec):-309237290312 <<<<<<  
    ...
    }

    This issue is resolved in this release.

  • PR 3185560: vSphere vMotion operations for virtual machines with swap files on a vSphere Virtual Volumes datastore intermittently fail after a hot extend of the VM disk

    vSphere vMotion operations for virtual machines with swap files on a vSphere Virtual Volumes datastore might fail under the following circumstances:

    A VM that runs on ESXi host A is migrated to ESXi host B, then the VM disk is hot extended, and the VM is migrated back to ESXi host A. The issue occurs due to stale statistics about the swap virtual volume.

    This issue is resolved in this release.

  • PR 3236207: You cannot unmount an NFS v3 datastore from a vCenter system because the datastore displays as inaccessible

    While unmounting an NFSv3 datastore from a vCenter system by using either the vSphere Client or ESXCLI, you might see the datastore either as inaccessible, or get an Unable to Refresh error. The issue occurs when a hostd cache refresh happens before NFS deletes the mount details from ConfigStore, which creates a rare race condition.

    This issue is resolved in this release.

  • PR 3236064: Compliance or remediation scans by using vSphere Lifecycle Manager might take long on ESXi hosts with a large number of datastores

    Compliance or remediation scans by using vSphere Lifecycle Manager include an upgrade precheck that lists all volumes attached to an ESXi host, their free space, versions, and other details. If a host has many datastores attached, each with a large capacity, the precheck can take a long time. The time is multiplied by the number of baselines attached to the cluster or host.

    This issue is resolved in this release. If you already face the issue, disconnect the attached datastores while performing the remediation or compliance scans and reattach them after the operations complete.

  • PR 3245763: During a vSphere vMotion operation, destination ESXi hosts with a Distributed Firewall (DFW) enabled might fail with a purple diagnostic screen

    If the destination host in a vSphere vMotion operation has a DFW enabled, a rare race condition might cause the host to fail with a purple diagnostic screen.

    This issue is resolved in this release.

  • PR 3158866: Virtual machines might become unresponsive during VMFS volume expansion due to a Logical Volume Manager (LVM) check

    If a VMFS volume expansion operation happens during a LVM probe to update volume attributes, such as the size of the VMFS partition backing of the VMFS volume, the current size of the VMFS volume might not match the LVM metadata on the disk. As a result, LVM marks such a volume offline and virtual machines on that volume become unresponsive.

    This issue is resolved in this release. The fix improves Input Output Control (IOCTL) to speed up the update of device attributes and avoid such cases.

  • PR 3152717: vSphere Auto Deploy workflows might fail after an upgrade to vCenter Server 7.0 Update 3j or later

    Due to a networking issue in reverse proxy configuration environments, vSphere Auto Deploy workflows might fail after an upgrade to vCenter Server 7.0 Update 3j or later. In the ESXi management console and in the /var/log/vmware/rbd/rbd-cgi.log file you see an HTTP client error.

    This issue is resolved in this release. The fix makes sure Auto Deploy workflows run successfully in reverse proxy configuration environments.

  • PR 3185125: Virtual SLIT table not populated for the guest OS when using sched.nodeX.affinity

    When you specify virtual NUMA node affinity, the vSLIT table might not be populated and you see the following error in vmware.log: vSLIT: NumaGetLatencyInfo failed with status: 195887107.

    This happens when setting node affinity as follows:

    numa.slit.enable = "TRUE"

    sched.node0.affinity = 0

    sched.node1.affinity = 1

    ...sched.nodeN.affinity = N

    This issue is resolved in this release. If you already face the issue, specify VCPU affinity to physical CPUs on the associated NUMA nodes, instead of node affinity.

    For example:

    numa.slit.enable = "TRUE"

    sched.vcpu0.affinity = "0-7"

    sched.vcpu1.affinity = "0-7"

    ...sched.vcpuN.affinity = "Y-Z"

  • PR 3181774: You do not see an error when you attach a CBT enabled FCD to a VM with CBT disabled

    Attaching a First Class Disk (FCD) with Changed Block Tracking (CBT) to a VM with CBT disabled does not throw a detailed error, but the operation cannot take effect. In the backtrace, you might see an error message such as InvalidState, which is generic and does not provide details on the issue.

    This issue is resolved in this release. The fix adds the error message Cannot attach a CBT enabled fcd: {path} to CBT disabled VM. to the logs of the hostd service and adds a status message for the task in the vSphere Client.

  • PR 3185827: You see trap files in a SNMP directory under /var/spool even though SNMP is not enabled

    After the hostd service starts, for example after an ESXi host reboot, it might create an SNMP directory under /var/spool and you might see many .trp files pile up in this directory.

    This issue is resolved in this release. The fix makes sure that the directory /var/spool/snmp exists only when SNMP is enabled.

  • PR 3082867: An ESXi host might fail with a purple diagnostic screen after a replicated virtual machine migrates to the host

    In some environments, when a replicated virtual machine migrates to a certain ESXi host, during a full sync, some SCSI WRITE commands might target address ranges past the end of the transfer map. As a result, the ESXi host might fail with a purple diagnostic screen and a backtrace such as:

    #0 DLM_free (msp=0x43238f2c9cc0, mem=mem@entry=0x43238f6db860, allowTrim=allowTrim@entry=1 '\001') at bora/vmkernel/main/dlmalloc.c:4924

    #1 0x000041802a151e01 in Heap_Free (heap=0x43238f2c9000, mem=<optimized out>) at bora/vmkernel/main/heap.c:4217

    #2 0x000041802a3b0fa1 in BitVector_Free (heap=<optimized out>, bv=<optimized out>) at bora/vmkernel/util/bitvector.c:94

    #3 0x000041802b9f4f3f in HbrBitmapFree (bitmap=<optimized out>) at bora/modules/vmkernel/hbr_filter/hbr_bitmap.c:91

    This issue is resolved in this release.

  • PR 3162499: During service insertion for NSX-managed workload VMs, some VMs might become intermittently unresponsive, and the virtual device might reset

    During service insertion for NSX-managed workload VMs, a packet list might be reinjected from the input chain of one switchport to another switchport. In such cases, the source switchport does not correspond to the actual portID of the input chain and the virtual device does not get completion status for the transmitted frame. As a result, when you run a service insertion task, some VMs might become intermittently unresponsive due to network connectivity issues.

    This issue is resolved in this release.

  • PR 3219441: You cannot send keys to a guest OS by using the browser console in the ESXi Host Client

    In the ESXi Host Client, when you open the browser console of a virtual machine and select Guest OS > Send keys, when you select a key input, for example Ctrl-Alt-Delete, the selection is not sent to the guest OS.

    This issue is resolved in this release.

  • PR 3209853: Windows VMs might fail with a blue diagnostic screen with 0x5c signature due to platform delays

    Windows VMs might fail with a blue diagnostic screen with the following signature due to a combination of factors involving the Windows timer code and platform delays, such as storage or network delays:

    HAL_INITIALIZATION_FAILED (0x5c) (This indicates that the HAL initialization failed.)

    Arguments:

    Arg1: 0000000000000115, HAL_TIMER_INITIALIZATION_FAILURE

    Arg2: fffff7ea800153e0, Timer address

    Arg3: 00000000000014da, Delta in QPC units (delta to program the timer with)

    Arg4: ffffffffc0000001, invalid status code (STATUS_UNSUCCESSFUL)

    Checks done by HPET timers in Windows might fail due to such platform delays and cause the issue.

    This issue is resolved in this release.

  • PR 3178109: vSphere Lifecycle Manager compliance check fails with "An unknown error occurred while performing the operation"

    The base ESXi image contains driver components that can be overridden by higher version async driver components packaged by an OEM add-on. If such a component is manually removed on a host, the compliance check of the vSphere Lifecycle Manager might fail unexpectedly. In the vSphere Client, you see errors such as Host status is unknown and An unknown error occurred while performing the operation.

    This issue is resolved in this release.

  • PR 3180634: Performance of certain nested virtual machines on AMD CPUs might degrade

    Nested virtual machines on AMD CPUs with operating systems such as Windows with virtualization-based security (VBS) might experience performance degradation, timeouts, or unresponsiveness due to an issue with the virtualization of AMD's Rapid Virtualization Indexing (RVI), also known as Nested Page Tables (NPT).

    This issue is resolved in this release.

  • PR 3223539: Some vmnics might not be visible after an ESXi host reboot due to a faulty NVMe device

    If an NVMe device fails during the attach phase, the NVMe driver disables the NVMe controller and resets the hardware queue resources. When you try to re-attach the device, a mismatch between the hardware and driver queue pointers might cause an IOMMU fault on the NVMe device, which leads to memory corruption and failure of the vmkdevmgr service. As a result, you do not see some vmnics in your network.

    This issue is resolved in this release.

  • PR 3218835: When you disable or suspend vSphere Fault Tolerance, virtual machines might become unresponsive for about 10 seconds

    When you disable or suspend vSphere FT, some virtual machines might take about 10 seconds to release vSphere FT resources. As a result, such VMs become temporarily unresponsive to network requests or console operations.

    This issue is resolved in this release.

  • PR 3181901: ESXi host becomes unresponsive and you cannot put the host in Maintenance Mode or migrate VMs from that host

    Asynchronous reads of metadata on a VMFS volume attached to an ESXi host might cause a race condition with other threads on the host and make the host unresponsive. As a result, you cannot put the host in Maintenance Mode or migrate VMs from that host.

    This issue is resolved in this release.

  • PR 3156666: Packets with length less than 60 bytes might drop

    An ESXi host might add invalid bytes, other than zero, to packets shorter than 60 bytes. As a result, packets with such invalid bytes are dropped.

    This issue is resolved in this release.

  • PR 3180283: When you migrate a VM with recently hot-added memory, an ESXi host might repeatedly fail with a purple diagnostic screen

    Due to a race condition while the memory hotplug module recomputes the NUMA memory layout of a VM on a destination host after migration, an ESXi host might repeatedly fail with a purple diagnostic screen. In the backtrace, you see errors such as:

    0x452900262cf0:[0x4200138fee8b]PanicvPanicInt@vmkernel#nover+0x327 stack: 0x452900262dc8, 0x4302f6c06508, 0x4200138fee8b, 0x420013df1300, 0x452900262cf0
    0x452900262dc0:[0x4200138ff43d]Panic_WithBacktrace@vmkernel#nover+0x56 stack: 0x452900262e30, 0x452900262de0, 0x452900262e40, 0x452900262df0, 0x3e7514
    0x452900262e30:[0x4200138fbb90]NMI_Interrupt@vmkernel#nover+0x561 stack: 0x0, 0xf48, 0x0, 0x0, 0x0
    0x452900262f00:[0x420013953392]IDTNMIWork@vmkernel#nover+0x7f stack: 0x420049800000, 0x4200139546dd, 0x0, 0x452900262fd0, 0x0
    0x452900262f20:[0x4200139546dc]Int2_NMI@vmkernel#nover+0x19 stack: 0x0, 0x42001394e068, 0xf50, 0xf50, 0x0
    0x452900262f40:[0x42001394e067]gate_entry@vmkernel#nover+0x68 stack: 0x0, 0x43207bc02088, 0xd, 0x0, 0x43207bc02088
    0x45397b61bd30:[0x420013be7514]NUMASched_PageNum2PhysicalDomain@vmkernel#nover+0x58 stack: 0x1, 0x420013be34c3, 0x45396f79f000, 0x1, 0x100005cf757
    0x45397b61bd50:[0x420013be34c2]NUMASched_UpdateAllocStats@vmkernel#nover+0x4b stack: 0x100005cf757, 0x0, 0x0, 0x4200139b36d9, 0x0
    0x45397b61bd80:[0x4200139b36d8]VmMem_NodeStatsSub@vmkernel#nover+0x59 stack: 0x39, 0x45396f79f000, 0xbce0dbf, 0x100005cf757, 0x0
    0x45397b61bdc0:[0x4200139b4372]VmMem_FreePageNoBackmap@vmkernel#nover+0x8b stack: 0x465ec0001be0, 0xa, 0x465ec18748b0, 0x420014e7685f, 0x465ec14437d0

    This issue is resolved in this release.

  • PR 3100552: ESXi booting times out after five minutes and the host automatically reboots

    During initial ESXi booting, when you see the Loading VMware ESXi progress bar, if the bootloader takes more than a total of five minutes to load all the boot modules before moving on to the next phase of booting, the host firmware times out the boot process and resets the system.

    This issue is resolved in this release. With ESXi 7.0 Update 3o, the ESXi bootloader resets the five-minute timeout after each boot module loads.

  • PR 3184368: The durable name of a SCSI LUN might not be set

    The durable name property for a SCSI-3 compliant device comes from pages 80h and 83h of the Vital Product Data (VPD) as defined by the T10 and SMI standards. To populate the durable name, ESXi first sends an inquiry command to get a list of VPD pages supported by the device. Then ESXi issues commands to get data for all supported VPD pages. Due to an issue with the target array, the device might fail a command to get VPD page data for a page in the list with a not supported error. As a result, ESXi cannot populate the durable name property for the device.

    This issue is resolved in this release. The fix ignores errors on commands to get VPD page data, except for pages 80h and 83h, if that data is not required for the generation of the durable name.

  • PR 3184425: Port scan of a VXLAN Tunnel End Point (VTEP) on an ESXi host might result in an intermittent connectivity loss

    Port scan of a VTEP on an ESXi host might result in an intermittent connectivity loss, but only under the following conditions:        

    1. Your environment has many VTEPs

    2. All VTEPs are in the same IP subnet     

    3. The upstream switch is Cisco ACI

    This issue is resolved in this release.

  • PR 3156627: Changing the mode of the virtual disk on a running virtual machine might cause the VM to fail

    If you use the VMware Host Client to edit the disk mode of a running virtual machine, for example from Independent - Nonpersistent to Dependent or Independent - Persistent, the operation fails and might cause the VM to fail. In the vmware.log, you see errors such as:

    [msg.disk.notConfigured2] Failed to configure disk 'scsi0:4'. The virtual machine cannot be powered on with an unconfigured disk.

    [msg.checkpoint.continuesync.error] An operation required the virtual machine to quiesce and the virtual machine was unable to continue running.

    This issue is resolved in this release. The fix blocks changing the mode of an Independent - Nonpersistent disk on a running virtual machine by using the VMware Host Client. The vSphere Client already blocks such operations.

  • PR 3164477: In VMware Skyline Health Diagnostics, you see multiple warnings for vSAN memory pools

    The free heap memory estimation logic for some vSAN memory pools might consider more memory heap than actual and trigger warnings for insufficient memory. As a result, you see Memory pools (heap) health warnings for many hosts under the Physical disks section in Skyline Health.

    This issue is resolved in this release.

  • PR 3168950: ESXi hosts fail with a purple diagnostic screen while installing VMware NSX-T Data Center due to insufficient TCP/IP heap space

    VMware NSX-T Data Center has multiple net stacks and the default TCP/IP heap space might not be sufficient for installation. As a result, ESXi hosts might fail with a purple diagnostic screen.

    This issue is resolved in this release. The fix increases the default TcpipHeapSize setting to 8 MB from 0 MB and the maximum size to 128 MB from 32 MB. In the vSphere Client, you can change the TcpipHeapSize value by navigating to Hosts and Clusters > Configure > System > Advanced System Settings > TcpipHeapSize. In vCenter systems with VMware NSX-T Data Center, set the value to 128 MB.
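
    For reference, a hedged ESXCLI alternative to the vSphere Client navigation above; changes to TCP/IP heap settings typically take effect only after a host reboot:

    # Check the current value of the TCP/IP heap size setting
    esxcli system settings advanced list -o /Net/TcpipHeapSize
    # In vCenter systems with VMware NSX-T Data Center, set the value to 128 MB as described above
    esxcli system settings advanced set -o /Net/TcpipHeapSize -i 128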

  • PR 3096974: Rare race condition in I/O filters might cause an ESXi host to fail with a purple diagnostics screen

    ESXi hosts in which virtual machines use I/O filters might randomly fail with a purple diagnostics screen and an error such as #DF Exception 8 IP 0x42002c54b19f SP 0x453a9e8c2000 due to a rare race condition.

    This issue is resolved in this release.

  • PR 3112194: Spherelet might fail to start on an ESXi host due to the execInstalledOnly setting

    The execInstalledOnly setting is a run-time parameter that limits the execution of binaries such as applications and vmkernel modules to improve security and guard against breaches and compromises. Some execInstalledOnly security checks might interfere with Spherelet, the ESXi UserWorld agent that acts as an extension to the Kubernetes Control Plane, and prevent it from starting, even when all files are installed.

    This issue is resolved in this release.
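
    For reference, you can inspect the run-time state of this setting with ESXCLI:

    # Check whether execInstalledOnly enforcement is currently enabled on the host
    esxcli system settings kernel list -o execinstalledonly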

  • PR 3115870: VMware VIB installation might fail during concurrent vendor package installations

    When you install update packages from several vendors, such as JetStream Software, Microsoft, and VMware, multiple clients call the same PatchManager APIs and might lead to a race condition. As a result, VMware installation packages (VIBs) might fail to install. In the logs, you see an error such as vim.fault.PlatformConfigFault, which is a catch-all fault indicating that some error has occurred regarding the configuration of the ESXi host. In the vSphere Client, you see a message such as An error occurred during host configuration.

    This issue is resolved in this release. The fix is to return a TaskInProgress warning instead of PlatformConfigFault, so that you are aware of the actual issue and retry the installation.

  • PR 3164439: Certain applications might take too many ESXi file handles and cause performance aggravation

    In very rare cases, applications such as NVIDIA virtual GPU (vGPU) might consume so many file handles that ESXi fails to process other services or VMs. As a result, you might see the GPU on some nodes disappear or report zero GPU memory, or you might see performance degradation.

    This issue is resolved in this release. The fix reduces the number of file handles a vGPU VM can consume.

  • PR 3162963: If parallel volume expand and volume refresh operations on the same VMFS volume run on two ESXi hosts in the same cluster, the VMFS volume might go offline

    While a VMFS volume expand operation is in progress on an ESXi host in a vCenter cluster, if on another host a user or vCenter initiates a refresh of the same VMFS volume capacity, such a volume might go offline. The issue occurs due to a possible mismatch in the device size, which is stamped on the disk in the volume metadata during a device rescan, and the device size value in the Pluggable Storage Architecture (PSA) layer on the host, which might not be updated if the device rescan is not complete.

    This issue is resolved in this release. The fix improves the resiliency of the volume manager code to force a consecutive refresh of the device attributes and comparison of the device sizes again if vCenter reports a mismatch in the device size.

  • PR 3161690: CPU usage of ESXi hosts might intermittently increase in environments that use a Vigor router

    Due to a rare race condition, when a VM power off command conflicts with a callback function by a Vigor router, you might see increased CPU usage on ESXi hosts, for example, after an upgrade.

    This issue is resolved in this release.

  • PR 3165374: ESXi hosts might become unresponsive and fail with a purple diagnostic screen during TCP TIME_WAIT

    The TCP slow timer might starve TCP input processing while it walks the list of connections in TIME_WAIT to close expired connections, due to contention on the global TCP pcbinfo lock. As a result, the VMkernel might fail with a purple diagnostic screen and the error Spin count exceeded - possible deadlock while cleaning up TCP TIME_WAIT sockets. The backtrace points to tcpip functions such as tcp_slowtimo() or tcp_twstart().

    This issue is resolved in this release. The fix adds a new global lock to protect the list of connections in TIME_WAIT and to acquire the TCP pcbinfo lock only when closing an expired connection.

  • PR 3166818: If any of the mount points that use the same NFS server IP returns an error, VAAI-NAS might mark all mount points as not supported

    When you use a VAAI-NAS vendor plug-in, multiple file systems on an ESXi host use the same NFS server IP to create a datastore. For certain NAS providers, if one of the mounts returns a VIX_E_OBJECT_NOT_FOUND (25) error during the startSession call, hardware acceleration for all file systems with the same NFS server IP might become unsupported. This issue occurs when an ESXi host mounts a directory on an NFS server, but that directory is not available at the time of the startSession call, because, for example, it has been moved from that NFS server.

    This issue is resolved in this release. The fix makes sure that VAAI-NAS marks as not supported only mounts that report an error.

  • PR 3112692: The smartpqi driver cannot get the location of ThinkSystem PM1645a SAS SSD devices

    In rare cases, the smartpqi plugin in the LSU service might get a longer serial number string from VPD page 0x80 than the actual serial number, and the device lookup in the cache fails. As a result, when you run the command esxcli storage core device physical get -d <device_id> to get a device location, you see an error such as Unable to get location for device. Device is not found..

    This issue is resolved in this release. The fix makes sure that the smartpqi plugin in the LSU service always checks the correct serial number length and finds the device location.

  • PR 3100030: An ESXi host fails with a purple diagnostic screen indicating VMFS heap corruption

    A race condition in the host cache of a VMFS datastore might cause heap corruption and the ESXi host fails with a purple diagnostic screen and a message such as:

    PF Exception 14 in world xxxxx:res3HelperQu IP 0xxxxxxe2 addr 0xXXXXXXXXX

    This issue is resolved in this release.

  • PR 3121216: If the Internet Control Message Protocol (ICMP) is not active, ESXi host reboot might take long after an upgrade

    If ICMP is not active on the NFS servers in your environment, after upgrading your system, the reboot of ESXi hosts might take an hour to complete, because restore operations for NFS datastores fail. NFS uses the vmkping utility to identify reachable IPs of the NFS servers before executing a mount operation, and when ICMP is not active, mount operations fail.

    This issue is resolved in this release. To remove dependency on the ICMP protocol to find reachable IPs, the fix adds socket APIs to ensure that IPs on a given NFS server are available.

  • PR 3098760: ESXi hosts randomly disconnect from the Active Directory domain or vCenter due to Likewise memory exhaustion

    Memory leaks in Active Directory operations and related libraries, or when smart card authentication is enabled on an ESXi host, might lead to Likewise memory exhaustion.

    This issue is partially resolved in this release. For more information, see VMware knowledge base article 78968.

  • PR 3118402: The NTP monitoring function returns intermittent time sync failure in the time services test report even when the ESXi host system clock is actually in sync

    The time services monitoring infrastructure periodically queries the NTP daemon, ntpd, for time sync status on ESXi hosts. Intermittent query failures might occur due to timeouts in the socket call to read such status information. As a result, you might see time sync failure alerts in the NTP test reports.

    This issue is resolved in this release. The fix makes sure that NTP monitoring queries do not fail.

  • PR 3159168: You see multiple "End path evaluation for device" messages in the vmkernel.log

    A periodic device probe that evaluates the state of all the paths to a device might create multiple unnecessary End path evaluation for device messages in the vmkernel.log file on a release build.

    This issue is resolved in this release. The fix restricts End path evaluation for device messages to only non-release builds, such as beta. You must enable the VERBOSE log level to see such logs.

  • PR 3101512: You see a warning for lost uplink redundancy even when the uplink is active

    If you quickly remove the network cable from a physical NIC and reinsert it, the ESXi host agent, hostd, triggers an event to restore uplink redundancy, but in some cases, although the uplink is restored within a second, you still see a misleading alarm for lost uplink redundancy.

    This issue is resolved in this release.

  • PR 3161473: Operations with stateless ESXi hosts might not pick the expected remote disk for system cache, which causes remediation or compliance issues

    Operations with stateless ESXi hosts, such as storage migration, might not pick the expected remote disk for system cache. For example, you want to keep the new boot LUN as LUN 0, but vSphere Auto Deploy picks LUN 1.

    This issue is resolved in this release. The fix provides a consistent way to sort the remote disks and always pick the disk with the lowest LUN ID. To make sure you enable the fix, follow these steps:

    1. On the Edit host profile page of the Auto Deploy wizard, select Advanced Configuration Settings > System Image Cache Configuration > System Image Cache Configuration.

    2. In the System Image Cache Profile Settings drop-down menu, select Enable stateless caching on the host.

    3. Edit Arguments for first disk by replacing remote with sortedremote and/or remoteesx with sortedremoteesx.
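
    For example, an Arguments for first disk value changes as follows; the exact token list depends on your host profile, and the tokens shown here are illustrative:

    Before: remoteesx,remote,local
    After:  sortedremoteesx,sortedremote,local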

ESXi-7.0U3o-22348816-no-tools

Profile Name

ESXi-7.0U3o-22348816-no-tools

Build

For build information, see Patches Contained in This Release.

Vendor

VMware, Inc.

Release Date

September 28, 2023

Acceptance Level

PartnerSupported

Affected Hardware

N/A

Affected Software

N/A

Affected VIBs

  • VMware_bootbank_gc_7.0.3-0.105.22348816

  • VMware_bootbank_native-misc-drivers_7.0.3-0.105.22348816

  • VMware_bootbank_vsanhealth_7.0.3-0.105.22348816

  • VMware_bootbank_crx_7.0.3-0.105.22348816

  • VMware_bootbank_esx-xserver_7.0.3-0.105.22348816

  • VMware_bootbank_vdfs_7.0.3-0.105.22348816

  • VMware_bootbank_cpu-microcode_7.0.3-0.105.22348816

  • VMware_bootbank_esx-base_7.0.3-0.105.22348816

  • VMware_bootbank_vsan_7.0.3-0.105.22348816

  • VMware_bootbank_esxio-combiner_7.0.3-0.105.22348816

  • VMware_bootbank_esx-ui_2.11.2-21988676

  • VMware_bootbank_bmcal_7.0.3-0.105.22348816

  • VMware_bootbank_esx-dvfilter-generic-fastpath_7.0.3-0.105.22348816

  • VMware_bootbank_trx_7.0.3-0.105.22348816

  • VMware_bootbank_loadesx_7.0.3-0.105.22348816

  • VMware_bootbank_esx-update_7.0.3-0.105.22348816

  • VMW_bootbank_lsi-msgpt3_17.00.12.00-2vmw.703.0.105.22348816

  • VMware_bootbank_lsuv2-smartpqiv2-plugin_1.0.0-9vmw.703.0.105.22348816

  • VMW_bootbank_nvme-pcie_1.2.3.16-3vmw.703.0.105.22348816

  • VMW_bootbank_vmw-ahci_2.0.11-2vmw.703.0.105.22348816

  • VMW_bootbank_iavmd_2.7.0.1157-3vmw.703.0.105.22348816

PRs Fixed

3244098, 3216958, 3245763, 3242021, 3246132, 3253205, 3256804, 3239170, 3251981, 3215370, 3117615, 3248478, 3252676, 3235496, 3238026, 3236064, 3224739, 3224306, 3223755, 3216958, 3233958, 3185125, 3219264, 3221620, 3218835, 3185560, 3221099, 3211625, 3221860, 3228586, 3224604, 3223539, 3222601, 3217258, 3213041, 3216522, 3221591, 3216389, 3220004, 3217633, 3216548, 3216449, 3156666, 3181574, 3180746, 3155476, 3211807, 3154090, 3184008, 3183519, 3183519, 3209853, 3100552, 3187846, 3113263, 3176350, 3185827, 3095511, 3184425, 3186351, 3186367, 3154090, 3158524, 3181601, 3180391, 3180283, 3184608, 3181774, 3155476, 3163361, 3182978, 3184368, 3160456, 3164575, 3181901, 3184871, 3166818, 3157195, 3164355, 3163271, 3112194, 3161690, 3261925, 3178589, 3178721, 3162963, 3168950, 3159168, 3158491, 3158866, 3096769, 3165374, 3122037, 3098760, 3164439, 3161473, 3162790, 3100030, 3096974, 3161473, 3165651, 3083007, 3118240, 3151076, 3118402, 3160480, 3156627, 3158531, 3162499, 3158512, 3053430, 3153395, 3117615, 3099192, 3158508, 3115870, 3119834, 3158220, 3110401, 2625439, 3099357, 3152811, 3108979, 3120165, 3245953, 3178109, 3211395, 3164477, 3119959, 3164462, 3036883, 3112692, 3218578, 3224847

Related CVE numbers

N/A

This patch updates the following issues:

  • PR 3187846: Setting the screen resolution of a powered off, encrypted virtual machine does not always work when manually editing the VMX file

    If you manually specify the screen resolution of a powered off, encrypted virtual machine by editing the VMX file, the change might not take effect.

    This issue is resolved in this release.

  • PR 3253205: Static IPv6 gateway disappears in 18 hours

    If you configure a static IPv6 gateway address in your vSphere environment, such a gateway might disappear within up to 18 hours due to a timeout.

    This issue is resolved in this release. The fix removes the existing timeout for static gateways.
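
    For reference, a static IPv6 default gateway is typically configured from the ESXi shell with commands such as the following minimal sketch, where the gateway address is a placeholder:

    # Add a static IPv6 default route through a placeholder gateway address
    esxcli network ip route ipv6 add --network ::/0 --gateway fd00:10:20::1
    # Verify that the route is still present
    esxcli network ip route ipv6 list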

  • PR 3272532: You see speed as Half Duplex after changing the speed from Auto to Full on a VMNIC

    In some cases, length checks during the parsing of duplex TLVs in an environment with active Cisco Discovery Protocol (CDP) might fail. As a result, when you change the speed from Auto to Full on a physical NIC, peer devices do not show the real duplex value in the TLV under the neighbor information, but the default Half Duplex value instead.

    This issue is resolved in this release.

  • PR 3251981: An NFSv4.1 file might appear to be empty even though it contains data

    When you open an existing NFSv4.1 file with write-only access and the NFS client then opens the same file again with read-only access, read operations from the client might return no data even though the file is not empty.

    This issue is resolved in this release.

  • PR 3116601: When you override the default gateway for the vSAN VMkernel adapter on an ESXi host, the vSphere HA agent on the host displays as inaccessible

    In some cases, when you override the default gateway for the vSAN VMkernel adapter on an ESXi host, the Fault Domain Manager (FDM), which is the agent that vSphere HA deploys on ESX hosts, might stop receiving Internet Control Message Protocol (ICMP) pings. As a result, FDM might issue false cluster alarms that the vSphere HA agent on the host cannot reach some management network addresses of other hosts.

    This issue is resolved in this release.

  • PR 3251801: vSphere vMotion operation from a 6.7.x ESXi host with an Intel Ice Lake CPU fails with msg.checkpoint.cpucheck.fail

    vSphere vMotion operations by using either the vSphere Client or VMware Hybrid Cloud Extension (HCX) from an Intel Ice Lake CPU host running ESXi 6.7.x fail with an error such as msg.checkpoint.cpucheck.fail. In the vSphere Client, you see a message that cpuid.PSFD is not supported on the target host. In HCX, you see a report such as A general system error occurred: vMotion failed:.

    This issue is resolved in this release.

  • PR 3036883: A SAS path might be lost during upgrade of lsi_msgpt3 drivers

    Upgrading lsi_msgpt3 drivers of version 17.00.12.00-1 and earlier might cause storage connections and VMs on the ESXi host to become unresponsive due to a lost SAS path. In the VMkernel logs, you see error messages such as:

    2022-06-19T05:25:51.949Z cpu26:2097947)WARNING: NMP: nmpDeviceAttemptFailover:640: Retry world failover device "naa.6000d310045e2c000000000000000043" - issuing command 0x459acc1f5dc0

    2022-06-19T05:25:51.949Z cpu26:2097947)WARNING: vmw_psp_rr: psp_rrSelectPath:2177: Could not select path for device "naa.6000d310045e2c000000000000000043".

    2022-06-19T05:25:51.949Z cpu26:2097947)WARNING: NMP: nmpDeviceAttemptFailover:715: Retry world failover device "naa.6000d310045e2c000000000000000043" - failed to issue command due to Not found (APD), try again...

    This issue is resolved in this release. If you already face the issue, rescan your storage after the failback on the storage controller to recover the paths and storage connections.
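
    If you need to trigger such a rescan manually, a minimal sketch from the ESXi shell looks like this:

    # Rescan all storage adapters to rediscover the lost SAS paths
    esxcli storage core adapter rescan --all
    # Confirm that the expected paths are active again
    esxcli storage core path list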

  • PR 3224739: You see alarms for dropped syslog messages

    If the network throughput of your vSphere system does not align with the speed at which log messages are generated, some of the log messages sent to the remote syslog server might drop. As a result, in the vSphere Client you see host errors of type Triggered Alarm and in the logs, you see warnings such as ALERT: vmsyslog logger xxxx:514 lost yyyy log messages.

    This issue is resolved in this release. The fix improves the network logging performance of the logging service to reduce the number of such lost log messages.

  • PR 3247027: If any failure occurs during a vSphere vMotion migration of NVIDIA virtual GPU (vGPU) VM, the destination ESXi host might fail with a purple diagnostic screen

    In very rare cases, if any type of failure occurs during a vSphere vMotion migration of a vGPU VM, the vMotion operation is marked as a failure while a specific internal operation is in progress. As a result, the destination ESXi host might fail with a purple diagnostic screen.

    This issue is resolved in this release.

  • PR 3095511: The SFCB service intermittently fails and generates "sfcb-vmware_bas-zdump" files on multiple ESXi hosts

    In very rare cases, when the SFCB service tries to access an uninitialized variable, the service might fail with a dump file such as sfcb-vmware_bas-zdump.000. The issue occurs because accessing an uninitialized variable can cause the SFCB process to keep requesting memory until it exceeds the heap allocation.

    This issue is resolved in this release.

  • PR 3218578: You see wrong device descriptions of some Lenovo ThinkSystem boot RAID adapters

    In a vSphere client interface, such as the vSphere Client, or when you run commands such as lspci and esxcfg-scsidevs -a from the ESXi shell, you might see the device names of some Lenovo-branded boot SATA/NVMe adapters displayed as the generic names of the controller chips they use. For example, Lenovo ThinkSystem M.2 with Mirroring Enablement Kit is displayed as 88SE9230 PCIe SATA 6GB/s controller.

    This issue is resolved in this release.

  • PR 3256804: File server loses connectivity after failover when running vSAN File Service on NSX overlay network

    If vSAN File Service is running on an NSX overlay network, the file server might lose connectivity after it fails over from one agent VM to another agent VM. The file server can fail over when you reconfigure the file service domain from one without Active Directory (AD) to one with AD, or if vSAN detects unhealthy behavior in the file server, file share, or AD server. If file server connectivity is lost after a failover, File Service clients cannot access the file share. The following health warning might be reported:

    One or more DNS servers is not reachable.

    This issue is resolved in this release.

  • PR 3217633: vSAN management service on the orchestration host malfunctions during vCenter restart

    This issue can occur on vSAN clusters where vCenter is deployed. If multiple cluster shutdown operations are performed, the /etc/vmware/vsan/vsanperf.conf file on the orchestration host can contain two versions of the following management VM option: vc_vm_moId and vc_vm_moid. These configuration options conflict with each other and can cause the vSAN management service to malfunction.

    This issue is resolved in this release.

  • PR 3224306: When you change the ESXi syslog setting syslog.global.logDir while syslog.global.logDirUnique is active, you might see two levels of <hostname> subdirectories under the logdir path

    After upgrading an ESXi host to 7.0 Update 3c or later, if you change the syslog setting syslog.global.logDir while the syslog.global.logDirUnique setting is active, you might see two levels of <hostname> subdirectories under the logdir path on the host. The syslog.global.logDirUnique setting is useful when the same NFS directory is configured as Syslog.global.LogDir by multiple ESXi hosts, because it creates a subdirectory with the name of the ESXi host under the logdir.

    This issue is resolved in this release.
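
    As an illustration of how the two settings interact, the following sketch sets a shared log directory together with the unique-subdirectory flag from the ESXi shell; the datastore path is a placeholder:

    # Send logs to a shared datastore and create a per-host subdirectory under it
    esxcli system syslog config set --logdir=/vmfs/volumes/shared-logs/esxi --logdir-unique=true
    # Apply the new configuration
    esxcli system syslog reload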

  • PR 3223755: The ESXi syslog daemon fails to resume transmitting logs to the configured SSL remote syslog server after a drop in network connectivity is restored

    If a remote SSL syslog server temporarily loses connectivity, the ESXi syslog daemon might fail to resume transmitting logs after connectivity is restored, and you must restart the service to re-activate transmissions. The issue occurs due to unhandled exceptions while the ESXi syslog daemon tries to restore the connection to the SSL remote syslog server.

    This issue is resolved in this release. The fix adds a generic catch for all unhandled exceptions.
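
    For context, transmissions to an SSL remote syslog server are typically configured, and can be manually re-established, with commands such as the following sketch, where the host name and port are placeholders:

    # Configure a remote syslog target over SSL
    esxcli system syslog config set --loghost='ssl://syslog.example.com:1514'
    # Reload the syslog daemon to re-establish the connection
    esxcli system syslog reload
    # Make sure outbound syslog traffic is allowed through the ESXi firewall
    esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true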

  • PR 3185536: vSAN cluster remediation error after vCenter upgrade to 7.0 Update 3

    When you upgrade vCenter from version 6.5.x to 7.0 Update 3, and reconfigure the vSAN cluster, a vSAN cluster remediation is triggered, but in some cases fails.

    This issue is resolved in this release.

  • PR 3110401: The Explicit Congestion Notification (ECN) setting is not persistent on ESXi host reboot

    ECN, specified in RFC 3168, allows a TCP sender to reduce the transmission rate to avoid packet drops and is activated by default. However, if you change the default setting, or manually deactivate ECN, such changes to the ESXi configuration might not persist after a reboot of the host.

    This issue is resolved in this release.

  • PR 3153395: Upon refresh, some dynamic firewall rules might not persist and lead to vSAN iSCSI Target service failure

    If you frequently run the command esxcli network firewall refresh, a race condition might cause some dynamic firewall rules to be removed upon refresh or load. As a result, some services, such as the vSAN iSCSI Target daemon, which provides vSAN storage over the iSCSI protocol, might fail.

    This issue is resolved in this release.
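
    If you want to check whether dynamic rules survive a refresh, a minimal check from the ESXi shell is:

    # Reload the firewall configuration
    esxcli network firewall refresh
    # List the rulesets and confirm that the expected entries are still enabled
    esxcli network firewall ruleset list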

  • PR 3096769: You see high latency for VM I/O operations in a vSAN cluster with Unmap activated

    This issue affects vSAN clusters with Unmap activated. A problem in Unmap handling in LSOM creates log congestion. The log congestion can cause high VM I/O latency.

    This issue is resolved in this release.

  • PR 3163361: The Audit Record Storage Capacity parameter does not persist across ESXi host reboots

    After an ESXi host reboot, you see the Audit Record Storage Capacity parameter restored to the default value of 4 MiB, regardless of any previous changes.

    This issue is resolved in this release.

  • PR 3162496: If the Internet Control Message Protocol (ICMP) is not active, ESXi host reboot might take a long time after upgrading to vSphere 8.0 or later

    If ICMP is not active on the NFS servers in your environment, after you upgrade your system to vSphere 8.0 or later, ESXi host reboots might take an hour to complete, because restore operations for NFS datastores fail. NFS uses the vmkping utility to identify reachable IPs of the NFS servers before executing a mount operation, and when ICMP is not active, mount operations fail.

    This issue is resolved in this release. To remove dependency on the ICMP protocol to find reachable IPs, the fix adds socket APIs to ensure that IPs on a given NFS server are available.
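
    For reference, the reachability check that NFS relies on is equivalent to a manual vmkping from the VMkernel interface used for NFS traffic; the interface name and server IP below are placeholders:

    # Ping the NFS server from the VMkernel interface used for NFS traffic
    vmkping -I vmk1 192.0.2.50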

  • PR 3220004: VBS-enabled Windows VMs might fail with a blue diagnostic screen when an ESXi host is under memory pressure

    Windows VMs with virtualization-based security (VBS) enabled might intermittently fail with a blue diagnostic screen during memory-heavy operations such as deleting snapshots or consolidating disks. The BSOD has the following signature:

    DRIVER_VERIFIER_DMA_VIOLATION (e6)

    An illegal DMA operation was attempted by a driver being verified.

    Arguments:

    Arg1: 0000000000000026, IOMMU detected DMA violation.

    Arg2: 0000000000000000, Device Object of faulting device.

    Arg3: xxxxxxxxxxxxxxxx, Faulting information (usually faulting physical address).

    Arg4: 0000000000000008, Fault type (hardware specific).

    This issue is resolved in this release.

  • PR 3162905: ESXi hosts might intermittently disconnect from vCenter in case of memory outage

    In rare cases, if vCenter fails to allocate or transfer active bitmaps because of an insufficient-memory alert for any reason, the hostd service might repeatedly fail and you cannot reconnect the host to vCenter.

    This issue is resolved in this release. The fix makes sure that in such cases you see an error rather than the hostd service failure.

  • PR 3108979: Staging ESXi patches and ESXi upgrade tasks might become indefinitely unresponsive due to a log buffer limit

    If a large string or log exceeds the Python log buffer limit of 16K, the logger becomes unresponsive. As a result, when you stage ESXi patches on a host by using the vSphere Client, the process responsible for completing the staging task on ESXi might become indefinitely unresponsive. The task times out and reports an error. The issue also occurs when you remediate ESXi hosts by using a patch baseline.

    This issue is resolved in this release. The fix breaks the large input string into smaller chunks for logging.

  • PR 3219264: vSAN precheck for maintenance mode or disk decommission doesn't list objects that might lose accessibility

    This issue affects objects with resyncing components, and some components reside on a device to be removed or placed into maintenance mode. When you run a precheck with the No-Action option, the precheck does not evaluate the object correctly to report it in the inaccessibleObjects list.

    This issue is resolved in this release. Precheck includes all affected objects in the inaccessibleObjects list.

  • PR 3186545: Vulnerability scans might report the HTTP TRACE method on vCenter ports 9084 and 9087 as vulnerable

    Some third-party tools for vulnerability scans might report the HTTP TRACE method on vCenter ports 9084 and 9087 as vulnerable.

    This issue is resolved in this release.

  • PR 3221591: ESXi NVMe/TCP initiator fails to recover paths after target failure recovery

    When an NVMe/TCP target recovers from a failure, ESXi cannot recover the path.

    This issue is resolved in this release.
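
    After the target recovers, you can verify the path state from the ESXi shell, for example:

    # List NVMe controllers and their connectivity state
    esxcli nvme controller list
    # List storage paths and confirm that the recovered paths are active
    esxcli storage core path list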

  • PR 3186351: You see an override flag of NIC teaming, security, or traffic shaping policy on a portgroup unexpectedly enabled after an ESXi host reboot

    In some cases, a networking configuration might not persist in the ESXi ConfigStore, and you see an override flag of NIC teaming, security, or traffic shaping policy on a portgroup unexpectedly enabled after an ESXi host reboot.

    This issue is resolved in this release.
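
    To check whether a policy override is set on a standard portgroup after a reboot, a minimal sketch from the ESXi shell, with a placeholder portgroup name, is:

    # Show the failover policy of a portgroup, including whether it overrides the vSwitch defaults
    esxcli network vswitch standard portgroup policy failover get -p "VM Network"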

  • PR 3162790: The sfcb daemon might fail with a core dump while installing a third-party CIM provider

    The sfcb daemon might try to access already freed memory when registering a CIM provider and fail with a core dump while installing a third-party CIM provider, such as Dell OpenManage Server Administrator.

    This issue is resolved in this release.

  • PR 3157195: ESX hosts might fail with a purple diagnostic screen and an error NMI IPI: Panic requested by another PCPU

    The resource pool cache is a VMFS specific volume level cache that stores the resource clusters corresponding to the VMFS volume. While searching for priority clusters, the cache flusher workflow iterates through a large list of cached resource clusters, which can cause lockup of the physical CPUs. As a result, ESX hosts might fail with a purple diagnostic screen. In the logDump file, you see an error such as:

    ^[[7m2022-10-22T07:56:47.322Z cpu13:2101160)WARNING: Heartbeat: 827: PCPU 0 didn't have a heartbeat for 7 seconds, timeout is 14, 1 IPIs sent; *may* be locked up.^[[0m^

    [[31;1m2022-10-22T07:56:47.322Z cpu0:2110633)ALERT: NMI: 710: NMI IPI: RIPOFF(base):RBP:CS

    This issue is resolved in this release.

  • PR 3230493: If a delta sync runs while the target vSphere Replication server is not active, Windows virtual machines might become unresponsive

    If the target vSphere Replication server is not active during a delta sync, the synchronization process cannot complete and Windows virtual machines become unresponsive.

    This issue is resolved in this release. The fix makes sure that no delta sync jobs start when a vSphere Replication server is not active.

  • PR 3122037: ESXi ConfigStore database fills up and writes fail

    Stale data related to block devices might not be deleted in time from the ESXi ConfigStore database and cause an out of space condition. As a result, write operations to ConfigStore start to fail. In the backtrace, you see logs such as:

    2022-12-19T03:51:42.733Z cpu53:26745174)WARNING: VisorFSRam: 203: Cannot extend visorfs file /etc/vmware/configstore/current-store-1-journal because its ramdisk (configstore) is full.

    This issue is resolved in this release.
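
    To check how full the configstore ramdisk is before or after applying the fix, a minimal check from the ESXi shell is:

    # List ramdisk usage, including the configstore ramdisk mentioned in the log message
    esxcli system visorfs ramdisk list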

  • PR 3246132: You see a system clock drift on an ESXi host after synchronization with the NTP server fails

    If connectivity between the NTP server and an ESXi host is delayed for some reason, the system clock on the ESXi host might experience a drift. When you run the vsish command vsish -e get /system/ntpclock/clockData, you might see a large negative value for the adjtime field. For example:

    NTP clock data {  
    ...  adjtime() (usec):-309237290312 <<<<<<  
    ...
    }

    This issue is resolved in this release.

  • PR 3185560: vSphere vMotion operations for virtual machines with swap files on a vSphere Virtual Volumes datastore intermittently fail after a hot extend of the VM disk

    vSphere vMotion operations for virtual machines with swap files on a vSphere Virtual Volumes datastore might fail under the following circumstances:

    A VM that runs on ESXi host A is migrated to ESXi host B, then the VM disk is hot extended, and the VM is migrated back to ESXi host A. The issue occurs due to stale statistics about the swap virtual volume.

    This issue is resolved in this release.

  • PR 3236207: You cannot unmount an NFS v3 datastore from a vCenter system because the datastore displays as inaccessible

    While unmounting an NFSv3 datastore from a vCenter system by using either the vSphere Client or ESXCLI, you might see the datastore either as inaccessible, or get an Unable to Refresh error. The issue occurs when a hostd cache refresh happens before NFS deletes the mount details from ConfigStore, which creates a rare race condition.

    This issue is resolved in this release.

  • PR 3236064: Compliance or remediation scans by using vSphere Lifecycle Manager might take long on ESXi hosts with a large number of datastores

    Compliance or remediation scans by using vSphere Lifecycle Manager include an upgrade precheck that lists all volumes attached to an ESXi host, their free space, versions, and other details. If a host has many datastores attached, each with a large capacity, the precheck can take a long time. The total time is multiplied by the number of baselines attached to the cluster or host.

    This issue is resolved in this release. If you already face the issue, disconnect the attached datastores while performing the remediation or compliance scans and reattach them after the operations complete.

  • PR 3245763: During a vSphere vMotion operation, destination ESXi hosts with a Distributed Firewall (DFW) enabled might fail with a purple diagnostic screen

    If the destination host in a vSphere vMotion operation has a DFW enabled, a rare race condition might cause the host to fail with a purple diagnostic screen.

    This issue is resolved in this release.

  • PR 3158866: Virtual machines might become unresponsive during VMFS volume expansion due to a Logical Volume Manager (LVM) check

    If a VMFS volume expansion operation happens during an LVM probe to update volume attributes, such as the size of the VMFS partition backing the VMFS volume, the current size of the VMFS volume might not match the LVM metadata on the disk. As a result, LVM marks such a volume offline and virtual machines on that volume become unresponsive.

    This issue is resolved in this release. The fix improves Input Output Control (IOCTL) to speed up the update of device attributes and avoid such cases.

  • PR 3152717: vSphere Auto Deploy workflows might fail after an upgrade to vCenter Server 7.0 Update 3j or later

    Due to a networking issue in reverse proxy configuration environments, vSphere Auto Deploy workflows might fail after an upgrade to vCenter Server 7.0 Update 3j or later. In the ESXi management console and in the /var/log/vmware/rbd/rbd-cgi.log file you see an HTTP client error.

    This issue is resolved in this release. The fix makes sure Auto Deploy workflows run successfully in reverse proxy configuration environments.

  • PR 3185125: Virtual SLIT table not populated for the guest OS when using sched.nodeX.affinity

    When you specify virtual NUMA node affinity, the vSLIT table might not be populated and you see the following error in vmware.log: vSLIT: NumaGetLatencyInfo failed with status: 195887107.

    This happens when setting node affinity as follows:

    numa.slit.enable = "TRUE"

    sched.node0.affinity = 0

    sched.node1.affinity = 1

    ...sched.nodeN.affinity = N

    This issue is resolved in this release. If you already face the issue, specify VCPU affinity to physical CPUs on the associated NUMA nodes, instead of node affinity.

    For example:

    numa.slit.enable = "TRUE"

    sched.vcpu0.affinity = "0-7"

    sched.vcpu1.affinity = "0-7"

    ...sched.vcpuN.affinity = "Y-Z"

  • PR 3181774: You do not see an error when you attach a CBT enabled FCD to a VM with CBT disabled

    Attaching a First Class Disk (FCD) with Changed Block Tracking (CBT) enabled to a VM with CBT disabled does not throw a detailed error, but the operation cannot take effect. In the backtrace, you might see an error message such as InvalidState, which is generic and does not provide details on the issue.

    This issue is resolved in this release. The fix adds the error message Cannot attach a CBT enabled fcd: {path} to CBT disabled VM. to the logs of the hostd service and adds a status message for the task in the vSphere Client.

  • PR 3185827: You see trap files in a SNMP directory under /var/spool even though SNMP is not enabled

    After the hostd service starts, for example after an ESXi host reboot, it might create an SNMP directory under /var/spool, and you might see many .trp files pile up in this directory.

    This issue is resolved in this release. The fix makes sure that the directory /var/spool/snmp exists only when SNMP is enabled.
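
    To confirm whether SNMP is enabled and whether trap files are accumulating, a minimal check from the ESXi shell is:

    # Show the SNMP agent configuration, including whether the agent is enabled
    esxcli system snmp get
    # Inspect the directory where the stray .trp files accumulate
    ls /var/spool/snmp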

  • PR 3082867: An ESXi host might fail with a purple diagnostic screen after a replicated virtual machine migrates to the host

    In some environments, when a replicated virtual machine migrates to a certain ESXi host, during a full sync, some SCSI WRITE commands might target address ranges past the end of the transfer map. As a result, the ESXi host might fail with a purple diagnostic screen and a backtrace such as:

    #0 DLM_free (msp=0x43238f2c9cc0, mem=mem@entry=0x43238f6db860, allowTrim=allowTrim@entry=1 '\001') at bora/vmkernel/main/dlmalloc.c:4924

    #1 0x000041802a151e01 in Heap_Free (heap=0x43238f2c9000, mem=<optimized out>) at bora/vmkernel/main/heap.c:4217

    #2 0x000041802a3b0fa1 in BitVector_Free (heap=<optimized out>, bv=<optimized out>) at bora/vmkernel/util/bitvector.c:94

    #3 0x000041802b9f4f3f in HbrBitmapFree (bitmap=<optimized out>) at bora/modules/vmkernel/hbr_filter/hbr_bitmap.c:91

    This issue is resolved in this release.

  • PR 3162499: During service insertion for NSX-managed workload VMs, some VMs might become intermittently unresponsive, and the virtual device might reset

    During service insertion for NSX-managed workload VMs, a packet list might be reinjected from the input chain of one switchport to another switchport. In such cases, the source switchport does not correspond to the actual portID of the input chain and the virtual device does not get completion status for the transmitted frame. As a result, when you run a service insertion task, some VMs might become intermittently unresponsive due to network connectivity issues.

    This issue is resolved in this release.

  • PR 3219441: You cannot send keys to a guest OS by using the browser console in the ESXi Host Client

    In the ESXi Host Client, when you open the browser console of a virtual machine and select Guest OS > Send keys, when you select a key input, for example Ctrl-Alt-Delete, the selection is not sent to the guest OS.

    This issue is resolved in this release.

  • PR 3209853: Windows VMs might fail with a blue diagnostic screen with 0x5c signature due to platform delays

    Windows VMs might fail with a blue diagnostic screen with the following signature due to a combination of factors involving the Windows timer code and platform delays, such as storage or network delays:

    HAL_INITIALIZATION_FAILED (0x5c) (This indicates that the HAL initialization failed.)

    Arguments:

    Arg1: 0000000000000115, HAL_TIMER_INITIALIZATION_FAILURE

    Arg2: fffff7ea800153e0, Timer address

    Arg3: 00000000000014da, Delta in QPC units (delta to program the timer with)

    Arg4: ffffffffc0000001, invalid status code (STATUS_UNSUCCESSFUL)

    Checks done by HPET timers in Windows might fail due to such platform delays and cause the issue.

    This issue is resolved in this release.

  • PR 3178109: vSphere Lifecycle Manager compliance check fails with "An unknown error occurred while performing the operation"

    The base ESXi image contains driver components that can be overridden by higher version async driver components packaged by an OEM add-on. If such a component is manually removed on a host, the compliance check of the vSphere Lifecycle Manager might fail unexpectedly. In the vSphere Client, you see errors such as Host status is unknown and An unknown error occurred while performing the operation.

    This issue is resolved in this release.

  • PR 3180634: Performance of certain nested virtual machines on AMD CPUs might degrade

    Nested virtual machines on AMD CPUs with operating systems such as Windows with virtualization-based security (VBS) might experience performance degradation, timeouts, or unresponsiveness due to an issue with the virtualization of AMD's Rapid Virtualization Indexing (RVI), also known as Nested Page Tables (NPT).

    This issue is resolved in this release.

  • PR 3223539: Some vmnics might not be visible after an ESXi host reboot due to a faulty NVMe device

    If an NVMe device fails during the attach phase, the NVMe driver disables the NVMe controller and resets the hardware queue resources. When you try to re-attach the device, a mismatch between the hardware and driver queue pointers might cause an IOMMU fault on the NVMe device, which leads to memory corruption and failure of the vmkdevmgr service. As a result, you do not see some vmnics in your network.

    This issue is resolved in this release.

  • PR 3218835: When you disable or suspend vSphere Fault Tolerance, virtual machines might become unresponsive for about 10 seconds

    When you disable or suspend vSphere FT, some virtual machines might take about 10 seconds to release vSphere FT resources. As a result, such VMs become temporarily unresponsive to network requests or console operations.

    This issue is resolved in this release.

  • PR 3181901: ESXi host becomes unresponsive and you cannot put the host in Maintenance Mode or migrate VMs from that host

    Asynchronous reads of metadata on a VMFS volume attached to an ESXi host might cause a race condition with other threads on the host and make the host unresponsive. As a result, you cannot put the host in Maintenance Mode or migrate VMs from that host.

    This issue is resolved in this release.

  • PR 3156666: Packets with length less than 60 bytes might drop

    An ESXi host might add invalid bytes, other than zero, to packets with less than 60 bytes. As a result, packets with such invalid bytes drop.

    This issue is resolved in this release.

  • PR 3180283: When you migrate a VM with recently hot-added memory, an ESXi host might repeatedly fail with a purple diagnostic screen

    Due to a race condition while the memory hotplug module recomputes the NUMA memory layout of a VM on a destination host after migration, an ESXi host might repeatedly fail with a purple diagnostic screen. In the backtrace, you see errors such as:

    0x452900262cf0:[0x4200138fee8b]PanicvPanicInt@vmkernel#nover+0x327 stack: 0x452900262dc8, 0x4302f6c06508, 0x4200138fee8b, 0x420013df1300, 0x452900262cf0  0x452900262dc0:[0x4200138ff43d]Panic_WithBacktrace@vmkernel#nover+0x56 stack: 0x452900262e30, 0x452900262de0, 0x452900262e40, 0x452900262df0, 0x3e7514  0x452900262e30:[0x4200138fbb90]NMI_Interrupt@vmkernel#nover+0x561 stack: 0x0, 0xf48, 0x0, 0x0, 0x0  0x452900262f00:[0x420013953392]IDTNMIWork@vmkernel#nover+0x7f stack: 0x420049800000, 0x4200139546dd, 0x0, 0x452900262fd0, 0x0  0x452900262f20:[0x4200139546dc]Int2_NMI@vmkernel#nover+0x19 stack: 0x0, 0x42001394e068, 0xf50, 0xf50, 0x0  0x452900262f40:[0x42001394e067]gate_entry@vmkernel#nover+0x68 stack: 0x0, 0x43207bc02088, 0xd, 0x0, 0x43207bc02088  0x45397b61bd30:[0x420013be7514]NUMASched_PageNum2PhysicalDomain@vmkernel#nover+0x58 stack: 0x1, 0x420013be34c3, 0x45396f79f000, 0x1, 0x100005cf757  0x45397b61bd50:[0x420013be34c2]NUMASched_UpdateAllocStats@vmkernel#nover+0x4b stack: 0x100005cf757, 0x0, 0x0, 0x4200139b36d9, 0x0  0x45397b61bd80:[0x4200139b36d8]VmMem_NodeStatsSub@vmkernel#nover+0x59 stack: 0x39, 0x45396f79f000, 0xbce0dbf, 0x100005cf757, 0x0  0x45397b61bdc0:[0x4200139b4372]VmMem_FreePageNoBackmap@vmkernel#nover+0x8b stack: 0x465ec0001be0, 0xa, 0x465ec18748b0, 0x420014e7685f, 0x465ec14437d0

    This issue is resolved in this release.

  • PR 3100552: ESXi booting times out after five minutes and the host automatically reboots

    During initial ESXi booting, when you see the Loading VMware ESXi progress bar, if the bootloader takes more than a total of five minutes to load all the boot modules before moving on to the next phase of booting, the host firmware times out the boot process and resets the system.

    This issue is resolved in this release. With ESXi 7.0 Update 3o, the ESXi bootloader resets the five-minute timeout after each boot module loads.

  • PR 3184368: The durable name of a SCSI LUN might not be set

    The durable name property for a SCSI-3 compliant device comes from pages 80h and 83h of the Vital Product Data (VPD) as defined by the T10 and SMI standards. To populate the durable name, ESXi first sends an inquiry command to get a list of VPD pages supported by the device. Then ESXi issues commands to get data for all supported VPD pages. Due to an issue with the target array, the device might fail a command to get VPD page data for a page in the list with a not supported error. As a result, ESXi cannot populate the durable name property for the device.

    This issue is resolved in this release. The fix ignores the error on the command to get VPD page data, except for pages 80h and 83h, if that data is not required for generating the durable name.

  • PR 3184425: Port scan of a VXLAN Tunnel End Point (VTEP) on an ESXi host might result in an intermittent connectivity loss

    Port scan of a VTEP on an ESXi host might result in an intermittent connectivity loss, but only under the following conditions:        

    1. Your environment has many VTEPs

    2. All VTEPs are in the same IP subnet     

    3. The upstream switch is Cisco ACI

    This issue is resolved in this release.

  • PR 3156627: Changing the mode of the virtual disk on a running virtual machine might cause the VM to fail

    If you use the VMware Host Client to edit the disk mode of a running virtual machine, for example from Independent - Nonpersistent to Dependent or Independent - Persistent, the operation fails and might cause the VM to fail. In the vmware.log, you see errors such as:

    [msg.disk.notConfigured2] Failed to configure disk 'scsi0:4'. The virtual machine cannot be powered on with an unconfigured disk.

    [msg.checkpoint.continuesync.error] An operation required the virtual machine to quiesce and the virtual machine was unable to continue running.

    This issue is resolved in this release. The fix blocks changing the mode of an Independent - Nonpersistent disk on a running virtual machine by using the VMware Host Client. The vSphere Client already blocks such operations.

  • PR 3164477: In VMware Skyline Health Diagnostics, you see multiple warnings for vSAN memory pools

    The free-heap estimation logic for some vSAN memory pools might account for more heap memory than is actually available and trigger warnings for insufficient memory. As a result, you see Memory pools (heap) health warnings for many hosts under the Physical disks section in Skyline Health.

    This issue is resolved in this release.

  • PR 3168950: ESXi hosts fail with a purple diagnostic screen while installing VMware NSX-T Data Center due to insufficient TCP/IP heap space

    VMware NSX-T Data Center has multiple net stacks and the default TCP/IP heap space might not be sufficient for installation. As a result, ESXi hosts might fail with a purple diagnostic screen.

    This issue is resolved in this release. The fix increases the default TcpipHeapSize setting to 8 MB from 0 MB and the maximum size to 128 MB from 32 MB. In the vSphere Client, you can change the TcpipHeapSize value by navigating to Hosts and Clusters > Configure > System > Advanced System Settings > TcpipHeapSize. In vCenter systems with VMware NSX-T Data Center, set the value to 128 MB.
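
    The same change can also be made from the ESXi shell; this is a sketch that assumes the advanced option path shown above:

    # Set the TCP/IP heap size to the value recommended for NSX-T Data Center environments
    esxcli system settings advanced set -o /Net/TcpipHeapSize -i 128
    # Verify the new value
    esxcli system settings advanced list -o /Net/TcpipHeapSize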

  • PR 3096974: Rare race condition in I/O filters might cause an ESXi host to fail with a purple diagnostics screen

    ESXi hosts in which virtual machines use I/O filters might randomly fail with a purple diagnostics screen and an error such as #DF Exception 8 IP 0x42002c54b19f SP 0x453a9e8c2000 due to a rare race condition.

    This issue is resolved in this release.

  • PR 3112194: Spherelet might fail to start on an ESXi host due to the execInstalledOnly setting

    The execInstalledOnly setting is a run-time parameter that limits the execution of binaries such as applications and VMkernel modules to improve security and guard against breaches and compromises. Some execInstalledOnly security checks might interfere with Spherelet, the ESXi UserWorld agent that acts as an extension of the Kubernetes control plane, and prevent it from starting, even when all files are installed.

    This issue is resolved in this release.
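
    You can check the current state of the setting from the ESXi shell; a minimal sketch is:

    # Show the runtime execInstalledOnly kernel setting
    esxcli system settings kernel list -o execinstalledonly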

  • PR 3115870: VMware VIB installation might fail during concurrent vendor package installations

    When you install update packages from several vendors, such as JetStream Software, Microsoft, and VMware, multiple clients call the same PatchManager APIs and might lead to a race condition. As a result, VMware installation packages (VIBs) might fail to install. In the logs, you see an error such as vim.fault.PlatformConfigFault, which is a catch-all fault indicating that some error has occurred regarding the configuration of the ESXi host. In the vSphere Client, you see a message such as An error occurred during host configuration.

    This issue is resolved in this release. The fix is to return a TaskInProgress warning instead of PlatformConfigFault, so that you are aware of the actual issue and retry the installation.

  • PR 3164439: Certain applications might take too many ESXi file handles and cause performance aggravation

    In very rare cases, applications such as NVIDIA virtual GPU (vGPU) might consume so many file handles that ESXi fails to process other services or VMs. As a result, you might see GPUs on some nodes disappear, report zero GPU memory, or show performance degradation.

    This issue is resolved in this release. The fix reduces the number of file handles a vGPU VM can consume.

  • PR 3162963: If parallel volume expand and volume refresh operations on the same VMFS volume run on two ESXi hosts in the same cluster, the VMFS volume might go offline

    While a VMFS volume expand operation is in progress on an ESXi host in a vCenter cluster, if on another host a user or vCenter initiates a refresh of the same VMFS volume capacity, such a volume might go offline. The issue occurs due to a possible mismatch in the device size, which is stamped on the disk in the volume metadata during a device rescan, and the device size value in the Pluggable Storage Architecture (PSA) layer on the host, which might not be updated if the device rescan is not complete.

    This issue is resolved in this release. The fix improves the resiliency of the volume manager code to force a consecutive refresh of the device attributes and comparison of the device sizes again if vCenter reports a mismatch in the device size.

  • PR 3161690: CPU usage of ESXi hosts might intermittently increase in environments that use a Vigor router

    Due to a rare race condition, when a VM power off command conflicts with a callback function by a Vigor router, you might see increased CPU usage on ESXi hosts, for example, after an upgrade.

    This issue is resolved in this release.

  • PR 3165374: ESXi hosts might become unresponsive and fail with a purple diagnostic screen during TCP TIME_WAIT

    The TCP slow timer might starve TCP input processing while it walks the list of connections in TIME_WAIT to close expired ones, because of contention on the global TCP pcbinfo lock. As a result, the VMkernel might fail with a purple diagnostic screen and the error Spin count exceeded - possible deadlock while cleaning up TCP TIME_WAIT sockets. The backtrace points to tcpip functions such as tcp_slowtimo() or tcp_twstart().

    This issue is resolved in this release. The fix adds a new global lock to protect the list of connections in TIME_WAIT and to acquire the TCP pcbinfo lock only when closing an expired connection.

  • PR 3166818: If any of the mount points that use the same NFS server IP returns an error, VAAI-NAS might mark all mount points as not supported

    When you use a VAAI-NAS vendor plug-in, multiple file systems on an ESXi host use the same NFS server IP to create a datastore. For certain NAS providers, if one of the mounts returns a VIX_E_OBJECT_NOT_FOUND (25) error during the startSession call, hardware acceleration for all file systems with the same NFS server IP might become unsupported. This issue occurs when an ESXi host mounts a directory on an NFS server, but that directory is not available at the time of the startSession call, because, for example, it has been moved from that NFS server.

    This issue is resolved in this release. The fix makes sure that VAAI-NAS marks as not supported only mounts that report an error.

  • PR 3112692: The smartpqi driver cannot get the location of ThinkSystem PM1645a SAS SSD devices

    In rare cases, the smartpqi plugin in LSU service might get a longer serial number string from the VPD page 0x80 than the actual serial number, and the device lookup in the cache fails. As a result, when you run the command esxcli storage core device physical get -d <device_id> to get a device location, you see an error such as Unable to get location for device. Device is not found..

    This issue is resolved in this release. The fix makes sure that the smartpqi plugin in LSU service always checks the correct serial number length and finds the device location.

  • PR 3100030: An ESXi host fails with a purple diagnostic screen indicating VMFS heap corruption

    A race condition in the host cache of a VMFS datastore might cause heap corruption and the ESXi host fails with a purple diagnostic screen and a message such as:

    PF Exception 14 in world xxxxx:res3HelperQu IP 0xxxxxxe2 addr 0xXXXXXXXXX

    This issue is resolved in this release.

  • PR 3121216: If the Internet Control Message Protocol (ICMP) is not active, ESXi host reboot might take a long time after an upgrade

    If ICMP is not active on the NFS servers in your environment, after upgrading your system, ESXi host reboots might take an hour to complete, because restore operations for NFS datastores fail. NFS uses the vmkping utility to identify reachable IPs of the NFS servers before executing a mount operation, and when ICMP is not active, mount operations fail.

    This issue is resolved in this release. To remove dependency on the ICMP protocol to find reachable IPs, the fix adds socket APIs to ensure that IPs on a given NFS server are available.

  • PR 3098760: ESXi hosts randomly disconnect from the Active Directory domain or vCenter due to Likewise memory exhaustion

    Memory leaks in Active Directory operations and related libraries, or when smart card authentication is enabled on an ESXi host, might lead to Likewise memory exhaustion.

    This issue is partially resolved in this release. For more information, see VMware knowledge base article 78968.

  • PR 3118402: The NTP monitoring function returns intermittent time sync failure in the time services test report even when the ESXi host system clock is actually in sync

    The time services monitoring infrastructure periodically queries the NTP daemon, ntpd, for time sync status on ESXi hosts. Intermittent query failures might occur due to timeouts in the socket call to read such status information. As a result, you might see time sync failure alerts in the NTP test reports.

    This issue is resolved in this release. The fix makes sure that NTP monitoring queries do not fail.

  • PR 3159168: You see multiple "End path evaluation for device" messages in the vmkernel.log

    A periodic device probe that evaluates the state of all the paths to a device might create multiple unnecessary End path evaluation for device messages in the vmkernel.log file on a release build.

    This issue is resolved in this release. The fix restricts End path evaluation for device messages to only non-release builds, such as beta. You must enable the VERBOSE log level to see such logs.

  • PR 3101512: You see a warning for lost uplink redundancy even when the uplink is active

    If you quickly remove the network cable from a physical NIC and reinsert it, the ESXi host agent, hostd, triggers an event to restore uplink redundancy, but in some cases, although the uplink is restored within a second, you still see a misleading alarm for lost uplink redundancy.

    This issue is resolved in this release.

  • PR 3161473: Operations with stateless ESXi hosts might not pick the expected remote disk for system cache, which causes remediation or compliance issues

    Operations with stateless ESXi hosts, such as storage migration, might not pick the expected remote disk for system cache. For example, you want to keep the new boot LUN as LUN 0, but vSphere Auto Deploy picks LUN 1.

    This issue is resolved in this release. The fix provides a consistent way to sort the remote disks and always pick the disk with the lowest LUN ID. To make sure you enable the fix, follow these steps:

    1. On the Edit host profile page of the Auto Deploy wizard, select Advanced Configuration Settings > System Image Cache Configuration > System Image Cache Configuration.

    2. In the System Image Cache Profile Settings drop-down menu, select Enable stateless caching on the host.

    3. Edit Arguments for first disk by replacing remote with sortedremote and/or remoteesx with sortedremoteesx.

ESXi-7.0U3so-22348808-standard

Profile Name

ESXi-7.0U3so-22348808-standard

Build

For build information, see Patches Contained in This Release.

Vendor

VMware, Inc.

Release Date

September 28, 2023

Acceptance Level

PartnerSupported

Affected Hardware

N/A

Affected Software

N/A

Affected VIBs

  • VMware_bootbank_vsanhealth_7.0.3-0.100.22348808

  • VMware_bootbank_crx_7.0.3-0.100.22348808

  • VMware_bootbank_vdfs_7.0.3-0.100.22348808

  • VMware_bootbank_esxio-combiner_7.0.3-0.100.22348808

  • VMware_bootbank_native-misc-drivers_7.0.3-0.100.22348808

  • VMware_bootbank_esx-base_7.0.3-0.100.22348808

  • VMware_bootbank_esx-dvfilter-generic-fastpath_7.0.3-0.100.22348808

  • VMware_bootbank_cpu-microcode_7.0.3-0.100.22348808

  • VMware_bootbank_bmcal_7.0.3-0.100.22348808

  • VMware_bootbank_esx-ui_2.11.2-21988676

  • VMware_bootbank_gc_7.0.3-0.100.22348808

  • VMware_bootbank_trx_7.0.3-0.100.22348808

  • VMware_bootbank_vsan_7.0.3-0.100.22348808

  • VMware_bootbank_esx-xserver_7.0.3-0.100.22348808

  • VMware_bootbank_loadesx_7.0.3-0.100.22348808

  • VMware_bootbank_esx-update_7.0.3-0.100.22348808

  • VMware_locker_tools-light_12.2.6.22229486-22348808

PRs Fixed

3219441, 3239369, 3236018, 3232099, 3232034, 3222887, 3217141, 3213025, 3187868, 3186342, 3164962, 3089785, 3160789, 3098679, 3089785, 3256457, 3178522, 3178519

Related CVE numbers

N/A

This patch updates the following issues:

  • ESXi 7.0 Update 3o provides the following security updates:

    • The cURL library is updated to version 8.1.2.

    • The ESXi userworld libxml2 library is updated to version 2.10.4.

    • The SQLite library is updated to version 3.42.0.

    • The OpenSSL package is updated to version 1.0.2zh.

  • The cpu-microcode VIB includes the following AMD microcode:

    Code Name | FMS | MCU Rev | MCU Date | Brand Names
    --- | --- | --- | --- | ---
    Bulldozer | 0x600f12 (15/01/2) | 0x0600063e | 2/7/2018 | Opteron 6200/4200/3200 Series
    Piledriver | 0x600f20 (15/02/0) | 0x06000852 | 2/6/2018 | Opteron 6300/4300/3300 Series
    Zen-Naples | 0x800f12 (17/01/2) | 0x0800126e | 11/11/2021 | EPYC 7001 Series
    Zen2-Rome | 0x830f10 (17/31/0) | 0x0830107a | 5/17/2023 | EPYC 7002/7Fx2/7Hx2 Series
    Zen3-Milan-B1 | 0xa00f11 (19/01/1) | 0x0a0011d1 | 7/10/2023 | EPYC 7003/7003X Series
    Zen3-Milan-B2 | 0xa00f12 (19/01/2) | 0x0a001234 | 7/10/2023 | EPYC 7003/7003X Series

  • The following VMware Tools ISO images are bundled with ESXi 7.0 Update 3o:

    • windows.iso: VMware Tools 12.2.6 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.

    • linux.iso: VMware Tools 10.3.25 ISO image for Linux OS with glibc 2.11 or later.

    The following VMware Tools ISO images are available for download:

    • VMware Tools 11.0.6:

      • windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).

    • VMware Tools 10.0.12:

      • winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003.

      • linuxPreGLibc25.iso: supports Linux guest operating systems earlier than Red Hat Enterprise Linux (RHEL) 5, SUSE Linux Enterprise Server (SLES) 11, Ubuntu 7.04, and other distributions with glibc version earlier than 2.5.

    • solaris.iso: VMware Tools image 10.3.10 for Solaris.

    • darwin.iso: Supports Mac OS X versions 10.11 and later. VMware Tools 12.1.0 was the last regular release for macOS. For more details, see VMware knowledge base article 88698.

    Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:

  • The cpu-microcode VIB includes the following Intel microcode:

    Code Name | FMS | Plt ID | MCU Rev | MCU Date | Brand Names
    --- | --- | --- | --- | --- | ---
    Nehalem EP | 0x106a5 (06/1a/5) | 0x03 | 0x1d | 5/11/2018 | Intel Xeon 35xx Series; Intel Xeon 55xx Series
    Clarkdale | 0x20652 (06/25/2) | 0x12 | 0x11 | 5/8/2018 | Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series
    Arrandale | 0x20655 (06/25/5) | 0x92 | 0x7 | 4/23/2018 | Intel Core i7-620LE Processor
    Sandy Bridge DT | 0x206a7 (06/2a/7) | 0x12 | 0x2f | 2/17/2019 | Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series
    Westmere EP | 0x206c2 (06/2c/2) | 0x03 | 0x1f | 5/8/2018 | Intel Xeon 56xx Series; Intel Xeon 36xx Series
    Sandy Bridge EP | 0x206d6 (06/2d/6) | 0x6d | 0x621 | 3/4/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
    Sandy Bridge EP | 0x206d7 (06/2d/7) | 0x6d | 0x71a | 3/24/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
    Nehalem EX | 0x206e6 (06/2e/6) | 0x04 | 0xd | 5/15/2018 | Intel Xeon 65xx Series; Intel Xeon 75xx Series
    Westmere EX | 0x206f2 (06/2f/2) | 0x05 | 0x3b | 5/16/2018 | Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series
    Ivy Bridge DT | 0x306a9 (06/3a/9) | 0x12 | 0x21 | 2/13/2019 | Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C
    Haswell DT | 0x306c3 (06/3c/3) | 0x32 | 0x28 | 11/12/2019 | Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series
    Ivy Bridge EP | 0x306e4 (06/3e/4) | 0xed | 0x42e | 3/14/2019 | Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series
    Ivy Bridge EX | 0x306e7 (06/3e/7) | 0xed | 0x715 | 3/14/2019 | Intel Xeon E7-8800/4800/2800-v2 Series
    Haswell EP | 0x306f2 (06/3f/2) | 0x6f | 0x49 | 8/11/2021 | Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series
    Haswell EX | 0x306f4 (06/3f/4) | 0x80 | 0x1a | 5/24/2021 | Intel Xeon E7-8800/4800-v3 Series
    Broadwell H | 0x40671 (06/47/1) | 0x22 | 0x22 | 11/12/2019 | Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series
    Avoton | 0x406d8 (06/4d/8) | 0x01 | 0x12d | 9/16/2019 | Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series
    Broadwell EP/EX | 0x406f1 (06/4f/1) | 0xef | 0xb000040 | 5/19/2021 | Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series
    Skylake SP | 0x50654 (06/55/4) | 0xb7 | 0x2007006 | 3/6/2023 | Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series
    Cascade Lake B-0 | 0x50656 (06/55/6) | 0xbf | 0x4003604 | 3/17/2023 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
    Cascade Lake | 0x50657 (06/55/7) | 0xbf | 0x5003604 | 3/17/2023 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
    Cooper Lake | 0x5065b (06/55/b) | 0xbf | 0x7002703 | 3/21/2023 | Intel Xeon Platinum 8300 Series; Intel Xeon Gold 6300/5300
    Broadwell DE | 0x50662 (06/56/2) | 0x10 | 0x1c | 6/17/2019 | Intel Xeon D-1500 Series
    Broadwell DE | 0x50663 (06/56/3) | 0x10 | 0x700001c | 6/12/2021 | Intel Xeon D-1500 Series
    Broadwell DE | 0x50664 (06/56/4) | 0x10 | 0xf00001a | 6/12/2021 | Intel Xeon D-1500 Series
    Broadwell NS | 0x50665 (06/56/5) | 0x10 | 0xe000014 | 9/18/2021 | Intel Xeon D-1600 Series
    Skylake H/S | 0x506e3 (06/5e/3) | 0x36 | 0xf0 | 11/12/2021 | Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series
    Denverton | 0x506f1 (06/5f/1) | 0x01 | 0x38 | 12/2/2021 | Intel Atom C3000 Series
    Ice Lake SP | 0x606a6 (06/6a/6) | 0x87 | 0xd0003a5 | 3/30/2023 | Intel Xeon Platinum 8300 Series; Intel Xeon Gold 6300/5300 Series; Intel Xeon Silver 4300 Series
    Ice Lake D | 0x606c1 (06/6c/1) | 0x10 | 0x1000230 | 1/27/2023 | Intel Xeon D-2700 Series; Intel Xeon D-1700 Series
    Snow Ridge | 0x80665 (06/86/5) | 0x01 | 0x4c000023 | 2/22/2023 | Intel Atom P5000 Series
    Snow Ridge | 0x80667 (06/86/7) | 0x01 | 0x4c000023 | 2/22/2023 | Intel Atom P5000 Series
    Tiger Lake U | 0x806c1 (06/8c/1) | 0x80 | 0xac | 2/27/2023 | Intel Core i3/i5/i7-1100 Series
    Tiger Lake U Refresh | 0x806c2 (06/8c/2) | 0xc2 | 0x2c | 2/27/2023 | Intel Core i3/i5/i7-1100 Series
    Tiger Lake H | 0x806d1 (06/8d/1) | 0xc2 | 0x46 | 2/27/2023 | Intel Xeon W-11000E Series
    Sapphire Rapids SP HBM | 0x806f8 (06/8f/8) | 0x10 | 0x2c0001d1 | 2/14/2023 | Intel Xeon Max 9400 Series
    Sapphire Rapids SP | 0x806f8 (06/8f/8) | 0x87 | 0x2b000461 | 3/13/2023 | Intel Xeon Platinum 8400 Series; Intel Xeon Gold 6400/5400 Series; Intel Xeon Silver 4400 Series; Intel Xeon Bronze 3400 Series
    Kaby Lake H/S/X | 0x906e9 (06/9e/9) | 0x2a | 0xf4 | 2/23/2023 | Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series
    Coffee Lake | 0x906ea (06/9e/a) | 0x22 | 0xf4 | 2/23/2023 | Intel Xeon E-2100 Series; Intel Xeon E-2200 Series (4 or 6 core)
    Coffee Lake | 0x906eb (06/9e/b) | 0x02 | 0xf4 | 2/23/2023 | Intel Xeon E-2100 Series
    Coffee Lake | 0x906ec (06/9e/c) | 0x22 | 0xf4 | 2/23/2023 | Intel Xeon E-2100 Series
    Coffee Lake Refresh | 0x906ed (06/9e/d) | 0x22 | 0xfa | 2/27/2023 | Intel Xeon E-2200 Series (8 core)
    Rocket Lake S | 0xa0671 (06/a7/1) | 0x02 | 0x59 | 2/26/2023 | Intel Xeon E-2300 Series

ESXi-7.0U3so-22348808-no-tools

Profile Name

ESXi-7.0U3so-22348808-no-tools

Build

For build information, see Patches Contained in This Release.

Vendor

VMware, Inc.

Release Date

September 28, 2023

Acceptance Level

PartnerSupported

Affected Hardware

N/A

Affected Software

N/A

Affected VIBs

  • VMware_bootbank_vsanhealth_7.0.3-0.100.22348808

  • VMware_bootbank_crx_7.0.3-0.100.22348808

  • VMware_bootbank_vdfs_7.0.3-0.100.22348808

  • VMware_bootbank_esxio-combiner_7.0.3-0.100.22348808

  • VMware_bootbank_native-misc-drivers_7.0.3-0.100.22348808

  • VMware_bootbank_esx-base_7.0.3-0.100.22348808

  • VMware_bootbank_esx-dvfilter-generic-fastpath_7.0.3-0.100.22348808

  • VMware_bootbank_cpu-microcode_7.0.3-0.100.22348808

  • VMware_bootbank_bmcal_7.0.3-0.100.22348808

  • VMware_bootbank_esx-ui_2.11.2-21988676

  • VMware_bootbank_gc_7.0.3-0.100.22348808

  • VMware_bootbank_trx_7.0.3-0.100.22348808

  • VMware_bootbank_vsan_7.0.3-0.100.22348808

  • VMware_bootbank_esx-xserver_7.0.3-0.100.22348808

  • VMware_bootbank_loadesx_7.0.3-0.100.22348808

  • VMware_bootbank_esx-update_7.0.3-0.100.22348808

PRs Fixed

3219441, 3239369, 3236018, 3232099, 3232034, 3222887, 3217141, 3213025, 3187868, 3186342, 3164962, 3089785, 3160789, 3098679, 3089785, 3256457

Related CVE numbers

N/A

This patch updates the following issues:

  • ESXi 7.0 Update 3o provides the following security updates:

    • The cURL library is updated to version 8.1.2.

    • The ESXi userworld libxml2 library is updated to version 2.10.4.

    • The SQLite library is updated to version 3.42.0.

    • The OpenSSL package is updated to version 1.0.2zh.

  • The cpu-microcode VIB includes the following AMD microcode:

    Code Name | FMS | MCU Rev | MCU Date | Brand Names
    --- | --- | --- | --- | ---
    Bulldozer | 0x600f12 (15/01/2) | 0x0600063e | 2/7/2018 | Opteron 6200/4200/3200 Series
    Piledriver | 0x600f20 (15/02/0) | 0x06000852 | 2/6/2018 | Opteron 6300/4300/3300 Series
    Zen-Naples | 0x800f12 (17/01/2) | 0x0800126e | 11/11/2021 | EPYC 7001 Series
    Zen2-Rome | 0x830f10 (17/31/0) | 0x0830107a | 5/17/2023 | EPYC 7002/7Fx2/7Hx2 Series
    Zen3-Milan-B1 | 0xa00f11 (19/01/1) | 0x0a0011d1 | 7/10/2023 | EPYC 7003/7003X Series
    Zen3-Milan-B2 | 0xa00f12 (19/01/2) | 0x0a001234 | 7/10/2023 | EPYC 7003/7003X Series

  • The cpu-microcode VIB includes the following Intel microcode:

    Code Name | FMS | Plt ID | MCU Rev | MCU Date | Brand Names
    --- | --- | --- | --- | --- | ---
    Nehalem EP | 0x106a5 (06/1a/5) | 0x03 | 0x1d | 5/11/2018 | Intel Xeon 35xx Series; Intel Xeon 55xx Series
    Clarkdale | 0x20652 (06/25/2) | 0x12 | 0x11 | 5/8/2018 | Intel i3/i5 Clarkdale Series; Intel Xeon 34xx Clarkdale Series
    Arrandale | 0x20655 (06/25/5) | 0x92 | 0x7 | 4/23/2018 | Intel Core i7-620LE Processor
    Sandy Bridge DT | 0x206a7 (06/2a/7) | 0x12 | 0x2f | 2/17/2019 | Intel Xeon E3-1100 Series; Intel Xeon E3-1200 Series; Intel i7-2655-LE Series; Intel i3-2100 Series
    Westmere EP | 0x206c2 (06/2c/2) | 0x03 | 0x1f | 5/8/2018 | Intel Xeon 56xx Series; Intel Xeon 36xx Series
    Sandy Bridge EP | 0x206d6 (06/2d/6) | 0x6d | 0x621 | 3/4/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
    Sandy Bridge EP | 0x206d7 (06/2d/7) | 0x6d | 0x71a | 3/24/2020 | Intel Pentium 1400 Series; Intel Xeon E5-1400 Series; Intel Xeon E5-1600 Series; Intel Xeon E5-2400 Series; Intel Xeon E5-2600 Series; Intel Xeon E5-4600 Series
    Nehalem EX | 0x206e6 (06/2e/6) | 0x04 | 0xd | 5/15/2018 | Intel Xeon 65xx Series; Intel Xeon 75xx Series
    Westmere EX | 0x206f2 (06/2f/2) | 0x05 | 0x3b | 5/16/2018 | Intel Xeon E7-8800 Series; Intel Xeon E7-4800 Series; Intel Xeon E7-2800 Series
    Ivy Bridge DT | 0x306a9 (06/3a/9) | 0x12 | 0x21 | 2/13/2019 | Intel i3-3200 Series; Intel i7-3500-LE/UE; Intel i7-3600-QE; Intel Xeon E3-1200-v2 Series; Intel Xeon E3-1100-C-v2 Series; Intel Pentium B925C
    Haswell DT | 0x306c3 (06/3c/3) | 0x32 | 0x28 | 11/12/2019 | Intel Xeon E3-1200-v3 Series; Intel i7-4700-EQ Series; Intel i5-4500-TE Series; Intel i3-4300 Series
    Ivy Bridge EP | 0x306e4 (06/3e/4) | 0xed | 0x42e | 3/14/2019 | Intel Xeon E5-4600-v2 Series; Intel Xeon E5-2600-v2 Series; Intel Xeon E5-2400-v2 Series; Intel Xeon E5-1600-v2 Series; Intel Xeon E5-1400-v2 Series
    Ivy Bridge EX | 0x306e7 (06/3e/7) | 0xed | 0x715 | 3/14/2019 | Intel Xeon E7-8800/4800/2800-v2 Series
    Haswell EP | 0x306f2 (06/3f/2) | 0x6f | 0x49 | 8/11/2021 | Intel Xeon E5-4600-v3 Series; Intel Xeon E5-2600-v3 Series; Intel Xeon E5-2400-v3 Series; Intel Xeon E5-1600-v3 Series; Intel Xeon E5-1400-v3 Series
    Haswell EX | 0x306f4 (06/3f/4) | 0x80 | 0x1a | 5/24/2021 | Intel Xeon E7-8800/4800-v3 Series
    Broadwell H | 0x40671 (06/47/1) | 0x22 | 0x22 | 11/12/2019 | Intel Core i7-5700EQ; Intel Xeon E3-1200-v4 Series
    Avoton | 0x406d8 (06/4d/8) | 0x01 | 0x12d | 9/16/2019 | Intel Atom C2300 Series; Intel Atom C2500 Series; Intel Atom C2700 Series
    Broadwell EP/EX | 0x406f1 (06/4f/1) | 0xef | 0xb000040 | 5/19/2021 | Intel Xeon E7-8800/4800-v4 Series; Intel Xeon E5-4600-v4 Series; Intel Xeon E5-2600-v4 Series; Intel Xeon E5-1600-v4 Series
    Skylake SP | 0x50654 (06/55/4) | 0xb7 | 0x2007006 | 3/6/2023 | Intel Xeon Platinum 8100 Series; Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 Series; Intel Xeon D-2100 Series; Intel Xeon D-1600 Series; Intel Xeon W-3100 Series; Intel Xeon W-2100 Series
    Cascade Lake B-0 | 0x50656 (06/55/6) | 0xbf | 0x4003604 | 3/17/2023 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
    Cascade Lake | 0x50657 (06/55/7) | 0xbf | 0x5003604 | 3/17/2023 | Intel Xeon Platinum 9200/8200 Series; Intel Xeon Gold 6200/5200; Intel Xeon Silver 4200/Bronze 3200; Intel Xeon W-3200
    Cooper Lake | 0x5065b (06/55/b) | 0xbf | 0x7002703 | 3/21/2023 | Intel Xeon Platinum 8300 Series; Intel Xeon Gold 6300/5300
    Broadwell DE | 0x50662 (06/56/2) | 0x10 | 0x1c | 6/17/2019 | Intel Xeon D-1500 Series
    Broadwell DE | 0x50663 (06/56/3) | 0x10 | 0x700001c | 6/12/2021 | Intel Xeon D-1500 Series

    Broadwell DE

    0x50664 (06/56/4)

    0x10

    0xf00001a

    6/12/2021

    Intel Xeon D-1500 Series

    Broadwell NS

    0x50665 (06/56/5)

    0x10

    0xe000014

    9/18/2021

    Intel Xeon D-1600 Series

    Skylake H/S

    0x506e3 (06/5e/3)

    0x36

    0xf0

    11/12/2021

    Intel Xeon E3-1500-v5 Series; Intel Xeon E3-1200-v5 Series

    Denverton

    0x506f1 (06/5f/1)

    0x01

    0x38

    12/2/2021

    Intel Atom C3000 Series

    Ice Lake SP

    0x606a6 (06/6a/6)

    0x87

    0xd0003a5

    3/30/2023

    Intel Xeon Platinum 8300 Series; Intel Xeon Gold 6300/5300 Series; Intel Xeon Silver 4300 Series

    Ice Lake D

    0x606c1 (06/6c/1)

    0x10

    0x1000230

    1/27/2023

    Intel Xeon D-2700 Series; Intel Xeon D-1700 Series

    Snow Ridge

    0x80665 (06/86/5)

    0x01

    0x4c000023

    2/22/2023

    Intel Atom P5000 Series

    Snow Ridge

    0x80667 (06/86/7)

    0x01

    0x4c000023

    2/22/2023

    Intel Atom P5000 Series

    Tiger Lake U

    0x806c1 (06/8c/1)

    0x80

    0xac

    2/27/2023

    Intel Core i3/i5/i7-1100 Series

    Tiger Lake U Refresh

    0x806c2 (06/8c/2)

    0xc2

    0x2c

    2/27/2023

    Intel Core i3/i5/i7-1100 Series

    Tiger Lake H

    0x806d1 (06/8d/1)

    0xc2

    0x46

    2/27/2023

    Intel Xeon W-11000E Series

    Sapphire Rapids SP HBM

    0x806f8 (06/8f/8)

    0x10

    0x2c0001d1

    2/14/2023

    Intel Xeon Max 9400 Series

    Sapphire Rapids SP

    0x806f8 (06/8f/8)

    0x87

    0x2b000461

    3/13/2023

    Intel Xeon Platinum 8400 Series; Intel Xeon Gold 6400/5400 Series; Intel Xeon Silver 4400 Series; Intel Xeon Bronze 3400 Series

    Kaby Lake H/S/X

    0x906e9 (06/9e/9)

    0x2a

    0xf4

    2/23/2023

    Intel Xeon E3-1200-v6 Series; Intel Xeon E3-1500-v6 Series

    Coffee Lake

    0x906ea (06/9e/a)

    0x22

    0xf4

    2/23/2023

    Intel Xeon E-2100 Series; Intel Xeon E-2200 Series (4 or 6 core)

    Coffee Lake

    0x906eb (06/9e/b)

    0x02

    0xf4

    2/23/2023

    Intel Xeon E-2100 Series

    Coffee Lake

    0x906ec (06/9e/c)

    0x22

    0xf4

    2/23/2023

    Intel Xeon E-2100 Series

    Coffee Lake Refresh

    0x906ed (06/9e/d)

    0x22

    0xfa

    2/27/2023

    Intel Xeon E-2200 Series (8 core)

    Rocket Lake S

    0xa0671 (06/a7/1)

    0x02

    0x59

    2/26/2023

    Intel Xeon E-2300 Series

ESXi7.0U3o - 22348816

Name

ESXi

Version

ESXi7.0U3o - 22348816

Release Date

September 28, 2023

Category

Bugfix

Affected Components​

  • ESXi Component - core ESXi

  • ESXi Install/Upgrade

  • VMware Native AHCI Driver

  • SMARTPQI LSU Management Plugin

  • Non-Volatile memory controller driver

  • Avago (LSI) Native 12Gbps SAS MPT Driver

  • Intel NVME Driver with VMD Technology

PRs Fixed

3244098, 3216958, 3245763, 3242021, 3246132, 3253205, 3256804, 3239170, 3251981, 3215370, 3117615, 3248478, 3252676, 3235496, 3238026, 3236064, 3224739, 3224306, 3223755, 3216958, 3233958, 3185125, 3219264, 3221620, 3218835, 3185560, 3221099, 3211625, 3221860, 3228586, 3224604, 3223539, 3222601, 3217258, 3213041, 3216522, 3221591, 3216389, 3220004, 3217633, 3216548, 3216449, 3156666, 3181574, 3180746, 3155476, 3211807, 3154090, 3184008, 3183519, 3183519, 3209853, 3100552, 3187846, 3113263, 3176350, 3185827, 3095511, 3184425, 3186351, 3186367, 3154090, 3158524, 3181601, 3180391, 3180283, 3184608, 3181774, 3155476, 3163361, 3182978, 3184368, 3160456, 3164575, 3181901, 3184871, 3166818, 3157195, 3164355, 3163271, 3112194, 3161690, 3261925, 3178589, 3178721, 3162963, 3168950, 3159168, 3158491, 3158866, 3096769, 3165374, 3122037, 3098760, 3164439, 3161473, 3162790, 3100030, 3096974, 3161473, 3165651, 3083007, 3118240, 3151076, 3118402, 3160480, 3156627, 3158531, 3162499, 3158512, 3053430, 3153395, 3117615, 3099192, 3158508, 3115870, 3119834, 3158220, 3110401, 2625439, 3099357, 3152811, 3108979, 3120165, 3245953, 3178109, 3211395, 3164477, 3119959, 3164462, 3036883, 3112692, 3218578, 3224847

Related CVE numbers

N/A

This patch resolves the issues listed in ESXi-7.0U3o-22348816-standard.

ESXi7.0U3so - 22348808

Name

ESXi

Version

ESXi7.0U3so - 22348808

Release Date

September 28, 2023

Category

Security

Affected Components​

  • ESXi Component - core ESXi VIBs

  • ESXi Install/Upgrade Component

  • ESXi Tools Component

PRs Fixed

3219441, 3239369, 3236018, 3232099, 3232034, 3222887, 3217141, 3213025, 3187868, 3186342, 3164962, 3089785, 3160789, 3098679, 3089785, 3256457, 3178522, 3178519

Related CVE numbers

N/A

This patch resolves the issues listed in ESXi-7.0U3so-22348808-standard.

Known Issues

vSAN Issues

  • PR 2962316: vSAN File Service does not support NFSv4 delegations

    vSAN File Service does not support NFSv4 delegations in this release.

    Workaround: None

  • PR 3235496: Applications on a vSAN cluster become intermittently unresponsive

    If the advanced option preferHT is enabled on an AMD server in a vSAN cluster, the virtual CPUs of a virtual machine run on the same last level cache. When the virtual CPUs are all very busy and an NVMe thread that handles I/Os happens to run on the same last level cache, the NVMe thread takes very long to complete because it competes for CPU resources with the busy VMs. As a result, the I/O latency is very high and might lead to intermittent unresponsiveness of storage and applications on the vSAN cluster. In the logs, you see errors such as Lost access to volume <uuid> due to connectivity issues. Recovery attempt is in progress and outcome will be reported shortly.

    Workaround: Use the command esxcli system settings advanced set -o /Misc/vmknvmeCwYield -i 0 to avoid competition between the NVMe completion queue and busy virtual CPUs.
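
    To confirm that the setting is applied, you can list the advanced option afterwards; this check is a minimal sketch and not part of the documented workaround:

    # Display the current value of the advanced option set above
    esxcli system settings advanced list -o /Misc/vmknvmeCwYield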

Miscellaneous Issues

  • Windows or Citrix virtual machines with NVIDIA virtual GPU (vGPU) might intermittently fail with a blue diagnostics screen

    After upgrading to ESXi 7.0 Update 3o, some Windows or Citrix vGPU VMs might fail due to insufficient file handles. Such VMs might need more than the default limit of 2048 handles.

    Workaround: Apply the VM configuration setting vmiop.maxFileHandles = 16384.
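
    For example, a sketch of the corresponding entry in the virtual machine's .vmx file, which you can also add through Edit Settings > VM Options > Advanced > Configuration Parameters (it assumes the VM is powered off while you edit the file):

    vmiop.maxFileHandles = "16384"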

Known Issues from Previous Releases

vSphere Cluster Services Issues

  • You see compatibility issues in new vCLS VMs deployed in vSphere 7.0 Update 3 environment

    The default name for new vCLS VMs deployed in a vSphere 7.0 Update 3 environment uses the pattern vCLS-UUID. vCLS VMs created in earlier vCenter Server versions continue to use the pattern vCLS (n). Since the use of parentheses () is not supported by many solutions that interoperate with vSphere, you might see compatibility issues.

    Workaround: Reconfigure vCLS by using retreat mode after updating to vSphere 7.0 Update 3.

Installation, Upgrade, and Migration Issues

  • The vCenter Upgrade/Migration pre-checks fail with "Unexpected error 87"

    The vCenter Server Upgrade/Migration pre-checks fail when the Security Token Service (STS) certificate does not contain a Subject Alternative Name (SAN) field. This situation occurs when you have replaced the vCenter 5.5 Single Sign-On certificate with a custom certificate that has no SAN field, and you attempt to upgrade to vCenter Server 7.0. The upgrade considers the STS certificate invalid and the pre-checks prevent the upgrade process from continuing.

    Workaround: Replace the STS certificate with a valid certificate that contains a SAN field, and then proceed with the vCenter Server 7.0 Upgrade/Migration.

  • Corrupted VFAT partitions from a vSphere 6.7 environment might cause upgrades to ESXi 7.x to fail

    Due to corrupted VFAT partitions from a vSphere 6.7 environment, repartitioning the boot disks of an ESXi host might fail during an upgrade to ESXi 7.x. As a result, you might see the following errors:

    When upgrading to ESXi 7.0 Update 3l, the operation fails with a purple diagnostic screen and/or an error such as:

    • An error occurred while backing up VFAT partition files before re-partitioning: Failed to calculate size for temporary Ramdisk: <error>.

    • An error occurred while backing up VFAT partition files before re-partitioning: Failed to copy files to Ramdisk: <error>.

      If you use an ISO installer, you see the errors, but no purple diagnostic screen.

    When upgrading to an ESXi 7.x version earlier than 7.0 Update 3l, you might see:

    • Logs such as ramdisk (root) is full in the vmkwarning.log file.

    • Unexpected rollback to ESXi 6.5 or 6.7 on reboot.

    • The /bootbank,/altbootbank, and ESX-OSData partitions are not present.

    Workaround: You must first remediate the corrupted partitions before completing the upgrade to ESXi 7.x. For more details, see VMware knowledge base article 91136.

  • Problems upgrading to vSphere 7.0 with pre-existing CIM providers

    After upgrade, previously installed 32-bit CIM providers stop working because ESXi requires 64-bit CIM providers. Customers may lose management API functions related to CIMPDK, NDDK (native DDK), HEXDK, VAIODK (IO filters), and see errors related to uwglibc dependency.

    The syslog reports the module as missing and logs the error "32 bit shared libraries not loaded."

    Workaround: None. To restore the functionality, download new 64-bit CIM providers from your vendor.

  • Installation of 7.0 Update 1 drivers on ESXi 7.0 hosts might fail

    You cannot install drivers applicable to ESXi 7.0 Update 1 on hosts that run ESXi 7.0 or 7.0b.

    The operation fails with an error, such as:

    VMW_bootbank_qedrntv_3.40.4.0-12vmw.701.0.0.xxxxxxx requires vmkapi_2_7_0_0, but the requirement cannot be satisfied within the ImageProfile. ​Please refer to the log file for more details.

    Workaround: Update the ESXi host to 7.0 Update 1. Retry the driver installation.

  • UEFI booting of ESXi hosts might stop with an error during an update to ESXi 7.0 Update 2 from an earlier version of ESXi 7.0

    If you attempt to update your environment to 7.0 Update 2 from an earlier version of ESXi 7.0 by using vSphere Lifecycle Manager patch baselines, UEFI booting of ESXi hosts might stop with an error such as:

    Loading /boot.cfg
    Failed to load crypto64.efi
    Fatal error: 15 (Not found)

    Workaround: For more information, see VMware knowledge base articles 83063 and 83107.

  • If legacy VIBs are in use on an ESXi host, vSphere Lifecycle Manager cannot extract a desired software specification to seed to a new cluster

    With vCenter Server 7.0 Update 2, you can create a new cluster by importing the desired software specification from a single reference host. However, if legacy VIBs are in use on an ESXi host, vSphere Lifecycle Manager cannot extract a reference software specification from such a host in the vCenter Server instance where you create the cluster. In the /var/log/lifecycle.log, you see messages such as:

    2020-11-11T06:54:03Z lifecycle: 1000082644: HostSeeding:499 ERROR Extract depot failed: Checksum doesn't match. Calculated 5b404e28e83b1387841bb417da93c8c796ef2497c8af0f79583fd54e789d8826, expected: 0947542e30b794c721e21fb595f1851b247711d0619c55489a6a8cae6675e796
    2020-11-11T06:54:04Z lifecycle: 1000082644: imagemanagerctl:366 ERROR Extract depot failed.
    2020-11-11T06:54:04Z lifecycle: 1000082644: imagemanagerctl:145 ERROR [VibChecksumError]

    Workaround: Follow the steps described in VMware knowledge base article 83042.

  • You see a short burst of log messages in the syslog.log after every ESXi boot

    After updating to ESXi 7.0 Update 2, you might see a short burst of log messages after every ESXi boot.

    Such logs do not indicate any issue with ESXi and you can ignore these messages. For example:

    2021-01-19T22:44:22Z watchdog-vaai-nasd: '/usr/lib/vmware/nfs/bin/vaai-nasd -f' exited after 0 seconds (quick failure 127) 1
    2021-01-19T22:44:22Z watchdog-vaai-nasd: Executing '/usr/lib/vmware/nfs/bin/vaai-nasd -f'
    2021-01-19T22:44:22.990Z vaainasd[1000051135]: Log for VAAI-NAS Daemon for NFS version=1.0 build=build-00000 option=DEBUG
    2021-01-19T22:44:22.990Z vaainasd[1000051135]: DictionaryLoadFile: No entries loaded by dictionary.
    2021-01-19T22:44:22.990Z vaainasd[1000051135]: DictionaryLoad: Cannot open file "/usr/lib/vmware/config": No such file or directory.
    2021-01-19T22:44:22.990Z vaainasd[1000051135]: DictionaryLoad: Cannot open file "//.vmware/config": No such file or directory.
    2021-01-19T22:44:22.990Z vaainasd[1000051135]: DictionaryLoad: Cannot open file "//.vmware/preferences": No such file or directory.
    2021-01-19T22:44:22.990Z vaainasd[1000051135]: Switching to VMware syslog extensions
    2021-01-19T22:44:22.992Z vaainasd[1000051135]: Loading VAAI-NAS plugin(s).
    2021-01-19T22:44:22.992Z vaainasd[1000051135]: DISKLIB-PLUGIN : Not loading plugin /usr/lib/vmware/nas_plugins/lib64: Not a shared library.

    Workaround: None

  • You see warning messages for missing VIBs in vSphere Quick Boot compatibility check reports

    After you upgrade to ESXi 7.0 Update 2, if you check vSphere Quick Boot compatibility of your environment by using the /usr/lib/vmware/loadesx/bin/loadESXCheckCompat.py command, you might see some warning messages for missing VIBs in the shell. For example:

    Cannot find VIB(s) ... in the given VIB collection.
    Ignoring missing reserved VIB(s) ..., they are removed from reserved VIB IDs.

    Such warnings do not indicate a compatibility issue.

    Workaround: The missing VIB messages can be safely ignored and do not affect the reporting of vSphere Quick Boot compatibility. The final output line of the loadESXCheckCompat command unambiguously indicates if the host is compatible.

  • Auto bootstrapping a cluster that you manage with a vSphere Lifecycle Manager image fails with an error

    If you attempt auto bootstrapping a cluster that you manage with a vSphere Lifecycle Manager image to perform a stateful install and overwrite the VMFS partitions, the operation fails with an error. In the support bundle, you see messages such as:

    2021-02-11T19:37:43Z Host Profiles[265671 opID=MainThread]: ERROR: EngineModule::ApplyHostConfig. Exception: [Errno 30] Read-only file system

    Workaround: Follow vendor guidance to clean the VMFS partition in the target host and retry the operation. Alternatively, use an empty disk. For more information on the disk-partitioning utility on ESXi, see VMware knowledge base article 1036609.

  • Upgrades to ESXi 7.x from 6.5.x and 6.7.0 by using ESXCLI might fail due to a space limitation

    Upgrades to ESXi 7.x from 6.5.x and 6.7.0 by using the esxcli software profile update or esxcli software profile install ESXCLI commands might fail, because the ESXi bootbank might be less than the size of the image profile. In the ESXi Shell or the PowerCLI shell, you see an error such as:

    [InstallationError]
     The pending transaction requires 244 MB free space, however the maximum supported size is 239 MB.
     Please refer to the log file for more details.

    The issue also occurs when you attempt an ESXi host upgrade by using the ESXCLI commands esxcli software vib update or esxcli software vib install.

    Workaround: You can perform the upgrade in two steps, by using the esxcli software profile update command to update ESXi hosts to ESXi 6.7 Update 1 or later, and then update to 7.0 Update 1c. Alternatively, you can run an upgrade by using an ISO image and the vSphere Lifecycle Manager.
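
    For illustration only, a sketch of the two-step ESXCLI upgrade; the depot paths and profile names are placeholders, not values taken from this release:

    # Step 1: update the host to an intermediate ESXi 6.7 Update 1 or later image profile, then reboot
    esxcli software profile update -d /vmfs/volumes/datastore1/<67-depot>.zip -p <67-profile-name>
    # Step 2: after the reboot, update to the ESXi 7.0 Update 1c or later image profile
    esxcli software profile update -d /vmfs/volumes/datastore1/<70-depot>.zip -p <70-profile-name>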

  • You cannot migrate linked clones across vCenter Servers

    If you migrate a linked clone across vCenter Servers, operations such as power on and delete might fail for the source virtual machine with an Invalid virtual machine state error.

    Workaround: Keep linked clones on the same vCenter Server as the source VM. Alternatively, promote the linked clone to full clone before migration.

  • Migration across vCenter Servers of virtual machines with many virtual disks and snapshot levels to a datastore on NVMe over TCP storage might fail

    Migration across vCenter Servers of virtual machines with more than 180 virtual disks and 32 snapshot levels to a datastore on NVMe over TCP storage might fail. The ESXi host preemptively fails with an error such as The migration has exceeded the maximum switchover time of 100 second(s).

    Workaround: None

  • A virtual machine with enabled Virtual Performance Monitoring Counters (VPMC) might fail to migrate between ESXi hosts

    If you try to migrate a virtual machine with enabled VPMC by using vSphere vMotion, the operation might fail if the target host is using some of the counters to compute memory or performance statistics. The operation fails with an error such as A performance counter used by the guest is not available on the host CPU.

    Workaround: Power off the virtual machine and use cold migration. For more information, see VMware knowledge base article 81191.

  • If a live VIB install, upgrade, or remove operation immediately precedes an interactive or scripted upgrade to ESXi 7.0 Update 3 by using the installer ISO, the upgrade fails

    When a VIB install, upgrade, or remove operation immediately precedes an interactive or scripted upgrade to ESXi 7.0 Update 3 by using the installer ISO, the ConfigStore might not keep some configurations of the upgrade. As a result, ESXi hosts become inaccessible after the upgrade operation, although the upgrade seems successful. To prevent this issue, the ESXi 7.0 Update 3 installer adds a temporary check to block such scenarios. In the ESXi installer console, you see the following error message: Live VIB installation, upgrade or removal may cause subsequent ESXi upgrade to fail when using the ISO installer.

    Workaround: Use an alternative upgrade method to avoid the issue, such as using ESXCLI or the vSphere Lifecycle Manager.

  • Smart Card and RSA SecurID authentication might stop working after upgrading to vCenter Server 7.0

    If you have configured vCenter Server for either Smart Card or RSA SecurID authentication, see the VMware knowledge base article at https://kb.vmware.com/s/article/78057 before starting the vSphere 7.0 upgrade process. If you do not perform the workaround as described in the KB, you might see the following error messages and Smart Card or RSA SecurID authentication does not work.

    "Smart card authentication may stop working. Smart card settings may not be preserved, and smart card authentication may stop working."

    or

    "RSA SecurID authentication may stop working. RSA SecurID settings may not be preserved, and RSA SecurID authentication may stop working."

    Workaround: Before upgrading to vSphere 7.0, see the VMware knowledge base article at https://kb.vmware.com/s/article/78057.

  • The vlanid property in custom installation scripts might not work

    If you use a custom installation script that sets the vlanid property to specify a desired VLAN, the property might not take effect on newly installed ESXi hosts. The issue occurs only when a physical NIC is already connected to DHCP when the installation starts. The vlanid property works properly when you use a newly connected NIC.

    Workaround: Manually set the VLAN from the Direct Console User Interface after you boot the ESXi host. Alternatively, disable the physical NIC and then boot the host.
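
    If you prefer the command line to the Direct Console User Interface, a sketch that assigns the VLAN to a standard switch port group; the port group name "Management Network" and VLAN ID 100 are assumptions for your environment:

    esxcli network vswitch standard portgroup set -p "Management Network" -v 100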

  • HPE servers with Trusted Platform Module (TPM) boot, but remote attestation fails

    Some HPE servers do not have enough event log space to properly finish TPM remote attestation. As a result, the VMkernel boots, but remote attestation fails due to the truncated log.

    Workaround: None.

  • Upgrading a vCenter Server with an external Platform Services Controller from 6.7u3 to 7.0 fails with VMAFD error

    When you upgrade a vCenter Server deployment using an external Platform Services Controller, you converge the Platform Services Controller into a vCenter Server appliance. If the upgrade fails with the error install.vmafd.vmdir_vdcpromo_error_21, the VMAFD firstboot process has failed. The VMAFD firstboot process copies the VMware Directory Service Database (data.mdb) from the source Platform Services Controller and replication partner vCenter Server appliance.

    Workaround: Disable TCP Segmentation Offload (TSO) and Generic Segmentation Offload (GSO) on the Ethernet adapter of the source Platform Services Controller or replication partner vCenter Server appliance before upgrading a vCenter Server with an external Platform Services Controller. See Knowledge Base article: https://kb.vmware.com/s/article/74678

  • Smart card and RSA SecurID settings may not be preserved during vCenter Server upgrade

    Authentication using RSA SecurID will not work after upgrading to vCenter Server 7.0. An error message alerts you to this issue when you attempt to log in by using RSA SecurID.

    Workaround: Reconfigure the smart card or RSA SecurID authentication.

  • Migration of vCenter Server for Windows to vCenter Server appliance 7.0 fails with network error message

    Migration of vCenter Server for Windows to vCenter Server appliance 7.0 fails with the error message IP already exists in the network. This prevents the migration process from configuring the network parameters on the new vCenter Server appliance. For more information, examine the log file: /var/log/vmware/upgrade/UpgradeRunner.log

    Workaround:

    1. Verify that all Windows Updates have been completed on the source vCenter Server for Windows instance, or disable automatic Windows Updates until after the migration finishes.

    2. Retry the migration of vCenter Server for Windows to vCenter Server appliance 7.0.

  • When you configure the number of virtual functions for an SR-IOV device by using the max_vfs module parameter, the changes might not take effect

    In vSphere 7.0, you can configure the number of virtual functions for an SR-IOV device by using the Virtual Infrastructure Management (VIM) API, for example, through the vSphere Client. The task does not require reboot of the ESXi host. After you use the VIM API configuration, if you try to configure the number of SR-IOV virtual functions by using the max_vfs module parameter, the changes might not take effect because they are overridden by the VIM API configuration.

    Workaround: None. To configure the number of virtual functions for an SR-IOV device, use the same method every time. Use the VIM API or use the max_vfs module parameter and reboot the ESXi host.
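
    As an illustration of the module parameter method, a sketch that assumes an ixgben NIC with eight virtual functions per port; substitute the driver name and values for your device, and reboot the host afterwards:

    esxcli system module parameters set -m ixgben -p "max_vfs=8,8"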

  • Upgraded vCenter Server appliance instance does not retain all the secondary networks (NICs) from the source instance

    During a major upgrade, if the source instance of the vCenter Server appliance is configured with multiple secondary networks other than the VCHA NIC, the target vCenter Server instance will not retain secondary networks other than the VCHA NIC. If the source instance is configured with multiple NICs that are part of VDS port groups, the NIC configuration will not be preserved during the upgrade. Configurations for vCenter Server appliance instances that are part of the standard port group will be preserved.

    Workaround: None. Manually configure the secondary network in the target vCenter Server appliance instance.

  • After upgrading or migrating a vCenter Server with an external Platform Services Controller, users authenticating using Active Directory lose access to the newly upgraded vCenter Server instance

    After upgrading or migrating a vCenter Server with an external Platform Services Controller, if the newly upgraded vCenter Server is not joined to an Active Directory domain, users authenticating using Active Directory will lose access to the vCenter Server instance.

    Workaround: Verify that the new vCenter Server instance has been joined to an Active Directory domain. See Knowledge Base article: https://kb.vmware.com/s/article/2118543

  • Migrating a vCenter Server for Windows with an external Platform Services Controller using an Oracle database fails

    If there are non-ASCII strings in the Oracle events and tasks table, the migration can fail when exporting events and tasks data. The following error message is displayed: UnicodeDecodeError

    Workaround: None.

  • After an ESXi host upgrade, a Host Profile compliance check shows non-compliant status while host remediation tasks fail

    The non-compliant status indicates an inconsistency between the profile and the host.

    This inconsistency might occur because ESXi 7.0 does not allow duplicate claim rules, but the profile you use contains duplicate rules. For example, if you attempt to use the Host Profile that you extracted from the host before upgrading ESXi 6.5 or ESXi 6.7 to version 7.0, and the Host Profile contains any duplicate claim rules of system default rules, you might experience this problem.

    Workaround:

    1. Remove any duplicate claim rules of the system default rules from the Host Profile document.

    2. Check the compliance status.

    3. Remediate the host.

    4. If the previous steps do not help, reboot the host.

  • Error message displays in the vCenter Server Management Interface

    After installing or upgrading to vCenter Server 7.0, when you navigate to the Update panel within the vCenter Server Management Interface, the error message "Check the URL and try again" displays. The error message does not prevent you from using the functions within the Update panel, and you can view, stage, and install any available updates.

    Workaround: None.

Security Features Issues

  • Turn off the Service Location Protocol service in ESXi, slpd, to prevent potential security vulnerabilities

    Some services in ESXi that run on top of the host operating system, including slpd, the CIM object broker, sfcbd, and the related openwsmand service, have proven security vulnerabilities. VMware has addressed all known vulnerabilities in VMSA-2019-0022 and VMSA-2020-0023, and the fixes are part of the vSphere 7.0 Update 2 release. While sfcbd and openwsmand are disabled by default in ESXi, slpd is enabled by default and, if you do not need it, you must turn it off to prevent exposure to a future vulnerability after an upgrade.

    Workaround: To turn off the slpd service, run the following PowerCLI commands:

    $ Get-VMHost | Get-VMHostService | Where-Object {$_.key -eq "slpd"} | Set-VMHostService -policy "off"
    $ Get-VMHost | Get-VMHostService | Where-Object {$_.key -eq "slpd"} | Stop-VMHostService -Confirm:$false

    Alternatively, you can use the command chkconfig slpd off && /etc/init.d/slpd stop.

    The openwsmand service is not on the ESXi services list and you can check the service state by using the following PowerCLI commands:

    $esx = (Get-EsxCli -vmhost xx.xx.xx.xx -v2)
    $esx.system.process.list.invoke() | where CommandLine -like '*openwsman*' | select commandline

    In the ESXi services list, the sfcbd service appears as sfcbd-watchdog.

    For more information, see VMware knowledge base articles 76372 and 1025757.

  • Encrypted virtual machine fails to power on when HA-enabled Trusted Cluster contains an unattested host

    In VMware® vSphere Trust Authority™, if you have enabled HA on the Trusted Cluster and one or more hosts in the cluster fails attestation, an encrypted virtual machine cannot power on.

    Workaround: Either remove or remediate all hosts that failed attestation from the Trusted Cluster.

  • Encrypted virtual machine fails to power on when DRS-enabled Trusted Cluster contains an unattested host

    In VMware® vSphere Trust Authority™, if you have enabled DRS on the Trusted Cluster and one or more hosts in the cluster fails attestation, DRS might try to power on an encrypted virtual machine on an unattested host in the cluster. This operation puts the virtual machine in a locked state.

    Workaround: Either remove or remediate all hosts that failed attestation from the Trusted Cluster.

  • Migrating or cloning encrypted virtual machines across vCenter Server instances fails when attempting to do so using the vSphere Client

    If you try to migrate or clone an encrypted virtual machine across vCenter Server instances using the vSphere Client, the operation fails with the following error message: "The operation is not allowed in the current state."

    Workaround: You must use the vSphere APIs to migrate or clone encrypted virtual machines across vCenter Server instances.

Networking Issues

  • Reduced throughput in networking performance on Intel 82599/X540/X550 NICs

    The new queue-pair feature added to the ixgben driver to improve networking performance on Intel 82599EB/X540/X550 series NICs might reduce throughput under some workloads in vSphere 7.0 as compared to vSphere 6.7.

    Workaround: To achieve the same networking performance as vSphere 6.7, you can disable the queue-pair with a module parameter. To disable the queue-pair, run the command:

    # esxcli system module parameters set -p "QPair=0,0,0,0..." -m ixgben

    After running the command, reboot.
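
    After the reboot, you can confirm the parameter with a quick check; this is a sketch and not part of the documented workaround:

    # esxcli system module parameters list -m ixgben | grep QPair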

  • One or more I/O devices do not generate interrupts when the AMD IOMMU is in use

    If the I/O devices on your ESXi host provide more than a total of 512 distinct interrupt sources, some sources are erroneously assigned an interrupt-remapping table entry (IRTE) index in the AMD IOMMU that is greater than the maximum value. Interrupts from such a source are lost, so the corresponding I/O device behaves as if interrupts are disabled.

    Workaround: Use the ESXCLI command esxcli system settings kernel set -s iovDisableIR -v true to disable the AMD IOMMU interrupt remapper. Reboot the ESXi host so that the command takes effect.
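
    To verify the kernel setting before you reboot, a minimal check:

    esxcli system settings kernel list -o iovDisableIR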

  • When you set auto-negotiation on a network adapter, the device might fail

    In some environments, if you set the link speed to auto-negotiation for network adapters by using the command esxcli network nic set -a -n vmnicX, the devices might fail and a reboot does not recover connectivity. The issue is specific to a combination of some Intel X710/X722 network adapters, an SFP+ module, and a physical switch, where the auto-negotiate speed/duplex scenario is not supported.

    Workaround: Make sure you use an Intel-branded SFP+ module. Alternatively, use a Direct Attach Copper (DAC) cable.

  • Solarflare x2542 and x2541 network adapters configured in 1x100G port mode achieve throughput of up to 70Gbps in a vSphere environment

    vSphere 7.0 Update 2 supports Solarflare x2542 and x2541 network adapters configured in 1x100G port mode. However, a hardware limitation in the devices might cap the actual throughput at about 70Gbps in a vSphere environment.

    Workaround: None

  • VLAN traffic might fail after a NIC reset

    A NIC with PCI device ID 8086:1537 might stop sending and receiving VLAN tagged packets after a reset, for example, with the command vsish -e set /net/pNics/vmnic0/reset 1.

    Workaround: Avoid resetting the NIC. If you already face the issue, use the following commands to restore the VLAN capability, for example at vmnic0:

    # esxcli network nic software set --tagging=1 -n vmnic0
    # esxcli network nic software set --tagging=0 -n vmnic0

  • Any change in the NetQueue balancer settings causes NetQueue to be disabled after an ESXi host reboot

    Any change in the NetQueue balancer settings by using the command esxcli/localcli network nic queue loadbalancer set -n <nicname> --<lb_setting> causes NetQueue, which is enabled by default, to be disabled after an ESXi host reboot.

    Workaround: After a change in the NetQueue balancer settings and a host reboot, use the command configstorecli config current get -c esx -g network -k nics to retrieve the ConfigStore data and verify whether /esx/network/nics/net_queue/load_balancer/enable is set as expected.

    After you run the command, you see output similar to:

    {"mac": "02:00:0e:6d:14:3e","name": "vmnic1","net_queue": { "load_balancer": { "dynamic_pool": true, "enable": true }},"virtual_mac": "00:50:56:5a:21:11"}

    If the output is not as expected, for example "enable": false under "load_balancer", run the following command:

    esxcli/localcli network nic queue loadbalancer state set -n <nicname> -e true

  • Paravirtual RDMA (PVRDMA) network adapters do not support NSX networking policies

    If you configure an NSX distributed virtual port for use in PVRDMA traffic, the RDMA protocol traffic over the PVRDMA network adapters does not comply with the NSX network policies.

    Workaround: Do not configure NSX distributed virtual ports for use in PVRDMA traffic.

  • Rollback from converged vSphere Distributed Switch (VDS) to NSX-T VDS is not supported in vSphere 7.0 Update 3

    Rollback from a converged VDS that carries both vSphere 7 traffic and NSX-T 3 traffic on the same switch to a separate N-VDS for NSX-T traffic is not supported in vSphere 7.0 Update 3.

    Workaround: None

  • If you do not set the nmlx5 network driver module parameter, network connectivity or ESXi hosts might fail

    If you do not set the supported_num_ports module parameter for the nmlx5_core driver on an ESXi host with multiple Mellanox ConnectX-4, ConnectX-5, and ConnectX-6 network adapters, the driver might not allocate sufficient memory for operating all the NIC ports on the host. As a result, you might experience network loss, or ESXi host failure with a purple diagnostic screen, or both.

    Workaround: Set the supported_num_ports module parameter value in the nmlx5_core network driver equal to the total number of Mellanox ConnectX-4, Mellanox ConnectX-5 and Mellanox ConnectX-6 network adapter ports on the ESXi host.
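
    A sketch, assuming the host has a total of six ConnectX ports; adjust the value to your actual port count and reboot the host for the change to take effect:

    esxcli system module parameters set -m nmlx5_core -p "supported_num_ports=6"
    esxcli system module parameters list -m nmlx5_core | grep supported_num_ports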

  • High throughput virtual machines may experience degradation in network performance when Network I/O Control (NetIOC) is enabled

    Virtual machines requiring high network throughput can experience throughput degradation when upgrading from vSphere 6.7 to vSphere 7.0 with NetIOC enabled.

    Workaround: Adjust the ethernetx.ctxPerDev setting to enable multiple worlds.
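
    For example, a sketch of the per-vNIC entry in the VM's .vmx file; ethernet0 is an assumption for the high-throughput virtual NIC:

    ethernet0.ctxPerDev = "1"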

  • IPv6 traffic fails to pass through VMkernel ports using IPsec

    When you migrate VMkernel ports from one port group to another, IPv6 traffic does not pass through VMkernel ports using IPsec.

    Workaround: Remove the IPsec security association (SA) from the affected server, and then reapply the SA. To learn how to set and remove an IPsec SA, see the vSphere Security documentation.
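
    A sketch of listing and removing an SA with ESXCLI; the SA name is a placeholder, and the parameters for re-adding the SA depend on your configuration as described in the vSphere Security documentation:

    esxcli network ip ipsec sa list
    esxcli network ip ipsec sa remove --sa-name <sa-name>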

  • Higher ESX network performance with a portion of CPU usage increase

    ESX network performance might increase at the cost of some additional CPU usage.

    Workaround: Remove and add the network interface with only 1 rx dispatch queue. For example:

    esxcli network ip interface remove --interface-name=vmk1

    esxcli network ip interface add --interface-name=vmk1 --num-rxqueue=1

  • VM might lose Ethernet traffic after hot-add, hot-remove or storage vMotion

    A VM might stop receiving Ethernet traffic after a hot-add, hot-remove or storage vMotion. This issue affects VMs where the uplink of the VNIC has SR-IOV enabled. PVRDMA virtual NIC exhibits this issue when the uplink of the virtual network is a Mellanox RDMA capable NIC and RDMA namespaces are configured.

    Workaround: You can hot-remove and hot-add the affected Ethernet NICs of the VM to restore traffic. On Linux guest operating systems, restarting the network might also resolve the issue. If these workarounds have no effect, you can reboot the VM to restore network connectivity.

  • Change of IP address for a VCSA deployed with static IP address requires that you create the DNS records in advance

    With the introduction of DDNS, the DNS record update works only for a VCSA deployed with DHCP-configured networking. When you change the IP address of the vCenter Server through the VAMI, the following error is displayed:

    The specified IP address does not resolve to the specified hostname.

    Workaround: There are two possible workarounds.

    1. Create an additional DNS entry with the same FQDN and desired IP address. Log in to the VAMI and follow the steps to change the IP address.

    2. Log in to the VCSA using ssh. Execute the following script:

      /opt/vmware/share/vami/vami_config_net

      Use option 6 to change the IP address of eth0. Once changed, execute the following script:

      /opt/likewise/bin/lw-update-dns

      Restart all the services on the VCSA to update the IP information on the DNS server.

  • It may take several seconds for the NSX Distributed Virtual Port Group (NSX DVPG) to be removed after deleting the corresponding logical switch in NSX Manager.

    As the number of logical switches increases, it may take more time for the NSX DVPG in vCenter Server to be removed after deleting the corresponding logical switch in NSX Manager. In an environment with 12000 logical switches, it takes approximately 10 seconds for an NSX DVPG to be deleted from vCenter Server.

    Workaround: None.

  • Hostd runs out of memory and fails if a large number of NSX Distributed Virtual port groups are created.

    In vSphere 7.0, NSX Distributed Virtual port groups consume significantly larger amounts of memory than opaque networks. For this reason, NSX Distributed Virtual port groups cannot support the same scale as an opaque network given the same amount of memory.

    Workaround: To support the use of NSX Distributed Virtual port groups, increase the amount of memory in your ESXi hosts. If you verify that your system has adequate memory to support your VMs, you can directly increase the memory of hostd by using the following command.

    localcli --plugin-dir /usr/lib/vmware/esxcli/int/ sched group setmemconfig --group-path host/vim/vmvisor/hostd --units mb --min 2048 --max 2048

    Note that this will cause hostd to use memory normally reserved for your environment's VMs. This may have the effect of reducing the number of VMs your ESXi host can support.

  • DRS may incorrectly launch vMotion if the network reservation is configured on a VM

    If the network reservation is configured on a VM, it is expected that DRS only migrates the VM to a host that meets the specified requirements. In a cluster with NSX transport nodes, if some of the transport nodes join the transport zone by NSX-T Virtual Distributed Switch (N-VDS), and others by vSphere Distributed Switch (VDS) 7.0, DRS may incorrectly launch vMotion. You might encounter this issue when:

    • The VM connects to an NSX logical switch configured with a network reservation.

    • Some transport nodes join transport zone using N-VDS, and others by VDS 7.0, or, transport nodes join the transport zone through different VDS 7.0 instances.

    Workaround: Make all transport nodes join the transport zone by N-VDS or the same VDS 7.0 instance.

  • When adding a VMkernel NIC (vmknic) to an NSX portgroup, vCenter Server reports the error "Connecting VMKernel adapter to a NSX Portgroup on a Stateless host is not a supported operation. Please use Distributed Port Group instead."

    • For stateless ESXi on a Distributed Virtual Switch (VDS), the vmknic on an NSX port group is blocked. You must instead use a Distributed Port Group.

    • For stateful ESXi on VDS, a vmknic on an NSX port group is supported, but vSAN may have an issue if it uses a vmknic on an NSX port group.

    Workaround: Use a Distributed Port Group on the same VDS.

  • Enabling SRIOV from vCenter for QLogic 4x10GE QL41164HFCU CNA might fail

    If you navigate to the Edit Settings dialog for physical network adapters and attempt to enable SR-IOV, the operation might fail when using QLogic 4x10GE QL41164HFCU CNA. Attempting to enable SR-IOV might lead to a network outage of the ESXi host.

    Workaround: Use the esxcfg-module command on the ESXi host to enable SR-IOV, as sketched below.
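
    The following is a sketch only: it assumes the CNA uses the qedentv driver and that the driver exposes a max_vfs module option. Verify the actual driver name and parameter in your driver documentation before applying it.

    # Assumed driver name and parameter; enables 8 virtual functions on each of two ports
    esxcfg-module -s "max_vfs=8,8" qedentv
    # Reboot the host for the module option to take effect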

  • vCenter Server fails if the hosts in a cluster using Distributed Resource Scheduler (DRS) join NSX-T networking by a different Virtual Distributed Switch (VDS) or combination of NSX-T Virtual Distributed Switch (NVDS) and VDS

    In vSphere 7.0, when using NSX-T networking on vSphere VDS with a DRS cluster, if the hosts do not join the NSX transport zone by the same VDS or NVDS, it can cause vCenter Server to fail.

    Workaround: Have hosts in a DRS cluster join the NSX transport zone using the same VDS or NVDS.

Storage Issues

  • VMFS datastores are not mounted automatically after disk hot remove and hot insert on HPE Gen10 servers with SmartPQI controllers

    When SATA disks on HPE Gen10 servers with SmartPQI controllers without expanders are hot removed and hot inserted back to a different disk bay of the same machine, or when multiple disks are hot removed and hot inserted back in a different order, sometimes a new local name is assigned to the disk. The VMFS datastore on that disk appears as a snapshot and will not be mounted back automatically because the device name has changed.

    Workaround: None. SmartPQI controller does not support unordered hot remove and hot insert operations.

  • VOMA check on NVMe based VMFS datastores fails with error

    VOMA check is not supported for NVMe based VMFS datastores and will fail with the error:

    ERROR: Failed to reserve device. Function not implemented 

    Example:

    # voma -m vmfs -f check -d /vmfs/devices/disks/: <partition#>
    Running VMFS Checker version 2.1 in check mode
    Initializing LVM metadata, Basic Checks will be done
    
    Checking for filesystem activity
    Performing filesystem liveness check..|Scanning for VMFS-6 host activity (4096 bytes/HB, 1024 HBs).
    ERROR: Failed to reserve device. Function not implemented
    Aborting VMFS volume check
    VOMA failed to check device : General Error

    Workaround: None. If you need to analyze VMFS metadata, collect it by using the -l option, and pass it to VMware customer support. The command for collecting the dump is:

    voma -l -f dump -d /vmfs/devices/disks/:<partition#> 

  • Using the VM reconfigure API to attach an encrypted First Class Disk to an encrypted virtual machine might fail with error

    If an FCD and a VM are encrypted with different crypto keys, your attempts to attach the encrypted FCD to the encrypted VM using the VM reconfigure API might fail with the error message:

    Cannot decrypt disk because key or password is incorrect.

    Workaround: Use the attachDisk API rather than the VM reconfigure API to attach an encrypted FCD to an encrypted VM.

  • ESXi host might become unresponsive if a non-head extent of its spanned VMFS datastore enters the Permanent Device Loss (PDL) state

    This problem does not occur when a non-head extent of the spanned VMFS datastore fails along with the head extent. In this case, the entire datastore becomes inaccessible and no longer allows I/Os.

    In contrast, when only a non-head extent fails but the head extent remains accessible, the datastore heartbeat appears to be normal, and the I/Os between the host and the datastore continue. However, any I/Os that depend on the failed non-head extent start failing as well. Other I/O transactions might accumulate while waiting for the failing I/Os to resolve and cause the host to become unresponsive.

    Workaround: Fix the PDL condition of the non-head extent to resolve this issue.

  • Virtual NVMe Controller is the default disk controller for Windows 10 guest operating systems

    The Virtual NVMe Controller is the default disk controller for the following guest operating systems when using Hardware Version 15 or later:

    Windows 10

    Windows Server 2016

    Windows Server 2019

    Some features might not be available when using a Virtual NVMe Controller. For more information, see https://kb.vmware.com/s/article/2147714

    Note: Some clients use the previous default of LSI Logic SAS. This includes ESXi host client and PowerCLI.

    Workaround: If you need features not available on Virtual NVMe, switch to VMware Paravirtual SCSI (PVSCSI) or LSI Logic SAS. For information on using VMware Paravirtual SCSI (PVSCSI), see https://kb.vmware.com/s/article/1010398
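
    For reference, a sketch of the .vmx entry when the VM's first SCSI controller is VMware Paravirtual SCSI; in practice you change the controller type through Edit Settings in the vSphere Client while the VM is powered off:

    scsi0.virtualDev = "pvscsi"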

  • After an ESXi host upgrade to vSphere 7.0, presence of duplicate core claim rules might cause unexpected behavior

    Claim rules determine which multipathing plugin, such as NMP, HPP, and so on, owns paths to a particular storage device. ESXi 7.0 does not support duplicate claim rules. However, the ESXi 7.0 host does not alert you if you add duplicate rules to the existing claim rules inherited through an upgrade from a legacy release. As a result of using duplicate rules, storage devices might be claimed by unintended plugins, which can cause unexpected outcome.

    Workaround: Do not use duplicate core claim rules. Before adding a new claim rule, delete any existing matching claim rule.

  • A CNS query with the compliance status filter set might take unusually long time to complete

    The CNS QueryVolume API enables you to obtain information about the CNS volumes, such as volume health and compliance status. When you check the compliance status of individual volumes, the results are obtained quickly. However, when you invoke the CNS QueryVolume API to check the compliance status of multiple volumes, several tens or hundreds, the query might perform slowly.

    Workaround: Avoid using bulk queries. When you need to get compliance status, query one volume at a time or limit the number of volumes in the query API to 20 or fewer. While using the query, avoid running other CNS operations to get the best performance.

  • A VMFS datastore backed by an NVMe over Fabrics namespace or device might become permanently inaccessible after recovering from an APD or PDL failure

    If a VMFS datastore on an ESXi host is backed by an NVMe over Fabrics namespace or device, in case of an all paths down (APD) or permanent device loss (PDL) failure, the datastore might be inaccessible even after recovery. You cannot access the datastore from either the ESXi host or the vCenter Server system.

    Workaround: To recover from this state, perform a rescan on a host or cluster level. For more information, see Perform Storage Rescan.

  • Deleted CNS volumes might temporarily appear as existing in the CNS UI

    After you delete an FCD disk that backs a CNS volume, the volume might still show up as existing in the CNS UI. However, your attempts to delete the volume fail. You might see an error message similar to the following:

    The object or item referred to could not be found.

    Workaround: The next full synchronization will resolve the inconsistency and correctly update the CNS UI.

  • Attempts to attach multiple CNS volumes to the same pod might occasionally fail with an error

    When you attach multiple volumes to the same pod simultaneously, the attach operation might occasionally choose the same controller slot. As a result, only one of the operations succeeds, while other volume mounts fail.

    Workaround: After Kubernetes retries the failed operation, the operation succeeds if a controller slot is available on the node VM.

  • Under certain circumstances, while a CNS operation fails, the task status appears as successful in the vSphere Client

    This might occur when, for example, you use an incompliant storage policy to create a CNS volume. The operation fails, while the vSphere Client shows the task status as successful.

    Workaround: The successful task status in the vSphere Client does not guarantee that the CNS operation succeeded. To make sure the operation succeeded, verify its results.

  • Unsuccessful delete operation for a CNS persistent volume might leave the volume undeleted on the vSphere datastore

    This issue might occur when the CNS Delete API attempts to delete a persistent volume that is still attached to a pod. For example, when you delete the Kubernetes namespace where the pod runs. As a result, the volume gets cleared from CNS and the CNS query operation does not return the volume. However, the volume continues to reside on the datastore and cannot be deleted through the repeated CNS Delete API operations.

    Workaround: None.

vCenter Server and vSphere Client Issues

  • Vendor providers go offline after a PNID change​

    When you change the vCenter IP address (PNID change), the registered vendor providers go offline.

    Workaround: Re-register the vendor providers.

  • Cross vCenter migration of a virtual machine fails with an error

    When you use cross vCenter vMotion to move a VM's storage and host to a different vCenter server instance, you might receive the error The operation is not allowed in the current state.

    This error appears in the UI wizard after the Host Selection step and before the Datastore Selection step, in cases where the VM has an assigned storage policy containing host-based rules such as encryption or any other IO filter rule.

    Workaround: Assign the VM and its disks to a storage policy without host-based rules. You might need to decrypt the VM if the source VM is encrypted. Then retry the cross vCenter vMotion action.

  • Storage Sensors information in Hardware Health tab shows incorrect values on vCenter UI, host UI, and MOB

    When you navigate to Host > Monitor > Hardware Health > Storage Sensors on vCenter UI, the storage information displays either incorrect or unknown values. The same issue is observed on the host UI and the MOB path "runtime.hardwareStatusInfo.storageStatusInfo" as well.

    Workaround: None.

  • vSphere UI host advanced settings shows the current product locker location as empty with an empty default

    The vSphere UI host advanced settings show the current product locker location as empty with an empty default. This is inconsistent because the actual product locker location symlink is created and valid, which can confuse users. The default cannot be corrected from the UI.

    Workaround: Use the esxcli commands on the host to correct the current product locker location default as follows.

    1. Remove the existing Product Locker Location setting with: "esxcli system settings advanced remove -o ProductLockerLocation"

    2. Re-add the Product Locker Location setting with the appropriate default:

    2.a. If the ESXi host is a full installation, the default value is "/locker/packages/vmtoolsRepo":

    export PRODUCT_LOCKER_DEFAULT="/locker/packages/vmtoolsRepo"

    2.b. If the ESXi host is a PXEboot configuration such as autodeploy, the default value is "/vmtoolsRepo":

    export PRODUCT_LOCKER_DEFAULT="/vmtoolsRepo"

    Run the following command to automatically figure out the location: export PRODUCT_LOCKER_DEFAULT=`readlink /productLocker`

    Add the setting: esxcli system settings advanced add -d "Path to VMware Tools repository" -o ProductLockerLocation -t string -s $PRODUCT_LOCKER_DEFAULT

    You can combine all the above steps in step 2 by issuing the single command:

    esxcli system settings advanced add -d "Path to VMware Tools repository" -o ProductLockerLocation -t string -s `readlink /productLocker`

  • Linked Software-Defined Data Center (SDDC) vCenter Server instances appear in the on-premises vSphere Client if a vCenter Cloud Gateway is linked to the SDDC.

    When a vCenter Cloud Gateway is deployed in the same environment as an on-premises vCenter Server, and linked to an SDDC, the SDDC vCenter Server will appear in the on-premises vSphere Client. This is unexpected behavior and the linked SDDC vCenter Server should be ignored. All operations involving the linked SDDC vCenter Server should be performed on the vSphere Client running within the vCenter Cloud Gateway.

    Workaround: None.

Virtual Machine Management Issues

  • UEFI HTTP booting of virtual machines on ESXi hosts of version earlier than 7.0 Update 2 fails

    UEFI HTTP booting of virtual machines is supported only on hosts of version ESXi 7.0 Update 2 and later and VMs with HW version 19 or later.

    Workaround: Use UEFI HTTP booting only in virtual machines with HW version 19 or later. Using HW version 19 ensures the virtual machines are placed only on hosts with ESXi version 7.0 Update 2 or later.

  • Virtual machine snapshot operations fail in vSphere Virtual Volumes datastores on Purity version 5.3.10

    Virtual machine snapshot operations fail in vSphere Virtual Volumes datastores on Purity version 5.3.10 with an error such as An error occurred while saving the snapshot: The VVol target encountered a vendor specific error. The issue is specific for Purity version 5.3.10.

    Workaround: Upgrade to Purity version 6.1.7 or follow vendor recommendations.

  • The postcustomization section of the customization script runs before the guest customization

    When you run the guest customization script for a Linux guest operating system, the precustomization section of the customization script that is defined in the customization specification runs before the guest customization and the postcustomization section runs after that. If you enable Cloud-Init in the guest operating system of a virtual machine, the postcustomization section runs before the customization due to a known issue in Cloud-Init.

    Workaround: Disable Cloud-Init and use the standard guest customization.

  • Group migration operations in vSphere vMotion, Storage vMotion, and vMotion without shared storage fail with error

    When you perform group migration operations on VMs with multiple disks and multi-level snapshots, the operations might fail with the error com.vmware.vc.GenericVmConfigFault Failed waiting for data. Error 195887167. Connection closed by remote host, possibly due to timeout.

    Workaround: Retry the migration operation on the failed VMs one at a time.

  • Deploying an OVF or OVA template from a URL fails with a 403 Forbidden error

    URLs that contain an HTTP query parameter are not supported, for example, http://webaddress.com?file=abc.ovf or Amazon pre-signed S3 URLs.

    Workaround: Download the files and deploy them from your local file system.

  • The third level of nested objects in a virtual machine folder is not visible

    Perform the following steps:

    1. Navigate to a data center and create a virtual machine folder.

    2. In the virtual machine folder, create a nested virtual machine folder.

    3. In the second folder, create a virtual machine, virtual machine folder, vApp, or VM template.

    As a result, from the VMs and Templates inventory tree you cannot see the objects in the third nested folder.

    Workaround: To see the objects in the third nested folder, navigate to the second nested folder and select the VMs tab.

vSphere HA and Fault Tolerance Issues

  • VMs in a cluster might be orphaned after recovering from storage inaccessibility such as a cluster wide APD

    Some VMs might be in orphaned state after cluster wide APD recovers, even if HA and VMCP are enabled on the cluster.

    This issue might be encountered when the following conditions occur simultaneously:

    • All hosts in the cluster experience APD and do not recover until VMCP timeout is reached.

    • HA primary initiates failover due to APD on a host.

    • Power on API during HA failover fails due to one of the following:

      • APD across the same host

      • Cascading APD across the entire cluster

      • Storage issues

      • Resource unavailability

    • FDM unregistration and the vCenter Server steal-VM logic might both run during a window in which FDM has not yet unregistered the failed VM and vCenter Server host synchronization reports that multiple hosts are registering the same VM. As a result, both FDM and vCenter Server unregister different registered copies of the same VM from different hosts, causing the VM to be orphaned.

    Workaround: You must unregister and reregister the orphaned VMs manually within the cluster after the APD recovers.

    If you do not manually reregister the orphaned VMs, HA attempts failover of the orphaned VMs, but it might take 5 to 10 hours, depending on when the APD recovers.

    The overall functionality of the cluster is not affected in these cases and HA continues to protect the VMs. This is an anomaly only in what vCenter Server displays for the duration of the problem.
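
    If you prefer to work from the ESXi host shell rather than the vSphere Client, the following is a minimal sketch of the unregister and reregister steps. The VM ID (42) and the datastore path are placeholders, and performing the same actions from the vSphere Client inventory is equally valid:

    # List VMs registered on the host and note the ID and .vmx path of the orphaned VM
    vim-cmd vmsvc/getallvms
    # Unregister the orphaned VM by ID (42 is a placeholder)
    vim-cmd vmsvc/unregister 42
    # Reregister the VM from its .vmx file (the datastore path is a placeholder)
    vim-cmd solo/registervm /vmfs/volumes/<datastore>/<vm-name>/<vm-name>.vmx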

vSphere Lifecycle Manager Issues

  • vSphere Lifecycle Manager and vSAN File Services cannot be simultaneously enabled on a vSAN cluster in vSphere 7.0 release

    If vSphere Lifecycle Manager is enabled on a cluster, vSAN File Services cannot be enabled on the same cluster, and vice versa. To enable vSphere Lifecycle Manager on a cluster that already has vSAN File Services enabled, first disable vSAN File Services and retry the operation. Note that if you transition a cluster to management by a single image, vSphere Lifecycle Manager cannot be disabled on that cluster.

    Workaround: None.

  • When a hardware support manager is unavailable, vSphere High Availability (HA) functionality is impacted

    If the hardware support manager is unavailable for a cluster that you manage with a single image in which a firmware and drivers addon is selected and vSphere HA is enabled, vSphere HA functionality is impacted. You might see the following errors.

    • Configuring vSphere HA on a cluster fails.

    • Cannot complete the configuration of the vSphere HA agent on a host: Applying HA VIBs on the cluster encountered a failure.

    • Remediating vSphere HA fails: A general system error occurred: Failed to get Effective Component map.

    • Disabling vSphere HA fails: Delete Solution task failed. A general system error occurred: Cannot find hardware support package from depot or hardware support manager.

    Workaround:

    • If the hardware support manager is temporarily unavailable, perform the following steps.

    1. Reconnect the hardware support manager to vCenter Server.

    2. Select a cluster from the Hosts and Cluster menu.

    3. Select the Configure tab.

    4. Under Services, click vSphere Availability.

    5. Re-enable vSphere HA.

    • If the hardware support manager is permanently unavailable, perform the following steps.

    1. Remove the hardware support manager and the hardware support package from the image specification.

    2. Re-enable vSphere HA.

    3. Select a cluster from the Hosts and Cluster menu.

    4. Select the Updates tab.

    5. Click Edit.

    6. Remove the firmware and drivers addon and click Save.

    7. Select the Configure tab.

    8. Under Services, click vSphere Availability.

    9. Re-enable vSphere HA.

  • I/OFilter is not removed from a cluster after a remediation process in vSphere Lifecycle Manager

    Removing I/OFilter from a cluster by remediating the cluster in vSphere Lifecycle Manager fails with the following error message: iofilter XXX already exists. The iofilter remains listed as installed.

    Workaround:

    1. Call IOFilter API UninstallIoFilter_Task from the vCenter Server managed object (IoFilterManager).

    2. Remediate the cluster in vSphere Lifecycle Manager.

    3. Call IOFilter API ResolveInstallationErrorsOnCluster_Task from the vCenter Server managed object (IoFilterManager) to update the database.

  • While remediating a vSphere HA enabled cluster in vSphere Lifecycle Manager, adding hosts causes a vSphere HA error state

    Adding one or multiple ESXi hosts during the remediation process of a vSphere HA enabled cluster results in the following error message: Applying HA VIBs on the cluster encountered a failure.

    Workaround: After the cluster remediation operation has finished, perform one of the following tasks.

    • Right-click the failed ESXi host and select Reconfigure for vSphere HA.

    • Disable and re-enable vSphere HA for the cluster.

  • While remediating a vSphere HA enabled cluster in vSphere Lifecycle Manager, disabling and re-enabling vSphere HA causes a vSphere HA error state

    Disabling and re-enabling vSphere HA during the remediation process of a cluster might cause the remediation process to fail, because vSphere HA health checks report that hosts do not have vSphere HA VIBs installed. You might see the following error message: Setting desired image spec for cluster failed.

    Workaround: After the cluster remediation operation has finished, disable and re-enable vSphere HA for the cluster.

  • Checking for recommended images in vSphere Lifecycle Manager has slow performance in large clusters

    In large clusters with more than 16 hosts, the recommendation generation task could take more than an hour to finish or may appear to hang. The completion time for the recommendation task depends on the number of devices configured on each host and the number of image candidates from the depot that vSphere Lifecycle Manager needs to process before obtaining a valid image to recommend.

    Workaround: None.

  • Checking for hardware compatibility in vSphere Lifecycle Manager has slow performance in large clusters

    In large clusters with more than 16 hosts, the validation report generation task could take up to 30 minutes to finish or may appear to hang. The completion time depends on the number of devices configured on each host and the number of hosts configured in the cluster.

    Workaround: None.

  • Incomplete error messages in non-English languages are displayed, when remediating a cluster in vSphere Lifecycle Manager

    You can encounter incomplete error messages for localized languages in the vCenter Server user interface. The messages are displayed after a cluster remediation process in vSphere Lifecycle Manager fails. For example, you can observe the following error message.

    The error message in English language: Virtual machine 'VMC on DELL EMC -FileServer' that runs on cluster 'Cluster-1' reported an issue which prevents entering maintenance mode: Unable to access the virtual machine configuration: Unable to access file[local-0] VMC on Dell EMC - FileServer/VMC on Dell EMC - FileServer.vmx

    The error message in French language: La VM « VMC on DELL EMC -FileServer », située sur le cluster « {Cluster-1} », a signalé un problème empêchant le passage en mode de maintenance : Unable to access the virtual machine configuration: Unable to access file[local-0] VMC on Dell EMC - FileServer/VMC on Dell EMC - FileServer.vmx

    Workaround: None.

  • Importing an image with no vendor addon, components, or firmware and drivers addon to a cluster whose image contains such elements does not remove the elements of the existing image

    Only the ESXi base image is replaced with the one from the imported image.

    Workaround: After the import process finishes, edit the image, and if needed, remove the vendor addon, components, and firmware and drivers addon.

  • When you convert a cluster that uses baselines to a cluster that uses a single image, a warning is displayed that vSphere HA VIBs will be removed

    Converting a vSphere HA enabled cluster that uses baselines to a cluster that uses a single image might result in a warning message that the vmware-fdm component will be removed.

    Workaround: This message can be ignored. The conversion process installs the vmware-fdm component.

  • If vSphere Update Manager is configured to download patch updates from the Internet through a proxy server, after upgrade to vSphere 7.0 that converts Update Manager to vSphere Lifecycle Manager, downloading patches from VMware patch repository might fail

    In earlier releases of vCenter Server you could configure independent proxy settings for vCenter Server and vSphere Update Manager. After an upgrade to vSphere 7.0, vSphere Update Manager service becomes part of the vSphere Lifecycle Manager service. For the vSphere Lifecycle Manager service, the proxy settings are configured from the vCenter Server appliance settings. If you had configured Update Manager to download patch updates from the Internet through a proxy server but the vCenter Server appliance had no proxy setting configuration, after a vCenter Server upgrade to version 7.0, the vSphere Lifecycle Manager fails to connect to the VMware depot and is unable to download patches or updates.

    Workaround: Log in to the vCenter Server Appliance Management Interface, https://vcenter-server-appliance-FQDN-or-IP-address:5480, to configure proxy settings for the vCenter Server appliance and enable vSphere Lifecycle Manager to use proxy.

Miscellaneous Issues

  • When applying a host profile with version 6.5 to an ESXi host with version 7.0, the compliance check fails

    Applying a host profile with version 6.5 to an ESXi host with version 7.0 results in the Coredump file profile being reported as not compliant with the host.

    Workaround: There are two possible workarounds.

    1. When you create a host profile with version 6.5, set the advanced configuration option VMkernel.Boot.autoCreateDumpFile to false on the ESXi host, as shown in the example after this list.

    2. When you apply an existing host profile with version 6.5, add the advanced configuration option VMkernel.Boot.autoCreateDumpFile in the host profile, configure the option with a fixed policy, and set the value to false.
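
    For workaround 1, the following is a minimal example of setting the option directly on the ESXi host, assuming the kernel setting name autoCreateDumpFile is available through ESXCLI in your build:

    # Disable automatic creation of the dump file
    esxcli system settings kernel set -s autoCreateDumpFile -v FALSE
    # Verify the configured value
    esxcli system settings kernel list -o autoCreateDumpFile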

  • Mellanox ConnectX-4 or ConnectX-5 native ESXi drivers might exhibit minor throughput degradation when Dynamic Receive Side Scaling (DYN_RSS) or Generic RSS (GEN_RSS) feature is turned on

    Mellanox ConnectX-4 or ConnectX-5 native ESXi drivers might exhibit less than 5 percent throughput degradation when the DYN_RSS or GEN_RSS feature is turned on, which is unlikely to impact normal workloads.

    Workaround: You can disable DYN_RSS and GEN_RSS feature with the following commands:

    # esxcli system module parameters set -m nmlx5_core -p "DYN_RSS=0 GEN_RSS=0"

    # reboot
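
    After the reboot, you can optionally confirm that the parameters took effect. This check assumes the nmlx5_core module is loaded:

    # List the module parameters and check the DYN_RSS and GEN_RSS values
    esxcli system module parameters list -m nmlx5_core | grep -E "DYN_RSS|GEN_RSS"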

  • RDMA traffic between two VMs on the same host might fail in PVRDMA environment

    In a vSphere 7.0 implementation of a PVRDMA environment, VMs pass traffic through the HCA for local communication if an HCA is present. However, loopback of RDMA traffic does not work with the qedrntv driver. For instance, RDMA Queue Pairs running on VMs that are configured under the same uplink port cannot communicate with each other.

    In vSphere 6.7 and earlier, the HCA was used for local RDMA traffic if SRQ was enabled. vSphere 7.0 uses HCA loopback for VMs that use PVRDMA versions with SRQ enabled, at a minimum of HW version 14, using RoCE v2.

    The current version of Marvell FastLinQ adapter firmware does not support loopback traffic between QPs of the same PF or port.

    Workaround: Required support is being added in the out-of-box driver certified for vSphere 7.0. If you are using the inbox qedrntv driver, you must use a 3-host configuration and migrate VMs to the third host.

  • Unreliable Datagram traffic QP limitations in qedrntv driver

    There are limitations with the Marvell FastLinQ qedrntv RoCE driver and Unreliable Datagram (UD) traffic. UD applications involving bulk traffic might fail with qedrntv driver. Additionally, UD QPs can only work with DMA Memory Regions (MR). Physical MRs or FRMR are not supported. Applications attempting to use physical MR or FRMR along with UD QP fail to pass traffic when used with qedrntv driver. Known examples of such test applications are ibv_ud_pingpong and ib_send_bw.

    Standard RoCE and RoCEv2 use cases in a VMware ESXi environment such as iSER, NVMe-oF (RoCE) and PVRDMA are not impacted by this issue. Use cases for UD traffic are limited and this issue impacts a small set of applications requiring bulk UD traffic.

    Marvell FastLinQ hardware does not support RDMA UD traffic offload. In order to meet the VMware PVRDMA requirement to support GSI QP, a restricted software only implementation of UD QP support was added to the qedrntv driver. The goal of the implementation is to provide support for control path GSI communication and is not a complete implementation of UD QP supporting bulk traffic and advanced features.

    Since UD support is implemented in software, the implementation might not keep up with heavy traffic and packets might be dropped. This can result in failures with bulk UD traffic.

    Workaround: Bulk UD QP traffic is not supported with the qedrntv driver and there is no workaround at this time. VMware ESXi RDMA (RoCE) use cases such as iSER, NVMe-oF (RoCE), and PVRDMA are unaffected by this issue.

  • Servers equipped with QLogic 578xx NIC might fail when frequently connecting or disconnecting iSCSI LUNs

    If you trigger QLogic 578xx NIC iSCSI connection or disconnection frequently in a short time, the server might fail due to an issue with the qfle3 driver. This is caused by a known defect in the device's firmware.

    Workaround: None.

  • ESXi might fail during driver unload or controller disconnect operation in Broadcom NVMe over FC environment

    In Broadcom NVMe over FC environment, ESXi might fail during driver unload or controller disconnect operation and display an error message such as: @BlueScreen: #PF Exception 14 in world 2098707:vmknvmeGener IP 0x4200225021cc addr 0x19

    Workaround: None.

  • ESXi does not display OEM firmware version number of i350/X550 NICs on some Dell servers

    The inbox ixgben driver only recognizes firmware data version or signature for i350/X550 NICs. On some Dell servers the OEM firmware version number is programmed into the OEM package version region, and the inbox ixgben driver does not read this information. Only the 8-digit firmware signature is displayed.

    Workaround: To display the OEM firmware version number, install async ixgben driver version 1.7.15 or later.

  • X710 or XL710 NICs might fail in ESXi

    When you initiate certain destructive operations to X710 or XL710 NICs, such as resetting the NIC or manipulating VMKernel's internal device tree, the NIC hardware might read data from non-packet memory.

    Workaround: Do not reset the NIC or manipulate the VMkernel internal device state.

  • NVMe-oF does not guarantee persistent VMHBA name after system reboot

    NVMe-oF is a new feature in vSphere 7.0. If your server has a USB storage installation that uses vmhba30+ and also has NVMe over RDMA configuration, the VMHBA name might change after a system reboot. This is because the VMHBA name assignment for NVMe over RDMA is different from PCIe devices. ESXi does not guarantee persistence.

    Workaround: None.

  • Backup fails for vCenter database size of 300 GB or greater

    If the vCenter database size is 300 GB or greater, the file-based backup will fail with a timeout. The following error message is displayed: Timeout! Failed to complete in 72000 seconds

    Workaround: None.

  • A restore of vCenter Server 7.0 which is upgraded from vCenter Server 6.x with External Platform Services Controller to vCenter Server 7.0 might fail

    When you restore a vCenter Server 7.0 which is upgraded from 6.x with External Platform Services Controller to vCenter Server 7.0, the restore might fail and display the following error: Failed to retrieve appliance storage list

    Workaround: During the first stage of the restore process, increase the storage level of the vCenter Server 7.0 deployment. For example, if the vCenter Server 6.7 External Platform Services Controller setup storage type is small, select storage type large for the restore process.

  • Enabled SSL protocols configuration parameter is not configured during a host profile remediation process

    Enabled SSL protocols configuration parameter is not configured during a host profile remediation and only the system default protocol tlsv1.2 is enabled. This behavior is observed for a host profile with version 7.0 and earlier in a vCenter Server 7.0 environment.

    Workaround: To enable TLSV 1.0 or TLSV 1.1 SSL protocols for SFCB, log in to an ESXi host by using SSH, and run the following ESXCLI command: esxcli system wbem -P <protocol_name>

  • Unable to configure Lockdown Mode settings by using Host Profiles

    Lockdown Mode cannot be configured by using a security host profile and cannot be applied to multiple ESXi hosts at once. You must manually configure each host.

    Workaround: In vCenter Server 7.0, you can configure Lockdown Mode and manage the Lockdown Mode exception user list by using a security host profile.

  • When a host profile is applied to a cluster, Enhanced vMotion Compatibility (EVC) settings are missing from the ESXi hosts

    Some settings in the VMware config file /etc/vmware/config are not managed by Host Profiles and are blocked when the config file is modified. As a result, when the host profile is applied to a cluster, the EVC settings are lost, which causes loss of EVC functionality. For example, unmasked CPUs can be exposed to workloads.

    Workaround: Reconfigure the relevant EVC baseline on the cluster to recover the EVC settings.

  • Using a host profile that defines a core dump partition in vCenter Server 7.0 results in an error

    In vCenter Server 7.0, configuring and managing a core dump partition in a host profile is not available. Attempting to apply a host profile that defines a core dump partition results in the following error: No valid coredump partition found.

    Workaround: None. In vCenter Server 7.0, Host Profiles supports only file-based core dumps.

  • If you run the ESXCLI command to unload the firewall module, the hostd service fails and ESXi hosts lose connectivity

    If you automate the firewall configuration in an environment that includes multiple ESXi hosts, and run the ESXCLI command esxcli network firewall unload that destroys filters and unloads the firewall module, the hostd service fails and ESXi hosts lose connectivity.

    Workaround: Unloading the firewall module is not recommended at any time. If you must unload the firewall module, use the following steps and then verify the result as shown after the list:

    1. Stop the hostd service by using the command:

      /etc/init.d/hostd stop

    2. Unload the firewall module by using the command:

      esxcli network firewall unload

    3. Perform the required operations.

    4. Load the firewall module by using the command:

      esxcli network firewall load

    5. Start the hostd service by using the command:

      /etc/init.d/hostd start
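
    After step 5, you can optionally confirm that the firewall module is loaded and enabled again:

    # Display the firewall status
    esxcli network firewall get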

  • vSphere Storage vMotion operations might fail in a vSAN environment due to an unauthenticated session of the Network File Copy (NFC) manager

    Migrations to a vSAN datastore by using vSphere Storage vMotion of virtual machines that have at least one snapshot and more than one virtual disk with different storage policy might fail. The issue occurs due to an unauthenticated session of the NFC manager because the Simple Object Access Protocol (SOAP) body exceeds the allowed size.

    Workaround: First migrate the VM home namespace and just one of the virtual disks. After the operation completes, perform a disk-only migration of the remaining disks.

  • Changes in the properties and attributes of the devices and storage on an ESXi host might not persist after a reboot

    If the device discovery routine during a reboot of an ESXi host times out, the jumpstart plug-in might not receive all configuration changes of the devices and storage from all the registered devices on the host. As a result, the process might restore the properties of some devices or storage to the default values after the reboot.

    Workaround: Manually restore the changes in the properties of the affected device or storage.

  • If you use a beta build of ESXi 7.0, ESXi hosts might fail with a purple diagnostic screen during some lifecycle operations

    If you use a beta build of ESXi 7.0, ESXi hosts might fail with a purple diagnostic screen during some lifecycle operations, such as unloading a driver or switching between ENS mode and native driver mode. For example, if you try to change the ENS mode, in the backtrace you see an error message similar to: case ENS::INTERRUPT::NoVM_DeviceStateWithGracefulRemove hit BlueScreen: ASSERT bora/vmkernel/main/dlmalloc.c:2733. This issue is specific to beta builds and does not affect release builds such as ESXi 7.0.

    Workaround: Update to ESXi 7.0 GA.

  • You cannot create snapshots of virtual machines due to an error that a digest operation has failed

    A rare race condition when an All-Paths-Down (APD) state occurs during the update of the Content Based Read Cache (CBRC) digest file might cause inconsistencies in the digest file. As a result, you cannot create virtual machine snapshots. You see an error such as An error occurred while saving the snapshot: A digest operation has failed in the backtrace.

    Workaround: Power cycle the virtual machines to trigger a recompute of the CBRC hashes and clear the inconsistencies in the digest file.

  • If you upgrade your ESXi hosts to version 7.0 Update 3, but your vCenter Server is of an earlier version, Trusted Platform Module (TPM) attestation of the ESXi hosts fails

    If you upgrade your ESXi hosts to version 7.0 Update 3, but your vCenter Server is of an earlier version, and you enable TPM, ESXi hosts fail to pass attestation. In the vSphere Client, you see the warning Host TPM attestation alarm. The Elliptic Curve Digital Signature Algorithm (ECDSA) introduced with ESXi 7.0 Update 3 causes the issue when vCenter Server is not of version 7.0 Update 3.

    Workaround: Upgrade your vCenter Server to 7.0 Update 3 or acknowledge the alarm.

  • You see warnings in the boot loader screen about TPM asset tags

    If a TPM-enabled ESXi host has no asset tag set, you might see idle warning messages in the boot loader screen such as:

    Failed to determine TPM asset tag size: Buffer too small
    Failed to measure asset tag into TPM: Buffer too small

    Workaround: Ignore the warnings or set an asset tag by using the command $ esxcli hardware tpm tag set -d

  • The sensord daemon fails to report ESXi host hardware status

    A logic error in the IPMI SDR validation might cause sensord to fail to identify a source for power supply information. As a result, when you run the command vsish -e get /power/hostStats, you might not see any output.

    Workaround: None.

  • If an ESXi host fails with a purple diagnostic screen, the netdump service might stop working

    In rare cases, if an ESXi host fails with a purple diagnostic screen, the netdump service might fail with an error such as NetDump FAILED: Couldn't attach to dump server at IP x.x.x.x.

    Workaround: Configure the VMkernel core dump to use local storage.
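
    As an example, you can configure a file-based core dump on local storage with the following commands. This is a minimal sketch; the --smart option lets ESXi pick a suitable location, and you can alternatively specify an explicit datastore and file name:

    # Create a core dump file on a local datastore
    esxcli system coredump file add
    # Activate the core dump file
    esxcli system coredump file set --enable true --smart
    # Verify the active core dump file
    esxcli system coredump file list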

  • You see frequent VMware Fault Domain Manager (FDM) core dumps on multiple ESXi hosts

    In some environments, the number of datastores might exceed the FDM file descriptor limit. As a result, you see frequent core dumps on multiple ESXi hosts indicating FDM failure.

    Workaround: Increase the FDM file descriptor limit to 2048. You can use the setting das.config.fdm.maxFds from the vSphere HA advanced options in the vSphere Client. For more information, see Set Advanced Options.

  • Virtual machines on a vSAN cluster with enabled NSX-T and a converged vSphere Distributed Switch (CVDS) in a VLAN transport zone cannot power on after a power off

    If a secondary site is 95% disk full and VMs are powered off before simulating a secondary site failure, during recovery some of the virtual machines fail to power on. As a result, virtual machines become unresponsive. The issue occurs regardless of whether site recovery includes adding disks, ESXi hosts, or CPU capacity.

    Workaround: Select the virtual machines that do not power on and change the network to VM Network from Edit Settings on the VM context menu.

  • ESXi hosts might fail with a purple diagnostic screen with an error Assert at bora/modules/vmkernel/vmfs/fs6Journal.c:835

    In rare cases, for example when running SESparse tests, the number of locks per transaction in a VMFS datastore might exceed the limit of 50 for the J6_MAX_TXN_LOCKACTIONS parameter. As a result, ESXi hosts might fail with a purple diagnostic screen with an error Assert at bora/modules/vmkernel/vmfs/fs6Journal.c:835.

    Workaround: None.

  • If you modify the netq_rss_ens parameter of the nmlx5_core driver, ESXi hosts might fail with a purple diagnostic screen

    If you try to enable the netq_rss_ens parameter when you configure an enhanced data path on the nmlx5_core driver, ESXi hosts might fail with a purple diagnostic screen. The netq_rss_ens parameter, which enables NetQ RSS, is disabled by default with a value of 0.

    Workaround: Keep the default value for the netq_rss_ens module parameter in the nmlx5_core driver.
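
    You can confirm that the parameter still has its default value with the following check, assuming the nmlx5_core module is loaded:

    # The netq_rss_ens parameter must remain at its default value of 0
    esxcli system module parameters list -m nmlx5_core | grep netq_rss_ens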

  • ESXi might terminate I/O to NVMe-oF devices due to errors on all active paths

    Occasionally, all active paths to an NVMe-oF device register I/O errors due to link issues or controller state. If the status of one of the paths changes to Dead, the High Performance Plug-in (HPP) might not select another path if it shows a high volume of errors. As a result, the I/O fails.

    Workaround: Disable the configuration option /Misc/HppManageDegradedPaths to unblock the I/O.
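
    For example, you can disable the option with the following ESXCLI commands. The option path is taken from the description above; verify the current value first:

    # Check the current value of the option
    esxcli system settings advanced list -o /Misc/HppManageDegradedPaths
    # Disable degraded path handling by HPP
    esxcli system settings advanced set -o /Misc/HppManageDegradedPaths -i 0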

  • Upgrade to ESXi 7.0 Update 3 might fail due to changed name of the inbox i40enu network driver

    Starting with vSphere 7.0 Update 3, the inbox i40enu network driver for ESXi changes name back to i40en. The i40en driver was renamed to i40enu in vSphere 7.0 Update 2, but the name change impacted some upgrade paths. For example, rollup upgrade of ESXi hosts that you manage with baselines and baseline groups from 7.0 Update 2 or 7.0 Update 2a to 7.0 Update 3 fails. In most cases, the i40enu driver upgrades to ESXi 7.0 Update 3 without any additional steps. However, if the driver upgrade fails, you cannot update ESXi hosts that you manage with baselines and baseline groups. You also cannot use host seeding or a vSphere Lifecycle Manager single image to manage the ESXi hosts. If you have already made changes related to the i40enu driver and devices in your system, before upgrading to ESXi 7.0 Update 3, you must uninstall the i40enu VIB or Component on ESXi, or first upgrade ESXi to ESXi 7.0 Update 2c.

    Workaround: For more information, see VMware knowledge base article 85982.
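
    For example, before the upgrade you can check whether the driver VIB is present and remove it if needed. The VIB name i40enu is an assumption; see VMware knowledge base article 85982 for the exact names that apply to your environment:

    # Check for the i40en or i40enu driver VIB
    esxcli software vib list | grep -i i40en
    # Remove the VIB only if instructed by KB 85982 (VIB name assumed)
    esxcli software vib remove -n i40enu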

  • SSH access fails after you upgrade to ESXi 7.0 Update 3d

    After you upgrade to ESXi 7.0 Update 3d, SSH access might fail in certain conditions due to an update of OpenSSH to version 8.8.

    Workaround: For more information, see VMware knowledge base article 88055.

  • USB device passthrough from ESXi hosts to virtual machines might fail

    The VMkernel might simultaneously claim multiple interfaces of a USB modem device, which blocks passthrough of the device to VMs.

    Workaround: You must apply the USB.quirks advanced configuration on the ESXi host so that the VMkernel ignores the NET interface and the USB modem can pass through to VMs. You can apply the configuration in one of the following three ways:

    1. Access the ESXi shell and run the following command, where 0xvvvv is the device vendor ID and 0xpppp is the device product ID: esxcli system settings advanced set -o /USB/quirks -s 0xvvvv:0xpppp:0:0xffff:UQ_NET_IGNORE

      For example, for the Gemalto M2M GmbH Zoom 4625 Modem (vid:pid 1e2d:005b), the command is:

      esxcli system settings advanced set -o /USB/quirks -s 0x1e2d:0x005b:0:0xffff:UQ_NET_IGNORE

      Reboot the ESXi host.

    2. Set the advanced configuration from the vSphere Client or the vSphere Web Client and reboot the ESXi host.

    3. Use a Host Profile to apply the advanced configuration.

    For more information on the steps, see VMware knowledge base article 80416.

  • HTTP requests from certain libraries to vSphere might be rejected

    The HTTP reverse proxy in vSphere 7.0 enforces stricter standard compliance than in previous releases. This might expose pre-existing problems in some third-party libraries used by applications for SOAP calls to vSphere.

    If you develop vSphere applications that use such libraries or include applications that rely on such libraries in your vSphere stack, you might experience connection issues when these libraries send HTTP requests to VMOMI. For example, HTTP requests issued from vijava libraries can take the following form:

    POST /sdk HTTP/1.1
    SOAPAction
    Content-Type: text/xml; charset=utf-8
    User-Agent: Java/1.8.0_221

    The syntax in this example violates an HTTP protocol header field requirement that mandates a colon after SOAPAction. Hence, the request is rejected in flight.

    Workaround: Developers leveraging noncompliant libraries in their applications can consider using a library that follows HTTP standards instead. For example, developers who use the vijava library can consider using the latest version of the yavijava library instead.

  • You might see a dump file when using the Broadcom lsi_msgpt3, lsi_msgpt35, and lsi_mr3 drivers

    When you use the lsi_msgpt3, lsi_msgpt35, or lsi_mr3 controllers, you might see the dump file lsuv2-lsi-drivers-plugin-util-zdump. The dump is caused by an issue when exiting the storelib used in this plugin utility. There is no impact on ESXi operations, and you can ignore the dump file.

    Workaround: You can safely ignore this message. You can remove the lsuv2-lsi-drivers-plugin with the following command:

    esxcli software vib remove -n lsuv2-lsiv2-drivers-plugin

  • You might see that a reboot is not required after configuring SR-IOV for a PCI device in vCenter Server, but device configurations made by third-party extensions might be lost and require a reboot to be re-applied

    In ESXi 7.0, SR-IOV configuration is applied without a reboot and the device driver is reloaded. ESXi hosts might have third-party extensions that perform device configurations which need to run after the device driver is loaded during boot. A reboot is required for those third-party extensions to re-apply the device configuration.

    Workaround: You must reboot the ESXi host after configuring SR-IOV to apply third-party device configurations.

vSphere Client Issues

  • BIOS manufacturer displays as "--" in the vSphere Client

    In the vSphere Client, when you select an ESXi host and navigate to Configure > Hardware > Firmware, you see -- instead of the BIOS manufacturer name.

    Workaround: For more information, see VMware knowledge base article 88937.
