vCenter Server 7.0 Update 3l | 30 MAR 2023 | ISO Build 21477706

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

What's New

Earlier Releases of vCenter Server 7.0

Features, resolved and known issues of vCenter Server are described in the release notes for each release. Release notes for earlier releases of vCenter Server 7.0 are:

For internationalization, compatibility, installation, upgrade, open source components and product support notices, see the VMware vSphere 7.0 Release Notes.
For more information on vCenter Server supported upgrade and migration paths, please refer to VMware knowledge base article 67077.

Patches Contained in This Release

This release of vCenter Server 7.0 Update 3l delivers the following patch:

For a table of build numbers and versions of VMware vCenter Server, see VMware knowledge base article 2143838.

Patch for VMware vCenter Server Appliance 7.0 Update 3l

Product Patch for vCenter Server containing VMware software fixes, security fixes, and third-party product fixes.

This patch is applicable to vCenter Server.

Download Filename VMware-vCenter-Server-Appliance-7.0.3.01400-21477706-patch-FP.iso
Build 21477706
Download Size 6624.2 MB
md5sum 1ae421156adeb9fd531c537a4d80ccbc
sha256checksum 2ed1607c4c02c2d3e2da9ec75ffb784e9ae74d3f47f2c4d077ec771debd9a042
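Before staging the ISO, you can verify the download against the checksums above. The following is a minimal sketch using Python's standard hashlib module; the small local file stands in for the multi-gigabyte ISO, and you would substitute the ISO filename and the sha256 value from the table above.

```python
# Sketch: verify a downloaded file against a published SHA-256 checksum
# using only the standard library. Streaming in chunks avoids loading
# a multi-GB ISO into memory.

import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Example with a small local file standing in for the ISO download:
with open("sample.bin", "wb") as f:
    f.write(b"demo data")

expected = hashlib.sha256(b"demo data").hexdigest()
print(sha256_of("sample.bin") == expected)  # True
```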

Download and Installation

To download this patch from VMware Customer Connect, you must navigate to Products and Accounts > Product Patches. From the Select a Product drop-down menu, select VC and from the Select a Version drop-down menu, select 7.0.3.

  1. Attach the VMware-vCenter-Server-Appliance-7.0.3.01400-21477706-patch-FP.iso file to the vCenter Server CD or DVD drive.
  2. Log in to the appliance shell as a user with super administrative privileges (for example, root) and run the following commands:
    • To stage the ISO:
      software-packages stage --iso
    • To see the staged content:
      software-packages list --staged
    • To install the staged rpms:
      software-packages install --staged

For more information on using the vCenter Server shells, see VMware knowledge base article 2100508.

For more information on patching vCenter Server, see Patching the vCenter Server Appliance.

For more information on staging patches, see Stage Patches to vCenter Server Appliance.

For more information on installing patches, see Install vCenter Server Appliance Patches.

For more information on patching using the Appliance Management Interface, see Patching the vCenter Server by Using the Appliance Management Interface.

Product Support Notices

  • Restoring Memory Snapshots of a VM with independent disk for virtual machine backups is not supported: You can take a memory snapshot of a virtual machine with an independent disk only to analyze the guest operating system behavior of the virtual machine. You cannot use such snapshots for virtual machine backups, because restoring this type of snapshot is unsupported. You can convert a memory snapshot to a powered-off snapshot, which can be successfully restored.

Resolved Issues

The resolved issues are grouped as follows.

vSAN Issues
    • Moving an ESXi host with encryption enabled to a vSAN cluster might fail with a permission error

      In the vSphere Client, when you try to move an ESXi host with encryption enabled to an encrypted vSAN cluster, the operation might fail with an error such as The session <NULL> does not have privilege Cryptographer.RegisterHost on entity.

      This issue is resolved in this release.

    • You cannot clear vSAN health check: NVMe device can be identified

      If the vSAN health check indicates that an NVMe device cannot be identified, the warning might not be cleared after you correctly select the device model.

      This issue is resolved in this release.

Virtual Machine Management
    • In the vSphere Client, you can set automatic startup and shutdown of virtual machines on ESXi hosts in a vSphere HA cluster

      In the vSphere Client, when you navigate to an ESXi host that is part of a vSphere HA cluster and follow the path Configure > Virtual Machines > VM Startup/Shutdown, you see the warning If the host is part of a vSphere HA cluster, the automatic startup and shutdown of virtual machines is disabled. However, the Edit button is not deactivated, and when you click it, you can select the option Automatically start and stop the virtual machines with the system and configure the automatic startup and shutdown of virtual machines.

      This issue is resolved in this release. The fix makes sure that the Edit button in the VM Startup/Shutdown option is unavailable for ESXi hosts in a vSphere HA cluster.

    • A vim.fault.Timedout error might occur when a task to create a virtual machine takes longer than 10 min

      If the creation of a virtual machine on an ESXi host takes longer than 10 min, the task might fail with a vim.fault.Timedout error.

      This issue is resolved in this release.

    • You cannot add Raw Device Mapping (RDM) as an existing disk to a virtual machine

      If the VMX file of a VM resides on an NFS or vSphere Virtual Volumes datastore, when you create or edit that VM, you cannot add an RDM as an existing disk. In the vSphere Client, you see a validation error for the newly added disk. If you change the location for the new disk to a VMFS datastore, the validation error disappears, but the operation fails with an error such as: Invalid virtual machine configuration. Storage policy change failure: @&!*@*@(mgs.disklin.INVALIDDISK) The file specified is not a virtual disk.

      This issue is resolved in this release.

    • Enter Maintenance Mode task fails when you try to perform maintenance on ESXi hosts where instant clones reside

      In some cases, when you try to put the ESXi hosts where instant clones reside into maintenance mode, the task might fail with a ManagedObjectNotFound error. The issue occurs because some virtual machine objects enqueued for evacuation might be deleted while in-flight.

      This issue is resolved in this release.

vCenter Server and vSphere Client Issues
    • In vCenter Enhanced Linked Mode, you can see vCenter Server systems of version 8.0 from a 7.x vCenter Server instance

      If you have a vCenter Enhanced Linked Mode group that contains vCenter Server instances of versions 8.x and 7.x, when you log in to a 7.x vCenter Server instance, in the vSphere Client, you can see vCenter Server systems of version 8.0. Since vCenter Server 8.0 introduces new functionalities, you can run workflows specific to vSphere 8.0 on the 8.0 vCenter Server, but this might lead to unexpected results when run from the 7.x vSphere Client.

      This issue is resolved in this release. The fix makes sure that when in Enhanced Linked Mode, you do not see vCenter Server 8.0 instances when you log in to a 7.x vCenter Server instance.

    • When you shut down a virtual machine, other VMs also shut down unexpectedly

      In rare cases, after migrating some VMs from a VM list, those VMs might erroneously remain as preselected in the list. As a result, any following operation on one of the VMs in the VM list might be executed on the other VMs too. For example, if you migrate some VMs from one cluster to another and shut down a VM in the first cluster, the VMs migrated to the second cluster might also shut down.

      This issue is resolved in this release.

    • The vpxd service might fail and generate a core file due to bad authentication data when performing vCenter Server tagging operations

      A rare issue with the authentication of the tagging API might cause the vpxd service to fail and generate a core file in the /var/core directory. Such failures of the vpxd service might in turn cause unexpected failovers to passive nodes in vCenter Server High Availability environments.

      This issue is resolved in this release.

    • SNMPv3 polling on a vCenter system might fail after an upgrade to vCenter Server 7.x due to a null request ID

      SNMPv3 polling on a vCenter system might fail when a discovery request returns requestID = 0. The issue occurs after an update of the vCenter system to vCenter Server 7.x and might be related to RFC compliance.

      This issue is resolved in this release.

Security Issues
    • vSphere Client stops responding after you reconfigure vSphere HA on a vCenter instance with a Key Management Server (KMS) cluster

      When you enable or reconfigure vSphere HA on a vCenter instance with a KMS cluster, the Hosts and Clusters tab in the vSphere Client becomes unavailable. You see no clusters, hosts, or VMs until the vpxd process restarts after the KMS cluster is reconfigured on the vCenter instance.

      This issue is resolved in this release.

    • Repeated rekey operation on a powered-on encrypted virtual machine with an IDE controller might cause the VM to shut down

      Generally, you can perform a shallow recrypt while a virtual machine is powered on, but if the VM has an IDE controller, the VM must be powered off before a rekey operation. When you try to rekey a powered-on VM with an IDE controller, in the vSphere Client you see an error message that the reconfiguration failed. If you retry the operation, the VM might power off and you cannot power it on.

      This issue is resolved in this release. If you already face the issue and an encrypted virtual machine cannot power on, use the Managed Object Browser (MOB) to find the page of the target VM, for example https://vcip/mob?moid=vm-14, and run the command CryptoUnlock_Task. As a result, the VM can be powered on. Power off the VM and retry the rekey operation. Alternatively, you can unregister and register the VM again and retry the rekey in a powered off state.

    • If you use Integrated Windows Authentication (IWA) for vCenter Server authentication, you might not be able to log in to the vCenter Server system because the STS service (vmware-stsd) might fail

      If you use IWA for vCenter Server authentication, the STS service might fail during attempts to log in to the vCenter Server system, because the Active Directory might return a null LDAP hostname. Logins fail until the STS service restarts.

      The vMonCoredumper.log file contains entries similar to the following:

      2022-12-29T07:16:15.305Z In(05) host-12345 Notify vMon about pool-2-thread-4 dumping core. Pid : 3315  

      2022-12-29T07:16:15.320Z In(05) host-12345 Successfully notified vMon.  

      2022-12-29T07:16:16.785Z In(05) host-12345 Successfully generated core file /var/core/core.pool-2-thread-4.3315.

      The hs_err_sts_pid*.log file contains entries such as:  

      # A fatal error has been detected by the Java Runtime Environment:  

      # SIGSEGV (0xb) at pc=0x00007f4d80203d06, pid=3315, tid=0x00007f4ce3d1f700

      This issue is resolved in this release.

  • vCenter Server 7.0 Update 3l provides the following security updates:
    • Eclipse Jetty is updated to version 9.4.50.v20221201.
    • The ini4j library is updated to version 0.5.4.
    • Apache Tomcat is updated to version 8.5.84/9.0.70.
    • The Jackson package is updated to version 2.14.1.
    • The sqlite-jdbc is updated to version 3.40.0.0.
    • The Spring Framework is updated to 5.3.24.
    • See the PhotonOS release notes for open source changes.
Installation and Upgrade Issues
    • ADFS user groups cannot authenticate to vCenter after an upgrade

      ADFS user groups might not be able to authenticate to vCenter after an upgrade due to a possible mismatch between case-sensitive values in the group name.

      This issue is resolved in this release. The fix removes the case-sensitive check.

    • VMware vSphere Update Manager Download Service (UMDS) might not get latest patch metadata files

      UMDS is an optional module of vSphere Lifecycle Manager that downloads software metadata, software binaries, and notifications that might not otherwise be available to vSphere Lifecycle Manager. You use UMDS to download the latest ESXi patches, but in some cases, the metadata file that lists new patches might not be up-to-date. As a result, ESXi hosts cannot identify the latest patches and updates are prevented.

      This issue is resolved in this release.

    • Patching to vCenter Server 7.0 Update 3i or 7.0 Update 3j fails when the clienttrustCA.pem file is empty

      If your vCenter Server system has smart card authentication enabled and the clienttrustCA.pem file at /usr/lib/vmware-sso/vmware-sts/conf/ is empty, patching to vCenter Server 7.0 Update 3i or 7.0 Update 3j might fail. The issue occurs because during the update, the STS service looks for certificates in the clienttrustCA.pem file, and if the file is empty, the STS service fails. The issue is specific to vCenter Server 7.0 Update 3i and 7.0 Update 3j.

      This issue is resolved in this release. If patching to vCenter Server 7.0 Update 3i or 7.0 Update 3j has failed, re-run the upgrade to 7.0 Update 3l to make sure it completes successfully.

    • Users with Administrator role on vCenter Server do not see warnings for clusters with vSphere Cluster Services (vCLS) enabled

      vCenter Server 7.0 Update 3e added a new group of vCenter Server administrators, vCLSAdmin, with reduced privileges that grant read-only access to virtual machines in clusters with vSphere Cluster Services (vCLS) enabled. An issue with updates to vCenter Server 7.0 Update 3e and later prevents the automatic refresh of Administrator roles, and you might not see the vCLSAdmin group. As a result, warnings for vCLS VMs might be visible only to vCenter Single Sign-On administrators, not to users with the Administrator role on vCenter Server.

      This issue is resolved in this release. The fix makes sure you can see the vCLSAdmin group after an update to vCenter Server 7.0 Update 3l and later, so that you can monitor vCLS VMs.

Miscellaneous Issues
    • Relocating a First Class Disk (FCD) attached to a VM to a datastore might result in health status alert and datastore inaccessibility

      The FCDInfo field that contains the name of a Persistent Volume in its specification might not update in time when you relocate an FCD attached to a VM to a datastore. As a result, the volume might display a red health status and the datastore accessibility is NotAccessible.

      This issue is resolved in this release.

    • After a vCenter upgrade, vSphere HA fails with an error Device or resource busy

      After a cluster upgrade, when vCenter restarts, within a short time most of the VMs might be migrated to a small group of the ESXi hosts in the cluster, which leads to performance degradation. In the FDM install error logs on the ESXi hosts, you see a warning such as rm: can't remove '/tardisks/vmware_f.v00': Device or resource busy. After a cluster upgrade, while ESXi hosts reconnect to the vpxd service, the Fault Domain Manager (FDM) agent needs some time to become fully operational. If during this time the vSphere DRS component that scans the VM-to-host compatibility finds that the FDM agent does not work on a given ESXi host, DRS forces the migration of the VMs to another host for high availability purposes.

      This issue is resolved in this release. The fix adds a new grace period mechanism where DRS waits for 10 minutes before migrating VMs out of their current ESXi host because of the transient compatibility failure. You can configure this grace period by using the advanced option CompatCheckTransientFailureTimeSeconds, which follows this definition:

      IOPT(COMPAT_CHECK_TRANSIENT_FAILURE_TIME_SECS,
           "CompatCheckTransientFailureTimeSeconds",
           "Length of time a same host compatibility check failure is tolerated. "
           "(-1 -> ignore same host compatibility check failure)", -1, 3600, 600)

    • In the vSphere Client, you do not see the asset tag for some servers

      In the vSphere Client, when you navigate to Configure > Hardware > Overview, for some servers you do not see any asset tag listed.

      This issue is resolved in this release. The fix ensures that vCenter receives either baseboard info (Type2) asset tag or chassis info (Type3) asset tag for the server. You see the chassis info asset tag when the baseboard info asset tag for the server is empty.

    • The vpxd service intermittently fails with a core dump due to a race condition

      A rare race condition between threads of the vpxd service might cause the daemon to fail, because one of the threads might access a data member outside the mutex lock. In the backtrace, you see errors such as Panic: Memory exceeds hard limit. Panic and ExtensionManagerMo::UpdateExtension use ExtensionWrapper after free.

      This issue is resolved in this release.

    • In case of network connectivity issues, the Key Management Interoperability Protocol (KMIP) client might cause high CPU usage

      In case of a network outage or connectivity issues, unresolved calls to the KMIP server in your vCenter Server system might cause high CPU usage.

      This issue is resolved in this release.

Networking Issues
    • After a vCenter Server VM reboot, the /var/spool/snmp directory might not exist and you do not see some vCenter Server-related SNMP traps

      When the SNMP service is enabled and after a vCenter Server VM reboot, the /var/spool/snmp directory might not be mounted in time and you cannot see some SNMP traps, such as hrStorageSize.

      This issue is resolved in this release.

Storage Issues
    • Higher than actual vSAN storage pool free space estimate might lead to unexpected storage shortages

      The freeSpace parameter of a vSAN storage pool might not exclude disks with errors and might report the unused space estimate as higher than the actual value. As a result, you might see unexpected storage shortages. This issue only affects vSAN storage pools.

      Workaround: Manually compute the storage capacity by subtracting the freeSpace value of disks with errors from the overall storage capacity reported by the freeSpace parameter of the storage pool.
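The arithmetic of the workaround can be sketched as follows. This is illustration only, assuming you already have per-disk freeSpace values and error states (for example, retrieved through the vSAN management API); the dictionaries below are hypothetical data, not an API shape.

```python
# Sketch of the workaround: subtract the freeSpace reported for disks in
# an error state from the pool-level freeSpace estimate, so the result
# reflects only healthy disks.

def corrected_free_space(pool_free_space, disks):
    """Return the pool freeSpace minus the freeSpace of faulty disks."""
    error_free = sum(d["freeSpace"] for d in disks if d["hasErrors"])
    return pool_free_space - error_free

disks = [
    {"name": "disk-1", "freeSpace": 500, "hasErrors": False},
    {"name": "disk-2", "freeSpace": 200, "hasErrors": True},   # faulty disk
]
print(corrected_free_space(1000, disks))  # 800
```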

    • If a vmdk file is bigger than 2TB, importing an OVA from a content library fails with an Invalid disk format error

      Due to size restrictions, when you try to import an OVA file that contains a vmdk file larger than 2TB from a content library, in the vSphere Client you see an error such as com.vmware.transfer.streamVmdk.VmdkFormatException: Invalid disk format (capacity too large).

      This issue is resolved in this release. The fix removes size restrictions on the OVA files in content libraries.

CIM and API Issues
    • The Guest Power API returns 500 Internal Server Error

      The Guest Power API, which is a category of executable operations for the vCenter REST APIs, intermittently returns an HTTP 500 error. The issue occurs after a restart of the VPXD service and does not affect SOAP calls.

      This issue is resolved in this release.

    • vCenter might become temporarily unresponsive due to a slow client application

      A slow client application might create vCenter API calls that in turn create a very large HTTP response. While the slow client reads the response, all other API calls to vCenter are blocked. As a result, vCenter becomes temporarily unresponsive.

      This issue is resolved in this release.

Known Issues

The known issues are grouped as follows.

vSphere Lifecycle Manager Issues
  • You cannot edit the VMware vSphere Lifecycle Manager Update Download scheduled task

    In the vSphere Client, when you navigate to a vCenter Server instance and select Scheduled Tasks under the Configure tab, if you select the VMware vSphere Lifecycle Manager Update Download task and click Edit, you cannot modify the existing settings.

    Workaround: You can edit the VMware vSphere Lifecycle Manager Update Download task by following the steps in the topic Configure the vSphere Lifecycle Manager Automatic Download Task.

Server Configuration Issues
    • You see a message Error retrieving when trying to view the status of Key Management Server (KMS) instances in the vSphere Client

      In the vSphere Client, you might see a message Error retrieving for several minutes when trying to view Key Management Server (KMS) instances. The issue occurs when a KMS instance in a standard key provider loses connectivity. Until all network requests to the affected KMS time out, which takes around 4 minutes, you cannot see the status of any KMS instance in your system, only the error message for the key provider. After the timeout, you can see the status of all KMS instances.

      Workaround: If you see the Error retrieving message, wait for 4 minutes.

  • Due to a default timeout setting of 2 minutes, logging in to vCenter instances in Enhanced Linked Mode takes longer when any of the vCenters is down

    When you have a number of vCenter instances in Enhanced Linked Mode, if one of the instances is down for some reason, such as maintenance, logging in to the other instances might take up to 2 minutes, which is the default timeout setting. Changing the timeout property that specifies the wait time for logging in to linked vCenter instances is a complex task that requires manually editing the LinkedVcGroup.login.timeout property located in vim-commons-vsphere.properties. To simplify the task, starting with vCenter Server 7.0 Update 3l, the setting LinkedVcGroup.login.timeout=120000 moves from vim-commons-vsphere.properties to the webclient.properties file. Editing this option allows you to reduce the wait time so that if one of the vCenter logins takes more time, it does not affect the login time for the other instances.

    Workaround: Edit /etc/vmware/vsphere-ui/webclient.properties and change the value of LinkedVcGroup.login.timeout from 120000 to a smaller value in milliseconds, but consider that a value of <= 0 is an infinite timeout.
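The workaround edit can be scripted. The following is a sketch assuming a plain key=value Java properties format; the property name and the 120000-millisecond default come from the workaround above, while the helper function and the local file used for illustration are hypothetical. On the appliance you would point it at /etc/vmware/vsphere-ui/webclient.properties.

```python
# Sketch: set a key=value entry in a properties file, replacing an existing
# entry or appending one if the key is absent. Illustrative only; back up
# the real webclient.properties before editing it on the appliance.

def set_property(path, key, value):
    """Rewrite the properties file so that key=value appears exactly once."""
    with open(path) as f:
        lines = f.readlines()
    entry = f"{key}={value}\n"
    for i, line in enumerate(lines):
        if line.split("=", 1)[0].strip() == key:
            lines[i] = entry
            break
    else:
        lines.append(entry)
    with open(path, "w") as f:
        f.writelines(lines)

# Local stand-in for the appliance file, with the 2-minute default:
with open("webclient.properties", "w") as f:
    f.write("LinkedVcGroup.login.timeout=120000\n")

# Example: reduce the timeout from 120000 ms (2 min) to 30000 ms (30 s).
set_property("webclient.properties", "LinkedVcGroup.login.timeout", "30000")
```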

Security Issues
  • Vulnerability scans might report the HTTP TRACE method on vCenter ports 9084 and 9087 as vulnerable

    Some third-party tools for vulnerability scans might report the HTTP TRACE method on vCenter ports 9084 and 9087 as vulnerable.

    Workaround: Log in to the vCenter Server appliance shell and perform the following steps:

    1. Backup all the *.war files before editing them.
    2. Run the command mkdir /tmp/war/
    3. Run the command cp /usr/lib/vmware-updatemgr/bin/jetty/webapps/root.war /tmp/war/
    4. Run the command cd /tmp/war/
    5. Unzip the root.war file.
    6. Run the command cd WEB-INF/
    7. Run the command chmod 777 web.xml
    8. Edit web.xml and add the following code after the last <servlet-mapping> tag:
      <security-constraint>
             <web-resource-collection>
                <web-resource-name>Restricted HTTP Methods</web-resource-name>
                <url-pattern>/*</url-pattern>
                <http-method>TRACE</http-method>
             </web-resource-collection>
             <auth-constraint/>
      </security-constraint>
      cd ..
    9. Run the command zip -r -u root.war WEB-INF/
    10. Run the command cp root.war /usr/lib/vmware-updatemgr/bin/jetty/webapps/
    11. Clean /tmp/war with rm -rf /tmp/war/*
    12. Repeat steps 2 to 11 for vum-filedownload.war and vum-fileupload.war
    13. Restart the updatemgr service.
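The web.xml edit in step 8 can be sketched as a small script. This is an illustration, assuming web.xml contains standard </servlet-mapping> closing tags; the helper function is hypothetical, and you would still follow the backup, zip, and service-restart steps above.

```python
# Sketch of automating step 8: insert the TRACE-blocking security
# constraint after the last </servlet-mapping> tag in web.xml content.
# The constraint text mirrors the workaround above.

CONSTRAINT = """
<security-constraint>
   <web-resource-collection>
      <web-resource-name>Restricted HTTP Methods</web-resource-name>
      <url-pattern>/*</url-pattern>
      <http-method>TRACE</http-method>
   </web-resource-collection>
   <auth-constraint/>
</security-constraint>
"""

def restrict_trace(webxml_text):
    """Return web.xml content with the constraint inserted after the
    last </servlet-mapping> closing tag."""
    tag = "</servlet-mapping>"
    idx = webxml_text.rindex(tag) + len(tag)
    return webxml_text[:idx] + CONSTRAINT + webxml_text[idx:]

sample = "<web-app><servlet-mapping></servlet-mapping></web-app>"
print("TRACE" in restrict_trace(sample))  # True
```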

Known Issues from Prior Releases

To view a list of previous known issues, click here.
