vCenter Server 7.0 Update 3 | 05 OCT 2021 | ISO Build 18700403

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

  • What's New
  • Earlier Releases of vCenter Server 7.0
  • Patches Contained in This Release
  • Product Support Notices
  • Resolved Issues
  • Known Issues

What's New

  • vCenter Server 7.0 Update 3 contains all security fixes from vCenter Server 7.0 Update 2d and covers all vulnerabilities documented in VMSA-2021-0020.

  • vSphere Memory Monitoring and Remediation, and support for snapshots of PMem VMs: vSphere Memory Monitoring and Remediation collects data and provides visibility of performance statistics to help you determine whether your application workload regresses due to Memory Mode. vSphere 7.0 Update 3 also adds support for snapshots of PMem VMs. For more information, see vSphere Memory Monitoring and Remediation.

  • Extended support for disk drive types: Starting with vSphere 7.0 Update 3, vSphere Lifecycle Manager validates the following types of disk drives and storage device configurations:
    • HDD (SAS/SATA)
    • SSD (SAS/SATA)
    • SAS/SATA disk drives behind single-disk RAID-0 logical volumes
    For more information, see Cluster-Level Hardware Compatibility Checks.

  • Use vSphere Lifecycle Manager images to manage a vSAN stretched cluster and its witness host: Starting with vSphere 7.0 Update 3, you can use vSphere Lifecycle Manager images to manage a vSAN stretched cluster and its witness host. For more information, see Using vSphere Lifecycle Manager Images to Remediate vSAN Stretched Clusters.

  • vSphere Cluster Services (vCLS) enhancements: With vSphere 7.0 Update 3, vSphere admins can configure vCLS virtual machines to run on specific datastores by configuring the vCLS VM datastore preference per cluster. Admins can also define compute policies to specify how the vSphere Distributed Resource Scheduler (DRS) should place vCLS agent virtual machines (vCLS VMs) and other groups of workload VMs. 

  • Improved interoperability between vCenter Server and ESXi versions: Starting with vSphere 7.0 Update 3, vCenter Server can manage ESXi hosts from the previous two major releases and any ESXi host from version 7.0 and 7.0 updates. For example, vCenter Server 7.0 Update 3 can manage ESXi hosts of versions 6.5, 6.7, and 7.0, all 7.0 update releases, including those later than Update 3, and a mixture of hosts across major and update versions.

  • MTU size greater than 9000 bytes: With vCenter Server 7.0 Update 3, you can set the size of the maximum transmission unit (MTU) on a vSphere Distributed Switch to up to 9190 bytes to support switches with larger packet sizes.

  • Zero downtime, zero data loss for mission critical VMs in case of Machine Check Exception (MCE) hardware failure: With vSphere 7.0 Update 3, mission critical VMs protected by VMware vSphere Fault Tolerance can achieve zero downtime and zero data loss in case of Machine Check Exception (MCE) hardware failure, because VMs fall back to the secondary VM instead of failing. For more information, see How Fault Tolerance Works.

  • For Photon OS updates, see VMware vCenter Server Appliance Photon OS Security Patches.

  • For VMware vSphere with Kubernetes updates, see VMware vSphere with Kubernetes Release Notes.

Earlier Releases of vCenter Server 7.0

Features, resolved issues, and known issues of vCenter Server are described in the release notes for each release. For earlier releases of vCenter Server 7.0, see the release notes for the respective release.

For internationalization, compatibility, installation, upgrade, open source components and product support notices, see the VMware vSphere 7.0 Release Notes.
For more information on vCenter Server supported upgrade and migration paths, see VMware knowledge base article 67077.

Patches Contained in This Release

IMPORTANT: Build details for ESXi are available in the ESXi 7.0 Update 3 Release Notes.

This release of vCenter Server 7.0 Update 3 delivers the following patch:

Patch for VMware vCenter Server 7.0 Update 3

Product Patch for vCenter Server containing VMware software fixes, security fixes, and third-party product fixes.

This patch is applicable to vCenter Server.

Download Filename: VMware-vCenter-Server-Appliance-7.0.3.00000-18700403-patch-FP.iso
Build: 18700403
Download Size: 7259.1 MB
md5sum: 56947bc1a591849e55165b7fecebdf85
sha256checksum: 54f30ff9fda3dc0cf7f4ff3a1efef45c37152f8ff7623a4f222dbe1d89411c08
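
To confirm that the ISO downloaded intact, you can compare locally computed digests against the checksum values above before attaching the file to the appliance. A minimal sketch, assuming a Linux shell with the standard md5sum and sha256sum tools and the ISO in the current directory:

  # Both digests must match the md5sum and sha256checksum values listed above
  md5sum VMware-vCenter-Server-Appliance-7.0.3.00000-18700403-patch-FP.iso
  sha256sum VMware-vCenter-Server-Appliance-7.0.3.00000-18700403-patch-FP.iso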

Download and Installation

To download the VMware vCenter Server 7.0 Update 3 build from VMware Customer Connect, navigate to Products and Accounts > Product Patches. From the Select a Product drop-down menu, select VC, from the Select a Version drop-down menu, select 7.0.3, and click Search.

  1. Attach the VMware-vCenter-Server-Appliance-7.0.3.00000-18700403-patch-FP.iso file to the vCenter Server CD or DVD drive.
  2. Log in to the appliance shell as a user with super administrative privileges (for example, root) and run the following commands:
    • To stage the ISO:
      software-packages stage --iso
    • To see the staged content:
      software-packages list --staged
    • To install the staged RPMs:
      software-packages install --staged
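
Alternatively, the staging and installation steps can be combined into a single command that installs directly from the attached ISO. A minimal sketch, assuming the default appliance shell; confirm the exact options with software-packages --help on your appliance:

  software-packages install --iso --acceptEulas

The --acceptEulas option accepts the end user license agreement non-interactively, which is useful for scripted patching.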

For more information on using the vCenter Server shells, see VMware knowledge base article 2100508.

For more information on patching vCenter Server, see Patching the vCenter Server Appliance.

For more information on staging patches, see Stage Patches to vCenter Server Appliance.

For more information on installing patches, see Install vCenter Server Appliance Patches.

For more information on patching using the Appliance Management Interface, see Patching the vCenter Server by Using the Appliance Management Interface.

Product Support Notices

  • Your vCenter Server system must reboot after an update to vCenter Server 7.0 Update 3: After updating your vCenter Server system to vCenter Server 7.0 Update 3 from a previous version of vCenter Server 7.0.x, a reboot is required to ensure critical kernel patches are applied. If you use the Command Line Interface (CLI) for the update or upgrade operation, you must manually restart vCenter Server. If you use the Graphical User Interface (GUI) installer or API, the system reboots automatically.
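
    If you run the update from the CLI, you can trigger the required restart from the appliance shell. A minimal sketch, assuming the default appliance shell; the -r argument is a free-form reason string of your choice:

      shutdown reboot -r "vCenter Server 7.0 Update 3 kernel patches"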
     
  • No prompts to provide the vCenter Single Sign-On administrator password: During an update from vCenter Server 7.0.x to vCenter Server 7.0 Update 3, you see no prompts to provide the vCenter Single Sign-On administrator password. This is true regardless of whether you run the update interactively or non-interactively, by using the vCenter Server Management Interface, software-packages, or the CLI.
     
  • Client plug-ins compliance with FIPS: In a future vSphere release, all client plug-ins for vSphere must become compliant with the Federal Information Processing Standards (FIPS). When FIPS is enabled by default in the vCenter Server, you cannot use local plug-ins that do not conform to the standards. For more information, see Preparing Local Plug-ins for FIPS Compliance.
     
  • Deprecation of legacy vSphere Update Manager workflows for firmware updates delivered as baselines in a vSAN-managed baseline group: In a future major vSphere release, VMware plans to discontinue legacy vSphere Update Manager workflows for firmware updates delivered as baselines in a vSAN-managed baseline group. Instead, you can manage vSAN clusters with a single vSphere Lifecycle Manager image and upgrade server firmware with a supported, integrated hardware support manager from your server vendor. For more information, see Updating Firmware in vSAN Clusters.
     
  • Install NSX Manager from the vSphere Client: vCenter Server 7.0 Update 3 adds functionality in the vSphere Client to enable installation of NSX Manager with a future NSX-T Data Center release. You can see the vSphere Client NSX-T home page that enables the feature, but it does not work with NSX-T Data Center 3.1.x or earlier.
     
  • Deprecation of Shares and Limit – IOPS fields in the virtual machine Edit Settings dialog: Starting with vCenter Server 7.0 Update 3, the Shares and Limit – IOPS fields in the virtual machine Edit Settings dialog are deprecated, because all I/O settings are defined only by using a storage policy. In a future vSphere release, the two fields are planned to be removed from the virtual machine Edit Settings dialog. For more information, see VMware knowledge base article 85696 and About Virtual Machine Storage Policies.

 

Resolved Issues

The resolved issues are grouped as follows.

vSphere Lifecycle Manager Issues
  • When you try to check VMware Tools or VM Hardware compliance status, you see a status 500 error and the check returns no results

    In the vSphere Client, when you navigate to the Updates tab of a container object (host, cluster, data center, or vCenter Server instance) to check VMware Tools or VM Hardware compliance status, you might see a status 500 error. The check works only if you navigate to the Updates tab of a virtual machine.

    This issue is resolved in this release.

Miscellaneous Issues
  • SNMP dynamic firewall ruleset is modified by Host Profiles during a remediation process

    The SNMP firewall ruleset is dynamic and is handled at runtime. When a host profile is applied, the configuration of the ruleset is managed simultaneously by Host Profiles and SNMP, which can modify the firewall settings unexpectedly.

    This issue is resolved in this release.

  • Import Host Profile task fails with a reference host error

    The NoAccess or NoCryptoAdmin roles might be modified during exports of a host profile in a 7.0.x vCenter Server system, and the import of such a host profile might fail with a reference host error. In the vSphere Client, you see a message such as There is no suitable host in the inventory as reference host for the profile Host Profile.

    This issue is resolved in this release. However, for host profiles exported from versions earlier than vCenter Server 7.0 Update 3, you must edit the host profile XML file and remove the privileges in the NoAccess or NoCryptoAdmin roles before an import operation.

Storage Issues
  • A CNS query with the compliance status filter set might take an unusually long time to complete

    The CNS QueryVolume API enables you to obtain information about CNS volumes, such as volume health and compliance status. When you check the compliance status of individual volumes, the results are obtained quickly. However, when you invoke the CNS QueryVolume API to check the compliance status of multiple volumes, in the tens or hundreds, the query might perform slowly.

    This issue is resolved in this release.

  • All I/O filter storage providers are offline after upgrade to vCenter Server 7.0 Update 2

    After patching or upgrading your system to vCenter Server 7.0 Update 2, all I/O filter storage providers might display with status Offline or Disconnected in the vSphere Client. vCenter Server 7.0 Update 2 supports the Federal Information Processing Standards (FIPS), and certain environments might face the issue due to certificates signed with the SHA-1 hashing algorithm, which is not FIPS-compliant.

    This issue is resolved in this release.

vCenter Server and vSphere Client Issues
  • You do not see progress on vSphere Lifecycle Manager and vSphere with VMware Tanzu tasks in the vSphere Client

    In a mixed-version vCenter Server 7.0 system, such as a transitional environment with vCenter Server 7.0 Update 1 and Update 2 instances and Enhanced Linked Mode enabled, tasks such as image, host, or hardware compliance checks that you trigger from the vSphere Client might show no progress, while the tasks actually run.

    This issue is resolved in this release.

vSphere DRS Issues
  • If DRS Awareness of vSAN Stretched Cluster is enabled on a stretched cluster managing ESXi hosts of version earlier than 7.0 Update 2, vSphere DRS might suggest wrong virtual machine placement

    Prior to vSphere 7.0 Update 2, vSphere DRS has no awareness of read locality for vSAN stretched clusters and the DRS Awareness of vSAN Stretched Cluster feature requires all hosts in a vCenter Server system to be of version ESXi 7.0 Update 2 to work as expected. If you manage ESXi hosts of version earlier than 7.0 Update 2 in a vCenter Server 7.0 Update 2 system, some read locality stats might be read incorrectly and result in improper placements.

    This issue is resolved in this release. The fix ensures that if ESXi hosts of version earlier than 7.0 Update 2 are detected in a vSAN stretched cluster, read locality stats are ignored and vSphere DRS uses the default load balancing algorithm for initial placement and load balancing of workloads.

vSphere HA and Fault Tolerance Issues
  • You see vCenter Server High Availability health degradation alarms reporting an rsync failure

    If you use both vSphere Auto Deploy and vCenter Server High Availability in your environment, rsync might not sync some short-lived temporary files created by Auto Deploy quickly enough. As a result, in the vSphere Client you might see vCenter Server High Availability health degradation alarms. In the /var/log/vmware/vcha file, you see errors such as rsync failure for /etc/vmware-rbd/ssl. The issue does not affect the normal operation of any service.

    This issue is resolved in this release. vSphere Auto Deploy now creates the temporary files outside the vCenter Server High Availability replication folders.

Virtual Machine Management Issues
  • Deployment of virtual machines fails with an error Could not power on virtual machine: No space left on device

    In rare cases, vSphere Storage DRS might over-recommend some datastores, leading to an overload of those datastores and imbalance in the datastore cluster. In extreme cases, power-on of virtual machines might fail due to swap file creation failure. In the vSphere Client, you see an error such as Could not power on virtual machine: No space left on device. You can trace the error in the /var/log/vmware/vpxd/drmdump directory.

    This issue is resolved in this release.

Auto Deploy and Image Builder Issues
  • Boot sequence for ESXi hosts that are provisioned with Auto Deploy stops at /vmw/rbd/host-register

    ESXi hosts that are provisioned with Auto Deploy might fail to boot after you update your vCenter Server system to 7.0 Update 2 or later. In the logs, you see a message such as:
    FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/rbd/cache/f2/0154d902a1ebb121bac89040df90d1/README.b0f08dea872690a93c4b5bc5e14148d1'

    This issue is resolved in this release.

Server Configuration Issues
  • NEW If the NT LAN Manager (NTLM) is disabled on Active Directory, configuration of the vSphere Authentication Proxy service might fail

    You cannot configure the vSphere Authentication Proxy service on an Active Directory when NTLM is disabled, because by default the vSphere Authentication Proxy uses NTLMv1 for initial communication.

    This issue is resolved in this release. The fix changes the default protocol for the initial communication of the vSphere Authentication Proxy to NTLMv2.

  • NEW Configuration of the vSphere Authentication Proxy service might fail when NTLMv2 response is explicitly enabled on vCenter Server 

    Configuration of the vSphere Authentication Proxy service might fail when NTLMv2 response is explicitly enabled on vCenter Server, generating a core.lsassd file under the /storage/core directory.

    This issue is resolved in this release.

Known Issues

The known issues are grouped as follows.

Post-GA vSphere 7.0 Update 3 Known Issues

vSphere Cluster Services Issues
  • You see compatibility issues with new vCLS VMs deployed in a vSphere 7.0 Update 3 environment

    The default name for new vCLS VMs deployed in a vSphere 7.0 Update 3 environment uses the pattern vCLS-UUID. vCLS VMs created in earlier vCenter Server versions continue to use the pattern vCLS (n). Since the use of parentheses () is not supported by many solutions that interoperate with vSphere, you might see compatibility issues.

    Workaround: Reconfigure vCLS by using retreat mode after updating to vSphere 7.0 Update 3. 

Networking Issues
  • You see errors in the vSphere Client when the HTTP Reverse Proxy (rhttpproxy) service is set on ports other than 80 and 443

    If you configure vCenter Enhanced Linked Mode and change the rhttpproxy settings from the default ports 80 for HTTP and 443 for HTTPS, you might see an error such as You have no privileges to view object when you first log in to the vSphere Client.

    Workaround: None.

Backup and Restore Issues
  • When monitoring task status in a vSphere with Tanzu environment, you see an error that a specified parameter is not correct

    In the vSphere Client, when you navigate to Monitor > Tasks, you see an error such as vslm.vcenter.VStorageObjectManager.deleteVStorageObjectEx.label - A specified parameter was not correct: in the Status field. The issue occurs in vSphere with Tanzu environments when you deploy a backup solution that uses snapshots. If the snapshots are not cleaned up, some operations in Tanzu Kubernetes clusters might not complete and cause the error.

    Workaround: Delete snapshots from the backup solution endpoint by using vendor instructions and retry the Tanzu Kubernetes cluster operation.

Miscellaneous Issues
  • You cannot delete services from supervisor clusters in your vSphere environment

    In rare cases, you might not be able to delete services such as NGINX and MinIO from supervisor clusters in your vSphere environment from the vSphere Client. After you deactivate the services, the Delete modal stays in a processing state indefinitely.

    Workaround: Close and reopen the Delete modal.

  • You cannot enable or reconfigure a vSphere Trust Authority cluster on a vCenter Server system of version 7.0 Update 3 with ESXi hosts of earlier versions

    If you try to enable or reconfigure a vSphere Trust Authority cluster on a vCenter Server system of version 7.0 Update 3 with ESXi hosts of earlier versions, encryption of virtual machines on such hosts fails.

    Workaround: Keep your existing Trusted Cluster configuration unchanged until you upgrade your ESXi hosts to version 7.0 Update 3.

vSphere Lifecycle Manager Issues
  • You cannot upload an NSX depot to a vSphere Lifecycle Manager depot when vCenter Server services are deployed on a custom port

    If you create a vSphere Lifecycle Manager cluster and configure NSX-T Data Center on that cluster by using the NSX Manager user interface, the configuration might fail as the upload of an NSX depot to the vSphere Lifecycle Manager depot fails. In the NSX Manager user interface, you see an error such as 26195: Setting NSX depot(s) on Compute Manager: 253b644a-4ea5-4025-9c47-6cd00af1d75f failed with error: Unable to connect ComputeManager. Retry Transport Node Collection at cluster. The issue occurs when you use a custom port to configure the vCenter Server that is associated with the NSX-T Data Center as a compute manager in the NSX Manager.

    Workaround: None.

Installation, Upgrade, and Migration Issues
  • After upgrade to vCenter Server 7.0 Update 3, some plug-ins might fail due to incompatibility with Spring 5

    After you upgrade your system to vCenter Server 7.0 Update 3, the vSphere Client is upgraded to use Spring Framework version 5, because Spring 4 reached end of life on December 31, 2020. However, some plug-ins that use Spring 4 APIs might fail due to incompatibility with Spring 5, for example, plug-ins for VMware NSX Data Center for vSphere of version 6.4.10 or earlier. You see an error such as HTTP Status 500 – Internal Server Error.

    Workaround: Update the plug-ins to use Spring 5. Alternatively, downgrade the vSphere Client to use Spring 4 by uncommenting the line //-DuseOldSpring=true in the /etc/vmware/vmware-vmon/svcCfgfiles/vsphere-ui.json file and restarting the vSphere Client. For more information, see VMware knowledge base article 85632.
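
    In either case, the vSphere Client service must be restarted for the change to take effect. A minimal sketch of the restart, assuming shell access on the vCenter Server appliance; you can confirm the service name with service-control --list:

      # Restart the vSphere Client (vsphere-ui) service so it picks up the configuration change
      service-control --stop vsphere-ui
      service-control --start vsphere-ui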

  • vSphere Pod Service might fail after a vCenter Server upgrade while waiting for a vCenter Server reboot

    If the vSphere Pod Service fails for some reason during stage 1 of a vCenter Server upgrade, while waiting for a vCenter Server reboot, the upgrade does not complete.

    Workaround: Continue or retry the upgrade operation after vSphere Pod Service recovers.

vCenter Server and vSphere Client Issues
  • Skyline Health page displays garbage characters

    In the vSphere Client, when you navigate to vCenter Server or select an ESXi host in the vSphere Client navigator and click Monitor > Skyline Health, the page displays garbage characters in the following locales: Korean, Japanese, German and French.

    Workaround: Switch to English locale.

  • If vCenter Server services are deployed on custom ports, remediation of ESXi hosts in a vSphere Lifecycle Manager cluster with vSAN enabled fails

    If vCenter Server services are deployed on custom ports in an environment with vSAN, vSphere DRS, and vSphere HA enabled, remediation of vSphere Lifecycle Manager clusters might fail due to a vSAN resource check task error. The vSAN health check also prevents ESXi hosts from entering maintenance mode, which causes remediation tasks to fail.

    Workaround: For more information, see VMware knowledge base article 85890.

  • You see Certificate Status alarm in the vSphere Client for expiring certificates in the vSphere Certificate Manager Utility backup store

    The VMware Certificate Manager uses the vSphere Certificate Manager Utility backup store (BACKUP_STORE) to support certificate revert, keeping only the most recent state. However, the vpxd service throws a Certificate Status error when monitoring the BACKUP_STORE if it contains any expired certificates, even though expired certificates in the backup store are expected.

    Workaround: Delete the certificate entries in BACKUP_STORE by using the following vecs-cli commands:

    1. Get the aliases of expired certificates in the BACKUP_STORE:
      /usr/lib/vmware-vmafd/bin/vecs-cli entry list --store BACKUP_STORE --text
    2. Delete a certificate in the BACKUP_STORE:
      /usr/lib/vmware-vmafd/bin/vecs-cli entry delete --store BACKUP_STORE --alias <alias>
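
    For example, you can filter the list output down to each entry's alias and expiration date to identify the expired entries before deleting them. A minimal sketch, assuming Bash shell access on the appliance; the alias in the delete command is a hypothetical placeholder that you must replace with a value from the list output:

      # Show only the alias and the certificate expiration date of each entry
      /usr/lib/vmware-vmafd/bin/vecs-cli entry list --store BACKUP_STORE --text | grep -E "Alias|Not After"

      # Delete one expired entry by alias; -y skips the confirmation prompt
      /usr/lib/vmware-vmafd/bin/vecs-cli entry delete --store BACKUP_STORE --alias bkp_example_alias -y
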
  • In the vSphere Client dark theme, in the OVF deployment wizard, you cannot see the Virtual machine name field

    If you use the vSphere Client dark theme, in the OVF deployment wizard, after you provide a virtual machine name and open the tree view to select a location, the Virtual machine name field turns solid white and hides your input.

    Workaround: Click on the white space that hides your input in the Virtual machine name field to restore the correct view. 

  • If the deployment location is an NSX Distributed Virtual port group, deployments by using an OVF file or template might fail

    If the following two conditions exist in your environment, deployments by using an OVF file or template might fail:

    1. The deployment location is an NSX Distributed Virtual port group.
    2. The deployment location is a vSphere cluster with a mixed transport node of a vSphere Distributed Switch (VDS) and an NSX Virtual Distributed Switch (N-VDS), and the N-VDS has the same logical switch as the OVF deployment location.

    Workaround: Select the OVF deployment location to be on an opaque network, not on an NSX Distributed Virtual port group, or retry the deployment. In a mixed transport node, the target is randomly selected, and a retry of the deployment succeeds when the location is on the VDS.

Known Issues from Prior Releases

For a list of known issues from prior releases, see the release notes for the earlier vCenter Server 7.0 releases.
