vCenter Server 7.0 Update 1 | 06 OCT 2020 | ISO Build 16860138

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

  • What's New
  • Earlier Releases of vCenter Server 7.0
  • Patches Contained in This Release
  • Installation and Upgrade Notes for This Release
  • Product Support Notices
  • Resolved Issues
  • Known Issues

What's New

  • NSX-T 3.1.0 support with vSphere Lifecycle Manager: vCenter Server 7.0 Update 1 supports NSX-T 3.1.0, which enables the integration of NSX-T with vSphere Lifecycle Manager cluster images. For more information, see the vSphere Lifecycle Manager with NSX-T section in the NSX-T Data Center Installation Guide.
  • Inclusive terminology: In vCenter Server 7.0 Update 1, as part of a company-wide effort to remove instances of non-inclusive language in our products, the vSphere team has made changes to some of the terms used in the vSphere Client. APIs and CLIs still use legacy terms, but updates are pending in an upcoming release.
  • vSphere accessibility enhancements: vCenter Server 7.0 Update 1 comes with significant accessibility enhancements based on recommendations of the Accessibility Conformance Report (ACR), which is an internationally accepted standard. Some of the user interface accessibility enhancements are:
    • Accessibility compliance of the vCenter Server Appliance Management Interface
    • Accessibility compliance of the storage management UI, such as the datastore and file browser
    • Capabilities for plug-ins to create accessible UI, such as enhanced structure of dialog screens
    • Enhanced accessibility of other UI components, such as the Content Library, host and cluster management, virtual machine configuration, tasks, events and alarms, network management, and workload management
  • vSphere Ideas Portal: With vCenter Server 7.0 Update 1, any user with a valid my.vmware.com account can submit feature requests by using the vSphere Ideas portal. All published ideas are available for voting and the most popular ones might become vSphere features. You can access the vSphere Ideas portal at https://vsphere.ideas.aha.io/ or from the Idea tab under the Feedback section of the vSphere Client. When you log in to the Ideas portal, you are automatically redirected to the my.vmware.com login page for user authentication. After successful login to my.vmware.com, you return to the active session in the vSphere Ideas portal. When you log out of the Ideas portal, you are redirected to my.vmware.com to close the session.
  • Enhanced vSphere Lifecycle Manager hardware compatibility pre-checks for vSAN environments: vCenter Server 7.0 Update 1 adds vSphere Lifecycle Manager hardware compatibility pre-checks. The pre-checks automatically trigger after certain change events, such as modification of the cluster desired image or addition of a new ESXi host in vSAN environments. Also, the hardware compatibility framework automatically polls the Hardware Compatibility List database at predefined intervals for changes that trigger pre-checks as necessary.
  • Increased scalability with vSphere Lifecycle Manager: With vCenter Server 7.0 Update 1, scalability for vSphere Lifecycle Manager operations with ESXi hosts and clusters increases to:
    • 64 supported clusters, up from 15
    • 96 supported ESXi hosts within a cluster, up from 64; for vSAN environments, the limit remains 64
    • 280 supported ESXi hosts managed by a single vSphere Lifecycle Manager image, up from 150
    • 64 clusters on which you can run remediation in parallel when you initiate remediation at the data center level, up from 15
  • vSphere Lifecycle Manager support for coordinated upgrades between availability zones: With vCenter Server 7.0 Update 1, to prevent overlapping operations, vSphere Lifecycle Manager updates fault domains in vSAN clusters in a sequence. ESXi hosts within each fault domain are still updated in a rolling fashion. For vSAN stretched clusters, the first fault domain is always the preferred site.
  • Extended list of supported Red Hat Enterprise Linux and Ubuntu versions for the VMware vSphere Update Manager Download Service (UMDS): vCenter Server 7.0 Update 1 adds new Red Hat Enterprise Linux and Ubuntu versions that UMDS supports. For the complete list of supported versions, see Supported Linux-Based Operating Systems for Installing UMDS.
  • Silence Alerts button in VMware Skyline Health: With vCenter Server 7.0 Update 1, you can stop alerts for certain health checks, such as notifications for known issues, by using the Silence Alerts button. For example, if you do not want to receive notifications from some of the Compute Health Checks, navigate to Skyline Health > Compute Health Checks > name of the Health Check and click the Silence Alert button. In the pop-up window, select YES to disable notifications. Use the Restore Alert button to re-enable the alerts.
  • Configure SMTP authentication: vCenter Server 7.0 Update 1 adds support for SMTP authentication in the vCenter Server Appliance to enable sending alerts and alarms by email in secure mode. You can choose between anonymous and authenticated modes for sending email alerts. To configure SMTP authentication, see Configure Mail Sender Settings.
  • System virtual machines for vSphere Cluster Services: In vCenter Server 7.0 Update 1, vSphere Cluster Services adds a set of system virtual machines in every vSphere cluster to ensure the healthy operation of VMware vSphere Distributed Resource Scheduler. For more information, see VMware knowledge base articles 80472, 79892, and 80483.
  • Licensing for VMware Tanzu Basic: With vCenter Server 7.0 Update 1, licensing for VMware Tanzu Basic splits into separate license keys for vSphere 7 Enterprise Plus and VMware Tanzu Basic. In vCenter Server 7.0 Update 1, you must provide either a vSphere 7 Enterprise Plus license key or a vSphere 7 Enterprise Plus with an add-on for Kubernetes license key to enable the Enterprise Plus functionality for ESXi hosts. In addition, you must provide a VMware Tanzu Basic license key to enable Kubernetes functionality for all ESXi hosts that you want to use as part of a Supervisor Cluster.
    When you upgrade a 7.0 deployment to 7.0 Update 1, existing Supervisor Clusters automatically start a 60-day evaluation mode. If you do not install a VMware Tanzu Basic license key and assign it to existing Supervisor Clusters within 60 days, you see some limitations in the Kubernetes functionality. For more information, see Licensing for vSphere with Tanzu and VMware knowledge base article 80868.

Earlier Releases of vCenter Server 7.0

Features, resolved issues, and known issues of vCenter Server are described in the release notes for each release. Release notes for earlier releases of vCenter Server 7.0 are available in the VMware vSphere documentation.

For internationalization, compatibility, and open source components, see the VMware vSphere 7.0 Release Notes.

Patches Contained in This Release

This release of vCenter Server 7.0 Update 1 delivers the following patch. See the VMware Patch Download Center for more information on downloading patches.

Patch for VMware vCenter Server Appliance 7.0 Update 1

Product Patch for vCenter Server containing VMware software fixes, security fixes, and third-party product fixes.

This patch is applicable to vCenter Server.

Download Filename: VMware-vCenter-Server-Appliance-7.0.1.00000-16860138-patch-FP.iso
Build: 16860138
Download Size: 5902.8 MB
md5sum: 30975695fcc1d4b1c169ffa7132a77f5
sha1checksum: d5ecb95189e40657be1fb9b0871824b501041f18
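
To verify the integrity of the downloaded ISO before you attach it, you can compare its checksums with the values above. A minimal sketch, assuming standard Linux utilities on the machine where you downloaded the file:

  md5sum VMware-vCenter-Server-Appliance-7.0.1.00000-16860138-patch-FP.iso
  sha1sum VMware-vCenter-Server-Appliance-7.0.1.00000-16860138-patch-FP.iso

Both values must match the md5sum and sha1checksum listed above.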

Download and Installation

You can download this patch by going to the VMware Patch Download Center and selecting VC from the Select a Product drop-down menu.

  1. Attach the VMware-vCenter-Server-Appliance-7.0.1.00000-16860138-patch-FP.iso file to the vCenter Server CD or DVD drive.
  2. Log in to the appliance shell as a user with super administrative privileges (for example, root) and run the following commands:
    • To stage the ISO:
      software-packages stage --iso
    • To see the staged content:
      software-packages list --staged
    • To install the staged rpms:
      software-packages install --staged
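
If you do not need to review the staged content first, the appliance shell also supports staging and installing the attached ISO in a single step, accepting the EULA non-interactively:

      software-packages install --iso --acceptEulas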

For more information on using the vCenter Server shells, see VMware knowledge base article 2100508.

For more information on patching vCenter Server, see Patching the vCenter Server Appliance.

For more information on staging patches, see Stage Patches to vCenter Server Appliance.

For more information on installing patches, see Install vCenter Server Appliance Patches.

For more information on patching using the Appliance Management Interface, see Patching the vCenter Server by Using the Appliance Management Interface.

Installation and Upgrade Notes for This Release

Before upgrading to vCenter Server 7.0 Update 1, you must confirm that the Link Aggregation Control Protocol (LACP) mode is set to enhanced, which enables the Multiple Link Aggregation Control Protocol (the multipleLag parameter) on the VMware vSphere Distributed Switch (VDS) in your vCenter Server system.

If the LACP mode is set to basic, indicating One Link Aggregation Control Protocol (singleLag), the distributed virtual port groups on the vSphere Distributed Switch might lose connection after the upgrade and affect the management vmknic, if it is on one of the dvPort groups. During the upgrade precheck, you see an error such as Source vCenter Server has instance(s) of Distributed Virtual Switch at unsupported lacpApiVersion.
For more information on converting to Enhanced LACP Support on a vSphere Distributed Switch, see VMware knowledge base article 2051311. For more information on the limitations of LACP in vSphere, see VMware knowledge base article 2051307.

Product Support Notices

  • Intent to deprecate SHA-1
    The SHA-1 cryptographic hashing algorithm will be deprecated in a future release of vSphere. SHA-1 and the already-deprecated MD5 have known weaknesses, and practical attacks against them have been demonstrated.
  • vCenter Server 7.0 Update 1 does not support VMware Site Recovery Manager 8.3.1.
  • Deprecation of Server Message Block (SMB) protocol version 1.0
    File-based backup and restore of vCenter Server by using Server Message Block (SMB) protocol version 1.0 is deprecated in vCenter Server 7.0 Update 1. Removal of SMB 1.0 is due in a future vSphere release.
  • End of General Support for VMware Tools 9.10.x and 10.0.x
    VMware Tools 9.10.x and 10.0.x have reached End of General Support. For more details, refer to VMware Tools listed under the VMware Product Lifecycle Matrix.
  • Deprecation of the VMware Service Lifecycle Manager API 
    VMware plans to deprecate the VMware Service Lifecycle Manager API (vmonapi service) in a future release. For more information, see VMware knowledge base article 80775.
  • End of support for Internet Explorer 11
    Removal of Internet Explorer 11 from the list of supported browsers for the vSphere Client is due in a future vSphere release.
  • VMware Host Client in maintenance mode 
    The VMware Host Client is in maintenance mode until the release of a new client.

Resolved Issues

The resolved issues are grouped as follows.

Backup and Restore Issues
  • NEW: If SSH is disabled on a vCenter Server system, a file-based restore operation might fail

    If a file-based backup of a vCenter Server system is taken while SSH is disabled, a restore operation by using such a backup might fail when the SSH service restarts after the restore. In the vSphere Client, you see a message such as ERROR: Failed to restart sshd.service: Unit sshd.service is masked.

    This issue is resolved in this release.

vSphere Lifecycle Manager Issues
  • Adding hosts while remediating a vSphere HA-enabled cluster in vSphere Lifecycle Manager causes a vSphere HA error state

    Adding one or multiple ESXi hosts during a remediation process of a vSphere HA-enabled cluster results in the following error message: Applying HA VIBs on the cluster encountered a failure.

    This issue is resolved in this release.

  • Importing an image with no vendor addon, components, or firmware and drivers addon to a cluster whose image contains such elements does not remove those elements from the existing image

    Only the ESXi base image is replaced with the one from the imported image.

    This issue is resolved in this release.

  • ESXi 7.0 hosts cannot be added to a cluster that you manage with a single image by using vSphere Auto Deploy

    Attempting to add ESXi hosts to a cluster that you manage with a single image by using the Add to Inventory workflow in vSphere Auto Deploy fails. The failure occurs because no patterns are matched in an existing Auto Deploy ruleset. The task fails silently and the hosts remain in the Discovered Hosts tab.

    This issue is resolved in this release.

vCenter Server and vSphere Client Issues
  • Linked Software-Defined Data Center (SDDC) vCenter Server instances appear in the on-premises vSphere Client if a vCenter Cloud Gateway is linked to the SDDC

    When a vCenter Cloud Gateway is deployed in the same environment as an on-premises vCenter Server, and linked to an SDDC, the SDDC vCenter Server will appear in the on-premises vSphere Client. This is unexpected behavior and the linked SDDC vCenter Server should be ignored. All operations involving the linked SDDC vCenter Server should be performed on the vSphere Client running within the vCenter Cloud Gateway.

    This issue is resolved in this release.

Security Issues
  • Update to the SQLite database

    The SQLite database is updated to version 3.32.2.

  • Update to the Apache Tomcat server

    The Apache Tomcat server is updated to version 8.5.55 / 9.0.35.

  • Update to cURL

    cURL in the vCenter Server is updated to version 7.70.0.

  • Update to VMware PostgreSQL

    VMware PostgreSQL is updated to version 11.8.

  • Update to OpenJDK 1.8.0.252

    Open-source JDK is updated to version 1.8.0.252.

  • Update of the Jackson package

    The Jackson package is updated to version 2.10.3.

  • Upgrade of Eclipse Jetty

    Eclipse Jetty is upgraded to version 9.4.28.

  • Update to the Spring Framework

    The Spring Framework is updated to version 4.3.27 / 5.2.5.

Storage Issues
  • Attempts to attach multiple CNS volumes to the same pod might occasionally fail with an error

    When you attach multiple volumes to the same pod simultaneously, the attach operation might occasionally choose the same controller slot. As a result, only one of the operations succeeds, while other volume mounts fail.

    This issue is resolved in this release.

Installation, Upgrade, and Migration Issues
  • Generating an interoperability report for vCenter Server 7.0 Update 1 fails with an error

    In the vSphere Client, when you navigate to Updates > Update Planner, and select 7.0.1 as your target vCenter Server to generate an interoperability report, you see an error such as:
    The provided target product version is invalid. Please provide valid version for the target product.

    This issue is resolved in this release.

Networking Issues
  • NEW: vCenter Server fails if the hosts in a cluster using Distributed Resource Scheduler (DRS) join NSX-T networking by a different Virtual Distributed Switch (VDS) or combination of NSX-T Virtual Distributed Switch (NVDS) and VDS

    In vSphere 7.0, when using NSX-T networking on vSphere VDS with a DRS cluster, if the hosts do not join the NSX transport zone by the same VDS or NVDS, it can cause vCenter Server to fail.

    This issue is resolved in this release.

Miscellaneous Issues
  • NEW: Editing an advanced option parameter in a host profile and setting its value to false results in setting the value to true

    When you attempt to set the value of an advanced option parameter in a host profile to false, the user interface creates a non-empty string value. Values that are not empty are interpreted as true, and the advanced option parameter receives a true value in the host profile.

    This issue is resolved in this release.

Known Issues

The known issues are grouped as follows.

Virtual Machine Management Issues
  • You cannot add or modify an existing network adapter on a virtual machine

    If you try to add or modify an existing network adapter on a virtual machine, the Reconfigure Virtual Machine task might fail with an error such as Cannot complete operation due to concurrent modification by another operation in the vSphere Client. In the /var/log/hostd.log file of the ESXi host where the virtual machine runs, you see logs such as:
    2020-07-28T07:47:31.621Z verbose hostd[2102259] [Originator@6876 sub=Vigor.Vmsvc.vm:/vmfs/volumes/vsan:526bc94351cf8f42-41153841cab2f9d9/bad71f5f-d85e-a276-4cf6-246e965d7154/interop_l2vpn_vmotion_VM_1.vmx] NIC: connection control message: Failed to connect virtual device 'ethernet0'.
    In the vpxa.log file, you see entries similar to: 2020-07-28T07:47:31.941Z info vpxa[2101759] [Originator@6876 sub=Default opID=opId-59f15-19829-91-01-ed] [VpxLRO] -- ERROR task-138 -- vm-13 -- vim.VirtualMachine.reconfigure: vim.fault.GenericVmConfigFault: 

    Workaround: For each ESXi host in your cluster do the following:

    1. Connect to the ESXi host by using SSH and run the command
      esxcli system module parameters set -a -p dvfiltersMaxFilters=8192 -m dvfilter
    2. Put the ESXi host in Maintenance Mode.
    3. Reboot the ESXi host.

    For more information, see VMware knowledge base article 80399.
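
    To confirm the new value before you reboot, you can list the parameters of the dvfilter module by using the same esxcli namespace as in step 1. A quick check, assuming SSH access to the host:

      esxcli system module parameters list -m dvfilter

    The dvfiltersMaxFilters parameter must show the value 8192.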

  • ESXi 6.5 hosts with AMD Opteron Generation 3 (Greyhound) processors cannot join Enhanced vMotion Compatibility (EVC) AMD REV E or AMD REV F clusters on a vCenter Server 7.0 Update 1 system

    In vCenter Server 7.0 Update 1, vSphere cluster services, such as vSphere DRS and vSphere HA, run on ESX agent virtual machines to make the services functionally independent of vCenter Server. However, the CPU baseline for AMD processors of the ESX agent virtual machines includes POPCNT and SSE4A instructions, which prevents ESXi 6.5 hosts with AMD Opteron Generation 3 (Greyhound) processors from enabling EVC mode AMD REV E and AMD REV F on a vCenter Server 7.0 Update 1 system.

    Workaround: None

Installation, Upgrade, and Migration Issues
  • Patching to vCenter Server 7.0 Update 1 from earlier versions of vCenter Server 7.x is blocked when vCenter Server High Availability is enabled

    Patching to vCenter Server 7.0 Update 1 from earlier versions of vCenter Server 7.x is blocked when vCenter Server High Availability is active.

    Workaround: To patch your system to vCenter Server 7.0 Update 1 from earlier versions of vCenter Server 7.x, you must remove vCenter Server High Availability and delete the passive and witness nodes. After the upgrade, you must re-create your vCenter Server High Availability clusters.

  • Migration of a 6.7.x vCenter Server system to vCenter Server 7.x fails with a UnicodeEncodeError

    If you select the option to import all data for configuration, inventory, tasks, events, and performance metrics, the migration of a 6.7.x vCenter Server system to vCenter Server 7.x might fail for any vCenter Server system that uses a non-English locale. At step 1 of stage 2 of the migration, in the vSphere Client, you see an error such as:
    Error while exporting events and tasks data: …ERROR UnicodeEncodeError: Traceback (most recent call last):

    Workaround: You can complete the migration operation in either of the following ways:

    • Select the default option Configuration and Inventory at the end of stage 1 of the migration.
      This option does not include tasks and events data.
    • Clean the data in the events tables and run the migration again.
  • If a Windows vCenter Server system has a database password containing non-ASCII characters, pre-checks of the VMware Migration Assistant fail

    If you try to migrate a 6.x vCenter Server system that runs on Windows and uses an external database with a password containing non-ASCII characters, for example Admin!23迁移, to vCenter Server 7.x by using the VMware Migration Assistant, the operation fails. In the Migration Assistant console, you see the following error:

    Error:Component com.vmware.vcdb failed with internal error
    Resolution:File Bugzilla PR to VPX/VPX/vcdb-upgrade

    Workaround: None

  • During an update from vCenter Server 7.x to vCenter Server 7.0 Update 1, you are prompted to provide the vCenter Single Sign-On password

    During an update from vCenter Server 7.x to vCenter Server 7.0 Update 1, you are prompted to provide the vCenter Single Sign-On administrator password.

    Workaround: If you run the update by using the vCenter Server Management Interface, you must provide the vCenter Single Sign-On administrator password.
    If you run the update by using software-packages or CLI in an interactive manner, you must interactively provide the vCenter Single Sign-On administrator password.
    If you run the update by using software-packages or CLI in a non-interactive manner, you must provide the vCenter Single Sign-On administrator password in an answer file in the format
    { "vmdir.password": "SSO Password of Administrator@<SSO-DOMAIN> user" }

  • You might not be able to apply or remove NSX while you add ESXi hosts by using a vSphere Lifecycle Manager image to a cluster with VMware vSphere High Availability enabled

    If you start an operation to apply or remove NSX while adding multiple ESXi hosts by using a vSphere Lifecycle Manager image to a vSphere HA-enabled cluster, the NSX-related operations might fail with an error in the vSphere Client such as:
    vSphere HA agent on some of the hosts on cluster <cluster_name> is neither vSphere HA master agent nor connected to vSphere HA master agent. Verify that the HA configuration is correct.
    The issue occurs because vSphere Lifecycle Manager configures vSphere HA for the ESXi hosts being added to the cluster one at a time. If you run an operation to apply or remove NSX while vSphere HA configure operations are still in progress, NSX operations might queue up between the vSphere HA configure operations for two different ESXi hosts. In such a case, the NSX operation fails with a cluster health check error, because the state of the cluster at that point does not match the expected state that all ESXi hosts have vSphere HA configured and running. The more ESXi hosts you add to a cluster at the same time, the more likely the issue is to occur.

    Workaround: Disable and re-enable vSphere HA on the cluster. Then proceed with the operations to apply or remove NSX.

  • After an upgrade of a vCenter Server 7.0 system, you cannot see the IP addresses of pods in the vSphere Pod Summary tab of the vSphere Client

    If you upgrade your vCenter Server 7.0 system to a later version, you can no longer see the IP addresses of pods in the vSphere Pod Summary tab of the vSphere Client.

    Workaround: Use the Kubernetes CLI Tools for vSphere to review details of pods:

    1. As a prerequisite, copy the pod and namespace names. 
      • In the vSphere Client, navigate to Workload Management > Clusters.
      • Copy the IP displayed in the Control Plane Node IP Address tab.
      • You can navigate to https://<control_plane_node_IP_address> and download the Kubernetes CLI Tools, kubectl and kubectl-vsphere.
        Alternatively, follow the steps in Download and Install the Kubernetes CLI Tools for vSphere.
    2. Use the CLI plug-in for vSphere to review the pod details.
      1. Log in to the Supervisor cluster by using the command
        kubectl vsphere login --server=https://<server_address> --vsphere-username <your user account name> --insecure-skip-tls-verify
      2. By using the names copied in step 1, run the commands for retrieving the pod details:
        kubectl config use-context <namespace_name> 
        and
        kubectl describe pod <pod_name> -n <namespace_name> 

    As a result, you can see the IP address in an output similar to:

    $ kubectl describe pod helloworld -n my-podvm-ns ...
    Status: Running
    IP: 10.0.0.10
    IPs:
     IP: 10.0.0.10 ...
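
    Alternatively, after you log in and select the namespace context as in step 2, a single command can list all pods in the namespace together with their IP addresses. A minimal sketch; <namespace_name> is the namespace copied in step 1:

      kubectl get pods -n <namespace_name> -o wide

    The output includes an IP column for each pod.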

  • Deployment of a vCenter Server Appliance by using port 5480 at stage 2 fails with unable to save IP settings error

    If you use https://appliance-IP-address-or-FQDN:5480 in a Web browser to go to the vCenter Server Appliance Management Interface for stage 2 of a newly deployed vCenter Server Appliance, and you configure a static IP or try to change the IP configuration, you see an error such as
    Unable to save IP settings.

    Workaround: None.

Backup and Restore Issues
  • If you use the NFS and SMB protocols for file-based backup of vCenter Server, the backup fails after an update from vCenter Server 7.x to vCenter Server 7.0 Update 1

    If you use the Network File System (NFS) and Server Message Block (SMB) protocols for file-based backup of vCenter Server, the backup fails after an update from an earlier version of vCenter Server 7.x to vCenter Server 7.0 Update 1. In the applmgmt.log, you see an error message such as Failed to mount the remote storage. The issue occurs because of Linux kernel updates that run during the patch process. The issue does not occur on fresh installations of vCenter Server 7.0 Update 1.

    Workaround: Reboot the vCenter Server appliance after the update is complete.
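
    For example, assuming you have access to the appliance shell, you can trigger the reboot with a reason annotation:

      shutdown reboot -r "Restore NFS and SMB mounts after update"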

vSphere Lifecycle Manager Issues
  • If you use a Java client to review remediation tasks, you cannot extract the results from the remediation operations

    If you use a Java client to review remediation tasks, extracting the results might fail with a ConstraintValidationException error. The issue occurs when an ESXi host fails to enter maintenance mode during the remediation and gets the status SKIPPED, but at the same time wrongly gets an In Progress flag for the consecutive remediation operations. This causes the ConstraintValidationException error on the Java clients, and you cannot extract the result of the remediation operation.

    Workaround: Fix the underlying issues that prevent ESXi hosts from entering maintenance mode and retry the remediation operation.

  • NEW: Attempts to use Auto Deploy to provision ESXi hosts with a vSphere Lifecycle Manager image on a cluster with NSX-T enabled fail

    If you use Auto Deploy to provision ESXi hosts with a vSphere Lifecycle Manager image on a cluster with NSX-T configured on it, you might experience issues such as:
    • The ESXi host fails to join the vSphere Distributed Switch (VDS) that is configured in NSX-T as part of the transport node profile configuration.
    • The ESXi host remains in Maintenance Mode after being added to the vCenter Server system, or the ESXi host fails to join the vCenter Server system.
    In the vSphere Client, you see an error message such as:
    Host cannot be added to the cluster. Stateless host cannot be added to clusters using a single image to manage hosts

    In the syslog.log file in the /var/log/ folder on the affected ESXi hosts, you see a backtrace such as:

    2020-07-24T10:58:46Z Host Profiles[1000350314 opID=MainThread]: INFO: Successfully initialized privilege list.
    2020-07-24T10:58:46Z HostProfileManager:
    2020-07-24 10:58:46,481 [MainProcess INFO 'root' MainThread] Starting CGI server on stdin/stdout
    2020-07-24T10:58:46Z Host Profiles[1000350314 opID=28ea1be8-05-84-1da1]: INFO: Calling QueryState()
    2020-07-24T10:58:46Z Host Profiles[1000350314 opID=28ea1be8-05-84-1da1]: INFO: State = (vmodl.KeyAnyValue) [ (vmodl.KeyAnyValue) { dynamicType = , dynamicProperty = (vmodl.DynamicProperty) [], key = 'NSX_INSTALL_OPAQUE_SWITCH_STATUS', value = (str) [ 'OpaqueSwitchProfile' ] }, (vmodl.KeyAnyValue) { dynamicType = , dynamicProperty = (vmodl.DynamicProperty) [], key = 'REAPPLY_REQUIRED', value = (str) [ 'DvsProfile' ] }, (vmodl.KeyAnyValue) { dynamicType = , dynamicProperty = (vmodl.DynamicProperty) [], key = 'NSX_DVS_CONFIG_REQUIRED', value = (str) [ 'DvsProfile' ] } ]
    2020-07-24T10:58:46Z Host Profiles[1000350314 opID=28ea1be8-05-84-1da1]: INFO: Cleaned up Host Configuration
    2020-07-24T10:58:46Z Host Profiles[1000350314 opID=28ea1be8-05-84-1da1]: INFO: Returning Host Profile Manager state: (vmodl.KeyAnyValue) [ (vmodl.KeyAnyValue) { dynamicType = , dynamicProperty = (vmodl.DynamicProperty) [], key = 'NSX_INSTALL_OPAQUE_SWITCH_STATUS', value = (str) [ 'OpaqueSwitchProfile' ] }, (vmodl.KeyAnyValue) { dynamicType = , dynamicProperty = (vmodl.DynamicProperty) [], key = 'REAPPLY_REQUIRED', value = (str) [ 'DvsProfile' ] }, (vmodl.KeyAnyValue) { dynamicType = , dynamicProperty = (vmodl.DynamicProperty) [], key = 'NSX_DVS_CONFIG_REQUIRED', value = (str) [ 'DvsProfile' ] } ]

    The issue occurs due to some limitations when the vSphere Lifecycle Manager is enabled on a cluster, such as NSX-T VIBs being removed from the transport node when an administrator remediates an ESXi host.

    Workaround: Follow the steps described in VMware knowledge base article 80697.

  • The general vSphere Lifecycle Manager depot and local depots in Remote Office and Branch Office (ROBO) deployments might not be in sync

    ROBO clusters that have limited or no access to the Internet, or limited connectivity to vCenter Server, can download an image from a depot that is local to them instead of accessing the vSphere Lifecycle Manager depot in vCenter Server. However, vSphere Lifecycle Manager generates software recommendations in the form of pre-validated images only at the central level, and the content of a recommended image might not be available in a depot override.

    Workaround: If you decide to use a recommended image, make sure the content of the depot overrides and the central depot is in sync.

  • Cluster remediation by using vSphere Lifecycle Manager might fail on ESXi hosts with lockdown mode enabled

    If a cluster has ESXi hosts with lockdown mode enabled, remediation operations by using vSphere Lifecycle Manager might skip such hosts. In the log files, you see messages such as Host scan task failed and com.vmware.vcIntegrity.lifecycle.EsxImage.UnknownError An unknown error occurred while performing the operation..

    Workaround: Add the root user to the exception list for lockdown mode and retry the cluster remediation.

Networking Issues
  • If you try to disable vSphere with Tanzu on a vSphere cluster, the operation stops with an error

    If some virtual machines outside of a Supervisor Cluster reside on any of the NSX segment port groups on the cluster, the cleanup script cannot delete such ports and cannot disable vSphere with Tanzu on the cluster. In the vSphere Client, you see the error Cleanup requests to NSX Manager failed and the operation stops at the Removing status. In the /var/log/vmware/wcp/wcpsvc.log file, you see an error message such as
    Segment path=[...] has x VMs or VIFs attached. Disconnect all VMs and VIFs before deleting a segment.

    Workaround: Delete the virtual machines indicated in the /var/log/vmware/wcp/wcpsvc.log file from the segment. Wait for the operation to resume and complete.

  • After upgrading to NSX 6.4.7, when a static IPv6 address is assigned to workload VMs on an IPv6 network, the VMs are unable to ping the IPv6 gateway interface of the edge

    This issue occurs after upgrading the vSphere Distributed Switches from 6.x to 7.0.

    Workaround 1:

    Select the VDS to which all the hosts are connected, go to Edit Settings, and under the Multicast option, switch to Basic.

    Workaround 2:

    Add the following rules on the edge firewall:

    • A rule to allow ping.
    • A rule to allow Multicast Listener Discovery (MLD), that is, ICMPv6 type 130 (MLDv1) and type 143 (MLDv2).

vSAN Issues
  • NEW: Fault domain-aware upgrade for vSAN does not work at the component level

    When an admin user selects an option to upgrade vSAN with fault domains, they can either remediate the desired image directly from the vSphere Lifecycle Manager user interface or API, or select to upgrade a single component, such as NSX-T VIBs only. When the desired image is remediated from the user interface or API, vSAN fault domain awareness is adhered to, but not when a single component is remediated.

    Workaround: None

vSphere Cluster Services Issues
  • If all vSphere Cluster Service agent virtual machines in a cluster are down, vSphere DRS does not function in the cluster 

    If vSphere Cluster Service agent virtual machines fail to deploy or power on in a cluster, services such as vSphere DRS might be impacted.

    Workaround: For more information on the issue and workarounds, see VMware knowledge base article 79892.

  • System virtual machines that support vSphere Cluster Services might impact cluster and datastore maintenance workflows

    In vCenter Server 7.0 Update 1, vSphere Cluster Services adds a set of system virtual machines in every vSphere cluster to ensure the healthy operation of vSphere DRS. The system virtual machines deploy automatically with an implicit datastore selection logic. Depending on your cluster configuration, the system virtual machines might impact some of the cluster and datastore maintenance workflows.

    Workaround: For more information on the impacted workflows and possible workarounds, see VMware knowledge base articles 79892 and 80483.

Known Issues from Prior Releases

To view a list of known issues from prior releases, see the release notes for the earlier releases of vCenter Server 7.0.
