
Updated on: 05 AUG 2020

ESXi 7.0 | 2 APR 2020 | ISO Build 15843807

vCenter Server 7.0 | 2 APR 2020 | ISO Build 15952498

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

What's New

This release of VMware vSphere 7.0 includes VMware ESXi 7.0 and VMware vCenter Server 7.0. Read about the new and enhanced features in this release in What's New in vSphere 7.0.

Internationalization

vSphere 7.0 is available in the following languages:

  • English
  • French
  • German
  • Spanish
  • Japanese
  • Korean
  • Simplified Chinese
  • Traditional Chinese

Components of vSphere 7.0, including vCenter Server, ESXi, the vSphere Client, and the vSphere Host Client, do not accept non-ASCII input.

Compatibility

ESXi and vCenter Server Version Compatibility

The VMware Product Interoperability Matrix provides details about the compatibility of current and earlier versions of VMware vSphere components, including ESXi, VMware vCenter Server, and optional VMware products. Check the VMware Product Interoperability Matrix also for information about supported management and backup agents before you install ESXi or vCenter Server.

The vSphere Lifecycle Manager and vSphere Client are packaged with vCenter Server.

Hardware Compatibility for ESXi

To view a list of processors, storage devices, SAN arrays, and I/O devices that are compatible with vSphere 7.0, use the ESXi 7.0 information in the VMware Compatibility Guide.

Device Compatibility for ESXi

To determine which devices are compatible with ESXi 7.0, use the ESXi 7.0 information in the VMware Compatibility Guide.

Guest Operating System Compatibility for ESXi

To determine which guest operating systems are compatible with vSphere 7.0, use the ESXi 7.0 information in the VMware Compatibility Guide.

Virtual Machine Compatibility for ESXi

Virtual machines that are compatible with ESX 3.x and later (hardware version 4) are supported with ESXi 7.0. Virtual machines that are compatible with ESX 2.x and later (hardware version 3) are not supported. To use such virtual machines on ESXi 7.0, upgrade the virtual machine compatibility. See the ESXi Upgrade documentation.

Before You Begin

vSphere 7.0 requires one CPU license for up to 32 physical cores. If a CPU has more than 32 cores, additional CPU licenses are required as announced in "Update to VMware’s per-CPU Pricing Model". Prior to upgrading ESXi hosts, you can determine the number of licenses required using the license counting tool described in "Counting CPU licenses needed under new VMware licensing policy".

Installation and Upgrades for This Release

Installation Notes for This Release

Read the ESXi Installation and Setup and the vCenter Server Installation and Setup documentation for guidance about installing and configuring ESXi and vCenter Server.

Although the installations are straightforward, several subsequent configuration steps are essential. Read the following documentation:

VMware's Configuration Maximums tool helps you plan your vSphere deployments. Use this tool to view the limits for virtual machines, ESXi, vCenter Server, vSAN, networking, and so on. You can also compare limits for two or more product releases. The VMware Configuration Maximums tool is best viewed on larger format devices such as desktops and laptops.

VMware Tools Bundling Changes in ESXi 7.0

In ESXi 7.0, a subset of the VMware Tools 11.0.5 and VMware Tools 10.3.21 ISO images is bundled with the ESXi 7.0 host.

The following VMware Tools 11.0.5 ISO image is bundled with ESXi:

  • windows.iso: VMware Tools image for Windows Vista or higher

The following VMware Tools 10.3.21 ISO image is bundled with ESXi:

  • linux.iso: VMware Tools image for Linux OS with glibc 2.5 or higher

The following VMware Tools 11.0.5 ISO images are available for download:

  • darwin.iso: VMware Tools image for OSX

Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:

Migrating Third-Party Solutions

For information about upgrading with third-party customizations, see the ESXi Upgrade documentation. For information about using Image Builder to make a custom ISO, see the ESXi Installation and Setup documentation.

Upgrades and Installations Disallowed for Unsupported CPUs

Compared with the processors supported by vSphere 6.7, vSphere 7.0 no longer supports the following processors:

  • Intel Family 6, Model = 2C (Westmere-EP)
  • Intel Family 6, Model = 2F (Westmere-EX)

During an installation or upgrade, the installer checks the compatibility of the host CPU with vSphere 7.0. If your host hardware is not compatible, a purple screen appears with an incompatibility information message, and the vSphere 7.0 installation process stops.

The following CPUs are supported in the vSphere 7.0 release, but they may not be supported in future vSphere releases. Please plan accordingly:

  • Intel Family 6, Model = 2A (Sandy Bridge DT/EN, GA 2011)
  • Intel Family 6, Model = 2D (Sandy Bridge EP, GA 2012)
  • Intel Family 6, Model = 3A (Ivy Bridge DT/EN, GA 2012)
  • AMD Family 0x15, Model = 01 (Bulldozer, GA 2012)

Upgrade Notes for This Release

For instructions about upgrading ESXi hosts and vCenter Server, see the ESXi Upgrade and the vCenter Server Upgrade documentation.

Open Source Components for vSphere 7.0

The copyright statements and licenses applicable to the open source software components distributed in vSphere 7.0 are available at http://www.vmware.com. You need to log in to your My VMware account. Then, from the Downloads menu, select vSphere. On the Open Source tab, you can also download the source files for any GPL, LGPL, or other similar licenses that require the source code or modifications to source code to be made available for the most recent available release of vSphere.

Product Support Notices

  • VMware vSphere Clients

    In vSphere 7.0, you can take advantage of the features available in the vSphere Client (HTML5). The Flash-based vSphere Web Client has been deprecated and is no longer available. For more information, see Goodbye, vSphere Web Client.

    The VMware Host Client is a web-based application that you can use to manage individual ESXi hosts that are not connected to a vCenter Server system.

  • VMware vSphere 7.0 and TLS Protocol

    In vSphere 7.0, TLS 1.2 is enabled by default. TLS 1.0 and TLS 1.1 are disabled by default. If you upgrade vCenter Server to 7.0 and that vCenter Server instance connects to ESXi hosts, other vCenter Server instances, or other services, you might encounter communication problems.

    To resolve this issue, you can use the TLS Configurator utility to enable older versions of the protocol temporarily on 7.0 systems. You can then disable the older less secure versions after all connections use TLS 1.2. For information, see Managing TLS Protocol Configuration with the TLS Configurator Utility.

  • Removal of External Platform Services Controller

    In vSphere 7.0, deploying or upgrading vCenter Server requires the use of the vCenter Server appliance, a preconfigured Linux virtual machine optimized for running vCenter Server. The new vCenter Server contains all Platform Services Controller (PSC) services, preserving the functionality and workflows, including authentication, certificate management, and licensing. It is no longer necessary or possible to deploy and use an external Platform Services Controller. All PSC services have been consolidated into vCenter Server, and deployment and administration have been simplified.

  • Removal of vCenter Server for Windows Support

    In vSphere 7.0, vCenter Server for Windows has been removed and support is not available. For more information, see Farewell, vCenter Server for Windows.

  • Removal of VNC Server from ESXi

    In vSphere 7.0, the ESXi built-in VNC server has been removed. Users can no longer connect to a virtual machine with a VNC client by setting the RemoteDisplay.vnc.enable configuration option to TRUE. Instead, users should use the VM Console via the vSphere Client, the ESXi Host Client, or the VMware Remote Console to connect to virtual machines. Customers desiring VNC access to a VM should use the VirtualMachine.AcquireTicket("webmks") API, which offers a VNC-over-websocket connection. The webmks ticket offers authenticated access to the virtual machine console. For more information, see the VMware HTML Console SDK documentation.

  • Deprecation of VMKLinux

    In vSphere 7.0, VMKLinux driver compatibility has been deprecated and removed. vSphere 7.0 does not contain support for VMKLinux APIs and the associated VMKLinux drivers. Custom ISOs cannot include any VMKLinux async drivers; all drivers contained in an ISO must be native drivers. All currently supported devices that are not supported by native drivers will not function and will not be recognized during installation or upgrade. The VMware Compatibility Guide (VCG) will not show any devices not supported by a native driver as supported in vSphere 7.0.

  • Deprecation of 32-bit Userworld Support

    In vSphere 7.0, 32-bit userworld support has been deprecated. Userworlds are the components of ESXi used by partners to provide drivers, plugins, and other system extensions (distributed as VIBs). Userworlds are not customer accessible.

    vSphere 7.0 provides 64-bit userworld support through partner devkits and will retain 32-bit userworld support through this major release. Support for 32-bit userworlds will be permanently removed in the next major ESXi release. To avoid loss of functionality, customers should ensure any vendor-supplied VIBs in use are migrated to 64-bit before upgrading beyond the vSphere 7.0 release.

  • Deprecation of Update Manager Plugin

    In vSphere 7.0, the Update Manager plugin used for administering vSphere Update Manager has been replaced with the Lifecycle Manager plugin. Administrative operations for vSphere Update Manager are still available under the Lifecycle Manager plugin, along with new capabilities for vSphere Lifecycle Manager.

  • Deprecation of Integrated Windows Authentication

    Integrated Windows Authentication (IWA) is deprecated in vSphere 7.0 and will be removed in a future release. For more information, see VMware Knowledge Base article 78506.

  • Deprecation of DCUI Smart Card Authentication

    In a future vSphere release, support for Smart Card Authentication in DCUI will be discontinued. In place of accessing DCUI using Personal Identity Verification (PIV), Common Access Card (CAC), or SC650 smart card, users will be encouraged to perform operations through vCenter, PowerCLI, API calls, or by logging in with a username and password.

  • Deprecation of Core Partition Profile in Host Profiles

    In vSphere 7.0, support for Coredump Partitions in Host Profiles has been deprecated. In place of Coredump Partitions, users should transition to Coredump Files.

  • Deprecation of Software FCoE Adapters

    Starting from vSphere 7.0, VMware deprecates the configuration of software FCoE adapters that use the native FCoE stack in ESXi and plans to remove the feature in a future vSphere release.

  • Vendor add-ons in MyVMware for use with vSphere Lifecycle Manager

    In vSphere 7.0, vendor add-ons are accessible through vCenter Server's vSphere Lifecycle Manager if the vCenter Server instance has been configured to use a proxy or Update Manager Download Service. To access add-ons from MyVMware, navigate to the Custom ISOs and Add-ons tab. Under the OEM Customized Installer CDs and Add-ons, you can find the custom add-ons from each of the vendors. For more information about vSphere Lifecycle Manager and vendor add-ons, see the Managing Host and Cluster Lifecycle documentation.

Known Issues

The known issues are grouped as follows.

Installation, Upgrade, and Migration Issues
  • NEW: vmnic and vmhba device names change after an upgrade to ESXi 7.0

    On certain hardware platforms, vmnic and vmhba device names (aliases) might change across an upgrade to ESXi 7.0 from an earlier ESXi version. This occurs on systems whose firmware provides an ACPI _SUN method that returns a physical slot number of 0 for devices that are not in a pluggable slot. 

    Workaround: You can rename devices by using the instructions in VMware knowledge base article 2091560.

  • The vCenter Upgrade/Migration pre-checks fail with "Unexpected error 87"

    The vCenter Server Upgrade/Migration pre-checks fail when the Security Token Service (STS) certificate does not contain a Subject Alternative Name (SAN) field. This situation occurs when you have replaced the vCenter 5.5 Single Sign-On certificate with a custom certificate that has no SAN field, and you attempt to upgrade to vCenter Server 7.0. The upgrade considers the STS certificate invalid and the pre-checks prevent the upgrade process from continuing.

    Workaround: Replace the STS certificate with a valid certificate that contains a SAN field, and then proceed with the vCenter Server 7.0 upgrade or migration.

  • Problems upgrading to vSphere 7.0 with pre-existing CIM providers

    After upgrade, previously installed 32-bit CIM providers stop working because ESXi requires 64-bit CIM providers. Customers might lose management API functions related to CIMPDK, NDDK (native DDK), HEXDK, and VAIODK (IO filters), and see errors related to a uwglibc dependency.
    The syslog reports a missing module: "32 bit shared libraries not loaded."

    Workaround: There is no workaround. The fix is to download new 64-bit CIM providers from your vendor.

  • Smart Card and RSA SecurID authentication might stop working after upgrading to vCenter Server 7.0

    If you have configured vCenter Server for either Smart Card or RSA SecurID authentication, see the VMware knowledge base article at https://kb.vmware.com/s/article/78057 before starting the vSphere 7.0 upgrade process. If you do not perform the workaround as described in the KB, you might see the following error messages and Smart Card or RSA SecurID authentication does not work.

    "Smart card authentication may stop working. Smart card settings may not be preserved, and smart card authentication may stop working."

    or

    "RSA SecurID authentication may stop working. RSA SecurID settings may not be preserved, and RSA SecurID authentication may stop working."

    Workaround: Before upgrading to vSphere 7.0, see the VMware knowledge base article at https://kb.vmware.com/s/article/78057.

  • Upgrading a vCenter Server with an external Platform Services Controller from 6.7u3 to 7.0 fails with VMAFD error

    When you upgrade a vCenter Server deployment using an external Platform Services Controller, you converge the Platform Services Controller into a vCenter Server appliance. If the upgrade fails with the error install.vmafd.vmdir_vdcpromo_error_21, the VMAFD firstboot process has failed. The VMAFD firstboot process copies the VMware Directory Service Database (data.mdb) from the source Platform Services Controller and replication partner vCenter Server appliance.

    Workaround: Disable TCP Segmentation Offload (TSO) and Generic Segmentation Offload (GSO) on the Ethernet adapter of the source Platform Services Controller or replication partner vCenter Server appliance before upgrading a vCenter Server with an external Platform Services Controller. See Knowledge Base article: https://kb.vmware.com/s/article/74678
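
    As a sketch of the TSO/GSO step, you can run the following from the appliance shell of the source Platform Services Controller or replication partner; the interface name eth0 is an assumption, the change does not persist across reboots, and the Knowledge Base article remains the authoritative procedure:

    ethtool -K eth0 tso off gso off
    ethtool -k eth0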

  • Upgrading vCenter Server using the CLI incorrectly preserves the Transport Layer Security (TLS) configuration for the vSphere Authentication Proxy service

    If the vSphere Authentication Proxy service (vmcam) is configured to use a particular TLS protocol other than the default TLS 1.2 protocol, this configuration is preserved during the CLI upgrade process. By default, vSphere supports the TLS 1.2 encryption protocol. If you must use the TLS 1.0 and TLS 1.1 protocols to support products or services that do not support TLS 1.2, use the TLS Configurator Utility to enable or disable different TLS protocol versions.

    Workaround: Use the TLS Configurator Utility to configure the vmcam port. To learn how to manage TLS protocol configuration and use the TLS Configurator Utility, see the VMware Security documentation.

  • Smart card and RSA SecurID settings may not be preserved during vCenter Server upgrade

    Authentication using RSA SecurID does not work after upgrading to vCenter Server 7.0. An error message alerts you to this issue when you attempt to log in using RSA SecurID.

    Workaround: Reconfigure the smart card or RSA SecurID authentication.

  • Migration of vCenter Server for Windows to vCenter Server appliance 7.0 fails with network error message

    Migration of vCenter Server for Windows to vCenter Server appliance 7.0 fails with the error message IP already exists in the network. This prevents the migration process from configuring the network parameters on the new vCenter Server appliance. For more information, examine the log file: /var/log/vmware/upgrade/UpgradeRunner.log

    Workaround:

    1. Verify that all Windows Updates have been completed on the source vCenter Server for Windows instance, or disable automatic Windows Updates until after the migration finishes.
    2. Retry the migration of vCenter Server for Windows to vCenter Server appliance 7.0.


  • When you configure the number of virtual functions for an SR-IOV device by using the max_vfs module parameter, the changes might not take effect

    In vSphere 7.0, you can configure the number of virtual functions for an SR-IOV device by using the Virtual Infrastructure Management (VIM) API, for example, through the vSphere Client. The task does not require reboot of the ESXi host. After you use the VIM API configuration, if you try to configure the number of SR-IOV virtual functions by using the max_vfs module parameter, the changes might not take effect because they are overridden by the VIM API configuration.

    Workaround: None. To configure the number of virtual functions for an SR-IOV device, use the same method every time. Use the VIM API or use the max_vfs module parameter and reboot the ESXi host.
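
    If you choose the module-parameter method, it follows this general form; the driver module name and the per-port virtual function counts below are placeholders, so check which parameters your NIC driver actually supports with esxcli system module parameters list -m <driver module>:

    esxcli system module parameters set -m <driver module> -p "max_vfs=8,8"

    Reboot the ESXi host afterward for the module parameter to take effect.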

  • Upgraded vCenter Server appliance instance does not retain all the secondary networks (NICs) from the source instance

    During a major upgrade, if the source instance of the vCenter Server appliance is configured with multiple secondary networks other than the VCHA NIC, the target vCenter Server instance will not retain secondary networks other than the VCHA NIC. If the source instance is configured with multiple NICs that are part of DVS port groups, the NIC configuration will not be preserved during the upgrade. Configurations for vCenter Server appliance instances that are part of the standard port group will be preserved.

    Workaround: None. Manually configure the secondary network in the target vCenter Server appliance instance.

  • After upgrading or migrating a vCenter Server with an external Platform Services Controller, users authenticating using Active Directory lose access to the newly upgraded vCenter Server instance

    After upgrading or migrating a vCenter Server with an external Platform Services Controller, if the newly upgraded vCenter Server is not joined to an Active Directory domain, users authenticating using Active Directory will lose access to the vCenter Server instance.

    Workaround: Verify that the new vCenter Server instance has been joined to an Active Directory domain. See Knowledge Base article: https://kb.vmware.com/s/article/2118543

  • Migrating a vCenter Server for Windows with an external Platform Services Controller using an Oracle database fails

    If there are non-ASCII strings in the Oracle events and tasks table, the migration can fail when exporting events and tasks data. The following error message is provided: UnicodeDecodeError

    Workaround: None.

  • After an ESXi host upgrade, a Host Profile compliance check shows non-compliant status while host remediation tasks fail

    The non-compliant status indicates an inconsistency between the profile and the host.

    This inconsistency might occur because ESXi 7.0 does not allow duplicate claim rules, but the profile you use contains duplicate rules. For example, if you attempt to use a Host Profile that you extracted from the host before upgrading ESXi 6.5 or ESXi 6.7 to version 7.0, and the Host Profile contains duplicates of the system default claim rules, you might experience this problem.

    Workaround:

    1. Remove any duplicate claim rules of the system default rules from the Host Profile document.
    2. Check the compliance status.
    3. Remediate the host.
    4. If the previous steps do not help, reboot the host.


  • Error message displays in the vCenter Server Management Interface

    After installing or upgrading to vCenter Server 7.0, when you navigate to the Update panel within the vCenter Server Management Interface, the error message "Check the URL and try again" displays. The error message does not prevent you from using the functions within the Update panel, and you can view, stage, and install any available updates.

    Workaround: None.

Security Features Issues
  • Encrypted virtual machine fails to power on when HA-enabled Trusted Cluster contains an unattested host

    In VMware® vSphere Trust Authority™, if you have enabled HA on the Trusted Cluster and one or more hosts in the cluster fails attestation, an encrypted virtual machine cannot power on.

    Workaround: Either remove or remediate all hosts that failed attestation from the Trusted Cluster.

  • Encrypted virtual machine fails to power on when DRS-enabled Trusted Cluster contains an unattested host

    In VMware® vSphere Trust Authority™, if you have enabled DRS on the Trusted Cluster and one or more hosts in the cluster fails attestation, DRS might try to power on an encrypted virtual machine on an unattested host in the cluster. This operation puts the virtual machine in a locked state.

    Workaround: Either remove or remediate all hosts that failed attestation from the Trusted Cluster.

  • Migrating or cloning encrypted virtual machines across vCenter Server instances fails when attempting to do so using the vSphere Client

    If you try to migrate or clone an encrypted virtual machine across vCenter Server instances using the vSphere Client, the operation fails with the following error message: "The operation is not allowed in the current state."

    Workaround: You must use the vSphere APIs to migrate or clone encrypted virtual machines across vCenter Server instances.

Networking Issues
  • Reduced throughput in networking performance on Intel 82599/X540/X550 NICs

    The new queue-pair feature added to ixgben driver to improve networking performance on Intel 82599EB/X540/X550 series NICs might reduce throughput under some workloads in vSphere 7.0 as compared to vSphere 6.7.

    Workaround: To achieve the same networking performance as vSphere 6.7, you can disable the queue-pair with a module parameter. To disable the queue-pair, run the command:

    # esxcli system module parameters set -p "QPair=0,0,0,0..." -m ixgben

    After running the command, reboot.

  • High throughput virtual machines may experience degradation in network performance when Network I/O Control (NetIOC) is enabled

    Virtual machines requiring high network throughput can experience throughput degradation when upgrading from vSphere 6.7 to vSphere 7.0 with NetIOC enabled.

    Workaround: Adjust the ethernetx.ctxPerDev setting to enable multiple worlds.
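
    A minimal sketch, assuming the change is made for the VM's first virtual NIC (ethernet0); the value shown is an assumption and should be tuned for your workload:

    ethernet0.ctxPerDev = "1"

    You can add this line to the VM's .vmx file or set it through the VM's advanced configuration parameters in the vSphere Client while the VM is powered off.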

  • IPv6 traffic fails to pass through VMkernel ports using IPsec

    When you migrate VMkernel ports from one port group to another, IPv6 traffic does not pass through VMkernel ports using IPsec.

    Workaround: Remove the IPsec security association (SA) from the affected server, and then reapply the SA. To learn how to set and remove an IPsec SA, see the vSphere Security documentation.
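
    As a command-line sketch of that sequence; the --sa-name flag is an assumption to verify against the vSphere Security documentation, and the parameters for the re-add step come from your original SA configuration:

    esxcli network ip ipsec sa list
    esxcli network ip ipsec sa remove --sa-name <SA name>
    esxcli network ip ipsec sa add <original SA parameters>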

  • Higher ESXi network performance with an increase in CPU usage

    ESXi network performance might increase at the cost of additional CPU usage.

    Workaround: Remove and add the network interface with only 1 rx dispatch queue. For example:

    esxcli network ip interface remove --interface-name=vmk1

    esxcli network ip interface add --interface-name=vmk1 --num-rxqueue=1

  • VM might lose Ethernet traffic after hot-add, hot-remove or storage vMotion

    A VM might stop receiving Ethernet traffic after a hot-add, hot-remove or storage vMotion. This issue affects VMs where the uplink of the VNIC has SR-IOV enabled. PVRDMA virtual NIC exhibits this issue when the uplink of the virtual network is a Mellanox RDMA capable NIC and RDMA namespaces are configured.

    Workaround: You can hot-remove and hot-add the affected Ethernet NICs of the VM to restore traffic. On Linux guest operating systems, restarting the network might also resolve the issue. If these workarounds have no effect, you can reboot the VM to restore network connectivity.
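
    For the Linux guest network restart, the exact service is distribution specific; a sketch for a systemd-based guest that uses NetworkManager (an assumption; substitute your distribution's network service):

    sudo systemctl restart NetworkManager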

  • Change of IP address for a VCSA deployed with static IP address requires that you create the DNS records in advance

    With the introduction of DDNS, the DNS record update works only for a VCSA deployed with DHCP-configured networking. When you change the IP address of the vCenter Server through the VAMI, the following error is displayed:

    The specified IP address does not resolve to the specified hostname.

    Workaround: There are two possible workarounds.

    1. Create an additional DNS entry with the same FQDN and desired IP address. Log in to the VAMI and follow the steps to change the IP address.
    2. Log in to the VCSA using ssh. Execute the following script:

      /opt/vmware/share/vami/vami_config_net

      Use option 6 to change the IP address of eth0. Once changed, execute the following script:

      /opt/likewise/bin/lw-update-dns

      Restart all the services on the VCSA to update the IP information on the DNS server.
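
      To restart all the services, you can use the service-control utility from the appliance shell; a minimal sketch:

      service-control --stop --all
      service-control --start --all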

  • It may take several seconds for the NSX Distributed Virtual Port Group (NSX DVPG) to be removed after deleting the corresponding logical switch in NSX Manager.

    As the number of logical switches increases, it may take more time for the NSX DVPG in vCenter Server to be removed after deleting the corresponding logical switch in NSX Manager. In an environment with 12000 logical switches, it takes approximately 10 seconds for an NSX DVPG to be deleted from vCenter Server.

    Workaround: None.

  • Hostd runs out of memory and fails if a large number of NSX Distributed Virtual port groups are created.

    In vSphere 7.0, NSX Distributed Virtual port groups consume significantly larger amounts of memory than opaque networks. For this reason, NSX Distributed Virtual port groups cannot support the same scale as opaque networks given the same amount of memory.

    Workaround: To support the use of NSX Distributed Virtual port groups, increase the amount of memory in your ESXi hosts. If you verify that your system has adequate memory to support your VMs, you can directly increase the memory of hostd using the following command.

    localcli --plugin-dir /usr/lib/vmware/esxcli/int/ sched group setmemconfig --group-path host/vim/vmvisor/hostd --units mb --min 2048 --max 2048

    Note that this causes hostd to use memory normally reserved for your environment's VMs. This may have the effect of reducing the number of VMs your ESXi host can support.

  • DRS may incorrectly launch vMotion if the network reservation is configured on a VM

    If the network reservation is configured on a VM, it is expected that DRS only migrates the VM to a host that meets the specified requirements. In a cluster with NSX transport nodes, if some of the transport nodes join the transport zone by NSX-T Virtual Distributed Switch (N-VDS), and others by vSphere Distributed Switch (VDS) 7.0, DRS may incorrectly launch vMotion. You might encounter this issue when:

    • The VM connects to an NSX logical switch configured with a network reservation.
    • Some transport nodes join transport zone using N-VDS, and others by VDS 7.0, or, transport nodes join the transport zone through different VDS 7.0 instances.

    Workaround: Make all transport nodes join the transport zone by N-VDS or the same VDS 7.0 instance.

  • When adding a VMkernel NIC (vmknic) to an NSX port group, vCenter Server reports the error "Connecting VMKernel adapter to a NSX Portgroup on a Stateless host is not a supported operation. Please use Distributed Port Group instead."
    • For stateless ESXi on Distributed Virtual Switch (DVS), the vmknic on an NSX port group is blocked. You must use a Distributed Port Group instead.
    • For stateful ESXi on DVS, a vmknic on an NSX port group is supported, but vSAN may have an issue if it uses a vmknic on an NSX port group.

    Workaround: Use a Distributed Port Group on the same DVS.

  • Enabling SR-IOV from vCenter Server for QLogic 4x10GE QL41164HFCU CNA might fail

    If you navigate to the Edit Settings dialog for physical network adapters and attempt to enable SR-IOV, the operation might fail when using QLogic 4x10GE QL41164HFCU CNA. Attempting to enable SR-IOV might lead to a network outage of the ESXi host.

    Workaround: Use the esxcfg-module command on the ESXi host to enable SR-IOV:

    esxcfg-module
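
    A sketch of the esxcfg-module invocation is shown below; the module name (qedentv) and the max_vfs parameter and values are assumptions for the QLogic FastLinQ driver, so confirm the supported parameters with esxcli system module parameters list -m <module> or the adapter documentation, and reboot the host afterward:

    esxcfg-module -s "max_vfs=8,8" qedentv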

  • New vCenter Server fails if the hosts in a cluster using Distributed Resource Scheduler (DRS) join NSX-T networking by a different Virtual Distributed Switch (VDS) or combination of NSX-T Virtual Distributed Switch (NVDS) and VDS

    In vSphere 7.0, when using NSX-T networking on vSphere VDS with a DRS cluster, if the hosts do not join the NSX transport zone by the same VDS or NVDS, it can cause vCenter Server to fail.

    Workaround: Have hosts in a DRS cluster join the NSX transport zone using the same VDS or NVDS. 

Storage Issues
  • VMFS datastores are not mounted automatically after disk hot remove and hot insert on HPE Gen10 servers with SmartPQI controllers

    When SATA disks on HPE Gen10 servers with SmartPQI controllers without expanders are hot removed and hot inserted back to a different disk bay of the same machine, or when multiple disks are hot removed and hot inserted back in a different order, sometimes a new local name is assigned to the disk. The VMFS datastore on that disk appears as a snapshot and will not be mounted back automatically because the device name has changed.

    Workaround: None. SmartPQI controller does not support unordered hot remove and hot insert operations.

  • Setting the loglevel for nvme_pcie driver fails with an error

    When you set the loglevel for nvme_pcie driver with the command esxcli nvme driver loglevel set -l <log level>, the action fails with the error message:

    Failed to set log level 0x2.

    This command is retained for compatibility with the legacy NVMe driver, but it is not supported for the nvme_pcie driver.

    Workaround: None. This condition will exist when the nvme_pcie feature is enabled.

  • ESXi might terminate I/O to NVMeOF devices due to errors on all active paths

    Occasionally, all active paths to an NVMeOF device register I/O errors due to link issues or controller state. If the status of one of the paths changes to Dead, the High Performance Plug-in (HPP) might not select another path if it shows a high volume of errors. As a result, the I/O fails.

    Workaround: Disable the configuration option /Misc/HppManageDegradedPaths to unblock the I/O.
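
    A sketch of disabling the option from the ESXi command line, using the option name given above (0 disables it):

    esxcli system settings advanced set -o /Misc/HppManageDegradedPaths -i 0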

  • VOMA check on NVMe based VMFS datastores fails with error

    VOMA check is not supported for NVMe based VMFS datastores and will fail with the error:

    ERROR: Failed to reserve device. Function not implemented 

    Example:

    # voma -m vmfs -f check -d /vmfs/devices/disks/:<partition#>
    Running VMFS Checker version 2.1 in check mode
    Initializing LVM metadata, Basic Checks will be done
    
    Checking for filesystem activity
    Performing filesystem liveness check..|Scanning for VMFS-6 host activity (4096 bytes/HB, 1024 HBs).
    ERROR: Failed to reserve device. Function not implemented
    Aborting VMFS volume check
    VOMA failed to check device : General Error

    Workaround: None. If you need to analyze VMFS metadata, collect it using the -l option and pass it to VMware customer support. The command for collecting the dump is:

    voma -l -f dump -d /vmfs/devices/disks/:<partition#>

  • Using the VM reconfigure API to attach an encrypted First Class Disk to an encrypted virtual machine might fail with error

    If an FCD and a VM are encrypted with different crypto keys, your attempts to attach the encrypted FCD to the encrypted VM using the VM reconfigure API might fail with the error message:

    Cannot decrypt disk because key or password is incorrect.

    Workaround: Use the attachDisk API rather than the VM reconfigure API to attach an encrypted FCD to an encrypted VM.

  • ESXi host might become unresponsive if a non-head extent of its spanned VMFS datastore enters the Permanent Device Loss (PDL) state

    This problem does not occur when a non-head extent of the spanned VMFS datastore fails along with the head extent. In this case, the entire datastore becomes inaccessible and no longer allows I/Os.

    In contrast, when only a non-head extent fails but the head extent remains accessible, the datastore heartbeat appears to be normal, and the I/Os between the host and the datastore continue. However, any I/Os that depend on the failed non-head extent start failing as well. Other I/O transactions might accumulate while waiting for the failing I/Os to resolve, and cause the host to become unresponsive.

    Workaround: Fix the PDL condition of the non-head extent to resolve this issue.

  • After recovering from APD or PDL conditions, VMFS datastore with enabled support for clustered virtual disks might remain inaccessible

    You can encounter this problem only on datastores where the clustered virtual disk support is enabled. When the datastore recovers from an All Paths Down (APD) or Permanent Device Loss (PDL) condition, it remains inaccessible. The VMkernel log might show multiple SCSI3 reservation conflict messages similar to the following:

    2020-02-18T07:41:10.273Z cpu22:1001391219)ScsiDeviceIO: vm 1001391219: SCSIDeviceCmdCompleteCB:2972: Reservation conflict retries 544 for command 0x45ba814b8340 (op: 0x89) to device "naa.624a9370b97601e346f64ba900024d53"

    The problem can occur because the ESXi host participating in the cluster loses SCSI reservations for the datastore and cannot always reacquire them automatically after the datastore recovers.

    Workaround: Manually register the reservation using the following command:

    vmkfstools -L registerkey /vmfs/devices/disks/<device name>

    where the <device name> is the name of the device on which the datastore is created.

  • Virtual NVMe Controller is the default disk controller for Windows 10 guest operating systems

    The Virtual NVMe Controller is the default disk controller for the following guest operating systems when using Hardware Version 15 or later:

    Windows 10
    Windows Server 2016
    Windows Server 2019

    Some features might not be available when using a Virtual NVMe Controller. For more information, see https://kb.vmware.com/s/article/2147714

    Note: Some clients use the previous default of LSI Logic SAS. This includes the ESXi Host Client and PowerCLI.

    Workaround: If you need features not available on Virtual NVMe, switch to VMware Paravirtual SCSI (PVSCSI) or LSI Logic SAS. For information on using VMware Paravirtual SCSI (PVSCSI), see https://kb.vmware.com/s/article/1010398

  • After an ESXi host upgrade to vSphere 7.0, presence of duplicate core claim rules might cause unexpected behavior

    Claim rules determine which multipathing plugin, such as NMP, HPP, and so on, owns paths to a particular storage device. ESXi 7.0 does not support duplicate claim rules. However, the ESXi 7.0 host does not alert you if you add duplicate rules to the existing claim rules inherited through an upgrade from a legacy release. As a result of using duplicate rules, storage devices might be claimed by unintended plugins, which can cause unexpected outcomes.

    Workaround: Do not use duplicate core claim rules. Before adding a new claim rule, delete any existing matching claim rule.
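
    As a sketch, you can review the loaded claim rules and remove a duplicate before adding the new rule; the rule ID 200 below is a placeholder:

    esxcli storage core claimrule list
    esxcli storage core claimrule remove --rule 200
    esxcli storage core claimrule load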

  • A CNS query with the compliance status filter set might take an unusually long time to complete

    The CNS QueryVolume API enables you to obtain information about the CNS volumes, such as volume health and compliance status. When you check the compliance status of individual volumes, the results are obtained quickly. However, when you invoke the CNS QueryVolume API to check the compliance status of multiple volumes, several tens or hundreds, the query might perform slowly.

    Workaround: Avoid using bulk queries. When you need to get compliance status, query one volume at a time or limit the number of volumes in the query API to 20 or fewer. While using the query, avoid running other CNS operations to get the best performance.

  • New Deleted CNS volumes might temporarily appear as existing in the CNS UI

    After you delete an FCD disk that backs a CNS volume, the volume might still show up as existing in the CNS UI. However, your attempts to delete the volume fail. You might see an error message similar to the following:
    The object or item referred to could not be found.

    Workaround: The next full synchronization will resolve the inconsistency and correctly update the CNS UI.

  • New Attempts to attach multiple CNS volumes to the same pod might occasionally fail with an error

    When you attach multiple volumes to the same pod simultaneously, the attach operation might occasionally choose the same controller slot. As a result, only one of the operations succeeds, while other volume mounts fail.

    Workaround: After Kubernetes retries the failed operation, the operation succeeds if a controller slot is available on the node VM.

  • New Under certain circumstances, while a CNS operation fails, the task status appears as successful in the vSphere Client

    This might occur when, for example, you use an incompliant storage policy to create a CNS volume. The operation fails, while the vSphere Client shows the task status as successful.

    Workaround: The successful task status in the vSphere Client does not guarantee that the CNS operation succeeded. To make sure the operation succeeded, verify its results.

  • New Unsuccessful delete operation for a CNS persistent volume might leave the volume undeleted on the vSphere datastore

    This issue might occur when the CNS Delete API attempts to delete a persistent volume that is still attached to a pod. For example, when you delete the Kubernetes namespace where the pod runs. As a result, the volume gets cleared from CNS and  the CNS query operation does not return the volume. However, the volume continues to reside on the datastore and cannot be deleted through the repeated CNS Delete API operations.

    Workaround: None.

vCenter Server and vSphere Client Issues
  • Vendor providers go offline after a PNID change

    When you change the vCenter IP address (PNID change), the registered vendor providers go offline.

    Workaround: Re-register the vendor providers.

  • Cross vCenter migration of a virtual machine fails with an error

    When you use cross vCenter vMotion to move a VM's storage and host to a different vCenter Server instance, you might receive the error The operation is not allowed in the current state.

    This error appears in the UI wizard after the Host Selection step and before the Datastore Selection step, in cases where the VM has an assigned storage policy containing host-based rules such as encryption or any other IO filter rule.

    Workaround: Assign the VM and its disks to a storage policy without host-based rules. You might need to decrypt the VM if the source VM is encrypted. Then retry the cross vCenter vMotion action.

  • Storage Sensors information in Hardware Health tab shows incorrect values on vCenter UI, host UI, and MOB

    When you navigate to Host > Monitor > Hardware Health > Storage Sensors on vCenter UI, the storage information displays either incorrect or unknown values. The same issue is observed on the host UI and the MOB path “runtime.hardwareStatusInfo.storageStatusInfo” as well.

    Workaround: None.

  • vSphere UI host advanced settings shows the current product locker location as empty with an empty default

    The vSphere UI host advanced settings show the current product locker location as empty with an empty default. This is inconsistent because the actual product locker symlink is created and valid, which can confuse users. The default cannot be corrected from the UI.

    Workaround: Use the esxcli command on the host to correct the current product locker location default as follows:

    1. Remove the existing Product Locker Location setting with: "esxcli system settings advanced remove -o ProductLockerLocation"

    2. Re-add the Product Locker Location setting with the appropriate default:

          2.a. If the ESXi is a full installation, the default value is "/locker/packages/vmtoolsRepo":

                export PRODUCT_LOCKER_DEFAULT="/locker/packages/vmtoolsRepo"

          2.b. If the ESXi is a PXE boot configuration such as Auto Deploy, the default value is "/vmtoolsRepo":

                export PRODUCT_LOCKER_DEFAULT="/vmtoolsRepo"

          Alternatively, run the following command to automatically determine the location:

                export PRODUCT_LOCKER_DEFAULT=`readlink /productLocker`

          Add the setting:

                esxcli system settings advanced add -d "Path to VMware Tools repository" -o ProductLockerLocation -t string -s $PRODUCT_LOCKER_DEFAULT

    You can combine all the above steps in step 2 by issuing the single command:

    esxcli system settings advanced add -d "Path to VMware Tools repository" -o ProductLockerLocation -t string -s `readlink /productLocker`

  • Linked Software-Defined Data Center (SDDC) vCenter Server instances appear in the on-premises vSphere Client if a vCenter Cloud Gateway is linked to the SDDC.

    When a vCenter Cloud Gateway is deployed in the same environment as an on-premises vCenter Server, and linked to an SDDC, the SDDC vCenter Server will appear in the on-premises vSphere Client. This is unexpected behavior and the linked SDDC vCenter Server should be ignored. All operations involving the linked SDDC vCenter Server should be performed on the vSphere Client running within the vCenter Cloud Gateway.

    Workaround: None.

Virtual Machine Management Issues
  • The postcustomization section of the customization script runs before the guest customization

    When you run the guest customization script for a Linux guest operating system, the precustomization section of the customization script that is defined in the customization specification runs before the guest customization and the postcustomization section runs after that. If you enable Cloud-Init in the guest operating system of a virtual machine, the postcustomization section runs before the customization due to a known issue in Cloud-Init.

    Workaround: Disable Cloud-Init and use the standard guest customization. 
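
    One common way to disable Cloud-Init inside a Linux guest is to create its disable marker file; this is a sketch that assumes a standard cloud-init installation, so consult your distribution's documentation:

    sudo touch /etc/cloud/cloud-init.disabled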

  • Group migration operations in vSphere vMotion, Storage vMotion, and vMotion without shared storage fail with error

    When you perform group migration operations on VMs with multiple disks and multi-level snapshots, the operations might fail with the error com.vmware.vc.GenericVmConfigFault Failed waiting for data. Error 195887167. Connection closed by remote host, possibly due to timeout.

    Workaround: Retry the migration operation on the failed VMs one at a time.

  • Deploying an OVF or OVA template from a URL fails with a 403 Forbidden error

    The URLs that contain an HTTP query parameter are not supported. For example, http://webaddress.com?file=abc.ovf or the Amazon pre-signed S3 URLs.

    Workaround: Download the files and deploy them from your local file system.

  • Importing or deploying local OVF files containing non-ASCII characters in their name might fail with an error

    When you import local .ovf files containing non-ASCII characters in their name, you might receive a 400 Bad Request error. When you use such .ovf files to deploy a virtual machine in the vSphere Client, the deployment process stops at 0%. As a result, you might receive a 400 Bad Request error or a 500 Internal Server error.

    Workaround:

    1. Remove the non-ASCII characters from the .ovf and .vmdk file names.
      • To edit the .ovf file, open it with a text editor.
      • Search for the non-ASCII .vmdk file name and change it to ASCII.
    2. Import or deploy the saved files again.

  • New The third level of nested objects in a virtual machine folder is not visible

    Perform the following steps:

    1. Navigate to a data center and create a virtual machine folder.
    2. In the virtual machine folder, create a nested virtual machine folder.
    3. In the second folder, create another nested virtual machine, virtual machine folder, vApp, or VM Template.

    As a result, from the VMs and Templates inventory tree you cannot see the objects in the third nested folder.

    Workaround: To see the objects in the third nested folder, navigate to the second nested folder and select the VMs tab.

vSphere HA and Fault Tolerance Issues
  • Modified VMs in a cluster might be orphaned after recovering from storage inaccessibility such as a cluster wide APD

    Some VMs might be in orphaned state after cluster wide APD recovers, even if HA and VMCP are enabled on the cluster.

    This issue might be encountered when the following conditions occur simultaneously:

    • All hosts in the cluster experience APD and do not recover until VMCP timeout is reached.
    • HA primary initiates failover due to APD on a host.
    • Power on API during HA failover fails due to one of the following:
      • APD across the same host
      • Cascading APD across the entire cluster
      • Storage issues
      • Resource unavailability
    • FDM unregistration and the vCenter Server steal-VM logic might both run during a window in which FDM has not yet unregistered the failed VM and the vCenter Server host synchronization reports that multiple hosts are registering the same VM. FDM and vCenter Server then unregister different registered copies of the same VM from different hosts, causing the VM to be orphaned.

    Workaround: You must unregister and reregister the orphaned VMs manually within the cluster after the APD recovers.

    If you do not manually reregister the orphaned VMs, HA attempts failover of the orphaned VMs, but it might take between 5 and 10 hours, depending on when the APD recovers.

    The overall functionality of the cluster is not affected in these cases and HA continues to protect the VMs. This is an anomaly in what is displayed in vCenter Server for the duration of the problem.

vSphere Lifecycle Manager Issues
  • You cannot enable NSX-T on a cluster that is already enabled for managing image setup and updates on all hosts collectively

    NSX-T is not compatible with the vSphere Lifecycle Manager functionality for image management. When you enable a cluster for image setup and updates on all hosts in the cluster collectively, you cannot enable NSX-T on that cluster. However, you can deploy NSX Edges to this cluster.

    Workaround: Move the hosts to a new cluster that you can manage with baselines and enable NSX-T on that new cluster.

  • vSphere Lifecycle Manager and vSAN File Services cannot be simultaneously enabled on a vSAN cluster in vSphere 7.0 release

    If vSphere Lifecycle Manager is enabled on a cluster, vSAN File Services cannot be enabled on the same cluster, and vice versa. To enable vSphere Lifecycle Manager on a cluster that already has vSAN File Services enabled, first disable vSAN File Services and retry the operation. Note that if you transition to a cluster that is managed by a single image, vSphere Lifecycle Manager cannot be disabled on that cluster.

    Workaround: None.

  • ESXi 7.0 hosts cannot be added to a cluster that you manage with a single image by using vSphere Auto Deploy

    Attempting to add ESXi hosts to a cluster that you manage with a single image by using the "Add to Inventory" workflow in vSphere Auto Deploy fails. The failure occurs because no patterns are matched in an existing Auto Deploy ruleset. The task fails silently and the hosts remain in the Discovered Hosts tab.

    Workaround:

    1. Remove the ESXi hosts that did not match the ruleset from the Discovered Hosts tab.
    2. Create a rule or edit an existing Auto Deploy rule, where the host target location is a cluster managed by an image.
    3. Reboot the hosts.


    The hosts are added to the cluster that you manage by an image in vSphere Lifecycle Manager.

  • When a hardware support manager is unavailable, vSphere High Availability (HA) functionality is impacted

    If the hardware support manager is unavailable for a cluster that you manage with a single image, where a firmware and drivers addon is selected and vSphere HA is enabled, vSphere HA functionality is impacted. You might experience the following errors:

    • Configuring vSphere HA on a cluster fails.
    • Cannot complete the configuration of the vSphere HA agent on a host: Applying HA VIBs on the cluster encountered a failure.
    • Remediating vSphere HA fails: A general system error occurred: Failed to get Effective Component map.
    • Disabling vSphere HA fails: Delete Solution task failed. A general system error occurred: Cannot find hardware support package from depot or hardware support manager.


    Workaround:

    • If the hardware support manager is temporarily unavailable, perform the following steps.
    1. Reconnect the hardware support manager to vCenter Server.
    2. Select a cluster from the Hosts and Cluster menu.
    3. Select the Configure tab.
    4. Under Services, click vSphere Availability.
    5. Re-enable vSphere HA.
    • If the hardware support manager is permanently unavailable, perform the following steps.
    1. Remove the hardware support manager and the hardware support package from the image specification.
    2. Re-enable vSphere HA.
    3. Select a cluster from the Hosts and Cluster menu.
    4. Select the Updates tab.
    5. Click Edit.
    6. Remove the firmware and drivers addon and click Save.
    7. Select the Configure tab.
    8. Under Services, click vSphere Availability.
    9. Re-enable vSphere HA.


  • I/OFilter is not removed from a cluster after a remediation process in vSphere Lifecycle Manager

    Removing I/OFilter from a cluster by remediating the cluster in vSphere Lifecycle Manager fails with the following error message: iofilter XXX already exists. The iofilter remains listed as installed.

    Workaround:

    1. Call IOFilter API UninstallIoFilter_Task from the vCenter Server managed object (IoFilterManager).
    2. Remediate the cluster in vSphere Lifecycle Manager.
    3. Call IOFilter API ResolveInstallationErrorsOnCluster_Task from the vCenter Server managed object (IoFilterManager) to update the database.


  • While remediating a vSphere HA enabled cluster in vSphere Lifecycle Manager, adding hosts causes a vSphere HA error state

    Adding one or multiple ESXi hosts during a remediation process of a vSphere HA enabled cluster results in the following error message: Applying HA VIBs on the cluster encountered a failure.

    Workaround: After the cluster remediation operation has finished, perform one of the following tasks:

    • Right-click the failed ESXi host and select Reconfigure for vSphere HA.
    • Disable and re-enable vSphere HA for the cluster.


  • While remediating a vSphere HA enabled cluster in vSphere Lifecycle Manager, disabling and re-enabling vSphere HA causes a vSphere HA error state

    Disabling and re-enabling vSphere HA during the remediation process of a cluster might cause the remediation process to fail because vSphere HA health checks report that hosts do not have vSphere HA VIBs installed. You might see the following error message: Setting desired image spec for cluster failed.

    Workaround: After the cluster remediation operation has finished, disable and re-enable vSphere HA for the cluster.

  • Checking for recommended images in vSphere Lifecycle Manager has slow performance in large clusters

    In large clusters with more than 16 hosts, the recommendation generation task could take more than an hour to finish or may appear to hang. The completion time for the recommendation task depends on the number of devices configured on each host and the number of image candidates from the depot that vSphere Lifecycle Manager needs to process before obtaining a valid image to recommend.

    Workaround: None.

  • Checking for hardware compatibility in vSphere Lifecycle Manager has slow performance in large clusters

    In large clusters with more than 16 hosts, the validation report generation task could take up to 30 minutes to finish or may appear to hang. The completion time depends on the number of devices configured on each host and the number of hosts configured in the cluster.

    Workaround: None.

  • Incomplete error messages in non-English languages are displayed when remediating a cluster in vSphere Lifecycle Manager

    You can encounter incomplete error messages for localized languages in the vCenter Server user interface. The messages are displayed after a cluster remediation process in vSphere Lifecycle Manager fails. For example, you might observe the following error message.

    The error message in English language: Virtual machine 'VMC on DELL EMC -FileServer' that runs on cluster 'Cluster-1' reported an issue which prevents entering maintenance mode: Unable to access the virtual machine configuration: Unable to access file[local-0] VMC on Dell EMC - FileServer/VMC on Dell EMC - FileServer.vmx

    The error message in French language: La VM « VMC on DELL EMC -FileServer », située sur le cluster « {Cluster-1} », a signalé un problème empêchant le passage en mode de maintenance : Unable to access the virtual machine configuration: Unable to access file[local-0] VMC on Dell EMC - FileServer/VMC on Dell EMC - FileServer.vmx

    Workaround: None.

  • Importing an image with no vendor addon, components, or firmware and drivers addon to a cluster whose image contains such elements does not remove those elements from the existing image

    Only the ESXi base image is replaced with the one from the imported image.

    Workaround: After the import process finishes, edit the image, and if needed, remove the vendor addon, components, and firmware and drivers addon.

  • When you convert a cluster that uses baselines to a cluster that uses a single image, a warning is displayed that vSphere HA VIBs will be removed

    Converting a vSphere HA enabled cluster that uses baselines to a cluster that uses a single image may result in a warning message stating that the vmware-fdm component will be removed.

    Workaround: This message can be ignored. The conversion process installs the vmware-fdm component.

  • If vSphere Update Manager is configured to download patch updates from the Internet through a proxy server, after upgrade to vSphere 7.0 that converts Update Manager to vSphere Lifecycle Manager, downloading patches from VMware patch repository might fail

    In earlier releases of vCenter Server you could configure independent proxy settings for vCenter Server and vSphere Update Manager. After an upgrade to vSphere 7.0, vSphere Update Manager service becomes part of the vSphere Lifecycle Manager service. For the vSphere Lifecycle Manager service, the proxy settings are configured from the vCenter Server appliance settings. If you had configured Update Manager to download patch updates from the Internet through a proxy server but the vCenter Server appliance had no proxy setting configuration, after a vCenter Server upgrade to version 7.0, the vSphere Lifecycle Manager fails to connect to the VMware depot and is unable to download patches or updates.

    Workaround: Log in to the vCenter Server Appliance Management Interface, https://vcenter-server-appliance-FQDN-or-IP-address:5480, to configure proxy settings for the vCenter Server appliance and enable vSphere Lifecycle Manager to use proxy.

Miscellaneous Issues
  • When applying a host profile with version 6.5 to an ESXi host with version 7.0, the compliance check fails

    Applying a host profile with version 6.5 to an ESXi host with version 7.0 results in the Coredump file profile being reported as not compliant with the host.

    Workaround: There are two possible workarounds. Example ESXCLI commands for the first workaround follow the list.

    1. When you create a host profile with version 6.5, set the advanced configuration option VMkernel.Boot.autoCreateDumpFile to false on the ESXi host.
    2. When you apply an existing host profile with version 6.5, add the advanced configuration option VMkernel.Boot.autoCreateDumpFile to the host profile, configure the option with a fixed policy, and set its value to false.
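
    For the first workaround, a minimal sketch of checking and then changing the option directly on the ESXi host with ESXCLI is shown below; because this is a VMkernel boot option, the change typically takes effect at the next boot.

    esxcli system settings kernel list -o autoCreateDumpFile
    esxcli system settings kernel set -s autoCreateDumpFile -v FALSE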

     

  • The Actions drop-down menu does not contain any items when your browser is set to a language different from English

    When your browser is set to a language different from English and you click the Switch to New View button from the virtual machine Summary tab of the vSphere Client inventory, the Actions drop-down menu in the Guest OS panel does not contain any items.

    Workaround: Select the Actions drop-down menu at the top of the virtual machine page.

  • Mellanox ConnectX-4 or ConnectX-5 native ESXi drivers might exhibit minor throughput degradation when the Dynamic Receive Side Scaling (DYN_RSS) or Generic RSS (GEN_RSS) feature is turned on

    Mellanox ConnectX-4 or ConnectX-5 native ESXi drivers might exhibit less than 5 percent throughput degradation when the DYN_RSS and GEN_RSS features are turned on, which is unlikely to impact normal workloads.

    Workaround: You can disable the DYN_RSS and GEN_RSS features with the following commands:

    # esxcli system module parameters set -m nmlx5_core -p "DYN_RSS=0 GEN_RSS=0"

    # reboot
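
    To confirm the parameter string after the reboot, you can list the module parameters for the driver. This check is a suggestion rather than part of the official workaround:

    # esxcli system module parameters list -m nmlx5_core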

  • RDMA traffic between two VMs on the same host might fail in PVRDMA environment

    In a vSphere 7.0 implementation of a PVRDMA environment, VMs pass traffic through the HCA for local communication if an HCA is present. However, loopback of RDMA traffic does not work with the qedrntv driver. For instance, RDMA Queue Pairs running on VMs that are configured under the same uplink port cannot communicate with each other.

    In vSphere 6.7 and earlier, the HCA was used for local RDMA traffic if SRQ was enabled. vSphere 7.0 uses HCA loopback for VMs that use PVRDMA versions with SRQ enabled, a minimum of hardware version 14, and RoCE v2.

    The current version of Marvell FastLinQ adapter firmware does not support loopback traffic between QPs of the same PF or port.

    Workaround: Required support is being added in the out-of-box driver certified for vSphere 7.0. If you are using the inbox qedrntv driver, you must use a 3-host configuration and migrate VMs to the third host.
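
    To check whether a host uses the inbox qedrntv driver for its RDMA-capable uplinks before you plan the 3-host configuration, you can list the RDMA devices and the driver that backs each one. This check is a suggestion, not part of the official workaround:

    esxcli rdma device list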

  • Unreliable Datagram traffic QP limitations in qedrntv driver

    There are limitations with the Marvell FastLinQ qedrntv RoCE driver and Unreliable Datagram (UD) traffic. UD applications involving bulk traffic might fail with the qedrntv driver. Additionally, UD QPs can work only with DMA memory regions (MR). Physical MRs or FRMR are not supported. Applications attempting to use physical MR or FRMR along with UD QP fail to pass traffic when used with the qedrntv driver. Known examples of such test applications are ibv_ud_pingpong and ib_send_bw.

    Standard RoCE and RoCEv2 use cases in a VMware ESXi environment such as iSER, NVMe-oF (RoCE) and PVRDMA are not impacted by this issue. Use cases for UD traffic are limited and this issue impacts a small set of applications requiring bulk UD traffic.

    Marvell FastLinQ hardware does not support RDMA UD traffic offload. To meet the VMware PVRDMA requirement to support GSI QP, a restricted, software-only implementation of UD QP support was added to the qedrntv driver. The goal of the implementation is to provide support for control-path GSI communication; it is not a complete implementation of UD QP that supports bulk traffic and advanced features.

    Since UD support is implemented in software, the implementation might not keep up with heavy traffic and packets might be dropped. This can result in failures with bulk UD traffic.

    Workaround: Bulk UD QP traffic is not supported with the qedrntv driver and there is no workaround at this time. VMware ESXi RDMA (RoCE) use cases such as iSER, NVMe-oF (RoCE), and PVRDMA are unaffected by this issue.

  • Servers equipped with QLogic 578xx NICs might fail when frequently connecting or disconnecting iSCSI LUNs

    If you trigger QLogic 578xx NIC iSCSI connections or disconnections frequently within a short time, the server might fail because of an issue with the qfle3 driver. This is caused by a known defect in the device firmware.

    Workaround: None.

  • ESXi might fail during a driver unload or controller disconnect operation in a Broadcom NVMe over FC environment

    In a Broadcom NVMe over FC environment, ESXi might fail during a driver unload or controller disconnect operation and display an error message such as: @BlueScreen: #PF Exception 14 in world 2098707:vmknvmeGener IP 0x4200225021cc addr 0x19

    Workaround: None.

  • ESXi does not display OEM firmware version number of i350/X550 NICs on some Dell servers

    The inbox ixgben driver recognizes only the firmware data version or signature for i350/X550 NICs. On some Dell servers, the OEM firmware version number is programmed into the OEM package version region, and the inbox ixgben driver does not read this information. Only the 8-digit firmware signature is displayed.

    Workaround: To display the OEM firmware version number, install async ixgben driver version 1.7.15 or later.
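
    To see which ixgben driver is installed and which firmware string the driver currently reports, you can run the following commands. The uplink name vmnic0 is only an example; substitute the uplink that maps to the i350/X550 port:

    esxcli software vib list | grep ixgben
    esxcli network nic get -n vmnic0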

  • X710 or XL710 NICs might fail in ESXi

    When you initiate certain destructive operations on X710 or XL710 NICs, such as resetting the NIC or manipulating the VMkernel internal device tree, the NIC hardware might read data from non-packet memory.

    Workaround: Do not reset the NIC or manipulate the VMkernel internal device state.

  • NVMe-oF does not guarantee persistent VMHBA name after system reboot

    NVMe-oF is a new feature in vSphere 7.0. If your server has a USB storage installation that uses vmhba30+ and also has an NVMe over RDMA configuration, the VMHBA name might change after a system reboot. This is because the VMHBA name assignment for NVMe over RDMA differs from that for PCIe devices, and ESXi does not guarantee persistence.

    Workaround: None.
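
    Because the vmhba number can shift across reboots, a practical approach for scripts and documentation is to identify NVMe over RDMA adapters by their driver and description rather than by name, for example by listing the storage adapters. This does not change the naming behavior:

    esxcli storage core adapter list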

  • Backup fails for vCenter database size of 300 GB or greater

    If the vCenter database size is 300 GB or greater, the file-based backup will fail with a timeout. The following error message is displayed: Timeout! Failed to complete in 72000 seconds

    Workaround: None.
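
    To estimate whether a deployment is close to this limit, you can check the size of the database partitions from an SSH session on the vCenter Server appliance. The /storage/db and /storage/seat paths are where the appliance typically stores the vCenter database and the stats, events, alarms, and tasks data; verify the layout for your deployment:

    du -sh /storage/db /storage/seat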

  • Checking the compliance state of an ESXi 7.0 host against a host profile with version 6.5 or 6.7 results in an error for vmhba and vmrdma devices

    When checking the compliance of an ESXi 7.0 host that uses the nmlx5_core or nvme_pcie driver against a host profile with version 6.5 or 6.7, you might observe the following errors, where address1 and address2 are specific to the affected system.

    • A vmhba device with bus type logical, address1 is not present on your host.
    • A vmrdma device with bus type logical, address2 is not present on your host.

    The errors are due to a mismatch between the device addresses that the nmlx5_core or nvme_pcie driver generates in ESXi 7.0 and in earlier versions.

    Workaround: The errors can be ignored. The ESXi host functionality is unaffected. To resolve the compliance state error, re-extract the host profile from an ESXi 7.0 host and apply the new host profile to the host.

  • A restore of a vCenter Server 7.0 instance that was upgraded from vCenter Server 6.x with External Platform Services Controller might fail

    When you restore a vCenter Server 7.0 instance that was upgraded from vCenter Server 6.x with External Platform Services Controller to vCenter Server 7.0, the restore might fail and display the following error: Failed to retrieve appliance storage list

    Workaround: During the first stage of the restore process, increase the storage size of the vCenter Server 7.0 deployment. For example, if the storage type of the vCenter Server 6.7 setup with External Platform Services Controller is small, select the large storage type for the restore process.

  • The Enabled SSL protocols configuration parameter is not configured during a host profile remediation process

    The Enabled SSL protocols configuration parameter is not configured during host profile remediation, and only the system default protocol, tlsv1.2, is enabled. This behavior is observed for a host profile with version 7.0 and earlier in a vCenter Server 7.0 environment.

    Workaround: To enable the TLSv1.0 or TLSv1.1 SSL protocols for SFCB, log in to an ESXi host by using SSH, and run the following ESXCLI command: esxcli system wbem -P <protocol_name>

  • Unable to configure Lockdown Mode settings by using Host Profiles

    Lockdown Mode cannot be configured by using a security host profile and cannot be applied to multiple ESXi hosts at once. You must manually configure each host.

    Workaround: In vCenter Server 7.0, you can configure Lockdown Mode and manage Lockdown Mode exception user list by using a security host profile.

  • When a host profile is applied to a cluster, Enhanced vMotion Compatibility (EVC) settings are missing from the ESXi hosts

    Some settings in the VMware config file /etc/vmware/config are not managed by Host Profiles and are blocked when the config file is modified. As a result, when the host profile is applied to a cluster, the EVC settings are lost, which causes loss of EVC functionality. For example, unmasked CPU features can be exposed to workloads.

    Workaround: Reconfigure the relevant EVC baseline on the cluster to recover the EVC settings.

  • Using a host profile that defines a core dump partition in vCenter Server 7.0 results in an error

    In vCenter Server 7.0, configuring and managing a core dump partition in a host profile is not available. Attempting to apply a host profile that defines a core dump partition results in the following error: No valid coredump partition found.

    Workaround: None. In vCenter Server 7.0, Host Profiles supports only file-based core dumps.
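
    If a host does not yet have a file-based core dump target, a minimal sketch of creating and activating one with ESXCLI is shown below. The --auto option asks ESXi to create a dump file automatically and --smart asks it to select one using its selection algorithm; adjust the options to your environment:

    esxcli system coredump file add --auto
    esxcli system coredump file set --smart --enable true
    esxcli system coredump file list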

  • When a host profile is copied from an ESXi host or a host profile is edited, the user input values are lost

    Some of the host profile keys are generated by hash calculation even when explicit rules for key generation are provided. As a result, when you copy settings from a host or edit a host profile, the user input values in the answer file are lost.

    Workaround: In vCenter Server 7.0, when a host profile is copied from an ESXi host or a host profile is modified, the user input settings are preserved.

  • HTTP requests from certain libraries to vSphere might be rejected

    The HTTP reverse proxy in vSphere 7.0 enforces stricter standard compliance than in previous releases. This might expose pre-existing problems in some third-party libraries used by applications for SOAP calls to vSphere.

    If you develop vSphere applications that use such libraries or include applications that rely on such libraries in your vSphere stack, you might experience connection issues when these libraries send HTTP requests to VMOMI. For example, HTTP requests issued from vijava libraries can take the following form:

    POST /sdk HTTP/1.1
    SOAPAction
    Content-Type: text/xml; charset=utf-8
    User-Agent: Java/1.8.0_221

    The syntax in this example violates an HTTP protocol header field requirement that mandates a colon after the SOAPAction header name. As a result, the request is rejected in flight. A compliant form of the same headers is shown below.
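
    For comparison, a header block that satisfies the field syntax looks like the following. The SOAPAction value shown here is only illustrative; the reverse proxy objects to the missing colon, not to any particular value:

    POST /sdk HTTP/1.1
    SOAPAction: "urn:vim25/7.0"
    Content-Type: text/xml; charset=utf-8
    User-Agent: Java/1.8.0_221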

    Workaround: Developers leveraging noncompliant libraries in their applications can consider using a library that follows HTTP standards instead. For example, developers who use the vijava library can consider using the latest version of the yavijava library instead.

  • Editing an advanced option parameter in a host profile and setting its value to false results in setting the value to true

    When attempting to set the value of an advanced option parameter in a host profile to false, the user interface creates a non-empty string value. Values that are not empty are interpreted as true, and the advanced option parameter receives a true value in the host profile.

    Workaround: There are two possible workarounds. An example ESXCLI command for setting an advanced option on a reference host follows the list.

    • Set the advanced option parameter to false on a reference ESXi host and copy settings from this host in Host Profiles.
      Note: The host must be compliant with the host profile before modifying the advanced option parameter on the host.
    • Set the advanced option parameter to false on a reference ESXi host and create a host profile from this host. Then copy the host profile settings from the new host profile to the existing host profile.
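
    A minimal sketch of the first step shared by both workarounds, setting an advanced option to false on the reference ESXi host with ESXCLI, is shown below. The option /UserVars/SuppressShellWarning is only an example; boolean advanced options are typically integers, with 0 representing false:

    esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 0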

     

  • SNMP dynamic firewall ruleset is modified by Host Profiles during a remediation process

    The SNMP firewall ruleset has a dynamic state that is handled during runtime. When a host profile is applied, the configuration of the ruleset is managed simultaneously by Host Profiles and by SNMP, which can modify the firewall settings unexpectedly.

    Workaround: There are two possible workarounds.

    • To allow the ruleset to manage itself dynamically, exclude the SNMP firewall ruleset option from the configuration of the host profile.
    • To proceed with the double management of the ruleset, correct the firewall ruleset state when needed, for example with the ESXCLI commands shown after this list.
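
    For the second workaround, you can inspect the SNMP ruleset state with ESXCLI and, for example, re-enable the ruleset if the remediation left it disabled. This is a sketch of the correction, not a change to how Host Profiles behaves:

    esxcli network firewall ruleset list --ruleset-id snmp
    esxcli network firewall ruleset set --ruleset-id snmp --enabled true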

     

  • You might see a dump file when using the Broadcom lsi_msgpt3, lsi_msgpt35, or lsi_mr3 drivers

    When you use the lsi_msgpt3, lsi_msgpt35, or lsi_mr3 controllers, you might see the dump file lsuv2-lsi-drivers-plugin-util-zdump. The issue occurs when exiting the storelib used by this plugin utility. There is no impact on ESXi operations, and you can ignore the dump file.

    Workaround: You can safely ignore the dump file. You can remove the lsuv2-lsi-drivers-plugin with the following command:

    esxcli software vib remove -n lsuv2-lsiv2-drivers-plugin

  • You might see that a reboot is not required after configuring SR-IOV for a PCI device in vCenter Server, but device configurations made by third-party extensions might be lost and require a reboot to be re-applied

    In ESXi 7.0, an SR-IOV configuration is applied without a reboot and the device driver is reloaded. ESXi hosts might have third-party extensions that perform device configurations that need to run after the device driver is loaded during boot. A reboot is required for those third-party extensions to re-apply the device configuration.

    Workaround: You must reboot after configuring SR-IOV to apply third-party device configurations.
