vCenter Server 6.7 Update 1b | JAN 17 2019 | ISO Build 11726888
vCenter Server Appliance 6.7 Update 1b | JAN 17 2019 | ISO Build 11726888
Check for additions and updates to these release notes.
What's in the Release Notes
The release notes cover the following topics:
- What's New
- Earlier Releases of vCenter Server 6.7
- Upgrade Notes for This Release
- Patches Contained in this Release
- Resolved Issues
- Known Issues from Earlier Releases
What's New
The vCenter Server 6.7 Update 1b release addresses issues documented in the Resolved Issues section.
Earlier Releases of vCenter Server 6.7
Features and known issues of vCenter Server are described in the release notes for each release. Release notes for earlier releases of vCenter Server 6.7 are:
- VMware vCenter Server 6.7 Update 1 Release Notes
- VMware vCenter Server 6.7.0d Release Notes
- VMware vCenter Server 6.7.0c Release Notes
- VMware vCenter Server 6.7.0b Release Notes
- VMware vCenter Server 6.7.0a Release Notes
- VMware vSphere 6.7 Release Notes
For internationalization, compatibility, installation and upgrades, open source components, and product support notices, see the VMware vSphere 6.7 Release Notes.
Upgrade Notes for This Release
IMPORTANT: Upgrade and migration paths from vCenter Server 6.5 Update 2d and vCenter Server 6.5 Update 2e to vCenter Server 6.7 Update 1b are not supported.
Patches Contained in This Release
vCenter Server 6.7 Update 1b delivers the following patches. See the VMware Patch Download Center for more information on downloading patches.
Full Patch for VMware vCenter Server Appliance 6.7 Update 1b
Product Patch for vCenter Server Appliance 6.7 containing VMware software fixes.
This patch is applicable to the vCenter Server Appliance and Platform Services Controller Appliance.
For vCenter Server and Platform Services Controller Appliances
Download Filename: VMware-vCenter-Server-Appliance-6.7.0.21000-11726888-patch-FP.iso
Build: 11726888
Download Size: 1933.1 MB
md5sum: 9dec0f4580d26fef8186bbb55035e1d7
sha1checksum: eae367a6edeb271d0d11e183f202150a3dae492d
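Before staging the patch, you can verify the integrity of the downloaded ISO by comparing its checksums against the values above; a minimal sketch using the standard md5sum and sha1sum utilities on a Linux machine:
md5sum VMware-vCenter-Server-Appliance-6.7.0.21000-11726888-patch-FP.iso
sha1sum VMware-vCenter-Server-Appliance-6.7.0.21000-11726888-patch-FP.iso
The printed hashes should match the md5sum and sha1checksum values listed above.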
Download and Installation
You can download this patch by going to the VMware Patch Download Center and choosing VC from the Search by Product drop-down menu.
- Attach the VMware-vCenter-Server-Appliance-6.7.0.21000-11726888-patch-FP.iso file to the vCenter Server Appliance CD or DVD drive.
- Log in to the appliance shell as root and run the commands given below:
- To stage the ISO:
software-packages stage --iso
- To see the staged content:
software-packages list --staged
- To install the staged rpms:
software-packages install --staged
For more information on patching the vCenter Server Appliance, see Patching the vCenter Server Appliance.
For more information on staging patches, see Stage Patches to vCenter Server Appliance.
For more information on installing patches, see Install vCenter Server Appliance Patches.
For issues resolved in this patch, see Resolved Issues.
For more information on patching using the Appliance Management Interface, see Patching the vCenter Server Appliance by Using the Appliance Management Interface.
Resolved Issues
The resolved issues are grouped as follows.
vCenter Server, vSphere Web Client, and vSphere Client Issues
- You cannot add permissions for a user or group beyond the first 200 security principals in an Active Directory domain by using the vSphere Client
If you grant permissions to a user or group from an Active Directory domain by using the vSphere Client, the search for security principals is limited to 200 and you cannot add users to any principal beyond that list.
This issue is resolved in this release.
- The vpxd service might fail to start if certificates in the TRUSTED_ROOTS store exceed 20
When certificates in the TRUSTED_ROOTS store on a vCenter Server Appliance exceed 20, the vpxd service might fail to start. The vSphere Web Client and vSphere Client display the following error:
[400] An error occurred while sending an authentication request to the vCenter Single Sign-On server.
This issue is resolved in this release for the vCenter Server Appliance. With this fix, the TRUSTED_ROOTS store can support up to 30 certificates. The issue is not resolved for vCenter Server for Windows.
- vCenter Server user permissions might be automatically removed on restart of the vpxd service
Some vCenter Server user permissions might be automatically removed on restart of the vpxd service with a message similar to Reason: No reason given. You might also see Removing invalid permission logs in /var/log/vmware/vpxd/vpxd.log or %ALLUSERSPROFILE%\VMWare\vCenterServer\logs\vmware-vpx\vpxd.log.
This issue is resolved in this release.
- Removing a virtual machine folder from the inventory by using the vSphere Client might delete all virtual machines
In the vSphere Client, if you right-click on a folder and select Remove from Inventory from the drop-down menu, the action might delete all virtual machines in that folder from the disk and underlying datastore, and cause data loss.
This issue is resolved in this release.
- You might see health status warnings for low space in the archive partition
You might see health status warnings in the Summary tab of the vCenter Server Appliance interface such as File system /storage/archive is low on storage space. Increase the size of disk /storage/archive. You might also see such warnings in the node status in the system configuration settings when using the vSphere Client. System health checks trigger the warning when the partition exceeds the limits specified in the statsMonitor.xml file, but the warning does not indicate a problem, because the archive service automatically cleans old segments, such as WAL history.
This issue is resolved in this release.
- vSAN UI might be slow when vCenter Server manages a large number of vSAN hosts
When vCenter Server manages a large number of vSAN hosts, such as 1000 vSAN enabled hosts, the vSAN UI might be slow in rendering data. Furthermore, if you randomly browse various vSAN UI pages about these clusters and hosts, or add or remove hosts, a cluster partition might be triggered. This happens because vCenter Server might fail to push the updated membership configuration to some hosts due to a shortage of file handles for making connections.
This issue is resolved in this release.
- Health check warning for vSAN VUM baseline creation failure
vCenter Server 6.7 Update 1 uses an outdated URL for the release catalog. This URL does not include ESXi 6.7 Update 1 builds. This issue can trigger a health check warning for baseline creation failure.
This issue is resolved in this release.
Known Issues from Earlier Releases
The earlier known issues are grouped as follows.
- Installation, Upgrade, and Migration Issues
- Security Issues
- Networking Issues
- Storage Issues
- Backup and Restore Issues
- vCenter Server Appliance, vCenter Server, vSphere Web Client, and vSphere Client Issues
- Virtual Machine Management Issues
- vSphere HA and Fault Tolerance Issues
- Auto Deploy and Image Builder Issues
- Miscellaneous Issues
- ESXi installation or upgrade fails due to memory corruption on HPE ProLiant DL380/360 Gen 9 servers
The issue occurs on HPE ProLiant - DL380/360 Gen 9 Servers that have a Smart Array P440ar storage controller.
Workaround: Set the server BIOS mode to UEFI before you install or upgrade ESXi.
- After an ESXi upgrade to version 6.7 and a subsequent rollback to version 6.5 or earlier, you might experience failures with error messages
You might see failures and error messages when you perform one of the following on your ESXi host after reverting to 6.5 or earlier versions:
- Install patches and VIBs on the host
Error message: [DependencyError] VIB VMware_locker_tools-light requires esx-version >= 6.6.0
- Install or upgrade VMware Tools on VMs
Error message: Unable to install VMware Tools.
After the ESXi rollback from version 6.7, the new tools-light VIB does not revert to the earlier version. As a result, the VIB becomes incompatible with the rolled back ESXi host causing these issues.
Workaround: Perform the following to fix this problem.
SSH to the host and run one of these commands:
esxcli software vib install -v /path/to/tools-light.vib
or
esxcli software vib install -d /path/to/depot/zip -n tools-light
where the VIB and depot ZIP match the currently running ESXi version.
Note: For VMs that already have the new VMware Tools installed, you do not have to revert VMware Tools when the ESXi host is rolled back.
- Special characters backslash (\) or double quote (") used in passwords cause the installation pre-check to fail
If the special characters backslash (\) or double quote (") are used in ESXi, vCenter Single Sign-On, or operating system password fields in the vCenter Server Appliance installation templates, the installation pre-check fails with the following error:
Error message: com.vmware.vcsa.installer.template.cli_argument_validation: Invalid \escape: line ## column ## (char ###)
Workaround: If you include the special characters backslash (\) or double quote (") in the passwords for ESXi, operating systems, or Single Sign-On, the special characters need to be escaped. For example, the password pass\word should be escaped as pass\\word.
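For illustration, a hypothetical fragment of a JSON installation template with the escaped password (the field names here are examples only, not the exact template schema):
{
  "os": {
    "password": "pass\\word"
  }
}
In JSON, the backslash is itself an escape character, which is why a literal backslash in the password must be doubled.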
- Windows vCenter Server 6.7 installer fails when non-ASCII characters are present in password
The Windows vCenter Server 6.7 installer fails when the Single Sign-on password contains non-ASCII characters for Chinese, Japanese, Korean, and Taiwanese locales.
Workaround: Ensure that the Single Sign-on password contains ASCII characters only for Chinese, Japanese, Korean, and Taiwanese locales.
- Cannot log in to vSphere Appliance Management Interface if the colon character (:) is part of vCenter Server root password
During the vCenter Server Appliance UI installation (Set up appliance VM page of Stage 1), if you include the colon character (:) as part of the vCenter Server root password, logging in to the vSphere Appliance Management Interface (https://vc_ip:5480) fails. The password might be accepted by the password rule check during the setup, but login fails.
Workaround: Do not use the colon character (:) in the vCenter Server root password in the vCenter Server Appliance UI (Set up appliance VM page of Stage 1).
- vCenter Server Appliance installation fails when the backslash character (\) is included in the vCenter Single Sign-On password
During the vCenter Server Appliance UI installation (SSO setup page of Stage 2), if you include the backslash character (\) as part of the vCenter Single Sign-On password, the installation fails with the error Analytics Service registration with Component Manager failed. The password might be accepted by the password rule check, but installation fails.
Workaround: Do not use the backslash character (\) in the vCenter Single Sign-On password in the vCenter Server Appliance UI installer (SSO setup page of Stage 2).
- Scripted ESXi installation fails on HP ProLiant Gen 9 Servers with an error
When you perform a scripted ESXi installation on an HP ProLiant Gen 9 Server under the following conditions:
- The Embedded User Partition option is enabled in the BIOS.
- You use multiple USB drives during installation: one USB drive contains the ks.cfg file, and another USB drive is not formatted.
The installation fails with the error message Partitions not initialized.
Workaround:
- Disable the Embedded User Partition option in the server BIOS.
- Format the unformatted USB drive with a file system or unplug it from the server.
- Patch history might be improperly displayed after you patch to vCenter Server Appliance 6.7.0a by using the vCenter Server Appliance Management Interface
After you patch to vCenter Server Appliance 6.7.0a by using the vCenter Server Appliance Management Interface and run the software-packages list --history command in the appliance shell, the system might not display the full list of applied patches.
Workaround: Track patch history in the /var/vmware/applmgmt/patch-history/ directory.
- When you start patching to vCenter Server Appliance 6.7.0a, you might be logged out of the vCenter Server Appliance Management Interface
When you start a patching operation from vCenter Server Appliance 6.7 to vCenter Server Appliance 6.7.0a by using the vCenter Server Appliance Management Interface, you might be logged out from the interface and lose connectivity. You cannot log in again until the operation completes.
Workaround: Wait until the operation completes and log in to the vCenter Server Appliance Management Interface.
- Windows vCenter Server 6.0.x or 6.5.x upgrade to vCenter Server 6.7 fails if vCenter Server contains non-ASCII or high-ASCII named 5.5 host profiles
When a source Windows vCenter Server 6.0.x or 6.5.x contains vCenter Server 5.5.x host profiles named with non-ASCII or high-ASCII characters, UpgradeRunner fails to start during the upgrade pre-check process.
Workaround: Before upgrading Windows vCenter Server 6.0.x or 6.5.x to vCenter Server 6.7, upgrade the ESXi 5.5.x hosts that have the non-ASCII or high-ASCII named host profiles to ESXi 6.0.x or 6.5.x, then update the host profile from the upgraded host by clicking Copy Settings from Host.
- Patching of vCenter Server 6.7 on Windows to vCenter Server 6.7.0a might fail with error code 3010
If you reconfigure a vCenter Server 6.7 system on Windows with an Embedded Platform Services Controller and repoint it to an External Platform Services Controller, a patching operation from vCenter Server 6.7 to vCenter Server 6.7.0a might fail with the error Installation of component VMware Directory Service client failed.
Workaround: After you reconfigure the vCenter Server 6.7 system to use an External Platform Services Controller, you must reboot the system and then start the upgrade process.
- vCenter Server Appliance Management Interface might not display the vCenter Server 6.7.0a patch
The vCenter Server Appliance Management Interface might not display the vCenter Server 6.7.0a patch and you might not be able to perform an upgrade.
Workaround: If you do not perform a fresh install, but use the vCenter Server Appliance Management Interface to patch your vCenter Server system to vCenter Server 6.7.0a, you must:
- In the vCenter Server Appliance Management Interface, go to Update > Settings and configure the custom URL to https://vapp-updates.vmware.com/vai-catalog/valm/vmw/8d167796-34d5-4899-be0a-6daade4005a3/6.7.0.10000.latest/.
- Retry the upgrade.
- You cannot run the camregister command with the -x option if the vCenter Single Sign-On password contains non-ASCII characters
When you run the camregister command with the -x file option, for example, to register the vSphere Authentication Proxy, the process fails with an access denied error when the vCenter Single Sign-On password contains non-ASCII characters.
Workaround: Either set up the vCenter Single Sign-On password with ASCII characters, or use the -p password option when you run the camregister command to enter the vCenter Single Sign-On password that contains non-ASCII characters.
- The Bash shell and SSH login are disabled after upgrading to vCenter Server 6.7
After upgrading to vCenter Server 6.7, you are not able to access the vCenter Server Appliance using either the Bash shell or SSH login.
Workaround:
- After successfully upgrading to vCenter Server 6.7, log in to the vCenter Server Appliance Management Interface. In a Web browser, go to: https://appliance_ip_address_or_fqdn:5480
- Log in as root. The default root password is the password you set while deploying the vCenter Server Appliance.
- Click Access, and click Edit.
- Edit the access settings for the Bash shell and SSH login. When enabling Bash shell access to the vCenter Server Appliance, enter the number of minutes to keep access enabled.
- Click OK to save the settings.
- Management node migration is blocked if vCenter Server for Windows 6.0 is installed on Windows Server 2008 R2 without previously enabling Transport Layer Security 1.2
This issue occurs if you are migrating vCenter Server for Windows 6.0 using an external Platform Services Controller (an MxN topology) on Windows Server 2008 R2. After migrating the external Platform Services Controller (PSC), when you run Migration Assistant on the Management node it fails, reporting that it cannot retrieve the PSC version. This error occurs because Windows Server 2008 R2 does not support Transport Layer Security (TLS) 1.2 by default, which is the default TLS protocol for Platform Services Controller 6.7.
Workaround: Enable TLS 1.2 for Windows Server 2008 R2:
- Navigate to the registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols
- Create a new folder and label it TLS 1.2.
- Create two new keys within the TLS 1.2 folder, and name the keys Client and Server.
folder, and name the keys Client and Server. - Under the Client key, create two DWORD (32-bit) values, and name them DisabledByDefault and Enabled.
- Under the Server key, create two DWORD (32-bit) values, and name them DisabledByDefault and Enabled.
- Ensure that the Value field is set to 0 and that the Base is Hexadecimal for DisabledByDefault.
- Ensure that the Value field is set to 1 and that the Base is Hexadecimal for Enabled.
- Reboot the Windows Server 2008 R2 computer.
For more information on using TLS 1.2 with Windows Server 2008 R2, refer to the operating system vendor's documentation.
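As an alternative to editing the registry by hand, the same keys and values can be created from an elevated Command Prompt; a sketch using the standard reg add tool (a reboot is still required afterward):
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client" /v DisabledByDefault /t REG_DWORD /d 0 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client" /v Enabled /t REG_DWORD /d 1 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server" /v DisabledByDefault /t REG_DWORD /d 0 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server" /v Enabled /t REG_DWORD /d 1 /f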
- vCenter Server containing host profiles with version less than 6.0 fails during upgrade to version 6.7
vCenter Server 6.7 does not support host profiles with version less than 6.0. To upgrade to vCenter Server 6.7, you must first upgrade the host profiles to version 6.0 or later, if you have any of the following components:
- ESXi host(s) version - 5.1 or 5.5
- vCenter server version - 6.0 or 6.5
- Host profiles version - 5.1 or 5.5
Workaround: See KB 52932
- After upgrading to vCenter Server 6.7, any edits to the ESXi host's /etc/ssh/sshd_config file are discarded, and the file is restored to the vCenter Server 6.7 default configuration
Due to changes in the default values in the
/etc/ssh/sshd_config
file, the vCenter Server 6.7 upgrade replaces any manual edits to this configuration file with the default configuration. This change was necessary as some prior settings (for example, permitted ciphers) are no longer compatible with current ESXi behavior, and prevented SSHD (SSH daemon) from starting correctly.CAUTION: Editing
/etc/ssh/sshd_config
is not recommended. SSHD is disabled by default, and the preferred method for editing the system configuration is through the VIM API (including the ESXi Host Client interface) or ESXCLI.Workaround: If edits to
/etc/ssh/sshd_config
are needed, you can apply them after successfully completing the vCenter Server 6.7 upgrade. The default configuration file now contains a version number. Preserve the version number to avoid overwriting the file.For further information on editing the
/etc/ssh/sshd_config
file, see the following Knowledge Base articles:
- Virtualization Based Security (VBS) on vSphere in Windows Guest OS versions RS1, RS2, and RS3 requires Hyper-V to be enabled in the Guest OS.
Workaround: Enable Hyper-V Platform on Windows Server 2016. In the Server Manager, under Local Server select Manage -> Add Roles and Features Wizard and under Role-based or feature-based installation select Hyper-V from the server pool and specify the server roles. Choose defaults for Server Roles, Features, Hyper-V, Virtual Switches, Migration and Default Stores. Reboot the host.
Enable Hyper-V on Windows 10: Browse to Control Panel -> Programs -> Turn Windows features on or off. Check the Hyper-V Platform which includes the Hyper-V Hypervisor and Hyper-V Services. Uncheck Hyper-V Management Tools. Click OK. Reboot the host.
- Host profile PeerDNS flags do not work in some scenarios
If PeerDNS for IPv4 is enabled for a vmknic on a stateless host that has an associated host profile, the IPv6 PeerDNS flag might appear with a different state in the extracted host profile after the host reboots.
Workaround: None.
- When you upgrade vSphere Distributed Switches to version 6.6, you might encounter a few known issues
During upgrade, the connected virtual machines might experience packet loss for a few seconds.
Workaround: If you have multiple vSphere Distributed Switches that need to be upgraded to version 6.6, upgrade the switches sequentially.
Schedule the upgrade of vSphere Distributed Switches during a maintenance window, set DRS mode to manual, and do not apply DRS recommendations for the duration of the upgrade.
For more details about known issues and solutions, see KB 52621
- VM fails to power on when Network I/O Control is enabled and all active uplinks are down
A VM fails to power on when Network I/O Control is enabled and the following conditions are met:
- The VM is connected to a distributed port group on a vSphere distributed switch
- The VM is configured with bandwidth allocation reservation and the VM's network adapter (vNIC) has a reservation configured
- The distributed port group teaming policy is set to Failover
- All active uplinks on the distributed switch are down. In this case, vSphere DRS cannot use the standby uplinks and the VM fails to power on.
Workaround: Move the available standby adapters to the active adapters list in the teaming policy of the distributed port group.
- Network flapping on a NIC that uses qfle3f driver might cause ESXi host to crash
The qfle3f driver might cause the ESXi host to crash (PSOD) when the physical NIC that uses the qfle3f driver experiences frequent link status flapping every 1-2 seconds.
Workaround: Make sure that network flapping does not occur. If the link status flapping interval is more than 10 seconds, the qfle3f driver does not cause ESXi to crash. For more information, see KB 2008093.
- Port Mirror traffic packets of ERSPAN Type III fail to be recognized by packet analyzers
An incorrect bit introduced in the ERSPAN Type III packet header causes all ERSPAN Type III packets to appear corrupt in packet analyzers.
Workaround: Use GRE or ERSPAN Type II packets, if your traffic analyzer supports these types.
- DNS configuration esxcli commands are not supported on non-default TCP/IP stacks
DNS configuration of non-default TCP/IP stacks is not supported. Commands such as esxcli network ip dns server add -N vmotion -s 10.11.12.13 do not work.
Workaround: Do not use DNS configuration esxcli commands on non-default TCP/IP stacks.
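For reference, the equivalent command without the -N option targets the default TCP/IP stack and is supported; for example:
esxcli network ip dns server add -s 10.11.12.13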
- Compliance check fails with an error when applying a host profile with enabled default IPv4 gateway for vmknic interface
When applying a host profile with an enabled default IPv4 gateway for the vmknic interface, the setting is populated with "0.0.0.0" and does not match the host info, resulting in the following error:
IPv4 vmknic gateway configuration doesn't match the specification
Workaround:
- Edit the host profile settings.
- Navigate to Networking configuration > Host virtual nic or Host portgroup > (name of the vSphere Distributed Switch or name of portgroup) > IP address settings.
- From the Default gateway VMkernel Network Adapter (IPv4) drop-down menu, select Choose a default IPv4 gateway for the vmknic and enter the vmknic default IPv4 gateway.
- Intel Fortville series NICs cannot receive Geneve encapsulation packets with option length bigger than 255 bytes
If you configure Geneve encapsulation with option length bigger than 255 bytes, the packets are not received correctly on Intel Fortville NICs X710, XL710, and XXV710.
Workaround: Disable hardware VLAN stripping on these NICs by running the following command:
esxcli network nic software set --untagging=1 -n vmnicX
- RSPAN_SRC mirror session fails after migration
When a VM connected to a port assigned to an RSPAN_SRC mirror session is migrated to another host, and the required pNIC is not present on the destination network of the destination host, the RSPAN_SRC mirror session fails to configure on the port. This causes the port connection to fail, but the vMotion migration process succeeds.
Workaround: To restore the port connection, complete either one of the following:
- Remove the failed port and add a new port.
- Disable the port and enable it.
The mirror session fails to configure, but the port connection is restored.
- NFS datastores intermittently become read-only
A host's NFS datastores might become read-only when the NFS vmknic temporarily loses its IP address or after a stateless host reboots.
Workaround: You can unmount and remount the datastores to regain connectivity through the NFS vmknic. You can also set the NFS datastore write permission to both the IP address of the NFS vmknic and the IP address of the Management vmknic.
- When editing a VM's storage policies, selecting Host-local PMem Storage Policy fails with an error
In the Edit VM Storage Policies dialog, if you select Host-local PMem Storage Policy from the dropdown menu and click OK, the task fails with one of these errors:
The operation is not supported on the object.
or
Incompatible device backing specified for device '0'.
Workaround: You cannot apply the Host-local PMem Storage Policy to VM home. For a virtual disk, you can use the migration wizard to migrate the virtual disk and apply the Host-local PMem Storage Policy.
- Datastores might appear as inaccessible after ESXi hosts in a cluster recover from a permanent device loss state
This issue might occur in environments where the hosts in the cluster share a large number of datastores, for example, 512 to 1000 datastores. After the hosts in the cluster recover from the permanent device loss condition, the datastores are mounted successfully at the host level. However, in vCenter Server, several datastores might continue to appear as inaccessible for a number of hosts.
Workaround: On the hosts that show inaccessible datastores in the vCenter Server view, perform the Rescan Storage operation from vCenter Server.
- Migration of a virtual machine from a VMFS3 datastore to VMFS5 fails in a mixed ESXi 6.5 and 6.7 host environment
If you have a mixed host environment, you cannot migrate a virtual machine from a VMFS3 datastore connected to an ESXi 6.5 host to a VMFS5 datastore on an ESXi 6.7 host.
Workaround: Upgrade the VMFS3 datastore to VMFS5 to be able to migrate the VM to the ESXi 6.7 host.
- Warning message about a VMFS3 datastore remains unchanged after you upgrade the VMFS3 datastore using the CLI
Typically, you use the CLI to upgrade the VMFS3 datastore that failed to upgrade during an ESXi upgrade. The VMFS3 datastore might fail to upgrade due to several reasons including the following:
- No space is available on the VMFS3 datastore.
- One of the extents on the spanned datastore is offline.
After you fix the reason of the failure and upgrade the VMFS3 datastore to VMFS5 using the CLI, the host continues to detect the VMFS3 datastore and reports the following error:
Deprecated VMFS (ver 3) volumes found. Upgrading such volumes to VMFS (ver5) is mandatory for continued availability on vSphere 6.7 host.
Workaround: To remove the error message, restart hostd using the /etc/init.d/hostd restart command or reboot the host.
- The Mellanox ConnectX-4/ConnectX-5 native ESXi driver might exhibit performance degradation when its Default Queue Receive Side Scaling (DRSS) feature is turned on
Receive Side Scaling (RSS) technology distributes incoming network traffic across several hardware-based receive queues, allowing inbound traffic to be processed by multiple CPUs. In Default Queue Receive Side Scaling (DRSS) mode, the entire device is in RSS mode. The driver presents a single logical queue to OS and is backed by several hardware queues.
The native nmlx5_core driver for the Mellanox ConnectX-4 and ConnectX-5 adapter cards enables the DRSS functionality by default. While DRSS helps to improve performance for many workloads, it could lead to possible performance degradation with certain multi-VM and multi-vCPU workloads.
Workaround: If significant performance degradation is observed, you can disable the DRSS functionality.
- Run the esxcli system module parameters set -m nmlx5_core -p "DRSS=0 RSS=0" command (see the verification sketch after these steps).
- Reboot the host.
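To confirm the new values before rebooting, you can list the parameters configured for the module; for example:
esxcli system module parameters list -m nmlx5_core
The DRSS and RSS rows should show a value of 0.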
- Datastore name does not extract to the Coredump File setting in the host profile
When you extract a host profile, the Datastore name field is empty in the Coredump File setting of the host profile. This issue appears when you use an esxcli command to set the coredump file.
Workaround:
- Extract a host profile from an ESXi host.
- Edit the host profile settings and navigate to General System Settings > Core Dump Configuration > Coredump File.
- Select the Create the Coredump file with an explicit datastore and size option and enter the Datastore name where you want the Coredump File to reside.
- Native software FCoE adapters configured on an ESXi host might disappear when the host is rebooted
After you successfully enable the native software FCoE adapter (vmhba) supported by the vmkfcoe driver and then reboot the host, the adapter might disappear from the list of adapters. This might occur when you use Cavium QLogic 57810 or QLogic 57840 CNAs supported by the qfle3 driver.
Workaround: To recover the vmkfcoe adapter, perform these steps:
- Run the esxcli storage core adapter list command to make sure that the adapter is missing from the list.
- Verify the vSwitch configuration on vmnic associated with the missing FCoE adapter.
- Run the following command to discover the FCoE vmhba:
- On a fabric setup:
#esxcli fcoe nic discover -n vmnic_number
- On a VN2VN setup:
#esxcli fcoe nic discover -n vmnic_number
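After the discovery completes, you can confirm that the vmkfcoe adapter is present again by listing the FCoE and storage adapters; for example:
esxcli fcoe adapter list
esxcli storage core adapter list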
- Attempts to create a VMFS datastore on an ESXi 6.7 host might fail in certain software FCoE environments
Your attempts to create the VMFS datastore fail if you use the following configuration:
- Native software FCoE adapters configured on an ESXi 6.7 host.
- Cavium QLogic 57810 or 57840 CNAs.
- Cisco FCoE switch connected directly to an FCoE port on a storage array from the Dell EMC VNX5300 or VNX5700 series.
Workaround: None.
As an alternative, you can switch to the following end-to-end configuration:
ESXi host > Cisco FCoE switch > FC switch > storage array from the Dell EMC VNX5300 or VNX5700 series.
- Windows Explorer displays some backups with unicode differently from how browsers and file system paths show them
Some backups containing unicode display differently in the Windows Explorer file system folder than they do in browsers and file system paths.
Workaround: Using http, https, or ftp, you can browse backups with your web browser instead of going to the storage folder locations through Windows Explorer.
- The time synchronization mode setting is not retained when upgrading vCenter Server Appliance
If NTP time synchronization is disabled on a source vCenter Server Appliance and you perform an upgrade to vCenter Server Appliance 6.7, after the upgrade has successfully completed, NTP time synchronization is enabled on the newly upgraded appliance.
Workaround:
- After successfully upgrading to vCenter Server Appliance 6.7, log in to the vCenter Server Appliance Management Interface as root, at https://IP_or_FQDN_of_appliance:5480. The default root password is the password you set while deploying the vCenter Server Appliance.
- In the vCenter Server Appliance Management Interface, click Time.
- In the Time Synchronization pane, click Edit.
- From the Mode drop-down menu, select Disabled.
The newly upgraded vCenter Server Appliance 6.7 will no longer use NTP time synchronization, and will instead use the system time zone settings.
- After successfully upgrading to vCenter Server Appliance 6.7, log into the vCenter Server Appliance Management Interface as root.
- Login to the vSphere Web Client with Windows session authentication fails on Firefox version 54 or later
If you use Firefox version 54 or later to log in to the vSphere Web Client, and you use your Windows session for authentication, the VMware Enhanced Authentication Plugin might fail to populate your user name and to log you in.
Workaround: If you are using Windows session authentication to log in to the vSphere Web Client, use one of the following browsers: Internet Explorer, Chrome, or Firefox version 53 or earlier.
- vCenter hardware health alarm notifications are not triggered in some instances
When multiple sensors in the same category on an ESXi host are tripped within a time span of less than five minutes, traps are not received and email notifications are not sent.
Workaround: None. You can check the hardware sensors section for any alerts.
- Trying to disable the usage of Hyperthreading after enabling the HyperthreadingMitigation option on an ESXi host might trigger an incorrect message in the Processors view
When you enable the HyperthreadingMitigation option on an ESXi host by setting VMkernel.Boot.HyperthreadingMitigation to True, the use of hyperthreading is disabled, regardless of the previous settings. If you then change the setting VMkernel.Boot.Hyperthreading to False to disable the option, you might see the message Disabled (Enabled on restart) in the Processors view. In fact, hyperthreading is not used after the restart.
Workaround: None. Ignore the message.
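For reference, on builds that include the mitigation, the HyperthreadingMitigation option can also be inspected and set from the command line; a sketch using esxcli (the change takes effect after a reboot):
esxcli system settings kernel list -o hyperthreadingMitigation
esxcli system settings kernel set -s hyperthreadingMitigation -v TRUE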
- When using the VCSA Installer Time Sync option, you must connect the target ESX to the NTP server in the Time & Date Setting from the ESX Management
If you select Time Sync with an NTP server in the VCSA installer (Stage 2 > Appliance configuration > Time Sync option, ESX/NTP server), the target ESX host must already be connected to the NTP server in the Time & Date setting of the ESX management interface; otherwise, the installation fails.
Workaround:
- Set the Time Sync option in Stage 2 > Appliance configuration to sync with the ESX host, or
- Set the Time Sync option in Stage 2 > Appliance configuration to sync with NTP servers, and make sure that both the ESX host and the vCenter Server are set to connect to NTP servers.
- When you monitor Windows vCenter Server health, an error message appears
Health service is not available for Windows vCenter Server. If you select the vCenter Server, and click Monitor > Health, an error message appears:
Unable to query vSAN health information. Check vSphere Client logs for details.
This problem can occur after you upgrade the Windows vCenter Server from release 6.0 Update 1 or 6.0 Update 2 to release 6.7. You can ignore this message.
Workaround: None. Users can access vSAN health information through the vCenter Server Appliance.
- vCenter hardware health alarms do not function with earlier ESXi versions
If an ESXi host of version 6.5 Update 1 or earlier is added to vCenter Server 6.7, hardware health related alarms are not generated when hardware events occur, such as high CPU temperatures, fan failures, and voltage fluctuations.
Workaround: None.
- vCenter Server stops working in some cases when using vmodl to edit or expand a disk
When you configure a VM disk in a Storage DRS-enabled cluster using the latest vmodl, vCenter Server stops working. A previous workaround using an earlier vmodl no longer works and will also cause vCenter Server to stop working.
Workaround: None.
- vCenter Server for Windows migration to vCenter Server Appliance fails with error
When you migrate vCenter Server for Windows 6.0.x or 6.5.x to vCenter Server Appliance 6.7, the migration might fail during the data export stage with the error The compressed zip folder is invalid or corrupted.
Workaround: You must zip the data export folder manually and follow these steps:
- In the source system, create an environment variable MA_INTERACTIVE_MODE: go to Computer > Properties > Advanced system settings > Environment Variables > System Variables > New, and enter "MA_INTERACTIVE_MODE" as the variable name with value 0 or 1. (A command-line alternative is sketched after these steps.)
- Start the VMware Migration Assistant and provide your password.
- Start the migration from the client machine. The migration pauses, and the Migration Assistant console displays the message To continue the migration, create the export.zip file manually from the export data (include export folder).
- NOTE: Do not press any keys or tabs on the Migration Assistant console.
- Go to the %appdata%\vmware\migration-assistant folder.
- Delete the export.zip created by the Migration Assistant.
- To continue the migration, manually create the export.zip file from the export folder.
- Return to the Migration Assistant console. Type Y and press Enter.
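As a command-line alternative to the first step above, the environment variable can be created from an elevated Command Prompt on the source system using the standard setx tool (the /M switch creates a system-wide variable; start the Migration Assistant afterward so that it picks up the value):
setx MA_INTERACTIVE_MODE 1 /M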
- Discrepancy between the build number in VAMI and the build number in the vSphere Client
In vSphere 6.7, the VAMI summary tab displays the ISO build for the vCenter Server and vCenter Server Appliance products. The vSphere Client summary tab displays the build for the vCenter product, which is a component within the vCenter Server product.
Workaround: None.
- vCenter Server Appliance 6.7 displays an error message in the Available Update section of the vCenter Server Appliance Management Interface (VAMI)
The Available Update section of the vCenter Server Appliance Management Interface (VAMI) displays the following error message:
Check the URL and try again.
This message is generated when the vCenter Server Appliance searches for and fails to find a patch or update. No functionality is impacted by this issue. This issue will be resolved with the release of the first patch for vSphere 6.7.
Workaround: None. No functionality is impacted by this issue.
- Name of the virtual machine in the inventory changes to its path name
This issue might occur when a datastore where the VM resides enters the All Paths Down state and becomes inaccessible. When hostd is loading or reloading VM state, it is unable to read the VM's name and returns the VM path instead. For example, /vmfs/volumes/123456xxxxxxcc/cs-00.111.222.333.
Workaround: After you resolve the storage issue, the virtual machine reloads, and its name is displayed again.
- You must select the "Secure boot" Platform Security Level when enabling VBS in a Guest OS on AMD systems
On AMD systems, vSphere virtual machines do not provide a vIOMMU. Because a vIOMMU is required for DMA protection, AMD users cannot select "Secure Boot and DMA protection" in the Windows Group Policy Editor when they turn on Virtualization Based Security. Instead, select "Secure boot." If you select the wrong option, Windows silently disables the VBS services.
Workaround: Select "Secure boot" Platform Security Level in a Guest OS on AMD systems.
- You cannot hot add memory and CPU for Windows VMs when Virtualization Based Security (VBS) is enabled within Windows
Virtualization Based Security (VBS) is a new feature introduced in Windows 10 and Windows Server 2016. vSphere supports running Windows with VBS enabled starting with the vSphere 6.7 release. However, hot add of memory and CPU does not work for Windows VMs when VBS is enabled.
Workaround: Power off the VM, change the memory or CPU settings, and power on the VM.
- Snapshot tree of a linked-clone VM might be incomplete after a vSAN network recovery from a failure
A vSAN network failure might impact accessibility of vSAN objects and VMs. After a network recovery, the vSAN objects regain accessibility. The hostd service reloads the VM state from storage to recover VMs. However, for a linked-clone VM, hostd might not detect that the parent VM namespace has recovered its accessibility. This results in the VM remaining in inaccessible state and VM snapshot information not being displayed in vCenter Server.
Workaround: Unregister the VM, then re-register it to force hostd to reload the VM state. Snapshot information is then loaded from storage.
- An OVF Virtual Appliance fails to start in the vSphere Client
The vSphere Client does not support selecting vService extensions in the Deploy OVF Template wizard. As a result, if an OVF virtual appliance uses vService extensions and you use the vSphere Client to deploy the OVF file, the deployment succeeds, but the virtual appliance fails to start.
Workaround: Use the vSphere Web Client to deploy OVF virtual appliances that use vService extensions.
- When you configure Proactive HA in Manual/MixedMode in vSphere 6.7 RC build you are prompted twice to apply DRS recommendations
When you configure Proactive HA in Manual/MixedMode in vSphere 6.7 RC build and a red health update is sent from the Proactive HA provider plug-in, you are prompted twice to apply the recommendations under Cluster -> Monitor -> vSphere DRS -> Recommendations. The first prompt is to enter the host into maintenance mode. The second prompt is to migrate all VMs on a host entering maintenance mode. In vSphere 6.5, these two steps are presented as a single recommendation for entering maintenance mode, which lists all VMs to be migrated.
Workaround: There is no impact to work flow or results. You must apply the recommendations twice. If you are using automated scripts, you must modify the scripts to include the additional step.
- Lazy import upgrade interaction when VCHA is not configured
The VCHA feature is available as part of the 6.5 release. As of 6.5, a VCHA cluster cannot be upgraded while preserving the VCHA configuration. The recommended approach for upgrade is to first remove the VCHA configuration, either through the vSphere Client or by calling the destroy VCHA API. Therefore, for a lazy import upgrade workflow without a VCHA configuration, there is no interaction with VCHA.
Do not configure a fresh VCHA setup while lazy import is in progress. The VCHA setup requires cloning the Active VM as Passive/Witness VM. As a result of an ongoing lazy import, the amount of data that needs to be cloned is large and may lead to performance issues.
Workaround: None.
- Reboot of an ESXi stateless host resets the numRxQueue value of the host
When an ESXi host provisioned with vSphere Auto Deploy reboots, it loses the previously set numRxQueue value. The Host Profiles feature does not support saving the numRxQueue value after the host reboots.
Workaround: After the ESXi stateless host reboots:
- Remove the vmknic from the host.
- Create a vmknic on the host with the expected numRxQueue value.
- After caching on a drive, if the server is in the UEFI mode, a boot from cache does not succeed unless you explicitly select the device to boot from the UEFI boot manager
In the case of stateless caching, after the ESXi image is cached on a 512n, 512e, USB, or 4Kn target disk, the ESXi stateless boot from Auto Deploy might fail on a system reboot. This occurs if the Auto Deploy service is down.
The system attempts to search for the cached ESXi image on the disk, next in the boot order. If the ESXi cached image is found, the host is booted from it. In legacy BIOS, this feature works without problems. However, in the UEFI mode of the BIOS, the next device with the cached image might not be found. As a result, the host cannot boot from the image even if the image is present on the disk.
Workaround: If the Auto Deploy service is down, on the system reboot, manually select the disk with the cached image from the UEFI Boot Manager.
- A stateless ESXi host boot time might take 20 minutes or more
The booting of a stateless ESXi host with 1,000 configured datastores might require 20 minutes or more.
Workaround: None.
- ESXi might fail during reboot with VMs running on the iSCSI LUNs claimed by the qfle3i driver
ESXi might fail during reboot if VMs running on iSCSI LUNs claimed by the qfle3i driver have active I/O when you attempt to reboot the server.
Workaround: First power off VMs and then reboot the ESXi host.
- VXLAN stateless hardware offloads are not supported with Guest OS TCP traffic over IPv6 on UCS VIC 13xx adapters
You may experience issues with VXLAN encapsulated TCP traffic over IPv6 on Cisco UCS VIC 13xx adapters configured to use the VXLAN stateless hardware offload feature. For VXLAN deployments involving Guest OS TCP traffic over IPV6, TCP packets subject to TSO are not processed correctly by the Cisco UCS VIC 13xx adapters, which causes traffic disruption. The stateless offloads are not performed correctly. From a TCP protocol standpoint this may cause incorrect packet checksums being reported to the ESXi software stack, which may lead to incorrect TCP protocol processing in the Guest OS.
Workaround: To resolve this issue, disable the VXLAN stateless offload feature on the Cisco UCS VIC 13xx adapters for VXLAN encapsulated TCP traffic over IPV6. To disable the VXLAN stateless offload feature in UCS Manager, disable the Virtual Extensible LAN field in the Ethernet Adapter Policy. To disable the VXLAN stateless offload feature in the CIMC of a Cisco C-Series UCS server, uncheck the Enable VXLAN field in the Ethernet Interfaces vNIC properties section.
- Significant time might be required to list a large number of unresolved VMFS volumes using the batch QueryUnresolvedVmfsVolume API
ESXi provides the batch QueryUnresolvedVmfsVolume API, so that you can query and list unresolved VMFS volumes or LUN snapshots. You can then use other batch APIs to perform operations, such as resignaturing specific unresolved VMFS volumes. By default, when the API QueryUnresolvedVmfsVolume is invoked on a host, the system performs an additional filesystem liveness check for all unresolved volumes that are found. The liveness check detects whether the specified LUN is mounted on other hosts, whether an active VMFS heartbeat is in progress, or if there is any filesystem activity. This operation is time consuming and requires at least 16 seconds per LUN. As a result, when your environment has a large number of snapshot LUNs, the query and listing operation might take significant time.
Workaround: To decrease the time of the query operation, you can disable the filesystem liveness check.
- Log in to your host as root.
- Open the configuration file for hostd using a text editor. The configuration file is /etc/vmware/hostd/config.xml; make the change under the plugins/hostsvc/storage node (a sketch of the resulting fragment follows these steps).
- Add the checkLiveFSUnresolvedVolume parameter and set its value to FALSE. Use the following syntax:
<checkLiveFSUnresolvedVolume>FALSE</checkLiveFSUnresolvedVolume>
As an alternative, you can set the ESXi Advanced option VMFS.UnresolvedVolumeLiveCheck to FALSE in the vSphere Client.
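For illustration, the relevant part of config.xml might look roughly like this after the edit (the nesting shown here is a sketch derived from the plugins/hostsvc/storage node path above, not a verbatim copy of the shipped file):
<config>
  <plugins>
    <hostsvc>
      <storage>
        <checkLiveFSUnresolvedVolume>FALSE</checkLiveFSUnresolvedVolume>
      </storage>
    </hostsvc>
  </plugins>
</config>
Because hostd reads this file at startup, restart the service (for example, with /etc/init.d/hostd restart) for the change to take effect.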
- Compliance check fails with an error for the UserVars.ESXiVPsDisabledProtocols option when an ESXi host upgraded to version 6.7 is attached to a host profile with version 6.0
This issue occurs when you perform the following actions:
- Extract a host profile from an ESXi host with version 6.0.
- Upgrade the ESXi host to version 6.7.
- The host appears as non-compliant for UserVars.ESXiVPsDisabledProtocols option even after remediation.
Workaround:
- Extract a new host profile from the upgraded ESXi host and attach the host to the profile.
- Upgrade the host profile by using the Copy Settings from Host from the upgraded ESXi host.
- Inconsistent patching of vCenter Server and ESXi might lead to issues with per-VM EVC
If you upgrade to ESXi 6.7 Patch Release ESXi670-201808001 but do not upgrade all ESXi hosts, a newly configured or reconfigured per-VM EVC might incorrectly power on a virtual machine on an unpatched host. In such a case, the virtual machine might be missing some features added to a patched host. Due to compatibility checks, such virtual machines cannot be migrated to patched hosts in the same EVC cluster by using vSphere vMotion. If the source and destination hosts are not in the same EVC cluster, they might have different feature sets and migration might be possible.
Workaround: Before you configure or reconfigure per-VM EVC, upgrade all the standalone ESXi hosts, as well as hosts inside a cluster, to ESXi 6.7 Patch Release ESXi670-201808001.
- If you enable per-VM EVC, the virtual machine might fail to power on
If you enable cluster-level EVC and even one of the hosts in the cluster is not patched with ESXi 6.7 Patch Release ESXi670-201808001, the new CPU IDs of that cluster might not be available on the cluster. In such a cluster, if you configure or reconfigure per-VM EVC, virtual machines might fail to power on.
Workaround: Before you configure or reconfigure per-VM EVC, upgrade all the standalone ESXi hosts, as well as hosts inside a cluster, to ESXi 6.7 Patch Release ESXi670-201808001.
- If you disable per-VM EVC, migration of virtual machines by using VMware vSphere vMotion might fail
If you enable cluster-level EVC and even one of the hosts in the cluster is not patched with ESXi 6.7 Patch Release ESXi670-201808001, the new CPU IDs of that cluster might not be available on the cluster. In such a cluster, if you disable per-VM EVC, migration by using vSphere vMotion might fail for virtual machines running on a non-patched host to a patched host.
Workaround: Upgrade all hosts in the EVC cluster to ESXi 6.7 Patch Release ESXi670-201808001. Disable per-VM EVC.
- After upgrade to ESXi 6.7, networking workloads on Intel 10GbE NICs cause higher CPU utilization
If you run certain types of networking workloads on an upgraded ESXi 6.7 host, you might see a higher CPU utilization under the following conditions:
- The NICs on the ESXi host are from the Intel 82599EB or X540 families
- The workload involves multiple VMs that run simultaneously and each VM is configured with multiple vCPUs
- Before the upgrade to ESXi 6.7, the VMKLinux ixgbe driver was used
Workaround: Revert to the legacy VMKLinux ixgbe driver:
- Connect to the ESXi host and run the following command:
# esxcli system module set -e false -m ixgben
- Reboot the host.
Note: The legacy VMKLinux ixgbe inbox driver version 3.7.x does not support Intel X550 NICs. Use the VMKLinux ixgbe async driver version 4.x with Intel X550 NICs.
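If you later need to return to the native driver, for example after moving to Intel X550 NICs, you can reverse the change the same way and reboot:
# esxcli system module set -e true -m ixgben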
- Unable to unstage patches when using an external Platform Services Controller
If you are patching an external Platform Services Controller (an MxN topology) using the VMWare Appliance Management Interface with patches staged to an update repository, and then attempt to unstage the patches, the following error message is reported:
Error in method invocation [Errno 2] No such file or directory: '/storage/core/software-update/stage'
Workaround:
- Access the appliance shell and log in as a user who has a super administrator role.
- Run the software-packages unstage command to unstage the staged patches. All directories and files generated by the staging process are removed.
- Refresh the VMware Appliance Management Interface, which now reports the patches as removed.
- Initial install of the Dell CIM VIB might fail to respond
After you install a third-party CIM VIB, it might fail to respond.
Workaround: To fix this issue, enter the following two commands to restart sfcbd:
esxcli system wbem set --enable false
esxcli system wbem set --enable true
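To confirm that the service is enabled again after the restart, you can query the current WBEM configuration; for example:
esxcli system wbem get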
- If you enable per-VM Enhanced vMotion Compatibility (EVC), the virtual machine might fail to power on
If you enable cluster-level EVC and even one of the hosts in the cluster is not patched with the hypervisor-assisted guest mitigation for guest operating systems, the new CPU IDs of that cluster might not be available on the cluster. In such a cluster, if you configure or reconfigure per-VM EVC, virtual machines might fail to power on.
Workaround: Before you configure or reconfigure per-VM EVC, upgrade all the standalone ESXi hosts, as well as hosts inside a cluster, to the latest patches for hypervisor-assisted guest mitigation for guest operating systems.
- If you disable Enhanced vMotion Compatibility (EVC) mode at the virtual machine level, migration of virtual machines by using VMware vSphere vMotion might fail
If you enable cluster-level EVC and even one of the hosts in the cluster is not patched with the hypervisor-assisted guest mitigation for guest operating systems, the new CPU IDs of that cluster might not be available on the cluster. In such a cluster, if you disable EVC at the virtual machine level, migration by using VMware vSphere vMotion might fail for virtual machines running on a non-patched host to a patched host.
Workaround: Upgrade all hosts in the EVC cluster to the latest patch for hypervisor-assisted guest mitigation for guest operating systems. Disable EVC at the virtual machine level.
- Inconsistent patching of vCenter Server and ESXi might lead to issues with per-VM Enhanced vMotion Compatibility (EVC)
If you patch a vCenter Server system with the hypervisor-assisted guest mitigation for guest operating systems and not all ESXi hosts have the upgrade too, a newly configured or reconfigured per-VM EVC might incorrectly power on a virtual machine on an unpatched host. In such a case, the virtual machine might be missing some features added to a patched host. Due to compatibility checks, such virtual machines cannot be migrated to patched hosts in the same EVC cluster by using vSphere vMotion. If the source and destination hosts are not in the same EVC cluster, they might have different feature sets and migration might be possible.
Workaround: Before you configure or reconfigure per-VM EVC, upgrade all the standalone ESXi hosts, as well as hosts inside a cluster, to the latest patches for hypervisor-assisted guest mitigation for guest operating systems.