ESXi 6.5.0d | 18 APRIL 2017 | ISO Build 5310538
Check for additions and updates to these release notes.
What's in the Release Notes
The release notes cover the following topics:
- What's New
- Earlier Releases of ESXi 6.5
- Internationalization
- Patches Contained in this Release
- Resolved Issues
- Known Issues
What's New
The ESXi 6.5.0d release includes new features and bug fixes related to VMware vSAN 6.6. For more information, see the vSAN 6.6 Release Notes and the vSAN 6.6 documentation.
This release of ESXi 6.5.0d delivers a number of bug fixes that have been documented in the Resolved Issues section.
VMware vSAN health checks are now available in the ESXi Host Client: For a VMware vSAN enabled host, you can now view the VMware vSAN configuration, peer hosts, and health checks under the Monitor section of the VMware vSAN datastore.
Earlier Releases of ESXi 6.5
Features and known issues of ESXi 6.5 are described in the release notes for each release. Release notes for earlier releases of ESXi 6.5 are:
For compatibility, installation and upgrades, product support notices, and features, see the VMware vSphere 6.5 Release Notes.
Internationalization
VMware vSphere 6.5 is available in the following languages:
- English
- French
- German
- Spanish
- Japanese
- Korean
- Simplified Chinese
- Traditional Chinese
Components of VMware vSphere 6.5, including vCenter Server, ESXi, the vSphere Web Client, and the vSphere Client, do not accept non-ASCII input.
Patches Contained in this Release
This release contains all bulletins for ESXi that were released prior to the release date of this product. See the My VMware page for more information about the individual bulletins.
Patch Release ESXi650-201704001 contains the following individual bulletins:
- VMware ESXi 6.5, Patch ESXi650-201704401-BG: Updates esx-base, vsan, and vsanhealth
- VMware ESXi 6.5, Patch ESXi650-201704402-BG: Updates esx-ui
Patch Release ESXi650-201704001 contains the following image profiles:
- VMware ESXi 6.5, Patch ESXi-6.5.0-20170404001-standard
- VMware ESXi 6.5, Patch ESXi-6.5.0-20170404001-no-tools
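If you apply one of these image profiles from the ESXi Shell with esxcli, the command takes the offline bundle as a depot. The following is a minimal sketch; the datastore path and bundle file name are illustrative and assume the patch bundle has already been uploaded to a datastore and the host is in maintenance mode:
esxcli software profile update -d /vmfs/volumes/datastore1/ESXi650-201704001.zip -p ESXi-6.5.0-20170404001-standard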
Resolved Issues
This section describes the resolved issue:
You cannot use the VMware Host Client in Chrome 57
When you try to log in to the VMware Host Client by using Chrome 57, the VMware Host Client immediately reports an error. The reported error is an Angular Digest in progress error. This issue is resolved in this release.
Known Issues
Known issues that have not been resolved in this release are grouped as follows:
- Upgrade Issues
- Security Features Issues
- Networking Issues
- Storage Issues
- Backup and Restore Issues
- Server Configuration Issues
- vCenter Server Appliance, vCenter Server, vSphere Web Client, vSphere Client, and VMware Host Client Issues
- Virtual Machine Management Issues
- Migration Issues
- VMware HA and Fault Tolerance Issues
- Auto Deploy and Image Builder Issues
- Miscellaneous Issues
Pre-upgrade checks display error that the eth0 interface is missing when upgrading to vCenter Server Appliance 6.5
Pre-upgrade checks display an error that the eth0 interface is missing and is required to complete the vCenter Server Appliance upgrade. In addition, a warning might be displayed that if multiple network adapters are detected, only eth0 is preserved.
Workaround: See KB http://kb.vmware.com/kb/2147933.
vCenter Server upgrade fails when Distributed Virtual Switch and Distributed Virtual Portgroups have same High/non-ASCII name in a Windows environment
In a Windows environment, if the Distributed Virtual Switches and the Distributed Virtual Portgroups use duplicate High/non-ASCII characters as names, the vCenter Server upgrade fails with the error:
Failed to launch UpgradeRunner. Please check the vminst.log and vcsUpgrade\UpgradeRunner.log files in the temp directory for more details.
Workaround: Rename the Distributed Virtual Switches or the Distributed Virtual Portgroups that have non-unique names so that all names are unique.
Attempts to upgrade a vCenter Server Appliance or Platform Services Controller appliance might fail with an error message about DNS configuration setting if the source appliance is set with static IPv4 and static IPv6 configuration
Upgrading an appliance that is configured with both static IPv4 and static IPv6 addresses might fail with the error message: Error setting DNS configuration. Details : Operation Failed. Code: com.vmware.applmgmt.err_operation_failed. The log file /var/log/vmware/applmgmt/vami.log of the newly deployed appliance contains the following entries:
INFO:vmware.appliance.networking.utils:Running command: ['/usr/bin/netmgr', 'dns_servers', '--set', '--mode', 'static', '--servers', 'IPv6_address,IPv4_address']
INFO:vmware.appliance.networking.utils:output:
error:
returncode: 17
ERROR:vmware.appliance.networking.impl:['/usr/bin/netmgr', 'dns_servers', '--set', '--mode', 'static', '--servers', 'IPv6_address,IPv4_address'] error, rc=17
Workaround:
Delete the newly deployed appliance and restore the source appliance.
On the source appliance, disable either the IPv6 or the IPv4 configuration.
From the DNS server, delete the entry for the IPv6 or IPv4 address that you disabled.
Retry the upgrade.
(Optional) After the upgrade finishes, add back the DNS entry and, on the upgraded appliance, set the IPv6 or IPv4 address that you disabled.
Attempts to upgrade a vCenter Server Appliance or Platform Services Controller appliance with an expired root password fail with a generic message that cites an internal error
During the appliance upgrade, the installer connects to the source appliance to detect its deployment type. If the root password of the source appliance has expired, the installer fails to connect to the source appliance, and the upgrade fails with the error message: Internal error occurs during pre-upgrade checks.
Workaround:
Log in to the Direct Console User Interface of the appliance.
Set a new root password.
Retry the upgrade.
The upgrade of a vCenter Server Appliance might fail because a dependency shared library path is missing
The upgrade of a vCenter Server Appliance might fail before the export phase, and the error log shows: /opt/vmware/share/vami/vami_get_network: error while loading shared libraries: libvami-common.so: cannot open shared object file: No such file or directory. This problem occurs because a shared library dependency path is missing.
Workaround:
Log in to the appliance Bash shell of the vCenter Server Appliance that you want to upgrade.
Run the following commands.
echo "LD_LIBRARY_PATH=${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}/opt/vmware/lib/vami/" >> /etc/profile
echo 'export LD_LIBRARY_PATH' >> /etc/profile
Log out of the appliance shell.
Retry the upgrade.
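To confirm that the library now resolves, you can run a quick check from the appliance shell. This is a supplementary check, not part of the documented workaround, and it assumes the binary path shown in the error message above:
source /etc/profile
ldd /opt/vmware/share/vami/vami_get_network | grep libvami-common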
Upgrade from vCenter Server 6.0 with an external database fails if vCenter Server 6.0 has content libraries in the inventory
The pre-upgrade check fails when you attempt to upgrade a vCenter Server 6.0 instance with content libraries in the inventory and a Microsoft SQL Server database or an Oracle database. You receive an error message such as Internal error occurs during VMware Content Library Service pre-upgrade checks.
Workaround: None.
Extracting the vCenter Server Appliance ISO image with a third-party extraction tool results in a permission error
When extracting the ISO image in Mac OS X to run the installer by using a third-party tool available from the Internet, you might encounter the following error when you run the CLI installer: OSError: [Errno 13] Permission denied. This problem occurs because, during extraction, some extraction tools change the default permissions set on the vCenter Server Appliance ISO file.
Workaround: Perform the following steps before running the installer:
To open the vCenter Server Appliance ISO file, run the Mac OS X automount command.
Copy all the files to a new directory.
Run the installer from the new directory.
vCenter Server upgrade might fail during VMware Authentication Framework Daemon (VMAFD) firstboot
VMware Authentication Framework Daemon (VMAFD) firstboot might fail with the error message: Vdcpromo failed. Error 382312694: Access denied, reason = rpc_s_auth_method (0x16c9a0f6). During a vCenter Server upgrade, you might encounter a VMAFD firstboot failure if the system you are upgrading has third-party software installed that provides its own version of the OpenSSL libraries and modifies the system's PATH environment variable.
Workaround: Remove the third-party directories that contain the OpenSSL libraries from %PATH%, or move them to the end of %PATH%.
VMware vSphere vApp (vApp) and a resource pool are not available as target options for upgrading a vCenter Server Appliance or Platform Services Controller appliance
When upgrading an appliance by using the vCenter Server Appliance installer graphical user interface (GUI) or the command line interface (CLI), you cannot select a vApp or a resource pool as the upgrade target. The vCenter Server Appliance installer interfaces do not support selecting a vApp or a resource pool as the upgrade target.
Workaround: Complete the upgrade on the selected ESXi host or vCenter Server instance. When the upgrade finishes, move the newly deployed virtual machine manually as follows:
If you upgraded the appliance on an ESXi host that is part of a vCenter Server inventory or on a vCenter Server instance, log in to the vSphere Web Client of the vCenter Server instance and move the newly deployed virtual machine to the required vApp or resource pool.
If you upgraded the appliance on a standalone ESXi host, first add the host to a vCenter Server inventory, then log in to the vSphere Web Client of the vCenter Server instance and move the newly deployed virtual machine to the required vApp or resource pool.
Upgrading to vCenter Server 6.5 may fail at vmon-api firstboot phase because of an invalid IPv6 address in the SAN field of the SSL certificate
The vCenter Server SSL certificate takes an IPv6 address in the SAN field when you install vCenter Server and enable both IPv4 and IPv6. If you disable IPv6 after the installation and then attempt to upgrade vCenter Server to version 6.5, the upgrade fails at the vmon-api firstboot phase.
Workaround: Verify that the SAN field of the source vCenter Server SSL certificate contains the valid IP address of the source vCenter Server instance.
Upgrading to vCenter Server 6.5 fails due to duplicate names for entities in the network folder
vSphere 6.5 allows only unique names across all Distributed Virtual Switches and Distributed Virtual Portgroups in the network folder. Earlier versions of vSphere allowed a Distributed Virtual Switch and a Distributed Virtual Portgroup to have the same name. If you attempt to upgrade from a version that allowed duplicate names, the upgrade fails.
Workaround: Rename any Distributed Virtual Switches or Distributed Virtual Portgroups that have the same names before you start the upgrade.
Syslog collector may stop working after ESXi upgrade
Syslog collectors that use SSL to communicate with the ESXi syslog daemon may stop receiving log messages from the ESXi host after an upgrade.
Workaround: Reconfigure the ESXi syslog daemon by running the following commands on the upgraded ESXi host:
esxcli system syslog config set --check-ssl-certs=true
esxcli system syslog reload
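To verify the resulting syslog configuration, you can additionally run the following command; this check is supplementary and not part of the documented workaround:
esxcli system syslog config get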
The Ctrl + C command does not exit each End User License Agreement (EULA) page during the staging or installation of patches to the vCenter Server Appliance
When you stage or install the vCenter Server Appliance update packages with the relevant command without adding the optional --acceptEulas parameter, the EULA pages appear in the command prompt. Pressing Ctrl+C should let you exit without accepting the agreement, but instead it keeps you in the EULA pages.
Workaround: Exit the EULA by typing NO at the end of the last page.
When validation ends with failure on staged patches in a vCenter Server Appliance, the staged update packages are deleted
To update your vCenter Server Appliance, you must first stage the available update patches before you install them on the appliance. If the validation of these staged packages fails, they are unstaged and deleted. An attempt to validate the packages after an initial failure generates an error.
Workaround: Repeat the staging operation.
Issues after running the TLS Reconfigurator if smart card authentication is enabled
vSphere 6.5 includes a TLS Reconfigurator tool that can be used to manage TLS configuration. Users install the tool explicitly. The tool is documented in VMware KB article 2147469.
If you run the tool in a vSphere 6.5 environment where smart card authentication is enabled on the Platform Services Controller, services fail to start and an exception results. The error occurs when you run the tool to change the TLS configuration on the Platform Services Controller, for the Content Manager services (Windows Platform Services Controller) or the vmware-stsd service (Platform Services Controller appliance).
Workaround:
Open the server.xml file for editing. The file is located in the following directory:
Windows: C:\ProgramData\VMware\vCenterServer\runtime\VMwareSTSService\conf
Linux: /usr/lib/vmware-sso/vmware-sts/conf
From the two entries for the Server tag, remove the first entry.
Restart all the services.
In the following example, you remove the first Server entry but not the second Server entry.
<!--Remove the first Server entry-->
<Server port="${base.shutdown.port}" shutdown="SHUTDOWN">
<Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener"/>
<Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener"/>
...
</Server>
<!--Keep the second Server entry-->
<Server port="${base.shutdown.port}" shutdown="SHUTDOWN">
<Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
<Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
...
</Server>
Error adding ESXi host in environments with multiple vCenter Server and Platform Services Controller behind a load balancer
You set up an environment with multiple vCenter Server instances and multiple Platform Services Controller instances behind a load balancer. The environment uses VMCA as an intermediate CA. When you attempt to add an ESXi host, the following error might result:
Unable to get signed certificate for host name : Error: Failed to connect to the remote host, reason = rpc_s_too_many_rem_connects
When you attempt to retrieve the root CA on the Platform Services Controller, the command fails, as follows.
/usr/lib/vmware-vmca/bin/certool --getrootca --server=wx-sxxx-sxxx.x.x.x
Status : Failed
Error Code : 382312518
Error Message : Failed to connect to the remote host, reason = rpc_s_too_many_rem_connects (0x16c9a046)
Workaround: Restart the VMCA service on the Platform Services Controller.
vCenter Server services do not start after reboot or failover if Platform Services Controller node is unavailable
If a Platform Services Controller node is temporarily unavailable, and if vCenter Server is rebooted or if a vCenter HA failover occurs during that time, vCenter services fail to start.
Workaround: Restore the Platform Services Controller node and reboot vCenter Server again, or start all services from the command line using the following command:
service-control --start --all
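To confirm that all services started, you can additionally check their status from the same command line; this is a supplementary check, not part of the documented workaround:
service-control --status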
STS daemon does not start on vCenter Server Appliance
After you install a vCenter Server Appliance, or after you upgrade or migrate to vCenter Server Appliance 6.5, the Secure Token Service daemon sometimes does not start. This issue is rare, but it has been observed.
Workaround: None. This issue has been observed when DNS did not resolve localhost to the loopback address in an IPv6 environment. Reviewing your network configuration might help resolve this issue.
Error 400 during attempt to log in to vCenter Server from the vSphere Web Client
You log in to vCenter Server from the vSphere Web Client and log out. If, after 8 hours or more, you attempt to log in from the same browser tab, the following error results.
400 An Error occurred from SSO. urn:oasis:names:tc:SAML:2.0:status:Requester, sub status:null
Workaround: Close the browser or the browser tab and log in again.
An encrypted VM goes into locked (invalid) state when vCenter Server is restored to a previously backed up state
In most cases, an encrypted virtual machine is added to vCenter Server using the vSphere Web Client or the vSphere API.
However, the virtual machine can be added in other ways, for example, when the encrypted virtual machine is added by a backup operation using file-based backup and restore.
In such a case, vCenter Server does not push the encryption keys to the ESXi host. As a result, the virtual machine is locked (invalid).
This is a security feature that prevents an unauthorized encrypted virtual machine from accessing vCenter Server.
Workaround: You have several options.
Unregister the virtual machine and register it back. You can perform this operation from the vSphere Web Client or the vSphere API.
Remove the ESXi host that contains the virtual machine from vCenter Server and add it back.
Error while communicating with the remote host during virtual machine encryption task
You perform a virtual machine encryption operation such as encrypting a virtual machine or creating a new encrypted virtual machine. Your cluster includes a disconnected ESXi host. The following error results.
An error occurred while communicating with the remote host
Workaround: Remove the disconnected host from the cluster's inventory.
Automatic failover does not happen if Platform Services Controller services become unavailable
If you run Platform Services Controller behind a load balancer, failover happens if the Platform Services Controller node becomes unavailable. If Platform Services Controller services that run behind the reverse proxy port 443 fail, automatic failover does not happen. These services include Security Token Service, License service, and so on.
Workaround: Take the Platform Services Controller with the failed service offline to trigger failover.
vCenter Server system cannot connect to a KMS using the IPv6 address
vCenter Server can connect to a Key Management Server (KMS) only if the KMS has an IPv4 address or a host name that resolves to an IPv4 address. If the KMS has an IPv6 address, the following error occurs when you add the KMS to the vCenter Server system.
Cannot establish trust connection
Workaround: Configure an IPv4 address for the KMS.
Secure boot fails after upgrade from older versions of ESXi
You cannot boot an ESXi 6.5 host that has secure boot enabled in the following situations:
The host upgrade was performed using the ESXCLI command. The command does not upgrade the bootloader and it does not persist signatures. When you enable secure boot after the upgrade, an error occurs.
The host upgrade was performed using the ISO, but old VIBs are retained after the upgrade. In this case, the secure boot process cannot verify the signatures for the old VIB, and fails. The ISO must contain new versions of all VIBs that are installed on the ESXi host before the upgrade.
Workaround: Secure boot cannot be enabled under these conditions. Reinstall ESXi on the host to enable secure boot.
After an upgrade, SSLv3 is enabled on port 7444 if it was enabled before the upgrade
In a clean installation, SSLv3 is not enabled on vCenter Server or Platform Services Controller systems. However, after an upgrade, the protocol is enabled on port 7444 (the Secure Token Server port) by default. Having SSLv3 enabled might make your system vulnerable to certain attacks.
Workaround: For an embedded deployment, disable SSLv3 in the server.xml file of the Secure Token Server. For a deployment with an external Platform Services Controller, determine whether any legacy vCenter Server systems are connected, upgrade those vCenter Server systems, and then disable SSLv3:
Open the server.xml file (C:\ProgramData\VMware\vCenterServer\runtime\VMwareSTSService\conf on a Windows system and /usr/lib/vmware-sso/vmware-sts/conf/ on a Linux system).
Find the connector that has SSLEnabled=True, and remove SSLv3 from the sslEnabledProtocols attribute so that the attribute reads as follows:
sslEnabledProtocols="TLSv1,TLSv1.1,TLSv1.2"
Save the file and restart all the services.
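For reference, a connector entry edited this way might look similar to the following. This is an illustrative sketch only; the port and the other attributes vary by deployment, and only the sslEnabledProtocols value is the part prescribed by the workaround:
<Connector port="7444" SSLEnabled="true" sslEnabledProtocols="TLSv1,TLSv1.1,TLSv1.2" ... />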
Error when cloning an encrypted virtual machine with one or more unencrypted hard disks
When you clone an encrypted virtual machine, the following error results if one or more of the hard disks of the virtual machine is not encrypted.
The operation is not supported on the object.
The clone process fails.
Workaround: In the Select Storage screen of the Clone Virtual Machine wizard, select Advanced. Even if you do not make changes to the advanced settings, the clone process succeeds.
Connection fails when a vNIC is connected to a vSphere Standard Switch or a vSphere Distributed Switch with Network I/O Control disabled
If a vNIC is configured with a reservation greater than 0 and you connect it to a vSphere Standard Switch or to a vSphere Distributed Switch with Network I/O Control disabled, the connection fails with the following error message: A specified parameter was not correct: spec.deviceChange.device.backing.
Workaround: None.
Network becomes unavailable with full passthrough devices
If the native ntg3 driver is used with a passthrough Broadcom Gigabit Ethernet adapter, the network connection becomes unavailable.
Workaround:
Run the ntg3 driver in legacy mode:
Run the esxcli system module parameters set -m ntg3 -p intrMode=0 command.
Reboot the host.
- Use the tg3 vmklinux driver as the default driver, instead of the native ntg3 driver.
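A minimal sketch of switching to the tg3 vmklinux driver, assuming the tg3 driver is available on the host: disabling the native module lets the vmklinux driver claim the device after a reboot.
esxcli system module set --enabled=false --module=ntg3
Reboot the host.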
- A user-space RDMA application of a virtual machine fails to send or receive data
If a guest user-space RDMA application uses Unreliable Datagram queue pairs and an IP-based group ID to communicate with other virtual machines, the RDMA application fails to send or receive any data or work completion entries. Any work requests that are enqueued on the queue pairs are removed and not completed.
Workaround: None.
Packets are lost with IBM system servers that have a USB NIC device under Universal Host Controller Interface
Some IBM system servers, such as IBM BladeCenter HS22, with a USB NIC device under Universal Host Controller Interface experience networking issues if they are running the USB native driver vmkusb. Packets get lost on the USB NIC device that is under UHCI when you use the vmkusb driver for the USB NIC.
Workaround: Disable the USB native vmkusb driver and switch to the legacy vmklinux USB driver:
Run the esxcli system module set -m=vmkusb -e=FALSE command to disable the USB native driver vmkusb.
Reboot the host. At reboot, the legacy USB driver loads.
- Guest kernel applications receive unexpected completion entries for non-signaled fast-register work requests
RDMA communication between two virtual machines that reside on a host with an active RDMA uplink sometimes triggers spurious completion entries in the guest kernel applications. Completion entries are wrongly triggered by non-signaled fast-register work requests that are issued by the kernel-level RDMA Upper Layer Protocol (ULP) of a guest. This can cause completion queue overflows in the kernel ULP.
Workaround: To avoid triggering extraneous completions, place the virtual machines that will use fast-register work requests on separate hosts.
A virtual machine with a paravirtual RDMA device becomes inaccessible during a snapshot operation or a vMotion migration
Virtual machines with a paravirtual RDMA (PVRDMA) device run RDMA applications to communicate with peer queue pairs. If an RDMA application attempts to communicate with a non-existent peer queue pair number, the PVRDMA device might wait for a response from the peer indefinitely. As a result, the virtual machine becomes inaccessible during a snapshot operation or migration if the RDMA application is still running at the same time.
Workaround: Before you take a snapshot or perform a vMotion migration of a virtual machine with a PVRDMA device, shut down the RDMA applications that are using a non-existent peer queue pair number.
Netdump transfer of ESXi core takes a couple of hours to complete
With hosts that use the Intel X710 or Intel X710L NICs, the transfer of the ESXi core to a Netdump server takes a couple of hours. The transfer completes successfully, but it is much faster with other NICs.
Workaround: None.
Kernel-level RDMA Upper Layer Protocols do not work correctly in a guest virtual machine with a paravirtual RDMA device
Kernel-level RDMA Upper Layer Protocols, such as NFS or iSER, try to create more resources than the paravirtual RDMA (PVRDMA) device can provide. As a result, the kernel modules fail to load. However, the RDMA Connection Manager (RDMACM) continues working.
Workaround: None.
The Down/Up command brings a NIC up after more than 60 seconds
When you set up a Virtual Extensible LAN (VXLAN) and start transferring VXLAN traffic over a NIC that uses the nmlx4_en driver, the esxcli network nic up command might fail to bring the NIC up within 60 seconds. The NIC comes up after a delay. The slower execution of the command occurs when you run the Down/Up and Unlink/Link commands several times in a row.
Workaround: None.
40-Gigabit nmlx4_en NIC does not support Wake-On-LAN (WOL)
WOL is supported only on 10-Gigabit HP flexible LOM cards (HP-branded Mellanox cards). 40-Gigabit cards are not supported by HP.
Workaround: Use 10-Gigabit HP flexible LOM cards if you want to use WOL on nmlx4_en cards.
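To check whether a given adapter reports WOL support before relying on it, you can inspect the NIC properties; a quick sketch, with vmnic0 as a placeholder name:
esxcli network nic get -n vmnic0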
NFS shares fail to mount when using Kerberos credentials to authenticate with Active Directory
If you join an ESXi system to Active Directory and upgrade to the current release, ESXi fails to mount NFS shares correctly using the Kerberos keytab if the Active Directory instance has stopped supporting RC4 encryption. This happens because the keytab is written only when ESXi joins the domain, and the Likewise stack used on ESXi in earlier releases does not support AES encryption.
Workaround: Re-join Active Directory so that the system keytab is updated.
Intel 82579LM or I217 vmnic might encounter an unrecoverable hang
A problem is triggered on an Intel 82579LM or I217 vmnic under heavy traffic, such as 4 pairs of virtual machines running netperf, combined with repeatedly disabling and re-enabling VMkernel software emulation of the hardware offload capability. After some cycles of disabling and re-enabling, the hardware enters a hang state.
Workaround:
Avoid disabling and re-enabling VMkernel software emulation of hardware offload capabilities on Intel 82579LM or I217 adapters.
If you encounter this hang, reboot the host.
- Intel i219 NIC might hang and lose networking connectivity
The Intel i219 family NICs, displayed as "Intel Corporation Ethernet Connection I219-LM" (sometimes with a "V" suffix), might experience a hardware hang state that can cause the system to lose networking connectivity on the port. This issue is triggered by disruptive operations while traffic is going through the NIC. For example, you might encounter this error if you bring down a link with the command esxcli network nic down -n vmnicX while copying a file over the i219 port. This problem might also be triggered by heavy traffic, such as 4 pairs of virtual machines running netperf combined with repeatedly disabling and re-enabling VMkernel software emulation of the hardware offload capability. The NIC is not functional until you reboot the host.
Workaround:
Avoid using i219 NIC port in critical jobs.
Avoid doing disruptive operations on i219 ports.
Avoid disabling and re-enabling VMkernel software emulation of hardware offload capabilities on the Intel i219 adapter.
When i219 must be used, set up fail-over NIC teaming with a non-i219LM port.
- Full duplex configured on a physical switch may cause a duplex mismatch issue with the igb native driver, which supports only auto-negotiate mode for NIC speed/duplex settings
If you are using the igb native driver on an ESXi host, it always works in auto-negotiate speed and duplex mode. No matter what configuration you set up on this end of the connection, it is not applied on the ESXi side. The auto-negotiate support causes a duplex mismatch issue if the physical switch is set manually to full-duplex mode.
Workaround: Perform either of the following workarounds:
On the physical switch port, set the speed and duplex mode to auto before an upgrade or fresh installation.
On the ESXi host, use the vmklinux igb driver instead of the igb native driver. You can switch to the igb driver by running the following commands:
esxcli system module set --enabled=false --module=igbn
esxcli system module set --enabled=true --module=igb
- The bnx2x driver causes a purple screen error message to appear during NIC failover or failback
When you disable or enable VMkernel ports and change the failover order of NICs, the bnx2x driver causes a purple screen error message to appear.
Workaround: Use an asynchronous driver.
VMFS Issues
After failed attempts to grow a VMFS datastore, VIM API information and LVM information on the system are inconsistent
This problem occurs when you attempt to grow the datastore while the backing SCSI device enters the APD or PDL state. As a result, you might observe inconsistent information in VIM APIs and LVM commands on the host.
Workaround: Perform these steps:
Run the vmkfstools --growfs command on one of the hosts connected to the volume.
Perform the rescan-vmfs operation on all hosts connected to the volume.
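A minimal sketch of the grow command, with an illustrative device name and partition number; when growing an existing expanded extent, the same head and grown partition are passed twice:
vmkfstools --growfs /vmfs/devices/disks/naa.XXX:1 /vmfs/devices/disks/naa.XXX:1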
VMFS6 datastore does not support combining 512n and 512e devices in the same datastore
You can expand a VMFS6 datastore only with devices of the same type. If the VMFS6 datastore is backed by a 512n device, expand the datastore with 512n devices. If the datastore is created on a 512e device, expand the datastore with 512e devices.
Workaround: None.
ESXi does not support the automatic space reclamation on arrays with unmap granularity greater than 1 MB
If the unmap granularity of the backing storage is greater than 1 MB, the unmap requests from the ESXi host are not processed. You can see the Unmap not supported message in the vmkernel.log file.
Workaround: None.
Using storage rescan in environments with a large number of LUNs might cause unpredictable problems
Storage rescan is an I/O intensive operation. If you run it while performing other datastore management operations, such as creating or extending a datastore, you might experience delays and other problems. Problems are likely to occur in environments with a large number of LUNs, up to the 1024 LUNs that are supported in the vSphere 6.5 release.
Workaround: Typically, the storage rescans that your hosts periodically perform are sufficient. You are not required to rescan storage when you perform general datastore management tasks. Run storage rescans only when absolutely necessary, especially when your deployments include a large set of LUNs.
NFS Issues
The NFS 4.1 client loses synchronization with the NFS server when attempting to create new sessions
This problem occurs after a period of interrupted connectivity with the NFS server or when NFS I/Os do not get a response. When this issue occurs, the vmwarning.log file contains a throttled series of warning messages similar to the following:
NFS41 CREATE_SESSION request failed with NFS4ERR_SEQ_MISORDERED
Workaround: Perform the following steps:
Unmount the affected NFS 4.1 datastores. If no files are open when you unmount, this operation succeeds and the NFS 4.1 client module cleans up its internal state. You can then remount the datastores that were unmounted and resume normal operation.
If unmounting the datastore does not solve the problem, disable the NICs connecting to the IP addresses of the NFS shares. Keep the NICs disabled for as long as it is required for the server lease times to expire, and then bring the NICs back up. Normal operations should resume.
If the preceding steps fail, reboot the ESXi host.
After an ESXi reboot, NFS 4.1 datastores exported by EMC VNX storage fail to mount
Due to a potential problem with EMC VNX, NFS 4.1 remount requests might fail after an ESXi host reboot. As a result, any existing NFS 4.1 datastores exported by this storage appear as unmounted.
Workaround: Wait for the lease time of 90 seconds to expire and manually remount the volume.
Mounting the same NFS datastore with different labels might trigger failures when you attempt to mount another datastore later
The problem occurs when you use the esxcli command to mount the same NFS datastore on different ESXi hosts. If you use different labels, for example A and B, vCenter Server renames B to A so that the datastore has consistent labels across the hosts. If you later attempt to mount a new datastore and use the B label, the attempt fails. This problem occurs only when you mount the NFS datastore with the esxcli command. It does not affect mounting through the vSphere Web Client.
Workaround: When mounting the same NFS datastore with the esxcli commands, make sure to use consistent labels across the hosts.
An NFS 4.1 datastore exported from a VNX server might become inaccessible
When the VNX server disconnects from the ESXi host, the NFS 4.1 datastore might become inaccessible. This issue occurs if the VNX server unexpectedly changes its major number. However, the NFS 4.1 client does not expect the server major number to change after establishing connectivity with the server.
Workaround: Remove all datastores exported by the server and then remount them.
Virtual Volumes Issues
After upgrade from vSphere 6.0 to vSphere 6.5, the Virtual Volumes storage policy might disappear from the VM Storage Policies list
After you upgrade your environment to vSphere 6.5, the Virtual Volumes storage policy that you created in vSphere 6.0 might no longer be visible in the list of VM storage policies.
Workaround: Log out of the vSphere Web Client, and then log in again.
The vSphere Web Client fails to display information about the default profile of a Virtual Volumes datastore
Typically, you can check information about the default profile associated with the Virtual Volumes datastore. In the vSphere Web Client, you do it by browsing to the datastore, and then clicking Configure > Settings > Default Profiles.
However, the vSphere Web Client is unable to report the default profiles when their IDs, configured at the storage side, are not unique across all the datastores reported by the same Virtual Volumes provider.
Workaround: None.
iSCSI Issues
In vSphere 6.5, the name assigned to the iSCSI software adapter is different from the earlier releases
After you upgrade to the vSphere 6.5 release, the name of the existing software iSCSI adapter, vmhbaXX, changes. This change affects any scripts that use hard-coded values for the name of the adapter. Because VMware does not guarantee that the adapter name remains the same across releases, you should not hard-code the name in scripts. The name change does not affect the behavior of the iSCSI software adapter.
Workaround: None.
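Rather than hard coding vmhbaXX, scripts can discover the current software iSCSI adapter name at run time. A minimal sketch; the grep filter assumes the software adapter is backed by the iscsi_vmk driver:
esxcli iscsi adapter list | grep iscsi_vmk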
Storage Host Profiles Issues
Attempts to set the action_OnRetryErrors parameter through host profiles fail
This problem occurs when you edit a host profile to add the SATP claim rule that activates the action_OnRetryErrors setting for NMP devices claimed by VMW_SATP_ALUA. The setting controls the ability of an ESXi host to mark a problematic path as dead and trigger a path failover. When added through the host profile, the setting is ignored.
Workaround: You can use two alternative methods to set the parameter on a reference host.
Use the following esxcli commands to disable or enable the action_OnRetryErrors parameter:
esxcli storage nmp satp generic deviceconfig set -c disable_action_OnRetryErrors -d naa.XXX
esxcli storage nmp satp generic deviceconfig set -c enable_action_OnRetryErrors -d naa.XXX
Perform these steps:
Add the VMW_SATP_ALUA claimrule to the SATP rule:
esxcli storage nmp satp rule add --satp=VMW_SATP_ALUA --option=enable_action_OnRetryErrors --psp=VMW_PSP_XXX --type=device --device=naa.XXX
Run the following commands to reclaim the device:
esxcli storage core claimrule load
esxcli storage core claiming reclaim -d naa.XXX
VM Storage Policy Issues
Hot migrating a virtual machine with vMotion across vCenter Servers might change the compliance status of a VM storage policy
After you use vMotion to perform a hot migration of a virtual machine across vCenter Servers, the VM Storage Policy compliance status changes to UNKNOWN.
Workaround: Check compliance on the migrated virtual machine to refresh the compliance status.
In the vSphere Web Client, browse to the virtual machine.
From the right-click menu, select VM Policies > Check VM Storage Policy Compliance.
The system verifies the compliance.
Storage Driver Issues
The bnx2x inbox driver that supports the QLogic NetXtreme II Network/iSCSI/FCoE adapter might cause problems in your ESXi environment
Problems and errors occur when you disable or enable VMkernel ports and change the failover order of NICs for your iSCSI network setup.
Workaround: Replace the bnx2x driver with an asynchronous driver. For information, see the VMware Web site.
The ESXi host might experience problems when you use Seagate SATA storage drives
If you use an HBA adapter that is claimed by the lsi_msgpt3 driver, the host might experience problems when connecting to the Seagate SATA devices. The vmkernel.log file displays errors similar to the following:
SCSI cmd RESERVE failed on path XXX
and
reservation state on device XXX is unknown
Workaround: Replace the Seagate SATA drive with another drive.
When you use the Dell lsi_mr3 driver version 6.903.85.00-1OEM.600.0.0.2768847, you might encounter errors
If you use the Dell lsi_mr3 asynchronous driver version 6.903.85.00-1OEM.600.0.0.2768847, the VMkernel logs might display the following message: ScsiCore: 1806: Invalid sense buffer.
Workaround: Replace the driver with the vSphere 6.5 inbox driver or an asynchronous driver from Broadcom.
Boot from SAN Issues
Installing ESXi 6.5 on a Fibre Channel or iSCSI LUN with LUN ID greater than 255 is not supported
vSphere 6.5 supports LUN IDs from 0 to 16383. However, due to adapter BIOS limitations, you cannot use LUNs with IDs greater than 255 for a boot from SAN installation.
Workaround: For the ESXi installation, use LUNs with IDs of 255 or lower.
Miscellaneous Storage Issues
If you use a SESparse VMDK, formatting it with a Windows or Linux file system takes longer
When you format a virtual disk with a Windows or Linux file system, the process might take longer than usual. This occurs if the virtual disk is SESparse.
Workaround: Before formatting, disable the UNMAP operation in the guest operating system. You can re-enable the operation after the formatting process completes.
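How you disable UNMAP depends on the guest operating system. For example, on a Windows guest you can typically turn TRIM/UNMAP off before formatting and re-enable it afterward with the standard fsutil commands shown below; on Linux guests, avoiding the discard mount option has a similar effect. These are guest OS commands, not VMware tools:
fsutil behavior set DisableDeleteNotify 1
fsutil behavior set DisableDeleteNotify 0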
Attempts to use the VMW_SATP_LOCAL plug-in for shared remote SAS devices might trigger problems and failures
In releases earlier than ESXi 6.5, SAS devices are marked as remote despite being claimed by the VMW_SATP_LOCAL plug-in. In ESXi 6.5, all devices claimed by VMW_SATP_LOCAL are marked as local even when they are external. As a result, when you upgrade to ESXi 6.5 from earlier releases, any of your existing SAS devices that were previously marked as remote change their status to local. This change affects shared datastores deployed on these devices and might cause problems and unpredictable behavior.
In addition, problems occur if you use the devices that are now incorrectly marked as local, but are in fact shared and external, for certain features, for example, when you allow creation of the VFAT file system or use the devices for Virtual SAN.
Workaround: Do not use the VMW_SATP_LOCAL plug-in for remote external SAS devices. Make sure to use another applicable SATP from the supported list or a vendor-specific SATP.
Logging out of the vSphere Web Client while uploading a file to a datastore cancels the upload and leaves an incomplete file
Uploading large files to a datastore takes some time. If you log out while uploading a file, the upload is cancelled without warning. The partially uploaded file might remain on the datastore.
Workaround: Do not log out during file uploads. If the datastore contains an incomplete file, manually delete the file from the datastore.
Storage I/O Control Issues
You cannot change VM I/O filter configuration during cloning
Changing a virtual machine's policies during cloning is not supported by Storage I/O Control.
Workaround: Perform the clone operation without any policy change. You can update the policy after completing the clone operation.
Storage I/O Control settings are not honored per VMDK
Storage I/O Control settings are not honored on a per-VMDK basis. The settings are honored at the virtual machine level.
Workaround: None.
Storage DRS Issues
- Storage DRS does not honor Pod-level VMDK affinity if the VMDKs on a virtual machine have a storage policy attached to them
If you set a storage policy on the VMDK of a virtual machine that is part of a datastore cluster with Storage DRS enabled, Storage DRS does not honor the Keep VMDKs together flag for that virtual machine. It might recommend different datastores for newly added or existing VMDKs.
Workaround: None. This behavior is observed when you set any kind of policy, such as VMCrypt or tag-based policies.
You cannot disable Storage DRS when deploying a VM from an OVF template
When you deploy an OVF template and select an individual datastore from a Storage DRS cluster for the VM placement, you cannot disable Storage DRS for your VM. Storage DRS remains enabled and might later move this VM to a different datastore.
Workaround: To permanently keep the VM on the selected datastore, manually change the automation level of the VM. Add the VM to the VM overrides list from the storage cluster settings.
After a file-based restore of a vCenter Server Appliance to a vCenter Server instance, operations in the vSphere Web Client such as configuring a high availability cluster or enabling SSH access to the appliance might fail
In the process of restoring a vCenter Server instance, a new vCenter Server Appliance is deployed and the appliance HTTP server is started with a self-signed certificate. The restore process recovers the backed-up certificates but does not restart the appliance HTTP server. As a result, any operation that makes an internal API call to the appliance HTTP server fails.
Workaround: After restoring the vCenter Server Appliance to a vCenter Server instance, log in to the appliance and restart its HTTP server by running the following command:
service vami-lighttp restart
Attempts to restore a Platform Services Controller appliance from a file-based backup fail if you have changed the number of vCPUs or the disk size of the appliance
In vSphere 6.5, the Platform Services Controller appliance is deployed with 2 vCPUs and a 60 GB disk size. Increasing the number of vCPUs and the disk size is unsupported. If you try to perform a file-based restore of a Platform Services Controller appliance with more than 2 vCPUs or more than a 60 GB disk size, the vCenter Server Appliance installer fails with the error: No possible size matches your set of requirements.
Workaround: Decrease the number of processors to no more than 2 vCPUs and the disk size to no more than 60 GB.
Restoring a vCenter Server Appliance with an external Platform Services Controller from an image-based backup does not start all vCenter Server services
After you use vSphere Data Protection to restore a vCenter Server Appliance with an external Platform Services Controller, you must run the vcenter-restore script to complete the restore operation and start the vCenter Server services. The vcenter-restore execution might fail with the error message: Operation Failed. Please make sure the SSO username and password are correct and rerun the script. If problem persists, contact VMware support.
Workaround: After the vcenter-restore execution has failed, run the service-control --start --all command to start all services. If the service-control --start --all execution fails, verify that you entered the correct vCenter Single Sign-On user name and password. You can also contact VMware Support.
Mismatch occurs for the shared clusterwide option during a host profile compliance check
During a host profile compliance check, if a mismatch occurs because of the shared clusterwide option, the device name is displayed for the Host value or for the Host Profile value.
Workaround: Treat the message as a compliance mismatch for the shared clusterwide option.
Host Profile remediation fails if a host profile extracted from one vSAN cluster is attached to a different vSAN cluster
If you extract a host profile from a reference host that is part of one vSAN cluster and attach it to a different vSAN cluster, the remediation (apply) operation fails.
Workaround: Before applying the host profile, edit the profile's vSAN settings. The cluster UUID and datastore name must match the values for the cluster to which the profile is attached.
Host Profile does not capture host Lockdown Mode settings
If you extract a host profile from a stateless ESXi host with Lockdown Mode enabled, the Lockdown Mode settings are not captured. After applying the host profile and rebooting the host, Lockdown Mode is disabled on the host.
Workaround: After applying the host profile and rebooting the host, manually enable Lockdown Mode for the host.
Compliance errors for host profiles with SAS drives after upgrading to vSphere 6.5
Because all drives claimed by SATP_LOCAL are marked as LOCAL, host profiles with SAS drives that have the Device is shared clusterwide option enabled fail the compliance check.
Workaround: Disable the Device is shared clusterwide configuration option for host profiles with SAS drives before remediating.
Host Profile batch remediation fails for hosts with DRS soft affinity rules
A batch remediation performs a remediate operation on a group of hosts or clusters. Host Profiles uses the DRS feature to automatically place hosts into maintenance mode before the remediate operation. However, only hosts in fully automated DRS clusters that have no soft affinity rules can perform this operation. Hosts that have a DRS soft affinity rule fail remediation because this rule stops the hosts from entering maintenance mode, causing remediation to fail.
Workaround:
Determine whether the cluster is fully automated:
In the vSphere Web Client, navigate to the cluster.
Select the Configure tab, then select Settings.
Expand the list of services and select vSphere DRS.
If the DRS Automation field is Fully Automated, the cluster is fully automated.
Check whether the host has a soft affinity rule:
In the cluster, select the Configure tab, then select Settings.
Select VM/Host Rules.
Examine any rule where the type is set to Run VMs on Hosts or Do Not Run VMs on Hosts.
If the VM/Host Rule Details for those rules contain the word "should," the rule is a soft affinity or soft anti-affinity rule.
For hosts with the DRS soft affinity rule, manually move the host into maintenance mode, and then remediate the host.
vCenter Server might fail because of a database unique constraint violation at pk_vpx_vm_virtual_device
Virtual machines with USB devices attached might cause vCenter Server to fail because of a vCenter Server error that occurs when properties of multiple devices, including the USB devices, have changed. vCenter Server cannot be restarted after it has failed.
Workaround: Unregister the problematic virtual machine from the host inventory and restart vCenter Server. You might need to repeat this process if the failure happens again.
Downloaded vSphere system logs are invalid
This issue might occur if the client session expires while the download of system logs is still in progress. The resulting downloaded log bundle becomes invalid. You might observe an error message similar to the following in the vsphere_client_virgo.log file located in /var/log/vmware/vsphere-client/logs/vsphere_client_virgo.log:
com.vmware.vsphere.client.logbundle.DownloadLogController Error downloading logs. org.apache.catalina.connector.ClientAbortException:
java.net.SocketException: Broken pipe (Write failed)
at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:407)
Workaround: None.
JRE patch fails while starting vCenter Server services on a Windows Server 2016 system
You might not be able to start the VMware vSphere Client, and you see error messages similar to the following:
2016-11-04 15:47:26.991+05:30| vcsInstUtil-4600788| I: StartStopVCSServices: Waiting for VC services to start...
2016-11-04 16:09:57.652+05:30| vcsInstUtil-4600788| E: StartStopVCSServices: Unable to start VC services
2016-11-04 16:09:57.652+05:30| vcsInstUtil-4600788| I: Leaving function: VM_StartVcsServices
Workaround: None.
Cannot access HTML-based vSphere Client
When vCenter Server 6.5 is installed on Windows Server 2016 in an external PSC setup with a custom path and a custom port, login to the HTML5-based vSphere Client fails with a 503 Service Unavailable error message.
Workaround: Log in to vCenter Server by using the vSphere Web Client (Flash).
Performing an Advanced Search with Tags might slow down vSphere Web Client in large environments
The vSphere Web Client might be slow in large-scale environments while performing an Advanced Search that includes Tags as part of the search construction. The slowness lasts only while the search runs and stops when the search completes. vCenter services might be running out of memory during the slowness.
Workaround: Wait for the search to complete, or, if you observe OutOfMemory or Garbage Collection errors in the logs, restart the vCenter services by using the commands below.
Log location:
- Windows: %ALLUSERSPROFILE%\VMWare\vCenterServer\logs\vsphere-client\logs\
- vCenter Server Appliance (VCSA): /var/log/vmware/vsphere-client/logs/
Log files: dataservice.log and vsphere_client_virgo.log
Commands for restarting the services:
service-control --stop vmware-vpxd-svcs
service-control --start vmware-vpxd-svcs
vSphere Web Client might become slow in large multi-VC multi-PSC environments
In large environments with a large number of vCenter Server instances and PSCs, the vSphere Update Manager Web Client plug-in might experience a thread leak that degrades the vSphere Web Client experience over time. This degradation was observed specifically in an environment with 10 vCenter Server instances, 4 PSCs, and the maximum number of Web Client user accounts, but it is expected to occur in all large environments over time.
Workaround: Disable the vSphere Update Manager Web Client plug-in by performing the following steps:
In the vSphere Web Client, navigate to vSphere Web Client > Administration > Solutions/Client Plugins.
- Right-click the VMware vSphere Update Manager Web Client plug-in and select Disable.
If you do not intend to use the plug-in, you can leave it disabled, or you can re-enable it only when you need it. If you experience performance degradation, disabling and then re-enabling the plug-in should also temporarily reset the leak.
In the vSphere Client or vSphere Web Client, live refresh of Recent Tasks and object status stops working after the client machine's IP address changes
In the vSphere Client or vSphere Web Client, after performing an action on an object, the action does not appear in Recent Tasks. The inventory trees, lists, and object details also do not reflect the new state. For example, if you power on a VM, the Power On task does not appear in Recent Tasks, and the VM icon does not show a powered-on badge in the inventory trees, lists, and object details. The Web browser and vCenter Server are connected using WebSockets, and when the IP address of the browser machine changes, this connection is broken.
Workaround: In the Web browser, refresh the page.
The vCenter Server Appliance Management Interface is inaccessible through Internet Explorer
The vCenter Server Appliance Management Interface is inaccessible through Internet Explorer.
Workaround: Enable TLS 1.0, TLS 1.1, and TLS 1.2 in the security settings of Internet Explorer:
In Internet Explorer, select Tools > Internet Options.
Click the Advanced tab and scroll to the Security settings section.
Select Use TLS 1.0, Use TLS 1.1, and Use TLS 1.2.
- Attempts to import and export Content Library items fail in vCenter Server deployments with external PSC
When you are logged in to the vSphere Web Client by using an IP address in a vCenter Server deployment with an external PSC, importing and exporting Content Library items fails. The following error message appears: Reason: Unable to update files in the library item. The source or destination may be slow or not responding.
Workaround: Log in to the vSphere Web Client by using a fully qualified domain name, not an IP address, in a vCenter Server deployment with an external PSC.
If you are logged in to the vSphere Web Client by using an IP address, accept the certificate that is issued when logging in with a fully qualified domain name. Open the fully qualified domain name target to accept the certificate. Accepting the certificate permanently should be sufficient.
The UI does not display hot-plugged devices in the PCIe passthrough device list
Hot-plugged PCIe devices are not available for PCI passthrough.
Workaround: Choose one of the following:
Restart hostd using the command:
/etc/init.d/hostd restart
Reboot the ESXi host.
Permission on tag cannot be viewed from another management node
When you are logged in to one vCenter Server management node in a multiple vCenter Server deployment, if you create a permission on a tag, that permission is not visible when you log in to another management node.
Workaround: While tags are global objects and can be viewed across nodes, permissions on tags are persisted only locally and cannot be viewed across nodes. To view the tag permission, log in to the vCenter Server instance where you created the permission.
Logging into vSphere Web Client in incognito mode causes internal error
Users can log in to the vSphere Web Client when the browser setting for incognito mode is enabled, but receive the following internal error: An internal error has occurred - [NetStatusEvent type="netStatus" bubbles=false cancelable=false eventPhase=2 info=[object Object]]"
Workaround: Disable incognito mode. This mode is not supported in the vSphere Web Client. Reload the vSphere Web Client to clear up any problems from the error and log back in to the client.
Content Library tasks are not displayed in Recent Tasks
If you perform a Content Library task in the vSphere Web Client, such as uploading an item to the content library, syncing a library, or deploying a VM from the content library, the task might not be listed under Recent Tasks.
Workaround: None. You can view all the tasks under the More Tasks list, even if they are not listed under Recent Tasks.
Assign tag operation fails when simultaneously attempting to create and assign a tag in multiple vCenter Server environment
Normally, you can create a tag for an object in the vSphere Web Client from the Tags setting in the object's Configure tab, and the tag is automatically assigned to the selected object. In an environment with multiple vCenter Server instances, the tag is created successfully, but the assign operation fails and you receive an error message.
Workaround: First create the tag in the Tags setting, and then assign it to the object.
Solution tab removed from ESX Agent Manager view
The Solution tab that is available in the ESX Agent Manager view in earlier versions of the vSphere Web Client is no longer available.
Workaround: You can achieve the same goal by performing the following steps:
In the vSphere Web Client Navigator, select Administration > vCenter Server Extensions.
Click vSphere ESX Agent Manager.
Choose one of the following:
Select the Configure tab.
Select the Monitor tab and click Events.
Unable to assign tag to objects when Platform Services Controller node is down
When the vSphere Platform Services Controller node is down, you are unable to assign tags from an object's Manage > Tags tab. The following error displays:
Provider method implementation threw unexpected exception.
Tags cannot be selected to assign.
Workaround: Power on the Platform Services Controller node. Also check that all services on the Platform Services Controller node are running.
The vSphere Client does not render the edit setting dialog box when a host or data center administrator edits a virtual machine
The vSphere Client does not render the edit setting dialog box when a host or data center administrator edits a virtual machine within that host or data center. This happens because the host or data center administrator does not have the profile-driven storage privilege.
Workaround: Create a new role with host or data center administrator privileges and the profile-driven storage privilege. Assign this role to the current host or data center administrator.
When creating or editing a virtual machine using the vSphere Client, adding an eighth hard disk causes the task to fail with an error
Creating or editing a virtual machine using the vSphere Client results in a task failure with the error: A specified parameter was not correct: unitNumber.
This occurs because SCSI ID 0:7 is reserved for special purposes, yet the system assigns SCSI 0:7 to the eighth hard disk. Workaround: Manually assign SCSI 0:8 when adding an eighth hard disk, or use the vSphere Web Client instead.
The vSphere Client supports up to 10,000 virtual machines and 1,000 hosts
The vSphere Client supports only up to 10,000 virtual machines and 1,000 hosts, which is lower than the vCenter Server limits. Workaround: If your environment exceeds the vSphere Client limits, use the vSphere Web Client.
The Hostname text box is greyed out in the vCenter Server Appliance Management Interface if your system is installed without a host name
When you navigate to the Networking > Manage screen in the vCenter Server Appliance Management Interface and edit the Hostname, Name Servers, and Gateways settings, the Hostname text box appears grayed out and cannot be modified. The same field is active and editable only in the vSphere Client. Workaround: Change the host name in the vSphere Client.
After you modify the virtual machine configuration of the vCenter Server Appliance to provide a larger disk space for expanding the root partition, an attempt to claim that additional storage fails
After you resize the disk space for the root partition, the storage.resize command does not extend the disk storage for the root partition but keeps it the same size. This is expected behavior; resizing this partition is not supported. Workaround: None.
The vCenter Server Appliance Management interface allows configuring only an HTTP proxy server
When you navigate to the Networking tab in the vCenter Server Appliance Management Interface and edit the Proxy Settings, you are not presented with an option to change the proxy to HTTPS or FTP. You can specify only the HTTP proxy settings. Workaround: You can configure the HTTPS and FTP proxy servers by using the appliance shell command line.
After a successful vCenter Server Appliance update, running the version.get command from the appliance shell returns an error message
After a successful vCenter Server Appliance update, running the version.get command from the appliance shell returns the error message: Unknown command: 'version.get'. Workaround: Log out, log back in as administrator to start a new appliance shell session, and run the version.get command again.
In Windows Internet Explorer 11 or later, the Use Windows Session Authentication checkbox is inactive on vSphere Web Client login page
The Use Windows Session Authentication checkbox is inactive on the vSphere Web Client login page in Windows Internet Explorer 11 or later browsers. You are also prompted to download and install the VMware Enhanced Authentication Plug-in. Workaround: In the security options of the Windows settings on your system, add the fully qualified domain name and IP address of the vCenter Server to the list of local intranet sites.
Uploading a file using the datastore browser fails when the file is larger than 4GB on Internet Explorer
When you upload a file larger than 4GB using the datastore browser on Internet Explorer, you receive the error: Failed to transfer data to URL.
Internet Explorer does not support files larger than 4GB.
Workaround: Use the Chrome or Firefox browser to upload files from the datastore browser.
OVF parameter chunkSize is not supported in vCenter Server 6.5
Deploying an OVF template in vCenter Server 6.5 fails with the following error: OVF parameter chunkSize with value chunkSize_value is currently not supported for OVF package import.
You receive this error because chunkSize is not a supported OVF parameter in vCenter Server 6.5.
Workaround: Update the OVF template and remove the chunkSize parameter.
For OVA templates only, extract the individual files using a tar utility (for example, tar xvf). This includes the OVF file (.ovf), the manifest (.mf), and the virtual disks (.vmdk).
Combine virtual disk chunks into a single disk with the following command:
- On Linux or Mac:
cat vmName-disk1.vmdk.* > vmName-disk1.vmdk
- On Windows:
copy /b vmName-disk1.vmdk.000000 + vmName-disk1.vmdk.000001 + ... (continue adding fragments, in order, through the last one) vmName-disk1.vmdk
Note: If only one virtual disk chunk fragment exists, rename it to the destination disk. If there are multiple disks with chunk fragments, combine each to their respective destination disks (For example: disk1.vmdk, disk2.vmdk, and so on).
Remove the chunkSize attribute from the OVF descriptor (.ovf) using a plain text editor (a scripted sketch for this step appears after the procedure). For example, change:
<File ovf:chunkSize="7516192768" ovf:href="vmName-disk1.vmdk" ovf:id="file1" ovf:size=... />
to:
<File ovf:href="vmName-disk1.vmdk" ovf:id="file1" ovf:size=.../>
Deploy the OVF template via the vSphere Web Client by selecting the local files including the updated OVF descriptor and merged disk.
For OVA templates only, reassemble the OVA via the vSphere Web Client with the following steps:
Export the OVF template. The export produces the OVF file (.ovf), manifest (.mf), and virtual disk (.vmdk) files. This manifest file is different from the one extracted in step 1.
Combine the files into a single OVA template using a tar utility (for example, tar cvf). On Linux for example:
tar cvf vm.ova vm.ovf vm.mf vm.disk1.vmdk
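The chunkSize removal described above can also be scripted. This is a minimal sketch only, assuming a sed utility is available (Linux or Mac) and that the descriptor is named vmName.ovf; it strips every ovf:chunkSize attribute and keeps a .bak copy of the original descriptor:
sed -i.bak 's/ ovf:chunkSize="[^"]*"//g' vmName.ovf
After editing the descriptor, any manifest (.mf) checksums that cover the .ovf file become stale, so recalculate them or omit the manifest during deployment.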
New Internet Explorer tabs open when exporting an OVF template or exporting items from a Content Library
If you use the vSphere Web Client with Internet Explorer to export an OVF template or to export items from a Content Library, new tabs open in the browser for each file in the Content Library item or OVF template. For each new tab, you might be prompted to accept a security certificate. Workaround: Accept each security certificate, then save each file.
vCenter Server does not remove OpaqueNetwork from the vCenter inventory
If a virtual machine is connected to an opaque network and this virtual machine is converted into a template, vCenter Server does not remove the OpaqueNetwork from the vCenter Server inventory, even if the ESXi host is removed from the opaque network and no virtual machines are connected to it, because templates are still attached to the opaque network. Workaround: None.
Deploying OVF template causes error.mutationService.ProviderMethodNotFoundError error in some views
You receive an error.mutationService.ProviderMethodNotFoundError error when you deploy an OVF template and all of the following conditions occur:
You select an OVF file from your local file system and click Next in the Deploy OVF Template wizard.
The OVF file is less than 1.5 MB.
You deploy the OVF template without selecting the target object, for example, from the VM list view.
Workaround: Deploy the OVF template by selecting the target object and choosing the Deploy OVF option.
Deploying an OVF or OVA template from a local file with delta disks in the vSphere Web Client might fail
When you deploy an OVF template or OVA template containing delta disks (ovf:parentRef in the OVF file), the operation might fail or stall during the process. The following is an example of OVF elements in the OVF descriptor:
<References>
<File ovf:href="Sugar-basedisk-1-4.vmdk" ovf:id="basefile14" ovf:size="112144896"/>
<File ovf:href="Sugar-disk1.vmdk" ovf:id="file1" ovf:size="44809216"/>
<File ovf:href="Sugar-disk4.vmdk" ovf:id="file4" ovf:size="82812928"/>
</References>
<DiskSection>
<Info>Meta-information about the virtual disks</Info>
<Disk ovf:capacity="1073741824"
ovf:diskId="basedisk14"
ovf:fileRef="basefile14"
ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" />
<Disk ovf:capacity="1073741824"
ovf:diskId="vmdisk1"
ovf:fileRef="file1"
ovf:parentRef="basedisk14"
ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" />
<Disk ovf:capacity="1073741824"
ovf:diskId="vmdisk4"
ovf:fileRef="file4"
ovf:parentRef="basedisk14"
ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized"/>
</DiskSection>
Workaround: To deploy the OVF or OVA template, host the template files on an HTTP server, and then deploy the template from the HTTP URL that points to it.
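If no web server is readily available, one quick way to host the template for this workaround is Python's built-in HTTP server. This is a minimal sketch only, assuming Python 3 is installed on the machine that holds the template files and that the chosen port is reachable from vCenter Server; the folder, host name, and port here are placeholders:
cd template-folder
python3 -m http.server 8000
Then point the Deploy OVF Template wizard at a URL such as http://your-host:8000/template-name.ovf (or the .ova file).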
No power on option available at completion of OVF deployment
During an OVF or OVA deployment, the deployment wizard does not provide an option to automatically power on the virtual machine when the deployment completes. Workaround: This option is not available in vSphere 6.5 when using the OVF deployment wizard. Manually power on the virtual machine after the deployment completes.
When the deploy OVF wizard is started from global inventory list, error message not displayed on wizard's location page
This issue occurs when you deploy an OVF template from any of the global inventory lists (for example, the virtual machine inventory list) in the vSphere Web Client and navigate to the "Select Name and Location" page of the wizard. If you do not choose a valid location, instead of a proper error message, the navigation buttons are disabled and you cannot proceed any further in the wizard. Workaround: Cancel the wizard, then reopen it and choose a valid location.
Deploy OVF wizard cannot deploy a local OVF or OVA template that contains external message bundles
When an OVF or OVA template contains references to external message bundles, it cannot be deployed from a local file. Workaround: To deploy the OVF or OVA template, perform one of the following:
Deploy the OVF or OVA template from a URL.
Edit the OVF file to replace the tag that points to the external message file (the
<Strings ovf:fileRef />
tag) with the actual content of the external message file.
Cannot create or clone a virtual machine on an SDRS-disabled datastore cluster
This issue occurs when you select a datastore that is part of an SDRS-disabled datastore cluster in any of the New Virtual Machine, Clone Virtual Machine (to virtual machine or to template), or Deploy From Template wizards. When you arrive at the Ready to Complete page and click Finish, the wizard remains open and nothing appears to occur. The Datastore value status for the virtual machine might display "Getting data..." and does not change. Workaround: Use the vSphere Web Client to place virtual machines on SDRS-disabled datastore clusters.
Deploying an OVF or OVA template fails for specific descriptors
Deploying an OVF or OVA template from the vSphere Web Client fails if the descriptor in the template contains any of the following values, each with its respective error message:
- A negative number as the value of the size attribute in a file reference element. Example error message: VALUE_ILLEGAL: Illegal value "-2" for attribute "size". Must be positive.
- An optional Reservation element that is specified without a value in the VM hardware section, for example <Reservation />. Example error message: VALUE_ILLEGAL: Illegal value "" for element "Reservation". Not a number.
- A missing VirtualHardwareSection.System.InstanceID vssd element. Example error message: ELEMENT_REQUIRED: Element "InstanceID" expected.
- An Internationalization section Strings element that refers to a missing file. Example error message: VALUE_ILLEGAL: Illegal value "eula" for attribute "fileRef".
- An unknown prefix added to an Internationalization section Strings element, for example <ovfstr:Strings xml:lang="de-DE">. Example error message: PARSE_ERROR: Parse error: Undeclared namespace prefix "ovfstr" at [row,col,system-id]: [41,39,"descriptor.ovf"].
- OVF Specification version 0.9 elements. Example error message: VALUE_ILLEGAL: OVF 0.9 is not supported. Invalid name space: "http://www.example.com/schema/ovf/1/envelope".
Workaround: Modify the descriptor so that it is a valid descriptor according to OVF Specification version 1.1.
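The following fragments illustrate what OVF 1.1-conformant values can look like for some of the cases above. They are illustrative sketches only, with placeholder names and values, and they assume the standard OVF rasd and vssd namespaces are declared in the Envelope; they are not text taken from a real descriptor:
<!-- the size attribute must be a positive integer -->
<File ovf:href="disk1.vmdk" ovf:id="file1" ovf:size="44809216"/>
<!-- give the optional Reservation element a numeric value, or remove it entirely -->
<rasd:Reservation>0</rasd:Reservation>
<!-- the System element of VirtualHardwareSection requires an InstanceID -->
<vssd:InstanceID>0</vssd:InstanceID>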
vMotion enabled USB device connected to a virtual machine is not visible in vSphere Web Client
If you connect a USB device that is enabled with vMotion to a virtual machine running on ESXi 6.5, the device is not visible in the vSphere Web Client after you suspend and then resume the virtual machine. This occurs even when the device successfully reconnects to the virtual machine. As a result, you cannot disconnect the device. Workaround: Perform one of the following workarounds:
Attempt to connect another USB device to the same virtual machine. Both devices are visible in the vSphere Web Client, allowing you to disconnect the originally connected USB device.
Power off and then power on the virtual machine. The device is visible in the vSphere Web Client.
Uploading files using Content Library, Datastore, and OVF/OVA Deployment might fail
If you attempt to upload a file using Content Library, Datastore upload, or OVA/OVF Deployment in the vSphere Web Client, the operation might fail with the error: The operation failed for an undetermined reason.
The failure occurs because certificates are not trusted. If the URL being processed for the file upload operation is not already trusted, then the upload fails.
Workaround: Copy the URL from the error, open a new browser tab, and visit that URL. You should be prompted to accept the certificate associated with that URL. Trust and accept the new certificate, and then retry the operation. See VMware Knowledge Base Article 2147256 for more details.
Deploying an OVA template from URL containing a manifest or certificate file at the end might fail in a slow network environment
When you deploy a large OVA template containing one or more manifest or certificate files at the end of the OVA template, the deployment might fail in a slow network environment with the following error:
Unable to retrieve manifest or certificate file.
The following is an example of an OVA template with manifest and certificate files located at the end of the OVA:
example.ova contains:
example.ovf
example-disk1.vmdk
example-disk2.vmdk
example.mf
example.cert
This failure occurs because the manifest and certificate files are necessary for the deployment process, so the earlier those files appear in the OVA file, the faster the deployment proceeds.
Workaround: Perform one of the following workarounds to deploy the OVA template:
Download the OVA template to your local system and deploy it from a local OVA file.
Convert the OVA template by placing the manifest or certificate files at the front of the OVA template file. To convert the example.ova template, perform the following steps:
Log in to the HTTP server machine and go to the folder that contains the OVA template.
Extract the files from the OVA template:
tar xvf example.ova
To recreate the OVA template, run the following commands in order:
tar cvf example.ova example.ovf
tar uvf example.ova example.mf
tar uvf example.ova example.cert
tar uvf example.ova example-disk1.vmdk
tar uvf example.ova example-disk2.vmdk
Deploy the OVA template from the HTTP URL again.
OVF templates exported in vSphere 6.5 that contain the carriage return character reference &#13; cannot be deployed in vSphere 5.5 or vSphere 6.0
If an OVF template that contains the carriage return character reference &#13; in the OVF descriptor is exported from the vSphere Web Client 6.5, it cannot be deployed from the vSphere Web Client 5.5 or 6.0. The following is an example of an OVF element in the OVF descriptor:
<Annotation>---
This is a sample annotation for this OVF template ---</Annotation>
Workaround: Remove &#13; from the OVF descriptor:
Remove all the instances of &#13; from the OVF descriptor.
If the OVF or OVA template has a manifest file, recalculate the checksum based on the updated OVF descriptor and update the manifest file. If a certificate file exists, update the certificate file to replace the checksum for the updated manifest file.
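One way to recalculate the manifest checksums after editing a descriptor is with the standard sha1sum or sha256sum utilities on Linux or Mac; which digest to use depends on how the template was exported, so match the prefix already present in the existing .mf file. A minimal sketch with placeholder file names:
sha256sum example.ovf example-disk1.vmdk
Copy each resulting digest into the corresponding manifest line, which has the form SHA256(example.ovf)= digest (or SHA1(...)= for SHA-1 manifests). If a certificate (.cert) file is present and you cannot re-sign the template, omitting the .cert and .mf files during deployment is the simpler option.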
vSphere Web Client does not support exporting virtual machines or vApps as OVA templates
In versions earlier than vSphere 6.5, you could export virtual machines and vApps as an OVA template from the vSphere Web Client. This functionality is not available in vSphere 6.5. Workaround: Export the virtual machine as an OVF template, and then create an OVA template from the OVF template files. The following procedure describes this process using Linux and Mac commands. Windows systems require installation of a TAR-capable utility.
Use the vSphere Web Client to export the VM or vApp as OVF template to the local machine.
Locate the downloaded OVF template files, and move them into an empty new folder.
Perform one of the following tasks to create an OVA template from an OVF template.
Go to the new folder and create an OVA template using the tar command to combine the files:
cd folder
tar cvf ova-template-name.ova ovf-template-name.ovf
tar uvf ova-template-name.ova ovf-template-name.mf
tar uvf ova-template-name.ova ovf-template-name-1.vmdk
...
tar uvf ova-template-name.ova ovf-template-name-n.vmdk
Here, n refers to the number of disks the VM contains, and ova-template-name.ova is the final OVA template. Run the commands in this exact order so that the OVA is built correctly.
Note: The
tar
command must use the TAR format and comply with the USTAR (Uniform Standard Tape Archive) format as defined by the POSIX IEEE 1003.1 standards group.
If the OVF Tool is installed on your system, run the following command:
cd downloaded-ovf-template-folder
path-to-ovf-tool\ovftool.exe ovf-template-name.ovf ova-template-name.ova
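Whichever method you use, the OVF descriptor is expected to be the first entry in the archive, which you can verify by listing the archive contents in order. A small check, with a placeholder file name:
tar tvf ova-template-name.ova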
Deploying an OVF template containing compressed file references might fail
When you deploy an OVF template containing compressed file references (typically compressed using gzip), the operation fails. The following is an example of an OVF element in the OVF descriptor:
<References>
<File ovf:size="458" ovf:href="valid_disk.vmdk.gz" ovf:compression="gzip" ovf:id="file1"></File>
</References>
Workaround: If the OVF Tool is installed on your system, run the following command to convert the OVF or OVA template. The converted template contains no compressed disks.
Go to the folder containing the template:
cd template-folder
Convert the template.
OVA template conversion:
path-to-ovf-tool\ovftool.exe ova-template-name.ova newova-template-name.ova
OVF template conversion:
path-to-ovf-tool\ovftool.exe ovf-template-name.ovf new-ovf-template-name.ovf
Deploying an OVF or OVA template that contains HTTP URLs in the file references fails
When you attempt to deploy an OVF or OVA template that contains an HTTP URL in the file references, the operation fails with the following error:
Invalid response code: 500
For example:
<References>
<File ovf:size="0" ovf:href="http://www.example.com/dummy.vmdk" ovf:id="file1"></File>
</References>
Workaround: To download the files from the HTTP server and update the OVF or OVA template, perform the following steps:
Open the OVF descriptor, and locate the file references with the HTTP URLs.
For OVA templates, extract the files from the OVA template to open the OVF descriptor. For example, run the following command:
tar xvf ova-template-name.ova
Note: This command is for a Linux or Mac system. Windows systems require installation of a tar utility.
Download the files from the HTTP URLs to a local machine and copy those files to the same folder as the OVF or OVA template.
Replace the HTTP URLs in the OVF descriptor with the actual file names that are downloaded to the folder. For example:
<File ovf:size="actual-downloaded-file-size" ovf:href="dummy.vmdk" ovf:id="file1"></File>
If the template contains a manifest (.mf) file and a certificate (.cert) file, regenerate them by recalculating the checksums of the relevant files, or omit these files during the OVF deploy operation.
For OVA templates only, recreate the OVA template using one of the following methods:
Use the tar command to recreate the template:
cd folder
tar cvf ova-template-name.ova ovf-name.ovf
tar uvf ova-template-name.ova manifest-name.mf
tar uvf ova-template-name.ova cert-name.cert
tar uvf ova-template-name.ova disk-name.vmdk
Repeat for additional disks or other file references.
Note: The tar command must use the TAR format and comply with the USTAR (Uniform Standard Tape Archive) format as defined by the POSIX IEEE 1003.1 standards group.
Use the OVF tool to recreate the template (Windows):
cd folder
path-to-ovf-tool\ovftool.exe ovf-name.ovf ova-template-name.ova
Deploying OVF or OVA templates fails from HTTP or HTTPS URLs that require authentication
When you attempt to use the vSphere Web Client to deploy an OVF or OVA template from an HTTP or HTTPS URL that requires authentication, the operation fails. You receive the error:
Transfer failed: Invalid response code 401.
The attempt to deploy the OVF or OVA template fails because you cannot enter your credentials.
Workaround: Download the files and deploy the template locally:
Download the OVF or OVA template from the HTTP or HTTPS URL manually to your local machine in any accessible folder (a download sketch follows these steps).
Deploy a virtual machine from the downloaded OVF or OVA template on the local machine.
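For the download step, any HTTP client that supports authentication works. A minimal sketch using curl, with a placeholder URL and placeholder credentials:
curl -u username:password -O https://server.example.com/templates/template.ova
For an OVF template, download the .ovf descriptor plus every file it references (.vmdk, .mf, and .cert if present) into the same folder before deploying.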
Deploying an OVF or OVA template with EFI/UEFI boot options is not supported in the vSphere Web Client
When you deploy an OVF or OVA template in the vSphere Web Client with the EFI boot option and include an NVRAM file, the operation fails. Workaround: Deploy the OVF template with the EFI boot option using OvfTool, version 4.2.0.
Existing Network Protocol Profiles are not populated on, and are not updated from, the Customize Template page of the Deploy OVF Template wizard
In the Customize Template page of the Deploy OVF Template wizard, the following custom properties are recognized and displayed:
gateway, netmask, dns, searchPath, domainName, hostPrefix, httpProxy, subnet
If no Network Protocol Profile exists for the selected network, a new Network Protocol Profile is automatically created if any of the custom properties is set. Each of the properties contains the entered values.
If a Network Protocol Profile already exists for the selected network, the new wizard does not pre-populate these custom properties, and any changes to these fields are ignored.
Workaround: If you need custom settings other than the existing Network Protocol Profile settings, make sure no Network Protocol Profile exists on the selected network. Either delete the profile or delete the network mapping in the profile.
Selected template is not preserved when Deploy OVF Template wizard is minimized, vSphere Web Client is refreshed, and wizard restored
When deploying an OVF template with the Deploy OVF Template wizard, perform the following:
Navigate through all pages of the Deploy OVF Template wizard up to the final step.
Minimize the wizard to the Work in Progress panel.
Click Global Refresh at the top next to user name.
Restore the wizard from the Work in Progress panel.
This results in two issues:
The Source VM name displays the same value as the Name value.
If you navigate to the Select template page, the selected template field is empty, indicating the template was not preserved.
Although the selected template does not appear, the wizard can complete and the template deploys correctly. This issue affects only how the vSphere Web Client interface displays the two values above.
Workaround: Avoid using Global Refresh when deploying the OVF template.
Deploying OVF template fails for user without Datastore.Allocate Space permission
When you deploy an OVF template without the Datastore.Allocate Space permission, the operation fails. Workaround: Assign the Datastore.Allocate Space permission to the user.
OVF deployment of a vApp fails in a non-DRS cluster
When you attempt to deploy an OVF containing a vApp in a non-DRS cluster, the operation fails. In vSphere 6.5, the Deploy OVF wizard allows you to select a non-DRS cluster that passes compatibility checks; however, the deployment attempt fails. Workaround: Enable DRS for the desired cluster or select another deployment location.
If you use the option LimitVMsPerESXhost, it might disable DRS load balancing and fail to generate any recommendations
The LimitVMsPerESXhost option is implemented as part of the DRS constraint check. If the number of virtual machines on a host exceeds the limit specified by the LimitVMsPerESXhost option, no additional virtual machines can be powered on or migrated to that host by DRS. Workaround: You can use the new advanced option TryBalanceVmsPerHost in place of the LimitVMsPerESXhost option in this release, which avoids the potential DRS failure. You might observe a cluster imbalance problem when manually setting the LimitVMsPerESXhost option to a small value (for example, 0).
The task progress bar for some of the content library operations does not change
The task progress bar for some of the content library operations displays 0% during the task's progress. These operations are:
Deploy a virtual machine from a VM template and from a content library.
Clone a library item from one library to another library.
Synchronize a subscribed library.
Workaround: None.
The initial version of a newly created content library item is 2
The initial version of a newly created content library item is 2 instead of 1. You can view the version of the content library item in the Version column in the list of content library items. Workaround: None.
If your user name contains non-ASCII characters, you cannot import items to a content library from your local system
If your user name contains non-ASCII characters, you might be unable to import items to a content library from your local system. Workaround: To import an item to a content library, use a URL link such as an HTTP link, an NFS link, or an SMB link.
If your user name contains non-ASCII characters, you cannot export items from a content library to your local system
If your user name contains non-ASCII characters, you might be unable to export items from a content library to your local system. Workaround: None.
When you synchronize a content library item in a subscribed content library, some of the tags of the item may not appear
Some of the tags of an item in a published content library may not appear in your subscribed content library after you synchronize the item. Workaround: None.
Deploy OVF task progress bar remains at 0 percent
When deploying an OVF template from the local system, the progress bar in the Deploy OVF Template wizard remains at 0%. However, the tasks for Deploy OVF Template and Import OVF package are created. Workaround: When selecting a local OVF template, make sure to select all the referenced files, including the OVF file and the VMDK files that are defined within the OVF descriptor file.
Deployment operation fails if a virtual machine template (OVF) includes a storage policy with replication.
If a virtual machine has a storage policy with a storage replication group and is captured as a template in a library, that template causes virtual machine deployment to fail. This happens because you cannot select a replication group when deploying from a Content Library template, and the selection of a replication group is required for this type of template. You receive an error message and must close the wizard manually; it does not close automatically despite the failed operation. Workaround: Delete the policy from the original virtual machine and create a new virtual machine template. You can add a policy to a new virtual machine after the new template has been created and deployed.
Selecting a storage policy when deploying a content library template causes a datastore or datastore cluster selection to be ignored.
Selecting a storage policy causes the datastore or datastore cluster selection to be ignored. The virtual machine is deployed on the selected storage profile, but not on the selected datastore or datastore cluster. Workaround: If the virtual machine must be deployed on a specific datastore or datastore cluster, make sure the storage policy is set to "None" when deploying a content library template. This ensures that the virtual machine is stored on the selected datastore or datastore cluster. After the virtual machine is deployed successfully, you can apply a storage policy by navigating to the deployed virtual machine's page and editing the storage policy.
Uploading items to a library stops responding when hosts associated with the backing datastore are in maintenance mode.
You cannot upload items to a library when all the hosts associated with the datastore backing that library are in maintenance mode. Doing so causes the process to stop responding. Workaround: Ensure that at least one host associated with the datastore backing the library is available during upload.
Mounting an ISO file from a content library to an unassociated virtual machine results in an empty dialog box.
You can only mount an ISO file from a content library to a virtual machine if the datastore or storage device where the ISO file resides is accessible from the virtual machine host. If the datastore or storage device is not accessible, the user interface shows an empty dialog box. Workaround: Make the storage device where the ISO file resides accessible to the host where the virtual machine resides. If policies prohibit this, you can copy the ISO file to a library on a datastore that is accessible to the virtual machine.
Doing an Advanced search for the Content Libraries with the "Content Library Published" property fails.
Carrying out an "Advanced Search" for Content Libraries with the property value "Content library published" causes the search to fail. Workaround: Manually browse for published libraries.
Attempts to migrate a Windows installation of vCenter Server or Platform Services Controller to an appliance might fail with an error message about DNS configuration setting if the source Windows installation is set with static IPv4 and static IPv6 configuration
Migrating a Windows installation that is configured with both IPv4 and IPv6 static addresses might fail with the error message: Error setting DNS configuration. Details : Operation Failed.. Code: com.vmware.applmgmt.err_operation_failed. The log file /var/log/vmware/applmgmt/vami.log of the newly deployed appliance contains the following entries:
INFO:vmware.appliance.networking.utils:Running command: ['/usr/bin/netmgr', 'dns_servers', '--set', '--mode', 'static', '--servers', 'IPv6_address,IPv4_address']
INFO:vmware.appliance.networking.utils:output:
error:
returncode: 17
ERROR:vmware.appliance.networking.impl:['/usr/bin/netmgr', 'dns_servers', '--set', '--mode', 'static', '--servers', 'IPv6_address,IPv4_address'] error , rc=17
Workaround:
Delete the newly deployed appliance and restore the source Windows installation.
On the source Windows installation, disable either the IPv6 or the IPv4 configuration.
From the DNS server, delete the entry for the IPv6 or IPv4 address that you disabled.
Retry the migration.
(Optional) After the migration finishes, add back the DNS entry and, on the migrated appliance, set the IPv6 or IPv4 address that you disabled.
VMware Migration Assistant initialization failure when migrating vCenter Server 6.0 with external SQL with Windows Integrated Authentication mode
As a user without the "Replace a Process level token" privilege, when you migrate from vCenter Server on Windows with an external Microsoft SQL Server database configured with Integrated Windows Authentication, VMware Migration Assistant initialization fails with a confusing error message that does not indicate the cause of the failure. For example: Failed to run pre-migration checks
The vCenter Server database requirements collection log is located at:
%temp%/vcsMigration/CollectRequirements_com.vmware.vcdb_2016_02_04_17_50.log
The log contains the entry:
2016-02-04T12:20:47.868Z ERROR vcdb.const Error while validating source vCenter Server database: "[Error 1314] CreateProcessAsUser: 'A required privilege is not held by the client.' "
Workaround: Verify that the user running the migration has the "Replace a Process level token" privilege set. A guide to customizing this setting can be found in the Microsoft online documentation. You can rerun the migration after you verify that the permissions are correct.
vCenter High Availability replication fails when user password expires
When the vCenter High Availability user password expires, the vCenter High Availability replication fails with several errors. For more details on the errors and cause, see http://kb.vmware.com/kb/2148675.Workaround: Reset the vCenter High Availability user password on each of the 3 vCenter High Availability nodes: active, passive, and witness. See http://kb.vmware.com/kb/2148675 for instructions on resetting the user passwords.
Deploying vCenter High Availability with an alternative Failover IP for the Passive node without specifying a Gateway IP address causes vCenter to fail
vCenter High Availability requires you to specify a gateway IP address if an alternative IP address and netmask are defined for the Passive node in a vCenter High Availability deployment. If this gateway IP address is left unspecified when you use an alternative IP address for the Passive node deployment, vCenter Server fails. Workaround: You must specify a gateway IP address if you use an alternative IP address for the Passive node in a VCHA deployment.
Deploying vCenter High Availability might fail if the Appliance was set up with a mixed-case hostname
If a VCSA is installed with a mixed-case FQDN, a subsequent attempt to deploy vCenter High Availability might fail. The reason for this is case-sensitive hostname validation performed by the vCenter High Availability deployment. Workaround: Use a hostname with consistent case when setting up the appliance.
SSH must be enabled on vCenter Server Appliance in order to configure vCenter HA
If SSH is disabled during the installation of a Management Node (with an external PSC), then during the configuration of vCenter HA from the vSphere Web Client, the vCenter HA deployment task fails with the message: SSH is not enabled. You can enable SSH on the vCenter Server Appliance by using the vSphere Web Client or the appliance management UI (VAMI), and then configure vCenter HA from the vSphere Web Client. Workaround: None. You must enable SSH on the vCenter Server Appliance for vCenter HA to work.
Unable to configure vCenter HA from vSphere Web Client UI when IP is used instead of FQDN during deployment
When you perform the following steps, an error occurs:
When deploying vCenter Server, in the "System Name" text box in the Deployment UI, enter an IP address instead of an FQDN and complete the installation.
After successful deployment, perform all the prerequisite steps to configure vCenter HA.
Configuring vCenter HA fails in the vSphere Web Client UI with the following error message:
Platform Service Controller information cannot be retrieved. Make sure that Application Management Service is running and you are member of Single Sign-On system Configuration Administrators group. Guest OS network information about the vCenter VM cannot retrieved. Make sure that Application Management Service is running.
Workaround: Provide an FQDN in the System Name field when deploying vCenter (and/or PSC).
vSphere HA may fail to restart dependent VMs and any other VMs in a lower tier
Currently, the VM override timer is started for a successful VM placement. If there is a dependency between VMs in the same tier and a VM fails to restart successfully, all dependent VMs and VMs in lower tiers fail to be restarted. However, if VMs are present in different tiers with no dependency on a VM in the same tier, the tier timeout is respected and lower-tier VMs fail over after the timeout. Workaround: Do not create VM dependencies within the same tier.
Creating a vSphere HA cluster enables VM Component Protection by default and ESXi 5.5 hosts cannot be added to the cluster.
Attempting to add an ESXi 5.5 host to a new vSphere HA cluster, or enabling vSphere HA on a newly created cluster that has ESXi 5.5 hosts, fails because VM Component Protection is enabled by default. This returns the error message: Cannot enable vSphere HA VM Component Protection for the specified cluster, because it contains a host with "Upgrade the host to 6.0 or greater".
This does not affect ESXi 6.0 or later hosts. Workaround: Go to the vSphere HA settings of the newly created cluster. On the Failures and Responses tab, ensure that "Datastore with PDL" and "Datastore with APD" are set to Disabled. After saving these settings, you can add ESXi 5.5 hosts to the cluster.
If an Active node reboots during an operation that removes vCenter HA Cluster configuration, you might need to manually start the Active node.
Removing vCenter HA Cluster configuration is a multi-step process that makes updates to the vCenter Appliance configuration. You must mark the appliance so that it starts as a stand-alone vCenter Server Appliance. If the Active appliance crashes or reboots during a key operation of configuration removal for vCenter HA, the Active node might reboot in a mode where you must intervene to start all services on the appliance. Workaround: Do the following to start all services on the Active appliance:
Log in to the console of the Active vCenter Server Appliance.
Enable bash on the appliance prompt.
Run command:
destroy-vcha -f
Reboot the appliance.
If you attempt to reboot an Active node with an intent to fail over, the Active node might continue as the Active node after the reboot
In a vCenter HA Cluster, when an Active node goes through a reboot cycle, the Passive node detects that the Active node in the vCenter HA Cluster is momentarily down. As a result, the Passive node tries to take over the role of the Active node. If the Active node reboots while an appliance state is being modified on the Active node, a failover to the Passive node might not be completed. If this occurs, the Active node continues as the Active node after the reboot cycle is complete. Workaround: If you are rebooting the Active node to cause a failover to the Passive node, use the "Initiate Failover" workflow from the UI or the Initiate Failover API. This ensures that the Passive node takes the role of the Active node.
You cannot apply an SSH root user key from an ESXi 6.0 host profile to an ESXi 6.5 host
In ESXi 6.5, the authorized keys host profile functionality for managing the root user SSH keys is deprecated. However, the deprecated version was set to 6.0 instead of 6.5. As a result, you cannot apply the root user SSH key from a 6.0 host profile to a host with version 6.5. Workaround: To use a host profile to configure the SSH key for the root user, you must create a new 6.5 host profile.
Applying the host profile by rebooting the target stateless ESXi host results in an invalid file path error message
When you first extract a host profile from a stateless host, edit it to create a new role with user input for the password policy and with stateless caching enabled, attach that profile to the host, and then update the password for the user and the role in the configuration, rebooting the host to apply the host profile fails with:
ERROR: EngineModule::ApplyHostConfig. Exception: Invalid file path
Workaround: You must apply the host profile directly to the host:
Stop the Auto Deploy service and reboot the host.
After the host boots, verify that the local user and role are present on the host.
Log in with the credentials provided in the configuration.
Booting a stateless host by using Auto Deploy might fail
A delay of about 11-16 seconds occurs for the getnameinfo request processing and leads to a failure in booting stateless hosts through Auto Deploy when the following conditions are met:
- Local DNS caching is enabled for a stateless host by adding the resolve parameter to the hosts entry (hosts: files resolve dns). The hosts: files resolve dns entry is part of the Photon /etc/nsswitch.conf configuration file.
- A NIC on the host gets its IP from DHCP and the same IP is not present in the DNS server.
Workaround: In the configuration file of the NIC on the vCenter Server that gets its IP from DHCP, set the UseDNS key to false:
[DHCP] UseDNS=false
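For context, the [DHCP] UseDNS=false key above is a systemd-networkd setting, so it belongs in the .network file that matches the NIC. The snippet below is a sketch only; the path /etc/systemd/network/10-eth0.network and the interface name eth0 are assumptions and may differ on your appliance:
[Match]
Name=eth0
[Network]
DHCP=yes
[DHCP]
UseDNS=false
Restart systemd-networkd (systemctl restart systemd-networkd) or reboot the appliance for the change to take effect.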
A stateless ESXi host might remain in maintenance mode when you deploy it using vSphere Auto Deploy and the vSphere Distributed Switch property is specified in the Host Profile
When you deploy a stateless ESXi host using vSphere Auto Deploy and the vSphere Distributed Switch property is specified in the host profile, the host enters maintenance mode while the host profile is applied. If the process fails, the host profile might not be applied and the host might not exit maintenance mode. Workaround: On the host profile page, manually remove the deployed host from maintenance mode and remediate it.
Performing remediation to apply the Host Profile settings on an iSCSI-enabled cluster results in an error in the vSphere Web Client
After you attach a host profile extracted from a host configured with a large number of LUNs to a cluster of iSCSI-enabled hosts, and you remediate that cluster, the remediation process results in the following error message in the vSphere Web Client interface:
Apply host profile operation in batch failed. com.vmware.vim.vmomi.client.exception.TransportProtocolException:org.apache.http.client.ClientProtocolException
The following entry appears in the vpxd.log file:
2016-07-25T12:06:01.214Z error vpxd[7FF1FE8FB700] [Originator@6876 sub=SoapAdapter] length of HTTP request body exceeds configured maximum 20000000
Workaround: Perform the following steps:
Select fewer hosts in the Remediate wizard.
After remediation completes, start another Remediate wizard for the remaining hosts.
The Auto Deploy option is not present in the vSphere Web Client after an upgrade or migration of the vCenter Server system from 5.5 or 6.0 to version 6.5
After upgrading or migrating the vCenter Server system from version 5.5 or 6.0 to version 6.5, the Auto Deploy option is not present in the Configure > Settings screen of the vSphere Web Client. The vSphere Auto Deploy service and the Image Builder service are installed but not started automatically. Workaround: Perform the following steps:
Start the Image Builder service manually.
Log out and log back in to the vSphere Web Client.
On the vSphere Web Client Home page, navigate to the vCenter Server system and select Configure > Settings to locate the Auto Deploy service.
If you attach more than 63 hosts to a host profile, the host customization page loads with an error
If you extract a Host Profile from the reference host and attach more than 63 hosts to the host profile, the vCenter Server system becomes heavily loaded and the time for generating a host-specific answer file exceeds the time limit of 120 seconds. The customization page loads with an error:
The query execution timed out because the backend property provider took more than 120 seconds
Workaround: Attach the host profile to the cluster or the host without generating the customization data:
Right-click the cluster or the host and select Host Profiles > Attach Host Profile.
Select the profile you want to attach.
Select the Skip Customizations check box. The vSphere Web Client interface does not call RetrieveHostCustomizationsForProfile to generate customization data.
Populate the customization data:
Right-click the host profile attached to the cluster or the host and select Export Host Customizations. This generates a CSV file with a customization entry for each host.
Fill in the customization data in the CSV file.
Right-click the host profile, select Edit Host Customizations, and import the CSV file.
Click Finish to save the customization data.
Auto Deploy with PXE boot of the ESXi installer on Intel XL710 (40GB) network adapters fails
When you use the preboot execution environment to boot the ESXi installer from the Intel XL710 network device to a host, the process of copying the ESXi image fails before control is transferred to the ESXi kernel. You get the following error:
Decompressed MD5: 000000000000000000000
Fatal error: 34(Unexpected EOF)
The serial log shows:
******************************************************************
* Booting through VMware AutoDeploy...
*
* Machine attributes:
* . asset=
* . domain=eng.vmware.com
* . hostname=prme-hwe-drv-8-dhcp173
* . ipv4=10.24.87.173
* . ipv6=fe80::6a05:caff:fe2d:5608
* . mac=68:05:ca:2d:56:08
* . model=PowerEdge R730
* . oemstring=Dell System
* . oemstring=5[0000]
* . oemstring=14[1]
* . oemstring=17[04C4B7E08854C657]
* . oemstring=17[5F90B9D0CECE3B5A]
* . oemstring=18[0]
* . oemstring=19[1]
* . oemstring=19[1]
* . serial=3XJRR52
* . uuid=4c4c4544-0058-4a10-8052-b3c04f523532
* . vendor=Dell Inc.
*
* Image Profile: ESXi-6.5.0-4067802-standard
* VC Host: None
*
* Bootloader VIB version: 6.5.0-0.0.4067802
******************************************************************
/vmw/cache/d6/b46cc616433e9d62ab4d636bc7f749/mboot.c32.f70fd55f332c557878f1cf77edd9fbff... ok
Scanning the local disk for cached image.
If no image is found, the system will reboot in 20 seconds......
<3>The system on the disk is not stateless cached.
<3>Rebooting...
Workaround: None.
The lsu-hpsa plugin does not work with the native hpsa driver (nhpsa)
The lsu-hpsa plugin does not work with the native hpsa driver (nhpsa) because the nhpsa driver is not compatible with the current HPSA management tool (hpssacli), which is used by the lsu-hpsa plugin. You might receive the following error messages:
# esxcli storage core device set -d naa.600508b1001c7dce62f9307c0604e53b -l=locator
Unable to set device's LED state to locator. Error was: HPSSACLI call in HPSAPlugin_SetLedState exited with code 127! (from lsu-hpsa-plugin)
# esxcli storage core device physical get -d naa.50004cf211e636a7
Plugin lsu-hpsa-plugin cannot get information for device with name naa.50004cf211e636a7. Error was: HPSSACLI call in Cache_Update exited with code 127!
# esxcli storage core device raid list -d naa.600508b1001c7dce62f9307c0604e53b
Plugin lsu-hpsa-plugin cannot get information for device with name naa.600508b1001c7dce62f9307c0604e53b. Error was: HPSSACLI call in Cache_Update exited with code 127!
Workaround: Replace the native hpsa driver (nhpsa) with the vmklinux driver.
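A common way to switch from a native driver to its vmklinux counterpart is to disable the native module and reboot so that the legacy driver claims the controller. This is a sketch only, assuming the vmklinux hpsa driver is present on the host; verify the exact driver replacement procedure against VMware and HPE guidance before changing drivers in production:
esxcli system module set --enabled=false --module=nhpsa
After the reboot, confirm which driver claims the adapter:
esxcli storage core adapter list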