VMware ESXi 6.5 Update 1 Release Notes

ESXi 6.5 Update 1 | 27 JULY 2017 | ISO Build 5969303

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics: What's New, Earlier Releases of ESXi 6.5, Internationalization, Compatibility, Product and Support Notices, Patches Contained in this Release, and Resolved Issues.

What's New

The ESXi 6.5 Update 1 release includes the following list of new features.

  • If you prefer to use vSphere Update Manager to upgrade the ESXi and vSAN stack, you can now enable vSAN software upgrades through integration with vSphere Update Manager. This provides a unified, common workflow. For more information, see the vSphere Update Manager Installation and Administration Guide.
  • Driver Updates:
    • Cavium qlnativefc driver
    • VMware nvme driver
    • Intel i40en driver with Lewisburg 10G NIC Support
    • Intel ne1000 driver with Lewisburg 1G NIC Support
    • Intel igbn driver
    • Intel ixgben driver
    • Broadcom ntg3 driver

Earlier Releases of ESXi 6.5

Features and known issues of ESXi 6.5 are described in the release notes for each earlier release.

For compatibility, installation and upgrades, product support notices, and features see the VMware vSphere 6.5 Release Notes.

Internationalization

VMware vSphere 6.5 is available in the following languages:

  • English
  • French
  • German
  • Spanish
  • Japanese
  • Korean
  • Simplified Chinese
  • Traditional Chinese

Components of VMware vSphere 6.5, including vCenter Server, ESXi, the vSphere Web Client, and the vSphere Client, do not accept non-ASCII input.

Compatibility

ESXi, vCenter Server, and vSphere Web Client Version Compatibility

The VMware Product Interoperability Matrix provides details about the compatibility of current and earlier versions of VMware vSphere components, including ESXi, VMware vCenter Server, the vSphere Web Client, and optional VMware products. Check the VMware Product Interoperability Matrix also for information about supported management and backup agents before you install ESXi or vCenter Server.

The vSphere Web Client and vSphere Client are packaged with vCenter Server.

Hardware Compatibility for ESXi

To view a list of processors, storage devices, SAN arrays, and I/O devices that are compatible with vSphere 6.5 Update 1, use the ESXi 6.5 information in the VMware Compatibility Guide.

Device Compatibility for ESXi

To determine which devices are compatible with ESXi 6.5, use the ESXi 6.5 information in the VMware Compatibility Guide.

Guest Operating System Compatibility for ESXi

To determine which guest operating systems are compatible with vSphere 6.5, use the ESXi 6.5 information in the VMware Compatibility Guide.

Virtual Machine Compatibility for ESXi

Virtual machines that are compatible with ESX 3.x and later (hardware version 4) are supported with ESXi 6.5. Virtual machines that are compatible with ESX 2.x and later (hardware version 3) are not supported. To use such virtual machines on ESXi 6.5, upgrade the virtual machine compatibility.
See the vSphere Upgrade documentation.

Product and Support Notices

  • The VMware Lifecycle Product Matrix provides detailed information about all supported and unsupported products. Check the VMware Lifecycle Product Matrix also for further information about the End of General Support, End of Technical Guidance, and End Of Availability.
  • VMware is announcing discontinuation of its third party virtual switch (vSwitch) program, and plans to deprecate the VMware vSphere APIs used by third party switches in the release following vSphere 6.5 Update 1. Subsequent vSphere versions will have the third party vSwitch APIs completely removed and third party vSwitches will no longer work. For more information, see FAQ: Discontinuation of third party vSwitch program (2149722).

Patches Contained in this Release

This release contains all bulletins for ESXi that were released prior to the release date of this product. See the My VMware page for more information about the individual bulletins.

Update Release ESXi-6.5.0-update01 contains the following individual bulletins:

Update Release ESXi-6.5.0-update01 (Security-only build) contains the following individual bulletins:

Update Release ESXi-6.5.0-update01 contains the following image profiles:

Update Release ESXi-6.5.0-update01 (Security-only build) contains the following image profiles:

Resolved Issues

The resolved issues are grouped as follows.

Auto Deploy
  • ESXi host loses network connectivity when performing a stateless boot from Auto Deploy

    An ESXi host might lose network connectivity when performing a stateless boot from Auto Deploy if the management VMkernel NIC has a static IP address and is connected to a Distributed Virtual Switch.

    This issue is resolved in this release.

CIM and API Issues
  • 13G DDR4 memory modules are displayed with status Unknown on the Hardware Health Status Page in vCenter Server 

    Dell 13G Servers use DDR4 memory modules. These modules are displayed with status "Unknown" on the Hardware Health Status Page in vCenter Server. 

    This issue is resolved in this release.

Internationalization
  • Newly connected USB keyboards are assigned the U.S. default layout

    When a keyboard is configured with a layout other than the U.S. default, and is later unplugged and plugged back into the ESXi host, the newly connected keyboard is assigned the U.S. default layout instead of the user-selected layout.

    This issue is resolved in this release. 

Miscellaneous Issues
  • The vmswapcleanup plugin fails to start

    The vmswapcleanup jumpstart plugin fails to start. The syslog contains the following line: 

    jumpstart[XXXX]: execution of '--plugin-dir /usr/lib/vmware/esxcli/int/ systemInternal vmswapcleanup cleanup' failed : Host Local Swap Location has not been enabled 

    This issue is resolved in this release.

  • The SNMP agent reports ifOutErrors and ifOutOctets counter values incorrectly

    The Simple Network Management Protocol (SNMP) agent reports the same value for both the ifOutErrors and ifOutOctets counters, when they should be different.

    This issue is resolved in this release.
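
    As a hedged sanity check (not part of these release notes), the two counters can be compared from a management station with the standard Net-SNMP tools; the host name, community string, and interface index below are illustrative:

```shell
# Illustrative spot check; assumes the ESXi SNMP agent is enabled and
# reachable, "public" is the read community string, and ifIndex 2 is
# the NIC of interest. With the fix, the two OIDs return independent
# values instead of a shared one.
snmpget -v2c -c public esxi01.example.com \
  IF-MIB::ifOutErrors.2 IF-MIB::ifOutOctets.2
```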

  • ESXi disconnects from vCenter Server when a virtual machine with a hardware version 3 is registered

    Although the ESXi host no longer supports running hardware version 3 virtual machines, it does allow registering these legacy virtual machines to upgrade them to a newer, supported, version. A recent regression caused the ESXi hostd service to disconnect from vCenter Server during the registration process. This would prevent the virtual machine registration from succeeding. 

    This issue is resolved in this release.

  • Virtual machine shuts down automatically with the MXUserAllocSerialNumber: too many locks error

    During normal virtual machine operation VMware Tools (version 9.10.0 and later) services create vSocket connections to exchange data with the ESXi host. When a large number of such connections have been made, the ESXi host might run out of lock serial numbers, causing the virtual machine to shut down automatically with the MXUserAllocSerialNumber: too many locks error. If the workaround from KB article 2149941 has been used, please re-enable Guest RPC communication over vSockets by removing the line: guest_rpc.rpci.usevsocket = "FALSE" 

    This issue is resolved in this release.

  • Virtual machine crashes on ESXi 6.5 when multiple users log on to Windows Terminal Server VM

    A Windows 2012 terminal server running VMware Tools 10.1.0 on ESXi 6.5 stops responding when many users are logged in.

    The vmware.log file shows messages similar to:

    2017-03-02T02:03:24.921Z| vmx| I125: GuestRpc: Too many RPCI vsocket channels opened.
    2017-03-02T02:03:24.921Z| vmx| E105: PANIC: ASSERT bora/lib/asyncsocket/asyncsocket.c:5217
    2017-03-02T02:03:28.920Z| vmx| W115: A core file is available in "/vmfs/volumes/515c94fa-d9ff4c34-ecd3-001b210c52a3/h8-
    ubuntu12.04x64/vmx-debug-zdump.001"
    2017-03-02T02:03:28.921Z| mks| W115: Panic in progress... ungrabbing 

    This issue is resolved in this release.

  • The vSphere Web Client cannot change the Syslog.global.logDirUnique option

    When using the vSphere Web Client to attempt to change the value of the Syslog.global.logDirUnique option, this option appears grayed out, and cannot be modified. 

    This issue is resolved in this release.
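
    While the option was grayed out in the vSphere Web Client, it could still be set from the ESXi Shell; the following sketch (an assumption, not part of the official fix) uses the esxcli syslog namespace:

```shell
# Set the per-host unique log directory flag from the command line,
# then reload the syslog service to apply the new configuration.
esxcli system syslog config set --logdir-unique=true
esxcli system syslog reload
```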

  • The activation of iodm and vmci jumpstart plug-ins might fail during the ESXi boot

    During the boot of an ESXi host, error messages related to execution of the jumpstart plug-ins iodm and vmci are observed in the jumpstart logs. 

    This issue is resolved in this release.

  • Timeout values larger than 30 minutes are ignored by the EnterMaintenanceMode task

    Entering maintenance mode times out after 30 minutes even if the specified timeout is larger than 30 minutes.

    This issue is resolved in this release.

  • Even if all NAS storage is disabled in the Host Profile document, host profile compliance fails and existing datastores are removed or new ones are added

    When all NFS datastores are disabled within a Host Profile document extracted from a reference host, the host profile remediation might fail with compliance errors, and existing datastores are removed or new ones are added during remediation.

    This issue is resolved in this release.

  • An ESXi host might fail with purple diagnostic screen when collecting performance snapshots

    An ESXi host might fail with purple diagnostic screen when collecting performance snapshots with vm-support due to calls for memory access after the data structure has already been freed.

    An error message similar to the following is displayed:

    @BlueScreen: #PF Exception 14 in world 75561:Default-drai IP 0xxxxxxxxxxxxaddr 0xxxxxxxxxxxx
    PTEs:0xXXXXXXXXXXXXX;0xYYYYYYYYYYYYYY;0xZZZZZZZZZZZZZZ;0x0;
    [0xxxxxxxxxxxx]SPLockIRQWork@vmkernel#nover+0x26 stack: 0xxxxxxxxxxxx
    [0xxxxxxxxxxxx]VMKStatsDrainWorldLoop@vmkernel#nover+0x90 stack: 0x17
    

    This issue is resolved in this release.

Networking Issues
  • Virtual machines configured to use EFI firmware fail to PXE boot in some DHCP environments

    A virtual machine configured to use EFI firmware fails to obtain an IP address when trying to PXE boot if the DHCP environment responds by IP unicast, because the EFI firmware was not capable of receiving a DHCP reply sent by IP unicast.

    This issue is resolved in this release.

  • When you configure the guest VLAN with VLAN offloading disabled (stripping the VLAN tag), the ping command between the VLAN interfaces might fail

    The original packet buffer can be shared across multiple destination ports when a packet is forwarded to multiple ports (for example, a broadcast packet). If VLAN offloading is disabled and the original packet buffer is modified, the VLAN tag is inserted into the shared packet buffer before the packet is forwarded to the guest VLAN. The other ports then detect packet corruption and drop the packet.

    This issue is resolved in this release. 

  • Joining an ESXi 6.5 host to an Active Directory domain fails with the Operation timed out error

    The ESXi 6.5 host fails to join the Active Directory domain, and the process might become unresponsive for an hour before returning the Operation timed out error, if the host uses only an IPv4 address and the domain has an IPv6-only or mixed IPv4 and IPv6 setup.

    This issue is resolved in this release. 

  • Full duplex configured on a physical switch might cause a duplex mismatch issue with the igb native Linux driver, which supports only auto-negotiate mode for the NIC speed/duplex setting

    The igb native driver on an ESXi host always works in auto-negotiate speed and duplex mode. No matter what configuration you set up on this end of the connection, it is not applied on the ESXi side. The auto-negotiate behavior causes a duplex mismatch if the physical switch is manually set to full-duplex mode.


    This issue is resolved in this release.

  • After a fresh installation of an ESXi host, the host might have no network connection if the link state of the first physical NIC is down

    By default, each ESXi host has one virtual switch, vSwitch0. During the installation of ESXi, the first physical NIC is chosen as the default uplink for vSwitch0. If the link state of that NIC is down, the ESXi host might have no network connection, even though the link state of the other NICs is up and they have network access.

    This issue is resolved in this release.

  • Virtual machines (VMs), which are part of Virtual Desktop Infrastructure (VDI) pools using Instant Clone technology, lose connection to Guest Introspection services

    Existing VMs using Instant Clone, and new VMs created with or without Instant Clone, lose connection to the Guest Introspection host module. As a result, the VMs are not protected, and no new Guest Introspection configurations can be forwarded to the ESXi host. You are also presented with a Guest introspection not ready warning in the vCenter Server UI.

    This issue is resolved in this release.

  • An ESXi host might fail with a purple screen on shutdown when IPv6 MLD is used

    An ESXi host might fail with a purple screen on shutdown if IPv6 MLD is used, because of a race condition in the TCP/IP stack.

    This issue is resolved in this release.

  • An ESXi host might become unresponsive when you reconnect it to vCenter Server

    If you disconnect your ESXi host from vCenter Server while some of the virtual machines on that host are using a link aggregation group (LAG), your ESXi host might become unresponsive when you reconnect it to vCenter Server after recreating the same LAG on the vCenter Server side, and you might see an error such as the following:

    0x439116e1aeb0:[0x418004878a9c]LACPScheduler@#+0x3c stack: 0x417fcfa00040 0x439116e1aed0:
    [0x418003df5a26]Net_TeamScheduler@vmkernel#nover+0x7a stack: 0x43070000003c 0x439116e1af30:
    [0x4180044f5004]TeamES_Output@#+0x410 stack: 0x4302c435d958 0x439116e1afb0:
    [0x4180044e27a7]EtherswitchPortDispatch@#+0x633 stack: 0x0 

    This issue is resolved in this release.

  • An ESXi host might fail with a purple screen and a Spin count exceeded (refCount) - possible deadlock with PCPU error

    An ESXi host might fail with a purple screen and a Spin count exceeded (refCount) - possible deadlock with PCPU error, when you reboot the ESXi host under the following conditions: 

    • You use the vSphere Network Appliance (DVFilter) in an NSX environment
    • You migrate a virtual machine with vMotion under DVFilter control

    This issue is resolved in this release.

  • A Virtual Machine (VM) with e1000/e1000e vNIC might have network connectivity issues

    For a VM with e1000/e1000e vNIC, when the e1000/e1000e driver tells the e1000/e1000e vmkernel emulation to skip a descriptor (the transmit descriptor address and length are 0), a loss of network connectivity might occur.

    This issue is resolved in this release. 

  • The VMkernel log includes one or more Couldn't enable keep alive warnings

    Couldn't enable keep alive warnings occur during communication between VMware NSX and partner solutions through a VMCI socket (vsock). The VMkernel log now omits these repeated warnings, because they can be safely ignored.

    This issue is resolved in this release.

  • A virtual machine with a paravirtual RDMA device becomes inaccessible during a snapshot operation or a vMotion migration

    Virtual machines with a paravirtual RDMA (PVRDMA) device run RDMA applications to communicate with peer queue pairs. If an RDMA application attempts to communicate with a nonexistent peer queue number, the PVRDMA device might wait for a response from the peer indefinitely. As a result, the virtual machine becomes inaccessible if, during a snapshot operation or migration, the RDMA application is still running.

    This issue is resolved in this release.

  • Guest VMkernel applications might receive unexpected completion entries for non-signaled fast-register work requests

    The RDMA communication between two virtual machines that reside on a host with an active RDMA uplink occasionally triggers spurious completion entries in the guest VMkernel applications. The completion entries are incorrectly triggered by non-signaled fast-register work requests that are issued by the kernel-level RDMA Upper Layer Protocol (ULP) of a guest. This can cause completion queue overflows in the kernel ULP. 

    This issue is resolved in this release.

  • When you use a paravirtual RDMA (PVRDMA) device, the device might remain unavailable in link down state until the guest OS reloads the PVRDMA guest driver

    When the PVRDMA driver is installed on a guest OS that supports the PVRDMA device, the driver might fail to load properly when the guest OS is powered on. The device remains unavailable in link down state until you manually reload the PVRDMA driver.

    This issue is resolved in this release.

  • pNIC disconnect and connect events in the VMware NetQueue load balancer might cause a purple screen

    If a pNIC is disconnected and connected to a virtual switch, the VMware NetQueue load balancer must identify it and pause the ongoing balancing work. In some cases, the load balancer might not detect this and access wrong data structures. As a result, you might see a purple screen. 

    This issue is resolved in this release.

  • The NetQueue load balancer might cause the ESXi host to stop responding

    For latency-sensitive virtual machines, the NetQueue load balancer can try to reserve an exclusive Rx queue. If the driver provides queue preemption, the NetQueue load balancer uses it to get an exclusive queue for latency-sensitive virtual machines. The load balancer holds a lock while executing the queue-preemption callback of the driver. With some drivers, this might result in a purple screen on the ESXi host, especially if the driver implementation involves sleeping.

    This issue is resolved in this release.

  • A USB network device integrated in Integrated Management Module (IMM) on some servers might stop responding if you reboot IMM

    On some servers, a USB network device is integrated in IMM or iLO to manage the server. When you reboot IMM by using the vSphere Web Client or an IMM or iLO command, the transaction on the USB network device is lost.

    This issue is resolved in this release.

  • Link status output of a vmnic does not reflect the real physical status

    When the physical link status of a vmnic changes, for example when the cable is unplugged or the switch port is shut down, the output generated by the esxcli command might show a wrong link status on Intel 82574L based NICs (Intel Gigabit Desktop CT/CT2). You must manually restart the NIC to get the actual link status.

    This issue is resolved in this release.
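
    The link state that the fix corrects can be inspected from the ESXi Shell; this is a hedged sketch, with vmnic0 as an illustrative NIC name:

```shell
# List all physical NICs with their reported link state and speed;
# with the fix, the Link column reflects the real physical state.
esxcli network nic list
# Show detailed state for a single NIC (vmnic0 is illustrative).
esxcli network nic get -n vmnic0
```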

  • The network connection might be lost if a ntg3 driver is used with Broadcom NetXtreme I NICs 

    NICs using ntg3 driver might experience unexpected loss of connectivity. The network connection cannot be restored until you reboot the ESXi host. The devices affected are Broadcom NetXtreme I 5717, 5718, 5719, 5720, 5725 and 5727 Ethernet Adapters. The problem is related to certain malformed TSO packets sent by VMs such as (but not limited to) F5 BIG-IP Virtual Appliances.
    The ntg3 driver, version 4.1.2.0 resolves this problem. 

    This issue is resolved in this release.

  • An ESXi host might fail with purple screen when you turn off the IPv6 support

    An ESXi host might fail with a purple screen when you globally turn off IPv6 support and reboot the host.

    This issue is resolved in this release.
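
    For reference, the sequence that previously triggered the failure looks like the following sketch (run from the ESXi Shell; the reboot is required for the setting to take effect):

```shell
# Globally disable IPv6 support on the host, then reboot.
esxcli network ip set --ipv6-enabled=false
reboot
```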

  • An ESXi host might stop responding when you migrate a virtual machine with Storage vMotion between ESXi 6.0 and ESXi 6.5 hosts

    The vmxnet3 device tries to access guest OS memory while guest memory preallocation is in progress during the migration of a virtual machine with Storage vMotion. This results in an invalid memory access and a failure of the ESXi 6.5 host.

    This issue is resolved in this release.

Security Issues
  • Update to the libcurl library

    The ESXi userworld libcurl library is updated to version 7.53.1.

  • Update to the NTP package

    The ESXi NTP package is updated to version 4.2.8p10.

  • Update to the OpenSSH version

    The OpenSSH version is updated to version 7.5p1.

  • The Likewise stack on ESXi is not enabled to support SMBv2

    The Windows 2012 domain controller supports SMBv2, whereas the Likewise stack on ESXi supports only SMBv1. With this release, the Likewise stack on ESXi is enabled to support SMBv2.

    This issue is resolved in this release.

  • You cannot use custom ESXi SSL certificates with keys that are longer than 2048 bits

    In vSphere 6.5, the secure heartbeat feature supported only ESXi host certificates with exactly 2048-bit keys. If you try to add or replace the ESXi host certificate with a custom certificate that has a key longer than 2048 bits, the host is disconnected from vCenter Server. The log messages in vpxd.log look similar to:

    error vpxd[7FB5BFF7E700] [Originator@6876 sub=vpxCrypt opID=HeartbeatModuleStart-4b63962d] [bool VpxPublicKey::Verify(const EVP_MD*, const unsigned char*, size_t, const unsigned char*, size_t)] ERR error:04091077:rsa routines:INT_RSA_VERIFY:wrong signature length

    warning vpxd[7FB5BFF7E700] [Originator@6876 sub=Heartbeat opID=HeartbeatModuleStart-4b63962d] Failed to verify signature; host: host-42, cert: (**THUMBPRINT_REMOVED**), signature : (**RSA_SIGNATURE_REMOVED**)

    warning vpxd[7FB5BFF7E700] [Originator@6876 sub=Heartbeat opID=HeartbeatModuleStart-4b63962d] Received incorrect size for heartbeat Expected size (334) Received size (590) Host host-87

    This issue is resolved in this release.

  • Update to the libPNG library

    The libPNG library is updated to libpng-1.6.29.

  • Update to OpenSSL

    The OpenSSL package is updated to version openssl-1.0.2k.

  • Xorg driver upgrade

    The Xorg driver upgrade includes the following updates:

    • the libXfont package is updated to libXfont 1.5.1
    • the pixman library is updated to pixman 0.35.1
  • Update to the libarchive library

    The libarchive library is updated to version 3.3.1.

  • Update to the SQLite3 library

    The SQLite3 library is updated to version 3.17.0.

Server Configuration
  • PCI passthru does not support devices with MMIO located above 16 TB where MPNs are wider than 32 bits 

    PCI passthru does not support devices with MMIO located above 16 TB where MPNs are wider than 32 bits. 

    This issue is resolved in this release.

  • Logging in to an ESXi host by using SSH requires password re-entry

    You are prompted for a password twice when connecting to an ESXi host through SSH if the ESXi host is upgraded from vSphere version 5.5 to 6.5 while being part of a domain.

    This issue is resolved in this release.

  • Compliance error on Security.PasswordQualityControl in the host profile section

    In the host profile section, a compliance error on Security.PasswordQualityControl is observed when the PAM password setting in the PAM password profile differs from the advanced configuration option Security.PasswordQualityControl. Because the advanced configuration option Security.PasswordQualityControl is unavailable for the host profile in this release, use the Requisite option in the Password PAM Configuration to change the password policy instead.

    This issue is resolved in this release.

  • The vpxd service might crash, preventing the vSphere Web Client user interface from connecting to and updating vCenter Server

    If the mandatory field in the VMODL object of the profile path is left unset, a serialization issue might occur during answer file validation for the network configuration, resulting in a vpxd service failure.

    This issue is resolved in this release.

  • The configuration of /Power/PerfBias cannot be edited

    Setting the /Power/PerfBias advanced configuration option is not available. Any attempt to set it to a value returns an error. 

    This issue is resolved in this release.
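
    With the fix, the option can be set again through the usual advanced-settings interface; the value below is illustrative, as valid values depend on the platform:

```shell
# Set and then read back the /Power/PerfBias advanced option
# (the integer value 6 is illustrative, not a recommendation).
esxcli system settings advanced set --option /Power/PerfBias --int-value 6
esxcli system settings advanced list --option /Power/PerfBias
```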

  • Joining any host (ESXi or vCenter Server) to an Active Directory domain in vSphere 6.5 can cause a service failure

    vSphere 6.5 does not support disjoint Active Directory namespaces. A disjoint namespace is a scenario in which a computer's primary domain name system (DNS) suffix does not match the DNS domain name where that computer resides.

    This issue is resolved in this release.

  • XHCI-related platform errors are reported in ESXi VMkernel logs

    XHCI-related platform error messages, such as xHCI Host Controller USB 2.0 Control Transfer may cause IN Data to be dropped and xHCI controller Parity Error response bit set to avoid parity error on poison packet, are reported in the ESXi VMkernel logs.

    This issue is resolved in this release. 

  • An ESXi host might stop responding with no heartbeat NMI state

    An ESXi host might become unresponsive with no heartbeat NMI state on AMD machines with OHCI USB Host Controllers. 

    This issue is resolved in this release. 

Storage Issues
  • Modification of IOPS limit of virtual disks with enabled Changed Block Tracking (CBT) fails with errors in the log files

    To define the storage I/O scheduling policy for a virtual machine, you can configure the I/O throughput for each virtual machine disk by modifying the IOPS limit. When you edit the IOPS limit and CBT is enabled for the virtual machine, the operation fails with an error The scheduling parameter change failed. Due to this problem, the scheduling policies of the virtual machine cannot be altered. The error message appears in the vSphere Recent Tasks pane.

    You can see the following errors in the /var/log/vmkernel.log file:

    2016-11-30T21:01:56.788Z cpu0:136101)VSCSI: 273: handle 8194(vscsi0:0):Input values: res=0 limit=-2 bw=-1 Shares=1000
    2016-11-30T21:01:56.788Z cpu0:136101)ScsiSched: 2760: Invalid Bandwidth Cap Configuration
    2016-11-30T21:01:56.788Z cpu0:136101)WARNING: VSCSI: 337: handle 8194(vscsi0:0):Failed to invert policy

    This issue is resolved in this release.

  • When you hot-add multiple VMware Paravirtual SCSI (PVSCSI) hard disks in a single operation, only one is visible for the guest OS

    When you hot-add two or more hard disks to a VMware PVSCSI controller in a single operation, the guest OS can see only one of them.

    This issue is resolved in this release.

  • An ESXi host might fail with a purple screen

    An ESXi host might fail with a purple screen because of a race condition when multiple multipathing plugins (MPPs) try to claim paths.

    This issue is resolved in this release.

  • Reverting from an error during a storage profile change operation, results in a corrupted profile ID

    If a VVol VASA Provider returns an error during a storage profile change operation, vSphere tries to undo the operation, but the profile ID gets corrupted in the process. 

    This issue is resolved in this release.

  • Incorrect Read or Write latency displayed in vSphere Web Client for VVol datastores

    Per host Read or Write latency displayed for VVol datastores in the vSphere Web Client is incorrect. 

    This issue is resolved in this release.

  • An ESXi host might fail with a purple screen during NFSCacheGetFreeEntry

    The NFS v3 client does not properly handle a case where NFS server returns an invalid filetype as part of File attributes, which causes the ESXi host to fail with a purple screen. 

    This issue is resolved in this release.

  • You need to manually configure the SATP rule for a new Pure Storage FlashArray

    For a Pure Storage FlashArray device, you previously had to manually add a SATP rule to set the SATP, PSP, and IOPS. A new SATP rule is now added to ESXi that sets the SATP to VMW_SATP_ALUA, the PSP to VMW_PSP_RR, and the IOPS to 1 for all Pure Storage FlashArray models.

    Note: In case of a stateless ESXi installation, if an old host profile is applied, it overwrites the new rules after upgrade.

    This issue is resolved in this release.
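
    For comparison, the rule that previously had to be added by hand is sketched below; the vendor and model strings match what Pure Storage documents for FlashArray, and the description text is illustrative:

```shell
# Manual SATP claim rule for Pure Storage FlashArray
# (equivalent to the rule now built into this release).
esxcli storage nmp satp rule add \
  --satp VMW_SATP_ALUA \
  --psp VMW_PSP_RR \
  --psp-option "iops=1" \
  --vendor "PURE" \
  --model "FlashArray" \
  --description "Pure Storage FlashArray rule"
```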

  • The lsi_mr3 driver and hostd process might stop responding due to a memory allocation failure in ESXi 6.5

    The lsi_mr3 driver allocates memory from address space below 4GB. The vSAN disk serviceability plugin lsu-lsi-lsi-mr3-plugin and the lsi_mr3 driver communicate with each other. The driver might stop responding during the memory allocation when handling the IOCTL event from storelib. As a result, lsu-lsi-lsi-mr3-plugin might stop responding and the hostd process might also fail even after restart of hostd. 

    This issue is resolved in this release by a code change in the lsu-lsi-lsi-mr3-plugin plugin of the lsi_mr3 driver that sets a 3-second timeout for getting the device information, to avoid plugin and hostd failures.

  • When you hot-add an existing or new virtual disk to a CBT (Changed Block Tracking) enabled virtual machine (VM) residing on a VVol datastore, the guest operating system might stop responding

    When you hot-add an existing or new virtual disk to a CBT enabled VM residing on a VVol datastore, the guest operating system might stop responding until the hot-add process completes. The duration of the VM unresponsiveness depends on the size of the virtual disk being added. The VM automatically recovers once the hot-add completes.

    This issue is resolved in this release.

  • When you use vSphere Storage vMotion, the UUID of a virtual disk might change

    When you use vSphere Storage vMotion on vSphere Virtual Volumes storage, the UUID of a virtual disk might change. The UUID identifies the virtual disk and a changed UUID makes the virtual disk appear as a new and different disk. The UUID is also visible to the guest OS and might cause drives to be misidentified. 

    This issue is resolved in this release.

  • An ESXi host might stop responding if LUNs are unmapped on the storage array side

    An ESXi host might stop responding if LUNs are unmapped on the storage array side while the LUNs are connected to the ESXi host through a Broadcom/Emulex Fibre Channel adapter (the lpfc driver) and have I/O running.

    This issue is resolved in this release.

  • An ESXi host might become unresponsive if the VMFS-6 volume has no space for the journal

    When a VMFS-6 volume is opened, a journal block is allocated. Upon successful allocation, a background thread is started. If there is no space on the volume for the journal, the volume is opened in read-only mode and no background thread is started. Any attempt to close the volume then tries to wake up a nonexistent thread, which causes the ESXi host to fail.

    This issue is resolved in this release.

  • An ESXi host might fail with a purple screen if the virtual machines running on it have large-capacity vRDMs and use the SPC-4 feature

    When virtual machines use the SPC-4 feature with the Get LBA Status command to query thin-provisioning features of large attached vRDMs, the processing of this command might run for a long time in the ESXi kernel without relinquishing the CPU. The resulting high CPU usage can cause the CPU heartbeat watchdog to deem the process hung, and the ESXi host might stop responding.

    This issue is resolved in this release.

  • An ESXi host might fail with a purple screen if a VMFS6 datastore is mounted on multiple ESXi hosts and a disk.vmdk has file blocks allocated from an increased portion of the same datastore

    A VMDK file might reside on a VMFS6 datastore that is mounted on multiple ESXi hosts, for example ESXi host1 and ESXi host2. If the VMFS6 datastore capacity is increased from ESXi host1 while the datastore is also mounted on ESXi host2, and the disk.vmdk has file blocks allocated from the increased portion of the datastore from ESXi host1, then accessing the disk.vmdk file from ESXi host2 and allocating file blocks to it from ESXi host2 might cause ESXi host2 to fail with a purple screen.

    This issue is resolved in this release.

  • After installation or upgrade, certain multipathed LUNs are not visible

    If the paths to a multipathed LUN have different LUN IDs, the LUN is not registered by PSA and is not visible to end users.

    This issue is resolved in this release.

  • A virtual machine residing on NFS datastores might fail a recompose operation in Horizon View

    The recompose operation in Horizon View might fail for desktop virtual machines residing on NFS datastores with stale NFS file handle errors, because of the way virtual disk descriptors are written to NFS datastores.

    This issue is resolved in this release.

  • An ESXi host might fail with a purple screen because of a CPU heartbeat failure

    An ESXi host might fail with a purple screen because of a CPU heartbeat failure when SEsparse is used for creating snapshots and clones of virtual machines. The use of SEsparse might lead to CPU lockups, indicated by the following warning message in the VMkernel logs, followed by a purple screen:

    PCPU <cpu-num>  didn't have a heartbeat for <seconds>  seconds; *may* be locked up.

    This issue is resolved in this release.

  • Disabled frequent lookup to an internal vSAN metadata directory (.upit) on virtual volume datastores. This metadata folder is not applicable to virtual volumes

    Frequent lookups of a vSAN metadata directory (.upit) on virtual volume datastores can impact their performance. The .upit directory is not applicable to virtual volume datastores, so this change disables the lookups.

    This issue is resolved in this release.

  • Performance issues on a Windows virtual machine might occur after upgrading to VMware ESXi 6.5.0 P01 or 6.5 EP2

    Performance issues might occur when unaligned unmap requests are received from the guest OS under certain conditions. Depending on the size and number of the unaligned unmaps, this might occur when a large number of small files (less than 1 MB in size) are deleted from the guest OS.

    This issue is resolved in this release.

  • ESXi 5.5 and 6.x hosts stop responding after running for 85 days

    ESXi 5.5 and 6.x hosts stop responding after running for 85 days. In the /var/log/vmkernel log file you see entries similar to: 

    YYYY-MM-DDTHH:MM:SS.833Z cpu58:34255)qlnativefc: vmhba2(5:0.0): Recieved a PUREX IOCB woh oo
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:34255)qlnativefc: vmhba2(5:0.0): Recieved the PUREX IOCB.
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)qlnativefc: vmhba2(5:0.0): sizeof(struct rdp_rsp_payload) = 0x88
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674qlnativefc: vmhba2(5:0.0): transceiver_codes[0] = 0x3
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)qlnativefc: vmhba2(5:0.0): transceiver_codes[0,1] = 0x3, 0x40
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)qlnativefc: vmhba2(5:0.0): Stats Mailbox successful.
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)qlnativefc: vmhba2(5:0.0): Sending the Response to the RDP packet
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674 0 1 2 3 4 5 6 7 8 9 Ah Bh Ch Dh Eh Fh
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)--------------------------------------------------------------
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 53 01 00 00 00 00 00 00 00 00 04 00 01 00 00 10
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) c0 1d 13 00 00 00 18 00 01 fc ff 00 00 00 00 20
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 00 00 00 00 88 00 00 00 b0 d6 97 3c 01 00 00 00
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 0 1 2 3 4 5 6 7 8 9 Ah Bh Ch Dh Eh Fh
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)--------------------------------------------------------------
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 02 00 00 00 00 00 00 80 00 00 00 01 00 00 00 04
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 18 00 00 00 00 01 00 00 00 00 00 0c 1e 94 86 08
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 0e 81 13 ec 0e 81 00 51 00 01 00 01 00 00 00 04
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 2c 00 04 00 00 01 00 02 00 00 00 1c 00 00 00 01
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 00 00 00 00 40 00 00 00 00 01 00 03 00 00 00 10
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)50 01 43 80 23 18 a8 89 50 01 43 80 23 18 a8 88
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 00 01 00 03 00 00 00 10 10 00 50 eb 1a da a1 8f

    This is a firmware problem, caused when Read Diagnostic Parameters (RDP) exchanges between the Fibre Channel (FC) switch and the Host Bus Adapter (HBA) fail 2048 times. The HBA then stops responding, and because of this the virtual machine and/or the ESXi host might fail. By default, the RDP routine is initiated by the FC switch once every hour, so the 2048 limit is reached in approximately 85 days.

    This issue is resolved in this release.

  • Resolved a performance drop on Intel devices with a stripe size limitation

    Some Intel devices, for example the P3700 and P3600, have a vendor-specific limitation in their firmware or hardware. Because of this limitation, any I/O that crosses the stripe size boundary on the NVMe device can suffer a significant performance drop. The driver resolves this problem by checking all I/Os and splitting a command if it crosses a stripe boundary on the device.

    This issue is resolved in this release.

  • Removed the redundant controller reset when starting the controller

    The driver might reset the controller twice (disable, enable, disable, and then finally enable it) when the controller starts. This was a workaround for an early version of the QEMU emulator, but it might delay initialization of some controllers. According to the NVMe specification, only one reset, that is, disable and then enable the controller, is needed. This upgrade removes the redundant controller reset when starting the controller.

    This issue is resolved in this release.

  • An ESXi host might fail with purple screen if the virtual machine with large virtual disks uses the SPC-4 feature

    An ESXi host might stop responding and fail with a purple screen, with entries similar to the following, as a result of a CPU lockup.

    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]@BlueScreen: PCPU x: no heartbeat (x/x IPIs received)
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Code start: 0xxxxx VMK uptime: x:xx:xx:xx.xxx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Saved backtrace from: pcpu x Heartbeat NMI
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]MCSLockWithFlagsWork@vmkernel#nover+0xx stack: 0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]PB3_Read@esx#nover+0xx stack: 0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]PB3_AccessPBVMFS5@esx#nover+00xx stack: 0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Fil3FileOffsetToBlockAddrCommonVMFS5@esx#nover+0xx stack:0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Fil3_ResolveFileOffsetAndGetBlockTypeVMFS5@esx#nover+0xx stack:0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Fil3_GetExtentDescriptorVMFS5@esx#nover+0xx stack: 0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Fil3_ScanExtentsBounded@esx#nover+0xx stack:0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Fil3GetFileMappingAndLabelInt@esx#nover+0xx stack: 0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Fil3_FileIoctl@esx#nover+0xx stack: 0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]FSSVec_Ioctl@vmkernel#nover+0xx stack: 0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]FSS_IoctlByFH@vmkernel#nover+0xx stack: 0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSCSIFsEmulateCommand@vmkernel#nover+0xx stack: 0x0
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSCSI_FSCommand@vmkernel#nover+0xx stack: 0x1
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSCSI_IssueCommandBE@vmkernel#nover+0xx stack: 0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSCSIExecuteCommandInt@vmkernel#nover+0xx stack: 0xb298e000
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]PVSCSIVmkProcessCmd@vmkernel#nover+0xx stack: 0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]PVSCSIVmkProcessRequestRing@vmkernel#nover+0xx stack: 0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]PVSCSI_ProcessRing@vmkernel#nover+0xx stack: 0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VMMVMKCall_Call@vmkernel#nover+0xx stack: 0xx
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VMKVMM_ArchEnterVMKernel@vmkernel#nover+0xe stack: 0x0

    This occurs if the virtual machine hardware version is 13 and the SPC-4 feature is used for the large virtual disk.

    This issue is resolved in this release.

  • The Marvell Console device on the Marvell 9230 AHCI controller is not available

    According to the kernel log, the ATAPI device is exposed on one of the AHCI ports of the Marvell 9230 controller. This Marvell Console device is an interface for configuring RAID on the Marvell 9230 AHCI controller, and is used by some Marvell CLI tools.

    In the output of the esxcfg-scsidevs -l command, a host equipped with the Marvell 9230 controller does not show the SCSI device with the Local Marvell Processor display name.

    The information in the kernel log is:
    WARNING: vmw_ahci[XXXXXXXX]: scsiDiscover:the ATAPI device is not CD/DVD device

    This issue is resolved in this release. 

  • SSD congestion might cause multiple virtual machines to become unresponsive

    Depending on the workload and the number of virtual machines, diskgroups on the host might go into permanent device loss (PDL) state. This causes the diskgroups to not admit further IOs, rendering them unusable until manual intervention is performed.

    This issue is resolved in this release.

  • An ESXi host might fail with purple screen when running HBR + CBT on a datastore that supports unmap

    The ESXi functionality that allows unaligned unmap requests did not account for the fact that the unmap request may occur in a non-blocking context. If the unmap request is unaligned, and the requesting context is non-blocking, it could result in a purple screen. Common unaligned unmap requests in non-blocking context typically occur in HBR environments. 

    This issue is resolved in this release.

  • An ESXi host might lose connectivity to a VMFS datastore

    Due to a memory leak in the LVM module, the LVM driver might run out of memory under certain conditions, causing the ESXi host to lose access to the VMFS datastore.

    This issue is resolved in this release.

Supported Hardware Issues
  • Intel I218 NIC resets frequently in a heavy traffic scenario

    When the TSO capability is enabled in the NE1000 driver, the I218 NIC resets frequently under heavy traffic because of an I218 hardware issue. The NE1000 TSO capability is therefore disabled for the I218 NIC.

    This issue is resolved in this release.

Upgrade and Installation Issues
  • A major upgrade of a dd image booted ESXi host to version 6.5 by using vSphere Update Manager fails

    A major upgrade of a dd image booted ESXi host to version 6.5 by using vSphere Update Manager fails with the error Cannot execute upgrade script on host.

    This issue is resolved in this release.

  • The previous software profile version of an ESXi host is displayed after a software profile update, and the software profile name is not marked Updated after an ISO upgrade

    The previous software profile version of an ESXi host is displayed in the esxcli software profile get command output after execution of an esxcli software profile update command. Also, the software profile name is not marked Updated in the esxcli software profile get command output after an ISO upgrade.

    This issue is resolved in this release.
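
    As a sketch of the verification flow described above (the depot path and profile name below are placeholders), you can compare the image profile reported before and after an update:

    esxcli software profile get
    esxcli software profile update -d /vmfs/volumes/datastore1/<depot>.zip -p <profile-name>
    esxcli software profile get

    Before this fix, the final esxcli software profile get could still report the previous profile version.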

  • Unable to collect a vm-support bundle from an ESXi 6.5 host

    A vm-support bundle cannot be collected from an ESXi 6.5 host because, when generating logs in ESXi 6.5 by using the vSphere Web Client, the select specific logs to export text box is blank. The options network, storage, fault tolerance, hardware, and so on are blank as well. This issue occurs because the rhttpproxy port for /cgi-bin has a value different from 8303.

    This issue is resolved in this release.

  • Installation on TPM 1.2 machine hangs early during boot

    The installation of ESXi 6.5 stops responding on a system with the TPM 1.2 chip. If tbootdebug is specified as a command line parameter, the last log message is: 

    Relocating modules and starting up the kernel...
    TBOOT: **************TBOOT ******************
    TBOOT: TPM family: 1.2

    This issue is resolved in this release.

  • vSphere Update Manager fails with RuntimeError

    A host scan operation fails with a RuntimeError in the ImageProfile module if the module contains VIBs for a specific hardware combination. The failure is caused by a code transition issue between Python 2 and Python 3.

    This issue is resolved in this release.

  • An ESXi host might fail with a purple screen because the system cannot recognize or reserve resources for a USB device

    The ESXi host might fail with a purple screen when booting on the following Oracle servers: X6-2, X5-2, and X4-2. The backtrace shows that the failure is caused by a pcidrv_alloc_resource failure, which occurs because the system cannot recognize or reserve resources for the USB device.

    This issue is resolved in this release.

vCenter Server, vSphere Web Client, and vSphere Client Issues
  • A warning message appears after you install the desktop vSphere Client for Windows, and try to connect to an ESXi host 

    The desktop vSphere Client was deprecated in vSphere 6.5 and is no longer supported. When you try to connect to an ESXi 6.5 host, the following warning message appears: The required client support files need to be retrieved from the server "0.0.0.0" and installed. Click Run the installer or Save the installer.

    In this case, use the VMware Host Client to conduct host management operations.

Virtual Machine Management Issues
  • The performance counter cpu.system incorrectly shows a value of 0 (zero)

    The performance counter cpu.system is incorrectly calculated for a virtual machine. The value of the counter is always 0 (zero) and never changes, which makes it impossible to do any kind of data analysis on it.

    This issue is resolved in this release.

  • The virtual machine might become unresponsive due to an active memory drop

    If the active memory of a virtual machine that runs on an ESXi host falls below 1% and drops to zero, the host might start reclaiming memory even if the host has enough free memory.

    This issue is resolved in this release.

  • Virtual Machine stops responding during snapshot consolidation

    During snapshot consolidation, a precise calculation might be performed to determine the storage space required to perform the consolidation. This precise calculation can cause the virtual machine to stop responding, because it takes a long time to complete. 

    This issue is resolved in this release.

  • Wrong NUMA placement of a preallocated virtual machine leading to sub-optimal performance

    A preallocated VM allocates all of its memory at power-on. However, the scheduler might pick a wrong NUMA node for this initial allocation. In particular, the numa.nodeAffinity vmx option might not be honored, and the guest OS might see a performance degradation as a result.

    Virtual machines configured with latency sensitivity set to high, or with a passthrough device, are typically preallocated.

    This issue is resolved in this release.
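
    For context, the settings mentioned above live in the virtual machine's .vmx file; an illustrative fragment (the node number and values depend on your host topology) looks like:

    numa.nodeAffinity = "0"
    sched.cpu.latencySensitivity = "high"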

  • Virtual machine might become unresponsive

    When you take a snapshot of a virtual machine, the virtual machine might become unresponsive. 

    This issue is resolved in this release.

  • vSphere Storage vMotion might fail with an error message if it takes more than 5 minutes

    The destination virtual machine of the vSphere Storage vMotion is incorrectly stopped by a periodic configuration validation for the virtual machine. A vSphere Storage vMotion that takes more than 5 minutes fails with the message The source detected that the destination failed to resume.
    The VMkernel log from the ESXi host contains the message D: Migration cleanup initiated, the VMX has exited unexpectedly. Check the VMX log for more details.

    This issue is resolved in this release. 

  • The virtual machine management stack does not handle correctly a backing file that is specified as a generic FileBackingInfo and results in the virtual machine not being reconfigured properly

    Attaching a disk in a different folder than the virtual machine's folder while the VM is powered on might fail, if the operation is initiated by using the vSphere API directly with a ConfigSpec that specifies the disk backing file using the generic vim.vm.Device.VirtualDevice.FileBackingInfo class, instead of a disk type specific backing class such as vim.vm.Device.VirtualDisk.FlatVer2BackingInfo or vim.vm.Device.VirtualDisk.SeSparseBackingInfo.

    This issue is resolved in this release.

  • A vMotion migration of a virtual machine gets suspended for some time, and then fails with a timeout

    If a virtual machine has a driver (especially a graphics driver) or an application that pins too much memory, it creates sticky pages in the VM. When such a VM is about to be migrated with vMotion to another host, the migration process is suspended and later fails because of an incorrect computation of pending I/O.

    This issue is resolved in this release.

  • A reconfigure operation of a powered-on virtual machine that sets an extraConfig option with an integer value might fail with SystemError

    The reconfigure operation stops responding with SystemError under the following conditions: 

    • The virtual machine is powered on.
    • The ConfigSpec includes an extraConfig option with an integer value.


    The SystemError is triggered by a TypeMismatchException, which can be seen in the hostd log on the ESXi host with the message: 

    Unexpected exception during reconfigure: (vim.vm.ConfigSpec) { } Type Mismatch: expected: N5Vmomi9PrimitiveISsEE, found: N5Vmomi9PrimitiveIiEE. 

    This issue is resolved in this release. 

  • Digest VMDK files are not deleted from the VM folder when you delete a VM

    When you create a linked clone from a digest VMDK file, vCenter Server marks the digest disk file as non-deletable. Thus, when you delete the respective VM, the digest VMDK file is not deleted from the VM folder because of the ddb.deletable = FALSE ddb entry in the descriptor file.

    This issue is resolved in this release.
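
    For reference, the flag appears in the descriptor file of the digest VMDK; an illustrative fragment looks like:

    # Disk DescriptorFile
    version=1
    ...
    ddb.deletable = "FALSE"

    With this fix, deleting the VM removes the digest VMDK file as expected.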

Virtual Volumes Issues
  • Non-Latin characters might be displayed incorrectly in VM storage profile names

    UTF-8 characters are not handled properly before being passed to a VVol VASA provider. As a result, VM storage profiles that use international characters are either not recognized by the VASA provider, or are treated or displayed incorrectly.

    This issue is resolved in this release.

VMware HA and Fault Tolerance Issues
  • vSphere Guest Application Monitoring SDK fails for VMs with vSphere Fault Tolerance enabled

    When vSphere FT is enabled on a vSphere HA-protected VM where the vSphere Guest Application Monitor is installed, the vSphere Guest Application Monitoring SDK might fail.

    This issue is resolved in this release.

  • An ESXi host might fail with a purple screen when a Fault Tolerance Secondary virtual machine fails to power on and the host runs out of memory

    When a virtual machine has Fault Tolerance enabled and its Secondary VM is powered on from an ESXi host with insufficient memory, the Secondary VM cannot power on and the ESXi host might fail with a purple screen.

    This issue is resolved in this release.

VMware Tools
  • The guestinfo.toolsInstallErrCode variable is not cleared on Guest OS reboot when installing VMware Tools

    If installation of VMware Tools requires a reboot to complete, the guest variable guestinfo.toolsInstallErrCode is set to 1603. The variable is not cleared by rebooting the Guest OS.

    This issue is resolved in this release.

  • VMware Tools version 10.1.7 included

    This release includes the VMware Tools version 10.1.7. Refer to the VMware Tools 10.1.7 Release Notes for further details.

vSAN Issues
  • An ESXi host fails with purple diagnostic screen when mounting a vSAN disk group

    Due to an internal race condition in vSAN, an ESXi host might fail with a purple diagnostic screen when you attempt to mount a vSAN disk group.

    This issue is resolved in this release.

  • Using objtool on a vSAN witness host causes an ESXi host to fail with a purple diagnostic screen

    If you use objtool on a vSAN witness host, it performs an I/O control (ioctl) call which leads to a NULL pointer in the ESXi host and the host crashes.

    This issue is resolved in this release.

  • Hosts in a vSAN cluster have high congestion which leads to host disconnects

    When vSAN components with invalid metadata are encountered while an ESXi host is booting, a leak of reference counts to SSD blocks can occur. If these components are removed by policy change, disk decommission, or other method, the leaked reference counts cause the next I/O to the SSD block to get stuck. The log files can build up, which causes high congestion and host disconnects. 

    This issue is resolved in this release.

  • Cannot enable vSAN or add an ESXi host into a vSAN cluster due to corrupted disks

    When you enable vSAN or add a host to a vSAN cluster, the operation might fail if there are corrupted storage devices on the host. Python zdumps are present on the host after the operation, and the vdq -q command fails with a core dump on the affected host.

    This issue is resolved in this release.

  • vSAN Configuration Assist issues a physical NIC warning for lack of redundancy when LAG is configured as the active uplink

    When the uplink port is a member of a Link Aggregation Group (LAG), the LAG provides redundancy. If the Uplink port number is 1, vSAN Configuration Assist issues a warning that the physical NIC lacks redundancy. 

    This issue is resolved in this release.

  • vSAN cluster becomes partitioned after the member hosts and vCenter Server reboot

    If the hosts in a unicast vSAN cluster and the vCenter Server are rebooted at the same time, the cluster might become partitioned. The vCenter Server does not properly handle unstable vpxd property updates during a simultaneous reboot of hosts and vCenter Server. 

    This issue is resolved in this release.

  • An ESXi host fails with a purple diagnostic screen due to incorrect adjustment of read cache quota

    The vSAN mechanism that controls read cache quota might make incorrect adjustments that result in a host failure with a purple diagnostic screen.

    This issue is resolved in this release.

  • Large file system overhead reported by the vSAN capacity monitor

    When deduplication and compression are enabled on a vSAN cluster, the Used Capacity Breakdown (Monitor > vSAN > Capacity) incorrectly displays the percentage of storage capacity used for file system overhead. This number does not reflect the actual capacity being used for file system activities. The fix makes the display correctly reflect the file system overhead for a vSAN cluster with deduplication and compression enabled.

    This issue is resolved in this release.

  • vSAN health check reports CLOMD liveness issue due to swap objects with size of 0 bytes

    If a vSAN cluster has objects with size of 0 bytes, and those objects have any components in need of repair, CLOMD might crash. The CLOMD log in /var/run/log/clomd.log might display logs similar to the following: 

    2017-04-19T03:59:32.403Z 120360 (482850097440)(opID:1804289387)CLOMProcessWorkItem: Op REPAIR starts:1804289387
    2017-04-19T03:59:32.403Z 120360 (482850097440)(opID:1804289387)CLOMReconfigure: Reconfiguring ae9cf658-cd5e-dbd4-668d-020010a45c75 workItem type REPAIR

    2017-04-19T03:59:32.408Z 120360 (482850097440)(opID:1804289387)CLOMReplacementPreWorkRepair: Repair needed. 1 absent/degraded data components for ae9cf658-cd5e-dbd4-668d-020010a45c75 found

    The vSAN health check reports a CLOMD liveness issue. Each time CLOMD is restarted it crashes while attempting to repair the affected object. Swap objects are the only vSAN objects that can have size of zero bytes.

    This issue is resolved in this release.

  • vSphere API FileManager.DeleteDatastoreFile_Task fails to delete DOM objects in vSAN

    If you delete vmdks from the vSAN datastore using FileManager.DeleteDatastoreFile_Task API, through filebrowser or SDK scripts, the underlying DOM objects are not deleted.

    These objects can build up over time and take up space on the vSAN datastore.

    This issue is resolved in this release.

  • A host in a vSAN cluster fails with a purple diagnostic screen due to internal race condition

    When a host in a vSAN cluster reboots, a race condition might occur between PLOG relog code and vSAN device discovery code. This condition can corrupt memory tables and cause the ESXi host to fail and display a purple diagnostic screen.  

    This issue is resolved in this release.

vSphere Command-Line Interface
  • You cannot change some ESXi advanced settings through the vSphere Web Client or esxcli commands

    You cannot change some ESXi advanced settings such as /Net/NetPktSlabFreePercentThreshold because of the wrong default value. This problem is resolved by changing the default value. 

    This issue is resolved in this release.

Known Issues

The known issues are grouped as follows.

Installation Issues
  • The installation of an unsigned VIB with the --no-sig-check option on a secure boot enabled ESXi host might fail

    Installing unsigned VIBs on a secure-boot ESXi host is prohibited, because unsigned VIBs prevent the system from booting. The VIB signature check is mandatory on a secure-boot ESXi host.

    Workaround: Use only signed VIBs on secure-boot ESXi hosts.

  • Attempts to install or upgrade an ESXi host with ESXCLI or vSphere PowerCLI commands might fail for the esx-base, vsan, and vsanhealth VIBs

    From ESXi 6.5 Update 1 onward, there is a dependency between the esx-tboot VIB and the esx-base VIB, and you must also include the esx-tboot VIB in the vib update command for a successful installation or upgrade of ESXi hosts.

    Workaround: Include the esx-tboot VIB in the vib update command. For example:

    esxcli software vib update -n esx-base -n vsan -n vsanhealth -n esx-tboot -d /vmfs/volumes/datastore1/update-from-esxi6.5-6.5_update01.zip

  • Remediation against an ESXi 6.5 Update 1 baseline might fail on an ESXi host with secure boot enabled

    If you use vSphere Update Manager to upgrade an ESXi host by an upgrade baseline containing an ESXi 6.5 Update 1 image, the upgrade might fail if the host has secure boot enabled.

    Workaround: You can use a vSphere Update Manager patch baseline instead of a host upgrade baseline.

Miscellaneous Issues
  • Changes in BIOS, BMC, and PLSA firmware versions are not displayed on the Direct Console User Interface (DCUI)

    The DCUI provides information about the hardware versions, but when the versions change, the DCUI remains blank.

    Workaround: Start the WBEM services through the command: esxcli system wbem set --enabled

  • High read load of VMware Tools ISO images might cause corruption of flash media

    In VDI environments, the high read load on the VMware Tools images can result in corruption of the flash media.

    Workaround: 

    You can copy all the VMware Tools data into its own ramdisk. As a result, the data is read from the flash media only once per boot; all other reads go to the ramdisk. vCenter Server Agent (vpxa) accesses this data through the /vmimages directory, which has symlinks that point to productLocker.

    To activate this feature, follow the steps:

    1. Run the following command to set the advanced ToolsRamdisk option to 1:

      esxcli system settings advanced set -o /UserVars/ToolsRamdisk -i 1

    2. Reboot the host. 

Networking Issues
  • The Intel i40en driver does not allow you to disable the hardware RX VLAN tag stripping capability

    With the Intel i40en driver, the hardware always strips the RX VLAN tag, and you cannot disable this behavior using the following vsish command:

    vsish -e set /net/pNics/vmnicX/hwCapabilities/CAP_VLAN_RX 0

    Workaround: None.

  • Adding a Physical Uplink to a Virtual Switch in the VMware Host Client Might Fail

    Adding a physical network adapter, or uplink, to a virtual switch in the VMware Host Client might fail if you select Networking > Virtual switches > Add uplink.

    Workaround:

    1. Right-click the virtual switch that you want to edit and click Edit Settings.
    2. Click Add uplink to add a new physical uplink to the virtual switch.
    3. Click Save.

Storage Issues
  • VMFS datastore is not available after rebooting an ESXi host for ATS-only array devices

    If the target connected to the ESXi host supports only implicit ALUA and has only standby paths, the device is registered, but device attributes related to media access are not populated. If an active path is added after registration, it can take up to 5 minutes for the VAAI attributes to refresh. As a result, VMFS volumes configured as ATS-only might fail to mount until the VAAI update occurs.

    Workaround: If the target supports only implicit ALUA and has only standby paths, enable the FailDiskRegistration config option on the host by using the following ESXi CLI command: esxcli system settings advanced set -o /Disk/FailDiskRegistration -i 1

    For the setting to take effect, you must set the config option and reboot the host. This delays the registration of the devices until an active path is seen.

    Note: If you enable this config option in an environment with both implicit and explicit ALUA target devices connected to an ESXi host, the device registration of explicit ALUA standby path devices might be delayed as well.

  • The LED management commands might fail when you turn LEDs on or off in HBA mode

    When you run the following LED management commands to turn LEDs on or off

    esxcli storage core device set -l locator -d <device id>
    esxcli storage core device set -l error -d <device id>
    esxcli storage core device set -l off -d <device id>

    they might fail in the HBA mode of some HP Smart Array controllers, for example the P440ar and HP H240. In addition, the controller might stop responding, causing the following management commands to fail:

    LED management:
    esxcli storage core device set -l locator -d <device id>
    esxcli storage core device set -l error -d <device id>
    esxcli storage core device set -l off -d <device id>

    Get disk location:esxcli storage core device physical get -d <device id>

    This problem is firmware specific and it is triggered only by LED management commands in the HBA mode. There is no such issue in the RAID mode.

    Workaround: Retry the management command until success. 

  • The vSAN disk serviceability plugin lsu-lsi-lsi-mr3-plugin for the lsi_mr3 driver might fail with Can not get device info ... or not well-formed ... error

    The vSAN disk serviceability plugin provides extended management and information support for disks. In vSphere 6.5 Update 1, the following commands:

    Get disk location:
    esxcli storage core device physical get -d <device UID> for JBOD mode disk.
    esxcli storage core device raid list -d <device UID> for RAID mode disk.

    Led management:
    esxcli storage core device set --led-state=locator --led-duration=<seconds> --device=<device UID>
    esxcli storage core device set --led-state=error --led-duration=<seconds> --device=<device UID>
    esxcli storage core device set --led-state=off --device=<device UID>

    might fail with one of the following errors:
    Plugin lsu-lsi-mr3-plugin cannot get information for device with name <NAA ID>.
    The error is: Can not get device info ... or not well-formed (invalid token): ...

    Workaround:
    Use the localcli command instead of the esxcli command on the server that experiences this issue.
    The localcli command produces the correct output, while esxcli might randomly fail.

Virtual Machine Management Issues
  • VMware Host Client runs actions on all virtual machines on an ESXi host instead of on a range selected with Search

    When you use Search to select a range of virtual machines in VMware Host Client, if you select the check box to select all and run an action, such as power off, power on, or delete, the action might affect all virtual machines on the host instead of only your selection.

    Workaround: You must select the virtual machines individually.

Known Issues from earlier releases

The earlier known issues are grouped as follows.

    Auto Deploy and Image Builder
    • You cannot apply an SSH root user key from an ESXi 6.0 host profile to an ESXi 6.5 host
      In ESXi 6.5, the authorized keys host profile functionality for managing the root user SSH keys is deprecated. However, the deprecation version was set to 6.0 instead of 6.5. As a result, you cannot apply the root user SSH key from a 6.0 host profile to a host with version 6.5.

      Workaround: To be able to use a host profile to configure the SSH key for the root user, you must create a new 6.5 host profile.

    • Applying a host profile by rebooting the target stateless ESXi host fails with an invalid file path error message
      When you first extract a host profile from a stateless host, edit it to create a new role with user input for the password policy and with stateless caching enabled, attach the profile to the host, and then update the password for the user and the role in the configuration, rebooting the host to apply the host profile fails with:

      ERROR: EngineModule::ApplyHostConfig. Exception: Invalid file path

      Workaround: You must apply the host profile directly to the host:

      1. Stop the Auto Deploy service and reboot the host.

      2. After the host boots, verify that the local user and role are present on the host.

      3. Log in with the credentials provided in the configuration.

    • Booting a stateless host by using Auto Deploy might fail
      A delay of about 11 to 16 seconds in processing the getnameinfo request leads to a failure in booting stateless hosts through Auto Deploy when the following conditions are met:

      • Local DNS caching is enabled for the stateless host by adding the resolve parameter to the hosts entry: hosts: files resolve dns. The hosts: files resolve dns entry is part of the Photon /etc/nsswitch.conf configuration file.
      • A NIC on the host gets its IP from DHCP and the same IP is not present in the DNS server.

      Workaround: On the vCenter Server Appliance, in the configuration file of the NIC that gets its IP from DHCP, set the UseDNS key to false:

      [DHCP]
      UseDNS=false
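
      The UseDNS key belongs in the systemd-networkd configuration of the appliance. A hypothetical fragment of /etc/systemd/network/10-eth0.network, where the file name and interface name are examples only:

```
[Match]
Name=eth0

[DHCP]
UseDNS=false
```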

    • A stateless ESXi host might remain in maintenance mode when you deploy it using vSphere Auto Deploy and the vSphere Distributed Switch property is specified in the Host Profile
      When you deploy a stateless ESXi host using vSphere Auto Deploy and the vSphere Distributed Switch property is specified in the host profile, the host enters maintenance mode while the host profile is applied. If the process fails, the host profile might not be applied and the host might not exit maintenance mode.

      Workaround: On the host profile page, manually remove the deployed host from maintenance mode and remediate it.

    • Performing remediation to apply the host profile settings on an iSCSI enabled cluster results in an error in the vSphere Web Client
      After you attach a host profile extracted from a host configured with a large number of LUNs to a cluster of iSCSI enabled hosts, and you remediate that cluster, the remediation process results in the following error message in the vSphere Web Client interface:

      Apply host profile operation in batch failed. com.vmware.vim.vmomi.client.exception.TransportProtocolException:org.apache.http.client.ClientProtocolException
      You can also see an error in the vpxd.log file:
      2016-07-25T12:06:01.214Z error vpxd[7FF1FE8FB700] [Originator@6876 sub=SoapAdapter] length of HTTP request body exceeds configured maximum 20000000
      When you remediate one cluster, the vSphere Web Client makes one API request with data for all hosts in that cluster. This request exceeds the maximum HTTP request size supported by VPXD.

      Workaround: Perform the following steps:

      1. Select fewer hosts in the Remediate wizard.

      2. After remediation completes, start another wizard for the remaining hosts.

    • The Auto Deploy option is not present in the vSphere Web Client after an upgrade or migration of the vCenter Server system from 5.5 or 6.0 to version 6.5
      After you upgrade or migrate a vCenter Server system from version 5.5 or 6.0 to version 6.5, the Auto Deploy option is not present in the Configure > Settings screen of the vSphere Web Client. The vSphere Auto Deploy service and the Image Builder service are installed but are not started automatically.

      Workaround: Perform the following steps:

      1. Start the Image Builder service manually.

      2. Log out and log back in to the vSphere Web Client.

      3. On the vSphere Web Client Home page, navigate to the vCenter Server system and select Configure > Settings to locate the Auto Deploy service.

    • If you attach more than 63 hosts to a host profile, the host customization page loads with an error
      If you extract a Host Profile from the reference host and attach more than 63 hosts to the host profile, the vCenter Server system becomes heavily loaded and the time for generating a host-specific answer file exceeds the time limit of 120 seconds. The customization page loads with an error:

      The query execution timed out because the backend property provider took more than 120 seconds

      Workaround: Attach the host profile to the cluster or the host without generating the customization data:

      1. Right-click the cluster or the host and select Host Profiles > Attach Host Profile.

      2. Select the profile you want to attach.

      3. Select the Skip Customizations check box. The vSphere Web Client interface does not call the RetrieveHostCustomizationsForProfile method to generate customization data.

      4. Populate the customization data:

      5. Right-click the host profile attached to the cluster or the host and select Export Host Customizations. This generates a CSV file with a customization entry for each host.

      6. Fill in the customization data in the CSV file.

      7. Right-click the host profile, select Edit Host Customizations and import the CSV file.

      8. Click Finish to save the customization data.

    • Auto Deploy with PXE boot of the ESXi installer on Intel XL710 (40GbE) network adapters fails
      When you use the preboot execution environment (PXE) to boot the ESXi installer over an Intel XL710 network adapter, the process of copying the ESXi image fails before control is transferred to the ESXi kernel. You get the following error:

      Decompressed MD5: 000000000000000000000
      Fatal error: 34(Unexpected EOF)
      serial log:
      ******************************************************************
      * Booting through VMware AutoDeploy...
      *
      * Machine attributes:
      * . asset=
      * . domain=eng.vmware.com
      * . hostname=prme-hwe-drv-8-dhcp173
      * . ipv4=10.24.87.173
      * . ipv6=fe80::6a05:caff:fe2d:5608
      * . mac=68:05:ca:2d:56:08
      * . model=PowerEdge R730
      * . oemstring=Dell System
      * . oemstring=5[0000]
      * . oemstring=14[1]
      * . oemstring=17[04C4B7E08854C657]
      * . oemstring=17[5F90B9D0CECE3B5A]
      * . oemstring=18[0]
      * . oemstring=19[1]
      * . oemstring=19[1]
      * . serial=3XJRR52
      * . uuid=4c4c4544-0058-4a10-8052-b3c04f523532
      * . vendor=Dell Inc.
      *
      * Image Profile: ESXi-6.5.0-4067802-standard
      * VC Host: None
      *
      * Bootloader VIB version: 6.5.0-0.0.4067802
      ******************************************************************
      /vmw/cache/d6/b46cc616433e9d62ab4d636bc7f749/mboot.c32.f70fd55f332c557878f1cf77edd9fbff... ok

      Scanning the local disk for cached image.
      If no image is found, the system will reboot in 20 seconds......
      <3>The system on the disk is not stateless cached.
      <3>Rebooting...

      Workaround: None.

    Backup and Restore Issues
    • After a file-based restore of a vCenter Server Appliance to a vCenter Server instance, operations in the vSphere Web Client such as configuring a high availability cluster or enabling SSH access to the appliance might fail
      In the process of restoring a vCenter Server instance, a new vCenter Server Appliance is deployed and the appliance HTTP server is started with a self-signed certificate. The restore process recovers the backed-up certificates but does not restart the appliance HTTP server. As a result, any operation that requires an internal API call to the appliance HTTP server fails.

      Workaround: After restoring the vCenter Server Appliance to a vCenter Server instance, you must log in to the appliance and restart its HTTP server by running the command service vami-lighttp restart.

    • Attempts to restore a Platform Services Controller appliance from a file-based backup fail if you have changed the number of vCPUs or the disk size of the appliance
      In vSphere 6.5, the Platform Services Controller appliance is deployed with 2 vCPUs and a 60 GB disk. Increasing the number of vCPUs or the disk size is unsupported. If you try to perform a file-based restore of a Platform Services Controller appliance with more than 2 vCPUs or more than 60 GB of disk, the vCenter Server Appliance installer fails with the error: No possible size matches your set of requirements.

      Workaround: Decrease the number of vCPUs to no more than 2 and the disk size to no more than 60 GB.

    • Restoring a vCenter Server Appliance with an external Platform Services Controller from an image-based backup does not start all vCenter Server services
      After you use vSphere Data Protection to restore a vCenter Server Appliance with an external Platform Services Controller, you must run the vcenter-restore script to complete the restore operation and start the vCenter Server services. The vcenter-restore execution might fail with the error message: Operation Failed. Please make sure the SSO username and password are correct and rerun the script. If problem persists, contact VMware support.

      Workaround: After the vcenter-restore execution has failed, run the service-control --start --all command to start all services.

      If the service-control --start --all execution fails, verify that you entered the correct vCenter Single Sign-On user name and password. You can also contact VMware Support.

    Boot from SAN Issues

    • Installing ESXi 6.5 on a Fibre Channel or iSCSI LUN with LUN ID greater than 255 is not supported
      vSphere 6.5 supports LUN IDs from 0 to 16383. However, due to adapter BIOS limitations, you cannot use LUNs with IDs greater than 255 for the boot from SAN installation.

      Workaround: For ESXi installation, use LUNs with IDs 255 or lower.

    iSCSI Issues

    • In vSphere 6.5, the name assigned to the iSCSI software adapter is different from the earlier releases
      After you upgrade to the vSphere 6.5 release, the name of the existing software iSCSI adapter, vmhbaXX, changes. This change affects any scripts that use hard-coded values for the name of the adapter. Because VMware does not guarantee that the adapter name remains the same across releases, you should not hard code the name in the scripts. The name change does not affect the behavior of the iSCSI software adapter.

      Workaround: None.

    Migration Issues
    • Attempts to migrate a Windows installation of vCenter Server or Platform Services Controller to an appliance might fail with an error message about DNS configuration setting if the source Windows installation is set with static IPv4 and static IPv6 configuration
      Migrating a Windows installation that is configured with both IPv4 and IPv6 static addresses might fail with the error message: Error setting DNS configuration. Details : Operation Failed.. Code: com.vmware.applmgmt.err_operation_failed.

      The log file /var/log/vmware/applmgmt/vami.log of the newly deployed appliance contains the following entries:
      INFO:vmware.appliance.networking.utils:Running command: ['/usr/bin/netmgr', 'dns_servers', '--set', '--mode', 'static', '--servers', 'IPv6_address,IPv4_address']
      INFO:vmware.appliance.networking.utils:output:
      error:
      returncode: 17
      ERROR:vmware.appliance.networking.impl:['/usr/bin/netmgr', 'dns_servers', '--set', '--mode', 'static', '--servers', 'IPv6_address,IPv4_address'] error , rc=17

      Workaround:

      1. Delete the newly deployed appliance and restore the source Windows installation.

      2. On the source Windows installation, disable either the IPv6 or the IPv4 configuration.

      3. From the DNS server, delete the entry for the IPv6 or IPv4 address that you disabled.

      4. Retry the migration.

      5. (Optional) After the migration finishes, add back the DNS entry and, on the migrated appliance, set the IPv6 or IPv4 address that you disabled.

    • VMware Migration Assistant initialization fails when migrating vCenter Server 6.0 with an external SQL database in Windows Integrated Authentication mode
      If you migrate from vCenter Server on Windows with an external Microsoft SQL Server database configured with "Integrated Windows Authentication", and the user running the migration lacks the "Replace a process level token" privilege, VMware Migration Assistant initialization fails with a confusing error message that does not indicate the cause of the failure.

      For example: Failed to run pre-migration checks

      The requirements collection log for the vCenter Server database is located at:
      %temp/vcsMigration/CollectRequirements_com.vmware.vcdb_2016_02_04_17_50.log

      The log contains the entry:
      2016-02-04T12:20:47.868Z ERROR vcdb.const Error while validating source vCenter Server database: "[Error 1314] CreateProcessAsUser: 'A required privilege is not held by the client.' "

      Workaround: Verify that the user running the migration has the "Replace a process level token" privilege. A guide to customizing this setting is available in the Microsoft online documentation. After you verify that the permissions are correct, re-run the migration.

    Miscellaneous Issues
    • The lsu-hpsa plugin does not work with the native hpsa driver (nhpsa)
      The lsu-hpsa plugin does not work with the native hpsa driver (nhpsa), because the nhpsa driver is not compatible with the current HPSA management tool (hpssacli), which the plugin uses. You might receive the following error messages:

      # esxcli storage core device set -d naa.600508b1001c7dce62f9307c0604e53b -l=locator
      Unable to set device's LED state to locator. Error was: HPSSACLI call in HPSAPlugin_SetLedState exited with code 127! (from lsu-hpsa-plugin)

      # esxcli storage core device physical get -d naa.50004cf211e636a7
      Plugin lsu-hpsa-plugin cannot get information for device with name naa.50004cf211e636a7. Error was: HPSSACLI call in Cache_Update exited with code 127!

      # esxcli storage core device raid list -d naa.600508b1001c7dce62f9307c0604e53b
      Plugin lsu-hpsa-plugin cannot get information for device with name naa.600508b1001c7dce62f9307c0604e53b. Error was: HPSSACLI call in Cache_Update exited with code 127!

      Workaround: Replace the native hpsa driver (nhpsa) with the vmklinux driver.

    Miscellaneous Storage Issues

    • If you use an SESparse VMDK, formatting a Windows or Linux file system in the VM takes longer
      When you format a virtual disk with a Windows or Linux file system, the process might take longer than usual. This occurs if the virtual disk uses the SESparse format.

      Workaround: Before formatting, disable the UNMAP operation on the guest operating system. You can re-enable the operation after the formatting process completes.
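
      How you disable UNMAP depends on the guest OS. A hedged sketch of the usual guest-side commands, captured here as strings for reference; device names are examples, and on Linux the common alternative is to format without issuing discards rather than toggling a global setting:

```shell
# Hedged sketch: guest-side commands to suspend UNMAP around the format.
WIN_OFF='fsutil behavior set DisableDeleteNotify 1'   # Windows: run before formatting
WIN_ON='fsutil behavior set DisableDeleteNotify 0'    # Windows: re-enable afterwards
LINUX_FORMAT='mkfs.ext4 -E nodiscard /dev/sdX1'       # Linux: skip discards during mkfs

# Print the sequence for a Windows guest:
printf '%s\n' "$WIN_OFF" "format the volume" "$WIN_ON"
```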

    • Attempts to use the VMW_SATP_LOCAL plug-in for shared remote SAS devices might trigger problems and failures
      In releases earlier than ESXi 6.5, SAS devices are marked as remote despite being claimed by the VMW_SATP_LOCAL plug-in. In ESXi 6.5, all devices claimed by VMW_SATP_LOCAL are marked as local even when they are external. As a result, when you upgrade to ESXi 6.5 from an earlier release, any existing SAS devices that were previously marked as remote change their status to local. This change affects shared datastores deployed on these devices and might cause problems and unpredictable behavior.
      In addition, problems occur if you incorrectly use the devices that are now marked as local, but are in fact shared and external, for certain features. For example, when you allow creation of the VFAT file system, or use the devices for Virtual SAN.

      Workaround: Do not use the VMW_SATP_LOCAL plug-in for remote external SAS devices. Instead, use another applicable SATP from the supported list or a vendor-specific SATP.

    • Logging out of the vSphere Web Client while uploading a file to a datastore cancels the upload and leaves an incomplete file
      Uploading large files to a datastore takes some time. If you log out while uploading the file, the upload is cancelled without warning. The partially uploaded file might remain on the datastore.

      Workaround: Do not log out during file uploads. If the datastore contains the incomplete file, manually delete the file from the datastore.

    Networking Issues
    • Connection fails when a vNIC is connected to a vSphere Standard Switch or a vSphere Distributed Switch with Network I/O Control disabled
      If a vNIC is configured with a reservation greater than 0 and you connect it to a vSphere Standard Switch, or to a vSphere Distributed Switch with Network I/O Control disabled, the connection fails with the following error message: A specified parameter was not correct: spec.deviceChange.device.backing.

      Workaround: None.

    • Network becomes unavailable with full passthrough devices
      If the native ntg3 driver is used on a passthrough Broadcom Gigabit Ethernet adapter, the network connection becomes unavailable.

      Workaround:

      • Run the ntg3 driver in legacy mode:

        1. Run the esxcli system module parameters set -m ntg3 -p intrMode=0 command.

        2. Reboot the host.

      • Use the tg3 vmklinux driver as the default driver, instead of the native ntg3 driver.

    • A user-space RDMA application of a virtual machine fails to send or receive data
      If a guest user-space RDMA application uses Unreliable Datagram queue pairs and an IP-based group ID to communicate with other virtual machines, the RDMA application fails to send or receive any data or work completion entries. Any work requests that are enqueued on the queue pairs are removed and not completed.

      Workaround: None.

    • Packets are lost with IBM system servers that have a USB NIC device under Universal Host Controller Interface
      Some IBM system servers, such as the IBM BladeCenter HS22, that have a USB NIC device under the Universal Host Controller Interface (UHCI) experience networking issues when they run the native USB driver vmkusb. Packets are lost on the USB NIC device under UHCI when you use the vmkusb driver for the USB NIC.

      Workaround: Disable the USB native vmkusb driver and switch to the legacy vmklinux USB driver:

      1. Run the esxcli system module set -m=vmkusb -e=FALSE command to disable the USB native driver vmkusb.

      2. Reboot the host. At reboot, the legacy USB driver loads.

    • Guest kernel applications receive unexpected completion entries for non-signaled fast-register work requests
      RDMA communication between two virtual machines that reside on a host with an active RDMA uplink sometimes triggers spurious completion entries in the guest kernel applications. Completion entries are wrongly triggered by non-signaled fast-register work requests that are issued by the kernel-level RDMA Upper Layer Protocol (ULP) of a guest. This can cause completion queue overflows in the kernel ULP.

      Workaround: To avoid triggering extraneous completions, place the virtual machines that use fast-register work requests on separate hosts.

    • A virtual machine with a paravirtual RDMA device becomes inaccessible during a snapshot operation or a vMotion migration
      Virtual machines with a paravirtual RDMA (PVRDMA) device run RDMA applications to communicate with peer queue pairs. If an RDMA application attempts to communicate with a non-existing peer queue number, the PVRDMA device might wait for a response from the peer indefinitely. As a result, the virtual machine becomes inaccessible during a snapshot operation or migration, if at the same time the RDMA application is still running.

      Workaround: Before you take a snapshot or perform a vMotion migration with a PVRDMA device, shut down the RDMA applications that are using a non-existing peer queue pair number.

    • Netdump transfer of ESXi core takes a couple of hours to complete
      With hosts that use the Intel X710 or Intel X710L NICs, the transfer of the ESXi core to a Netdump server takes a couple of hours. The action is performed successfully, but the Netdump transfer is much faster with other NICs.

      Workaround: None.

    • Kernel-level RDMA Upper Layer Protocols do not work correctly in a guest virtual machine with a paravirtual RDMA device
      Kernel-level RDMA Upper Layer Protocols, such as NFS or iSER, try to create more resources than the paravirtual RDMA (PVRDMA) device can provide. As a result, the kernel modules fail to load. However, the RDMA Connection Manager (RDMACM) continues working.

      Workaround: None.

    • The Down/Up command brings a NIC up after more than 60 seconds
      When you set up a Virtual Extensible LAN (VXLAN) and start transferring VXLAN traffic over a NIC that uses the nmlx4_en driver, the esxcli network nic up command might fail to bring the NIC up within 60 seconds. The NIC eventually comes up after a delay. The slower execution of the command occurs when you run the Down/Up and Unlink/Link commands several times in a row.

      Workaround: None.

    • 40-Gigabit nmlx4_en NIC does not support Wake-On-LAN (WOL)
      WOL is supported only on 10-Gigabit HP flexible LOM cards (HP-branded Mellanox cards). HP does not support WOL on the 40-Gigabit cards.

      Workaround: Use 10-Gigabit HP flexible LOM cards, if you want to use WOL on nmlx4_en cards.

    • NFS shares fail to mount when using Kerberos credentials to authenticate with Active Directory
      If you join an ESXi system to Active Directory and upgrade to the current release, ESXi fails to mount NFS shares by using the Kerberos keytab if the Active Directory instance has stopped supporting RC4 encryption. This happens because the keytab is written only when ESXi joins the domain, and the Likewise stack used on ESXi in earlier releases does not support AES encryption.

      Workaround: You must re-join Active Directory so that the system keytab is updated.

    • Intel 82579LM or I217 vmnic might encounter an unrecoverable hang
      The problem is triggered on an Intel 82579LM or I217 vmnic under heavy traffic, such as 4 pairs of virtual machines running netperf, combined with repeatedly disabling and re-enabling VMkernel software emulation of the hardware offload capability. After some cycles of disabling and re-enabling, the hardware enters a hang state.

      Workaround:

      1. Avoid disabling and re-enabling VMkernel software emulation of hardware offload capabilities on the Intel 82579LM or I217 adapter.

      2. If you encounter this hang, reboot the host.

    • Intel i219 NIC might hang and lose networking connectivity
      The Intel i219 family NICs, displayed as "Intel Corporation Ethernet Connection I219-LM" (sometimes with a "V" suffix), might enter a hardware hang state that causes the system to lose networking connectivity on the port. The issue is triggered by disruptive operations while traffic is going through the NIC. For example, you might encounter this error if you bring down a link with the command esxcli network nic down -n vmnicX while copying a file over the i219 port. The problem might also be triggered by heavy traffic, such as 4 pairs of virtual machines running netperf, combined with repeatedly disabling and re-enabling VMkernel software emulation of the hardware offload capability. The NIC is not functional until you reboot the host.

      Workaround:

      1. Avoid using an i219 NIC port for critical jobs.

      2. Avoid disruptive operations on i219 ports.

      3. Avoid disabling and re-enabling VMkernel software emulation of hardware offload capabilities on the Intel i219 adapter.

      4. If you must use an i219 port, set up failover NIC teaming with a non-i219LM port.
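
      Workaround 4 can be sketched as a dry run; the switch and vmnic names are examples, and the echo prefix keeps the command from executing until you remove it:

```shell
#!/bin/sh
# Dry-run sketch: team the i219 uplink with a non-i219 uplink on a
# standard switch so traffic fails over away from the i219 port.
VSWITCH="vSwitch0"
ACTIVE="vmnic0"     # example non-i219 uplink
STANDBY="vmnic1"    # example i219 uplink kept as standby

echo esxcli network vswitch standard policy failover set \
  --vswitch-name="$VSWITCH" --active-uplinks="$ACTIVE" --standby-uplinks="$STANDBY"
```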

    • The bnx2x driver causes a purple screen error message to appear during NIC failover or failback
      When you disable or enable VMkernel ports and change the failover order of NICs, the bnx2x driver causes a purple screen error message.

      Workaround: Use async driver.

    NFS Issues

    • The NFS 4.1 client loses synchronization with the NFS server when attempting to create new sessions
      This problem occurs after a period of interrupted connectivity with the NFS server, or when NFS I/Os do not get a response. When this issue occurs, the vmkwarning.log file contains a throttled series of warning messages similar to the following:
      NFS41 CREATE_SESSION request failed with NFS4ERR_SEQ_MISORDERED

      Workaround: Perform the following steps:

      1. Unmount the affected NFS 4.1 datastores. If no files are open when you unmount, this operation succeeds and the NFS 4.1 client module cleans up its internal state. You can then remount the datastores that were unmounted and resume normal operation.

      2. If unmounting the datastore does not solve the problem, disable the NICs connecting to the IP addresses of the NFS shares. Keep the NICs disabled for as long as it is required for the server lease times to expire, and then bring the NICs back up. Normal operations should resume.

      3. If the preceding steps fail, reboot the ESXi host.
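
      Step 2 above can be sketched as a dry run; the vmnic names and the 90-second lease value are assumptions for illustration (remove the echo prefixes to execute on the host):

```shell
#!/bin/sh
# Dry-run sketch of step 2: bring the NICs that carry the NFS share IPs
# down, wait out the server lease, then bring them back up.
NICS="vmnic2 vmnic3"      # example uplinks carrying NFS traffic
LEASE_SECONDS=90          # assumed NFS server lease time

for nic in $NICS; do
  echo esxcli network nic down -n "$nic"
done
echo sleep "$LEASE_SECONDS"
for nic in $NICS; do
  echo esxcli network nic up -n "$nic"
done
```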

    • After an ESXi reboot, NFS 4.1 datastores exported by EMC VNX storage fail to mount
      Due to a potential problem with EMC VNX, NFS 4.1 remount requests might fail after an ESXi host reboot. As a result, any existing NFS 4.1 datastores exported by this storage appear as unmounted.

      Workaround: Wait for the lease time of 90 seconds to expire and manually remount the volume.

    • Mounting the same NFS datastore with different labels might trigger failures when you attempt to mount another datastore later
      The problem occurs when you use the esxcli command to mount the same NFS datastore on different ESXi hosts. If you use different labels, for example A and B, vCenter Server renames B to A, so that the datastore has consistent labels across the hosts. If you later attempt to mount a new datastore and use the B label, your ESXi host fails. This problem occurs only when you mount the NFS datastore with the esxcli command. It does not affect mounting through the vSphere Web Client.

      Workaround: When mounting the same NFS datastore with the esxcli commands, make sure to use consistent labels across the hosts.
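
      The consistent-label rule can be sketched like this; the host names, NFS server, share path, and label are all examples:

```shell
#!/bin/sh
# Sketch: build one mount command and reuse it verbatim on every host so
# the NFS datastore keeps the same label everywhere.
NFS_SERVER="nfs.example.com"
SHARE="/export/ds01"
LABEL="nfs-ds01"
MOUNT_CMD="esxcli storage nfs add --host=$NFS_SERVER --share=$SHARE --volume-name=$LABEL"

for esx in esx01.example.com esx02.example.com; do
  echo "$esx: $MOUNT_CMD"
  # ssh "root@$esx" "$MOUNT_CMD"   # actual invocation, shown commented out
done
```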

    • An NFS 4.1 datastore exported from a VNX server might become inaccessible
      When the NFS 4.1 server on the VNX disconnects from the ESXi host, the NFS 4.1 datastore might become inaccessible. This issue occurs if the VNX server unexpectedly changes its major number. The NFS 4.1 client does not expect the server major number to change after it establishes connectivity with the server.

      Workaround: Remove all datastores exported by the server and then remount them.

    Security Features Issues
    • Issues after running the TLS Reconfigurator tool if smart card authentication is enabled
      vSphere 6.5 includes a TLS Reconfigurator tool that you can use to manage TLS configuration. You install the tool explicitly. The tool is documented in VMware KB article 2147469.
      If you run the tool in a vSphere 6.5 environment where smart card authentication is enabled on the Platform Services Controller, services fail to start and an exception results. The error occurs when you run the tool to change the TLS configuration on the Platform Services Controller for the Content Manager services (Windows Platform Services Controller) or the vmware-stsd service (Platform Services Controller appliance).

      Workaround:

      1. Open the server.xml file for editing. The file is in the following location:

         Windows: C:\ProgramData\VMware\vCenterServer\runtime\VMwareSTSService\con

         Linux: /usr/lib/vmware-sso/vmware-sts/conf

      2. Of the two entries for the Server tag, remove the first entry.

      3. Restart all the services.

      In the following example, you remove the first Server entry but not the second Server entry.

      <!--Remove the first Server entry-->
      <Server port="${base.shutdown.port}" shutdown="SHUTDOWN">
      <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener"/>
      <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener"/>
      ...
      </Server>
      <!--Keep the second Server entry-->
      <Server port="${base.shutdown.port}" shutdown="SHUTDOWN">
      <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
      <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
      ...
      </Server>

    • Error when adding an ESXi host in environments with multiple vCenter Server and Platform Services Controller instances behind a load balancer
      You set up an environment with multiple vCenter Server instances and multiple Platform Services Controller instances behind a load balancer. The environment uses VMCA as an intermediate CA. When you attempt to add an ESXi host, the following error might result:
      Unable to get signed certificate for host name : Error: Failed to connect to the remote host, reason = rpc_s_too_many_rem_connects

      When you attempt to retrieve the root CA on the Platform Services Controller, the command fails, as follows:
      /usr/lib/vmware-vmca/bin/certool --getrootca --server=wx-sxxx-sxxx.x.x.x
      Status : Failed
      Error Code : 382312518
      Error Message : Failed to connect to the remote host, reason = rpc_s_too_many_rem_connects (0x16c9a046)

      Workaround: Restart the VMCA service on the Platform Services Controller.

    • vCenter Server services do not start after reboot or failover if Platform Services Controller node is unavailable
      If a Platform Services Controller node is temporarily unavailable, and if vCenter Server is rebooted or if a vCenter HA failover occurs during that time, vCenter services fail to start.

      Workaround: Restore the Platform Services Controller node and reboot vCenter Server again, or start all services from the command line using the following command:
      service-control --start --all

    • STS daemon does not start on vCenter Server Appliance
      After you install a vCenter Server Appliance, or after you upgrade or migrate to vCenter Server Appliance 6.5, the Secure Token Service daemon sometimes does not start. This issue is rare, but it has been observed.

      Workaround: None. This issue has been observed when DNS did not resolve localhost to the loopback address in an IPv6 environment. Reviewing your network configuration might help you resolve this issue.

    • Error 400 during attempt to log in to vCenter Server from the vSphere Web Client
      You log in to vCenter Server from the vSphere Web Client and log out. If, after 8 hours or more, you attempt to log in from the same browser tab, the following error results.
      400 An Error occurred from SSO. urn:oasis:names:tc:SAML:2.0:status:Requester, sub status:null

      Workaround: Close the browser or the browser tab and log in again.

    • An encrypted VM goes into locked (invalid) state when vCenter Server is restored to a previously backed up state
      In most cases, an encrypted virtual machine is added to vCenter Server using the vSphere Web Client or the vSphere API.
      However, the virtual machine can be added in other ways, for example, when the encrypted virtual machine is added by a backup operation using file-based backup and restore.
      In such a case, vCenter Server does not push the encryption keys to the ESXi host. As a result, the virtual machine is locked (invalid).
      This is a security feature that prevents an unauthorized encrypted virtual machine from accessing vCenter Server.

      Workaround: You have several options.

      • Unregister the virtual machine and register it back. You can perform this operation from the vSphere Web Client or the vSphere API.

      • Remove the ESXi host that contains the virtual machine from vCenter Server and add it back.

    • Error while communicating with the remote host during virtual machine encryption task
      You perform a virtual machine encryption operation such as encrypting a virtual machine or creating a new encrypted virtual machine. Your cluster includes a disconnected ESXi host. The following error results.
      An error occurred while communicating with the remote host

      Workaround: Remove the disconnected host from the cluster's inventory.

    • Automatic failover does not happen if Platform Services Controller services become unavailable
      If you run Platform Services Controller behind a load balancer, failover happens if the Platform Services Controller node becomes unavailable. If Platform Services Controller services that run behind the reverse proxy port 443 fail, automatic failover does not happen. These services include Security Token Service, License service, and so on.

      Workaround: Take the Platform Services Controller with the failed service offline to trigger failover.

    • vCenter Server system cannot connect to a KMS using the IPv6 address
      vCenter Server can connect to a Key Management Server (KMS) only if the KMS has an IPv4 address or a host name that resolves to an IPv4 address. If the KMS has an IPv6 address, the following error occurs when you add the KMS to the vCenter Server system.
      Cannot establish trust connection

      Workaround: Configure an IPv4 address for the KMS.
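      Before adding a KMS, you can sanity-check its name resolution from a Linux shell. This is a sketch only; localhost stands in for the real KMS host name, and getent is assumed to be available:

```shell
# Sketch: check that a KMS host name resolves to an IPv4 address before
# adding it to vCenter Server. "localhost" stands in for the real KMS name.
kms_host=localhost
getent ahostsv4 "$kms_host" | awk '{print $1}' | sort -u
```

      An empty result means the name has no IPv4 address and the KMS cannot be added this way.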

    • Secure boot fails after upgrade from older versions of ESXi
      You cannot boot an ESXi 6.5 host that has secure boot enabled in the following situations.

      • The host upgrade was performed using the ESXCLI command. The command does not upgrade the bootloader and it does not persist signatures. When you enable secure boot after the upgrade, an error occurs.

      • The host upgrade was performed using the ISO, but old VIBs are retained after the upgrade. In this case, the secure boot process cannot verify the signatures for the old VIB, and fails. The ISO must contain new versions of all VIBs that are installed on the ESXi host before the upgrade.

      Workaround: None. Secure boot cannot be enabled under these conditions. Reinstall the ESXi host to enable secure boot.

    • After an upgrade, SSLv3 is enabled on port 7444 if it was enabled before the upgrade
      In a clean installation, SSLv3 is not enabled on vCenter Server or Platform Services Controller systems. However, after an upgrade, the service is enabled on port 7444 (the Secure Token Server port) by default. Having SSLv3 enabled might make your system vulnerable to certain attacks.

      Workaround: For an embedded deployment, disable SSLv3 in the server.xml file of the Secure Token Server.

      1. For a deployment with an external Platform Services Controller, determine whether any legacy vCenter Server systems are connected, and upgrade those vCenter Server systems.

      2. Disable SSLv3.
        1. Open the server.xml file (C:\ProgramData\VMware\vCenterServer\runtime\VMwareSTSService\conf on a Windows system and /usr/lib/vmware-sso/vmware-sts/conf/ on a Linux system).

        2. Find the connector that has SSLEnabled=True, and remove SSLv3 from the attribute SSLEnabledProtocols so that the attribute reads as follows:
          sslEnabledProtocols="TLSv1,TLSv1.1,TLSv1.2"

        3. Save and restart all the services.
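      The edit in step 2 can be sketched as a one-line sed substitution, shown here against a sample file rather than the real server.xml (which lives at the Windows and Linux paths named above; back it up before editing):

```shell
# Sketch of the server.xml edit from step 2, exercised on a sample file.
conf=$(mktemp)
printf '%s\n' '<Connector SSLEnabled="true" sslEnabledProtocols="SSLv3,TLSv1,TLSv1.1,TLSv1.2"/>' > "$conf"
# Drop SSLv3 (and its trailing comma) from the protocol list.
sed -i 's/sslEnabledProtocols="SSLv3,/sslEnabledProtocols="/' "$conf"
grep -o 'sslEnabledProtocols="[^"]*"' "$conf"   # prints sslEnabledProtocols="TLSv1,TLSv1.1,TLSv1.2"
rm -f "$conf"
```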

    • Error when cloning an encrypted virtual machine with one or more unencrypted hard disks
      When you clone an encrypted virtual machine, the following error results if one or more of the hard disks of the virtual machine is not encrypted.
      The operation is not supported on the object.
      The clone process fails.

      Workaround: In the Select Storage screen of the Clone Virtual Machine wizard, select Advanced. Even if you do not make changes to the advanced settings, the clone process succeeds.

    Server Configuration Issues
    • Mismatch occurs for shared clusterwide option during host profile compliance check
      During a host profile compliance check, if a mismatch occurs because of the shared clusterwide option, the device name is displayed for the Host value or for the Host Profile value.

      Workaround: Treat the message as a compliance mismatch for the shared clusterwide option.

    • Host Profile remediation fails if host profile extracted from one vSAN cluster is attached to a different vSAN cluster
      If you extract a host profile from a reference host that is part of one vSAN cluster and attach it to a different vSAN cluster, the remediation (apply) operation fails.

      Workaround: Before applying the host profile, edit the profile's vSAN settings. The cluster UUID and datastore name must match the values for the cluster on which the profile is attached.

    • Host Profile does not capture host Lockdown Mode settings
      If you extract a host profile from a stateless ESXi host with Lockdown Mode enabled, the Lockdown Mode settings are not captured. After applying the host profile and rebooting the host, Lockdown Mode is disabled on the host.

      Workaround: After applying the host profile and rebooting the host, manually enable Lockdown Mode for the host.

    • Compliance errors for host profiles with SAS drives after upgrading to vSphere 6.5
      Because all drives claimed by SATP_LOCAL are marked as LOCAL, host profiles with SAS drives that have the Device is shared clusterwide option enabled fail the compliance check.

      Workaround: Disable the Device is shared clusterwide configuration option for host profiles with SAS drives before remediating.

    • Host Profile batch remediation fails for hosts with DRS soft affinity rules
      A batch remediation performs a remediate operation on a group of hosts or clusters. Host Profiles uses the DRS feature to automatically place hosts into maintenance mode before the remediate operation. However, only hosts in fully automated DRS clusters that have no soft affinity rules can perform this operation. Hosts that have a DRS soft affinity rule fail remediation because the rule prevents them from entering maintenance mode.

      Workaround:

      1. Determine whether the cluster is fully automated:

        1. In the vSphere Web Client, navigate to the cluster.

        2. Select the Configure tab, then select Settings.

        3. Expand the list of services and select vSphere DRS.

        4. If the DRS Automation field is set to Fully Automated, the cluster is fully automated.

      2. Check whether the host has a soft affinity rule:

        1. In the cluster, select the Configure tab, then select Settings.

        2. Select VM/Host Rules.

        3. Examine any rule where the type is set to Run VMs on Hosts or Do Not Run VMs on Hosts.

        4. If the VM/Host Rule Details for those rules contain the word "should," the rule is a soft affinity or soft anti-affinity rule.

      3. For hosts with the DRS soft affinity rule, manually move the host into maintenance mode, and then remediate the host.

    Storage Driver Issues

    • The bnx2x inbox driver that supports the QLogic NetXtreme II Network/iSCSI/FCoE adapter might cause problems in your ESXi environment
      Problems and errors occur when you disable or enable VMkernel ports and change the failover order of NICs for your iSCSI network setup.

      Workaround: Replace the bnx2x driver with an asynchronous driver. For information, see the VMware Web site.

    • The ESXi host might experience problems when you use Seagate SATA storage drives
      If you use an HBA adapter that is claimed by the lsi_msgpt3 driver, the host might experience problems when connecting to the Seagate SATA devices. The vmkernel.log file displays errors similar to the following:
      SCSI cmd RESERVE failed on path XXX
      and
      reservation state on device XXX is unknown

      Workaround: Replace the Seagate SATA drive with another drive.
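      To confirm you are hitting the symptom above, you can grep the host's vmkernel log for the two reservation errors. A sketch, using a sample file in place of the real /var/log/vmkernel.log:

```shell
# Sketch: count occurrences of the two reservation errors described above.
# The sample file stands in for /var/log/vmkernel.log on the affected host.
log=$(mktemp)
printf '%s\n' \
  'ScsiDeviceIO: SCSI cmd RESERVE failed on path vmhba1:C0:T0:L0' \
  'ScsiDeviceIO: reservation state on device naa.XXX is unknown' > "$log"
grep -cE 'RESERVE failed on path|reservation state on device' "$log"   # prints 2
rm -f "$log"
```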

    • When you use the Dell lsi_mr3 driver version 6.903.85.00-1OEM.600.0.0.2768847, you might encounter errors
      If you use the Dell lsi_mr3 asynchronous driver version 6.903.85.00-1OEM.600.0.0.2768847, the VMkernel logs might display the following message: ScsiCore: 1806: Invalid sense buffer.

      Workaround: Replace the driver with the vSphere 6.5 inbox driver or an asynchronous driver from Broadcom.

    Storage DRS Issues

    • Storage DRS does not honor Pod-level VMDK affinity if the VMDKs on a virtual machine have a storage policy attached to them
      If you set a storage policy on the VMDK of a virtual machine that is part of a datastore cluster with Storage DRS enabled, then Storage DRS does not honor the Keep VMDKs together flag for that virtual machine. It might recommend different datastores for newly added or existing VMDKs.

      Workaround: None. This behavior is observed when you set any kind of policy such as VMCrypt or tag-based policies.

    • You cannot disable Storage DRS when deploying a VM from an OVF template
      When you deploy an OVF template and select an individual datastore from a Storage DRS cluster for the VM placement, you cannot disable Storage DRS for your VM. Storage DRS remains enabled and might later move this VM to a different datastore.

      Workaround: To permanently keep the VM on the selected datastore, manually change the automation level of the VM. Add the VM to the VM overrides list from the storage cluster settings.

    Storage Host Profiles Issues

    • Attempts to set the action_OnRetryErrors parameter through host profiles fail
      This problem occurs when you edit a host profile to add the SATP claim rule that activates the action_OnRetryErrors setting for NMP devices claimed by VMW_SATP_ALUA. The setting controls the ability of an ESXi host to mark a problematic path as dead and trigger a path failover. When added through the host profile, the setting is ignored.

      Workaround: You can use two alternative methods to set the parameter on a reference host.

      • Use the following esxcli command to enable or disable the action_OnRetryErrors parameter:
        esxcli storage nmp satp generic deviceconfig set -c disable_action_OnRetryErrors -d naa.XXX
        esxcli storage nmp satp generic deviceconfig set -c enable_action_OnRetryErrors -d naa.XXX

      • Perform these steps:

        1. Add the VMW_SATP_ALUA claimrule to the SATP rule:
          esxcli storage nmp satp rule add --satp=VMW_SATP_ALUA --option=enable_action_OnRetryErrors --psp=VMW_PSP_XXX --type=device --device=naa.XXX

        2. Run the following commands to reclaim the device:
          esxcli storage core claimrule load
          esxcli storage core claiming reclaim -d naa.XXX

    Storage I/O Control Issues

    • You cannot change VM I/O filter configuration during cloning
      Changes to a virtual machine's policies during cloning are not supported by Storage I/O Control.

      Workaround: Perform the clone operation without any policy change. You can update the policy after completing the clone operation.

    • Storage I/O Control settings are not honored per VMDK
      Storage I/O Control settings are not honored on a per VMDK basis. The VMDK settings are honored at the virtual machine level.

      Workaround: None.

    Storage Issues
      Upgrade Issues
      • Pre-upgrade checks display error that the eth0 interface is missing when upgrading to vCenter Server Appliance 6.5
        Pre-upgrade checks display an error stating that the eth0 interface is missing and is needed to complete the vCenter Server Appliance upgrade. In addition, a warning might state that if multiple network adapters are detected, only eth0 is preserved.

        Workaround: See KB http://kb.vmware.com/kb/2147933.

      • vCenter Server upgrade fails when Distributed Virtual Switches and Distributed Virtual Portgroups have the same High/non-ASCII name in a Windows environment
        In a Windows environment, if the Distributed Virtual Switches and the Distributed Virtual Portgroups use duplicate High/non-ASCII characters as names, the vCenter Server upgrade fails with the error:
        Failed to launch UpgradeRunner. Please check the vminst.log and vcsUpgrade\UpgradeRunner.log files in the temp directory for more details.

        Workaround: Rename the Distributed Virtual Switches or Distributed Virtual Portgroups that share a name so that all names are unique.

      • Attempts to upgrade a vCenter Server Appliance or Platform Services Controller appliance might fail with an error message about DNS configuration setting if the source appliance is set with static IPv4 and static IPv6 configuration
        Upgrading an appliance that is configured with both IPv4 and IPv6 static addresses might fail with the error message: Error setting DNS configuration. Details : Operation Failed.. Code: com.vmware.applmgmt.err_operation_failed.

        The log file /var/log/vmware/applmgmt/vami.log of the newly deployed appliance contains the following entries:
        INFO:vmware.appliance.networking.utils:Running command: ['/usr/bin/netmgr', 'dns_servers', '--set', '--mode', 'static', '--servers', 'IPv6_address,IPv4_address']
        INFO:vmware.appliance.networking.utils:output:
        error:
        returncode: 17
        ERROR:vmware.appliance.networking.impl:['/usr/bin/netmgr', 'dns_servers', '--set', '--mode', 'static', '--servers', 'IPv6_address,IPv4_address'] error , rc=17

        Workaround:

        1. Delete the newly deployed appliance and restore the source appliance.

        2. On the source appliance, disable either the IPv6 or the IPv4 configuration.

        3. From the DNS server, delete the entry for the IPv6 or IPv4 address that you disabled.

        4. Retry the upgrade.

        5. (Optional) After the upgrade finishes, add back the DNS entry and, on the upgraded appliance, set the IPv6 or IPv4 address that you disabled.

      • Attempts to upgrade a vCenter Server Appliance or Platform Services Controller appliance with an expired root password fail with a generic message that cites an internal error
        During the appliance upgrade, the installer connects to the source appliance to detect its deployment type. If the root password of the source appliance has expired, the installer fails to connect to the source appliance, and the upgrade fails with the error message: Internal error occurs during pre-upgrade checks.

        Workaround:

        1. Log in to the Direct Console User Interface of the appliance.

        2. Set a new root password.

        3. Retry the upgrade.

      • The upgrade of a vCenter Server Appliance might fail because a dependency shared library path is missing
        The upgrade of a vCenter Server Appliance might fail before the export phase and the error log shows: /opt/vmware/share/vami/vami_get_network: error while loading shared libraries: libvami-common.so: cannot open shared object file: No such file or directory. This problem occurs because a dependency shared library path is missing.

        Workaround:

        1. Log in to the appliance Bash shell of the vCenter Server Appliance that you want to upgrade.

        2. Run the following commands.
          echo "LD_LIBRARY_PATH=${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}/opt/vmware/lib/vami/" >> /etc/profile
          echo 'export LD_LIBRARY_PATH' >> /etc/profile

        3. Log out of the appliance shell.

        4. Retry the upgrade.
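        The ${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:} expansion used in step 2 prepends the existing value plus a separating colon only when the variable is already non-empty, so the appended path never starts with a stray colon. A minimal sketch:

```shell
# Sketch of the ${VAR:+$VAR:} idiom from step 2.
unset LD_LIBRARY_PATH
echo "${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}/opt/vmware/lib/vami/"   # /opt/vmware/lib/vami/
LD_LIBRARY_PATH=/usr/lib
echo "${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}/opt/vmware/lib/vami/"   # /usr/lib:/opt/vmware/lib/vami/
```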

      • Upgrade from vCenter Server 6.0 with an external database fails if vCenter Server 6.0 has content libraries in the inventory
        The pre-upgrade check fails when you attempt to upgrade a vCenter Server 6.0 instance with content libraries in the inventory and a Microsoft SQL Server database or an Oracle database. You receive an error message such as Internal error occurs during VMware Content Library Service pre-upgrade checks.

        Workaround: None.

      • Extracting the vCenter Server Appliance ISO image with a third-party extraction tool results in a permission error
        When extracting the ISO image in Mac OS X to run the installer using a third-party tool available from the Internet, you might encounter the following error when you run the CLI installer: OSError: [Errno 13] Permission denied.

        This problem occurs because during extraction, some extraction tools change the default permission set on the vCenter Server Appliance ISO file.

        Workaround: Perform the following steps before running the installer:

        1. To open the vCenter Server Appliance ISO file, run the Mac OS X automount command.

        2. Copy all the files to a new directory.

        3. Run the installer from the new directory.

      • vCenter Server upgrade might fail during VMware Authentication Framework Daemon (VMAFD) firstboot
        VMware Authentication Framework Daemon (VMAFD) firstboot might fail with the error message: Vdcpromo failed. Error 382312694: Access denied, reason = rpc_s_auth_method (0x16c9a0f6).

        During a vCenter Server upgrade you might encounter a VMAFD firstboot failure if the system you are upgrading is installed with third-party software that installs its own version of the OpenSSL libraries and modifies the system's PATH environment variable.

        Workaround: Remove the third-party directories containing the OpenSSL libraries from %PATH%, or move them to the end of %PATH%.

      • VMware vSphere vApp (vApp) and a resource pool are not available as target options for upgrading a vCenter Server Appliance or Platform Services Controller appliance
        When upgrading an appliance by using the vCenter Server Appliance installer graphical user interface (GUI) or the command line interface (CLI), you cannot select vApp or a resource pool as the upgrade target.

        The vCenter Server Appliance installer interfaces do not enable the selection of vApp or resource pool as the target for upgrade.

        Workaround: Complete the upgrade on the selected ESXi host or vCenter Server instance. When the upgrade finishes, move the newly deployed virtual machine manually as follows:

        • If you upgraded the appliance on an ESXi host that is part of a vCenter Server inventory or on a vCenter Server instance, log in to the vSphere Web Client of the vCenter Server instance and move the newly deployed virtual machine to the required vApp or resource pool.

        • If you upgraded the appliance on a standalone ESXi host, first add the host to a vCenter Server inventory, then log in to the vSphere Web Client of the vCenter Server instance and move the newly deployed virtual machine to the required vApp or resource pool.

      • Upgrading to vCenter Server 6.5 may fail at vmon-api firstboot phase because of an invalid IPv6 address in the SAN field of the SSL certificate
        The vCenter Server SSL certificate takes an IPv6 address in the SAN field when you install vCenter Server and enable both IPv4 and IPv6. If you disable IPv6 after the installation and then attempt to upgrade vCenter Server to version 6.5, the upgrade fails at vmon-api firstboot phase.

        Workaround: Verify that the source vCenter Server SSL certificate SAN field contains the valid IP address of the source vCenter Server instance.

      • Upgrading to vCenter Server 6.5 fails due to duplicate names for entities in the network folder
        vSphere 6.5 allows only unique names across all Distributed Virtual Switches and Distributed Virtual Portgroups in the network folder. Earlier versions of vSphere allowed a Distributed Virtual Switch and a Distributed Virtual Portgroup to have the same name. If you attempt to upgrade from a version that allowed duplicate names, the upgrade will fail.

        Workaround: Rename any Distributed Virtual Switches or Distributed Virtual Portgroups that have the same names before you start the upgrade.

      • Syslog collector may stop working after ESXi upgrade
        Syslog collectors that use SSL to communicate with the ESXi syslog daemon may stop receiving log messages from the ESXi host after an upgrade.

        Workaround: Reconfigure the ESXi syslog daemon by running the following commands on the upgraded ESXi host:
        esxcli system syslog config set --check-ssl-certs=true
        esxcli system syslog reload

      • The Ctrl + C command does not exit each End User License Agreement (EULA) page during the staging or installation of patches to the vCenter Server Appliance
        When you stage or install the vCenter Server Appliance update packages with the relevant command without adding the optional --acceptEulas parameter, the EULA pages appear in the command prompt. You should be able to exit without accepting the agreement by pressing Ctrl+C, but doing so keeps you in the EULA pages.

        Workaround: Exit the EULA by typing NO at the end of the last page.

      • When validation ends with failure on staged patches in a vCenter Server Appliance, the staged update packages are deleted
        To update your vCenter Server Appliance, first you must stage the available update patches before you install them to the appliance. If the validation of these staged packages fails, they are unstaged and deleted. An attempt to validate the packages after an initial failure generates an error.

        Workaround: Repeat the staging operation.

      vCenter Server Appliance, vCenter Server, vSphere Web Client, vSphere Client, and vSphere Host Client Issues
      • vCenter Server might fail because of database unique constraint violation at pk_vpx_vm_virtual_device
        Virtual machines with USB devices attached might cause vCenter Server to fail when properties of multiple devices, including the USB devices, change. vCenter Server cannot be restarted after it fails.

        Workaround: Unregister the problematic virtual machine from the host inventory and restart the vCenter Server. You might need to repeat this process if the failure happens again.

      • Downloaded vSphere system logs are invalid
        This issue might occur if the client session expires while the downloading of system logs is still in progress. The resulting downloaded log bundle becomes invalid. You might observe an error message similar to the following in the vsphere_client_virgo.log file located in /var/log/vmware/vsphere-client/logs/vsphere_client_virgo.log:

        com.vmware.vsphere.client.logbundle.DownloadLogController
        Error downloading logs. org.apache.catalina.connector.ClientAbortException:
        java.net.SocketException: Broken pipe (Write failed)
        at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:407)

        Workaround: None.

      • JRE patch fails while starting vCenter Server services on a Windows Server 2016 system
        You might not be able to start the VMware vSphere Client, and you see error messages similar to the following:

        2016-11-04 15:47:26.991+05:30| vcsInstUtil-4600788| I: StartStopVCSServices: Waiting for VC services to start...
        2016-11-04 16:09:57.652+05:30| vcsInstUtil-4600788| E: StartStopVCSServices: Unable to start VC services
        2016-11-04 16:09:57.652+05:30| vcsInstUtil-4600788| I: Leaving function: VM_StartVcsServices

        Workaround: None.

      • Cannot access HTML-based vSphere Client
        When vCenter Server 6.5 is installed on Windows Server 2016 with an external Platform Services Controller setup that uses a custom path and custom port, login to the HTML5-based vSphere Client fails with a 503 Service Unavailable error message.

        Workaround: Log in to vCenter Server using the vSphere Web Client (Flash).

      • Performing an Advanced Search with Tags might slow down vSphere Web Client in large environments
        The vSphere Web Client might be slow in large-scale environments while performing an Advanced Search that includes Tags as part of the search construction. This issue occurs only while the search is running, and stops when the search completes. vCenter services might run out of memory during the slowness.

        Workaround: Wait for the search to complete, or restart vCenter Server by performing Stop and Start vCenter Services using the command below if you observe OutOfMemory or Garbage Collection errors in the logs.

        Log location:

        • Windows: %ALLUSERSPROFILE%\VMWare\vCenterServer\logs\vsphere-client\logs\
        • vCenter Server Appliance (VCSA): /var/log/vmware/vsphere-client/logs/

        Log files: dataservice.log and vsphere_client_virgo.log

        Commands for restarting the services:
        service-control --stop vmware-vpxd-svcs
        service-control --start vmware-vpxd-svcs

      • vSphere Web Client might become slow in large multi-VC multi-PSC environments
        In large environments with a large number of vCenter Server instances and PSCs, the vSphere Update Manager Web Client Plug-in might experience a thread leak that degrades the vSphere Web Client experience over time. This degradation occurred specifically in an environment with 10 vCenter Server instances plus 4 PSCs and the maximum number of Web Client user accounts, but is expected to occur in all large environments over time.

        Workaround: Disable the vSphere Update Manager Web Client Plug-in by performing the following:

        1. In the vSphere Web Client, navigate to vSphere Web Client > Administration > Solutions/Client Plugins.

        2. Right-click the VMware vSphere Update Manager Web Client plug-in and select Disable.

        If you do not intend to use the plug-in you may leave it disabled, or you can choose to re-enable it only when you need to use it. If you experience performance degradation, disabling then re-enabling the plug-in should also temporarily reset the leak.

      • In the vSphere Client or vSphere Web Client, live refresh of Recent Tasks and object status stops working after the client machine's IP address changes
        In the vSphere Client or vSphere Web Client, after performing an action on an object, the action does not appear in Recent Tasks. The inventory trees, lists, and object details also do not reflect the new state.

        For example, if you power on a VM, the Power On task does not appear in Recent Tasks, and the VM icon does not show a powered-on badge in the inventory trees, lists, and object details. The Web browser and vCenter Server are connected using WebSockets, and when the IP address of the browser machine changes, this connection is broken.

        Workaround: In the Web browser, refresh the page.

      • The vCenter Server Appliance Management Interface is inaccessible through Internet Explorer
        The vCenter Server Appliance Management Interface is inaccessible through Internet Explorer.

        Workaround: Enable TLS 1.0, TLS 1.1, and TLS 1.2 in the security settings of Internet Explorer:

        1. In Internet Explorer, select Tools > Internet Options.

        2. Click the Advanced tab and scroll to the Security settings section.

        3. Select Use TLS 1.0, Use TLS 1.1, and Use TLS 1.2.

      • Attempts to import and export Content Library items fail in vCenter Server deployments with external PSC
        When you are logged in to the vSphere Web Client using an IP address in a vCenter Server deployment with an external PSC, importing and exporting Content Library items fails. The following error message appears:

        Reason: Unable to update files in the library item. The source or destination may be slow or not responding.

        Workaround: In a vCenter Server deployment with an external PSC, log in to the vSphere Web Client using a fully qualified domain name, not an IP address.

        If logged into the vSphere Web Client using an IP address, accept the certificate that is issued when logging in with a fully qualified domain name. Open the fully qualified domain name target to accept the certificate. Accepting the certificate permanently should be sufficient.

      • The UI does not display hot-plugged devices in the PCIe passthrough device list
        Hot-plugged PCIe devices are not available for PCI passthrough.

        Workaround: Choose one of the following:

        • Restart hostd using the command: /etc/init.d/hostd restart

        • Reboot the ESXi host.

      • Permission on tag cannot be viewed from another management node
        When you are logged in to one vCenter Server management node in a multiple vCenter Server deployment and you create a permission on a tag, the permission is not viewable when you log in to another management node.

        Workaround: While tags are global objects and can be viewed across nodes, permissions on tags are persisted only locally and cannot be viewed across nodes. To view the tag permission, log in to the vCenter Server instance where you created the permission.

      • Logging into vSphere Web Client in incognito mode causes internal error
        Users can log into the vSphere Web Client when the browser setting for incognito mode is enabled, but receive the following internal error:

        An internal error has occurred - [NetStatusEvent type="netStatus" bubbles=false cancelable=false eventPhase=2 info=[object Object]]"

        Workaround: Disable incognito mode. This mode is not supported in the vSphere Web Client. Reload the vSphere Web Client to clear up any problems from the error and log back into the client.

      • Content Library tasks are not displayed in Recent Tasks
        If you perform a Content Library task in the vSphere Web Client, such as uploading an item to the content library, syncing a library, or deploying a VM from the content library, the task might not be listed under Recent Tasks.

        Workaround: None. You can view all the tasks under the More Tasks lists, even if they are not listed under Recent Tasks.

      • Assign tag operation fails when simultaneously attempting to create and assign a tag in multiple vCenter Server environment
        Normally, you can assign a tag to an object in the vSphere Web Client from the Tags setting in the object's Configure tab, and the tag is automatically assigned to the selected object. In an environment with multiple vCenter Server instances, the tag is created successfully, but the assign operation fails and you receive an error message.

        Workaround: Create the tag first on the Tags setting and then assign it to the object.

      • Solution tab removed from ESX Agent Manager view
        The Solution tab that is available in the ESX Agent Manager view in earlier versions of the vSphere Web Client is no longer available.

        Workaround: You can achieve the same goal by performing the following steps:

        1. In the vSphere Web Client Navigator, select Administration > vCenter Server Extensions.

        2. Click vSphere ESX Agent Manager.

        3. Choose one of the following:

          • Select the Configure tab.

          • Select the Monitor tab and click Events.

      • Unable to assign tag to objects when Platform Services Controller node is down
        When the vSphere Platform Services Controller node is down, you are unable to assign tags from an object's Manage > Tags tab.

        The following error displays: Provider method implementation threw unexpected exception. Tags cannot be selected to assign.

        Workaround: Power on the Platform Services Controller node. Also check that all services on the Platform Services Controller node are running.

      • The vSphere Client does not render the edit setting dialog box when a host or data center administrator edits a virtual machine
        The vSphere Client does not render the edit setting dialog when a host or data center administrator edits a virtual machine within that host or data center. This happens because the host or data center administrator does not have profile-driven storage privilege.

        Workaround: Create a new role with host or data center administrator privileges and profile-driven storage privilege. Assign this role to the current host or data center administrator.

      • When creating or editing a virtual machine using the vSphere Client, adding an eighth hard disk causes the task to fail with error
        Creating or editing a virtual machine using the vSphere Client results in a task failure with the error: A specified parameter was not correct: unitNumber. This is because the SCSI controller 0:7 is reserved for special purposes, so the system assigns SCSI 0:7 to the eighth hard disk.

        Workaround: The user must manually assign SCSI 0:8 when adding an eighth hard disk. The user can instead use the vSphere Web Client.

      • The vSphere Client supports up to 10,000 virtual machines and 1,000 hosts
        The vSphere Client is only supported up to 10,000 virtual machines and 1,000 hosts, which is lower than the vCenter limits.

        Workaround: If your needs exceed the vSphere Client's limits, use the vSphere Web Client.

      • The Hostname text box is greyed out in the vCenter Server Appliance Management Interface if your system is installed without a host name
        When you navigate to the Networking > Manage screen in the vCenter Server Appliance Management Interface and edit the Hostname, Name Servers, and Gateways settings, the Hostname text box appears grayed out and cannot be modified. This field is editable only in the vSphere Client.

        Workaround: Change the host name in the vSphere Client.

      • After you modify the virtual machine configuration of the vCenter Server Appliance to provide a larger disk space for expanding the root partition, an attempt to claim that additional storage fails
        After resizing the disk space for the root partition, the storage.resize command does not extend the disk storage for the root partition but keeps it the same size. This is expected behavior. Resizing this partition is not supported.

        Workaround: None.

      • The vCenter Server Appliance Management Web interface allows for configuring only HTTP proxy server
        When you navigate to the Networking tab in the vCenter Server Appliance Management Interface and Edit the Proxy Settings, you are not presented with the option to modify the proxy to HTTPS or FTP. You can specify only the HTTP proxy settings.

        Workaround: You can configure the HTTPS and FTP proxy servers by using the appliance shell command line.

      • After a successful vCenter Server Appliance update, running the version.get command from the appliance shell returns an error message
        After a successful vCenter Server Appliance update, running the version.get command from the appliance shell returns an error message: Unknown command: 'version.get'.

        Workaround: Log out and log in as administrator to a new appliance shell session, and run the version.get command.

      • In Windows Internet Explorer 11 or later, the Use Windows Session Authentication checkbox is inactive on vSphere Web Client login page
        The Use Windows Session Authentication checkbox is inactive on the vSphere Web Client login page in Windows Internet Explorer 11 or later browsers. You are also prompted to download and install the VMware Enhanced Authentication Plug-in.

        Workaround: In the security options of the Windows settings on your system, add the fully qualified domain name and IP address of the vCenter Server to the list of local intranet sites.

      • Uploading a file using the datastore browser fails when the file is larger than 4GB on Internet Explorer
        When you upload a file larger than 4GB using the datastore browser on Internet Explorer, you receive the error:

        Failed to transfer data to URL.

        Internet Explorer does not support files larger than 4GB.

        Workaround: Use the Chrome or Firefox browser to upload files from the datastore browser.

      Virtual Machine Management Issues
      • OVF parameter chunkSize is not supported in vCenter Server 6.5
        Deploying an OVF template in vCenter Server 6.5 fails with the following error:

        OVF parameter chunkSize with value chunkSize_value is currently not supported for OVF package import.

        You receive this error because chunkSize is not a supported OVF parameter in vCenter Server 6.5.

        Workaround: Update the OVF template and remove the chunkSize parameter.

        1. For OVA templates only, extract the individual files using a tar utility (for example: tar xvf). This includes the OVF file (.ovf), manifest (.mf), and virtual disk (.vmdk).

        2. Combine virtual disk chunks into a single disk with the following command:

          • On Linux or Mac:
            cat vmName-disk1.vmdk.* > vmName-disk1.vmdk
          • On Windows:
            copy /b vmName-disk1.vmdk.000000 + vmName-disk1.vmdk.000001 + ... vmName-disk1.vmdk
            (list every chunk fragment joined with +, followed by the name of the destination disk)

          Note: If only one virtual disk chunk fragment exists, rename it to the destination disk. If there are multiple disks with chunk fragments, combine each to their respective destination disks (For example: disk1.vmdk, disk2.vmdk, and so on).

        3. Remove the chunkSize attribute from the OVF descriptor (.ovf) using a plain text editor. For example:
          <File ovf:chunkSize="7516192768" ovf:href="vmName-disk1.vmdk" ovf:id="file1" ovf:size=... />
          to:
          <File ovf:href="vmName-disk1.vmdk" ovf:id="file1" ovf:size=.../>

        4. Deploy the OVF template via the vSphere Web Client by selecting the local files including the updated OVF descriptor and merged disk.

        5. For OVA templates only, reassemble the OVA via the vSphere Web Client with the following steps:

          1. Export the OVF template. The export produces the OVF file (.ovf), manifest (.mf), and virtual disk (.vmdk). The manifest file is different from the file in step 1.

          2. Combine the files into a single OVA template using a tar utility (for example, tar cvf). On Linux for example:
            tar cvf vm.ova vm.ovf vm.mf vm.disk1.vmdk
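
        The merge-and-edit steps above can be sketched as a short shell script. This is a minimal illustration, not a definitive procedure: it fabricates tiny stand-in files in a scratch directory, the vmName-disk1.vmdk.* and vmName.ovf names are placeholders for your template's actual files, and the sed expression assumes the chunkSize attribute appears as shown in step 3. The sed -i flag as used here is GNU sed syntax.

```shell
set -e
# Work in a scratch directory with fabricated stand-in files (not a real template)
mkdir -p /tmp/chunk-demo && cd /tmp/chunk-demo
printf 'AAAA' > vmName-disk1.vmdk.000000
printf 'BBBB' > vmName-disk1.vmdk.000001
printf '<File ovf:chunkSize="7516192768" ovf:href="vmName-disk1.vmdk" ovf:id="file1"/>\n' > vmName.ovf

# Step 2: combine the chunk fragments into a single destination disk
cat vmName-disk1.vmdk.* > vmName-disk1.vmdk

# Step 3: strip the ovf:chunkSize attribute from the descriptor
sed -i 's/ ovf:chunkSize="[0-9]*"//' vmName.ovf
```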

      • New Internet Explorer tabs open when exporting OVF template or exporting items from Content Library
        If you use the vSphere Web Client with Internet Explorer to export an OVF template or to export an item from a Content Library, new tabs open in the browser for each file in the Content Library item or OVF template. For each new tab, you might be prompted to accept a security certificate.

        Workaround: Accept each security certificate, then save each file.

      • vCenter Server does not remove OpaqueNetwork from the vCenter inventory
        If a virtual machine is connected to an opaque network and this virtual machine is converted into a template, vCenter Server does not remove the OpaqueNetwork from the vCenter Server inventory, even if the ESXi host is removed from the opaque network and no virtual machines are connected to it. This occurs because templates are still attached to the opaque network.

        Workaround: None.

      • Deploying OVF template causes error.mutationService.ProviderMethodNotFoundError error in some views
        You receive an error.mutationService.ProviderMethodNotFoundError error when you deploy an OVF template and all of the following conditions occur:

        • You select an OVF file from your local file system and click Next in the Deploy OVF Template wizard.

        • The OVF file is less than 1.5 MB.

        • You deploy the OVF template without first selecting a target object (for example, from the VM list view).

        Workaround: Deploy an OVF template by selecting the object and choosing the Deploy OVF option.

      • Deploying an OVF or OVA template from a local file with delta disks in the vSphere Web Client might fail
        When you deploy an OVF template or OVA template containing delta disks (ovf:parentRef in the OVF file), the operation might fail or stall during the process.

        The following is an example of OVF elements in the OVF descriptor:

         <References>
          <File ovf:href="Sugar-basedisk-1-4.vmdk" ovf:id="basefile14" ovf:size="112144896"/>
          <File ovf:href="Sugar-disk1.vmdk" ovf:id="file1" ovf:size="44809216"/>
          <File ovf:href="Sugar-disk4.vmdk" ovf:id="file4" ovf:size="82812928"/>
         </References>

         <DiskSection>
          <Info>Meta-information about the virtual disks</Info>
          <Disk ovf:capacity="1073741824"
           ovf:diskId="basedisk14"
           ovf:fileRef="basefile14"
           ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" />
          <Disk ovf:capacity="1073741824"
           ovf:diskId="vmdisk1"
           ovf:fileRef="file1"
          ovf:parentRef="basedisk14"
           ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" />
          <Disk ovf:capacity="1073741824"
          ovf:diskId="vmdisk4"
          ovf:fileRef="file4"
          ovf:parentRef="basedisk14"
          ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized"/>
         </DiskSection>

        Workaround: To deploy the OVF or OVA template, host the template on an HTTP server. Then deploy the template from the HTTP URL pointing to that template.

      • No power on option available at completion of OVF deployment
        During an OVF or OVA deployment, the deployment wizard does not provide an option to automatically power on the virtual machine when the deployment completes.

        Workaround: This option is not available in vSphere 6.5 when using the OVF deployment wizard. Manually power on the virtual machine after the deployment completes.

      • When the deploy OVF wizard is started from global inventory list, error message not displayed on wizard's location page
        This issue occurs when you deploy an OVF template from any of the global inventory lists (for example, the virtual machine inventory list) in the vSphere Web Client and navigate to the "Select Name and Location" page of the wizard. If you do not choose a valid location, no error message appears; instead, the navigation buttons are disabled and you cannot proceed any further in the wizard.

        Workaround: Cancel the wizard, then reopen it and choose a valid location.

      • Deploy OVF wizard cannot deploy a local OVF or OVA template that contains external message bundles
        When an OVF or OVA template contains references to external message bundles, it cannot be deployed from a local file.

        Workaround: To deploy the OVF or OVA template, perform one of the following:

        • Deploy the OVF or OVA template from a URL.

        • Edit the OVF file to replace the tag that points to the external message file (the <Strings ovf:fileRef /> tag) with the actual content of the external message file.

      • Cannot create or clone a virtual machine on an SDRS-disabled datastore cluster
        This issue occurs when you select a datastore that is part of an SDRS-disabled datastore cluster in any of the New Virtual Machine, Clone Virtual Machine (to virtual machine or to template), or Deploy From Template wizards. When you arrive at the Ready to Complete page and click Finish, the wizard remains open and nothing appears to occur. The Datastore value status for the virtual machine might display "Getting data..." and does not change.

        Workaround: Use the vSphere Web Client for placing virtual machines on SDRS-disabled datastore clusters.

      • Deploying an OVF or OVA template fails for specific descriptors
        Deploying an OVF or OVA template from the vSphere Web Client fails if the descriptor in the template contains any of the following values with their respective error message:

        • Negative number as the value of size parameter in fileref element. Example error message:
          VALUE_ILLEGAL: Illegal value "-2" for attribute "size". Must be positive.

        • Optional Reservation attribute is specified but no parameter provided in VM Hardware section. For example: <Reservation />. Example error message:
          VALUE_ILLEGAL: Illegal value "" for element "Reservation". Not a number.

        • Missing VirtualHardwareSection.System.InstanceID vssd element. Example error message:
          ELEMENT_REQUIRED: Element "InstanceID" expected.

        • Internationalization section Strings refers to missing file. Example error message:
          VALUE_ILLEGAL: Illegal value "eula" for attribute "fileRef".

        • Unknown prefix added to Internationalization section Strings. For example: <ovfstr:Strings xml:lang="de-DE"> Example error message:
          PARSE_ERROR: Parse error: Undeclared namespace prefix "ovfstr" at [row,col,system-id]: [41,39,"descriptor.ovf"].

        • OVF contains OVF Specification version 0.9 elements. Example error message:
          VALUE_ILLEGAL: OVF 0.9 is not supported. Invalid name space: "http://www.example.com/schema/ovf/1/envelope".

        Workaround: Modify the descriptor so that it is a valid descriptor according to OVF Specification version 1.1.

      • vMotion enabled USB device connected to a virtual machine is not visible in vSphere Web Client
        If you connect a USB device that is enabled with vMotion to a virtual machine running on ESXi 6.5, the device is not visible in the vSphere Web Client after you suspend and then resume the virtual machine. This occurs even when the device successfully reconnects to the virtual machine. As a result, you cannot disconnect the device.

        Workaround: Perform one of the following workarounds:

        • Attempt to connect another USB device to the same virtual machine. Both devices are visible in the vSphere Web Client, allowing you to disconnect the originally connected USB device.

        • Power off and then power on the virtual machine. The device is visible in the vSphere Web Client.

      • Uploading files using Content Library, Datastore, and OVF/OVA Deployment might fail
        If you attempt to upload a file using Content Library, Datastore upload, or OVA/OVF Deployment in the vSphere Web Client, the operation might fail with the error: The operation failed for an undetermined reason.

        The failure occurs because certificates are not trusted. If the URL being processed for the file upload operation is not already trusted, then the upload fails.

        Workaround: Copy the URL from the error, open a new browser tab, and visit that URL. You should be prompted to accept the certificate associated with that URL. Trust and accept the new certificate, then retry the operation. See VMware Knowledge Base article 2147256 for more details.

      • Deploying an OVA template from URL containing a manifest or certificate file at the end might fail in a slow network environment
        When you deploy a large OVA template containing one or more manifest or certificate files at the end of the OVA template, the deployment might fail in a slow network environment with the following error:
        Unable to retrieve manifest or certificate file.

        The following is an example of an OVA template with manifest and certificate files located at the end of the OVA:

        example.ova:

        • example.ovf
        • example-disk1.vmdk
        • example-disk2.vmdk
        • example.mf
        • example.cert

        This failure occurs because the manifest and certificate files are necessary for the deployment process, so the earlier those files appear in the OVA file, the faster the deployment proceeds.

        Workaround: Perform one of the following workarounds to deploy the OVA template:

        • Download the OVA template to your local system and deploy it from a local OVA file.

        • Convert the OVA template by placing the manifest or certificate files at the front of the OVA template file. To convert the template for the example.ova template, perform the following:

          1. Log in to the HTTP server machine and go to the folder that contains the OVA template.

          2. Extract the files from the OVA template:
            tar xvf example.ova

          3. To recreate the OVA template, run the following commands in order:

            tar cvf example.ova example.ovf
            tar uvf example.ova example.mf
            tar uvf example.ova example.cert
            tar uvf example.ova example-disk1.vmdk
            tar uvf example.ova example-disk2.vmdk

          4. Deploy the OVA template from the HTTP URL again.
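
          The repackaging in step 3 can be wrapped in a small script. The sketch below is a minimal illustration under fabricated data: it first builds a stand-in OVA with the slow layout (manifest and certificate after the disks, using the example.* names from the listing above), then rebuilds it with those files at the front. The example-fast.ova output name is hypothetical. tar appends members in command order, which is what places the manifest and certificate ahead of the disks.

```shell
set -e
mkdir -p /tmp/ova-reorder && cd /tmp/ova-reorder
# Fabricate a stand-in OVA whose manifest and certificate sit after the disks
printf 'ovf'  > example.ovf
printf 'mf'   > example.mf
printf 'cert' > example.cert
printf 'd1'   > example-disk1.vmdk
tar cf example.ova example.ovf example-disk1.vmdk example.mf example.cert
rm example.ovf example.mf example.cert example-disk1.vmdk

# Extract the members, then rebuild with descriptor, manifest, and cert first
tar xf example.ova
tar cf example-fast.ova example.ovf
tar uf example-fast.ova example.mf
tar uf example-fast.ova example.cert
tar uf example-fast.ova example-disk1.vmdk
```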

      • OVF templates exported in vSphere 6.5 that contain &#xd; cannot be deployed in vSphere 5.5 or vSphere 6.0
        If an OVF template that contains &#xd; in the OVF descriptor is exported from the vSphere Web Client 6.5, it cannot be deployed from vSphere Web Client 5.5 or vSphere Web Client 6.0. The following is an example of an OVF element in the OVF descriptor:

        <Annotation>---&#xd;This is a sample annotation for this OVF template ---</Annotation>

        Workaround: Remove &#xd; from the OVF descriptor:

        1. Remove all the instances of &#xd; from the OVF descriptor.

        2. If the OVF or OVA template has a manifest file, recalculate the checksum based on the updated OVF descriptor and update the manifest file. If a certificate file exists, update the certificate file to replace the checksum for the updated manifest file.
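
        The checksum recalculation in step 2 can be sketched as follows. This is a minimal illustration under the assumption that the template uses an OVF 1.x-style manifest, where each line has the form SHA1(file-name)= hash; the template.* file names are fabricated stand-ins for your package's real files.

```shell
set -e
mkdir -p /tmp/mf-demo && cd /tmp/mf-demo
# Stand-in package files; substitute your template's real descriptor and disks
printf '<Envelope/>' > template.ovf
printf 'disk-bytes'  > template-disk1.vmdk

# Rebuild the manifest: one "SHA1(name)= hash" line per package file
: > template.mf
for f in template.ovf template-disk1.vmdk; do
  printf 'SHA1(%s)= %s\n' "$f" "$(sha1sum "$f" | cut -d' ' -f1)" >> template.mf
done
```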

      • vSphere Web Client does not support exporting virtual machines or vApps as OVA templates
        In versions earlier than vSphere 6.5, you could export virtual machines and vApps as an OVA template on the vSphere Web Client. This functionality is not available in vSphere 6.5.

        Workaround: Export the virtual machine as an OVF template, and then create an OVA template from the OVF template files. The following procedure describes this process using Linux and Mac commands. Windows systems require installation of a TAR-capable utility.

        1. Use the vSphere Web Client to export the VM or vApp as OVF template to the local machine.

        2. Locate the downloaded OVF template files, and move them into an empty new folder.

        3. Perform one of the following tasks to create an OVA template from an OVF template.

          • Go to the new folder and create an OVA template using the tar command to combine the files:
            cd folder
            tar cvf ova-template-name.ova ovf-template-name.ovf
            tar uvf ova-template-name.ova ovf-template-name.mf
            tar uvf ova-template-name.ova ovf-template-name-1.vmdk
            ...
            tar uvf ova-template-name.ova ovf-template-name-n.vmdk

            n refers to the number of disks the VM contains. ova-template-name.ova is the final OVA template. Run the commands in the exact order so the OVA is correctly built.

            Note: The tar command must use the TAR format and comply with the USTAR (Uniform Standard Tape Archive) format as defined by the POSIX IEEE 1003.1 standards group.

          • If the OVF tool is installed on your system, run the following command:
            cd downloaded-ovf-template-folder
            path-to-ovf-tool\ovftool.exe ovf-template-name.ovf ova-template-name.ova
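
          The per-disk tar uvf commands above can be automated with a loop once the descriptor and manifest are archived. The sketch below fabricates stand-in files using the ovf-template-name placeholders from the step above; note the shell expands the ovf-template-name-*.vmdk glob in sorted order, which preserves disk numbering for up to nine disks (disk-10 would sort before disk-2).

```shell
set -e
mkdir -p /tmp/ova-build && cd /tmp/ova-build
# Stand-in exported OVF template files (placeholder names)
printf 'ovf' > ovf-template-name.ovf
printf 'mf'  > ovf-template-name.mf
printf '1'   > ovf-template-name-1.vmdk
printf '2'   > ovf-template-name-2.vmdk

# Descriptor first, then manifest, then each disk in order
tar cf ova-template-name.ova ovf-template-name.ovf
tar uf ova-template-name.ova ovf-template-name.mf
for d in ovf-template-name-*.vmdk; do
  tar uf ova-template-name.ova "$d"
done
```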

      • Deploying an OVF template containing compressed file references might fail
        When you deploy an OVF template containing compressed file references (typically compressed using gzip), the operation fails.

        The following is an example of an OVF element in the OVF descriptor:
        <References>
           <File ovf:size="458" ovf:href="valid_disk.vmdk.gz" ovf:compression="gzip" ovf:id="file1"></File>
        </References>

        Workaround: If the OVF tool is installed on your system, run the following commands to convert the OVF or OVA template. The new template contains no compressed disks.

        1. Go to the folder containing the template:
          cd template-folder

        2. Convert the template.

          • OVA template conversion:
            path-to-ovf-tool\ovftool.exe ova-template-name.ova newova-template-name.ova

          • OVF template conversion:
            path-to-ovf-tool\ovftool.exe ovf-template-name.ovf new-ovf-template-name.ovf

      • Deploying an OVF or OVA template that contains HTTP URLs in the file references fails
        When you attempt to deploy an OVF or OVA template that contains an HTTP URL in the file references, the operation fails with the following error:
        Invalid response code: 500

        For example:
        <References>
          <File ovf:size="0" ovf:href="http://www.example.com/dummy.vmdk" ovf:id="file1"></File>
        </References>

        Workaround: To download the files from the HTTP server and update the OVF or OVA template, perform the following steps:

        1. Open the OVF descriptor, and locate the file references with the HTTP URLs.

          For OVA templates, extract the files from the OVA template to open the OVF descriptor. For example, run the following command:
          tar xvf ova-template-name.ova

          Note: This command is for a Linux or Mac system. Windows systems require installation of a tar utility.

        2. Download the files from the HTTP URLs to a local machine and copy those files to the same folder as the OVF or OVA template.

        3. Replace the HTTP URLs in the OVF descriptor with the actual file names that are downloaded to the folder. For example:
          <File ovf:size="actual-downloaded-file-size" ovf:href="dummy.vmdk" ovf:id="file1"></File>

        4. If the template contains a manifest (.mf) file and a certificate (.cert) file, regenerate them by recalculating checksums of relevant files, or omit these files during the OVF deploy operation.

          For OVA template only, recreate the OVA template using one of the following methods:

          • Use the tar command to recreate the template:
            cd folder
            tar cvf ova-template-name.ova ovf-name.ovf
            tar uvf ova-template-name.ova manifest-name.mf
            tar uvf ova-template-name.ova cert-name.cert
            tar uvf ova-template-name.ova disk-name.vmdk

            Repeat for more disk or other file references.

            Note: The tar command must use the TAR format and comply with the USTAR (Uniform Standard Tape Archive) format as defined by the POSIX IEEE 1003.1 standards group.

          • Use the OVF tool to recreate the template (Windows):
            cd folder
            path-to-ovf-tool\ovftool.exe ovf-name.ovf ova-template-name.ova
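
          Steps 1 through 3 above (rewriting the HTTP reference to point at the downloaded local file) can be sketched like this. The descriptor line matches the example above; the dummy.vmdk content here is fabricated to stand in for the file you would actually download (for example, with curl -O), and the sed -i flag is GNU sed syntax. The substitution updates both the href and the recorded file size.

```shell
set -e
mkdir -p /tmp/ovf-url && cd /tmp/ovf-url
cat > template.ovf <<'EOF'
<File ovf:size="0" ovf:href="http://www.example.com/dummy.vmdk" ovf:id="file1"></File>
EOF
# Stand-in for the downloaded file (in practice, fetch it with curl -O)
printf 'disk-data' > dummy.vmdk

# Point the href at the local file and record its actual size
size=$(wc -c < dummy.vmdk | tr -d ' ')
sed -i "s|ovf:size=\"[0-9]*\" ovf:href=\"http://www.example.com/dummy.vmdk\"|ovf:size=\"$size\" ovf:href=\"dummy.vmdk\"|" template.ovf
```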

      • Deploying OVF or OVA templates fails from HTTP or HTTPS URLs that require authentication
        When you attempt to use the vSphere Web Client to deploy an OVF or OVA template from an HTTP or HTTPS URL that requires authentication, the operation fails. You receive the error:
        Transfer failed: Invalid response code 401.

        The attempt to deploy the OVF or OVA template fails because you cannot enter your credentials.

        Workaround: Download the files and deploy the template locally:

        1. Download the OVF or OVA template from the HTTP or HTTPS URL manually to your local machine in any accessible folder.

        2. Deploy a virtual machine from the downloaded OVF or OVA template on the local machine.

      • Deploying an OVF or OVA template with EFI/UEFI boot options is not supported in the vSphere Web Client
        When you deploy an OVF or OVA template in the vSphere Web Client with the EFI boot option and include a NVRAM file, the operation fails.

        Workaround: Deploy the OVF template with the EFI boot option using OvfTool, version 4.2.0.

      • Existing Network Protocol Profiles not populated and does not update Customize Template page of Deploy OVF Template wizard
        In the Customize Template page of the Deploy OVF Template wizard, the following custom properties are recognized and displayed:
        gateway, netmask, dns, searchPath, domainName, hostPrefix, httpProxy, subnet

        If no Network Protocol Profile exists for the selected network, a new Network Protocol Profile is automatically created if any of the custom properties is set. Each of the properties contains the values that you enter.

        If a Network Protocol Profile already exists for the selected network, the wizard does not pre-populate these custom properties, and any changes to these fields are ignored.

        Workaround: If you need custom settings other than the existing Network Protocol Profile settings, make sure no Network Protocol Profile exists on the selected network. Either delete the profile or delete the network mapping in the profile.

      • Selected template is not preserved when Deploy OVF Template wizard is minimized, vSphere Web Client is refreshed, and wizard restored
        When deploying an OVF template with the Deploy OVF Template wizard, perform the following:

        1. Navigate through all pages of the Deploy OVF Template wizard up to the final step.

        2. Minimize the wizard to the Work in Progress panel.

        3. Click Global Refresh at the top next to user name.

        4. Restore the wizard from the Work in Progress panel.

        This results in two issues:

        • The Source VM name displays the same value as Name value.

        • If you navigate to the Select template page the selected template is empty, indicating the template was not preserved.

        Although the selected template does not appear, the wizard can complete and the template correctly deploys. This issue only occurs within the vSphere Web Client interface for displaying the above two values.

        Workaround: Avoid using Global Refresh when deploying the OVF template.

      • Deploying OVF template fails for user without Datastore.Allocate Space permission
        When you deploy an OVF template without the Datastore.Allocate Space permission, the operation fails.

        Workaround: Assign Datastore.Allocate Space permission to the user.

      • OVF deployment of a vApp fails in a non-DRS cluster
        When you attempt to deploy an OVF containing a vApp in a non-DRS cluster, the operation fails. In vSphere 6.5, the Deploy OVF wizard allows you to select a non-DRS cluster that passes compatibility checks. However, the deployment attempt fails.

        Workaround: Enable DRS for the desired cluster or select another deployment location.

      • If you use the option LimitVMsPerESXhost, it might disable DRS load balancing and fail to generate any recommendations
        The LimitVMsPerESXhost option is implemented as part of DRS constraint check. If the number of virtual machines on the host exceeds the limit specified by the LimitVMsPerESXhost option, no additional virtual machines can be powered on or migrated to the host by DRS.

        Workaround: In this release, you can use the new advanced option TryBalanceVmsPerHost in place of the LimitVMsPerESXhost option, which avoids the potential DRS failure. You might observe cluster imbalance if you manually set the LimitVMsPerESXhost option to a small value (for example, 0).

      • The task progress bar for some of the content library operations does not change
        The task progress bar for some of the content library operations displays 0% during the task's progress. These operations are:

        • Deploy a virtual machine from a VM template and from a content library.

        • Clone a library item from one library to another library.

        • Synchronize a subscribed library.

        Workaround: None.

      • The initial version of a newly created content library item is 2
        The initial version of a newly created content library item is 2 instead of 1. You can view the version of the content library item in the Version column in the list of content library items.

        Workaround: None.

      • If your user name contains non-ASCII characters, you cannot import items to a content library from your local system
        If your user name contains non-ASCII characters, you might be unable to import items to a content library from your local system.

        Workaround: To import an item to a content library, use a URL link such as an HTTP link, an NFS link, or an SMB link.

      • If your user name contains non-ASCII characters, you cannot export items from a content library to your local system
        If your user name contains non-ASCII characters, you might be unable to export items from a content library to your local system.

        Workaround: None.

      • When you synchronize a content library item in a subscribed content library, some of the tags of the item may not appear
        Some of the tags of an item in a published content library may not appear in your subscribed content library after you synchronize the item.

        Workaround: None.

      • Deploy OVF task progress bar remains at 0 percent
        When deploying an OVF template from the local system, the progress bar in the Deploy OVF Template wizard remains at 0%. However, the tasks for Deploy OVF Template and Import OVF package are created.

        Workaround: When selecting a local OVF template, make sure to select all the referenced files, including the OVF file and the VMDK files that are defined within the OVF descriptor file.

      • Deployment operation fails if a virtual machine template (OVF) includes a storage policy with replication.
        If a virtual machine has a storage policy with storage replication group and is captured as a template in a library, that template causes the virtual machine deployment to fail. This happens because you cannot select a replication group when deploying from a Content Library template. The selection of a replication group is required for this type of template. You will receive an error message and must close the wizard manually. It will not close automatically despite the failed operation.

        Workaround: Delete the policy from the original virtual machine and create a new virtual machine template. You can add a policy to a new virtual machine after the new template has been created and deployed.

      • Selecting a storage policy when deploying a content library template causes a datastore or datastore cluster selection to be ignored.
        Selecting a storage policy causes the datastore or datastore cluster selection to be ignored. The virtual machine is deployed according to the user-selected storage policy, but not on the selected datastore or datastore cluster.

        Workaround: If the virtual machine has to be deployed on the specified datastore or datastore cluster, make sure the storage policy is set to "None" when deploying a content library template. This ensures that the virtual machine is stored on the selected datastore or datastore cluster. After the virtual machine is deployed successfully, you can apply a storage policy by navigating to the deployed virtual machine's page and editing the storage policy.

      • Uploading items to a library stops responding when hosts associated with the backing datastore are in maintenance mode.
        You cannot upload items to a library when all the hosts associated with the datastore backing that library are in maintenance mode. Doing so causes the process to stop responding.

        Workaround: Ensure that at least one host associated with the datastore backing the library is available during upload.

      • Mounting an ISO file from a content library to an unassociated virtual machine results in an empty dialog box.
        You can only mount an ISO file from a content library to a virtual machine if the datastore or storage device where the ISO file resides is accessible from the virtual machine host. If the datastore or storage device is not accessible, the user interface shows an empty dialog box.

        Workaround: Make the storage device where the ISO file resides accessible to the host where the virtual machine resides. If policies prohibit this, you can copy the ISO file to a library on a datastore that is accessible to the virtual machine.

      • Performing an Advanced Search for Content Libraries with the "Content Library Published" property fails.
        Performing an Advanced Search for content libraries with the property value "Content library published" causes the search to fail.

        Workaround: Manually browse for published libraries.

      Virtual Volumes Issues

      • After upgrade from vSphere 6.0 to vSphere 6.5, the Virtual Volumes storage policy might disappear from the VM Storage Policies list
        After you upgrade your environment to vSphere 6.5, the Virtual Volumes storage policy that you created in vSphere 6.0 might no longer be visible in the list of VM storage policies.

        Workaround: Log out of the vSphere Web Client, and then log in again.

      • The vSphere Web Client fails to display information about the default profile of a Virtual Volumes datastore
        Typically, you can check information about the default profile associated with a Virtual Volumes datastore. In the vSphere Web Client, you do this by browsing to the datastore and clicking Configure > Settings > Default Profiles.
        However, the vSphere Web Client is unable to report the default profiles when their IDs, configured at the storage side, are not unique across all the datastores reported by the same Virtual Volumes provider.

        Workaround: None.

      VMFS Issues

      • After failed attempts to grow a VMFS datastore, VIM API information and LVM information on the system are inconsistent
        This problem occurs when you attempt to grow the datastore while the backing SCSI device enters the All Paths Down (APD) or Permanent Device Loss (PDL) state. As a result, you might observe inconsistent information in the VIM APIs and in LVM commands on the host.

        Workaround: Perform these steps:

        1. Run the vmkfstools --growfs command on one of the hosts connected to the volume.

        2. Perform the rescan-vmfs operation on all hosts connected to the volume.
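        On the command line of an ESXi host, the two steps above might look like the following sketch. The device path and partition number are hypothetical examples; substitute the actual device backing your datastore. The VMFS rescan can also be triggered from the vSphere Web Client.

        ```shell
        # Step 1: on one host connected to the volume, retry the grow
        # operation against the backing device (hypothetical naa. path).
        vmkfstools --growfs "/vmfs/devices/disks/naa.600508b1001c16ef:1" \
                   "/vmfs/devices/disks/naa.600508b1001c16ef:1"

        # Step 2: on every host connected to the volume, rescan VMFS
        # volumes so the VIM API and LVM views are consistent again.
        vmkfstools -V
        ```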

      • VMFS6 datastore does not support combining 512n and 512e devices in the same datastore
        You can expand a VMFS6 datastore only with devices of the same type. If the VMFS6 datastore is backed by a 512n device, expand the datastore only with 512n devices. If the datastore is created on a 512e device, expand it only with 512e devices.

        Workaround: None.

      • ESXi does not support the automatic space reclamation on arrays with unmap granularity greater than 1 MB
        If the unmap granularity of the backing storage is greater than 1 MB, the unmap requests from the ESXi host are not processed. You can see the Unmap not supported message in the vmkernel.log file.

        Workaround: None.

      • Using storage rescan in environments with a large number of LUNs might cause unpredictable problems
        Storage rescan is an I/O-intensive operation. If you run it while performing other datastore management operations, such as creating or extending a datastore, you might experience delays and other problems. Problems are likely to occur in environments that approach the maximum of 1024 LUNs supported in the vSphere 6.5 release.

        Workaround: Typically, storage rescans that your hosts periodically perform are sufficient. You are not required to rescan storage when you perform the general datastore management tasks. Run storage rescans only when absolutely necessary, especially when your deployments include a large set of LUNs.

      VM Storage Policy Issues

      • Hot migrating a virtual machine with vMotion across vCenter Servers might change the compliance status of a VM storage policy
        After you use vMotion to perform a hot migration of a virtual machine across vCenter Servers, the VM Storage Policy compliance status changes to UNKNOWN.

        Workaround: Check compliance on the migrated virtual machine to refresh the compliance status.

        1. In the vSphere Web Client, browse to the virtual machine.

        2. From the right-click menu, select VM Policies > Check VM Storage Policy Compliance.
          The system verifies the compliance.

      vSphere HA and Fault Tolerance Issues

      • vCenter High Availability replication fails when user password expires
        When the vCenter High Availability user password expires, the vCenter High Availability replication fails with several errors. For more details on the errors and cause, see http://kb.vmware.com/kb/2148675.

        Workaround: Reset the vCenter High Availability user password on each of the three vCenter High Availability nodes: Active, Passive, and Witness. See http://kb.vmware.com/kb/2148675 for instructions on resetting the user passwords.

      • Deploying vCenter High Availability with an alternative Failover IP for the Passive node without specifying a Gateway IP address causes vCenter to fail
        vCenter High Availability requires you to specify a gateway IP address if an alternative IP address and netmask are defined for the Passive node in a vCenter High Availability deployment. If this gateway IP address is left unspecified when you use an alternative IP address for Passive node deployment, vCenter Server fails.

        Workaround: You must specify a gateway IP address if you use an alternative IP address for the Passive node in a VCHA deployment.

      • Deploying vCenter High Availability might fail if the appliance was set up with a mixed-case hostname
        If a VCSA is installed with a mixed-case FQDN, a subsequent attempt to deploy vCenter High Availability might fail. The reason for this is a case-sensitive hostname validation performed by the vCenter High Availability deployment.

        Workaround: Use a hostname with consistent case when setting up the appliance.

      • SSH must be enabled on vCenter Server Appliance in order to configure vCenter HA
        If SSH is disabled during the installation of a Management Node (with external PSC), the subsequent configuration of vCenter HA from the vSphere Web Client fails with the message: SSH is not enabled. You can enable SSH on the vCenter Server Appliance by using the vSphere Web Client or the appliance management UI (VAMI), and then configure vCenter HA from the vSphere Web Client.

        Workaround: None. You must enable SSH on vCenter Server Appliance for vCenter HA to work.

      • Unable to configure vCenter HA from vSphere Web Client UI when IP is used instead of FQDN during deployment
        When you perform the following steps, an error occurs:

        1. When deploying vCenter Server, in the "System Name" text box in the Deployment UI, enter an IP address instead of an FQDN and complete the installation.

        2. After successful deployment, perform all the prerequisite steps to configure vCenter HA.

        3. Configuring vCenter HA fails in the vSphere Web Client UI with following error message:
          Platform Service Controller information cannot be retrieved. Make sure that Application Management Service is running and you are member of Single Sign-On system Configuration Administrators group. Guest OS network information about the vCenter VM cannot retrieved. Make sure that Application Management Service is running.

        Workaround: Provide an FQDN in the System Name field when deploying vCenter (and/or PSC).

      • vSphere HA may fail to restart dependent VMs and any other VMs in a lower tier
        Currently, the VM override timer starts only after a successful VM placement. If there is a dependency between VMs in the same tier and a VM fails to restart successfully, all dependent VMs and VMs in lower tiers fail to restart. However, if VMs in different tiers have no dependency on a VM in the same tier, the tier timeout is respected and lower-tier VMs fail over after the timeout.

        Workaround: Do not create VM dependencies in the same tier.

      • Creating a vSphere HA cluster enables VM Component Protection by default and ESXi 5.5 hosts cannot be added to the cluster.
        Attempting to add an ESXi 5.5 host to a new vSphere HA cluster or enabling vSphere HA on a newly created cluster that has ESXi 5.5 hosts fails because VM Component Protection is enabled by default. This returns the error message: Cannot enable vSphere HA VM Component Protection for the specified cluster, because it contains a host with "Upgrade the host to 6.0 or greater". This does not affect ESXi 6.0 or later hosts.

        Workaround: Go to the vSphere HA settings of the newly created cluster. On the Failures and Responses tab, ensure "Datastore with PDL" and "Datastore with APD" are set to Disabled. After saving these settings, you can add ESXi 5.5 hosts to the cluster.

      • If an Active node reboots during an operation that removes vCenter HA Cluster configuration, you might need to manually start the Active node.
        Removing a vCenter HA cluster configuration is a multi-step process that updates the vCenter Server Appliance configuration. As part of this process, the appliance is marked so that it starts as a standalone vCenter Server Appliance. If the Active appliance crashes or reboots during a key operation of the configuration removal, the Active node might reboot in a mode where you must intervene to start all services on the appliance.

        Workaround: You must do the following to start all services on the Active Appliance:

        1. Log in to the console of the Active vCenter appliance.

        2. Enable Bash at the appliance shell prompt.

        3. Run the command: destroy-vcha -f

        4. Reboot the appliance.
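        On the appliance console, the recovery steps above might look like the following sketch. This assumes the appliance shell (appliancesh) is the default login shell; destroy-vcha is the tool named in step 3.

        ```shell
        # 1. At the appliancesh prompt of the Active node, enable and
        #    enter the Bash shell.
        shell.set --enabled true
        shell

        # 2. Force-remove the leftover vCenter HA configuration so the
        #    node starts as a standalone vCenter Server Appliance.
        destroy-vcha -f

        # 3. Reboot the appliance to bring all services back up.
        reboot
        ```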

      • If you reboot an Active node with the intent to fail over, the Active node might continue as the Active node after the reboot.
        In a vCenter HA cluster, when an Active node goes through a reboot cycle, the Passive node detects that the Active node is momentarily down and tries to take over the Active role. If the Active node reboots while its appliance state is being modified, the failover to the Passive node might not complete. If this occurs, the Active node continues as the Active node after the reboot cycle is complete.

        Workaround: If you are rebooting the Active node to cause a failover to the Passive node, use the Initiate Failover workflow in the UI or invoke the Initiate Failover API. This ensures that the Passive node takes the role of the Active node.
