VMware ESXi 5.5 Update 2 Release Notes


VMware ESXi™ 5.5 Update 2 | 09 SEP 2014 | Build 2068190

Last updated: 02 July 2015

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

What's New

This release of VMware ESXi contains the following enhancements:

  • Support for hosts with 6TB of RAM - vSphere 5.5 Update 2 adds support for hosts with up to 6TB of RAM.
  • VMware vShield Endpoint Thin Agent is renamed to the VMware Tools Guest Introspection plugin - The vShield Endpoint driver bundled with VMware Tools is now called Guest Introspection.
  • Resolved Issues - This release delivers a number of bug fixes that are documented in the Resolved Issues section.

Earlier Releases of ESXi 5.5

Features and known issues of ESXi 5.5 are described in the release notes for each release. Release notes for earlier releases of ESXi 5.5 are:

Internationalization

VMware vSphere 5.5 Update 2 is available in the following languages:

  • English
  • French
  • German
  • Japanese
  • Korean
  • Simplified Chinese
  • Traditional Chinese

Compatibility and Installation

ESXi, vCenter Server, and vSphere Web Client Version Compatibility

The VMware Product Interoperability Matrix provides details about the compatibility of current and earlier versions of VMware vSphere components, including ESXi, VMware vCenter Server, the vSphere Web Client, and optional VMware products. Check the VMware Product Interoperability Matrix also for information about supported management and backup agents before you install ESXi or vCenter Server.

The vSphere Client and the vSphere Web Client are packaged on the vCenter Server ISO. You can install one or both clients by using the VMware vCenter™ Installer wizard.

ESXi, vCenter Server, and VDDK Compatibility

Virtual Disk Development Kit (VDDK) 5.5.3 adds support for ESXi 5.5 Update 2 and vCenter Server 5.5 Update 2 releases.
For more information about VDDK, see http://www.vmware.com/support/developer/vddk/.

ESXi and Virtual SAN Compatibility

Virtual SAN does not support clusters that are configured with ESXi hosts earlier than 5.5 Update 1. Make sure all hosts in the Virtual SAN cluster are upgraded to ESXi 5.5 Update 1 or later before enabling Virtual SAN. vCenter Server should also be upgraded to 5.5 Update 1 or later.
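
To confirm the version and update level of each host before you enable Virtual SAN, you can run the following command in the ESXi Shell. This is a minimal check; the output shown is illustrative and can vary by build:

    ~ # esxcli system version get
       Product: VMware ESXi
       Version: 5.5.0
       Build: Releasebuild-2068190
       Update: 2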

Hardware Compatibility for ESXi

To view a list of processors, storage devices, SAN arrays, and I/O devices that are compatible with vSphere 5.5 Update 2, use the ESXi 5.5 Update 2 information in the VMware Compatibility Guide.

Device Compatibility for ESXi

To determine which devices are compatible with ESXi 5.5 Update 2, use the ESXi 5.5 Update 2 information in the VMware Compatibility Guide.

Some devices are deprecated and no longer supported on ESXi 5.5 and later. During the upgrade process, the device driver is still installed on the ESXi 5.5.x host and the device might continue to function, but the device is not supported on ESXi 5.5.x. For a list of devices that have been deprecated and are no longer supported on ESXi 5.5.x, see the VMware Knowledge Base article Deprecated devices and warnings during ESXi 5.5 upgrade process.

Third-Party Switch Compatibility for ESXi

VMware supports Cisco Nexus 1000V with vSphere 5.5. For more information about Cisco Nexus 1000V, see the Cisco Release Notes. As in previous vSphere releases, Cisco Application Virtual Switch (AVS) is not supported.

Guest Operating System Compatibility for ESXi

To determine which guest operating systems are compatible with vSphere 5.5 Update 2, use the ESXi 5.5 Update 2 information in the VMware Compatibility Guide.

Virtual Machine Compatibility for ESXi

Virtual machines that are compatible with ESX 3.x and later (hardware version 4) are supported with ESXi 5.5 Update 2. Virtual machines that are compatible with ESX 2.x and later (hardware version 3) are not supported. To use such virtual machines on ESXi 5.5 Update 2, upgrade the virtual machine compatibility. See the vSphere Upgrade documentation.

vSphere Client Connections to Linked Mode Environments with vCenter Server 5.x

vCenter Server 5.5 can exist in Linked Mode only with other instances of vCenter Server 5.5.

Installation Notes for This Release

Read the vSphere Installation and Setup documentation for guidance about installing and configuring ESXi and vCenter Server.

Although the installations are straightforward, several subsequent configuration steps are essential. Read the following documentation:

Migrating Third-Party Solutions

You cannot directly migrate third-party solutions installed on an ESX or ESXi host as part of a host upgrade. Architectural changes between ESXi 5.1 and ESXi 5.5 result in the loss of third-party components and possible system instability. To accomplish such migrations, you can create a custom ISO file with Image Builder. For information about upgrading your host with third-party customizations, see the vSphere Upgrade documentation. For information about using Image Builder to make a custom ISO, see the vSphere Installation and Setup documentation.

Upgrades and Installations Disallowed for Unsupported CPUs

vSphere 5.5.x supports only CPUs with LAHF and SAHF CPU instruction sets. During an installation or upgrade, the installer checks the compatibility of the host CPU with vSphere 5.5.x. If your host hardware is not compatible, a purple screen appears with a message about incompatibility. You cannot install or upgrade to vSphere 5.5.x.

Upgrades for This Release

For instructions about upgrading vCenter Server and ESX/ESXi hosts, see the vSphere Upgrade documentation.

Supported Upgrade Paths for Upgrade to ESXi 5.5 Update 2

The following table lists the supported upgrade paths for each upgrade deliverable and upgrade tool. Each source-version column includes its update releases: ESX/ESXi 4.0 (Updates 1-4), ESX/ESXi 4.1 (Updates 1-3), ESXi 5.0 (Updates 1-3), ESXi 5.1 (Updates 1-2), and ESXi 5.5 (Update 1).

Upgrade Deliverable                                   Supported Upgrade Tools                              ESX/ESXi 4.0   ESX/ESXi 4.1   ESXi 5.0   ESXi 5.1   ESXi 5.5
----------------------------------------------------  ---------------------------------------------------  -------------  -------------  ---------  ---------  ---------
VMware-VMvisor-Installer-5.5.0.update02-              VMware vCenter Update Manager, CD Upgrade,           Yes            Yes            Yes        Yes        Yes
2068190.x86_64.iso                                    Scripted Upgrade
update-from-esxi5.5-5.5_update02.zip                  VMware vCenter Update Manager, ESXCLI,               No             No             Yes*       Yes*       Yes
                                                      VMware vSphere CLI
Patch definitions downloaded from the VMware          VMware vCenter Update Manager with                   No             No             No         No         Yes
portal (online)                                       patch baseline

*Note: Upgrade from ESXi 5.0.x or ESXi 5.1.x to ESXi 5.5 Update 2 using update-from-esxi5.5-5.5_update02.zip is supported only with ESXCLI. You need to run the esxcli software profile update --depot=<depot_location> --profile=<profile_name> command to perform the upgrade. For more information, see the ESXi 5.5.x Upgrade Options topic in the vSphere Upgrade guide.
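
For example, assuming the offline bundle has been copied to a datastore path such as /vmfs/volumes/datastore1/ (the depot location shown here is illustrative), the upgrade command takes the following form:

    esxcli software profile update --depot=/vmfs/volumes/datastore1/update-from-esxi5.5-5.5_update02.zip --profile=ESXi-5.5.0-20140902001-standard

The profile name corresponds to one of the image profiles listed in the Patches Contained in this Release section. To list the profiles available in a depot, run esxcli software sources profile list --depot=<depot_location>.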

Open Source Components for VMware vSphere 5.5 Update 2

The copyright statements and licenses applicable to the open source software components distributed in vSphere 5.5 Update 2 are available at http://www.vmware.com/download/vsphere/open_source.html, on the Open Source tab. You can also download the source files for any GPL, LGPL, or other similar licenses that require the source code or modifications to source code to be made available for the most recent available release of vSphere.

Product Support Notices

  • vSphere Web Client. Because Linux platforms are no longer supported by Adobe Flash, vSphere Web Client is not supported on the Linux OS. Third party browsers that add support for Adobe Flash on the Linux desktop OS might continue to function.

  • VMware vCenter Server Appliance. In vSphere 5.5, the VMware vCenter Server Appliance meets high-governance compliance standards through the enforcement of the DISA Security Technical Implementation Guide (STIG). Before you deploy VMware vCenter Server Appliance, see the VMware Hardened Virtual Appliance Operations Guide for information about the new security deployment standards and to ensure successful operations.

  • vCenter Server database. vSphere 5.5 removes support for IBM DB2 as the vCenter Server database.

  • VMware Tools. Beginning with vSphere 5.5, all information about how to install and configure VMware Tools in vSphere is merged with the other vSphere documentation. For information about using VMware Tools in vSphere, see the vSphere documentation. The standalone Installing and Configuring VMware Tools guide is not relevant to vSphere 5.5 and later.

  • VMware Tools. Beginning with vSphere 5.5, VMware Tools do not provide ThinPrint features.

  • vSphere Data Protection. vSphere Data Protection 5.1 is not compatible with vSphere 5.5 because of a change in the way vSphere Web Client operates. vSphere Data Protection 5.1 users who upgrade to vSphere 5.5 must also update vSphere Data Protection to continue using vSphere Data Protection.

Patches Contained in this Release

This release contains all bulletins for ESXi that were released earlier to the release date of this product. See the VMware Download Patches page for more information about the individual bulletins.

Patch Release ESXi550-Update02 contains the following individual bulletins:

ESXi550-201409201-UG: Updates ESXi 5.5 esx-base vib
ESXi550-201409202-UG: Updates ESXi 5.5 tools-light vib
ESXi550-201409203-UG: Updates ESXi 5.5 net-tg3 vib
ESXi550-201409204-UG: Updates ESXi 5.5 sata-ahci vib
ESXi550-201409205-UG: Updates ESXi 5.5 scsi-megaraid-sas vib
ESXi550-201409206-UG: Updates ESXi 5.5 misc-drivers vib
ESXi550-201409207-UG: Updates ESXi 5.5 sata-ata-piix vib


Patch Release ESXi550-Update02 Security-only contains the following individual bulletins:

ESXi550-201409101-SG: Updates ESXi 5.5 esx-base vib

Patch Release ESXi550-Update02 contains the following image profiles:

ESXi-5.5.0-20140902001-standard
ESXi-5.5.0-20140902001-no-tools

Patch Release ESXi550-Update02 Security-only contains the following image profiles:

ESXi-5.5.0-20140901001s-standard
ESXi-5.5.0-20140901001s-no-tools

For information on patch and update classification, see KB 2014447.
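
To verify that the update has been applied to a host, you can check the active image profile and the installed VIB versions from the ESXi Shell. This is a minimal check; the exact output depends on the host:

    # Show the image profile currently applied to the host
    esxcli software profile get
    # List installed VIBs, including those delivered by the bulletins above
    esxcli software vib list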

Resolved Issues

This section describes resolved issues in this release:

CIM and API Issues

  • ESXi host might experience high I/O latency
    When large CIM requests are sent to the LSI SMI-S provider on an ESXi host, high I/O latency might occur on the ESXi host, resulting in poor storage performance.

    This issue is resolved in this release.
  • vCenter Server might display the status of the power supply information as unknown
    After you install an ESXi host and connect it to a vCenter Server, the power supply information in the vCenter Server Hardware Status tab might appear as unknown.

    This issue is resolved in this release.
  • sfcbd service cannot be stopped or restarted
    When you stop and start the hardware monitoring service (sfcbd) in a vCenter Server to monitor the hardware status of the host, the sfcbd process might stop with an error similar to the following written to the syslog file:

    sfcbd: Sending TERM signal to sfcbd
    sfcbd-watchdog: Sleeping for 20 seconds
    sfcbd: Sending TERM signal to sfcbd
    sfcbd-watchdog: Sleeping for 20 seconds
    sfcbd-watchdog: Waited for 3 20 second intervals, SIGKILL next
    sfcbd: Stopping sfcbd
    sfcbd-watchdog: Sleeping for 20 seconds
    sfcbd-watchdog: Providers have terminated, lets kill the sfcbd.
    sfcbd-watchdog: Reached max kill attempts. watchdog is exiting


    This issue is resolved in this release.
  • sfcb might respond with incorrect method provider
    On an ESXi host, when you register two different method providers to the same CIM class with different namespaces, upon request, the sfcb always responds with the provider nearest to the top of providerRegister. This might be an incorrect method provider.

    This issue is resolved in this release.
  • Hostd might not respond when you view the health status of an ESXi host
    The hostd service might not respond when you connect vSphere Client to an ESXi host to view health status and perform a refresh action. You see a log similar to the following written to hostd.log:

    YYYY-MM-DDThh:mm:ss.344Z [5A344B90 verbose 'ThreadPool'] usage : total=22 max=74 workrun=22 iorun=0 workQ=0 ioQ=0 maxrun=30 maxQ=125 cur=W

    This issue is resolved in this release.
  • Hardware health monitoring might fail to respond
    Hardware health monitoring might fail to respond and error messages similar to the following might be displayed by CIM providers:

    2014-02-25T02:15:34Z sfcb-CIMXML-Processor[233738]: PAM unable to dlopen(/lib/security/$ISA/pam_passwdqc.so): /lib/security/../../lib/security/pam_passwdqc.so: cannot open shared object file: Too many open files
    2014-02-25T02:15:34Z sfcb-CIMXML-Processor[233738]: PAM adding faulty module: /lib/security/$ISA/pam_passwdqc.so
    2014-02-25T02:15:34Z sfcb-CIMXML-Processor[233738]: PAM unable to dlopen(/lib/security/

    The SFCB service might also stop responding.


    This issue is resolved in this release.
  • Web Based Enterprise Management (WBEM) queries might fail when you attempt to monitor the hardware health of an ESXi host
    WBEM queries might fail when you attempt to monitor the hardware health of an ESXi host. An error message similar to the following is written to the syslog file:

    Timeout error accepting SSL connection exiting.

    To resolve this issue, a new configuration option, httpSelectTimeout, is added that allows you to set the timeout value.
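
    A minimal sketch of applying the new option, assuming it is set in the sfcb configuration file /etc/sfcb/sfcb.cfg and picked up after a service restart (the file location, option syntax, and value shown are assumptions for illustration, not confirmed by these release notes):

    # Append the timeout option (value in seconds is illustrative)
    echo "httpSelectTimeout: 600" >> /etc/sfcb/sfcb.cfg
    # Restart the CIM service so the new value takes effect
    /etc/init.d/sfcbd-watchdog restart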

  • XML parsing error might occur on a WSMAN Client
    On a WS-Management (WSMAN) client, an XML parsing error might occur if the return value of a CIM method call from the ESXi WSMAN agent (openwsman) contains XML reserved characters, such as the left angle bracket (<) or ampersand (&).

    This issue is resolved in this release.

Guest Operating System Issues

  • Editing with VideoReDo application might cause a video to appear multiple times and might also cause distorted images to appear
    When you attempt to use VideoReDo to edit a video, the video might be displayed multiple times. Also, you might notice distorted images while editing the video.

    This issue is resolved in this release.
  • Increasing the size of a disk partition in a Mac OS X guest operating system using Disk Utility on an ESXi host might fail
    You are unable to increase the size of a disk partition in a Mac OS X guest operating system using Disk Utility on an ESXi host. This issue does not occur when you attempt to increase the size using VMware Fusion.

    This issue is resolved in this release by changing the headers in the GUID partition table (GPT) after increasing the disk size.
  • The Force BIOS Setup option might fail
    The virtual machine configuration option Force BIOS Setup on vSphere Client might fail with an error message similar to the following:

    The attempted operation cannot be performed in the current state (Powered on)

    This issue is resolved in this release.

Miscellaneous Issues

  • ESXi host might fail with purple diagnostic screen after powering on virtual machine with PCI Passthrough Device Emulex Quad Port LPE12004
    ESXi host might fail with a purple diagnostic screen after you power on a virtual machine with PCI Passthrough Device Emulex Quad Port LPE12004.
    You see the following error:

    cpu20:32852)@BlueScreen: #PF Exception 14 in world 32852:helper1-1 IP 0x41803baf7319 addr 0x410800fd3fa0
    PTEs:0x13f1f1023;0x208569d063;0x0;
    cpu20:32852)Code start: 0x41803ba00000 VMK uptime: 0:02:15:58.810
    cpu20:32852)0x4123c151de50:[0x41803baf7319]BackMap_Lookup@vmkernel#nover+0x35 stack: 0xffffffff00000000
    cpu20:32852)0x4123c151df00:[0x41803ba69483]IOMMUDoReportFault@vmkernel#nover+0x133 stack: 0x500000701
    cpu20:32852)0x4123c151df30:[0x41803ba69667]IOMMUProcessFaults@vmkernel#nover+0x1f stack: 0x0
    cpu20:32852)0x4123c151dfd0:[0x41803ba60f8a]helpFunc@vmkernel#nover+0x6b6 stack: 0x0
    cpu20:32852)0x4123c151dff0:[0x41803bc53242]CpuSched_StartWorld@vmkernel#nover+0xfa stack: 0x0
    cpu20:32852)base fs=0x0 gs=0x418045000000 Kgs=0x0

    You see entries similar to the following in vmkernel.log:

    cpu0:1097177)WARNING: IOMMUIntel: 2436: DMAR Fault IOMMU Unit #1: R/W=W, Device 0000:07:00.1 Faulting addr = 0xfd3fa000 Fault Reason = 0x05 -> PTE not set to allow Write.^[[0m
    cpu0:1097177)WARNING: IOMMUIntel: 2493: IOMMU context entry dump for 0000:07:00.1 Ctx-Hi = 0x302 Ctx-Lo = 0x14ac52001^[[0m


    This issue is resolved in this release.

  • The USB device information for ESXi 5.5 might be incorrect
    The USB device information available in /etc/vmware/usb.ids might display wrong vendor information.

    This issue is resolved in this release.
  • Virtual machines might fail to power on when you add PCI devices of BAR size less than 4 KB as a passthrough device
    Virtual machines might fail to power on when you add PCI devices of Base Address Register (BAR) size less than 4KB as a passthrough device.
    A message similar to the following is written to the vmware.log file:

    PCIPassthru: Device 029:04.0 barIndex 0 type 2 realaddr 0x97a03000 size 128 flags 0
    PCIPassthru: Device 029:04.0 barIndex 1 type 2 realaddr 0x97a01000 size 1024 flags 0
    PCIPassthru: Device 029:04.0 barIndex 2 type 2 realaddr 0x97a02000 size 128 flags 0
    PCIPassthru: Device 029:04.0 barIndex 3 type 2 realaddr 0x97a00000 size 1024 flags 0
    PCIPassthru: 029:04.0 : barSize: 128 is not pgsize multiple
    PCIPassthru: 029:04.0 : barSize: 1024 is not pgsize multiple
    PCIPassthru: 029:04.0 : barSize: 128 is not pgsize multiple
    PCIPassthru: 029:04.0 : barSize: 1024 is not pgsize multiple

    This issue is resolved in this release.
  • Windows 2008 R2 and Solaris 10 64-bit virtual machines  fail with a blue screen or kernel panic when running on ESXi 5.x
    When running a virtual machine with Windows 2008 R2 or Solaris 10 64-bit, you might experience these symptoms:

    • Windows 2008 R2  VMs fail with a blue screen and events such as the following:
      0x0000000a - IRQL_NOT_LESS_OR_EQUAL
      0x0000001a - MEMORY_MANAGEMENT
      0x000000fc - ATTEMPTED_EXECUTE_OF_NOEXECUTE_MEMORY
      0x0000004e - PFN_LIST_CORRUPT
      0x00000050 - PAGE_FAULT_IN_NONPAGED_AREA
      0x0000003B - SYSTEM_SERVICE_EXCEPTION


    • Solaris 10 64-bit VMs fail with kernel panic.
      For more information, see KB 2073791.

    This issue is resolved in this release.

Networking Issues

  • ESXi host might fail with a purple screen during uplink instability
    When the uplink is unstable, the ESXi host might fail with a purple screen and display an error message similar to the following:

    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]UplinkRxQueuesLoadBalance@vmkernel#nover+0xnnn stack: 0xnnnnnnnnnnnn
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]UplinkLB_LoadBalanceCB@vmkernel#nover+0x8e stack: 0xnnnnnnnnnnnn, 0x
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]UplinkAsyncProcessCallsHelperCB@vmkernel#nover+0x223 stack: 0x0, 0x4
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]helpFunc@vmkernel#nover+0x6b6 stack: 0x0, 0x0, 0x0, 0x0, 0x0
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]CpuSched_StartWorld@vmkernel#nover+0xfa stack: 0x0, 0x0, 0x0, 0x0, 0


    If you use the QLogic UCNA driver for your NIC, the status of the uplink port on an ESXi 5.5 host flaps and an error similar to the following is written to vmkernel.log:

    vmnic3:qlcnic_issue_cmd:372:Failed card response err_code: 0xn
    vmnic3:qlcnic_fw_cmd_destroy_tx_ctx:827:Failed to destroy tx ctx in firmware, err_code : 8
    vmnic3:qlcnic_dev_request_reset:6879:Changed the adapter dev_state to NEED_RESET.
    vmnic3:qlcnic_check_health:7485:Adapter not in operational state(Heartbit Failure), resetting adapter.
    <6>qlcnic 0000:04:00.1:
    vmnic3:qlcnic_check_peg_halt_status:6722:PEG_HALT_STATUS1: 0xnnnnnnnn, PEG_HALT_STATUS2: 0xnnnnnn.
    vmnic3:qlcnic_detach:2337:Deleted Tx and Rx loopback filters.
    vmnic3:qlcnic_disable_bus_master:1042:Disabled bus mastering.
    vmnic3:qlcnic_free_rx_irq:2008:Freed vmnic3_rx[0] irq.
    vmnic3:qlcnic_ctx_free_tx_irq:1859:Freed vmnic3_txq[0] irq #85.


    This issue is resolved in this release.
  • vApp startup time might increase when a large number of dvPort groups is connected to vCenter Server
    vApp startup time might increase when a large number of dvPort groups is connected to vCenter Server.

    This issue is resolved in this release.
  • SLP is unable to send UDP query network packets when multiple VMkernel interfaces are configured
    When you configure multiple VMkernel interfaces, Service Location Protocol (SLP) is unable to send UDP query network packets. As a result, the firewall drops the response packets.

    This issue is resolved in this release.
  • Bursts of data packets sent by applications might be dropped due to limited queue size on a vDS or on a standard vSwitch
    On a vNetwork Distributed Switch (vDS) or on a standard vSwitch where traffic shaping is enabled, bursts of data packets sent by applications might be dropped due to the limited queue size.

    This issue is resolved in this release.
  • Attempts to boot ESXi host using stateless cache image might fail due to wrong payload file name
    You might be unable to boot ESXi host using the stateless cache image when Auto Deploy fails to boot the host.
    An error message similar to the following is displayed when the host attempts to boot using the cached image:

    file not found. Fatal error : 15 (Not found)

    This issue occurs when you upgrade Auto Deploy from ESXi 5.0 to ESXi 5.x and you use the same image in the new Auto Deploy environment.

    This issue is resolved in this release.
  • Incorrect result might be reported while performing a vSphere Distributed Switch (VDS) health check
    Incorrect results might be reported while performing VDS Web Client health check to monitor health status for VLAN, MTU, and Teaming policies.

    This issue is resolved in this release.
  • Highly loaded filters might end up occupying the same Rx-Netqueue
    Rx processing might not be distributed evenly across the available Rx netqueues even when free queues are present. With some load patterns, highly loaded filters are packed into a single queue, which can leave physical NIC bandwidth under-utilized.

    This issue is resolved in this release.

Security Issues

  • Update to glibc packages
    The ESXi glibc-2.5 package is updated to resolve security issues.
    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the names CVE-2013-0242 and CVE-2013-1914 to these issues.
     

Server Configuration Issues

  • Ethtool utility might report incorrect cable type for Emulex 10Gb Ethernet (10GbE) 554FLR-SFP adapter
    The ethtool utility might report an incorrect cable connection type for the Emulex 10Gb Ethernet (10GbE) 554FLR-SFP adapter. This occurs because ethtool might not support the Direct Attached Copper (DAC) port type.

    This issue is resolved in this release.
  • ESXi installation on iSCSI remote LUN might fail
    Attempts to install ESXi on an iSCSI remote LUN might fail with the following error:

    Expecting 2 bootbanks, found 0

    This issue is resolved in this release.
  • Virtual machines might experience slow I/O response
    On an ESXi host that uses the default I/O scheduler, one or more virtual machines might utilize the maximum I/O bandwidth of the device for a long time, causing an IOPS imbalance. This occurs due to a race condition identified in the ESXi default I/O scheduler.

    This issue is resolved in this release by ensuring uniformity in IOPS across VMs on an ESXi host. 
  • Attempts to create a VMFS5 datastore larger than 16 TB on a storage device fail
    An ESXi host might fail when you create a VMFS5 datastore larger than 16 TB on a storage device. An error similar to the following is written to vmkernel.log:

    WARNING: LVM: 10032: Invalid firstPE 3758096383
    WARNING: LVM: 10039: Invalid lastPE 3758096383
    WARNING: LVM: 5242: Error detected for vol 526f639f-93de6438-940e-d48cb5bc414c, dev naa.60000970000195700908533037393545:1
    2013-11-05T09:50:56.997Z cpu10:34259)LVM: 7133: Device scan failed for <1> : Invalid metadata
    FSS: 5051: No FS driver claimed device 'naa.60000970000195700908533037393545:1': Not supported
    VC: 1958: Device rescan time 24 msec (total number of devices 9)
    VC: 1961: Filesystem probe time 56 msec (devices probed 7 of 9)
    VC: 1963: Refresh open volume time 0 msec
    LVM: 7525: Initialized naa.60000970000195700908533037393545:1, devID 5278bf81-af41c8bd-3c21-d48cb5bc414c
    LVM: 7613: Zero volumeSize specified: using available space (30786340240896).
    LVM: 13082: One or more LVM devices have been discovered.
    LVM: 7689: Error "No space left on device" creating volume 5278bf81-82937d98-3734-d48cb5bc414c on device naa.60000970000195700908533037393545:1.


    This issue is resolved in this release.
  • Attempts to edit the endpoints.conf file using the vi editor in the VMware vSphere Web Services SDK fail
    When you attempt to edit the /etc/vmware/rhttpproxy/endpoints.conf file in the vi editor, you might notice an error message similar to the following:

    Operation not permitted

    This issue is resolved in this release.
  • Unable to add a virtual machine to a vSphere Distributed Switch 5.5 port group with traffic filtering rules applied
    After you configure traffic filtering rules on a vSphere 5.5 Distributed Switch (VDS), you experience these two issues:
    • When you connect a virtual machine to a distributed switch portgroup with traffic filtering rules applied, you see an error in the vSphere Client or the vSphere Web Client:

      Cannot create DVPort portnumber of VDS switchname on the host hostname
      A general system error occurred.

    • When you attempt to modify an existing traffic filtering rule on a port group, you see this error:

      Cannot complete a vSphere Distributed Switch operation for one or more host members.
      vDS operation failed on host hostname, Received SOAP response fault from [<TCP:hostname:443>]: invokeHostTransactionCall
      Received SOAP response fault from [ ]: invokeHostTransactionCall
      A general system error occurred: got (vmodl.fault.SystemError) exception

    The hostd.log file (located at /var/log/ on the target ESXi host) contains entries similar to:

    YYYY-02-07T17:57:21.859Z [FF90D5B0 error 'Default' opID=577f035c-5f user=vpxuser] AdapterServer caught unexpected exception: Not a valid netmask

    This issue is resolved in this release.

  • Host is unable to join the domain when booted from local cache
    During system restart, if the ESXi host is unable to reach Auto Deploy and needs to boot from the local cache, the settings might not persist and the ESXi host might be removed from Active Directory (AD) upon reboot.

    This issue is resolved in this release.

  • Installing or upgrading ESXi might fail to create a 2.5 GB core dump partition on supported 4 GB and 6 GB USB flash drives
    When you install or upgrade ESXi hosts, the 2.5 GB core dump partition might not be created on supported 4 GB and 6 GB USB flash drives.

    This issue is resolved in this release.
  • ESXCLI commands might fail on Cisco UCS blade servers due to heavy storage load
    ESXCLI commands might fail on Cisco UCS blade servers due to heavy storage load. Error messages similar to the following might be written to the hostd.log file:

    2013-12-13T16:24:57.402Z [3C5C9B90 verbose 'ThreadPool'] usage : total=20 max=62 workrun=18 iorun=2 workQ=78 ioQ=0 maxrun=31 maxQ=79 cur=I
    2013-12-13T16:24:57.403Z [3C5C9B90 verbose 'ThreadPool'] usage : total=20 max=62 workrun=18 iorun=2 workQ=78 ioQ=0 maxrun=31 maxQ=79 cur=I
    2013-12-13T16:24:57.404Z [3BEBEB90 verbose 'ThreadPool'] usage : total=21 max=62 workrun=18 iorun=3 workQ=78 ioQ=0 maxrun=31 maxQ=79 cur=I
    2013-12-13T16:24:58.003Z [3BEBEB90 verbose 'ThreadPool'] usage : total=21 max=62 workrun=18 iorun=3 workQ=78 ioQ=0 maxrun=31 maxQ=79 cur=I
    2013-12-13T16:24:58.282Z [3C9D4B90 verbose 'ThreadPool'] usage : total=22 max=62 workrun=18 iorun=4 workQ=78 ioQ=0 maxrun=31 maxQ=79 cur=I

    This issue is resolved in this release.
  • ESXi hosts might fail with a purple screen when you perform X-vMotion
    ESXi hosts might fail with a purple screen with error messages similar to the following when you perform X-vMotion:

    #PF Exception 14 in world 301646:vpxa-worker IP 0x41801c281c03 addr 0x0
    PTEs:0x1aabaa027;0x1c7a86027;0x0;
    2014-01-16T09:36:52.171Z cpu0:301646)Code start: 0x41801b600000 VMK uptime: 1:03:55:02.989
    2014-01-16T09:36:52.180Z cpu0:301646)0x41242939d430:[0x41801c281c03]XVMotion_FreeBlocks@esx#nover+0x1f stack: 0x41801ba34b3d
    2014-01-16T09:36:52.192Z cpu0:301646)0x41242939d4a0:[0x41801c23abc2]SVMAsyncIOReadDone@svmVMware#0+0x1d2 stack: 0x412ed091a740
    2014-01-16T09:36:52.204Z cpu0:301646)0x41242939d4d0:[0x41801b62d29f]AsyncPopCallbackFrameInt@vmkernel#nover+0xe7 stack: 0x412eca9d6ac0

    This issue is resolved in this release.
  • SNMP management systems might report incorrect ESXi volume sizes for large file systems
    When you monitor ESXi using SNMP or management software that relies on SNMP, the SNMP management system might report an incorrect ESXi volume size when it retrieves the volume size for large file systems. A new switch that supports large file systems is introduced in this release.

    This issue is resolved in this release.
  • sfcb service might fail to open the ESXi firewall for CIM indication delivery if more than one destination listens for indications on different ports
    The sfcb server creates one firewall rule for each port used to send CIM indications, and the destination listens for indications on that port. The sfcb service fails to open the ESXi firewall for CIM indication delivery if more than one destination listens for indications on different ports.
    This issue occurs when the sfcb service fails to create more than one rule for multiple ports as it uses a fixed name for the firewall rules, and duplicate rule names are not allowed in a single rule set.

    This issue is resolved in this release by using the port number as a suffix for the rule name so that the rule names for different firewall rules do not conflict with each other.
  • Performing a compliance check on an ESXi host might result in error message
    When you perform a compliance check on an ESXi host, an error message similar to the following might be displayed in the vSphere Client:

    Found extra CIM-XML Indication Subscription on local system for query u'select * from CIM_AlertIndication' sent to destination u'https://IP:port'

    This issue is resolved in this release.
  • Permissions for an AD user or group might not persist after rebooting the ESXi host
    When you set permissions for an Active Directory (AD) user or group on an ESXi host with Host Profile, the permissions might not persist after you reboot the ESXi host with Auto Deploy.

    This issue is resolved in this release.
  • Hardware Status tab in the vCenter Server might report memory warnings and memory alert messages
    If Assert or Deassert entry is logged into the IPMI System Event Log (SEL) for the Memory Presence Detected line, the Hardware Status tab in the vCenter Server might report memory warnings and memory alert messages.

    This issue is resolved in this release.
  • Query of CIM instances from WSMAN Client on ESXi host might result in an XML parsing error
    An XML parsing error might occur when you query CIM instances that contain special characters in one of their properties from a WS-Management (WSMAN) client on an ESXi host.

    This issue is resolved in this release.
  • Trend Micro Deep Security Manager fails to prepare ESXi 5.5 host
    When you run the Prepare ESX Server wizard in the Trend Micro Deep Security Manager interface, it fails while installing the filter driver on an ESXi 5.5 host. You see the following error:

    The installation transaction failed

    The esxupdate.log file (located at /var/log/ on the ESXi host) contains entries similar to the following:

    YYYY-01-09T06:21:32Z esxupdate: BootBankInstaller.pyc: INFO: /altbootbank/boot.cfg: bootstate changed from 3 to 3
    YYYY-01-09T06:21:32Z esxupdate: esxupdate: ERROR: An esxupdate error exception was caught:
    YYYY-01-09T06:21:32Z esxupdate: esxupdate: ERROR: Traceback (most recent call last):
    YYYY-01-09T06:21:32Z esxupdate: esxupdate: ERROR: File "/usr/sbin/esxupdate", line 216, in main
    YYYY-01-09T06:21:32Z esxupdate: esxupdate: ERROR: cmd.Run()
    YYYY-01-09T06:21:32Z esxupdate: esxupdate: ERROR: File "/build/mts/release/bora-1331820/bora/build/esx/release/vmvisor/sys-boot/lib/python2.6/site-packages/vmware/esx5update/Cmdline.py", line 144, in Run
    YYYY-01-09T06:21:32Z esxupdate: esxupdate: ERROR: File "/build/mts/release/bora-1331820/bora/build/esx/release/vmvisor/sys-boot/lib/python2.6/site-packages/vmware/esximage/Transaction.py", line 245, in InstallVibsFromSources
    YYYY-01-09T06:21:32Z esxupdate: esxupdate: ERROR: File "/build/mts/release/bora-1331820/bora/build/esx/release/vmvisor/sys-boot/lib/python2.6/site-packages/vmware/esximage/Transaction.py", line 347, in _installVibs
    YYYY-01-09T06:21:32Z esxupdate: esxupdate: ERROR: File "/build/mts/release/bora-1331820/bora/build/esx/release/vmvisor/sys-boot/lib/python2.6/site-packages/vmware/esximage/Transaction.py", line 390, in _validateAndInstallProfile
    YYYY-01-09T06:21:32Z esxupdate: esxupdate: ERROR: File "/build/mts/release/bora-1331820/bora/build/esx/release/vmvisor/sys-boot/lib/python2.6/site-packages/vmware/esximage/HostImage.py", line 692, in Stage
    YYYY-01-09T06:21:32Z esxupdate: esxupdate: ERROR: File "/build/mts/release/bora-1331820/bora/build/esx/release/vmvisor/sys-boot/lib/python2.6/site-packages/vmware/esximage/HostImage.py", line 478, in _download_and_stage
    YYYY-01-09T06:21:32Z esxupdate: esxupdate: ERROR: InstallationError: ('Trend_bootbank_dvfilter-dsa_9.0.0-2636', '[Errno 4] Socket Error: [Errno 1] _ssl.c:1332: error:0906D06C:PEM routines:PEM_read_bio:no start line')


    This issue is resolved in this release.
  • ESXi hosts might be disconnected from vCenter Server after Veeam backup is performed
    After you perform Veeam backup, the ESXi hosts might be disconnected from the vCenter Server.
    This issue occurs when Veeam tries to create a snapshot of the virtual machine.
    Error messages similar to the following are written to the hostd.log file:

    --> Crash Report build=1312873
    --> Signal 11 received, si_code -128, si_errno 0
    --> Bad access at 735F6572

    This issue is resolved in this release.
  • Creating and deploying a host profile from an ESXi host that is managed by a vCenter Server might take a long time or might fail if the host is configured with multiple storage devices
    Creating and deploying a host profile from an ESXi host that is managed by a vCenter Server might take a long time or might fail when the host is configured with multiple storage devices and the host cache is SSD enabled.
    An error message similar to the following is displayed in the vSphere Client:

    Error: vcenter took too long to respond. Error Stack: Call "HostProfileManager.CreateProfile" for object "HostProfileManager" on vCenter Server "your.vcenter.server.fqdn" failed.

    This issue is resolved in this release.
  • ESXi host fails and indicates a page fault exception has occurred
    An ESXi host stops responding and displays a purple diagnostic screen indicating a page fault exception has occurred.
    You see a backtrace similar to the following:

    @BlueScreen: #PF Exception 14 in world 33020:reset-handle IP 0x418007961885 addr 0x14
    PTEs:0xnnnnnnnnn;0xnnnnnnnnn;0xnnnnnnnnn;0x0;
    Code start: 0xnnnnnnnnnnnn VMK uptime: 3:23:09:37.928

    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VFlashAbortMatchingObjectAsyncIO@com.vmware.vmkapi#v2_2_0_0+0xc1 sta
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]vmk_VFlashIoctl@com.vmware.vmkapi#v2_2_0_0+0x83 stack: 0x417fc6e6014
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VFlashAbortMatchingDeviceAsyncIO@com.vmware.vmkapi#v2_2_0_0+0x248 st
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VFlash_Ioctl@com.vmware.vmkapi#v2_2_0_0+0x247 stack: 0x412383f1ded0
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]DevFSFileResetCommand@vmkernel#nover+0x1b5 stack: 0x412383f1df20
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSCSI_FSResetTarget@vmkernel#nover+0x3a stack: 0x0
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSCSIResetWorldFunc@vmkernel#nover+0x459 stack: 0x0
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]CpuSched_StartWorld@vmkernel#nover+0xfa stack: 0x0


    This issue is resolved in this release.
  • Applying a host profile might cause a compliance check failure for hosts provisioned with stateless caching
    When you configure a host profile to use stateless caching by selecting the Enable stateless caching to a USB disk on the host option in the System Image Cache Profile Settings drop-down menu and attempt to apply the host profile, a compliance check failure might occur on the hosts.
    This issue occurs because local SAS devices are ignored, causing the compliance failures for those devices.

    This issue is resolved in this release.
  • ESXi host might display a purple screen during hypervisor swapping
    After you upgrade the ESXi host to ESXi 5.5 Update 1 on a NUMA server, which does not have memory allocated in the first NUMA node, the host might display a purple diagnostic screen with a backtrace similar to the following:

    Backtrace for current CPU #X, worldID=X, ebp=0xnnnnnnnnnnn
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]MemNode_MPN2ID@vmkernel#nover+0x1a stack: 0xnnnnnnnnn, 0xnnnnnn


    This issue is resolved in this release.
  • Status change of LSI MegaRaid disk might not be indicated or might be delayed
    When you use the LSI SMI-S provider with the MegaRaid SAS device driver on an ESXi 5.x host, the status change of an LSI MegaRaid disk might not be indicated or might be delayed when you run the enum_instances LSIESG_PhysicalDrive lsi/lsimr13 command.

    The following example indicates that the value of PowerState does not change or changes after a delay when the Power Saving mode of LSI MegaRaid disk is modified:

    LSILSIESG_PhysicalDrive.Tag="500605B004F93CF0_252_36",CreationClassName="LSIESG_PhysicalDrive"
    CreationClassName = LSIESG_PhysicalDrive
    Tag = 500605B004F93CF0_252_36
    UserDataBlockSize = 512
    PiType = 0
    PiEligible = 0
    PiFomatted = 0
    PowerState = 0 ---> No change in status
    Vendor = (NULL)
    FRUNumber = (NULL)
    DiskDrive_DeviceID = 252_36


    This issue is resolved in this release.
  • Warning message might be reported even when valid scratch partition is configured in the system
    If the configured scratch location is not in the root of the datastore (/vmfs/volumes/ /) , a warning message similar to the following might be reported due to incorrect identification of the scratch configuration:

    No scratch partition has been configured.

    This issue is resolved in this release.
  • Delay in loading the list of logical unit numbers (LUNs) in the Add Storage wizard
    When you add read-only LUNs to a new datastore to create 200 VMFS replicas, the list of volumes in the Add Storage wizard might be displayed after 30 minutes.

    This issue is resolved in this release.
  • Applying a host profile fails due to compliance failure
    When attempting to complete the host profile application, or after applying the profile, you might see these compliance failures:
     
    Specification state absent from host: device 'datastore' state needs to be set to 'on'
    Host state doesn't match specification: device 'datastore' needs to be reset
    Specification state absent from host: device 'datastore' Path Selection Policy needs to be set to 'VMW_PSP_FIXED'
    Host state doesn't match specification: device 'datastore' Path Selection Policy needs to be set to default for claiming SATP


    This issue occurs when a host profile that is saved from a reference host records a SCSI device as potentially a shared device. However, if the SCSI is a local device, then after the host profile is applied to a host, a compliance check can fail. For example, this issue can occur if Host1 has a SAS device with GUID1 and Host2 has a different SAS device with GUID2. Examples of local-only SCSI devices include certain local RAID controllers and SAS disks.

    Although the system hardware is comparable, you see a compliance error because similar devices with different device IDs are uniquely named in the form naa.UID, instead of the generic form mpx.pathname. The resulting profile compliance check detects that some devices are removed, while others are added without appearing in the stored profile.

    This issue has been resolved in ESXi 5.5 Update 2 for all SATA-based SSD devices behind SAS controllers. Such devices will be marked as local devices by ESXi. In addition, storage host profiles will ignore local SAS and USB devices except USB CD-ROMs thus removing the compliance check errors seen due to local SAS and USB devices.
  • Noncompliance messages appear after using Auto Deploy for stateless caching or stateful installs to USB
    After a host profile is edited to enable stateless caching to the USB disk on the host, the host profile receives compliance errors when you attempt to remediate. The host is rebooted and caching finishes. After checking compliance, the following compliance error messages might be received:

    Host state does not match specification:device.mpx.vmhbaXX:C0:T0:L0 parameters needs to be reset
    Host state does not match specification:device.mpx.vmhbaXX:C0:T0:L0 Path selection Policy needs to be set to default for claiming SATP


    In the above messages, XX might be any integer greater than or equal to 32, such as 34.

    This issue is resolved in this release.
  • Hostd service might fail when a quiesced snapshot operation fails during the LWD transfer process
    If a quiesced snapshot fails during the LWD transfer process, the hostd service might fail and an error message similar to the following might be written to the hostd.log file:

    Panic: Assert Failed: "false" @ bora/vim/hostd/hbrsvc/ReplicationGroup.cpp:5123

    This issue is resolved in this release.

Storage Issues

  • High virtual disk latency and low virtual disk performance might be observed when multiple virtual machines run on the same ESXi host and use the same NFS datastore
    In an environment where multiple virtual machines are running on the same ESXi host and using the same NFS datastore, you might experience high virtual disk latency and low virtual disk performance if you set the IOPS limit on one virtual machine. Even if you assign different IOPS limits for different virtual machines, the IOPS limit on all virtual machines is set to the lowest value assigned to a virtual machine in the NFS datastore.

    This issue is resolved in this release.
  • The disk identifier name might be truncated
    The Host Resources MIB (RFC 2790) limits the size of the formatted string to 64. The disk identifier name or any formatted string might be truncated if it exceeds this size limit.

    This issue is resolved in this release by removing redundant white space so that the valid information fits within the size limit.
  • Intel 40G CNA in an ESXi host might display the link status as down
    The Intel 40G Converged Network Adapter (CNA) in an ESXi host might display the network link status as down.

    This issue is resolved in this release.
  • Rescan all storage adapters operation running from the vCenter Server or ESXCLI might take a long time to complete
    The Rescan all storage adapters operation that you run from the vCenter Server or ESXCLI might take a long time to complete if the ESXi host has a large number of VMFS datastores.

    This release improves performance for various storage operations such as Rescan all storage adapters and VMFS datastores and operations such as Listing VMFS Snapshots and Devices in the Create Datastore wizard.
  • Changed Block Tracking is reset by storage vMotion
    Performing a storage vMotion operation on vSphere 5.x resets Changed Block Tracking (CBT).
    For more information, see KB 2048201.

    This issue is resolved in this release.
  • Large vApp deployments fail intermittently due to NFS lock contention
    vApp deployment might fail on NFS during linked clone operations if the Enable VAAI for fast provisioning option is set in vCloud Director (VCD). An error message similar to the following is present in the hostd.log file:

    Create child failed to mark parent read-only (25): The system cannot find the file specified

    This issue is resolved in this release.
  • Cluster wide storage rescan might cause ESXi hosts and virtual machines to become unresponsive
    During a VMFS refresh, virtual machines might become unresponsive to ping requests for a prolonged period of time. For more information, see KB 1039088.

    This issue is resolved in this release.
  • New claim rule option added for IBM
    A new claim rule option reset_on_attempted_reserve has been added to ESXi 5.5 for IBM storage Array Model 2145.
  • VM operations might fail when using CBT with Virtual SAN
    When CBT is used with Virtual SAN, in certain cases some components of the VMDK might be lost, which can cause virtual machine operations to fail.

    This issue is resolved in this release.

Upgrade and Installation Issues

  • Installing ESXi 5.5 might result in error messages being written to the syslog.log file
    After you install an ESXi host and an SNMP monitoring server monitors the host, error messages similar to the following might be written to the syslog.log file:

    snmpd: fetch_fixed_disk_status: fetch VSI_NODE_storage_scsifw_devices_smart_healthStatus(naa.6090a048e059642f5ac6d489e7faed03) failed Not supported, reporting unknown status

    This issue only occurs on hosts with disks that do not support Self Monitoring Analysis and Reporting Technology (SMART).

    This issue is resolved in this release.
  • OSPs might display error messages when you use a kickstart file to install RHEL virtual machines
    When you attempt to install RHEL virtual machines using a kickstart file, the operating system specific packages (OSPs) might display an error message similar to the following:

    Could not load modules.dep

    This issue is resolved in this release.
  • Adding ESXi host to vCenter Server might fail after you upgrade from ESX 4.1 to ESXi 5.5
    After you upgrade from ESX 4.1 to ESXi 5.5, the nsswitch.conf file of the host might not be migrated properly. As a result, you might be unable to add the host to vCenter Server.

    This issue is resolved in this release.
  • Upgrading from ESX 4.1 to ESXi 5.5 OEM might fail
    Attempts to upgrade ESX 4.1.x to ESXi 5.5.x OEM might fail due to an unsupported vsish command on the ESX 4.1 host. An error message similar to the following might be displayed:

    The upgrade is not supported on the host hardware. The upgrade ISO image contains HP (or other vendor) VIBs that failed the host hardware vendor compatibility check. VIBs by HP are not supported on host hardware model Hewlett-Packard Company made by vendor.

    This issue is resolved in this release.
  • Upgrading from ESXi 5.1 to ESXi 5.5 might cause the hostd service to fail and ESXi host might disconnect from vCenter Server
    When you upgrade from ESXi 5.1 to ESXi 5.5, the hostd service might fail. As a result, the ESXi host might disconnect from vCenter Server. This issue might also cause the Veeam backup operation to fail. Error messages similar to the following might be written to the hostd.log file:

    2014-01-17T11:15:29.069Z [704ADB90 error 'UW Memory checker'] Current value 644848 exceeds hard limit 643993. Shutting down process.2014-01-17T11:15:29.069Z [704ADB90 panic 'Default']
    -->
    --> Panic: Memory exceeds hard limit. Panic
    --> Backtrace:
    --> backtrace[00] rip 142c26a3 Vmacore::System::Stacktrace::CaptureFullWork(unsigned int)
    --> backtrace[01] rip 140e6a48 Vmacore::System::SystemFactoryImpl::CreateBacktrace(Vmacore::Ref <:system::backtrace> &)
    --> backtrace[02] rip 142b9803 /lib/libvmacore.so [0x142b9803]
    --> backtrace[03] rip 142b9bd1 Vmacore::PanicExit(char const*)
    --> backtrace[04] rip 140f3b6c Vmacore::System::ResourceChecker::DoCheck()
    --> backtrace[05] rip 140f423d boost::detail::function::void_function_obj_invoker <void,0

    This issue occurs due to hostd memory bloating.

    This issue is resolved in this release.

vCenter Server and vSphere Web Client Issues

  • ESXi host might be disconnected from the vCenter Server after hostd service fails
    ESXi host might be disconnected from the vCenter Server after the hostd service fails. This issue occurs when the virtual machine is unregistered. The ESXi host can reconnect to vCenter Server only after you restart the vpxa service.

    This issue is resolved in this release.
  • Performance chart might display incorrect throughput usage values for vCenter Server inventory objects
    The performance chart displays incorrect throughput usage values for the following vCenter Server inventory objects:
    • Virtual Machine > Virtual Disk
    • Virtual Machine > Disk
    • Host > Disk
    • Host > Storage Path
    • Host > Storage Adapter
    For more information, see KB 2064464.

    This issue is resolved in this release.
  • Average CPU usage values might be greater than the frequency of the processors multiplied by the number of processors
    The average CPU usage values displayed by PowerCLI might be greater than the frequency of the processors multiplied by the number of processors.

    This issue is resolved in this release by setting the maximum limit of the average CPU usage values correctly.
  • Snapshot quiescing and hot cloning of a Linux virtual machine might fail
    Snapshot quiescing and hot cloning of a Linux virtual machine might fail and error messages similar to the following might be written to vmware.log file:

     2013-12-12T00:09:59.446Z| vcpu-1| I120: [msg.snapshot.quiesce.vmerr] The guest OS has reported an error during quiescing.
     2013-12-12T00:09:59.446Z| vcpu-1| I120+ The error code was: 3
     2013-12-12T00:09:59.446Z| vcpu-1| I120+ The error message was: Error when enabling the sync provider.
     2013-12-12T00:09:59.446Z| vcpu-1| I120: ----------------------------------------
     2013-12-12T00:09:59.449Z| vcpu-1| I120: ToolsBackup: changing quiesce state: STARTED -> ERROR_WAIT
     2013-12-12T00:10:00.450Z| vcpu-1| I120: ToolsBackup: changing quiesce state: ERROR_WAIT -> IDLE
     2013-12-12T00:10:00.450Z| vcpu-1| I120: ToolsBackup: changing quiesce state: IDLE -> DONE
     2013-12-12T00:10:00.450Z| vcpu-1| I120: SnapshotVMXTakeSnapshotComplete: Done with snapshot 'clone-temp-1386910714682919': 0
     2013-12-12T00:10:00.450Z| vcpu-1| I120: SnapshotVMXTakeSnapshotComplete: Snapshot 0 failed: Failed to quiesce the virtual machine (40).


    This issue is resolved in this release.

Virtual Machine Management Issues

  • Virtual machine might fail while creating a quiesced snapshot
    Virtual machine might fail when vSphere Replication or other services initiate a quiesced snapshot.

    This issue is resolved in this release.
  • ESXi host might fail with a purple screen when you run custom scripts that use the AdvStats parameter
    ESXi host might fail with a purple screen when you run custom scripts using the AdvStats parameter to check the disk usage.
    An error message similar to the following might be written to vmkernel.log file:

    VSCSI: 231: Creating advStats for handle 8192, vscsi0:0
    The host reports a backtrace similar to:
    Histogram_XXX
    VSCSIPostIOCompletion
    AsyncPopCallbackFrameInt

    This issue is resolved in this release.
  • Guest operating system customization might fail if a virtual machine is migrated because of DRS
    During guest operating system customization, if a virtual machine is migrated because of DRS, the customization process might fail and you might notice an error message similar to the following in the Guestcust.log file:

    Unable to set customization status in vmx.

    This issue is resolved in this release.
  • Executing automation code to copy a virtual machine might cause ESXi host to disconnect from vCenter Server
    When you execute automation code to copy a virtual machine, the ESXi host might disconnect from vCenter Server. This issue occurs when the hostd service fails.

    This issue is resolved in this release.
  • vSphere HA protection state might remain as unprotected if the HA cluster incorrectly assumes that the virtual machine is powered off
    vSphere HA protection state might remain as unprotected if the HA cluster incorrectly assumes that the virtual machine is powered off.
    This issue occurs when vm.runtime.cleanPowerOff property is incorrectly set to true and as a result the HA cluster assumes that the virtual machine is powered off.
    Error messages similar to the following are written to the log files:

    vpxd-8.log:2013-11-21T20:44:49.902Z [7F69C7CF9700 info 'vmdasVm' opID=FdmWaitForUpdates-domain-c26-45712e7f] [VmMo::UpdateActualDasProtectStateLocked] actual protection state for VM vm-93 'unprotected' -> 'protected'
    vpxd-10.log:2013-11-22T00:03:01.731Z [7F69D4832700 info 'vmdasVm' opID=FdmWaitForUpdates-domain-c26-6b160bef] [VmMo::UpdateActualDasProtectStateLocked] actual protection state for VM vm-93 'protected' -> 'unprotected'


    This issue is resolved in this release.

vMotion and Storage vMotion Issues

  • Performing vMotion from ESXi 5.1 to ESXi 5.5 might cause the ESXi hosts to stop responding
    ESXi hosts might stop responding when you perform vMotion from ESXi 5.1 to ESXi 5.5.
    Also, the status of the virtual machine might be displayed as invalid when you perform any one of the following operations:
    • Unregister or register a virtual machine
    • Cold migrate a virtual machine from ESXi 5.1 to ESXi 5.5

    Error messages similar to the following might be written to the hostd.log file on the destination ESXi host:

    2013-12-18T15:21:52.455Z [FFBEFB70 verbose 'Vmsvc.vm:/vmfs/volumes/50d2f92b-bc57ec6f-f5c0-001c23d7ba27/migtest3/migtest3.vmx'] Get shared vigor fields message: CPUID register value () is invalid.
    --> CPUID register value () is invalid.
    --> CPUID register value () is invalid.
    --> CPUID register value () is invalid.
    --> CPUID register value () is invalid.
    --> CPUID register value () is invalid.
    --> CPUID register value () is invalid.
    --> CPUID register value () is invalid.
    -->
    2013-12-18T15:21:52.455Z [FFBEFB70 info 'Vmsvc.vm:/vmfs/volumes/50d2f92b-bc57ec6f-f5c0-001c23d7ba27/migtest3/migtest3.vmx'] Error encountered while retrieving configuration. Marking configuration as invalid: vim.fault.GenericVmConfigFault


    This issue is resolved in this release.
  • Performing vMotion might cause VMware Tools to auto-upgrade and virtual machines to reboot
    VMware Tools might auto-upgrade and the virtual machines might reboot when you enable upgrading of VMware Tools on power cycle and perform vMotion from an ESXi host with no-tools image profile to another ESXi host with the VMware Tools ISO image.

    This issue is resolved in this release.

VMware Tools Issues

  • Memory balloon driver (VMMEMCTL) compilation fails for FreeBSD version 10 and above
    The memory balloon driver (vmmemctl) might fail to compile in FreeBSD versions 10.0 and above. An error message similar to the following is written to the log file:

    os.c:300:22: error: implicit declaration of function 'kmem_alloc_nofault' is
    invalid in C99[-Werror,-Wimplicit-function-declaration]
    vm_offset_t res = kmem_alloc_nofault(kernel_map, PAGE_SIZE);
    ^
    os.c:357:14: error: incompatible pointer types passing 'vm_map_t' (aka
    'struct vm_map *') to parameter of type 'struct vmem *' [-Werror,-Wincompatible-pointer-types]
    kmem_free(kernel_map, (vm_offset_t)mapping, PAGE_SIZE);
    ^~~~~~~~~~
    @/vm/vm_extern.h:59:29: note: passing argument to parameter here
    void kmem_free(struct vmem *, vm_offset_t, vm_size_t);

    This issue is resolved in this release.
  • ESXi host might respond with a delay during login when MOVE Agentless 3.0 is running on a virtual machine
    You might experience a delay of 40 seconds or more during login when you run MOVE Agentless 3.0 on an ESXi host.

    This issue is resolved in this release.
  • Installing VMware Tools 5.1 on a Windows virtual machine reports Unity warnings in Windows Event logs
    After installing VMware Tools 5.1 on a Windows virtual machine, the Windows Application Event log of the guest operating system reports warnings similar to the following:

    [ warning] [vmusr:vmusr] vmware::tools::UnityPBRPCServer::Start: Failed to register with the host!
    [ warning] [vmusr:vmtoolsd] Failed registration of app type 2 (Signals) from plugin unity.
    [ warning] [vmusr:vmusr] Error in the RPC receive loop: RpcIn: Unable to send.
    [ warning] [vmusr:vmusr] socket count not create new socket, error 10047: An address incompatible with the requested protocol was used
    [ warning] [vmusr:vmusr] Channel restart failed [1]


    This issue is resolved in this release.
  • MOB for GuestInfo.ipAddress data object might not be populated correctly when more than four NICs are connected
    The Managed Object Browser (MOB) for GuestInfo.ipAddress data object might not be populated correctly and the value of GuestInfo.ipAddress data object might be displayed as unset when more than four NICs are connected.
    This issue occurs when VMware Tools fails to determine a valid guest operating system IP address.

    This issue is resolved in this release.
  • Windows virtual machines might fail with a blue screen when you upgrade VMware Tools
    After you upgrade VMware Tools, virtual machines might fail with a blue screen or might stop responding. This issue occurs due to a memory corruption when the TCP connections are not handled properly in the vShield Endpoint TDI Manager driver, vnetflt.sys.

    This issue is resolved in this release.
  • File disappears after VMware Tools upgrade
    The deployPkg.dll file that is usually present in the C:\Program Files\VMware\VMware Tools\ location might not be found after you upgrade VMware Tools to version 5.5 Update 1.

    This issue is resolved in this release.
  • Installing VMware Tools might cause Windows Event Viewer to display a warning message
    After you install VMware Tools, the Windows Event Viewer displays a warning similar to the following:

    Unable to read a line from 'C:\Program Files\VMware\VMware Tools\messages\es\hgfsUsability.vmsg': Invalid byte sequence in conversion input.

    This issue is particularly noticed when you install VMware Tools on a Spanish-locale operating system.

    This issue is resolved in this release.

Known Issues

Known issues not previously documented are marked with the * symbol. The known issues are grouped as follows:

Installation and Upgrade Issues

  • Attempts to get all image profiles might fail while running the Get-EsxImageProfile command in vSphere PowerCLI*
    When you run the Get-EsxImageProfile command using vSphere PowerCLI to get all image profiles, an error similar to the following is displayed:

    PowerCLI C:\Windows\system32> Get-EsxImageProfile
    Get-EsxImageProfile : The parameter 'name' cannot be an empty string.
    Parameter name: name
    At line:1 char:20
    + Get-EsxImageProfile <<<<
    + CategoryInfo : NotSpecified: (:) [Get-EsxImageProfile], ArgumentException
    + FullyQualifiedErrorId : System.ArgumentException,VMware.ImageBuilder.Commands.GetProfiles


    Workaround: Run the Get-EsxImageProfile -name "ESXi-5.x*" command, which includes the -name option, to display all image profiles created during the PowerCLI session.

    For example, running the command Get-EsxImageProfile -name "ESXi-5.5.*" displays all 5.5 image profiles similar to the following:

    PowerCLI C:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI> Get-EsxImageProfile -name "ESXi-5.5.*"

    Name Vendor Last Modified Acceptance Level
    ---- ------ ------------- ----------------
    ESXi-5.5.0-20140701001s-no-... VMware, Inc. 8/23/2014 6:... PartnerSupported
    ESXi-5.5.0-20140302001-no-t... VMware, Inc. 8/23/2014 6:... PartnerSupported
    ESXi-5.5.0-20140604001-no-t... VMware, Inc. 8/23/2014 6:... PartnerSupported
    ESXi-5.5.0-20140401020s-sta... VMware, Inc. 8/23/2014 6:... PartnerSupported
    ESXi-5.5.0-20131201001s-sta... VMware, Inc. 8/23/2014 6:... PartnerSupported
  • Simple Install fails on Windows Server 2012
    Simple Install fails on Windows Server 2012 if the operating system is configured to use a DHCP IP address.

    Workaround: Configure Windows Server 2012 to use a static IP address.

  • If you use Preserve VMFS with Auto Deploy Stateless Caching or Auto Deploy Stateful Installs, no core dump partition is created
    When you use Auto Deploy for Stateless Caching or Stateful Install on a blank disk, an MSDOS partition table is created. However, no core dump partition is created.

    Workaround: When you enable the Stateless Caching or Stateful Install host profile option, select Overwrite VMFS, even when you install on a blank disk. When you do so, a 2.5GB core dump partition is created.

  • During scripted installation, ESXi is installed on an SSD even though the --ignoressd option is used with the installorupgrade command
    In ESXi 5.5, the --ignoressd option is not supported with the installorupgrade command. If you use the --ignoressd option with the installorupgrade command, the installer displays a warning that this is an invalid combination. The installer continues to install ESXi on the SSD instead of stopping the installation and displaying an error message.

    Workaround: To use the --ignoressd option in a scripted installation of ESXi, use the install command instead of the installorupgrade command.
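
    For example, a scripted installation file (ks.cfg) might contain a line similar to the following; the disk-selection options shown here are illustrative only and should be adapted to your environment:

    # Sample only: install to the first detected disk, excluding SSDs from partitioning
    install --firstdisk --overwritevmfs --ignoressd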

  • Delay in Auto Deploy cache purging might apply a host profile that has been deleted
    After you delete a host profile, it is not immediately purged from the Auto Deploy cache. As long as the host profile is persisted in the cache, Auto Deploy continues to apply the host profile. Any rules that apply the profile fail only after the profile is purged from the cache.

    Workaround: You can determine whether any rules use deleted host profiles by using the Get-DeployRuleSet PowerCLI cmdlet. The cmdlet shows the string deleted in the rule's itemlist. You can then run the Remove-DeployRule cmdlet to remove the rule.

  • Applying host profile that is set up to use Auto Deploy with stateless caching fails if ESX is installed on the selected disk
    You use host profiles to set up Auto Deploy with stateless caching enabled. In the host profile, you select a disk on which a version of ESX (not ESXi) is installed. When you apply the host profile, an error that includes the following text appears.
    Expecting 2 bootbanks, found 0

    Workaround: Select a different disk to use for stateless caching, or remove the ESX software from the disk. If you remove the ESX software, it becomes unavailable.

  • Installing or booting ESXi version 5.5.0 fails on servers from Oracle America (Sun) vendors
    When you perform a fresh ESXi version 5.5.0 installation or boot an existing ESXi version 5.5.0 installation on servers from Oracle America (Sun) vendors, the server console displays a blank screen during the installation process or when the existing ESXi 5.5.0 build boots. This happens because servers from Oracle America (Sun) vendors have a HEADLESS flag set in the ACPI FADT table, even though they are not headless platforms.

    Workaround: When you install or boot ESXi 5.5.0, pass the boot option ignoreHeadless="TRUE".
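
    For example, you can press Shift+O during boot and append ignoreHeadless="TRUE" to the boot options. To keep the setting across reboots after ESXi starts, a command similar to the following might be used from the ESXi Shell (verify the exact syntax for your build):

    # Sample only: persist the ignoreHeadless kernel setting
    esxcli system settings kernel set --setting=ignoreHeadless --value=TRUE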

  • If you use ESXCLI commands to upgrade an ESXi host with less than 4GB physical RAM, the upgrade succeeds, but some ESXi operations fail upon reboot
    ESXi 5.5 requires a minimum of 4GB of physical RAM. The ESXCLI command-line interface does not perform a pre-upgrade check for the required 4GB of memory. You successfully upgrade a host with insufficient memory with ESXCLI, but when you boot the upgraded ESXi 5.5 host with less than 4GB RAM, some operations might fail.

    Workaround: None. Verify that the ESXi host has at least 4GB of physical RAM before you upgrade to version 5.5.
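
    For example, you might confirm the installed memory from the ESXi Shell before upgrading with a command similar to the following:

    # Sample check only: reports physical memory in bytes (4GB is 4294967296 bytes)
    esxcli hardware memory get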

  • After upgrade from vCenter Server Appliance 5.0.x to 5.5, vCenter Server fails to start if an external vCenter Single Sign-On is used
    If the user chooses to use an external vCenter Single Sign-On instance while upgrading the vCenter Server Appliance from 5.0.x to 5.5, the vCenter Server fails to start after the upgrade. In the appliance management interface, the vCenter Single Sign-On is listed as not configured.

    Workaround: Perform the following steps:

    1. In a Web browser, open the vCenter Server Appliance management interface (https://appliance-address:5480).
    2. On the vCenter Server/Summary page, click the Stop Server button.
    3. On the vCenter Server/SSO page, complete the form with the appropriate settings, and click Save Settings.
    4. Return to the Summary page and click Start Server.
  • When you use ESXCLI to upgrade an ESXi 4.x or 5.0.x host to version 5.1 or 5.5, the vMotion and Fault Tolerance Logging (FT Logging) settings of any VMkernel port group are lost after the upgrade
    If you use the command esxcli software profile update <options> to upgrade an ESXi 4.x or 5.0.x host to version 5.1 or 5.5, the upgrade succeeds, but the vMotion and FT Logging settings of any VMkernel port group are lost. As a result, vMotion and FT Logging are restored to the default setting (disabled).

    Workaround: Perform an interactive or scripted upgrade, or use vSphere Update Manager to upgrade hosts. If you use the esxcli command, apply vMotion and FT Logging settings manually to the affected VMkernel port group after the upgrade.
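
    For example, to re-enable vMotion and FT Logging on an affected VMkernel interface after the upgrade, commands similar to the following might be used; vmk1 is a placeholder for the affected interface:

    # Sample only: re-tag the VMkernel interface for vMotion and Fault Tolerance logging
    esxcli network ip interface tag add --interface-name=vmk1 --tagname=VMotion
    esxcli network ip interface tag add --interface-name=vmk1 --tagname=faultToleranceLogging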

  • When you upgrade vSphere 5.0.x or earlier to version 5.5, system resource allocation values that were set manually are reset to the default value
    In vSphere 5.0.x and earlier, you modify settings in the system resource allocation user interface as a temporary workaround. You cannot reset the value for these settings to the default without completely reinstalling ESXi. In vSphere 5.1 and later, the system behavior changes, so that preserving custom system resource allocation settings might result in values that are not safe to use. The upgrade resets all such values.

    Workaround: None.

  • IPv6 settings of virtual NIC vmk0 are not retained after upgrade from ESX 4.x to ESXi 5.5
    When you upgrade an ESX 4.x host with IPv6 enabled to ESXi 5.5 by using the --forcemigrate option, the IPv6 address of virtual NIC vmk0 is not retained after the upgrade.

    Workaround: None.

vCenter Single Sign-On Issues
  • Error 29107 appears during vSphere Web Client upgrade from 5.1 Update 1a to 5.5
    During an upgrade of the vSphere Web Client from version 5.1 Update 1a to version 5.5, Error 29107 appears if the vCenter Single Sign-On service that was in use before the upgrade is configured as High Availability Single Sign-On.

    Workaround: Perform the upgrade again. You can run the installer and select Custom Install to upgrade only the vSphere Web Client.

  • Cannot change the password of administrator@vsphere.local from the vSphere Web Client pulldown menu
    When you log in to the vCenter Single Sign-On server from the vSphere Web Client, you can perform a password change from the pulldown menu. When you log in as administrator@vsphere.local, the Change Password option is grayed out.

    Workaround:

    1. Select the Manage tab, and select vCenter Single Sign-On > Users and Groups.
    2. Right-click the administrator user and click Edit User.
    3. Change the password.

Networking Issues

  • Unable to use PCNet32 network adapter with NSX opaque network*
    When the PCNet32 flexible network adapter is configured with NSX opaque network backing, the adapter disconnects while the VM is being powered on.

    Workaround: None

  • Upgrading to ESXi 5.5 might change the IGMP configuration of TCP/IP stack for multicast group management*
    The default IGMP version of the management interfaces is changed from IGMP V2 to IGMP V3 for ESXi 5.5 hosts for multicast group management. As a result, when you upgrade to ESXi 5.5, the management interface might revert to IGMP V2 from IGMP V3 if it receives an IGMP query of a previous version, and you might notice IGMP version mismatch error messages.

    Workaround: Edit the default IGMP version by modifying the TCP/IP IGMP rejoin interval in the Advanced Configuration option.
  • Static routes associated with vmknic interfaces and dynamic IP addresses might fail to appear after reboot
    After you reboot the host, static routes that are associated with a VMkernel network interface (vmknic) and a dynamic IP address might fail to appear.
    This issue occurs due to a race condition between the DHCP client and the restore routes command. The DHCP client might not have finished acquiring an IP address for the vmknics when the host attempts to restore custom routes during the reboot process. As a result, the gateway might not be set up and the routes are not restored.

    Workaround: Run the esxcfg-route -r command to restore the routes manually.
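    To confirm that the routes are present afterward, a command similar to the following might be used:
    # Sample check only: list the currently configured routes
    esxcfg-route -l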
  • An ESXi host stops responding after being added to vCenter Server by its IPv6 address
    When you add an ESXi host to vCenter Server by IPv6 link-local address of the form fe80::/64, within a short time the host name becomes dimmed and the host stops responding to vCenter Server.

    Workaround: Use a valid IPv6 address that is not a link-local address.

  • The vSphere Web Client lets you configure more virtual functions than are supported by the physical NIC and does not display an error message
    In the SR-IOV settings of a physical adapter, you can configure more virtual functions than are supported by the adapter. For example, you can configure 100 virtual functions on a NIC that supports only 23, and no error message appears. A message prompts you to reboot the host so that the SR-IOV settings are applied. After the host reboots, the NIC is configured with as many virtual functions as the adapter supports, or 23 in this example. The message that prompts you to reboot the host persists when it should not appear.

    Workaround: None

  • On an SR-IOV enabled ESXi host, virtual machines associated with virtual functions might not start
    When SR-IOV is enabled on an ESXi 5.1 or later host with Intel ixgbe NICs and several virtual functions are enabled in the environment, some virtual machines might fail to start.
    The vmware.log file contains messages similar to the following:
    2013-02-28T07:06:31.863Z| vcpu-1| I120: Msg_Post: Error
    2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.log.error.unrecoverable] VMware ESX unrecoverable error: (vcpu-1)
    2013-02-28T07:06:31.863Z| vcpu-1| I120+ PCIPassthruChangeIntrSettings: 0a:17.3 failed to register interrupt (error code 195887110)
    2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.panic.haveLog] A log file is available in "/vmfs/volumes/5122262e-ab950f8e-cd4f-b8ac6f917d68/VMLibRoot/VMLib-RHEL6.2-64-HW7-default-3-2-1361954882/vmwar
    2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.panic.requestSupport.withoutLog] You can request support.
    2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.panic.requestSupport.vmSupport.vmx86]
    2013-02-28T07:06:31.863Z| vcpu-1| I120+ To collect data to submit to VMware technical support, run "vm-support".
    2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.panic.response] We will respond on the basis of your support entitlement.

    Workaround: Reduce the number of virtual functions associated with the affected virtual machine before starting it.

  • On an Emulex BladeEngine 3 physical network adapter, a virtual machine network adapter backed by a virtual function cannot reach a VMkernel adapter that uses the physical function as an uplink
    Traffic does not flow between a virtual function and its physical function. For example, on a switch backed by the physical function, a virtual machine that uses a virtual function on the same port cannot contact a VMkernel adapter on the same switch. This is a known issue of the Emulex BladeEngine 3 physical adapters. For information, contact Emulex.

    Workaround: Disable the native driver for Emulex BladeEngine 3 devices on the host. For more information, see VMware KB 2044993.

  • The ESXi Dump Collector fails to send the ESXi core file to the remote server
    The ESXi Dump Collector fails to send the ESXi core file if the VMkernel adapter that handles the traffic of the dump collector is configured to a distributed port group that has a link aggregation group (LAG) set as the active uplink. An LACP port channel is configured on the physical switch.

    Workaround: Perform one of the following workarounds:

    • Use a vSphere Standard Switch to configure the VMkernel adapter that handles the traffic for the ESXi Dump Collector with the remote server.
    • Use standalone uplinks to handle the traffic for the distributed port group where the VMkernel adapter is configured.
  • If you change the number of ports that a vSphere Standard Switch or vSphere Distributed Switch has on a host by using the vSphere Client, the change is not saved, even after a reboot
    If you change the number of ports that a vSphere Standard Switch or vSphere Distributed Switch has on an ESXi 5.5 host by using the vSphere Client, the number of ports does not change even after you reboot the host.

    When a host that runs ESXi 5.5 is rebooted, it dynamically scales up or down the ports of virtual switches. The number of ports is based on the number of virtual machines that the host can run. You do not have to configure the number of switch ports on such hosts.

    Workaround: None in the vSphere Client.

Server Configuration Issues

  • Menu navigation problems occur when the Direct Console User Interface is accessed from a serial console
    When the Direct Console User Interface (DCUI) is accessed from a serial console, the Up and Down arrow keys do not work while navigating the menu, and the user is forcefully logged out of the DCUI configuration screen.

    Workaround: Stop the DCUI process. The DCUI process will be restarted automatically.

  • Host profiles might incorrectly appear as compliant after ESXi hosts are upgraded to 5.5 Update 2 and the host configuration is changed
    If an ESXi host that is compliant with a host profile is updated to ESXi 5.5 Update 2, followed by some changes in the host configuration, and you re-check the compliance of the host with the host profile, the profile is incorrectly reported as compliant.

    Workaround:
    • In the vSphere Client, navigate to the host profile that has the issue and run Update Profile From Reference Host.
    • In the vSphere Web Client, navigate to the host profile that has the issue, click Copy settings from host, select the host from which you want to copy the configuration settings, and click OK.
  • Host Profile remediation fails with vSphere Distributed Switch
    Remediation errors might occur when you apply a Host Profile with a vSphere Distributed Switch while a virtual machine with Fault Tolerance is in a powered off state on a host that uses the distributed switch in that Host Profile.

    Workaround: Move the powered off virtual machines to another host in order for the Host Profile to succeed.

  • Host profile reports firewall settings compliance errors when you apply an ESX 4.0 or ESX 4.1 profile to an ESXi 5.5.x host
    If you extract a host profile from an ESX 4.0 or ESX 4.1 host and attempt to apply it to an ESXi 5.5.x host, the profile remediation succeeds. However, the compliance check reports firewall settings errors that include the following:

    Ruleset LDAP not found
    Ruleset LDAPS not found
    Ruleset TSM not found
    Ruleset VCB not found
    Ruleset activeDirectorKerberos not found

    Workaround: No workaround is required. This is expected because the firewall settings for an ESX 4.0 or ESX 4.1 host are different from those for an ESXi 5.5.x host.
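
    To review which rulesets exist on the ESXi 5.5.x host, a command similar to the following might be used:

    # Sample check only: list the firewall rulesets known to this host
    esxcli network firewall ruleset list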

  • Changing BIOS device settings for an ESXi host might result in invalid device names
    Changing a BIOS device setting on an ESXi host might result in invalid device names if the change causes a shift in the <segment:bus:device:function> values assigned to devices. For example, enabling a previously disabled integrated NIC might shift the <segment:bus:device:function> values assigned to other PCI devices, causing ESXi to change the names assigned to these NICs. Unlike previous versions of ESXi, ESXi 5.5 attempts to preserve device names through <segment:bus:device:function> changes if the host BIOS provides specific device location information. Due to a bug in this feature, invalid names such as vmhba1 and vmnic32 are sometimes generated.

    Workaround: Rebooting the ESXi host once or twice might clear the invalid device names and restore the original names. Do not run an ESXi host with invalid device names in production.

Storage Issues

  • ESXi hosts with HBA drivers might stop responding when VMFS heartbeats to the datastores time out*
    ESXi hosts with HBA drivers might stop responding when VMFS heartbeats to the datastores time out, with messages similar to the following:

    mem>2014-05-12T13:34:00.639Z cpu8:1416436)VMW_SATP_ALUA: satp_alua_issueCommandOnPath:651: Path "vmhba2:C0:T1:L10" (UP) command 0xa3 failed with status Timeout. H:0x5 D:0x0 P:0x0 Possible sense data: 0x5 0x20 0x0.
    2014-05-12T13:34:05.637Z cpu0:33038)VMW_SATP_ALUA: satp_alua_issueCommandOnPath:651: Path "vmhba2:C0:T1:L4" (UP) command 0xa3 failed with status Timeout. H:0x5 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.

    This issue occurs with the HBA driver when there is high disk I/O on the datastore connected to the ESXi host and multipathing is enabled at the target level instead of the HBA level.

    Workaround: Replace the HBA driver with the latest async HBA driver.
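    To identify which driver currently claims the HBA before replacing it, a command similar to the following might be used:
    # Sample check only: lists each storage adapter with the driver that claims it
    esxcli storage core adapter list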
  • Attempts to perform live storage vMotion of virtual machines with RDM disks might fail
    Storage vMotion of virtual machines with RDM disks might fail and virtual machines might be seen in a powered off state. Attempts to power on the virtual machine fail with the following error:

    Failed to lock the file

    Workaround: None.
  • Renamed tags appear as missing in the Edit VM Storage Policy wizard
    A virtual machine storage policy can include rules based on datastore tags. If you rename a tag, the storage policy that references this tag does not automatically update the tag and shows it as missing.

    Workaround: Remove the tag marked as missing from the virtual machine storage policy and then add the renamed tag. Reapply the storage policy to all out-of-date entities.

  • A virtual machine cannot be powered on when the Flash Read Cache block size is set to 16KB, 256KB, 512KB, or 1024KB
    A virtual machine configured with Flash Read Cache and a block size of 16KB, 256KB, 512KB, or 1024KB cannot be powered on. Flash Read Cache supports a minimum cache size of 4MB and maximum of 200GB, and a minimum block size of 4KB and maximum block size of 1MB. When you power on a virtual machine, the operation fails and the following messages appear:

    An error was received from the ESX host while powering on VM.

    Failed to start the virtual machine.

    Module DiskEarly power on failed.

    Failed to configure disk scsi0:0.

    The virtual machine cannot be powered on with an unconfigured disk. vFlash cache cannot be attached: msg.vflashcache.error.VFC_FAILURE

    Workaround: Configure virtual machine Flash Read Cache size and block size.

    1. Right-click the virtual machine and select Edit Settings.
    2. On the Virtual Hardware tab, expand Hard disk to view the disk options.
    3. Click Advanced next to the Virtual Flash Read Cache field.
    4. Increase the cache size reservation or decrease the block size.
    5. Click OK to save your changes.
  • A custom extension of a saved resource pool tree file cannot be loaded in the vSphere Web Client
    When you disable DRS in the vSphere Web Client, you are prompted to save the resource pool structure so that it can be reloaded in the future. The default extension of this file is .snapshot, but you can select a different extension for this file. If the file has a custom extension, it appears as disabled when you try to load it. This behavior is observed only on OS X.

    Workaround: Change the extension to .snapshot to load it in the vSphere Web Client on OS X.

  • DRS error message appears on the host summary page
    The following DRS error message appears on the host summary page:

    Unable to apply DRS resource settings on host. The operation is not allowed in the current state. This can significantly reduce the effectiveness of DRS.

    In some configurations a race condition might result in the creation of an error message in the log that is not meaningful or actionable. This error might occur if a virtual machine is unregistered at the same time that DRS resource settings are applied.

    Workaround: Ignore this error message.

  • Configuring virtual Flash Read Cache for VMDKs larger than 16TB results in an error
    Virtual Flash Read Cache does not support virtual machine disks larger than 16TB. Attempts to configure such disks will fail.

    Workaround: None

  • Virtual machines might power off when the cache size is reconfigured
    If you incorrectly reconfigure the virtual Flash Read Cache on a virtual machine, for example by assigning an invalid value, the virtual machine might power off.

    Workaround: Follow the recommended cache size guidelines in the vSphere Storage documentation.

  • Reconfiguring a virtual machine with virtual Flash Read Cache enabled might fail with the Operation timed out error
    Reconfiguration operations require a significant amount of I/O bandwidth. When you run a heavy load, such operations might time out before they finish. You might also see this behavior if the host has LUNs that are in an all paths down (APD) state.

    Workaround: Fix all host APD states and retry the operation with a smaller I/O load on the LUN and host.

  • DRS does not vMotion virtual machines with virtual Flash Read Cache for load balancing purpose
    DRS does not vMotion virtual machines with virtual Flash Read Cache for load balancing purposes.

    Workaround: DRS does not recommend these virtual machines for vMotion except for the following reasons:

    • To evacuate a host that the user has requested to enter maintenance or standby mode.
    • To fix DRS rule violations.
    • Host resource usage is in red state.
    • One or more hosts are over-utilized and virtual machine demand is not being met.
      Note: You can optionally set DRS to ignore this reason.
  • Hosts are put in standby when the active memory of virtual machines is low but consumed memory is high
    ESXi 5.5 introduces a change in the default behavior of DPM designed to make the feature less aggressive, which can help prevent performance degradation for virtual machines when active memory is low but consumed memory is high. The DPM metric is X%*IdleConsumedMemory + active memory. The X% variable is adjustable and is set to 25% by default.

    Workaround: You can revert to the aggressive DPM behavior found in earlier releases of ESXi by setting PercentIdleMBInMemDemand=0 in the advanced options.

  • vMotion initiated by DRS might fail
    When DRS recommends vMotion for virtual machines with a virtual Flash Read Cache reservation, vMotion might fail because the memory (RAM) available on the target host is insufficient to manage the Flash Read Cache reservation of the virtual machines.

    Workaround: Follow the Flash Read Cache configuration recommendations documented in vSphere Storage.
    If vMotion fails, perform the following steps:

    1. Reconfigure the block sizes of the virtual machines on the target host and of the incoming virtual machines to reduce the overall VMkernel memory usage on the target host.
    2. Use vMotion to manually migrate the virtual machine to the target host to ensure the condition is resolved.
  • You are unable to view problems that occur during virtual flash configuration of individual SSD devices
    The configuration of virtual flash resources is a task that operates on a list of SSD devices. When the task finishes for all objects, the vSphere Web Client reports it as successful, and you might not be notified of problems with the configuration of individual SSD devices.

    Workaround: Perform one of the following tasks.

    • In the Recent Tasks panel, double-click the completed task.
      Any configuration failures appear in the Related events section of the Task Details dialog box.
    • Alternatively, follow these steps:
      1. Select the host in the inventory.
      2. Click the Monitor tab, and click Events.
  • Unable to obtain SMART information for Micron PCIe SSDs on the ESXi host
    Your attempts to use the esxcli storage core device smart get -d command to display statistics for the Micron PCIe SSD device fail. You get the following error message:
    Error getting Smart Parameters: CANNOT open device

    Workaround: None. In this release, the esxcli storage core device smart command does not support Micron PCIe SSDs.

  • ESXi does not apply the bandwidth limit that is configured for a SCSI virtual disk in the configuration file of a virtual machine
    You configure the bandwidth and throughput limits of a SCSI virtual disk by using a set of parameters in the virtual machine configuration file (.vmx). For example, the configuration file might contain the following limits for a scsi0:0 virtual disk:
    sched.scsi0:0.throughputCap = "80IOPS"
    sched.scsi0:0.bandwidthCap = "10MBps"
    sched.scsi0:0.shares = "normal"

    ESXi does not apply the sched.scsi0:0.bandwidthCap limit to the scsi0:0 virtual disk.

    Workaround: Revert to an earlier version of the disk I/O scheduler by using the vSphere Web Client or the esxcli system settings advanced set command.

    • In the vSphere Web Client, edit the Disk.SchedulerWithReservation parameter in the Advanced System Settings list for the host.
      1. Navigate to the host.
      2. On the Manage tab, select Settings and select Advanced System Settings.
      3. Locate the Disk.SchedulerWithReservation parameter, for example, by using the Filter or Find text boxes.
      4. Click Edit and set the parameter to 0.
      5. Click OK.
    • In the ESXi Shell to the host, run the following console command:
      esxcli system settings advanced set -o /Disk/SchedulerWithReservation -i=0
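      To verify that the change took effect, a command similar to the following might be used:
      # Sample check only: display the current value of the Disk.SchedulerWithReservation option
      esxcli system settings advanced list -o /Disk/SchedulerWithReservation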
  • A virtual machine configured with Flash Read Cache cannot be migrated off a host if there is an error in the cache
    A virtual machine with Flash Read Cache configured might have a migration error if the cache is in an error state and is unusable. This error causes migration of the virtual machine to fail.

    Workaround:

    1. Reconfigure the virtual machine and disable the cache.
    2. Perform the migration.
    3. Re-enable the cache after the virtual machine is migrated.

    Alternatively, power off and then power on the virtual machine to correct the error with the cache.

  • You cannot delete the VFFS volume after a host is upgraded from ESXi 5.5 Beta
    You cannot delete the VFFS volume after a host is upgraded from ESXi 5.5 Beta.

    Workaround: This occurs only when you upgrade from ESXi 5.5 Beta to ESXi 5.5. To avoid this problem, install ESXi 5.5 instead of upgrading. If you upgrade from ESXi 5.5 Beta, delete the VFFS volume before you upgrade.

  • Expected latency runtime improvements are not seen when virtual Flash Read Cache is enabled on virtual machines with older Windows and Linux guest operating systems
    Virtual Flash Read Cache provides optimal performance when the cache is sized to match the target working set, and when the guest file systems are aligned to at least a 4KB boundary. The Flash Read Cache filters out misaligned blocks to avoid caching partial blocks within the cache. This behavior is typically seen when virtual Flash Read Cache is configured for VMDKs of virtual machines with Windows XP and Linux distributions earlier than 2.6. In such cases, a low cache hit rate with a low cache occupancy is observed, which implies a waste of cache reservation for such VMDKs. This behavior is not seen with virtual machines running Windows 7, Windows 2008, and Linux 2.6 and later distributions, which align their file systems to a 4KB boundary to ensure optimal performance.

    Workaround: To improve the cache hit rate and optimal use of the cache reservation for each VMDK, ensure that the guest operating system file system installed on the VMDK is aligned to at least a 4KB boundary.
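
    For example, inside a Linux guest you might check whether a partition is aligned to a 4KB boundary with a command similar to the following; the device name is illustrative:

    # Sample check only: with 512-byte sectors, a partition is 4KB-aligned when its start sector is divisible by 8
    fdisk -lu /dev/sda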

Virtual SAN

  • ESXi host with multiple VSAN disk groups might not display the magnetic disk statistics when you run the vsan.disks_stats command*
    An ESXi host with multiple VSAN disk groups might not display the magnetic disk (MD) statistics when you run the vsan.disks_stats Ruby vSphere Console (RVC) command. The host displays only the solid-state drive (SSD) information.

    Workaround: None
  • VM directories contain duplicate swap (.vswp) files
    This might occur if virtual machines running on Virtual SAN are not shut down cleanly, and if you perform a fresh installation of ESXi and vCenter Server without erasing data from Virtual SAN disks. As a result, old swap files (.vswp) are found in the directories for virtual machines that are shut down uncleanly.

    Workaround: None

  • Attempts to add more than seven magnetic disks to a Virtual SAN disk group might fail with incorrect error message
    A Virtual SAN disk group supports a maximum of one SSD and seven magnetic disks (HDDs). Attempts to add an additional magnetic disk might fail with an incorrect error message similar to the following:

    The number of disks is not sufficient.

    Workaround: None
  • Re-scan failure experienced while adding a Virtual SAN disk
    When you add a Virtual SAN disk, the rescan fails because of a probe failure for a non-Virtual SAN volume, and the operation fails.

    Workaround: Ignore the error as all the disks are registered correctly.
  • A hard disk drive (HDD) that is removed after its associated solid state drive (SSD) is removed might still be listed as a storage disk claimed by Virtual SAN
    If an SSD and then its associated HDD is removed from a Virtual SAN datastore and you run the esxcli vsan storage list command, the removed HDD is still listed as a storage disk claimed by Virtual SAN. If the HDD is inserted back in a different host, the disk might appear to be part of two different hosts.

    Workaround: For example, if the SSD and HDD are removed from host ESXi x and inserted into host ESXi y, perform the following steps to prevent the HDD from appearing to be a part of both ESXi x and ESXi y:
    1. Insert the SSD and HDD removed from ESXi x into ESXi y.
    2. Decommission the SSD from ESXi x.
    3. Run the command esxcfg-rescan -A.
       The HDD and SSD will no longer be listed on ESXi x.
  • The Working with Virtual SAN section of the vSphere Storage documentation indicates that the maximum number of HDD disks per disk group is six. However, the maximum allowed number of HDDs is seven.
  • After a failure in a Virtual SAN cluster, vSphere HA might report multiple events, some misleading, before restarting a virtual machine
    The vSphere HA master agent makes multiple attempts to restart a virtual machine running on Virtual SAN after it appears to have failed. If the virtual machine cannot be immediately restarted, the master agent monitors the cluster state and makes another attempt when conditions indicate that a restart might be successful. For virtual machines running on Virtual SAN, the vSphere HA master has special application logic to detect when the accessibility of a virtual machine's objects might have changed, and attempts a restart whenever an accessibility change is likely. The master agent makes an attempt after each possible accessibility change and, if it does not successfully power on the virtual machine, gives up and waits for the next possible accessibility change.

    After each failed attempt, vSphere HA reports an event indicating that the failover was not successful, and after five failed attempts, reports that vSphere HA stopped trying to restart the virtual machine because the maximum number of failover attempts was reached. Even after reporting that it has stopped trying, however, the vSphere HA master agent attempts a restart the next time a possible accessibility change occurs.

    Workaround: None.

  • Powering off a Virtual SAN host causes the Storage Providers view in the vSphere Web Client to refresh longer than expected
    If you power off a Virtual SAN host, the Storage Providers view might appear empty. The Refresh button continues to spin even though no information is shown.

    Workaround: Wait at least 15 minutes for the Storage Providers view to be populated again. The view also refreshes after you power on the host.

  • Virtual SAN reports a failed task as completed
    Virtual SAN might report certain tasks as completed even though they failed internally.

    The following are conditions and corresponding reasons for errors:

    • Condition: Users attempt to create a new disk group or add a new disk to an already existing disk group when the Virtual SAN license has expired.
      Error stack: A general system error occurred: Cannot add disk: VSAN is not licensed on this host.
    • Condition: Users attempt to create a disk group with more disks than the supported number, or try to add new disks to an already existing disk group so that the total number exceeds the supported number of disks per disk group.
      Error stack: A general system error occurred: Too many disks.
    • Condition: Users attempt to add a disk to the disk group that has errors.
      Error stack: A general system error occurred: Unable to create partition table.

    Workaround: After identifying the reason for a failure, correct the reason and perform the task again.

  • Virtual SAN datastores cannot store host local and system swap files
    Typically, you can place the system swap or host local swap file on a datastore. However, the Virtual SAN datastore does not support system swap and host local swap files. As a result, the UI option that allows you to select the Virtual SAN datastore as the file location for system swap or host local swap is not available.

    Workaround: In a Virtual SAN environment, use other supported options to place the system swap and host local swap files.

  • A Virtual SAN virtual machine in a vSphere HA cluster is reported as vSphere HA protected although it has been powered off
    This might happen when you power off a virtual machine with its home object residing on a Virtual SAN datastore, and the home object is not accessible. This problem is seen if an HA master agent election occurs after the object becomes inaccessible.

    Workaround:

    1. Make sure that the home object is accessible again by checking the compliance of the object with the specified storage policy.
    2. Power on the virtual machine then power it off again.

    The status should change to unprotected.

  • Virtual machine object remains in Out of Date status even after Reapply action is triggered and completed successfully
    If you edit an existing virtual machine profile due to new storage requirements, the associated virtual machine objects, home or disk, might go into Out of Date status. This occurs when your current environment cannot support reconfiguration of the virtual machine objects. Using the Reapply action does not change the status.

    Workaround: Add additional resources, hosts or disks, to the Virtual SAN cluster and invoke the Reapply action again.

  • Automatic disk claiming for Virtual SAN does not work as expected if you license Virtual SAN after enabling it
    If you enable Virtual SAN in automatic mode and then assign a license, Virtual SAN fails to claim disks.

    Workaround: Change the mode to Manual, and then switch back to Automatic. Virtual SAN will properly claim the disks.

  • vSphere High Availability (HA) fails to restart a virtual machine when Virtual SAN network is partitioned
    This occurs when Virtual SAN uses VMkernel adapters for internode communication that are on the same subnet as other VMkernel adapters in a cluster. Such a configuration could cause a network failure that disrupts Virtual SAN internode communication, while vSphere HA internode communication remains unaffected.

    In this situation, the HA master agent might detect the failure in a virtual machine, but is unable to restart it. For example, this could occur when the host on which the master agent is running does not have access to the virtual machine's objects.

    Workaround: Make sure that the VMkernel adapters used by Virtual SAN do not share a subnet with the VMkernel adapters used for other purposes.
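
    To review the subnet of each VMkernel adapter while checking this configuration, a command similar to the following might be used:

    # Sample check only: show the IPv4 address and netmask of every VMkernel interface
    esxcli network ip interface ipv4 get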

  • VMs might become inaccessible due to high network latency
    In a Virtual SAN cluster setup, if the network latency is high, some VMs might become inaccessible on vCenter Server and you will not be able to power on or access the VM.

    Workaround: Run the RVC command vsan.check_state -e -r.
  • VM operations might timeout due to high network latency
    When storage controllers with low queue depths are used, high network latency might cause VM operations to time out.

    Workaround: Re-attempt the operations when the network load is lower.
  • VMs might get renamed to a truncated version of their vmx file path
    If the vmx file of a virtual machine is temporarily inaccessible, the VM is renamed to a truncated version of the vmx file path. For example, the virtual machine might be renamed to /vmfs/volumes/vsan:52f1686bdcb477cd-8e97188e35b99d2e/236d5552-ad93. The truncation might delete half the UUID of the VM home directory, making it difficult to map the renamed VM to the original VM from just the VM name.

    Workaround: Run the vsan.fix_renamed_vms RVC command.

vCenter Server and vSphere Web Client

  • Unable to add ESXi host to Active Directory domain
    You might observe that the Active Directory domain name is not displayed in the Domain drop-down list under the Select Users and Groups option when you attempt to assign permissions. Also, the Authentication Services Settings option might not display any trusted domain controller even when the Active Directory has trusted domains.

    Workaround:
    1. Restart the netlogond, lwiod, and then lsassd daemons (a sample command sequence follows these steps).
    2. Log in to the ESXi host by using the vSphere Client.
    3. On the Configuration tab, click Authentication Services Settings.
    4. Refresh to view the trusted domains.
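
    The following is a minimal sketch of step 1, run from the ESXi Shell; it assumes the daemon init scripts are present at /etc/init.d on the host:

    # Sample only: restart the agents used for Active Directory integration
    /etc/init.d/netlogond restart
    /etc/init.d/lwiod restart
    /etc/init.d/lsassd restart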

Virtual Machine Management Issues

  • Unable to perform cold migration and storage vMotion of a virtual machine if the VMDK file name begins with "core"*
    Attempts to perform cold migration and storage vMotion of a virtual machine might fail if the VMDK file name begins with "core", with an error message similar to the following:

    A general system error occurred: Error naming or renaming a VM file.

    Error messages similar to the following might be displayed in the vpxd.log file:

    mem> 2014-01-01T11:08:33.150-08:00 [13512 info 'commonvpxLro' opID=8BA11741-0000095D-86-97] [VpxLRO] -- FINISH task-internal-2471 -- -- VmprovWorkflow --
    mem> 2014-01-01T11:08:33.150-08:00 [13512 info 'Default' opID=8BA11741-0000095D-86-97] [VpxLRO] -- ERROR task-internal-2471 -- -- VmprovWorkflow: vmodl.fault.SystemError:
    mem> --> Result:
    mem> --> (vmodl.fault.SystemError){
    mem> --> dynamicType = ,
    mem> --> faultCause = (vmodl.MethodFault) null,
    mem> --> reason = "Error naming or renaming a VM file.",
    mem> --> msg = "",
    mem> --> }

    This issue occurs when the ESXi host incorrectly classifies VMDK files with a name beginning with "core" as a core file instead of the expected disk type.

    Workaround: Ensure that the VMDK file name of the virtual machine does not begin with "core". If it does, use the vmkfstools utility to rename the VMDK file so that the file name does not begin with the word "core".
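    For example, a rename with vmkfstools might look similar to the following; the datastore path and file names are illustrative only:
    # Sample only: rename a virtual disk so that its file name no longer begins with "core"
    vmkfstools -E /vmfs/volumes/datastore1/vm1/core-disk.vmdk /vmfs/volumes/datastore1/vm1/vm1-disk.vmdk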
  • Virtual machines with Windows 7 Enterprise 64-bit guest operating systems in the French locale experience problems during clone operations
    If you have a cloned Windows 7 Enterprise 64-bit virtual machine that is running in the French locale, the virtual machine disconnects from the network and the customization specification is not applied. This issue appears when the virtual machine is running on an ESXi 5.1 host and you clone it to ESXi 5.5 and upgrade the VMware Tools version to the latest version available with the 5.5 host.

    Workaround: Upgrade the virtual machine compatibility to ESXi 5.5 and later before you upgrade to the latest available version of VMware Tools.

  • Attempts to increase the size of a virtual disk on a running virtual machine fail with an error
    If you increase the size of a virtual disk when the virtual machine is running, the operation might fail with the following error:

    This operation is not supported for this device type.

    The failure might occur if you are extending the disk to the size of 2TB or larger. The hot-extend operation supports increasing the disk size to only 2TB or less. SATA virtual disks do not support the hot-extend operation no matter what their size is.

    Workaround: Power off the virtual machine to extend the virtual disk to 2TB or larger.

VMware HA and Fault Tolerance Issues
  • If you select an ESX/ESXi 4.0 or 4.1 host in a vSphere HA cluster to fail over a virtual machine, the virtual machine might not restart as expected
    When vSphere HA restarts a virtual machine on an ESX/ESXi 4.0 or 4.1 host that is different from the original host the virtual machine was running on, a query is issued that is not answered. The virtual machine is not powered on on the new host until you answer the query manually from the vSphere Client.

    Workaround: Answer the query from the vSphere Client. Alternatively, you can wait for a timeout (15 minutes by default), and vSphere HA attempts to restart the virtual machine on a different host. If the host is running ESX/ESXi 5.0 or later, the virtual machine is restarted.

  • If a vMotion operation without shared storage fails in a vSphere HA cluster, the destination virtual machine might be registered to an unexpected host
    A vMotion migration involving no shared storage might fail because the destination virtual machine does not receive a handshake message that coordinates the transfer of control between the two virtual machines. The vMotion protocol powers off both the source and destination virtual machines. If the source and destination hosts are in the same cluster and if vSphere HA has been enabled, the destination virtual machine might be registered by vSphere HA on another host than the one chosen as the target for the vMotion migration.

    Workaround: If you want to retain the destination virtual machine and you want it to be registered to a specific host, relocate the destination virtual machine to the destination host. This relocation is best done before powering on the virtual machine.

Supported Hardware Issues
  • Sensor values for Fan, Power Supply, Voltage, and Current sensors appear under the Other group of the vCenter Server Hardware Status Tab
    Some sensor values are listed in the Other group instead of the respective categorized group.

    Workaround: None.

  • I/O memory management unit (IOMMU) faults might appear when the debug direct memory access (DMA) mapper is enabled
    The debug mapper places devices in IOMMU domains to help catch device memory accesses to addresses that have not been explicitly mapped. On some HP systems with old firmware, IOMMU faults might appear.

    Workaround: Download firmware upgrades from the HP Web site and apply them.

    • Upgrade the firmware of the HP iLO2 controller.
      Version 2.07, released in August 2011, resolves the problem.
    • Upgrade the firmware of the HP Smart Array.
      For the HP Smart Array P410, version 5.14, released in January 2012, resolves the problem.

VMware Tools Issues

  • Unable to install VMware Tools on Linux guest operating systems by executing the vmware-install.pl -d command if VMware Tools is not installed earlier*
    If VMware Tools is not installed on your Linux guest operating system, attempts to perform a fresh installation of VMware Tools by executing the vmware-install.pl -d command might fail.
    This issue occurs in the following guest operating systems:
    • RHEL 7 and later
    • CentOS 7 and later
    • Oracle Linux 7 and later
    • Fedora 19 and later
    • SLES 12 and later
    • SLED 12 and later
    • openSUSE 12.2 and later
    • Ubuntu 14.04 and later
    • Debian 7 and later

    Workaround: There is no workaround that makes the --default (-d) switch work. However, you can install VMware Tools without the --default switch.
    Select Yes when the installer prompts Do you still want to proceed with this legacy installer?

    Note: This release introduces a new --force-install (-f) switch to install VMware Tools.
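    For example, an interactive installation run inside the Linux guest might look similar to the following; the tarball name is illustrative:
    # Sample only: extract the VMware Tools installer and run it without the -d switch,
    # answering Yes at the legacy-installer prompt (or use the new --force-install switch noted above)
    tar -xzf VMwareTools-*.tar.gz
    cd vmware-tools-distrib
    ./vmware-install.pl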
  • File disappears after VMware Tools upgrade
    The deployPkg.dll file, which is usually present in C:\Program Files\VMware\VMware Tools\, is not found after you upgrade VMware Tools. This is observed when VMware Tools is upgraded from version 5.1 Update 2 to 5.5 Update 1 or later, and from version 5.5 to 5.5 Update 1 or later.

    Workaround: None
  • User is forcefully logged out while installing or uninstalling VMware Tools by OSP
    While installing or uninstalling VMware Tools packages in RHEL (Red Hat Enterprise Linux) and CentOS virtual machines that were installed using operating system specific packages (OSPs), the current user is forcefully logged out. This issue occurs in RHEL 6.5 64-bit, RHEL 6.5 32-bit, CentOS 6.5 64-bit, and CentOS 6.5 32-bit virtual machines.

    Workaround:
    • Use secure shell (SSH) to install or uninstall VMware Tools
      or
    • The user must log in again to install or uninstall the VMware Tools packages

Miscellaneous Issues

  • SRM test recovery operation might fail with an error*
    Attempts to perform a Site Recovery Manager (SRM) test recovery might fail with an error message similar to the following:
    'Error - A general system error occurred: VM not found'.
    When several test recovery operations are performed simultaneously, the probability of encountering these error messages increases.

    Workaround: None. However, the issue is not persistent and might not occur if you perform the SRM test recovery operation again.