ESXi 6.5 Update 3 | 2 JUL 2019 | ISO Build 13932383

Check for additions and updates to these release notes.
What's in the Release Notes
The release notes cover the following topics:
- What's New
- Installation Notes for This Release
- Earlier Releases of ESXi 6.5
- Patches Contained in this Release
- Resolved Issues
- Known Issues
What's New
The ESXi 6.5 Update 3 release includes the following list of new features.
- The ixgben driver adds queue pairing to optimize CPU efficiency.
- With ESXi 6.5 Update 3, you can track license usage and refresh switch topology.
- ESXi 6.5 Update 3 provides legacy support for AMD Zen 2 servers.
- Multiple driver updates: ESXi 6.5 Update 3 provides updates to the lsi-msgpt2, lsi-msgpt35, lsi-mr3, lpfc/brcmfcoe, qlnativefc, smartpqi, nvme, nenic, ixgben, i40en, and bnxtnet drivers.
- ESXi 6.5 Update 3 provides support for Windows Server Failover Clustering and Windows Server 2019.
- ESXi 6.5 Update 3 adds the com.vmware.etherswitch.ipfixbehavior property to distributed virtual switches to enable you to choose how to track your inbound and outbound traffic and utilization. With a value of 1, the com.vmware.etherswitch.ipfixbehavior property enables sampling in both the ingress and egress directions. With a value of 0, you enable sampling in the egress direction only, which is also the default setting.
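One illustrative way to confirm from the ESXi shell that a host has received the setting is to dump the host-side view of the distributed switch with the diagnostic net-dvs command. Whether and under which key name the ipfixbehavior value appears in the output is an assumption and depends on the configuration and build, so treat this as a sketch only:
# Dump the local distributed switch database and search for IPFIX-related keys
# (host-side visibility of the key is an assumption, not documented here).
net-dvs -l | grep -i ipfix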
Installation Notes for This Release
VMware Tools Bundling Changes in ESXi 6.5 Update 3
In ESXi 6.5 Update 3, a subset of VMware Tools 10.3.10 ISO images are bundled with the ESXi 6.5 Update 3 host.
The following VMware Tools 10.3.10 ISO images are bundled with ESXi:
windows.iso: VMware Tools image for Windows Vista or later
linux.iso: VMware Tools image for Linux OS with glibc 2.5 or later
The following VMware Tools 10.3.10 ISO images are available for download:
solaris.iso: VMware Tools image for Solaris
freebsd.iso: VMware Tools image for FreeBSD
darwin.iso: VMware Tools image for OS X
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
- VMware Tools 10.3.10 Release Notes
- Updating to VMware Tools 10 - Must Read
- VMware Tools for hosts provisioned with Auto Deploy
- Updating VMware Tools
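To check which bundled VMware Tools package a host carries after the update, you can query the tools-light VIB from the ESXi shell. A minimal check; output fields vary by build:
# Show the version of the bundled VMware Tools VIB on the host.
esxcli software vib get -n tools-light
# Or scan the full VIB list for it.
esxcli software vib list | grep tools-light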
Earlier Releases of ESXi 6.5
Features and known issues of ESXi 6.5 are described in the release notes for each release. Release notes for earlier releases of ESXi 6.5 are:
- VMware ESXi 6.5, Patch Release ESXi650-201905001
- VMware ESXi 6.5, Patch Release ESXi650-201903001
- VMware ESXi 6.5, Patch Release ESXi650-201901001
- VMware ESXi 6.5, Patch Release ESXi650-201811002
- VMware ESXi 6.5, Patch Release ESXi650-201811001
- VMware ESXi 6.5, Patch Release ESXi650-201810002
- VMware ESXi 6.5 Update 2 Release Notes
- VMware ESXi 6.5 Update 1 Release Notes
- VMware ESXi 6.5.0a Release Notes
- VMware vSphere 6.5 Release Notes
For internationalization, compatibility, upgrades, and product support notices, see the VMware vSphere 6.5 Release Notes.
Patches Contained in this Release
This release contains all bulletins for ESXi that were released prior to the release date of this product. See the My VMware page for more information about the individual bulletins.
Details
Download Filename: | update-from-esxi6.5-6.5_update03 |
Build: | 13932383, 13873656 (Security-only) |
Download Size: | 462.2 MB |
md5sum: | 81124de9717295f8b8afc94084ef6eff |
sha1checksum: | 63f753328642f25db77dcab2df67b1558b92a45b |
Host Reboot Required: | Yes |
Virtual Machine Migration or Shutdown Required: | Yes |
Bulletins
This release contains general and security-only bulletins. Security-only bulletins are applicable to new security fixes only. No new bug fixes are included, but bug fixes from earlier patch and update releases are included.
If the installation of all new security and bug fixes is required, you must apply all bulletins in this release. In some cases, the general release bulletin supersedes the security-only bulletin. This is not an issue, as the general release bulletin contains both the new security and bug fixes.
The security-only bulletins are identified by bulletin IDs that end in "SG". For information on patch and update classification, see KB 2014447.
For more information about the individual bulletins, see the My VMware page and the Resolved Issues section.
Bulletin ID | Category | Severity |
---|---|---|
ESXi650-201907201-UG | Bugfix | Critical |
ESXi650-201907202-UG | Bugfix | Important |
ESXi650-201907203-UG | Bugfix | Important |
ESXi650-201907204-UG | Enhancement | Important |
ESXi650-201907205-UG | Enhancement | Important |
ESXi650-201907206-UG | Bugfix | Important |
ESXi650-201907207-UG | Enhancement | Important |
ESXi650-201907208-UG | Enhancement | Important |
ESXi650-201907209-UG | Enhancement | Important |
ESXi650-201907210-UG | Enhancement | Important |
ESXi650-201907211-UG | Enhancement | Important |
ESXi650-201907212-UG | Enhancement | Moderate |
ESXi650-201907213-UG | Enhancement | Important |
ESXi650-201907214-UG | Enhancement | Important |
ESXi650-201907215-UG | Bugfix | Important |
ESXi650-201907216-UG | Enhancement | Important |
ESXi650-201907217-UG | Enhancement | Important |
ESXi650-201907218-UG | Enhancement | Important |
ESXi650-201907219-UG | Enhancement | Important |
ESXi650-201907220-UG | Bugfix | Important |
ESXi650-201907101-SG | Security | Important |
ESXi650-201907102-SG | Security | Important |
ESXi650-201907103-SG | Security | Important |
IMPORTANT: For clusters using VMware vSAN, you must first upgrade the vCenter Server system. Upgrading only ESXi is not supported.
Before an upgrade, always verify compatible upgrade paths from earlier versions of ESXi, vCenter Server, and vSAN to the current version in the VMware Product Interoperability Matrix.
Image Profiles
VMware patch and update releases contain general and critical image profiles.
Application of the general release image profile applies to new bug fixes.
Image Profile Name |
ESXi-6.5.0-20190702001-standard |
ESXi-6.5.0-20190702001-no-tools |
ESXi-6.5.0-20190701001s-standard |
ESXi-6.5.0-20190701001s-no-tools |
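To see which of these image profiles a downloaded offline bundle provides, you can list the profiles in the depot from the ESXi shell. The depot path below is an example and must point to where you saved the patch ZIP file:
# List the image profiles contained in the offline bundle (path is an example).
esxcli software sources profile list -d /vmfs/volumes/datastore1/update-from-esxi6.5-6.5_update03.zip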
Patch Download and Installation
The typical way to apply patches to ESXi hosts is by using the VMware vSphere Update Manager. For details, see About Installing and Administering VMware vSphere Update Manager.
ESXi hosts can be updated by manually downloading the patch ZIP file from the VMware download page and installing the VIB by using the esxcli software vib command. Additionally, the system can be updated by using the image profile and the esxcli software profile command.
For more information, see vSphere Command-Line Interface Concepts and Examples and vSphere Upgrade Guide.
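For reference, a minimal command-line sequence for the manual method described above; the depot path is an example, and the host must be placed in maintenance mode and rebooted as indicated in the Details table:
# Update all installed VIBs from the downloaded offline bundle (path is an example).
esxcli software vib update -d /vmfs/volumes/datastore1/update-from-esxi6.5-6.5_update03.zip

# Alternatively, apply a specific image profile from the same depot.
esxcli software profile update -p ESXi-6.5.0-20190702001-standard -d /vmfs/volumes/datastore1/update-from-esxi6.5-6.5_update03.zip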
Resolved Issues
The resolved issues are grouped as follows.
- ESXi650-201907201-UG
- ESXi650-201907202-UG
- ESXi650-201907203-UG
- ESXi650-201907204-UG
- ESXi650-201907205-UG
- ESXi650-201907206-UG
- ESXi650-201907207-UG
- ESXi650-201907208-UG
- ESXi650-201907209-UG
- ESXi650-201907210-UG
- ESXi650-201907211-UG
- ESXi650-201907212-UG
- ESXi650-201907213-UG
- ESXi650-201907214-UG
- ESXi650-201907215-UG
- ESXi650-201907216-UG
- ESXi650-201907217-UG
- ESXi650-201907218-UG
- ESXi650-201907219-UG
- ESXi650-201907220-UG
- ESXi650-201907101-SG
- ESXi650-201907102-SG
- ESXi650-201907103-SG
- ESXi-6.5.0-20190702001-standard
- ESXi-6.5.0-20190702001-no-tools
- ESXi-6.5.0-20190701001s-standard
- ESXi-6.5.0-20190701001s-no-tools
Patch Category | Bugfix |
Patch Severity | Critical |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2164153, 2217863, 2240400, 2214767, 2250679, 2247984, 2228928, 2257478, 2204917, 2257345, 2248758, 2267143, 1982634, 2266767, 2288424, 2282080, 2267503, 2277645, 2230554, 2291669, 2163396, 2203387, 2230181, 2232204, 2224770, 2224775, 2227125, 2246959, 2257458, 2271326, 2267871, 2282205, 2220007, 2227665, 2292798, 2117131, 2268193, 2293806, 2300123, 2301266, 2301817, 2305376, 2265828, 2305720, 2259009, 2310825, 2280486, 2322967, 2311328, 2316813, 2292417, 2310409, 2314297, 2323959, 2224477, 2293772, 2203431, 2113782, 2256003, 2212913, 2280989, 2297764, 2325697, 2324404, 2260907, 2242242, 2320302, 2226863, 2219305, 2301228, 2221249, 2304091, 2271879, 2280737, 2244449, 2240049, 2251459, 2275735, 2289950, 2267717 |
CVE numbers | CVE-2019-5528 |
This patch updates the esx-base, esx-tboot, vsan, and vsanhealth VIBs to resolve the following issues:
- PR 2164153: The Small-Footprint CIM Broker daemon (SFCBD) might intermittently stop responding
SFCBD might intermittently stop responding with an error wantCoreDump:sfcb-vmware_int signal:11.
This issue is resolved in this release.
- PR 2217863: Virtual machines with xHCI controllers supporting USB 3.0 might fail during suspend or migrate operations
Virtual machines with xHCI controllers supporting USB 3.0 might fail with a VMX error message during suspend or migrate operations. You might see an error message similar to: PANIC: VERIFY bora/vmcore/vmx/main/monitorAction.c:598 bugNr=10871.
This issue is resolved in this release.
- PR 2240400: Virtual machines might stop responding during a long-running quiesced snapshot operation
If hostd restarts during a long-running quiesced snapshot operation, hostd might automatically run a snapshot consolidation to remove redundant disks and improve the virtual machine performance. However, the consolidation operation might race with the running quiesced snapshot operation and the virtual machines stop responding.
This issue is resolved in this release.
- PR 2214767: Virtual machines using 3D software might intermittently lose connectivity
Virtual machines using 3D software might intermittently lose connectivity with a VMX panic error displayed on a blue screen.
This issue is resolved in this release. The fix adds checks for texture coordinates with values Infinite or NaN.
- PR 2250679: Linux virtual machines might not power on due to some memory size configurations
When a Linux virtual machine is configured with specific memory size, such as 2052 MB or 2060 MB, it might not power on and display a blank screen.
This issue is resolved in this release.
- PR 2247984: The esxcli network plug-in command might return a false value as a result
Even though the value is correctly set, the esxcli network command $esxcli.network.nic.coalesce.set.Invoke might return a false value as a result. This might impact existing automation scripts.
This issue is resolved in this release.
- PR 2228928: Virtual machines with preallocated memory might use memory from a remote NUMA node
When you create a virtual machine with preallocated memory, for example a virtual machine with latency sensitivity set to high, the memory might be allocated from a remote NUMA node instead of selecting the local NUMA node. This issue might depend on the memory usage of the ESXi host and does not affect hosts with only one NUMA node.
This issue is resolved in this release.
- PR 2257478: The Actor Key value of LACP PDUs might change to 0
The Actor Key value of LACP PDUs might change to 0 during link status flapping.
This issue is resolved in this release.
- PR 2204917: An ESXi host might fail with an incorrect error message when you create multiple Virtual Flash File Systems (VFFS)
Creating multiple VFFS on a single ESXi host is not supported. When you try to create multiple VFFS on an ESXi host and to collect host profiles, the operation fails with an error: Exception while Gathering Generic Plug-in Data. Exception: name 'INVALID_SYS_CONFIG_MSG_KEY' is not defined error.
This issue is resolved in this release. The fix corrects the error message.
- PR 2257345: Some virtual machines might become invalid when trying to install or update VMware Tools
If you have modified the type of a virtual CD-ROM device on a virtual machine in an earlier version of ESXi, after you update to a later version and you try to install or update VMware Tools, the virtual machine might be terminated and marked as invalid.
This issue is resolved in this release.
- PR 2248758: Starting an ESXi host configured with a big memory size might fail with a purple diagnostic screen due to an xmap allocation failure
The configuration of big memory nodes requires a big memory overhead mapped in xmap. Because xmap does not support a memory buffer size higher than 256 MB, the ESXi host might fail with a purple diagnostic screen and an error similar to:
0:00:01:12.349 cpu0:1)@BlueScreen: MEM_ALLOC bora/vmkernel/main/memmap.c:4048
0:00:01:12.356 cpu0:1)Code start: 0x418026000000 VMK uptime:
0:00:01:12.363 cpu0:1)0x4100064274f0:[0x4180261726e1]+0x261726e1 stack: 0x7964647542203a47
0:00:01:12.372 cpu0:1)0x4100064275a0:[0x4180261720a1]+0x261720a1 stack: 0x410006427600
0:00:01:12.381 cpu0:1)0x410006427600:[0x418026172d36]+0x26172d36 stack: 0xfd000000000
0:00:01:12.390 cpu0:1)0x410006427690:[0x4180261dcbc3]+0x261dcbc3 stack:0x0
0:00:01:12.398 cpu0:1)0x4100064276b0:[0x4180261f0076]+0x261f0076 stack: 0x8001003f
0:00:01:12.406 cpu0:1)0x410006427770:[0x41802615947f]+0x2615947f stack: 0x0
This issue is resolved in this release.
- PR 2267143: An ESXi host might fail when you enable a vSphere Distributed Switch health check
When you set up the network and enable vSphere Distributed Switch health check to perform configuration checks, the ESXi host might fail with a purple diagnostic screen.
This issue is resolved in this release.
- PR 1982634: An ESXi host might fail with purple diagnostic screen and a Spin count exceeded - possible deadlock error
The P2MCache lock is a spin lock that compromises fairness across execution threads waiting for the P2MCache. As a result, the ESXi host might fail with a purple diagnostic screen and the following error:
2017-09-15T06:50:17.777Z cpu11:153493)@BlueScreen: Spin count exceeded - possible deadlock with PCPU 19
2017-09-15T06:50:17.777Z cpu11:153493)Code start: 0x418034400000 VMK uptime: 8:21:18:05.527
2017-09-15T06:50:17.778Z cpu11:153493)Saved backtrace from: pcpu 19 SpinLock spin out NMI
2017-09-15T06:50:17.778Z cpu11:153493)0x439127a1bb60:[0x4180345911bf]VmMemPin_ReleasePhysMemRange@vmkernel#nover+0x1f stack: 0x0
2017-09-15T06:50:17.778Z cpu11:153493)0x439127a1bbb0:[0x41803455d706]P2MCache_Release@vmkernel#nover+0xa6 stack: 0x4393bb7a7000
2017-09-15T06:50:17.779Z cpu11:153493)0x439127a1bbf0:[0x4180345658a2]PhysMem_ReleaseSGE@vmkernel#nover+0x16 stack: 0x0
This issue is resolved in this release. With this fix, the spin lock is changed to an MCS lock, which enables CPUs to get fair access to the P2MCache.
- PR 2266767: The vmkernel logs are filled with multiple warning messages for unsupported physical block sizes
When you use devices with a physical block size other than 512 or 4096 bytes, you might see multiple warning messages in the /var/log/vmkernel.log file of the ESXi host similar to: ScsiPath: 4395: The Physical block size "8192" reported by the path vmhba3:C0:T1:L0 is not supported.
This issue is resolved in this release. The fix logs these warning messages only in debug builds.
- PR 2288424: Virtual machines might become unresponsive due to repetitive failures of third-party device drivers to process commands
Virtual machines might become unresponsive due to repetitive failures of some third-party device drivers to process commands. You might see the following error when opening the virtual machine console: Error: Unable to connect to the MKS: Could not connect to pipe \\.\pipe\vmware-authdpipe within retry period
This issue is resolved in this release. This fix recovers commands to unresponsive third-party device drivers and ensures that failed commands are stopped and retried until they succeed.
- PR 2282080: Creating a snapshot of a virtual machine from a virtual volume datastore might fail due to a null VVolId parameter
If a vSphere API for Storage Awareness provider modifies the vSphere Virtual Volumes policy unattended, a null VVolId parameter might update the vSphere Virtual Volumes metadata. This results in a VASA call with a null VVolId parameter and a failure when creating a virtual machine snapshot.
This issue is resolved in this release. The fix handles the policy modification failure and prevents the null VVolId parameter.
- PR 2267503: An ESXi host might fail with a purple diagnostic screen at the DVFilter level
The DVFilter might receive unexpected or corrupt values in the shared memory ring buffer, causing the internal function to return NULL. If this NULL value is not handled gracefully, an ESXi host might fail with a purple diagnostic screen at the DVFilter level.
This issue is resolved in this release.
- PR 2277645: An ESXi host might fail with a purple diagnostic screen displaying a sbflush_internal panic message
An ESXi host might fail with a purple diagnostic screen displaying a sbflush_internal panic message due to some discrepancies in the internal statistics.
This issue is resolved in this release.
- PR 2230554: You might see many logs of the IsDhclientProcess process in hostd.log and syslog.log
You might see many messages similar to VmkNicImpl::IsDhclientProcess: No valid dhclient pid found in the hostd and syslog logs. The IsDhclientProcess process is in a hot code path and triggers frequently.
This issue is resolved in this release. To avoid excessive printing of this log message in your environment, the fix changes it into an internal debug log.
- PR 2291669: An ESXi host might fail with a purple diagnostic screen when you use the Software iSCSI adapter
When you use the Software iSCSI adapter, the ESXi host might fail with a purple diagnostic screen due to a race condition.
This issue is resolved in this release.
- PR 2163396: The SNMP agent might incorrectly report the NVDIMM battery sensor on Dell 14G servers as down
If you run the snmpwalk command to get the battery status, the SNMP agent might incorrectly report the NVDIMM battery sensor on Dell 14G servers as down. The processing of the IPMI_NETFN_SE/GET_SENSOR_READING code does not check the status bits properly.
This issue is resolved in this release. This fix corrects the reading of compact sensors with an event code 6f.
- PR 2203387: The syslog service (vmsyslogd) might fail to write VMkernel Observations (VOB) logs due to memory limits
The vmsyslogd service might fail to write VOB logs due to memory limits, because the current method of reporting VOBs is memory-heavy.
This issue is resolved in this release. This fix makes logging VOBs lightweight.
- PR 2230181: The hostd daemon might fail with an error for too many locks
Larger configurations might exceed the limit for the number of locks, and hostd starts failing with an error similar to: hostd Panic: MXUserAllocSerialNumber: too many locks!
This issue is resolved in this release. The fix removes the limit for the number of locks.
- PR 2232204: An NFS volume mount by using a host name might not persist after a reboot
An NFS volume mount by using a host name instead of an IP address might not persist after a reboot in case of an intermittent failure in resolving the host name.
This issue is resolved in this release. The fix provides a retry mechanism for resolving the host name.
- PR 2224770: You might see false hardware health alarms from Intelligent Platform Management Interface (IPMI) sensors
Some IPMI sensors might intermittently change from green to red and back, which creates false hardware health alarms.
This issue is resolved in this release. With this fix, you can use the advanced option command esxcfg-advcfg -s {sensor ID}:{sensor ID} /UserVars/HardwareHealthIgnoredSensors to ignore hardware health alarms from selected sensors. A usage sketch follows this bulletin's list of resolved issues.
- PR 2224775: ESXi hosts might become unresponsive if the sfcbd service fails to process all forked processes
You might see many error messages such as Heap globalCartel-1 already at its maximum size. Cannot expand. in the vmkernel.log, because the sfcbd service fails to process all forked processes. As a result, the ESXi host might become unresponsive and some operations might fail.
This issue is resolved in this release.
- PR 2227125: Very large values of counters for read or write disk latency might trigger alarms
You might intermittently see alarms for very large values of counters for read or write disk latency, such as datastore.totalReadLatency and datastore.totalWriteLatency.
This issue is resolved in this release.
- PR 2246959: An ESXi host with vSAN enabled fails due to race condition in the decommission workflow
During the disk decommission operation, a race condition in the decommission workflow might cause the ESXi host to fail with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2257458: Extending VMDK files might cause a long quiescing time
Inefficient block allocation mechanisms might lead to multiple iterations of metadata reads. This causes a long quiescing time while creating and extending thick provisioned lazy zeroed VMDK files on a VMFS6 datastore.
This issue is resolved in this release.
- PR 2271326: You might see wrong reports for dropped packets when using IOChain and the Large Receive Offload (LRO) feature is enabled
When you navigate to the Monitor tab for an ESXi host and under Performance you select Physical Adapters > pNIC vSwitch Port Drop Rate, you might see a counter showing dropped packets, while no packets are dropped. This is because the IOChain framework treats the reduced number of packets, when using LRO, as packet drops.
This issue is resolved in this release.
- PR 2267871: You might see redundant VOB messages in the vCenter Server system
When checking the uplink status, the teaming policy might not check whether the uplink belongs to a port group, and the reported affected port group might be incorrect. As a result, you might see redundant VOB messages in the /var/run/log/vobd.log file, such as: Lost uplink redundancy on virtual switch "xxx". Physical NIC vmnic8 is down. Affected portgroups:"xxx".
This issue is resolved in this release.
- PR 2282205: After an ESXi host reboot, you might not see the NFS datastores using a fault tolerance solution by Nutanix mounted in the vCenter Server system
After an ESXi host reboot, you might not see NFS datastores using a fault tolerance solution by Nutanix mounted in the vCenter Server system. However, you can see the volumes in the ESXi host.
This issue is resolved in this release.
- PR 2220007: Network booting of virtual machines in a large network might fail
Network booting of virtual machines restricts the distance between a virtual machine and the network boot server to 15 routers. If your system has more routers on the path between a virtual machine and any server necessary for boot (PXE, DHCP, DNS, TFTP), the booting process fails, because the boot request does not reach the server.
This issue is resolved in this release.
- PR 2227665: The vSphere Web Client or vSphere Client might display a host profile compliance as unknown if you try to customize the DNS configuration
The vSphere Web Client or vSphere Client might display a host profile compliance as unknown if you try to customize the DNS configuration by navigating to Networking Configuration > NetStack Instance > defaultTcpipStack > DNS configuration.
This issue is resolved in this release.
- PR 2292798: Virtual machines with Microsoft Windows 10 version 1809 might start slowly or stop responding during the start phase if they are running on a VMFS6 datastore
If a virtual machine runs Windows 10 version 1809, has snapshots, and resides on a VMFS6 datastore, the virtual machine might either start slowly or stop responding during the start phase.
This issue is resolved in this release.
- PR 2117131: The small footprint CIM broker (SFCB) process might crash due to NULL string in CIM requests
The SFCB process might fail because str==NULL/length=0 values in input requests are not properly handled.
This issue is resolved in this release. This fix modifies the addClStringN function to treat NULL as an empty string.
- PR 2268193: Managing the Active Directory lwsmd service from the Host Client or vSphere Web Client might fail
Managing the Active Directory lwsmd service from the Host Client or vSphere Web Client might fail with the following error: Failed - A general system error occurred: Command /etc/init.d/lwsmd timed out after 30 secs.
This issue is resolved in this release.
- PR 2293806: Hardware sensors might report wrong sensor state in an ESXi host and in the vSphere Client
Hardware sensors in red state might report a green state in an ESXi host and in the vSphere Client. As a result, the alarm might not be triggered, and you might not receive an alert.
This issue is resolved in this release.
- PR 2300123: An ESXi host does not use the C2 low-power state on AMD EPYC CPUs
Even if C-states are enabled in the firmware setup of an ESXi host, the vmkernel does not detect all the C-states correctly. The power screen of the esxtop tool shows the columns %C0 (percentage of time spent in C-state 0) and %C1 (percentage of time spent in C-state 1), but does not show the column %C2. As a result, the system performance per watt of power is not maximized.
This issue is resolved in this release.
- PR 2301266: An ESXi host might fail with a purple diagnostic screen at an UplinkRcv capture point on a converged network adapter (CNA)
An ESXi host might fail with a purple diagnostic screen at an UplinkRcv capture point on a CNA, when you use the pktcap-uw syntax and a device does not have an uplink object associated while capturing a packet on the uplink.
This issue is resolved in this release. The fix is to move the capture point if no associated uplink object is available.
- PR 2301817: An ESXi host might fail with a purple diagnostic screen due to a race condition
An ESXi host might fail with a purple diagnostic screen due to a very rare race condition, when the host tries to access a memory region in the short window between when it is freed and when it is allocated to another task.
This issue is resolved in this release.
- PR 2305376: Virtual machines might start performing poorly because of the slow performance of an NFS datastore
A virtual machine that runs on an NFS datastore might start performing poorly due to slow performance of the datastore. After you reboot the ESXi host, the virtual machine starts performing normally again.
This issue is resolved in this release.
- PR 2265828: A virtual machine with VMDK files backed by vSphere Virtual Volumes might fail to power on when you revert it to a snapshot
This issue is specific to vSphere Virtual Volumes datastores when a VMDK file is assigned to different SCSI targets across snapshots. The lock file of the VMDK is reassigned across different snapshots and might be incorrectly deleted when you revert the virtual machine to a snapshot. Due to the missing lock file, the disk does not open, and the virtual machine fails to power on.
This issue is resolved in this release.
- PR 2305720: An ESXi host might fail with a purple diagnostic screen when you use vSphere Storage vMotion
When you migrate multiple virtual machines by using vSphere Storage vMotion, the container port might have duplicate VIF ID and the ESXi host might fail with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2259009: You might not be able to add ESXi hosts to a vCenter Server 6.5 system
After you upgrade your system to vSphere 6.5, you might not be able to add ESXi hosts to vCenter Server, because the sfcb agent might not be scrubbing data returned from the hardware and IPMI layers.
This issue is resolved in this release.
- PR 2310825: An ESXi host might fail with a purple diagnostic screen if connectivity to NFS41 datastores intermittently fails
An ESXi host might fail with a purple diagnostic screen because of a NullPointerException in the I/O completion when connectivity to NFS41 datastores intermittently fails.
This issue is resolved in this release.
- PR 2280486: An ESXi host might fail with a purple diagnostic screen while a trusted platform module (TPM) performs read operations
An ESXi host might fail with a purple diagnostic screen while a TPM performs a read operation. You might see an error message similar to:
#PF Exception 14 in world XXX:hostd-worker IP YYY addr ZZZ
In the backtrace of the purple diagnostic screen, you might see entries similar to:
Util_CopyOut@vmkernel#nover+0x1a stack: 0x418025d7e92c
TpmCharDev_Read@(tpmdriver)#+0x3b stack: 0x43082b811f78
VMKAPICharDevIO@vmkernel#nover+0xdd stack: 0x430beffebe90
VMKAPICharDevDevfsWrapRead@vmkernel#nover+0x2d stack: 0x4391d2e27100
CharDriverAsyncIO@vmkernel#nover+0x83 stack: 0x430148154370
FDS_AsyncIO@vmkernel#nover+0x3dd stack: 0x439e0037c6c0
FDS_DoSyncIO@vmkernel#nover+0xe3 stack: 0x439544e9bc30
DevFSFileIO@vmkernel#nover+0x2ea stack: 0x1
FSSVec_FileIO@vmkernel#nover+0x97 stack: 0x1707b4
UserChardevIO@(user)#+0xbb stack: 0x43071ad64660
UserChardevRead@(user)#+0x25 stack: 0x4396045ef700
UserVmfs_Readv@(user)#+0x8c stack: 0x430beff8e490
LinuxFileDesc_Read@(user)#+0xcb stack: 0x4396047cbc68
User_ArchLinux32SyscallHandler@(user)#+0x6c stack: 0x0
User_LinuxSyscallHandler@(user)#+0xeb stack: 0x0
User_LinuxSyscallHandler@vmkernel#nover+0x1d stack: 0x0
gate_entry_@vmkernel#nover+0x0 stack: 0x0
This issue is resolved in this release. For vSphere versions earlier than 6.5 Update 3, you can disable the TPM in the BIOS if no other applications are using the TPM.
- PR 2322967: A virtual machine might stop responding because the Virtual Network Computing (VNC) connection is disconnected
When a VNC connection is disconnected, the error conditions might not be handled properly and the virtual machine might stop responding.
This issue is resolved in this release.
- PR 2311328: If an ESXi host does not have enough memory, the host might fail with a purple diagnostic screen
If the ESXi host does not have enough memory, the memory allocation might fail due to invalid memory access. As a result, the ESXi host might fail with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2316813: Multiple attempts to log in to an ESXi host with incorrect credentials might cause the hostd service to stop responding
When you attempt to log in to an ESXi host with incorrect credentials multiple times, the hostd service in the ESXi host might stop responding.
This issue is resolved in this release.
- PR 2292417: When Site Recovery Manager test recovery triggers a vSphere Replication synchronization phase, hostd might become unresponsive
When you quiesce virtual machines that run Microsoft Windows Server 2008 or later, application-quiesced snapshots are created. The number of possible concurrent snapshots is 32, which might generate a lot of parallel threads to track tasks in the snapshot operations. As a result, the hostd service might become unresponsive.
This issue is resolved in this release. The fix reduces the maximum number of concurrent snapshots to 8.
- PR 2310409: You cannot migrate virtual machines by using vSphere vMotion between ESXi hosts with NSX managed virtual distributed switches (N-VDS) and vSphere Standard Switches
With vCenter Server 6.5 Update 3, you can migrate virtual machines by using vSphere vMotion between ESXi hosts with N-VDS and vSphere Standard Switches. To enable the feature, you must upgrade your vCenter Server system to vCenter Server 6.5 Update 3 and ESXi 6.5 Update 3 on both source and destination sites.
This issue is resolved in this release.
- PR 2314297: The syslog third-party scripts might fail due to outdated timestamps
When the syslog files rotate, they apply timestamps with the help of the bash configuration file. The timestamp refreshes only during the restart of the hostd service and might be outdated. As a result, the syslog third-party scripts might fail.
This issue is resolved in this release.
- PR 2323959: PXE booting of a virtual machine with a VMXNET3 virtual network device from a Citrix Provisioning Services (PVS) server might fail
A virtual machine with VMXNET3 vNICs cannot start by using Citrix PVS bootstrap, because any pending interrupts on the virtual network device are not handled properly during the transition from the PXE boot to the start of the guest operating system. As a result, the guest operating system cannot start the virtual network device and the virtual machine also fails to start.
This issue is resolved in this release. The fix clears any pending interrupts and any mask INTx interrupts when the virtual device starts.
- PR 2224477: An ESXi host in a stretched cluster might fail during network interruption
When the network supporting a vSAN stretched cluster experiences many disconnect events, the ESXi host memory used by the vSAN module might be exhausted. As a result, the ESXi host might fail with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2293772: Overall physical disk health might be intermittently reported as red
vSAN health service might report the overall physical disk health as red and trigger an alarm, and then recover to green status. These alarms might happen while rescanning physical disks.
This issue is resolved in this release.
- PR 2203431: Some guest virtual machines might report their identity as an empty string
This issue occurs when the guest is running a later version of VMware Tools than the version the ESXi host was released with, or when the host cannot recognize the guestID parameter of the GuestInfo object. The issue affects mainly virtual machines with CentOS and VMware Photon OS.
This issue is resolved in this release.
- PR 2113782: vSphere Virtual Volumes datastore might become inaccessible if you change the vCenter Server instance or refresh the CA certificate
The vSphere Virtual Volumes datastore uses a VMware CA-signed certificate to communicate with VASA providers. When the vCenter Server instance or the CA certificate changes, vCenter Server imports the new CA-signed certificate, but the SSL reset signal that the vSphere Virtual Volumes datastore must receive might not be triggered. As a result, the communication between the vSphere Virtual Volumes datastore and the VASA providers might fail, and the vSphere Virtual Volumes datastore might become inaccessible.
This issue is resolved in this release.
- PR 2256003: An ESXi host might fail with either Bucketlist_LowerBound or PLOGRelogRetireLsns in the backtrace
PLOG Relog can run along with DecommissionMD, causing Relog to access freed PLOG device state tables in an ESXi host. As a result, the host might fail with a purple diagnostic screen. The backtrace includes one of the following entries: Bucketlist_LowerBound or PLOGRelogRetireLsns.
This issue is resolved in this release.
- PR 2212913: An ESXi host might fail with a purple screen because of a physical CPU heartbeat failure
If you have a virtual machine on a SeSparse snapshot and you query the physical layout of the VMDK from the Guest Operating System or a third-party application, a physical CPU lockup might be triggered if the VMDK file size is not a multiple of 4K. As a result, the ESXi host fails with a purple screen.
This issue is resolved in this release.
- ESXi hosts might lose connectivity to vCenter Server due to failure of the vpxa agent service
ESXi hosts might lose connectivity to vCenter Server due to a failure of the vpxa agent service as a result of an invalid property update generated by the PropertyCollector object. A race condition in hostd leads to a malformed sequence of property update notification events that causes the invalid property update.
This issue is resolved in this release.
- PR 2280989: When provisioning instant clones, you might see an error “Disconnected from virtual machine" and virtual machines fail to power on
When provisioning instant clones in the vSphere Web Client or vSphere Client, you might see the error Disconnected from virtual machine. The virtual machines cannot power on because the virtual machine executable (VMX) process fails.
This issue is resolved in this release. The fix corrects the VMX panic in such situations and returns an error.
- PR 2267717: You might not see details for qlnativefc driver worlds when running the ps command
You might not see details for qlnativefc driver worlds when running the ps command. The worldName fields have a null value.
This issue is resolved in this release.
- PR 2297764: When the state of the storage hardware changes, the vCenter Server system does not raise or lower the storage alarm state on individual ESXi hosts
ESXi hosts send a HardwareSensorGroupStatus event, which allows the vCenter Server system to track the hardware alarm state of the hosts by group, such as fan, storage, power, and temperature. Storage alarms are not raised or lowered based on reports from the CIM providers that track the host bus adapter status.
This issue is resolved in this release. With this fix, the storage hardware health checks are performed with the actual health state of the ESXi hosts as reported by the CIM providers.
- PR 2325697: An ESXi host might fail with a purple diagnostic screen when it removes a PShare Hint with the function VmMemCow_PShareRemoveHint
When an ESXi host removes a PShare Hint from a PShare chain, if the PShare chain is corrupted, the ESXi host might fail with a purple diagnostic screen and an error similar to:
0x43920bd9bdc0:[0x41800c5930d6]VmMemCow_PShareRemoveHint
0x43920bd9be00:[0x41800c593172]VmMemCowPFrameRemoveHint
0x43920bd9be30:[0x41800c594fc8]VmMemCowPShareFn@vmkernel
0x43920bd9bf80:[0x41800c500ef4]VmAssistantProcessTasks@vmkernel
0x43920bd9bfe0:[0x41800c6cae05]CpuSched_StartWorld@vmkernel
This issue is resolved in this release.
- PR 2324404: A virtual machine with a NetX filter might lose network connectivity after a vSphere vMotion migration across clusters
If you migrate a virtual machine with a NetX filter across clusters by using vSphere vMotion, the virtual machine might lose network connectivity. As a side effect of service profile optimization, the NetX filter might get blocked after the migration. As a result, the virtual machine loses network connectivity.
This issue is resolved in this release. The fix restores the network connectivity after the filter creation.
- PR 2260907: vSphere Web Client becomes unresponsive when viewing vSAN datastore properties
Browser problems can occur when you navigate to Storage > vsan > vsanDatastore > Configure > General. On some Windows-based browsers, the Properties field blinks when opened for the first time. The browser might become unresponsive.
This issue is resolved in this release.
- PR 2242242: vCenter Server in large scale environment is slow or stops responding
In large-scale environments, such as when vCenter Server manages more than 900 vSAN hosts, the vCenter management server might slow down or stop responding to user requests due to a network socket handler leak.
This issue is resolved in this release.
- PR 2320302: When a vMotion operation fails and is immediately followed by a hot-add or a Storage vMotion operation, the ESXi host might fail with a purple diagnostic screen
If the Maximum Transmission Unit (MTU) size is configured incorrectly and the MTU size of the virtual switch is less than the configured MTU size for the VMkernel port, the vMotion operation might fail. If the failed vMotion operation is immediately followed by a hot-add or a Storage vMotion operation, this causes the ESXi host to fail with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2226863: EPD does not restart after host failure
After a host fails, the Entry Persistence Daemon (EPD) might not restart. Some components continue to persist after an object is deleted. Messages similar to the following appear in the EPD log:
2018-07-10T10:35:21.029Z 71308 -- XXX: Lock file '/scratch/epd-store.db' exists.
2018-07-10T10:35:21.029Z 71308 -- XXX: Did the host or EPD recently crash?
2018-07-10T10:35:21.029Z 71308 -- XXX: Assuming it's OK. Unlinking lock file..
2018-07-10T10:35:21.030Z 71308 Failed to delete lock file: Is a directory
This issue is resolved in this release.
- PR 2219305: RVC vsan.observer command does not return any data
When you issue the RVC command vsan.observer, no data appears on the vSAN Observer HTML page. You might see a Ruby callstack on the RVC console. This problem occurs because the payload is too big for the SOAP request/response.
This issue is resolved in this release.
- PR 2301228: ESXi host fails to discover local devices after upgrade from vSphere 6.0 to a later release
This problem occurs only when the SATP is VMW_SATP_LOCAL. In ESXi 6.0, if a device type SATP claim rule with SATP as VMW_SATP_LOCAL was added with an incorrectly formatted config option, then NMP and SATP plug-ins do not recognize the option and fail to discover the device when upgraded to a later ESXi release.
You might see log entries similar to the following:
2018-11-20T02:10:36.333Z cpu1:66089)WARNING: NMP: nmp_DeviceAlloc:2048: nmp_AddPathToDevice failed Bad parameter (195887111).
2018-11-20T02:10:36.333Z cpu1:66089)WARNING: ScsiPath: 6615: Plugin 'NMP' had an error (Bad parameter) while claiming path 'vmhba0:C0:T2:L0'. Skipping the path
This issue is resolved in this release.
- PR 2221249: An ESXi host might fail with a purple diagnostic screen due to a race condition in NFS 4.1 request completion paths
I/O requests on an NFS 4.1 datastore might occasionally cause an ESXi host to fail with a purple diagnostic screen due to a race condition in the request completion path.
This issue is resolved in this release.
- PR 2304091: A VMXNET3 virtual network device might get null properties for MAC addresses of some virtual machine interfaces
The maximum memory heap size for any module is 3 GB. If the requested total memory size is more than 4 GB, an integer overflow occurs. As a result, the MAC addresses of the virtual machine interfaces are set to 00:00:00:00:00:00 and the VMXNET3 device might fail to start. In the vmkernel log, you might see an error similar to: Vmxnet3: 15204: Failed to allocate memory for tq 0.
This issue is resolved in this release.
- PR 2271879: Stateless boot operations might intermittently fail when using the vSphere Authentication Proxy service on a vCenter Server system
Stateless boot operations of ESXi hosts might intermittently fail when using the vSphere Authentication Proxy service on a vCenter Server system.
This issue is resolved in this release.
- PR 2280737: A corruption in the dvfilter-generic-fastpath heap might cause an ESXi host to fail with a purple diagnostic screen
A race condition in the get-set firewall-rule operations for dvfilter-generic-fastpath might cause a buffer overflow and heap corruption. As a result, the ESXi host might fail with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2244449: In rare cases, when live disk migration is interrupted, the guest operating system might become temporarily unresponsive
After an interruption of a live migration involving one or more disks, such as a Storage vMotion or a vMotion without shared storage, the guest operating system might become unresponsive for up to 8 minutes, due to a rare race condition.
This issue is resolved in this release.
- PR 2240049: Limitations in the network message buffer (msgBuffer) size might cause loss of log messages
The size of the network msgBuffer is 1 MB, but the main msgBuffer supports up to 25,000 lines of any length, which is at least 3 MB. If the network is slow, the writer thread of the network msgBuffer is faster than the reader thread. This leads to loss of log messages with an alert: lost XXXX log messages.
This issue is resolved in this release. The size of the network msgBuffer is increased to 3 MB.
- PR 2251459: vSAN host contains python-zdump file related to CMMDS
The vsanmgmt daemon on an ESXi host might run out of memory due to use of the cmmds-tool. This problem can cause the host to generate a python-zdump file. You might see the following entry in the vsanmgmt.log: No space left on device Failed to initialize CMMDS: Out of memory.
This issue is resolved in this release.
- PR 2275735: Allocation of address handles might intermittently fail on virtual machines using a PVRDMA network adapter
If the GID for remote direct memory access (RDMA) of a virtual machine using a paravirtualized RDMA (PVRDMA) network adapter changes, the virtual device might leak address handles associated with that GID. If GIDs are frequently changed, the PVRDMA device might allocate all address handles of the physical Host Channel Adapters (HCA). As a result, any attempts to allocate an address handle in the guest virtual machine fail.
This issue is resolved in this release.
- PR 2289950: After an abrupt reboot of an ESXi host, certain virtual machines might go into invalid state and you cannot complete tasks such as power on or reconfigure
Virtual machines might go into invalid state and you cannot complete tasks such as power on, create, delete, migrate and reconfigure, depending on which locks are stuck in transactions as a result of an abrupt reboot of the ESXi host.
This issue is resolved in this release.
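Usage sketch for the /UserVars/HardwareHealthIgnoredSensors advanced option described in PR 2224770 above. The colon-separated syntax comes from the resolved-issue description; the sensor IDs shown are hypothetical placeholders:
# Ignore false alarms from two sensors (IDs are placeholders; use your own sensor IDs).
esxcfg-advcfg -s 13.1:13.2 /UserVars/HardwareHealthIgnoredSensors
# Confirm the current value of the option.
esxcfg-advcfg -g /UserVars/HardwareHealthIgnoredSensors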
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the vmkusb VIB.
- Update to the vmkusb driver
The vmkusb driver is updated to version 0.1-1.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the misc-drivers VIB.
Patch Category | Enhancement |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2315654 |
CVE numbers | N/A |
This patch updates the ixgben VIB.
- Update to the ixgben driver
The ixgben driver is updated to version 1.7.1.15-1.
Patch Category | Enhancement |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2305392 |
CVE numbers | N/A |
This patch updates the qlnativefc VIB.
- PR 2305392: Update to the qlnativefc driver
The qlnativefc driver is updated to version 2.1.73.0-5.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the igbn VIB.
Patch Category | Enhancement |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2305388 |
CVE numbers | N/A |
This patch updates the nvme VIB.
- PR 2305388: Updates to the NVMe driver
The NVMe driver is updated to version 1.2.2.28-1.
Patch Category | Enhancement |
Patch Severity | Important |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2305388 |
CVE numbers | N/A |
This patch updates the vmware-esx-esxcli-nvme-plugin VIB.
- PR 2305388: Updates to the esxcli-nvme-plugin
The esxcli-nvme-plugin is updated to version 1.2.0.36.
Patch Category | Enhancement |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2305399 |
CVE numbers | N/A |
This patch updates the lsi-mr3 VIB.
- Update to the lsi_mr3 driver
The lsi_mr3 driver is updated to version 7.708.07.00-3.
Patch Category | Enhancement |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2305397, 2342727 |
CVE numbers | N/A |
This patch updates the lsi-msgpt3 VIB.
- Update to the lsi_msgpt3 driver
The lsi_msgpt3 driver is updated to version 17.00.02.00-1.
Patch Category | Enhancement |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2305396, 2348943 |
CVE numbers | N/A |
This patch updates the lsi-msgpt35 VIB.
- Update to the lsi_msgpt35 driver
The lsi_msgpt35 driver is updated to version 09.00.00.00-5.
Patch Category | Enhancement |
Patch Severity | Moderate |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2270209 |
CVE numbers | N/A |
This patch updates the lsi-msgpt2 VIB.
- PR 2270209: Update to the lsi_msgpt2 driver
The lsi_msgpt2 driver is updated to version 20.00.06.00-2.
Patch Category | Enhancement |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2305389 |
CVE numbers | N/A |
This patch updates the smartpqi VIB.
- PR 2305389: Update to the smartpqi driver
The smartpqi driver is updated to version 1.0.1.553-28.
Patch Category | Enhancement |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2315657 |
CVE numbers | N/A |
This patch updates the i40en VIB.
- Update to the i40en driver
The i40en driver is updated to version 1.8.1.9-2.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2322613 |
CVE numbers | N/A |
This patch updates the lsu-hp-hpsa-plugin VIB.
- Update of the lsu-hp-hpsa-plugin driver
The lsu-hp-hpsa-plugin driver is updated to version 2.0.0-16.
Patch Category | Enhancement |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2315644 |
CVE numbers | N/A |
This patch updates the bnxtnet VIB.
- Update to the bnxtnet driver
The bnxtnet driver is updated to version 20.6.101.7-23.
- PR 2315644: ESXi 6.5 Update 3 adds support for Broadcom 100G Network Adapters
ESXi 6.5 Update 3 adds support for Broadcom 100G Network Adapters
Patch Category | Enhancement |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2305393 |
CVE numbers | N/A |
This patch updates the lpfc VIB.
Patch Category | Enhancement |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the brcmfcoe VIB.
Patch Category | Enhancement |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2327765 |
CVE numbers | N/A |
This patch updates the nenic VIB.
- Update to the nenic driver
The nenic driver is updated to version 1.0.29.0-1.
Patch Category | Bugfix |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2275902, 2327833 |
CVE numbers | N/A |
This patch updates the vmw-ahci VIB.
- PR 2275902: An ESXi host might fail with a purple diagnostic screen due to lack of physical CPU heartbeat
If you use an ESXi host with a CD-ROM drive model DU-8A5LH, the CD-ROM drive might report an unknown Frame Information Structure (FIS) exception. The vmw_ahci driver does not handle the exception properly and creates repeated PORT_IRQ_UNK_FIS exception logs in the kernel. The repeated logs cause a lack of physical CPU heartbeat, and the ESXi host might fail with a purple diagnostic screen.
This issue is resolved in this release.
- PR 2327833: Discrepancies in the AHCI specification of the vmw_ahci driver and AHCI controller firmware might cause multiple errors
In heavy I/O workloads, discrepancies between the AHCI implementation of the ESXi native vmw_ahci driver and third-party AHCI controller firmware, such as that of the DELL BOSS-S1 adapter, might cause multiple errors, including storage outage errors or a purple diagnostic screen.
This issue is resolved in this release.
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2109019, 2277230, 2262285, 2278591, 2341405 |
CVE numbers |
This patch updates the esx-base, vsan, and vsanhealth VIBs to resolve the following issues:
- Update to the Python library
The Python third-party library is updated to version 3.5.6.
- Update to libxml2 library
The ESXi userworld libxml2 library is updated to version 2.9.9.
- PR 2262285: An ESXi SNMP agent might respond to an SNMPv3 request without the proper authentication
The SNMP agent in an ESXi host might respond to SNMPv3 requests without a user name or password and authentication in the payload body.
This issue is resolved in this release.
- Update to the OpenSSH package
The OpenSSH package is updated to version 7.9 P1.
- PR 2278591: Cloning multiple virtual machines simultaneously on vSphere Virtual Volumes might stop responding
When you clone multiple virtual machines simultaneously from vSphere on a vSphere Virtual Volumes datastore, a setPEContext VASA API call is issued. If all connections to the VASA Providers are busy or unavailable at the time of issuing the setPEContext API call, the call might fail and the cloning process stops responding.
This issue is resolved in this release.
- Update to OpenSSL
The OpenSSL package is updated to version openssl-1.0.2r.
- Update to the BusyBox package
The BusyBox package is updated to version 1.29.3.
- PR 2341405: When exporting virtual machines as OVF files by using the Host Client, VMDK disk files for different virtual machines have the same prefix
When exporting multiple virtual machines to one folder, the VMDK names might be conflicting and get auto-renamed by the Web browser. This makes the exported OVF files invalid.
This issue is resolved in this release. Exported VMDK file names are now based on the virtual machine name.
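As a quick spot check of the open-source component updates listed above (Python 3.5.6, OpenSSL 1.0.2r, BusyBox 1.29.3), the userworld binaries on a patched host report their versions directly from the ESXi shell. Binary names and availability are assumed to match a default 6.5 installation:
# Report the bundled Python interpreter version (expect 3.5.6 after patching).
python -V
# Report the userworld OpenSSL version (expect 1.0.2r).
openssl version
# BusyBox prints its version banner when run without arguments (expect 1.29.3).
busybox | head -1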
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | Yes |
Virtual Machine Migration or Shutdown Required | Yes |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | 2109019, 2277230 |
CVE numbers | N/A |
This patch updates the tools-light VIB.
Patch Category | Security |
Patch Severity | Important |
Host Reboot Required | No |
Virtual Machine Migration or Shutdown Required | No |
Affected Hardware | N/A |
Affected Software | N/A |
VIBs Included |
|
PRs Fixed | N/A |
CVE numbers | N/A |
This patch updates the esx-ui VIB.
Profile Name | ESXi-6.5.0-20190702001-standard |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | July 2, 2019 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2164153, 2217863, 2240400, 2214767, 2250679, 2247984, 2228928, 2257478, 2204917, 2257345, 2248758, 2267143, 1982634, 2266767, 2288424, 2282080, 2267503, 2277645, 2230554, 2291669, 2163396, 2203387, 2230181, 2232204, 2224770, 2224775, 2227125, 2246959, 2257458, 2271326, 2267871, 2282205, 2220007, 2227665, 2292798, 2117131, 2268193, 2293806, 2300123, 2301266, 2301817, 2305376, 2265828, 2305720, 2259009, 2310825, 2280486, 2322967, 2311328, 2316813, 2292417, 2310409, 2314297, 2323959, 2224477, 2293772, 2203431, 2113782, 2256003, 2212913, 2280989, 2297764, 2325697, 2324404, 2260907, 2242242, 2320302, 2226863, 2219305, 2301228, 2221249, 2304091, 2271879, 2280737, 2244449, 2240049, 2251459, 2275735, 2289950, 2315654 , 2267717 , 2305392 , 2319597 , 2305388 , 2305388 , 2305399 , 2305397 , 2342727 , 2305396 , 2348943 , 2270209 , 2305389 , 2315657 , 2322613 , 2315644 , 2327765 , 2275902 , 2327833, 2267717 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
SFCBD might intermittently stop responding with an error
wantCoreDump:sfcb-vmware_int signal:11
. -
Virtual machines with xHCI controllers supporting USB 3.0 might fail with a VMX error message during suspend or migrate operations. You might see an error message similar to:
PANIC: VERIFY bora/vmcore/vmx/main/monitorAction.c:598 bugNr=10871
. -
If hostd restarts during a long-running quiesced snapshot operation, hostd might automatically run a snapshot consolidation to remove redundant disks and improve the virtual machine performance. However, the consolidation operation might race with the running quiesced snapshot operation and the virtual machines stop responding.
-
Virtual machines using 3D software might intermittently lose connectivity with a VMX panic error displayed on a blue screen.
-
When a Linux virtual machine is configured with a specific memory size, such as 2052 MB or 2060 MB, it might fail to power on and display only a blank screen.
-
Even though the value is correctly set, the esxcli network command
$esxcli.network.nic.coalesce.set.Invoke
might return a false value as a result. This might impact existing automation scripts. See the sketch after this list.
When you create a virtual machine with preallocated memory, for example a virtual machine with latency sensitivity set to high, the memory might be allocated from a remote NUMA node instead of selecting the local NUMA node. This issue might depend on the memory usage of the ESXi host and does not affect hosts with only one NUMA node.
-
The
Actor Key
value of LACP PDUs might change to0
during the link status flapping. -
Creating multiple VFFS volumes on a single ESXi host is not supported. When you try to create multiple VFFS volumes on an ESXi host and then collect host profiles, the operation fails with an error:
Exception while Gathering Generic Plug-in Data. Exception: name 'INVALID_SYS_CONFIG_MSG_KEY' is not defined error
. -
If you have modified the type of a virtual CD-ROM device on a virtual machine in an earlier version of ESXi, after you update to a later version and you try to install or update VMware Tools, the virtual machine might be terminated and marked as invalid.
-
The configuration of big memory nodes requires a large memory overhead mapped in xmap. Because xmap does not support memory buffer sizes larger than 256 MB, the ESXi host might fail with a purple diagnostic screen and an error similar to:
0:00:01:12.349 cpu0:1)@BlueScreen: MEM_ALLOC bora/vmkernel/main/memmap.c:4048
0:00:01:12.356 cpu0:1)Code start: 0x418026000000 VMK uptime:
0:00:01:12.363 cpu0:1)0x4100064274f0:[0x4180261726e1]+0x261726e1 stack: 0x7964647542203a47
0:00:01:12.372 cpu0:1)0x4100064275a0:[0x4180261720a1]+0x261720a1 stack: 0x410006427600
0:00:01:12.381 cpu0:1)0x410006427600:[0x418026172d36]+0x26172d36 stack: 0xfd000000000
0:00:01:12.390 cpu0:1)0x410006427690:[0x4180261dcbc3]+0x261dcbc3 stack:0x0
0:00:01:12.398 cpu0:1)0x4100064276b0:[0x4180261f0076]+0x261f0076 stack: 0x8001003f
0:00:01:12.406 cpu0:1)0x410006427770:[0x41802615947f]+0x2615947f stack: 0x0 -
When you set up the network and enable vSphere Distributed Switch health check to perform configuration checks, the ESXi host might fail with a purple diagnostic screen.
-
The P2MCache lock is a spin lock compromising the fairness across execution threads that wait for the P2MCache. As a result, the ESXi host might fail with purple diagnostic screen and the following error:
2017-09-15T06:50:17.777Z cpu11:153493)@BlueScreen: Spin count exceeded - possible deadlock with PCPU 19
2017-09-15T06:50:17.777Z cpu11:153493)Code start: 0x418034400000 VMK uptime: 8:21:18:05.527
2017-09-15T06:50:17.778Z cpu11:153493)Saved backtrace from: pcpu 19 SpinLock spin out NMI
2017-09-15T06:50:17.778Z cpu11:153493)0x439127a1bb60:[0x4180345911bf]VmMemPin_ReleasePhysMemRange@vmkernel#nover+0x1f stack: 0x0
2017-09-15T06:50:17.778Z cpu11:153493)0x439127a1bbb0:[0x41803455d706]P2MCache_Release@vmkernel#nover+0xa6 stack: 0x4393bb7a7000
2017-09-15T06:50:17.779Z cpu11:153493)0x439127a1bbf0:[0x4180345658a2]PhysMem_ReleaseSGE@vmkernel#nover+0x16 stack: 0x0 -
When you use devices with a physical block size other than 512 or 4096 bytes, you might see multiple warning messages in the
/var/log/vmkernel.log
file of the ESXi host, similar to: ScsiPath: 4395: The Physical block size "8192" reported by the path vmhba3:C0:T1:L0 is not supported.
-
Virtual machines might become unresponsive due to repetitive failures in some third-party device drivers to process commands. You might see the following error when opening the virtual machine console:
Error: Unable to connect to the MKS: Could not connect to pipe \\.\pipe\vmware-authdpipe within retry period
-
If a vSphere API for Storage Awareness provider modifies the vSphere Virtual Volumes policy unattended, a null
VvolId
parameter might update the vSphere Virtual Volumes metadata. This results in a VASA call with a null VvolId
parameter and a failure when creating a virtual machine snapshot. -
The DVFilter might receive unexpected or corrupt values in the shared memory ring buffer, causing the internal function to return
NULL
. If thisNULL
value is not handled gracefully, an ESXi host might fail with a purple diagnostic screen at the DVFilter level. -
An ESXi host might fail with a purple diagnostic screen displaying a
sbflush_internal panic
message due to some discrepancies in the internal statistics. -
You might see many messages similar to
VmkNicImpl::IsDhclientProcess: No valid dhclient pid found
in the hostd and syslog logs. TheIsDhclientProcess
process is in a hot code path and triggers frequently. -
When you use the Software iSCSI adapter, the ESXi host might fail with a purple diagnostic screen due to a race condition.
-
If you run the
snmpwalk
command to get battery status, the SNMP agent might incorrectly report the NVDIMM battery sensor on Dell 14G servers as down. The processing ofIPMI_NETFN_SE/GET_SENSOR_READING
code does not check status bits properly. -
The vmsyslogd service might fail to write VOB logs due to memory limits, because the current method of reporting VOBs is memory-heavy.
-
Larger configurations might exceed the limit for the number of locks and hostd starts failing with an error similar to:
hostd Panic: MXUserAllocSerialNumber: too many locks!
-
An NFS volume mounted by using a host name instead of an IP address might not persist after a reboot if there is an intermittent failure in resolving the host name.
-
Some IPMI sensors might intermittently change from green to red and the reverse, which creates false hardware health alarms.
-
You might see many error messages such as
Heap globalCartel-1 already at its maximum size. Cannot expand.
in the vmkernel.log, because the sfcbd service fails to process all forked processes. As a result, the ESXi host might become unresponsive and some operations might fail. -
You might intermittently see alarms for very large values of counters for read or write disk latency, such as
datastore.totalReadLatency
anddatastore.totalWriteLatency
. -
During the disk decommission operation, a race condition in the decommission workflow might cause the ESXi host to fail with a purple diagnostic screen.
-
Inefficient block allocation mechanisms might lead to multiple iterations of metadata reads. This causes a long quiesce time while creating and extending thick provisioned lazy zeroed VMDK files on a VMFS6 datastore.
-
When you navigate to the Monitor tab for an ESXi host and under Performance you select Physical Adapters > pNIC vSwitch Port Drop Rate, you might see a counter showing dropped packets, while no packets are dropped. This is because the IOChain framework treats the reduced number of packets, when using LRO, as packet drops.
-
When checking the uplink status, the teaming policy might not check if the uplink belongs to a port group and the affected port group might be incorrect. As a result, you might see redundant VOB messages in the
/var/run/log/vobd.log
file, such as:Lost uplink redundancy on virtual switch "xxx". Physical NIC vmnic8 is down. Affected portgroups:"xxx"
. -
After an ESXi host reboot, you might not see NFS datastores that use a Nutanix fault tolerance solution mounted in the vCenter Server system. However, you can see the volumes on the ESXi host.
-
Network booting of virtual machines restricts the distance between a virtual machine and the network boot server to 15 routers. If your system has more routers on the path between a virtual machine and any server necessary for boot (PXE, DHCP, DNS, TFTP), the booting process fails, because the boot request does not reach the server.
-
The vSphere Web Client or vSphere Client might display a host profile compliance as unknown if you try to customize the DNS configuration by navigating to Networking Configuration > NetStack Instance > defaultTcpipStack > DNS configuration.
-
If a virtual machine runs Windows 10 version 1809, has snapshots, and resides on a VMFS6 datastore, the virtual machine might either start slowly or stop responding during the start phase.
-
The SFCB process might fail because
str==NULL/length=0
values in input requests are not properly handled. -
Managing the Active Directory lwsmd service from the Host Client or vSphere Web Client might fail with the following error:
Failed - A general system error occurred: Command /etc/init.d/lwsmd timed out after 30 secs
. -
Hardware sensors in red state might report a green state in an ESXi host and in the vSphere Client. As a result, the alarm might not be triggered, and you might not receive an alert.
-
Even if C-states are enabled in the firmware setup of an ESXi host, the vmkernel does not detect all the C-states correctly. The power screen of the esxtop tool shows columns
%C0
(percentage of time spent in C-state 0) and%C1
(percentage of time spent in C-state 1), but does not show column%C2
. As a result, the system performance per watt of power is not maximized. -
An ESXi host might fail with a purple diagnostic screen at an UplinkRcv capture point on a CNA, when you use the pktcap-uw syntax and a device does not have an uplink object associated while capturing a packet on the uplink.
-
An ESXi host might fail with a purple diagnostic screen due to a very rare race condition when the host tries to access a memory region in the short window between the moment it is freed and the moment it is allocated to another task.
-
A virtual machine that runs on an NFS datastore might start performing poorly due to the slow performance of an NFS datastore. After you reboot the ESXi host, the virtual machine starts performing normally again.
-
This issue is specific to vSphere Virtual Volumes datastores when a VMDK file is assigned to different SCSI targets across snapshots. The lock file of the VMDK is reassigned across different snapshots and might be incorrectly deleted when you revert the virtual machine to a snapshot. Due to the missing lock file, the disk does not open, and the virtual machine fails to power on.
-
When you migrate multiple virtual machines by using vSphere Storage vMotion, the container port might have duplicate VIF ID and the ESXi host might fail with a purple diagnostic screen.
-
After you upgrade your system to vSphere 6.5, you might not be able to add ESXi hosts to vCenter Server, because the sfcb agent might not be scrubbing data returned from the hardware and IPMI layers.
-
An ESXi host might fail with a purple diagnostic screen because of
NullPointerException
in the I/O completion when connectivity to NFS41 datastores intermittently fails. -
An ESXi host might fail with a purple diagnostic screen while a TPM performs a read operation. You might see an error message similar to:
#PF Exception 14 in world XXX:hostd-worker IP YYY addr ZZZ
In the backtrace of the purple diagnostic screen, you might see entries similar to:
Util_CopyOut@vmkernel#nover+0x1a stack: 0x418025d7e92c
TpmCharDev_Read@(tpmdriver)#+0x3b stack: 0x43082b811f78
VMKAPICharDevIO@vmkernel#nover+0xdd stack: 0x430beffebe90
VMKAPICharDevDevfsWrapRead@vmkernel#nover+0x2d stack: 0x4391d2e27100
CharDriverAsyncIO@vmkernel#nover+0x83 stack: 0x430148154370
FDS_AsyncIO@vmkernel#nover+0x3dd stack: 0x439e0037c6c0
FDS_DoSyncIO@vmkernel#nover+0xe3 stack: 0x439544e9bc30
DevFSFileIO@vmkernel#nover+0x2ea stack: 0x1
FSSVec_FileIO@vmkernel#nover+0x97 stack: 0x1707b4
UserChardevIO@(user)#+0xbb stack: 0x43071ad64660
UserChardevRead@(user)#+0x25 stack: 0x4396045ef700
UserVmfs_Readv@(user)#+0x8c stack: 0x430beff8e490
LinuxFileDesc_Read@(user)#+0xcb stack: 0x4396047cbc68
User_ArchLinux32SyscallHandler@(user)#+0x6c stack: 0x0
User_LinuxSyscallHandler@(user)#+0xeb stack: 0x0
User_LinuxSyscallHandler@vmkernel#nover+0x1d stack: 0x0
gate_entry_@vmkernel#nover+0x0 stack: 0x0 -
When a VNC connection is disconnected, the error conditions might not be handled properly and the virtual machine might stop responding.
-
If the ESXi host does not have enough memory, the memory allocation might fail due to invalid memory access. As a result, the ESXi host might fail with a purple diagnostic screen.
-
When you attempt to log in to an ESXi host with incorrect credentials multiple times, the hostd service in the ESXi host might stop responding.
-
When you quiesce virtual machines that run Microsoft Windows Server 2008 or later, application-quiesced snapshots are created. The number of possible concurrent snapshots is 32, which might generate a lot of parallel threads to track tasks in the snapshot operations. As a result, the hostd service might become unresponsive.
-
With vCenter Server 6.5 Update 3, you can migrate virtual machines by using vSphere vMotion between ESXi hosts with N-VDS and vSphere Standard Switches. To enable the feature, you must upgrade your vCenter Server system to vCenter Server 6.5 Update 3 and the ESXi hosts on both the source and destination sites to ESXi 6.5 Update 3.
-
When the
syslog
files rotate, they apply timestamps with the help of the bash configuration file. The timestamp refreshes only during the restart of the hostd service and might be outdated. As a result, the syslog third-party scripts might fail. -
A virtual machine with VMXNET3 vNICs cannot start by using Citrix PVS bootstrap, because any pending interrupts on the virtual network device are not handled properly during the transition from the PXE boot to the start of the guest operating system. As a result, the guest operating system cannot start the virtual network device and the virtual machine also fails to start.
-
When the network supporting a vSAN stretched cluster experiences many disconnect events, the ESXi host memory used by the vSAN module might be exhausted. As a result, the ESXi host might fail with a purple diagnostic screen.
-
vSAN health service might report the overall physical disk health as red and trigger an alarm, and then recover to green status. These alarms might happen while rescanning physical disks.
-
This issue occurs when the guest is running VMware Tools with a later version than the version of VMware Tools the ESXi host was released with, or when the host cannot recognize the
guestID
parameter of theGuestInfo
object. The issue affects mainly virtual machines with CentOS and VMware Photon OS. -
vSphere Virtual Volumes datastores use a VMware CA-signed certificate to communicate with VASA providers. When the vCenter Server instance or the CA certificate changes, vCenter Server imports the new CA-signed certificate, but the SSL reset signal to the vSphere Virtual Volumes datastore might not be triggered. As a result, the communication between the vSphere Virtual Volumes datastore and the VASA providers might fail, and the vSphere Virtual Volumes datastore might become inaccessible.
-
PLOG Relog can run along with DecommissionMD, causing Relog to access freed PLOG device state tables in an ESXi host. As a result, the host might fail with a purple diagnostic screen. The backtrace includes one of the following entries:
Bucketlist_LowerBound
orPLOGRelogRetireLsns.
-
If you have a virtual machine on a SeSparse snapshot and you query the physical layout of the VMDK from the Guest Operating System or a third-party application, a physical CPU lockup might be triggered if the VMDK file size is not a multiple of 4K. As a result, the ESXi host fails with a purple screen.
-
When provisioning instant clones in the vSphere Web Client or vSphere Client, you might see the error
Disconnected from virtual machine.
The virtual machines cannot power on because the virtual machine executable (VMX) process fails. -
ESXi hosts send a
HardwareSensorGroupStatus
event which allows the vCenter Server system to track the hardware alarm state of the hosts by a group, such as fan, storage, power, and temperature. Storage alarms are not raised or lowered based on reports from CIM providers which track the host bus adapter status. -
When an ESXi host removes a PShare Hint from a PShare chain, if the PShare chain is corrupted, the ESXi host might fail with a purple diagnostic screen and an error similar to:
0x43920bd9bdc0:[0x41800c5930d6]VmMemCow_PShareRemoveHint
0x43920bd9be00:[0x41800c593172]VmMemCowPFrameRemoveHint
0x43920bd9be30:[0x41800c594fc8]VmMemCowPShareFn@vmkernel
0x43920bd9bf80:[0x41800c500ef4]VmAssistantProcessTasks@vmkernel
0x43920bd9bfe0:[0x41800c6cae05]CpuSched_StartWorld@vmkernel
-
If you migrate a virtual machine with a NetX filter across clusters by using vSphere vMotion, the NetX filter might get blocked after the migration as a side effect of service profile optimization. As a result, the virtual machine loses network connectivity.
-
Browser problems can occur when you navigate to Storage > vsan > vsanDatastore > Configure > General. On some Windows-based browsers, the Properties field blinks when opened for the first time. The browser might become unresponsive.
-
In large scale environments, such as when vCenter Server manages more than 900 vSAN hosts, the vCenter management server might slow or stop responding to user requests, due to a network socket handler leak.
-
If the Maximum Transmission Unit (MTU) size is configured incorrectly and the MTU size of the virtual switch is less than the configured MTU size for the VMkernel port, the vMotion operation might fail. If the failed vMotion operation is immediately followed by a hot-add or a Storage vMotion operation, this causes the ESXi host to fail with a purple diagnostic screen.
-
After a host fails, the Entry Persistence Daemon (EPD) might not restart. Some components continue to persist after an object is deleted. Messages similar to the following appear in the EPD log:
2018-07-10T10:35:21.029Z 71308 -- XXX: Lock file '/scratch/epd-store.db' exists.
2018-07-10T10:35:21.029Z 71308 -- XXX: Did the host or EPD recently crash?
2018-07-10T10:35:21.029Z 71308 -- XXX: Assuming it's OK. Unlinking lock file..
2018-07-10T10:35:21.030Z 71308 Failed to delete lock file: Is a directory -
When you issue the RVC command
vsan.observer
, no data appears on the vSAN Observer HTML page. You might see a Ruby callstack on the RVC console. This problem occurs because the payload is too big for the SOAP request/response
. -
This problem occurs only when the SATP is VMW_SATP_LOCAL. In ESXi 6.0, if a device type SATP claim rule with SATP as VMW_SATP_LOCAL was added with an incorrectly formatted config option, then NMP and SATP plug-ins do not recognize the option and fail to discover the device when upgraded to a later ESXi release.
You might see log entries similar to the following:
2018-11-20T02:10:36.333Z cpu1:66089)WARNING: NMP: nmp_DeviceAlloc:2048: nmp_AddPathToDevice failed Bad parameter (195887111).
2018-11-20T02:10:36.333Z cpu1:66089)WARNING: ScsiPath: 6615: Plugin 'NMP' had an error (Bad parameter) while claiming path 'vmhba0:C0:T2:L0'. Skipping the path -
I/O requests on a NFS 4.1 datastore might occasionally cause an ESXi host to fail with a purple diagnostic screen due to a race condition in the request completion path.
-
The maximum memory heap size for any module is 3 GB. If the requested total memory size is more than 4 GB, an integer overflow occurs. As a result, the MAC addresses of the virtual machine interfaces are set to
00:00:00:00:00:00
and the VMXNET3 device might fail to start. In the vmkernel log, you might see an error similar to:Vmxnet3: 15204: Failed to allocate memory for tq 0
. -
Stateless boot operations of ESXi hosts might intermittently fail when using the vSphere Authentication Proxy service on a vCenter Server system.
-
A race condition in the
get-set firewall-rule
operations fordvfilter-generic-fastpath
might cause a buffer overflow and heap corruption. As a result, the ESXi host might fail with a purple diagnostic screen. -
After an interruption of a live migration involving one or more disks, such as a Storage vMotion or a vMotion without shared storage, the guest operating system might become unresponsive for up to 8 minutes, due to a rare race condition.
-
The size of the network msgBuffer is 1 MB but the main msgBuffer supports up to 25 000 lines of any length, which is at least 3 MB. If the network is slow, the write thread to the network msgBuffer is faster than the reader thread. This leads to loss of log messages with an alert:
lost XXXX log messages
. -
The vsanmgmt daemon on an ESXi host might run out of memory due to use of the cmmds-tool. This problem can cause the host to generate a
python-zdump
file. You might see the following entry in the vsanmgmt.log:No space left on device Failed to initialize CMMDS: Out of memory
. -
If the GID for remote direct memory access (RDMA) of a virtual machine using a paravirtualized RDMA (PVRDMA) network adapter changes, the virtual device might leak address handles associated with that GID. If GIDs are frequently changed, the PVRDMA device might allocate all address handles of the physical Host Channel Adapters (HCA). As a result, any attempts to allocate an address handle in the guest virtual machine fail.
-
Virtual machines might go into an invalid state, and you cannot complete tasks such as power on, create, delete, migrate, and reconfigure, depending on which locks are stuck in transactions as a result of an abrupt reboot of the ESXi host.
-
The vmkusb driver is updated to version 0.1-1.
-
The ixgben driver is updated to version 1.7.1.15-1.
-
The qlnativefc driver is updated to version 2.1.73.0-5.
-
The NVMe driver is updated to version 1.2.2.28-1.
-
The esxcli-nvme-plugin is updated to version 1.2.0.36.
-
The lsi_mr3 driver is updated to version 7.708.07.00-3.
-
The lsi_msgpt3 driver is updated to version 17.00.02.00-1.
-
The lsi_msgpt35 driver is updated to version 09.00.00.00-5.
-
The lsi_msgpt2 driver is updated to version 20.00.06.00-2.
-
The smartpqi driver is updated to version 1.0.1.553-28.
-
The i40en driver is updated to version 1.8.1.9-2.
-
The lsu-hp-hpsa-plugin driver is updated to version 2.0.0-16.
-
The bnxtnet driver is updated to version 20.6.101.7-23.
-
ESXi 6.5 Update 3 adds support for Broadcom 100G Network Adapters.
-
The nenic driver is updated to version 1.0.29.0-1.
-
If you use an ESXi host with a CD-ROM drive model DU-8A5LH, the CD-ROM drive might report an unknown
Frame Information Structure
(FIS) exception. The vmw_ahci driver does not handle the exception properly and creates repeatedPORT_IRQ_UNK_FIS
exception logs in the kernel. The repeated logs cause lack of physical CPU heartbeat and the ESXi host might fail with a purple diagnostic screen. -
In heavy I/O workloads, discrepancies in the AHCI specification of the ESXi native vmw_ahci driver and third-party AHCI controller firmware, such as of the DELL BOSS-S1 adaptor, might cause multiple errors, including storage outage errors or a purple diagnostic screen.
- You might not see details for qlnativefc driver worlds when running the ps command. The worldName fields have a null value.
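The interrupt coalescing item earlier in this list refers to the $esxcli.network.nic.coalesce.set.Invoke call exposed through PowerCLI. Independently of the return value reported to an automation script, you can read the parameters back from the ESXi shell to confirm that a set operation took effect. This is a sketch that assumes the coalesce namespace is available in your build; option names for the set command differ between releases, so check the built-in help first.
# Display the current interrupt coalescing parameters for the physical NICs.
esxcli network nic coalesce get
# Review the supported options before changing anything; they vary by release.
esxcli network nic coalesce set --help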
-
Profile Name | ESXi-6.5.0-20190702001-no-tools |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | July 2, 2019 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2164153, 2217863, 2240400, 2214767, 2250679, 2247984, 2228928, 2257478, 2204917, 2257345, 2248758, 2267143, 1982634, 2266767, 2288424, 2282080, 2267503, 2277645, 2230554, 2291669, 2163396, 2203387, 2230181, 2232204, 2224770, 2224775, 2227125, 2246959, 2257458, 2271326, 2267871, 2282205, 2220007, 2227665, 2292798, 2117131, 2268193, 2293806, 2300123, 2301266, 2301817, 2305376, 2265828, 2305720, 2259009, 2310825, 2280486, 2322967, 2311328, 2316813, 2292417, 2310409, 2314297, 2323959, 2224477, 2293772, 2203431, 2113782, 2256003, 2212913, 2280989, 2297764, 2325697, 2324404, 2260907, 2242242, 2320302, 2226863, 2219305, 2301228, 2221249, 2304091, 2271879, 2280737, 2244449, 2240049, 2251459, 2275735, 2289950, 2315654 , 2267717 , 2305392 , 2319597 , 2305388 , 2305388 , 2305399 , 2305397 , 2342727 , 2305396 , 2348943 , 2270209 , 2305389 , 2315657 , 2322613 , 2315644 , 2327765 , 2275902 , 2327833, 2267717 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
SFCBD might intermittently stop responding with an error
wantCoreDump:sfcb-vmware_int signal:11
. -
Virtual machines with xHCI controllers supporting USB 3.0 might fail with a VMX error message during suspend or migrate operations. You might see an error message similar to:
PANIC: VERIFY bora/vmcore/vmx/main/monitorAction.c:598 bugNr=10871
. -
If hostd restarts during a long-running quiesced snapshot operation, hostd might automatically run a snapshot consolidation to remove redundant disks and improve the virtual machine performance. However, the consolidation operation might race with the running quiesced snapshot operation and the virtual machines stop responding.
-
Virtual machines using 3D software might intermittently lose connectivity with a VMX panic error displayed on a blue screen.
-
When a Linux virtual machine is configured with a specific memory size, such as 2052 MB or 2060 MB, it might fail to power on and display only a blank screen.
-
Even though the value is correctly set, the esxcli network command
$esxcli.network.nic.coalesce.set.Invoke
might return a false value as a result. This might impact existing automation scripts.
When you create a virtual machine with preallocated memory, for example a virtual machine with latency sensitivity set to high, the memory might be allocated from a remote NUMA node instead of selecting the local NUMA node. This issue might depend on the memory usage of the ESXi host and does not affect hosts with only one NUMA node.
-
The
Actor Key
value of LACP PDUs might change to0
during the link status flapping. -
Creating multiple VFFS volumes on a single ESXi host is not supported. When you try to create multiple VFFS volumes on an ESXi host and then collect host profiles, the operation fails with an error:
Exception while Gathering Generic Plug-in Data. Exception: name 'INVALID_SYS_CONFIG_MSG_KEY' is not defined error
. -
If you have modified the type of a virtual CD-ROM device on a virtual machine in an earlier version of ESXi, after you update to a later version and you try to install or update VMware Tools, the virtual machine might be terminated and marked as invalid.
-
The configuration of big memory nodes requires a large memory overhead mapped in xmap. Because xmap does not support memory buffer sizes larger than 256 MB, the ESXi host might fail with a purple diagnostic screen and an error similar to:
0:00:01:12.349 cpu0:1)@BlueScreen: MEM_ALLOC bora/vmkernel/main/memmap.c:4048
0:00:01:12.356 cpu0:1)Code start: 0x418026000000 VMK uptime:
0:00:01:12.363 cpu0:1)0x4100064274f0:[0x4180261726e1]+0x261726e1 stack: 0x7964647542203a47
0:00:01:12.372 cpu0:1)0x4100064275a0:[0x4180261720a1]+0x261720a1 stack: 0x410006427600
0:00:01:12.381 cpu0:1)0x410006427600:[0x418026172d36]+0x26172d36 stack: 0xfd000000000
0:00:01:12.390 cpu0:1)0x410006427690:[0x4180261dcbc3]+0x261dcbc3 stack:0x0
0:00:01:12.398 cpu0:1)0x4100064276b0:[0x4180261f0076]+0x261f0076 stack: 0x8001003f
0:00:01:12.406 cpu0:1)0x410006427770:[0x41802615947f]+0x2615947f stack: 0x0 -
When you set up the network and enable vSphere Distributed Switch health check to perform configuration checks, the ESXi host might fail with a purple diagnostic screen.
-
The P2MCache lock is a spin lock compromising the fairness across execution threads that wait for the P2MCache. As a result, the ESXi host might fail with purple diagnostic screen and the following error:
2017-09-15T06:50:17.777Z cpu11:153493)@BlueScreen: Spin count exceeded - possible deadlock with PCPU 19
2017-09-15T06:50:17.777Z cpu11:153493)Code start: 0x418034400000 VMK uptime: 8:21:18:05.527
2017-09-15T06:50:17.778Z cpu11:153493)Saved backtrace from: pcpu 19 SpinLock spin out NMI
2017-09-15T06:50:17.778Z cpu11:153493)0x439127a1bb60:[0x4180345911bf]VmMemPin_ReleasePhysMemRange@vmkernel#nover+0x1f stack: 0x0
2017-09-15T06:50:17.778Z cpu11:153493)0x439127a1bbb0:[0x41803455d706]P2MCache_Release@vmkernel#nover+0xa6 stack: 0x4393bb7a7000
2017-09-15T06:50:17.779Z cpu11:153493)0x439127a1bbf0:[0x4180345658a2]PhysMem_ReleaseSGE@vmkernel#nover+0x16 stack: 0x0 -
When you use devices with a physical block size other than 512 or 4096 bytes, you might see multiple warning messages in the
/var/log/vmkernel.log
file of the ESXi host, similar to: ScsiPath: 4395: The Physical block size "8192" reported by the path vmhba3:C0:T1:L0 is not supported.
-
Virtual machines might become unresponsive due to repetitive failures in some third-party device drivers to process commands. You might see the following error when opening the virtual machine console:
Error: Unable to connect to the MKS: Could not connect to pipe \\.\pipe\vmware-authdpipe within retry period
-
If a vSphere API for Storage Awareness provider modifies the vSphere Virtual Volumes policy unattended, a null
VvolId
parameter might update the vSphere Virtual Volumes metadata. This results in a VASA call with a null VvolId
parameter and a failure when creating a virtual machine snapshot. -
The DVFilter might receive unexpected or corrupt values in the shared memory ring buffer, causing the internal function to return
NULL
. If thisNULL
value is not handled gracefully, an ESXi host might fail with a purple diagnostic screen at the DVFilter level. -
An ESXi host might fail with a purple diagnostic screen displaying a
sbflush_internal panic
message due to some discrepancies in the internal statistics. -
You might see many messages similar to
VmkNicImpl::IsDhclientProcess: No valid dhclient pid found
in the hostd and syslog logs. TheIsDhclientProcess
process is in a hot code path and triggers frequently. -
When you use the Software iSCSI adapter, the ESXi host might fail with a purple diagnostic screen due to a race condition.
-
If you run the
snmpwalk
command to get battery status, the SNMP agent might incorrectly report the NVDIMM battery sensor on Dell 14G servers as down. The processing ofIPMI_NETFN_SE/GET_SENSOR_READING
code does not check status bits properly. -
The vmsyslogd service might fail to write VOB logs due to memory limits, because the current method of reporting VOBs is memory-heavy.
-
Larger configurations might exceed the limit for the number of locks and hostd starts failing with an error similar to:
hostd Panic: MXUserAllocSerialNumber: too many locks!
-
An NFS volume mounted by using a host name instead of an IP address might not persist after a reboot if there is an intermittent failure in resolving the host name. See the sketch after this list.
-
Some IPMI sensors might intermittently change from green to red and the reverse, which creates false hardware health alarms.
-
You might see many error messages such as
Heap globalCartel-1 already at its maximum size. Cannot expand.
in the vmkernel.log, because the sfcbd service fails to process all forked processes. As a result, the ESXi host might become unresponsive and some operations might fail. -
You might intermittently see alarms for very large values of counters for read or write disk latency, such as
datastore.totalReadLatency
anddatastore.totalWriteLatency
. -
During the disk decommission operation, a race condition in the decommission workflow might cause the ESXi host to fail with a purple diagnostic screen.
-
Inefficient block allocation mechanisms might lead to multiple iterations of metadata reads. This causes a long quiesce time while creating and extending thick provisioned lazy zeroed VMDK files on a VMFS6 datastore.
-
When you navigate to the Monitor tab for an ESXi host and under Performance you select Physical Adapters > pNIC vSwitch Port Drop Rate, you might see a counter showing dropped packets, while no packets are dropped. This is because the IOChain framework treats the reduced number of packets, when using LRO, as packet drops.
-
When checking the uplink status, the teaming policy might not check if the uplink belongs to a port group and the affected port group might be incorrect. As a result, you might see redundant VOB messages in the
/var/run/log/vobd.log
file, such as:Lost uplink redundancy on virtual switch "xxx". Physical NIC vmnic8 is down. Affected portgroups:"xxx"
. -
After an ESXi host reboot, you might not see NFS datastores that use a Nutanix fault tolerance solution mounted in the vCenter Server system. However, you can see the volumes on the ESXi host.
-
Network booting of virtual machines restricts the distance between a virtual machine and the network boot server to 15 routers. If your system has more routers on the path between a virtual machine and any server necessary for boot (PXE, DHCP, DNS, TFTP), the booting process fails, because the boot request does not reach the server.
-
The vSphere Web Client or vSphere Client might display a host profile compliance as unknown if you try to customize the DNS configuration by navigating to Networking Configuration > NetStack Instance > defaultTcpipStack > DNS configuration.
-
If a virtual machine runs Windows 10 version 1809, has snapshots, and resides on a VMFS6 datastore, the virtual machine might either start slowly or stop responding during the start phase.
-
The SFCB process might fail because
str==NULL/length=0
values in input requests are not properly handled. -
Managing the Active Directory lwsmd service from the Host Client or vSphere Web Client might fail with the following error:
Failed - A general system error occurred: Command /etc/init.d/lwsmd timed out after 30 secs
. -
Hardware sensors in red state might report a green state in an ESXi host and in the vSphere Client. As a result, the alarm might not be triggered, and you might not receive an alert.
-
Even if C-states are enabled in the firmware setup of an ESXi host, the vmkernel does not detect all the C-states correctly. The power screen of the esxtop tool shows columns
%C0
(percentage of time spent in C-state 0) and%C1
(percentage of time spent in C-state 1), but does not show column%C2
. As a result, the system performance per watt of power is not maximized. -
An ESXi host might fail with a purple diagnostic screen at an UplinkRcv capture point on a CNA, when you use the pktcap-uw syntax and a device does not have an uplink object associated while capturing a packet on the uplink.
-
An ESXi host might fail with a purple diagnostic screen due to a very rare race condition when the host tries to access a memory region in the short window between the moment it is freed and the moment it is allocated to another task.
-
A virtual machine that runs on an NFS datastore might start performing poorly due to the slow performance of an NFS datastore. After you reboot the ESXi host, the virtual machine starts performing normally again.
-
This issue is specific to vSphere Virtual Volumes datastores when a VMDK file is assigned to different SCSI targets across snapshots. The lock file of the VMDK is reassigned across different snapshots and might be incorrectly deleted when you revert the virtual machine to a snapshot. Due to the missing lock file, the disk does not open, and the virtual machine fails to power on.
-
When you migrate multiple virtual machines by using vSphere Storage vMotion, the container port might have duplicate VIF ID and the ESXi host might fail with a purple diagnostic screen.
-
After you upgrade your system to vSphere 6.5, you might not be able to add ESXi hosts to vCenter Server, because the sfcb agent might not be scrubbing data returned from the hardware and IPMI layers.
-
An ESXi host might fail with a purple diagnostic screen because of
NullPointerException
in the I/O completion when connectivity to NFS41 datastores intermittently fails. -
An ESXi host might fail with a purple diagnostic screen while a TPM performs a read operation. You might see an error message similar to:
#PF Exception 14 in world XXX:hostd-worker IP YYY addr ZZZ
In the backtrace of the purple diagnostic screen, you might see entries similar to:
Util_CopyOut@vmkernel#nover+0x1a stack: 0x418025d7e92c
TpmCharDev_Read@(tpmdriver)#+0x3b stack: 0x43082b811f78
VMKAPICharDevIO@vmkernel#nover+0xdd stack: 0x430beffebe90
VMKAPICharDevDevfsWrapRead@vmkernel#nover+0x2d stack: 0x4391d2e27100
CharDriverAsyncIO@vmkernel#nover+0x83 stack: 0x430148154370
FDS_AsyncIO@vmkernel#nover+0x3dd stack: 0x439e0037c6c0
FDS_DoSyncIO@vmkernel#nover+0xe3 stack: 0x439544e9bc30
DevFSFileIO@vmkernel#nover+0x2ea stack: 0x1
FSSVec_FileIO@vmkernel#nover+0x97 stack: 0x1707b4
UserChardevIO@(user)#+0xbb stack: 0x43071ad64660
UserChardevRead@(user)#+0x25 stack: 0x4396045ef700
UserVmfs_Readv@(user)#+0x8c stack: 0x430beff8e490
LinuxFileDesc_Read@(user)#+0xcb stack: 0x4396047cbc68
User_ArchLinux32SyscallHandler@(user)#+0x6c stack: 0x0
User_LinuxSyscallHandler@(user)#+0xeb stack: 0x0
User_LinuxSyscallHandler@vmkernel#nover+0x1d stack: 0x0
gate_entry_@vmkernel#nover+0x0 stack: 0x0 -
When a VNC connection is disconnected, the error conditions might not be handled properly and the virtual machine might stop responding.
-
If the ESXi host does not have enough memory, the memory allocation might fail due to invalid memory access. As a result, the ESXi host might fail with a purple diagnostic screen.
-
When you attempt to log in to an ESXi host with incorrect credentials multiple times, the hostd service in the ESXi host might stop responding.
-
When you quiesce virtual machines that run Microsoft Windows Server 2008 or later, application-quiesced snapshots are created. The number of possible concurrent snapshots is 32, which might generate a lot of parallel threads to track tasks in the snapshot operations. As a result, the hostd service might become unresponsive.
-
With vCenter Server 6.5 Update 3, you can migrate virtual machines by using vSphere vMotion between ESXi hosts with N-VDS and vSphere Standard Switches. To enable the feature, you must upgrade your vCenter Server system to vCenter Server 6.5 Update 3 and the ESXi hosts on both the source and destination sites to ESXi 6.5 Update 3.
-
When the
syslog
files rotate, they apply timestamps with the help of the bash configuration file. The timestamp refreshes only during the restart of the hostd service and might be outdated. As a result, the syslog third-party scripts might fail. -
A virtual machine with VMXNET3 vNICs cannot start by using Citrix PVS bootstrap, because any pending interrupts on the virtual network device are not handled properly during the transition from the PXE boot to the start of the guest operating system. As a result, the guest operating system cannot start the virtual network device and the virtual machine also fails to start.
-
When the network supporting a vSAN stretched cluster experiences many disconnect events, the ESXi host memory used by the vSAN module might be exhausted. As a result, the ESXi host might fail with a purple diagnostic screen.
-
vSAN health service might report the overall physical disk health as red and trigger an alarm, and then recover to green status. These alarms might happen while rescanning physical disks.
-
This issue occurs when the guest is running VMware Tools with a later version than the version of VMware Tools the ESXi host was released with, or when the host cannot recognize the
guestID
parameter of theGuestInfo
object. The issue affects mainly virtual machines with CentOS and VMware Photon OS. -
vSphere Virtual Volumes datastores use a VMware CA-signed certificate to communicate with VASA providers. When the vCenter Server instance or the CA certificate changes, vCenter Server imports the new CA-signed certificate, but the SSL reset signal to the vSphere Virtual Volumes datastore might not be triggered. As a result, the communication between the vSphere Virtual Volumes datastore and the VASA providers might fail, and the vSphere Virtual Volumes datastore might become inaccessible.
-
PLOG Relog can run along with DecommissionMD, causing Relog to access freed PLOG device state tables in an ESXi host. As a result, the host might fail with a purple diagnostic screen. The backtrace includes one of the following entries:
Bucketlist_LowerBound
orPLOGRelogRetireLsns.
-
If you have a virtual machine on a SeSparse snapshot and you query the physical layout of the VMDK from the Guest Operating System or a third-party application, a physical CPU lockup might be triggered if the VMDK file size is not a multiple of 4K. As a result, the ESXi host fails with a purple screen.
-
When provisioning instant clones in the vSphere Web Client or vSphere Client, you might see the error
Disconnected from virtual machine.
The virtual machines cannot power on because the virtual machine executable (VMX) process fails. -
ESXi hosts send a
HardwareSensorGroupStatus
event which allows the vCenter Server system to track the hardware alarm state of the hosts by a group, such as fan, storage, power, and temperature. Storage alarms are not raised or lowered based on reports from CIM providers which track the host bus adapter status. -
When an ESXi host removes a PShare Hint from a PShare chain, if the PShare chain is corrupted, the ESXi host might fail with a purple diagnostic screen and an error similar to:
0x43920bd9bdc0:[0x41800c5930d6]VmMemCow_PShareRemoveHint
0x43920bd9be00:[0x41800c593172]VmMemCowPFrameRemoveHint
0x43920bd9be30:[0x41800c594fc8]VmMemCowPShareFn@vmkernel
0x43920bd9bf80:[0x41800c500ef4]VmAssistantProcessTasks@vmkernel
0x43920bd9bfe0:[0x41800c6cae05]CpuSched_StartWorld@vmkernel
-
If you migrate a virtual machine with a NetX filter across clusters by using vSphere vMotion, the NetX filter might get blocked after the migration as a side effect of service profile optimization. As a result, the virtual machine loses network connectivity.
-
Browser problems can occur when you navigate to Storage > vsan > vsanDatastore > Configure > General. On some Windows-based browsers, the Properties field blinks when opened for the first time. The browser might become unresponsive.
-
In large scale environments, such as when vCenter Server manages more than 900 vSAN hosts, the vCenter management server might slow or stop responding to user requests, due to a network socket handler leak.
-
If the Maximum Transmission Unit (MTU) size is configured incorrectly and the MTU size of the virtual switch is less than the configured MTU size for the VMkernel port, the vMotion operation might fail. If the failed vMotion operation is immediately followed by a hot-add or a Storage vMotion operation, this causes the ESXi host to fail with a purple diagnostic screen.
-
After a host fails, the Entry Persistence Daemon (EPD) might not restart. Some components continue to persist after an object is deleted. Messages similar to the following appear in the EPD log:
2018-07-10T10:35:21.029Z 71308 -- XXX: Lock file '/scratch/epd-store.db' exists.
2018-07-10T10:35:21.029Z 71308 -- XXX: Did the host or EPD recently crash?
2018-07-10T10:35:21.029Z 71308 -- XXX: Assuming it's OK. Unlinking lock file..
2018-07-10T10:35:21.030Z 71308 Failed to delete lock file: Is a directory -
When you issue the RVC command
vsan.observer
, no data appears on the vSAN Observer HTML page. You might see a Ruby callstack on the RVC console. This problem occurs because the payload is too big for the SOAP request/response
. -
This problem occurs only when the SATP is VMW_SATP_LOCAL. In ESXi 6.0, if a device type SATP claim rule with SATP as VMW_SATP_LOCAL was added with an incorrectly formatted config option, then NMP and SATP plug-ins do not recognize the option and fail to discover the device when upgraded to a later ESXi release.
You might see log entries similar to the following:
2018-11-20T02:10:36.333Z cpu1:66089)WARNING: NMP: nmp_DeviceAlloc:2048: nmp_AddPathToDevice failed Bad parameter (195887111).
2018-11-20T02:10:36.333Z cpu1:66089)WARNING: ScsiPath: 6615: Plugin 'NMP' had an error (Bad parameter) while claiming path 'vmhba0:C0:T2:L0'. Skipping the path -
I/O requests on a NFS 4.1 datastore might occasionally cause an ESXi host to fail with a purple diagnostic screen due to a race condition in the request completion path.
-
The maximum memory heap size for any module is 3 GB. If the requested total memory size is more than 4 GB, an integer overflow occurs. As a result, the MAC addresses of the virtual machine interfaces are set to
00:00:00:00:00:00
and the VMXNET3 device might fail to start. In the vmkernel log, you might see an error similar to:Vmxnet3: 15204: Failed to allocate memory for tq 0
. -
Stateless boot operations of ESXi hosts might intermittently fail when using the vSphere Authentication Proxy service on a vCenter Server system.
-
A race condition in the
get-set firewall-rule
operations fordvfilter-generic-fastpath
might cause a buffer overflow and heap corruption. As a result, the ESXi host might fail with a purple diagnostic screen. -
After an interruption of a live migration involving one or more disks, such as a Storage vMotion or a vMotion without shared storage, the guest operating system might become unresponsive for up to 8 minutes, due to a rare race condition.
-
The size of the network msgBuffer is 1 MB but the main msgBuffer supports up to 25 000 lines of any length, which is at least 3 MB. If the network is slow, the write thread to the network msgBuffer is faster than the reader thread. This leads to loss of log messages with an alert:
lost XXXX log messages
. -
The vsanmgmt daemon on an ESXi host might run out of memory due to use of the cmmds-tool. This problem can cause the host to generate a
python-zdump
file. You might see the following entry in the vsanmgmt.log:No space left on device Failed to initialize CMMDS: Out of memory
. -
If the GID for remote direct memory access (RDMA) of a virtual machine using a paravirtualized RDMA (PVRDMA) network adapter changes, the virtual device might leak address handles associated with that GID. If GIDs are frequently changed, the PVRDMA device might allocate all address handles of the physical Host Channel Adapters (HCA). As a result, any attempts to allocate an address handle in the guest virtual machine fail.
-
Virtual machines might go into an invalid state, and you cannot complete tasks such as power on, create, delete, migrate, and reconfigure, depending on which locks are stuck in transactions as a result of an abrupt reboot of the ESXi host.
-
The vmkusb driver is updated to version 0.1-1.
-
The ixgben driver is updated to version 1.7.1.15-1.
-
The qlnativefc driver is updated to version 2.1.73.0-5.
-
The NVMe driver is updated to version 1.2.2.28-1.
-
The esxcli-nvme-plugin is updated to version 1.2.0.36.
-
The lsi_mr3 driver is updated to version 7.708.07.00-3.
-
The lsi_msgpt3 driver is updated to version 17.00.02.00-1.
-
The lsi_msgpt35 driver is updated to version 09.00.00.00-5.
-
The lsi_msgpt2 driver is updated to version 20.00.06.00-2.
-
The smartpqi driver is updated to version 1.0.1.553-28.
-
The i40en driver is updated to version 1.8.1.9-2.
-
The lsu-hp-hpsa-plugin driver is updated to version 2.0.0-16.
-
The bnxtnet driver is updated to version 20.6.101.7-23.
-
ESXi 6.5 Update 3 adds support for Broadcom 100G Network Adapters.
-
The nenic driver is updated to version 1.0.29.0-1.
-
If you use an ESXi host with a CD-ROM drive model DU-8A5LH, the CD-ROM drive might report an unknown
Frame Information Structure
(FIS) exception. The vmw_ahci driver does not handle the exception properly and creates repeatedPORT_IRQ_UNK_FIS
exception logs in the kernel. The repeated logs cause lack of physical CPU heartbeat and the ESXi host might fail with a purple diagnostic screen. -
In heavy I/O workloads, discrepancies in the AHCI specification of the ESXi native vmw_ahci driver and third-party AHCI controller firmware, such as of the DELL BOSS-S1 adaptor, might cause multiple errors, including storage outage errors or a purple diagnostic screen.
- You might not see details for qlnativefc driver worlds when running the ps command. The worldName fields have a null value.
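The NFS item earlier in this list notes that a datastore mounted by host name might not reappear after a reboot if name resolution fails intermittently. The commands below sketch the mount-by-host-name form and a follow-up check; nfs01.example.com, /export/vol1, and nfs-vol1 are placeholders.
# Mount an NFS export by host name (placeholder values shown).
esxcli storage nfs add --host nfs01.example.com --share /export/vol1 --volume-name nfs-vol1
# After a reboot, confirm that the volume is still mounted and accessible.
esxcli storage nfs list
# If the volume is missing, verify that the host name still resolves from the ESXi host.
nslookup nfs01.example.com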
-
Profile Name | ESXi-6.5.0-20190701001s-standard |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | July 2, 2019 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2109019, 2277230, 2262285, 2278591, 2341405 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
The Python third-party library is updated to version 3.5.6.
-
The ESXi userworld libxml2 library is updated to version 2.9.9.
-
The SNMP agent in an ESXi host might respond to SNMPv3 requests without a user name or password and authentication in the payload body.
-
The OpenSSH package is updated to version 7.9 P1.
-
When you clone multiple virtual machines simultaneously from vSphere on a vSphere Virtual Volumes datastore, a
setPEContext
VASA API call is issued. If all connections to the VASA Providers are busy or unavailable at the time of issuing thesetPEContext
API call, the call might fail and the cloning process stops responding. -
The OpenSSL package is updated to version openssl-1.0.2r.
-
The BusyBox package is updated to version 1.29.3. See the version-check sketch after this list.
-
When you export multiple virtual machines to one folder, the VMDK file names might conflict and be auto-renamed by the Web browser. This makes the exported OVF files invalid.
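As a rough post-patch check for the package updates listed above, the following commands print the versions of some of the affected userworld components from the ESXi shell. Binary names and paths can differ between builds, so treat this as a sketch rather than an exact procedure.
# OpenSSL version (expected to report 1.0.2r after this patch).
openssl version
# Python interpreter version (expected to report 3.5.6 after this patch).
python -V
# BusyBox version (printed in the first line of the usage output; expected to report 1.29.3).
busybox | head -n 1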
-
Profile Name | ESXi-6.5.0-20190701001s-no-tools |
Build | For build information, see Patches Contained in this Release. |
Vendor | VMware, Inc. |
Release Date | July 2, 2019 |
Acceptance Level | PartnerSupported |
Affected Hardware | N/A |
Affected Software | N/A |
Affected VIBs |
|
PRs Fixed | 2109019, 2277230, 2262285, 2278591, 2341405 |
Related CVE numbers | N/A |
- This patch updates the following issues:
-
The Python third-party library is updated to version 3.5.6.
-
The ESXi userworld libxml2 library is updated to version 2.9.9.
-
The SNMP agent in an ESXi host might respond to SNMPv3 requests without a user name or password and authentication in the payload body.
-
The OpenSSH package is updated to version 7.9 P1.
-
When you clone multiple virtual machines simultaneously from vSphere on a vSphere Virtual Volumes datastore, a
setPEContext
VASA API call is issued. If all connections to the VASA Providers are busy or unavailable at the time of issuing thesetPEContext
API call, the call might fail and the cloning process stops responding. -
The OpenSSL package is updated to version openssl-1.0.2r.
-
The BusyBox package is updated to version 1.29.3.
-
When you export multiple virtual machines to one folder, the VMDK file names might conflict and be auto-renamed by the Web browser. This makes the exported OVF files invalid. See the command-line export sketch after this list.
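The export item above concerns browser-side renaming of VMDK files during an export from the Host Client. As an illustration of an alternative path that writes the files directly to disk and is therefore not affected by browser renaming, the command below uses VMware OVF Tool; the vCenter Server address, inventory path, and credentials are hypothetical.
# Export one virtual machine to a local OVF package; ovftool prompts for the password.
ovftool "vi://administrator%40vsphere.local@vcenter.example.com/DC1/vm/app-vm-01" /tmp/exports/app-vm-01.ovf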
-
Known Issues
The known issues are grouped as follows.
vSAN Issues
- Hot plugging vSAN disk causes core dump
If you hot-plug a magnetic disk that was previously used as a vSAN disk, you might see a core dump file generated. This problem can occur when the disk arrival event happens while vSAN is scanning disks. Inconsistent disk data can cause a vSAN daemon core dump.
Workaround: When a vSAN daemon core dump occurs, the watchdog normally restarts the daemon automatically. Outstanding vSAN management tasks that take place during the core dump might fail. When the daemon restarts, all operations return to normal.
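In connection with the workaround above, the following sketch shows one way to check whether a vSAN daemon core dump was written and whether the management daemon came back after the watchdog restart. The directory and service name are typical for ESXi 6.5 but may vary by build, so treat them as assumptions.
# Userworld core dumps and zdump files are normally written under /var/core.
ls -l /var/core
# Confirm that the vSAN management daemon is running again after the watchdog restart.
/etc/init.d/vsanmgmtd status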