vCenter Server 6.7 Update 2 | APR 11 2019 | ISO Build 13010631
vCenter Server Appliance 6.7 Update 2 | APR 11 2019 | ISO Build 13010631
What's in the Release Notes
The release notes cover the following topics:
What's New
- With vCenter Server 6.7 Update 2, you can configure the property config.vpxd.macAllocScheme.method in the vCenter Server configuration file, vpxd.cfg, to allow sequential selection of MAC addresses from MAC address pools. The default option, random selection, does not change. Modifying the MAC address allocation policy does not affect MAC addresses for existing virtual machines.
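As a rough sketch, the dotted property path could map to nested XML elements in vpxd.cfg as shown below; the exact element layout and the value name for sequential allocation are assumptions, not confirmed syntax from this release:

```xml
<!-- Hypothetical vpxd.cfg fragment (assumption: the property
     config.vpxd.macAllocScheme.method maps to nested elements;
     the value "sequential" is also an assumption). -->
<config>
  <vpxd>
    <macAllocScheme>
      <method>sequential</method>
    </macAllocScheme>
  </vpxd>
</config>
```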
- vCenter Server 6.7 Update 2 adds a REST API that you can run from the vSphere Client to converge instances of vCenter Server Appliance with an external Platform Services Controller into vCenter Server Appliance with an embedded Platform Services Controller connected in Embedded Linked Mode. For more information, see the vCenter Server Installation and Setup guide.
- vCenter Server 6.7 Update 2 integrates the VMware Customer Experience Improvement Program (CEIP) into the converge utility.
- vCenter Server 6.7 Update 2 adds a SOAP API to track the status of encryption keys. With the API, you can see if the Crypto Key is available in a vCenter Server system, or is used by virtual machines, as a host key or by third-party programs.
- Precheck for upgrading vCenter Server systems: vCenter Server 6.7 Update 2 runs a precheck when upgrading a vCenter Server system to ensure upgrade compatibility of the VMware vCenter Single Sign-On service registration endpoints. The check notifies you of a possible mismatch with existing machine vCenter Single Sign-On certificates before the upgrade starts, and prevents upgrade interruptions that require a manual workaround and cause downtime.
- vSphere Auditing Improvements: vCenter Server 6.7 Update 2 improves VMware vCenter Single Sign-On auditing by adding events for the following operations: user management, login, group creation, identity source, and policy updates. The new feature is available only for vCenter Server Appliance with an embedded Platform Services Controller and not for vCenter Server for Windows or vCenter Server Appliance with an external Platform Services Controller. Supported identity sources are vsphere.local, Integrated Windows Authentication (IWA), and Active Directory over LDAP.
- Virtual Hardware Version 15: vCenter Server 6.7 Update 2 introduces Virtual Hardware Version 15 which adds support for creating virtual machines with up to 256 virtual CPUs. For more information, see VMware knowledge base articles 1003746 and 2007240.
- Simplified restore of backup files: If you cannot find the correct build to restore a backup file and enter incorrect backup details, the vCenter Server Appliance Management Interface in vCenter Server 6.7 Update 2 adds an error message in the Enter backup details page providing corresponding version details that help you to pick the correct build. You can also find version details in Backup > Activity.
- With vCenter Server 6.7 Update 2, you can use the Network File System (NFS) and Server Message Block (SMB) protocols for file-based backup and restore operations on the vCenter Server Appliance. The use of NFS and SMB protocols for restore operations is supported only by using the vCenter Server Appliance CLI installer.
- vCenter Server 6.7 Update 2 adds events for changes of permissions on tags and categories, vCenter Server objects and global permissions. The events specify the user who initiates the changes.
- With vCenter Server 6.7 Update 2, you can create alarm definitions to monitor the backup status of your system. By setting a Backup Status alarm, you can receive email notifications, send SNMP traps, and run scripts triggered by the events Backup job failed and Backup job finished successfully. A Backup job failed event sets the alarm status to RED and a Backup job finished successfully event resets the alarm to GREEN.
- With vCenter Server 6.7 Update 2, in clusters with the Enterprise edition of VMware vSphere Remote Office Branch Office that are configured to support vSphere Distributed Resource Scheduler in maintenance mode, when an ESXi host enters maintenance mode, all virtual machines running on the host are moved to other hosts in the cluster. Automatic VM-Host affinity rules ensure that the moved virtual machines return to the same ESXi host when it exits maintenance mode.
- With vCenter Server 6.7 Update 2, events related to adding, removing, or modifying user roles display the user that initiates the changes.
- With vCenter Server 6.7 Update 2, you can publish your .vmtx templates directly from a published library to multiple subscribers in a single action instead of performing a sync from each subscribed library individually. The published and subscribed libraries must be in the same linked vCenter Server system, regardless of whether it is on-premises, in the cloud, or hybrid. Workflows for other templates in content libraries do not change.
- vCenter Server 6.7 Update 2 adds an alert that specifies the installer version in the Enter backup details step of a restore operation. If the installer and backup versions are not identical, you see a prompt indicating which matching build to download, such as Launch the installer that corresponds with version 6.8.2 GA.
- vCenter Server 6.7 Update 2 adds support for a Swedish keyboard in the vSphere Client and VMware Host Client. For known issues related to the keyboard mapping, see VMware knowledge base article 2149039.
- With vCenter Server 6.7 Update 2, the vSphere Client provides a Check host health after installation check box that allows you to opt out of vSAN health checks during the upgrade of an ESXi host by using vSphere Update Manager. Before this option was introduced, if vSAN issues were detected during an upgrade, the entire cluster remediation failed and the ESXi host that was upgraded stayed in maintenance mode.
- vSphere Health Alarm and Categories: vCenter Server 6.7 Update 2 adds an alarm in the vSphere Client when vSphere Health detects a new issue in your environment and prompts you to resolve the issue. Health check results are now grouped in categories for better visibility.
- With vCenter Server 6.7 Update 2, you can now publish your VM templates managed by Content Library from a published library to multiple subscribers. You can trigger this action from the published library, which gives you greater control over the distribution of VM templates. The published and subscribed libraries must be in the same linked vCenter Server system, regardless of whether it is on-premises, in the cloud, or hybrid. Workflows for other templates in content libraries do not change.
Earlier Releases of vCenter Server 6.7
Features and known issues of vCenter Server are described in the release notes for each release. Release notes for earlier releases of vCenter Server 6.7 are:
For internationalization, compatibility, installation and upgrade, open source components, and product support notices, see the VMware vCenter Server 6.7 Update 1 Release Notes.
Product Support Notices
- VMware vSphere Flash Read Cache is being deprecated. While this feature continues to be supported in the vSphere 6.7 generation, it will be discontinued in a future vSphere release. As an alternative, you can use the vSAN caching mechanism or any VMware certified third-party I/O acceleration software listed in the VMware Compatibility Guide.
- vCenter Server 6.7 Update 2 does not support Digest Algorithm 5 (MD5) and you cannot set the MD5 authentication option by using the
snmp.set
command.
Upgrade Notes for This Release
IMPORTANT: If you use the Hybrid Linked Mode (HLM) capability, contact the VMware Support team (Cloud Service Engineering team) before upgrading to vCenter Server 6.7 Update 2.
For more information on vCenter Server versions that support upgrade to vCenter Server 6.7 Update 2, see VMware knowledge base article 67077.
Patches Contained in This Release
This release of vCenter Server 6.7 Update 2 delivers the following patches. See the VMware Patch Download Center for more information on downloading patches.
Security Patch for VMware vCenter Server 6.7 Update 2
Third-party product fixes (for example: JRE, tcServer). This patch is applicable for vCenter Server for Windows, Platform Services Controller for Windows, and vSphere Update Manager.
NOTE: This patch updates only the JRE, to version 1.8.0_202.
For vCenter Server and Platform Services Controller for Windows
Download Filename | VMware-VIMPatch-T-6.7.0-13010631.iso
Build | 13010631
Download Size | 40.7 MB
md5sum | edcd2f2a9294fffcbec32150f10a0005
sha1checksum | a66c23f958a83542e2bd33681d838b22630ed953
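Before mounting the ISO, you can verify the download against the sums above. A minimal sketch follows; the helper function name is ours, not a VMware tool, and md5sum and sha1sum are standard coreutils commands:

```shell
# Sketch: compare a downloaded ISO against the published checksums.
verify_iso() {
  iso="$1"; md5_expected="$2"; sha1_expected="$3"
  # Extract only the hash field from each tool's output.
  md5_actual=$(md5sum "$iso" | awk '{print $1}')
  sha1_actual=$(sha1sum "$iso" | awk '{print $1}')
  if [ "$md5_actual" = "$md5_expected" ] && [ "$sha1_actual" = "$sha1_expected" ]; then
    echo "checksums OK"
  else
    echo "checksum MISMATCH"
  fi
}

# Example invocation with the values published for this patch:
# verify_iso VMware-VIMPatch-T-6.7.0-13010631.iso \
#   edcd2f2a9294fffcbec32150f10a0005 \
#   a66c23f958a83542e2bd33681d838b22630ed953
```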
These vCenter Server components depend on the JRE and must be patched:
- vCenter Server
- Platform Services Controller
- vSphere Update Manager
Download and Installation
You can download this patch by going to the VMware Patch Download Center and choosing VC from the Select a Product drop-down menu.
- Mount the VMware-VIMPatch-T-6.7.0-13010631.iso file to the system where the vCenter Server component is installed.
- Double-click ISO_mount_directory/autorun.exe.
- In the vCenter Server Java Components Update wizard, click Patch All.
Full Patch for VMware vCenter Server Appliance 6.7 Update 2
Product patch for the vCenter Server Appliance containing VMware software fixes, security fixes, and third-party product fixes (for example, JRE and tcServer).
This patch is applicable to the vCenter Server Appliance and Platform Services Controller Appliance.
For vCenter Server and Platform Services Controller Appliances
Download Filename | VMware-vCenter-Server-Appliance-6.7.0.30000-13010631-patch-FP.iso
Build | 13010631
Download Size | 1996.6 MB
md5sum | 2f09c95d416c7d2ba6d94b032b240ef9
sha1checksum | dd8053b955093cd512408099d4d9ac618668e28c
Download and Installation
You can download this patch by going to the VMware Patch Download Center and choosing VC from the Select a Product drop-down menu.
- Attach the VMware-vCenter-Server-Appliance-6.7.0.30000-13010631-patch-FP.iso file to the vCenter Server Appliance CD or DVD drive.
- Log in to the appliance shell with your root credentials and run the following commands:
- To stage the ISO:
software-packages stage --iso
- To see the staged content:
software-packages list --staged
- To install the staged rpms:
software-packages install --staged
For more information on using the vCenter Server Appliance shells, see VMware knowledge base article 2100508.
For more information on patching the vCenter Server Appliance, see Patching the vCenter Server Appliance.
For more information on staging patches, see Stage Patches to vCenter Server Appliance.
For more information on installing patches, see Install vCenter Server Appliance Patches.
For issues resolved in this patch, see Resolved Issues.
For Photon OS updates, see VMware vCenter Server Appliance Photon OS Security Patches.
For more information on patching using the Appliance Management Interface, see Patching the vCenter Server Appliance by Using the Appliance Management Interface.
Release Notes Change Log
This section describes updates to the Release Notes.
Resolved Issues
The resolved issues are grouped as follows.
vMotion Issues
- vSphere vMotion operations for encrypted virtual machines might fail after a restart of the vCenter Server system
After a restart of a vCenter Server system, compatibility check errors might fail vSphere vMotion operations for encrypted virtual machines. You might see logs similar to:
RuntimeFault.summary Session does not have Cryptographer.RegisterHost privilege.
This issue is resolved in this release.
- Power-on or vSphere vMotion operations with virtual machines might fail with an infinite loop error
Power-on or vSphere vMotion operations with virtual machines might fail with an infinite loop error if the .vmx configuration file is corrupted.
This issue is resolved in this release.
- The disk mode of a virtual machine might change after migration by using vSphere Storage vMotion
If you migrate a virtual machine by using Storage vMotion, the disk mode of that virtual machine might change without a warning. For instance, from Independent-Persistent to Dependent.
This issue is resolved in this release.
- Migrating a virtual machine might fail due to inability to access the parent disk
The migration of a virtual machine might fail with the FileNotFound error during the network file copy process when the destination host has access to the shared child disk of the source host but cannot access the parent disk.
This issue is resolved in this release.
- Virtual machine migration operations such as instant clone provisioning might fail due to a race condition
Due to a rare race condition between operations that create a namespace database with solutions such as VMware AppDefense, and migration of virtual machines by using Storage vMotion or Enhanced vMotion Compatibility, the migration might fail.
This issue is resolved in this release.
Backup and Restore Issues
- Backup of the VMware vCenter Server Appliance might not start if the vmonapi service cannot start while a proxy is configured or not responsive
While a proxy is configured, or if a proxy is not responsive, the vmonapi service, which provides the API to start and stop vCenter Server services, is not running. This blocks backups of the vCenter Server Appliance.
This issue is resolved in this release.
Auto Deploy Issues
- VMware vSphere Auto Deploy Discovered Hosts tab might display an error after creating or editing a deployment rule
When you prepare your system to provision ESXi hosts with vSphere Auto Deploy to network boot, if a host does not match any deployment rule during configuration, an error might be triggered when you create a rule later. As a result, you might not see the host on the Discovered Hosts tab, and the error Unable to retrieve deployed hosts: name 'item' is not defined is displayed.
This issue is resolved in this release.
Guest OS Issues
- You cannot set a primary virtual NIC
You cannot customize the vCenter Server Appliance guest operating system to set a virtual NIC as primary.
This issue is resolved in this release. With this fix, you can customize a virtual NIC as the primary virtual NIC when it is the first NIC and has a static IPv4 address and a gateway configured.
- Customization of virtual machines by using Microsoft Sysprep on vSphere 6.7 might fail and virtual machines stay in customization state
Customization of virtual machines by using Microsoft Sysprep on vSphere 6.7 might fail if Windows virtual machines use disposable disks. Sysprep might change the drive letter of the disposable disks during customization. As a result, the virtual machines remain in customization state and become unresponsive.
This issue is resolved in this release.
Tools Issues
- The c:\sysprep directory might not be deleted after Windows guest customization
The temporary c:\sysprep directory might not be deleted after you run Windows guest customization.
This issue is resolved in this release. With this fix, all temporary files and folders are deleted by using the Windows API after the virtual machine reboots.
- VMware Open Virtualization Format (OVF) Tool might fail to overwrite all files in a destination folder
Even when you use the --overwrite option of the OVF Tool, existing files in the destination folder might not be deleted or overwritten, and only a manual delete works.
This issue is resolved in this release.
- You might not see the configured CPU shares when exporting a virtual machine to OVF
When you export a virtual machine to OVF by using the OVF Tool, the configured CPU shares might not be exported.
This issue is resolved in this release.
Storage Issues
- Bulk virtual machine provisioning requests with the ResourceLeaseDurationSec parameter passed through VMware vSphere Storage DRS might fail
When multiple virtual machine provisioning requests pass through vSphere Storage DRS with the ResourceLeaseDurationSec parameter specified in the placement spec, vSphere Storage DRS provides initial placement recommendations and allocates space for all of them, blocking the use of datastore space. This might result in provisioning failures.
This issue is resolved in this release.
- vCenter Server might stop responding when adding a fault message in the vSphere Storage DRS
vCenter Server might stop responding when the vpxd service tries to access and add a fault message of a decommissioned or removed datastore in vSphere Storage DRS.
This issue is resolved in this release.
- A wave of Config Update events triggered by a vSphere API for Storage Awareness call might cause an out of memory error or irregular API calls
Each Config Update event triggers a full sync with a vSphere API for Storage Awareness provider. As a result, sync threads pile up. If the number of Config Update events is large, the result is an out-of-memory error or irregular triggers of periodic getEvents API calls.
This issue is resolved in this release.
- The vpxd service might fail when the vSphere Storage DRS provides an initial placement operation
One of the internal data structures in the vSphere Storage DRS initial placement workflow might be overwritten with a NULL value, which might result in a null pointer reference and a vpxd service failure.
This issue is resolved in this release.
- ESXi hosts with visibility to RDM LUNs might take a long time to start or experience delays during LUN rescans
A large number of RDM LUNs might cause an ESXi host to take a long time to start or experience delays while performing a LUN rescan. If you use APIs such as MarkPerenniallyReserved or MarkPerenniallyReservedEx, you can mark a specific LUN as perennially reserved, which improves the start time and rescan time of the ESXi hosts.
This issue is resolved in this release.
- Expanding the disk of a virtual machine by using VMware vRealize Automation might fail with an error for insufficient disk space on a datastore
If vSphere Storage DRS does not provide a recommendation while you run an operation to expand the disk of a virtual machine by using VMware vRealize Automation, the operation might fail because of insufficient space on the current datastore. This issue happens when vSphere Storage DRS picks a wrong matching disk for the operation. As a result, you might see the error Insufficient disk space on datastore.
This issue is resolved in this release.
- vSphere Storage DRS tasks might take long or time out
vSphere Storage DRS tasks might take long or time out due to slow or delayed response from the vSphere Replication Management server.
This issue is resolved in this release.
- Provisioning of virtual machines might fail if the same replication group is used for some or all virtual machine files and disks
VMware vSphere Storage Policy Based Management (SPBM) might not filter the unique replication group ID during a queryReplicationGroup call to an API for Storage Awareness (VASA) provider. As a result, provisioning of virtual machines might fail if the same replication group is used for some or all virtual machine files and virtual disks.
This issue is resolved in this release.
- Posting of VMware vSphere Virtual Volumes compliance alarms for a StorageObject type to a vCenter Server system might fail
If you use an API for Storage Awareness (VASA) provider, posting of vSphere Virtual Volumes compliance alarms for a StorageObject type to a vCenter Server system might fail due to a mapping mismatch.
This issue is resolved in this release.
vCenter Server, vSphere Web Client, and vSphere Client Issues
- You cannot add permissions for a user or group beyond the first 200 security principals in an Active Directory domain by using the vSphere Client
If you grant permissions to a user or group from an Active Directory domain by using the vSphere Client, the search for security principals is limited to 200 and you cannot add users to any principal beyond that list.
This issue is resolved in this release.
- The vpxd service might fail to start if certificates in the TRUSTED_ROOTS store exceed 20
When the certificates in the TRUSTED_ROOTS store on a vCenter Server system pile up to more than 20, the vpxd service might fail to start. The vSphere Web Client and vSphere Client display the following error:
[400] An error occurred while sending an authentication request to the vCenter Single Sign-On server.
This issue is resolved in this release. With this fix, the TRUSTED_ROOTS store can support up to 30 certificates in both vCenter Server for Windows and the vCenter Server Appliance.
- Firstboot might fail during deployment of vCenter Server Appliance using an external Platform Services Controller due to a lag in the time synchronization
Firstboot might fail during the deployment of a vCenter Server Appliance using an external Platform Services Controller if the time between the Platform Services Controller node and the vCenter Server system is not synchronized.
This issue is resolved in this release.
- User login and logout events might not contain the IP address of the user
If you log in to a vCenter Server system by using either the vSphere Web Client or the vSphere Client, the login event might display 127.0.0.1 instead of the IP address of the user. In addition, you might not see a record of vCenter Single Sign-On configuration changes in the Events view.
This issue is resolved in this release. The fix adds a new audit log file in the vCenter Single Sign-On logs. You can also see the new events in the Monitor > Events view in the vSphere Web Client and the vSphere Client.
- The vCenter Server daemon service vpxd might fail to start with an error for invalid descriptor index
The vpxd service might fail to start with an error for invalid descriptor index in the parameter VPX_HCI_CONFIG_INFO.LOCKDOWN_MODE.
This issue affects environments on vCenter Server for Windows 6.7 Update 1 or later that use an MS SQL Database server. If you create a hyperconverged infrastructure cluster by using the Quickstart workflow and restart the vCenter Server system, vpxd might not start due to a failure with data handling from the SQL database server.
You might see similar logs in the vpxd.log:
[VdbStatement::ResultValue:GetValue] Error to get value at pos: 1, ctype: 4 for SQL "VPX_HCI_CONFIG_INFO.LOCKDOWN_MODE" Init failed. VdbError: Error[VdbODBCError] (-1) ODBC error: (07009) - [Microsoft][SQL Server Native Client 11.0]Invalid Descriptor Index Failed to intialize VMware VirtualCenter. Shutting down
This issue is resolved in this release.
Virtual Machines Management Issues
- Cloning a virtual machine from a snapshot of a template might fail with an error
The error A general system error occurred: missing vmsn file appears when you clone a virtual machine from a snapshot of a template.
This issue is resolved in this release.
- An internal error might occur in alarm definitions of the vSphere Web Client
An internal error might occur when you try to edit a predefined alarm containing xxx Exhaustion on xxx, for example Autodeploy Disk Exhaustion on xxx, and add or change the alarm actions.
This issue is resolved in this release.
Security Issues
- Update to VMware Postgres
VMware Postgres is updated to version 9.6.11.
- Numbering of firewall rules might unexpectedly change if you reorder the rules
If you create more than 9 firewall rules in a vCenter Server Appliance and change their order, placing a rule with double-digit numbering among rules with one-digit numbering, the numbering might change. For instance, if you move rule number 10, such as 10 RETURN all -- X.X.X.10 anywhere, to position 2, the numbering might change to 2 RETURN all -- X.X.X.10 anywhere.
This issue is resolved in this release.
- Update to JRE
Oracle (Sun) JRE is updated to version 1.8.202.
- A composed URL might display Apache server details
If you compose a URL such as https://:9443/vsphere-client/inventory-viewer/locales/help, you might see Apache server details such as the version.
This issue is resolved in this release.
- Upgrade of Apache httpd
Apache httpd is updated to version 2.4.37 to resolve a security issue with identifier CVE-2018-11763.
- Update to OpenSSL
The OpenSSL package is updated to version openssl-1.0.2q.
- Update to the libxml2 library
The ESXi userworld libxml2 library is updated to version 2.9.8.
- Update to the OpenSSH version
OpenSSH is updated to version 7.4p1-7.
Miscellaneous Issues
- Attempts to log in to a vCenter Server system after an upgrade to vCenter Server 6.7 might fail with a credentials validation error
After an upgrade of your system to vCenter Server 6.7, if you try to log in to the system by using either the vSphere Web Client or the vSphere Client, and a security token or smart card, the login might fail with the error Unable to validate the submitted credential.
This issue is resolved in this release.
- The vCenter Server daemon service vpxd might fail to start after a server reboot
After a server reboot, while vpxd loads the inventory from the database, it performs a workload calculation based on the inventory size and the number of available CPU cores. Certain rare combinations of these inputs might lead to an incorrect calculation, causing an error that prevents the service from starting. You might see the following error:
Init failed: The data being fetched is NULL at column position 0
This issue is resolved in this release. A previous workaround involved disabling one or more CPU cores in the vCenter Server Appliance to fix the calculation. You can undo the workaround after you apply this update.
- The vmdird-syslog.log file is overfilled with log messages when migrating a vCenter Server or a Platform Services Controller instance from Windows to vCenter Server Appliance
When migrating a vCenter Server or a Platform Services Controller instance from Windows to vCenter Server Appliance, the entry cn=DSE Root is replicated with no security descriptor. As a result, the vmdird-syslog.log file is overfilled with No SD found for cn=DSE Root messages.
This issue is resolved in this release. This fix changes the log level to verbose and suppresses the log messages after the migration from Windows to vCenter Server Appliance.
- The vCenter Server daemon service vpxd might fail if you log out immediately after initiating a FileManager operation
If you log out immediately after initiating a FileManager operation such as delete, move, or copy, the vpxd service might fail, because the task might not be picked up for execution from the task queue.
This issue is resolved in this release.
CIM and API Issues
- API queries might time out when many objects are associated with tags
API calls such as listAttachedObjects, listAttachedObjectsOnTags, and listAllAttachedObjectsOnTags might take a very long time to complete and ultimately time out when many objects are associated with each tag. This is because previously, separate remote procedure calls were sent to the vmware-vpxd service to perform permission checks on each vCenter Server object.
This issue is resolved in this release. With this fix, the tagging APIs make batched AuthZ calls to vmware-vpxd to perform permission checks on all the associated objects.
Install, Upgrade and Migration Issues
- Migration of vCenter Server for Windows to vCenter Server Appliance might stop at 75% if system time is not synchronized with an NTP server
During stage 2 of a migration from vCenter Server for Windows to vCenter Server Appliance, if the vCenter Server system time is not synchronized with an NTP server, the session might time out and the migration stops without a warning. The installer interface might indefinitely display progress at 75%.
This issue is resolved in this release.
- Upgrading vCenter Server for Windows to 6.7 Update 2 from earlier versions of the 6.7 line might fail
If you try to upgrade a vCenter Server for Windows system with an external SQL Server that uses Windows authentication to 6.7 Update 2 from an earlier version of the 6.7 line, the operation might fail.
This issue is resolved for upgrades from vCenter Server 6.7 Update 1 to 6.7 Update 2. For upgrades from 6.7.0 or 6.7.0.x versions to 6.7 Update 2, see VMware knowledge base article 67561.
- vCenter Server upgrades might fail due to compatibility issue between VMware Tools version 10.2 and later, and ESXi version 6.0 and earlier
VMware Tools version 10.2 and later might not be compatible with ESXi version 6.0 and earlier. As a result, upgrades of vCenter Server systems might fail.
This issue is resolved in this release. If you already face the issue, either update the ESXi host to version 6.7 or roll back the VMware Tools version to 10.1.5. When the upgrade of the vCenter Server system is complete, upgrade both VMware Tools and the ESXi host.
Convergence Issues
- Certificates might be lost after a convergence of a vCenter Server instance with an external Platform Services Controller to a vCenter Server instance with an embedded Platform Services Controller
Key Management Server (KMS) and Certificate Authority (CA) certificates might be lost after a convergence of a vCenter Server instance with an external Platform Services Controller to a vCenter Server instance with an embedded Platform Services Controller. You might see a warning similar to:
Not connected (Trust not established. View Details)
This issue is resolved in this release.
- The vCenter Server Convergence Tool might fail to convert an external Platform Services Controller to an embedded Platform Services Controller due to conflicting IP address and FQDN
If you have configured an external Platform Services Controller with an IP address as an optional FQDN field during the deployment, the vCenter Server Convergence Tool might fail to convert the external Platform Services Controller to an embedded Platform Services Controller because of a name conflict.
This issue is resolved in this release.
- Convergence of a vCenter Server instance with an external Platform Services Controller to a vCenter Server instance with an embedded Platform Services Controller might fail with an error for missing certificates
Convergence of a vCenter Server instance with an external Platform Services Controller to a vCenter Server instance with an embedded Platform Services Controller might fail with an error such as No certificates were found for entry [location_password_default] of type [Secret Key].
This issue is resolved in this release.
- The converge.log file might miss debug level logs when converging a vCenter Server instance with an external Platform Services Controller to a vCenter Server instance with an embedded Platform Services Controller
When you run the vscaConvergeCli command with the logging level set to verbose, the logging level for the converge-util is set to debug, but the converge.log file might not record the debug log messages. As a result, when troubleshooting, you cannot see the expected level of detail in the log file.
This issue is resolved in this release.
Networking Issues
- You might see a message that an upgrade of VMware vSphere Distributed Switch is running even after the upgrade is complete
You might see the message An Upgrade for the vSphere Distributed switch in datacenter is in progress even after the upgrade is complete. This happens if no host member is available in the vSphere Distributed Switch configuration, or if a host member has failed to upgrade several times.
This issue is resolved in this release. If you already face the issue that no host member is available in the VDS, you must do the following:
- From the PostgreSQL database, run the command update vpx_dvs upgrade_status set upgrade_status=0;.
- From the appliance shell, run the command vmon-cli -r vpxd.
- vSphere Distributed Switch might become out of sync for some ESXi hosts after upgrade to vSphere Distributed Switch 6.6
When you migrate a virtual machine that uses a vSAN datastore from an ESXi host in one data center to an ESXi host in another data center, the port on the source distributed switch might not be released in the vCenter Server system. As a result, the vSphere Distributed Switch might become out of sync when you upgrade to vSphere Distributed Switch 6.6.
This issue is resolved in this release.
- You cannot migrate virtual machines by using vSphere vMotion between ESXi hosts with NSX managed virtual distributed switches (N-VDS) and vSphere Standard Switches
With vCenter Server 6.7 Update 2, you can migrate virtual machines by using vSphere vMotion between ESXi hosts with N-VDS and vSphere Standard Switches. To enable the feature, you must upgrade your vCenter Server system to vCenter Server 6.7 Update 2 and ESXi 6.7 Update 2 on both source and destination sites.
This issue is resolved in this release.
Server Configuration Issues
- You cannot restart the vpxd service when the KMS certificate is expired or close to the expiration date
When the KMS certificate is expired or close to the expiration date, you cannot restart the vpxd service and the vCenter Server system upgrade might fail.
This issue is resolved in this release.
vSAN Issues
- Unable to start vSAN health service because health configuration file is empty
The vSAN health configuration file can become corrupted due to an exhausted disk quota or a stopped thread. When this happens while the vSAN health configuration is being set, the health service cannot start.
This issue is resolved in this release.
Known Issues
The known issues are grouped as follows.
vCenter Server, vSphere Web Client, and vSphere Client Issues
- You might fail to log in to a vCenter Server system due to a failure of the VMware Security Token Service (vmware-stsd)
The vmware-stsd service fails in certain customer environments if you add Active Directory Integrated Windows Authentication (IWA) as an identity source. The addition of IWA as an identity source might generate core dumps that fill up the /storage/core directory and eventually cause login failures to the vCenter Server system.
In the vmware-sts-idmd.log file, you might see entries similar to:
[2018-11-02T13:28:42.168-07:00 IDM Shutdown INFO ] [IdmServer] Stopping IDM Server...
[2018-11-02T13:28:42.523-07:00 IDM Shutdown INFO ] [IdmServer] IDM Server has stopped
[2018-11-02T13:29:38.270-07:00 IDM Startup INFO ] [IdmServer] Starting IDM Server...
[2018-11-02T13:29:38.272-07:00 IDM Startup INFO ] [IdmServer] IDM Server has started
[2018-11-02T13:39:40.913-07:00 IDM Shutdown INFO ] [IdmServer] Stopping IDM Server...
[2018-11-02T13:39:40.913-07:00 IDM Shutdown INFO ] [IdmServer] IDM Server has stopped
In the /var/log/vmware/sso/utils/vmware-stsd.err log, you see entries similar to:
Nov 02, 2018 1:29:40 PM org.apache.catalina.startup.Catalina load
INFO: Initialization processed in 663 ms
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/vmware-sso/vmware-sts/webapps/ROOT/WEB-INF/lib/log4j-slf4jimpl-2.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/vmware-sso/vmware-sts/webapps/ROOT/WEB-INF/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Nov 02, 2018 1:29:50 PM org.apache.catalina.startup.Catalina start
INFO: Server startup in 10097 ms
Service killed by signal 11
Workaround: Remove the vCenter Server system from the Active Directory domain and add the LDAP Server as identity source. For more information, see VMware knowledge base article 60161.
Convergence Issues
- You might not see load balancer details after vCenter Server system convergence
If you converge your existing system with a load balancer configured for several external Platform Services Controllers to the embedded deployment model, you might not see load balancer details in the System Configuration tab of the vSphere Client. As a result, you cannot decommission the load balancer.
Workaround: Manually decommission each external Platform Services Controller.
vMotion Issues
Upgrade and Installation Issues
- Upgrade to vCenter Server 6.7 fails during firstboot due to a PostgreSQL sequence owner error
Upgrade to vCenter Server 6.7 fails during firstboot because the sequence owner is postgres instead of vc. You might see this error: vCenter Server Firstboot Failure – must be owner of relation vpx_sn_vdevice_backing_rel_seq.
Workaround: In vCenter Server 6.7 Update 2, the message must be owner of relation vpx_sn_vdevice_backing_rel_seq is replaced with the message Source vCenter Server schema validation found a sequences issue, which points to VMware knowledge base article 55747 for more information.
- Upgrade of vCenter Server for Windows might fail with an error that uninstallation of 5.5 products failed
If you reconfigure an embedded deployment node of vCenter Server for Windows to an external deployment model and repoint to the new external Platform Services Controller, the upgrade of your vCenter Server system from vCenter Server 6.5 Update 2d to 6.7 Update 2 might fail with an error similar to Uninstallation of 5.5 products failed with error code '1603'.
Workaround: Restart your vCenter Server system after the reconfiguration and retry the upgrade.
- You cannot use the GUI installer for vSphere 6.7 Update 2 on virtual machines with Ubuntu 14.04 OS
You cannot use the GUI installer for vSphere 6.7 Update 2 on virtual machines with Ubuntu 14.04 OS, because the libnss3 package is not installed by default.
Workaround: Install the latest version of libnss3 by executing the command sudo apt-get install libnss3.
- After upgrade to vCenter Server 6.7 Update 2 from 6.0.x, the Hardware Status tab in the vSphere Web Client might display no host data
After an upgrade to vCenter Server 6.7 Update 2 from vCenter Server 6.0.x, you might not be able to see hardware details for ESXi hosts in the Hardware Status tab of the vSphere Web Client. Instead, a No host data available error
is displayed.
Workaround: For more information on the issue, see VMware knowledge base article 2148520.
Tools Issues
Backup and Restore Issues
- Backups with third-party software might fail due to non-alphanumeric characters in the names of source datastores or datacenters
In vCenter Server 6.7 systems, backups with third-party software might be unsuccessful if the name of the source datastore or datacenter contains non-alphanumeric characters. Changes in the encoding cause download and upload of files to fail.
Workaround: Rename the datastores and datacenters that contain non-alphanumeric characters in the name.
Networking Issues
- A virtual machine NIC gets a non-sequential list of MAC addresses, even when you allow sequential selection of MAC addresses from MAC address pools
If you create a base virtual machine with sequential selection of MAC addresses, after a restart of vCenter Server, the order of the network adapters might be nonsequential. If you make a clone from the base virtual machine, the MAC addresses of the clone might also be nonsequential.
Workaround: Open the Edit Settings dialog of the base virtual machine and click OK to make sure that the network adapters are sorted as expected before cloning other virtual machines.
Known Issues from Prior Releases
To view a list of previous known issues, click here.
The earlier known issues are grouped as follows.
CLI Issues
- Views might switch from the appliance shell to the Direct Console User Interface during an upgrade of the vCenter Server Appliance by using the CLI installer
During an upgrade of the vCenter Server Appliance by using the CLI installer, views might switch intermittently from the appliance shell to the Direct Console User Interface. Restarts of the applmgmt service during the update cause the issue.
Workaround: Switch to the appliance shell tty to monitor the progress.
Internationalization Issues
- A VMkernel network using an NSX logical switch might fail for stateless hosts if you register vCenter Server with non-ASCII characters on VMware NSX Manager
If you register vCenter Server to an NSX Manager with a password containing characters from the extended ASCII codes between 128 and 255, or non-ASCII characters, a VMkernel network using an NSX logical switch might be lost after deploying a stateless host.
Workaround: Register vCenter Server to an NSX Manager with a password containing only ASCII characters.
- An ESXi host might stop responding if you add a vSphere Distributed Switch named with a string containing tens of non-ASCII characters to a physical adapter in a hyper-converged infrastructure (HCI) cluster
If you name a VDS with a string containing more than 40 characters from the extended ASCII codes between 128 and 255, or more than 26 non-ASCII characters, ESXi hosts might stop responding when you attempt to add the VDS to a physical adapter during the configuration of a hyper-converged infrastructure (HCI) cluster.
Workaround: When naming a VDS, use no more than 40 characters from the extended ASCII codes and no more than 26 non-ASCII characters.
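As an illustration of those limits, a pre-check along the following lines can flag unsafe names before you create the switch. The split between extended-ASCII characters (code points 128-255) and other non-ASCII characters is an interpretation of the limits above, not official validation logic.

```python
def vds_name_is_safe(name: str) -> bool:
    # Count characters in the extended ASCII range (code points 128-255)
    # separately from other non-ASCII characters (code points above 255).
    extended = sum(1 for ch in name if 128 <= ord(ch) <= 255)
    other_non_ascii = sum(1 for ch in name if ord(ch) > 255)
    return extended <= 40 and other_non_ascii <= 26

# A short ASCII name is always safe; long runs of extended-ASCII or
# other non-ASCII characters exceed the limits described above.
assert vds_name_is_safe("Prod-VDS-01")
assert not vds_name_is_safe("é" * 41)      # 41 extended-ASCII characters
assert not vds_name_is_safe("日" * 27)     # 27 non-ASCII characters
```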
Tools Issues
- The OVF Tool might fail to verify an SSL thumbprint if you use CLI
If you set the SSL thumbprint value by using the CLI, the OVF Tool might fail to verify the thumbprint. The issue is not observed if you use the Direct Console User Interface (DCUI).
Workaround: Use any of the following alternatives:
- In the DCUI, specify the thumbprint in the ssl_certificate_verification section.
- In the DCUI, ignore the certificate thumbprint for ESXi by setting ssl_certificate_verification verification_mode to False.
- Ignore all certificate thumbprints globally by using the command-line parameter --no-ssl-certificate-verification.
- Wait for the CLI prompt to accept the thumbprint that it receives from the source.
Installation, Upgrade, and Migration Issues
- ESXi installation or upgrade fails due to memory corruption on HPE ProLiant DL380/360 Gen 9 servers
The issue occurs on HPE ProLiant DL380/360 Gen 9 servers that have a Smart Array P440ar storage controller.
Workaround: Set the server BIOS mode to UEFI before you install or upgrade ESXi.
- After an ESXi upgrade to version 6.7 and a subsequent rollback to version 6.5 or earlier, you might experience failures with error messages
You might see failures and error messages when you perform one of the following on your ESXi host after reverting to 6.5 or earlier versions:
- Install patches and VIBs on the host
Error message: [DependencyError] VIB VMware_locker_tools-light requires esx-version >= 6.6.0
- Install or upgrade VMware Tools on VMs
Error message: Unable to install VMware Tools.
After the ESXi rollback from version 6.7, the new tools-light VIB does not revert to the earlier version. As a result, the VIB becomes incompatible with the rolled-back ESXi host, causing these issues.
Workaround: Perform the following to fix this problem.
SSH to the host and run one of these commands:
esxcli software vib install -v /path/to/tools-light.vib
or
esxcli software vib install -d /path/to/depot/zip -n tools-light
Where the VIB and the depot ZIP match the currently running ESXi version.
Note: For VMs that already have the new VMware Tools installed, you do not have to revert VMware Tools when the ESXi host is rolled back.
- Special characters backslash (\) or double quote (") used in passwords cause the installation pre-check to fail
If the special characters backslash (\) or double quote (") are used in ESXi, vCenter Single Sign-On, or operating system password fields in the vCenter Server Appliance installation templates, the installation pre-check fails with the following error:
Error message: com.vmware.vcsa.installer.template.cli_argument_validation: Invalid \escape: line ## column ## (char ###)
Workaround: If you include the special characters backslash (\) or double quote (") in the passwords for ESXi, operating systems, or vCenter Single Sign-On, you must escape them. For example, the password pass\word should be escaped as pass\\word.
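Because the installer templates are JSON, the escaping rule above is ordinary JSON string escaping, which you can check with any JSON library. A small illustration follows; the single-field template fragment is hypothetical, reduced from a real template for clarity.

```python
import json

# A JSON template must contain \\ for a literal backslash in a password.
raw_password = r"pass\word"                      # the password you intend to use
template_text = '{"password": "pass\\\\word"}'   # what the template file must contain

# Parsing the escaped form yields the intended password.
assert json.loads(template_text)["password"] == raw_password

# json.dumps applies the escaping automatically, so generating templates
# programmatically avoids the pre-check failure described above.
assert json.dumps({"password": raw_password}) == template_text
```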
- Windows vCenter Server 6.7 installer fails when non-ASCII characters are present in password
The Windows vCenter Server 6.7 installer fails when the vCenter Single Sign-On password contains non-ASCII characters for the Chinese, Japanese, Korean, and Taiwanese locales.
Workaround: For the Chinese, Japanese, Korean, and Taiwanese locales, ensure that the vCenter Single Sign-On password contains only ASCII characters.
- Cannot log in to vSphere Appliance Management Interface if the colon character (:) is part of vCenter Server root password
During the vCenter Server Appliance UI installation (Set up appliance VM page of Stage 1), if you include the colon character (:) in the vCenter Server root password, you cannot log in to the vSphere Appliance Management Interface (https://vc_ip:5480). The password might be accepted by the password rule check during the setup, but login fails.
Workaround: Do not use the colon character (:) to set the vCenter Server root password in the vCenter Server Appliance UI (Set up appliance VM of Stage 1).
- vCenter Server Appliance installation fails when the backslash character (\) is included in the vCenter Single Sign-On password
During the vCenter Server Appliance UI installation (SSO setup page of Stage 2), if you include the backslash character (\) in the vCenter Single Sign-On password, the installation fails with the error Analytics Service registration with Component Manager failed. The password might be accepted by the password rule check, but the installation fails.
Workaround: Do not use the backslash character (\) in the vCenter Single Sign-On password in the vCenter Server Appliance UI installer (SSO setup page of Stage 2).
- Scripted ESXi installation fails on HP ProLiant Gen 9 Servers with an error
When you perform a scripted ESXi installation on an HP ProLiant Gen 9 Server under the following conditions:
- The Embedded User Partition option is enabled in the BIOS.
- You use multiple USB drives during the installation: one USB drive contains the ks.cfg file, and the other USB drive is unformatted but usable.
The installation fails with the error message Partitions not initialized.
Workaround:
- Disable the Embedded User Partition option in the server BIOS.
- Format the unformatted USB drive with a file system or unplug it from the server.
- Upgrading vCenter Server 6.5 for Windows to vCenter Server 6.7 might fail if the vSphere Authentication Proxy service is active
If the vSphere Authentication Proxy service is active while you perform an upgrade from vCenter Server 6.5 for Windows to vCenter Server 6.7, the operation might fail during the pre-check. You might see an error similar to:
The following non-configurable port(s) are already in use:
2016, 7475, 7476
Stop the process(es) that use these port(s).
Workaround: Stop the vSphere Authentication Proxy service. You can restart the service after the successful upgrade to vCenter Server 6.7.
- Patching to vCenter Server 6.7 Update 1 from earlier versions of vCenter Server 6.7 might fail when vCenter Server High Availability is active
Patching to vCenter Server 6.7 Update 1 from earlier versions of vCenter Server 6.7 might fail when vCenter Server High Availability is active due to a DB schema change. For more information, see VMware knowledge base article 55938.
Workaround: To patch your system to vCenter Server 6.7 Update 1 from earlier versions of vCenter Server 6.7, you must remove vCenter Server High Availability and delete passive and witness nodes. After the upgrade, you must re-create your vCenter Server High Availability clusters.
- Windows vCenter Server 6.0.x or 6.5.x upgrade to vCenter Server 6.7 fails if vCenter Server contains non-ASCII or high-ASCII named 5.5 host profiles
When a source Windows vCenter Server 6.0.x or 6.5.x contains vCenter Server 5.5.x host profiles named with non-ASCII or high-ASCII characters, UpgradeRunner fails to start during the upgrade pre-check process.
Workaround: Before upgrading Windows vCenter Server 6.0.x or 6.5.x to vCenter Server 6.7, upgrade the ESXi 5.5.x hosts with the non-ASCII or high-ASCII named host profiles to ESXi 6.0.x or 6.5.x, and then update the host profile from the upgraded host by clicking Copy settings from the host.
- Upgrade to vCenter Server Appliance 6.7 Update 1 from vCenter Server Appliance 6.5 Update 2 and later, using custom HTTP and HTTPS ports, might fail
Upgrades from vCenter Server Appliance 6.5 Update 2 and later that use custom HTTP and HTTPS ports to vCenter Server Appliance 6.7 Update 1 might fail. You might see the issue regardless of whether you use the GUI or the CLI installer.
Workaround: None
- Converging an external Platform Services Controller to a vCenter Server might fail if the Platform Services Controller uses a custom HTTPS port
You might fail to converge an external Platform Services Controller to a vCenter Server system, if the vCenter Server system is configured with the default HTTPS port, 443, and the Platform Services Controller node is configured with a custom value for the HTTPS port. The operation fails in the firstboot stage due to convergence issues.
Workaround: Change the HTTPS port value to the default value, 443, for Platform Services Controller nodes before running the vCenter External to Embedded Convergence tool. You can use the following commands:
/usr/lib/vmware-vmafd/bin/vmafd-cli set-dc-port --server-name localhost --dc-port 443
/usr/lib/vmware-vmafd/bin/vmafd-cli set-rhttpproxy-port --server-name localhost --rhttpproxy-port 443
- You cannot run the camregister command with the -x option if the vCenter Single Sign-On password contains non-ASCII characters
When you run the camregister
command with the -x
file option, for example, to register the vSphere Authentication Proxy, the process fails with an access denied error when the vCenter Single Sign-On password contains non-ASCII characters.
Workaround: Either set up the vCenter Single Sign-On password with ASCII characters, or use the -p password option when you run the camregister command to enter a vCenter Single Sign-On password that contains non-ASCII characters.
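Several of the issues in this section come down to non-ASCII characters in a password. A pre-check like the following, an illustrative helper rather than part of any VMware tooling, can catch such passwords before registration:

```python
def is_ascii_only(password: str) -> bool:
    # True when every character fits in the 7-bit ASCII range, which is
    # what the camregister -x option effectively requires.
    return all(ord(ch) < 128 for ch in password)

assert is_ascii_only("VMware1!")
assert not is_ascii_only("pässword")   # ä is outside the ASCII range
```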
- The Bash shell and SSH login are disabled after upgrading to vCenter Server 6.7
After upgrading to vCenter Server 6.7, you are not able to access the vCenter Server Appliance using either the Bash shell or SSH login.
Workaround:
- After successfully upgrading to vCenter Server 6.7, log in to the vCenter Server Appliance Management Interface. In a Web browser, go to: https://appliance_ip_address_or_fqdn:5480
- Log in as root.
The default root password is the password you set while deploying the vCenter Server Appliance.
- Click Access, and click Edit.
- Edit the access settings for the Bash shell and SSH login.
When enabling Bash shell access to the vCenter Server Appliance, enter the number of minutes to keep access enabled.
- Click OK to save the settings.
- Management node migration is blocked if vCenter Server for Windows 6.0 is installed on Windows Server 2008 R2 without previously enabling Transport Layer Security 1.2
This issue occurs if you are migrating vCenter Server for Windows 6.0 using an external Platform Services Controller (an MxN topology) on Windows Server 2008 R2. After migrating the external Platform Services Controller, when you run Migration Assistant on the Management node it fails, reporting that it cannot retrieve the Platform Services Controller version. This error occurs because Windows Server 2008 R2 does not support Transport Layer Security (TLS) 1.2 by default, which is the default TLS protocol for Platform Services Controller 6.7.
Workaround: Enable TLS 1.2 for Windows Server 2008 R2:
- Navigate to the registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols
- Create a new folder and label it TLS 1.2.
- Create two new keys within the TLS 1.2 folder, and name the keys Client and Server.
- Under the Client key, create two DWORD (32-bit) values, and name them DisabledByDefault and Enabled.
- Under the Server key, create two DWORD (32-bit) values, and name them DisabledByDefault and Enabled.
- Ensure that the Value field is set to 0 and that the Base is Hexadecimal for DisabledByDefault.
- Ensure that the Value field is set to 1 and that the Base is Hexadecimal for Enabled.
- Reboot the Windows Server 2008 R2 computer.
For more information on using TLS 1.2 with Windows Server 2008 R2, refer to the operating system vendor's documentation.
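The registry steps above can also be captured as a .reg file so the change can be reviewed or imported with reg.exe instead of edited by hand. The generator below is a sketch; the fragment it emits mirrors the steps exactly (DisabledByDefault=0 and Enabled=1 under both Client and Server).

```python
# Generate a .reg fragment matching the registry steps above.
BASE = (r"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control"
        r"\SecurityProviders\SCHANNEL\Protocols\TLS 1.2")

def tls12_reg_fragment() -> str:
    lines = ["Windows Registry Editor Version 5.00", ""]
    for subkey in ("Client", "Server"):
        lines.append(f"[{BASE}\\{subkey}]")
        lines.append('"DisabledByDefault"=dword:00000000')  # 0 = not disabled
        lines.append('"Enabled"=dword:00000001')            # 1 = enabled
        lines.append("")
    return "\n".join(lines)
```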
- vCenter Server containing host profiles with a version earlier than 6.0 fails during upgrade to version 6.7
vCenter Server 6.7 does not support host profiles with a version earlier than 6.0. To upgrade to vCenter Server 6.7, you must first upgrade the host profiles to version 6.0 or later, if you have any of the following components:
- ESXi host(s) version - 5.1 or 5.5
- vCenter server version - 6.0 or 6.5
- Host profiles version - 5.1 or 5.5
Workaround: See KB 52932
- After upgrading to vCenter Server 6.7, any edits to the ESXi host's /etc/ssh/sshd_config file are discarded, and the file is restored to the vCenter Server 6.7 default configuration
Due to changes in the default values in the /etc/ssh/sshd_config
file, the vCenter Server 6.7 upgrade replaces any manual edits to this configuration file with the default configuration. This change was necessary as some prior settings (for example, permitted ciphers) are no longer compatible with current ESXi behavior, and prevented SSHD (SSH daemon) from starting correctly.
CAUTION: Editing /etc/ssh/sshd_config
is not recommended. SSHD is disabled by default, and the preferred method for editing the system configuration is through the VIM API (including the ESXi Host Client interface) or ESXCLI.
Workaround: If edits to /etc/ssh/sshd_config
are needed, you can apply them after successfully completing the vCenter Server 6.7 upgrade. The default configuration file now contains a version number. Preserve the version number to avoid overwriting the file.
For further information on editing the /etc/ssh/sshd_config
file, see the following Knowledge Base articles:
- For information on enabling public/private key authentication, see Knowledge Base article KB 1002866
- For information on changing the default SSHD configuration, see Knowledge Base article KB 1020530
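If you script the re-application of custom settings after the upgrade, a merge along these lines keeps comment lines, including the version line, intact. The helper and the "# Version" marker shown in the test data are illustrative assumptions, not a documented VMware format.

```python
def apply_custom_settings(config_text: str, overrides: dict) -> str:
    # Replace matching directives in place and append any that are missing,
    # passing comments (including the version line) through untouched.
    out, seen = [], set()
    for line in config_text.splitlines():
        stripped = line.strip()
        key = stripped.split(None, 1)[0] if stripped and not stripped.startswith("#") else None
        if key in overrides:
            out.append(f"{key} {overrides[key]}")
            seen.add(key)
        else:
            out.append(line)
    for key, value in overrides.items():
        if key not in seen:
            out.append(f"{key} {value}")
    return "\n".join(out) + "\n"
```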
Security Features Issues
- Virtualization Based Security (VBS) on vSphere in Windows guest operating systems RS1, RS2, and RS3 requires Hyper-V to be enabled in the guest OS
Virtualization Based Security (VBS) on vSphere in Windows guest operating systems RS1, RS2, and RS3 requires Hyper-V to be enabled in the guest OS.
Workaround: Enable Hyper-V Platform on Windows Server 2016. In the Server Manager, under Local Server select Manage -> Add Roles and Features Wizard and under Role-based or feature-based installation select Hyper-V from the server pool and specify the server roles. Choose defaults for Server Roles, Features, Hyper-V, Virtual Switches, Migration and Default Stores. Reboot the host.
Enable Hyper-V on Windows 10: Browse to Control Panel -> Programs -> Turn Windows features on or off. Check the Hyper-V Platform which includes the Hyper-V Hypervisor and Hyper-V Services. Uncheck Hyper-V Management Tools. Click OK. Reboot the host.
Networking Issues
- Host profile PeerDNS flags do not work in some scenarios
If PeerDNS for IPv4 is enabled for a vmknic on a stateless host that has an associated host profile, the IPv6PeerDNS setting might appear with a different state in the extracted host profile after the host reboots.
Workaround: None.
- When you upgrade vSphere Distributed Switches to version 6.6, you might encounter a few known issues
During upgrade, the connected virtual machines might experience packet loss for a few seconds.
Workaround: If you have multiple vSphere Distributed Switches that need to be upgraded to version 6.6, upgrade the switches sequentially.
Schedule the upgrade of vSphere Distributed Switches during a maintenance window, set DRS mode to manual, and do not apply DRS recommendations for the duration of the upgrade.
For more details about known issues and solutions, see KB 52621
- VM fails to power on when Network I/O Control is enabled and all active uplinks are down
A VM fails to power on when Network I/O Control is enabled and the following conditions are met:
- The VM is connected to a distributed port group on a vSphere distributed switch
- The VM is configured with bandwidth allocation reservation and the VM's network adapter (vNIC) has a reservation configured
- The distributed port group teaming policy is set to Failover
- All active uplinks on the distributed switch are down. In this case, vSphere DRS cannot use the standby uplinks and the VM fails to power on.
Workaround: Move the available standby adapters to the active adapters list in the teaming policy of the distributed port group.
- Network flapping on a NIC that uses qfle3f driver might cause ESXi host to crash
The qfle3f driver might cause the ESXi host to crash (PSOD) when the physical NIC that uses the qfle3f driver experiences frequent link status flapping every 1-2 seconds.
Workaround: Make sure that network flapping does not occur. If the link status flapping interval is more than 10 seconds, the qfle3f driver does not cause ESXi to crash. For more information, see KB 2008093.
- Port Mirror traffic packets of ERSPAN Type III fail to be recognized by packet analyzers
A bit that is incorrectly introduced in the ERSPAN Type III packet header causes all ERSPAN Type III packets to appear corrupt in packet analyzers.
Workaround: Use GRE or ERSPAN Type II packets, if your traffic analyzer supports these types.
- DNS configuration esxcli commands are not supported on non-default TCP/IP stacks
DNS configuration of non-default TCP/IP stacks is not supported. Commands such as esxcli network ip dns server add -N vmotion -s 10.11.12.13
do not work.
Workaround: Do not use DNS configuration esxcli commands on non-default TCP/IP stacks.
- Compliance check fails with an error when applying a host profile with enabled default IPv4 gateway for vmknic interface
When you apply a host profile with an enabled default IPv4 gateway for the vmknic interface, the setting is populated with "0.0.0.0" and does not match the host information, resulting in the following error:
IPv4 vmknic gateway configuration doesn't match the specification
Workaround:
- Edit the host profile settings.
- Navigate to Networking configuration > Host virtual nic or Host portgroup > (name of the vSphere Distributed Switch or name of portgroup) > IP address settings.
- From the Default gateway VMkernel Network Adapter (IPv4) drop-down menu, select Choose a default IPv4 gateway for the vmknic and enter the vmknic default IPv4 gateway.
- Intel Fortville series NICs cannot receive Geneve encapsulation packets with option length bigger than 255 bytes
If you configure Geneve encapsulation with option length bigger than 255 bytes, the packets are not received correctly on Intel Fortville NICs X710, XL710, and XXV710.
Workaround: Disable hardware VLAN stripping on these NICs by running the following command:
esxcli network nic software set --untagging=1 -n vmnicX
- RSPAN_SRC mirror session fails after migration
When a VM connected to a port assigned to an RSPAN_SRC mirror session is migrated to another host, and the required pNIC is not present on the destination network of the destination host, the RSPAN_SRC mirror session fails to configure on the port. This causes the port connection to fail, but the vMotion migration succeeds.
Workaround: To restore the port connection, complete one of the following:
- Remove the failed port and add a new port.
- Disable the port and enable it.
The mirror session fails to configure, but the port connection is restored.
Storage Issues
- NFS datastores intermittently become read-only
A host's NFS datastores might become read-only when the NFS vmknic temporarily loses its IP address or after a stateless host reboots.
Workaround: You can unmount and remount the datastores to regain connectivity through the NFS vmknic. You can also set the NFS datastore write permission to both the IP address of the NFS vmknic and the IP address of the Management vmknic.
- When editing a VM's storage policies, selecting Host-local PMem Storage Policy fails with an error
In the Edit VM Storage Policies dialog, if you select Host-local PMem Storage Policy from the dropdown menu and click OK, the task fails with one of these errors:
The operation is not supported on the object.
or
Incompatible device backing specified for device '0'.
Workaround: You cannot apply the Host-local PMem Storage Policy to VM home. For a virtual disk, you can use the migration wizard to migrate the virtual disk and apply the Host-local PMem Storage Policy.
- Datastores might appear as inaccessible after ESXi hosts in a cluster recover from a permanent device loss state
This issue might occur in environments where the hosts in the cluster share a large number of datastores, for example, 512 to 1000 datastores.
After the hosts in the cluster recover from the permanent device loss condition, the datastores are mounted successfully at the host level. However, in vCenter Server, several datastores might continue to appear as inaccessible for a number of hosts.
Workaround: On the hosts that show inaccessible datastores in the vCenter Server view, perform the Rescan Storage operation from vCenter Server.
- Migration of a virtual machine from a VMFS3 datastore to VMFS5 fails in a mixed ESXi 6.5 and 6.7 host environment
If you have a mixed host environment, you cannot migrate a virtual machine from a VMFS3 datastore connected to an ESXi 6.5 host to a VMFS5 datastore on an ESXi 6.7 host.
Workaround: Upgrade the VMFS3 datastore to VMFS5 to be able to migrate the VM to the ESXi 6.7 host.
- Warning message about a VMFS3 datastore remains unchanged after you upgrade the VMFS3 datastore using the CLI
Typically, you use the CLI to upgrade the VMFS3 datastore that failed to upgrade during an ESXi upgrade. The VMFS3 datastore might fail to upgrade due to several reasons including the following:
- No space is available on the VMFS3 datastore.
- One of the extents on the spanned datastore is offline.
After you fix the cause of the failure and upgrade the VMFS3 datastore to VMFS5 by using the CLI, the host continues to detect the VMFS3 datastore and reports the following error:
Deprecated VMFS (ver 3) volumes found. Upgrading such volumes to VMFS (ver5) is mandatory for continued availability on vSphere 6.7 host.
Workaround: To remove the error message, restart hostd using the /etc/init.d/hostd restart command or reboot the host.
- The Mellanox ConnectX-4/ConnectX-5 native ESXi driver might exhibit performance degradation when its Default Queue Receive Side Scaling (DRSS) feature is turned on
Receive Side Scaling (RSS) technology distributes incoming network traffic across several hardware-based receive queues, allowing inbound traffic to be processed by multiple CPUs. In Default Queue Receive Side Scaling (DRSS) mode, the entire device is in RSS mode. The driver presents a single logical queue to the OS, backed by several hardware queues.
The native nmlx5_core driver for the Mellanox ConnectX-4 and ConnectX-5 adapter cards enables the DRSS functionality by default. While DRSS helps to improve performance for many workloads, it could lead to possible performance degradation with certain multi-VM and multi-vCPU workloads.
Workaround: If significant performance degradation is observed, you can disable the DRSS functionality.
- Run the esxcli system module parameters set -m nmlx5_core -p DRSS=0 RSS=0 command.
- Reboot the host.
- Datastore name does not extract to the Coredump File setting in the host profile
When you extract a host profile, the Datastore name field is empty in the Coredump File setting of the host profile. The issue appears when you use an esxcli command to set the coredump.
Workaround:
- Extract a host profile from an ESXi host.
- Edit the host profile settings and navigate to General System Settings > Core Dump Configuration > Coredump File.
- Select Create the Coredump file with an explicit datastore and size option and enter the Datastore name, where you want the Coredump File to reside.
- Native software FCoE adapters configured on an ESXi host might disappear when the host is rebooted
After you successfully enable the native software FCoE adapter (vmhba) supported by the vmkfcoe driver and then reboot the host, the adapter might disappear from the list of adapters. This might occur when you use Cavium QLogic 57810 or QLogic 57840 CNAs supported by the qfle3 driver.
Workaround: To recover the vmkfcoe adapter, perform these steps:
- Run the esxcli storage core adapter list command to make sure that the adapter is missing from the list.
- Verify the vSwitch configuration on vmnic associated with the missing FCoE adapter.
- Run the following command to discover the FCoE vmhba:
- On a fabric setup:
#esxcli fcoe nic discover -n vmnic_number
- On a VN2VN setup:
#esxcli fcoe nic discover -n vmnic_number
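The recovery steps above can be sketched as a short ESXi shell helper. The function name and the vmnic placeholder are illustrative; substitute the NIC that backed the missing vmhba after checking its vSwitch configuration:

```shell
#!/bin/sh
# Sketch of the recovery for a missing software FCoE adapter (ESXi shell).
recover_fcoe() {
    vmnic="$1"
    # Step 1: confirm that the FCoE adapter is missing from the adapter list.
    esxcli storage core adapter list
    # Step 3: rediscover the FCoE vmhba (same command for fabric and VN2VN setups).
    esxcli fcoe nic discover -n "$vmnic"
}
```

For example, `recover_fcoe vmnic4` rediscovers the vmhba backed by vmnic4.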
- Attempts to create a VMFS datastore on an ESXi 6.7 host might fail in certain software FCoE environments
Your attempts to create the VMFS datastore fail if you use the following configuration:
- Native software FCoE adapters configured on an ESXi 6.7 host.
- Cavium QLogic 57810 or 57840 CNAs.
- Cisco FCoE switch connected directly to an FCoE port on a storage array from the Dell EMC VNX5300 or VNX5700 series.
Workaround: None.
As an alternative, you can switch to the following end-to-end configuration:
ESXi host > Cisco FCoE switch > FC switch > storage array from the Dell EMC VNX5300 and VNX5700 series.
Backup and Restore Issues
- Windows Explorer displays some backups with Unicode characters differently from how browsers and file system paths show them
Some backups containing Unicode characters display differently in the Windows Explorer file system folder than they do in browsers and file system paths.
Workaround: Using http, https, or ftp, you can browse backups with your web browser instead of going to the storage folder locations through Windows Explorer.
vCenter Server Appliance, vCenter Server, vSphere Web Client, and vSphere Client Issues
- The time synchronization mode setting is not retained when upgrading vCenter Server Appliance
If NTP time synchronization is disabled on a source vCenter Server Appliance and you upgrade to vCenter Server Appliance 6.7, NTP time synchronization is enabled on the newly upgraded appliance after the upgrade completes successfully.
Workaround:
- After successfully upgrading to vCenter Server Appliance 6.7, log in to the vCenter Server Appliance Management Interface as root.
The default root password is the password you set while deploying the vCenter Server Appliance.
https://IP_or_FQDN_of_appliance:5480
- In the vCenter Server Appliance Management Interface, click Time.
- In the Time Synchronization pane, click Edit.
- From the Mode drop-down menu, select Disabled.
The newly upgraded vCenter Server Appliance 6.7 will no longer use NTP time synchronization, and will instead use the system time zone settings.
- Login to vSphere Web Client with Windows session authentication fails on Firefox browsers of version 54 or later
If you use Firefox of version 54 or later to log in to the vSphere Web Client, and you use your Windows session for authentication, the VMware Enhanced Authentication Plugin might fail to populate your user name and to log you in.
Workaround: If you are using Windows session authentication to log in to the vSphere Web Client, use one of the following browsers: Internet Explorer, Chrome, or Firefox of version 53 and earlier.
- vCenter hardware health alarm notifications are not triggered in some instances
When multiple sensors in the same category on an ESXi host are tripped within a time span of less than five minutes, traps are not received and email notifications are not sent.
Workaround: None. You can check the hardware sensors section for any alerts.
- The vSphere Client and vSphere Web Client might not reflect update from vCenter Server 6.7 to vCenter Server 6.7 Update 1 for vCenter Server for Windows
If you update vCenter Server for Windows from vCenter Server 6.7 to vCenter Server 6.7 Update 1, the build number details for vpxd in the Summary tab of both the vSphere Client and vSphere Web Client might not reflect the update and show version 6.7.0.
Workaround: None.
- When using the VCSA installer Time Sync option, you must connect the target ESX host to the NTP server in the Time & Date settings of the ESX host management
If you select Time Sync with an NTP server in the VCSA installer (Stage 2 > Appliance configuration > Time Sync option (ESX/NTP server)), the target ESX host must already be connected to the NTP server in the Time & Date settings of the ESX host management. Otherwise, the installation fails.
Workaround:
- Set the Time Sync option in Stage 2 > Appliance configuration to sync with the ESX host, or
- Set the Time Sync option in Stage 2 > Appliance configuration to sync with NTP servers, and make sure that both the ESX host and the vCenter Server are set to connect to NTP servers.
- When you monitor Windows vCenter Server health, an error message appears
Health service is not available for Windows vCenter Server. If you select the vCenter Server and click Monitor > Health, an error message appears:
Unable to query vSAN health information. Check vSphere Client logs for details.
This problem can occur after you upgrade the Windows vCenter Server from release 6.0 Update 1 or 6.0 Update 2 to release 6.7. You can ignore this message.
Workaround: None. You can access vSAN health information through the vCenter Server Appliance.
- vCenter hardware health alarms do not function with earlier ESXi versions
If ESXi version 6.5 Update 1 or earlier is added to vCenter 6.7, hardware health related alarms will not be generated when hardware events occur such as high CPU temperatures, FAN failures, and voltage fluctuations.
Workaround: None.
- vCenter Server stops working in some cases when using vmodl to edit or expand a disk
When you configure a VM disk in a Storage DRS-enabled cluster using the latest vmodl, vCenter Server stops working. A previous workaround using an earlier vmodl no longer works and will also cause vCenter Server to stop working.
Workaround: None
- vCenter Server for Windows migration to vCenter Server Appliance fails with error
When you migrate vCenter Server for Windows 6.0.x or 6.5.x to vCenter Server Appliance 6.7, the migration might fail during the data export stage with the error: The compressed zip folder is invalid or corrupted.
Workaround: You must zip the data export folder manually and follow these steps:
- In the source system, create an environment variable MA_INTERACTIVE_MODE.
- Go to Computer > Properties > Advanced system settings > Environment Variables > System Variables > New.
- Enter "MA_INTERACTIVE_MODE" as variable name with value 0 or 1.
- Start the VMware Migration Assistant and provide your password.
- Start the migration from the client machine. The migration pauses, and the Migration Assistant console displays the message: To continue the migration, create the export.zip file manually from the export data (include export folder).
- NOTE: Do not press any keys or tabs on the Migration Assistant console.
- Go to the %appdata%\vmware\migration-assistant folder.
- Delete the export.zip created by the Migration Assistant.
- To continue the migration, manually create the export.zip file from the export folder.
- Return to the Migration Assistant console. Type Y and press Enter.
- Discrepancy between the build number in VAMI and the build number in the vSphere Client
In vSphere 6.7, the VAMI summary tab displays the ISO build for the vCenter Server and vCenter Server Appliance products. The vSphere Client summary tab displays the build for the vCenter product, which is a component within the vCenter Server product.
Workaround: None
- vCenter Server Appliance 6.7 displays an error message in the Available Update section of the vCenter Server Appliance Management Interface (VAMI)
The Available Update section of the vCenter Server Appliance Management Interface (VAMI) displays the following error message:
Check the URL and try again.
This message is generated when the vCenter Server Appliance searches for and fails to find a patch or update. No functionality is impacted by this issue. This issue will be resolved with the release of the first patch for vSphere 6.7.
Workaround: None. No functionality is impacted by this issue.
Virtual Machine Management Issues
- Name of the virtual machine in the inventory changes to its path name
This issue might occur when a datastore where the VM resides enters the All Paths Down state and becomes inaccessible. When hostd is loading or reloading VM state, it is unable to read the VM's name and returns the VM path instead. For example, /vmfs/volumes/123456xxxxxxcc/cs-00.111.222.333.
Workaround: After you resolve the storage issue, the virtual machine reloads, and its name is displayed again.
- You must select the "Secure boot" Platform Security Level when enabling VBS in a Guest OS on AMD systems
On AMD systems, vSphere virtual machines do not provide a vIOMMU. Because a vIOMMU is required for DMA protection, AMD users cannot select "Secure Boot and DMA protection" in the Windows Group Policy Editor when they turn on Virtualization Based Security. Instead, select "Secure boot". If you select the wrong option, Windows silently disables VBS services.
Workaround: Select "Secure boot" Platform Security Level in a Guest OS on AMD systems.
- You cannot hot add memory and CPU for Windows VMs when Virtualization Based Security (VBS) is enabled within Windows
Virtualization Based Security (VBS) is a feature introduced in Windows 10 and Windows Server 2016. vSphere supports running Windows with VBS enabled starting with the vSphere 6.7 release. However, hot add of memory and CPU does not work for Windows VMs while VBS is enabled.
Workaround: Power off the VM, change the memory or CPU settings, and power on the VM.
- Snapshot tree of a linked-clone VM might be incomplete after a vSAN network recovery from a failure
A vSAN network failure might impact accessibility of vSAN objects and VMs. After a network recovery, the vSAN objects regain accessibility. The hostd service reloads the VM state from storage to recover VMs. However, for a linked-clone VM, hostd might not detect that the parent VM namespace has recovered its accessibility. This results in the VM remaining in inaccessible state and VM snapshot information not being displayed in vCenter Server.
Workaround: Unregister the VM, then re-register it to force hostd to reload the VM state. Snapshot information is loaded from storage.
- The Virtual Appliance Management Interface might display a 0- message or a blank page during patching from vCenter Server 6.7 to later versions
The Virtual Appliance Management Interface might display a 0- message or a blank page during patching from vCenter Server 6.7 to later versions if calls from the interface fail to reach the back-end applmgmt service. You might also see the message: Unable to get historical data import status. Check Server Status.
Workaround: These are not failure messages. Refresh the browser and log in to the Virtual Appliance Management Interface again after the appliance reboot in the back end is complete.
- The Ready to Complete page of the Register Virtual Machine wizard displays only one horizontal line
The Ready to Complete page of the Register Virtual Machine wizard might display content similar to one horizontal line due to a rendering issue. This issue does not affect the workflow of the wizard.
Workaround: None
- An OVF Virtual Appliance fails to start in the vSphere Client
The vSphere Client does not support selecting vService extensions in the Deploy OVF Template wizard. As a result, if an OVF virtual appliance uses vService extensions and you use the vSphere Client to deploy the OVF file, the deployment succeeds, but the virtual appliance fails to start.
Workaround: Use the vSphere Web Client to deploy OVF virtual appliances that use vService extensions.
vSphere HA and Fault Tolerance Issues
- When you configure Proactive HA in Manual/MixedMode in vSphere 6.7 RC build you are prompted twice to apply DRS recommendations
When you configure Proactive HA in Manual/MixedMode in vSphere 6.7 RC build and a red health update is sent from the Proactive HA provider plug-in, you are prompted twice to apply the recommendations under Cluster -> Monitor -> vSphere DRS -> Recommendations. The first prompt is to enter the host into maintenance mode. The second prompt is to migrate all VMs on a host entering maintenance mode. In vSphere 6.5, these two steps are presented as a single recommendation for entering maintenance mode, which lists all VMs to be migrated.
Workaround: There is no impact to work flow or results. You must apply the recommendations twice. If you are using automated scripts, you must modify the scripts to include the additional step.
- Lazy import upgrade interaction when VCHA is not configured
The VCHA feature is available as of the 6.5 release. As of 6.5, a VCHA cluster cannot be upgraded while preserving the VCHA configuration. The recommended approach for upgrade is to first remove the VCHA configuration, either through the vSphere Client or by calling the destroy VCHA API. Therefore, for a lazy import upgrade workflow without a VCHA configuration, there is no interaction with VCHA.
Do not configure a fresh VCHA setup while lazy import is in progress. The VCHA setup requires cloning the Active VM as Passive/Witness VM. As a result of an ongoing lazy import, the amount of data that needs to be cloned is large and may lead to performance issues.
Workaround: None.
- You cannot add ESXi hosts running vSphere Fault Tolerance workloads to a vCenter Server system by using the vSphere Client
Attempts to add ESXi hosts running vSphere Fault Tolerance workloads to a vCenter Server system by using the vSphere Client might fail with the error: Cannot add a host with virtual machines that have Fault Tolerance turned on as a stand-alone host.
Workaround: As alternatives, you can:
- Schedule a task to add the host and execute it immediately.
- In the vSphere Client, navigate to Configure > Scheduled tasks for a selected cluster.
- Select New scheduled task > Add Host.
- Schedule a time to run the task.
- Add a host and run the task.
- Delete the task after the host is added.
- Use the vSphere Web Client to add the host. Log in to the vSphere Web Client and run the standard add host workflow.
- Temporarily turn off Fault Tolerance on the virtual machines, add the host to the new vCenter Server system, and then turn Fault Tolerance back on.
- vCenter Server High Availability cluster configuration by using an NSX-T logical switch might fail
Configuration of a vCenter Server High Availability cluster by using an NSX-T logical switch might fail with the error: Failed to connect peer node.
Workaround: Configure vCenter Server High Availability clusters by using a vSphere Distributed Switch.
Auto Deploy and Image Builder Issues
- Reboot of an ESXi stateless host resets the numRxQueue value of the host
When an ESXi host provisioned with vSphere Auto Deploy reboots, it loses the previously set numRxQueue value. The Host Profiles feature does not support saving the numRxQueue value after the host reboots.
Workaround: After the ESXi stateless host reboots:
- Remove the vmknic from the host.
- Create a vmknic on the host with the expected numRxQueue value.
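The two steps above can be sketched in the ESXi shell. The vmknic name and port group are placeholders, and the --num-rxqueue flag of esxcli network ip interface add is an assumption about your esxcli build; verify it with esxcli network ip interface add --help first:

```shell
#!/bin/sh
# Sketch: recreate a vmknic with the expected RX queue count after a stateless reboot.
recreate_vmknic() {
    vmk="$1"; pg="$2"; queues="$3"
    # Remove the vmknic that lost its numRxQueue value after the reboot.
    esxcli network ip interface remove --interface-name="$vmk" || return 1
    # Recreate it with the expected number of RX queues (flag availability assumed).
    esxcli network ip interface add --interface-name="$vmk" \
        --portgroup-name="$pg" --num-rxqueue="$queues"
}
```

For example, `recreate_vmknic vmk1 Management 4` recreates vmk1 on the Management port group with 4 RX queues.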
- After caching on a drive, if the server is in the UEFI mode, a boot from cache does not succeed unless you explicitly select the device to boot from the UEFI boot manager
In case of stateless caching, after the ESXi image is cached on a 512n, 512e, USB, or 4Kn target disk, the ESXi stateless boot from Auto Deploy might fail on a system reboot. This occurs if the Auto Deploy service is down.
The system attempts to search for the cached ESXi image on the disk, next in the boot order. If the ESXi cached image is found, the host is booted from it. In legacy BIOS, this feature works without problems. However, in the UEFI mode of the BIOS, the next device with the cached image might not be found. As a result, the host cannot boot from the image even if the image is present on the disk.
Workaround: If the Auto Deploy service is down, manually select the disk with the cached image from the UEFI Boot Manager on the system reboot.
- A stateless ESXi host boot time might take 20 minutes or more
The booting of a stateless ESXi host with 1,000 configured datastores might require 20 minutes or more.
Workaround: None.
Miscellaneous Issues
- ESXi might fail during reboot with VMs running on the iSCSI LUNs claimed by the qfle3i driver
ESXi might fail during reboot if you attempt to reboot the server while VMs running on iSCSI LUNs claimed by the qfle3i driver still have I/O in flight.
Workaround: First power off VMs and then reboot the ESXi host.
- VXLAN stateless hardware offloads are not supported with Guest OS TCP traffic over IPv6 on UCS VIC 13xx adapters
You may experience issues with VXLAN encapsulated TCP traffic over IPv6 on Cisco UCS VIC 13xx adapters configured to use the VXLAN stateless hardware offload feature. For VXLAN deployments involving Guest OS TCP traffic over IPv6, TCP packets subject to TSO are not processed correctly by the Cisco UCS VIC 13xx adapters, which causes traffic disruption. The stateless offloads are not performed correctly. From a TCP protocol standpoint, this may cause incorrect packet checksums to be reported to the ESXi software stack, which may lead to incorrect TCP protocol processing in the Guest OS.
Workaround: To resolve this issue, disable the VXLAN stateless offload feature on the Cisco UCS VIC 13xx adapters for VXLAN encapsulated TCP traffic over IPv6. To disable the VXLAN stateless offload feature in UCS Manager, disable the Virtual Extensible LAN field in the Ethernet Adapter Policy. To disable the VXLAN stateless offload feature in the CIMC of a Cisco C-Series UCS server, uncheck the Enable VXLAN field in the Ethernet Interfaces vNIC properties section.
- Significant time might be required to list a large number of unresolved VMFS volumes using the batch QueryUnresolvedVmfsVolume API
ESXi provides the batch QueryUnresolvedVmfsVolume API, so that you can query and list unresolved VMFS volumes or LUN snapshots. You can then use other batch APIs to perform operations, such as resignaturing specific unresolved VMFS volumes. By default, when the API QueryUnresolvedVmfsVolume is invoked on a host, the system performs an additional filesystem liveness check for all unresolved volumes that are found. The liveness check detects whether the specified LUN is mounted on other hosts, whether an active VMFS heartbeat is in progress, or if there is any filesystem activity. This operation is time consuming and requires at least 16 seconds per LUN. As a result, when your environment has a large number of snapshot LUNs, the query and listing operation might take significant time.
Workaround: To decrease the time of the query operation, you can disable the filesystem liveness check.
- Log in to your host as root.
- Open the configuration file for hostd using a text editor. The configuration file is located in /etc/vmware/hostd/config.xml under plugins/hostsvc/storage node.
- Add the checkLiveFSUnresolvedVolume parameter and set its value to FALSE. Use the following syntax:
<checkLiveFSUnresolvedVolume>FALSE</checkLiveFSUnresolvedVolume>
As an alternative, you can set the ESXi Advanced option VMFS.UnresolvedVolumeLiveCheck to FALSE in the vSphere Client.
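The config.xml edit in step 3 can be sketched as a small shell helper. The helper name is illustrative; run it against a copy first, then apply it to /etc/vmware/hostd/config.xml and restart hostd (/etc/init.d/hostd restart) for the change to take effect:

```shell
#!/bin/sh
# Sketch: insert the checkLiveFSUnresolvedVolume flag after the opening
# <storage> tag under the plugins/hostsvc node of hostd's config.xml.
add_livecheck_flag() {
    cfg="$1"
    # Skip the edit if the flag is already present, so the helper is idempotent.
    grep -q checkLiveFSUnresolvedVolume "$cfg" && return 0
    sed -i 's|<storage>|<storage>\n    <checkLiveFSUnresolvedVolume>FALSE</checkLiveFSUnresolvedVolume>|' "$cfg"
}
```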
- Importing a .csv file overwrites user input during host customization.
User input in the Customize hosts pane is overwritten by the import process and the values from the .csv file.
Workaround: Import the .csv file before adding manual changes in the Customize hosts pane.
- The vCenter Server Convergence Tool might fail to convert an external Platform Services Controller to an embedded Platform Services Controller due to conflicting IP and FQDN
If you have configured an external Platform Services Controller with an IP address as an optional FQDN field during the deployment, the vCenter Server Convergence Tool might fail to convert the external Platform Services Controller to an embedded Platform Services Controller because of a name conflict.
Workaround: Do not use the vCenter Server Convergence Tool for Platform Services Controllers installed with an IP address as an alternative or addition to the FQDN address.
- If you repoint and reconfigure the setup of a Platform Services Controller, the restore process might fail
If you repoint and reconfigure the setup of a Platform Services Controller, the restore process might fail due to a stale service ID entry.
Workaround: Follow the steps in VMware knowledge base article 2131327 to clean up the stale service ID before proceeding with restore.
- An Enhanced vMotion Compatibility (EVC) cluster might show new CPU IDs such as IBPB even if you revert an ESXi host to an older version
If you revert an ESXi host to an older version of ESXi, an EVC cluster might expose new CPU IDs, such as IBRS, STIBP and IBPB, even though the host does not have any of the features.
Workaround: This issue is resolved in this release. However, a host that does not meet the requirements of an EVC cluster does not automatically reconnect and you must remove it from the cluster.
- Some vCenter Server plug-ins might not correctly render the dark theme mode in the vSphere Client
If you change color schemes in the vSphere Client to display the interface in a dark theme, some vCenter Server plug-ins might not correctly render the mode.
Workaround: None
- If you enable per-VM EVC, virtual machines might fail to power on
If you install or upgrade to only VMware vCenter Server 6.7 Update 1, but do not apply ESXi 6.7 Update 1, and you configure or reconfigure per-VM EVC, virtual machines on unpatched hosts might fail to power on. You might also see the issue if you enable cluster-level EVC and even one of the hosts in the cluster is not patched with the latest update, because the new CPU IDs might not be available to the cluster. In such a cluster, if you configure or reconfigure per-VM EVC, virtual machines might fail to power on.
Workaround: Before you configure or reconfigure per-VM EVC, upgrade all the standalone ESXi hosts, as well as hosts inside a cluster, to the latest update for hypervisor-assisted guest mitigation for guest operating systems.
- Edits to the DNS settings might cause deletion of the IPv6 loopback address from the /etc/resolv.conf and /etc/systemd/resolved.conf files
Edits to the DNS settings by using either the Appliance Management Interface, the appliance shell, or the vSphere Web Client might cause deletion of the IPv6 loopback address from the /etc/resolv.conf and /etc/systemd/resolved.conf files.
Workaround: To avoid deletion of the IPv6 loopback address, edit the resolv.conf files by using the Bash shell:
- In the /etc/resolv.conf file, set the following parameters:
nameserver ::1
nameserver <dnsserver 1>
nameserver <dnsserver 2>
- In the /etc/systemd/resolved.conf file, set the following parameters:
[Resolve]
LLMNR=false
DNS=::1 <dnsserver 1> <dnsserver 2>
- The SSH service might be disabled after an external Platform Services Controller converts to an embedded Platform Services Controller
If you convert an external Platform Services Controller to an embedded Platform Services Controller, the SSH service might be disabled based on Active Directory policies and restrictions.
Workaround: Manually enable the SSH service after the conversion is complete.
- After upgrade to ESXi 6.7, networking workloads on Intel 10GbE NICs cause higher CPU utilization
If you run certain types of networking workloads on an upgraded ESXi 6.7 host, you might see a higher CPU utilization under the following conditions:
- The NICs on the ESXi host are from the Intel 82599EB or X540 families
- The workload involves multiple VMs that run simultaneously and each VM is configured with multiple vCPUs
- Before the upgrade to ESXi 6.7, the VMKLinux ixgbe driver was used
Workaround: Revert to the legacy VMKLinux ixgbe driver:
- Connect to the ESXi host and run the following command:
# esxcli system module set -e false -m ixgben
- Reboot the host.
Note: The legacy VMKLinux ixgbe inbox driver version 3.7.x does not support Intel X550 NICs. Use the VMKLinux ixgbe async driver version 4.x with Intel X550 NICs.
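The revert steps above can be sketched as an ESXi shell helper; the function wrapper and the confirmation listing are illustrative additions around the esxcli command from the workaround:

```shell
#!/bin/sh
# Sketch of the revert: disabling the native ixgben module lets the legacy
# VMKLinux ixgbe driver claim the NICs after reboot (run in the ESXi shell).
revert_to_ixgbe() {
    esxcli system module set -e false -m ixgben || return 1
    # Confirm that ixgben is now disabled before rebooting.
    esxcli system module list | grep ixgben
    echo "Reboot the host to load the legacy ixgbe driver."
}
```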
- Initial install of a Dell CIM VIB might fail to respond
After you install a third-party CIM VIB, it might fail to respond.
Workaround: To fix this issue, enter the following two commands to restart sfcbd:
esxcli system wbem set --enable false
esxcli system wbem set --enable true
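The two commands above can be wrapped in a helper (illustrative) for use in the ESXi shell; toggling the WBEM service off and on restarts sfcbd:

```shell
#!/bin/sh
# Restart sfcbd by disabling and re-enabling the WBEM service (ESXi shell).
restart_sfcbd() {
    esxcli system wbem set --enable false || return 1
    esxcli system wbem set --enable true || return 1
    echo "sfcbd restarted"
}
```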