VMware Cloud Foundation 5.1 | 07 NOV 2023 | Build 22688368
Check for additions and updates to these release notes.
VCF+/VCF-S customers cannot currently upgrade to VCF 5.1. We will update this notification once upgrade is available.
The VMware Cloud Foundation (VCF) 5.1 release includes the following:
Support for vSAN ESA: vSAN ESA is an alternative, single-tier architecture designed from the ground up for NVMe-based platforms to deliver higher performance with more predictable I/O latencies, higher space efficiency, per-object data services, and native, high-performance snapshots.
Non-DHCP option for Tunnel Endpoint (TEP) IP assignment: SDDC Manager now provides the option to select Static or DHCP-based IP assignments to Host TEPs for stretched clusters and L3 aware clusters.
vSphere Distributed Services engine for Ready nodes: AMD-Pensando and NVIDIA BlueField-2 DPUs are now supported. Offloading the Virtual Distributed Switch (VDS) and NSX network and security functions to the hardware provides significant performance improvements for low latency and high bandwidth applications. NSX distributed firewall processing is also offloaded from the server CPUs to the network silicon.
Multi-pNIC/Multi-vSphere Distributed Switch UI enhancements: VCF users can configure complex networking configurations, including more vSphere Distributed Switch and NSX switch-related configurations, through the SDDC Manager UI.
Distributed Virtual Port Group Separation for management domain appliances: Enables traffic isolation between management VMs (such as SDDC Manager, NSX Manager, and vCenter) and ESXi Management VMkernel interfaces.
Support for vSphere Lifecycle Manager images in management domain: VCF users can deploy the management domain using vSphere Lifecycle Manager (vLCM) images during new VCF instance deployment.
Mixed-mode Support for Workload Domains: A VCF instance can exist in a mixed BOM state where the workload domains are on different VCF 5.x versions. Note: The management domain should be on the highest version in the instance.
Asynchronous update of the pre-check files: The upgrade pre-checks can be updated asynchronously with new pre-checks using a pre-check file provided by VMware.
Workload domain NSX integration: Support for multiple NSX enabled VDSs for Distributed Firewall use cases
Tier-0/1 optional for VCF Edge cluster: When creating an Edge cluster with the VCF API, the Tier-0 and Tier-1 gateways are now optional.
VCF Edge nodes support static or pooled IP: When creating or expanding an Edge cluster using VCF APIs, Edge node TEP configuration may come from an NSX IP pool or be specified statically as in earlier releases.
Support for mixed license deployment: A combination of keyed and keyless licenses can be used within the same VCF instance.
Integration with VMware Identity Service: Provides identity federation and SSO across vCenter, NSX, and SDDC Manager. VCF administrators can add Okta to VMware Identity Service as a Day-N operation using the SDDC Manager UI.
VMware vRealize rebranding: VMware recently renamed the vRealize Suite of products to VMware Aria Suite. See the Aria Naming Updates blog post for more details.
VMware Validated Solutions: All VMware Validated Solutions are updated to support VMware Cloud Foundation 5.1. Visit VMware Validated Solutions for the updated guides.
BOM updates: Updated Bill of Materials with new product versions.
The VMware Imaging Appliance (VIA), included with the VMware Cloud Builder appliance to image ESXi servers, is deprecated and removed.
Starting with VMware Cloud Foundation 5.1, Configuration Drift Bundles are no longer needed as part of the upgrade process and are now deprecated.
The Composable Infrastructure feature is deprecated in VMware Cloud Foundation 5.1 and will be removed in a future release.
In a future release, the "Connect Workload Domains" option from the VMware Aria Operations card located in SDDC Manager > Administration > Aria Suite section will be removed and related VCF Public API options will be deprecated.
Starting with VMware Aria Operations 8.10, functionality for connecting VCF Workload Domains to VMware Aria Operations is available directly from the UI. Users are encouraged to use this method within the VMware Aria Operations UI for connecting VCF workload domains, even if the integration was originally set up using SDDC Manager.
The VMware Cloud Foundation software product comprises the following software Bill of Materials (BOM). The components in the BOM are interoperable and compatible.
Cloud Builder VM
07 NOV 2023
07 NOV 2023
VMware vCenter Server Appliance
8.0 Update 2a
26 OCT 2023
8.0 Update 2
21 SEP 2023
8.0 Update 2
21 SEP 2023
07 NOV 2023
VMware Aria Suite Lifecycle
19 OCT 2023
VMware vSAN is included in the VMware ESXi bundle.
You can use VMware Aria Suite Lifecycle to deploy VMware Aria Automation, VMware Aria Operations, VMware Aria Operations for Logs, and Workspace ONE Access. VMware Aria Suite Lifecycle determines which versions of these products are compatible and only allows you to install/upgrade to supported versions.
VMware Aria Operations for Logs content packs are installed when you deploy VMware Aria Operations for Logs.
The VMware Aria Operations management pack is installed when you deploy VMware Aria Operations.
You can access the latest versions of the content packs for VMware Aria Operations for Logs from the VMware Solution Exchange and the VMware Aria Operations for Logs in-product marketplace store.
The SDDC Manager software is licensed under the VMware Cloud Foundation license. As part of this product, the SDDC Manager software deploys specific VMware software products.
The following VMware software components deployed by SDDC Manager are licensed under the VMware Cloud Foundation license:
The following VMware software components deployed by SDDC Manager are licensed separately:
VMware vCenter Server
NOTE Only one vCenter Server license is required for all vCenter Servers deployed in a VMware Cloud Foundation system.
For details about the specific VMware software editions that are licensed under the licenses you have purchased, see the VMware Cloud Foundation Bill of Materials (BOM).
For general information about the product, see VMware Cloud Foundation.
To access the Cloud Foundation documentation, go to the VMware Cloud Foundation product documentation.
To access the documentation for VMware software products that SDDC Manager can deploy, see the product documentation and use the drop-down menus on the page to choose the appropriate version:
The VMware Cloud Foundation web-based interface supports the latest two versions of the following web browsers:
For the Web-based user interfaces, the supported standard resolution is 1920 by 1080 pixels.
You can install VMware Cloud Foundation 5.1 as a new release or perform a sequential or skip-level upgrade to VMware Cloud Foundation 5.1.
Installing as a New Release
The new installation process has three phases:
Phase One: Prepare the Environment: The Planning and Preparation Workbook provides detailed information about the software, tools, and external services that are required to implement a Software-Defined Data Center (SDDC) with VMware Cloud Foundation, using a standard architecture model.
Phase Two: Image all servers with ESXi: Image all servers with the ESXi version mentioned in the Cloud Foundation Bill of Materials (BOM) section. See the VMware Cloud Foundation Deployment Guide for information on installing ESXi.
Phase Three: Install Cloud Foundation 5.1: See the VMware Cloud Foundation Deployment Guide for information on deploying Cloud Foundation.
Upgrading to Cloud Foundation 5.1
There is no direct upgrade path from VCF 4.4.x to VCF 5.1. You must first upgrade the SDDC Manager only to VCF 126.96.36.199, and then you can "Plan upgrade" to VCF 5.1.
You can perform a sequential or skip-level upgrade to VMware Cloud Foundation 5.1 from VMware Cloud Foundation 4.4.x or later. If your environment is at a version earlier than 4.4.x, you must upgrade the management domain and all VI workload domains to VMware Cloud Foundation 4.5.x or above and then upgrade to VMware Cloud Foundation 5.1. For more information see VMware Cloud Foundation Lifecycle Management.
Before you upgrade a vCenter Server, take a file-based backup. See Manually Back Up vCenter Server.
Since VMware Cloud Foundation 5.1 disables the SSH service by default, scripts that rely on SSH being enabled on ESXi hosts will not work after upgrading to VMware Cloud Foundation 5.1. Update your scripts to account for this new behavior. See KB 86230 for information about enabling and disabling the SSH service on ESXi hosts.
The following issue is resolved in this release:
This release resolves CVE-2023-34048 and CVE-2023-34056. For more information on these vulnerabilities and their impact on VMware products, see VMSA-2023-0023.
New - When creating a workload domain with vVol storage and the smart NIC feature enabled, hosts are not visible in the host selection data grid.
On the host selection page of the VI workload domain creation wizard, if there are fewer than 3 commissioned hosts for vVol storage, the hosts will not be displayed in the data grid.
The same issue occurs on the host selection page in the Add Cluster wizard for a workload domain with vVol storage, if there are fewer than 3 commissioned hosts.
Workaround: Have more than 2 hosts commissioned for the vVol storage type when creating the VI workload domain or cluster. Alternatively, since creating a VI workload domain directly through the API works correctly when there are fewer than 3 commissioned hosts for vVol storage, you can use the APIs directly to create VI domains and clusters with vVol storage in that case.
New - Incorrect warning is shown in UI during Create Domain/Add Cluster using Static IP Pool
When creating a domain or adding a cluster from the UI, an incorrect warning is displayed if Static IP Pool is selected for IP Allocation. The warning states:
Clusters with a static IP pool cannot be stretched across availability zones.
Workaround: This warning can be ignored.
New - Creating Sub Transport Node Profile (TNP) per cluster does not enforce maximum limit
Creating a new domain or cluster, or expanding a cluster, with a Sub TNP configuration can fail if the maximum limit of 16 Sub TNPs per cluster is exceeded.
Workaround: Delete the failed workload domain or cluster.
New - Edge node uplink information is blank during expansion
When attempting to expand an Edge cluster by more than two Edge nodes with a shared uplink network, the uplink information may appear blank for all nodes except the first one added during expansion.
Workaround: Expand the Edge cluster one node at a time, specifying the uplink details for each node. Alternatively, use the API or developer tools, since this is a limitation of the UI only. See KB article 95188.
New - Cosmetic issue: References to NSX+ in SDDC UI
NSX+ references in SDDC UI have no impact on functionality and should be ignored.
New - Password rotation for Platform Services Controller (PSC) fails with error:
Unable to connect to entity : VRLI
This issue can occur when the password for VMware Aria Operations for Logs (formerly vRealize Log Insight or VRLI) is changed in SDDC Manager, followed by a password rotation for the vCenter Single Sign-On (SSO) account (e.g. firstname.lastname@example.org) in the PSC. The following error message may appear:
Progress Message: Unable to connect to entity : VRLI. Cause: JSONObject["sessionId"] not found.
Workaround: Unlock the admin account by following these steps:
root@vrli-workone [ ~ ]# /usr/lib/loginsight/application/sbin/li-reset-admin-passwd.sh --unlock Admin
Admin user is LOCKED. Resetting.
SUCCESS: admin user unlocked successfully.
New - When networkProfileName is an empty string, Add Host fails.
Adding a host to a cluster fails when networkProfileName is empty.
Workaround:
1. Log in to the NSX Manager associated with the workload domain to which the hosts are being added. (SDDC Manager UI > Workload Domains > Services > NSX Cluster)
2. Note the cluster to which the host is being added. (SDDC Manager UI > Workload Domains > Clusters)
3. In NSX Manager, navigate to Systems > Fabric > Hosts > Cluster and expand the cluster view for the cluster noted in step 2.
4. For each host given in the cluster expansion API payload, select the host and click ACTIONS > Change Sub-Cluster > None.
5. Once all the hosts in the host addition payload are removed from the sub-cluster (the sub-cluster name also appears in the error message), go to SDDC Manager UI > Workload Domains > Clusters, select the cluster to which the hosts were being added, select the hosts provided in the payload, and trigger the Remove Host workflow.
6. Decommission the hosts and recommission them to add the hosts to the SDDC cluster with the correct payload. Verify that networkProfileName is non-empty.
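The workaround ultimately depends on resubmitting a payload with a non-empty networkProfileName. As an illustrative pre-flight check (a sketch only; the payload fragment below is hypothetical and not the actual VCF API schema), a script can scan a request body for empty networkProfileName values before submission:

```python
# Illustrative guard (not part of the VCF API itself): scan a cluster-expansion
# payload for empty networkProfileName values before submitting it, to avoid
# the Add Host failure described above.
def find_empty_network_profiles(obj, path="$"):
    """Return JSON paths where networkProfileName is missing or empty."""
    hits = []
    if isinstance(obj, dict):
        for key, value in obj.items():
            child = f"{path}.{key}"
            if key == "networkProfileName" and (value is None or str(value).strip() == ""):
                hits.append(child)
            hits.extend(find_empty_network_profiles(value, child))
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            hits.extend(find_empty_network_profiles(value, f"{path}[{i}]"))
    return hits

# Hypothetical payload fragment for illustration only.
payload = {"hostSpecs": [{"hostname": "esx-1", "networkProfileName": ""}]}
print(find_empty_network_profiles(payload))  # ['$.hostSpecs[0].networkProfileName']
```

Running such a check before calling the cluster expansion API avoids triggering the failed workflow in the first place.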
Lifecycle Management Precheck does not throw an error when NSX Manager inventory is out of sync
The Lifecycle Management Precheck displays a green status and does not generate any errors even when the NSX Manager inventory is out of sync.
The upgrade screen displays the incorrect version when upgrading from VCF 4.3 to version 5.0.
The VCF UI shows the incorrect source version when upgrading from version 4.3 to 5.0. However, the actual upgrade is still correctly performing a direct skip upgrade from the source version to target version.
Creating workload domains in parallel does not enforce maximum limit of compute managers per NSX Manager.
Creating workload domains in parallel can lead to one or more workload domain creations to fail since the compute managers limit per NSX Manager will be exceeded.
Workaround: Delete the failed workload domain.
In VCF 5.0 Mixed BOM with vCenter Enhanced Linked Mode, you can see vCenter Server systems of version 8.0 from a 7.x vCenter Server instance
If you have a vCenter Enhanced Linked Mode group that contains vCenter Server instances of versions 8.x and 7.x, when you log in to a 7.x vCenter Server instance, in the vSphere Client, you can see vCenter Server systems of version 8.0. Since vCenter Server 8.0 introduces new functionalities, you can run workflows specific to vSphere 8.0 on the 8.0 vCenter Server, but this might lead to unexpected results when run from the 7.x vSphere Client.
Workaround: This issue is fixed in vSphere 7.0 P07.
Upgrade Pre-Check Scope dropdown may contain additional entries
When performing Upgrade Prechecks through SDDC Manager UI and selecting a target VCF version, the Pre-Check Scope dropdown may contain more selectable entries than necessary. SDDC Manager may appear as an entry more than once. It also may be included as a selectable component for VI domains, although it's a component of the management domain.
Workaround: None. The issue is visual with no functional impact.
Converting clusters from vSphere Lifecycle Manager baselines to vSphere Lifecycle Manager images is not supported.
vSphere Lifecycle Manager baselines (previously known as vSphere Update Manager or VUM) are deprecated in vSphere 8.0, but continue to be supported. See KB article 89519 for more information.
VMware Cloud Foundation 5.0 does not support converting clusters from vSphere Lifecycle Manager baselines to vSphere Lifecycle Manager images. This capability will be supported in a future release.
Workload Management does not support NSX Data Center Federation
You cannot deploy Workload Management (vSphere with Tanzu) to a workload domain when that workload domain's NSX Data Center instance is participating in an NSX Data Center Federation.
NSX Guest Introspection (GI) and NSX Network Introspection (NI) are not supported on stretched clusters
There is no support for stretching clusters where NSX Guest Introspection (GI) or NSX Network Introspection (NI) are enabled. VMware Cloud Foundation detaches Transport Node Profiles from AZ2 hosts to allow AZ-specific network configurations. NSX GI and NSX NI require that the same Transport Node Profile be attached to all hosts in the cluster.
Stretched clusters and Workload Management
You cannot stretch a cluster on which Workload Management is deployed.
New - LCM default manifest has incorrect details for VCF 188.8.131.52
If the latest LCM manifest is not present in the VCF inventory, the VCF 5.1 SDDC Manager UI Release Versions page will display inconsistent details for the VCF 5.1 release.
Workaround: Refresh the manifest by connecting to the VMware Depot or manually update to the latest manifest file using the Bundle Transfer Utility.
New - VCF ESXi Upgrade with 'quick boot' option fails for hosts configured with TPM.
After performing an upgrade, the host is stuck during quick boot in a PSOD with the message: A security violation was detected.
Workaround: Rebooting the host will complete the upgrade.
New - vCenter upgrade fails during cleanup
vCenter patching fails at the converting PostInstall stage while the services are up, causing the vCenter Server to be unstable. vCenter Server Appliance patching fails with the error: Failed to perform cleanup.
Workaround:
1. Follow the steps in KB 94904.
2. Retry the upgrade from the VMware Cloud Foundation UI.
New - When the SDDC Manager upgrades, the SDDC Manager UI does not redirect to the upgrade process screen.
When upgrading the SDDC Manager from 184.108.40.206 to 220.127.116.11, the upgrade moves to a SCHEDULED state. When the upgrade moves to IN PROGRESS, the UI hangs.
Workaround: Refresh the browser. The refreshed VCF Upgrade UI screen now shows the current upgrade status.
New - The "Schedule Update/Retry Upgrade" button should be disabled if the existing upgrade is in progress.
When an upgrade is in progress for a cluster that is part of a domain, the UI still allows the user to select and start the upgrade for another cluster, but the action is blocked at the FINISH button.
Workaround: Do not trigger another upgrade when an upgrade is in progress on a domain.
New - The FINISH button in the ESXi Upgrade UI allows multiple clicks and upgrades to be initiated.
When the FINISH button is clicked more than once, multiple upgrades are initiated against the same bundle.
Workaround: No workaround. You can see all of the triggered upgrades in the Task panel. All upgrades initiated after the first click of the FINISH button will fail.
New - On-demand pre-checks for vCenter 80U1 bundle might fail in a specific scenario
The steps for the scenario are listed below:
SDDC Manager is upgraded to VCF 18.104.22.168 from 4.5.x
BOM components are not upgraded to VCF 22.214.171.124
Bundles for VCF 126.96.36.199 are downloaded and the pre-check is run by selecting 188.8.131.52 as the target version
The on-demand pre-check fails.
Workaround: Choose one of the following options:
Option 1: Upgrade SDDC Manager to VCF 184.108.40.206 and run the pre-checks.
Option 2: Run the vCenter upgrade and check whether there are any issues during that stage.
Option 3: See KB article 94862 to run the on-demand prechecks out-of-band (OOB), not through VCF.
VCF ESXi upgrade fails during post validation due to HA related cluster configuration issue
The upgrade of an ESXi cluster fails with an error similar to the following:
Cluster Configuration Issue: vSphere HA failover operation in progress in cluster <cluster-name> in datacenter <datacenter-name>: 0 VMs being restarted, 1 VMs waiting for a retry, 0 VMs waiting for resources, 0 inaccessible vSAN VMs
Workaround: See KB article 90985.
Precheck for NSX has failed with ERROR message
If the Upgrade Prerequisite "Backup SDDC Manager, all vCenter Servers, and NSX Managers" is ignored and the SFTP location defined for the NSX Configuration Backup does not have enough disk space, then the ERROR message "Precheck for NSX has failed" will display. Although the root cause of the error (sftp server disk is full) appears in the Operations Manager log, it is not currently displayed in SDDC Manager.
Workaround: Increase the amount of available disk space on the SFTP server and then complete the Upgrade Prerequisite before proceeding.
NSX upgrade may fail if there are any active alarms in NSX Manager
If there are any active alarms in NSX Manager, the NSX upgrade may fail.
Workaround: Check the NSX Manager UI for active alarms prior to NSX upgrade and resolve them, if any. If the alarms are not resolved, the NSX upgrade will fail. The upgrade can be retried once the alarms are resolved.
VC upgrade fails during install
When running the workload VC upgrade from VCF, it fails at the VC install with the target vc upgrade precheck stage failing.
Workaround: Do not reuse the same temporary IP for sequential VC upgrades. Use a separate temporary IP for every VC undergoing upgrade.
SDDC Manager upgrade fails at "Setup Common Appliance Platform"
If a virtual machine reconfiguration task (for example, removing a snapshot or running a backup) is taking place in the management domain at the same time you are upgrading SDDC Manager, the upgrade may fail.
Workaround: Schedule SDDC Manager upgrades for a time when no virtual machine reconfiguration tasks are happening in the management domain. If you encounter this issue, wait for the other tasks to complete and then retry the upgrade.
Parallel upgrades of vCenter Server are not supported
If you attempt to upgrade vCenter Server for multiple VI workload domains at the same time, the upgrade may fail while changing the permissions for the vpostgres configuration directory in the appliance. The message chown -R vpostgres:vpgmongrp /storage/archive/vpostgres appears in the PatchRunner.log file on the vCenter Server Appliance.
Workaround: Each vCenter Server instance must be upgraded separately.
When you upgrade VMware Cloud Foundation, one of the vSphere Cluster Services (vCLS) agent VMs gets placed on local storage
vSphere Cluster Services (vCLS) ensures that cluster services remain available, even when the vCenter Server is unavailable. vCLS deploys three vCLS agent virtual machines to maintain cluster services health. When you upgrade VMware Cloud Foundation, one of the vCLS VMs may get placed on local storage instead of shared storage. This could cause issues if you delete the ESXi host on which the VM is stored.
Workaround: Deactivate and reactivate vCLS on the cluster to deploy all the vCLS agent VMs to shared storage.
Check the placement of the vCLS agent VMs for each cluster in your environment.
In the vSphere Client, select Menu > VMs and Templates.
Expand the vCLS folder.
Select the first vCLS agent VM and click the Summary tab.
In the Related Objects section, check the datastore listed for Storage. It should be the vSAN datastore. If a vCLS agent VM is on local storage, you need to deactivate vCLS for the cluster and then re-enable it.
Repeat these steps for all vCLS agent VMs.
Deactivate vCLS for clusters that have vCLS agent VMs on local storage.
In the vSphere Client, click Menu > Hosts and Clusters.
Select a cluster that has a vCLS agent VM on local storage.
In the web browser address bar, note the moref id for the cluster.
For example, if the URL displays as https://vcenter-1.vrack.vsphere.local/ui/app/cluster;nav=h/urn:vmomi:ClusterComputeResource:domain-c8:503a0d38-442a-446f-b283-d3611bf035fb/summary, then the moref id is domain-c8.
Select the vCenter Server containing the cluster.
Click Configure > Advanced Settings.
Click Edit Settings.
Change the value for config.vcls.clusters.<moref id>.enabled to false and click Save.
If the config.vcls.clusters.<moref id>.enabled setting does not appear for your moref id, enter its Name, enter false for the Value, and click Add.
Wait a couple of minutes for the vCLS agent VMs to be powered off and deleted. You can monitor progress in the Recent Tasks pane.
Enable vCLS for the cluster to place the vCLS agent VMs on shared storage.
Select the vCenter Server containing the cluster and click Configure > Advanced Settings.
Click Edit Settings.
Change the value for config.vcls.clusters.<moref id>.enabled to true and click Save.
Wait a couple of minutes for the vCLS agent VMs to be deployed and powered on. You can monitor progress in the Recent Tasks pane.
Check the placement of the vCLS agent VMs to make sure they are all on shared storage.
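The moref id used in the config.vcls.clusters.<moref id>.enabled setting can also be extracted from the vSphere Client URL programmatically. A minimal sketch, using the example URL from the procedure above:

```python
import re

# Extract the cluster moref id (for example "domain-c8") from a vSphere Client
# URL, then build the vCLS advanced setting name that uses it.
def moref_from_url(url):
    match = re.search(r"ClusterComputeResource:(domain-c\d+)", url)
    return match.group(1) if match else None

url = ("https://vcenter-1.vrack.vsphere.local/ui/app/cluster;nav=h/"
       "urn:vmomi:ClusterComputeResource:domain-c8:503a0d38-442a-446f-b283-d3611bf035fb/summary")
print(moref_from_url(url))                                    # domain-c8
print(f"config.vcls.clusters.{moref_from_url(url)}.enabled")  # config.vcls.clusters.domain-c8.enabled
```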
SDDC Manager UI issues when running multiple parallel upgrade prechecks
If you initiate a precheck on more than one workload domain at the same time, the SDDC Manager UI may flicker and show incorrect information.
Workaround: Do not run multiple parallel prechecks. If you do, wait until the prechecks are complete to evaluate the results.
Upgrade precheck results for ESXi display the error "TPM 2.0 device detected but a connection cannot be established."
This issue can occur for ESXi hosts that have Trusted Platform Modules (TPM) chips partially configured.
Workaround: Ensure that the TPM is configured in the ESXi host's BIOS to use the SHA-256 hashing algorithm and the TIS/FIFO (First-In, First-Out) interface and not CRB (Command Response Buffer). For information about setting these required BIOS options, refer to the vendor documentation.
Performing parallel prechecks on multiple workload domains that use vSphere Lifecycle Manager images may fail
If you perform parallel prechecks on multiple workload domains that use vSphere Lifecycle Manager images at the same time as you are performing parallel upgrades, the prechecks may fail.
Workaround: Use the following guidance to plan your upgrades and prechecks for workload domains that use vSphere Lifecycle Manager images.
For parallel upgrades, VMware Cloud Foundation supports up to five workload domains with up to five clusters each.
For parallel prechecks, VMware Cloud Foundation supports up to three workload domains with up to four clusters each.
Do not run parallel upgrades and prechecks at the same time.
Using the /v1/upgrades API to trigger parallel cluster upgrades across workload domains in a single API call does not upgrade the clusters in parallel
When using the VMware Cloud Foundation API to upgrade multiple workload domains in parallel, including multiple resource upgrade specifications (resourceUpgradeSpec) in a single domain upgrade API (/v1/upgrades) call does not work as expected.
Workaround: To get the best performance when upgrading multiple workload domains in parallel using the VMware Cloud Foundation API, do not include multiple resource upgrade specifications (resourceUpgradeSpec) in a single domain upgrade call. Instead, invoke the domain upgrade multiple times with a single resourceUpgradeSpec for each workload domain.
You can also use the SDDC Manager UI to trigger multiple parallel upgrades across workload domains.
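The per-call pattern can be sketched as follows. This is a sketch, not the exact VCF API schema: the payload field names (bundleId, resourceUpgradeSpecs) are simplified assumptions, and submit() is a stand-in for the authenticated HTTP POST to SDDC Manager.

```python
# Issue one /v1/upgrades call per workload domain, each carrying a single
# resourceUpgradeSpec, instead of packing several specs into one call.
def upgrade_domains_individually(bundle_id, resource_upgrade_specs, submit):
    task_ids = []
    for spec in resource_upgrade_specs:
        payload = {"bundleId": bundle_id, "resourceUpgradeSpecs": [spec]}
        task_ids.append(submit("/v1/upgrades", payload))
    return task_ids

# Stub submitter for demonstration; a real one would POST to SDDC Manager.
calls = []
def submit(path, payload):
    calls.append(payload)
    return f"task-{len(calls)}"

specs = [{"resourceId": "domain-1"}, {"resourceId": "domain-2"}]
print(upgrade_domains_individually("bundle-42", specs, submit))  # ['task-1', 'task-2']
```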
NSX Data Center upgrade fails at "NSX T PERFORM BACKUP"
If you did not change the destination of NSX Manager backups to an external SFTP server, upgrades may fail due to an out-of-date SSH fingerprint for SDDC Manager.
Workaround:
Log in to the NSX Manager UI.
Click System > Backup & Restore.
Click Edit for the SFTP Server.
Remove the existing SSH fingerprint and click Save.
Click Add to add the server provided fingerprint.
Retry the NSX Data Center upgrade from the SDDC Manager UI.
Cluster-level ESXi upgrade fails
Cluster-level selection during upgrade does not consider the health status of the clusters and may show a cluster's status as Available, even for a faulty cluster. If you select a faulty cluster, the upgrade fails.
Workaround: Always perform an update precheck to validate the health status of the clusters. Resolve any issues before upgrading.
You are unable to update NSX Data Center in the management domain or in a workload domain with vSAN principal storage because of an error during the NSX transport node precheck stage
In SDDC Manager, when you run the upgrade precheck before updating NSX Data Center, the NSX transport node validation fails with the following error.
No coredump target has been configured. Host core dumps cannot be saved.:System logs on host sfo01-m01-esx04.sfo.rainpole.io are stored on non-persistent storage. Consult product documentation to configure a syslog server or a scratch partition.
Because the upgrade precheck results in an error, you cannot proceed with updating the NSX Data Center instance in the domain. VMware Validated Design supports vSAN as the principal storage in the management domain. However, vSAN datastores do not support scratch partitions. See VMware Knowledge Base article 2074026.
Workaround: Disable the update precheck validation for the subsequent NSX Data Center update.
Log in to SDDC Manager as vcf using a Secure Shell (SSH) client.
Open the application-prod.properties file for editing:
Add the following property and save the file:
Restart the life cycle management service:
systemctl restart lcm
Log in to the SDDC Manager user interface and proceed with the update of NSX Data Center.
NSX-T upgrade may fail at the step NSX T TRANSPORT NODE POSTCHECK STAGE
NSX-T upgrade may not proceed beyond the NSX T TRANSPORT NODE POSTCHECK STAGE.
Workaround: Contact VMware support.
ESXi upgrade fails with the error "Incompatible patch or upgrade files. Please verify that the patch file is compatible with the host. Refer LCM and VUM log file."
This error occurs if any of the ESXi hosts that you are upgrading have detached storage devices.
Workaround: Attach all storage devices to the ESXi hosts being upgraded, reboot the hosts, and retry the upgrade.
Update precheck fails with the error "Password has expired"
If the vCenter Single Sign-On password policy specifies a maximum lifetime of zero (never expires), the precheck fails.
Workaround: Set the maximum lifetime password policy to something other than zero and retry the precheck.
Bring-up Network Configuration Validation fails with "Gateway IP Address for Management is not contactable"
The failure "Gateway IP Address for MANAGEMENT is not contactable" is reported as a fatal error in the Cloud Builder UI and bring-up cannot continue. In some cases the validation fails to verify connectivity because it uses a set of predefined ports, even though ping is working.
Workaround: See KB 89990 for more information.
New - The SDDC Manager UI shows VMware Aria Suite Lifecycle 8.12 instead of 8.14.
If the latest VMware Aria Suite Lifecycle install bundle is not downloaded, SDDC Manager UI suggests that install bundle for VMware Aria Suite Lifecycle 8.12 is missing, instead of 8.14.
Workaround: Download the install bundle for VMware Aria Suite Lifecycle 8.14. Once the download is complete, the deployment option will become available.
New - VMware Aria Operations admin account appears as disconnected
SDDC Manager incorrectly shows the VMware Aria Operations admin account as disconnected due to an expired password. The admin account password used for logging into the VMware Aria Operations UI never expires, but SDDC Manager is actually checking the virtual appliance (Photon OS) admin account password.
Workaround: To clear the expired password/disconnected alert in SDDC Manager:
Log in to the affected VMware Aria Operations node and update the virtual appliance admin password.
In SDDC Manager, remediate (or rotate or update) the password for the expired account. Or, use the VMware Cloud Foundation API to run
SDDC Manager license key is not needed
An SDDC Manager license key is not needed, and any errors related to an existing SDDC Manager license key can be ignored.
New - SDDC Manager incorrectly renders Dark Mode due to inheriting OS settings in VCF 5.0
SDDC Manager UI does not natively support Dark Mode. It attempts to render Dark Mode based on the OS settings and may cause issues with rendering some screens.
Workaround: There are two possible ways to workaround this issue:
Turn off Dark Mode at the OS level and clear the browser cache, or
Execute the following script in the Developer console of the browser you are using (replacing the Domain field with the FQDN of your system):
document.cookie = 'clarity-theme=Light; Max-Age=2147483647; path=/; Domain=sddc-manager.vrack.vsphere.local; SameSite=Lax'
SDDC Manager UI always shows Local OS as default identity source
If you add Active Directory over LDAP or OpenLDAP as an identity source in SDDC Manager and use the vSphere Client to set that identity source as the default, the SDDC Manager UI (Administration > Single Sign On > Identity Provider) continues to show Local OS as the default identity source.
Workaround: Use the vSphere Client to confirm the actual default identity source.
Name resolution fails when configuring the NTP server
Under certain conditions, name resolution may fail when you configure an NTP server.
Workaround: Run the following command using the FQDN of the failed resource(s) to ensure name resolution is successful and then retry the NTP server configuration.
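The release notes do not name the command; a common way to verify name resolution from the SDDC Manager VM is `nslookup` or `getent hosts`. The sketch below uses `getent hosts`, with `localhost` as a stand-in for the FQDN(s) of the failed resource(s):

```shell
# Check that each FQDN resolves before retrying the NTP configuration.
# "localhost" is a placeholder -- use the FQDN(s) of the failed resource(s).
for fqdn in localhost; do
    if getent hosts "$fqdn" >/dev/null; then
        echo "$fqdn: resolves"
    else
        echo "$fqdn: DOES NOT resolve -- fix DNS before retrying"
    fi
done
```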
Generate CSR task for a component hangs
When you generate a CSR, the task may fail to complete due to issues with the component's resources. For example, when you generate a CSR for NSX Manager, the task may fail to complete due to issues with an NSX Manager node. You cannot retry the task once the resource is up and running again.
Log in to the UI for the component to troubleshoot and resolve any issues.
Using SSH, log in to the SDDC Manager VM with the user name
Type su to switch to the root account.
Run the following command:
systemctl restart operationsmanager
Retry generating the CSR.
SoS utility options for health check are missing information
Due to limitations of the ESXi service account, some information is unavailable in the following health check options:
Devices and Driver information for ESXi hosts.
vSAN Health Status or Total no. of disks information for ESXi hosts.
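For context, these health check options are run through the SoS utility on the SDDC Manager VM. The sketch below uses the standard SoS location and the general `--health-check` option; it is an illustration, not a fix for the missing information:

```shell
# Path of the SoS utility on the SDDC Manager VM.
SOS=/opt/vmware/sddc-support/sos

if [ -x "$SOS" ]; then
    # Run the overall health check (includes the options noted above;
    # the ESXi device/driver and vSAN fields may be incomplete).
    "$SOS" --health-check
else
    echo "sos utility not found; run this on the SDDC Manager VM"
fi
```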
Heterogeneous operations "Cluster Creation" and "VI Creation" cannot run in parallel when they operate against the same shared NSX instance.
If a VI Creation workflow is running against an NSX resource, you cannot create a cluster on domains that share that NSX instance.
Workaround: None. Wait for the VI Creation workflow to complete before starting the cluster creation workflow.
An unexpected replication agreement remains after the successful decommission of the SSO node task
The decommission of the SSO node succeeds, but a replication agreement with the removed partner remains.
Workaround: Manually remove the replication agreement of the invalid SSO node, and restart the delete workload domain.
The vCenter command /usr/lib/vmware-vmdir/bin/vdcrepadmin can be used to add/remove replication agreements.
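A sketch of the calls involved, run on the vCenter Server appliance. The node names and credentials are placeholders; `showpartners` and `removeagreement` are the relevant `vdcrepadmin` functions:

```shell
VDCREPADMIN=/usr/lib/vmware-vmdir/bin/vdcrepadmin
NODE="vcenter01.example.local"      # a healthy SSO node (placeholder)
STALE="sso-removed.example.local"   # the decommissioned node (placeholder)

if [ -x "$VDCREPADMIN" ]; then
    # List current replication partners to find the stale agreement.
    "$VDCREPADMIN" -f showpartners -h "$NODE" -u administrator -w 'sso-password'
    # Remove the agreement that still points at the decommissioned node.
    "$VDCREPADMIN" -f removeagreement -2 -h "$NODE" -H "$STALE" \
        -u administrator -w 'sso-password'
else
    echo "vdcrepadmin not found; run this on the vCenter Server appliance"
fi
```

After the stale agreement is removed, restart the delete workload domain workflow.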
SDDC Manager UI incorrectly shows "Owner" of Isolated Workload Domain as the management domain SSO admin user
When creating an isolated workload domain, the "Workload Domains" view in the UI incorrectly shows the "Owner" as the management domain SSO admin user.
Workaround: To see the correct SSO domain, select "SSO Domain" in the table configuration to show it as a column. The Owner column is not applicable for isolated workload domains.
SDDC Manager UI allows you to select an unsupported NSX Manager instance
When you create a new VI workload domain, it cannot share an NSX Manager instance with a VI workload domain that is in different SSO domain. Although the SDDC Manager UI allows you to select an unsupported NSX Manager instance, the VI workload domain creation task will fail.
Adding host fails when host is on a different VLAN
A host add operation can sometimes fail if the host is on a different VLAN.
Before adding the host, add a new portgroup to the VDS for that cluster.
Tag the new portgroup with the VLAN ID of the host to be added.
Add the host. This workflow fails at the "Migrate host vmknics to dvs" operation.
Locate the failed host in vCenter, and migrate the vmk0 of the host to the new portgroup you created earlier. For more information, see Migrate VMkernel Adapters to a vSphere Distributed Switch in the vSphere product documentation.
Retry the Add Host operation.
NOTE: If you later remove this host, you must also manually remove the portgroup if no other host is using it.
Deploying partner services on an NSX workload domain displays an error
Deploying partner services, such as McAfee or Trend, on a workload domain enabled for vSphere Update Manager (VUM), displays the “Configure NSX at cluster level to deploy Service VM” error.
Attach the Transport node profile to the cluster and try deploying the partner service. After the service is deployed, detach the transport node profile from the cluster.
If the witness ESXi version does not match with the host ESXi version in the cluster, vSAN cluster partition may occur
The vSAN stretch cluster workflow does not check the ESXi version of the witness host. If the witness ESXi version does not match the host version in the cluster, a vSAN cluster partition may occur.
Upgrade the witness host manually to the matching ESXi version using the vCenter VUM functionality.
Replace or redeploy the witness appliance with a matching ESXi version.
vSAN partition and critical alerts are generated when the witness MTU is not set to 9000
If the MTU of the witness switch in the witness appliance is not set to 9000, the vSAN stretch cluster partition may occur.
Set the MTU of the witness switch in the witness appliance to 9000.
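One way to make this change from the witness appliance's ESXi shell is with `esxcli` (a sketch; `witnessSwitch` is the default switch name in the vSAN witness appliance and may differ in your environment):

```shell
# Set the witness switch MTU to 9000 and confirm the change.
# "witnessSwitch" is the default name in the witness appliance -- adjust if needed.
if command -v esxcli >/dev/null 2>&1; then
    esxcli network vswitch standard set -v witnessSwitch -m 9000
    esxcli network vswitch standard list -v witnessSwitch | grep -i mtu
else
    echo "esxcli not found; run this in the witness appliance's ESXi shell"
fi
```

Note that the VMkernel adapter carrying witness traffic must also support the larger MTU end to end.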
Adding a host to a vLCM-enabled workload domain configured with the Dell Hardware Support Manager (OMIVV) fails
When you try to add a host to a vSphere cluster for a workload domain enabled with vSphere Lifecycle Manager (vLCM), the task fails and the domain manager log reports "The host (host-name) is currently not managed by OMIVV." The domain manager logs are located at /var/log/vmware/vcf/domainmanager on the SDDC Manager VM.
Update the hosts inventory in OMIVV and retry the add host task in the SDDC Manager UI. See the Dell documentation for information about updating the hosts inventory in OMIVV.
Adding a vSphere cluster or adding a host to a workload domain fails
Under certain circumstances, adding a host or vSphere cluster to a workload domain fails at the Configure NSX Transport Node or Create Transport Node Collection subtask.
Enable SSH for the NSX Manager VMs.
SSH into the NSX Manager VMs as admin and then log in as root.
Run the following command on each NSX Manager VM: sysctl -w net.ipv4.tcp_en=0
Log in to the NSX Manager UI for the workload domain.
Navigate to System > Fabric > Nodes > Host Transport Nodes.
Select the vCenter server for the workload domain from the Managed by drop-down menu.
Expand the vSphere cluster and navigate to the transport nodes that are in a partial success state.
Select the check box next to a partial success node and click Configure NSX.
Click Next and then click Apply.
Repeat the previous three steps for each node in a partial success state.
When all host issues are resolved, transport node creation starts for the failed nodes. When all hosts are successfully created as transport nodes, retry the failed add vSphere cluster or add host task from the SDDC Manager UI.
The vSAN Performance Service is not enabled for vSAN clusters when CEIP is not enabled
If you do not enable the VMware Customer Experience Improvement Program (CEIP) in SDDC Manager, when you create a workload domain or add a vSphere cluster to a workload domain, the vSAN Performance Service is not enabled for vSAN clusters. When CEIP is enabled, data from the vSAN Performance Service is provided to VMware and this data is used to aid VMware Support with troubleshooting and for products such as VMware Skyline, a proactive cloud monitoring service. See Customer Experience Improvement Program for more information on the data collected by CEIP.
Enable CEIP in SDDC Manager. See the VMware Cloud Foundation Documentation. After CEIP is enabled, a scheduled task that enables the vSAN Performance Service on existing clusters in workload domains runs every three hours. The service is also enabled for new workload domains and clusters. To enable the vSAN Performance Service immediately, see the VMware vSphere Documentation.
Creation or expansion of a vSAN cluster with more than 32 hosts fails
By default, a vSAN cluster can grow up to 32 hosts. With large cluster support enabled, a vSAN cluster can grow up to a maximum of 64 hosts. However, even with large cluster support enabled, a creation or expansion task can fail on the sub-task Enable vSAN on vSphere Cluster.
Enable Large Cluster Support for the vSAN cluster in the vSphere Client. If it is already enabled, skip to running the vSAN health check.
Select the vSAN cluster in the vSphere Client.
Select Configure > vSAN > Advanced Options.
Enable Large Cluster Support.
Run a vSAN health check to see which hosts require rebooting.
Put the hosts into Maintenance Mode and reboot the hosts.
For more information about large cluster support, see https://kb.vmware.com/kb/2110081.
Removing a host from a cluster, deleting a cluster from a workload domain, or deleting a workload domain fails if Service VMs (SVMs) are present
If you deployed an endpoint protection service (such as guest introspection) to a cluster through NSX Data Center, then removing a host from the cluster, deleting the cluster, or deleting the workload domain containing the cluster will fail on the subtask Enter Maintenance Mode on ESXi Hosts.
For host removal: Delete the Service VM from the host and retry the operation.
For cluster deletion: Delete the service deployment for the cluster and retry the operation.
For workload domain deletion: Delete the service deployment for all clusters in the workload domain and retry the operation.
vCenter Server overwrites the NFS datastore name when adding a cluster to a VI workload domain
If you add an NFS datastore with the same NFS server IP address, but a different NFS datastore name, as an NFS datastore that already exists in the workload domain, then vCenter Server applies the existing datastore name to the new datastore.
If you want to add an NFS datastore with a different datastore name, then it must use a different NFS server IP address.
Creating or validating a new cluster fails if the cluster name does not meet the naming requirements
When you use the VMware Cloud Foundation API to create a cluster, the cluster name must meet the following requirements:
length between 3 and 20 characters
only alphanumeric characters ("A-Z", "a-z", "0-9") and hyphen ("-")
If the cluster name does not meet these requirements, validating the input specification and creating the cluster fails.
Workaround: Make sure that your cluster name meets the naming requirements.
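The requirements above amount to the regular expression `^[a-zA-Z0-9-]{3,20}$`. A quick pre-flight check before calling the API might look like this (a sketch; `valid_cluster_name` is a hypothetical helper, not part of the VCF API):

```shell
# Validate a proposed cluster name against the VCF naming requirements:
# 3-20 characters, alphanumerics and hyphens only.
valid_cluster_name() {
    printf '%s' "$1" | grep -Eq '^[a-zA-Z0-9-]{3,20}$'
}

valid_cluster_name "wld01-cluster1" && echo "ok"      # passes
valid_cluster_name "c1" || echo "too short"           # fails: 2 characters
valid_cluster_name "cluster_one" || echo "bad char"   # fails: underscore
```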
Stretch cluster operation fails
If the cluster that you are stretching does not include a powered-on VM with an operating system installed, the operation fails at the "Validate Cluster for Zero VMs" task.
Make sure the cluster has a powered-on VM with an operating system installed before stretching the cluster.
An API for NSX Clusters is listed on VMware Cloud Foundation Developer Center in error
Public API "2.36.6. Get the transport zones from the NSX cluster" is listed on VMware Cloud Foundation Developer Center in error.