
VMware Cloud Foundation 5.1 | 07 NOV 2023 | Build 22688368

Check for additions and updates to these release notes.

Upgrade Notice

VCF+, VCF-S, and VCF Cloud Pack customers can now upgrade to VCF 5.1. See the "VMware Cloud Foundation Per Core Upgrade Process" section in this VCF Blog for more information.

What's New

The VMware Cloud Foundation (VCF) 5.1 release includes the following:

  • Support for vSAN ESA: vSAN ESA is an alternative, single-tier architecture designed from the ground up for NVMe-based platforms to deliver higher performance with more predictable I/O latencies, higher space efficiency, per-object data services, and native, high-performance snapshots.

  • Non-DHCP option for Tunnel Endpoint (TEP) IP assignment: SDDC Manager now provides the option to select Static or DHCP-based IP assignments to Host TEPs for stretched clusters and L3 aware clusters.

  • vSphere Distributed Services Engine for Ready nodes: AMD Pensando and NVIDIA BlueField-2 DPUs are now supported. Offloading the vSphere Distributed Switch (VDS) and NSX network and security functions to the hardware provides significant performance improvements for low-latency and high-bandwidth applications. NSX distributed firewall processing is also offloaded from the server CPUs to the network silicon.

  • Multi-pNIC/Multi-vSphere Distributed Switch UI enhancements: VCF users can configure complex networking configurations, including more vSphere Distributed Switch and NSX switch-related configurations, through the SDDC Manager UI.

  • Distributed Virtual Port Group Separation for management domain appliances: Enables traffic isolation between management VMs (such as SDDC Manager, NSX Manager, and vCenter) and ESXi management VMkernel interfaces.

  • Support for vSphere Lifecycle Manager images in the management domain: VCF users can deploy the management domain using vSphere Lifecycle Manager (vLCM) images during new VCF instance deployment.

  • Mixed-mode Support for Workload Domains: A VCF instance can exist in a mixed BOM state where the workload domains are on different VCF 5.x versions. Note: The management domain should be on the highest version in the instance.

  • Asynchronous update of the pre-check files: The upgrade pre-checks can be updated asynchronously with new pre-checks using a pre-check file provided by VMware.

  • Workload domain NSX integration: Support for multiple NSX-enabled VDSs for Distributed Firewall use cases.

  • Tier-0/1 optional for VCF Edge cluster: When creating an Edge cluster with the VCF API, the Tier-0 and Tier-1 gateways are now optional.

  • VCF Edge nodes support static or pooled IP: When creating or expanding an Edge cluster using VCF APIs, Edge node TEP configuration may come from an NSX IP pool or be specified statically as in earlier releases.

  • Support for mixed license deployment: A combination of keyed and keyless licenses can be used within the same VCF instance.

  • Integration with VMware Identity Service: Provides identity federation and SSO across vCenter, NSX, and SDDC Manager. VCF administrators can add Okta to VMware Identity Service as a Day-N operation using the SDDC Manager UI.

  • VMware vRealize rebranding: VMware recently renamed the vRealize Suite of products to VMware Aria Suite. See the Aria Naming Updates blog post for more details.

  • VMware Validated Solutions: All VMware Validated Solutions are updated to support VMware Cloud Foundation 5.1. Visit VMware Validated Solutions for the updated guides.

  • BOM updates: Updated Bill of Materials with new product versions.

Deprecation Notices

  • The VMware Imaging Appliance (VIA), included with the VMware Cloud Builder appliance to image ESXi servers, is deprecated and removed.

  • Starting with VMware Cloud Foundation 5.1, Configuration Drift Bundles are no longer needed as part of the upgrade process and are now deprecated.

  • The Composable Infrastructure feature is deprecated in VMware Cloud Foundation 5.1 and will be removed in a future release.

  • In a future release, the "Connect Workload Domains" option from the VMware Aria Operations card located in SDDC Manager > Administration > Aria Suite section will be removed and related VCF Public API options will be deprecated.

    Starting with VMware Aria Operations 8.10, functionality for connecting VCF Workload Domains to VMware Aria Operations is available directly from the UI. Users are encouraged to use this method within the VMware Aria Operations UI for connecting VCF workload domains, even if the integration was originally set up using SDDC Manager.

VMware Cloud Foundation Bill of Materials (BOM)

The VMware Cloud Foundation software product comprises the following software Bill of Materials (BOM). The components in the BOM are interoperable and compatible.

Software Component              | Version       | Date        | Build Number
--------------------------------|---------------|-------------|-------------
Cloud Builder VM                | 5.1           | 07 NOV 2023 | 22688368
SDDC Manager                    | 5.1           | 07 NOV 2023 | 22688368
VMware vCenter Server Appliance | 8.0 Update 2a | 26 OCT 2023 | 22617221
VMware ESXi                     | 8.0 Update 2  | 21 SEP 2023 | 22380479
VMware vSAN Witness Appliance   | 8.0 Update 2  | 21 SEP 2023 | 22443122
VMware NSX                      | 4.1.2.1       | 07 NOV 2023 | 22667789
VMware Aria Suite Lifecycle     | 8.14          | 19 OCT 2023 | 22630473

  • VMware vSAN is included in the VMware ESXi bundle.

  • You can use VMware Aria Suite Lifecycle to deploy VMware Aria Automation, VMware Aria Operations, VMware Aria Operations for Logs, and Workspace ONE Access. VMware Aria Suite Lifecycle determines which versions of these products are compatible and only allows you to install/upgrade to supported versions.

  • VMware Aria Operations for Logs content packs are installed when you deploy VMware Aria Operations for Logs.

  • The VMware Aria Operations management pack is installed when you deploy VMware Aria Operations.

  • You can access the latest versions of the content packs for VMware Aria Operations for Logs from the VMware Solution Exchange and the VMware Aria Operations for Logs in-product marketplace store.

VMware Software Edition License Information

The SDDC Manager software is licensed under the VMware Cloud Foundation license. As part of this product, the SDDC Manager software deploys specific VMware software products.

The following VMware software components deployed by SDDC Manager are licensed under the VMware Cloud Foundation license:

  • VMware ESXi

  • VMware vSAN

  • VMware NSX

The following VMware software components deployed by SDDC Manager are licensed separately:

  • VMware vCenter Server

NOTE Only one vCenter Server license is required for all vCenter Servers deployed in a VMware Cloud Foundation system.

For details about the specific VMware software editions that are licensed under the licenses you have purchased, see the VMware Cloud Foundation Bill of Materials (BOM).

For general information about the product, see VMware Cloud Foundation.

Supported Hardware

For details on supported configurations, see the VMware Compatibility Guide (VCG) and the Hardware Requirements section on the Prerequisite Checklist tab in the Planning and Preparation Workbook.

Documentation

To access the Cloud Foundation documentation, go to the VMware Cloud Foundation product documentation.

To access the documentation for VMware software products that SDDC Manager can deploy, see the product documentation and use the drop-down menus on the page to choose the appropriate version.

Browser Compatibility and Screen Resolutions

The VMware Cloud Foundation web-based interface supports the latest two versions of the following web browsers:

  • Google Chrome

  • Mozilla Firefox

  • Microsoft Edge

For the Web-based user interfaces, the supported standard resolution is 1920 by 1080 pixels.

Installation and Upgrade Information

You can install VMware Cloud Foundation 5.1 as a new release or perform a sequential or skip-level upgrade to VMware Cloud Foundation 5.1.

Installing as a New Release

The new installation process has three phases:

  • Phase One: Prepare the Environment: The Planning and Preparation Workbook provides detailed information about the software, tools, and external services that are required to implement a Software-Defined Data Center (SDDC) with VMware Cloud Foundation, using a standard architecture model.

  • Phase Two: Image all servers with ESXi: Image all servers with the ESXi version mentioned in the Cloud Foundation Bill of Materials (BOM) section. See the VMware Cloud Foundation Deployment Guide for information on installing ESXi.

  • Phase Three: Install Cloud Foundation 5.1: See the VMware Cloud Foundation Deployment Guide for information on deploying Cloud Foundation.

Upgrading to Cloud Foundation 5.1

Important:

There is no direct upgrade path from VCF 4.4.x to VCF 5.1. You must first upgrade SDDC Manager only to VCF 5.0.0.1, and then you can use "Plan Upgrade" to upgrade to VCF 5.1.

You can perform a sequential or skip-level upgrade to VMware Cloud Foundation 5.1 from VMware Cloud Foundation 4.4.x or later. If your environment is at a version earlier than 4.4.x, you must upgrade the management domain and all VI workload domains to VMware Cloud Foundation 4.5.x or later, and then upgrade to VMware Cloud Foundation 5.1. For more information, see VMware Cloud Foundation Lifecycle Management.

Important:

Before you upgrade a vCenter Server, take a file-based backup. See Manually Back Up vCenter Server.

Note:

VMware Cloud Foundation 5.1 disables the SSH service on ESXi hosts by default, so scripts that rely on SSH being enabled on ESXi hosts will not work after upgrading to VMware Cloud Foundation 5.1. Update your scripts to account for this new behavior. See KB 86230 for information about enabling and disabling the SSH service on ESXi hosts.
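
If a script genuinely requires SSH on a host, the service can be turned back on per host after the upgrade. A minimal sketch using standard ESXi commands, run from the ESXi Shell or host console (for example, via the DCUI); see KB 86230 for the supported procedure:

    # Start the SSH service on this host
    vim-cmd hostsvc/start_ssh
    # Set the SSH service policy so it starts with the host
    vim-cmd hostsvc/enable_ssh
    # To return to the VCF 5.1 default, stop and disable the service again
    vim-cmd hostsvc/stop_ssh
    vim-cmd hostsvc/disable_ssh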

Resolved Issues

The following issues are resolved in this release:

  • SDDC Manager UI incorrectly shows "Owner" of isolated workload domain as the management domain SSO admin user.

  • SDDC Manager UI allows you to select an unsupported NSX Manager instance.

  • NSX Guest Introspection (GI) and NSX Network Introspection (NI) are not supported on stretched clusters.

  • VMware Aria Operations admin account appears as disconnected.

  • SDDC Manager incorrectly renders Dark Mode due to inheriting OS settings in VCF 5.0.

  • This release resolves CVE-2023-34048 and CVE-2023-34056. For more information on these vulnerabilities and their impact on VMware products, see VMSA-2023-0023.

Known Issues

VMware Cloud Foundation Known Issues

  • New - When creating a workload domain with vVol storage and the smart NIC feature enabled, hosts are not visible in the host selection data grid.

    On the host selection page of the VI workload domain creation wizard, if there are fewer than 3 commissioned hosts for vVol storage, the hosts will not be displayed in the data grid.

    The same issue occurs on the host selection page in the Add Cluster wizard for a workload domain with vVol storage, if there are fewer than 3 commissioned hosts.

    Workarounds:

    • Creating a VI workload domain directly through the API works correctly when there are fewer than 3 commissioned hosts for vVol storage, so use the API directly to create VI domains and clusters with vVol storage when fewer than 3 hosts are commissioned.

    • Commission more than 2 hosts for the vVol storage type when creating the VI workload domain or cluster.

  • New - Incorrect warning is shown in UI during Create Domain/Add Cluster using Static IP Pool

    When creating a domain or adding a cluster from the UI, an incorrect warning is displayed if Static IP Pool is selected for IP Allocation. The warning states: Clusters with a static IP pool cannot be stretched across availability zones.

    Workaround: This warning can be ignored.

  • New - Creating Sub Transport Node Profile (TNP) per cluster does not enforce maximum limit

    Creating a new domain, cluster, or expanding a cluster with Sub TNP configuration can fail if the maximum limit of 16 Sub TNPs per cluster is reached.

    Workaround: Delete the failed workload domain or cluster.

  • New - Edge node uplink information is blank during expansion

    When attempting to expand an Edge cluster by more than two Edge nodes with a shared uplink network, the uplink information may appear blank for all nodes except the first one added during expansion.

    Workaround: Expand the Edge cluster one node at a time, specifying the uplink details for each node. Alternatively, use the API or developer tools, since this is a limitation of the UI only. See KB article 95188.

  • New - Cosmetic issue: References to NSX+ in SDDC UI

    NSX+ references in SDDC UI have no impact on functionality and should be ignored.

    Workaround: None.

  • New - Password rotation for Platform Services Controller (PSC) fails with error: Unable to connect to entity : VRLI

    This issue can occur when the password for VMware Aria Operations for Logs (formerly vRealize Log Insight or VRLI) is changed in SDDC Manager, followed by a password rotation for the vCenter Single Sign-On (SSO) administrator account in the PSC. The following error message may appear:

    Progress Message: Unable to connect to entity : VRLI. 
    Cause: JSONObject["sessionId"] not found.

    Workaround: Unlock the admin account by running the following command on the VMware Aria Operations for Logs appliance:

    root@vrli-workone [ ~ ]# /usr/lib/loginsight/application/sbin/li-reset-admin-passwd.sh --unlock
    Admin user is LOCKED. Resetting.
    SUCCESS: admin user unlocked successfully.
  • New - When networkProfileName is an empty string, Add Host fails.

    Adding a host to a cluster fails when networkProfileName is empty.

    Workaround:

    1. Log in to the NSX Manager associated with the workload domain to which the hosts are being added. (SDDC Manager UI > Workload Domains > Services > NSX Cluster)

    2. Note the cluster to which the host is being added. (SDDC Manager UI > Workload Domains > Clusters)

    3. Log in to NSX Manager and navigate to System > Fabric > Hosts > Cluster, then expand the cluster view for the cluster noted in step 2.

    4. For each host given in the cluster expansion API payload, select the host, click ACTIONS > Change Sub-Cluster, and select None.

    5. Once all the hosts in the host addition payload are removed from the sub-cluster (the sub-cluster name also appears in the error message), go to SDDC Manager UI > Workload Domains > Clusters, select the cluster to which the hosts were being added, select the hosts provided in the payload, and trigger the Remove Host workflow.

    6. Decommission the hosts and recommission them, then add the hosts to the SDDC cluster with the correct payload. Verify that networkProfileName is non-empty (see the sketch after these steps).
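
    Before resubmitting, it can help to confirm locally that every networkProfileName in the payload is non-empty. A hedged sketch; the file name is illustrative and the payload must follow the VCF API reference for cluster expansion:

    # Print every networkProfileName value found anywhere in the payload file;
    # an empty string in the output means the spec still needs to be corrected.
    jq '.. | objects | select(has("networkProfileName")) | .networkProfileName' cluster-expansion-spec.json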

  • Lifecycle Management Precheck does not throw an error when NSX Manager inventory is out of sync

    The Lifecycle Management Precheck displays a green status and does not generate any errors for NSX Manager inventory.

    Workaround: None

  • The upgrade screen displays the incorrect version when upgrading from VCF 4.3 to version 5.0.

    The VCF UI shows the incorrect source version when upgrading from version 4.3 to 5.0. However, the actual upgrade still correctly performs a direct skip-level upgrade from the source version to the target version.

    Workaround: None

  • Creating workload domains in parallel does not enforce maximum limit of compute managers per NSX Manager.

    Creating workload domains in parallel can lead to one or more workload domain creations failing because the limit of compute managers per NSX Manager will be exceeded.

    Workaround: Delete the failed workload domain.

  • In VCF 5.0 Mixed BOM with vCenter Enhanced Linked Mode, you can see vCenter Server systems of version 8.0 from a 7.x vCenter Server instance

    If you have a vCenter Enhanced Linked Mode group that contains vCenter Server instances of versions 8.x and 7.x, when you log in to a 7.x vCenter Server instance, in the vSphere Client, you can see vCenter Server systems of version 8.0. Since vCenter Server 8.0 introduces new functionalities, you can run workflows specific to vSphere 8.0 on the 8.0 vCenter Server, but this might lead to unexpected results when run from the 7.x vSphere Client.

    Workaround: This issue is fixed in vSphere 7.0 P07.

  • Upgrade Pre-Check Scope dropdown may contain additional entries

    When performing Upgrade Prechecks through SDDC Manager UI and selecting a target VCF version, the Pre-Check Scope dropdown may contain more selectable entries than necessary. SDDC Manager may appear as an entry more than once. It also may be included as a selectable component for VI domains, although it's a component of the management domain.

    Workaround: None. The issue is visual with no functional impact.

  • Converting clusters from vSphere Lifecycle Manager baselines to vSphere Lifecycle Manager images is not supported.

    vSphere Lifecycle Manager baselines (previously known as vSphere Update Manager or VUM) are deprecated in vSphere 8.0, but continue to be supported. See KB article 89519 for more information.

    VMware Cloud Foundation 5.0 does not support converting clusters from vSphere Lifecycle Manager baselines to vSphere Lifecycle Manager images. This capability will be supported in a future release.

    Workaround: None

  • Workload Management does not support NSX Data Center Federation

    You cannot deploy Workload Management (vSphere with Tanzu) to a workload domain when that workload domain's NSX Data Center instance is participating in an NSX Data Center Federation.

    Workaround: None.

  • Stretched clusters and Workload Management

    You cannot stretch a cluster on which Workload Management is deployed.

    Workaround: None.

Upgrade Known Issues

  • New - All NSX Edge clusters and host clusters are upgraded, even when you choose to upgrade a subset of clusters

    During an NSX upgrade you can choose to upgrade only specific NSX Edge clusters or host clusters. VMware Cloud Foundation 5.1 ignores the cluster selections and upgrades all NSX Edge clusters and host clusters.

    Workaround: See KB 96878.

  • LCM default manifest has incorrect details for VCF 5.1.0.0

    If the latest LCM manifest is not present in the VCF inventory, the VCF 5.1 SDDC Manager UI Release Versions page will display inconsistent details for the VCF 5.1 release.

    Workaround: Refresh the manifest by connecting to the VMware Depot or manually update to the latest manifest file using the Bundle Transfer Utility.

  • VCF ESXi Upgrade with 'quick boot' option fails for hosts configured with TPM.

    After performing the upgrade, the host is stuck during quick boot with a PSOD displaying A security violation was detected.

    Workaround: Reboot the host to complete the upgrade.

  • vCenter upgrade fails during cleanup

    vCenter Server patching fails at the PostInstall stage even though the services are up, leaving vCenter Server unstable. vCenter Server Appliance patching fails with the error Failed to perform cleanup.

    Workaround:

    1. Follow the steps in KB 94904.

    2. Retry the upgrade from VMware Cloud Foundation UI.

  • When the SDDC Manager upgrades, the SDDC Manager UI does not redirect to the upgrade process screen.

    When upgrading SDDC Manager from 5.0.0.0 to 5.1.0.0, the upgrade moves to a SCHEDULED state. When the upgrade moves to IN PROGRESS, the upgrade UI screen hangs instead of redirecting to the upgrade progress screen.

    Workaround: Refresh the browser. The refreshed VCF Upgrade UI screen now shows the current upgrade status.

  • The "Schedule Update/Retry Upgrade" button should be disabled if the existing upgrade is in progress.

    When an upgrade is in progress for a cluster that is part of a domain, the UI still allows the user to select and start an upgrade for another cluster, but this action is blocked when the FINISH button is clicked.

    Workaround: Do not trigger another upgrade when an upgrade is in progress on a domain.

  • The FINISH button in the ESXi Upgrade UI allows multiple clicks and upgrades to be initiated.

    When the FINISH button is clicked more than once, multiple upgrades are initiated against the same bundle.

    Workaround: No workaround. You can see all of the triggered upgrades in the Task panel. All upgrades initiated after the first click of the FINISH button will fail.

  • On-demand pre-checks for vCenter 80U1 bundle might fail in a specific scenario

    The steps for the scenario are listed below:

    1. SDDC Manager is upgraded to VCF 5.0.0.0 from 4.5.x.

    2. BOM components are not upgraded to VCF 5.0.0.0

    3. Bundles for VCF 5.1.0.0 are downloaded and the pre-check is run by selecting 5.1.0.0 as the target version

    4. The on-demand pre-check fails.

    Workaround: Choose one of the following options:

    • Option 1: Upgrade SDDC Manager to VCF 5.1.0.0 and run the pre-checks.

    • Option 2: Run the vCenter upgrade and check whether there are any issues during the VCENTER_UPGRADE_INSTALL_PRECHECK stage.

    • Option 3: See KB article 94862 to run the on-demand prechecks out-of-band (OOB), not through VCF.

  • VCF ESXi upgrade fails during post-validation due to an HA-related cluster configuration issue

    The ESXi cluster upgrade fails with an error message similar to the following:

    Cluster Configuration Issue: vSphere HA failover operation in progress in cluster <cluster-name> in datacenter <datacenter-name>: 0 VMs being restarted, 1 VMs waiting for a retry, 0 VMs waiting for resources, 0 inaccessible vSAN VMs

    Workaround: See KB article 90985.

  • Lifecycle Management Precheck does not throw an error when NSX Manager inventory is out of sync

    Workaround: None.

  • Precheck for NSX has failed with ERROR message

    If the upgrade prerequisite "Backup SDDC Manager, all vCenter Servers, and NSX Managers" is ignored and the SFTP location defined for the NSX configuration backup does not have enough disk space, the error message Precheck for NSX has failed is displayed. Although the root cause of the error (sftp server disk is full) appears in the Operations Manager log, it is not currently displayed in SDDC Manager.

    Workaround: Increase the amount of available disk space on the SFTP server and then complete the Upgrade Prerequisite before proceeding.
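
    To confirm whether the backup destination is out of space, a quick filesystem check on the SFTP server is usually sufficient. A minimal sketch; the backup path is an assumption for this example:

    # Check free space on the filesystem that holds the NSX configuration backups
    df -h /backups/nsx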

  • NSX upgrade may fail if there are any active alarms in NSX Manager

    If there are any active alarms in NSX Manager, the NSX upgrade may fail.

    Workaround: Check the NSX Manager UI for active alarms prior to NSX upgrade and resolve them, if any. If the alarms are not resolved, the NSX upgrade will fail. The upgrade can be retried once the alarms are resolved.

  • vCenter Server upgrade fails during install

    When running the workload domain vCenter Server upgrade from VCF, the upgrade fails during the vCenter Server install, with the target vCenter Server upgrade precheck stage failing.

    Workaround: Do not reuse the same temporary IP address for sequential vCenter Server upgrades. Use a separate temporary IP address for every vCenter Server undergoing upgrade.

  • SDDC Manager upgrade fails at "Setup Common Appliance Platform"

    If a virtual machine reconfiguration task (for example, removing a snapshot or running a backup) is taking place in the management domain at the same time you are upgrading SDDC Manager, the upgrade may fail.

    Workaround: Schedule SDDC Manager upgrades for a time when no virtual machine reconfiguration tasks are happening in the management domain. If you encounter this issue, wait for the other tasks to complete and then retry the upgrade.

  • Parallel upgrades of vCenter Server are not supported

    If you attempt to upgrade vCenter Server for multiple VI workload domains at the same time, the upgrade may fail while changing the permissions for the vpostgres configuration directory in the appliance. The message chown -R vpostgres:vpgmongrp /storage/archive/vpostgres appears in the PatchRunner.log file on the vCenter Server Appliance.

    Workaround: Each vCenter Server instance must be upgraded separately.

  • When you upgrade VMware Cloud Foundation, one of the vSphere Cluster Services (vCLS) agent VMs gets placed on local storage

    vSphere Cluster Services (vCLS) ensures that cluster services remain available, even when the vCenter Server is unavailable. vCLS deploys three vCLS agent virtual machines to maintain cluster services health. When you upgrade VMware Cloud Foundation, one of the vCLS VMs may get placed on local storage instead of shared storage. This could cause issues if you delete the ESXi host on which the VM is stored.

    Workaround: Deactivate and reactivate vCLS on the cluster to deploy all the vCLS agent VMs to shared storage.

    1. Check the placement of the vCLS agent VMs for each cluster in your environment.

      1. In the vSphere Client, select Menu > VMs and Templates.

      2. Expand the vCLS folder.

      3. Select the first vCLS agent VM and click the Summary tab.

      4. In the Related Objects section, check the datastore listed for Storage. It should be the vSAN datastore. If a vCLS agent VM is on local storage, you need to deactivate vCLS for the cluster and then re-enable it.

      5. Repeat these steps for all vCLS agent VMs.

    2. Deactivate vCLS for clusters that have vCLS agent VMs on local storage.

      1. In the vSphere Client, click Menu > Hosts and Clusters.

      2. Select a cluster that has a vCLS agent VM on local storage.

      3. In the web browser address bar, note the moref id for the cluster.

        For example, if the URL displays as https://vcenter-1.vrack.vsphere.local/ui/app/cluster;nav=h/urn:vmomi:ClusterComputeResource:domain-c8:503a0d38-442a-446f-b283-d3611bf035fb/summary, then the moref id is domain-c8.

      4. Select the vCenter Server containing the cluster.

      5. Click Configure > Advanced Settings.

      6. Click Edit Settings.

      7. Change the value for config.vcls.clusters.<moref id>.enabled to false and click Save.

        If the config.vcls.clusters.<moref id>.enabled setting does not appear for your moref id, enter it as the Name, enter false as the Value, and click Add.

      8. Wait a couple of minutes for the vCLS agent VMs to be powered off and deleted. You can monitor progress in the Recent Tasks pane.

    3. Enable vCLS for the cluster to place the vCLS agent VMs on shared storage.

      1. Select the vCenter Server containing the cluster and click Configure > Advanced Settings.

      2. Click Edit Settings.

      3. Change the value for config.vcls.clusters.<moref id>.enabled to true and click Save.

      4. Wait a couple of minutes for the vCLS agent VMs to be deployed and powered on. You can monitor progress in the Recent Tasks pane.

    4. Check the placement of the vCLS agent VMs to make sure they are all on shared storage.
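
    If you prefer the command line to the vSphere Client for steps 2 and 3, the same advanced setting can be toggled with govc (from the open-source govmomi project). A hedged sketch that assumes govc is installed and GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD point at the vCenter Server that contains the cluster; the cluster name and moref id shown are illustrative:

    # Print the cluster's managed object reference (for example, ClusterComputeResource:domain-c8)
    govc find -i / -type c -name 'sfo01-m01-cl01'
    # Deactivate vCLS for the cluster, wait for the agent VMs to be deleted, then reactivate it
    govc option.set config.vcls.clusters.domain-c8.enabled false
    govc option.set config.vcls.clusters.domain-c8.enabled true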

  • SDDC Manager UI issues when running multiple parallel upgrade prechecks

    If you initiate a precheck on more than one workload domain at the same time, the SDDC Manager UI may flicker and show incorrect information.

    Workaround: Do not run multiple parallel prechecks. If you do, wait until the prechecks are complete to evaluate the results.

  • Upgrade precheck results for ESXi display the error "TPM 2.0 device detected but a connection cannot be established."

    This issue can occur for ESXi hosts that have Trusted Platform Modules (TPM) chips partially configured.

    Workaround: Ensure that the TPM is configured in the ESXi host's BIOS to use the SHA-256 hashing algorithm and the TIS/FIFO (First-In, First-Out) interface and not CRB (Command Response Buffer). For information about setting these required BIOS options, refer to the vendor documentation.

  • Performing parallel prechecks on multiple workload domains that use vSphere Lifecycle Manager images may fail

    If you perform parallel prechecks on multiple workload domains that use vSphere Lifecycle Manager images at the same time as you are performing parallel upgrades, the prechecks may fail.

    Workaround: Use the following guidance to plan your upgrades and prechecks for workload domains that use vSphere Lifecycle Manager images.

    • For parallel upgrades, VMware Cloud Foundation supports up to five workload domains with up to five clusters each.

    • For parallel prechecks, VMware Cloud Foundation supports up to three workload domains with up to four clusters each.

    • Do not run parallel upgrades and prechecks at the same time.

  • Using the /v1/upgrades API to trigger parallel cluster upgrades across workload domains in a single API call does not upgrade the clusters in parallel

    When using the VMware Cloud Foundation API to upgrade multiple workload domains in parallel, including multiple resource upgrade specifications (resourceUpgradeSpec) in a single domain upgrade API (/v1/upgrades) call does not work as expected.

    Workaround: To get the best performance when upgrading multiple workload domains in parallel using the VMware Cloud Foundation API, do not include multiple resource upgrade specifications (resourceUpgradeSpec) in a single domain upgrade call. Instead, invoke the domain upgrade multiple times, with a single resourceUpgradeSpec for each workload domain, as sketched below.

    You can also use the SDDC Manager UI to trigger multiple parallel upgrades across workload domains.
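
    For illustration, a skeleton of invoking the domain upgrade endpoint once per workload domain from a shell. The host name, token, bundle ID, and domain IDs are placeholders, and the request body is abbreviated; take the full UpgradeSpec structure from the VMware Cloud Foundation API reference:

    # One POST to /v1/upgrades per workload domain, each with a single resourceUpgradeSpec
    SDDC_MANAGER="sddc-manager.vrack.vsphere.local"   # placeholder FQDN
    TOKEN="REPLACE_WITH_ACCESS_TOKEN"
    BUNDLE_ID="REPLACE_WITH_BUNDLE_ID"
    for DOMAIN_ID in REPLACE_DOMAIN_ID_1 REPLACE_DOMAIN_ID_2; do
      curl -k -X POST "https://${SDDC_MANAGER}/v1/upgrades" \
        -H "Authorization: Bearer ${TOKEN}" \
        -H "Content-Type: application/json" \
        -d "{\"bundleId\": \"${BUNDLE_ID}\", \"resourceUpgradeSpecs\": [{\"resourceId\": \"${DOMAIN_ID}\"}]}"
    done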

  • NSX Data Center upgrade fails at "NSX T PERFORM BACKUP"

    If you did not change the destination of NSX Manager backups to an external SFTP server, upgrades may fail due to an out-of-date SSH fingerprint for SDDC Manager.

    Workaround:

    1. Log in to the NSX Manager UI.

    2. Click System > Backup & Restore.

    3. Click Edit for the SFTP Server.

    4. Remove the existing SSH fingerprint and click Save.

    5. Click Add to add the server provided fingerprint.

    6. Click Save.

    7. Retry the NSX Data Center upgrade from the SDDC Manager UI.

  • Cluster-level ESXi upgrade fails

    Cluster-level selection during upgrade does not consider the health status of the clusters and may show a cluster's status as Available, even for a faulty cluster. If you select a faulty cluster, the upgrade fails.

    Always perform an update precheck to validate the health status of the clusters. Resolve any issues before upgrading.

  • You are unable to update NSX Data Center in the management domain or in a workload domain with vSAN principal storage because of an error during the NSX transport node precheck stage

    In SDDC Manager, when you run the upgrade precheck before updating NSX Data Center, the NSX transport node validation fails with the following error.

    No coredump target has been configured. Host core dumps cannot be saved.:System logs on host sfo01-m01-esx04.sfo.rainpole.io are stored on non-persistent storage. Consult product documentation to configure a syslog server or a scratch partition.

    Because the upgrade precheck results in an error, you cannot proceed with updating the NSX Data Center instance in the domain. VMware Validated Design supports vSAN as the principal storage in the management domain. However, vSAN datastores do not support scratch partitions. See VMware Knowledge Base article 2074026.

    Disable the update precheck validation for the subsequent NSX Data Center update.

    1. Log in to SDDC Manager as vcf using a Secure Shell (SSH) client.

    2. Open the application-prod.properties file for editing: vi /opt/vmware/vcf/lcm/lcm-app/conf/application-prod.properties

    3. Add the following property and save the file: lcm.nsxt.suppress.prechecks=true

    4. Restart the life cycle management service: systemctl restart lcm

    5. Log in to the SDDC Manager user interface and proceed with the update of NSX Data Center.

  • NSX-T upgrade may fail at the step NSX T TRANSPORT NODE POSTCHECK STAGE

    NSX-T upgrade may not proceed beyond the NSX T TRANSPORT NODE POSTCHECK STAGE.

    Contact VMware support.

  • ESXi upgrade fails with the error "Incompatible patch or upgrade files. Please verify that the patch file is compatible with the host. Refer LCM and VUM log file."

    This error occurs if any of the ESXi hosts that you are upgrading have detached storage devices.

    Workaround: Attach all storage devices to the ESXi hosts being upgraded, reboot the hosts, and retry the upgrade.
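
    To identify and re-attach detached devices before retrying, the standard esxcli storage commands can be run from the ESXi Shell on each affected host. A brief sketch; the device identifier is illustrative:

    # List storage devices currently in the detached state on this host
    esxcli storage core device detached list
    # Re-attach a device by its identifier, then rescan before retrying the upgrade
    esxcli storage core device set -d naa.5000c500a1b2c3d4 --state=on
    esxcli storage core adapter rescan --all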

  • Update precheck fails with the error "Password has expired"

    If the vCenter Single Sign-On password policy specifies a maximum lifetime of zero (never expires), the precheck fails.

    Workaround: Set the maximum lifetime password policy to something other than zero and retry the precheck.

Bring-up Known Issues

  • Deploying the management domain using vSphere Lifecycle Manager (vLCM) images fails

    If you deploy the management domain with vLCM images, the deployment may fail with the error vlcm_cluster_creation_postdeploy.py", line 220, in execute AttributeError: 'NoneType' object has no attribute. This can occur when you are using the XLS or JSON file:

    • XLS: When Enable vLCM Cluster Image is set to Yes.

    • JSON: When clusterImageEnabled is set to true.

    The failure is caused by a race condition in the vCenter Server installer, which fails to create a datacenter once vCenter Server is deployed.

    Workaround: Contact VMware by Broadcom Support to deploy a vLCM image-based management domain.

  • Entering pNICs in non-lexicographic order in the deployment parameter workbook does not work as expected

    When you deploy the management domain by uploading the deployment parameter workbook, the vmnic to uplink mapping for a vSphere distributed switch (vDS) is always done in lexicographic order, regardless of the order that you enter the vmnics in the deployment parameter workbook.

    For example, if you enter vmnic0, vmnic2, vmnic3, vmnic1, the vmnic to uplink mapping will be:

    vmnic0

    Uplink 1

    vmnic1

    Uplink 2

    vmnic2

    Uplink 3

    vmnic3

    Uplink 4

    Workaround: If you want to deploy the management domain with non-lexicographic vmnic to uplink mapping, create and upload a JSON file instead of using the deployment parameter workbook XLSX. See https://developer.vmware.com/apis/vcf/5.1.0/sddc/.

  • Entering more than two pNICs for the primary vDS in the deployment parameter workbook does not work as expected

    If you deploy the management domain by uploading the deployment parameter workbook, there is an issue with converting the XLSX file to JSON when you enter more than two pNICs for the primary vSphere distributed switch (vDS). The result is that only the host overlay network uses all of the pNICs. The management, vSAN, and vMotion networks use only the first two pNICs.

    Workaround: If you want to deploy the management domain with a primary vDS that uses more than two pNICs, create and upload a JSON file instead of using the deployment parameter workbook XLSX. See https://developer.vmware.com/apis/vcf/5.1.0/sddc/.

SDDC Manager Known Issues

  • New - The SDDC Manager UI shows VMware Aria Suite Lifecycle 8.12 instead of 8.14.

    If the latest VMware Aria Suite Lifecycle install bundle is not downloaded, the SDDC Manager UI reports that the install bundle for VMware Aria Suite Lifecycle 8.12 is missing, instead of 8.14.

    Workaround: Download the install bundle for VMware Aria Suite Lifecycle 8.14. Once the download is complete, the deployment option will become available.

  • SDDC Manager license key is not needed

    An SDDC Manager license key is not needed, and any errors related to an existing SDDC Manager license key can be ignored.

    Workaround: None.

  • Generate CSR task for a component hangs

    When you generate a CSR, the task may fail to complete due to issues with the component's resources. For example, when you generate a CSR for NSX Manager, the task may fail to complete due to issues with an NSX Manager node. You cannot retry the task once the resource is up and running again.

    1. Log in to the UI for the component to troubleshoot and resolve any issues.

    2. Using SSH, log in to the SDDC Manager VM with the user name vcf.

    3. Type su to switch to the root account.

    4. Run the following command: systemctl restart operationsmanager

    5. Retry generating the CSR.

  • SoS utility options for health check are missing information

    Due to limitations of the ESXi service account, some information is unavailable in the following health check options:

    • --hardware-compatibility-report: No Devices and Driver information for ESXi hosts.

    • --storage-health: No vSAN Health Status or Total no. of disks information for ESXi hosts.

    Workaround: None.

Workload Domain Known Issues

  • Heterogeneous operations "Cluster Creation" and "VI Creation" cannot run in parallel when they operate against the same shared NSX instance.

    If there is a running VI Creation workflow operating on an NSX resource, then creating a cluster on domains that share that NSX instance is not possible.

    Workaround: None. The VI Creation workflow must complete before the cluster creation workflow can be started.

  • An unexpected replication agreement is still left after the successful decommission of the SSO node task

    The decommission of the SSO node was successful, but the replication partner remained.

    Workaround: Manually remove the replication agreement of the invalid SSO node, and restart the delete workload domain workflow.

    The vCenter Server command /usr/lib/vmware-vmdir/bin/vdcrepadmin can be used to add or remove replication agreements.
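
    For reference, a hedged sketch of how vdcrepadmin is typically used to list partners and remove a stale agreement. Run it on the vCenter Server Appliance, treat the host names as illustrative, and verify the exact flags against the tool's built-in help before use:

    # List the current replication partners of this SSO node
    /usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartners -h localhost -u administrator -w 'SSO_password'
    # Remove the agreement with the decommissioned node (in both directions)
    /usr/lib/vmware-vmdir/bin/vdcrepadmin -f removeagreement -2 -h localhost -H decommissioned-node.example.com -u administrator -w 'SSO_password'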

  • Adding host fails when host is on a different VLAN

    A host add operation can sometimes fail if the host is on a different VLAN.

    1. Before adding the host, add a new portgroup to the VDS for that cluster.

    2. Tag the new portgroup with the VLAN ID of the host to be added.

    3. Add the Host. This workflow fails at the "Migrate host vmknics to dvs" operation.

    4. Locate the failed host in vCenter, and migrate the vmk0 of the host to the new portgroup you created in step 1. For more information, see Migrate VMkernel Adapters to a vSphere Distributed Switch in the vSphere product documentation.

    5. Retry the Add Host operation.

    NOTE: If you later remove this host, you must also manually remove the portgroup if it is not being used by any other host.

  • Deploying partner services on an NSX workload domain displays an error

    Deploying partner services, such as McAfee or Trend, on a workload domain enabled for vSphere Update Manager (VUM), displays the “Configure NSX at cluster level to deploy Service VM” error.

    Attach the Transport node profile to the cluster and try deploying the partner service. After the service is deployed, detach the transport node profile from the cluster.

  • If the witness ESXi version does not match with the host ESXi version in the cluster, vSAN cluster partition may occur

    vSAN stretch cluster workflow does not check the ESXi version of the witness host. If the witness ESXi version does not match the host version in the cluster, then vSAN cluster partition may happen.

    1. Upgrade the witness host manually with the matching ESXi version using the vCenter VUM functionality.

    2. Replace or deploy the witness appliance with a matching ESXi version.

  • vSAN partition and critical alerts are generated when the witness MTU is not set to 9000

    If the MTU of the witness switch in the witness appliance is not set to 9000, the vSAN stretch cluster partition may occur.

    Set the MTU of the witness switch in the witness appliance to 9000 MTU.

  • Adding a host to a vLCM-enabled workload domain configured with the Dell Hardware Support Manager (OMIVV) fails

    When you try to add a host to a vSphere cluster for a workload domain enabled with vSphere Lifecycle Manager (vLCM), the task fails and the domain manager log reports "The host (host-name) is currently not managed by OMIVV." The domain manager logs are located at /var/log/vmware/vcf/domainmanager on the SDDC Manager VM.

    Update the hosts inventory in OMIVV and retry the add host task in the SDDC Manager UI. See the Dell documentation for information about updating the hosts inventory in OMIVV.

  • Adding a vSphere cluster or adding a host to a workload domain fails

    Under certain circumstances, adding a host or vSphere cluster to a workload domain fails at the Configure NSX Transport Node or Create Transport Node Collection subtask.

    1. Enable SSH for the NSX Manager VMs.

    2. SSH into the NSX Manager VMs as admin and then log in as root.

    3. Run the following command on each NSX Manager VM: sysctl -w net.ipv4.tcp_en=0

    4. Login to NSX Manager UI for the workload domain.

    5. Navigate to System > Fabric > Nodes > Host Transport Nodes.

    6. Select the vCenter server for the workload domain from the Managed by drop-down menu.

    7. Expand the vSphere cluster and navigate to the transport nodes that are in a partial success state.

    8. Select the check box next to a partial success node, click Configure NSX.

    9. Click Next and then click Apply.

    10. Repeat steps 7-9 for each partial success node.

    When all host issues are resolved, transport node creation starts for the failed nodes. When all hosts are successfully created as transport nodes, retry the failed add vSphere cluster or add host task from the SDDC Manager UI.

  • The vSAN Performance Service is not enabled for vSAN clusters when CEIP is not enabled

    If you do not enable the VMware Customer Experience Improvement Program (CEIP) in SDDC Manager, when you create a workload domain or add a vSphere cluster to a workload domain, the vSAN Performance Service is not enabled for vSAN clusters. When CEIP is enabled, data from the vSAN Performance Service is provided to VMware and this data is used to aid VMware Support with troubleshooting and for products such as VMware Skyline, a proactive cloud monitoring service. See Customer Experience Improvement Program for more information on the data collected by CEIP.

    Enable CEIP in SDDC Manager. See the VMware Cloud Foundation Documentation. After CEIP is enabled, a scheduled task that enables the vSAN Performance Service on existing clusters in workload domains runs every three hours. The service is also enabled for new workload domains and clusters. To enable the vSAN Performance Service immediately, see the VMware vSphere Documentation.

  • Creation or expansion of a vSAN cluster with more than 32 hosts fails

    By default, a vSAN cluster can grow up to 32 hosts. With large cluster support enabled, a vSAN cluster can grow up to a maximum of 64 hosts. However, even with large cluster support enabled, a creation or expansion task can fail on the sub-task Enable vSAN on vSphere Cluster.

    1. Enable Large Cluster Support for the vSAN cluster in the vSphere Client. If it is already enabled skip to step 2.

      1. Select the vSAN cluster in the vSphere Client.

      2. Select Configure > vSAN > Advanced Options.

      3. Enable Large Cluster Support.

      4. Click Apply.

      5. Click Yes.

    2. Run a vSAN health check to see which hosts require rebooting.

    3. Put the hosts into Maintenance Mode and reboot the hosts.

    For more information about large cluster support, see https://kb.vmware.com/kb/2110081.

  • Removing a host from a cluster, deleting a cluster from a workload domain, or deleting a workload domain fails if Service VMs (SVMs) are present

    If you deployed an endpoint protection service (such as guest introspection) to a cluster through NSX Data Center, then removing a host from the cluster, deleting the cluster, or deleting the workload domain containing the cluster will fail on the subtask Enter Maintenance Mode on ESXi Hosts.

    • For host removal: Delete the Service VM from the host and retry the operation.

    • For cluster deletion: Delete the service deployment for the cluster and retry the operation.

    • For workload domain deletion: Delete the service deployment for all clusters in the workload domain and retry the operation.

  • vCenter Server overwrites the NFS datastore name when adding a cluster to a VI workload domain

    If you add an NFS datastore with the same NFS server IP address, but a different NFS datastore name, as an NFS datastore that already exists in the workload domain, then vCenter Server applies the existing datastore name to the new datastore.

    If you want to add an NFS datastore with a different datastore name, then it must use a different NFS server IP address.

API Known Issues

  • Creating or validating a new cluster fails if the cluster name does not meet the naming requirements

    When you use the VMware Cloud Foundation API to create a cluster, the cluster name must meet the following requirements:

    • length between 3 and 20 characters

    • only alphanumeric characters ("A-Z", "a-z", "0-9") and hyphen ("-")

    If the cluster name does not meet these requirements, validating the input specification and creating the cluster fails.

    Workaround: Make sure that your cluster name meets the naming requirements.
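
    As a convenience, the naming rules can be checked locally before submitting the specification. A small shell sketch (the example name is arbitrary):

    # Validate a proposed cluster name: 3-20 characters, alphanumerics and hyphens only
    name="wld01-cl01"
    if [[ "$name" =~ ^[A-Za-z0-9-]{3,20}$ ]]; then
      echo "valid cluster name"
    else
      echo "invalid cluster name"
    fi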

  • Stretch cluster operation fails

    If the cluster that you are stretching does not include a powered-on VM with an operating system installed, the operation fails at the "Validate Cluster for Zero VMs" task.

    Make sure the cluster has a powered-on VM with an operating system installed before stretching the cluster.

  • An API for NSX Clusters is listed on VMware Cloud Foundation Developer Center in error

    Public API "2.36.6. Get the transport zones from the NSX cluster" is listed on VMware Cloud Foundation Developer Center in error.

    Workaround: None
