
VMware Cloud Foundation 4.5 | 11 OCT 2022 | Build 20612863

Check for additions and updates to these release notes.

What's New

The VMware Cloud Foundation (VCF) 4.5 release includes the following:

  • Support for VCF+: This release introduces support for subscription cloud services including vSphere+ and vSAN+.

  • Improvements to using VCF at scale: Users can now add clusters in parallel and add/remove and commission/decommission hosts at scale.

  • Improvements to upgrade prechecks: Upgrade prechecks have been expanded to include license validation, NSX-T Edge cluster password validation, file permission checks, and validation of failed password and certificate rotation workflows. Noisy vSAN health checks can also be silenced.

  • Operational improvements: Users can now rename clusters and apply user-defined tags to objects. 

  • SDDC Manager Onboarding Workflow: The SDDC Manager UI provides an easy, wizard-like interface for new users to configure their VCF deployment.

  • Storage improvements: With HCI Mesh, a cluster can mount a remote vSAN datastore that is configured on another cluster, allowing two or more clusters to share the same vSAN datastore.

  • Accessibility improvements: This release resolves critical accessibility issues to provide a fully accessible interface.

  • Migration enablement: This release introduces support for Mixed Mode migrations and supports new topologies for migration from VCF 3.x through 4.x.

  • VMware Validated Solutions: All existing solutions have been updated to support VMware Cloud Foundation 4.5, including the use of the vRealize Automation Cloud service with the VMware Cloud Foundation platform. See VMware Validated Solutions – October 2022 Update.

  • VMware Cloud Foundation with Skyline Health Diagnostics: New content available for Proactive Diagnostics of VMware Cloud Foundation with Skyline Health Diagnostics.

  • BOM updates: Updated Bill of Materials with new product versions.

VMware Cloud Foundation Bill of Materials (BOM)

The VMware Cloud Foundation software product comprises the following software Bill of Materials (BOM). The components in the BOM are interoperable and compatible.

Software Component                        Version         Date          Build Number
Cloud Builder VM                          4.5             11 OCT 2022   20612863
SDDC Manager                              4.5             11 OCT 2022   20612863
VMware vCenter Server Appliance           7.0 Update 3h   13 SEP 2022   20395099
VMware ESXi                               7.0 Update 3g   01 SEP 2022   20328353
VMware vSAN Witness Appliance             7.0 Update 3c   27 JAN 2022   19193900
VMware NSX-T                              3.2.1.2         04 OCT 2022   20541212
VMware vRealize Suite Lifecycle Manager   8.8.2           12 JUL 2022   20080494

  • VMware vSAN is included in the VMware ESXi bundle.

  • You can use vRealize Suite Lifecycle Manager to deploy vRealize Automation, vRealize Operations Manager, vRealize Log Insight, and Workspace ONE Access. vRealize Suite Lifecycle Manager determines which versions of these products are compatible and only allows you to install/upgrade to supported versions.

  • vRealize Log Insight content packs are installed when you deploy vRealize Log Insight.

  • The vRealize Operations Manager management pack is installed when you deploy vRealize Operations Manager.

  • You can access the latest versions of the content packs for vRealize Log Insight from the VMware Solution Exchange and the vRealize Log Insight in-product marketplace store.

VMware Software Edition License Information

The SDDC Manager software is licensed under the VMware Cloud Foundation license. As part of this product, the SDDC Manager software deploys specific VMware software products.

The following VMware software components deployed by SDDC Manager are licensed under the VMware Cloud Foundation license:

  • VMware ESXi

  • VMware vSAN

  • VMware NSX-T Data Center

The following VMware software components deployed by SDDC Manager are licensed separately:

  • VMware vCenter Server

NOTE Only one vCenter Server license is required for all vCenter Servers deployed in a VMware Cloud Foundation system.

For details about the specific VMware software editions that are licensed under the licenses you have purchased, see the VMware Cloud Foundation Bill of Materials (BOM).

For general information about the product, see VMware Cloud Foundation.

Supported Hardware

For details on supported configurations, see the VMware Compatibility Guide (VCG) and the Hardware Requirements section on the Prerequisite Checklist tab in the Planning and Preparation Workbook.

Documentation

To access the Cloud Foundation documentation, go to the VMware Cloud Foundation product documentation.

To access the documentation for VMware software products that SDDC Manager can deploy, see the product documentation and use the drop-down menus on the page to choose the appropriate version.

Browser Compatibility and Screen Resolutions

The VMware Cloud Foundation web-based interface supports the latest two versions of the following web browsers:

  • Google Chrome 89 or later

  • Mozilla Firefox 80 or later

  • Microsoft Edge 90 or later

For the Web-based user interfaces, the supported standard resolution is 1024 by 768 pixels. For best results, use a screen resolution within these tested resolutions:

  • 1024 by 768 pixels (standard)

  • 1366 by 768 pixels

  • 1280 by 1024 pixels

  • 1680 by 1050 pixels

Resolutions below 1024 by 768, such as 640 by 960 or 480 by 800, are not supported.

Installation and Upgrade Information

You can install VMware Cloud Foundation 4.5 as a new release or perform a sequential or skip-level upgrade to VMware Cloud Foundation 4.5.

  • Installing as a New Release

The new installation process has three phases:

Phase One: Prepare the Environment

The Planning and Preparation Workbook provides detailed information about the software, tools, and external services that are required to implement a Software-Defined Data Center (SDDC) with VMware Cloud Foundation, using a standard architecture model.

Phase Two: Image all servers with ESXi

Image all servers with the ESXi version mentioned in the Cloud Foundation Bill of Materials (BOM) section. See the VMware Cloud Foundation Deployment Guide for information on installing ESXi.

Phase Three: Install Cloud Foundation 4.5

See the VMware Cloud Foundation Deployment Guide for information on deploying Cloud Foundation.

  • Upgrading to Cloud Foundation 4.5

You can perform a sequential or skip-level upgrade to VMware Cloud Foundation 4.5 from VMware Cloud Foundation 4.2.1 or later. If your environment is at a version earlier than 4.2.1, you must upgrade the management domain and all VI workload domains to VMware Cloud Foundation 4.2.1 and then upgrade to VMware Cloud Foundation 4.5. For more information see VMware Cloud Foundation Lifecycle Management.

Important:

Before you upgrade a vCenter Server, take a file-based backup. See Manually Back Up vCenter Server.

Note:

Since VMware Cloud Foundation 4.5 disables the SSH service by default, scripts that rely on SSH being enabled on ESXi hosts will not work after upgrading to VMware Cloud Foundation 4.5. Update your scripts to account for this new behavior. See KB 86230 for information about enabling and disabling the SSH service on ESXi hosts.
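As a hedged illustration only (the supported procedure is described in KB 86230): on an ESXi host where you have ESXi Shell or console access, the SSH service can typically be re-enabled with the following commands. The exact service policy you want may differ by environment.

# assumption: run from the ESXi Shell on the affected host
vim-cmd hostsvc/enable_ssh    # set the SSH service policy so it persists across reboots
vim-cmd hostsvc/start_ssh     # start the SSH service immediately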

Resolved Issues

The following issues are resolved in this release.

  • The VMware Cloud Foundation API ignores NSX VDS uplink information for in-cluster expansion of an NSX Edge cluster

  • ESXi upgrade fails with the error "Incompatible patch or upgrade files. Please verify that the patch file is compatible with the host. Refer LCM and VUM log file."

  • vRealize Operations Manager upgrade fails on the step VREALIZE_UPGRADE_PREPARE_BACKUP with the error: Waiting for vRealize Operations cluster to change state timed out

  • SDDC Manager UI Application upgrade fails with Password Authentication Exception

  • View Status information for an update shows the wrong component while an update is in progress.

  • Bringup fails when creating NSX-T Data Center transport nodes.

  • Cannot reuse a static IP pool that includes special characters in its name.

  • A workload domain precheck incorrectly shows that it completed successfully.

Known Issues

Important:

Refer to KB 89966 for the list of KB articles related to critical issues impacting VMware Cloud Foundation 4.5 before you begin a deployment or upgrade.

VMware Cloud Foundation Known Issues

  • Workload Management does not support NSX-T Data Center Federation

    You cannot deploy Workload Management (vSphere with Tanzu) to a workload domain when that workload domain's NSX-T Data Center instance is participating in an NSX-T Data Center Federation.

    None.

  • NSX-T Guest Introspection (GI) and NSX-T Service Insertion (SI) are not supported on stretched clusters

    There is no support for stretching clusters where NSX-T Guest Introspection (GI) or NSX-T Service Insertion (SI) are enabled. VMware Cloud Foundation detaches Transport Node Profiles from AZ2 hosts to allow AZ-specific network configurations. NSX-T GI and NSX-T SI require that the same Transport Node Profile be attached to all hosts in the cluster.

    None.

  • Stretched clusters and Workload Management

    You cannot stretch a cluster on which Workload Management is deployed.

    None.

Upgrade Known Issues

  • Bundle Transfer Utility "patch" option does not download all the required upgrade bundles

    When you use the Bundle Transfer Utility to download bundles and you select the patch bundles option, the utility does not download the PATCH (Drift) bundle.

    Workaround: Enter a comma-separated list of all the patch bundles, including PATCH (Drift).

  • New - The VxRail async patch 7.0.410 bundle is visible in the Lifecycle Manager (LCM) bundle management UI with an availability status of "future"

    If you connect to the VMware Depot, the VxRail async patch bundle 7.0.410 might be visible in the Lifecycle Manager (LCM) UI. This is a known issue caused by the async patch bundle information being added to the existing partner bundle metadata (PBM) file. The issue has been resolved in the latest PBM file, which has already been published. If you still see this bundle in the LCM UI and have not used the Async Patch Tool to enable this patch, follow the workaround to remove it. After you complete the workaround, the bundle is no longer displayed in the LCM UI.

    Workaround: Perform the bundle cleanup of VxRail async patch 7.0.410 by following the steps in KB 75050.

  • New - When upgrading VCF 4.x instances to VCF 4.5, third-party upgrade fails during VIP service installation

    If the password policy for the root user on SDDC Manager is set to 'Password expires never', the SDDC upgrade workflow changes it to 365 days. As a result, on a VCF instance where the root/vcf password was last reset more than 365 days ago, the password expires immediately and RPM installations fail.

    Workaround:

    • If you have not started the upgrade process, check whether the root password was last changed 365 or more days ago. If so, change the root password before starting the upgrade process.

    • If your upgrade reached the failed state, change the root password and retry the upgrade process.

      • Log in to SDDC Manager using the old vcf and root passwords

      • Reset the vcf and root passwords using the passwd vcf and passwd root commands

      • Go to the SDDC Manager UI and restart the failed workflow

    See KB article 89978 for more information.
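
    To check whether the root password is older than 365 days before you upgrade, a minimal sketch (assuming the chage utility is available on the SDDC Manager appliance):

    # run as root on the SDDC Manager VM
    chage -l root | grep 'Last password change'   # if the date is 365 or more days ago, reset the passwords
    passwd root                                   # reset the root password
    passwd vcf                                    # reset the vcf password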

  • SDDC Manager upgrade fails at "Setup Common Appliance Platform"

    If a virtual machine reconfiguration task (for example, removing a snapshot or running a backup) is taking place in the management domain at the same time you are upgrading SDDC Manager, the upgrade may fail.

    Workaround: Schedule SDDC Manager upgrades for a time when no virtual machine reconfiguration tasks are happening in the management domain. If you encounter this issue, wait for the other tasks to complete and then retry the upgrade.

  • Parallel upgrades of vCenter Server are not supported

    If you attempt to upgrade vCenter Server for multiple VI workload domains at the same time, the upgrade may fail while changing the permissions for the vpostgres configuration directory in the appliance. The message chown -R vpostgres:vpgmongrp /storage/archive/vpostgres appears in the PatchRunner.log file on the vCenter Server Appliance.

    Workaround: Each vCenter Server instance must be upgraded separately.

  • When you upgrade VMware Cloud Foundation, one of the vSphere Cluster Services (vCLS) agent VMs gets placed on local storage

    vSphere Cluster Services (vCLS) ensures that cluster services remain available, even when the vCenter Server is unavailable. vCLS deploys three vCLS agent virtual machines to maintain cluster services health. When you upgrade VMware Cloud Foundation, one of the vCLS VMs may get placed on local storage instead of shared storage. This could cause issues if you delete the ESXi host on which the VM is stored.

    Workaround: Deactivate and reactivate vCLS on the cluster to deploy all the vCLS agent VMs to shared storage.

    1. Check the placement of the vCLS agent VMs for each cluster in your environment.

      1. In the vSphere Client, select Menu > VMs and Templates.

      2. Expand the vCLS folder.

      3. Select the first vCLS agent VM and click the Summary tab.

      4. In the Related Objects section, check the datastore listed for Storage. It should be the vSAN datastore. If a vCLS agent VM is on local storage, you need to deactivate vCLS for the cluster and then re-enable it.

      5. Repeat these steps for all vCLS agent VMs.

    2. Deactivate vCLS for clusters that have vCLS agent VMs on local storage.

      1. In the vSphere Client, click Menu > Hosts and Clusters.

      2. Select a cluster that has a vCLS agent VM on local storage.

      3. In the web browser address bar, note the moref id for the cluster.

        For example, if the URL displays as https://vcenter-1.vrack.vsphere.local/ui/app/cluster;nav=h/urn:vmomi:ClusterComputeResource:domain-c8:503a0d38-442a-446f-b283-d3611bf035fb/summary, then the moref id is domain-c8.

      4. Select the vCenter Server containing the cluster.

      5. Click Configure > Advanced Settings.

      6. Click Edit Settings.

      7. Change the value for config.vcls.clusters.<moref id>.enabled to false and click Save.

        If the config.vcls.clusters.<moref id>.enabled setting does not appear for your moref id, then enter its Name and false for the Value and click Add.

      8. Wait a couple of minutes for the vCLS agent VMs to be powered off and deleted. You can monitor progress in the Recent Tasks pane.

    3. Enable vCLS for the cluster to place the vCLS agent VMs on shared storage.

      1. Select the vCenter Server containing the cluster and click Configure > Advanced Settings.

      2. Click Edit Settings.

      3. Change the value for config.vcls.clusters.<moref id>.enabled to true and click Save.

      4. Wait a couple of minutes for the vCLS agent VMs to be deployed and powered on. You can monitor progress in the Recent Tasks pane.

    4. Check the placement of the vCLS agent VMs to make sure they are all on shared storage.
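
    As an alternative to the UI steps above, the same vCenter Server advanced setting can be toggled from the command line. This is a hedged sketch using the open-source govc CLI (not part of VMware Cloud Foundation); it assumes govc is installed, GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD point at the vCenter Server that contains the cluster, and domain-c8 is the example moref id from step 2. If the setting does not already exist, add it in the UI as described in step 2.

    # deactivate vCLS for the cluster (the vCLS agent VMs are powered off and deleted)
    govc option.set config.vcls.clusters.domain-c8.enabled false
    # after the agent VMs are gone, re-enable vCLS so the VMs are redeployed on shared storage
    govc option.set config.vcls.clusters.domain-c8.enabled true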

  • SDDC Manager UI issues when running multiple parallel upgrade prechecks

    If you initiate a precheck on more than one workload domain at the same time, the SDDC Manager UI may flicker and show incorrect information.

    Workaround: Do not run multiple parallel prechecks. If you do, wait until the prechecks are complete to evaluate the results.

  • Upgrade precheck results for ESXi display the error "TPM 2.0 device detected but a connection cannot be established."

    This issue can occur for ESXi hosts with a partially configured Trusted Platform Module (TPM) chip.

    Workaround: Ensure that the TPM is configured in the ESXi host's BIOS to use the SHA-256 hashing algorithm and the TIS/FIFO (First-In, First-Out) interface and not CRB (Command Response Buffer). For information about setting these required BIOS options, refer to the vendor documentation.

  • Performing parallel prechecks on multiple workload domains that use vSphere Lifecycle Manager images may fail

    If you perform parallel prechecks on multiple workload domains that use vSphere Lifecycle Manager images at the same time as you are performing parallel upgrades, the prechecks may fail.

    Workaround: Use the following guidance to plan your upgrades and prechecks for workload domains that use vSphere Lifecycle Manager images.

    • For parallel upgrades, VMware Cloud Foundation supports up to five workload domains with up to five clusters each.

    • For parallel prechecks, VMware Cloud Foundation supports up to three workload domains with up to four clusters each.

    • Do not run parallel upgrades and prechecks at the same time.

  • Using the /v1/upgrades API to trigger parallel cluster upgrades across workload domains in a single API call does not upgrade the clusters in parallel

    When using the VMware Cloud Foundation API to upgrade multiple workload domains in parallel, including multiple resource upgrade specifications (resourceUpgradeSpec) in a single domain upgrade API (/v1/upgrades) call does not work as expected.

    Workaround: To get the best performance when upgrading multiple workload domains in parallel using the VMware Cloud Foundation API, do not include multiple resource upgrade specifications (resourceUpgradeSpec) in a single domain upgrade call. Instead, invoke the domain upgrade multiple times with a single resourceUpgradeSpec for each workload domain.

    You can also use the SDDC Manager UI to trigger multiple parallel upgrades across workload domains.
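
    A minimal sketch of the API approach described above, assuming you have already prepared one single-domain upgrade spec file per workload domain (the spec contents themselves are not shown here) and that $TOKEN holds a valid VMware Cloud Foundation API access token for SDDC Manager at $SDDC_MANAGER_FQDN:

    # submit one domain upgrade call per spec file instead of one call with multiple resourceUpgradeSpec entries
    for spec in domain-*.json; do
      curl -sk -X POST "https://$SDDC_MANAGER_FQDN/v1/upgrades" \
        -H "Authorization: Bearer $TOKEN" \
        -H "Content-Type: application/json" \
        -d @"$spec"
    done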

  • SDDC Manager UI shows older VMware Cloud Foundation version after upgrading to 4.5

    If you had vRealize Suite Lifecycle Manager deployed prior to upgrading to VMware Cloud Foundation 4.5, the SDDC Manager UI will not display the version as 4.5 until you upgrade vRealize Suite Lifecycle Manager.

    The VMware Cloud Foundation 4.5 BOM requires vRealize Suite Lifecycle Manager 8.8.2 or higher.

    Workaround: Upgrade vRealize Suite Lifecycle Manager to version 8.8.2 or higher. See Upgrade vRealize Suite Lifecycle Manager for VMware Cloud Foundation.

  • NSX-T Data Center upgrade fails at "NSX T PERFORM BACKUP"

    If you did not change the destination of NSX Manager backups to an external SFTP server, upgrades may fail due to an out-of-date SSH fingerprint for SDDC Manager.

    Workaround:

    1. Log in to the NSX Manager UI.

    2. Click System > Backup & Restore.

    3. Click Edit for the SFTP Server.

    4. Remove the existing SSH fingerprint and click Save.

    5. Click Add to add the server provided fingerprint.

    6. Click Save.

    7. Retry the NSX-T Data Center upgrade from the SDDC Manager UI.
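
    If you need the current SSH host key fingerprint of SDDC Manager (the default SFTP backup target) to compare against the value stored in NSX Manager, a hedged sketch using standard OpenSSH tools (it assumes the RSA or ECDSA host key is the one NSX uses; <sddc-manager-fqdn> is a placeholder):

    # run from any machine that can reach SDDC Manager
    ssh-keyscan -t rsa,ecdsa <sddc-manager-fqdn> 2>/dev/null > /tmp/sddc-hostkeys
    ssh-keygen -lf /tmp/sddc-hostkeys -E sha256    # prints the fingerprints to compare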

  • Cluster-level ESXi upgrade fails

    Cluster-level selection during upgrade does not consider the health status of the clusters and may show a cluster's status as Available, even for a faulty cluster. If you select a faulty cluster, the upgrade fails.

    Always perform an update precheck to validate the health status of the clusters. Resolve any issues before upgrading.

  • You are unable to update NSX-T Data Center in the management domain or in a workload domain with vSAN principal storage because of an error during the NSX-T transport node precheck stage

    In SDDC Manager, when you run the upgrade precheck before updating NSX-T Data Center, the NSX-T transport node validation fails with the following error.

    No coredump target has been configured. Host core dumps cannot be saved.:System logs on host sfo01-m01-esx04.sfo.rainpole.io are stored on non-persistent storage. Consult product documentation to configure a syslog server or a scratch partition.

    Because the upgrade precheck fails with an error, you cannot proceed with updating the NSX-T Data Center instance in the domain. VMware Validated Design supports vSAN as the principal storage in the management domain. However, vSAN datastores do not support scratch partitions. See VMware Knowledge Base article 2074026.

    Disable the update precheck validation for the subsequent NSX-T Data Center update.

    1. Log in to SDDC Manager as vcf using a Secure Shell (SSH) client.

    2. Open the application-prod.properties file for editing: vi /opt/vmware/vcf/lcm/lcm-app/conf/application-prod.properties

    3. Add the following property and save the file: lcm.nsxt.suppress.prechecks=true

    4. Restart the life cycle management service: systemctl restart lcm

    5. Log in to the SDDC Manager user interface and proceed with the update of NSX-T Data Center.
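
    A condensed form of steps 2-4 (same file path and property name as above; run with sufficient privileges on the SDDC Manager VM):

    # append the property and restart the LCM service
    echo 'lcm.nsxt.suppress.prechecks=true' >> /opt/vmware/vcf/lcm/lcm-app/conf/application-prod.properties
    systemctl restart lcm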

  • NSX-T upgrade may fail at the step NSX T TRANSPORT NODE POSTCHECK STAGE

    NSX-T upgrade may not proceed beyond the NSX T TRANSPORT NODE POSTCHECK STAGE.

    Contact VMware support.

  • Update precheck fails with the error "Password has expired"

    If the vCenter Single Sign-On password policy specifies a maximum lifetime of zero (never expires), the precheck fails.

    Workaround: Set the maximum lifetime password policy to something other than zero and retry the precheck.

  • Skip level upgrades are not enabled for some product components after VMware Cloud Foundation is upgraded to 4.3

    After performing skip level upgrade to VMware Cloud Foundation 4.3 from 4.1.x or 4.2.x, one or more of the following symptoms is observed:

    • vRealize bundles do not show up as available for upgrade

    • Bundles for previous versions of some product components (NSX-T Data Center, vCenter Server, ESXi) show up as available for upgrade

    See: KB 85505

Bring-up Known Issues

  • Deprecated the ability to join an existing Single Sign-On (SSO) domain during bringup

    VMware Cloud Foundation 4.5 deprecates the ability to join a new VCF instance to an existing SSO domain during bringup.

    Workaround: None.

  • Bring-up Network Configuration Validation fails with "Gateway IP Address for Management is not contactable"

    The failure "Gateway IP Address for MANAGEMENT is not contactable" is reported as a fatal error in the Cloud Builder UI and bring-up cannot continue. In some cases, the validation fails because it checks connectivity on a set of predefined ports, even though the gateway responds to ping.

    See KB 89990 for more information.

  • The Cloud Foundation Builder VM remains locked after more than 15 minutes.

    The VMware Imaging Appliance (VIA) locks out the admin user after three unsuccessful login attempts. Normally, the lockout resets after fifteen minutes, but the underlying Cloud Foundation Builder VM does not automatically unlock the account.

    Log in to the VM console of the Cloud Foundation Builder VM as the root user. Unlock the account by resetting the failed login counter for the admin user with the following command:

    pam_tally2 --user=<user> --reset

  • Error in the Deployment Parameter Workbook when selecting Profile-2 for the vSphere Distributed Switch Profile

    If you select Profile-2 for the vSphere Distributed Switch Profile on the Hosts and Networks worksheet, then the cell below will show #REF! instead of showing a summary of the vSphere Distributed Switches to be deployed.

    This only affects the display of the descriptive information about Profile-2.

    Workaround:

    1. Click File > Info in the Deployment Parameter Workbook.

    2. Click Unprotect for the Hosts and Networks worksheet.

    3. Modify the formula in the cell below the profile selection (B23) to have the following value:

      =IF(E22="Profile-1",Lookup_Lists!C2,IF(E22="Profile-2",Lookup_Lists!C3,IF(E22="Profile-3",Lookup_Lists!C5,)))
    4. Protect the Hosts and Networks sheet again.

SDDC Manager Known Issues

  • vRealize Operations Manager admin account appears as disconnected

    SDDC Manager incorrectly shows the vRealize Operations Manager admin account as disconnected due to an expired password. The admin account password used for logging into the vRealize Operations Manager UI never expires, but SDDC Manager is actually checking the virtual appliance (Photon OS) admin account password.

    Workaround: To clear the expired password/disconnected alert in SDDC Manager:

    1. Log in to the affected vRealize Operations Manager node and update the virtual appliance admin password.

    2. In SDDC Manager, remediate (or rotate or update) the password for the expired account. Or, use the VMware Cloud Foundation API to run POST /v1/credentials/expirations.
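
    A hedged sketch of that API call, assuming $TOKEN holds a valid SDDC Manager API access token and $SDDC_MANAGER_FQDN is the SDDC Manager FQDN:

    # trigger a fresh credential expiration scan so the stale alert is re-evaluated
    curl -sk -X POST "https://$SDDC_MANAGER_FQDN/v1/credentials/expirations" \
      -H "Authorization: Bearer $TOKEN"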

  • Updating DNS/NTP server does not apply the update to all NSX Managers

    If you update the NTP or DNS server information for a VMware Cloud Foundation instance that includes more than one NSX Manager, only one of the NSX Managers gets updated with the new information.

    Workaround: Use the NSX Manager API or CLI to manually update the DNS/NTP server information for the remaining NSX Manager(s).

  • Metadata.json file contains the incorrect URL for downloading the SDDC Manager OVA

    When you follow the procedure to Prepare for Restoring SDDC Manager, the sddc_manager_ova_location in the metadata.json file contains the incorrect URL for the SDDC Manager OVA file. The naming convention for the SDDC Manager OVA was changed for VMware Cloud Foundation 4.5, but the change is not reflected in metadata.json.

    Workaround: The correct URL for downloading the SDDC Manager OVA for VMware Cloud Foundation 4.5 is: https://depot.vmware.com/PROD2/evo/vmw/sddcmanagerova/VCF-SDDC-Manager-Appliance-4.5.0.0-20612863.tar.
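
    A minimal sketch for updating the value in place (assumes jq is available and metadata.json is in the current directory, per the Prepare for Restoring SDDC Manager procedure):

    # point sddc_manager_ova_location at the corrected OVA URL
    jq '.sddc_manager_ova_location = "https://depot.vmware.com/PROD2/evo/vmw/sddcmanagerova/VCF-SDDC-Manager-Appliance-4.5.0.0-20612863.tar"' \
      metadata.json > metadata.json.tmp && mv metadata.json.tmp metadata.json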

  • Unable to run SoS operations when invoked with sudo

    When you are logged in to the SDDC Manager appliance as the vcf user, running SoS commands with sudo fails. For example:

    sudo /opt/vmware/sddc-support/sos --health-check

    Workaround: Run SoS commands as the root user or apply the workaround in KB 91104.
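
    For example, from a vcf login session, switch to root and then run SoS directly (these are interactive commands, not a script):

    su -
    /opt/vmware/sddc-support/sos --health-check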

  • SDDC Manager UI shows VMware Cloud Foundation+ subscription information

    Customers who deploy or upgrade to VMware Cloud Foundation 4.5 will see "Start Subscription" information in the SDDC Manager UI, even if they are using perpetual licenses.

    Workaround: None. If you are using perpetual licenses, the subscription information is not relevant.

  • Name resolution fails when configuring the NTP server

    Under certain conditions, name resolution may fail when you configure an NTP server.

    Workaround: Run the following command using the FQDN of the failed resource(s) to ensure name resolution is successful and then retry the NTP server configuration.

    nslookup <FQDN>
  • SDDC Manager UI always shows Local OS as default identity source

    If you add Active Directory over LDAP or OpenLDAP as an identity source in SDDC Manager and use the vSphere Client to set that identity source as the default, the SDDC Manager UI (Administration > Single Sign On > Identity Provider) continues to show Local OS as the default identity source.

    Workaround: Use the vSphere Client to confirm the actual default identity source.

  • SDDC Manager UI issues when using Google Chrome

    Some versions of Google Chrome may have issues properly rendering the SDDC Manager UI screens.

    Workaround: Use a different web browser.

  • Deployment of vRealize Suite products fails after a cluster is renamed in SDDC Manager

    If you rename a cluster in the SDDC Manager UI, deploying a vRealize Suite product may fail with the error: "No cluster found with provided details. Ensure the provided cluster is present in vCenter or retry giving the right cluster details. Invalid cluster passed for the request."

    Workaround: Use the vRealize Suite Lifecycle Manager UI to refresh the vCenter data collection from the vRealize Suite Lifecycle Manager settings page and then retry the deployment.

  • Disabling CEIP on SDDC Manager does not disable CEIP on vRealize Automation and vRealize Suite Lifecycle Manager

    When you disable CEIP on the SDDC Manager Dashboard, data collection is not disabled on vRealize Automation and vRealize Suite Lifecycle Manager. This is because of API deprecation in vRealize Suite 8.x.

    Workaround: Manually disable CEIP in vRealize Automation and vRealize Suite Lifecycle Manager. For more information, see VMware vRealize Automation Documentation and VMware vRealize Suite Lifecycle Manager Documentation.

  • Generate CSR task for a component hangs

    When you generate a CSR, the task may fail to complete due to issues with the component's resources. For example, when you generate a CSR for NSX Manager, the task may fail to complete due to issues with an NSX Manager node. You cannot retry the task once the resource is up and running again.

    1. Log in to the UI for the component to troubleshoot and resolve any issues.

    2. Using SSH, log in to the SDDC Manager VM with the user name vcf.

    3. Type su to switch to the root account.

    4. Run the following command: systemctl restart operationsmanager

    5. Retry generating the CSR.

  • SoS utility options for health check are missing information

    Due to limitations of the ESXi service account, some information is unavailable in the following health check options:

    • --hardware-compatibility-report: No Devices and Driver information for ESXi hosts.

    • --storage-health: No vSAN Health Status or Total no. of disks information for ESXi hosts.

    None.

Workload Domain Known Issues

  • Adding host fails when host is on a different VLAN

    A host add operation can sometimes fail if the host is on a different VLAN.

    1. Before adding the host, add a new portgroup to the VDS for that cluster.

    2. Tag the new portgroup with the VLAN ID of the host to be added.

    3. Add the Host. This workflow fails at the "Migrate host vmknics to dvs" operation.

    4. Locate the failed host in vCenter, and migrate the vmk0 of the host to the new portgroup you created in step 1. For more information, see Migrate VMkernel Adapters to a vSphere Distributed Switch in the vSphere product documentation.

    5. Retry the Add Host operation.

    NOTE: If you later remove this host, you must also manually remove the portgroup if it is not being used by any other host.

  • Deploying partner services on an NSX-T workload domain displays an error

    Deploying partner services, such as McAfee or Trend, on a workload domain enabled for vSphere Update Manager (VUM), displays the “Configure NSX at cluster level to deploy Service VM” error.

    Attach the Transport node profile to the cluster and try deploying the partner service. After the service is deployed, detach the transport node profile from the cluster.

  • If the witness ESXi version does not match with the host ESXi version in the cluster, vSAN cluster partition may occur

    vSAN stretch cluster workflow does not check the ESXi version of the witness host. If the witness ESXi version does not match the host version in the cluster, then vSAN cluster partition may happen.

    1. Upgrade the witness host manually to the matching ESXi version using vCenter VUM functionality.

    2. Alternatively, replace or redeploy the witness appliance with the matching ESXi version.

  • Adding a host to a vLCM-enabled workload domain configured with the Dell Hardware Support Manager (OMIVV) fails

    When you try to add a host to a vSphere cluster for a workload domain enabled with vSphere Lifecycle Manager (vLCM), the task fails and the domain manager log reports "The host (host-name) is currently not managed by OMIVV." The domain manager logs are located at /var/log/vmware/vcf/domainmanager on the SDDC Manager VM.

    Update the hosts inventory in OMIVV and retry the add host task in the SDDC Manager UI. See the Dell documentation for information about updating the hosts inventory in OMIVV.

  • Adding a vSphere cluster or adding a host to a workload domain fails

    Under certain circumstances, adding a host or vSphere cluster to a workload domain fails at the Configure NSX-T Transport Node or Create Transport Node Collection subtask.

    1. Enable SSH for the NSX Manager VMs.

    2. SSH into the NSX Manager VMs as admin and then log in as root.

    3. Run the following command on each NSX Manager VM: sysctl -w net.ipv4.tcp_en=0

    4. Login to NSX Manager UI for the workload domain.

    5. Navigate to System > Fabric > Nodes > Host Transport Nodes.

    6. Select the vCenter server for the workload domain from the Managed by drop-down menu.

    7. Expand the vSphere cluster and navigate to the transport nodes that are in a partial success state.

    8. Select the check box next to a partial success node, click Configure NSX.

    9. Click Next and then click Apply.

    10. Repeat steps 7-9 for each partial success node.

    When all host issues are resolved, transport node creation starts for the failed nodes. When all hosts are successfully created as transport nodes, retry the failed add vSphere cluster or add host task from the SDDC Manager UI.

  • The vSAN Performance Service is not enabled for vSAN clusters when CEIP is not enabled

    If you do not enable the VMware Customer Experience Improvement Program (CEIP) in SDDC Manager, when you create a workload domain or add a vSphere cluster to a workload domain, the vSAN Performance Service is not enabled for vSAN clusters. When CEIP is enabled, data from the vSAN Performance Service is provided to VMware and this data is used to aid VMware Support with troubleshooting and for products such as VMware Skyline, a proactive cloud monitoring service. See Customer Experience Improvement Program for more information on the data collected by CEIP.

    Enable CEIP in SDDC Manager. See the VMware Cloud Foundation Documentation. After CEIP is enabled, a scheduled task that enables the vSAN Performance Service on existing clusters in workload domains runs every three hours. The service is also enabled for new workload domains and clusters. To enable the vSAN Performance Service immediately, see the VMware vSphere Documentation.

  • Creation or expansion of a vSAN cluster with more than 32 hosts fails

    By default, a vSAN cluster can grow up to 32 hosts. With large cluster support enabled, a vSAN cluster can grow up to a maximum of 64 hosts. However, even with large cluster support enabled, a creation or expansion task can fail on the sub-task Enable vSAN on vSphere Cluster.

    1. Enable Large Cluster Support for the vSAN cluster in the vSphere Client. If it is already enabled, skip to step 2.

      1. Select the vSAN cluster in the vSphere Client.

      2. Select Configure > vSAN > Advanced Options.

      3. Enable Large Cluster Support.

      4. Click Apply.

      5. Click Yes.

    2. Run a vSAN health check to see which hosts require rebooting.

    3. Put the hosts into Maintenance Mode and reboot the hosts.

    For more information about large cluster support, see https://kb.vmware.com/kb/2110081.

  • Removing a host from a cluster, deleting a cluster from a workload domain, or deleting a workload domain fails if Service VMs (SVMs) are present

    If you deployed an endpoint protection service (such as guest introspection) to a cluster through NSX-T Data Center, then removing a host from the cluster, deleting the cluster, or deleting the workload domain containing the cluster will fail on the subtask Enter Maintenance Mode on ESXi Hosts.

    • For host removal: Delete the Service VM from the host and retry the operation.

    • For cluster deletion: Delete the service deployment for the cluster and retry the operation.

    • For workload domain deletion: Delete the service deployment for all clusters in the workload domain and retry the operation.

  • vCenter Server overwrites the NFS datastore name when adding a cluster to a VI workload domain

    If you add an NFS datastore with the same NFS server IP address, but a different NFS datastore name, as an NFS datastore that already exists in the workload domain, then vCenter Server applies the existing datastore name to the new datastore.

    If you want to add an NFS datastore with a different datastore name, then it must use a different NFS server IP address.

API Known Issues

  • Stretch cluster operation fails

    If the cluster that you are stretching does not include a powered-on VM with an operating system installed, the operation fails at the "Validate Cluster for Zero VMs" task.

    Make sure the cluster has a powered-on VM with an operating system installed before stretching the cluster.

vRealize Suite Known Issues

  • VMware vRealize Suite Lifecycle Manager 8.8.2 is now out of general support

    VMware vRealize Suite Lifecycle Manager 8.8.2 has reached End of General Support. See the VMware Product Lifecycle Matrix.

    Workaround: Customers are advised to upgrade VMware vRealize Suite Lifecycle Manager to a newer version by following the instructions in Upgrade vRealize Suite Lifecycle Manager for VMware Cloud Foundation. See the VMware Interoperability Matrix for information about which versions of VMware vRealize Suite Lifecycle Manager are supported with your version of VMware Cloud Foundation.

  • vRealize Log Insight 8.10 is not compatible with VMware Cloud Foundation 4.5

    Although vRealize Suite Lifecycle Manager allows you to upgrade to vRealize Log Insight 8.10, doing so will result in issues with password and certificate management.

    Workaround: None. Do not upgrade to vRealize Log Insight 8.10. This issue does not affect earlier versions of vRealize Log Insight (for example, 8.8).

  • Deprecation of NSX-T load balancers

    Following the deprecation of NSX-T load balancer APIs in the NSX-T Data Center 3.2 release, VMware Cloud Foundation also marks the NSX-T load balancer configuration for vRealize Suite products as deprecated.

    NOTE: There is no immediate impact on customers who use the NSX-T load balancer. More details about the end of support of this feature will follow in future releases.

    Workaround: None

  • vRealize Suite Lifecycle Manager reports a "FAILED" inventory sync

    After rotating a vCenter Server service account password in SDDC Manager, the inventory sync may fail for vRealize Suite environments managed by VMware Cloud Foundation.

    Workaround: Log in to vRealize Suite Lifecycle Manager to identify and troubleshoot the failed environment(s).
