
VMware Cloud Foundation 4.4.1 | 12 MAY 2022 | Build 19766960

VMware Cloud Foundation 4.4.1.1 | 30 JUN 2022 | Build 19948546

Check for additions and updates to these release notes.

What's New

The VMware Cloud Foundation (VCF) 4.4.1 release includes the following:

VMware Cloud Foundation Bill of Materials (BOM)

The VMware Cloud Foundation software product consists of the following software Bill of Materials (BOM). The components in the BOM are interoperable and compatible.

Software Component                        Version          Date         Build Number
Cloud Builder VM                          4.4.1            12 MAY 2022  19766960
SDDC Manager                              4.4.1            12 MAY 2022  19766960
VMware vCenter Server Appliance           7.0 Update 3d    29 MAR 2022  19480866
VMware ESXi                               7.0 Update 3d    29 MAR 2022  19482537
VMware Virtual SAN Witness Appliance      7.0 Update 3c    27 JAN 2022  19193900
VMware NSX-T Data Center                  3.1.3.7.4        6 MAY 2022   19762317
VMware vRealize Suite Lifecycle Manager   8.6.2 PSPAK 3    12 MAY 2022  19447709

  • VMware vSAN is included in the VMware ESXi bundle.

  • Specific vRealize Automation, vRealize Operations, vRealize Log Insight, and Workspace ONE Access (formerly known as VMware Identity Manager) versions are no longer listed in the VMware Cloud Foundation BOM. See the VMware Product Interoperability Matrix for information about which versions of these products are supported with VMware Cloud Foundation 4.4.1 and later. If your VMware Cloud Foundation instance includes vRealize Suite Lifecycle Manager 8.6.2 or later, use the vRealize Suite Lifecycle Manager UI to upgrade vRealize Suite Lifecycle Manager, vRealize Automation, vRealize Operations, vRealize Log Insight, and Workspace ONE Access to a supported version. See vRealize Suite Upgrade Paths on VMware Cloud Foundation 4.4.x+.

  • vRealize Log Insight content packs are installed when you deploy vRealize Log Insight.

  • The vRealize Operations Manager management pack is installed when you deploy vRealize Operations Manager.

  • You can access the latest versions of the content packs for vRealize Log Insight from the VMware Solution Exchange and the vRealize Log Insight in-product marketplace store.

VMware Software Edition License Information

The SDDC Manager software is licensed under the VMware Cloud Foundation license. As part of this product, the SDDC Manager software deploys specific VMware software products.

The following VMware software components deployed by SDDC Manager are licensed under the VMware Cloud Foundation license:

  • VMware ESXi

  • VMware vSAN

  • VMware NSX-T Data Center

The following VMware software components deployed by SDDC Manager are licensed separately:

  • VMware vCenter Server

NOTE: Only one vCenter Server license is required for all vCenter Servers deployed in a VMware Cloud Foundation system.

For details about the specific VMware software editions that are licensed under the licenses you have purchased, see the VMware Cloud Foundation Bill of Materials (BOM) section above.

For general information about the product, see VMware Cloud Foundation.

Supported Hardware

For details on supported configurations, see the VMware Compatibility Guide (VCG) and the Hardware Requirements section on the Prerequisite Checklist tab in the Planning and Preparation Workbook.

Documentation

To access the VMware Cloud Foundation documentation, go to the VMware Cloud Foundation product documentation.

For information about VMware Cloud Foundation 4.4.1 on Dell EMC VxRail, see https://docs.vmware.com/en/VMware-Cloud-Foundation/4.4.1/rn/vmware-cloud-foundation-441-on-dell-emc-vxrail-release-notes/index.html.

To access the documentation for VMware software products that SDDC Manager can deploy, see the product documentation and use the drop-down menus on the page to choose the appropriate version.

Browser Compatibility and Screen Resolutions

The Cloud Foundation web-based interface supports the latest two versions of the following web browsers (Internet Explorer is not supported):

  • Google Chrome 89 or later

  • Mozilla Firefox 80 or later

  • Microsoft Edge 90 or later

For the Web-based user interfaces, the supported standard resolution is 1024 by 768 pixels. For best results, use a screen resolution within these tested resolutions:

  • 1024 by 768 pixels (standard)

  • 1366 by 768 pixels

  • 1280 by 1024 pixels

  • 1680 by 1050 pixels

Resolutions below 1024 by 768, such as 640 by 960 or 480 by 800, are not supported.

Installation and Upgrade Information

You can install VMware Cloud Foundation 4.4.1 as a new release or perform a sequential or skip-level upgrade to VMware Cloud Foundation 4.4.1.

  • Installing as a New Release

The new installation process has three phases:

Phase One: Prepare the Environment

The Planning and Preparation Workbook provides detailed information about the software, tools, and external services that are required to implement a Software-Defined Data Center (SDDC) with VMware Cloud Foundation, using a standard architecture model.

Phase Two: Image all servers with ESXi

Image all servers with the ESXi version mentioned in the Cloud Foundation Bill of Materials (BOM) section. See the VMware Cloud Foundation Deployment Guide for information on installing ESXi.

Phase Three: Install Cloud Foundation 4.4.1

See the VMware Cloud Foundation Deployment Guide for information on deploying Cloud Foundation.

  • Upgrading to Cloud Foundation 4.4.1

You can perform a sequential or skip-level upgrade to VMware Cloud Foundation 4.4.1 from VMware Cloud Foundation 4.4, 4.3.1, 4.3, 4.2.1, 4.2, 4.1.0.1, or 4.1. If your environment is at a version earlier than 4.1, you must upgrade the management domain and all VI workload domains to VMware Cloud Foundation 4.1 and then upgrade to VMware Cloud Foundation 4.4.1. For more information see VMware Cloud Foundation Lifecycle Management.

IMPORTANT: Before you upgrade a vCenter Server, take a file-based backup. See Manually Back Up vCenter Server.

NOTE: Scripts that rely on SSH being activated on ESXi hosts will not work after upgrading to VMware Cloud Foundation 4.4.1, since VMware Cloud Foundation 4.4 deactivates the SSH service by default. Update your scripts to account for this new behavior. See KB 86230 for information about activating and deactivating the SSH service on ESXi hosts.
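
For scripts that still need SSH access during a maintenance window, the following is a minimal illustration, run from the ESXi Shell or host console, of temporarily re-activating the service. KB 86230 describes the supported procedure; the vim-cmd commands below are standard ESXi tooling and are shown here only as an example.

# Temporarily activate the SSH service on an ESXi host (example only; see KB 86230)
vim-cmd hostsvc/enable_ssh    # enable the SSH service
vim-cmd hostsvc/start_ssh     # start the SSH service
# ... run the script that relies on SSH ...
vim-cmd hostsvc/stop_ssh      # stop the SSH service again
vim-cmd hostsvc/disable_ssh   # return to the VCF 4.4 default of SSH deactivated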

VMware Cloud Foundation 4.4.1.1 Release Information

The following features are introduced in this release:

  • VMware Cloud Foundation 4.4.1.1 supports multiple custom ISOs in a single ESXi upgrade in cases where specific clusters or workload domains require different custom ISOs.

  • Stability improvements to the UI, synchronization, and logging for scale upgrades.

You can perform a sequential or skip-level upgrade to VMware Cloud Foundation 4.4.1.1 from Cloud Foundation 4.4.1, 4.3.1.1, 4.3.1, 4.3, 4.2.1, 4.2, 4.1.0.1, or 4.1. If your environment is at a version earlier than 4.1, you must upgrade the management domain and all VI workload domains.

To upgrade the management domain, apply the following bundles, in order:

NOTE: Before triggering the upgrade to VCF 4.4.1.1, download all VCF 4.4.1 component upgrade/install bundles.

  • VMware Cloud Foundation bundle

  • Configuration drift bundle

NOTE: When you are upgrading from VMware Cloud Foundation 4.4.1 to 4.4.1.1, no configuration drift bundle is required.

VMware Cloud Foundation 4.4.1.1 contains the following BOM update:

Software Component   Version    Date         Build Number
SDDC Manager         4.4.1.1    30 JUN 2022  19948546

Resolved Issues

The following issues are resolved in this release:

  • Adding ESXi hosts that use VMFS on FC storage using the SDDC Manager UI fails

  • ESXi hosts are not exiting from Maintenance Mode during upgrade

  • VMware Update Manager panics during ESXi upgrade

  • vCenter backup acknowledgment popup does not appear through the "Schedule Update" and "Update Now" workflows

  • Workload domain upgrade fails due to file permission issue

  • SDDC Manager 4.4.1 provides the following security updates to OSS packages:

    • Cron-utils is updated to version 9.1.6

    • Netty is updated to version 4.1.71

    • Jackson is updated to version 2.13.0

    • Tomcat is updated to version 9.0.62

    • Xstream is updated to version 1.4.19

    • Liquibase-core is updated to version 4.8.0

    • Spring-framework is updated to version 5.2.20

Known Issues

VMware Cloud Foundation Known Issues

  • Workload Management does not support NSX-T Data Center Federation

    You cannot deploy Workload Management (vSphere with Tanzu) to a workload domain when that workload domain's NSX-T Data Center instance is participating in an NSX-T Data Center Federation.

    Workaround: None.

  • NSX-T Guest Introspection (GI) and NSX-T Service Insertion (SI) are not supported on stretched clusters

    There is no support for stretching clusters where NSX-T Guest Introspection (GI) or NSX-T Service Insertion (SI) are enabled. VMware Cloud Foundation detaches Transport Node Profiles from AZ2 hosts to allow AZ-specific network configurations. NSX-T GI and NSX-T SI require that the same Transport Node Profile be attached to all hosts in the cluster.

    Workaround: None.

  • Stretched clusters and Workload Management

    You cannot stretch a cluster on which Workload Management is deployed.

    Workaround: None.

Upgrade Known Issues

  • Bundle Transfer Utility "patch" option does not download all the required upgrade bundles

    When you use the Bundle Transfer Utility to download bundles and you select the patch bundles option, the utility does not download the PATCH (Drift) bundle.

    Workaround: Enter a comma-separated list of all the patch bundles, including PATCH (Drift).

  • New - VxRail async patch 7.0.410 bundle is visible in the Lifecycle Manager (LCM) bundle management UI page with an availability status of "future"

    If you connect to the VMware Depot, the VxRail async patch bundle 7.0.410 might be visible in the Lifecycle Manager (LCM) UI. This is a known issue caused by the async patch bundle information being added to the existing partner bundle metadata (PBM) file. The issue has been resolved in the latest PBM, which has already been published. If you still see this bundle in the LCM UI and have not used the Async Patch Tool to enable this patch, follow the workaround to remove the patch. After you complete the workaround, the bundle is no longer displayed in the LCM UI.

    Workaround: Perform the bundle cleanup of VxRail async patch 7.0.410 by following the steps in KB 75050.

  • Update precheck warning for ESXi Third Party VIBs includes incorrect link

    The remediation message for the warning includes a link to the VMware Cloud Foundation 3.5 documentation.

    Workaround: Use the VMware Cloud Foundation 4.4 documentation link instead: Upgrade ESXi with Custom ISOs.

  • Update precheck shows ? (question mark) for the vLCM host hardware device validation task on non-vSAN domains

    When you run an update precheck on workload domains that use vSphere Lifecycle Manager images and NFS for principal storage, the vLCM host hardware device validation check fails with 0 errors and 0 warnings and shows a ? (question mark) as its precheck status. This occurs because vSAN HCL checks are not skipped on non-vSAN clusters.

    Workaround: Ignore the ? (question mark) shown for the vLCM host hardware device validation task and continue with the upgrade.

  • NSX-T Edge cluster upgrade option is grayed out because an HSM API call fails for a vCenter Server

    When you start the NSX-T upgrade, the Edge-only upgrade option is grayed out.

    You might come across the following error in /var/log/vmware/vcf/lcm/lcm-debug.log: Exception occurred while loading the Hardware Support Info for the domain 839b92fb-7bb5-4c39-b1f3-05999a117f86

    Check whether the vCenter Server corresponding to the domain ID is in a functional state.

    Workaround: Reboot the vCenter Server and wait for it to come back online before retrying the upgrade. Once the vCenter Server is online, the option is available again.

  • Retrying the NSX-T upgrade from VCF shows one of the vLCM host clusters as failed, but when the retry is triggered from VCF the upgrade passes quickly. In reality, the specific host cluster is reported as failed or in the Install_SKIP state in NSX.

    The issue occurs because VCF communicates with NSX Manager to determine whether the specific vLCM host cluster is upgraded, but it uses vLCM to trigger and monitor the NSX host cluster upgrade workflow. During an upgrade retry, vLCM falsely reports the hosts (marked as install skipped) as compliant, even though on the NSX side the actual upgrade (post check) failed for the specific host cluster.

    Workaround: Perform the following steps to resolve the host upgrade failure.

    1. On the NSX Manager UI, select System > Fabric > Nodes and perform the resolve action for the failed cluster.

    2. Go to the NSX-T Upgrade Coordinator UI under System > Upgrade and resume or retry the host upgrade.

    After you perform this workaround, the upgraded host cluster no longer appears as a failed cluster when retrying the upgrade. See VMware Knowledge Base article 88787 for more information.

  • Offline customers see NSX-T 3.1.3.7.2 (10 May 2022) instead of 3.1.3.7.4 until they connect to the depot

    Once you connect to the depot, the dynamic manifest is downloaded automatically, overwriting the incorrect values in the description. There are no technical side effects.

  • View Status information for an update shows the wrong component while an update is in progress.

    When viewing the status of an update, the SDDC Manager UI may display information about a previously updated component.

    Workaround: To view update status for the current component, refresh your browser page.

  • Skip-level upgrade from VMware Cloud Foundation 4.1 to 4.4.1

    If you have enabled Kubernetes-Workload Management on a cluster in a workload domain, then you cannot perform a skip-level upgrade from VMware Cloud Foundation 4.1 to 4.4.1.

    Workaround: Upgrade to VMware Cloud Foundation 4.3.1 and then perform a skip-level upgrade to 4.4.1.

  • Async Patch Tool Known Issues

    The Async Patch Tool is a utility that allows you to apply critical patches to certain VMware Cloud Foundation components (NSX-T Manager, vCenter Server, and ESXi) outside of VMware Cloud Foundation releases. The Async Patch Tool also allows you to enable upgrade of an async patched system to a new version of VMware Cloud Foundation.

    Workaround: See the Async Patch Tool Release Notes for known issues.

  • Cluster-level ESXi upgrade fails

    Cluster-level selection during upgrade does not consider the health status of the clusters and may show a cluster's status as Available, even for a faulty cluster. If you select a faulty cluster, the upgrade fails.

    Workaround: Always perform an update precheck to validate the health status of the clusters. Resolve any issues before upgrading.

  • You are unable to update NSX-T Data Center in the management domain or in a workload domain with vSAN principal storage because of an error during the NSX-T transport node precheck stage

    In SDDC Manager, when you run the upgrade precheck before updating NSX-T Data Center, the NSX-T transport node validation fails with the following error.

    No coredump target has been configured. Host core dumps cannot be saved.:System logs on host sfo01-m01-esx04.sfo.rainpole.io are stored on non-persistent storage. Consult product documentation to configure a syslog server or a scratch partition.

    Because the upgrade precheck results in an error, you cannot proceed with updating the NSX-T Data Center instance in the domain. VMware Validated Design supports vSAN as the principal storage in the management domain. However, vSAN datastores do not support scratch partitions. See VMware Knowledge Base article 2074026.

    Workaround: Deactivate the update precheck validation for the subsequent NSX-T Data Center update (a consolidated command sketch follows these steps).

    1. Log in to SDDC Manager as vcf using a Secure Shell (SSH) client.

    2. Open the application-prod.properties file for editing: vi /opt/vmware/vcf/lcm/lcm-app/conf/application-prod.properties

    3. Add the following property and save the file: lcm.nsxt.suppress.prechecks=true

    4. Restart the lifecycle management service: systemctl restart lcm

    5. Log in to the SDDC Manager user interface and proceed with the update of NSX-T Data Center.
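
    The steps above can be consolidated into the following minimal shell sketch, run as root on the SDDC Manager VM (appending the property with echo is an alternative to editing the file in vi):

    # Append the property that suppresses the NSX-T prechecks (step 3)
    echo "lcm.nsxt.suppress.prechecks=true" >> /opt/vmware/vcf/lcm/lcm-app/conf/application-prod.properties
    # Restart the lifecycle management service so the property takes effect (step 4)
    systemctl restart lcm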

  • ESXi upgrade fails with the error "Incompatible patch or upgrade files. Please verify that the patch file is compatible with the host. Refer LCM and VUM log file."

    This error occurs if any of the ESXi hosts that you are upgrading have detached storage devices.

    Workaround: Attach all storage devices to the ESXi hosts being upgraded, reboot the hosts, and retry the upgrade.

  • Update precheck fails with the error "Password has expired"

    If the vCenter Single Sign-On password policy specifies a maximum lifetime of zero (never expires), the precheck fails.

    Workaround: Set the maximum lifetime password policy to something other than zero and retry the precheck.

  • vRealize Operations Manager upgrade fails on the step VREALIZE_UPGRADE_PREPARE_BACKUP with the error: Waiting for vRealize Operations cluster to change state timed out

    When upgrading vRealize Operations Manager, SDDC Manager takes the vRealize Operations Manager cluster offline and takes snapshots of the vRealize Operations Manager virtual machines. In some circumstances, taking the cluster offline takes a long time and the operation times out.

    Workaround: Take the vRealize Operations Manager cluster back online and retry the upgrade.

    1. Log in to the vRealize Operations Manager Administration UI (https://<vrops_ip>/admin) using the admin credentials.

    2. If the cluster status is offline, in the Cluster Status section click Take Cluster Online. Wait for the cluster to initialize and be marked as green.

    3. In the SDDC Manager UI, the option to retry vRealize Operations Manager upgrade should be available. Retry the upgrade.

    If the upgrade continues to fail, take the snapshots manually and retry the upgrade. Since the snapshots already exist, SDDC Manager will skip that step and proceed with the upgrade.

    1. Log in to the vRealize Operations Manager Administration UI (https://<vrops_ip>/admin) using the admin credentials.

    2. Ensure that the vRealize Operations Manager Cluster Status is offline. If it is online, click Take Cluster Offline in the Cluster Status section. Wait for the cluster to be marked as offline.

    3. Log in to the management domain vCenter Server using the vSphere Client.

    4. Navigate to the vRealize Operations Manager virtual machines and create a snapshot for each virtual machine in the vRealize Operations Manager cluster. Use the prefix "vROPS_LCM_UPGRADE_MANUAL_BACKUP" for the snapshot names; the prefix is case-sensitive. (A command-line sketch of this step follows the list.)

    5. After the snapshots are complete, log in to the vRealize Operations Manager UI and take the cluster online. Wait for the cluster to initialize.

    6. In the SDDC Manager UI, the option to retry vRealize Operations Manager upgrade should be available. Retry the upgrade.
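
    As an illustration of step 4, the following sketch creates the prefixed snapshots from the command line using govc, which is not part of VMware Cloud Foundation and is only an assumption here (you can equally create the snapshots in the vSphere Client). The VM names are placeholders.

    # Assumes GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD point to the management domain vCenter Server
    # Replace the VM names with the actual vRealize Operations Manager node names
    for vm in vrops-node-1 vrops-node-2 vrops-node-3; do
      govc snapshot.create -vm "$vm" "vROPS_LCM_UPGRADE_MANUAL_BACKUP_${vm}"
    done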

Bring-up Known Issues

  • Bringup fails when creating NSX-T Data Center transport nodes

    The bringup task "Create NSX-T Data Center Transport Nodes from Discovered Nodes" might fail if there's an ESXi host in the management cluster which is pending a reboot.

    Workaround: Reboot all ESXi hosts that are pending reboot and retry bringup.

  • The Cloud Foundation Builder VM remains locked after more than 15 minutes.

    The VMware Imaging Appliance (VIA) locks out the admin user after three unsuccessful login attempts. Normally, the lockout is reset after fifteen minutes, but the underlying Cloud Foundation Builder VM does not automatically reset the lockout.

    Workaround: Log in to the VM console of the Cloud Foundation Builder VM as the root user. Unlock the account by resetting the failed-login counter for the admin user with the following command:

    pam_tally2 --user=<user> --reset
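
    As a minimal illustration, assuming the locked account is admin, you can display the failure count before resetting it:

    pam_tally2 --user=admin           # show the current failed-login count
    pam_tally2 --user=admin --reset   # reset the count and unlock the account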

SDDC Manager Known Issues

  • Updating DNS/NTP server does not apply the update to all NSX Managers

    If you update the NTP or DNS server information for a VMware Cloud Foundation instance that includes more than one NSX Manager, only one of the NSX Managers gets updated with the new information.

    Workaround: Use the NSX Manager API or CLI to manually update the DNS/NTP server information for the remaining NSX Manager(s).

  • Rotating or updating vSphere Single-Sign On (PSC) password can cause issues

    If you have multiple VMware Cloud Foundation instances that share a single SSO domain, rotating or updating the vSphere SSO password for the first VCF instance causes the second VCF instance to become inaccessible.

    Workaround: See KB 85485.

  • A workload domain precheck incorrectly shows that it completed successfully.

    If you run a successful pre-upgrade precheck only on the Management domain in the SDDC Manager UI, the green banner incorrectly shows that a workload domain precheck also completed successfully.

    Workaround: Perform a new precheck on your workload domain before any future upgrades.

  • SDDC Manager UI Application upgrade fails with Password Authentication Exception

    There is a timing issue during the RPM installation that causes the database user ID and password to not be set correctly.

    Workaround: Wait for the database password to appear in config.properties before starting the sddc-manager-ui-app service (a shell sketch follows). See KB 77551 for more information.
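
    The following is a minimal shell sketch of that workaround, run as root on the SDDC Manager VM. The config.properties location is not given here, so the path below is a placeholder to be replaced with the location from KB 77551, and managing the service with systemctl is an assumption based on how other SDDC Manager services are handled in these notes.

    # Replace <path-to>/config.properties with the location given in KB 77551
    grep -i password <path-to>/config.properties   # wait until the db password appears here
    systemctl start sddc-manager-ui-app            # then start the UI service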

  • Deactivating CEIP on SDDC Manager does not deactivate CEIP on vRealize Automation and vRealize Suite Lifecycle Manager

    When you deactivate CEIP on the SDDC Manager Dashboard, data collection is not deactivated on vRealize Automation and vRealize Suite Lifecycle Manager. This is because of API deprecation in vRealize Suite 8.x.

    Workaround: Manually deactivate CEIP in vRealize Automation and vRealize Suite Lifecycle Manager. For more information, see VMware vRealize Automation Documentation and VMware vRealize Suite Lifecycle Manager Documentation.

  • Generate CSR task for a component hangs

    When you generate a CSR, the task may fail to complete due to issues with the component's resources. For example, when you generate a CSR for NSX Manager, the task may fail to complete due to issues with an NSX Manager node. You cannot retry the task once the resource is up and running again.

    Workaround: Perform the following steps (a consolidated command sketch follows the list).

    1. Log in to the UI for the component to troubleshoot and resolve any issues.

    2. Using SSH, log in to the SDDC Manager VM with the user name vcf.

    3. Type su to switch to the root account.

    4. Run the following command: systemctl restart operationsmanager

    5. Retry generating the CSR.
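
    The steps above correspond to the following interactive session sketch (the SDDC Manager host name is a placeholder):

    ssh vcf@sddc-manager.example.com      # step 2: log in as the vcf user
    su -                                  # step 3: switch to the root account
    systemctl restart operationsmanager   # step 4: restart the operations manager service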

  • SoS utility options for health check are missing information

    Due to limitations of the ESXi service account, some information is unavailable in the following health check options:

    • --hardware-compatibility-report: No Devices and Driver information for ESXi hosts.

    • --storage-health: No vSAN Health Status or Total no. of disks information for ESXi hosts.

    Workaround: None.

  • Supportability and Serviceability (SoS) Utility health checks fail with the error "Failed to get details"

    SoS is not able to handle ESXi host names that include uppercase letters.

    Workaround: Use the precheck functionality in the SDDC Manager UI to check the health of the ESXi hosts.

Workload Domain Known Issues

  • Cannot reuse a static IP pool that includes special characters in its name

    If you chose Static IP Pool as the IP allocation method when creating a VI workload domain and you used special characters or spaces in the IP pool name, you are not able to reuse the IP pool when creating a new VI workload domain or adding a vSphere cluster to the workload domain.

    Workaround: Use only supported characters when naming a static IP pool. Supported characters:

    • a-z

    • A-Z

    • 0-9

    • - and _

    • No spaces

    If you have an existing static IP pool that includes unsupported characters in its name, you can use the NSX Manager UI to rename it.

  • Adding host fails when host is on a different VLAN

    A host add operation can sometimes fail if the host is on a different VLAN.

    Workaround:

    1. Before adding the host, add a new portgroup to the VDS for that cluster.

    2. Tag the new portgroup with the VLAN ID of the host to be added.

    3. Add the Host. This workflow fails at the "Migrate host vmknics to dvs" operation.

    4. Locate the failed host in vCenter, and migrate the vmk0 of the host to the new portgroup you created in step 1. For more information, see Migrate VMkernel Adapters to a vSphere Distributed Switch in the vSphere product documentation.

    5. Retry the Add Host operation.

    NOTE: If you later remove this host, you must manually remove the portgroup as well if it is not being used by any other host.

  • Deploying partner services on an NSX-T workload domain displays an error

    Deploying partner services, such as McAfee or Trend, on a workload domain enabled for vSphere Update Manager (VUM), displays the “Configure NSX at cluster level to deploy Service VM” error.

    Workaround: Attach the Transport node profile to the cluster and try deploying the partner service. After the service is deployed, detach the transport node profile from the cluster.

  • If the witness ESXi version does not match the host ESXi version in the cluster, vSAN cluster partition may occur

    The vSAN stretch cluster workflow does not check the ESXi version of the witness host. If the witness ESXi version does not match the host version in the cluster, a vSAN cluster partition may occur.

    Workaround:

    1. Upgrade the witness host manually to the matching ESXi version by using the vCenter Server VUM functionality.

    2. Alternatively, replace the witness appliance by deploying one that matches the ESXi version.

  • vSAN partition and critical alerts are generated when the witness MTU is not set to 9000

    If the MTU of the witness switch in the witness appliance is not set to 9000, a vSAN stretch cluster partition may occur.

    Workaround: Set the MTU of the witness switch in the witness appliance to 9000 (a command-line sketch follows).
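
    A minimal sketch of the workaround, run in the ESXi Shell of the witness appliance. The vSwitch name witnessSwitch is an assumption; check the list output first and substitute the switch that carries witness traffic.

    esxcli network vswitch standard list                                          # find the witness switch and its current MTU
    esxcli network vswitch standard set --vswitch-name=witnessSwitch --mtu=9000   # set the MTU to 9000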

  • Adding a host to a vLCM-enabled workload domain configured with the Dell Hardware Support Manager (OMIVV) fails

    When you try to add a host to a vSphere cluster for a workload domain enabled with vSphere Lifecycle Manager (vLCM), the task fails and the domain manager log reports "The host (host-name) is currently not managed by OMIVV." The domain manager logs are located at /var/log/vmware/vcf/domainmanager on the SDDC Manager VM.

    Workaround: Update the hosts inventory in OMIVV and retry the add host task in the SDDC Manager UI. See the Dell documentation for information about updating the hosts inventory in OMIVV.

  • Adding a vSphere cluster or adding a host to a workload domain fails

    Under certain circumstances, adding a host or vSphere cluster to a workload domain fails at the Configure NSX-T Transport Node or Create Transport Node Collection subtask.

    Workaround:

    1. Enable SSH for the NSX Manager VMs.

    2. SSH into the NSX Manager VMs as admin and then log in as root.

    3. Run the following command on each NSX Manager VM: sysctl -w net.ipv4.tcp_en=0

    4. Log in to the NSX Manager UI for the workload domain.

    5. Navigate to System > Fabric > Nodes > Host Transport Nodes.

    6. Select the vCenter server for the workload domain from the Managed by drop-down menu.

    7. Expand the vSphere cluster and navigate to the transport nodes that are in a partial success state.

    8. Select the check box next to a partial success node and click Configure NSX.

    9. Click Next and then click Apply.

    10. Repeat steps 7-9 for each partial success node.

    When all host issues are resolved, transport node creation starts for the failed nodes. When all hosts are successfully created as transport nodes, retry the failed add vSphere cluster or add host task from the SDDC Manager UI.

  • The vSAN Performance Service is not activated for vSAN clusters when CEIP is not activated

    If you do not activate the VMware Customer Experience Improvement Program (CEIP) in SDDC Manager, when you create a workload domain or add a vSphere cluster to a workload domain, the vSAN Performance Service is not activated for vSAN clusters. When CEIP is activated, data from the vSAN Performance Service is provided to VMware and this data is used to aid VMware Support with troubleshooting and for products such as VMware Skyline, a proactive cloud monitoring service. See Customer Experience Improvement Program for more information on the data collected by CEIP.

    Workaround: Activate CEIP in SDDC Manager. See the VMware Cloud Foundation Documentation. After CEIP is activated, a scheduled task that enables the vSAN Performance Service on existing clusters in workload domains runs every three hours. The service is also activated for new workload domains and clusters. To activate the vSAN Performance Service immediately, see the VMware vSphere Documentation.

  • Creation or expansion of a vSAN cluster with more than 32 hosts fails

    By default, a vSAN cluster can grow up to 32 hosts. With large cluster support activated, a vSAN cluster can grow up to a maximum of 64 hosts. However, even with large cluster support activated, a creation or expansion task can fail on the sub-task Activate vSAN on vSphere Cluster.

    Workaround:

    1. Activate Large Cluster Support for the vSAN cluster in the vSphere Client. If it is already activated, skip to step 2.

      1. Select the vSAN cluster in the vSphere Client.

      2. Select Configure > vSAN > Advanced Options.

      3. Activate Large Cluster Support.

      4. Click Apply.

      5. Click Yes.

    2. Run a vSAN health check to see which hosts require rebooting.

    3. Put the hosts into Maintenance Mode and reboot the hosts.

    For more information about large cluster support, see https://kb.vmware.com/kb/2110081.

  • Removing a host from a cluster, deleting a cluster from a workload domain, or deleting a workload domain fails if Service VMs (SVMs) are present

    If you deployed an endpoint protection service (such as guest introspection) to a cluster through NSX-T Data Center, then removing a host from the cluster, deleting the cluster, or deleting the workload domain containing the cluster will fail on the subtask Enter Maintenance Mode on ESXi Hosts.

    Workaround:

    • For host removal: Delete the Service VM from the host and retry the operation.

    • For cluster deletion: Delete the service deployment for the cluster and retry the operation.

    • For workload domain deletion: Delete the service deployment for all clusters in the workload domain and retry the operation.

  • vCenter Server overwrites the NFS datastore name when adding a cluster to a VI workload domain

    If you add an NFS datastore that has the same NFS server IP address as an NFS datastore that already exists in the workload domain, but a different datastore name, vCenter Server applies the existing datastore name to the new datastore.

    Workaround: If you want to add an NFS datastore with a different datastore name, then it must use a different NFS server IP address.

API Known Issues

  • The VMware Cloud Foundation API ignores NSX VDS uplink information for in-cluster expansion of an NSX Edge cluster

    When you use the VMware Cloud Foundation API to expand an NSX Edge cluster and the new NSX Edge node is going to be hosted on the same vSphere cluster as the existing NSX Edge nodes (in-cluster), the edgeClusterExpansionSpec ignores any information you provide for firstNsxVdsUplink and secondNsxVdsUplink.

    Workaround: None. This is by design. For in-cluster expansions, new NSX Edge nodes use the same NSX VDS uplinks as the existing NSX Edge nodes in the NSX Edge cluster.

  • Stretch cluster operation fails

    If the cluster that you are stretching does not include a powered-on VM with an operating system installed, the operation fails at the "Validate Cluster for Zero VMs" task.

    Workaround: Make sure the cluster has a powered-on VM with an operating system installed before stretching the cluster.

vRealize Suite Known Issues

  • vRealize Suite Lifecycle Manager 8.6.2 reports an error

    vRealize Suite Lifecycle Manager 8.6.2 requires a Product Support Pack (PSPAK) in order to support VMware Cloud Foundation 4.4.1.

  • vRealize Suite Lifecycle Manager reports a "FAILED" inventory sync

    After rotating a vCenter Server service account password in SDDC Manager, the inventory sync may fail for vRealize Suite environments managed by VMware Cloud Foundation.

    Workaround: Log in to vRealize Suite Lifecycle Manager to identify and troubleshoot the failed environment(s).
