VMware Cloud Foundation 5.2.1 | 09 OCT 2024 | Build 24307856

Check for additions and updates to these release notes.

What's New

The VMware Cloud Foundation (VCF) 5.2.1 release includes the following:

  • Reduced Downtime Upgrade (RDU) support for vCenter: VCF users can now leverage vCenter Reduced Downtime Upgrade (RDU) to execute a vCenter upgrade. vCenter RDU is a migration-based approach to upgrading vCenter and reduces the vCenter downtime to less than 5 minutes. 

  • NSX in-place upgrades for clusters that use vSphere Lifecycle Manager baselines: VCF users now have the choice to perform NSX in-place upgrade for clusters that use vSphere Lifecycle Manager baselines. In-place upgrades eliminate the need to place hosts into maintenance mode during the upgrade.

  • Support for vSphere Lifecycle Manager baseline and vSphere Lifecycle Manager image-based clusters in same workload domain: VCF users now have the flexibility to deploy and upgrade vLCM baseline and vLCM image-based clusters within the same workload domain. 

  • Support for the "License Now" option for vSAN add-on licenses based on capacity per tebibyte (TiB): VCF users can now apply the vSAN TiB capacity license within the SDDC Manager UI to expand storage capacity for their workload domains and clusters. You can also use the "License Later" option to assign the per-TiB vSAN license key using the vSphere Client.

  • Set up VMware Private AI Foundation infrastructure from the vSphere Client: VCF users can leverage a new guided workflow in the vSphere Client to set up infrastructure for VMware Private AI Foundation and maximize the potential of NVIDIA GPU-enabled ESXi hosts. The workflow streamlines the setup process by centralizing configuration steps from SDDC Manager and vCenter into a single workflow.

  • Manage all SDDC certificates and passwords from a single UI: SDDC Manager certificate and password management functionality is now integrated in the vSphere Client to simplify and speed up day-to-day operations. VCF users can now manage certificates, integrated certificate authorities, and system user passwords from the Administration section in the vSphere Client.

Available Languages

Beginning with the next major release, VCF will support the following localization languages:

  • Japanese

  • Spanish

  • French

The following languages will no longer be supported:

  • Italian, German, and Simplified Chinese.

Impact:

  • Customers who have been using the deprecated languages will no longer receive updates or support in these languages.

  • All user interfaces, help documentation, and customer support will be available only in English or in the three supported languages mentioned above.

Because VCF localization uses the browser language settings, ensure that your browser settings match the desired language.

Deprecation Notices

  • The following features are being deprecated and will be removed in a future major release:

    • Cloud Builder Appliance

    • Cloud Builder APIs

    • Cloud Builder deployment parameter workbooks

    • SDDC Manager standalone UI

    • NSX Edge management workflow

  • VMware End Of Availability of Perpetual Licensing and SaaS Services. See https://blogs.vmware.com/cloud-foundation/2024/01/22/vmware-end-of-availability-of-perpetual-licensing-and-saas-services/ for more information.

  • In a future release, the "Connect Workload Domains" option from the VMware Aria Operations card located in SDDC Manager > Administration > Aria Suite section will be removed and related VCF Public API options will be deprecated.

    Starting with VMware Aria Operations 8.10, functionality for connecting VCF Workload Domains to VMware Aria Operations is available directly from the UI. Users are encouraged to use this method within the VMware Aria Operations UI for connecting VCF workload domains, even if the integration was originally set up using SDDC Manager.

  • Deprecation announcements for VMware NSX. See the VMware NSX 4.2.1 Release Notes for details.

VMware Cloud Foundation Bill of Materials (BOM)

The VMware Cloud Foundation software product comprises the following software Bill of Materials (BOM). The components in the BOM are interoperable and compatible.

Software Component | Version | Date | Build Number
Cloud Builder VM | 5.2.1 | 09 OCT 2024 | 24307856
SDDC Manager | 5.2.1 | 09 OCT 2024 | 24307856
VMware vCenter Server Appliance | 8.0 Update 3c | 09 OCT 2024 | 24305161
VMware ESXi | 8.0 Update 3b | 17 SEP 2024 | 24280767
VMware vSAN Witness Appliance | 8.0 Update 3b | 17 SEP 2024 | 24280767
VMware NSX | 4.2.1 | 09 OCT 2024 | 24304122
VMware Aria Suite Lifecycle | 8.18 | 23 JUL 2024 | 24029603

  • VMware vSAN is included in the VMware ESXi bundle.

  • You can use VMware Aria Suite Lifecycle to deploy VMware Aria Automation, VMware Aria Operations, VMware Aria Operations for Logs, and Workspace ONE Access. VMware Aria Suite Lifecycle determines which versions of these products are compatible and only allows you to install/upgrade to supported versions.

  • VMware Aria Operations for Logs content packs are installed when you deploy VMware Aria Operations for Logs.

  • The VMware Aria Operations management pack is installed when you deploy VMware Aria Operations.

  • You can access the latest versions of the content packs for VMware Aria Operations for Logs from the VMware Solution Exchange and the VMware Aria Operations for Logs in-product marketplace store.

Supported Hardware

For details on supported configurations, see the VMware Compatibility Guide (VCG) and the Hardware Requirements section on the Prerequisite Checklist tab in the Planning and Preparation Workbook.

Documentation

To access the VCF documentation, go to the VMware Cloud Foundation product documentation.

To access the documentation for VMware software products that SDDC Manager can deploy, see the product documentation and use the drop-down menus on the page to choose the appropriate version.

Browser Compatibility and Screen Resolutions

The VMware Cloud Foundation web-based interface supports the latest two versions of the following web browsers:

  • Google Chrome

  • Mozilla Firefox

  • Microsoft Edge

For the Web-based user interfaces, the supported standard resolution is 1920 by 1080 pixels.

Installation and Upgrade Information

You can install VMware Cloud Foundation 5.2.1 as a new release or perform a sequential or skip-level upgrade to VMware Cloud Foundation 5.2.1.

Installing as a New Release

The new installation process has three phases:

  • Phase One: Prepare the Environment: The Planning and Preparation Workbook provides detailed information about the software, tools, and external services that are required to implement a Software-Defined Data Center (SDDC) with VMware Cloud Foundation, using a standard architecture model.

  • Phase Two: Image all servers with ESXi: Image all servers with the ESXi version mentioned in the Cloud Foundation Bill of Materials (BOM) section. See the VMware Cloud Foundation Deployment Guide for information on installing ESXi.

  • Phase Three: Install Cloud Foundation 5.2.1: See the VMware Cloud Foundation Deployment Guide for information on deploying Cloud Foundation.

Upgrading to Cloud Foundation 5.2.1

You can perform a sequential or skip-level upgrade to VMware Cloud Foundation 5.2.1 from VMware Cloud Foundation 4.5.0 or later. If your environment is at a version earlier than 4.5.0, you must upgrade the management domain and all VI workload domains to VMware Cloud Foundation 4.5.0 or later and then upgrade to VMware Cloud Foundation 5.2.1. For more information, see VMware Cloud Foundation Lifecycle Management.
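The upgrade gate described above can be expressed as a small check. This is an illustrative sketch only; the function name and version handling are assumptions, not a VCF API:

```python
# Illustrative sketch of the supported upgrade paths to VCF 5.2.1.
# The function name and version handling are assumptions, not a VCF API.
def upgrade_path(current_version: str) -> list[str]:
    v = tuple(int(part) for part in current_version.split("."))
    if v >= (4, 5, 0):
        return ["5.2.1"]                 # sequential or skip-level upgrade
    return ["4.5.0", "5.2.1"]            # reach 4.5.0 (or later) first

print(upgrade_path("4.4.1"))  # ['4.5.0', '5.2.1']
print(upgrade_path("5.1.0"))  # ['5.2.1']
```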

Important:

Before you upgrade a vCenter Server, take a file-based backup. See Manually Back Up vCenter Server.

Note:

Since VMware Cloud Foundation disables the SSH service by default, scripts that rely on SSH being enabled on ESXi hosts will not work after upgrading to VMware Cloud Foundation 5.2.1. Update your scripts to account for this new behavior. See KB 86230 for information about enabling and disabling the SSH service on ESXi hosts.
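As a defensive measure, scripts can verify SSH reachability before connecting rather than assuming the service is enabled. A minimal sketch using only the Python standard library (the host name is a placeholder):

```python
# Minimal sketch: test whether SSH (TCP port 22) is reachable on an ESXi host
# before a script attempts an SSH session. The host name is a placeholder.
import socket

def ssh_reachable(host: str, port: int = 22, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if not ssh_reachable("esxi-1.example.com"):
    print("SSH not reachable; enable the SSH service first (see KB 86230)")
```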

Resolved Issues

The following issues are resolved in this release:

  • VMware Cloud Foundation 5.2 does not support the "License Now" option for vSAN add-on licenses based on capacity per tebibyte (TiB).

  • Remove unresponsive ESXi Host fails when the SDDC Manager certificate does not have a subject alternative name.

Known Issues

VMware Cloud Foundation Known Issues

  • VCF Import Tool does not support clusters that use vSphere Configuration Profiles

    If you use the VCF Import Tool to import/convert an existing vSphere environment that includes clusters that use vSphere Configuration Profiles, the task fails during NSX deployment.

    Workaround: None. Clusters that use vSphere Configuration Profiles do not support NSX.

  • Primary datastore is not getting set for imported workload domains with NFS 4.1 datastore

    When you use the VCF Import Tool to import a cluster for which NFS 4.1 is the only shared datastore, the primary datastore and datastore type are not set in VCF and the workload domain is not visible in the SDDC Manager UI. See https://knowledge.broadcom.com/external/article/372424 for details.

    Workaround: None.

  • Limitations for importing vSAN clusters

    When you use the VCF Import Tool to import a vSAN cluster, you should avoid importing clusters with certain configurations. SDDC Manager day-N operations will not be supported on imported vSAN clusters with these configurations. See https://knowledge.broadcom.com/external/article/371494 for details.

    Workaround: None.

  • Lifecycle Management Precheck does not throw an error when NSX Manager inventory is out of sync

    The Lifecycle Management Precheck displays a green status and does not generate any errors for NSX Manager inventory.

    Workaround: None.

  • Upgrade Pre-Check Scope dropdown may contain additional entries

    When performing Upgrade Prechecks through SDDC Manager UI and selecting a target VCF version, the Pre-Check Scope dropdown may contain more selectable entries than necessary. SDDC Manager may appear as an entry more than once. It also may be included as a selectable component for VI domains, although it's a component of the management domain.

    Workaround: None. The issue is visual with no functional impact.

  • Converting clusters from vSphere Lifecycle Manager baselines to vSphere Lifecycle Manager images is not supported.

    vSphere Lifecycle Manager baselines (previously known as vSphere Update Manager or VUM) are deprecated in vSphere 8.0, but continue to be supported. See KB article 89519 for more information.

    VMware Cloud Foundation does not support converting clusters from vSphere Lifecycle Manager baselines to vSphere Lifecycle Manager images. This capability will be supported in a future release.

    Workaround: None.

Upgrade Known Issues

  • Upgrade precheck warning "ESXi upgrade policy validation across vCenter and SDDC Manager"

    In SDDC Manager, the default upgrade policy applies to all clusters, while in vSphere each cluster has a distinct upgrade policy. This can create a scenario where the ESXi upgrade policy configured in vCenter does not match what SDDC Manager expects.

    Workaround: This issue causes a warning only. You can proceed with the upgrade without remediating the issue.

  • Incorrect backup options displayed for a vCenter Regular Update

    When you configure a vCenter upgrade in the SDDC Manager UI, the Backup screen shows the options for a Reduced Downtime Upgrade (RDU), even if you selected vCenter Regular Update as the update mechanism. Do not select "I wish to continue without a backup of the vCenter server" when performing a vCenter Regular Update.

    Workaround: None. For a vCenter Regular Update, you must back up vCenter before you upgrade.

  • Bundle Transfer Utility fails to upload the NSX Advanced Load Balancer install bundle

    If you are on a pre-5.2.x version of VMware Cloud Foundation and use the Bundle Transfer Utility to download all bundles for VCF 5.2.x, then uploading the NSX Advanced Load Balancer install bundle fails. This bundle is only supported with SDDC Manager 5.2 and later.

    Workaround: Upgrade SDDC Manager to 5.2 or later and then retry uploading the NSX Advanced Load Balancer install bundle.

  • NSX host cluster upgrade fails

    If you are upgrading a workload domain that uses vSphere Lifecycle Manager images and its cluster image was created from an ESXi host that uses vSphere Lifecycle Manager baselines, then NSX host cluster upgrade will fail. A cluster image created from an ESXi host that uses vSphere Lifecycle Manager baselines contains an NSX component that causes this issue.

    NOTE: This issue is resolved if you have ESXi and vCenter Server 8.0 Update 3 or later.

    Workaround: Do not create cluster images from an ESXi host that uses vSphere Lifecycle Manager baselines. If you encounter this issue, you can resolve it by using the vSphere Client to remove the NSX LCP Bundle component from the cluster image.

  • SDDC Manager UI shows the incorrect source version when upgrading SDDC Manager

    When you view the VMware Cloud Foundation Update Status for SDDC Manager, the UI may show the incorrect source version.

    Workaround: None. This is a cosmetic issue only and does not affect the upgrade.

  • Workspace ONE Access inventory sync fails in SDDC Manager after upgrading VMware Aria Suite Lifecycle

    After upgrading Aria Suite Lifecycle to version 8.12 or later, triggering a Workspace ONE Access inventory sync from Aria Suite Lifecycle fails. The SDDC Manager UI reports the following error: Failed to configure WSA <wsa_fqdn> in vROps <vrops_fqdn>, because Failed to manage vROps adapter.

    Workaround: Download the bundle for your version of Aria Suite Lifecycle to SDDC Manager and retry the inventory sync.

  • VCF ESXi upgrade fails during post validation due to HA related cluster configuration issue

    The ESXi cluster upgrade fails with an error similar to the following:

    Cluster Configuration Issue: vSphere HA failover operation in progress in cluster <cluster-name> in datacenter <datacenter-name>: 0 VMs being restarted, 1 VMs waiting for a retry, 0 VMs waiting for resources, 0 inaccessible vSAN VMs

    Workaround: See KB article 90985.

  • Lifecycle Management Precheck does not throw an error when NSX Manager inventory is out of sync

    Workaround: None.

  • NSX upgrade may fail if there are any active alarms in NSX Manager

    If there are any active alarms in NSX Manager, the NSX upgrade may fail.

    Workaround: Check the NSX Manager UI for active alarms prior to the NSX upgrade and resolve any that are found. If the alarms are not resolved, the NSX upgrade will fail. Retry the upgrade once the alarms are resolved.

  • SDDC Manager upgrade fails at "Setup Common Appliance Platform"

    If a virtual machine reconfiguration task (for example, removing a snapshot or running a backup) is taking place in the management domain at the same time you are upgrading SDDC Manager, the upgrade may fail.

    Workaround: Schedule SDDC Manager upgrades for a time when no virtual machine reconfiguration tasks are happening in the management domain. If you encounter this issue, wait for the other tasks to complete and then retry the upgrade.

  • Parallel upgrades of vCenter Server are not supported

    If you attempt to upgrade vCenter Server for multiple VI workload domains at the same time, the upgrade may fail while changing the permissions for the vpostgres configuration directory in the appliance. The message chown -R vpostgres:vpgmongrp /storage/archive/vpostgres appears in the PatchRunner.log file on the vCenter Server Appliance.

    Workaround: Each vCenter Server instance must be upgraded separately.

  • When you upgrade VMware Cloud Foundation, one of the vSphere Cluster Services (vCLS) agent VMs gets placed on local storage

    vSphere Cluster Services (vCLS) ensures that cluster services remain available, even when the vCenter Server is unavailable. vCLS deploys three vCLS agent virtual machines to maintain cluster services health. When you upgrade VMware Cloud Foundation, one of the vCLS VMs may get placed on local storage instead of shared storage. This could cause issues if you delete the ESXi host on which the VM is stored.

    Workaround: Deactivate and reactivate vCLS on the cluster to deploy all the vCLS agent VMs to shared storage.

    1. Check the placement of the vCLS agent VMs for each cluster in your environment.

      1. In the vSphere Client, select Menu > VMs and Templates.

      2. Expand the vCLS folder.

      3. Select the first vCLS agent VM and click the Summary tab.

      4. In the Related Objects section, check the datastore listed for Storage. It should be the vSAN datastore. If a vCLS agent VM is on local storage, you need to deactivate vCLS for the cluster and then re-enable it.

      5. Repeat these steps for all vCLS agent VMs.

    2. Deactivate vCLS for clusters that have vCLS agent VMs on local storage.

      1. In the vSphere Client, click Menu > Hosts and Clusters.

      2. Select a cluster that has a vCLS agent VM on local storage.

      3. In the web browser address bar, note the moref id for the cluster.

        For example, if the URL displays as https://vcenter-1.vrack.vsphere.local/ui/app/cluster;nav=h/urn:vmomi:ClusterComputeResource:domain-c8:503a0d38-442a-446f-b283-d3611bf035fb/summary, then the moref id is domain-c8.

      4. Select the vCenter Server containing the cluster.

      5. Click Configure > Advanced Settings.

      6. Click Edit Settings.

      7. Change the value for config.vcls.clusters.<moref id>.enabled to false and click Save.

        If the config.vcls.clusters.<moref id>.enabled setting does not appear for your moref id, then enter its Name and false for the Value and click Add.

      8. Wait a couple of minutes for the vCLS agent VMs to be powered off and deleted. You can monitor progress in the Recent Tasks pane.

    3. Enable vCLS for the cluster to place the vCLS agent VMs on shared storage.

      1. Select the vCenter Server containing the cluster and click Configure > Advanced Settings.

      2. Click Edit Settings.

      3. Change the value for config.vcls.clusters.<moref id>.enabled to true and click Save.

      4. Wait a couple of minutes for the vCLS agent VMs to be deployed and powered on. You can monitor progress in the Recent Tasks pane.

    4. Check the placement of the vCLS agent VMs to make sure they are all on shared storage.
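    The moref id lookup in step 2.3 can also be scripted. A minimal sketch using the example URL from above (the function name is illustrative):

```python
# Illustrative sketch: extract the cluster moref id (step 2.3) from a
# vSphere Client URL. The function name is an assumption for illustration.
import re

def cluster_moref(url: str) -> str:
    match = re.search(r"ClusterComputeResource:(domain-c\d+)", url)
    if match is None:
        raise ValueError("no cluster moref id found in URL")
    return match.group(1)

url = ("https://vcenter-1.vrack.vsphere.local/ui/app/cluster;nav=h/"
       "urn:vmomi:ClusterComputeResource:domain-c8:"
       "503a0d38-442a-446f-b283-d3611bf035fb/summary")
print(cluster_moref(url))  # domain-c8
```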

  • You are unable to update NSX Data Center in the management domain or in a workload domain with vSAN principal storage because of an error during the NSX transport node precheck stage

    In SDDC Manager, when you run the upgrade precheck before updating NSX Data Center, the NSX transport node validation fails with the following error.

    No coredump target has been configured. Host core dumps cannot be saved.:System logs on host sfo01-m01-esx04.sfo.rainpole.io are stored on non-persistent storage. Consult product documentation to configure a syslog server or a scratch partition.

    Because the upgrade precheck fails with an error, you cannot proceed with updating the NSX Data Center instance in the domain. VMware Validated Design supports vSAN as the principal storage in the management domain. However, vSAN datastores do not support scratch partitions. See VMware Knowledge Base article 2074026.

    Workaround: Disable the update precheck validation for the subsequent NSX Data Center update.

    1. Log in to SDDC Manager as vcf using a Secure Shell (SSH) client.

    2. Open the application-prod.properties file for editing: vi /opt/vmware/vcf/lcm/lcm-app/conf/application-prod.properties

    3. Add the following property and save the file: lcm.nsxt.suppress.prechecks=true

    4. Restart the life cycle management service: systemctl restart lcm

    5. Log in to the SDDC Manager user interface and proceed with the update of NSX Data Center.

Bring-up Known Issues

SDDC Manager Known Issues

  • The SDDC Manager UI displays incorrect CPU and memory utilization information for hosts

    When you view CPU and memory usage for hosts in the SDDC Manager UI (Inventory > Hosts), the information may not reflect actual utilization.

    Workaround: Use the vSphere Client to view CPU and memory utilization information for hosts.

  • Install bundle for VMware Aria Suite Lifecycle 8.18 displays incomplete information

    The SDDC Manager UI displays incomplete information for the VMware Aria Suite Lifecycle 8.18 install bundle.

    Workaround: None. This is a cosmetic issue and does not impact your ability to download or use the bundle.

  • When creating a network pool, the IP addresses you provide are not validated to ensure that they are not in use

    SDDC Manager validates the IP addresses for a new network pool against other network pools and across the network types (vSAN, NFS, and so on) being added to the new pool. However, it does not validate them against components that are already deployed in the VMware Cloud Foundation instance (for example, ESXi hosts and NSX Managers). This can result in duplicate IP address errors or failed workflows.

    Workaround: When creating a network pool, do not include any IP addresses that are already in use. If you already created a network pool that includes IP addresses that are used by other components, contact Broadcom Support to resolve the issue.
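    Until such validation exists, you can pre-check a candidate IP range against a manually maintained list of in-use addresses. A minimal sketch (the in-use set is an assumption you must supply yourself):

```python
# Illustrative sketch: flag candidate network pool IPs that collide with
# addresses already used by deployed components. The in-use set must be
# maintained by you; SDDC Manager does not perform this check.
import ipaddress

def overlapping_ips(pool_range: tuple[str, str], in_use: set[str]) -> set[str]:
    start = int(ipaddress.IPv4Address(pool_range[0]))
    end = int(ipaddress.IPv4Address(pool_range[1]))
    pool = {str(ipaddress.IPv4Address(i)) for i in range(start, end + 1)}
    return pool & in_use

in_use = {"192.168.10.5", "192.168.10.9"}  # e.g. ESXi hosts, NSX Managers
conflicts = overlapping_ips(("192.168.10.1", "192.168.10.10"), in_use)
print(sorted(conflicts))  # ['192.168.10.5', '192.168.10.9']
```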

  • vSphere Lifecycle Manager images that utilize “removed components” are not supported

    Starting with vSphere 8.0 Update 3, you can remove the Host Client and VMware Tools components from a base image, remove unnecessary drivers from vendor add-ons and components, and override existing drivers in a desired image. SDDC Manager does not support this functionality yet for imported or extracted cluster images.

    Workaround: None.

Workload Domain Known Issues

  • Deploying Avi Load Balancer fails

    When you deploy Avi Load Balancer (formerly known as NSX Advanced Load Balancer), the deployment may fail with the message OVA upload to NSX failed. This can happen if the certificates of the management domain NSX Manager nodes do not include their IP addresses in their Subject Alternative Names (SANs).

    Workaround: Generate new CSRs for the management domain NSX Manager nodes, making sure to include the IP addresses in the SANs.

    Generate the signed certificates using the CSRs and then install the signed certificates in the NSX Manager nodes. See Managing Certificates in VMware Cloud Foundation for more information.

    Once the new certificates are installed, retry deploying Avi Load Balancer.

  • Switch configuration error when deploying a VI workload domain or adding a cluster to a workload domain with hosts that have two DPUs

    If you are using ESXi hosts with two data processing units (DPU) to deploy a new VI workload domain or add a cluster to a workload domain, you may see the following error during switch configuration: Error in validating Config Profiles. This can be caused by the presence of a vusb0 network adapter on the hosts.

    Workaround: Contact Broadcom Support to remove the vusb0 interface from the SDDC Manager inventory.

  • Deploying a VI workload domain or adding a cluster to a workload domain fails with hosts that have two DPUs

    If you are using ESXi hosts with two data processing units (DPU) to deploy a new VI workload domain or add a cluster to a workload domain, the task fails when adding the ESXi hosts to the vSphere Distributed Switch (VDS) with the error Cannot complete a vSphere Distributed Switch operation for one or more host members.

    The VDS created by SDDC Manager for dual DPU hosts has all 4 uplinks in Active mode and this does not work with an NSX uplink profile where one set of DPU uplinks is Active and a second set of DPU uplinks is Standby.

    Workaround: Use the vSphere Client to manually update the DPU failover settings for the VDS and then retry the workflow from SDDC Manager.

    1. In the vSphere Client, browse to the VDS in the vCenter that contains the hosts.

    2. Click the Configure tab and select DPU Failover Settings.

    3. Click Edit and move uplink3 and uplink4 from Active to Standby.

    4. Click OK.

    5. In the SDDC Manager UI, retry the failed workflow.

  • NSX Edge cluster deployment fails at "Create VLAN Port Group" stage with message "Invalid parameter: port group already exists"

    When you deploy an NSX Edge cluster for a VI workload domain and you select the option "USE ESXI MANAGEMENT VMK'S VLAN", the management port group name and VLAN ID are auto-populated. SDDC Manager tries to create a port group with the same VLAN ID and port group name as the ESXi management network, but because the port group name already exists in vCenter, the operation fails.

    Workaround: If you select the option "USE ESXI MANAGEMENT VMK'S VLAN", change the auto-populated port group name so that there is no conflict. If the environment is already in a failed state, remove the partially deployed edge cluster. See https://knowledge.broadcom.com/external/article/316110/vmware-cloud-foundation-nsxt-edge-clust.html.

  • Failure when deploying multiple isolated workload domains with the same SSO domain in parallel

    If you are deploying more than one isolated workload domain at the same time and those workload domains use the same SSO domain, then only the first workload domain is created successfully. Creation of the additional workload domains fails during validation with a message saying that the SSO domain name is already allocated.

    Workaround: Deploy the workload domains sequentially. Wait until the first workload domain deploys successfully and then create the additional workload domains.

  • The heterogeneous operations "Cluster Creation" and "VI Creation" are not supported in parallel when they operate against the same shared NSX instance.

    If there is a running VI Creation workflow operating on an NSX resource, then creating a cluster on domains that are sharing that NSX is not possible.

    Workaround: None. Wait for the VI Creation workflow to complete before starting the cluster creation workflow.

  • Adding host fails when host is on a different VLAN

    A host add operation can sometimes fail if the host is on a different VLAN.

    Workaround:

    1. Before adding the host, add a new portgroup to the VDS for that cluster.

    2. Tag the new portgroup with the VLAN ID of the host to be added.

    3. Add the Host. This workflow fails at the "Migrate host vmknics to dvs" operation.

    4. Locate the failed host in vCenter, and migrate the vmk0 of the host to the new portgroup you created in step 1. For more information, see Migrate VMkernel Adapters to a vSphere Distributed Switch in the vSphere product documentation.

    5. Retry the Add Host operation.

    NOTE: If you later remove this host, you must also manually remove the port group if it is not being used by any other host.

  • Deploying partner services on an NSX workload domain displays an error

    Deploying partner services, such as McAfee or Trend, on a workload domain enabled for vSphere Update Manager (VUM), displays the “Configure NSX at cluster level to deploy Service VM” error.

    Workaround: Attach the transport node profile to the cluster and try deploying the partner service again. After the service is deployed, detach the transport node profile from the cluster.

  • If the witness ESXi version does not match with the host ESXi version in the cluster, vSAN cluster partition may occur

    vSAN stretch cluster workflow does not check the ESXi version of the witness host. If the witness ESXi version does not match the host version in the cluster, then vSAN cluster partition may happen.

    Workaround: Either upgrade the witness host manually to the matching ESXi version using the vCenter VUM functionality, or replace the witness appliance with one that matches the ESXi version of the hosts in the cluster.

  • vSAN partition and critical alerts are generated when the witness MTU is not set to 9000

    If the MTU of the witness switch in the witness appliance is not set to 9000, the vSAN stretch cluster partition may occur.

    Workaround: Set the MTU of the witness switch in the witness appliance to 9000.

  • Adding a host to a cluster configured with vLCM images fails if the workload domain is using the Dell Hardware Support Manager (OMIVV)

    When you try to add a host to a vSphere cluster that uses vSphere Lifecycle Manager (vLCM) images, the task fails and the domain manager log reports "The host (host-name) is currently not managed by OMIVV." The domain manager logs are located at /var/log/vmware/vcf/domainmanager on the SDDC Manager VM.

    Workaround: Update the hosts inventory in OMIVV and retry the add host task in the SDDC Manager UI. See the Dell documentation for information about updating the hosts inventory in OMIVV.

  • The vSAN Performance Service is not enabled for vSAN clusters when CEIP is not enabled

    If you do not enable the VMware Customer Experience Improvement Program (CEIP) in SDDC Manager, when you create a workload domain or add a vSphere cluster to a workload domain, the vSAN Performance Service is not enabled for vSAN clusters. When CEIP is enabled, data from the vSAN Performance Service is provided to VMware and this data is used to aid VMware Support with troubleshooting and for products such as VMware Skyline, a proactive cloud monitoring service. See Customer Experience Improvement Program for more information on the data collected by CEIP.

    Workaround: Enable CEIP in SDDC Manager. See the VMware Cloud Foundation Documentation. After CEIP is enabled, a scheduled task that enables the vSAN Performance Service on existing clusters in workload domains runs every three hours. The service is also enabled for new workload domains and clusters. To enable the vSAN Performance Service immediately, see the VMware vSphere Documentation.

  • Creation or expansion of a vSAN cluster with more than 32 hosts fails

    By default, a vSAN cluster can grow up to 32 hosts. With large cluster support enabled, a vSAN cluster can grow up to a maximum of 64 hosts. However, even with large cluster support enabled, a creation or expansion task can fail on the sub-task Enable vSAN on vSphere Cluster.

    Workaround:

    1. Enable Large Cluster Support for the vSAN cluster in the vSphere Client. If it is already enabled, skip to step 2.

      1. Select the vSAN cluster in the vSphere Client.

      2. Select Configure > vSAN > Advanced Options.

      3. Enable Large Cluster Support.

      4. Click Apply.

      5. Click Yes.

    2. Run a vSAN health check to see which hosts require rebooting.

    3. Put the hosts into Maintenance Mode and reboot the hosts.

    For more information about large cluster support, see https://kb.vmware.com/kb/2110081.

  • Removing a host from a cluster, deleting a cluster from a workload domain, or deleting a workload domain fails if Service VMs (SVMs) are present

    If you deployed an endpoint protection service (such as guest introspection) to a cluster through NSX Data Center, then removing a host from the cluster, deleting the cluster, or deleting the workload domain containing the cluster will fail on the subtask Enter Maintenance Mode on ESXi Hosts.

    Workaround:

    • For host removal: Delete the Service VM from the host and retry the operation.

    • For cluster deletion: Delete the service deployment for the cluster and retry the operation.

    • For workload domain deletion: Delete the service deployment for all clusters in the workload domain and retry the operation.

  • vCenter Server overwrites the NFS datastore name when adding a cluster to a VI workload domain

    If you add an NFS datastore with the same NFS server IP address, but a different NFS datastore name, as an NFS datastore that already exists in the workload domain, then vCenter Server applies the existing datastore name to the new datastore.

    Workaround: To add an NFS datastore with a different datastore name, use a different NFS server IP address.
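    A simple pre-check against your own datastore inventory can catch this before you add the datastore. A minimal sketch (the inventory mapping is hypothetical, not queried from vCenter):

```python
# Illustrative sketch: warn when a new NFS datastore reuses a server IP that
# already backs a datastore in the workload domain, in which case vCenter
# applies the existing name. The inventory mapping is hypothetical.
existing = {"10.0.0.50": "nfs-ds-01"}  # NFS server IP -> existing datastore name

def check_new_nfs(server_ip: str, name: str) -> str:
    if server_ip in existing and existing[server_ip] != name:
        return (f"server {server_ip} already backs '{existing[server_ip]}'; "
                f"vCenter will apply that name instead of '{name}'")
    return "ok"

print(check_new_nfs("10.0.0.50", "nfs-ds-02"))
```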
