
VMware Cloud Foundation 2.3.0 Release Notes

VMware Cloud Foundation 2.3.0 | 17 JANUARY 2018 | Build 7597069 

VMware Cloud Foundation is a unified SDDC platform that brings together VMware vSphere, vSAN, and NSX, and optionally vRealize Suite and Horizon, into a natively integrated stack to deliver enterprise-ready cloud infrastructure for the private and public cloud. The Cloud Foundation 2.3.0 release continues to expand on SDDC automation, the VMware SDDC stack, and the partner ecosystem.

IMPORTANT NOTE: The version of ESXi (6.5 EP05) that was previously released with Cloud Foundation 2.3.0 has been recalled, as described in KB article 52345. VMware Cloud Foundation 2.3.0 is now available as a fresh install with the repackaged VMware vSphere (ESXi) ESXi-6.5.0-20171204001-standard patch. For more information, see KB article 2151112.

This release currently does not include a fix for VMware security bulletin VMSA-2018-0004.2.

What's in the Release Notes

The release notes cover the following topics:

  • What's New
  • VMware Software Versions and Build Numbers
  • VMware Software Edition License Information
  • Supported Hardware
  • Network Switch Operating System Versions
  • Documentation
  • Browser Compatibility and Screen Resolutions for the Cloud Foundation Web-Based User Interfaces
  • Updated Installation and Upgrade Information
  • Resolved Issues
  • Known Issues

What's New

The VMware Cloud Foundation 2.3.0 release includes the following:

  • Full SDDC deployment and configuration during bring-up and post-installation operations.
  • Ability to deploy and configure vRealize Automation v7.3 as an on-demand, post-installation operation, including HA configuration with NSX ESG load balancer and the ability to dynamically add workload domains as vRealize Automation endpoints.
  • Ability to deploy and configure vRealize Operations v6.6 as an on-demand, post-installation operation, including HA configuration with NSX ESG load balancer, the ability to add workload domains data collection, out-of-the-box monitoring of virtual infrastructure, and user-defined deployment sizing. It also includes management packs for vSphere, NSX, Storage, vRealize Automation, Log Insight, and Cloud Foundation.
  • Cloud Foundation now supports heterogeneous servers in a rack. Servers must be from the same vendor, but can be of different models, types (hybrid or all-flash), and sizes with variable CPU, memory, storage size, and type.
  • Imaging now supports importing custom ISO images from third-party sources. Imported ISO images must be at partner-supported acceptance level and based on qualified Cloud Foundation ESXi versions.
  • New Bundle Transfer Utility enables users to download LCM bundles from the VMware depot and upload them to SDDC Manager without internet access.
  • Improved backup integration through the SDDC Manager interface, including automatic backups upon key events such as workload domain creation/expansion/deletion, password rotation, and host decommissioning. This includes a checksum backup bundle to guard against data corruption.
  • Added support for the latest versions of NX-OS and Cumulus OS.
  • Added support for the Dell R740, and R740xd servers.
  • Added support for Arista ToR switch 7280SRA.

VMware Software Versions and Build Numbers

You can install Cloud Foundation 2.3.0 either directly or by upgrading. See the Updated Installation and Upgrade Information section.

The Cloud Foundation 2.3.0 software product is installed and deployed by completing two phases: Imaging phase (phase one) and Bring-Up with Automated Deployment phase (phase two). The following sections list the VMware software versions and builds that are involved in each phase.

Phase One: Imaging with VIA

In this phase, hardware is imaged using the following VMware software build:

Software Component | Version | Date | Build Number
VIA (the imaging appliance) | 2.3 | 09 JAN 2018 | 7525042

Phase Two: Bring-Up With Automated Deployment

In this phase, the Cloud Foundation software product enables automated deployment of the following software Bill-of-Materials (BOM). This BOM is interoperable and compatible.

Software Component | Version | Date | Build Number
Updated VMware Cloud Foundation Bundle | 2.3.0 | 18 JAN 2018 | 7597069
VMware SDDC Manager | 2.3.0 | 11 JAN 2018 | 7524634
VMware Platform Services Controller | 6.5 U1e | 09 JAN 2018 | 7515524
VMware vCenter Server on vCenter Server Appliance | 6.5 U1e | 09 JAN 2018 | 7515524
Repackaged VMware vSphere (ESXi) | 6.5 P02 | 18 JAN 2018 | 7388607
VMware vSAN | 6.6 | 11 JAN 2018 | 7395176
VMware NSX for vSphere | 6.3.5 | 11 JAN 2018 | 7119875
VMware NSX content pack for vRealize Log Insight | 3.6 | 08 AUG 2017 | n/a
VMware vRealize Operations | 6.6.1 | 08 AUG 2017 | 6163035
VMware vRealize Automation | 7.3 | 25 MAY 2017 | 5610496
VMware vRealize Log Insight | 4.3 | 03 JUN 2017 | 5084751
VMware vRealize Log Insight Agent | 4.3 | 03 MAR 2017 | 5052904
VMware vSAN content pack for vRealize Log Insight | 2.0 | 18 APR 2016 | n/a
VMware Tools | 10.1.15 | 09 SEP 2017 | 6677369
VMware Horizon View | 7.2 | 26 JUN 2017 | 5748532
VMware Horizon View content pack for vRealize Log Insight | 3.0 | n/a | n/a
VMware App Volumes | 2.12 | 08 DEC 2016 |

VMware Software Edition License Information

The VIA and SDDC Manager software is licensed under the Cloud Foundation license. As part of this product, the SDDC Manager software deploys specific VMware software products.

The following VMware software deployed by SDDC Manager is licensed under the Cloud Foundation license:

  • VMware ESXi
  • VMware vSAN
  • VMware NSX

The following VMware software deployed by SDDC Manager is licensed separately:

  • VMware vCenter Server
  • VMware vRealize Automation
  • VMware vRealize Operations
  • VMware vRealize Log Insight
  • Content packs for Log Insight
  • VMware Horizon View
  • VMware App Volumes

NOTE Only one vCenter Server license is required for all vCenter Servers deployed in a Cloud Foundation system.

NOTE The use of vRealize Log Insight for the management workload domains in a Cloud Foundation system is permitted without purchasing Log Insight licenses.

For details about the specific VMware software editions that are licensed under the licenses you have purchased, see the VMware Software Versions and Build Numbers section above.

For more general information, see the Cloud Foundation product page.

Supported Hardware

For details on the hardware requirements for a Cloud Foundation environment, including manufacturers and model numbers, see the VMware Cloud Foundation Compatibility Guide.

Network Switch Operating System Versions

For details on network operating systems for the networking switches in Cloud Foundation, see the VMware Cloud Foundation Compatibility Guide.

Documentation

To access the Cloud Foundation 2.3.0 documentation, go to the VMware Cloud Foundation documentation landing page.

To access the documentation for VMware software products that SDDC Manager can deploy, see their documentation landing pages and use the drop-down menus on each page to choose the appropriate version.

Browser Compatibility and Screen Resolutions for the Cloud Foundation Web-Based User Interfaces

The following Web browsers can be used to view the Cloud Foundation Web-based user interfaces:

  • Mozilla Firefox: Version 57.x or 56.x
  • Google Chrome: Version 65.x or 64.x
  • Internet Explorer: 11.x or 10.x for Windows systems, with all security updates installed
  • Safari: Basic Version 11.x or 10.x on Mac only

For the Web-based user interfaces, the supported standard resolution is 1024 by 768 pixels. For best results, use a screen resolution within these tested resolutions:

  • 1024 by 768 pixels (standard)
  • 1366 by 768 pixels
  • 1280 by 1024 pixels
  • 1680 by 1050 pixels

Resolutions below 1024 by 768, such as 640 by 960 or 480 by 800, are not supported.

Updated Installation and Upgrade Information

You can install Cloud Foundation 2.3.0 as a new release, or upgrade from an existing release.

NOTE If you downloaded or installed the Cloud Foundation upgrade bundle that included ESXi 6.5 EP05, released on 11 JAN 2018, follow the instructions in the appropriate Knowledge Base article.

Fresh Installation of Cloud Foundation 2.3

When you install Cloud Foundation 2.3.0 as a new release, cumulative patches up to vCenter Server 6.5 U1e and VMware ESXi 6.5 P02 are installed. For a fresh installation of this release:

  1. Read the VIA User's Guide for guidance on setting up your environment, deploying VIA, and imaging the rack.
  2. Read the Cloud Foundation Overview and Bring-Up Guide for information on deploying VMware Cloud Foundation.

Upgrading to Cloud Foundation 2.3.0

The upgrade path for Cloud Foundation 2.3.0 is:

  • 2.2.1.1 to 2.3.0
  • 2.2.1 to 2.2.1.1 to 2.3.0

The Cloud Foundation 2.3.0 LCM update bundle includes the VMware software components described in the table below. This patch bundle is hosted on the VMware Depot site and available via the Lifecycle Management feature in SDDC Manager. See Lifecycle Management in the Administering VMware Cloud Foundation guide.

LCM Update Component | Version | Date | Build Number
SDDC Manager LCM Update | 2.3.0 | 11 JAN 2018 | n/a
SDDC Manager Update | 2.3.0 | 11 JAN 2018 | n/a
VMware NSX for vSphere | 6.3.5 | 11 JAN 2018 | 7119875

For information on upgrading to Cloud Foundation 2.2.1.1, see the Release Notes for that version.

  1. Before upgrading, clean out out-of-date bundles. See Knowledge Base article 52402.
  2. Before scheduling an upgrade, run the pre-check utility with the following command from the SDDC Manager Controller VM (a fuller example appears after this list):
    ./sos --pre-upgrade-check
  3. After upgrading, perform the following tasks:

Note: The bundle transfer utility is available only after you upgrade to Cloud Foundation 2.3.
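
For reference, a minimal example of running the pre-check, assuming the SoS tool is in its default location (the same path used by the SoS commands shown later in these notes):

    cd /opt/vmware/sddc-support
    ./sos --pre-upgrade-check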

Resolved Issues

  • The SoS tool returns a failed status for health check.

    When you run a general Supportability and Serviceability (SoS) check, without options, the health check may return as FAILED. The cause is that the PRM_HOST table lists a decommissioned host when that listing should have been cleared when the decommission operation completed.

    This issue is fixed in this release.

  • User is not prevented from marking bootstrapped host as "Ineligible"

    Users are able to configure bring-up to skip hosts during the add-rack and add-host workflows by running a script that marks the hosts as ineligible.
    However, if a user inadvertently marks an already bootstrapped host as ineligible, it generates an event and alert that requires the host to be decommissioned and reimaged, which prevents the host from being added to a domain. The user is also prevented from running add-host or add-rack until this is resolved.

    This issue is fixed in this release.

  • Decommissioning of all hosts from second rack results in PRM exception

    If you decommission all hosts from the second rack, this breaks the logical inventory data fetch. As a result, SDDC Manager cannot display any data from the rack.

    This issue is resolved in this release.

  • The Add Host workflow fails at the reconfiguring Host OOB IP task

    The Add Host workflow fails at the reconfiguring Host OOB IP task.

    This issue is fixed in this release.

  • SDDC Manager considers hosts for workloads whose bootstrap has failed and status is "Eligible"

    When creating a workload domain, SDDC Manager should not consider hosts with an "Eligible" status, only those with "Complete" status. Otherwise, it may add a host even though the Add Host workflow has failed.

    This issue is resolved in this release.

  • POSV Failed as N0 was missing from prm_host tables

    Power On System Validation (POSV) failed because N0 was missing from the prm_host tables. This issue was observed in a deployment with a Quanta S210 server and a Quanta ToR switch, a combination which is not supported.

    This issue is resolved in this release.

  • Unable to trigger password rotation if there is a failed SDDC Manager Configurations Backup workflow

    The workflow for triggering Password Rotation fails if the SDDC Manager Configuration Backup workflow has a failed status. This behavior is by design; you cannot rotate passwords if there is a failed backup workflow.

    This issue is resolved in this release.

  • Improve error logging for failures that happen before the VUM upgrade stage

    The error message that is returned when an ESXi update fails includes the misleading text: "LCM will bring the domain back online once problems found in above steps are fixed manually."

    This issue is resolved in this release.

  • In SDDC Manager, the Dashboard: Physical Resource: Rack Details page is taking a long time to respond

    This delay is a result of SDDC Manager calls to the Physical Resource Manager (PRM) requiring up to five times longer to complete than previously baselined.

    This issue is resolved in this release.

  • Datacenter added as part of bring-up is not displayed in the Network Settings: Datacenter page

    The datacenter subnet named PUBLIC is provided as part of the bring-up process and is associated with the management domain. However, this subnet is not listed on the Datacenter page, allowing for the possibility of a user inadvertently creating the same subnet in the Datacenter page. This triggers a validation rule and displays a "Subnet in use" message.

    This issue is resolved in this release.

  • LCM update status erroneously displays as FAILED after auto-recovery

    If the ESXi update fails during the ESX HOST UPGRADE STAGE REBOOT stage due to connectivity issues, and the update is automatically recovered after the connectivity issue is resolved, the update might still display as FAILED in the LCM update history even though it was successful. (The respective domains are upgraded to the target version.)

    This issue is resolved in this release.

  • VDI configuration: FQDN setting requires specific formatting

    When configuring the network for a new VDI, if you select Active Directory, an error will result if you do not use the correct formatting for the FQDN setting.

    This issue is resolved in this release.

  • Lifecycle Management page shows all available update bundles independent of the Cloud Foundation release in your environment

    The Lifecycle Management Repository page displays all available updates, regardless of your specific release.

    This issue is resolved in this release.

  • Existing trunk ports on ToR Cisco Nexus 9K switches are assigned to new VLANs when a VI or VDI workload domain is created

    During imaging with VIA, port-channels 29, 45, 100, 110, and 120 are created on the ToR Cisco Nexus 9K switches and are set to belong to all VLANs. As a result, when new VLANs are entered during creation of a VI or VDI workload domain, these port-channels become part of the new VLANs, and the external VLAN and other VLANs created specifically for the new workload domain get assigned to all existing trunk ports on the ToR Cisco Nexus 9K switches, including the uplink and management cluster bonds.

    This issue is resolved in this release.

  • Host sometimes hangs after doing a normal reboot
    Due to a known vSAN issue, in a Cloud Foundation installation that has Dell R630 or Dell R730xd servers with certain controllers, sometimes a host hangs after a normal reboot. For a complete list of affected controllers, see VMware Knowledge Base article 2144936.

    This issue is resolved in this release.

  • When a Quanta server is powered down (off), the CPU_CAT_ERROR (CPU Catastrophic Error) event is generated

    When a Quanta server is powered down, the SERVER_DOWN event is generated, which is the expected behavior. The issue is the CPU_CAT_ERROR event is also generated when a Quanta server is powered down.

    This issue is resolved in this release.

Known Issues

The known issues are grouped as follows.

Imaging Known Issues
  • TOR imaging fails at Setup port configurations task.

    Sometimes the TOR switch imaging process fails during the port configurations task with an "auth" issue, resulting in incorrect authentication for the admin user. As a result, VIA cannot access the switch using the default password.

    Workaround: Use the following steps to resolve this issue:

    1. Reset the switch password. Please refer to the vendor documentation.
    2. Clean up the switch.
    3. Re-image the switch separately in VIA.
  • Modified bundle section in VIA interface not displaying correctly

    Observed in the Firefox browser. The interface elements of the Bundle tab in VIA do not display correctly. This is caused by older versions of JavaScript files cached in the browser.

    Workaround: Use Ctrl+F5 to force a refresh. The Bundle tab should display correctly.

  • The VIA user interface shows unsupported ToR and inter-rack switches as available for selection.

    The VIA user interface shows unsupported Quanta and Dell ToR and inter-rack switches as available for selection. These include Quanta ToR models Quanta_LY8-x86 and Quanta_LY8-ppc, and inter-rack model Quanta_LY6; and Dell ToR model Dell_S4000 and inter-rack model Dell_S6000.

    Only switches manufactured by Cisco Inc. or Arista Network Inc. are officially supported.

    Workaround:  When specifying ToR or inter-rack switches in VIA, do not select Dell Inc. or Quanta Computer Inc. brand.
    Select Cisco Inc. or Arista Network Inc. switches, both of which are officially supported.

Bring-Up Known Issues
  • Alerts raised during POSV do not contain a rack name in their description
    Because the rack name is not specified by the user until after the IP allocation step in the system configuration wizard, a rack name is not available to display in alerts raised prior to that step. Alerts that are raised subsequent to that step do contain the user-specified rack name.

    Workaround: None.

  • The bring-up process on the first rack fails at task "NSX: Register vCenter with error NSX did not power on on time"

    The bring-up process fails because the NSX Controller virtual machines did not power on during the wait time set in the NSX: Register vCenter task.

    Workaround: On the Bring-Up Status page, click Retry to proceed with the bring-up process.

  • System bring-up process might fail at task ESX: Configure Power Management
    If intermittent connectivity to an ESXi host occurs during the bring-up process, the process might fail at the ESX: Configure Power Management task with the following exception
    com.vmware.vrack.vrm.core.error.EvoWorkflowException: Unable to access the ESXi host

    Workaround: In the bring-up user interface, click the Retry button to perform the task and proceed with the bring-up process.

  • Google Chrome browser crashes for no known reason during bring-up

    The Chrome browser sometimes crashes when left open during bring-up, displaying the "Aw Snap something went wrong while displaying the webpage" message. Bring-up is unaffected. This is presumed to be a browser issue, not a Cloud Foundation issue.

    Workaround: Reload the web page.

  • Bring-up failed at the PostInventoryToVrmAdapter task.

    Sometimes the bring-up process fails at the PostInventoryToVrmAdapter task because it is unable to connect to the Zookeeper service, even though Zookeeper is running.

    Workaround: Log into the SDDC Manager Controller VM to stop and restart the Zookeeper service.

    systemctl stop zookeeper
    systemctl start zookeeper
    
  • Bring-up fails during the BackupStateAndBootback operation.

    Sometimes bring-up can fail if some services crash on the ESXi host, resulting in large dumps filling up space in the /store/var/core directory. The BackupStateAndBootback task then has insufficient space to save backups as part of the bring-up process. The log shows a HOST_STATE_BACKUP_FAILED error state.

    Workaround: Log into the ESXi host and delete the dump files from the /store/var/core directory, then retry bring-up.
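
    A minimal sketch of the cleanup, assuming SSH access to the ESXi host (review the directory contents before removing anything):

    ls -lh /store/var/core              # review the dump files first
    rm /store/var/core/<dump-file>      # remove each dump file, then retry bring-up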

  • Bring-up fails during NSX wire creation operation.

    Sometimes the bring-up process fails during the NSX configuration operation with the error NSX_VIRTUAL_WIRE_CREATION_POST_VALIDATION_FAILED

    Workaround: Retry the operation. It should succeed the second time.

  • Health-check checks wrong IP for the spine.

    The health check queries an outdated IP address for the spine switch, resulting in a ping failure or a switch-down error. This behavior has been observed during bring-up and during Add Rack operations.

    Workaround: You can ignore the down alert raised during bring-up and Add Rack operations. After the operations are complete, you can clear the alerts.

Multi-Rack Bring-Up Known Issues
  • Add Rack (Rack 2) operation fails, but rack hosts are shown as available.

    Sometimes, even though the Add Rack bring-up operation fails, the hosts on that rack appear as available in the Host Selection screen when creating workload domains. Only the hosts on the connected rack should show as available. This issue has been observed when the Add Rack bring-up failure returns the error message UPDATE_OOB_IP_ON_SWITCH_FAILED OOB IP.

    Workaround: Even though the interface allows you to select the host, when you click Next, you will see an error message that the requested IaaS resource cannot be found. You can return to the Host Selection page and select a valid host from Rack 1.

Post Bring-Up Known Issues
  • When you use the vSphere Web Client to view the vCenter Server clusters associated with the management domains or workload domains, you might see alarms related to vSAN HCL
    As described in KB article 2109262, the vSAN Health Service has a built-in Hardware Compatibility List (HCL) health check that uses a JSON file as its HCL database to inform the service of the hardware and firmware that is supported for vSAN. These alarms are raised if the HCL health check fails. However, because the set of supported hardware and firmware is constantly being updated as support for new hardware and firmware is added, if the check fails, the first step is to obtain the most recent vSAN HCL data and use the vSphere Web Client to update the HCL database.

    Workaround: The steps to update the vSAN HCL database are described in KB article 2145116.

  • The standard vCenter Server alarm named "License inventory monitoring" is raised for overprovisioning of the ESXi hosts, even though the ESXi hosts have the appropriate license key applied
    Under the standard licensing terms for the VMware Cloud Foundation product, all of the ESXi hosts in a Cloud Foundation installation are licensed using the same key. In the vCenter Server Licenses pane in the vSphere Web Client, in the Product column for this key, you see the associated product name is VMware vSphere 6 Enterprise for Embedded OEMs (CPUs). Under the VMware licensing terms, that type of key is allowed to be overprovisioned. However, due to this issue, when the vCenter Server sees this key as overprovisioned, it is incorrectly raising the standard vSphere "License inventory monitoring" alarm. You can use the vSphere Web Client to see the alarm definition for that alarm, by selecting the vCenter Server object in the left hand navigation area, and clicking the Manage tab > Alarm Definitions and clicking License inventory monitoring in the list.

    Workaround: None. Ignore these vCenter Server license inventory monitoring alarms about the overprovisioning of license capacity of the ESXi hosts in your Cloud Foundation installation.

  • Update/upgrade attempt on unmanaged host fails with error: "UPGRADE_SPEC_INVALID_DATA; The ESX host is managed by vCenter server with IP: x.x.x.x"

    If an update or upgrade of an unmanaged host fails with the above message, retrying will also fail because the API expects unmanaged hosts not to be managed by vCenter Server. If the initial update failure persists after the host has been added to vCenter Server, the host remains attached to vCenter Server and causes subsequent attempts to fail.

    Workaround: Before retrying, remove the host from the vCenter inventory. Verify that the host is present in the SDDC Manager free-pool capacity and displays a healthy status.

  • Recreate VI workload domain or VDI workload domain blocked

    Sometimes you cannot recreate a VI workload domain or VDI workload domain after it has been deleted post-bring-up.

    Workaround: Clear the error Alerts, and retry.

  • During imaging, the host loses network connectivity but the Imaging tab indicates that it has been successfully imaged.

    If DHCP is not enabled on the server BMC before imaging, the server is not accessible on the network.

    Workaround: If DHCP was not enabled on the server BMC before imaging, follow the steps below.

    1. Log in to the management switch.
    2. Open the /etc/network/interfaces file.
    3. Scan the configuration settings for the auto iface entries.

      All the entries should show bridge-access 4. For example:

      auto swp6
      iface swp6
          bridge-access 4
      
    4. If any entries show bridge-access 3, change the bridge-access value to 4.
    5. Save and close the /etc/network/interfaces file.

      This should restore network access to the host and its iDRAC.
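
      If the change does not take effect on its own, on a Cumulus Linux management switch the interface configuration can typically be re-applied with the following command (a hedged sketch; consult the switch documentation for your OS version):

      sudo ifreload -a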

  • Configuration wizard redisplays after successful vROPs deployment.

    This is caused by a wrong or duplicate license key having been entered in the vROps configuration wizard.

    Workaround: Use a valid license key. As a temporary alternative, when specifying the license key, select the "Evaluation License Key" option. This will allow vROps to deploy successfully. You can verify the correct license key in the vROps interface after deployment.

  • vRealize Operations deployment fails.

    After successful imaging, bring-up, and VDI/VI creation, vRealize Operations deployment fails with error message VROPS_CONFIGURING_LICENCE_FAILED. This is typically caused by entering the wrong vROPS license key during deployment.

    Workaround: Use a valid Enterprise license when deploying vROPS in Cloud Foundation. Add-on license keys are not valid for this type of deployment.

  • Dell Arista Rack: Bring-up Host selection screen shows incorrect port information.

    Sometimes when configuring the rack, the Host Selection screen displays the ESXi port information instead of the neighboring ToR port information.

    Workaround: Ignore the incorrect port information. This is a display issue only.

  • Network down alerts received although the switches are up.

    The system returns "Alert - Operational status is down for ToR switch" alerts although the switches are up. The most likely cause is that the same VLAN is used by both the DC network and the Management workload domain bring-up operation.

    Workaround: Configure separate VLANs for the DC and Management workload domain bring-up.

SDDC Manager Known Issues
  • SDDC Manager has no interface for verifying and modifying the configured DNS forwarders

    SDDC Manager currently provides no user interface where a user can view, verify, and, if necessary, modify the DNS forwarder configuration. Ideally, the SDDC Manager (VRM) UI, probably somewhere in the settings, would allow you to see and change those DNS forwarders.

    Workaround: You can access and modify these configurations in the unbound.conf file, as illustrated after the steps below.

    1. Login to the SDDC Manager Controller VM.
    2. Open the /etc/unbound/unbound.conf file.
    3. Verify or modify the configured DNS forwarders.
    4. Save and close the /etc/unbound/unbound.conf file.
    5. Reboot the SDDC Manager to verify configuration persistence.
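
    For reference, a DNS forwarder entry in unbound.conf generally takes the following form (the addresses shown are placeholders, not values from your deployment):

      forward-zone:
          name: "."
          forward-addr: 10.0.0.53
          forward-addr: 10.0.0.54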

  • Unable to disassociate a network from a VI/VDI workload domain.

    There is no apparent functionality in SDDC Manager to disassociate a network from a workload domain.

    Workaround: Use the --delete-dc-nw option in the SoS tool. See the Cloud Foundation documentation.

  • After the SDDC Manager times out in your browser session and displays the login screen, when you try to log in after a few hours, an error message about the required user name and password is displayed instead of the expected message about the expired SAML request

    Authentication to the SDDC Manager uses SAML (Security Assertion Markup Language). When the SDDC Manager is idle for a period of time, it automatically logs you out and displays the login screen. The URL in the browser holds the original SAML authentication request. After a longer period of time, on the order of hours, the SAML authentication request expires, by design. As a result, if you return to the screen without refreshing the browser session to get a new SAML authentication request, the request fails by design. However, instead of an error message informing you of the expired SAML request, an error message stating "User name and password are required" is displayed.

    Workaround: If you encounter this issue, open a new browser session to the virtual IP address of your SDDC Manager, such as https://vrm.subdomain.root-domain:8443/vrm-ui, as described in the Administering VMware Cloud Foundation Guide.

  • Add Host/Add Rack interface prevents user from adding host/rack when the bring-up of any previous add rack/add host task has failed.

    The Add Host and Add Rack interfaces do not allow users to add a host or a rack if the bring-up of a previous Add Host or Add Rack task is currently in a failed state. Adding a rack, or adding hosts to a rack, should be independent of the state of other racks.

    Workaround: None. This issue may be addressed as a design issue in a future release.

  • An expansion workflow that involves adding more than one ESXi host to a management or workload domain is marked successful, even though when the hosts were added to the domain's vCenter Server cluster, the NSX Manager Host Preparation process failed to complete on one or more hosts
    During an expansion workflow, the hosts are added to the vCenter Server cluster that underlies the management or workload domain. When hosts are added to a vCenter Server cluster that has NSX enabled on the cluster, one of the tasks involves preparing the newly added hosts, as described in the Prepare Hosts on the Primary NSX Manager topic in the NSX 6.2 documentation. Part of this host preparation process involves a scan of each added ESXi host prior to installing the required NSX software on that host. If the scan on a particular host fails for some transient reason, the NSX Manager host preparation process fails for that host. However, this failure condition is not reported to the expansion workflow and the workflow appears as successful in the SDDC Manager.

    Workaround: When performing an expansion workflow that involves multiple hosts and when the SDDC Manager indicates the workflow has completed, perform the following steps to verify the NSX host preparation was successful for each added host, and if not, resolve the issues reported by NSX.

    1. Using the vSphere Web Client, log in to the vCenter Server instance for the management or workload domain that was expanded.
    2. In the vSphere Web Client, examine the NSX Manager host preparation state by navigating to Networking & Security > Installation and clicking the Host Preparation tab.
    3. On the Host Preparation tab, expand the cluster if it is not already expanded, and examine the data reported for each host in the Installation Status column and VXLAN column:
      • If the Installation Status column reports green checkmarks and "Configured" in the VXLAN column for all hosts, the added hosts were successfully prepared.
      • If the Installation Status column displays "Not Ready" and the corresponding VXLAN column displays "Error" for a host, resolve the error by right-clicking on the VXLAN column's "Error" and clicking Resolve. This action also applies the VXLAN distributed switch port group to that host.
  • Because no unique identifier is used to identify a Cloud Foundation system, when you deploy more than one system in your networking environment, you cannot use the same level of Active Directory (AD) integration for both systems

    For the first Cloud Foundation system deployed in your environment, you would configure AD authentication by adding your AD as an identity source to the Platform Services Controller instances using the Active Directory (Integrated Windows Authentication) option and joining the vCenter Single Sign-On server to the AD domain. Due to this issue, for additional systems, you cannot do that same configuration.

    Workaround: For additional systems, the Active Directory as an LDAP Server option can be used to add your AD as an identity source to the Platform Services Controller instances in those systems.

  • A workload domain’s workflow can fail if a VM in the management domain on which the workflow depends is in non-operational state

    Workflows to deploy, delete, and expand workload domains can fail if some of the management domain’s virtual machines are in an invalid state, down, or temporarily inaccessible. SDDC Manager does not prevent you from initiating and submitting a workflow when one of the VMs is in an invalid state. These virtual machines include the PSC VMs, vCenter Server VMs, vRealize Operations Manager VM, vRealize Log Insight VM, NSX Manager VM, and, in a multi-rack system, the SDDC Manager VM. If you submit the workflow and one of those virtual machines becomes temporarily inaccessible as the workflow is performed, the workflow will fail.

    Workaround: Before initiating a workflow, review the state of the management domain’s virtual machines to see that they are all in a valid (green) state. You can see the virtual machines by launching the vSphere Web Client from the domain details of the management domain.

  • SoS log collection and backup fail on Arista spine switches.

    Sometimes SoS log collection and backup fail on Arista spine switches. This is not observed on other spine switches.

    Workaround: Perform a manual backup of the spine switch prior to performing any FRU operations, such as replacing the spine switch.

  • When using the SDDC Manager’s Uplink screen to update L3 connectivity settings, the Uplink screen does not indicate which of the ToR switches has the L3 uplink configured on it
    When an uplink is configured to L3 mode, only one of the two ToR switches has an uplink port. The SDDC Manager does not indicate which ToR switch is connected to the upstream router.

    Workaround: When you use the Uplink screen to change uplink connectivity settings, perform the following steps.
    Note: Changing the settings triggers uplink reconfiguration on the switches. Because the reconfiguration process might take a few minutes to complete, connectivity to the corporate network might be lost during the process. To avoid losing connectivity with SDDC Manager, it is strongly recommended that you are connected to port 48 on the management switch when updating the settings using the Uplink screen.

    1. Connect to port 48 on the management switch and log in to the SDDC Manager using that connection.
    2. On the Uplink screen, configure the L3 uplink and click SAVE EDITS.
    3. Re-configure your upstream router to use the new network settings that you specified in step 2.
    4. Wait at least 3 minutes.
    5. Try connecting the upstream router to the top ToR switch.
    6. Test the new uplink connectivity by disconnecting from port 48 on the management switch and connecting to the rack with the new uplink configuration.
    7. If you are unable to reconnect to the rack, try connecting the upstream router to the bottom ToR switch.
    8. If you are unable to connect to the rack, reconnect using port 48 on the management switch and try reconfiguring your network to the original configuration.
    9. If you cannot connect to the rack with either configuration, contact VMware Support.
  • Rebooting Switches Can Crash SDDC Manager Server

    After rebooting the ToR and inter-rack switches, the SDDC Manager interface was not accessible.

    Workaround: Restart the Tomcat server to restore the SDDC Manager server. Log into the SDDC Manager Controller VM and run the following command:
    systemctl restart vcfmanager

  • Unable to change the Datacenter Connection name on racks from the SDDC Manager interface

    If a user wants to change the value of the Datacenter Connection name on a rack, there is no apparent way to do so from the SDDC Manager interface.

    Workaround: Log into the SDDC Manager Controller VM and Delete the datacenter network using the following SoS command:
    /opt/vmware/sddc-support/sos --delete-dc-nw --dc-nw-name <datacenter_name>

    You can now create a new datacenter network with the desired name.

  • vRealize Operations and vRealize Automation tasks do not appear in SDDC Manager.

    After starting vRealize Operations in Cloud Foundation, vRealize Operations tasks do not appear in SDDC Manager. This is a temporary effect that results from the task display starting before the vRealize Lifecycle Configuration Manager and NSX Edge VMs finish deploying.

    Workaround: Allow more time for the vRealize Lifecycle Configuration Manager and NSX Edge VMs to finish deploying. Then the tasks will appear.

  • Add Rack or Add Host operation fails during Password Rotation.

    The operation fails because a domain is in a failed state. The issue results because the workflow allows users to run these operations even though conditions may exist that will result in a failed status.

    Workaround: To prevent or resolve this issue, check the System Status page for events that prevent adding a new host or rack, such as a similar operation in progress or that has failed. For example, if you recently completed such an operation but skipped hosts during the workflow, you will encounter this issue. Resolve any such status and retry the operation.

  • Decommission of a host which is not part of a domain fails.

    Sometimes if you decommission a host that is not part of a domain, the operation fails with a message such as "Host with IP address 192.168.100.109 is currently reserved by a user. Decommissioning of host is not allowed in this state." However, this message is erroneous.

    Workaround: If the operation fails and returns the Host with IP address 192.168.100.109 is currently reserved... message, wait approximately twenty minutes and retry the decommission operation.

  • vRA IaaS cloning operation is incorrectly shown as successful.

    The IaaS cloning operation is shown as successful but the machines have not joined Active Directory. As a result, the IaaS Management Agent fails.

    Workaround: Uninstall and redeploy vRealize Automation. This option is made available in SDDC Manager if this operation fails.

  • SDDC Manager inventory shows an incorrect vCenter version.

    Before upgrading to 2.3.0, check the version of vCenter for all workloads created after upgrading to 2.2.1. If the version of vCenter in the workloads does not match the version shown on the Lifecycle > Inventory page, contact VMware Support before applying 2.3.0 updates.

    To check vCenter version:

    1. Find the vCenter IP address and root password by running /home/vrack/bin/lookup-passwords.
    2. Using SSH, log in to vCenter.
    3. Run the following commands
      /bin/appliancesh
      system.version.get
      
    4. Note the vCenter build number.
    5. Compare with the vCenter version number shown in the Lifecycle > Inventory page.

    Workaround: None. Contact VMware Support.

  • Direct access to the vrlcm appliance is not supported.

    Customers must use the VMware SDDC Manager to deploy, monitor, and manage their vRealize Operations and vRealize Automation deployments.

  • In Add Host workflow in SDDC Manager, a dummy task appears on the Add Host Bringup Status page.

    Sometimes when adding a host in SDDC Manager, a phantom or dummy subtask may appear on the Add Host Bringup Status page (Settings > Add Host), labeled Not Available and with a status of Not Started.

    Workaround: This subtask is a development artifact and does not affect the Cloud Foundation's operation. You can ignore it.

  • After upgrading to Cloud Foundation 2.3.0, SDDC Manager shows validation errors.

    After upgrading Cloud Foundation from 2.2.1 to 2.3.0, you may see validation errors when creating VI workload domains and related screens.

    Workaround: Log out of SDDC Manager and log back in using a different browser.

Virtual Infrastructure Workload Domain Known Issues
  • Cisco plugin throws WARNING instead of ERROR if <subnet> is already configured on interface

    This is caused by an IP address conflict. When the uplink is over layer 3 (IP routing based) instead of VLAN switching, the management VLAN and VI VLAN are configured to have a Switch Virtual Interface (SVI) on both ToR switches, with individual IP addresses and a common VRRP IP for each VLAN. When the VI is deleted, these IP addresses are not removed from the SVIs. As a result, if any subsequent VI workload domain uses the same subnet, and therefore the same SVI IP, the SVI creation fails.

    Workaround: You can prevent the IP address conflict as follows:

    1. After a VI workload is deleted through the UI on a setup with a Layer 3 uplink, log in to both ToR switches.
    2. Run the following commands.
      configure terminal
      no interface vlan <VLAN id>

      Where <VLAN id> is the VLAN id that was given when the VI workload was created.

  • A previously used free pool host might not be considered for workload deployment capacity.

    In some cases, a free pool host that was used by a VI workload domain may not be considered for deployment capacity in subsequent VI workload domains, and may be flagged with a HOST_CANNOT_BE_USED_ALERT. After the original VI workload domain is deleted, the HMS service has the wrong password, resulting in the alert status.

    Workaround: Use the following procedure to recover from this issue.

    1. From the rack inventory, identify the node IP and obtain its IPMI password using lookup-password.
    2. Shutdown the host from IPMI power control for twenty minutes.
    3. Log into SDDC Manager Controller VM as root.
    4. Access the postgres database and delete the problem_record.
      su - postgres
      psql vrm
      delete from problem_record;
    5. Restart the vcfmanager service
      systemctl restart vcfmanager
  • In a Cloud Foundation environment configured with L3 uplinks, when you try to create a workload domain with a data center (external) connection using the same subnet but a different VLAN as a workload domain that was previously created and deleted, the workload domain creation fails
    When a workload domain is deleted and your environment’s ToR switch uplinks are configured with L3, the Switched Virtual Interfaces (SVIs) that were originally created on the ToR switches for that workload domain are not deleted. Due to this issue, if you subsequently try to create a workload domain using a different VLAN ID but same subnet as the deleted one, the workload domain creation fails because the switches do not allow two VLAN IDs with the same subnet.

    Workaround: When creating a VI or VDI workload domain, in the data center connection’s configuration, do not combine a different VLAN ID with a subnet that was previously used for a deleted workload domain. You can reuse the same VLAN with the same subnet or reuse the same VLAN with a different subnet.

  • The VI workload domain creation and expansion workflows might fail at task "ConfigureVCenterForLogInsightTask" due to a failure to connect to the deployed vRealize Log Insight instance
    During the VI workload domain creation and expansion workflows, if the system cannot connect to the deployed vRealize Log Insight instance, the workflow fails at the "ConfigureVCenterForLogInsightTask" task and you see an exception in the log with a 500 HTTP error code:
    [com.vmware.vrack.vrm.workflow.tasks.loginsight.ConfigureVCenterForLogInsightTask] Exception while doing the integration: Create session to LogInsight Failed : HTTP error code : 500

    Workaround: Restart the vRealize Log Insight's virtual machine by using the management domain's vCenter Server launch link to open the vSphere Web Client and using the vSphere Web Client user interface to restart the vRealize Log Insight's virtual machine. Then restart the failed workflow.

  • The VI workload domain creation workflow might fail at task "VC: Deploy vCenter" due to a failure to connect to the system's Platform Services Controller instances
    During the VI workload domain creation workflow, if the system cannot connect to the integrated Platform Services Controller instances, the workflow fails at the "VC: Deploy vCenter" task and you see errors in the log such as:
    Unexpected error while verifying Single Sign-On credentials: [Errno 111]
    Connection refused
    Cannot get a security token with the specified vCenter Single Sign-On configuration.

    Workaround: Restart the system's PSC-2 virtual appliance, then the PSC-1 virtual appliance, then the vCenter Server virtual appliance. Wait until each virtual appliance is up and running before restarting the next one. Then restart the failed workflow.

  • On the Review page of the VI workload domain creation wizard, the Download and Print buttons are not operational
    Due to this issue, when you reach the Review step of the VI workload domain creation wizard, you cannot use the Download or Print buttons to create a printable file of the displayed information for future reference.

    Workaround: None. At the Review step of the wizard, you must manually capture the information for future reference, for example by taking screen captures of the displayed information.

  • Dual rack VI creation fails during the vCenter: Deploy vCenter workflow

    When creating a dual rack VI workload domain, the workflow may fail during the vCenter: Deploy vCenter step. The log may show that there is no available space for the vSAN datastore. For example:

    2017-08-17 10:40:18.146 [Thread-6839] DEBUG [com.vmware.vrack.vrm.core.local.InMemoryLogger] 
    The free space of datastore 'vsanDatastore' (0.0 GB) in host

    Workaround: Restart the VMware vCenter Server by restarting the vmware-vpxd service and retry. The process should succeed. For information about restarting this service, see the Knowledge Base Article 2109887 Stopping, starting, or restarting VMware vCenter Server Appliance 6.x services.
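
    As a hedged sketch only (the Knowledge Base article above is the authoritative reference), the vmware-vpxd service on a vCenter Server Appliance 6.x can typically be restarted from the appliance shell with:

    service-control --stop vmware-vpxd
    service-control --start vmware-vpxd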

  • The "Create NSX Config File" task fails in the SDDC Manager Configuration Backup workflow/

    The Cloud Foundation configuration backup fails if it is triggered while any workflow is in a running state in SDDC Manager. The backup is not automatically retried. However, configuration backups are automatically triggered after completion of any of the following operations:

    • Password rotation

    • Creation, deletion, or expansion of a workload domain

    • A host or rack is added

    • Host decommission

    Workaround: Configure the backup after the workload domain creation process is complete. Wait for the next automatically triggered backup or manually trigger one. See Backing Up the Cloud Foundation Configuration in the product documentation.

  • Disassociate vROPs from a VI or VDI workload domain fails at ConfigureVsanInVropsForExistingWorkloadDomain operation.

    After enabling vROPS as part of a VI or VDI workload domain expansion, attempts to disassociate vROPS from the workload domain may fail.

    Workaround: Retry the disassociate operation.

  • New Host Selection workflow shows unhealthy hosts.

    Cloud Foundation does not presently screen unassigned hosts for health status.

    Workaround: Before selecting a host for inclusion in a domain, run the SoS command to check its health. See Supportability and Serviceability (SoS) Tool in the product documentation.

VDI Workload Domain Known Issues
  • VDI Creation failing at task vCenter: Enable vSAN

    While creating a new VDI, the workflow fails during the vCenter: Enable vSAN task.

    Workaround: Disable and then re-enable vSAN on the cluster in the newly deployed vCenter for the VDI. Retry the workflow. It should continue.

  • VDI creation will fail at Register VMware Horizon View Serial Number task.

    Sometimes VDI creation fails at the Register VMware Horizon View Serial Number task, due to a Windows firewall issue where the connection server installation takes place.

    Workaround: When creating a VDI, manually set firewall rules to avoid this issue. For more information about network and port configuration in Horizon 7, see NETWORK PORTS IN VMWARE HORIZON 7 and Network Ports in VMware Horizon 7.

  • ESXi update may fail on a VDI domain

    While upgrading a VDI domain, ESXi update may fail on the stage “ESX UPGRADE VUM STAGE INSTALL UPDATE”.

    Workaround: Wait for the update to be available again and retry upgrade.

  • Some VDI infrastructure settings can be modified even though they should be restricted.

    The following VDI infrastructure settings can be modified even though they should be restricted to prevent the user from modifying them:

    • Max Desktops per Connection Server
    • Max Desktops per Security Server
    • Max Desktops per vCenter Server
    • Desktop System Drive Size
    • Desktop System Snapshot Size

    Workaround: Do not manually modify these settings.

  • The VDI workload domain creation workflow might fail at task "Instantiate Horizon View Adapter"
    Due to intermittent timing issues, the VDI workload domain creation workflow sometimes fails at the Instantiate Horizon View Adapter task with the following exception error in the log: com.vmware.vrack.vdi.deployment.tools.tasks.VDIWorkflowException: "Unable to create vROps REST client" As a result, the pairing credential between the vRealize Operations Manager instance and the VDI environment is in a partially instantiated state and must be deleted before restarting the workflow.

    Workaround: Manually delete the pairing credential that is associated with the workload domain's Horizon Connection server and then restart the failed workflow using the Restart Workflow action in the workflow's status screen using these steps:

    1. Verify that you have the IP address for the first Horizon Connection server that was deployed for this VDI workload domain, such as 10.11.39.51. You will use that IP address to identify which pairing credential to delete.
    2. Log in to the vRealize Operations Manager Web interface. You can use the launch link in the management domain's details screen to open the log in screen.
    3. From the vRealize Operations Manager Web interface's Home screen, navigate to the Credentials screen by clicking Administration > Credentials.
    4. Locate the pairing credential having a name in the form of vdi-view-adapter-IPaddress, where the IP address matches the one you obtained in step 1. For example, if the Horizon Connection server has IP address 10.11.39.51, the displayed pairing credential name is vdi-view-adapter-10.11.39.51.
    5. Select that pairing credential and delete it.
    6. In the workflow's status screen, restart the failed workflow using the Restart Workflow action.
  • In a Cloud Foundation environment configured with L3 uplinks, when you try to create a workload domain with a data center (external) connection using the same subnet but a different VLAN as a workload domain that was previously created and deleted, the workload domain creation fails
    When a workload domain is deleted and your environment’s ToR switch uplinks are configured with L3, the Switched Virtual Interfaces (SVIs) that were originally created on the ToR switches for that workload domain are not deleted. Due to this issue, if you subsequently try to create a workload domain using a different VLAN ID but same subnet as the deleted one, the workload domain creation fails because the switches do not allow two VLAN IDs with the same subnet.

    Workaround: When creating a VI or VDI workload domain, in the data center connection’s configuration, do not combine a different VLAN ID with a subnet that was previously used for a deleted workload domain. You can reuse the same VLAN with the same subnet or reuse the same VLAN with a different subnet.

  • When creating a VDI workload domain with specified settings that results in the system deploying two vCenter Server instances, the creation workflow might fail at the "ESXI: Incremental LI Integration" task
    Depending on the value for the Max Desktops [per vCenter Server] setting in the VDI Infrastructure screen and your choice for the number of desktops in the VDI workload domain creation wizard, the system might need to deploy more than one vCenter Server instance to support the desired VDI workload domain. As part of this deployment, the system starts two VI workload domain creation workflows. One of the VI workload domain creation workflows might fail at the task "ESXI: Incremental LI Integration" with an error message about failure to connect ESXi hosts to vRealize Log Insight:

    hosts to LogInsight failed : HTTP error code : 404 : Response :
    

    Workaround: Use the Physical Resources screens to verify that the ESXi hosts that the failed workflow is trying to use are all up and running. Use the vSphere Web Client to verify that the vRealize Log Insight VMs in the system are all up and running. Ensure that the ESXi hosts involved in the failed workflow and the vRealize Log Insight VMs are in a healthy state, and then click Retry in the failed workflow.

  • UEM agents are not installed as part of the VMware UEM installation from Windows template

    The VDI workflow installs VMware UEM but UEM agents are not installed in that process.

    Workaround: Power on the Windows template, install UEM, then power off the template. Redeploy the desktop. The agents should be successfully installed.

  • Desktop VM creation fails with error "Cloning of VM vm-8-1-45 has failed..."

    During VDI workload creation, the VDI VM fails with error "Cloning of VM vm-8-1-45 has failed: Fault type is INVALID_CONFIGURATION_FATAL - Failed to generate proposed link for specified host: host-122 because we cannot find a viable datastore".

    Workaround: On the affected desktop pool, as administrator re-enable provisioning through the Horizon console.

  • You cannot disassociate a workload domain from a vRealize Automation endpoint.

    After you associate a workload domain with a vRealize Automation endpoint, you cannot disassociate it.

    Workaround: There is no workaround.

  • NSX Edge upgrade failed on VDI workload domain.

    During the process of upgrading from Cloud Foundation 2.2.x to 2.3.0, the update operation to NSX 6.3.5 fails with the message {"details":"Failed to deploy edge appliance edge-1-jobdata-532-0.","errorCode":10020,"moduleName":"vShield Edge"}.

    Workaround: Redeploy the NSX Edge.

    1. Go to the Networking & Security > NSX Edges page in the vSphere Web Client.
    2. Locate the Edge (https://rack-1-vc-1.vcf.vmware.corp/sdk) and click Redeploy Edge.
    3. Restart the upgrade.

    If this workaround fails, you may need to delete and redeploy the Edge.

  • VI and VDI workload domains show pending tasks after upgrade to version 2.3.

    VI and VDI workflows created in Cloud Foundation version 2.2 show pending tasks after the upgrade to 2.3.

    Workaround: None. These tasks can be ignored.

  • Network expansion for VDI fails at Import DHCP Relay Agents task.

    This issue has been observed when expanding network settings for a VDI workload domain with an internal DHCP. When creating the new connection (Settings > Network Settings > Data Center), the user may have saved the new network connection configuration before associating the domains for the new connection.

    Workaround: You can avoid this bug by saving the new connection configuration after assigning domains. For example, use the following workflow:

    1. In SDDC Manager, go to Settings > Network Settings > Data Center.
      By default, the Data Center page opens to the New Connection form.
    2. Enter the settings for the new subnet.
    3. Associate the domains for the new connection.
    4. Save the new connection configuration.

    When you try the network expansion operation again, it should succeed.

Life Cycle Management (LCM) Known Issues
  • While bundle download is in progress, the myvmware login icon displays the error icon

    Sometimes, while an update bundle download is in progress, the error icon (an exclamation point in a yellow triangle) displays next to the myvmware login icon at the top of the Lifecycle Management: Repository page.

    Workaround: None. In this case, you can ignore the icon.

  • vCenter upgrade from 6.0.0-5326079 to 6.5.0-6671409 on the IaaS domain succeeds but the vSAN HCL is not updated.

    For a detailed description and workaround for this issue, see the Knowledge Base article vSAN Health Service - Hardware compatibility - vSAN HCL DB Auto Update.

  • Management vCenter certificate replacement fails

    The Management vCenter SSL certificate replacement fails with the message "Error while reverting certificate for store : MACHINE_SSL_CERT". The certificate manager fails to register the service on the vcenter-1 instance. This is due to a rare vCenter issue.

    Workaround: Wait and retry until the process succeeds. It may require several tries.

  • LCM incorrectly shows both VCF and VMware bundles as available for upgrade.

    After upgrading to 2.2.1, LCM should display only the VCF bundle as available for upgrade on the Management Domain. LCM should display the VMware bundle (vCenter, ESXi, NSX, etc.) as available only after the VCF bundle has been applied. The risk is that a user might apply the VMware bundle on an out-of-date version of VCF.

    Workaround: Always apply the VCF bundle first when multiple bundles are shown as available on the Management Domain.

  • ESXi and vCenter update on a host might fail in the task of exiting maintenance mode
    Sometimes during an ESXi and vCenter update process, a host might fail to exit maintenance mode, which results in a failed update status. During an update, the system puts a host into maintenance mode to perform the update on that host, and then tells the host to exit maintenance mode after its update is completed. At that point in time, a separate issue on the host might prevent the host from exiting maintenance mode.

    Workaround: Attempt to exit the host from maintenance mode through the vSphere Web Client (a command-line alternative is sketched after these steps).

    • Locate the host in vSphere and right-click it.
    • Select Maintenance Mode > Exit Maintenance Mode.

      This action will list any issues preventing the host from exiting maintenance mode.

    • Address the issues until you can successfully bring the host out of maintenance mode.
    • Return to the SDDC Manager and retry the update.
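
    If the vSphere Web Client is unavailable, a hedged alternative, assuming SSH access to the host, is to exit maintenance mode from the ESXi command line:

    esxcli system maintenanceMode set --enable false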
  • LCM Inventory page shows a failed domain, but no failed components

    The LCM Inventory page shows a failed domain, but does not show any failed components.

    Workaround: Log in to vCenter for the domain and check that all hosts in the domain have the lcm-bundle-repo datastore available. If necessary, mount the hosts to the lcm-bundle-repo datastore, located at 192.168.100.46:/mnt/lcm-bundle-repo.
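
    A hedged example of mounting the datastore from an ESXi host's command line, assuming the NFS export shown above (adjust the server address and share to match your environment):

    esxcli storage nfs add --host=192.168.100.46 --share=/mnt/lcm-bundle-repo --volume-name=lcm-bundle-repo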

  • ESXi VUM-based update failure

    ESXi VUM-based update fails with "Failed tasks during remediation" and "Failed VUM tasks" messages. This failure is most likely due to a collision between simultaneous VUM-based tasks. This issue will soon be resolved by improved VUM integration.

    Workaround: Retry.

  • Skipped host(s) during ESXi upgrade prevent users from continuing the upgrade process 

    If during the ESXi upgrade process one or more hosts are skipped, the user is unable to continue the rest of the upgrade. However, this is expected behavior because the entire bundle must be applied for the upgrade to be successful.

    Workaround: Retry and complete all skipped host upgrades to unblock the remaining upgrades.

  • Non-applicable LCM update bundles show status Pending

    LCM update bundles that were released for earlier versions of VMware Cloud Foundation appear on the LCM Repository page with a status of PENDING. They should not appear at all.

    Workaround: None. Please ignore the bundles and the status.

  • ESXi update fails at stage ESX_UPGRADE_VUM_STAGE_TAKE_BACKUP.

    An ESXi update may fail during firmware backup. This may be due to a missing /tmp/scratch/downloads folder.

    Workaround: Using SSH, access the ESXi host and create a downloads folder in the /tmp/scratch directory.
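
    A minimal example, assuming SSH access to the host:

    mkdir -p /tmp/scratch/downloads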

  • The get-bmc-passwords script fails to get credentials for add hosts after rack decommissioning.

    After upgrade to 2.3, Cloud Foundation does not manage OOB ports and the lookup-passwords command may not retrieve OOB passwords. The get-bmc-passwords script can retrieve the passwords after upgrade, but not after the rack containing the hosts has been decommissioned.

    Workaround: None. This is expected behavior. See also the Updated Installation and Upgrade Information section in this document.

  • LCM updates not available after bundle download.

    In some cases, LCM updates may not appear available after bundle download. Similarly, the Inventory page may not show the management domain information. Also, the Update details page may show domain as not available. The most likely cause is that one or more domains, including the management domain, is in a failed state. As a result, the updates are not shown as available.

    Workaround: If the management domain does not appear or is in a failed state:

    • Verify if a domain expansion operation is in progress or has failed. If so, complete the workflow for the operation.
    • Verify if a vRealize Automation or vRealize Operations deployment is in progress or has failed.
      • If in progress, wait for deployment to complete.
      • If failed, delete the existing instance and redeploy.

    If a workload domain does not appear or is in a failed state:

    • Verify if the workload domain is currently being created or expanded. If so, retry and complete the pending workflows.
  • Upgrade preview page does not show the correct version of the upgrade component.

    When using LCM version aliasing, the upgrade preview page shows the required version that is set on the bundle used for the upgrade. It does not show the actual version number of the component. This may cause confusion on the part of the user.

    Workaround: None.