
VMware Cloud Foundation 3.8.1 | 3 SEP 2019 | Build 14487798

VMware Cloud Foundation is a unified SDDC platform that brings together VMware ESXi, vSAN, NSX, and optionally vRealize Suite components into a natively integrated stack to deliver enterprise-ready cloud infrastructure for the private and public cloud. The Cloud Foundation 3.8.1 release continues to expand on SDDC automation, the VMware SDDC stack, and the partner ecosystem.

NOTE: VMware Cloud Foundation 3.8.1 must be installed as a new deployment or upgraded from VMware Cloud Foundation 3.8. For more information, see Installation and Upgrade Information below.

What's in the Release Notes

The release notes cover the following topics:

What's New

The VMware Cloud Foundation 3.8.1 release includes the following:

  • Automated deployment of VMware Enterprise PKS: Enables the automated deployment and the configuration of VMware Enterprise PKS on an NSX-T workload domain.
  • Dual Authentication Support: Provides the two-factor authentication for the password-related APIs in SDDC Manager.
  • Enhanced Phone Home Capability: Provides the capability to turn on phone home data for SDDC Manager deployed solutions with a single click.
  • BOM Updates for the 3.8.1 Release: Updated Bill of Materials that incorporates the new product versions.
  • Bugs and Security Issues: Resolved critical bugs and fixed security issues.

Cloud Foundation Bill of Materials (BOM)

The Cloud Foundation software product comprises the following software Bill of Materials (BOM). The components in the BOM are interoperable and compatible.

Software Component | Version | Date | Build Number
Cloud Builder VM | | 3 SEP 2019 |
SDDC Manager | 3.8.1 | 3 SEP 2019 |
VMware vCenter Server Appliance | vCenter Server 6.7 Update 3 | 20 AUG 2019 |
VMware ESXi | ESXi 6.7 Update 3 | 20 AUG 2019 |
VMware vSAN | 6.7 Update 3 | 20 AUG 2019 |
VMware NSX Data Center for vSphere | 6.4.5 | 18 APRIL 2019 |
VMware NSX-T Data Center | 2.4.2 Patch 1 | 10 AUG 2019 |
VMware Enterprise PKS | 1.4.1 | 20 JUN 2019 | n/a
VMware vRealize Suite Lifecycle Manager | 2.1 Patch 2 | 02 JUL 2019 |
VMware vRealize Log Insight | 4.8 | 11 APR 2019 | 13036238
vRealize Log Insight Content Pack for NSX for vSphere | 3.9 | n/a | n/a
vRealize Log Insight Content Pack for Linux | 1.0 | n/a | n/a
vRealize Log Insight Content Pack for vRealize Automation 7.3+ | 2.2 | n/a | n/a
vRealize Log Insight Content Pack for vRealize Orchestrator 7.0.1+ | 2.0 | n/a | n/a
vRealize Log Insight Content Pack for NSX-T | 3.7 | n/a | n/a
vSAN Content Pack for Log Insight | 2.0 | n/a | n/a
vRealize Operations Manager | 7.5 | 11 APR 2019 | 13165949
vRealize Automation | 7.6 | 11 APR 2019 | 13027280
VMware Horizon 7 | 7.9.0 | 25 JUN 2019 | 13956742


  • vRealize Log Insight Content Pack for vRealize Automation is installed during the deployment of vRealize Automation.
  • vRealize Log Insight Content Pack for vRealize Orchestrator is installed during the deployment of vRealize Automation.
  • vRealize Log Insight Content Pack for NSX-T is installed alongside the deployment of the first NSX-T workload domain.
  • VMware Solution Exchange and the vRealize Log Insight in-product marketplace store only the latest versions of the content packs for vRealize Log Insight. The software components table contains the latest versions of the packs that were available at the time VMware Cloud Foundation released. When you deploy the VMware Cloud Foundation components, it is possible that the version of a content pack within the in-product marketplace for vRealize Log Insight is newer than the one used for this release.

VMware Software Edition License Information

The SDDC Manager software is licensed under the Cloud Foundation license. As part of this product, the SDDC Manager software deploys specific VMware software products.

The following VMware software components deployed by SDDC Manager are licensed under the Cloud Foundation license:

  • VMware ESXi
  • VMware vSAN
  • VMware NSX Data Center for vSphere

The following VMware software components deployed by SDDC Manager are licensed separately:

  • VMware vCenter Server
    NOTE Only one vCenter Server license is required for all vCenter Servers deployed in a Cloud Foundation system.
  • VMware vRealize Automation
  • VMware vRealize Operations
  • VMware vRealize Log Insight and content packs
    NOTE Cloud Foundation permits limited use of vRealize Log Insight for the management domain without purchasing full vRealize Log Insight licenses.

For details about the specific VMware software editions that are licensed under the licenses you have purchased, see the Cloud Foundation Bill of Materials (BOM) section above.

For more general information, see VMware Cloud Foundation.

Supported Hardware

For details on vSAN Ready Nodes in Cloud Foundation, see VMware Compatibility Guide (VCG) for vSAN and the Hardware Requirements section in the VMware Cloud Foundation Planning and Preparation Guide.


To access the Cloud Foundation 3.8.1 documentation, go to the VMware Cloud Foundation product documentation.

To access the documentation for VMware software products that SDDC Manager can deploy, see the product documentation and use the drop-down menus on the page to choose the appropriate version.

Browser Compatibility and Screen Resolutions

The Cloud Foundation web-based interface supports the following web browsers:

  • Google Chrome: Version 75.x or 74.x
  • Internet Explorer: Version 11
  • Mozilla Firefox: Version 67.x or 66.x

For the Web-based user interfaces, the supported standard resolution is 1024 by 768 pixels. For best results, use a screen resolution within these tested resolutions:

  • 1024 by 768 pixels (standard)
  • 1366 by 768 pixels
  • 1280 by 1024 pixels
  • 1680 by 1050 pixels

Resolutions below 1024 by 768, such as 640 by 960 or 480 by 800, are not supported.

Installation and Upgrade Information

You can install Cloud Foundation 3.8.1 as a new release or upgrade from VMware Cloud Foundation 3.8.

In addition to the release notes, see the VMware Cloud Foundation Upgrade Guide for information about the upgrade process.

Installing as a New Release

The new installation process has three phases:

Phase One: Prepare the Environment

The VMware Cloud Foundation Planning and Preparation Guide provides detailed information about the software, tools, and external services that are required to implement a Software-Defined Data Center (SDDC) with VMware Cloud Foundation, using a standard architecture model.

Phase Two: Image all servers with ESXi

Image all servers with the ESXi version mentioned in the Cloud Foundation Bill of Materials (BOM) section. See the VMware Cloud Foundation Architecture and Deployment Guide for information on installing ESXi.

Phase Three: Install Cloud Foundation 3.8.1

Refer to the VMware Cloud Foundation Architecture and Deployment Guide for information on deploying Cloud Foundation.

Upgrade to Cloud Foundation 3.8.1

You can upgrade to Cloud Foundation 3.8.1 only from 3.8. If you are at a version earlier than 3.8, refer to the 3.8 Release Notes for information on how to upgrade from the prior releases.

For information on upgrading to 3.8.1, refer to the VMware Cloud Foundation Upgrade Guide.



Resolved Issues

  • SDDC Manager users are not able to log in through SSH and DCUI after migration

    After migration, if the root and vcf users have passwords containing any of the following special characters, they cannot log in through SSH or DCUI.

    & * { } [ ] ( ) / \ ' " ` ~ , ; : . < >
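    A quick way to screen candidate passwords for these characters before migration is a small shell check. This is an illustrative sketch, not part of SDDC Manager tooling; the function name is made up.

```shell
#!/bin/sh
# Illustrative check (not a VCF tool): flag passwords containing any of
# the special characters listed above, which break SSH/DCUI login after
# migration.
has_unsafe_chars() {
  # The bracket expression covers: & * { } [ ] ( ) / \ ' " ` ~ , ; : . < >
  printf '%s' "$1" | grep -q "[]&*{}()/\\'\"\`~,;:.<>[]"
}

if has_unsafe_chars 'MyP@ss{word}'; then
  echo "password contains characters that break SSH/DCUI login"
fi
```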


  • The password rotation or update operation fails to update new passwords of the vRealize Automation account in the vRealize Automation Adapter in vRealize Operations Manager

    This symptom occurs when the user configures the vRealize Automation adapter with externally managed accounts. The vRealize Operations Manager Suite API expects the sysadmin and superuser credentials to always be passed in its update adapter API. Because the external account details are not known to VMware Cloud Foundation, password management cannot pass them in the update call. It is recommended that the vRealize Automation adapter be configured with the VMware Cloud Foundation managed accounts that are configured by default after deployment.


  • While trying to add a cluster to an NSX-T based domain, the NSX-T Add Cluster operation fails at the Create Transport Node Collection Action task

    While adding a cluster to an NSX-T domain, the NSX-T VIBs are installed on every new host that will be part of the new cluster. Intermittently, the HTTPS service on one of the NSX-T Manager nodes goes down, causing the NSX installation to fail on one of the hosts.


  • The "CPU, Memory and Storage" dashboard widget shows the incorrect unit for memory

    The "CPU, Memory and Storage" dashboard widget currently shows GB instead of TB for memory.


  • For any workload domain with NSX-T Managers, the corresponding cluster page in the SDDC Manager UI shows the error sign for VLAN ID

    As part of adding a cluster to a workload domain, the user provides a VLAN ID to be used by NSX-T for the overlay network.
    After the successful deployment of the cluster, the cluster summary page has a field that shows this VLAN ID.
    Due to this issue, the user cannot see the VLAN ID on the summary page.


  • NSX-V Workload domain creation may fail at the NSX-V Controller deployment

    The workload domain creation workflow may fail at NSX-V Manager deploying controllers to the new domain. This issue occurs because the NSX Controllers do not get an IP address because the VM NIC is disconnected.


  • Second NSX-T workload domain creation fails at "Creating NSXT Overlay Segment" or "Creating NSXT VLAN Segment"

    This issue may occur while deploying a second NSX-T workload domain.


Known Issues

The known issues are grouped as follows.

Bringup Known Issues
  • Clicking the help icon in the bring-up wizard opens the help for an older release.

    The help icon in the bring-up wizard links to the product help for an older version.

    Workaround: To open the help topic on deploying Cloud Foundation, perform the following steps:

    1. In a browser window, navigate to

    2. Click Browse All Products and then click VMware Cloud Foundation.

    3. Click VMware Cloud Foundation Architecture and Deployment Guide.

    4. In the left navigation pane, click Deploying Cloud Foundation.

  • Cloud Foundation Builder fails to initiate with the "[Admin/Root] password does not meet standards" message

    When configuring the Cloud Foundation Builder admin and root passwords, the format restrictions are not validated, so a user may create a password that does not comply with the restrictions. Cloud Foundation Builder then fails upon initiation.

    Workaround: When configuring the Cloud Foundation Builder, ensure that the password meets the following restrictions:

    • Minimum eight characters long
    • Must include both uppercase and lowercase letters
    • Must include digits and special characters
    • Must not include common dictionary words
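    These format rules (aside from the dictionary-word check) can be verified up front with a small shell sketch; the function name is illustrative and not part of Cloud Foundation Builder.

```shell
#!/bin/sh
# Illustrative pre-check of the Cloud Foundation Builder password rules
# listed above (the common-dictionary-word rule is not covered here).
check_builder_password() {
  pw="$1"
  [ "${#pw}" -ge 8 ]                         || { echo "too short"; return 1; }
  printf '%s' "$pw" | grep -q '[A-Z]'        || { echo "needs an uppercase letter"; return 1; }
  printf '%s' "$pw" | grep -q '[a-z]'        || { echo "needs a lowercase letter"; return 1; }
  printf '%s' "$pw" | grep -q '[0-9]'        || { echo "needs a digit"; return 1; }
  printf '%s' "$pw" | grep -q '[^A-Za-z0-9]' || { echo "needs a special character"; return 1; }
  echo "format OK"
}

check_builder_password 'VMware123!'
```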
  • The bring-up process fails at task disable TLS 1.0 on the vRealize Log Insight nodes

    The bring-up fails at the task that disables TLS 1.0 on the vRealize Log Insight nodes with the following error: Connect to [/] failed: Connection refused (Connection refused). This issue has been observed in slow environments after restarting a vRealize Log Insight node. The node does not start correctly and its API is not reachable.

    Workaround: Use the following procedure to work around this issue.

    1. Restart the failed bring-up execution in the Cloud Foundation Builder VM and open the bring-up logs.
      This retries the failed bring-up task, which might still fail on the initial attempt. The log shows an unsuccessful connection to the vRealize Log Insight node.
    2. While bring-up is still running, use SSH to log in to the vRealize Log Insight node that is shown as failed in the bring-up log.
    3. Run the following command to determine the connection issue.
      loginsight-node-2:~ # service loginsight status
      It should confirm that the daemon is not running.
    4. Execute the following command:
      loginsight-node-2:~ # mv /storage/core/loginsight/cidata/cassandra/data/system ~/cassandra_keyspace_files
    5. Reboot the vRealize Log Insight node.
    6. Confirm that it is running.
      loginsight-node-2:~ # uptime
      18:25pm up 0:02, 1 user, load average: 3.16, 1.07, 0.39
      loginsight-node-2:~ # service loginsight status
      Log Insight is running.

    In a few minutes, the bring-up process should successfully establish a connection to the vRealize Log Insight node and proceed.

  • The Cloud Foundation Builder VM remains locked after more than 15 minutes.

    The VMware Imaging Appliance (VIA) locks out the user after three unsuccessful login attempts. Normally the lockout resets after fifteen minutes, but the underlying Cloud Foundation Builder VM does not reset it automatically.

    Workaround: Using SSH, log in as admin to the Cloud Foundation Builder VM, then switch to the root user. Unlock the account by resetting the failed login counter for the admin user with the following command.
    pam_tally2 --user=<user> --reset

  • Validation fails on SDDC Manager license.

    During the bringup process, the validation for the SDDC Manager license fails.

    Workaround: Enter a blank license and proceed. You can enter the correct license value later in the process.

  • During bring-up, the component sheet lists the wrong NSX version.

    During the bring-up process, detailed information about the components to be deployed is displayed. This display incorrectly shows NSX version 6.4.3; the actual version being deployed is NSX 6.4.4.

    Workaround: None

  • Cloud Foundation Builder: Restart of imaging service fails.

    During the host imaging operation, the imaging service (imaging.service) fails when restarted.

    Workaround: If you encounter this issue, perform the following procedure:

    1. Stop the imaging service in the SDDC Manager VM.
      systemctl stop imaging.service
    2. Stop all processes related to imaging.
      ps -ef | grep imag
      kill <process_number>
    3. Wait five seconds.
      sleep 5
    4. Start the imaging service.
      systemctl start imaging.service
      It should restart correctly.
  • NSX-V Workload domain creation may fail at NSX-V Controller deployment

    The workload domain creation workflow may fail when NSX-V Manager deploys the controllers to the new domain. The domain remains unusable until the workaround is applied.

    Workaround: The user must reboot each of the ESXi hosts in the cluster of the new domain and restart the workflow.

  • Unable to mount the vCenter ISO file

    The vCenter ISO image is not found during the bring-up operation using the ISO images.

    Workaround: Unmount the Cloud Builder ISO Image and then run bring-up.

    umount /mnt/iso/
    curl -H "Content-Type: application/json" -X POST -d @<path of bringup json file>/bringup.json http://localhost:9080/bringup-app/bringup/sddcs

  • The VMware Cloud Foundation 3.8.1 Cloud Builder deployment parameter sheet shows an incorrect BOM version for PSC and vCenter

    The Cloud Builder deployment parameter sheet shows an incorrect BOM version (6.7U2a) for vCenter and PSC. The correct BOM version for VMware Cloud Foundation 3.8.1 is 6.7U3.

    Workaround: None

Upgrade Known Issues
  • The vCenter upgrade operation fails on the management domain and workload domain

    vCenter fails to upgrade because the lcm-bundle-repo NFS mount on the host is inaccessible.

    Workaround: Remove and remount the SDDC Manager NFS datastore on the affected ESXi hosts. Use the showmount command to check whether all hosts are displayed in the SDDC Manager mount list.

  • The Lifecycle Manager page displays that the update is available even after the upgrade is done

    After a successful Lifecycle Manager upgrade of vRealize Automation, vRealize Operations Manager, vRealize Log Insight, NSX, vCenter, or any other VMware Cloud Foundation component other than SDDC Manager, the Lifecycle Manager UI continues to show the just-finished upgrade as available for a few minutes, even after you refresh the browser session.

    Workaround: After the successful Lifecycle Manager upgrade of the VMware Cloud Foundation component, wait 2-5 minutes until the upgrade button disappears. Do not click the button until then.

  • Even after applying the VMware Cloud Foundation update, the bundle status still shows as future (pending)

    This issue occurs if the user downloads and uploads the bundles by using the marker file.

    Workaround: Ignore the bundle that shows as future. Run /opt/vmware/sddc-support/sos --get-vcf-summary|grep "SDDC Version" to verify the SDDC Manager version.

  • Downloading the Lifecycle Manager bundles using the bundle transfer utility fails with the "No space left on device" error

    This issue occurs because the bundles are first downloaded to the tmp directory and then moved to the provided directory path. If the tmp directory does not have sufficient space, the download fails with the "No space left on device" error.

    Workaround: Increase the size of the tmp directory.

  • The NSX-T upgrade fails with the UPGRADE_TIMEDOUT status while upgrading the NSX-T host clusters.

    While upgrading host clusters that are part of the NSX-T fabric, the NSX-T upgrade can time out if the hosts in the cluster are overloaded or the cluster is large.

    Workaround: Add the nsxt.upgrade.hostcluster.timeout property to the Lifecycle Manager properties and set it to an appropriate value in milliseconds.

    For example:
    nsxt.upgrade.hostcluster.timeout=72000000 (sets it to 20 hours).

  • The NSX-T upgrade fails with the COMPLETED_WITH_FAILURE status while retrying a failed upgrade

    When multiple setup issues cause upgrade attempts to fail, the upgrade can still fail at the NSX_T_UPGRADE_STAGE_SET_UPGRADE_PAYLOAD stage even after the root cause is resolved. This is because NSX-T changes the UC location in the background.

    Workaround: Restart Lifecycle Manager on the SDDC Manager VM as the root user (systemctl restart lcm).

  • The NSX-T Host cluster upgrade may fail with the COMPLETED_WITH_FAILURE status and any retry will fail at the same stage

    The upgrade fails with the COMPLETED_WITH_FAILURE status and you may see 'Unable to migrate VM, generic vm fault' in vCenter. NSX-T can put a host into maintenance mode, which must be exited before another upgrade attempt can be made.

    Workaround: Use the Knowledge Base article to exit the host from transport node maintenance mode, and retry the upgrade.

  • During the upgrade, all the VMware Cloud Foundation NSX-T workflows including the vRealize Log Insight enablement for NSX-T workload domains are blocked

    This issue is seen when a VMware Cloud Foundation system is upgraded to version 3.8 while NSX-T has not yet been upgraded to version 2.4.1 and the NSX-T install bundle has not been downloaded.

    Workaround: After you upgrade VMware Cloud Foundation to version 3.8, you must also upgrade NSX-T to version 2.4.1. Download the NSX-T install bundle for version 2.4.1 and retry the operation.

  • The vRealize Automation upgrade reports the "Precheck Execution Failure : Make sure the latest version of VMware Tools is installed" message

    This is a vRealize Lifecycle Manager pre-upgrade check that happens when the upgrade through vRealize Lifecycle Manager is triggered.

    Workaround: Upgrade VMware Tools on the vRealize Automation IaaS nodes.

  • The Lifecycle Manager initiated vRealize Automation upgrade fails

    When the vRealize Automation upgrade from VMware Cloud Foundation fails with the "vRA IaaS Upgrade Failed" error, the user does not have an option to complete the failed upgrade from VMware Cloud Foundation Lifecycle Manager.


    Workaround:

    1) On vRealize Lifecycle Manager, find the failing request.

    2) Retry the failing request. 
        The vRealize Lifecycle Manager upgrade workflow continues from the point of failure and completes successfully.
        The vRealize Automation environment details shows that the vRealize Automation version is upgraded to 7.6.0.

    3) Retry the failed vRealize Automation upgrade in SDDC Manager. This validates the current vRealize Automation version and the health of the vRealize Automation environment, and marks the upgrade flow as completed successfully in Lifecycle Manager.

  • Error upgrading vRealize Automation

    Under certain circumstances, upgrading vRealize Automation may fail with a message similar to:

    An automated upgrade has failed. Manual intervention is required.
    vRealize Suite Lifecycle Manager Pre-upgrade checks for vRealize Automation have failed:
    vRealize Automation Validations : iaasms1.rainpole.local : RebootPending : Check if reboot is pending : Reboot the machine.
    vRealize Automation Validations : iaasms2.rainpole.local : RebootPending : Check if reboot is pending : Reboot the machine.
    Please retry the upgrade once the upgrade is available again. 

    Workaround:

    1. Log in to the first VM listed in the error message using RDP or the VMware Remote Console.
    2. Reboot the VM.
    3. Wait 5 minutes after the login screen of the VM appears.
    4. Repeat steps 1-3 for the next VM listed in the error message.
    5. Once you have restarted all the VMs listed in the error message, retry the vRealize Automation upgrade.

  • When there is no associated workload domain to vRealize Automation, the VRA VM NODES CONSISTENCY CHECK upgrade precheck fails

    This upgrade precheck compares the content in the logical inventory on the SDDC Manager and the content in the vRealize Lifecycle Manager environment. When there is no associated workload domain, the vRealize Lifecycle Manager environment does not contain information about the iaasagent1.rainpole.local and iaasagent2.rainpole.local nodes. Therefore the check fails.

    Workaround: None. You can safely ignore a failed VRA VM NODES CONSISTENCY CHECK during the upgrade precheck. The upgrade will succeed even with this error.

  • The vRealize Operations upgrade fails at the vRealize upgrade prepare backup step

    When the vRealize Operations cluster is intensively used, the process of taking it offline in order to prepare snapshots as a backup could take a long time and therefore, may hit a timeout in SDDC Manager.

    Workaround: Prepare the backup manually following the steps:

    1. Take vRealize Operations cluster offline
      1. Log in to the master node vRealize Operations Manager administrator interface of your cluster.
      2. On the main page, click System Status. Click Take Offline under Cluster Status.
      3. Wait until all nodes in the analytics cluster are offline.
    2. Take a snapshot of each node so that you can roll the update back if a failure occurs.
      1. Log in to the Management vCenter Server.
      2. Take a snapshot of each node in the cluster. Right-click the virtual machine and select Snapshot > Take Snapshot. Use vROPS_LCM_UPGRADE_MANUAL_BACKUP as a prefix in the snapshot name for each virtual machine.
      3. Once this is done, retry the vRealize Operations upgrade through the SDDC Manager UI.
  • During the expansion of the vRealize Operations cluster, the API calls targeted to the new node hang if the vRealize Operations cluster is upgraded from version 7.0 to version 7.5, as part of the VMware Cloud Foundation upgrade to version 3.8

    The expansion of the vRealize Operations cluster will fail unless the Apache service on the new node VA is restarted during the expansion process.

    Workaround:

    1) Trigger the expansion and monitor the deployment of the new node VA.
    2) Once the new node VA is deployed, log in to the vRealize Operations UI and wait for the new node to appear for configuration under the cluster management tab. Then log in to the new node (with the password provided during the expansion input, or the default master node password if a new password was not provided) and execute the command:

      tail -f /var/log/casa_logs/casa.log

    3) During the expansion, the new node VA reboots itself. The SSH session will disconnect from the VA upon that event.
    4) Log in again after the successful reboot of the VA (the reboot process can be monitored in the vSphere UI), wait for about 5 minutes and execute:

         service apache2 restart

  • The operations manager component fails to come up after RPM upgrade.

    After manually upgrading the operations manager RPM to the latest version, the operationsmanager service fails to come up. The system returns the INFO-level message: Waiting for changelog lock... This is likely caused by overlapping restarts of the service preventing any restart from succeeding. This can happen to any service that uses Liquibase, such as commonsvcs.

    Workaround: Clean the databasechangeloglock table from the database.

    1. Log in to the SDDC Manager VM as admin user "vcf".
    2. Enter su to switch to root user.
    3. Run the following commands:
      1. Connect to the password_manager database and delete the database changelog lock:
        # psql -h /home/postgresql/ -U postgres -d password_manager -c "delete from databasechangeloglock"
      2. Restart the operationsmanager component:
        # systemctl restart operationsmanager
      3. Verify the operationsmanager is running:
        # curl http://localhost/operationsmanager/about
        It should return something like:
        "","description":"Operations Manager"}
  • The config drift upgrade bundle consistently fails

    The config drift upgrade bundle fails if there are any non-responding hosts in the free pool.

    Workaround: Decommission the non-responding hosts and retry the available upgrade.

  • On a new VMware Cloud Foundation 3.8.1 deployment, the NSX-V instance does not show up on the Patching or the Update tab

    On a freshly deployed VMware Cloud Foundation 3.8.1 setup, for any domain (including the management domain) with NSX-V components, the NSX-V components do not show up on the domain page in the UI.

    Workaround: None. This is a cosmetic issue. The NSX-V bundles are still visible for upgrade, and the upgrade can still be performed, although the upgrade element names may not show up during the upgrade.

vRealize Integration Known Issues
  • vRealize Operations in vRealize Log Insight configuration fails when vRealize Operations appliances are in a different subdomain

    During vRealize Suite deployment in Cloud Foundation, the user provides FQDN values for vRealize load balancers. If these FQDNs are in a different domain than the one used during initial bringup, the deployment may fail.

    Workaround: To resolve this failure, you must add the vRealize Operations domain to the configuration in the vRealize Log Insight VMs.

    1. Log in to the first vRealize Log Insight VM.
    2. Open the /etc/resolv.conf file in a text editor, and locate the following lines:
      domain vrack.vsphere.local
      search vrack.vsphere.local vsphere.local 
    3. Add the domain used for vRealize Operations to the last line above.
    4. Repeat on each vRealize Log Insight VM.
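    Step 3 above can be scripted with sed. The sketch below operates on a scratch copy so it can be tried safely; on each vRealize Log Insight VM the target would be /etc/resolv.conf (back it up first), and vrops.example.local is a placeholder for your actual vRealize Operations domain.

```shell
#!/bin/sh
# Demonstrate step 3 on a scratch copy; on the Log Insight VM the target
# would be /etc/resolv.conf (back it up first). "vrops.example.local" is
# a placeholder for your vRealize Operations domain.
demo=/tmp/resolv.conf.demo
cat > "$demo" <<'EOF'
domain vrack.vsphere.local
search vrack.vsphere.local vsphere.local
EOF

# Append the vRealize Operations domain to the existing "search" line
sed -i 's/^search .*/& vrops.example.local/' "$demo"
cat "$demo"
```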
  • SoS network cleanup utility fails on hosts in a two-VDS switch configuration.

    This utility is not supported in a two-VDS switch configuration in the current release.

  • Certificate replacement for the vRealize Automation component fails with 401 error

    Certificate replacement for the vRealize Automation component fails due to a 401 unauthorized error with the message "Importing certificate failed for VRA Cafe nodes." This issue is caused by a password lockout in the vRealize Automation product interface. For example, independently of Cloud Foundation, a user tried to log in to vRealize Automation with the wrong credentials too many times, causing the lockout.

    Workaround: The lockout period lasts for thirty minutes, after which the certificate replacement process can succeed.

  • vRealize Automation integration fails with NSX-T workload domain.

    NSX-T does not yet support vRealize Automation integration.

    Workaround: None.

  • The vRealize port group is configured with the incorrect teaming policy.

    The vRealize network is configured with the Route based on originating virtual port load balancing policy. This can cause uneven traffic distribution between the physical network interfaces.

    Workaround: Manually update the NIC teaming policy in the vCenter UI:

    1. Log in to vCenter.
    2. Navigate to Networking.
    3. Locate the Distributed Switch in the management vCenter.
    4. Right-click the vRack-DPortGroup-vRealize port group and select Edit Settings.
    5. Select Teaming and failover.
    6. Change Load balancing from Route based on originating virtual port to Route based on physical NIC load.
    7. Click OK and verify that the operation has completed successfully.
  • The password update for vRealize Automation and vRealize Operations Manager may run indefinitely or fail when the password provided by the user contains the special character "%".

    Password management uses the vRealize Lifecycle Manager API to update the passwords of vRealize Automation and vRealize Operations Manager. When the special character "%" appears in the SSH, API, or Administrator credentials of the vRealize Automation or vRealize Operations Manager users, the vRealize Lifecycle Manager API hangs and does not respond to password management. After a 5-minute timeout, password management marks the operation as failed.

    Workaround: Retry the password update operation without the special character "%", and ensure that the passwords of all other vRealize Automation accounts do not contain the "%" character.
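    Candidate passwords can be screened for "%" before submitting the update; a trivial illustrative sketch (the function name is made up, not part of SDDC Manager):

```shell
#!/bin/sh
# Illustrative pre-check (not part of SDDC Manager): reject candidate
# vRealize passwords containing "%" before submitting an update.
contains_percent() {
  case "$1" in
    *%*) return 0 ;;
    *)   return 1 ;;
  esac
}

if contains_percent 'New%Passw0rd'; then
  echo "contains %: choose a different password"
fi
```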

  • vRealize Operations expand will fail after upgrade if vRealize Operations Install bundle is not downloaded

    The vRealize Operations expand task fails after the vRealize Operations upgrade to version 7.5 if the vRealize Operations install bundle (for version 7.5) has not been downloaded. The vRealize Operations Analytics Cluster expansion through the vRealize Suite Lifecycle Manager task will fail.

    Workaround:

    1. Perform a rollback on the failed expand operation.
    2. Download vRealize Operations install bundle for VMware Cloud Foundation 3.8.
    3. Start the vRealize Operations expand task afresh.

Networking Known Issues
  • Platform audit for network connectivity validation fails

    The vSwitch MTU is set to the same MTU as the VXLAN VTEP MTU. However, if the vSAN and vMotion MTU are set to 9000, then vmkping fails.

    Workaround: Modify the nsxSpecs settings in the bring-up JSON by setting VXLANMtu to a jumbo MTU, because the vSwitch is set with the VXLAN MTU value. This prevents the error seen in the platform audit.

  • NSX Manager is not visible in the vSphere Web Client.

    In addition to NSX Manager not being visible in the vSphere Web Client, the following error message displays on the NSX Home screen: "No NSX Managers available. Verify current user has role assigned on NSX Manager." This issue occurs when the permission is not correctly configured in vCenter Server for the account that is logged in.

    Workaround: To resolve this issue, follow the procedure detailed in Knowledge Base article 2080740 "No NSX Managers available" error in the vSphere Web Client.

  • The east-west traffic between the workloads behind different T1 is impacted when the communication happens over their private IP addresses(NON-NAT IP addresses)

    In NSX-T 2.4.2, the SNAT rule takes effect between the Tier-0 and Tier-1 routers, causing the traffic to be SNATed twice: once when the traffic egresses to the destination, and again when the traffic returns from the destination. This leads to the workload dropping the traffic. The issue affects any NSX-T traffic matching this pattern.

    Workaround: Follow the steps in

SDDC Manager Known Issues
  • Unable to delete VI workload domain enabled for vRealize Operations Manager from SDDC Manager.

    Attempts to delete the vCenter adapter also fail, and return an SSL error.

    Workaround: Use the following procedure to resolve this issue.

    1. Create a vCenter adapter instance in vRealize Operations Manager, as described in Configure a vCenter Adapter Instance in vRealize Operations Manager.
      This step is required because the existing adapter was deleted by the failed workload domain deletion.
    2. Follow the procedure described in Knowledge Base article 56946.
    3. Restart the failed VI workload domain deletion workflow from the SDDC Manager interface.
  • The get operation for the certificate API returns a 500 error when Microsoft CA is not configured in SDDC Manager

    The get operation for the /security/ca/configuration API returns a 500 error when Microsoft CA is not configured.

    Workaround: Configure Microsoft CA before performing the get operation on the /security/ca/configuration API.
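    A quick way to observe this from the SDDC Manager VM is to read the HTTP status code before interpreting the response. This is a minimal sketch: the live curl call appears only in a comment (endpoint as described in this note), and a sample status value stands in for it.

```shell
#!/bin/sh
# Sketch: interpret the HTTP status returned by the certificate API.
# On a live SDDC Manager the status would come from:
#   status=$(curl -s -o /dev/null -w '%{http_code}' http://localhost/security/ca/configuration)
status=500   # sample value standing in for the live call

case "$status" in
  200) msg="Microsoft CA is configured" ;;
  500) msg="Microsoft CA is not configured; configure it before calling this API" ;;
  *)   msg="Unexpected status: $status" ;;
esac
echo "$msg"
```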

Workload Domain Known Issues
  • The vSAN HCL database does not update as part of workload domain creation

    When you create a workload domain, the vSAN HCL database should update as part of the process, but the update does not occur. As a result, the database moves into a CRITICAL state, as observed from vCenter.

    Workaround: Manually update the vSAN HCL database as described in Knowledge Base article 2145116.

  • Adding host fails when host is in a different VLAN

    Adding a host to a workload domain cluster should succeed even though the new host is on a different VLAN than the other hosts in the cluster, but the operation fails.

    Workaround:
    1. Before attempting to add a host, add a new portgroup to the VDS for the cluster.
    2. Tag the new portgroup with the VLAN ID of the host to be added.
    3. Run the Add Host workflow in the SDDC Manager Dashboard.
      This will fail at the "Migrate host vmknics to dvs" operation.
    4. Locate the failed host in vCenter, and migrate the vmk0 of the host to the new portgroup you created in step 1.
      For more information, see Migrate VMkernel Adapters to a vSphere Distributed Switch in the vSphere product documentation.
    5. Retry the Add Host operation.
      It should succeed.

    NOTE: If you remove the host in the future, remember to manually remove the portgroup, too, if it is not used by any other hosts.

  • In some cases, VI workload domain NSX Manager does not appear in vCenter.

    Observed in NFS-based workload domains. Although the VI workload domain creation was successful, the NSX Manager VM is not registered in vCenter and, as a result, does not appear in vCenter.

    Workaround: To resolve this issue, use the following procedure:

    1. Log in to NSX Manager (http://<nsxmanager IP>).
    2. Navigate to Manage > NSX Management Service.
    3. Un-register the lookup service and vCenter, then re-register.
    4. Close the browser and log in to vCenter.
  • Adding a cluster to an NSX-T workload domain fails with the error message "Invalid parameter: {0}".

    If adding a cluster to an NSX-T workload domain fails with the error message "Invalid parameter: {0}", the subtask that creates logical switches has failed. This is likely due to artifacts from previously removed workload domains and clusters conflicting with the new switch creation process.

    Workaround: Delete the hosts from NSX-T Manager if they exist, and then delete the stale logical switches. The logical switches are created in the following pattern:

     ls-<UUID>-management, ls-<UUID>-vsan, ls-<UUID>-vmotion

    For all three logical switches, <UUID> remains the same, and the logical ports count is shown as "0" (zero). Delete these logical switches.
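    The stale switches can be picked out by their zero logical-port count. A minimal sketch, assuming a two-column listing of switch name and port count (the sample names below are hypothetical); on a live system the data comes from NSX-T Manager.

```shell
#!/bin/sh
# Sketch: select logical switches with zero logical ports as deletion
# candidates. The sample listing is hypothetical; obtain the real one
# from NSX-T Manager.
switches='ls-42ab-management 0
ls-42ab-vsan 0
ls-42ab-vmotion 0
ls-99ff-management 4'

# Column 1 is the switch name, column 2 the logical port count.
stale=$(printf '%s\n' "$switches" | awk '$2 == 0 {print $1}')
echo "$stale"
```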

  • Unable to delete cluster from any NSX workload domain.

    The likely cause of this error is the presence of a dead or deactivated host within the cluster. To resolve this, edit the workflow entry to remove the reference to the offending host.

    Workaround: To resolve this issue, perform the following procedure.

    1. In SDDC Manager (Inventory > Workload Domains > [workload domain] > [cluster]), try to delete the cluster in order to identify the problematic host.
    2. Using SSH, log in to the SDDC Manager VM to obtain the workflow ID.
      Run curl -s http://localhost:7200/domainmanager/workflows to return a JSON listing all workflows.
    3. Using the workflow ID, get the workflow input.
      curl -s http://localhost:7200/domainmanager/internal/vault/<workflow-id> \
      -XGET > remove_cluster_input.json
    4. Edit the remove_cluster_input.json file by removing the entry referencing the problematic host.
      1. Find the reference to the host under the heading:
      2. Delete the entire entry for the problematic host.
      3. Save the edited file as remove_cluster_input_deadhost_removed.json.
    5. Update the workflow with the new file.
      curl -s http://localhost:7200/domainmanager/internal/vault/<workflow-id> -XPUT \
      -H "Content-type: text/plain" -d @remove_cluster_input_deadhost_removed.json
    6. Return to SDDC Manager and retry the workflow.
      It will fail at the logical switches deletion task.
    7. In NSX Manager, clear the host as follows:
      1. Go to NSX > Nodes > Transport Nodes and delete the same host as a transport node.
      2. Go to NSX > Nodes > Hosts and delete the same host.
    8. Return to SDDC Manager and restart the workflow.
      It should succeed.
  • Removal of a dead host from the NSX-T workload domain fails and subsequently, the removal of the workload domain fails

    In some cases where the creation of an NSX-T workload domain fails, the subsequent attempt to delete the dead hosts fails as well. The host is disconnected from vCenter, so manual intervention is required to reconnect it. This issue is also seen when the domain is created successfully but one of the hosts subsequently goes dead.


    Workaround: If a host removal operation fails because the host is dead (that is, the host is disconnected from vCenter), perform the following steps:

    1. Log in to NSX-T Manager.
    2. Identify the dead host under Fabric > Nodes > Transport Nodes.
    3. Edit the transport node and select the N-VDS tab.
    4. Under Physical NICs, assign vmnic0 to the uplink.
    5. From SDDC Manager, retry the failed domain deletion task.

  • If there is a dead host in the cluster, the subsequent task of adding a cluster or a host fails.

    If one of the hosts of the workload domain goes dead and the user tries to remove the host, the task fails. The host is then set to the deactivating state without providing an option to forcefully remove it.

    Workaround: Bring the dead host back to normal state, after which the add-cluster and add-host tasks succeed.

  • NSX-T workload domain creation fails at the 'Join vSphere hosts to NSX-T Fabric' task

    When the NSX-T domain creation workflow fails at the "Join vSphere hosts to NSX-T Fabric" task, it is generally due to the failure of the NSX-T installation on one of the hosts. In the NSX-T Manager web portal, the installation failure is clearly indicated in the host view.

    This happens intermittently. When it happens, the failure is not detected until some later point, and the task eventually fails after a long wait.

    Workaround: The user has to log in to the NSX-T Manager, check the host that has the NSX-T installation failure, and delete it from the fabric.

  • Deletion of an additional domain does not delete the NSX-T VIBs

    This issue is intermittent. You cannot repurpose the same host for workload domain creation because it still has the older VIBs.

    Workaround: Clean up the NSX-T VIBs manually. Keep retrying the removal task for the NSX-T VIBs until it succeeds.
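    Whether a host still carries the older VIBs can be checked before attempting to repurpose it. A minimal sketch: the sample listing below is hypothetical, and on the ESXi host the real output would come from `esxcli software vib list`.

```shell
#!/bin/sh
# Sketch: check a host for leftover NSX VIBs before reusing it. On the
# ESXi host the live listing would come from:
#   esxcli software vib list | grep -i nsx
vib_list='nsx-esx 2.4.2.0
vsan 6.7.0-3.89'   # hypothetical sample standing in for the live query

if printf '%s\n' "$vib_list" | grep -qi nsx; then
  result="NSX VIBs still present: clean the host before reuse"
else
  result="Host clean"
fi
echo "$result"
```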

  • The SoS clean-up fails in cleaning up the NSX-T consumed hosts

    The issue occurs when removing the NSX-T related VIBs from the ESXi hosts. As a result, the SoS cleanup fails at the network cleanup stage.

    Workaround: Refer to step 4 in the Remove a Host From NSX-T or Uninstall NSX-T Completely section in the VMware NSX-T Installation Guide.

  • The VI workload domain restart task fails

    The VI domain creation fails with the "Image with product type vCenter with version 6.7.0-13010631 and image type INSTALL is not found" error. When you upload the install bundle and restart the task, the restart also fails with the same error.

    Workaround: Start a new VI domain creation task.

  • The workload domain creation and/or cluster addition may fail at the NSX host preparation phase

    If the ESXi hosts that are used to create a workload domain or cluster have not been fully cleaned or are reporting LiveInstallationError for any reason, the workload domain creation and/or cluster addition may fail at the NSX host preparation because EAM is not able to install the NSX VIBs on the ESXi hosts.

    Workaround: Reboot the hosts, ensure that EAM does not show LiveInstallationError and restart the workflow from the UI.

  • If the user alters or modifies elements created by VMware Cloud Foundation (port groups, segments), subsequent workflows in the system may be impacted.

    Do not modify or alter any elements (port groups, logical segments) created by VMware Cloud Foundation.

    Workaround: Identify the modified or changed elements and restore them exactly as they were created by VMware Cloud Foundation. This requires manual intervention.

  • A vCenter Server that has gone through certificate rotation cannot be accessed from any VDI infrastructure instance

    VMware Cloud Foundation does not support the rotation of certificates on the VDI-associated workload domains.

    Workaround: You can find the workaround for this issue at

  • When the user tries to deploy partner services on a VMware Cloud Foundation deployed NSX-T workload domain, the "Configure NSX at cluster level to deploy Service VM" error appears

    On a VMware Cloud Foundation deployed NSX-T workload domain, the user will not be able to deploy partner services like McAfee, Trend, and so on.

    Workaround: Attach the Transport node profile back to the cluster and try deploying the partner service. After the service is deployed, detach the Transport node profile from the cluster.

  • The delete compute collection task during the workload domain deletion fails

    The deletion of a workload domain fails when the user tries to delete an NSX-T based workload domain that has VMs running on it.

    Workaround:


    a. Delete all the configuration deployed on top of the workload domain (Edge VMs, segments, routers, and so on).
    b. Update the transport node profile with the correct VMkernel adapters and their port group names:
    vmk0 - <mgmt port group name>
    vmk1 - <mgmt port group name>
    vmk2 - <mgmt port group name>
    c. Log in to the NSX-T UI and configure NSX with the above profile and current compute collection.
    d. Retry the failed workflow.

  • The NSX-T workload domain creation fails

    The NSX-T workload domain creation task fails with the "Configure Backup Schedule Task" error.

    Workaround: Wait for about five minutes and then restart the failed task.

  • The unstretch workflow fails with a vSAN error during the host maintenance task

    During the unstretch operation, the storage policies are updated, but you must explicitly reapply the policy and run the vSAN compliance check to avoid a vSAN error during the host maintenance task.

    Workaround:

    1. Re-enable stretch.
    2. Reapply the updated policy and run the vSAN compliance check and health check. After running these checks, the VMs become compliant.
    3. Disable stretch and remove the fault domains.
    4. Restart the task that failed during the unstretch operation. The task successfully puts the host in maintenance mode.

  • The remove cluster operation fails for the partially created cluster

    The cluster creation fails while validating the network connectivity of the hosts. This occurs when the NSX-T add cluster operation inserts the inventory details of the new cluster into the database but fails to create the cluster in the other areas of the VMware Cloud Foundation system (for example, vCenter and NSX-T). Triggering the remove cluster operation for the partially created cluster also fails, at the "Gather input for NVDS to VDS migration from vCenter and NSX-T manager" task.

    Workaround:
    1. Run the following command:

       curl -k -X GET http://localhost/inventory/clusters

    2. Find the <cluster-id> in the above response for the cluster that needs to be deleted in the backend.

    3. Delete cluster from inventory:

        curl -k -X DELETE http://localhost/inventory/extensions/vi/clusters/<cluster-id>

    4. Decommission the hosts that were part of the cluster.

    5. Run the SoS cleanup for the hosts that were part of the cluster.

    6. Commission the hosts that were part of the cluster.
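    The lookup-and-delete portion of the steps above can be sketched as follows. The inventory response shown is a hypothetical sample (field names assumed from the endpoints in this note); grep and cut keep the sketch dependency-free, though a JSON tool is preferable in practice.

```shell
#!/bin/sh
# Sketch: find the cluster ID in the inventory response and build the
# delete call. On a live system the response would come from:
#   curl -k -X GET http://localhost/inventory/clusters
response='[{"id":"c1-1111","name":"good-cluster"},{"id":"c2-2222","name":"partial-cluster"}]'

# Pull the id that precedes the partially created cluster's name.
cluster_id=$(printf '%s' "$response" \
  | tr ',' '\n' \
  | grep -B1 'partial-cluster' \
  | grep -o '"id":"[^"]*"' \
  | cut -d'"' -f4)
echo "$cluster_id"
# Then delete it from the inventory (live call, for reference):
#   curl -k -X DELETE "http://localhost/inventory/extensions/vi/clusters/$cluster_id"
```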

  • Even after the host has been removed through the workflow, the vCenter Server still displays it in the inventory.

    When you try to remove a host from a cluster, the operation is marked as successful but the vCenter Server still has an entry for this host.

    Workaround:


    1. Log in to the management vCenter Server.
    2. Select the host that you tried to remove.
    3. Right-click and select Remove from inventory.
  • When the witness ESXi version does not match the host ESXi version in the cluster, a vSAN cluster partition may occur

    Currently, the vSAN stretch cluster workflow does not validate the ESXi version of the witness host. If the witness ESXi version does not match the host version in the cluster, a vSAN cluster partition can occur.

    Workaround:


    1. Upgrade the witness host manually to the matching ESXi version using the vCenter VUM functionality.
    2. Alternatively, replace or redeploy the witness appliance with a matching ESXi version.
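    The witness/host version match can be verified ahead of time. A minimal sketch: on each host the real value would come from `vmware -v` over SSH, and the build strings below are placeholders.

```shell
#!/bin/sh
# Sketch: compare the witness ESXi build with a cluster host's build
# before stretching. Placeholder values stand in for `vmware -v` output.
witness_version="VMware ESXi 6.7.0 build-14320388"
host_version="VMware ESXi 6.7.0 build-13006603"

if [ "$witness_version" = "$host_version" ]; then
  result="Witness matches cluster hosts"
else
  result="Version mismatch: upgrade or redeploy the witness before stretching"
fi
echo "$result"
```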

  • vSAN partition and critical alerts are generated when the witness MTU is not set to 9000

    If the MTU of the witness switch in the witness appliance is not set to 9000, a vSAN stretch cluster partition occurs.

    Workaround: Set the MTU of the witness switch in the witness appliance to 9000.
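    On the witness host, the current MTU can be checked and corrected with `esxcfg-vswitch`. A minimal sketch: the live commands appear only in comments (the switch name `witnessSwitch` is an assumption based on the appliance defaults), and a sample value stands in for the live query.

```shell
#!/bin/sh
# Sketch: verify the witness switch MTU before relying on the stretch
# cluster. Live commands on the witness ESXi host (switch name is an
# assumption):
#   esxcfg-vswitch -l                      # list switches with current MTU
#   esxcfg-vswitch -m 9000 witnessSwitch   # set jumbo MTU
required_mtu=9000
current_mtu=1500   # sample value standing in for the live query

if [ "$current_mtu" -lt "$required_mtu" ]; then
  result="MTU $current_mtu is below $required_mtu: vSAN stretch cluster may partition"
else
  result="MTU OK"
fi
echo "$result"
```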

  • Currently, the migration from NSX-V to NSX-T is not supported

    Migration from NSX-V to NSX-T is not supported in VMware Cloud Foundation environments.

    Workaround: Deploy a new NSX-T workload domain.

  • The PKS deployment fails with the "Unable to create pks user ubuntu" error

    The deployment fails due to the DNS or the IP address mismatch.

    Workaround: Fix the DNS configuration and restart the workflow.

Security Operations Known Issues
  • Updating password policy failure results in UPDATE message when it should be FAILED

    If the password policy update fails, the system shows an UPDATE status and the transaction history shows the message "Operation failed in 'appliance update', for credential update." In actuality, the operation has FAILED because the new password does not meet policy requirements. A more appropriate message would read "Password update has failed due to unmet policy requirements" and recommend reviewing the policy.

    Workaround: Review the password policy for the component in question and modify the password configuration as necessary, and try again to update.

  • Unable to perform password management operations from the SDDC Manager Dashboard

    If the Cloud Foundation Operations Manager component is restarted while a password management operation is in progress, the password management operation ends up in an INCONSISTENT state. You will not be able to perform any password management operations until you cancel the INCONSISTENT operation.

    Workaround:

    1. Log in to the SDDC Manager VM as the vcf user using SSH.
    2. Type su to switch to the root account.
    3. Run curl http://localhost/security/password/vault/transactions | json_pp
      This returns information about the INCONSISTENT operation. For example:
         "transactionStatus" : "INCONSISTENT",
         "workflowId" : "f6548e76-f3e9-4033-801d-36ccae893672",
         "transactions" : [
               "oldPassword" : "AX1are276!",
               "username" : "administrator@vsphere.local",
               "entityName" : "psc-1.vrack.vsphere.local",
               "id" : 123,
               "newPassword" : "x%5N6H1A^p%wJ4N",
               "entityType" : "PSC",
               "credentialType" : "SSO",
               "workflowId" : "0daaac30-c88d-4407-8bf3-c791541ebbae",
               "transactionStatus" : "INCONSISTENT",
               "timestamp" : "2019-04-09T09:00:29.628+0000",
               "transactions" : [
         "type" : "ROTATE",
         "id" : 1
    4. Using the ID of the INCONSISTENT transaction ("id" : 1 in the example above), run the following:
      curl -X DELETE http://localhost/security/password/vault/transactions/<ID> | json_pp
      This returns something like the following:
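    Step 4 above can be sketched as follows, pulling the transaction ID out of the listing before issuing the DELETE. The JSON fragment is a sample mirroring the output shown above.

```shell
#!/bin/sh
# Sketch: extract the ID of the INCONSISTENT transaction and build the
# cancel call. The fragment mirrors the listing shown above; live data
# comes from:
#   curl http://localhost/security/password/vault/transactions | json_pp
fragment='"type" : "ROTATE",
"id" : 1'

txn_id=$(printf '%s\n' "$fragment" | grep -o '"id" : [0-9]*' | grep -o '[0-9]*$')
echo "$txn_id"
# Then cancel it (live call, for reference):
#   curl -X DELETE "http://localhost/security/password/vault/transactions/$txn_id" | json_pp
```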
Known Issues Affecting Service Providers
  • Domain manager workflows fail when using SDDC Manager to manage an API-created cluster or domain.

    If you have a cluster or domain that was created through the API, and you try to manage it through the SDDC Manager dashboard, the workflow will fail. This affects the following domain manager workflows: Add/Remove Host, Add/Remove VI Workload Domain, and Add/Remove Cluster.

    Workaround: None. Clean up the failed workflow and try again using the API.