
VMware Cloud Foundation 2.3.1 Release Notes

VMware Cloud Foundation 2.3.1 | 06 MARCH 2018 | Build 7898339

VMware Cloud Foundation is a unified SDDC platform that brings together the VMware vSphere, vSAN, NSX, vRealize Suite, and Horizon components into a natively integrated stack to deliver enterprise-ready cloud infrastructure for the private and public cloud. The Cloud Foundation 2.3.1 release continues to expand on SDDC automation, the VMware SDDC stack, and the partner ecosystem.

These release notes update the VMware Cloud Foundation 2.3.0 Release Notes.

What's in the Release Notes

The release notes cover the following topics:

  • What's New
  • Installation and Upgrade Information
  • Resolved Issues
  • Known Issues

What's New

The VMware Cloud Foundation 2.3.1 release includes the following:

  • vRealize Automation and vRealize Operations are deployed on a separate VLAN to enable disaster recovery (DR).
  • The new Config Insight utility enables you to see changes to your system's baseline configuration.
  • Support for the VMware Customer Experience Improvement Program (CEIP), which provides VMware with information that helps improve its products and services and resolve issues.
  • Unassigned host validation enables you to upgrade or downgrade an available host when adding it to a workload domain. (An unassigned host is one not yet assigned to a workload domain.)
  • Expanded Upgrade Pre-Check feature. The Pre-Check feature can now be run from the SDDC Manager Dashboard. It allows you to pre-check readiness of all components for upgrade and to drill down to view detailed status of each pre-check task.
  • New hardware compatibility support:
    • Cisco UCS M5 server.
    • HPE DL380 Gen10 rackmount servers.
    • HPE Synergy Platform Gen10 servers.

    NOTE: On HPE Synergy Platform Gen10 servers, some workarounds may be required for VDI workload domains to be fully functional. For complete documentation related to Cloud Foundation on HPE Synergy, see the Hewlett Packard Enterprise Information Library.

For detailed release and build information, see LCM Upgrade Bundles below.

Installation and Upgrade Information

You install Cloud Foundation 2.3.1 directly as a new release. For a fresh installation of this release:

  1. Read the VIA User's Guide for guidance on setting up your environment, deploying VIA, and imaging an entire rack.
  2. Read the Cloud Foundation Overview and Bring-Up Guide for guidance on deploying the VMware Cloud Foundation software Bill-of-Materials (BOM) stack.

For instructions on upgrading to Cloud Foundation 2.3.1, see Lifecycle Management in the Administering VMware Cloud Foundation guide.

Supported Upgrade Paths

The following upgrade path is supported:

  • 2.3.0 to 2.3.1 through Lifecycle Management in SDDC Manager

Updated Cloud Foundation Upgrades in 2.3.1

The Cloud Foundation 2.3.1 software BOM contains the VMware software components described in the table below. This upgrade and patch bundle is hosted on the VMware Depot site and available via the Lifecycle Management feature in SDDC Manager. See Upgrade Cloud Foundation to 2.3.1 and Lifecycle Management in the Administering VMware Cloud Foundation guide.

For the rest of the BOM information, see the VMware Cloud Foundation 2.3.0 Release Notes.

Software Component                   | Version | Date        | Build Number
VMware Cloud Foundation Bundle       | 2.3.1   | 06 MAR 2018 | 7898339
VMware Imaging Appliance (VIA)       | 2.3.1   | 06 MAR 2018 | 7897130
vRealize Bundle for Cloud Foundation | 2.4     | 06 MAR 2018 | 7847577
VMware Horizon                       | 7.3.2   | 06 MAR 2018 | 7161471

Preparing for Upgrade

Before upgrading, see Upgrade Cloud Foundation to 2.3.1 in the Administering VMware Cloud Foundation guide.

For more information on upgrading to 2.3.1, see Patching and Upgrading Cloud Foundation in the Administering VMware Cloud Foundation guide.

Resolved Issues

  • VDI Creation failing at task vCenter: Enable vSAN.

    While creating a new VDI, the workflow fails during the vCenter: Enable vSAN task.

    Fixed in this release.

  • The VIA user interface shows unsupported ToR and inter-rack switches as available for selection.

    The VIA user interface shows unsupported Quanta and Dell ToR and inter-rack switches as available for selection. These include Quanta ToR models Quanta_LY8-x86 and Quanta_LY8-ppc, and inter-rack model Quanta_LY6; and Dell ToR model Dell_S4000 and inter-rack model Dell_S6000.

    Only switches manufactured by Cisco Systems, Inc. or Arista Networks, Inc. are officially supported.

    Fixed in this release.

  • Health-check checks wrong IP for the spine.

    Health-check is checking the wrong IP for the spine, resulting in a ping failure or switch down error. This behavior has been observed during bring-up and during Add Rack operations. The problem is that health check is looking at outdated IP addresses.

    Fixed in this release.

Known Issues

The known issues are grouped as follows.

Imaging Known Issues
  • TOR imaging fails at Setup port configurations task.

    Sometimes the TOR switch imaging process fails during the port configurations task with an "auth" error, leaving the admin user's authentication in an incorrect state. As a result, VIA cannot access the switch using the default password.

    Workaround: Use the following steps to resolve this issue:

    1. Reset the switch password. Please refer to the vendor documentation.
    2. Clean up the switch.
    3. Re-image the switch separately in VIA.
  • Modified bundle section in VIA interface not displaying correctly.

    Observed in the Firefox browser. The interface elements of the Bundle tab in VIA do not display correctly. This is caused by older versions of JavaScript cached in the browser.

    Workaround: Use Ctrl+F5 to force a refresh. The Bundle tab should display correctly.

  • Upgraded hosts reset to original version of ESXi after domain deletion.

    If a host has been upgraded to a higher version of ESXi and then assigned to a new workload domain, it may reset to its original version of ESXi when the workload domain is deleted. This requires the user to repeat the upgrade process before reassigning the host.

    Workaround: None.

    To prevent this from happening again, use the following procedure.

    1. Upgrade to the latest version of VIA.
    2. Decommission the desired hosts and re-image in VIA to change the host image permanently.

    NOTE: When you re-image, verify that the new image is the same image running on the hosts in the Management Domain.

Bring-Up Known Issues
  • Alerts raised during POSV do not contain a rack name in their description.

    Because the rack name is not specified by the user until after the IP allocation step in the system configuration wizard, a rack name is not available to display in alerts raised prior to that step. Alerts that are raised subsequent to that step do contain the user-specified rack name.

    Workaround: None.

  • The bring-up process on the first rack fails at task "NSX: Register vCenter with error NSX did not power on on time".

    The bring-up process fails because the NSX Controller virtual machines did not power on during the wait time set in the NSX: Register vCenter task.

    Workaround: On the Bring-Up Status page, click Retry to proceed with the bring-up process.

  • Google Chrome browser crashes for no known reason during bring-up.

    The Chrome browser sometimes crashes when left open during bring-up, displaying the "Aw Snap something went wrong while displaying the webpage" message. Bring-up is unaffected. This is presumed to be a browser issue, not a Cloud Foundation issue.

    Workaround: Reload the web page.

  • Bring-up failed at the PostInventoryToVrmAdapter task.

    Sometimes the bring-up process fails at the PostInventoryToVrmAdapter task because it is unable to connect to the Zookeeper service, even though Zookeeper is running.

    Workaround: Log into the SDDC Manager Controller VM to stop and restart the Zookeeper service.

    systemctl stop zookeeper
    systemctl start zookeeper
    
  • Bring-up fails during the BackupStateAndBootback operation.

    Sometimes bring-up can fail if some services crash on the ESXi host, resulting in large dumps filling up space in the /store/var/core directory. The BackupStateAndBootback task then has insufficient space to save backups as part of the bring-up process. The log shows a HOST_STATE_BACKUP_FAILED error state.

    Workaround: Log into the ESXi host and delete the dump files from the /store/var/core directory, then retry bring-up.
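
    A minimal sketch of this cleanup from an SSH session on the ESXi host (list the directory first and review its contents before deleting anything):

      # review the core dump files consuming space, then remove them
      ls -lh /store/var/core
      rm /store/var/core/*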

  • Bring-up fails during NSX wire creation operation.

    Sometimes the bring-up process fails during the NSX configuration operation with the error NSX_VIRTUAL_WIRE_CREATION_POST_VALIDATION_FAILED.

    Workaround: Retry the operation. It should succeed the second time.

  • SPINE_SWITCH_DOWN_ALERT shown during Add Rack process.

    The SPINE_SWITCH_DOWN_ALERT is triggered if the monitoring operation calls network-health check while the switch-reip operation is in progress. This is a false alert and sometimes clears on its own. If not, use the workaround below.

    Workaround: In the vSphere Web Client, clear the SPINE_SWITCH_DOWN_ALERT manually during or after the Bringup operation.

Multi-Rack Bring-Up Known Issues
  • Add Rack (Rack 2) operation fails, but rack hosts are shown as available.

    Sometimes, even though the Add Rack bring-up operation fails, the hosts on that rack appear as available in the Host Selection screen when creating workload domains. Only the hosts on the connected rack should show as available. This issue has been observed when the Add Rack bring-up failure returns the error message UPDATE_OOB_IP_ON_SWITCH_FAILED OOB IP.

    Workaround: Even though the interface allows you to select the host, when you click Next, you will see an error message that the requested IaaS resource cannot be found. You can return to the Host Selection page and select a valid host from Rack 1.

Post Bring-Up Known Issues
  • When you use the vSphere Web Client to view the vCenter Server clusters associated with the management domains or workload domains, you might see alarms related to vSAN HCL.

    As described in KB article 2109262, the vSAN Health Service has a built-in Hardware Compatibility List (HCL) health check that uses a JSON file as its HCL database, to inform the service of the hardware and firmware that is supported for vSAN. These alarms are raised if the HCL health check fails. Because the set of supported hardware and firmware is constantly being updated, if the check fails, the first step is to obtain the most recent vSAN HCL data and use the vSphere Web Client to update the HCL database.

    Workaround: The steps to update the vSAN HCL database are described in KB article 2145116.

  • The standard vCenter Server alarm named "License inventory monitoring" is raised for overprovisioning of the ESXi hosts, even though the ESXi hosts have the appropriate license key applied.

    Under the standard licensing terms for the VMware Cloud Foundation product, all of the ESXi hosts in a Cloud Foundation installation are licensed using the same key. In the vCenter Server Licenses pane in the vSphere Web Client, the Product column for this key shows the associated product name VMware vSphere 6 Enterprise for Embedded OEMs (CPUs). Under the VMware licensing terms, that type of key is allowed to be overprovisioned. However, due to this issue, when vCenter Server sees this key as overprovisioned, it incorrectly raises the standard vSphere "License inventory monitoring" alarm. You can view the alarm definition in the vSphere Web Client by selecting the vCenter Server object in the left-hand navigation area, clicking the Manage tab > Alarm Definitions, and clicking License inventory monitoring in the list.

    Workaround: None. Ignore these vCenter Server license inventory monitoring alarms about the overprovisioning of license capacity of the ESXi hosts in your Cloud Foundation installation.

  • Update/upgrade attempt on unmanaged host fails with error: "UPGRADE_SPEC_INVALID_DATA; The ESX host is managed by vCenter server with IP: x.x.x.x".

    If an update/upgrade of a selected unmanaged host fails with the above message, retrying also fails because the API expects unmanaged hosts not to be managed by vCenter Server. After the initial failure, the host remains attached to vCenter Server, which causes subsequent attempts to fail.

    Workaround: Before retrying, remove the host from the vCenter inventory. Verify that the host is present in the SDDC Manager free-pool capacity and displays a healthy status.

  • Recreate VI workload domain or VDI workload domain blocked.

    Sometimes you cannot recreate a VI workload domain or VDI workload domain after it has been deleted post-bring-up.

    Workaround: Clear the error Alerts, and retry.

  • During imaging, the host loses network connectivity but the Imaging tab indicates that it has been successfully imaged.

    If DHCP is not enabled on the server BMC before imaging, the server is not accessible on the network.

    Workaround: If DHCP was not enabled on the server BMC before imaging, follow the steps below.

    1. Log in to the management switch.
    2. Open the /etc/network/interfaces file.
    3. Scan the configuration settings for the auto iface entries.

      All the entries should show bridge-access 4. For example:

      auto swp6
      iface swp6
          bridge-access 4
      
    4. If any entries show bridge-access 3, change the bridge-access value to 4.
    5. Save and close the /etc/network/interfaces file.

      This should restore network access to the host and its iDRAC.
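
      If the host is still unreachable after you save the file, you may need to apply the updated configuration. Assuming the management switch runs Cumulus Linux, that is typically done with:

        ifreload -a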

  • Configuration wizard redisplays after successful vROPs deployment.

    This is caused by the wrong or duplicate license key having been entered into the vROPS configuration wizard.

    Workaround: Use a valid license key. As a temporary alternative, when specifying the license key, select the Evaluation License Key option. This allows vROps to deploy successfully. You can verify the correct license key in the vROps interface after deployment.

  • Dell Arista Rack: Bring-up Host selection screen shows incorrect port information.

    Sometimes when configuring the rack, the Host Selection screen displays the ESXi port information instead of the neighboring ToR port information.

    Workaround: Ignore the incorrect port information. This is a display issue only.

  • Network down alerts received although the switches are up.

    The system returns "Alert - Operational status is down for ToR switch" alerts although the switches are up. The most likely cause is that the same VLAN is used by both the DC network and the Management workload domain bring-up operation.

    Workaround: Configure separate VLANs for the DC and Management workload domain bring-up.

  • vSAN Policy not applied post-Bringup and WLD Creation

    After bringup, all VMs deployed in the management workload domain are shown as non-compliant with the vSAN Storage Policy.

    Workaround: Manually reapply the vSAN policy. For each affected VM:

    1. Locate the VM in the vSphere Web Client.
    2. Right-click the VM and choose VM Policies.
    3. Select Reapply Storage Policy.
SDDC Manager Known Issues
  • SDDC Manager has no interface for verifying and modifying the configured DNS forwarders.

    SDDC Manager currently provides no user interface where you can view, verify, and possibly modify the DNS forwarder configuration. Ideally, the VRM UI would expose these DNS forwarders, for example under its settings.

    Workaround: You can access and modify these configurations in the unbound.conf file, as shown in the example after the following steps.

    1. Log in to the SDDC Manager Controller VM.
    2. Open the /etc/unbound/unbound.conf file.
    3. Verify or modify the configured DNS forwarders.
    4. Save and close the /etc/unbound/unbound.conf file.
    5. Reboot the SDDC Manager to verify configuration persistence.

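    For reference, DNS forwarders in unbound are defined by forward-zone entries. The entry below is an illustration only; the zone name and IP address are placeholders, so match it against what you actually find in your unbound.conf:

      forward-zone:
          name: "."
          # placeholder address of the upstream DNS server
          forward-addr: 10.0.0.250
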
  • Unable to disassociate a network from a VI/VDI workload domain.

    There is no apparent functionality in SDDC Manager to disassociate a network from a workload domain.

    Workaround: Use the --delete-dc-nw option in the SoS tool. See the Cloud Foundation documentation.

  • After the SDDC Manager times out in your browser session and displays the login screen, when you try to log in after a few hours, an error message about the required user name and password is displayed instead of the expected message about the expired SAML request.

    Authentication to the SDDC Manager uses SAML (Security Assertion Markup Language). When the SDDC Manager is idle for a period of time, it automatically logs you out and displays the login screen. The URL in the browser holds the original SAML authentication request. After a longer period of time, on the order of hours, the SAML authentication request expires, by design. As a result, if you return to the screen without refreshing the browser session to get a new SAML authentication request, the request fails by design. However, instead of an error message informing you of the expired SAML request, an error message stating "User name and password are required" is displayed.

    Workaround: If you encounter this issue, open a new browser session to the virtual IP address of your SDDC Manager, such as https://vrm.subdomain.root-domain:8443/vrm-ui, as described in the Administering VMware Cloud Foundation Guide.

  • Add Host/Add Rack interface prevents you from adding a host or rack when a previous Add Rack/Add Host bring-up task has failed.

    The Add Host and Add Rack interfaces do not allow you to add a host or a rack if a previous Add Host or Add Rack bring-up is currently in a failed state. Adding racks, and adding hosts to a rack, should be independent of other racks.

    Workaround: None. This issue may be addressed as a design issue in a future release.

  • An expansion workflow that involves adding more than one ESXi host to a management or workload domain is marked successful, even though when the hosts were added to the domain's vCenter Server cluster, the NSX Manager Host Preparation process failed to complete on one or more hosts.
    During an expansion workflow, the hosts are added to the vCenter Server cluster that underlies the management or workload domain. When hosts are added to a vCenter Server cluster that has NSX enabled on the cluster, one of the tasks involves preparing the newly added hosts, as described in the Prepare Hosts on the Primary NSX Manager topic in the NSX 6.2 documentation. Part of this host preparation process involves a scan of each added ESXi host prior to installing the required NSX software on that host. If the scan on a particular host fails for some transient reason, the NSX Manager host preparation process fails for that host. However, this failure condition is not reported to the expansion workflow and the workflow appears as successful in the SDDC Manager.

    Workaround: When performing an expansion workflow that involves multiple hosts and the SDDC Manager indicates the workflow has completed, perform the following steps to verify that the NSX host preparation was successful for each added host, and if not, resolve the issues reported by NSX.

    1. Using the vSphere Web Client, log in to the vCenter Server instance for the management or workload domain that was expanded.
    2. In the vSphere Web Client, examine the NSX Manager host preparation state by navigating to Networking & Security > Installation and clicking the Host Preparation tab.
    3. On the Host Preparation tab, expand the cluster if it is not already expanded, and examine the data reported for each host in the Installation Status column and VXLAN column:
      • If the Installation Status column reports green checkmarks and "Configured" in the VXLAN column for all hosts, the added hosts were successfully prepared.
      • If the Installation Status column displays "Not Ready" and the corresponding VXLAN column displays "Error" for a host, resolve the error by right-clicking on the VXLAN column's "Error" and clicking Resolve. This action also applies the VXLAN distributed switch port group to that host.
  • A workload domain’s workflow can fail if a VM in the management domain on which the workflow depends is in non-operational state.

    Workflows to deploy, delete, and expand workload domains can fail if some of the management domain’s virtual machines are in an invalid state, down, or temporarily inaccessible. SDDC Manager does not prevent you from initiating and submitting a workflow when one of the VMs is in an invalid state. These virtual machines include the PSC VMs, vCenter Server VMs, vRealize Operations Manager VM, vRealize Log Insight VM, NSX Manager VM, and, in a multi-rack system, the SDDC Manager Controller VM. If you submit the workflow and one of those virtual machines becomes temporarily inaccessible as the workflow is performed, the workflow will fail.

    Workaround: Before initiating a workflow, review the state of the management domain’s virtual machines to see that they are all in a valid (green) state. You can see the virtual machines by launching the vSphere Web Client from the domain details of the management domain.

  • SoS log collection and backup fail on Arista spine switches.

    Sometimes SoS log collection and backup fail on Arista spine switches. This is not observed on other spine switches.

    Workaround: Perform a manual backup of the spine switch prior to performing any FRU operations, such as replacing the spine switch.
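
    For example, on an Arista EOS spine switch you can keep a copy of the running configuration on the switch flash (the file name here is illustrative), and copy it off the switch as well if you want an external backup:

      copy running-config flash:spine-backup.cfg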

  • When using the SDDC Manager’s Uplink screen to update L3 connectivity settings, the Uplink screen does not indicate which of the ToR switches has the L3 uplink configured on it.
    When an uplink is configured to L3 mode, only one of the two ToR switches has an uplink port. The SDDC Manager does not indicate which ToR switch is connected to the upstream router.

    Workaround: When you use the Uplink screen to change uplink connectivity settings, perform the following steps.
    Note: Changing the settings triggers uplink reconfiguration on the switches. Because the reconfiguration process might take a few minutes to complete, connectivity to the corporate network might be lost during the process. To avoid losing connectivity with SDDC Manager, it is strongly recommended that you are connected to port 48 on the management switch when updating the settings using the Uplink screen.

    1. Connect to port 48 on the management switch and log in to the SDDC Manager using that connection.
    2. On the Uplink screen, configure the L3 uplink and click SAVE EDITS.
    3. Re-configure your upstream router to use the new network settings that you specified in step 2.
    4. Wait at least 3 minutes.
    5. Try connecting the upstream router to the top ToR switch.
    6. Test the new uplink connectivity by disconnecting from port 48 on the management switch and connecting to the rack with the new uplink configuration.
    7. If you are unable to reconnect to the rack, try connecting the upstream router to the bottom ToR switch.
    8. If you are unable to connect to the rack, reconnect using port 48 on the management switch and try reconfiguring your network to the original configuration.
    9. If you cannot connect to the rack with either configuration, contact VMware Support.
  • Rebooting Switches Can Crash SDDC Manager Server.

    After rebooting the ToR and inter-rack switches, the SDDC Manager interface was not accessible.

    Workaround: Restart the Tomcat server to restore the SDDC Manager server. Log into the SDDC Manager Controller VM and run the following command:
    systemctl restart vcfmanager

  • Unable to change the Datacenter Connection name on racks from the SDDC Manager interface.

    If a user wants to change the value of the Datacenter Connection name on a rack, there is no apparent way to do so from the SDDC Manager interface.

    Workaround: Log into the SDDC Manager Controller VM and delete the datacenter network using the following SoS command:
    /opt/vmware/sddc-support/sos --delete-dc-nw --dc-nw-name <datacenter_name>

    You can now create a new datacenter network with the desired name.

  • vRealize Operations and vRealize Automation tasks do not appear in SDDC Manager.

    After starting vRealize Operations in Cloud Foundation, vRealize Operations tasks do not appear in SDDC Manager. This is a temporary effect that results from the task display starting before the vRealize Lifecycle Configuration Manager and NSX Edge VMs finish deploying.

    Workaround: Allow more time for the vRealize Lifecycle Configuration Manager and NSX Edge VMs to finish deploying. Then the tasks will appear.

  • Add Rack or Add Host operation fails during Password Rotation.

    The operation fails because a domain is in a failed state. The issue occurs because the workflow allows users to run these operations even though conditions exist that will result in a failed status.

    Workaround: To prevent or resolve this issue, check the System Status page for events that prevent adding a new host or rack, such as a similar operation in progress or that has failed. For example, if you recently completed such an operation but skipped hosts during the workflow, you will encounter this issue. Resolve any such status and retry the operation.

  • Decommission of a host which is not part of a domain fails.

    Sometimes if you decommission a host that is not part of a domain, the operation fails with a message such as Host with IP address 192.168.100.109 is currently reserved by a user. Decommissioning of host is not allowed in this state. However, this message is erroneous.

    Workaround: If the operation fails and returns the Host with IP address 192.168.100.109 is currently reserved... message, wait approximately twenty minutes and retry the decommission operation.

  • SDDC Manager inventory shows incorrect vCenter version.

    Before upgrading to 2.3.x, check the version of vCenter for all workloads created after upgrading to 2.2.1. If the version of vCenter in the workloads does not match the version shown in Lifecycle > Inventory page, contact VMware support before applying 2.3.0 updates.

    To check vCenter version:

    1. Find the vCenter IP address and root password by reading /home/vrack/bin/lookup-passwords.
    2. Using SSH, log in to vCenter.
    3. Run the following commands
      /bin/appliancesh
      system.version.get
      
    4. Note the vCenter build number.
    5. Compare with the vCenter version number shown in the Lifecycle > Inventory page.

    Workaround: None. Contact VMware support.

  • In Add Host workflow in SDDC Manager, a dummy task appears on the Add Host Bringup Status page.

    Sometimes when adding a host in SDDC Manager, a phantom or dummy subtask may appear on the Add Host Bringup Status page (Settings > Add Host), labeled Not Available and with a status of Not Started.

    Workaround: This subtask is a development artifact and does not affect Cloud Foundation operation. You can ignore it.

  • After upgrading to Cloud Foundation 2.3.0, SDDC Manager shows validation errors.

    After upgrading Cloud Foundation from 2.2.1 to 2.3.0, you may see validation errors when creating VI workload domains and related screens.

    Workaround: Log out of SDDC Manager and log back in using a different browser.

  • vSAN and vMotion distributed port groups have an incorrect MTU value.

    The vSAN and vMotion distributed port groups in a VMware Cloud Foundation environment use an incorrect MTU value. You can correct this by running a script provided by VMware.

    Workaround: See the Knowledge Base article The VSAN and vMotion distributed port groups in a VMware Cloud Foundation environment have an incorrect MTU value.

  • SDDC Manager Not Notifying User of Failed Add Host Workflow.

    Sometimes when you add a host to a workload domain and the operation fails, the SDDC Manager interface does not display a failure message but instead returns to the Add Host page. This behavior indicates an underlying issue, which is addressed by the workaround.

    Workaround: When you discover the failed host, restart the HMS and SDDC Manager Controller VM services, and retry the Add Host operation.

  • LCM inventory has the incorrect ESXi versions for the nodes.

    This is due to the Modify ISO option being used to add a custom ESXi ISO to the rack imaging. When modifying ESXi versions, the new version must match the version used by the Management Workload Domain, or be a lower version that can then be upgraded to match the required version.

    Workaround: See Upgrade an Unassigned Host in the product documentation.

  • Network information missing from the Host Details page.

    Sometimes the Network Info section of the Host Details page (Dashboard > Physical Resources > Rack Details > Host Details) for a selected host is missing.

    Workaround: None.

  • Upgrade pre-check operation does not complete, keeps running.

    This has been observed when running the Upgrade Pre-Check through the SDDC Manager Dashboard. The cause is that the SDDC Manager Utility VM is not running.

    Workaround: Check in vCenter whether the SDDC Manager Utility VM is running. If it is not, restart the VM and retry the operation.

  • New The host selection controls in the SDDC Manager dashboard display incorrect memory.

    The host selection controls in the SDDC Manager dashboard display an incorrect memory value, usually 78 GB, for all nodes.

    Workaround: Use the following procedure to correct this issue.

    1. Using SSH, log in as root to the SDDC Manager Controller VM.
    2. Stop the following services.
      systemctl stop scs
      systemctl stop hms
      systemctl stop vcfmanager
    3. Move the hms-inband-simulation-X.X.X-RELEASE.jar file from the following locations to a temporary location.
      • /home/vrack/vrm/webapps/vrm-ui/WEB-INF/lib
      • /home/vrack/vrm/webapps/hms-local/WEB-INF/lib
    4. Restart the following services.
      systemctl start scs
      systemctl start hms
      systemctl start vcfmanager
  • New Unable to validate Composer service account in UPN format.

    The UPN format is not supported for the Composer service account.

    Workaround: Use domain\username format for login credentials in the Composer service.

Virtual Infrastructure Workload Domain Known Issues
  • Cisco plugin throws WARNING instead of ERROR if <subnet> is already configured on interface.

    This is caused by an IP address conflict. When the uplink is over Layer 3 (IP routing based) instead of VLAN switching, the management VLAN and VI VLAN are configured to have a Switch Virtual Interface (SVI) on both ToR switches, each with an individual IP address and a common VRRP IP (for each VLAN). When the VI is deleted, the IP addresses are not removed from the SVI. As a result, if any subsequent VI workload has the same subnet, and therefore the same IP for the SVI, the SVI creation fails.

    Workaround: You can prevent the IP address conflict as follows:

    1. After a VI workload is deleted through the UI on a setup with a Layer 3 uplink, log in to both ToR switches.
    2. Run the following command.
      configure terminal
      no interface vlan <VLAN id>

      Where <VLAN id> is the VLAN id that was given when the VI workload was created.

    3. Copy the running configuration as the startup configuration.
      copy running-config startup-config
  • A previously used free pool host might not be considered for workload deployment capacity.

    In some cases, a free pool host that was used by a VI workload domain may not be considered for deployment capacity in subsequent VI workload domains, and may be flagged with a HOST_CANNOT_BE_USED_ALERT. After the original VI workload domain is deleted, the HMS service has the wrong password, resulting in the alert status.

    Workaround: Use the following procedure to recover from this issue.

    1. From the rack inventory, identify the node IP and obtain its IPMI password by running /home/vrack/bin/lookup-password on the SDDC Manager Controller VM.
    2. Shut down the host from IPMI power control for twenty minutes.
    3. Log into SDDC Manager Controller VM as root.
    4. Access the postgres database and delete the problem_record.
      su - postgres
      psql vrm;
      delete from problem_record;
    5. Restart the vcfmanager service
      systemctl restart vcfmanager
  • New Host Selection workflow shows unhealthy hosts.

    Cloud Foundation does not presently screen unassigned hosts for health status.

    Workaround: Before selecting a host for inclusion in a domain, run the SoS command to check its health. See Supportability and Serviceability (SoS) Tool in the product documentation.
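
    As a sketch only: the health checks are run from the SoS tool on the SDDC Manager Controller VM. The --health-check option shown here is an assumption; confirm the exact option name in the SoS documentation for your release.

      # run the SoS health checks against the rack (option name assumed)
      /opt/vmware/sddc-support/sos --health-check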

  • In a Cloud Foundation environment configured with L3 uplinks, when you try to create a workload domain with a data center (external) connection using the same subnet but a different VLAN as a workload domain that was previously created and deleted, the workload domain creation fails.

    When a workload domain is deleted and your environment’s ToR switch uplinks are configured with L3, the Switched Virtual Interfaces (SVIs) that were originally created on the ToR switches for that workload domain are not deleted. Due to this issue, if you subsequently try to create a workload domain using a different VLAN ID but same subnet as the deleted one, the workload domain creation fails because the switches do not allow two VLAN IDs with the same subnet.

    Workaround: When creating a VI or VDI workload domain, in the data center connection’s configuration, do not combine a different VLAN ID with a subnet that was previously used for a deleted workload domain. You can reuse the same VLAN with the same subnet or reuse the same VLAN with a different subnet.

  • The VI workload domain creation and expansion workflows might fail at task "ConfigureVCenterForLogInsightTask" due to a failure to connect to the deployed vRealize Log Insight instance.

    During the VI workload domain creation and expansion workflows, if the system cannot connect to the deployed vRealize Log Insight instance, the workflow fails at the "ConfigureVCenterForLogInsightTask" task and you see an exception in the log with a 500 HTTP error code:
    [com.vmware.vrack.vrm.workflow.tasks.loginsight.ConfigureVCenterForLogInsightTask] Exception while doing the integration: Create session to LogInsight Failed : HTTP error code : 500

    Workaround: Restart the vRealize Log Insight's virtual machine by using the management domain's vCenter Server launch link to open the vSphere Web Client and using the vSphere Web Client user interface to restart the vRealize Log Insight's virtual machine. Then restart the failed workflow.

  • The VI workload domain creation workflow might fail at task "VC: Deploy vCenter" due to a failure to connect to the system's Platform Services Controller instances.
    During the VI workload domain creation workflow, if the system cannot connect to the integrated Platform Services Controller instances, the workflow fails at the "VC: Deploy vCenter" task and you see errors in the log such as:
    Unexpected error while verifying Single Sign-On credentials: [Errno 111]
    Connection refused
    Cannot get a security token with the specified vCenter Single Sign-On configuration.

     

    Workaround: Restart the system's PSC-2 virtual appliance, then the PSC-1 virtual appliance, then the vCenter Server virtual appliance. Wait until each virtual appliance is up and running before restarting the next one. Then restart the failed workflow.

  • On the Review page of the VI workload domain creation wizard, the Download and Print buttons are not operational.

    Due to this issue, when you reach the Review step of the VI workload domain creation wizard, you cannot use the Download or Print buttons to create a printable file of the displayed information for future reference.

    Workaround: None. At the Review step of the wizard, you must manually capture the information for future reference, for example by taking screen captures of the displayed information.

  • Dual rack VI creation fails during the vCenter: Deploy vCenter workflow.

    When creating a dual rack VI workload domain, the workflow may fail during the vCenter: Deploy vCenter task. The log may show that there is no available space for the vSAN datastore. For example:

    2017-08-17 10:40:18.146 [Thread-6839] DEBUG [com.vmware.vrack.vrm.core.local.InMemoryLogger] 
    The free space of datastore 'vsanDatastore' (0.0 GB) in host

    Workaround: Restart the VMware vCenter Server by restarting the vmware-vpxd service and retry. The process should succeed. For information about restarting this service, see the Knowledge Base Article 2109887 Stopping, starting, or restarting VMware vCenter Server Appliance 6.x services.
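
    For reference, on a vCenter Server Appliance 6.x you can restart this service from an SSH session on the appliance (KB 2109887 describes the service-control commands in detail):

      # stop and start the vCenter Server service on the appliance
      service-control --stop vmware-vpxd
      service-control --start vmware-vpxd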

  • The "Create NSX Config File" task fails in the SDDC Manager Configuration Backup workflow.

    The Cloud Foundation configuration backup fails if it is triggered while any workflow is in a running state in SDDC Manager. The backup is not automatically retried. However, configuration backups are automatically triggered after completion of any of the following operations:

    • Password rotation
    • Creation, deletion, or expansion of a workload domain
    • Addition of a host or rack
    • Host decommission

    Workaround: Configure the backup after the workload domain creation process is complete. Wait for the next automatically triggered backup or manually trigger one. See Backing Up the Cloud Foundation Configuration in the product documentation.

  • Disassociate vROPs from a VI or VDI workload domain fails at ConfigureVsanInVropsForExistingWorkloadDomain operation.

    After enabling vROPS as part of a VI or VDI workload domain expansion, attempts to disassociate vROPS from the workload domain may fail.

    Workaround: Retry the disassociate operation.

  • Third-party certificate rotation fails if any domain workflows are running.

    If you rotate third-party certificates while any workload domain workflows are in progress, the rotation fails.

    Workaround: Do not rotate third-party certificates while any workload domain workflow is running or in a failed state.

VDI Workload Domain Known Issues
  • VDI creation will fail at Register VMware Horizon View Serial Number task.

    Sometimes VDI creation fails at the Register VMware Horizon View Serial Number task, due to a Windows firewall issue on the machine where the connection server installation takes place.

    Workaround: When creating a VDI, manually set firewall rules to avoid this issue. For more information about network and port configuration in Horizon 7, see NETWORK PORTS IN VMWARE HORIZON 7 and Network Ports in VMware Horizon 7.
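
    For illustration only, a rule of the kind described in those documents can be added from an elevated command prompt on the connection server; the rule name and port below are placeholders, and the actual ports must come from the Horizon 7 network ports documentation:

      netsh advfirewall firewall add rule name="Horizon Connection Server" dir=in action=allow protocol=TCP localport=<port>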

  • ESXi update may fail on a VDI domain.

    While upgrading a VDI domain, the ESXi update may fail at the ESX_UPGRADE_VUM_STAGE_INSTALL_UPDATE stage.

    Workaround: Wait for the update to become available again and retry the upgrade.

  • Some VDI infrastructure settings can be modified even though they should be restricted.

    The following VDI infrastructure settings can be modified even though they should be restricted to prevent the user from modifying them:

    • Max Desktops per Connection Server
    • Max Desktops per Security Server
    • Max Desktops per vCenter Server
    • Desktop System Drive Size
    • Desktop System Snapshot Size

    Workaround: Do not manually modify these settings.

  • The VDI workload domain creation workflow might fail at task "Instantiate Horizon View Adapter".
    Due to intermittent timing issues, the VDI workload domain creation workflow sometimes fails at the Instantiate Horizon View Adapter task with the following exception error in the log: com.vmware.vrack.vdi.deployment.tools.tasks.VDIWorkflowException: "Unable to create vROps REST client". As a result, the pairing credential between the vRealize Operations Manager instance and the VDI environment is in a partially instantiated state and must be deleted before restarting the workflow.

    Workaround: Manually delete the pairing credential that is associated with the workload domain's Horizon Connection server, and then restart the failed workflow using the Restart Workflow action in the workflow's status screen, as follows:

    1. Verify that you have the IP address for the first Horizon Connection server that was deployed for this VDI workload domain, such as 10.11.39.51. You will use that IP address to identify which pairing credential to delete.
    2. Log in to the vRealize Operations Manager Web interface. You can use the launch link in the management domain's details screen to open the login screen.
    3. From the vRealize Operations Manager Web interface's Home screen, navigate to the Credentials screen by clicking Administration > Credentials.
    4. Locate the pairing credential having a name in the form of vdi-view-adapter-IPaddress, where the IP address matches the one you obtained in step 1. For example, if the Horizon Connection server has IP address 10.11.39.51, the displayed pairing credential name is vdi-view-adapter-10.11.39.51.
    5. Select that pairing credential and delete it.
    6. In the workflow's status screen, restart the failed workflow using the Restart Workflow action.
  • In a Cloud Foundation environment configured with L3 uplinks, when you try to create a workload domain with a data center (external) connection using the same subnet but a different VLAN as a workload domain that was previously created and deleted, the workload domain creation fails.

    When a workload domain is deleted and your environment’s ToR switch uplinks are configured with L3, the Switched Virtual Interfaces (SVIs) that were originally created on the ToR switches for that workload domain are not deleted. Due to this issue, if you subsequently try to create a workload domain using a different VLAN ID but same subnet as the deleted one, the workload domain creation fails because the switches do not allow two VLAN IDs with the same subnet.

    Workaround: When creating a VI or VDI workload domain, in the data center connection’s configuration, do not combine a different VLAN ID with a subnet that was previously used for a deleted workload domain. You can reuse the same VLAN with the same subnet or reuse the same VLAN with a different subnet.

  • When creating a VDI workload domain with specified settings that results in the system deploying two vCenter Server instances, the creation workflow might fail at the "ESXI: Incremental LI Integration" task.
    Depending on the value for the Max Desktops [per vCenter Server] setting in the VDI Infrastructure screen and your choice for the number of desktops in the VDI workload domain creation wizard, the system might need to deploy more than one vCenter Server instance to support the desired VDI workload domain. As part of this deployment, the system starts two VI workload domain creation workflows. One of the VI workload domain creation workflows might fail at the task "ESXI: Incremental LI Integration" with an error message about failure to connect ESXi hosts to vRealize Log Insight:

    hosts to LogInsight failed : HTTP error code : 404 : Response :
    

    Workaround: Use the Physical Resources screens to verify that the ESXi hosts that the failed workflow is trying to use are all up and running. Use the vSphere Web Client to verify that the vRealize Log Insight VMs in the system are all up and running. Ensure that the ESXi hosts involved in the failed workflow and the vRealize Log Insight VMs are in a healthy state, and then click Retry in the failed workflow.

  • UEM agents are not installed as part of the VMware UEM installation from Windows template.

    The VDI workflow installs VMware UEM but UEM agents are not installed in that process.

    Workaround: Power on the Windows template, install UEM, then power off the template. Redeploy the desktop. The agents should be successfully installed.

  • Desktop VM creation fails with error "Cloning of VM vm-8-1-45 has failed..."

    During VDI workload creation, the VDI VM fails with error "Cloning of VM vm-8-1-45 has failed: Fault type is INVALID_CONFIGURATION_FATAL - Failed to generate proposed link for specified host: host-122 because we cannot find a viable datastore".

    Workaround: As an administrator, re-enable provisioning on the affected desktop pool through the Horizon console.

  • NSX Edge upgrade failed on VDI workload domain.

    During the process of upgrading from Cloud Foundation 2.2.x to 2.3.x, the update operation to NSX 6.3.5 fails with the message {"details":"Failed to deploy edge appliance edge-1-jobdata-532-0.","errorCode":10020,"moduleName":"vShield Edge"}.

    Workaround: Redeploy the NSX Edge.

    1. Go to the Networking & Security > NSX Edges page in the vSphere Web Client.
    2. Locate the Edge and click Redeploy Edge.
    3. Restart the upgrade.

    If this workaround fails, you may need to delete and redeploy the Edge.

  • VI and VDI workflow domains show pending tasks after upgrade to version 2.3.0.

    VI and VDI workflows created in Cloud Foundation version 2.2 show pending tasks after upgrade to 2.3.

    Workaround: None. These tasks can be ignored.

  • Network expansion for VDI fails at Import DHCP Relay Agents task.

    This issue has been observed when expanding network settings for a VDI workload domain with an internal DHCP. When creating the new connection (Settings > Network Settings > Data Center), the user may have saved the new network connection configuration before associating the domains for the new connection.

    Workaround: You can avoid this issue by saving the new connection configuration after assigning domains. For example, use the following workflow:

    1. In SDDC Manager, go to Settings > Network Settings > Data Center.
      By default, the Data Center page opens to the New Connection form.
    2. Enter the settings for the new subnet.
    3. Associate the domains for the new connection.
    4. Save the new connection configuration.

    When you try the network expansion operation again, it should succeed.

  • VDI workload domain expansion completed but domain status still shows as EXPANDING.

    Also, the workload domain cannot be associated with vRealize Operations even though the expansion was successful. This is caused by a known issue with the Apache Cassandra DBMS.

    Workaround: Modify the record in Cassandra directly.

    1. Using SSH, log in as root to the SDDC Manager Controller VM.
    2. Open the Cassandra shell (cqlsh).
      /opt/vmware/cassandra/apache-cassandra-2.2.4/bin/cqlsh
    3. Locate the record.
      use vrmkeyspace;
      select * from domain;
    4. Locate the domain that shows a status of EXPANDING and modify it as follows.
      update domain set status = 'SUCCESS' where id=<DOMAIN-ID>;
      
  • Simultaneous execution of VDI workflow fails with mount error.

    When running more than one VDI workflow in parallel, the operations fail with the error message: com.vmware.vrack.vrm.workflow.tasks.TaskException: vCenter ISO mount failed.

    Workaround: Because only one workflow can mount the ISO image at a time, run only one VDI workflow at a time.

  • VDI Linked-Clone Workload creation fails at Wait Connection Servers.

    In some cases, the process of creating a VDI linked-clone workload fails at the Wait Connection Servers operation. If you examine the VM Summary in the vSphere Web Client, you see a pending question about connecting or disconnecting the ISO mounted to the VM.

    Workaround: Answer the question in the VM Summary to allow the operation to proceed.

Life Cycle Management (LCM) Known Issues
  • While bundle download is in progress, the myvmware login icon displays the error icon.

    Sometimes, while an update bundle download is in progress, the error icon (an exclamation point in a yellow triangle) displays next to the myvmware login icon at the top of the Lifecycle Management: Repository page.

    Workaround: None. In this case, you can ignore the icon.

  • vCenter upgrade from 6.0.0-5326079 to 6.5.0-6671409 on the IaaS domain succeeds but the vSAN HCL is not updated.

    For a detailed description and workaround for this issue, see the Knowledge Base article vSAN Health Service - Hardware compatibility - vSAN HCL DB Auto Update.

  • ESXi and vCenter update on a host might fail in the task of exiting maintenance mode.
    Sometimes during an ESXi and vCenter update process, a host might fail to exit maintenance mode, which results in a failed update status. During an update, the system puts a host into maintenance mode to perform the update on that host, and then tells the host to exit maintenance mode after its update is completed. At that point in time, a separate issue on the host might prevent the host from exiting maintenance mode.

    Workaround: Attempt to exit the host from maintenance mode through the vSphere Web Client.

    • Locate the host in vSphere and right-click it.
    • Select Maintenance Mode > Exit Maintenance Mode.

      This action will list any issues preventing the host from exiting maintenance mode.

    • Address the issues until you can successfully bring the host out of maintenance mode.
    • Return to the SDDC Manager and retry the update.
  • LCM Inventory page shows a failed domain, but no failed components.

    The LCM Inventory page shows a failed domain, but does not show any failed components.

    Workaround: Log in to vCenter for the domain and check that all hosts in the domain have the lcm-bundle-repo datastore available. If necessary, mount the datastore (NFS server 192.168.100.46, share /mnt/lcm-bundle-repo) on the hosts that are missing it.
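
    Assuming the datastore is the NFS export named above, one way to mount it on an affected host is from an SSH session on that ESXi host; adjust the names to your environment:

      # mount the lcm-bundle-repo NFS share on the host
      esxcli storage nfs add --host=192.168.100.46 --share=/mnt/lcm-bundle-repo --volume-name=lcm-bundle-repo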

  • Skipped host(s) during ESXi upgrade prevent users from continuing the upgrade process.

    If during the ESXi upgrade process one or more hosts are skipped, the user is unable to continue the rest of the upgrade. However, this is expected behavior because the entire bundle must be applied for the upgrade to be successful.

    Workaround: Retry and complete all skipped host upgrades to unblock the remaining upgrades.

  • ESXi update fails at stage ESX_UPGRADE_VUM_STAGE_TAKE_BACKUP.

    An ESXi update may fail during firmware backup. This may be due to a missing /tmp/scratch/downloads folder.

    Workaround: Using SSH, access the ESXi host and create a downloads folder in the /tmp/scratch directory.
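
    For example, from the SSH session on the host:

      mkdir -p /tmp/scratch/downloads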

  • LCM updates not available after bundle download.

    In some cases, LCM updates may not appear as available after a bundle download. Similarly, the Inventory page may not show the management domain information, and the Update details page may show the domain as not available. The most likely cause is that one or more domains, including the management domain, are in a failed state. As a result, the updates are not shown as available.

    Workaround: If the management domain does not appear or is in a failed state:

    • Verify if a domain expansion operation is in progress or has failed. If so, complete the workflow for the operation.
    • Verify if vRealize Automation or vRealize Operations deployment is in progress or has failed.
      • If in progress, wait for deployment to complete.
      • If failed, delete the existing instance and redeploy.

    If a workload domain does not appear or is in a failed state:

    • Verify if the workload domain is currently being created or expanded. If so, retry and complete the pending workflows.
  • Upgrade preview page does not show the correct version of the upgrade component.

    When using LCM version aliasing, the upgrade preview page shows the required version that is set on the bundle used for the upgrade. It does not show the actual version number of the component.

    Workaround: None.

  • ESXi upgrade fails due to timeout.

    Any ESXi hosts that are assigned to heavy workload domains may require more time to upgrade than the default timeout setting allows (4500000 ms or 75 min.). If you receive an upgrade timeout error alert, this may be the cause.

    Workaround: Modify the properties file to increase the default timeout allowed and restart LCM, as follows.

    1. Using SSH, log in as root to the SDDC Manager Controller VM.
    2. Open the /home/vrack/lcm/lcm-app/conf/application-evo.properties file in an editor.
    3. Locate the "ESX Upgrade Timeout" section and add the following setting: esx.upgrade.timeout = 9000000 to change the default.
    4. Save and close the /home/vrack/lcm/lcm-app/conf/application-evo.properties file.
    5. While still logged into the SDDC Manager Controller VM, restart the LCM service.
      systemctl restart lcm
      

     

  • Host downgrade is blocked in certain scenarios by the current design.

    When adding hosts for domain creation, the hosts or rack you order from the vendor must be imaged with the same ESXi version as the hosts in the management domain. If an unassigned host is imaged with a different version, you can downgrade or upgrade as necessary. However, you cannot downgrade a host below its base version.

    Workaround: None. See Managing Unassigned Hosts in the Administering VMware Cloud Foundation guide.

  • Bundle upload failure with Error parsing the bundle tar file.

    If you copy the bundle to a local file system such as /home/vrack instead of /mnt/lcm-bundle-repo and then call the upload API, the upgrade and subsequent upgrades fail with the error BUNDLE_REPO_WRITE_FAILURE.

    Workaround: Restart LCM. Log in as root to the SDDC Manager Controller VM and run the following command:

    systemctl restart lcm

     

vRealize Integration Known Issues
  • You cannot disassociate a workload domain from a vRealize Automation endpoint.

    After you associate a workload domain with a vRealize Automation endpoint, you cannot disassociate it.

    Workaround: There is no workaround.

  • Direct access to the vrlcm appliance is not supported.

    Customers must use the VMware SDDC Manager to deploy, monitor, and manage their vRealize Operations and vRealize Automation deployments.

  • vRA IaaS cloning operation is incorrectly shown as successful.

    The IaaS cloning operation is shown as successful but the machines have not joined Active Directory. As a result, the IaaS Management Agent fails.

    Workaround: Uninstall and redeploy vRealize Automation. This option is made available in SDDC Manager if this operation fails.

  • vRealize Operations deployment fails.

    After successful imaging, bring-up, and VDI/VI creation, vRealize Operations deployment fails with error message VROPS_CONFIGURING_LICENCE_FAILED. This is typically caused by entering the wrong vROPS license key during deployment.

    Workaround: Use a valid Enterprise license when deploying vROPS in Cloud Foundation. Add-on license keys are not valid for this type of deployment.

  • Hosts in management domain cannot enter maintenance mode due to anti-affinity configuration of vROps nodes

    All vRealize Operations nodes are automatically configured with a single anti-affinity rule. As a result, the operation of putting any management host into maintenance mode gets stuck. This can also cause an error during power-on operations for multiple VMs: "vCenter Server was unable to find a suitable host to power on the following virtual machines for the reasons listed below."

    Workaround: You can overcome this issue by disabling the anti-affinity rule in vCenter, migrating the VMs, then reenabling the rule.

    1. Log in to the vSphere Web Client and browse to the cluster.
    2. Click the Configure tab.
    3. Under Configuration, click VM/Host Rules.
    4. Locate the rule named "vROps Anti-Affinity Rule" and select Edit.
    5. Deselect the Enable Rule option and click OK.
    6. After you successfully place the hosts in maintenance mode, repeat the preceding steps to re-enable the rule.
  • vRA deployment fails at InstallIaasWebTask task.

    This is a known vRealize Automation issue that occurs due to slow or interrupted network connectivity, creating a communication issue between vRealize Automation nodes and Windows VMs.

    Workaround: When the failure is reported in the SDDC Manager Dashboard, click Retry. It should complete without issue.

  • vRealize Automation deployment fails with GuestToolsNotRunningException error.

    This is caused when Lifecycle Management fails to discover the IP assignment of an IaaS host due to the default tunneling protocol in Windows. From the Microsoft documentation: "By default, the 6to4 tunneling protocol is enabled in Windows 7, Windows Vista, Windows Server 2008 R2, and Windows Server 2008 when an interface is assigned a public IPv4 address (that is, an IPv4 address that is not in the ranges 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16)."

    Workaround: Disable IPv6 for the IaaS template.

    1. Log in to the Windows VM that runs the IaaS Windows template.
    2. Download MicrosoftEasyFix20174.mini.diagcab from Microsoft.
    3. Disable IPv6 on nontunnel interfaces, excluding the loopback interface, and on the IPv6 tunnel interface (see the example after these steps). See https://support.microsoft.com/en-us/help/929852/how-to-disable-ipv6-or-its-components-in-windows.
    4. Install MicrosoftEasyFix20174.mini.diagcab.
    5. Export the VM as an OVA.
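
    For step 3, the Microsoft article describes disabling IPv6 through the DisabledComponents registry value. The command below is a sketch only, run from an elevated command prompt and followed by a reboot; verify the value against KB 929852 before applying it:

      reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters" /v DisabledComponents /t REG_DWORD /d 0x11 /f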

     

  • Sporadic failures in workflows related to vRealize Operations integration.

    This is caused by intermittent failures with vRealize Operations authentication via REST API. The following workflows may be potentially impacted and sporadically fail:

    • Associating vRealize Operations with VI or VDI workload domain may fail at the ConfigureVCenterInVropsForExistingWorkloadDomain or ConfigureVsanInVropsForExistingWorkloadDomain task.
    • Disassociating vRealize Operations from a VI or VDI workload domain may fail at the DeleteVCenterAdapterForExistingWorkloadDomain or DeleteVsanAdapterForExistingWorkloadDomain task.
    • Deploying a vRealize Automation workflow may fail at the ConfigureVraInVrops operation.
    • Password rotation may fail during the VraApiRotatorTask, SsoRotatorTask, or VropsApiRotatorTask operation.
    • Deploying vRealize Operations workflow may fail at the ConfigureVropsAdapters task.

    Workaround: Retry the failed workflow.

    Due to the intermittent nature of the issue, more than one retry may be required for the operation to succeed. To prevent account locking due to failed authentication, wait at least fifteen minutes after the failure before retrying the operation.

  • The vRealize Automation and vRealize Operations consoles use self-signed certificates.

    This is a known issue in which the VMware Appliance Management Interface does not display the current certificates.

    Workaround: See Knowledge Base article VAMI does not display the new certificate after changing vCenter Server Appliance 6.x certificates for a solution.

  • The Deploy vRealize Operations page does not show the VLAN configuration settings.

    This issue has been observed after upgrading from Cloud Foundation 2.3.0. When deploying vRealize Operations through the Cloud Foundation interface, the VLAN configuration settings are not displayed. As a result, vRealize Automation and vRealize Operations deploy on the MGMT VLAN instead of a dedicated vRealize VLAN. LCM also displays a notification message prompting you to refresh the browser or to log out and log back in.

    Workaround: This is due to a caching issue. As indicated by the notification message in LCM, refreshing the interface page in the browser should fix the issue. If not, log out of Cloud Foundation and log back in.

  • Windows customization on IaaS hosts sets incorrect time zone.

    By default the IaaS template is configured to set the time zone to UTC Casablanca (UTC 0.0). However, IaaS hosts are being configured for the UTC Samoa time zone.

    Workaround: After they are added to the workload domain, manually sync all IaaS hosts with the NTP server.
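
    For reference, one way to do this on each Windows IaaS host, where the placeholder <NTP_server> is your NTP server:

      w32tm /config /manualpeerlist:<NTP_server> /syncfromflags:manual /update
      w32tm /resync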

  • vRealize Suite deployment fails with tenant name.

    vRealize Suite deployment fails with the message "User with username vmw\administrator already exists in tenant vsphere.local".

    Workaround: Uninstall vRealize and redeploy. Review the product documentation.

  • Associating vRealize with a VI workload domain fails with Incorrect Credentials Error.

    Associating vRealize Automation with a workload domain fails with Incorrect Credentials Error, even though the credentials are correct.

    Workaround: Retry the association in SDDC Manager. It may require several retries.

Upgrade Known Issues
  • After upgrading from Cloud Foundation 2.1.x to 2.3.x, the Bringup workflow shows no tasks.

    After upgrading, the Bringup workflows status page (System Status > Workflow Tasks > Sub Tasks) should list all bringup tasks. Instead the list is empty.

    Workaround: All bringup sub tasks completed before upgrading will no longer appear. However, after a delay, tasks related to newly initiated bringup workflows will appear.

  • The Status page incorrectly shows Add Host workflow operation as "Bringup".

    The correct list item should be titled Bringup Additional Hosts. As a result, the page lists more than one Bringup item.

    Workaround: None.

  • Workload domain creation gets blocked with an insufficient resources message.

    An otherwise available, unassigned host may be ignored if it shows out-of-date alerts. For example, if the host agent previously failed on a host, the alert remains until it is cleared by an administrator. As a result, when determining healthy resources for a new domain creation, the old alerts block the corresponding host from being considered healthy and available.

    Workaround: Clear all alerts on all unassigned hosts and retry the workload domain creation workflow.

    1. Go to the Status view for the host.
    2. On the Alerts pane, click VIEW DETAILS.
    3. Filter alerts by problematic resources and clear the alert.
  • Password lookup for the NSX Manager root account does not provide SSH access.

    This is expected behavior. NSX restricts access to the NSX Manager root account.

    Workaround: Follow the steps in Knowledge Base article Tech Support Access in NSX for vSphere 6.x.