
VMware Cloud Foundation 3.5 Release Notes

VMware Cloud Foundation 3.5 | 13 DECEMBER 2018 | Build 11215871

VMware Cloud Foundation is a unified SDDC platform that brings together VMware vSphere, vSAN, NSX, and optionally vRealize Suite components into a natively integrated stack to deliver enterprise-ready cloud infrastructure for the private and public cloud. The Cloud Foundation 3.5 release continues to expand on SDDC automation, the VMware SDDC stack, and the partner ecosystem.

NOTE: VMware Cloud Foundation 3.5 must be installed as a new deployment or upgraded from Cloud Foundation 3.0.1.1. For more information, see Installation and Upgrade Information below.

What's in the Release Notes

The release notes cover the following topics:

What's New

The VMware Cloud Foundation 3.5 release includes the following:

  • Support for NSX-T Automation
    Enables deployment of workload domains with NSX-T.
    Note: NSX-T is not supported for consolidated architecture.
  • Support for NFS Storage Automation
    Enables users to leverage existing NFS storage investment with NFS-based workload domains.

  • Deeper Integration with Composable Infrastructure
    SDDC Manager is now integrated with Redfish Composability APIs, with HPE Synergy as the first certified partner.

  • Bulk Host Commissioning and De-Commissioning
    You can now commission and de-commission multiple hosts at the same time.

  • Improved Interface for Password Management
    Provides new controls for managing password rotation and manual password updating.

  • Updated Certificate Replacement Workflow
    Improvements include clearer error messages and fewer errors.
     

Updated Cloud Foundation Bill of Materials (BOM)

The Cloud Foundation software product comprises the following software Bill of Materials (BOM). The components in the BOM are interoperable and compatible.

IMPORTANT: VMware Cloud Foundation downloads upgrade manifests for non-applicable upgrade bundles. For example, in VMware Cloud Foundation 3.x, in addition to the pertinent VMware Cloud Foundation 3.x manifest, you may also see update manifests for VMware Cloud Foundation 2.x. For more information, refer to Knowledge Base article 65045, VMware Cloud Foundation downloads upgrade manifests for non-applicable upgrade bundles.

Version 3.5 is the first release of Cloud Foundation with a BOM based on vSphere 6.7.

Software Component | Version | Date | Build Number
Cloud Foundation Builder VM | 3.5 | 13 DEC 2018 | 11215871
SDDC Manager | 3.5 | 13 DEC 2018 | 11215871
VMware vCenter Server on vCenter Server Appliance | 6.7 U1 | 12 OCT 2018 | 10244745
VMware Platform Services Controller | 6.7 U1 | 16 OCT 2018 | 10244745
VMware vSphere (ESXi) | 6.7 EP5 | 09 NOV 2018 | 10764712
VMware vSAN | 6.7 EP5 | 09 NOV 2018 | 10764712
VMware NSX Data Center for vSphere | 6.4.4 | 08 DEC 2018 | 11197766
VMware NSX-T Data Center | 2.3 | 25 SEP 2018 | 10085361
VMware vRealize Suite Lifecycle Manager | 2.0 | 20 SEP 2018 | 10150522
VMware vRealize Automation | 7.5 | 20 SEP 2018 | 10053539
VMware vRealize Log Insight | 4.7 | 04 OCT 2018 | 9983377
vRealize Log Insight Content Pack for NSX for vSphere | 3.8 | n/a | n/a
New vRealize Log Insight Content Pack for Linux | 1.0 | n/a | n/a
VMware vRealize Operations | 7.0 | 24 SEP 2018 | 10098133

VMware Software Edition License Information

The SDDC Manager software is licensed under the Cloud Foundation license. As part of this product, the SDDC Manager software deploys specific VMware software products.

The following VMware software components deployed by SDDC Manager are licensed under the Cloud Foundation license:

  • VMware ESXi
  • VMware vSAN
  • VMware NSX Data Center for vSphere

The following VMware software components deployed by SDDC Manager are licensed separately:

  • VMware vCenter Server
    NOTE Only one vCenter Server license is required for all vCenter Servers deployed in a Cloud Foundation system.
  • VMware vRealize Automation
  • VMware vRealize Operations
  • VMware vRealize Log Insight and content packs
    NOTE Cloud Foundation permits limited use of vRealize Log Insight for the management domain without purchasing full vRealize Log Insight licenses.

For details about the specific VMware software editions that are licensed under the licenses you have purchased, see the Cloud Foundation Bill of Materials (BOM) section above.

For more general information, see the VMware Cloud Foundation product page.

Supported Hardware

For details on vSAN ReadyNodes in Cloud Foundation, see the VMware Compatibility Guide (VCG) for vSAN and the Hardware Requirements section in the VMware Cloud Foundation Planning and Preparation Guide.

Documentation

To access the Cloud Foundation 3.5 documentation, go to the VMware Cloud Foundation product documentation.

To access the documentation for VMware software products that SDDC Manager can deploy, see the product documentation and use the drop-down menus on the page to choose the appropriate version.

Browser Compatibility and Screen Resolutions

The Cloud Foundation web-based interface supports the following web browsers:

  • Google Chrome: Version 70.x or 69.x
  • Internet Explorer: Version 11
  • Mozilla Firefox: Version 63.x or 62.x

For the Web-based user interfaces, the supported standard resolution is 1024 by 768 pixels. For best results, use a screen resolution within these tested resolutions:

  • 1024 by 768 pixels (standard)
  • 1366 by 768 pixels
  • 1280 by 1024 pixels
  • 1680 by 1050 pixels

Resolutions below 1024 by 768, such as 640 by 960 or 480 by 800, are not supported.

Installation and Upgrade Information

You can install Cloud Foundation 3.5 as a new release or upgrade from Cloud Foundation 3.0.1.1.

Installing as a New Release

The new installation process has three phases:

Phase One: Prepare the Environment

The VMware Cloud Foundation Planning and Preparation Guide provides detailed information about the software, tools, and external services that are required to implement a Software-Defined Data Center (SDDC) with VMware Cloud Foundation, using a standard architecture model.

Phase Two: Image all servers with ESXi

Image all servers with ESXi 6.7 U1 and update to ESXi 6.7 EP5 (build 10764712). See Knowledge Base article 58715 Virtual Machines running on VMware vSAN 6.6 and later report guest data consistency concerns following a disk extend operation for details.

Phase Three: Install Cloud Foundation 3.5

Refer to the VMware Cloud Foundation product documentation.

Upgrade to Cloud Foundation 3.5

You can upgrade to Cloud Foundation 3.5 only from Cloud Foundation 3.0.1.1.


Preparing for Upgrade

Before upgrading, perform the following tasks:

  1. Download LCM upgrade bundles through the SDDC Manager Dashboard or using the bundle transfer utility. See Patching and Upgrading Cloud Foundation in the VMware Cloud Foundation Operations and Administration Guide
  2. Before scheduling an upgrade, run the pre-upgrade check from the SDDC Manager Dashboard.
    Navigate to Inventory > Workload Domains > [Workload Domain Name] > Update/Patches tab to access the pre-upgrade check control.

After Upgrading

After upgrading, perform the following tasks:

  1. Upgrade vRealize Log Insight to version 4.7. See Knowledge Base article 60278 How to upgrade vRealize Log Insight in VMware Cloud Foundation 3.5.
  2. If you deployed vRealize Automation in a previous 3.0.x version, make sure that vRealize Suite Lifecycle Manager 2.0 is installed as part of the upgrade, and then contact VMware Support to upgrade vRealize Automation to version 7.5.
  3. If you deployed vRealize Operations in a previous 3.0.x version, make sure that vRealize Suite Lifecycle Manager 2.0 is installed as part of the upgrade, and then contact VMware Support to upgrade vRealize Operations to version 7.0.
  4. If you deployed both vRealize Automation and vRealize Operations in a previous 3.0.x version, make sure that vRealize Suite Lifecycle Manager 2.0 is installed as part of the upgrade, and then contact VMware Support to upgrade both.

Resolved Issues

  • No warning to prevent user from replacing certificates during updates

    The SDDC Manager Dashboard does not prevent the user from replacing certificates while product updates are in progress. This is unsupported.

    This issue is fixed in this release (3.5).

  • Bringup service logs are not accessible by admin user

    Although the admin user has permission on the Cloud Foundation Builder VM, bringup logs can only be accessed by the root user.

    This issue is fixed in this release (3.5).

  • SoS log collection returns AssertionError: "execute_api_and_save_output".

    If the user kicks off the SoS utility suite before the VI workload domain creation or a vRealize Suite LCM update runs, the system returns an AssertionError message. However, this error does not impact SoS functionality with log collection or other SoS operations such as health check.

    This issue is fixed in this release (3.5).

  • User must manually accept the vCenter Server certificate in vRealize Automation after connecting to workload domains.

    After deploying vRealize Automation and connecting it to workload domains in Cloud Foundation, the user must switch to the vRealize Automation interface and manually accept the security certificate.

    This issue is fixed in this release (3.5).

  • vSAN disk validation returns cache tier error.

    During the vSAN disk validation operation, the system returns the error "Host 'X' does not contain the minimum VSAN SDD cache disk required. VSAN cache their not within 200GB +-13.0 percent specification - FAIL". This may be caused by the validation process not correctly identifying the cache tier capacity requirements for the vSAN capacity tier size.

    This issue is fixed in this release (3.5).

  • Bring-up fails during SDDC Manager VMCA certificate installation task

    The bring-up may fail during SDDC Manager VMCA certificate installation task.

    This issue is fixed in this release (3.5).

  • Precheck returns Java Resource Access Exception

    The precheck operation returns the following Java error: org.springframework.web.client.ResourceAccessException: I/O error on POST request for "https://nsxManager.qr13.vcf.local/api/1.0/appliance-management/backuprestore/backup". This issue has been observed after replacing the certificate for the PSC, vCenter, NSX, and vRealize Automation components, and may be caused by a loss of connection between the SDDC Manager virtual appliance and the NSX Manager node.

    This issue is fixed in this release (3.5).

  • Post-Upgrade Configuration: User Must Reconfigure Adapter Teaming Policy for Management Portgroup

    The upgrade process from VMware Cloud Foundation 3.0 to 3.0.1 results in the teaming policy defaulting to a "Route based on originating virtual port" configuration. The policy for the Management, vMotion, and vSAN port groups should be set to the "Route based on physical NIC load" configuration.

    This issue is fixed in this release (3.5).

  • Post-Upgrade Configuration: User Must Reconfigure HA Settings for Management Cluster

    The upgrade process from VMware Cloud Foundation 3.0 to 3.0.1 results in the HA configuration settings for the management cluster defaulting to a "VM and Application Monitoring" configuration. The HA settings for the management cluster should be restored to the "VM Monitoring Only" setting.

    This issue is fixed in this release (3.5).

  • Unable to use third-party file transfer utilities to download logs from the SDDC Manager VM

    Third-party file transfer utilities (such as WinSCP) are popular tools for secure file download, but downloading logs from the SDDC Manager VM with these tools failed with a permissions error.

    This issue is fixed in this release (3.5).

Known Issues

The known issues are grouped as follows.

Bringup Known Issues
  • New Clicking the Help icon in the Bring-up wizard opens the help for an older release.

    The Help icon in the Bring-up wizard links to the product help for an older version.

    Workaround: To open the help topic on deploying Cloud Foundation, perform the following steps:

    1. In a browser window, navigate to docs.vmware.com.

    2. Click Browse All Products and then click VMware Cloud Foundation.

    3. Click VMware Cloud Foundation Architecture and Deployment Guide.

    4. In the left navigation pane, click Deploying Cloud Foundation.

  • Cloud Foundation Builder fails to initiate with "[Admin/Root] password does not meet standards" message

    When configuring the Cloud Foundation Builder admin and root passwords, format restrictions are not validated, so a user may create a password that does not comply with the restrictions. As a result, Cloud Foundation Builder fails upon initiation.

    Workaround: When configuring Cloud Foundation Builder, ensure that the password meets the following restrictions:

    • Minimum eight characters long.
    • Must include both uppercase and lowercase letters
    • Must include digits and special characters
    • Must not include common dictionary words
  • Bringup process fails at task Disable TLS 1.0 on vRealize Log Insight Nodes

    The Bringup process fails at the task Disable TLS 1.0 on vRealize Log Insight Nodes with the following error: Connect to 10.0.0.17:9543 [/10.0.0.17] failed: Connection refused (Connection refused). This issue has been observed in slow environments after restarting a vRealize Log Insight node. The node does not start correctly and its API is not reachable.

    Workaround: Use the following procedure to work around this issue.

    1. Restart the failed Bringup execution in the Cloud Foundation Builder VM and open the bringup logs.
      This retries the failed Bringup task, which might still fail on the initial attempt. The log shows an unsuccessful connection to the Log Insight node.
    2. While Bringup is still running, use SSH to log in to the Log Insight node that is shown as failed in the Bringup log.
    3. Run the following command to determine the connection issue.
      loginsight-node-2:~ # service loginsight status
      It should confirm that the daemon is not running.
    4. Execute the following command:
      loginsight-node-2:~ # mv /storage/core/loginsight/cidata/cassandra/data/system ~/cassandra_keyspace_files
    5. Reboot the Log Insight node.
    6. Confirm that it is running.
      loginsight-node-2:~ # uptime
      18:25pm up 0:02, 1 user, load average: 3.16, 1.07, 0.39
      loginsight-node-2:~ # service loginsight status
      Log Insight is running.

    In a few minutes, the Bringup process should successfully establish a connection to the Log Insight node and proceed.

  • vSAN SSD capacity disks marked as ineligible

    When running pre-bringup Audit Validation, an error displays because the audit shows the capacity to be under a terabyte, which is the recommended minimum capacity disk size.

    Workaround: The user has two options:

    • Ignore the error. It does not prevent the user from completing the workflow. Click Acknowledge to acknowledge the validation failure and proceed to bringup.
    • Execute the following command on each ESXi node in the deployment:
      esxcli storage core device list | grep -B 3 -e "Size: 3662830" | grep ^naa > /tmp/capacitydisks; for i in `cat /tmp/capacitydisks`; do esxcli vsan storage tag add -d $i -t capacityFlash; vdq -q -d $i; done
      NOTE: The size parameter in the above command will vary from customer to customer.
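      To check the size that a specific device reports before tagging it, you can list just that device and read its Size field (the naa identifier below is hypothetical):
      esxcli storage core device list -d naa.55cd2e404c185332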
  • Bringup and VI workload domain workflows fail at VM deployments if any hosts are in maintenance mode

    Neither operation checks for host maintenance mode state. As a result, NSX controller deployments fail. This is expected because vSAN default policy requires a minimum of three ESXi nodes to be available for deployment.

    Workaround: If you encounter this error, do the following:

    1. Through either vCenter or the esxcli utility, take the affected hosts out of maintenance mode:
      esxcli system maintenanceMode set -e 0
    2. Restart the failed workflow.
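    To confirm that a host is no longer in maintenance mode before restarting the workflow, you can run the following on the host (it reports whether maintenance mode is Enabled or Disabled):
      esxcli system maintenanceMode get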
  • Cloud Foundation Builder VM remains locked after more than 15 minutes.
    The VMware Imaging Appliance (VIA) locks out the user after three unsuccessful login attempts. Normally, the lockout is reset after fifteen minutes, but the underlying Cloud Foundation Builder VM does not automatically reset.

    Workaround: Using SSH, log in as root to the Cloud Foundation Builder VM. Unlock the account by resetting the failed login counter for the admin user with the following command.
    pam_tally2 --user=<user> --reset
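    For example, to clear the lockout for the admin account and then confirm that the failure counter shows zero (assuming the admin account is the one that is locked):
    pam_tally2 --user=admin --reset
    pam_tally2 --user=admin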

  • Unable to cancel JSON Spec Validations.

    This is observed during the Bringup process when there is an error in the JSON file. The user is unable to cancel the JSON validation, and the validation might take up to five minutes to complete if there is no connectivity to the ESXi hosts.

    Workaround: There is no workaround to enable the desired cancellation. However, if this occurs, after the validation fails, review the JSON for syntax or other errors. Correct these errors and try again.

  • During bringup, platform audit returns vSAN Connectivity Validation errors.

    Observed in a deployment configuration containing six 1 TB HDDs and two SSD cache disks of about 370 GB. This suggests that the vSAN platform audit may not be robust enough to handle this number of devices; however, the values in the error message are incorrect and bringup can continue to successful completion.

    Workaround: No action required. Bringup is unaffected.

Upgrade Known Issues
  • Operationsmanager component fails to come up after RPM upgrade.

    After manually upgrading the operations manager RPM to the latest version, the operationsmanager service fails to come up. The system returns an INFO-level message: Waiting for changelog lock... This is likely caused by overlapping restarts of the service preventing any restart from succeeding. This can happen to any service that uses Liquibase, such as commonsvcs.

    Workaround: Clean the databasechangeloglock table from the database.

    1. Log in to the SDDC Manager VM as admin user "vcf".
    2. Enter su to switch to root user.
    3. Run the following commands:
      1. Open the postgres command prompt:
        # psql -h /home/postgresql/ -U postgres
      2. Connect to the password_manager database as the opsmgr user:
        \c password_manager opsmgr
      3. Delete the contents of the databasechangeloglock table:
        delete from databasechangeloglock;
        NOTE: You can combine the preceding steps into a single command:
        psql -h /home/postgresql/ -U postgres -d password_manager -c "delete from databasechangeloglock"
      4. Exit the postgres prompt:
        \q
      5. Restart the operationsmanager component:
        # systemctl restart operationsmanager
      6. Verify the operationsmanager is running:
        # curl http://localhost/operationsmanager/about
        It should return something like:
        {"id":"2cac9b7c-545f-4e6d-a66-6f81eef27601","name":"OPERATIONS_MANAGER",
        "version":"3.1.0-SNAPSHOT-9580592","status":"ACTIVE","serviceUrl":
        "http://127.0.0.1/operationsmanager","description":"Operations Manager"}
  • Upgrade process returns invalid data error.

    Observed when upgrading from 3.0 to 3.0.1, and from 3.0.1 to 3.5. When upgrading through the Lifecycle Manager, the system returns this error: Scheduling immediate update of bundle failed. UPGRADE_SPEC_INVALID_DATA; User Input cannot be null/empty, User Input is required for this upgrade.

    Workaround: Refresh the browser page.

  • The upgrade process to version 3.5 may leave some hosts running the previous ESXi version.

    The Lifecycle Manager by default displays only the latest ESXi version based on the bill of materials. However, with workload domains running on two ESXi versions, errors may result.

    Workaround: Manually update the ESXi version on all hosts.

  • During upgrade from 3.0.1 to 3.5, user is not notified of required vRealize Suite Lifecycle Manager upgrade.

    After upgrading from Cloud Foundation 3.0.1 to 3.5, you must upgrade vRealize Suite Lifecycle Manager from 1.2 to 2.0 before deploying vRealize Suite components in the upgraded version. Without this upgrade, vRealize Suite components will fail to deploy.

    Workaround: If the user has already deployed vRealize Suite Lifecycle Manager in a previous Cloud Foundation deployment, update this component prior to deploying any vRealize components in Cloud Foundation 3.5. Otherwise, the deployment may fail.

vRealize Integration Known Issues
  • vRealize Operations in vRealize Log Insight configuration fails when vRealize Operations appliances are in a different subdomain

    During vRealize Suite deployment in Cloud Foundation, the user provides FQDN values for vRealize load balancers. If these FQDNs are in a different domain than the one used during initial bringup, the deployment may fail.

    Workaround: To resolve this failure, you must add the vRealize Operations domain to the configuration in the vRealize Log Insight VMs.

    1. Log in to the vRealize Log Insight VM to modify the /etc/resolv.conf file.
      nameserver 10.0.0.250
      nameserver 10.0.0.250
      domain vrack.vsphere.local
      search vrack.vsphere.local vsphere.local 
    2. Add the domain used for vRealize Operations to the last line above.
    3. Repeat on each vRealize Log Insight VM.
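    For example, if the vRealize Operations load balancer FQDNs use a hypothetical domain vrops.example.local, the updated search line would read:
      search vrack.vsphere.local vsphere.local vrops.example.local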
  • vRealize Suite Lifecycle Manager is not properly cleaned up during uninstall.

    vRealize Suite Lifecycle Manager is not cleaned up during an uninstall of a failed vRealize Automation or vRealize Operations Manager deployment. This may happen if the deployment workflow for vRealize Automation or vRealize Operations fails under some specific conditions on the “ChangeVrslcmPasswords” step, for example, if SDDC Manager services are restarted during this operation. As a result, the vRealize Suite Lifecycle Manager VM is not properly cleaned up from the system during the uninstall process of the failed vRealize Automation or vRealize Operations Manager deployment.

    Consecutive attempts to deploy vRealize Automation or vRealize Operations Manager will fail due to vRealize Suite Lifecycle Manager being left in a bad state.

    Workaround: Follow the steps outlined in Knowledge Base article 57917 to fix the issue.

  • vRealize Suite: The IaaS ManagerService stops running on the IaaS manager service nodes after certificate replacement.

    After replacing the certificate for the vRealize Automation resource, the Manager Service stops running on the IaaS manager service nodes. This can be observed by accessing the vRealize Automation appliance and opening the vRealize Automation Settings tab. Expand the IaaS manager service entries (for example, iaasms1.<serviceusername>.local or iaasms2.<serviceusername>.local) and the ManagerService shows a status of Stopped.

    Workaround: After completing certificate replacement, you must manually restart the ManagerService on the principal IaaS manager service node, for example, iaasms1.<serviceusername>.local. (The name of the node on your system may vary.) Access the node as described above and restart the service. It may take five to ten minutes for the service to restart, so return later to verify that it has restarted. You must also restart the VMware vCloud Automation Center Service.

    NOTE: You only need to restart one node. The other will restart as a peer.

  • Certificate replacement for the vRealize Automation component fails with 401 error

    Certificate replacement for the vRealize Automation component fails due to a 401 unauthorized error with the message "Importing certificate failed for VRA Cafe nodes." This issue is caused by a password lockout in the vRealize Automation product interface. For example, independently of Cloud Foundation, a user tried to log in to vRealize Automation with the wrong credentials too many times, causing the lockout.

    Workaround: The lockout period lasts for thirty minutes, after which the certificate replacement process can succeed.

  • IP address for the load balancer VM shows as N/A in the SDDC Manager interface.

    The Services tab on the MGMT domain page shows N/A as the IP address for the vRealize Log Insight load balancer VM.

    Workaround: The user can discover the IP as follows:

    1. Log in to the SDDC Manager VM, and change to root user.
    2. Run the following command to return the load balancer IP address: nslookup <vrli-hostname>.
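    For example, with a hypothetical load balancer hostname, the Address field in the nslookup response is the load balancer IP address:
      nslookup vrli-lb.vrack.vsphere.local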
  • Upgrade to vRealize Suite Lifecycle Manager 2.0 removes Environments Cards and Request History

    The upgrade process from vRealize Suite Lifecycle Manager 1.2 to 2.0 replaces the deployed appliance and restores a baseline configuration. This action subsequently removes any previous environment cards from the user interface and the request history. This may create auditing concerns; however, the originating requests for vRealize Suite product actions (such as deployment, workload domain connections, and so on) are maintained in the SDDC Manager logs.

    Workaround: Verify that vRealize Log Insight is manually configured for log retention and archiving. This will help to ensure that the SDDC Manager logs with the historical and originating vRealize Suite product action requests are preserved. For example, see Configure Log Retention and Archiving for vRealize Log Insight in Region A in the VMware Validated Design 4.3 documentation.

  • Syslog server on NSX-T nodes is configured with a vRealize Log Insight node IP address.

    Observed after deploying Cloud Foundation and creating an NSX-T workload domain. When you enable Log Insight for the NSX-T workload domain, the syslog server on the NSX-T nodes is inadvertently configured with the IP address of one of the vRealize Log Insight nodes, instead of the load balancer virtual appliance IP address.

    Workaround: Using SSH, log in as admin user to the NSX-T Manager and NSX-T controller nodes to manually modify the configuration to point to the load balancer virtual appliance as syslog server. Perform the following procedure on each node:

    1. List the current syslog server IP address and port configuration:
      get logging-server
    2. Modify to the desired IP address:
      set logging-server <load balancer IP address:port> proto udp level info
    3. Confirm by listing the configuration information again:
      get logging-server
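    For example, with a hypothetical load balancer virtual appliance at 10.0.0.30 listening on the standard syslog UDP port 514:
      set logging-server 10.0.0.30:514 proto udp level info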
Networking Known Issues
  • Platform audit for network connectivity validation fails

    The vSwitch MTU is set to the same MTU as the VXLAN VTEP MTU. However, if the vSAN and vMotion MTU are set to 9000, then vmkping fails.

    Workaround: Modify the nsxSpecs settings in the bring-up JSON by setting VXLANMtu to a jumbo MTU value, because the vSwitch MTU is set to the VXLAN MTU value. This prevents the error seen in the platform audit.
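    A minimal illustration of the change (the exact placement of the nsxSpecs block in your bring-up JSON may differ; 9000 is an example jumbo MTU value):
      "nsxSpecs": {
        "VXLANMtu": 9000
      }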

  • NSX Manager is not visible in the vSphere Web Client.

    In addition to NSX Manager not being visible in the vSphere Web Client, the following error message displays in the NSX Home screen: "No NSX Managers available. Verify current user has role assigned on NSX Manager." This issue occurs when the permission is not correctly configured in NSX Manager for the account that is logged in to vCenter Server.

    Workaround: To resolve this issue, follow the procedure detailed in Knowledge Base article 2080740 "No NSX Managers available" error in the vSphere Web Client.

  • After password rotation of the NSX-T workload domain, Add Host operation fails.

    After rotating the vCenter and ESXi passwords of the NSX-T workload domain, the Add Host operation fails. Specifically, the sub-task that configures the fabric fails. This appears to be related to the password rotation, but the actual cause is that the NSX-T host preparation is incomplete, so the operation fails.

    Workaround: Log in to NSX-T Manager, remove the failed host, and retry the Add Host operation in SDDC Manager.
     

  • Multiple NSX-T workload domains cause conflicting overlay transport zones.

    If you create more than one NSX-T workload domain, each subsequent domain has a separate set of overlay networks. As a result, the different NSX-T edges cannot communicate because that requires a single overlay transport zone.

    Workaround: Limit your deployment to only a single NSX-T workload domain.

SDDC Manager Known Issues
  • Licensing page is missing several columns of data.

    Observed in the Internet Explorer browser. The Licensing page displays only the Product Name column; all other data columns are not displayed.

    Workaround: This issue impacts Internet Explorer only. Use another supported browser (see Browser Compatibility and Screen Resolutions above).

  • Cancel Network validation causes Host Validation to fail on next validation run.

    If you try to cancel validation while the Network Connectivity validation task is running, on the next validation, the Host Validation operation will fail with physical NIC issues, such as Physical NIC vmnic1 is connected to adt_vSwitch_01 on esxi host esxi-1 (10.0.0.100): should be disconnected.

    Workaround: This issue can be resolved as follows:

    1. Complete the validation with the failed Host Validation operation.
    2. Retry the validation.

    The Host Validation should complete without errors.

  • Bulk password rotation results in inaccurate failed message.

    If you select numerous domains for password rotation, the operation succeeds (as shown in the Tasks panel) but the Password Management page displays a Failed to rotate... 504 Gateway Time-out message.

    Workaround: Refresh the page view to clear the erroneous message.

  • Unable to download or print current validation report.

    After cancelling a validation, you can only download or print the most recent successful validation report, as opposed to the current one.

    Workaround: None.

Workload Domain Known Issues
  • The vSAN HCL database does not update as part of workload domain creation

    When you create a workload domain, the vSAN HCL database should be updated as part of the process, but it is not. As a result, the database moves into a CRITICAL state, as observed from vCenter.

    Workaround: Manually update the vSAN HCL database as described in Knowledge Base article 2145116.

  • Adding host fails when host is in a different VLAN

    Adding a host to a workload domain cluster fails when the new host is on a different VLAN than the other hosts in the cluster, even though this operation should succeed.

    Workaround:

    1. Before attempting to add a host, add a new portgroup to the VDS for the cluster.
    2. Tag the new portgroup with the VLAN ID of the host to be added.
    3. Run the Add Host workflow in the SDDC Manager Dashboard.
      This will fail at the "Migrate host vmknics to dvs" operation.
    4. Locate the failed host in vCenter, and migrate the vmk0 of the host to the new portgroup you created in step 1.
      For more information, see Migrate VMkernel Adapters to a vSphere Distributed Switch in the vSphere product documentation.
    5. Retry the Add Host operation.
      It should succeed.

    NOTE: If you remove the host in the future, remember to manually remove the portgroup, too, if it is not used by any other hosts.

  • Add cluster to domain operation fails with error FAILED_TO_GET_COMPLIANT_ESXI_VERSIONS

    This issue has been observed when a user attempts to add a cluster to a newly created workload domain. The domain creation workflow includes creating a cluster for that domain. However, even though the domain creation workflow may have completed, the new cluster may require up to five minutes to be recognized. This error results if a user tries to add an additional cluster during this five-minute period.

    This error is most likely to occur if you attempt the Add Cluster workflow shortly after an LCM update. Due to a separate issue, the LCM cache takes longer to refresh than expected and returns the wrong component version information, resulting in the error.

    Workaround: Wait for at least 5 minutes after LCM updates complete before initiating the Add Cluster operation.

  • NFS datastore creation may fail in workload domain creation.

    When creating an NFS-based workload domain or cluster, the system does not validate the NFS server and mount configuration input. As a result, if incorrect information has been entered, the workload domain creation workflow accepts it anyway and then fails.

    Workaround: After the failure is known, remove the workload domain and re-image the hosts. Try again, using the correct server and mount configuration information.
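    Before retrying, you may want to confirm from the ESXi shell that the NFS server and export can actually be mounted by one of the hosts (the server address, export path, and datastore name below are hypothetical):
      esxcli storage nfs add -H 10.0.0.251 -s /exports/wld01 -v wld01-test
      esxcli storage nfs list
      esxcli storage nfs remove -v wld01-test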

  • Workload domain creation may fail at Deploy NSX Controller Cluster task.

    Sometimes the deployed NSX controller VMs are unable to connect to the vSphere Distributed Port Group. As a result, the domain manager workflow fails during the Control Cluster Deployment task.

    Workaround: Log into vCenter and reboot all the ESXi hosts in the failing workload domain. Return to the SDDC Manager dashboard and restart the failed workload domain creation workflow.
    IMPORTANT: Before rebooting, you must place the hosts in maintenance mode. After reboot, take them out of maintenance mode.
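    A minimal per-host sequence from the ESXi shell, assuming each host can be evacuated (run the last command after the host has rebooted and reconnected; you can also perform these steps from vCenter):
      esxcli system maintenanceMode set -e true
      reboot
      esxcli system maintenanceMode set -e false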

  • In some cases, VI workload domain NSX Manager does not appear in vCenter.

    Observed in NFS-based workload domains. Although VI workload domain creation was successful, the NSX Manager VM is not registered with vCenter and, as a result, does not appear in vCenter.

    Workaround: To resolve this issue, use the following procedure:

    1. Log in to NSX Manager (http://<nsxmanager IP>).
    2. Navigate to Manage > NSX Management Service.
    3. Un-register the lookup service and vCenter, then re-register.
    4. Close the browser and log in to vCenter.
  • After deleting all the NSX-T workload domains, any new NSX-T workload domain creation workflow fails.

    This results from the deletion of the NSX-T Manager and NSX Controller nodes from the inventory even though the corresponding appliances persist in the management cluster.

    Workaround: After you successfully delete the last NSX-T workload domain, these additional steps are required.

    1. Power off the NSX-T Manager VM and all NSX-T controllers running in the management (MGMT) workload domain.
    2. Delete the NSX-T Manager and all NSX-T controller virtual appliances from the management cluster.
  • Workload domain creation not blocked by missing install bundles.

    This issue has been observed in 3.5 deployments shortly after upgrade. The user tries to create a workload domain before all install bundles have completed downloading. Missing install bundles should prevent the user from proceeding, but they do not. The only warning of a problem comes when the user tries to add a new host to the new workload domain.

    Workaround: There are two workarounds for two aspects of this issue:

    • To reinstate the proper safeguards, perform a forced refresh of the workload domain creation page in SDDC Manager. Any future attempts to create a workload domain will be prevented if any install bundles are missing.
    • To fix the issue after you have already created a workload domain, you must delete the new workload domain, perform the hard refresh, and re-create the desired workload domain.
  • Unable to delete VI workload domain enabled for vRealize Operations Manager from SDDC Manager.

    Attempts to delete the vCenter adapter also fail, and return an SSL error.

    Workaround: Use the following procedure to resolve this issue.

    1. Create a vCenter adapter instance in vRealize Operations Manager, as described in Configure a vCenter Adapter Instance in vRealize Operations Manager.
      This step is required because the existing adapter was deleted by the failed workload domain deletion.
    2. Follow the procedure described in Knowledge Base article 56946.
    3. Restart the failed VI workload domain deletion workflow from the SDDC Manager interface.
Security Operations Known Issues
  • Password update failure due to the password policy results in UPDATE message when it should be FAILED

    If a password update fails the password policy, the system shows an UPDATE status and the transaction history shows the message "Operation failed in 'appliance update', for credential update." In actuality, the operation has FAILED because the new password does not meet the policy requirements. A more appropriate message would read "Password update has failed due to unmet policy requirements" and recommend reviewing the policy.

    Workaround: Review the password policy for the component in question, modify the password as necessary, and try the update again.

  • SSL Certificate Replacement for vCenter breaks vRealize Operations data collection

    After replacing the certificate for the vCenter Server component, both the vCenter and vSAN components in vRealize Operations Manager report a "Collection failed" error message. Testing the connection and attempting to accept the new certificate returns additional error messages: Unable to establish a valid connection to the target system. Adapter instance has been configured to trust multiple certificates, when only one is allowed. Please remove any old, unneeded certificates and try again.

    Workaround: If you encounter this issue, use the following procedure to resolve the situation.

    1. Delete the current vCenter and vSAN adapters.
    2. Re-create them using the same configuration and credentials originally set by vRealize Operations.
    3. Test the connection, accept the new certificates, and save the configuration.
  • The Configure CA workflow fails if the Microsoft CA password contains certain special characters.

    The Configure CA workflow fails if the Microsoft CA password contains the following special characters:  < > (left or right angle brackets), ' (single quote or apostrophe), " (double quote), or & (ampersand). To mitigate risk from persistent cross-site scripting, SDDC Manager converts these characters to Unicode, replacing them with escape characters, thus making the password invalid.

    Workaround: Verify that the Microsoft CA password does not contain any of the problematic special characters. If it does, modify the password so it complies with this limitation.

Lifecycle Manager Known Issues
  • Unable to filter download bundles by status.

    Observed only in the Internet Explorer browser. When viewing download bundles in the SDDC Manager, the feature for filtering by status does not function.

    Workaround: Ignore the issue or use a different supported browser, such as Chrome or Firefox.

Known Issues Affecting Service Providers
Supportability and Serviceability (SoS) Utility Known Issues
  • New Host cleanup feature is supported only for hosts commissioned on the MGMT VLAN.

    The new Supportability and Serviceability (SoS) utility option (--cleanup-host) only functions on hosts commissioned on the MGMT VLAN, and not on hosts on other VLANs.

    Workaround: There is no workaround.