VMware Cloud Foundation 3.10 | 26 MAY 2020 | Build 16223257

VMware Cloud Foundation 3.10.0.1 | 02 JUL 2020 | Build 16419449

Check for additions and updates to these release notes.

What's New

The VMware Cloud Foundation (VCF) 3.10 release includes the following:

  • ESXi Cluster-Level and Parallel Upgrades: Enables you to update the ESXi software on multiple clusters in the management domain or a workload domain in parallel. Parallel upgrades reduce the overall time required to upgrade your environment.

  • NSX-T Data Center Cluster-Level and Parallel Upgrades: Enables you to upgrade all Edge clusters in parallel, and then all host clusters in parallel. Parallel upgrades reduce the overall time required to upgrade your environment. You can also select specific clusters to upgrade. The ability to select clusters allows for multiple upgrade windows and does not require all clusters to be available at a given time.

  • Skip Level Upgrades: Enables you to upgrade to VMware Cloud Foundation 3.10 from version 3.5 and later.

  • Option to turn off Application Virtual Networks (AVNs) during Bring-up: AVNs deploy vRealize Suite components on NSX overlay networks, and keeping AVNs enabled during bring-up is recommended. If you turn off AVNs during bring-up, vRealize Suite components are deployed to a VLAN-backed distributed port group.

  • Option to deploy vRealize Suite 2019 products: Instead of the legacy vRealize Suite product versions included in the Cloud Foundation 3.10 Bill of Materials, you can deploy vRealize Suite 2019 products following prescriptive guidance.

  • BOM Updates for the 3.10 Release: Updated Bill of Materials with new product versions.

VMware Cloud Foundation Bill of Materials (BOM)

The Cloud Foundation software product comprises the following software bill of materials (BOM). The components in the BOM are interoperable and compatible.

VMware Response to Apache Log4j Remote Code Execution Vulnerability: VMware Cloud Foundation is impacted by CVE-2021-44228 and CVE-2021-45046, as described in VMSA-2021-0028. To remediate these issues, see Workaround instructions to address CVE-2021-44228 & CVE-2021-45046 in VMware Cloud Foundation (KB 87095).

Software Component | Version | Date | Build Number
Cloud Builder VM | 2.2.2.0 | 26 MAY 2020 | 16223257
SDDC Manager | 3.10 | 26 MAY 2020 | 16223257
VMware vCenter Server Appliance | 6.7 P02 / U3g | 28 APR 2020 | 16046470
VMware ESXi | 6.7 P02 / U3g | 28 APR 2020 | 16075168
VMware vSAN | 6.7 P02 / U3g | 28 APR 2020 | 15985001
VMware NSX Data Center for vSphere | 6.4.6 | 10 OCT 2019 | 14819921
VMware NSX-T Data Center | 2.5.1 | 19 DEC 2019 | 15314288
VMware Enterprise PKS | 1.7 | 02 APR 2020 | 16116522
VMware vRealize Suite Lifecycle Manager | 2.1 Patch 2 | 04 MAY 2020 | 16154511
VMware vRealize Log Insight | 4.8 | 11 APR 2019 | 13036238
vRealize Log Insight Content Pack for NSX for vSphere | 3.9 | n/a | n/a
vRealize Log Insight Content Pack for Linux | 2.0.1 | n/a | n/a
vRealize Log Insight Content Pack for vRealize Automation 7.5+ | 1.0 | n/a | n/a
vRealize Log Insight Content Pack for vRealize Orchestrator 7.0.1+ | 2.1 | n/a | n/a
vRealize Log Insight Content Pack for NSX-T | 3.8.2 | n/a | n/a
vSAN Content Pack for Log Insight | 2.2 | n/a | n/a
vRealize Operations Manager | 7.5 | 11 APR 2019 | 13165949
vRealize Automation | 7.6 | 11 APR 2019 | 13027280
VMware Horizon 7 | 7.10.0 | 17 SEP 2019 | 14584133

Note:

  • vRealize Log Insight Content Packs are deployed during the workload domain creation.

  • VMware Solution Exchange and the vRealize Log Insight in-product marketplace store only the latest versions of the content packs for vRealize Log Insight. The Bill of Materials table contains the latest versions of the packs that were available at the time VMware Cloud Foundation was released. When you deploy the Cloud Foundation components, it is possible that the version of a content pack within the in-product marketplace for vRealize Log Insight is newer than the one used for this release.

  • To remediate VMSA-2020-0007 (CVE-2020-3953 and CVE-2020-3954) for vRealize Log Insight 4.8, you must apply the vRealize Log Insight 4.8 security patch. For information on the security patch, see KB article 79168.

  • For this release, you can install the vRealize Suite 2019 products instead of those listed in the BOM. These include vRealize Suite Lifecycle Manager 8.1, vRealize Log Insight 8.1.1, vRealize Operations 8.1, and vRealize Automation 8.1 along with Workspace ONE Access. For prescriptive guidance on deploying and configuring these products with Cloud Foundation 3.10, see Deployment of VMware vRealize Suite 2019 on VMware Cloud Foundation 3.10.

VMware Software Edition License Information

The SDDC Manager software is licensed under the Cloud Foundation license. As part of this product, the SDDC Manager software deploys specific VMware software products.

The following VMware software components deployed by SDDC Manager are licensed under the Cloud Foundation license:

  • VMware ESXi

  • VMware vSAN

  • VMware NSX Data Center for vSphere

The following VMware software components deployed by SDDC Manager are licensed separately:

  • VMware vCenter Server

    NOTE: Only one vCenter Server license is required for all vCenter Servers deployed in a Cloud Foundation system.

  • VMware NSX-T

  • VMware Enterprise PKS

  • VMware Horizon 7

  • VMware vRealize Automation

  • VMware vRealize Operations

  • VMware vRealize Log Insight and content packs

    NOTE: Cloud Foundation permits limited use of vRealize Log Insight for the management domain without the purchase of a vRealize Log Insight license.

For details about the specific VMware software editions that are licensed under the licenses you have purchased, see the Cloud Foundation Bill of Materials (BOM) section above.

For general information about the product, see VMware Cloud Foundation.

Supported Hardware

For details on supported configurations, see the VMware Compatibility Guide (VCG) and the Hardware Requirements section on the Prerequisite Checklist tab in the Planning and Preparation Workbook.

Documentation

To access the Cloud Foundation documentation, go to the VMware Cloud Foundation product documentation.

To access the documentation for VMware software products that SDDC Manager can deploy, see the product documentation and use the drop-down menus on the page to choose the appropriate version.

Browser Compatibility and Screen Resolutions

The Cloud Foundation web-based interface supports the latest two versions of the following web browsers, with the exception of Internet Explorer, which is supported at version 11 only:

  • Google Chrome

  • Mozilla Firefox

  • Microsoft Edge

  • Internet Explorer: Version 11

For the web-based user interfaces, the supported standard resolution is 1024 by 768 pixels. For best results, use one of the following tested resolutions:

  • 1024 by 768 pixels (standard)

  • 1366 by 768 pixels

  • 1280 by 1024 pixels

  • 1680 by 1050 pixels

Resolutions below 1024 by 768, such as 640 by 960 or 480 by 800, are not supported.

Installation and Upgrade Information

VMware Cloud Foundation 3.10 can be installed as a new deployment or upgraded from VMware Cloud Foundation 3.9.1. You can also use the skip-level upgrade tool to upgrade to VMware Cloud Foundation 3.10 from versions earlier than 3.9.1.

In addition to the release notes, see the VMware Cloud Foundation Upgrade Guide for information about the upgrade process.

  • Installing as a New Release

The new installation process has three phases:

Phase One: Prepare the Environment

The VMware Cloud Foundation Planning and Preparation Guide provides detailed information about the software, tools, and external services that are required to implement a Software-Defined Data Center (SDDC) with VMware Cloud Foundation, using a standard architecture model.

Phase Two: Image all servers with ESXi

Image all servers with the ESXi version mentioned in the Cloud Foundation Bill of Materials (BOM) section. See the VMware Cloud Foundation Architecture and Deployment Guide for information on installing ESXi.

Phase Three: Install Cloud Foundation 3.10

Refer to the VMware Cloud Foundation Deployment Guide for information on deploying Cloud Foundation.

  • Upgrading to Cloud Foundation 3.10

You can upgrade to Cloud Foundation 3.10 from 3.9.1. You can also use the skip-level upgrade tool to upgrade to VMware Cloud Foundation 3.10 from versions earlier than 3.9.1. For information on upgrading to 3.10, refer to the VMware Cloud Foundation Upgrade Guide.

VMware Cloud Foundation 3.10.0.1 Release Information

VMware Cloud Foundation 3.10.0.1 was released on 02 JUL 2020. You can upgrade to Cloud Foundation 3.10.0.1 from a 3.10 deployment, or you can use the skip-level upgrade tool to upgrade to VMware Cloud Foundation 3.10.0.1 from versions earlier than 3.10.

VMware Cloud Foundation 3.10.0.1 contains the following BOM updates:

Software Component | Version | Date | Build Number
SDDC Manager | 3.10.0.1 | 30 JUN 2020 | 16419449
VMware ESXi | 6.7 EP 15 (ESXi-202006001) | 09 JUN 2020 | 16316930
VMware vCenter Server Appliance | 6.7 U3h | 28 MAY 2020 | 16275304

SDDC Manager 3.10.0.1 addresses the following:

SDDC Manager 3.10.0.1 contains security fixes for Photon OS packages PHSA-2020-3.0-0086 through PHSA-2020-3.0-0103, published at https://github.com/vmware/photon/wiki/Security-Advisories-3

ESXi 6.7 EP 15 addresses the following:

VMware ESXi contains an out-of-bounds read vulnerability in the NVMe functionality. A malicious actor with local non-administrative access to a virtual machine might be able to read privileged information contained in memory. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2020-3960 to this issue. For more information, see VMSA-2020-0012.

ESXi 6.7 EP 15 also addresses the issues described in VMSA-2020-0015.

VMware vCenter Server Appliance 6.7 U3h addresses the following:

Security fixes for Photon OS packages.

Resolved Issues

The following issues are resolved in this release.

  • vCenter upgrade operation fails on the management domain and workload domain

  • vRealize Operations deployment fails when vRealize Operations appliances are in a different subdomain

  • Host commissioning fails if the network pool does not have sufficient free IP addresses

  • NTP/DNS server is not updated for NSX-T Managers

  • Addition of members from PKS UAA to Harbor library fails when the certificate verification is enabled

  • Cluster level upgrade is not available if the workload domain has a faulty cluster

  • Upgrade task status may be reported incorrectly in the SDDC Manager Dashboard Tasks panel

  • Using the API to attempt to upgrade multiple clusters only upgrades one cluster

  • You are not able to add a cluster or a host to an NSX-T workload domain that has a dead host

  • vRealize Log Insight pre-check may fail during the consistency checks

  • When there is no associated workload domain to vRealize Automation, the VRA VM NODES CONSISTENCY CHECK upgrade precheck fails

  • Adding hosts from different network pools to NSX-T workload domain clusters is only supported for hosts using vSAN storage

Known Issues

Bring-up Known Issues

  • The Cloud Foundation Builder VM remains locked after more than 15 minutes.

    The VMware Imaging Appliance (VIA) locks out a user after three unsuccessful login attempts. Normally, the lockout resets after fifteen minutes, but the account on the underlying Cloud Foundation Builder VM does not unlock automatically.

    Workaround: Using SSH, log in to the Cloud Foundation Builder VM as admin, then switch to the root user. Unlock the account by resetting its failed login count with the following command.

    pam_tally2 --user=<user> --reset
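
    For example, a quick check-then-reset sequence, assuming the locked account is admin (this uses the same pam_tally2 tool as the workaround above):

      # Show the current failed-login count for the admin account
      pam_tally2 --user=admin

      # Clear the count, which unlocks the account
      pam_tally2 --user=admin --reset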

  • The bring-up process fails at the task that stops TLS 1.0 on the vRealize Log Insight nodes

    Bring-up fails at the task that stops TLS 1.0 on the vRealize Log Insight nodes with an error similar to: Connect to 10.0.0.17:9543 [/10.0.0.17] failed: Connection refused (Connection refused). This issue has been observed in slow environments after a vRealize Log Insight node restarts. The node does not start correctly and its API is not reachable.

    Workaround: Use the following procedure to work around this issue.

    1. Restart the failed bring-up execution in the Cloud Foundation Builder VM and open the bring-up logs.

      This retries the failed bring-up task, which might still fail on the initial attempt. The log shows an unsuccessful connection to the vRealize Log Insight node.

    2. While bring-up is still running, use SSH to log in to the vRealize Log Insight node that is shown as failed in the bring-up log.

    3. Run the following command to determine the connection issue.

      loginsight-node-2:~ # service loginsight status

      It should confirm that the daemon is not running.

    4. Execute the following command:

      loginsight-node-2:~ # mv /storage/core/loginsight/cidata/cassandra/data/system ~/cassandra_keyspace_files

    5. Reboot the vRealize Log Insight node.

    6. Confirm that it is running.

      loginsight-node-2:~ # uptime

      18:25pm up 0:02, 1 user, load average: 3.16, 1.07, 0.39

      loginsight-node-2:~ # service loginsight status

      Log Insight is running.

    In a few minutes, the bring-up process should successfully establish a connection to the vRealize Log Insight node and proceed.

  • Cloud Foundation Builder VM deployment fails with the "[Admin/Root] password does not meet standards" message

    When configuring the Cloud Foundation Builder admin and root passwords, the format restrictions are not validated. As a result, you can create a password that does not meet the requirements and the Cloud Foundation Builder VM deployment will fail.

    Workaround: When configuring the Cloud Foundation Builder, ensure that the password meets the following restrictions:

    • Minimum eight characters long

    • Must include at least one uppercase letter

    • Must include at least one lowercase letter

    • Must include at least one digit

    • Must include at least one special character

Upgrade Known Issues

  • Lifecycle Management displays fatal error

    When the user password in the /opt/vmware/vcf/lcm/lcm-app/conf/application.properties file contains a backslash (\), Lifecycle Manager does not start and displays the fatal error Password authentication failed for user lcm.

    Workaround: Follow the steps below to resolve the error:

    1. SSH to the SDDC Manager VM.

    2. Type su to switch to the root user.

    3. Open the /opt/vmware/vcf/lcm/lcm-app/conf/application.properties file, remove all backslashes (\) from the lcm.datasource.password field, and save the file.

    4. Run the command systemctl restart lcm-db.
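
    As an alternative to editing the file by hand, here is a minimal sketch of the same fix from the shell. It assumes the property key is lcm.datasource.password, as described above, and backs up the file first:

      # Back up the properties file before changing it
      cp /opt/vmware/vcf/lcm/lcm-app/conf/application.properties /opt/vmware/vcf/lcm/lcm-app/conf/application.properties.bak

      # Remove all backslashes from the lcm.datasource.password line only
      sed -i '/^lcm\.datasource\.password/ s/\\//g' /opt/vmware/vcf/lcm/lcm-app/conf/application.properties

      # Restart the LCM database service
      systemctl restart lcm-db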

  • NSX Data Center for vSphere upgrade fails with the message "Host Prep remediation failed"

    After addressing the issue, the NSX Data Center for vSphere bundle no longer appears as an available update.

    Workaround: To complete the upgrade, manually enable the anti-affinity rules.

    1. Log in to the management vCenter Server using the vSphere Client.

    2. Click Menu > Hosts and Clusters and select the cluster on which host prep remediation failed (for example SDDC-Cluster1).

    3. Click Configure > Configuration > VM/Host Rules.

    4. Select NSX Controller Anti-Affinity Rule and click Edit.

    5. Select Enable rule and click OK.

    This completes the NSX Data Center for vSphere upgrade.

  • When there is no associated workload domain to vRealize Automation, the VRA VM NODES CONSISTENCY CHECK upgrade precheck fails

    This upgrade precheck compares the content in the logical inventory on the SDDC Manager and the content in the vRealize Lifecycle Manager environment. When there is no associated workload domain, the vRealize Lifecycle Manager environment does not contain information about the iaasagent1.rainpole.local and iaasagent2.rainpole.local nodes. Therefore the check fails.

    Workaround: None. You can safely ignore a failed VRA VM NODES CONSISTENCY CHECK during the upgrade precheck. The upgrade will succeed even with this error.

  • Error upgrading vRealize Automation

    Under certain circumstances, upgrading vRealize Automation may fail with a message similar to:

    An automated upgrade has failed. Manual intervention is required. 
    vRealize Suite Lifecycle Manager Pre-upgrade checks for vRealize Automation have failed: 
    vRealize Automation Validations : iaasms1.rainpole.local : RebootPending : Check if reboot is pending : Reboot the machine. 
    vRealize Automation Validations : iaasms2.rainpole.local : RebootPending : Check if reboot is pending : Reboot the machine. 
    Please retry the upgrade once the upgrade is available again.

    Workaround:

    1. Log in to the first VM listed in the error message using RDP or the VMware Remote Console.

    2. Reboot the VM.

    3. Wait 5 minutes after the login screen of the VM appears.

    4. Repeat steps 1-3 for the next VM listed in the error message.

    5. Once you have restarted all the VMs listed in the error message, retry the vRealize Automation upgrade.

  • The vRealize Automation upgrade reports the "Precheck Execution Failure : Make sure the latest version of VMware Tools is installed" message

    The vRealize Automation IaaS VMs must have the same version of VMware Tools as the ESXi hosts on which the VMs reside.

    Workaround: Upgrade VMware Tools on the vRealize Automation IaaS VMs.

vRealize Suite Known Issues

  • vRealize Log Insight installation gets stuck due to incorrect MTU configuration

    During deployment, Edge Service Gateways send frames with the MTU specified in the Universal Distributed Logical Router - MTU Size field in the deployment parameters file to the Top of Rack switches. If this MTU size is not configured correctly in your infrastructure, the vRealize Log Insight deployment may hang on an installation task after the Apply vRealize Log Insight License task.

    If any of these tasks remains incomplete for more than 30 minutes, follow the workaround below.

    1. Fix the routing between the Edge-vTEP network and the ESXi-vTEP network.

    2. SSH to the Cloud Builder VM.

    3. Switch to the root user:

      sudo -i

    4. Restart the bring-up service:

      systemctl restart vcf-bringup

    5. Wait five minutes for all services to come back online.

    6. Retry the bring-up process.
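
    To confirm that the configured MTU actually traverses the path between VTEPs, you can run a don't-fragment ping from an ESXi host. This is a sketch: vmk10 and the destination VTEP IP are placeholders for your environment, and the 8972-byte payload assumes a 9000-byte MTU (adjust to your configured MTU minus 28 bytes of ICMP/IP overhead):

      # Send a near-MTU-sized, don't-fragment ping across the VXLAN netstack
      vmkping ++netstack=vxlan -d -s 8972 -I vmk10 <destination-vtep-ip>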

  • The password update for vRealize Automation and vRealize Operations Manager may run indefinitely or fail when the password contains the special character "%"

    Password management uses the vRealize Lifecycle Manager API to update the passwords of vRealize Automation and vRealize Operations Manager. When the SSH, API, or Administrator credentials of a vRealize Automation or vRealize Operations Manager user contain the special character "%", the vRealize Lifecycle Manager API hangs and does not respond to password management. After a five-minute timeout, password management marks the operation as failed.

    Workaround: Retry the password update operation without the special character "%". Ensure that the passwords for all other vRealize Automation and vRealize Operations Manager accounts don't contain the "%" special character.
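
    A simple pre-check before submitting a password update, where NEW_PASS is a placeholder for the candidate password:

      # Reject candidate passwords containing the problematic character
      NEW_PASS='<candidate-password>'
      case "$NEW_PASS" in
        *%*) echo "Password contains '%'; choose a different value." >&2 ;;
        *)   echo "Password is safe for vRealize password management." ;;
      esac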

  • vRealize Operations Manager: VMware Security Advisory VMSA-2021-0018

    VMSA-2021-0018 describes security vulnerabilities that affect VMware Cloud Foundation.

    • The vRealize Operations Manager API contains an arbitrary file read vulnerability. A malicious actor with administrative access to the vRealize Operations Manager API can read any arbitrary file on the server, leading to information disclosure. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned identifier CVE-2021-22022 to this issue.

    • The vRealize Operations Manager API contains an insecure object reference vulnerability. A malicious actor with administrative access to the vRealize Operations Manager API may be able to modify other users' information, leading to an account takeover. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned identifier CVE-2021-22023 to this issue.

    • The vRealize Operations Manager API contains an arbitrary log-file read vulnerability. An unauthenticated malicious actor with network access to the vRealize Operations Manager API can read any log file, resulting in sensitive information disclosure. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned identifier CVE-2021-22024 to this issue.

    • The vRealize Operations Manager API contains a broken access control vulnerability leading to unauthenticated API access. An unauthenticated malicious actor with network access to the vRealize Operations Manager API can add new nodes to an existing vROps cluster. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned identifier CVE-2021-22025 to this issue.

    • The vRealize Operations Manager API contains a Server Side Request Forgery vulnerability in multiple endpoints. An unauthenticated malicious actor with network access to the vRealize Operations Manager API can perform a Server Side Request Forgery attack, leading to information disclosure. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned identifiers CVE-2021-22026 and CVE-2021-22027 to this issue.

    Workaround: See KB 85452 for information about applying vRealize Operations Security Patches that resolve the issues.

Networking Known Issues

  • NSX Manager is not visible in the vSphere Web Client.

    In addition to NSX Manager not being visible in the vSphere Web Client, the following error message displays in the NSX Home screen: "No NSX Managers available. Verify current user has role assigned on NSX Manager." This issue occurs when vCenter Server is not correctly configured for the account that is logged in.

    Workaround: To resolve this issue, follow the procedure detailed in Knowledge Base article 2080740 "No NSX Managers available" error in the vSphere Web Client.

SDDC Manager Known Issues

  • SDDC Manager cannot manage the passwords for the NSX Edges and UDLR/DLR deployed to support application virtual networking

    These passwords are not managed through the SDDC Manager Dashboard.

    Workaround: Refer to the NSX Data Center for vSphere documentation for information about how to update these passwords.

  • APIs for managing SDDC cannot be executed from the SDDC Manager Dashboard

    You cannot use the API Explorer in the SDDC Manager Dashboard to execute the APIs for managing SDDC (/v1/sddc).

    Workaround: None. These APIs can only be executed using the Cloud Builder as the host.
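
    For example, a sketch of invoking the SDDC API with the Cloud Builder appliance as the host. The hostname and credentials are placeholders, and the exact method and sub-paths for each operation are documented in the VMware Cloud Foundation API reference:

      # Call the SDDC API (/v1/sddc) against the Cloud Builder VM rather than SDDC Manager
      curl -k -u admin:'<password>' https://<cloud-builder-fqdn>/v1/sddc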

  • Unable to delete VI workload domain enabled for vRealize Operations Manager from SDDC Manager.

    Attempts to delete the vCenter adapter also fail, and return an SSL error.

    Workaround: Use the following procedure to resolve this issue.

    1. Create a vCenter adapter instance in vRealize Operations Manager, as described in Configure a vCenter Adapter Instance in vRealize Operations Manager. This step is required because the existing adapter was deleted by the failed workload domain deletion.

    2. Follow the procedure described in Knowledge Base article 56946.

    3. Restart the failed VI workload domain deletion workflow from the SDDC Manager interface.

Workload Domain Known Issues

  • Deleting an NSX-T workload domain or cluster containing a dead host fails at transport node deletion step

    During a delete NSX-T workflow involving a dead host, VMware Cloud Foundation attempts to update the corresponding transport node with uninstall mappings. This task fails because of the dead host, and the delete workflow fails.

    Workaround: Contact VMware Support.

  • Workload domain operations fail if cluster upgrade is in progress

    Workload domain operations cannot be performed while one or more clusters are being upgraded. The UI does not block such operations during an upgrade.

    Workaround: Do not perform any operations on the workload domain when a cluster upgrade is in progress.

  • Cluster is deleted even if VMs are up and running on the cluster

    When you delete a cluster, it gets deleted even if there are VMs running on the cluster. This includes critical VMs such as Edge VMs, which may prevent you from accessing your environment after the cluster gets deleted.

    Workaround: Migrate the VMs to a different cluster before deleting the cluster.

  • VI workload domain creation or expansion operations fail

    If there is a mismatch between the letter case (upper or lower) of an ESXi host's FQDN and the FQDN used when the host was commissioned, then workload domain creation and expansion may fail.

    Workaround: ESXi hosts should have lower case FQDNs and should be commissioned using lower case FQDNs.
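
    A quick way to confirm the case of a host's FQDN before commissioning it, as a sketch using standard ESXi Shell commands on each host:

      # Extract the FQDN the host reports about itself
      fqdn=$(esxcli system hostname get | awk -F': ' '/Fully Qualified Domain Name/ {print $2}')

      # Warn if the FQDN contains any uppercase letters
      case "$fqdn" in
        *[A-Z]*) echo "WARNING: $fqdn contains uppercase letters; commission the host using the lowercase form." ;;
        *)       echo "OK: $fqdn is all lowercase." ;;
      esac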

  • Creating an NSX-T workload domain fails on the task "Add management domain vCenter as compute manager"

    This can happen if a previous attempt to create an NSX-T workload domain failed and Cloud Foundation was unable to clean up after the failed task.

    Workaround: Manually remove the NSX-T Data Center extension from the management vCenter Server and try to create the NSX-T workload domain again. See Remove NSX-T Data Center Extension from vCenter Server.

  • Operations on NSX-T workload domains fail if host FQDNs include uppercase letters

    If the FQDNs of ESXi hosts in an NSX-T workload domain include uppercase letters, then the following operations may fail for the workload domain:

    • Add a host

    • Remove a host

    • Add a cluster

    • Remove a cluster

    • Delete the workload domain

    Workaround: See KB 76553.

  • Add cluster operation fails

    Adding a cluster to a workload domain with 50 or more VMware ESXi nodes may fail.

    Workaround: Contact VMware Support for help.

  • The certificate rotate operation on the second NSX-T domain fails

    Certificate rotation works on the first NSX-T workload domain in your environment, but fails on all subsequent NSX-T workload domains.

    Workaround: None

  • vSAN partition and critical alerts are generated when the witness MTU is not set to 9000

    If the MTU of the witness switch in the witness appliance is not set to 9000, a vSAN stretched cluster partition may occur.

    Workaround: Set the MTU of the witness switch in the witness appliance to 9000.
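
    The MTU can be checked and set from the ESXi Shell of the witness appliance. This is a sketch that assumes the default witness appliance names (witnessSwitch and vmk1); verify the switch and VMkernel interface names in your deployment before running it:

      # List vSwitches and their current MTU values
      esxcfg-vswitch -l

      # Set the witness vSwitch and its VMkernel interface to MTU 9000
      esxcfg-vswitch -m 9000 witnessSwitch
      esxcli network ip interface set -i vmk1 -m 9000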

  • If the witness ESXi version does not match with the host ESXi version in the cluster, vSAN cluster partition may occur

    The vSAN stretch cluster workflow does not check the ESXi version of the witness host. If the witness ESXi version does not match the host ESXi version in the cluster, a vSAN cluster partition may occur.

    Workaround:

    1. Upgrade the witness host manually to the matching ESXi version by using vCenter Update Manager (VUM).

    2. Replace or deploy a witness appliance that matches the ESXi version.

  • Deploying partner services on an NSX-T workload domain displays an error

    Deploying partner services, such as McAfee or Trend Micro, on an NSX-T workload domain displays the “Configure NSX at cluster level to deploy Service VM” error.

    Workaround: Attach the Transport node profile to the cluster and try deploying the partner service. After the service is deployed, detach the transport node profile from the cluster.

  • A vCenter Server on which certificates have been rotated is not accessible from a Horizon workload domain

    VMware Cloud Foundation does not support certificate rotation on Horizon workload domains.

    Workaround: See KB article 70956.

  • NSX Manager for VI workload domain is not displayed in vCenter

    Although NFS-based VI workload domains are created successfully, the NSX Manager VM is not registered in vCenter Server and is not displayed in vCenter.

    Workaround: To resolve this issue, use the following procedure:

    1. Log in to NSX Manager (http://<nsxmanager IP>).

    2. Navigate to Manage > NSX Management Service.

    3. Un-register the lookup service and vCenter, then re-register.

    4. Close the browser and log in to vCenter.

  • Adding host fails when host is on a different VLAN

    A host add operation can sometimes fail if the host is on a different VLAN.

    Workaround:

    1. Before adding the host, add a new portgroup to the VDS for that cluster.

    2. Tag the new portgroup with the VLAN ID of the host to be added.

    3. Add the host. This workflow fails at the "Migrate host vmknics to dvs" operation.

    4. Locate the failed host in vCenter, and migrate the vmk0 of the host to the new portgroup you created in step 1. For more information, see Migrate VMkernel Adapters to a vSphere Distributed Switch in the vSphere product documentation.

    5. Retry the Add Host operation.

    NOTE: If you later remove this host, you must also manually remove the portgroup if it is not being used by any other host.

Security Operations Known Issues

  • Addition of members from PKS UAA to Harbor library fails when the certificate verification is enabled

    This issue occurs when Harbor does not honor the certificate chain under System Settings > Registry Root Certificate.

    Workaround:

    1. SSH into the SDDC Manager VM as the vcf user.

    2. Run the following command, updating the admin user's password and the Harbor URL as needed:

      curl -k -H 'Content-Type: application/json' -u admin:"< >" -X PUT https://harbor.vrack.vsphere.local/api/configurations -d '{"uaa_verify_cert":"false"}'

    Harbor is in UAA authentication mode and uses members from PKS UAA.

    To create a user in UAA:

    1. Connect to the Ops Manager appliance through SSH.

    2. Run the following commands:

      uaac target https://pks.vrack.vsphere.local:8443 --skip-ssl-validation

      uaac token client get admin

      uaac user add <user-name> --emails <email>
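
    For example, with hypothetical values filled in:

      uaac user add jsmith --emails jsmith@example.com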

Multi-Instance Management Known Issues

  • Multi-Instance Management Dashboard operation fails

    After a controller joins or leaves a federation, Kafka is restarted on all controllers in the federation. It can take up to 15 minutes for the federation to stabilize. Any operations performed on the dashboard during this time may fail.

    Workaround: Retry the operation.

  • Federation creation information not displayed if you leave the Multi-Instance Management Dashboard

    Federation creation progress is displayed on the Multi-Instance Management Dashboard. If you navigate to another screen and then return to the Multi-Instance Management Dashboard, progress messages are not displayed. Instead, an empty map with no Cloud Foundation instances is displayed until the federation is created.

    Workaround: Stay on the Multi-Instance Management Dashboard until the task is complete. If you have navigated away, wait approximately 20 minutes and then return to the dashboard, by which time the operation should have completed.

API Known Issues

  • Unversioned APIs are not supported

    Unversioned APIs in Cloud Foundation have been deprecated.

    Workaround: Use the Cloud Foundation public APIs.
