
VMware Cloud Foundation 3.9.1 | 14 JAN 2020 | Build 15345960

VMware Cloud Foundation is a unified SDDC platform that brings together VMware ESXi, VMware vSAN, VMware NSX, and optionally, vRealize Suite components, VMware NSX-T, VMware Enterprise PKS, and VMware Horizon 7 into a natively integrated stack to deliver enterprise-ready cloud infrastructure for the private and public cloud. The Cloud Foundation 3.9.1 release continues to expand on SDDC automation, the VMware SDDC stack, and the partner ecosystem.

NOTE: VMware Cloud Foundation 3.9.1 must be installed as a new deployment or upgraded from VMware Cloud Foundation 3.9. For more information, see Installation and Upgrade Information below.

What's in the Release Notes

The release notes cover the following topics:

  • What's New
  • Cloud Foundation Bill of Materials (BOM)
  • VMware Software Edition License Information
  • Supported Hardware
  • Documentation
  • Browser Compatibility and Screen Resolutions
  • Installation and Upgrade Information
  • Resolved Issues
  • Known Issues

What's New

The VMware Cloud Foundation 3.9.1 release includes the following:

  • Application Virtual Networks (AVNs): Enable vRealize Suite deployment in NSX overlay networks. AVNs provide benefits for portability and failover for planned migration or disaster recovery. New installations of Cloud Foundation 3.9.1 use AVNs for vRealize Suite components. If you upgrade to Cloud Foundation 3.9.1, vRealize Suite components are deployed on a VLAN-backed Distributed Port Group by default. To enable AVNs and migrate vRealize Suite components to these networks for an upgraded system, contact VMware Support. For more information about AVNs, see http://blogs.vmware.com/cloud-foundation/2020/01/14/application-virtual-networks-with-VCF.
  • API support for multiple physical NICs and multiple vSphere Distributed Switches: The API now supports multiple combinations of vSphere Distributed Switches (vDS) and N-VDS switches using up to six physical NICs, providing more flexibility to support high performance use cases and physical traffic separation.
  • Cloud Builder improvements: The Cloud Builder UI includes several workflow improvements and provides access to a deployment report that details the tasks performed during bring-up.
  • Developer Center: Enables you to access Cloud Foundation APIs and code samples from the SDDC Manager Dashboard.
  • BOM Updates for the 3.9.1 Release: Updated Bill of Materials with new product versions.

Cloud Foundation Bill of Materials (BOM)

The Cloud Foundation software product comprises the following software Bill of Materials (BOM). The components in the BOM are interoperable and compatible.

Software Component | Version | Date | Build Number
Cloud Builder VM | 2.2.1.0 | 14 JAN 2020 | 15345960
SDDC Manager | 3.9.1 | 14 JAN 2020 | 15345960
VMware vCenter Server Appliance | 6.7 Update 3b | 05 DEC 2019 | 15132721
VMware ESXi | 6.7 Update 3b | 05 DEC 2019 | 15160138
VMware vSAN | 6.7 Update 3b | 05 DEC 2019 | 14840357
VMware NSX Data Center for vSphere | 6.4.6 | 10 OCT 2019 | 14819921
VMware NSX-T Data Center | 2.5 | 19 SEP 2019 | 14663974
VMware Enterprise PKS | 1.5 | 20 AUG 2019 | 14878150
VMware vRealize Suite Lifecycle Manager | 2.1 Patch 2 | 02 JUL 2019 | 14062628
VMware vRealize Log Insight | 4.8 | 11 APR 2019 | 13036238
vRealize Log Insight Content Pack for NSX for vSphere | 3.9 | n/a | n/a
vRealize Log Insight Content Pack for Linux | 2.0.1 | n/a | n/a
vRealize Log Insight Content Pack for vRealize Automation 7.5+ | 1.0 | n/a | n/a
vRealize Log Insight Content Pack for vRealize Orchestrator 7.0.1+ | 2.1 | n/a | n/a
vRealize Log Insight Content Pack for NSX-T | 3.8.2 | n/a | n/a
vSAN Content Pack for Log Insight | 2.2 | n/a | n/a
vRealize Operations Manager | 7.5 | 11 APR 2019 | 13165949
vRealize Automation | 7.6 | 11 APR 2019 | 13027280
VMware Horizon 7 | 7.10.0 | 17 SEP 2019 | 14584133

Note: 

  • vRealize Log Insight Content Packs are deployed during workload domain creation.
  • VMware Solution Exchange and the vRealize Log Insight in-product marketplace store only the latest versions of the content packs for vRealize Log Insight. The Bill of Materials table contains the latest versions of the packs that were available at the time VMware Cloud Foundation was released. When you deploy the Cloud Foundation components, the version of a content pack in the vRealize Log Insight in-product marketplace might be newer than the one used for this release.

VMware Software Edition License Information

The SDDC Manager software is licensed under the Cloud Foundation license. As part of this product, the SDDC Manager software deploys specific VMware software products.

The following VMware software components deployed by SDDC Manager are licensed under the Cloud Foundation license:

  • VMware ESXi
  • VMware vSAN
  • VMware NSX Data Center for vSphere

The following VMware software components deployed by SDDC Manager are licensed separately:

  • VMware vCenter Server
    NOTE Only one vCenter Server license is required for all vCenter Servers deployed in a Cloud Foundation system.
  • VMware NSX-T
  • VMware Enterprise PKS
  • VMware Horizon 7
  • VMware vRealize Automation
  • VMware vRealize Operations
  • VMware vRealize Log Insight and content packs
    NOTE Cloud Foundation permits limited use of vRealize Log Insight for the management domain without the purchase of a vRealize Log Insight license.

For details about the specific VMware software editions that are licensed under the licenses you have purchased, see the Cloud Foundation Bill of Materials (BOM) section above.

For general information about the product, see VMware Cloud Foundation.

Supported Hardware

For details on vSAN Ready Nodes in Cloud Foundation, see VMware Compatibility Guide (VCG) for vSAN and the Hardware Requirements section in the VMware Cloud Foundation Planning and Preparation Guide.

Documentation

To access the Cloud Foundation 3.9.1 documentation, go to the VMware Cloud Foundation product documentation.

To access the documentation for VMware software products that SDDC Manager can deploy, see the product documentation and use the drop-down menus on the page to choose the appropriate version.

Browser Compatibility and Screen Resolutions

The Cloud Foundation web-based interface supports the two most recent versions of the following web browsers, except for Internet Explorer, for which only version 11 is supported:

  • Google Chrome
  • Mozilla Firefox
  • Microsoft Edge
  • Internet Explorer 11

For the Web-based user interfaces, the supported standard resolution is 1024 by 768 pixels. For best results, use a screen resolution within these tested resolutions:

  • 1024 by 768 pixels (standard)
  • 1366 by 768 pixels
  • 1280 by 1024 pixels
  • 1680 by 1050 pixels

Resolutions below 1024 by 768, such as 640 by 960 or 480 by 800, are not supported.

Installation and Upgrade Information

You can install Cloud Foundation 3.9.1 as a new release or upgrade from VMware Cloud Foundation 3.9.

In addition to the release notes, see the VMware Cloud Foundation Upgrade Guide for information about the upgrade process.

Installing as a New Release

The new installation process has three phases:

Phase One: Prepare the Environment

The VMware Cloud Foundation Planning and Preparation Guide provides detailed information about the software, tools, and external services that are required to implement a Software-Defined Data Center (SDDC) with VMware Cloud Foundation, using a standard architecture model.

Phase Two: Image all servers with ESXi

Image all servers with the ESXi version mentioned in the Cloud Foundation Bill of Materials (BOM) section. See the VMware Cloud Foundation Architecture and Deployment Guide for information on installing ESXi.
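
For example, after imaging you can confirm that each host is running the ESXi build listed in the BOM (build 15160138 for ESXi 6.7 Update 3b) by running the following commands in an SSH or ESXi Shell session on the host:

  vmware -vl
  esxcli system version get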

Phase Three: Install Cloud Foundation 3.9.1

Refer to the VMware Cloud Foundation Architecture and Deployment Guide for information on deploying Cloud Foundation.

Upgrade to Cloud Foundation 3.9.1

You can upgrade to Cloud Foundation 3.9.1 only from 3.9. If you are at a version earlier than 3.9, refer to the 3.9 Release Notes for information on how to upgrade from the prior releases.

For information on upgrading to 3.9.1, refer to the VMware Cloud Foundation Upgrade Guide.

Resolved Issues

The following issues have been resolved in this release:
    • Adding a cluster to an NSX-T workload domain fails
    • The deletion of the domain does not clean up the NSX-T VIBS
    • The SoS clean-up does not clean hosts used in an NSX-T workload domain
    • Clicking the help icon in the Cloud Builder VM opens the help for an older release
    • Adding a cluster fails at validating the network connectivity of the hosts
    • During the upgrade of VMware Cloud Foundation, the upgrade UI screen does not update or auto refresh, even though the upgrade is successful
    • If an NSX Edge node is removed after the NSX-T upgrade is initiated through Lifecycle Manager, the upgrade may hang
    • During an upgrade, the SDDC Manager UI service upgrade fails due to upgrade timeout
    • Platform audit for network connectivity validation fails
    • Some APIs display "404 Not Found" error in the Developer Center UI
    • The APIs for managing a host are missing the input specifications box in the Developer Center UI
    • The Add Cluster operation fails with an "Insufficient Hosts" error for VMware vMotion and VMware vSAN
    • Removal of an unresponsive host from an NSX-T workload domain fails. Subsequently, the removal of the workload domain fails
    • Unable to delete cluster that includes an unresponsive host from an NSX Data Center for vSphere workload domain
    • The “Got bad CSRF token; invalid CSRF token” error message appears
    • The removal of an NSX-T workload domain host fails during the transport node deletion phase
    • When you select NSXT_CONTROLLER in the supported entityTypes drop down, an empty list is returned
    • When you create two NSX-T workload domains, the transport nodes from the second workload domain stay in the not-configured state
    • Unable to perform password management operations from the SDDC Manager Dashboard
    • When a new controller is added to a federation, the status turns red for all the members
    • After updating the via.properties file on the Cloud Foundation Builder VM, restarting the imaging service fails

Known Issues

The known issues are grouped as follows.

Bring-Up Known Issues
  • Cloud Foundation Builder VM deployment fails with the "[Admin/Root] password does not meet standards" message

    When configuring the Cloud Foundation Builder admin and root passwords, the format restrictions are not validated. As a result, you can create a password that does not meet the requirements and the Cloud Foundation Builder VM deployment will fail. 

    Workaround: When configuring the Cloud Foundation Builder, ensure that the password meets the following requirements (a quick way to check a candidate password is sketched after this list):

    • Minimum eight characters long
    • Must include at least one uppercase letter
    • Must include at least one lowercase letter
    • Must include at least one digit 
    • Must include at least one special character
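
    As an informal pre-deployment check, you can verify a candidate password against these rules from any Linux shell. This is only a sketch; VMware123! is a sample value, not a recommended password:

      PASS='VMware123!'
      echo "$PASS" | grep -q '.\{8,\}' || echo "too short"
      echo "$PASS" | grep -q '[A-Z]' || echo "missing uppercase letter"
      echo "$PASS" | grep -q '[a-z]' || echo "missing lowercase letter"
      echo "$PASS" | grep -q '[0-9]' || echo "missing digit"
      echo "$PASS" | grep -q '[^A-Za-z0-9]' || echo "missing special character"
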
  • The bring-up process fails at the task that disables TLS 1.0 on the vRealize Log Insight nodes

    The bring-up fails at the task that disables TLS 1.0 on the vRealize Log Insight nodes with the following error: Connect to 10.0.0.17:9543 [/10.0.0.17] failed: Connection refused (Connection refused). This issue has been observed in slow environments after restarting a vRealize Log Insight node. The node does not start correctly and its API is not reachable.

    Workaround: Use the following procedure to work around this issue.

    1. Restart the failed bring-up execution in the Cloud Foundation Builder VM and open the bring-up logs.
      This retries the failed bring-up task, which might still fail on the initial attempt. The log shows an unsuccessful connection to the vRealize Log Insight node.
    2. While bring-up is still running, use SSH to log in to the vRealize Log Insight node that is shown as failed in the bring-up log.
    3. Run the following command to determine the connection issue.
      loginsight-node-2:~ # service loginsight status
      It should confirm that the daemon is not running.
    4. Execute the following command:
      loginsight-node-2:~ # mv /storage/core/loginsight/cidata/cassandra/data/system ~/cassandra_keyspace_files
    5. Reboot the vRealize Log Insight node.
    6. Confirm that it is running.
      loginsight-node-2:~ # uptime
      18:25pm up 0:02, 1 user, load average: 3.16, 1.07, 0.39
      loginsight-node-2:~ # service loginsight status
      Log Insight is running.

    In a few minutes, the bring-up process should successfully establish a connection to the vRealize Log Insight node and proceed.

  • The Cloud Foundation Builder VM remains locked after more than 15 minutes.

    The VMware Imaging Appliance (VIA) locks out the user after three unsuccessful login attempts. Normally, the lockout resets after fifteen minutes, but the underlying Cloud Foundation Builder VM account does not automatically unlock.

    Workaround: Using SSH, log in as admin to the Cloud Foundation Builder VM, then switch to the root user. Unlock the account by resetting its failed login counter with the following command.
    pam_tally2 --user=<user> --reset
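
    For example, assuming the locked account is admin, you can first view the failure count and then clear it:

      pam_tally2 --user=admin
      pam_tally2 --user=admin --reset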

Upgrade Known Issues
  • vCenter upgrade operation fails on the management domain and workload domain

    vCenter Server fails to upgrade because the lcm-bundle-repo NFS mount on the host is inaccessible.

    Workaround: Remove and remount the SDDC Manager NFS datastore on the affected ESXi hosts. Use the showmount command to check that all hosts are displayed in the SDDC Manager mount list.
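
    For example, a minimal sketch, assuming showmount is available on the SDDC Manager VM and the NFS volume is named lcm-bundle-repo (use the export path and volume name reported in your environment):

      # On the SDDC Manager VM: list the exported share and the hosts that currently mount it
      showmount -e localhost
      showmount -a
      # On each affected ESXi host: check, remove, and re-add the NFS mount
      esxcli storage nfs list
      esxcli storage nfs remove -v lcm-bundle-repo
      esxcli storage nfs add -H <sddc-manager-ip> -s <export-path-from-showmount> -v lcm-bundle-repo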

  • The vRealize Automation upgrade reports the "Precheck Execution Failure : Make sure the latest version of VMware Tools is installed" message

    The vRealize Automation IaaS VMs must have the same version of VMware Tools as the ESXi hosts on which the VMs reside.

    Workaround: Upgrade VMware Tools on the vRealize Automation IaaS VMs.

  • Error upgrading vRealize Automation

    Under certain circumstances, upgrading vRealize Automation may fail with a message similar to:

    An automated upgrade has failed. Manual intervention is required.
    vRealize Suite Lifecycle Manager Pre-upgrade checks for vRealize Automation have failed:
    vRealize Automation Validations : iaasms1.rainpole.local : RebootPending : Check if reboot is pending : Reboot the machine.
    vRealize Automation Validations : iaasms2.rainpole.local : RebootPending : Check if reboot is pending : Reboot the machine.
    Please retry the upgrade once the upgrade is available again. 

    Workaround:

    1. Log in to the first VM listed in the error message using RDP or the VMware Remote Console.
    2. Reboot the VM.
    3. Wait 5 minutes after the login screen of the VM appears.
    4. Repeat steps 1-3 for the next VM listed in the error message.
    5. Once you have restarted all the VMs listed in the error message, retry the vRealize Automation upgrade.

  • The vRealize Log Insight pre-check may fail for the consistency checks for the vRealize Log Insight - vRealize Lifecycle Manager Environment Master and Environment Nodes

    This is a known issue caused by a discrepancy between the host names in the SDDC Manager and vRealize Lifecycle Manager inventories.

    Workaround:

    1. Log in to vRealize Lifecycle Manager.
    2. Click View Details for vRLI_environment on the Getting Started page.
    3. Click View Details.
    4. Expand nodes one by one and check the hostname field.
    5. If the field contains only the host name (for example, loginsight-node-1) and not FQDN (for example, loginsight-node-1.vrack.vsphere.local), ignore this error in the pre-check validation.

  • When no workload domain is associated with vRealize Automation, the VRA VM NODES CONSISTENCY CHECK upgrade precheck fails

    This upgrade precheck compares the content in the logical inventory on the SDDC Manager and the content in the vRealize Lifecycle Manager environment. When there is no associated workload domain, the vRealize Lifecycle Manager environment does not contain information about the iaasagent1.rainpole.local and iaasagent2.rainpole.local nodes. Therefore the check fails.

    Workaround: None. You can safely ignore a failed VRA VM NODES CONSISTENCY CHECK during the upgrade precheck. The upgrade will succeed even with this error.

  • Cluster level upgrade is not available if the workload domain has a faulty cluster

    This issue occurs if any host or cluster in the workload domain is in an error state. 

    Workaround: Remove the faulty host or cluster from the workload domain. The cluster level upgrade option is then available for the workload domain.

  • Upgrade task status may be reported incorrectly in the SDDC Manager Dashboard Tasks panel

    Under certain circumstances, the Tasks panel may report a status of Running, even though the upgrade task has completed successfully.

    Workaround: Check the status on the Upgrades/Patches tab or the Update History tab for the workload domain you are updating.

  • NSX Data Center for vSphere upgrade fails with the message "Host Prep remediation failed"

    After addressing the issue, the NSX Data Center for vSphere bundle no longer appears as an available update.

    Workaround: To complete the upgrade, manually enable the anti-affinity rules.

    1. Log in to the management vCenter Server using the vSphere Client.
    2. Click Menu > Hosts and Clusters and select the cluster on which host prep remediation failed (for example SDDC-Cluster1).
    3. Click Configure > Configuration > VM/Host Rules.
    4. Select NSX Controller Anti-Affinity Rule and click Edit.
    5. Select Enable rule and click OK.

    This completes the NSX Data Center for vSphere upgrade.

  • Using the API to attempt to upgrade multiple clusters only upgrades one cluster

    The API for upgrading clusters allows you to enter multiple clusters, but only one cluster gets updated.

    Workaround: Use the SDDC Manager UI to upgrade multiple clusters at once, or use the API to upgrade clusters one at a time.

vRealize Integration Known Issues
  • vRealize Operations deployment fails when vRealize Operations appliances are in a different subdomain

    When you deploy vRealize Operations, you provide FQDN values for the vRealize load balancer and nodes. If these FQDNs are in a different domain than the one used during initial bringup, the deployment may fail.

    Workaround: To resolve this failure, add the vRealize Operations domain to the configuration in the vRealize Log Insight VMs.

    1. Log in to the first vRealize Log Insight VM.
    2. Open the /etc/resolv.conf file in a text editor, and locate the following lines:
      nameserver 10.0.0.250
      nameserver 10.0.0.250
      domain vrack.vsphere.local
      search vrack.vsphere.local vsphere.local 
    3. Add the domain used for vRealize Operations to the search line shown above (see the example after this procedure).
    4. Repeat on each vRealize Log Insight VM.
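
    For example, if the vRealize Operations FQDNs use the hypothetical domain sfo01.rainpole.local, the updated search line would read:

      search vrack.vsphere.local vsphere.local sfo01.rainpole.local
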
  • The password update for vRealize Automation and vRealize Operations Manager may run indefinitely or may fail when the password contains the special character "%"

    Password management uses the vRealize Lifecycle Manager API to update the passwords of vRealize Automation and vRealize Operations Manager. If the SSH, API, or Administrator credentials of the vRealize Automation or vRealize Operations Manager users contain the special character "%", the vRealize Lifecycle Manager API hangs and does not respond to password management. After a five-minute timeout, password management marks the operation as failed.

    Workaround: Retry the password update operation without the special character "%". Ensure that the passwords for all other vRealize Automation and vRealize Operations Manager accounts don't contain the "%" special character.

Networking Known Issues
  • NSX Manager is not visible in the vSphere Web Client.

    In addition to NSX Manager not being visible in the vSphere Web Client, the following error message displays in the NSX Home screen: "No NSX Managers available. Verify current user has role assigned on NSX Manager." This issue occurs when vCenter Server is not correctly configured for the account that is logged in.

    Workaround: To resolve this issue, follow the procedure detailed in Knowledge Base article 2080740 "No NSX Managers available" error in the vSphere Web Client.

SDDC Manager Known Issues
  • Unable to delete VI workload domain enabled for vRealize Operations Manager from SDDC Manager.

    Attempts to delete the vCenter adapter also fail, and return an SSL error.

    Workaround: Use the following procedure to resolve this issue.

    1. Create a vCenter adapter instance in vRealize Operations Manager, as described in Configure a vCenter Adapter Instance in vRealize Operations Manager.
      This step is required because the existing adapter was deleted by the failed workload domain deletion.
    2. Follow the procedure described in Knowledge Base article 56946.
    3. Restart the failed VI workload domain deletion workflow from the SDDC Manager interface.
  • APIs for managing SDDC cannot be executed from the SDDC Manager Dashboard

    You cannot use the API Explorer in the SDDC Manager Dashboard to execute the APIs for managing SDDC (/v1/sddc). 

    Workaround: None. These APIs can only be executed using the Cloud Builder as the host.

  • Host commissioning fails if the network pool does not have sufficient free IP addresses

    When you commission hosts, the operation will fail if the network pool does not have enough free IP addresses to support the number of hosts being commissioned. You will see an error similar to:

    Network pool does not have sufficient IP addresses.

    Workaround: Add IP addresses to the network pool and retry the host commission operation.

  • NTP/DNS server is not updated for NSX-T Managers

    The APIs for updating the NTP and DNS servers used by Cloud Foundation do not update the NSX-T Managers. The workflow succeeds and updates all components except for NSX-T Managers.

    Workaround:

    1. Using SSH, log in to the SDDC Manager as vcf.
    2. Get a list of the NSX-T Managers:

      curl localhost/v1/nsxt-clusters | json_pp

      Take note of the FQDNs for the NSX-T Managers.

    3. Get the login credentials of the NSX-T Managers.

      For example: curl localhost/v1/credentials -H 'privileged-username: vcf-secure-user@vsphere.local' -H 'privileged-password: VMware123!' -u 'admin:VMware123!' | json_pp

      Replace the privileged username, privileged password, and basic authentication values to match your environment.

    4. If you are updating the DNS server, get the current DNS server information for each NSX-T Manager.

      For example: curl -u 'admin:dP8^48z3Qf^iDY@' https://vi-nsxt-manager1.sfo01.rainpole.local/api/v1/node/network/name-servers -k

      Replace the NSX-T Manager credentials and FQDN in the example with the information gathered in steps 2 and 3. Repeat this step for each NSX-T Manager.

    5. Configure the NSX-T Managers with the new DNS server.

      For example: curl -u 'admin:dP8^48z3Qf^iDY@' -X PUT -H 'Content-type: application/json' https://vi-nsxt-manager1.sfo01.rainpole.local/api/v1/node/network/name-servers -d '{"name_servers":["10.0.0.250"]}' -k

      Replace the NSX-T Manager credentials and FQDN in the example with the information gathered in steps 2 and 3. Replace 10.0.0.250 with the new DNS server for your environment. Repeat this step for each NSX-T Manager.

    6. Verify the update.

      For example: curl -u 'admin:dP8^48z3Qf^iDY@' https://vi-nsxt-manager1.sfo01.rainpole.local/api/v1/node/network/name-servers -k

      Replace the NSX-T Manager credentials and FQDN in the example with the information gathered in steps 2 and 3. Repeat this step for each NSX-T Manager.

    7. If you are updating the NTP server, get the current NTP server information for each NSX-T Manager.

      For example: curl -u 'admin:dP8^48z3Qf^iDY@' https://vi-nsxt-manager1.sfo01.rainpole.local/api/v1/node/services/ntp -k

      Replace the NSX-T Manager credentials and FQDN in the example with the information gathered in steps 2 and 3. Repeat this step for each NSX-T Manager.

    8. Configure the NSX-T Managers with the new NTP server.

      For example: curl -u 'admin:dP8^48z3Qf^iDY@' -X PUT -H 'Content-type: application/json' https://vi-nsxt-manager1.sfo01.rainpole.local/api/v1/node/services/ntp -d '{"service_name":"ntp","service_properties":{"servers":["10.0.0.250"]}}' -k

      Replace the NSX-T Manager credentials and FQDN in the example with the information gathered in steps 2 and 3. Replace 10.0.0.250 with the new NTP server for your environment. Repeat this step for each NSX-T Manager.

    9. Verify the updates.

      For example: curl -u 'admin:dP8^48z3Qf^iDY@' https://vi-nsxt-manager1.sfo01.rainpole.local/api/v1/node/services/ntp -k

      Replace the NSX-T Manager credentials and FQDN in the example with the information gathered in steps 2 and 3. Repeat this step for each NSX-T Manager.

  • SDDC Manager cannot manage the passwords for the NSX Edges and UDLR/DLR deployed to support application virtual networking

    These passwords are not managed through the SDDC Manager Dashboard.

    Workaround: Refer to the NSX Data Center for vSphere documentation for information about how to update these passwords.

Workload Domain Known Issues
  • Adding host fails when host is on a different VLAN

    A host add operation can sometimes fail if the host is on a different VLAN.

    Workaround:

    1. Before adding the host, add a new portgroup to the VDS for that cluster.
    2. Tag the new portgroup with the VLAN ID of the host to be added.
    3. Add the Host. This workflow fails at the "Migrate host vmknics to dvs" operation.
    4. Locate the failed host in vCenter, and migrate the vmk0 of the host to the new portgroup you created in step 1.
      For more information, see Migrate VMkernel Adapters to a vSphere Distributed Switch in the vSphere product documentation.
    5. Retry the Add Host operation.

    NOTE: If you later remove this host, you must also manually remove the port group if it is not being used by any other host.

  • NSX Manager for VI workload domain is not displayed in vCenter

    Although NFS-based VI workload domains are created successfully, the NSX Manager VM is not registered in vCenter Server and is not displayed in vCenter.

    Workaround: To resolve this issue, use the following procedure:

    1. Log in to NSX Manager (http://<nsxmanager IP>).
    2. Navigate to Manage > NSX Management Service.
    3. Un-register the lookup service and vCenter, then re-register.
    4. Close the browser and log in to vCenter.
  • You are not able to add a cluster or a host to an NSX-T workload domain that has a dead host

    If one of the hosts in the workload domain goes dead and you try to remove it, the task fails. The host is then set to an inactive state with no option to forcefully remove it. In this condition, if you try to add a new cluster or a host to the workload domain, the task runs for a long time and eventually fails.

    Workaround: Bring the dead host back to a normal state. You can then add a cluster or a host.

  • A vCenter Server on which certificates have been rotated is not accessible from a Horizon workload domain

    Cloud Foundation does not support certificate rotation on Horizon workload domains.

    Workaround: Refer to https://kb.vmware.com/s/article/70956.

  • Deploying partner services on an NSX-T workload domain displays an error

    Deploying partner services, such as McAfee or Trend, on an NSX-T workload domain displays the “Configure NSX at cluster level to deploy Service VM” error.

    Workaround: Attach the transport node profile to the cluster and try deploying the partner service again. After the service is deployed, detach the transport node profile from the cluster.

  • If the witness ESXi version does not match the host ESXi version in the cluster, vSAN cluster partition may occur

    vSAN stretch cluster workflow does not check the ESXi version of the witness host. If the witness ESXi version does not match the host version in the cluster, then vSAN cluster partition may happen.

    Workaround:

    1. Upgrade the witness host manually to the matching ESXi version using the vCenter VUM functionality, or
    2. Replace or redeploy the witness appliance with a version that matches the ESXi version of the hosts in the cluster.

  • vSAN partition and critical alerts are generated when the witness MTU is not set to 9000

    If the MTU of the witness switch in the witness appliance is not set to 9000, the vSAN stretch cluster partition may occur.

    Workaround: Set the MTU of the witness switch in the witness appliance to 9000.
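
    For example, a minimal sketch, assuming the witness switch is the standard switch named witnessSwitch (verify the switch name in your environment), run the following from an SSH or ESXi Shell session on the witness appliance:

      esxcli network vswitch standard list
      esxcli network vswitch standard set -v witnessSwitch -m 9000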

  • The certificate rotate operation on the second NSX-T domain fails

    Certificate rotation works on the first NSX-T workload domain in your environment, but fails on all subsequent NSX-T workload domains.

    Workaround: None

  • Add cluster operation fails

    Adding a cluster to a workload domain with 50 or more VMware ESXi nodes may fail.

    Workaround: Contact VMware Support for help.

  • Operations on NSX-T workload domains fail if their host FQDNs include uppercase letters

    If the FQDNs of ESXi hosts in an NSX-T workload domain include uppercase letters, then the following operations may fail for the workload domain:

    • Add a host
    • Remove a host
    • Add a cluster
    • Remove a cluster
    • Delete the workload domain

    Workaround: See KB 76553.

  • Creating an NSX-T workload domain fails on the task "Add management domain vCenter as compute manager"

    This can happen if a previous attempt to create an NSX-T workload domain failed and Cloud Foundation was unable to clean up after the failed task.

    Workaround: Manually remove the NSX-T Data Center extension from the management vCenter Server and try to create the NSX-T workload domain again. See https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.5/administration/GUID-E6E2F017-1106-48C5-ABCA-3D3E9130A863.html.

  • VI workload domain creation or expansion operations fail

    If there is a mismatch between the letter case (upper or lower) of an ESXi host's FQDN and the FQDN used when the host was commissioned, then workload domain creation and expansion may fail.

    Workaround: ESXi hosts should have lowercase FQDNs and should be commissioned using lowercase FQDNs.

Security Operations Known Issues
  • Addition of members from PKS UAA to Harbor library fails when the certificate verification is enabled

     This issue occurs when Harbor does not honor the certificate chain under System Settings > Registry Root Certificate.

    Workaround:

    1. SSH into the SDDC Manager VM as the vcf user.

    2. Run the following command. Make sure to update the password of the admin user and the Harbor URL:
    curl -k -H'Content-type: application/json' -u admin:"< >" -XPUT https://harbor.vrack.vsphere.local/api/configurations -d '{"uaa_verify_cert":"false"}'

    Harbor is in the UAA authentication mode and it uses members from PKS UAA.

    To create a user in UAA:
    1. Connect through SSH to the Ops Manager appliance.
    2. Run the following commands:

    uaac target https://pks.vrack.vsphere.local:8443 --skip-ssl-validation

    uaac token client get admin

    uaac user add <user-name> --emails <email>

Multi-Instance Management Known Issues
  • Federation creation information not displayed if you leave the Multi-Instance Management Dashboard

    Federation creation progress is displayed on the Multi-Instance Management Dashboard. If you navigate to another screen and then return to the Multi-Instance Management Dashboard, progress messages are not displayed. Instead, an empty map with no Cloud Foundation instances is displayed until the federation is created.

    Workaround: Stay on the Multi-Instance Management Dashboard until the task is complete. If you have navigated away, wait approximately 20 minutes and then return to the dashboard, by which time the operation should have completed.

  • The federation creation progress is not displayed

    While federation creation is in progress, the SDDC Manager UI displays the progress on the multi-site page. If you navigate to any other screen and come back to the multi-site page, the progress messages are not displayed. An empty map with no VMware Cloud Foundation instances is displayed until the federation creation process completes.

    Workaround: None

  • Multi-Instance Management Dashboard operation fails

    After a controller joins or leaves a federation, Kafka is restarted on all controllers in the federation. It can take up to 15 minutes for the federation to stabilize. Any operations performed on the dashboard during this time may fail.

    Workaround: Retry the operation.