
VMware Cloud Foundation 4.0.1 | 25 JUN 2020 | Build 16428904

VMware Cloud Foundation 4.0.1.1 on Dell EMC VxRail | 06 AUG 2020 | Build 16660200

Check for additions and updates to these release notes.

What's New

These releases have been determined to be impacted by CVE-2020-4006. Fixes and Workarounds are available to address this vulnerability. For more information, see VMSA-2020-0027.

VMware Response to Apache Log4j Remote Code Execution Vulnerability: VMware Cloud Foundation is impacted by CVE-2021-44228, and CVE-2021-45046 as described in VMSA-2021-0028. To remediate these issues, see Workaround instructions to address CVE-2021-44228 & CVE-2021-45046 in VMware Cloud Foundation (KB 87095).

The VMware Cloud Foundation (VCF) 4.0.1 on Dell EMC VxRail release includes the following:

  • Multi-pNIC/multi-vDS during bring-up: The deployment parameter workbook now provides five vSphere Distributed Switch (vDS) profiles that allow you to perform bring-up of hosts with two, four, or six physical NICs (pNICs) and to create up to two vSphere Distributed Switches for isolating system (Management, vMotion, vSAN) traffic from overlay (Host, Edge, and Uplinks) traffic.
  • Multi-pNIC/multi-vDS API support: The API now supports configuring a second vSphere Distributed Switch (vDS) using up to four physical NICs (pNICs), providing more flexibility to support high performance use cases and physical traffic separation.
  • NSX-T cluster-level upgrade support: Users can upgrade specific host clusters within a workload domain so that the upgrade can fit into their maintenance windows.
  • Kubernetes in the management domain: vSphere with Kubernetes is now supported in the management domain. With VMware Cloud Foundation Workload Management, you can deploy vSphere with Kubernetes on the management domain default cluster with only 4 hosts.
  • API support for bring-up operations.
  • Automated externalization of the vCenter Server in the management domain: Externalizing the vCenter Server that gets created during the VxRail first run is now automated as part of the bring-up process.
  • L3 Aware IP Addressing (API only): VI workload domains now support the ability to use hosts from different L2 domains to create or expand clusters that use vSAN, NFS, or VMFS on FC as principal storage.
  • BOM Updates: Updated Bill of Materials with new product versions.

VMware Cloud Foundation on Dell EMC VxRail Bill of Materials (BOM)

The VMware Cloud Foundation software product comprises the following software Bill of Materials (BOM). The components in the BOM are interoperable and compatible.

Software Component                        Version      Date         Build Number
Cloud Builder VM                          4.0.1.0      25 JUN 2020  16428904
SDDC Manager                              4.0.1.0      25 JUN 2020  16428904
VxRail Manager                            7.0.000      12 MAY 2020  n/a
VMware vCenter Server Appliance           7.0.0b       23 JUN 2020  16386292
VMware NSX-T Data Center                  3.0.1        23 JUN 2020  16404613
VMware vRealize Suite Lifecycle Manager   8.1 Patch 1  21 MAY 2020  16256499
  • VMware ESXi and VMware vSAN are part of the VxRail BOM.
  • VMware Cloud Foundation supports, but does not automate, the deployment of VMware Horizon 7 version 7.12. You can deploy Horizon 7.12 on a workload domain using the Horizon 7.12 documentation.
  • You can use vRealize Suite Lifecycle Manager to deploy vRealize Automation 8.1, vRealize Operations Manager 8.1, and vRealize Log Insight 8.1 using the VMware Validated Design 6.0 documentation.
  • VMware Enterprise PKS is not supported with this release of Cloud Foundation.

Limitations

The following limitations apply to this release:

  • vSphere Lifecycle Manager (vLCM) is not supported on VMware Cloud Foundation on Dell EMC VxRail.
  • Customer-supplied vSphere Distributed Switch (vDS) is a new feature in VxRail Manager 7.0.010 that allows customers to create their own vDS and provide it as an input for the clusters they build with VxRail Manager. VMware Cloud Foundation on Dell EMC VxRail does not support clusters that use a customer-supplied vDS.

VMware Cloud Foundation 4.0.1.1 on Dell EMC VxRail Release Information

VMware Cloud Foundation on Dell EMC VxRail 4.0.1.1 was released on 06 AUG 2020. You can upgrade to VMware Cloud Foundation 4.0.1.1 from a 4.0.1 deployment.

NOTE: If you are upgrading from VMware Cloud Foundation 4.0.1 with VxRail Manager 7.0.000, you must upgrade to VMware Cloud Foundation 4.0.1.1 before you can upgrade to VxRail Manager 7.0.010. If you installed VMware Cloud Foundation 4.0.1 with VxRail Manager 7.0.010, then you can upgrade directly to VMware Cloud Foundation 4.0.1.1.

VMware Cloud Foundation on Dell EMC VxRail 4.0.1.1 contains the following BOM updates:

Software Component               Version  Date         Build Number
SDDC Manager                     4.0.1.1  06 AUG 2020  16660200
VxRail Manager                   7.0.010  30 JUL 2020  n/a
VMware vCenter Server Appliance  7.0.0c   30 JUL 2020  16620007

VMware vCenter Server Appliance 7.0.0c includes the following new features:

  • Supervisor cluster: new version of Kubernetes, support for custom certificates and PNID changes
    • The Supervisor cluster now supports Kubernetes 1.18.2 (along with 1.16.7 and 1.17.4)
    • Replacing machine SSL certificates with custom certificates is now supported
    • vCenter PNID update is now supported when there are Supervisor clusters in the vCenter
  • Tanzu Kubernetes Grid Service for vSphere: new features added for cluster scale-in, networking and storage
    • Cluster scale-in operation is now supported for Tanzu Kubernetes Grid service clusters
    • Ingress firewall rules are now enforced by default for all Tanzu Kubernetes Grid service clusters
    • New versions of Kubernetes now ship regularly and asynchronously from vSphere patches; the current versions are 1.16.8, 1.16.12, 1.17.7, and 1.17.8
  • Network service: new version of NCP
    • SessionAffinity is now supported for ClusterIP services
    • IngressClass, PathType, and Wildcard domain are supported for Ingress in Kubernetes 1.18
    • Client Auth is now supported in Ingress Controller
  • Registry service: new version of Harbor
    • The Registry service is now upgraded to Harbor 1.10.3

For more information and instructions on how to upgrade, refer to the Updating vSphere with Kubernetes Clusters documentation.

VMware vCenter Server Appliance 7.0.0c resolves the following issue:

  • Tanzu Kubernetes Grid Service cluster NTP sync issue

Resolved Issues

The following issues have been resolved:

  • Logging of credentials vulnerability as described in VMSA-2022-0003. See KB 87050 for more information.
  • Adding hosts with incorrect credentials locks out the ESXi account
  • Unable to reuse an existing NSX Manager cluster when creating a new VxRail VI workload domain
  • Validation APIs for domain, cluster, and host operations fail if you provide incorrect host credentials
  • You cannot delete a workload domain with a stretched cluster
  • Adding a vSphere cluster or adding a host to a workload domain fails
  • You cannot access VxRail Manager in vCenter Server after replacing its certificate
  • Bring-up fails with a password error
  • The VxRail vCenter Plugin UI options may disappear after replacing the OpenSSL/Microsoft certificates for all components, or for VxRail Manager alone

Known Issues

For VMware Cloud Foundation 4.0.1 and 4.0.1.1 known issues, see VMware Cloud Foundation 4.0 Known Issues.

The following known issues and limitations apply to VMware Cloud Foundation 4.0.1 and 4.0.1.1 on Dell EMC VxRail:

  • Upgrading VxRail Manager to 7.0.010 fails with the message VxRail component upgrade failed with error Auth Fail

    This failure occurs because the password of the mystic user is not migrated to the new VxRail Manager.

    Workaround: Update the mystic password on VxRail Manager to match the password stored in SDDC Manager.

    1. SSH to the SDDC Manager VM using the vcf user account.
    2. Enter the following command to retrieve the mystic password:

      lookup_passwords

      You will be required to enter the user name and the password for a user with the ADMIN role.

    3. SSH to the VxRail Manager using the mystic account and its default password.
    4. Enter the following command to reset the default mystic password to match the password retrieved from the SDDC Manager VM: passwd mystic
    5. Retry upgrading VxRail Manager.
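
    For reference, steps 1-4 condense to the following shell commands, run on the hosts indicated (both tools prompt interactively):

      # On the SDDC Manager VM, logged in as vcf (steps 1-2):
      lookup_passwords    # prompts for ADMIN-role credentials; note the mystic password

      # On the VxRail Manager VM, logged in as mystic (steps 3-4):
      passwd mystic       # set the password to the value retrieved above
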
  • Upgrading VxRail Manager to 7.0.010 fails with the message VxRail component upgrade failed with error HostKey has been changed

    This failure occurs because the VxRail Manager SSH RSA key is not migrated to the new VxRail Manager.

    Workaround: Update SDDC Manager's VxRail Manager SSH RSA key to match the SSH RSA key on the new VxRail Manager.

    1. SSH to the new VxRail Manager, and look up its SSH RSA public key from /etc/ssh/ssh_host_rsa_key.pub.
    2. SSH to the SDDC Manager VM and update the VxRail Manager SSH RSA key in /home/vcf/.ssh/known_hosts to match the SSH RSA key from step 1.
    3. Change the value of lcm.ssh.strictHostKeyCheck to "false" in /opt/vmware/vcf/lcm/lcm-app/conf/application-prod.properties.
    4. Restart the LCM service: systemctl restart lcm.
    5. Retry upgrading VxRail Manager.
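
    One way to script steps 1-4 with standard OpenSSH tooling is sketched below; the hostname vxrail-mgr.rainpole.local is a placeholder, and the exact property syntax in application-prod.properties is an assumption:

      # On the new VxRail Manager: print its RSA public host key (step 1)
      cat /etc/ssh/ssh_host_rsa_key.pub

      # On the SDDC Manager VM: drop the stale entry and record the new key (step 2)
      ssh-keygen -R vxrail-mgr.rainpole.local -f /home/vcf/.ssh/known_hosts
      ssh-keyscan -t rsa vxrail-mgr.rainpole.local >> /home/vcf/.ssh/known_hosts

      # Relax strict host key checking (step 3; property format assumed) and restart LCM (step 4)
      sed -i 's/lcm.ssh.strictHostKeyCheck=true/lcm.ssh.strictHostKeyCheck=false/' \
          /opt/vmware/vcf/lcm/lcm-app/conf/application-prod.properties
      systemctl restart lcm
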
  • Importing a cluster fails with the error System DVS cannot be null

    If you use the MultiDvsAutomator script to import a cluster with a separate vSphere Distributed Switch (vDS) for overlay traffic, the task fails.

    Workaround:

    1. In the vSphere Client, select Menu > Hosts and Clusters.
    2. Expand the cluster that you are trying to import.
    3. Select the first host in the cluster and click Configure > Networking > VMKernel adapters > vmk0.
    4. Click Edit, deselect Management under Available Services, and click OK.
    5. Repeat steps 3-4 for each host in the cluster.
    6. Retry the import cluster task using the MultiDvsAutomator script.
    7. In the vSphere Client, select the first host in the cluster and click Configure > Networking > VMKernel adapters > vmk0.
    8. Click Edit, select Management under Available Services, and click OK.
    9. Repeat steps 7-8 for each host in the cluster.
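
    If you prefer the ESXi Shell to the vSphere Client for steps 4 and 8, the standard esxcli service-tagging commands below are an equivalent way to toggle the Management service on vmk0:

      # Step 4 equivalent: disable the Management service on vmk0
      esxcli network ip interface tag remove -i vmk0 -t Management

      # Step 8 equivalent: re-enable the Management service on vmk0
      esxcli network ip interface tag add -i vmk0 -t Management
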
  • Cloud Builder appliance platform audit issues

    When you upload an XLS or JSON file with your SDDC configuration details to the Cloud Builder appliance, the Cloud Builder platform audit does not validate the FQDNs. If you enter FQDNs that do not match the information on the DNS server, then bring-up fails.

    Workaround: Make sure to verify that you have entered the correct values for FQDNs and that your infrastructure is prepared correctly before you start the bring-up process.
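
    As a quick pre-flight check, a short shell loop against your DNS server can confirm forward and reverse resolution before bring-up; the hostnames below are placeholders for the FQDNs in your configuration file:

      # Verify that each FQDN resolves, and that its address resolves back
      for fqdn in sddc-manager.rainpole.local vcenter-1.rainpole.local; do
        ip=$(dig +short "$fqdn" | head -n1)
        if [ -n "$ip" ]; then
          echo "$fqdn -> $ip -> $(dig +short -x "$ip")"
        else
          echo "$fqdn has no A record"
        fi
      done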

  • Adding a VxRail cluster to a workload domain fails

    If you upgraded to VMware Cloud Foundation 4.0.1 on Dell EMC VxRail and you try to add a VxRail cluster to a workload domain, the required NSX-T Data Center install bundle may not be available. In this case, the task fails with a message similar to: Failed to prepare input for NSXT. Error : Product NSX_T_MANAGER install image not found for version 3.0.1.0.0-16375037.

    Workaround: Upload the required install bundle and retry the task.

  • Adding a host to a cluster or stretching a cluster fails

    If your deployment is using multiple vSphere Distributed Switches (vDS), adding a host to a cluster or stretching a cluster could assign the incorrect vmnics to the overlay vDS for the new hosts, causing the task to fail.

    Workaround: Upgrade to Cloud Foundation 4.0.1.1, which resolves this issue. If you are still on Cloud Foundation 4.0.1, manually assign the correct vmnics to the overlay vDS and restart the task.

  • Adding a host to a cluster may fail with the error Management NIC does not have subnet mask, UNABLE_TO_RETRIEVE_HOST_ATTRIBUTES

    VMware Cloud Foundation on Dell EMC VxRail has two port groups for management traffic. In some cases, adding a host to a cluster may select NICs on the VxRail Management port group instead of the Management port group, causing the operation to fail.

    Workaround: Deactivate management traffic on the VxRail Management port group for all the hosts in the cluster, add the new hosts to the cluster, and then re-enable management traffic on all hosts.

    1. In the vSphere Client, navigate to the first host in the cluster.
    2. On the Configure tab, expand Networking and select VMkernel adapters.
    3. Select the VMkernel adapter for the VxRail Management port group and click Edit.

      The Network Label will include "VxRail Management".

    4. Deselect Management from the list of enabled services and click OK.
    5. Repeat the above steps for all the hosts in the cluster that you are expanding.
    6. From the SDDC Manager UI, retry adding the hosts to the cluster.
    7. Once the task completes successfully, go back to the vSphere Client and re-enable the Management service on VMkernel adapter for the VxRail Management port group for each host in the cluster.
  • Gateway timeout 504 error displayed during VxRail bundle upload

    VxRail bundle upload fails with the 504 Gateway Time-out error. This issue only occurs if you upgraded to VMware Cloud Foundation 4.0.1 on Dell EMC VxRail.

    Workaround:

    1. Open the /etc/nginx/nginx.conf file.
    2. Add the following entries, starting at line 210:

      location /lcm/ {
          proxy_read_timeout 600;
          proxy_connect_timeout 600;
          proxy_send_timeout 600;
          proxy_pass http://127.0.0.1:7400;
      }

    3. Restart the nginx service:

      systemctl restart nginx
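
      You can sanity-check the edited configuration before restarting; nginx's built-in syntax test makes this a one-liner:

      nginx -t && systemctl restart nginx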

  • The Cloud Builder platform audit does not validate whether the vmnics you enter for the secondary vSphere Distributed Switch (vDS) are already in use

    When you upload an XLS or JSON file to the Cloud Builder appliance that specifies a secondary vDS for overlay traffic, you must assign unused vmnics to the secondary vDS. Otherwise, bring-up fails with an error similar to: Failed to add host host-001.rainpole.local to DVS SDDC-Dswitch-NSX-T Failed to add host {0} to DVS {1}.

    Workaround:

    1. Deploy a new VMware Cloud Builder appliance.
    2. Download and Complete the Deployment Parameter Workbook.

      Make sure to enter unused vmnics for the secondary vDS.

    3. Upload the completed deployment parameter workbook and initiate bring-up.
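
    To identify unused vmnics on a host before completing the workbook, the standard esxcli inventory commands below are one way to check from the ESXi Shell (any vmnic not listed as an uplink of an existing switch is free):

      esxcli network nic list                 # all physical NICs on the host
      esxcli network vswitch standard list    # uplinks in use by standard switches
      esxcli network vswitch dvs vmware list  # uplinks in use by distributed switches
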
  • vCenter Server version for the management domain does not match vCenter Server version for workload domains

    In this version of VMware Cloud Foundation on Dell EMC VxRail, you can create VI workload domains that use a newer version of vCenter Server than the management domain. This should not be possible; do not create new VI workload domains until you upgrade the management domain vCenter Server.

    Workaround: Upgrade vCenter Server for the management domain.

  • Adding a host to a vSphere cluster fails at the Create NSX-T Data Center Transport Nodes from Discovered Nodes subtask

    In this situation, check the NSX Manager UI. If it shows the error Failed to uninstall the software on host. MPA not working. Host is disconnected. for the host you are trying to add, use the following workaround.

    Workaround:

    1. SSH to the failed host.
    2. Execute the following commands:
      • /etc/init.d/hostd restart
      • /etc/init.d/vpxa restart
    3. In the SDDC Manager UI, retry the add host task.
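
    If the retry still fails, it is worth confirming that both agents restarted cleanly; the same init scripts accept a status subcommand:

      /etc/init.d/hostd status
      /etc/init.d/vpxa status
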
  • Adding a VxRail cluster to a workload domain fails

    If you add hosts that span racks (that is, hosts that use different VLANs for management, vSAN, and vMotion) to a VxRail cluster after you perform the VxRail first run, but before you add the VxRail cluster to a workload domain in SDDC Manager, the task fails.

    Workaround:

    1. Create a VxRail cluster containing hosts from a single rack and perform the VxRail first run.
    2. Add the VxRail cluster to a workload domain in SDDC Manager.
    3. Add hosts from another rack to the VxRail cluster in the vCenter Server for VxRail.
    4. Add the VxRail hosts to the VxRail cluster in SDDC Manager.