VMware Cloud Foundation 3.10 on Dell EMC VxRail | 26 MAY 2020 | Build 16223257
VMware Cloud Foundation 3.10.0.1 on Dell EMC VxRail | 02 JUL 2020 | Build 16419449
Check regularly for additions and updates to these release notes.
Read about what’s new, learn about what was fixed, and find workarounds for known issues in VMware Cloud Foundation 3.10 on Dell EMC VxRail and VMware Cloud Foundation 3.10.0.1 on Dell EMC VxRail.
The release notes cover the following topics:
- What's New
- Cloud Foundation Bill of Materials (BOM)
- Documentation
- Cloud Foundation 3.10.0.1 on Dell EMC VxRail Release Information
- Resolved Issues
- Known Issues
What's New
This release has the following features:
- ESXi Cluster-Level and Parallel Upgrades: Enables you to update the ESXi software on multiple clusters in the management domain or a workload domain in parallel. Parallel upgrades reduce the overall time required to upgrade your environment.
- NSX-T Data Center Cluster-Level and Parallel Upgrades: Enables you to upgrade all Edge clusters in parallel, and then all host clusters in parallel. Parallel upgrades reduce the overall time required to upgrade your environment. You can also select specific clusters to upgrade. The ability to select clusters allows for multiple upgrade windows and does not require all clusters to be available at a given time.
- Support for custom vDS: You can create a custom vDS after VxRail first run or after importing a cluster in SDDC Manager. The custom vDS is for traffic types such as backup and replication. It cannot handle system traffic.
- Skip Level Upgrades: Enables you to upgrade to VMware Cloud Foundation on Dell EMC VxRail 3.10 from versions 3.7 and later.
- Option to disable Application Virtual Networks (AVNs) during bring-up: AVNs deploy vRealize Suite components on NSX overlay networks, and using AVNs during bring-up is recommended. If you disable AVNs during bring-up, vRealize Suite components are deployed to a VLAN-backed distributed port group.
- Option to deploy vRealize Suite 2019 products: Instead of the legacy vRealize Suite product versions included in the Cloud Foundation 3.10 Bill of Materials, you can deploy vRealize Suite 2019 products following prescriptive guidance.
- BOM Updates: Updated Bill of Materials with new product versions.
VMware Cloud Foundation on Dell EMC VxRail Bill of Materials (BOM)
The Cloud Foundation software product comprises the following software Bill of Materials (BOM). The components in the BOM are interoperable and compatible.
Software Component | Version | Date | Build Number |
Cloud Builder VM | 2.2.2.0 | 26 MAY 2020 | 16223257 |
SDDC Manager | 3.10 | 26 MAY 2020 | 16223257 |
VxRail Manager | 4.7.410 | 17 DEC 2019 | n/a |
VMware vCenter Server Appliance | 6.7 P02 / U3g | 28 APR 2020 | 16046470 |
VMware NSX Data Center for vSphere | 6.4.6 | 10 OCT 2019 | 14819921 |
VMware NSX-T Data Center | 2.5.1 | 19 DEC 2019 | 15314288 |
VMware Enterprise PKS | 1.7 | 02 APR 2020 | 16116522 |
VMware vRealize Suite Lifecycle Manager | 2.1 Patch 2 | 04 MAY 2020 | 16154511 |
VMware vRealize Log Insight | 4.8 | 11 APR 2019 | 13036238 |
vRealize Log Insight Content Pack for NSX for vSphere | 3.9 | n/a | n/a |
vRealize Log Insight Content Pack for Linux | 2.0.1 | n/a | n/a |
vRealize Log Insight Content Pack for vRealize Automation 7.5+ | 1.0 | n/a | n/a |
vRealize Log Insight Content Pack for vRealize Orchestrator 7.0.1+ | 2.1 | n/a | n/a |
vRealize Log Insight Content Pack for NSX-T | 3.8.2 | n/a | n/a |
vSAN content pack for Log Insight | 2.2 | n/a | n/a |
vRealize Operations Manager | 7.5 | 11 APR 2019 | 13165949 |
vRealize Automation | 7.6 | 11 APR 2019 | 13027280 |
Horizon 7 | 7.10.0 | 17 SEP 2019 | 14584133 |
Note:
- VMware vSphere (ESXi) and VMware vSAN are part of the VxRail BOM.
- vRealize Log Insight Content Packs are deployed during the workload domain creation.
- VMware Solution Exchange and the vRealize Log Insight in-product marketplace store only the latest versions of the content packs for vRealize Log Insight. The software components table lists the latest versions of the content packs that were available at the time VMware Cloud Foundation was released. When you deploy the VMware Cloud Foundation components, it is possible that the version of a content pack in the vRealize Log Insight in-product marketplace is newer than the one used for this release.
Documentation
The following documentation is available:
- VMware Cloud Foundation on Dell EMC VxRail Admin Guide
- VMware Cloud Foundation 3.10 Release Notes
- Support Matrix of VMware Cloud Foundation on Dell EMC VxRail
Cloud Foundation 3.10.0.1 on Dell EMC VxRail Release Information
VMware Cloud Foundation 3.10.0.1 on Dell EMC VxRail was released on 02 JUL 2020. You can upgrade to Cloud Foundation 3.10.0.1 from a 3.10 deployment, or you can use the skip-level upgrade tool to upgrade to VMware Cloud Foundation 3.10.0.1 from versions earlier than 3.10.
Cloud Foundation 3.10.0.1 contains the following BOM updates:
Software Component | Version | Date | Build Number |
SDDC Manager | 3.10.0.1 | 30 JUN 2020 | 16419449 |
VxRail Manager | 4.7.511 | 23 JUN 2020 | n/a |
VMware vCenter Server Appliance | 6.7 U3h | 28 MAY 2020 | 16275304 |
Note: VMware vSphere (ESXi) and VMware vSAN are part of the VxRail BOM. For more information, refer to Dell EMC VxRail documentation.
SDDC Manager 3.10.0.1 addresses the following:
- Security fixes for Photon OS packages PHSA-2020-3.0-0086 through PHSA-2020-3.0-0103, published at https://github.com/vmware/photon/wiki/Security-Advisories-3
VMware vCenter Server Appliance 6.7 U3h addresses the following:
Security fixes for the following Photon OS packages:
- gdb: CVE-2019-1010180
- unzip: CVE-2014-8139, CVE-2014-8141, CVE-2014-8140
Resolved Issues
The following issues have been resolved in Cloud Foundation 3.10:
- Adding a VxRail cluster to a VxRail workload domain fails
- Add cluster does not support manual creation of multiple Transport Zones in NSX-T Manager
- New clusters can be added to the management domain or workload domain only with the same SSO domain name as the other cluster(s) in the domain
Known Issues
For VMware Cloud Foundation 3.10 known issues, see Cloud Foundation 3.10 known issues.
VMware Cloud Foundation 3.10 on Dell EMC VxRail known issues and limitations appear below:
VMware Cloud Foundation on Dell EMC VxRail bring-up fails with the error "Failed to apply default vSAN policy"
If bring-up fails when deploying the second Platform Services Controller (psc-2), retrying bring-up fails with the error "Failed to apply default vSAN policy". The cause is that the original deployment of psc-2 was not removed from the vCenter Server inventory.
Workaround (a scripted sketch follows these steps):
- Log in to the vCenter Server.
- Delete the psc-2 VM.
- Rename the psc-2 (1) VM to psc-2.
- Retry bring-up.
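If you prefer to script the cleanup before retrying bring-up, the following is a minimal sketch using pyVmomi. The vCenter hostname and credentials are placeholders, and the VM names psc-2 and psc-2 (1) come from the steps above; adapt everything to your environment before running.

    # Minimal cleanup sketch, assuming pyVmomi is installed and the placeholder
    # hostname/credentials below are replaced with real values.
    from pyVim.connect import SmartConnectNoSSL, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    si = SmartConnectNoSSL(host="vcenter.example.com",            # placeholder vCenter FQDN
                           user="administrator@vsphere.local",
                           pwd="example-password")                 # placeholder password
    content = si.RetrieveContent()

    def find_vm(name):
        # Return the first VM with an exact name match, or None.
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        try:
            return next((vm for vm in view.view if vm.name == name), None)
        finally:
            view.DestroyView()

    stale = find_vm("psc-2")
    if stale is not None:
        if stale.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
            WaitForTask(stale.PowerOffVM_Task())   # the VM must be powered off before deletion
        WaitForTask(stale.Destroy_Task())          # remove the failed psc-2 deployment

    redeployed = find_vm("psc-2 (1)")
    if redeployed is not None:
        WaitForTask(redeployed.Rename_Task("psc-2"))  # restore the expected name

    Disconnect(si)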
Bring-up fails with a password error
Bring-up fails with the error "password must contain only alphanumerics and special characters". The error is the result of different password requirements for VxRail and VMware Cloud Foundation.
Workaround: Make sure that VxRail clusters use passwords that meet the Cloud Foundation requirements for the following users (a validation sketch follows the list):
- Default Single-Sign On Domain User (administrator@vsphere.local): 8-20 characters. At least 1 uppercase, 1 lowercase, 1 number, and 1 special character (@, !, #, $, %, ?, ^).
- vCenter Server and Platform Services Controller Virtual Appliances root account: 8-12 characters. At least 1 uppercase, 1 lowercase, 1 number, and 1 special character (@, !, #, $, %, ?, ^).
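If you want to pre-check candidate passwords against these rules, the sketch below is one illustrative way to do it in Python. The function names and sample values are hypothetical and not part of VxRail or Cloud Foundation.

    import re

    SPECIALS = "@!#$%?^"   # special characters accepted by Cloud Foundation

    def meets_policy(password, min_len, max_len):
        # Length bounds plus at least one uppercase, lowercase, digit, and special character.
        return all([
            min_len <= len(password) <= max_len,
            re.search(r"[A-Z]", password),
            re.search(r"[a-z]", password),
            re.search(r"[0-9]", password),
            re.search("[" + re.escape(SPECIALS) + "]", password),
        ])

    def valid_sso_password(password):             # administrator@vsphere.local
        return meets_policy(password, 8, 20)

    def valid_appliance_root_password(password):  # vCenter Server / PSC root
        return meets_policy(password, 8, 12)

    print(valid_sso_password("VxRail!2020Demo"))             # True: 15 characters, all classes present
    print(valid_appliance_root_password("VxRail!2020Demo"))  # False: exceeds the 12-character limit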
Workload domain cannot be deployed on a fresh deployment of Cloud Foundation on Dell EMC VxRail
The VxRail version 4.7.410 included in the 3.10 BOM deploys a vCenter Server that is incompatible with Cloud Foundation 3.10. You must upgrade vCenter Server before deploying a workload domain (a build-check sketch follows the workaround steps).
Workaround:
- After bring-up, log in to SDDC Manager and navigate to Administration > Repository Settings.
- Authenticate to the My VMware depot.
- Wait for the bundles to show up under Bundle Management and download the vCenter/PSC upgrade bundle.
- Apply the vCenter bundle to the management domain.
- Deploy a workload domain.
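Before deploying the workload domain, you can optionally confirm that the management vCenter Server is now running at least the build listed in the BOM (16046470 for 6.7 P02/U3g). The sketch below uses the vCenter Server Appliance REST API with the requests library (pip install requests); the hostname and credentials are placeholders, and certificate verification is disabled only for brevity.

    import requests

    VCENTER = "vcenter.mgmt.example.com"   # placeholder management vCenter FQDN
    REQUIRED_BUILD = 16046470              # vCenter Server 6.7 P02/U3g build from the BOM

    # Create an API session (placeholder credentials).
    token = requests.post(
        f"https://{VCENTER}/rest/com/vmware/cis/session",
        auth=("administrator@vsphere.local", "example-password"),
        verify=False,
    ).json()["value"]

    # Read the appliance version and build number.
    info = requests.get(
        f"https://{VCENTER}/rest/appliance/system/version",
        headers={"vmware-api-session-id": token},
        verify=False,
    ).json()["value"]

    print(f"vCenter {info['version']} build {info['build']}")
    if int(info["build"]) >= REQUIRED_BUILD:
        print("vCenter meets the 3.10 BOM requirement; the workload domain can be deployed.")
    else:
        print("Upgrade vCenter through SDDC Manager before deploying a workload domain.")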
Deleting a cluster from an NSX-T workload domain fails
If multiple clusters in the workload domain have similar names, deleting one of the clusters can fail with the error "Can't find the TransportNodeProfile for the Cluster: <cluster name>".
Workaround:
- Log in to the NSX Manager for the workload domain with admin privileges.
- Navigate to System > Fabric > Profiles > Transport Node Profiles > Edit.
- Record the names of all the transport node profiles.
- Rename the transport node profiles for all clusters with names similar to the cluster you want to delete (the API sketch after these steps shows one way to do this).
- In the SDDC Manager Dashboard, delete the cluster.
- Log in to the NSX Manager and rename the transport node profiles back to their original names.
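The rename steps can also be performed against the NSX-T Manager REST API. The sketch below is illustrative only: the Manager FQDN, credentials, and the name prefix used to match the profiles are placeholders, and certificate verification is disabled only for brevity.

    import requests

    NSX_MANAGER = "nsx-wld01.example.com"        # placeholder NSX-T Manager FQDN
    AUTH = ("admin", "example-password")         # placeholder admin credentials
    NAME_PREFIX = "wld01-cluster"                # placeholder prefix of the similarly named profiles

    base = f"https://{NSX_MANAGER}/api/v1/transport-node-profiles"
    profiles = requests.get(base, auth=AUTH, verify=False).json()["results"]

    for profile in profiles:
        if profile["display_name"].startswith(NAME_PREFIX):
            original = profile["display_name"]
            profile["display_name"] = original + "-temp"   # temporary, easily reversible rename
            response = requests.put(f"{base}/{profile['id']}", json=profile,
                                    auth=AUTH, verify=False)
            response.raise_for_status()
            print(f"Renamed '{original}' to '{profile['display_name']}'")

After the cluster is deleted in SDDC Manager, rename the profiles back to their original names, either in the UI as described above or by running the same loop in reverse.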
If you use the special character underscore (_) in the vCenter host name for the workload domain create operation, the vCenter deployment fails.
The vCenter deployment fails with the error message "ERROR > Section 'new_vcsa', subsection 'network', property 'system_name' validation".
Workaround: None. This is an issue in the vCenter product installer, where the installer pre-validation fails. Create the workload domain by providing valid vCenter host names (without underscores).
The VxRail vCenter plugin UI options may disappear after replacing the OpenSSL or Microsoft certificates for all components or for VxRail Manager only
The certificate replacement operation changes both the VxRail Manager and vCenter Server VMs. The vCenter plugin download can fail if communication occurs with an invalid thumbprint, and the VxRail plugin UI options can disappear from vCenter. As a result, you cannot invoke the add host and remove host operations from vCenter.
Workaround: Reload the plugin by opening the VxRail Manager page, which redirects to vCenter, and verify that the VxRail UI options are visible in the vCenter UI.
Duplicate node expansion tasks are generated in SDDC Manager
If you select two hosts in the Add Host wizard, two tasks are generated and displayed in the task bar. The second task fails right away, but the first task adds both hosts.
Workaround: None. Ignore the failed task since the functionality is not impacted.
Cluster and/or domain deletion fails when cluster names are not unique across shared NSX-T workload domains
Cluster deletion fails when a cluster with the same name is present in another shared NSX-T workload domain. When two or more clusters have the same name, the associated NSX-T workload domain cannot be deleted either.
Workaround:
- Log in to the NSX-T Manager for the workload domain with admin privileges.
- Navigate to System > Fabric > Nodes > Host Transport Nodes.
- For a cluster deletion error, select the corresponding vCenter Server in the Managed By dropdown.
- For workload domain deletion error, select None: Standalone Hosts in the Managed By dropdown.
- Select the hosts that belong to the cluster/domain you are deleting and click Delete.
- In the Delete Transport Node dialog box, click Uninstall NSX Components and then click Delete.
- After the deleted hosts are removed from the None: Standalone Hosts list, restart the delete operation (the sketch below polls for this condition).
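If you want to confirm from a script that the hosts are gone before restarting the delete operation, the sketch below polls the NSX-T Manager transport node list. The Manager FQDN, credentials, and host names are placeholders, and certificate verification is disabled only for brevity.

    import time
    import requests

    NSX_MANAGER = "nsx-wld01.example.com"                  # placeholder NSX-T Manager FQDN
    AUTH = ("admin", "example-password")                   # placeholder admin credentials
    HOSTS = {"esxi-5.example.com", "esxi-6.example.com"}   # placeholder hosts of the deleted cluster

    url = f"https://{NSX_MANAGER}/api/v1/transport-nodes"
    while True:
        nodes = requests.get(url, auth=AUTH, verify=False).json()["results"]
        remaining = HOSTS & {node["display_name"] for node in nodes}
        if not remaining:
            print("Hosts are no longer transport nodes; retry the delete in SDDC Manager.")
            break
        print("Still waiting on: " + ", ".join(sorted(remaining)))
        time.sleep(30)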
Gateway timeout 504 error displayed during VxRail bundle upload
VxRail bundle upload fails with the error "504 Gateway Time-out".
Workaround:
- Open the /etc/nginx/nginx.conf file.
- Add the following entries after line 154:
  location /lcm/ {
    proxy_read_timeout 600;
    proxy_connect_timeout 600;
    proxy_send_timeout 600;
    proxy_pass http://127.0.0.1:7400;
  }
- Restart the nginx service: systemctl restart nginx
Cancelling an in-progress VxRail upgrade displays an error
VxRail does not support cancelling an in-progress upgrade, although the UI provides this option.
Workaround: None.
Management VMs are unavailable in AZ2 when AZ1 is down
When you have a stretched cluster on the management domain and Availability Zone 1 (AZ1) goes down, if the L2 management network is not stretched, you will not be able to manage your environment until AZ1 is back online. Although the management VMs are available on Availability Zone 2 (AZ2), their port groups are not configured correctly and the VMs cannot be accessed.
Workaround: None. You must wait until AZ1 is back online to access the management VMs.