VMware Cloud Foundation 4.3 on Dell EMC VxRail | 24 AUG 2021 | Build 18433963
Read about what’s new, learn about what was fixed, and find workarounds for known issues in VMware Cloud Foundation on Dell EMC VxRail 4.3.
The release notes cover the following topics:
- What's New
- Cloud Foundation Bill of Materials (BOM)
- Installation and Upgrade Information
- Resolved Issues
- Known Issues
This release has the following features:
- Workflow improvements for better VxRail integration: API support for streamlining Day-N operations by allowing VMware Cloud Foundation to work with VxRail Manager to orchestrate common workflows, including creating a workload domain and adding a vSphere cluster to a workload domain. This API support facilitates a single pane of glass experience for these workflows.
- Support for two system vSphere Distributed Switches: The ability to separate system traffic across two vSphere Distributed Switches allows you to segregate management traffic from other traffic (vMotion, vSAN, VM traffic) to meet security and bandwidth requirements.
- Flexibility in Application Virtual Networks (AVN): Application Virtual Networks (AVNs), which include the NSX Edge cluster and NSX network segments, are no longer deployed and configured during bring-up. Instead, they are implemented as Day-N operations in SDDC Manager, providing greater flexibility.
- FIPS Support: You can enable FIPS mode during bring-up, which will enable it on all the VMware Cloud Foundation components that support FIPS.
- Scheduled Automatic Password Rotations: In addition to the on-demand password rotation capability, it is now possible to schedule automatic password rotations for accounts managed through SDDC Manager (excluding ESXi accounts). Automatic password rotation is enabled by default for service accounts.
- SAN in Certificate Signing Requests (CSR) : You can now add a Subject Alternative Name (SAN) when you generate a Certificate Signing Request (CSR) in SDDC Manager.
- Improvements for vSphere Lifecycle Manager images: For workload domains that use vSphere Lifecycle Manager images, this release includes several improvements. These include: prechecks to proactively identify issues that may affect upgrade operations; enabling concurrent upgrades for NSX-T Data Center components; and enabling provisioning and upgrade of Workload Management.
- Add vSphere Clusters in Parallel: You can add up to 10 vSphere clusters to a workload domain in parallel, improving the performance and speed of the workflow.
- Add and Remove NSX Edge Nodes in NSX Edge Clusters: For NSX Edge clusters deployed through SDDC Manager or the VMware Cloud Foundation API, you can expand and shrink NSX Edge clusters by adding or removing NSX Edge nodes from the cluster.
- Guidance for Day-N operations in NSX Federated VCF environments: You can federate NSX-T Data Center environments across VMware Cloud Foundation instances. You can manage federated NSX-T Data Center environments with a single pane of glass, create gateways and segments that span VMware Cloud Foundation instances, and configure and enforce firewall rules consistently across instances. Guidance is also provided for password rotation, certificate management, backup and restore, and lifecycle management for federated environments.
- Backup Enhancements: You can now configure an SDDC Manager backup schedule and retention policy from the SDDC Manager UI.
- Capacity Planner tool: Administrators can use the VCF Capacity Planner online tool to model and generate a Software Defined Data Center bill of materials. This interactive tool generates detailed guidance on the hyper-converged server, storage, network, and cloud software SKUs required to successfully deploy an on-premises cloud.
- Private APIs: Access to private APIs that use basic authentication is deprecated in this release. You must switch to using public APIs.
- BOM updates: Updated Bill of Materials with new product versions.
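For the private API deprecation above, the switch is to the token-based public API: the SDDC Manager public API issues a bearer token from the /v1/tokens endpoint, which then replaces basic authentication on subsequent calls. The sketch below only composes the request; the hostname, credentials, and the /v1/domains call are illustrative placeholders.

```shell
# Minimal sketch of the VCF public API token flow (placeholders throughout).
# Basic auth against private APIs is deprecated; the public API expects a
# bearer token obtained from POST /v1/tokens.
SDDC_MANAGER='sddc-manager.example.com'
TOKEN_URL="https://${SDDC_MANAGER}/v1/tokens"

# In a live environment, the flow would look like:
#   TOKEN=$(curl -sk -X POST "$TOKEN_URL" \
#     -H 'Content-Type: application/json' \
#     -d '{"username": "administrator@vsphere.local", "password": "<password>"}' \
#     | jq -r '.accessToken')
#   curl -sk -H "Authorization: Bearer $TOKEN" "https://${SDDC_MANAGER}/v1/domains"
echo "$TOKEN_URL"
```

See the VMware Cloud Foundation API Reference Guide for the full authentication details.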
The Cloud Foundation software product comprises the following software Bill of Materials (BOM). The components in the BOM are interoperable and compatible.
| Software Component | Version | Date | Build Number |
|---|---|---|---|
| Cloud Builder VM | 4.3 | 24 AUG 2021 | 18433963 |
| SDDC Manager | 4.3 | 24 AUG 2021 | 18433963 |
| VxRail Manager | 7.0.202 | 15 JUN 2021 | n/a |
| VMware vCenter Server Appliance | 7.0 Update 2c | 24 AUG 2021 | 18356314 |
| VMware NSX-T Data Center | 3.1.3 | 22 JUL 2021 | 18328989 |
| VMware vRealize Suite Lifecycle Manager | 8.4.1 | 15 MAY 2021 | 18067607 |
| Workspace ONE Access | 3.3.5 | 20 MAY 2021 | 18049997 |
| vRealize Automation | 8.4.1 | 27 MAY 2021 | 18054500 |
| vRealize Log Insight | 8.4 | 15 APR 2021 | 17828109 |
| vRealize Log Insight Content Pack for NSX-T | 4.0.2 | n/a | n/a |
| vRealize Log Insight Content Pack for vRealize Automation 8.3+ | 1.0 | n/a | n/a |
| vRealize Log Insight Content Pack for Linux | 2.1.0 | n/a | n/a |
| vRealize Log Insight Content Pack for Linux - Systemd | 1.0.0 | n/a | n/a |
| vRealize Log Insight Content Pack for vRealize Suite Lifecycle Manager 8.0.1+ | 1.0.2 | n/a | n/a |
| vRealize Log Insight Content Pack for VMware Identity Manager | 2.0 | n/a | n/a |
| vRealize Operations Manager | 8.4 | 15 APR 2021 | 17863947 |
| vRealize Operations Management Pack for VMware Identity Manager | 1.3 | n/a | n/a |
- VMware ESXi and VMware vSAN are part of the VxRail BOM.
- You can use vRealize Suite Lifecycle Manager to deploy vRealize Automation, vRealize Operations Manager, vRealize Log Insight, and Workspace ONE Access.
- vRealize Log Insight content packs are installed when you deploy vRealize Log Insight.
- The vRealize Operations Manager management pack is installed when you deploy vRealize Operations Manager.
- VMware Solution Exchange and the vRealize Log Insight in-product marketplace store only the latest versions of the content packs for vRealize Log Insight. The Bill of Materials table contains the latest versions of the packs that were available at the time VMware Cloud Foundation was released. When you deploy the Cloud Foundation components, the version of a content pack in the vRealize Log Insight in-product marketplace may be newer than the one listed for this release.
The following limitations apply to this release:
- vSphere Lifecycle Manager images are not supported on VMware Cloud Foundation on Dell EMC VxRail.
- Customer-supplied vSphere Distributed Switch (vDS) is a new feature supported by VxRail Manager 7.0.010 that allows customers to create their own vDS and provide it as an input to be utilized by the clusters they build using VxRail Manager. VMware Cloud Foundation on Dell EMC VxRail does not support clusters that utilize a customer-supplied vDS.
- VMware Cloud Foundation on Dell EMC VxRail does not support ESXi lockdown mode.
When you deploy the management domain, VxRail Manager 7.0.202 deploys vCenter Server 7.0 Update 2b (build 17958471). However, the VMware Cloud Foundation 4.3 BOM requires vCenter Server 7.0 Update 2c (build 18356314). Until you upgrade vCenter Server, you will not be able to deploy a VI workload domain. To upgrade vCenter Server, download and apply the upgrade bundle. See Download VMware Cloud Foundation on Dell EMC VxRail Bundles.
You can perform a sequential or skip-level upgrade to VMware Cloud Foundation 4.3 on Dell EMC VxRail from VMware Cloud Foundation 4.2.1, 4.2, 4.1.0.1, or 4.1. If your environment is at a version earlier than 4.1, you must upgrade the management domain and all VI workload domains to VMware Cloud Foundation 4.1 and then upgrade to VMware Cloud Foundation 4.3.
The following issues have been resolved:
- A host with upper case letters in its name fails to be added to SDDC Manager
For VMware Cloud Foundation 4.3 known issues, see Cloud Foundation 4.3 known issues.
VMware Cloud Foundation 4.3 on Dell EMC VxRail known issues appear below:
Upgrading the Supervisor Cluster on a Workload Management VI workload domain fails
While upgrading the Supervisor Cluster, the upgrade fails or gets stuck due to multiple VMkernel adapters tagged with management traffic.
Workaround: Follow the steps in the Dell EMC KB and retry the upgrade.
You cannot reuse an existing static IP pool when adding a VxRail cluster to the management domain from the SDDC Manager UI
The option to reuse an existing static IP pool is disabled when you add a VxRail cluster to the management domain using the SDDC Manager UI.
Workaround: Use the MultiDvsAutomator script to add the VxRail cluster. See Add a VxRail Cluster to a Workload Domain Using the MultiDvsAutomator Script.
Adding a new ESXi node using the VxRail Manager plugin for vCenter Server fails
While expanding a VxRail cluster with a newly installed L3 node, the add operation fails with the error
http.client.RemoteDisconnected: Remote end closed connection without response.
Workaround:
- Log in to the newly installed ESXi node and restart the proxy service.
- Retry the operation.
VxRail upgrade task in SDDC Manager displays incorrect status
The VxRail upgrade task status in SDDC Manager is displayed as running even after the upgrade is complete.
Workaround: Restart the LCM service:
- Take a snapshot of the SDDC Manager VM from the vSphere Web Client.
- Using SSH, log in to the SDDC Manager VM with the following credentials:
User name: vcf
Password: use the password specified in the deployment parameter workbook.
- Enter su to switch to the root user.
- Run the following command:
systemctl restart lcm
Task status is synchronized after approximately 10 minutes.
vSphere Cluster Services (vCLS) VMs are moved to remote storage after a VxRail cluster with HCI Mesh storage is imported to VMware Cloud Foundation
When you configure HCI Mesh storage on a VxRail cluster and then import it to VMware Cloud Foundation, vCLS VMs are moved to the remote storage instead of being placed on the cluster's primary storage. This can result in errors when you unmount the remote storage for the cluster.
Workaround:
- Log in to the vCenter Server UI.
- Retrieve the cluster MoRef ID.
In the Hosts and Clusters tab, click the cluster entity and check the URL.
For example, the cluster MoRef ID for the URL shown is 'domain-c10'.
- Click the vCenter entity.
- Navigate to Configure > Advanced Settings.
By default, the vCLS property is set to true:
- Disable vCLS on the cluster.
Click Edit Settings, set the flag to 'false', and click Save.
- Wait 2 minutes for the vCLS VMs to be deleted.
- Unmount the remote storage.
- Repeat steps 3 and 4.
- Enable vCLS on the cluster.
Click Edit Settings, set the flag to 'true', and click Save.
- Wait 2-3 minutes for the vCLS VMs to be deployed.
Three vCLS VMs are displayed in the VMs and Templates tab.
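The MoRef ID retrieval in step 2 above is a simple pattern match on the vSphere Client URL; a minimal sketch (the URL below is a hypothetical example of what the Hosts and Clusters view shows):

```shell
# Extract the cluster MoRef ID (e.g. 'domain-c10') from a vSphere Client URL.
# The URL is a hypothetical example; real URLs embed the MoRef ID the same way.
url='https://vcenter.example.com/ui/app/cluster;nav=h/urn:vmomi:ClusterComputeResource:domain-c10:4fcdcac2-0000-0000-0000-000000000000/summary'
moref=$(printf '%s' "$url" | grep -o 'domain-c[0-9]*')
echo "$moref"   # domain-c10
```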
vVols is not a supported storage option
Although VMware Cloud Foundation on Dell EMC VxRail does not support vVols, storage settings options related to vVols appear in the SDDC Manager UI. Do not use Administration > Storage Settings to add a VASA provider.
Workaround: See KB 81321 for information about how to remove the Storage Settings from the SDDC Manager UI.
The API does not support adding a host to a cluster with dead hosts or removing dead hosts from a cluster
The following flags appear in the API Reference Guide and API Explorer, but are not supported with VMware Cloud Foundation on Dell EMC VxRail.
- forceHostAdditionInPresenceofDeadHosts: Use to add a host to a cluster with dead hosts. Bypasses validation of disconnected hosts and vSAN cluster health.
- forceByPassingSafeMinSize: Use to remove dead hosts from a cluster, bypassing validations.
Adding a VxRail cluster with hosts spanning multiple racks to a workload domain fails
If you add hosts that span racks (use different VLANs for management, vSAN, and vMotion) to a VxRail cluster after you perform the VxRail first run, but before you add the VxRail cluster to a workload domain in SDDC Manager, the task fails.
Workaround:
- Create a VxRail cluster containing hosts from a single rack and perform the VxRail first run.
- Add the VxRail cluster to a workload domain in SDDC Manager.
- Add hosts from another rack to the VxRail cluster in the vCenter Server for VxRail.
- Add the VxRail hosts to the VxRail cluster in SDDC Manager.