VMware Cloud Foundation 4.5.2 | 17 AUG 2023 | Build 22223457

Check for additions and updates to these release notes.
The VMware Cloud Foundation (VCF) 4.5.2 release includes the following:
Keyed to keyless license conversion: The option to convert the licensing mode of a workload domain from a keyed license (VCF-S or VCF perpetual license) to a keyless license (VMware Cloud Foundation+) model is now available.
Support for mixed license deployment: A combination of keyed and keyless licenses can now be used within the same VCF instance. The licensing within a given workload domain needs to be homogeneous (no mixing of keyed and keyless licensing within a workload domain).
BOM deviation precheck: Running an upgrade precheck now determines if the Async Patch Tool was used in the environment to patch components.
BOM updates: Updated Bill of Materials with new product versions.
NSX-T Data Center 3.2.3.1, which includes new features and enhancements as part of NSX 3.2.3.1 and critical bug fixes. See https://docs.vmware.com/en/VMware-NSX/3.2.3.1/rn/vmware-nsxt-data-center-3231-release-notes/index.html for more details.
VMware vCenter Server 7.0 Update 3m, which contains critical bug fixes. See https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3m-release-notes/index.html.
VMware ESXi 7.0 Update 3n, which contains critical bug fixes. See https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u3n-release-notes.html for more details.
The VMware Cloud Foundation software product comprises the following software Bill of Materials (BOM). The components in the BOM are interoperable and compatible.
| Software Component | Version | Date | Build Number |
|---|---|---|---|
| Cloud Builder VM | 4.5.2 | 17 AUG 2023 | 22223457 |
| SDDC Manager | 4.5.2 | 17 AUG 2023 | 22223457 |
| VxRail Manager | 7.0.452 | 17 AUG 2023 | NA |
| VMware vCenter Server Appliance | 7.0 Update 3m | 22 JUN 2023 | 21784236 |
| | 7.0 Update 3l | 30 MAR 2023 | 21424296 |
| VMware NSX-T | 3.2.3.1 | 27 JUL 2023 | 22104592 |
| VMware vRealize Suite Lifecycle Manager | 8.10 | 23 JUN 2023 | 21950667 |
* After deploying vRealize Suite Lifecycle Manager 8.8.2, you must install vRealize Suite Lifecycle Manager 8.8.2 Product Support Pack 6.
VMware ESXi and VMware vSAN are part of the VxRail BOM.
You can use vRealize Suite Lifecycle Manager to deploy vRealize Automation, vRealize Operations Manager, vRealize Log Insight, and Workspace ONE Access (formerly known as VMware Identity Manager). vRealize Suite Lifecycle Manager determines which versions of these products are compatible and only allows you to install/upgrade to supported versions. See vRealize Suite Upgrade Paths on VMware Cloud Foundation 4.4.x+.
vRealize Log Insight content packs are installed when you deploy vRealize Log Insight.
The vRealize Operations Manager management pack is installed when you deploy vRealize Operations Manager.
You can access the latest versions of the content packs for vRealize Log Insight from the VMware Solution Exchange and the vRealize Log Insight in-product marketplace store.
The following limitations apply to this release:
vSphere Lifecycle Manager images are not supported on VMware Cloud Foundation on Dell VxRail.
Customer-supplied vSphere Distributed Switch (vDS) is a new feature supported by VxRail Manager 7.0.010 and later that allows customers to create their own vDS and provide it as an input to be utilized by the clusters they build using VxRail Manager. VMware Cloud Foundation on Dell VxRail does not support clusters that utilize a customer-supplied vDS.
You can install VMware Cloud Foundation 4.5.2 on Dell VxRail as a new release or perform a sequential or skip-level upgrade to VMware Cloud Foundation 4.5.2 from VMware Cloud Foundation 4.2.1 or later. If your environment is at a version earlier than 4.2.1, you must upgrade the management domain and all VI workload domains to VMware Cloud Foundation 4.2.1 and then upgrade to VMware Cloud Foundation 4.5.2.
If your VMware Cloud Foundation instance includes vRealize Suite Lifecycle Manager, you may need to install a Product Support Pack to support VMware Cloud Foundation 4.5.2. Check the release notes to see what Product Support Pack is required for your current version of vRealize Suite Lifecycle Manager:
VMware vRealize Suite Lifecycle Manager Product Support Pack Release Notes
VMware Aria Suite Lifecycle Product Support Pack Release Notes
Before you upgrade a vCenter Server, take a file-based backup. See Manually Back Up vCenter Server.
Scripts that rely on SSH being activated on ESXi hosts will not work after upgrading to VMware Cloud Foundation 4.5 and later, since VMware Cloud Foundation 4.5 deactivates the SSH service by default. Update your scripts to account for this new behavior. See KB 86230 for information about activating and deactivating the SSH service on ESXi hosts.
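For automation that must keep working against VMware Cloud Foundation 4.5 and later, one pattern is to activate SSH only for the duration of the scripted step and deactivate it again afterwards. A minimal sketch: `vim-cmd hostsvc/enable_ssh`, `start_ssh`, `stop_ssh`, and `disable_ssh` are standard ESXi shell commands, while the wrapper function and the orchestration around it are illustrative.

```python
# Illustrative helper: build the ESXi shell commands needed to toggle the
# SSH service around a scripted step. Actually running them on the host
# (via the host console, DCUI, or an existing session) is left to the
# caller's automation framework.
def ssh_toggle_commands(activate):
    """Return the ESXi shell commands to activate or deactivate SSH."""
    if activate:
        # Mark the service enabled, then start it.
        return ["vim-cmd hostsvc/enable_ssh", "vim-cmd hostsvc/start_ssh"]
    # Stop the service, then mark it disabled again (the 4.5 default).
    return ["vim-cmd hostsvc/stop_ssh", "vim-cmd hostsvc/disable_ssh"]

print(ssh_toggle_commands(True))
```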
The following issues have been resolved:
VxRail parallel cluster upgrade fails for one or more clusters.
A VxRail bundle is available for upgrade even though the NSX-T upgrade is still in progress in a VMware Cloud Foundation on VxRail 4.x environment.
You must enable smooth VxRail upgrade after every VxRail build async patch or out-of-band upgrade.
SDDC Manager is unable to find the service account for ESXi hosts.
For VMware Cloud Foundation 4.5.2 known issues, see VMware Cloud Foundation 4.5.2 known issues. Some of the known issues may be for features that are not available on VMware Cloud Foundation on Dell VxRail.
VMware Cloud Foundation 4.5.2 on Dell VxRail known issues appear below:
VxRail upgrade fails while upgrading to VxRail 7.0.452.
Failure occurred while running an upgrade for bundle: VX3DG_VxRail-7.0.452-Composite-Upgrade-Slim-Package-for-7.0.x.zip.
Workaround: Reset iDRAC for all the hosts and retry the upgrade.
For more information, see Dell KB article https://www.dell.com/support/kbdoc/000216383
VxRail Manager system precheck does not provide error information in /v1/system/prechecks/ API
The /v1/system/prechecks/ API does not populate the errors attribute in cases where the precheck status is marked as WARNING. For example:
{
  "name": "VXM_SYSTEM_PRECHECK",
  "description": "Perform Stage - Perform VxRail System Precheck",
  "status": "WARNING",
  "creationTimestamp": "2022-09-21T09:34:55.044Z",
  "completionTimestamp": "2022-09-21T09:45:20.283Z",
  "errors": []
}
Workaround: None
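Until the API populates error details for this case, a client consuming the precheck endpoint can at least flag the condition programmatically. A minimal Python sketch; the response shape is taken from the example above, and the detection function is illustrative rather than part of any VxRail SDK.

```python
import json

# Sample entry shaped like the /v1/system/prechecks/ response above.
sample = json.loads("""
{
  "name": "VXM_SYSTEM_PRECHECK",
  "description": "Perform Stage - Perform VxRail System Precheck",
  "status": "WARNING",
  "creationTimestamp": "2022-09-21T09:34:55.044Z",
  "completionTimestamp": "2022-09-21T09:45:20.283Z",
  "errors": []
}
""")

def needs_manual_review(precheck):
    """Flag a precheck whose status is WARNING but whose 'errors'
    attribute is empty, since there is no detail to act on."""
    return precheck.get("status") == "WARNING" and not precheck.get("errors")

print(needs_manual_review(sample))  # True: WARNING with an empty errors list
```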
Adding hosts to a cluster fails
When you add hosts to a VxRail cluster, the hosts being added must have their vmnics in the same order as the existing hosts in the cluster. If the vmnics of the new hosts are in a different order, then validation fails and the hosts cannot be added to the cluster.
Workaround: Modify the vmnic order in the new hosts to match that of the existing hosts and retry the add hosts task.
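A quick pre-flight comparison can catch a vmnic ordering mismatch before the validation step fails. A minimal sketch; the vmnic lists are illustrative values that would come from each host (for example, collected with `esxcli network nic list`).

```python
# Illustrative check: the hosts being added must list their vmnics in
# the same order as the existing hosts in the cluster.
def vmnic_order_matches(existing_host_vmnics, new_host_vmnics):
    """True only if the new host's vmnics appear in the same order."""
    return list(existing_host_vmnics) == list(new_host_vmnics)

existing = ["vmnic0", "vmnic1", "vmnic2", "vmnic3"]
new_host = ["vmnic1", "vmnic0", "vmnic2", "vmnic3"]  # first two swapped

print(vmnic_order_matches(existing, new_host))  # False: validation would fail
```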
SDDC Manager UI buttons or links may be grayed out (inaccessible) after parallel add host or add cluster operations
If you run parallel add host operations to different clusters or parallel add cluster operations to different workload domains, the system may not release system locks after the operations complete. These locks prevent certain operations until the locks are released.
Workaround: Contact VMware Support.
Add VxRail hosts validation fails
When adding VxRail hosts to a workload domain or cluster that uses Fibre Channel (FC) storage, the task may fail with the message No shared datastore can be found on host. This can happen if you used the Workflow Optimization Script to deploy the workload domain or cluster and chose an FC datastore name other than the default name.
Workaround: Use the VMware Host Client to rename the FC datastore on the new VxRail hosts to match the name you entered when creating the workload domain or cluster. Once the FC datastore name of the new hosts matches the existing FC datastore name, retry the Add VxRail Hosts operation.
VxRail Manager upgrade shows as failed in SDDC Manager but completed in vSphere Client
The upgrade might time out while waiting for VxRail to return the new version after the upgrade completes.
Workaround: Retry the VxRail Manager upgrade.
Incorrect date and time is shown for upgrades in the update history for a workload domain
Viewing the update status on the Update History tab for a workload domain may show the incorrect date and time for an upgrade.
Workaround: None. This does not affect upgrade or any other functionality and will be addressed in a future release.
Failed VxRail first run prevents new cluster/workload domain creation
When you use the Workflow Optimization Script to add a cluster or create a workload domain, the script performs a VxRail first run to discover and configure the ESXi hosts. If the VxRail first run fails, some objects associated with the failed task remain in the vCenter Server inventory and prevent new cluster/workload domain creation.
Workaround: Remove the inventory objects associated with the failed task using the vSphere Client.
1. Log in to the vSphere Client.
2. In the Hosts and Clusters inventory, right-click the failed cluster and select Delete.
3. In the Networking inventory, right-click the network objects created for the failed cluster and select Delete.
4. In the Storage inventory, right-click the datastore object created for the failed cluster and select Delete Datastore.
After the inventory is cleaned up, you can retry adding a cluster or creating a workload domain.
Upgrading VxRail cluster to 7.0.450 may fail
As part of the VxRail cluster upgrade, ESXi hosts get rebooted. An intermittent issue can cause some ESXi hosts to remain disconnected after reboot. If this happens, the upgrade fails.
Workaround: Use the vSphere Client to connect the disconnected ESXi hosts and retry the upgrade from SDDC Manager.
vSAN/vMotion network disruption can occur when using the workflow optimization script
When you use the workflow optimization script to create a new VI workload domain or add a new cluster to an existing workload domain, you can cause a network disruption on existing vSAN/vMotion networks if:
The IP range for the new vSAN network overlaps with the IP range for an existing vSAN network.
The IP range for the new vMotion network overlaps with the IP range for an existing vMotion network.
Workaround: None. Make sure to provide vSAN/vMotion IP ranges that do not overlap with existing vSAN/vMotion networks.
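One way to guard against this before running the workflow optimization script is to check each candidate range against the ranges already in use. A minimal Python sketch, with the ranges expressed as CIDR blocks for simplicity; the addresses are illustrative, not defaults from the script.

```python
import ipaddress

def ranges_overlap(new_cidr, existing_cidr):
    """True if the two networks share any addresses."""
    return ipaddress.ip_network(new_cidr).overlaps(ipaddress.ip_network(existing_cidr))

existing_vsan = "192.168.10.0/24"   # vSAN range already in use
new_vsan = "192.168.10.128/25"      # falls inside the existing range

print(ranges_overlap(new_vsan, existing_vsan))  # True: pick a non-overlapping range
```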