VMware Cloud Foundation 5.1 | 04 DEC 2023 | Build 22688368

Check for additions and updates to these release notes.
The VMware Cloud Foundation (VCF) 5.1 on Dell VxRail release includes the following:
Support for vSAN ESA: vSAN ESA is an alternative, single-tier architecture designed from the ground up for NVMe-based platforms to deliver higher performance with more predictable I/O latencies, higher space efficiency, per-object data services, and native, high-performance snapshots.
Non-DHCP option for Tunnel Endpoint (TEP) IP assignment: SDDC Manager now provides the option to select Static or DHCP-based IP assignments to Host TEPs for stretched clusters and L3 aware clusters.
Multi-pNIC/Multi-vSphere Distributed Switch UI enhancements: VCF users can configure complex networking configurations, including more vSphere Distributed Switch and NSX switch-related configurations, through the SDDC Manager UI.
Distributed Virtual Port Group Separation for management domain appliances: Enables traffic isolation between management VMs (such as SDDC Manager, NSX Manager, and vCenter) and ESXi Management VMkernel interfaces.
Support for vSphere Lifecycle Manager images in the management domain and VI workload domains: You can deploy the management domain using vSphere Lifecycle Manager images during deployment of a new VCF instance. You can also create new VI workload domains that use vSphere Lifecycle Manager images.
Mixed-mode Support for Workload Domains: A VCF instance can exist in a mixed BOM state where the workload domains are on different VCF 5.x versions. Note: The management domain should be on the highest version in the instance.
Asynchronous update of the pre-check files: The upgrade pre-checks can be updated asynchronously with new pre-checks using a pre-check file provided by VMware.
Workload domain NSX integration: Support for multiple NSX-enabled VDSs for Distributed Firewall use cases.
Tier-0/1 optional for VCF Edge cluster: When creating an Edge cluster with the VCF API, the Tier-0 and Tier-1 gateways are now optional.
VCF Edge nodes support static or pooled IP: When creating or expanding an Edge cluster using VCF APIs, Edge node TEP configuration may come from an NSX IP pool or be specified statically as in earlier releases.
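The two Edge cluster API changes above (optional Tier-0/Tier-1 gateways, and pooled or static Edge node TEPs) can appear together in a single request body. The sketch below is illustrative only; every field name in it is a hypothetical placeholder, not the documented VCF API schema, so consult the VCF API reference for the authoritative edge cluster creation spec.

```python
# Hypothetical request-body sketch: field names are placeholders, not the
# exact VCF public API schema.
edge_cluster_spec = {
    "edgeClusterName": "ec-01",
    # Tier-0/Tier-1 gateway sections are now optional and simply omitted here.
    "edgeNodeSpecs": [
        {
            "edgeNodeName": "en-01",
            # TEP addressing: either reference an NSX IP pool...
            "edgeTepIpPool": "nsx-tep-pool-01",
            # ...or specify addresses statically (earlier-release behavior):
            # "edgeTepIpAddresses": ["192.168.50.11", "192.168.50.12"],
        }
    ],
}

# No Tier-0 gateway is requested in this spec.
assert "tier0Spec" not in edge_cluster_spec
```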
Support for mixed license deployment: A combination of keyed and keyless licenses can be used within the same VCF instance.
Integration with VMware Identity Service: Provides identity federation and SSO across vCenter, NSX, and SDDC Manager. VCF administrators can add Okta to VMware Identity Service as a Day-N operation using the SDDC Manager UI.
VMware vRealize rebranding: VMware recently renamed the vRealize Suite of products to VMware Aria Suite. See the Aria Naming Updates blog post for more details.
VMware Validated Solutions: All VMware Validated Solutions are updated to support VMware Cloud Foundation 5.1. Visit VMware Validated Solutions for the updated guides.
BOM updates: Updated Bill of Materials with new product versions.
Starting with VMware Cloud Foundation 5.1, Configuration Drift Bundles are no longer needed as part of the upgrade process and are now deprecated.
The VMware Cloud Foundation software product comprises the following software Bill of Materials (BOM). The components in the BOM are interoperable and compatible.
| Software Component | Version | Date | Build Number |
|---|---|---|---|
| Cloud Builder VM | 5.1 | 07 NOV 2023 | 22688368 |
| SDDC Manager | 5.1 | 07 NOV 2023 | 22688368 |
| VxRail Manager | 8.0.200 | 04 DEC 2023 | NA |
| VMware vCenter Server Appliance | 8.0 Update 2a | 26 OCT 2023 | 22617221 |
| | 8.0 Update 2 | 21 SEP 2023 | 22443122 |
| VMware NSX | 4.1.2.1 | 07 NOV 2023 | 22667789 |
| VMware Aria Suite Lifecycle | 8.14 | 19 OCT 2023 | 22630473 |
VMware ESXi and VMware vSAN are part of the VxRail BOM.
You can use VMware Aria Suite Lifecycle to deploy VMware Aria Automation, VMware Aria Operations, VMware Aria Operations for Logs, and Workspace ONE Access (formerly known as VMware Identity Manager). VMware Aria Suite Lifecycle determines which versions of these products are compatible and only allows you to install/upgrade to supported versions. See VMware Aria Suite Upgrade Paths on VMware Cloud Foundation 4.4.x +.
VMware Aria Operations for Logs content packs are installed when you deploy VMware Aria Operations for Logs.
The VMware Aria Operations management pack is installed when you deploy VMware Aria Operations.
You can access the latest versions of the content packs for VMware Aria Operations for Logs from the VMware Solution Exchange and the VMware Aria Operations for Logs in-product marketplace store.
The following documentation is available:
You can perform a sequential or skip level upgrade to VMware Cloud Foundation 5.1 on Dell VxRail from VMware Cloud Foundation 4.4 or later. If your environment is at a version earlier than 4.4, you must upgrade the management domain and all VI workload domains to VMware Cloud Foundation 4.4 and then upgrade to VMware Cloud Foundation 5.1.
IMPORTANT: Before you upgrade a vCenter Server, take a file-based backup. See Manually Back Up vCenter Server.
NOTE: Scripts that rely on SSH being activated on ESXi hosts will not work after upgrading to VMware Cloud Foundation 5.1, since VMware Cloud Foundation 5.1 deactivates the SSH service by default. Update your scripts to account for this new behavior. See KB 86230 for information about activating and deactivating the SSH service on ESXi hosts.
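Because VCF 5.1 deactivates the SSH service on ESXi hosts by default, automation scripts can guard SSH-based steps with a reachability probe instead of failing mid-run. A minimal sketch using only the Python standard library; the function name and the `example.local` hostnames are illustrative, not part of any VCF tooling:

```python
import socket

def is_ssh_reachable(host: str, port: int = 22, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the host's SSH port succeeds.

    With VCF 5.1 deactivating SSH on ESXi hosts by default, a failed
    probe usually means the service must be re-activated first
    (see KB 86230).
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and DNS resolution failures.
        return False

# Example: only run SSH-based steps against hosts with SSH active.
hosts = ["esxi-01.example.local", "esxi-02.example.local"]
reachable = [h for h in hosts if is_ssh_reachable(h, timeout=1.0)]
```

This keeps scripts from aborting on the first unreachable host and makes the new default behavior visible in logs rather than surfacing as a generic connection error.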
The following issues have been resolved:
ESXi host firmware not updating after VxRail upgrade from VCF.
Uplinks for overlay traffic are configured incorrectly in shared/single vSphere Distributed Switch topology.
Stretching a VxRail cluster on a VCF environment fails during transport node configuration.
ESXi/VxRail upgrade bundles are displayed after upgrade has been scheduled for the domain.
Parallel VxRail cluster upgrade fails with the error "A specified parameter was not correct: extension.key"
VxRail Lifecycle Management Precheck fails while in progress.
For VMware Cloud Foundation 5.1 known issues, see VMware Cloud Foundation 5.1 known issues. Some of the known issues may be for features that are not available on VMware Cloud Foundation on Dell VxRail.
The following known issues apply to VMware Cloud Foundation 5.1 on Dell VxRail:
Creating a workload domain or adding a cluster using ESXi hosts with GPU drivers
If your ESXi hosts include a GPU driver, you must upload the GPU driver to the VxRail Manager before you:
Create a VI workload domain that uses vSphere Lifecycle Manager images.
Add a cluster to a VI workload domain that uses vSphere Lifecycle Manager images.
Workaround: See https://www.dell.com/support/kbdoc/en-in/000202491.
Adding a VxRail cluster using the SDDC Manager UI displays a validation error
If you add a new VxRail cluster with a custom switch configuration but do not specify MTU values for both the vSphere distributed switch (vDS) and the distributed port groups, validation fails and you cannot add the VxRail cluster.
Workaround: Add MTU values for the vDS and distributed port groups or copy the switch configuration from one of the preconfigured profiles.
New - VxRail does not allow mapping more than two uplinks (one active, one standby NIC)
Mapping more than 2 uplinks causes VxRail cluster validation to fail during the VxRail Dry Run.
Workaround: When creating a custom NIC profile, map no more than 2 uplinks to active/standby uplinks.
New - Incorrect warning displays for Create Workload Domain and Add Cluster UI
An incorrect warning appears when the 'Static IP Pool' option is selected for IP allocation on overlay switches during Workload Domain and Add Cluster workflows. The warning message states, "Clusters with a static IP pool cannot be stretched across availability zones."
Workaround: None. This is a cosmetic issue and can be ignored.
New - Support for consuming VxRail upgrade bundles for 4.x-5.x and 5.x-5.y upgrades is unavailable
Upgrading VxRail Manager from 8.0.100 to 8.0.200 fails with an error stating that an 8.0.x.zip file does not exist.
Workaround: See KB article 94747 for the scripts and manual steps to mitigate this compatibility gap.
New - When adding a VxRail cluster, the teaming policies are not the same as specified in the request payload
The "Add Cluster" workflow allows optional teaming policy inputs for port groups when creating a new cluster. However, the teaming policies are set to default values, even if custom teaming policies are specified in the input spec.
Workaround: Once the cluster is created, you can change the port group teaming policies from vCenter UI manually.
New - Cluster expansion fails during transport node collection creation with the error "Unable to create transport node collection with profile"
Cluster expansion fails when a separate VCF-created vSphere Distributed Switch (VDS) is used for NSX Overlay traffic and Edge clusters are also deployed in the cluster.
Workaround: When the failure occurs in VCF at the task "Create Transport Node Collection if Transport Node Profile is not attached":
Go to NSX Manager and find the transport node profile (TNP) where the new host addition failed. Note the Overlay VDS name.
Go to the vCenter.
Locate the Overlay VDS from Step 1.
Add hosts to the VDS and attach the pNICs.
Retry the failed workflow.
Failed VxRail first run prevents new cluster/workload domain creation
When you use the Workflow Optimization Script to add a cluster or create a workload domain, the script performs a VxRail first run to discover and configure the ESXi hosts. If the VxRail first run fails, some objects associated with the failed task remain in the vCenter Server inventory and prevent new cluster/workload domain creation.
Workaround: Remove the inventory objects associated with the failed task using the vSphere Client.
Log in to the vSphere Client.
In the Hosts and Clusters inventory, right-click the failed cluster and select vSAN > Shutdown cluster.
After the shutdown completes, right-click the failed cluster and select Delete.
After the inventory is cleaned up, you can retry adding a cluster or creating a workload domain.
Add VxRail hosts validation fails
When adding VxRail hosts to a workload domain or cluster that uses Fibre Channel (FC) storage, the task may fail with the message No shared datastore can be found on host. This can happen if you used the Workflow Optimization Script to deploy the workload domain or cluster and chose an FC datastore name other than the default name.
Workaround: Use the VMware Host Client to rename the FC datastore on the new VxRail hosts to match the name you entered when creating the workload domain or cluster. Once the FC datastore name of the new hosts matches the existing FC datastore name, retry the Add VxRail Hosts operation.
VCF on VxRail 5.1 Release Versions UI page may show VCF 5.1 with VxRail Manager 8.0.100-28093095 even though Bringup was completed with VxRail Manager 8.0.200 GA build
If the latest manifest is not updated, the VCF on VxRail 5.1 SDDC Manager UI Release Versions page incorrectly shows VCF 5.1 with VxRail Manager 8.0.100-28093095. This happens because SDDC Manager refers to the default LCM manifest file instead of the latest version file.
Workaround: Refresh the manifest by connecting to the VMware depot or manually update the latest manifest file using the Bundle Transfer Utility.
Unsupported versions of VxRail not restricted during create cluster/domain operations
VCF on VxRail does not restrict using unsupported/unpaired versions of VxRail for create cluster/domain operations. If nodes are re-imaged with a VxRail version that is not paired with the current VCF release, VCF does not restrict using these nodes for creating a cluster/domain.
Workaround: Use the VxRail Manager version paired with the correct VCF release for create domain/cluster operations.
vSAN/vMotion network disruption can occur when using the workflow optimization script
When you use the workflow optimization script to create a new VI workload domain or add a new cluster to an existing workload domain, you can cause a network disruption on existing vSAN/vMotion networks if:
The IP range for the new vSAN network overlaps with the IP range for an existing vSAN network.
The IP range for the new vMotion network overlaps with the IP range for an existing vMotion network.
Workaround: None. Make sure to provide vSAN/vMotion IP ranges that do not overlap with existing vSAN/vMotion networks.
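The overlap condition above can be checked before submitting inputs to the workflow optimization script. A minimal sketch using Python's standard `ipaddress` module; the subnet values are illustrative examples, not values from any real deployment:

```python
import ipaddress

def ranges_overlap(cidr_a: str, cidr_b: str) -> bool:
    """Return True if two CIDR-notation networks share any addresses."""
    return ipaddress.ip_network(cidr_a, strict=False).overlaps(
        ipaddress.ip_network(cidr_b, strict=False)
    )

# Existing vSAN network vs. candidate subnets for a new cluster
# (illustrative values only).
existing_vsan = "172.16.10.0/24"
new_vsan = "172.16.10.128/25"   # contained in the existing range: overlaps
safe_vsan = "172.16.20.0/24"    # disjoint range: safe to use

print(ranges_overlap(existing_vsan, new_vsan))   # True
print(ranges_overlap(existing_vsan, safe_vsan))  # False
```

Running the same check against every existing vSAN and vMotion subnet before creating the workload domain or cluster avoids the disruption, since there is no workaround once overlapping ranges are in use.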