VMware Cloud Foundation 4.3.1 | 21 SEP 2021 | Build 18624509

VMware Cloud Foundation 4.3.1.1 on Dell EMC VxRail | 31 JAN 2022 | Build 19235535

Check for additions and updates to these release notes.

What's New

The VMware Cloud Foundation (VCF) 4.3.1 on Dell EMC VxRail release includes the following:

  • Limited Support for in-place migration from VMware Cloud Foundation 3.10.1.2+ releases: In addition to existing migration methods, customers can now engage the VMware Professional Service Organization (PSO) to perform an assessment for a potential in-place migration from VMware Cloud Foundation 3.10.1.2+ releases to VMware Cloud Foundation 4.3.1. Contact your sales and channel teams for guidance on choosing the best method for migrating your environment.
  • Support for lockdown mode on ESXi hosts: You can enable lockdown mode on ESXi hosts assigned to workload domains through vCenter Server.
  • VMFS on Fibre Channel Storage Support: You can configure a VMFS on Fibre Channel (FC) datastore as primary storage for VxRail VI Workload Domains using the API and SDDC Manager UI.
  • VxRail health check included as part of upgrade prechecks: Running a precheck now evaluates the health of VxRail Managers, in addition to other components.
  • BOM updates: Updated Bill of Materials with new product versions.

VMware Cloud Foundation over Dell EMC VxRail Bill of Materials (BOM)

The VMware Cloud Foundation software product comprises the following software Bill of Materials (BOM). The components in the BOM are interoperable and compatible.

VMware Response to Apache Log4j Remote Code Execution Vulnerability: VMware Cloud Foundation is impacted by CVE-2021-44228 and CVE-2021-45046, as described in VMSA-2021-0028. To remediate these issues, see Workaround instructions to address CVE-2021-44228 & CVE-2021-45046 in VMware Cloud Foundation (KB 87095).

Software Component | Version | Date | Build Number
--- | --- | --- | ---
Cloud Builder VM | 4.3.1 | 21 SEP 2021 | 18624509
SDDC Manager | 4.3.1 | 21 SEP 2021 | 18624509
VxRail Manager | 7.0.241 | 28 SEP 2021 | n/a
VMware vCenter Server Appliance | 7.0 Update 2d | 21 SEP 2021 | 18455184
VMware NSX-T Data Center | 3.1.3.1 | 26 AUG 2021 | 18504668
VMware vRealize Suite Lifecycle Manager | 8.4.1 Patch 2 | 6 SEP 2021 | 18537943
Workspace ONE Access | 3.3.5 | 20 MAY 2021 | 18049997
vRealize Automation | 8.5 | 19 AUG 2021 | 18472703
vRealize Log Insight | 8.4.1 | 15 JUN 2021 | 18136317
vRealize Log Insight Content Pack for NSX-T | 4.0.2 | n/a | n/a
vRealize Log Insight Content Pack for vRealize Automation 8.3+ | 1.0 | n/a | n/a
vRealize Log Insight Content Pack for Linux | 2.1.0 | n/a | n/a
vRealize Log Insight Content Pack for Linux - Systemd | 1.0.0 | n/a | n/a
vRealize Log Insight Content Pack for vRealize Suite Lifecycle Manager 8.0.1+ | 1.0.2 | n/a | n/a
vRealize Log Insight Content Pack for VMware Identity Manager | 2.0 | n/a | n/a
vRealize Operations Manager | 8.5 | 13 JUL 2021 | 18255622
vRealize Operations Management Pack for VMware Identity Manager | 1.3 | n/a | n/a
  • VMware ESXi and VMware vSAN are part of the VxRail BOM.
  • You can use vRealize Suite Lifecycle Manager to deploy vRealize Automation, vRealize Operations Manager, vRealize Log Insight, and Workspace ONE Access.
  • vRealize Log Insight content packs are installed when you deploy vRealize Log Insight.
  • The vRealize Operations Manager management pack is installed when you deploy vRealize Operations Manager.
  • VMware Solution Exchange and the vRealize Log Insight in-product marketplace store only the latest versions of the content packs for vRealize Log Insight. The Bill of Materials table contains the latest versions of the packs that were available at the time VMware Cloud Foundation was released. When you deploy the Cloud Foundation components, the version of a content pack in the vRealize Log Insight in-product marketplace might be newer than the one used for this release.

Documentation

Limitations

The following limitations apply to this release:

  • vSphere Lifecycle Manager (vLCM) is not supported on VMware Cloud Foundation on Dell EMC VxRail.
  • Customer-supplied vSphere Distributed Switch (vDS) is a new feature supported by VxRail Manager 7.0.010 that allows customers to create their own vDS and provide it as an input to be utilized by the clusters they build using VxRail Manager. VMware Cloud Foundation on Dell EMC VxRail does not support clusters that utilize a customer-supplied vDS.

Installation and Upgrade Information

When you deploy the management domain, VxRail Manager 7.0.202 deploys vCenter Server 7.0 Update 2b (build 17958471). However, the VMware Cloud Foundation 4.3.1 BOM requires vCenter Server 7.0 Update 2d (build 18455184). Until you upgrade vCenter Server, you will not be able to deploy a VI workload domain. To upgrade vCenter Server, download and apply the upgrade bundle. See Download VMware Cloud Foundation on Dell EMC VxRail Bundles.

You can perform a sequential or skip level upgrade to VMware Cloud Foundation 4.3.1 on Dell EMC VxRail from VMware Cloud Foundation 4.3, 4.2.1, 4.2, 4.1.0.1, or 4.1. If your environment is at a version earlier than 4.1, you must upgrade the management domain and all VI workload domains to VMware Cloud Foundation 4.1 and then upgrade to VMware Cloud Foundation 4.3.1.

IMPORTANT: Before you upgrade a vCenter Server, take a file-based backup. See Manually Back Up vCenter Server.

VMware Cloud Foundation 4.3.1.1 Release Information

VMware Cloud Foundation 4.3.1.1 includes security fixes. You can perform a sequential or skip-level upgrade to VMware Cloud Foundation 4.3.1.1 from Cloud Foundation 4.3.1, 4.3, 4.2.1, 4.2, 4.1.0.1, or 4.1. If your environment is at a version earlier than 4.1, you must first upgrade the management domain and all VI workload domains to VMware Cloud Foundation 4.1.

To upgrade the management domain, apply the following bundles, in order:

NOTE: Before triggering an upgrade to VCF 4.3.1.1, download all VCF 4.3.1.0 component upgrade and install bundles.

  • VMware Cloud Foundation bundle.
  • Configuration drift bundle.

NOTE: When you upgrade from VMware Cloud Foundation 4.3.1 to 4.3.1.1, no configuration drift bundle is required.

VMware Cloud Foundation 4.3.1.1 contains the following BOM updates:

Software Component | Version | Date | Build Number
--- | --- | --- | ---
SDDC Manager | 4.3.1.1 | 31 JAN 2022 | 19235535

This release addresses the following issues in SDDC Manager:

  • Apache Log4j Remote Code Execution Vulnerability: (CVE-2021-44228 and CVE-2021-45046) as described in VMSA-2021-0028.
  • XML External Entity (XXE) injection vulnerability as described in CVE-2021-23463.
  • Credential logging vulnerability as described in VMSA-2022-0003. See KB 87050 for more information.

Resolved Issues

The following issues have been resolved:

  • vCenter Server: VMware Security Advisory VMSA-2021-0020 resolves CVE-2021-22011 and CVE-2021-22018.

Known Issues

For VMware Cloud Foundation 4.3.1 known issues, see VMware Cloud Foundation 4.3.1 known issues. Some of the known issues may be for features that are not available on VMware Cloud Foundation on Dell EMC VxRail.

VMware Cloud Foundation 4.3.1 on Dell EMC VxRail known issues appear below:

  • Using the Workflow Optimization script to create a VI workload domain or add a VxRail cluster fails

    If you are using a VxRail JSON input file with the Workflow Optimization script, creating a VI workload domain or adding a VxRail cluster may fail with the error "Vmnics in host are not attached to DVS uplinks in lexicographic order {vmnic0=uplink1, vmnic1=uplink2}". This error only happens when you specify a separate DVS for overlay traffic. It does not happen when you use the system DVS for overlay traffic.

    Workaround: Edit the JSON file to ensure that vmnics are mapped to uplinks in lexicographic order, that is, uplink1 is mapped to a lower numbered vmnic than uplink2.

    For example, this input will result in an error:

    "uplinks": [ {"name": "uplink1","physical_nic": "vmnic1"}, {"name": "uplink2", "physical_nic": "vmnic0"}]

    But this input will succeed:

    "uplinks": [ {"name": "uplink1","physical_nic": "vmnic0"}, {"name": "uplink2", "physical_nic": "vmnic1"}]
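    The ordering rule above can be checked before running the Workflow Optimization script. A minimal sketch, assuming the uplinks list has the flat shape shown in the examples (in the real VxRail JSON input file it is nested under other keys):

    ```python
    import json

    def uplinks_in_lexicographic_order(uplinks):
        # Sort the (uplink name, vmnic) pairs by uplink name, then verify
        # the vmnics also come out in ascending order, i.e. uplink1 is
        # mapped to a lower-numbered vmnic than uplink2.
        pairs = sorted((u["name"], u["physical_nic"]) for u in uplinks)
        vmnics = [vmnic for _, vmnic in pairs]
        return vmnics == sorted(vmnics)

    # The failing and succeeding inputs from the examples above:
    bad = json.loads('[{"name": "uplink1", "physical_nic": "vmnic1"},'
                     ' {"name": "uplink2", "physical_nic": "vmnic0"}]')
    good = json.loads('[{"name": "uplink1", "physical_nic": "vmnic0"},'
                      ' {"name": "uplink2", "physical_nic": "vmnic1"}]')

    print(uplinks_in_lexicographic_order(bad))   # False
    print(uplinks_in_lexicographic_order(good))  # True
    ```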

  • Upgrading the Supervisor Cluster on a Workload Management VI workload domain fails

    While upgrading the Supervisor Cluster, the upgrade fails or gets stuck because multiple VMkernel adapters are tagged with management traffic.

    Workaround: Follow the steps in the Dell EMC KB and retry the upgrade.

  • You cannot reuse an existing static IP pool when adding a VxRail cluster to the management domain from the SDDC Manager UI

    The option to reuse an existing static IP pool is disabled when you add a VxRail cluster to the management domain using the SDDC Manager UI.

    Workaround: Use the MultiDvsAutomator script to add the VxRail cluster. See Add a VxRail Cluster to a Workload Domain Using the MultiDvsAutomator Script.

  • Adding a new ESXi node using the VxRail Manager plugin for vCenter Server fails

    While expanding a VxRail cluster with a newly installed L3 node, the add operation fails with the error http.client.RemoteDisconnected: Remote end closed connection without response.

    Workaround:

    1. Log in to the newly installed ESXi node and restart the proxy service:

      /etc/init.d/vxrail-proxy-service restart

    2. Retry the operation.

  • VxRail upgrade task in SDDC Manager displays incorrect status

    The VxRail upgrade task status in SDDC Manager is displayed as running even after the upgrade is complete or failed.

    Workaround: Restart the LCM service:

    1. Take a snapshot of the SDDC Manager VM from the vSphere Web Client.
    2. Using SSH, log in to the SDDC Manager VM with the following credentials:

      User name: vcf

      Password: use the password specified in the deployment parameter workbook.

    3. Enter su to switch to the root user.
    4. Run the following command:

      systemctl restart lcm

      Task status is synchronized after approximately 10 minutes.

  • vSphere Cluster Services (vCLS) VMs are moved to remote storage after a VxRail cluster with HCI Mesh storage is imported to VMware Cloud Foundation

    When you configure HCI Mesh storage on a VxRail cluster and then import it to VMware Cloud Foundation, vCLS VMs are moved to the remote storage instead of being placed on the cluster's primary storage. This can result in errors when you unmount the remote storage for the cluster.

    Workaround:

    1. Log in to the vCenter Server UI.
    2. Retrieve the cluster MoRef ID.

      In the Hosts and Clusters tab, click the Cluster entity and check the URL.

      For example:

      https://dr26avc-1.rainpole.local/ui/app/cluster;nav=h/urn:vmomi:ClusterComputeResource:domain-c10:373acc41-be7e-4f12-855d-094e5f135a67/configure/plugin/com.vmware.vsphere.client.h5vsan/com.vmware.vsan.client.h5vsanui.cluster.configure.vsan.csd

      The cluster MoRef ID for this URL is 'domain-c10'.

    3. Click the vCenter entity.
    4. Navigate to Configure > Advanced Settings.

      By default, the vCLS property for the cluster is set to true:

      "config.vcls.clusters.<cluster MoRef ID>.enabled"

      For the example cluster above, this is "config.vcls.clusters.domain-c10.enabled".

    5. Disable vCLS on the cluster.

      Click Edit Settings, set the flag to 'false', and click Save.

    6. Wait 2 minutes for the vCLS VMs to be deleted.
    7. Unmount the remote storage.
    8. Repeat steps 3 and 4.
    9. Enable vCLS on the cluster.

      Click Edit Settings, set the flag to 'true', and click Save.

    10. Wait 2-3 minutes for the vCLS VMs to be deployed.

      Three vCLS VMs are displayed in the VMs and Templates tab.
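    The MoRef ID lookup in step 2 can also be scripted by parsing the vSphere Client URL. A minimal sketch, using the example URL from the workaround:

    ```python
    import re

    def cluster_moref_from_url(url):
        # The vSphere Client URL embeds the cluster's managed object
        # reference as 'ClusterComputeResource:domain-cNN'.
        match = re.search(r"ClusterComputeResource:(domain-c\d+)", url)
        return match.group(1) if match else None

    url = ("https://dr26avc-1.rainpole.local/ui/app/cluster;nav=h/"
           "urn:vmomi:ClusterComputeResource:domain-c10:"
           "373acc41-be7e-4f12-855d-094e5f135a67/configure")

    print(cluster_moref_from_url(url))  # domain-c10
    ```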

  • vVols is not a supported storage option

    Although VMware Cloud Foundation on Dell EMC VxRail does not support vVols, storage settings options related to vVols appear in the SDDC Manager UI. Do not use Administration > Storage Settings to add a VASA provider.

    Workaround: See KB 81321 for information about how to remove the Storage Settings from the SDDC Manager UI.

  • The API does not support adding a host to a cluster with dead hosts or removing dead hosts from a cluster

    The following flags appear in the API Reference Guide and API Explorer, but are not supported with VMware Cloud Foundation on Dell EMC VxRail.

    • forceHostAdditionInPresenceofDeadHosts: Use to add a host to a cluster that contains dead hosts. Bypasses validation of disconnected hosts and vSAN cluster health.
    • forceByPassingSafeMinSize: Use to remove dead hosts from a cluster, bypassing validations.

    Workaround: None.

  • Adding a VxRail cluster with hosts spanning multiple racks to a workload domain fails

    If you add hosts that span racks (use different VLANs for management, vSAN, and vMotion) to a VxRail cluster after you perform the VxRail first run, but before you add the VxRail cluster to a workload domain in SDDC Manager, the task fails.

    Workaround:

    1. Create a VxRail cluster containing hosts from a single rack and perform the VxRail first run.
    2. Add the VxRail cluster to a workload domain in SDDC Manager.
    3. Add hosts from another rack to the VxRail cluster in the vCenter Server for VxRail.
    4. Add the VxRail hosts to the VxRail cluster in SDDC Manager.