VMware Cloud Foundation 4.2.1 | 25 MAY 2021 | Build 18016307
Check for additions and updates to these release notes.
The VMware Cloud Foundation (VCF) 4.2.1 release includes the following:
The Cloud Foundation software product comprises the following software Bill of Materials (BOM). The components in the BOM are interoperable and compatible.
VMware Response to Apache Log4j Remote Code Execution Vulnerability: VMware Cloud Foundation is impacted by CVE-2021-44228, and CVE-2021-45046 as described in VMSA-2021-0028. To remediate these issues, see Workaround instructions to address CVE-2021-44228 & CVE-2021-45046 in VMware Cloud Foundation (KB 87095).
| Software Component | Version | Date | Build Number |
|---|---|---|---|
| Cloud Builder VM | 4.2.1 | 25 MAY 2021 | 18016307 |
| SDDC Manager | 4.2.1 | 25 MAY 2021 | 18016307 |
| VMware vCenter Server Appliance | 7.0.1.00301 | 25 MAY 2021 | 17956102 |
| VMware ESXi | 7.0 Update 1d | 04 FEB 2021 | 17551050* |
| VMware NSX-T Data Center | 3.1.2 | 17 APR 2021 | 17883596 |
| VMware vRealize Suite Lifecycle Manager | 8.2 Patch 2 | 04 FEB 2021 | 17513665 |
| Workspace ONE Access | 3.3.4 | 04 FEB 2021 | 17498518 |
| vRealize Automation | 8.2 | 06 OCT 2020 | 16980951 |
| vRealize Log Insight | 8.2 | 06 OCT 2020 | 16957702 |
| vRealize Log Insight Content Pack for NSX-T | 3.9.2 | n/a | n/a |
| vRealize Log Insight Content Pack for Linux | 2.1 | n/a | n/a |
| vRealize Log Insight Content Pack for Linux - Systemd | 1.0 | n/a | n/a |
| vRealize Log Insight Content Pack for vRealize Suite Lifecycle Manager 8.0.1+ | 1.0.2 | n/a | n/a |
| vRealize Log Insight Content Pack for VMware Identity Manager | 2.0 | n/a | n/a |
| vRealize Operations Manager | 8.2 | 06 OCT 2020 | 16949153 |
| vRealize Operations Management Pack for VMware Identity Manager | 1.1 | n/a | n/a |
* VMware ESXi 7.0 Update 1d is a patch release and does not have an ISO available for download on My VMware. You can create an ISO to install the correct version of ESXi on your servers. See Create a Custom ISO Image for ESXi.
The SDDC Manager software is licensed under the Cloud Foundation license. As part of this product, the SDDC Manager software deploys specific VMware software products.
The following VMware software components deployed by SDDC Manager are licensed under the VMware Cloud Foundation license:
The following VMware software components deployed by SDDC Manager are licensed separately:
NOTE: Only one vCenter Server license is required for all vCenter Servers deployed in a Cloud Foundation system.
For details about the specific VMware software editions that are licensed under the licenses you have purchased, see the Cloud Foundation Bill of Materials (BOM) section above.
For general information about the product, see VMware Cloud Foundation.
To access the Cloud Foundation documentation, go to the VMware Cloud Foundation product documentation.
To access the documentation for VMware software products that SDDC Manager can deploy, see the product documentation and use the drop-down menus on the page to choose the appropriate version:
The Cloud Foundation web-based interface supports the latest two versions of the following web browsers (Internet Explorer is not supported):
For the Web-based user interfaces, the supported standard resolution is 1024 by 768 pixels. For best results, use a screen resolution within these tested resolutions:
Resolutions below 1024 by 768, such as 640 by 960 or 480 by 800, are not supported.
You can install VMware Cloud Foundation 4.2.1 as a new release or upgrade to VMware Cloud Foundation 4.2.1 from VMware Cloud Foundation 4.2, 4.1.0.1, or 4.1.
Installing as a New Release
The new installation process has three phases:
Phase One: Prepare the Environment
The Planning and Preparation Workbook provides detailed information about the software, tools, and external services that are required to implement a Software-Defined Data Center (SDDC) with VMware Cloud Foundation, using a standard architecture model.
Phase Two: Image all servers with ESXi
Image all servers with the ESXi version mentioned in the Cloud Foundation Bill of Materials (BOM) section. See the VMware Cloud Foundation Deployment Guide for information on installing ESXi.
Phase Three: Install Cloud Foundation 4.2.1
Refer to the VMware Cloud Foundation Deployment Guide for information on deploying Cloud Foundation.
Upgrading to VMware Cloud Foundation 4.2.1
You can upgrade to VMware Cloud Foundation 4.2.1 from VMware Cloud Foundation 4.2, 4.1.0.1, or 4.1. For more information, see VMware Cloud Foundation Lifecycle Management.
Access to private APIs that use basic authentication is being deprecated in an upcoming release. You must switch to using public APIs.
The following issues are resolved in this release:
Workload Management does not support NSX-T Data Center Federation
You cannot deploy Workload Management (vSphere with Tanzu) to a workload domain when that workload domain's NSX-T Data Center instance is participating in an NSX-T Data Center Federation.
NSX-T Guest Introspection (GI) and NSX-T Service Insertion (SI) are not supported on stretched clusters
There is no support for stretching clusters where NSX-T Guest Introspection (GI) or NSX-T Service Insertion (SI) are enabled. VMware Cloud Foundation detaches Transport Node Profiles from AZ2 hosts to allow AZ-specific network configurations. NSX-T GI and NSX-T SI require that the same Transport Node Profile be attached to all hosts in the cluster.
Stretched clusters and Workload Management
You cannot stretch a cluster on which Workload Management is deployed.
Special characters not allowed in the Username, Password, and Template Name fields on the Microsoft CA Configuration page
If any of the following special characters are used in the Username, Password, or Template Name fields on the Microsoft CA Configuration page, the configuration cannot be saved:
Workaround: Delete the special character and then save the configuration.
Different vCenter Server build numbers on SDDC Manager and vCenter Server UI
The vCenter Server build number on the vCenter Server UI is different from the build number displayed on SDDC Manager.
Async Patch Tool Known Issues
The Async Patch Tool is a utility that allows you to apply critical patches to certain VMware Cloud Foundation components (NSX-T Manager, vCenter Server, and ESXi) outside of VMware Cloud Foundation releases. The Async Patch Tool also allows you to enable upgrade of an async patched system to a new version of VMware Cloud Foundation.
See the Async Patch Tool Release Notes for known issues.
Cluster-level ESXi upgrade fails
Cluster-level selection during upgrade does not consider the health status of the clusters and may show a cluster's status as Available, even for a faulty cluster. If you select a faulty cluster, the upgrade fails.
Workaround: Always perform an update precheck to validate the health status of the clusters. Resolve any issues before upgrading.
When you skip hosts during an ESXi upgrade of a vLCM-enabled workload domain, the upgrade may fail
Due to a known vSphere issue, the ESXi upgrade may fail with a "ConstraintValidationException" error when a host is skipped.
Workaround: Check the vSphere Client and its logs for details on what caused the error. Resolve the issues and retry the upgrade.
When you upgrade to VMware Cloud Foundation 4.1, one of the vSphere Cluster Services (vCLS) agent VMs gets placed on local storage
vSphere Cluster Services (vCLS) is new functionality in vSphere 7.0 Update 1 that ensures that cluster services remain available, even when the vCenter Server is unavailable. vCLS deploys three vCLS agent virtual machines to maintain cluster services health. When you upgrade to VMware Cloud Foundation 4.1, one of the vCLS VMs may get placed on local storage instead of shared storage. This could cause issues if you delete the ESXi host on which the VM is stored.
Workaround: Disable and re-enable vCLS on the cluster to deploy all the vCLS agent VMs to shared storage.
To disable and re-enable vCLS:
1. In the vSphere Client, copy the cluster's moref id from the URL. For example, if the URL displays as https://vcenter-1.vrack.vsphere.local/ui/app/cluster;nav=h/urn:vmomi:ClusterComputeResource:domain-c8:503a0d38-442a-446f-b283-d3611bf035fb/summary, then the moref id is domain-c8.
2. In the vCenter Server Advanced Settings, set config.vcls.clusters.<moref id>.enabled to false and click Save. If the config.vcls.clusters.<moref id>.enabled setting does not appear for your moref id, enter its Name, enter false for the Value, and click Add.
3. After the vCLS agent VMs are removed from the cluster, set config.vcls.clusters.<moref id>.enabled to true and click Save.
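The moref id extraction described above can be sketched in shell. This is an illustrative helper only, not part of any VMware tooling; the URL is the example from this note.

```shell
# Illustrative sketch: pull the cluster moref id out of a vSphere Client URL
# and build the corresponding advanced-setting name.
url='https://vcenter-1.vrack.vsphere.local/ui/app/cluster;nav=h/urn:vmomi:ClusterComputeResource:domain-c8:503a0d38-442a-446f-b283-d3611bf035fb/summary'
# The moref id sits between "ClusterComputeResource:" and the next colon.
moref=$(printf '%s\n' "$url" | sed -n 's/.*ClusterComputeResource:\([^:]*\):.*/\1/p')
echo "config.vcls.clusters.${moref}.enabled"
```

Running this against the example URL prints the setting name for moref id domain-c8.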
You are unable to update NSX-T Data Center in the management domain or in a workload domain with vSAN principal storage because of an error during the NSX-T transport node precheck stage
In SDDC Manager, when you run the upgrade precheck before updating NSX-T Data Center, the NSX-T transport node validation fails with the following error.
No coredump target has been configured. Host core dumps cannot be saved.:System logs on host sfo01-m01-esx04.sfo.rainpole.io are stored on non-persistent storage. Consult product documentation to configure a syslog server or a scratch partition.
Because the upgrade precheck results in an error, you cannot proceed with updating the NSX-T Data Center instance in the domain. VMware Validated Design supports vSAN as the principal storage in the management domain. However, vSAN datastores do not support scratch partitions. See VMware KB article 2074026.
Workaround: Disable the update precheck validation for the subsequent NSX-T Data Center update.
1. Log in to the SDDC Manager VM as the vcf user using a Secure Shell (SSH) client.
2. Open the application-prod.properties file for editing and disable the NSX-T update precheck validation.
3. Restart the LCM service: systemctl restart lcm
NSX-T upgrade may fail at the step NSX T TRANSPORT NODE POSTCHECK STAGE
NSX-T upgrade may not proceed beyond the NSX T TRANSPORT NODE POSTCHECK STAGE.
Workaround: Contact VMware support.
Bundle transfer utility command retrieves incorrect bundle types
When you run the bundle transfer utility command to retrieve install bundles, the results include install and configuration drift bundles.
Workaround: Review bundle components to validate the bundle type.
Workload Management upgrade fails
Workload Management can be upgraded only after NSX-T, vCenter Server, and ESXi have been upgraded. If you try upgrading Workload Management before upgrading these components, the upgrade fails.
Workaround: Contact VMware Support.
ESXi hosts with an HPE custom image cannot be upgraded
On ESXi hosts with an HPE custom image, the HPE Agentless Management Service (amsd) blocks the upgrade of NSX components with the following error message:
Error loading plugin /usr/lib/vmware/esxcli/ext/smad_rev.xml skipping. Error was: Error while trying to register the plugin /usr/lib/vmware/esxcli/ext/smad_rev.xml Duplicate top-level namespaces must have the samesma The functionality in /usr/lib/vmware/esxcli/ext/smad_rev.xml will not be available until this issue is resolved.
Impacted versions and components are listed below.
Workaround: Before upgrading to VMware Cloud Foundation 4.2.1, apply the fix described in the HPE Customer advisory https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-a00116792en_us.
The Cloud Foundation Builder VM remains locked after more than 15 minutes.
The VMware Imaging Appliance (VIA) locks out the admin user after three unsuccessful login attempts. Normally the lockout resets after fifteen minutes, but the underlying Cloud Foundation Builder VM does not automatically unlock the account.
Workaround: Log in to the VM console of the Cloud Foundation Builder VM as the root user, then unlock the admin account by resetting its failed-login counter with the following command:
pam_tally2 --user=<user> --reset
Disabling CEIP on SDDC Manager does not disable CEIP on vRealize Automation and vRealize Suite Lifecycle Manager
When you disable CEIP on the SDDC Manager Dashboard, data collection is not disabled on vRealize Automation and vRealize Suite Lifecycle Manager. This is because of API deprecation in vRealize Suite 8.x.
Workaround: Manually disable CEIP in vRealize Automation and vRealize Suite Lifecycle Manager. For more information, see VMware vRealize Automation Documentation and VMware vRealize Suite Lifecycle Manager Documentation.
Generate CSR task for a component hangs
When you generate a CSR, the task may fail to complete due to issues with the component's resources. For example, when you generate a CSR for NSX Manager, the task may fail to complete due to issues with an NSX Manager node. You cannot retry the task once the resource is up and running again.
Workaround: Once the resource is up and running again, restart the operations manager service on the SDDC Manager VM and then retry the task: systemctl restart operationsmanager
SoS utility options for health check are missing information
Due to limitations of the ESXi service account, some information is unavailable in the following health check options:
- Devices and Driver information for ESXi hosts.
- vSAN Health Status or Total no. of disks information for ESXi hosts.
Host information on the Inventory -> Hosts page takes too long to load or does not load
When a large number of hosts are unassigned, the Inventory -> Hosts page may take up to three minutes to display host information or may display the following message:
Failed to load host details, Please retry or contact the service provider and provide the reference token
Workaround: Use VMware Cloud Foundation APIs for all host-related tasks.
Host commissioning may fail if API /v1/hosts/validations/commissions is used immediately after the host validation API /v1/hosts/validations
If you run the host commissioning API /v1/hosts/validations/commissions immediately after the host validation API /v1/hosts/validations, the validation workflow may delete the temporary truststore created by the host commissioning workflow. This may cause host commissioning to fail.
Workaround: After host validation, wait for at least one minute before running the host commissioning API.
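The wait-then-commission sequencing can be sketched as a dry run. This is a hedged example: the SDDC Manager FQDN is a placeholder, the run() helper only prints each command instead of executing it, and the endpoints are the ones named in this note.

```shell
# Dry-run sketch of the recommended sequencing; nothing touches a live system.
SDDC="sddc-manager.vrack.vsphere.local"   # placeholder FQDN
run() { echo "+ $*"; }                    # print commands instead of running them
run curl -k -X POST "https://${SDDC}/v1/hosts/validations"              # validate first
run sleep 60                                                            # wait at least one minute
run curl -k -X POST "https://${SDDC}/v1/hosts/validations/commissions"  # then commission
```

In a real script you would drop the run() wrapper, add authentication headers and the host spec payload, and poll the validation result before sleeping.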
Adding host fails when host is on a different VLAN
A host add operation can sometimes fail if the host is on a different VLAN.
NOTE: If you later remove this host, you must also manually remove the portgroup if it is not being used by any other host.
Deploying partner services on an NSX-T workload domain displays an error
Deploying partner services, such as McAfee or Trend, on a workload domain enabled for vSphere Update Manager (VUM), displays the “Configure NSX at cluster level to deploy Service VM” error.
Workaround: Attach the Transport node profile to the cluster and try deploying the partner service. After the service is deployed, detach the transport node profile from the cluster.
If the witness ESXi version does not match with the host ESXi version in the cluster, vSAN cluster partition may occur
vSAN stretch cluster workflow does not check the ESXi version of the witness host. If the witness ESXi version does not match the host version in the cluster, then vSAN cluster partition may happen.
vSAN partition and critical alerts are generated when the witness MTU is not set to 9000
If the MTU of the witness switch in the witness appliance is not set to 9000, the vSAN stretch cluster partition may occur.
Workaround: Set the MTU of the witness switch in the witness appliance to 9000 MTU.
VI workload domain creation or expansion operations fail
If there is a mismatch between the letter case (upper or lower) of an ESXi host's FQDN and the FQDN used when the host was commissioned, then workload domain creation and expansion may fail.
Workaround: ESXi hosts should have lower case FQDNs and should be commissioned using lower case FQDNs.
Adding a host to a vLCM-enabled workload domain configured with the Dell Hardware Support Manager (OMIVV) fails
When you try to add a host to a vSphere cluster for a workload domain enabled with vSphere Lifecycle Manager (vLCM), the task fails and the domain manager log reports "The host (host-name) is currently not managed by OMIVV." The domain manager logs are located at /var/log/vmware/vcf/domainmanager on the SDDC Manager VM.
Workaround: Update the hosts inventory in OMIVV and retry the add host task in the SDDC Manager UI. See the Dell documentation for information about updating the hosts inventory in OMIVV.
Adding a vSphere cluster or adding a host to a workload domain fails
Under certain circumstances, adding a host or vSphere cluster to a workload domain fails at the Configure NSX-T Transport Node or Create Transport Node Collection subtask.
Workaround:
1. Log in to the NSX Manager appliance as admin and then log in as root.
2. Run the following command: sysctl -w net.ipv4.tcp_en=0
3. In the NSX Manager UI, select the partial success node and click Configure NSX.
4. Click Next and then complete the wizard.
When all host issues are resolved, transport node creation starts for the failed nodes. When all hosts are successfully created as transport nodes, retry the failed add vSphere cluster or add host task from the SDDC Manager UI.
The vSAN Performance Service is not enabled for vSAN clusters when CEIP is not enabled
If you do not enable the VMware Customer Experience Improvement Program (CEIP) in SDDC Manager, when you create a workload domain or add a vSphere cluster to a workload domain, the vSAN Performance Service is not enabled for vSAN clusters. When CEIP is enabled, data from the vSAN Performance Service is provided to VMware and this data is used to aid VMware Support with troubleshooting and for products such as VMware Skyline, a proactive cloud monitoring service. See Customer Experience Improvement Program for more information on the data collected by CEIP.
Workaround: Enable CEIP in SDDC Manager. See the VMware Cloud Foundation Documentation. After CEIP is enabled, a scheduled task that enables the vSAN Performance Service on existing clusters in workload domains runs every three hours. The service is also enabled for new workload domains and clusters. To enable the vSAN Performance Service immediately, see the VMware vSphere Documentation.
Creation or expansion of a vSAN cluster with more than 32 hosts fails
By default, a vSAN cluster can grow up to 32 hosts. With large cluster support enabled, a vSAN cluster can grow up to a maximum of 64 hosts. However, even with large cluster support enabled, a creation or expansion task can fail on the sub-task Enable vSAN on vSphere Cluster.
For more information about large cluster support, see https://kb.vmware.com/kb/2110081.
Removing a host from a cluster, deleting a cluster from a workload domain, or deleting a workload domain fails if Service VMs (SVMs) are present
If you deployed an endpoint protection service (such as guest introspection) to a cluster through NSX-T Data Center, then removing a host from the cluster, deleting the cluster, or deleting the workload domain containing the cluster will fail on the subtask Enter Maintenance Mode on ESXi Hosts.
vCenter Server overwrites the NFS datastore name when adding a cluster to a VI workload domain
If you add an NFS datastore with the same NFS server IP address, but a different NFS datastore name, as an NFS datastore that already exists in the workload domain, then vCenter Server applies the existing datastore name to the new datastore.
Workaround: If you want to add an NFS datastore with a different datastore name, then it must use a different NFS server IP address.
Deleting a cluster that was renamed in the vSphere Client does not delete the cluster's transport node profile or uplink profile
When you use the vSphere Client to rename a cluster and then delete that cluster from SDDC Manager, the transport node profile and uplink profile associated with the cluster are not removed.
Workaround: Manually delete the transport node profile and uplink profile from NSX Manager and try deleting the cluster again.
Uplink profile names follow the pattern: <vcenter host name>-<old-cluster-name>.
For example, if the vCenter Server's FQDN is vcenter-vsan.vrack.vsphere.local and the cluster's old name is nsxt-datacenter, then the uplink profile name would be vcenter-vsan-nsxt-datacenter.
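The naming pattern above can be sketched in shell using the example values from this note (illustrative only; no live NSX objects are involved):

```shell
# Sketch: build the uplink profile name from the pattern
# <vcenter host name>-<old-cluster-name>.
vcenter_host="vcenter-vsan"          # host portion of vcenter-vsan.vrack.vsphere.local
old_cluster_name="nsxt-datacenter"
uplink_profile="${vcenter_host}-${old_cluster_name}"
echo "$uplink_profile"
```

Knowing the expected name makes the stale profile easy to locate in NSX Manager before retrying the cluster deletion.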
Add host may fail for cluster using vLCM images
Add host for cluster using vLCM images may fail with the following error:
Unable to create transport node collection. Transport Node Collection is failed with state FAILED_TO_REALIZE
Workaround: Retry the add host workflow.
Federation creation information not displayed if you leave the Multi-Instance Management Dashboard
Federation creation progress is displayed on the Multi-Instance Management Dashboard. If you navigate to another screen and then return to the Multi-Instance Management Dashboard, progress messages are not displayed. Instead, an empty map with no Cloud Foundation instances is displayed until the federation is created.
Workaround: Stay on the Multi-Instance Management Dashboard until the task completes. If you have navigated away, wait for around 20 minutes and then return to the dashboard, by which time the operation should have completed.
Multi-Instance Management Dashboard operation fails
After a controller joins or leaves a federation, Kafka is restarted on all controllers in the federation. It can take up to 20 minutes for the federation to stabilize. Any operations performed on the dashboard during this time may fail.
Workaround: Retry the operation after the federation has stabilized.
Join operation fails
A join operation may fail if a controller SDDC Manager has a public certificate with a depth greater than one (that is, it has intermediate certificates).
Workaround: Trust the intermediate certificate of the controller SDDC Manager. See KB 80986.
Stretch cluster operation fails
If the cluster that you are stretching does not include a powered-on VM with an operating system installed, the operation fails at the "Validate Cluster for Zero VMs" task.
Workaround: Make sure the cluster has a powered-on VM with an operating system installed before stretching the cluster.
vRealize Operations Manager: VMware Security Advisory VMSA-2021-0018
VMSA-2021-0018 describes security vulnerabilities that affect VMware Cloud Foundation.
Workaround: See KB 85452 for information about applying vRealize Operations Security Patches that resolve the issues.
vRealize Log Insight: VMSA-2021-0019
VMSA-2021-0019 describes security vulnerabilities that affect VMware Cloud Foundation.
VMware vRealize Log Insight contains a Cross Site Scripting (XSS) vulnerability due to improper user input validation. An attacker with user privileges may be able to inject a malicious payload via the Log Insight UI which would be executed when the victim accesses the shared dashboard link. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned identifier CVE-2021-22021 to this issue.
Workaround: See KB 85405 for information about applying a vRealize Log Insight Security Patch that resolves the issue.
When you deploy a vRealize Suite product in vRealize Suite Lifecycle Manager, some of the infrastructure details may not be loaded
When you deploy a vRealize Suite product in vRealize Suite Lifecycle Manager, the Infrastructure details may not get populated. This may indicate a failed vCenter data collection request in vRealize Suite Lifecycle Manager, which prevents vRealize Suite Lifecycle Manager from validating the current state of vCenter Server inventory.
Workaround: Retry the failed vCenter data collection request until it successfully passes and then try to complete the vRealize Suite product deployment again.
Updating the DNS or NTP server configuration does not apply the update to vRealize Automation
Using the Cloud Foundation API to update the DNS or NTP servers does not apply the update to vRealize Automation due to a bug in vRealize Suite Lifecycle Manager.
Workaround: Manually update the DNS or NTP server(s) for vRealize Automation.
Update the DNS server(s) for vRealize Automation
sed '/nameserver.*/d' -i /etc/resolv.conf
echo nameserver [DNS server IP] >> /etc/resolv.conf
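The resolv.conf edit above can be rehearsed safely against a scratch copy before touching the appliance. In this sketch, 10.0.0.53 stands in for [DNS server IP] and the temporary file stands in for /etc/resolv.conf.

```shell
# Rehearse the DNS update on a scratch copy; the live /etc/resolv.conf is untouched.
scratch=$(mktemp)
printf 'nameserver 10.0.0.1\nsearch vrack.vsphere.local\n' > "$scratch"
sed '/nameserver.*/d' -i "$scratch"          # remove existing nameserver entries
echo "nameserver 10.0.0.53" >> "$scratch"    # append the new DNS server
result=$(cat "$scratch")
echo "$result"
rm -f "$scratch"
```

Note that the sed expression deletes every nameserver line, so all DNS servers you still need must be re-added with additional echo lines.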
Update the NTP server(s) for vRealize Automation
vracli ntp systemd --set [NTP server IP]
To add multiple NTP servers:
vracli ntp systemd --set [NTP server 1 IP,NTP server 2 IP]
vracli ntp show-config
vracli ntp apply
vracli ntp show-config
Connecting vRealize Log Insight to a workload domain fails at the "Enable Log Collection for vSphere" step
When you connect vRealize Log Insight to a workload domain, it fails at the Enable Log Collection for vSphere step. Expanding the connect workflow task in the SDDC Manager Recent Tasks widget displays the following errors:
Cannot configure vCenter in vRealize Log Insight
Failed post request in vRLI
Workaround: Reboot the vRLI VMs.
vRealize Suite product deployment fails with error "Failed to get Environment ID by given Host Name"
When you deploy multiple vRealize Suite products simultaneously in vRealize Suite Lifecycle Manager, product registration with SDDC Manager may fail.
Workaround: Deploy vRealize Suite products one at a time.
Connecting vRealize Operations Manager to a workload domain fails at the "Create vCenter Server Adapter in vRealize Operations Manager for the Workload Domain" step
When you connect vRealize Operations Manager to a workload domain, it fails at the Create vCenter Server Adapter in vRealize Operations Manager for the Workload Domain step with a message similar to: Failed to configure vCenter <vcenter-hostname> in vROps <vrops-hostname>, because Failed to manage vROps adapter. This issue can occur when the vRealize Operations cluster is offline.
Workaround: Make sure that the vRealize Operations cluster is online.