VMware Cloud Foundation 3.11 | 14 FEB 2022 | Build 19312783
VMware Cloud Foundation 3.11.0.1 | 07 APR 2022 | Build 19571759
Check for additions and updates to these release notes.
VMware Cloud Foundation 3.11 can be upgraded either from VMware Cloud Foundation 3.10.2.2 (sequential upgrade) or from VMware Cloud Foundation 3.7.1 or later (skip-level upgrade). It cannot be deployed as a new release. For more information, see Upgrade Information below.
The VMware Cloud Foundation (VCF) 3.11 release includes the following:
Security fixes for Apache Log4j Remote Code Execution Vulnerability: This release fixes CVE-2021-44228 and CVE-2021-45046. See VMSA-2021-0028.
Security fixes for Apache HTTP Server: This release fixes CVE-2021-40438. See CVE-2021-40438.
Improvements to upgrade prechecks: Upgrade prechecks have been expanded to verify filesystem capacity, file permissions, and passwords. These improved prechecks help identify issues that you need to resolve to ensure a smooth upgrade.
Skip-level upgrade to VMware Cloud Foundation 3.11: Upgrade directly to VMware Cloud Foundation 3.11 using the skip-level upgrade CLI tool, which has been updated with additional guardrails, prechecks, and usability improvements.
Scaling improvements: VMware Cloud Foundation 3.11 supports up to 1000 ESXi hosts per SDDC Manager instance. See VMware Configuration Maximums for details on all supported maximums.
BOM Updates: Updated Bill of Materials with new product versions.
The Cloud Foundation software product comprises the following software bill of materials (BOM). The components in the BOM are interoperable and compatible.
| Software Component | Version | Date | Build Number |
|---|---|---|---|
| SDDC Manager | 3.11 | 14 FEB 2022 | 19312783 |
| VMware vCenter Server Appliance | 6.7 Update 3q | 08 FEB 2022 | 19300125 |
| VMware ESXi | ESXi670-202201001 | 25 JAN 2022 | 19195723 |
| VMware NSX Data Center for vSphere | 6.4.12 | 21 DEC 2021 | 19066632 |
| VMware NSX-T Data Center | 3.0.3.1 | 23 DEC 2021 | 19067109 |
| VMware vRealize Suite Lifecycle Manager | 2.1 Patch 3 | 12 JAN 2022 | 19201324 |
| VMware vRealize Log Insight | 4.8 | 11 APR 2019 | 13036238 |
| vRealize Log Insight Content Pack for NSX for vSphere | 3.9 | n/a | n/a |
| vRealize Log Insight Content Pack for Linux | 2.0.1 | n/a | n/a |
| vRealize Log Insight Content Pack for vRealize Automation 7.5+ | 1.0 | n/a | n/a |
| vRealize Log Insight Content Pack for vRealize Orchestrator 7.0.1+ | 2.1 | n/a | n/a |
| vRealize Log Insight Content Pack for NSX-T | 3.8.2 | n/a | n/a |
| vSAN Content Pack for Log Insight | 2.2 | n/a | n/a |
| vRealize Operations Manager | 7.5 | 11 APR 2019 | 13165949 |
| vRealize Automation | 7.6 | 11 APR 2019 | 13027280 |
| VMware Horizon 7 | 7.10.3 | 17 DEC 2021 | 19069415 |
Note:
vRealize Log Insight Content Packs are deployed during the workload domain creation.
VMware Solution Exchange and the vRealize Log Insight in-product marketplace store only the latest versions of the content packs for vRealize Log Insight. The Bill of Materials table contains the latest versions of the packs that were available at the time VMware Cloud Foundation was released. When you deploy the Cloud Foundation components, the version of a content pack in the vRealize Log Insight in-product marketplace might be newer than the one used for this release.
To remediate VMSA-2020-0007 (CVE-2020-3953 and CVE-2020-3954) for vRealize Log Insight 4.8, you must apply the vRealize Log Insight 4.8 security patch. For information on the security patch, see KB article 79168.
The SDDC Manager software is licensed under the Cloud Foundation license. As part of this product, the SDDC Manager software deploys specific VMware software products.
The following VMware software components deployed by SDDC Manager are licensed under the Cloud Foundation license:
VMware ESXi
VMware vSAN
VMware NSX Data Center for vSphere
The following VMware software components deployed by SDDC Manager are licensed separately:
VMware vCenter Server
VMware NSX-T
VMware Horizon 7
VMware vRealize Automation
VMware vRealize Operations
VMware vRealize Log Insight and content packs
For details about the specific VMware software editions that are licensed under the licenses you have purchased, see the Cloud Foundation Bill of Materials (BOM) section above.
For general information about the product, see VMware Cloud Foundation.
For details on supported configurations, see the VMware Compatibility Guide (VCG) and the Hardware Requirements section on the Prerequisite Checklist tab in the Planning and Preparation Workbook.
To access the Cloud Foundation documentation, go to the VMware Cloud Foundation product documentation.
To access the documentation for VMware software products that SDDC Manager can deploy, see the product documentation and use the drop-down menus on the page to choose the appropriate version:
VMware vSphere product documentation, which also includes documentation for ESXi and vCenter Server
The Cloud Foundation web-based interface supports the latest two versions of the following web browsers:
Google Chrome
Mozilla Firefox
Microsoft Edge
It also supports Internet Explorer 11.
For the Web-based user interfaces, the supported standard resolution is 1024 by 768 pixels. For best results, use a screen resolution within these tested resolutions:
1024 by 768 pixels (standard)
1366 by 768 pixels
1280 by 1024 pixels
1680 by 1050 pixels
Resolutions below 1024 by 768, such as 640 by 960 or 480 by 800, are not supported.
You can upgrade to VMware Cloud Foundation 3.11 either from VMware Cloud Foundation 3.10.2.2 (sequential upgrade) or from VMware Cloud Foundation 3.7.1 or later (skip-level upgrade). VMware Cloud Foundation 3.11 cannot be deployed as a new release. For upgrade information, refer to the VMware Cloud Foundation Upgrade Guide.
VMware Cloud Foundation 3.11 is supported as a source version for migration to VMware Cloud Foundation 4.x.
vRealize Suite Lifecycle Manager Version 2.1.0 Patch 3
There is no upgrade bundle for vRealize Suite Lifecycle Manager Version 2.1.0 Patch 3. To upgrade, follow the process described in the VMware vRealize Suite Lifecycle Manager 2.1 Patch 3 Release Notes.
Design Considerations for Multiple Availability Zones
NSX-T Data Center 3.x changes how the northbound traffic flow can be influenced. If you have the following architecture, you must change the Tier-0 gateway architecture before you upgrade to NSX-T Data Center 3.x:
An NSX Edge cluster with edge nodes placed in both availability zones (typically two edge nodes pinned to Availability Zone 1 and two edge nodes pinned to Availability Zone 2)
An Active/Active Tier-0 gateway architecture where the Tier-0 gateway spans edge nodes in both availability zones.
Deployed in a data center infrastructure that cannot tolerate asymmetrical routing to or from each availability zone (for example, because of physical data center firewalls).
Change to a Tier-0 gateway architecture where the Tier-0 gateway is active only in a single availability zone at a time in one of the following ways:
Recommended: Place an NSX Edge cluster with edge nodes in a single availability zone only (typically Availability Zone 1), with failover to Availability Zone 2 through vSphere HA. This option requires changes in the data center fabric, including stretching the Uplink and Edge TEP VLANs between the availability zones. See KB 87426 for more information.
Migrate to an Active/Standby Tier-0 gateway. Follow the NSX-T Data Center 3.x product documentation for changing from an Active/Active to an Active/Standby architecture of the Tier-0 gateway.
Changing from a Three N-VDS to Single N-VDS Edge Node Design
Starting with NSX-T Data Center 2.5, a single N-VDS switch design is available in the NSX Edge node. Changing from three N-VDS instances to a single N-VDS provides network throughput and scalability improvements in NSX-T Data Center. It is recommended for all environments but highly recommended for environments deployed at scale.
The procedure involves the following high-level steps:
Deploy a new NSX Edge cluster with new edge nodes based on the single N-VDS design.
Deploy a new Tier-0 gateway and verify connectivity.
Once tested, you can reconfigure your Tier-1 gateways to utilize the new Tier-0 gateway on the single N-VDS edge cluster.
See KB 87426 for more information.
You can upgrade to VMware Cloud Foundation 3.11.0.1 either from VMware Cloud Foundation 3.11 (sequential upgrade) or from VMware Cloud Foundation 3.7.1 or later (skip-level upgrade). VMware Cloud Foundation 3.11.0.1 cannot be deployed as a new release. For upgrade information, refer to the VMware Cloud Foundation Upgrade Guide. It is strongly recommended that all customers on VCF 3.x upgrade to VCF 3.11.0.1.
VMware Cloud Foundation 3.11.0.1 contains the following BOM updates:
| Software Component | Version | Date | Build Number |
|---|---|---|---|
| SDDC Manager | 3.11.0.1 | 07 APR 2022 | 19571759 |
| VMware NSX Data Center for vSphere | 6.4.13 | 08 FEB 2022 | 19307994 |
SDDC Manager 3.11.0.1 fixes the following issue:
Deleting an NSX for vSphere (NSX-V) VI workload domain incorrectly deletes the NSX controllers for the management domain
VMware NSX Data Center for vSphere 6.4.13 addresses the security vulnerability described in VMSA-2022-0005
The following issues are resolved in VMware Cloud Foundation 3.11:
VMware vCenter Server Appliance 6.7 Update 3p addresses security vulnerabilities CVE-2021-21980 and CVE-2021-22049 as described in VMware Security Advisory VMSA-2021-0027.
Inapplicable ESXi upgrade bundles are displayed after upgrade has been scheduled
Add host workflow fails
When the user password in the /opt/vmware/vcf/lcm/lcm-app/conf/application.properties file contains a backslash (\), Lifecycle Manager does not start and displays the error Password authentication failed for user lcm.
Credential logging vulnerability as described in VMSA-2022-0003. See KB 87050 for more information.
Deleting an NSX-T workload domain or cluster containing a dead host fails at transport node deletion step.
File permissions precheck gets stuck or captures permission issues in wrong directories
There are two potential issues:
The LCM directories permission precheck does not finish for more than an hour.
The LCM directories permission precheck reports permission/ownership issues in /var/log directories.
Workaround: See KB 90205.
Upgrade precheck fails for PSC SSO
If the maximum lifetime password policy for vCenter Single Sign-On local accounts is set to a number greater than 9999, then the upgrade precheck fails.
Workaround: Set the maximum lifetime password policy to any number less than or equal to 9999. See KB 88163 for more information.
There is a cosmetic mismatch between the posted manifest date (02/11/2022) and the release notes software date (02/14/2022).
There are no technical side effects.
The vRealize Automation upgrade reports the "Precheck Execution Failure : Make sure the latest version of VMware Tools is installed" message
The vRealize Automation IaaS VMs must have the same version of VMware Tools as the ESXi hosts on which the VMs reside.
Workaround: Upgrade VMware Tools on the vRealize Automation IaaS VMs.
Error upgrading vRealize Automation
Under certain circumstances, upgrading vRealize Automation may fail with a message similar to:
An automated upgrade has failed. Manual intervention is required.
vRealize Suite Lifecycle Manager Pre-upgrade checks for vRealize Automation have failed:
vRealize Automation Validations : iaasms1.rainpole.local : RebootPending : Check if reboot is pending : Reboot the machine.
vRealize Automation Validations : iaasms2.rainpole.local : RebootPending : Check if reboot is pending : Reboot the machine.
Please retry the upgrade once the upgrade is available again.
Workaround:
Log in to the first VM listed in the error message using RDP or the VMware Remote Console.
Reboot the VM.
Wait 5 minutes after the login screen of the VM appears.
Repeat steps 1-3 for the next VM listed in the error message.
Once you have restarted all the VMs listed in the error message, retry the vRealize Automation upgrade.
When there is no associated workload domain to vRealize Automation, the VRA VM NODES CONSISTENCY CHECK upgrade precheck fails
This upgrade precheck compares the content in the logical inventory on the SDDC Manager and the content in the vRealize Lifecycle Manager environment. When there is no associated workload domain, the vRealize Lifecycle Manager environment does not contain information about the iaasagent1.rainpole.local and iaasagent2.rainpole.local nodes. Therefore, the check fails.
Workaround: None. You can safely ignore a failed VRA VM NODES CONSISTENCY CHECK during the upgrade precheck. The upgrade will succeed even with this error.
NSX Data Center for vSphere upgrade fails with the message "Host Prep remediation failed"
After addressing the issue, the NSX Data Center for vSphere bundle no longer appears as an available update.
Workaround: To complete the upgrade, manually enable the anti-affinity rules.
Log in to the management vCenter Server using the vSphere Client.
Click Menu > Hosts and Clusters and select the cluster on which host prep remediation failed (for example SDDC-Cluster1).
Click Configure > Configuration > VM/Host Rules.
Select NSX Controller Anti-Affinity Rule and click Edit.
Select Enable rule and click OK.
This completes the NSX Data Center for vSphere upgrade.
Upgrade precheck may fail as you approach the maximum number of supported ESXi hosts
In a large environment with many ESXi hosts, vSAN prechecks may fail.
Workaround: Turn off vSAN prechecks.
SSH to the SDDC Manager appliance as the vcf user.
Enter su to switch to the root user.
Turn off the vSAN prechecks:
Open the configuration file in a text editor: vi /opt/vmware/vcf/lcm/lcm-app/conf/application-prod.properties
In the VSAN CONFIGURATION section, set the following properties to false:
vsan.healthcheck.enabled=false
vsan.hcl.update.enabled=false
vsan.precheck.enabled=false
Save the changes.
Restart LCM: systemctl restart lcm
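The property edits above can also be scripted. The sketch below is a dry run: it writes a sample VSAN CONFIGURATION section to a scratch copy in /tmp (an assumption for illustration) and flips the three properties with sed. Point PROPS at the real application-prod.properties path only after verifying the output, and follow with the systemctl restart.

```shell
#!/bin/sh
# Sketch: set the three vSAN precheck properties to false.
# PROPS defaults to a scratch copy for rehearsal; on the appliance it
# would be /opt/vmware/vcf/lcm/lcm-app/conf/application-prod.properties.
PROPS="${PROPS:-/tmp/application-prod.properties}"

# Sample file for the dry run, mirroring the VSAN CONFIGURATION section.
cat > "$PROPS" <<'EOF'
# VSAN CONFIGURATION
vsan.healthcheck.enabled=true
vsan.hcl.update.enabled=true
vsan.precheck.enabled=true
EOF

# Rewrite each property to false in place.
for key in vsan.healthcheck.enabled vsan.hcl.update.enabled vsan.precheck.enabled; do
    sed -i "s/^${key}=.*/${key}=false/" "$PROPS"
done

grep '^vsan' "$PROPS"
# On the real appliance, follow with: systemctl restart lcm
```

Because the sed substitution matches the whole line after the key, re-running the script is harmless.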
Task panel does not show correct upgrade tasks for NSX-T workload domain upgrades
When you upgrade NSX-T workload domains, the task panel does not show upgrade status correctly. This is a UI issue only; there is no impact on the upgrade workflow.
Workaround: Monitor upgrade status by navigating to the Update/Patches tab of the relevant workload domain:
On the SDDC Manager Dashboard, click Inventory > Workload Domains.
In the Domain column, click the appropriate workload domain name.
Click the Update/Patches tab.
Monitor upgrade status.
Exception displayed when a scheduled NSX-T upgrade begins during an idle SDDC Manager session
When a scheduled NSX-T upgrade begins during an idle SDDC Manager session, the following UI exception is displayed: Retrieving NSXT upgrade failed with unknown exception
This is a UI issue only. There is no impact on the upgrade workflow.
Workaround: Refresh the web browser.
After upgrading to VMware Cloud Foundation 3.11 the NSX Edge password precheck is turned off
Enable the NSX Edge password precheck so that future upgrades of NSX-T based VI workload domains do not fail due to expired passwords.
Workaround:
SSH to the SDDC Manager appliance as the vcf user.
Type su to switch to the root user.
Edit the /opt/vmware/vcf/lcm/lcm-app/conf/feature.properties file as shown below:
#feature flag for NSXT Edge VM Password Expiry Precheck
feature.lcm.nsxt.edge.password.validation=true
Run the following command: systemctl restart lcm
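The same steps can be condensed into a small shell sketch. The file path, property name, and restart command come from the steps above; the FEATURES variable defaults to a scratch copy in /tmp (an assumption for rehearsal) so the edit can be verified before touching the appliance file as root.

```shell
#!/bin/sh
# Sketch: re-enable the NSX-T Edge VM password expiry precheck.
# FEATURES defaults to a scratch copy; on the appliance it would be
# /opt/vmware/vcf/lcm/lcm-app/conf/feature.properties.
FEATURES="${FEATURES:-/tmp/feature.properties}"

# Sample file for the dry run, mirroring the snippet in the steps above.
cat > "$FEATURES" <<'EOF'
#feature flag for NSXT Edge VM Password Expiry Precheck
feature.lcm.nsxt.edge.password.validation=false
EOF

# Flip the flag to true (idempotent: re-running leaves it true).
sed -i 's/^feature\.lcm\.nsxt\.edge\.password\.validation=.*/feature.lcm.nsxt.edge.password.validation=true/' "$FEATURES"

grep 'password.validation' "$FEATURES"
# On the real appliance, follow with: systemctl restart lcm
```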
vRealize Operations Manager: VMware Security Advisory VMSA-2021-0018
VMSA-2021-0018 describes security vulnerabilities that affect VMware Cloud Foundation.
The vRealize Operations Manager API contains an arbitrary file read vulnerability. A malicious actor with administrative access to the vRealize Operations Manager API can read any arbitrary file on the server, leading to information disclosure. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned identifier CVE-2021-22022 to this issue.
The vRealize Operations Manager API has an insecure object reference vulnerability. A malicious actor with administrative access to the vRealize Operations Manager API may be able to modify other users' information, leading to an account takeover. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned identifier CVE-2021-22023 to this issue.
The vRealize Operations Manager API contains an arbitrary log-file read vulnerability. An unauthenticated malicious actor with network access to the vRealize Operations Manager API can read any log file, resulting in sensitive information disclosure. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned identifier CVE-2021-22024 to this issue.
The vRealize Operations Manager API contains a broken access control vulnerability leading to unauthenticated API access. An unauthenticated malicious actor with network access to the vRealize Operations Manager API can add new nodes to an existing vROps cluster. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned identifier CVE-2021-22025 to this issue.
The vRealize Operations Manager API contains a Server Side Request Forgery vulnerability in multiple endpoints. An unauthenticated malicious actor with network access to the vRealize Operations Manager API can perform a Server Side Request Forgery attack, leading to information disclosure. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned identifiers CVE-2021-22026 and CVE-2021-22027 to this issue.
Workaround: See KB 85452 for information about applying vRealize Operations Security Patches that resolve the issues.
The password update for vRealize Automation and vRealize Operations Manager may run indefinitely or fail when the password contains the special character "%"
Password management uses the vRealize Lifecycle Manager API to update the passwords of vRealize Automation and vRealize Operations Manager. When the SSH, API, or Administrator credentials of the vRealize Automation or vRealize Operations Manager users contain the special character "%", the vRealize Lifecycle Manager API hangs and does not respond to password management. After a timeout of 5 minutes, password management marks the operation as failed.
Workaround: Retry the password update operation without the special character "%". Ensure that the passwords for all other vRealize Automation and vRealize Operations Manager accounts don't contain the "%" special character.
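Candidate passwords can be screened for the problematic character before the update is submitted. A minimal sketch; the helper name is illustrative, not a VCF or vRealize tool:

```shell
#!/bin/sh
# Sketch: reject candidate passwords containing "%" before submitting
# a password update for vRealize Automation / vRealize Operations
# Manager accounts. The function name is illustrative only.
password_ok() {
    case "$1" in
        *%*) return 1 ;;  # "%" triggers the vRealize Lifecycle Manager hang
        *)   return 0 ;;
    esac
}

if password_ok 'Str0ng!Pass'; then echo "accepted"; fi
if ! password_ok 'Bad%Pass1'; then echo "rejected: contains %"; fi
```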
NSX Manager is not visible in the vSphere Web Client.
In addition to NSX Manager not being visible in the vSphere Web Client, the following error message displays in the NSX Home screen: "No NSX Managers available. Verify current user has role assigned on NSX Manager." This issue occurs when vCenter Server is not correctly configured for the account that is logged in.
Workaround: To resolve this issue, follow the procedure detailed in Knowledge Base article 2080740 "No NSX Managers available" error in the vSphere Web Client.
Unable to delete VI workload domain enabled for vRealize Operations Manager from SDDC Manager.
Attempts to delete the vCenter adapter also fail, and return an SSL error.
Workaround: Use the following procedure to resolve this issue.
Create a vCenter adapter instance in vRealize Operations Manager, as described in Configure a vCenter Adapter Instance in vRealize Operations Manager. This step is required because the existing adapter was deleted by the failed workload domain deletion.
Follow the procedure described in Knowledge Base article 56946.
Restart the failed VI workload domain deletion workflow from the SDDC Manager interface.
APIs for managing SDDC cannot be executed from the SDDC Manager Dashboard
You cannot use the API Explorer in the SDDC Manager Dashboard to execute the APIs for managing SDDC (/v1/sddc).
Workaround: None. These APIs can only be executed using the Cloud Builder as the host.
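As a sketch of that workaround, the /v1/sddc endpoint can be addressed directly with Cloud Builder as the host. The hostname and credentials below are placeholders (assumptions for illustration); the authentication scheme for your Cloud Builder deployment may differ.

```shell
#!/bin/sh
# Sketch: target the SDDC management APIs at Cloud Builder instead of
# the SDDC Manager API Explorer. CB_HOST is a placeholder hostname.
CB_HOST="${CB_HOST:-cloudbuilder.example.local}"
URL="https://${CB_HOST}/v1/sddc"
echo "$URL"
# Example invocation (requires network access and valid credentials):
#   curl -k -u admin "$URL"
```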
Adding host fails when host is on a different VLAN
A host add operation can sometimes fail if the host is on a different VLAN.
Workaround:
Before adding the host, add a new portgroup to the VDS for that cluster.
Tag the new portgroup with the VLAN ID of the host to be added.
Add the host. This workflow fails at the "Migrate host vmknics to dvs" operation.
Locate the failed host in vCenter, and migrate the vmk0 of the host to the new portgroup you created in step 1. For more information, see Migrate VMkernel Adapters to a vSphere Distributed Switch in the vSphere product documentation.
Retry the Add Host operation.
NOTE: If you later remove this host, you must also manually remove the portgroup if it is not being used by any other host.
NSX Manager for VI workload domain is not displayed in vCenter
Although NFS-based VI workload domains are created successfully, the NSX Manager VM is not registered in vCenter Server and is not displayed in vCenter.
Workaround: To resolve this issue, use the following procedure:
Log in to NSX Manager (http://<nsxmanager IP>).
Navigate to Manage > NSX Management Service.
Un-register the lookup service and vCenter, then re-register.
Close the browser and log in to vCenter.
A vCenter Server on which certificates have been rotated is not accessible from a Horizon workload domain
VMware Cloud Foundation does not support the certificate rotation on the Horizon workload domains.
Workaround: See KB article 70956.
Deploying partner services on an NSX-T workload domain displays an error
Deploying partner services on an NSX-T workload domain such as McAfee or Trend displays the “Configure NSX at cluster level to deploy Service VM” error.
Workaround: Attach the Transport node profile to the cluster and try deploying the partner service. After the service is deployed, detach the transport node profile from the cluster.
If the witness ESXi version does not match with the host ESXi version in the cluster, vSAN cluster partition may occur
The vSAN stretched cluster workflow does not check the ESXi version of the witness host. If the witness ESXi version does not match the host version in the cluster, a vSAN cluster partition may occur.
Workaround:
Upgrade the witness host manually to the matching ESXi version using the vCenter VUM functionality.
Replace or redeploy the witness appliance with the matching ESXi version.
The certificate rotate operation on the second NSX-T domain fails
Certificate rotation works on the first NSX-T workload domain in your environment, but fails on all subsequent NSX-T workload domains.
Workaround: None
Operations on NSX-T workload domains fail if their host FQDNs include uppercase letters
If the FQDNs of ESXi hosts in an NSX-T workload domain include uppercase letters, then the following operations may fail for the workload domain:
Add a host
Remove a host
Add a cluster
Remove a cluster
Delete the workload domain
Workaround: See KB 76553.
VI workload domain creation or expansion operations fail
If there is a mismatch between the letter case (upper or lower) of an ESXi host's FQDN and the FQDN used when the host was commissioned, then workload domain creation and expansion may fail.
Workaround: ESXi hosts should have lowercase FQDNs and should be commissioned using lowercase FQDNs.
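A quick case check before commissioning can catch this mismatch early. A minimal sketch; the function name and sample FQDNs are illustrative, not VCF tooling:

```shell
#!/bin/sh
# Sketch: flag ESXi FQDNs that contain uppercase letters before
# commissioning them. The helper name is illustrative only.
fqdn_is_lowercase() {
    # Succeed only if the FQDN equals its lowercased form.
    [ "$1" = "$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')" ]
}

for fqdn in esxi-1.rainpole.local ESXi-2.Rainpole.local; do
    if fqdn_is_lowercase "$fqdn"; then
        echo "$fqdn: ok"
    else
        echo "$fqdn: contains uppercase; commission using the lowercase form"
    fi
done
```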
Cluster is deleted even if VMs are up and running on the cluster
When you delete a cluster, it gets deleted even if there are VMs running on the cluster. This includes critical VMs such as Edge VMs, which may prevent you from accessing your environment after the cluster gets deleted.
Workaround: Migrate the VMs to a different cluster before deleting the cluster.
Workload domain operations fail if cluster upgrade is in progress
Workload domain operations cannot be performed while one or more clusters are being upgraded. The UI does not block such operations during an upgrade.
Workaround: Do not perform any operations on the workload domain when a cluster upgrade is in progress.
If you use the special character underscore (_) in the vCenter Server host name for the workload domain create operation, the vCenter Server deployment fails
The vCenter deployment fails with the "ERROR > Section 'new_vcsa', subsection 'network', property 'system_name' validation" error message.
Workaround: None. This is an issue in the vCenter Server product installer where the installer pre-validation fails. You should create the workload domain by providing valid vCenter Server host names.
Federation creation information not displayed if you leave the Multi-Instance Management Dashboard
Federation creation progress is displayed on the Multi-Instance Management Dashboard. If you navigate to another screen and then return to the Multi-Instance Management Dashboard, progress messages are not displayed. Instead, an empty map with no Cloud Foundation instances is displayed until the federation is created.
Workaround: Stay on the Multi-Instance Management Dashboard until the task is complete. If you have navigated away, wait for around 20 minutes and then return to the dashboard, by which time the operation should have completed.
The federation creation progress is not displayed
While federation creation is in progress, the SDDC Manager UI displays the progress on the multi-site page. If you navigate to any other screen and come back to the multi-site screen, the progress messages are not displayed. An empty map with no VMware Cloud Foundation instances is displayed until the federation creation process completes.
Workaround: None
Multi-Instance Management Dashboard operation fails
After a controller joins or leaves a federation, Kafka is restarted on all controllers in the federation. It can take up to 15 minutes for the federation to stabilize. Any operations performed on the dashboard during this time may fail.
Workaround: Retry the operation.
NSX Manager restore might not complete due to certificate rejection
A restore might not complete if an installed certificate has no CRL Distribution Point and the crl_checking_enable configuration is set to true.
Workaround: None