VMware Cloud Foundation 4.3 | 24 AUG 2021 | Build 18433963
Read about what’s new, learn about what was fixed, and find workarounds for known issues in VMware Cloud Foundation 4.3.
What's in the Release Notes
The release notes cover the following topics:
- What's New
- Cloud Foundation Bill of Materials (BOM)
- VMware Software Edition License Information
- Supported Hardware
- Browser Compatibility and Screen Resolutions
- Installation and Upgrade Information
- Resolved Issues
- Known Issues
What's New
The VMware Cloud Foundation 4.3 release includes the following:
- Flexibility in Application Virtual Networks (AVN): Application Virtual Networks (AVNs), which include the NSX Edge cluster and NSX network segments, are no longer deployed and configured during bring-up. Instead, they are implemented as a Day-N operation in SDDC Manager, providing greater flexibility.
- FIPS Support: You can enable FIPS mode during bring-up, which enables it on all VMware Cloud Foundation components that support FIPS.
- Scheduled Automatic Password Rotations: In addition to the on-demand password rotation capability, it is now possible to schedule automatic password rotations for accounts managed through SDDC Manager (excluding ESXi accounts). Automatic password rotation is enabled by default for service accounts.
- SAN in Certificate Signing Requests (CSRs): You can now add a Subject Alternative Name (SAN) when you generate a Certificate Signing Request (CSR) in SDDC Manager.
- Improvements for vSphere Lifecycle Manager images: For workload domains that use vSphere Lifecycle Manager images, this release includes several improvements. These include: prechecks to proactively identify issues that may affect upgrade operations; enabling concurrent upgrades for NSX-T Data Center components; and enabling provisioning and upgrade of Workload Management.
- Add vSphere Clusters in Parallel: You can add up to 10 vSphere clusters to a workload domain in parallel, improving the performance and speed of the workflow.
- Add and Remove NSX Edge Nodes in NSX Edge Clusters: For NSX Edge clusters deployed through SDDC Manager or the VMware Cloud Foundation API, you can expand and shrink NSX Edge clusters by adding or removing NSX Edge nodes from the cluster.
- Guidance for Day-N operations in NSX Federated VCF environments: You can federate NSX-T Data Center environments across VMware Cloud Foundation instances. You can manage federated NSX-T Data Center environments with a single pane of glass, create gateways and segments that span VMware Cloud Foundation instances, and configure and enforce firewall rules consistently across instances. Guidance is also provided for password rotation, certificate management, backup and restore, and lifecycle management for federated environments.
- Backup Enhancements: You can now configure an SDDC Manager backup schedule and retention policy from the SDDC Manager UI.
- VMware Validated Solutions: VMware Validated Solutions are technical reference implementations, validated by VMware, designed to help customers build secure, high-performing, resilient, and efficient infrastructure for their applications and workloads deployed on VMware Cloud Foundation. Each VMware Validated Solution comes with a detailed design that includes design decisions, plus implementation guidance consisting of manual UI-based step-by-step procedures and, where applicable, automated steps using infrastructure as code. These solutions, based on VMware Cloud Foundation, will be available on core.vmware.com. The first set of validated solutions, which can be applied on vSAN ReadyNodes, includes the following:
- Identity and Access Management for VMware Cloud Foundation
- Developer Ready Infrastructure for VMware Cloud Foundation
- Advanced Load Balancing for VMware Cloud Foundation
- Private Cloud Automation for VMware Cloud Foundation
- Intelligent Operations Management for VMware Cloud Foundation
- Intelligent Logging and Analytics for VMware Cloud Foundation
- Documentation Enhancements: The content from VMware Validated Design documentation has now been unified with core VMware Cloud Foundation documentation or has been integrated into a VMware Validated Solution. Additional documentation enhancements include:
- Design Documents for VMware Cloud Foundation foundational components with design decisions
- Design for the Management Domain
- Design for the Virtual Infrastructure Workload Domain
- Design for vRealize Suite Lifecycle and Access Management
- Getting Started with VMware Cloud Foundation publication
- Procedure enhancements through unification of content between VMware Validated Design and VMware Cloud Foundation publications
- Capacity Planner tool: Administrators can use the VCF Capacity Planner online tool to model and generate a Software-Defined Data Center bill of materials. This interactive tool generates detailed guidance on the hyper-converged server, storage, network, and cloud software SKUs required to successfully deploy an on-premises cloud.
- Private APIs: Access to private APIs that use basic authentication is deprecated in this release. You must switch to the public APIs, which use token-based authentication (see the sketch after this list).
- BOM updates: Updated Bill of Materials with new product versions.
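For example, a minimal sketch of token-based authentication against the SDDC Manager public API; the FQDN and credentials are placeholders, so confirm endpoints against the VMware Cloud Foundation API reference:

```
# Request an access token from the public API (replaces basic authentication).
TOKEN=$(curl -sk -X POST https://sddc-manager.example.com/v1/tokens \
  -H "Content-Type: application/json" \
  -d '{"username": "administrator@vsphere.local", "password": "example_password"}' \
  | python3 -c 'import json,sys; print(json.load(sys.stdin)["accessToken"])')

# Use the bearer token on subsequent public API calls.
curl -sk -H "Authorization: Bearer $TOKEN" https://sddc-manager.example.com/v1/hosts
```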
Cloud Foundation Bill of Materials (BOM)
The Cloud Foundation software product comprises the following software bill of materials (BOM). The components in the BOM are interoperable and compatible.
| Software Component | Version | Date | Build Number |
|---|---|---|---|
| Cloud Builder VM | 4.3 | 24 AUG 2021 | |
| SDDC Manager | 4.3 | 24 AUG 2021 | |
| VMware vCenter Server Appliance | 7.0 Update 2c | 24 AUG 2021 | |
| VMware ESXi | 7.0 Update 2a | 29 APR 2021 | |
| VMware Virtual SAN Witness Appliance | 7.0 Update 2 | 08 JUL 2021 | 18188211 |
| VMware NSX-T Data Center | 3.1.3 | 22 JUL 2021 | |
| VMware vRealize Suite Lifecycle Manager | 8.4.1 | 27 MAY 2021 | |
| Workspace ONE Access | 3.3.5 | 20 MAY 2021 | 18049997 |
| vRealize Automation | 8.4.1 | 27 MAY 2021 | 18054500 |
| vRealize Log Insight | 8.4 | 15 APR 2021 | 17828109 |
| vRealize Log Insight Content Pack for NSX-T | 4.0.2 | n/a | n/a |
| vRealize Log Insight Content Pack for vRealize Automation 8.3+ | 1.0 | n/a | n/a |
| vRealize Log Insight Content Pack for Linux | 2.1.0 | n/a | n/a |
| vRealize Log Insight Content Pack for Linux - Systemd | 1.0.0 | n/a | n/a |
| vRealize Log Insight Content Pack for vRealize Suite Lifecycle Manager 8.0.1+ | 1.0.2 | n/a | n/a |
| vRealize Log Insight Content Pack for VMware Identity Manager | 2.0 | n/a | n/a |
| vRealize Operations Manager | 8.4 | 15 APR 2021 | 17863947 |
| vRealize Operations Management Pack for VMware Identity Manager | 1.3 | n/a | |
- VMware vSAN is included in the VMware ESXi bundle.
- You can use vRealize Suite Lifecycle Manager to deploy vRealize Automation, vRealize Operations Manager, vRealize Log Insight, and Workspace ONE Access.
- vRealize Log Insight content packs are installed when you deploy vRealize Log Insight.
- The vRealize Operations Manager management pack is installed when you deploy vRealize Operations Manager.
- VMware Solution Exchange and the vRealize Log Insight in-product marketplace store only the latest versions of the content packs for vRealize Log Insight. The Bill of Materials table contains the latest versions of the packs that were available at the time VMware Cloud Foundation was released. When you deploy the Cloud Foundation components, the version of a content pack in the in-product marketplace for vRealize Log Insight might be newer than the one used for this release.
VMware Software Edition License Information
The SDDC Manager software is licensed under the Cloud Foundation license. As part of this product, the SDDC Manager software deploys specific VMware software products.
The following VMware software components deployed by SDDC Manager are licensed under the Cloud Foundation license:
- VMware ESXi
- VMware vSAN
- VMware NSX-T Data Center
The following VMware software components deployed by SDDC Manager are licensed separately:
- VMware vCenter Server
NOTE: Only one vCenter Server license is required for all vCenter Servers deployed in a Cloud Foundation system.
For details about the specific VMware software editions that are licensed under the licenses you have purchased, see the Cloud Foundation Bill of Materials (BOM) section above.
For general information about the product, see VMware Cloud Foundation.
To access the Cloud Foundation documentation, go to the VMware Cloud Foundation product documentation.
To access the documentation for VMware software products that SDDC Manager can deploy, see the product documentation and use the drop-down menus on the page to choose the appropriate version:
- VMware vSphere product documentation, which also includes documentation for ESXi and vCenter Server
- VMware vSAN product documentation
- VMware NSX-T Data Center product documentation
Browser Compatibility and Screen Resolutions
The Cloud Foundation web-based interface supports the latest two versions of the following web browsers, except Internet Explorer:
- Google Chrome
- Mozilla Firefox
- Microsoft Edge
- Internet Explorer: Version 11
For the web-based user interfaces, the supported standard resolution is 1024 by 768 pixels. For best results, use one of these tested resolutions:
- 1024 by 768 pixels (standard)
- 1366 by 768 pixels
- 1280 by 1024 pixels
- 1680 by 1050 pixels
Resolutions below 1024 by 768, such as 640 by 960 or 480 by 800, are not supported.
Installation and Upgrade Information
You can install VMware Cloud Foundation 4.3 as a new release or perform a sequential or skip-level upgrade to VMware Cloud Foundation 4.3.
Installing as a New Release
The new installation process has three phases:
Phase One: Prepare the Environment
The Planning and Preparation Workbook provides detailed information about the software, tools, and external services that are required to implement a Software-Defined Data Center (SDDC) with VMware Cloud Foundation, using a standard architecture model.
Phase Two: Image all servers with ESXi
Image all servers with the ESXi version mentioned in the Cloud Foundation Bill of Materials (BOM) section. See the VMware Cloud Foundation Deployment Guide for information on installing ESXi.
Phase Three: Install Cloud Foundation 4.3
Refer to the VMware Cloud Foundation Deployment Guide for information on deploying Cloud Foundation.
Upgrading to Cloud Foundation 4.3
You can perform a sequential or skip-level upgrade to VMware Cloud Foundation 4.3 from VMware Cloud Foundation 4.2.1, 4.2, 4.1.0.1, or 4.1. If your environment is at a version earlier than 4.1, you must upgrade the management domain and all VI workload domains to VMware Cloud Foundation 4.1 and then upgrade to VMware Cloud Foundation 4.3. For more information, see VMware Cloud Foundation Lifecycle Management.
Resolved Issues
The following issues are resolved in this release.
Special characters not allowed in the Username, Password, and Template Name fields on the Microsoft CA Configuration page
Applying the configuration drift upgrade bundle fails
Bundle transfer utility command retrieves incorrect bundle types
vRealize Automation upgrade fails at vRealize Upgrade stage
Host information on the Inventory > Hosts page takes too long to load or does not load
Host commissioning may fail if API /v1/hosts/validations/commissions is used immediately after the host validation API /v1/hosts/validations
Add host may fail for cluster using vLCM images
Connecting vRealize Log Insight to a workload domain fails at the "Enable Log Collection for vSphere" step
vRealize Suite product deployment fails with error "Failed to get Environment ID by given Host Name"
VMware vRealize Log Insight 8.4 resolves a Cross Site Scripting (XSS) vulnerability due to improper user input validation. An attacker with user privileges may be able to inject a malicious payload via the Log Insight UI which would be executed when the victim accesses the shared dashboard link. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned identifier CVE-2021-22021 to this issue. For more information, see VMware Security Advisory VMSA-2021-0019.
Known Issues
The known issues are grouped as follows.
- VMware Cloud Foundation Known Issues
- Upgrade Known Issues
- Bring-up Known Issues
- SDDC Manager Known Issues
- Workload Domain Known Issues
- Multi-Instance Management Known Issues
- API Known Issues
- vRealize Suite Known Issues
- Workload Management does not support NSX-T Data Center Federation
You cannot deploy Workload Management (vSphere with Tanzu) to a workload domain when that workload domain's NSX-T Data Center instance is participating in an NSX-T Data Center Federation.
- NSX-T Guest Introspection (GI) and NSX-T Service Insertion (SI) are not supported on stretched clusters
There is no support for stretching clusters where NSX-T Guest Introspection (GI) or NSX-T Service Insertion (SI) are enabled. VMware Cloud Foundation detaches Transport Node Profiles from AZ2 hosts to allow AZ-specific network configurations. NSX-T GI and NSX-T SI require that the same Transport Node Profile be attached to all hosts in the cluster.
- Stretched clusters and Workload Management
You cannot stretch a cluster on which Workload Management is deployed.
- Cluster-level ESXi upgrade fails
Cluster-level selection during upgrade does not consider the health status of the clusters and may show a cluster's status as Available, even for a faulty cluster. If you select a faulty cluster, the upgrade fails.
Workaround: Always perform an update precheck to validate the health status of the clusters. Resolve any issues before upgrading.
- You are unable to update NSX-T Data Center in the management domain or in a workload domain with vSAN principal storage because of an error during the NSX-T transport node precheck stage
In SDDC Manager, when you run the upgrade precheck before updating NSX-T Data Center, the NSX-T transport node validation results with the following error.
No coredump target has been configured. Host core dumps cannot be saved.:System logs on host sfo01-m01-esx04.sfo.rainpole.io are stored on non-persistent storage. Consult product documentation to configure a syslog server or a scratch partition.
Because the upgrade precheck results with an error, you cannot proceed with updating the NSX-T Data Center instance in the domain. VMware Validated Design supports vSAN as the principal storage in the management domain. However, vSAN datastores do not support scratch partitions. See VMware Knowledge Base article 2074026.
Workaround: Disable the update precheck validation for the subsequent NSX-T Data Center update.
- Log in to SDDC Manager as vcf using a Secure Shell (SSH) client.
- Open the application-prod.properties file for editing.
- Add the following property and save the file.
- Restart the life cycle management service.
systemctl restart lcm
- Log in to the SDDC Manager user interface and proceed with the update of NSX-T Data Center.
- NSX-T upgrade may fail at the step NSX T TRANSPORT NODE POSTCHECK STAGE
NSX-T upgrade may not proceed beyond the NSX T TRANSPORT NODE POSTCHECK STAGE.
Workaround: Contact VMware support.
- ESXi upgrade fails with the error "Incompatible patch or upgrade files. Please verify that the patch file is compatible with the host. Refer LCM and VUM log file."
This error occurs if any of the ESXi hosts that you are upgrading have detached storage devices.
Workaround: Attach all storage devices to the ESXi hosts being upgraded, reboot the hosts, and retry the upgrade.
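For reference, a minimal sketch of identifying and reattaching detached devices from each host's ESXi Shell before retrying (<device_uid> is a placeholder):

```
# List storage devices currently in a detached state on this host.
esxcli storage core device detached list

# Reattach a device using the UID reported above, then reboot the host.
esxcli storage core device set -d <device_uid> --state=on
```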
- Update precheck fails with the error "Password has expired"
If the vCenter Single Sign-On password policy specifies a maximum lifetime of zero (never expires), the precheck fails.
Workaround: Set the maximum lifetime password policy to something other than zero and retry the precheck.
- Skip level upgrades are not enabled for some product components after VMware Cloud Foundation is upgraded to 4.3
After performing a skip-level upgrade to VMware Cloud Foundation 4.3 from 4.1.x or 4.2.x, you may observe one or more of the following symptoms:
- vRealize bundles do not show up as available for upgrade
- Bundles for previous versions of some product components (NSX-T Data Center, vCenter Server, ESXi) show up as available for upgrade
Workaround: See KB 85505.
- The Cloud Foundation Builder VM remains locked after more than 15 minutes.
The VMware Imaging Appliance (VIA) locks out the admin user after three unsuccessful login attempts. Normally the lockout resets after fifteen minutes, but the underlying Cloud Foundation Builder VM does not automatically reset it.
Workaround: Log in to the VM console of the Cloud Foundation Builder VM as the root user. Unlock the account by resetting the password of the admin user with the following command.
pam_tally2 --user=<user> --reset
- Disabling CEIP on SDDC Manager does not disable CEIP on vRealize Automation and vRealize Suite Lifecycle Manager
When you disable CEIP on the SDDC Manager Dashboard, data collection is not disabled on vRealize Automation and vRealize Suite Lifecycle Manager. This is because of API deprecation in vRealize Suite 8.x.
Workaround: Manually disable CEIP in vRealize Automation and vRealize Suite Lifecycle Manager. For more information, see VMware vRealize Automation Documentation and VMware vRealize Suite Lifecycle Manager Documentation.
- Generate CSR task for a component hangs
When you generate a CSR, the task may fail to complete due to issues with the component's resources. For example, when you generate a CSR for NSX Manager, the task may fail due to issues with an NSX Manager node. You cannot retry the task once the resource is up and running again.
Workaround:
- Log in to the UI for the component to troubleshoot and resolve any issues.
- Using SSH, log in to the SDDC Manager VM with the user name vcf, then use su to switch to the root account.
- Run the following command:
systemctl restart operationsmanager
- Retry generating the CSR.
- SoS utility options for health check are missing information
Due to limitations of the ESXi service account, some information is unavailable in the following health check options:
- Devices and Driver information for ESXi hosts.
- vSAN Health Status or Total no. of disks information for ESXi hosts.
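For reference, a sketch of running the affected health check from the SDDC Manager VM; the path is the standard SoS location on SDDC Manager, and flags may vary by version:

```
# Run the full SoS health check; the ESXi device/driver and vSAN disk
# details may be incomplete due to the service account limitation above.
sudo /opt/vmware/sddc-support/sos --health-check
```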
- Supportability and Serviceability (SoS) Utility health checks fail with the error "Failed to get details"
SoS is not able to handle ESXi host names that include uppercase letters.
Workaround: Use the precheck functionality in the SDDC Manager UI to check the health of the ESXi hosts.
- Adding host fails when host is on a different VLAN
A host add operation can sometimes fail if the host is on a different VLAN.
Workaround:
- Before adding the host, add a new portgroup to the vSphere Distributed Switch for that cluster.
- Tag the new portgroup with the VLAN ID of the host to be added.
- Add the host. This workflow fails at the "Migrate host vmknics to dvs" operation.
- Locate the failed host in vCenter, and migrate the vmk0 of the host to the new portgroup you created in step 1.
For more information, see Migrate VMkernel Adapters to a vSphere Distributed Switch in the vSphere product documentation.
- Retry the Add Host operation.
NOTE: If you remove this host later, you must also manually remove the portgroup if it is not being used by any other host.
- Deploying partner services on a workload domain displays an error
Deploying partner services, such as McAfee or Trend, on a workload domain enabled for vSphere Lifecycle Manager (vLCM) baselines, displays the “Configure NSX at cluster level to deploy Service VM” error.
Workaround: Attach the transport node profile to the cluster and try deploying the partner service. After the service is deployed, keep the transport node profile attached to the cluster. If you want to delete the cluster later, you must first undeploy the partner service and detach the transport node profile from the cluster.
- If the witness ESXi version does not match the host ESXi version in the cluster, vSAN cluster partition may occur
The vSAN stretched cluster workflow does not check the ESXi version of the witness host. If the witness ESXi version does not match the host version in the cluster, vSAN cluster partition may occur.
Workaround:
- Upgrade the witness host to the matching ESXi version using the vCenter VUM functionality, or
- Replace the witness appliance with one matching the ESXi version.
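A minimal sketch for comparing builds before stretching, assuming SSH access and placeholder hostnames:

```
# Compare the ESXi version and build on the witness and a cluster host.
ssh root@witness.example.com 'vmware -vl'
ssh root@esx01.example.com 'vmware -vl'
```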
- vSAN partition and critical alerts are generated when the witness MTU is not set to 9000
If the MTU of the witness switch in the witness appliance is not set to 9000, a vSAN stretched cluster partition may occur.
Workaround: Set the MTU of the witness switch in the witness appliance to 9000.
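If the witness switch is a standard switch, a sketch of checking and setting the MTU from the witness appliance's ESXi Shell; the switch name witnessSwitch is an assumption, so verify it first:

```
# Confirm the witness switch name and current MTU.
esxcli network vswitch standard list

# Set the MTU to 9000 (witnessSwitch is assumed; substitute the real name).
esxcli network vswitch standard set --vswitch-name=witnessSwitch --mtu=9000
```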
- Adding a host to a vLCM-enabled workload domain configured with the Dell Hardware Support Manager (OMIVV) fails
When you try to add a host to a vSphere cluster for a workload domain that uses vSphere Lifecycle Manager images, the task fails and the domain manager log reports "The host (host-name) is currently not managed by OMIVV." The domain manager logs are located at /var/log/vmware/vcf/domainmanager on the SDDC Manager VM.
Workaround: Update the hosts inventory in OMIVV and retry the add host task in the SDDC Manager UI. See the Dell documentation for information about updating the hosts inventory in OMIVV.
- Adding a vSphere cluster or adding a host to a workload domain fails
Under certain circumstances, adding a host or vSphere cluster to a workload domain fails at the Configure NSX-T Transport Node or Create Transport Node Collection subtask.
Workaround:
- Enable SSH for the NSX Manager VMs.
- SSH into the NSX Manager VMs as admin and then log in as root.
- Run the following command on each NSX Manager VM:
sysctl -w net.ipv4.tcp_ecn=0
- Log in to the NSX Manager UI for the workload domain.
- Navigate to System > Fabric > Nodes > Host Transport Nodes.
- Select the vCenter Server for the workload domain from the Managed by drop-down menu.
- Expand the vSphere cluster and navigate to the transport nodes that are in a partial success state.
- Select the check box next to a partial success node, click Configure NSX, click Next, and complete the wizard.
- Repeat the previous two steps for each partial success node.
When all host issues are resolved, transport node creation starts for the failed nodes. When all hosts are successfully created as transport nodes, retry the failed add vSphere cluster or add host task from the SDDC Manager UI.
- The vSAN Performance Service is not enabled for vSAN clusters when CEIP is not enabled
If you do not enable the VMware Customer Experience Improvement Program (CEIP) in SDDC Manager, when you create a workload domain or add a vSphere cluster to a workload domain, the vSAN Performance Service is not enabled for vSAN clusters. When CEIP is enabled, data from the vSAN Performance Service is provided to VMware and this data is used to aid VMware Support with troubleshooting and for products such as VMware Skyline, a proactive cloud monitoring service. See Customer Experience Improvement Program for more information on the data collected by CEIP.
Workaround: Enable CEIP in SDDC Manager. See the VMware Cloud Foundation Documentation. After CEIP is enabled, a scheduled task that enables the vSAN Performance Service on existing clusters in workload domains runs every three hours. The service is also enabled for new workload domains and clusters. To enable the vSAN Performance Service immediately, see the VMware vSphere Documentation.
- Creation or expansion of a vSAN cluster with more than 32 hosts fails
By default, a vSAN cluster can grow up to 32 hosts. With large cluster support enabled, a vSAN cluster can grow up to a maximum of 64 hosts. However, even with large cluster support enabled, a creation or expansion task can fail on the Enable vSAN on vSphere Cluster subtask.
Workaround:
- Enable large cluster support for the vSAN cluster in the vSphere Client. If it is already enabled, skip to the vSAN health check below.
- Select the vSAN cluster in the vSphere Client.
- Select Configure > vSAN > Advanced Options.
- Enable Large Cluster Support.
- Click Apply.
- Click Yes.
- Run a vSAN health check to see which hosts require rebooting.
- Put the hosts into Maintenance Mode and reboot the hosts.
For more information about large cluster support, see https://kb.vmware.com/kb/2110081.
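As a hedged sketch, the advanced host option associated with large cluster support in that article can be checked and set per host from the ESXi Shell (the vSphere Client steps above remain the primary path; verify the option name against the article):

```
# Check the current large cluster support setting on this host.
esxcli system settings advanced list -o /VSAN/goto11

# Enable large cluster support (value 1); reboot the host to apply.
esxcli system settings advanced set -o /VSAN/goto11 -i 1
```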
- Removing a host from a cluster, deleting a cluster from a workload domain, or deleting a workload domain fails if Service VMs (SVMs) are present
If you deployed an endpoint protection service (such as guest introspection) to a cluster through NSX-T Data Center, then removing a host from the cluster, deleting the cluster, or deleting the workload domain containing the cluster will fail on the Enter Maintenance Mode on ESXi Hosts subtask.
Workaround:
- For host removal: Delete the Service VM from the host and retry the operation.
- For cluster deletion: Delete the service deployment for the cluster and retry the operation.
- For workload domain deletion: Delete the service deployment for all clusters in the workload domain and retry the operation.
- vCenter Server overwrites the NFS datastore name when adding a cluster to a VI workload domain
If you add an NFS datastore that uses the same NFS server IP address as an existing datastore in the workload domain but a different datastore name, vCenter Server applies the existing datastore name to the new datastore.
Workaround: If you want to add an NFS datastore with a different datastore name, then it must use a different NFS server IP address.
- Federation creation information not displayed if you leave the Multi-Instance Management Dashboard
Federation creation progress is displayed on the Multi-Instance Management Dashboard. If you navigate to another screen and then return to the Multi-Instance Management Dashboard, progress messages are not displayed. Instead, an empty map with no Cloud Foundation instances is displayed until the federation is created.
Workaround: Stay on the Multi-Instance Management Dashboard until the task is complete. If you have navigated away, wait for around 20 minutes and then return to the dashboard, by which time the operation should have completed.
- Multi-Instance Management Dashboard operation fails
After a controller joins or leaves a federation, Kafka is restarted on all controllers in the federation. It can take up to 20 minutes for the federation to stabilize. Any operations performed on the dashboard during this time may fail.
Workaround: Retry the operation.
- Join operation fails
A join operation may fail if a controller SDDC Manager has a public certificate with a depth greater than one (that is, it has intermediate certificates).
Workaround: Trust the intermediate certificate of the controller SDDC Manager. See KB 80986.
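A minimal sketch for inspecting the controller SDDC Manager's certificate chain depth with openssl (the hostname is a placeholder):

```
# Show the certificate chain presented by the controller SDDC Manager.
# More than one certificate indicates intermediates (depth greater than one).
openssl s_client -connect sddc-manager.example.com:443 -showcerts </dev/null
```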
- Stretch cluster operation fails
If the cluster that you are stretching does not include a powered-on VM with an operating system installed, the operation fails at the "Validate Cluster for Zero VMs" task.
Workaround: Make sure the cluster has a powered-on VM with an operating system installed before stretching the cluster.
- The VMware Cloud Foundation API ignores NSX VDS uplink information for in-cluster expansion of an NSX Edge cluster
When you use the VMware Cloud Foundation API to expand an NSX Edge cluster and the new NSX Edge node is going to be hosted on the same vSphere cluster as the existing NSX Edge nodes (in-cluster), the edgeClusterExpansionSpec ignores any NSX VDS uplink information you provide.
Workaround: None. This is by design. For in-cluster expansions, new NSX Edge nodes use the same NSX VDS uplinks as the existing NSX Edge nodes in the NSX Edge cluster.
- vRealize Operations Manager: VMware Security Advisory VMSA-2021-0018
VMSA-2021-0018 describes security vulnerabilities that affect VMware Cloud Foundation.
- The vRealize Operations Manager API contains an arbitrary file read vulnerability. A malicious actor with administrative access to the vRealize Operations Manager API can read any arbitrary file on the server, leading to information disclosure. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned identifier CVE-2021-22022 to this issue.
- The vRealize Operations Manager API has an insecure object reference vulnerability. A malicious actor with administrative access to the vRealize Operations Manager API may be able to modify other users' information, leading to an account takeover. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned identifier CVE-2021-22023 to this issue.
- The vRealize Operations Manager API contains an arbitrary log-file read vulnerability. An unauthenticated malicious actor with network access to the vRealize Operations Manager API can read any log file resulting in sensitive information disclosure. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned identifier CVE-2021-22024 to this issue.
- The vRealize Operations Manager API contains a broken access control vulnerability leading to unauthenticated API access. An unauthenticated malicious actor with network access to the vRealize Operations Manager API can add new nodes to existing vROps cluster. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned identifier CVE-2021-22025 to this issue.
- The vRealize Operations Manager API contains a Server Side Request Forgery in multiple end points. An unauthenticated malicious actor with network access to the vRealize Operations Manager API can perform a Server Side Request Forgery attack leading to information disclosure. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned identifiers CVE-2021-22026 and CVE-2021-22027 to this issue.
Workaround: See KB 85452 for information about applying vRealize Operations Security Patches that resolve the issues.
- Updating the DNS or NTP server configuration does not apply the update to vRealize Automation
Using the Cloud Foundation API to update the DNS or NTP servers does not apply the update to vRealize Automation due to a bug in vRealize Suite Lifecycle Manager.
Workaround: Manually update the DNS or NTP server(s) for vRealize Automation.
Update the DNS server(s) for vRealize Automation
- SSH to the first vRealize Automation node using root credentials.
- Delete the current DNS server using the following command:
sed '/nameserver.*/d' -i /etc/resolv.conf
- Add the new DNS server IP with following command:
echo nameserver [DNS server IP] >> /etc/resolv.conf
- Repeat this command if there are multiple DNS servers.
- Validate the update by confirming the new nameserver entries in /etc/resolv.conf.
- Repeat these steps for each vRealize Automation node.
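A consolidated sketch of the DNS steps above for a single node, assuming two replacement DNS servers (192.0.2.10 and 192.0.2.11 are placeholder addresses):

```
# Remove the existing nameserver entries, add the new ones, and verify.
sed '/nameserver.*/d' -i /etc/resolv.conf
echo "nameserver 192.0.2.10" >> /etc/resolv.conf
echo "nameserver 192.0.2.11" >> /etc/resolv.conf
cat /etc/resolv.conf
```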
Update the NTP server(s) for vRealize Automation
- SSH to the first vRealize Automation node using root credentials.
- Run the following command to specify the new NTP server:
vracli ntp systemd --set [NTP server IP]
To add multiple NTP servers:
vracli ntp systemd --set [NTP server 1 IP,NTP server 2 IP]
- Validate the update with the following command:
vracli ntp show-config
- Apply the update to all vRealize Automation nodes with the following command:
vracli ntp apply
- Validate the update by running the following command on each vRealize Automation node:
vracli ntp show-config
- Connecting vRealize Operations Manager to a workload domain fails at the "Create vCenter Server Adapter in vRealize Operations Manager for the Workload Domain" step
When you connect vRealize Operations Manager to a workload domain, it fails at the Create vCenter Server Adapter in vRealize Operations Manager for the Workload Domain step with a message similar to: Failed to configure vCenter <vcenter-hostname> in vROps <vrops-hostname>, because Failed to manage vROps adapter. This issue can occur when the vRealize Operations cluster is offline.
Workaround: Make sure that the vRealize Operations cluster is online.
- Log in to the vRealize Operations Manager administration interface.
- Click Administration > Cluster Management and check the cluster status.
- If the vRealize Operations cluster is offline, bring the cluster online.
- When the cluster status displays as online, retry connecting vRealize Operations Manager to a workload domain.
- vRealize Operations Management Pack for VMware Identity Manager is not installed
If you install vRealize Operations Manager before you install Workspace ONE Access, then the vRealize Operations Management Pack for VMware Identity Manager is not installed.
- Log in to the vRealize Suite Lifecycle Manager appliance.
- Click VMware Marketplace.
- Enter "Identity Manager" in the Search text box.
- Download and install the vRealize Operations Management Pack for VMware Identity Manager.
- Log in to vRealize Operations Manager.
- On the main navigation bar, click Administration.
- In the left pane, select Other Accounts.
- Click Add account.
- On the Account types page, click VMware Identity Manager adapter.
- Configure the settings, choosing the default collector group.
- In the Connection information section, click the Add icon.
- In the Manage credential dialog box, configure the Workspace ONE Access credentials and click OK.
- On the New account page, click Validate connection.
- In the Info dialog box, click OK.
- Click Add.
- On the Other accounts page, verify that the collection status of the adapter is OK.
- Deploying a second vRealize Suite Lifecycle Manager fails
If you have multiple instances of VMware Cloud Foundation in the same SSO domain and you try to deploy vRealize Suite Lifecycle Manager on both, the second deployment fails with the message Add vCenter Server and Data Center to vRealize Suite Lifecycle Manager Failed.
Workaround: Use a single vRealize Suite Lifecycle Manager to manage instances of VMware Cloud Foundation in the same SSO domain.
- vRealize Suite Lifecycle Manager reports a "FAILED" inventory sync
After rotating a vCenter Server service account password in SDDC Manager, the inventory sync may fail for vRealize Suite environments managed by VMware Cloud Foundation.
Workaround: Log in to vRealize Suite Lifecycle Manager to identify and troubleshoot the failed environment(s).