VMware Cloud Foundation 4.4 | 10 FEB 2022 | Build 19312029
Check for additions and updates to these release notes.
The VMware Cloud Foundation (VCF) 4.4 release includes the following:
Flexible vRealize Suite product upgrades: Starting with VMware Cloud Foundation 4.4 and vRealize Suite Lifecycle Manager 8.6.2, upgrade and deployment of the vRealize Suite products is managed by vRealize Suite Lifecycle Manager. You can upgrade vRealize Suite products as new versions become available in your vRealize Suite Lifecycle Manager. vRealize Suite Lifecycle Manager only allows upgrades to compatible and supported versions of vRealize Suite products. Specific vRealize Automation, vRealize Operations, vRealize Log Insight, and Workspace ONE Access versions are no longer listed in the VMware Cloud Foundation BOM.
Improvements to upgrade prechecks: Upgrade prechecks have been expanded to verify filesystem capacity and passwords. These improved prechecks help identify issues that you need to resolve to ensure a smooth upgrade.
SSH deactivated on ESXi hosts: This release deactivates the SSH service on ESXi hosts by default, following the vSphere security configuration guide recommendation. This applies to new and upgraded VMware Cloud Foundation 4.4 deployments.
User Activity Logging: New activity logs capture all VMware Cloud Foundation API invocations, along with user context, and also record user logins and logouts to the SDDC Manager UI.
SDDC Manager UI workflow to manage DNS and NTP configurations: This feature provides a guided workflow to validate and apply DNS and NTP configuration changes to all components in a VMware Cloud Foundation deployment.
2-node vSphere clusters are supported when using NFS, VMFS on FC, or vVols as the principal storage for the cluster: This feature does not apply when using vSAN as principal storage or when using vSphere Lifecycle Manager baselines for updates.
Security fixes: This release includes fixes for the following security vulnerabilities:
Apache Log4j Remote Code Execution Vulnerability: This release fixes CVE-2021-44228 and CVE-2021-45046. See VMSA-2021-0028.
Apache HTTP Server: This release fixes CVE-2021-40438.
Improvements to reduce SDDC Manager service CPU and Memory usage and decrease inventory load times: Reduces the overall SDDC Manager service resource usage and improves service stability in scaled deployments. Decreases the load times for inventory objects (for example, ESXi hosts, workload domains, and so on) in the SDDC Manager UI.
Multi-Instance Management is deprecated: The Multi-Instance Management Dashboard is no longer available in the SDDC Manager UI.
BOM updates: Updated Bill of Materials with new product versions.
The VMware Cloud Foundation software product consists of the following software Bill of Materials (BOM). The components in the BOM are interoperable and compatible.
| Software Component | Version | Date | Build Number |
|---|---|---|---|
| Cloud Builder VM | 4.4 | 10 FEB 2022 | 19312029 |
| SDDC Manager | 4.4 | 10 FEB 2022 | 19312029 |
| VMware vCenter Server Appliance | 7.0 Update 3c | 27 JAN 2022 | 19234570 |
| VMware ESXi | 7.0 Update 3c | 27 JAN 2022 | 19193900 |
| VMware vSAN | 7.0 Update 3c | 27 JAN 2022 | 19193900 |
| VMware NSX-T Data Center | 3.1.3.5 | 21 DEC 2021 | 19068434 |
| VMware vRealize Suite Lifecycle Manager | 8.6.2 | 18 JAN 2022 | 19221620 |
VMware vSAN is included in the VMware ESXi bundle.
You can use vRealize Suite Lifecycle Manager to deploy vRealize Automation, vRealize Operations Manager, vRealize Log Insight, and Workspace ONE Access (formerly known as VMware Identity Manager). vRealize Suite Lifecycle Manager determines which versions of these products are compatible and only allows you to install/upgrade to supported versions. See vRealize Suite Upgrade Paths on VMware Cloud Foundation 4.4.x +.
vRealize Log Insight content packs are installed when you deploy vRealize Log Insight.
The vRealize Operations Manager management pack is installed when you deploy vRealize Operations Manager.
You can access the latest versions of the content packs for vRealize Log Insight from the VMware Solution Exchange and the vRealize Log Insight in-product marketplace store.
The SDDC Manager software is licensed under the VMware Cloud Foundation license. As part of this product, the SDDC Manager software deploys specific VMware software products.
The following VMware software components deployed by SDDC Manager are licensed under the VMware Cloud Foundation license:
VMware ESXi
VMware vSAN
VMware NSX-T Data Center
The following VMware software components deployed by SDDC Manager are licensed separately:
VMware vCenter Server
NOTE: Only one vCenter Server license is required for all vCenter Servers deployed in a VMware Cloud Foundation system.
For details about the specific VMware software editions that are licensed under the licenses you have purchased, see the VMware Cloud Foundation Bill of Materials (BOM) section above.
For general information about the product, see VMware Cloud Foundation.
For details on supported configurations, see the VMware Compatibility Guide (VCG) and the Hardware Requirements section on the Prerequisite Checklist tab in the Planning and Preparation Workbook.
To access the Cloud Foundation documentation, go to the VMware Cloud Foundation product documentation.
To access the documentation for VMware software products that SDDC Manager can deploy, see the product documentation and use the drop-down menus on the page to choose the appropriate version:
VMware vSphere product documentation, which also includes documentation for ESXi and vCenter Server
The Cloud Foundation web-based interface supports the latest two versions of the following web browsers (Internet Explorer is not supported):
Google Chrome 89 or later
Mozilla Firefox 80 or later
Microsoft Edge 90 or later
For the Web-based user interfaces, the supported standard resolution is 1024 by 768 pixels. For best results, use a screen resolution within these tested resolutions:
1024 by 768 pixels (standard)
1366 by 768 pixels
1280 by 1024 pixels
1680 by 1050 pixels
Resolutions below 1024 by 768, such as 640 by 960 or 480 by 800, are not supported.
You can install VMware Cloud Foundation 4.4 as a new release or perform a sequential or skip-level upgrade to VMware Cloud Foundation 4.4.
Installing as a New Release
The new installation process has three phases:
Phase One: Prepare the Environment
The Planning and Preparation Workbook provides detailed information about the software, tools, and external services that are required to implement a Software-Defined Data Center (SDDC) with VMware Cloud Foundation, using a standard architecture model.
Phase Two: Image all servers with ESXi
Image all servers with the ESXi version mentioned in the Cloud Foundation Bill of Materials (BOM) section. See the VMware Cloud Foundation Deployment Guide for information on installing ESXi.
Phase Three: Install Cloud Foundation 4.4
See the VMware Cloud Foundation Deployment Guide for information on deploying Cloud Foundation.
Upgrading to Cloud Foundation 4.4
You can perform a sequential or skip-level upgrade to VMware Cloud Foundation 4.4 from VMware Cloud Foundation 4.3.1, 4.3, 4.2.1, 4.2, 4.1.0.1, or 4.1. If your environment is at a version earlier than 4.1, you must upgrade the management domain and all VI workload domains to VMware Cloud Foundation 4.1 and then upgrade to VMware Cloud Foundation 4.4. For more information see VMware Cloud Foundation Lifecycle Management.
IMPORTANT: Before you upgrade a vCenter Server, take a file-based backup. See Manually Back Up vCenter Server.
NOTE: Scripts that rely on SSH being activated on ESXi hosts will not work after upgrading to VMware Cloud Foundation 4.4, since VMware Cloud Foundation 4.4 deactivates the SSH service by default. Update your scripts to account for this new behavior. See KB 86230 for information about activating and deactivating the SSH service on ESXi hosts.
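For example, a script that must run commands over SSH could first re-enable and start the service on the host. The following is a minimal sketch using standard ESXi commands, run from the ESXi host console (DCUI/ESXi Shell) or adapted to your own automation tooling:
vim-cmd hostsvc/enable_ssh   # enable the SSH service on the host
vim-cmd hostsvc/start_ssh    # start the SSH service immediately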
The following issues are resolved in this release.
Connecting vRealize Operations Manager to a workload domain fails at the "Create vCenter Server Adapter in vRealize Operations Manager for the Workload Domain" step
Unable to download SoS bundles from SDDC Manager API Explorer
Unable to remove host from vSphere cluster in workload domain
SDDC Manager UI does not load correctly
vRealize Operations Management Pack for VMware Identity Manager is not installed
Deploying a second vRealize Suite Lifecycle Manager fails
Domain prechecks for vRealize Suite products show incorrect health state
Workload Management does not support NSX-T Data Center Federation
You cannot deploy Workload Management (vSphere with Tanzu) to a workload domain when that workload domain's NSX-T Data Center instance is participating in an NSX-T Data Center Federation.
Workaround: None.
NSX-T Guest Introspection (GI) and NSX-T Service Insertion (SI) are not supported on stretched clusters
There is no support for stretching clusters where NSX-T Guest Introspection (GI) or NSX-T Service Insertion (SI) are enabled. VMware Cloud Foundation detaches Transport Node Profiles from AZ2 hosts to allow AZ-specific network configurations. NSX-T GI and NSX-T SI require that the same Transport Node Profile be attached to all hosts in the cluster.
Workaround: None.
Stretched clusters and Workload Management
You cannot stretch a cluster on which Workload Management is deployed.
Workaround: None.
Async Patch Tool Known Issues
The Async Patch Tool is a utility that allows you to apply critical patches to certain VMware Cloud Foundation components (NSX-T Manager, vCenter Server, and ESXi) outside of VMware Cloud Foundation releases. The Async Patch Tool also allows you to enable upgrade of an async patched system to a new version of VMware Cloud Foundation.
See the Async Patch Tool Release Notes for known issues.
NSX-T upgrade causing host PSOD
An ESXi host can PSOD during an NSX-T upgrade when there is a mass migration of DFW filters and flows are being revalidated while a configuration cycle is in progress.
See KB 87803 for more information. This issue is fixed in NSX-T 3.1.3.6.
If a VCF upgrade is attempted after applying this workaround, the LCM precheck on the DRS configuration will fail. This is expected behavior.
Cluster-level ESXi upgrade fails
Cluster-level selection during upgrade does not consider the health status of the clusters and may show a cluster's status as Available, even for a faulty cluster. If you select a faulty cluster, the upgrade fails.
Workaround: Always perform an update precheck to validate the health status of the clusters. Resolve any issues before upgrading.
ESXi upgrade fails with the error "Incompatible patch or upgrade files. Please verify that the patch file is compatible with the host. Refer LCM and VUM log file."
This error occurs if any of the ESXi hosts that you are upgrading have detached storage devices.
Workaround: Attach all storage devices to the ESXi hosts being upgraded, reboot the hosts, and retry the upgrade.
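If you need to confirm which devices are detached before retrying, the following is a minimal sketch using standard esxcli commands on the affected host (the device identifier shown is a placeholder):
esxcli storage core device list                                       # detached devices report "Status: off"
esxcli storage core device set -d naa.xxxxxxxxxxxxxxxx --state=on     # reattach a device by its identifier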
Skip level upgrades are not enabled for some product components after VMware Cloud Foundation is upgraded to 4.3
After performing skip level upgrade to VMware Cloud Foundation 4.3 from 4.1.x or 4.2.x, one or more of the following symptoms is observed:
vRealize bundles do not show up as available for upgrade
Bundles for previous versions of some product components (NSX-T Data Center, vCenter Server, ESXi) show up as available for upgrade
Workaround: See KB 85505.
vRealize Operations Manager upgrade fails on the step VREALIZE_UPGRADE_PREPARE_BACKUP with the error: Waiting for vRealize Operations cluster to change state timed out
When upgrading vRealize Operations Manager, SDDC Manager takes the vRealize Operations Manager cluster offline and takes snapshots of the vRealize Operations Manager virtual machines. In some circumstances, taking the cluster offline takes a long time and the operation times out.
Workaround: Take the vRealize Operations Manager cluster back online and retry the upgrade.
Log in to the vRealize Operations Manager Administration UI (https://<vrops_ip>/admin) using the admin credentials.
If the cluster status is offline, in the Cluster Status section click Take Cluster Online. Wait for the cluster to initialize and be marked as green.
In the SDDC Manager UI, the option to retry vRealize Operations Manager upgrade should be available. Retry the upgrade.
If the upgrade continues to fail, take the snapshots manually and retry the upgrade. Since the snapshots already exist, SDDC Manager will skip that step and proceed with the upgrade.
Log in to the vRealize Operations Manager Administration UI (https://<vrops_ip>/admin) using the admin credentials.
Ensure that the vRealize Operations Manager Cluster Status is offline. If it is online, click Take Cluster Offline in the Cluster Status section. Wait for the cluster to be marked as offline.
Log in to the management domain vCenter Server using the vSphere Client.
Navigate to the vRealize Operations Manager virtual machines and create a snapshot for each virtual machine in the vRealize Operations Manager cluster. Use the prefix "vROPS_LCM_UPGRADE_MANUAL_BACKUP" for the snapshots. Note that the prefix is case-sensitive.
After the snapshots are taken, log in to the vRealize Operations Manager UI and take the cluster online. Wait for the cluster to initialize.
In the SDDC Manager UI, the option to retry vRealize Operations Manager upgrade should be available. Retry the upgrade.
vRealize Suite product upgrade request fails in vRealize Suite Lifecycle Manager
When upgrading a vRealize Suite product in vRealize Suite Lifecycle Manager, the product upgrade request may fail because the vRealize Suite product services are not up and running, even though the product binaries are upgraded.
Workaround: Perform the following steps:
Revert to the snapshot that was created prior to the upgrade.
Perform an inventory sync to update to the correct vRealize Suite product version.
Perform the upgrade.
NOTE: If any vRealize Suite product upgrade request fails in vRealize Suite Lifecycle Manager, do not perform an inventory sync or re-import the vRealize product as this may cause inconsistency of information between vRealize Suite Lifecycle Manager and the vRealize Suite product.
The upgrade of vRealize Suite Lifecycle Manager fails with error "Timed out while waiting in-place upgrade of vRSLCM to complete"
When the upgrade of vRealize Suite Lifecycle Manager is triggered from the SDDC Manager UI, you may see the failure "Timed out while waiting in-place upgrade of vRSLCM to complete". vRealize Suite Lifecycle Manager also shows the upgrade as in progress, stuck on the step of installing packages.
Workaround: Revert to the pre-upgrade snapshot of vRealize Suite Lifecycle Manager and retry the upgrade.
After upgrading to VMware Cloud Foundation 4.4 an NSX Manager that is shared between VI workload domains cannot connect to vCenter Server
When you rotate the vCenter Server service account password for NSX Manager, and that NSX Manager is shared with another VI workload domain, the NSX Manager will not be able to connect to the vCenter Server for the other VI workload domain.
Workaround: Rotate the service account for all vCenter Servers that share the same NSX Manager.
Upgrade precheck shows the incorrect health status for vRealize Operations Manager when it is part of an environment that also includes vRealize Automation
When you have a single environment in vRealize Suite Lifecycle Manager that includes vRealize Operations Manager and vRealize Automation, the management domain precheck in SDDC Manager may be incorrectly marked as RED, even though vRealize Operations Manager is healthy.
Workaround: Retry the precheck for vRealize Operations Manager only to update the status.
Upgrading to vRealize Suite Lifecycle Manager does not download the vRealize Log Insight content packs or vRealize Operations Manager management packs
After you upgrade to vRealize Suite Lifecycle Manager 8.6.2, the vRealize Log Insight content packs and vRealize Operations Manager management packs are not available.
Workaround: Download the vRealize Suite Lifecycle Manager 8.6.2 install bundle.
Upgrade precheck fails due to out-of-date LCM manifest
If you perform a precheck on a workload domain and SDDC Manager is not connected to the My VMware repository, the precheck may fail with the error: LCM Manifest found in the system is currently more than 270 days old.
Workaround: Use the Bundle Transfer utility to download the latest manifest and then upload the manifest to SDDC Manager. Once the latest manifest is uploaded, retry the precheck.
Download the Bundle Transfer utility on a computer with internet access.
Log in to My VMware and browse to the Download VMware Cloud Foundation page.
In the Select Version field, select the version to which you are upgrading.
Click Drivers & Tools.
Expand VMware Cloud Foundation Tools and click Go To Downloads.
Click Download Now for the Bundle Transfer Utility.
Extract lcm-tools-prod.tar.gz.
Navigate to the lcm-tools-prod/bin/ directory and confirm that you have execute permission on all folders.
Run the following command to download the manifest file:
./lcm-bundle-transfer-util --download --manifestDownload --depotUser Username
Enter your My VMware password when prompted.
Copy the manifest file and lcm-tools-prod directory to a computer with access to the SDDC Manager appliance.
Upload the manifest file to the SDDC Manager appliance.
./lcm-bundle-transfer-util --update --sourceManifestDirectory Manifest-Downloaded-Directory --sddcMgrFqdn FQDN --sddcMgrUser Username
Replace Manifest-Downloaded-Directory with the path to the downloaded manifest.
Replace FQDN with the FQDN of the SDDC Manager appliance.
Replace Username with your vSphere SSO user name.
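For example, with placeholder values filled in, the download and upload commands from the preceding steps look like this (the depot user, manifest directory, FQDN, and SSO user shown are illustrative only):
./lcm-bundle-transfer-util --download --manifestDownload --depotUser user@example.com
./lcm-bundle-transfer-util --update --sourceManifestDirectory /tmp/manifest --sddcMgrFqdn sddc-manager.example.com --sddcMgrUser administrator@vsphere.local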
Bringup fails when creating NSX-T Data Center transport nodes
The bringup task "Create NSX-T Data Center Transport Nodes from Discovered Nodes" might fail if there's an ESXi host in the management cluster which is pending a reboot.
Workaround: Reboot all ESXi hosts that are pending reboot and retry bringup.
The Cloud Foundation Builder VM remains locked after more than 15 minutes.
The VMware Imaging Appliance (VIA) locks out the admin user after three unsuccessful login attempts. Normally, the lockout is reset after fifteen minutes but the underlying Cloud Foundation Builder VM does not automatically reset.
Workaround: Log in to the VM console of the Cloud Foundation Builder VM as the root user. Unlock the account by resetting the failed login count of the admin user with the following command: pam_tally2 --user=<user> --reset
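For example, to clear the lockout for the admin account described above:
pam_tally2 --user=admin --reset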
Cannot reuse a static IP pool that includes special characters in its name
If you chose Static IP Pool as the IP allocation method when creating a VI workload domain and you used special characters or spaces in the IP pool name, you are not able to reuse the IP pool when creating a new VI workload domain or adding a vSphere cluster to the workload domain.
Workaround: Use only supported characters when naming a static IP pool. Supported characters:
a-z
A-Z
0-9
- and _
No spaces
If you have an existing static IP pool that includes unsupported characters in its name, you can use the NSX Manager UI to rename it.
Adding host fails when host is on a different VLAN
A host add operation can sometimes fail if the host is on a different VLAN.
Workaround: Before adding the host, add a new portgroup to the VDS for that cluster.
Tag the new portgroup with the VLAN ID of the host to be added.
Add the Host. This workflow fails at the "Migrate host vmknics to dvs" operation.
Locate the failed host in vCenter, and migrate the vmk0 of the host to the new portgroup you created in step 1. For more information, see Migrate VMkernel Adapters to a vSphere Distributed Switch in the vSphere product documentation.
Retry the Add Host operation.
NOTE: If you remove this host in the future, you must also manually remove the portgroup if it is not being used by any other host.
Deploying partner services on an NSX-T workload domain displays an error
Deploying partner services, such as McAfee or Trend, on a workload domain enabled for vSphere Update Manager (VUM), displays the “Configure NSX at cluster level to deploy Service VM” error.
Workaround: Attach the Transport Node Profile to the cluster and try deploying the partner service. After the service is deployed, detach the Transport Node Profile from the cluster.
If the witness ESXi version does not match with the host ESXi version in the cluster, vSAN cluster partition may occur
The vSAN stretch cluster workflow does not check the ESXi version of the witness host. If the witness ESXi version does not match the host ESXi version in the cluster, a vSAN cluster partition may occur.
Workaround: Do one of the following:
Upgrade the witness host manually to the matching ESXi version using the vCenter Server VUM functionality.
Replace or redeploy the witness appliance with one that matches the ESXi version of the hosts in the cluster.
vSAN partition and critical alerts are generated when the witness MTU is not set to 9000
If the MTU of the witness switch in the witness appliance is not set to 9000, a vSAN stretch cluster partition may occur.
Workaround: Set the MTU of the witness switch in the witness appliance to 9000.
Adding a host to a vLCM-enabled workload domain configured with the Dell Hardware Support Manager (OMIVV) fails
When you try to add a host to a vSphere cluster for a workload domain enabled with vSphere Lifecycle Manager (vLCM), the task fails and the domain manager log reports "The host (host-name) is currently not managed by OMIVV." The domain manager logs are located at /var/log/vmware/vcf/domainmanager on the SDDC Manager VM.
Workaround: Update the hosts inventory in OMIVV and retry the add host task in the SDDC Manager UI. See the Dell documentation for information about updating the hosts inventory in OMIVV.
Adding a vSphere cluster or adding a host to a workload domain fails
Under certain circumstances, adding a host or vSphere cluster to a workload domain fails at the Configure NSX-T Transport Node or Create Transport Node Collection subtask.
Workaround: Enable SSH for the NSX Manager VMs.
SSH into the NSX Manager VMs as admin and then log in as root.
Run the following command on each NSX Manager VM: sysctl -w net.ipv4.tcp_ecn=0
Log in to the NSX Manager UI for the workload domain.
Navigate to System > Fabric > Nodes > Host Transport Nodes.
Select the vCenter server for the workload domain from the Managed by drop-down menu.
Expand the vSphere cluster and navigate to the transport nodes that are in a partial success state.
Select the check box next to a partial success node and click Configure NSX.
Click Next and then click Apply.
Repeat the previous three steps for each partial success node.
When all host issues are resolved, transport node creation starts for the failed nodes. When all hosts are successfully created as transport nodes, retry the failed add vSphere cluster or add host task from the SDDC Manager UI.
The vSAN Performance Service is not enabled for vSAN clusters when CEIP is not enabled
If you do not enable the VMware Customer Experience Improvement Program (CEIP) in SDDC Manager, when you create a workload domain or add a vSphere cluster to a workload domain, the vSAN Performance Service is not enabled for vSAN clusters. When CEIP is enabled, data from the vSAN Performance Service is provided to VMware and this data is used to aid VMware Support with troubleshooting and for products such as VMware Skyline, a proactive cloud monitoring service. See Customer Experience Improvement Program for more information on the data collected by CEIP.
Enable CEIP in SDDC Manager. See the VMware Cloud Foundation Documentation. After CEIP is enabled, a scheduled task that enables the vSAN Performance Service on existing clusters in workload domains runs every three hours. The service is also enabled for new workload domains and clusters. To enable the vSAN Performance Service immediately, see the VMware vSphere Documentation.
Creation or expansion of a vSAN cluster with more than 32 hosts fails
By default, a vSAN cluster can grow up to 32 hosts. With large cluster support enabled, a vSAN cluster can grow up to a maximum of 64 hosts. However, even with large cluster support enabled, a creation or expansion task can fail on the sub-task Enable vSAN on vSphere Cluster.
Workaround: Enable Large Cluster Support for the vSAN cluster in the vSphere Client. If it is already enabled, skip to running the vSAN health check.
Select the vSAN cluster in the vSphere Client.
Select Configure > vSAN > Advanced Options.
Enable Large Cluster Support.
Click Apply.
Click Yes.
Run a vSAN health check to see which hosts require rebooting.
Put the hosts into Maintenance Mode and reboot the hosts.
For more information about large cluster support, see https://kb.vmware.com/kb/2110081.
Removing a host from a cluster, deleting a cluster from a workload domain, or deleting a workload domain fails if Service VMs (SVMs) are present
If you deployed an endpoint protection service (such as guest introspection) to a cluster through NSX-T Data Center, then removing a host from the cluster, deleting the cluster, or deleting the workload domain containing the cluster will fail on the subtask Enter Maintenance Mode on ESXi Hosts.
For host removal: Delete the Service VM from the host and retry the operation.
For cluster deletion: Delete the service deployment for the cluster and retry the operation.
For workload domain deletion: Delete the service deployment for all clusters in the workload domain and retry the operation.
vCenter Server overwrites the NFS datastore name when adding a cluster to a VI workload domain
If you add an NFS datastore with the same NFS server IP address, but a different NFS datastore name, as an NFS datastore that already exists in the workload domain, then vCenter Server applies the existing datastore name to the new datastore.
Workaround: If you want to add an NFS datastore with a different datastore name, it must use a different NFS server IP address.
Updating DNS/NTP server does not apply the update to all NSX Managers
If you update the NTP or DNS server information for a VMware Cloud Foundation instance that includes more than one NSX Manager, only one of the NSX Managers gets updated with the new information.
Workaround: Use the NSX Manager API or CLI to manually update the DNS/NTP server information for the remaining NSX Manager(s).
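As one possible approach, the node-level NSX-T REST API can be called directly on each remaining NSX Manager. The endpoints and payloads below are a sketch based on the NSX-T 3.x node API and should be verified against the API reference for your version; the FQDN, credentials, DNS server, and NTP server values are placeholders:
curl -k -u admin -X PUT https://nsx-manager.example.com/api/v1/node/network/name-servers -H "Content-Type: application/json" -d '{"name_servers": ["10.0.0.250"]}'
curl -k -u admin -X PUT https://nsx-manager.example.com/api/v1/node/services/ntp -H "Content-Type: application/json" -d '{"service_name": "ntp", "service_properties": {"servers": ["ntp.example.com"]}}'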
Cannot generate a CSR with 4096 bit key size for NSX Manager
When you use the SDDC Manager UI to generate a certificate signing request (CSR) for NSX Manager, 4096 appears as an option in the Key Size drop-down menu, but you cannot select it.
Workaround: Use the VMware Cloud Foundation API to generate the CSR.
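A sketch of the API call is shown below; confirm the exact endpoint, spec fields, and token handling in the SDDC Manager API Explorer or the VMware Cloud Foundation API reference. The FQDN, domain name, token variable, and spec file are placeholders:
curl -k -X PUT https://sddc-manager.example.com/v1/domains/sfo-m01/csrs -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" -d @csr-generation-spec.json
# csr-generation-spec.json contains a csrGenerationSpec with "keySize": "4096" and the NSX Manager resource for the domain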
Rotating or updating the vSphere Single Sign-On (PSC) password can cause issues
If you have multiple VMware Cloud Foundation instances that share a single SSO domain, rotating or updating the vSphere SSO password for the first VCF instance causes the second VCF instance to become inaccessible.
Workaround: See KB 85485.
Adding ESXi hosts that use VMFS on FC storage using the SDDC Manager UI fails
Using the SDDC Manager UI to add ESXi hosts that use VMFS on FC storage fails with the error "No unassigned hosts available". This can happen when you:
Create a new VI workload domain that uses VMFS on FC storage
Add a vSphere cluster that uses VMFS on FC storage
Add a host to a vSphere cluster that uses VMFS on FC storage
Workaround: Use the VMware Cloud Foundation API to add ESXi hosts that use VMFS on FC storage. See:
Create a Domain: https://developer.vmware.com/apis/vcf/latest/domains/
Create a Cluster: https://developer.vmware.com/apis/vcf/latest/clusters/
Expand a Cluster: https://developer.vmware.com/apis/vcf/latest/clusters/
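The general pattern for these API calls is sketched below: request an API access token, then submit the JSON spec described in the API documentation linked above. The FQDN, credentials, token variable, and spec file name are placeholders:
# Request an API access token from SDDC Manager (sketch)
curl -k -X POST https://sddc-manager.example.com/v1/tokens -H "Content-Type: application/json" -d '{"username": "administrator@vsphere.local", "password": "********"}'
# Create a cluster that uses VMFS on FC as principal storage (sketch; cluster-spec.json follows the Clusters API schema)
curl -k -X POST https://sddc-manager.example.com/v1/clusters -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" -d @cluster-spec.json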
Deactivating CEIP on SDDC Manager does not deactivate CEIP on vRealize Automation and vRealize Suite Lifecycle Manager
When you deactivate CEIP on the SDDC Manager Dashboard, data collection is not deactivated on vRealize Automation and vRealize Suite Lifecycle Manager. This is because of API deprecation in vRealize Suite 8.x.
Workaround: Manually deactivate CEIP in vRealize Automation and vRealize Suite Lifecycle Manager. For more information, see VMware vRealize Automation Documentation and VMware vRealize Suite Lifecycle Manager Documentation.
SoS utility options for health check are missing information
Due to limitations of the ESXi service account, some information is unavailable in the following health check options:
--hardware-compatibility-report: No Devices and Driver information for ESXi hosts.
--storage-health: No vSAN Health Status or Total no. of disks information for ESXi hosts.
Workaround: None.
SoS connectivity health check for ESXi hosts displays YELLOW status
When you use the Supportability and Serviceability (SoS) Utility to run a --connectivity-health check, ESXi hosts that have SSH deactivated may show a YELLOW status. In new or upgraded VMware Cloud Foundation 4.4 deployments, the SSH service on ESXi hosts is deactivated by default. If you upgraded the management domain to VMware Cloud Foundation 4.4, but have not upgraded all of your VI workload domains, the ESXi hosts in those VI workload domains will report a YELLOW status if SSH is deactivated.
Workaround: When all VI workload domains are upgraded to VMware Cloud Foundation 4.4, the connectivity health check will show a GREEN status for ESXi hosts that have SSH deactivated.
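For reference, the check discussed here is typically run from the SDDC Manager appliance as shown below (the SoS utility path shown is the default install location; confirm it in your deployment):
sudo /opt/vmware/sddc-support/sos --connectivity-health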
Stretch cluster operation fails
If the cluster that you are stretching does not include a powered-on VM with an operating system installed, the operation fails at the "Validate Cluster for Zero VMs" task.
Workaround: Make sure the cluster has a powered-on VM with an operating system installed before stretching the cluster.
The VMware Cloud Foundation API ignores NSX VDS uplink information for in-cluster expansion of an NSX Edge cluster
When you use the VMware Cloud Foundation API to expand an NSX Edge cluster and the new NSX Edge node is going to be hosted on the same vSphere cluster as the existing NSX Edge nodes (in-cluster), the edgeClusterExpansionSpec ignores any information you provide for firstNsxVdsUplink and secondNsxVdsUplink.
Workaround: None. This is by design. For in-cluster expansions, new NSX Edge nodes use the same NSX VDS uplinks as the existing NSX Edge nodes in the NSX Edge cluster.
Updating the DNS or NTP server configuration does not apply the update to vRealize Automation
Using the Cloud Foundation API to update the DNS or NTP servers does not apply the update to vRealize Automation due to a bug in vRealize Suite Lifecycle Manager.
Workaround: Manually update the DNS or NTP server(s) for vRealize Automation.
Update the DNS server(s) for vRealize Automation
SSH to the first vRealize Automation node using root credentials.
Delete the current DNS server using the following command: sed '/nameserver.*/d' -i /etc/resolv.conf
Add the new DNS server IP address with the following command:
echo nameserver [DNS server IP] >> /etc/resolv.conf
Repeat this command if there are multiple DNS servers.
Validate the update with the following command:
cat /etc/resolv.conf
Repeat these steps for each vRealize Automation node.
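As a convenience, the same DNS update can be scripted across all vRealize Automation nodes using the commands above (a sketch; the node host names and DNS server IP are placeholders, and three nodes are shown as an example):
for node in vra-node1.example.com vra-node2.example.com vra-node3.example.com; do
  ssh root@$node "sed '/nameserver.*/d' -i /etc/resolv.conf; echo 'nameserver 10.0.0.250' >> /etc/resolv.conf; cat /etc/resolv.conf"
done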
Update the NTP server(s) for vRealize Automation
SSH to the first vRealize Automation node using root credentials.
Run the following command to specify the new NTP server: vracli ntp systemd --set [NTP server IP]
To add multiple NTP servers:
vracli ntp systemd --set [NTP server 1 IP,NTP server 2 IP]
Validate the update with the following command:
vracli ntp show-config
Apply the update to all vRealize Automation nodes with the following command:
vracli ntp apply
Validate the update by running the following command on each vRealize Automation node: vracli ntp show-config
vRealize Suite Lifecycle Manager reports a "FAILED" inventory sync
After rotating a vCenter Server service account password in SDDC Manager, the inventory sync may fail for vRealize Suite environments managed by VMware Cloud Foundation.
Workaround: Log in to vRealize Suite Lifecycle Manager to identify and troubleshoot the failed environment(s).
Removing vRealize Automation from a vRealize Suite Lifecycle Manager environment does not remove integrations
If vRealize Automation has any integrations with vRealize Log Insight or vRealize Operations, those integrations do not get removed when you delete vRealize Automation from the environment.
Workaround: Manually remove the integrations using the vracli command line utility and the vRealize Operations Manager UI (Data Sources > Integrations). See: