VMware Cloud Foundation 4.4.1 | 12 MAY 2022 | Build 19766960
Check for additions and updates to these release notes.
The VMware Cloud Foundation (VCF) 4.4.1 release includes the following:
The VMware Cloud Foundation software product comprises the following software Bill of Materials (BOM). The components in the BOM are interoperable and compatible.
Software Component | Version | Date | Build Number |
---|---|---|---|
Cloud Builder VM | 4.4.1 | 12 MAY 2022 | 19766960 |
SDDC Manager | 4.4.1 | 12 MAY 2022 | 19766960 |
VMware vCenter Server Appliance | 7.0 Update 3d | 29 MAR 2022 | 19480866 |
VMware ESXi | 7.0 Update 3d | 29 MAR 2022 | 19482537 |
VMware Virtual SAN Witness Appliance | 7.0 Update 3c | 27 JAN 2022 | 19193900 |
VMware NSX-T Data Center | 3.1.3.7.4 | 06 MAY 2022 | 19762317 |
VMware vRealize Suite Lifecycle Manager | 8.6.2 PSPAK 3 | 12 MAY 2022 | 19447709 |
The SDDC Manager software is licensed under the VMware Cloud Foundation license. As part of this product, the SDDC Manager software deploys specific VMware software products.
The following VMware software components deployed by SDDC Manager are licensed under the VMware Cloud Foundation license:
The following VMware software components deployed by SDDC Manager are licensed separately:
NOTE: Only one vCenter Server license is required for all vCenter Servers deployed in a VMware Cloud Foundation system.
For details about the specific VMware software editions that are licensed under the licenses you have purchased, see the VMware Cloud Foundation Bill of Materials (BOM) section above.
For general information about the product, see VMware Cloud Foundation.
For details on supported configurations, see the VMware Compatibility Guide (VCG) and the Hardware Requirements section on the Prerequisite Checklist tab in the Planning and Preparation Workbook.
To access the VMware Cloud Foundation documentation, go to the VMware Cloud Foundation product documentation.
For information about VMware Cloud Foundation 4.4.1 on Dell EMC VxRail, see https://docs.vmware.com/en/VMware-Cloud-Foundation/4.4.1/rn/vmware-cloud-foundation-441-on-dell-emc-vxrail-release-notes/index.html.
To access the documentation for VMware software products that SDDC Manager can deploy, see the product documentation and use the drop-down menus on the page to choose the appropriate version:
The Cloud Foundation web-based interface supports the latest two versions of the following web browsers (Internet Explorer is not supported):
For the Web-based user interfaces, the supported standard resolution is 1024 by 768 pixels. For best results, use a screen resolution within these tested resolutions:
Resolutions below 1024 by 768, such as 640 by 960 or 480 by 800, are not supported.
You can install VMware Cloud Foundation 4.4.1 as a new release or perform a sequential or skip-level upgrade to VMware Cloud Foundation 4.4.1.
The new installation process has three phases:
Phase One: Prepare the Environment
The Planning and Preparation Workbook provides detailed information about the software, tools, and external services that are required to implement a Software-Defined Data Center (SDDC) with VMware Cloud Foundation, using a standard architecture model.
Phase Two: Image all servers with ESXi
Image all servers with the ESXi version mentioned in the Cloud Foundation Bill of Materials (BOM) section. See the VMware Cloud Foundation Deployment Guide for information on installing ESXi.
Phase Three: Install Cloud Foundation 4.4.1
See the VMware Cloud Foundation Deployment Guide for information on deploying Cloud Foundation.
You can perform a sequential or skip-level upgrade to VMware Cloud Foundation 4.4.1 from VMware Cloud Foundation 4.4, 4.3.1, 4.3, 4.2.1, 4.2, 4.1.0.1, or 4.1. If your environment is at a version earlier than 4.1, you must upgrade the management domain and all VI workload domains to VMware Cloud Foundation 4.1 and then upgrade to VMware Cloud Foundation 4.4.1. For more information see VMware Cloud Foundation Lifecycle Management.
IMPORTANT: Before you upgrade a vCenter Server, take a file-based backup. See Manually Back Up vCenter Server.
NOTE: Scripts that rely on SSH being enabled on ESXi hosts will not work after upgrading to VMware Cloud Foundation 4.4.1, since VMware Cloud Foundation 4.4 disables the SSH service by default. Update your scripts to account for this new behavior. See KB 86230 for information about enabling and disabling the SSH service on ESXi hosts.
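If your scripts need SSH on specific hosts after the upgrade, the following is a minimal sketch of re-enabling it from a host's ESXi Shell or console; confirm the recommended procedure in KB 86230 before scripting it:
vim-cmd hostsvc/enable_ssh   # allow the SSH (TSM-SSH) service to run on this host
vim-cmd hostsvc/start_ssh    # start the SSH service immediately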
The following issues are resolved in this release:
Workload Management does not support NSX-T Data Center Federation
You cannot deploy Workload Management (vSphere with Tanzu) to a workload domain when that workload domain's NSX-T Data Center instance is participating in an NSX-T Data Center Federation.
None.
NSX-T Guest Introspection (GI) and NSX-T Service Insertion (SI) are not supported on stretched clusters
There is no support for stretching clusters where NSX-T Guest Introspection (GI) or NSX-T Service Insertion (SI) are enabled. VMware Cloud Foundation detaches Transport Node Profiles from AZ2 hosts to allow AZ-specific network configurations. NSX-T GI and NSX-T SI require that the same Transport Node Profile be attached to all hosts in the cluster.
None.
Stretched clusters and Workload Management
You cannot stretch a cluster on which Workload Management is deployed.
None.
Offline customers will see NSX-T 3.1.3.7.2 (10 May 2022) instead of 3.1.3.7.4 until they connect to the depot.
Once the user connects to the depot, the dynamic manifest is downloaded automatically, overwriting the incorrect values in the description. There are no technical side effects.
vCenter Server upgrade fails
Upgrading vCenter Server may fail in certain circumstances. The log files show the error com.vmware.vapi.std.errors.Unauthenticated.
Workaround: Retry the upgrade.
View Status information for an update shows the wrong component while an update is in progress.
When viewing the status of an update, the SDDC Manager UI may display information about a previously updated component.
Workaround: To view update status for the current component, refresh your browser page.
Skip-level upgrade from VMware Cloud Foundation 4.1 to 4.4.1
If you have enabled Kubernetes-Workload Management on a cluster in a workload domain, then you cannot perform a skip-level upgrade from VMware Cloud Foundation 4.1 to 4.4.1.
Workaround: Upgrade to VMware Cloud Foundation 4.3.1 and then perform a skip-level upgrade to 4.4.1.
Async Patch Tool Known Issues
The Async Patch Tool is a utility that allows you to apply critical patches to certain VMware Cloud Foundation components (NSX-T Manager, vCenter Server, and ESXi) outside of VMware Cloud Foundation releases. The Async Patch Tool also allows you to enable upgrade of an async patched system to a new version of VMware Cloud Foundation.
See the Async Patch Tool Release Notes for known issues.
VMware Cloud Foundation NSX upgrade fails because of a pre-upgrade check warning raised due to IPFIX being enabled on the hosts.
The underlying issue is caused by changes to the VMKAPI in ESXi 7.0 U3. Due to these changes, the IPFIX module receives a non-zero reference count when an unload of the module is attempted, causing the unload of the IPFIX module to fail. See KB 87975 for more information.
Workaround: SDDC Manager can proceed with the upgrade if you suppress the NSX pre-upgrade check warnings.
1. Log in to the SDDC Manager VM using SSH and switch to the root user: su -
2. Change to the LCM configuration directory: cd /opt/vmware/vcf/lcm/lcm-app/conf/
3. Add the following line to the application-prod.properties file: lcm.nsxt.suppress.prechecks.warnings=true
4. Restart the LCM service: systemctl restart lcm
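A minimal consolidated sketch of the same steps, assuming SSH access to the SDDC Manager VM (appending with echo is one equivalent way to add the property; you can also edit the file with vi):
su -                                                                               # switch to the root user
cd /opt/vmware/vcf/lcm/lcm-app/conf/
echo 'lcm.nsxt.suppress.prechecks.warnings=true' >> application-prod.properties   # add the suppression property
systemctl restart lcm                                                              # restart the LCM service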
NSX-T upgrade causing host PSOD
An ESXi host can PSOD during an NSX-T upgrade when there is a mass migration of DFW filters and flows are being revalidated while a configuration cycle is occurring.
See KB 87803 for more information. This issue is fixed in NSX-T 3.1.3.6.
If a VCF upgrade is attempted after applying this workaround, the LCM precheck on the DRS configuration will fail. This is expected behavior.
Cluster-level ESXi upgrade fails
Cluster-level selection during upgrade does not consider the health status of the clusters and may show a cluster's status as Available, even for a faulty cluster. If you select a faulty cluster, the upgrade fails.
Always perform an update precheck to validate the health status of the clusters. Resolve any issues before upgrading.
You are unable to update NSX-T Data Center in the management domain or in a workload domain with vSAN principal storage because of an error during the NSX-T transport node precheck stage
In SDDC Manager, when you run the upgrade precheck before updating NSX-T Data Center, the NSX-T transport node validation fails with the following error.
No coredump target has been configured. Host core dumps cannot be saved.:System logs on host sfo01-m01-esx04.sfo.rainpole.io are stored on non-persistent storage. Consult product documentation to configure a syslog server or a scratch partition.
Because the upgrade precheck results in an error, you cannot proceed with updating the NSX-T Data Center instance in the domain. VMware Validated Design supports vSAN as the principal storage in the management domain. However, vSAN datastores do not support scratch partitions. See VMware Knowledge Base article 2074026.
Disable the update precheck validation for the subsequent NSX-T Data Center update.
1. Open the application-prod.properties file for editing: vi /opt/vmware/vcf/lcm/lcm-app/conf/application-prod.properties
2. Add the following line: lcm.nsxt.suppress.prechecks=true
3. Restart the life cycle management service: systemctl restart lcm
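Separately from suppressing the precheck, the host message itself flags logs stored on non-persistent storage; a minimal sketch of pointing an affected ESXi host at a remote syslog server follows (the server name and port are placeholder assumptions, not values from this document):
esxcli system syslog config set --loghost='udp://syslog.example.com:514'   # send host logs to a remote syslog server
esxcli system syslog reload                                                 # apply the updated syslog configuration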
NSX-T upgrade may fail at the step NSX T TRANSPORT NODE POSTCHECK STAGE
NSX-T upgrade may not proceed beyond the NSX T TRANSPORT NODE POSTCHECK STAGE.
Workaround: Contact VMware support.
ESXi upgrade fails with the error "Incompatible patch or upgrade files. Please verify that the patch file is compatible with the host. Refer LCM and VUM log file."
This error occurs if any of the ESXi hosts that you are upgrading have detached storage devices.
Workaround: Attach all storage devices to the ESXi hosts being upgraded, reboot the hosts, and retry the upgrade.
Update precheck fails with the error "Password has expired"
If the vCenter Single Sign-On password policy specifies a maximum lifetime of zero (never expires), the precheck fails.
Workaround: Set the maximum lifetime password policy to something other than zero and retry the precheck.
Skip-level upgrades are not enabled for some product components after VMware Cloud Foundation is upgraded to 4.3
After performing skip level upgrade to VMware Cloud Foundation 4.3 from 4.1.x or 4.2.x, one or more of the following symptoms is observed:
See KB 85505.
vRealize Operations Manager upgrade fails on the step VREALIZE_UPGRADE_PREPARE_BACKUP with the error: Waiting for vRealize Operations cluster to change state timed out
When upgrading vRealize Operations Manager, SDDC Manager takes the vRealize Operations Manager cluster offline and takes snapshots of the vRealize Operations Manager virtual machines. In some circumstances, taking the cluster offline takes a long time and the operation times out.
Workaround: Take the vRealize Operations Manager cluster back online and retry the upgrade.
If the upgrade continues to fail, take the snapshots manually and retry the upgrade. Since the snapshots already exist, SDDC Manager will skip that step and proceed with the upgrade.
The Cloud Foundation Builder VM remains locked after more than 15 minutes.
The VMware Imaging Appliance (VIA) locks out the admin user after three unsuccessful login attempts. Normally the lockout resets after fifteen minutes, but the underlying Cloud Foundation Builder VM does not reset it automatically.
Log in to the VM console of the Cloud Foundation Builder VM as the root user. Unlock the account by resetting the password of the admin user with the following command:
pam_tally2 --user=<user> --reset
Bringup fails when creating NSX-T Data Center transport nodes
The bringup task "Create NSX-T Data Center Transport Nodes from Discovered Nodes" might fail if there's an ESXi host in the management cluster which is pending a reboot.
Workaround: Reboot all ESXi hosts that are pending reboot and retry bringup.
SDDC Manager UI Application upgrade fails with Password Authentication Exception
There is a timing issue during the RPM installation that causes the database user ID and password to not be set correctly.
Workaround: Wait for the database password to appear in the config.properties file before starting the sddc-manager-ui-app service. See KB 77551 for more information.
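As a rough sketch of the waiting step in this workaround (the config.properties path below is an assumption for illustration only; confirm the actual location in KB 77551):
# Poll config.properties until a password entry appears, then start the UI service.
until grep -q 'password' /opt/vmware/vcf/sddc-manager-ui-app/config.properties; do sleep 10; done
systemctl start sddc-manager-ui-app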
Disabling CEIP on SDDC Manager does not disable CEIP on vRealize Automation and vRealize Suite Lifecycle Manager
When you disable CEIP on the SDDC Manager Dashboard, data collection is not disabled on vRealize Automation and vRealize Suite Lifecycle Manager. This is because of API deprecation in vRealize Suite 8.x.
Workaround: Manually disable CEIP in vRealize Automation and vRealize Suite Lifecycle Manager. For more information, see VMware vRealize Automation Documentation and VMware vRealize Suite Lifecycle Manager Documentation.
Generate CSR task for a component hangs
When you generate a CSR, the task may fail to complete due to issues with the component's resources. For example, when you generate a CSR for NSX Manager, the task may fail to complete due to issues with an NSX Manager node. You cannot retry the task once the resource is up and running again.
Workaround: Log in to the SDDC Manager VM using SSH as the vcf user and restart the operations manager service: systemctl restart operationsmanager
SoS utility options for health check are missing information
Due to limitations of the ESXi service account, some information is unavailable in the following health check options:
--hardware-compatibility-report: No Devices and Driver information for ESXi hosts.
--storage-health: No vSAN Health Status or Total no. of disks information for ESXi hosts.
Workaround: None.
Supportability and Serviceability (SoS) Utility health checks fail with the error "Failed to get details"
SoS is not able to handle ESXi host names that include uppercase letters.
Workaround: Use the precheck functionality in the SDDC Manager UI to check the health of the ESXi hosts.
SDDC Manager UI does not load correctly
If you log in to the SDDC Manager UI using an Active Directory user name that includes a space, the UI does not load correctly.
Workaround: None
Adding host fails when host is on a different VLAN
A host add operation can sometimes fail if the host is on a different VLAN.
NOTE: If you later remove this host, you must also manually remove the port group if it is not being used by any other host.
Deploying partner services on an NSX-T workload domain displays an error
Deploying partner services, such as McAfee or Trend, on a workload domain enabled for vSphere Update Manager (VUM) displays the “Configure NSX at cluster level to deploy Service VM” error.
Workaround: Attach the transport node profile to the cluster and try deploying the partner service. After the service is deployed, detach the transport node profile from the cluster.
If the witness ESXi version does not match with the host ESXi version in the cluster, vSAN cluster partition may occur
The vSAN stretched cluster workflow does not check the ESXi version of the witness host. If the witness ESXi version does not match the ESXi version of the hosts in the cluster, a vSAN cluster partition may occur.
vSAN partition and critical alerts are generated when the witness MTU is not set to 9000
If the MTU of the witness switch in the witness appliance is not set to 9000, a vSAN stretched cluster partition may occur.
Workaround: Set the MTU of the witness switch in the witness appliance to 9000.
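For illustration only, if the witness traffic in the appliance uses a standard vSwitch, the MTU can be raised from the witness appliance's ESXi Shell; the switch name witnessSwitch below is an assumption, so substitute the name used in your environment:
esxcli network vswitch standard set --vswitch-name=witnessSwitch --mtu=9000   # set the witness switch MTU to 9000
esxcli network vswitch standard list                                           # confirm the MTU change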
Adding a host to a vLCM-enabled workload domain configured with the Dell Hardware Support Manager (OMIVV) fails
When you try to add a host to a vSphere cluster for a workload domain enabled with vSphere Lifecycle Manager (vLCM), the task fails and the domain manager log reports "The host (host-name) is currently not managed by OMIVV." The domain manager logs are located at /var/log/vmware/vcf/domainmanager on the SDDC Manager VM.
Update the hosts inventory in OMIVV and retry the add host task in the SDDC Manager UI. See the Dell documentation for information about updating the hosts inventory in OMIVV.
Adding a vSphere cluster or adding a host to a workload domain fails
Under certain circumstances, adding a host or vSphere cluster to a workload domain fails at the Configure NSX-T Transport Node or Create Transport Node Collection subtask.
When all host issues are resolved, transport node creation starts for the failed nodes. When all hosts are successfully created as transport nodes, retry the failed add vSphere cluster or add host task from the SDDC Manager UI.
The vSAN Performance Service is not enabled for vSAN clusters when CEIP is not enabled
If you do not enable the VMware Customer Experience Improvement Program (CEIP) in SDDC Manager, when you create a workload domain or add a vSphere cluster to a workload domain, the vSAN Performance Service is not enabled for vSAN clusters. When CEIP is enabled, data from the vSAN Performance Service is provided to VMware and this data is used to aid VMware Support with troubleshooting and for products such as VMware Skyline, a proactive cloud monitoring service. See Customer Experience Improvement Program for more information on the data collected by CEIP.
Enable CEIP in SDDC Manager. See the VMware Cloud Foundation Documentation. After CEIP is enabled, a scheduled task that enables the vSAN Performance Service on existing clusters in workload domains runs every three hours. The service is also enabled for new workload domains and clusters. To enable the vSAN Performance Service immediately, see the VMware vSphere Documentation.
Creation or expansion of a vSAN cluster with more than 32 hosts fails
By default, a vSAN cluster can grow up to 32 hosts. With large cluster support enabled, a vSAN cluster can grow up to a maximum of 64 hosts. However, even with large cluster support enabled, a creation or expansion task can fail on the sub-task Enable vSAN on vSphere Cluster.
For more information about large cluster support, see https://kb.vmware.com/kb/2110081.
Removing a host from a cluster, deleting a cluster from a workload domain, or deleting a workload domain fails if Service VMs (SVMs) are present
If you deployed an endpoint protection service (such as guest introspection) to a cluster through NSX-T Data Center, then removing a host from the cluster, deleting the cluster, or deleting the workload domain containing the cluster will fail on the subtask Enter Maintenance Mode on ESXi Hosts.
vCenter Server overwrites the NFS datastore name when adding a cluster to a VI workload domain
If you add an NFS datastore with the same NFS server IP address, but a different NFS datastore name, as an NFS datastore that already exists in the workload domain, then vCenter Server applies the existing datastore name to the new datastore.
If you want to add an NFS datastore with a different datastore name, then it must use a different NFS server IP address.
The VMware Cloud Foundation API ignores NSX VDS uplink information for in-cluster expansion of an NSX Edge cluster
When you use the VMware Cloud Foundation API to expand an NSX Edge cluster and the new NSX Edge node is going to be hosted on the same vSphere cluster as the existing NSX Edge nodes (in-cluster), the edgeClusterExpansionSpec ignores any information you provide for firstNsxVdsUplink and secondNsxVdsUplink.
Workaround: None. This is by design. For in-cluster expansions, new NSX Edge nodes use the same NSX VDS uplinks as the existing NSX Edge nodes in the NSX Edge cluster.
Stretch cluster operation fails
If the cluster that you are stretching does not include a powered-on VM with an operating system installed, the operation fails at the "Validate Cluster for Zero VMs" task.
Make sure the cluster has a powered-on VM with an operating system installed before stretching the cluster.
vRealize Operations Manager: VMware Security Advisory VMSA-2021-0018
VMSA-2021-0018 describes security vulnerabilities that affect VMware Cloud Foundation.
Workaround: See KB 85452 for information about applying vRealize Operations Security Patches that resolve the issues.
vRealize Suite Lifecycle Manager 8.6.2 reports an error
vRealize Suite Lifecycle Manager requires a PSPAK in order to support VMware Cloud Foundation 4.4.1.
Workaround: Install the PSPAK. See the VMware vRealize Suite Lifecycle Manager 8.6.2 PSPAK 3 Release Notes.
Updating the DNS or NTP server configuration does not apply the update to vRealize Automation
Using the Cloud Foundation API to update the DNS or NTP servers does not apply the update to vRealize Automation due to a bug in vRealize Suite Lifecycle Manager.
Workaround: Manually update the DNS or NTP server(s) for vRealize Automation.
To update the DNS server(s) for vRealize Automation:
1. Remove the existing name server entries: sed '/nameserver.*/d' -i /etc/resolv.conf
2. Add the new DNS server: echo nameserver [DNS server IP] >> /etc/resolv.conf
3. Verify the change: cat /etc/resolv.conf
Repeat these steps for each vRealize Automation node.
To update the NTP server(s) for vRealize Automation:
1. Set the NTP server: vracli ntp systemd --set [NTP server IP]
To add multiple NTP servers: vracli ntp systemd --set [NTP server 1 IP,NTP server 2 IP]
2. Verify the configuration: vracli ntp show-config
3. Apply the configuration: vracli ntp apply
4. Verify the configuration again: vracli ntp show-config
Connecting vRealize Operations Manager to a workload domain fails at the "Create vCenter Server Adapter in vRealize Operations Manager for the Workload Domain" step
When you connect vRealize Operations Manager to a workload domain, it fails at the Create vCenter Server Adapter in vRealize Operations Manager for the Workload Domain step with a message similar to Failed to configure vCenter <vcenter-hostname> in vROps <vrops-hostname>, because Failed to manage vROps adapter. This issue can occur when the vRealize Operations cluster is offline.
Workaround: Make sure that the vRealize Operations cluster is online.
vRealize Operations Management Pack for VMware Identity Manager is not installed
If you install vRealize Operations Manager before you install Workspace ONE Access, then the vRealize Operations Management Pack for VMware Identity Manager is not installed.
Workaround:
Deploying a second vRealize Suite Lifecycle Manager fails
If you have multiple instances of VMware Cloud Foundation in the same SSO domain and you try to deploy vRealize Suite Lifecycle Manager on both, the second deployment will fail with the message Add vCenter Server and Data Center to vRealize Suite Lifecycle Manager Failed.
Workaround: Use a single vRealize Suite Lifecycle Manager to manage instances of VMware Cloud Foundation in the same SSO domain.
vRealize Suite Lifecycle Manager reports a "FAILED" inventory sync
After rotating a vCenter Server service account password in SDDC Manager, the inventory sync may fail for vRealize Suite environments managed by VMware Cloud Foundation.
Workaround: Log in to vRealize Suite Lifecycle Manager to identify and troubleshoot the failed environment(s).