The management domain in your environment must be upgraded before you upgrade VI workload domains. To upgrade to VMware Cloud Foundation 5.2, all VI workload domains in your environment must be at VMware Cloud Foundation 4.5 or higher. If your environment is at a version lower than 4.5, you must upgrade the workload domains to 4.5 and then upgrade to 5.2.

Within a VI workload domain, components must be upgraded in the following order.
  1. NSX.
  2. vCenter Server.
  3. ESXi.
  4. Workload Management on clusters that have vSphere with Tanzu. Workload Management can be upgraded through vCenter Server. See Updating the vSphere with Tanzu Environment.
  5. If you suppressed the Enter Maintenance Mode prechecks for ESXi or NSX, delete the following lines from the /opt/vmware/vcf/lcm/lcm-app/conf/application-prod.properties file and restart the LCM service (see the sketch that follows these steps):

    lcm.nsxt.suppress.dry.run.emm.check=true

    lcm.esx.suppress.dry.run.emm.check.failures=true

  6. If you have stretched clusters in your environment, upgrade the vSAN witness host. See Upgrade vSAN Witness Host for VMware Cloud Foundation.
  7. For NFS-based workload domains, add a static route for hosts to access NFS storage over the NFS gateway. See Post Upgrade Steps for NFS-Based VI Workload Domains.
After all upgrades have completed successfully:
  1. Remove the VM snapshots you took before starting the update.
  2. Take a backup of the newly installed components.
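
The following is a minimal sketch of step 5 in the list above, run as root on the SDDC Manager appliance. It assumes both suppression properties are present in application-prod.properties; sed removes them in place and the LCM service is restarted so the change takes effect.

    # Remove the Enter Maintenance Mode suppression properties (run as root on the SDDC Manager appliance)
    sed -i '/lcm.nsxt.suppress.dry.run.emm.check=true/d' /opt/vmware/vcf/lcm/lcm-app/conf/application-prod.properties
    sed -i '/lcm.esx.suppress.dry.run.emm.check.failures=true/d' /opt/vmware/vcf/lcm/lcm-app/conf/application-prod.properties

    # Restart the LCM service so the updated properties take effect
    systemctl restart lcm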

Plan VI Workload Domain Upgrade

Before proceeding with a VI workload domain upgrade, you must first plan the upgrade to your target version.

Prerequisites

Upgrade the Management Domain to VMware Cloud Foundation 5.2.

Procedure

  1. In the navigation pane, click Inventory > Workload Domains.
  2. On the Workload Domains page, click the workload domain you want to upgrade and click the Updates tab.
  3. Under Available Updates, click PLAN UPGRADE.
  4. On the Plan Upgrade for VMware Cloud Foundation screen, select the target version from the drop-down, and click CONFIRM.
    Caution:

    You must upgrade all VI workload domains to VMware Cloud Foundation 5.x. Upgrading to a higher 4.x release once the management domain has been upgraded to 5.x is unsupported.


Results

Bundles applicable to the chosen release will be made available to the VI workload domain.


Perform Update Precheck in SDDC Manager

You must perform a precheck in SDDC Manager before applying an update bundle to ensure that your environment is ready for the update.

Bundle-level pre-checks for vCenter are available in VMware Cloud Foundation.

Note:

Because ESXi bundle-level prechecks work only for minor-version upgrades (for example, from ESXi 7.x to 7.y, or from ESXi 8.x to 8.y), these prechecks do not run in VMware Cloud Foundation.

If you silence a vSAN Skyline Health alert in the vSphere Client, SDDC Manager skips the related precheck and indicates which precheck it skipped. Click RESTORE PRECHECK to include the silenced precheck.

You can also silence failed vSAN prechecks in the SDDC Manager UI by clicking Silence Precheck. Silenced prechecks do not trigger warnings or block upgrades.

Important:

Only silence alerts if you know that they are incorrect. Do not silence alerts for real issues that require remediation.

Procedure

  1. In the navigation pane, click Inventory > Workload Domains.
  2. On the Workload Domains page, click the workload domain where you want to run the precheck.
  3. On the domain summary page, click the Updates tab.
    Note:

    It is recommended that you run a precheck on your workload domain before performing an upgrade.

  4. Click RUN PRECHECK to select the components in the workload domain you want to precheck.
    1. All components in the workload domain are selected by default. To run the precheck only on specific components, for example vCenter Server or a vSphere cluster, choose Custom selection.
    2. If pending upgrade bundles are available, the Target Version drop-down menu contains General Upgrade Readiness and the VMware Cloud Foundation versions available to upgrade to. Selecting an available VMware Cloud Foundation upgrade version adds extra checks, such as bundle-level prechecks for hosts, vCenter Server, and so forth. These version-specific prechecks run only on components that have downloaded upgrade bundles.
  5. When the precheck begins, a progress message appears indicating the precheck progress and the time when the precheck began.
    Note: Parallel precheck workflows are supported. To precheck multiple workload domains, repeat steps 1-5 for each domain without waiting for the previous precheck to finish.
  6. Once the Precheck is complete, the report appears. Click through ALL, ERRORS, WARNINGS, and SILENCED to filter and browse through the results.

  7. To see details for a task, click the expander arrow.

    If a precheck task failed, fix the issue, and click Retry Precheck to run the task again. You can also click RETRY ALL FAILED RESOURCES to retry all failed tasks.

  8. If ESXi hosts display a driver incompatibility issue when updating a VI workload domain using vSphere Lifecycle Manager baselines, perform the following steps:
    1. Identify the controller with the HCL issue.

    2. For the given controller, identify the supported driver and firmware versions on the source and target ESXi versions.

    3. Upgrade the firmware, if required.

    4. Upgrade the driver manually on the ESXi host and retry the task at which the upgrade failed.

  9. If the workload domain contains a host that includes pinned VMs, the precheck fails at the Enter Maintenance Mode step. If the host can enter maintenance mode through vCenter Server UI, you can suppress this check for NSX and ESXi in VMware Cloud Foundation by following the steps below.
    1. Log in to SDDC Manager by using a Secure Shell (SSH) client with the user name vcf and the password you specified in the deployment parameter sheet.

    2. Open the /opt/vmware/vcf/lcm/lcm-app/conf/application-prod.properties file.

    3. Add the following lines to the end of the file (a consolidated sketch follows this procedure):

      lcm.nsxt.suppress.dry.run.emm.check=true

      lcm.esx.suppress.dry.run.emm.check.failures=true

    4. Restart Lifecycle Management by typing the following command in the console window.

      systemctl restart lcm

    5. After Lifecycle Management is restarted, run the precheck again.
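
The substeps above can be condensed into the following sketch. It assumes you have switched to the root account on the SDDC Manager appliance and that neither property is already present in application-prod.properties.

    # Append the Enter Maintenance Mode suppression properties
    echo 'lcm.nsxt.suppress.dry.run.emm.check=true' >> /opt/vmware/vcf/lcm/lcm-app/conf/application-prod.properties
    echo 'lcm.esx.suppress.dry.run.emm.check.failures=true' >> /opt/vmware/vcf/lcm/lcm-app/conf/application-prod.properties

    # Restart the LCM service, then rerun the precheck from the SDDC Manager UI
    systemctl restart lcm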

Results

The precheck result is displayed at the top of the Upgrade Precheck Details window. If you click Exit Details, the precheck result is displayed at the top of the Precheck section in the Updates tab.

Ensure that the precheck results are green before proceeding. Although a failed precheck will not prevent the upgrade from proceeding, it may cause the update to fail.

Upgrade NSX for VMware Cloud Foundation in a Federated Environment

If NSX Federation is configured between two VMware Cloud Foundation instances, SDDC Manager does not manage the lifecycle of the NSX Global Managers. You must manually upgrade the NSX Global Managers for each instance.

Download NSX Global Manager Upgrade Bundle

SDDC Manager does not manage the lifecycle of the NSX Global Managers. You must download the NSX upgrade bundle manually to upgrade the NSX Global Managers.

Procedure

  1. Log in to the Broadcom Support Portal and browse to My Downloads > VMware NSX.
  2. Click the version of NSX to which you are upgrading.
  3. Locate the NSX version Upgrade Bundle and verify that the upgrade bundle filename extension is .mub.
    The upgrade bundle filename has the following format: VMware-NSX-upgrade-bundle-versionnumber.buildnumber.mub.
  4. Click the download icon to download the upgrade bundle to the system where you access the NSX Global Manager UI.

Upgrade the Upgrade Coordinator for NSX Federation

The upgrade coordinator runs in the NSX Manager. It is a self-contained web application that orchestrates the upgrade process of hosts, NSX Edge cluster, NSX Controller cluster, and the management plane.

The upgrade coordinator guides you through the upgrade sequence. You can track the upgrade process and, if necessary, you can pause and resume the upgrade process from the UI.

Procedure

  1. In a web browser, log in to Global Manager for the domain at https://nsx_gm_vip_fqdn/.
  2. Select System > Upgrade from the navigation panel.
  3. Click Proceed to Upgrade.
  4. Navigate to the upgrade bundle .mub file you downloaded or paste the download URL link.
    • Click Browse to navigate to the location you downloaded the upgrade bundle file.
    • Paste the VMware download portal URL where the upgrade bundle .mub file is located.
  5. Click Upload.
    When the file is uploaded, the Begin Upgrade button appears.
  6. Click Begin Upgrade to upgrade the upgrade coordinator.
    Note:

    Upgrade one upgrade coordinator at a time.

  7. Read and accept the EULA terms and accept the notification to upgrade the upgrade coordinator.
  8. Click Run Pre-Checks to verify that all NSX components are ready for upgrade.
    The precheck verifies component connectivity, version compatibility, and component status.
  9. Resolve any warning notifications to avoid problems during the upgrade.

Upgrade NSX Global Managers for VMware Cloud Foundation

Manually upgrade the NSX Global Managers when NSX Federation is configured between two VMware Cloud Foundation instances.

Prerequisites

Before you can upgrade NSX Global Managers, you must upgrade all VMware Cloud Foundation instances in the NSX Federation, including NSX Local Managers, using SDDC Manager.

Procedure

  1. In a web browser, log in to Global Manager for the domain at https://nsx_gm_vip_fqdn/.
  2. Select System > Upgrade from the navigation panel.
  3. Click Start to upgrade the management plane and then click Accept.
  4. On the Select Upgrade Plan page, select Plan Your Upgrade and click Next.
    The NSX Manager UI, API, and CLI are not accessible until the upgrade finishes and the management plane is restarted.

Upgrade NSX for VMware Cloud Foundation 5.2

If NSX is deployed, upgrade NSX in the management domain and VI workload domains.

Until SDDC Manager is upgraded to version 5.2, you must upgrade NSX in the management domain before you upgrade NSX in a VI workload domain. Once SDDC Manager is at version 5.2 or later, you can upgrade NSX in VI workload domains before or after upgrading NSX in the management domain.

Upgrading NSX involves the following components:

  • Upgrade Coordinator

  • NSX Edges/Clusters (if deployed)

  • Host clusters

  • NSX Manager cluster

Procedure

  1. In the navigation pane, click Inventory > Workload Domains.
  2. On the Workload Domains page, click the domain you are upgrading and then click the Updates/Patches tab.

    When you upgrade NSX components for a selected VI workload domain, those components are upgraded for all VI workload domains that share the NSX Manager cluster.

  3. Click Precheck to run the upgrade precheck.

    Resolve any issues before proceeding with the upgrade.

    Note:

    The NSX precheck runs on all VI workload domains in your environment that share the NSX Manager cluster.

  4. In the Available Updates section, click Update Now or Schedule Update next to the VMware Software Update for NSX.
  5. On the NSX Edge Clusters page, select the NSX Edge clusters you want to upgrade and click Next.

    By default, all NSX Edge clusters are upgraded. To select specific NSX Edge clusters, select the Upgrade only NSX Edge clusters check box and select the Enable edge selection option. Then select the NSX Edges you want to upgrade.

  6. By default, all host clusters across all workload domains are upgraded. If you want to select specific host clusters to be upgraded, turn on the Enable host cluster selection setting. Host clusters are upgraded after all Edge clusters have been upgraded.
    Note:

    The NSX Manager cluster is upgraded only if you select all host clusters. If you have multiple host clusters and choose to upgrade only some of them, you must go through the NSX upgrade wizard again until all host clusters have been upgraded.

  7. Click Next.
  8. On the Upgrade Options dialog box, select the upgrade optimizations and click Next.

    By default, Edge clusters and host clusters are upgraded in parallel. You can enable sequential upgrade by selecting the relevant check box.

  9. If you selected the Schedule Upgrade option, specify the date and time for the NSX bundle to be applied.
  10. Click Next.
  11. On the Review page, review your settings and click Finish.

    The NSX upgrade begins and the upgrade components are displayed. The upgrade view displayed here pertains to the workload domain where you applied the bundle. Click the link to the associated workload domains to see the components pertaining to those workload domains.

  12. Monitor the upgrade progress. See Monitor VMware Cloud Foundation Updates.

    If a component upgrade fails, the failure is displayed across all associated workload domains. Resolve the issue and retry the failed task.

Results

When all NSX workload components are upgraded successfully, a message with a green background and check mark is displayed.

Upgrade vCenter Server for VMware Cloud Foundation 5.2

The upgrade bundle for VMware vCenter Server is used to upgrade the vCenter Server instances managed by SDDC Manager. Upgrade vCenter Server in the management domain before upgrading vCenter Server in VI workload domains.

Prerequisites

  • Download the VMware vCenter Server upgrade bundle. See Downloading VMware Cloud Foundation Upgrade Bundles.

  • If you are upgrading from VMware Cloud Foundation 4.5.x, allocate a temporary IP address for the vCenter Server migration. The IP address must be in the management subnet.
  • Take a file-based backup of the vCenter Server appliance before starting the upgrade. See Manually Back Up vCenter Server.

    Note:

    After taking a backup, do not make any changes to the vCenter Server inventory or settings until the upgrade completes successfully.

  • If your workload domain contains Workload Management (vSphere with Tanzu) enabled clusters, the supported target release depends on the version of Kubernetes (K8s) currently running in the cluster. Older versions of K8s might require a specific upgrade sequence. See KB 92227 for more information.

Procedure

  1. In the navigation pane, click Inventory > Workload Domains.
  2. On the Workload Domains page, click the domain you are upgrading and then click the Updates tab.
  3. Click Precheck to run the upgrade precheck.

    Resolve any issues before proceeding with the upgrade.

  4. In the Available Updates section, click Update Now or Schedule Update next to the VMware Software Update for vCenter Server.
  5. Click Confirm to confirm that you have taken a file-based backup of the vCenter Server appliance before starting the upgrade.
  6. If you selected Schedule Update, click the date and time for the bundle to be applied and click Schedule.
  7. If you are upgrading from VMware Cloud Foundation 4.5.x, enter the details for the temporary network to be used only during the upgrade.
  8. Monitor the upgrade progress. See Monitor VMware Cloud Foundation Updates.
  9. After the upgrade is complete, remove the old vCenter Server appliance.

    If the upgrade fails, resolve the issue and retry the failed task. If you cannot resolve the issue, restore vCenter Server using the file-based backup. See Restore vCenter Server.

What to do next

Once the upgrade successfully completes, use the vSphere Client to change the vSphere DRS Automation Level setting back to the original value (before you took a file-based backup) for each vSphere cluster that is managed by the vCenter Server. See KB 87631 for information about using VMware PowerCLI to change the vSphere DRS Automation Level.

Upgrade ESXi with vSphere Lifecycle Manager Baselines for VMware Cloud Foundation

Workload domains can use vSphere Lifecycle Manager baselines or vSphere Lifecycle Manager images. The following procedure describes upgrading ESXi hosts in workload domains that use vSphere Lifecycle Manager baselines.

For information about upgrading ESXi in VI workload domains that use vSphere Lifecycle Manager images, see Upgrade ESXi with vSphere Lifecycle Manager Images for VMware Cloud Foundation 5.2.

By default, the upgrade process upgrades the ESXi hosts in all clusters in a workload domain in parallel. If you have multiple clusters in a workload domain, you can select the clusters to upgrade.

If you want to skip any hosts while applying an ESXi update to a workload domain, you must add these hosts to the application-prod.properties file before you begin the update. See "Skip Hosts During ESXi Update".

To perform ESXi upgrades with custom ISO images or async drivers, see "Upgrade ESXi with Custom ISOs" and "Upgrade ESXi with VMware Cloud Foundation Stock ISO and Async Drivers".

If you are using external (non-vSAN) storage, the following procedure updates the ESXi hosts attached to the external storage. However, updating and patching the storage software and drivers is a manual task and falls outside of SDDC Manager lifecycle management. To ensure supportability after an ESXi upgrade, consult the vSphere HCL and your storage vendor.

Prerequisites

  • Verify that the ESXi passwords are valid.

  • Download the ESXi bundle. See Downloading VMware Cloud Foundation Upgrade Bundles.

  • Ensure that the domain for which you want to perform cluster-level upgrade does not have any hosts or clusters in an error state. Resolve the error state or remove the hosts and clusters with errors before proceeding.

Procedure

  1. Navigate to the Updates/Patches tab of the workload domain.
  2. Click Precheck to run the upgrade precheck.

    Resolve any issues before proceeding with the upgrade.

  3. In the Available Updates section, select the target release.
  4. Click Upgrade Now or Schedule Update.
  5. If you selected Schedule Update, specify the date and time for the bundle to be applied.
  6. Select the clusters to upgrade and click Next.

    The default setting is to upgrade all clusters. To upgrade specific clusters, click Enable cluster-level selection and select the clusters to upgrade.

  7. Click Next.
  8. Select the appropriate upgrade options and click Finish.

    By default, the selected clusters are upgraded in parallel. If you selected more than ten clusters to be upgraded, the first ten are upgraded in parallel and the remaining clusters are upgraded sequentially. To upgrade all selected clusters sequentially, select Enable sequential cluster upgrade.

    Click Enable Quick Boot if desired. Quick Boot for ESXi hosts is an option that allows vSphere Lifecycle Manager to reduce the upgrade time by skipping the physical reboot of the host.

  9. Monitor the upgrade progress. See Monitor VMware Cloud Foundation Updates.

What to do next

Upgrade the vSAN Disk Format for vSAN clusters. The disk format upgrade is optional. Your vSAN cluster continues to run smoothly if you use a previous disk format version. For best results, upgrade the objects to use the latest on-disk format. The latest on-disk format provides the complete feature set of vSAN. See Upgrading vSAN Disk Format Using vSphere Client.

Upgrade vSAN Witness Host for VMware Cloud Foundation

If your VMware Cloud Foundation environment contains stretched clusters, update and remediate the vSAN witness host.

Prerequisites

Download the ESXi ISO that matches the version listed in the Bill of Materials (BOM) section of the VMware Cloud Foundation Release Notes.

Procedure

  1. In a web browser, log in to vCenter Server at https://vcenter_server_fqdn/ui.
  2. Upload the ESXi ISO image file to vSphere Lifecycle Manager.
    1. Click Menu > Lifecycle Manager.
    2. Click the Imported ISOs tab.
    3. Click Import ISO and then click Browse.
    4. Navigate to the ESXi ISO file you downloaded and click Open.
    5. After the file is imported, click Close.
  3. Create a baseline for the ESXi image.
    1. On the Imported ISOs tab, select the ISO file that you imported, and click New baseline.
    2. Enter a name for the baseline and specify the Content Type as Upgrade.
    3. Click Next.
    4. Select the ISO file you had imported and click Next.
    5. Review the details and click Finish.
  4. Attach the baseline to the vSAN witness host.
    1. Click Menu > Hosts and Clusters.
    2. In the Inventory panel, click vCenter > Datacenter.
    3. Select the vSAN witness host and click the Updates tab.
    4. Under Attached Baselines, click Attach > Attach Baseline or Baseline Group.
    5. Select the baseline that you had created in step 3 and click Attach.
    6. Click Check Compliance.
      After the compliance check is completed, the Status column for the baseline is displayed as Non-Compliant.
  5. Remediate the vSAN witness host to update ESXi.
    1. Right-click the vSAN witness and click Maintenance Mode > Enter Maintenance Mode.
    2. Click OK.
    3. Click the Updates tab.
    4. Select the baseline that you had created in step 3 and click Remediate.
    5. In the End user license agreement dialog box, select the check box and click OK.
    6. In the Remediate dialog box, select the vSAN witness host, and click Remediate.
      The remediation process might take several minutes. After the remediation is completed, the Status column for the baseline is displayed as Compliant.
    7. Right-click the vSAN witness host and click Maintenance Mode > Exit Maintenance Mode.
    8. Click OK.
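
As an optional spot check, you can confirm that the witness host is now running the expected ESXi build from the command line. This assumes SSH or the ESXi Shell is enabled on the witness host; the reported build should match the version listed in the VMware Cloud Foundation BOM.

    # On the vSAN witness host, display the ESXi version and build number
    vmware -vl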

Skip Hosts During ESXi Update

You can skip hosts while applying an ESXi update to a workload domain. The skipped hosts are not updated.

Note:

You cannot skip hosts that are part of a VI workload domain that is using vSphere Lifecycle Manager images, since these hosts are updated at the cluster-level and not the host-level.

Procedure

  1. Using SSH, log in to the SDDC Manager appliance with the user name vcf and password you specified in the deployment parameter sheet.
  2. Type su to switch to the root account.
  3. Retrieve the host IDs for the hosts you want to skip.
    curl 'https://SDDC_MANAGER_IP/v1/hosts' -i -u 'username:password' -X GET -H 'Accept: application/json' | json_pp

    Replace SDDC_MANAGER_IP, username, and password with the information for your environment.

  4. Copy the ids for the hosts you want to skip from the output. For example:
    ...
             "fqdn" : "esxi-2.vrack.vsphere.local",
             "esxiVersion" : "6.7.0-16075168",
             "id" : "b318fe37-f9a8-48b6-8815-43aae5131b94",
    ...
    

    In this case, the id for esxi-2.vrack.vsphere.local is b318fe37-f9a8-48b6-8815-43aae5131b94.

  5. Open the /opt/vmware/vcf/lcm/lcm-app/conf/application-prod.properties file.
  6. At the end of the file, add the following line (a consolidated sketch follows this procedure):

    esx.upgrade.skip.host.ids=hostid1,hostid2

    Replace the host ids with the information from step 4. If you are including multiple host ids, do not add any spaces between them. For example: esx.upgrade.skip.host.ids=60927f26-8910-4dd3-8435-8bb7aef5f659,6c516864-b6de-4537-90e4-c0d711e5befb,65c206aa-2561-420e-8c5c-e51b9843f93d

  7. Save and close the file.
  8. Ensure that the ownership of the application-prod.properties file is vcf_lcm:vcf.
  9. Restart the LCM server by typing the following command in the console window:

    systemctl restart lcm
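
The following sketch condenses this procedure into a few commands, run as root on the SDDC Manager appliance. The curl options differ slightly from step 3: -s suppresses progress output and -k skips certificate validation (an assumption for environments with self-signed certificates). The host ID shown is the example value from step 4; replace SDDC_MANAGER_IP and the credentials with values for your environment.

    # List host FQDNs and ids (the grep filter is a rough match and may also show ids of nested objects)
    curl -sk 'https://SDDC_MANAGER_IP/v1/hosts' -u 'username:password' -H 'Accept: application/json' | json_pp | grep -E '"(fqdn|id)"'

    # Append the skip list (comma-separated, no spaces), fix ownership, and restart LCM
    echo 'esx.upgrade.skip.host.ids=b318fe37-f9a8-48b6-8815-43aae5131b94' >> /opt/vmware/vcf/lcm/lcm-app/conf/application-prod.properties
    chown vcf_lcm:vcf /opt/vmware/vcf/lcm/lcm-app/conf/application-prod.properties
    systemctl restart lcm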

Results

The hosts added to the application-prod.properties are not updated when you update the workload domain.

Upgrade ESXi with Custom ISOs

For clusters in workload domains with vSphere Lifecycle Manager baselines, you can upgrade ESXi with a custom ISO from your vendor. VMware Cloud Foundation 4.4.1.1 and later support multiple custom ISOs in a single ESXi upgrade in cases where specific clusters or workload domains require different custom ISOs.

Prerequisites

Download the appropriate vendor-specific ISOs on a computer with internet access. If no vendor-specific ISO is available for the required version of ESXi, then you can create one. See Create a Custom ISO Image for ESXi.

Procedure

  1. Download the VMware Software Update bundle for VMware ESXi. See Download Bundles Using SDDC Manager.
    To use an async patch version of ESXi, enable the patch with the Async Patch Tool before proceeding to the next step. See the Async Patch Tool documentation.
  2. Using SSH, log in to the SDDC Manager appliance.
  3. Create a directory for the vendor ISO(s) under the /nfs/vmware/vcf/nfs-mount directory. For example, /nfs/vmware/vcf/nfs-mount/esx-upgrade-partner-binaries.
  4. Copy the vendor-specific ISO(s) to the directory you created on the SDDC Manager appliance. For example, you can copy the ISO to the /nfs/vmware/vcf/nfs-mount/esx-upgrade-partner-binaries directory.
  5. Change permissions on the directory where you copied the ISO(s). For example,
    chmod -R 775 /nfs/vmware/vcf/nfs-mount/esx-upgrade-partner-binaries/
  6. Change owner to vcf.
    chown -R vcf_lcm:vcf /nfs/vmware/vcf/nfs-mount/esx-upgrade-partner-binaries/
  7. Create an ESX custom image JSON using the following template.
    {
      "esxCustomImageSpecList": [{
        "bundleId": "bundle ID of the ESXi bundle you downloaded",
        "targetEsxVersion": "ESXi version for the target VMware Cloud Foundation version",
        "useVcfBundle": false,
        "domainId": "xxxxxxxx-xxxx-xxxx-xxxxxxxxxxxx",
        "clusterId": "xxxxxxxx-xxxx-xxxx-xxxxxxxxxxxx",
        "customIsoAbsolutePath": "Path_to_custom_ISO"
      }]
    }
    where:

    bundleId: ID of the ESXi upgrade bundle you downloaded. You can retrieve the bundle ID by navigating to the Lifecycle Management > Bundle Management page and clicking View Details to view the bundle ID. For example, 8c0de63d-b522-4db8-be6c-f1e0ab7ef554. The bundle ID for an async patch looks slightly different. For example: 5dc57fe6-2c23-49fc-967c-0bea1bfea0f1-apTool.
      Note: If an incorrect bundle ID is provided, the upgrade will proceed with the VMware Cloud Foundation stock ISO and replace the custom VIBs in your environment with the stock VIBs.

    targetEsxVersion: Version of the ESXi bundle you downloaded. You can retrieve the target ESXi version by navigating to the Lifecycle Management > Bundle Management page and clicking View Details to view the "Update to Version".

    useVcfBundle: Specifies whether the VMware Cloud Foundation ESXi bundle is to be used for the upgrade.
      Note: If you want to upgrade with a custom ISO image, ensure that this is set to false.

    domainId (optional, VCF 4.4.1.1 and later only): ID of the specific workload domain for the custom ISO. Use the VMware Cloud Foundation API (GET /v1/domains) to get the IDs for your workload domains.

    clusterId (optional, VCF 4.4.1.1 and later only): ID of the specific cluster within a workload domain to apply the custom ISO. If you do not specify a clusterId, the custom ISO will be applied to all clusters in the workload domain. Use the VMware Cloud Foundation API (GET /v1/clusters) to get the IDs for your clusters.

    customIsoAbsolutePath: Path to the custom ISO file on the SDDC Manager appliance. For example, /nfs/vmware/vcf/nfs-mount/esx-upgrade-partner-binaries/VMware-VMvisor-Installer-7.0.0.update01-17325551.x86_64-DellEMC_Customized-A01.iso

    Here is an example of a completed JSON template.

    {
      "esxCustomImageSpecList": [{
        "bundleId": "8c0de63d-b522-4db8-be6c-f1e0ab7ef554",
        "targetEsxVersion": "8.0.1-xxxxxxxxx",
        "useVcfBundle": false,
        "customIsoAbsolutePath": "/nfs/vmware/vcf/nfs-mount/esx-upgrade-partner-binaries/VMware-VMvisor-Installer-8.0.0.update01-xxxxxxxx.x86_64-DellEMC_Customized-A01.iso"
      }]
    }
    Here is an example of a completed JSON template with multiple ISOs using a single workload domain and specified clusters (VCF 4.4.1.1 and later only).
    {
        "esxCustomImageSpecList": [
            {
                "bundleId": "aa7b16b1-d719-44b7-9ced-51bb02ca84f4",
                "targetEsxVersion": "8.0.2-xxxxxxxx",
                "useVcfBundle": false,
                "domainId": "1b7b16b1-d719-44b7-9ced-51bb02ca84b2",
                "clusterId": "c37b16b1-d719-44b7-9ced-51bb02ca84f4",
                "customIsoAbsolutePath": "/nfs/vmware/vcf/nfs-mount/esx-upgrade-partner-binaries/VMware-ESXi-7.0.2-17867351-DELL.zip"
            },
            {
                "bundleId": "aa7b16b1-d719-44b7-9ced-51bb02ca84f4",
                "targetEsxVersion": "7.0.1-18150133",
                "useVcfBundle": false,
                "domainId": "1b7b16b1-d719-44b7-9ced-51bb02ca84b2",
                "customIsoAbsolutePath": "/nfs/vmware/vcf/nfs-mount/esx-upgrade-partner-binaries/VMware-ESXi-7.0.2-17867351-HP.zip"
            }
        ]
    }
  8. Save the JSON file as esx-custom-image-upgrade-spec.json in the /nfs/vmware/vcf/nfs-mount directory (a verification sketch follows this procedure).
    Note: If the JSON file is not saved in the correct directory, the stock VMware Cloud Foundation ISO is used for the upgrade and the custom VIBs are overwritten.
  9. Set the correct permissions on the /nfs/vmware/vcf/nfs-mount/esx-custom-image-upgrade-spec.json file:

    chmod -R 775 /nfs/vmware/vcf/nfs-mount/esx-custom-image-upgrade-spec.json

    chown -R vcf_lcm:vcf /nfs/vmware/vcf/nfs-mount/esx-custom-image-upgrade-spec.json

  10. Open the /opt/vmware/vcf/lcm/lcm-app/conf/application-prod.properties file.
  11. In the lcm.esx.upgrade.custom.image.spec= parameter, add the path to the JSON file.
    For example, lcm.esx.upgrade.custom.image.spec=/nfs/vmware/vcf/nfs-mount/esx-custom-image-upgrade-spec.json
  12. In the navigation pane, click Inventory > Workload Domains.
  13. On the Workload Domains page, click the domain you are upgrading and then click the Updates/Patches tab.
  14. Schedule the ESXi upgrade bundle.
  15. Monitor the upgrade progress. See Monitor VMware Cloud Foundation Updates.
  16. After the upgrade is complete, confirm the ESXi version by clicking Current Versions. The ESXi hosts table displays the current ESXi version.
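
Before scheduling the ESXi upgrade bundle (step 14), you can optionally verify the spec file from the SDDC Manager appliance. This sketch assumes json_pp is available (it is used earlier in this guide for API output): json_pp fails with a parse error if the file is not valid JSON, and the other commands confirm the ownership set in step 9 and the LCM property set in step 11.

    # Confirm the spec file parses as valid JSON
    json_pp < /nfs/vmware/vcf/nfs-mount/esx-custom-image-upgrade-spec.json

    # Confirm ownership of the spec file and the LCM property that references it
    ls -l /nfs/vmware/vcf/nfs-mount/esx-custom-image-upgrade-spec.json
    grep 'lcm.esx.upgrade.custom.image.spec' /opt/vmware/vcf/lcm/lcm-app/conf/application-prod.properties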

Upgrade ESXi with VMware Cloud Foundation Stock ISO and Async Drivers

For clusters in workload domains with vSphere Lifecycle Manager baselines, you can apply the stock ESXi upgrade bundle with specified async drivers.

Prerequisites

Download the appropriate async drivers for your hardware on a computer with internet access.

Procedure

  1. Download the VMware Cloud Foundation ESXi upgrade bundle. See Download Bundles Using SDDC Manager.
  2. Using SSH, log in to the SDDC Manager appliance.
  3. Create a directory for the vendor provided async drivers under the /nfs/vmware/vcf/nfs-mount directory. For example, /nfs/vmware/vcf/nfs-mount/esx-upgrade-partner-drivers/drivers.
  4. Copy the async drivers to the directory you created on the SDDC Manager appliance. For example, you can copy the drivers to the /nfs/vmware/vcf/nfs-mount/esx-upgrade-partner-drivers/drivers directory.
  5. Change permissions on the directory where you copied the drivers. For example,
    chmod -R 775 /nfs/vmware/vcf/nfs-mount/esx-upgrade-partner-drivers/drivers
  6. Change owner to vcf.
    chown -R vcf_lcm:vcf /nfs/vmware/vcf/nfs-mount/esx-upgrade-partner-drivers/drivers
  7. Create an ESX custom image JSON using the following template.
    {
      "esxCustomImageSpecList": [{
        "bundleId": "bundle ID of the ESXi bundle you downloaded",
        "targetEsxVersion": "ESXi version for the target VMware Cloud Foundation version",
        "useVcfBundle": true,
        "esxPatchesAbsolutePaths": ["Path_to_Drivers"]
      }]
    }
    where:

    bundleId: ID of the ESXi upgrade bundle you downloaded. You can retrieve the bundle ID by navigating to the Lifecycle Management > Bundle Management page and clicking View Details to view the bundle ID. For example, 8c0de63d-b522-4db8-be6c-f1e0ab7ef554.

    targetEsxVersion: Version of the ESXi upgrade bundle you downloaded. You can retrieve the ESXi target version by navigating to the Lifecycle Management > Bundle Management page and clicking View Details to view the "Update to Version".

    useVcfBundle: Specifies whether the ESXi bundle is to be used for the upgrade. Set this to true.

    esxPatchesAbsolutePaths: Path to the async drivers on the SDDC Manager appliance. For example, /nfs/vmware/vcf/nfs-mount/esx-upgrade-partner-drivers/drivers/VMW-ESX-6.7.0-smartpqi-1.0.2.1038-offline_bundle-8984687.zip

    Here is an example of a completed JSON template.

    {
      "esxCustomImageSpecList": [{
        "bundleId": "411bea6a-b26c-4a15-9443-03f453c68752-apTool",
        "targetEsxVersion": "7.0.3-21053776",
        "useVcfBundle": true,
        "esxPatchesAbsolutePaths": ["/nfs/vmware/vcf/nfs-mount/esx-upgrade-partner-drivers/drivers/HPE-703.0.0.10.9.5.14-Aug2022-Synergy-Addon-depot.zip"]
      }]
    }
  8. Save the JSON file as esx-custom-image-upgrade-spec.json in the /nfs/vmware/vcf/nfs-mount directory.
    Note: If the JSON file is not saved in the correct directory, the stock VMware Cloud Foundation ISO is used for the upgrade and the custom VIBs are overwritten.
  9. Set the correct permissions on the /nfs/vmware/vcf/nfs-mount/esx-custom-image-upgrade-spec.json file:

    chmod -R 775 /nfs/vmware/vcf/nfs-mount/esx-custom-image-upgrade-spec.json

    chown -R vcf_lcm:vcf /nfs/vmware/vcf/nfs-mount/esx-custom-image-upgrade-spec.json

  10. Open the /opt/vmware/vcf/lcm/lcm-app/conf/application-prod.properties file.
  11. In the lcm.esx.upgrade.custom.image.spec= parameter, add the path to the JSON file.
    For example, lcm.esx.upgrade.custom.image.spec=/nfs/vmware/vcf/nfs-mount/esx-custom-image-upgrade-spec.json
  12. In the navigation pane, click Inventory > Workload Domains.
  13. On the Workload Domains page, click the management domain.
  14. On the Domain Summary page, click the Updates/Patches tab.
  15. In the Available Updates section, click Update Now or Schedule Update next to the VMware Software Update bundle for VMware ESXi.
  16. Monitor the upgrade progress. See Monitor VMware Cloud Foundation Updates.
  17. After the upgrade is complete, confirm the ESXi version by clicking Current Versions. The ESXi hosts table displays the current ESXi version.

Upgrade ESXi with vSphere Lifecycle Manager Images for VMware Cloud Foundation 5.2

Workload domains can use vSphere Lifecycle Manager baselines or vSphere Lifecycle Manager images for ESXi host upgrade. The following procedure describes upgrading ESXi hosts in workload domains that use vSphere Lifecycle Manager images.

For information about upgrading ESXi in workload domains that use vSphere Lifecycle Manager baselines, see Upgrade ESXi with vSphere Lifecycle Manager Baselines for VMware Cloud Foundation.

You create a vSphere Lifecycle Manager image for upgrading ESXi hosts using the vSphere Client. During the creation of the image, you define the ESXi version and can optionally add vendor add-ons, components, and firmware. After you extract the vSphere Lifecycle Manager image into SDDC Manager, the ESXi update will be available for the relevant VI workload domains.

Prerequisites

  • Verify that the ESXi passwords are valid.
  • Ensure that the domain for which you want to perform cluster-level upgrade does not have any hosts or clusters in an error state. Resolve the error state or remove the hosts and clusters with errors before proceeding.
  • You must upgrade NSX and vCenter Server before you can upgrade ESXi hosts with a vSphere Lifecycle Manager image.
  • If you want to add firmware to the vSphere Lifecycle Manager image, you must install the Hardware Support Manager from your vendor. See Firmware Updates.

Procedure

  1. Log in to the management domain vCenter Server using the vSphere Client.
  2. Create a vSphere Lifecycle Manager image.
    1. Right-click the management domain data center and select New Cluster.
    2. Enter a name for the cluster (for example, ESXi upgrade image) and click Next.
      Keep the default settings for everything except the cluster name.
    3. Define the vSphere Lifecycle Manager image and click Next.

      ESXi Version: From the ESXi Version drop-down menu, select the ESXi version specified in the VMware Cloud Foundation BOM. If the ESXi version does not appear in the drop-down menu, see Working With the vSphere Lifecycle Manager Depot.

      Vendor Add-On (optional): To add a vendor add-on to the image, click Select and select a vendor add-on. You can customize the image components, firmware, and drivers later.
    4. Click Finish.
    5. After the cluster is created successfully, click the Updates tab for the new cluster to further customize it, if needed.
    6. Click Hosts > Image and then click Edit.
    7. Edit the vSphere Lifecycle Manager image properties and click Save.
      You already specified the ESXi version and optional vendor add-on, but you can modify those settings as required.
      ESXi Version: From the ESXi Version drop-down menu, select the ESXi version specified in the VMware Cloud Foundation BOM. If the ESXi version does not appear in the drop-down menu, see Synchronize the vSphere Lifecycle Manager Depot and Import Updates to the vSphere Lifecycle Manager Depot.

      Vendor Add-On (optional): To add a vendor add-on to the image, click Select and select a vendor add-on.

      Firmware and Drivers Add-On (optional): To add a firmware add-on to the image, click Select. In the Select Firmware and Drivers Addon dialog box, specify a hardware support manager and select a firmware add-on to add to the image. Selecting a firmware add-on for a family of vendor servers is possible only if the respective vendor-provided hardware support manager is registered as an extension to the vCenter Server where vSphere Lifecycle Manager runs.

      Components: To add components to the image:
      • Click Show details.
      • Click Add Components.
      • Select the components and their corresponding versions to add to the image.
    vSphere saves the cluster image.
  3. Extract the vSphere Lifecycle Manager image into SDDC Manager.
    1. In the SDDC Manager UI, click Lifecycle Management > Image Management.
    2. Click Import Image.
    3. In the Option 1 section, select the management domain from the drop-down menu.
    4. In the Cluster drop-down, select the cluster from which you want to extract the vSphere Lifecycle Manager image. For example, ESXi upgrade image.
    5. Enter a name for the cluster image and click Extract Cluster Image.
    You can view status in the Tasks panel.
  4. Upgrade ESXi hosts with the vSphere Lifecycle Manager image.
    1. Navigate to the Updates tab of the VI workload domain.
    2. In the Available Updates section, click Configure Update.
    3. Click Next.
    4. Select the clusters to upgrade and click Next.
      The default setting is to upgrade all clusters. To upgrade specific clusters, click Enable cluster-level selection and select the clusters to upgrade.
    5. Select the cluster and the cluster image, and click Apply Image.
    6. Click Next.
    7. Select the upgrade options and click Next.
      By default, the selected clusters are upgraded in parallel. If you selected more than five clusters to be upgraded, the first five are upgraded in parallel and the remaining clusters are upgraded sequentially. To upgrade all selected clusters sequentially, select Enable sequential cluster upgrade.

      Select Enable Quick Boot to reduce the upgrade time by skipping the physical reboot of the host.

      Select Enforce Live Patch when the cluster image includes a Live Patch. With the Enforce Live Patch option, vSphere Lifecycle Manager does not place the hosts in the cluster into maintenance mode, hosts are not rebooted, and there is no need to migrate the virtual machines running on the hosts in the cluster.

      Select Migrate Powered Off and Suspended VMs to migrate the suspended and powered off virtual machines from the hosts that must enter maintenance mode to other hosts in the cluster.

    8. Click Next, review the settings, and click Finish.
      VMware Cloud Foundation runs a cluster image hardware compatibility and compliance check. Resolve any reported issues before proceeding.
    9. Click Schedule Update and click Next.
    10. Select Upgrade Now or Schedule Update and click Finish.
    11. Monitor the upgrade progress. See Monitor VMware Cloud Foundation Updates.

What to do next

Upgrade the vSAN Disk Format for vSAN clusters. The disk format upgrade is optional. Your vSAN cluster continues to run smoothly if you use a previous disk format version. For best results, upgrade the objects to use the latest on-disk format. The latest on-disk format provides the complete feature set of vSAN. See Upgrade vSAN on-disk format versions.

Firmware Updates

You can use vSphere Lifecycle Manager images to perform firmware updates on the ESXi hosts in a cluster. Using a vSphere Lifecycle Manager image simplifies the host update operation. With a single operation, you update both the software and the firmware on the host.

To apply firmware updates to hosts in a cluster, you must deploy and configure a vendor provided software module called hardware support manager. The deployment method and the management of a hardware support manager is determined by the respective OEM. For example, the hardware support manager that Dell EMC provides is part of their host management solution, OpenManage Integration for VMware vCenter (OMIVV), which you deploy as an appliance. See Deploying Hardware Support Managers.

You must deploy the hardware support manager appliance on a host with sufficient disk space. After you deploy the appliance, you must power on the appliance virtual machine, log in to the appliance as an administrator, and register the appliance as a vCenter Server extension. Each hardware support manager has its own mechanism of managing firmware packages and making firmware add-ons available for you to choose.

For detailed information about deploying, configuring, and managing hardware support managers, refer to the vendor-provided documentation.

Update License Keys for a Workload Domain

If upgrading from a VMware Cloud Foundation version prior to 5.0, you need to update your license keys to support vSAN 8.x and vSphere 8.x.

You first add the new component license key to SDDC Manager. This must be done once per license instance. You then apply the license key to the component on a per workload domain basis.

Prerequisites

You need a new license key for vSAN 8.x and vSphere 8.x. Prior to VMware Cloud Foundation 5.1.1, you must add and update the component license key for each upgraded component in the SDDC Manager UI as described below.

With VMware Cloud Foundation 5.1.1 and later, you can add a component license key as described below, or add a solution license key in the vSphere Client. See Managing vSphere Licenses for information about using a solution license key for VMware ESXi and vCenter Server. If you are using a solution license key, you must also add a VMware vSAN license key for vSAN clusters. See Configure License Settings for a vSAN Cluster.

Procedure

  1. Add a new component license key to the SDDC Manager inventory.
    1. In the navigation pane, click Administration > Licensing.
    2. On the Licensing page, click + License Key.
    3. Select a product from the drop-down menu.
    4. Enter the license key.
    5. Enter a description for the license key.
    6. Click Add.
    7. Repeat for each license key to be added.
  2. Update a license key for a workload domain component.
    1. In the navigation pane, click Inventory > Workload Domains.
    2. On the Workload Domains page, click the domain you are upgrading.
    3. On the Summary tab, expand the red error banner, and click Update Licenses.
    4. On the Update Licenses page, click Next.
    5. Select the products to update and click Next.
    6. For each product, select a new license key from the list, select the entity to which the license key should be applied, and click Next.
    7. On the Review pane, review each license key and click Submit.

      The new license keys will be applied to the workload domain. Monitor the task in the Tasks pane in SDDC Manager.

Upgrade vSphere Distributed Switch versions

[Optional] Upgrade the distributed switch to take advantage of features that are available only in the later versions.

Prerequisites

The ESXi and vCenter Server upgrades are complete.

Procedure

  1. On the vSphere Client Home page, click Networking and navigate to the distributed switch.
  2. Right-click the distributed switch and select Upgrade > Upgrade Distributed Switch.
  3. Select the vSphere Distributed Switch version that you want to upgrade the switch to and click Next.

Results

The vSphere Distributed Switch is successfully upgraded.

Upgrade vSAN on-disk format versions

[Optional] Upgrade the vSAN on-disk format version to take advantage of features that are available only in the later versions.

  • The upgrade may cause temporary resynchronization traffic and use additional space by moving data or rebuilding object components to a new data structure.

Prerequisites

  • The ESXi and vCenter Server upgrades are complete.

  • Verify that the disks are in a healthy state. Navigate to the Disk Management page to verify the object status.

  • Verify that your hosts are not in maintenance mode. When upgrading the disk format, do not place the hosts in maintenance mode.

  • Verify that there are no component rebuilding tasks currently in progress in the vSAN cluster. For information about vSAN resynchronization, see vSphere Monitoring and Performance.

Procedure

  1. Navigate to the vSAN cluster.
  2. Click the Configure tab.
  3. Under vSAN, select Disk Management.
  4. Click Pre-check Upgrade. The upgrade pre-check analyzes the cluster to uncover any issues that might prevent a successful upgrade. Some of the items checked are host status, disk status, network status, and object status. Upgrade issues are displayed in the Disk pre-check status text box.
  5. Click Upgrade.
  6. Click Yes on the Upgrade dialog box to perform the upgrade of the on-disk format.

Results

vSAN successfully upgrades the on-disk format. The On-disk Format Version column displays the disk format version of storage devices in the cluster.
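
As an optional spot check, you can also confirm the on-disk format from any ESXi host in the vSAN cluster. This sketch assumes SSH access to the host; the grep filter is intentionally loose because the exact field label can vary between vSAN releases.

    # Show the on-disk format version of each device claimed by vSAN on this host
    esxcli vsan storage list | grep -i "format version"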

Post Upgrade Steps for NFS-Based VI Workload Domains

After upgrading VI workload domains that use NFS storage, you must add a static route for hosts to access NFS storage over the NFS gateway. This process must be completed before expanding the workload domain.

Procedure

  1. Identify the IP address of the NFS server for the VI workload domain.
  2. Identify the network pool associated with the hosts in the cluster and the NFS gateway for the network pool.
    1. Log in to SDDC Manager.
    2. Click Inventory > Workload Domains and then click the VI workload domain.
    3. Click the Clusters tab and then click an NFS-based cluster.
    4. Click the Hosts tab and note down the network pool for the hosts.
    5. Click the Info icon next to the network pool name and note down the NFS gateway.
  3. Ensure that the NFS server is reachable from the NFS gateway. If a gateway does not exist, create it.
  4. Identify the vmknic on each host in the cluster that is configured for NFS traffic.
  5. Configure a static route on each host so that the NFS server network is reached through the NFS gateway.
    esxcli network ip route ipv4 add -g NFS-gateway-IP -n NFS-server-network
  6. Verify that the new route is added to the host using the NFS vmknic.
    esxcli network ip route ipv4 list
  7. Ensure that the hosts in the NFS cluster can reach the NFS gateway through the NFS vmkernel.
    For example:
    vmkping -4 -I vmk2 -s 1470 -d -W 5 10.0.22.250
  8. Repeat steps 2 through 7 for each cluster using NFS storage.
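
The following is a worked example of steps 5 through 7 with hypothetical values: the host's NFS vmkernel interface is vmk2, the NFS gateway for the network pool is 10.0.22.250 (as in the vmkping example above), and the NFS server resides on the remote 10.0.30.0/24 network. Substitute the interface, gateway, and network values for your environment.

    # Route traffic for the NFS server network through the NFS gateway
    esxcli network ip route ipv4 add -g 10.0.22.250 -n 10.0.30.0/24

    # Confirm the route is present, then verify that the NFS gateway is reachable over the NFS vmkernel interface
    esxcli network ip route ipv4 list
    vmkping -4 -I vmk2 -s 1470 -d -W 5 10.0.22.250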