
VMware Cloud Foundation 2.2.1 | 05 DECEMBER 2017 | Build 7236974

Cloud Foundation 2.2.1 is a replacement release, resulting in abbreviated release notes. The content of the Cloud Foundation 2.2 release notes and product documentation applies to version 2.2.1 as well.

What's in the Release Notes

The release notes cover the following topics:

  • What's New
  • Installation and Upgrades Information
  • LCM Upgrade Bundles
  • Known Issues

What's New

The VMware Cloud Foundation 2.2.1 release includes the following:

  • Updates NSX for vSphere to the repackaged version of 6.3.4.

    Important information about NSX 6.3.4: NSX for vSphere 6.3.4 has been repackaged to address the problems mentioned in VMware Knowledge Base articles 2151719 and 000051144. The originally released build 6845891 is replaced with build 7087695. Please refer to the Knowledge Base articles for more details. See the VMware NSX for vSphere 6.3.4 Release Notes for upgrade information.

  • Updates Platform Services Controller to version 6.5 Update 1a.
  • Updates vCenter Server to version 6.5 Update 1a.
  • Updates ESXi to version 6.5 EP4.
  • Resolves Photon user lockout issues for SDDC controller and utility VMs.
  • Increases the size of the root partition in the SDDC utility VM.
  • Includes multiple bug fixes.

For detailed release and build information, see LCM Upgrade Bundles below.

Cloud Foundation deploys the VMware SDDC Software Stack. For information about what is new in those products, as well as their known issues and resolved issues, see the release notes for those software versions. You can locate their release notes from their documentation landing pages at docs.vmware.com.

Installation and Upgrades Information

You can upgrade to Cloud Foundation 2.2.1 only from 2.2.0.2 deployments.

NOTE: After upgrading a dedup-enabled cluster to Cloud Foundation 2.2.1, host reboot takes a long time.

When a host in a dedup-enabled cluster is rebooted after the cluster is upgraded to Cloud Foundation 2.2.1, vSAN needs to recreate compression related stats. This requires the system to read all hash metadata blocks, causing the reboot to take a long time.

There is no workaround to reduce this delay.

Required Step

Before upgrading, you must run the vrm-patch.py script. For details, see the VMware Knowledge Base article VMware Cloud Foundation 2.2.0.x patch.
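For illustration only, a typical invocation from the SDDC Manager Controller VM might look like the following. The script path shown here is an assumption; obtain the script and its actual location from the Knowledge Base article.

# Hypothetical path; the download location is given in the KB article.
python /tmp/vrm-patch.py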

Supported Upgrade Paths

The following upgrade path is supported:

  • 2.2.0.2 to 2.2.1 through Lifecycle Management in SDDC Manager

LCM Upgrade Bundles

The Cloud Foundation 2.2.1 software BOM contains the VMware software components described in the table below. This patch bundle is hosted on the VMware Depot site and available via the Lifecycle Management feature in SDDC Manager. See Lifecycle Management in the Administering VMware Cloud Foundation guide.

Software Component                                 Version             Date         Build Number
VMware Platform Services Controller                6.5 Update 1a       28 NOV 2017  6671409
VMware NSX for vSphere                             6.3.4 (repackaged)  10 NOV 2017  7087695
VMware vCenter Server on vCenter Server Appliance  6.5 Update 1a       21 SEP 2017  6671409
VMware vSphere (ESXi)                              6.5 EP4             05 OCT 2017  6765664

Before scheduling an upgrade, run the pre-check utility with the following command from the SDDC Manager Controller VM:

/opt/vmware/sddc-support/sos --pre-upgrade-check

For more information on applying the patch bundle, see Lifecycle Management in the Administering VMware Cloud Foundation guide.

Known Issues

  • LCM incorrectly shows both VCF and VMware bundles as available for upgrade.

    After upgrading to 2.2.1, only the VCF bundle should appear as available for upgrade on the Management Domain; the VMware bundle (vCenter, ESXi, NSX, etc.) should be displayed as available only after the VCF bundle has been applied. Instead, LCM shows both bundles at once. The risk is that a user might apply the VMware bundle on an out-of-date version of VCF.

    Workaround: Always apply the VCF bundle first when multiple bundles are shown as available on the Management Domain.

  • TOR imaging fails at Setup port configurations task.

    Sometimes the TOR switch imaging process fails during the Setup port configurations task with an "auth" issue, resulting in incorrect authentication for the admin user. As a result, VIA cannot access the switch using the default password.

    Workaround: Use the following steps to resolve this issue:

    1. Reset the switch password. Please refer to the vendor documentation.
    2. Clean up the switch.
    3. Re-image the switch separately in VIA.
  • vCenter upgrade from 6.0.0-5326079 to 6.5.0-6671409 on the IaaS domain succeeds but the vSAN HCL is not updated.

    For a detailed description and workaround for this issue, see the Knowledge Base article vSAN Health Service - Hardware compatibility - vSAN HCL DB Auto Update.

  • Initial configuration of Cloud Foundation fails at rack discovery.

    Rack discovery fails with the error message "Rack discovery get progress failed.Contact support for more details.Refresh your browser once it is resolved." The cause is a time difference between the TOR switches and the SDDC Manager.

    Workaround: Synchronize the clocks on the TOR switches to match the time on the SDDC Manager Controller VM.

    1. Using SSH, log in to the SDDC Manager Controller VM as root.
    2. Get the time:

      root@sddc-manager-controller [ ~ ]# date
      Wed Nov 22 06:42:43 UTC 2017

    3. Log in to both TOR switches as admin and verify that there is a difference between the time setting on the TOR switch and that on the SDDC Manager Controller VM:

      TOR-20#show clock
      Tue Nov 21 18:43:07 2017
      Timezone: UTC
      Clock source: local

    4. Verify that the SSL profile is not valid on both switches.

      TOR-20#sh management security ssl profile eapi
      Profile       State         Error
      ------------- ------------- ----------------------------------------
      eapi          invalid       Certificate 'viaSSLCertificate.cert' is not yet valid

    5. On both switches, synchronize the time settings with that of the SDDC Manager VM.

      conf t
      switch#clock timezone [SDDC Manager time zone]
      switch#clock set [SDDC Manager time in hh:mm:ss MMM DD YYYY format]
      write [to save the running configuration]
      reload [to reload the switch]

    6. Refresh the browser and click RETRY to retry rack discovery.
  • Unable to add Datacenter networks in the user interface.

    Sometimes, when adding a new datacenter network, SDDC Manager will hang. This is caused by an issue with a third-party library. This issue is prioritized for resolution in the next release.

    Workaround: Restart the HMS service. To avoid this issue before it occurs, you can increase the memory limit for the HMS service as described below.

    1. Using SSH, log in to the SDDC Manager VM and change to the /opt/vrack/hms directory.
    2. Check the current memory allocation for HMS.
      grep -i "xmx" hms.sh
      The output should look like:
      /bin/su $HMS_USER -c ""$JAVA" -Xmx2048m -XX:MaxPermSize=512m -classpath "$CLASSPATH" -Dhms.config.file=$HMS_CONFIG $JETTY_LOGGING_OPT $HMS_LOGGING_CONFIG com.vmware.vrack.hms.HmsApp >> $HMSA_CONSOLE_LOG_FILE 2>&1 &"

      In this example, the HMS memory allocation is 2048MB.

    3. Increase the memory as needed, for example, to 4096MB.
      sed -i -e 's/Xmx2048/Xmx4096/g' hms.sh
    4. Optionally, confirm the new memory configuration.
      grep -i "xmx" hms.sh
      /bin/su $HMS_USER -c ""$JAVA" -Xmx4096m -XX:MaxPermSize=512m -classpath "$CLASSPATH" -Dhms.config.file=$HMS_CONFIG $JETTY_LOGGING_OPT $HMS_LOGGING_CONFIG com.vmware.vrack.hms.HmsApp >> $HMSA_CONSOLE_LOG_FILE 2>&1 &"
    5. Restart the HMS service.
      systemctl restart hms

  • VDI creation fails at the Register VMware Horizon View Serial Number task.

    Sometimes VDI creation fails at the Register VMware Horizon View Serial Number task, due to a Windows firewall issue on the machine where the connection server installation takes place.

    Workaround: When creating a VDI, manually set firewall rules to avoid this issue. For more information about network and port configuration in Horizon 7, see Network Ports in VMware Horizon 7.
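    As a minimal sketch (assuming the netsh CLI on the Windows connection server), an inbound firewall rule can be added as shown below. The rule name is illustrative, and TCP 443 is only one of the ports Horizon 7 uses; consult the port list referenced above for the complete set.

      rem Illustrative rule only; see the Horizon 7 port documentation for the full port list.
      netsh advfirewall firewall add rule name="Horizon Connection Server HTTPS" dir=in action=allow protocol=TCP localport=443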

  • A previously used free pool host might not be considered for workload deployment capacity.

    In some cases, a free pool host that was previously used by a VI workload domain may not be considered for deployment capacity in subsequent VI workload domains, and may be flagged with a HOST_CANNOT_BE_USED_ALERT. This occurs because, after the original VI workload domain is deleted, the HMS service retains the wrong password for the host, resulting in the alert status.

    Workaround: Use the following procedure to recover from this issue.

    1. From the rack inventory, identify the node IP address and obtain its IPMI password using lookup-password.
    2. Shut down the host from IPMI power control for twenty minutes.
    3. Log in to the SDDC Manager Controller VM as root.
    4. Access the postgres database and delete the problem_record entries.
      su - postgres
      psql vrm
      delete from problem_record;
    5. Restart the vcfmanager service.
      systemctl restart vcfmanager
  • SoS log collection and backup fail on Arista spine switches.

    Sometimes SoS log collection and backup fail on Arista spine switches. This has not been observed on spine switches from other vendors.

    Workaround: Perform a manual backup of the spine switch prior to performing any FRU operations, such as replacing the spine switch.
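    As a sketch (Arista EOS CLI; the hostname and filename are illustrative), the running configuration can be saved to the switch's flash storage and then copied to an external host, for example with scp, so that it survives the FRU operation:

      spine-1#copy running-config flash:spine-1-backup.cfg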

  • Unmanaged host upgrade is not disabled for single rack deployments.

    The unmanaged host upgrade feature in 2.2.1 saves the host configuration and boot files with the wrong version. As a result, reverting a host leaves it in an inconsistent state. This issue applies only to single rack deployments.

    Workaround: Reimage the affected host.

  • Server selection incomplete when All Flash is selected.

    Some newer server models might not be present in the dropdown when using the VMware Imaging Appliance to prepare a rack for use with VMware Cloud Foundation 2.2.

    Workaround: For a detailed workaround, see Knowledge Base article How to update the VMware Imaging Appliance used in VMware Cloud Foundation 2.2.x and later with new server models (50354).

  • The SoS tool returns a failed status for health check.

    When you run a general Supportability and Serviceability (SoS) check without options, the health check may return as FAILED. The cause is that the PRM_HOST table lists a decommissioned host whose entry should have been cleared when the decommission operation completed.

    Workaround: For a detailed workaround, see the Knowledge Base article VMware Cloud Foundation sos tool returns a failed status for health check (51992).

  • Unable to disassociate a network from a VI/VDI workload domain.

    SDDC Manager does not provide a way to disassociate a network from a workload domain.

    Workaround: Use the --delete-dc-nw option in the SoS tool. See the Cloud Foundation documentation.
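    As a minimal sketch, the option belongs to the same SoS utility used for the pre-upgrade check; any additional arguments that identify the datacenter network are described in the Cloud Foundation documentation and are omitted here:

      /opt/vmware/sddc-support/sos --delete-dc-nw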

  • Some VDI infrastructure settings can be modified even though they should be restricted.

    The following VDI infrastructure settings can be modified even though they should be restricted to prevent the user from modifying them:

    • Max Desktops per Connection Server
    • Max Desktops per Security Server
    • Max Desktops per vCenter Server
    • Desktop System Drive Size
    • Desktop System Snapshot Size

    Workaround: Do not manually modify these settings.