
VMware Cloud Provider Pod 1.5.0 Patch 2  | 12 NOV 2019

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

What's New November 12th (Cloud Provider Pod Version 1.5.0 Patch 2)

  • The download URLs for the vRealize Operations Management packs are updated.
  • The deployed version of vCloud Usage Meter is updated to Hot Patch 3.

 

What's New October 17th (Cloud Provider Pod Version 1.5.0 Patch 1)

  • The version of CentOS used to deploy vCloud Director components (such as cells, database, and so on) is updated to the latest available version of CentOS 7.7 (1908).
  • In addition, the Cloud Provider Pod deployer appliance includes several other bug fixes.

 

What's New July 18th (Cloud Provider Pod Version 1.5.0)

The current Cloud Provider Pod deploys vCloud Director 9.7. 

Cloud Provider Pod consists of the following components:

  • Cloud Provider Pod Designer (Cloud)
  • Cloud Provider Pod Deployer (On-Premises)

Version changes:

  • VMware ESXi and vCenter Server are updated to 6.7 Update 2a
  • VMware NSX for vSphere is updated to 6.4.5
  • VMware vCloud Director is updated to 9.7
  • VMware vCloud Director Extender is removed
  • VMware vRealize Log Insight is updated to 4.8.0
  • VMware vRealize Network Insight is updated to 4.1.1
  • VMware vRealize Operations Manager is updated to 7.5.0
  • VMware vRealize Operations Manager Multi-Tenant App is updated to 2.2
  • VMware vRealize Operations Manager – Cloud Pod Management Pack is updated to 2.0
  • VMware vRealize Orchestrator is updated to 7.6
  • VMware vCloud Usage Meter is updated to 3.6.1 Hot Patch 2
  • CentOS is updated to 7.6 (1810)

Newly added:

  • VMware Cloud Provider Pod Deployer (Virtual Appliance) replaces the Cloud Provider Pod Initiator that was used in previous versions

Features added:

  • The Cloud Provider Pod Deployer Virtual Appliance introduces a REST API to interact with the VMware Cloud Provider Pod deployment (see the example after this list).
  • Binaries are downloaded from my.vmware.com (and other public sources) or uploaded through FTP to the Cloud Provider Pod Deployer.
  • The deployment infrastructure (vCenter Server instances and vRealize Orchestrator) is set up as part of the Cloud Provider Pod Deployer startup, instead of being preinstalled as it was on the Cloud Provider Pod Initiator.
  • All additional packages (RPMs) required for the setup and configuration of virtual machines are downloaded up front, so no downloads are required during deployment.
  • The vRealize Orchestrator workflow package is updated to allow a simpler and more robust deployment.
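
For illustration only, a minimal Python client for such a REST API could look like the following sketch. The host name, endpoint path, credentials, and response format shown here are assumptions, not the documented console API contract; take the actual API reference from the generated Deployment Guide.

    # Illustrative sketch only: host name, endpoint path, and credentials are
    # placeholders, not the documented Cloud Provider Pod console API contract.
    import requests

    DEPLOYER = "https://deployer.pod1.demo.vmware.com"  # hypothetical host name

    def get_deployment_status(session: requests.Session) -> dict:
        # Hypothetical status endpoint; take the real paths and authentication
        # scheme from the generated Deployment Guide.
        response = session.get(f"{DEPLOYER}/api/status", verify=False, timeout=30)
        response.raise_for_status()
        return response.json()

    if __name__ == "__main__":
        with requests.Session() as session:
            session.auth = ("root", "deployer-password")  # placeholder credentials
            print(get_deployment_status(session))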

You can find further details about the architecture and components in the document package that is uniquely created based on the input that you provide in the Cloud Provider Pod Designer.

 

System Requirements and Installation

Cloud Provider Pod consists of the Cloud Provider Pod Deployer virtual machine, which manages the deployment, a management cluster, and one or two resource clusters.

System Requirements and Installation – Deployer

The Cloud Provider Pod Deployer is installed locally as a virtual machine, so you must ensure that your system meets certain requirements.

The VMware Cloud Provider Pod - Deployer is provided as a virtual appliance, which can be hosted on a physical VMware ESXi™ host (or VMware vCenter Server). The virtual appliance has the following requirements:

  • 64-bit CPU with Intel-VT or AMD-V feature set
  • 4 vCPUs available for a virtual machine
  • 4 GB of RAM available for a virtual machine
  • Support to run a virtual machine hardware version 13 or above
  • VMware ESXi 5.5 or later
  • 100 GB of disk space if deployed in thick mode
    Thin mode requires about 40–50 GB.

System Requirements and Installation – Management Pod

The minimum configuration for the Management Cluster requires 4 ESXi hosts with one of the supported storage options: either vSAN all-flash, NFS, iSCSI, or Fibre Channel. Cloud Provider Pod does not support a hybrid vSAN configuration.

All hosts must be configured with the same storage option.

The hosts must be on the VMware Hardware Compatibility List for ESXi 6.7 Update 2.

Each host must meet the following minimum requirements:

  • 2 Sockets with 8 physical cores/16 logical cores and 2.0 GHz or more
  • 192 GB RAM (128 GB RAM if deployed without any optional products)
  • 2 x 10+Gbit NICs or 4 x 10+Gbit NICs
    • The primary and secondary (as backup) NICs must be set up for PXE boot. This must be set up with an access VLAN. Additional networks must be provided as trunk VLANs.
    • Record the MAC address of the PXE boot device of each host before creating a configuration file.
  • 20 TB storage capacity

If you plan to use vSAN, the primary disk 0/0/0 on each host must be a boot device (SATADOM, USB, or LOCALDISK). For a RAID configuration of the boot device, make sure that the device (virtual device name) is specified correctly in the Cloud Provider Pod Designer as the preferred disk.

System Requirements and Installation – Resource Pod

The minimum configuration for the Resource Clusters (for tenant compute workloads) requires 4 ESXi hosts with one of the supported storage options: either vSAN all-flash, NFS, iSCSI, or Fibre Channel. Cloud Provider Pod does not support a hybrid vSAN configuration.

If the resource cluster is configured with NFS, iSCSI, or Fibre Channel for storage, the minimum number of required hosts is reduced to three.

The automated setup allows for 1 or 2 resource clusters each with up to 64 hosts. All resource cluster hosts must be configured with the same storage option. The hosts must be on the VMware Hardware Compatibility List for ESXi 6.7 Update 2.

Each host must meet the following technical minimum requirements:

  • 2 sockets with 8 physical cores/16 logical cores and 2.0 GHz or above
  • 64 GB RAM
  • 2 x 10+Gbit NICs or 4 x 10+Gbit NICs
    • The primary and secondary (as backup) NICs must be set up for PXE boot. This must be set up with an access VLAN. Additional networks must be provided as trunk VLANs.
    • Record the MAC address of the PXE boot device of each host before creating a configuration file.
  • At least 4 TB of storage capacity

If you plan to use vSAN, the primary disk 0/0/0 on each host must be a boot device (SATADOM, USB, or LOCALDISK). For a RAID configuration of the boot device, make sure that the device (virtual device name) is specified correctly in the Cloud Provider Pod Designer as the preferred disk.

System Requirements and Installation – License and Generic Settings

You must have license keys for the following products before the configuration file can be created:

  • ESXi 6.7 Update 2
  • vCenter Server 6.7 Update 2a
  • vSAN 6.7 Enterprise, if selected
  • NSX 6.4.5
  • vCloud Director 9.7.x
  • vRealize Log Insight 4.8
  • vRealize Network Insight 4.1.1, if selected
  • vRealize Operations Manager 7.5, if selected

Each VMware Cloud Provider Pod must run inside its own subdomain, for example pod1.demo.vmware.com. Host names are then created within that subdomain. For the initial installation, the deployment hosts an internal DNS server, which you can later replace with a custom one.

You can find further information about planning and preparing for Cloud Provider Pod usage in the documents that are created based on the input that you provide in the Cloud Provider Pod Designer. These documents are created by using the Cloud Provider Pod Document Generator and are delivered by email.

Product Versions Deployed with VMware Cloud Provider Pod

Product | Version | Build Number
VMware ESXi | 6.7 Update 2 | 13006603
VMware vCenter Server Appliance | 6.7 Update 2a | 13643870
VMware vCloud Director | 9.7.0 | 13635483
VMware vCloud Usage Meter | 3.6.1 Hot Patch 3 | 14877528
VMware NSX for vSphere (NSX Manager) | 6.4.5 | 13282012
VMware vRealize Log Insight | 4.8.0 | 13036238
Content Pack for vCloud Director | Latest version available within vRealize Log Insight | N/A
Content Pack for NSX | Latest version available within vRealize Log Insight | N/A
VMware vRealize Network Insight | 4.1.1 | 1559730670
VMware vRealize Operations Manager | 7.5.0 | 13165949
VMware vRealize Operations Manager Tenant App for vCloud Director | 2.2.0 | 13473471
VMware vRealize Orchestrator | 7.6.0 | 13020602

Compatibility

The hardware used for the Cloud Provider Pod deployment must be included in the VMware Hardware Compatibility List for ESXi 6.7 Update 2. Custom ISO images are supported but must be of the same version; follow the Deployment Guide for detailed instructions.

The current version of Cloud Provider Pod supports Fibre Channel, vSAN, iSCSI, and NFS as persistent storage technologies.

For vSAN, all-flash devices are required. Hybrid mode is NOT supported. The usage of vSAN requires devices and components compatible with the Hardware Compatibility List for vSAN version 6.7.

Cloud Provider Pod 1.5 requires generating a new configData.cfg file (uploading your existing file into the Cloud Provider Pod Designer and regenerating the files is sufficient) and using the newest package downloaded with the design documents.

Resolved Issues

The resolved issues are grouped as follows.

Issues Resolved in Cloud Provider Pod 1.5.0 Patch 2
  • Download of the vRealize Operations Management packs fails

    The download and verification of the vRealize Operations Management packs fail in Cloud Provider Pod 1.5.0 Patch 1. Because of this issue, the installation and configuration of vRealize Operations during the deployment process fail.

    This issue is fixed in Cloud Provider Pod 1.5.0 Patch 2. The URLs for downloading the vRealize Operations Management packs are updated, and downloads that use these URLs now succeed.

    If the download is not successful, you can provide the correct versions of the files manually through FTP.

Issues Resolved in Cloud Provider Pod 1.5.0 Patch 1
  • Download of CentOS 7.6 (1810) DVD image fails

    This issue occurs because the URL specified for the CentOS 7.6 (1810) DVD image in the Cloud Provider Pod deployer is no longer valid. With the release of CentOS 7.7, the previous version has been deprecated. The DVD ISO of CentOS 7.6 (1810) has been archived and is now only available for download by using restricted mirrors.

    In Cloud Provider Pod 1.5.0 Patch 1, the DVD ISO of CentOS is updated to version 7.7 (1908).

  • The validation of a downloaded PostgreSQL RPM file fails

    The RPM file for the CentOS PostgreSQL repository has been updated. This results in a different md5 checksum and the validation of a previously downloaded PostgreSQL RPM file fails.

    The validation has been updated with the correct checksum, so that download and verification of the file succeed.

  • The Cloud Provider Pod console might incorrectly report the status of a cluster

    If you specify a more complex password for the deployment, the Cloud Provider Pod console API does not report the correct state of the hosts in a cluster.

    This issue is fixed in Cloud Provider Pod 1.5.0 Patch 1, so that the console API reports the correct state of each host, even if you use more complex passwords.

  • The Cloud Provider Pod Deployer bring-up process cannot be started a second time

    The Cloud Provider Pod deployer in version 1.5.0 reports the status of the bring-up process as "in progress", even though the process has finished.

    This issue is fixed in Cloud Provider Pod 1.5.0 Patch 1.

Issues Resolved in Cloud Provider Pod 1.5.0
  • Usage Meter Hot Patch is not installed

    Usage Meter Hot Patch is required for collecting usage metrics from vCloud Director. Cloud Provider Pod 1.1.x only downloads the Hot Patch, but does not install it.

    Cloud Provider Pod 1.5 automatically installs Usage Meter Hot Patch.

  • NTP configuration is not applied to all deployed machines

    NTP is not configured consistently on all machines (mostly CentOS-based machines).

    This issue is resolved in this release.

  • The DNS changer workflow does not apply the DNS settings correctly

    The DNS changer workflow does not apply the DNS settings correctly on CentOS-based virtual machines.

    This issue is resolved in this release.

  • Data collection in Usage Meter does not work properly

    Because of missing firewall rules, the Usage Meter data collection does not work properly.

    This issue is resolved in this release. The data collection from vCenter Server works properly. For data collection from vRealize Operations Manager, after the deployment, you must install an extension by manually updating the vCenter Server adapter instances in vRealize Operations.

  • Installation and configuration of CentOS-based machines stop responding while waiting for a process

    In Cloud Provider Pod 1.1, the setup of CentOS-based machines (such as vCloud Director cells) fails when running the installation procedure. The workflow waits endlessly or fails while waiting for processes.

    This issue is resolved in this release. Cloud Provider Pod 1.5 uses a different way to run the installation scripts in guest virtual machines. As a result, the issue of waiting for single processes no longer occurs.

  • vSphere distributed port group names might not match VLANs

    Some of the vSphere distributed port groups created by Cloud Provider Pod receive an incorrect name that does not match the VLAN ID with which they are configured.

    This issue is resolved in this release.

  • Configuration and attachment of Update Manager baselines might fail for resource clusters

    In Cloud Provider Pod 1.1, the configuration of the Update Manager baselines for the resource clusters might fail.

    This issue is resolved in this release.

  • Cloud Provider Pod deployment fails when you use complex passwords

    The Cloud Provider Pod Designer password complexity validation is updated to allow only the exclamation mark (!), at sign (@), and dollar sign ($) as special characters. The Cloud Provider Pod deployment is updated to correctly configure all components by using the specified password.

    This issue is resolved in this release.

  • The vRealize Network Insight password might be incorrectly configured

    The password for login to vRealize Network Insight is not configured to match the password specified in the configuration worksheet.

    This issue is resolved in this release.

  • Unable to deploy a vSAN cluster with more than seven capacity disks

    In Cloud Provider Pod 1.1, the vSAN configuration requires at least two differently sized SSDs per host and a maximum of seven large disks to use as capacity disks.

    This issue is resolved in this release. Cloud Provider Pod 1.5 is updated to allow a more flexible configuration of the vSAN disk groups. Cloud Provider Pod now supports setting up a vSAN cluster with hosts that have identically sized disks or more than seven capacity disks.

  • Download of binaries is interrupted and does not resume after the interruption

    The process for downloading binaries is reworked to download more reliably and to resume from previous download attempts when run again, so that already existing and verified files are not downloaded again.

    This issue is resolved in this release.

  • Edge Firewall Rules

    Cloud Provider Pod 1.0.x does not fully reduce access to critical systems. In Cloud Provider Pod 1.5, a more fine-grained firewall configuration is applied.

    This issue is resolved in this release. Cloud Provider Pod 1.5 applies new, additional firewall rules.

  • Initial configuration of vRealize Log Insight and vRealize Network Insight

    In Cloud Provider Pod 1.0.x, vRealize Log Insight and vRealize Network Insight are not connected to vCenter Server instances, NSX, and so on.

    This issue is resolved in this release.

Known Issues

  • The Cloud Provider Pod deployer root password expires

    The root account of the Cloud Provider Pod deployer appliance is configured with a default password and a policy to allow a maximum of 90 days between password changes.

    If the password expires, calls to the Cloud Provider Pod console API and vRealize Orchestrator workflows fail to authenticate.

    Workaround: Change the password of the appliance root account and update the configData.cfg configuration file with the changed password. You must import the updated configuration file to vRealize Orchestrator as well.

    For information about how to change the root password, follow the instructions in the Deployment Guide that is generated by the Cloud Provider Pod designer.
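
    As a rough illustration of the configuration update step, the following Python sketch replaces a password value in a JSON-formatted configData.cfg file. The property name rootPassword is hypothetical; verify the actual property name in the generated Deployment Guide, and re-import the updated file into vRealize Orchestrator afterward.

        # Illustrative sketch: assumes configData.cfg is JSON and that the
        # property is named "rootPassword" -- both are assumptions; verify the
        # real property name in the generated Deployment Guide.
        import json

        CONFIG_PATH = "configData.cfg"

        with open(CONFIG_PATH) as handle:
            config = json.load(handle)

        config["rootPassword"] = "new-root-password"  # placeholder value

        with open(CONFIG_PATH, "w") as handle:
            json.dump(config, handle, indent=4)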

  • Download of CentOS 7.6 (1810) DVD image fails

    This issue occurs because the URL specified for the CentOS 7.6 (1810) DVD image in the Cloud Provider Pod deployer is no longer valid. With the release of CentOS 7.7, the previous version has been deprecated. The DVD ISO of CentOS 7.6 (1810) has been archived and is now only available for download by using restricted mirrors.

    Workaround:

    • Update Cloud Provider Pod deployer to version 1.5.0 Patch 1.
    • For Cloud Provider Pod 1.5.0, manually download the correct ISO file (CentOS-7-x86_64-DVD-1810.iso) and upload it to the Cloud Provider Pod deployer virtual machine, as sketched below. For more information, see the Deployment Guide.

      For more information, see KB Article 74918.
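
    The upload can be done with any FTP client. A minimal Python sketch, assuming the deployer exposes an FTP account as described in the Deployment Guide (the host name and credentials below are placeholders):

        # Minimal FTP upload sketch; the host name and credentials are
        # placeholders -- take the real values from the generated Deployment Guide.
        from ftplib import FTP

        with FTP("deployer.pod1.demo.vmware.com") as ftp:  # placeholder host
            ftp.login(user="ftpuser", passwd="ftp-password")  # placeholder credentials
            with open("CentOS-7-x86_64-DVD-1810.iso", "rb") as iso:
                ftp.storbinary("STOR CentOS-7-x86_64-DVD-1810.iso", iso)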

  • The validation of a downloaded PostgreSQL RPM file fails

    The RPM file for the CentOS PostgreSQL repository has been updated. This results in a different md5 checksum and the validation of a previously downloaded PostgreSQL RPM file fails.

    Workaround:

    • Update Cloud Provider Pod deployer to version 1.5.0 Patch 1.
    • For Cloud Provider Pod 1.5.0, update the configuration file for the product downloads on the Cloud Provider Pod deployer.
      1. Log in to the Cloud Provider Pod deployer virtual machine, for example through SSH.
      2. Navigate to and modify the file /opt/vcppod/config/products.json.
      3. In the artifact definition with artifactId 'postgresql-repo', set the md5 value to 4b123ec38c50e952340353ee590efd04.

        The resulting definition of this artifact should be:

        {
            "artifactId": "postgresql-repo",
            "label": "PostgreSQL Repo",
            "url": "https://yum.postgresql.org/10/redhat/rhel-7-x86_64/pgdg-centos10-10-2.noarch.rpm",
            "file": "pgdg-centos10-10-2.noarch.rpm",
            "md5": "4b123ec38c50e952340353ee590efd04",
            "version": "10.10.2"
        }
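
        To confirm that the downloaded RPM matches the corrected checksum before retrying the deployment, a quick Python verification along these lines can help (the file name and md5 value are taken from the definition above):

        # Verify the md5 checksum of the downloaded RPM against the value
        # from products.json before retrying the deployment.
        import hashlib

        EXPECTED_MD5 = "4b123ec38c50e952340353ee590efd04"

        md5 = hashlib.md5()
        with open("pgdg-centos10-10-2.noarch.rpm", "rb") as rpm:
            for chunk in iter(lambda: rpm.read(1 << 20), b""):
                md5.update(chunk)

        print("OK" if md5.hexdigest() == EXPECTED_MD5 else "checksum mismatch")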

  • Cloud Provider Pod deployment fails when setting up a vSAN cluster if Fibre Channel LUNs are available

    If Cloud Provider Pod deployment is configured to set up a vSAN storage cluster as a management cluster and there are existing Fibre Channel LUNs connected to the first host of the management cluster, the setup might fail. The deployment assumes a Fibre Channel datastore to be a temporary local datastore. Machines deployed on the temporary datastore might not be migrated to the vSAN datastore correctly and the temporary datastore cannot be deleted as expected during the deployment.

    Workaround: Make sure that any Fibre Channel (or iSCSI, if connected directly to the management VLAN) LUN is disconnected from the first management host during PXE booting and startup. If it is connected during PXE boot, make sure that none of the LUNs are used as datastore 'localStore'. Instead, a local disk must be used as localStore. You can update the first host to fulfill this requirement after PXE booting (before startup execution).

  • ESXi hosts do not install on the correct local disk

    The target disk on which to install the ESXi operating system can be specified as a USB stick/device or as the first disk (that is, a local disk) in the Cloud Provider Pod Designer. In addition, a specific local disk can be defined by adding the JSON properties esxiMg01BootMediaName, esxiRp00Rc00BootMediaName, or esxiRp00Rc01BootMediaName with the corresponding disk identifiers to the configData.cfg file. Make any changes to the configData.cfg file carefully before running the setup process. In some cases, such as when installation on a RAID configuration is required, this might still not work correctly and custom changes to the generated kickstart files might be required.

    Workaround: To specify the correct target disk for installation, modify the generated kickstart files after running the setup process and before PXE booting the hosts. The kickstart files are generated on the Cloud Provider Pod Deployer in the /var/ftp/pub/esxi/ directory. A hedged example of such an edit follows.
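
    As an illustration only, the following Python sketch rewrites the install target line in a generated kickstart file. The file name and disk identifier are placeholders, and the exact kickstart syntax depends on your hardware; inspect the generated files first and confirm the syntax against the Deployment Guide.

        # Illustrative only: rewrites the ESXi kickstart install line to target
        # a specific disk. The file name and disk identifier are placeholders.
        from pathlib import Path

        ks_file = Path("/var/ftp/pub/esxi/ks-host01.cfg")  # placeholder file name
        target = "install --disk=mpx.vmhba0:C0:T0:L0 --overwritevmfs"  # placeholder

        lines = ks_file.read_text().splitlines()
        lines = [target if line.startswith("install ") else line for line in lines]
        ks_file.write_text("\n".join(lines) + "\n")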

  • The Cloud Provider Pod Console does not validate input of MAC addresses and license keys

    The Cloud Provider Pod Deployer API (vcppod-console) does not validate the input for updating the MAC addresses and license keys. The format and the values of the MAC addresses and license keys are not validated, and the deployment might fail after updating the values.

    Workaround: In Cloud Provider Pod 1.5, there is no validation of the specified input for the MAC addresses and license keys. Make sure that you enter correct values, both in the structure of the specified JSON body and in the values themselves. You can verify the content after updating the values by using the API to extract the configuration data (before running the setup process). A minimal client-side check is sketched below.
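
    A minimal Python sanity check along these lines can catch obvious typos before you submit the values (the five-groups-of-five license key pattern is an assumption about the expected key format):

        # Client-side sanity checks before submitting values to the console API.
        # The 5x5 license key pattern is an assumption about the expected format.
        import re

        MAC_RE = re.compile(r"^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$")
        KEY_RE = re.compile(r"^([0-9A-Z]{5}-){4}[0-9A-Z]{5}$")

        def check_mac(mac: str) -> bool:
            return bool(MAC_RE.match(mac))

        def check_license_key(key: str) -> bool:
            return bool(KEY_RE.match(key.upper()))

        assert check_mac("00:50:56:aa:bb:cc")
        assert check_license_key("AAAA0-BBBB1-CCCC2-DDDD3-EEEE4")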

  • NFS 4.1 requires non-SDN based routing

    If you use NFS 4.1, non-SDN based routing is required. In Cloud Provider Pod 1.1, the automatic deployment of non-SDN based routing is unavailable.

    Workaround: Integrate datastores by using NFS 3 and select SDN-based routing in the Cloud Provider Pod Designer.

  • vRealize Operations Manager requires manual interaction after a successful deployment

    After a successful Cloud Provider Pod deployment, you must manually complete the wizard during the first login to vRealize Operations Manager. Although you must complete the wizard, all relevant configurations are already in place. This has no functional impact.

    Workaround: None

  • vRealize Operations Manager Tenant App deployment

    After a successful Cloud Provider Pod deployment, you must manually deploy and set up the vRealize Operations Manager Tenant App. Automatic deployment is not yet fully functional. This has no functional impact.

    Workaround: None.

  • Certificates are not replaced

    Internal system certificates, such as the certificates for vCenter Server, ESXi, NSX, vRealize Operations Manager, vRealize Network Insight, and vRealize Log Insight, are not replaced in this release. Only the replacement of the customer-facing vCloud Director certificate is supported.

    Workaround: You can manually replace the certificates.

  • VMware Validated Design-based design

    VMware Validated Design-based design options are not fully automated. Currently, only the custom Advanced Design is fully automated.

    Workaround: Use the Advanced Designer.

  • Second availability zone

    Generated design documents and automated deployment do not work with a second availability zone configured.

    Workaround: None.

  • 4 NIC configuration

    Some diagrams in the generated documentation might not represent the 4 NIC configuration but show 2 NICs instead. The Cloud Provider Pod Deployer will still deploy a correct 4 NIC configuration.

    Workaround: None.

  • No LACP/LAG support

    VMware Cloud Provider Pod 1.1 does not support LACP/LAG because of conflicts with PXE boot and other configurations.

    Workaround: Do not set up channel aggregation on the physical switches before deployment.

  • The Update Manager baselines per ESXi host individual settings must be applied manually

    The per-host individual settings of the Update Manager baselines must be applied manually, because no API calls are available for this task.

  • If you deploy a Cassandra database for vCloud Director, no SSL connection is used

    If you deploy a Cassandra database for vCloud Director, vCloud Director does not use an SSL connection.

    Workaround: You must manually configure SSL.