
VMware Cloud Provider Pod 1.5 | 18 JUL 2019

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

  • What's New in This Release
  • System Requirements and Installation
  • Product Versions Deployed with VMware Cloud Provider Pod
  • Compatibility
  • Resolved Issues
  • Known Issues

What's New in This Release

The current Cloud Provider Pod deploys vCloud Director 9.7. 

Cloud Provider Pod consists of 2 components:

  • Cloud Provider Pod Designer (Cloud)
  • Cloud Provider Pod Deployer (On-Premises)

Version changes:

  • VMware ESXi and vCenter Server are updated to 6.7 Update 2a
  • VMware NSX for vSphere is updated to 6.4.5
  • VMware vCloud Director is updated to 9.7
  • VMware vCloud Director Extender is removed
  • VMware vRealize Log Insight is updated to 4.8.0
  • VMware vRealize Network Insight is updated to 4.1.1
  • VMware vRealize Operations Manager is updated to 7.5.0
  • VMware vRealize Operations Manager Multi-Tenant App is updated to 2.2
  • VMware vRealize Operations Manager – Cloud Pod Management Pack is updated to 2.0
  • VMware vRealize Orchestrator is updated to 7.6
  • VMware vCloud Usage Meter is updated to 3.6.1 Hot Patch 2
  • CentOS is updated to 7.6 (1810)

Newly added:

  • VMware Cloud Provider Pod Deployer (Virtual Appliance) replaces the Cloud Provider Pod Initiator that was used in previous versions

Features added:

  • The Cloud Provider Pod Deployer Virtual Appliance introduces a REST API to interact with the VMware Cloud Provider Pod deployment (see the sketch after this list).
  • Binaries are downloaded from my.vmware.com (and other public sources) or uploaded by using FTP to the Cloud Provider Pod Deployer.
  • The deployment infrastructure (vCenter Server instances and vRealize Orchestrator) is set up as part of the Cloud Provider Pod Deployer startup, instead of being preinstalled as on the former Cloud Provider Pod Initiator.
  • All additional packages (RPMs) required for the setup and configuration of virtual machines are downloaded up front, so no downloads are required during deployment.
  • The vRealize Orchestrator workflow package is updated to allow a simpler and more robust deployment.
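
The first feature in the list above refers to the new REST API. As a minimal, hedged illustration, the following call queries the state of the deployment bring-up process. The /api/v1/deployment/bringup path and the Basic authentication scheme are taken from the Known Issues section below; the GET request, the placeholder credentials, and the response format are assumptions and can differ in your environment.

    # Illustrative only: query the bring-up state on the Deployer appliance (credentials and address are placeholders).
    curl -k -u 'root:<deployer-root-password>' https://$deployerIp/api/v1/deployment/bringup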

You can find further details about the architecture and components in the document package that is uniquely created based on the input that you provide in the Cloud Provider Pod Designer.

 

System Requirements and Installation

System Requirements and Installation - Deployer

The Cloud Provider Pod Deployer is installed locally as a virtual machine, so you must ensure that your system meets certain requirements.

The VMware Cloud Provider Pod - Deployer is provided as a virtual appliance, which can be hosted on a physical VMware ESXi™ host (or VMware vCenter Server). The virtual appliance has the following requirements:

  • 64-bit CPU with Intel-VT or AMD-V feature set
  • 4 vCPUs available for a virtual machine
  • 4 GB of RAM available for a virtual machine
  • Support to run a virtual machine hardware version 13 or above
  • VMware ESXi 5.5 or later
  • 100 GB of disk space if deployed in thick mode
    Thin mode requires about 40–50 GB.
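
Provided that these requirements are met, one common way to place such a virtual appliance on an ESXi host is the VMware OVF Tool. The following command is only a sketch: the OVA file name, datastore name, host address, and credentials are placeholders, and the authoritative deployment procedure is described in the documents generated by the Cloud Provider Pod Designer.

    # Illustrative only: deploy the appliance OVA in thin mode (all names and addresses are placeholders).
    ovftool --acceptAllEulas --diskMode=thin --datastore=<datastore-name> \
            --name=cpod-deployer --powerOn \
            cloud-provider-pod-deployer.ova 'vi://root@<esxi-host>/'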

System Requirements and Installation – Management Pod

The minimum configuration requires a Management Cluster with 4 ESXi hosts and vSAN all-flash storage capacity. Alternatively, you can use NFS, iSCSI, or Fibre Channel as a storage option.

The hosts must be on the VMware Hardware Compatibility List for ESXi 6.7 Update 2.

Each host must meet the following minimum requirements:

  • 2 sockets with 8 physical cores/16 logical cores and 2.0 GHz or more
  • 192 GB RAM (128 GB RAM if deployed without any optional products)
  • 2 x 10+Gbit NICs or 4 x 10+Gbit NICs
    • The primary and secondary (as backup) NICs must be set up for PXE boot. This must be set up with an access VLAN. Additional networks must be provided as trunk VLANs.
    • Record the MAC address of the PXE boot device of each host before creating a configuration file.
  • 20 TB storage capacity

If you plan to use vSAN, the primary disk 0/0/0 on each host must be a boot device (SATADOM, USB, or LOCALDISK). Do not use a RAID configuration; use JBOD, in accordance with the vSAN documentation.

System Requirements and Installation – Resource Pod

The minimum configuration requires at least one Resource Cluster (for tenant compute workloads) with four hosts and ideally vSAN all-flash storage capacity. Alternatively, NFS, iSCSI, or Fibre Channel can be used, which reduces the minimum number of required hosts to three. The actual size can differ based on demand. The automated setup allows for up to two resource clusters, each with up to 64 hosts. The hosts must be on the VMware Hardware Compatibility List for ESXi 6.7 Update 2.

Each host must meet the following technical minimum requirements:

  • 2 sockets with 8 physical cores/16 logical cores and 2.0 GHz or above
  • 64 GB RAM
  • 2 x 10+Gbit NICs or 4 x 10+Gbit NICs
    • The primary and secondary (as backup) NICs must be set up for PXE boot. This must be set up with an access VLAN. Additional networks must be provided as trunk VLANs.
    • Record the MAC address of the PXE-boot device of each host before creating a configuration file.
  • At least 4 TB of storage capacity

If you plan to use vSAN, the primary disk 0/0/0 on each host must be a boot device (SATADOM, USB, or LOCALDISK). Do not use a RAID configuration; use JBOD, in accordance with the vSAN documentation.

System Requirements and Installation – License and Generic Settings

You must have license keys for the following products before the configuration file can be created:

  • ESXi 6.7 Update 2
  • vCenter Server 6.7 Update 2a
  • vSAN 6.7 Enterprise, if selected
  • NSX 6.4.5
  • vCloud Director 9.7.x
  • vRealize Log Insight 4.8
  • vRealize Network Insight 4.1.1, if selected
  • vRealize Operations Manager 7.5, if selected

Each VMware Cloud Provider Pod must run inside its own custom subdomain, for example pod1.demo.vmware.com. Host names are then created within that subdomain. For the initial installation, the deployment hosts an internal DNS server, which can later be replaced by a custom one.
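
As a quick sanity check under this naming scheme, you can verify from a client that a host name inside the pod subdomain resolves through the DNS server that is in use. The host name vcd01 and the server address in the following sketch are placeholders, not names created by the deployment.

    # Illustrative only: check that a name in the pod subdomain resolves (host name and DNS server address are placeholders).
    dig @<dns-server-ip> vcd01.pod1.demo.vmware.com +short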

You can find further information about the planning and preparation for using the Cloud Provider Pod in the documents that are created based on the input that you provide in the Cloud Provider Pod Designer. These documents are created by using the Cloud Provider Pod Document Generator and delivered by email.

Product Versions Deployed with VMware Cloud Provider Pod

  • VMware ESXi 6.7 Update 2, build 13006603
  • VMware vCenter Server Appliance 6.7 Update 2a, build 13643870
  • VMware vCloud Director 9.7.0, build 13635483
  • VMware vCloud Usage Meter 3.6.1 Hot Patch 2, build 11832189
  • VMware NSX for vSphere (NSX Manager) 6.4.5, build 13282012
  • VMware vRealize Log Insight 4.8.0, build 13036238
  • Content Pack for vCloud Director, latest version as available within vRealize Log Insight
  • Content Pack for NSX, latest version as available within vRealize Log Insight
  • VMware vRealize Network Insight 4.1.1, build 1559730670
  • VMware vRealize Operations Manager 7.5.0, build 13165949
  • VMware vRealize Operations Manager Tenant App for vCloud Director 2.2.0, build 13473471
  • VMware vRealize Orchestrator 7.6.0, build 13020602

Compatibility

The hardware used for the Cloud Provider Pod deployment must be included in the VMware Hardware Compatibility List for ESXi 6.7 Update 2. Custom ISO images are supported but must be of the same version; follow the Deployment Guide for detailed instructions.

The current version of Cloud Provider Pod supports Fibre Channel, vSAN, iSCSI, and NFS as persistent storage technologies.

For vSAN, all-flash devices are required. Hybrid mode is NOT supported. The usage of vSAN requires devices and components compatible with the Hardware Compatibility List for vSAN version 6.7.

Cloud Provider Pod 1.5 requires generating a new configData.cfg file (uploading the existing configuration into the Cloud Provider Pod Designer and regenerating the files is sufficient) and using the newest package that is downloaded with the design documents.

Resolved Issues

  • Usage Meter Hot Patch is not installed

    Usage Meter Hot Patch is required for collecting usage metrics from vCloud Director. Cloud Provider Pod 1.1.x only downloads the Hot Patch, but does not install it.

    Cloud Provider Pod 1.5 installs Usage Meter Hot Patch 2 automatically.

  • NTP configuration is not applied to all deployed machines

    NTP is not configured consistently on all machines (mostly CentOS based machines).

    This issue is resolved in this release.

  • The DNS changer workflow does not apply the DNS settings correctly

    The DNS changer workflow does not apply the DNS settings correctly on CentOS-based virtual machines.

    This issue is resolved in this release.

  • Data collection in Usage Meter does not work properly

    Because of missing firewall rules, the Usage Meter data collection does not work properly.

    This issue is resolved in this release. The data collection from vCenter Server works properly. For data collection from vRealize Operations Manager, after the deployment, you must install an extension by manually updating the vCenter Server adapter instances in vRealize Operations Manager.

  • Installation and configuration of CentOS-based machines stops responding while waiting for processes

    In Cloud Provider Pod 1.1, the setup of CentOS-based machines (such as vCloud Director cells) fails when running the installation procedure. The workflow waits endlessly or fails while waiting for processes.

    This issue is resolved in this release. Cloud Provider Pod 1.5 uses a different method to run the installation scripts in guest virtual machines, so the issue of waiting for individual processes no longer occurs.

  • vSphere distributed port group names might not match VLANs

    Some of the vSphere distributed port groups created by Cloud Provider Pod have names that do not match the VLAN IDs they are configured with.

    This issue is resolved in this release.

  • Configuration and attachment of Update Manager baselines might fail for resource clusters

    In Cloud Provider Pod 1.1, the configuration of the Update Manager baselines for the resource clusters might fail.

    This issue is resolved in this release.

  • Cloud Provider Pod deployment fails when you use complex passwords

    The Cloud Provider Pod Designer password complexity validation is updated to allow only the exclamation mark (!), the at sign (@), and the dollar sign ($) as special characters. The Cloud Provider Pod deployment is updated to correctly configure all components by using the specified password.

    This issue is resolved in this release.

  • The vRealize Network Insight password might be incorrectly configured

    The password for login to vRealize Network Insight is not configured to match the password specified in the configuration worksheet.

    This issue is resolved in this release.

  • Unable to deploy a vSAN cluster with more than seven capacity disks

    In Cloud Provider Pod 1.1, the vSAN configuration requires at least two differently sized SSDs per host and a maximum of seven large disks to use as capacity disks.

    This issue is resolved in this release. Cloud Provider Pod 1.5 is updated to allow a more flexible configuration of the vSAN disk groups. Cloud Provider Pod now supports setting up a vSAN cluster with hosts that have identically sized disks or more than seven disks.

  • Download of binaries is interrupted and does not resume after the interruption

    The process for downloading binaries is reworked to download more reliably and to resume from previous download attempts when run again, so that already downloaded and verified files are not fetched again.

    This issue is resolved in this release.

  • Edge Firewall Rules

    Cloud Provider Pod 1.0.x does not fully restrict access to critical systems.

    This issue is resolved in this release. Cloud Provider Pod 1.5 applies a more fine-grained firewall configuration with additional firewall rules.

  • Initial configuration of vRealize Log Insight and vRealize Network Insight

    In Cloud Provider Pod 1.0.x, vRealize Log Insight and vRealize Network Insight are not connected to vCenter Server instances, NSX, and so on.

    This issue is resolved in this release.

Known Issues

  • Cloud Provider Pod deployment fails when setting up a vSAN cluster if Fibre Channel LUNs are available

    If Cloud Provider Pod deployment is configured to set up a vSAN storage cluster as a management cluster and there are existing Fibre Channel LUNs connected to the first host of the management cluster, the setup might fail. The deployment assumes a Fibre Channel datastore to be a temporary local datastore. Machines deployed on the temporary datastore might not be migrated to the vSAN datastore correctly and the temporary datastore cannot be deleted as expected during the deployment.

    Workaround: Make sure that any Fibre Channel (or iSCSI, if connected directly to the management VLAN) LUN is disconnected from the first management host during PXE booting and startup. If it is connected during PXE boot, make sure that none of the LUNs are used as datastore 'localStore'. Instead, a local disk must be used as localStore. You can update the first host to fulfill this requirement after PXE booting (before startup execution).

  • The Cloud Provider Pod Deployer startup process cannot be started a second time

    In some cases, the Cloud Provider Pod Deployer startup process (triggered by using the API call /api/v1/deployment/bringup) does not end correctly, even though it is finished. Triggering the process again by using the API does not work, because the process is still considered to be "IN_PROGRESS". You can verify that this is the case by checking the state of the process: the overall process is marked as "IN_PROGRESS", but all subtasks are marked as "DONE".

    Workaround: Run the following API call to cancel the bring-up process:

    curl -k -H "Authorization: Basic cm9vdDpWTXdhcmUxIQ==" -H "Content-Type: application/json" -d '{"state":"CANCELLED"}' -X PUT https://$deployerIp/api/v1/deployment/bringup

    After canceling, you can start the process normally.

  • ESXi hosts do not install on the correct local disk

    The target disk on which to install the ESXi operating system can be specified as a USB Stick/Device or First Disk (which means local disk) in the Cloud Provider Pod Designer. In addition, a specific local disk can be defined by adding the JSON properties esxiMg01BootMediaName, esxiRp00Rc00BootMediaName, or esxiRp00Rc01BootMediaName with the corresponding disk identifiers to the configData.cfg file. Make any changes to the configData.cfg file carefully and before running the setup process. In some cases, such as when installation on a RAID configuration is required, this might still not work correctly and custom changes to the generated kickstart files might be required.
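
    As a hedged illustration of such a property, the following fragment shows how one of the listed JSON keys might be added; the disk identifier value is a placeholder and the rest of the configData.cfg structure is omitted here.

    {
        ...
        "esxiMg01BootMediaName": "<local-disk-identifier>"
    }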

    Workaround: To specify the correct target disk for installation, after running the setup process and before PXE booting the hosts, modify the generated kickstart files. The kickstart files are generated on the Cloud Provider Pod Deployer in the directory: /var/ftp/pub/esxi/.
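
    For example, a typical adjustment in a generated kickstart file is to point the install command at a specific device. The following line is only a sketch based on standard ESXi scripted-installation syntax; the device path is a placeholder that must match your hardware.

    # Illustrative only: install ESXi on a specific device instead of the first detected disk (device path is a placeholder).
    install --disk=/vmfs/devices/disks/<target-device-name> --overwritevmfs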

  • The Cloud Provider Pod Console does not validate input of MAC addresses and license keys

    The Cloud Provider Pod Deployer API (vcppod-console) does not validate the input for updating the MAC addresses and license keys. The format and the values of the MAC addresses and license keys are not validated, and the deployment might fail after updating the values.

    Workaround: In Cloud Provider Pod 1.5, there is no validation of the specified input for the MAC addresses and license keys. Make sure that you enter the correct values (both the structure of the specified JSON body and the values themselves). You can verify the content after updating the values by using the API to extract the configuration data (before running the setup process).
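
    Because the Deployer does not validate this input, a simple local format check before sending the values can catch typos. The following shell sketch only checks the general format (a colon-separated MAC address and the usual five-by-five VMware license key pattern) with placeholder values; it does not verify that the values are actually valid for your environment.

    # Illustrative only: basic format checks with placeholder values before updating the Deployer configuration.
    mac="00:50:56:aa:bb:cc"
    key="XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"
    echo "$mac" | grep -Eqi '^([0-9a-f]{2}:){5}[0-9a-f]{2}$' || echo "MAC address format looks wrong"
    echo "$key" | grep -Eq '^([0-9A-Z]{5}-){4}[0-9A-Z]{5}$' || echo "License key format looks wrong"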

  • NFS 4.1 requires non-SDN based routing

    If you use NFS 4.1, non-SDN based routing is required. In Cloud Provider Pod 1.1, the automatic deployment of non-SDN based routing is unavailable.

    Workaround: Integrate datastores by using NFS 3 and select SDN-based routing in the Cloud Provider Pod Designer.

  • vRealize Operations Manager requires manual interaction after a successful deployment

    After a successful Cloud Provider Pod deployment, you must manually follow the wizard during the first login to vRealize Operations Manager. Even though the wizard must be followed, all relevant configurations are in place. This has no functional impact.

    Workaround: None

  • vRealize Operations Manager Tenant App deployment

    After a successful Cloud Provider Pod deployment, you must manually deploy and set up the vRealize Operations Manager Tenant App. Automatic deployment is not yet fully functional. This has no functional impact.

    Workaround: None.

  • Certificates are not replaced

    Internal system certificates, such as the certificates for vCenter Server, ESXi, NSX, vRealize Operations Manager, vRealize Network Insight, and vRealize Log Insight, are not replaced in this release. Only certificate replacement of the customer-facing vCloud Director certificate is supported.

    Workaround: You can manually replace the certificates.

  • VMware Validated Design-based design

    VMware Validated Design-based design options are not fully automated. Currently, only the custom Advanced Design is fully automated.

    Workaround: Use the Advanced Designer.

  • Second availability zone

    Generated design documents and automated deployment do not work with a second availability zone configured.

    Workaround: None.

  • 4 NIC configuration

    Some diagrams in the generated documentation might not represent the 4 NIC configuration but show 2 NICs instead. The Cloud Provider Pod Deployer will still deploy a correct 4 NIC configuration.

    Workaround: None.

  • No LACP/LAG support

    VMware Cloud Provider Pod 1.1 does not support LACP/LAG due to conflicts with PXE-boot and other configurations.

    Workaround: Do not set up channel aggregation on the physical switches before deployment.

  • Update Manager baseline settings per ESXi host must be applied manually

    The individual per-host settings of the Update Manager baselines must be applied manually on each ESXi host, because no API calls are available for this configuration.

  • If you deploy a Cassandra database for vCloud Director, no SSL connection is used

    If you deploy a Cassandra database for vCloud Director, vCloud Director does not use an SSL connection.

    Workaround: You must manually configure SSL.