
Updated on 22 FEB 2019

VMware Integrated OpenStack 5.1 | 13 NOV 2018 | Build 10738236
VMware Integrated OpenStack with Kubernetes 5.1 | 13 NOV 2018 | Build 10628687

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

  • About VMware Integrated OpenStack
  • What's New
  • Compatibility
  • Upgrading to Version 5.1
  • Security Notice
  • Deprecation Notices
  • Internationalization
  • Open Source Components for VMware Integrated OpenStack
  • Resolved Issues
  • Known Issues

About VMware Integrated OpenStack

VMware Integrated OpenStack greatly simplifies deploying an OpenStack cloud infrastructure by streamlining the integration process. VMware Integrated OpenStack delivers out-of-the-box OpenStack functionality and an easy configuration workflow through a deployment manager vApp that runs directly in vCenter Server.

What's New

  • Support for the latest versions of VMware products: VMware Integrated OpenStack 5.1 supports and is fully compatible with VMware vSphere 6.7 Update 1, NSX-T 2.3, NSX Data Center for vSphere 6.4.3, and vRealize Operations Manager 7.0.
  • OpenStack Swift (object storage): Swift is now included to enable distributed object storage. In VMware Integrated OpenStack 5.1, this project is supported as a technology preview.
  • OpenStack Barbican (key manager): Barbican is included as a fully supported project and enables users to securely store and manage tenant secrets. 
  • Datastore clusters: You can now use datastore clusters for Nova and Cinder storage, and Storage DRS is supported for certain operations.
  • SR-IOV in NSX-T deployments: NSX-T and SR-IOV can now co-exist, combining the networking benefits and features of NSX-T with the passthrough of SR-IOV.
  • NSX-V and NSX-T coexistence: A single OpenStack control plane can now be configured with NSX-V and NSX-T coexisting. 
  • Horizon improvements: You can configure Designate settings and TVD project-plugin mappings on the VMware Integrated OpenStack dashboard.
  • Role tagging: Virtual devices can now be tagged when instances are created, allowing guest workloads to identify and categorize devices more easily.

Compatibility

See the VMware Product Interoperability Matrices for details about the compatibility of VMware Integrated OpenStack with other VMware products, including vSphere components.

Upgrading to Version 5.1

Upgrading VMware Integrated OpenStack

You can upgrade directly to VMware Integrated OpenStack 5.1 from VMware Integrated OpenStack 4.0, 4.1, 4.1.1, 4.1.2, or 5.0. 

If you are running VMware Integrated OpenStack 4.1.2.1, you must upgrade directly to version 5.1.0.1 or later.

If you are running VMware Integrated OpenStack 3.1 or an earlier version, first upgrade to version 4.1 and then upgrade to version 5.1. 

Upgrading VMware Integrated OpenStack with Kubernetes

To upgrade from VMware Integrated OpenStack with Kubernetes 5.0 to VMware Integrated OpenStack with Kubernetes 5.1, see Upgrade VMware Integrated OpenStack with Kubernetes.

If you are running VMware Integrated OpenStack with Kubernetes 4.1 or an earlier version, first upgrade to version 5.0 and then upgrade to version 5.1. 

Security Notice

VMware Integrated OpenStack with Kubernetes (VIO-K) 5.1 is potentially affected by CVE-2018-1002105, a critical security vulnerability in Kubernetes. VMware previously posted an alert on this vulnerability, which has now been updated to include VMware Integrated OpenStack with Kubernetes. You must install Security Patch 1 to remediate this vulnerability.

After you install VMware Integrated OpenStack with Kubernetes 5.1, perform the following steps to patch your deployment:

  1. Download VMware Integrated OpenStack with Kubernetes 5.1 Security Patch 1 from the product download page.
  2. Transfer the patch file to the VMware Integrated OpenStack with Kubernetes management server.
  3. Log in to the management server.
  4. Decompress and install the patch by running the following commands:
    tar -xzf viok-5.1-hp1.tar.gz
    cd viok-5.1-hp1
    ./install.sh

Deprecation Notices 

New versions of vRealize Automation will no longer be certified with VMware Integrated OpenStack.

Some OpenStack Management Server lifecycle management APIs available in VMware Integrated OpenStack 5.1 will be changed or deprecated in a future major release of VMware Integrated OpenStack. 

The SDDC provider in VMware Integrated OpenStack with Kubernetes, intended for deployments without VMware Integrated OpenStack, will be deprecated in a future major release. For new VMware Integrated OpenStack with Kubernetes deployments, use the OpenStack provider.

Internationalization

VMware Integrated OpenStack 5.1 is available in English and seven additional languages: Simplified Chinese, Traditional Chinese, Japanese, Korean, French, German, and Spanish.

The following items must contain only ASCII characters:

  • Names of OpenStack resources (such as projects, users, and images)
  • Names of infrastructure components (such as ESXi hosts, port groups, data centers, and datastores)
  • LDAP and Active Directory attributes 

VMware Integrated OpenStack with Kubernetes is available in English only.

Open Source Components for VMware Integrated OpenStack

The copyright statements and licenses applicable to the open source software components distributed in VMware Integrated OpenStack 5.1 are available on the Open Source tab of the product download page. You can also download the disclosure packages for the components of VMware Integrated OpenStack that are governed by the GPL, LGPL, or other similar licenses that require the source code or modifications to source code to be made available.

Resolved Issues

  • ISO images on vSAN datastores cannot be booted.

    In previous versions, booting from an ISO on a vSAN datastore would fail.

    This issue has been resolved in this release.

  • The viocli deployment configure command does not restart the MySQL database.

    After you changed database parameters and ran viocli deployment configure, the new configuration did not take effect because the database service was not restarted.

    This issue has been resolved in this release.

  • The controller fails to create temporary files.

    Older environments may encounter issues because a large number of temporary files accumulated and were not cleaned up.

    This issue has been resolved in this release.

  • You cannot re-add a deleted compute node with tenant virtual data centers.

    After you delete a compute node with a tenant virtual data center, attempting to re-add it fails with a "Failed to create resource provider" error in /var/log/nova/nova-compute.log.

    This issue has been resolved in this release.

  • The prefix length for load balancer static routes cannot be configured.

    In previous versions, static route rules configured through the GUI all used a 24-bit prefix.

    You can now specify a subnet mask when configuring static routes for the load balancer.

  • Resizing a volume may cause the volume to be migrated to another host.

    When you resize a volume, it may be moved to a different host in the cluster even when always_resize_on_same_host is set to true.

    This issue has been resolved in this release.

  • An error occurs when you delete compute nodes out of order and then attempt to add a compute node.

    In earlier VMware Integrated OpenStack versions, you could only delete compute nodes in descending order. For example, with three compute nodes VIO-Compute-0, VIO-Compute-1, and VIO-Compute-2, you would need to delete VIO-Compute-2 first, then VIO-Compute-1, and finally VIO-Compute-0. If you did not delete the nodes in this order, adding a node later would generate an error.

    This issue has been resolved in this release.

  • Instances fail to launch with the error "ResourceProviderAggregateRetrievalFailed: Failed to get aggregates for resource provider".

    You cannot launch instances on a compute node where an existing tenant virtual data center was deleted. Specifically, this issue occurs when a tenant virtual data center existed on the compute node before the Nova compute service started and was deleted after the service started.

    This issue has been resolved in this release.

  • For NSX-T deployments, virtual machines cannot be imported into VMware Integrated OpenStack.

    You can import virtual machines into deployments backed by NSX-V or VDS only. The import process fails for NSX-T deployments.

    This issue has been resolved in this release.

Known Issues

The known issues are grouped as follows.

VMware Integrated OpenStack
  • JSON template files exported from the Flex and HTML5 clients are not completely interchangeable.

    If you export a template using the Flex-based vSphere Web Client and import it using the HTML5 vSphere Client, the "Create Metadata Proxy Server" and "Create DHCP Server Profile" settings are not preserved.

    Workaround: Use the same client to import and export template files. If you are required to use a different client, verify that the metadata proxy server and DHCP server profile settings are correct before deploying.

  • The edge cluster drop-down list is unavailable when deploying OpenStack from a JSON template.

    In the Flex-based vSphere Web Client, if you deploy OpenStack using a template file with "Create Metadata Proxy Server" or "Create DHCP Server Profile" selected, you are unable to select an edge cluster from the drop-down list.

    Workaround: Deselect "Create Metadata Proxy Server" or "Create DHCP Server Profile" and select it again. The edge cluster drop-down list is then displayed normally.

  • For NSX-V deployments, security groups are not enforced on newly created compute clusters.

    When you create a compute cluster, its managed object identifier (MOID) is not updated in NSX-V, and default rules are not applied.

    Workaround: Log in to the controller and run the sudo -u neutron nsxadmin -r firewall-sections -o nsx-update command. Alternatively, you can manually update the default rules in the OS Cluster Security Group section in NSX-V to include the new compute clusters.

  • In an environment with multiple vCenter Server instances, Nova live migration cannot move an instance to a host in the remote vCenter Server.

    Nova live migration across vCenter Server instances is not supported.

    Workaround: None.

  • Deleting an instance associated with a floating IP address does not delete associated DNS records.

    If you delete an instance that has a floating IP address associated, Designate will not remove the DNS records for that instance.

    Workaround: Disassociate the floating IP address from the instance before deleting the instance.
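
    For illustration, the following is a minimal command sketch of this workaround, assuming the OpenStack command-line client is available; the instance name and floating IP address shown are hypothetical:

      # Hypothetical instance name and floating IP address
      openstack server remove floating ip demo-instance 10.10.0.15
      openstack server delete demo-instance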

  • Compute nodes might not start after being removed and re-added.

    If there is a tenant virtual data center created for a compute node, that node will fail to start if it is removed and added again.

    Workaround: None.

  • Local storage may be incorrectly calculated on the VMware Integrated OpenStack dashboard.

    If multiple compute nodes use the same datastore, the Hypervisors page on the VMware Integrated OpenStack dashboard will incorrectly display that the total disk space available is the size of the single datastore multiplied by the number of compute nodes using it. In addition, the entry in the Local Storage (used) column for each compute node will display the total used space on the datastore, not the used space for a single compute node.

    Workaround: None.

  • In an environment with multiple vCenter Server instances, instances cannot find templates and fail to boot.

    If an image is deleted manually from the compute vCenter Server, instances might fail to boot with the error "Unable to find template at location". This issue can also occur when re-adding compute nodes.

    Workaround: Determine the location of the remote vCenter Server from the image by running the glance image-show image-uuid command. Then delete the location from Glance by running the glance location-delete --url image-location command.
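
    For example, a minimal sketch of this workaround with a hypothetical image UUID and an illustrative vsphere:// location URL; it assumes that location-delete also takes the image UUID as its final argument:

      # Hypothetical image UUID; copy the actual location URL from the image-show output
      glance image-show 3f2504e0-4f89-11d3-9a0c-0305e82c3301
      glance location-delete --url "vsphere://remote-vc.example.com/folder/images/3f2504e0.vmdk?dcPath=DC01&dsName=DS01" 3f2504e0-4f89-11d3-9a0c-0305e82c3301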

  • For deployments using a remote vCenter Server, the viopatch command fails to take snapshots.

    In a deployment where all control virtual machines are deployed in a management vCenter Server instance and use a Nova compute node deployed in a remote vCenter Server instance, the viopatch snapshot take command cannot obtain information about the management vCenter Server instance. The command fails with the error "AttributeError: 'NoneType' object has no attribute 'snapshot'."

    Workaround: On the OpenStack Management Server virtual machine, manually set the IP address, username, and password of the management vCenter Server by running the following commands:

    export VCENTER_HOSTNAME=mgmt-vc-ip-address
    export VCENTER_USERNAME=mgmt-vc-username
    export VCENTER_PASSWORD=mgmt-vc-password
  • VMware Integrated OpenStack cannot connect to NSX-T after the NSX-T password is changed.

    If you change the NSX-T password while the Neutron server is running, VMware Integrated OpenStack might fail to connect to NSX-T.

    Workaround: Before changing the NSX-T password, log in to the active controller node and run the systemctl stop neutron-server command to stop the Neutron server service. The service will be restarted after you update the NSX-T password in VMware Integrated OpenStack.

  • The Nova compute service fails to start after an upgrade from version 4.x.

    If a Nova compute node was deleted in version 4.x and a new Nova compute node using the same vCenter Server and same cluster was added later, the Nova compute service will fail to start after you upgrade to version 5.x. "ERROR nova ResourceProviderCreationFailed" is written to /var/log/nova/nova-compute.log.

    Workaround: Perform the following steps to remove the Nova compute node from the database:

    1. Find the MOID of the deleted compute node.
    2. Log in to the active database node and open the nova_api database:

      mysql
      use nova_api

    3. In the "resource_providers" table, remove the resource_provider record with the MOID of the deleted compute node.
    4. In the "host_mappings" table, remove the host record for the deleted compute node.
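
    A hedged sketch of steps 3 and 4 follows. It uses the same mysql access shown in step 2, a hypothetical MOID (domain-c123), and assumes that the MOID appears in the name column of resource_providers and the host column of host_mappings; verify the rows with the SELECT statements before deleting:

      # domain-c123 is a hypothetical MOID; substitute the MOID of the deleted compute node
      mysql nova_api -e "SELECT id, name FROM resource_providers WHERE name LIKE '%domain-c123%'"
      mysql nova_api -e "DELETE FROM resource_providers WHERE name LIKE '%domain-c123%'"
      mysql nova_api -e "SELECT id, host FROM host_mappings WHERE host LIKE '%domain-c123%'"
      mysql nova_api -e "DELETE FROM host_mappings WHERE host LIKE '%domain-c123%'"
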
  • A datastore failure might render the OpenStack deployment inaccessible.

    If all nodes in an HA deployment use the same datastore, the failure of that datastore will cause the entire deployment to be inaccessible.

    Workaround: Try to fix the failed datastore and recover its data. After the virtual machine for each node is shown in vCenter Server, restart the OpenStack deployment. If the datastore is not recoverable, use the viocli recover command to restore the failed nodes.

  • The Keystone endpoint is in the error state.

    After the internal endpoint in-flight encryption setting is changed, the Keystone endpoint fails to reconnect. This issue occurs when you set the internal_api_protocol parameter to http for an HA deployment or https for a compact or tiny deployment.

    Workaround: Modify the Keystone endpoint URL.

    1. In the vSphere Web Client, select Administration > OpenStack.
    2. Select the KEYSTONE endpoint and click the Edit (pencil) icon.
    3. In the Update Endpoint section displayed, change the URL to begin with http or https depending on your configuration.
    4. Enter the administrator password and click Update.
  • The VMware Integrated OpenStack OVA cannot be deployed in the HTML5 vSphere Client in vCenter Server 6.7.

    After you deploy the VMware Integrated OpenStack OVA using the HTML5 vSphere Client in vCenter Server 6.7, the VMware Integrated OpenStack vApp fails to power on and the UI displays the error: "The virtual machine has a required vService dependency 'vCenter Extension Installation' which is not bound to a provider."    

    Workaround: Deploy the VMware Integrated OpenStack OVA using the Flex-based vSphere Web Client or using the OVF Tool.

    For more information, see the vSphere 6.7 Release Notes and KB 55027.

  • Host names that start with a number cause a "java.io.IOException" error.

    The OpenStack Management Server does not support host names that start with a number. The error "java.io.IOException: DNSName components must begin with a letter" appears if the host name starts with a number.

    Workaround: Use a host name that does not start with a number. For more information, see the JDK upstream issue: https://bugs.openjdk.java.net/browse/JDK-8054380

  • East-west traffic does not travel between virtual machines booted on a virtual wire provider network.

    If you create a provider network using virtual wire and do not create a SpoofGuard policy, east-west traffic will not travel between the virtual machines booted on this provider network.  

    Workaround: Create a SpoofGuard policy and add virtual wire to the policy before creating the virtual wire provider network.

  • Certificate verification may fail on the OpenStack Management Server.

    When you use the viocli command-line utility, the following error may occur:

    ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)

    Workaround: On the OpenStack Management Server, disable verification of the vCenter Server certificate by running the following commands:

    sudo su -
    export VCENTER_INSECURE=True
    
  • Deleting a router interface times out.

    When concurrent Heat stacks are deployed with shared NSX routers, router interface deletion can time out. The following might be displayed: neutron_client_socket_timeout, haproxy_neutron_client_timeout, or haproxy_neutron_server_timeout.

    Workaround: Do not use shared routers in environments where network resources frequently change. If NAT or floating IP addresses are required, use an exclusive router. Otherwise, use a distributed router.

VMware Integrated OpenStack with Kubernetes
  • For NSX-T deployments, the vkube cluster heal command might fail.

    If you use the vkube cluster heal command on a cluster whose k8s-master-0 node is in the Error state, the following error message may be displayed:

    fatal: [k8s-master-0-0ffeac43-78ea-4eab]: FAILED! => {"changed": false, "msg": "Unable to start service etcd: Job for etcd.service failed because a timeout was exceeded. See \"systemctl status etcd.service\" and \"journalctl -xe\" for details.\n"}

    Workaround: Using SSH, log in to the k8s-master-1 node for the affected cluster and manually delete the unreachable member from the etcd cluster. Then close the SSH connection and run the vkube cluster heal command again.
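
    The following is a minimal sketch of the manual etcd cleanup, assuming etcdctl is available on the master node and already configured to reach the local etcd endpoint; the member ID shown is hypothetical:

      # List cluster members and identify the unreachable one
      etcdctl member list
      # Remove the unreachable member (hypothetical member ID taken from the list output)
      etcdctl member remove 8e9e05c52164694d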

  • A cluster in the ERROR state cannot be deleted.

    If the infrastructure is out of resources, cluster creation, healing, and scaling will fail and the cluster will enter the ERROR state.

    Workaround: Perform the following steps:

    1. Log in to the toolbox container.
    2. Use the OpenStack client to delete hosts in the ERROR state.
    3. Run the vkube cluster delete command again to delete the cluster.
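
    A hedged sketch of these steps from inside the toolbox container, using hypothetical server and cluster names:

      # List the cluster nodes that are in the ERROR state
      openstack server list --status ERROR
      # Delete each failed node (hypothetical server name from the list output)
      openstack server delete k8s-worker-2-0ffeac43
      # Re-run the cluster deletion (hypothetical cluster ID)
      vkube cluster delete my-cluster-id
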
  • If a username or domain contains a backslash (\), authentication fails.

    The Keystone authentication plugin uses the backslash character as a separator to encode the domain name and username in a single string. If there is an additional backslash in either the domain name or username, the Keystone authentication plugin will not decode the domain name and username correctly.

    Workaround: Use domain names or usernames that do not include the backslash character.
