VMware Integrated OpenStack 3.1 | 21 FEB 2017 | Build 5065461
Check for additions and updates to these release notes.
What's in the Release Notes
The release notes cover the following topics:
- About VMware Integrated OpenStack
- Internationalization
- What's New
- Compatibility
- Upgrading to VMware Integrated OpenStack 3.1
- Open Source Components for VMware Integrated OpenStack 3.1
- Known Issues
- Resolved Issues
About VMware Integrated OpenStack
VMware Integrated OpenStack greatly simplifies deploying an OpenStack cloud infrastructure by streamlining the integration process. VMware Integrated OpenStack delivers out-of-the-box OpenStack functionality and an easy configuration workflow through a deployment manager vApp that runs directly in vCenter Server.
Internationalization
VMware Integrated OpenStack version 3.1 is available in English and seven additional languages: Simplified Chinese, Traditional Chinese, Japanese, Korean, French, German, and Spanish. ASCII characters must be used for all input and naming conventions of OpenStack resources (such as project names, user names, image names, and so on) and for the underlying infrastructure components (such as ESXi hostnames, virtual switch port group names, data center names, datastore names, and so on).
What's New
This release is based on the OpenStack Mitaka release and provides the following new features and enhancements:
- Support for the latest versions of VMware products. VMware Integrated OpenStack 3.1 supports and is fully compatible with VMware vSphere 6.5, VMware NSX for vSphere 6.3, and VMware NSX-T 1.1.
- NSX Policy Support in Neutron. NSX administrators can define security policies that the OpenStack cloud admin shares with cloud users. Depending on the policy set by the OpenStack cloud admin, users can either create their own rules, bounded by the predefined rules that cannot be overridden, or use only the predefined rules. Cloud admins can also use this feature to insert third-party network services.
- New NFV Features. You can now import vSphere VMs with NSX network backing into VMware Integrated OpenStack. This release also adds full passthrough support by using VMware DirectPath I/O.
- Seamless update from compact mode to HA mode. If you update a VMware Integrated OpenStack 3.0 deployment in compact mode to 3.1, you can seamlessly transition to an HA deployment during the update.
- Support for GPU devices. OpenStack instances can now be created with GPU passthrough devices, which can be used for high-performance computing.
- Single Sign-On integration with VMware Identity Manager. You can now streamline authentication for your OpenStack deployment by integrating it with VMware Identity Manager.
- Specify adapter type for empty volumes. It is now possible to change the default adapter type for newly created volumes by modifying the vmware_adapter_type option.
- Profiling enhancements. You can now use VMware vRealize Log Insight as the trace store for the OpenStack profiling tool osprofiler.
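The adapter-type feature above can also be applied per volume type. The following is a minimal sketch, assuming the Cinder VMDK driver's vmware:adapter_type extra-spec key; the volume type name and adapter value are illustrative examples, not taken from this release:

```shell
# Hedged sketch: select a non-default adapter type for new volumes by
# tagging a volume type (type name and adapter value are examples).
cinder type-create lsilogic-sas-volumes
cinder type-key lsilogic-sas-volumes set vmware:adapter_type=lsiLogicsas
# Volumes created with this type use the specified adapter type.
cinder create --volume-type lsilogic-sas-volumes --display-name demo-vol 1
```

These commands require an authenticated OpenStack CLI session against the deployment.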
Compatibility
The VMware Product Interoperability Matrix provides details about the compatibility of the current version of VMware Integrated OpenStack with VMware vSphere components, including ESXi, VMware vCenter Server, the vSphere Web Client, and optional VMware products. Check the VMware Product Interoperability Matrix also for information about supported management and backup agents before you install VMware Integrated OpenStack or other VMware products.
Upgrading to VMware Integrated OpenStack 3.1
New: Upgrade from VMware Integrated OpenStack 2.5.2 to VMware Integrated OpenStack 3.0 or 3.1 is not supported.
You perform the upgrade procedure directly in the VMware Integrated OpenStack manager. The complete multi-step procedure is described in detail in the VMware Integrated OpenStack Administrator Guide.
Open Source Components for VMware Integrated OpenStack 3.1
The copyright statements and licenses applicable to the open source software components distributed in VMware Integrated OpenStack 3.1 are available on the Open Source tab of the product download page. You can also download the source files for any GPL, LGPL, or other similar licenses that require the source code or modifications to source code to be made available for the most recent available release of VMware Integrated OpenStack.
Resolved Issues
- Horizon dashboard shows error after switching projects as admin user
If you are logged into Horizon as the administrative user and try to switch between projects using the drop-down menu in Horizon, the dashboard might begin returning errors. This is caused by an issue in OpenStack.
This issue is resolved in this release.
- VM import: Passing the root resource pool for tenant mapping fails.
When importing all unmanaged VMs, the optional --root-resource-pool parameter does not function as expected and can cause the operation to fail.
This issue is resolved in this release.
Known Issues
- New: Tenant traffic might get blocked after enabling NSX policies in Neutron
After enabling security-group-policy in the Neutron plug-in, the order of the NSX firewall sections might be wrong. The correct order must be:
- NSX policies
- Tenants security groups
- Default sections
Workaround: Create the first NSX policy before configuring VMware Integrated OpenStack. If you have already made configurations, go to the NSX Firewall page in the vSphere Web Client and move the policies sections up.
- Provider network creation through Horizon fails if no UUID of a transport zone is entered
When you create VLAN type networks, you must provide the UUID of the transport zone in the provider_network text box in Horizon. If no value is entered, network creation fails.
Workaround: Look up the UUID of the transport zone in your VMware NSX interface and enter that value in the provider_network text box.
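As an alternative to the Horizon form, the same network can be created from the CLI. A sketch, assuming the Neutron client, with placeholder values for the transport-zone UUID and VLAN ID:

```shell
# Hedged sketch: create a VLAN provider network, passing the NSX
# transport-zone UUID as the physical network (values are placeholders).
neutron net-create my-vlan-net \
    --provider:network_type vlan \
    --provider:physical_network <transport-zone-uuid> \
    --provider:segmentation_id 1234
```

The CLI path makes the required transport-zone UUID explicit, avoiding the Horizon validation gap described above.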
- Metadata is not accessible for a DHCP disabled subnet on a distributed logical router
Instances on DHCP-disabled subnets cannot access metadata through the router interface if a distributed logical router is used. This behavior is not observed for shared and exclusive routers. This might be expected behavior, because the same logical network, for example a metadata network, cannot be attached to multiple distributed logical routers.
Workaround: None.
- When you boot from a glance image created using the Ubuntu Xenial OVA, the OS fails to boot
The OS fails to boot with the following errors:
error: file `/boot/grub/i386-pc/efi_gop.mod' not found
error: file `/boot/grub/i386-pc/efi_uga.mod' not found
This is an issue with the Xenial cloud OVA that is tracked by a bug in the Ubuntu cloud-images project. For more information, see https://bugs.launchpad.net/cloud-images/+bug/1615875.
Workaround: Until the Ubuntu bug is resolved and a new OVA is published, use Xenial ISO images.
- LBaaS v2 fails when you add members to a pool that is created with the --loadbalancer option
OpenStack LBaaS v2 provides two options to configure a load balancer pool: --loadbalancer and --listener. At least one of the two options must be specified to create the pool.
If you create the pool for OpenStack LBaaS v2 with the --loadbalancer option, adding members fails and the load balancer goes into an ERROR state.
Workaround: Create the pool with the --listener option.
- Renamed OpenStack instances appear under the old name in vCenter Server
If you rename your OpenStack instance by using the nova rename command, the change appears only in the OpenStack database. Your vCenter Server instance shows the old name.
Workaround: None.
- Upgrade from 3.0 to 3.1 for deployments with NSX-T 1.0 is not supported
VMware NSX-T 1.0 uses community DHCP for VMware Integrated OpenStack 3.0 integration. VMware Integrated OpenStack 3.1 uses only NSX-T DHCP, which is not available with NSX-T 1.0.
Workaround: None.
- Availability zone configuration might not be successfully applied
After you modify the configuration of an availability zone, the new configuration might not be applied until the backup edges are deleted and recreated.
For example, the following configuration in the nsx.ini file defines an availability zone that has backup edges:
zone3:resgroup-163:datastore-12:true:datastore-21
If you change the resource pool of that zone and restart Neutron, the backup edges are not updated. If you deploy new routers or networks, they use the out-of-date backup edges, which leads to an inconsistent availability zone configuration.
Workaround: Call the admin utilities after you change the configuration of an availability zone and before you restart Neutron:
- Modify the availability zone configuration in the nsx.ini file.
- Delete all backup edges in succession.
nsxadmin -r backup-edges -o clean --property edge-id=edge-XX
- Restart Neutron.
- Verify the new configuration.
availability-zone-list
- "Certificate is not in CA store" error might appear when you deploy a new OpenStack instance
When you deploy a new VMware Integrated OpenStack instance with an IP address that was used by another instance that was connected to vRealize Automation, you might get certificate errors:
Cannot execute the request: ; java.security.cert.CertificateException: Certificate is not in CA store.Certificate is not in CA store. (Workflow:Invoke a REST operation / REST call (item0)#35)
Workaround: Delete the certificate of the old VMware Integrated OpenStack instance and import the new one by running the respective workflows in vRealize Orchestrator.
- Log in to vRealize Orchestrator.
- Go to Library > Configuration > SSL Trust Manager.
- Run the workflow to delete the trusted certificates of the old VMware Integrated OpenStack instance.
- Run the workflow to import the certificate of the new instance from URL.
- Unable to modify syslog setting post deployment in VIO Manager interface.
After deploying VIO, you cannot modify the syslog server configuration using the setting in the VIO Manager interface (VMware Integrated OpenStack > Management Server > Edit Settings > vApp Options).
Workaround: Modify the configuration here: VMware Integrated OpenStack > OpenStack Cluster > Manage > Syslog Server.
- Dashboard might show a volume as attached even if it failed to attach.
This is a known OpenStack issue, first reported in the Icehouse release.
- Long image upload times cause NotAuthenticated failure.
This is a known OpenStack issue (https://bugs.launchpad.net/glance/+bug/1371121), first reported in the Icehouse release.
- Special characters in datastore names are not supported by Glance (Image Service).
If a datastore name has non-alphanumeric characters such as colons, ampersands, or commas, the datastore cannot be added to the Glance service. Specifically, the following characters are not permitted in Glance datastore names because their use is reserved for other purposes and can interfere with the configuration: : , / $ (colon, comma, forward slash, dollar sign). This issue has been fast-tracked for resolution.
- If either controller VM reboots, high availability might be compromised.
When a controller fails, the other controller continues to provide services. However, when the initial controller reboots, it might no longer provide services, and thus is not available if the other controller also fails. This issue has been fast-tracked for resolution.
Workaround: If a controller fails and HA is invoked, review your deployment to ensure that both controllers are providing services after the failed controller reboots.
- Metadata service is not accessible on subnets created with the no-gateway option.
Deployments using NSX 6.2.2 or earlier do not support no-gateway networks; Edges are used for edge-routed networks and DHCP is used for VDR networks. Deployments using NSX 6.2.3 or later do not support no-gateway or no-dhcp networks; DHCP is used for any DHCP network and Edges are used for non-DHCP networks. In 2.x, autoconfiguration is turned off for Edge VMs. When applicable, DHCP sets the gateway and metadata is served through this gateway Edge. As a result, when a subnet is created with the no-gateway option, there is no router Edge to capture the metadata traffic.
Workaround: For networks with the no-gateway option, configure a route for 169.254.169.254/32 to forward traffic to DHCP Edge IP.
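One way to apply the workaround above is to push the host route through the subnet definition. A sketch, assuming the Neutron client, with the subnet ID and DHCP Edge IP as placeholders:

```shell
# Hedged sketch: add a host route for the metadata address that forwards
# traffic to the DHCP Edge IP (subnet ID and IP are placeholders).
neutron subnet-update <subnet-id> \
    --host-routes type=dict list=true \
    destination=169.254.169.254/32,nexthop=<dhcp-edge-ip>
```

Instances pick up the route when they renew their DHCP lease or are rebooted.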
- Problem uploading patch file in Firefox Browser.
If you use Firefox to upload the patch for VMware Integrated OpenStack, the upload fails if Firefox is using version 19 of the Adobe Flash plug-in.
Workaround: Obtain the patch by using the CLI. You can also work around this issue by using an alternative browser or by reverting the Flash plug-in in your Firefox browser to an earlier version (15, 16, 17, or 18).
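The CLI path can look like the following. This is a sketch assuming the viopatch utility on the management server; the patch file name, patch name, and version are placeholders:

```shell
# Hedged sketch: add and install the patch from the management server
# CLI instead of the browser UI (file name and version are placeholders).
viopatch add -l /tmp/<patch-file>.deb
viopatch list
viopatch install -p <patch-name> -v <patch-version>
```

This avoids the browser upload entirely, so the Flash plug-in version is irrelevant.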
- OpenStack management service does not automatically restart.
Under certain conditions, the OpenStack management service does not automatically restart. For example, after a failover event, all OpenStack services successfully restart but the management service remains unreachable.
Workaround: Manually restart the VMware Integrated OpenStack vApp in the vSphere Web Client. Right-click the icon on the Inventory page and select Shut Down. After all the services shut down, power on the vApp. Check the OpenStack manager logs to confirm that the restart was successful.
NOTE: Restarting interrupts services. This issue is fast-tracked for resolution in a future VMware Integrated OpenStack release.
- Network creation sometimes fails when running Heat templates.
Observed in VMware Integrated OpenStack deployments using NSX 6.2.2. When running multiple Heat templates, an iteration of a network creation sometimes fails at the back end.
Resolved in NSX 6.2.3 and later.
- Recovery operation returns "Nodes already exist" error.
Under certain conditions, running the viocli recovery - <DB name> command fails if the Ansible operation is interrupted. As a result, the database nodes remain and cause the error.
Workaround: Manually remove the nodes and run the viocli recovery command again.
- LBaaS v2 migration: health monitors not associated with a pool do not migrate.
In LBaaS v2, health monitors must specify and be attached to a pool. In LBaaS v1, health monitors can be created without a pool association and associated with a pool in a separate procedure. As a result, when migrating to LBaaS v2, unassociated health monitors are excluded.
Workaround: Before migrating to LBaaS v2, associate all health monitors with a pool to ensure their successful migration. The migration process is optional after installing or upgrading to VMware Integrated OpenStack 3.0. See the VMware Integrated OpenStack Administration Guide.
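The pre-migration step above can be done from the CLI. A sketch assuming the LBaaS v1 Neutron client commands, with IDs as placeholders:

```shell
# Hedged sketch: list LBaaS v1 health monitors, then attach each
# unassociated one to a pool before migrating (IDs are placeholders).
neutron lb-healthmonitor-list
neutron lb-healthmonitor-associate <health-monitor-id> <pool-id>
```

Repeat the associate command for every health monitor that has no pool, so that none are dropped during migration.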
- NSX LBaaS v2.0 tenant limitation.
NSX load balancers support only one tenant per subnet. Under normal operation, this is not an issue because tenants create their own load balancers. If a user attempts to create and attach a load balancer to an existing subnet, the load balancer is created in an ERROR state.
Workaround: Allow tenants to create their own load balancers. Do not create and attach a load balancer to an existing subnet.
- Operations time out with error "Connection reset by peer".
Observed in VMware Integrated OpenStack 2.0.1 deployments using NSX 6.2.1. Sometimes the HA proxy times out because of insufficient timeout configuration in the load balancer and controller node settings.
Workaround: Edit custom.yml to modify the timeout values. Add or modify the following parameters as shown:
neutron_client_socket_timeout: 1500
haproxy_neutron_client_timeout: 1500s
haproxy_neutron_server_timeout: 1500s
- Heat stack deletion fails with "Failed to publish configuration on NSX Edge" error.
Observed in deployments using NSX v6.2.2. Under stressful conditions, the Heat stack or OpenStack API might fail at the back end.
Workaround: Retry the failed operation.
NOTE: This issue is fast-tracked for resolution in a future NSX release.
- Online documentation: Some graphics might not display.
If you use the Firefox or Internet Explorer browser to view the online documentation for VMware Integrated OpenStack, some graphics might not display. This does not affect the PDF documentation.
Workaround: Use the Google Chrome browser. All graphics display without issue.
- Images must be VMX version 10 or greater.
This issue affects streamOptimized images and OVAs. For example, if an image is not VMX-10 or greater, it might import without difficulty, but OpenStack instances created from the image will not function. This is typically experienced when OpenStack compute nodes are deployed on older ESXi versions, such as 5.5. You also cannot correct such an image by modifying the image metadata (vmware_hw_version) or flavor metadata (vmware:hw_version).
- Recovery after a vSphere HA event shows synchronization and process start-up failures.
If vSphere experiences an HA event, it can affect your VMware Integrated OpenStack deployment. After the recovery, in VMware Integrated OpenStack, run the viocli deployment status -v command. If the resulting report shows any synchronization or process start-up failures, use the workaround below.
Workaround: Use the viocli services stop command to stop all OpenStack services. Use the viocli services start command to restart all OpenStack services. After restarting, run the viocli deployment status -v command again. There should be no errors.
- Interoperability with other VMware products with TLS v1.0 disabled.
VMware Integrated OpenStack experiences interoperability issues with other VMware products when those products have disabled TLS v1.0 and SSL v3. Many clients are phasing out TLS v1.0 and SSL v3 because they are no longer considered secure by current revisions of the PCI Data Security Standard. Previous versions of VMware Integrated OpenStack disabled TLS v1.0 and SSL v3 on inbound public API connections. VMware Integrated OpenStack v2.5.1 and v3.0 fully interoperate with components that have disabled TLS v1.0 and SSL v3, including vSphere 6.0 update 2, NSX 6.2.4, and LDAP servers.
Workaround: Disable TLS version checking on the vCenter Server where VMware Integrated OpenStack is running.
- Modify the /etc/vmware-rhttpproxy/config.xml file.
<vmacore>
<ssl>
<doVersionCheck>false</doVersionCheck>
<useCompression>true</useCompression>
<libraryPath></libraryPath>
<sslOptions>117587968</sslOptions>
</ssl>
...
- Modify the /etc/vmware-vpx/vpxd.cfg file.
<vmacore>
<cacheProperties>true</cacheProperties>
<ssl>
<useCompression>true</useCompression>
<sslOptions>117587968</sslOptions>
</ssl>
...
- Restart the vpxd and rhttpproxy services on the vCenter Server.
- OpenStack recovery sometimes fails when starting RabbitMQ.
In rare cases, VMware Integrated OpenStack recovery fails at the point where RabbitMQ starts.
Workaround: Repeat the recovery process. Recovery should succeed the second time.
- Heat stack deletion fails to delete associated Cinder volumes.
Under heavy loads, Cinder volumes sometimes fail to be deleted after their Heat stacks are deleted, resulting in database deadlock warnings and slower Cinder performance. This issue is fast-tracked for resolution in a future release.
- SQL-configured users cannot be modified in the dashboard.
If your VMware Integrated OpenStack deployment is configured to use LDAP for user authentication, you cannot modify user definitions in the OpenStack dashboard (Horizon), even those that are sourced from a SQL database.
- OpenStack dashboard: router-size drop-down menu is missing.
In the OpenStack dashboard (Horizon), you can specify the size when you create an exclusive router. However when you modify a router from shared to exclusive, the router-size drop-down menu does not appear, preventing you from specifying the router size.
Workaround: Restore the default value, then modify the type to exclusive again. The drop-down menu should appear.
- After 3.0 upgrade, the vAPI service is not running.
After upgrading to VMware Integrated OpenStack 3.0, the vAPI service might be in a failed state.
Workaround: Manually restart the vAPI service. Use SSH to log in to the VMware Integrated OpenStack server. Switch to the root user. Run service vapi restart.
- Backup NSX Edge nodes are not used after upgrade to 3.0.
Due to changes in the OpenStack Neutron code in Mitaka, the backup Edge nodes are not recognized as available backups. The availability_zone column in the nsxv_router_bindings table is set to null.
Workaround: Set the availability_zone column in the nsxv_router_bindings table to default:
mysql -D neutron -e "update nsxv_router_bindings set availability_zone='default' where availability_zone is null;"
Restart the Neutron service.