VMware Integrated OpenStack 4.1.2 | 18 SEP 2018 | Build 10048327
Updated on 13 NOV 2018
Check for additions and updates to these release notes.
What's in the Release Notes
The release notes cover the following topics:
- About VMware Integrated OpenStack
- What's New
- Compatibility
- Upgrading to Version 4.1.2
- Internationalization
- Open Source Components for VMware Integrated OpenStack
- Resolved Issues
- Known Issues
About VMware Integrated OpenStack
VMware Integrated OpenStack greatly simplifies deploying an OpenStack cloud infrastructure by streamlining the integration process. VMware Integrated OpenStack delivers out-of-the-box OpenStack functionality and an easy configuration workflow through a deployment manager vApp that runs directly in vCenter Server.
What's New
- When deploying instances in a multi-vCenter Server environment, the cloud administrator can now configure a default availability zone. If the user does not specify an availability zone, the scheduler will place the instance in the default zone.
- Log bundles now also include Apache service logs.
Compatibility
See the VMware Product Interoperability Matrices for details about the compatibility of VMware Integrated OpenStack with other VMware products, including vSphere components.
Upgrading to Version 4.1.2
The upgrade to VMware Integrated OpenStack 4.1.2 is a patch process. The process varies depending on your currently installed version of VMware Integrated OpenStack.
Patch Process
If you are running VMware Integrated OpenStack 4.1 or 4.1.1, you can apply the patch directly to your existing deployment. To do so, perform the following steps:
- Verify that your OpenStack deployment is either running or not yet deployed. If the VMware Integrated OpenStack deployment is in any other state, the upgrade will fail.
NOTE: If you are patching from version 4.1.0 and you have not yet deployed OpenStack, you must run the viopatch install command twice. Ignore the error displayed when you run the command for the first time and run the command again.
- In the vSphere Web Client, take a snapshot of the OpenStack Management Server virtual machine.
- On the OpenStack Management Server virtual machine, take a snapshot by running the following command:
sudo viopatch snapshot take
NOTE: If the command fails, see "For deployments using a remote vCenter Server, the viopatch command fails to take snapshots" in the Known Issues section.
- Download the patch file to the OpenStack Management Server virtual machine.
- Add the patch file by running the following command:
sudo viopatch add -l path/vio-patch-4.1.2_4.1.2.10048327_all.deb
- Install the patch by running the following command:
sudo viopatch install -p vio-patch-4.1.2 -v 4.1.2.10048327
NOTE: Infrastructure services will be restarted during the patch process. During the patch installation, the API endpoint is automatically turned off. As a result, any API calls made during installation will return a 503 error.
- Log out of the vSphere Web Client and log back in. Any error messages encountered during login can be ignored.
NOTE: The viopatch uninstall action is deprecated and cannot be used to revert to the previous version. The snapshots created in the patch process are therefore necessary for reversion. Do not remove these snapshots until all validation tasks have been completed and you are certain that you will not need to revert to the previous version.
After you have validated that the patched version is operating correctly, you can run sudo viopatch snapshot remove to delete the snapshot. This action is destructive and cannot be reversed.
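For reference, the patch sequence above can be condensed into a shell sketch. This is a dry run that only echoes each command; the download path in PATCH_FILE is an assumption, so substitute the location where you placed the patch file.

```shell
# Dry-run sketch of the 4.1.x to 4.1.2 patch sequence on the OpenStack
# Management Server. Each command is echoed rather than executed; remove
# the echo wrappers to run the steps. PATCH_FILE is an assumed location.
PATCH_FILE="/tmp/vio-patch-4.1.2_4.1.2.10048327_all.deb"

patch_sequence() {
  echo "sudo viopatch snapshot take"
  echo "sudo viopatch add -l ${PATCH_FILE}"
  echo "sudo viopatch install -p vio-patch-4.1.2 -v 4.1.2.10048327"
}

patch_sequence
```

Take the vSphere Web Client snapshot of the management server before running these commands, and keep both snapshots until the patched deployment is validated.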
If you need to revert to the previous version after installing the patch, perform the following steps.
- On the OpenStack Management Server virtual machine, revert to the previous snapshot by running the following command:
sudo viopatch snapshot revert
- In the vSphere Web Client, revert the OpenStack Management Server virtual machine to the previous snapshot.
- On the OpenStack Management Server virtual machine, restart the OpenStack service by running the following command:
sudo service oms restart
- On the vCenter Server virtual machine, stop the vSphere Web Client service, delete residual files, and restart the service.
For vSphere 6.5 or later, run the following commands:
service-control --stop vsphere-client
cd /etc/vmware/vsphere-client/vc-packages/vsphere-client-serenity/
rm -rf *
cd /usr/lib/vmware-vsphere-client/server/work
rm -rf *
service-control --start vsphere-client
For vSphere 6.0, run the following commands:
service vsphere-client stop
cd /etc/vmware/vsphere-client/vc-packages/vsphere-client-serenity/
rm -rf *
service vsphere-client start
For vSphere 5.5, run the following commands:
service vsphere-client stop
cd /var/lib/vmware/vsphere-client/vc-packages/vsphere-client-serenity/
rm -rf *
service vsphere-client start
- Log out of the vSphere Web Client and log back in.
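Because the cleanup paths and service commands differ by vSphere version, a small sketch can help select the right set. The VSPHERE_VERSION value is a placeholder; the paths and commands are taken from the steps above.

```shell
# Sketch: pick the vSphere Web Client cleanup commands and residual-file
# directories by version. VSPHERE_VERSION is a placeholder for your site.
VSPHERE_VERSION="6.5"   # placeholder: "5.5", "6.0", or "6.5"
case "$VSPHERE_VERSION" in
  6.5)
    STOP="service-control --stop vsphere-client"
    START="service-control --start vsphere-client"
    DIRS="/etc/vmware/vsphere-client/vc-packages/vsphere-client-serenity/ /usr/lib/vmware-vsphere-client/server/work"
    ;;
  6.0)
    STOP="service vsphere-client stop"
    START="service vsphere-client start"
    DIRS="/etc/vmware/vsphere-client/vc-packages/vsphere-client-serenity/"
    ;;
  5.5)
    STOP="service vsphere-client stop"
    START="service vsphere-client start"
    DIRS="/var/lib/vmware/vsphere-client/vc-packages/vsphere-client-serenity/"
    ;;
esac
# The actual cleanup is: run $STOP, remove the contents of each directory
# in $DIRS, then run $START.
echo "$STOP"
echo "$DIRS"
```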
Older Versions
To upgrade from VMware Integrated OpenStack 4.0 to VMware Integrated OpenStack 4.1.2, perform the following steps:
- Upgrade to VMware Integrated OpenStack 4.1 as described in Patch VMware Integrated OpenStack.
- Apply the VMware Integrated OpenStack 4.1.2 patch as described above.
To upgrade from VMware Integrated OpenStack 3.1 to VMware Integrated OpenStack 4.1.2, perform the following steps:
- Deploy the VMware Integrated OpenStack 4.1 OVA as described in Install the New Version.
- On the new 4.1 deployment, apply the VMware Integrated OpenStack 4.1.2 patch as described above.
- Migrate to the patched deployment as described in Migrate to the New VMware Integrated OpenStack Deployment.
NOTE: If you have configured floating IP addresses on a router with source NAT disabled, enable source NAT or remove the floating IP addresses before upgrading to version 4.1.2. Floating IP addresses are no longer supported on routers with source NAT disabled.
Internationalization
VMware Integrated OpenStack 4.1.2 is available in English and seven additional languages: Simplified Chinese, Traditional Chinese, Japanese, Korean, French, German, and Spanish.
The following items must contain only ASCII characters:
- Names of OpenStack resources (such as projects, users, and images)
- Names of infrastructure components (such as ESXi hosts, port groups, data centers, and datastores)
- LDAP and Active Directory attributes
Open Source Components for VMware Integrated OpenStack
The copyright statements and licenses applicable to the open source software components distributed in VMware Integrated OpenStack 4.1.2 are available on the Open Source tab of the product download page. You can also download the disclosure packages for the components of VMware Integrated OpenStack that are governed by the GPL, LGPL, or other similar licenses that require the source code or modifications to source code to be made available.
Resolved Issues
- ISO images on vSAN datastores cannot be booted.
In previous versions, booting from an ISO on a vSAN datastore would fail.
This issue has been resolved in this release.
- The Nova placement service is unavailable in maintenance mode.
The HAProxy maintenance configuration did not include entries for the Nova placement service, rendering it unavailable when maintenance mode is invoked.
This issue has been resolved in this release.
- The viocli deployment configure command does not restart the MySQL database.
After you changed database parameters and ran viocli deployment configure, the new configuration did not take effect because the database service was not restarted.
This issue has been resolved in this release.
- Ceilometer commands return an authentication required message.
When you run Ceilometer commands, you may receive a message indicating that authentication is required even when your configuration is correct.
This issue has been resolved in this release.
- Backup information is not reflected accurately in VMware Integrated OpenStack.
If you import information for multiple backups, it is not displayed accurately in VMware Integrated OpenStack even though it is imported into the database correctly.
This issue has been resolved in this release.
- The controller fails to create temporary files.
Older environments may encounter issues because a large number of temporary files accumulated and were not cleaned up.
This issue has been resolved in this release.
- The prefix length for load balancer static routes cannot be configured.
In previous versions, static route rules configured through the GUI all used a 24-bit prefix.
You can now specify a subnet mask when configuring static routes for the load balancer.
- Resizing a volume may cause the volume to be migrated to another host.
When you resize a volume, it may be moved to a different host in the cluster even when always_resize_on_same_host is set to true.
This issue has been resolved in this release.
- An error occurs when you delete compute nodes out of order and then attempt to add a compute node.
In earlier VMware Integrated OpenStack versions, you could only delete compute nodes in descending order. For example, with three compute nodes VIO-Compute-0, VIO-Compute-1, and VIO-Compute-2, you would need to delete VIO-Compute-2 first, then VIO-Compute-1, and finally VIO-Compute-0. If you did not delete the nodes in this order, adding a node later would generate an error.
This issue has been resolved in this release.
Known Issues
The known issues are grouped as follows.
VMware Integrated OpenStack
- VMware Integrated OpenStack cannot connect to NSX-T after the NSX-T password is changed.
If you change the NSX-T password while the Neutron server is running, VMware Integrated OpenStack might fail to connect to NSX-T.
Workaround: Before changing the NSX-T password, log in to the active controller node and run the systemctl stop neutron-server command to stop the Neutron server service. The service will be restarted after you update the NSX-T password in VMware Integrated OpenStack.
- For deployments using a remote vCenter Server, the viopatch command fails to take snapshots.
In a deployment where all control virtual machines are deployed in a management vCenter Server instance and use a Nova compute node deployed in a remote vCenter Server instance, the viopatch snapshot take command cannot obtain information about the management vCenter Server instance. The command fails with the error "AttributeError: 'NoneType' object has no attribute 'snapshot'."
Workaround: On the OpenStack Management Server virtual machine, manually set the IP address, username, and password of the management vCenter Server by running the following commands:
export VCENTER_HOSTNAME=mgmt-vc-ip-address
export VCENTER_USERNAME=mgmt-vc-username
export VCENTER_PASSWORD=mgmt-vc-password
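As a sanity check, you can confirm all three variables are set before retrying the snapshot; the values below are placeholders. Note that if you invoke viopatch through sudo, sudo's default env_reset policy may strip these variables, so run it as sudo -E viopatch snapshot take or from a root shell where the variables were exported.

```shell
# Sketch: export placeholder management vCenter credentials and verify
# that none of the three variables viopatch needs is empty.
export VCENTER_HOSTNAME="192.0.2.10"                    # placeholder IP
export VCENTER_USERNAME="administrator@vsphere.local"   # placeholder user
export VCENTER_PASSWORD="placeholder-password"          # placeholder

for v in VCENTER_HOSTNAME VCENTER_USERNAME VCENTER_PASSWORD; do
  [ -n "$(printenv "$v")" ] || { echo "missing $v"; exit 1; }
done
echo "management vCenter environment ready"
```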
- The OpenStack GUI only exports the original value of the public virtual IP address.
If the public virtual IP address is changed and the VMware Integrated OpenStack or OpenStack configuration is exported and reloaded on setup, the exported configuration will contain the public virtual IP address of the original configuration, not the updated value.
Workaround: Update the public virtual IP address in the exported and saved configuration file before reloading the OpenStack configuration. Alternatively, update the public virtual IP address in the GUI when confirming the redeployment.
- The public load balancer IP address conflicts with the OpenStack API access network.
If configured outside of the GUI, the IP address of the public load balancer might overlap with the OpenStack API access network. When the configuration is exported and re-applied to the OpenStack or VMware Integrated OpenStack setup, the IP address overlap will not be allowed.
Workaround: When providing or configuring IP addresses, ensure that the public load balancer IP address does not overlap with the OpenStack API access network.
- A load balancer goes into the ERROR state.
If a load balancer is created using a subnet that is not connected to a tier-1 network router, the load balancer cannot be successfully created and will enter the ERROR state.
Workaround: Attach a tier-1 network router to the subnet before creating a load balancer.
- Certificate verification may fail on the OpenStack Management Server.
When you use the viocli command-line utility, the following error may occur:
ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)
Workaround: On the OpenStack Management Server, disable verification of the vCenter Server certificate by running the following commands:
sudo su -
export VCENTER_INSECURE=True
- When you remove a gateway of a BGP-enabled shared router, a brief network outage may occur on other BGP-enabled shared routers.
In an environment with shared routers, multiple routers may be hosted on the same edge. If BGP is enabled, the gateway IP address of one of those routers is used as the router ID. When the gateway of a router is cleared, the plugin selects the gateway of another BGP-enabled router as the new router ID. This process causes a temporary disruption in peering because the advertised routes for the other BGP-enabled routers hosted on that edge are lost.
Workaround: Use an exclusive router.
- Service disruption occurs during refresh of Nova or Neutron services.
If VMware Integrated OpenStack detects an OpenStack setting that does not meet license requirements, it tries to correct the setting by restarting Nova or Neutron services.
Workaround: None. Assign your license before deploying OpenStack to ensure that OpenStack settings meet license requirements.
- For NSX-T deployments, a new tier-0 router does not connect to tier-1 routers during router-gateway-set.
If you create a tier-0 router when you already have one configured, the UUID of the new router is not automatically written to the nsxv3.ini file. Tier-1 routers that you create afterward do not connect to your new tier-0 router.
Workaround: Manually update the nsxv3.ini file and recreate your external network.
- Find the UUID of your new tier-0 router.
- Open the /etc/neutron/plugin/vmware/nsxv3.ini file and update the UUID for the new tier-0 router.
- Restart the Neutron server.
- Delete your external network and create a new one.
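The file edit in step 2 can be scripted. This sketch operates on a temporary copy for illustration; the option name default_tier0_router is an assumption about your nsxv3.ini, so confirm the key in your file before applying the same sed to /etc/neutron/plugin/vmware/nsxv3.ini.

```shell
# Hypothetical sketch: replace the tier-0 router UUID in nsxv3.ini.
# Demonstrated on a temporary copy; the option name default_tier0_router
# and the UUID value are assumptions -- verify them against your file.
INI=$(mktemp)
printf '[nsx_v3]\ndefault_tier0_router = old-uuid\n' > "$INI"

TIER0_UUID="new-tier0-uuid"   # placeholder for the new router's UUID
sed -i "s|^default_tier0_router *=.*|default_tier0_router = ${TIER0_UUID}|" "$INI"
grep default_tier0_router "$INI"

# On the controller, restart Neutron after editing the real file:
# sudo systemctl restart neutron-server
```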
- Deleting a router interface times out.
When concurrent Heat stacks are deployed with shared NSX routers, router interface deletion can time out. The following parameters might be displayed: neutron_client_socket_timeout, haproxy_neutron_client_timeout, or haproxy_neutron_server_timeout.
Workaround: Do not use shared routers in environments where network resources frequently change. If NAT/FIP is required, use an exclusive router. Otherwise, use a distributed router.
- For NSX-V deployments, after you attach a gateway to a metadata proxy router, the OpenStack deployment cannot access the metadata server.
If you attach a gateway to a metadata proxy router, the NSX Edge vnic0 index changes from VM Network to a gateway network port group. This may prevent the OpenStack deployment from accessing the metadata server.
Workaround: Do not attach a gateway to a metadata proxy router.
- For NSX-T deployments, if you attach a firewall to a router without a gateway, firewall rules are added to the NSX router.
Firewall as a Service rules are added to a router without a gateway, even though there is no relevant traffic to match those rules.
Workaround: To activate the rules, configure a gateway for the router.
- A Nova instance fails to boot with the error "no valid host found".
Under stress conditions, booting an instance using the tenant_vdc property may fail.
Workaround: Boot the instance when system load is lighter.
- BGP tenant networks are lost on service gateway edges.
After the BGP peering between a BGP speaker and the service gateway is established, running the neutron bgp-speaker-network-remove command to disassociate the BGP speaker from the external or provider network may cause the tenant routes on the service gateway to be lost. Restoring the external or provider network to the BGP speaker using neutron bgp-speaker-network-add will not recreate the routes.
Workaround: In the nsxv.ini file, change the value of ecmp_wait_time to 5 seconds.
- iBGP peering between the DLR tenant edge (PLR) and provider gateway edge fails to properly advertise the tenant network and breaks external communication.
When iBGP peering is used, advertised routes are installed on peers without modifying the next hop. As a result, the provider gateway edge installs routes between tenant networks with the next hop IP address in the transit network range instead of the tenant's PLR edge uplink. Since the gateway edge cannot resolve the route to the transit network, communication is interrupted.
Workaround: Use eBGP peering when working with distributed routers.
- For NSX-V deployments, the admin_state parameter has no effect.
Changing the admin_state parameter to False for a Nova port does not take effect. This parameter is not supported with NSX-V.
Workaround: None.
- The cloud services router contains the IP address but not the FQDN.
During VMware Integrated OpenStack deployment, the public hostname in the load balancer configuration was not specified or did not conform to requirements for public access. The public hostname is used for external access to the VMware Integrated OpenStack dashboard and APIs.
Workaround: To change or edit the public hostname after deployment, see KB 2147624.
- When you attach a firewall to a router without a gateway, firewall rules are not added to the NSX router.
Firewall as a Service rules are added only to a router when it has a gateway. These rules have no effect on routers without a gateway because there is no relevant traffic.
Workaround: Configure a gateway for the router before attaching a firewall.
- Metadata agent HTTP communication with the Nova server poses a security risk.
The metadata agent on the edge appliance serves as a reverse proxy and communicates with an upstream Nova server to gather metadata information about the NSX environment on OpenStack. The nginx reverse proxy configuration also supports plaintext communication. The lack of TLS encryption exposes sensitive data to disclosure, and attackers on the network can also modify data from the site in transit.
Workaround: To ensure secure communication between the metadata proxy server and the Nova server, use HTTPS with CA support instead of HTTP.
- Enable Nova metadata HTTPS support by adding the following parameters to nova.conf:
  [DEFAULT]
  enabled_ssl_apis = metadata
  [wsgi]
  ssl_cert_file = nova-md-https-server-cert-file
  ssl_key_file = nova-md-https-server-private-key-file
- On the NSX Manager, select System > Trust > CERTIFICATES and import a CA certificate or chain of certificates. Record the UUIDs of the imported certificates.
- Prepare the https_mdproxy.json file in the following format:
  {
    "display_name": "https_md_proxy",
    "resource_type": "MetadataProxy",
    "metadata_server_url": "https://md-server-url",
    "metadata_server_ca_ids": ["ca-id"],
    "secret": "secret",
    "edge_cluster_id": "edge-cluster-id"
  }
- Deploy the HTTPS metadata proxy server by using the REST API:
  curl -i -k -u nsx-mgr-admin:nsx-mgr-passwd -H "content-type: application/json" -H "Accept: application/json" -X POST https://nsx-mgr-ip/api/v1/md-proxies -d "`cat ./https_mdproxy.json`"
- Configure VMware Integrated OpenStack with the UUID of the metadata proxy server created. Communication between the metadata proxy server and the Nova server is now secured by HTTPS with certificate authentication.
- Policy file customizations are not synchronized to the VMware Integrated OpenStack dashboard.
The GUI does not honor changes to the policy specified in the custom playbook.
Workaround: If you use the custom playbook to edit policy files, make the same changes in the VMware Integrated OpenStack dashboard policy files to ensure consistency.
- Tenant traffic might be blocked after you enable NSX policies in Neutron.
After you enable security-group-policy in the Neutron plugin, the NSX firewall sections might be listed in the wrong order. The correct order is as follows:
- NSX policies
- Tenant security groups
- Default sections
Workaround: In the vSphere Web Client, open the NSX Firewall page and move the sections to the correct position. To prevent this issue from occurring, create the first NSX policy before configuring VMware Integrated OpenStack.
- The router size drop-down menu is not displayed on the VMware Integrated OpenStack dashboard.
When you create an exclusive router on the VMware Integrated OpenStack dashboard, you can specify its size. However, when you change a router from shared to exclusive, the router size drop-down menu does not appear, preventing you from specifying the router size.
Workaround: Restore the default value for the router and modify the type to exclusive again. The drop-down menu should appear.
- SQL-configured users cannot be modified on the VMware Integrated OpenStack dashboard.
If your VMware Integrated OpenStack deployment is configured to use LDAP for user authentication, you cannot modify any user definitions in the VMware Integrated OpenStack dashboard, even those that are sourced from a SQL database.
Workaround: None.
- Recovery after a vSphere HA event shows synchronization and process startup failures.
vSphere HA events can affect your VMware Integrated OpenStack deployment. After vSphere recovers, run the viocli deployment status command on the OpenStack Management Server. If the resulting report shows any synchronization or process startup failures, use the workaround below.
Workaround: Manually restart all OpenStack services by running the viocli services stop command and then the viocli services start command. After the OpenStack services have restarted, run the viocli deployment status command again and confirm that there are no errors.
- Images must be VMX version 10 or later.
This issue affects stream-optimized images and OVAs. If the hardware version of an image is earlier than VMX 10, OpenStack instances created from the image will not function. This is typically experienced when OpenStack compute nodes are deployed on older ESXi versions, such as 5.5. You cannot correct such an image by modifying the image metadata (vmware_hw_version) or flavor metadata (vmware:hw_version).
Workaround: Use a newer image.
- Metadata service is not accessible on subnets created with the no-gateway option.
When a subnet is created with the no-gateway option, there is no router edge to capture the metadata traffic.
Workaround: For networks with the no-gateway option, configure a route for 169.254.169.254/32 to forward traffic to the DHCP edge IP address.
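One way to add that route, assuming python-openstackclient is available, is a subnet host route; the subnet name and DHCP edge IP below are placeholders. The sketch only composes the command:

```shell
# Hypothetical sketch: compose a host-route update that sends metadata
# traffic (169.254.169.254/32) to the DHCP edge. SUBNET and DHCP_EDGE_IP
# are placeholders for your environment.
SUBNET="no-gw-subnet"       # placeholder subnet name or ID
DHCP_EDGE_IP="10.10.0.2"    # placeholder DHCP edge IP address

cmd="openstack subnet set --host-route destination=169.254.169.254/32,gateway=${DHCP_EDGE_IP} ${SUBNET}"
echo "$cmd"
```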
- High availability may be compromised if a controller virtual machine reboots.
When a controller fails in a high availability setup, the second controller continues to provide services. However, when the initial controller reboots, it might not begin to provide services. The deployment would then be unable to switch back to the initial controller if the second controller failed.
Workaround: After a failed controller reboots in a high availability setup, review your deployment to ensure that both controllers are providing services. For more information about how to start and stop VMware Integrated OpenStack deployments, see KB 2148892.
- Special characters in datastore names are not supported by Glance.
If a datastore name includes certain non-alphanumeric characters, the datastore cannot be added to the Glance service. The following characters are reserved for other purposes and not permitted in Glance datastore names: colons (:), commas (,), slashes (/), and dollar signs ($).
Workaround: Do not use these symbols in datastore names.
- Long image upload times cause NotAuthenticated failure.
This is a known OpenStack issue first reported in the Icehouse release. See https://bugs.launchpad.net/glance/+bug/1371121.
- Volumes may be displayed as attached on the dashboard even if they failed to attach.
This is a known OpenStack issue first reported in the Icehouse release.
- Syslog settings cannot be modified after deployment through the VMware Integrated OpenStack vApp.
The syslog server configuration cannot be modified in VMware Integrated OpenStack > Management Server > Edit Settings > vApp Options after deployment.
Workaround: Modify the configuration in VMware Integrated OpenStack > OpenStack Cluster > Manage > Syslog Server.
- For VDS deployments with an SDDC provider, clusters may appear as ACTIVE but have no external routing after recovery.
If the Nginx ingress controller pod is in the error state after recovery, no external routing can occur.
Workaround: Perform the following steps to clear the error state:
- Delete the default service account and the affected Nginx ingress controller pod.
  kubectl delete serviceaccount default -n kube-system
  kubectl delete pod nginx-ingress-controller-id -n kube-system
- On the VMware Integrated OpenStack with Kubernetes virtual machine, run the vkube cluster update command.
- Deleted clusters cannot be restored.
Once the Delete Cluster and Delete Provider commands have been run, the networks, routers, and load balancers that have been deleted cannot be recovered.
Workaround: None.
- After the guest operating system of the Kubernetes cluster node is restarted, the flannel pod does not start up correctly.
Restarting the guest operating system of the Kubernetes cluster node cleans up all IP table rules. As a result, the flannel pod does not start up correctly.
Workaround: Restart the Kubernetes network proxy. You can stop the kube-proxy process and hyperkube will start a new kube-proxy process automatically.
- The "No policy assigned" error is displayed when cluster operations are performed.
A user that is a member of a group assigned to either an exclusive or shared cluster may see "No policy assigned" when performing operations on the cluster, such as running the kubectl utility. This occurs because the group information of the authenticated user is not stored correctly during the user session.
Workaround: Assign an individual user to the cluster instead of a group.
- SDDC Cloud Provider creation fails with "dpkg: unrecoverable fatal error, aborting:" message.
Creating an SDDC cloud provider fails, and the logs of the column-api container on the virtual appliance (docker logs column-api -f) contain a message similar to the following:
TASK [bootstrap-os : Bootstrap | Install python 2.x and pip] *******************
172.18.0.2 - - [06/Sep/2017 05:47:32] "GET /runs/46a74449-7123-4574-90c2-3404dfac6641 HTTP/1.1" 200 -
fatal: [k8s-node-1-2393e79d-ec6a-4e63-8f63-c6308d72496e]: FAILED! => {"changed": true, "failed": true, "rc": 100, "stderr": "Shared connection to 192.168.0.3 closed.", "stdout": ["...", "dpkg: unrecoverable fatal error, aborting:", "files list file for package 'python-libxml2' is missing final newline", "E: Sub-process /usr/bin/dpkg returned an error code (2)"]}
Workaround: Delete the SDDC cloud provider and re-create it.
- After cycling power on a VMware Integrated OpenStack with Kubernetes virtual machine with an SDDC provider, OpenStack service containers stop working and do not restart automatically.
If a VMware Integrated OpenStack with Kubernetes virtual machine with one SDDC provider is powered off and on, the virtual machine is migrated to another host. Subsequent operations on the provider, such as Kubernetes cluster creation and scale-out, will fail.
Workaround: To refresh the provider, perform the following steps:
- On the VMware Integrated OpenStack with Kubernetes virtual machine, log in as the root user.
  vkube login --insecure
- Refresh the SDDC provider.
  vkube provider refresh sddc provider-id --insecure
  You can obtain the SDDC provider ID by running the vkube provider list --insecure command.