VMware Integrated OpenStack 7.2 | 16 DEC 2021 | Build OVA 19066814, Patch 19066815
Check for additions and updates to these release notes.
What's in the Release Notes
The release notes cover the following topics:
- About VMware Integrated OpenStack
- What's New
- Upgrade to Version 7.2
- Deprecation Notices
- Resolved Issues
- Known Issues
VMware Integrated OpenStack greatly simplifies deploying an OpenStack cloud infrastructure by streamlining the integration process. VMware Integrated OpenStack delivers out-of-the-box OpenStack functionality and an easy configuration workflow through a deployment manager that runs as a virtual appliance in vCenter Server.
Support for the latest versions of VMware products:
- VMware Integrated OpenStack 7.2 is fully compatible with VMware vSphere 7.0 U2, NSX-T 3.2, and NSX-V 6.4.11.
New Features and Enhancements:
- Management Plane:
- Support NSX-V to NSX-T Migration for VIO Deployment's Neutron: The End of General Support for VMware NSX Data Center for vSphere 6.4.x is January 16, 2022. This release adds the functionality to migrate an existing VIO deployment's Neutron from NSX-V to NSX-T. NSX-T 3.2 or later is required.
- Support Import VM Feature Enhancement: In deployments with NSX-T Data Center networking, VIO can import VMs with multiple vNICs, specify volume types for non-root disks, and associate them with customized flavors.
- Support Disaster Recovery Enhancement: Provides UI support for management plane recovery and automatically updates some NSX resources after recovery. It also supports the deployment of VIO with multiple vCenters.
- Support VIOCLI Enhancement: Assists operations in deployments with multiple vCenters, including volume migration commands and instance/VM operation commands.
- Support New Health Check Tool: VIO admins can check the health status of the VIO system with the viocli command. For more information, see the VIO 7.2 administration guide.
- Support VIO Web UI Improvement: The VIO admin can back up or restore the deployment directly from the VIO web UI. In the previous release, the admin could only do this with the viocli command.
- Support One-off Patch Management: This release provides a unified management tool for one-off patches via the viocli command. All one-off patches can be installed, uninstalled, and have their status listed through the viocli command.
- Support VIO Patch Enhancement: Provides automatic pre-condition check before starting the patch. It can also track progress during patch installation.
- OpenStack Driver:
- Configure VM vNIC Connections with Multiple SR-IOV: A generic way is provided for a VNF to connect to the specified SR-IOV pNIC while the VM is placed with NUMA node alignment.
- Support Tenant VDC Enhancement: The admin can migrate VMs within the Tenant VDC. Renaming a Tenant VDC is also supported.
- Support Multiple vGPU Profiles for Nova Instance: You can use nova-manage commands to show and update vGPU information under the vCenter Server.
Upgrade to Version 7.2
- Management Plane:
- To upgrade from a previous 7.x version of VIO, use the viocli patch command. For more information, see the product installation guide.
- To upgrade from 6.0, use the blue/green upgrade procedure as described in the product installation guide.
- Refer to VMware Product Interoperability Matrices for details about the compatibility of VMware Integrated OpenStack with other VMware products.
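The patch-based upgrade path can be sketched as follows. This is a minimal sketch only: the `patch` subcommand names and the patch identifier "vio-7.2" are assumptions to verify with `viocli patch -h` on your VIO manager appliance.

```shell
# Sketch: upgrade a 7.x deployment with the viocli patch workflow.
# Subcommand names and the patch name "vio-7.2" are assumptions; confirm
# them against `viocli patch -h` before running.
if command -v viocli >/dev/null 2>&1; then
  viocli patch list               # show patches known to the appliance
  viocli patch install vio-7.2    # hypothetical patch name
  status="patch applied"
else
  status="viocli not found; run on the VIO manager appliance"
fi
echo "$status"
```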
Deprecation Notices
- The following networking features have been deprecated and will be removed in the next VIO release:
- The NSX Data Center for vSphere driver for Neutron.
- Neutron FWaaSv2 will be deprecated in a future version.
The resolved issues are grouped as follows.
Resolved VIO Management Issues
- 2836971 fixed: viocli get backup fails when a user-specified backup name is passed as a parameter.
You cannot get or delete a backup by passing the backup file name as a parameter to viocli.
- 2700928 fixed: The Kubernetes ingress controller used by the VMware Integrated OpenStack Manager Web UI uses an outdated nginx 1.15.10 version.
Nginx is now updated to 1.20.1.
- 2824166 fixed: Octavia service does not support policy in Custom Resource.
The admin cannot use the viocli update command to customize the Octavia policy.
- 2882017 fixed: OVS configuration is not recovered after restarting openvswitch-vswitchd pods.
OVS configuration is not recovered after restarting openvswitch-vswitchd pods.
- 2759804 fixed: Cannot choose not to ignore certificate validation if the vCenter and NSX certificate is signed by an intermediate CA.
When the vCenter and NSX certificate is signed by an intermediate CA, some VIO services cannot be configured properly to carry out certificate validation. Failure can be seen in various formats. For example, "ignore certificate validation" cannot be unselected when adding or editing vCenter or NSX.
- 2863416 fixed: Failure in building and running an instance on the datastore_cluster backend.
Nova instances cannot be launched based on the datastore_cluster backend.
- 2767303 fixed: Some Day2 operations fail to work after changing the vCenter username and the password from VIO Manager Web UI.
When a user updates the vCenter credentials in the VIO Manager Web UI, the OpenStack services halt because VIO's control plane cannot communicate with vCenter: the vCenter secret in the Kubernetes cloud provider is not updated.
- 2852405 fixed: vSAN Datastore usage is getting high due to stale configdrive.iso files.
Stale configdrive.iso files will not be deleted with the deletion of Nova instances in vSAN Datastore.
- 2857728 fixed: The VIO dashboard becomes unresponsive every few hours and is accessible again only after deleting the ingress pods.
In VIO with the DVS backend, a large number of metadata requests may slow down ingress, making the VIO dashboard inaccessible.
- 2851020 fixed: Failed to boot an instance from a non-template image.
The second Nova instance created from an image with the "vmware_create_template=false" and "img_linked_clone=false" properties set cannot be booted.
- 2746353 fixed: Adding commonly required nova-compute parameters globally to all nova-compute nodes.
In VIO 7.2, commonly required nova-compute parameters can be added globally to all nova-compute nodes by running "viocli update nova".
- 2814396 fixed: The Load Balancer is hung up in the PENDING_UPDATE state.
In VIO 7.0.1/7.1, when two DELETE updates were received on the same object, the second update pinned the LB in the PENDING_UPDATE state. In VIO 7.2, under the same circumstances, the update to the LB is skipped, so it does not change to PENDING_UPDATE.
- 2795219 fixed: Unable to evacuate the old Storage Volumes using "viocli prepare datastore" in a multi-vCenter environment.
In VIO 7.2, the "--vcenter-name" flag is added to the "viocli prepare datastore" command, which allows the vCenter to be specified. If not specified, the management vCenter is assumed.
- 2759873 fixed: VIO partially fails to deploy NSX LB.
The load balancer might not be created if the full chain of certificates is uploaded to NSX using barbican.
- 2761460 fixed: Setting fails while adding multiple fixed IPs to a single port. The error is reported as BadRequestException: 400: Client Error for URL.
In VIO 7.0.1/7.1, adding multiple fixed IPs to a single port failed with the error BadRequestException: 400: Client Error, for URL: https://frn-pr-vn-01.corp.fortinet.com:9696/v2.0/ports/a840e560-6614-4b44-a275-2898a39b14cb. In VIO 7.2, the user can add multiple fixed IPs per port.
- 2764317 fixed: Upon successful migration, migrator pod logs are not available in the VIO support bundle.
Once the migration succeeds, the VIO control plane gets reconfigured, and the migrator pod gets deleted. Therefore, its log is not captured on the support bundle.
- 2768005 fixed: Horizon UI shows "xmltooling::IOException" when login with SAML Federation IdP.
When configuring VIO with external SAML IdP, there is an "xmltooling::IOException" error if the user tries to login with SAML Federation.
- 2760150 fixed: There is no response when saving the firewall rules changes from Horizon UI.
If any of the required options marked with "*" is not updated when editing firewall rules, the UI shows no response after saving the changes.
- 2846881 fixed: The value for "admin_user" is not validated in the VIO public API layer.
Prior to VIO 7.2, a user could enter an arbitrary value for the "admin_user" attribute when using the VIO public API to deploy an OpenStack cluster. This attribute is used for the VIO OpenStack admin account name, and its value can only be "admin", so a custom value in a public API request does not take effect.
Known Issues
- Public API rate limiting is not available.
In VMware Integrated OpenStack 7.2, it is not possible to enforce rate limiting on public APIs.
Workaround: None. This feature will be offered in a later version.
- Creating a load balancer with a private subnet that is not attached to a router results in an ERROR state.
With the neutron NSX-T plugins such as MP and policy plugins, creating a load balancer with a private subnet that is not attached to a router results in a load balancer that is in an ERROR state and the error will not be reported to the user.
Workaround: Create the load balancer with a subnet that is attached to a router.
- OpenStack port security cannot be enforced on direct ports in the NSX-V Neutron plugin.
Enabling port security for ports with vnic-type direct can be ineffective. Security features are not available for direct ports.
- Cannot log in to VIO if vCenter and NSX password contain $$.
If the VIO account configured for the underlying vCenter and NSX uses a password that contains "$$", VIO cannot complete authentication for vCenter and NSX. The OpenStack pods can run into CrashLoopBackOff.
Workaround: Use passwords that do not contain "$$".
- Users cannot download a Glance image from the OpenStack CLI client.
When downloading an image from the OpenStack CLI, there is an error: "[Errno 32] Corrupt image download." This is because VIO stores the image as a VM template in the vSphere datastore by default, and the md5sum value is not preserved between the VMDK and the VM template.
Workaround: The Glance image can be downloaded with either of the following configurations:
- Set the option vmware_create_template to false in the Glance configuration.
- Create the Glance image using the OpenStack CLI with the property "vmware_create_template=false".
- After setting a firewall group administratively DOWN (state=DOWN), the firewall group operational status is always DOWN, even after the firewall group admin state is brought back UP.
The neutron-fwaas service will ignore changing operational status on transitions that do not involve adding a port or removing a port from the firewall group.
Workaround: Add or remove a port, or you can add and remove a port that is already bound to the firewall group.
- Clicking edit and save on a Neutron segment accidentally enables multicast.
In NSX-T policy UI, if any unrelated changes are made in the multicast routing, multicast routing will be enabled on the segment.
Workaround: Explicitly disable multicast in UI when editing the segment.
- Add member operations fail with "Provider 'vmwareedge' reports error: Could not retrieve certificate::" (HTTP status 500).
Cannot add or remove members from TERMINATED_HTTPS Octavia load balancers.
Workaround: Use OpenStack CLI to add or remove members.
1. Fetch the tls_container_ref for all the impacted users.
2. Find container, secret, and certificate URIs.
3. Retrieve Octavia service user-id.
4. Add URIs retrieved in step 2 to ACLs for user-id retrieved in step 3.
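The four steps can be sketched with the OpenStack CLI. This is a sketch under assumptions: the listener name is a placeholder, the Octavia service user is assumed to be named "octavia", and the listener field is assumed to be default_tls_container_ref; verify all three against your deployment.

```shell
# Sketch of the ACL workaround; names are placeholders/assumptions.
if command -v openstack >/dev/null 2>&1; then
  listener="my-terminated-https-listener"                  # placeholder
  # 1. Fetch the TLS container reference from the affected listener.
  ref=$(openstack loadbalancer listener show "$listener" \
          -c default_tls_container_ref -f value)
  # 2. Find the container, secret, and certificate URIs.
  openstack secret container get "$ref"
  # 3. Retrieve the Octavia service user id (user name assumed).
  uid=$(openstack user show octavia -f value -c id)
  # 4. Grant the Octavia user read access to the container.
  openstack acl user add --user "$uid" "$ref"
  msg="ACLs updated for $listener"
else
  msg="openstack CLI not available"
fi
echo "$msg"
```

Repeat step 4 for the secret URIs listed in step 2 as needed.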
- Tier1 gateways could not rollback completely during large-scale MP2P migration.
Some tier1 gateways could not roll back completely, and the deletion status remained in progress during large-scale MP2P migration. The unsuccessful rollback might be caused by an error during migration.
Workaround: Restore UA and re-migrate.
- Duplicate entry error in Keystone Federation.
After deleting the OIDC in Keystone Federation, if the same user tries to log in with OIDC, authentication fails with a 409 message.
Workaround: Delete the user either through Horizon or OpenStack CLI.
1. In Horizon, log in with an admin account.
2. Set the domain context with the federated domain.
3. On the user page, delete the user whose User Name column is
Alternatively, in the OpenStack CLI:
openstack user list --domain <federated domain name>
openstack user delete <user id> --domain <federated domain name>
- Fail to enable Ceilometer when there are 10k Neutron tenant networks.
When a large number of resources such as networks are created in vSphere, VIO generates many custom resources (CRs) for those objects. If the number of CRs is too big, the VIO Manager Web UI backend API fails because the response data are too large for HTTP requests.
Workaround: In the VIO manager, manually delete the discovery Custom Resources.
The CRs can be listed with the following command:
kubectl -n openstack get discoveries.vio.vmware.com
The CRs can be deleted with the following command. For example:
kubectl -n openstack delete discoveries.vio.vmware.com vcenter-vcenter2-networks-2
- The certificate needs to be CA signed and re-applied after restoration.
The certs secret, which contains the VIO private key and certificate, is not currently backed up. After a not-in-place restoration, the certificate imported previously is not present in the new deployment.
Workaround:
1. Save the certs secret from the original deployment.
osctl get secret certs -oyaml > certs.yaml
2. After restoration, replace the "private_key" and "vio_certificate" values in the certs secret with the data from step 1.
3. Stop/Start services.
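Steps 1 and 2 can be sketched as follows. The `osctl patch` invocation is an assumption (osctl is the VIO wrapper shown in step 1 and is assumed to mirror kubectl syntax); adapt the field extraction to your secret's actual layout.

```shell
# Sketch: save and re-apply the certs secret around a restoration.
# The `osctl patch` syntax is an assumption (kubectl-style).
if command -v osctl >/dev/null 2>&1; then
  # Step 1, before restoration: save the secret.
  osctl get secret certs -oyaml > certs.yaml
  # Step 2, after restoration: re-apply the saved key/cert values.
  key=$(grep ' private_key:' certs.yaml | awk '{print $2}')
  crt=$(grep ' vio_certificate:' certs.yaml | awk '{print $2}')
  osctl patch secret certs --type merge \
    -p "{\"data\":{\"private_key\":\"$key\",\"vio_certificate\":\"$crt\"}}"
  result="certs secret re-applied; now stop/start services (step 3)"
else
  result="osctl not found; run on the VIO manager appliance"
fi
echo "$result"
```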
- Cannot create instances on a specific Nova-Compute node and the Nova-Compute log is stuck.
When creating an instance, it stays in the BUILD state and never succeeds. The nova-compute log contains only a few entries and no further information.
Workaround: Restart the nova-compute pod manually.
- The FWaaS v2 rule is enforced on all downlink ports of a Neutron router, regardless of FWaaS bindings.
This behavior is specific to NSX-V distributed routers. For these routers, the NSX-V implementation sits between the PLR and the DLR. FWaaS rules run on the PLR, but downlink interfaces are on the DLR, so the firewall rules apply to all traffic going in and out of the downlink.
Workaround: For distributed routers, explicitly include source and target subnets to match the downlink subnet CIDR, make sure the firewall group applies to each port on the router, or use a centralized router instead of a distributed router.
- Disabling DRS on the edge cluster can trigger deletion of the resource pool used by VIO.
Disabling DRS on the edge cluster can trigger the deletion of the VIO resource pool, which will then no longer manage edge appliances.
Workaround: Do not perform the following steps on the edge cluster:
1. Right-click on the edge cluster.
2. Disable DRS on the edge cluster.
- After migrating from NSX-V to NSX-T, VMs are not able to access the Nova metadata service. New VMs created after the migration can access the Nova metadata service.
The VMs migrated from NSX-V have a static route for redirecting metadata traffic via the DHCP server. This configuration does not work on NSX-T, as NSX-T injects an on-link route for metadata access.
Workaround: Reboot the VM, or if you have access to the VM, remove the static route via the DHCP server and renew the DHCP lease so that the new lease will be provided by the NSX-T DHCP server with the appropriate route for accessing the Nova metadata service.
- When using the "viocli update" command to update a CR, an error may occur if you enter a large integer as a value. For example, profile_fb_size_kb: 2097152.
Large integers will get converted to scientific notation in some cases by VIO helm charts.
Workaround: Add quote around the large integer. For example, profile_fb_size_kb: "2097152".
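The conversion can be illustrated with ordinary float formatting: a seven-digit integer formatted as a float picks up scientific notation, while a quoted string keeps its literal digits. This is an analogy for the class of conversion the chart templating performs, not the chart code itself.

```shell
# Unquoted large values can be treated as floats and rendered in
# scientific notation; quoting preserves the literal string.
printf '%g\n' 2097152     # prints 2.09715e+06
printf '%s\n' "2097152"   # prints 2097152
```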
- V2T migrator pod always fails while creating load balancers on NSX-T.
V2T migration will fail for load balancers with backend members on external networks. This is because NSX-T does not support this configuration. Failure will occur during the "API replay" stage but before N/S cutover or host migration can start. Therefore no rollback is needed.
Workaround: Remove members on external networks from the LB pool before triggering the V2T migration.
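Removing such members before triggering the migration can be sketched with the load balancer CLI; the pool and member names below are placeholders.

```shell
# Sketch: inspect pool members and delete those on external networks
# before starting the V2T migration. Names are placeholders.
if command -v openstack >/dev/null 2>&1; then
  pool="web-pool"                                  # placeholder pool name
  openstack loadbalancer member list "$pool"       # check member subnets
  openstack loadbalancer member delete "$pool" "member-on-external-net"
  note="external members removed from $pool"
else
  note="openstack CLI not available"
fi
echo "$note"
```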
- Snapshots on a controller node prevent some VIO operations.
The persistent volume on a controller node cannot be moved if a snapshot of the controller node exists. Therefore, taking a snapshot of a controller is not supported by VIO.
Workaround: Delete all snapshots on controller nodes.
- Volumes created from images are always bootable by default.
If you include the --non-bootable parameter when creating a volume from an image, the parameter does not take effect.
Workaround: After the volume has been created, update it to be non-bootable.
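The post-creation update can be done with the volume CLI; the volume name is a placeholder.

```shell
# Mark a volume non-bootable after it has been created from an image.
if command -v openstack >/dev/null 2>&1; then
  openstack volume set --non-bootable "my-volume"   # placeholder name
  out="volume updated"
else
  out="openstack CLI not available"
fi
echo "$out"
```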
- During V2T migration, NSX host migration can fail during the "Resolve Configuration" with the following feedback request: "Incorrect DRS configuration for Maintenance Mode migration".
vSphere DRS during V2T migration is not supported for some configurations. For more information on supported DRS modes, see the NSX-T V2T migration guide.
Workaround: Disable vSphere DRS if one of the following applies:
- In-Place migration: In this mode, hosts are not put in maintenance mode during migration. This mode is only available if the environment is vSphere 6.x (VDS will be migrated to N-VDS).
- Automated maintenance migration: In this mode, the vCenter Server version is 6.5 or 6.7.
- With VIO NSX-V integration, OpenStack instances are unable to send and receive traffic when multiple compute clusters are defined.
The Neutron default security group, which allows DHCP traffic, is created when the neutron-server is first started. If compute clusters are added to the VIO deployment, they are not automatically added to the default security group.
Workaround: Use NSX admin utilities to rebuild the default firewall section as follows:
nsxadmin -r firewall-sections -o nsx-update
- V2T migrator job fails in API while migrating Neutron networks to NSX-T. An error like the following is reported: 2021-05-18 17:16:00,612 ERROR Failed to create a network:: Invalid input for operation: Segmentation ID cannot be set with transparent VLAN.
The NSX-V Neutron plugin allows for setting VLAN transparent network settings for provider VLAN network. This setting does not make much sense as the provider network will only use a specific VLAN. The NSX-T Neutron plugin does not allow such configuration.
Workaround: Unset VLAN transparency for the network and retry.
- In some cases, API replay will fail with internal error while configuring router external gateways. The Neutron migrator job logs will display an error like ERROR Failed to add router gateway with port : Request Failed: internal server error while processing your request.
This happens because the temporary Neutron server running inside the migrator job's pod checks for Tier-1 realization on NSX-T. In some rare cases, this realization might be extremely slow and time out. When this issue manifests, the temporary Neutron server logs (/var/log/migration/neutron-server-tmp.log) will report an error like the following: 2021-05-07 10:36:31.909 472 ERROR neutron.api.v2.resource vmware_nsxlib.v3.exceptions.RealizationTimeoutError: LogicalRouter ID /infra/tier-1s/ was not realized after 50 attempts with 1 seconds sleep
Workaround: There is no workaround for this issue. In most cases retrying will be enough. If the issue occurs persistently, it will be necessary to troubleshoot NSX-T to find the root cause for slow realization.
- In some cases, an OpenStack load balancer goes into the ERROR state after adding a member to one of its pools.
This happens because the selected member is configured on the downlink of a router that is already attached to an NSX load balancer. In this case, the OpenStack driver is not able to re-use the existing LBS.
Workaround: Re-create the OpenStack load balancer using a VIP on a downlink network. Then associate a floating IP with the VIP port.
- V2T migration fails with an error like: INFO vmware_nsx.shell.admin.plugins.nsxv.resources.migration NSX : does not belong to Neutron. Please delete it. However, the referenced edge is created by the VIO Neutron service.
This is an issue with the NSX-V plugin where not all the edges created as a part of a pool are added to Neutron DB mappings and are therefore wrongly identified as not belonging to Neutron.
Workaround: Ignore the warning. Once the user is sure there is no other failing validation check to address, set strict_validation to False in migrator.conf.json and re-launch the migrator job.
- Dealing with clusters in DOWN status.
Host migration cannot start if any of the compute clusters is DOWN on NSX-V. You must ensure all clusters are in a healthy status before starting the migration. In case some hosts persistently fail, they should be removed from their compute cluster. VMs on this host will be migrated to other hosts in the cluster.
Workaround: Once V2T migration is completed, you can reconfigure impacted hosts for NSX-T, and add them back to their original clusters.
- After the migration, there can be Neutron logical ports for non-existing load balancers.
After V2T migration, Neutron ports for load balancers in an error state are migrated, even if the corresponding load balancers are not migrated. The V2T migration process skips Octavia load balancers in an ERROR state. This is because load balancers in such a state might not be correctly implemented on NSX-T; in addition, the ERROR state is immutable, so you must delete these load balancers. The corresponding logical ports are however migrated.
Workaround: These ports can be safely deleted once the migration is completed. The issue can be completely avoided by deleting load balancers in the ERROR state before starting the migration.
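Deleting ERROR-state load balancers before starting the migration can be sketched as follows; the awk filter over the list output is illustrative.

```shell
# Sketch: find load balancers in ERROR provisioning status and delete
# them (cascading to listeners/pools) before the V2T migration.
if command -v openstack >/dev/null 2>&1; then
  openstack loadbalancer list -f value -c id -c provisioning_status |
    awk '$2 == "ERROR" { print $1 }' |
    while read -r lb; do
      openstack loadbalancer delete --cascade "$lb"
    done
  summary="ERROR load balancers deleted"
else
  summary="openstack CLI not available"
fi
echo "$summary"
```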
- Failure while removing a Neutron router gateway.
The error reported is "Cannot delete a router_gateway as it still has lb service attachment." However, there is no Octavia load balancer attached to any of the router's subnets; likewise, no Octavia load balancer is attached to the external network with members among the router downlinks.
Workaround: Prior to VIO 7.2, the only workaround is to detach the load balancer from the Tier-1 router on the backend. With VIO 7.2, an admin utility is provided to remove stale load balancers from NSX-T:
nsxadmin -r lb-services -o list-orphaned
nsxadmin -r lb-services -o clean-orphaned
- The console and log of recovered Nova instances cannot be shown on Horizon.
After the disaster recovery procedure, log in to Horizon at the target site and navigate to Project > Compute > Instances. The log and console display empty for every recovered instance.
Workaround: Create a new Nova instance at the target site, then check each recovered Nova instance again; the console and log are then shown.
- V2T migration fails during the N/S cutover at the "edge" stage. The message returned by the UI is "Transport zone not found". On the NSX-T manager instance where the migration-coordinator service is running, /var/log/migration-coordinator/v2t/cm.log shows that the failure occurs while creating the "migration transit LS" for a distributed edge in the Neutron backup pool (edge name starts with "backup-").
The V2T migrator tries to create a transit switch for the N/S cutover for each NSX-V distributed router. Therefore, it scans all distributed edges and for each one performs this operation. In order to succeed, the edge must have at least a downlink interface, so the migrator can find the relevant transport zone. However, Neutron might keep some distributed edges in the "backup pool". These are unconfigured, ready-to-use edge appliances, which however cause the V2T migrator to fail.
Workaround: From NSX-V, delete distributed edges in the Neutron backup pool. This will have no effect on Neutron/NSX-T operations.