VMware Integrated OpenStack 7.2.0.1 | 07 APR 2022 | Build OVA 19576490, Patch 19575596

Check for additions and updates to these release notes.

What's in the Release Notes

About VMware Integrated OpenStack

VMware Integrated OpenStack greatly simplifies deploying an OpenStack cloud infrastructure by streamlining the integration process. VMware Integrated OpenStack delivers out-of-the-box OpenStack functionality and an easy configuration workflow through a deployment manager that runs as a virtual appliance in vCenter Server.

Upgrade to Version 7.2.0.1

  • To upgrade from a previous 7.0, 7.0.1, or 7.1 version of VIO, use the viocli patch command. For more information, see the product installation guide.

Important: 7.2.0.1 contains only two critical fixes for 7.2, and a direct upgrade from 7.2 to 7.2.0.1 is not supported. See Resolved Issues for how to apply the two fixes in 7.2 setups.

  • To upgrade from 6.0, use the blue/green upgrade procedure described in the product installation guide.

Compatibility

Refer to VMware Product Interoperability Matrices for details about the compatibility of VMware Integrated OpenStack with other VMware products.

Deprecation Notices 

  • The following networking features have been deprecated and will be removed in the next VIO release:
    • The NSX Data Center for vSphere driver for Neutron.
  • Neutron FWaaSv2 will be deprecated in a future version.

Resolved Issues

  • VIO disaster recovery fails in VIO 7.2.

    For 7.2 deployments, deploy the 7.2.0.1 OVA to recover in the destination vCenter.

  • The Kubernetes ingress controller used by the VMware Integrated OpenStack Manager Web UI uses an outdated Nginx 1.15.10 version.

    Nginx is now updated to 1.20.1 for both fresh installations and patched setups. For 7.2 deployments upgraded from previous 7.x releases, follow kb.vmware.com/s/article/88012.

Known Issues

  • Public API rate limiting is not available.

    In VMware Integrated OpenStack 7.2, it is not possible to enforce rate limiting on public APIs.

    Workaround: None. This feature will be offered in a later version.

  • Creating a load balancer with a private subnet that is not attached to a router results in an ERROR state.

    With the Neutron NSX-T plugins (both the MP and Policy plugins), creating a load balancer with a private subnet that is not attached to a router produces a load balancer in an ERROR state, and the error is not reported to the user.

    Workaround: Create the load balancer with a subnet that is attached to a router.

  • OpenStack port security cannot be enforced on direct ports in the NSX-V Neutron plugin.

    Enabling port security for ports with vnic-type direct can be ineffective. Security features are not available for direct ports.

    Workaround: None.

  • Cannot log in to VIO if the vCenter and NSX passwords contain "$$".

    If the VIO account configured for the underlying vCenter and NSX uses a password that contains "$$", VIO cannot complete authentication for vCenter and NSX because of the "$$" in the password. The OpenStack pods can enter CrashLoopBackOff.

    Workaround: Use other passwords that do not contain "$$".
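    The release notes do not identify the exact layer inside VIO that mangles the password, but the failure mode can be illustrated with a generic templating layer. The snippet below is only an analogous sketch: in Python's string.Template (as in many template engines), "$$" is an escape sequence for a literal "$", so a credential containing "$$" no longer matches after substitution.

    ```python
    from string import Template

    # Hypothetical illustration: "$$" is an escape for "$" in template
    # substitution, so the rendered credential loses one "$" character.
    configured_password = "Secret$$123"
    rendered = Template(configured_password).substitute()

    print(rendered)  # "Secret$123" -- no longer the configured password
    ```

    Any password avoiding "$$" passes through such a layer unchanged, which is why the workaround above is effective.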

  • Users cannot download a Glance image from the OpenStack CLI client.

    When downloading an image from the OpenStack CLI, an error occurs: "[Errno 32] Corrupt image download." This happens because VIO stores the image as a VM template in the vSphere datastore by default, and the md5sum value is not preserved between the VMDK and the VM template.

    Workaround: The Glance image can be downloaded with either of the following configurations:

    • Set the option vmware_create_template to false in the Glance configuration.
    • Create the Glance image using the OpenStack CLI with the property "vmware_create_template=false".
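    Because the md5sum is not preserved for template-backed images, it can help to verify a downloaded image against the checksum that Glance reports (for example, from `openstack image show -c checksum`). A minimal sketch, assuming the image bytes and the expected checksum have already been fetched:

    ```python
    import hashlib

    def verify_image(image_bytes: bytes, expected_md5: str) -> bool:
        """Compare the md5 of downloaded image data with Glance's checksum."""
        return hashlib.md5(image_bytes).hexdigest() == expected_md5

    # Example with in-memory data standing in for a downloaded image:
    data = b"fake image payload"
    checksum = hashlib.md5(data).hexdigest()
    print(verify_image(data, checksum))        # intact download
    print(verify_image(data[:-1], checksum))   # truncated/corrupt download
    ```

    A mismatch here is exactly the condition that surfaces as "[Errno 32] Corrupt image download" in the CLI.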

  • Clicking edit and save on a Neutron segment accidentally enables multicast.

    In the NSX-T Policy UI, editing and saving a segment with unrelated changes can enable multicast routing on the segment.

    Workaround: Explicitly disable multicast in UI when editing the segment.

  • Add member operations fail with "Provider 'vmwareedge' reports error: Could not retrieve certificate::" (HTTP status 500).

    Members cannot be added to or removed from TERMINATED_HTTPS Octavia load balancers.

    Workaround: Use OpenStack CLI to add or remove members.

    1. Fetch the tls_container_ref for all the impacted users.

    2. Find container, secret, and certificate URIs.

    3. Retrieve Octavia service user-id.

    4. Add URIs retrieved in step 2 to ACLs for user-id retrieved in step 3.

  • Tier-1 gateways cannot roll back completely during large-scale MP2P migration.

    Some Tier-1 gateways cannot roll back completely, and the deletion status remains in progress during large-scale MP2P migration. The unsuccessful rollback might be caused by an error during migration.

    Workaround: Restore UA and re-migrate.

  • Duplicate entry error in Keystone Federation.

    After deleting the OIDC in Keystone Federation, if the same user tries to log in with OIDC, authentication fails with a 409 message.

    Workaround: Delete the user either through Horizon or OpenStack CLI.

    For example:

    1. In Horizon, log in with an admin account.

    2. Set the domain context with the federated domain.

    3. In the user page, delete the users whose User Name column is None.

    In the OpenStack CLI:

    openstack user list --domain <federated domain name>

    openstack user delete <user id> --domain <federated domain name>
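    The CLI cleanup can also be scripted against `openstack user list -f json` output. A minimal sketch, assuming the listing has been loaded as a list of dictionaries; it only selects the stale entries, and deletion is still performed with `openstack user delete`:

    ```python
    def stale_federated_users(users):
        """Return IDs of federated user records whose Name is missing."""
        return [u["ID"] for u in users if u.get("Name") in (None, "None", "")]

    # Example shaped like `openstack user list -f json` output:
    users = [
        {"ID": "u-1", "Name": "alice"},
        {"ID": "u-2", "Name": None},
    ]
    print(stale_federated_users(users))  # candidates for `openstack user delete`
    ```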

  • Enabling Ceilometer fails when there are 10k Neutron tenant networks.

    When large numbers of resources, such as networks, are created in vSphere, VIO generates many custom resources (CRs) for those objects. If the number of CRs is too large, the backend API of the VIO Manager Web UI fails because the response data is too large for HTTP requests.

    Workaround: In the VIO manager, manually delete the discoveries custom resources.

    The CRs can be listed with the following command:

    kubectl -n openstack get discoveries.vio.vmware.com

    A CR can be deleted with the following command. For example:

    kubectl -n openstack delete discoveries.vio.vmware.com vcenter-vcenter2-networks-2

  • The certificate needs to be CA signed and re-applied after restoration.

    The certs secret, which contains the VIO private key and certificate, is not currently backed up. After a not-in-place restoration, the certificate imported previously is not present in the new deployment.

    Workaround:

    1. Save the certs secret from the original deployment:

       osctl get secret certs -oyaml > certs.yaml

    2. After restoration, replace the "private_key" and "vio_certificate" values in the certs secret with the data from step 1.

    3. Stop and start services.
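    The value replacement in step 2 can be scripted. A minimal sketch, assuming the saved certs.yaml and the new secret have already been loaded into dictionaries (the key names "private_key" and "vio_certificate" come from the step above; loading and re-applying the secret is left out):

    ```python
    def restore_cert_data(saved_secret: dict, new_secret: dict) -> dict:
        """Copy the backed-up key/certificate values into the new certs secret."""
        for key in ("private_key", "vio_certificate"):
            new_secret["data"][key] = saved_secret["data"][key]
        return new_secret

    # Example with placeholder values instead of real secret data:
    saved = {"data": {"private_key": "b64-old-key", "vio_certificate": "b64-old-cert"}}
    fresh = {"data": {"private_key": "b64-new-key", "vio_certificate": "b64-new-cert"}}
    print(restore_cert_data(saved, fresh)["data"]["private_key"])
    ```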

  • Cannot create instances on a specific Nova-Compute node and the Nova-Compute log is stuck.

    When creating an instance, it stays in the BUILD state and never succeeds. The nova-compute log contains only a few entries and no further information.

    Workaround: Restart the nova-compute pod manually.

  • The FWaaS v2 rule is enforced on all downlink ports of a Neutron router, regardless of FWaaS bindings.

    This behavior is specific to NSX-V distributed routers. For these routers, the NSX-V implementation is split between the PLR and the DLR. FWaaS rules run on the PLR, but downlink interfaces are on the DLR. Therefore, the firewall rules apply to all traffic going in and out of the downlink.

    Workaround: For distributed routers, explicitly include source and target subnets to match the downlink subnet CIDR. Either make sure the firewall group applies to each port on the router or use a centralized router instead of a distributed router.

  • Disabling DRS on the edge cluster can trigger deletion of the resource pool used by VIO.

    Disabling DRS on the edge cluster can trigger deletion of the VIO resource pool, and edge appliances will no longer be managed.

    Workaround: Do not perform the following steps on the edge cluster:

    1. Right-click on the edge cluster.

    2. Disable DRS on the edge cluster.

  • After migrating from NSX-V to NSX-T, migrated VMs cannot access the Nova metadata service. New VMs created after the migration can access it.

    The VMs migrated from NSX-V have a static route for redirecting metadata traffic via the DHCP server. This configuration does not work on NSX-T, as NSX-T injects an on-link route for metadata access.

    Workaround: Reboot the VM. Alternatively, if you have access to the VM, remove the static route via the DHCP server and renew the DHCP lease, so that the new lease provided by the NSX-T DHCP server includes the appropriate route for accessing the Nova metadata service.

  • When using the "viocli update" command to update a CR, an error may occur if you enter a large integer as a value. For example, profile_fb_size_kb: 2097152.

    In some cases, the VIO Helm charts convert large integers to scientific notation.

    Workaround: Add quotes around the large integer. For example, profile_fb_size_kb: "2097152".
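    The underlying problem is generic: a value parsed as a number can be re-serialized in scientific notation, while a quoted value stays a string and passes through unchanged. VIO's Helm charts are Go templates, so the snippet below is only an analogous Python illustration of the class of problem, not the actual chart code:

    ```python
    # An unquoted value is parsed as a number and may be re-rendered in
    # scientific notation by generic numeric formatting:
    print("%g" % 2097152)      # 2.09715e+06 -- no longer the literal value

    # A quoted value stays a string and is emitted verbatim:
    print("%s" % "2097152")    # 2097152
    ```

    Quoting the value, as in the workaround above, keeps it a string all the way through serialization.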

  • V2T migrator pod always fails while creating load balancers on NSX-T.

    V2T migration will fail for load balancers with backend members on external networks. This is because NSX-T does not support this configuration. Failure will occur during the "API replay" stage but before N/S cutover or host migration can start. Therefore no rollback is needed.

    Workaround: Remove members on external networks from the LB pool before triggering the V2T migration.

  • Snapshots on a controller node prevent some VIO operations.

    The persistent volume on a controller node cannot be moved if a snapshot of the controller node exists. Therefore, taking a snapshot of a controller is not supported by VIO.

    Workaround: Delete all snapshots on controller nodes.

  • Volumes created from images are always bootable by default.

    If you include the --non-bootable parameter when creating a volume from an image, the parameter does not take effect.

    Workaround: After the volume has been created, update it to be non-bootable.

  • During V2T migration, NSX host migration can fail during the "Resolve Configuration" with the following feedback request: "Incorrect DRS configuration for Maintenance Mode migration".

    vSphere DRS is not supported during V2T migration for some configurations. For more information on supported DRS modes, see the NSX-T V2T guide.

    Workaround: Disable vSphere DRS if one of the following applies:

    • In-Place migration: In this mode, hosts are not put in maintenance mode during migration. This mode is only available if the environment is vSphere 6.x (VDS will be migrated to N-VDS).
    • Automated maintenance migration: In this mode, the vCenter Server version is 6.5 or 6.7.

  • With VIO NSX-V integration, OpenStack instances are unable to send and receive traffic when multiple compute clusters are defined.

    The Neutron default security group, which allows DHCP traffic, is created when the neutron-server is first started. If compute clusters are added to the VIO deployment, they are not automatically added to the default security group.

    Workaround: Use NSX admin utilities to rebuild the default firewall section as follows:

    nsxadmin -r firewall-sections -o nsx-update

  • The V2T migrator job fails in the API while migrating Neutron networks to NSX-T. An error like the following is reported:

    2021-05-18 17:16:00,612 ERROR Failed to create a network:: Invalid input for operation: Segmentation ID cannot be set with transparent VLAN.

    The NSX-V Neutron plugin allows setting VLAN transparency on a provider VLAN network. This setting has little effect, as the provider network only uses a specific VLAN. The NSX-T Neutron plugin does not allow such a configuration.

    Workaround: Unset VLAN transparency for the network and retry.

  • In some cases, API replay will fail with internal error while configuring router external gateways. The Neutron migrator job logs will display an error like ERROR Failed to add router gateway with port : Request Failed: internal server error while processing your request.

    This happens because the temporary Neutron server running inside the migrator job's pod checks for Tier-1 realization on NSX-T. In some rare cases, this realization might be extremely slow and time out. When this issue manifests, the temporary Neutron server logs (/var/log/migration/neutron-server-tmp.log) report an error like the following:

    2021-05-07 10:36:31.909 472 ERROR neutron.api.v2.resource vmware_nsxlib.v3.exceptions.RealizationTimeoutError: LogicalRouter ID /infra/tier-1s/ was not realized after 50 attempts with 1 seconds sleep

    Workaround: There is no workaround for this issue. In most cases retrying will be enough. If the issue occurs persistently, it will be necessary to troubleshoot NSX-T to find the root cause for slow realization.
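    The realization check behaves like a bounded polling loop (the log above mentions 50 attempts with 1-second sleeps). A minimal sketch of that pattern, with a hypothetical check function standing in for the NSX-T realization query:

    ```python
    import time

    def wait_for_realization(check, attempts=50, sleep_seconds=1.0):
        """Poll check() until it returns True or the attempts are exhausted."""
        for _ in range(attempts):
            if check():
                return True
            time.sleep(sleep_seconds)
        raise TimeoutError("resource was not realized after %d attempts" % attempts)

    # Example: a fake check that succeeds on the third poll.
    state = {"calls": 0}
    def fake_check():
        state["calls"] += 1
        return state["calls"] >= 3

    print(wait_for_realization(fake_check, attempts=5, sleep_seconds=0))
    ```

    When realization is persistently slower than the attempt budget, loops of this shape raise the timeout seen in the log, which is why retrying usually succeeds once NSX-T catches up.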

  • In some cases, an OpenStack load balancer goes into an ERROR state after a member is added to one of its pools.

    This happens when the selected member is configured on the downlink of a router that is already attached to an NSX load balancer. In this case, the OpenStack driver cannot reuse the existing LB service.

    Workaround: Re-create the OpenStack load balancer using a VIP on a downlink network. Then associate a floating IP with the VIP port.

  • V2T migration fails with an error like the following:

    INFO vmware_nsx.shell.admin.plugins.nsxv.resources.migration NSX : does not belong to Neutron. Please delete it.

    However, the referenced edge is created by the VIO Neutron service.

    This is an issue with the NSX-V plugin where not all the edges created as a part of a pool are added to Neutron DB mappings and are therefore wrongly identified as not belonging to Neutron.

    Workaround: Ignore the warning. Once you are sure there is no other failing validation check to address, set strict_validation to False in migrator.conf.json and re-launch the migrator job.

  • Dealing with clusters in DOWN status.

    Host migration cannot start if any of the compute clusters is DOWN on NSX-V. You must ensure all clusters are in a healthy status before starting the migration. In case some hosts persistently fail, they should be removed from their compute cluster. VMs on this host will be migrated to other hosts in the cluster.

    Workaround: Once V2T migration is completed, you can reconfigure impacted hosts for NSX-T, and add them back to their original clusters.

  • After the migration, there can be Neutron logical ports for non-existing load balancers.

    After V2T migration, Neutron ports for load balancers in an error state are migrated, even if the corresponding load balancers are not migrated. The V2T migration process skips Octavia load balancers in an ERROR state. This is because load balancers in such a state might not be correctly implemented on NSX-T; in addition, the ERROR state is immutable, so you must delete these load balancers. The corresponding logical ports are however migrated.

    Workaround: These ports can be safely deleted once the migration is completed. The issue can be completely avoided by deleting load balancers in the ERROR state before starting the migration.
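    The pre-migration cleanup can be scripted against `openstack loadbalancer list` output. A minimal sketch, assuming the listing has been loaded as a list of dictionaries; it only identifies the load balancers to delete, and the deletion itself is still done through the usual CLI/API:

    ```python
    def error_load_balancers(load_balancers):
        """Return the IDs of load balancers whose provisioning state is ERROR."""
        return [lb["id"] for lb in load_balancers
                if lb.get("provisioning_status") == "ERROR"]

    # Example shaped like `openstack loadbalancer list -f json` output:
    lbs = [
        {"id": "lb-1", "provisioning_status": "ACTIVE"},
        {"id": "lb-2", "provisioning_status": "ERROR"},
    ]
    print(error_load_balancers(lbs))  # delete these before starting V2T migration
    ```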

  • Failure while removing a Neutron router gateway.

    The error reported is "Cannot delete a router_gateway as it still has lb service attachment." However, no Octavia load balancer is attached to any of the router's subnets; likewise, no Octavia load balancer is attached to the external network with members among the router downlinks.

    Workaround: Prior to VIO 7.2, the only workaround is to detach the load balancer from the Tier-1 router on the backend. With VIO 7.2, an admin utility is provided to remove stale load balancers from NSX-T:

    List orphans: nsxadmin -r lb-services -o list-orphaned

    Delete orphans: nsxadmin -r lb-services -o clean-orphaned

  • The console and log of recovered Nova instances cannot be shown on Horizon.

    After the disaster recovery procedure, log in to Horizon at the target site and navigate to Project > Compute > Instances. The log and console of every recovered instance display empty.

    Workaround: Create a new Nova instance at the target site, then check each recovered Nova instance again; the log and console display correctly.

  • V2T migration fails during the N/S cutover at the "edge" stage. The message returned by the UI is "Transport zone not found". On the NSX-T Manager instance where the migration-coordinator service is running, /var/log/migration-coordinator/v2t/cm.log shows that the failure occurs while creating the "migration transit LS" for a distributed edge in the Neutron backup pool (edge name starts with "backup-").

    The V2T migrator tries to create a transit switch for the N/S cutover for each NSX-V distributed router. Therefore, it scans all distributed edges and performs this operation for each one. To succeed, the edge must have at least one downlink interface, so the migrator can find the relevant transport zone. However, Neutron might keep some distributed edges in the "backup pool". These are unconfigured, ready-to-use edge appliances, which nevertheless cause the V2T migrator to fail.

    Workaround: From NSX-V, delete the distributed edges in the Neutron backup pool. This has no effect on Neutron/NSX-T operations.

  • After setting a firewall group administratively DOWN (state=DOWN), the firewall group operational status is always DOWN, even after the firewall group admin state is brought back UP.

    The neutron-fwaas service ignores operational status changes on transitions that do not involve adding a port to or removing a port from the firewall group.

    Workaround: Add or remove a port, or add and then remove a port that is already bound to the firewall group.

  • In VIO 7.2, Disaster Recovery doesn’t support restoring VIO management plane in NSX-T secure mode.

    Following the user guide Recover OpenStack Deployment Using User Interface to restore the VIO Control Plane, when you walk through the Disaster Recovery wizard to the Neutron page, uncheck 'Ignore the NSX Policy certificate validation.' and click Validate. The validation then times out.

    Workaround: Check the 'Ignore the NSX Policy certificate validation.' option; the validation then succeeds. After the VIO management plane recovery is complete, if NSX secure mode is needed, perform the following steps:

    • Navigate to Manage and open Settings.
    • In NSX Policy Credentials from VIO Manager UI, edit the NSX-T Manager.
    • Deselect 'Ignore the NSX Policy certificate validation.'
    • Click OK.

    The communication with NSX-T manager changes to the desired secure mode.
