VMware Integrated OpenStack 7.3 | 15 JUN 2023 | Build OVA 21849205, Patch 21849206

Check for additions and updates to these release notes.

About VMware Integrated OpenStack

VMware Integrated OpenStack greatly simplifies deploying an OpenStack cloud infrastructure by streamlining the integration process. VMware Integrated OpenStack delivers out-of-the-box OpenStack functionality and an easy configuration workflow through a deployment manager that runs as a virtual appliance in vCenter Server.

What's New

VMware Integrated OpenStack 7.3 introduces the following key features and enhancements:

  • Support for vSphere 8.0U1: Enables workloads to run on vCenter Server 8.0 infrastructure.

  • Support for NSX 4.1: Enables use of the new features supported in NSX 4.1.

  • Backward Compatibility: While VMware Integrated OpenStack 7.3 supports the latest versions of vSphere and NSX (vSphere 8.0U1 and NSX 4.1), you can upgrade VMware Integrated OpenStack to 7.3 at the VIM layer and continue to use earlier versions of vSphere and NSX (such as vSphere 7.0.x and NSX 3.2.x).

  • Virtual Hyperthreading: VMware Integrated OpenStack can leverage the Virtual Hyperthreading feature that is supported in vSphere 8.0 when the latency sensitivity of the instance is set to "High with Hyperthreading".

  • Additional features:

    • Storage migration option for live migration of VMs managed by VMware Integrated OpenStack.

    • Cinder volume backup improvement: removed the cache used for backing up Cinder volumes.

    • Support for deploying instances with SR-IOV ports in an anti-affinity server group. Note that in this case, all instances in the server group must have SR-IOV ports.

    • Backup and restore (B&R) enhancement: support for backup in the "Not-Running" deployment mode.

    • Improved performance when detaching a Cinder volume after a Nova instance is migrated.

    • Support for specifying a mapping image when importing a virtual machine into VMware Integrated OpenStack with a DCLI command.

    • Support for rescue, rebuild, shelve, and live migration operations on imported VMs.

    • Auto database purge for Glance: two scheduled jobs are created to automatically purge Glance tables, including the image table.

    • Support for importing images from an HTTPS location that requires a certificate.

    • Users can detach a volume even if the volume's VMDK reference is broken in the Cinder shadow VM (usually caused by vCenter Storage vMotion performed outside VIO).

    • Users can detach a volume even if the moid of the shadow VM has changed (usually caused by manually removing and re-registering the shadow VM in vCenter outside VIO).

Upgrade to Version 7.3


Refer to VMware Product Interoperability Matrices for details about the compatibility of VMware Integrated OpenStack with other VMware products.

Deprecation Notices

  • The following networking features have been deprecated and will be removed in the next VIO release:

    • The NSX Data Center for vSphere driver for Neutron.

  • Neutron FWaaSv2 will be deprecated in a future version.

Resolved Issues

  • License warning does not disappear in Integrated OpenStack Manager.

    The following warning does not disappear in Integrated OpenStack Manager even though there is no license-related issue: "The capacity is insufficient. To stop seeing this message, assign a license that has enough capacity."

  • Signed certificates are not re-applied or restored when restoring the complete control plane from backup.

    The certs secret, which contains the VIO private key and certificate, is not currently within the backup scope. After a not-in-place restore, the previously imported certificate does not exist in the new deployment.

  • Aodh configuration is not effective after being updated.

    The Aodh configuration is not effective after being updated, and cannot trigger the automatic scaling of Nova instances.

  • Security group rule is not applied to ESXi host after importing VM to VIO.

    After a VM is imported into VIO, a new security group can now be applied to its NIC interface successfully.

  • VIO deployment failure with kube-controller-manager stuck in crash loop.

    Sometimes VIO deployment fails with kube-controller-manager stuck in a crash loop.

  • VIO instance resizing failure when there is no available datastore linked to storage policy.

    This release adds an option, skip_pbm_policy_checking, which specifies whether to skip PBM policy checking during instance resizing. The default value is False. To resize without checking PBM policy compatibility, set skip_pbm_policy_checking to True manually.

  • Autoscaling with Heat templates is not working.

    After Ceilometer is enabled, autoscaling with Heat templates does not work. The Nova instances are created and the alarms are triggered as expected; however, no scale-up or scale-down action is taken on the instance count.

  • Users' customization on attribute mapping for SAML2 Federation configuration may not take effect.

    When users configure SAML 2.0 federation, the attribute mapping may not take effect because the same attribute name already exists in the default configuration defined in /etc/shibboleth/attribute-map.xml.

  • Some instance operations fail with error "Image extra_config has invalid format".

    If customized image metadata is larger than 255 characters, instances created from that image fail to perform some operations, such as resizing or attaching an interface, with the error "Error: Invalid input received: Image extra_config has invalid format".

  • Some of the servers fail to be created in soft-anti-affinity server group.

    If you create multiple servers in a soft-anti-affinity server group in a batch, some of them may fail with the error "This operation would violate a virtual machine affinity/anti-affinity rule".

  • Failed to import certificate to barbican.

    When importing a certificate to Barbican, the operation fails with the following errors:

    "Object of type 'bytes' is not JSON serializable"

    ERROR: Failed to store key in barbican

  • Horizon web portal might be forced back to login page suddenly or reflect "400 bad request" error.

    When users perform operations on the Horizon web portal, the portal may be forced back to the login page for re-authentication or return a "400 bad request" error.

  • Unable to upload large image from Horizon web portal.

    Images larger than 1 GB fail to upload from the Horizon portal.

  • Healthcheck reports alarm that some FQDN are unreachable.

    The health check fails to parse the real IP address from the nslookup command output and therefore reports the target as unreachable.

  • Unable to set Management host name on the VIO Appliance deployment.

    On a new VIO appliance deployment, the host name of the management server is not set on first boot; the name is always "default". This is caused by systemd-resolved defaulting to DNS over TLS, which makes the initial lookup fail.

  • "connectivity" alarms may be raised in "viocli check health" because of LDAPS connections or DNS resolution from VIO controllers.

    The health check reports FQDN alarms because it uses nslookup for its checks, which sometimes fails due to DNS resolution issues. The LDAP check runs ldapsearch without certificates and might fail SSL verification against LDAPS servers.

  • The deployment of coredns fails during the process of kube-api restarting.

    The kube-api server is restarted by the bootstrap script. Sometimes, subsequent operations (for example, the coredns deployment) are executed before kube-api has completely restarted.

  • Live migration might fail when there is one host in maintenance mode in target compute cluster.

    If you live migrate a server to a target compute cluster that has a host in maintenance mode, that host might be selected as the migration target, which causes the live migration to fail.

  • VM Network Not Changing in vCenter Server After Importing a VM to VMware Integrated OpenStack using DCLI.

    If you select a new network while importing a VM from vCenter Server to VMware Integrated OpenStack using the Data Center Command Line Interface (DCLI), the selected network does not reflect in vCenter Server after the import.

  • Unable to Retrieve Load Balancers After Clicking Load Balancer in the Horizon UI of VMware Integrated OpenStack.

    If a user, who does not have the Load Balancer role, clicks Load Balancer in the Horizon UI, the following error occurs:

    Unable to retrieve load balancer.

  • NSX N/S cutover migration fails with the following error: NSX-V edge edge-XX is used more than one times in input mapping file.

    This issue occurs due to a problem with router bindings management in the Neutron NSX-V plugin. In some cases, the same edge can be mapped to multiple Neutron routers. When this occurs, only one of the Neutron routers mapped to a given appliance is functional; the other routers cannot forward any traffic on the NSX-V backend. This occurs only with distributed routers.


Known Issues

  • After vSphere is upgraded to 8.0 and VIO is upgraded to the 7.3 release, Nova instances with hardware version vmx-20 cannot be created from existing images. Users cannot use new vmx-20 features delivered in the 7.3 release, such as "Create instance with vHT enablement".


    Workaround: If vSphere is upgraded to 8.0, create new images in VIO 7.3. With the new images, users can create instances with hardware version vmx-20.

  • Duplicate entry error in Keystone Federation.

    After the OIDC configuration is deleted in Keystone Federation, if the same user tries to log in with OIDC, authentication fails with a 409 message.


    Workaround: Delete the user through either Horizon or the OpenStack CLI.

    For example:

    1. In Horizon, log in with an admin account.

    2. Set the domain context with the federated domain.

    3. On the user page, delete the user whose User Name column is None.

    In the OpenStack CLI:

    openstack user list --domain <federated domain name>

    openstack user delete <user id> --domain <federated domain name>

  • The Octavia driver for VMware NSX does not support bulk object creation.

    It is not possible to submit a request that creates load balancers, listeners, and other objects in a single API call.


    Workaround: Submit an individual API call for each object.
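
    As a sketch, assuming the Octavia OpenStack CLI plugin and placeholder names, subnet IDs, and addresses, the objects can be created one call at a time:

    ```shell
    # Create the load balancer first (VIP subnet ID is a placeholder)
    openstack loadbalancer create --name lb1 --vip-subnet-id <subnet-id>

    # Then create each dependent object with its own API call,
    # waiting for the load balancer to become ACTIVE between steps
    openstack loadbalancer listener create --name listener1 \
      --protocol HTTP --protocol-port 80 lb1
    openstack loadbalancer pool create --name pool1 \
      --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
    openstack loadbalancer member create --subnet-id <subnet-id> \
      --address 192.0.2.10 --protocol-port 80 pool1
    ```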

  • SR-IOV instances cannot get an IP address with the 7.3 default Photon image.


    Workaround: Do not use the 7.3 default Photon image for SR-IOV instances. Use another image, such as an Ubuntu image.

  • If a volume disk is attached to an unrescued instance, the newly added volume is set as the boot disk. The VM fails to boot if the volume disk is empty.



  • It is possible to encounter the following error when simultaneously deploying SR-IOV and non-SR-IOV VMs in the same anti-affinity group: "Power on failed: This operation would violate a virtual machine affinity/anti-affinity rule."


    To deploy SR-IOV and non-SR-IOV VMs in the same anti-affinity group, make sure they are deployed in separate batches at different times, for example with one of the following workarounds:

    Workaround 1: Boot all the SR-IOV VMs prior to the non-SR-IOV VMs.

    Workaround 2: Power off the non-SR-IOV VMs first, then launch the new SR-IOV VMs. Power on the non-SR-IOV VMs afterwards.
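
    For example, Workaround 1 might look like the following OpenStack CLI sequence; the flavor, image, network, and port names are placeholders:

    ```shell
    # Create the anti-affinity server group and capture its ID
    openstack server group create --policy anti-affinity sriov-group
    GROUP_ID=$(openstack server group show sriov-group -f value -c id)

    # Batch 1: boot all SR-IOV VMs first, using ports with vnic-type direct
    openstack port create --network net1 --vnic-type direct sriov-port1
    openstack server create --flavor m1.small --image ubuntu \
      --port sriov-port1 --hint group=$GROUP_ID sriov-vm1

    # Wait until all SR-IOV VMs are ACTIVE, then batch 2: the non-SR-IOV VMs
    openstack server create --flavor m1.small --image ubuntu \
      --network net1 --hint group=$GROUP_ID plain-vm1
    ```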
