VMware Integrated OpenStack 7.0.1 | 19 NOV 2020 | Build 17200834

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

  • About VMware Integrated OpenStack
  • What's New
  • Upgrading to Version 7.0.1
  • Compatibility
  • Deprecation Notices
  • Internationalization
  • Open Source Components for VMware Integrated OpenStack
  • Resolved Issues
  • Known Issues

About VMware Integrated OpenStack

VMware Integrated OpenStack greatly simplifies deploying an OpenStack cloud infrastructure by streamlining the integration process. VMware Integrated OpenStack delivers out-of-the-box OpenStack functionality and an easy configuration workflow through a deployment manager that runs as a virtual appliance in vCenter Server.

What's New

  • Support for the latest versions of VMware products: VMware Integrated OpenStack 7.0.1 supports and is fully compatible with VMware vSphere 7.0 U1, NSX-T Data Center 3.1, and NSX Data Center for vSphere 6.4.8.
  • New features and enhancements:
    • Support for stateful DHCPv6: You can now choose between SLAAC and DHCPv6 for IP addressing in the data plane. This feature requires the Neutron Policy Plugin and NSX-T 3.1.
    • Support for the IP CIDR format in the allowed-address-pairs setting of Neutron ports. This feature requires the Neutron Policy Plugin and NSX-T 3.1. A brief CLI sketch of both features follows this list.
  • Upgrade procedure improvement:
    • If your VMware Integrated OpenStack 5.1 deployment includes a firewall with FWaaS v1 enabled, you can upgrade directly to version 7.0.1 and FWaaS v1 will be automatically migrated to FWaaS v2.
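
The following is a minimal sketch of how these two features might be exercised from the OpenStack CLI. The network, subnet, and port names and the address ranges are hypothetical, and the commands assume a deployment that meets the requirements noted above (Neutron Policy Plugin and NSX-T 3.1).

    # Create an IPv6 subnet that uses stateful DHCPv6 instead of SLAAC (hypothetical names and prefix).
    openstack network create demo-v6-net
    openstack subnet create demo-v6-subnet \
        --network demo-v6-net \
        --ip-version 6 \
        --subnet-range 2001:db8:1::/64 \
        --ipv6-ra-mode dhcpv6-stateful \
        --ipv6-address-mode dhcpv6-stateful

    # Add an allowed-address-pair in CIDR format to an existing port (hypothetical port name and CIDR).
    openstack port set demo-port --allowed-address ip-address=10.10.0.0/24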

Upgrading to Version 7.0.1

To upgrade to VMware Integrated OpenStack 7.0.1, you apply a patch using the viocli patch command.

You can apply the patch to an existing VMware Integrated OpenStack 7.0 deployment, or to a VMware Integrated OpenStack 7.0 OVA that has not yet been deployed.
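
As a rough sketch only (the patch subcommand syntax shown here is an assumption, not taken from the product documentation), the patch is applied from the Integrated OpenStack Manager after the patch file has been transferred to it:

    viocli patch --help            # lists the patch subcommands available on your build
    viocli patch install <patch>   # assumed invocation; verify the exact syntax in the guide referenced below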

For patch instructions, see Apply the VMware Integrated OpenStack 7.0.1 Patch in the VMware Integrated OpenStack Installation and Configuration Guide.

Compatibility

Deprecation Notices 

  • Neutron FWaaS v2 will be deprecated in a future version.
  • The following networking features have been deprecated and will be removed in a future release:
    • The NSX Data Center for vSphere driver for Neutron.
    • The NSX-T Management Plugin for Neutron will be replaced by the NSX-T Policy Plugin.
    • The TVD plugin, which allows a single VMware Integrated OpenStack deployment to use an NSX Data Center for vSphere back end and an NSX-T Data Center back end.

Internationalization

VMware Integrated OpenStack is available in English and seven additional languages: Simplified Chinese, Traditional Chinese, Japanese, Korean, French, German, and Spanish.

The following items must contain only ASCII characters:

  • Names of OpenStack resources (such as projects, users, and images)
  • Names of infrastructure components (such as ESXi hosts, port groups, data centers, and datastores)
  • LDAP and Active Directory attributes 

Open Source Components for VMware Integrated OpenStack

The copyright statements and licenses applicable to the open source software components distributed in VMware Integrated OpenStack are available on the Open Source tab of the product download page. You can also download the disclosure packages for the components of VMware Integrated OpenStack that are governed by the GPL, LGPL, or other similar licenses that require the source code or modifications to source code to be made available.

Resolved Issues

The resolved issues are grouped as follows.

    Upgrade

    The following upgrade issues have been resolved in this release.

    • Upgrade from VMware Integrated OpenStack 5.x to 7.0 fails with LDAP configuration.

      With LDAP configured and the AD domain name set without the domain information, the upgrade script fails.

    • Upgrade from VMware Integrated OpenStack 5.x to 7.0 fails in a large scale environment.

      When upgrading from VMware Integrated OpenStack 5.x to 7.0 in a large scale environment, the vCenter discovery services fail with the error message: "Request entity too large: limit is 3145728".

    • A firewall created in version 5.x cannot be displayed after upgrading to VMware Integrated OpenStack 7.0 directly

      VMware Integrated OpenStack 5.x uses FWaaS v1. VMware Integrated OpenStack 7.0 uses FWaaS v2. A migration from FWaaS v1 to FWaaS v2 before upgrading is no longer required.

    • Neutron service reports ToozConnectionError when multiple operations are performed concurrently

      After upgrading from VMware Integrated OpenStack 5.x to 7.0 with the NSX-V plugin, the Neutron service reports ToozConnectionError when executing network-related operations, particularly during periods when multiple operations are performed concurrently.

      This issue is fixed in this release, but you must still set the neutron-server replica count to one.

    • After upgrading to VMware Integrated OpenStack 7.0, heat stack fails to deploy with error: AuthorizationFailure

      After upgrading from VMware Integrated OpenStack 5.x to 7.0, the heat stack fails to deploy with error: AuthorizationFailure because the user credentials configured in the heat.conf file are not carried over.

    • Some auth methods in keystone.conf are not carried over during upgrade.

      The unsupported auth methods, for example "application_credential", are not carried over during the upgrade from VMware Integrated OpenStack 5.1 to VMware Integrated OpenStack 7.0.

    • Requests for Neutron quota details fail after upgrading to VMware Integrated OpenStack 7.0.

      If a user set load-balancer-related quotas in VMware Integrated OpenStack 5.x, requests for Neutron quota details fail after upgrading to VMware Integrated OpenStack 7.0.

    General

    The following general issues have been resolved in this release.

    • VMware Integrated OpenStack 7.0 deployment fails when the NSX-T edge ID differs from the nsx_id.

      If the edge ID and NSX ID have different values, a new VMware Integrated OpenStack 7.0 deployment fails with the Neutron server error "vmware_nsxlib.v3.exceptions.ResourceNotFound: Resource could not be found on backend" on NSX-T 2.5.1.

    • Virtual Device Role Tagging does not return correct values.

      The hardware addresses of tagged devices are not returned correctly by the metadata service.

    • Inconsistent data is seen after restarting database pods

      When database pods are restarted, more than one pod might claim to be the leader node and the pods do not form a single cluster.

    • VMs are unable to reach the router IPv6 gateway.

      In a VMware Integrated OpenStack 7.0 environment with NSX-V 6.4.8, VMs are unable to reach the router IPv6 gateway.

    • The property "ethernetx.cTxPerDev" is set to "1" when create VM with LatencySensitivity as High

      When creating a VM with latencySensitivity as High, cTxPerDev should not automatically be set to "1".
      The cTxPerDev settings should align with the "hw:vifs_multi_thread" property.

    • Horizon UI cannot display Japanese language

      End users cannot see the Japanese language UI on Horizon even after changing the language settings.

    • Command "viocli delete deployment" will delete the backup files stored in vCenter Content Library

      The viocli delete deployment command would delete the backup files stored in the vCenter Content Library without a warning message. The command now displays a warning message and waits for admin confirmation before executing the deletion.

    • Deploying OpenStack using GUI fails when vCenter's instanceUuid contains capital letters.

      When using the GUI to deploy OpenStack, if the vCenter instanceUuid contains capital letters, the generated Nova compute name also contains capital letters and the VMware Integrated OpenStack deployment fails. The Nova compute CR cannot be created because Kubernetes does not accept names with capital letters.

    • Accessing a Heat autoscale URL via the VIP fails in a VMware Integrated OpenStack 6.0 or 7.0 deployment

      Accessing a Heat autoscale URL via the VIP fails with SSLError: 'certificate verify failed'.

    • Heat stack fails with the error: DesignateClientPlugin object has no attribute _get_service_name.

      Heat stack (which creates a port and recordset) fails with the following error:
      ERROR: Property error: : resources.ha_proxy_vip_shared_dns.properties.zone: : 'DesignateClientPlugin' object has no attribute '_get_service_name'.

    • Cannot create a load balancer on a network with RBAC

      Creating a load balancer on a network shared through RBAC fails with an error such as: "create failed: No details.: DriverError: Driver error: Router 9e7c00b4-1e96-44fc-bcc5-ab9a7a9ef334 could not be found".
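
      For reference, a minimal sketch of the operation that previously failed (the project, network, subnet, and load balancer names are hypothetical):

        openstack network rbac create --target-project demo-project \
            --action access_as_shared --type network shared-net
        openstack loadbalancer create --name demo-lb --vip-subnet-id shared-subnet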

    • When creating a load balancer, Neutron reports driver error: Bad lbaas-listener request, failed to create virtual server at NSX backend.

      Neutron reports the driver error "Bad lbaas-listener request: Failed to create virtual server at NSX backend." This occurs when multiple listeners have the same certificate.

    • Octavia LB deletion fails with --cascade option

      When deleting the Octavia load balancer with the '--cascade' option, the LB remains in the 'PENDING_DELETE' state.
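
      For reference, the affected operation is of the following form (the load balancer name is hypothetical):

        openstack loadbalancer delete --cascade demo-lb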

    • Neutron port creation with an allowed address pair fails and logs an UnboundLocalError in deployments using the DVS plugin.

      When creating a neutron port with an allowed address pair on a VMware Integrated OpenStack 7.0 deployment using the DVS plugin, the API call may fail with an HTTP/500 error. The Neutron logs will show an exception of the form: "POST failed.: UnboundLocalError: local variable 'port_security' referenced before assignment". The problem is caused by a variable scoping exception introduced in VMware Integrated OpenStack 7.0 due to switching from Python 2.7 to Python 3.7. This issue affects only the DVS plugin.

    • On NSXV plugin, with nsxv_use_routers_as_lbaas_platform flag enabled, deletion of a loadbalancer causes the service edge to flush conntrack table.

      While deleting a loadbalancer in the scenario above, the LBaaSv2 driver removes the VIP from the service edge appliance.
      That causes the edge conntrack table to flush. When the edge hosts other loadbalancer objects, connections to these loadbalancers drop.

    • Log forwarding to vRealize Log Insight 8.1.1 or later might not function

      Starting with version 8.1.1, vRealize Log Insight enables SSL connections by default on port 9000, which VMware Integrated OpenStack 7.0 uses for log forwarding. However, VMware Integrated OpenStack 7.0 treats this port as an HTTP endpoint, so vRealize Log Insight 8.1.1 and later can no longer receive VMware Integrated OpenStack logs. To resolve this issue, VMware Integrated OpenStack 7.0.1 lets you configure vRealize Log Insight port 9543 as an HTTPS endpoint from the web client. If you want to continue using port 9000 as an HTTP endpoint, disable SSL connections in vRealize Log Insight 8.1.1 or later.

    • Cinder Backup services do not start and run correctly.

      Cinder-backup pod fails to mount backup NFS with error "rpc.statd is not running but is required for remote locking."

    Known Issues

    The known issues are grouped as follows.

      Upgrade

      Consider the following issues before performing an upgrade.
      • After upgrading from VMware Integrated OpenStack 5.x to 7.0, Horizon operations continually redirect to login page

        Following an upgrade, Horizon or Ingress pods might have issues. Re-creating them will restore normal function.

        Workaround: Perform the following steps to re-create the Horizon or Ingress pods.

        1. ssh to the management VM.
        2. To re-create the Horizon pod, use the command:
          kubectl get pods -n openstack|grep horizon-server| awk '{print $1}'|xargs kubectl -n openstack delete pod 
        3. To re-create the Ingress pod, use the command:
          kubectl get pods -n openstack|grep '^ingress' | awk '{print $1}'|xargs kubectl -n openstack delete pod
      • vCenter Server instances that contain no compute clusters are not retained during upgrade.

        If your VMware Integrated OpenStack 5.1 deployment includes vCenter Server instances from which no compute nodes have been added to your deployment, the settings for those vCenter Server instances are not retained after you upgrade to VMware Integrated OpenStack 7.0.

        Workaround: Add the desired vCenter Server instances to your VMware Integrated OpenStack 7.0 deployment after the upgrade is finished.

      • If migrating load balancers from Neutron-lbaas to Octavia, load balancers without members disappear

        Load balancers without members are not used, so they are not migrated. They disappear during the Neutron DB sync because they lack back end implementation.

        Workaround: Before migrating, remove unused load balancers.

      • Stale Nova services exist following upgrade from VMware Integrated OpenStack 5.1.0.4

        After upgrading to VMware Integrated OpenStack 7.0, legacy Nova services from 5.1.0.4 are shown as "down":

        [root@vioadmin1-vioshim-5dbf477dc4-2lh7s ~]# nova service-list
        +--------------------------------------+------------------+--------------+----------+---------+-------+----------------------------+-----------------+-------------+
        | Id                                   | Binary           | Host         | Zone     | Status  | State | Updated_at                 | Disabled Reason | Forced down |
        +--------------------------------------+------------------+--------------+----------+---------+-------+----------------------------+-----------------+-------------+
        | cbe3345a-4daa-41fa-8133-60302e369533 | nova-conductor   | controller02 | internal | enabled | down  | 2020-04-29T14:58:20.000000 | -               | False       |
        | cb2ebf3c-53e1-493b-979e-471a1bd2e3b9 | nova-conductor   | controller01 | internal | enabled | down  | 2020-04-29T14:58:20.000000 | -               | False       |
        | 669d979a-26a2-4b85-a139-a6705a3b04f8 | nova-scheduler   | controller02 | internal | enabled | down  | 2020-04-29T14:58:23.000000 | -               | False       |
        | f97b8c40-ab45-4e9c-ac3d-675f1600d5ad | nova-scheduler   | controller01 | internal | enabled | down  | 2020-04-29T14:58:23.000000 | -               | False       |
        | 39312d81-841e-4399-b178-f9b7d47eba1b | nova-consoleauth | controller02 | internal | enabled | down  | 2020-04-29T14:58:17.000000 | -               | False       |
        | 793480c6-6503-4fc2-a89e-fe20a3700efb | nova-consoleauth | controller01 | internal | enabled | down  | 2020-04-29T14:58:16.000000 | -               | False       |
        +--------------------------------------+------------------+--------------+----------+---------+-------+----------------------------+-----------------+-------------+

        Workaround: Remove services that are in a "down" state.

        1. Update the Nova CR.
          viocli update nova nova-xxx
        2. For the manifests parameter, set cron_job_service_cleaner: true, for example:
          conf:
            nova:
              neutron:
                metadata_proxy_shared_secret: .Secret:managedencryptedpasswords:data.metadata_proxy_shared_secret
              vmware:
                passthrough: "false"
                tenant_vdc: "false"
          manifests: 
            cron_job_service_cleaner: true
        3. Save the CR and wait for all Nova pods to be re-created.

        After approximately one hour, the cronjob removes all services that were in a "down" state.

      • Metadata service is unreachable in the NSX-V environment after upgrade to VMware Integrated OpenStack 7.0.

        When upgrading from VMware Integrated OpenStack 5.x to 7.0 with the NSX-V plugin, the metadata service is unreachable if the metadata proxy edge is not routable from the VMware Integrated OpenStack API network.

        Workaround: Log in to each controller node and add a route to the metadata proxy service through the gateway on the eth0 NIC.

        1. List the controller nodes:
          osctl get nodes
        2. Login to each controller node:
          viossh <CONTROLLER_NODE_NAME>
        3. Add the route to metadata proxy services:
          sudo ip route add <metadata_proxy_ip1> via <metadata_proxy_gateway> dev eth0
          sudo ip route add <metadata_proxy_ip2> via <metadata_proxy_gateway> dev eth0
      • LDAP users are unable to log in following upgrade from VMware Integrated OpenStack 5.1

        After upgrading from VMware Integrated OpenStack 5.1, LDAP users are unable to log in to VMware Integrated OpenStack 7.0.

        Workaround: After the release of VMware Integrated OpenStack 5.1, a change occurred in the Keystone Community code. To work around this problem, you must update the user database. See KB79373.

      • If the designate component is enabled in VMware Integrated OpenStack 5.x, zones created in VMware Integrated OpenStack 5.x cannot be updated following an upgrade to 7.0

        The architecture of the designate component in VMware Integrated OpenStack 5.x is different from the architecture in VMware Integrated OpenStack 7.0. In VMware Integrated OpenStack 5.x, the master IPs of a designate zone correspond to the public IPs of the load balancer node. After an upgrade to VMware Integrated OpenStack 7.0, the deployment no longer uses the public IPs of the load balancer node, so the zone cannot make an AXFR request to its master IPs.

        Workaround: Manually update the master IP of the zones created in 5.x to the public VIP. The following steps show how to perform the update with a bind9 back end. The bind9 server must support the modzone operation.

        1. Select a designate-worker-XXXXX pod and log in.
          osctl exec -it designate-worker-XXXXX bash
        2. Get information about the designate zone that was created in VMware Integrated OpenStack 5.1. In the following example, the IP address of the bind9 server is 192.168.111.254 and the designate zone is test.com.
          rndc -s 192.168.111.254 -p 953 -k /etc/designate/rndc.key showzone test.com.
          zone "test.com" { type slave; file "slave.test.com.1b10d5ec-e39c-449d-b1e1-c7bf1fba02e5"; masters { 192.168.112.162 port 5354; 192.168.112.163 port 5354;}; }; 
        3. Update the master IP of the designate zone to the public VIP of VMware Integrated OpenStack 7.0. In the following example, the public VIP of VMware Integrated OpenStack 7.0 is 192.168.112.160.
          rndc -s 192.168.111.254 -p 953 -k /etc/designate/rndc.key modzone test.com '{ type slave; file "slave.test.com.1b10d5ec-e39c-449d-b1e1-c7bf1fba02e5"; masters { 192.168.112.160 port 5354; }; };'

      General Issues

      The following general issues affect this release.

      • Public API rate limiting is not available.

        In VMware Integrated OpenStack 7.0, it is not possible to enforce rate limiting on public APIs.

        Workaround: None. This feature will be offered in a later version.

      • After you upgrade to VMware Integrated OpenStack 7.0, a default pool cannot be added to existing LBaaS listeners.

        VMware Integrated OpenStack 5.1 does not support changing the default pool of an LBaaS listener. This feature is supported in VMware Integrated OpenStack 7.0. However, if you create an LBaaS listener in VMware Integrated OpenStack 5.1 that does not have a default pool, you cannot add a default pool to the listener even after upgrading to VMware Integrated OpenStack 7.0.

        Workaround: Delete the affected listener and create it again.

      • Creating a load balancer with a private subnet that is not attached to a router results in an ERROR state

        With the Neutron NSX-T plugins (both the MP and Policy plugins), creating a load balancer on a private subnet that is not attached to a router results in a load balancer in the ERROR state, and the error is not reported to the user.

        Workaround: Create the load balancer with a subnet that is attached to a router.
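
        A minimal sketch of the workaround (the router, subnet, and load balancer names are hypothetical):

          openstack router add subnet demo-router demo-private-subnet
          openstack loadbalancer create --name demo-lb --vip-subnet-id demo-private-subnet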

      • Some firewall groups, policies and rules are missing from Horizon Firewall dashboard.

        The Horizon Firewall dashboard is designed to display firewall groups, policies, and rules with the shared flag set to true or false. If the shared flag is left unset, the firewall groups, policies, and rules do not appear.

        Workaround: Because the OpenStack CLI does not require the shared flag to be set, you can use it to list firewall groups, policies, and rules (see the example after the SQL statements below). Alternatively, for every record in the Horizon Firewall dashboard with the shared flag not set, you can update the value to zero or false; logically, false is equivalent to not set. For example:

        update firewall_groups_v2 set shared=0 where shared is NULL;
        update firewall_policies_v2 set shared=0 where shared is NULL;
        update firewall_rules_v2 set shared=0 where shared is NULL;
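
        For the CLI alternative mentioned above, the listing commands take the following form (this assumes the FWaaS v2 command-line extension from python-neutronclient is available in your client environment):

          openstack firewall group list
          openstack firewall group policy list
          openstack firewall group rule list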

      • Uploading an image to glance times out before completion

        Uploading a large image to glance might time out before completion when using either Horizon or the CLI.

        • If using the Horizon UI, the image uploads to a Horizon temp folder then uploads to glance. The Horizon session time limit is one hour. If the upload operation does not complete within one hour, the session terminates and the image upload process aborts.
        • If using the CLI, the image uploads to the glance-api working folder where its format might be converted. Then the image transfers to the glance datastore. If the image is too big, the process might time out and abort.

        Workaround: To remove the extra step of uploading to the Horizon temp folder and save upload time, upload the image using the CLI. For large images, place the image on a web server that is accessible to the client running the CLI, then use the glance task-create command to import the image asynchronously. See https://docs.openstack.org/python-glanceclient/latest/cli/details.html#glance-task-create

        For example, you can use the following command to import an image from the URL: https://raw.githubusercontent.com/arnaudleg/openstack-cli-tests/master/files/cirros-0.3.0-i386-disk.vmdk:

        glance task-create --type import --input '{"import_from_format": "vmdk", "import_from": "https://raw.githubusercontent.com/arnaudleg/openstack-cli-tests/master/files/cirros-0.3.0-i386-disk.vmdk", "image_properties": {"name": "cirros-imported", "disk_format": "vmdk", "container_format": "bare", "vmware_adaptertype": "ide", "vmware_disktype": "streamOptimized", "vmware_ostype": "otherGuest"}}'

      • Pods go into pending state. Status of VMware Integrated OpenStack Management Server or controller changes to not ready.

        This condition is due to resource contention on the VMware Integrated OpenStack Management Server and controller VMs. When the kubelet cannot complete a task in a reasonable amount of time, it marks the Kubernetes node as not ready. Kubernetes attempts to reschedule the pods to other available nodes. If no other nodes are available, the pods go into a pending state. With a higher density of pods on a single node, a compact setup is more vulnerable to this problem.

        Workaround: To change the Kubernetes node status from not ready to ready, restart the kubelet service using systemctl restart kubelet. To address the problem more permanently, scale out to add controller nodes and reduce the load on each node.
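
        A minimal sketch of the immediate workaround (the controller node name is hypothetical):

          kubectl get nodes                 # identify nodes reported as NotReady
          viossh controller01               # log in to the affected controller node
          sudo systemctl restart kubelet    # restart the kubelet so the node reports Ready again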

      • If incorrect credentials are entered when deploying OpenStack, the wizard may fail to recognize correct credentials.

        During the OpenStack deployment process, if vCenter Server or NSX Manager credentials are entered incorrectly, the wizard may fail to recognize correct credentials. Even if you remove the incorrect information and enter the correct credentials, the wizard may fail to validate them.

        Workaround: Close the deployment wizard and open it again.

      • Using CIDR 0.0.0.0/x as an IPv4 address block in security groups translates to 'any' on the NSX-V back end

        When setting a security group rule with CIDR 0.0.0.0/x (x>0, x<=32), the VMware Integrated OpenStack plugin translates it to 'any' on the NSX-V back end. This also applies when setting a security group rule with IPv6 ::/x (x>0, x<=128).

        Workaround: Do not use the 0.0.0.0/x or ::/x address blocks.
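
        For illustration, a rule of the following form is affected (the security group name is hypothetical); use a non-zero network prefix such as 10.0.0.0/24 instead:

          openstack security group rule create --protocol tcp --dst-port 22 \
              --remote-ip 0.0.0.0/24 demo-secgroup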

      • The Default Repository Location will be changed in Photon OS 3.0

        VMware Integrated OpenStack 7.0 uses Photon OS 3.0 as the OS for LCM and Controller VMs. The default repository location will change from bintray.com to packages.vmware.com on 25-Nov-2020. This change will impact software packages installed with the tdnf commands.

        Workaround: All packages previously hosted on Bintray are available on packages.vmware.com. To access the packages, change the default repository location to the new repository, following the instructions published by VMware.
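
        A minimal sketch of what the change involves on the affected VMs (assumed file locations; verify the exact repository URLs against the VMware instructions):

          grep -l bintray.com /etc/yum.repos.d/*.repo    # find repository files that still point at Bintray
          sudo vi /etc/yum.repos.d/photon.repo           # update baseurl to the packages.vmware.com equivalent
          sudo tdnf makecache                            # refresh package metadata from the new location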
