VMware Integrated OpenStack 7.0 | 04 JUN 2020 | Build 16227912

Check for additions and updates to these release notes.

About VMware Integrated OpenStack

VMware Integrated OpenStack greatly simplifies deploying an OpenStack cloud infrastructure by streamlining the integration process. VMware Integrated OpenStack delivers out-of-the-box OpenStack functionality and an easy configuration workflow through a deployment manager that runs as a virtual appliance in vCenter Server.

What's New

  • OpenStack Train: VMware Integrated OpenStack 7.0 is based on the Train release of OpenStack.
  • Support for the latest versions of VMware products: VMware Integrated OpenStack 7.0 supports and is fully compatible with VMware vSphere 7.0 and NSX-T Data Center 3.0.
  • OpenStack features:
    • Support for selective vCPU pinning for Nova server instances. You can pin a subset of vCPUs to physical CPU cores. This feature requires vSphere 7.0 or later.
    • Support for SR-IOV NIC redundancy for highly available network connectivity, allowing vNICs to be provisioned from different pNICs.
    • Support for Neutron trunk services, which allow multiple networks to be presented to an instance through a single virtual NIC (vNIC) by connecting the instance to a single trunk port. For information on how to use trunk services, see the OpenStack community documentation; a brief sketch also follows this list. This feature requires the Neutron Policy Plugin.
    • Support for Octavia LBaaS. The OpenStack Octavia project has been integrated into this release.
    • Support for IPv6 in Octavia LBaaS, including IPv6 members. This feature requires NSX-T 3.0 or later and the Neutron Policy Plugin.
    • Support for NSX-T VRF Lite as an external network. This feature requires NSX-T 3.0 or later and the Neutron Policy Plugin.
    • Support for NSX-T on VDS 7.0. This feature requires vSphere 7.0 or later and NSX-T 3.0 or later. For deployment recommendations and known limitations, see the What's New section of the NSX-T 3.0 Release Notes.
  • Control Plane Enhancement:
    • Patch Management: Built-in patching capability without requiring a blue/green upgrade procedure
    • Public API: API for managing the deployment of VMware Integrated OpenStack 7.0. For more information, see https://code.vmware.com/docs/12181
    • Python 3: VMware Integrated OpenStack 7.0 is now fully written in Python 3
    • Command Line: Improved viocli command-line utility
    • Stability and Performance improvements
  • Licensing Model: 
    • With VIO 7.0, the Data Center Edition has been merged into the Carrier Edition. The Carrier Edition is available for purchase and includes all features; the Data Center Edition is no longer available for purchase. Upgrades to VIO 7.0 are not affected. For details, see KB79285.
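
As referenced in the trunk services item above, the following is a minimal sketch of creating a trunk with the standard OpenStack CLI. The network, port, and trunk names are illustrative and not taken from this release's documentation.

  # Create a parent port and a trunk on top of it
  openstack port create --network net0 parent-port0
  openstack network trunk create --parent-port parent-port0 trunk0
  # Present a second network to the instance as a VLAN subport
  openstack port create --network net1 sub-port0
  openstack network trunk set --subport port=sub-port0,segmentation-type=vlan,segmentation-id=100 trunk0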

Install and Upgrade

  • Installation:
    • New deployments of VIO 7.0 support only the Neutron plugin with DVS and the NSX-T Policy API.
  • Upgrade:
    • Supported upgrade paths:
      • VIO 5.1 to VIO 7.0
      • VIO 6.0 to VIO 7.0
    • Upgrades support the Neutron plugin with NSX-V and the NSX-T Management Plugin.
    • Read the known issues before upgrading, and see the product documentation for the detailed upgrade procedure.

Compatibility

Deprecation Notices 

  • Neutron LBaaS is deprecated and replaced by the OpenStack Octavia project in this release.
  • Neutron FWaaS v1 is deprecated and replaced by FWaaSv2 in this release.
  • The following networking features have been deprecated and will be removed in a future version:
    • The NSX Data Center for vSphere driver for Neutron.
    • The NSX-T Management Plugin for Neutron. In the future, the NSX-T Policy Plugin will be used.
    • The TVD plugin, which allows a single VMware Integrated OpenStack deployment to use an NSX Data Center for vSphere back end and an NSX-T Data Center back end.

Internationalization

VMware Integrated OpenStack 7.0 is available in English and in seven additional languages: Simplified Chinese, Traditional Chinese, Japanese, Korean, French, German, and Spanish.

The following items must contain only ASCII characters:

  • Names of OpenStack resources (such as projects, users, and images)
  • Names of infrastructure components (such as ESXi hosts, port groups, data centers, and datastores)
  • LDAP and Active Directory attributes 

Open Source Components for VMware Integrated OpenStack

The copyright statements and licenses applicable to the open-source software components distributed in VMware Integrated OpenStack 7.0 are available on the Open Source tab of the product download page. You can also download the disclosure packages for the components of VMware Integrated OpenStack that are governed by the GPL, LGPL, or other similar licenses that require the source code or modifications to source code to be made available.

Known Issues

The known issues are grouped as follows.

    Upgrade

    Consider the following issues before performing an upgrade.
    • Neutron service reports ToozConnectionError when multiple operations are performed concurrently

      After upgrading from VMware Integrated OpenStack 5.x to 7.0 with the NSX-V plugin, the Neutron service reports ToozConnectionError when executing network-related operations, particularly during periods when multiple operations are performed concurrently.

      Workaround: Perform the following steps:

      1. Set the neutron-server replica count to 1 in the VMware Integrated OpenStack UI.
      2. Update the Neutron configuration using the viocli update command and add the following settings:
        viocli update neutron
        conf:
           neutron:
              DEFAULT:
                 locking_coordinator_url: etcd://etcd-server.openstack.svc.cluster.local:2379?timeout=600&lock_timeout=600

      For further improved performance, you can also change the size of the VIO manager and controller node from medium to large as described in Migrate to the New VMware Integrated OpenStack Deployment.
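
      To confirm that the new configuration has been applied, you can watch the neutron-server pods be re-created (a quick check, assuming SSH access to the management VM):

        kubectl -n openstack get pods | grep neutron-server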

    • A firewall created in version 5.x cannot be displayed after upgrading to VMware Integrated OpenStack 7.0 directly

      VMware Integrated OpenStack 5.x uses FWaaS v1. VMware Integrated OpenStack 7.0 uses FWaaS v2.

      Workaround: To migrate a firewall created using version 5.x to version 7.0, upgrade VMware Integrated OpenStack to version 6.0, manually perform the FWaaS v1 to FWaaS v2 migration, then upgrade to version 7.0.

    • After upgrading from VIO 5.x to 7.0, Horizon operations continually redirect to the login page

      Following an upgrade, the Horizon or Ingress pods might be in a bad state. Re-creating them restores normal function.

      Workaround: Perform the following steps to re-create the Horizon or Ingress pods.

      1. ssh to the management VM.
      2. To re-create the Horizon pod, use the command:
        kubectl get pods -n openstack | grep horizon-server | awk '{print $1}' | xargs kubectl -n openstack delete pod
      3. To re-create the Ingress pod, use the command:
        kubectl get pods -n openstack | grep '^ingress' | awk '{print $1}' | xargs kubectl -n openstack delete pod
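
      After deleting the pods, you can confirm that replacements come back up (a quick check; pod names vary by deployment):

        kubectl -n openstack get pods | grep -E 'horizon|ingress'
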
    • vCenter Server instances that contain no compute clusters are not retained during upgrade.

      If your VMware Integrated OpenStack 5.1 deployment includes vCenter Server instances from which no compute clusters have been added to your deployment, the settings for those vCenter Server instances are not retained after you upgrade to VMware Integrated OpenStack 7.0.

      Workaround: Add the desired vCenter Server instances to your VMware Integrated OpenStack 7.0 deployment after the upgrade is finished.

    • If migrating load balancers from Neutron-lbaas to Octavia, load balancers without members disappear

      Load balancers without members serve no traffic, so they are not migrated. Because they have no back-end implementation, they disappear during the Neutron DB sync.

      Workaround: Before migrating, remove unused load balancers, as in the sketch below.
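
      A minimal sketch using the deprecated neutron-lbaas CLI; first confirm through its pools that a load balancer really has no members:

        # List LBaaS v2 load balancers and inspect their pools for members
        neutron lbaas-loadbalancer-list
        neutron lbaas-pool-list
        # Delete a load balancer that has no members
        neutron lbaas-loadbalancer-delete <loadbalancer-id>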

    • Stale Nova services exist following upgrade from VIO 5.1.0.4

      After upgrading to VIO 7.0, legacy Nova services from 5.1.0.4 are reported as "down":

      [root@vioadmin1-vioshim-5dbf477dc4-2lh7s ~]# nova service-list
      +--------------------------------------+------------------+--------------+----------+---------+-------+----------------------------+-----------------+-------------+
      | Id                                   | Binary           | Host         | Zone     | Status  | State | Updated_at                 | Disabled Reason | Forced down |
      +--------------------------------------+------------------+--------------+----------+---------+-------+----------------------------+-----------------+-------------+
      | cbe3345a-4daa-41fa-8133-60302e369533 | nova-conductor   | controller02 | internal | enabled | down  | 2020-04-29T14:58:20.000000 | -               | False       |
      | cb2ebf3c-53e1-493b-979e-471a1bd2e3b9 | nova-conductor   | controller01 | internal | enabled | down  | 2020-04-29T14:58:20.000000 | -               | False       |
      | 669d979a-26a2-4b85-a139-a6705a3b04f8 | nova-scheduler   | controller02 | internal | enabled | down  | 2020-04-29T14:58:23.000000 | -               | False       |
      | f97b8c40-ab45-4e9c-ac3d-675f1600d5ad | nova-scheduler   | controller01 | internal | enabled | down  | 2020-04-29T14:58:23.000000 | -               | False       |
      | 39312d81-841e-4399-b178-f9b7d47eba1b | nova-consoleauth | controller02 | internal | enabled | down  | 2020-04-29T14:58:17.000000 | -               | False       |
      | 793480c6-6503-4fc2-a89e-fe20a3700efb | nova-consoleauth | controller01 | internal | enabled | down  | 2020-04-29T14:58:16.000000 | -               | False       |
      +--------------------------------------+------------------+--------------+----------+---------+-------+----------------------------+-----------------+-------------+

      Workaround: Remove services that are in a "down" state.

      1. Update the Nova CR.
        viocli update nova nova-xxx
      2. For the manifests parameter, set cron_job_service_cleaner: true, for example:
        conf:
          nova:
            neutron:
              metadata_proxy_shared_secret: .Secret:managedencryptedpasswords:data.metadata_proxy_shared_secret
            vmware:
              passthrough: "false"
              tenant_vdc: "false"
        manifests: 
          cron_job_service_cleaner: true
      3. Save the CR and wait for all Nova pods to be re-created.

      After approximately one hour, the cronjob removes all services that were in a "down" state.
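
      If you prefer not to wait for the cronjob, services in a "down" state can also be removed manually with the Nova CLI (a sketch; use the service IDs reported by your own deployment):

        # List services and note the IDs of entries whose State is "down"
        nova service-list
        # Delete each stale service by ID
        nova service-delete cbe3345a-4daa-41fa-8133-60302e369533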

    • LDAP users are unable to log in following upgrade from VIO 5.1

      After upgrading from VIO 5.1, LDAP users are unable to log in to VIO 7.0.

      Workaround: After the release of VIO 5.1, a change occurred in the Keystone community code. To work around this problem, you must update the user database. See KB79373.

    • If the designate component is enabled in VIO 5.x, zones created in VIO 5.x cannot be updated following an upgrade to 7.0

      The architecture of the designate component in VIO 5.x differs from the architecture in VIO 7.0. In VIO 5.x, the master IPs of a designate zone correspond to the public IPs of the load balancer nodes. After an upgrade to VIO 7.0, the deployment no longer uses the public IPs of the load balancer nodes, so the zone cannot make an AXFR request to its master IPs.

      Workaround: Manually update the master IP of the zones created in 5.x to the public VIP. The following steps show how to perform the update with a bind9 back end. The bind9 server must support the modzone operation.

      1. Select a designate-worker-XXXXX pod and log in.
        osctl exec -it designate-worker-XXXXX bash
      2. Get information about the designate zone that was created in VIO 5.1. In the following example, the IP address of the bind9 server is 192.168.111.254 and the designate zone is test.com.
        rndc -s 192.168.111.254 -p 953 -k /etc/designate/rndc.key showzone test.com.
        zone "test.com" { type slave; file "slave.test.com.1b10d5ec-e39c-449d-b1e1-c7bf1fba02e5"; masters { 192.168.112.162 port 5354; 192.168.112.163 port 5354;}; }; 
      3. Update the master IP of the designate zone to the public VIP of VIO 7.0. In the following example, the public VIP of VIO 7.0 is 192.168.112.160.
        rndc -s 192.168.111.254 -p 953 -k /etc/designate/rndc.key modzone test.com '{ type slave; file "slave.test.com.1b10d5ec-e39c-449d-b1e1-c7bf1fba02e5"; masters { 192.168.112.160 port 5354; }; };'
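
      To verify the change, run the showzone command from step 2 again and confirm that the masters list now contains only the public VIP:

        rndc -s 192.168.111.254 -p 953 -k /etc/designate/rndc.key showzone test.com.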

    General Issues

    • Public API rate limiting is not available.

      In VMware Integrated OpenStack 7.0, it is not possible to enforce rate limiting on public APIs.

      Workaround: None. This feature will be offered in a later version.

    • After you upgrade to VMware Integrated OpenStack 7.0, a default pool cannot be added to existing LBaaS listeners.

      VMware Integrated OpenStack 5.1 does not support changing the default pool of an LBaaS listener. This feature is supported in VMware Integrated OpenStack 7.0. However, if you create an LBaaS listener in VMware Integrated OpenStack 5.1 that does not have a default pool, you cannot add a default pool to the listener even after upgrading to VMware Integrated OpenStack 7.0.

      Workaround: Delete the affected listener and create it again.

    • Creating a load balancer with a private subnet that is not attached to a router results in an ERROR state

      With the Neutron NSX-T plugins (both the Management Plugin and the Policy Plugin), creating a load balancer with a private subnet that is not attached to a router leaves the load balancer in an ERROR state, and the error is not reported to the user.

      Workaround: Create the load balancer with a subnet that is attached to a router.

    • Some firewall groups, policies, and rules are missing from the Horizon Firewall dashboard.

      The Horizon Firewall dashboard displays only firewall groups, policies, and rules whose shared flag is set to true or false. If the shared flag is left unset, the firewall groups, policies, and rules do not appear.

      Workaround: Because the OpenStack CLI does not require the shared flag to be set, you can use it to list firewall groups, policies, and rules, as shown after the SQL statements below. Alternatively, for all records in the Horizon Firewall dashboard with an unset shared flag, you can update the value to zero or false; logically, false is equivalent to not set. For example, run the following statements against the Neutron database:

      update firewall_groups_v2 set shared=0 where shared is NULL;
      update firewall_policies_v2 set shared=0 where shared is NULL;
      update firewall_rules_v2 set shared=0 where shared is NULL;
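
      For reference, the CLI listings look like the following (assuming the FWaaS v2 commands are available in your openstack client):

      openstack firewall group list
      openstack firewall policy list
      openstack firewall rule list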

    • Uploading an image to glance times out before completion

      Uploading a large image to glance might time out before completion when using either Horizon or the CLI.

      • If using the Horizon UI, the image is uploaded to a Horizon temp folder and then uploaded to glance. The Horizon session time limit is one hour. If the upload operation does not complete within one hour, the session terminates and the image upload process aborts.
      • If using the CLI, the image is uploaded to the glance-api working folder, where its format might be converted. The image is then transferred to the glance datastore. If the image is too large, the process may time out and abort.

      Workaround: To remove the extra step of uploading to the Horizon temp folder and save upload time, upload the image using the CLI. For large images, place them on a web server that is accessible to the client running the CLI, and then use the glance task-create command to import the image asynchronously. See https://docs.openstack.org/python-glanceclient/latest/cli/details.html#glance-task-create

      For example, you can use the following command to import an image from the URL: https://raw.githubusercontent.com/arnaudleg/openstack-cli-tests/master/files/cirros-0.3.0-i386-disk.vmdk:

      glance task-create --type import --input '{"import_from_format": "vmdk", "import_from": "https://raw.githubusercontent.com/arnaudleg/openstack-cli-tests/master/files/cirros-0.3.0-i386-disk.vmdk", "image_properties": {"name": "cirros-imported", "disk_format": "vmdk", "container_format": "bare", "vmware_adaptertype": "ide", "vmware_disktype": "streamOptimized", "vmware_ostype": "otherGuest"}}'
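
      The import runs as an asynchronous task; you can track its progress with the task commands from the same client (the task ID comes from the task-create output):

      # List tasks and check the status of the import
      glance task-list
      glance task-show <task-id>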

    • Pods go into a pending state, and the status of the VIO Management Server or controller changes to not ready.

      This condition is due to resource contention on the VIO Management Server and controller VMs. When the kubelet cannot complete a task in a reasonable amount of time, it marks the Kubernetes node as not ready. Kubernetes attempts to reschedule the pods to other available nodes. If no other nodes are available, the pods go into a pending state. With a higher density of pods on a single node, a compact setup is more vulnerable to this problem.

      Workaround: To change the Kubernetes node status from not ready to ready, restart the kubelet service using systemctl restart kubelet. To address the problem more permanently, scale out to add controller nodes and reduce the load on each node.
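
      After restarting kubelet, you can confirm that the node has returned to the Ready state (a quick check from the management VM):

      kubectl get nodes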

    • If incorrect credentials are entered when deploying OpenStack, the wizard may fail to recognize correct credentials.

      During the OpenStack deployment process, if vCenter Server or NSX Manager credentials are entered incorrectly, the wizard may fail to recognize correct credentials. Even if you remove the incorrect information and enter the correct credentials, the wizard may fail to validate them.

      Workaround: Close the deployment wizard and open it again.

    • Using CIDR 0.0.0.0/x as an IPv4 address block in security groups translates to 'any' on the NSX-V back end

      When setting a security group rule with CIDR 0.0.0.0/x (x>0, x<=32), the VMware Integrated OpenStack plugin translates it to 'any' on the NSX-V back end. This also applies when setting a security group rule with IPv6 ::/x (x>0, x<=128).

      Workaround: Do not use the 0.0.0.0/x or ::/x address blocks; scope rules to specific CIDR blocks instead, as in the sketch below.
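
      A minimal sketch of a rule that uses a specific remote CIDR instead of 0.0.0.0/x (the group name and addresses are illustrative):

      # Allow SSH from a specific address block rather than 0.0.0.0/0
      openstack security group rule create --protocol tcp --dst-port 22 --remote-ip 10.10.0.0/16 my-secgroup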
