VMware vCloud® NFV™ OpenStack Edition 3.1 Release Notes | 12 MAR 2019

NOTE: These Release Notes do not provide the terms applicable to your license. Consult the VMware Product Guide and VMware End User License Agreement for the license metrics and other license terms applicable to your use of VMware vCloud NFV.

Check for additions and updates to these Release Notes.

What's in the Release Notes

These Release Notes apply to vCloud NFV OpenStack Edition 3.1 and cover the following topics:

vCloud NFV OpenStack Standard Edition and Advanced Edition

Starting July 1, 2019, vCloud NFV OpenStack is available in two editions, offering a simple path to customize deployments based on NFV use cases and requirements.

The two editions are:

  • vCloud NFV OpenStack Standard Edition
  • vCloud NFV OpenStack Advanced Edition

The components included in each edition are listed in the sections below.

What's New in this Release

vCloud NFV OpenStack Edition 3.1 is a minor release and delivers the following capabilities:

  • VMware NSX-T Data Center Switch with Overlay Support. When used with vSphere 6.7, the N-VDS switch supports a high-performance mode called Enhanced datapath. In this mode, NSX-T Data Center provides a hypervisor-based virtual switch that is considerably faster than the vSphere standard or distributed switches. With version 2.3, N-VDS in Enhanced datapath mode supports overlay networking in addition to VLAN backing. This capability serves NFV use cases that need a high-performance virtual switch without sacrificing virtualization benefits such as vMotion and DRS.
  • VMware Integrated OpenStack 5.1. This release of vCloud NFV includes an updated version of VMware Integrated OpenStack, 5.1, which is based on the upstream OpenStack Queens release. It delivers the following:

    • Support for the latest VMware products, including VMware ESXi 6.7 U1, VMware NSX-T Data Center 2.3, VMware NSX Data Center for vSphere 6.4.3, and VMware vRealize Operations Manager 7.0.
    • Integrated security management through OpenStack Barbican (key manager), allowing tenants to provision and manage identity certificates for services such as LBaaS.
    • Dual networking stacks are supported, with NSX-T and NSX-V managed by the same VMware Integrated OpenStack instance. Different vSphere 6.7 U1 clusters with VMware NSX-T Data Center 2.3 and VMware NSX Data Center for vSphere 6.4.3 networks, respectively, can now coexist in NFV deployments.
    • SR-IOV can now be configured together with NSX-T by using provider port groups and the NSX-T Neutron plugin for VMware Integrated OpenStack; see the example after this list.

  • Improved Operations Management with vRealize Operations Manager 7.0. This release enables improved performance optimization, capacity management, dashboarding, and reporting, as well as intelligent remediation with support for multiple clouds and additional compliance standards (PCI, HIPAA, DISA, CIS, FISMA, and ISO security).
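
As a sketch of the SR-IOV workflow through the standard OpenStack CLI, a Neutron port with vNIC type "direct" is created on a network that maps to a provider port group, and an instance is booted with that port. The network, port, image, flavor, and instance names below are hypothetical placeholders for your deployment:

    # Hypothetical names; "sriov-net" stands in for a network backed by a provider port group.
    openstack port create --network sriov-net --vnic-type direct sriov-port
    openstack server create --image vnf-image --flavor vnf-flavor \
        --nic port-id="$(openstack port show sriov-port -f value -c id)" vnf-instance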

Components of vCloud NFV OpenStack Standard Edition 3.1

Components of vCloud NFV OpenStack Advanced Edition 3.1

Mandatory Add-On Components 

Note: This component is required for both Standard Edition and Advanced Edition; it requires an additional license.

Optional Add-On Components 

Note: These components are recommended for Advanced Edition; they require additional licenses.

Validated Patches

Caveats and Limitations

  • Geneve overlay use with N-VDS in Enhanced datapath mode. To use N-VDS in Enhanced datapath mode with overlay networking, you must use Intel 7xx series NICs with a firmware version of 6.01 and higher.
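
To check the firmware version on a host, the NIC driver details can be read with esxcli; vmnic0 below is a placeholder for the uplink assigned to the enhanced N-VDS:

    # Replace vmnic0 with the physical uplink used by the enhanced N-VDS.
    esxcli network nic get -n vmnic0
    # The Driver Info section of the output reports the firmware version; confirm that it is 6.01 or later.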

Release Notes Change Log

Date          Change
26 MAR 2019   Moved VMware vRealize Network Insight 4.0 from Validated Patches to Optional Add-On Components.
02 JUL 2019   Added information about the new editions.

Support Resources

To access product documentation and additional support resources, go to the VMware vCloud NFV Documentation page.

Resolved Issues

  • Creation of an instance by using large page size memory images might fail with an error

    This issue is resolved in VIO 5.1.

  • Importing VMs from vSphere to VMware Integrated OpenStack fails when the VMs have NSX-T Data Center backed connectivity

    This issue is resolved in VIO 5.1.0.1.

  • ESXi vmkernel panic occurs when setting the TX/RX ring buffers to values that are not multiples of 32

    This issue is resolved in NSX-T 2.3.

Known Issues

  • You cannot enter 12 logical CPU cores even though the system has 12 available

    There is a one-to-one mapping between logical CPU cores and physical NIC hardware queues. A maximum of eight queues is supported on the physical NIC, so a maximum of eight logical CPU cores is supported for each enhanced N-VDS. The GUI does not display a proper error message when you enter a number greater than eight.

  • Designate: The VIO Horizon UI does not show DNS zones in Error status

    The VIO Horizon UI does not show the DNS zones that are in an error state.

    Workaround:
    If the DNS zone status is blank on the Horizon dashboard, determine the status from the VIO OMS CLI:

      openstack zone list --all-projects
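
    To narrow the output to the zone names and their statuses, the standard column-selection flags of the OpenStack client can be used (a minimal sketch, assuming the Designate plugin for the openstack client is installed on the OMS VM):

      # Show only the name and status columns for zones across all projects.
      openstack zone list --all-projects -c name -c status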

  • Nova instances with config drive enabled fail to get deployed

    Nova instances with config drive enabled, such as a VIO-K (Kubernetes) node instance, fail to get deployed on a datastore cluster.

    Workaround:

    Use a single, non-clustered datastore, such as vSAN or NFS, instead of a datastore cluster to deploy such instances in VIO.

  • Networking related errors in Horizon after stopping and starting OpenStack services

    You see networking (Neutron) related errors in Horizon after stopping and starting the OpenStack services in a VIO 5.1 coexistence setup.

    Workaround:

    1. Log in to the OMS VM by using the command line and connect to each controller over an SSH session.
    2. On each controller, switch to the root user by using sudo su.
    3. Run this command:

      service neutron-server restart
       
    4. Verify that the service is running by using this command:

      service neutron-server status
       
    5. Log in to the Horizon UI and confirm that it works as expected.
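
    If the deployment has several controllers, steps 1 through 4 can be scripted from the OMS VM. This is a minimal sketch; the controller hostnames are hypothetical placeholders, and sudo is used inline instead of an interactive root shell:

      # Hypothetical controller hostnames; substitute those of your VIO deployment.
      for ctl in controller01 controller02 controller03; do
          ssh "$ctl" "sudo service neutron-server restart && sudo service neutron-server status"
      done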

  • Objects created in NSX-V are not populated in the vRealize Log Insight dashboard

    Objects created in NSX Data Center for vSphere (NSX-V) are not populated in the vRealize Log Insight (vRLI) dashboard.

    Workaround:

    Currently, there is no workaround for this issue.

  • ESXi vmkernel panic may occur when the MTU is set to 9000 inside a Windows guest OS

    Inside a Windows guest OS, the vmxnet driver offers only 1500 or 9000 as the MTU size. When the MTU is set to 9000, the final packet size exceeds 9000 bytes after encapsulation for the overlay network. Packets larger than 9000 bytes are dropped, and a vmkernel panic may occur at any time.

    Workaround:
    In the Windows guest OS, use only an MTU size of 1500 for vmxnet drivers (see the example below).
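
    For illustration, the MTU can be set from an elevated command prompt inside the guest. "Ethernet0" is a hypothetical interface name; substitute the name of the vmxnet adapter in your VM:

      rem "Ethernet0" is a placeholder for the vmxnet adapter's interface name.
      netsh interface ipv4 set subinterface "Ethernet0" mtu=1500 store=persistent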
