VMware NSX-T Data Center Plugin 3.2 for OpenStack Neutron | December 16, 2021

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

  • Release Compatibility
  • Note on OpenStack and KVM Support
  • What's New in VMware NSX-T Data Center Plugin for OpenStack Neutron
  • Known Limitations
  • Known Issues

Release Compatibility

  • The Neutron Plugin for Management API (Imperative API) is:
    • Compatible with OpenStack Ussuri and OpenStack Train 
    • Compatible with VIO 7.2 (and other potential future versions of VIO)
  • The Neutron Plugin for Policy API (Declarative API) is:
    • Compatible with OpenStack Ussuri and OpenStack Train 
    • Compatible with VIO 7.2 and VIO 7.1 (and other potential future versions of VIO)

Future versions of VIO might be added to the compatibility list when they are released.

See the VMware NSX OpenStack Plugin Installation & Configuration Guide and NSX-T Data Center Installation Guide for more details on compatibility and system requirements.

Note on OpenStack and KVM Support

The 3.x release series is the final series to support KVM and non-VIO OpenStack distributions, including but not limited to Red Hat OpenStack Platform, Canonical/Ubuntu OpenStack, SUSE OpenStack Cloud, Mirantis OpenStack, and community-based OpenStack without a specific vendor. Customers using non-VIO OpenStack with NSX are encouraged to consider vRealize Automation or VMware Cloud Director as a replacement for their deployments.

What's New in VMware NSX-T Data Center Plugin for OpenStack Neutron

  • Support for migration from NSX for vSphere to NSX-T in environments with VIO:
    This migration allows you to move environments with VIO/NSX for vSphere to VIO/NSX-T. It is an in-place migration leveraging Migration Coordinator and migrates the objects created from VIO (configurations created outside of VIO need to be recreated manually before or after migration).


Known Limitations

  • VIO 7.1 is not compatible with NSX-T 3.2.0 OpenStack Neutron Plugin for Management API (Imperative API). 
  • If a configured edge uplink profile "Transport VLAN" and a deployed VLAN network are set to the same VLAN ID, there can be disruptive side effects; this configuration should not be used. Any VLAN ID that overlaps between the "Transport VLAN" and a deployed VLAN network, not just VLAN 0, will cause issues with MDProxy and DHCP. 
  • Cannot add more than one subnet with DHCP enabled to a network (see the example after this list). 
  • Cannot configure multiple addresses from the same subnet for a given Neutron port.
  • Cannot add two routers to the same network. 
  • Can associate a maximum of 24 Security Groups per port. This limitation is due to the platform maximum of 30 tags per port, of which 24 are available for Security Groups.
  • Metadata only supports ports 3000-9000.
  • IPv6 is only supported for the NSX-T OpenStack Neutron plugin for Policy (NSXP) and not for NSX-T OpenStack Neutron plugin for the Management Plane API (non Policy).
  • QoS currently supports "Shaping" and "DSCP Marking" (not "CoS Marking" or "Minimum BW") for ESXi and KVM.
  • QoS is enforced for traffic leaving the hypervisor (not intra-hypervisor).
  • FWaaS is not enforced for E/W traffic among downlink interfaces on the same Neutron router.
  • FWaaS rules are enforced on traffic from VIPs only if the ordering of firewall, NAT, and VIP rules allows it.
  • NSX-T Data Center does not support per-pool session persistence profiles. When Layer 7 rules enable a VIP to route traffic to multiple pools, persistence profile settings on these pools will have no effect.
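
As a minimal sketch of working within the single-DHCP-subnet limitation, any additional subnet on the same network can be created with DHCP disabled using the OpenStack CLI (the network identifier, subnet names, and CIDRs below are placeholders):

    # the first subnet on the network may enable DHCP
    openstack subnet create --network <network-id> --subnet-range 192.168.10.0/24 --dhcp subnet-one
    # any additional subnet on the same network must disable DHCP
    openstack subnet create --network <network-id> --subnet-range 192.168.20.0/24 --no-dhcp subnet-two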

Known Issues

  • VM boots and correctly gets an IP from the DHCP server, but cannot send/receive any traffic; the actual VM IP differs from the one reported by Neutron

    When DHCP relay is enabled, SpoofGuard might block outbound traffic from VMs.

    Workaround: Disable port security on every network.
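
    A minimal sketch of this workaround with the OpenStack CLI (identifiers are placeholders; a port's security groups must be cleared before port security can be disabled, and the network-level flag only changes the default for ports created afterwards):

      # disable port security on an existing port
      openstack port set --no-security-group --disable-port-security <port-id>
      # disable port security by default for new ports on the network
      openstack network set --disable-port-security <network-id>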

  • Explicitly adding a DFW rule to allow traffic from the LB VIP has no effect.

    OpenStack configures NSX-T Virtual Servers with SNAT automap mode. This has been necessary to satisfy the use case where the LB connection happens from a client located behind the same Tier-1 router as the target members.

    In this mode, the source IP for the traffic coming from the load balancer is not the VIP itself, but an address on the internal transit network. This does not create any problem for the OpenStack integration.

    Workarounds:

    1. Whitelist traffic into the security group.
    2. Find in NSX-T the IP on the transit network that the LB virtual server uses, and create a DFW rule that includes at least this address.
    3. Whitelist traffic from the internal subnet CIDR.
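
    As a hedged example of workaround 3, a Neutron security group rule allowing traffic from the transit subnet could look like this with the OpenStack CLI (protocol, port, CIDR, and security group name are placeholders to be determined for your deployment):

      openstack security group rule create --ingress --protocol tcp \
          --dst-port <member-port> --remote-ip <transit-subnet-cidr> <security-group>
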
  • Unable to update device names when using neutron l2-gateway-update.

    The operation
    neutron l2-gateway-update <l2_gw_id> --device name=<device_name>,interface_names=<interface_name>
    will always fail if <device_name> is not already defined. This operation is intended for updating interfaces on existing devices.

    Workaround: If there is a need to use a different gateway device, it is necessary to delete and recreate the layer 2 gateway on neutron, as neutron l2gw capabilities do not offer a solution for updating gateway devices.
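
    A minimal sketch of this workaround, assuming the networking-l2gw CLI and placeholder names (any existing l2-gateway-connections must be deleted before the gateway and recreated afterwards):

      neutron l2-gateway-delete <l2_gw_id>
      neutron l2-gateway-create --device name=<new_device_name>,interface_names=<interface_name> <gateway_name>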

  • OpenStack load balancer goes into ERROR state after adding a listener, when a significant number of listeners is already configured on the same load balancer.

    This happens because the maximum number of virtual servers allowed on the NSX-T load balancer service is being exceeded. The maximum number allowed depends on the NSX-T version and the size of the load balancer service. For details about allowed maximums, refer to the configuration maximums for NSX-T.

    The Octavia service will only report the fact that the load balancer went into error state. No information about the backend failure can be retrieved via Octavia. The information will only be available in Neutron logs.

    Workaround: There is no workaround for this issue.

    In Octavia, once a load balancer goes into ERROR state it becomes immutable and no further operation is allowed on it.

    Existing listeners will however keep working.
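
    If the load balancer must be recovered, the only path is to delete it and recreate it with fewer listeners, so that the NSX-T maximums are not exceeded. A hedged sketch with the Octavia CLI (the identifier is a placeholder; --cascade also removes the listeners, pools, and members):

      openstack loadbalancer delete --cascade <loadbalancer-id>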

  • When creating an OpenStack load balancer on an "internal" subnet, the relevant logical switch will have a logical port that is operationally down.

    OpenStack load balancers (both Neutron LBaaS v2 and Octavia) have a "network driver" which always creates a Neutron logical port for a load balancer.

    However, in NSX-T load balancer services are implemented on Tier-1 routers, and therefore have no logical port on the VIP subnet specified from Neutron LBaaS v2 or Octavia.

    There will therefore be an unused logical port, which will be removed when the load balancer is deleted.

    Workaround: This logical port in operational status "down" is harmless, and can simply be ignored.

  • In Octavia, updating an L4 load balancer with cookie-based session persistence sends the load balancer into ERROR status.

    After updating the session persistence profile on a pool from Source IP to Cookie-based, the load balancer goes into ERROR status and becomes immutable.

    Unfortunately the Octavia service is not validating whether the appropriate session persistence type is being applied to the pool.

    Therefore, when NSX-T is updated with invalid information (a request for HTTP cookie-based session persistence on a TCP virtual server), NSX-T returns an error that then sends the Octavia load balancer into ERROR status.

    Unfortunately, Octavia is not able to display details about the driver error.

    Workaround: There is no workaround for this issue. Once the load balancer goes into ERROR status it becomes immutable and cannot be recovered.
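
    To avoid hitting this condition, only apply SOURCE_IP session persistence to pools behind TCP (L4) virtual servers, and reserve cookie-based persistence for HTTP pools. For example, with the Octavia CLI (the pool identifier is a placeholder):

      openstack loadbalancer pool set --session-persistence type=SOURCE_IP <pool-id>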

  • After migration to Policy, "VIP ports" will be shown as segment ports in Policy Manager.

    The logical port that was created on the Management Plane by the Neutron LB driver is migrated to Policy, even though OpenStack integration with NSX-T Policy does not create segment ports for VIPs.

    Workaround: No workaround available. This additional segment port is irrelevant and will be ignored by the Neutron NSX-T Policy plugin. It can be safely removed post-migration.
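
    As a hedged sketch, the leftover segment port could be removed with the NSX-T Policy API after verifying it is the stale VIP port (manager address, credentials, segment ID, and port ID are placeholders):

      curl -k -u admin -X DELETE \
          "https://<nsx-manager>/policy/api/v1/infra/segments/<segment-id>/ports/<port-id>"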

  • LB health monitor deletion fails with server-side error: "'NoneType' object has no attribute 'load_balancer_id'".

    This is a bug in the Octavia service that affects the VMware NSX plugin. In some cases, the Octavia service fails to retrieve the pool corresponding to a health monitor, thus triggering this error.

    The bug is currently open and tracked at https://storyboard.openstack.org/#!/story/2008231

    Workaround: The deletion of the health monitor can be retried and should eventually succeed.
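
    For example, a simple retry loop with the Octavia CLI (the health monitor identifier is a placeholder):

      until openstack loadbalancer healthmonitor delete <healthmonitor-id>; do sleep 10; done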
