VMware NSX-T Data Center Plugin 3.1 for OpenStack Neutron | October 30, 2020

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

  • Release Compatibility
  • What's New in VMware NSX-T Data Center Plugin for OpenStack Neutron
  • Known Limitations
  • Resolved Issues
  • Known Issues

Release Compatibility

  • The Neutron Plugin for the Management API (Imperative API) and the Policy API (Declarative API) is:
    • Compatible with OpenStack Stein and OpenStack Train 
    • Compatible with VIO 7.0.1 and VIO 6.0.1
  • New features exposed by the NSX-T 3.1 OpenStack Neutron Plugin are not available with Stein or with VIO 6.0.1, which are in backward-compatibility mode.

See the VMware NSX OpenStack Plugin Installation & Configuration Guide and NSX-T Data Center Installation Guide for more details on compatibility and system requirements.

What's New in VMware NSX-T Data Center Plugin for OpenStack Neutron

  • Conversion to the NSX-T Policy Neutron Plugin for OpenStack environments consuming the Management API: Allows you to move an OpenStack with NSX-T environment from the Management API to the Policy API. This gives you the ability to move an environment deployed before NSX-T 2.5 to the latest NSX-T Neutron Plugin and take advantage of the latest platform features, such as VRF-Lite.
    This feature requires a VIO version that supports it. It is strongly recommended to perform an NSX-T backup before proceeding with the migration, as the process cannot be rolled back once completed.
     
  • Ability to change the order of NAT and Firewall enforcement on the OpenStack Neutron Router: This gives you the choice in your deployment of the order of operation between NAT and firewall. At the OpenStack Neutron Router level (mapped to a Tier-1 in NSX-T), the order of operation can be defined to be either NAT then firewall or firewall then NAT. This is a global setting for a given OpenStack Platform and a Policy Plugin-only feature.
     
  • Support of Stateful DHCPv6: This builds on the IPv6 features offered in 3.0 and 2.5 to allow you to consume IPv6 workloads. You can now choose between SLAAC and DHCPv6 for IP addressing. This is a Policy Plugin-only feature.
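    For instance, assuming the standard OpenStack CLI (the network name and prefix below are placeholders), stateful DHCPv6 is selected through the usual Neutron IPv6 subnet attributes:

    # Create an IPv6 subnet whose addresses are assigned by stateful DHCPv6
    openstack subnet create demo-v6-subnet \
        --network demo-net \
        --ip-version 6 \
        --subnet-range 2001:db8:1234::/64 \
        --ipv6-address-mode dhcpv6-stateful \
        --ipv6-ra-mode dhcpv6-stateful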

Known Limitations

  • If installed directly, the Train or Stein packages must be run on a controller node running either Ubuntu 18.04 or RHEL 7.7. (RHEL 8.2 requires Python 3 packages, and Ubuntu 20.04 is not supported for Train.)
  • VIO 7.0.0 is not compatible with the NSX-T 3.1.0 OpenStack Neutron Plugin. Update VIO to 7.0.1.
  • If a configured edge uplink profile "Transport VLAN" and a deployed VLAN network both have the same VLAN ID set, there can be disruptive side effects, so this configuration should not be used. Any VLAN ID overlap (not just VLAN 0) between the "Transport VLAN" and a deployed VLAN network will cause the issues seen with MDProxy and DHCP.
  • Cannot add more than one subnet with DHCP enabled to a network. 
  • Cannot configure multiple addresses from the same subnet for a given Neutron port.
  • Cannot add two routers to the same network. 
  • Can associate a maximum of 24 Security Groups per port. This limitation is due to the platform maximum of 30 tags per port, of which 24 are available for Security Groups.
  • Metadata only supports ports 3000-9000.
  • IPv6 is only supported by the NSX-T OpenStack Neutron plugin for Policy (NSXP), not by the NSX-T OpenStack Neutron plugin for the Management Plane API (non-Policy).
  • QoS currently supports "Shaping" and "DSCP Marking" (not "CoS Marking" or "Minimum BW") for ESXi and KVM.
  • QoS is enforced for traffic leaving the hypervisor (not intra-hypervisor).
  • FWaaS is not enforced for E/W traffic among downlink interfaces on the same Neutron router.
  • FWaaS rules are enforced on traffic from VIPs only if the order of operation between firewall, NAT, and VIP allows it.
  • NSX-T Data Center does not support per-pool session persistence profiles. When Layer 7 rules enable a VIP to route traffic to multiple pools, persistence profile settings on these pools will have no effect.

Resolved Issues

  • FWaaS rules are not enforced as expected. Some rules that work correctly with the NSX-v driver or the reference implementation driver seem to be ignored by NSX-T Edge Firewall.

    Firewall behavior for NSX-T differs from NSX-v and the reference implementation: FWaaS is enforced after NAT for ingress and before NAT for egress.

  • After setting the default pool for a listener, no traffic is received by the LB VIP.

    Updating the default pool of a listener, even if allowed by Neutron-LBaaS, is an action that is not implemented in NSX-T. As a result, even if the pool on the NSX-T virtual server is correctly updated, the NSX-T LBS will not be updated with the VS ID.

    Therefore, if the VS is not already associated with an LBS, the association will not be created.

Known Issues

  • VM boots and correctly gets an IP from the DHCP server, but cannot send/receive any traffic; the actual VM IP differs from the one reported by Neutron

    When DHCP relay is enabled, SpoofGuard might block outbound traffic from VMs.

    Workaround: Disable port security on every network.
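    For example, assuming the standard OpenStack CLI (the network and port identifiers are placeholders), port security can be disabled as follows:

    # Disable port security as the default for new ports on a network
    openstack network set --disable-port-security <network-id>

    # Disable port security (and remove security groups) on an existing port
    openstack port set --no-security-group --disable-port-security <port-id>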

  • Explicitly adding a DFW rule to allow traffic from the LB VIP has no effect.

    OpenStack configures NSX-T Virtual Servers with SNAT automap mode. This has been necessary to satisfy the use case where the LB connection happens from a client located behind the same Tier-1 router as the target members.

    In this mode, the source IP for traffic coming from the load balancer is not the VIP itself, but an address on the internal transit network. This does not create any problem for the OpenStack integration.

    Workarounds (see the example after this list):

    1. Whitelist the traffic in the security group.
    2. Find on NSX-T the IP on the transit network that the LB VS uses, and create a DFW rule that includes at least this address.
    3. Whitelist traffic from the internal subnet CIDR.
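    For workarounds 1 and 3, a rule can be added with the standard OpenStack CLI; the security group name, protocol/port, and transit CIDR below are placeholders that must be adapted to the actual environment:

    # Allow member traffic originating from the load balancer's transit-network address
    openstack security group rule create \
        --ingress --protocol tcp --dst-port 80 \
        --remote-ip 100.64.0.0/16 \
        <members-security-group>
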
  • Unable to update device names when using neutron l2-gateway-update.

    The operation
    neutron l2-gateway-update <l2_gw_id> --device name=<device_name>,interface_names=<interface_name>
    will always fail if <device_name> is not already defined. This operation is indeed conceived for updating interfaces on existing devices.

    Workaround: If there is a need to use a different gateway device, it is necessary to delete and recreate the layer 2 gateway in Neutron, as the Neutron l2gw capabilities do not offer a solution for updating gateway devices.
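    For example, assuming the networking-l2gw CLI (identifiers are placeholders, and any existing l2-gateway-connection for the gateway may need to be removed first):

    # Delete the existing layer 2 gateway and recreate it pointing at the new device
    neutron l2-gateway-delete <l2_gw_id>
    neutron l2-gateway-create --device name=<new_device_name>,interface_names=<interface_name> <gateway_name>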

  • OpenStack Load Balancer goes into error state after adding a listener when a significant number of listeners is already configured on the same load balancer.

    This happens because the maximum number of virtual servers allowed on the NSX-T load balancer service is being exceeded. The maximum number allowed depends on the NSX-T version and the size of the load balancer service. For details about allowed maximums, refer to the configuration maximums for NSX-T.

    The Octavia service will only report the fact that the load balancer went into error state. No information about the backend failure can be retrieved via Octavia. The information will only be available in Neutron logs.

    Workaround: There is no workaround for this issue.

    In Octavia, once a load balancer goes into ERROR state it becomes immutable and no further operation is allowed on it.

    Existing listeners will, however, keep working.
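    Before adding listeners to a heavily used load balancer, the current listener count can be checked against the NSX-T configuration maximums with the Octavia CLI (the load balancer identifier is a placeholder), for example:

    # Show the full status tree of the load balancer, including its listeners
    openstack loadbalancer status show <loadbalancer-id>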

  • When creating an OpenStack load balancer on an "internal" subnet, the relevant logical switch will have a logical port that is operationally down.

    OpenStack load balancers (both Neutron LBaaS v2 and Octavia) have a "network driver" that always creates a Neutron logical port for a load balancer.

    However, in NSX-T, load balancer services are implemented on Tier-1 routers and therefore have no logical port on the VIP subnet specified by Neutron LBaaS v2 or Octavia.

    There will therefore be an unused logical port, which will be removed when the load balancer is deleted.

    Workaround: This logical port in operational status "down" is harmless, and can simply be ignored.

  • In Octavia, updating an L4 load balancer with cookie-based session persistence sends the load balancer into ERROR status.

    After updating the session persistence profile on a pool from Source IP to Cookie-based, the load balancer goes into ERROR status and becomes immutable.

    Unfortunately, the Octavia service does not validate whether the appropriate session persistence type is being applied to the pool.

    Therefore, when NSX-T is updated with invalid information (a request for HTTP cookie-based session persistence on a TCP virtual server), NSX-T returns an error that then sends the Octavia load balancer into ERROR status.
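
    For illustration, an update of this kind (the pool identifier is a placeholder) is what triggers the failure when the pool belongs to a TCP listener:

    # Switching an L4 pool to cookie-based persistence triggers the NSX-T error
    openstack loadbalancer pool set --session-persistence type=HTTP_COOKIE <pool-id>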

    Unfortunately, Octavia is not able to display details about the driver error.

    Workaround: There is no workaround for this issue. Once the load balancer goes into ERROR status it becomes immutable and cannot be recovered.

  • After migration to Policy, "VIP ports" will be shown as segment ports in the Policy manager.

    The logical port that was created on the MP by the Neutron LB driver is migrated to Policy, even though the OpenStack integration with NSX-T Policy does not create segment ports for VIPs.

    Workaround: No workaround available. This additional segment port is irrelevant and will be ignored by the Neutron NSX-T Policy plugin. It can be safely removed post-migration.

  • LB health monitor deletion fails with "Server-side error: "'NoneType' object has no attribute 'load_balancer_id'".

    This is a bug in the Octavia service that affects the VMware NSX plugin. In some cases, the Octavia service fails to retrieve the pool corresponding to a health monitor thus triggering this error.

    The bug is currently open and tracked at https://storyboard.openstack.org/#!/story/2008231

    Workaround: The deletion of the health monitor can be retried and should eventually succeed.
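
    For example (the health monitor identifier is a placeholder), simply re-issue the delete until it succeeds:

    # Retry deleting the health monitor
    openstack loadbalancer healthmonitor delete <healthmonitor-id>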
