VMware NSX-T Data Center Plugin for OpenStack Neutron | September 2019 

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

  • Release Compatibility
  • What's New in VMware NSX-T Data Center Plugin for OpenStack Neutron
  • NSX-T OpenStack Neutron Plugin 2.5 Limitations
  • Resolved Issues
  • Known Issues

Release Compatibility

  • Compatible with OpenStack Stein and OpenStack Rocky 
  • Compatible with VIO 6.0, VIO 5.1.0.3, VIO 4.1.2.3 and RHOSP 13 (other vendor OpenStack versions might be added subsequently).

See the VMware NSX OpenStack Plugin Installation & Configuration Guide and NSX-T Data Center Installation Guide for more details on compatibility and system requirements.

What's New in VMware NSX-T Data Center Plugin for OpenStack Neutron

  • Support for Stein and Rocky.
    New features are exposed in the Stein plugin; the Rocky plugin provides compatibility of the existing plugin with NSX-T 2.5.
     
  • New Neutron Plugin supporting the NSX-T Policy API.
    In addition to the existing Neutron plugin, NSX-T 2.5 introduces the new Policy Plugin, which consumes the NSX-T Policy API.
     
  • IPv6 Support in the Neutron Plugin for NSX-T Policy API.
    This includes static IPv6 address binding, Neutron Security Groups, FWaaS with IPv6 addresses, Neutron Router IPv6 interfaces, Neutron Router dual-stack interfaces, Neutron Router no-SNAT route redistribution with IPv6, Neutron Router static routing with IPv6, Neutron Port Security with IPv6, and SLAAC.
    Applies only to the NSX-T Neutron Plugin for Policy.
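
    For example, an IPv6 SLAAC subnet could be added with the standard OpenStack client as follows (network name and prefix are illustrative):

      openstack subnet create --network net1 --ip-version 6 \
          --ipv6-ra-mode slaac --ipv6-address-mode slaac \
          --subnet-range 2001:db8:1::/64 subnet-ipv6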
     
  • OpenStack Neutron Router Deployment Optimization.
    An OpenStack Neutron router is translated to a Tier-1 router. Previously the plugin would always create both a Tier-1 DR (Distributed Router) and a Tier-1 SR (Service Router, created on the Edge Node). Now the plugin creates a Tier-1 SR only when required (i.e., when services are enabled) and removes it when no longer required. This allows better throughput and optimized resource utilization.
    Applies to both NSX-T Neutron Plugins (Management plane API and Policy API).
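
    For example, given the behavior described above, a router attached only to internal subnets should be realized as a Tier-1 DR only, while setting an external gateway (enabling SNAT) should trigger creation of the Tier-1 SR (names are illustrative):

      openstack router create router1                            # no services: Tier-1 DR only
      openstack router set --external-gateway ext-net router1    # SNAT enabled: Tier-1 SR created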
     
  • OpenStack Neutron Router Placement Optimization.
    The Tier-1 routers created by OpenStack can be placed in a different cluster than the one with the Tier-0.
    Applies to both NSX-T Neutron Plugins (Management plane API and Policy API).
     
  • Support for Octavia Load Balancing Service.
    Applies to both NSX-T Neutron Plugins (Management plane API and Policy API). Octavia support is only available with the Stein release of the Neutron plugin.
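
    For example, a basic Octavia load balancer can be created with the standard OpenStack client (names are illustrative):

      openstack loadbalancer create --name lb1 --vip-subnet-id subnet1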
     
  • Support for FWaaS v2.
    Applies to both NSX-T Neutron Plugins (Management plane API and Policy API).
     
  • Move of the L2 Bridge to the Edge Cluster.
    The L2 bridge is now provided by the Edge cluster, allowing non-ESXi customers to implement bridging in their environments.
    Applies only to the NSX-T Neutron Plugin for the Management plane API.


NSX-T OpenStack Neutron Plugin 2.5 Limitations

  • If the “Transport VLAN” of a configured edge uplink profile and a deployed VLAN network both have the same VLAN ID set, there can be disruptive side effects; this configuration should not be used. Any VLAN ID overlap between the “Transport VLAN” and a deployed VLAN network (not just VLAN 0) causes the observed issues with Metadata Proxy (MDProxy) and DHCP.
  • Cannot add more than one subnet with DHCP enabled to a network. 
  • Cannot add two routers to the same network. 
  • Can associate a maximum of 9 Security Groups per port. This limitation is due to the platform maximum of 15 tags per port, of which 9 are available for Security Groups.
  • The metadata service only supports ports 3000-9000.
  • IPv6 is only supported by the NSX-T OpenStack Neutron plugin for Policy (NSXP), not by the NSX-T OpenStack Neutron plugin for the Management Plane API (non-Policy).
  • QoS currently supports "Shaping" and “DSCP Marking” (not "CoS Marking" nor “Minimum BW”) for ESXi and KVM.
  • QoS is enforced for traffic leaving the hypervisor (not intra-hypervisor).
  • FWaaS is not enforced for E/W traffic among downlink interfaces on the same Neutron router.
  • FWaaS rules are not enforced on traffic from VIPs.
  • NSX-T Data Center does not support per-pool session persistence profiles. When Layer 7 rules enable a VIP to route traffic to multiple pools, persistence profile settings on these pools have no effect.

Resolved Issues

  • Neutron plugin failure on VIO 5 after upgrading NSX-T from 2.3 to 2.4. Creation of security group rules always fails with 500 errors.

    Unable to create security group rules after upgrading NSX-T to 2.4. Every attempt fails with response code 500.

    Restart the neutron server after completing the NSX-T upgrade.

    Upon restart, the neutron server will read the correct NSX-T version and format DFW rule requests appropriately.
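
    For example, on a systemd-based deployment (the service name may vary by distribution):

      systemctl restart neutron-server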

Known Issues

  • FWaaS rules are not enforced as expected. Some rules that work correctly with the NSX-v driver or the reference implementation driver seem to be ignored by the NSX-T Edge Firewall.

    Firewall behavior for NSX-T differs from NSX-v and the reference implementation: FWaaS is enforced after NAT for ingress traffic and before NAT for egress traffic.

    Workaround: Define rules taking into account that egress rules are enforced before SNAT occurs, and ingress rules after DNAT has occurred.

    The following notes apply to both ALLOW and DENY rules. 

    Ingress FWaaS rules 

    Source behind NO-SNAT router 

    • The source IP should be the internal server IP or the internal subnet CIDR

    Source behind SNAT router 

    • If the source server is associated with a floating IP, use the floating IP address

    • Otherwise use the source router's gateway IP

    External source 

    • Use actual source IP address or CIDR 

    Destination

    • As the NSX-T Edge firewall is enforced after NAT, use either the internal server IP or the internal subnet CIDR

    Egress FWaaS rules 

    Source IP 

    • As the NSX-T Edge firewall is enforced before NAT in this case, use either the internal server IP or the internal subnet CIDR

    Destination IP 

    • External destination, use the actual destination IP or CIDR

    • Destination behind NO-SNAT router, the destination IP should be the internal server IP or the internal subnet CIDR 

    • Destination behind SNAT router, the destination server must be exposed via a floating IP. That floating IP should be used as the destination IP for the FWaaS rule. 
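
    For example, the following sketch uses the FWaaS v2 OpenStack client commands to allow ingress HTTP to a member with internal IP 10.0.0.5 (addresses, names, and the router port ID are illustrative; exact flags may vary by client version). Because the Edge firewall is enforced after DNAT, the rule matches the internal address even when clients reach the server via a floating IP:

      openstack firewall group rule create --name allow-http-in --action allow \
          --protocol tcp --destination-ip-address 10.0.0.5 --destination-port 80
      openstack firewall group policy create --firewall-rule allow-http-in ingress-policy
      openstack firewall group create --name fwg1 --ingress-firewall-policy ingress-policy \
          --port <router_port_id>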

  • VM boots and correctly gets an IP from the DHCP server, but cannot send or receive any traffic; the actual VM IP differs from the one reported by Neutron.

    When DHCP relay is enabled, SpoofGuard might block outbound traffic from VMs.

    Workaround: Disable port security on every network.
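
    For example, with the standard OpenStack client (network name is illustrative):

      openstack network set --disable-port-security net1

    This changes the default for ports subsequently created on the network; existing ports can be updated individually with "openstack port set --disable-port-security <port>".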

  • Explicitly adding a DFW rule for allowing traffic from the LB VIP has no effect.

    OpenStack configures NSX-T virtual servers with SNAT automap mode. This is necessary to satisfy the use case where the LB connection happens from a client located behind the same Tier-1 router as the target members.

    In this mode the source IP for the traffic coming from the load balancer is not the VIP itself, but an address on the internal transit network. This does not create any problem for the OpenStack integration.

    Workarounds:

    1) Whitelist all traffic in the security group.

    2) Find in NSX-T the IP address on the transit network that the LB virtual server uses, and create a DFW rule that includes at least this address.

    3) Whitelist traffic from the internal subnet CIDR.
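
    For example, workaround 3 can be implemented with a standard security group rule (CIDR, port, and group name are illustrative):

      openstack security group rule create --ingress --protocol tcp \
          --dst-port 80 --remote-ip 10.0.0.0/24 members-sg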

  • After setting the default pool for a listener, no traffic is received by the LB VIP.

    Updating the default pool of a listener, even if allowed by Neutron-LBaaS, is an action that is not implemented in the NSX-T driver. As a result, even if the pool on the NSX-T virtual server is correctly updated, the NSX-T LBS (load balancer service) will not be updated with the virtual server (VS) id.

    Therefore, if the VS is not already associated with an LBS, the association will not be created.

    Workaround:

    1) Remove the pool from the listener, or delete it.

    2) Set the listener on the pool, or create a new pool setting the listener id.
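
    For example, with the Neutron LBaaS v2 client (names are illustrative):

      neutron lbaas-pool-delete pool1
      neutron lbaas-pool-create --name pool1 --lb-algorithm ROUND_ROBIN \
          --listener listener1 --protocol HTTP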

  • Unable to update device names when using neutron l2-gateway-update.

    The operation
    neutron l2-gateway-update <l2_gw_id> --device name=<device_name>,interface_names=<interface_name>
    will always fail if <device_name> is not already defined. This operation is conceived for updating interfaces on existing devices.

    If there is a need to use a different gateway device, it is necessary to delete and recreate the layer 2 gateway in Neutron, as the Neutron l2gw capabilities do not offer a way to update gateway devices.
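
    For example (the gateway name and device/interface names are illustrative):

      neutron l2-gateway-delete <l2_gw_id>
      neutron l2-gateway-create --device name=<new_device_name>,interface_names=<interface_name> <gateway_name>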

  • OpenStack load balancer goes into error state after adding a listener, when a considerable number of listeners is already configured on the same load balancer.

    This happens because the maximum number of virtual servers allowed on the NSX-T load balancer service is exceeded. The maximum number allowed depends on the NSX-T version and the size of the load balancer service. For details about allowed maximums, refer to the configuration maximums for NSX-T.

    The Octavia service will only report the fact that the load balancer went into error state. No information about the backend failure can be retrieved via Octavia. The information will only be available in Neutron logs.

    There is no workaround for this issue.

    In Octavia, once a load balancer goes into ERROR state it becomes immutable and no further operation is allowed on it.

    Existing listeners will however keep working.

  • When creating an OpenStack load balancer on an "internal" subnet, the relevant logical switch will have a logical port that is operationally down.

    OpenStack load balancers (both Neutron LBaaS v2 and Octavia) have a "network driver" which always creates a Neutron logical port for a load balancer.

    However, in NSX-T load balancer services are implemented on Tier-1 routers, and therefore have no logical port on the VIP subnet specified from Neutron LBaaS v2 or Octavia.

    There will therefore be an unused logical port, which will be removed when the load balancer is deleted.

    This logical port in operational status "down" is harmless, and can simply be ignored.

  • In Octavia, updating an L4 load balancer with cookie-based session persistence sends the load balancer into ERROR status.

    After updating the session persistence profile on a pool from Source IP to Cookie-based, the load balancer goes into ERROR status and becomes immutable.

    Unfortunately the Octavia service does not validate whether the appropriate session persistence type is being applied to the pool.

    Therefore, when NSX-T is updated with the wrong information (a request for HTTP cookie-based session persistence on a TCP virtual server), NSX-T returns an error that sends the Octavia load balancer into ERROR status.

    Unfortunately, Octavia is not able to display details about the driver error.

    There is no workaround for this issue. Once the load balancer goes into ERROR status it becomes immutable and cannot be recovered.
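
    The ERROR state can be avoided by applying only Source IP session persistence to pools behind L4 (TCP) listeners, for example (pool name is illustrative):

      openstack loadbalancer pool set --session-persistence type=SOURCE_IP pool1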
