VMware NSX-T Data Center Plugin for OpenStack Neutron | April 2020

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

  • Release Compatibility
  • What's New in VMware NSX-T Data Center Plugin for OpenStack Neutron
  • Known Limitations
  • Known Issues

Release Compatibility

  • The Neutron Plugin for Management API (Imperative API) is:
    • Compatible with OpenStack Stein and OpenStack Rocky 
    • Compatible with VIO 5.1.0.4, VIO 6.0.0.1 (see Known Limitations)
  • The Neutron Plugin for Policy API (Declarative API) is:
    • Compatible with OpenStack Stein (other vendor OpenStack versions might be added subsequently).
    • Compatible with VIO 6.0.0.1 (see Known Limitations)
  • The interoperability with VIO 6.0.0.1 has known issues, stated in the Known Limitations section.
  • The new features exposed by the NSX-T 3.0 OpenStack Neutron Plugin are not available with VIO 6.0.0.1 and VIO 5.1.0.4, which are in backward-compatibility mode.

See the VMware NSX OpenStack Plugin Installation & Configuration Guide and NSX-T Data Center Installation Guide for more details on compatibility and system requirements.

What's New in VMware NSX-T Data Center Plugin for OpenStack Neutron

  • Support of IPv6 Load Balancer in the Neutron Plugin for Policy API.
    This allows you to enable IPv6 in the OpenStack integration when using Octavia or LBaaSv2 (see the example after this list).
     
  • Support of IPv6 IP block in the Neutron Plugin for Policy API.
     
  • Support of VPNaaS in the Neutron Plugin for Policy API.
    This allows you to consume NSX-T VPN services on the Tier-1 Gateway from OpenStack (see the example after this list).
     
  • Support of VRF-lite usage as an External network in the Neutron Plugin for Policy API.
    In NSX-T 3.0, a Tier-0 can have multiple VRF-lite instances. The OpenStack model that maps a Tier-0 to an External Network has been extended to the Tier-0 VRF-lite in order to ease multi-tenancy and improve resource allocation.
     
  • Ability to advertise all static routes on an OpenStack Neutron router (no NAT) in the Neutron Plugin for Policy API.
    This offers a setting to advertise all the static routes of no-NAT OpenStack Neutron routers in addition to connected routes (see the example after this list).
     
  • Inclusion of Project information in the firewall rule syslog in the Neutron Plugin for Policy API.
    NSX-T allows a rule tag to be included in the firewall dataplane syslogs (allow/deny) in order to provide tenancy information. This enhancement adds the OpenStack Project to the firewall log, which provides a simpler way to triage the logs by project and manage multi-tenancy.
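
For illustration, an IPv6 load balancer can be created through the standard upstream Neutron and Octavia CLI. The following is a minimal sketch covering both the IPv6 subnet and the IPv6 load balancer items above; the network name, subnet name, member address, and IPv6 prefix are placeholder assumptions, and the exact workflow depends on the deployment:

    openstack subnet create --network net-v6 --ip-version 6 --subnet-range fd00:10::/64 subnet-v6
    openstack loadbalancer create --name lb-v6 --vip-subnet-id subnet-v6
    openstack loadbalancer listener create --name listener-v6 --protocol HTTP --protocol-port 80 lb-v6
    openstack loadbalancer pool create --name pool-v6 --listener listener-v6 --protocol HTTP --lb-algorithm ROUND_ROBIN
    openstack loadbalancer member create --subnet-id subnet-v6 --address fd00:10::10 --protocol-port 80 pool-v6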
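
The VPNaaS feature is consumed through the standard upstream Neutron VPNaaS CLI; the sketch below shows the usual object sequence, where all names, the router, subnets, peer addresses, and the pre-shared key are placeholder values:

    openstack vpn ike policy create ikepolicy1
    openstack vpn ipsec policy create ipsecpolicy1
    openstack vpn service create --router router1 vpn1
    openstack vpn endpoint group create --type subnet --value subnet1 local-eps
    openstack vpn endpoint group create --type cidr --value 192.168.100.0/24 peer-eps
    openstack vpn ipsec site connection create --vpnservice vpn1 --ikepolicy ikepolicy1 --ipsecpolicy ipsecpolicy1 --peer-address 203.0.113.10 --peer-id 203.0.113.10 --psk <pre-shared-key> --local-endpoint-group local-eps --peer-endpoint-group peer-eps conn1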
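
For the static-route advertisement feature, the routes themselves are added with the standard Neutron router CLI, as in the sketch below for a no-NAT router (network names and addresses are placeholders); whether these static routes are advertised in addition to connected routes is then controlled by the plugin setting described above:

    openstack router set --external-gateway ext-net --disable-snat router1
    openstack router set --route destination=172.16.0.0/24,gateway=10.0.0.250 router1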


Known Limitations

  • VIO 6.0.0.1 has some known limitations when used with NSX-T 3.0. (Both of these issues stem from platform changes that are not handled by the OpenStack Neutron Plugin shipped with VIO 6.0.0.1.)
    • Trying to change the admin state of a Port will lead to inconsistent behavior. The underlying reason is the addition of admin state in Policy, which is not taken into account by the OpenStack Neutron Plugin in VIO 6.0.0.1.
    • Load Balancer deletion may sometimes fail in DevOps workflows with concurrent stack creation/deletion (using Terraform/Rally/Heat). An available workaround is to identify the load balancers causing the issue and delete them again.
  • VIO 6.0.0.1 and VIO 5.1.0.4 do not support NSX-T on VDS 7.0 and expect an NSX-T deployment with the N-VDS.
  • If a configured edge uplink profile "Transport VLAN" and a deployed VLAN network are set to the same VLAN ID, there can be disruptive side effects; this configuration should not be used. Any VLAN ID overlap between the "Transport VLAN" and a deployed VLAN network, not just VLAN 0, will cause the observed issues with MDProxy and DHCP.
  • Cannot add more than one subnet with DHCP enabled to a network. 
  • Cannot add two routers to the same network. 
  • A maximum of nine Security Groups can be associated per port. This limitation is due to the platform maximum of 15 tags per port, of which nine are available for Security Groups.
  • Metadata only supports ports 3000-9000.
  • IPv6 is only supported for the NSX-T OpenStack Neutron plugin for Policy (NSXP) and not for NSX-T OpenStack Neutron plugin for the Management Plane API (non Policy).
  • QoS currently supports "Shaping" and "DSCP Marking" (not "CoS Marking" nor "Minimum BW") for ESXi and KVM.
  • QoS is enforced for traffic leaving the hypervisor (not intra-hypervisor).
  • FWaaS is not enforced for E/W traffic among downlink interfaces on the same Neutron router.
  • FWaaS rules are not enforced on traffic from VIPs.
  • NSX-T Data Center does not support per-pool session persistence profiles. When Layer 7 rules enable a VIP to route traffic to multiple pools, persistence profiles settings on these pools will have no effect.

Known Issues

  • FWaaS rules are not enforced as expected. Some rules that work correctly with the NSX-v driver or the reference implementation driver seem to be ignored by NSX-T Edge Firewall.

    Firewall behavior for NSX-T differs from NSX-v and the reference implementation: FWaaS is enforced after NAT for ingress and before NAT for egress.

    Workaround: Define rules in a way that considers that egress rules are enforced before SNAT occurs, and ingress rules after DNAT has occurred.

    The following notes apply to both ALLOW and DENY rules; an example rule following this guidance is shown after the list.

    Ingress FWaaS rules

    • Source behind a NO-SNAT router: the source IP should be the internal server IP or the internal subnet CIDR.

    • Source behind an SNAT router: if the source server is associated with a floating IP, use the floating IP address; otherwise use the source router's gateway IP.

    • External source: use the actual source IP address or CIDR.

    • Destination: as the NSX-T Edge firewall is enforced after NAT, use either the internal server IP or the internal subnet CIDR.

    Egress FWaaS rules

    • Source IP: as the NSX-T Edge firewall is enforced before NAT in this case, use either the internal server IP or the internal subnet CIDR.

    • Destination IP:

      • For an external destination, use the actual destination IP or CIDR.

      • For a destination behind a NO-SNAT router, the destination IP should be the internal server IP or the internal subnet CIDR.

      • For a destination behind an SNAT router, the destination server must be exposed via a floating IP; that floating IP should be used as the destination IP for the FWaaS rule.
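
    As an illustration of the ingress guidance above, an FWaaS v2 rule allowing SSH to servers behind a NO-SNAT router would reference the internal subnet. The following sketch uses the upstream FWaaS v2 CLI; all names, addresses, and the router port ID are placeholders, and option names may vary slightly by release:

    openstack firewall group rule create --name allow-ssh-ingress --protocol tcp --destination-ip-address 10.10.1.0/24 --destination-port 22 --action allow
    openstack firewall group policy create --firewall-rule allow-ssh-ingress ingress-policy
    openstack firewall group create --ingress-firewall-policy ingress-policy --port <router_port_id> --name fwg1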

  • VM boots, correctly gets IP from the DHCP server, but cannot send/receive any traffic; the actual VM IP differs from the one reported by Neutron

    When DHCP relay is enabled, SpoofGuard might block outbound traffic from VMs.

    Workaround: Disable port security on every network.
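
    For example, port security can be disabled on an existing network (affecting ports created afterwards) or on an individual port with the standard CLI; the network name and port ID below are placeholders, and security groups must be removed from a port before its port security can be disabled:

    openstack network set --disable-port-security net1
    openstack port set --no-security-group --disable-port-security <port_id>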

  • Explicitly adding DFW rule for allowing traffic from the LB VIP has no effect.

    OpenStack configures NSX-T Virtual Servers with SNAT automap mode. This has been necessary to satisfy the use case where the LB connection happens from a client located behind the same Tier-1 router as the target members.

    In this mode, the source IP for the traffic coming from the load balancer is not the VIP itself, but an address on the internal transit network. This does not create any problem for the OpenStack integration.

    Workarounds:

    1. Whitelist the traffic in the security group.
    2. Find on NSX-T the IP on the transit network that the LB virtual server uses, and create a DFW rule that includes at least this address.
    3. Whitelist traffic from the internal subnet CIDR (see the example after these steps).
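
    For workaround 3, the rule can be added to the members' security group with the standard Neutron CLI; the security group name, subnet CIDR, and port below are placeholders:

    openstack security group rule create --ingress --protocol tcp --dst-port 80 --remote-ip 10.20.0.0/24 members-sg
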
  • After setting the default pool for a listener, no traffic is received by the LB VIP.

    Updating the default pool of a listener, even if allowed by Neutron-LBaaS, is an action that is not implemented in NSX-T. As a result, even though the pool on the NSX-T virtual server is correctly updated, the NSX-T load balancer service (LBS) is not updated with the virtual server (VS) ID.

    Therefore, if the VS is not already associated with an LBS, the association will not be created.

    Workaround:

    1. Remove the pool from the listener, or delete it.
    2. Set the listener on the pool, or create a new pool setting the listener id.
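
    With the Octavia CLI, step 2 corresponds to specifying the listener when the pool is created; the names below are placeholders:

    openstack loadbalancer pool create --name pool1 --listener listener1 --protocol HTTP --lb-algorithm ROUND_ROBIN
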
  • Unable to update device names when using neutron l2-gateway-update.

    The operation
    neutron l2-gateway-update <l2_gw_id> --device name=<device_name>,interface_names=<interface_name>
    will always fail if <device_name> is not already defined. This operation is designed for updating interfaces on existing devices.

    Workaround: If there is a need to use a different gateway device, it is necessary to delete and recreate the layer 2 gateway on neutron, as neutron l2gw capabilities do not offer a solution for updating gateway devices.
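
    A sketch of the delete-and-recreate sequence with the networking-l2gw CLI follows; the gateway, device, interface, network, and VLAN values are placeholders, and any layer 2 gateway connections must be recreated as well:

    neutron l2-gateway-delete <l2_gw_id>
    neutron l2-gateway-create --device name=<new_device_name>,interface_names=<interface_name> <l2_gw_name>
    neutron l2-gateway-connection-create <l2_gw_name> <network_name> --default-segmentation-id <vlan_id>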

  • An OpenStack Load Balancer goes into error state after adding a listener, when a considerable number of listeners is already configured on the same load balancer.

    This is happening because the maximum number of virtual servers allowed on the NSX-T load balancer service is being exceeded. The maximum number allowed depends on the NSX-T version and the size of the loadbalancer service. For details about allowed maximums, refer to the configuration maximums for NSX-T.

    The Octavia service will only report the fact that the load balancer went into error state. No information about the backend failure can be retrieved via Octavia. The information will only be available in Neutron logs.

    Workaround: There is no workaround for this issue.

    In Octavia, once a load balancer goes into ERROR state it becomes immutable and no further operation is allowed on it.

    Existing listeners will however keep working.

  • When creating an OpenStack loadbalancer on an "internal" subnet, the relevant logical switch will have a logical port operationally down.

    OpenStack load balancers (both Neutron LBaaS v2 and Octavia) have a "network driver" which always creates a Neutron logical port for a load balancer.

    However, in NSX-T, load balancer services are implemented on Tier-1 routers and therefore have no logical port on the VIP subnet specified from Neutron LBaaSv2 or Octavia.

    There will therefore be an unused logical port, which will be removed when the load balancer is deleted.

    Workaround: This logical port in operational status "down" is harmless, and can simply be ignored.

  • In Octavia, updating an L4 load balancer with cookie-based session persistence sends the load balancer into ERROR status.

    After updating the session persistence profile on a pool from Source IP to Cookie-based, the load balancer goes into ERROR status and becomes immutable.

    Unfortunately, the Octavia service does not validate whether the appropriate session persistence type is being applied to the pool.

    Therefore, when NSX-T is updated with invalid information (a request for HTTP cookie-based session persistence on a TCP virtual server), NSX-T returns an error that then sends the Octavia load balancer into ERROR status.

    Unfortunately, Octavia is not able to display details about the driver error.

    Workaround: There is no workaround for this issue. Once the load balancer goes into ERROR status it becomes immutable and cannot be recovered.
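
    For reference, cookie-based persistence is only valid for HTTP/HTTPS pools; with the Octavia CLI the persistence type is chosen when the pool is created, as in the sketch below (all names are placeholders):

    openstack loadbalancer pool create --name tcp-pool --listener tcp-listener --protocol TCP --lb-algorithm ROUND_ROBIN --session-persistence type=SOURCE_IP
    openstack loadbalancer pool create --name http-pool --listener http-listener --protocol HTTP --lb-algorithm ROUND_ROBIN --session-persistence type=HTTP_COOKIE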
