This section lists the steps needed to install NSX Advanced Load Balancer in an OpenStack cloud when NSX Advanced Load Balancer has no access to OpenStack as the orchestrator.

In No-Access mode, NSX Advanced Load Balancer has no access to OpenStack as an orchestrator. Adding, removing, or modifying properties of a Service Engine requires an administrator to manually perform the changes. Servers and networks cannot be auto-discovered by NSX Advanced Load Balancer; they must be manually configured.

To install NSX Advanced Load Balancer in a No-Access OpenStack cloud:

Prerequisites

NSX Advanced Load Balancer should be instantiated in the No-Orchestration mode.

Procedure

  1. Create an OpenStack No-Access cloud.
    Figure 1. OpenStack No Access Mode
  2. Select Use DHCP for IP address management in the DHCP Settings tab.
  3. Download the SE qcow2 image; this will be pushed to Glance.
    Figure 2. Download SE QCOW2
  4. Log in to the OpenStack instance under the respective tenant ('admin' in this case) and click Create. Select QCOW2 as the format and provide the image file for the SE qcow2 that was downloaded.
    Figure 3. QCOW2 Image
  5. Upload the se.qcow2 image to Glance.
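    As an alternative to the Horizon UI, the image can be uploaded to Glance from the CLI. A minimal sketch, assuming the downloaded file is named se.qcow2; the image name avi-se is arbitrary:

    $> glance image-create --name avi-se --disk-format qcow2 --container-format bare --file se.qcow2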
    • The next step (creating a management network) is needed only if there is no existing network that can be used as the NSX Advanced Load Balancer management network.

    • This network will be used by SEs to communicate with the Controller. Therefore, either create a new network or use an existing network and ensure that VMs created on that network can reach the Controller.

  6. Specify the network name in the Network tab and provide an appropriate subnet for the network in the Subnet tab. Select the Enable DHCP check box in the Subnet Details tab and create the network.
    • This step is required only if a new external network needs to be created. Create the network that will serve as the outbound network and provide floating IP access. For instance, you can set the Network Name to provider1 in the Network tab and provide the other details in the respective tabs.

    • This step is required only if a new router needs to be created for external connectivity. You can create a router by providing the router name, admin state, and external network details.
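
    The management network, external network connection, and router can also be created from the CLI. A minimal sketch, assuming the arbitrary names avimgmt, avimgmt-subnet, and avimgmt-router and the CIDR 172.16.0.0/24 (DHCP is enabled on new subnets by default); provider1 is the external network described above:

    $> neutron net-create avimgmt
    $> neutron subnet-create avimgmt 172.16.0.0/24 --name avimgmt-subnet
    $> neutron router-create avimgmt-router
    $> neutron router-gateway-set avimgmt-router provider1
    $> neutron router-interface-add avimgmt-router avimgmt-subnet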

  7. Additionally, you can deploy a web server in the avimgmt network for testing. This can be an instance running any OS; the resulting network topology would look something like this:
    Figure 4. Network Topology
  8. Create a security group as shown below and associate it with the Service Engine to ensure that ICMP, SSH, and HTTP traffic is allowed.
    Figure 5. Security Group associated with SE
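    A minimal CLI sketch of such a security group, assuming the arbitrary group name avi-se-sg (ingress is the default direction for new rules):

    $> neutron security-group-create avi-se-sg
    $> neutron security-group-rule-create --protocol icmp avi-se-sg
    $> neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 avi-se-sg
    $> neutron security-group-rule-create --protocol tcp --port-range-min 80 --port-range-max 80 avi-se-sg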
  9. Create an NSX Advanced Load Balancer SE Instance. SEs can be created using heat-templates as well. For more information on this, see Creating Service Engine using Heat-Templates in No-Access OpenStack Cloud.
    Figure 6. launch-instance
  10. From the Source tab, select the appropriate qcow2 image for the SE that is to be instantiated.
  11. Select the respective flavor for the SE from the Flavor tab. In this case it would be m1.small.
  12. Select the avimgmt network from the Networks tab for instantiating the SE.
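    The selections in steps 10 through 12 can also be made in a single CLI call. A sketch, assuming the image and security-group names from the earlier sketches and the arbitrary instance name Avi-SE:

    $> nova boot --flavor m1.small --image avi-se --nic net-id=<avimgmt network ID> --security-groups avi-se-sg Avi-SE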
  13. The SE is spawned, as shown below:
    Figure 7. Spawned SE
  14. Associate a floating IP with the instance. This step is required only if the SEs are not directly reachable.
  15. Attach another interface to the SE. This would be the data vNIC.
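    Steps 14 and 15 map to the following CLI calls. A sketch, assuming the instance name Avi-SE from the earlier sketch; the floating IP address and data network ID are placeholders:

    $> nova floating-ip-associate Avi-SE <floating IP address>
    $> nova interface-attach --net-id <data network ID> Avi-SE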
  16. The SE is now created with one management vNIC and one data vNIC, the latter associated with a floating IP.
    Figure 8. SE with vNIC and Floating IP
    • For the SE to connect to the Controller, copy the token for the SE from the NSX Advanced Load Balancer UI (outlined in Installing NSX Advanced Load Balancer for VMware vCenter) for the respective cloud and run the script at /opt/avi/scripts/init_system.py on the SE, which prompts for the Controller IP and the token (the token expires in 60 minutes and is valid for a single SE). You need root privileges to run this script.

    • root@Avi-Service-Engine:/opt/avi/scripts# ./init_system.py -h
      usage: init_system.py [-h] -c CONTROLLER [-d] [-i MGMT_IP] [-m MGMT_MASK]
                            [-g GATEWAY] [-t TOKEN] [-r]

      optional arguments:
        -h, --help            show this help message and exit
        -c CONTROLLER, --controller CONTROLLER
                              Controller IP address.
        -d, --dhcp            DHCP
        -i MGMT_IP, --mgmt-ip MGMT_IP
                              IP address for Management Interface (eg.
                              192.168.10.10)
        -m MGMT_MASK, --mgmt-mask MGMT_MASK
                              Subnet mask for Management interface (eg. 24 or
                              255.255.255.0)
        -g GATEWAY, --gateway GATEWAY
                              Default gateway
        -t TOKEN, --token TOKEN
                              Auth token generated in the Controller for this SE
        -r, --restart         Restart SE for changes to take effect

      root@Avi-Service-Engine:/opt/avi/scripts# ./init_system.py -c 172.16.0.10 -d -i 172.16.0.7 -m 255.255.255.0 -g 172.16.0.1 -t c708a2cd-69e2-4057-923d-a09de94914f6 -r

      Reboot the SE for it to connect to the Controller.

  17. Wait for the NSX Advanced Load Balancer SEs to show up in the UI's Infrastructure > Service Engine list under the respective cloud.
  18. Edit each SE and enable DHCP for each data network.
  19. Create a virtual service and choose an IP address from the data network.

    Since this is a No-Access cloud, you cannot configure a 'floating VIP' in the virtual service configuration; the Controller cannot communicate with OpenStack Nova to assign an allocated floating IP to the virtual IP address. Instead, create the binding association manually through the CLI by creating a Neutron port for the VIP, as shown below.

    If you need a floating IP for the VIP address, create a port in the network where the VIP address lies.

    $> neutron port-create --fixed-ip subnet_id=<subnet ID of the network in which the VIP is placed>,ip_address=<VIP IP> --name <any name> <network ID in which the VIP is placed>
    

    An example for the above syntax is as follows:

    $> neutron port-create --fixed-ip subnet_id=55daee6b-32b7-4f9c-945e-bcd2acb7272f,ip_address=172.16.0.231 --name test200vip f14eb427-4087-4dce-8477-479e10804ba1
    

    Create a floating IP and associate it with that VIP address.
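    If a floating IP has not been allocated yet, you can create one from the external network first. A sketch, assuming the external network is named provider1; the command prints the ID of the new floating IP, which is used in the association below:

    $> neutron floatingip-create provider1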

    $> neutron floatingip-associate bf7c870e-6608-4512-b73d-faab5b18af04 ff67ae44-9874-43e6-a194-f336b9b1d7b5
    
  20. Create a pool of servers to be associated with the virtual service created above. In this case, this would be the web server created using the Horizon UI in step 7.

    You cannot use the select-servers-by-network feature, as NSX Advanced Load Balancer has no access to the infrastructure manager. Therefore, specify the server IP addresses manually.

  21. The virtual service should be up and running.
  22. Check the respective Service Engine to verify that the VIP is associated with it.
    • The allowed-address-pairs Neutron extension allows traffic with specific CIDRs to egress from a port. NSX Advanced Load Balancer uses this extension to place VIPs on SE data ports, thereby allowing VIP traffic to egress these data ports.

    • Add allowed-address-pairs on the SE ports so that security groups do not drop the packets. For the ML2/OVS plugin, you can add allowed-address-pairs with 0.0.0.0/0 once for each of the SE ports, or with the specific VIP IP address.

      neutron port-update da0e1e9a-312d-41c2-b15f-f10ac344ef03 --allowed-address-pairs type=dict list=true ip_address=192.168.1.222/32
    • If this option is set to True, the allowed-address-pairs extension will be used. If the underlying network plugin does not support this feature, VIP traffic will not work unless there are other means to achieve the same effect. The option can be turned off if the underlying network supports disabling security/firewall/spoof-filter rules on ports.

    • Where port security is available, you can disable port security on the SE's data vNIC Neutron port, as shown below. This is an alternative to the allowed-address-pairs approach above.
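      A sketch of this alternative; the port ID is a placeholder, and Neutron requires the security groups to be cleared from the port before port security can be disabled:

      neutron port-update <SE data vNIC port ID> --no-security-groups
      neutron port-update <SE data vNIC port ID> --port-security-enabled=False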

  23. Ensure that you can SSH into one of the instances (Service Engines).