This section explains how to configure Avi Load Balancer in OpenStack No Access mode (with DPDK on the SE), using SR-IOV and VLAN from OpenStack for better performance.
The SR-IOV specification defines a standardized mechanism to virtualize PCIe devices. This mechanism can virtualize a single PCIe Ethernet controller so that it appears as multiple PCIe devices. Each device can be assigned directly to an instance, bypassing the hypervisor and virtual switch layer. As a result, you can achieve low latency and near line-rate speed.
PF (Physical Function): The physical Ethernet controller that supports SR-IOV.
VF (Virtual Function): The virtual PCIe device created from a physical Ethernet controller.
This section outlines the steps for the ixgbe-vf driver, which supports the following NICs:
82599
X520
X540
X550
X552
The following are the limitations in OpenStack:
SR-IOV ports cannot be created through Horizon. SR-IOV is not integrated into the OpenStack Dashboard (Horizon), so you must use the CLI or API to configure SR-IOV interfaces.
Attaching SR-IOV ports to existing servers is not currently supported. In this deployment, the Avi Load Balancer cloud type is OpenStack No Access.
Enabling SR-IOV
The steps, and the OpenStack node type on which each is performed, are as follows:
Create Virtual Functions (Compute)
Allowlist PCI devices in nova-compute (Compute)
Configure neutron-server (Controller)
Configure nova-scheduler (Controller)
Enable neutron sriov-agent (Compute)
Creating Virtual Functions (Compute)
The following are the steps to create virtual functions:
Ensure that SR-IOV and VT-d are enabled in BIOS.
Enable IOMMU in Linux by adding intel_iommu=on to the kernel parameters, for instance, using GRUB.
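A minimal sketch of the GRUB change, assuming an Ubuntu compute node where /etc/default/grub is used (on Red Hat, regenerate the configuration with grub2-mkconfig -o /boot/grub2/grub.cfg instead of update-grub):
# In /etc/default/grub, append intel_iommu=on to the existing kernel command-line options
GRUB_CMDLINE_LINUX_DEFAULT="... intel_iommu=on"
# Regenerate the GRUB configuration, then reboot the node for the change to take effect
update-grub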
On each compute node, create the VFs through the PCI SYS interface:
echo '8' > /sys/class/net/eth3/device/sriov_numvfs
A network interface can be used both for PCI passthrough, using the PF, and for SR-IOV, using the VFs. If the PF is used, the VF count stored in the sriov_numvfs file is lost. If the PF is attached again to the operating system, the number of VFs assigned to this interface will be zero. To keep the number of VFs always assigned to this interface, modify the interfaces configuration file by adding an ifup script command. To check the maximum number of VFs a PF can support:
cat /sys/class/net/eth3/device/sriov_totalvfs
63
In Ubuntu, modify the /etc/network/interfaces file as follows:
auto eth3
iface eth3 inet dhcp
pre-up echo '4' > /sys/class/net/eth3/device/sriov_numvfs
In Red Hat, modify the /sbin/ifup-local file as follows:
#!/bin/sh
if [[ "$1" == "eth3" ]]
then
    echo '4' > /sys/class/net/eth3/device/sriov_numvfs
fi
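For the script to be invoked by the Red Hat network scripts, /sbin/ifup-local must be executable (a general requirement of ifup-local, not stated above):
chmod +x /sbin/ifup-local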
Verify that the VFs are created:
root@sriov:~# lspci | grep "Ethernet"
03:10.1 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)
03:10.3 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)
03:10.5 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)
03:10.7 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)
03:11.1 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)
03:11.3 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)
03:11.5 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)
03:11.7 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)
Persist the created VFs across reboots:
echo "echo '7' > /sys/class/net/eth3/device/sriov_numvfs" >> /etc/rc.local
Allowlisting PCI Devices in nova-compute (Compute)
Make the following change in the /etc/nova/nova.conf file:
pci_passthrough_whitelist = { "devname": "ens1f1", "physical_network": "provider"}
This informs the Compute service that all VFs belonging to ens1f1 are allowed to be passed through to instances, and that they belong to the physical network provider.
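After editing /etc/nova/nova.conf, restart the nova-compute service so that the allowlist takes effect. The service name varies by distribution (for example, nova-compute on Ubuntu or openstack-nova-compute on Red Hat); the following is a sketch for a systemd-based node:
systemctl restart nova-compute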
Configuring neutron-server (Controller)
Add sriovnicswitch as a mechanism driver by making the following change in the /etc/neutron/plugins/ml2/ml2_conf.ini file:
mechanism_drivers = openvswitch,sriovnicswitch
Then restart the neutron-server service.
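A sketch of the restart, assuming a systemd-based controller node (the exact service name can vary by distribution):
systemctl restart neutron-server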
Configuring nova-scheduler (Controller)
Make the following changes under the [DEFAULT] section in the /etc/nova/nova.conf file:
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter, PciPassthroughFilter
scheduler_available_filters = nova.scheduler.filters.all_filters
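Restart the nova-scheduler service so that the PciPassthroughFilter is picked up. As with the other services, the name is distribution-specific (for example, nova-scheduler on Ubuntu or openstack-nova-scheduler on Red Hat); a sketch:
systemctl restart nova-scheduler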
Enabling Neutron sriov-agent (Compute)
Make the following changes in the /etc/neutron/plugins/ml2/sriov_agent.ini file:
[sriov_nic]
physical_device_mappings = provider:ens1f1
exclude_devices =

[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver
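Then enable and start the SR-IOV NIC agent on the compute node. The service name is distribution-specific (for example, neutron-sriov-agent on Ubuntu or neutron-sriov-nic-agent on Red Hat); the following is a sketch:
systemctl enable --now neutron-sriov-nic-agent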
Launching Instances with SR-IOV Ports
Get the network ID used to create the SR-IOV ports (in this case, a network named provider1 has already been created and is used for the SE's management and data traffic):
net_id=$(neutron net-show provider1 | grep "\ id\ " | awk '{ print $4 }')
Create the SR-IOV ports. vnic_type=direct is used here.
port_id=$(neutron port-create $net_id --name sriov_port_1 --binding:vnic_type direct | grep "\ id\ " | awk '{ print $4 }')
port_id=$(neutron port-create $net_id --name sriov_port_2 --binding:vnic_type direct | grep "\ id\ " | awk '{ print $4 }')
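If the deprecated neutron client is not available, the equivalent steps with the unified openstack client are sketched below (verify the network and port names in your environment):
net_id=$(openstack network show provider1 -f value -c id)
openstack port create --network $net_id --vnic-type direct sriov_port_1
openstack port create --network $net_id --vnic-type direct sriov_port_2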
A port-show of the ports indicates that they have been created as SR-IOV ports:
root@sriov:~# neutron port-show 3d6ef025-77ae-4136-a8b6-c48955fe5d2f
+-------------------------+---------------------------------------------------------------------------------+
| Field                   | Value                                                                           |
+-------------------------+---------------------------------------------------------------------------------+
| admin_state_up          | True |
| allowed_address_pairs   |  |
| binding:host_id         | sriov |
| binding:profile         | {"pci_slot": "0000:03:10.7", "physical_network": "provider", "pci_vendor_info": "8086:1515"} |
| binding:vif_details     | {"port_filter": false, "vlan": "0"} |
| binding:vif_type        | hw_veb |
| binding:vnic_type       | direct |
| created_at              | 2019-04-05T04:32:15Z |
| description             |  |
| device_id               | 6cf94dd4-c6f3-4d49-84bd-639f40ed1b5e |
| device_owner            | compute:nova |
| extra_dhcp_opts         |  |
| fixed_ips               | {"subnet_id": "03e68028-ecec-4fb4-9c03-546ed14bf3c4", "ip_address": "10.140.81.217"} |
| id                      | 3d6ef025-77ae-4136-a8b6-c48955fe5d2f |
| mac_address             | fa:16:3e:af:d1:f6 |
| name                    | sriov_port_2 |
| network_id              | c4260d3e-f275-4097-96be-03751495f291 |
| port_security_enabled   | True |
| project_id              | dbe81cf9baa8492288456cbb295a529e |
| revision_number         | 18 |
| security_groups         | 643c4bb8-7236-47ec-a91d-9038be4774cb |
| status                  | ACTIVE |
| tags                    |  |
| tenant_id               | dbe81cf9baa8492288456cbb295a529e |
| updated_at              | 2019-04-05T04:39:02Z |
+-------------------------+---------------------------------------------------------------------------------+
root@sriov:~# neutron port-show a9fe3f1a-2e1c-4c3f-9fa0-1c03ab29d2c0
+-------------------------+---------------------------------------------------------------------------------+
| Field                   | Value                                                                           |
+-------------------------+---------------------------------------------------------------------------------+
| admin_state_up          | True |
| allowed_address_pairs   |  |
| binding:host_id         | sriov |
| binding:profile         | {"pci_slot": "0000:03:11.1", "physical_network": "provider", "pci_vendor_info": "8086:1515"} |
| binding:vif_details     | {"port_filter": false, "vlan": "0"} |
| binding:vif_type        | hw_veb |
| binding:vnic_type       | direct |
| created_at              | 2019-04-05T04:32:06Z |
| description             |  |
| device_id               | 6cf94dd4-c6f3-4d49-84bd-639f40ed1b5e |
| device_owner            | compute:nova |
| extra_dhcp_opts         |  |
| fixed_ips               | {"subnet_id": "03e68028-ecec-4fb4-9c03-546ed14bf3c4", "ip_address": "10.140.81.213"} |
| id                      | a9fe3f1a-2e1c-4c3f-9fa0-1c03ab29d2c0 |
| mac_address             | fa:16:3e:db:61:0a |
| name                    | sriov_port_1 |
| network_id              | c4260d3e-f275-4097-96be-03751495f291 |
| port_security_enabled   | True |
| project_id              | dbe81cf9baa8492288456cbb295a529e |
| revision_number         | 19 |
| security_groups         | 643c4bb8-7236-47ec-a91d-9038be4774cb |
| status                  | ACTIVE |
| tags                    |  |
| tenant_id               | dbe81cf9baa8492288456cbb295a529e |
| updated_at              | 2019-04-05T04:39:02Z |
+-------------------------+---------------------------------------------------------------------------------+
An OpenStack Controller is brought up in No Access mode and the se.qcow2 image is pushed to Glance (as illustrated in Step 3 of Installing Avi Load Balancer into No Access OpenStack Cloud).
Create an instance with two NICs (data and management) using the se.qcow2 image (as illustrated in Step 4 of Installing Avi Load Balancer into No Access OpenStack Cloud) as follows:
openstack server create --flavor m1.se --image AVi-se-18.2.2-9224 --port a9fe3f1a-2e1c-4c3f-9fa0-1c03ab29d2c0 --port 3d6ef025-77ae-4136-a8b6-c48955fe5d2f test-sriov
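Before configuring the SE, you can confirm that the instance booted with both SR-IOV ports attached (a suggested check, not part of the original procedure):
openstack server show test-sriov -c status -c addresses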
Run the /opt/avi/init_system.py script and make sure that the SE can connect to the Controller (as illustrated in Step 15 of Installing Avi Load Balancer into No Access OpenStack Cloud).
Log in to the SE and ensure that the SE data vNICs have come up as SR-IOV VFs and not as VIRTIO interfaces.
root@10-140-81-213:~# lspci | grep "Ethernet"
00:04.0 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)
00:05.0 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)
For the SE data vNIC ports to run DPDK on top of the SR-IOV ports, log in to the Controller shell (see the Access the Controller CLI section in the VMware Avi Load Balancer Administration Guide) and, under the SE group properties (this affects all SEs in that SE group), bring the NICs up in DPDK mode by making the following changes:
[admin:avi-ctrlr]: serviceenginegroup> se_use_dpdk
[admin:avi-ctrlr]: serviceenginegroup> se_dpdk_pmd
Note: The SEs need to be rebooted for the change to take effect.
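For context, a fuller sketch of the Controller CLI session, assuming the SE group is named Default-Group (substitute your own SE group name) and the standard configure/save workflow:
[admin:avi-ctrlr]: > configure serviceenginegroup Default-Group
[admin:avi-ctrlr]: serviceenginegroup> se_use_dpdk
[admin:avi-ctrlr]: serviceenginegroup> se_dpdk_pmd
[admin:avi-ctrlr]: serviceenginegroup> save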