Starting with NSX Advanced Load Balancer version 22.1.1, installing Linux KVM with DPDK support for virtio-net data NICs is supported, using macvtap interfaces (in bridge mode) created from the physical interfaces available on the host. These macvtap interfaces are used as the interfaces within the guest virtual machine.
For more information on the macvtap interface, see MacVTap.
Prerequisites
Hardware
Check that your CPU supports hardware virtualization, which is required to run KVM. Intel and AMD have both developed extensions for their processors, named Intel VT-x and AMD-V respectively. To check whether your processor supports one of these, review the output of the following command:
egrep -c '(vmx|svm)' /proc/cpuinfo
If the value is zero, it means that your CPU does not support hardware virtualization.
If the value is one or more, it means that your CPU supports hardware virtualization. However, you need to ensure that virtualization is enabled in the BIOS.
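As an additional check, you can confirm that the virtualization extensions are enabled. The commands below are a minimal sketch; kvm-ok is provided by the cpu-checker package on Ubuntu and might not be installed by default:
lscpu | grep Virtualization
sudo kvm-ok
kvm-ok reports that KVM acceleration can be used only when virtualization is enabled in the BIOS.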
Software Installation
Ubuntu Distro
For more information on KVM installation and the requisite packages, see KVM/Installation.
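As an illustrative sketch for an Ubuntu host (package names vary by release; on older releases the libvirt daemon and client tools are packaged as libvirt-bin):
sudo apt-get update
sudo apt-get install qemu-kvm libvirt-daemon-system libvirt-clients virtinst genisoimage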
RHEL/CentOS Distro
For more information on installing KVM and the requisite packages on an existing Red Hat Enterprise Linux system, see KVM with a new Red Hat Enterprise Linux installation.
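Similarly, an illustrative sketch for a RHEL/CentOS host (package names can vary by release):
sudo yum install qemu-kvm libvirt virt-install genisoimage
sudo systemctl enable --now libvirtd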
Software [Tested and Qualified]
Ubuntu Distro
DISTRIB_ID: Ubuntu
DISTRIB_RELEASE: 18.04
DISTRIB_CODENAME: Bionic
DISTRIB_DESCRIPTION: Ubuntu 18.04 LTS
OS kernel version: 4.15.0-20-generic
libvirt-bin: libvirtd (libvirt) 4.0.0
qemu-kvm: QEMU emulator version 2.11.1
genisoimage: genisoimage 1.1.11 (Linux)
RHEL/CentOS Distro
CentOS Linux release: 7.9 (Maipo)
OS kernel version: 5.4.17-2136.305.5.4.el7uek.x86_64
libvirt-bin: libvirtd (libvirt) 4.5.0
qemu-kvm: QEMU emulator version 2.0.0
genisoimage: genisoimage 1.1.11 (Linux)
Installing Service Engine and Controller Virtual Machine
Prerequisites: Ensure that you copy the NSX Advanced Load Balancer se.qcow2 and controller.qcow2 images to the /var/lib/libvirt/images/ directory of the host machine. The se.qcow2 image can be fetched from the Controller UI once it is up, as explained in the Deploying Service Engine section below.
Script Run: Run the install script kvm-dpdk-virt-install.sh on the host machine to install the Controller and SE virtual machines.
Note: You need to create the Controller virtual machine before creating the SE virtual machine.
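For example, assuming the qcow2 images and the install script are in the current working directory on the host:
sudo cp controller.qcow2 se.qcow2 /var/lib/libvirt/images/
sudo ./kvm-dpdk-virt-install.sh
The script then lets you choose the Create AVI Controller VM or Create AVI SE VM option, as described below.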
Deploying the Controller
Run the above-mentioned install script on the host machine, choosing the Create AVI Controller VM option, to deploy the Controller virtual machine.
These are some recommendations for the variables used by the install script while creating the Controller virtual machine:
NSX Advanced Load Balancer Controller management IP
NSX Advanced Load Balancer Controller Management-IP Mask
NSX Advanced Load Balancer Controller Default Gateway
The values for these variables are to be derived from the host management interface configuration.
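For example, the required IP, mask, and default gateway can be read from the host management interface configuration (eno1 in the sample output later in this article):
ip addr show eno1
ip route show default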
Controller Initial Setup
Navigate to the Controller IP address and perform the initial setup:
Configure an administrator password
Set DNS information
Select No Orchestrator
Deploying Service Engine
Once the Controller virtual machine is up, follow these steps to upload the SE image and install the Service Engine:
On the Controller UI, click the download icon on the Default Cloud row and select Qcow2.
Upload the se.qcow2 to the /var/lib/libvirt/images/ directory of the host machine.
Run the above-mentioned install script on the host machine, choosing the Create AVI SE VM option, to deploy the SE virtual machine.
These are some recommendations for the variables used by the install script while creating the Service Engine virtual machine:
NSX Advanced Load Balancer SE Management IP
NSX Advanced Load Balancer SE Management-IP Mask
NSX Advanced Load Balancer SE Default Gateway
The values for these variables are to be derived from the host management interface configuration.
For the SE virtual machine creation, a configuration of four queues per macvtap interface and four or more virtual CPUs is recommended for best throughput performance.
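For reference, a macvtap data NIC with four queues in a libvirt domain definition looks roughly like the following XML fragment (an illustrative sketch only; the install script generates the actual virtual machine definition, and ens1f0 is the sample physical NIC used later in this article):
<interface type='direct'>
  <source dev='ens1f0' mode='bridge'/>
  <model type='virtio'/>
  <driver name='vhost' queues='4'/>
</interface>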
The bond interface sequence can be specified as follows:
Bond-ifs sequence 1,2 3,4: interfaces 1 and 2 form one bond, and interfaces 3 and 4 form another bond (note the space between 1,2 and 3,4).
Bond-ifs sequence 1,2,3,4: interfaces 1, 2, 3, and 4 form a single bond.
You can verify that the SE has connected to the Controller from the Controller UI (this might take a few minutes).
Sample SE Virtual Machine Installation Output
On the Host
Host management interface (from which management macvtap interface is created).
eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 0c:c4:7a:b4:15:b4 brd ff:ff:ff:ff:ff:ff
    inet 10.217.144.19/22 brd 10.217.147.255 scope global eno1
Host physical interfaces (from which data macvtap interfaces are created).
ens1f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 0c:c4:7a:bb:96:ea brd ff:ff:ff:ff:ff:ff
    inet 100.64.50.56/24 brd 100.64.50.255 scope global dynamic ens1f0
ens1f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 0c:c4:7a:bb:96:eb brd ff:ff:ff:ff:ff:ff
    inet 100.64.67.55/24 brd 100.64.67.255 scope global dynamic ens1f1
Macvtap interfaces as seen on the host (post install script run).
macvtap0@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 500
    link/ether 52:54:00:f4:ed:9f brd ff:ff:ff:ff:ff:ff
macvtap1@ens1f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 500
    link/ether 52:54:00:4f:3a:2d brd ff:ff:ff:ff:ff:ff
macvtap2@ens1f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 500
    link/ether 52:54:00:68:38:ef brd ff:ff:ff:ff:ff:ff
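The macvtap mode (bridge) can be confirmed on the host with the detailed link output, for example:
ip -d link show macvtap0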
On the SE virtual machine post install script run
SE management interface
eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 52:54:00:f4:ed:9f brd ff:ff:ff:ff:ff:ff
    inet 10.217.144.251/22 brd 10.217.147.255 scope global eth0
SE data-nic interfaces (as seen in namespace post ip-address assignment)
root@Avi-Service-Engine:/home/admin# ip netns exec avi_ns1 bash
avi_eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:68:38:ef brd ff:ff:ff:ff:ff:ff
    inet 100.64.67.22/24 brd 100.64.67.255 scope global dynamic avi_eth2
avi_eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:4f:3a:2d brd ff:ff:ff:ff:ff:ff
    inet 100.64.50.41/24 brd 100.64.50.255 scope global dynamic avi_eth0
Post Host Reboot
The following steps must be executed after every host reboot:
Post reboot, all virtual machines will automatically be in the stopped state.
All the virtual machine names can be checked using the virsh list --all output.
Bring up all the virtual machines using the virsh start <VM-name> command.
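A minimal sketch, where Avi-Controller and Avi-SE are placeholder virtual machine names:
virsh list --all
virsh start Avi-Controller
virsh start Avi-SE
Alternatively, virsh autostart <VM-name> marks a virtual machine to be started automatically by libvirtd on host boot.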
Destroying the Controller and Service Engines
The virtual machines and their corresponding images can be cleared using the above-mentioned script.
Note: If the disk space of the host is getting exhausted, it might be due to the number of qcow2 images accumulated in /var/lib/libvirt/images/ as part of virtual machine creation. Clear up any unused images by deleting the respective virtual machines as described in this section.
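If a virtual machine and its image need to be removed manually rather than through the script, a typical libvirt cleanup sequence is the following sketch, where <VM-name> and <VM-image> are placeholders:
virsh destroy <VM-name>
virsh undefine <VM-name>
sudo rm /var/lib/libvirt/images/<VM-image>.qcow2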
Ensure that you manually clean up (force-delete) the stale Service Engine entries present in the Controller UI after destroying the SE virtual machines.