Single Root I/O Virtualization (SR-IOV) is a specification that allows a single Peripheral Component Interconnect Express (PCIe) physical device under a single root port to appear as multiple separate physical devices to the hypervisor or the guest operating system.
SR-IOV uses physical functions (PFs) and virtual functions (VFs) to manage global functions for SR-IOV devices. PFs are full PCIe functions that can configure and manage the SR-IOV functionality. VFs are lightweight PCIe functions that support data flow but have a restricted set of configuration resources.
The number of virtual functions provided to the hypervisor or the guest operating system depends on the device. SR-IOV-enabled PCIe devices require appropriate BIOS and hardware support, as well as SR-IOV support in the guest operating system driver or hypervisor instance.
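On an ESXi host, the SR-IOV NICs and the VFs they expose can be inspected with esxcli, and the VF count can be set through the NIC driver module. The following is a minimal sketch; the uplink name vmnic6, the ixgben module, and the VF count are placeholder assumptions, since the exact driver module and parameter syntax depend on the physical NIC vendor.

```shell
# List physical NICs that currently expose SR-IOV virtual functions
esxcli network sriovnic list

# Show the virtual functions exposed by one SR-IOV NIC
# (vmnic6 is a placeholder; substitute the uplink name on your host)
esxcli network sriovnic vf list -n vmnic6

# Enable 8 VFs per port on an Intel NIC driven by the ixgben module
# (module name and parameter format vary by vendor driver;
# the host must be rebooted for the change to take effect)
esxcli system module parameters set -m ixgben -p "max_vfs=8,8"
```

These are host configuration commands; run them in an ESXi shell or SSH session with the host in maintenance mode before changing module parameters.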
Prepare Hosts and VMs for SR-IOV
In vSphere, a VM can use an SR-IOV virtual function for networking. The VM and the physical adapter exchange data directly without using the VMkernel stack as an intermediary. Bypassing the VMkernel for networking reduces the latency and improves the CPU efficiency for high data transfer performance.
vSphere supports SR-IOV only in environments with a specific configuration. For the detailed support specification, see the SR-IOV Support page.
In the topology below, vSphere SR-IOV support relies on the interaction between the VFs and the PF of the physical NIC port for high performance. VM network adapters communicate directly with the VFs that SR-IOV provides to transfer data. However, the ability to configure the VFs depends on the active policies (such as VLAN IDs) of the vSphere Distributed Switch port group ports on which the VMs reside. The VM handles incoming and outgoing external traffic through its virtual ports, which reside on the host. The virtual ports are backed by physical NICs on the host, and those same physical NICs also handle the traffic for the port group to which they are assigned. VLAN ID tags are inserted by each SR-IOV virtual function. For more information on configuring SR-IOV, see Configure a Virtual Machine to Use SR-IOV.
SR-IOV can be used for data-intensive traffic, but it forgoes virtualization features such as vMotion and DRS. For this reason, VNFs that employ SR-IOV become static workloads bound to their hosts. A dedicated host aggregate can be configured for such workloads. The NSX-T Data Center fabric can be used to plumb interfaces into an N-VDS Standard switch and to provide VLAN and overlay connectivity.
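Such a host aggregate could be defined through the OpenStack CLI, for example as follows. The aggregate name, property key, host name, and flavor are illustrative assumptions, and this pattern relies on the Nova scheduler having the AggregateInstanceExtraSpecsFilter enabled.

```shell
# Create an aggregate for the SR-IOV-capable hosts (names are examples)
openstack aggregate create --property sriov=true sriov-hosts
openstack aggregate add host sriov-hosts compute-sriov-01

# Pin a flavor to that aggregate so SR-IOV VNFs land only on those hosts
openstack flavor create --vcpus 4 --ram 8192 --disk 40 vnf.sriov
openstack flavor set vnf.sriov \
    --property aggregate_instance_extra_specs:sriov=true
```

Instances booted with the pinned flavor are then scheduled only onto hosts in the SR-IOV aggregate, keeping these static workloads off hosts that serve mobile workloads.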
SR-IOV Configuration by Using VMware Integrated OpenStack
VMware Integrated OpenStack supports a proxy TVD plug-in that must be configured to use both vSphere Distributed Switch and N-VDS networking. A port on a vSphere Distributed Switch port group can be configured from the VMware Integrated OpenStack API as an External network. Similarly, VMXNET3 interfaces can be plumbed into the VMs.
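An SR-IOV passthrough interface is typically requested from the OpenStack API by creating a Neutron port with vnic_type direct and attaching it to the instance at boot. The following sketch assumes an existing provider network, flavor, and image; all names are placeholders.

```shell
# Create a port with vnic_type=direct so it is backed by an SR-IOV VF
# (provider-vlan-100 is a placeholder network name)
openstack port create --network provider-vlan-100 \
    --vnic-type direct sriov-port-0

# Boot the VNF instance with that port attached
# (flavor and image names are placeholders)
openstack server create --flavor vnf-flavor --image vnf-image \
    --nic port-id=sriov-port-0 vnf-instance-01
```

Ports created with a normal vnic_type are instead backed by VMXNET3 adapters, so the same API can drive both the SR-IOV and the paravirtualized interfaces of a VNF.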
vSphere Distributed Switch and N-VDS Standard can be deployed on the same host, with physical NICs dedicated to each switch. A VNF can have VIFs connected both to vSphere Distributed Switch port group ports for North-South connectivity and to N-VDS Standard switches for East-West connectivity.