
SR-IOV NIC Partitioning



Tags: #SR-IOV #networking

Verified

Tested on Dell PowerEdge R730 with Intel X710 10G using Red Hat Enterprise Linux 7 with QEMU-KVM

SR-IOV

Single-root input/output virtualization (SR-IOV) ^[SR-IOV Intel network spec] refers to the capability of splitting a single physical PCI resource into multiple virtual PCI resources. Devices like NVMe drives, network interfaces, and GPUs may offer this capability so the physical hardware can be shared according to the required use case.

A physical PCI device is referred to as a PF (Physical Function), and a virtual PCI device as a VF (Virtual Function).

An excellent overview of SR-IOV can be found in Scott Lowe’s blog post.

SR-IOV Network Interfaces

General

In networking, an SR-IOV-capable NIC can be split into multiple vNICs, which can be used on the bare metal host or as part of virtual guest instances.

Possible Use Cases

Note

Remember that SR-IOV VFs reside on a physical NIC, which may be a single point of failure if your network topology is not designed properly.

Thanks to SR-IOV's flexibility, many network topologies can be achieved with a minimal number of NICs, which requires less cabling and maintenance.

With a single 10Gb/40Gb/100Gb SR-IOV NIC, we could build a setup that offers both redundancy and performance while taking up minimal physical space.

Diagram: bare metal host with a single two-port NIC, each port split into multiple vNICs and connected to two switches

The diagram above shows a bare metal host containing a NIC with two ports connected to different switches.

Each port, represented as a NIC inside the operating system, is split into several vNICs (VFs), each of which is also represented as a NIC.

VFs can be leveraged to configure networking. For example, in a cloud environment, multiple networks representing different components of the cloud can reside on VFs instead of separate physical NICs. A QoS setting can be applied if some networks require more bandwidth than the rest.
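As a hedged sketch of such a QoS setting (the PF name p4p4 matches the examples later in this post; the VF index and rate are placeholders, and support depends on the NIC driver), a per-VF transmit rate limit can be applied from the PF:

# Cap the transmit rate of VF 0 on PF p4p4 at 1000 Mbps
ip link set dev p4p4 vf 0 max_tx_rate 1000
# Older iproute2 releases expose this as the legacy "rate" parameter
ip link set dev p4p4 vf 0 rate 1000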

Enabling SR-IOV

BIOS Configuration

Note

BIOS settings may differ depending on the BIOS and NIC vendors. Refer to the vendor's documentation.

On supported NICs, a BIOS option must be set. (Screenshot: Intel X710 NIC on a Dell PowerEdge R730.)

Operating System

Note

Operating system settings may differ based on the NIC vendor and the operating system distribution. Refer to the vendor's/distribution's documentation.

Once SR-IOV has been enabled in the BIOS, verify it from the operating system and create the VFs.
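One way to do this, assuming the PF interface p4p4 used later in this post (the VF count of 4 is an arbitrary example, and sysfs paths may vary by driver):

# Maximum number of VFs the PF supports (0 typically means SR-IOV is disabled)
cat /sys/class/net/p4p4/device/sriov_totalvfs
# Create 4 VFs on the PF
echo 4 > /sys/class/net/p4p4/device/sriov_numvfs
# Each VF appears as its own PCI device
lspci | grep -i "Virtual Function"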

Leveraging SR-IOV VFs

Once SR-IOV has been configured, we can use the VFs as NICs or pass them to VMs for increased performance.

VFs As NICs

Since VFs are represented as NICs, we can use native Linux tools such as ip, ifconfig, nmcli, network scripts, and others to configure the network.
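For instance, the PF itself can enumerate its VFs with ip (a minimal sketch, assuming the PF name p4p4 from the examples below):

# Each assigned VF is listed with its MAC address and VLAN
ip link show p4p4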

Make sure your SR-IOV PF device carries no network settings before using VFs:

ifconfig p4p4
# Output
p4p4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::3efd:feff:fe33:a8a6  prefixlen 64  scopeid 0x20<link>
        ether 3c:fd:fe:33:a8:a6  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Now you can configure a VF. An example network script configuration file:

cat /etc/sysconfig/network-scripts/ifcfg-p4p4_1
# Output
DEVICE=p4p4_1
ONBOOT=yes
HOTPLUG=no
NM_CONTROLLED=no
PEERDNS=no
BOOTPROTO=static
IPADDR=10.20.151.125
NETMASK=255.255.255.0

Start the NIC and verify that the settings were applied:

# Bring interface online
ifup p4p4_1
# View interface
ifconfig p4p4_1
# Output
p4p4_1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.20.151.125  netmask 255.255.255.0  broadcast 10.20.151.255
        inet6 fe80::c8d9:10ff:fe01:8488  prefixlen 64  scopeid 0x20<link>
        ether ca:d9:10:01:84:88  txqueuelen 1000  (Ethernet)
        RX packets 2430103  bytes 320710952 (305.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2369103  bytes 433952302 (413.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
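VF attributes can also be administered from the PF side. As a hedged example (the VF index and VLAN ID are placeholders; the MAC address is the one reported above), ip can pin a VF's MAC address or tag its traffic with a VLAN:

# Pin a fixed MAC address to VF 1 instead of the randomly generated one
ip link set dev p4p4 vf 1 mac ca:d9:10:01:84:88
# Tag all traffic of VF 1 with VLAN 151
ip link set dev p4p4 vf 1 vlan 151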

VFs For VMs

Note

PCI Passthrough and more in-depth topics are out of the scope of this blog post.

Note

Requires AMD-Vi or Intel VT-d to be enabled on the hypervisor host.

Note

Refer to your hypervisor's documentation regarding SR-IOV.

On supported hypervisors, SR-IOV allows VMs to access the PCI device directly, which increases performance compared to using the generic vNICs provided by the hypervisor.
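To confirm the prerequisite from the note above on an Intel host, one possible check (a sketch; AMD hosts use amd_iommu and AMD-Vi messages instead):

# IOMMU support must be enabled on the kernel command line
grep intel_iommu /proc/cmdline
# Kernel messages confirm the IOMMU (DMAR on Intel platforms) is active
dmesg | grep -i -e dmar -e iommu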

Refer to the Red Hat Enterprise Linux 7 guide to boot an instance with SR-IOV VFs using KVM.
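As a rough illustration of that flow (the PCI address below is a placeholder; find the actual VF address with lspci), a VF can be handed to a libvirt/KVM guest as a hostdev device:

<!-- vf.xml: attach the VF at the placeholder PCI address 0000:04:02.0 -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x04' slot='0x02' function='0x0'/>
  </source>
</hostdev>

The device can then be attached with virsh attach-device <domain> vf.xml, or embedded in the guest's domain XML.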
