QEMU

The Software-Enabled Flash™️ (SEF) Software Development Kit (SDK) includes QEMU storage devices that allow non-SEF-enabled software to be tested with an SEF Unit. The storage devices translate block APIs into SEF commands, storing the data in an SEF Unit. The storage devices included are:

- SEF-Backed Virtual Driver
- SEF-Backed ZNS Device
- SEF-Backed NVMe Device

To use them, an SEF Unit and SEF QoS Domain are supplied as parameters. The storage devices then present a block device to the QEMU guest based on the SEF QoS Domain’s configuration. See each sub-section for details on how to use them.

SEF-Backed Virtual Driver

The SEF patches for QEMU add support for a virtual driver called sef-aio. Instead of using a file as a backing store, it uses an SEF QoS Domain. The QoS Domain must be unconfigured or configured as an SEF Block FTL domain. An unconfigured domain will be auto-configured as an SEF Block FTL domain when it's mounted. The file option selects the unit and QoS Domain and has the following format: file=<unit>:<domain>. Below is an example of a QEMU command line with options for an SEF Virtual Device on SEF Unit 0 and QoS Domain 3.

$ sudo qemu -name sefAio -m 4G -smp 8 -enable-kvm \
    -drive file=system-debian.qcow2,cache=none,format=qcow2,if=virtio \
    -netdev user,id=vnet,hostfwd=tcp:127.0.0.1:2222-:22 \
    -device virtio-net-pci,netdev=vnet -nographic \
    -display none -cpu host \
    -drive driver=sef-aio,file=0:3,cache=none,if=virtio

To verify the virtual drive is present, run the command lsblk. In this case, it’s the device vdb.

$ sudo lsblk

NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
fd0       2:0    1    4K  0 disk
sr0      11:0    1 1024M  0 rom
vda     254:0    0  100G  0 disk
+-vda1  254:1    0   96G  0 part /
+-vda2  254:2    0    1K  0 part
+-vda5  254:5    0    4G  0 part [SWAP]
vdb     254:16   0  4.3G  0 disk

FIO can be used for testing the device from within the virtual machine. Below is an example that randomly reads and writes to the device.

$ sudo fio --ioengine=libaio --filename=/dev/vdb --name=rw --rw=randrw
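
Because sef-aio presents a standard block device, it can also be formatted and mounted like any other drive. For example, with an ext4 filesystem (the /mnt mount point is illustrative):

$ sudo mkfs.ext4 /dev/vdb
$ sudo mount /dev/vdb /mnt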

SEF-Backed ZNS Device

The standard NVMe device for QEMU supports ZNS. Using it requires that the guest OS runs Linux kernel 5.9 or later. It's enabled with the option zoned=true. The SEF patches for QEMU add support for using an SEF QoS Domain as a backing store. SEF-specific ZNS Device Options shows the SEF-specific options. The SEF QoS Domain used must be unconfigured or configured as an SEF ZNS domain. An unconfigured domain will be auto-configured as an SEF ZNS domain.
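
To confirm the guest kernel meets this requirement and was built with zoned block device support, check the running kernel version and its config (the config file path assumes a Debian-style /boot layout):

$ uname -r
$ grep CONFIG_BLK_DEV_ZONED /boot/config-$(uname -r)
CONFIG_BLK_DEV_ZONED=y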

SEF-specific ZNS Device Options

Option          Default Value  Description
sef             false          If true, switches from a file as a backing store to an SEF QoS Domain. In this mode, a zone is an SEF Super Block, which dictates the default zone size. Setting the zone size smaller is supported but will waste space.
sef_unit        0              Unit number of the SEF Unit to use as the backing store.
sef_qos_domain  2              QoS Domain ID to use as the backing store.
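
As a sketch of the smaller-zone-size case, QEMU's standard zoned.zone_size namespace parameter can be combined with the SEF options; the 64M value here is illustrative, not a recommendation:

-device nvme-ns,drive=nvme1,nsid=1,sef=true,zoned=true,zoned.zone_size=64M,sef_qos_domain=3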

Below is an example of a QEMU command line with options for an SEF-backed ZNS device on QoS Domain 3. Although unused, it’s still required to supply a valid backing file, even if it's zero-sized. The example is using the default of 0 for the SEF Unit number and explicitly configuring QoS Domain 3.
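
A zero-sized backing file can be created ahead of time with standard tools:

$ truncate -s 0 nvme3.raw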

$ sudo qemu -name sefZns -m 4G -smp 8 -enable-kvm \
    -drive file=system-debian.qcow2,cache=none,format=qcow2,if=virtio \
    -netdev user,id=vnet,hostfwd=tcp:127.0.0.1:2222-:22 \
    -device virtio-net-pci,netdev=vnet -nographic \
    -display none -cpu host \
    -drive if=none,id=nvme1,file=nvme3.raw,format=raw \
    -device nvme,serial=654321 \
    -device nvme-ns,drive=nvme1,nsid=1,sef=true,zoned=true,sef_qos_domain=3

To verify the ZNS device is present, run the command lsblk -z.

$ sudo lsblk -z

NAME     ZONED
fd0      none
sr0      none
vda      none
+-vda1   none
+-vda2   none
+-vda5   none
nvme0n1  host-managed

The indication “host-managed” tells you that device nvme0n1 is a ZNS device.
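
The zoned model can also be read directly from sysfs:

$ cat /sys/block/nvme0n1/queue/zoned
host-managed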

FIO can be used for testing ZNS devices from within the virtual machine. Below is an example that randomly reads and writes 8 gigabytes of data using 128k I/O requests. The request types are evenly split between reads and writes, yielding 4 gigabytes of reads and 4 gigabytes of writes.

$ sudo fio --aux-path=/tmp --allow_file_create=0 --name=job1 \
    --filename=/dev/nvme0n1 --rw=randrw --direct=1 \
    --zonemode=zbd --bs=128k --size=8G

A single run will not fill the device, because half of the I/O is reads and the size is rounded down to the device size if it is larger. To see how much data has been written, run blkzone report /dev/nvme0n1 and inspect the write pointer (wptr) value of each zone. Running the fio job repeatedly will eventually fill a zone, which is then reset/erased to allow further writes.

$ sudo blkzone report /dev/nvme0n1 | head -2

start: 0x000000000, len 0x010000, cap 0x010000, wptr 0x009800 reset:0 non-seq:0, zcond: 2(oi) [type: 2(SEQ_WRITE_REQUIRED)]
start: 0x000010000, len 0x010000, cap 0x010000, wptr 0x00aa00 reset:0 non-seq:0, zcond: 2(oi) [type: 2(SEQ_WRITE_REQUIRED)]
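
Zones can also be reset manually with blkzone; a reset returns each zone's write pointer to zero so the zone can be written again:

$ sudo blkzone reset /dev/nvme0n1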

SEF-Backed NVMe Device

The SEF patches for QEMU add support for using an SEF QoS Domain as a backing store for a QEMU NVMe device. SEF-specific NVMe Device Options shows the SEF-specific options. The QoS Domain must be unconfigured or configured as an SEF Block FTL domain. An unconfigured domain will be auto-configured as an SEF Block FTL domain when it's mounted.

SEF-specific NVMe Device Options

Option          Default Value  Description
sef             false          If true, switches from a file as a backing store to an SEF QoS Domain. In this mode, the SEF reference FTL is used to provide block I/O for the QEMU NVMe device.
sef_unit        0              Unit number of the SEF Unit to use as the backing store.
sef_qos_domain  2              QoS Domain ID to use as the backing store.

Below is an example of a QEMU command line with options for an SEF-backed NVMe device on QoS Domain 2. Although unused, it’s still required to supply a valid backing file, even if it’s zero-sized. The example is using the default of 0 for the SEF Unit number and the default of 2 for the QoS Domain.

$ sudo qemu -name sefNvme -m 4G -smp 8 -enable-kvm \
    -drive file=system-debian.qcow2,cache=none,format=qcow2,if=virtio \
    -netdev user,id=vnet,hostfwd=tcp:127.0.0.1:2222-:22 \
    -device virtio-net-pci,netdev=vnet -nographic \
    -display none -cpu host \
    -drive if=none,id=nvme1,file=nvme2.raw,format=raw \
    -device nvme,serial=123456 \
    -device nvme-ns,drive=nvme1,nsid=1,sef=true

To verify the virtual drive is present, run the command lsblk. In this case, it's the device nvme0n1.

$ sudo lsblk

NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
fd0        2:0    1    4K  0 disk
sr0       11:0    1 1024M  0 rom
vda      254:0    0  100G  0 disk
+-vda1   254:1    0   96G  0 part /
+-vda2   254:2    0    1K  0 part
+-vda5   254:5    0    4G  0 part [SWAP]
nvme0n1  259:0    0  4.3G  0 disk
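
If the nvme-cli package is installed in the guest, the device can also be confirmed by the serial number set on the QEMU command line (123456 above):

$ sudo nvme list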

FIO can be used for testing the device from within the virtual machine. Below is an example that randomly reads and writes to the device.

$ sudo fio --ioengine=libaio --filename=/dev/nvme0n1 --name=rw --rw=randrw