Linux I/O Passthru

Guided Research


Modern SSDs communicate via the NVMe protocol and can process millions of NVMe commands per second. Traditional Linux I/O APIs, however, are too slow to sustain that many I/O operations, and even newer APIs still suffer from significant performance overheads. Consequently, for high-throughput, low-latency I/O, user-space frameworks such as SPDK [1] are used, which bypass the Linux kernel entirely. But the race between kernel space and user space is not over yet. To reduce the overheads of the Linux kernel I/O path, a new API, I/O Passthru [2], has recently been merged into the Linux kernel.

The aim of this project is to investigate the performance impact of this new Linux API in terms of throughput and latency. For the evaluation, various read/write access patterns should be performed and compared against the Linux block device interface, SPDK, and our own toy driver (written in Rust) with SPDK-like performance.
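As a starting point for such measurements, a hypothetical fio job could drive the passthrough path directly via fio's io_uring_cmd engine (available in fio >= 3.31); the device path and all parameters below are illustrative assumptions, not prescribed settings.

```ini
; Sketch of a fio job exercising NVMe passthrough (all values assumed)
[passthru-randread]
ioengine=io_uring_cmd
cmd_type=nvme
filename=/dev/ng0n1
rw=randread
bs=4k
iodepth=32
time_based=1
runtime=30
```

Running the same workload with ioengine=io_uring against the corresponding block device (/dev/nvme0n1) would give one of the comparison points mentioned above.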

Research question

  1. What is the performance impact of Linux I/O Passthru?


Requirements

  • Knowledge of a systems programming language (C, C++, Rust, …)


If you are interested in this topic, send me an e-mail or drop by my office.


  1. SPDK: Storage Performance Development Kit
  2. I/O Passthru: Upstreaming a Flexible and Efficient I/O Path in Linux. USENIX FAST '24