On Nov 28, 2016, at 10:37 PM, txcy uio wrote:
I came across this paper which talks about NVMeDirect. Specifically, it says that with
NVMeDirect it is possible to share the same NVMe device between userspace applications
and the kernel:
"Although NVMeDirect is conceptually similar to SPDK, NVMeDirect has following
differences. First, NVMeDirect leverages the kernel NVMe driver for control-plane
operations, thus existing applications and NVMeDirect-enabled applications can share the
same NVMe SSD. In SPDK, however, the whole NVMe SSD is dedicated to a single process who
has all the user-level driver code."
Any comments on whether this seems like a good idea, or whether it can impact performance
in one way or the other?
The NVMeDirect approach can get better performance/efficiency than the kernel driver, but
I suspect the current implementation does not hit the same levels of performance as SPDK
due to multiple spin locks in the I/O path.
The SPDK NVMe driver will also work with any recent stock Linux kernel, while the
NVMeDirect approach requires out-of-tree kernel patches. I suspect getting an
NVMeDirect-like approach into the upstream kernel would be difficult, since each queue
exposed to userspace can write to any LBA or DMA to any host memory region.