Thank you very much!
Department of Computer Science and Engineering <http://www.cs.umn.edu/>
College of Science and Engineering <http://cse.umn.edu/>
University of Minnesota, Twin Cities <http://www.umn.edu>
On Wed, Jan 31, 2018 at 12:22 PM, Walker, Benjamin wrote:
On Wed, 2018-01-31 at 17:49 +0000, Fenggang Wu wrote:
> Hi All,
> I read from the SPDK doc "NVMe Driver Design -- Scaling Performance",
> which says:
> "For example, if a device claims to be capable of 450,000 I/O per second
> at queue depth 128, in practice it does not matter if the driver is using 4
> queue pairs each with queue depth 32, or a single queue pair with queue
> depth 128."
> Does this consider the queuing latency? I am guessing the latency in the two
> cases will be different (in qp/qd = 4/32 and in qp/qd = 1/128). In the 4
> threads case, the latency will be 1/4 of the 1 thread case. Do I get it
> right?
Officially, it is entirely up to the internal design of the device. But for
the NVMe devices I've encountered on the market today, you can use as a
mental model a single thread inside the SSD processing the incoming messages
that are doorbell writes. It simply takes the doorbell write message, does
the math to calculate where the command is located in host memory, and then
issues a DMA to pull it into device local memory. It doesn't matter which
queue the I/O came in on - the math is the same. So no, the latency of 1
queue pair at 128 queue depth is the same as 4 queue pairs at 32 queue depth.
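This follows from Little's law: mean latency is the total number of outstanding I/Os divided by throughput, regardless of how those I/Os are split across queue pairs. A small sketch using the 450,000 IOPS figure quoted above (the function name here is just for illustration):

```python
# Little's law: mean latency = outstanding I/Os / throughput.
# Using the figure from the SPDK doc quoted above: 450K IOPS at an
# aggregate queue depth of 128. Because the device processes commands
# from one internal pipeline, only the total outstanding depth matters.

IOPS = 450_000  # device throughput at saturation

def mean_latency_us(total_queue_depth: int) -> float:
    """Average per-I/O latency in microseconds at a given aggregate depth."""
    return total_queue_depth / IOPS * 1e6

# One queue pair at depth 128 vs. four queue pairs at depth 32 each:
single = mean_latency_us(128)      # 1 x 128 outstanding
split = mean_latency_us(4 * 32)    # 4 x 32 outstanding - same total
print(f"{single:.1f} us vs {split:.1f} us")  # ~284.4 us in both cases
```

Splitting the same 128 outstanding commands across 4 queue pairs does not change the device's service rate, so the per-I/O latency is identical.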
> If so, then I got confused as the document also says:
> "In order to take full advantage of this scaling, applications should
> consider organizing their internal data structures such that data is
> assigned exclusively to a single thread."
> Please correct me if I get it wrong. I understand that if the dedicated
> thread has total ownership of the I/O data structures, there is no
> contention to slow down the I/O. I believe that BlobFS is also designed
> with this philosophy in that only one thread is doing I/O.
> But considering the RocksDB case, if access to the shared data structures
> has been largely taken care of by the RocksDB logic via locking (which is
> inevitable anyway), then each RocksDB thread sending I/O requests to the
> filesystem could also have its own queue pair to do I/O. More I/O threads
> means a smaller per-queue depth and a smaller queuing delay.
> Even if there are some FS metadata operations that may require some
> coordination, I would guess such metadata operations take only a small
> portion.
> Therefore, is it a viable idea to have more I/O threads in BlobFS to serve
> the multi-threaded RocksDB for a smaller delay? What would be the tradeoff?
You're right that RocksDB has already worked out all of its internal data
sharing using locks. It then uses a thread pool to issue simultaneous
I/O requests to the filesystem. That's where the SPDK RocksDB backend
intercepts. As you suspect, the filesystem itself (BlobFS, in this case) has
shared data structures that must be coordinated for some operations (creating
and deleting files, resizing files, etc. - but not regular read/write). That's
a small part of the reason why we elected, in our first attempt at writing a
RocksDB backend, to route all I/O from each thread in the thread pool to a
single thread doing asynchronous I/O.
The main reason we route all I/O to a single thread, however, is to reduce
CPU usage. RocksDB makes blocking calls on all threads in the thread pool. We
could implement that in SPDK by spinning in a tight loop, polling for the I/O
to complete. But that means every thread in the RocksDB thread pool would be
burning a full core. Instead, we send all I/O to a single thread that is
polling for completions, and put the threads in the pool to sleep on a
semaphore. When an I/O completes, we send a message back to the originating
thread and kick the semaphore to wake it up. This introduces some latency
(although the rest of SPDK is more than fast enough to compensate for that),
but it saves a lot of CPU usage.
Yeah, I get it. It makes perfect sense to me that BlobFS concentrates the I/Os
on one thread to minimize the busy waiting.
In an ideal world, we'd be integrating with a fully asynchronous K/V store,
where the user could call Put() or Get() and have it return immediately, with
a callback when the data was actually inserted. But that's just not how
RocksDB works today. Even the background thread pool doing compaction is
designed around blocking operations. It would integrate with SPDK much better
if it instead used a smaller set of threads, each doing asynchronous compaction
operations on a whole set of files at once. Changing RocksDB in this way is a
huge lift, but it would be an impressive project.
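For concreteness, the asynchronous interface described above might look something like the following sketch (hypothetical class and method names, not RocksDB's actual API; the callbacks here fire synchronously only because there is no real I/O behind them):

```python
# Hypothetical shape of a callback-based K/V interface: put()/get()
# return immediately and invoke a callback once the operation completes.
from typing import Any, Callable, Optional

class AsyncKV:
    """Toy stand-in for an asynchronous key/value store."""

    def __init__(self) -> None:
        self._store: dict = {}

    def put(self, key: Any, value: Any, cb: Callable[[], None]) -> None:
        # A real store would enqueue the write and return before it lands;
        # this sketch completes immediately and fires the callback.
        self._store[key] = value
        cb()

    def get(self, key: Any, cb: Callable[[Optional[Any]], None]) -> None:
        cb(self._store.get(key))

events: list = []
db = AsyncKV()
db.put("k", 42, lambda: events.append("inserted"))
db.get("k", lambda v: events.append(("got", v)))
print(events)
```

With an interface like this, a small number of threads could keep many operations in flight at once, which is exactly what a polled-mode driver wants.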
Right, if RocksDB could do async compaction, the parallelism of the SSD could
be exploited using a small number of threads. Or equivalently, RocksDB can
spawn enough blocking compaction threads to keep the SSD busy; however, the
tradeoff is that there will be more thread-management overhead.
Still, instead of altering RocksDB, which is complicated, it's also
possible to start from another parallel LSM equivalent such as HyperLevelDB.
> Any thoughts/comments are appreciated. Thank you very much!
> SPDK mailing list