Thanks Paul.

I understand now how the queue size is determined.

Let me take a look at the code and test.


From: SPDK [] On Behalf Of Luse, Paul E
Sent: Thursday, November 16, 2017 4:05 PM
To: Storage Performance Development Kit <>
Subject: Re: [SPDK] Regarding NVMe driver command queue depth.


Hi Sreeni,


So in NVMe the queues are SW constructs that can be made pretty much any size as long as they are no larger than what the HW reports as its max via CAP.MQES (that field is 0-based, so the HW limit is MQES + 1 entries).  In the SPDK NVMe driver you can see how the value is determined in this function:



nvme_ctrlr_init_cap(struct spdk_nvme_ctrlr *ctrlr, const union spdk_nvme_cap_register *cap)
{
	ctrlr->cap = *cap;

	ctrlr->min_page_size = 1u << (12 + ctrlr->cap.bits.mpsmin);

	/* For now, always select page_size == min_page_size. */
	ctrlr->page_size = ctrlr->min_page_size;

	ctrlr->opts.io_queue_size = spdk_max(ctrlr->opts.io_queue_size, SPDK_NVME_IO_QUEUE_MIN_ENTRIES);
	ctrlr->opts.io_queue_size = spdk_min(ctrlr->opts.io_queue_size, ctrlr->cap.bits.mqes + 1u);

	ctrlr->opts.io_queue_requests = spdk_max(ctrlr->opts.io_queue_requests, ctrlr->opts.io_queue_size);
}


So you can control the number via the options structure, struct spdk_nvme_ctrlr_opts, passed in when you probe for devices.  Think of it as the size of the submission queue that you create, as limited by HW.


Does that make sense?






From: SPDK [] On Behalf Of Sreeni (Sreenivasa) Busam (Stellus)
Sent: Thursday, November 16, 2017 12:26 PM
Subject: [SPDK] Regarding NVMe driver command queue depth.


Hi Paul,


I was reading about the driver from SPDK site, and interested in understanding the queue depth for a device.

“The specification allows for thousands, but most devices support between 32 and 128. The specification makes no guarantees about the performance available from each queue pair, but in practice the full performance of a device is almost always achievable using just one queue pair. For example, if a device claims to be capable of 450,000 I/O per second at queue depth 128, in practice it does not matter if the driver is using 4 queue pairs each with queue depth 32, or a single queue pair with queue depth 128”

When queue depth is mentioned for a device, is it the number of commands that can be issued from the application to the controller and outstanding at any time?

Is there an NVMe driver API to set the queue depth? Is my understanding correct that the queue size is managed at the firmware level?

Please give some detail about the parameter.