Hi Sreeni,

 

Can you step through your second call to spdk_nvme_ctrlr_alloc_io_qpair()?  The call stack clearly shows that qpair=0x0 was passed into stellus_spdk_nvme_ns_cmd_write() at frame #3.  So I think we should back up and figure out why no I/O qpair was allocated (or maybe it was allocated but not saved in a structure somewhere).
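
For reference, the pattern I would expect looks roughly like this (just a sketch against the hello_world structures; the qpair2 field is a hypothetical addition to your ns_entry struct, not your attached code):

    /* Allocate a second I/O qpair on the same controller; NULL opts
     * uses the driver defaults. */
    ns_entry->qpair2 = spdk_nvme_ctrlr_alloc_io_qpair(ns_entry->ctrlr, NULL, 0);
    if (ns_entry->qpair2 == NULL) {
            fprintf(stderr, "second spdk_nvme_ctrlr_alloc_io_qpair() failed\n");
            return -1;
    }

If that pointer never gets stored where your I/O path reads it, you end up with exactly the qpair=0x0 your backtrace shows being handed down to nvme_allocate_request().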

 

-Jim

 

 

From: SPDK <spdk-bounces@lists.01.org> on behalf of "Sreeni (Sreenivasa) Busam (Stellus)" <s.busam@stellus.com>
Reply-To: Storage Performance Development Kit <spdk@lists.01.org>
Date: Thursday, November 16, 2017 at 4:57 PM
To: Storage Performance Development Kit <spdk@lists.01.org>
Subject: Re: [SPDK] Regarding NVMe driver command queue depth.

 

I have been testing how many commands can be outstanding to the device at a time. I verified that a maximum of 254 commands could be issued on one qpair, so I created a second qpair for the same ns_entry and issued the I/O commands on it, but it failed on the very first command. Is it invalid to create two qpairs for the same ns_entry and send commands to the device? The second qpair is created successfully, but I could not submit a command on it.
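
Roughly what I am doing (a simplified sketch of my test, not the exact attached code; buffer setup and callback definitions omitted):

    /* Two I/O queue pairs on the same controller/namespace. */
    qpair1 = spdk_nvme_ctrlr_alloc_io_qpair(ns_entry->ctrlr, NULL, 0);
    qpair2 = spdk_nvme_ctrlr_alloc_io_qpair(ns_entry->ctrlr, NULL, 0);

    /* qpair1 is filled until spdk_nvme_ns_cmd_write() reports the queue
     * is full (~254 commands), then I switch to qpair2. */
    rc = spdk_nvme_ns_cmd_write(ns_entry->ns, qpair2, buffer, 0, 1,
                                write_complete, &sequence, 0);

    /* Each qpair is polled separately for its own completions. */
    spdk_nvme_qpair_process_completions(qpair1, 0);
    spdk_nvme_qpair_process_completions(qpair2, 0);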

I modified the hello_world program to test this and attached the related code.  

Please take a look and let me know what the problem is.

 

#0  0x000000000040bae2 in nvme_allocate_request (qpair=0x0,
    payload=0x7fffa4726ba0, payload_size=512, cb_fn=0x4041a4 <write_complete>,
    cb_arg=0x7b4270) at nvme.c:85
#1  0x000000000040996c in _nvme_ns_cmd_rw (ns=0x100ff8ee40, qpair=0x0,
    payload=0x7fffa4726ba0, payload_offset=0, md_offset=0, lba=0, lba_count=1,
    cb_fn=0x4041a4 <write_complete>, cb_arg=0x7b4270, opc=1, io_flags=0,
    apptag_mask=0, apptag=0, check_sgl=true) at nvme_ns_cmd.c:440
#2  0x0000000000409fea in spdk_nvme_ns_cmd_write (ns=0x100ff8ee40, qpair=0x0,
    buffer=0x10000f7000, lba=0, lba_count=1, cb_fn=0x4041a4 <write_complete>,
    cb_arg=0x7b4270, io_flags=0) at nvme_ns_cmd.c:649
#3  0x000000000040439d in stellus_spdk_nvme_ns_cmd_write (ns_entry=0x7b13c0,
    qpair=0x0, buffer=0x10000f7000, lba=0, lba_count=1,
    cb_fn=0x40420d <io_complete>, cb_arg=0x7b4270, ioflags=0) at iostat.c:233
#4  0x00000000004046b8 in test_io_func1 () at iostat.c:342
#5  0x0000000000404a94 in main (argc=1, argv=0x7fffa4726db8) at iostat.c:503
(gdb) f 3
#3  0x000000000040439d in stellus_spdk_nvme_ns_cmd_write (ns_entry=0x7b13c0,
    qpair=0x0, buffer=0x10000f7000, lba=0, lba_count=1,
    cb_fn=0x40420d <io_complete>, cb_arg=0x7b4270, ioflags=0) at iostat.c:233
233                     rc = spdk_nvme_ns_cmd_write(ns_entry->ns, qpair, buffer,
(gdb) p qpair
$1 = (struct spdk_nvme_qpair *) 0x0

 

If any of you has time, please take a look at it. Thank you for your suggestion.

 

From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Sreeni (Sreenivasa) Busam (Stellus)
Sent: Thursday, November 16, 2017 11:26 AM
To: spdk@lists.01.org
Subject: [SPDK] Regarding NVMe driver command queue depth.

 

Hi Paul,

 

I was reading about the driver on the SPDK site, and I am interested in understanding the queue depth for a device.

“The specification allows for thousands, but most devices support between 32 and 128. The specification makes no guarantees about the performance available from each queue pair, but in practice the full performance of a device is almost always achievable using just one queue pair. For example, if a device claims to be capable of 450,000 I/O per second at queue depth 128, in practice it does not matter if the driver is using 4 queue pairs each with queue depth 32, or a single queue pair with queue depth 128”

When a queue depth is quoted for a device, is it the number of commands that can be issued from the application to the controller and remain outstanding at any given time?

Is there an NVMe driver API to set the queue depth? And is my understanding correct that the queue size is ultimately enforced at the firmware level?
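
For example, I see that spdk_nvme_ctrlr_alloc_io_qpair() takes a struct spdk_nvme_io_qpair_opts. Am I right that io_queue_size is the knob here? A sketch of what I have in mind (ctrlr is my controller handle):

    struct spdk_nvme_io_qpair_opts opts;

    /* Start from the driver defaults, then request a deeper queue; the
     * driver caps this at what the controller reports it supports. */
    spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
    opts.io_queue_size = 256;
    qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));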

Please give some detail about this parameter.

 

Thanks,

Sreeni