I am new to SPDK, and to driver writing in general. I am trying to get SPDK
working with a non-NVMe storage card (Netlist Express Vault):
it shows up as a block device in Linux. Would anyone know whether there are
already utilities in SPDK to configure new devices like this (I have been
looking at ioat and the SPDK bdev libraries), or do I have to create some
sort of SPDK user-space driver for the Netlist card? If I do have to create
an SPDK driver for the Netlist card, how would I go about that?
Please see http://www.spdk.io/development/ for full instructions on how to contribute patches. Basically, you just open a pull request on GitHub.
-------- Original message --------
From: Wenhua Liu <liuw(a)vmware.com>
Date: 1/26/17 12:22 AM (GMT-07:00)
Subject: [SPDK] How to contribute to SPDK
While using the SPDK NVMF target, I encountered two blocking issues on the target side. I have fixed the problems. I'd like to contribute my changes to SPDK; how should I do that?
I've been using the SPDK NVMF target for over a month and it worked fine. Yesterday, I checked out the latest SPDK source code from https://github.com/spdk/spdk, up to commit 5ee4728d0cc05e506fba2b12445d93d088188fe3. This time, when the SPDK NVMF target is running and I press Ctrl-C, SPDK does not shut down as before; it just prints some messages and hangs there. Here is the output:
NVMF shutdown signal
[nvmf] subsystem.c: 224:spdk_nvmf_delete_subsystem: subsystem is 0x559509c81f00
[nvmf] subsystem.c: 224:spdk_nvmf_delete_subsystem: subsystem is 0x559509c82c80
Has anyone seen this? Is this something wrong in SPDK, or a configuration issue? If it's a configuration issue, how should I fix it?
I am using the SPDK application framework (spdk_app_init() and so on), and I am struggling to figure out how to insert the DPDK timer subsystem into the framework.
According to the DPDK documentation, you have to call rte_timer_manage() periodically in each lcore's main loop in order to make the timer subsystem function properly.
Could anyone advise where I can find the lcore main loop in the SPDK app framework, and what the recommended way is to insert the rte_timer_manage() call?
I'm trying to run some fio benchmarks and it doesn't seem to work for me.
The example in GitHub does not work (fio says "size=" is required), and it's
very unclear whether any configuration is needed to point fio at the device.
It seems that when I run it, everything goes straight to the root
filesystem (the drive gets saturated with writes, all terminals freeze) or
to memory (ungodly throughputs). Here's one input file:
And I get:
Run status group 0 (all jobs):
> READ: io=10227MB, aggrb=10227MB/s, minb=10227MB/s, maxb=10227MB/s,
> mint=1000msec, maxt=1000msec
Am I using it correctly? Something seems off. Also, my root filesystem
gets saturated with writes; I can easily see it with dstat. Is this expected?
Could anyone give me a pointer or an example that works? My end goal is
to do some random-write benchmarks.
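For reference, a job file along these lines is the general shape the SPDK fio plugin expects, run with the plugin preloaded (LD_PRELOAD=<path to fio_plugin> fio spdk.fio). Without the plugin engine, fio falls back to ordinary files, which would explain I/O landing on the root filesystem. The filename= device syntax below is an assumption; the plugin's README in examples/nvme/fio_plugin documents the exact form for your tree.

```ini
; invoked as: LD_PRELOAD=<path to SPDK fio_plugin> fio spdk.fio
[global]
ioengine=spdk             ; the preloaded SPDK engine; a kernel engine
                          ; here would write plain files instead
thread=1
direct=1
size=1G                   ; fio refuses to start without size=
filename=0000.06.00.0/1   ; hypothetical PCIe BDF + namespace syntax

[randwrites]
rw=randwrite
bs=4k
time_based=1
runtime=30
```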
I was going through the NVMF code and I need a few clarifications. We set the number of send_wr and recv_wr entries of a connection to the max queue depth, and the completion queue depth to 2 * max queue depth. What I am not able to understand is: wouldn't we then require 2 * max_queue_depth work requests? Yet the code allocates only max_queue_depth requests, which contain only max_queue_depth work requests, with send_wr and recv_wr in a union.
Has any benchmarking been done to show that SPDK is better than the kernel driver?
We ran the sample driver provided with SPDK, compared it with the kernel interface, and found that SPDK was very slow. We ran fio.
Can you describe the setup for the performance measurement between the kernel and SPDK drivers?
I am trying to create a VMFS on the target discovered by the ESXi initiator, but it fails. Has the userspace iSCSI target been tested against an ESXi initiator creating a VMFS?
The CFP for the Linux Foundation's Vault conference is drawing to a close.
The event is being held this year in Cambridge, Massachusetts on the days
following the LSF/MM summit.
The first two years' events have been solid, focused events in my (slightly
biased) opinion, so worth submitting to and definitely worth attending.
This year we have explicitly included embedded flash storage such as raw
NAND, eMMC, and flash-friendly file systems.
Submit a Proposal to Speak at Vault, Linux Storage and Filesystems Conference.
Vault will be held alongside LSF-MM Summit on March 22 & 23 in Cambridge, MA. To
submit (by 1/14) visit http://events.linuxfoundation.org/events/vault/program/cfp
Happy to answer any questions about the event.
Christoph (on behalf of the Program Committee)