I am new to SPDK, and to driver writing in general. I am trying to get SPDK
working with a non-NVMe storage card (Netlist Express Vault);
it shows up as a block device in Linux. Would anyone know whether there are
already utilities in SPDK to configure new devices like this (I have been
looking at ioat and the SPDK bdev libraries), or do I have to create some
sort of SPDK user-space driver for the Netlist card? If I do have to create
an SPDK driver for the Netlist card, how would I go about that?
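Since the card already appears as a Linux block device, one option worth trying before writing a new driver is SPDK's Linux AIO bdev module, which wraps an existing kernel block device as an SPDK bdev. A minimal configuration-file sketch is below; the device path /dev/nlvd0 is only a placeholder for whatever node the Netlist card actually exposes:

```ini
# Sketch of an SPDK configuration-file section exposing a kernel block
# device through the AIO bdev module. /dev/nlvd0 is a placeholder path.
[AIO]
  AIO /dev/nlvd0 AIO0
```

Note this path goes through the kernel for each I/O, so it will not give the full user-space performance of a native SPDK driver, but it is a quick way to get the device usable from the bdev layer.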
Has any benchmarking been done to show that SPDK outperforms the kernel driver?
We ran the sample application provided with SPDK and compared it against the kernel interface using FIO, and found that SPDK was very slow.
Can you describe the setup used for the performance measurement between the kernel and SPDK drivers?
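For what it's worth, a fair comparison usually means driving the device through SPDK's own user-space I/O path rather than through a kernel block device; SPDK ships a benchmarking example under examples/nvme/perf for exactly this. A sketch of a typical invocation follows; the flag names vary between SPDK versions, so verify against ./perf --help on your tree:

```shell
# Hypothetical invocation of SPDK's bundled perf example (flags may
# differ by version): queue depth 128, 4 KiB random reads, 60 seconds.
./examples/nvme/perf/perf -q 128 -s 4096 -w randread -t 60
```

If FIO numbers are much worse with SPDK than with the kernel, the common culprits are a debug (non-optimized) build, too few cores assigned, or measuring through a layer that reintroduces the kernel on the data path.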
I’m trying to run SPDK (the NVMe-oF target) as a non-root user. rte_mempool_init always reports 0 MB of available memory, but if I run the same thing as root, everything works fine. Can someone help me out with this? I have come across some DPDK mailing list threads about the same issue.
Can DPDK currently run as a non-root user?
Has anyone experimented with this in either DPDK or SPDK?
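When rte_mempool_init reports 0 MB as non-root, the usual suspect is that the hugepage mount is not accessible to the unprivileged user. DPDK can run unprivileged in some configurations once the relevant files are opened up; below is a sketch of the one-time setup done as root, assuming a placeholder user name spdkuser (adjust names, sizes, and device nodes to your system):

```shell
# One-time setup as root; 'spdkuser' is a placeholder user name.
# Mount hugetlbfs so the unprivileged user can create hugepage files.
mkdir -p /mnt/huge
mount -t hugetlbfs -o uid=spdkuser,gid=spdkuser nodev /mnt/huge
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

# The NVMe-oF target also opens RDMA device files; make those
# accessible to the user as well.
chown spdkuser /dev/infiniband/uverbs* /dev/infiniband/rdma_cm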
Karthi
I am trying to test SPDK with a Mellanox 40G NIC (mlx4). I installed Mellanox
OFED using the command
"./mlnxofedinstall --dpdk --vma-eth --vma". I modified the SPDK makefile to
link the DPDK mlx4 NIC driver as below.
diff --git a/lib/env_dpdk/env.mk b/lib/env_dpdk/env.mk
index 41fb18a..7ab1e8f 100644
@@ -55,7 +55,9 @@ else
DPDK_INC = -I$(DPDK_ABS_DIR)/include/dpdk
DPDK_LIB = $(DPDK_ABS_DIR)/lib/librte_eal.a
# librte_malloc was removed after DPDK 2.1. Link this library conditionally
# based on its existence to maintain backward compatibility.
@@ -64,7 +66,7 @@ DPDK_LIB += $(DPDK_ABS_DIR)/lib/librte_malloc.a
-DPDK_LIB += -ldl
+DPDK_LIB += -ldl -libverbs
DPDK_LIB += -lexecinfo
With these changes, the mlx4 DPDK driver probe is executed when running the
nvmf_tgt command, but the Tx and Rx routines are not executed during I/O
transfer with FIO. Please let me know if any additional programming is
needed to hook the NIC Tx and Rx paths into SPDK.
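In case it helps in reproducing this, linking -libverbs alone may not be sufficient: the mlx4 poll-mode driver library itself also has to be linked into the application so that its Tx/Rx burst functions are actually registered. A sketch of the additional makefile lines, with the archive name taken from typical DPDK builds of that era (verify the exact file name against your DPDK lib directory):

```makefile
# Hypothetical additions to lib/env_dpdk/env.mk; check the archive name
# in $(DPDK_ABS_DIR)/lib before relying on it.
DPDK_LIB += $(DPDK_ABS_DIR)/lib/librte_pmd_mlx4.a
DPDK_LIB += -libverbs
```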
We have been experimenting with SPDK and the Linux kernel-based initiator. We have run into a repeatable issue involving non-direct I/O issued directly to an NVMe-oF block device. We also ran into similar issues when using a file system with direct I/O turned off. In any case, let's focus on the case of I/Os to a raw block device, as it seems easier to replicate the bug.
I have been unable to diagnose the root cause of the problem to fix it, so I am posting here to hopefully get some help.
1. Get the latest SPDK/DPDK sources and compile them on a target node with NVMe drives. (Note: we have two separate RNICs on the target, but have configured the SPDK configuration file to place the NVMe drives on the correct root complex for each RNIC; it is a two-socket system. I don't think the RNICs are the issue, I just wanted to point out that detail of our configuration.) We are also using IB as the transport with Mellanox Connect-IB cards.
2. Do discovery and connect via the regular 'nvme' open-source initiator package. Note: no file system is involved at all in this test.
3. The following FIO command does not cause the bug:
fio --rw=randwrite --bs=128k --numjobs=1 --iodepth=256 --loops=1 --ioengine=libaio --direct=1 --invalidate=1 --fsync_on_close=1 --randrepeat=1 --norandommap --exitall --name test --filename=/dev/nvme0n1 --output=blah --time_based --runtime=60 --group_reporting --size=30G
4. The following FIO command does cause the bug:
fio --rw=randwrite --bs=128k --numjobs=1 --iodepth=256 --loops=1 --ioengine=libaio --direct=0 --invalidate=1 --fsync_on_close=1 --randrepeat=1 --norandommap --exitall --name test --filename=/dev/nvme0n1 --output=blah --time_based --runtime=60 --group_reporting --size=30G
Note: the only difference between the two commands is the --direct flag.
5. If you run the FIO command from step 4, you will see messages like the following in dmesg on the initiator:
[158052.975693] nvme nvme0: new ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery", addr 192.168.4.74:4420
[158058.986892] nvme nvme0: creating 56 I/O queues.
[158061.990454] nvme nvme0: new ctrl: NQN "nqn.2016-06.io.spdk:cnode8", addr 192.168.4.74:4420
[158697.246241] blk_update_request: I/O error, dev nvme0n1, sector 207104
[158697.248259] Buffer I/O error on dev nvme0n1, logical block 25888, lost async page write
[158697.250775] Buffer I/O error on dev nvme0n1, logical block 25889, lost async page write
[158697.253274] Buffer I/O error on dev nvme0n1, logical block 25890, lost async page write
[158697.255779] Buffer I/O error on dev nvme0n1, logical block 25891, lost async page write
[158697.273322] blk_update_request: I/O error, dev nvme0n1, sector 207616
[158697.275357] blk_update_request: I/O error, dev nvme0n1, sector 208128
[158697.277395] blk_update_request: I/O error, dev nvme0n1, sector 208384
[158697.279431] blk_update_request: I/O error, dev nvme0n1, sector 209408
[158697.281473] blk_update_request: I/O error, dev nvme0n1, sector 209920
[158697.473874] nvme nvme0: reconnecting in 10 seconds
[158709.884153] nvme nvme0: Successfully reconnected
[158727.572304] blk_update_request: 29 callbacks suppressed
[158727.572311] blk_update_request: I/O error, dev nvme0n1, sector 228352
[158727.572388] nvme nvme0: reconnecting in 10 seconds
[158727.614845] buffer_io_error: 1238 callbacks suppressed
[158727.614849] Buffer I/O error on dev nvme0n1, logical block 28544, lost async page write
[158727.698281] Buffer I/O error on dev nvme0n1, logical block 28545, lost async page write
[158738.976347] nvme nvme0: Successfully reconnected
Please let me know if you require more information about how we have set up the SPDK configuration file, as well as any other details of our setup, if you have issues replicating this bug.
I appreciate any help/advice you all can offer.
When I try to compile SPDK applications I get this error:
fatal error: rte_config.h: No such file or directory
Is this because I have not installed DPDK correctly? I used the method for
installing DPDK described on the SPDK GitHub:
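For reference, this error usually means the SPDK build cannot find the DPDK headers (rte_config.h lives in the DPDK build output). SPDK is pointed at a DPDK build via the DPDK_DIR make variable; a sketch, assuming DPDK was built into the default x86_64-native-linuxapp-gcc target directory (the path below is an example, substitute your own):

```shell
# Point the SPDK build at the DPDK build output; the path is an example.
make DPDK_DIR=/path/to/dpdk/x86_64-native-linuxapp-gcc
```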
Is there any way to directly use memory allocated by the malloc()
function? spdk_malloc() allocates a pinned memory buffer in huge
pages, and spdk_nvme_ns_cmd_write() also requires pinned memory for
writes. However, if we allocate a memory buffer with malloc(), how could we
write data directly to an NVMe device through SPDK without copying the data
into a buffer allocated by spdk_malloc()?
For issue 66 (https://github.com/spdk/spdk/issues/66), please let me give more description by email, because I can't find a way to add pictures on GitHub.
If we use SPDK iSCSI to connect to librbd and run lsscsi, the device name is “Ceph rbd”.
If we use the well-known tgtd to connect to librbd, the device name is “VIRTUAL-DISK”.
I prefer “VIRTUAL-DISK” to “Ceph rbd”:
1) The user doesn’t need to care whether it is Ceph or not.
2) In the lsscsi usage page (http://sg.danny.cz/scsi/lsscsi.html), “VIRTUAL-DISK” is the name introduced.
What’s your opinion?