I am new to SPDK, and to driver writing in general. I am trying to get SPDK
working with a non-NVMe storage card (Netlist Express Vault); it shows up as
a block device in Linux. Would anyone know if there are already utilities in
SPDK to configure new devices like this (I have been looking at ioat and the
SPDK bdev libraries), or do I have to create some sort of SPDK user-space
driver for the Netlist card? If I do have to create an SPDK driver for the
Netlist card, how would I go about that?
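For reference: since the card already shows up as a Linux block device, one thing worth trying before writing a custom driver is SPDK's AIO bdev module, which wraps an existing kernel block device as an SPDK bdev. A minimal config sketch, where the device path /dev/nlv0 and the bdev name AIO0 are hypothetical and the exact directive format depends on the SPDK release:

  [AIO]
    # Expose the kernel block device as an SPDK bdev named AIO0
    AIO /dev/nlv0 AIO0

A dedicated bdev module, implementing the spdk_bdev_fn_table callbacks against the card's own hardware interface, would only be needed to bypass the kernel block layer entirely.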
Is there any detailed instruction or sample code for using SPDK's NVMe over Fabrics?
How do I set up the experiment? I would really appreciate it if anyone could give me
some instructions or materials on this.
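For context, a rough sketch of what the target side of such an experiment can look like with the INI-style nvmf.conf (the NQN, listen address, and PCI address are placeholders, and directive names may differ between SPDK releases); the target is then started by pointing app/nvmf_tgt at this file with -c:

  [Subsystem1]
    NQN nqn.2016-06.io.spdk:cnode1
    Core 0
    Mode Direct
    Listen RDMA 192.168.1.10:4420
    NVMe 0000:01:00.0

On the initiator side, the kernel nvme-rdma driver's discover/connect flow (as described in a later question below) can be used against the listen address.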
I have been using SPDK.
When I have a virtual subsystem with more than one malloc drive, it appears as more than one namespace on the initiator side. So I can start performing I/O to both namespaces simultaneously, right? There shouldn't be any restriction, right? My question is whether I can do I/O to both namespaces simultaneously using a single session.
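For concreteness, a sketch of the kind of configuration being described, with two malloc bdevs exported as two namespaces of a single virtual subsystem (NQN, serial number, and address are placeholders; directive names assume the old INI-style config):

  [Malloc]
    NumberOfLuns 2
    LunSizeInMB 64

  [Subsystem1]
    NQN nqn.2016-06.io.spdk:cnode1
    Mode Virtual
    Listen RDMA 192.168.1.10:4420
    SN SPDK00000000000001
    Namespace Malloc0
    Namespace Malloc1

Both namespaces are exposed through the same subsystem, so the initiator sees them over the single session it connects with.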
I came across this paper, which talks about NVMeDirect.
Specifically, it says that with NVMeDirect it is possible to share the same
NVMe device between user-space applications and the kernel:
"Although NVMeDirect is conceptually similar to SPDK, NVMeDirect has
following differences. First, NVMeDirect leverages the kernel NVMe driver
for control-plane operations, thus existing applications and
NVMeDirect-enabled applications can share the same NVMe SSD. In SPDK,
however, the whole NVMe SSD is dedicated to a single process who has all
the user-level driver code."
Any comments on whether this seems like a good idea, or whether it could impact
performance one way or the other?
Hi all,
I am using the NVMe kernel initiator and a user-space NVMe target, and I need some input from you. I see that a session can have multiple connections. Right now I do a discover from the NVMe initiator and then a connect. If I run the connect command a second time, will it create a new connection for the session? How does I/O happen when there are multiple connections? How do we know which connections exist, and do we need to worry about synchronisation issues?
Also, when the connection is established, dmesg shows that 3 queues are created for the RDMA transport layer. Are these 3 pairs of submission and completion queues?
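For reference, in the INI-style config of that era the number of queues the target allows per session was a tunable; a sketch (directive name assumed from that era, exact semantics may depend on the SPDK version):

  [Nvmf]
    # Limit on the number of queue pairs a single NVMe-oF session may create
    MaxQueuesPerSession 4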
I am looking for a tool to display disk I/O while I access an NVMe SSD via SPDK.
For example, no disk I/O shows up in iostat or vmstat while I am running SPDK perf.
I understand this is expected and necessary, but I wonder if there is any tool available to replace iostat for SPDK.
Or is there any SPDK API that helps display disk I/O?
I have a server with two NUMA nodes, with a NIC configured on each node.
In the nvmf.conf file, I would like to assign the right lcore based on the node configuration.
Here is a snippet of nvmf.conf:
Listen RDMA 184.108.40.206:4420
Listen RDMA 220.127.116.11:4420
But I noticed that it always uses core 0 for both subsystems, no matter what value is assigned to "Core" under the "Subsystem" section.
The following messages confirm it uses lcore 0:
allocated subsystem nqn.2016-06.io.spdk:cnode2 on lcore 0
“Subsystem nqn.2016-06.io.spdk:cnode2 is configured to run on a CPU core belonging to a different NUMA node than the associated NVMe device. This may result in reduced performance.”
I also get a segmentation fault if I try to set any non-zero value for "AcceptorCore".
It would be nice if any of you could give more insight into "AcceptorCore" and "Core <lcore>".
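For reference, the rough intent of those directives in the INI-style config is sketched below (core numbers are placeholders; behaviour also depends on the reactor mask the target application is started with):

  [Nvmf]
    # lcore on which the acceptor poller (which accepts new fabric connections) runs
    AcceptorCore 0

  [Subsystem1]
    ...
    # lcore on which this subsystem's poller runs; it should fall within the
    # application's reactor mask and ideally sit on the same NUMA node as the
    # NIC and NVMe device it serves
    Core 1

  [Subsystem2]
    ...
    # e.g. a core on the second NUMA node
    Core 11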
Hi all,
I see that the reactor mask is assigned at initialisation time, so the global mask represents the set of cores on which reactors can run. When a subsystem is initialised, we get a parameter from the config file called lcore, the core on which the subsystem should run, and then a function called spdk_nvmf_allocate_core is called. What does this function do? And is the maximum value of the reactor mask determined by the number of cores in the system? Correct me if I am wrong; I mainly need to understand that function.
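As a point of reference, the global mask mentioned above is whatever the application is started with; a sketch, assuming the app config supports a ReactorMask directive under [Global] as other SPDK app configs do (the value is a placeholder):

  [Global]
    # Bitmask of lcores on which reactors are started; 0xF = cores 0-3.
    # Per-subsystem "Core" values (and "AcceptorCore") are expected to fall
    # inside this mask.
    ReactorMask 0xF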