Add py-spdk client for SPDK
by We We
Hi, all
I have submitted the py-spdk code at https://review.gerrithub.io/#/c/379741/, please take some time to review it; I would be very grateful.
py-spdk is a client that helps upper-level applications communicate with SPDK-based apps (such as nvmf_tgt, vhost, iscsi_tgt, etc.). Should I submit it to a separate repo that I set up, rather than the SPDK repo? I ask because I think it is a relatively independent kit built on top of SPDK.
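To give a rough idea of what the client does, here is a minimal sketch of sending a single JSON-RPC request to a running SPDK-based app. The /var/tmp/spdk.sock path and the get_rpc_methods method are only typical defaults I am assuming here, not the final py-spdk API:
# Minimal sketch: send one JSON-RPC 2.0 request to an SPDK-based app over its
# Unix domain socket. Socket path and method name are assumed defaults.
import json
import socket

def spdk_rpc_call(method, params=None, sock_path="/var/tmp/spdk.sock"):
    request = {"jsonrpc": "2.0", "id": 1, "method": method}
    if params is not None:
        request["params"] = params
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(request).encode())
        response = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            response += chunk
            try:
                # Stop as soon as we have a complete JSON object.
                return json.loads(response.decode())
            except ValueError:
                continue
    return None

if __name__ == "__main__":
    # Ask the running app (e.g. nvmf_tgt) which RPC methods it supports.
    print(spdk_rpc_call("get_rpc_methods"))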
If you have any thoughts about py-spdk, please share them with me.
Regards,
Helloway
2 years, 9 months
SPDK + user space appliance
by Shahar Salzman
Hi all,
Sorry for the delay, had to solve a quarantine issue in order to get access to the list.
Some clarifications regarding the user space application:
1. The application is not nvmf_tgt; we have an entire appliance into which we are integrating SPDK.
2. We are currently using nvmf_tgt functions to start SPDK, and bdev_user to handle IO.
3. This is all in user space (I am used to the kernel/user distinction in order to separate protocol/appliance).
4. The bdev_user will also notify spdk of changes to namespaces (e.g. a new namespace has been added, and can be attached to the spdk subsystem)
I am glad that this is your intention. The question is: do you think it would be useful to create such a bdev_user module, which would allow other users to integrate SPDK into their appliance using such a simple threading model? Perhaps such a module would make integrating SPDK easier.
I am attaching a reference application which does NULL IO via bdev_user.
Regarding the RPC, we have an implementation of it, and will be happy to push it upstream.
I am not sure that using RPC for this type of bdev_user namespace is the correct approach in the long run. Since the user appliance is the one adding/removing namespaces (like hot-plugging a new NVMe device), it can just call the "add_namespace_to_subsystem" interface directly and does not need an RPC for it.
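For comparison, if we did expose this through the standard JSON-RPC mechanism, the appliance-side call would look roughly like the sketch below. The method name is simply the interface name mentioned above and the parameters are hypothetical; this is not an existing SPDK RPC:
# Hypothetical sketch only: what an "add_namespace_to_subsystem" RPC call might
# look like from the appliance side. Method and parameter names are invented
# for illustration; the default SPDK JSON-RPC Unix socket path is assumed.
import json
import socket

def rpc_call(method, params, sock_path="/var/tmp/spdk.sock"):
    req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        return json.loads(s.recv(65536).decode())

# Notify the target that the appliance has hot-plugged a new namespace.
print(rpc_call("add_namespace_to_subsystem",
               {"nqn": "nqn.2016-06.io.spdk:cnode1", "bdev_name": "user_bdev0"}))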
Thanks,
Shahar
3 years, 11 months
Need help fixing NVMe probe problem in NVMeoF initiator running fio.
by Sreeni (Sreenivasa) Busam (Stellus)
Hello,
I have configured the target and initiator for a subsystem with one NVMe device on the target.
Here are the errors I am getting on the initiator. I have a good NVMe device on the target side, but I am getting the error below.
If you know why the initiator does not initialize the controller, and the reason for the error, please let me know.
Target log:
Starting DPDK 17.08.0 initialization...
[ DPDK EAL parameters: nvmf -c 0x1 --file-prefix=spdk_pid27838 ]
EAL: Detected 32 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
Total cores available: 1
Occupied cpu socket mask is 0x1
reactor.c: 364:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on socket 0
copy_engine_ioat.c: 306:copy_engine_ioat_init: *NOTICE*: Ioat Copy Engine Offload Enabled
nvmf_tgt.c: 178:nvmf_tgt_create_subsystem: *NOTICE*: allocated subsystem nqn.2014-08.org.nvmexpress.discovery on lcore 0 on socket 0
nvmf_tgt.c: 178:nvmf_tgt_create_subsystem: *NOTICE*: allocated subsystem nqn.2017-06.io.spdk-MPcnode1 on lcore 0 on socket 0
rdma.c:1146:spdk_nvmf_rdma_create: *NOTICE*: *** RDMA Transport Init ***
rdma.c:1353:spdk_nvmf_rdma_listen: *NOTICE*: *** NVMf Target Listening on 172.17.2.175 port 11345 ***
nvmf_tgt.c: 255:spdk_nvmf_startup: *NOTICE*: Acceptor running on core 0 on socket 0
rdma.c:1515:spdk_nvmf_rdma_poll_group_create: *NOTICE*: Skipping unused RDMA device when creating poll group.
Everything seems to be fine in the target application until the initiator connects to it and creates a namespace.
NVMF configuration file:
[Nvmf]
MaxQueuesPerSession 4
AcceptorPollRate 10000
[Subsystem1]
NQN nqn.2017-06.io.spdk-MPcnode1
Core 1
SN SPDK0000000000000001
Listen RDMA 172.17.2.175:11345
AllowAnyHost Yes
NVMe 0000:84:00.0
Initiator log:
./fio --name=nvme --numjobs=1 --filename="trtype=RDMA adrfam=IPV4 traddr=172.17.2.175 trsvcid=11345 subnqn=nqn.2017-06.io.spdk-MPcnode1 ns=1" --bs=4K --iodepth=1 --ioengine=/home.local/sfast/spdk20/spdk/examples/nvme/fio_plugin/fio_plugin --sync=0 --norandommap --group_reporting --size=12K --runtime=3 -rwmixwrite=30 --thread=1 --rw=rw
nvme: (g=0): rw=rw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=1
fio-3.3
Starting 1 thread
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: fio -c 0x1 -m 512 --file-prefix=spdk_pid28214 ]
EAL: Detected 32 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
nvme_ctrlr.c:1031:nvme_ctrlr_construct_namespaces: *ERROR*: controller has 0 namespaces
fio_plugin.c: 298:spdk_fio_setup: *ERROR*: spdk_nvme_probe()
Thanks for your suggestion
Sreeni
4 years, 3 months
Re: [SPDK] Buffer I/O error on bigger block size running fio
by Harris, James R
Hi Victor,
Could you provide a few more details? This will help the list to provide some ideas.
1) On the client, are you using the SPDK NVMe-oF initiator or the kernel initiator?
2) Can you provide the fio configuration file or command line? Just so we can have more specifics on “bigger block size”.
3) Any details on the HW setup – specifically details on the RDMA NIC (or if you’re using SW RoCE).
Thanks,
-Jim
From: SPDK <spdk-bounces(a)lists.01.org> on behalf of Victor Banh <victorb(a)mellanox.com>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org>
Date: Thursday, October 5, 2017 at 11:26 AM
To: "spdk(a)lists.01.org" <spdk(a)lists.01.org>
Subject: [SPDK] Buffer I/O error on bigger block size running fio
Hi
I have an SPDK NVMeoF setup and keep getting errors with bigger block sizes when running fio randwrite tests.
I am using Ubuntu 16.04 with kernel version 4.12.0-041200-generic on target and client.
The DPDK is 17.08 and SPDK is 17.07.1.
Thanks
Victor
[46905.233553] perf: interrupt took too long (2503 > 2500), lowering kernel.perf_event_max_sample_rate to 79750
[48285.159186] blk_update_request: I/O error, dev nvme1n1, sector 2507351968
[48285.159207] blk_update_request: I/O error, dev nvme1n1, sector 1301294496
[48285.159226] blk_update_request: I/O error, dev nvme1n1, sector 1947371168
[48285.159239] blk_update_request: I/O error, dev nvme1n1, sector 1891797568
[48285.159252] blk_update_request: I/O error, dev nvme1n1, sector 10833824
[48285.159265] blk_update_request: I/O error, dev nvme1n1, sector 614937152
[48285.159277] blk_update_request: I/O error, dev nvme1n1, sector 1872305088
[48285.159290] blk_update_request: I/O error, dev nvme1n1, sector 1504491040
[48285.159299] blk_update_request: I/O error, dev nvme1n1, sector 1182136128
[48285.159308] blk_update_request: I/O error, dev nvme1n1, sector 1662985792
[48285.191185] nvme nvme1: Reconnecting in 10 seconds...
[48285.191254] Buffer I/O error on dev nvme1n1, logical block 0, async page read
[48285.191291] Buffer I/O error on dev nvme1n1, logical block 0, async page read
[48285.191305] Buffer I/O error on dev nvme1n1, logical block 0, async page read
[48285.191314] ldm_validate_partition_table(): Disk read failed.
[48285.191320] Buffer I/O error on dev nvme1n1, logical block 0, async page read
[48285.191327] Buffer I/O error on dev nvme1n1, logical block 0, async page read
[48285.191335] Buffer I/O error on dev nvme1n1, logical block 0, async page read
[48285.191342] Buffer I/O error on dev nvme1n1, logical block 0, async page read
[48285.191347] Dev nvme1n1: unable to read RDB block 0
[48285.191353] Buffer I/O error on dev nvme1n1, logical block 0, async page read
[48285.191360] Buffer I/O error on dev nvme1n1, logical block 0, async page read
[48285.191375] Buffer I/O error on dev nvme1n1, logical block 3, async page read
[48285.191389] nvme1n1: unable to read partition table
[48285.223197] nvme1n1: detected capacity change from 1600321314816 to 0
[48289.623192] nvme1n1: detected capacity change from 0 to -65647705833078784
[48289.623411] ldm_validate_partition_table(): Disk read failed.
[48289.623447] Dev nvme1n1: unable to read RDB block 0
[48289.623486] nvme1n1: unable to read partition table
[48289.643305] ldm_validate_partition_table(): Disk read failed.
[48289.643328] Dev nvme1n1: unable to read RDB block 0
[48289.643373] nvme1n1: unable to read partition table
4 years, 3 months
Nvme discover using link-local
by Suman Chakraborty
Hi ,
I am trying to discover using my IPv6 link-local address:
nvme discover -t rdma -a fe80::0c6:11ff:fe41:0516%eth0 -s 4420
It is unable to discover. When I check dmesg on the initiator side I get the error "nvme nvme0: rdma_resolve_addr wait failed (-110)".
Can someone help resolve this issue?
Regards
Suman Chakraborty
4 years, 4 months
SPDK NVMe CMB WDS/RDS Support: Thanks and next steps!
by Stephen Bates
Hi SPDK Team
I wanted to start by thanking everyone for the great feedback on the first set of CMB WDS/RDS enablement patches, which went into master over the past few days (e.g. [1]). Some suspected bug sightings have already been reported, so with luck those patches will mature quickly ;-).
Now I wanted to pick the community's brains on the best way to approach a few topics:
1. Documentation. I would like to update the API documentation (which I believe is auto-generated) as well as add a new file in docs/ discussing some of the issues involved in setting up Peer-2-Peer DMAs (which cmb_copy does). Any tips on how best to do this?
2. CI Testing. Upstream QEMU has support in its NVMe model for SSDs with WDS/RDS CMBs [2] (I should know, as I added that support ;-)). Can we discuss adding this to the CI pool so we can do some form of emulated P2P testing? (A rough sketch of such a QEMU invocation appears after this list.) In addition, is there interest in real HW testing? If so we could discuss adding some of our HW to the pool (but as a lowly startup I think donating HW is beyond our budget right now).
3. VFIO Support. Right now I have only tested with UIO. VFIO adds some interesting issues around BAR address translations, PCI ACS and PCI ATS.
4. Fabrics Support. An obvious extension of this work is to allow other devices (aside from NVMe SSDs) to initiate DMAs to the NVMe CMBs. The prime candidate for that is a RDMA capable NIC which ties superbly well into NVMe over Fabrics. I would like to start a discussion on how best to approach this.
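To make item 2 above concrete, here is a rough sketch of how a CI harness might boot a QEMU guest whose emulated NVMe SSD exposes a CMB via the cmb_size_mb property referenced in [2]. The binary path, image names, and sizes are placeholders, not a working CI script:
# Rough sketch: launch QEMU with an emulated NVMe SSD that exposes a CMB.
# Only cmb_size_mb comes from the QEMU NVMe model in [2]; paths, image names,
# and sizes below are placeholders.
import subprocess

qemu_cmd = [
    "qemu-system-x86_64",
    "-machine", "q35,accel=kvm",
    "-m", "4096",
    "-drive", "file=guest-os.qcow2,if=virtio",                       # guest OS image (placeholder)
    "-drive", "file=nvme-backing.img,if=none,id=nvme0,format=raw",   # NVMe backing file (placeholder)
    "-device", "nvme,drive=nvme0,serial=cmb-test,cmb_size_mb=64",    # 64 MB controller memory buffer
]

# A CI job would run this detached and then drive the SPDK tests inside the
# guest; here we only show the invocation.
subprocess.run(qemu_cmd, check=True)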
Is Trello the right place to enter and discuss these topics? Or is it OK to hash them out on the mailing list? Or does the community have a better way of discussing these items?
Cheers
Stephen
[1] https://github.com/spdk/spdk/commit/1f9da54e9cca75c1a049844b36319a52fdbacbd6
[2] https://github.com/qemu/qemu/blob/master/hw/block/nvme.c (see cmb_size_mb)
4 years, 4 months
Re: [SPDK] lvol function: hangs up with Nvme bdev.
by Terry_MF_Kao@wistron.com
Hi Maciek,
> Hello Terry,
> I pushed a patch (already merged on master) that changes the default behavior of lvol store creation. Currently we write zeros on metadata space and unmap data clusters. Please check if this works well in your environment.
> Here is the patch:
> https://review.gerrithub.io/#/c/387152/
I did the same testing in my environment.
Yes, it works. The command now takes about 2 minutes, as shown below:
# time ./scripts/rpc.py construct_lvol_store Nvme0n1 lvs_1 -c 65536
c7f2a420-186f-11e8-9e50-00e04c6805c8
real 2m12.727s
user 0m0.073s
sys 0m0.016s
Regards,
Terry
4 years, 4 months
Re: [SPDK] Linker errors after adding new library
by Harris, James R
Hi Avinash,
Can you post your patch to GerritHub? That is the best way for others to help.
If you put [RFC] in the title of your patch, it will not be added to the patch queue for the CI test pool.
Regards,
-Jim
From: SPDK <spdk-bounces(a)lists.01.org> on behalf of Avinash M N <Avinash.M.N(a)wdc.com>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org>
Date: Thursday, February 22, 2018 at 7:52 AM
To: "spdk(a)lists.01.org" <spdk(a)lists.01.org>
Subject: [SPDK] Linker errors after adding new library
Hello everyone,
I’m trying to add a new library in spdk/lib which will provide some statistical information. The functions from this library can be called from other libraries. One of the functions from my library is called from env_dpdk/env.c. During compilation, I’m getting the following linker errors only in vhost and iscsi_tgt apps.
/root/spdk/build/lib/libspdk_env_dpdk.a(env.o): In function `spdk_dma_zmalloc':
/root/spdk/lib/env_dpdk/env.c:87: undefined reference to `malloc_stats’
collect2: error: ld returned 1 exit status
make[2]: *** [iscsi_tgt] Error 1
make[1]: *** [iscsi_tgt] Error 2
make[1]: *** Waiting for unfinished jobs....
/root/spdk/build/lib/libspdk_env_dpdk.a(env.o): In function `spdk_dma_zmalloc':
/root/spdk/lib/env_dpdk/env.c:87: undefined reference to `malloc_stats'
collect2: error: ld returned 1 exit status
make[2]: *** [vhost] Error 1
make[1]: *** [vhost] Error 2
make: *** [app] Error 2
The nvmf_tgt application compiled successfully. I’m not able to figure out what is going wrong, since I have made the same changes in the Makefiles of nvmf_tgt, vhost, and iscsi_tgt. I tried re-ordering the library in the Makefile, but it did not solve the issue. Can anyone suggest what might be going wrong?
Thanks,
Avinash
4 years, 4 months
Linker errors after adding new library
by Avinash M N
Hello everyone,
I'm trying to add a new library in spdk/lib which will provide some statistical information. The functions from this library can be called from other libraries. One of the functions from my library is called from env_dpdk/env.c. During compilation, I'm getting the following linker errors only in vhost and iscsi_tgt apps.
/root/spdk/build/lib/libspdk_env_dpdk.a(env.o): In function `spdk_dma_zmalloc':
/root/spdk/lib/env_dpdk/env.c:87: undefined reference to `malloc_stats'
collect2: error: ld returned 1 exit status
make[2]: *** [iscsi_tgt] Error 1
make[1]: *** [iscsi_tgt] Error 2
make[1]: *** Waiting for unfinished jobs....
/root/spdk/build/lib/libspdk_env_dpdk.a(env.o): In function `spdk_dma_zmalloc':
/root/spdk/lib/env_dpdk/env.c:87: undefined reference to `malloc_stats'
collect2: error: ld returned 1 exit status
make[2]: *** [vhost] Error 1
make[1]: *** [vhost] Error 2
make: *** [app] Error 2
The nvmf_tgt application compiled successfully. I'm not able to figure out what is going wrong, since I have made the same changes in the Makefiles of nvmf_tgt, vhost, and iscsi_tgt. I tried re-ordering the library in the Makefile, but it did not solve the issue. Can anyone suggest what might be going wrong?
Thanks,
Avinash
4 years, 4 months