Question about SPDK on ARM
by Wei Hu (Xavier)
Hi all,
I need to compile SPDK on an ARM Linux operating system,
but a few questions need to be answered first.
The questions are as follows.
Question 1: Does an ARM-based version of SPDK exist?
If not, how should SPDK be configured for ARM?
Question 2: If one compiles SPDK on an ARM Linux
system, are there any build methods or instructions?
I would greatly appreciate any help you could offer.
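For context, the usual (x86-documented) build flow looks like the following. This is only a sketch of a starting point for an ARM experiment, not official ARM instructions, and it assumes a recent checkout:
# Usual SPDK build flow (documented for x86; ARM support is an open question).
git clone https://github.com/spdk/spdk.git
cd spdk
git submodule update --init   # recent trees carry DPDK as a submodule;
                              # older releases need ./configure --with-dpdk=<path>
sudo ./scripts/pkgdep.sh      # install build prerequisites
./configure
make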
Regards
David Hu
5 years, 1 month
CQ error
by Ramaraj Pandian
When I run my app (many threads generate IO and only one thread submits IO to the SQ) through nvmf, I get the following error:
Target (SPDK tgt app):
mlx5: msl-dsma-sierra24.msl.lab: got completion with error:
00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000
00000002 00000000 00000000 00000000
00000000 00008716 0a000486 07c9c8d3
rdma.c:1473:spdk_nvmf_rdma_poll: ***ERROR*** CQ error on Connection 0x7f8524004f30, Request 0x140209811671856 (13): RNR retry counter exceeded
session.c: 638:spdk_nvmf_session_poll: ***ERROR*** Transport poll failed for conn 0x7f8524004f30; closing connection
Initiator:
- Hangs because of the above error
I also came across a suggestion on the web to "set up the GID in the global routing header in the queue attributes".
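For concreteness, here is a minimal sketch of what that suggestion and the RNR retry knob look like at the ibverbs level. This is illustrative only, not SPDK's code; remote_gid, sgid_index, dlid and port 1 are placeholders. Status (13) is IBV_WC_RNR_RETRY_EXC_ERR, so the RNR retry count set at RTS and the GRH/GID fields set at RTR are the usual suspects:
#include <string.h>
#include <infiniband/verbs.h>

/* Bring a connected QP to RTR/RTS, setting the GRH (GID) fields and the
 * RNR retry behaviour. rnr_retry = 7 means "retry indefinitely". */
static int qp_to_rtr_rts(struct ibv_qp *qp, uint32_t dest_qpn, uint16_t dlid,
                         union ibv_gid remote_gid, uint8_t sgid_index)
{
    struct ibv_qp_attr attr;

    memset(&attr, 0, sizeof(attr));
    attr.qp_state = IBV_QPS_RTR;
    attr.path_mtu = IBV_MTU_4096;
    attr.dest_qp_num = dest_qpn;
    attr.rq_psn = 0;
    attr.max_dest_rd_atomic = 1;
    attr.min_rnr_timer = 12;              /* responder-side RNR NAK timer */
    attr.ah_attr.is_global = 1;           /* carry a GRH (required for RoCE) */
    attr.ah_attr.grh.dgid = remote_gid;   /* the "gid in the global routing header" */
    attr.ah_attr.grh.sgid_index = sgid_index;
    attr.ah_attr.grh.hop_limit = 1;
    attr.ah_attr.dlid = dlid;
    attr.ah_attr.port_num = 1;
    if (ibv_modify_qp(qp, &attr,
                      IBV_QP_STATE | IBV_QP_PATH_MTU | IBV_QP_DEST_QPN |
                      IBV_QP_RQ_PSN | IBV_QP_MAX_DEST_RD_ATOMIC |
                      IBV_QP_MIN_RNR_TIMER | IBV_QP_AV))
        return -1;

    memset(&attr, 0, sizeof(attr));
    attr.qp_state = IBV_QPS_RTS;
    attr.timeout = 14;
    attr.retry_cnt = 7;
    attr.rnr_retry = 7;                   /* 7 = infinite RNR retries */
    attr.sq_psn = 0;
    attr.max_rd_atomic = 1;
    return ibv_modify_qp(qp, &attr,
                         IBV_QP_STATE | IBV_QP_TIMEOUT | IBV_QP_RETRY_CNT |
                         IBV_QP_RNR_RETRY | IBV_QP_SQ_PSN |
                         IBV_QP_MAX_QP_RD_ATOMIC);
}
With librdmacm, which the SPDK NVMe-oF target builds on, the analogous initiator-side knob is rnr_retry_count in the rdma_conn_param passed to rdma_connect(). Either way, RNR retries only paper over the underlying issue, which is the responder running out of posted receive buffers.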
Any insights will be appreciated.
Thanks
Ram
5 years, 1 month
Announcing the SPDK Trello Boards!
by Luse, Paul E
We've started a series of Trello boards to help the community find development activities to either own or participate in. The link is below and everyone is welcome to join in. If you don't have a Trello account, it's super easy (and free) to set one up; once you do, just provide your handle on IRC (or the dist list if you don't have IRC yet) and one of us can add you. You don't need an account just to browse...
The idea is simple:
* There's a board for "Things To Do" that has self-explanatory columns; anyone is free to add items or to chat with anyone working on an item to see if they can help
* Each card that isn't totally obvious should have a "blueprint" that describes some very basic elements, mainly "what problem is being solved" and some basics about the proposed solution. As design/development begins, the Trello card is a great place to add more details
* For more complex features, a card can be expanded into a complete board; you'll see that there are already a few such features up there (like logical volumes). Not only are those boards great for further documenting the design, but they're also a fantastic way to break down the backlog and let multiple people collaborate
* The roadmap currently on the community webpage will eventually have a card or a board for each item there as well.
* Note that there are also GitHub issues that can be looked at; if their solutions turn into something more than a quick patch, Trello is the place to get the ball rolling
Finally, remember that this is a community tool, so if you find it valuable please use it, and if not, please speak up in IRC so we can all work to make it better!
Thanks,
Paul
Trello: https://trello.com/spdk
IRC: #spdk on freenode
Webpage: http://spdk.io
5 years, 1 month
Re: [SPDK] spdk can't pass fio test if 4 clients testing with 4 split partitions
by nixun_992@sina.com
Why is there such a limitation (size >= 4k)? In my opinion, the guest kernel should not have any such limitation; spdk vhost should handle it.
Thanks,
Xun
---------------------------
Try bsrange from 4k (not 1k). For direct IO you should not send IO that is smaller than minimum_io_size/hw_sector_size. Also,
can you send the qemu and vhost launch commands? Commit IDs of the working and non-working versions would also help us, because we don't know what "old version" means.
If the resets are triggered from the guest OS, you should see some failures in the guest dmesg.
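A quick way to read those limits from inside the guest, assuming the vhost disk shows up as /dev/sda as described below:
cat /sys/block/sda/queue/minimum_io_size   # smallest IO that direct IO should use
cat /sys/block/sda/queue/hw_sector_size    # hardware sector size
blockdev --getss /dev/sda                  # logical sector size, same idea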
Pawel
From: SPDK [mailto:[email protected]]
On Behalf Of nixun_992(a)sina.com
Sent: Tuesday, June 13, 2017 10:48 AM
To: spdk <spdk(a)lists.01.org>
Subject: [SPDK] Re: spdk can't pass fio test if 4 clients testing with 4 split partitions
Hi, Pawel & Changpeng:
No, it is not formatted for a 512-byte size; I just specify random IO sizes to stress the SPDK vhost program. As for /mnt/ssdtest1: the target disk shows up as /dev/sda, and I mount /dev/sda1 on /mnt; ssdtest1 is the test file for the fio test.
My guest OS is CentOS 7.1, and dmesg does not show many problems. The main problem is that SPDK keeps resetting the controller; I am not sure why this happens, and I didn't see it in the old version.
Thanks,
Xun
================================
I see bsrange 1k to 512k; is the NVMe formatted with a 512B block size here?
Which commit did you use for this test?
filename=/mnt/ssdtest1 – is this a directory on a mounted filesystem?
Can you send us the dmesg from the failure?
Paweł
From: SPDK
[mailto:[email protected]]
On Behalf Of Liu, Changpeng
Sent: Tuesday, June 13, 2017 9:18 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] spdk can't pass fio test if 4 clients testing with 4 split partitions
Thanks Xun.
We’ll take a look at the issue.
From the error log message, it seems a SCSI task management command was received by SPDK;
the reason the VM sent the task management command is most likely a timeout on some
commands.
From: SPDK
[mailto:[email protected]]
On Behalf Of nixun_992(a)sina.com
Sent: Monday, June 12, 2017 3:40 PM
To: spdk <spdk(a)lists.01.org>
Subject: [SPDK] spdk can't pass fio test if 4 clients testing with 4 split partitions
Hi, All:
SPDK can't pass the fio test after 2 hours of testing, while it does pass the same test with the version from before Mar 29.
The error messages are the following:
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:134 nsid:1 lba:1310481280 len:128
ABORTED - BY REQUEST (00/07) sqid:1 cid:134 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:202 nsid:1 lba:1310481408 len:256
ABORTED - BY REQUEST (00/07) sqid:1 cid:202 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:151 nsid:1 lba:1310481664 len:256
ABORTED - BY REQUEST (00/07) sqid:1 cid:151 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:243 nsid:1 lba:1312030816 len:96
ABORTED - BY REQUEST (00/07) sqid:1 cid:243 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
resetting controller
resetting controller
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:253 nsid:1 lba:998926248 len:88
ABORTED - BY REQUEST (00/07) sqid:1 cid:253 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:243 nsid:1 lba:1049582336 len:176
ABORTED - BY REQUEST (00/07) sqid:1 cid:243 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:169 nsid:1 lba:1109679488 len:128
ABORTED - BY REQUEST (00/07) sqid:1 cid:169 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:134 nsid:1 lba:958884728 len:136
ABORTED - BY REQUEST (00/07) sqid:1 cid:134 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:152 nsid:1 lba:1018345728 len:240
ABORTED - BY REQUEST (00/07) sqid:1 cid:152 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:234 nsid:1 lba:898096896 len:8
ABORTED - BY REQUEST (00/07) sqid:1 cid:234 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:130 nsid:1 lba:991125248 len:96
ABORTED - BY REQUEST (00/07) sqid:1 cid:130 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
resetting controller
resetting controller
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:130 nsid:1 lba:609149952 len:64
ABORTED - BY REQUEST (00/07) sqid:1 cid:130 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
=========================
Our vhost conf is the following:
# The Split virtual block device slices block devices into multiple smaller bdevs.
[Split]
# Syntax:
# Split <bdev> <count> [<size_in_megabytes>]
#
# Split Nvme1n1 into two equally-sized portions, Nvme1n1p0 and Nvme1n1p1
Split Nvme0n1 4 200000
# Split Malloc2 into eight 1-megabyte portions, Malloc2p0 ... Malloc2p7,
# leaving the rest of the device inaccessible
#Split Malloc2 8 1
[VhostScsi0]
Dev 0 Nvme0n1p0
[VhostScsi1]
Dev 0 Nvme0n1p1
[VhostScsi2]
Dev 0 Nvme0n1p2
[VhostScsi3]
Dev 0 Nvme0n1p3
The fio script is the following:
[global]
filename=/mnt/ssdtest1
size=100G
numjobs=8
iodepth=16
ioengine=libaio
group_reporting
do_verify=1
verify=md5
# direct rand read
[rand-read]
bsrange=1k-512k
#direct=1
rw=randread
runtime=10000
stonewall
# direct seq read
[seq-read]
bsrange=1k-512k
direct=1
rw=read
runtime=10000
stonewall
# direct rand write
[rand-write]
bsrange=1k-512k
direct=1
rw=randwrite
runtime=10000
stonewall
# direct seq write
[seq-write]
bsrange=1k-512k
direct=1
rw=write
runtime=10000
5 years, 2 months
Re: [SPDK] spdk can't pass fio test if 4 clients testing with 4 split partitions
by nixun_992@sina.com
Hi, Pawel & Karol:
The ssdfile is generated by fio; I just specify the size. I will test with bsrange 4k-512k on the upstream code and give you feedback.
Thanks,
Xun
---------------------------------------------------------
Hi Xun,
Like Pawel mentioned – could you please try running the same config but with bsrange 4k-512k?
Also, could you tell us what the contents of the /mnt/ssdtest1 file are before you start the test?
Do you generate its contents before fio starts, or is it empty?
Karol
From: SPDK [mailto:[email protected]]
On Behalf Of Wodkowski, PawelX
Sent: Tuesday, June 13, 2017 1:15 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] Re: spdk can't pass fio test if 4 clients testing with 4 split partitions
Try bsrange from 4k (not 1k). For direct IO you should not send IO that is smaller than minimum_io_size/hw_sector_size. Also, can you send the qemu and vhost launch
commands? Commit IDs of the working and non-working versions would also help us, because we don't know what "old version" means.
If the resets are triggered from the guest OS, you should see some failures in the guest dmesg.
Pawel
From: SPDK [mailto:[email protected]]
On Behalf Of nixun_992(a)sina.com
Sent: Tuesday, June 13, 2017 10:48 AM
To: spdk <spdk(a)lists.01.org>
Subject: [SPDK] Re: spdk can't pass fio test if 4 clients testing with 4 split partitions
Hi, Pawel & Changpeng:
No, it is not formatted for a 512-byte size; I just specify random IO sizes to stress the SPDK vhost program. As for /mnt/ssdtest1: the target disk shows up as /dev/sda, and I mount /dev/sda1 on /mnt; ssdtest1 is the test file for the fio test.
My guest OS is CentOS 7.1, and dmesg does not show many problems. The main problem is that SPDK keeps resetting the controller; I am not sure why this happens, and I didn't see it in the old version.
Thanks,
Xun
================================
I see bsrange 1k to 512k; is the NVMe formatted with a 512B block size here?
Which commit did you use for this test?
filename=/mnt/ssdtest1 – is this a directory on a mounted filesystem?
Can you send us the dmesg from the failure?
Paweł
From: SPDK [mailto:[email protected]]
On Behalf Of Liu, Changpeng
Sent: Tuesday, June 13, 2017 9:18 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] spdk can't pass fio test if 4 clients testing with 4 split partitions
Thanks Xun.
We’ll take a look at the issue.
From the error log message, it seems a SCSI task management command was received by SPDK;
the reason the VM sent the task management command is most likely a timeout on some
commands.
From: SPDK [mailto:[email protected]]
On Behalf Of nixun_992(a)sina.com
Sent: Monday, June 12, 2017 3:40 PM
To: spdk <spdk(a)lists.01.org>
Subject: [SPDK] spdk can't pass fio test if 4 clients testing with 4 split partitions
Hi, All:
SPDK can't pass the fio test after 2 hours of testing, while it does pass the same test with the version from before Mar 29.
The error messages are the following:
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:134 nsid:1 lba:1310481280 len:128
ABORTED - BY REQUEST (00/07) sqid:1 cid:134 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:202 nsid:1 lba:1310481408 len:256
ABORTED - BY REQUEST (00/07) sqid:1 cid:202 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:151 nsid:1 lba:1310481664 len:256
ABORTED - BY REQUEST (00/07) sqid:1 cid:151 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:243 nsid:1 lba:1312030816 len:96
ABORTED - BY REQUEST (00/07) sqid:1 cid:243 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
resetting controller
resetting controller
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:253 nsid:1 lba:998926248 len:88
ABORTED - BY REQUEST (00/07) sqid:1 cid:253 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:243 nsid:1 lba:1049582336 len:176
ABORTED - BY REQUEST (00/07) sqid:1 cid:243 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:169 nsid:1 lba:1109679488 len:128
ABORTED - BY REQUEST (00/07) sqid:1 cid:169 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:134 nsid:1 lba:958884728 len:136
ABORTED - BY REQUEST (00/07) sqid:1 cid:134 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:152 nsid:1 lba:1018345728 len:240
ABORTED - BY REQUEST (00/07) sqid:1 cid:152 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:234 nsid:1 lba:898096896 len:8
ABORTED - BY REQUEST (00/07) sqid:1 cid:234 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:130 nsid:1 lba:991125248 len:96
ABORTED - BY REQUEST (00/07) sqid:1 cid:130 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
resetting controller
resetting controller
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:130 nsid:1 lba:609149952 len:64
ABORTED - BY REQUEST (00/07) sqid:1 cid:130 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
=========================
Our vhost conf is the following:
# The Split virtual block device slices block devices into multiple smaller bdevs.
[Split]
# Syntax:
# Split <bdev> <count> [<size_in_megabytes>]
#
# Split Nvme1n1 into two equally-sized portions, Nvme1n1p0 and Nvme1n1p1
Split Nvme0n1 4 200000
# Split Malloc2 into eight 1-megabyte portions, Malloc2p0 ... Malloc2p7,
# leaving the rest of the device inaccessible
#Split Malloc2 8 1
[VhostScsi0]
Dev 0 Nvme0n1p0
[VhostScsi1]
Dev 0 Nvme0n1p1
[VhostScsi2]
Dev 0 Nvme0n1p2
[VhostScsi3]
Dev 0 Nvme0n1p3
The fio script is the following:
[global]
filename=/mnt/ssdtest1
size=100G
numjobs=8
iodepth=16
ioengine=libaio
group_reporting
do_verify=1
verify=md5
# direct rand read
[rand-read]
bsrange=1k-512k
#direct=1
rw=randread
runtime=10000
stonewall
# direct seq read
[seq-read]
bsrange=1k-512k
direct=1
rw=read
runtime=10000
stonewall
# direct rand write
[rand-write]
bsrange=1k-512k
direct=1
rw=randwrite
runtime=10000
stonewall
# direct seq write
[seq-write]
bsrange=1k-512k
direct=1
rw=write
runtime=10000
5 years, 2 months
FIO Plugin ERROR - Failed to find namespace 'ns=X'
by OConnell, Brendan (Eng)
Hi,
When trying to run the FIO Plugin I get an error:
Failed to find namespace 'ns=X'
I believe I have installed everything correctly and SPDK grabs the drives as expected.
Any ideas where I may have gone wrong?
It's a new CentOS 7 build with kernel 4.11.
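For reference, a minimal job file for the SPDK NVMe fio plugin looks roughly like this. This is a sketch: the PCI address 0000.06.00.0 and ns=1 are placeholders, colons in traddr are written as periods, and ns must name a namespace that actually exists (and is active) on that controller:
[global]
ioengine=spdk
thread=1

[test]
filename=trtype=PCIe traddr=0000.06.00.0 ns=1
rw=randread
bs=4k
time_based=1
runtime=10
Run it with something like LD_PRELOAD=<spdk>/examples/nvme/fio_plugin/fio_plugin fio test.fio, where <spdk> stands for your SPDK checkout path. If ns=1 still fails, the examples/nvme/identify tool shows which namespace IDs the controller actually reports.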
Thanks
Brendan
5 years, 2 months
Re: spdk can't pass fio test if 4 clients testing with 4 split partitions
by Latecki, Karol
Hi Xun,
Like Pawel mentioned – could you please try running the same config but with bsrange 4k-512k?
Also, could you tell us what the contents of the /mnt/ssdtest1 file are before you start the test?
Do you generate its contents before fio starts, or is it empty?
Karol
From: SPDK [mailto:[email protected]] On Behalf Of Wodkowski, PawelX
Sent: Tuesday, June 13, 2017 1:15 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] Re: spdk can't pass fio test if 4 clients testing with 4 split partitions
Try bsrange from 4k (not 1k). For direct IO you should not send IO that is smaller than minimum_io_size/hw_sector_size. Also, can you send the qemu and vhost launch commands? Commit IDs of the working and non-working versions would also help us, because we don't know what "old version" means.
If the resets are triggered from the guest OS, you should see some failures in the guest dmesg.
Pawel
From: SPDK [mailto:[email protected]] On Behalf Of nixun_992(a)sina.com
Sent: Tuesday, June 13, 2017 10:48 AM
To: spdk <spdk(a)lists.01.org>
Subject: [SPDK] Re: spdk can't pass fio test if 4 clients testing with 4 split partitions
Hi, Pawel & Changpeng:
No, it is not formatted for a 512-byte size; I just specify random IO sizes to stress the SPDK vhost program. As for /mnt/ssdtest1: the target disk shows up as /dev/sda, and I mount /dev/sda1 on /mnt; ssdtest1 is the test file for the fio test.
My guest OS is CentOS 7.1, and dmesg does not show many problems. The main problem is that SPDK keeps resetting the controller; I am not sure why this happens, and I didn't see it in the old version.
Thanks,
Xun
================================
I see bsrange 1k to 512k; is the NVMe formatted with a 512B block size here?
Which commit did you use for this test?
filename=/mnt/ssdtest1 – is this a directory on a mounted filesystem?
Can you send us the dmesg from the failure?
Paweł
From: SPDK [mailto:[email protected]] On Behalf Of Liu, Changpeng
Sent: Tuesday, June 13, 2017 9:18 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] spdk can't pass fio test if 4 clients testing with 4 split partitions
Thanks Xun.
We’ll take a look at the issue.
From the error log message, it seems a SCSI task management command was received by SPDK;
the reason the VM sent the task management command is most likely a timeout on some
commands.
From: SPDK [mailto:[email protected]] On Behalf Of nixun_992(a)sina.com
Sent: Monday, June 12, 2017 3:40 PM
To: spdk <spdk(a)lists.01.org>
Subject: [SPDK] spdk can't pass fio test if 4 clients testing with 4 split partitions
Hi, All:
SPDK can't pass the fio test after 2 hours of testing, while it does pass the same test with the version from before Mar 29.
The error messages are the following:
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:134 nsid:1 lba:1310481280 len:128
ABORTED - BY REQUEST (00/07) sqid:1 cid:134 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:202 nsid:1 lba:1310481408 len:256
ABORTED - BY REQUEST (00/07) sqid:1 cid:202 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:151 nsid:1 lba:1310481664 len:256
ABORTED - BY REQUEST (00/07) sqid:1 cid:151 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:243 nsid:1 lba:1312030816 len:96
ABORTED - BY REQUEST (00/07) sqid:1 cid:243 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
resetting controller
resetting controller
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:253 nsid:1 lba:998926248 len:88
ABORTED - BY REQUEST (00/07) sqid:1 cid:253 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:243 nsid:1 lba:1049582336 len:176
ABORTED - BY REQUEST (00/07) sqid:1 cid:243 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:169 nsid:1 lba:1109679488 len:128
ABORTED - BY REQUEST (00/07) sqid:1 cid:169 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:134 nsid:1 lba:958884728 len:136
ABORTED - BY REQUEST (00/07) sqid:1 cid:134 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:152 nsid:1 lba:1018345728 len:240
ABORTED - BY REQUEST (00/07) sqid:1 cid:152 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:234 nsid:1 lba:898096896 len:8
ABORTED - BY REQUEST (00/07) sqid:1 cid:234 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:130 nsid:1 lba:991125248 len:96
ABORTED - BY REQUEST (00/07) sqid:1 cid:130 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
resetting controller
resetting controller
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:130 nsid:1 lba:609149952 len:64
ABORTED - BY REQUEST (00/07) sqid:1 cid:130 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
=========================
Our vhost conf is the following:
# The Split virtual block device slices block devices into multiple smaller bdevs.
[Split]
# Syntax:
# Split <bdev> <count> [<size_in_megabytes>]
#
# Split Nvme1n1 into two equally-sized portions, Nvme1n1p0 and Nvme1n1p1
Split Nvme0n1 4 200000
# Split Malloc2 into eight 1-megabyte portions, Malloc2p0 ... Malloc2p7,
# leaving the rest of the device inaccessible
#Split Malloc2 8 1
[VhostScsi0]
Dev 0 Nvme0n1p0
[VhostScsi1]
Dev 0 Nvme0n1p1
[VhostScsi2]
Dev 0 Nvme0n1p2
[VhostScsi3]
Dev 0 Nvme0n1p3
The fio script is the following:
[global]
filename=/mnt/ssdtest1
size=100G
numjobs=8
iodepth=16
ioengine=libaio
group_reporting
do_verify=1
verify=md5
# direct rand read
[rand-read]
bsrange=1k-512k
#direct=1
rw=randread
runtime=10000
stonewall
# direct seq read
[seq-read]
bsrange=1k-512k
direct=1
rw=read
runtime=10000
stonewall
# direct rand write
[rand-write]
bsrange=1k-512k
direct=1
rw=randwrite
runtime=10000
stonewall
# direct seq write
[seq-write]
bsrange=1k-512k
direct=1
rw=write
runtime=10000
5 years, 2 months
[SPDK-NVMeF]: Issues with Namespace Reservation
by Ankur Srivastava
Hi All,
I was trying some reservation-related operations through the examples given
in the SPDK folder.
Through nvme-manage, I was able to create a namespace and
attach it to the controller.
While executing the reserve application, I am getting the following error:
Setup: PCIe attached Intel NVMe SSD
[[email protected] reserve]# ./reserve
EAL: Detected 12 lcore(s)
EAL: Auto-detected process type: PRIMARY
EAL: Probing VFIO support...
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL: probe driver: 8086:984 spdk_nvme
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL: probe driver: 8086:984 spdk_nvme
=====================================================
NVMe Controller at PCI bus 6, device 0, function 0
=====================================================
Reservations: Supported
Set Feature: Host Identifier 0xababababcdcdcdcd
could not find 2MB vfn 0x3ffe4e9 in DPDK mem config
SET FEATURES (09) sqid:0 cid:55 nsid:0 cdw10:00000081 cdw11:00000000
INVALID FIELD (00/02) sqid:0 cid:55 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
Set Feature: Failed
could not find 2MB vfn 0x3ffe4e9 in DPDK mem config
GET FEATURES (0a) sqid:0 cid:55 nsid:0 cdw10:00000081 cdw11:00000000
INVALID FIELD (00/02) sqid:0 cid:55 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
Get Feature: Failed
RESERVATION REGISTER (0d) sqid:1 cid:127 nsid:1
INVALID FIELD (00/02) sqid:1 cid:127 cdw0:0 sqhd:0001 p:1 m:1 dnr:0
Reservation Register Failed
RESERVATION ACQUIRE (11) sqid:1 cid:127 nsid:1
INVALID FIELD (00/02) sqid:1 cid:127 cdw0:0 sqhd:0002 p:1 m:1 dnr:0
Reservation Acquire Failed
Reservation Generation Counter 0
Reservation type 0
Reservation Number of Registered Controllers 0
Reservation Persist Through Power Loss State 0
RESERVATION RELEASE (15) sqid:1 cid:127 nsid:1
INVALID FIELD (00/02) sqid:1 cid:127 cdw0:0 sqhd:0004 p:1 m:1 dnr:0
Reservation Release Failed
Cleaning up...
I have, however, already reserved and checked the hugepages:
mount -t hugetlbfs nodev /mnt/huge
echo 2048 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
echo 2048 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
./setup.sh
[[email protected] reserve]# cat
/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
1195
[[email protected] reserve]# cat
/sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
12
Am I doing something wrong or missing something? Thanks in advance.
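One hedged guess, prompted by the "could not find 2MB vfn ... in DPDK mem config" lines: that message usually means a command payload buffer did not come from SPDK/DPDK hugepage memory, so the driver cannot translate it for DMA. A hypothetical sketch of the safe pattern (not the actual reserve.c; the host ID value is the one from the log above):
#include <stdint.h>
#include <spdk/env.h>
#include <spdk/nvme.h>

/* Set the Host Identifier (feature 0x81) using a DMA-safe buffer.
 * Payloads for NVMe commands must come from hugepage-backed memory,
 * e.g. spdk_dma_zmalloc(), not from the stack or plain malloc(). */
static int set_host_identifier(struct spdk_nvme_ctrlr *ctrlr,
                               spdk_nvme_cmd_cb cb_fn, void *cb_arg)
{
	uint64_t *host_id = spdk_dma_zmalloc(sizeof(*host_id), 4096, NULL);

	if (host_id == NULL) {
		return -1;
	}
	*host_id = 0xababababcdcdcdcdULL;
	return spdk_nvme_ctrlr_cmd_set_feature(ctrlr, 0x81 /* Host Identifier */,
					       0, 0, host_id, sizeof(*host_id),
					       cb_fn, cb_arg);
}
If the buffer in the example is already hugepage-backed, then the INVALID FIELD completions would instead suggest the drive does not accept the Host Identifier feature or the reservation commands in the form the example sends them.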
Regards
Ankur
5 years, 2 months