nvme_ctrlr.c:1224:nvme_ctrlr_process_init: ***ERROR*** Initialization timed out in state 3
by Oza Oza
I have ported SPDK to ARMv8, and DPDK is compiled at version 16.11.1.
Controller initialization is failing.
[email protected]:/home/oza/SPDK/spdk#
/home/oza/fio /home/oza/SPDK/spdk
--iodepth=128 --size=4G --readwrite=read --filename=0000.01.00.00/1 --bs=4096
EAL: pci driver is being registered 0x1
readtest: (g=0): rw=read, bs=4096B-4096B,4096B-4096B,4096B-4096B, ioengine=spdk_fio, iodepth=128
fio-2.17-29-gf0ac1
Starting 1 process
Starting Intel(R) DPDK initialization ...
[ DPDK EAL parameters: fio -c 1 --file-prefix=spdk_pid6448
--base-virtaddr=0x1000000000 --proc-type=auto ]
EAL: Detected 8 lcore(s)
EAL: Auto-detected process type: PRIMARY
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: cannot open /proc/self/numa_maps, consider that all memory is in
socket_id 0
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL: probe driver: 8086:953 spdk_nvme
EAL: using IOMMU type 1 (Type 1)
EAL: vfio_group_fd=11 iommu_group_no=3 *vfio_dev_fd=13
EAL: reg=0x2000 fd=13 cap_offset=0x50
EAL: the msi-x bar number is 0 0x2000 0x200
EAL: Hotplug doesn't support vfio yet
spdk_fio_setup() is being called
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL: probe driver: 8086:953 spdk_nvme
EAL: vfio_group_fd=11 iommu_group_no=3 *vfio_dev_fd=16
EAL: reg=0x2000 fd=16 cap_offset=0x50
EAL: the msi-x bar number is 0 0x2000 0x200
EAL: inside pci_vfio_write_config offset=4
nvme_ctrlr.c:1224:nvme_ctrlr_process_init: ***ERROR*** Initialization timed out in state 3
nvme_ctrlr.c: 403:nvme_ctrlr_shutdown: ***ERROR*** did not shutdown within 5 seconds
EAL: Hotplug doesn't support vfio yet
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL: probe driver: 8086:953 spdk_nvme
EAL: vfio_group_fd=11 iommu_group_no=3 *vfio_dev_fd=18
EAL: reg=0x2000 fd=18 cap_offset=0x50
EAL: the msi-x bar number is 0 0x2000 0x200
EAL: Hotplug doesn't support vfio yet
EAL: Requested device 0000:01:00.0 cannot be used
spdk_nvme_probe() failed
Regards,
Oza.
4 years, 10 months
RDMA Queue Pair
by Kumaraparameshwaran Rathnavel
Hi All,
I would like to get a few pointers on how RDMA is used in the NVMf target. The statement below decides the queue depth of an NVMf qpair:
nvmf_min(max_rw_depth, addr->attr.max_qp_rd_atom)
Anything beyond this value is queued, regardless of the queue depth the upper layer asked for.
The value of attr.max_qp_rd_atom is obtained by querying the device, and on most NICs I see a value of 16. Does this mean that a queue pair cannot have more outstanding RDMA requests than this value?
Please correct me if I am wrong.
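For context, a minimal C sketch of the clamping described above, using only the standard libibverbs device query; max_rw_depth and the helper name are illustrative stand-ins, not the target's actual symbols:

static uint32_t
rdma_qpair_effective_depth(struct ibv_context *ctx, uint32_t max_rw_depth)
{
	/* Requires <stdint.h> and <infiniband/verbs.h>. */
	struct ibv_device_attr attr;

	if (ibv_query_device(ctx, &attr) != 0) {
		return 0; /* device query failed */
	}

	/* Effective depth is the smaller of the transport's configured
	 * read/write depth and the device's advertised max_qp_rd_atom
	 * (maximum outstanding RDMA reads/atomics per QP). */
	if ((uint32_t)attr.max_qp_rd_atom < max_rw_depth) {
		return (uint32_t)attr.max_qp_rd_atom;
	}
	return max_rw_depth;
}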
Thanking You,
Param.
4 years, 12 months
tgt_node.c: 700:spdk_iscsi_tgt_node_construct: *ERROR*: Could not construct SCSI device
by Isaac Otsiabah
I have tested with "./app/iscsi_tgt/iscsi_tgt -m 0x101" many times, but now I am getting this error on both of my test systems. Has something changed?
tgt_node.c: 700:spdk_iscsi_tgt_node_construct: *ERROR*: Could not construct SCSI device
tgt_node.c: 966:spdk_cf_add_iscsi_tgt_node: *ERROR*: tgt_node1: add_iscsi_target_node error
tgt_node.c: 992:spdk_iscsi_init_tgt_nodes: *ERROR*: spdk_cf_add_iscsi_tgt_node() failed
iscsi_subsystem.c: 984:spdk_iscsi_init: *ERROR*: spdk_iscsi_init_tgt_nodes() failed
subsystem.c: 119:spdk_subsystem_init_next: *ERROR*: Init subsystem iscsi failed
my /usr/local/etc/spdk/iscsi.conf contents:
.....
.....
.....
[Nvme]
TransportID "trtype:PCIe traddr:0000:04:00.0" Nvme0
TransportID "trtype:PCIe traddr:0000:05:00.0" Nvme1
# If 'Yes', iscsi will automatically unbind the kernel NVMe driver from
# discovered devices and rebind it to the uio driver.
UnbindFromKernel Yes
# The following two arguments allow the user to partition NVMe namespaces
# into multiple LUNs
NvmeLunsPerNs 2
LunSizeInMB 1024
# The number of attempts per I/O when an I/O fails. Do not include
# this key to get the default behavior.
RetryCount 4
# The maximum number of NVMe controllers to claim. Do not include this key to
# claim all of them.
#NumControllers 1
[TargetNode1]
# Comment "Disk 1"
TargetName disk1
TargetAlias "Data Disk1"
Mapping PortalGroup1 InitiatorGroup1
AuthMethod Auto
AuthGroup AuthGroup1
UseDigest Auto
LUN0 Nvme0
LUN1 Nvme1
QueueDepth 128
My systems have these devices before I ran scripts/setup.sh:
[[email protected]]# ls -l /dev/*nv*
crw------- 1 root root 248, 0 Jun 30 11:44 /dev/nvme0
brw-rw---- 1 root disk 259, 0 Jun 30 11:44 /dev/nvme0n1
crw------- 1 root root 248, 1 Jun 30 11:44 /dev/nvme1
brw-rw---- 1 root disk 259, 1 Jun 30 11:44 /dev/nvme1n1
crw------- 1 root root 10, 144 Jun 29 18:31 /dev/nvram
4 years, 12 months
Re: [SPDK] Blob store
by tejasgole
I got past the issue. The blob store block size (4 KiB) was mismatched with the NVMe device's sector size (512 B). Once I converted the units correctly, everything worked.
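For anyone hitting the same mismatch, a small sketch of the unit conversion involved; the 4 KiB and 512 B figures come from the message above, and the constant and helper names are illustrative:

#include <stdint.h>

#define BS_BLOCK_SIZE   4096u   /* blob store I/O unit, bytes */
#define DEV_SECTOR_SIZE 512u    /* NVMe namespace sector size, bytes */

/* A 4 KiB blob store block spans eight 512 B sectors, so every blob-store
 * LBA (and length) must be scaled by 8 before it reaches the device. */
static inline uint64_t
bs_lba_to_dev_lba(uint64_t bs_lba)
{
	return bs_lba * (BS_BLOCK_SIZE / DEV_SECTOR_SIZE);
}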
-------- Original message --------
From: "Walker, Benjamin" <benjamin.walker(a)intel.com>
Date: 6/30/17 9:13 AM (GMT-08:00)
To: spdk(a)lists.01.org
Subject: Re: [SPDK] Blob store
On Thu, 2017-06-15 at 09:16 -0700, tejasgole wrote:
> Hi,
>
> Is there any example code for blob store to use nvme device directly without
> the block dev layer?
>
> If not, is there even a document on the order in which APIs need to be invoked
> to set this up correctly?
>
> The way I have it setup, spdk_bs_init () followed by spdk_bs_md_create_blob ()
> both succeed, but spdk_bs_md_open_blob () spins forever.
>
> It keeps issuing reads to lba=3.
I haven't written an example for how to do this yet. Are you able to test the
blob store using the bdev layer like in the rest of the examples for now? I
think that will be a lot more straightforward, but if there are concerns about
the bdev layer I'd love to understand what those are.
>
> Thanks
>
4 years, 12 months
Blob store
by tejasgole
Hi,
Is there any example code for using the blob store on an NVMe device directly, without the block device layer?
If not, is there a document on the order in which the APIs need to be invoked to set this up correctly?
The way I have it set up, spdk_bs_init() followed by spdk_bs_md_create_blob() both succeed, but spdk_bs_md_open_blob() spins forever.
It keeps issuing reads to lba=3.
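For readers following along, here is a rough sketch of the call ordering being asked about, assuming the md-variant calls take the same asynchronous callback shapes as the blobstore API headers; get_bs_dev() is a hypothetical placeholder for however the spdk_bs_dev is obtained (for raw NVMe you would have to implement its read/write callbacks yourself), and exact signatures may differ between SPDK versions:

#include "spdk/blob.h"

/* Hypothetical placeholder: supplies the spdk_bs_dev the blob store sits on. */
extern struct spdk_bs_dev *get_bs_dev(void);

static void
open_done(void *cb_arg, struct spdk_blob *blob, int bserrno)
{
	/* The blob handle only becomes valid here, inside the open callback. */
}

static void
create_done(void *cb_arg, spdk_blob_id blobid, int bserrno)
{
	struct spdk_blob_store *bs = cb_arg;

	/* Open only after the create callback has fired. */
	spdk_bs_md_open_blob(bs, blobid, open_done, NULL);
}

static void
init_done(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
{
	/* Create only after the blob store itself has finished initializing. */
	spdk_bs_md_create_blob(bs, create_done, bs);
}

static void
blob_example_start(void)
{
	/* Passing NULL opts is assumed here to request default options. */
	spdk_bs_init(get_bs_dev(), NULL, init_done, NULL);
}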
Thanks
4 years, 12 months
RocksDB with BlobFS gets stuck after some time with WAL enabled
by Swaminathan Sivaraman
Hello,
I'm new to SPDK and am benchmarking RocksDB with BlobFS on an NVMe SSD.
When I tried to run a fillseq db_bench workload for 500M keys, db_bench got
stuck after processing some 100M keys. After playing with the db_bench
options I was using, I found that it only gets stuck when the WAL is
enabled. When I disabled the WAL, the entire workload completed properly.
With the WAL enabled, the issue occurs consistently on each run. Using gdb
and some extra print statements, I saw that this happens when RocksDB calls
sync on one of the MANIFEST files. The sync call waits on a semaphore at
this line:
https://github.com/spdk/spdk/blob/master/lib/blobfs/blobfs.c#L2169
I have not yet pinpointed which piece of code is failing to call sem_post,
but it looks to be this section (though I'm not quite certain about this):
https://github.com/spdk/spdk/blob/master/lib/blobfs/blobfs.c#L1756
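For readers unfamiliar with that code path, a generic sketch of the pattern involved (the names are illustrative, not the actual BlobFS symbols): the synchronous caller blocks on a semaphore that the asynchronous completion callback is expected to post, so if the callback never fires, the sync call hangs exactly as described above.

#include <semaphore.h>

struct sync_ctx {
	sem_t	sem;
	int	rc;
};

/* Completion callback: records the result and wakes the blocked caller. */
static void
sync_done_cb(void *cb_arg, int rc)
{
	struct sync_ctx *ctx = cb_arg;

	ctx->rc = rc;
	sem_post(&ctx->sem);
}

/* Blocking wrapper around an asynchronous sync operation. */
static int
blocking_sync(void (*start_async_sync)(void (*cb)(void *, int), void *cb_arg))
{
	struct sync_ctx ctx;

	sem_init(&ctx.sem, 0, 0);
	start_async_sync(sync_done_cb, &ctx);	/* kick off the async flush */
	sem_wait(&ctx.sem);			/* hangs here if the callback never fires */
	sem_destroy(&ctx.sem);
	return ctx.rc;
}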
Any ideas what is happening here? Is this a known issue or a bug? (Just to
note, the run_tests.sh db_bench test script in the repository runs the
fillseq workload with WAL disabled).
Thanks
Swami
5 years
Re: [SPDK] spdk valgrind issue!
by Harris, James R
Hi,
This looks like a bug in the valgrind-dpdk implementation.
When rte_malloc() is called, it allocates additional memory to store a “struct malloc_elem”. This structure is approximately 64 bytes in size (see malloc_elem.h in DPDK). The buffer returned to the rte_malloc() caller is after this structure.
For example:
rte_malloc(4096) => rte_malloc allocates 4096 + 64 bytes at address 0x1000
0x1000 points to the struct malloc_elem, 0x1040 is returned from rte_malloc() to the caller
SPDK then calls rte_malloc_virt2phy() to get the physical address for 0x1040. rte_malloc_virt2phy() subtracts 64 from the address to get the pointer to the struct malloc_elem. It then references the ms field (a struct rte_memseg) so it can get the physical address. It is this access to the ms field that is causing valgrind to fail. It thinks that rte_malloc_virt2phy() is referencing out of range memory, since it does not understand this struct malloc_elem.
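To make the layout concrete, a small illustration; the struct below is a stand-in with an assumed ~64-byte size and illustrative field placement, and the real definition lives in DPDK's malloc_elem.h:

#include <stdint.h>

/* Stand-in for DPDK's struct malloc_elem. The ms field points at the
 * rte_memseg that rte_malloc_virt2phy() needs for the physical address;
 * field order and padding here are illustrative only. */
struct malloc_elem_stub {
	const void *ms;		/* -> struct rte_memseg */
	uint8_t	    pad[56];	/* remainder of the ~64-byte header */
};

/* What rte_malloc_virt2phy() effectively does: step back from the pointer
 * the application was given to the header sitting just in front of it.
 * Valgrind only tracks the application pointer, so this backward access
 * looks to it like a read outside the allocated block. */
static const struct malloc_elem_stub *
elem_from_user_ptr(const void *user_ptr)
{
	return (const struct malloc_elem_stub *)
	       ((uintptr_t)user_ptr - sizeof(struct malloc_elem_stub));
}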
-Jim
From: SPDK <spdk-bounces(a)lists.01.org> on behalf of "huangqingxin(a)ruijie.com.cn" <huangqingxin(a)ruijie.com.cn>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org>
Date: Monday, June 26, 2017 at 2:53 AM
To: "spdk(a)lists.01.org" <spdk(a)lists.01.org>
Subject: [SPDK] spdk valgrind issue!
Hi, all
I ran into illegal memory access errors when running SPDK under Valgrind. Has anyone succeeded with this?
The Valgrind I used comes from: https://github.com/bluca/valgrind-dpdk
Here is the process:
valgrind --soname-synonyms=somalloc=NONE ./iscsi_tgt -c my_iscsi.conf
Starting DPDK 17.02.0 initialization...
[ DPDK EAL parameters: iscsi -c 0xff --file-prefix=spdk_pid540 ]
EAL: Detected 12 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
==540== Warning: set address range perms: large range [0x80200000, 0x100000000) (defined)
==540== Warning: set address range perms: large range [0x80200000, 0x100000000) (noaccess)
Occupied cpu core mask is 0xff
Occupied cpu socket mask is 0x1
EAL: PCI device 0000:02:00.0 on NUMA socket 0
EAL: probe driver: 8086:6f50 spdk_ioat
--540-- WARNING: Serious error when reading debug info
--540-- When reading debug info from /sys/devices/pci0000:00/0000:00:02.0/0000:02:00.0/resource0:
--540-- can't read file to inspect ELF header
Found matching device at 0000:02:00.0 vendor:0x8086 device:0x6f50
==540== Invalid read of size 4
==540== at 0x4A3CEE: rte_malloc_virt2phy (in /home/spdk_61/app/iscsi_tgt/iscsi_tgt)
==540== by 0x48E1EC: spdk_malloc (env.c:51)
==540== by 0x48E20E: spdk_zmalloc (env.c:59)
==540== by 0x415387: ioat_channel_start (ioat.c:407)
==540== by 0x415387: ioat_attach (ioat.c:483)
==540== by 0x415387: ioat_enum_cb (ioat.c:522)
==540== by 0x49DC4C: pci_probe_all_drivers.part.0 (in /home/spdk_61/app/iscsi_tgt/iscsi_tgt)
==540== by 0x49E1D1: rte_eal_pci_probe (in /home/spdk_61/app/iscsi_tgt/iscsi_tgt)
==540== by 0x490325: spdk_pci_enumerate (pci.c:147)
==540== by 0x41574D: spdk_ioat_probe (ioat.c:548)
==540== by 0x414C7F: copy_engine_ioat_init (copy_engine_ioat.c:304)
==540== by 0x438761: spdk_copy_engine_module_initialize (copy_engine.c:228)
==540== by 0x438761: spdk_copy_engine_initialize (copy_engine.c:246)
==540== by 0x44268A: spdk_subsystem_init (subsystem.c:135)
==540== by 0x43FF07: spdk_app_init (app.c:425)
==540== Address 0x5303cb8 is 24 bytes before a block of size 8 alloc'd
==540== at 0x4A0A3BC: rte_malloc (vg_replace_malloc.c:1184)
==540== by 0x48E1C5: spdk_malloc (env.c:49)
==540== by 0x48E20E: spdk_zmalloc (env.c:59)
==540== by 0x415387: ioat_channel_start (ioat.c:407)
==540== by 0x415387: ioat_attach (ioat.c:483)
==540== by 0x415387: ioat_enum_cb (ioat.c:522)
==540== by 0x49DC4C: pci_probe_all_drivers.part.0 (in /home/spdk_61/app/iscsi_tgt/iscsi_tgt)
==540== by 0x49E1D1: rte_eal_pci_probe (in /home/spdk_61/app/iscsi_tgt/iscsi_tgt)
==540== by 0x490325: spdk_pci_enumerate (pci.c:147)
==540== by 0x41574D: spdk_ioat_probe (ioat.c:548)
==540== by 0x414C7F: copy_engine_ioat_init (copy_engine_ioat.c:304)
==540== by 0x438761: spdk_copy_engine_module_initialize (copy_engine.c:228)
==540== by 0x438761: spdk_copy_engine_initialize (copy_engine.c:246)
==540== by 0x44268A: spdk_subsystem_init (subsystem.c:135)
==540== by 0x43FF07: spdk_app_init (app.c:425)
==540==
==540== Invalid read of size 8
==540== at 0x4A3CF6: rte_malloc_virt2phy (in /home/spdk_61/app/iscsi_tgt/iscsi_tgt)
==540== by 0x48E1EC: spdk_malloc (env.c:51)
==540== by 0x48E20E: spdk_zmalloc (env.c:59)
==540== by 0x415387: ioat_channel_start (ioat.c:407)
==540== by 0x415387: ioat_attach (ioat.c:483)
==540== by 0x415387: ioat_enum_cb (ioat.c:522)
==540== by 0x49DC4C: pci_probe_all_drivers.part.0 (in /home/spdk_61/app/iscsi_tgt/iscsi_tgt)
==540== by 0x49E1D1: rte_eal_pci_probe (in /home/spdk_61/app/iscsi_tgt/iscsi_tgt)
==540== by 0x490325: spdk_pci_enumerate (pci.c:147)
==540== by 0x41574D: spdk_ioat_probe (ioat.c:548)
==540== by 0x414C7F: copy_engine_ioat_init (copy_engine_ioat.c:304)
==540== by 0x438761: spdk_copy_engine_module_initialize (copy_engine.c:228)
==540== by 0x438761: spdk_copy_engine_initialize (copy_engine.c:246)
==540== by 0x44268A: spdk_subsystem_init (subsystem.c:135)
==540== by 0x43FF07: spdk_app_init (app.c:425)
==540== Address 0x5303cb0 is 32 bytes before a block of size 16 in arena "client"
==540==
==540== Invalid read of size 8
==540== at 0x4A3CFD: rte_malloc_virt2phy (in /home/spdk_61/app/iscsi_tgt/iscsi_tgt)
==540== by 0x48E1EC: spdk_malloc (env.c:51)
==540== by 0x48E20E: spdk_zmalloc (env.c:59)
==540== by 0x415387: ioat_channel_start (ioat.c:407)
==540== by 0x415387: ioat_attach (ioat.c:483)
==540== by 0x415387: ioat_enum_cb (ioat.c:522)
==540== by 0x49DC4C: pci_probe_all_drivers.part.0 (in /home/spdk_61/app/iscsi_tgt/iscsi_tgt)
==540== by 0x49E1D1: rte_eal_pci_probe (in /home/spdk_61/app/iscsi_tgt/iscsi_tgt)
==540== by 0x490325: spdk_pci_enumerate (pci.c:147)
==540== by 0x41574D: spdk_ioat_probe (ioat.c:548)
==540== by 0x414C7F: copy_engine_ioat_init (copy_engine_ioat.c:304)
==540== by 0x438761: spdk_copy_engine_module_initialize (copy_engine.c:228)
==540== by 0x438761: spdk_copy_engine_initialize (copy_engine.c:246)
==540== by 0x44268A: spdk_subsystem_init (subsystem.c:135)
==540== by 0x43FF07: spdk_app_init (app.c:425)
==540== Address 0x58 is not stack'd, malloc'd or (recently) free'd
==540==
==540==
==540== Process terminating with default action of signal 11 (SIGSEGV): dumping core
==540== Access not within mapped region at address 0x58
==540== at 0x4A3CFD: rte_malloc_virt2phy (in /home/spdk_61/app/iscsi_tgt/iscsi_tgt)
==540== by 0x48E1EC: spdk_malloc (env.c:51)
==540== by 0x48E20E: spdk_zmalloc (env.c:59)
==540== by 0x415387: ioat_channel_start (ioat.c:407)
==540== by 0x415387: ioat_attach (ioat.c:483)
==540== by 0x415387: ioat_enum_cb (ioat.c:522)
==540== by 0x49DC4C: pci_probe_all_drivers.part.0 (in /home/spdk_61/app/iscsi_tgt/iscsi_tgt)
==540== by 0x49E1D1: rte_eal_pci_probe (in /home/spdk_61/app/iscsi_tgt/iscsi_tgt)
==540== by 0x490325: spdk_pci_enumerate (pci.c:147)
==540== by 0x41574D: spdk_ioat_probe (ioat.c:548)
==540== by 0x414C7F: copy_engine_ioat_init (copy_engine_ioat.c:304)
==540== by 0x438761: spdk_copy_engine_module_initialize (copy_engine.c:228)
==540== by 0x438761: spdk_copy_engine_initialize (copy_engine.c:246)
==540== by 0x44268A: spdk_subsystem_init (subsystem.c:135)
==540== by 0x43FF07: spdk_app_init (app.c:425)
==540== If you believe this happened as a result of a stack
==540== overflow in your program's main thread (unlikely but
==540== possible), you can try to increase the size of the
==540== main thread stack using the --main-stacksize= flag.
==540== The main thread stack size used in this run was 8388608.
==540==
==540== HEAP SUMMARY:
==540== in use at exit: 2,274,897 bytes in 308 blocks
==540== total heap usage: 1,050 allocs, 742 frees, 7,068,857 bytes allocated
==540==
==540== LEAK SUMMARY:
==540== definitely lost: 4,424 bytes in 4 blocks
==540== indirectly lost: 236 bytes in 9 blocks
==540== possibly lost: 2,432 bytes in 8 blocks
==540== still reachable: 2,267,805 bytes in 287 blocks
==540== suppressed: 0 bytes in 0 blocks
==540== Rerun with --leak-check=full to see details of leaked memory
==540==
==540== For counts of detected and suppressed errors, rerun with: -v
==540== ERROR SUMMARY: 3 errors from 3 contexts (suppressed: 0 from 0)
________________________________
Best Regards
5 years
Re: spdk can't pass fio_test_if_4_clients_testing_with_4_split_partitions
by nixun_992@sina.com
Hi, Pawel:
This time, when I specify the bsrange as 4k-512k, it passes the test.
I ran the LTP stress test with the following command: ./ltpstress.sh -npqS -t 5 -b /dev/sda1 -B ext4. The result is FAIL; I guess this is also the 4k issue, because the test does direct I/O with sizes smaller than 4k.
If I want to do direct I/O with sizes smaller than 4k, how do I set that up in SPDK vhost? I am also very interested in the SPDK source code; can you point me to the SPDK functions that handle the case where the I/O size is less than 4k?
Thanks, Xun
-----------------------------------
Because one physical block is the minimum unit you can read or write. If you need 1k direct I/O, you need to format the underlying NVMe with a 512b sector size; then you can do direct I/O in multiples of 512b. In buffered (non-direct) mode you can use whatever I/O size you want; the kernel will handle that for you.
Going back to bsrange: 4k-512k forces fio to produce I/O that is a multiple of 4k, and that case must work with SPDK vhost.
Pawel
From: SPDK [mailto:[email protected]]
On Behalf Of nixun_992(a)sina.com
Sent: Friday, June 16, 2017 5:28 AM
To: spdk <spdk(a)lists.01.org>
Subject: Re: [SPDK] spdk can't pass fio_test_if_4_clients_testing_with_4_split_partitions
Why is there such a limitation (size >= 4k)? In my opinion, the guest kernel should not impose any limitation; SPDK vhost should handle it.
Thanks,
Xun
---------------------------
Try bsrange starting from 4k (not 1k). For direct I/O you should not send I/O smaller than minimum_io_size/hw_sector_size. Also, can you send the qemu and vhost launch commands? The commit IDs of the working and non-working versions will also help us, because we don't know what the "old version" is.
If resets are triggered from the guest OS, you should see some failures in the guest dmesg.
Pawel
From: SPDK
[mailto:[email protected]]
On Behalf Of nixun_992(a)sina.com
Sent: Tuesday, June 13, 2017 10:48 AM
To: spdk <spdk(a)lists.01.org>
Subject: [SPDK] Re: spdk can't pass fio test if 4 clients testing with 4 split partitions
Hi, Pawel & Changpeng:
No, not specifically 512b sizes; I just specify random I/O sizes to stress the SPDK vhost program. As for /mnt/ssdtest1: the target disk shows up as /dev/sda in the guest, I mount /dev/sda1 on /mnt, and ssdtest1 is the test file for fio.
My guest OS is CentOS 7.1, and the guest dmesg does not show much of a problem. The main problem is that SPDK keeps resetting the controller; I am not sure why this happens, and I did not see it with the old version.
Thanks,
Xun
================================
I see bsrange 1k to 512k; is the NVMe formatted with a 512b block size here?
Which commit did you use for this test?
filename=/mnt/ssdtest1 – is this a directory on a mounted filesystem?
Can you send us the dmesg from the failure?
Paweł
From: SPDK
[mailto:[email protected]]
On Behalf Of Liu, Changpeng
Sent: Tuesday, June 13, 2017 9:18 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] spdk can't pass fio test if 4 clients testing with 4 split partitions
Thanks Xun.
We’ll take a look at the issue.
From the error log message, it seems a SCSI task management command was received by SPDK; the reason the VM sent the task management command is most likely a timeout on some commands.
From: SPDK
[mailto:[email protected]]
On Behalf Of nixun_992(a)sina.com
Sent: Monday, June 12, 2017 3:40 PM
To: spdk <spdk(a)lists.01.org>
Subject: [SPDK] spdk can't pass fio test if 4 clients testing with 4 split partitions
Hi, All:
SPDK can't pass the fio test after 2 hours of testing, while the same test passes with the version from before Mar 29.
The error message is the following:
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:134 nsid:1 lba:1310481280 len:128
ABORTED - BY REQUEST (00/07) sqid:1 cid:134 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:202 nsid:1 lba:1310481408 len:256
ABORTED - BY REQUEST (00/07) sqid:1 cid:202 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:151 nsid:1 lba:1310481664 len:256
ABORTED - BY REQUEST (00/07) sqid:1 cid:151 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:243 nsid:1 lba:1312030816 len:96
ABORTED - BY REQUEST (00/07) sqid:1 cid:243 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
resetting controller
resetting controller
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:253 nsid:1 lba:998926248 len:88
ABORTED - BY REQUEST (00/07) sqid:1 cid:253 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:243 nsid:1 lba:1049582336 len:176
ABORTED - BY REQUEST (00/07) sqid:1 cid:243 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:169 nsid:1 lba:1109679488 len:128
ABORTED - BY REQUEST (00/07) sqid:1 cid:169 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:134 nsid:1 lba:958884728 len:136
ABORTED - BY REQUEST (00/07) sqid:1 cid:134 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:152 nsid:1 lba:1018345728 len:240
ABORTED - BY REQUEST (00/07) sqid:1 cid:152 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:234 nsid:1 lba:898096896 len:8
ABORTED - BY REQUEST (00/07) sqid:1 cid:234 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:130 nsid:1 lba:991125248 len:96
ABORTED - BY REQUEST (00/07) sqid:1 cid:130 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
resetting controller
resetting controller
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:130 nsid:1 lba:609149952 len:64
ABORTED - BY REQUEST (00/07) sqid:1
cid:130 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
=========================
Our vhost config is the following:
# The Split virtual block device slices block devices into multiple smaller bdevs.
[Split]
# Syntax:
# Split <bdev> <count> [<size_in_megabytes>]
#
# Split Nvme1n1 into two equally-sized portions, Nvme1n1p0 and Nvme1n1p1
Split Nvme0n1 4 200000
# Split Malloc2 into eight 1-megabyte portions, Malloc2p0 ... Malloc2p7,
# leaving the rest of the device inaccessible
#Split Malloc2 8 1
[VhostScsi0]
Dev 0 Nvme0n1p0
[VhostScsi1]
Dev 0 Nvme0n1p1
[VhostScsi2]
Dev 0 Nvme0n1p2
[VhostScsi3]
Dev 0 Nvme0n1p3
The fio script is the following:
[global]
filename=/mnt/ssdtest1
size=100G
numjobs=8
iodepth=16
ioengine=libaio
group_reporting
do_verify=1
verify=md5
# direct rand read
[rand-read]
bsrange=1k-512k
#direct=1
rw=randread
runtime=10000
stonewall
# direct seq read
[seq-read]
bsrange=1k-512k
direct=1
rw=read
runtime=10000
stonewall
# direct rand write
[rand-write]
bsrange=1k-512k
direct=1
rw=randwrite
runtime=10000
stonewall
# direct seq write
[seq-write]
bsrange=1k-512k
direct=1
rw=write
runtime=10000
5 years
Questions about SPDK on ARM
by ぁ英雄ぁ
Hi all,
I need to compile SPDK on an ARM Linux operating system, but some questions should be resolved before that.
My questions are as follows.
Question 1:
Does a version of SPDK for ARM exist? If not, how should SPDK be configured for ARM?
Question 2:
If one compiles SPDK on an ARM Linux operating system, are there any compilation methods or instructions?
I would appreciate it if you could offer help.
Best Regards
Bruce Hu
5 years
spdk valgrind issue!
by huangqingxin@ruijie.com.cn
Hi, all
I ran into illegal memory access errors when running SPDK under Valgrind. Has anyone succeeded with this?
The Valgrind I used comes from: https://github.com/bluca/valgrind-dpdk
Here is the process:
valgrind --soname-synonyms=somalloc=NONE ./iscsi_tgt -c my_iscsi.conf
Starting DPDK 17.02.0 initialization...
[ DPDK EAL parameters: iscsi -c 0xff --file-prefix=spdk_pid540 ]
EAL: Detected 12 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
==540== Warning: set address range perms: large range [0x80200000, 0x100000000) (defined)
==540== Warning: set address range perms: large range [0x80200000, 0x100000000) (noaccess)
Occupied cpu core mask is 0xff
Occupied cpu socket mask is 0x1
EAL: PCI device 0000:02:00.0 on NUMA socket 0
EAL: probe driver: 8086:6f50 spdk_ioat
--540-- WARNING: Serious error when reading debug info
--540-- When reading debug info from /sys/devices/pci0000:00/0000:00:02.0/0000:02:00.0/resource0:
--540-- can't read file to inspect ELF header
Found matching device at 0000:02:00.0 vendor:0x8086 device:0x6f50
==540== Invalid read of size 4
==540== at 0x4A3CEE: rte_malloc_virt2phy (in /home/spdk_61/app/iscsi_tgt/iscsi_tgt)
==540== by 0x48E1EC: spdk_malloc (env.c:51)
==540== by 0x48E20E: spdk_zmalloc (env.c:59)
==540== by 0x415387: ioat_channel_start (ioat.c:407)
==540== by 0x415387: ioat_attach (ioat.c:483)
==540== by 0x415387: ioat_enum_cb (ioat.c:522)
==540== by 0x49DC4C: pci_probe_all_drivers.part.0 (in /home/spdk_61/app/iscsi_tgt/iscsi_tgt)
==540== by 0x49E1D1: rte_eal_pci_probe (in /home/spdk_61/app/iscsi_tgt/iscsi_tgt)
==540== by 0x490325: spdk_pci_enumerate (pci.c:147)
==540== by 0x41574D: spdk_ioat_probe (ioat.c:548)
==540== by 0x414C7F: copy_engine_ioat_init (copy_engine_ioat.c:304)
==540== by 0x438761: spdk_copy_engine_module_initialize (copy_engine.c:228)
==540== by 0x438761: spdk_copy_engine_initialize (copy_engine.c:246)
==540== by 0x44268A: spdk_subsystem_init (subsystem.c:135)
==540== by 0x43FF07: spdk_app_init (app.c:425)
==540== Address 0x5303cb8 is 24 bytes before a block of size 8 alloc'd
==540== at 0x4A0A3BC: rte_malloc (vg_replace_malloc.c:1184)
==540== by 0x48E1C5: spdk_malloc (env.c:49)
==540== by 0x48E20E: spdk_zmalloc (env.c:59)
==540== by 0x415387: ioat_channel_start (ioat.c:407)
==540== by 0x415387: ioat_attach (ioat.c:483)
==540== by 0x415387: ioat_enum_cb (ioat.c:522)
==540== by 0x49DC4C: pci_probe_all_drivers.part.0 (in /home/spdk_61/app/iscsi_tgt/iscsi_tgt)
==540== by 0x49E1D1: rte_eal_pci_probe (in /home/spdk_61/app/iscsi_tgt/iscsi_tgt)
==540== by 0x490325: spdk_pci_enumerate (pci.c:147)
==540== by 0x41574D: spdk_ioat_probe (ioat.c:548)
==540== by 0x414C7F: copy_engine_ioat_init (copy_engine_ioat.c:304)
==540== by 0x438761: spdk_copy_engine_module_initialize (copy_engine.c:228)
==540== by 0x438761: spdk_copy_engine_initialize (copy_engine.c:246)
==540== by 0x44268A: spdk_subsystem_init (subsystem.c:135)
==540== by 0x43FF07: spdk_app_init (app.c:425)
==540==
==540== Invalid read of size 8
==540== at 0x4A3CF6: rte_malloc_virt2phy (in /home/spdk_61/app/iscsi_tgt/iscsi_tgt)
==540== by 0x48E1EC: spdk_malloc (env.c:51)
==540== by 0x48E20E: spdk_zmalloc (env.c:59)
==540== by 0x415387: ioat_channel_start (ioat.c:407)
==540== by 0x415387: ioat_attach (ioat.c:483)
==540== by 0x415387: ioat_enum_cb (ioat.c:522)
==540== by 0x49DC4C: pci_probe_all_drivers.part.0 (in /home/spdk_61/app/iscsi_tgt/iscsi_tgt)
==540== by 0x49E1D1: rte_eal_pci_probe (in /home/spdk_61/app/iscsi_tgt/iscsi_tgt)
==540== by 0x490325: spdk_pci_enumerate (pci.c:147)
==540== by 0x41574D: spdk_ioat_probe (ioat.c:548)
==540== by 0x414C7F: copy_engine_ioat_init (copy_engine_ioat.c:304)
==540== by 0x438761: spdk_copy_engine_module_initialize (copy_engine.c:228)
==540== by 0x438761: spdk_copy_engine_initialize (copy_engine.c:246)
==540== by 0x44268A: spdk_subsystem_init (subsystem.c:135)
==540== by 0x43FF07: spdk_app_init (app.c:425)
==540== Address 0x5303cb0 is 32 bytes before a block of size 16 in arena "client"
==540==
==540== Invalid read of size 8
==540== at 0x4A3CFD: rte_malloc_virt2phy (in /home/spdk_61/app/iscsi_tgt/iscsi_tgt)
==540== by 0x48E1EC: spdk_malloc (env.c:51)
==540== by 0x48E20E: spdk_zmalloc (env.c:59)
==540== by 0x415387: ioat_channel_start (ioat.c:407)
==540== by 0x415387: ioat_attach (ioat.c:483)
==540== by 0x415387: ioat_enum_cb (ioat.c:522)
==540== by 0x49DC4C: pci_probe_all_drivers.part.0 (in /home/spdk_61/app/iscsi_tgt/iscsi_tgt)
==540== by 0x49E1D1: rte_eal_pci_probe (in /home/spdk_61/app/iscsi_tgt/iscsi_tgt)
==540== by 0x490325: spdk_pci_enumerate (pci.c:147)
==540== by 0x41574D: spdk_ioat_probe (ioat.c:548)
==540== by 0x414C7F: copy_engine_ioat_init (copy_engine_ioat.c:304)
==540== by 0x438761: spdk_copy_engine_module_initialize (copy_engine.c:228)
==540== by 0x438761: spdk_copy_engine_initialize (copy_engine.c:246)
==540== by 0x44268A: spdk_subsystem_init (subsystem.c:135)
==540== by 0x43FF07: spdk_app_init (app.c:425)
==540== Address 0x58 is not stack'd, malloc'd or (recently) free'd
==540==
==540==
==540== Process terminating with default action of signal 11 (SIGSEGV): dumping core
==540== Access not within mapped region at address 0x58
==540== at 0x4A3CFD: rte_malloc_virt2phy (in /home/spdk_61/app/iscsi_tgt/iscsi_tgt)
==540== by 0x48E1EC: spdk_malloc (env.c:51)
==540== by 0x48E20E: spdk_zmalloc (env.c:59)
==540== by 0x415387: ioat_channel_start (ioat.c:407)
==540== by 0x415387: ioat_attach (ioat.c:483)
==540== by 0x415387: ioat_enum_cb (ioat.c:522)
==540== by 0x49DC4C: pci_probe_all_drivers.part.0 (in /home/spdk_61/app/iscsi_tgt/iscsi_tgt)
==540== by 0x49E1D1: rte_eal_pci_probe (in /home/spdk_61/app/iscsi_tgt/iscsi_tgt)
==540== by 0x490325: spdk_pci_enumerate (pci.c:147)
==540== by 0x41574D: spdk_ioat_probe (ioat.c:548)
==540== by 0x414C7F: copy_engine_ioat_init (copy_engine_ioat.c:304)
==540== by 0x438761: spdk_copy_engine_module_initialize (copy_engine.c:228)
==540== by 0x438761: spdk_copy_engine_initialize (copy_engine.c:246)
==540== by 0x44268A: spdk_subsystem_init (subsystem.c:135)
==540== by 0x43FF07: spdk_app_init (app.c:425)
==540== If you believe this happened as a result of a stack
==540== overflow in your program's main thread (unlikely but
==540== possible), you can try to increase the size of the
==540== main thread stack using the --main-stacksize= flag.
==540== The main thread stack size used in this run was 8388608.
==540==
==540== HEAP SUMMARY:
==540== in use at exit: 2,274,897 bytes in 308 blocks
==540== total heap usage: 1,050 allocs, 742 frees, 7,068,857 bytes allocated
==540==
==540== LEAK SUMMARY:
==540== definitely lost: 4,424 bytes in 4 blocks
==540== indirectly lost: 236 bytes in 9 blocks
==540== possibly lost: 2,432 bytes in 8 blocks
==540== still reachable: 2,267,805 bytes in 287 blocks
==540== suppressed: 0 bytes in 0 blocks
==540== Rerun with --leak-check=full to see details of leaked memory
==540==
==540== For counts of detected and suppressed errors, rerun with: -v
==540== ERROR SUMMARY: 3 errors from 3 contexts (suppressed: 0 from 0)
________________________________
Best Regards
5 years