Does the BlobFS asynchronous API support multi-threaded writing?
by chen.zhenghua@zte.com.cn
Hi everyone,
I simply tested the BlobFS asynchronous API by using the SPDK event framework to run multiple tasks, each writing one file.
But it doesn't work: spdk_file_write_async() reports an error when resizing the file.
The call stack looks like this:
spdk_file_write_async() -> __readwrite() -> spdk_file_truncate_async() -> spdk_blob_resize()
The resize operation must be done on the metadata thread (the one that invoked spdk_fs_load()), so only the task dispatched to the metadata CPU core works.
That is to say, only one thread can be used to write files. This is hard to use, and it may cause performance issues.
Does anyone know more about this?
Thanks very much
2 months, 2 weeks
Does SPDK support Directives-based writes?
by sunshihao@huawei.com
Dear maintainer,
I am doing some work on Directives (NVMe spec 1.4, section 9), but I can't find read/write functions for them in SPDK.
Maybe I missed something.
Please give me some advice if I want to develop a set of APIs for multi-stream writes.
Thanks!
1 year, 6 months
iSCSI Sequential Read with BS 128K Performance
by Lego Lin
Hi, all:
I just tested SPDK 20.10.x iSCSI performance and got the following data (all tests with FIO QD 32):
[OK] 100%RndR_4K / 100%RndW_4K: read IOPS=155k, BW=605MiB/s (634MB/s); write IOPS=140k, BW=547MiB/s (573MB/s)
[OK] 100%RndR_8K / 100%RndW_8K: read IOPS=147k, BW=1152MiB/s (1208MB/s); write IOPS=128k, BW=1003MiB/s (1051MB/s)
[NOK] 100%SeqR_128K / 100%SeqW_128K: read IOPS=210, BW=26.3MiB/s (27.5MB/s); write IOPS=14.6k, BW=1831MiB/s (1920MB/s)
=> Read bad, write OK
[NOK] 100%SeqR_32K / 100%SeqR_16K: read IOPS=9418, BW=294MiB/s (309MB/s); read IOPS=105k, BW=1641MiB/s (1721MB/s)
=> [NOK] BS_32K + QD_32
=> [OK] BS_16K + QD_32
[OK] 100%SeqR_8K / 100%SeqR_4K: read IOPS=149k, BW=1160MiB/s (1217MB/s); read IOPS=157k, BW=612MiB/s (642MB/s)
Focusing on BS=128K:
[OK] QD1: read IOPS=5543, BW=693MiB/s (727MB/s)
[OK] QD8: read IOPS=21.1k, BW=2634MiB/s (2762MB/s)
[NOK] QD16: read IOPS=301, BW=37.7MiB/s (39.5MB/s)
FIO Configuration:
ioengine=libaio
direct=1
numjobs=1
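For comparison against the report's suggested settings, the fragment above expanded into a full job file for the BS=128K + QD8 sequential-read case might look like this (the device path and runtime here are assumptions, not from the original test):

```ini
[global]
ioengine=libaio
direct=1
numjobs=1
bs=128k
iodepth=8
rw=read
time_based=1
runtime=60

[seqread-128k]
; /dev/sdX stands for the iSCSI-attached LUN on the initiator side
filename=/dev/sdX
```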
I also checked against this document:
https://ci.spdk.io/download/performance-reports/SPDK_tcp_perf_report_2007...
It also suggests testing sequential read with BS=128K + QD8.
I think the low performance with BS=128K + QD32 should not be related to iSCSI itself, but can anyone share experience
with tuning iSCSI sequential-read performance? It's odd that performance drops at high QD. Any suggestions
are welcome.
Thanks
My test configuration follows:
1. Network bandwidth: 40GB
2. TCP settings at both target and client:
tcp_timestamps: "1"
tcp_sack: "0"
tcp_rmem: "4096 87380 134217728"
tcp_wmem: "4096 87380 134217728"
tcp_mem: "4096 87380 134217728"
rmem_default: "524287"
wmem_default: "524287"
rmem_max: "268435456"
wmem_max: "268435456"
optmem_max: "268435456"
netdev_max_backlog: "300000"
3. number of CPU cores at target and client: 48 vcores
Intel(R) Xeon(R) Silver 4215 CPU @ 2.50GHz * 2
4. disable irqbalance / enable CPU power governor
5. run SPDK with: ./iscsi_tgt -m 0x08007C08007C
1 year, 6 months
Mellanox Build Bot build failure
by Rui Chang
Hi
The Mellanox build bot reported an error for this code change: https://review.spdk.io/gerrit/c/spdk/spdk/+/5667
But the change did not touch the file mentioned, and I see other code reviews with a similar issue.
https://swx-ci.mellanox.com/spdk-test/blue/rest/organizations/jenkins/pip...
[2020-12-22T13:55:03.277Z] CC app/spdk_lspci/spdk_lspci.o
[2020-12-22T13:55:03.277Z] LINK verify
[2020-12-22T13:55:03.616Z] LINK spdk_lspci
[2020-12-22T13:55:03.936Z] CC examples/nvme/hello_world/hello_world.o
[2020-12-22T13:55:03.936Z] /usr/bin/ld.bfd: /basic_test_spdk_upstream_ci/spdk_upstream/build/lib/libspdk_rdma.a(rdma_mlx5_dv.o): in function `spdk_rdma_qp_create':
[2020-12-22T13:55:03.936Z] /basic_test_spdk_upstream_ci/spdk_upstream/lib/rdma/rdma_mlx5_dv.c:123: undefined reference to `mlx5dv_create_qp'
[2020-12-22T13:55:03.936Z] collect2: error: ld returned 1 exit status
[2020-12-22T13:55:03.936Z] make[2]: *** [/basic_test_spdk_upstream_ci/spdk_upstream/mk/spdk.app.mk:65: /basic_test_spdk_upstream_ci/spdk_upstream/build/bin/spdk_lspci] Error 1
[2020-12-22T13:55:03.936Z] make[1]: *** [/basic_test_spdk_upstream_ci/spdk_upstream/mk/spdk.subdirs.mk:44: spdk_lspci] Error 2
[2020-12-22T13:55:03.936Z] make: *** [/basic_test_spdk_upstream_ci/spdk_upstream/mk/spdk.subdirs.mk:44: app] Error 2
[2020-12-22T13:55:03.936Z] make: *** Waiting for unfinished jobs....
[2020-12-22T13:55:04.246Z] CC examples/sock/hello_world/hello_sock.o
[2020-12-22T13:55:04.576Z] LINK hello_world
[2020-12-22T13:55:04.907Z] LINK iscsi_fuzz
Regards,
Rui
1 year, 6 months
Re: SPDK and RocksDB integration -- setup problem
by Meneghini, John
Hi Karol.
We tried this, and there's no difference between your suggested change and the prior one.
Are we sure that everything works correctly with the latest version of SPDK? It looks like the RocksDB code was originally integrated and supported with SPDK v19.01.x,
and it appears nothing has changed since then.
ssan-rx2560-03:rocksdb(spdk-v5.14.3) > git logg -3
* f43f85564 2019-05-03 (HEAD -> spdk-v5.14.3, origin/spdk-v5.14.3) Add SpdkEnv C API [ Jim Harris / youngtack.jin(a)circuitblvd.com ]
* e28188774 2018-09-04 Add SPDK BlobFS integration. [ Jim Harris / changpeng.liu(a)intel.com ]
* 626550343 2018-08-21 (tag: rocksdb-5.14.3) Bump version to 5.14.3 and update HISTORY [ Yi Wu / yiwu(a)fb.com ]
ssan-rx2560-03:spdk(v19.01.x) > git logg -3
* 3f5e32adc 2019-05-01 (HEAD -> v19.01.x, origin/v19.01.x) test/rocksdb: add rocksdb_commit_id file [ Jim Harris / james.r.harris(a)intel.com ]
* 089585c8d 2019-05-01 rocksdb: use C++ constructor for global channel [ Jim Harris / james.r.harris(a)intel.com ]
* fe3a2c4dc 2019-05-01 test/rocksdb: suppress leak reports on thread local ctx [ Jim Harris / james.r.harris(a)intel.com ]
Should we try this using SPDK v19.01.x?
/John
[[email protected] spdk]# scripts/gen_nvme.sh ----json-with-subsystems
0000:84:00.0 ....
{
"subsystem": "bdev",
"config": [
{
"method": "bdev_nvme_attach_controller",
"params": {
"trtype": "PCIe",
"name":"Nvme0",
"traddr":"0000:84:00.0"
}
}
]
}
[[email protected] spdk]# scripts/gen_nvme.sh
0000:84:00.0 ....
{
"subsystem": "bdev",
"config": [
{
"method": "bdev_nvme_attach_controller",
"params": {
"trtype": "PCIe",
"name":"Nvme0",
"traddr":"0000:84:00.0"
}
}
]
}
On 12/17/20, 4:31 AM, "Latecki, Karol" <karol.latecki(a)intel.com> wrote:
Hey John!
Could you try "scripts/gen_nvme.sh ----json-with-subsystems" and let me know if this worked?
Karol
-----Original Message-----
From: Meneghini, John <John.Meneghini(a)netapp.com>
Sent: Wednesday, December 16, 2020 7:42 PM
To: spdk(a)lists.01.org
Cc: Meneghini, John <John.Meneghini(a)netapp.com>; Lalsangi, Raj <Raj.Lalsangi(a)netapp.com>
Subject: [SPDK] SPDK and RocksDB integration -- setup problem
I have followed the instructions here -- https://spdk.io/doc/blobfs.html -- to set up RocksDB integration with the SPDK environment. I'm using top-of-tree SPDK from GitHub.
Here is the output from running scripts/gen_nvme.sh:
[[email protected]]$ scripts/gen_nvme.sh
0000:84:00.0 ....
{
"subsystem": "bdev",
"config": [
{
"method": "bdev_nvme_attach_controller",
"params": {
"trtype": "PCIe",
"name":"Nvme0",
"traddr":"0000:84:00.0"
}
}
]
}
And the output from scripts/setup.sh: (with some tracing)
[[email protected]]# HUGEMEM=5120 scripts/setup.sh
START ...>
----------
TARGET_USER .. defined.
lalsangi ..
OS is Linux! .. config
configure_linux_pci ..
vfio-pci ..
bdf related stuff ..
0000:84:00.0 (8086 0953): no driver -> vfio-pci
8086 0953
0000:84:00.0
1
end of config_linux_pci ..
config_linux_pci .. DONE.
-------------------------
hugetblfs_mounts ..
2560
Setting 2560 in /proc/sys/vm/nr_hugepages ..
/dev/hugepages
...............
Target user is: lalsangi
Password:
MEMLOCK_AMNT = 65536 ..
"lalsangi" user memlock limit: 64 MB
This is the maximum amount of memory you will be
able to use with DPDK and VFIO if run as user "lalsangi".
To change this, please adjust limits.conf memlock limit for user "lalsangi".
[[email protected]]#
And finally, the problem when I run 'test/blobfs/mkfs/mkfs /usr/local/etc/spdk/rocksdb.json Nvme0n1' (with some tracing). (I tried different bdev names -- e.g., Nvme0 -- and hit the same problem.)
[[email protected] spdk]# test/blobfs/mkfs/mkfs /usr/local/etc/spdk/rocksdb.json Nvme0n1
[2020-12-14 21:30:05.201555] Launching mkfs, config_file: /usr/local/etc/spdk/rocksdb.json
[2020-12-14 21:30:05.201832] Starting SPDK v21.01-pre git sha1 602b134fa / DPDK 20.08.0 initialization...
[2020-12-14 21:30:05.201879] [ DPDK EAL parameters: [2020-12-14 21:30:05.201906] spdk_mkfs [2020-12-14 21:30:05.201928] --no-shconf [2020-12-14 21:30:05.201948] -c 0x3 [2020-12-14 21:30:05.201970] --log-level=lib.eal:6 [2020-12-14 21:30:05.201992] --log-level=lib.cryptodev:5 [2020-12-14 21:30:05.202012] --log-level=user1:6 [2020-12-14 21:30:05.202031] --base-virtaddr=0x200000000000 [2020-12-14 21:30:05.202051] --match-allocations [2020-12-14 21:30:05.202071] --file-prefix=spdk_pid103642 [2020-12-14 21:30:05.202092] ]
EAL: No available hugepages reported in hugepages-1048576kB
EAL: No legacy callbacks, legacy socket not created
[2020-12-14 21:30:05.250379] app.c: 465:spdk_app_start: *NOTICE*: Total cores available: 2
[2020-12-14 21:30:05.250411] Total cores available: 2
[2020-12-14 21:30:05.250538] spdk_reactor_init ..
[2020-12-14 21:30:05.347656] Reactor init done ..
[2020-12-14 21:30:05.347710] cpuset_set_cpu done ..
[2020-12-14 21:30:05.347719] spdk_thread_create init done ..
[2020-12-14 21:30:05.348539] spdk_mempool_get_bulk done ..
[2020-12-14 21:30:05.348556] Allowing new thread app_thread ..
[2020-12-14 21:30:05.348568] Ran the thread function app_thread ..
[2020-12-14 21:30:05.461055] send msg to run bootstrap_fn ..
[2020-12-14 21:30:05.461181] reactor.c: 701:reactor_run: *NOTICE*: Reactor started on core 1
[2020-12-14 21:30:05.461184] spdk_thread_create init done ..
[2020-12-14 21:30:05.462004] spdk_mempool_get_bulk done ..
[2020-12-14 21:30:05.462035] Allowing new thread reactor_1 ..
[2020-12-14 21:30:05.462045] Ran the thread function reactor_1 ..
[2020-12-14 21:30:05.462052] reactor.c: 701:reactor_run: *NOTICE*: Reactor started on core 0
[2020-12-14 21:30:05.462080] Load json config /usr/local/etc/spdk/rocksdb.json ..Inside spdk_json_parse ..
Inside spdk_json_parse ..
[2020-12-14 21:30:05.462126] Read the json config file /usr/local/etc/spdk/rocksdb.json ..
[2020-12-14 21:30:05.462155] Find the key_name: subsystem ..
[2020-12-14 21:30:05.462164] parsing token #??..
[2020-12-14 21:30:05.462172] key #??, value 0??
[2020-12-14 21:30:05.462180] json_config.c: 597:spdk_app_json_config_load: *WARNING*: No 'subsystems' key JSON configuration file.
[2020-12-14 21:30:05.462524] accel_engine.c: 692:spdk_accel_engine_initialize: *NOTICE*: Accel engine initialized to use software engine.
Initializing filesystem on bdev Nvme0n1...[2020-12-14 21:30:05.527735] bdev.c:5562:spdk_bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1
Failed to initialize filesystem on bdev Nvme0n1...done.
First, I noticed that the code expects the key name 'subsystems', but gen_nvme.sh created the JSON with the key 'subsystem'. I changed the code to match the key name, but the same issue remains.
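For what it's worth, the 'subsystems' lookup in the loader implies a configuration shape with a top-level "subsystems" array wrapping objects like the one gen_nvme.sh printed. A sketch, built from the output shown above (the exact output of the --json-with-subsystems flag may differ):

```json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "trtype": "PCIe",
            "name": "Nvme0",
            "traddr": "0000:84:00.0"
          }
        }
      ]
    }
  ]
}
```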
Based on the above, it seems that either I'm making a silly mistake or something basic is not working.
Any hints / suggestions on debugging this problem are highly appreciated!
Or should I stick with the code at particular tags for the SPDK and RocksDB integration?
Thanks,
John Meneghini
ONTAP SAN Target Architect
978-930-3519 (cell)
johnm(a)netapp.com
_______________________________________________
SPDK mailing list -- spdk(a)lists.01.org
To unsubscribe send an email to spdk-leave(a)lists.01.org
1 year, 6 months
SPDK and RocksDB integration -- setup problem
by Meneghini, John
I have followed the instructions here -- https://spdk.io/doc/blobfs.html -- to set up RocksDB integration with the SPDK environment. I'm using top-of-tree SPDK from GitHub.
Here is the output from running scripts/gen_nvme.sh:
[[email protected]]$ scripts/gen_nvme.sh
0000:84:00.0 ....
{
"subsystem": "bdev",
"config": [
{
"method": "bdev_nvme_attach_controller",
"params": {
"trtype": "PCIe",
"name":"Nvme0",
"traddr":"0000:84:00.0"
}
}
]
}
And the output from scripts/setup.sh: (with some tracing)
[[email protected]]# HUGEMEM=5120 scripts/setup.sh
START ...>
----------
TARGET_USER .. defined.
lalsangi ..
OS is Linux! .. config
configure_linux_pci ..
vfio-pci ..
bdf related stuff ..
0000:84:00.0 (8086 0953): no driver -> vfio-pci
8086 0953
0000:84:00.0
1
end of config_linux_pci ..
config_linux_pci .. DONE.
-------------------------
hugetblfs_mounts ..
2560
Setting 2560 in /proc/sys/vm/nr_hugepages ..
/dev/hugepages
...............
Target user is: lalsangi
Password:
MEMLOCK_AMNT = 65536 ..
"lalsangi" user memlock limit: 64 MB
This is the maximum amount of memory you will be
able to use with DPDK and VFIO if run as user "lalsangi".
To change this, please adjust limits.conf memlock limit for user "lalsangi".
[[email protected]]#
And finally, the problem when I run 'test/blobfs/mkfs/mkfs /usr/local/etc/spdk/rocksdb.json Nvme0n1' (with some tracing).
(I tried different bdev names -- e.g., Nvme0 -- and hit the same problem.)
[[email protected] spdk]# test/blobfs/mkfs/mkfs /usr/local/etc/spdk/rocksdb.json Nvme0n1
[2020-12-14 21:30:05.201555] Launching mkfs, config_file: /usr/local/etc/spdk/rocksdb.json
[2020-12-14 21:30:05.201832] Starting SPDK v21.01-pre git sha1 602b134fa / DPDK 20.08.0 initialization...
[2020-12-14 21:30:05.201879] [ DPDK EAL parameters: [2020-12-14 21:30:05.201906] spdk_mkfs [2020-12-14 21:30:05.201928] --no-shconf [2020-12-14 21:30:05.201948] -c 0x3 [2020-12-14 21:30:05.201970] --log-level=lib.eal:6 [2020-12-14 21:30:05.201992] --log-level=lib.cryptodev:5 [2020-12-14 21:30:05.202012] --log-level=user1:6 [2020-12-14 21:30:05.202031] --base-virtaddr=0x200000000000 [2020-12-14 21:30:05.202051] --match-allocations [2020-12-14 21:30:05.202071] --file-prefix=spdk_pid103642 [2020-12-14 21:30:05.202092] ]
EAL: No available hugepages reported in hugepages-1048576kB
EAL: No legacy callbacks, legacy socket not created
[2020-12-14 21:30:05.250379] app.c: 465:spdk_app_start: *NOTICE*: Total cores available: 2
[2020-12-14 21:30:05.250411] Total cores available: 2
[2020-12-14 21:30:05.250538] spdk_reactor_init ..
[2020-12-14 21:30:05.347656] Reactor init done ..
[2020-12-14 21:30:05.347710] cpuset_set_cpu done ..
[2020-12-14 21:30:05.347719] spdk_thread_create init done ..
[2020-12-14 21:30:05.348539] spdk_mempool_get_bulk done ..
[2020-12-14 21:30:05.348556] Allowing new thread app_thread ..
[2020-12-14 21:30:05.348568] Ran the thread function app_thread ..
[2020-12-14 21:30:05.461055] send msg to run bootstrap_fn ..
[2020-12-14 21:30:05.461181] reactor.c: 701:reactor_run: *NOTICE*: Reactor started on core 1
[2020-12-14 21:30:05.461184] spdk_thread_create init done ..
[2020-12-14 21:30:05.462004] spdk_mempool_get_bulk done ..
[2020-12-14 21:30:05.462035] Allowing new thread reactor_1 ..
[2020-12-14 21:30:05.462045] Ran the thread function reactor_1 ..
[2020-12-14 21:30:05.462052] reactor.c: 701:reactor_run: *NOTICE*: Reactor started on core 0
[2020-12-14 21:30:05.462080] Load json config /usr/local/etc/spdk/rocksdb.json ..Inside spdk_json_parse ..
Inside spdk_json_parse ..
[2020-12-14 21:30:05.462126] Read the json config file /usr/local/etc/spdk/rocksdb.json ..
[2020-12-14 21:30:05.462155] Find the key_name: subsystem ..
[2020-12-14 21:30:05.462164] parsing token #??..
[2020-12-14 21:30:05.462172] key #??, value 0??
[2020-12-14 21:30:05.462180] json_config.c: 597:spdk_app_json_config_load: *WARNING*: No 'subsystems' key JSON configuration file.
[2020-12-14 21:30:05.462524] accel_engine.c: 692:spdk_accel_engine_initialize: *NOTICE*: Accel engine initialized to use software engine.
Initializing filesystem on bdev Nvme0n1...[2020-12-14 21:30:05.527735] bdev.c:5562:spdk_bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1
Failed to initialize filesystem on bdev Nvme0n1...done.
First, I noticed that the code expects the key name 'subsystems', but gen_nvme.sh created the JSON with the key 'subsystem'. I changed the code to match the key name, but the same issue remains.
Based on the above, it seems that either I'm making a silly mistake or something basic is not working.
Any hints / suggestions on debugging this problem are highly appreciated!
Or should I stick with the code at particular tags for the SPDK and RocksDB integration?
Thanks,
John Meneghini
ONTAP SAN Target Architect
978-930-3519 (cell)
johnm(a)netapp.com
1 year, 6 months
Is memory allocated by spdk_zmalloc() cached or uncached?
by wyqprince@126.com
Hey guys. I found that the memory allocated for an SQ/CQ pair comes from spdk_zmalloc(). I want to know whether that memory is cacheable. If it is, how does SPDK guarantee DMA-cache coherency?
1 year, 6 months