I tested the BlobFS asynchronous API by using the SPDK event framework to execute multiple tasks, each of which writes one file.
But it doesn't work: spdk_file_write_async() reported an error when resizing the file.
The call stack looks like this:
spdk_file_write_async() -> __readwrite() -> spdk_file_truncate_async() -> spdk_blob_resize()
The resize operation must be done on the metadata thread that invoked spdk_fs_load(), so only the task dispatched to the metadata CPU core succeeds.
That is to say, only one thread can be used to write files. This is hard to use, and performance issues may arise.
Does anyone know more about this?
Thanks very much.
On behalf of the SPDK community I'm pleased to announce the release of SPDK 21.04!
This release contains the following new features:
- ZNS NVMe bdev: Added support for zoned namespaces to the NVMe bdev module. This builds on the work to support zoned namespaces in the NVMe driver in the previous release.
- NVMe PMR: Added support for the Persistent Memory Region feature to the NVMe driver.
- NVMe-oF ADQ: Added support for Application Device Queues (ADQ) to the NVMe-oF TCP initiator.
- RPM: Added script for building SPDK RPM packages. See https://spdk.io/doc/rpms.html.
Users updating from the previous release should note that a large number of deprecated APIs have been removed. The process for deprecation has been formalized in https://spdk.io/doc/deprecation.html, where a list of deprecation notices for future releases can be found.
The full changelog for this release is available at:
This release contains 730 commits from 49 authors with over 29k lines of code changed.
We'd especially like to recognize all of our first time contributors:
Krishna Kanth Reddy
Thanks to everyone for your contributions, participation, and effort!
I'm looking at using Linux's native NVMe multipathing with SPDK's nvmf
target where multiple SPDK instances are used.
As each subsystem issues controller IDs from the range [1, 0xFFEF], the
initial connection to each individual target ends up with controller ID
1. This leads to a clash for connections to the second and subsequent
targets, which then require multiple connection attempts to "cycle" past
the clashing IDs.
This issue was previously discussed in
but I don't think a resolution was agreed upon.
One approach would be to limit the controller ID range per subsystem
(which was also previously proposed); this was implemented for the Linux
kernel NVMe target in 5.7 as
Would you consider a pull request to implement a similar feature in
SPDK? An external orchestrator could then assign the subsystem in each
SPDK instance a non-overlapping controller ID range.
We are testing a new NVMe disk. In the worst case, the DMA completion latency of the NVMe disk may reach seconds. In this case, I am concerned that if a process exits abnormally during a DMA transfer, another process restarted afterwards may be allocated the memory that the unfinished DMA is still targeting. As a result, a memory overrun may occur. However, DPDK and SPDK do not appear to handle this problem. I would like to ask how SPDK considers this case.
Any feedback is welcome!
The merge window for SPDK 21.04 release will close by April 23rd.
Please ensure all patches you believe should be included in the release are merged to
the master branch by this date.
You can mark them by adding the hashtag '21.04' in Gerrit on those patches.
The current set of patches that are tagged and need to be reviewed can be seen here:
On April 23rd, a new branch 'v21.04.x' will be created, and a patch on it will be
tagged as the release candidate.
Then, by April 30th, the formal release will take place, tagging the last patch on the
branch as SPDK 21.04.
Between the release candidate and the formal release, only critical fixes will be
backported to the 'v21.04.x' branch.
Development can continue without disruption on the 'master' branch.