I tested the BlobFS asynchronous API by using the SPDK event framework to run multiple tasks, each task writing one file.
But it doesn't work as expected: spdk_file_write_async() reports an error when resizing the file.
The call stack looks like this:
spdk_file_write_async() -> __readwrite() -> spdk_file_truncate_async() -> spdk_blob_resize()
The resize operation must be performed on the metadata thread, i.e. the one that invoked spdk_fs_load(), so only the task dispatched to the metadata CPU core succeeds.
That is to say, only one thread can be used to write files. It's hard to use, and performance issues may arise.
Does anyone know more about this?
Thanks very much.
Another possibility would be to use spdk_bdev_nvme_io_passthru and avoid trying to create a KV abstraction at all in the bdev layer. Then the bdev nvme module can just pass those operations straight through to a KV-capable NVMe SSD. Or a bdev module can implement a handler to emulate NVMe KV opcodes.
On 3/9/21, 6:31 AM, "Rodriguez, Edwin" <Ed.Rodriguez(a)netapp.com> wrote:
I've been implementing the NVMe-oF KV protocol and have a question about implementing the KV list operation. Currently, I've implemented it to return the same array of packed keys the KV protocol expects, but it strikes me as tying bdev too closely to the NVMe protocol.
A simple solution is to return one key per list operation, then iteratively increment the key and pass it back to bdev to search for the next one. The downside is that a key lookup could be expensive in the bdev implementation, and it would have to be repeated for each and every key.
Another solution is to return a list or array which the controller iterates over to fill in the NVMe list of keys; however, this requires the controller to estimate how many keys will be needed to fill the results buffer. Since keys are variable length and packed together, that poses a challenge.
And another solution would be to create bdev operations to iterate over the keys in the bdev, but that requires at least 3 operations - open, next and close and requires maintaining state somewhere, probably in the bdev_io itself.
What would be in keeping with the design philosophy of bdev? Keep it as is, returning an NVMe structure of packed keys, or something more abstract?
SPDK mailing list -- spdk(a)lists.01.org
To unsubscribe send an email to spdk-leave(a)lists.01.org
How can we export a file as a block device using SPDK?
Currently I can export an SPDK block device (Linux AIO or malloc) using nvmf_tgt or iscsi_tgt and access it from an initiator. But can anyone help me export a file as a block device to the initiator using nvmf_tgt or iscsi_tgt?
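One approach is to back an AIO bdev with a regular file (bdev_aio accepts a filename as well as a block device) and then attach that bdev to an NVMe-oF subsystem. The sketch below is hedged: RPC method names are as in recent SPDK releases and may differ in older versions, and the file path, bdev name, address, and NQN are placeholders.

```shell
# Hedged sketch: expose a regular file as an AIO bdev, then export it
# over NVMe-oF TCP. Path, names, NQN, and address are placeholders.
scripts/rpc.py bdev_aio_create /path/to/backing_file file0 4096
scripts/rpc.py nvmf_create_transport -t TCP
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 file0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 127.0.0.1 -s 4420
```

The same bdev can instead be attached to an iSCSI target via the iscsi_tgt RPCs; the key point is that once the file is wrapped in a bdev, either target exports it like any other block device.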
I am having a terrible time getting my head wrapped around the execution model. Probably the biggest problem is that I started out showing flight-simulator customers how to use interrupts instead of polling (because I worked for a company that did interrupts very, VERY well). This polling executive feels completely old-fashioned, the way things were done before interrupts and multiprocessors. But the basic benchmarks don't lie: I'm seeing about a 20% performance gain using SPDK over kernel drivers, so I've just got to deal with it and get with the program.
So I finally rearranged hello_world_bdev around a state machine maintained in a poller, where hello_write and hello_read are called when the right state is reached and the callbacks handle the asynchrony and update the state. This seems like a good model going forward as I scale from the single-channel prototype to a multi-channel reality, where the number of cores I have available does not exceed the number of channels in my application.
Is this a sensible use of a Poller?