Does the BlobFS asynchronous API support multi-threaded writing?
by chen.zhenghua@zte.com.cn
Hi everyone,
I ran a simple test of the BlobFS asynchronous API, using the SPDK event framework to run multiple tasks, each of which writes one file.
But it doesn't work: spdk_file_write_async() reported an error when resizing the file.
The call stack looks like this:
spdk_file_write_async() -> __readwrite() -> spdk_file_truncate_async() -> spdk_blob_resize()
The resize operation must be done on the metadata thread, i.e. the thread that invoked spdk_fs_load(), so only the task dispatched to the metadata CPU core works.
That is to say, only one thread can be used to write files. This is hard to use, and it may cause performance problems.
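For context, here is a stripped-down sketch of the per-core task in my test. The globals g_files, g_channels, g_buf, MAX_CORES, and the write_done callback are placeholders from my test code, not SPDK APIs; file open and channel allocation are omitted.

    #include "spdk/blobfs.h"
    #include "spdk/env.h"
    #include "spdk/event.h"

    #define MAX_CORES 64          /* placeholder sizing for this sketch */
    #define BUF_SIZE  (64 * 1024)

    /* One open file and one io_channel per core (spdk_fs_alloc_io_channel()
     * called on that core); assumes lcore < MAX_CORES. */
    static struct spdk_file *g_files[MAX_CORES];
    static struct spdk_io_channel *g_channels[MAX_CORES];
    static uint8_t g_buf[BUF_SIZE];

    static void
    write_done(void *ctx, int fserrno)
    {
            /* fserrno is non-zero on every core except the metadata core. */
    }

    static void
    write_task(void *arg1, void *arg2)
    {
            struct spdk_file *file = arg1;
            struct spdk_io_channel *channel = arg2;

            /* The write grows the file, so __readwrite() goes through
             * spdk_file_truncate_async() -> spdk_blob_resize(), which only
             * succeeds on the thread that called spdk_fs_load(). */
            spdk_file_write_async(file, channel, g_buf, 0, BUF_SIZE,
                                  write_done, NULL);
    }

    static void
    dispatch_writes(void)
    {
            uint32_t lcore;

            /* One event per core, each writing its own file. */
            SPDK_ENV_FOREACH_CORE(lcore) {
                    spdk_event_call(spdk_event_allocate(lcore, write_task,
                                                        g_files[lcore],
                                                        g_channels[lcore]));
            }
    }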
Does anyone know more about this?
Thanks very much.
RFC: NVMf namespace masking
by Jonas Pfefferle
Hi all,
I would be happy to get some feedback on my NVMf target namespace masking
implementation using attach/detach:
https://review.spdk.io/gerrit/c/spdk/spdk/+/7821
The patch introduces namespace masking for NVMe-over-Fabrics
targets by allowing controllers to be (dynamically) attached
to and detached from namespaces, cf. NVMe spec 1.4, section 6.1.4.
Since SPDK only supports the dynamic controller model, a new
controller is allocated on every fabrics Connect command.
This makes it possible to attach/detach controllers with a
specific host NQN to/from a namespace. A host can only perform
operations on an active namespace. Inactive namespaces can
be listed (not currently supported by SPDK), but no additional
information can be retrieved:
"Unless otherwise noted, specifying an inactive NSID in a
command that uses the Namespace Identifier (NSID) field shall
cause the controller to abort the command with status
Invalid Field in Command" - NVMe spec 1.4 - section 6.1.5
Note that this patch does not implement the NVMe Namespace
Attachment command; it only allows attaching/detaching via RPCs.
To preserve the current behavior, all controllers are auto-attached.
To not auto-attach controllers, nvmf_subsystem_add_ns must be
called with "--no-auto-attach". We introduce two new
RPC calls:
- nvmf_ns_attach_ctrlr <subsysNQN> <NSID> [--host <hostNQN>]
- nvmf_ns_detach_ctrlr <subsysNQN> <NSID> [--host <hostNQN>]
If no host NQN is specified, all controllers
(new and currently connected) will be attached to / detached
from the specified namespace.
The list in spdk_nvmf_ns is used to keep track of the host NQNs
whose controllers should be attached on connect.
The active_ns array in spdk_nvmf_ctrlr is used for a fast lookup
to check whether an NSID is active or inactive at command execution time.
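Conceptually, the per-command check then becomes a constant-time array lookup, roughly like the simplified sketch below (struct, field, and function names here are illustrative, not the actual code in the patch):

    #include <stdbool.h>
    #include <stdint.h>

    /* Simplified stand-in for the relevant part of spdk_nvmf_ctrlr; the
     * patch keeps an active_ns array per controller, but these names are
     * only illustrative. */
    struct example_ctrlr {
            uint32_t max_nsid;   /* highest NSID provisioned in the subsystem */
            bool    *active_ns;  /* active_ns[nsid - 1]: attached to this controller? */
    };

    /* Checked for every command carrying an NSID: if the namespace is
     * inactive for this controller, the command is aborted with
     * "Invalid Field in Command" (NVMe spec 1.4, section 6.1.5). */
    static inline bool
    ctrlr_nsid_is_active(const struct example_ctrlr *ctrlr, uint32_t nsid)
    {
            if (nsid == 0 || nsid > ctrlr->max_nsid) {
                    return false;
            }
            return ctrlr->active_ns[nsid - 1];
    }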
Thanks,
Jonas
On the resize of bdevs with iSCSI initiator backend or NVMf backend
by liushengchao02@meituan.com
Hello,
We are studying SPDK and preparing for remote storage applications.
In my testing, I found that bdevs created by bdev_iscsi_create or bdev_nvme_attach_controller are not resized accordingly when the connected remote target block devices are resized online.
However, when using the 'nvme connect' or 'iscsiadm' commands, the connected block devices are resized as the remote storage grows, and local users can keep using the connected remote storage without interruption (with the help of resize2fs or xfs_growfs).
I learned that SPDK supports resizing bdevs with a Ceph RBD or logical volume backend (i.e. bdevs created by bdev_rbd_create and bdev_lvol_create).
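My understanding is that those backends propagate the new size to the bdev layer with spdk_bdev_notify_blockcnt_change(). Below is a rough sketch of what an iSCSI/NVMe backend would have to do once it detects the remote resize; the helper name and the detection step are hypothetical, only the notify call is an existing API:

    #include "spdk/bdev_module.h"

    /* Hypothetical helper: called by a backend module (e.g. bdev_iscsi or
     * bdev_nvme) after it has somehow detected that the remote LUN or
     * namespace grew. The detection part is exactly what seems to be
     * missing today. */
    static int
    on_remote_size_change(struct spdk_bdev *bdev, uint64_t new_size_bytes)
    {
            uint64_t new_blockcnt = new_size_bytes / bdev->blocklen;

            /* Updates bdev->blockcnt and notifies the upper layers (lvol
             * store, iSCSI/NVMf targets, etc.) that this bdev was resized. */
            return spdk_bdev_notify_blockcnt_change(bdev, new_blockcnt);
    }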
Does SPDK support, or will it support, resizing bdevs with an iSCSI or NVMf backend when the connected remote target has been resized?
Thank you very much.
Clarification regarding VFIO driver bind in the setup.sh
by chandanga@google.com
I need some clarification regarding the VFIO setup script for SPDK (https://github.com/spdk/spdk/blob/master/scripts/setup.sh).
In the linux_bind_driver() function, we have the following lines:
echo "$ven_dev_id" > "/sys/bus/pci/drivers/$driver_name/new_id" 2> /dev/null || true
echo "$bdf" > "/sys/bus/pci/drivers/$driver_name/bind" 2> /dev/null || true
I'm wondering whether both steps are necessary. It looks like writing the vendor ID and device ID to "/sys/bus/pci/drivers/vfio-pci/new_id" automatically assigns all the VFs (which are not bound to the NVMe driver) to the VFIO driver, which would make the second line (echoing $bdf to the bind file) redundant. Is this correct? Is there a reason why both of these steps are required? Thanks.
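To make my reading of the two steps explicit, here is the same sequence written out in C, with my understanding in the comments; the vendor/device ID and BDF are example values only, not taken from the script:

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    /* Write a string to a sysfs attribute, returning 0 on success. */
    static int
    sysfs_write(const char *path, const char *value)
    {
            int fd = open(path, O_WRONLY);
            ssize_t rc;

            if (fd < 0) {
                    return -1;
            }
            rc = write(fd, value, strlen(value));
            close(fd);
            return rc < 0 ? -1 : 0;
    }

    int
    main(void)
    {
            /* Step 1: register the vendor/device ID with vfio-pci. As far
             * as I can tell, this already makes vfio-pci probe and claim
             * the matching VFs that are not bound to the NVMe driver. */
            sysfs_write("/sys/bus/pci/drivers/vfio-pci/new_id", "8086 0953");

            /* Step 2: explicitly bind one specific BDF. This is the step
             * that looks redundant to me if step 1 already claimed it. */
            sysfs_write("/sys/bus/pci/drivers/vfio-pci/bind", "0000:04:00.0");

            return 0;
    }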