On Fri, 2017-06-30 at 18:09 +0530, Kumaraparameshwaran Rathnavel wrote:
I would like to get a few pointers on how RDMA is used in the NVMf target. The
statement below decides the queue depth of an NVMf qpair:
nvmf_min(max_rw_depth, addr->attr.max_qp_rd_atom)
Anything more than this will be queued, irrespective of the queue depth the
upper layer requests. The value of attr.max_qp_rd_atom is obtained by querying
the device. I see that on most NICs this value is 16. So does this mean that a
queue pair cannot have more outstanding RDMA requests than this value?
The queue depth calculations for RDMA are actually quite involved. RDMA has four
base operations - send, recv, read, and write - and the available queue depth
for read/write is different than for send/recv. RDMA send/recv operations are
used to send the NVMe commands and responses, while the RDMA read/write
operations are used to transfer data. It's just the read/write limit that is
usually 16 per queue pair, limited by the NIC capabilities. The send/recv limit
is much higher, with NICs typically supporting queues of anywhere from 1024 to
64k entries. Therefore, we typically report a queue depth of 128, and the target will
gather up to 128 total commands at a time. If the commands are all NVMe reads,
for example, the target will queue up the full 128 queue depth to the backing
SSD. However, it can only perform 16 simultaneous RDMA reads or writes per queue
pair, so commands often get queued in the RDMA layer waiting on that limit.
Please correct me if I am wrong.
SPDK mailing list