A) Each target gets its own transport. That means there would be two RDMA transports for
two targets, and each transport would allocate its own pool of independent resources.
B) Transports can be shared across targets. Two different targets can share a single RDMA
transport. They can't share connections or listen on the same addresses, of course,
but they share the same request/buffer pools internally.
I think that A is simple and always correct. For B, I don't think that different NVMe-oF
targets (each with different subsystems) should listen on the same <address, port>. If
they could, why would we have different NVMe-oF targets at all? Allowing the same address
to be configured would make management very confusing. Also, for B, I think that
sharing the buffers (with one global buffer pool, since there is one transport) is OK, but
you need to consider QoS among the different targets competing for those buffers.
From: Walker, Benjamin <benjamin.walker(a)intel.com>
Sent: Saturday, November 9, 2019 12:30 AM
Subject: [SPDK] Re: NVMe-oF and multiple targets
On Fri, 2019-11-08 at 18:11 +0200, Sasha Kotchubievsky wrote:
Could you elaborate on the use case for having multiple nvmf targets
in the same application?
Each NVMe-oF target defines a discovery service - i.e. when a client connects to the
network addresses where that target is listening, they'll discover all of the
subsystems in that target only. The cases for multiple targets that I've seen all
involve multiple networks (whether real or virtual) where different subsystems need to be
presented to each.
At first glance, I'd say it's better to share the same "transport"
between targets. Let's think about the NVMe-oF RDMA transport. This
transport opens a CQ per core/device and can use an SRQ per core/device. If,
in the same application, there are two different RDMA transports, the
application polls two different CQs on the same core. That's sub-optimal.
But, on the other hand, I can think of complex scenarios where
running two different instances of the same transport allows flexibility.
For example, a target application serves both "public" and "private" networks.
The first is between applications and storage. The second is
for the distributed storage itself. For example, if I look at the Ceph
approach, the nvmf target application receives a big chunk of data from
the application, calculates parity blocks, and distributes data + parity
blocks to other instances. In this scheme, each instance of the target
application receives/sends data from/to the application and receives/sends
data to other instances. For those two flows, the buffers and the SRQ/RQ
settings per QP can be different. Having two instances of the RDMA transport makes sense.
I think it's most likely that the connections for each target will end up on different
network devices. It's possible that two targets could share a single network device
with each listening on a different TCP/UDP port, but I believe that's the less likely
case. That means the connections on one target won't be able to share a CQ or an SRQ
with connections on the other target. The sharing would be limited to the allocated data
buffers and request objects.
For now, I'm leaning toward using separate transports for each target. I'll let
this discussion sit for a few days pending additional feedback, though.
On 07-Nov-19 8:25 PM, Walker, Benjamin wrote:
> Hi all,
> I've got an open design question that I wanted to run by the
> community regarding the NVMe-oF target. There are a few primitives
> that the library currently provides:
> 1) NVMe-oF transports are a networking transport abstraction. These
> have memory and request object pools and can create and destroy
> connections.
> 2) NVMe-oF poll groups are sets of NVMe-oF queue pairs (that aren't
> necessarily related to one another). A single poll group can
> aggregate connections from multiple different transports.
> 3) NVMe-oF subsystems are sets of related namespaces. A subsystem is
> effectively an access control list.
> 4) NVMe-oF controllers are network sessions. It's a set of NVMe-oF
> queue pairs
> (connections) that are all connected to the same subsystem and
> accessing the same namespaces.
> 6) NVMe-oF targets are a set of NVMe-oF subsystems, controllers, and
> poll groups. The target object really defines the NVMe-oF discovery service.
> Historically, the SPDK NVMe-oF target application created a single
> NVMe-oF target object. But we're trying to generalize the underlying nvmf
> library to support multiple targets so that a single application can
> support multiple discovery services for more complex use cases.
> My question is around how to map transports (#1) to targets (#6).
> There are two paths we could go down. I'll use the RDMA transport as
> an example.
> A) Each target gets its own transport. That means there would be two
> RDMA transports for two targets, and each transport would allocate
> its own pool of independent resources.
> B) Transports can be shared across targets. Two different targets
> can share a single RDMA transport. They can't share connections or
> listen on the same addresses, of course, but they share the same
> request/buffer pools internally.
> Does anyone have a strong opinion on what the right choice is here?
> SPDK mailing list -- spdk(a)lists.01.org To unsubscribe send an email
> to spdk-leave(a)lists.01.org