Hello, I've been watching this thread not as a kernel developer, but
as a user interested in peer-to-peer access between a network card
and a GPU. I believe that merging raw direct access with VMAs
overcomplicates things for our use case. We'll have a very large
camera streaming data at high throughput (up to 100 Gbps) to the GPU,
which will operate in soft real time mode and write back the results
to a RDMA enabled network storage. The CPU will only arrange the
connection between the GPU and the network card. Features like paging
or memory overcommit could be supported, but they are not required for
us, and they might consistently degrade the quality of the data
acquisition.
My use case seems likely to exist for others as well, which I see as a
strong reason to split the implementation in two.
2017-01-05 16:01 GMT-03:00 Jason Gunthorpe <jgunthorpe(a)obsidianresearch.com>:
> On Thu, Jan 05, 2017 at 01:39:29PM -0500, Jerome Glisse wrote:
>> 1) peer-to-peer because of userspace specific API like NVidia GPU
>> direct (AMD is pushing its own similar API i just can't remember
>> marketing name). This does not happen through a vma, this happens
>> through specific device driver call going through device specific
>> ioctl on both side (GPU and RDMA). So both kernel driver are aware
>> of each others.
>
> Today you can only do user-initiated RDMA operations in conjunction
> with a VMA.
>
> We'd need a really big and strong reason to create an entirely new
> non-VMA based memory handle scheme for RDMA.
>
> So my inclination is to just completely push back on this idea. You
> need a VMA to do RDMA.
>
> GPUs need to create VMAs for the memory they want to RDMA from, even
> if the VMA handle just causes SIGBUS for any CPU access.
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo(a)vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html