On Wed, Nov 30, 2016 at 12:45:58PM +0200, Haggai Eran wrote:
> > That just forces applications to handle horrible unexpected
> > failures. If this sort of thing is needed for correctness then OOM
> > kill the offending process, don't corrupt its operation.
> Yes, that sounds fine. Can we simply kill the process from the GPU
> driver? Or do we need to extend the OOM killer to manage GPU pages?

I don't know..
> > From what I understand we are not really talking about in-kernel
> > peer to peer at all; everything proposed so far is being mediated by
> > a userspace VMA, so I'd focus on making that work.
> Fair enough, although we will need both eventually, and I hope the
> infrastructure can be shared to some degree.

What use case do you see for in kernel?
> Two cases I can think of are RDMA access to an NVMe device's
> controller memory buffer,

I'm not sure on the use model there..

> and O_DIRECT operations that access GPU memory.

This goes through user space so there is still a VMA..
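To make that concrete, the O_DIRECT path pins the user pages through the VMA, so a GPU-backed VMA would be exercised at roughly this point (an uncompiled sketch of the ~4.9-era flow; `uaddr` and `NPAGES` are illustrative, error handling omitted):

```c
/* Sketch: how an O_DIRECT read pins user memory. If uaddr lies in
 * a VMA backed by GPU/MMIO pages, this is where p2p support has to
 * produce struct pages the block layer can use. */
struct page *pages[NPAGES];
long n;

/* Walks the VMA tree for current->mm and faults/pins the pages. */
n = get_user_pages(uaddr, NPAGES, FOLL_WRITE, pages, NULL);

/* The block layer then builds a bio/scatterlist from these pages
 * and DMA-maps them for the storage controller. */
```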
> Also, HMM's migration between two GPUs could use peer to peer in
> kernel, although that is intended to be handled by the GPU driver if
> I understand correctly.

Hum, presumably these migrations are VMA backed as well...
> > Presumably in-kernel could use a vmap or something and the same
> > basic flow?
> I think we can achieve the kernel's needs with ZONE_DEVICE and DMA-API
> support for peer to peer. I'm not sure we need vmap. We need a way to
> have a scatterlist of MMIO pfns, and ZONE_DEVICE allows that.
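A sketch of what that could look like: the driver hotplugs its BAR as ZONE_DEVICE memory, after which struct pages exist for the MMIO pfns and can be dropped into a scatterlist (uncompiled illustration; `dev` and `res` are assumed driver state, and a p2p-aware dma_map_sg() is exactly the missing piece this thread is about):

```c
/* Hotplug the device BAR (res) as ZONE_DEVICE memory; this creates
 * struct pages for the MMIO pfns without handing them to the buddy
 * allocator. */
void *addr = devm_memremap_pages(dev, res, NULL, NULL);

/* Build a scatterlist entry covering one MMIO page. */
struct scatterlist sg;
unsigned long pfn = PHYS_PFN(res->start);

sg_init_table(&sg, 1);
sg_set_page(&sg, pfn_to_page(pfn), PAGE_SIZE, 0);

/* Missing today: dma_map_sg() would need to recognize these pages
 * as peer MMIO and emit bus addresses / IOMMU mappings accordingly. */
```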
Well, if there is no virtual map then we are back to how do you do
migrations and other things people seem to want to do on these
pages. Maybe the loose 'struct page' flow is not for those users.

But I think if you want kGPU or similar then you probably need vmaps
or something similar to represent the GPU pages in kernel memory.
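For the kGPU-style case the vmap would just be the ordinary one over the ZONE_DEVICE struct pages (again a sketch; whether PAGE_KERNEL is even a sane protection for peer MMIO is one of the open questions here):

```c
/* Give the kernel a virtual mapping over GPU pages so in-kernel
 * code can dereference them like normal memory. */
struct page **pages;   /* GPU/ZONE_DEVICE pages, filled in by the driver */
unsigned int npages;
void *kva;

kva = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
if (!kva)
        return -ENOMEM;

/* ... in-kernel consumer works through kva ... */

vunmap(kva);
```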