On 11/30/2016 6:23 PM, Jason Gunthorpe wrote:
>> and O_DIRECT operations that access GPU memory.
> This goes through user space so there is still a VMA..
>> Also, HMM's migration between two GPUs could use peer to peer in the
>> kernel, although that is intended to be handled by the GPU driver if
>> I understand correctly.
> Hum, presumably these migrations are VMA backed as well...
I guess so.
>>> Presumably in-kernel could use a vmap or something and the
>> I think we can achieve the kernel's needs with ZONE_DEVICE and DMA-API support
>> for peer to peer. I'm not sure we need vmap. We need a way to have a scatterlist
>> of MMIO pfns, and ZONE_DEVICE allows that.
> Well, if there is no virtual map then we are back to how do you do
> migrations and other things people seem to want to do on these
> pages. Maybe the loose 'struct page' flow is not for those users.
I was thinking that kernel use cases would disallow migration, similar to how
non-ODP MRs would work. Either they are short-lived (like an O_DIRECT transfer)
or they can be long-lived but non-migratable (like perhaps a CMB staging buffer).
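To make the scatterlist point concrete, here is a rough, untested kernel-style sketch of what I have in mind, assuming the device BAR has already been hotplugged as ZONE_DEVICE memory (e.g. via devm_memremap_pages()); the helper name p2p_build_sgtable is made up for illustration:

```
/* Hypothetical sketch only -- not tested against any kernel tree.
 * Assumes 'pages' are ZONE_DEVICE struct pages covering a device BAR,
 * obtained after mapping the MMIO resource with devm_memremap_pages().
 */
static int p2p_build_sgtable(struct sg_table *sgt,
			     struct page **pages, unsigned int npages)
{
	struct scatterlist *sg;
	unsigned int i;
	int ret;

	ret = sg_alloc_table(sgt, npages, GFP_KERNEL);
	if (ret)
		return ret;

	/* Each entry carries an MMIO pfn via its ZONE_DEVICE struct page,
	 * so the DMA API can later map it for the peer device. */
	for_each_sg(sgt->sgl, sg, npages, i)
		sg_set_page(sg, pages[i], PAGE_SIZE, 0);

	return 0;
}
```

Because the pfns have struct pages behind them, nothing here needs a kernel virtual mapping; the DMA API only ever sees page/offset/length triples.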
> But I think if you want kGPU or similar then you probably need vmaps
> or something similar to represent the GPU pages in kernel memory.
Sometimes the GPU pages are simply inaccessible to the CPU, though.
In any case, I haven't thought about kGPU as a use-case.
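For the cases where the BAR pages *are* CPU-accessible, something like vmap() would indeed cover it; a sketch (untested, assuming a struct page array as above):

```
/* Sketch only: build a contiguous kernel virtual mapping over the
 * CPU-accessible BAR pages, with an uncached protection since this
 * is MMIO. Fails (returns NULL) if no vmalloc space is available. */
void *addr = vmap(pages, npages, VM_MAP,
		  pgprot_noncached(PAGE_KERNEL));
if (!addr)
	return -ENOMEM;
/* ... CPU reads/writes through addr ..., then vunmap(addr); */
```

But as noted, this only helps when the CPU can reach the memory at all.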