On 25.11.2016 at 20:32, Jason Gunthorpe wrote:
> On Fri, Nov 25, 2016 at 02:22:17PM +0100, Christian König wrote:
>>> Like you say below we have to handle short lived in the usual way, and
>>> that covers basically every device except IB MRs, including the
>>> command queue on a NVMe drive.
>> Well a problem which wasn't mentioned so far is that while GPUs do have a
>> page table to mirror the CPU page table, they usually can't recover from
>> page faults.
>> So what we do is making sure that all memory accessed by the GPU jobs stays
>> in place while those jobs run (pretty much the same pinning you do for the
>> DMA).
>
> Yes, it is DMA, so this is a valid approach.
>
> But, you don't need page faults from the GPU to do proper coherent
> page table mirroring. Basically when the driver submits the work to
> the GPU it 'faults' the pages into the CPU and mirror translation
> table (instead of pinning).
>
> Like in ODP, MMU notifiers/HMM are used to monitor for translation
> changes. If a change comes in the GPU driver checks if an executing
> command is touching those pages and blocks the MMU notifier until the
> command flushes, then unfaults the page (blocking future commands) and
> unblocks the mmu notifier.

Yeah, we have a function to "import" anonymous pages from a CPU pointer
which works exactly that way as well.

We call this "userptr" and it's just a combination of get_user_pages()
on command submission and making sure the returned list of pages stays
valid using an MMU notifier.
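
Roughly, the pattern looks like the sketch below. All the helper and
struct names here are made up for illustration, and both the
get_user_pages() and mmu_notifier signatures have changed between
kernel versions, so read this as a sketch of the idea, not as real
driver code:

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/mmu_notifier.h>

/* One userptr mapping: pages are grabbed at command submission and
 * kept valid by an MMU notifier. */
struct userptr_range {
        struct mmu_notifier mn;
        unsigned long start;    /* CPU VA of the mapping */
        unsigned long npages;
        struct page **pages;    /* filled by get_user_pages_fast() */
};

/* Hypothetical helpers: wait for the fences of all jobs touching the
 * range, then drop the page references and the GPU VM mapping. */
void wait_for_jobs_touching(struct userptr_range *range);
void drop_pages_and_gpu_mapping(struct userptr_range *range);

/* Command submission: fault in and reference every page up front. */
static int userptr_get_pages(struct userptr_range *range)
{
        /* ~4.9-era signature; newer kernels take gup_flags instead. */
        return get_user_pages_fast(range->start, range->npages,
                                   1 /* write */, range->pages);
}

/* The CPU page tables for [start, end) are about to change, so the
 * mirrored GPU translation must not be used any more. */
static void userptr_invalidate_range_start(struct mmu_notifier *mn,
                                           struct mm_struct *mm,
                                           unsigned long start,
                                           unsigned long end)
{
        struct userptr_range *range =
                container_of(mn, struct userptr_range, mn);

        if (end <= range->start ||
            start >= range->start + (range->npages << PAGE_SHIFT))
                return; /* no overlap with our mapping */

        /* Block the notifier until the hardware is done, exactly as
         * described above: wait for the command flush, then unmap. */
        wait_for_jobs_touching(range);
        drop_pages_and_gpu_mapping(range);
}

static const struct mmu_notifier_ops userptr_mn_ops = {
        .invalidate_range_start = userptr_invalidate_range_start,
};

The notifier gets registered against the process mm with
mmu_notifier_register(); the important property is that the
invalidation callback may block on GPU fences.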

The "big" problem with this approach is that it is horribly slow. I
mean seriously so slow that we actually can't use it for some of the
purposes we wanted to use it for.

> The code moving the page will move it and the next GPU command that
> needs it will refault it in the usual way, just like the CPU would.

And here comes the problem. The CPU does this on a page-by-page basis,
so it faults in only what is needed and everything else gets filled in
on demand. The result is that faulting a page is a relatively
lightweight operation.

But for GPU command submission we don't know beforehand which pages
might be accessed, so what we do is walk all possible pages and make
sure all of them are present.
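
To make the difference concrete, the submission path ends up doing
something like the walk below over every page the job could possibly
touch. Again purely illustrative, all names are made up:

struct job {
        unsigned long num_pages;  /* every page the job *might* touch */
        /* ... */
};

int make_page_present(struct job *job, unsigned long idx); /* fault + pin */
int run_on_gpu(struct job *job);

int submit_job(struct job *job)
{
        unsigned long i;
        int r;

        /* The CPU faults one page at a time, on demand. Without GPU
         * page faults we must instead walk the complete list up front,
         * even if the job ends up touching only a few of these pages. */
        for (i = 0; i < job->num_pages; i++) {
                r = make_page_present(job, i);
                if (r)
                        return r;
        }
        return run_on_gpu(job);
}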

Now as far as I understand it the I/O subsystem for example assumes
that it can easily change the CPU page tables without much overhead. So
for example when a page can't be modified it is temporarily marked as
readonly, AFAIK (you are probably way deeper into this than me, so
please confirm).

That absolutely kills any performance for GPU command submissions. We
have use cases where we practically ended up playing ping/pong between
the GPU driver trying to grab the page with get_user_pages() and
somebody else in the kernel marking it readonly.
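
In code the ping/pong looks roughly like the retry loop below, reusing
the names from the sketch above; read_notifier_seq() and
notifier_seq_changed() are made-up stand-ins for whatever invalidation
tracking the driver actually uses:

unsigned long read_notifier_seq(struct userptr_range *range);
int notifier_seq_changed(struct userptr_range *range, unsigned long seq);
void put_all_pages(struct userptr_range *range);

int pin_until_stable(struct userptr_range *range)
{
        unsigned long seq;
        int r, retry;

        do {
                seq = read_notifier_seq(range);

                /* Full walk over the whole range, expensive. */
                r = userptr_get_pages(range);
                if (r < 0)
                        return r;
                /* (a real driver would also handle a partial pin) */

                /* If somebody write-protected a page while we were
                 * walking, the notifier fired: throw everything away
                 * and start over. Under contention this loops. */
                retry = notifier_seq_changed(range, seq);
                if (retry)
                        put_all_pages(range);
        } while (retry);

        return 0;
}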

> This might be much more efficient since it optimizes for the common
> case of unchanging translation tables.

Yeah, completely agree. It works perfectly fine as long as you don't
have two drivers trying to mess with the same page.

> This assumes the commands are fairly short lived of course; the
> expectation of the mmu notifiers is that a flush is reasonably prompt.

Correct, this is another problem. GFX command submissions usually don't
take longer than a few milliseconds, but compute command submissions
can easily take multiple hours.

I can easily imagine what would happen when kswapd is blocked by a GPU
command submission for an hour or so while the system is under memory
pressure.

I've been thinking about this problem for about a year now and going in
circles for quite a while. So if you have ideas on this, even if they
sound totally crazy, feel free to bring them up.