We are already using emulated QEMU NVMe devices in the test pool - so
in lieu of adding new HW, we could
get at least some level of CMB WDS/RDS testing there. You could look at
test/lib/nvme/nvme.sh initially for
where to plumb something in. Of course, whatever tests get added need to be able to distinguish
a failure due to the lack of a CMB from a real CMB bug.
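For reference, newer QEMU builds can expose a CMB on the emulated NVMe device through the
cmb_size_mb property, so the nvme.sh plumbing may only need an invocation along these lines
(a sketch only - paths, sizes, and the rest of the VM options are placeholders, and the
property availability depends on the QEMU version in the test pool):

# Create a backing image and boot a VM with an emulated NVMe device
# exposing a 64MB CMB. Requires a QEMU whose "nvme" device supports
# the cmb_size_mb property.
qemu-img create -f raw /var/tmp/nvme_cmb.img 1G

qemu-system-x86_64 \
    -machine q35,accel=kvm -cpu host -smp 4 -m 4G \
    -drive file=/var/tmp/vm_root.qcow2,if=virtio \
    -drive file=/var/tmp/nvme_cmb.img,if=none,id=nvme_cmb0,format=raw \
    -device nvme,drive=nvme_cmb0,serial=cmbtest0001,cmb_size_mb=64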
OK I will go take a look. This could work well for us.
Good point. Testing this will be predicated on getting real HW.
For now, we should probably at least
have a warning message emitted when we find a CMB-enabled SSD with vfio enabled.
Well, I think we can test this using the vIOMMU in QEMU without needing real hardware, but
it might be tricky. I will look at putting an error path in place for now. Does anyone
know if SPDK has a helper function that can tell whether a PCIe device is under VFIO or UIO?
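If there isn't one, resolving the device's driver symlink in sysfs would probably be enough.
A rough sketch of what I mean (this is not an existing SPDK API - the helper name and the
BDF-string interface are made up for illustration):

/* Report which kernel driver a PCIe device (given as a "0000:01:00.0"
 * style BDF string) is currently bound to, by resolving its sysfs
 * driver symlink. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <limits.h>
#include <libgen.h>

/* Returns "vfio-pci", "uio_pci_generic", "nvme", etc., or NULL if no driver is bound. */
static const char *
pci_device_driver_name(const char *bdf, char *buf, size_t buf_len)
{
    char link_path[PATH_MAX];
    char target[PATH_MAX];
    ssize_t len;

    snprintf(link_path, sizeof(link_path), "/sys/bus/pci/devices/%s/driver", bdf);

    len = readlink(link_path, target, sizeof(target) - 1);
    if (len < 0) {
        return NULL;            /* no driver bound (or bad BDF) */
    }
    target[len] = '\0';

    /* The symlink points at .../drivers/<name>; keep just the last component. */
    snprintf(buf, buf_len, "%s", basename(target));
    return buf;
}

int
main(int argc, char **argv)
{
    char name[64];
    const char *drv = pci_device_driver_name(argc > 1 ? argv[1] : "0000:01:00.0",
                                             name, sizeof(name));

    printf("driver: %s\n", drv ? drv : "(none)");
    printf("vfio?   %s\n", (drv && strcmp(drv, "vfio-pci") == 0) ? "yes" : "no");
    return 0;
}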
4. Fabrics Support.
Step 1 would just be testing this I/O path to confirm it works. You’ve already added the
spdk_mem_register() calls which should register the CMB region with each RDMA NIC. So
first make sure that works.
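For illustration, the registration itself should amount to something like the sketch below
once the CMB BAR is mapped into the process. cmb_vaddr and cmb_size are placeholders for
however your patches expose the mapping, and error handling is kept minimal:

/* Register the mapped CMB region with SPDK's memory map so the RDMA
 * transport can treat it like any other data buffer. */
#include <stdio.h>
#include "spdk/env.h"

static int
register_cmb_with_spdk(void *cmb_vaddr, size_t cmb_size)
{
    int rc;

    /* spdk_mem_register() has alignment/size requirements on the region
     * (see spdk/env.h); a CMB BAR that isn't suitably aligned would need
     * to be trimmed before this call. */
    rc = spdk_mem_register(cmb_vaddr, cmb_size);
    if (rc != 0) {
        fprintf(stderr, "spdk_mem_register(CMB) failed: %d\n", rc);
        return rc;
    }

    printf("registered %zu bytes of CMB at %p\n", cmb_size, cmb_vaddr);
    return 0;
}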
Agreed, though I am very confident this works, as our kernel patches do something identical
to what SPDK would do.
rxe might be OK to start but really you’ll want a real RDMA NIC.
I don't think rxe would work the way we want since its "DMA engine" is
actually a memcpy() operation.
Then you could read into a CMB buffer and write to a remote NVMe
namespace using the SPDK NVMe-oF driver.
Then read it back into a different CMB buffer, etc.
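Roughly this kind of loop is what I have in mind, as a sketch only: it assumes the local
CMB-capable controller and the remote NVMe-oF controller have already been probed and
attached, that the qpairs exist, that cmb_buf_a/cmb_buf_b point into the registered CMB,
that both namespaces use the same LBA size, and it skips completion-status checking:

/* Step-1 data path: pull blocks from the local namespace into a CMB
 * buffer, push that buffer to a namespace on the remote NVMe-oF target,
 * then pull it back into a second CMB buffer and compare. */
#include <stdbool.h>
#include <string.h>
#include "spdk/nvme.h"

static void
io_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
    (void)cpl;                  /* status checking elided in this sketch */
    *(bool *)arg = true;
}

static void
wait_for(struct spdk_nvme_qpair *qpair, bool *done)
{
    while (!*done) {
        spdk_nvme_qpair_process_completions(qpair, 0);
    }
}

static int
cmb_roundtrip(struct spdk_nvme_ns *local_ns, struct spdk_nvme_qpair *local_qp,
              struct spdk_nvme_ns *remote_ns, struct spdk_nvme_qpair *remote_qp,
              void *cmb_buf_a, void *cmb_buf_b, uint32_t num_blocks)
{
    bool done;
    int rc;

    /* Local NVMe read lands directly in the first CMB buffer. */
    done = false;
    rc = spdk_nvme_ns_cmd_read(local_ns, local_qp, cmb_buf_a, 0, num_blocks,
                               io_done, &done, 0);
    if (rc != 0) {
        return rc;
    }
    wait_for(local_qp, &done);

    /* The RDMA transport sources the write payload straight from the CMB. */
    done = false;
    rc = spdk_nvme_ns_cmd_write(remote_ns, remote_qp, cmb_buf_a, 0, num_blocks,
                                io_done, &done, 0);
    if (rc != 0) {
        return rc;
    }
    wait_for(remote_qp, &done);

    /* Read it back into the second CMB buffer and compare. */
    done = false;
    rc = spdk_nvme_ns_cmd_read(remote_ns, remote_qp, cmb_buf_b, 0, num_blocks,
                               io_done, &done, 0);
    if (rc != 0) {
        return rc;
    }
    wait_for(remote_qp, &done);

    return memcmp(cmb_buf_a, cmb_buf_b,
                  (size_t)num_blocks * spdk_nvme_ns_get_sector_size(local_ns));
}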
Step 2 would be a lot more involved. In an ideal world, there’s
enough CMB space to replace all of the existing
host memory buffer pools used by the NVMe-oF target. If not - well, that’s where a lot
more work will be needed. :-)
Agreed. In the kernel patches we fall back to using host memory when we run out of CMB
space. We would need to do something similar here.
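The allocation path would presumably end up looking something like this (a sketch only:
spdk_nvme_ctrlr_alloc_cmb_io_buffer() stands in for whatever CMB allocator we end up with
here, and the bookkeeping needed to free each buffer back to the right pool is left out):

/* CMB-first buffer allocation with host-memory fallback, along the lines
 * of what the kernel patches do when the CMB runs out. */
#include <stdbool.h>
#include <stddef.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static void *
alloc_data_buffer(struct spdk_nvme_ctrlr *ctrlr, size_t size, bool *from_cmb)
{
    void *buf = spdk_nvme_ctrlr_alloc_cmb_io_buffer(ctrlr, size);

    if (buf != NULL) {
        *from_cmb = true;
        return buf;
    }

    /* CMB exhausted (or not present): fall back to a regular pinned
     * host-memory buffer, 4KB-aligned. */
    *from_cmb = false;
    return spdk_dma_zmalloc(size, 0x1000, NULL);
}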
Thanks Jim and Paul. Lots of good input here and enough to keep us busy for a while ;-).