[PATCH v7 00/13] Copy Offload in NVMe Fabrics with P2P PCI Memory
by Logan Gunthorpe
Hi Everyone,
Here is version 7 of the PCI P2PDMA patch set. This version makes
a few minor changes from v6 and is based on v4.19-rc5. A git repo is here:
https://github.com/sbates130272/linux-p2pmem pci-p2p-v7
Now that we have Bjorn's Acks, I'd like to get Jens's Ack for
Patch 7, and then I'd like to propose merging this series through the NVMe
tree.
Thanks,
Logan
--
Changes in v7:
* Rebased on v4.19-rc5
* Fixed up commit message of patch 7 that was no longer accurate. (as
pointed out by Jens)
* Change the configfs to not use "auto" or "none" and instead just
use a 0/1/<pci_dev> (or boolean). This matches the existing
nvme-target configfs booleans. (Per Bjorn)
* A handful of other minor changes and edits that were noticed by Bjorn
* Collected Acks from Bjorn
Changes in v6:
* Rebased on v4.19-rc3
* Remove the changes to the common submit_bio() path and instead
set REQ_NOMERGE in the NVME target driver, when appropriate.
Per discussions with Jens and Christoph.
* Some minor grammar changes in the documentation as spotted by Randy.
Changes in v5:
* Rebased on v4.19-rc1
* Drop changing ACS settings in this patchset. Now, the code
will only allow P2P transactions between devices whose
downstream ports do not restrict P2P TLPs.
* Drop the REQ_PCI_P2PDMA block flag and instead use
is_pci_p2pdma_page() to tell if a request is P2P or not. In that
case we check for queue support and enforce using REQ_NOMERGE.
Per feedback from Christoph.
* Drop the pci_p2pdma_unmap_sg() function as it was empty and only
there for symmetry and compatibility with dma_unmap_sg. Per feedback
from Christoph.
* Split off the logic to handle enabling P2P in NVMe fabrics' configfs
into specific helpers in the p2pdma code. Per feedback from Christoph.
* A number of other minor cleanups and fixes as pointed out by
Christoph and others.
Changes in v4:
* Change the original upstream_bridges_match() function to
upstream_bridge_distance() which calculates the distance between two
devices as long as they are behind the same root port. This should
address Bjorn's concerns that the code was too focused on
being behind a single switch.
* The disable ACS function now disables ACS for all bridge ports instead
of switch ports (ie. those that had two upstream_bridge ports).
* Change the pci_p2pmem_alloc_sgl() and pci_p2pmem_free_sgl()
API to be more like sgl_alloc() in that the alloc function returns
the allocated scatterlist and nents is not required by the free
function.
* Moved the new documentation into the driver-api tree as requested
by Jonathan
* Add SGL alloc and free helpers in the nvmet code so that the
individual drivers can share the code that allocates P2P memory.
As requested by Christoph.
* Cleanup the nvmet_p2pmem_store() function as Christoph
thought my first attempt was ugly.
* Numerous commit message and comment fix-ups
Changes in v3:
* Many more fixes and minor cleanups that were spotted by Bjorn
* Additional explanation of the ACS change in both the commit message
and Kconfig doc. Also, the code that disables the ACS bits is surrounded
explicitly by an #ifdef
* Removed the flag we added to rdma_rw_ctx() in favour of using
is_pci_p2pdma_page(), as suggested by Sagi.
* Adjust pci_p2pmem_find() so that it prefers P2P providers that
are closest to (or the same as) the clients using them. In cases
of ties, the provider is randomly chosen.
* Modify the NVMe Target code so that the PCI device name of the provider
may be explicitly specified, bypassing the logic in pci_p2pmem_find().
(Note: it's still enforced that the provider must be behind the
same switch as the clients).
* As requested by Bjorn, added documentation for driver writers.
Changes in v2:
* Renamed everything to 'p2pdma' per the suggestion from Bjorn as well
as a bunch of cleanup and spelling fixes he pointed out in the last
series.
* To address Alex's ACS concerns, we change to a simpler method of
just disabling ACS behind switches for any kernel that has
CONFIG_PCI_P2PDMA.
* We also reject using devices that employ 'dma_virt_ops' which should
fairly simply handle Jason's concerns that this work might break with
the HFI, QIB and rxe drivers that use the virtual ops to implement
their own special DMA operations.
--
This is a continuation of our work to enable using Peer-to-Peer PCI
memory in the kernel with initial support for the NVMe fabrics target
subsystem. Many thanks go to Christoph Hellwig who provided valuable
feedback to get these patches to where they are today.
The concept here is to use memory that's exposed on a PCI BAR as
data buffers in the NVMe target code such that data can be transferred
from an RDMA NIC to the special memory and then directly to an NVMe
device avoiding system memory entirely. The upside of this is better
QoS for applications running on the CPU utilizing memory and lower
PCI bandwidth required to the CPU (such that systems could be designed
with fewer lanes connected to the CPU).
Due to these trade-offs we've designed the system to only enable using
the PCI memory in cases where the NIC, NVMe devices and memory are all
behind the same PCI switch hierarchy. This means many setups that would
likely work well will not be supported, but it lets us be more confident
the feature works without placing any responsibility on the user to
understand their topology. (We chose to go this route based on feedback
we received at the last LSF). Future work may enable these transfers
using a white list of known good root complexes. However, at this time,
there is no reliable way to ensure that Peer-to-Peer transactions are
permitted between PCI Root Ports.
For PCI P2P DMA transfers to work in this situation the ACS bits
must be disabled on the downstream ports (DSPs) for all devices
involved in the transfer. This can be done using the "disable_acs_redir"
PCI kernel command line option which was introduced in v4.19.
In order to enable PCI P2P functionality, we introduce a few new PCI
functions such that a driver can register P2P memory with the system.
Struct pages are created for this memory using devm_memremap_pages()
and the PCI bus offset is stored in the corresponding pagemap structure.
Another set of functions allows a client driver to create a list of
client devices that will be used in a given P2P transaction and then
use that list to find any P2P memory that is supported by all the
client devices.
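As a rough illustration of these interfaces, here is a sketch; the helper
names and signatures are taken from this series' description and should be
read as assumptions, not a settled API:

#include <linux/list.h>
#include <linux/pci.h>
#include <linux/pci-p2pdma.h>

/* Provider side: expose a BAR as P2P DMA memory (creates struct pages) */
static int example_register_p2pmem(struct pci_dev *pdev)
{
        int rc;

        rc = pci_p2pdma_add_resource(pdev, 4, pci_resource_len(pdev, 4), 0);
        if (rc)
                return rc;

        /* Make the memory visible to pci_p2pmem_find() users */
        pci_p2pmem_publish(pdev, true);
        return 0;
}

/* Client side: list the devices in the transfer and find usable memory */
static struct pci_dev *example_find_p2pmem(struct device *rdma_dev,
                                           struct device *nvme_dev)
{
        LIST_HEAD(clients);
        struct pci_dev *p2p_dev = NULL;

        if (pci_p2pdma_add_client(&clients, rdma_dev) ||
            pci_p2pdma_add_client(&clients, nvme_dev))
                goto out;

        /* Returns a provider whose memory all clients can reach, or NULL */
        p2p_dev = pci_p2pmem_find(&clients);
out:
        pci_p2pdma_client_list_free(&clients);
        return p2p_dev;
}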
In the block layer, we also introduce a flag for a request queue
to indicate a given queue supports targeting P2P memory. The driver
submitting bios must ensure that the queue supports P2P before
attempting to submit BIOs backed by P2P memory. Also, P2P requests
are marked to not be merged, since a non-homogeneous request would
complicate the DMA mapping requirements.
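A minimal sketch of those two submitter-side rules, assuming the queue flag
test helper from patch 7 is spelled blk_queue_pci_p2pdma() (treat the exact
names as assumptions):

#include <linux/blkdev.h>

/* Sketch only: submitting a bio whose pages come from P2P memory */
static int example_submit_p2p_bio(struct request_queue *q, struct bio *bio)
{
        /* The target queue must have advertised P2P support */
        if (!blk_queue_pci_p2pdma(q))
                return -EOPNOTSUPP;

        /* Keep P2P requests homogeneous: never merge with host-memory bios */
        bio->bi_opf |= REQ_NOMERGE;

        submit_bio(bio);
        return 0;
}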
In the PCI NVMe driver, we modify the existing CMB support to utilize
the new PCI P2P memory infrastructure and also add support for P2P
memory in its request queue. When a P2P request is received it uses the
pci_p2pmem_map_sg() function which applies the necessary transformation
to get the correct pci_bus_addr_t for the DMA transactions.
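Conceptually, the per-request mapping decision looks something like the
sketch below; pci_p2pmem_map_sg() is the helper named above, and its exact
signature here is an assumption based on this series:

#include <linux/dma-mapping.h>
#include <linux/mm.h>
#include <linux/pci-p2pdma.h>
#include <linux/scatterlist.h>

/* Sketch: pick the P2P mapping path when the pages are P2P memory */
static int example_map_request_sg(struct device *dev, struct scatterlist *sgl,
                                  int nents, enum dma_data_direction dir)
{
        /* P2P pages need the PCI bus offset applied; regular pages use the
         * normal DMA API. */
        if (is_pci_p2pdma_page(sg_page(sgl)))
                return pci_p2pmem_map_sg(dev, sgl, nents, dir);

        return dma_map_sg(dev, sgl, nents, dir);
}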
In the RDMA core, we also adjust rdma_rw_ctx_init() and
rdma_rw_ctx_destroy() to take a flags argument which indicates whether
to use the PCI P2P mapping functions or not. To avoid odd RDMA devices
that don't use the proper DMA infrastructure this code rejects using
any device that employs the dma_virt_ops implementation.
Finally, in the NVMe fabrics target port we introduce a new
configuration attribute: 'p2pmem'. When set to a true boolean, the port
will attempt to find P2P memory supported by the RDMA NIC and all namespaces.
It may also be set to a PCI device name to select a specific P2P
memory to use. If supported memory is found, it will be used in all IO
transfers. And if a port is using P2P memory, adding new namespaces that
are not supported by that memory will fail.
These patches have been tested on a number of Intel-based systems with
a variety of RDMA NICs (Mellanox, Broadcom, Chelsio), NVMe
SSDs (Intel, Seagate, Samsung) and p2pdma devices (Eideticom,
Microsemi, Chelsio and Everspin), using switches from both Microsemi
and Broadcom.
--
Logan Gunthorpe (13):
PCI/P2PDMA: Support peer-to-peer memory
PCI/P2PDMA: Add sysfs group to display p2pmem stats
PCI/P2PDMA: Add PCI p2pmem DMA mappings to adjust the bus offset
PCI/P2PDMA: Introduce configfs/sysfs enable attribute helpers
docs-rst: Add a new directory for PCI documentation
PCI/P2PDMA: Add P2P DMA driver writer's documentation
block: Add PCI P2P flag for request queue and check support for
requests
IB/core: Ensure we map P2P memory correctly in
rdma_rw_ctx_[init|destroy]()
nvme-pci: Use PCI p2pmem subsystem to manage the CMB
nvme-pci: Add support for P2P memory in requests
nvme-pci: Add a quirk for a pseudo CMB
nvmet: Introduce helper functions to allocate and free request SGLs
nvmet: Optionally use PCI P2P memory
Documentation/ABI/testing/sysfs-bus-pci | 24 +
Documentation/driver-api/index.rst | 2 +-
Documentation/driver-api/pci/index.rst | 21 +
Documentation/driver-api/pci/p2pdma.rst | 170 ++++
Documentation/driver-api/{ => pci}/pci.rst | 0
drivers/infiniband/core/rw.c | 11 +-
drivers/nvme/host/core.c | 4 +
drivers/nvme/host/nvme.h | 8 +
drivers/nvme/host/pci.c | 121 ++-
drivers/nvme/target/configfs.c | 36 +
drivers/nvme/target/core.c | 154 ++++
drivers/nvme/target/io-cmd-bdev.c | 3 +
drivers/nvme/target/nvmet.h | 15 +
drivers/nvme/target/rdma.c | 22 +-
drivers/pci/Kconfig | 17 +
drivers/pci/Makefile | 1 +
drivers/pci/p2pdma.c | 943 +++++++++++++++++++++
include/linux/blkdev.h | 3 +
include/linux/memremap.h | 6 +
include/linux/mm.h | 18 +
include/linux/pci-p2pdma.h | 123 +++
include/linux/pci.h | 4 +
22 files changed, 1652 insertions(+), 54 deletions(-)
create mode 100644 Documentation/driver-api/pci/index.rst
create mode 100644 Documentation/driver-api/pci/p2pdma.rst
rename Documentation/driver-api/{ => pci}/pci.rst (100%)
create mode 100644 drivers/pci/p2pdma.c
create mode 100644 include/linux/pci-p2pdma.h
--
2.19.0
[GIT PULL] libnvdimm/dax fixes for 4.19-rc6
by Williams, Dan J
Hi Greg, please pull from...
git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm libnvdimm-fixes
...to receive:
* (2) fixes for the dax error handling updates that were merged for
v4.19-rc1. My mails to Al have been bouncing recently, so I do not have
his ack but the uaccess change is of the trivial / obviously correct
variety. The address_space_operations change fixes a regression.
* A filesystem-dax fix to correct the zero page lookup to be compatible
with non-x86 (mips and s390) architectures.
Arguably only the address_space_operations fix is urgent for -rc6, the
others can reasonably wait, but I see no reason to hold them back. This
has all appeared in -next with no reported issues. The full diff is
small and included below.
---
The following changes since commit 11da3a7f84f19c26da6f86af878298694ede0804:
Linux 4.19-rc3 (2018-09-09 17:26:43 -0700)
are available in the Git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm libnvdimm-fixes
for you to fetch changes up to 41c9b1be335b5afc3b5fb71c5d16f9d5939cd13f:
device-dax: Add missing address_space_operations (2018-09-22 09:07:33 -0700)
----------------------------------------------------------------
Dave Jiang (2):
uaccess: Fix is_source param for check_copy_size() in copy_to_iter_mcsafe()
device-dax: Add missing address_space_operations
Matthew Wilcox (1):
filesystem-dax: Fix use of zero page
drivers/dax/device.c | 6 ++++++
fs/dax.c | 13 ++-----------
include/linux/uio.h | 2 +-
3 files changed, 9 insertions(+), 12 deletions(-)
---
diff --git a/drivers/dax/device.c b/drivers/dax/device.c
index bbe4d72ca105..948806e57cee 100644
--- a/drivers/dax/device.c
+++ b/drivers/dax/device.c
@@ -535,6 +535,11 @@ static unsigned long dax_get_unmapped_area(struct file *filp,
return current->mm->get_unmapped_area(filp, addr, len, pgoff, flags);
}
+static const struct address_space_operations dev_dax_aops = {
+ .set_page_dirty = noop_set_page_dirty,
+ .invalidatepage = noop_invalidatepage,
+};
+
static int dax_open(struct inode *inode, struct file *filp)
{
struct dax_device *dax_dev = inode_dax(inode);
@@ -544,6 +549,7 @@ static int dax_open(struct inode *inode, struct file *filp)
dev_dbg(&dev_dax->dev, "trace\n");
inode->i_mapping = __dax_inode->i_mapping;
inode->i_mapping->host = __dax_inode;
+ inode->i_mapping->a_ops = &dev_dax_aops;
filp->f_mapping = inode->i_mapping;
filp->f_wb_err = filemap_sample_wb_err(filp->f_mapping);
filp->private_data = dev_dax;
diff --git a/fs/dax.c b/fs/dax.c
index f32d7125ad0f..b68ce484e1be 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1120,21 +1120,12 @@ static vm_fault_t dax_load_hole(struct address_space *mapping, void *entry,
{
struct inode *inode = mapping->host;
unsigned long vaddr = vmf->address;
- vm_fault_t ret = VM_FAULT_NOPAGE;
- struct page *zero_page;
- pfn_t pfn;
-
- zero_page = ZERO_PAGE(0);
- if (unlikely(!zero_page)) {
- ret = VM_FAULT_OOM;
- goto out;
- }
+ pfn_t pfn = pfn_to_pfn_t(my_zero_pfn(vaddr));
+ vm_fault_t ret;
- pfn = page_to_pfn_t(zero_page);
dax_insert_mapping_entry(mapping, vmf, entry, pfn, RADIX_DAX_ZERO_PAGE,
false);
ret = vmf_insert_mixed(vmf->vma, vaddr, pfn);
-out:
trace_dax_load_hole(inode, vmf, ret);
return ret;
}
diff --git a/include/linux/uio.h b/include/linux/uio.h
index 409c845d4cd3..422b1c01ee0d 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -172,7 +172,7 @@ size_t copy_from_iter_flushcache(void *addr, size_t bytes, struct iov_iter *i)
static __always_inline __must_check
size_t copy_to_iter_mcsafe(void *addr, size_t bytes, struct iov_iter *i)
{
- if (unlikely(!check_copy_size(addr, bytes, false)))
+ if (unlikely(!check_copy_size(addr, bytes, true)))
return 0;
else
return _copy_to_iter_mcsafe(addr, bytes, i);
[PATCH v6 00/13] Copy Offload in NVMe Fabrics with P2P PCI Memory
by Logan Gunthorpe
Hi Everyone,
Here is version 6 of the PCI P2PDMA patch set. This version makes
a few minor changes from v5 and is based on v4.19-rc3. A git repo is here:
https://github.com/sbates130272/linux-p2pmem pci-p2p-v6
Thanks,
Logan
--
Changes in v6:
* Rebased on v4.19-rc3
* Remove the changes to the common submit_bio() path and instead
set REQ_NOMERGE in the NVME target driver, when appropriate.
Per discussions with Jens and Christoph.
* Some minor grammar changes in the documentation as spotted by Randy.
Changes in v5:
* Rebased on v4.19-rc1
* Drop changing ACS settings in this patchset. Now, the code
will only allow P2P transactions between devices whose
downstream ports do not restrict P2P TLPs.
* Drop the REQ_PCI_P2PDMA block flag and instead use
is_pci_p2pdma_page() to tell if a request is P2P or not. In that
case we check for queue support and enforce using REQ_NOMERGE.
Per feedback from Christoph.
* Drop the pci_p2pdma_unmap_sg() function as it was empty and only
there for symmetry and compatibility with dma_unmap_sg. Per feedback
from Christoph.
* Split off the logic to handle enabling P2P in NVMe fabrics' configfs
into specific helpers in the p2pdma code. Per feedback from Christoph.
* A number of other minor cleanups and fixes as pointed out by
Christoph and others.
Changes in v4:
* Change the original upstream_bridges_match() function to
upstream_bridge_distance() which calculates the distance between two
devices as long as they are behind the same root port. This should
address Bjorn's concerns that the code was too focused on
being behind a single switch.
* The disable ACS function now disables ACS for all bridge ports instead
of switch ports (ie. those that had two upstream_bridge ports).
* Change the pci_p2pmem_alloc_sgl() and pci_p2pmem_free_sgl()
API to be more like sgl_alloc() in that the alloc function returns
the allocated scatterlist and nents is not required by the free
function.
* Moved the new documentation into the driver-api tree as requested
by Jonathan
* Add SGL alloc and free helpers in the nvmet code so that the
individual drivers can share the code that allocates P2P memory.
As requested by Christoph.
* Cleanup the nvmet_p2pmem_store() function as Christoph
thought my first attempt was ugly.
* Numerous commit message and comment fix-ups
Changes in v3:
* Many more fixes and minor cleanups that were spotted by Bjorn
* Additional explanation of the ACS change in both the commit message
and Kconfig doc. Also, the code that disables the ACS bits is surrounded
explicitly by an #ifdef
* Removed the flag we added to rdma_rw_ctx() in favour of using
is_pci_p2pdma_page(), as suggested by Sagi.
* Adjust pci_p2pmem_find() so that it prefers P2P providers that
are closest to (or the same as) the clients using them. In cases
of ties, the provider is randomly chosen.
* Modify the NVMe Target code so that the PCI device name of the provider
may be explicitly specified, bypassing the logic in pci_p2pmem_find().
(Note: it's still enforced that the provider must be behind the
same switch as the clients).
* As requested by Bjorn, added documentation for driver writers.
Changes in v2:
* Renamed everything to 'p2pdma' per the suggestion from Bjorn as well
as a bunch of cleanup and spelling fixes he pointed out in the last
series.
* To address Alex's ACS concerns, we change to a simpler method of
just disabling ACS behind switches for any kernel that has
CONFIG_PCI_P2PDMA.
* We also reject using devices that employ 'dma_virt_ops' which should
fairly simply handle Jason's concerns that this work might break with
the HFI, QIB and rxe drivers that use the virtual ops to implement
their own special DMA operations.
--
This is a continuation of our work to enable using Peer-to-Peer PCI
memory in the kernel with initial support for the NVMe fabrics target
subsystem. Many thanks go to Christoph Hellwig who provided valuable
feedback to get these patches to where they are today.
The concept here is to use memory that's exposed on a PCI BAR as
data buffers in the NVMe target code such that data can be transferred
from an RDMA NIC to the special memory and then directly to an NVMe
device avoiding system memory entirely. The upside of this is better
QoS for applications running on the CPU utilizing memory and lower
PCI bandwidth required to the CPU (such that systems could be designed
with fewer lanes connected to the CPU).
Due to these trade-offs we've designed the system to only enable using
the PCI memory in cases where the NIC, NVMe devices and memory are all
behind the same PCI switch hierarchy. This means many setups that would
likely work well will not be supported, but it lets us be more confident
the feature works without placing any responsibility on the user to
understand their topology. (We chose to go this route based on feedback
we received at the last LSF). Future work may enable these transfers
using a white list of known good root complexes. However, at this time,
there is no reliable way to ensure that Peer-to-Peer transactions are
permitted between PCI Root Ports.
For PCI P2P DMA transfers to work in this situation the ACS bits
must be disabled on the downstream ports (DSPs) for all devices
involved in the transfer. This can be done using the "disable_acs_redir"
PCI kernel command line option which was introduced in v4.19.
In order to enable PCI P2P functionality, we introduce a few new PCI
functions such that a driver can register P2P memory with the system.
Struct pages are created for this memory using devm_memremap_pages()
and the PCI bus offset is stored in the corresponding pagemap structure.
Another set of functions allows a client driver to create a list of
client devices that will be used in a given P2P transaction and then
use that list to find any P2P memory that is supported by all the
client devices.
In the block layer, we also introduce a flag for a request queue
to indicate a given queue supports targeting P2P memory. The driver
submitting bios must ensure that the queue supports P2P before
attempting to submit BIOs backed by P2P memory. Also, P2P requests
are marked to not be merged, since a non-homogeneous request would
complicate the DMA mapping requirements.
In the PCI NVMe driver, we modify the existing CMB support to utilize
the new PCI P2P memory infrastructure and also add support for P2P
memory in its request queue. When a P2P request is received it uses the
pci_p2pmem_map_sg() function which applies the necessary transformation
to get the correct pci_bus_addr_t for the DMA transactions.
In the RDMA core, we also adjust rdma_rw_ctx_init() and
rdma_rw_ctx_destroy() to take a flags argument which indicates whether
to use the PCI P2P mapping functions or not. To avoid odd RDMA devices
that don't use the proper DMA infrastructure this code rejects using
any device that employs the dma_virt_ops implementation.
Finally, in the NVMe fabrics target port we introduce a new
configuration attribute: 'p2pmem'. When set to a true boolean, the port
will attempt to find P2P memory supported by the RDMA NIC and all namespaces.
It may also be set to a PCI device name to select a specific P2P
memory to use. If supported memory is found, it will be used in all IO
transfers. And if a port is using P2P memory, adding new namespaces that
are not supported by that memory will fail.
These patches have been tested on a number of Intel-based systems with
a variety of RDMA NICs (Mellanox, Broadcom, Chelsio), NVMe
SSDs (Intel, Seagate, Samsung) and p2pdma devices (Eideticom,
Microsemi, Chelsio and Everspin), using switches from both Microsemi
and Broadcom.
--
Logan Gunthorpe (13):
PCI/P2PDMA: Support peer-to-peer memory
PCI/P2PDMA: Add sysfs group to display p2pmem stats
PCI/P2PDMA: Add PCI p2pmem DMA mappings to adjust the bus offset
PCI/P2PDMA: Introduce configfs/sysfs enable attribute helpers
docs-rst: Add a new directory for PCI documentation
PCI/P2PDMA: Add P2P DMA driver writer's documentation
block: Add PCI P2P flag for request queue and check support for
requests
IB/core: Ensure we map P2P memory correctly in
rdma_rw_ctx_[init|destroy]()
nvme-pci: Use PCI p2pmem subsystem to manage the CMB
nvme-pci: Add support for P2P memory in requests
nvme-pci: Add a quirk for a pseudo CMB
nvmet: Introduce helper functions to allocate and free request SGLs
nvmet: Optionally use PCI P2P memory
Documentation/ABI/testing/sysfs-bus-pci | 25 +
Documentation/driver-api/index.rst | 2 +-
Documentation/driver-api/pci/index.rst | 21 +
Documentation/driver-api/pci/p2pdma.rst | 170 ++++
Documentation/driver-api/{ => pci}/pci.rst | 0
drivers/infiniband/core/rw.c | 11 +-
drivers/nvme/host/core.c | 4 +
drivers/nvme/host/nvme.h | 8 +
drivers/nvme/host/pci.c | 121 ++-
drivers/nvme/target/configfs.c | 36 +
drivers/nvme/target/core.c | 154 ++++
drivers/nvme/target/io-cmd-bdev.c | 3 +
drivers/nvme/target/nvmet.h | 15 +
drivers/nvme/target/rdma.c | 22 +-
drivers/pci/Kconfig | 17 +
drivers/pci/Makefile | 1 +
drivers/pci/p2pdma.c | 941 +++++++++++++++++++++
include/linux/blkdev.h | 3 +
include/linux/memremap.h | 6 +
include/linux/mm.h | 18 +
include/linux/pci-p2pdma.h | 124 +++
include/linux/pci.h | 4 +
22 files changed, 1652 insertions(+), 54 deletions(-)
create mode 100644 Documentation/driver-api/pci/index.rst
create mode 100644 Documentation/driver-api/pci/p2pdma.rst
rename Documentation/driver-api/{ => pci}/pci.rst (100%)
create mode 100644 drivers/pci/p2pdma.c
create mode 100644 include/linux/pci-p2pdma.h
--
2.19.0
Re: [PATCH v2 05/17] compat_ioctl: move more drivers to generic_compat_ioctl_ptrarg
by Arnd Bergmann
On Mon, Sep 24, 2018 at 10:35 PM Jason Gunthorpe <jgg(a)ziepe.ca> wrote:
> On Mon, Sep 24, 2018 at 10:18:52PM +0200, Arnd Bergmann wrote:
> > On Tue, Sep 18, 2018 at 7:59 PM Jason Gunthorpe <jgg(a)ziepe.ca> wrote:
> > > On Tue, Sep 18, 2018 at 10:51:08AM -0700, Darren Hart wrote:
> > > > On Fri, Sep 14, 2018 at 09:57:48PM +0100, Al Viro wrote:
> > > > > On Fri, Sep 14, 2018 at 01:35:06PM -0700, Darren Hart wrote:
> > We already do this inside of some subsystems, notably drivers/media/,
> > and it simplifies the implementation of the ioctl handler function
> > significantly. We obviously cannot do this in general, both because of
> > traditional drivers that have 16-bit command codes (drivers/tty and others)
> > and also because of drivers that by accident defined the commands
> > incorrectly and use the wrong type or the wrong direction in the
> > definition.
>
> That could work well, but the first idea could be done globally and
> mechanically, while this would require very careful per-driver
> investigation.
>
> Particularly if the core code has worse performance.. ie due to
> kmalloc calls or something.
>
> I think it would make more sense to start by having the core do the
> case to __user and then add another entry point to have the core do
> the copy_from_user, and so on.
Having six separate callback pointers to implement a single
system call seems a bit excessive though.
Arnd
[PATCH v8 00/12] Adding security support for nvdimm
by Dave Jiang
The following series implements security support for nvdimm. Mostly adding
new security DSM support from the Intel NVDIMM DSM spec v1.7, but also
adding generic support to libnvdimm for other vendors. The most important
security features are unlocking locked nvdimms, and updating/setting security
passphrase to nvdimms.
Security folks, thanks in advance for taking a look at my key management
implementation and making sure that I'm doing something sane. Mainly you'll
want to review patches 2, 4, and 5 as most relevant ones that need scrutiny.
v8:
- Make the keys retained by the kernel user searchable in order to find the
key that needs to be updated for key update.
v7:
- Add CONFIG_KEYS dependency for libnvdimm. (Alison)
- Export lookup_user_key(). (David)
- Modified "update" to take two key ids and and use lookup_user_key() in
order to improve security. (David)
- Use key ptrs and key_validate() for cached keys. (David)
v6:
- Fix intel DSM data structures to use defined size for passphrase (Robert)
- Fix memcpy size to use sizeof data structure member (Robert)
- Fix defined dimm id length (Robert)
- Making intel_security_ops const (Eric)
- Remove unused var in nvdimm_key_search() (Eric)
- Added wbinvd before secure erase is issued (Robert)
- Removed key_put_sync() usage (David)
- Use init_cred instead of creating own cred (David)
- Exported init_cred symbol
- Move keys to a dedicated keyring (David)
- Use logon_key_type and friends instead of creating custom (David)
- Use key_lookup() with stored key serial (David)
- Exported key_lookup() symbol
- Mark passed in key data as const (David)
- Added comment for change_pass_phrase to explain how it works (David)
- Unlink key when it's being removed from keyring. (David)
- Removed request_key() from all security ops except update and unlock.
- Update will now update the existing key's payload with the new key's
retrieved from userspace when the new payload is accepted by nvdimm.
v5:
- Moved dimm_id initialization (Dan)
- Added a key_put_sync() in order to run key_gc_work and cleanup old key. (Dan)
- Added check to block security state changes while DIMM is active. (Dan)
v4:
- flip payload layout for update passphrase to make it easier on userland.
v3:
- Set x86 wrappers for x86 only bits. (Dan)
- Fixed up some verbiage in commit headers.
- Put in usage of sysfs_streq() for sysfs inputs.
- 0-day build fixes for non-x86 archs.
v2:
- Move inclusion of intel.h to relevant source files and not in nfit.h. (Dan)
- Moved security ring relevant code to dimm_devs.c. (Dan)
- Added dimm_id to nfit_mem to avoid recreate per sysfs show call. (Dan)
- Added routine to return security_ops based on family supplied. (Dan)
- Added nvdimm_key_data struct to wrap raw passphrase string. (Dan)
- Allocate firmware package on stack. (Dan)
- Added missing frozen state detection when retrieving security state.
---
Dave Jiang (12):
nfit: add support for Intel DSM 1.7 commands
libnvdimm: create keyring to store security keys
nfit/libnvdimm: store dimm id as a member to struct nvdimm
keys: export lookup_user_key to external users
nfit/libnvdimm: add unlock of nvdimm support for Intel DIMMs
nfit/libnvdimm: add set passphrase support for Intel nvdimms
nfit/libnvdimm: add disable passphrase support to Intel nvdimm.
nfit/libnvdimm: add freeze security support to Intel nvdimm
nfit/libnvdimm: add support for issue secure erase DSM to Intel nvdimm
nfit_test: add context to dimm_dev for nfit_test
nfit_test: add test support for Intel nvdimm security DSMs
libnvdimm: add documentation for nvdimm security support
Documentation/nvdimm/security.txt | 82 ++++++
drivers/acpi/nfit/Makefile | 1
drivers/acpi/nfit/core.c | 58 +++-
drivers/acpi/nfit/intel.c | 382 +++++++++++++++++++++++++++
drivers/acpi/nfit/intel.h | 82 ++++++
drivers/acpi/nfit/nfit.h | 20 +
drivers/nvdimm/Kconfig | 1
drivers/nvdimm/bus.c | 2
drivers/nvdimm/core.c | 7
drivers/nvdimm/dimm.c | 7
drivers/nvdimm/dimm_devs.c | 529 +++++++++++++++++++++++++++++++++++++
drivers/nvdimm/nd-core.h | 6
drivers/nvdimm/nd.h | 2
include/linux/key.h | 3
include/linux/libnvdimm.h | 42 +++
kernel/cred.c | 1
security/keys/internal.h | 2
security/keys/process_keys.c | 1
tools/testing/nvdimm/Kbuild | 1
tools/testing/nvdimm/test/nfit.c | 227 +++++++++++++++-
20 files changed, 1420 insertions(+), 36 deletions(-)
create mode 100644 Documentation/nvdimm/security.txt
create mode 100644 drivers/acpi/nfit/intel.c
create mode 100644 drivers/acpi/nfit/intel.h
--
Re: [PATCH v2 05/17] compat_ioctl: move more drivers to generic_compat_ioctl_ptrarg
by Arnd Bergmann
On Tue, Sep 18, 2018 at 7:59 PM Jason Gunthorpe <jgg(a)ziepe.ca> wrote:
>
> On Tue, Sep 18, 2018 at 10:51:08AM -0700, Darren Hart wrote:
> > On Fri, Sep 14, 2018 at 09:57:48PM +0100, Al Viro wrote:
> > > On Fri, Sep 14, 2018 at 01:35:06PM -0700, Darren Hart wrote:
> > >
> > > > Acked-by: Darren Hart (VMware) <dvhart(a)infradead.org>
> > > >
> > > > As for a longer term solution, would it be possible to init fops in such
> > > > a way that the compat_ioctl call defaults to generic_compat_ioctl_ptrarg
> > > > so we don't have to duplicate this boilerplate for every ioctl fops
> > > > structure?
> > >
> > > Bad idea, that... Because several years down the road somebody will add
> > > an ioctl that takes an unsigned int for argument. Without so much as looking
> > > at your magical mystery macro being used to initialize file_operations.
> >
> > Fair, being explicit in the declaration as it is currently may be
> > preferable then.
>
> It would be much cleaner and safer if you could arrange things to add
> something like this to struct file_operations:
>
> long (*ptr_ioctl) (struct file *, unsigned int, void __user *);
>
> Where the core code automatically converts the unsigned long to the
> void __user * as appropriate.
>
> Then it just works right always and the compiler will help address
> Al's concern down the road.
I think if we wanted to do this with a new file operation, the best
way would be to do the copy_from_user()/copy_to_user() in the caller
as well.
We already do this inside of some subsystems, notably drivers/media/,
and it simplifies the implementation of the ioctl handler function
significantly. We obviously cannot do this in general, both because of
traditional drivers that have 16-bit command codes (drivers/tty and others)
and also because of drivers that by accident defined the commands
incorrectly and use the wrong type or the wrong direction in the
definition.
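For what it's worth, the pattern being discussed, with the core doing the
cast and the user copy, could look roughly like the following hypothetical
sketch (the data_ioctl handler type is invented for illustration and is not
existing kernel API):

#include <linux/fs.h>
#include <linux/ioctl.h>
#include <linux/uaccess.h>

/* Hypothetical handler type: the core hands it an already-copied buffer */
typedef long (*data_ioctl_fn)(struct file *file, unsigned int cmd, void *data);

static long example_do_data_ioctl(struct file *file, unsigned int cmd,
                                  unsigned long arg, data_ioctl_fn handler)
{
        char buf[128];                  /* assumes _IOC_SIZE(cmd) fits */
        unsigned int size = _IOC_SIZE(cmd);
        long ret;

        if (size > sizeof(buf))
                return -ENOTTY;

        if ((_IOC_DIR(cmd) & _IOC_WRITE) &&
            copy_from_user(buf, (void __user *)arg, size))
                return -EFAULT;

        ret = handler(file, cmd, buf);

        if (ret >= 0 && (_IOC_DIR(cmd) & _IOC_READ) &&
            copy_to_user((void __user *)arg, buf, size))
                return -EFAULT;

        return ret;
}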
Arnd
[PATCH] libnvdimm: remove duplicate include
by Pankaj Gupta
Removed duplicate include.
Signed-off-by: Pankaj Gupta <pagupta(a)redhat.com>
---
drivers/nvdimm/nd-core.h | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/nvdimm/nd-core.h b/drivers/nvdimm/nd-core.h
index ac68072fb8cd..182258f64417 100644
--- a/drivers/nvdimm/nd-core.h
+++ b/drivers/nvdimm/nd-core.h
@@ -14,7 +14,6 @@
#define __ND_CORE_H__
#include <linux/libnvdimm.h>
#include <linux/device.h>
-#include <linux/libnvdimm.h>
#include <linux/sizes.h>
#include <linux/mutex.h>
#include <linux/nd.h>
--
2.14.3