[PATCH v3 0/2] Support ACPI 6.1 update in NFIT Control Region Structure
by Toshi Kani
ACPI 6.1, Table 5-133, updates NVDIMM Control Region Structure as
follows.
- Valid Fields, Manufacturing Location, and Manufacturing Date
are added from the previously reserved range. There is no change in the
structure size.
- IDs (SPD values) are stored as arrays of bytes (i.e. in big-endian
format). The spec clarifies that they need to be represented as arrays
of bytes as well.
Patch 1 changes the NFIT driver to comply with ACPI 6.1.
Patch 2 adds a new sysfs file "id" to show the NVDIMM ID defined in ACPI 6.1.
The patch set applies on top of the linux-pm.git acpica branch.
link: http://www.uefi.org/sites/default/files/resources/ACPI_6_1.pdf
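For illustration only, here is a minimal userspace sketch of what handling the
big-endian (byte-array) SPD fields looks like. The struct layout, the sample
values, and the "vendor-location-date-serial" output format are assumptions for
this example, not taken from the patches themselves:

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>	/* ntohs()/ntohl() for big-endian conversion */

/* Hypothetical stand-in for the SPD-derived fields in the Control Region. */
struct control_region_ids {
	uint16_t vendor_id;		/* stored big-endian */
	uint8_t  manufacturing_location;
	uint16_t manufacturing_date;	/* stored big-endian */
	uint32_t serial_number;		/* stored big-endian */
};

int main(void)
{
	struct control_region_ids dcr = {
		.vendor_id              = htons(0x2c80),
		.manufacturing_location = 0x0a,
		.manufacturing_date     = htons(0x2016),
		.serial_number          = htonl(0x12345678),
	};

	/* Present the ID as bytes, i.e. byte-swap on a little-endian host. */
	printf("%04x-%02x-%04x-%08x\n",
	       ntohs(dcr.vendor_id),
	       dcr.manufacturing_location,
	       ntohs(dcr.manufacturing_date),
	       ntohl(dcr.serial_number));
	return 0;
}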
---
v3:
- Need to coordinate with ACPICA update (Bob Moore, Dan Williams)
- Integrate with ACPICA changes in struct acpi_nfit_control_region.
(commit 138a95547ab0)
v2:
- Remove 'mfg_location' and 'mfg_date'. (Dan Williams)
- Rename 'unique_id' to 'id' and make this change a separate patch.
(Dan Williams)
---
Toshi Kani (2):
1/2 acpi/nfit: Update nfit driver to comply with ACPI 6.1
2/2 acpi/nfit: Add sysfs "id" for NVDIMM ID
---
drivers/acpi/nfit.c | 29 ++++++++++++++++++++++++-----
1 file changed, 24 insertions(+), 5 deletions(-)
xfs: untangle the direct I/O and DAX path, fix DAX locking
by Christoph Hellwig
The last patch is what started the series: XFS currently uses the
direct I/O locking strategy for DAX because DAX was overloaded onto
the direct I/O path. For XFS this means that we only take a shared
inode lock instead of the normal exclusive one for writes, IFF they
are properly aligned. While this is fine for O_DIRECT, which requires
explicit opt-in from the application, it's not fine for DAX, where we'll
suddenly lose the expected and required synchronization if the file
system happens to use DAX underneath.
Patches 1-7 just untangle the code so that we can deal with DAX on
its own easily.
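As a rough illustration of the locking asymmetry described above (the enum,
helper name, and parameters are invented for this sketch, not the real XFS
interfaces):

#include <stdbool.h>

enum iolock_mode { IOLOCK_SHARED, IOLOCK_EXCL };

/*
 * O_DIRECT writers explicitly opted in, so an aligned direct write may run
 * under the shared inode lock.  A DAX write reached this path implicitly,
 * only because the file system happens to sit on DAX-capable storage, so it
 * must keep the normal exclusive locking the application expects.
 */
static inline enum iolock_mode write_iolock_mode(bool is_dax, bool is_o_direct,
						 bool aligned)
{
	if (is_dax)
		return IOLOCK_EXCL;
	return (is_o_direct && aligned) ? IOLOCK_SHARED : IOLOCK_EXCL;
}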
[PATCH 0/15 v2] dax: Clear dirty bits after flushing caches
by Jan Kara
Hello,
this is the second revision of my patches to clear dirty bits from the radix
tree of DAX inodes when caches for the corresponding pfns have been flushed. This
patch set
is significantly larger than the previous version because I'm changing how
->fault, ->page_mkwrite, and ->pfn_mkwrite handlers may choose to handle the
fault so that we don't have to leak details about DAX locking into the generic
code. In principle, these patches enable handlers to easily update PTEs and do
other work necessary to finish the fault without duplicating the functionality
present in the generic code. I'd be really interested in feedback from mm
folks on whether such changes to the fault handling code are fine or what
they'd do differently...
Changes since v1:
* make sure all PTE updates happen under radix tree entry lock to protect
against races between faults & write-protecting code
* remove information about DAX locking from mm/memory.c
* smaller updates based on Ross' feedback
----
Background information regarding the motivation:
Currently we never clear dirty bits in the radix tree of a DAX inode. Thus
fsync(2) flushes all the dirty pfns again and again. These patches implement
clearing of the dirty tag in the radix tree so that we issue a flush only when
needed.
The difficulty with clearing the dirty tag is that we have to protect against
a concurrent page fault setting the dirty tag and writing new data into the
page. So we need a lock serializing page faults against clearing of the dirty
tag and write-protecting PTEs (so that we get another page fault when the pfn
is written to again and we have to set the dirty tag again).
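A standalone sketch of the ordering this requires follows; the types and helper
names below are invented for illustration and are not the kernel API:

/* Stand-ins for the real radix-tree-entry lock and mm helpers. */
struct entry_lock;

struct entry_lock *lock_radix_entry(unsigned long index);  /* blocks faults  */
void unlock_radix_entry(struct entry_lock *e);
void wrprotect_ptes(unsigned long index);   /* next write re-faults           */
void flush_caches(unsigned long pfn);       /* write back CPU caches          */
void clear_dirty_tag(unsigned long index);

void writeback_one_entry(unsigned long index, unsigned long pfn)
{
	struct entry_lock *e = lock_radix_entry(index);

	wrprotect_ptes(index);   /* 1) any later store takes a new page fault  */
	flush_caches(pfn);       /* 2) dirty data reaches persistent media     */
	clear_dirty_tag(index);  /* 3) only now is dropping the tag safe       */

	unlock_radix_entry(e);   /* a racing fault may now re-set the tag      */
}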
The effect of the patch set is easily visible when writing 1 GB of data via
mmap and then calling fsync twice. Before this patch set both fsyncs take
~205 ms on my test machine; after the patch set the first fsync takes ~283 ms
(the additional cost of walking PTEs, clearing dirty bits, etc. is very
noticeable) and the second fsync takes below 1 us.
As a bonus, these patches make filesystem freezing reliable for DAX
filesystems because mappings are now properly write-protected while freezing
the fs.
Patches have passed xfstests for both xfs and ext4.
Honza
Subtle races between DAX mmap fault and write path
by Jan Kara
Hi,
when testing my latest changes to the DAX fault handling code I have hit the
following interesting race between the fault and write paths (I'll show
function names for ext4 but xfs has the same issue AFAICT).
We have a file 'f' which has a hole at offset 0.
Process 0                               Process 1

data = mmap('f');
read data[0]
  -> fault, we map a hole page
                                        pwrite('f', buf, len, 0)
                                          -> ext4_file_write_iter
                                               inode_lock(inode);
                                               __generic_file_write_iter()
                                                 generic_file_direct_write()
                                                   invalidate_inode_pages2_range()
                                                     - drops hole page from
                                                       the radix tree
                                                   ext4_direct_IO()
                                                     dax_do_io()
                                                       - allocates block for
                                                         offset 0
data[0] = 1
  -> page_mkwrite fault
    -> ext4_dax_fault()
         down_read(&EXT4_I(inode)->i_mmap_sem);
         __dax_fault()
           grab_mapping_entry()
             - creates locked radix tree entry
           - maps block into PTE
           put_locked_mapping_entry()
                                        invalidate_inode_pages2_range()
                                          - removes dax entry from
                                            the radix tree
So we have just lost the information that block 0 is mapped and needs its
caches flushed.
Also the fact that the consistency of data as viewed by mmap and
dax_do_io() relies on invalidate_inode_pages2_range() is somewhat
unexpected to me and we should document it somewhere.
The question is how to best fix this. I see three options:
1) Lock out faults during writes via exclusive i_mmap_sem. That is rather
harsh but should work - we call filemap_write_and_wait() in
generic_file_direct_write() so we flush out all caches for the relevant
area before dropping radix tree entries.
2) Call filemap_write_and_wait() after we return from ->direct_IO and before we
call invalidate_inode_pages2_range(), and hold i_mmap_sem exclusively only
for those two calls. The lock hold time will be shorter than in 1) but it will
require an additional flush, and we'd probably have to stop using
generic_file_direct_write() for DAX writes to allow for all this special
hackery (see the rough sketch after this list).
3) Remodel dax_do_io() to work more like buffered IO and use radix tree
entry locks to protect against similar races. That likely has better
scalability than 1) but may actually be slower in the uncontended case (due
to all the radix tree operations).
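A rough sketch of what option 2 could look like in the write path (a fragment
only; error handling is omitted, and the exact call site plus the pos/end_pos,
mapping and inode variables are assumed from the surrounding code):

	/* Option 2 sketch: flush and invalidate while faults are locked out. */
	down_write(&EXT4_I(inode)->i_mmap_sem);     /* block DAX page faults */
	filemap_write_and_wait_range(mapping, pos, end_pos);
	invalidate_inode_pages2_range(mapping, pos >> PAGE_SHIFT,
				      end_pos >> PAGE_SHIFT);
	up_write(&EXT4_I(inode)->i_mmap_sem);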
Any opinions on this?
Honza
--
Jan Kara <jack(a)suse.com>
SUSE Labs, CR
[PATCH] libnvdimm, nd_blk: mask off reserved status bits
by Ross Zwisler
The "NVDIMM Block Window Driver Writer's Guide":
http://pmem.io/documents/
http://pmem.io/documents/NVDIMM_DriverWritersGuide-July-2016.pdf
defines the layout of the block window status register. For the July 2016
version of the spec linked to above, this happens in Figure 4 on page 26.
The only bits defined in this spec are bits 31, 5, 4, 2, 1 and 0. The rest
of the bits in the status register are reserved, and there is a warning
following the diagram that says:
Note: The driver cannot assume the value of the RESERVED bits in the
status register are zero. These reserved bits need to be masked off, and
the driver must avoid checking the state of those bits.
This change ensures that for hardware implementations that set these
reserved bits in the status register, the driver won't incorrectly fail the
block I/Os.
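As a quick sanity check on the mask value used in the patch below, building a
mask out of exactly the bits the guide defines (31, 5, 4, 2, 1 and 0)
reproduces 0x80000037:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* Bits defined by the July 2016 guide: 31, 5, 4, 2, 1 and 0. */
	const int defined_bits[] = { 31, 5, 4, 2, 1, 0 };
	uint32_t mask = 0;

	for (size_t i = 0; i < sizeof(defined_bits) / sizeof(defined_bits[0]); i++)
		mask |= 1u << defined_bits[i];

	printf("0x%08x\n", mask);	/* prints 0x80000037 */
	return 0;
}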
Signed-off-by: Ross Zwisler <ross.zwisler(a)linux.intel.com>
Cc: Dan Williams <dan.j.williams(a)intel.com>
Cc: stable(a)vger.kernel.org
---
drivers/acpi/nfit.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/acpi/nfit.c b/drivers/acpi/nfit.c
index 1f0e060..375c10f 100644
--- a/drivers/acpi/nfit.c
+++ b/drivers/acpi/nfit.c
@@ -1396,11 +1396,12 @@ static u32 read_blk_stat(struct nfit_blk *nfit_blk, unsigned int bw)
 {
 	struct nfit_blk_mmio *mmio = &nfit_blk->mmio[DCR];
 	u64 offset = nfit_blk->stat_offset + mmio->size * bw;
+	const u32 STATUS_MASK = 0x80000037;
 
 	if (mmio->num_lines)
 		offset = to_interleave_offset(offset, mmio);
 
-	return readl(mmio->addr.base + offset);
+	return readl(mmio->addr.base + offset) & STATUS_MASK;
 }
 
 static void write_blk_ctl(struct nfit_blk *nfit_blk, unsigned int bw,
--
2.9.0
HPE SMART data retrieval
by Johannes Thumshirn
Hi Dan and Jerry,
I'm currently looking into SMART data retrieval on HPE NVDIMMs.
After the first obstacle (getting cat
/sys/class/nd/ndctl0/device/nmem0/commands to return "smart" so ndctl will issue
the ioctl) I ran into a rather nasty problem. According to [1], HPE DIMMs
need the input buffer specially crafted for SMART data; according to [2],
Intel DIMMs don't.
Adding translation functions for each DIMM family's accepted commands is one
thing and should be more or less trivial for all DIMMs (I'll post an RFC patch
as soon as Linus merges Dan's 4.8 pull request so I can rebase it), but doing
this type of conversion for each and every command of every defined vendor
family, for both the input and output buffers, will drive us all mad I guess.
Especially from the distribution's POV I'm not too keen on having customers
show up with some new NVDIMM family for which we would need to re-implement
all the translators again. Adding a new ID is one thing, but translation
tables are a totally different story.
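To make the concern concrete, per-family translation would roughly amount to
maintaining a table like the one below (all names here are invented for
illustration, not an existing or proposed interface):

#include <stddef.h>

/* One translator pair would be needed per command, per vendor family. */
struct nd_cmd_xlat {
	int (*to_vendor)(const void *generic_in, void *vendor_in, size_t len);
	int (*from_vendor)(const void *vendor_out, void *generic_out, size_t len);
};

enum { FAMILY_INTEL, FAMILY_HPE1, FAMILY_HPE2, FAMILY_MAX };
enum { CMD_SMART, CMD_SMART_THRESHOLD, CMD_MAX };

/*
 * The N (families) x M (commands) matrix of input/output conversions that
 * quickly becomes unmaintainable, and grows again with every new family
 * or command.
 */
static const struct nd_cmd_xlat *xlat[FAMILY_MAX][CMD_MAX];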
So the question is: have I overlooked something and there is a clean and easy
solution to this problem, or not?
@Jerry have you tested SMART data retrieval with ndctl? Did it work for you?
Thanks,
Johannes
[1] https://github.com/HewlettPackard/hpe-nvm/blob/master/Documentation/NFIT_...
[2] http://pmem.io/documents/NVDIMM_DSM_Interface_Example-V1.2.pdf
--
Johannes Thumshirn Storage
jthumshirn(a)suse.de +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850
[PATCH] block: fix bdi vs gendisk lifetime mismatch
by Dan Williams
The name for a bdi of a gendisk is derived from the gendisk's devt.
However, since the gendisk is destroyed before the bdi it leaves a
window where a new gendisk could dynamically reuse the same devt while a
bdi with the same name is still live. Arrange for the bdi to hold a
reference against its "owner" disk device while it is registered.
Otherwise we can hit sysfs duplicate name collisions like the following:
WARNING: CPU: 10 PID: 2078 at fs/sysfs/dir.c:31 sysfs_warn_dup+0x64/0x80
sysfs: cannot create duplicate filename '/devices/virtual/bdi/259:1'
Hardware name: HP ProLiant DL580 Gen8, BIOS P79 05/06/2015
0000000000000286 0000000002c04ad5 ffff88006f24f970 ffffffff8134caec
ffff88006f24f9c0 0000000000000000 ffff88006f24f9b0 ffffffff8108c351
0000001f0000000c ffff88105d236000 ffff88105d1031e0 ffff8800357427f8
Call Trace:
[<ffffffff8134caec>] dump_stack+0x63/0x87
[<ffffffff8108c351>] __warn+0xd1/0xf0
[<ffffffff8108c3cf>] warn_slowpath_fmt+0x5f/0x80
[<ffffffff812a0d34>] sysfs_warn_dup+0x64/0x80
[<ffffffff812a0e1e>] sysfs_create_dir_ns+0x7e/0x90
[<ffffffff8134faaa>] kobject_add_internal+0xaa/0x320
[<ffffffff81358d4e>] ? vsnprintf+0x34e/0x4d0
[<ffffffff8134ff55>] kobject_add+0x75/0xd0
[<ffffffff816e66b2>] ? mutex_lock+0x12/0x2f
[<ffffffff8148b0a5>] device_add+0x125/0x610
[<ffffffff8148b788>] device_create_groups_vargs+0xd8/0x100
[<ffffffff8148b7cc>] device_create_vargs+0x1c/0x20
[<ffffffff811b775c>] bdi_register+0x8c/0x180
[<ffffffff811b7877>] bdi_register_dev+0x27/0x30
[<ffffffff813317f5>] add_disk+0x175/0x4a0
Cc: <stable(a)vger.kernel.org>
Reported-by: Yi Zhang <yizhan(a)redhat.com>
Tested-by: Yi Zhang <yizhan(a)redhat.com>
Signed-off-by: Dan Williams <dan.j.williams(a)intel.com>
---
block/genhd.c | 2 +-
include/linux/backing-dev-defs.h | 1 +
include/linux/backing-dev.h | 1 +
mm/backing-dev.c | 18 ++++++++++++++++++
4 files changed, 21 insertions(+), 1 deletion(-)
diff --git a/block/genhd.c b/block/genhd.c
index 3c9dede4e04f..f6f7ffcd4eab 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -614,7 +614,7 @@ void device_add_disk(struct device *parent, struct gendisk *disk)
 
 	/* Register BDI before referencing it from bdev */
 	bdi = &disk->queue->backing_dev_info;
-	bdi_register_dev(bdi, disk_devt(disk));
+	bdi_register_owner(bdi, disk_to_dev(disk));
 
 	blk_register_region(disk_devt(disk), disk->minors, NULL,
 			    exact_match, exact_lock, disk);
diff --git a/include/linux/backing-dev-defs.h b/include/linux/backing-dev-defs.h
index 3f103076d0bf..c357f27d5483 100644
--- a/include/linux/backing-dev-defs.h
+++ b/include/linux/backing-dev-defs.h
@@ -163,6 +163,7 @@ struct backing_dev_info {
 	wait_queue_head_t wb_waitq;
 
 	struct device *dev;
+	struct device *owner;
 
 	struct timer_list laptop_mode_wb_timer;
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 491a91717788..43b93a947e61 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -24,6 +24,7 @@ __printf(3, 4)
 int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 		const char *fmt, ...);
 int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev);
+int bdi_register_owner(struct backing_dev_info *bdi, struct device *owner);
 void bdi_unregister(struct backing_dev_info *bdi);
 int __must_check bdi_setup_and_register(struct backing_dev_info *, char *);
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index efe237742074..7b51cb7905be 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -825,6 +825,19 @@ int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev)
 }
 EXPORT_SYMBOL(bdi_register_dev);
 
+int bdi_register_owner(struct backing_dev_info *bdi, struct device *owner)
+{
+	int rc;
+
+	rc = bdi_register(bdi, NULL, "%u:%u", MAJOR(owner->devt),
+			MINOR(owner->devt));
+	if (rc)
+		return rc;
+	bdi->owner = owner;
+	get_device(owner);
+	return 0;
+}
+EXPORT_SYMBOL(bdi_register_owner);
+
 /*
  * Remove bdi from bdi_list, and ensure that it is no longer visible
  */
@@ -849,6 +862,11 @@ void bdi_unregister(struct backing_dev_info *bdi)
 		device_unregister(bdi->dev);
 		bdi->dev = NULL;
 	}
+
+	if (bdi->owner) {
+		put_device(bdi->owner);
+		bdi->owner = NULL;
+	}
 }
 
 void bdi_exit(struct backing_dev_info *bdi)
[BUG] kernel NULL pointer dereference observed during pmem btt switch test
by Yi Zhang
Hello everyone
Could you help check this issue? Thanks.
Steps I used:
1. Reserve 4*8G of memory for pmem by adding the kernel parameters "memmap=8G!4G memmap=8G!12G memmap=8G!20G memmap=8G!28G"
2. Execute the script below
#!/bin/bash
pmem_btt_switch() {
	sector_size_list="512 520 528 4096 4104 4160 4224"
	for sector_size in $sector_size_list; do
		ndctl create-namespace -f -e namespace${1}.0 --mode=sector -l $sector_size
		ndctl create-namespace -f -e namespace${1}.0 --mode=raw
	done
}

for i in 0 1 2 3; do
	pmem_btt_switch $i &
done
KERNEL log:
[ 243.404847] nd_pmem namespace2.0: unable to guarantee persistence of writes
[ 243.467271] nd_pmem namespace3.0: unable to guarantee persistence of writes
[ 243.513412] nd_pmem namespace1.0: unable to guarantee persistence of writes
[ 243.544728] nd_pmem namespace0.0: unable to guarantee persistence of writes
[ 243.545371] ------------[ cut here ]------------
[ 243.545381] WARNING: CPU: 10 PID: 2078 at fs/sysfs/dir.c:31 sysfs_warn_dup+0x64/0x80
[ 243.545382] sysfs: cannot create duplicate filename '/devices/virtual/bdi/259:1'
[ 243.545432] Modules linked in: nfsv3 rpcsec_gss_krb5 nfsv4 dns_resolver nfs fscache sb_edac edac_core x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel lrw nd_pmem gf128mul glue_helper ablk_helper cryptd nd_btt hpilo iTCO_wdt iTCO_vendor_support sg hpwdt pcspkr ipmi_ssif ioatdma wmi pcc_cpufreq acpi_cpufreq acpi_power_meter lpc_ich ipmi_si ipmi_msghandler mfd_core shpchp dca nfsd auth_rpcgss nfs_acl lockd grace sunrpc dm_multipath ip_tables xfs libcrc32c sd_mod mgag200 i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm crc32c_intel tg3 serio_raw hpsa ptp i2c_core scsi_transport_sas pps_core fjes dm_mirror dm_region_hash dm_log dm_mod
[ 243.545435] CPU: 10 PID: 2078 Comm: ndctl Not tainted 4.7.0-rc7 #1
[ 243.545436] Hardware name: HP ProLiant DL580 Gen8, BIOS P79 05/06/2015
[ 243.545439] 0000000000000286 0000000002c04ad5 ffff88006f24f970 ffffffff8134caec
[ 243.545441] ffff88006f24f9c0 0000000000000000 ffff88006f24f9b0 ffffffff8108c351
[ 243.545442] 0000001f0000000c ffff88105d236000 ffff88105d1031e0 ffff8800357427f8
[ 243.545443] Call Trace:
[ 243.545452] [<ffffffff8134caec>] dump_stack+0x63/0x87
[ 243.545460] [<ffffffff8108c351>] __warn+0xd1/0xf0
[ 243.545463] [<ffffffff8108c3cf>] warn_slowpath_fmt+0x5f/0x80
[ 243.545465] [<ffffffff812a0d34>] sysfs_warn_dup+0x64/0x80
[ 243.545466] [<ffffffff812a0e1e>] sysfs_create_dir_ns+0x7e/0x90
[ 243.545469] [<ffffffff8134faaa>] kobject_add_internal+0xaa/0x320
[ 243.545473] [<ffffffff81358d4e>] ? vsnprintf+0x34e/0x4d0
[ 243.545475] [<ffffffff8134ff55>] kobject_add+0x75/0xd0
[ 243.545483] [<ffffffff816e66b2>] ? mutex_lock+0x12/0x2f
[ 243.545489] [<ffffffff8148b0a5>] device_add+0x125/0x610
[ 243.545491] [<ffffffff8148b788>] device_create_groups_vargs+0xd8/0x100
[ 243.545492] [<ffffffff8148b7cc>] device_create_vargs+0x1c/0x20
[ 243.545498] [<ffffffff811b775c>] bdi_register+0x8c/0x180
[ 243.545500] [<ffffffff811b7877>] bdi_register_dev+0x27/0x30
[ 243.545505] [<ffffffff813317f5>] add_disk+0x175/0x4a0
[ 243.545507] [<ffffffff816e66b2>] ? mutex_lock+0x12/0x2f
[ 243.545513] [<ffffffff814afb7f>] ? nvdimm_bus_unlock+0x1f/0x30
[ 243.545518] [<ffffffffa04e039f>] nd_pmem_probe+0x28f/0x360 [nd_pmem]
[ 243.545521] [<ffffffff814b0599>] nvdimm_bus_probe+0x69/0x120
[ 243.545524] [<ffffffff8148e779>] driver_probe_device+0x239/0x460
[ 243.545526] [<ffffffff8148c974>] bind_store+0xd4/0x110
[ 243.545528] [<ffffffff8148c054>] drv_attr_store+0x24/0x30
[ 243.545529] [<ffffffff812a042a>] sysfs_kf_write+0x3a/0x50
[ 243.545531] [<ffffffff8129fa3b>] kernfs_fop_write+0x11b/0x1a0
[ 243.545536] [<ffffffff8121d5e7>] __vfs_write+0x37/0x160
[ 243.545544] [<ffffffff812ceadd>] ? security_file_permission+0x3d/0xc0
[ 243.545550] [<ffffffff810d7e1f>] ? percpu_down_read+0x1f/0x50
[ 243.545552] [<ffffffff8121e8e2>] vfs_write+0xb2/0x1b0
[ 243.545555] [<ffffffff8121fd35>] SyS_write+0x55/0xc0
[ 243.545560] [<ffffffff81003b12>] do_syscall_64+0x62/0x110
[ 243.545563] [<ffffffff816e85e1>] entry_SYSCALL64_slow_path+0x25/0x25
[ 243.545579] ---[ end trace 6d3b90c425a39fda ]---
[ 243.545580] ------------[ cut here ]------------
[ 243.545583] WARNING: CPU: 10 PID: 2078 at lib/kobject.c:240 kobject_add_internal+0x262/0x320
[ 243.545584] kobject_add_internal failed for 259:1 with -EEXIST, don't try to register things with the same name in the same directory.
[ 243.545603] Modules linked in: nfsv3 rpcsec_gss_krb5 nfsv4 dns_resolver nfs fscache sb_edac edac_core x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel lrw nd_pmem gf128mul glue_helper ablk_helper cryptd nd_btt hpilo iTCO_wdt iTCO_vendor_support sg hpwdt pcspkr ipmi_ssif ioatdma wmi pcc_cpufreq acpi_cpufreq acpi_power_meter lpc_ich ipmi_si ipmi_msghandler mfd_core shpchp dca nfsd auth_rpcgss nfs_acl lockd grace sunrpc dm_multipath ip_tables xfs libcrc32c sd_mod mgag200 i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm crc32c_intel tg3 serio_raw hpsa ptp i2c_core scsi_transport_sas pps_core fjes dm_mirror dm_region_hash dm_log dm_mod
[ 243.545605] CPU: 10 PID: 2078 Comm: ndctl Tainted: G W 4.7.0-rc7 #1
[ 243.545605] Hardware name: HP ProLiant DL580 Gen8, BIOS P79 05/06/2015
[ 243.545607] 0000000000000286 0000000002c04ad5 ffff88006f24f9c0 ffffffff8134caec
[ 243.545608] ffff88006f24fa10 0000000000000000 ffff88006f24fa00 ffffffff8108c351
[ 243.545610] 000000f06f24fa28 ffff880035164010 ffff88006c7e3780 00000000ffffffef
[ 243.545610] Call Trace:
[ 243.545612] [<ffffffff8134caec>] dump_stack+0x63/0x87
[ 243.545614] [<ffffffff8108c351>] __warn+0xd1/0xf0
[ 243.545616] [<ffffffff8108c3cf>] warn_slowpath_fmt+0x5f/0x80
[ 243.545618] [<ffffffff812a0d3c>] ? sysfs_warn_dup+0x6c/0x80
[ 243.545619] [<ffffffff8134fc62>] kobject_add_internal+0x262/0x320
[ 243.545621] [<ffffffff81358d4e>] ? vsnprintf+0x34e/0x4d0
[ 243.545622] [<ffffffff8134ff55>] kobject_add+0x75/0xd0
[ 243.545625] [<ffffffff816e66b2>] ? mutex_lock+0x12/0x2f
[ 243.545626] [<ffffffff8148b0a5>] device_add+0x125/0x610
[ 243.545628] [<ffffffff8148b788>] device_create_groups_vargs+0xd8/0x100
[ 243.545630] [<ffffffff8148b7cc>] device_create_vargs+0x1c/0x20
[ 243.545632] [<ffffffff811b775c>] bdi_register+0x8c/0x180
[ 243.545634] [<ffffffff811b7877>] bdi_register_dev+0x27/0x30
[ 243.545636] [<ffffffff813317f5>] add_disk+0x175/0x4a0
[ 243.545638] [<ffffffff816e66b2>] ? mutex_lock+0x12/0x2f
[ 243.545640] [<ffffffff814afb7f>] ? nvdimm_bus_unlock+0x1f/0x30
[ 243.545642] [<ffffffffa04e039f>] nd_pmem_probe+0x28f/0x360 [nd_pmem]
[ 243.545644] [<ffffffff814b0599>] nvdimm_bus_probe+0x69/0x120
[ 243.545646] [<ffffffff8148e779>] driver_probe_device+0x239/0x460
[ 243.545648] [<ffffffff8148c974>] bind_store+0xd4/0x110
[ 243.545649] [<ffffffff8148c054>] drv_attr_store+0x24/0x30
[ 243.545651] [<ffffffff812a042a>] sysfs_kf_write+0x3a/0x50
[ 243.545652] [<ffffffff8129fa3b>] kernfs_fop_write+0x11b/0x1a0
[ 243.545654] [<ffffffff8121d5e7>] __vfs_write+0x37/0x160
[ 243.545657] [<ffffffff812ceadd>] ? security_file_permission+0x3d/0xc0
[ 243.545659] [<ffffffff810d7e1f>] ? percpu_down_read+0x1f/0x50
[ 243.545661] [<ffffffff8121e8e2>] vfs_write+0xb2/0x1b0
[ 243.545663] [<ffffffff8121fd35>] SyS_write+0x55/0xc0
[ 243.545665] [<ffffffff81003b12>] do_syscall_64+0x62/0x110
[ 243.545666] [<ffffffff816e85e1>] entry_SYSCALL64_slow_path+0x25/0x25
[ 243.545667] ---[ end trace 6d3b90c425a39fdb ]---
[ 243.577109] BUG: unable to handle kernel NULL pointer dereference at 0000000000000040
[ 243.577117] IP: [<ffffffff812a1054>] sysfs_do_create_link_sd.isra.2+0x34/0xb0
[ 243.577119] PGD 1057752067 PUD 105e37a067 PMD 0
[ 243.577121] Oops: 0000 [#1] SMP
[ 243.577154] Modules linked in: nfsv3 rpcsec_gss_krb5 nfsv4 dns_resolver nfs fscache sb_edac edac_core x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel lrw nd_pmem gf128mul glue_helper ablk_helper cryptd nd_btt hpilo iTCO_wdt iTCO_vendor_support sg hpwdt pcspkr ipmi_ssif ioatdma wmi pcc_cpufreq acpi_cpufreq acpi_power_meter lpc_ich ipmi_si ipmi_msghandler mfd_core shpchp dca nfsd auth_rpcgss nfs_acl lockd grace sunrpc dm_multipath ip_tables xfs libcrc32c sd_mod mgag200 i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm crc32c_intel tg3 serio_raw hpsa ptp i2c_core scsi_transport_sas pps_core fjes dm_mirror dm_region_hash dm_log dm_mod
[ 243.577157] CPU: 6 PID: 2078 Comm: ndctl Tainted: G W 4.7.0-rc7 #1
[ 243.577158] Hardware name: HP ProLiant DL580 Gen8, BIOS P79 05/06/2015
[ 243.577159] task: ffff8800340c8000 ti: ffff88006f24c000 task.ti: ffff88006f24c000
[ 243.577162] RIP: 0010:[<ffffffff812a1054>] [<ffffffff812a1054>] sysfs_do_create_link_sd.isra.2+0x34/0xb0
[ 243.577163] RSP: 0018:ffff88006f24fc28 EFLAGS: 00010246
[ 243.577164] RAX: 0000000000000000 RBX: 0000000000000040 RCX: 0000000000000001
[ 243.577164] RDX: 0000000000000001 RSI: 0000000000000040 RDI: ffffffff822411f0
[ 243.577165] RBP: ffff88006f24fc50 R08: ffff8800690f1711 R09: ffffffff8134e82e
[ 243.577166] R10: ffff88007799b640 R11: ffffea0000d46000 R12: ffffffff81a3dc3c
[ 243.577166] R13: ffff88105ae627f8 R14: 0000000000000001 R15: ffff880034a89040
[ 243.577168] FS: 00007f685b5dc780(0000) GS:ffff880077980000(0000) knlGS:0000000000000000
[ 243.577168] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 243.577169] CR2: 0000000000000040 CR3: 000000105bb0b000 CR4: 00000000001406e0
[ 243.577170] Stack:
[ 243.577172] ffff880070666000 ffff880070666080 ffff88006a0635d0 ffff88007066600c
[ 243.577173] ffff880034a89040 ffff88006f24fc60 ffffffff812a10f5 ffff88006f24fcc8
[ 243.577175] ffffffff8133188b ffff880070666000 1030000135282c00 ffff880070666000
[ 243.577175] Call Trace:
[ 243.577179] [<ffffffff812a10f5>] sysfs_create_link+0x25/0x40
[ 243.577184] [<ffffffff8133188b>] add_disk+0x20b/0x4a0
[ 243.577189] [<ffffffffa04e039f>] nd_pmem_probe+0x28f/0x360 [nd_pmem]
[ 243.577194] [<ffffffff814b0599>] nvdimm_bus_probe+0x69/0x120
[ 243.577198] [<ffffffff8148e779>] driver_probe_device+0x239/0x460
[ 243.577200] [<ffffffff8148c974>] bind_store+0xd4/0x110
[ 243.577202] [<ffffffff8148c054>] drv_attr_store+0x24/0x30
[ 243.577203] [<ffffffff812a042a>] sysfs_kf_write+0x3a/0x50
[ 243.577205] [<ffffffff8129fa3b>] kernfs_fop_write+0x11b/0x1a0
[ 243.577209] [<ffffffff8121d5e7>] __vfs_write+0x37/0x160
[ 243.577215] [<ffffffff812ceadd>] ? security_file_permission+0x3d/0xc0
[ 243.577220] [<ffffffff810d7e1f>] ? percpu_down_read+0x1f/0x50
[ 243.577222] [<ffffffff8121e8e2>] vfs_write+0xb2/0x1b0
[ 243.577224] [<ffffffff8121fd35>] SyS_write+0x55/0xc0
[ 243.577229] [<ffffffff81003b12>] do_syscall_64+0x62/0x110
[ 243.577232] [<ffffffff816e85e1>] entry_SYSCALL64_slow_path+0x25/0x25
[ 243.577248] Code: 48 89 e5 41 57 41 56 41 55 41 54 49 89 d4 53 74 73 48 85 ff 49 89 fd 74 6b 48 89 f3 48 c7 c7 f0 11 24 82 41 89 ce e8 7c 72 44 00 <48> 8b 1b 48 85 db 74 08 48 89 df e8 ac c1 ff ff 48 c7 c7 f0 11
[ 243.577250] RIP [<ffffffff812a1054>] sysfs_do_create_link_sd.isra.2+0x34/0xb0
[ 243.577251] RSP <ffff88006f24fc28>
[ 243.577251] CR2: 0000000000000040
[ 243.577285] ---[ end trace 6d3b90c425a39fdc ]---
[ 243.578932] Kernel panic - not syncing: Fatal exception
[ 243.597839] Kernel Offset: disabled
[ 247.934728] ---[ end Kernel panic - not syncing: Fatal exception
Best Regards,
Yi Zhang
Reach Millions of FB Group Members
by BENJAMIN
Hello,
Do you want to advertise on facebook? We're here to help.
We will manually post your product/logo/link on Facebook Groups and will
give you a full report with links of each live post where your advertisement
was posted.
http://www.mg-dot.cn/detail.php?id=12
Regards
BENJAMIN
Unsubscribe option is available on the footer of our website