[OF test] BUG: unable to handle kernel NULL pointer dereference at 00000038
by Fengguang Wu
Greetings,
The 0day kernel testing robot got the dmesg below; the first bad commit is
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit b951f9dc7f25fc1e39aafda5edb4b47b38285d9f
Author: Gaurav Minocha <gaurav.minocha.os(a)gmail.com>
AuthorDate: Sat Jul 26 12:48:50 2014 -0700
Commit: Grant Likely <grant.likely(a)linaro.org>
CommitDate: Sat Aug 16 09:03:56 2014 +0100
Enabling OF selftest to run without machine's devicetree
If there is no devicetree present, this patch adds the selftest
data as a live devicetree. It also removes the same after the
testcase execution is complete.
Tested with and without machine's devicetree.
Signed-off-by: Gaurav Minocha <gaurav.minocha.os(a)gmail.com>
Signed-off-by: Grant Likely <grant.likely(a)linaro.org>
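For readers skimming the report, here is a minimal sketch of the flow that commit message describes. It is not the actual patch: selftest_data_attach()/selftest_data_detach()/run_selftests() are hypothetical helper names (the real code goes through the OF/FDT unflattening helpers), and of_have_populated_dt() is assumed to be the "is there a live tree" check.
#include <linux/init.h>
#include <linux/of.h>
/* hypothetical helpers standing in for the unflatten/attach/detach code */
static int selftest_data_attach(void);
static void selftest_data_detach(void);
static void run_selftests(void);
static int __init of_selftest(void)
{
	bool added_testcase_tree = false;
	if (!of_have_populated_dt()) {		/* no devicetree from firmware */
		if (selftest_data_attach())	/* unflatten and attach the testcase nodes */
			return -EINVAL;
		added_testcase_tree = true;
	}
	run_selftests();			/* the existing OF selftest cases */
	if (added_testcase_tree)
		selftest_data_detach();		/* remove the nodes added above */
	return 0;
}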
+------------------------------------------------------+------------+------------+---------------+
| | b5f2a8c026 | b951f9dc7f | next-20140827 |
+------------------------------------------------------+------------+------------+---------------+
| boot_successes | 60 | 0 | 0 |
| boot_failures | 0 | 20 | 11 |
| BUG:unable_to_handle_kernel_NULL_pointer_dereference | 0 | 20 | 11 |
| Oops | 0 | 20 | 11 |
| EIP_is_at_kernfs_find_ns | 0 | 20 | 11 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 20 | 11 |
| backtrace:of_selftest | 0 | 20 | 11 |
| backtrace:kernel_init_freeable | 0 | 20 | 11 |
+------------------------------------------------------+------------+------------+---------------+
[ 2.779062] rtc-test rtc-test.0: setting system clock to 2014-08-27 12:23:45 UTC (1409142225)
[ 2.779708] BIOS EDD facility v0.16 2004-Jun-25, 0 devices found
[ 2.780171] EDD information not available.
[ 2.780575] BUG: unable to handle kernel NULL pointer dereference at 00000038
[ 2.781133] IP: [<b10e6578>] kernfs_find_ns+0xd/0xe4
[ 2.781518] *pde = 00000000
[ 2.781747] Oops: 0000 [#1] PREEMPT
[ 2.782037] Modules linked in:
[ 2.782291] CPU: 0 PID: 1 Comm: swapper Not tainted 3.16.0-04165-gb951f9d #8
[ 2.782799] task: c34c1000 ti: c34c4000 task.ti: c34c4000
[ 2.783200] EIP: 0060:[<b10e6578>] EFLAGS: 00010282 CPU: 0
[ 2.783599] EIP is at kernfs_find_ns+0xd/0xe4
[ 2.783921] EAX: 00000000 EBX: 00000000 ECX: 00000000 EDX: b158c20a
[ 2.784164] ESI: 00000000 EDI: c34f0df0 EBP: c34c5e94 ESP: c34c5e80
[ 2.784164] DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 0068
[ 2.784164] CR0: 8005003b CR2: 00000038 CR3: 01675000 CR4: 000006d0
[ 2.784164] Stack:
[ 2.784164] 000036b0 b0085008 00000000 b158c20a c34f0df0 c34c5ea8 b10e6674 00000000
[ 2.784164] b158c20a 00000000 c34c5ec0 b13552ea b158c20a b0088000 b0088000 00000000
[ 2.784164] c34c5ed0 b13563df b164a3b5 b0088000 c34c5f34 b164a456 b13de13e 00000003
[ 2.784164] Call Trace:
[ 2.784164] [<b10e6674>] kernfs_find_and_get_ns+0x25/0x3d
[ 2.784164] [<b13552ea>] safe_name+0x4d/0x70
[ 2.784164] [<b13563df>] __of_attach_node_sysfs+0x2d/0xa5
[ 2.784164] [<b164a3b5>] ? of_selftest_platform_populate+0x1ca/0x1ca
[ 2.784164] [<b164a456>] of_selftest+0xa1/0xf46
[ 2.784164] [<b13de13e>] ? _raw_spin_unlock_irqrestore+0x39/0x54
[ 2.784164] [<b104b60c>] ? trace_hardirqs_on+0xb/0xd
[ 2.784164] [<b10a6390>] ? slob_free+0x217/0x21f
[ 2.784164] [<b164a3b5>] ? of_selftest_platform_populate+0x1ca/0x1ca
[ 2.784164] [<b164a3b5>] ? of_selftest_platform_populate+0x1ca/0x1ca
[ 2.784164] [<b100045b>] do_one_initcall+0xce/0x160
[ 2.784164] [<b1626400>] ? do_early_param+0x51/0x75
[ 2.784164] [<b103eeb9>] ? parse_args+0x182/0x23b
[ 2.784164] [<b1626beb>] kernel_init_freeable+0x184/0x20e
[ 2.784164] [<b13d5b3c>] kernel_init+0x8/0xb8
[ 2.784164] [<b13de840>] ret_from_kernel_thread+0x20/0x30
[ 2.784164] [<b13d5b34>] ? rest_init+0xa0/0xa0
[ 2.784164] Code: 5e 2f 00 89 d8 e8 21 b9 fd ff 85 c0 0f 95 c0 0f b6 c0 eb 06 b8 f6 ff ff ff c3 5b 5e 5d c3 55 89 e5 57 56 89 c6 53 89 cb 83 ec 08 <8b> 78 38 66 8b 40 4c 89 55 f0 66 c1 e8 05 83 e0 01 83 3d 94 9c
[ 2.784164] EIP: [<b10e6578>] kernfs_find_ns+0xd/0xe4 SS:ESP 0068:c34c5e80
[ 2.784164] CR2: 0000000000000038
[ 2.784164] ---[ end trace 411ad12a024bcda1 ]---
[ 2.784164] Kernel panic - not syncing: Fatal exception
git bisect start 52addcf9d6669fa439387610bc65c92fa0980cef v3.16 --
git bisect good ad1f5caf34390bb20fdbb4eaf71b0494e89936f0 # 19:54 20+ 0 Merge branch 'fixes' of git://ftp.arm.linux.org.uk/~rmk/linux-arm
git bisect good 179c0ac67b9d947d2de69e9f08a743e7c74a8dce # 20:01 20+ 0 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc
git bisect bad 3951ad2e051543f6cd01706da477a73f19165eb6 # 20:03 0- 20 Merge branch 'for_linus' of git://cavan.codon.org.uk/platform-drivers-x86
git bisect good 90c80969145d006eb6294a3aa501d0e156f5e244 # 20:07 20+ 0 Merge branch 'rng-queue' of git://git.kernel.org/pub/scm/linux/kernel/git/amit/virtio
git bisect bad 7ac0bbf99d44c827c88aa7a9064050526e723ebb # 20:10 0- 20 Merge tag 'devicetree-for-linus' of git://git.secretlab.ca/git/linux
git bisect good 605f884d05cc0de8c3bde36281d58216011f51a5 # 20:12 20+ 0 Merge branch 'for_linus' of git://cavan.codon.org.uk/platform-drivers-x86
git bisect good 7d1311b93e58ed55f3a31cc8f94c4b8fe988a2b9 # 20:18 20+ 0 Linux 3.17-rc1
git bisect good f325f1643abca9fac5b8e04e9faa46effc984a61 # 20:21 20+ 0 frv: Define cpu_relax_lowlatency()
git bisect bad b951f9dc7f25fc1e39aafda5edb4b47b38285d9f # 20:24 0- 20 Enabling OF selftest to run without machine's devicetree
git bisect good b5f2a8c02697c3685ccbbb66495465742ffa0dc1 # 20:38 20+ 0 of: Allow mem_reserve of memory with a base address of zero
# first bad commit: [b951f9dc7f25fc1e39aafda5edb4b47b38285d9f] Enabling OF selftest to run without machine's devicetree
git bisect good b5f2a8c02697c3685ccbbb66495465742ffa0dc1 # 20:41 60+ 0 of: Allow mem_reserve of memory with a base address of zero
git bisect bad d05446ae2128064a4bb8f74c84f6901ffb5c94bc # 20:41 0- 11 Add linux-next specific files for 20140827
git bisect bad 68e370289c29e3beac99d59c6d840d470af9dfcf # 20:56 0- 60 Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux
git bisect bad d05446ae2128064a4bb8f74c84f6901ffb5c94bc # 20:56 0- 11 Add linux-next specific files for 20140827
This script may reproduce the error.
----------------------------------------------------------------------------
#!/bin/bash
kernel=$1
initrd=yocto-minimal-i386.cgz
wget --no-clobber https://github.com/fengguang/reproduce-kernel-bug/raw/master/initrd/$initrd
kvm=(
qemu-system-x86_64
-cpu kvm64
-enable-kvm
-kernel $kernel
-initrd $initrd
-m 320
-smp 1
-net nic,vlan=1,model=e1000
-net user,vlan=1
-boot order=nc
-no-reboot
-watchdog i6300esb
-rtc base=localtime
-serial stdio
-display none
-monitor null
)
append=(
hung_task_panic=1
earlyprintk=ttyS0,115200
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
console=ttyS0,115200
console=tty0
vga=normal
root=/dev/ram0
rw
drbd.minor_count=8
)
"${kvm[@]}" --append "${append[*]}"
----------------------------------------------------------------------------
Thanks,
Fengguang
[rcutorture] RIP: 0010:[<ffffffff8111b24f>] [<ffffffff8111b24f>] __might_sleep
by Fengguang Wu
Greetings,
The 0day kernel testing robot got the dmesg below; the first bad commit is
git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git rcu/exp
commit 86d8f10e35c1e63d5a839792efd7c3cb6a564fe4
Author: Paul E. McKenney <paulmck(a)linux.vnet.ibm.com>
AuthorDate: Fri Aug 22 14:03:38 2014 -0700
Commit: Paul E. McKenney <paulmck(a)linux.vnet.ibm.com>
CommitDate: Wed Aug 27 10:00:30 2014 -0700
rcutorture: Add preemption-slam testing
This commit adds kernel threads whose sole purpose is to add random
preemption of the code under test. It adds four kernel boot parameters
to control the number of preemption-slam kthreads, their real-time
priority, the length of time that they sleep, and the length of time
that they spin at the real-time priority.
Signed-off-by: Paul E. McKenney <paulmck(a)linux.vnet.ibm.com>
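To make the description above concrete, here is a hedged sketch of what such a preemption-slam kthread could look like. It is based only on the commit message (the real rcutorture code differs), and the slam_* variables are hypothetical stand-ins for the four boot parameters mentioned.
#include <linux/jiffies.h>
#include <linux/kthread.h>
#include <linux/printk.h>
#include <linux/sched.h>
static int slam_prio = 1;			/* hypothetical: RT priority to spin at */
static long slam_sleep_jiffies = HZ / 10;	/* hypothetical: how long to sleep */
static long slam_spin_jiffies = HZ / 100;	/* hypothetical: how long to spin */
static int torture_slam_kthread(void *arg)
{
	struct sched_param sp = { .sched_priority = slam_prio };
	unsigned long spin_until;
	if (sched_setscheduler(current, SCHED_FIFO, &sp) < 0)
		pr_warn("torture_slam: could not set RT priority\n");
	do {
		/* Let the code under test make progress for a while... */
		schedule_timeout_interruptible(slam_sleep_jiffies);
		/* ...then spin at RT priority, preempting whatever was running. */
		spin_until = jiffies + slam_spin_jiffies;
		while (time_before(jiffies, spin_until) && !kthread_should_stop())
			cpu_relax();
	} while (!kthread_should_stop());
	return 0;
}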
===================================================
PARENT COMMIT NOT CLEAN. LOOK OUT FOR WRONG BISECT!
===================================================
The dmesg for the parent commit is attached as well, to help confirm whether this is a noise error.
+------------------------------------------------+------------+------------+------------------+
| | b52938be99 | 86d8f10e35 | v3.17-rc2_082810 |
+------------------------------------------------+------------+------------+------------------+
| boot_successes | 112 | 19 | 0 |
| boot_failures | 8 | 9 | 11 |
| BUG:kernel_boot_crashed | 6 | | |
| BUG:kernel_boot_hang | 2 | | |
| RIP:__might_sleep | 0 | 2 | 7 |
| Kernel_panic-not_syncing:softlockup:hung_tasks | 0 | 9 | 11 |
| backtrace:stutter_wait | 0 | 2 | 5 |
| RIP:_cond_resched | 0 | 3 | 1 |
| backtrace:trace_apic_timer_interrupt | 0 | 1 | |
| RIP:stutter_wait | 0 | 4 | 2 |
| backtrace:apic_timer_interrupt | 0 | 6 | 5 |
| RIP:function_test_events_call | 0 | 0 | 1 |
| backtrace:ftrace_call | 0 | 0 | 1 |
+------------------------------------------------+------------+------------+------------------+
[ 9.123983] Testing event mm_vmscan_direct_reclaim_end:
[ 32.110003] NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [torture_slam_kt:115]
[ 32.110003] CPU: 1 PID: 115 Comm: torture_slam_kt Not tainted 3.17.0-rc2-00063-g86d8f10 #7
[ 32.110003] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
[ 32.110003] task: ffff88000b395650 ti: ffff88000b210000 task.ti: ffff88000b210000
[ 32.110003] RIP: 0010:[<ffffffff8111b24f>] [<ffffffff8111b24f>] __might_sleep+0x18f/0x1a0
[ 32.110003] RSP: 0000:ffff88000b213e10 EFLAGS: 00000202
[ 32.110003] RAX: ffff88000b395650 RBX: 0000000000000000 RCX: 00000000000002a0
[ 32.110003] RDX: 0000000000000073 RSI: 0000000000000276 RDI: ffffffff82da8b58
[ 32.110003] RBP: ffff88000b213e10 R08: 0000000000000ae8 R09: 0000000000000000
[ 32.110003] R10: 0000000000000000 R11: 0000000000000000 R12: ffff88000b213df0
[ 32.110003] R13: ffff88000b213e10 R14: ffffffff8240723f R15: ffff88000b213d80
[ 32.110003] FS: 0000000000000000(0000) GS:ffff880012400000(0000) knlGS:0000000000000000
[ 32.110003] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 32.110003] CR2: 0000000000000000 CR3: 0000000003012000 CR4: 00000000000406e0
[ 32.110003] Stack:
[ 32.110003] ffff88000b213e28 ffffffff8120075b ffff880009048c90 ffff88000b213e48
[ 32.110003] ffffffff81201063 0000006300000000 ffff880010e48300 ffff88000b213f48
[ 32.110003] ffffffff81105150 0000000000000000 ffff88000b395650 0000000000000000
[ 32.110003] Call Trace:
[ 32.110003] [<ffffffff8120075b>] stutter_wait+0x9b/0x130
[ 32.110003] [<ffffffff81201063>] torture_slam_kthread+0xe3/0x190
[ 32.110003] [<ffffffff81105150>] kthread+0x130/0x160
[ 32.110003] [<ffffffff81105020>] ? __kthread_unpark+0xa0/0xa0
[ 32.110003] [<ffffffff824087bc>] ret_from_fork+0x7c/0xb0
[ 32.110003] [<ffffffff81105020>] ? __kthread_unpark+0xa0/0xa0
[ 32.110003] Code: 44 00 00 48 83 05 c1 e5 a3 02 01 f6 c4 02 0f 84 ae fe ff ff 65 48 8b 04 25 00 b9 00 00 8b 90 b0 03 00 00 48 83 05 a9 e5 a3 02 01 <85> d2 0f 84 8f fe ff ff 5d c3 0f 1f 80 00 00 00 00 0f 1f 44 00
[ 32.110003] Kernel panic - not syncing: softlockup: hung tasks
[ 32.110003] CPU: 1 PID: 115 Comm: torture_slam_kt Tainted: G L 3.17.0-rc2-00063-g86d8f10 #7
[ 32.110003] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
[ 32.110003] 0000000000000007 ffff880012403df8 ffffffff823f8f74 ffffffff82d98e85
[ 32.110003] ffff880012403e70 ffffffff823f0db6 0000000000000008 ffff880012403e80
[ 32.110003] ffff880012403e20 0000000000000001 0000000000000000 ffffffff811504f3
[ 32.110003] Call Trace:
[ 32.110003] <IRQ> [<ffffffff823f8f74>] dump_stack+0x85/0xba
[ 32.110003] [<ffffffff823f0db6>] panic+0x110/0x300
[ 32.110003] [<ffffffff811504f3>] ? console_unlock+0x373/0x680
[ 32.110003] [<ffffffff811ad3b3>] watchdog_timer_fn+0x283/0x2e0
[ 32.110003] [<ffffffff81171c99>] hrtimer_run_queues+0x159/0x470
[ 32.110003] [<ffffffff811ad130>] ? watchdog+0x40/0x40
[ 32.110003] [<ffffffff811708e2>] update_process_times+0x42/0xc0
[ 32.110003] [<ffffffff81183d08>] tick_periodic+0x38/0x130
[ 32.110003] [<ffffffff8118400e>] tick_handle_periodic+0x2e/0xb0
[ 32.110003] [<ffffffff81200f80>] ? torture_stutter+0x1d0/0x1d0
[ 32.110003] [<ffffffff81050dbf>] local_apic_timer_interrupt+0x3f/0x80
[ 32.110003] [<ffffffff8240b43f>] smp_apic_timer_interrupt+0x5f/0x80
[ 32.110003] [<ffffffff8240981d>] apic_timer_interrupt+0x6d/0x80
[ 32.110003] <EOI> [<ffffffff8111b24f>] ? __might_sleep+0x18f/0x1a0
[ 32.110003] [<ffffffff8120075b>] stutter_wait+0x9b/0x130
[ 32.110003] [<ffffffff81201063>] torture_slam_kthread+0xe3/0x190
[ 32.110003] [<ffffffff81105150>] kthread+0x130/0x160
[ 32.110003] [<ffffffff81105020>] ? __kthread_unpark+0xa0/0xa0
[ 32.110003] [<ffffffff824087bc>] ret_from_fork+0x7c/0xb0
[ 32.110003] [<ffffffff81105020>] ? __kthread_unpark+0xa0/0xa0
[ 32.110003] Kernel Offset: 0x0 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffff9fffffff)
Elapsed time: 40
git bisect start 8153a6bbcc340b292248b66a6db89b991fe23995 52addcf9d6669fa439387610bc65c92fa0980cef --
git bisect good fa5b2a0d6039cf49777a5dc734820b5b4935d569 # 10:55 25+ 0 Merge 'spi/fix/dw' into devel-hourly-2014082810
git bisect bad bab6cbcfc9ff6b0abb1506a6c42e93765f3c5162 # 11:04 11- 20 Merge 'arm-soc/for-next' into devel-hourly-2014082810
git bisect good 61ca93aaede4e63c82b1ce60cddab0f922290115 # 11:09 28+ 4 Merge 'kvmarm/queue' into devel-hourly-2014082810
git bisect good 5cde56d94f0761101b235d1c88c674034bdd101a # 11:23 28+ 1 Merge 'netdev-next/master' into devel-hourly-2014082810
git bisect bad dd7ed2643b17336d0103ddda2c385b59812a40b8 # 11:26 5- 14 Merge 'hid/for-3.18/picolcd' into devel-hourly-2014082810
git bisect bad aa6fe56cf691d0587229cb5c21c44899fc312134 # 11:28 2- 6 Merge 'rcu/rcu/exp' into devel-hourly-2014082810
git bisect bad a31a69e0a7601d78b242d1e0c367e53e6bdc47f8 # 11:31 1- 3 rcu: Eliminate deadlock between CPU hotplug and expedited grace periods
git bisect bad 86d8f10e35c1e63d5a839792efd7c3cb6a564fe4 # 11:34 4- 6 rcutorture: Add preemption-slam testing
# first bad commit: [86d8f10e35c1e63d5a839792efd7c3cb6a564fe4] rcutorture: Add preemption-slam testing
git bisect good b52938be99a099b452979ed805b480337ee2df0f # 11:41 120+ 8 Merge branch 'rcu-tasks.2014.08.27a' into HEAD
git bisect bad 8153a6bbcc340b292248b66a6db89b991fe23995 # 11:41 0- 11 0day head guard for 'devel-hourly-2014082810'
git bisect good f1bd473f95e02bc382d4dae94d7f82e2a455e05d # 12:00 120+ 3 Merge branch 'sec-v3.17-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/sergeh/linux-security
git bisect good d05446ae2128064a4bb8f74c84f6901ffb5c94bc # 12:24 120+ 4 Add linux-next specific files for 20140827
This script may reproduce the error.
----------------------------------------------------------------------------
#!/bin/bash
kernel=$1
initrd=quantal-core-x86_64.cgz
wget --no-clobber https://github.com/fengguang/reproduce-kernel-bug/raw/master/initrd/$initrd
kvm=(
qemu-system-x86_64
-enable-kvm
-cpu Haswell,+smep,+smap
-kernel $kernel
-initrd $initrd
-m 320
-smp 2
-net nic,vlan=1,model=e1000
-net user,vlan=1
-boot order=nc
-no-reboot
-watchdog i6300esb
-rtc base=localtime
-serial stdio
-display none
-monitor null
)
append=(
hung_task_panic=1
earlyprintk=ttyS0,115200
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
console=ttyS0,115200
console=tty0
vga=normal
root=/dev/ram0
rw
drbd.minor_count=8
)
"${kvm[@]}" --append "${append[*]}"
----------------------------------------------------------------------------
Thanks,
Fengguang
[xhci] BUG: unable to handle kernel NULL pointer dereference at (null)
by Fengguang Wu
Greetings,
The 0day kernel testing robot got the dmesg below; the first bad commit is
git://git.kernel.org/pub/scm/linux/kernel/git/djbw/usb.git td-fragments-v1
commit e65e21a542cab81d794db4e5fe919c4e1d624ea7
Author: Dan Williams <dan.j.williams(a)intel.com>
AuthorDate: Tue Jul 22 00:08:51 2014 -0700
Commit: Dan Williams <dan.j.williams(a)intel.com>
CommitDate: Fri Aug 22 10:06:50 2014 -0700
xhci: unit test ring enqueue/dequeue routines
Given the complexity of satisfying xhci 1.0+ host trb boundary
constraints, provide a test case that exercises inserting mid-segment
links into a ring.
The linker --wrap= option is used to not pollute the global identifier
space and to make it clear which standard xhci driver routines are being
mocked-up. The --wrap= option does not come into play when both
xhci-hcd and xhci-test are built-in to the kernel, so namespace
collisions are prevented by excluding xhci-test from the build when
xhci-hcd is built-in.
It's unfortunate that this is an in-kernel test rather than userspace
and that the infrastructure is custom rather than generic. That said,
it serves its purpose of exercising the corner cases of the scatterlist
parsing implementation in xhci.
Cc: Rusty Russell <rusty(a)rustcorp.com.au>
Signed-off-by: Dan Williams <dan.j.williams(a)intel.com>
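For readers unfamiliar with the linker feature mentioned above, this is how GNU ld's --wrap= behaves in general (a generic illustration, not the xhci-test code; foo() is a placeholder and the exact kbuild syntax for passing the flag is omitted): with --wrap=foo, every reference to foo() resolves to __wrap_foo(), while __real_foo() resolves to the original definition.
#include <linux/printk.h>
int __real_foo(int x);			/* resolves to the original foo() */
int __wrap_foo(int x)			/* calls to foo() land here instead */
{
	pr_info("foo(%d) intercepted by the test\n", x);
	return __real_foo(x);		/* optionally delegate to the real implementation */
}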
+------------------------------------------------------+------------+------------+
| | fb6fa3e625 | e65e21a542 |
+------------------------------------------------------+------------+------------+
| boot_successes | 60 | 0 |
| boot_failures | 0 | 20 |
| BUG:unable_to_handle_kernel_NULL_pointer_dereference | 0 | 20 |
| Oops | 0 | 20 |
| RIP:setup_test_skip64 | 0 | 20 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 20 |
| backtrace:do_test | 0 | 20 |
| backtrace:xhci_test_init | 0 | 20 |
| backtrace:kernel_init_freeable | 0 | 20 |
+------------------------------------------------------+------------+------------+
[ 12.405859] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[ 12.406471] ohci-pci: OHCI PCI platform driver
[ 12.406906] ohci-platform: OHCI generic platform driver
[ 12.407510] BUG: unable to handle kernel NULL pointer dereference at (null)
[ 12.408218] IP: [<ffffffff81968843>] setup_test_skip64+0x183/0x270
[ 12.408781] PGD 0
[ 12.409010] Oops: 0000 [#1] SMP DEBUG_PAGEALLOC
[ 12.409450] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.16.0-rc5-00225-ge65e21a #6
[ 12.410102] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
[ 12.410599] task: ffff880012128000 ti: ffff880012130000 task.ti: ffff880012130000
[ 12.410950] RIP: 0010:[<ffffffff81968843>] [<ffffffff81968843>] setup_test_skip64+0x183/0x270
[ 12.410950] RSP: 0000:ffff880012133d08 EFLAGS: 00010202
[ 12.410950] RAX: ffff880012117000 RBX: 0000000000000000 RCX: 000000078000000f
[ 12.410950] RDX: 0000000000000040 RSI: 0000000000000f01 RDI: 0000000000000000
[ 12.410950] RBP: ffff880012133d48 R08: 0000000000000fe0 R09: 0000000000000000
[ 12.410950] R10: 00000000000f0000 R11: 0000000000000001 R12: 0000000080000000
[ 12.410950] R13: 0000000000000000 R14: 000000000000ffe0 R15: 000000000000ffe0
[ 12.410950] FS: 0000000000000000(0000) GS:ffff880012400000(0000) knlGS:0000000000000000
[ 12.410950] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 12.410950] CR2: 0000000000000000 CR3: 0000000002568000 CR4: 00000000000006b0
[ 12.410950] Stack:
[ 12.410950] ffff880012133ddc ffff880012133de8 ffff880012133e10 0000000000000000
[ 12.410950] 0000000000000000 ffff88000b1a2400 0000000000000000 0000000000000000
[ 12.410950] ffff880012133e48 ffffffff81d71168 0000000000000000 0000303a35343200
[ 12.410950] Call Trace:
[ 12.410950] [<ffffffff81d71168>] do_test.constprop.70+0x47/0x894
[ 12.410950] [<ffffffff819686c0>] ? setup_test_32_248_8+0x340/0x340
[ 12.410950] [<ffffffff81826630>] ? device_create_groups_vargs+0xe0/0x1a0
[ 12.410950] [<ffffffff82d3a394>] ? ohci_platform_init+0x60/0x60
[ 12.410950] [<ffffffff82d3a585>] xhci_test_init+0x1f1/0x2a5
[ 12.410950] [<ffffffff819686c0>] ? setup_test_32_248_8+0x340/0x340
[ 12.410950] [<ffffffff81968380>] ? setup_test_wrap64+0x320/0x320
[ 12.410950] [<ffffffff81968060>] ? setup_test_dont_trim+0x2f0/0x2f0
[ 12.410950] [<ffffffff81967d70>] ? xhci_ring_free+0x1d0/0x1d0
[ 12.410950] [<ffffffff82d3a394>] ? ohci_platform_init+0x60/0x60
[ 12.410950] [<ffffffff82ce2695>] do_one_initcall+0x143/0x24d
[ 12.410950] [<ffffffff810dab7b>] ? parse_args+0x2fb/0x530
[ 12.410950] [<ffffffff82ce297b>] kernel_init_freeable+0x1dc/0x2aa
[ 12.410950] [<ffffffff82ce19d5>] ? do_early_param+0xc3/0xc3
[ 12.410950] [<ffffffff81d4b250>] ? rest_init+0xd0/0xd0
[ 12.410950] [<ffffffff81d4b25e>] kernel_init+0xe/0x160
[ 12.410950] [<ffffffff81d88d3c>] ret_from_fork+0x7c/0xb0
[ 12.410950] [<ffffffff81d4b250>] ? rest_init+0xd0/0xd0
[ 12.410950] Code: 48 85 ff 40 0f 94 c6 44 0f b6 ce 49 83 c1 02 4a 83 04 cd a0 e9 b3 82 01 45 31 c9 40 84 f6 75 0b 45 0f b6 ca 49 c1 e1 04 49 01 f9 <49> 8b 39 48 8b 30 48 c1 e1 06 4c 89 78 10 44 89 40 08 01 d3 89
[ 12.410950] RIP [<ffffffff81968843>] setup_test_skip64+0x183/0x270
[ 12.410950] RSP <ffff880012133d08>
[ 12.410950] CR2: 0000000000000000
[ 12.410950] ---[ end trace 3157077290b0c2c1 ]---
[ 12.410950] Kernel panic - not syncing: Fatal exception
git bisect start 66e8dfa4e0d9600dedc08adcaac83c378b65351b 52addcf9d6669fa439387610bc65c92fa0980cef --
git bisect good 511b6daa3a596ab5c54bee5dab56ed4f77337a40 # 22:39 20+ 0 Merge 'ipvs-next/master' into devel-hourly-2014082722
git bisect bad 73e9ac542728ea03b8796cf9818950dc9e05d534 # 22:49 0- 20 Merge 'hid/for-3.18/upstream' into devel-hourly-2014082722
git bisect good 513dd18bd1b397935660c01daa14e53e819b9270 # 23:00 20+ 0 Merge 'netdev-next/master' into devel-hourly-2014082722
git bisect good a617416625136eec767df79308544cbb46fe0311 # 23:03 20+ 0 Merge 'kvm-ppc/kvm-ppc-queue' into devel-hourly-2014082722
git bisect good 858bf88bf6175c80920daa8c9210b0209443b7e1 # 23:06 20+ 0 Merge 'spi/for-next' into devel-hourly-2014082722
git bisect good cdb03bc488490bb364fa29ec292ecd3291ca5770 # 23:10 20+ 0 Merge 'regulator/for-next' into devel-hourly-2014082722
git bisect bad 8f5a71eb299401d62562e7ab634665ff98850e8f # 23:13 0- 20 Merge 'djbw-usb/td-fragments-v1' into devel-hourly-2014082722
git bisect good a75ef911cf100b8cf7d25baf6dac8052328a96e7 # 23:22 20+ 0 xhci: clarify "ring valid" checks
git bisect good 652b7ee36207f186f3d701675483df43b4845c5c # 23:26 20+ 0 xhci: kill ->num_trbs_free_temp in struct xhci_ring
git bisect good 1c11eb8545a3321e7ca27fc7ba8c56b6e6df2b57 # 23:31 20+ 0 xhci: add xhci_ring_reap_td() helper
git bisect bad e65e21a542cab81d794db4e5fe919c4e1d624ea7 # 23:54 0- 20 xhci: unit test ring enqueue/dequeue routines
git bisect good fb6fa3e625e1e453aea9eeb97d58bee30e1c0781 # 23:58 20+ 0 xhci: v1.0 scatterlist enqueue support (td-fragment rework)
# first bad commit: [e65e21a542cab81d794db4e5fe919c4e1d624ea7] xhci: unit test ring enqueue/dequeue routines
git bisect good fb6fa3e625e1e453aea9eeb97d58bee30e1c0781 # 00:00 60+ 0 xhci: v1.0 scatterlist enqueue support (td-fragment rework)
git bisect bad 66e8dfa4e0d9600dedc08adcaac83c378b65351b # 00:00 0- 11 0day head guard for 'devel-hourly-2014082722'
git bisect good 68e370289c29e3beac99d59c6d840d470af9dfcf # 00:19 60+ 2 Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux
git bisect good d05446ae2128064a4bb8f74c84f6901ffb5c94bc # 00:33 60+ 1 Add linux-next specific files for 20140827
This script may reproduce the error.
----------------------------------------------------------------------------
#!/bin/bash
kernel=$1
initrd=quantal-core-x86_64.cgz
wget --no-clobber https://github.com/fengguang/reproduce-kernel-bug/raw/master/initrd/$initrd
kvm=(
qemu-system-x86_64
-cpu kvm64
-enable-kvm
-kernel $kernel
-initrd $initrd
-m 320
-smp 2
-net nic,vlan=1,model=e1000
-net user,vlan=1
-boot order=nc
-no-reboot
-watchdog i6300esb
-rtc base=localtime
-serial stdio
-display none
-monitor null
)
append=(
hung_task_panic=1
earlyprintk=ttyS0,115200
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
console=ttyS0,115200
console=tty0
vga=normal
root=/dev/ram0
rw
drbd.minor_count=8
)
"${kvm[@]}" --append "${append[*]}"
----------------------------------------------------------------------------
Thanks,
Fengguang
WARNING: at kernel/trace/trace_kprobe.c:1393 kprobe_trace_self_tests_init()
by Fengguang Wu
Greetings,
The 0day kernel testing robot got the dmesg below; the first bad commit is
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 003002e04ed38618fc37b92ba128f5ca79d39f4f
Author: Masami Hiramatsu <masami.hiramatsu.pt(a)hitachi.com>
AuthorDate: Wed Jun 5 12:12:16 2013 +0900
Commit: Ingo Molnar <mingo(a)kernel.org>
CommitDate: Thu Jun 20 14:25:48 2013 +0200
kprobes: Fix arch_prepare_kprobe to handle copy insn failures
Fix arch_prepare_kprobe() to handle failures in copy instruction
correctly. This fix is related to the previous fix: 8101376
which made __copy_instruction return an error result if failed,
but caller site was not updated to handle it. Thus, this is the
other half of the bugfix.
This fix is also related to the following bug-report:
https://bugzilla.redhat.com/show_bug.cgi?id=910649
Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt(a)hitachi.com>
Acked-by: Steven Rostedt <rostedt(a)goodmis.org>
Tested-by: Jonathan Lebon <jlebon(a)redhat.com>
Cc: Frank Ch. Eigler <fche(a)redhat.com>
Cc: systemtap(a)sourceware.org
Cc: yrl.pp-manager.tt(a)hitachi.com
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
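The shape of the fix, as described in the changelog (a sketch, not the literal upstream diff): the copy path now reports failure, and arch_prepare_kprobe() propagates it instead of accepting a half-prepared probe.
#include <linux/kprobes.h>
/* __copy_instruction() is the x86-internal helper made to return 0 on failure */
static int arch_copy_kprobe(struct kprobe *p)
{
	/* Copy the instruction, recovering it if another probe modified it. */
	if (!__copy_instruction(p->ainsn.insn, p->addr))
		return -EINVAL;		/* copy failed: reject the probe */
	/* ...instruction fixups elided... */
	return 0;
}
int arch_prepare_kprobe(struct kprobe *p)
{
	/* ...probe checks and slot allocation elided... */
	return arch_copy_kprobe(p);	/* previously the copy result was dropped here */
}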
+-------------------------------------------------------------------------------+------------+------------+---------------+
| | f1a527899e | 003002e04e | next-20140827 |
+-------------------------------------------------------------------------------+------------+------------+---------------+
| boot_successes | 60 | 0 | 0 |
| boot_failures | 0 | 20 | 11 |
| WARNING:at_kernel/trace/trace_kprobe.c:kprobe_trace_self_tests_init() | 0 | 20 | |
| backtrace:kprobe_trace_self_tests_init | 0 | 20 | 11 |
| backtrace:warn_slowpath_null | 0 | 20 | 11 |
| backtrace:kernel_init_freeable | 0 | 20 | 11 |
| WARNING:CPU:PID:at_kernel/trace/trace_kprobe.c:kprobe_trace_self_tests_init() | 0 | 0 | 11 |
+-------------------------------------------------------------------------------+------------+------------+---------------+
[ 26.534705] Testing kprobe tracing:
[ 26.537068] Could not insert probe at kprobe_trace_selftest_target+0: -22
[ 26.540483] ------------[ cut here ]------------
[ 26.541996] WARNING: at kernel/trace/trace_kprobe.c:1393 kprobe_trace_self_tests_init+0x6e/0x84d()
[ 26.545093] Modules linked in:
[ 26.546418] CPU: 0 PID: 1 Comm: swapper Not tainted 3.10.0-rc3-00005-g003002e #9
[ 26.548765] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
[ 26.550364] 0000000000000009 ffff880013479e68 ffffffff81f16c80 ffff880013479ea0
[ 26.552703] ffffffff810a97e8 ffffffff8322bb14 0000000000000007 0000000000000000
[ 26.555109] 0000000000000000 0000000000000000 ffff880013479eb0 ffffffff810a984a
[ 26.558146] Call Trace:
[ 26.559125] [<ffffffff81f16c80>] dump_stack+0x27/0x30
[ 26.560708] [<ffffffff810a97e8>] warn_slowpath_common+0xa8/0xe0
[ 26.562520] [<ffffffff8322bb14>] ? init_kprobe_trace+0xe8/0xe8
[ 26.564181] [<ffffffff810a984a>] warn_slowpath_null+0x2a/0x40
[ 26.565343] [<ffffffff8322bb82>] kprobe_trace_self_tests_init+0x6e/0x84d
[ 26.567150] [<ffffffff8322bb14>] ? init_kprobe_trace+0xe8/0xe8
[ 26.568947] [<ffffffff81000372>] do_one_initcall+0x1c2/0x2b0
[ 26.570496] [<ffffffff831fbc5c>] kernel_init_freeable+0x238/0x36f
[ 26.572172] [<ffffffff831fac1f>] ? do_early_param+0x111/0x111
[ 26.573724] [<ffffffff81eecb40>] ? rest_init+0x140/0x140
[ 26.575201] [<ffffffff81eecb56>] kernel_init+0x16/0x2b0
[ 26.576740] [<ffffffff81f3acba>] ret_from_fork+0x7a/0xb0
[ 26.578215] [<ffffffff81eecb40>] ? rest_init+0x140/0x140
[ 26.579717] ---[ end trace c89373f17ce6ce32 ]---
[ 26.581040] error on probing function entry.
git bisect start v3.10 v3.9 --
git bisect good ff89acc563a0bd49965674f56552ad6620415fe2 # 08:19 20+ 20 Merge branch 'rcu/urgent' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu
git bisect good e4327859341f2d3a93b4b6fef2ea483eac1c270c # 08:25 20+ 0 Merge branch 'for-3.10' of git://git.samba.org/sfrench/cifs-2.6
git bisect good 2601ded7fd8827ddbcc450cbfb153b3f3c59b443 # 08:30 20+ 0 Merge tag 'sound-3.10' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound
git bisect good 9e895ace5d82df8929b16f58e9f515f6d54ab82d # 08:54 20+ 0 Linux 3.10-rc7
git bisect bad 1a506e473576cdcb922d339aea76b67d0fe344f7 # 09:06 0- 12 Merge branch 'drm-fixes' of git://people.freedesktop.org/~airlied/linux
git bisect good 78750f1908869c3bfcbf2a1f1f00f078f2948271 # 09:18 20+ 0 Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client
git bisect bad 54faf77d065926adbcc2a49e6df3559094cc93ba # 09:25 0- 20 Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
git bisect good de6e1317f746fbc527a73976c58b4119e506ff7c # 09:39 20+ 0 Merge tag 'critical_fix_for_3.9' of git://git.kernel.org/pub/scm/linux/kernel/git/rwlove/fcoe
git bisect good e3ff91143eb2a6eaaab4831c85a2837a95fbbea3 # 09:47 20+ 0 Merge branch 'fixes' of git://git.linaro.org/people/rmk/linux-arm
git bisect bad 8b4d801b2b123b6c09742f861fe44a8527b84d47 # 10:10 0- 20 hw_breakpoint: Fix cpu check in task_bp_pinned(cpu)
git bisect bad 003002e04ed38618fc37b92ba128f5ca79d39f4f # 10:15 0- 20 kprobes: Fix arch_prepare_kprobe to handle copy insn failures
# first bad commit: [003002e04ed38618fc37b92ba128f5ca79d39f4f] kprobes: Fix arch_prepare_kprobe to handle copy insn failures
git bisect good f1a527899ef0a8a241eb3bea619eb2e29d797f44 # 10:23 60+ 0 perf/x86: Fix broken PEBS-LL support on SNB-EP/IVB-EP
git bisect bad d05446ae2128064a4bb8f74c84f6901ffb5c94bc # 10:23 0- 11 Add linux-next specific files for 20140827
git bisect bad f1bd473f95e02bc382d4dae94d7f82e2a455e05d # 10:30 0- 30 Merge branch 'sec-v3.17-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/sergeh/linux-security
git bisect bad d05446ae2128064a4bb8f74c84f6901ffb5c94bc # 10:30 0- 11 Add linux-next specific files for 20140827
This script may reproduce the error.
----------------------------------------------------------------------------
#!/bin/bash
kernel=$1
initrd=yocto-minimal-x86_64.cgz
wget --no-clobber https://github.com/fengguang/reproduce-kernel-bug/raw/master/initrd/$initrd
kvm=(
qemu-system-x86_64
-cpu kvm64
-enable-kvm
-kernel $kernel
-initrd $initrd
-m 320
-smp 1
-net nic,vlan=1,model=e1000
-net user,vlan=1
-boot order=nc
-no-reboot
-watchdog i6300esb
-rtc base=localtime
-serial stdio
-display none
-monitor null
)
append=(
hung_task_panic=1
earlyprintk=ttyS0,115200
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
console=ttyS0,115200
console=tty0
vga=normal
root=/dev/ram0
rw
drbd.minor_count=8
)
"${kvm[@]}" --append "${append[*]}"
----------------------------------------------------------------------------
Thanks,
Fengguang
[acpi/osl] inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-R} usage.
by Fengguang Wu
Greetings,
The 0day kernel testing robot got the dmesg below; the first bad commit is
commit b11bc0be2f115a90949f1c26379f1288c8cde531
Author: Lan Tianyu <tianyu.lan(a)intel.com>
AuthorDate: Tue Aug 26 01:54:34 2014 +0200
Commit: Rafael J. Wysocki <rafael.j.wysocki(a)intel.com>
CommitDate: Tue Aug 26 01:54:34 2014 +0200
ACPI / OSL: Make acpi_os_map_cleanup() use call_rcu() to avoid deadlocks
Deadlock is possible when CPU hotplug and evaluating ACPI method happen
at the same time.
During CPU hotplug, acpi_cpu_soft_notify() is called under the CPU hotplug
lock. Then, acpi_cpu_soft_notify() calls acpi_bus_get_device() to obtain
the struct acpi_device attached to the given ACPI handle. The ACPICA's
namespace lock will be acquired by acpi_bus_get_device() in the process.
Thus it is possible to hold the ACPICA's namespace lock under the CPU
hotplug lock.
Evaluating an ACPI method may involve accessing an operation region in
system memory and the associated address space will be unmapped under
the ACPICA's namespace lock after completing the access. Currently, osl.c
uses RCU to protect memory ranges used by AML. When unmapping them it
calls synchronize_rcu() in acpi_os_map_cleanup(), but that blocks
CPU hotplug by acquiring the CPU hotplug lock. Thus it is possible to
hold the CPU hotplug lock under the ACPICA's namespace lock.
This leads to deadlocks like the following one if AML accessing operation
regions in memory is executed in parallel with CPU hotplug.
INFO: task bash:741 blocked for more than 30 seconds.
Not tainted 3.16.0-rc5+ #671
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
bash D ffff88014e214140 0 741 716 0x00000080
ffff88009b9f3a10 0000000000000086 ffff88009dcfb840 ffff88009b9f3fd8
0000000000014140 0000000000014140 ffffffff81c18460 ffffffff81c40fc8
ffffffff81c40fcc ffff88009dcfb840 00000000ffffffff ffffffff81c40fd0
Call Trace:
[<ffffffff817a1b29>] schedule_preempt_disabled+0x29/0x70
[<ffffffff817a34fa>] __mutex_lock_slowpath+0xca/0x1c0
[<ffffffff817a360f>] mutex_lock+0x1f/0x2f
[<ffffffff810bc8cc>] get_online_cpus+0x2c/0x50
[<ffffffff8111bbd4>] synchronize_sched_expedited+0x64/0x1c0
[<ffffffff8111bb65>] synchronize_sched+0x45/0x50
[<ffffffff81431498>] acpi_os_map_cleanup.part.7+0x14/0x3e
[<ffffffff81795c54>] acpi_os_unmap_iomem+0xe2/0xea
[<ffffffff81795c6a>] acpi_os_unmap_memory+0xe/0x14
[<ffffffff814459bc>] acpi_ev_system_memory_region_setup+0x2d/0x97
[<ffffffff81459504>] acpi_ut_update_ref_count+0x24d/0x2de
[<ffffffff814596af>] acpi_ut_update_object_reference+0x11a/0x18b
[<ffffffff81459282>] acpi_ut_remove_reference+0x2e/0x31
[<ffffffff8144ffdf>] acpi_ns_detach_object+0x7b/0x80
[<ffffffff8144ef11>] acpi_ns_delete_namespace_subtree+0x47/0x81
[<ffffffff81440488>] acpi_ds_terminate_control_method+0x85/0x11b
[<ffffffff81454625>] acpi_ps_parse_aml+0x164/0x289
[<ffffffff81454da6>] acpi_ps_execute_method+0x1c1/0x26c
[<ffffffff8144f764>] acpi_ns_evaluate+0x1c1/0x258
[<ffffffff81451f86>] acpi_evaluate_object+0x126/0x22f
[<ffffffff8144d1ac>] acpi_hw_execute_sleep_method+0x3d/0x68
[<ffffffff8144d5cf>] ? acpi_hw_enable_all_runtime_gpes+0x17/0x19
[<ffffffff8144deb0>] acpi_hw_legacy_wake+0x4d/0x9d
[<ffffffff8144e599>] acpi_hw_sleep_dispatch+0x2a/0x2c
[<ffffffff8144e5cb>] acpi_leave_sleep_state+0x17/0x19
[<ffffffff8143335c>] acpi_pm_finish+0x3f/0x99
[<ffffffff81108c49>] suspend_devices_and_enter+0x139/0x560
[<ffffffff81109162>] pm_suspend+0xf2/0x370
[<ffffffff81107e69>] state_store+0x79/0xf0
[<ffffffff813bc4af>] kobj_attr_store+0xf/0x20
[<ffffffff81284f3d>] sysfs_kf_write+0x3d/0x50
[<ffffffff81284580>] kernfs_fop_write+0xe0/0x160
[<ffffffff81210f47>] vfs_write+0xb7/0x1f0
[<ffffffff81211ae6>] SyS_write+0x46/0xb0
[<ffffffff8114d986>] ? __audit_syscall_exit+0x1f6/0x2a0
[<ffffffff817a4ea9>] system_call_fastpath+0x16/0x1b
INFO: task async-enable-no:749 blocked for more than 30 seconds.
Not tainted 3.16.0-rc5+ #671
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
async-enable-no D ffff88014e254140 0 749 2 0x00000080
ffff88009de83bf0 0000000000000046 ffff88009b850000 ffff88009de83fd8
0000000000014140 0000000000014140 ffff880148305dc0 ffff880149804160
7fffffffffffffff 0000000000000002 0000000000000000 ffff88009b850000
Call Trace:
[<ffffffff817a1689>] schedule+0x29/0x70
[<ffffffff817a0b49>] schedule_timeout+0x1f9/0x270
[<ffffffff81284bfe>] ? __kernfs_create_file+0x7e/0xa0
[<ffffffff8128546b>] ? sysfs_add_file_mode_ns+0x9b/0x160
[<ffffffff817a36b2>] __down_common+0x93/0xd8
[<ffffffff817a376a>] __down_timeout+0x16/0x18
[<ffffffff8110546c>] down_timeout+0x4c/0x60
[<ffffffff81431f97>] acpi_os_wait_semaphore+0x43/0x57
[<ffffffff8145a8f4>] acpi_ut_acquire_mutex+0x48/0x88
[<ffffffff81435d1b>] ? acpi_match_device+0x4f/0x4f
[<ffffffff8145250f>] acpi_get_data_full+0x3a/0x8e
[<ffffffff81435b30>] acpi_bus_get_device+0x23/0x40
[<ffffffff8145d839>] acpi_cpu_soft_notify+0x50/0xe6
[<ffffffff810e1ddc>] notifier_call_chain+0x4c/0x70
[<ffffffff810e1eee>] __raw_notifier_call_chain+0xe/0x10
[<ffffffff810bc993>] cpu_notify+0x23/0x50
[<ffffffff810bcb98>] _cpu_up+0x168/0x180
[<ffffffff810bcc5c>] _cpu_up_with_trace+0x2c/0xe0
[<ffffffff810bd050>] ? disable_nonboot_cpus+0x1c0/0x1c0
[<ffffffff810bd06f>] async_enable_nonboot_cpus+0x1f/0x70
[<ffffffff810dda02>] kthread+0xd2/0xf0
[<ffffffff810dd930>] ? insert_kthread_work+0x40/0x40
[<ffffffff817a4dfc>] ret_from_fork+0x7c/0xb0
To avoid such deadlocks, modify acpi_os_map_cleanup() to use call_rcu()
for the unmapping of memory regions that aren't used any more.
Signed-off-by: Lan Tianyu <tianyu.lan(a)intel.com>
[rjw: Subject and changelog.]
Cc: All applicable <stable(a)vger.kernel.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki(a)intel.com>
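A hedged sketch of the approach described above (not the actual osl.c diff): the unmap work is queued with call_rcu() so that no grace-period wait ever runs under the ACPICA namespace lock. The callback name acpi_os_map_reclaim matches the one visible in the backtrace below; the struct layout shown here is illustrative.
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
struct acpi_ioremap {				/* illustrative subset of osl.c's struct */
	void __iomem *virt;
	struct rcu_head rcu;			/* added so call_rcu() can queue the unmap */
};
static void acpi_os_map_reclaim(struct rcu_head *rcu)
{
	struct acpi_ioremap *map = container_of(rcu, struct acpi_ioremap, rcu);
	/* Runs after a grace period has elapsed, outside the namespace lock. */
	iounmap(map->virt);
	kfree(map);
}
static void acpi_os_map_cleanup(struct acpi_ioremap *map)
{
	if (!map)
		return;
	/* Before: synchronize_rcu(); iounmap(map->virt); kfree(map); */
	call_rcu(&map->rcu, acpi_os_map_reclaim);
}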
+----------------------------------------------+-----------+------------+
| | v3.17-rc2 | b11bc0be2f |
+----------------------------------------------+-----------+------------+
| boot_successes | 60 | 0 |
| boot_failures | 0 | 20 |
| inconsistent_SOFTIRQ-ON-W-IN-SOFTIRQ-R_usage | 0 | 20 |
| backtrace:pci_arch_init | 0 | 20 |
| backtrace:kernel_init_freeable | 0 | 20 |
| backtrace:smpboot_thread_fn | 0 | 18 |
| backtrace:bdi_register | 0 | 1 |
| backtrace:default_bdi_init | 0 | 1 |
| backtrace:register_sysrq_key | 0 | 1 |
| backtrace:pm_sysrq_init | 0 | 1 |
+----------------------------------------------+-----------+------------+
[ 0.079000] 3.17.0-rc2-00001-gb11bc0b #7 Not tainted
[ 0.079000] ---------------------------------
[ 0.079000] inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-R} usage.
[ 0.079000] ksoftirqd/0/3 [HC0[0]:SC1[3]:HE1:SE0] takes:
[ 0.079000] (resource_lock){+++?..}, at: [<ffffffff810c2ed3>] find_next_iomem_res+0x43/0x130
[ 0.079000] {SOFTIRQ-ON-W} state was registered at:
[ 0.079000] [<ffffffff810ee014>] __lock_acquire+0x584/0x20d0
[ 0.079000] [<ffffffff810f04f6>] lock_acquire+0x86/0xe0
[ 0.079000] [<ffffffff81997e78>] _raw_write_lock+0x38/0x70
[ 0.079000] [<ffffffff810c3c9d>] __request_region+0xad/0x170
[ 0.079000] [<ffffffff8235cee4>] pci_direct_probe+0x36/0x20f
[ 0.079000] [<ffffffff8235cd65>] pci_arch_init+0xa/0x5a
[ 0.079000] [<ffffffff8232412a>] do_one_initcall+0x193/0x1a7
[ 0.079000] [<ffffffff82324238>] kernel_init_freeable+0xfa/0x17f
[ 0.079000] [<ffffffff81979ee9>] kernel_init+0x9/0xf0
[ 0.079000] [<ffffffff819986ba>] ret_from_fork+0x7a/0xb0
[ 0.079000] irq event stamp: 26
[ 0.079000] hardirqs last enabled at (26): [<ffffffff810fd53e>] __rcu_process_callbacks+0x6e/0x170
[ 0.079000] hardirqs last disabled at (25): [<ffffffff810fd500>] __rcu_process_callbacks+0x30/0x170
[ 0.079000] softirqs last enabled at (0): [<ffffffff810bcd7a>] copy_process.part.55+0x2ca/0x18f0
[ 0.079000] softirqs last disabled at (23): [<ffffffff810c23bd>] run_ksoftirqd+0x3d/0x70
[ 0.079000]
[ 0.079000] other info that might help us debug this:
[ 0.079000] Possible unsafe locking scenario:
[ 0.079000]
[ 0.079000] CPU0
[ 0.079000] ----
[ 0.079000] lock(resource_lock);
[ 0.079000] <Interrupt>
[ 0.079000] lock(resource_lock);
[ 0.079000]
[ 0.079000] *** DEADLOCK ***
[ 0.079000]
[ 0.079000] 1 lock held by ksoftirqd/0/3:
[ 0.079000] #0: (rcu_callback){......}, at: [<ffffffff810fd594>] __rcu_process_callbacks+0xc4/0x170
[ 0.079000]
[ 0.079000] stack backtrace:
[ 0.079000] CPU: 0 PID: 3 Comm: ksoftirqd/0 Not tainted 3.17.0-rc2-00001-gb11bc0b #7
[ 0.079000] ffffffff82b4b7b0 ffff880000063aa0 ffffffff81983dbe ffff880000063af0
[ 0.079000] ffffffff819822d2 0000000000000003 ffff880000000001 ffff880000000000
[ 0.079000] 0000000000000006 ffff88000005c7a8 ffffffff810ea390 0000000000000005
[ 0.079000] Call Trace:
[ 0.079000] [<ffffffff81983dbe>] dump_stack+0x19/0x1b
[ 0.079000] [<ffffffff819822d2>] print_usage_bug.part.39+0x283/0x292
[ 0.079000] [<ffffffff810ea390>] ? check_usage_backwards+0x150/0x150
[ 0.079000] [<ffffffff810eaf97>] mark_lock+0x267/0x6d0
[ 0.079000] [<ffffffff810edf12>] __lock_acquire+0x482/0x20d0
[ 0.079000] [<ffffffff810049b5>] ? dump_trace+0x185/0x2f0
[ 0.079000] [<ffffffff81010025>] ? save_stack_trace+0x25/0x40
[ 0.079000] [<ffffffff810c2aa0>] ? __request_resource+0x50/0x50
[ 0.079000] [<ffffffff810f04f6>] lock_acquire+0x86/0xe0
[ 0.079000] [<ffffffff810c2ed3>] ? find_next_iomem_res+0x43/0x130
[ 0.079000] [<ffffffff81997afb>] _raw_read_lock+0x3b/0x70
[ 0.079000] [<ffffffff810c2ed3>] ? find_next_iomem_res+0x43/0x130
[ 0.079000] [<ffffffff810c2ed3>] find_next_iomem_res+0x43/0x130
[ 0.079000] [<ffffffff810c2aa0>] ? __request_resource+0x50/0x50
[ 0.079000] [<ffffffff810c34ff>] walk_system_ram_range+0x7f/0xd0
[ 0.079000] [<ffffffff813f80a8>] ? acpi_os_execute_deferred+0x1b/0x1b
[ 0.079000] [<ffffffff810c3567>] page_is_ram+0x17/0x40
[ 0.079000] [<ffffffff813f80c4>] acpi_os_map_reclaim+0x1c/0x35
[ 0.079000] [<ffffffff810fd5e7>] __rcu_process_callbacks+0x117/0x170
[ 0.079000] [<ffffffff810fd594>] ? __rcu_process_callbacks+0xc4/0x170
[ 0.079000] [<ffffffff810fd650>] rcu_process_callbacks+0x10/0x20
[ 0.079000] [<ffffffff810c21f1>] __do_softirq+0x121/0x2b0
[ 0.079000] [<ffffffff810c23bd>] run_ksoftirqd+0x3d/0x70
[ 0.079000] [<ffffffff810df1e5>] smpboot_thread_fn+0xf5/0x180
[ 0.079000] [<ffffffff810df0f0>] ? in_egroup_p+0x40/0x40
[ 0.079000] [<ffffffff810db808>] kthread+0xf8/0x110
[ 0.079000] [<ffffffff81993dfa>] ? wait_for_common+0x11a/0x160
[ 0.079000] [<ffffffff810e03e5>] ? finish_task_switch.constprop.50+0x45/0x100
[ 0.079000] [<ffffffff810db710>] ? __kthread_parkme+0x70/0x70
[ 0.079000] [<ffffffff819986ba>] ret_from_fork+0x7a/0xb0
[ 0.079000] [<ffffffff810db710>] ? __kthread_parkme+0x70/0x70
[ 0.079372] Running resizable hashtable tests...
[ 0.080012] Adding 2048 keys
git bisect start bf040f9c8492b6e7aaaa5bc593c7e9693a9c606a 52addcf9d6669fa439387610bc65c92fa0980cef --
git bisect bad 5cfd0a20f8c8b6f192aebd9c3f536319e106e433 # 02:06 0- 1 Merge 'stericsson/msm-cleanup' into devel-hourly-2014082622
git bisect good e39c483fdb0adda238e37d2bc80c20f7fe183600 # 02:10 20+ 0 Merge 'luto/checkpatch' into devel-hourly-2014082622
git bisect good 49c47bbe5cc0366fa83f53898cdd87d0c1c01b34 # 02:32 20+ 0 Merge 'pwm/for-next' into devel-hourly-2014082622
git bisect bad b998605a177da25ec3c5285b9cdad0cc5aaf6fa3 # 02:37 0- 20 Merge 'pm/bleeding-edge' into devel-hourly-2014082622
git bisect good 3afd0fcabfcec5e0d9164c91508eedd495674974 # 02:42 20+ 0 Merge 'm68knommu/for-next' into devel-hourly-2014082622
git bisect good 21a6f663b9172a50b0634a889501a520964b8155 # 02:47 20+ 0 Merge 'char-misc/char-misc-linus' into devel-hourly-2014082622
git bisect good ab3c20f55f3e8fc487f8db1fd83a43c429524789 # 02:51 20+ 0 Merge 'cifs/for-linus' into devel-hourly-2014082622
git bisect bad 90bf325c80978287390e17c24d84e909fc138c8c # 02:54 0- 5 Merge branches 'acpi-scan', 'acpi-osl', 'acpi-ec' and 'acpi-lpss' into bleeding-edge
git bisect good 236105db632c6279a020f78c83e22eaef746006b # 03:02 20+ 0 ACPI: Run fixed event device notifications in process context
git bisect good 558e4736f2e1b0e6323adf7a5e4df77ed6cfc1a4 # 03:04 20+ 0 ACPI / EC: Add support to disallow QR_EC to be issued before completing previous QR_EC
git bisect bad b11bc0be2f115a90949f1c26379f1288c8cde531 # 03:07 0- 20 ACPI / OSL: Make acpi_os_map_cleanup() use call_rcu() to avoid deadlocks
# first bad commit: [b11bc0be2f115a90949f1c26379f1288c8cde531] ACPI / OSL: Make acpi_os_map_cleanup() use call_rcu() to avoid deadlocks
git bisect good 52addcf9d6669fa439387610bc65c92fa0980cef # 03:09 60+ 0 Linux 3.17-rc2
git bisect bad bf040f9c8492b6e7aaaa5bc593c7e9693a9c606a # 03:09 0- 11 0day head guard for 'devel-hourly-2014082622'
git bisect good 52addcf9d6669fa439387610bc65c92fa0980cef # 03:09 60+ 0 Linux 3.17-rc2
git bisect good 1c9e4561f3b2afffcda007eae9d0ddd25525f50e # 03:17 60+ 0 Add linux-next specific files for 20140826
This script may reproduce the error.
----------------------------------------------------------------------------
#!/bin/bash
kernel=$1
initrd=yocto-minimal-x86_64.cgz
wget --no-clobber https://github.com/fengguang/reproduce-kernel-bug/raw/master/initrd/$initrd
kvm=(
qemu-system-x86_64
-enable-kvm
-cpu Haswell,+smep,+smap
-kernel $kernel
-initrd $initrd
-m 320
-smp 1
-net nic,vlan=1,model=e1000
-net user,vlan=1
-boot order=nc
-no-reboot
-watchdog i6300esb
-rtc base=localtime
-serial stdio
-display none
-monitor null
)
append=(
hung_task_panic=1
earlyprintk=ttyS0,115200
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
console=ttyS0,115200
console=tty0
vga=normal
root=/dev/ram0
rw
drbd.minor_count=8
)
"${kvm[@]}" --append "${append[*]}"
----------------------------------------------------------------------------
Thanks,
Fengguang
[sched] 143e1e28cb4: +17.9% aim7.jobs-per-min, -9.7% hackbench.throughput
by Fengguang Wu
Hi Vincent,
FYI, we noticed some performance ups and downs on
commit 143e1e28cb40bed836b0a06567208bd7347c9672 ("sched: Rework sched_domain topology definition")
107437febd495a5 143e1e28cb40bed836b0a0656 testbox/testcase/testparams
--------------- ------------------------- ---------------------------
0.09 ± 3% +88.2% 0.17 ± 1% nhm4/ebizzy/200%-100x-10s
0.09 ± 3% +88.2% 0.17 ± 1% TOTAL ebizzy.throughput.per_thread.stddev_percent
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
128529 ± 1% +17.9% 151594 ± 0% brickland1/aim7/6000-page_test
128529 ± 1% +17.9% 151594 ± 0% TOTAL aim7.jobs-per-min
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
156539 ± 1% -4.3% 149885 ± 0% lkp-snb01/hackbench/1600%-process-pipe
116465 ± 1% -17.1% 96542 ± 1% wsm/hackbench/1600%-process-pipe
273004 ± 1% -9.7% 246428 ± 0% TOTAL hackbench.throughput
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
12451869 ± 0% -2.9% 12087560 ± 0% brickland3/vm-scalability/300s-lru-file-readonce
12451869 ± 0% -2.9% 12087560 ± 0% TOTAL vm-scalability.throughput
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
1980 ± 0% -2.4% 1933 ± 0% nhm4/ebizzy/200%-100x-10s
1980 ± 0% -2.4% 1933 ± 0% TOTAL ebizzy.throughput.per_thread.min
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
4.446e+08 ± 0% -1.9% 4.364e+08 ± 0% lkp-nex04/pigz/100%-128K
4.446e+08 ± 0% -1.9% 4.364e+08 ± 0% TOTAL pigz.throughput
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
27.18 ± 0% -1.5% 26.78 ± 0% nhm4/ebizzy/200%-100x-10s
27.18 ± 0% -1.5% 26.78 ± 0% TOTAL ebizzy.time.user
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
2083 ± 0% +1.3% 2110 ± 0% nhm4/ebizzy/200%-100x-10s
2083 ± 0% +1.3% 2110 ± 0% TOTAL ebizzy.throughput.per_thread.max
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
32335 ± 0% -1.0% 32012 ± 0% nhm4/ebizzy/200%-100x-10s
32335 ± 0% -1.0% 32012 ± 0% TOTAL ebizzy.throughput
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
6626 ± 5% -88.7% 751 ±36% brickland3/vm-scalability/300s-lru-file-readonce
6626 ± 5% -88.7% 751 ±36% TOTAL cpuidle.C3-IVT.usage
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
14204402 ±19% -71.5% 4050186 ±34% brickland3/vm-scalability/300s-lru-file-readonce
14204402 ±19% -71.5% 4050186 ±34% TOTAL cpuidle.C3-IVT.time
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
0.38 ±32% +218.1% 1.20 ± 4% wsm/hackbench/1600%-process-pipe
0.38 ±32% +218.1% 1.20 ± 4% TOTAL perf-profile.cpu-cycles.__schedule.schedule.pipe_wait.pipe_read.do_sync_read
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
76064 ± 3% -32.2% 51572 ± 6% brickland1/aim7/6000-page_test
269053 ± 1% -57.9% 113386 ± 1% brickland3/vm-scalability/300s-lru-file-readonce
345117 ± 1% -52.2% 164959 ± 2% TOTAL cpuidle.C6-IVT.usage
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
59366697 ± 3% -46.1% 32017187 ± 7% brickland1/aim7/6000-page_test
59366697 ± 3% -46.1% 32017187 ± 7% TOTAL cpuidle.C1-IVT.time
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
26666815 ± 2% +83.3% 48893253 ± 2% lkp-nex04/pigz/100%-128K
26666815 ± 2% +83.3% 48893253 ± 2% TOTAL cpuidle.C1E-NHM.time
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
2561 ± 7% -42.9% 1463 ± 9% brickland1/aim7/6000-page_test
2561 ± 7% -42.9% 1463 ± 9% TOTAL numa-numastat.node2.other_node
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
116864 ± 2% +77.4% 207322 ± 2% lkp-nex04/pigz/100%-128K
116864 ± 2% +77.4% 207322 ± 2% TOTAL cpuidle.C1E-NHM.usage
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
0.65 ±22% +55.4% 1.02 ± 5% lkp-nex04/pigz/100%-128K
0.65 ±22% +55.4% 1.02 ± 5% TOTAL perf-profile.cpu-cycles.intel_idle.cpuidle_enter_state.cpuidle_enter.cpu_startup_entry.start_secondary
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
9926 ± 2% -43.8% 5577 ± 4% brickland1/aim7/6000-page_test
9926 ± 2% -43.8% 5577 ± 4% TOTAL proc-vmstat.numa_other
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
2627 ±12% -49.1% 1337 ±12% brickland1/aim7/6000-page_test
2627 ±12% -49.1% 1337 ±12% TOTAL numa-numastat.node1.other_node
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
0.88 ± 6% +65.8% 1.46 ± 4% lkp-nex04/pigz/100%-128K
0.88 ± 6% +65.8% 1.46 ± 4% TOTAL turbostat.%c3
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
19542 ± 9% -38.3% 12057 ± 4% brickland1/aim7/6000-page_test
19542 ± 9% -38.3% 12057 ± 4% TOTAL cpuidle.C1E-IVT.usage
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
1357 ± 8% -39.0% 827 ±16% brickland3/vm-scalability/300s-lru-file-readonce
1357 ± 8% -39.0% 827 ±16% TOTAL slabinfo.nfs_write_data.num_objs
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
1357 ± 8% -39.0% 827 ±16% brickland3/vm-scalability/300s-lru-file-readonce
1357 ± 8% -39.0% 827 ±16% TOTAL slabinfo.nfs_write_data.active_objs
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
993654 ± 2% -19.9% 795962 ± 3% brickland1/aim7/6000-page_test
343162 ± 6% -59.1% 140462 ± 5% brickland3/vm-scalability/300s-lru-file-readonce
315034 ± 3% +34.5% 423784 ± 1% wsm/hackbench/1600%-process-pipe
1651850 ± 3% -17.7% 1360209 ± 2% TOTAL softirqs.RCU
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
2455 ±10% -41.0% 1448 ± 9% brickland1/aim7/6000-page_test
2455 ±10% -41.0% 1448 ± 9% TOTAL numa-numastat.node0.other_node
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
582269 ±14% -55.6% 258617 ±16% brickland1/aim7/6000-page_test
441740 ±18% -49.2% 224389 ± 0% brickland3/vm-scalability/300s-lru-file-readonce
219830 ± 1% -29.9% 154078 ± 0% lkp-nex04/pigz/100%-128K
156140 ± 8% -12.1% 137250 ± 5% lkp-snb01/hackbench/1600%-process-pipe
47877 ± 1% -10.3% 42941 ± 1% wsm/hackbench/1600%-process-pipe
1447857 ±12% -43.6% 817277 ± 6% TOTAL softirqs.SCHED
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
892836 ± 1% +58.0% 1410897 ± 0% lkp-nex04/pigz/100%-128K
892836 ± 1% +58.0% 1410897 ± 0% TOTAL cpuidle.C3-NHM.usage
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
471304 ±11% -31.4% 323251 ± 8% brickland1/aim7/6000-page_test
471304 ±11% -31.4% 323251 ± 8% TOTAL numa-vmstat.node1.nr_anon_pages
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
4.875e+08 ± 2% +57.7% 7.688e+08 ± 2% lkp-nex04/pigz/100%-128K
4.875e+08 ± 2% +57.7% 7.688e+08 ± 2% TOTAL cpuidle.C3-NHM.time
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
2281 ±12% -41.8% 1327 ±16% brickland1/aim7/6000-page_test
2281 ±12% -41.8% 1327 ±16% TOTAL numa-numastat.node3.other_node
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
1903446 ±11% -30.7% 1318156 ± 7% brickland1/aim7/6000-page_test
1903446 ±11% -30.7% 1318156 ± 7% TOTAL numa-meminfo.node1.AnonPages
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
1.64 ± 3% -44.8% 0.90 ± 4% brickland3/vm-scalability/300s-lru-file-readonce
1.83 ± 1% +54.8% 2.84 ± 0% lkp-nex04/pigz/100%-128K
1.26 ± 2% -21.9% 0.99 ± 6% wsm/hackbench/1600%-process-pipe
4.73 ± 2% -0.1% 4.73 ± 2% TOTAL turbostat.%c1
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
518274 ±11% -30.4% 360742 ± 8% brickland1/aim7/6000-page_test
518274 ±11% -30.4% 360742 ± 8% TOTAL numa-vmstat.node1.nr_active_anon
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
2097138 ±10% -30.0% 1469003 ± 8% brickland1/aim7/6000-page_test
2097138 ±10% -30.0% 1469003 ± 8% TOTAL numa-meminfo.node1.Active(anon)
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
49527464 ± 6% -32.4% 33488833 ± 4% brickland1/aim7/6000-page_test
49527464 ± 6% -32.4% 33488833 ± 4% TOTAL cpuidle.C1E-IVT.time
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
2084 ±10% +31.6% 2743 ± 6% lkp-snb01/hackbench/1600%-process-pipe
235 ± 8% -35.7% 151 ± 8% wsm/hackbench/1600%-process-pipe
2319 ±10% +24.8% 2894 ± 6% TOTAL cpuidle.POLL.usage
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
54543 ±11% -37.2% 34252 ±16% brickland1/aim7/6000-page_test
4542 ± 4% -14.3% 3891 ± 6% brickland3/vm-scalability/300s-lru-file-readonce
59085 ±10% -35.4% 38143 ±15% TOTAL cpuidle.C1-IVT.usage
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
188938 ±33% -41.3% 110966 ±16% brickland1/aim7/6000-page_test
188938 ±33% -41.3% 110966 ±16% TOTAL numa-meminfo.node2.PageTables
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
47262 ±35% -42.3% 27273 ±16% brickland1/aim7/6000-page_test
47262 ±35% -42.3% 27273 ±16% TOTAL numa-vmstat.node2.nr_page_table_pages
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
1944687 ±10% -25.8% 1443923 ±16% brickland1/aim7/6000-page_test
1944687 ±10% -25.8% 1443923 ±16% TOTAL numa-meminfo.node3.Active(anon)
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
1754763 ±11% -26.6% 1288713 ±16% brickland1/aim7/6000-page_test
1754763 ±11% -26.6% 1288713 ±16% TOTAL numa-meminfo.node3.AnonPages
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
1964722 ±10% -25.5% 1464696 ±16% brickland1/aim7/6000-page_test
1964722 ±10% -25.5% 1464696 ±16% TOTAL numa-meminfo.node3.Active
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
17100655 ± 7% +44.6% 24731604 ± 2% lkp-nex04/pigz/100%-128K
37335977 ± 6% -28.6% 26654124 ± 7% wsm/hackbench/1600%-process-pipe
54436632 ± 6% -5.6% 51385728 ± 5% TOTAL cpuidle.C1-NHM.time
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
432109 ± 9% -26.2% 318886 ±14% brickland1/aim7/6000-page_test
432109 ± 9% -26.2% 318886 ±14% TOTAL numa-vmstat.node3.nr_anon_pages
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
479527 ± 9% -25.3% 358029 ±14% brickland1/aim7/6000-page_test
479527 ± 9% -25.3% 358029 ±14% TOTAL numa-vmstat.node3.nr_active_anon
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
3157742 ±16% -26.5% 2320253 ±10% brickland1/aim7/6000-page_test
3157742 ±16% -26.5% 2320253 ±10% TOTAL numa-meminfo.node1.MemUsed
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
2.00 ±39% -34.4% 1.31 ±10% lkp-nex04/pigz/100%-128K
2.00 ±39% -34.4% 1.31 ±10% TOTAL perf-profile.cpu-cycles.update_cfs_shares.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
9869 ±15% +21.8% 12025 ± 1% lkp-snb01/hackbench/1600%-process-pipe
9869 ±15% +21.8% 12025 ± 1% TOTAL numa-vmstat.node1.numa_other
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
61500 ± 2% +24.6% 76639 ± 2% lkp-nex04/pigz/100%-128K
1206181 ± 3% -30.5% 838619 ± 2% wsm/hackbench/1600%-process-pipe
1267682 ± 3% -27.8% 915259 ± 2% TOTAL cpuidle.C1-NHM.usage
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
2118206 ±10% -29.7% 1488874 ± 7% brickland1/aim7/6000-page_test
593077 ± 4% -9.9% 534490 ± 1% lkp-snb01/hackbench/1600%-process-pipe
2711283 ± 9% -25.4% 2023365 ± 6% TOTAL numa-meminfo.node1.Active
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
227935 ±13% -30.2% 159051 ±16% brickland3/vm-scalability/300s-lru-file-readonce
227935 ±13% -30.2% 159051 ±16% TOTAL numa-vmstat.node0.workingset_nodereclaim
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
7303589 ± 2% -24.8% 5495829 ± 3% brickland1/aim7/6000-page_test
202793 ± 3% +20.0% 243307 ± 2% wsm/hackbench/1600%-process-pipe
7506382 ± 2% -23.5% 5739136 ± 3% TOTAL meminfo.AnonPages
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
2064106 ± 7% -23.3% 1582792 ± 8% brickland1/aim7/6000-page_test
2064106 ± 7% -23.3% 1582792 ± 8% TOTAL numa-meminfo.node0.Active
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
8189 ± 9% -18.6% 6669 ± 3% lkp-snb01/hackbench/1600%-process-pipe
8189 ± 9% -18.6% 6669 ± 3% TOTAL slabinfo.proc_inode_cache.active_objs
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
8064024 ± 2% -24.0% 6132677 ± 3% brickland1/aim7/6000-page_test
203300 ± 3% +19.6% 243236 ± 2% wsm/hackbench/1600%-process-pipe
8267324 ± 2% -22.9% 6375914 ± 3% TOTAL meminfo.Active(anon)
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
2815 ± 3% -8.3% 2581 ± 6% brickland3/vm-scalability/300s-lru-file-readonce
1076 ±15% +19.6% 1287 ±12% nhm4/ebizzy/200%-100x-10s
3892 ± 6% -0.6% 3868 ± 8% TOTAL slabinfo.buffer_head.num_objs
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
6.567e+11 ± 3% -21.4% 5.16e+11 ± 4% brickland1/aim7/6000-page_test
1872397 ± 3% +22.2% 2288513 ± 2% wsm/hackbench/1600%-process-pipe
6.567e+11 ± 3% -21.4% 5.16e+11 ± 4% TOTAL meminfo.Committed_AS
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
235358 ± 5% -19.8% 188793 ± 3% brickland1/aim7/6000-page_test
235358 ± 5% -19.8% 188793 ± 3% TOTAL proc-vmstat.pgmigrate_success
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
235358 ± 5% -19.8% 188793 ± 3% brickland1/aim7/6000-page_test
235358 ± 5% -19.8% 188793 ± 3% TOTAL proc-vmstat.numa_pages_migrated
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
433235 ± 4% -18.1% 354845 ± 5% brickland1/aim7/6000-page_test
433235 ± 4% -18.1% 354845 ± 5% TOTAL numa-vmstat.node2.nr_anon_pages
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
90514624 ± 2% -21.6% 70928192 ± 1% wsm/hackbench/1600%-process-pipe
90514624 ± 2% -21.6% 70928192 ± 1% TOTAL proc-vmstat.pgalloc_dma32
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
463719 ± 8% -24.7% 349388 ± 7% brickland1/aim7/6000-page_test
71979 ± 2% -11.7% 63550 ± 4% lkp-snb01/hackbench/1600%-process-pipe
535698 ± 7% -22.9% 412938 ± 7% TOTAL numa-vmstat.node0.nr_anon_pages
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
1818612 ± 2% -24.9% 1365670 ± 3% brickland1/aim7/6000-page_test
51594 ± 2% +16.9% 60313 ± 1% wsm/hackbench/1600%-process-pipe
1870207 ± 2% -23.8% 1425983 ± 3% TOTAL proc-vmstat.nr_anon_pages
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
3187 ± 5% -18.5% 2599 ± 6% brickland1/aim7/6000-page_test
3187 ± 5% -18.5% 2599 ± 6% TOTAL numa-vmstat.node0.numa_other
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
2007155 ± 2% -24.3% 1518688 ± 3% brickland1/aim7/6000-page_test
51734 ± 2% +16.6% 60298 ± 1% wsm/hackbench/1600%-process-pipe
2058889 ± 2% -23.3% 1578987 ± 3% TOTAL proc-vmstat.nr_active_anon
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
1395062 ± 6% -19.0% 1130108 ± 3% brickland1/aim7/6000-page_test
1395062 ± 6% -19.0% 1130108 ± 3% TOTAL proc-vmstat.numa_hint_faults
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
2501 ± 9% +25.0% 3126 ±11% brickland3/vm-scalability/300s-lru-file-readonce
2501 ± 9% +25.0% 3126 ±11% TOTAL proc-vmstat.compact_stall
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
477037 ± 4% -17.2% 394983 ± 5% brickland1/aim7/6000-page_test
477037 ± 4% -17.2% 394983 ± 5% TOTAL numa-vmstat.node2.nr_active_anon
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
511455 ± 8% -23.9% 389447 ± 7% brickland1/aim7/6000-page_test
71970 ± 2% -11.7% 63552 ± 4% lkp-snb01/hackbench/1600%-process-pipe
583426 ± 7% -22.4% 452999 ± 7% TOTAL numa-vmstat.node0.nr_active_anon
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
796281 ±23% -27.7% 575352 ± 3% brickland1/aim7/6000-page_test
157314 ± 2% +22.7% 193026 ± 2% wsm/hackbench/1600%-process-pipe
953596 ±19% -19.4% 768378 ± 3% TOTAL meminfo.PageTables
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
2829 ±10% +18.7% 3357 ± 3% brickland1/aim7/6000-page_test
2829 ±10% +18.7% 3357 ± 3% TOTAL numa-vmstat.node2.nr_alloc_batch
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
1850230 ± 8% -24.1% 1405061 ± 8% brickland1/aim7/6000-page_test
289636 ± 2% -12.3% 254041 ± 4% lkp-snb01/hackbench/1600%-process-pipe
2139866 ± 7% -22.5% 1659103 ± 8% TOTAL numa-meminfo.node0.AnonPages
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
8145316 ± 2% -23.7% 6213832 ± 3% brickland1/aim7/6000-page_test
353534 ± 2% +11.3% 393604 ± 1% wsm/hackbench/1600%-process-pipe
8498851 ± 2% -22.3% 6607437 ± 3% TOTAL meminfo.Active
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
2706 ± 4% +26.1% 3411 ± 5% brickland1/aim7/6000-page_test
2706 ± 4% +26.1% 3411 ± 5% TOTAL numa-vmstat.node1.nr_alloc_batch
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
1.392e+08 ± 5% -15.5% 1.176e+08 ± 2% lkp-snb01/hackbench/1600%-process-pipe
1.392e+08 ± 5% -15.5% 1.176e+08 ± 2% TOTAL numa-numastat.node1.local_node
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
1.392e+08 ± 5% -15.5% 1.176e+08 ± 2% lkp-snb01/hackbench/1600%-process-pipe
1.392e+08 ± 5% -15.5% 1.176e+08 ± 2% TOTAL numa-numastat.node1.numa_hit
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
2044097 ± 7% -23.5% 1562809 ± 8% brickland1/aim7/6000-page_test
289613 ± 2% -12.3% 254058 ± 4% lkp-snb01/hackbench/1600%-process-pipe
2333711 ± 7% -22.1% 1816867 ± 7% TOTAL numa-meminfo.node0.Active(anon)
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
198747 ±23% -28.0% 143034 ± 3% brickland1/aim7/6000-page_test
40081 ± 2% +19.1% 47725 ± 1% wsm/hackbench/1600%-process-pipe
238828 ±19% -20.1% 190760 ± 2% TOTAL proc-vmstat.nr_page_table_pages
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
1.65e+08 ± 2% -11.3% 1.463e+08 ± 2% lkp-snb01/hackbench/1600%-process-pipe
82216748 ± 2% -23.6% 62851371 ± 1% wsm/hackbench/1600%-process-pipe
2.472e+08 ± 2% -15.4% 2.092e+08 ± 2% TOTAL proc-vmstat.pgalloc_normal
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
2725835 ± 4% -17.5% 2247537 ± 4% brickland1/aim7/6000-page_test
2725835 ± 4% -17.5% 2247537 ± 4% TOTAL numa-meminfo.node2.MemUsed
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
393637 ± 6% -15.3% 333296 ± 2% brickland1/aim7/6000-page_test
393637 ± 6% -15.3% 333296 ± 2% TOTAL proc-vmstat.numa_hint_faults_local
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
1.681e+08 ± 2% -10.6% 1.504e+08 ± 3% lkp-snb01/hackbench/1600%-process-pipe
1.709e+08 ± 2% -22.6% 1.323e+08 ± 1% wsm/hackbench/1600%-process-pipe
3.391e+08 ± 2% -16.6% 2.827e+08 ± 2% TOTAL proc-vmstat.numa_local
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
1.681e+08 ± 2% -10.6% 1.504e+08 ± 3% lkp-snb01/hackbench/1600%-process-pipe
1.709e+08 ± 2% -22.6% 1.323e+08 ± 1% wsm/hackbench/1600%-process-pipe
3.391e+08 ± 2% -16.6% 2.827e+08 ± 2% TOTAL proc-vmstat.numa_hit
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
109532 ± 1% +20.5% 132015 ± 2% wsm/hackbench/1600%-process-pipe
109532 ± 1% +20.5% 132015 ± 2% TOTAL slabinfo.vm_area_struct.active_objs
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
1.706e+08 ± 2% -10.5% 1.527e+08 ± 2% lkp-snb01/hackbench/1600%-process-pipe
1.727e+08 ± 2% -22.6% 1.338e+08 ± 1% wsm/hackbench/1600%-process-pipe
3.433e+08 ± 2% -16.5% 2.865e+08 ± 2% TOTAL proc-vmstat.pgfree
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
2.82 ± 3% +21.9% 3.43 ± 4% brickland1/aim7/6000-page_test
2.82 ± 3% +21.9% 3.43 ± 4% TOTAL turbostat.%pc2
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
1045052 ± 1% -15.9% 878536 ± 3% brickland3/vm-scalability/300s-lru-file-readonce
1045052 ± 1% -15.9% 878536 ± 3% TOTAL proc-vmstat.workingset_nodereclaim
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
148761 ± 1% +19.9% 178354 ± 2% wsm/hackbench/1600%-process-pipe
148761 ± 1% +19.9% 178354 ± 2% TOTAL slabinfo.kmalloc-64.active_objs
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
3023001 ± 1% -15.1% 2565273 ± 3% brickland3/vm-scalability/300s-lru-file-readonce
3023001 ± 1% -15.1% 2565273 ± 3% TOTAL proc-vmstat.slabs_scanned
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
74702 ± 1% +19.2% 89026 ± 2% wsm/hackbench/1600%-process-pipe
74702 ± 1% +19.2% 89026 ± 2% TOTAL slabinfo.anon_vma.active_objs
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
14041931 ± 2% -16.0% 11788641 ± 1% wsm/hackbench/1600%-process-pipe
14041931 ± 2% -16.0% 11788641 ± 1% TOTAL proc-vmstat.pgfault
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
1742111 ± 4% -16.9% 1447181 ± 5% brickland1/aim7/6000-page_test
1742111 ± 4% -16.9% 1447181 ± 5% TOTAL numa-meminfo.node2.AnonPages
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
15865125 ± 1% -15.0% 13485882 ± 1% brickland1/aim7/6000-page_test
15865125 ± 1% -15.0% 13485882 ± 1% TOTAL softirqs.TIMER
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
1923000 ± 4% -16.4% 1608509 ± 5% brickland1/aim7/6000-page_test
1923000 ± 4% -16.4% 1608509 ± 5% TOTAL numa-meminfo.node2.Active(anon)
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
1943185 ± 4% -16.2% 1629057 ± 5% brickland1/aim7/6000-page_test
1943185 ± 4% -16.2% 1629057 ± 5% TOTAL numa-meminfo.node2.Active
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
1.659e+08 ± 2% -13.8% 1.429e+08 ± 1% wsm/hackbench/1600%-process-pipe
1.659e+08 ± 2% -13.8% 1.429e+08 ± 1% TOTAL cpuidle.C6-NHM.time
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
2744 ± 1% +16.7% 3202 ± 2% wsm/hackbench/1600%-process-pipe
2744 ± 1% +16.7% 3202 ± 2% TOTAL slabinfo.vm_area_struct.active_slabs
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
2744 ± 1% +16.7% 3202 ± 2% wsm/hackbench/1600%-process-pipe
2744 ± 1% +16.7% 3202 ± 2% TOTAL slabinfo.vm_area_struct.num_slabs
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
120778 ± 1% +16.7% 140916 ± 2% wsm/hackbench/1600%-process-pipe
120778 ± 1% +16.7% 140916 ± 2% TOTAL slabinfo.vm_area_struct.num_objs
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
2107 ± 7% +14.9% 2420 ± 8% brickland3/vm-scalability/300s-lru-file-readonce
2107 ± 7% +14.9% 2420 ± 8% TOTAL proc-vmstat.compact_success
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
2591 ± 1% +16.0% 3007 ± 2% wsm/hackbench/1600%-process-pipe
2591 ± 1% +16.0% 3007 ± 2% TOTAL slabinfo.kmalloc-64.active_slabs
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
2591 ± 1% +16.0% 3007 ± 2% wsm/hackbench/1600%-process-pipe
2591 ± 1% +16.0% 3007 ± 2% TOTAL slabinfo.kmalloc-64.num_slabs
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
165897 ± 1% +16.0% 192484 ± 2% wsm/hackbench/1600%-process-pipe
165897 ± 1% +16.0% 192484 ± 2% TOTAL slabinfo.kmalloc-64.num_objs
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
4.40 ± 2% +22.0% 5.37 ± 4% brickland1/aim7/6000-page_test
1.54 ± 2% -12.0% 1.35 ± 3% wsm/hackbench/1600%-process-pipe
5.94 ± 2% +13.2% 6.73 ± 4% TOTAL turbostat.%c6
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
1683 ± 5% -11.0% 1497 ± 9% brickland3/vm-scalability/300s-lru-file-readonce
1683 ± 5% -11.0% 1497 ± 9% TOTAL slabinfo.xfs_inode.num_objs
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
1683 ± 5% -11.0% 1497 ± 9% brickland3/vm-scalability/300s-lru-file-readonce
1683 ± 5% -11.0% 1497 ± 9% TOTAL slabinfo.xfs_inode.active_objs
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
329 ± 1% -13.3% 285 ± 0% brickland1/aim7/6000-page_test
329 ± 1% -13.3% 285 ± 0% TOTAL uptime.boot
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
43214 ± 6% -15.2% 36648 ± 4% lkp-snb01/hackbench/1600%-process-pipe
43214 ± 6% -15.2% 36648 ± 4% TOTAL numa-vmstat.node0.nr_page_table_pages
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
175030 ± 7% -16.1% 146779 ± 4% lkp-snb01/hackbench/1600%-process-pipe
175030 ± 7% -16.1% 146779 ± 4% TOTAL numa-meminfo.node0.PageTables
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
47374 ± 1% -10.4% 42459 ± 4% wsm/hackbench/1600%-process-pipe
47374 ± 1% -10.4% 42459 ± 4% TOTAL meminfo.DirectMap4k
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
205471 ± 7% +18.5% 243577 ± 3% lkp-snb01/hackbench/1600%-process-pipe
205471 ± 7% +18.5% 243577 ± 3% TOTAL cpuidle.C1E-SNB.usage
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
86033 ± 1% +14.0% 98061 ± 2% wsm/hackbench/1600%-process-pipe
86033 ± 1% +14.0% 98061 ± 2% TOTAL slabinfo.anon_vma.num_objs
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
10692 ± 1% -12.7% 9334 ± 3% brickland3/vm-scalability/300s-lru-file-readonce
10692 ± 1% -12.7% 9334 ± 3% TOTAL proc-vmstat.pageoutrun
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
1343 ± 1% +14.0% 1531 ± 2% wsm/hackbench/1600%-process-pipe
1343 ± 1% +14.0% 1531 ± 2% TOTAL slabinfo.anon_vma.active_slabs
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
1343 ± 1% +14.0% 1531 ± 2% wsm/hackbench/1600%-process-pipe
1343 ± 1% +14.0% 1531 ± 2% TOTAL slabinfo.anon_vma.num_slabs
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
18554 ± 0% -9.2% 16845 ± 0% lkp-snb01/hackbench/1600%-process-pipe
4687 ± 1% +19.1% 5582 ± 2% wsm/hackbench/1600%-process-pipe
23241 ± 0% -3.5% 22427 ± 1% TOTAL slabinfo.mm_struct.active_objs
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
8281 ± 0% -14.7% 7062 ± 2% lkp-snb01/hackbench/1600%-process-pipe
3737 ± 0% -9.8% 3371 ± 1% wsm/hackbench/1600%-process-pipe
12018 ± 0% -13.2% 10433 ± 1% TOTAL vmstat.procs.r
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
26179 ± 0% -10.1% 23527 ± 0% lkp-snb01/hackbench/1600%-process-pipe
5556 ± 1% +16.6% 6479 ± 2% wsm/hackbench/1600%-process-pipe
31736 ± 0% -5.4% 30006 ± 1% TOTAL slabinfo.files_cache.active_objs
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
3077 ± 1% +14.5% 3523 ± 0% brickland1/aim7/6000-page_test
181562 ± 3% +13.5% 206096 ± 5% brickland3/vm-scalability/300s-lru-file-readonce
184639 ± 3% +13.5% 209619 ± 5% TOTAL proc-vmstat.pgactivate
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
6012 ± 1% +13.1% 6797 ± 1% wsm/hackbench/1600%-process-pipe
6012 ± 1% +13.1% 6797 ± 1% TOTAL slabinfo.mm_struct.num_objs
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
13158 ±13% -14.4% 11261 ± 4% brickland1/aim7/6000-page_test
13158 ±13% -14.4% 11261 ± 4% TOTAL numa-meminfo.node3.SReclaimable
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
3289 ±13% -14.4% 2815 ± 4% brickland1/aim7/6000-page_test
3289 ±13% -14.4% 2815 ± 4% TOTAL numa-vmstat.node3.nr_slab_reclaimable
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
6722 ± 1% +12.5% 7562 ± 2% wsm/hackbench/1600%-process-pipe
6722 ± 1% +12.5% 7562 ± 2% TOTAL slabinfo.files_cache.num_objs
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
9663 ± 0% -11.6% 8546 ± 2% brickland3/vm-scalability/300s-lru-file-readonce
9663 ± 0% -11.6% 8546 ± 2% TOTAL proc-vmstat.kswapd_low_wmark_hit_quickly
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
60059 ± 5% +12.2% 67416 ± 3% lkp-snb01/hackbench/1600%-process-pipe
60059 ± 5% +12.2% 67416 ± 3% TOTAL cpuidle.C7-SNB.usage
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
2815 ± 3% -8.3% 2581 ± 6% brickland3/vm-scalability/300s-lru-file-readonce
2815 ± 3% -8.3% 2581 ± 6% TOTAL slabinfo.buffer_head.active_objs
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
4529818 ± 1% -8.8% 4129398 ± 1% brickland1/aim7/6000-page_test
237630 ± 0% -13.1% 206405 ± 0% brickland3/vm-scalability/300s-lru-file-readonce
8.785e+08 ± 1% +51.3% 1.329e+09 ± 1% lkp-snb01/hackbench/1600%-process-pipe
89463167 ± 9% +250.9% 3.14e+08 ± 1% wsm/hackbench/1600%-process-pipe
9.727e+08 ± 2% +69.3% 1.647e+09 ± 1% TOTAL time.involuntary_context_switches
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
2175 ± 1% -8.8% 1984 ± 0% brickland3/vm-scalability/300s-lru-file-readonce
23054 ± 1% +12.2% 25876 ± 0% lkp-nex04/pigz/100%-128K
4619649 ± 0% +30.1% 6008008 ± 2% lkp-snb01/hackbench/1600%-process-pipe
529899 ± 5% +167.6% 1418080 ± 0% wsm/hackbench/1600%-process-pipe
5174778 ± 1% +44.0% 7453950 ± 2% TOTAL vmstat.system.cs
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
3150464 ± 2% -24.2% 2387551 ± 3% brickland1/aim7/6000-page_test
3890218 ± 1% +7.6% 4184429 ± 0% lkp-nex04/pigz/100%-128K
1.926e+09 ± 1% +21.0% 2.331e+09 ± 1% lkp-snb01/hackbench/1600%-process-pipe
2.322e+08 ± 3% +136.4% 5.489e+08 ± 0% wsm/hackbench/1600%-process-pipe
2.166e+09 ± 1% +33.3% 2.887e+09 ± 1% TOTAL time.voluntary_context_switches
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
1067968 ± 1% -36.5% 677985 ± 4% lkp-snb01/hackbench/1600%-process-pipe
231933 ± 0% -1.2% 229082 ± 0% nhm4/ebizzy/200%-100x-10s
16014 ± 1% +59.3% 25511 ± 0% wsm/hackbench/1600%-process-pipe
1315917 ± 0% -29.1% 932579 ± 2% TOTAL vmstat.system.in
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
281 ± 1% -15.1% 238 ± 0% brickland1/aim7/6000-page_test
281 ± 1% -15.1% 238 ± 0% TOTAL time.elapsed_time
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
29294 ± 1% -14.3% 25093 ± 0% brickland1/aim7/6000-page_test
33619 ± 0% +1.2% 34032 ± 0% brickland3/vm-scalability/300s-lru-file-readonce
62914 ± 0% -6.0% 59125 ± 0% TOTAL time.system_time
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
18258004 ± 1% -3.8% 17557683 ± 1% lkp-snb01/hackbench/1600%-process-pipe
3.104e+09 ± 0% -1.0% 3.073e+09 ± 0% nhm4/ebizzy/200%-100x-10s
13852545 ± 2% -16.7% 11538015 ± 1% wsm/hackbench/1600%-process-pipe
3.137e+09 ± 0% -1.1% 3.103e+09 ± 0% TOTAL time.minor_page_faults
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
18531 ± 0% -1.8% 18206 ± 0% lkp-nex04/pigz/100%-128K
1388 ± 0% +11.2% 1543 ± 0% lkp-snb01/hackbench/1600%-process-pipe
2718 ± 0% -1.5% 2678 ± 0% nhm4/ebizzy/200%-100x-10s
22638 ± 0% -0.9% 22428 ± 0% TOTAL time.user_time
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
± 1% -3.4% ± 0% brickland1/aim7/6000-page_test
± 1% -3.4% ± 0% TOTAL turbostat.RAM_W
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
± 0% +1.8% ± 0% lkp-snb01/hackbench/1600%-process-pipe
± 0% +1.8% ± 0% TOTAL turbostat.Cor_W
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
10655 ± 0% +1.4% 10802 ± 0% brickland1/aim7/6000-page_test
8224 ± 0% +1.4% 8341 ± 0% brickland3/vm-scalability/300s-lru-file-readonce
6230 ± 0% -1.7% 6125 ± 0% lkp-nex04/pigz/100%-128K
25110 ± 0% +0.6% 25269 ± 0% TOTAL time.percent_of_cpu_this_job_got
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
72.35 ± 0% +1.2% 73.20 ± 0% brickland3/vm-scalability/300s-lru-file-readonce
97.29 ± 0% -1.6% 95.71 ± 0% lkp-nex04/pigz/100%-128K
169.63 ± 0% -0.4% 168.91 ± 0% TOTAL turbostat.%c0
107437febd495a5 143e1e28cb40bed836b0a0656
--------------- -------------------------
± 0% +1.2% ± 0% lkp-snb01/hackbench/1600%-process-pipe
± 0% +1.2% ± 0% TOTAL turbostat.Pkg_W
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Fengguang
[rcu] 0eb885afb20: -25.8% softirqs.RCU
by Fengguang Wu
Hi Pranith,
FYI, these changes look nice:
git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git rcu/dev
commit 0eb885afb20317016670c1c5dd3a436c91e1e910 ("rcu: Use rcu_gp_kthread_wake() to wake up grace period kthreads")
test case: lkp-a04/aim7/400-disk_rr
It's an in-memory aim7/disk_rr test.
lkp-a04 is an Atom micro server with 8G memory.
5db6a289a3a0a69 0eb885afb20317016670c1c5d
--------------- -------------------------
150860 ± 1% -25.8% 111937 ± 1% TOTAL softirqs.RCU
2809 ± 1% -12.5% 2457 ± 1% TOTAL vmstat.system.cs
4202 ± 1% -6.9% 3911 ± 1% TOTAL vmstat.system.in
212421 ± 2% -7.7% 196035 ± 3% TOTAL time.involuntary_context_switches
vmstat.system.in
4300 ++-------------------------------------------------------------------+
| .*.. *.. .*.. * |
4250 *+ .*.. + .*. * *.. .. : |
4200 ++ *..*..*. *.. + *..*. : .. * : |
| * : .* : .* |
4150 ++ : .*..*. *. |
4100 ++ *. |
| |
4050 ++ |
4000 ++ |
| O O O O O |
3950 O+ O O O O O O O O O O |
3900 ++ O O O O |
| O |
3850 ++-------------------------O-----------------------------------O-----O
vmstat.system.cs
3000 ++-------------------------------------------------------------------+
| |
2900 ++ *.. |
*..*..*..*..*..*..*.. .. *..*..*..*..*.. .*.. .*.. |
2800 ++ * .*. *. .* |
| *..*..*. *. |
2700 ++ |
| |
2600 ++ |
| O O |
2500 O+ O O O O O O O O O O O |
| O O O O O |
2400 ++ O O O O
| O |
2300 ++-------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Fengguang
[blkg_stat] inconsistent {IN-SOFTIRQ-W} -> {SOFTIRQ-ON-W} usage.
by Fengguang Wu
Hi Hong and Jens,
FYI, the error introduced by this patch is still present and impacts the latest linux-next.
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 2c575026fae6e63771bd2a4c1d407214a8096a89
Author: Hong Zhiguo <zhiguohong(a)tencent.com>
AuthorDate: Wed Nov 20 10:35:05 2013 -0700
Commit: Jens Axboe <axboe(a)kernel.dk>
CommitDate: Wed Nov 20 15:33:04 2013 -0700
Update of blkg_stat and blkg_rwstat may happen in bh context.
While u64_stats_fetch_retry is only preempt_disable on 32bit
UP system. This is not enough to avoid preemption by bh and
may read strange 64 bit value.
Signed-off-by: Hong Zhiguo <zhiguohong(a)tencent.com>
Acked-by: Tejun Heo <tj(a)kernel.org>
Cc: stable(a)kernel.org
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
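For background, u64_stats_fetch_begin()/u64_stats_fetch_retry() implement a
seqcount-style retry read: on 32-bit kernels a 64-bit statistic cannot be
loaded in one atomic access, so the reader loops until it sees an even,
unchanged sequence number. The sketch below is a simplified userspace model
of that pattern, not the kernel's blk-cgroup code; the struct, function
names and the pthread writer are invented for illustration. In the kernel
the updater runs in BH context, so on 32-bit UP the reader also has to keep
bottom halves out (judging from the report below, re-enabling softirqs for
that purpose from cfqg_stats_update_avg_queue_size() while the irq-safe
queue lock is held is what lockdep then complains about).
/*
 * Minimal userspace model of the seqcount-style fetch/retry read that the
 * u64_stats helpers implement.  This is NOT the kernel's blk-cgroup code:
 * struct and function names are invented, and a plain pthread stands in for
 * the BH-context writer.  On 32-bit the 64-bit counter is kept as two
 * halves, so a reader must retry until the sequence is even and unchanged,
 * or it can observe a torn value.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

struct split_counter {
	atomic_uint seq;     /* even: stable, odd: update in progress */
	atomic_uint lo, hi;  /* 64-bit value kept as two 32-bit halves */
};

static struct split_counter ctr;

static void counter_add(uint64_t delta)     /* single writer, like the BH updater */
{
	uint64_t v = ((uint64_t)atomic_load_explicit(&ctr.hi, memory_order_relaxed) << 32)
		     + atomic_load_explicit(&ctr.lo, memory_order_relaxed) + delta;
	unsigned int s = atomic_load_explicit(&ctr.seq, memory_order_relaxed);

	atomic_store_explicit(&ctr.seq, s + 1, memory_order_relaxed); /* mark update */
	atomic_thread_fence(memory_order_release);                    /* ~smp_wmb() */
	atomic_store_explicit(&ctr.lo, (uint32_t)v, memory_order_relaxed);
	atomic_store_explicit(&ctr.hi, (uint32_t)(v >> 32), memory_order_relaxed);
	atomic_thread_fence(memory_order_release);                    /* ~smp_wmb() */
	atomic_store_explicit(&ctr.seq, s + 2, memory_order_relaxed); /* publish */
}

static uint64_t counter_read(void)
{
	unsigned int start;
	uint32_t lo, hi;

	do {                                                          /* fetch/retry */
		start = atomic_load_explicit(&ctr.seq, memory_order_relaxed);
		atomic_thread_fence(memory_order_acquire);            /* ~smp_rmb() */
		lo = atomic_load_explicit(&ctr.lo, memory_order_relaxed);
		hi = atomic_load_explicit(&ctr.hi, memory_order_relaxed);
		atomic_thread_fence(memory_order_acquire);            /* ~smp_rmb() */
	} while ((start & 1) ||
		 start != atomic_load_explicit(&ctr.seq, memory_order_relaxed));

	return (uint64_t)hi << 32 | lo;
}

static void *writer(void *arg)
{
	(void)arg;
	for (int i = 0; i < 1000000; i++)
		counter_add(0x100000001ULL);  /* bumps both halves in lockstep */
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, writer, NULL);
	for (int i = 0; i < 100000; i++) {
		uint64_t v = counter_read();
		/* halves move in lockstep, so a torn read would make them differ */
		if ((uint32_t)(v >> 32) != (uint32_t)v)
			printf("torn read: %#llx\n", (unsigned long long)v);
	}
	pthread_join(t, NULL);
	printf("final value: %#llx\n", (unsigned long long)counter_read());
	return 0;
}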
+-----------------------------------------------------------------------+------------+------------+
| | 82023bb7f7 | 2c575026fa |
+-----------------------------------------------------------------------+------------+------------+
| boot_successes | 496 | 0 |
| boot_failures | 494 | 330 |
| WARNING:CPU:PID:at_arch/x86/mm/ioremap.c:__early_ioremap() | 493 | 177 |
| WARNING:CPU:PID:at_kernel/trace/ring_buffer.c:rb_reserve_next_event() | 493 | 177 |
| backtrace:acpi_initialize_tables | 493 | 177 |
| backtrace:acpi_table_init | 493 | 177 |
| backtrace:acpi_boot_table_init | 493 | 177 |
| backtrace:ring_buffer_producer_thread | 493 | 177 |
| BUG:unable_to_handle_kernel_NULL_pointer_dereference | 3 | 2 |
| Oops | 3 | 2 |
| EIP_is_at_strlen | 3 | 2 |
| Kernel_panic-not_syncing:Fatal_exception_in_interrupt | 2 | |
| Kernel_panic-not_syncing:Fatal_exception | 1 | 2 |
| backtrace:vfs_write | 1 | 2 |
| backtrace:SyS_write | 1 | 2 |
| WARNING:CPU:PID:at_kernel/softirq.c:local_bh_enable() | 0 | 330 |
| inconsistent_IN-SOFTIRQ-W-SOFTIRQ-ON-W_usage | 0 | 330 |
| backtrace:do_mount | 0 | 330 |
| backtrace:SyS_mount | 0 | 330 |
| backtrace:smpboot_thread_fn | 0 | 182 |
+-----------------------------------------------------------------------+------------+------------+
[ 7.266963] scsi_id (235) used greatest stack depth: 6008 bytes left
[ 7.403676] ------------[ cut here ]------------
[ 7.404033] WARNING: CPU: 0 PID: 264 at kernel/softirq.c:156 local_bh_enable+0x9c/0x1e0()
[ 7.404033] CPU: 0 PID: 264 Comm: mount Tainted: G W 3.12.0-02795-g2c57502 #16
[ 7.404033] 00000001 511d1a58 420d4200 511d1a88 4109f3dd 426e5c40 00000000 00000108
[ 7.404033] 426e5fb0 0000009c 410a68dc 410a68dc 00000001 4183189d 00000001 511d1a98
[ 7.404033] 4109f4c2 00000009 00000000 511d1aac 410a68dc 51c9f008 51c9f23c 511d1ad8
[ 7.404033] Call Trace:
[ 7.404033] [<420d4200>] dump_stack+0x16/0x18
[ 7.404033] [<4109f3dd>] warn_slowpath_common+0x8d/0xb0
[ 7.404033] [<410a68dc>] ? local_bh_enable+0x9c/0x1e0
[ 7.404033] [<410a68dc>] ? local_bh_enable+0x9c/0x1e0
[ 7.404033] [<4183189d>] ? cfqg_stats_update_avg_queue_size+0x2d/0x100
[ 7.404033] [<4109f4c2>] warn_slowpath_null+0x22/0x30
[ 7.404033] [<410a68dc>] local_bh_enable+0x9c/0x1e0
[ 7.404033] [<4183189d>] cfqg_stats_update_avg_queue_size+0x2d/0x100
[ 7.404033] [<41833f4a>] __cfq_set_active_queue+0x15a/0x210
[ 7.404033] [<418300d9>] ? cfq_group_service_tree_add+0x199/0x260
[ 7.404033] [<41831f84>] ? cfq_service_tree_add+0x404/0x4f0
[ 7.404033] [<418320a9>] ? cfq_resort_rr_list+0x39/0x40
[ 7.404033] [<41832fff>] ? cfq_add_cfqq_rr+0x16f/0x1c0
[ 7.404033] [<4183a014>] ? cfq_update_idle_window.isra.78+0x84/0x3a0
[ 7.404033] [<41836b4c>] cfq_select_queue+0x7ec/0xa90
[ 7.404033] [<4183988f>] cfq_dispatch_requests+0x2bf/0x9c0
[ 7.404033] [<410833ec>] ? pvclock_clocksource_read+0xfc/0x240
[ 7.404033] [<410822f3>] ? kvm_clock_read+0x13/0x20
[ 7.404033] [<4183a702>] ? cfq_insert_request+0x3d2/0x8b0
[ 7.404033] [<410e0fc3>] ? sched_clock_local.constprop.2+0x43/0x190
[ 7.404033] [<410f5bd3>] ? __lock_acquire+0x1113/0x11a0
[ 7.404033] [<410f2fcb>] ? trace_hardirqs_off+0xb/0x10
[ 7.404033] [<4180f572>] blk_peek_request+0x302/0x450
[ 7.404033] [<4183fa06>] ? kobject_get+0xd6/0x100
[ 7.404033] [<419f1e08>] scsi_request_fn+0x68/0x8d0
[ 7.404033] [<4180b322>] __blk_run_queue_uncond+0x42/0x50
[ 7.404033] [<4180b370>] __blk_run_queue+0x40/0x50
[ 7.404033] [<418104ed>] blk_queue_bio+0x3dd/0x4e0
[ 7.404033] [<4180d1b2>] generic_make_request+0x102/0x180
[ 7.404033] [<4180d3a7>] submit_bio+0x177/0x250
[ 7.404033] [<412459da>] ? __find_get_block+0x2da/0x390
[ 7.404033] [<4124b46d>] ? bio_alloc_bioset+0xfd/0x340
[ 7.404033] [<4124673c>] _submit_bh+0x2dc/0x320
[ 7.404033] [<4124a595>] __bread+0x85/0x1f0
[ 7.404033] [<41301fb9>] ext3_fill_super+0x1a9/0x1d10
[ 7.404033] [<4184c428>] ? snprintf+0x18/0x20
[ 7.404033] [<41301e10>] ? ext3_setup_super+0x340/0x340
[ 7.404033] [<4120090d>] mount_bdev+0x1ed/0x2c0
[ 7.404033] [<41301e10>] ? ext3_setup_super+0x340/0x340
[ 7.404033] [<411debb0>] ? __kmalloc_track_caller+0xd0/0x2a0
[ 7.404033] [<412fc2bf>] ext3_mount+0x1f/0x30
[ 7.404033] [<41301e10>] ? ext3_setup_super+0x340/0x340
[ 7.404033] [<41200d27>] mount_fs+0x47/0x1f0
[ 7.404033] [<4122a0a5>] ? alloc_vfsmnt+0x175/0x1c0
[ 7.404033] [<4122b633>] vfs_kern_mount+0xa3/0x1a0
[ 7.404033] [<420e8122>] ? _raw_read_unlock+0x22/0x30
[ 7.404033] [<4122e786>] do_mount+0xdb6/0x11c0
[ 7.404033] [<4122d9ac>] ? copy_mount_string+0x5c/0x80
[ 7.404033] [<4122f003>] SyS_mount+0xf3/0x120
[ 7.404033] [<420e8e8c>] syscall_call+0x7/0xb
[ 7.404033] ---[ end trace 05e0c07eb1c663a9 ]---
[ 7.404033]
[ 7.404033] =================================
[ 7.404033] [ INFO: inconsistent lock state ]
[ 7.404033] 3.12.0-02795-g2c57502 #16 Tainted: G W
[ 7.404033] ---------------------------------
[ 7.404033] inconsistent {IN-SOFTIRQ-W} -> {SOFTIRQ-ON-W} usage.
[ 7.404033] mount/264 [HC0[0]:SC0[0]:HE1:SE1] takes:
[ 7.404033] (&(&q->__queue_lock)->rlock){+.?...}, at: [<418104cb>] blk_queue_bio+0x3bb/0x4e0
[ 7.404033] {IN-SOFTIRQ-W} state was registered at:
[ 7.404033] [<410f501b>] __lock_acquire+0x55b/0x11a0
[ 7.404033] [<410f63f3>] lock_acquire+0xd3/0x110
[ 7.404033] [<420e7b01>] _raw_spin_lock+0x41/0x90
[ 7.404033] [<419f2fa6>] scsi_device_unbusy+0x76/0xf0
[ 7.404033] [<419ea282>] scsi_finish_command+0x22/0x120
[ 7.404033] [<419f320f>] scsi_softirq_done+0xef/0x170
[ 7.404033] [<4181826b>] blk_done_softirq+0x7b/0x90
[ 7.404033] [<410a6214>] __do_softirq+0x134/0x450
[ 7.404033] [<410a656c>] run_ksoftirqd+0x3c/0x80
[ 7.404033] [<410d89fc>] smpboot_thread_fn+0x1dc/0x250
[ 7.404033] [<410cdec9>] kthread+0xf9/0x100
[ 7.404033] [<420e949b>] ret_from_kernel_thread+0x1b/0x30
[ 7.404033] irq event stamp: 2012
[ 7.404033] hardirqs last enabled at (2009): [<420e7dd7>] _raw_spin_unlock_irq+0x27/0x40
[ 7.404033] hardirqs last disabled at (2010): [<420e7c0a>] _raw_spin_lock_irq+0x1a/0x90
[ 7.404033] softirqs last enabled at (2012): [<4183189d>] cfqg_stats_update_avg_queue_size+0x2d/0x100
[ 7.404033] softirqs last disabled at (2011): [<41831888>] cfqg_stats_update_avg_queue_size+0x18/0x100
[ 7.404033]
[ 7.404033] other info that might help us debug this:
[ 7.404033] Possible unsafe locking scenario:
[ 7.404033]
[ 7.404033] CPU0
[ 7.404033] ----
[ 7.404033] lock(&(&q->__queue_lock)->rlock);
[ 7.404033] <Interrupt>
[ 7.404033] lock(&(&q->__queue_lock)->rlock);
[ 7.404033]
[ 7.404033] *** DEADLOCK ***
[ 7.404033]
[ 7.404033] 2 locks held by mount/264:
[ 7.404033] #0: (&type->s_umount_key#19/1){+.+.+.}, at: [<411ff563>] sget+0x2f3/0x530
[ 7.404033] #1: (&(&q->__queue_lock)->rlock){+.?...}, at: [<418104cb>] blk_queue_bio+0x3bb/0x4e0
[ 7.404033]
[ 7.404033] stack backtrace:
[ 7.404033] CPU: 0 PID: 264 Comm: mount Tainted: G W 3.12.0-02795-g2c57502 #16
[ 7.404033] 43159a40 511d19d4 420d4200 511d1a10 420b10dc 426e18ee 426e1c9c 00000108
[ 7.404033] 00000000 00000000 00000000 00000000 00000001 00000001 426e1c9c 00000006
[ 7.404033] 51208484 410f16f0 511d1a40 410f26bd 00000006 00000001 0000009c 511d1a44
[ 7.404033] Call Trace:
[ 7.404033] [<420d4200>] dump_stack+0x16/0x18
[ 7.404033] [<420b10dc>] print_usage_bug+0x1d1/0x1db
[ 7.404033] [<410f16f0>] ? check_usage_forwards+0xe0/0xe0
[ 7.404033] [<410f26bd>] mark_lock+0x18d/0x2e0
[ 7.404033] [<410f2911>] mark_held_locks+0x101/0x110
[ 7.404033] [<4109f33f>] ? print_oops_end_marker+0x2f/0x40
[ 7.404033] [<4109f3ec>] ? warn_slowpath_common+0x9c/0xb0
[ 7.404033] [<410a693b>] ? local_bh_enable+0xfb/0x1e0
[ 7.404033] [<410f2c8c>] trace_hardirqs_on_caller+0x17c/0x220
[ 7.404033] [<4183189d>] ? cfqg_stats_update_avg_queue_size+0x2d/0x100
[ 7.404033] [<410f2d3b>] trace_hardirqs_on+0xb/0x10
[ 7.404033] [<410a693b>] local_bh_enable+0xfb/0x1e0
[ 7.404033] [<4183189d>] cfqg_stats_update_avg_queue_size+0x2d/0x100
[ 7.404033] [<41833f4a>] __cfq_set_active_queue+0x15a/0x210
[ 7.404033] [<418300d9>] ? cfq_group_service_tree_add+0x199/0x260
[ 7.404033] [<41831f84>] ? cfq_service_tree_add+0x404/0x4f0
[ 7.404033] [<418320a9>] ? cfq_resort_rr_list+0x39/0x40
[ 7.404033] [<41832fff>] ? cfq_add_cfqq_rr+0x16f/0x1c0
[ 7.404033] [<4183a014>] ? cfq_update_idle_window.isra.78+0x84/0x3a0
[ 7.404033] [<41836b4c>] cfq_select_queue+0x7ec/0xa90
[ 7.404033] [<4183988f>] cfq_dispatch_requests+0x2bf/0x9c0
[ 7.404033] [<410833ec>] ? pvclock_clocksource_read+0xfc/0x240
[ 7.404033] [<410822f3>] ? kvm_clock_read+0x13/0x20
[ 7.404033] [<4183a702>] ? cfq_insert_request+0x3d2/0x8b0
[ 7.404033] [<410e0fc3>] ? sched_clock_local.constprop.2+0x43/0x190
[ 7.404033] [<410f5bd3>] ? __lock_acquire+0x1113/0x11a0
[ 7.404033] [<410f2fcb>] ? trace_hardirqs_off+0xb/0x10
[ 7.404033] [<4180f572>] blk_peek_request+0x302/0x450
[ 7.404033] [<4183fa06>] ? kobject_get+0xd6/0x100
[ 7.404033] [<419f1e08>] scsi_request_fn+0x68/0x8d0
[ 7.404033] [<4180b322>] __blk_run_queue_uncond+0x42/0x50
[ 7.404033] [<4180b370>] __blk_run_queue+0x40/0x50
[ 7.404033] [<418104ed>] blk_queue_bio+0x3dd/0x4e0
[ 7.404033] [<4180d1b2>] generic_make_request+0x102/0x180
[ 7.404033] [<4180d3a7>] submit_bio+0x177/0x250
[ 7.404033] [<412459da>] ? __find_get_block+0x2da/0x390
[ 7.404033] [<4124b46d>] ? bio_alloc_bioset+0xfd/0x340
[ 7.404033] [<4124673c>] _submit_bh+0x2dc/0x320
[ 7.404033] [<4124a595>] __bread+0x85/0x1f0
[ 7.404033] [<41301fb9>] ext3_fill_super+0x1a9/0x1d10
[ 7.404033] [<4184c428>] ? snprintf+0x18/0x20
[ 7.404033] [<41301e10>] ? ext3_setup_super+0x340/0x340
[ 7.404033] [<4120090d>] mount_bdev+0x1ed/0x2c0
[ 7.404033] [<41301e10>] ? ext3_setup_super+0x340/0x340
[ 7.404033] [<411debb0>] ? __kmalloc_track_caller+0xd0/0x2a0
[ 7.404033] [<412fc2bf>] ext3_mount+0x1f/0x30
[ 7.404033] [<41301e10>] ? ext3_setup_super+0x340/0x340
[ 7.404033] [<41200d27>] mount_fs+0x47/0x1f0
[ 7.404033] [<4122a0a5>] ? alloc_vfsmnt+0x175/0x1c0
[ 7.404033] [<4122b633>] vfs_kern_mount+0xa3/0x1a0
[ 7.404033] [<420e8122>] ? _raw_read_unlock+0x22/0x30
[ 7.404033] [<4122e786>] do_mount+0xdb6/0x11c0
[ 7.404033] [<4122d9ac>] ? copy_mount_string+0x5c/0x80
[ 7.404033] [<4122f003>] SyS_mount+0xf3/0x120
[ 7.404033] [<420e8e8c>] syscall_call+0x7/0xb
[ 7.789066] qnx6: unable to read the first superblock
[ 7.791067] UDF-fs: warning (device sda): udf_fill_super: No partition found (2)
[ 7.795064] NILFS: Can't find nilfs on dev sda.
[ 7.795738] BeFS(sda): No write support. Marking filesystem read-only
[ 7.797064] BeFS(sda): invalid magic header
git bisect start v3.13 v3.12 --
git bisect good 3bad8bb5cd3048a67df43ac6b1e2f191f19d9ff0 # 07:29 330+ 207 Merge branch 'for-next' of git://git.samba.org/sfrench/cifs-2.6
git bisect bad dd0508093b79141e0044ca02f0acb6319f69f546 # 07:33 266- 234 Merge branch 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
git bisect bad 3f02ff5c2c69753666787ed125708d283a823ffb # 07:45 103- 70 Merge tag 'tty-3.13-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty
git bisect good 1ab231b274ba51a54acebec23c6aded0f3cdf54e # 07:59 330+ 0 Merge branch 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
git bisect bad 5ee540613db504a10e15fafaf4c08cac96aa1823 # 08:18 53- 1 Merge branch 'for-linus' of git://git.kernel.dk/linux-block
git bisect good 53c6de50262a8edd6932bb59a32db7b9d92f8d67 # 08:24 330+ 0 Merge branch 'x86/urgent' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
git bisect good 59fb2f0e9e30ad99a8bab0ff1efaf8f4a3b7105f # 08:58 330+ 130 Merge tag 'fbdev-fixes-3.13' of git://git.kernel.org/pub/scm/linux/kernel/git/tomba/linux
git bisect good ef1e4e32d595d3e6c9a6d3d2956f087d5886c5e5 # 09:06 330+ 87 Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs
git bisect good 29be6345bbaec8502a70c4e2204d5818b48c4e8f # 09:10 330+ 163 Merge tag 'nfs-for-3.13-3' of git://git.linux-nfs.org/projects/trondmy/linux-nfs
git bisect bad e345d767f6530ec9cb0aabab7ea248072a9c6975 # 09:18 228- 125 Merge branch 'stable/for-jens-3.13-take-two' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip into for-linus
git bisect bad c170bbb45febc03ac4d34ba2b8bb55e06104b7e7 # 09:21 0- 20 block: submit_bio_wait() conversions
git bisect bad 2c575026fae6e63771bd2a4c1d407214a8096a89 # 09:24 0- 1 Update of blkg_stat and blkg_rwstat may happen in bh context. While u64_stats_fetch_retry is only preempt_disable on 32bit UP system. This is not enough to avoid preemption by bh and may read strange 64 bit value.
# first bad commit: [2c575026fae6e63771bd2a4c1d407214a8096a89] Update of blkg_stat and blkg_rwstat may happen in bh context. While u64_stats_fetch_retry is only preempt_disable on 32bit UP system. This is not enough to avoid preemption by bh and may read strange 64 bit value.
git bisect good 82023bb7f75b0052f40d3e74169d191c3e4e6286 # 09:32 990+ 494 Merge tag 'pm+acpi-2-3.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
git bisect bad 62287f766a9198a878852be74b35cc8a979c6b25 # 09:32 0- 11 0day head guard for 'devel-hourly-2014082110'
git bisect bad 5317821c08533e5f42f974e4e68e092beaf099b1 # 09:42 43- 40 Merge branch 'for-3.17-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/libata
git bisect bad deb9705745a63948e0a147713d39ed2aaaac97d7 # 09:47 19- 16 Add linux-next specific files for 20140822
This script may reproduce the error.
----------------------------------------------------------------------------
#!/bin/bash
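# usage: reproduce.sh <kernel image>  (e.g. a freshly built arch/x86/boot/bzImage)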
kernel=$1
initrd=yocto-minimal-i386.cgz
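# fetch the minimal Yocto i386 initrd (skipped if a local copy already exists)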
wget --no-clobber https://github.com/fengguang/reproduce-kernel-bug/raw/master/initrd/$initrd
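# qemu/KVM guest: 1 CPU, 320M RAM, serial console on stdio, no graphical display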
kvm=(
qemu-system-x86_64
-cpu kvm64
-enable-kvm
-kernel $kernel
-initrd $initrd
-m 320
-smp 1
-net nic,vlan=1,model=e1000
-net user,vlan=1
-boot order=nc
-no-reboot
-watchdog i6300esb
-rtc base=localtime
-serial stdio
-display none
-monitor null
)
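# kernel command line: turn hung tasks, soft lockups, NMI watchdog hits and oopses
# into panics; panic=-1 plus -no-reboot above makes qemu exit right after a panic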
append=(
hung_task_panic=1
earlyprintk=ttyS0,115200
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
console=ttyS0,115200
console=tty0
vga=normal
root=/dev/ram0
rw
drbd.minor_count=8
)
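# boot the guest with the assembled kernel command line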
"${kvm[@]}" --append "${append[*]}"
----------------------------------------------------------------------------
Thanks,
Fengguang
[ext4] 71d4f7d0321: -49.6% xfstests.generic.274.seconds
by Fengguang Wu
Hi Ted,
We noticed that xfstests generic/274 runs noticeably faster, and the first good
commit is 71d4f7d032149b935a26eb3ff85c6c837f3714e1 ("ext4: remove
metadata reservation checks").
test case: snb-drag/xfstests/4HDD-ext4-generic-slow2
snb-drag is a Sandy Bridge PC with 6G memory.
d5e03cbb0c88cd1 71d4f7d032149b935a26eb3ff
--------------- -------------------------
51 ± 2% -49.6% 25 ± 1% TOTAL xfstests.generic.274.seconds
817 ± 1% -3.0% 792 ± 1% TOTAL time.elapsed_time
xfstests.generic.274.seconds
60 ++---------------------------------------------------------------------+
| |
55 ++ .*..*.. *.. *.. |
| .*. . .. .. |
50 *+.*..*. *..*..*..* *..*..*..*..* * |
| |
45 ++ |
| |
40 ++ |
| |
35 ++ |
| |
30 ++ |
| |
25 O+-O--O--O--O--O---O--O--O--O--O--O--O--O--O--O--O--O---O--O--O--O--O--O
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Fengguang