c86660ba89 ("drm/modes: Parse overscan properties"): BUG: unable to handle kernel NULL pointer dereference at 0000001c
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://github.com/0day-ci/linux/commits/Maxime-Ripard/drm-modes-Rewrite-...
commit c86660ba893f328e1b26b4dcb4af0169ce52a9d4
Author: Maxime Ripard <maxime.ripard(a)bootlin.com>
AuthorDate: Thu Apr 11 15:22:42 2019 +0200
Commit: 0day robot <lkp(a)intel.com>
CommitDate: Fri Apr 12 12:28:39 2019 +0800
drm/modes: Parse overscan properties
Properly configuring the overscan properties might be needed for the
initial setup of the framebuffer for displays that still have overscan.
Let's allow for more properties on the kernel command line to set up each
margin.
Signed-off-by: Maxime Ripard <maxime.ripard(a)bootlin.com>
7b53ea1e7a drm/modes: Allow to specify rotation and reflection on the commandline
c86660ba89 drm/modes: Parse overscan properties
6993bfc971 drm/selftests: Add command line parser selftests
+-------------------------------------------------+------------+------------+------------+
| | 7b53ea1e7a | c86660ba89 | 6993bfc971 |
+-------------------------------------------------+------------+------------+------------+
| boot_successes | 31 | 0 | 0 |
| boot_failures | 2 | 11 | 11 |
| BUG:kernel_reboot-without-warning_in_test_stage | 2 | | |
| BUG:unable_to_handle_kernel | 0 | 11 | 11 |
| Oops:#[##] | 0 | 11 | 11 |
| EIP:drm_property_change_valid_get | 0 | 11 | 11 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 11 | 11 |
+-------------------------------------------------+------------+------------+------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <lkp(a)intel.com>
[ 8.604127] [drm] Found bochs VGA, ID 0xb0c0.
[ 8.604669] [drm] Framebuffer size 16384 kB @ 0xfd000000, mmio @ 0xfebf0000.
[ 8.605615] [TTM] Zone kernel: Available graphics memory: 241782 kiB
[ 8.606407] [TTM] Initializing pool allocator
[ 8.607172] [drm] Initialized bochs-drm 1.0.0 20130925 for 0000:00:02.0 on minor 1
[ 8.610655] BUG: unable to handle kernel NULL pointer dereference at 0000001c
[ 8.611403] #PF error: [normal kernel read fault]
[ 8.611403] *pde = 00000000
[ 8.611403] Oops: 0000 [#1] PREEMPT
[ 8.611403] CPU: 0 PID: 1 Comm: swapper Tainted: G T 5.1.0-rc4-00066-gc86660b #2
[ 8.611403] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 8.611403] EIP: drm_property_change_valid_get+0x14/0x220
[ 8.611403] Code: a1 ff ff ff eb d9 b8 fe ff ff ff eb d2 89 f6 8d bc 27 00 00 00 00 3e 8d 74 26 00 55 89 e5 83 ec 1c 89 5d f4 89 75 f8 89 7d fc <f6> 40 1c 04 75 76 8b 7d 08 89 d6 89 cb c7 07 00 00 00 00 8b 50 1c
[ 8.611403] EAX: 00000000 EBX: dfbb6300 ECX: 00000000 EDX: 00000000
[ 8.611403] ESI: 00000000 EDI: dfbbe3ec EBP: dde83c78 ESP: dde83c5c
[ 8.611403] DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 0068 EFLAGS: 00210296
[ 8.611403] CR0: 80050033 CR2: 0000001c CR3: 13a80000 CR4: 000006d0
[ 8.611403] DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
[ 8.611403] DR6: fffe0ff0 DR7: 00000400
[ 8.611403] Call Trace:
[ 8.611403] ? trace_hardirqs_on+0xf6/0x2f0
[ 8.611403] drm_atomic_set_property+0x2f/0xac0
[ 8.611403] ? drm_atomic_state_init+0xc7/0x110
[ 8.611403] ? drm_atomic_state_alloc+0x76/0x90
[ 8.611403] drm_setup_crtcs_fb+0x115/0x270
[ 8.611403] __drm_fb_helper_initial_config_and_unlock+0x279/0x4c0
[ 8.611403] drm_fbdev_client_hotplug+0x17a/0x240
[ 8.611403] drm_fbdev_generic_setup+0xf7/0x180
[ 8.611403] bochs_pci_probe+0x178/0x190
[ 8.611403] pci_device_probe+0x9a/0x130
[ 8.611403] ? sysfs_create_link+0x25/0x50
[ 8.611403] really_probe+0x12e/0x4e0
[ 8.611403] ? pm_runtime_barrier+0x113/0x150
[ 8.611403] driver_probe_device+0x77/0x1e0
[ 8.611403] ? mutex_lock+0x62/0x80
[ 8.611403] device_driver_attach+0x59/0x60
[ 8.611403] __driver_attach+0x55/0x1e0
[ 8.611403] ? device_driver_attach+0x60/0x60
[ 8.611403] bus_for_each_dev+0x67/0xa0
[ 8.611403] driver_attach+0x1e/0x20
[ 8.611403] ? device_driver_attach+0x60/0x60
[ 8.611403] bus_add_driver+0x18f/0x240
[ 8.611403] ? pci_dma_configure+0xa0/0xa0
[ 8.611403] driver_register+0x59/0x100
[ 8.611403] ? driver_register+0x8b/0x100
[ 8.611403] ? qxl_init+0x4a/0x4a
[ 8.611403] __pci_register_driver+0x33/0x40
[ 8.611403] bochs_init+0x3e/0x40
[ 8.611403] do_one_initcall+0xb0/0x1f3
[ 8.611403] ? kernel_init_freeable+0x18b/0x293
[ 8.611403] kernel_init_freeable+0x216/0x293
[ 8.611403] ? rest_init+0xc0/0xc0
[ 8.611403] kernel_init+0x10/0x110
[ 8.611403] ? schedule_tail_wrapper+0x9/0x10
[ 8.611403] ret_from_fork+0x19/0x30
[ 8.611403] CR2: 000000000000001c
[ 8.611403] ---[ end trace 23d403ec3d8d8ea8 ]---
[ 8.611403] EIP: drm_property_change_valid_get+0x14/0x220
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 9d823dfee418a12ec9afc49934463d141ca6294d 15ade5d2e7775667cf191cf2f94327a4889f8b9d --
git bisect good 8422ba73643e9d6cfc5dbca8475a3d72c7fe6a78 # 17:59 G 10 0 10 10 Merge 'linux-review/Lorenzo-Bianconi/fix-possible-use-after-free-in-erspan_v-4-6/20190407-053639' into devel-hourly-2019041212
git bisect good a82b475127ceb1ce575aeaac161fdcf5b05d3ee2 # 18:33 G 11 0 11 11 Merge 'linux-review/Sean-Christopherson/KVM-x86-clear-HF_SMM_MASK-before-loading-state/20190403-053443' into devel-hourly-2019041212
git bisect good 3490b1a0b7659f4478c21e660890620cccba21e8 # 18:43 G 10 0 10 10 Merge 'linux-review/Dan-Carpenter/NFC-pn533-potential-buffer-overflow-in-pn533_target_found_type_a/20190331-075323' into devel-hourly-2019041212
git bisect good 72eb330ece216a6977931ccc59372bb5c06dabc2 # 19:01 G 11 0 11 11 Merge 'linux-review/Andrey-Ignatov/libbpf-Ignore-Wformat-nonliteral-warning/20190407-160747' into devel-hourly-2019041212
git bisect good 6868e41f438b910038e1e7f9c47273d6013fa851 # 19:17 G 11 0 11 11 Merge 'linux-review/Jon-Maxwell/tg3-allow-ethtool-p-to-work-for-NICs-in-down-state/20190402-145923' into devel-hourly-2019041212
git bisect good 55116b4227a09f3b01cea47714f2a64a50d26d6a # 19:45 G 11 0 11 11 Merge 'linux-review/Xin-Long/net-use-rcu_dereference_protected-to-fetch-sk_dst_cache-in-sk_destruct/20190401-033023' into devel-hourly-2019041212
git bisect good a50337ee4decf9d3af4b01e191173b2bd3334e8d # 20:00 G 11 0 11 11 Merge 'linux-review/Grygorii-Strashko/net-ethernet-ti-cpsw-drop-TI_DAVINCI_CPDMA-config-option/20190330-085620' into devel-hourly-2019041212
git bisect bad 476faf8306d36f5882b58594112fe4df2a104673 # 20:15 B 0 11 25 0 Merge 'linux-review/Maxime-Ripard/drm-modes-Rewrite-the-command-line-parser/20190412-122837' into devel-hourly-2019041212
git bisect good 331f53de8a781e35927146d26e616b919738dc25 # 20:35 G 11 0 11 11 Merge 'linux-review/Grygorii-Strashko/net-ethernet-ti-davinci_mdio-switch-to-readl-writel/20190330-081905' into devel-hourly-2019041212
git bisect good 7b53ea1e7adf1713a5de4cddf3df0c63188233ac # 20:58 G 11 0 0 0 drm/modes: Allow to specify rotation and reflection on the commandline
git bisect bad 6993bfc971bbb32a70c38ec6e89a92c658f21f74 # 21:07 B 0 9 23 0 drm/selftests: Add command line parser selftests
git bisect bad c86660ba893f328e1b26b4dcb4af0169ce52a9d4 # 21:13 B 0 5 19 0 drm/modes: Parse overscan properties
# first bad commit: [c86660ba893f328e1b26b4dcb4af0169ce52a9d4] drm/modes: Parse overscan properties
git bisect good 7b53ea1e7adf1713a5de4cddf3df0c63188233ac # 21:35 G 30 0 0 0 drm/modes: Allow to specify rotation and reflection on the commandline
# extra tests with debug options
git bisect bad c86660ba893f328e1b26b4dcb4af0169ce52a9d4 # 21:45 B 0 7 21 0 drm/modes: Parse overscan properties
# extra tests on HEAD of linux-devel/devel-hourly-2019041212
git bisect bad 9d823dfee418a12ec9afc49934463d141ca6294d # 21:45 B 0 13 30 0 0day head guard for 'devel-hourly-2019041212'
# extra tests on tree/branch linux-review/Maxime-Ripard/drm-modes-Rewrite-the-command-line-parser/20190412-122837
git bisect bad 6993bfc971bbb32a70c38ec6e89a92c658f21f74 # 21:50 B 0 11 25 0 drm/selftests: Add command line parser selftests
# extra tests with first bad commit reverted
git bisect good b88dd6c0afbb168e3de7e51e59b73e076634f36b # 22:06 G 11 0 0 0 Revert "drm/modes: Parse overscan properties"
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
[locking/rwsem] f03c360396: WARNING:at_init/main.c:#start_kernel
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: f03c36039664fc53ebf6d8322c46aaf8e373f70c ("locking/rwsem: Merge owner into count on x86-64")
https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git WIP.locking/core
in testcase: trinity
with following parameters:
runtime: 300s
test-description: Trinity is a linux system call fuzz tester.
test-url: http://codemonkey.org.uk/projects/trinity/
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the below changes (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+----------------------------------------------------+------------+------------+
| | 1878939138 | f03c360396 |
+----------------------------------------------------+------------+------------+
| boot_successes | 0 | 0 |
| boot_failures | 4 | 9 |
| BUG:kernel_hang_in_boot-around-mounting-root_stage | 3 | 5 |
| BUG:kernel_reboot-without-warning_in_test_stage | 1 | |
| WARNING:at_init/main.c:#start_kernel | 0 | 9 |
| RIP:start_kernel | 0 | 9 |
+----------------------------------------------------+------------+------------+
[ 4.777899] WARNING: CPU: 0 PID: 0 at init/main.c:663 start_kernel+0x366/0x512
[ 4.777906] Modules linked in:
[ 4.777920] CPU: 0 PID: 0 Comm: swapper Not tainted 5.1.0-rc4-00083-gf03c360 #2
[ 4.777929] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 4.777943] RIP: 0010:start_kernel+0x366/0x512
[ 4.777957] Code: 01 00 e8 f2 85 00 00 e8 84 cd 01 00 e8 0e 48 02 00 e8 34 2b 8b fe 9c 58 0f ba e0 09 73 0e 48 c7 c7 e0 08 a0 99 e8 2c 91 bd fd <0f> 0b c6 05 4b c0 b9 ff 00 e8 64 d2 cb fd fb e8 c9 ca 02 00 e8 87
[ 4.777966] RSP: 0000:ffffffff9a207ed8 EFLAGS: 00010282
[ 4.777977] RAX: dffffc0000000008 RBX: ffff8881f699cb00 RCX: ffffffff9896f4d5
[ 4.777986] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffff988f0c4b
[ 4.777995] RBP: 1ffffffff3440fdb R08: fffffbfff35085ae R09: fffffbfff35085ae
[ 4.778003] R10: 0000000000000001 R11: fffffbfff35085ad R12: ffffffff9ad812e0
[ 4.778011] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[ 4.778020] FS: 0000000000000000(0000) GS:ffffffff9a2a7000(0000) knlGS:0000000000000000
[ 4.778029] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 4.778037] CR2: 00000000ffffffff CR3: 00000001e884c000 CR4: 00000000000006b0
[ 4.778046] Call Trace:
[ 4.778063] ? mem_encrypt_init+0x1/0x1
[ 4.778080] ? memcpy_orig+0x16/0x110
[ 4.778093] secondary_startup_64+0xb6/0xc0
[ 4.778116] random: get_random_bytes called from print_oops_end_marker+0x34/0x47 with crng_init=0
[ 4.778128] ---[ end trace 8182026d66b2a4ad ]---
To reproduce:
# build kernel
cd linux
cp config-5.1.0-rc4-00083-gf03c360 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 modules_prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 SHELL=/bin/bash
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
853fbf8946 ("fs: Fix ovl_i_mutex_dir_key/p->lock/cred .."): BUG: unable to handle kernel NULL pointer dereference at 00000114
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://github.com/0day-ci/linux/commits/Mina-Almasry/fs-Fix-ovl_i_mutex_...
commit 853fbf894629ed7df6b3d494bdf0dca547325188
Author: Mina Almasry <almasrymina(a)google.com>
AuthorDate: Thu Apr 11 09:47:53 2019 -0700
Commit: 0day robot <lkp(a)intel.com>
CommitDate: Fri Apr 12 08:05:21 2019 +0800
fs: Fix ovl_i_mutex_dir_key/p->lock/cred cred_guard_mutex deadlock
These 3 locks are acquired simultaneously in different orders, causing a
deadlock:
https://syzkaller.appspot.com/bug?id=00f119b8bb35a3acbcfafb9d36a2752b364e...
======================================================
WARNING: possible circular locking dependency detected
4.19.0-rc5+ #253 Not tainted
------------------------------------------------------
syz-executor1/545 is trying to acquire lock:
00000000b04209e4 (&ovl_i_mutex_dir_key[depth]){++++}, at: inode_lock_shared include/linux/fs.h:748 [inline]
00000000b04209e4 (&ovl_i_mutex_dir_key[depth]){++++}, at: do_last fs/namei.c:3323 [inline]
00000000b04209e4 (&ovl_i_mutex_dir_key[depth]){++++}, at: path_openat+0x250d/0x5160 fs/namei.c:3534
but task is already holding lock:
0000000044500cca (&sig->cred_guard_mutex){+.+.}, at: prepare_bprm_creds+0x53/0x120 fs/exec.c:1404
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #3 (&sig->cred_guard_mutex){+.+.}:
__mutex_lock_common kernel/locking/mutex.c:925 [inline]
__mutex_lock+0x166/0x1700 kernel/locking/mutex.c:1072
mutex_lock_killable_nested+0x16/0x20 kernel/locking/mutex.c:1102
lock_trace+0x4c/0xe0 fs/proc/base.c:384
proc_pid_stack+0x196/0x3b0 fs/proc/base.c:420
proc_single_show+0x101/0x190 fs/proc/base.c:723
seq_read+0x4af/0x1150 fs/seq_file.c:229
do_loop_readv_writev fs/read_write.c:700 [inline]
do_iter_read+0x4a3/0x650 fs/read_write.c:924
vfs_readv+0x175/0x1c0 fs/read_write.c:986
do_preadv+0x1cc/0x280 fs/read_write.c:1070
__do_sys_preadv fs/read_write.c:1120 [inline]
__se_sys_preadv fs/read_write.c:1115 [inline]
__x64_sys_preadv+0x9a/0xf0 fs/read_write.c:1115
do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
entry_SYSCALL_64_after_hwframe+0x49/0xbe
-> #2 (&p->lock){+.+.}:
__mutex_lock_common kernel/locking/mutex.c:925 [inline]
__mutex_lock+0x166/0x1700 kernel/locking/mutex.c:1072
mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:1087
seq_read+0x71/0x1150 fs/seq_file.c:161
do_loop_readv_writev fs/read_write.c:700 [inline]
do_iter_read+0x4a3/0x650 fs/read_write.c:924
vfs_readv+0x175/0x1c0 fs/read_write.c:986
kernel_readv fs/splice.c:362 [inline]
default_file_splice_read+0x53c/0xb20 fs/splice.c:417
do_splice_to+0x12e/0x190 fs/splice.c:881
splice_direct_to_actor+0x270/0x8f0 fs/splice.c:953
do_splice_direct+0x2d4/0x420 fs/splice.c:1062
do_sendfile+0x62a/0xe20 fs/read_write.c:1440
__do_sys_sendfile64 fs/read_write.c:1495 [inline]
__se_sys_sendfile64 fs/read_write.c:1487 [inline]
__x64_sys_sendfile64+0x15d/0x250 fs/read_write.c:1487
do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
entry_SYSCALL_64_after_hwframe+0x49/0xbe
-> #1 (sb_writers#5){.+.+}:
percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:36 [inline]
percpu_down_read include/linux/percpu-rwsem.h:59 [inline]
__sb_start_write+0x214/0x370 fs/super.c:1387
sb_start_write include/linux/fs.h:1566 [inline]
mnt_want_write+0x3f/0xc0 fs/namespace.c:360
ovl_want_write+0x76/0xa0 fs/overlayfs/util.c:24
ovl_create_object+0x142/0x3a0 fs/overlayfs/dir.c:596
ovl_create+0x2b/0x30 fs/overlayfs/dir.c:627
lookup_open+0x1319/0x1b90 fs/namei.c:3234
do_last fs/namei.c:3324 [inline]
path_openat+0x15e7/0x5160 fs/namei.c:3534
do_filp_open+0x255/0x380 fs/namei.c:3564
do_sys_open+0x568/0x700 fs/open.c:1063
ksys_open include/linux/syscalls.h:1276 [inline]
__do_sys_creat fs/open.c:1121 [inline]
__se_sys_creat fs/open.c:1119 [inline]
__x64_sys_creat+0x61/0x80 fs/open.c:1119
do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
entry_SYSCALL_64_after_hwframe+0x49/0xbe
-> #0 (&ovl_i_mutex_dir_key[depth]){++++}:
lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3900
down_read+0xb0/0x1d0 kernel/locking/rwsem.c:24
inode_lock_shared include/linux/fs.h:748 [inline]
do_last fs/namei.c:3323 [inline]
path_openat+0x250d/0x5160 fs/namei.c:3534
do_filp_open+0x255/0x380 fs/namei.c:3564
do_open_execat+0x221/0x8e0 fs/exec.c:853
__do_execve_file.isra.33+0x173f/0x2540 fs/exec.c:1755
do_execveat_common fs/exec.c:1866 [inline]
do_execve fs/exec.c:1883 [inline]
__do_sys_execve fs/exec.c:1964 [inline]
__se_sys_execve fs/exec.c:1959 [inline]
__x64_sys_execve+0x8f/0xc0 fs/exec.c:1959
do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
entry_SYSCALL_64_after_hwframe+0x49/0xbe
other info that might help us debug this:
Chain exists of:
&ovl_i_mutex_dir_key[depth] --> &p->lock --> &sig->cred_guard_mutex
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(&sig->cred_guard_mutex);
                               lock(&p->lock);
                               lock(&sig->cred_guard_mutex);
  lock(&ovl_i_mutex_dir_key[depth]);
*** DEADLOCK ***
Solution: I establish this locking order for these locks:
1. ovl_i_mutex_dir_key
2. p->lock
3. sig->cred_guard_mutex
In this change I fix the locking order in exec.c, which is the only
instance that violates this order.
Signed-off-by: Mina Almasry <almasrymina(a)google.com>
582549e3fb Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma
853fbf8946 fs: Fix ovl_i_mutex_dir_key/p->lock/cred cred_guard_mutex deadlock
+-------------------------------------------------+------------+------------+
| | 582549e3fb | 853fbf8946 |
+-------------------------------------------------+------------+------------+
| boot_successes | 35 | 0 |
| boot_failures | 4 | 11 |
| BUG:kernel_reboot-without-warning_in_test_stage | 4 | |
| BUG:unable_to_handle_kernel | 0 | 11 |
| Oops:#[##] | 0 | 11 |
| EIP:free_bprm | 0 | 11 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 11 |
+-------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <lkp(a)intel.com>
[ 11.779780] Initialise system trusted keyrings
[ 11.802037] Key type blacklist registered
[ 11.823781] workingset: timestamp_bits=14 max_order=17 bucket_order=3
[ 12.213259] ntfs: driver 2.1.32 [Flags: R/W].
[ 12.255175] JFS: nTxBlock = 3910, nTxLock = 31283
[ 12.299632] BUG: unable to handle kernel NULL pointer dereference at 00000114
[ 12.302441] #PF error: [normal kernel read fault]
[ 12.302441] *pde = 00000000
[ 12.302441] Oops: 0000 [#1]
[ 12.302441] CPU: 0 PID: 29 Comm: kworker/u2:0 Not tainted 5.1.0-rc4-00059-g853fbf8 #1
[ 12.302441] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 12.302441] EIP: free_bprm+0x6/0x6a
[ 12.302441] Code: 75 02 eb 0a 8b 3f 81 ff e4 dd 93 c1 75 85 b8 c0 dd 93 c1 e8 55 a7 43 00 eb 00 8d 65 f4 89 f0 5b 5e 5f 5d c3 55 89 e5 53 89 c3 <83> b8 20 01 00 00 00 74 20 a1 54 7c 7e c1 8b 80 18 04 00 00 05 b8
[ 12.302441] EAX: fffffff4 EBX: fffffff4 ECX: dd42e640 EDX: dd431ecc
[ 12.302441] ESI: dd96e120 EDI: fffffff4 EBP: dd431f44 ESP: dd431f40
[ 12.302441] DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 0068 EFLAGS: 00210202
[ 12.302441] CR0: 80050033 CR2: 00000114 CR3: 01a19000 CR4: 000006d0
[ 12.302441] Call Trace:
[ 12.302441] __do_execve_file+0x5f9/0x651
[ 12.302441] ? kmem_cache_alloc+0x9f/0x106
[ 12.302441] do_execve+0x16/0x18
[ 12.302441] call_usermodehelper_exec_async+0x108/0x12c
[ 12.302441] ? umh_complete+0x1c/0x1c
[ 12.302441] ret_from_fork+0x2e/0x38
[ 12.302441] Modules linked in:
[ 12.302441] CR2: 0000000000000114
[ 12.302441] ---[ end trace 311785ead50a43d0 ]---
[ 12.302441] EIP: free_bprm+0x6/0x6a
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 9052aea5db91b50caf30f7697af22f7bc37e8573 15ade5d2e7775667cf191cf2f94327a4889f8b9d --
git bisect bad c18c1b00c878798532237abe16623a5f48b8cc08 # 11:14 B 0 11 26 0 Merge 'ext4/unicode' into devel-hourly-2019041208
git bisect bad 842090795402a3f48219bd161e04f983d384c957 # 11:39 B 0 11 26 0 Merge 'linux-review/Nicolas-Dichtel/xfrm-update-doc-about-xfrm-46-_gc_thresh/20190410-075445' into devel-hourly-2019041208
git bisect bad 5940a630034d0a37900039dbbe9cc0dc79276d0f # 11:45 B 0 10 36 11 Merge 'ipmi/for-next' into devel-hourly-2019041208
git bisect bad 5c53268527db7ccca97d35edb703c04b16e35ce6 # 11:55 B 0 11 37 11 Merge 'pinctrl-intel/review-andy' into devel-hourly-2019041208
git bisect good 512ac675753e33e639d89fc1d87915b695719664 # 12:14 G 11 0 0 0 Merge 'tegra/for-5.2/memory' into devel-hourly-2019041208
git bisect bad a569f122acb67388f4d29d73280e2b7bac1f05bf # 12:28 B 0 11 36 10 Merge 'pci/for-linus' into devel-hourly-2019041208
git bisect good 9317887598c3abcef953f087453762928078901e # 12:41 G 10 0 0 1 Merge 'vireshk-pm/cpufreq/arm/linux-next' into devel-hourly-2019041208
git bisect bad 736351a76bb458999b811380697e8af6544abbd3 # 12:57 B 0 3 22 4 Merge 'linux-review/Mina-Almasry/fs-Fix-ovl_i_mutex_dir_key-p-lock-cred-cred_guard_mutex-deadlock/20190412-080519' into devel-hourly-2019041208
git bisect good 4359b1faec1cc6d8e3407016fdc0ff6806d81b78 # 13:33 G 11 0 0 0 Merge 'linux-review/Alexandru-Ardelean/net-xilinx-emaclite-add-minimal-ethtool-ops/20190408-202115' into devel-hourly-2019041208
git bisect good 869e3305f23dfeacdaa234717c92ccb237815d90 # 13:36 G 10 0 0 0 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
git bisect good ed79cc87302bf7fbc87f05d655b998f866b4fed8 # 13:46 G 10 0 0 0 Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost
git bisect good ea7a5c706fa49273cf6d1d9def053ecb50db2076 # 13:56 G 11 0 0 3 RDMA/vmw_pvrdma: Fix memory leak on pvrdma_pci_remove
git bisect good d737b25b1ae0540ba13cbd45ebb9b58a1d6d7f0d # 14:23 G 10 0 0 0 IB/hfi1: Do not flush send queue in the TID RDMA second leg
git bisect bad 853fbf894629ed7df6b3d494bdf0dca547325188 # 14:35 B 0 11 26 0 fs: Fix ovl_i_mutex_dir_key/p->lock/cred cred_guard_mutex deadlock
git bisect good 582549e3fbe137eb6ce9be591aca25c2222a36b4 # 14:38 G 10 0 0 4 Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma
# first bad commit: [853fbf894629ed7df6b3d494bdf0dca547325188] fs: Fix ovl_i_mutex_dir_key/p->lock/cred cred_guard_mutex deadlock
git bisect good 582549e3fbe137eb6ce9be591aca25c2222a36b4 # 14:42 G 31 0 0 4 Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma
# extra tests with debug options
git bisect bad 853fbf894629ed7df6b3d494bdf0dca547325188 # 14:47 B 0 11 26 0 fs: Fix ovl_i_mutex_dir_key/p->lock/cred cred_guard_mutex deadlock
# extra tests on HEAD of linux-devel/devel-hourly-2019041208
git bisect bad 9052aea5db91b50caf30f7697af22f7bc37e8573 # 14:47 B 0 13 31 0 0day head guard for 'devel-hourly-2019041208'
# extra tests on tree/branch linux-review/Mina-Almasry/fs-Fix-ovl_i_mutex_dir_key-p-lock-cred-cred_guard_mutex-deadlock/20190412-080519
git bisect bad 853fbf894629ed7df6b3d494bdf0dca547325188 # 14:48 B 0 11 26 0 fs: Fix ovl_i_mutex_dir_key/p->lock/cred cred_guard_mutex deadlock
# extra tests with first bad commit reverted
git bisect good 13f65552b2b05816859deb13ec2584f4bf49ab3e # 14:59 G 10 0 0 0 Revert "fs: Fix ovl_i_mutex_dir_key/p->lock/cred cred_guard_mutex deadlock"
4f4fd7c579: mdadm-selftests.10ddf-fail-two-spares.fail
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 4f4fd7c5798bbdd5a03a60f6269cf1177fbd11ef ("Don't jump to compute_result state from check_result state")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: mdadm-selftests
with following parameters:
disk: 1HDD
test_prefix: 10
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 4G
caused the below changes (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+-------------------------------------------------+------------+------------+
| | 81ba6abd2b | 4f4fd7c579 |
+-------------------------------------------------+------------+------------+
| boot_successes | 1 | 0 |
| boot_failures | 3 | 4 |
| BUG:kernel_reboot-without-warning_in_test_stage | 3 | 4 |
| mdadm-selftests.10ddf-fail-two-spares.fail | 0 | 4 |
+-------------------------------------------------+------------+------------+
[ 165.841336 ] Testing on linux-5.1.0-rc3-00023-g4f4fd7c kernel
[ 165.841339 ]
[ 167.302289 ] md/raid:md126: not clean -- starting background reconstruction
[ 167.304321 ] md/raid:md126: device loop13 operational as raid disk 3
[ 167.306055 ] md/raid:md126: device loop12 operational as raid disk 2
[ 167.308044 ] md/raid:md126: device loop11 operational as raid disk 1
[ 167.309522 ] md/raid:md126: device loop10 operational as raid disk 0
[ 167.320733 ] md/raid:md126: raid level 6 active with 4 out of 4 devices, algorithm 10
[ 167.327158 ] md126: detected capacity change from 0 to 33554432
[ 167.409736 ] md: resync of RAID array md126
[ 167.561148 ] md/raid10:md125: not clean -- starting background reconstruction
[ 167.563142 ] md/raid10:md125: active with 4 out of 4 devices
[ 167.568822 ] md125: detected capacity change from 0 to 33554432
[ 167.588867 ] md: delaying resync of md125 until md126 has finished (they share one or more physical units)
[ 168.171556 ] md: md126: resync done.
[ 168.202995 ] md: resync of RAID array md125
[ 168.438884 ] md: md125: resync done.
[ 168.998162 ] md/raid10:md125: Disk failure on loop11, disabling device.
[ 168.998162 ] md/raid10:md125: Operation continuing on 3 devices.
[ 169.044560 ] md/raid:md126: Disk failure on loop11, disabling device.
[ 169.044560 ] md/raid:md126: Operation continuing on 3 devices.
[ 169.104982 ] md: recovery of RAID array md125
[ 169.129051 ] md: delaying recovery of md126 until md125 has finished (they share one or more physical units)
[ 170.016071 ] md/raid10:md125: Disk failure on loop12, disabling device.
[ 170.016071 ] md/raid10:md125: Operation continuing on 2 devices.
[ 170.031796 ] md/raid:md126: Disk failure on loop12, disabling device.
[ 170.031796 ] md/raid:md126: Operation continuing on 2 devices.
[ 170.123425 ] md: md125: recovery interrupted.
[ 170.127705 ] md: recovery of RAID array md126
[ 170.132003 ] md: delaying recovery of md125 until md126 has finished (they share one or more physical units)
[ 177.280292 ] md: md126: recovery done.
[ 177.286369 ] md: recovery of RAID array md125
[ 177.304563 ] md: delaying recovery of md126 until md125 has finished (they share one or more physical units)
[ 183.347549 ] md: md125: recovery done.
[ 183.350452 ] md: recovery of RAID array md126
[ 183.368340 ] md: delaying recovery of md125 until md126 has finished (they share one or more physical units)
[ 190.512654 ] md: md126: recovery done.
[ 190.519633 ] md: recovery of RAID array md125
[ 197.641803 ] md: md125: recovery done.
[ 198.039343 ] tests/10ddf-fail-two-spares... FAILED - see /var/tmp/log for details
To reproduce:
# build kernel
cd linux
cp config-5.1.0-rc3-00023-g4f4fd7c .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 modules_prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 SHELL=/bin/bash
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
Thanks,
Rong Chen
[crypto] 71052dcf4b: will-it-scale.per_thread_ops 59.5% improvement
by kernel test robot
Greetings,
FYI, we noticed a 59.5% improvement of will-it-scale.per_thread_ops due to commit:
commit: 71052dcf4be70be4077817297dcde7b155e745f2 ("crypto: scompress - Use per-CPU struct instead multiple variables")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: will-it-scale
on test machine: 288 threads Intel(R) Xeon Phi(TM) CPU 7295 @ 1.50GHz with 80G memory
with following parameters:
nr_task: 100%
mode: thread
test: signal1
cpufreq_governor: performance
test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see if the testcase will scale. It builds both a process and threads based test in order to see any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale
In addition to that, the commit also has significant impact on the following tests:
+------------------+----------------------------------------------------------------------+
| testcase: change | will-it-scale: will-it-scale.per_process_ops 26.3% improvement |
| test machine | 288 threads Intel(R) Xeon Phi(TM) CPU 7295 @ 1.50GHz with 80G memory |
| test parameters | cpufreq_governor=performance |
| | mode=process |
| | nr_task=100% |
| | test=malloc1 |
+------------------+----------------------------------------------------------------------+
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase:
gcc-7/performance/x86_64-rhel-7.6/thread/100%/debian-x86_64-2018-04-03.cgz/lkp-knm01/signal1/will-it-scale
commit:
6a4d1b18ef ("crypto: scompress - return proper error code for allocation failure")
71052dcf4b ("crypto: scompress - Use per-CPU struct instead multiple variables")
6a4d1b18ef00a7b1 71052dcf4be70be4077817297dc
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
3:4 -75% :4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
%stddev %change %stddev
\ | \
211.75 +59.5% 337.75 will-it-scale.per_thread_ops
144571 +2.2% 147795 will-it-scale.time.involuntary_context_switches
96.36 ± 11% +37.6% 132.63 ± 14% will-it-scale.time.user_time
61168 +59.2% 97391 will-it-scale.workload
0.20 ± 11% +0.1 0.26 ± 11% mpstat.cpu.all.usr%
16.48 -0.1 16.43 perf-profile.calltrace.cycles-pp.get_signal.do_signal.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
2312274 ± 90% -74.0% 601465 ±109% cpuidle.C1.usage
15767 ± 95% -75.8% 3822 ± 53% cpuidle.POLL.time
2280094 ± 91% -74.7% 576421 ±114% turbostat.C1
2.11 -23.3% 1.62 turbostat.RAMWatt
58.02 ± 3% -19.7% 46.57 ± 7% sched_debug.cfs_rq:/.util_est_enqueued.stddev
730.45 ± 8% +35.7% 991.09 ± 4% sched_debug.cpu.clock.stddev
730.45 ± 8% +35.7% 991.09 ± 4% sched_debug.cpu.clock_task.stddev
0.00 ± 8% +35.4% 0.00 ± 4% sched_debug.cpu.next_balance.stddev
94577 +1.2% 95672 proc-vmstat.nr_active_anon
5629 -1.5% 5542 proc-vmstat.nr_inactive_anon
35563 +3.0% 36627 proc-vmstat.nr_shmem
94577 +1.2% 95672 proc-vmstat.nr_zone_active_anon
5629 -1.5% 5542 proc-vmstat.nr_zone_inactive_anon
41301847 ± 2% +16.3% 48050918 ± 2% perf-stat.i.branch-misses
21.16 -0.6 20.54 ± 2% perf-stat.i.cache-miss-rate%
34021212 +11.2% 37847523 perf-stat.i.cache-misses
1.607e+08 +15.3% 1.852e+08 perf-stat.i.cache-references
13007 ± 2% -8.4% 11917 perf-stat.i.cycles-between-cache-misses
29379452 ± 2% +19.7% 35181605 ± 5% perf-stat.i.iTLB-load-misses
1258 ± 2% -17.5% 1038 ± 4% perf-stat.i.instructions-per-iTLB-miss
4.43 +15.4% 5.11 ± 2% perf-stat.overall.MPKI
0.46 ± 2% +0.1 0.54 perf-stat.overall.branch-miss-rate%
13156 -9.3% 11934 perf-stat.overall.cycles-between-cache-misses
0.08 ± 2% +0.0 0.10 ± 5% perf-stat.overall.iTLB-load-miss-rate%
1256 ± 2% -17.4% 1037 ± 5% perf-stat.overall.instructions-per-iTLB-miss
0.08 -0.7% 0.08 perf-stat.overall.ipc
1.765e+08 -37.3% 1.107e+08 perf-stat.overall.path-length
40843912 ± 2% +16.6% 47633548 ± 2% perf-stat.ps.branch-misses
33816266 +10.8% 37471929 perf-stat.ps.cache-misses
1.596e+08 +15.2% 1.839e+08 perf-stat.ps.cache-references
28713208 +21.2% 34807093 ± 5% perf-stat.ps.iTLB-load-misses
1471021 +5.2% 1547236 interrupts.CAL:Function_call_interrupts
7736 -37.5% 4833 ± 34% interrupts.CPU101.NMI:Non-maskable_interrupts
7736 -37.5% 4833 ± 34% interrupts.CPU101.PMI:Performance_monitoring_interrupts
3843 +76.7% 6791 ± 24% interrupts.CPU11.NMI:Non-maskable_interrupts
3843 +76.7% 6791 ± 24% interrupts.CPU11.PMI:Performance_monitoring_interrupts
4836 ± 34% +40.7% 6804 ± 24% interrupts.CPU121.NMI:Non-maskable_interrupts
4836 ± 34% +40.7% 6804 ± 24% interrupts.CPU121.PMI:Performance_monitoring_interrupts
10.25 ± 65% +1302.4% 143.75 ±121% interrupts.CPU13.RES:Rescheduling_interrupts
4846 ± 34% +59.3% 7718 interrupts.CPU159.NMI:Non-maskable_interrupts
4846 ± 34% +59.3% 7718 interrupts.CPU159.PMI:Performance_monitoring_interrupts
3826 +77.2% 6780 ± 25% interrupts.CPU164.NMI:Non-maskable_interrupts
3826 +77.2% 6780 ± 25% interrupts.CPU164.PMI:Performance_monitoring_interrupts
5793 ± 32% -33.4% 3860 interrupts.CPU170.NMI:Non-maskable_interrupts
5793 ± 32% -33.4% 3860 interrupts.CPU170.PMI:Performance_monitoring_interrupts
47.00 ±163% -97.3% 1.25 ± 34% interrupts.CPU182.RES:Rescheduling_interrupts
268.50 ±166% -99.2% 2.25 ± 19% interrupts.CPU190.RES:Rescheduling_interrupts
5833 ± 32% -35.0% 3794 interrupts.CPU198.NMI:Non-maskable_interrupts
5833 ± 32% -35.0% 3794 interrupts.CPU198.PMI:Performance_monitoring_interrupts
3869 +52.5% 5899 ± 32% interrupts.CPU20.NMI:Non-maskable_interrupts
3869 +52.5% 5899 ± 32% interrupts.CPU20.PMI:Performance_monitoring_interrupts
4775 ± 34% +62.3% 7748 interrupts.CPU210.NMI:Non-maskable_interrupts
4775 ± 34% +62.3% 7748 interrupts.CPU210.PMI:Performance_monitoring_interrupts
6790 ± 24% -43.6% 3833 interrupts.CPU212.NMI:Non-maskable_interrupts
6790 ± 24% -43.6% 3833 interrupts.CPU212.PMI:Performance_monitoring_interrupts
4817 ± 34% +40.4% 6763 ± 24% interrupts.CPU216.NMI:Non-maskable_interrupts
4817 ± 34% +40.4% 6763 ± 24% interrupts.CPU216.PMI:Performance_monitoring_interrupts
6800 ± 24% -28.8% 4844 ± 34% interrupts.CPU22.NMI:Non-maskable_interrupts
6800 ± 24% -28.8% 4844 ± 34% interrupts.CPU22.PMI:Performance_monitoring_interrupts
7698 -49.7% 3874 interrupts.CPU221.NMI:Non-maskable_interrupts
7698 -49.7% 3874 interrupts.CPU221.PMI:Performance_monitoring_interrupts
3848 +75.6% 6756 ± 24% interrupts.CPU223.NMI:Non-maskable_interrupts
3848 +75.6% 6756 ± 24% interrupts.CPU223.PMI:Performance_monitoring_interrupts
3814 +51.8% 5791 ± 32% interrupts.CPU226.NMI:Non-maskable_interrupts
3814 +51.8% 5791 ± 32% interrupts.CPU226.PMI:Performance_monitoring_interrupts
6774 ± 24% -28.2% 4864 ± 33% interrupts.CPU238.NMI:Non-maskable_interrupts
6774 ± 24% -28.2% 4864 ± 33% interrupts.CPU238.PMI:Performance_monitoring_interrupts
3848 +51.1% 5815 ± 33% interrupts.CPU254.NMI:Non-maskable_interrupts
3848 +51.1% 5815 ± 33% interrupts.CPU254.PMI:Performance_monitoring_interrupts
3841 +51.4% 5817 ± 33% interrupts.CPU258.NMI:Non-maskable_interrupts
3841 +51.4% 5817 ± 33% interrupts.CPU258.PMI:Performance_monitoring_interrupts
3864 +98.9% 7687 interrupts.CPU261.NMI:Non-maskable_interrupts
3864 +98.9% 7687 interrupts.CPU261.PMI:Performance_monitoring_interrupts
55.75 ±114% -96.9% 1.75 ± 47% interrupts.CPU264.RES:Rescheduling_interrupts
3812 +52.0% 5795 ± 33% interrupts.CPU284.NMI:Non-maskable_interrupts
3812 +52.0% 5795 ± 33% interrupts.CPU284.PMI:Performance_monitoring_interrupts
3836 +76.1% 6755 ± 23% interrupts.CPU286.NMI:Non-maskable_interrupts
3836 +76.1% 6755 ± 23% interrupts.CPU286.PMI:Performance_monitoring_interrupts
365.50 ±169% -98.8% 4.50 ±109% interrupts.CPU47.RES:Rescheduling_interrupts
79.00 ±168% -98.7% 1.00 interrupts.CPU49.RES:Rescheduling_interrupts
3867 +76.1% 6811 ± 24% interrupts.CPU52.NMI:Non-maskable_interrupts
3867 +76.1% 6811 ± 24% interrupts.CPU52.PMI:Performance_monitoring_interrupts
3866 +50.4% 5814 ± 33% interrupts.CPU57.NMI:Non-maskable_interrupts
3866 +50.4% 5814 ± 33% interrupts.CPU57.PMI:Performance_monitoring_interrupts
3805 +78.6% 6794 ± 24% interrupts.CPU60.NMI:Non-maskable_interrupts
3805 +78.6% 6794 ± 24% interrupts.CPU60.PMI:Performance_monitoring_interrupts
3895 +74.3% 6791 ± 24% interrupts.CPU78.NMI:Non-maskable_interrupts
3895 +74.3% 6791 ± 24% interrupts.CPU78.PMI:Performance_monitoring_interrupts
130579 +12.2% 146530 softirqs.CPU0.TIMER
129305 +12.8% 145873 ± 2% softirqs.CPU1.TIMER
129656 +12.6% 145982 softirqs.CPU10.TIMER
129973 +12.7% 146504 softirqs.CPU100.TIMER
130116 +12.2% 145948 softirqs.CPU101.TIMER
130029 +12.8% 146617 softirqs.CPU102.TIMER
129851 +12.8% 146441 softirqs.CPU103.TIMER
129723 +13.2% 146808 softirqs.CPU104.TIMER
130422 +12.1% 146173 softirqs.CPU106.TIMER
129536 +12.7% 146002 softirqs.CPU107.TIMER
132836 ± 5% +10.6% 146911 softirqs.CPU108.TIMER
129074 +13.2% 146139 softirqs.CPU109.TIMER
129335 +12.9% 146023 softirqs.CPU11.TIMER
129682 +12.7% 146174 softirqs.CPU110.TIMER
129383 +13.3% 146635 softirqs.CPU111.TIMER
129902 +12.7% 146428 softirqs.CPU112.TIMER
129745 +15.6% 150015 ± 4% softirqs.CPU113.TIMER
129647 +12.8% 146194 softirqs.CPU114.TIMER
129484 +12.5% 145634 softirqs.CPU115.TIMER
129898 +12.4% 145958 softirqs.CPU116.TIMER
130282 +12.3% 146327 softirqs.CPU117.TIMER
130010 +12.4% 146181 softirqs.CPU118.TIMER
130185 +12.1% 145973 softirqs.CPU119.TIMER
129606 +13.2% 146682 softirqs.CPU12.TIMER
129815 +12.8% 146440 softirqs.CPU120.TIMER
129801 +12.8% 146372 softirqs.CPU121.TIMER
130178 +12.1% 145949 softirqs.CPU122.TIMER
129362 +12.7% 145732 softirqs.CPU123.TIMER
129775 +13.2% 146861 softirqs.CPU124.TIMER
129808 +12.5% 145999 softirqs.CPU125.TIMER
130012 +11.1% 144470 softirqs.CPU126.TIMER
129918 +11.7% 145173 softirqs.CPU127.TIMER
130078 +12.1% 145875 softirqs.CPU128.TIMER
129305 +12.9% 146033 softirqs.CPU129.TIMER
130188 +12.2% 146023 softirqs.CPU13.TIMER
129812 +12.6% 146177 softirqs.CPU130.TIMER
132495 ± 4% +10.1% 145843 softirqs.CPU131.TIMER
129788 +12.6% 146104 softirqs.CPU132.TIMER
129588 +12.6% 145910 softirqs.CPU133.TIMER
129398 +12.7% 145821 softirqs.CPU134.TIMER
129138 +13.1% 146093 softirqs.CPU135.TIMER
129924 +11.9% 145438 softirqs.CPU136.TIMER
129628 +12.3% 145575 softirqs.CPU137.TIMER
129465 +12.9% 146130 softirqs.CPU138.TIMER
129265 +12.6% 145498 softirqs.CPU139.TIMER
129194 +13.5% 146688 softirqs.CPU14.TIMER
129749 +12.5% 145990 softirqs.CPU140.TIMER
129689 +12.1% 145381 softirqs.CPU141.TIMER
130822 +11.1% 145339 softirqs.CPU142.TIMER
129792 +11.4% 144575 softirqs.CPU143.TIMER
132249 ± 2% +10.1% 145572 softirqs.CPU144.TIMER
129142 +11.9% 144523 softirqs.CPU145.TIMER
130939 +11.5% 145962 softirqs.CPU146.TIMER
128899 +12.9% 145519 softirqs.CPU147.TIMER
129580 +12.5% 145729 softirqs.CPU148.TIMER
128830 +12.8% 145371 softirqs.CPU149.TIMER
130201 +14.2% 148716 ± 4% softirqs.CPU15.TIMER
129200 +13.0% 145943 softirqs.CPU150.TIMER
129164 +12.1% 144848 softirqs.CPU151.TIMER
129532 +12.4% 145642 softirqs.CPU152.TIMER
129501 +12.7% 145934 softirqs.CPU153.TIMER
129557 +12.7% 146061 softirqs.CPU154.TIMER
129320 +12.3% 145283 softirqs.CPU155.TIMER
129702 +12.3% 145705 softirqs.CPU156.TIMER
129463 +12.3% 145443 softirqs.CPU157.TIMER
129414 +12.9% 146093 softirqs.CPU158.TIMER
129346 +12.7% 145807 softirqs.CPU159.TIMER
129502 +12.8% 146023 softirqs.CPU16.TIMER
129332 +12.7% 145782 softirqs.CPU160.TIMER
129379 +13.1% 146374 softirqs.CPU161.TIMER
129815 +12.2% 145613 softirqs.CPU162.TIMER
128951 +13.0% 145710 softirqs.CPU163.TIMER
129475 +12.3% 145451 softirqs.CPU164.TIMER
129115 +13.0% 145929 softirqs.CPU165.TIMER
129377 +12.6% 145737 softirqs.CPU166.TIMER
129568 +12.4% 145656 softirqs.CPU167.TIMER
129673 +11.9% 145066 softirqs.CPU168.TIMER
129273 +12.5% 145480 softirqs.CPU169.TIMER
129428 +13.1% 146418 softirqs.CPU17.TIMER
129171 +12.7% 145627 softirqs.CPU170.TIMER
128630 +13.1% 145457 softirqs.CPU171.TIMER
129679 +12.9% 146347 softirqs.CPU172.TIMER
128767 +12.9% 145358 softirqs.CPU173.TIMER
129149 +13.1% 146083 softirqs.CPU174.TIMER
128986 +12.6% 145257 softirqs.CPU175.TIMER
129710 +12.8% 146322 softirqs.CPU176.TIMER
129180 +13.2% 146183 softirqs.CPU177.TIMER
129185 +13.5% 146610 softirqs.CPU178.TIMER
129411 +11.9% 144798 softirqs.CPU179.TIMER
129570 +13.2% 146629 softirqs.CPU18.TIMER
128724 +13.2% 145719 softirqs.CPU180.TIMER
128915 +13.4% 146179 softirqs.CPU181.TIMER
129599 +12.4% 145717 softirqs.CPU182.TIMER
129469 +12.2% 145312 softirqs.CPU183.TIMER
129460 +12.1% 145095 softirqs.CPU184.TIMER
129327 +11.5% 144254 ± 2% softirqs.CPU185.TIMER
129699 +12.2% 145566 softirqs.CPU186.TIMER
129153 +12.6% 145482 softirqs.CPU187.TIMER
129418 +12.4% 145407 softirqs.CPU188.TIMER
129144 +12.7% 145589 softirqs.CPU189.TIMER
129256 +12.6% 145507 softirqs.CPU19.TIMER
129055 +12.8% 145615 softirqs.CPU190.TIMER
129323 +12.8% 145837 softirqs.CPU191.TIMER
129541 +12.5% 145735 softirqs.CPU192.TIMER
129533 +12.3% 145524 softirqs.CPU193.TIMER
129178 +12.8% 145774 softirqs.CPU194.TIMER
128952 +12.4% 144943 softirqs.CPU195.TIMER
129146 +12.1% 144811 softirqs.CPU196.TIMER
128709 +12.9% 145249 softirqs.CPU197.TIMER
129463 +11.7% 144555 softirqs.CPU198.TIMER
129454 ± 2% +12.0% 145017 softirqs.CPU199.TIMER
130925 ± 2% +10.9% 145137 softirqs.CPU2.TIMER
129506 +12.4% 145629 softirqs.CPU20.TIMER
129327 +12.3% 145191 softirqs.CPU200.TIMER
128733 +12.6% 144998 softirqs.CPU201.TIMER
129210 +12.6% 145536 softirqs.CPU202.TIMER
128933 +12.1% 144551 softirqs.CPU203.TIMER
129584 +12.0% 145199 softirqs.CPU204.TIMER
128887 +12.8% 145431 softirqs.CPU205.TIMER
128595 +12.6% 144822 softirqs.CPU206.TIMER
128725 +12.4% 144674 softirqs.CPU207.TIMER
128957 +12.7% 145359 softirqs.CPU208.TIMER
128658 +13.1% 145493 softirqs.CPU209.TIMER
129376 +12.7% 145809 softirqs.CPU21.TIMER
129032 +12.5% 145139 softirqs.CPU210.TIMER
128496 +12.9% 145078 softirqs.CPU211.TIMER
128946 +14.9% 148096 ± 4% softirqs.CPU212.TIMER
128669 +15.1% 148061 ± 3% softirqs.CPU213.TIMER
129293 +11.9% 144730 softirqs.CPU214.TIMER
128832 +11.7% 143860 softirqs.CPU215.TIMER
128161 +13.6% 145642 softirqs.CPU216.TIMER
128166 +12.5% 144175 softirqs.CPU217.TIMER
129243 +12.3% 145112 softirqs.CPU218.TIMER
128892 +13.0% 145697 softirqs.CPU219.TIMER
129339 +13.1% 146296 softirqs.CPU22.TIMER
129239 +12.1% 144858 softirqs.CPU220.TIMER
128751 +12.7% 145040 softirqs.CPU221.TIMER
128496 +12.9% 145071 softirqs.CPU222.TIMER
128408 +13.0% 145061 softirqs.CPU223.TIMER
129023 +12.7% 145431 softirqs.CPU224.TIMER
129200 +12.9% 145824 softirqs.CPU225.TIMER
128658 +12.9% 145314 softirqs.CPU226.TIMER
128581 +12.8% 145080 softirqs.CPU227.TIMER
128959 +12.4% 144981 softirqs.CPU228.TIMER
128760 +12.9% 145359 softirqs.CPU229.TIMER
130169 +12.2% 146007 softirqs.CPU23.TIMER
128855 +12.5% 144977 softirqs.CPU230.TIMER
128140 +13.1% 144937 softirqs.CPU231.TIMER
128772 +12.6% 144942 softirqs.CPU232.TIMER
128750 +13.0% 145443 softirqs.CPU233.TIMER
128567 +13.0% 145274 softirqs.CPU234.TIMER
128442 +13.2% 145350 softirqs.CPU235.TIMER
128661 +12.6% 144843 softirqs.CPU236.TIMER
128471 +12.7% 144773 softirqs.CPU237.TIMER
128198 +13.2% 145117 softirqs.CPU238.TIMER
131515 ± 4% +10.3% 145072 softirqs.CPU239.TIMER
129458 +14.0% 147638 ± 4% softirqs.CPU24.TIMER
128926 +12.1% 144540 softirqs.CPU240.TIMER
128797 +12.4% 144775 softirqs.CPU241.TIMER
128442 +13.1% 145300 softirqs.CPU242.TIMER
128548 +13.5% 145844 softirqs.CPU243.TIMER
128906 +12.9% 145546 softirqs.CPU244.TIMER
128510 +12.7% 144774 softirqs.CPU245.TIMER
128615 +13.0% 145313 softirqs.CPU246.TIMER
128540 +13.2% 145480 softirqs.CPU247.TIMER
128445 +13.1% 145333 softirqs.CPU248.TIMER
127260 +14.2% 145315 softirqs.CPU249.TIMER
130145 +11.9% 145569 softirqs.CPU25.TIMER
128726 +12.7% 145030 softirqs.CPU250.TIMER
128132 +13.3% 145165 softirqs.CPU251.TIMER
128137 +13.5% 145424 softirqs.CPU252.TIMER
128223 +12.8% 144586 softirqs.CPU253.TIMER
128833 +12.9% 145421 softirqs.CPU254.TIMER
128439 +12.7% 144753 softirqs.CPU255.TIMER
128587 +12.0% 143988 softirqs.CPU256.TIMER
128416 +18.0% 151563 ± 7% softirqs.CPU257.TIMER
128549 +13.2% 145523 softirqs.CPU258.TIMER
128456 +12.5% 144478 softirqs.CPU259.TIMER
129294 +13.2% 146314 softirqs.CPU26.TIMER
128677 +12.8% 145200 softirqs.CPU260.TIMER
129165 +12.1% 144829 softirqs.CPU261.TIMER
128597 +13.1% 145385 softirqs.CPU262.TIMER
128572 +13.0% 145298 softirqs.CPU263.TIMER
128729 +12.6% 145012 softirqs.CPU264.TIMER
128656 +12.6% 144902 softirqs.CPU265.TIMER
128688 +12.8% 145216 softirqs.CPU266.TIMER
128222 +12.6% 144428 softirqs.CPU267.TIMER
128316 +13.2% 145235 softirqs.CPU268.TIMER
128376 +12.9% 144961 softirqs.CPU269.TIMER
129280 +12.9% 145915 softirqs.CPU27.TIMER
128630 ± 2% +11.3% 143179 softirqs.CPU270.TIMER
129007 +11.8% 144177 softirqs.CPU271.TIMER
127814 +12.9% 144362 softirqs.CPU272.TIMER
127860 +13.2% 144798 softirqs.CPU273.TIMER
127459 +12.9% 143962 softirqs.CPU274.TIMER
126856 +13.6% 144136 softirqs.CPU275.TIMER
127923 +12.5% 143920 softirqs.CPU276.TIMER
127645 +13.1% 144382 softirqs.CPU277.TIMER
127237 +13.5% 144373 softirqs.CPU278.TIMER
127758 +12.7% 143933 softirqs.CPU279.TIMER
129558 +12.7% 146035 softirqs.CPU28.TIMER
127876 +12.5% 143909 softirqs.CPU280.TIMER
127869 +12.7% 144058 softirqs.CPU281.TIMER
127603 +12.3% 143331 softirqs.CPU282.TIMER
127615 +12.5% 143517 softirqs.CPU283.TIMER
127592 +19.0% 151841 ± 10% softirqs.CPU284.TIMER
127604 +12.7% 143805 softirqs.CPU287.TIMER
129893 +12.2% 145727 softirqs.CPU29.TIMER
129408 +12.0% 144986 softirqs.CPU3.TIMER
129250 +12.9% 145873 softirqs.CPU30.TIMER
129703 +12.5% 145923 softirqs.CPU31.TIMER
129609 +13.0% 146422 softirqs.CPU32.TIMER
129390 +13.2% 146472 softirqs.CPU33.TIMER
129562 +12.2% 145344 softirqs.CPU34.TIMER
128950 +13.1% 145898 softirqs.CPU35.TIMER
128787 +13.3% 145950 softirqs.CPU36.TIMER
129130 +12.7% 145513 softirqs.CPU37.TIMER
129191 +13.3% 146389 softirqs.CPU38.TIMER
129298 +12.7% 145774 softirqs.CPU39.TIMER
130067 +13.7% 147949 ± 3% softirqs.CPU4.TIMER
129669 +12.4% 145804 softirqs.CPU40.TIMER
129516 +12.2% 145339 softirqs.CPU41.TIMER
133127 ± 3% +12.2% 149413 ± 4% softirqs.CPU42.TIMER
129375 +12.8% 145952 softirqs.CPU43.TIMER
131848 ± 3% +10.6% 145875 softirqs.CPU44.TIMER
129154 +13.3% 146280 softirqs.CPU46.TIMER
129539 +13.0% 146329 softirqs.CPU47.TIMER
129638 +13.1% 146579 softirqs.CPU48.TIMER
129426 +12.9% 146108 softirqs.CPU49.TIMER
129600 +12.8% 146183 softirqs.CPU5.TIMER
129905 +12.5% 146087 softirqs.CPU50.TIMER
129022 +12.9% 145651 softirqs.CPU51.TIMER
129770 +12.7% 146232 softirqs.CPU52.TIMER
129292 +13.1% 146209 softirqs.CPU53.TIMER
5433 ± 28% +30.3% 7080 ± 30% softirqs.CPU54.RCU
129836 +12.1% 145500 softirqs.CPU55.TIMER
129356 +12.8% 145955 softirqs.CPU56.TIMER
129567 +12.6% 145903 softirqs.CPU57.TIMER
128936 +12.9% 145514 softirqs.CPU58.TIMER
129375 +12.9% 146028 softirqs.CPU59.TIMER
129142 +13.9% 147056 softirqs.CPU6.TIMER
129595 +12.5% 145800 softirqs.CPU60.TIMER
128913 +13.1% 145794 softirqs.CPU61.TIMER
129061 +12.5% 145181 softirqs.CPU62.TIMER
129352 +12.6% 145689 softirqs.CPU63.TIMER
129444 +12.7% 145937 softirqs.CPU64.TIMER
129390 +13.1% 146375 softirqs.CPU65.TIMER
128786 +13.6% 146311 softirqs.CPU66.TIMER
128852 +12.8% 145342 softirqs.CPU67.TIMER
129554 +11.8% 144858 softirqs.CPU68.TIMER
129448 +12.1% 145164 ± 2% softirqs.CPU69.TIMER
129506 +13.1% 146430 softirqs.CPU7.TIMER
129299 +12.8% 145841 softirqs.CPU70.TIMER
129438 +12.0% 144997 softirqs.CPU71.TIMER
132769 ± 3% +12.6% 149510 ± 2% softirqs.CPU72.TIMER
129179 +13.0% 145909 softirqs.CPU73.TIMER
129615 +12.8% 146157 softirqs.CPU74.TIMER
129585 +12.0% 145130 softirqs.CPU75.TIMER
129413 +13.2% 146437 softirqs.CPU76.TIMER
129681 +13.0% 146590 softirqs.CPU77.TIMER
129190 +13.2% 146279 softirqs.CPU78.TIMER
129110 +13.0% 145898 softirqs.CPU79.TIMER
130097 +13.4% 147549 ± 2% softirqs.CPU8.TIMER
130307 +12.4% 146504 softirqs.CPU80.TIMER
130075 +12.7% 146539 softirqs.CPU81.TIMER
129815 +12.6% 146162 softirqs.CPU82.TIMER
129831 +12.3% 145840 softirqs.CPU83.TIMER
130013 +12.8% 146701 softirqs.CPU84.TIMER
130069 +11.9% 145602 softirqs.CPU85.TIMER
132522 ± 4% +10.2% 145979 softirqs.CPU86.TIMER
130180 +12.3% 146145 softirqs.CPU87.TIMER
129819 +12.9% 146559 softirqs.CPU88.TIMER
129855 +12.5% 146133 softirqs.CPU89.TIMER
130079 +12.5% 146311 softirqs.CPU9.TIMER
129923 +12.7% 146468 softirqs.CPU90.TIMER
130267 +12.0% 145925 softirqs.CPU91.TIMER
129297 +13.2% 146376 softirqs.CPU92.TIMER
130213 +12.3% 146200 softirqs.CPU93.TIMER
129837 +12.8% 146429 softirqs.CPU94.TIMER
129618 +12.9% 146323 softirqs.CPU95.TIMER
129948 +12.8% 146534 softirqs.CPU96.TIMER
129778 +12.8% 146432 softirqs.CPU97.TIMER
129842 +13.2% 146945 softirqs.CPU98.TIMER
129713 +13.0% 146545 softirqs.CPU99.TIMER
37293358 +12.6% 41975755 softirqs.TIMER
will-it-scale.per_thread_ops
350 +-+-------------------------O--O---O-O------O-------------------------+
O O O O O O O OO O O |
300 +-+ O OO O O O O O O |
| |
250 +-+ .+. .+. |
|.+.+. +.+ + ++.+.+ +. +. .+.|
200 +-+ + : : + : + +.++.+ |
| : : : : : : |
150 +-+ : : : : : : |
| : : : : : : |
100 +-+ : : : : : : |
| : : : : : : |
50 +-+ : : : : : |
| : : : : : |
0 +-+------------O------O-----------------------------------------------+
will-it-scale.workload
100000 +-+-----------------------O---O---OO-----OO-O-O--------------------+
90000 O-O OO O OO O O O O O O O |
| O O O |
80000 +-+ |
70000 +-+ .+.+ .+. +. |
|.+.++.+ + +.+ + + +. .+. .+.|
60000 +-+ : :+ : + + ++ |
50000 +-+ : :: : : : |
40000 +-+ : : : : : : |
| : : : : : : |
30000 +-+ : : : : : : |
20000 +-+ : : : : : : |
| : : : : : |
10000 +-+ : : : : : |
0 +-+-----------O------O---------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-knm01: 288 threads Intel(R) Xeon Phi(TM) CPU 7295 @ 1.50GHz with 80G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase:
gcc-7/performance/x86_64-rhel-7.6/process/100%/debian-x86_64-2018-04-03.cgz/lkp-knm01/malloc1/will-it-scale
commit:
6a4d1b18ef ("crypto: scompress - return proper error code for allocation failure")
71052dcf4b ("crypto: scompress - Use per-CPU struct instead multiple variables")
6a4d1b18ef00a7b1 71052dcf4be70be4077817297dc
---------------- ---------------------------
%stddev %change %stddev
\ | \
640.50 +26.3% 809.00 will-it-scale.per_process_ops
8030 -1.4% 7916 will-it-scale.time.maximum_resident_set_size
184708 +26.2% 233116 will-it-scale.workload
751836 ± 5% -13.6% 649948 ± 3% meminfo.DirectMap4k
2.69 ± 4% -7.3% 2.49 turbostat.RAMWatt
2120 -2.0% 2078 vmstat.system.cs
6381 ± 37% -68.0% 2043 ± 43% cpuidle.POLL.time
221.00 ± 13% -62.2% 83.50 ± 11% cpuidle.POLL.usage
0.00 ± 11% -0.0 0.00 ± 9% mpstat.cpu.all.soft%
0.46 +0.1 0.57 ± 5% mpstat.cpu.all.usr%
39098 -10.1% 35164 ± 3% numa-meminfo.node0.Mapped
16078 ± 3% -6.1% 15099 numa-meminfo.node1.SUnreclaim
1.118e+08 +26.2% 1.411e+08 numa-numastat.node0.local_node
1.118e+08 +26.2% 1.411e+08 numa-numastat.node0.numa_hit
46.62 -2.3% 45.55 boot-time.boot
33.98 -2.4% 33.16 ± 2% boot-time.dhcp
11319 -2.5% 11036 ± 2% boot-time.idle
9718 ± 2% -8.6% 8881 ± 3% numa-vmstat.node0.nr_mapped
57373107 +24.8% 71596585 numa-vmstat.node0.numa_hit
57373864 +24.8% 71597286 numa-vmstat.node0.numa_local
4019 ± 3% -6.1% 3774 numa-vmstat.node1.nr_slab_unreclaimable
1.04 -0.1 0.90 ± 2% perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
1.09 -0.1 0.97 ± 2% perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
1.09 -0.1 0.97 ± 2% perf-profile.calltrace.cycles-pp.page_fault
0.85 -0.1 0.76 ± 2% perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.80 -0.1 0.71 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.72 -0.1 0.64 ± 2% perf-profile.calltrace.cycles-pp.do_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
1582 ± 13% -29.0% 1123 ± 3% slabinfo.dmaengine-unmap-16.active_objs
1582 ± 13% -29.0% 1123 ± 3% slabinfo.dmaengine-unmap-16.num_objs
1517 ± 6% -9.2% 1378 ± 9% slabinfo.file_lock_cache.active_objs
1517 ± 6% -9.2% 1378 ± 9% slabinfo.file_lock_cache.num_objs
504.00 ± 8% -29.2% 357.00 ± 5% slabinfo.kmem_cache.active_objs
504.00 ± 8% -29.2% 357.00 ± 5% slabinfo.kmem_cache.num_objs
846.00 ± 7% -26.5% 622.00 ± 5% slabinfo.kmem_cache_node.active_objs
896.00 ± 7% -25.0% 672.00 ± 4% slabinfo.kmem_cache_node.num_objs
1595 ± 3% -9.0% 1452 ± 6% slabinfo.pool_workqueue.num_objs
5684 -1.1% 5619 proc-vmstat.nr_inactive_anon
12244 -8.3% 11233 ± 2% proc-vmstat.nr_mapped
44956 -2.6% 43768 proc-vmstat.nr_shmem
5684 -1.1% 5619 proc-vmstat.nr_zone_inactive_anon
1.117e+08 +26.1% 1.409e+08 proc-vmstat.numa_hit
1.117e+08 +26.1% 1.409e+08 proc-vmstat.numa_local
6696 ± 3% -11.5% 5923 ± 4% proc-vmstat.pgactivate
1.119e+08 +26.1% 1.412e+08 proc-vmstat.pgalloc_normal
56263457 +26.0% 70898353 proc-vmstat.pgfault
1.118e+08 +26.2% 1.411e+08 proc-vmstat.pgfree
134341 ± 6% -10.6% 120099 softirqs.CPU116.TIMER
27993 ± 9% -99.0% 284.00 ± 40% softirqs.CPU17.NET_RX
131921 ± 2% -8.3% 120986 softirqs.CPU17.TIMER
131982 ± 3% -9.0% 120071 softirqs.CPU192.TIMER
133187 ± 4% -10.1% 119788 softirqs.CPU208.TIMER
141607 ± 12% -15.7% 119349 softirqs.CPU274.TIMER
150281 ± 16% -20.7% 119207 softirqs.CPU276.TIMER
10141 -57.2% 4340 ± 21% softirqs.CPU72.RCU
138167 -12.3% 121165 softirqs.CPU72.TIMER
29873 ± 7% -93.4% 1967 ± 7% softirqs.NET_RX
17.37 ± 71% -75.7% 4.22 ± 6% sched_debug.cfs_rq:/.load_avg.avg
3969 ± 87% -92.5% 296.20 ± 9% sched_debug.cfs_rq:/.load_avg.max
237.20 ± 87% -92.0% 19.01 ± 9% sched_debug.cfs_rq:/.load_avg.stddev
-5164976 -15.8% -4346453 sched_debug.cfs_rq:/.spread0.min
3.93 ± 8% -13.7% 3.40 ± 2% sched_debug.cpu.cpu_load[0].avg
330.10 ± 25% -28.5% 235.95 ± 3% sched_debug.cpu.cpu_load[0].max
20.27 ± 21% -28.9% 14.42 ± 4% sched_debug.cpu.cpu_load[0].stddev
3.80 ± 4% -10.4% 3.41 ± 2% sched_debug.cpu.cpu_load[1].avg
281.90 ± 13% -16.3% 235.85 ± 3% sched_debug.cpu.cpu_load[1].max
17.60 ± 9% -18.1% 14.41 ± 4% sched_debug.cpu.cpu_load[1].stddev
3.73 -8.3% 3.42 ± 2% sched_debug.cpu.cpu_load[2].avg
257.00 ± 4% -8.0% 236.40 ± 3% sched_debug.cpu.cpu_load[2].max
16.36 ± 2% -11.3% 14.51 ± 4% sched_debug.cpu.cpu_load[2].stddev
0.13 -15.6% 0.11 ± 12% sched_debug.cpu.nr_running.stddev
9.95 +9.2% 10.87 perf-stat.i.MPKI
9.885e+09 +1.7% 1.006e+10 perf-stat.i.branch-instructions
1.08 +0.1 1.22 perf-stat.i.branch-miss-rate%
1.066e+08 +15.1% 1.226e+08 perf-stat.i.branch-misses
11.81 -0.8 11.05 ± 2% perf-stat.i.cache-miss-rate%
4.027e+08 +11.5% 4.49e+08 ± 2% perf-stat.i.cache-references
2048 -2.2% 2003 perf-stat.i.context-switches
11.16 -1.8% 10.96 perf-stat.i.cpi
9514 ± 3% -3.9% 9138 perf-stat.i.cycles-between-cache-misses
0.16 +0.0 0.18 perf-stat.i.iTLB-load-miss-rate%
62886072 +19.3% 75036125 perf-stat.i.iTLB-load-misses
4.04e+10 +2.1% 4.124e+10 perf-stat.i.iTLB-loads
4.046e+10 +2.1% 4.129e+10 perf-stat.i.instructions
644.65 -14.6% 550.82 perf-stat.i.instructions-per-iTLB-miss
0.09 +1.9% 0.09 perf-stat.i.ipc
186582 +26.0% 235021 perf-stat.i.minor-faults
186870 +26.0% 235403 perf-stat.i.page-faults
9.95 +9.2% 10.87 perf-stat.overall.MPKI
1.08 +0.1 1.22 perf-stat.overall.branch-miss-rate%
11.82 -0.8 11.05 ± 2% perf-stat.overall.cache-miss-rate%
11.16 -1.8% 10.96 perf-stat.overall.cpi
9497 ± 3% -3.9% 9122 perf-stat.overall.cycles-between-cache-misses
0.16 +0.0 0.18 perf-stat.overall.iTLB-load-miss-rate%
643.60 -14.5% 550.27 perf-stat.overall.instructions-per-iTLB-miss
0.09 +1.9% 0.09 perf-stat.overall.ipc
64136162 -18.9% 52013113 perf-stat.overall.path-length
9.786e+09 +1.9% 9.967e+09 perf-stat.ps.branch-instructions
1.054e+08 +15.3% 1.215e+08 perf-stat.ps.branch-misses
3.986e+08 +11.6% 4.451e+08 ± 2% perf-stat.ps.cache-references
2033 -2.0% 1992 perf-stat.ps.context-switches
62235957 +19.5% 74377952 perf-stat.ps.iTLB-load-misses
3.998e+10 +2.2% 4.087e+10 perf-stat.ps.iTLB-loads
4.005e+10 +2.2% 4.093e+10 perf-stat.ps.instructions
185175 +26.1% 233489 perf-stat.ps.minor-faults
185171 +26.1% 233535 perf-stat.ps.page-faults
1.185e+13 +2.3% 1.212e+13 perf-stat.total.instructions
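For readers decoding these tables: the derived perf-stat rows (MPKI, instructions-per-iTLB-miss, and friends) follow directly from the raw counters listed alongside them. A minimal sketch of the arithmetic, assuming MPKI here counts cache references per thousand retired instructions (the values in this report are consistent with that reading):

```python
# Raw counters copied from the perf-stat.i.* rows above
# (base-commit 6a4d1b18ef column of this report).
instructions = 4.046e10   # perf-stat.i.instructions
cache_refs   = 4.027e8    # perf-stat.i.cache-references
itlb_misses  = 62886072   # perf-stat.i.iTLB-load-misses

# MPKI: cache references per kilo-instruction.
mpki = cache_refs / (instructions / 1000)
print(round(mpki, 2))     # ~9.95, matching perf-stat.i.MPKI

# Instructions retired between consecutive iTLB misses.
ipim = instructions / itlb_misses
print(round(ipim))        # ~643, close to instructions-per-iTLB-miss above
```

Reproducing a couple of derived rows this way is a quick sanity check that a reported regression (e.g. the -14.6% instructions-per-iTLB-miss here) reflects the underlying counters rather than a reporting glitch.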
43805 ± 39% -99.1% 411.25 ± 49% interrupts.37:IR-PCI-MSI.2621443-edge.eth1-TxRx-2
700518 +5.4% 738269 interrupts.CAL:Function_call_interrupts
1391 -14.1% 1194 ± 9% interrupts.CPU0.RES:Rescheduling_interrupts
2370 +10.2% 2612 interrupts.CPU101.CAL:Function_call_interrupts
5760 ± 33% -33.7% 3820 interrupts.CPU101.NMI:Non-maskable_interrupts
5760 ± 33% -33.7% 3820 interrupts.CPU101.PMI:Performance_monitoring_interrupts
117.00 ± 97% -98.3% 2.00 ± 35% interrupts.CPU101.RES:Rescheduling_interrupts
7744 -50.4% 3842 interrupts.CPU108.NMI:Non-maskable_interrupts
7744 -50.4% 3842 interrupts.CPU108.PMI:Performance_monitoring_interrupts
7694 -37.9% 4777 ± 34% interrupts.CPU11.NMI:Non-maskable_interrupts
7694 -37.9% 4777 ± 34% interrupts.CPU11.PMI:Performance_monitoring_interrupts
81.50 ± 47% -72.1% 22.75 ±158% interrupts.CPU112.RES:Rescheduling_interrupts
7727 -50.2% 3851 interrupts.CPU116.NMI:Non-maskable_interrupts
7727 -50.2% 3851 interrupts.CPU116.PMI:Performance_monitoring_interrupts
7680 -37.6% 4790 ± 34% interrupts.CPU121.NMI:Non-maskable_interrupts
7680 -37.6% 4790 ± 34% interrupts.CPU121.PMI:Performance_monitoring_interrupts
5780 ± 33% -34.3% 3798 interrupts.CPU144.NMI:Non-maskable_interrupts
5780 ± 33% -34.3% 3798 interrupts.CPU144.PMI:Performance_monitoring_interrupts
7690 -50.3% 3824 interrupts.CPU146.NMI:Non-maskable_interrupts
7690 -50.3% 3824 interrupts.CPU146.PMI:Performance_monitoring_interrupts
5793 ± 33% -33.7% 3839 interrupts.CPU147.NMI:Non-maskable_interrupts
5793 ± 33% -33.7% 3839 interrupts.CPU147.PMI:Performance_monitoring_interrupts
5797 ± 32% -33.5% 3857 interrupts.CPU15.NMI:Non-maskable_interrupts
5797 ± 32% -33.5% 3857 interrupts.CPU15.PMI:Performance_monitoring_interrupts
7693 -50.5% 3811 interrupts.CPU150.NMI:Non-maskable_interrupts
7693 -50.5% 3811 interrupts.CPU150.PMI:Performance_monitoring_interrupts
533.50 ± 92% -99.6% 2.25 ± 57% interrupts.CPU153.RES:Rescheduling_interrupts
3826 +50.3% 5749 ± 32% interrupts.CPU157.NMI:Non-maskable_interrupts
3826 +50.3% 5749 ± 32% interrupts.CPU157.PMI:Performance_monitoring_interrupts
3815 +51.6% 5783 ± 33% interrupts.CPU16.NMI:Non-maskable_interrupts
3815 +51.6% 5783 ± 33% interrupts.CPU16.PMI:Performance_monitoring_interrupts
7706 -37.8% 4791 ± 33% interrupts.CPU165.NMI:Non-maskable_interrupts
7706 -37.8% 4791 ± 33% interrupts.CPU165.PMI:Performance_monitoring_interrupts
5826 ± 33% -17.6% 4802 ± 33% interrupts.CPU169.NMI:Non-maskable_interrupts
5826 ± 33% -17.6% 4802 ± 33% interrupts.CPU169.PMI:Performance_monitoring_interrupts
43805 ± 39% -99.1% 411.25 ± 49% interrupts.CPU17.37:IR-PCI-MSI.2621443-edge.eth1-TxRx-2
287.50 ± 84% -99.7% 1.00 interrupts.CPU175.RES:Rescheduling_interrupts
7811 -51.2% 3812 interrupts.CPU18.NMI:Non-maskable_interrupts
7811 -51.2% 3812 interrupts.CPU18.PMI:Performance_monitoring_interrupts
59.50 ± 94% -98.3% 1.00 interrupts.CPU185.RES:Rescheduling_interrupts
1.50 ± 33% +7516.7% 114.25 ±105% interrupts.CPU192.RES:Rescheduling_interrupts
7737 -37.8% 4811 ± 35% interrupts.CPU193.NMI:Non-maskable_interrupts
7737 -37.8% 4811 ± 35% interrupts.CPU193.PMI:Performance_monitoring_interrupts
5752 ± 33% -33.6% 3820 interrupts.CPU195.NMI:Non-maskable_interrupts
5752 ± 33% -33.6% 3820 interrupts.CPU195.PMI:Performance_monitoring_interrupts
5782 ± 32% -33.9% 3820 interrupts.CPU197.NMI:Non-maskable_interrupts
5782 ± 32% -33.9% 3820 interrupts.CPU197.PMI:Performance_monitoring_interrupts
395.50 ± 98% -99.6% 1.75 ± 24% interrupts.CPU202.RES:Rescheduling_interrupts
5773 ± 32% -34.1% 3803 interrupts.CPU203.NMI:Non-maskable_interrupts
5773 ± 32% -34.1% 3803 interrupts.CPU203.PMI:Performance_monitoring_interrupts
177.00 ± 92% -99.0% 1.75 ± 47% interrupts.CPU204.RES:Rescheduling_interrupts
5769 ± 31% -34.1% 3802 interrupts.CPU206.NMI:Non-maskable_interrupts
5769 ± 31% -34.1% 3802 interrupts.CPU206.PMI:Performance_monitoring_interrupts
3850 +99.3% 7673 interrupts.CPU212.NMI:Non-maskable_interrupts
3850 +99.3% 7673 interrupts.CPU212.PMI:Performance_monitoring_interrupts
5771 ± 32% -33.7% 3825 interrupts.CPU218.NMI:Non-maskable_interrupts
5771 ± 32% -33.7% 3825 interrupts.CPU218.PMI:Performance_monitoring_interrupts
7646 -37.0% 4814 ± 34% interrupts.CPU219.NMI:Non-maskable_interrupts
7646 -37.0% 4814 ± 34% interrupts.CPU219.PMI:Performance_monitoring_interrupts
7710 -37.8% 4799 ± 34% interrupts.CPU222.NMI:Non-maskable_interrupts
7710 -37.8% 4799 ± 34% interrupts.CPU222.PMI:Performance_monitoring_interrupts
5768 ± 32% -33.6% 3829 interrupts.CPU223.NMI:Non-maskable_interrupts
5768 ± 32% -33.6% 3829 interrupts.CPU223.PMI:Performance_monitoring_interrupts
192.00 ± 74% -98.2% 3.50 ± 91% interrupts.CPU228.RES:Rescheduling_interrupts
3841 +100.0% 7684 interrupts.CPU229.NMI:Non-maskable_interrupts
3841 +100.0% 7684 interrupts.CPU229.PMI:Performance_monitoring_interrupts
7725 -38.3% 4765 ± 35% interrupts.CPU232.NMI:Non-maskable_interrupts
7725 -38.3% 4765 ± 35% interrupts.CPU232.PMI:Performance_monitoring_interrupts
454.50 ± 76% -74.4% 116.25 ± 97% interrupts.CPU239.RES:Rescheduling_interrupts
5792 ± 33% -17.3% 4790 ± 34% interrupts.CPU241.NMI:Non-maskable_interrupts
5792 ± 33% -17.3% 4790 ± 34% interrupts.CPU241.PMI:Performance_monitoring_interrupts
3815 +101.6% 7691 interrupts.CPU243.NMI:Non-maskable_interrupts
3815 +101.6% 7691 interrupts.CPU243.PMI:Performance_monitoring_interrupts
204.50 ± 92% -97.2% 5.75 ±133% interrupts.CPU247.RES:Rescheduling_interrupts
7702 -37.6% 4808 ± 35% interrupts.CPU256.NMI:Non-maskable_interrupts
7702 -37.6% 4808 ± 35% interrupts.CPU256.PMI:Performance_monitoring_interrupts
5809 ± 32% -17.4% 4798 ± 35% interrupts.CPU257.NMI:Non-maskable_interrupts
5809 ± 32% -17.4% 4798 ± 35% interrupts.CPU257.PMI:Performance_monitoring_interrupts
5801 ± 32% -17.7% 4773 ± 35% interrupts.CPU258.NMI:Non-maskable_interrupts
5801 ± 32% -17.7% 4773 ± 35% interrupts.CPU258.PMI:Performance_monitoring_interrupts
7640 -37.1% 4809 ± 34% interrupts.CPU262.NMI:Non-maskable_interrupts
7640 -37.1% 4809 ± 34% interrupts.CPU262.PMI:Performance_monitoring_interrupts
3.50 ± 71% +17707.1% 623.25 ±130% interrupts.CPU264.RES:Rescheduling_interrupts
3812 +100.7% 7650 interrupts.CPU267.NMI:Non-maskable_interrupts
3812 +100.7% 7650 interrupts.CPU267.PMI:Performance_monitoring_interrupts
5781 ± 32% -17.3% 4782 ± 34% interrupts.CPU272.NMI:Non-maskable_interrupts
5781 ± 32% -17.3% 4782 ± 34% interrupts.CPU272.PMI:Performance_monitoring_interrupts
3815 +76.7% 6741 ± 24% interrupts.CPU277.NMI:Non-maskable_interrupts
3815 +76.7% 6741 ± 24% interrupts.CPU277.PMI:Performance_monitoring_interrupts
2.00 +10337.5% 208.75 ±170% interrupts.CPU278.RES:Rescheduling_interrupts
5837 ± 32% -17.4% 4819 ± 34% interrupts.CPU283.NMI:Non-maskable_interrupts
5837 ± 32% -17.4% 4819 ± 34% interrupts.CPU283.PMI:Performance_monitoring_interrupts
7711 -37.9% 4789 ± 35% interrupts.CPU285.NMI:Non-maskable_interrupts
7711 -37.9% 4789 ± 35% interrupts.CPU285.PMI:Performance_monitoring_interrupts
7731 -38.1% 4788 ± 34% interrupts.CPU3.NMI:Non-maskable_interrupts
7731 -38.1% 4788 ± 34% interrupts.CPU3.PMI:Performance_monitoring_interrupts
5768 ± 32% -33.3% 3847 interrupts.CPU32.NMI:Non-maskable_interrupts
5768 ± 32% -33.3% 3847 interrupts.CPU32.PMI:Performance_monitoring_interrupts
3893 +72.8% 6728 ± 23% interrupts.CPU39.NMI:Non-maskable_interrupts
3893 +72.8% 6728 ± 23% interrupts.CPU39.PMI:Performance_monitoring_interrupts
5809 ± 33% -17.9% 4771 ± 35% interrupts.CPU4.NMI:Non-maskable_interrupts
5809 ± 33% -17.9% 4771 ± 35% interrupts.CPU4.PMI:Performance_monitoring_interrupts
7712 -50.2% 3841 interrupts.CPU44.NMI:Non-maskable_interrupts
7712 -50.2% 3841 interrupts.CPU44.PMI:Performance_monitoring_interrupts
4.50 ± 55% +8027.8% 365.75 ± 44% interrupts.CPU48.RES:Rescheduling_interrupts
64.50 ± 93% -85.7% 9.25 ±123% interrupts.CPU51.RES:Rescheduling_interrupts
7727 -37.4% 4840 ± 34% interrupts.CPU54.NMI:Non-maskable_interrupts
7727 -37.4% 4840 ± 34% interrupts.CPU54.PMI:Performance_monitoring_interrupts
7625 -49.7% 3838 interrupts.CPU56.NMI:Non-maskable_interrupts
7625 -49.7% 3838 interrupts.CPU56.PMI:Performance_monitoring_interrupts
5810 ± 33% -17.6% 4785 ± 34% interrupts.CPU57.NMI:Non-maskable_interrupts
5810 ± 33% -17.6% 4785 ± 34% interrupts.CPU57.PMI:Performance_monitoring_interrupts
2092 ± 9% +23.4% 2582 interrupts.CPU59.CAL:Function_call_interrupts
7652 -49.9% 3834 interrupts.CPU59.NMI:Non-maskable_interrupts
7652 -49.9% 3834 interrupts.CPU59.PMI:Performance_monitoring_interrupts
552.00 ± 96% -64.4% 196.25 ±170% interrupts.CPU59.RES:Rescheduling_interrupts
126.50 ± 92% -97.8% 2.75 ± 47% interrupts.CPU60.RES:Rescheduling_interrupts
7671 -37.3% 4812 ± 34% interrupts.CPU61.NMI:Non-maskable_interrupts
7671 -37.3% 4812 ± 34% interrupts.CPU61.PMI:Performance_monitoring_interrupts
81.50 ± 77% -97.5% 2.00 ± 35% interrupts.CPU61.RES:Rescheduling_interrupts
7718 -38.0% 4785 ± 34% interrupts.CPU64.NMI:Non-maskable_interrupts
7718 -38.0% 4785 ± 34% interrupts.CPU64.PMI:Performance_monitoring_interrupts
5847 ± 33% -34.9% 3805 interrupts.CPU67.NMI:Non-maskable_interrupts
5847 ± 33% -34.9% 3805 interrupts.CPU67.PMI:Performance_monitoring_interrupts
3783 +52.3% 5763 ± 33% interrupts.CPU75.NMI:Non-maskable_interrupts
3783 +52.3% 5763 ± 33% interrupts.CPU75.PMI:Performance_monitoring_interrupts
7697 -50.1% 3840 interrupts.CPU79.NMI:Non-maskable_interrupts
7697 -50.1% 3840 interrupts.CPU79.PMI:Performance_monitoring_interrupts
7729 -50.3% 3845 interrupts.CPU83.NMI:Non-maskable_interrupts
7729 -50.3% 3845 interrupts.CPU83.PMI:Performance_monitoring_interrupts
7753 -38.3% 4781 ± 34% interrupts.CPU84.NMI:Non-maskable_interrupts
7753 -38.3% 4781 ± 34% interrupts.CPU84.PMI:Performance_monitoring_interrupts
5759 ± 32% -33.9% 3808 interrupts.CPU93.NMI:Non-maskable_interrupts
5759 ± 32% -33.9% 3808 interrupts.CPU93.PMI:Performance_monitoring_interrupts
3821 +75.6% 6709 ± 24% interrupts.CPU94.NMI:Non-maskable_interrupts
3821 +75.6% 6709 ± 24% interrupts.CPU94.PMI:Performance_monitoring_interrupts
7686 -37.3% 4817 ± 34% interrupts.CPU96.NMI:Non-maskable_interrupts
7686 -37.3% 4817 ± 34% interrupts.CPU96.PMI:Performance_monitoring_interrupts
5752 ± 33% -33.6% 3818 interrupts.CPU98.NMI:Non-maskable_interrupts
5752 ± 33% -33.6% 3818 interrupts.CPU98.PMI:Performance_monitoring_interrupts
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[locking/rwsem] 46ad0840b1: reaim.jobs_per_min -3.4% regression
by kernel test robot
Greetings,
FYI, we noticed a -3.4% regression of reaim.jobs_per_min due to commit:
commit: 46ad0840b1584b92b5ff2cc3ed0b011dd6b8e0f1 ("locking/rwsem: Remove arch specific rwsem files")
https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git WIP.locking/core
in testcase: reaim
on test machine: 48 threads Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz with 64G memory
with following parameters:
runtime: 300s
nr_task: 100t
test: fork_test
cpufreq_governor: performance
ucode: 0x42d
test-description: REAIM is an updated and improved version of the AIM 7 benchmark.
test-url: https://sourceforge.net/projects/re-aim-7/
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
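The fork_test workload is essentially a tight fork()/wait() loop, which is why copy_process, anon_vma_fork, and the rwsem write paths dominate the perf profile further down. A minimal sketch of that kind of loop (illustrative only, not reaim's actual code; the function name and iteration count are made up here):

```python
import os
import time

def fork_stress(iterations=100):
    """Repeatedly fork a trivial child and reap it, mimicking the
    fork+wait pattern that reaim's fork_test exercises."""
    start = time.monotonic()
    for _ in range(iterations):
        pid = os.fork()
        if pid == 0:
            # Child: do nothing and exit immediately, so the cost
            # measured is fork/exit/wait overhead, not real work.
            os._exit(0)
        # Parent: block until the child is reaped.
        os.waitpid(pid, 0)
    elapsed = time.monotonic() - start
    return iterations / elapsed  # forks per second

if __name__ == "__main__":
    print(f"{fork_stress():.0f} forks/sec")
```

Each iteration takes the mmap_sem-style rwsems for write during copy_process (fork) and again during exit_mmap/unlink_anon_vmas (child exit), which is where the profile shifts between the two rwsem implementations show up.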
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/100t/debian-x86_64-2018-04-03.cgz/300s/ivb44/fork_test/reaim/0x42d
commit:
a1247d06d0 ("locking/static_key: Fix false positive warnings on concurrent dec/inc")
46ad0840b1 ("locking/rwsem: Remove arch specific rwsem files")
a1247d06d01045d7 46ad0840b1584b92b5ff2cc3ed0
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:5 40% 2:6 dmesg.WARNING:at#for_ip_swapgs_restore_regs_and_return_to_usermode/0x
1:5 8% 2:6 perf-profile.calltrace.cycles-pp.error_entry
%stddev %change %stddev
\ | \
319.20 +4.5% 333.70 reaim.child_systime
228.14 +1.2% 230.79 reaim.child_utime
50164 -3.4% 48470 reaim.jobs_per_min
501.65 -3.4% 484.70 reaim.jobs_per_min_child
51181 -2.4% 49944 reaim.max_jobs_per_min
11.96 +3.5% 12.38 reaim.parent_time
307.80 -1.7% 302.61 reaim.time.elapsed_time
307.80 -1.7% 302.61 reaim.time.elapsed_time.max
21833132 -4.8% 20791331 reaim.time.involuntary_context_switches
7.594e+08 -4.4% 7.263e+08 reaim.time.minor_page_faults
5038 -3.4% 4865 reaim.time.user_time
220000 -4.5% 210000 reaim.workload
14617 ± 2% +10.3% 16128 ± 3% meminfo.PageTables
0.03 ± 11% -0.0 0.02 ± 14% mpstat.cpu.all.soft%
320607 +2.6% 328953 vmstat.system.cs
15491 -86.2% 2133 ±134% numa-numastat.node0.other_node
228.33 ± 5% +5855.5% 13598 ± 21% numa-numastat.node1.other_node
108917 ± 4% -7.8% 100423 ± 3% softirqs.CPU11.TIMER
113369 -8.1% 104175 ± 5% softirqs.CPU40.TIMER
8966761 ± 5% +10.7% 9926924 ± 4% cpuidle.C1.usage
964538 ± 7% +88.9% 1821601 ± 11% cpuidle.POLL.time
803275 ± 10% +118.2% 1752897 ± 13% cpuidle.POLL.usage
8957861 ± 5% +10.8% 9923511 ± 4% turbostat.C1
0.43 ± 69% -0.4 0.00 turbostat.PKG_%
0.02 ±141% +660.0% 0.13 ± 65% turbostat.Pkg%pc3
0.34 ± 62% -0.3 0.00 turbostat.RAM_%
789.33 ± 3% -21.6% 618.67 ± 9% slabinfo.kmalloc-rcl-128.active_objs
789.33 ± 3% -21.6% 618.67 ± 9% slabinfo.kmalloc-rcl-128.num_objs
1366 -10.2% 1227 ± 5% slabinfo.pool_workqueue.active_objs
1367 -10.0% 1230 ± 5% slabinfo.pool_workqueue.num_objs
341.33 ± 8% +25.0% 426.67 ± 7% slabinfo.skbuff_fclone_cache.active_objs
341.33 ± 8% +25.0% 426.67 ± 7% slabinfo.skbuff_fclone_cache.num_objs
153786 +9.3% 168087 numa-vmstat.node0.nr_file_pages
2967 ± 30% +61.5% 4793 ± 5% numa-vmstat.node0.nr_inactive_anon
3513 ± 15% +30.3% 4578 ± 2% numa-vmstat.node0.nr_mapped
3349 ± 26% +382.3% 16155 ± 2% numa-vmstat.node0.nr_shmem
6987 ± 10% +65.6% 11569 ± 3% numa-vmstat.node0.nr_slab_reclaimable
2967 ± 30% +61.5% 4793 ± 5% numa-vmstat.node0.nr_zone_inactive_anon
25044 ± 54% -90.7% 2340 ±121% numa-vmstat.node0.numa_other
655.67 ± 71% +62.0% 1062 numa-vmstat.node1.nr_active_file
169454 -8.4% 155176 numa-vmstat.node1.nr_file_pages
2152 ± 41% -82.8% 369.33 ± 59% numa-vmstat.node1.nr_inactive_anon
3903 ± 14% -29.0% 2770 ± 2% numa-vmstat.node1.nr_mapped
13327 ± 5% -95.1% 652.33 ± 35% numa-vmstat.node1.nr_shmem
11241 ± 6% -46.1% 6064 ± 6% numa-vmstat.node1.nr_slab_reclaimable
655.67 ± 71% +62.0% 1062 numa-vmstat.node1.nr_zone_active_file
2152 ± 41% -82.8% 369.33 ± 59% numa-vmstat.node1.nr_zone_inactive_anon
79276 -2.1% 77572 proc-vmstat.nr_active_anon
66149 -2.9% 64243 proc-vmstat.nr_anon_pages
7467 -1.6% 7346 proc-vmstat.nr_mapped
3756 ± 2% +5.3% 3956 proc-vmstat.nr_page_table_pages
18126 -3.1% 17561 proc-vmstat.nr_slab_reclaimable
79276 -2.1% 77572 proc-vmstat.nr_zone_active_anon
8473 +8.1% 9162 ± 7% proc-vmstat.numa_hint_faults
7.766e+08 -3.9% 7.459e+08 proc-vmstat.numa_hit
7.765e+08 -3.9% 7.459e+08 proc-vmstat.numa_local
984.00 ± 74% +885.9% 9701 ± 97% proc-vmstat.numa_pages_migrated
38689 ± 39% +47.3% 56992 ± 12% proc-vmstat.numa_pte_updates
8.078e+08 -4.0% 7.753e+08 proc-vmstat.pgalloc_normal
7.608e+08 -4.4% 7.276e+08 proc-vmstat.pgfault
8.077e+08 -4.0% 7.752e+08 proc-vmstat.pgfree
984.00 ± 74% +885.9% 9701 ± 97% proc-vmstat.pgmigrate_success
615148 +9.3% 672325 numa-meminfo.node0.FilePages
12293 ± 25% +55.2% 19080 ± 5% numa-meminfo.node0.Inactive
11872 ± 30% +60.7% 19080 ± 5% numa-meminfo.node0.Inactive(anon)
27949 ± 10% +64.6% 46006 ± 3% numa-meminfo.node0.KReclaimable
14085 ± 15% +28.0% 18031 numa-meminfo.node0.Mapped
27949 ± 10% +64.6% 46006 ± 3% numa-meminfo.node0.SReclaimable
13401 ± 26% +382.0% 64597 ± 2% numa-meminfo.node0.Shmem
104587 ± 4% +14.0% 119233 ± 2% numa-meminfo.node0.Slab
2624 ± 71% +61.9% 4249 numa-meminfo.node1.Active(file)
677803 -8.4% 620707 numa-meminfo.node1.FilePages
9431 ± 32% -72.2% 2619 ± 33% numa-meminfo.node1.Inactive
8710 ± 41% -83.0% 1479 ± 59% numa-meminfo.node1.Inactive(anon)
44608 ± 4% -45.6% 24259 ± 6% numa-meminfo.node1.KReclaimable
15357 ± 14% -28.9% 10924 numa-meminfo.node1.Mapped
44608 ± 4% -45.6% 24259 ± 6% numa-meminfo.node1.SReclaimable
53293 ± 5% -95.1% 2611 ± 35% numa-meminfo.node1.Shmem
109835 ± 4% -16.5% 91708 ± 2% numa-meminfo.node1.Slab
198438 ± 19% -48.3% 102613 ± 29% sched_debug.cfs_rq:/.MIN_vruntime.avg
872601 ± 15% -44.0% 488425 ± 20% sched_debug.cfs_rq:/.MIN_vruntime.stddev
58877 ± 4% -32.5% 39716 ± 12% sched_debug.cfs_rq:/.load.avg
884652 ± 13% -36.3% 563479 sched_debug.cfs_rq:/.load.max
170539 ± 10% -44.5% 94726 ± 11% sched_debug.cfs_rq:/.load.stddev
198438 ± 19% -48.3% 102613 ± 29% sched_debug.cfs_rq:/.max_vruntime.avg
872601 ± 15% -44.0% 488425 ± 20% sched_debug.cfs_rq:/.max_vruntime.stddev
98.43 ± 20% +44.2% 141.96 ± 17% sched_debug.cfs_rq:/.removed.load_avg.avg
4591 ± 20% +42.7% 6553 ± 17% sched_debug.cfs_rq:/.removed.runnable_sum.avg
20.87 ± 11% +50.0% 31.30 ± 14% sched_debug.cfs_rq:/.removed.util_avg.avg
67.81 ± 5% +19.5% 81.04 ± 11% sched_debug.cfs_rq:/.removed.util_avg.stddev
40.35 ± 15% -23.4% 30.90 ± 10% sched_debug.cfs_rq:/.runnable_load_avg.avg
629.17 ± 11% -23.5% 481.17 ± 16% sched_debug.cfs_rq:/.runnable_load_avg.max
118.51 ± 16% -31.1% 81.69 ± 4% sched_debug.cfs_rq:/.runnable_load_avg.stddev
56059 ± 8% -37.8% 34866 ± 13% sched_debug.cfs_rq:/.runnable_weight.avg
876174 ± 14% -37.1% 551504 sched_debug.cfs_rq:/.runnable_weight.max
171272 ± 11% -45.1% 94102 ± 11% sched_debug.cfs_rq:/.runnable_weight.stddev
31227 ± 19% -32.2% 21174 ± 32% sched_debug.cfs_rq:/.spread0.avg
1223 ± 4% +16.5% 1425 ± 3% sched_debug.cfs_rq:/.util_avg.max
213.82 ± 2% +11.2% 237.75 sched_debug.cfs_rq:/.util_avg.stddev
124.17 ± 4% +12.4% 139.59 ± 4% sched_debug.cfs_rq:/.util_est_enqueued.stddev
219522 ± 24% -23.3% 168384 ± 18% sched_debug.cpu.avg_idle.avg
11161 ± 7% -26.2% 8235 ± 16% sched_debug.cpu.avg_idle.min
451.78 ± 7% -15.1% 383.78 ± 7% sched_debug.cpu.cpu_load[1].max
85.00 ± 10% -21.7% 66.56 ± 10% sched_debug.cpu.cpu_load[1].stddev
277.61 ± 10% -17.9% 227.94 ± 9% sched_debug.cpu.cpu_load[2].max
56.14 ± 11% -18.5% 45.73 ± 9% sched_debug.cpu.cpu_load[2].stddev
52735 ± 8% +32.7% 69975 ± 8% sched_debug.cpu.curr->pid.avg
77254 ± 6% +23.7% 95588 ± 6% sched_debug.cpu.curr->pid.max
102587 ± 23% +28.5% 131794 ± 7% sched_debug.cpu.load.stddev
2153 ± 8% -20.9% 1703 ± 23% sched_debug.cpu.nr_load_updates.stddev
12689 ± 7% -11.4% 11237 ± 3% sched_debug.cpu.nr_switches.stddev
0.05 ± 48% +126.8% 0.11 ± 24% sched_debug.cpu.nr_uninterruptible.avg
-333.33 +32.1% -440.17 sched_debug.cpu.nr_uninterruptible.min
111.56 ± 2% +18.4% 132.09 sched_debug.cpu.nr_uninterruptible.stddev
8.47e+09 +2.8% 8.707e+09 perf-stat.i.branch-instructions
1.006e+08 -2.5% 98077383 perf-stat.i.branch-misses
1.205e+08 -2.5% 1.174e+08 perf-stat.i.cache-misses
1.034e+09 -2.5% 1.008e+09 perf-stat.i.cache-references
323541 +2.8% 332474 perf-stat.i.context-switches
25454 +5.5% 26847 ± 2% perf-stat.i.cpu-migrations
1.217e+10 +3.9% 1.264e+10 perf-stat.i.dTLB-loads
5.319e+09 -2.7% 5.174e+09 perf-stat.i.dTLB-stores
29193114 -2.4% 28497000 perf-stat.i.iTLB-load-misses
4.178e+10 +3.4% 4.319e+10 perf-stat.i.instructions
1424 +4.5% 1487 perf-stat.i.instructions-per-iTLB-miss
2472806 -2.9% 2402150 perf-stat.i.minor-faults
78070119 -2.8% 75920079 perf-stat.i.node-load-misses
82100776 -2.7% 79853820 perf-stat.i.node-loads
2472804 -2.9% 2402149 perf-stat.i.page-faults
24.75 -5.7% 23.35 perf-stat.overall.MPKI
1.19 -0.1 1.13 perf-stat.overall.branch-miss-rate%
2.85 -3.1% 2.77 perf-stat.overall.cpi
989.57 +2.8% 1017 perf-stat.overall.cycles-between-cache-misses
1431 +5.9% 1515 perf-stat.overall.instructions-per-iTLB-miss
0.35 +3.2% 0.36 perf-stat.overall.ipc
58325624 +6.7% 62258667 perf-stat.overall.path-length
8.442e+09 +2.8% 8.678e+09 perf-stat.ps.branch-instructions
1.003e+08 -2.5% 97753949 perf-stat.ps.branch-misses
1.201e+08 -2.5% 1.171e+08 perf-stat.ps.cache-misses
1.031e+09 -2.5% 1.005e+09 perf-stat.ps.cache-references
322473 +2.8% 331365 perf-stat.ps.context-switches
25370 +5.5% 26758 ± 2% perf-stat.ps.cpu-migrations
1.213e+10 +3.9% 1.26e+10 perf-stat.ps.dTLB-loads
5.302e+09 -2.7% 5.157e+09 perf-stat.ps.dTLB-stores
29096816 -2.4% 28402005 perf-stat.ps.iTLB-load-misses
4.165e+10 +3.4% 4.305e+10 perf-stat.ps.instructions
2464656 -2.9% 2394142 perf-stat.ps.minor-faults
77812636 -2.8% 75666866 perf-stat.ps.node-load-misses
81830010 -2.7% 79587500 perf-stat.ps.node-loads
2464651 -2.9% 2394141 perf-stat.ps.page-faults
244.00 ± 2% +125.7% 550.67 ± 36% interrupts.35:IR-PCI-MSI.2621441-edge.eth0-TxRx-0
184.67 ± 6% -15.2% 156.67 interrupts.36:IR-PCI-MSI.2621442-edge.eth0-TxRx-1
358.33 ± 74% -56.3% 156.67 interrupts.37:IR-PCI-MSI.2621443-edge.eth0-TxRx-2
201599 -1.2% 199147 interrupts.CAL:Function_call_interrupts
283.00 ± 9% -18.4% 231.00 ± 7% interrupts.CPU1.TLB:TLB_shootdowns
1163 ±131% +473.3% 6671 interrupts.CPU10.NMI:Non-maskable_interrupts
1163 ±131% +473.3% 6671 interrupts.CPU10.PMI:Performance_monitoring_interrupts
345.33 ± 8% -34.8% 225.00 ± 16% interrupts.CPU11.TLB:TLB_shootdowns
4375 ± 2% -7.2% 4060 ± 5% interrupts.CPU12.CAL:Function_call_interrupts
4277 ± 3% -9.1% 3888 ± 3% interrupts.CPU13.CAL:Function_call_interrupts
331.00 ± 5% -23.0% 255.00 ± 5% interrupts.CPU14.TLB:TLB_shootdowns
1115 ±141% +303.1% 4496 ± 35% interrupts.CPU16.NMI:Non-maskable_interrupts
1115 ±141% +303.1% 4496 ± 35% interrupts.CPU16.PMI:Performance_monitoring_interrupts
1136 ±137% +197.0% 3375 interrupts.CPU18.NMI:Non-maskable_interrupts
1136 ±137% +197.0% 3375 interrupts.CPU18.PMI:Performance_monitoring_interrupts
247.67 ± 15% +21.0% 299.67 ± 7% interrupts.CPU19.TLB:TLB_shootdowns
245.00 ± 18% +26.3% 309.33 ± 7% interrupts.CPU20.TLB:TLB_shootdowns
270.67 ± 27% +39.5% 377.67 ± 9% interrupts.CPU21.TLB:TLB_shootdowns
99.00 ± 53% +3296.0% 3362 interrupts.CPU22.NMI:Non-maskable_interrupts
99.00 ± 53% +3296.0% 3362 interrupts.CPU22.PMI:Performance_monitoring_interrupts
244.00 ± 2% +125.7% 550.67 ± 36% interrupts.CPU24.35:IR-PCI-MSI.2621441-edge.eth0-TxRx-0
184.67 ± 6% -15.2% 156.67 interrupts.CPU25.36:IR-PCI-MSI.2621442-edge.eth0-TxRx-1
358.33 ± 74% -56.3% 156.67 interrupts.CPU26.37:IR-PCI-MSI.2621443-edge.eth0-TxRx-2
2.00 ± 81% +57150.0% 1145 ±138% interrupts.CPU26.NMI:Non-maskable_interrupts
2.00 ± 81% +57150.0% 1145 ±138% interrupts.CPU26.PMI:Performance_monitoring_interrupts
310.00 ± 5% -28.0% 223.33 ± 5% interrupts.CPU27.TLB:TLB_shootdowns
3360 -66.9% 1113 ±140% interrupts.CPU29.NMI:Non-maskable_interrupts
3360 -66.9% 1113 ±140% interrupts.CPU29.PMI:Performance_monitoring_interrupts
1112 ±141% +402.7% 5592 ± 28% interrupts.CPU3.NMI:Non-maskable_interrupts
1112 ±141% +402.7% 5592 ± 28% interrupts.CPU3.PMI:Performance_monitoring_interrupts
213.67 ± 14% +21.8% 260.33 ± 17% interrupts.CPU31.TLB:TLB_shootdowns
3356 ± 80% -99.5% 17.00 ± 84% interrupts.CPU34.NMI:Non-maskable_interrupts
3356 ± 80% -99.5% 17.00 ± 84% interrupts.CPU34.PMI:Performance_monitoring_interrupts
1123 ±138% +401.6% 5633 ± 28% interrupts.CPU37.NMI:Non-maskable_interrupts
1123 ±138% +401.6% 5633 ± 28% interrupts.CPU37.PMI:Performance_monitoring_interrupts
1143 ±136% +292.3% 4484 ± 34% interrupts.CPU41.NMI:Non-maskable_interrupts
1143 ±136% +292.3% 4484 ± 34% interrupts.CPU41.PMI:Performance_monitoring_interrupts
230.33 ± 9% +25.5% 289.00 ± 12% interrupts.CPU42.TLB:TLB_shootdowns
1134 ±137% +209.6% 3511 ± 71% interrupts.CPU43.NMI:Non-maskable_interrupts
1134 ±137% +209.6% 3511 ± 71% interrupts.CPU43.PMI:Performance_monitoring_interrupts
319.00 ± 7% -33.4% 212.33 ± 5% interrupts.CPU43.TLB:TLB_shootdowns
1146 ±133% +289.2% 4462 ± 35% interrupts.CPU44.NMI:Non-maskable_interrupts
1146 ±133% +289.2% 4462 ± 35% interrupts.CPU44.PMI:Performance_monitoring_interrupts
633.00 ±140% +604.5% 4459 ± 34% interrupts.CPU5.NMI:Non-maskable_interrupts
633.00 ±140% +604.5% 4459 ± 34% interrupts.CPU5.PMI:Performance_monitoring_interrupts
285.33 ± 4% -31.2% 196.33 ± 26% interrupts.CPU6.TLB:TLB_shootdowns
308.00 ± 6% -18.4% 251.33 ± 20% interrupts.CPU9.TLB:TLB_shootdowns
108747 ± 5% +32.4% 144006 ± 13% interrupts.NMI:Non-maskable_interrupts
108747 ± 5% +32.4% 144006 ± 13% interrupts.PMI:Performance_monitoring_interrupts
25.95 ± 4% -19.2 6.78 ± 11% perf-profile.calltrace.cycles-pp.__libc_fork
23.30 ± 5% -17.2 6.15 ± 11% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__libc_fork
23.30 ± 5% -17.2 6.15 ± 11% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fork
23.29 ± 5% -17.1 6.14 ± 11% perf-profile.calltrace.cycles-pp._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fork
22.32 ± 4% -16.4 5.91 ± 11% perf-profile.calltrace.cycles-pp.copy_process._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fork
5.57 ± 3% -5.6 0.00 perf-profile.calltrace.cycles-pp.call_rwsem_down_write_failed.down_write.anon_vma_fork.copy_process._do_fork
5.55 ± 3% -5.5 0.00 perf-profile.calltrace.cycles-pp.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.anon_vma_fork.copy_process
5.34 ± 2% -5.3 0.00 perf-profile.calltrace.cycles-pp.call_rwsem_down_write_failed.down_write.unlink_anon_vmas.free_pgtables.exit_mmap
5.33 ± 2% -5.3 0.00 perf-profile.calltrace.cycles-pp.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.unlink_anon_vmas.free_pgtables
4.40 ± 10% -3.4 0.99 ± 9% perf-profile.calltrace.cycles-pp.wait
3.97 ± 10% -3.1 0.88 ± 9% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.wait
3.95 ± 10% -3.1 0.88 ± 9% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.wait
3.93 ± 10% -3.0 0.88 ± 9% perf-profile.calltrace.cycles-pp.__do_sys_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe.wait
3.93 ± 10% -3.0 0.88 ± 9% perf-profile.calltrace.cycles-pp.kernel_wait4.__do_sys_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe.wait
2.52 ± 9% -0.7 1.84 perf-profile.calltrace.cycles-pp.queued_read_lock_slowpath.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
2.83 -0.6 2.26 ± 13% perf-profile.calltrace.cycles-pp.anon_vma_interval_tree_insert.anon_vma_clone.anon_vma_fork.copy_process._do_fork
7.70 ± 2% -0.5 7.15 ± 3% perf-profile.calltrace.cycles-pp.tlb_finish_mmu.exit_mmap.mmput.do_exit.do_group_exit
7.69 ± 2% -0.5 7.14 ± 3% perf-profile.calltrace.cycles-pp.arch_tlb_finish_mmu.tlb_finish_mmu.exit_mmap.mmput.do_exit
7.62 ± 2% -0.5 7.09 ± 3% perf-profile.calltrace.cycles-pp.tlb_flush_mmu_free.arch_tlb_finish_mmu.tlb_finish_mmu.exit_mmap.mmput
5.01 ± 4% -0.5 4.48 ± 2% perf-profile.calltrace.cycles-pp.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe
8.09 ± 3% -0.5 7.62 ± 3% perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.do_exit.do_group_exit
6.17 ± 2% -0.4 5.74 ± 3% perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu_free.arch_tlb_finish_mmu.tlb_finish_mmu.exit_mmap
1.83 ± 2% -0.4 1.43 ± 2% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath.queued_read_lock_slowpath.do_wait.kernel_wait4.__do_sys_wait4
0.61 ± 3% -0.2 0.36 ± 70% perf-profile.calltrace.cycles-pp.lru_add_drain_cpu.lru_add_drain.exit_mmap.mmput.do_exit
0.61 ± 3% -0.2 0.37 ± 70% perf-profile.calltrace.cycles-pp.lru_add_drain.exit_mmap.mmput.do_exit.do_group_exit
0.59 ± 3% -0.2 0.36 ± 70% perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain.exit_mmap.mmput
4.11 ± 2% -0.2 3.92 perf-profile.calltrace.cycles-pp.copy_page_range.copy_process._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.94 ± 2% -0.2 3.75 perf-profile.calltrace.cycles-pp.copy_p4d_range.copy_page_range.copy_process._do_fork.do_syscall_64
1.01 ± 8% -0.1 0.86 ± 3% perf-profile.calltrace.cycles-pp.queued_write_lock_slowpath.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.79 ± 9% -0.1 0.66 ± 5% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath.queued_write_lock_slowpath.do_exit.do_group_exit.__x64_sys_exit_group
1.44 ± 4% -0.1 1.33 perf-profile.calltrace.cycles-pp.free_pages_and_swap_cache.tlb_flush_mmu_free.arch_tlb_finish_mmu.tlb_finish_mmu.exit_mmap
0.76 ± 2% -0.1 0.71 perf-profile.calltrace.cycles-pp.remove_vma.exit_mmap.mmput.do_exit.do_group_exit
1.01 -0.0 0.97 ± 2% perf-profile.calltrace.cycles-pp.__slab_free.kmem_cache_free.unlink_anon_vmas.free_pgtables.exit_mmap
1.10 -0.0 1.07 perf-profile.calltrace.cycles-pp.kmem_cache_free.unlink_anon_vmas.free_pgtables.exit_mmap.mmput
0.60 +0.0 0.65 ± 3% perf-profile.calltrace.cycles-pp.alloc_pages_vma.wp_page_copy.do_wp_page.__handle_mm_fault.handle_mm_fault
0.65 ± 2% +0.1 0.71 ± 6% perf-profile.calltrace.cycles-pp.swapgs_restore_regs_and_return_to_usermode
3.57 ± 5% +0.4 3.93 ± 5% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
4.02 ± 6% +0.4 4.47 ± 6% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
4.65 ± 3% +0.5 5.10 ± 4% perf-profile.calltrace.cycles-pp.secondary_startup_64
4.50 ± 5% +0.5 5.01 ± 5% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
4.51 ± 6% +0.5 5.02 ± 5% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
4.50 ± 5% +0.5 5.02 ± 5% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
0.00 +0.5 0.52 perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.wp_page_copy.do_wp_page.__handle_mm_fault
0.00 +0.6 0.56 ± 2% perf-profile.calltrace.cycles-pp.page_fault.__put_user_4.schedule_tail.ret_from_fork
0.00 +0.6 0.56 ± 2% perf-profile.calltrace.cycles-pp.__put_user_4.schedule_tail.ret_from_fork
1.13 ± 8% +0.6 1.69 perf-profile.calltrace.cycles-pp.ret_from_fork
0.00 +0.7 0.72 ± 3% perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_down_write_failed.down_write.anon_vma_clone.anon_vma_fork
0.00 +0.8 0.83 perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_down_write_failed.down_write.anon_vma_fork.copy_process
0.00 +0.8 0.84 ± 2% perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_down_write_failed.down_write.unlink_anon_vmas.free_pgtables
0.00 +0.9 0.86 perf-profile.calltrace.cycles-pp.schedule_tail.ret_from_fork
9.74 ± 2% +0.9 10.68 ± 5% perf-profile.calltrace.cycles-pp.anon_vma_clone.anon_vma_fork.copy_process._do_fork.do_syscall_64
0.00 +0.9 0.95 perf-profile.calltrace.cycles-pp.wake_up_new_task._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.78 ± 3% +1.0 6.81 ± 9% perf-profile.calltrace.cycles-pp.down_write.anon_vma_fork.copy_process._do_fork.do_syscall_64
5.08 ± 3% +1.2 6.33 ± 10% perf-profile.calltrace.cycles-pp.down_write.anon_vma_clone.anon_vma_fork.copy_process._do_fork
5.69 ± 2% +1.3 7.00 ± 10% perf-profile.calltrace.cycles-pp.down_write.unlink_anon_vmas.free_pgtables.exit_mmap.mmput
35.70 +1.5 37.20 perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
35.71 +1.5 37.22 perf-profile.calltrace.cycles-pp.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
35.71 +1.5 37.22 perf-profile.calltrace.cycles-pp.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
32.20 +1.7 33.86 ± 2% perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
32.13 +1.7 33.79 ± 2% perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.__x64_sys_exit_group
2.96 ± 8% +1.7 4.65 ± 13% perf-profile.calltrace.cycles-pp.down_write.__put_anon_vma.unlink_anon_vmas.free_pgtables.exit_mmap
3.74 ± 8% +1.7 5.45 ± 11% perf-profile.calltrace.cycles-pp.__put_anon_vma.unlink_anon_vmas.free_pgtables.exit_mmap.mmput
16.96 ± 2% +1.9 18.89 ± 6% perf-profile.calltrace.cycles-pp.anon_vma_fork.copy_process._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.13 ± 18% +2.5 3.64 ± 2% perf-profile.calltrace.cycles-pp.kernel_wait4.__do_sys_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.13 ± 18% +2.5 3.65 perf-profile.calltrace.cycles-pp.__do_sys_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe
14.70 ± 3% +2.8 17.53 ± 6% perf-profile.calltrace.cycles-pp.free_pgtables.exit_mmap.mmput.do_exit.do_group_exit
12.95 ± 3% +3.0 15.93 ± 8% perf-profile.calltrace.cycles-pp.unlink_anon_vmas.free_pgtables.exit_mmap.mmput.do_exit
0.00 +3.8 3.83 ± 15% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_down_write_failed.down_write.__put_anon_vma.unlink_anon_vmas
0.00 +4.6 4.59 ± 13% perf-profile.calltrace.cycles-pp.rwsem_down_write_failed.down_write.__put_anon_vma.unlink_anon_vmas.free_pgtables
0.00 +4.6 4.65 ± 13% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_down_write_failed.down_write.anon_vma_clone.anon_vma_fork
0.00 +5.0 5.02 ± 11% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_down_write_failed.down_write.anon_vma_fork.copy_process
0.00 +5.2 5.21 ± 13% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_down_write_failed.down_write.unlink_anon_vmas.free_pgtables
0.00 +6.1 6.07 ± 10% perf-profile.calltrace.cycles-pp.rwsem_down_write_failed.down_write.anon_vma_clone.anon_vma_fork.copy_process
0.00 +6.6 6.60 ± 9% perf-profile.calltrace.cycles-pp.rwsem_down_write_failed.down_write.anon_vma_fork.copy_process._do_fork
0.00 +6.6 6.62 ± 11% perf-profile.calltrace.cycles-pp.rwsem_down_write_failed.down_write.unlink_anon_vmas.free_pgtables.exit_mmap
6.72 ± 19% +17.5 24.22 ± 4% perf-profile.calltrace.cycles-pp.copy_process._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.01 ± 19% +18.2 25.22 ± 4% perf-profile.calltrace.cycles-pp._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe
44.08 ± 3% +22.8 66.84 ± 2% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
44.08 ± 3% +22.8 66.86 ± 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
reaim.time.minor_page_faults
8e+08 +-+-----------------------------------------------------------------+
| OO + O O.O+ O O O +O O OO + O +.+ + + + ++ +.|
7e+08 +-+: : : : : : : :: : : : : : : : : :: : |
6e+08 +-+: : : : : : : :: : : : : : : : : :: : |
| : : : : : : : :: : : : : : : : : :: : |
5e+08 +-+:: :: :: : : : : :: : : :: :: : : : : :: : : :: : : : |
| :: :: :: : : : : :: : : :: :: : : : : :: : : :: : : : |
4e+08 +-+:: :: :: : : : : :: : : :: :: : : : : :: : : :: : : : |
| : :: : : :: : : :: : : : : :: :: :: : : :: :: : : :: |
3e+08 +-+ :: : : :: : : :: : : : : :: :: :: : : :: :: : : :: |
2e+08 +-+ :: : : :: : : :: : : : : :: :: :: : : :: :: : : :: |
| : : : : :: : : : : : : : :: : : : : |
1e+08 +-+ : : : :: : : : : : : : :: : : : : |
| : : : : :: : : : : : : : :: : : : : |
0 O-+--OO-O--O----O--O---O--O-O---O-O----OO---------------------------+
reaim.time.involuntary_context_switches
2.5e+07 +-+---------------------------------------------------------------+
| |
| OO + O OO.+O O O +O O OO + O +.+ + + + ++ +.|
2e+07 +-+: : : : : : : :: : : : : : : : : :: : |
| : : : : : : : :: : : : : : : : : :: : |
| :: :: :: : : :: : : : : : : : : :: :: : : : : |
1.5e+07 +-+:: :: :: : : :: :: : : :: :: :: : : :: :: :: : : : |
| :: :: :: : : :: :: : : :: :: :: : : :: :: :: : : : |
1e+07 +-+:: :: :: : : :: : :: : : :: :: :: : :: :: : :: : : |
| : :: :: : : : : : : :: : : :: :: :: : : :: : : :: :: |
| : :: :: : : : : : : :: : : :: :: :: : : :: : : :: :: |
5e+06 +-+ : : :: : : :: :: : : :: :: :: : : : :: :: : |
| : : : : : : : : : : : : : : : : : : : |
| : : : : : : : : : : : : : : : : : : : |
0 O-+--OO-O--O----O--O--O--O-O---O-O----OO--------------------------+
reaim.parent_time
14 +-+--------------------------------------------------------------------+
| OO O O |
12 +-OO + O ++.+ O O O ++ O + O + O +.+ + + + +.+ +.|
| : : : : : : : :: : : : : : : : : : : : |
10 +-+: : : : : : : :: : : : : : : : : : : : |
| :: : :: : : :: :: : : :: : :: : : :: :: :: : : : |
8 +-+:: :: : : : : : : :: : : :: :: : : : : : : :: : : : : : |
| :: :: : : : : : : :: : : :: :: : : : : : : :: : : : : : |
6 +-+:: : :: : : : : : :: : : :: : :: : : : : : :: : : : : : |
| : : : :: : : : : :: : : : : : : :: :: : : :: : : :: :: |
4 +-+ : : :: : : : : :: : : : : : : :: :: : : :: : : :: :: |
| : :: :: :: : : : :: : : :: :: : : : : :: : : |
2 +-+ : : : :: : : : : : : : :: : : : : |
| : : : : :: : : : : : : : :: : : : : |
0 O-+--O-OO---O----O--O---O--O-O----O-O----OO----------------------------+
reaim.child_systime
350 +-+----------O-O---------O----O-O----O--------------------------------+
| OO + O +.+.+O O + +.+ +O + O ++ + + + +.+ +.|
300 +-+: : : : : : : : : : : : :: : : : : : : |
| : : : : : : : : : : : : :: : : : : : : |
250 +-+: : : : : : : : : : : : :: : : : : : : |
| :: :: : : : : :: : : : : :: : : :: : : :: :: : : : : : |
200 +-+:: :: : : : : :: : : : : :: : : :: : : :: :: : : : : : |
| :: :: : : : : :: : : : : :: : : :: : : :: :: : : : : : |
150 +-+ : : :: :: : : :: :: : : :: :: : : : : : : :: :: :: |
| : : : :: :: : : :: :: : : :: :: : : : : : : :: :: :: |
100 +-+ : : :: :: : : :: :: : : :: :: : : : : : : :: :: :: |
| : : : : : : : : : : : : : : : : : : : |
50 +-+ : : : : : : : : : : : : : : : : : : |
| : : : : : : : : : : : : : : : : : : : |
0 O-+--O-OO---O----O--O--O---OO----O-O----O-O---------------------------+
reaim.jobs_per_min
60000 +-+-----------------------------------------------------------------+
| |
50000 +-++ + + +.++ + + ++ + + + +.+ + + + ++ +.|
| OO : O O O: O O O :O O OO : O : : : : : :: : |
| : : : : : : : :: : : : : : : : : :: : |
40000 +-+:: :: : : : :: :: : : : : :: : : : :: :: : : : |
| :: :: :: : : : : :: : : :: :: : : : : :: : : :: : : : |
30000 +-+:: :: :: : : : : :: : : :: :: : : : : :: : : :: : : : |
| :: :: : :: : : : :: : : : :: :: : : : : :: : :: : : : |
20000 +-+ :: : : :: : : :: : : : : :: :: :: : : :: :: : : :: |
| : :: : : :: : : :: : : : : :: :: :: : : :: :: : : :: |
| : : :: :: : : : :: : : :: :: : : : :: : :: : |
10000 +-+ : : : :: : : : : : : : :: : : : : |
| : : : : :: : : : : : : : :: : : : : |
0 O-+--OO-O--O----O--O---O--O-O---O-O----OO---------------------------+
reaim.jobs_per_min_child
600 +-+-------------------------------------------------------------------+
| |
500 +-++ + + +.+.+ + + +.+ + + + ++ + + + +.+ +.|
| OO : O O O :O O O : O O OO : O :: : : : : : : |
| : : : : : : : : : : : : :: : : : : : : |
400 +-+:: : :: : : : :: : : : :: :: : : :: : :: : : : |
| :: :: : : : : :: : : : : :: : : :: : : :: :: : : : : : |
300 +-+:: :: : : : : :: : : : : :: : : :: : : :: :: : : : : : |
| :: : :: : : : : :: : : : : :: : :: : : :: : :: : : : : |
200 +-+ : : :: :: : : :: :: : : :: :: : : : : : : :: :: :: |
| : : : :: :: : : :: :: : : :: :: : : : : : : :: :: :: |
| : :: :: : : : :: : : : :: : :: : : :: :: : : |
100 +-+ : : : : : : : : : : : : : : : : : : |
| : : : : : : : : : : : : : : : : : : : |
0 O-+--O-OO---O----O--O--O---OO----O-O----O-O---------------------------+
reaim.workload
250000 +-+----------------------------------------------------------------+
| |
| OO + O OO.+ O O O +.OO O O+ O ++ + + + ++ +.|
200000 +-+: : : : : : : : : : : : :: : : : :: : |
| : : : : : : : : : : : : :: : : : :: : |
| :: :: : : : : :: : : :: :: :: : : :: :: :: : : : |
150000 +-+:: :: :: : : :: : : : : :: :: :: : : : : :: :: : : : |
| :: :: :: : : :: : : : : :: :: :: : : : : :: :: : : : |
100000 +-+:: :: : :: : : :: : : : :: :: :: : : : : :: :: : : : |
| : :: : : :: : : :: :: : : :: :: : : : : :: :: : : :: |
| : :: : : :: : : :: :: : : :: :: : : : : :: :: : : :: |
50000 +-+ : :: :: : : :: : : : : : :: : : : : :: : |
| : : : : :: : : : : : : : :: : : : : |
| : : : : :: : : : : : : : :: : : : : |
0 O-+--OO-O--O----O--O--O---OO----O-O---O-O--------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Thanks,
Rong Chen
[btrfs] c99d2765f3: aim7.jobs-per-min 4.5% improvement
by kernel test robot
Greetings,
FYI, we noticed a 4.5% improvement of aim7.jobs-per-min due to commit:
commit: c99d2765f3417c88a25835dd7394824d85dbb00e ("btrfs: use assertion helpers for spinning readers")
https://github.com/kdave/btrfs-devel.git misc-next
in testcase: aim7
on test machine: 72 threads Intel(R) Xeon(R) Gold 6139 CPU @ 2.30GHz with 128G memory
with following parameters:
disk: 1BRD_48G
fs: btrfs
test: disk_src
load: 500
cpufreq_governor: performance
test-description: AIM7 is a traditional UNIX system-level benchmark suite used to test and measure the performance of multiuser systems.
test-url: https://sourceforge.net/projects/aimbench/files/aim-suite7/
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/rootfs/tbox_group/test/testcase:
gcc-7/performance/1BRD_48G/btrfs/x86_64-rhel-7.6/500/debian-x86_64-2018-04-03.cgz/lkp-skl-2sp7/disk_src/aim7
commit:
184175a6c7 ("btrfs: add assertion helpers for spinning readers")
c99d2765f3 ("btrfs: use assertion helpers for spinning readers")
184175a6c7a0ca80 c99d2765f3417c88a25835dd739
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
%stddev %change %stddev
\ | \
34171 +4.5% 35697 aim7.jobs-per-min
88.55 ± 2% -4.7% 84.37 aim7.time.elapsed_time
88.55 ± 2% -4.7% 84.37 aim7.time.elapsed_time.max
13048113 ± 3% +19.5% 15596179 ± 11% cpuidle.POLL.time
14584 ± 27% -18.0% 11954 ± 2% proc-vmstat.nr_dirtied
5.32 ± 3% +5.7% 5.62 ± 5% perf-stat.i.cpi
960.10 ± 4% +7.9% 1035 perf-stat.overall.cycles-between-cache-misses
24677 ± 4% -9.5% 22323 ± 4% softirqs.CPU0.RCU
24119 ± 6% -9.4% 21846 ± 5% softirqs.CPU15.RCU
25122 ± 4% -6.8% 23419 ± 3% softirqs.CPU40.RCU
24696 ± 5% -12.7% 21570 ± 7% softirqs.CPU5.RCU
23826 ± 9% -11.7% 21030 ± 3% softirqs.CPU61.RCU
188.75 ±110% -74.7% 47.75 ± 2% sched_debug.cfs_rq:/.runnable_load_avg.max
859.38 ± 5% +23.3% 1059 ± 15% sched_debug.cfs_rq:/.util_avg.max
3.88 ± 5% +6.9% 4.14 ± 6% sched_debug.cpu.cpu_load[4].avg
299741 ± 79% -80.7% 57737 ± 2% sched_debug.cpu.load.max
189753 ± 7% -13.8% 163598 ± 2% sched_debug.cpu.nr_switches.max
4433 ± 11% -21.8% 3468 ± 6% sched_debug.cpu.nr_switches.stddev
20.75 ± 9% -13.9% 17.88 ± 5% sched_debug.cpu.nr_uninterruptible.max
2591 ± 5% -13.2% 2249 ± 9% interrupts.CPU10.RES:Rescheduling_interrupts
2777 ± 5% -11.1% 2469 ± 9% interrupts.CPU17.RES:Rescheduling_interrupts
2902 ± 5% -13.5% 2511 ± 4% interrupts.CPU38.RES:Rescheduling_interrupts
2904 ± 5% -11.6% 2568 ± 5% interrupts.CPU45.RES:Rescheduling_interrupts
2945 ± 10% -13.6% 2544 ± 10% interrupts.CPU48.RES:Rescheduling_interrupts
2914 ± 4% -9.5% 2636 ± 8% interrupts.CPU51.RES:Rescheduling_interrupts
2734 ± 9% +11.6% 3050 ± 3% interrupts.CPU70.RES:Rescheduling_interrupts
2499 ± 4% -6.1% 2348 ± 6% interrupts.CPU71.NMI:Non-maskable_interrupts
2499 ± 4% -6.1% 2348 ± 6% interrupts.CPU71.PMI:Performance_monitoring_interrupts
3.52 ± 4% -0.5 3.06 ± 5% perf-profile.calltrace.cycles-pp.btrfs_tree_read_lock.btrfs_read_lock_root_node.btrfs_search_slot.btrfs_lookup_inode.__btrfs_update_delayed_inode
0.67 ± 15% -0.4 0.28 ±100% perf-profile.calltrace.cycles-pp.btrfs_read_lock_root_node.btrfs_search_slot.btrfs_lookup_inode.__btrfs_update_delayed_inode.btrfs_async_run_delayed_root
1.54 ± 15% -0.3 1.27 ± 3% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
1.52 ± 15% -0.3 1.26 ± 3% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
1.51 ± 15% -0.3 1.25 ± 3% perf-profile.calltrace.cycles-pp.normal_work_helper.process_one_work.worker_thread.kthread.ret_from_fork
1.54 ± 15% -0.3 1.29 ± 3% perf-profile.calltrace.cycles-pp.ret_from_fork
1.54 ± 15% -0.3 1.29 ± 3% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
1.50 ± 16% -0.3 1.25 ± 3% perf-profile.calltrace.cycles-pp.btrfs_async_run_delayed_root.normal_work_helper.process_one_work.worker_thread.kthread
0.76 ± 15% -0.2 0.58 ± 6% perf-profile.calltrace.cycles-pp.__btrfs_update_delayed_inode.btrfs_async_run_delayed_root.normal_work_helper.process_one_work.worker_thread
0.73 ± 15% -0.2 0.56 ± 7% perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_lookup_inode.__btrfs_update_delayed_inode.btrfs_async_run_delayed_root.normal_work_helper
0.73 ± 15% -0.2 0.56 ± 7% perf-profile.calltrace.cycles-pp.btrfs_lookup_inode.__btrfs_update_delayed_inode.btrfs_async_run_delayed_root.normal_work_helper.process_one_work
0.67 -0.0 0.64 ± 2% perf-profile.calltrace.cycles-pp.finish_wait.btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot.btrfs_del_orphan_item
0.61 ± 7% +0.1 0.68 ± 6% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__wake_up_common_lock.btrfs_release_path.__btrfs_unlink_inode
0.62 ± 7% +0.1 0.70 ± 6% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__wake_up_common_lock.btrfs_release_path.__btrfs_unlink_inode.btrfs_unlink_inode
0.68 ± 7% +0.1 0.77 ± 4% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__wake_up_common_lock.btrfs_release_path.__btrfs_update_delayed_inode
1.23 ± 3% +0.1 1.32 ± 3% perf-profile.calltrace.cycles-pp.btrfs_release_path.__btrfs_update_delayed_inode.btrfs_commit_inode_delayed_inode.btrfs_evict_inode.evict
0.70 ± 7% +0.1 0.79 ± 4% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__wake_up_common_lock.btrfs_release_path.__btrfs_update_delayed_inode.btrfs_commit_inode_delayed_inode
0.73 ± 9% +0.1 0.86 ± 6% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__wake_up_common_lock.btrfs_release_path.btrfs_free_path.btrfs_truncate_inode_items
1.07 ± 2% +0.1 1.20 ± 4% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.btrfs_release_path.btrfs_free_path.btrfs_del_orphan_item.btrfs_evict_inode
1.22 ± 5% +0.1 1.37 ± 5% perf-profile.calltrace.cycles-pp.btrfs_release_path.btrfs_free_path.btrfs_truncate_inode_items.btrfs_evict_inode.evict
1.22 ± 5% +0.1 1.37 ± 5% perf-profile.calltrace.cycles-pp.btrfs_free_path.btrfs_truncate_inode_items.btrfs_evict_inode.evict.do_unlinkat
0.74 ± 5% +0.2 0.90 ± 5% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__wake_up_common_lock.btrfs_release_path.btrfs_free_path.btrfs_del_orphan_item
1.16 ± 3% +0.2 1.32 ± 4% perf-profile.calltrace.cycles-pp.btrfs_release_path.btrfs_free_path.btrfs_del_orphan_item.btrfs_evict_inode.evict
1.16 ± 3% +0.2 1.32 ± 4% perf-profile.calltrace.cycles-pp.btrfs_free_path.btrfs_del_orphan_item.btrfs_evict_inode.evict.do_unlinkat
1.44 ± 6% +0.3 1.72 ± 5% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__wake_up_common_lock.btrfs_release_path.btrfs_free_path
2.55 ± 8% +0.5 3.05 ± 4% perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.do_unlinkat
2.82 ± 8% +0.6 3.41 ± 4% perf-profile.calltrace.cycles-pp.call_rwsem_down_write_failed.down_write.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.82 ± 8% +0.6 3.41 ± 4% perf-profile.calltrace.cycles-pp.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.do_unlinkat.do_syscall_64
14.86 ± 4% +1.0 15.83 ± 3% perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.16 ± 58% +1.3 3.43 ± 4% perf-profile.calltrace.cycles-pp.down_write.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
11.25 ± 57% +4.6 15.83 ± 3% perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat
11.29 ± 57% +4.6 15.89 ± 3% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat
11.29 ± 57% +4.6 15.88 ± 3% perf-profile.calltrace.cycles-pp.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat
11.29 ± 57% +4.6 15.89 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.creat
11.35 ± 57% +4.6 15.97 ± 3% perf-profile.calltrace.cycles-pp.creat
aim7.jobs-per-min
39000 +-+-----------------------------------------------------------------+
| O O O O O O |
38000 O-O O O O O O O O O O O O O O |
37000 +-+ O O O O O |
| O O O O |
36000 +-+ O O |
35000 +-+ O
| +. .+..+. |
34000 +-+ : + +.+ |
33000 +-+ : |
| .+ : |
32000 +-+ .+..+.+. .+.+. .+.+..+.+. .+.+.+. .+.+ : : |
31000 +-+.+ + + +.+ +..+ :: |
| + |
30000 +-+-----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Thanks,
Rong Chen
68b68a99e7 [ 68.054311] BUG kmalloc-4k (Not tainted): Poison overwritten
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
git://linuxtv.org/sailus/media_tree.git master
commit 68b68a99e7d028f95e97bf05279e5f3f350af3ad
Author: Sakari Ailus <sakari.ailus(a)linux.intel.com>
AuthorDate: Fri Apr 5 17:47:04 2019 +0300
Commit: Sakari Ailus <sakari.ailus(a)linux.intel.com>
CommitDate: Mon Apr 8 12:43:34 2019 +0300
media: Set subdev's device node to NULL on devnode unregistration
When an async sub-device is bound to a notifier, or when the notifier's
complete callback is called, a device node will be created for the async
sub-device. Once the notifier is unregistered, the async sub-device's
device node is unregistered, too. But the pointer to this device node was
left dangling. While the pointer is not apparently dereferenced, it will
prevent registering a new device node for that async sub-device if a new
notifier that matches with the async sub-device is registered.
Fix this by setting the device node pointer to NULL when it is
unregistered.
Fixes: 0e43734d4c46 ("media: v4l2-subdev: add release() internal op")
Signed-off-by: Sakari Ailus <sakari.ailus(a)linux.intel.com>
9d608d3e89 media: ov2659: fix unbalanced mutex_lock/unlock
68b68a99e7 media: Set subdev's device node to NULL on devnode unregistration
fc9ec521a0 ipu3-imgu: Use %u for formatting unsigned values (not %d)
+------------------------------------------------------------------+------------+------------+------------+
| | 9d608d3e89 | 68b68a99e7 | fc9ec521a0 |
+------------------------------------------------------------------+------------+------------+------------+
| boot_successes | 31 | 0 | 0 |
| boot_failures | 0 | 11 | 11 |
| BUG_kmalloc-#k(Not_tainted):Poison_overwritten | 0 | 11 | 11 |
| INFO:0x(____ptrval____)-0x(____ptrval____).First_byte#instead_of | 0 | 11 | 11 |
| INFO:Allocated_in_vimc_sen_comp_bind_age=#cpu=#pid= | 0 | 11 | 11 |
| INFO:Freed_in_v4l2_device_release_subdev_node_age=#cpu=#pid= | 0 | 11 | 11 |
| INFO:Slab0x(____ptrval____)objects=#used=#fp=0x(#)flags= | 0 | 11 | 11 |
| INFO:Object0x(____ptrval____)@offset=#fp=0x(____ptrval____) | 0 | 11 | 11 |
| BUG_kmalloc-#k(Tainted:G_B):Poison_overwritten | 0 | 6 | 6 |
| BUG_kmalloc-#(Tainted:G_B):Poison_overwritten | 0 | 7 | 9 |
| INFO:Allocated_in_vimc_deb_comp_bind_age=#cpu=#pid= | 0 | 7 | 9 |
| INFO:Object0x(____ptrval____)@offset=#fp=0x(null) | 0 | 1 | |
| INFO:Allocated_in_vimc_sca_comp_bind_age=#cpu=#pid= | 0 | 1 | 1 |
+------------------------------------------------------------------+------------+------------+------------+
[ 68.030232] vimc vimc.0: bound vimc-capture.5.auto (ops vimc_cap_comp_ops)
[ 68.041719] vimc vimc.0: bound vimc-sensor.6.auto (ops vimc_sen_comp_ops)
[ 68.043293] vimc vimc.0: bound vimc-scaler.7.auto (ops vimc_sca_comp_ops)
[ 68.045315] vimc vimc.0: bound vimc-capture.8.auto (ops vimc_cap_comp_ops)
[ 68.052403] =============================================================================
[ 68.054311] BUG kmalloc-4k (Not tainted): Poison overwritten
[ 68.055557] -----------------------------------------------------------------------------
[ 68.055557]
[ 68.056353] Disabling lock debugging due to kernel taint
[ 68.056353] INFO: 0x(____ptrval____)-0x(____ptrval____). First byte 0x0 instead of 0x6b
[ 68.056353] INFO: Allocated in vimc_sen_comp_bind+0x61/0x2e0 age=7 cpu=1 pid=1
[ 68.056353] kmem_cache_alloc_trace+0x323/0x370
[ 68.056353] vimc_sen_comp_bind+0x61/0x2e0
[ 68.056353] component_bind_all+0x116/0x240
[ 68.056353] vimc_comp_bind+0x39/0x180
[ 68.056353] try_to_bring_up_master+0x12c/0x170
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 6e17fb1d1a384e20f2699b6fdfa5cd4e61360eb4 15ade5d2e7775667cf191cf2f94327a4889f8b9d --
git bisect bad 2319249bb077e92e44b564ea8a5759b2c3238a9c # 00:55 B 0 7 21 0 Merge 'linux-review/Chris-Wilson/drm-i915-Be-precise-in-types-for-i915_gem_busy/20190404-194327' into devel-hourly-2019041006
git bisect good 5dbb2d9d86154612f85163aeeb61a9714f120be8 # 01:20 G 10 0 0 0 Merge 'linux-review/Dongli-Zhang/swiotlb-dump-used-and-total-slots-when-swiotlb-buffer-is-full/20190408-013102' into devel-hourly-2019041006
git bisect bad bd615aa8e1f7dbb21da5c2953c1adaf4bd4c1bd6 # 01:32 B 0 1 15 0 Merge 'dlm/next' into devel-hourly-2019041006
git bisect bad b407cebeaabc194d4ab4fbd9ef8737c044fca917 # 01:55 B 0 1 15 0 Merge 'linux-review/Qu-Wenruo/block-Add-new-BLK_STS_SELFTEST-status/20190407-042001' into devel-hourly-2019041006
git bisect bad 073945cfa6eb671ec63393ece470d15ca22ab7f1 # 02:14 B 0 5 19 0 Merge 'linux-review/Michal-Suchanek/dmaengine-bcm2835-Drop-duplicate-capability-setting/20190407-173405' into devel-hourly-2019041006
git bisect bad c3cc11b4792d8f6334758fb3002ddcf6663cc319 # 02:34 B 0 1 15 0 Merge 'sailus-media/master' into devel-hourly-2019041006
git bisect good 1e1bdecb0bcbf25832b39738ee833954ac996ee2 # 03:14 G 11 0 0 0 Merge 'linux-review/Hook-Gary/x86-mm-mem_encrypt-Disable-all-instrumentation-for-SME-early-boot-code/20190408-003938' into devel-hourly-2019041006
git bisect good d5639d5279f2fb0a5c2a14d9a74f7f33845cd2c9 # 03:40 G 11 0 0 0 Merge 'linux-review/Christopher-M-Riedl/powerpc-xmon-add-read-only-mode/20190408-123724' into devel-hourly-2019041006
git bisect good 73071eda1c4fdf474563f6a333fef76385d5d2eb # 04:00 G 11 0 0 0 Merge 'leo/next' into devel-hourly-2019041006
git bisect good dff077905fde45d054812774a18c4cba642cfa71 # 04:21 G 11 0 0 0 Merge 'linux-review/Jan-Kotas/soundwire-fix-pm_runtime_get_sync-return-code-checks/20190407-234315' into devel-hourly-2019041006
git bisect bad 68b68a99e7d028f95e97bf05279e5f3f350af3ad # 04:31 B 0 4 18 0 media: Set subdev's device node to NULL on devnode unregistration
git bisect good 75def1624b3fdc959a79bfeef0f9e9518866e545 # 05:44 G 11 0 6 6 media: ov6650: Register with asynchronous subdevice framework
git bisect good 9d608d3e894422936a7a8688313c208de350c2d4 # 06:07 G 11 0 0 0 media: ov2659: fix unbalanced mutex_lock/unlock
# first bad commit: [68b68a99e7d028f95e97bf05279e5f3f350af3ad] media: Set subdev's device node to NULL on devnode unregistration
git bisect good 9d608d3e894422936a7a8688313c208de350c2d4 # 06:17 G 30 0 0 0 media: ov2659: fix unbalanced mutex_lock/unlock
# extra tests with debug options
git bisect bad 68b68a99e7d028f95e97bf05279e5f3f350af3ad # 06:33 B 0 1 15 0 media: Set subdev's device node to NULL on devnode unregistration
# extra tests on HEAD of linux-devel/devel-hourly-2019041006
git bisect bad 6e17fb1d1a384e20f2699b6fdfa5cd4e61360eb4 # 06:38 B 0 15 34 0 0day head guard for 'devel-hourly-2019041006'
# extra tests on tree/branch sailus-media/master
git bisect bad fc9ec521a0e654c8c4c43ef24a9bbb74de86725e # 07:03 B 0 1 15 0 ipu3-imgu: Use %u for formatting unsigned values (not %d)
# extra tests with first bad commit reverted
git bisect good 60381118a475ef23df30486b5e1efa6b28da5427 # 07:26 G 11 0 0 0 Revert "media: Set subdev's device node to NULL on devnode unregistration"
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
3b4ba6643d ("locking/rwsem: Enhance DEBUG_RWSEMS_WARN_ON() .."): WARNING: CPU: 0 PID: 0 at kernel/locking/rwsem.h:273 up_write
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git WIP.locking/core
commit 3b4ba6643d26a95e08067fca9a5da1828f9afabf
Author: Waiman Long <longman(a)redhat.com>
AuthorDate: Thu Apr 4 13:43:15 2019 -0400
Commit: Ingo Molnar <mingo(a)kernel.org>
CommitDate: Wed Apr 10 10:56:03 2019 +0200
locking/rwsem: Enhance DEBUG_RWSEMS_WARN_ON() macro
Currently, the DEBUG_RWSEMS_WARN_ON() macro just dumps a stack trace
when the rwsem isn't in the right state. It does not show the actual
states of the rwsem. This may not be that helpful in the debugging
process.
Enhance the DEBUG_RWSEMS_WARN_ON() macro to also show the current
content of the rwsem count and owner fields to give more information
about what is wrong with the rwsem. The debug_locks_off() function is
called as is done inside DEBUG_LOCKS_WARN_ON().
Signed-off-by: Waiman Long <longman(a)redhat.com>
Acked-by: Peter Zijlstra <a.p.zijlstra(a)chello.nl>
Acked-by: Davidlohr Bueso <dbueso(a)suse.de>
Cc: Andrew Morton <akpm(a)linux-foundation.org>
Cc: Arnd Bergmann <arnd(a)arndb.de>
Cc: Borislav Petkov <bp(a)alien8.de>
Cc: Davidlohr Bueso <dave(a)stgolabs.net>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Paul E. McKenney <paulmck(a)linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Tim Chen <tim.c.chen(a)linux.intel.com>
Cc: Will Deacon <will.deacon(a)arm.com>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
a68e2c4c63 locking/rwsem: Add debug check for __down_read*()
3b4ba6643d locking/rwsem: Enhance DEBUG_RWSEMS_WARN_ON() macro
5c587ed687 locking/rwsem: Remove redundant computation of writer lock word
31437a258f Merge branch 'perf/urgent'
+---------------------------------------------+------------+------------+------------+------------+
| | a68e2c4c63 | 3b4ba6643d | 5c587ed687 | 31437a258f |
+---------------------------------------------+------------+------------+------------+------------+
| boot_successes | 40 | 0 | 0 | 11 |
| boot_failures | 0 | 11 | 13 | |
| WARNING:at_kernel/locking/rwsem.h:#up_write | 0 | 11 | 13 | |
| EIP:up_write | 0 | 11 | 13 | |
+---------------------------------------------+------------+------------+------------+------------+
[ 0.191085] A-B-C-D-B-D-D-A deadlock: ok | ok | ok | ok | ok | ok | ok |
[ 0.197274] A-B-C-D-B-C-D-A deadlock: ok | ok | ok | ok | ok | ok | ok |
[ 0.203483] double unlock: ok | ok | ok | ok |
[ 0.205265] ------------[ cut here ]------------
[ 0.206323] DEBUG_RWSEMS_WARN_ON(sem->owner != current): count = 0x0, owner = 0x0, curr 0xd4657d00, list empty
[ 0.207250] WARNING: CPU: 0 PID: 0 at kernel/locking/rwsem.h:273 up_write+0x9f/0xb0
[ 0.208118] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G T 5.1.0-rc4-00065-g3b4ba66 #1
[ 0.208930] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 0.209691] EIP: up_write+0x9f/0xb0
[ 0.210021] Code: 05 b3 66 79 d4 01 39 ce be 3d af 51 d4 b9 8c cd 52 d4 0f 45 ce 8b 33 51 52 50 56 68 42 af 51 d4 68 b8 af 51 d4 e8 51 16 fb ff <0f> 0b 83 c4 18 eb b3 8d 76 00 8d bc 27 00 00 00 00 3e 8d 74 26 00
[ 0.211716] EAX: 00000062 EBX: d46c49e0 ECX: d36c6044 EDX: 00000002
[ 0.212294] ESI: 00000000 EDI: d3990d40 EBP: d464bf34 ESP: d464bf14
[ 0.212874] DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068 EFLAGS: 00210286
[ 0.213495] CR0: 80050033 CR2: ffffffff CR3: 14a12000 CR4: 000406b0
[ 0.214076] DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
[ 0.214657] DR6: fffe0ff0 DR7: 00000400
[ 0.215018] Call Trace:
[ 0.215253] double_unlock_wsem+0x21/0x30
[ 0.215626] dotest+0x2a/0x5c0
[ 0.215918] locking_selftest+0x4d6/0x1cc0
[ 0.216301] start_kernel+0x358/0x42c
[ 0.216642] i386_start_kernel+0xac/0xb0
[ 0.217013] startup_32_smp+0x164/0x170
[ 0.217371] irq event stamp: 385
[ 0.217675] hardirqs last enabled at (385): [<d36c6075>] vprintk_emit+0x135/0x370
[ 0.218373] hardirqs last disabled at (384): [<d36c5f8c>] vprintk_emit+0x4c/0x370
[ 0.219063] softirqs last enabled at (0): [<00000000>] (null)
[ 0.219615] softirqs last disabled at (0): [<00000000>] (null)
[ 0.220173] random: get_random_bytes called from print_oops_end_marker+0x4f/0x60 with crng_init=0
[ 0.220988] ---[ end trace 7d35dc4b16298f6a ]---
[ 0.221416] ok | ok | ok |
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 5c587ed687faed2eb0afdd669ddd167d0d940236 5e7a8ca319268a70a6c7c3c1fde5bea38e1e5539 --
git bisect bad fb346fd9fc081c3d978c3f3d26d39334527a2662 # 20:06 B 0 11 35 10 locking/lock_events: Make lock_events available for all archs & other locks
git bisect good eecec78f777742903ec9167490c625661284155d # 20:25 G 11 0 0 0 locking/rwsem: Relocate rwsem_down_read_failed()
git bisect good a338ecb07a338c9a8b0ca0010e862ebe598b1551 # 20:35 G 11 0 0 0 locking/rwsem: Micro-optimize rwsem_try_read_lock_unqueued()
git bisect bad 3b4ba6643d26a95e08067fca9a5da1828f9afabf # 20:50 B 0 10 24 0 locking/rwsem: Enhance DEBUG_RWSEMS_WARN_ON() macro
git bisect good a68e2c4c637918da47b3aa270051545cff7d8245 # 21:00 G 11 0 0 0 locking/rwsem: Add debug check for __down_read*()
# first bad commit: [3b4ba6643d26a95e08067fca9a5da1828f9afabf] locking/rwsem: Enhance DEBUG_RWSEMS_WARN_ON() macro
git bisect good a68e2c4c637918da47b3aa270051545cff7d8245 # 21:04 G 31 0 0 0 locking/rwsem: Add debug check for __down_read*()
# extra tests on HEAD of tip/WIP.locking/core
git bisect bad 5c587ed687faed2eb0afdd669ddd167d0d940236 # 21:04 B 0 13 30 0 locking/rwsem: Remove redundant computation of writer lock word
# extra tests on tree/branch tip/WIP.locking/core
git bisect bad 5c587ed687faed2eb0afdd669ddd167d0d940236 # 21:06 B 0 13 30 0 locking/rwsem: Remove redundant computation of writer lock word
# extra tests on tree/branch tip/master
git bisect good 31437a258fa637d7449385ef2e1b33efc6786397 # 21:20 G 11 0 0 0 Merge branch 'perf/urgent'