[torture] 33e04e4512: BUG:unable_to_handle_kernel
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 33e04e4512797b5e0242f452d0027b096d43d006 ("torture: Allow inter-stutter interval to be specified")
https://git.kernel.org/cgit/linux/kernel/git/paulmck/linux-rcu.git dev.2019.04.09b
in testcase: rcutorture
with following parameters:
runtime: 300s
test: default
torture_type: tasks
test-description: rcutorture is a kernel module load/unload test for RCU.
test-url: https://www.kernel.org/doc/Documentation/RCU/torture.txt
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the full log/backtrace):
+--------------------------------------------------------------------+------------+------------+
| | 7c932cda19 | 33e04e4512 |
+--------------------------------------------------------------------+------------+------------+
| boot_successes | 23 | 16 |
| boot_failures | 21 | 31 |
| WARNING:at_kernel/rcu/rcutorture.c:#rcu_torture_writer[rcutorture] | 14 | 13 |
| RIP:rcu_torture_writer[rcutorture] | 14 | 13 |
| BUG:kernel_reboot-without-warning_in_test_stage | 7 | |
| BUG:soft_lockup-CPU##stuck_for#s | 0 | 5 |
| RIP:free_unref_page | 0 | 1 |
| Kernel_panic-not_syncing:softlockup:hung_tasks | 0 | 5 |
| BUG:kernel_hang_in_boot-around-mounting-root_stage | 0 | 1 |
| RIP:free_reserved_area | 0 | 4 |
| BUG:unable_to_handle_kernel | 0 | 12 |
| Oops:#[##] | 0 | 12 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 12 |
+--------------------------------------------------------------------+------------+------------+
[ 20.150080] BUG: unable to handle kernel NULL pointer dereference at 0000000000000000
[ 20.153862] tasks-torture: torture_shuffle task started
[ 20.154767] #PF error: [INSTR]
[ 20.161581] PGD 0 P4D 0
[ 20.164301] Oops: 0010 [#1] SMP PTI
[ 20.167187] CPU: 1 PID: 573 Comm: modprobe Not tainted 5.1.0-rc1-00105-g33e04e4 #2
[ 20.171616] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 20.176345] RIP: 0010: (null)
[ 20.179630] Code: Bad RIP value.
[ 20.182604] RSP: 0000:ffffa7a78140fc90 EFLAGS: 00010206
[ 20.186045] RAX: 0000000000000000 RBX: 0000000000000001 RCX: 0000000000000000
[ 20.190299] RDX: ffff89d7eb824200 RSI: ffffffffc03b1040 RDI: ffffffffc03b1020
[ 20.194170] RBP: ffffffffc03ffd88 R08: ffff89d825f7b9c0 R09: ffff89d707c03980
[ 20.198134] R10: 0000000000000001 R11: 0000000000000000 R12: ffffffffc041d338
[ 20.201975] R13: ffffffffc041d368 R14: ffffffffc0402500 R15: ffffffffc04026c0
[ 20.205845] FS: 00007faf0abb6700(0000) GS:ffff89d83fd00000(0000) knlGS:0000000000000000
[ 20.210047] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 20.213575] CR2: ffffffffffffffd6 CR3: 00000001eb806000 CR4: 00000000000006e0
[ 20.217485] Call Trace:
[ 20.220067] rcu_torture_init+0x534/0x1000 [rcutorture]
[ 20.223381] ? 0xffffffffc0432000
[ 20.226264] do_one_initcall+0x46/0x1e4
[ 20.230720] ? _cond_resched+0x19/0x30
[ 20.233990] ? kmem_cache_alloc_trace+0x3b/0x1d0
[ 20.237322] do_init_module+0x5b/0x210
[ 20.240222] load_module+0x1871/0x1f40
[ 20.243058] ? ima_post_read_file+0xe2/0x120
[ 20.245953] ? __do_sys_finit_module+0xe9/0x110
[ 20.248898] __do_sys_finit_module+0xe9/0x110
[ 20.251851] do_syscall_64+0x5b/0x1a0
[ 20.254414] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 20.257372] RIP: 0033:0x7faf0a6e1229
[ 20.259934] Code: 00 f3 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 3f 4c 2b 00 f7 d8 64 89 01 48
[ 20.268666] RSP: 002b:00007ffc8cb92a48 EFLAGS: 00000246 ORIG_RAX: 0000000000000139
[ 20.272501] RAX: ffffffffffffffda RBX: 0000561e8de3e3d0 RCX: 00007faf0a6e1229
[ 20.276305] RDX: 0000000000000000 RSI: 0000561e8de3d510 RDI: 0000000000000005
[ 20.280050] RBP: 0000561e8de3d510 R08: 0000000000000000 R09: 000000000000000e
[ 20.283826] R10: 0000000000000005 R11: 0000000000000246 R12: 0000000000000000
[ 20.293063] R13: 0000561e8de3e4e0 R14: 0000000000040000 R15: 0000561e8de3d4d0
[ 20.296743] Modules linked in: rcutorture(+) torture sr_mod cdrom bochs_drm sg ttm drm_kms_helper ppdev crct10dif_pclmul crc32_pclmul crc32c_intel ata_generic ghash_clmulni_intel pata_acpi syscopyarea sysfillrect sysimgblt fb_sys_fops snd_pcm aesni_intel snd_timer drm ata_piix snd crypto_simd cryptd libata joydev soundcore glue_helper serio_raw pcspkr parport_pc i2c_piix4 parport floppy ip_tables
[ 20.314280] CR2: 0000000000000000
[ 20.317405] ---[ end trace ba8f2eb0ce74c268 ]---
To reproduce:
# build kernel
cd linux
cp config-5.1.0-rc1-00105-g33e04e4 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 modules_prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 SHELL=/bin/bash
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
293f439662 ("netfilter: nf_nat: register amanda NAT helper."): kernel BUG at net/netfilter/nf_conntrack_helper.c:547!
by kernel test robot
Greetings,
the 0day kernel testing robot got the dmesg below, and the first bad commit is
https://github.com/0day-ci/linux/commits/Flavio-Leitner/openvswitch-load-...
commit 293f439662715da7efbf21c268b35ee8fb9fe8e8
Author: Flavio Leitner <fbl(a)redhat.com>
AuthorDate: Sat Apr 13 20:17:11 2019 -0300
Commit: 0day robot <lkp(a)intel.com>
CommitDate: Mon Apr 15 17:50:32 2019 +0800
netfilter: nf_nat: register amanda NAT helper.
Signed-off-by: Flavio Leitner <fbl(a)redhat.com>
ac824d37d6 netfilter: add API to manage NAT helpers.
293f439662 netfilter: nf_nat: register amanda NAT helper.
4c826f1cd2 openvswitch: load and reference the NAT helper.
+---------------------------------------------------+------------+------------+------------+
| | ac824d37d6 | 293f439662 | 4c826f1cd2 |
+---------------------------------------------------+------------+------------+------------+
| boot_successes | 42 | 0 | 0 |
| boot_failures | 0 | 11 | 11 |
| kernel_BUG_at_net/netfilter/nf_conntrack_helper.c | 0 | 11 | 11 |
| invalid_opcode:#[##] | 0 | 11 | 11 |
| EIP:nf_nat_helper_register | 0 | 11 | 11 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 11 | 11 |
+---------------------------------------------------+------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <lkp(a)intel.com>
[ 56.223627] intel_oaktrail: Platform not recognized (You could try the module's force-parameter)
[ 56.223927] cros_ec_lpcs: unsupported system.
[ 56.229083] gnss: GNSS driver registered with major 240
[ 56.229814] drop_monitor: Initializing network drop monitor service
[ 56.231205] ------------[ cut here ]------------
[ 56.231620] kernel BUG at net/netfilter/nf_conntrack_helper.c:547!
[ 56.232875] invalid opcode: 0000 [#1]
[ 56.233403] CPU: 0 PID: 1 Comm: swapper Not tainted 5.1.0-rc4-00760-g293f439 #1
[ 56.233865] EIP: nf_nat_helper_register+0x58/0x60
[ 56.233865] Code: 26 b2 ff 84 c0 74 12 89 33 c7 43 04 ac a9 0b 82 89 1d ac a9 0b 82 89 5e 04 b8 e4 57 08 82 e8 4f d5 37 00 5b 5e 5d c3 8d 76 00 <0f> 0b 8d b6 00 00 00 00 3e 8d 74 26 00 55 89 e5 53 8b 50 18 85 d2
[ 56.233865] EAX: 8208594c EBX: 00000006 ECX: 81780164 EDX: 00000000
[ 56.233865] ESI: 82123513 EDI: ffffffff EBP: 9f277f2c ESP: 9f277f24
[ 56.233865] DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 0068 EFLAGS: 00013246
[ 56.233865] CR0: 80050033 CR2: a1800000 CR3: 1d2b1000 CR4: 003406d0
[ 56.233865] DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
[ 56.233865] DR6: fffe0ff0 DR7: 00000400
[ 56.233865] Call Trace:
[ 56.233865] ? nf_nat_init+0xab/0xab
[ 56.233865] nf_nat_amanda_init+0x1d/0x2b
[ 56.233865] do_one_initcall+0x5f/0x116
[ 56.233865] ? parse_args+0x9c/0x2d0
[ 56.233865] ? kernel_init_freeable+0xdd/0x192
[ 56.233865] kernel_init_freeable+0x117/0x192
[ 56.233865] ? rest_init+0x90/0x90
[ 56.233865] kernel_init+0xd/0xf0
[ 56.233865] ret_from_fork+0x19/0x30
[ 56.246695] ---[ end trace dbadf007ab32a164 ]---
[ 56.247316] EIP: nf_nat_helper_register+0x58/0x60
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 51638054df26a79aebb0a986e2eb32859eadc5a2 15ade5d2e7775667cf191cf2f94327a4889f8b9d --
git bisect bad ef4be79df97e4eedbbc125ee185bfa750486ccb6 # 19:40 B 0 1 15 0 Merge 'linux-review/Flavio-Leitner/openvswitch-load-and-reference-the-NAT-helper/20190415-175030' into devel-catchup-201904151823
git bisect good 9278dcaefd7dcdcff9f9ea7956947d2ee97ee91f # 20:11 G 10 0 0 0 0day base guard for 'devel-catchup-201904151823'
git bisect good 48e4adf9afbe5256d0dab383baf310889973811d # 20:34 G 10 0 1 1 net: phy: realtek: use genphy_read_abilities
git bisect good 7f301cff1fc20c5b91203c5e610cf95782081d5d # 20:52 G 10 0 1 1 ethtool: thunder_bgx: use ethtool.h constants for speed and duplex
git bisect good bb23581b9b38703257acabd520aa5ebf1db008af # 21:14 G 11 0 1 1 Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
git bisect good e4edbe3c1f44c84f319149aeb998e7e36b3b897f # 21:37 G 10 0 1 1 rhashtable: fix some __rcu annotation errors
git bisect good 1a49f3c6146f33c42523c8e4f5a72b6f322d5357 # 22:19 G 10 0 3 3 net: hns3: divide shared buffer between TC
git bisect good e62b2fd5d3b4c5c958cf88b92f31960750d88dc5 # 22:35 G 10 0 0 0 r8169: change irq handler to always trigger NAPI polling
git bisect bad 94bdd461241f07a2315f2322ea3e715f0eba0082 # 22:47 B 0 2 16 0 netfilter: nf_nat: register ftp NAT helper.
git bisect good ac824d37d68e21a1214b0de672515eba348929e8 # 23:11 G 10 0 0 0 netfilter: add API to manage NAT helpers.
git bisect bad 293f439662715da7efbf21c268b35ee8fb9fe8e8 # 23:28 B 0 3 17 0 netfilter: nf_nat: register amanda NAT helper.
# first bad commit: [293f439662715da7efbf21c268b35ee8fb9fe8e8] netfilter: nf_nat: register amanda NAT helper.
git bisect good ac824d37d68e21a1214b0de672515eba348929e8 # 23:46 G 30 0 0 0 netfilter: add API to manage NAT helpers.
# extra tests with debug options
git bisect bad 293f439662715da7efbf21c268b35ee8fb9fe8e8 # 23:58 B 0 2 16 0 netfilter: nf_nat: register amanda NAT helper.
# extra tests on HEAD of linux-devel/devel-catchup-201904151823
git bisect bad 51638054df26a79aebb0a986e2eb32859eadc5a2 # 00:04 B 0 15 32 0 0day head guard for 'devel-catchup-201904151823'
# extra tests on tree/branch linux-review/Flavio-Leitner/openvswitch-load-and-reference-the-NAT-helper/20190415-175030
git bisect bad 4c826f1cd2d4d1b4872086e1dac8080e9b6322ce # 00:13 B 0 1 15 0 openvswitch: load and reference the NAT helper.
# extra tests with first bad commit reverted
git bisect bad 27848368bd883ef8fe19fa9cb7501a22c7539b6f # 00:34 B 0 1 15 0 Revert "netfilter: nf_nat: register amanda NAT helper."
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
[mm] ac5b2c1891: vm-scalability.throughput -61.3% regression
by kernel test robot
Greetings,
FYI, we noticed a -61.3% regression of vm-scalability.throughput due to commit:
commit: ac5b2c18911ffe95c08d69273917f90212cf5659 ("mm: thp: relax __GFP_THISNODE for MADV_HUGEPAGE mappings")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: vm-scalability
on test machine: 72 threads Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz with 128G memory
with following parameters:
runtime: 300
thp_enabled: always
thp_defrag: always
nr_task: 32
nr_ssd: 1
test: swap-w-seq
ucode: 0x3d
cpufreq_governor: performance
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ subtree of the Linux kernel that are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
Details are as follows:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_ssd/nr_task/rootfs/runtime/tbox_group/test/testcase/thp_defrag/thp_enabled/ucode:
gcc-7/performance/x86_64-rhel-7.2/1/32/debian-x86_64-2018-04-03.cgz/300/lkp-hsw-ep4/swap-w-seq/vm-scalability/always/always/0x3d
commit:
94e297c50b ("include/linux/notifier.h: SRCU: fix ctags")
ac5b2c1891 ("mm: thp: relax __GFP_THISNODE for MADV_HUGEPAGE mappings")
94e297c50b529f5d ac5b2c18911ffe95c08d692739
---------------- --------------------------
%stddev %change %stddev
\ | \
0.57 ± 35% +258.8% 2.05 ± 4% vm-scalability.free_time
146022 ± 14% -40.5% 86833 ± 2% vm-scalability.median
29.29 ± 40% -89.6% 3.06 ± 26% vm-scalability.stddev
7454656 ± 9% -61.3% 2885836 ± 3% vm-scalability.throughput
189.21 ± 10% +52.4% 288.34 ± 2% vm-scalability.time.elapsed_time
189.21 ± 10% +52.4% 288.34 ± 2% vm-scalability.time.elapsed_time.max
8768 ± 3% +11.6% 9781 ± 5% vm-scalability.time.involuntary_context_switches
20320196 ± 2% -33.4% 13531732 ± 3% vm-scalability.time.maximum_resident_set_size
425945 ± 9% +17.4% 499908 ± 4% vm-scalability.time.minor_page_faults
253.79 ± 6% +62.0% 411.07 ± 4% vm-scalability.time.system_time
322.52 +8.0% 348.18 vm-scalability.time.user_time
246150 ± 12% +50.3% 370019 ± 4% vm-scalability.time.voluntary_context_switches
7746519 ± 11% +49.0% 11538799 ± 4% cpuidle.C6.usage
192240 ± 10% +44.3% 277460 ± 8% interrupts.CAL:Function_call_interrupts
22.45 ± 85% -80.6% 4.36 ±173% sched_debug.cfs_rq:/.MIN_vruntime.avg
22.45 ± 85% -80.6% 4.36 ±173% sched_debug.cfs_rq:/.max_vruntime.avg
29.36 ± 13% +10.8% 32.52 ± 14% boot-time.boot
24.28 ± 16% +12.4% 27.30 ± 16% boot-time.dhcp
1597 ± 15% +10.2% 1760 ± 15% boot-time.idle
68.25 -9.8 58.48 ± 2% mpstat.cpu.idle%
27.12 ± 5% +10.3 37.42 ± 3% mpstat.cpu.iowait%
2.48 ± 11% -0.8 1.73 ± 2% mpstat.cpu.usr%
3422396 ± 14% +46.6% 5018492 ± 7% softirqs.RCU
1776561 ± 10% +52.4% 2707785 ± 2% softirqs.SCHED
5685772 ± 7% +54.3% 8774055 ± 3% softirqs.TIMER
7742519 ± 11% +49.0% 11534924 ± 4% turbostat.C6
29922317 ± 10% +55.0% 46366941 turbostat.IRQ
9.49 ± 64% -83.0% 1.62 ± 54% turbostat.Pkg%pc2
36878790 ± 27% -84.9% 5570259 ± 47% vmstat.memory.free
1.117e+08 ± 15% +73.5% 1.939e+08 ± 10% vmstat.memory.swpd
25.25 ± 4% +34.7% 34.00 vmstat.procs.b
513725 ± 7% +47.9% 759561 ± 12% numa-numastat.node0.local_node
35753 ±160% +264.7% 130386 ± 39% numa-numastat.node0.numa_foreign
519403 ± 6% +48.7% 772182 ± 12% numa-numastat.node0.numa_hit
35753 ±160% +264.7% 130386 ± 39% numa-numastat.node1.numa_miss
44315 ±138% +198.5% 132279 ± 40% numa-numastat.node1.other_node
32798032 ± 46% +80.6% 59228165 ± 2% numa-meminfo.node0.Active
32798009 ± 46% +80.6% 59228160 ± 2% numa-meminfo.node0.Active(anon)
33430762 ± 46% +80.8% 60429537 ± 2% numa-meminfo.node0.AnonHugePages
33572119 ± 46% +80.2% 60512777 ± 2% numa-meminfo.node0.AnonPages
1310559 ± 64% +86.4% 2442244 ± 2% numa-meminfo.node0.Inactive
1309969 ± 64% +86.4% 2442208 ± 2% numa-meminfo.node0.Inactive(anon)
30385359 ± 53% -90.7% 2821023 ± 44% numa-meminfo.node0.MemFree
35505047 ± 45% +77.6% 63055165 ± 2% numa-meminfo.node0.MemUsed
166560 ± 42% +130.2% 383345 ± 6% numa-meminfo.node0.PageTables
23702 ±105% -89.0% 2617 ± 44% numa-meminfo.node0.Shmem
1212 ± 65% +402.8% 6093 ± 57% numa-meminfo.node1.Shmem
8354144 ± 44% +77.1% 14798964 ± 2% numa-vmstat.node0.nr_active_anon
8552787 ± 44% +76.8% 15122222 ± 2% numa-vmstat.node0.nr_anon_pages
16648 ± 44% +77.1% 29492 ± 2% numa-vmstat.node0.nr_anon_transparent_hugepages
7436650 ± 53% -90.4% 712936 ± 44% numa-vmstat.node0.nr_free_pages
332106 ± 63% +83.8% 610268 ± 2% numa-vmstat.node0.nr_inactive_anon
41929 ± 41% +130.6% 96703 ± 6% numa-vmstat.node0.nr_page_table_pages
5900 ±106% -89.0% 650.75 ± 45% numa-vmstat.node0.nr_shmem
43336 ± 92% +151.2% 108840 ± 7% numa-vmstat.node0.nr_vmscan_write
43110 ± 92% +150.8% 108110 ± 7% numa-vmstat.node0.nr_written
8354142 ± 44% +77.1% 14798956 ± 2% numa-vmstat.node0.nr_zone_active_anon
332105 ± 63% +83.8% 610269 ± 2% numa-vmstat.node0.nr_zone_inactive_anon
321.50 ± 66% +384.9% 1559 ± 59% numa-vmstat.node1.nr_shmem
88815743 ± 10% +33.8% 1.188e+08 ± 2% meminfo.Active
88815702 ± 10% +33.8% 1.188e+08 ± 2% meminfo.Active(anon)
90446011 ± 11% +34.0% 1.212e+08 ± 2% meminfo.AnonHugePages
90613587 ± 11% +34.0% 1.214e+08 ± 2% meminfo.AnonPages
5.15e+08 ± 3% +22.2% 6.293e+08 ± 2% meminfo.Committed_AS
187419 ± 10% -19.6% 150730 ± 7% meminfo.DirectMap4k
3620693 ± 18% +35.3% 4897093 ± 2% meminfo.Inactive
3620054 ± 18% +35.3% 4896961 ± 2% meminfo.Inactive(anon)
36144979 ± 28% -87.0% 4715681 ± 57% meminfo.MemAvailable
36723121 ± 27% -85.9% 5179468 ± 52% meminfo.MemFree
395801 ± 2% +56.9% 620816 ± 4% meminfo.PageTables
178672 +15.1% 205668 ± 2% meminfo.SUnreclaim
249496 +12.6% 280897 meminfo.Slab
1751813 ± 2% +34.7% 2360437 ± 3% meminfo.SwapCached
6.716e+08 -12.4% 5.88e+08 ± 2% meminfo.SwapFree
3926 ± 17% +72.9% 6788 ± 13% meminfo.Writeback
1076 ± 3% +42.9% 1538 ± 4% slabinfo.biovec-max.active_objs
275.75 ± 2% +41.1% 389.00 ± 4% slabinfo.biovec-max.active_slabs
1104 ± 2% +41.0% 1557 ± 4% slabinfo.biovec-max.num_objs
275.75 ± 2% +41.1% 389.00 ± 4% slabinfo.biovec-max.num_slabs
588.25 ± 7% +17.9% 693.75 ± 7% slabinfo.file_lock_cache.active_objs
588.25 ± 7% +17.9% 693.75 ± 7% slabinfo.file_lock_cache.num_objs
13852 ± 3% +37.5% 19050 ± 3% slabinfo.kmalloc-4k.active_objs
1776 ± 3% +37.7% 2446 ± 3% slabinfo.kmalloc-4k.active_slabs
14217 ± 3% +37.7% 19577 ± 3% slabinfo.kmalloc-4k.num_objs
1776 ± 3% +37.7% 2446 ± 3% slabinfo.kmalloc-4k.num_slabs
158.25 ± 15% +54.0% 243.75 ± 18% slabinfo.nfs_read_data.active_objs
158.25 ± 15% +54.0% 243.75 ± 18% slabinfo.nfs_read_data.num_objs
17762 ± 4% +44.3% 25638 ± 4% slabinfo.pool_workqueue.active_objs
563.25 ± 3% +43.7% 809.25 ± 4% slabinfo.pool_workqueue.active_slabs
18048 ± 3% +43.5% 25906 ± 4% slabinfo.pool_workqueue.num_objs
563.25 ± 3% +43.7% 809.25 ± 4% slabinfo.pool_workqueue.num_slabs
34631 ± 3% +21.0% 41905 ± 2% slabinfo.radix_tree_node.active_objs
624.50 ± 3% +20.7% 753.75 ± 2% slabinfo.radix_tree_node.active_slabs
34998 ± 3% +20.7% 42228 ± 2% slabinfo.radix_tree_node.num_objs
624.50 ± 3% +20.7% 753.75 ± 2% slabinfo.radix_tree_node.num_slabs
9.727e+11 ± 8% +50.4% 1.463e+12 ± 12% perf-stat.branch-instructions
1.11 ± 12% +1.2 2.31 ± 8% perf-stat.branch-miss-rate%
1.078e+10 ± 13% +214.9% 3.395e+10 ± 17% perf-stat.branch-misses
3.17 ± 11% -1.5 1.65 ± 9% perf-stat.cache-miss-rate%
8.206e+08 ± 7% +49.4% 1.226e+09 ± 11% perf-stat.cache-misses
2.624e+10 ± 14% +187.0% 7.532e+10 ± 17% perf-stat.cache-references
1174249 ± 9% +52.3% 1788442 ± 3% perf-stat.context-switches
2.921e+12 ± 8% +71.1% 4.998e+12 ± 9% perf-stat.cpu-cycles
1437 ± 14% +85.1% 2661 ± 20% perf-stat.cpu-migrations
7.586e+08 ± 21% +134.7% 1.78e+09 ± 30% perf-stat.dTLB-load-misses
7.943e+11 ± 11% +84.3% 1.464e+12 ± 14% perf-stat.dTLB-loads
93963731 ± 22% +40.9% 1.324e+08 ± 10% perf-stat.dTLB-store-misses
3.394e+11 ± 6% +60.5% 5.449e+11 ± 10% perf-stat.dTLB-stores
1.531e+08 ± 22% +44.0% 2.204e+08 ± 11% perf-stat.iTLB-load-misses
1.688e+08 ± 23% +71.1% 2.888e+08 ± 12% perf-stat.iTLB-loads
3.267e+12 ± 7% +58.5% 5.177e+12 ± 13% perf-stat.instructions
3988 ± 43% +123.9% 8930 ± 22% perf-stat.major-faults
901474 ± 5% +34.2% 1209877 ± 2% perf-stat.minor-faults
31.24 ± 10% +21.7 52.91 ± 2% perf-stat.node-load-miss-rate%
1.135e+08 ± 16% +187.6% 3.264e+08 ± 13% perf-stat.node-load-misses
6.27 ± 17% +26.9 33.19 ± 4% perf-stat.node-store-miss-rate%
27354489 ± 15% +601.2% 1.918e+08 ± 13% perf-stat.node-store-misses
905482 ± 5% +34.6% 1218833 ± 2% perf-stat.page-faults
4254 ± 7% +58.5% 6741 ± 13% perf-stat.path-length
6364 ± 25% +84.1% 11715 ± 14% proc-vmstat.allocstall_movable
46439 ± 12% +100.4% 93049 ± 21% proc-vmstat.compact_migrate_scanned
22425696 ± 10% +29.0% 28932634 ± 6% proc-vmstat.nr_active_anon
22875703 ± 11% +29.2% 29560082 ± 6% proc-vmstat.nr_anon_pages
44620 ± 11% +29.2% 57643 ± 6% proc-vmstat.nr_anon_transparent_hugepages
879436 ± 28% -77.3% 199768 ± 98% proc-vmstat.nr_dirty_background_threshold
1761029 ± 28% -77.3% 400034 ± 98% proc-vmstat.nr_dirty_threshold
715724 +17.6% 841386 ± 3% proc-vmstat.nr_file_pages
8960545 ± 28% -76.4% 2111248 ± 93% proc-vmstat.nr_free_pages
904330 ± 18% +31.2% 1186458 ± 7% proc-vmstat.nr_inactive_anon
11137 ± 2% +27.1% 14154 ± 9% proc-vmstat.nr_isolated_anon
12566 +3.5% 13012 proc-vmstat.nr_kernel_stack
97491 ± 2% +52.6% 148790 ± 10% proc-vmstat.nr_page_table_pages
17674 +6.2% 18763 ± 2% proc-vmstat.nr_slab_reclaimable
44820 +13.0% 50645 ± 2% proc-vmstat.nr_slab_unreclaimable
135763 ± 9% +68.4% 228600 ± 6% proc-vmstat.nr_vmscan_write
1017 ± 10% +54.7% 1573 ± 14% proc-vmstat.nr_writeback
220023 ± 5% +73.5% 381732 ± 6% proc-vmstat.nr_written
22425696 ± 10% +29.0% 28932635 ± 6% proc-vmstat.nr_zone_active_anon
904330 ± 18% +31.2% 1186457 ± 7% proc-vmstat.nr_zone_inactive_anon
1018 ± 10% +55.3% 1581 ± 13% proc-vmstat.nr_zone_write_pending
145368 ± 48% +63.1% 237050 ± 17% proc-vmstat.numa_foreign
671.50 ± 96% +479.4% 3890 ± 71% proc-vmstat.numa_hint_faults
1122389 ± 9% +17.2% 1315380 ± 4% proc-vmstat.numa_hit
214722 ± 5% +21.6% 261076 ± 3% proc-vmstat.numa_huge_pte_updates
1108142 ± 9% +17.4% 1300857 ± 4% proc-vmstat.numa_local
145368 ± 48% +63.1% 237050 ± 17% proc-vmstat.numa_miss
159615 ± 44% +57.6% 251573 ± 16% proc-vmstat.numa_other
185.50 ± 81% +8278.6% 15542 ± 40% proc-vmstat.numa_pages_migrated
1.1e+08 ± 5% +21.6% 1.337e+08 ± 3% proc-vmstat.numa_pte_updates
688332 ±106% +177.9% 1913062 ± 3% proc-vmstat.pgalloc_dma32
72593045 ± 10% +51.1% 1.097e+08 ± 3% proc-vmstat.pgdeactivate
919059 ± 4% +35.1% 1241472 ± 2% proc-vmstat.pgfault
3716 ± 45% +120.3% 8186 ± 25% proc-vmstat.pgmajfault
7.25 ± 26% +4.2e+05% 30239 ± 25% proc-vmstat.pgmigrate_fail
5340 ±106% +264.0% 19438 ± 33% proc-vmstat.pgmigrate_success
2.837e+08 ± 10% +51.7% 4.303e+08 ± 3% proc-vmstat.pgpgout
211428 ± 6% +74.1% 368188 ± 4% proc-vmstat.pgrefill
219051 ± 5% +73.7% 380419 ± 5% proc-vmstat.pgrotated
559397 ± 8% +43.0% 800110 ± 11% proc-vmstat.pgscan_direct
32894 ± 59% +158.3% 84981 ± 23% proc-vmstat.pgscan_kswapd
207042 ± 8% +71.5% 355174 ± 5% proc-vmstat.pgsteal_direct
14745 ± 65% +104.3% 30121 ± 18% proc-vmstat.pgsteal_kswapd
70934968 ± 10% +51.7% 1.076e+08 ± 3% proc-vmstat.pswpout
5852284 ± 12% +145.8% 14382881 ± 5% proc-vmstat.slabs_scanned
13453 ± 24% +204.9% 41023 ± 8% proc-vmstat.thp_split_page_failed
138385 ± 10% +51.6% 209783 ± 3% proc-vmstat.thp_split_pmd
138385 ± 10% +51.6% 209782 ± 3% proc-vmstat.thp_swpout
4.61 ± 24% -1.2 3.37 ± 10% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
2.90 ± 18% -0.7 2.19 ± 8% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
1.86 ± 32% -0.6 1.22 ± 7% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
1.60 ± 31% -0.5 1.08 ± 10% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
2.98 ± 8% -0.5 2.48 ± 10% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
1.46 ± 32% -0.5 0.96 ± 9% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
0.74 ± 27% -0.5 0.28 ±100% perf-profile.calltrace.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
1.03 ± 52% -0.4 0.63 ± 15% perf-profile.calltrace.cycles-pp.clockevents_program_event.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
1.17 ± 19% -0.3 0.87 ± 11% perf-profile.calltrace.cycles-pp.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry
0.80 ± 17% +0.4 1.22 ± 7% perf-profile.calltrace.cycles-pp.nvme_queue_rq.__blk_mq_try_issue_directly.blk_mq_try_issue_directly.blk_mq_make_request.generic_make_request
0.81 ± 16% +0.4 1.23 ± 8% perf-profile.calltrace.cycles-pp.blk_mq_try_issue_directly.blk_mq_make_request.generic_make_request.submit_bio.__swap_writepage
0.81 ± 16% +0.4 1.23 ± 8% perf-profile.calltrace.cycles-pp.__blk_mq_try_issue_directly.blk_mq_try_issue_directly.blk_mq_make_request.generic_make_request.submit_bio
0.52 ± 60% +0.5 1.03 ± 6% perf-profile.calltrace.cycles-pp.dma_pool_alloc.nvme_queue_rq.__blk_mq_try_issue_directly.blk_mq_try_issue_directly.blk_mq_make_request
1.64 ± 16% +0.6 2.21 ± 15% perf-profile.calltrace.cycles-pp.find_next_bit.blk_mq_queue_tag_busy_iter.blk_mq_in_flight.part_round_stats.blk_account_io_done
0.51 ± 64% +0.8 1.31 ± 17% perf-profile.calltrace.cycles-pp.bt_iter.blk_mq_queue_tag_busy_iter.blk_mq_in_flight.part_round_stats.blk_account_io_start
0.80 ± 25% +0.9 1.67 ± 19% perf-profile.calltrace.cycles-pp.blk_mq_queue_tag_busy_iter.blk_mq_in_flight.part_round_stats.blk_account_io_start.blk_mq_make_request
0.82 ± 24% +0.9 1.71 ± 19% perf-profile.calltrace.cycles-pp.blk_mq_in_flight.part_round_stats.blk_account_io_start.blk_mq_make_request.generic_make_request
0.82 ± 25% +0.9 1.73 ± 19% perf-profile.calltrace.cycles-pp.part_round_stats.blk_account_io_start.blk_mq_make_request.generic_make_request.submit_bio
0.87 ± 25% +1.0 1.87 ± 14% perf-profile.calltrace.cycles-pp.blk_account_io_start.blk_mq_make_request.generic_make_request.submit_bio.__swap_writepage
2.05 ± 15% +1.4 3.48 ± 7% perf-profile.calltrace.cycles-pp.generic_make_request.submit_bio.__swap_writepage.pageout.shrink_page_list
2.09 ± 15% +1.4 3.53 ± 7% perf-profile.calltrace.cycles-pp.__swap_writepage.pageout.shrink_page_list.shrink_inactive_list.shrink_node_memcg
2.05 ± 15% +1.4 3.49 ± 7% perf-profile.calltrace.cycles-pp.blk_mq_make_request.generic_make_request.submit_bio.__swap_writepage.pageout
2.06 ± 15% +1.4 3.50 ± 7% perf-profile.calltrace.cycles-pp.submit_bio.__swap_writepage.pageout.shrink_page_list.shrink_inactive_list
2.10 ± 15% +1.4 3.54 ± 6% perf-profile.calltrace.cycles-pp.pageout.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node
3.31 ± 12% +1.5 4.83 ± 5% perf-profile.calltrace.cycles-pp.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node.do_try_to_free_pages
3.33 ± 12% +1.5 4.86 ± 5% perf-profile.calltrace.cycles-pp.shrink_inactive_list.shrink_node_memcg.shrink_node.do_try_to_free_pages.try_to_free_pages
3.40 ± 12% +1.6 4.97 ± 6% perf-profile.calltrace.cycles-pp.shrink_node_memcg.shrink_node.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath
3.57 ± 12% +1.6 5.19 ± 7% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
3.48 ± 12% +1.6 5.13 ± 6% perf-profile.calltrace.cycles-pp.shrink_node.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask
3.48 ± 12% +1.6 5.13 ± 6% perf-profile.calltrace.cycles-pp.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.do_huge_pmd_anonymous_page
3.49 ± 12% +1.6 5.13 ± 6% perf-profile.calltrace.cycles-pp.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault
3.51 ± 12% +1.7 5.17 ± 6% perf-profile.calltrace.cycles-pp.__alloc_pages_slowpath.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
3.34 ± 14% +2.2 5.57 ± 21% perf-profile.calltrace.cycles-pp.blk_mq_check_inflight.bt_iter.blk_mq_queue_tag_busy_iter.blk_mq_in_flight.part_round_stats
9.14 ± 17% +5.5 14.66 ± 22% perf-profile.calltrace.cycles-pp.bt_iter.blk_mq_queue_tag_busy_iter.blk_mq_in_flight.part_round_stats.blk_account_io_done
12.10 ± 17% +6.8 18.89 ± 20% perf-profile.calltrace.cycles-pp.blk_mq_queue_tag_busy_iter.blk_mq_in_flight.part_round_stats.blk_account_io_done.blk_mq_end_request
13.88 ± 15% +7.1 21.01 ± 20% perf-profile.calltrace.cycles-pp.handle_irq.do_IRQ.ret_from_intr.cpuidle_enter_state.do_idle
13.87 ± 15% +7.1 21.00 ± 20% perf-profile.calltrace.cycles-pp.handle_edge_irq.handle_irq.do_IRQ.ret_from_intr.cpuidle_enter_state
13.97 ± 15% +7.1 21.10 ± 20% perf-profile.calltrace.cycles-pp.ret_from_intr.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
13.94 ± 15% +7.1 21.07 ± 20% perf-profile.calltrace.cycles-pp.do_IRQ.ret_from_intr.cpuidle_enter_state.do_idle.cpu_startup_entry
12.50 ± 17% +7.2 19.65 ± 20% perf-profile.calltrace.cycles-pp.blk_account_io_done.blk_mq_end_request.blk_mq_complete_request.nvme_irq.__handle_irq_event_percpu
12.70 ± 17% +7.2 19.86 ± 21% perf-profile.calltrace.cycles-pp.blk_mq_end_request.blk_mq_complete_request.nvme_irq.__handle_irq_event_percpu.handle_irq_event_percpu
12.48 ± 17% +7.2 19.65 ± 20% perf-profile.calltrace.cycles-pp.part_round_stats.blk_account_io_done.blk_mq_end_request.blk_mq_complete_request.nvme_irq
12.46 ± 17% +7.2 19.63 ± 21% perf-profile.calltrace.cycles-pp.blk_mq_in_flight.part_round_stats.blk_account_io_done.blk_mq_end_request.blk_mq_complete_request
14.78 ± 18% +8.1 22.83 ± 20% perf-profile.calltrace.cycles-pp.blk_mq_complete_request.nvme_irq.__handle_irq_event_percpu.handle_irq_event_percpu.handle_irq_event
14.87 ± 18% +8.1 22.95 ± 21% perf-profile.calltrace.cycles-pp.nvme_irq.__handle_irq_event_percpu.handle_irq_event_percpu.handle_irq_event.handle_edge_irq
14.89 ± 18% +8.1 22.98 ± 21% perf-profile.calltrace.cycles-pp.handle_irq_event_percpu.handle_irq_event.handle_edge_irq.handle_irq.do_IRQ
14.90 ± 18% +8.1 22.99 ± 21% perf-profile.calltrace.cycles-pp.handle_irq_event.handle_edge_irq.handle_irq.do_IRQ.ret_from_intr
14.88 ± 18% +8.1 22.97 ± 21% perf-profile.calltrace.cycles-pp.__handle_irq_event_percpu.handle_irq_event_percpu.handle_irq_event.handle_edge_irq.handle_irq
4.79 ± 22% -1.3 3.52 ± 9% perf-profile.children.cycles-pp.hrtimer_interrupt
3.04 ± 16% -0.7 2.30 ± 8% perf-profile.children.cycles-pp.__hrtimer_run_queues
1.98 ± 29% -0.7 1.29 ± 7% perf-profile.children.cycles-pp.tick_sched_timer
1.70 ± 27% -0.6 1.14 ± 9% perf-profile.children.cycles-pp.tick_sched_handle
1.57 ± 29% -0.5 1.03 ± 10% perf-profile.children.cycles-pp.update_process_times
3.02 ± 8% -0.5 2.52 ± 10% perf-profile.children.cycles-pp.menu_select
1.19 ± 19% -0.3 0.89 ± 10% perf-profile.children.cycles-pp.tick_nohz_next_event
0.81 ± 25% -0.2 0.56 ± 11% perf-profile.children.cycles-pp.scheduler_tick
0.42 ± 19% -0.1 0.30 ± 14% perf-profile.children.cycles-pp._raw_spin_lock
0.27 ± 13% -0.1 0.21 ± 15% perf-profile.children.cycles-pp.hrtimer_next_event_without
0.11 ± 35% -0.0 0.07 ± 19% perf-profile.children.cycles-pp.run_local_timers
0.10 ± 15% -0.0 0.07 ± 28% perf-profile.children.cycles-pp.cpu_load_update
0.14 ± 9% -0.0 0.11 ± 15% perf-profile.children.cycles-pp.perf_event_task_tick
0.07 ± 17% +0.0 0.10 ± 12% perf-profile.children.cycles-pp.blk_flush_plug_list
0.07 ± 17% +0.0 0.10 ± 12% perf-profile.children.cycles-pp.blk_mq_flush_plug_list
0.06 ± 26% +0.0 0.11 ± 17% perf-profile.children.cycles-pp.read
0.07 ± 17% +0.0 0.11 ± 36% perf-profile.children.cycles-pp.deferred_split_scan
0.15 ± 14% +0.1 0.21 ± 13% perf-profile.children.cycles-pp.blk_mq_sched_dispatch_requests
0.15 ± 16% +0.1 0.21 ± 15% perf-profile.children.cycles-pp.blk_mq_dispatch_rq_list
0.15 ± 14% +0.1 0.22 ± 14% perf-profile.children.cycles-pp.__blk_mq_run_hw_queue
0.08 ± 23% +0.1 0.14 ± 34% perf-profile.children.cycles-pp.shrink_slab
0.08 ± 23% +0.1 0.14 ± 34% perf-profile.children.cycles-pp.do_shrink_slab
0.72 ± 19% +0.3 1.00 ± 8% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.44 ± 18% +0.3 0.74 ± 14% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.86 ± 18% +0.4 1.30 ± 9% perf-profile.children.cycles-pp.blk_mq_try_issue_directly
0.88 ± 18% +0.5 1.34 ± 9% perf-profile.children.cycles-pp.__blk_mq_try_issue_directly
0.82 ± 19% +0.5 1.30 ± 9% perf-profile.children.cycles-pp.dma_pool_alloc
1.02 ± 15% +0.5 1.55 ± 9% perf-profile.children.cycles-pp.nvme_queue_rq
1.07 ± 15% +0.7 1.76 ± 21% perf-profile.children.cycles-pp.__indirect_thunk_start
2.42 ± 16% +0.7 3.16 ± 13% perf-profile.children.cycles-pp.find_next_bit
0.95 ± 26% +1.1 2.00 ± 15% perf-profile.children.cycles-pp.blk_account_io_start
2.19 ± 17% +1.5 3.72 ± 8% perf-profile.children.cycles-pp.blk_mq_make_request
2.20 ± 17% +1.5 3.73 ± 8% perf-profile.children.cycles-pp.submit_bio
2.20 ± 17% +1.5 3.73 ± 8% perf-profile.children.cycles-pp.generic_make_request
2.21 ± 17% +1.5 3.76 ± 8% perf-profile.children.cycles-pp.__swap_writepage
2.23 ± 17% +1.5 3.77 ± 7% perf-profile.children.cycles-pp.pageout
3.60 ± 13% +1.6 5.22 ± 7% perf-profile.children.cycles-pp.__alloc_pages_nodemask
3.51 ± 15% +1.6 5.15 ± 6% perf-profile.children.cycles-pp.shrink_page_list
3.48 ± 12% +1.6 5.13 ± 6% perf-profile.children.cycles-pp.do_try_to_free_pages
3.53 ± 14% +1.6 5.17 ± 6% perf-profile.children.cycles-pp.shrink_inactive_list
3.49 ± 12% +1.6 5.13 ± 6% perf-profile.children.cycles-pp.try_to_free_pages
3.51 ± 12% +1.7 5.19 ± 6% perf-profile.children.cycles-pp.__alloc_pages_slowpath
3.61 ± 14% +1.7 5.30 ± 6% perf-profile.children.cycles-pp.shrink_node_memcg
3.69 ± 15% +1.8 5.45 ± 7% perf-profile.children.cycles-pp.shrink_node
4.10 ± 17% +2.4 6.47 ± 18% perf-profile.children.cycles-pp.blk_mq_check_inflight
10.64 ± 16% +6.8 17.39 ± 17% perf-profile.children.cycles-pp.bt_iter
13.17 ± 16% +7.4 20.59 ± 18% perf-profile.children.cycles-pp.blk_account_io_done
13.39 ± 16% +7.4 20.81 ± 19% perf-profile.children.cycles-pp.blk_mq_end_request
15.67 ± 17% +8.2 23.84 ± 20% perf-profile.children.cycles-pp.handle_irq
15.66 ± 17% +8.2 23.83 ± 20% perf-profile.children.cycles-pp.handle_edge_irq
15.57 ± 17% +8.2 23.73 ± 20% perf-profile.children.cycles-pp.nvme_irq
15.60 ± 17% +8.2 23.77 ± 20% perf-profile.children.cycles-pp.handle_irq_event
15.58 ± 17% +8.2 23.75 ± 20% perf-profile.children.cycles-pp.__handle_irq_event_percpu
15.59 ± 17% +8.2 23.77 ± 20% perf-profile.children.cycles-pp.handle_irq_event_percpu
15.75 ± 17% +8.2 23.93 ± 20% perf-profile.children.cycles-pp.ret_from_intr
15.73 ± 17% +8.2 23.91 ± 20% perf-profile.children.cycles-pp.do_IRQ
15.54 ± 17% +8.2 23.73 ± 19% perf-profile.children.cycles-pp.blk_mq_complete_request
14.07 ± 16% +8.4 22.45 ± 16% perf-profile.children.cycles-pp.part_round_stats
14.05 ± 16% +8.4 22.45 ± 16% perf-profile.children.cycles-pp.blk_mq_queue_tag_busy_iter
14.05 ± 16% +8.4 22.45 ± 16% perf-profile.children.cycles-pp.blk_mq_in_flight
0.38 ± 20% -0.1 0.28 ± 16% perf-profile.self.cycles-pp._raw_spin_lock
0.10 ± 13% -0.0 0.07 ± 17% perf-profile.self.cycles-pp.idle_cpu
0.12 ± 14% -0.0 0.09 ± 7% perf-profile.self.cycles-pp.perf_mux_hrtimer_handler
0.10 ± 15% -0.0 0.07 ± 28% perf-profile.self.cycles-pp.cpu_load_update
0.14 ± 9% -0.0 0.11 ± 15% perf-profile.self.cycles-pp.perf_event_task_tick
0.62 ± 16% +0.3 0.89 ± 12% perf-profile.self.cycles-pp.dma_pool_alloc
0.44 ± 18% +0.3 0.74 ± 14% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.81 ± 16% +0.5 1.32 ± 25% perf-profile.self.cycles-pp.__indirect_thunk_start
2.07 ± 16% +0.6 2.69 ± 12% perf-profile.self.cycles-pp.find_next_bit
1.77 ± 16% +1.1 2.91 ± 15% perf-profile.self.cycles-pp.blk_mq_queue_tag_busy_iter
3.82 ± 16% +2.2 6.00 ± 18% perf-profile.self.cycles-pp.blk_mq_check_inflight
6.43 ± 15% +4.1 10.56 ± 16% perf-profile.self.cycles-pp.bt_iter
vm-scalability.time.system_time
600 +-+-------------------------------------------------------------------+
| |
500 +-O OO |
O O OO |
| O O O O O O |
400 +-+ O O O O |
| |
300 +-+ .+.++. .+ .+ .+ .+.+ +.+ .++.+.++ .++.+.++. |
| ++ + :.+ + + + : +.+ : +.+ + .++.+ +.+ .|
200 +-+ + + : : : : + + |
|: : : : : |
|: :: : : |
100 +-+ :: :: |
| : : |
0 +-+-------------------------------------------------------------------+
vm-scalability.time.maximum_resident_set_size
2.5e+07 +-+---------------------------------------------------------------+
| |
| ++.++.++.+. +. +. + +.++.+ +.++.+ .+.++. .++. .++.+ .+ |
2e+07 +-+ + .+ + +.+: : : : + ++ + + +.|
| : + : : : : |
| : : : : : |
1.5e+07 O-+O OO: : : : |
|: O O O O : : : : |
1e+07 +-+ O O O O OO O : : : : |
|:O O :: :: |
|: :: :: |
5e+06 +-+ : : |
| : : |
| : : |
0 +-+---------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
63c35ea6b8 ("x86/stacktrace: Use common infrastructure"): BUG: kernel hang in early-boot stage, last printk: early console in setup code
by kernel test robot
Greetings,
The 0day kernel testing robot got the dmesg below, and the first bad commit is:
https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git WIP.core/stacktrace
commit 63c35ea6b829a0f98d307a8dec038095681ecd13
Author: Thomas Gleixner <tglx(a)linutronix.de>
AuthorDate: Thu Apr 11 12:52:04 2019 +0200
Commit: Thomas Gleixner <tglx(a)linutronix.de>
CommitDate: Sun Apr 14 22:44:04 2019 +0200
x86/stacktrace: Use common infrastructure
Replace the stack_trace_save*() functions with the new arch_stack_walk()
interfaces.
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
0245694164 stacktrace: Provide common infrastructure
63c35ea6b8 x86/stacktrace: Use common infrastructure
13adc4ee15 Merge branch 'WIP.locking/core'
+-----------------------------------------------------------------------------+------------+------------+------------+
| | 0245694164 | 63c35ea6b8 | 13adc4ee15 |
+-----------------------------------------------------------------------------+------------+------------+------------+
| boot_successes | 32 | 0 | 11 |
| boot_failures | 1 | 13 | |
| BUG:kernel_reboot-without-warning_in_test_stage | 1 | | |
| BUG:kernel_hang_in_early-boot_stage,last_printk:early_console_in_setup_code | 0 | 13 | |
+-----------------------------------------------------------------------------+------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <lkp(a)intel.com>
early console in setup code
BUG: kernel hang in early-boot stage, last printk: early console in setup code
Linux version 5.1.0-rc4-00302-g63c35ea #39
Command line: root=/dev/ram0 hung_task_panic=1 debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 net.ifnames=0 printk.devkmsg=on panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 drbd.minor_count=8 systemd.log_level=err ignore_loglevel console=tty0 earlyprintk=ttyS0,115200 console=ttyS0,115200 vga=normal rw link=/cephfs/kbuild/run-queue/kvm/i386-randconfig-n3-201915/tip:WIP.core:stacktrace:63c35ea6b829a0f98d307a8dec038095681ecd13/.vmlinuz-63c35ea6b829a0f98d307a8dec038095681ecd13-20190415064514-2:yocto-vm-yocto-219 branch=tip/WIP.core/stacktrace BOOT_IMAGE=/pkg/linux/i386-randconfig-n3-201915/gcc-7/63c35ea6b829a0f98d307a8dec038095681ecd13/vmlinuz-5.1.0-rc4-00302-g63c35ea drbd.minor_count=8 rcuperf.shutdown=0
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 63c35ea6b829a0f98d307a8dec038095681ecd13 4443f8e6ac7755cd775c70d08be8042dc2f936cb --
git bisect good 9da0899ac5cf06270762b0b530e7cd49e1a97759 # 08:27 G 10 0 0 0 latency_top: Simplify stack trace handling
git bisect good 6f9fad69e30495d9b3c62cf696b7abf68192a400 # 08:33 G 10 0 0 0 lockdep: Remove unused trace argument from print_circular_bug()
git bisect good c6b01c6ce59d329cd1f749faec3034809792d4c4 # 08:40 G 10 0 0 0 tracing: Simplify stack trace retrieval
git bisect good 150bf3fe05c88b76bea37253b9993dd89e58dc2f # 08:47 G 11 0 0 0 stacktrace: Remove obsolete functions
git bisect good 5468565682413ae5a788b1875bbd7e762c910cf9 # 08:53 G 10 0 1 1 lib/stackdepot: Remove obsolete functions
git bisect good 0245694164748e86f0ca565c2d519db1c968dcb1 # 09:06 G 11 0 0 0 stacktrace: Provide common infrastructure
# first bad commit: [63c35ea6b829a0f98d307a8dec038095681ecd13] x86/stacktrace: Use common infrastructure
git bisect good 0245694164748e86f0ca565c2d519db1c968dcb1 # 09:12 G 31 0 0 1 stacktrace: Provide common infrastructure
# extra tests with debug options
# extra tests on HEAD of tip/WIP.core/stacktrace
git bisect bad 63c35ea6b829a0f98d307a8dec038095681ecd13 # 09:15 B 0 13 30 0 x86/stacktrace: Use common infrastructure
# extra tests on tree/branch tip/WIP.core/stacktrace
git bisect bad 63c35ea6b829a0f98d307a8dec038095681ecd13 # 09:15 B 0 13 30 0 x86/stacktrace: Use common infrastructure
# extra tests with first bad commit reverted
git bisect good 0d34fc29df83b253f1289f69a8cea220358f651b # 09:23 G 11 0 0 0 Revert "x86/stacktrace: Use common infrastructure"
# extra tests on tree/branch tip/master
git bisect good 13adc4ee15853b456b55c061aa081df482a90fc1 # 09:27 G 10 0 0 0 Merge branch 'WIP.locking/core'
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
[fs] 853fbf8946: BUG:unable_to_handle_kernel
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 853fbf894629ed7df6b3d494bdf0dca547325188 ("[PATCH] fs: Fix ovl_i_mutex_dir_key/p->lock/cred cred_guard_mutex deadlock")
url: https://github.com/0day-ci/linux/commits/Mina-Almasry/fs-Fix-ovl_i_mutex_...
in testcase: boot
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 2G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+-------------------------------------------------+------------+------------+
| | 582549e3fb | 853fbf8946 |
+-------------------------------------------------+------------+------------+
| boot_successes | 39 | 0 |
| boot_failures | 24 | 5 |
| BUG:kernel_reboot-without-warning_in_test_stage | 24 | |
| BUG:unable_to_handle_kernel | 0 | 5 |
| Oops:#[##] | 0 | 5 |
| RIP:kfree | 0 | 5 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 5 |
+-------------------------------------------------+------------+------------+
[ 0.775676] BUG: unable to handle kernel paging request at ffffebe9e000cac8
[ 0.775676] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
[ 0.775676] #PF error: [normal kernel read fault]
[ 0.775676] PGD 0 P4D 0
[ 0.775676] Oops: 0000 [#1] SMP PTI
[ 0.775676] CPU: 1 PID: 21 Comm: kworker/u4:0 Not tainted 5.1.0-rc4-00059-g853fbf8 #2
[ 0.775676] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 0.775676] RIP: 0010:kfree+0xa1/0x153
[ 0.779952] futex hash table entries: 512 (order: 3, 32768 bytes)
[ 0.780581] xor: automatically using best checksumming function avx
[ 0.779965] Code: 15 9c 1a 16 01 48 01 d8 72 0e 49 c7 c2 00 00 00 80 4c 2b 15 01 93 0b 01 49 01 c2 49 c1 ea 0c 49 c1 e2 06 4c 03 15 df 92 0b 01 <49> 8b 42 08 a8 01 74 04 4c 8d 50 ff 49 8b 52 08 4c 89 d0 f6 c2 01
[ 0.779965] RSP: 0000:ffffc900003cbe60 EFLAGS: 00010286
[ 0.779965] RAX: 000002f88032b644 RBX: 000002f80032b644 RCX: 0000000000000000
[ 0.779965] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 000002f80032b644
[ 0.779965] RBP: ffffc900003cbf08 R08: 0000000080000000 R09: ffffc900003cba68
[ 0.779965] R10: ffffebe9e000cac0 R11: 8080808080808080 R12: ffffffff812dcc00
[ 0.779965] R13: 00000000fffffffe R14: 00000000ffffff9c R15: 0000000000000000
[ 0.779965] FS: 0000000000000000(0000) GS:ffff88806e700000(0000) knlGS:0000000000000000
[ 0.782372] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 0.782372] CR2: ffffebe9e000cac8 CR3: 000000000240e000 CR4: 00000000000406e0
[ 0.782372] Call Trace:
[ 0.782372] free_bprm+0x73/0x7c
[ 0.782372] __do_execve_file+0x720/0x7a6
[ 0.782372] do_execve+0x21/0x24
[ 0.782372] call_usermodehelper_exec_async+0x141/0x16c
[ 0.782372] ? umh_complete+0x1a/0x1a
[ 0.782372] ret_from_fork+0x3a/0x50
[ 0.782372] Modules linked in:
[ 0.782372] CR2: ffffebe9e000cac8
[ 0.782372] ---[ end trace 803d9c656c15319d ]---
To reproduce:
# build kernel
cd linux
cp config-5.1.0-rc4-00059-g853fbf8 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 modules_prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 SHELL=/bin/bash
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
1808d65b55 ("asm-generic/tlb: Remove arch_tlb*_mmu()"): BUG: KASAN: stack-out-of-bounds in __change_page_attr_set_clr
by kernel test robot
Greetings,
The 0day kernel testing robot got the dmesg below, and the first bad commit is:
https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git core/mm
commit 1808d65b55e4489770dd4f76fb0dff5b81eb9b11
Author: Peter Zijlstra <peterz(a)infradead.org>
AuthorDate: Thu Sep 20 10:50:11 2018 +0200
Commit: Ingo Molnar <mingo(a)kernel.org>
CommitDate: Wed Apr 3 10:32:58 2019 +0200
asm-generic/tlb: Remove arch_tlb*_mmu()
Now that all architectures are converted to the generic code, remove
the arch hooks.
No change in behavior intended.
Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
Acked-by: Will Deacon <will.deacon(a)arm.com>
Cc: Andrew Morton <akpm(a)linux-foundation.org>
Cc: Andy Lutomirski <luto(a)kernel.org>
Cc: Borislav Petkov <bp(a)alien8.de>
Cc: Dave Hansen <dave.hansen(a)linux.intel.com>
Cc: H. Peter Anvin <hpa(a)zytor.com>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Rik van Riel <riel(a)surriel.com>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
9de7d833e3 s390/tlb: Convert to generic mmu_gather
1808d65b55 asm-generic/tlb: Remove arch_tlb*_mmu()
6455959819 ia64/tlb: Eradicate tlb_migrate_finish() callback
31437a258f Merge branch 'perf/urgent'
+------------------------------------------------------------+------------+------------+------------+------------+
| | 9de7d833e3 | 1808d65b55 | 6455959819 | 31437a258f |
+------------------------------------------------------------+------------+------------+------------+------------+
| boot_successes | 0 | 0 | 0 | 0 |
| boot_failures | 44 | 11 | 11 | 11 |
| BUG:KASAN:stack-out-of-bounds_in__unwind_start | 44 | | | |
| BUG:KASAN:stack-out-of-bounds_in__change_page_attr_set_clr | 0 | 11 | 11 | 11 |
+------------------------------------------------------------+------------+------------+------------+------------+
[ 13.977997] rodata_test: all tests were successful
[ 13.979792] x86/mm: Checking user space page tables
[ 14.011779] x86/mm: Checked W+X mappings: passed, no W+X pages found.
[ 14.013022] Run /init as init process
[ 14.015154] ==================================================================
[ 14.016489] BUG: KASAN: stack-out-of-bounds in __change_page_attr_set_clr+0xa8/0x4df
[ 14.017853] Read of size 8 at addr ffff8880191ef8b0 by task init/1
[ 14.018976]
[ 14.019259] CPU: 0 PID: 1 Comm: init Not tainted 5.1.0-rc3-00029-g1808d65 #3
[ 14.020509] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 14.022028] Call Trace:
[ 14.022471] print_address_description+0x9d/0x26b
[ 14.023295] ? __change_page_attr_set_clr+0xa8/0x4df
[ 14.024161] ? __change_page_attr_set_clr+0xa8/0x4df
[ 14.025031] kasan_report+0x145/0x18a
[ 14.025667] ? __change_page_attr_set_clr+0xa8/0x4df
[ 14.026542] __change_page_attr_set_clr+0xa8/0x4df
[ 14.027433] ? __change_page_attr+0xad0/0xad0
[ 14.028260] ? kasan_unpoison_shadow+0xf/0x2e
[ 14.029062] ? preempt_latency_start+0x22/0x68
[ 14.029962] ? get_page_from_freelist+0xf37/0x1281
[ 14.030796] ? native_flush_tlb_one_user+0x54/0x95
[ 14.031602] ? trace_tlb_flush+0x1f/0x106
[ 14.032352] ? flush_tlb_func_common+0x26a/0x289
[ 14.033322] ? trace_irq_enable_rcuidle+0x21/0xf5
[ 14.034109] __kernel_map_pages+0x148/0x1b1
[ 14.034777] ? set_pages_rw+0x94/0x94
[ 14.035408] ? flush_tlb_mm_range+0x161/0x1ae
[ 14.036134] ? atomic_read+0xe/0x3f
[ 14.036715] ? page_expected_state+0x46/0x81
[ 14.037442] free_unref_page_prepare+0xe1/0x192
[ 14.038201] free_unref_page_list+0xd3/0x319
[ 14.038960] release_pages+0x5d1/0x612
[ 14.039581] ? __put_compound_page+0x91/0x91
[ 14.040346] ? tlb_flush_mmu_tlbonly+0x107/0x1c5
[ 14.041193] ? preempt_latency_start+0x22/0x68
[ 14.041922] ? free_swap_cache+0x51/0xd5
[ 14.042566] tlb_flush_mmu_free+0x31/0xca
[ 14.043254] tlb_finish_mmu+0xf6/0x1b5
[ 14.043883] shift_arg_pages+0x280/0x30b
[ 14.044535] ? __register_binfmt+0x18d/0x18d
[ 14.045259] ? trace_irq_enable_rcuidle+0x21/0xf5
[ 14.046029] ? ___might_sleep+0xac/0x33e
[ 14.046666] setup_arg_pages+0x46a/0x56e
[ 14.047347] ? shift_arg_pages+0x30b/0x30b
[ 14.048208] load_elf_binary+0x888/0x20dd
[ 14.048872] ? _raw_read_unlock+0x14/0x24
[ 14.049532] ? ima_bprm_check+0x18c/0x1c2
[ 14.050199] ? elf_map+0x1e8/0x1e8
[ 14.050756] ? ima_file_mmap+0xf3/0xf3
[ 14.051583] search_binary_handler+0x154/0x511
[ 14.052323] __do_execve_file+0x10b5/0x15e9
[ 14.053004] ? open_exec+0x3a/0x3a
[ 14.053564] ? memcpy+0x34/0x46
[ 14.054095] ? rest_init+0xdd/0xdd
[ 14.054669] kernel_init+0x66/0x10d
[ 14.055262] ? rest_init+0xdd/0xdd
[ 14.055833] ret_from_fork+0x3a/0x50
[ 14.056516]
[ 14.056769] The buggy address belongs to the page:
[ 14.057552] page:ffff88801de82c48 count:0 mapcount:0 mapping:0000000000000000 index:0x0
[ 14.058923] flags: 0x680000000000()
[ 14.059495] raw: 0000680000000000 ffff88801de82c50 ffff88801de82c50 0000000000000000
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 73f7e0e993d885606124134bd88c4c0e6b8b45bd 15ade5d2e7775667cf191cf2f94327a4889f8b9d --
git bisect bad 9b748775c7b377ae207813cb9ecdb0153b74ca55 # 17:54 B 0 9 24 0 Merge 'hwmon/hwmon' into devel-hourly-2019040920
git bisect bad a891bf73affea7bf5e4a7ef78b23b1c3f5b29d58 # 18:05 B 0 11 26 0 Merge 'linux-review/Heiner-Kallweit/net-phy-switch-drivers-to-use-dynamic-feature-detection/20190408-065213' into devel-hourly-2019040920
git bisect good 7ccb8fbbe4f0de58aeaac0f783d926a091de2942 # 18:18 G 11 0 11 11 Merge 'brgl-linux/gpio/for-next' into devel-hourly-2019040920
git bisect bad 081419eb685d308d93e12e1ddea3d02bfa52c0a4 # 18:33 B 0 4 19 0 Merge 'csky-linux/linux-next' into devel-hourly-2019040920
git bisect good 95041a63b3167fdc27aa36ef3b54daeeff12bdae # 18:52 G 11 0 11 11 Merge 'linux-review/Ido-Schimmel/mlxsw-Add-support-for-devlink-info-command/20190408-210315' into devel-hourly-2019040920
git bisect good 404993d745381524b38243176df8f1a11dd99d3b # 19:14 G 11 0 11 11 Merge 'linux-review/Simon-Horman/ravb-Avoid-unsupported-internal-delay-mode-for-R-Car-E3-D3/20190408-204324' into devel-hourly-2019040920
git bisect good 8cad949760cbcba41a6981993d0c65b4604a9e18 # 19:31 G 11 0 11 11 Merge 'gfs2/for-next.glock-refcount' into devel-hourly-2019040920
git bisect good 2c0d83617d30e6e747067a08e48a0e2de7404aa2 # 19:43 G 11 0 11 11 Merge 'pinctrl/devel' into devel-hourly-2019040920
git bisect good 14fb0415d4eba1e4f63efc98d4b3f1b8ceea047f # 19:57 G 11 0 11 11 Merge 'linux-review/Kristian-Evensen/qmi_wwan-Add-quirk-for-Quectel-dynamic-config/20190408-073833' into devel-hourly-2019040920
git bisect bad 8e115919d3366790a875d7dfad33bcb7009a957d # 20:10 B 0 1 16 0 Merge 'tip/master' into devel-hourly-2019040920
git bisect bad 9402fa854486829a7792fbb4038b5585473f3b1a # 20:31 B 0 8 23 0 Merge branch 'perf/urgent'
git bisect good 2e8623e9bc0ba4907e94c4d94a1caeac23d1fadb # 20:44 G 11 0 11 11 Merge branch 'linus'
git bisect good 64604d54d3115fee89598bfb6d8d2252f8a2d114 # 20:54 G 11 0 11 11 sched/x86_64: Don't save flags on context switch
git bisect bad b3fa8ed4e48802e6ba0aa5f3283313a27dcbf46f # 21:04 B 0 11 26 0 asm-generic/tlb: Remove CONFIG_HAVE_GENERIC_MMU_GATHER
git bisect good b78180b97dcf667350aac716cd3f32356eaf4984 # 21:20 G 11 0 11 11 arm/tlb: Convert to generic mmu_gather
git bisect good 6137fed0823247e32306bde2b48cac627c24f894 # 21:30 G 11 0 11 11 arch/tlb: Clean up simple architectures
git bisect good 9de7d833e3708213bf99d75c37483e0f773f5e16 # 21:43 G 11 0 11 11 s390/tlb: Convert to generic mmu_gather
git bisect bad 1808d65b55e4489770dd4f76fb0dff5b81eb9b11 # 21:52 B 0 11 26 0 asm-generic/tlb: Remove arch_tlb*_mmu()
# first bad commit: [1808d65b55e4489770dd4f76fb0dff5b81eb9b11] asm-generic/tlb: Remove arch_tlb*_mmu()
git bisect good 9de7d833e3708213bf99d75c37483e0f773f5e16 # 21:53 G 33 0 33 44 s390/tlb: Convert to generic mmu_gather
# extra tests with debug options
git bisect bad 1808d65b55e4489770dd4f76fb0dff5b81eb9b11 # 22:07 B 0 1 16 0 asm-generic/tlb: Remove arch_tlb*_mmu()
# extra tests on HEAD of linux-devel/devel-hourly-2019040920
git bisect bad 73f7e0e993d885606124134bd88c4c0e6b8b45bd # 22:13 B 0 13 31 0 0day head guard for 'devel-hourly-2019040920'
# extra tests on tree/branch tip/core/mm
git bisect bad 6455959819bf2469190ae9f6b4ccebaa9827e884 # 22:37 B 0 4 19 0 ia64/tlb: Eradicate tlb_migrate_finish() callback
# extra tests on tree/branch tip/master
git bisect bad 31437a258fa637d7449385ef2e1b33efc6786397 # 22:54 B 0 11 26 0 Merge branch 'perf/urgent'
---
[locking/rwsem] 1b94536f2d: stress-ng.bad-altstack.ops_per_sec -32.7% regression
by kernel test robot
Greetings,
FYI, we noticed a -32.7% regression of stress-ng.bad-altstack.ops_per_sec due to commit:
commit: 1b94536f2debc98260fb17b44f7f262e3336f7e0 ("locking/rwsem: Implement lock handoff to prevent lock starvation")
https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git WIP.locking/core
in testcase: stress-ng
on test machine: 272 threads Intel(R) Xeon Phi(TM) CPU 7255 @ 1.10GHz with 112G memory
with the following parameters:
nr_threads: 100%
disk: 1HDD
testtime: 5s
class: memory
cpufreq_governor: performance
Details are as follows:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
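The headline -32.7% can be sanity-checked against the before/after ops_per_sec means reported in the table below (10521 on the base commit 1bcfe0e4cb, 7081 on 1b94536f2d); the one-liner just applies (new - old) / old * 100:

```shell
# Recompute the reported regression from the two ops_per_sec means.
awk 'BEGIN {
    old = 10521   # 1bcfe0e4cb stress-ng.bad-altstack.ops_per_sec
    new = 7081    # 1b94536f2d stress-ng.bad-altstack.ops_per_sec
    printf "%.1f%%\n", (new - old) / old * 100
}'
# prints -32.7%
```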
=========================================================================================
class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime:
memory/gcc-7/performance/1HDD/x86_64-rhel-7.6/100%/debian-x86_64-2018-04-03.cgz/lkp-knm02/stress-ng/5s
commit:
1bcfe0e4cb ("locking/rwsem: Improve scalability via a new locking scheme")
1b94536f2d ("locking/rwsem: Implement lock handoff to prevent lock starvation")
1bcfe0e4cb0efdba 1b94536f2debc98260fb17b44f7
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :4 dmesg.WARNING:at_ip__mutex_lock/0x
:4 25% 1:4 kmsg.DHCP/BOOTP:Reply_not_for_us_on_eth#,op[#]xid[#]
%stddev %change %stddev
\ | \
52766 ± 19% -32.8% 35434 ± 3% stress-ng.bad-altstack.ops
10521 ± 19% -32.7% 7081 ± 3% stress-ng.bad-altstack.ops_per_sec
71472 ± 16% -37.1% 44986 stress-ng.stackmmap.ops
14281 ± 16% -37.0% 9001 stress-ng.stackmmap.ops_per_sec
82779573 -6.0% 77812734 stress-ng.time.minor_page_faults
92256 -43.9% 51723 stress-ng.vm-segv.ops
18416 -43.9% 10340 stress-ng.vm-segv.ops_per_sec
52143 ± 3% -11.7% 46067 ± 12% stress-ng.vm.ops
10429 ± 3% -11.7% 9211 ± 12% stress-ng.vm.ops_per_sec
275453 ± 3% +21.7% 335113 ± 7% cpuidle.POLL.usage
2483631 ± 13% -16.3% 2077801 meminfo.AnonHugePages
5815007 ± 5% -7.8% 5363463 meminfo.Memused
2549 ± 22% -41.7% 1487 ± 17% numa-meminfo.node1.Active
2549 ± 22% -41.7% 1487 ± 17% numa-meminfo.node1.Active(anon)
3076 ± 23% -45.0% 1693 ± 28% numa-meminfo.node1.AnonPages
9607 ± 4% -11.8% 8476 ± 9% softirqs.CPU1.SCHED
9325 ± 11% -18.0% 7646 ± 4% softirqs.CPU177.SCHED
10078 ± 8% -20.7% 7992 ± 10% softirqs.CPU68.SCHED
731566 ± 10% -16.3% 612334 ± 2% numa-vmstat.node0.nr_active_anon
677284 ± 10% -17.0% 562176 ± 3% numa-vmstat.node0.nr_anon_pages
590.50 ± 11% -39.1% 359.75 ± 17% numa-vmstat.node0.nr_isolated_anon
731282 ± 10% -16.3% 612130 ± 2% numa-vmstat.node0.nr_zone_active_anon
654.75 ± 24% -48.6% 336.25 ± 9% numa-vmstat.node1.nr_active_anon
764.75 ± 20% -49.0% 389.75 ± 23% numa-vmstat.node1.nr_anon_pages
615.50 ± 15% -42.1% 356.25 ± 30% numa-vmstat.node1.nr_isolated_anon
646.00 ± 26% -49.8% 324.50 ± 9% numa-vmstat.node1.nr_zone_active_anon
103726 -1.8% 101867 proc-vmstat.nr_inactive_anon
1319 ± 14% -33.8% 873.00 ± 33% proc-vmstat.nr_isolated_anon
156768 ± 2% -4.9% 149062 proc-vmstat.nr_shmem
129637 -1.2% 128122 proc-vmstat.nr_slab_unreclaimable
103726 -1.8% 101867 proc-vmstat.nr_zone_inactive_anon
152368 ± 3% -4.0% 146334 proc-vmstat.numa_huge_pte_updates
78212819 ± 3% -3.9% 75124681 proc-vmstat.numa_pte_updates
83661669 -5.9% 78716234 proc-vmstat.pgfault
4.57 ± 14% +28.0% 5.85 ± 12% sched_debug.cfs_rq:/.runnable_load_avg.stddev
2981436 ± 17% -25.9% 2209429 ± 17% sched_debug.cfs_rq:/.spread0.max
416297 ± 10% -14.7% 355039 ± 10% sched_debug.cfs_rq:/.spread0.stddev
4.57 ± 12% +23.4% 5.64 ± 13% sched_debug.cpu.cpu_load[0].stddev
89872 ± 21% -33.6% 59685 ± 8% sched_debug.cpu.curr->pid.avg
102019 ± 18% -30.7% 70749 ± 8% sched_debug.cpu.curr->pid.max
19999 ± 34% -36.5% 12707 ± 24% sched_debug.cpu.curr->pid.stddev
5636 ± 14% -24.7% 4242 ± 9% sched_debug.cpu.nr_switches.min
1962 ± 6% -7.9% 1807 ± 2% slabinfo.TCPv6.active_objs
1962 ± 6% -7.9% 1807 ± 2% slabinfo.TCPv6.num_objs
12816 ± 6% -16.2% 10742 slabinfo.UNIX.active_objs
12816 ± 6% -16.2% 10742 slabinfo.UNIX.num_objs
14245 ± 4% -7.4% 13193 slabinfo.kmalloc-192.active_objs
14306 ± 4% -7.6% 13219 slabinfo.kmalloc-192.num_objs
1020 ± 6% -10.0% 918.75 ± 5% slabinfo.skbuff_fclone_cache.active_objs
1020 ± 6% -10.0% 918.75 ± 5% slabinfo.skbuff_fclone_cache.num_objs
20792 ± 6% -14.1% 17854 slabinfo.sock_inode_cache.active_objs
20792 ± 6% -14.1% 17854 slabinfo.sock_inode_cache.num_objs
4.88 ± 60% -2.7 2.14 ±173% perf-profile.calltrace.cycles-pp.alloc_new_node_page.migrate_pages.do_move_pages_to_node.kernel_move_pages.__x64_sys_move_pages
4.87 ± 60% -2.7 2.13 ±173% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_new_node_page.migrate_pages.do_move_pages_to_node.kernel_move_pages
4.84 ± 60% -2.7 2.12 ±173% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_new_node_page.migrate_pages.do_move_pages_to_node
4.13 ± 59% -2.4 1.77 ±173% perf-profile.calltrace.cycles-pp.__free_pages_ok.migrate_pages.do_move_pages_to_node.kernel_move_pages.__x64_sys_move_pages
4.21 ± 60% -2.3 1.91 ±173% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_nodemask.alloc_new_node_page.migrate_pages
4.21 ± 60% -2.3 1.91 ±173% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_nodemask.alloc_new_node_page
0.39 ±100% +0.6 0.96 ± 19% perf-profile.calltrace.cycles-pp.rmap_walk_file.try_to_unmap.migrate_pages.migrate_to_node.do_migrate_pages
0.39 ±100% +0.6 0.96 ± 19% perf-profile.calltrace.cycles-pp.try_to_unmap.migrate_pages.migrate_to_node.do_migrate_pages.kernel_migrate_pages
0.42 ±100% +0.7 1.17 ± 21% perf-profile.calltrace.cycles-pp.migrate_pages.migrate_to_node.do_migrate_pages.kernel_migrate_pages.__x64_sys_migrate_pages
0.45 ±100% +0.8 1.20 ± 21% perf-profile.calltrace.cycles-pp.do_migrate_pages.kernel_migrate_pages.__x64_sys_migrate_pages.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.45 ±100% +0.8 1.20 ± 21% perf-profile.calltrace.cycles-pp.migrate_to_node.do_migrate_pages.kernel_migrate_pages.__x64_sys_migrate_pages.do_syscall_64
0.45 ±100% +0.8 1.21 ± 20% perf-profile.calltrace.cycles-pp.__x64_sys_migrate_pages.do_syscall_64.entry_SYSCALL_64_after_hwframe.syscall
0.45 ±100% +0.8 1.21 ± 20% perf-profile.calltrace.cycles-pp.kernel_migrate_pages.__x64_sys_migrate_pages.do_syscall_64.entry_SYSCALL_64_after_hwframe.syscall
1.102e+10 +1.8% 1.121e+10 perf-stat.i.branch-instructions
23.75 ± 2% +1.3 25.02 ± 2% perf-stat.i.cache-miss-rate%
1.308e+08 -4.8% 1.244e+08 perf-stat.i.cache-misses
8.874e+08 -4.4% 8.487e+08 perf-stat.i.cache-references
5277 +3.8% 5479 ± 3% perf-stat.i.cycles-between-cache-misses
5.288e+10 +2.2% 5.405e+10 perf-stat.i.iTLB-loads
5.298e+10 +2.2% 5.415e+10 perf-stat.i.instructions
633.05 +4.7% 662.92 ± 5% perf-stat.i.instructions-per-iTLB-miss
171328 ± 3% -7.6% 158307 perf-stat.i.minor-faults
172988 ± 3% -7.7% 159604 perf-stat.i.page-faults
17.18 -6.2% 16.11 perf-stat.overall.MPKI
10.94 -0.3 10.69 perf-stat.overall.branch-miss-rate%
5.04 -1.7% 4.96 perf-stat.overall.cpi
1946 +5.6% 2055 perf-stat.overall.cycles-between-cache-misses
0.86 -0.0 0.84 perf-stat.overall.iTLB-load-miss-rate%
115.71 +2.7% 118.87 perf-stat.overall.instructions-per-iTLB-miss
0.20 +1.8% 0.20 perf-stat.overall.ipc
1.062e+10 +2.0% 1.083e+10 perf-stat.ps.branch-instructions
1.322e+08 -4.8% 1.258e+08 perf-stat.ps.cache-misses
8.765e+08 -4.1% 8.41e+08 perf-stat.ps.cache-references
5.096e+10 +2.3% 5.214e+10 perf-stat.ps.iTLB-loads
5.101e+10 +2.3% 5.219e+10 perf-stat.ps.instructions
169847 ± 3% -7.7% 156839 perf-stat.ps.minor-faults
171241 ± 3% -7.9% 157736 perf-stat.ps.page-faults
1.394e+13 +2.8% 1.433e+13 perf-stat.total.instructions
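The perf-stat.overall.MPKI figure above appears to be cache references per thousand instructions: dividing the per-second cache-reference count by the per-second instruction count reproduces the reported 17.18 exactly. This derivation is inferred from the numbers in the table, not from documented lkp behavior:

```shell
# Derive perf-stat.overall.MPKI (base commit) from the ps counters above:
# cache-references / instructions * 1000.
awk 'BEGIN {
    refs  = 8.765e8    # perf-stat.ps.cache-references (1bcfe0e4cb)
    insns = 5.101e10   # perf-stat.ps.instructions (1bcfe0e4cb)
    printf "%.2f\n", refs / insns * 1000
}'
# prints 17.18
```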
509.25 ± 20% -43.8% 286.00 ± 20% interrupts.32:IR-PCI-MSI.2621442-edge.eth0-TxRx-1
1235 ±112% -77.3% 280.50 ± 27% interrupts.CPU0.TLB:TLB_shootdowns
385.25 ± 15% -39.1% 234.50 ± 37% interrupts.CPU104.TLB:TLB_shootdowns
490.00 ± 29% -68.2% 155.75 ± 28% interrupts.CPU106.TLB:TLB_shootdowns
2539 ± 41% -37.5% 1587 ± 12% interrupts.CPU11.NMI:Non-maskable_interrupts
2539 ± 41% -37.5% 1587 ± 12% interrupts.CPU11.PMI:Performance_monitoring_interrupts
509.25 ± 20% -43.8% 286.00 ± 20% interrupts.CPU12.32:IR-PCI-MSI.2621442-edge.eth0-TxRx-1
1629 ± 83% -71.7% 460.50 ±120% interrupts.CPU123.TLB:TLB_shootdowns
1244 ± 75% -85.1% 185.25 ± 61% interrupts.CPU125.TLB:TLB_shootdowns
395.00 ± 62% +783.4% 3489 ± 46% interrupts.CPU127.TLB:TLB_shootdowns
3031 ± 33% -63.7% 1101 ± 33% interrupts.CPU128.NMI:Non-maskable_interrupts
3031 ± 33% -63.7% 1101 ± 33% interrupts.CPU128.PMI:Performance_monitoring_interrupts
2805 ± 46% -49.5% 1417 ± 10% interrupts.CPU132.NMI:Non-maskable_interrupts
2805 ± 46% -49.5% 1417 ± 10% interrupts.CPU132.PMI:Performance_monitoring_interrupts
1864 ± 21% -43.7% 1049 ± 26% interrupts.CPU132.RES:Rescheduling_interrupts
2397 ± 24% -53.3% 1120 ± 20% interrupts.CPU133.RES:Rescheduling_interrupts
418.50 ± 19% +377.1% 1996 ± 72% interrupts.CPU134.TLB:TLB_shootdowns
2183 ± 38% -51.3% 1062 ± 32% interrupts.CPU136.NMI:Non-maskable_interrupts
2183 ± 38% -51.3% 1062 ± 32% interrupts.CPU136.PMI:Performance_monitoring_interrupts
2182 ±117% -82.1% 391.75 ± 54% interrupts.CPU136.TLB:TLB_shootdowns
449.50 ± 71% +272.7% 1675 ± 52% interrupts.CPU14.TLB:TLB_shootdowns
599.00 ± 61% -63.4% 219.50 ± 31% interrupts.CPU141.TLB:TLB_shootdowns
1489 ± 14% -26.0% 1101 ± 18% interrupts.CPU146.RES:Rescheduling_interrupts
1777 ± 26% -34.9% 1157 ± 23% interrupts.CPU148.RES:Rescheduling_interrupts
1011 ±106% +258.4% 3623 ± 26% interrupts.CPU15.TLB:TLB_shootdowns
2042 ± 26% -36.9% 1289 ± 17% interrupts.CPU156.NMI:Non-maskable_interrupts
2042 ± 26% -36.9% 1289 ± 17% interrupts.CPU156.PMI:Performance_monitoring_interrupts
370.50 ± 24% -48.6% 190.50 ± 41% interrupts.CPU156.TLB:TLB_shootdowns
3133 ± 25% -45.4% 1711 ± 53% interrupts.CPU157.NMI:Non-maskable_interrupts
3133 ± 25% -45.4% 1711 ± 53% interrupts.CPU157.PMI:Performance_monitoring_interrupts
2533 ± 28% -50.6% 1252 ± 27% interrupts.CPU161.NMI:Non-maskable_interrupts
2533 ± 28% -50.6% 1252 ± 27% interrupts.CPU161.PMI:Performance_monitoring_interrupts
2727 ± 31% -37.3% 1710 ± 8% interrupts.CPU162.NMI:Non-maskable_interrupts
2727 ± 31% -37.3% 1710 ± 8% interrupts.CPU162.PMI:Performance_monitoring_interrupts
2701 ± 28% -56.7% 1169 ± 46% interrupts.CPU163.NMI:Non-maskable_interrupts
2701 ± 28% -56.7% 1169 ± 46% interrupts.CPU163.PMI:Performance_monitoring_interrupts
384.50 ± 9% -63.8% 139.25 ± 43% interrupts.CPU163.TLB:TLB_shootdowns
4114 ± 10% +47.4% 6062 ± 23% interrupts.CPU167.CAL:Function_call_interrupts
421.75 ± 19% -66.5% 141.25 ± 60% interrupts.CPU168.TLB:TLB_shootdowns
2027 ± 94% -67.1% 666.75 ±130% interrupts.CPU172.TLB:TLB_shootdowns
2792 ± 25% -56.4% 1217 ± 47% interrupts.CPU173.NMI:Non-maskable_interrupts
2792 ± 25% -56.4% 1217 ± 47% interrupts.CPU173.PMI:Performance_monitoring_interrupts
2768 ± 42% -54.8% 1251 ± 40% interrupts.CPU176.RES:Rescheduling_interrupts
3623 ± 55% -71.3% 1040 ± 9% interrupts.CPU177.RES:Rescheduling_interrupts
2962 ± 39% -65.8% 1011 ± 45% interrupts.CPU184.NMI:Non-maskable_interrupts
2962 ± 39% -65.8% 1011 ± 45% interrupts.CPU184.PMI:Performance_monitoring_interrupts
2369 ± 36% -48.0% 1231 ± 29% interrupts.CPU190.NMI:Non-maskable_interrupts
2369 ± 36% -48.0% 1231 ± 29% interrupts.CPU190.PMI:Performance_monitoring_interrupts
2072 ± 45% -52.9% 976.25 ± 36% interrupts.CPU192.NMI:Non-maskable_interrupts
2072 ± 45% -52.9% 976.25 ± 36% interrupts.CPU192.PMI:Performance_monitoring_interrupts
2256 ± 37% -56.3% 986.50 ± 41% interrupts.CPU196.NMI:Non-maskable_interrupts
2256 ± 37% -56.3% 986.50 ± 41% interrupts.CPU196.PMI:Performance_monitoring_interrupts
5017 ± 18% +36.5% 6850 ± 16% interrupts.CPU2.CAL:Function_call_interrupts
2325 ± 12% -48.0% 1210 ± 27% interrupts.CPU200.RES:Rescheduling_interrupts
2743 ± 46% -58.2% 1147 ± 33% interrupts.CPU201.NMI:Non-maskable_interrupts
2743 ± 46% -58.2% 1147 ± 33% interrupts.CPU201.PMI:Performance_monitoring_interrupts
2375 ± 18% -50.7% 1170 ± 25% interrupts.CPU201.RES:Rescheduling_interrupts
327.75 ± 34% -53.6% 152.00 ± 46% interrupts.CPU201.TLB:TLB_shootdowns
2648 ± 39% -41.5% 1550 ± 16% interrupts.CPU203.NMI:Non-maskable_interrupts
2648 ± 39% -41.5% 1550 ± 16% interrupts.CPU203.PMI:Performance_monitoring_interrupts
2356 ± 34% -37.3% 1478 ± 17% interrupts.CPU204.NMI:Non-maskable_interrupts
2356 ± 34% -37.3% 1478 ± 17% interrupts.CPU204.PMI:Performance_monitoring_interrupts
3069 ± 27% -60.6% 1210 ± 45% interrupts.CPU206.NMI:Non-maskable_interrupts
3069 ± 27% -60.6% 1210 ± 45% interrupts.CPU206.PMI:Performance_monitoring_interrupts
2338 ± 32% -44.8% 1289 ± 15% interrupts.CPU212.NMI:Non-maskable_interrupts
2338 ± 32% -44.8% 1289 ± 15% interrupts.CPU212.PMI:Performance_monitoring_interrupts
1809 ± 20% -44.9% 996.50 ± 36% interrupts.CPU214.NMI:Non-maskable_interrupts
1809 ± 20% -44.9% 996.50 ± 36% interrupts.CPU214.PMI:Performance_monitoring_interrupts
1743 ± 24% -37.1% 1096 ± 33% interrupts.CPU215.RES:Rescheduling_interrupts
2866 ± 26% -48.6% 1472 ± 34% interrupts.CPU218.NMI:Non-maskable_interrupts
2866 ± 26% -48.6% 1472 ± 34% interrupts.CPU218.PMI:Performance_monitoring_interrupts
2051 ± 20% -45.3% 1121 ± 36% interrupts.CPU219.NMI:Non-maskable_interrupts
2051 ± 20% -45.3% 1121 ± 36% interrupts.CPU219.PMI:Performance_monitoring_interrupts
1488 ± 76% -87.6% 185.00 ± 40% interrupts.CPU219.TLB:TLB_shootdowns
2513 ± 22% -36.5% 1597 ± 15% interrupts.CPU221.NMI:Non-maskable_interrupts
2513 ± 22% -36.5% 1597 ± 15% interrupts.CPU221.PMI:Performance_monitoring_interrupts
2294 ± 33% -49.3% 1162 ± 35% interrupts.CPU222.NMI:Non-maskable_interrupts
2294 ± 33% -49.3% 1162 ± 35% interrupts.CPU222.PMI:Performance_monitoring_interrupts
1384 ± 64% -89.8% 141.00 ± 43% interrupts.CPU222.TLB:TLB_shootdowns
4116 ± 10% +38.3% 5694 ± 9% interrupts.CPU225.CAL:Function_call_interrupts
2770 ± 29% -35.7% 1781 ± 46% interrupts.CPU226.NMI:Non-maskable_interrupts
2770 ± 29% -35.7% 1781 ± 46% interrupts.CPU226.PMI:Performance_monitoring_interrupts
2402 ± 38% -58.0% 1008 ± 43% interrupts.CPU230.NMI:Non-maskable_interrupts
2402 ± 38% -58.0% 1008 ± 43% interrupts.CPU230.PMI:Performance_monitoring_interrupts
212.50 ± 77% +632.5% 1556 ±112% interrupts.CPU234.TLB:TLB_shootdowns
2079 ± 40% -53.2% 973.00 ± 41% interrupts.CPU238.NMI:Non-maskable_interrupts
2079 ± 40% -53.2% 973.00 ± 41% interrupts.CPU238.PMI:Performance_monitoring_interrupts
1701 ± 14% -42.1% 985.00 ± 44% interrupts.CPU239.NMI:Non-maskable_interrupts
1701 ± 14% -42.1% 985.00 ± 44% interrupts.CPU239.PMI:Performance_monitoring_interrupts
2795 ± 20% -47.9% 1456 ± 44% interrupts.CPU24.NMI:Non-maskable_interrupts
2795 ± 20% -47.9% 1456 ± 44% interrupts.CPU24.PMI:Performance_monitoring_interrupts
2524 ± 25% -45.0% 1388 ± 18% interrupts.CPU241.NMI:Non-maskable_interrupts
2524 ± 25% -45.0% 1388 ± 18% interrupts.CPU241.PMI:Performance_monitoring_interrupts
1664 ± 10% -38.1% 1029 ± 38% interrupts.CPU244.NMI:Non-maskable_interrupts
1664 ± 10% -38.1% 1029 ± 38% interrupts.CPU244.PMI:Performance_monitoring_interrupts
3116 ± 43% -40.3% 1860 ± 45% interrupts.CPU25.NMI:Non-maskable_interrupts
3116 ± 43% -40.3% 1860 ± 45% interrupts.CPU25.PMI:Performance_monitoring_interrupts
3872 ± 39% -63.5% 1413 ±115% interrupts.CPU25.TLB:TLB_shootdowns
1868 ± 70% -78.2% 407.75 ± 86% interrupts.CPU251.TLB:TLB_shootdowns
2974 ± 35% -59.5% 1203 ± 45% interrupts.CPU253.NMI:Non-maskable_interrupts
2974 ± 35% -59.5% 1203 ± 45% interrupts.CPU253.PMI:Performance_monitoring_interrupts
1494 ± 77% -53.8% 691.25 ±133% interrupts.CPU256.TLB:TLB_shootdowns
2996 ± 34% -63.0% 1109 ± 35% interrupts.CPU259.NMI:Non-maskable_interrupts
2996 ± 34% -63.0% 1109 ± 35% interrupts.CPU259.PMI:Performance_monitoring_interrupts
1033 ± 3% +40.3% 1449 ± 26% interrupts.CPU259.RES:Rescheduling_interrupts
1042 ±104% -83.4% 172.75 ± 48% interrupts.CPU259.TLB:TLB_shootdowns
1761 ± 18% -34.6% 1151 ± 19% interrupts.CPU260.NMI:Non-maskable_interrupts
1761 ± 18% -34.6% 1151 ± 19% interrupts.CPU260.PMI:Performance_monitoring_interrupts
1738 ± 10% -31.7% 1187 ± 31% interrupts.CPU266.NMI:Non-maskable_interrupts
1738 ± 10% -31.7% 1187 ± 31% interrupts.CPU266.PMI:Performance_monitoring_interrupts
2311 ± 18% -40.8% 1367 ± 5% interrupts.CPU268.RES:Rescheduling_interrupts
2893 ± 32% -52.4% 1375 ± 31% interrupts.CPU269.NMI:Non-maskable_interrupts
2893 ± 32% -52.4% 1375 ± 31% interrupts.CPU269.PMI:Performance_monitoring_interrupts
2200 ± 15% -43.8% 1237 ± 17% interrupts.CPU269.RES:Rescheduling_interrupts
1821 ± 9% -26.7% 1335 ± 25% interrupts.CPU271.RES:Rescheduling_interrupts
2554 ± 54% -56.4% 1113 ± 29% interrupts.CPU31.NMI:Non-maskable_interrupts
2554 ± 54% -56.4% 1113 ± 29% interrupts.CPU31.PMI:Performance_monitoring_interrupts
1145 ± 90% -76.1% 274.00 ± 52% interrupts.CPU31.TLB:TLB_shootdowns
2850 ± 9% -53.6% 1322 ± 78% interrupts.CPU33.TLB:TLB_shootdowns
2536 ± 33% -52.6% 1201 ± 29% interrupts.CPU35.NMI:Non-maskable_interrupts
2536 ± 33% -52.6% 1201 ± 29% interrupts.CPU35.PMI:Performance_monitoring_interrupts
1102 ± 20% +28.2% 1413 ± 8% interrupts.CPU38.RES:Rescheduling_interrupts
1985 ± 10% -36.1% 1268 ± 37% interrupts.CPU41.NMI:Non-maskable_interrupts
1985 ± 10% -36.1% 1268 ± 37% interrupts.CPU41.PMI:Performance_monitoring_interrupts
5682 ± 4% -28.2% 4080 ± 22% interrupts.CPU46.CAL:Function_call_interrupts
2780 ± 46% -83.2% 466.50 ±114% interrupts.CPU47.TLB:TLB_shootdowns
4596 ± 21% +30.4% 5994 ± 13% interrupts.CPU5.CAL:Function_call_interrupts
1176 ± 28% +46.6% 1723 ± 17% interrupts.CPU51.RES:Rescheduling_interrupts
2555 ± 26% -36.7% 1617 ± 7% interrupts.CPU57.NMI:Non-maskable_interrupts
2555 ± 26% -36.7% 1617 ± 7% interrupts.CPU57.PMI:Performance_monitoring_interrupts
2670 ± 34% -45.2% 1464 ± 21% interrupts.CPU58.NMI:Non-maskable_interrupts
2670 ± 34% -45.2% 1464 ± 21% interrupts.CPU58.PMI:Performance_monitoring_interrupts
3217 ± 50% -51.4% 1563 ± 40% interrupts.CPU61.RES:Rescheduling_interrupts
2631 ± 36% -48.1% 1366 ± 32% interrupts.CPU62.NMI:Non-maskable_interrupts
2631 ± 36% -48.1% 1366 ± 32% interrupts.CPU62.PMI:Performance_monitoring_interrupts
3742 ± 43% -69.9% 1127 ± 91% interrupts.CPU62.TLB:TLB_shootdowns
2291 ± 27% -48.8% 1172 ± 22% interrupts.CPU64.RES:Rescheduling_interrupts
2047 ± 14% -38.1% 1268 ± 27% interrupts.CPU65.RES:Rescheduling_interrupts
2116 ± 37% -36.6% 1341 ± 31% interrupts.CPU67.NMI:Non-maskable_interrupts
2116 ± 37% -36.6% 1341 ± 31% interrupts.CPU67.PMI:Performance_monitoring_interrupts
2021 ± 6% -26.6% 1484 ± 26% interrupts.CPU70.NMI:Non-maskable_interrupts
2021 ± 6% -26.6% 1484 ± 26% interrupts.CPU70.PMI:Performance_monitoring_interrupts
3464 ± 33% -60.0% 1387 ± 19% interrupts.CPU71.NMI:Non-maskable_interrupts
3464 ± 33% -60.0% 1387 ± 19% interrupts.CPU71.PMI:Performance_monitoring_interrupts
1752 ± 24% -37.9% 1087 ± 7% interrupts.CPU78.RES:Rescheduling_interrupts
2622 ± 42% -63.4% 959.75 ± 40% interrupts.CPU79.NMI:Non-maskable_interrupts
2622 ± 42% -63.4% 959.75 ± 40% interrupts.CPU79.PMI:Performance_monitoring_interrupts
1856 ± 29% -35.3% 1200 ± 22% interrupts.CPU81.RES:Rescheduling_interrupts
2612 ± 29% -61.6% 1003 ± 37% interrupts.CPU82.NMI:Non-maskable_interrupts
2612 ± 29% -61.6% 1003 ± 37% interrupts.CPU82.PMI:Performance_monitoring_interrupts
1823 ± 97% -89.1% 198.75 ± 50% interrupts.CPU82.TLB:TLB_shootdowns
2753 ± 45% -64.9% 966.50 ± 43% interrupts.CPU83.NMI:Non-maskable_interrupts
2753 ± 45% -64.9% 966.50 ± 43% interrupts.CPU83.PMI:Performance_monitoring_interrupts
2822 ± 39% -39.0% 1720 ± 46% interrupts.CPU89.NMI:Non-maskable_interrupts
2822 ± 39% -39.0% 1720 ± 46% interrupts.CPU89.PMI:Performance_monitoring_interrupts
2729 ± 33% -48.4% 1409 ± 37% interrupts.CPU9.NMI:Non-maskable_interrupts
2729 ± 33% -48.4% 1409 ± 37% interrupts.CPU9.PMI:Performance_monitoring_interrupts
2267 ± 34% -44.5% 1257 ± 26% interrupts.CPU92.NMI:Non-maskable_interrupts
2267 ± 34% -44.5% 1257 ± 26% interrupts.CPU92.PMI:Performance_monitoring_interrupts
985.25 ± 25% +69.4% 1668 ± 37% interrupts.CPU93.RES:Rescheduling_interrupts
349.00 ± 13% +436.6% 1872 ± 80% interrupts.CPU95.TLB:TLB_shootdowns
2435 ± 39% -53.3% 1137 ± 28% interrupts.CPU96.NMI:Non-maskable_interrupts
2435 ± 39% -53.3% 1137 ± 28% interrupts.CPU96.PMI:Performance_monitoring_interrupts
2198 ± 84% -88.5% 252.50 ± 71% interrupts.CPU96.TLB:TLB_shootdowns
2729 ± 32% -55.2% 1223 ± 36% interrupts.CPU97.NMI:Non-maskable_interrupts
2729 ± 32% -55.2% 1223 ± 36% interrupts.CPU97.PMI:Performance_monitoring_interrupts
2800 ± 29% -50.7% 1380 ± 29% interrupts.CPU98.NMI:Non-maskable_interrupts
2800 ± 29% -50.7% 1380 ± 29% interrupts.CPU98.PMI:Performance_monitoring_interrupts
613407 ± 7% -27.5% 444631 ± 17% interrupts.NMI:Non-maskable_interrupts
613407 ± 7% -27.5% 444631 ± 17% interrupts.PMI:Performance_monitoring_interrupts
stress-ng.vm-segv.ops
  [ASCII plot, alignment lost in extraction: bisect-good samples ([*]) cluster around 90000-95000 ops; bisect-bad samples ([O]) fall to roughly 50000-60000 ops]
stress-ng.vm-segv.ops_per_sec
  [ASCII plot, alignment lost in extraction: bisect-good samples ([*]) cluster around 18000-19000 ops/sec; bisect-bad samples ([O]) fall to roughly 10000-12000 ops/sec]
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[locking/rwsem] adc32e8877: will-it-scale.per_thread_ops -21.0% regression
by kernel test robot
Greetings,
FYI, we noticed a -21.0% regression of will-it-scale.per_thread_ops due to commit:
commit: adc32e887793f51b32122c66b0e12b93ff4fdb8c ("locking/rwsem: Enable time-based spinning on reader-owned rwsem")
https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git WIP.locking/core
in testcase: will-it-scale
on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory
with following parameters:
nr_task: 50%
mode: thread
test: page_fault1
cpufreq_governor: performance
test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see if the testcase will scale. It builds both a process and threads based test in order to see any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale
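The page_fault1 testcase exercised here repeatedly maps anonymous memory, touches every page to force a fault, and unmaps, counting iterations per thread. This is not the actual will-it-scale C testcase; it is a minimal Python sketch of that loop (function and variable names are illustrative, not from the benchmark), useful only for seeing the shape of the workload:

```python
import mmap
import os
import threading
import time

PAGE = os.sysconf("SC_PAGE_SIZE")
REGION = 128 * PAGE  # small region; the real testcase maps much more


def page_fault_worker(stop, counts, idx):
    # Each iteration maps anonymous memory, writes one byte per page
    # (forcing a minor fault on each), then unmaps -- the fault path
    # and munmap both contend on the mapping's mmap_sem/rwsem.
    ops = 0
    while not stop.is_set():
        m = mmap.mmap(-1, REGION)
        for off in range(0, REGION, PAGE):
            m[off] = 1  # first write to the page triggers the fault
        m.close()  # unmap; frees the pages back
        ops += 1
    counts[idx] = ops


def run(nr_threads, seconds=0.2):
    # Run nr_threads copies for a fixed duration and report the mean
    # per-thread iteration count, mirroring will-it-scale's
    # per_thread_ops metric.
    stop = threading.Event()
    counts = [0] * nr_threads
    workers = [
        threading.Thread(target=page_fault_worker, args=(stop, counts, i))
        for i in range(nr_threads)
    ]
    for t in workers:
        t.start()
    time.sleep(seconds)
    stop.set()
    for t in workers:
        t.join()
    return sum(counts) / nr_threads
```

Comparing `run(1)` against `run(n)` shows whether the fault/unmap path scales with thread count; in the report above, the regression appears in exactly this per-thread figure under 50% CPU load.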
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase:
gcc-7/performance/x86_64-rhel-7.6/thread/50%/debian-x86_64-2018-04-03-no-ucode.cgz/lkp-bdw-ep3d/page_fault1/will-it-scale
commit:
4407b29e19 ("locking/rwsem: Enable readers spinning on writer")
adc32e8877 ("locking/rwsem: Enable time-based spinning on reader-owned rwsem")
4407b29e19ffd372 adc32e887793f51b32122c66b0e
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
3:4 -75% :4 kmsg.DHCP/BOOTP:Reply_not_for_us_on_eth#,op[#]xid[#]
%stddev %change %stddev
\ | \
222958 -21.0% 176176 ± 2% will-it-scale.per_thread_ops
14388 ± 2% -34.4% 9439 ± 5% will-it-scale.time.involuntary_context_switches
5682228 -15.6% 4794550 ± 5% will-it-scale.time.maximum_resident_set_size
51676661 -20.9% 40893195 ± 2% will-it-scale.time.minor_page_faults
3646 -32.2% 2472 ± 3% will-it-scale.time.percent_of_cpu_this_job_got
10734 -31.6% 7346 ± 3% will-it-scale.time.system_time
247.33 -60.1% 98.68 ± 2% will-it-scale.time.user_time
3946863 ± 2% +11.3% 4394388 ± 3% will-it-scale.time.voluntary_context_switches
9810207 -21.0% 7751796 ± 2% will-it-scale.workload
33.93 ± 10% -6.3% 31.80 ± 9% boot-time.dhcp
33321 ± 12% -15.6% 28119 ± 10% numa-meminfo.node1.KReclaimable
33321 ± 12% -15.6% 28119 ± 10% numa-meminfo.node1.SReclaimable
3572 ± 16% -15.4% 3022 slabinfo.eventpoll_pwq.active_objs
3572 ± 16% -15.4% 3022 slabinfo.eventpoll_pwq.num_objs
58.00 +23.3% 71.50 vmstat.cpu.id
35.25 -39.7% 21.25 ± 6% vmstat.procs.r
27023 +10.7% 29914 ± 2% vmstat.system.cs
6378018 ± 79% +1312.6% 90098072 ± 93% cpuidle.C1.time
335896 ± 67% +907.4% 3383766 ± 89% cpuidle.C1.usage
283589 ± 28% +626.2% 2059300 ± 64% cpuidle.POLL.time
42961 ± 24% +2963.0% 1315908 ± 20% cpuidle.POLL.usage
6085134 ± 7% +5.8% 6438411 ± 8% meminfo.DirectMap2M
371628 ± 6% -24.5% 280495 ± 10% meminfo.DirectMap4k
55613 -20.6% 44144 ± 5% meminfo.Shmem
24543 -23.8% 18709 ± 2% meminfo.max_used_kB
58.04 +13.7 71.78 mpstat.cpu.all.idle%
0.00 ±113% +0.0 0.01 ± 91% mpstat.cpu.all.soft%
40.97 -13.2 27.80 ± 3% mpstat.cpu.all.sys%
1.00 -0.6 0.41 ± 2% mpstat.cpu.all.usr%
28932236 -20.2% 23101880 numa-numastat.node0.local_node
28950748 -20.2% 23111830 numa-numastat.node0.numa_hit
29228231 -21.1% 23046507 ± 2% numa-numastat.node1.local_node
29238249 -21.1% 23065051 ± 2% numa-numastat.node1.numa_hit
14882447 -19.8% 11942555 numa-vmstat.node0.numa_hit
14830170 -19.5% 11932491 numa-vmstat.node0.numa_local
8330 ± 12% -15.6% 7029 ± 10% numa-vmstat.node1.nr_slab_reclaimable
14989784 -20.7% 11887606 ± 2% numa-vmstat.node1.numa_hit
14861825 -21.1% 11718577 ± 2% numa-vmstat.node1.numa_local
1205 -30.4% 839.00 ± 2% turbostat.Avg_MHz
43.35 -13.2 30.17 ± 2% turbostat.Busy%
333289 ± 68% +914.2% 3380326 ± 89% turbostat.C1
0.02 ± 96% +0.3 0.34 ± 93% turbostat.C1%
1.70 ± 10% +202.2% 5.14 ± 11% turbostat.Pkg%pc2
209.59 -13.5% 181.27 turbostat.PkgWatt
22.38 -15.2% 18.97 turbostat.RAMWatt
785556 +1.9% 800400 proc-vmstat.nr_active_anon
773283 +1.9% 788100 proc-vmstat.nr_anon_pages
1421 +2.7% 1460 proc-vmstat.nr_anon_transparent_hugepages
274260 -1.5% 270044 proc-vmstat.nr_file_pages
4599 -3.9% 4420 proc-vmstat.nr_inactive_anon
6904 -3.5% 6661 proc-vmstat.nr_mapped
2978 -1.7% 2927 proc-vmstat.nr_page_table_pages
13918 -20.8% 11029 ± 6% proc-vmstat.nr_shmem
785553 +1.9% 800400 proc-vmstat.nr_zone_active_anon
4599 -3.9% 4420 proc-vmstat.nr_zone_inactive_anon
58104829 -20.6% 46153605 ± 2% proc-vmstat.numa_hit
7075 ± 2% -60.5% 2793 ± 11% proc-vmstat.numa_huge_pte_updates
58076290 -20.6% 46125099 ± 2% proc-vmstat.numa_local
3708795 ± 2% -60.5% 1463190 ± 11% proc-vmstat.numa_pte_updates
13129 ± 4% -32.8% 8818 ± 10% proc-vmstat.pgactivate
2.957e+09 -21.0% 2.336e+09 ± 2% proc-vmstat.pgalloc_normal
52427112 -20.5% 41660634 ± 2% proc-vmstat.pgfault
2.956e+09 -21.0% 2.335e+09 ± 2% proc-vmstat.pgfree
5670454 -21.0% 4479930 ± 2% proc-vmstat.thp_deferred_split_page
5671853 -21.0% 4481069 ± 2% proc-vmstat.thp_fault_alloc
2560591 ± 2% -51.2% 1249187 ± 6% sched_debug.cfs_rq:/.min_vruntime.avg
4759988 -50.7% 2345356 ± 5% sched_debug.cfs_rq:/.min_vruntime.max
1567395 ± 7% -56.5% 681789 ± 13% sched_debug.cfs_rq:/.min_vruntime.stddev
20.20 ± 62% +77.7% 35.91 ± 17% sched_debug.cfs_rq:/.removed.load_avg.stddev
932.02 ± 62% +78.0% 1659 ± 17% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
23.96 ± 4% +53.2% 36.71 ± 13% sched_debug.cfs_rq:/.runnable_load_avg.max
2142363 ± 8% -53.8% 989520 ± 21% sched_debug.cfs_rq:/.spread0.avg
4341759 ± 4% -52.0% 2085702 ± 8% sched_debug.cfs_rq:/.spread0.max
1567398 ± 7% -56.5% 681790 ± 13% sched_debug.cfs_rq:/.spread0.stddev
559.79 ± 5% -32.7% 376.85 ± 8% sched_debug.cfs_rq:/.util_avg.avg
964.92 ± 2% -12.4% 844.88 ± 2% sched_debug.cfs_rq:/.util_avg.max
388.94 ± 3% -28.1% 279.66 ± 12% sched_debug.cfs_rq:/.util_avg.stddev
873.08 ± 2% -20.7% 692.33 ± 5% sched_debug.cfs_rq:/.util_est_enqueued.max
308.97 ± 12% -32.1% 209.70 ± 26% sched_debug.cfs_rq:/.util_est_enqueued.stddev
723517 ± 2% +12.4% 812920 ± 3% sched_debug.cpu.avg_idle.avg
247774 ± 4% -15.9% 208263 ± 6% sched_debug.cpu.avg_idle.stddev
23.96 ± 4% +228.0% 78.58 ± 92% sched_debug.cpu.cpu_load[0].max
24.83 ± 18% +125.7% 56.04 ± 65% sched_debug.cpu.cpu_load[1].max
27.92 ± 15% +26.3% 35.25 ± 16% sched_debug.cpu.cpu_load[3].max
47157 ± 2% +10.1% 51938 ± 3% sched_debug.cpu.nr_switches.avg
94844 ± 3% +11.2% 105419 ± 2% sched_debug.cpu.nr_switches.max
0.26 ±100% +0.4 0.64 ± 7% perf-profile.calltrace.cycles-pp._cond_resched.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
0.82 ± 8% +0.4 1.23 ± 8% perf-profile.calltrace.cycles-pp.do_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
1.34 ± 4% +0.6 1.97 ± 8% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
1.34 ± 4% +0.6 1.97 ± 8% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.00 +0.7 0.66 ± 20% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
0.00 +0.7 0.66 ± 9% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.do_anonymous_page.__handle_mm_fault
0.00 +0.7 0.67 ± 9% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.do_anonymous_page.__handle_mm_fault.handle_mm_fault
0.00 +0.7 0.67 ± 8% perf-profile.calltrace.cycles-pp.alloc_pages_vma.do_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.00 +0.7 0.72 ± 8% perf-profile.calltrace.cycles-pp.__free_pages_ok.release_pages.tlb_flush_mmu_free.arch_tlb_finish_mmu.tlb_finish_mmu
0.00 +0.8 0.82 ± 8% perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu_free.arch_tlb_finish_mmu.tlb_finish_mmu.unmap_region
0.00 +0.8 0.82 ± 8% perf-profile.calltrace.cycles-pp.tlb_flush_mmu_free.arch_tlb_finish_mmu.tlb_finish_mmu.unmap_region.__do_munmap
0.42 ± 59% +0.8 1.25 ± 23% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
0.00 +0.8 0.84 ± 8% perf-profile.calltrace.cycles-pp.tlb_finish_mmu.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
0.00 +0.8 0.84 ± 8% perf-profile.calltrace.cycles-pp.arch_tlb_finish_mmu.tlb_finish_mmu.unmap_region.__do_munmap.__vm_munmap
0.15 ±173% +1.0 1.16 ± 25% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry
0.00 +1.0 1.01 ± 9% perf-profile.calltrace.cycles-pp.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
0.00 +1.0 1.04 ± 8% perf-profile.calltrace.cycles-pp.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.1 1.06 ± 8% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.munmap
0.00 +1.1 1.06 ± 8% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
0.00 +1.1 1.06 ± 8% perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
0.00 +1.1 1.06 ± 8% perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
0.00 +1.1 1.06 ± 9% perf-profile.calltrace.cycles-pp.munmap
0.00 +1.4 1.45 ± 8% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_nodemask.do_huge_pmd_anonymous_page
0.00 +1.5 1.46 ± 8% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault
240.08 -18.3% 196.23 ± 2% perf-stat.i.MPKI
6.188e+08 +3.3% 6.39e+08 perf-stat.i.branch-instructions
1.05 ± 16% +0.6 1.68 ± 18% perf-stat.i.branch-miss-rate%
6838156 ± 14% +61.1% 11019347 ± 18% perf-stat.i.branch-misses
6.51 -0.8 5.73 perf-stat.i.cache-miss-rate%
45477357 -27.6% 32915438 ± 2% perf-stat.i.cache-misses
6.986e+08 -17.9% 5.736e+08 ± 2% perf-stat.i.cache-references
27189 +10.7% 30108 ± 2% perf-stat.i.context-switches
35.08 -30.5% 24.39 ± 2% perf-stat.i.cpi
1.021e+11 -30.0% 7.141e+10 ± 2% perf-stat.i.cpu-cycles
18.74 ± 2% -58.1% 7.86 ± 6% perf-stat.i.cpu-migrations
2246 -3.4% 2168 perf-stat.i.cycles-between-cache-misses
0.25 ± 7% +0.1 0.36 ± 8% perf-stat.i.dTLB-load-miss-rate%
2037936 ± 7% +43.1% 2915695 ± 7% perf-stat.i.dTLB-load-misses
0.11 +0.0 0.12 ± 7% perf-stat.i.dTLB-store-miss-rate%
1.828e+09 -14.4% 1.564e+09 perf-stat.i.dTLB-stores
35.58 ± 4% +11.7 47.31 ± 6% perf-stat.i.iTLB-load-miss-rate%
650313 ± 6% +24.5% 809395 ± 4% perf-stat.i.iTLB-load-misses
1176138 ± 2% -22.9% 907288 ± 10% perf-stat.i.iTLB-loads
4555 ± 7% -19.6% 3664 ± 5% perf-stat.i.instructions-per-iTLB-miss
0.03 +44.4% 0.04 ± 2% perf-stat.i.ipc
173825 -20.6% 138089 ± 2% perf-stat.i.minor-faults
8.18 ± 2% +1.7 9.87 ± 3% perf-stat.i.node-load-miss-rate%
728666 ± 4% -19.8% 584113 ± 2% perf-stat.i.node-load-misses
8200847 ± 3% -34.7% 5351398 ± 2% perf-stat.i.node-loads
1.82 -0.2 1.64 ± 2% perf-stat.i.node-store-miss-rate%
447002 ± 2% -35.5% 288326 ± 4% perf-stat.i.node-store-misses
24131452 -28.2% 17333253 ± 3% perf-stat.i.node-stores
173824 -20.6% 138088 ± 2% perf-stat.i.page-faults
237.41 -18.3% 194.01 ± 2% perf-stat.overall.MPKI
1.11 ± 15% +0.6 1.72 ± 18% perf-stat.overall.branch-miss-rate%
6.51 -0.8 5.74 perf-stat.overall.cache-miss-rate%
34.69 -30.4% 24.15 ± 2% perf-stat.overall.cpi
2244 -3.4% 2169 perf-stat.overall.cycles-between-cache-misses
0.25 ± 7% +0.1 0.36 ± 8% perf-stat.overall.dTLB-load-miss-rate%
0.11 +0.0 0.12 ± 7% perf-stat.overall.dTLB-store-miss-rate%
35.58 ± 4% +11.7 47.27 ± 6% perf-stat.overall.iTLB-load-miss-rate%
4545 ± 7% -19.4% 3661 ± 5% perf-stat.overall.instructions-per-iTLB-miss
0.03 +43.7% 0.04 ± 2% perf-stat.overall.ipc
8.16 ± 2% +1.7 9.85 ± 3% perf-stat.overall.node-load-miss-rate%
1.82 -0.2 1.64 perf-stat.overall.node-store-miss-rate%
90187 +27.2% 114676 perf-stat.overall.path-length
6.168e+08 +3.3% 6.369e+08 perf-stat.ps.branch-instructions
6819807 ± 14% +61.1% 10987177 ± 18% perf-stat.ps.branch-misses
45322958 -27.6% 32804347 ± 2% perf-stat.ps.cache-misses
6.962e+08 -17.9% 5.716e+08 ± 2% perf-stat.ps.cache-references
27098 +10.7% 30006 ± 2% perf-stat.ps.context-switches
1.017e+11 -30.0% 7.117e+10 ± 2% perf-stat.ps.cpu-cycles
18.69 ± 2% -58.0% 7.84 ± 6% perf-stat.ps.cpu-migrations
2031217 ± 7% +43.1% 2905916 ± 7% perf-stat.ps.dTLB-load-misses
1.821e+09 -14.4% 1.559e+09 perf-stat.ps.dTLB-stores
648188 ± 6% +24.5% 806705 ± 4% perf-stat.ps.iTLB-load-misses
1172160 ± 2% -22.9% 904228 ± 10% perf-stat.ps.iTLB-loads
173246 -20.6% 137623 ± 2% perf-stat.ps.minor-faults
726221 ± 4% -19.8% 582154 ± 2% perf-stat.ps.node-load-misses
8173121 ± 3% -34.7% 5333276 ± 2% perf-stat.ps.node-loads
445502 ± 2% -35.5% 287358 ± 4% perf-stat.ps.node-store-misses
24049168 -28.2% 17274645 ± 3% perf-stat.ps.node-stores
173246 -20.6% 137622 ± 2% perf-stat.ps.page-faults
39652 ± 2% +9.9% 43581 ± 3% softirqs.CPU0.SCHED
107352 +13.2% 121534 softirqs.CPU0.TIMER
30790 ± 19% +47.1% 45286 ± 3% softirqs.CPU1.SCHED
103533 ± 2% +10.6% 114471 ± 3% softirqs.CPU1.TIMER
28329 ± 9% +53.3% 43421 ± 4% softirqs.CPU10.SCHED
104203 ± 2% +11.9% 116563 ± 3% softirqs.CPU10.TIMER
25132 ± 12% +80.2% 45299 ± 8% softirqs.CPU11.SCHED
102967 +12.6% 115922 ± 4% softirqs.CPU11.TIMER
26183 ± 15% +75.0% 45808 ± 5% softirqs.CPU12.SCHED
103373 +11.9% 115631 ± 3% softirqs.CPU12.TIMER
31612 ± 18% +48.5% 46932 ± 7% softirqs.CPU13.SCHED
104644 +10.3% 115470 ± 2% softirqs.CPU13.TIMER
26288 ± 7% +75.2% 46070 ± 5% softirqs.CPU14.SCHED
104066 ± 2% +12.6% 117183 softirqs.CPU14.TIMER
24804 ± 16% +81.9% 45125 ± 2% softirqs.CPU15.SCHED
102933 ± 2% +14.9% 118222 softirqs.CPU15.TIMER
23752 ± 18% +83.4% 43562 ± 5% softirqs.CPU16.SCHED
102635 ± 2% +15.0% 118001 softirqs.CPU16.TIMER
29082 ± 16% +56.7% 45576 ± 8% softirqs.CPU17.SCHED
103248 ± 2% +14.0% 117737 softirqs.CPU17.TIMER
22195 ± 10% +102.4% 44921 ± 6% softirqs.CPU18.SCHED
102277 ± 2% +15.4% 118070 softirqs.CPU18.TIMER
27975 ± 11% +66.1% 46461 ± 4% softirqs.CPU19.SCHED
27177 ± 26% +72.7% 46948 ± 3% softirqs.CPU2.SCHED
105320 +24.8% 131418 ± 16% softirqs.CPU2.TIMER
30379 ± 11% +48.8% 45203 ± 3% softirqs.CPU20.SCHED
104150 ± 3% +13.8% 118540 softirqs.CPU20.TIMER
30891 ± 11% +52.6% 47154 softirqs.CPU21.SCHED
26164 ± 17% +71.6% 44900 ± 6% softirqs.CPU22.SCHED
31366 ± 17% +43.1% 44873 ± 7% softirqs.CPU23.SCHED
24394 ± 14% +83.5% 44774 ± 7% softirqs.CPU24.SCHED
27442 ± 16% +67.3% 45907 ± 3% softirqs.CPU25.SCHED
31616 ± 23% +42.9% 45164 softirqs.CPU26.SCHED
26466 ± 16% +71.3% 45336 ± 4% softirqs.CPU27.SCHED
29151 ± 10% +45.6% 42433 ± 2% softirqs.CPU28.SCHED
28455 ± 18% +64.1% 46688 softirqs.CPU29.SCHED
20502 ± 2% +12.4% 23051 ± 5% softirqs.CPU3.RCU
24116 ± 18% +91.5% 46193 ± 3% softirqs.CPU3.SCHED
102093 ± 2% +17.6% 120036 softirqs.CPU3.TIMER
27645 ± 26% +68.6% 46601 ± 6% softirqs.CPU30.SCHED
25045 ± 14% +69.7% 42498 ± 3% softirqs.CPU31.SCHED
28097 ± 3% +54.0% 43277 ± 6% softirqs.CPU32.SCHED
22500 ± 18% +100.2% 45043 ± 6% softirqs.CPU33.SCHED
24277 ± 18% +82.5% 44309 ± 5% softirqs.CPU34.SCHED
30861 ± 16% +52.7% 47113 ± 4% softirqs.CPU35.SCHED
25425 ± 30% +77.4% 45117 ± 6% softirqs.CPU36.SCHED
27246 ± 25% +70.5% 46449 ± 6% softirqs.CPU37.SCHED
24448 ± 30% +84.4% 45094 ± 5% softirqs.CPU38.SCHED
25117 ± 34% +77.0% 44447 softirqs.CPU39.SCHED
23205 ± 16% +98.6% 46077 ± 3% softirqs.CPU4.SCHED
102105 +13.4% 115741 ± 2% softirqs.CPU4.TIMER
28942 ± 29% +54.1% 44603 softirqs.CPU40.SCHED
27338 ± 15% +65.3% 45199 ± 7% softirqs.CPU41.SCHED
29373 ± 18% +52.6% 44834 ± 7% softirqs.CPU42.SCHED
32350 ± 14% +34.0% 43336 ± 3% softirqs.CPU43.SCHED
14249 ± 4% +18.4% 16869 ± 5% softirqs.CPU44.RCU
20276 ± 6% +139.1% 48484 ± 2% softirqs.CPU44.SCHED
100697 ± 2% +15.3% 116116 softirqs.CPU44.TIMER
27640 ± 19% +59.8% 44160 ± 4% softirqs.CPU45.SCHED
104558 ± 3% +13.8% 118954 ± 3% softirqs.CPU45.TIMER
32781 ± 14% +34.5% 44095 ± 3% softirqs.CPU47.SCHED
33772 ± 12% +28.2% 43288 ± 4% softirqs.CPU48.SCHED
104880 ± 2% +11.9% 117310 ± 2% softirqs.CPU48.TIMER
32556 ± 13% +39.2% 45330 ± 4% softirqs.CPU49.SCHED
24734 ± 13% +77.3% 43855 ± 3% softirqs.CPU5.SCHED
103789 +12.1% 116309 ± 3% softirqs.CPU5.TIMER
26588 ± 14% -18.5% 21657 ± 6% softirqs.CPU50.RCU
34458 ± 8% +31.5% 45315 ± 5% softirqs.CPU50.SCHED
105036 ± 2% +11.9% 117517 ± 2% softirqs.CPU50.TIMER
34237 ± 15% +31.4% 44981 ± 6% softirqs.CPU51.SCHED
32473 ± 11% +35.0% 43834 ± 6% softirqs.CPU52.SCHED
104568 ± 2% +12.4% 117503 ± 3% softirqs.CPU52.TIMER
30645 ± 13% +45.6% 44617 ± 3% softirqs.CPU53.SCHED
104251 ± 2% +12.5% 117286 ± 2% softirqs.CPU53.TIMER
29085 ± 6% +57.5% 45808 ± 4% softirqs.CPU54.SCHED
31859 ± 9% +38.7% 44198 ± 8% softirqs.CPU55.SCHED
104882 ± 3% +11.5% 116985 softirqs.CPU55.TIMER
30625 ± 13% +42.0% 43489 ± 5% softirqs.CPU56.SCHED
103851 ± 2% +12.8% 117106 softirqs.CPU56.TIMER
25529 ± 27% +65.2% 42183 ± 8% softirqs.CPU57.SCHED
102045 ± 4% +15.8% 118201 ± 2% softirqs.CPU57.TIMER
31198 ± 7% +39.2% 43421 ± 6% softirqs.CPU58.SCHED
103835 ± 2% +14.3% 118680 softirqs.CPU58.TIMER
33286 ± 13% +32.3% 44052 ± 3% softirqs.CPU59.SCHED
104067 ± 2% +12.3% 116885 softirqs.CPU59.TIMER
23032 ± 9% +92.7% 44375 ± 5% softirqs.CPU6.SCHED
102378 +13.9% 116623 ± 3% softirqs.CPU6.TIMER
33521 ± 14% +36.0% 45604 ± 4% softirqs.CPU60.SCHED
104127 ± 2% +11.5% 116107 ± 2% softirqs.CPU60.TIMER
28074 ± 16% +53.9% 43204 ± 8% softirqs.CPU61.SCHED
103917 ± 2% +12.4% 116757 ± 2% softirqs.CPU61.TIMER
34955 ± 5% +25.7% 43944 ± 6% softirqs.CPU62.SCHED
106068 ± 2% +12.2% 119043 softirqs.CPU62.TIMER
29799 ± 8% +42.8% 42559 ± 5% softirqs.CPU63.SCHED
103835 ± 2% +13.6% 117936 ± 2% softirqs.CPU63.TIMER
27444 ± 11% +61.0% 44190 ± 2% softirqs.CPU64.SCHED
103157 ± 2% +14.7% 118349 softirqs.CPU64.TIMER
27539 ± 12% +53.7% 42331 softirqs.CPU65.SCHED
33472 ± 12% +34.2% 44915 ± 4% softirqs.CPU66.SCHED
26992 ± 18% +62.9% 43958 ± 5% softirqs.CPU67.SCHED
33076 ± 7% +32.6% 43849 ± 7% softirqs.CPU68.SCHED
30471 ± 16% +42.1% 43288 softirqs.CPU69.SCHED
22903 ± 26% +95.0% 44663 ± 6% softirqs.CPU7.SCHED
102386 ± 3% +13.4% 116076 ± 3% softirqs.CPU7.TIMER
26717 ± 26% +65.7% 44272 softirqs.CPU70.SCHED
31545 ± 12% +41.8% 44717 ± 5% softirqs.CPU71.SCHED
29237 ± 12% +58.6% 46373 ± 4% softirqs.CPU72.SCHED
29733 ± 16% +42.1% 42255 ± 3% softirqs.CPU73.SCHED
29977 ± 21% +39.6% 41860 ± 7% softirqs.CPU74.SCHED
32778 ± 9% +41.4% 46341 ± 4% softirqs.CPU75.SCHED
29721 ± 5% +52.3% 45255 ± 3% softirqs.CPU76.SCHED
35096 ± 9% +26.2% 44287 ± 2% softirqs.CPU77.SCHED
103979 ± 2% +22.4% 127298 ± 16% softirqs.CPU77.TIMER
33571 ± 12% +34.9% 45293 ± 7% softirqs.CPU78.SCHED
27032 ± 16% +53.8% 41571 ± 5% softirqs.CPU79.SCHED
24602 ± 18% +85.0% 45505 ± 5% softirqs.CPU8.SCHED
102919 ± 2% +12.7% 115994 ± 2% softirqs.CPU8.TIMER
32946 ± 22% +34.7% 44368 ± 5% softirqs.CPU80.SCHED
31490 ± 18% +37.8% 43391 ± 7% softirqs.CPU81.SCHED
33486 ± 20% +32.6% 44402 ± 5% softirqs.CPU82.SCHED
32684 ± 26% +34.6% 43989 ± 3% softirqs.CPU83.SCHED
29156 ± 28% +52.4% 44431 ± 4% softirqs.CPU84.SCHED
30623 ± 15% +44.0% 44085 ± 4% softirqs.CPU85.SCHED
30209 ± 13% +53.2% 46285 ± 5% softirqs.CPU86.SCHED
26704 ± 11% +76.3% 47091 ± 2% softirqs.CPU87.SCHED
26401 ± 15% +69.0% 44606 ± 3% softirqs.CPU9.SCHED
103291 ± 2% +12.2% 115944 ± 3% softirqs.CPU9.TIMER
2546947 +54.4% 3933302 softirqs.SCHED
506.75 ± 33% -54.8% 229.00 ± 34% interrupts.39:PCI-MSI.3145736-edge.eth0-TxRx-7
1206416 +123.1% 2691966 ± 2% interrupts.CAL:Function_call_interrupts
12861 ± 37% +154.5% 32732 ± 36% interrupts.CPU1.CAL:Function_call_interrupts
4735 ± 35% -39.7% 2855 ± 36% interrupts.CPU1.NMI:Non-maskable_interrupts
4735 ± 35% -39.7% 2855 ± 36% interrupts.CPU1.PMI:Performance_monitoring_interrupts
10297 ± 53% +195.1% 30384 ± 40% interrupts.CPU1.TLB:TLB_shootdowns
10943 ± 61% +288.9% 42564 ± 41% interrupts.CPU13.CAL:Function_call_interrupts
413.50 ±102% +233.3% 1378 ± 49% interrupts.CPU13.RES:Rescheduling_interrupts
8158 ± 90% +396.8% 40527 ± 45% interrupts.CPU13.TLB:TLB_shootdowns
17692 ± 22% +88.8% 33398 ± 19% interrupts.CPU15.CAL:Function_call_interrupts
6203 ± 4% -42.0% 3597 ± 24% interrupts.CPU15.NMI:Non-maskable_interrupts
6203 ± 4% -42.0% 3597 ± 24% interrupts.CPU15.PMI:Performance_monitoring_interrupts
15585 ± 28% +99.7% 31120 ± 20% interrupts.CPU15.TLB:TLB_shootdowns
5696 ± 19% -57.0% 2449 ± 19% interrupts.CPU16.NMI:Non-maskable_interrupts
5696 ± 19% -57.0% 2449 ± 19% interrupts.CPU16.PMI:Performance_monitoring_interrupts
506.75 ± 33% -54.8% 229.00 ± 34% interrupts.CPU18.39:PCI-MSI.3145736-edge.eth0-TxRx-7
6395 -49.3% 3239 ± 28% interrupts.CPU18.NMI:Non-maskable_interrupts
6395 -49.3% 3239 ± 28% interrupts.CPU18.PMI:Performance_monitoring_interrupts
14613 ± 18% +176.2% 40354 ± 29% interrupts.CPU19.CAL:Function_call_interrupts
512.00 ± 16% +153.3% 1296 ± 32% interrupts.CPU19.RES:Rescheduling_interrupts
12039 ± 23% +217.5% 38224 ± 32% interrupts.CPU19.TLB:TLB_shootdowns
15894 ± 44% +137.0% 37661 ± 29% interrupts.CPU2.CAL:Function_call_interrupts
13480 ± 58% +162.8% 35426 ± 32% interrupts.CPU2.TLB:TLB_shootdowns
12773 ± 23% +160.6% 33284 ± 25% interrupts.CPU20.CAL:Function_call_interrupts
435.00 ± 26% +119.5% 955.00 ± 16% interrupts.CPU20.RES:Rescheduling_interrupts
10134 ± 34% +205.4% 30951 ± 27% interrupts.CPU20.TLB:TLB_shootdowns
11925 ± 30% +273.0% 44485 ± 10% interrupts.CPU21.CAL:Function_call_interrupts
442.25 ± 19% +177.0% 1225 ± 14% interrupts.CPU21.RES:Rescheduling_interrupts
9240 ± 39% +360.4% 42544 ± 10% interrupts.CPU21.TLB:TLB_shootdowns
11296 ± 43% +198.7% 33747 ± 49% interrupts.CPU23.CAL:Function_call_interrupts
371.00 ± 41% +150.8% 930.50 ± 34% interrupts.CPU23.RES:Rescheduling_interrupts
8604 ± 58% +266.0% 31493 ± 54% interrupts.CPU23.TLB:TLB_shootdowns
15140 ± 35% +156.1% 38776 ± 15% interrupts.CPU25.CAL:Function_call_interrupts
569.50 ± 56% +109.0% 1190 ± 17% interrupts.CPU25.RES:Rescheduling_interrupts
12617 ± 43% +190.2% 36620 ± 17% interrupts.CPU25.TLB:TLB_shootdowns
11134 ± 62% +201.8% 33609 ± 12% interrupts.CPU26.CAL:Function_call_interrupts
356.00 ± 72% +186.0% 1018 ± 22% interrupts.CPU26.RES:Rescheduling_interrupts
8317 ± 85% +276.2% 31289 ± 13% interrupts.CPU26.TLB:TLB_shootdowns
16482 ± 27% +84.7% 30449 ± 37% interrupts.CPU27.CAL:Function_call_interrupts
14166 ± 33% +97.9% 28027 ± 42% interrupts.CPU27.TLB:TLB_shootdowns
4733 ± 33% -47.4% 2491 ± 19% interrupts.CPU28.NMI:Non-maskable_interrupts
4733 ± 33% -47.4% 2491 ± 19% interrupts.CPU28.PMI:Performance_monitoring_interrupts
502.00 ± 43% +69.5% 851.00 ± 15% interrupts.CPU28.RES:Rescheduling_interrupts
14068 ± 34% +214.8% 44283 ± 8% interrupts.CPU29.CAL:Function_call_interrupts
499.25 ± 39% +168.8% 1341 ± 17% interrupts.CPU29.RES:Rescheduling_interrupts
11502 ± 44% +267.7% 42292 ± 9% interrupts.CPU29.TLB:TLB_shootdowns
18770 ± 24% +89.3% 35532 ± 21% interrupts.CPU3.CAL:Function_call_interrupts
4889 ± 30% -55.3% 2186 ± 34% interrupts.CPU3.NMI:Non-maskable_interrupts
4889 ± 30% -55.3% 2186 ± 34% interrupts.CPU3.PMI:Performance_monitoring_interrupts
16584 ± 31% +100.5% 33245 ± 23% interrupts.CPU3.TLB:TLB_shootdowns
14747 ± 43% +195.7% 43601 ± 34% interrupts.CPU30.CAL:Function_call_interrupts
12277 ± 54% +238.2% 41524 ± 37% interrupts.CPU30.TLB:TLB_shootdowns
4960 ± 29% -55.4% 2213 ± 34% interrupts.CPU31.NMI:Non-maskable_interrupts
4960 ± 29% -55.4% 2213 ± 34% interrupts.CPU31.PMI:Performance_monitoring_interrupts
5793 ± 17% -45.8% 3139 ± 32% interrupts.CPU32.NMI:Non-maskable_interrupts
5793 ± 17% -45.8% 3139 ± 32% interrupts.CPU32.PMI:Performance_monitoring_interrupts
6341 -55.7% 2811 ± 46% interrupts.CPU33.NMI:Non-maskable_interrupts
6341 -55.7% 2811 ± 46% interrupts.CPU33.PMI:Performance_monitoring_interrupts
11612 ± 43% +301.1% 46572 ± 24% interrupts.CPU35.CAL:Function_call_interrupts
494.50 ± 36% +177.5% 1372 ± 25% interrupts.CPU35.RES:Rescheduling_interrupts
8895 ± 60% +401.5% 44613 ± 26% interrupts.CPU35.TLB:TLB_shootdowns
17143 ± 48% +84.3% 31591 ± 15% interrupts.CPU39.CAL:Function_call_interrupts
5747 ± 18% -46.2% 3093 ± 33% interrupts.CPU39.NMI:Non-maskable_interrupts
5747 ± 18% -46.2% 3093 ± 33% interrupts.CPU39.PMI:Performance_monitoring_interrupts
14848 ± 59% +96.7% 29211 ± 17% interrupts.CPU39.TLB:TLB_shootdowns
19066 ± 22% +97.1% 37572 ± 20% interrupts.CPU4.CAL:Function_call_interrupts
16957 ± 28% +108.3% 35327 ± 22% interrupts.CPU4.TLB:TLB_shootdowns
13209 ± 63% +138.2% 31460 ± 17% interrupts.CPU40.CAL:Function_call_interrupts
10995 ± 77% +164.9% 29121 ± 19% interrupts.CPU40.TLB:TLB_shootdowns
5140 ± 24% -51.3% 2504 ± 18% interrupts.CPU43.NMI:Non-maskable_interrupts
5140 ± 24% -51.3% 2504 ± 18% interrupts.CPU43.PMI:Performance_monitoring_interrupts
21954 ± 4% +135.3% 51667 ± 10% interrupts.CPU44.CAL:Function_call_interrupts
6355 -37.7% 3962 ± 27% interrupts.CPU44.NMI:Non-maskable_interrupts
6355 -37.7% 3962 ± 27% interrupts.CPU44.PMI:Performance_monitoring_interrupts
20036 ± 3% +148.8% 49854 ± 10% interrupts.CPU44.TLB:TLB_shootdowns
4926 ± 30% -48.5% 2535 ± 21% interrupts.CPU45.NMI:Non-maskable_interrupts
4926 ± 30% -48.5% 2535 ± 21% interrupts.CPU45.PMI:Performance_monitoring_interrupts
529.00 ± 34% +144.6% 1293 ± 36% interrupts.CPU45.RES:Rescheduling_interrupts
8879 ± 47% +188.3% 25599 ± 28% interrupts.CPU47.CAL:Function_call_interrupts
279.75 ± 59% +356.0% 1275 ± 83% interrupts.CPU47.RES:Rescheduling_interrupts
6031 ± 71% +283.4% 23124 ± 32% interrupts.CPU47.TLB:TLB_shootdowns
8046 ± 52% +192.6% 23545 ± 38% interrupts.CPU48.CAL:Function_call_interrupts
4357 ± 13% -28.8% 3103 ± 35% interrupts.CPU48.NMI:Non-maskable_interrupts
4357 ± 13% -28.8% 3103 ± 35% interrupts.CPU48.PMI:Performance_monitoring_interrupts
239.75 ± 71% +159.5% 622.25 ± 32% interrupts.CPU48.RES:Rescheduling_interrupts
5094 ± 89% +312.0% 20990 ± 44% interrupts.CPU48.TLB:TLB_shootdowns
8857 ± 44% +296.0% 35077 ± 31% interrupts.CPU49.CAL:Function_call_interrupts
283.00 ± 51% +252.7% 998.25 ± 25% interrupts.CPU49.RES:Rescheduling_interrupts
5940 ± 69% +452.4% 32813 ± 34% interrupts.CPU49.TLB:TLB_shootdowns
4257 ± 29% -48.9% 2175 ± 35% interrupts.CPU5.NMI:Non-maskable_interrupts
4257 ± 29% -48.9% 2175 ± 35% interrupts.CPU5.PMI:Performance_monitoring_interrupts
7494 ± 30% +345.9% 33416 ± 41% interrupts.CPU50.CAL:Function_call_interrupts
3917 ± 15% -35.6% 2522 ± 19% interrupts.CPU50.NMI:Non-maskable_interrupts
3917 ± 15% -35.6% 2522 ± 19% interrupts.CPU50.PMI:Performance_monitoring_interrupts
4744 ± 56% +556.7% 31155 ± 45% interrupts.CPU50.TLB:TLB_shootdowns
8245 ± 88% +267.2% 30274 ± 53% interrupts.CPU51.CAL:Function_call_interrupts
309.50 ±115% +231.7% 1026 ± 43% interrupts.CPU51.RES:Rescheduling_interrupts
5362 ±148% +420.5% 27911 ± 59% interrupts.CPU51.TLB:TLB_shootdowns
12505 ± 37% +145.1% 30652 ± 25% interrupts.CPU53.CAL:Function_call_interrupts
5377 ± 19% -55.1% 2413 ± 58% interrupts.CPU53.NMI:Non-maskable_interrupts
5377 ± 19% -55.1% 2413 ± 58% interrupts.CPU53.PMI:Performance_monitoring_interrupts
695.75 ± 24% +54.0% 1071 ± 21% interrupts.CPU53.RES:Rescheduling_interrupts
9933 ± 52% +185.5% 28361 ± 28% interrupts.CPU53.TLB:TLB_shootdowns
13095 ± 22% +184.9% 37305 ± 30% interrupts.CPU54.CAL:Function_call_interrupts
10596 ± 28% +231.8% 35153 ± 33% interrupts.CPU54.TLB:TLB_shootdowns
10486 ± 42% +134.1% 24551 ± 48% interrupts.CPU56.CAL:Function_call_interrupts
394.25 ± 50% +104.7% 807.00 ± 47% interrupts.CPU56.RES:Rescheduling_interrupts
8302 ± 53% +164.8% 21988 ± 55% interrupts.CPU56.TLB:TLB_shootdowns
5367 ± 31% -66.8% 1782 ± 25% interrupts.CPU57.NMI:Non-maskable_interrupts
5367 ± 31% -66.8% 1782 ± 25% interrupts.CPU57.PMI:Performance_monitoring_interrupts
8980 ± 46% +209.4% 27786 ± 25% interrupts.CPU59.CAL:Function_call_interrupts
1973 ± 52% -63.4% 723.00 ± 17% interrupts.CPU59.RES:Rescheduling_interrupts
6036 ± 71% +320.3% 25370 ± 29% interrupts.CPU59.TLB:TLB_shootdowns
8012 ± 55% +349.6% 36023 ± 33% interrupts.CPU60.CAL:Function_call_interrupts
272.75 ± 62% +285.1% 1050 ± 32% interrupts.CPU60.RES:Rescheduling_interrupts
5226 ± 90% +553.3% 34144 ± 35% interrupts.CPU60.TLB:TLB_shootdowns
5570 ± 24% -50.2% 2772 ± 34% interrupts.CPU64.NMI:Non-maskable_interrupts
5570 ± 24% -50.2% 2772 ± 34% interrupts.CPU64.PMI:Performance_monitoring_interrupts
4892 ± 29% -51.6% 2367 ± 42% interrupts.CPU65.NMI:Non-maskable_interrupts
4892 ± 29% -51.6% 2367 ± 42% interrupts.CPU65.PMI:Performance_monitoring_interrupts
10367 ± 42% +200.4% 31144 ± 44% interrupts.CPU66.CAL:Function_call_interrupts
4480 ± 32% -39.6% 2705 ± 32% interrupts.CPU66.NMI:Non-maskable_interrupts
4480 ± 32% -39.6% 2705 ± 32% interrupts.CPU66.PMI:Performance_monitoring_interrupts
7605 ± 61% +278.7% 28798 ± 49% interrupts.CPU66.TLB:TLB_shootdowns
5495 ± 18% -49.5% 2774 ± 33% interrupts.CPU67.NMI:Non-maskable_interrupts
5495 ± 18% -49.5% 2774 ± 33% interrupts.CPU67.PMI:Performance_monitoring_interrupts
4581 ± 26% -51.6% 2217 ± 35% interrupts.CPU69.NMI:Non-maskable_interrupts
4581 ± 26% -51.6% 2217 ± 35% interrupts.CPU69.PMI:Performance_monitoring_interrupts
4668 ± 25% -43.8% 2624 ± 38% interrupts.CPU70.NMI:Non-maskable_interrupts
4668 ± 25% -43.8% 2624 ± 38% interrupts.CPU70.PMI:Performance_monitoring_interrupts
14254 ± 24% +195.9% 42175 ± 23% interrupts.CPU72.CAL:Function_call_interrupts
11753 ± 34% +241.8% 40168 ± 25% interrupts.CPU72.TLB:TLB_shootdowns
9449 ± 27% +329.0% 40541 ± 25% interrupts.CPU75.CAL:Function_call_interrupts
376.00 ± 48% +206.1% 1150 ± 18% interrupts.CPU75.RES:Rescheduling_interrupts
6643 ± 44% +477.9% 38390 ± 27% interrupts.CPU75.TLB:TLB_shootdowns
13559 ± 19% +158.9% 35109 ± 34% interrupts.CPU76.CAL:Function_call_interrupts
4630 ± 25% -40.9% 2737 ± 32% interrupts.CPU76.NMI:Non-maskable_interrupts
4630 ± 25% -40.9% 2737 ± 32% interrupts.CPU76.PMI:Performance_monitoring_interrupts
432.75 ± 5% +176.4% 1196 ± 51% interrupts.CPU76.RES:Rescheduling_interrupts
11045 ± 24% +196.6% 32758 ± 37% interrupts.CPU76.TLB:TLB_shootdowns
6487 ± 50% +309.7% 26577 ± 37% interrupts.CPU77.CAL:Function_call_interrupts
215.75 ± 89% +279.6% 819.00 ± 62% interrupts.CPU77.RES:Rescheduling_interrupts
3747 ± 89% +541.1% 24023 ± 42% interrupts.CPU77.TLB:TLB_shootdowns
5619 ± 24% -56.0% 2474 ± 22% interrupts.CPU79.NMI:Non-maskable_interrupts
5619 ± 24% -56.0% 2474 ± 22% interrupts.CPU79.PMI:Performance_monitoring_interrupts
339.25 ± 90% +214.3% 1066 ± 42% interrupts.CPU80.RES:Rescheduling_interrupts
10050 ± 81% +196.0% 29747 ± 22% interrupts.CPU83.CAL:Function_call_interrupts
7212 ±123% +278.5% 27296 ± 24% interrupts.CPU83.TLB:TLB_shootdowns
5387 ± 18% -53.3% 2513 ± 19% interrupts.CPU84.NMI:Non-maskable_interrupts
5387 ± 18% -53.3% 2513 ± 19% interrupts.CPU84.PMI:Performance_monitoring_interrupts
451.75 ± 67% +93.4% 873.75 ± 17% interrupts.CPU84.RES:Rescheduling_interrupts
5362 ± 18% -50.0% 2681 ± 33% interrupts.CPU85.NMI:Non-maskable_interrupts
5362 ± 18% -50.0% 2681 ± 33% interrupts.CPU85.PMI:Performance_monitoring_interrupts
383.00 ± 42% +414.6% 1970 ± 47% interrupts.CPU85.RES:Rescheduling_interrupts
5037 ± 26% -45.6% 2738 ± 20% interrupts.CPU86.NMI:Non-maskable_interrupts
5037 ± 26% -45.6% 2738 ± 20% interrupts.CPU86.PMI:Performance_monitoring_interrupts
494.50 ± 69% +749.5% 4201 ±124% interrupts.CPU86.RES:Rescheduling_interrupts
16215 ± 21% +156.3% 41560 ± 20% interrupts.CPU87.CAL:Function_call_interrupts
646.50 ± 44% +78.8% 1156 ± 21% interrupts.CPU87.RES:Rescheduling_interrupts
13994 ± 28% +181.9% 39454 ± 22% interrupts.CPU87.TLB:TLB_shootdowns
15408 ± 29% +96.9% 30334 ± 22% interrupts.CPU9.CAL:Function_call_interrupts
13083 ± 36% +114.1% 28011 ± 24% interrupts.CPU9.TLB:TLB_shootdowns
419619 -33.5% 278854 ± 12% interrupts.NMI:Non-maskable_interrupts
419619 -33.5% 278854 ± 12% interrupts.PMI:Performance_monitoring_interrupts
53973 ± 17% +65.5% 89333 ± 7% interrupts.RES:Rescheduling_interrupts
985222 ± 2% +152.1% 2483749 ± 3% interrupts.TLB:TLB_shootdowns
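The %change columns in the tables above can be recomputed directly from the two run means. A minimal sketch, using the interrupts.TLB row as input (the helper name is illustrative, not part of the lkp tooling):

```python
def pct_change(base_mean: float, new_mean: float) -> float:
    """Percent change as reported in the lkp comparison tables."""
    return (new_mean - base_mean) / base_mean * 100.0

# interrupts.TLB:TLB_shootdowns: 985222 -> 2483749, reported as +152.1%
print(f"{pct_change(985222, 2483749):+.1f}%")  # +152.1%

# interrupts.NMI:Non-maskable_interrupts: 419619 -> 278854, reported as -33.5%
print(f"{pct_change(419619, 278854):+.1f}%")  # -33.5%
```

The ± columns alongside each mean are relative standard deviations across the repeated runs, not part of this calculation.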
[will-it-scale trend plots condensed from the original ASCII gnuplot output. Seven panels, each plotting samples over time with the bisect-bad [O] samples sitting clearly below the bisect-good [*] baseline, and a few [O] samples at zero (failed runs):
will-it-scale.per_thread_ops
will-it-scale.workload
will-it-scale.time.user_time
will-it-scale.time.system_time
will-it-scale.time.percent_of_cpu_this_job_got
will-it-scale.time.minor_page_faults
will-it-scale.time.involuntary_context_switches]
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[tcp] 01b4c2aab8: lmbench3.TCP.socket.bandwidth.10MB.MB/sec -20.2% regression
by kernel test robot
Greetings,
FYI, we noticed a -20.2% regression of lmbench3.TCP.socket.bandwidth.10MB.MB/sec due to commit:
commit: 01b4c2aab841d7ed9c5457371785070b2e0b53b1 ("[PATCH v3 net-next 3/3] tcp: add one skb cache for rx")
url: https://github.com/0day-ci/linux/commits/Eric-Dumazet/tcp-add-rx-tx-cache...
in testcase: lmbench3
on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
with following parameters:
test_memory_size: 50%
nr_threads: 100%
mode: development
test: TCP
cpufreq_governor: performance
ucode: 0xb00002e
test-url: http://www.bitmover.com/lmbench/
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_threads/rootfs/tbox_group/test/test_memory_size/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/development/100%/debian-x86_64-2018-04-03.cgz/lkp-bdw-ep4/TCP/50%/lmbench3/0xb00002e
commit:
af0b648e98 ("tcp: add one skb cache for tx")
01b4c2aab8 ("tcp: add one skb cache for rx")
af0b648e98a72a54 01b4c2aab841d7ed9c545737178
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 dmesg.WARNING:at_ip__netif_receive_skb_core/0x
:4 25% 1:4 dmesg.WARNING:at_ip_do_select/0x
1:4 -25% :4 dmesg.WARNING:at_ip_ip_finish_output2/0x
%stddev %change %stddev
\ | \
30.40 ± 5% -14.1% 26.11 ± 5% lmbench3.TCP.localhost.latency
99117 -20.2% 79133 lmbench3.TCP.socket.bandwidth.10MB.MB/sec
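The ± columns give the relative standard deviation across runs, so a reported change is only meaningful when it clearly exceeds run-to-run noise. A rough sketch of that sanity check, using the latency row above (the combined-noise threshold is my simplification, not the robot's actual significance test):

```python
def exceeds_noise(base_mean: float, base_rsd_pct: float,
                  new_mean: float, new_rsd_pct: float) -> bool:
    """Crude check: |%change| must exceed the summed relative stddevs."""
    change_pct = abs(new_mean - base_mean) / base_mean * 100.0
    noise_pct = base_rsd_pct + new_rsd_pct
    return change_pct > noise_pct

# lmbench3.TCP.localhost.latency: 30.40 +-5% -> 26.11 +-5%, reported -14.1%
print(exceeds_noise(30.40, 5, 26.11, 5))  # True: 14.1% change vs 10% noise
```

By this measure both the latency improvement and the -20.2% 10MB bandwidth drop (no stddev reported, i.e. very stable runs) stand well clear of the noise floor.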
2537 -2.2% 2481 lmbench3.TCP.socket.bandwidth.64B.MB/sec
157430 -1.7% 154819 lmbench3.time.minor_page_faults
3593 +3.1% 3705 lmbench3.time.percent_of_cpu_this_job_got
22.28 ± 5% -3.0 19.29 ± 2% mpstat.cpu.all.idle%
6.19 ± 2% +1.2 7.40 ± 2% mpstat.cpu.all.soft%
508795 ± 2% +22.0% 620794 ± 2% numa-meminfo.node0.Unevictable
516356 ± 2% +18.1% 609977 ± 2% numa-meminfo.node1.Unevictable
1137258 +14.6% 1303104 ± 2% meminfo.Cached
6680579 ± 6% -12.9% 5821954 meminfo.DirectMap2M
298947 ± 3% +24.1% 371141 ± 5% meminfo.DirectMap4k
1025152 +20.1% 1230771 meminfo.Unevictable
379964 ± 75% -141.4% -157480 sched_debug.cfs_rq:/.spread0.avg
-392190 +179.0% -1094014 sched_debug.cfs_rq:/.spread0.min
4051 ± 35% +45.1% 5879 ± 14% sched_debug.cpu.load.min
9511408 ± 8% -18.7% 7730274 ± 3% sched_debug.cpu.nr_switches.max
541.00 ± 5% +17.7% 637.00 ± 8% slabinfo.kmem_cache_node.active_objs
592.00 ± 4% +16.2% 688.00 ± 7% slabinfo.kmem_cache_node.num_objs
5185 ± 3% +38.9% 7201 ± 6% slabinfo.skbuff_fclone_cache.active_objs
5187 ± 3% +38.9% 7206 ± 6% slabinfo.skbuff_fclone_cache.num_objs
21.75 ± 5% -12.6% 19.00 ± 3% vmstat.cpu.id
68.75 +4.0% 71.50 vmstat.cpu.sy
1206841 +13.7% 1372779 ± 2% vmstat.memory.cache
1846679 ± 2% -6.3% 1731096 ± 5% vmstat.system.cs
1.512e+08 ± 2% +13.2% 1.711e+08 ± 3% numa-numastat.node0.local_node
1.512e+08 ± 2% +13.2% 1.711e+08 ± 3% numa-numastat.node0.numa_hit
7091 ±173% +201.1% 21353 ± 57% numa-numastat.node0.other_node
1.48e+08 ± 2% +13.2% 1.674e+08 ± 3% numa-numastat.node1.local_node
1.48e+08 ± 2% +13.1% 1.674e+08 ± 3% numa-numastat.node1.numa_hit
2087 +3.6% 2163 turbostat.Avg_MHz
89315160 ± 13% -50.0% 44625119 ± 19% turbostat.C1
0.94 ± 13% -0.5 0.46 ± 18% turbostat.C1%
7386178 ± 49% -44.9% 4072935 ± 5% turbostat.C1E
0.39 ±146% -0.3 0.04 ± 10% turbostat.C1E%
5.109e+08 ± 15% -50.4% 2.534e+08 ± 22% cpuidle.C1.time
89317656 ± 13% -50.0% 44626550 ± 19% cpuidle.C1.usage
2.092e+08 ±145% -88.4% 24217866 ± 3% cpuidle.C1E.time
7389747 ± 49% -44.9% 4075034 ± 5% cpuidle.C1E.usage
75351513 ± 5% -30.3% 52528863 ± 16% cpuidle.POLL.time
29827485 ± 4% -27.6% 21596769 ± 16% cpuidle.POLL.usage
127198 ± 2% +22.0% 155198 ± 2% numa-vmstat.node0.nr_unevictable
127198 ± 2% +22.0% 155198 ± 2% numa-vmstat.node0.nr_zone_unevictable
25297173 ± 3% +15.2% 29129943 ± 3% numa-vmstat.node0.numa_hit
147405 +17.2% 172790 numa-vmstat.node0.numa_interleave
25289758 ± 2% +15.1% 29108245 ± 3% numa-vmstat.node0.numa_local
129088 ± 2% +18.1% 152494 ± 2% numa-vmstat.node1.nr_unevictable
129088 ± 2% +18.1% 152494 ± 2% numa-vmstat.node1.nr_zone_unevictable
24817577 ± 2% +14.5% 28404348 ± 3% numa-vmstat.node1.numa_hit
147161 +17.7% 173180 numa-vmstat.node1.numa_interleave
24646043 ± 2% +14.5% 28221056 ± 3% numa-vmstat.node1.numa_local
152712 +6.6% 162773 proc-vmstat.nr_anon_pages
241.25 +5.9% 255.50 proc-vmstat.nr_anon_transparent_hugepages
284312 +14.6% 325770 ± 2% proc-vmstat.nr_file_pages
6218 -2.9% 6039 proc-vmstat.nr_mapped
2354 +3.9% 2447 proc-vmstat.nr_page_table_pages
256287 +20.1% 307692 proc-vmstat.nr_unevictable
256287 +20.1% 307692 proc-vmstat.nr_zone_unevictable
2.988e+08 ± 2% +12.8% 3.369e+08 ± 3% proc-vmstat.numa_hit
2.988e+08 ± 2% +12.8% 3.369e+08 ± 3% proc-vmstat.numa_local
2.393e+09 ± 2% +13.0% 2.704e+09 ± 3% proc-vmstat.pgalloc_normal
2.393e+09 ± 2% +13.0% 2.703e+09 ± 3% proc-vmstat.pgfree
36.21 ± 2% -10.1% 32.54 ± 3% perf-stat.i.MPKI
2.203e+10 +3.2% 2.274e+10 perf-stat.i.branch-instructions
1.234e+08 ± 2% -6.8% 1.149e+08 perf-stat.i.cache-misses
1853039 ± 2% -6.3% 1736720 ± 5% perf-stat.i.context-switches
1.831e+11 +3.6% 1.896e+11 perf-stat.i.cpu-cycles
177350 ± 8% -39.9% 106565 ± 12% perf-stat.i.cpu-migrations
52299 ± 9% -15.0% 44447 ± 5% perf-stat.i.cycles-between-cache-misses
0.13 ± 10% -0.0 0.10 ± 6% perf-stat.i.dTLB-load-miss-rate%
24601513 ± 2% -28.8% 17506777 ± 7% perf-stat.i.dTLB-load-misses
3.49e+10 +2.5% 3.578e+10 perf-stat.i.dTLB-loads
0.05 ± 8% +0.0 0.06 ± 6% perf-stat.i.dTLB-store-miss-rate%
54414059 +11.6% 60703047 ± 5% perf-stat.i.iTLB-load-misses
13295971 ± 3% -11.2% 11805882 ± 4% perf-stat.i.iTLB-loads
1.125e+11 +3.0% 1.159e+11 perf-stat.i.instructions
3843 ± 4% +58.0% 6073 perf-stat.i.instructions-per-iTLB-miss
82.77 -2.9 79.87 perf-stat.i.node-load-miss-rate%
61535201 ± 2% -16.8% 51169722 perf-stat.i.node-loads
52.20 ± 3% -4.9 47.30 ± 4% perf-stat.i.node-store-miss-rate%
1348075 ± 6% -24.9% 1012041 ± 3% perf-stat.i.node-store-misses
1002686 ± 6% +26.7% 1269969 perf-stat.i.node-stores
14.36 -3.1% 13.93 perf-stat.overall.MPKI
2.15 -0.0 2.10 perf-stat.overall.branch-miss-rate%
7.65 ± 2% -0.5 7.14 perf-stat.overall.cache-miss-rate%
1481 ± 2% +11.2% 1646 ± 2% perf-stat.overall.cycles-between-cache-misses
0.07 ± 4% -0.0 0.05 ± 7% perf-stat.overall.dTLB-load-miss-rate%
80.36 +3.3 83.69 perf-stat.overall.iTLB-load-miss-rate%
2067 -7.5% 1913 ± 4% perf-stat.overall.instructions-per-iTLB-miss
57.31 ± 4% -13.0 44.31 perf-stat.overall.node-store-miss-rate%
2.198e+10 +3.2% 2.269e+10 perf-stat.ps.branch-instructions
1.234e+08 ± 2% -6.8% 1.15e+08 perf-stat.ps.cache-misses
1849729 ± 2% -6.3% 1733566 ± 5% perf-stat.ps.context-switches
1.827e+11 +3.6% 1.893e+11 perf-stat.ps.cpu-cycles
176984 ± 8% -39.9% 106335 ± 12% perf-stat.ps.cpu-migrations
24552688 ± 2% -28.8% 17472051 ± 7% perf-stat.ps.dTLB-load-misses
3.482e+10 +2.5% 3.57e+10 perf-stat.ps.dTLB-loads
54282300 +11.6% 60558515 ± 5% perf-stat.ps.iTLB-load-misses
13269959 ± 3% -11.2% 11782445 ± 4% perf-stat.ps.iTLB-loads
1.123e+11 +3.0% 1.156e+11 perf-stat.ps.instructions
61567371 ± 2% -16.8% 51205094 perf-stat.ps.node-loads
1345363 ± 6% -24.9% 1010000 ± 3% perf-stat.ps.node-store-misses
1001013 ± 6% +26.8% 1268788 perf-stat.ps.node-stores
31.55 ± 4% -15.5 16.09 ± 76% perf-profile.calltrace.cycles-pp.ip_finish_output2.ip_output.__ip_queue_xmit.__tcp_transmit_skb.tcp_write_xmit
28.16 ± 6% -14.1 14.10 ± 76% perf-profile.calltrace.cycles-pp.__local_bh_enable_ip.ip_finish_output2.ip_output.__ip_queue_xmit.__tcp_transmit_skb
28.00 ± 6% -14.0 14.02 ± 76% perf-profile.calltrace.cycles-pp.do_softirq.__local_bh_enable_ip.ip_finish_output2.ip_output.__ip_queue_xmit
27.74 ± 6% -13.9 13.86 ± 76% perf-profile.calltrace.cycles-pp.do_softirq_own_stack.do_softirq.__local_bh_enable_ip.ip_finish_output2.ip_output
27.62 ± 6% -13.8 13.79 ± 76% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.do_softirq_own_stack.do_softirq.__local_bh_enable_ip.ip_finish_output2
27.24 ± 6% -13.7 13.53 ± 76% perf-profile.calltrace.cycles-pp.net_rx_action.__softirqentry_text_start.do_softirq_own_stack.do_softirq.__local_bh_enable_ip
26.70 ± 7% -13.5 13.25 ± 76% perf-profile.calltrace.cycles-pp.process_backlog.net_rx_action.__softirqentry_text_start.do_softirq_own_stack.do_softirq
26.05 ± 7% -13.1 12.93 ± 76% perf-profile.calltrace.cycles-pp.__netif_receive_skb_one_core.process_backlog.net_rx_action.__softirqentry_text_start.do_softirq_own_stack
25.54 ± 8% -12.9 12.61 ± 76% perf-profile.calltrace.cycles-pp.ip_rcv.__netif_receive_skb_one_core.process_backlog.net_rx_action.__softirqentry_text_start
24.81 ± 8% -12.7 12.14 ± 76% perf-profile.calltrace.cycles-pp.ip_local_deliver.ip_rcv.__netif_receive_skb_one_core.process_backlog.net_rx_action
24.69 ± 9% -12.6 12.07 ± 76% perf-profile.calltrace.cycles-pp.ip_local_deliver_finish.ip_local_deliver.ip_rcv.__netif_receive_skb_one_core.process_backlog
24.63 ± 9% -12.6 12.04 ± 76% perf-profile.calltrace.cycles-pp.ip_protocol_deliver_rcu.ip_local_deliver_finish.ip_local_deliver.ip_rcv.__netif_receive_skb_one_core
24.34 ± 9% -12.4 11.93 ± 76% perf-profile.calltrace.cycles-pp.tcp_v4_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish.ip_local_deliver.ip_rcv
21.18 ± 12% -11.8 9.35 ± 76% perf-profile.calltrace.cycles-pp.tcp_v4_do_rcv.tcp_v4_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish.ip_local_deliver
20.79 ± 12% -11.6 9.19 ± 76% perf-profile.calltrace.cycles-pp.tcp_rcv_established.tcp_v4_do_rcv.tcp_v4_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish
15.49 ± 20% -9.4 6.12 ± 77% perf-profile.calltrace.cycles-pp.sock_def_readable.tcp_rcv_established.tcp_v4_do_rcv.tcp_v4_rcv.ip_protocol_deliver_rcu
15.14 ± 20% -9.2 5.94 ± 77% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.sock_def_readable.tcp_rcv_established.tcp_v4_do_rcv.tcp_v4_rcv
14.75 ± 20% -9.0 5.77 ± 77% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.sock_def_readable.tcp_rcv_established.tcp_v4_do_rcv
14.40 ± 21% -8.8 5.61 ± 77% perf-profile.calltrace.cycles-pp.try_to_wake_up.__wake_up_common.__wake_up_common_lock.sock_def_readable.tcp_rcv_established
18.83 ± 6% -7.6 11.23 ± 31% perf-profile.calltrace.cycles-pp.tcp_recvmsg.inet_recvmsg.sock_read_iter.new_sync_read.vfs_read
19.13 ± 6% -7.5 11.60 ± 30% perf-profile.calltrace.cycles-pp.inet_recvmsg.sock_read_iter.new_sync_read.vfs_read.ksys_read
22.17 ± 4% -7.5 14.67 ± 38% perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
22.53 ± 4% -7.4 15.13 ± 38% perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
20.30 ± 5% -7.3 13.03 ± 28% perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
19.89 ± 5% -7.3 12.63 ± 28% perf-profile.calltrace.cycles-pp.sock_read_iter.new_sync_read.vfs_read.ksys_read.do_syscall_64
12.67 ± 11% -7.1 5.60 ± 78% perf-profile.calltrace.cycles-pp.sk_wait_data.tcp_recvmsg.inet_recvmsg.sock_read_iter.new_sync_read
11.16 ± 12% -6.3 4.90 ± 78% perf-profile.calltrace.cycles-pp.wait_woken.sk_wait_data.tcp_recvmsg.inet_recvmsg.sock_read_iter
10.69 ± 12% -6.0 4.70 ± 78% perf-profile.calltrace.cycles-pp.schedule_timeout.wait_woken.sk_wait_data.tcp_recvmsg.inet_recvmsg
10.55 ± 12% -5.9 4.64 ± 78% perf-profile.calltrace.cycles-pp.schedule.schedule_timeout.wait_woken.sk_wait_data.tcp_recvmsg
10.35 ± 12% -5.8 4.53 ± 78% perf-profile.calltrace.cycles-pp.__sched_text_start.schedule.schedule_timeout.wait_woken.sk_wait_data
5.66 ± 21% -3.4 2.22 ± 79% perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.__wake_up_common.__wake_up_common_lock.sock_def_readable
5.51 ± 22% -3.4 2.15 ± 79% perf-profile.calltrace.cycles-pp.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.__wake_up_common.__wake_up_common_lock
5.05 ± 21% -3.0 2.03 ± 78% perf-profile.calltrace.cycles-pp.dequeue_task_fair.__sched_text_start.schedule.schedule_timeout.wait_woken
3.04 ± 34% -2.3 0.74 ± 75% perf-profile.calltrace.cycles-pp.schedule_idle.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
2.99 ± 34% -2.3 0.72 ± 75% perf-profile.calltrace.cycles-pp.__sched_text_start.schedule_idle.do_idle.cpu_startup_entry.start_secondary
4.14 ± 13% -2.1 2.00 ± 75% perf-profile.calltrace.cycles-pp.select_task_rq_fair.try_to_wake_up.__wake_up_common.__wake_up_common_lock.sock_def_readable
3.13 ± 32% -2.1 1.01 ± 76% perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.__wake_up_common
2.93 ± 32% -2.0 0.94 ± 77% perf-profile.calltrace.cycles-pp.dequeue_entity.dequeue_task_fair.__sched_text_start.schedule.schedule_timeout
3.48 ± 13% -1.9 1.59 ± 75% perf-profile.calltrace.cycles-pp.select_idle_sibling.select_task_rq_fair.try_to_wake_up.__wake_up_common.__wake_up_common_lock
1.58 ± 16% -1.1 0.52 ±105% perf-profile.calltrace.cycles-pp.available_idle_cpu.select_idle_sibling.select_task_rq_fair.try_to_wake_up.__wake_up_common
1.14 ± 7% -0.7 0.47 ±104% perf-profile.calltrace.cycles-pp.switch_mm_irqs_off.__sched_text_start.schedule.schedule_timeout.wait_woken
0.16 ±173% +0.8 0.95 ± 61% perf-profile.calltrace.cycles-pp.do_select.core_sys_select.kern_select.__x64_sys_select.do_syscall_64
0.14 ±173% +1.0 1.19 ± 58% perf-profile.calltrace.cycles-pp.__x64_sys_rt_sigaction.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.40 ±104% +1.3 1.72 ± 79% perf-profile.calltrace.cycles-pp.task_sched_runtime.thread_group_cputime.thread_group_cputime_adjusted.getrusage.__do_sys_getrusage
0.52 ±103% +2.0 2.48 ± 64% perf-profile.calltrace.cycles-pp.core_sys_select.kern_select.__x64_sys_select.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.77 ±104% +2.2 2.99 ± 68% perf-profile.calltrace.cycles-pp.thread_group_cputime_adjusted.getrusage.__do_sys_getrusage.do_syscall_64.entry_SYSCALL_64_after_hwframe
399.25 ± 22% +48.8% 594.00 ± 27% interrupts.47:PCI-MSI.1572878-edge.eth0-TxRx-14
399.25 ± 22% +48.8% 594.00 ± 27% interrupts.CPU14.47:PCI-MSI.1572878-edge.eth0-TxRx-14
28894 ± 27% -36.6% 18322 ± 11% interrupts.CPU15.RES:Rescheduling_interrupts
6661 ± 24% -41.3% 3911 ± 53% interrupts.CPU21.NMI:Non-maskable_interrupts
6661 ± 24% -41.3% 3911 ± 53% interrupts.CPU21.PMI:Performance_monitoring_interrupts
6395 ± 24% -40.9% 3776 ± 54% interrupts.CPU22.NMI:Non-maskable_interrupts
6395 ± 24% -40.9% 3776 ± 54% interrupts.CPU22.PMI:Performance_monitoring_interrupts
25874 ± 18% -29.3% 18288 ± 20% interrupts.CPU29.RES:Rescheduling_interrupts
26229 ± 11% -28.0% 18896 ± 13% interrupts.CPU32.RES:Rescheduling_interrupts
28466 ± 13% -35.1% 18482 ± 28% interrupts.CPU34.RES:Rescheduling_interrupts
28021 ± 26% -35.4% 18102 ± 25% interrupts.CPU36.RES:Rescheduling_interrupts
7321 -52.3% 3491 ± 67% interrupts.CPU37.NMI:Non-maskable_interrupts
7321 -52.3% 3491 ± 67% interrupts.CPU37.PMI:Performance_monitoring_interrupts
7351 -48.8% 3766 ± 55% interrupts.CPU38.NMI:Non-maskable_interrupts
7351 -48.8% 3766 ± 55% interrupts.CPU38.PMI:Performance_monitoring_interrupts
7333 -47.6% 3841 ± 52% interrupts.CPU39.NMI:Non-maskable_interrupts
7333 -47.6% 3841 ± 52% interrupts.CPU39.PMI:Performance_monitoring_interrupts
6408 ± 23% -41.3% 3759 ± 55% interrupts.CPU40.NMI:Non-maskable_interrupts
6408 ± 23% -41.3% 3759 ± 55% interrupts.CPU40.PMI:Performance_monitoring_interrupts
6416 ± 23% -35.2% 4160 ± 43% interrupts.CPU41.NMI:Non-maskable_interrupts
6416 ± 23% -35.2% 4160 ± 43% interrupts.CPU41.PMI:Performance_monitoring_interrupts
6371 ± 24% -39.8% 3838 ± 52% interrupts.CPU42.NMI:Non-maskable_interrupts
6371 ± 24% -39.8% 3838 ± 52% interrupts.CPU42.PMI:Performance_monitoring_interrupts
6383 ± 24% -37.4% 3993 ± 48% interrupts.CPU43.NMI:Non-maskable_interrupts
6383 ± 24% -37.4% 3993 ± 48% interrupts.CPU43.PMI:Performance_monitoring_interrupts
6636 ± 24% -41.7% 3868 ± 55% interrupts.CPU45.NMI:Non-maskable_interrupts
6636 ± 24% -41.7% 3868 ± 55% interrupts.CPU45.PMI:Performance_monitoring_interrupts
5684 ± 32% -31.4% 3900 ± 55% interrupts.CPU46.NMI:Non-maskable_interrupts
5684 ± 32% -31.4% 3900 ± 55% interrupts.CPU46.PMI:Performance_monitoring_interrupts
22917 ± 10% -19.3% 18485 ± 12% interrupts.CPU51.RES:Rescheduling_interrupts
24437 ± 16% -18.6% 19881 ± 21% interrupts.CPU53.RES:Rescheduling_interrupts
26653 ± 16% -30.7% 18474 ± 22% interrupts.CPU61.RES:Rescheduling_interrupts
26867 ± 9% -39.1% 16369 ± 12% interrupts.CPU62.RES:Rescheduling_interrupts
28101 ± 24% -29.3% 19868 ± 22% interrupts.CPU68.RES:Rescheduling_interrupts
30098 ± 34% -39.4% 18243 ± 18% interrupts.CPU73.RES:Rescheduling_interrupts
6384 ± 24% -59.0% 2616 ± 41% interrupts.CPU74.NMI:Non-maskable_interrupts
6384 ± 24% -59.0% 2616 ± 41% interrupts.CPU74.PMI:Performance_monitoring_interrupts
6371 ± 24% -58.7% 2633 ± 40% interrupts.CPU75.NMI:Non-maskable_interrupts
6371 ± 24% -58.7% 2633 ± 40% interrupts.CPU75.PMI:Performance_monitoring_interrupts
6410 ± 23% -59.5% 2595 ± 42% interrupts.CPU76.NMI:Non-maskable_interrupts
6410 ± 23% -59.5% 2595 ± 42% interrupts.CPU76.PMI:Performance_monitoring_interrupts
27943 ± 16% -39.8% 16824 ± 18% interrupts.CPU76.RES:Rescheduling_interrupts
5478 ± 32% -52.7% 2593 ± 41% interrupts.CPU77.NMI:Non-maskable_interrupts
5478 ± 32% -52.7% 2593 ± 41% interrupts.CPU77.PMI:Performance_monitoring_interrupts
6393 ± 24% -58.8% 2631 ± 41% interrupts.CPU78.NMI:Non-maskable_interrupts
6393 ± 24% -58.8% 2631 ± 41% interrupts.CPU78.PMI:Performance_monitoring_interrupts
6401 ± 24% -59.3% 2603 ± 41% interrupts.CPU79.NMI:Non-maskable_interrupts
6401 ± 24% -59.3% 2603 ± 41% interrupts.CPU79.PMI:Performance_monitoring_interrupts
6370 ± 24% -59.1% 2608 ± 41% interrupts.CPU80.NMI:Non-maskable_interrupts
6370 ± 24% -59.1% 2608 ± 41% interrupts.CPU80.PMI:Performance_monitoring_interrupts
7316 -64.1% 2623 ± 40% interrupts.CPU81.NMI:Non-maskable_interrupts
7316 -64.1% 2623 ± 40% interrupts.CPU81.PMI:Performance_monitoring_interrupts
7292 -64.1% 2616 ± 40% interrupts.CPU82.NMI:Non-maskable_interrupts
7292 -64.1% 2616 ± 40% interrupts.CPU82.PMI:Performance_monitoring_interrupts
7307 -63.5% 2668 ± 37% interrupts.CPU83.NMI:Non-maskable_interrupts
7307 -63.5% 2668 ± 37% interrupts.CPU83.PMI:Performance_monitoring_interrupts
7334 -64.4% 2610 ± 40% interrupts.CPU84.NMI:Non-maskable_interrupts
7334 -64.4% 2610 ± 40% interrupts.CPU84.PMI:Performance_monitoring_interrupts
7350 -54.1% 3374 ± 15% interrupts.CPU85.NMI:Non-maskable_interrupts
7350 -54.1% 3374 ± 15% interrupts.CPU85.PMI:Performance_monitoring_interrupts
7312 -59.9% 2933 ± 22% interrupts.CPU86.NMI:Non-maskable_interrupts
7312 -59.9% 2933 ± 22% interrupts.CPU86.PMI:Performance_monitoring_interrupts
7338 -43.4% 4154 ± 45% interrupts.CPU87.NMI:Non-maskable_interrupts
7338 -43.4% 4154 ± 45% interrupts.CPU87.PMI:Performance_monitoring_interrupts
42075 ± 5% -16.4% 35154 ± 6% softirqs.CPU0.SCHED
39762 ± 7% -16.8% 33092 ± 8% softirqs.CPU1.SCHED
39607 ± 7% -19.2% 32002 ± 6% softirqs.CPU10.SCHED
39624 ± 10% -19.8% 31770 ± 6% softirqs.CPU11.SCHED
38851 ± 6% -18.1% 31832 ± 7% softirqs.CPU12.SCHED
39142 ± 6% -15.6% 33029 ± 7% softirqs.CPU13.SCHED
38809 ± 7% -17.5% 32025 ± 6% softirqs.CPU14.SCHED
40786 ± 12% -21.6% 31993 ± 5% softirqs.CPU15.SCHED
38640 ± 6% -17.4% 31903 ± 7% softirqs.CPU16.SCHED
38433 ± 6% -17.1% 31877 ± 6% softirqs.CPU17.SCHED
40026 ± 10% -16.7% 33343 ± 11% softirqs.CPU18.SCHED
41468 ± 9% -25.2% 31023 ± 6% softirqs.CPU19.SCHED
38731 ± 6% -15.0% 32938 ± 4% softirqs.CPU2.SCHED
5406654 ± 5% -8.8% 4931685 ± 3% softirqs.CPU20.NET_RX
39048 ± 6% -18.1% 31963 ± 7% softirqs.CPU20.SCHED
39819 ± 8% -20.6% 31596 ± 7% softirqs.CPU21.SCHED
39843 ± 8% -20.1% 31827 ± 11% softirqs.CPU22.SCHED
39542 ± 10% -19.9% 31668 ± 11% softirqs.CPU24.SCHED
39722 ± 9% -19.2% 32088 ± 9% softirqs.CPU25.SCHED
39669 ± 10% -19.1% 32078 ± 9% softirqs.CPU26.SCHED
39649 ± 11% -19.6% 31887 ± 10% softirqs.CPU27.SCHED
39862 ± 12% -20.6% 31651 ± 10% softirqs.CPU28.SCHED
39196 ± 11% -19.5% 31533 ± 9% softirqs.CPU29.SCHED
40023 ± 9% -19.0% 32406 ± 3% softirqs.CPU3.SCHED
39523 ± 11% -19.1% 31960 ± 10% softirqs.CPU30.SCHED
39695 ± 11% -19.8% 31838 ± 8% softirqs.CPU31.SCHED
38974 ± 11% -19.5% 31383 ± 10% softirqs.CPU32.SCHED
39283 ± 11% -19.2% 31742 ± 10% softirqs.CPU33.SCHED
39383 ± 12% -19.5% 31712 ± 10% softirqs.CPU34.SCHED
40608 ± 12% -21.7% 31797 ± 12% softirqs.CPU36.SCHED
39687 ± 11% -19.6% 31927 ± 11% softirqs.CPU37.SCHED
39664 ± 10% -19.1% 32099 ± 10% softirqs.CPU38.SCHED
39632 ± 10% -19.9% 31762 ± 10% softirqs.CPU39.SCHED
38780 ± 5% -16.3% 32468 ± 7% softirqs.CPU4.SCHED
39578 ± 11% -20.5% 31466 ± 11% softirqs.CPU40.SCHED
39321 ± 10% -18.7% 31961 ± 10% softirqs.CPU41.SCHED
39030 ± 11% -16.7% 32507 ± 9% softirqs.CPU42.SCHED
38372 ± 8% -16.9% 31898 ± 10% softirqs.CPU43.SCHED
38653 ± 7% -17.3% 31949 ± 6% softirqs.CPU44.SCHED
38385 ± 6% -18.4% 31334 ± 6% softirqs.CPU45.SCHED
38379 ± 6% -16.2% 32151 ± 5% softirqs.CPU47.SCHED
38798 ± 5% -15.5% 32780 ± 6% softirqs.CPU48.SCHED
38516 ± 6% -16.0% 32351 ± 6% softirqs.CPU49.SCHED
38685 ± 7% -12.3% 33942 ± 10% softirqs.CPU5.SCHED
38451 ± 6% -16.6% 32063 ± 6% softirqs.CPU50.SCHED
38345 ± 7% -17.3% 31722 ± 6% softirqs.CPU51.SCHED
39662 ± 6% -20.0% 31733 ± 6% softirqs.CPU52.SCHED
38634 ± 7% -17.6% 31849 ± 6% softirqs.CPU53.SCHED
39172 ± 6% -19.7% 31467 ± 7% softirqs.CPU54.SCHED
5379816 ± 4% -8.8% 4905481 ± 2% softirqs.CPU55.NET_RX
38658 ± 7% -18.5% 31505 ± 6% softirqs.CPU55.SCHED
38386 ± 7% -19.0% 31095 ± 4% softirqs.CPU56.SCHED
38370 ± 5% -15.8% 32305 ± 6% softirqs.CPU57.SCHED
38724 ± 7% -18.0% 31762 ± 4% softirqs.CPU58.SCHED
38540 ± 7% -18.5% 31398 ± 5% softirqs.CPU59.SCHED
38642 ± 6% -15.0% 32829 ± 9% softirqs.CPU6.SCHED
38745 ± 7% -18.0% 31789 ± 6% softirqs.CPU60.SCHED
38501 ± 6% -17.6% 31709 ± 6% softirqs.CPU61.SCHED
38697 ± 5% -18.2% 31659 ± 6% softirqs.CPU62.SCHED
39577 ± 7% -20.4% 31503 ± 6% softirqs.CPU63.SCHED
5678631 ± 13% -16.8% 4722162 ± 2% softirqs.CPU64.NET_RX
39312 ± 7% -19.5% 31647 ± 6% softirqs.CPU64.SCHED
38828 ± 5% -18.6% 31606 ± 7% softirqs.CPU65.SCHED
39268 ± 10% -17.9% 32233 ± 11% softirqs.CPU66.SCHED
39052 ± 10% -20.2% 31151 ± 12% softirqs.CPU67.SCHED
38862 ± 12% -20.0% 31094 ± 12% softirqs.CPU68.SCHED
38819 ± 12% -19.8% 31147 ± 11% softirqs.CPU69.SCHED
39013 ± 5% -13.2% 33865 ± 4% softirqs.CPU7.SCHED
39271 ± 10% -22.9% 30261 ± 13% softirqs.CPU70.SCHED
39586 ± 10% -21.1% 31234 ± 12% softirqs.CPU71.SCHED
39970 ± 12% -22.1% 31155 ± 11% softirqs.CPU72.SCHED
39372 ± 10% -20.1% 31441 ± 11% softirqs.CPU73.SCHED
39419 ± 11% -22.7% 30488 ± 10% softirqs.CPU74.SCHED
39041 ± 11% -19.2% 31532 ± 9% softirqs.CPU75.SCHED
39211 ± 11% -20.3% 31254 ± 10% softirqs.CPU76.SCHED
38997 ± 10% -20.4% 31024 ± 11% softirqs.CPU77.SCHED
38902 ± 11% -18.3% 31788 ± 13% softirqs.CPU78.SCHED
39187 ± 11% -19.6% 31506 ± 13% softirqs.CPU79.SCHED
38878 ± 7% -17.1% 32222 ± 6% softirqs.CPU8.SCHED
38849 ± 10% -18.3% 31748 ± 13% softirqs.CPU80.SCHED
39415 ± 10% -18.2% 32243 ± 12% softirqs.CPU81.SCHED
38612 ± 11% -19.0% 31271 ± 11% softirqs.CPU82.SCHED
39255 ± 11% -18.9% 31818 ± 11% softirqs.CPU83.SCHED
38819 ± 10% -19.1% 31403 ± 10% softirqs.CPU84.SCHED
39017 ± 10% -17.7% 32115 ± 12% softirqs.CPU85.SCHED
39175 ± 10% -17.7% 32232 ± 11% softirqs.CPU86.SCHED
38852 ± 10% -17.0% 32244 ± 11% softirqs.CPU87.SCHED
41916 ± 6% -23.5% 32046 ± 8% softirqs.CPU9.SCHED
3452696 ± 8% -18.5% 2812299 ± 6% softirqs.SCHED
lmbench3.TCP.socket.bandwidth.10MB.MB_sec
120000 +-+----------------------------------------------------------------+
| |
100000 +-+++ +.+ .+ +. ++.++.+++. + .+ + .++.+|
| : : + + ++.+++.++.+ ++ + + ++.++.+ +.++.+++ |
| : : + |
80000 OO+OO:OOO OO OOO OO OOO OO |
| : : |
60000 +-+ : : |
| : : |
40000 +-+ : : |
| : : |
| :: |
20000 +-+ :: |
| :: |
0 +-+----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[rcutorture] 7c932cda19: WARNING:at_kernel/rcu/rcutorture.c:#rcu_torture_writer[rcutorture]
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 7c932cda193c4a9918a5867c8d7ec30ceb11d119 ("rcutorture: Fix stutter_wait() return value and freelist checks")
https://git.kernel.org/cgit/linux/kernel/git/paulmck/linux-rcu.git dev.2019.04.09b
in testcase: ltp
with the following parameters:
test: kernel_misc
test-description: The LTP testsuite contains a collection of tools for testing the Linux kernel and related features.
test-url: http://linux-test-project.github.io/
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+--------------------------------------------------------------------+------------+------------+
| | c41c3a1676 | 7c932cda19 |
+--------------------------------------------------------------------+------------+------------+
| boot_successes | 16 | 13 |
| boot_failures | 16 | 19 |
| BUG:kernel_reboot-without-warning_in_test_stage | 13 | 7 |
| BUG:soft_lockup-CPU##stuck_for#s | 1 | |
| RIP:free_reserved_area | 1 | |
| Kernel_panic-not_syncing:softlockup:hung_tasks | 1 | |
| INFO:rcu_sched_self-detected_stall_on_CPU | 2 | |
| RIP:console_unlock | 2 | |
| WARNING:at_kernel/rcu/rcutorture.c:#rcu_torture_writer[rcutorture] | 0 | 12 |
| RIP:rcu_torture_writer[rcutorture] | 0 | 12 |
+--------------------------------------------------------------------+------------+------------+
[ 261.653491] WARNING: CPU: 0 PID: 3098 at kernel/rcu/rcutorture.c:1019 rcu_torture_writer+0x506/0x890 [rcutorture]
[ 261.656920] rcu-torture: Stopping rcu_torture_fwd_prog task
[ 261.658451] Modules linked in: rcutorture(-) torture loop rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver sr_mod cdrom sg crct10dif_pclmul crc32_pclmul crc32c_intel ata_generic pata_acpi bochs_drm ghash_clmulni_intel ttm ppdev drm_kms_helper snd_pcm snd_timer syscopyarea sysfillrect sysimgblt fb_sys_fops snd aesni_intel ata_piix crypto_simd drm cryptd glue_helper soundcore joydev libata pcspkr serio_raw parport_pc parport floppy i2c_piix4 ip_tables
[ 261.668859] rcu-torture: Stopping rcu_torture_fakewriter
[ 261.678108] CPU: 0 PID: 3098 Comm: rcu_torture_wri Not tainted 5.1.0-rc1-00104-g7c932cd #1
[ 261.678110] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 261.678127] RIP: 0010:rcu_torture_writer+0x506/0x890 [rcutorture]
[ 261.678132] Code: ff ff 49 8b 17 49 39 d7 75 eb 48 8b 4d 00 49 8d 57 e8 48 39 d1 74 de 41 8b 57 f8 4c 89 ee 48 c7 c7 d8 b3 65 c0 e8 aa cf 83 d8 <0f> 0b eb c7 84 c0 0f 84 50 01 00 00 c6 44 24 0f 01 e9 b9 fc ff ff
[ 261.700921] RSP: 0018:ffffb294810a7eb0 EFLAGS: 00010286
[ 261.704207] RAX: 0000000000000000 RBX: ffffffffc0678398 RCX: 0000000000000000
[ 261.707804] RDX: ffff91e1bfc1ed00 RSI: ffff91e1bfc16738 RDI: ffff91e1bfc16738
[ 261.711084] RBP: ffffffffc0678388 R08: 00000000000041dc R09: 0000000000aaaaaa
[ 261.714666] R10: ffffb294832b7c60 R11: ffff91e086fe7290 R12: ffffffffc065d500
[ 261.716865] rcu-torture: Stopping rcu_torture_fakewriter
[ 261.718256] R13: ffffffffc065b600 R14: ffffffffc0677228 R15: ffffffffc0677108
[ 261.718259] FS: 0000000000000000(0000) GS:ffff91e1bfc00000(0000) knlGS:0000000000000000
[ 261.718261] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 261.718262] CR2: 00007f0952707340 CR3: 000000019de16000 CR4: 00000000000006f0
[ 261.718271] Call Trace:
[ 261.725863] rcu-torture: Stopping rcu_torture_fakewriter
[ 261.729147] ? rcu_torture_pipe_update+0x110/0x110 [rcutorture]
[ 261.729156] kthread+0x11e/0x140
[ 261.736866] rcu-torture: Stopping rcu_torture_fakewriter
[ 261.738609] ? kthread_park+0x90/0x90
[ 261.754736] ret_from_fork+0x35/0x40
[ 261.757573] ---[ end trace 347a0c4f69921e29 ]---
To reproduce:
# build kernel
cd linux
cp config-5.1.0-rc1-00104-g7c932cd .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 modules_prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 SHELL=/bin/bash
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
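The recipe above can be collected into a single script. This is only a sketch of the steps as listed in the report: by default it prints each command instead of running it (set DRY_RUN=0 to execute), the `<bzImage>` placeholder is kept as-is since the report does not give the built image's path, and the job-script is the attachment referenced in the email.

```shell
#!/bin/sh
# Dry-run wrapper around the reproduction steps from the report.
set -eu
: "${DRY_RUN:=1}"   # DRY_RUN=1 prints commands; DRY_RUN=0 executes them

run() {
    if [ "$DRY_RUN" = 1 ]; then
        printf '+ %s\n' "$*"
    else
        "$@"
    fi
}

# Same gcc-7 toolchain flags used for every build step in the report.
KMAKE="make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64"

# Build the kernel and modules.
run cd linux
run cp config-5.1.0-rc1-00104-g7c932cd .config
for target in olddefconfig prepare modules_prepare SHELL=/bin/bash bzImage; do
    run $KMAKE "$target"
done

# Package the lkp-tests library tree and boot the image under qemu.
run git clone https://github.com/intel/lkp-tests.git
run cd lkp-tests
run sh -c 'find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz'
run bin/lkp qemu -k '<bzImage>' -m modules.cgz job-script
```

In dry-run mode the script exits cleanly without a kernel tree present, so the command sequence can be reviewed before committing to a full build.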
Thanks,
Rong Chen