[x86/alternative] 7e6c60cec4: BUG:unable_to_handle_kernel
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 7e6c60cec48b89fd5170208372f9ce08c2078439 ("x86/alternative: Initialize temporary mm for patching")
https://github.intel.com/rpedgeco/linux_public.git text_poke_merge_v4.6.2
in testcase: trinity
with following parameters:
runtime: 300s
test-description: Trinity is a linux system call fuzz tester.
test-url: http://codemonkey.org.uk/projects/trinity/
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 2G
caused the below changes (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+------------------------------------------+------------+------------+
| | 6011f89131 | 7e6c60cec4 |
+------------------------------------------+------------+------------+
| boot_successes | 4 | 0 |
| boot_failures | 0 | 4 |
| BUG:unable_to_handle_kernel | 0 | 4 |
| Oops:#[##] | 0 | 4 |
| RIP:uprobe_start_dup_mmap | 0 | 4 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 4 |
+------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
[ 1.789074] BUG: unable to handle kernel NULL pointer dereference at 0000000000000000
[ 1.791659] #PF error: [WRITE]
[ 1.791659] PGD 0 P4D 0
[ 1.791659] Oops: 0002 [#1] KASAN PTI
[ 1.791659] CPU: 0 PID: 0 Comm: swapper Not tainted 5.1.0-rc5-00633-g7e6c60c #5
[ 1.791659] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1.791659] RIP: 0010:uprobe_start_dup_mmap+0x2f/0x60
[ 1.791659] Code: e8 36 8a f3 ff 31 d2 be 22 00 00 00 48 c7 c7 a0 ae cb 93 e8 b3 2b e9 ff e8 de 16 d1 00 ff 05 e0 2c 7b 01 48 8b 05 e9 c3 3f 02 <ff> 00 8b 05 a9 c3 3f 02 85 c0 75 0c e8 00 8a f3 ff ff 0d c2 2c 7b
[ 1.791659] RSP: 0000:ffffffff94207ce0 EFLAGS: 00010282
[ 1.791659] RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffffffff9296fe83
[ 1.791659] RDX: 0000000000000007 RSI: dffffc0000000000 RDI: ffffffff9473ec44
[ 1.791659] RBP: ffffffff94207e70 R08: fffffbfff28595e8 R09: fffffbfff28595e8
[ 1.791659] R10: ffffffff942caf3b R11: fffffbfff28595e8 R12: 1ffffffff2840fbd
[ 1.791659] R13: ffffffff94c0ce4a R14: ffffffff9438b2e0 R15: ffff888064f6ba00
[ 1.791659] FS: 0000000000000000(0000) GS:ffffffff94291000(0000) knlGS:0000000000000000
[ 1.791659] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1.791659] CR2: 0000000000000000 CR3: 0000000039c2e000 CR4: 00000000000406b0
[ 1.791659] Call Trace:
[ 1.791659] dup_mm+0xf5/0x880
[ 1.791659] ? pde_free+0x90/0x90
[ 1.791659] ? kernfs_next_descendant_post+0xa4/0x140
[ 1.791659] ? get_mm_exe_file+0x150/0x150
[ 1.791659] ? proc_register+0x230/0x250
[ 1.791659] ? trace_event_define_fields_vector_alloc_managed+0xff/0xff
[ 1.791659] poking_init+0x7e/0x38f
[ 1.791659] ? __init_extra_mapping+0x584/0x584
[ 1.791659] ? cgroup_init+0xa4a/0xa79
[ 1.791659] ? trace_event_define_fields_vector_alloc_managed+0xff/0xff
[ 1.791659] start_kernel+0x7e9/0x848
[ 1.791659] ? mem_encrypt_init+0x37/0x37
[ 1.791659] ? idt_setup_from_table+0xd7/0x120
[ 1.791659] ? x86_family+0x2f/0x40
[ 1.791659] ? load_ucode_bsp+0x2be/0x357
[ 1.791659] secondary_startup_64+0xb6/0xc0
[ 1.791659] Modules linked in:
[ 1.791659] CR2: 0000000000000000
[ 1.791659] ---[ end trace 63391d18d5b56f9c ]---
To reproduce:
# build kernel
cd linux
cp config-5.1.0-rc5-00633-g7e6c60c .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 modules_prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 SHELL=/bin/bash
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
1f62a2b603 ("x86/alternative: Use temporary mm for text poking"): BUG: kernel NULL pointer dereference, address: 00000024
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
git://internal_merge_and_test_tree devel-catchup-201904262151
commit 1f62a2b603b4b0873eefa9ff73eb7bc784aa8c8d
Author: Nadav Amit <namit(a)vmware.com>
AuthorDate: Thu Apr 25 17:11:27 2019 -0700
Commit: Peter Zijlstra <peterz(a)infradead.org>
CommitDate: Fri Apr 26 15:33:23 2019 +0200
x86/alternative: Use temporary mm for text poking
text_poke() can potentially compromise security as it sets temporary
PTEs in the fixmap. These PTEs might be used to rewrite the kernel code
from other cores accidentally or maliciously, if an attacker gains the
ability to write onto kernel memory.
Moreover, since remote TLBs are not flushed after the temporary PTEs are
removed, the time-window in which the code is writable is not limited if
the fixmap PTEs - maliciously or accidentally - are cached in the TLB.
To address these potential security hazards, use a temporary mm for
patching the code.
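A minimal user-space analogue may help illustrate the lifetime argument: keep the primary mapping of the "text" read-only, create a short-lived writable alias of the same backing page only for the duration of the write, and tear the alias down immediately afterwards. This is a sketch using memfd_create()/mmap() purely for illustration; the actual patch switches to a dedicated temporary mm with local PTEs rather than using user mappings.
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long psz = sysconf(_SC_PAGESIZE);
    int fd = memfd_create("fake-text", 0);      /* backing for the "text" page */

    if (fd < 0 || ftruncate(fd, psz) < 0)
        return 1;

    /* The "kernel text": kept read-only for its whole lifetime. */
    unsigned char *text = mmap(NULL, psz, PROT_READ, MAP_SHARED, fd, 0);

    /* Short-lived writable alias of the same backing page, created only
     * for the duration of the patch and torn down right after. */
    unsigned char *alias = mmap(NULL, psz, PROT_READ | PROT_WRITE,
                                MAP_SHARED, fd, 0);
    if (text == MAP_FAILED || alias == MAP_FAILED)
        return 1;

    memcpy(alias, "\x90\x90\x90", 3);           /* write the "patch" */
    munmap(alias, psz);                         /* window for writes ends here */

    /* The read-only view observes the new bytes: prints "90 90 90". */
    printf("%02x %02x %02x\n", text[0], text[1], text[2]);

    munmap(text, psz);
    close(fd);
    return 0;
}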
Finally, text_poke() is also not conservative enough when mapping pages,
as it always tries to map 2 pages, even when a single one is sufficient.
So try to be more conservative, and do not map more than needed.
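The page-count point is simple arithmetic; here is a hedged sketch (illustrative names, not the kernel's helpers) of the check that decides whether one page or two must be mapped: a second page is needed only when the write actually crosses a page boundary.
#include <stdio.h>

#define PAGE_SIZE 4096UL
#define PAGE_MASK (~(PAGE_SIZE - 1))

/* Number of pages touched by a write of `len` bytes at `addr`. */
static unsigned long pages_needed(unsigned long addr, unsigned long len)
{
    unsigned long first = addr & PAGE_MASK;
    unsigned long last = (addr + len - 1) & PAGE_MASK;

    return (last - first) / PAGE_SIZE + 1;
}

int main(void)
{
    printf("%lu\n", pages_needed(0x1000, 5));   /* fits in one page: 1 */
    printf("%lu\n", pages_needed(0x1ffd, 5));   /* crosses a boundary: 2 */
    return 0;
}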
Cc: <x86(a)kernel.org>
Cc: Dave Hansen <dave.hansen(a)intel.com>
Cc: Kees Cook <keescook(a)chromium.org>
Cc: Ingo Molnar <mingo(a)redhat.com>
Cc: <deneen.t.dock(a)intel.com>
Cc: <ard.biesheuvel(a)linaro.org>
Cc: <will.deacon(a)arm.com>
Cc: <akpm(a)linux-foundation.org>
Cc: <kristen(a)linux.intel.com>
Cc: <kernel-hardening(a)lists.openwall.com>
Cc: Masami Hiramatsu <mhiramat(a)kernel.org>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Borislav Petkov <bp(a)alien8.de>
Cc: <linux_dti(a)icloud.com>
Cc: Dave Hansen <dave.hansen(a)linux.intel.com>
Cc: Nadav Amit <nadav.amit(a)gmail.com>
Cc: <hpa(a)zytor.com>
Cc: Andy Lutomirski <luto(a)kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
Signed-off-by: Nadav Amit <namit(a)vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe(a)intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
Link: https://lkml.kernel.org/r/[email protected]
b82a57e7e3 x86/alternative: Initialize temporary mm for patching
1f62a2b603 x86/alternative: Use temporary mm for text poking
93d9bbc81f 0day head guard for 'devel-catchup-201904262151'
+----------------------------------------------------------------------+------------+------------+------------+
| | b82a57e7e3 | 1f62a2b603 | 93d9bbc81f |
+----------------------------------------------------------------------+------------+------------+------------+
| boot_successes | 0 | 0 | 0 |
| boot_failures | 48 | 13 | 15 |
| BUG:kernel_reboot-without-warning_in_boot-around-mounting-root_stage | 47 | | |
| WARNING:at_kernel/locking/lockdep.c:#lockdep_register_key | 1 | | |
| EIP:lockdep_register_key | 1 | | |
| BUG:kernel_NULL_pointer_dereference,address | 0 | 13 | 15 |
| Oops:#[##] | 0 | 13 | 15 |
| EIP:__get_locked_pte | 0 | 13 | 15 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 13 | 15 |
+----------------------------------------------------------------------+------------+------------+------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <lkp(a)intel.com>
[ 0.118618] Spectre V2 : Mitigation: Full generic retpoline
[ 0.119613] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
[ 0.120454] Speculative Store Bypass: Vulnerable
[ 0.120613] L1TF: Kernel not compiled for PAE. No mitigation for L1TF
[ 0.122025] Performance Events: unsupported p6 CPU model 60 no PMU driver, software events only.
[ 0.122707] BUG: kernel NULL pointer dereference, address: 00000024
[ 0.123354] #PF: supervisor read access in kernel mode
[ 0.123609] #PF: error_code(0x0000) - not-present page
[ 0.123609] *pde = 00000000
[ 0.123609] Oops: 0000 [#1]
[ 0.123609] CPU: 0 PID: 1 Comm: swapper Not tainted 5.1.0-rc3-00033-g1f62a2b #1
[ 0.123609] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 0.123609] EIP: __get_locked_pte+0xc/0xa1
[ 0.123609] Code: ff 58 5a 8b 5b 08 85 db 75 e4 8d 45 a8 89 f1 89 fa e8 4a 5c 00 00 8d 65 f4 5b 5e 5f 5d c3 55 89 e5 57 56 53 53 e8 e5 69 f1 ff <8b> 58 24 89 d6 c1 ee 16 8d 1c b3 85 db 75 04 31 db eb 7a 83 3b 00
[ 0.123609] EAX: 00000000 EBX: 00000f7a ECX: b0247e6c EDX: 00000000
[ 0.123609] ESI: cf70f1e8 EDI: 00000000 EBP: b0247e48 ESP: b0247e38
[ 0.123609] DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 0068 EFLAGS: 00210046
[ 0.123609] CR0: 80050033 CR2: 00000024 CR3: 15ba7000 CR4: 001406d0
[ 0.123609] Call Trace:
[ 0.123609] __text_poke+0xf2/0x24d
[ 0.123609] ? jump_label_test+0x75/0xb4
[ 0.123609] ? jump_label_test+0x76/0xb4
[ 0.123609] text_poke+0x3a/0x3f
[ 0.123609] ? jump_label_test+0x75/0xb4
[ 0.123609] text_poke_bp+0x5c/0xc1
[ 0.123609] ? jump_label_test+0x75/0xb4
[ 0.123609] __jump_label_transform+0xcf/0xd8
[ 0.123609] ? jump_label_test+0x7a/0xb4
[ 0.123609] arch_jump_label_transform+0x27/0x39
[ 0.123609] __jump_label_update+0x53/0x85
[ 0.123609] jump_label_update+0xa6/0xae
[ 0.123609] static_key_disable_cpuslocked+0x52/0x5e
[ 0.123609] ? jump_label_init_module+0x14/0x14
[ 0.123609] jump_label_test+0x4e/0xb4
[ 0.123609] do_one_initcall+0x73/0x14a
[ 0.123609] ? proc_register+0x9e/0xdd
[ 0.123609] ? proc_create_seq_private+0x3e/0x40
[ 0.123609] ? init_mm_internals+0x86/0x8b
[ 0.123609] kernel_init_freeable+0x5d/0x184
[ 0.123609] ? rest_init+0xc0/0xc0
[ 0.123609] kernel_init+0xd/0xd0
[ 0.123609] ret_from_fork+0x2e/0x38
[ 0.123609] Modules linked in:
[ 0.123609] CR2: 0000000000000024
[ 0.123609] random: get_random_bytes called from init_oops_id+0x28/0x3f with crng_init=0
[ 0.123609] ---[ end trace 23db22ba2cea6912 ]---
[ 0.123609] EIP: __get_locked_pte+0xc/0xa1
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 93d9bbc81fe6209237659c9549ff7bafa03e3031 085b7755808aa11f78ab9377257e1dad2e6fa4bb --
git bisect good 3b4a8d80da8ec87094d6da1ca9d47870988692b3 # 23:27 G 10 0 10 19 Merge 'peterz-queue/x86/irq' into devel-catchup-201904262151
git bisect bad 97a8d8d3264b93824352ced4d361d6a397f629ca # 23:38 B 0 10 25 0 Merge 'peterz-queue/master' into devel-catchup-201904262151
git bisect good 6679f2d8a3ead20bc97751896bcbb0745370fc29 # 23:50 G 11 0 11 27 Merge 'peterz-queue/x86/platform' into devel-catchup-201904262151
git bisect good 53e4e29232a38ecdec1817f969cf92b2564f895e # 00:01 G 10 0 10 10 Merge 'peterz-queue/locking/core' into devel-catchup-201904262151
git bisect good 3ef45ea34c756ab8a8b9eab16df9a12b37f5988b # 00:17 G 10 0 10 10 Merge branch 'perf/urgent'
git bisect good 3cb125f922073a99d8e31c6ca472674e8f88e317 # 00:28 G 11 0 11 11 Merge branch 'core/rseq'
git bisect good c78e73bb93d22659fe5cd2f6f898f1340a1c19d0 # 00:39 G 10 0 10 10 Merge branch 'core/objtool'
git bisect good d7d5028b32202a43cbcab9028d50414244b61b9d # 00:49 G 10 0 10 10 Merge branch 'core/core'
git bisect bad 6b83cc2b9335b1989b17eefc7233a00735a2d949 # 00:57 B 0 11 37 11 modules: Use vmalloc special flag
git bisect bad 1f62a2b603b4b0873eefa9ff73eb7bc784aa8c8d # 01:08 B 0 11 26 0 x86/alternative: Use temporary mm for text poking
git bisect good 14275f5ad22fe01e37287342140f69ce8d7ac79d # 01:24 G 10 0 10 10 x86/jump_label: Use text_poke_early() during early init
git bisect good be5769f6f069546a1ba0f87f0a92b3051a21262f # 01:33 G 10 0 10 25 x86/mm: Save debug registers when loading a temporary mm
git bisect good b82a57e7e3a159203af546d7a3ee3d76b88ab14c # 01:55 G 10 0 10 10 x86/alternative: Initialize temporary mm for patching
# first bad commit: [1f62a2b603b4b0873eefa9ff73eb7bc784aa8c8d] x86/alternative: Use temporary mm for text poking
git bisect good b82a57e7e3a159203af546d7a3ee3d76b88ab14c # 01:58 G 31 0 31 41 x86/alternative: Initialize temporary mm for patching
# extra tests with debug options
git bisect bad 1f62a2b603b4b0873eefa9ff73eb7bc784aa8c8d # 02:16 B 0 11 26 0 x86/alternative: Use temporary mm for text poking
# extra tests on HEAD of linux-devel/devel-catchup-201904262151
git bisect bad 93d9bbc81fe6209237659c9549ff7bafa03e3031 # 02:16 B 0 15 34 0 0day head guard for 'devel-catchup-201904262151'
# extra tests on tree/branch linux-devel/devel-catchup-201904262151
git bisect bad 93d9bbc81fe6209237659c9549ff7bafa03e3031 # 08:18 B 0 15 34 0 0day head guard for 'devel-catchup-201904262151'
# extra tests with first bad commit reverted
git bisect good c2480da0b640c8ad7d753fe41c1e5c23a492a985 # 08:36 G 10 0 10 10 Revert "x86/alternative: Use temporary mm for text poking"
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
24186b40ab ("x86/percpu: Remove this_cpu_read_stable()"): BUG: kernel reboot-without-warning in boot stage
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git x86/core
commit 24186b40ab0ad65fb3d0a16d140cf1336d768d4d
Author: Peter Zijlstra <peterz(a)infradead.org>
AuthorDate: Sun Mar 10 13:48:38 2019 +0100
Commit: Peter Zijlstra <peterz(a)infradead.org>
CommitDate: Sat Apr 27 12:31:30 2019 +0200
x86/percpu: Remove this_cpu_read_stable()
this_cpu_read_stable() and its implementation percpu_stable_cpu() read like a nasty hack because older compilers would not CSE things.
read like a nasty hack because older compilers would not CSE things.
This is no longer so; __this_cpu_read() _is_ very much affected by
CSE, so remove this hack and use the proper function.
17642871 2157438 747808 20548117 1398a15 defconfig-pre/vmlinux.o
17639081 2157438 747808 20544327 1397b47 defconfig-post/vmlinux.o
Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
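The CSE argument above can be seen with any inline per-thread accessor; below is a hedged user-space sketch (stand-in names, not the kernel's percpu machinery) of a plain inline read that a modern compiler folds into a single load at -O2, which is the behaviour the removal relies on.
#include <stdio.h>

static __thread int current_cpu;    /* stand-in for a per-CPU variable */

/* Plain inline read, the analogue of __this_cpu_read(): nothing stops the
 * compiler from common-subexpression-eliminating repeated calls. */
static inline int this_cpu_read_sketch(void)
{
    return current_cpu;
}

/* Compile with `gcc -O2 -S` and note that both calls typically collapse
 * into a single load of the thread-local variable. */
int read_twice(void)
{
    return this_cpu_read_sketch() + this_cpu_read_sketch();
}

int main(void)
{
    current_cpu = 3;
    printf("%d\n", read_twice());   /* prints 6 */
    return 0;
}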
a0aff529af x86/percpu: Differentiate this_cpu_{}() and __this_cpu_{}()
24186b40ab x86/percpu: Remove this_cpu_read_stable()
7608dc0eec x86/percpu: Optimize raw_cpu_xchg()
+-------------------------------------------------+------------+------------+------------+
| | a0aff529af | 24186b40ab | 7608dc0eec |
+-------------------------------------------------+------------+------------+------------+
| boot_successes | 41 | 0 | 0 |
| boot_failures | 0 | 21 | 13 |
| BUG:kernel_reboot-without-warning_in_boot_stage | 0 | 18 | 12 |
| PANIC:double_fault | 0 | 2 | |
| BUG:kernel_hang_in_boot_stage | 0 | 3 | 1 |
+-------------------------------------------------+------------+------------+------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <lkp(a)intel.com>
[ 0.272562] Spectre V2 : Mitigation: Full generic retpoline
[ 0.273768] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
[ 0.275262] Speculative Store Bypass: Vulnerable
[ 0.276111] L1TF: Kernel not compiled for PAE. No mitigation for L1TF
[ 0.277956] debug: unmapping init [mem 0xb1e54000-0xb1e5bfff]
BUG: kernel reboot-without-warning in boot stage
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 7608dc0eec59bb9ecbc5d98a5c5b03b137b2e5c4 e53f31bffe1d552f496b674cd1733658a268e177 --
git bisect bad 03c3ee8e514b1542276b118719d9c19159741dba # 20:59 B 0 10 25 0 x86/percpu: Relax smp_processor_id()
git bisect good a0aff529af4d94c8a4db1a45987f42bd93497394 # 21:07 G 10 0 0 0 x86/percpu: Differentiate this_cpu_{}() and __this_cpu_{}()
git bisect bad 24186b40ab0ad65fb3d0a16d140cf1336d768d4d # 21:18 B 0 10 32 7 x86/percpu: Remove this_cpu_read_stable()
# first bad commit: [24186b40ab0ad65fb3d0a16d140cf1336d768d4d] x86/percpu: Remove this_cpu_read_stable()
git bisect good a0aff529af4d94c8a4db1a45987f42bd93497394 # 21:21 G 31 0 0 0 x86/percpu: Differentiate this_cpu_{}() and __this_cpu_{}()
# extra tests on HEAD of peterz-queue/x86/core
git bisect bad 7608dc0eec59bb9ecbc5d98a5c5b03b137b2e5c4 # 21:22 B 0 12 31 1 x86/percpu: Optimize raw_cpu_xchg()
# extra tests on tree/branch peterz-queue/x86/core
git bisect bad 7608dc0eec59bb9ecbc5d98a5c5b03b137b2e5c4 # 21:25 B 0 12 31 1 x86/percpu: Optimize raw_cpu_xchg()
# extra tests with first bad commit reverted
git bisect good 8210f4e9f9cdff1ac4619d895650eabdf0d44bd5 # 21:36 G 10 0 0 0 Revert "x86/percpu: Remove this_cpu_read_stable()"
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
b82f808a42 ("x86/percpu: Remove this_cpu_read_stable()"): BUG: kernel reboot-without-warning in boot stage
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git x86/core
commit b82f808a421f9fabf2e5769f351a063c9a4ebec1
Author: Peter Zijlstra <peterz(a)infradead.org>
AuthorDate: Sun Mar 10 13:48:38 2019 +0100
Commit: Peter Zijlstra <peterz(a)infradead.org>
CommitDate: Fri Apr 26 15:33:32 2019 +0200
x86/percpu: Remove this_cpu_read_stable()
this_cpu_read_stable() and its implementation percpu_stable_cpu() read like a nasty hack because older compilers would not CSE things.
read like a nasty hack because older compilers would not CSE things.
This is no longer so; __this_cpu_read() _is_ very much affected by
CSE, so remove this hack and use the proper function.
17642871 2157438 747808 20548117 1398a15 defconfig-pre/vmlinux.o
17639081 2157438 747808 20544327 1397b47 defconfig-post/vmlinux.o
Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
d59a421e40 x86/percpu: Differentiate this_cpu_{}() and __this_cpu_{}()
b82f808a42 x86/percpu: Remove this_cpu_read_stable()
752f3d568a x86/percpu: Optimize raw_cpu_xchg()
+-------------------------------------------------+------------+------------+------------+
| | d59a421e40 | b82f808a42 | 752f3d568a |
+-------------------------------------------------+------------+------------+------------+
| boot_successes | 32 | 0 | 0 |
| boot_failures | 0 | 11 | 11 |
| BUG:kernel_reboot-without-warning_in_boot_stage | 0 | 11 | 10 |
| PANIC:double_fault | 0 | 0 | 1 |
| BUG:kernel_hang_in_boot_stage | 0 | 0 | 1 |
+-------------------------------------------------+------------+------------+------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <lkp(a)intel.com>
[ 0.125408] Spectre V2 : Mitigation: Full generic retpoline
[ 0.125852] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
[ 0.126511] Speculative Store Bypass: Vulnerable
[ 0.126881] L1TF: Kernel not compiled for PAE. No mitigation for L1TF
[ 0.127518] debug: unmapping init [mem 0xb1e56000-0xb1e5dfff]
BUG: kernel reboot-without-warning in boot stage
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 93d9bbc81fe6209237659c9549ff7bafa03e3031 085b7755808aa11f78ab9377257e1dad2e6fa4bb --
git bisect good 3b4a8d80da8ec87094d6da1ca9d47870988692b3 # 00:36 G 11 0 0 0 Merge 'peterz-queue/x86/irq' into devel-catchup-201904262151
git bisect bad 97a8d8d3264b93824352ced4d361d6a397f629ca # 00:45 B 0 10 25 0 Merge 'peterz-queue/master' into devel-catchup-201904262151
git bisect good 6679f2d8a3ead20bc97751896bcbb0745370fc29 # 00:59 G 11 0 0 0 Merge 'peterz-queue/x86/platform' into devel-catchup-201904262151
git bisect good 53e4e29232a38ecdec1817f969cf92b2564f895e # 01:16 G 11 0 0 0 Merge 'peterz-queue/locking/core' into devel-catchup-201904262151
git bisect good 3ef45ea34c756ab8a8b9eab16df9a12b37f5988b # 01:35 G 10 0 0 0 Merge branch 'perf/urgent'
git bisect good 3cb125f922073a99d8e31c6ca472674e8f88e317 # 01:50 G 10 0 0 0 Merge branch 'core/rseq'
git bisect good c78e73bb93d22659fe5cd2f6f898f1340a1c19d0 # 02:00 G 11 0 0 0 Merge branch 'core/objtool'
git bisect good d7d5028b32202a43cbcab9028d50414244b61b9d # 02:12 G 10 0 0 0 Merge branch 'core/core'
git bisect good 6b83cc2b9335b1989b17eefc7233a00735a2d949 # 02:22 G 11 0 11 11 modules: Use vmalloc special flag
git bisect good 79fa8d119c54e43b2fd05e78cc5648a7026e1826 # 02:33 G 11 0 11 11 Merge branch 'sched/core'
git bisect bad 7e81a20b15b6c35fffbc9561dad1421e3598fe8d # 02:42 B 0 9 24 0 x86/percpu, x86/tlb: Relax cpu_tlbstate accesses
git bisect bad b82f808a421f9fabf2e5769f351a063c9a4ebec1 # 02:54 B 0 11 26 0 x86/percpu: Remove this_cpu_read_stable()
git bisect good d59a421e40f8713f086b52f1e4390b552ba82290 # 03:07 G 10 0 0 0 x86/percpu: Differentiate this_cpu_{}() and __this_cpu_{}()
# first bad commit: [b82f808a421f9fabf2e5769f351a063c9a4ebec1] x86/percpu: Remove this_cpu_read_stable()
git bisect good d59a421e40f8713f086b52f1e4390b552ba82290 # 03:12 G 30 0 0 0 x86/percpu: Differentiate this_cpu_{}() and __this_cpu_{}()
# extra tests on HEAD of linux-devel/devel-catchup-201904262151
git bisect bad 93d9bbc81fe6209237659c9549ff7bafa03e3031 # 03:13 B 0 17 39 0 0day head guard for 'devel-catchup-201904262151'
# extra tests on tree/branch peterz-queue/x86/core
git bisect bad 752f3d568a7512864fcda200e289d2523cd73e45 # 03:31 B 0 10 25 0 x86/percpu: Optimize raw_cpu_xchg()
# extra tests with first bad commit reverted
git bisect good d6f89b9305ec0569942941bc319a84c1976b3c4b # 03:48 G 11 0 0 0 Revert "x86/percpu: Remove this_cpu_read_stable()"
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
[locking/rwsem] e59710760a: vm-scalability.median -32.1% regression
by kernel test robot
Greetings,
FYI, we noticed a -32.1% regression of vm-scalability.median due to commit:
commit: e59710760a857745a17e0283c7ba34e69f93711f ("locking/rwsem: Make rwsem_spin_on_owner() return owner state")
https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git WIP.locking/core
in testcase: vm-scalability
on test machine: 160 threads Intel(R) Xeon(R) CPU E7-8890 v4 @ 2.20GHz with 256G memory
with following parameters:
runtime: 300s
test: small-allocs
cpufreq_governor: performance
ucode: 0xb00002e
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-2018-04-03.cgz/300s/lkp-bdw-ex2/small-allocs/vm-scalability/0xb00002e
commit:
ec0f427b82 ("locking/rwsem: Remove rwsem_wake() wakeup optimization")
e59710760a ("locking/rwsem: Make rwsem_spin_on_owner() return owner state")
ec0f427b82848eb8 e59710760a857745a17e0283c7b
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 28% 1:4 perf-profile.calltrace.cycles-pp.error_entry.do_access
0:4 36% 1:4 perf-profile.children.cycles-pp.error_entry
0:4 28% 1:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
60929 -32.1% 41369 ± 5% vm-scalability.median
11698009 -32.1% 7942982 ± 5% vm-scalability.throughput
372.79 +4.2% 388.43 vm-scalability.time.elapsed_time
372.79 +4.2% 388.43 vm-scalability.time.elapsed_time.max
25325 -83.9% 4087 vm-scalability.time.involuntary_context_switches
7.814e+08 -32.1% 5.303e+08 ± 5% vm-scalability.time.minor_page_faults
18741 -95.7% 813.00 ± 4% vm-scalability.time.percent_of_cpu_this_job_got
67442 -96.7% 2223 ± 7% vm-scalability.time.system_time
2425 -61.3% 938.26 ± 2% vm-scalability.time.user_time
4038545 ± 2% +646.6% 30153014 ± 9% vm-scalability.time.voluntary_context_switches
3.516e+09 -32.1% 2.386e+09 ± 5% vm-scalability.workload
1017651 ± 2% -37.2% 638913 ± 6% numa-numastat.node1.local_node
1030160 ± 2% -35.0% 670015 ± 6% numa-numastat.node1.numa_hit
12513 +148.6% 31101 numa-numastat.node1.other_node
3.00 +3044.4% 94.33 vmstat.cpu.id
92.33 -96.8% 3.00 vmstat.cpu.sy
184.00 -96.0% 7.33 ± 6% vmstat.procs.r
22977 ± 2% +574.3% 154928 ± 7% vmstat.system.cs
3.13 ± 6% +91.8 94.91 mpstat.cpu.all.idle%
0.00 ± 28% +0.0 0.00 ± 40% mpstat.cpu.all.iowait%
0.00 ± 37% +0.0 0.04 ± 8% mpstat.cpu.all.soft%
93.48 -89.9 3.55 ± 5% mpstat.cpu.all.sys%
3.39 -1.9 1.50 ± 3% mpstat.cpu.all.usr%
367593 -23.8% 280096 meminfo.Active
367406 -23.8% 279908 meminfo.Active(anon)
31515 ± 2% -20.9% 24943 meminfo.Mapped
15378517 -24.9% 11549254 ± 5% meminfo.Memused
3576965 -29.7% 2513277 ± 6% meminfo.PageTables
9370177 -29.0% 6653846 ± 6% meminfo.SUnreclaim
367367 -25.4% 274189 meminfo.Shmem
9472805 -28.7% 6753140 ± 6% meminfo.Slab
65149 -32.7% 43877 ± 3% meminfo.max_used_kB
9149214 ± 7% +1131.9% 1.127e+08 ± 15% cpuidle.C1.time
283022 ± 3% +450.4% 1557676 ± 26% cpuidle.C1.usage
5116514 ± 17% +7600.0% 3.94e+08 ± 37% cpuidle.C1E.time
97068 ± 10% +4971.2% 4922535 ± 30% cpuidle.C1E.usage
9.494e+08 ± 35% +3229.2% 3.161e+10 ± 25% cpuidle.C3.time
2656491 ± 12% +3531.0% 96458415 ± 4% cpuidle.C3.usage
1.321e+09 ± 18% +2781.5% 3.808e+10 ± 23% cpuidle.C6.time
2085373 ± 5% +3273.3% 70346823 ± 10% cpuidle.C6.usage
5548437 ± 2% +136.6% 13127177 ± 28% cpuidle.POLL.time
3603010 ± 3% +71.1% 6165255 ± 37% cpuidle.POLL.usage
91867 -23.8% 69979 proc-vmstat.nr_active_anon
347299 -6.2% 325621 proc-vmstat.nr_file_pages
65652 -1.6% 64622 proc-vmstat.nr_inactive_anon
7831 -20.4% 6237 proc-vmstat.nr_mapped
893105 -29.7% 628251 ± 6% proc-vmstat.nr_page_table_pages
91769 -25.3% 68522 proc-vmstat.nr_shmem
25655 -3.2% 24822 proc-vmstat.nr_slab_reclaimable
2339692 -28.9% 1663201 ± 6% proc-vmstat.nr_slab_unreclaimable
91867 -23.8% 69979 proc-vmstat.nr_zone_active_anon
65652 -1.6% 64622 proc-vmstat.nr_zone_inactive_anon
4648577 -23.8% 3542930 ± 4% proc-vmstat.numa_hit
4555328 -24.3% 3449587 ± 4% proc-vmstat.numa_local
45275 ± 4% -84.9% 6828 proc-vmstat.pgactivate
6694983 -25.8% 4964955 ± 4% proc-vmstat.pgalloc_normal
7.825e+08 -32.1% 5.315e+08 ± 5% proc-vmstat.pgfault
6615706 -25.4% 4933308 ± 4% proc-vmstat.pgfree
2512 -92.6% 185.33 ± 5% turbostat.Avg_MHz
96.91 -89.7 7.17 ± 5% turbostat.Busy%
282134 ± 3% +451.8% 1556841 ± 26% turbostat.C1
0.01 +0.1 0.15 ± 14% turbostat.C1%
93722 ± 11% +5149.9% 4920371 ± 30% turbostat.C1E
0.01 +0.5 0.52 ± 36% turbostat.C1E%
2654346 ± 12% +3533.9% 96456202 ± 4% turbostat.C3
1.31 ± 35% +40.8 42.15 ± 27% turbostat.C3%
2015428 ± 5% +3387.9% 70296359 ± 10% turbostat.C6
1.80 ± 19% +48.6 50.41 ± 21% turbostat.C6%
2.14 ± 5% +2647.0% 58.88 turbostat.CPU%c1
0.28 ± 86% +4122.4% 11.96 ± 66% turbostat.CPU%c3
0.66 ± 33% +3231.8% 21.99 ± 38% turbostat.CPU%c6
54.00 -21.0% 42.67 ± 2% turbostat.CoreTmp
0.52 ± 14% +233.8% 1.75 ± 25% turbostat.Pkg%pc2
59.33 -19.1% 48.00 turbostat.PkgTmp
565.50 -38.4% 348.22 turbostat.PkgWatt
2841 -10.0% 2557 numa-vmstat.node0.nr_mapped
226584 -32.4% 153194 ± 10% numa-vmstat.node0.nr_page_table_pages
598824 -31.3% 411102 ± 10% numa-vmstat.node0.nr_slab_unreclaimable
1499 ± 2% -18.3% 1225 numa-vmstat.node1.nr_mapped
222867 -25.8% 165466 ± 8% numa-vmstat.node1.nr_page_table_pages
697.67 ± 11% -43.0% 397.67 ± 57% numa-vmstat.node1.nr_shmem
4633 ± 5% -26.9% 3387 ± 20% numa-vmstat.node1.nr_slab_reclaimable
582030 -25.3% 434486 ± 8% numa-vmstat.node1.nr_slab_unreclaimable
869834 ± 4% -22.3% 675455 ± 8% numa-vmstat.node1.numa_hit
774378 ± 4% -27.5% 561295 ± 10% numa-vmstat.node1.numa_local
95455 +19.6% 114159 numa-vmstat.node1.numa_other
1659 ± 17% -26.3% 1223 numa-vmstat.node2.nr_mapped
219759 -27.4% 159465 ± 10% numa-vmstat.node2.nr_page_table_pages
573347 -26.8% 419676 ± 9% numa-vmstat.node2.nr_slab_unreclaimable
49433 ± 10% -44.4% 27484 ± 32% numa-vmstat.node3.nr_active_anon
1829 ± 12% -32.0% 1243 numa-vmstat.node3.nr_mapped
224449 -33.1% 150193 ± 7% numa-vmstat.node3.nr_page_table_pages
586613 -32.2% 397888 ± 7% numa-vmstat.node3.nr_slab_unreclaimable
49433 ± 10% -44.4% 27484 ± 32% numa-vmstat.node3.nr_zone_active_anon
11342 -9.8% 10231 numa-meminfo.node0.Mapped
3993788 ± 3% -25.8% 2964993 ± 12% numa-meminfo.node0.MemUsed
906860 -32.4% 612935 ± 10% numa-meminfo.node0.PageTables
2397639 -31.4% 1644736 ± 10% numa-meminfo.node0.SUnreclaim
2430843 -30.9% 1679208 ± 10% numa-meminfo.node0.Slab
2839 ± 26% -55.5% 1262 ± 87% numa-meminfo.node1.Inactive
18533 ± 5% -26.9% 13549 ± 20% numa-meminfo.node1.KReclaimable
5976 -18.0% 4901 numa-meminfo.node1.Mapped
3667695 ± 2% -21.8% 2866565 ± 9% numa-meminfo.node1.MemUsed
892364 -25.8% 662072 ± 8% numa-meminfo.node1.PageTables
18533 ± 5% -26.9% 13549 ± 20% numa-meminfo.node1.SReclaimable
2330301 -25.4% 1738329 ± 8% numa-meminfo.node1.SUnreclaim
2792 ± 11% -43.0% 1592 ± 57% numa-meminfo.node1.Shmem
2348834 -25.4% 1751879 ± 8% numa-meminfo.node1.Slab
6579 ± 16% -25.6% 4896 numa-meminfo.node2.Mapped
3760680 ± 4% -25.5% 2801836 ± 7% numa-meminfo.node2.MemUsed
879566 -27.5% 638048 ± 10% numa-meminfo.node2.PageTables
2295105 -26.8% 1679120 ± 9% numa-meminfo.node2.SUnreclaim
2316800 -26.8% 1695376 ± 9% numa-meminfo.node2.Slab
197900 ± 10% -44.4% 110069 ± 32% numa-meminfo.node3.Active
197820 ± 10% -44.4% 109944 ± 32% numa-meminfo.node3.Active(anon)
7300 ± 12% -31.8% 4977 numa-meminfo.node3.Mapped
3957918 ± 2% -26.3% 2917027 ± 3% numa-meminfo.node3.MemUsed
898474 -33.1% 600933 ± 7% numa-meminfo.node3.PageTables
2348303 -32.2% 1591936 ± 7% numa-meminfo.node3.SUnreclaim
2377495 -31.6% 1626948 ± 7% numa-meminfo.node3.Slab
18683 ± 3% -22.0% 14573 slabinfo.cred_jar.active_objs
18683 ± 3% -22.0% 14573 slabinfo.cred_jar.num_objs
8411 -27.0% 6142 ± 13% slabinfo.files_cache.active_objs
8411 -27.0% 6142 ± 13% slabinfo.files_cache.num_objs
51583 -21.7% 40388 ± 3% slabinfo.filp.active_objs
806.67 -21.2% 635.33 ± 3% slabinfo.filp.active_slabs
51644 -21.2% 40691 ± 3% slabinfo.filp.num_objs
806.67 -21.2% 635.33 ± 3% slabinfo.filp.num_slabs
19744 ± 2% +16.0% 22910 ± 5% slabinfo.kmalloc-96.active_objs
19862 ± 2% +15.8% 23006 ± 5% slabinfo.kmalloc-96.num_objs
5853 -18.6% 4764 ± 15% slabinfo.mm_struct.active_objs
5853 -18.6% 4764 ± 15% slabinfo.mm_struct.num_objs
23269 -10.5% 20833 ± 6% slabinfo.proc_inode_cache.active_objs
23276 -10.5% 20839 ± 6% slabinfo.proc_inode_cache.num_objs
11010 ± 2% -6.3% 10321 ± 4% slabinfo.shmem_inode_cache.active_objs
11010 ± 2% -6.3% 10321 ± 4% slabinfo.shmem_inode_cache.num_objs
4390 -17.9% 3606 ± 8% slabinfo.sighand_cache.active_objs
4392 -17.7% 3616 ± 8% slabinfo.sighand_cache.num_objs
7116 -20.4% 5662 ± 10% slabinfo.signal_cache.active_objs
7116 -20.4% 5662 ± 10% slabinfo.signal_cache.num_objs
10675 -23.9% 8127 ± 13% slabinfo.task_delay_info.active_objs
10675 -23.9% 8127 ± 13% slabinfo.task_delay_info.num_objs
3416 -18.5% 2785 ± 4% slabinfo.task_struct.active_objs
687.67 -18.8% 558.67 ± 4% slabinfo.task_struct.active_slabs
3440 -18.7% 2796 ± 4% slabinfo.task_struct.num_objs
687.67 -18.8% 558.67 ± 4% slabinfo.task_struct.num_slabs
45792115 -29.6% 32241884 ± 6% slabinfo.vm_area_struct.active_objs
1144913 -29.6% 806357 ± 6% slabinfo.vm_area_struct.active_slabs
45796577 -29.6% 32254336 ± 6% slabinfo.vm_area_struct.num_objs
1144913 -29.6% 806357 ± 6% slabinfo.vm_area_struct.num_slabs
0.64 ± 21% +2945.1% 19.39 ± 9% perf-stat.i.MPKI
5.929e+10 -87.0% 7.684e+09 ± 4% perf-stat.i.branch-instructions
0.05 ± 24% +1.7 1.73 perf-stat.i.branch-miss-rate%
18165016 +212.6% 56788224 ± 4% perf-stat.i.branch-misses
48.10 -41.0 7.12 ± 11% perf-stat.i.cache-miss-rate%
54971263 -64.2% 19652259 ± 8% perf-stat.i.cache-misses
1.164e+08 +137.9% 2.767e+08 ± 3% perf-stat.i.cache-references
23169 ± 2% +574.6% 156291 ± 7% perf-stat.i.context-switches
1.69 +44.0% 2.43 perf-stat.i.cpi
4.828e+11 -92.4% 3.652e+10 ± 3% perf-stat.i.cpu-cycles
97.09 ± 2% -20.4% 77.31 ± 4% perf-stat.i.cpu-migrations
10978 -80.5% 2139 ± 4% perf-stat.i.cycles-between-cache-misses
0.02 ± 11% +0.4 0.37 ± 5% perf-stat.i.dTLB-load-miss-rate%
12748269 +51.2% 19269023 perf-stat.i.dTLB-load-misses
8.967e+10 -92.3% 6.889e+09 ± 4% perf-stat.i.dTLB-loads
0.04 ± 12% +0.1 0.12 ± 2% perf-stat.i.dTLB-store-miss-rate%
499845 ± 12% +392.1% 2459874 ± 3% perf-stat.i.dTLB-store-misses
2.007e+09 +14.5% 2.298e+09 ± 3% perf-stat.i.dTLB-stores
86.08 -12.7 73.42 ± 2% perf-stat.i.iTLB-load-miss-rate%
5182063 +9.9% 5694315 ± 3% perf-stat.i.iTLB-load-misses
138350 +1322.6% 1968129 ± 10% perf-stat.i.iTLB-loads
2.923e+11 -91.3% 2.542e+10 ± 4% perf-stat.i.instructions
1348186 ± 4% -99.7% 4431 ± 7% perf-stat.i.instructions-per-iTLB-miss
0.60 +14.5% 0.69 perf-stat.i.ipc
2093222 -34.9% 1363023 ± 4% perf-stat.i.minor-faults
79.67 +16.3 96.01 perf-stat.i.node-load-miss-rate%
11076687 -27.0% 8080669 ± 7% perf-stat.i.node-load-misses
2984622 -87.7% 366915 ± 3% perf-stat.i.node-loads
65.65 +2.5 68.17 perf-stat.i.node-store-miss-rate%
3579988 -17.0% 2972728 ± 6% perf-stat.i.node-store-misses
1865411 -26.5% 1371147 ± 8% perf-stat.i.node-stores
2093225 -34.9% 1363026 ± 4% perf-stat.i.page-faults
0.40 +2643.6% 10.93 ± 7% perf-stat.overall.MPKI
0.03 +0.7 0.74 perf-stat.overall.branch-miss-rate%
47.23 -40.1 7.15 ± 11% perf-stat.overall.cache-miss-rate%
1.65 -12.7% 1.44 perf-stat.overall.cpi
8779 -78.8% 1865 ± 4% perf-stat.overall.cycles-between-cache-misses
0.01 +0.3 0.28 ± 4% perf-stat.overall.dTLB-load-miss-rate%
0.02 ± 12% +0.1 0.11 perf-stat.overall.dTLB-store-miss-rate%
97.39 -23.0 74.37 ± 2% perf-stat.overall.iTLB-load-miss-rate%
56402 -92.1% 4463 perf-stat.overall.instructions-per-iTLB-miss
0.61 +14.5% 0.69 perf-stat.overall.ipc
78.77 +16.9 95.62 perf-stat.overall.node-load-miss-rate%
65.74 +2.7 68.47 perf-stat.overall.node-store-miss-rate%
31078 -86.6% 4154 perf-stat.overall.path-length
5.911e+10 -87.0% 7.66e+09 ± 4% perf-stat.ps.branch-instructions
18126329 +212.3% 56614513 ± 4% perf-stat.ps.branch-misses
54823408 -64.1% 19657358 ± 8% perf-stat.ps.cache-misses
1.161e+08 +137.8% 2.76e+08 ± 3% perf-stat.ps.cache-references
23094 ± 2% +574.6% 155801 ± 7% perf-stat.ps.context-switches
4.813e+11 -92.4% 3.654e+10 ± 3% perf-stat.ps.cpu-cycles
96.89 ± 2% -20.2% 77.29 ± 4% perf-stat.ps.cpu-migrations
12709846 +51.1% 19206141 perf-stat.ps.dTLB-load-misses
8.939e+10 -92.3% 6.869e+09 ± 4% perf-stat.ps.dTLB-loads
498796 ± 12% +391.6% 2452332 ± 3% perf-stat.ps.dTLB-store-misses
2.001e+09 +14.5% 2.292e+09 ± 3% perf-stat.ps.dTLB-stores
5166142 +9.9% 5675825 ± 3% perf-stat.ps.iTLB-load-misses
138196 +1319.9% 1962210 ± 10% perf-stat.ps.iTLB-loads
2.914e+11 -91.3% 2.535e+10 ± 4% perf-stat.ps.instructions
2086551 -34.9% 1358354 ± 4% perf-stat.ps.minor-faults
11043965 -27.0% 8061836 ± 7% perf-stat.ps.node-load-misses
2976983 -87.6% 367987 ± 3% perf-stat.ps.node-loads
3569562 -16.9% 2966237 ± 6% perf-stat.ps.node-store-misses
1859928 -26.5% 1367126 ± 8% perf-stat.ps.node-stores
2086551 -34.9% 1358354 ± 4% perf-stat.ps.page-faults
1.093e+14 -90.9% 9.918e+12 ± 6% perf-stat.total.instructions
0.00 +1.2e+10% 115.65 ± 79% sched_debug.cfs_rq:/.MIN_vruntime.avg
0.00 +1.4e+12% 13538 ± 61% sched_debug.cfs_rq:/.MIN_vruntime.max
0.00 +5.7e+25% 1185 ± 70% sched_debug.cfs_rq:/.MIN_vruntime.stddev
13838 ± 3% +669.4% 106475 ± 8% sched_debug.cfs_rq:/.load.max
2138 ± 20% -100.0% 0.00 sched_debug.cfs_rq:/.load.min
1549 ± 14% +1270.5% 21233 ± 7% sched_debug.cfs_rq:/.load.stddev
9.69 ± 6% -48.1% 5.03 ± 26% sched_debug.cfs_rq:/.load_avg.avg
2.78 ± 12% -100.0% 0.00 sched_debug.cfs_rq:/.load_avg.min
0.00 +1.2e+10% 115.65 ± 79% sched_debug.cfs_rq:/.max_vruntime.avg
0.00 +1.4e+12% 13538 ± 61% sched_debug.cfs_rq:/.max_vruntime.max
0.00 +5.7e+25% 1185 ± 70% sched_debug.cfs_rq:/.max_vruntime.stddev
30032364 -99.7% 104492 ± 5% sched_debug.cfs_rq:/.min_vruntime.avg
30308408 -99.5% 140383 ± 2% sched_debug.cfs_rq:/.min_vruntime.max
27979369 -99.8% 65034 ± 13% sched_debug.cfs_rq:/.min_vruntime.min
311252 ± 5% -95.2% 14804 ± 10% sched_debug.cfs_rq:/.min_vruntime.stddev
0.82 -93.3% 0.06 ± 4% sched_debug.cfs_rq:/.nr_running.avg
0.39 ± 20% -100.0% 0.00 sched_debug.cfs_rq:/.nr_running.min
0.12 ± 16% +84.0% 0.22 sched_debug.cfs_rq:/.nr_running.stddev
227.50 ± 35% -57.4% 96.86 ± 70% sched_debug.cfs_rq:/.removed.load_avg.max
10505 ± 35% -57.2% 4492 ± 70% sched_debug.cfs_rq:/.removed.runnable_sum.max
92.94 ± 11% -47.9% 48.43 ± 70% sched_debug.cfs_rq:/.removed.util_avg.max
3.64 -73.6% 0.96 ± 6% sched_debug.cfs_rq:/.runnable_load_avg.avg
14.83 +596.1% 103.25 ± 8% sched_debug.cfs_rq:/.runnable_load_avg.max
1.22 ± 34% -100.0% 0.00 sched_debug.cfs_rq:/.runnable_load_avg.min
1.54 ± 8% +436.8% 8.28 ± 9% sched_debug.cfs_rq:/.runnable_load_avg.stddev
12576 ± 2% +746.6% 106475 ± 8% sched_debug.cfs_rq:/.runnable_weight.max
2138 ± 20% -100.0% 0.00 sched_debug.cfs_rq:/.runnable_weight.min
1445 ± 11% +1371.4% 21266 ± 7% sched_debug.cfs_rq:/.runnable_weight.stddev
279148 ± 29% -95.1% 13767 ± 21% sched_debug.cfs_rq:/.spread0.max
-2052888 -97.0% -61649 sched_debug.cfs_rq:/.spread0.min
311380 ± 5% -95.2% 14824 ± 10% sched_debug.cfs_rq:/.spread0.stddev
868.13 -92.0% 69.75 ± 2% sched_debug.cfs_rq:/.util_avg.avg
1483 ± 6% -53.4% 691.44 ± 3% sched_debug.cfs_rq:/.util_avg.max
291.22 ± 26% -100.0% 0.00 sched_debug.cfs_rq:/.util_avg.min
108.79 ± 5% -26.5% 79.96 ± 3% sched_debug.cfs_rq:/.util_avg.stddev
776.76 -99.4% 4.78 ± 4% sched_debug.cfs_rq:/.util_est_enqueued.avg
1417 ± 2% -81.7% 259.83 ± 8% sched_debug.cfs_rq:/.util_est_enqueued.max
25.00 ±138% -100.0% 0.00 sched_debug.cfs_rq:/.util_est_enqueued.min
155.47 ± 11% -82.5% 27.22 ± 5% sched_debug.cfs_rq:/.util_est_enqueued.stddev
631145 ± 8% +41.7% 894444 sched_debug.cpu.avg_idle.avg
304813 ± 3% -47.5% 160134 ± 5% sched_debug.cpu.avg_idle.stddev
217240 ± 2% +9.8% 238611 ± 6% sched_debug.cpu.clock.avg
217260 ± 2% +9.8% 238626 ± 6% sched_debug.cpu.clock.max
217219 ± 2% +9.8% 238597 ± 6% sched_debug.cpu.clock.min
11.94 ± 6% -30.3% 8.32 ± 5% sched_debug.cpu.clock.stddev
217240 ± 2% +9.8% 238611 ± 6% sched_debug.cpu.clock_task.avg
217260 ± 2% +9.8% 238626 ± 6% sched_debug.cpu.clock_task.max
217219 ± 2% +9.8% 238597 ± 6% sched_debug.cpu.clock_task.min
11.94 ± 6% -30.3% 8.32 ± 5% sched_debug.cpu.clock_task.stddev
3.88 ± 11% -77.5% 0.87 ± 4% sched_debug.cpu.cpu_load[0].avg
1.00 ± 27% -100.0% 0.00 sched_debug.cpu.cpu_load[0].min
3.75 ± 7% -76.7% 0.88 ± 6% sched_debug.cpu.cpu_load[1].avg
42.44 ± 92% +140.4% 102.02 ± 7% sched_debug.cpu.cpu_load[1].max
1.28 ± 30% -100.0% 0.00 sched_debug.cpu.cpu_load[1].min
3.28 ± 80% +135.1% 7.71 ± 6% sched_debug.cpu.cpu_load[1].stddev
3.71 ± 5% -79.1% 0.77 ± 5% sched_debug.cpu.cpu_load[2].avg
29.94 ± 76% +225.8% 97.56 ± 4% sched_debug.cpu.cpu_load[2].max
1.67 ± 29% -100.0% 0.00 sched_debug.cpu.cpu_load[2].min
2.36 ± 61% +210.4% 7.33 ± 4% sched_debug.cpu.cpu_load[2].stddev
3.68 ± 3% -81.6% 0.68 ± 2% sched_debug.cpu.cpu_load[3].avg
23.56 ± 60% +281.4% 89.84 ± 7% sched_debug.cpu.cpu_load[3].max
1.78 ± 28% -100.0% 0.00 sched_debug.cpu.cpu_load[3].min
1.88 ± 45% +255.1% 6.69 ± 6% sched_debug.cpu.cpu_load[3].stddev
3.69 -83.7% 0.60 ± 13% sched_debug.cpu.cpu_load[4].avg
27.94 ± 6% +196.8% 82.94 ± 17% sched_debug.cpu.cpu_load[4].max
1.78 ± 28% -100.0% 0.00 sched_debug.cpu.cpu_load[4].min
2.10 ± 5% +190.9% 6.10 ± 16% sched_debug.cpu.cpu_load[4].stddev
2565 ± 2% -92.9% 182.20 ± 5% sched_debug.cpu.curr->pid.avg
7577 +10.5% 8374 ± 5% sched_debug.cpu.curr->pid.max
1092 ± 20% -100.0% 0.00 sched_debug.cpu.curr->pid.min
585.80 ± 13% +52.1% 891.25 ± 4% sched_debug.cpu.curr->pid.stddev
13838 ± 3% +669.4% 106475 ± 8% sched_debug.cpu.load.max
2138 ± 20% -100.0% 0.00 sched_debug.cpu.load.min
1390 ± 12% +1378.0% 20550 ± 6% sched_debug.cpu.load.stddev
0.00 ± 22% -84.8% 0.00 ± 2% sched_debug.cpu.next_balance.stddev
159307 ± 2% +12.6% 179321 ± 8% sched_debug.cpu.nr_load_updates.avg
166946 ± 3% +12.3% 187514 ± 8% sched_debug.cpu.nr_load_updates.max
0.82 -93.8% 0.05 ± 5% sched_debug.cpu.nr_running.avg
1.72 ± 4% -41.9% 1.00 sched_debug.cpu.nr_running.max
0.39 ± 20% -100.0% 0.00 sched_debug.cpu.nr_running.min
0.15 ± 10% +44.1% 0.22 sched_debug.cpu.nr_running.stddev
23914 ± 3% +485.7% 140058 ± 13% sched_debug.cpu.nr_switches.avg
127000 ± 25% +41.2% 179360 ± 10% sched_debug.cpu.nr_switches.max
7479 ± 3% +991.4% 81627 ± 19% sched_debug.cpu.nr_switches.min
16386 ± 13% +15.9% 18990 sched_debug.cpu.nr_switches.stddev
0.03 ± 38% +2491.5% 0.82 sched_debug.cpu.nr_uninterruptible.avg
13.15 ± 9% +34.7% 17.72 ± 3% sched_debug.cpu.nr_uninterruptible.stddev
217219 ± 2% +9.8% 238597 ± 6% sched_debug.cpu_clk
212462 ± 2% +10.1% 233840 ± 6% sched_debug.ktime
1.26 -100.0% 0.00 sched_debug.rt_rq:/.rt_runtime.stddev
221530 +9.6% 242891 ± 5% sched_debug.sched_clk
91.35 -89.2 2.19 ± 4% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.mmap64
91.34 -89.2 2.19 ± 4% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.mmap64
91.43 -89.1 2.29 ± 3% perf-profile.calltrace.cycles-pp.mmap64
91.82 -88.9 2.89 ± 36% perf-profile.calltrace.cycles-pp.osq_lock.__rwsem_down_write_failed_common.down_write.vma_link.mmap_region
90.74 -88.7 2.08 ± 4% perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.mmap64
90.77 -88.6 2.14 ± 4% perf-profile.calltrace.cycles-pp.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.mmap64
92.35 -86.7 5.62 ± 17% perf-profile.calltrace.cycles-pp.__rwsem_down_write_failed_common.down_write.vma_link.mmap_region.do_mmap
92.40 -86.6 5.84 ± 17% perf-profile.calltrace.cycles-pp.down_write.vma_link.mmap_region.do_mmap.vm_mmap_pgoff
92.91 -83.8 9.13 ± 11% perf-profile.calltrace.cycles-pp.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
93.05 -83.0 10.09 ± 10% perf-profile.calltrace.cycles-pp.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
93.17 -82.4 10.77 ± 10% perf-profile.calltrace.cycles-pp.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.5 0.52 ± 3% perf-profile.calltrace.cycles-pp.schedule.__rwsem_down_write_failed_common.down_write.vma_link.mmap_region
0.00 +0.5 0.55 ± 6% perf-profile.calltrace.cycles-pp.__next_timer_interrupt.get_next_timer_interrupt.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select
0.00 +0.6 0.56 ± 3% perf-profile.calltrace.cycles-pp.__sched_text_start.schedule_idle.do_idle.cpu_startup_entry.start_secondary
0.00 +0.6 0.58 ± 3% perf-profile.calltrace.cycles-pp.schedule_idle.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
0.00 +0.6 0.62 ± 4% perf-profile.calltrace.cycles-pp.try_to_wake_up.wake_up_q.rwsem_wake.vma_link.mmap_region
0.00 +0.6 0.62 ± 6% perf-profile.calltrace.cycles-pp.rebalance_domains.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
0.00 +0.6 0.63 ± 8% perf-profile.calltrace.cycles-pp.swapgs_restore_regs_and_return_to_usermode.do_access
0.00 +0.6 0.65 ± 3% perf-profile.calltrace.cycles-pp.wake_up_q.rwsem_wake.vma_link.mmap_region.do_mmap
0.00 +0.7 0.70 ± 2% perf-profile.calltrace.cycles-pp.sched_ttwu_pending.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
0.00 +0.8 0.78 ± 6% perf-profile.calltrace.cycles-pp.clockevents_program_event.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
0.00 +0.8 0.79 ± 12% perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.__rwsem_down_write_failed_common.down_write.vma_link.mmap_region
0.00 +0.8 0.83 ± 4% perf-profile.calltrace.cycles-pp.tick_irq_enter.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
0.00 +0.9 0.89 ± 3% perf-profile.calltrace.cycles-pp.vma_interval_tree_insert.vma_link.mmap_region.do_mmap.vm_mmap_pgoff
0.00 +0.9 0.91 ± 5% perf-profile.calltrace.cycles-pp.get_next_timer_interrupt.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle
0.00 +0.9 0.92 ± 3% perf-profile.calltrace.cycles-pp.rwsem_wake.vma_link.mmap_region.do_mmap.vm_mmap_pgoff
0.00 +1.0 0.98 perf-profile.calltrace.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
0.00 +1.0 1.00 ± 4% perf-profile.calltrace.cycles-pp.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
0.51 +1.5 1.98 ± 5% perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.do_access
0.00 +1.6 1.61 ± 3% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
0.00 +1.7 1.65 ± 5% perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 +1.9 1.87 ± 7% perf-profile.calltrace.cycles-pp.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry
0.00 +1.9 1.89 ± 2% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
0.00 +2.1 2.09 perf-profile.calltrace.cycles-pp.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
0.00 +2.1 2.12 ± 2% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
0.00 +2.2 2.25 ± 6% perf-profile.calltrace.cycles-pp.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry.start_secondary
0.77 +2.3 3.12 ± 5% perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault.do_access
0.00 +2.4 2.39 ± 3% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
0.83 +2.4 3.27 ± 5% perf-profile.calltrace.cycles-pp.do_page_fault.page_fault.do_access
0.00 +3.8 3.82 ± 3% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
3.77 +3.8 7.59 ± 5% perf-profile.calltrace.cycles-pp.do_rw_once
0.00 +4.0 4.01 ± 5% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
1.16 +4.7 5.88 ± 4% perf-profile.calltrace.cycles-pp.page_fault.do_access
0.00 +5.4 5.38 ± 3% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
2.48 ± 46% +6.4 8.89 ± 11% perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.48 ± 46% +6.7 9.16 ± 11% perf-profile.calltrace.cycles-pp.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.50 ± 46% +6.8 9.34 ± 10% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.50 ± 46% +6.9 9.35 ± 10% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
3.09 +7.1 10.16 ± 4% perf-profile.calltrace.cycles-pp.do_access
0.00 +8.9 8.93 perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry
0.00 +10.4 10.41 perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
0.29 ±141% +53.1 53.36 ± 3% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
0.30 ±141% +63.6 63.91 ± 3% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
0.30 ±141% +70.8 71.15 ± 2% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
0.30 ±141% +70.9 71.20 ± 2% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
0.30 ±141% +70.9 71.20 ± 2% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
0.30 ±141% +71.3 71.60 ± 2% perf-profile.calltrace.cycles-pp.secondary_startup_64
91.43 -89.1 2.29 ± 3% perf-profile.children.cycles-pp.mmap64
91.86 -88.8 3.02 ± 29% perf-profile.children.cycles-pp.osq_lock
92.36 -86.7 5.62 ± 17% perf-profile.children.cycles-pp.__rwsem_down_write_failed_common
92.40 -86.6 5.84 ± 16% perf-profile.children.cycles-pp.down_write
92.91 -83.8 9.14 ± 11% perf-profile.children.cycles-pp.vma_link
93.06 -83.0 10.09 ± 10% perf-profile.children.cycles-pp.mmap_region
93.18 -82.4 10.78 ± 10% perf-profile.children.cycles-pp.do_mmap
93.21 -82.3 10.96 ± 10% perf-profile.children.cycles-pp.vm_mmap_pgoff
93.88 -82.2 11.63 ± 8% perf-profile.children.cycles-pp.do_syscall_64
93.88 -82.2 11.64 ± 9% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
93.25 -82.0 11.30 ± 9% perf-profile.children.cycles-pp.ksys_mmap_pgoff
0.14 ± 3% -0.1 0.08 ± 10% perf-profile.children.cycles-pp.task_tick_fair
0.00 +0.1 0.05 perf-profile.children.cycles-pp.pick_next_task_idle
0.00 +0.1 0.05 perf-profile.children.cycles-pp.prepend_path
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.read_counters
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.__run_perf_stat
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.process_interval
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.perf_evsel__read_counter
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.trigger_load_balance
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.cpumask_next_and
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.intel_pmu_disable_all
0.00 +0.1 0.06 ± 8% perf-profile.children.cycles-pp.up_read
0.00 +0.1 0.06 ± 16% perf-profile.children.cycles-pp.ksoftirqd_running
0.00 +0.1 0.06 ± 13% perf-profile.children.cycles-pp.tick_sched_do_timer
0.00 +0.1 0.06 perf-profile.children.cycles-pp.switch_mm
0.00 +0.1 0.06 ± 19% perf-profile.children.cycles-pp.down_read_trylock
0.00 +0.1 0.06 ± 7% perf-profile.children.cycles-pp.__might_sleep
0.00 +0.1 0.06 ± 19% perf-profile.children.cycles-pp.___might_sleep
0.00 +0.1 0.07 ± 7% perf-profile.children.cycles-pp.vfs_read
0.00 +0.1 0.07 ± 7% perf-profile.children.cycles-pp.ksys_read
0.00 +0.1 0.07 ± 7% perf-profile.children.cycles-pp.scheduler_ipi
0.00 +0.1 0.07 ± 11% perf-profile.children.cycles-pp.tick_nohz_idle_exit
0.00 +0.1 0.07 ± 11% perf-profile.children.cycles-pp.hrtimer_forward
0.00 +0.1 0.07 perf-profile.children.cycles-pp.irq_work_needs_cpu
0.00 +0.1 0.07 perf-profile.children.cycles-pp.tick_check_broadcast_expired
0.00 +0.1 0.07 ± 11% perf-profile.children.cycles-pp.__task_rq_lock
0.00 +0.1 0.07 perf-profile.children.cycles-pp.tick_program_event
0.00 +0.1 0.07 ± 28% perf-profile.children.cycles-pp.menu_reflect
0.00 +0.1 0.07 ± 6% perf-profile.children.cycles-pp.cpuidle_not_available
0.00 +0.1 0.08 ± 32% perf-profile.children.cycles-pp.ret_from_fork
0.00 +0.1 0.08 ± 32% perf-profile.children.cycles-pp.kthread
0.00 +0.1 0.08 ± 6% perf-profile.children.cycles-pp.entry_SYSCALL_64
0.00 +0.1 0.08 ± 6% perf-profile.children.cycles-pp.rcu_core
0.00 +0.1 0.08 ± 16% perf-profile.children.cycles-pp.irq_work_tick
0.00 +0.1 0.08 ± 6% perf-profile.children.cycles-pp.__rwsem_mark_wake
0.00 +0.1 0.08 ± 6% perf-profile.children.cycles-pp.reschedule_interrupt
0.00 +0.1 0.08 ± 10% perf-profile.children.cycles-pp.check_preempt_curr
0.00 +0.1 0.08 ± 20% perf-profile.children.cycles-pp.rb_erase
0.00 +0.1 0.08 perf-profile.children.cycles-pp.__switch_to
0.00 +0.1 0.08 ± 28% perf-profile.children.cycles-pp.irq_work_run_list
0.00 +0.1 0.08 ± 11% perf-profile.children.cycles-pp.call_cpuidle
0.00 +0.1 0.08 ± 5% perf-profile.children.cycles-pp.load_new_mm_cr3
0.00 +0.1 0.09 ± 10% perf-profile.children.cycles-pp.osq_unlock
0.00 +0.1 0.09 ± 5% perf-profile.children.cycles-pp.leave_mm
0.00 +0.1 0.09 ± 10% perf-profile.children.cycles-pp.calc_global_load_tick
0.00 +0.1 0.09 ± 5% perf-profile.children.cycles-pp.kmem_cache_alloc_trace
0.00 +0.1 0.10 ± 9% perf-profile.children.cycles-pp.rcu_eqs_exit
0.00 +0.1 0.10 ± 9% perf-profile.children.cycles-pp.rb_insert_color
0.00 +0.1 0.10 ± 16% perf-profile.children.cycles-pp.selinux_mmap_file
0.00 +0.1 0.10 ± 8% perf-profile.children.cycles-pp.update_cfs_group
0.00 +0.1 0.10 ± 8% perf-profile.children.cycles-pp.perf_iterate_sb
0.00 +0.1 0.10 ± 12% perf-profile.children.cycles-pp.rcu_irq_exit
0.00 +0.1 0.10 ± 4% perf-profile.children.cycles-pp.vma_interval_tree_augment_rotate
0.00 +0.1 0.10 ± 4% perf-profile.children.cycles-pp.update_cfs_rq_h_load
0.00 +0.1 0.10 ± 4% perf-profile.children.cycles-pp.interrupt_entry
0.00 +0.1 0.10 ± 19% perf-profile.children.cycles-pp.idle_cpu
0.00 +0.1 0.11 ± 43% perf-profile.children.cycles-pp.poll_idle
0.00 +0.1 0.11 ± 32% perf-profile.children.cycles-pp.new_slab
0.00 +0.1 0.11 ± 7% perf-profile.children.cycles-pp.ttwu_do_wakeup
0.00 +0.1 0.11 ± 7% perf-profile.children.cycles-pp.rcu_eqs_enter
0.00 +0.1 0.11 ± 11% perf-profile.children.cycles-pp.security_mmap_file
0.00 +0.1 0.12 ± 4% perf-profile.children.cycles-pp.set_next_entity
0.00 +0.1 0.12 ± 29% perf-profile.children.cycles-pp.__slab_alloc
0.00 +0.1 0.12 ± 29% perf-profile.children.cycles-pp.___slab_alloc
0.00 +0.1 0.12 ± 11% perf-profile.children.cycles-pp.tick_nohz_tick_stopped
0.00 +0.1 0.12 ± 10% perf-profile.children.cycles-pp.__intel_pmu_enable_all
0.00 +0.1 0.13 ± 12% perf-profile.children.cycles-pp.sync_regs
0.00 +0.1 0.13 ± 21% perf-profile.children.cycles-pp.account_process_tick
0.00 +0.1 0.14 ± 9% perf-profile.children.cycles-pp.tsc_verify_tsc_adjust
0.00 +0.1 0.14 ± 9% perf-profile.children.cycles-pp.rcu_dynticks_eqs_exit
0.00 +0.1 0.14 ± 15% perf-profile.children.cycles-pp.pm_qos_read_value
0.00 +0.1 0.14 ± 5% perf-profile.children.cycles-pp.hrtimer_get_next_event
0.00 +0.1 0.14 ± 5% perf-profile.children.cycles-pp.vmacache_find
0.00 +0.1 0.14 perf-profile.children.cycles-pp.d_path
0.00 +0.1 0.14 ± 8% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.00 +0.1 0.14 ± 8% perf-profile.children.cycles-pp.cpu_load_update_active
0.00 +0.1 0.14 ± 3% perf-profile.children.cycles-pp.rb_next
0.00 +0.1 0.14 ± 3% perf-profile.children.cycles-pp.cpu_load_update
0.00 +0.1 0.15 ± 21% perf-profile.children.cycles-pp.kmem_cache_alloc
0.00 +0.1 0.15 ± 5% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.00 +0.2 0.15 ± 5% perf-profile.children.cycles-pp.arch_cpu_idle_enter
0.00 +0.2 0.15 ± 14% perf-profile.children.cycles-pp.rcu_irq_enter
0.05 +0.2 0.20 ± 2% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
0.00 +0.2 0.15 ± 6% perf-profile.children.cycles-pp.wake_q_add
0.00 +0.2 0.16 ± 10% perf-profile.children.cycles-pp.rcu_idle_exit
0.00 +0.2 0.16 ± 13% perf-profile.children.cycles-pp.pm_qos_request
0.00 +0.2 0.16 ± 8% perf-profile.children.cycles-pp.run_local_timers
0.00 +0.2 0.16 ± 5% perf-profile.children.cycles-pp.tick_check_oneshot_broadcast_this_cpu
0.07 ± 6% +0.2 0.24 ± 3% perf-profile.children.cycles-pp.__vma_link_rb
0.00 +0.2 0.17 ± 18% perf-profile.children.cycles-pp.vm_area_alloc
0.00 +0.2 0.17 ± 10% perf-profile.children.cycles-pp.__hrtimer_get_next_event
0.07 +0.2 0.24 ± 3% perf-profile.children.cycles-pp.do_anonymous_page
0.00 +0.2 0.17 ± 9% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.00 +0.2 0.17 perf-profile.children.cycles-pp.__update_load_avg_se
0.00 +0.2 0.17 ± 8% perf-profile.children.cycles-pp._raw_spin_trylock
0.00 +0.2 0.18 ± 9% perf-profile.children.cycles-pp.update_curr
0.00 +0.2 0.18 ± 7% perf-profile.children.cycles-pp.run_posix_cpu_timers
0.00 +0.2 0.19 ± 11% perf-profile.children.cycles-pp.native_apic_mem_write
0.00 +0.2 0.19 ± 4% perf-profile.children.cycles-pp.vma_compute_subtree_gap
0.00 +0.2 0.19 ± 4% perf-profile.children.cycles-pp.perf_event_task_tick
0.10 ± 4% +0.2 0.31 ± 6% perf-profile.children.cycles-pp.__perf_sw_event
0.00 +0.2 0.22 ± 9% perf-profile.children.cycles-pp.tick_nohz_irq_exit
0.00 +0.2 0.23 ± 9% perf-profile.children.cycles-pp.timekeeping_max_deferment
0.00 +0.2 0.24 perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.00 +0.2 0.24 ± 5% perf-profile.children.cycles-pp.__fget
0.08 +0.2 0.32 ± 4% perf-profile.children.cycles-pp.___perf_sw_event
0.00 +0.2 0.24 ± 6% perf-profile.children.cycles-pp.select_task_rq_fair
0.00 +0.2 0.24 ± 7% perf-profile.children.cycles-pp.switch_mm_irqs_off
0.00 +0.2 0.25 ± 8% perf-profile.children.cycles-pp.timerqueue_del
0.00 +0.2 0.25 ± 6% perf-profile.children.cycles-pp.update_sd_lb_stats
0.00 +0.3 0.25 ± 4% perf-profile.children.cycles-pp.dequeue_entity
0.00 +0.3 0.25 ± 4% perf-profile.children.cycles-pp.nr_iowait_cpu
0.00 +0.3 0.26 ± 3% perf-profile.children.cycles-pp.update_rq_clock
0.00 +0.3 0.26 perf-profile.children.cycles-pp.pick_next_task_fair
0.00 +0.3 0.27 ± 4% perf-profile.children.cycles-pp.run_timer_softirq
0.00 +0.3 0.28 ± 12% perf-profile.children.cycles-pp.__rb_insert_augmented
0.00 +0.3 0.28 ± 6% perf-profile.children.cycles-pp.find_busiest_group
0.00 +0.3 0.28 ± 70% perf-profile.children.cycles-pp.prepare_exit_to_usermode
0.00 +0.3 0.28 ± 7% perf-profile.children.cycles-pp.timerqueue_add
0.00 +0.3 0.28 ± 8% perf-profile.children.cycles-pp.native_sched_clock
0.00 +0.3 0.29 ± 8% perf-profile.children.cycles-pp.sched_clock
0.00 +0.3 0.29 ± 4% perf-profile.children.cycles-pp.dequeue_task_fair
0.00 +0.3 0.29 ± 6% perf-profile.children.cycles-pp.hrtimer_next_event_without
0.00 +0.3 0.30 ± 7% perf-profile.children.cycles-pp.__hrtimer_next_event_base
0.00 +0.3 0.30 ± 4% perf-profile.children.cycles-pp.update_ts_time_stats
0.00 +0.3 0.31 ± 6% perf-profile.children.cycles-pp.enqueue_hrtimer
0.07 +0.3 0.38 ± 8% perf-profile.children.cycles-pp.up_write
0.07 +0.3 0.39 ± 4% perf-profile.children.cycles-pp.find_vma
0.00 +0.4 0.36 ± 11% perf-profile.children.cycles-pp.__remove_hrtimer
0.07 ± 7% +0.4 0.43 ± 2% perf-profile.children.cycles-pp.perf_event_mmap
0.00 +0.4 0.38 perf-profile.children.cycles-pp.lapic_next_deadline
0.00 +0.4 0.39 ± 5% perf-profile.children.cycles-pp.find_next_bit
0.00 +0.4 0.39 ± 6% perf-profile.children.cycles-pp.read_tsc
0.09 ± 5% +0.4 0.48 ± 6% perf-profile.children.cycles-pp.unmapped_area_topdown
0.00 +0.4 0.40 ± 3% perf-profile.children.cycles-pp.start_kernel
0.00 +0.4 0.40 ± 10% perf-profile.children.cycles-pp.cpuidle_governor_latency_req
0.00 +0.4 0.40 ± 3% perf-profile.children.cycles-pp.rcu_sched_clock_irq
0.00 +0.4 0.41 ± 5% perf-profile.children.cycles-pp.sched_clock_cpu
0.00 +0.4 0.41 ± 7% perf-profile.children.cycles-pp.load_balance
0.00 +0.4 0.43 ± 2% perf-profile.children.cycles-pp._raw_spin_lock_irq
0.10 ± 4% +0.4 0.53 ± 6% perf-profile.children.cycles-pp.arch_get_unmapped_area_topdown
0.00 +0.4 0.44 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.02 ±141% +0.5 0.47 ± 3% perf-profile.children.cycles-pp._raw_spin_lock
0.00 +0.5 0.45 ± 2% perf-profile.children.cycles-pp.native_write_msr
0.00 +0.5 0.47 ± 2% perf-profile.children.cycles-pp.update_load_avg
0.00 +0.5 0.49 perf-profile.children.cycles-pp.enqueue_entity
0.11 ± 4% +0.5 0.59 ± 5% perf-profile.children.cycles-pp.get_unmapped_area
0.05 ± 8% +0.5 0.57 ± 5% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.00 +0.5 0.52 ± 6% perf-profile.children.cycles-pp.update_blocked_averages
0.00 +0.5 0.52 ± 5% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.00 +0.5 0.54 ± 5% perf-profile.children.cycles-pp.run_rebalance_domains
0.00 +0.6 0.56 perf-profile.children.cycles-pp.enqueue_task_fair
0.40 +0.6 0.97 ± 11% perf-profile.children.cycles-pp.rwsem_spin_on_owner
0.00 +0.6 0.58 ± 3% perf-profile.children.cycles-pp.schedule_idle
0.00 +0.6 0.58 perf-profile.children.cycles-pp.ttwu_do_activate
0.17 +0.6 0.76 ± 21% perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.00 +0.6 0.60 ± 6% perf-profile.children.cycles-pp.__next_timer_interrupt
0.00 +0.6 0.63 ± 7% perf-profile.children.cycles-pp.rebalance_domains
0.00 +0.6 0.65 ± 2% perf-profile.children.cycles-pp.schedule
0.09 ± 14% +0.7 0.80 ± 6% perf-profile.children.cycles-pp.clockevents_program_event
0.00 +0.8 0.76 ± 2% perf-profile.children.cycles-pp.sched_ttwu_pending
0.02 ±141% +0.8 0.86 ± 3% perf-profile.children.cycles-pp.try_to_wake_up
0.00 +0.8 0.85 ± 4% perf-profile.children.cycles-pp.tick_irq_enter
0.02 ±141% +0.9 0.89 ± 3% perf-profile.children.cycles-pp.wake_up_q
0.21 ± 2% +0.9 1.11 ± 4% perf-profile.children.cycles-pp.vma_interval_tree_insert
0.00 +0.9 0.91 ± 5% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.18 ± 2% +0.9 1.10 ± 2% perf-profile.children.cycles-pp.scheduler_tick
0.21 +0.9 1.15 ± 4% perf-profile.children.cycles-pp.native_irq_return_iret
0.07 ± 18% +1.0 1.08 ± 9% perf-profile.children.cycles-pp.ktime_get
0.00 +1.0 1.02 ± 4% perf-profile.children.cycles-pp.irq_enter
0.07 ± 6% +1.1 1.13 ± 2% perf-profile.children.cycles-pp.rwsem_wake
0.00 +1.2 1.20 ± 2% perf-profile.children.cycles-pp.__sched_text_start
0.45 ± 2% +1.2 1.69 ± 5% perf-profile.children.cycles-pp.__handle_mm_fault
0.52 +1.5 2.01 ± 5% perf-profile.children.cycles-pp.handle_mm_fault
0.00 +1.6 1.63 ± 3% perf-profile.children.cycles-pp.__softirqentry_text_start
0.24 +1.8 2.05 ± 2% perf-profile.children.cycles-pp.update_process_times
0.00 +1.9 1.91 ± 7% perf-profile.children.cycles-pp.tick_nohz_next_event
0.24 ± 3% +2.0 2.28 ± 3% perf-profile.children.cycles-pp.tick_sched_handle
0.00 +2.1 2.12 perf-profile.children.cycles-pp.irq_exit
0.00 +2.3 2.27 ± 6% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.26 +2.3 2.56 ± 3% perf-profile.children.cycles-pp.tick_sched_timer
0.78 +2.4 3.15 ± 5% perf-profile.children.cycles-pp.__do_page_fault
0.84 +2.5 3.30 ± 5% perf-profile.children.cycles-pp.do_page_fault
1.21 +3.6 4.84 ± 4% perf-profile.children.cycles-pp.page_fault
0.32 +3.7 4.04 ± 4% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.00 +4.1 4.06 ± 5% perf-profile.children.cycles-pp.menu_select
3.25 +4.3 7.57 ± 4% perf-profile.children.cycles-pp.do_rw_once
0.48 ± 2% +5.2 5.64 ± 3% perf-profile.children.cycles-pp.hrtimer_interrupt
3.97 +6.9 10.85 ± 4% perf-profile.children.cycles-pp.do_access
0.52 +8.7 9.23 perf-profile.children.cycles-pp.smp_apic_timer_interrupt
0.57 +9.5 10.09 perf-profile.children.cycles-pp.apic_timer_interrupt
0.58 ± 37% +53.1 53.67 ± 3% perf-profile.children.cycles-pp.intel_idle
0.60 ± 36% +64.4 64.97 ± 3% perf-profile.children.cycles-pp.cpuidle_enter_state
0.61 ± 35% +70.6 71.20 ± 2% perf-profile.children.cycles-pp.start_secondary
0.61 ± 35% +71.0 71.60 ± 2% perf-profile.children.cycles-pp.secondary_startup_64
0.61 ± 35% +71.0 71.60 ± 2% perf-profile.children.cycles-pp.cpu_startup_entry
0.61 ± 35% +71.0 71.62 ± 2% perf-profile.children.cycles-pp.do_idle

91.34 -88.4 2.98 ± 29% perf-profile.self.cycles-pp.osq_lock
0.02 ±141% +0.1 0.07 ± 14% perf-profile.self.cycles-pp.__vma_link_rb
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.up_read
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.ksys_mmap_pgoff
0.00 +0.1 0.06 ± 8% perf-profile.self.cycles-pp.update_process_times
0.00 +0.1 0.06 ± 8% perf-profile.self.cycles-pp.enqueue_entity
0.00 +0.1 0.06 ± 8% perf-profile.self.cycles-pp.irq_work_needs_cpu
0.00 +0.1 0.06 ± 16% perf-profile.self.cycles-pp.ksoftirqd_running
0.00 +0.1 0.06 ± 13% perf-profile.self.cycles-pp.__perf_sw_event
0.00 +0.1 0.06 ± 13% perf-profile.self.cycles-pp.rcu_idle_exit
0.00 +0.1 0.06 perf-profile.self.cycles-pp.page_fault
0.00 +0.1 0.06 perf-profile.self.cycles-pp.__might_sleep
0.00 +0.1 0.06 ± 13% perf-profile.self.cycles-pp.___might_sleep
0.00 +0.1 0.06 ± 19% perf-profile.self.cycles-pp.down_read_trylock
0.00 +0.1 0.06 ± 7% perf-profile.self.cycles-pp.irq_exit
0.00 +0.1 0.06 ± 14% perf-profile.self.cycles-pp.check_preempt_curr
0.00 +0.1 0.06 ± 7% perf-profile.self.cycles-pp.rcu_irq_enter
0.00 +0.1 0.07 ± 7% perf-profile.self.cycles-pp.tick_irq_enter
0.00 +0.1 0.07 ± 7% perf-profile.self.cycles-pp.perf_iterate_sb
0.00 +0.1 0.07 ± 7% perf-profile.self.cycles-pp.tick_check_broadcast_expired
0.00 +0.1 0.07 ± 14% perf-profile.self.cycles-pp.perf_event_mmap
0.00 +0.1 0.07 ± 7% perf-profile.self.cycles-pp.tick_program_event
0.00 +0.1 0.07 ± 11% perf-profile.self.cycles-pp.pick_next_task_fair
0.00 +0.1 0.07 ± 11% perf-profile.self.cycles-pp.hrtimer_forward
0.00 +0.1 0.07 perf-profile.self.cycles-pp.rebalance_domains
0.00 +0.1 0.07 perf-profile.self.cycles-pp.cpuidle_not_available
0.00 +0.1 0.07 ± 17% perf-profile.self.cycles-pp.timerqueue_del
0.00 +0.1 0.07 ± 6% perf-profile.self.cycles-pp.enqueue_task_fair
0.00 +0.1 0.07 ± 6% perf-profile.self.cycles-pp.tick_nohz_tick_stopped
0.00 +0.1 0.07 ± 6% perf-profile.self.cycles-pp.entry_SYSCALL_64
0.00 +0.1 0.07 ± 12% perf-profile.self.cycles-pp.irq_work_tick
0.00 +0.1 0.07 ± 6% perf-profile.self.cycles-pp.leave_mm
0.00 +0.1 0.07 ± 12% perf-profile.self.cycles-pp.cpuidle_governor_latency_req
0.00 +0.1 0.08 ± 6% perf-profile.self.cycles-pp.__rwsem_mark_wake
0.00 +0.1 0.08 ± 6% perf-profile.self.cycles-pp.update_ts_time_stats
0.00 +0.1 0.08 ± 22% perf-profile.self.cycles-pp.rb_erase
0.00 +0.1 0.08 ± 10% perf-profile.self.cycles-pp.call_cpuidle
0.00 +0.1 0.08 perf-profile.self.cycles-pp.__switch_to
0.00 +0.1 0.08 ± 5% perf-profile.self.cycles-pp.do_mmap
0.00 +0.1 0.08 ± 11% perf-profile.self.cycles-pp.clockevents_program_event
0.00 +0.1 0.08 ± 11% perf-profile.self.cycles-pp.calc_global_load_tick
0.00 +0.1 0.08 ± 5% perf-profile.self.cycles-pp.load_new_mm_cr3
0.00 +0.1 0.09 ± 10% perf-profile.self.cycles-pp.osq_unlock
0.00 +0.1 0.09 ± 5% perf-profile.self.cycles-pp.lapic_next_deadline
0.00 +0.1 0.09 ± 14% perf-profile.self.cycles-pp.load_balance
0.00 +0.1 0.09 ± 9% perf-profile.self.cycles-pp.tsc_verify_tsc_adjust
0.00 +0.1 0.09 ± 9% perf-profile.self.cycles-pp.tick_sched_timer
0.00 +0.1 0.09 perf-profile.self.cycles-pp.d_path
0.00 +0.1 0.09 ± 5% perf-profile.self.cycles-pp.rb_insert_color
0.00 +0.1 0.10 ± 4% perf-profile.self.cycles-pp.interrupt_entry
0.00 +0.1 0.10 ± 17% perf-profile.self.cycles-pp.idle_cpu
0.00 +0.1 0.10 ± 43% perf-profile.self.cycles-pp.poll_idle
0.00 +0.1 0.10 ± 35% perf-profile.self.cycles-pp.new_slab
0.00 +0.1 0.10 ± 14% perf-profile.self.cycles-pp.rcu_irq_exit
0.00 +0.1 0.10 ± 8% perf-profile.self.cycles-pp.do_anonymous_page
0.00 +0.1 0.10 ± 8% perf-profile.self.cycles-pp.update_cfs_group
0.00 +0.1 0.10 ± 12% perf-profile.self.cycles-pp.__hrtimer_get_next_event
0.00 +0.1 0.10 ± 4% perf-profile.self.cycles-pp.vma_interval_tree_augment_rotate
0.00 +0.1 0.10 ± 4% perf-profile.self.cycles-pp.update_cfs_rq_h_load
0.00 +0.1 0.10 ± 4% perf-profile.self.cycles-pp.sched_clock_cpu
0.00 +0.1 0.11 ± 8% perf-profile.self.cycles-pp.update_curr
0.00 +0.1 0.11 ± 12% perf-profile.self.cycles-pp.get_next_timer_interrupt
0.00 +0.1 0.11 ± 7% perf-profile.self.cycles-pp.rcu_eqs_enter
0.00 +0.1 0.11 ± 8% perf-profile.self.cycles-pp.run_local_timers
0.00 +0.1 0.11 ± 11% perf-profile.self.cycles-pp.__remove_hrtimer
0.00 +0.1 0.12 ± 8% perf-profile.self.cycles-pp.__softirqentry_text_start
0.00 +0.1 0.12 ± 11% perf-profile.self.cycles-pp.select_task_rq_fair
0.00 +0.1 0.12 ± 11% perf-profile.self.cycles-pp.sync_regs
0.00 +0.1 0.12 ± 10% perf-profile.self.cycles-pp.cpu_load_update_active
0.00 +0.1 0.12 ± 26% perf-profile.self.cycles-pp.apic_timer_interrupt
0.00 +0.1 0.13 ± 3% perf-profile.self.cycles-pp.rb_next
0.00 +0.1 0.13 ± 21% perf-profile.self.cycles-pp.account_process_tick
0.00 +0.1 0.14 ± 6% perf-profile.self.cycles-pp.vmacache_find
0.00 +0.1 0.14 ± 3% perf-profile.self.cycles-pp.scheduler_tick
0.00 +0.1 0.14 ± 9% perf-profile.self.cycles-pp.rcu_dynticks_eqs_exit
0.00 +0.1 0.14 ± 15% perf-profile.self.cycles-pp.pm_qos_read_value
0.00 +0.1 0.14 perf-profile.self.cycles-pp.perf_mux_hrtimer_handler
0.00 +0.1 0.14 perf-profile.self.cycles-pp.ktime_get_update_offsets_now
0.00 +0.1 0.14 ± 8% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.00 +0.1 0.14 ± 3% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.00 +0.1 0.14 ± 3% perf-profile.self.cycles-pp.cpu_load_update
0.00 +0.2 0.15 ± 10% perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
0.00 +0.2 0.15 ± 10% perf-profile.self.cycles-pp.pm_qos_request
0.00 +0.2 0.15 ± 6% perf-profile.self.cycles-pp.wake_q_add
0.00 +0.2 0.16 ± 5% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.00 +0.2 0.16 ± 7% perf-profile.self.cycles-pp.switch_mm_irqs_off
0.00 +0.2 0.16 ± 5% perf-profile.self.cycles-pp.tick_check_oneshot_broadcast_this_cpu
0.00 +0.2 0.17 ± 7% perf-profile.self.cycles-pp.__hrtimer_run_queues
0.00 +0.2 0.17 ± 2% perf-profile.self.cycles-pp.update_sd_lb_stats
0.00 +0.2 0.17 perf-profile.self.cycles-pp.__update_load_avg_se
0.00 +0.2 0.17 ± 8% perf-profile.self.cycles-pp._raw_spin_trylock
0.00 +0.2 0.18 ± 7% perf-profile.self.cycles-pp.run_posix_cpu_timers
0.00 +0.2 0.18 ± 11% perf-profile.self.cycles-pp.native_apic_mem_write
0.00 +0.2 0.18 ± 14% perf-profile.self.cycles-pp.smp_apic_timer_interrupt
0.00 +0.2 0.18 ± 9% perf-profile.self.cycles-pp.__sched_text_start
0.00 +0.2 0.18 ± 4% perf-profile.self.cycles-pp.vma_compute_subtree_gap
0.00 +0.2 0.18 perf-profile.self.cycles-pp.try_to_wake_up
0.00 +0.2 0.18 ± 16% perf-profile.self.cycles-pp.hrtimer_interrupt
0.00 +0.2 0.18 ± 6% perf-profile.self.cycles-pp.timerqueue_add
0.00 +0.2 0.19 perf-profile.self.cycles-pp.update_load_avg
0.00 +0.2 0.19 ± 4% perf-profile.self.cycles-pp.perf_event_task_tick
0.07 +0.2 0.27 ± 3% perf-profile.self.cycles-pp.___perf_sw_event
0.00 +0.2 0.21 ± 4% perf-profile.self.cycles-pp.down_write
0.00 +0.2 0.22 ± 9% perf-profile.self.cycles-pp.timekeeping_max_deferment
0.00 +0.2 0.22 ± 2% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.00 +0.2 0.22 ± 2% perf-profile.self.cycles-pp.update_rq_clock
0.00 +0.2 0.22 ± 5% perf-profile.self.cycles-pp.find_vma
0.00 +0.2 0.22 ± 4% perf-profile.self.cycles-pp.run_timer_softirq
0.06 +0.2 0.28 perf-profile.self.cycles-pp.handle_mm_fault
0.00 +0.2 0.24 ± 5% perf-profile.self.cycles-pp.__fget
0.00 +0.3 0.25 ± 11% perf-profile.self.cycles-pp.__rb_insert_augmented
0.00 +0.3 0.25 ± 4% perf-profile.self.cycles-pp.nr_iowait_cpu
0.00 +0.3 0.26 ± 8% perf-profile.self.cycles-pp.native_sched_clock
0.00 +0.3 0.26 ± 6% perf-profile.self.cycles-pp.__hrtimer_next_event_base
0.00 +0.3 0.27 ± 9% perf-profile.self.cycles-pp.__next_timer_interrupt
0.00 +0.3 0.28 ± 6% perf-profile.self.cycles-pp.update_blocked_averages
0.09 ± 5% +0.3 0.37 ± 5% perf-profile.self.cycles-pp.__do_page_fault
0.00 +0.3 0.28 ± 2% perf-profile.self.cycles-pp.mmap_region
0.07 +0.3 0.38 ± 7% perf-profile.self.cycles-pp.up_write
0.00 +0.3 0.34 ± 5% perf-profile.self.cycles-pp.find_next_bit
0.13 ± 3% +0.3 0.48 ± 9% perf-profile.self.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.00 +0.4 0.36 ± 7% perf-profile.self.cycles-pp.do_idle
0.00 +0.4 0.36 ± 2% perf-profile.self.cycles-pp.rcu_sched_clock_irq
0.00 +0.4 0.37 ± 6% perf-profile.self.cycles-pp.read_tsc
0.09 ± 5% +0.4 0.48 ± 6% perf-profile.self.cycles-pp.unmapped_area_topdown
0.06 ± 7% +0.4 0.46 ± 3% perf-profile.self.cycles-pp.__rwsem_down_write_failed_common
0.00 +0.4 0.40 ± 5% perf-profile.self.cycles-pp._raw_spin_lock
0.02 ±141% +0.4 0.43 ± 3% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.00 +0.4 0.44 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.00 +0.5 0.45 ± 3% perf-profile.self.cycles-pp.native_write_msr
0.39 +0.6 0.94 ± 10% perf-profile.self.cycles-pp.rwsem_spin_on_owner
0.04 ± 71% +0.7 0.73 ± 9% perf-profile.self.cycles-pp.ktime_get
0.00 +0.7 0.69 ± 10% perf-profile.self.cycles-pp.tick_nohz_next_event
0.21 ± 2% +0.9 1.10 ± 4% perf-profile.self.cycles-pp.vma_interval_tree_insert
0.21 ± 2% +0.9 1.15 ± 4% perf-profile.self.cycles-pp.native_irq_return_iret
0.37 +1.0 1.38 ± 6% perf-profile.self.cycles-pp.__handle_mm_fault
0.00 +1.2 1.17 ± 3% perf-profile.self.cycles-pp.cpuidle_enter_state
0.00 +1.3 1.33 ± 6% perf-profile.self.cycles-pp.menu_select
1.97 +2.2 4.17 ± 5% perf-profile.self.cycles-pp.do_access
2.06 +4.1 6.15 ± 6% perf-profile.self.cycles-pp.do_rw_once
0.58 ± 37% +52.9 53.53 ± 3% perf-profile.self.cycles-pp.intel_idle

8294 ± 3% +202.1% 25060 ± 9% softirqs.CPU0.RCU
9423 ± 21% +456.6% 52453 ± 3% softirqs.CPU0.SCHED
121835 ± 7% +22.3% 149056 ± 3% softirqs.CPU0.TIMER
10002 ± 8% +218.9% 31899 ± 11% softirqs.CPU1.RCU
4444 ± 5% +1001.9% 48977 softirqs.CPU1.SCHED
116482 ± 2% +28.8% 150085 ± 2% softirqs.CPU1.TIMER
8218 ± 3% +223.8% 26614 ± 10% softirqs.CPU10.RCU
4390 ± 18% +1010.0% 48731 ± 2% softirqs.CPU10.SCHED
116884 ± 4% +28.7% 150476 ± 3% softirqs.CPU10.TIMER
8300 ± 9% +242.3% 28413 ± 13% softirqs.CPU100.RCU
5007 ± 29% +874.7% 48809 ± 2% softirqs.CPU100.SCHED
116959 ± 4% +28.3% 150113 ± 3% softirqs.CPU100.TIMER
8553 ± 14% +261.9% 30952 ± 10% softirqs.CPU101.RCU
4535 ± 19% +1001.9% 49974 softirqs.CPU101.SCHED
119087 ± 3% +28.0% 152452 ± 2% softirqs.CPU101.TIMER
8698 ± 11% +273.7% 32505 ± 13% softirqs.CPU102.RCU
5311 ± 7% +818.3% 48772 ± 2% softirqs.CPU102.SCHED
116760 ± 2% +27.8% 149180 ± 3% softirqs.CPU102.TIMER
8379 ± 4% +295.6% 33145 ± 11% softirqs.CPU103.RCU
5043 ± 7% +857.2% 48270 ± 3% softirqs.CPU103.SCHED
117456 ± 2% +27.1% 149330 ± 3% softirqs.CPU103.TIMER
7791 ± 4% +290.0% 30388 ± 8% softirqs.CPU104.RCU
4812 ± 19% +906.7% 48447 ± 3% softirqs.CPU104.SCHED
126919 ± 4% +17.7% 149383 ± 3% softirqs.CPU104.TIMER
8517 ± 14% +218.1% 27098 ± 11% softirqs.CPU105.RCU
4296 ± 18% +1032.3% 48652 ± 2% softirqs.CPU105.SCHED
116720 ± 4% +28.0% 149374 ± 3% softirqs.CPU105.TIMER
8396 ± 5% +230.3% 27734 ± 10% softirqs.CPU106.RCU
4270 ± 19% +1038.6% 48624 ± 2% softirqs.CPU106.SCHED
115906 ± 4% +29.2% 149718 ± 3% softirqs.CPU106.TIMER
7212 ± 9% +309.9% 29567 ± 11% softirqs.CPU107.RCU
4693 ± 15% +927.3% 48212 ± 3% softirqs.CPU107.SCHED
121207 ± 7% +23.1% 149199 ± 3% softirqs.CPU107.TIMER
9452 ± 27% +263.3% 34342 ± 13% softirqs.CPU108.RCU
5110 ± 24% +841.9% 48138 ± 3% softirqs.CPU108.SCHED
118480 ± 4% +26.0% 149229 ± 3% softirqs.CPU108.TIMER
7753 ± 3% +373.1% 36683 ± 12% softirqs.CPU109.RCU
4355 ± 19% +1005.5% 48151 ± 3% softirqs.CPU109.SCHED
114781 ± 3% +30.0% 149236 ± 3% softirqs.CPU109.TIMER
8858 ± 6% +231.5% 29363 ± 14% softirqs.CPU11.RCU
4695 ± 17% +939.4% 48806 ± 2% softirqs.CPU11.SCHED
7426 ± 5% +353.0% 33637 ± 14% softirqs.CPU110.RCU
4420 ± 16% +998.1% 48542 ± 3% softirqs.CPU110.SCHED
119149 ± 6% +25.6% 149594 ± 3% softirqs.CPU110.TIMER
9014 ± 4% +269.2% 33285 ± 15% softirqs.CPU111.RCU
4359 ± 19% +1009.5% 48366 ± 3% softirqs.CPU111.SCHED
115458 ± 4% +31.0% 151254 ± 2% softirqs.CPU111.TIMER
7105 ± 3% +294.0% 27995 ± 15% softirqs.CPU112.RCU
4364 ± 18% +1011.7% 48519 ± 3% softirqs.CPU112.SCHED
118907 ± 7% +25.5% 149269 ± 3% softirqs.CPU112.TIMER
7238 ± 4% +294.2% 28535 ± 13% softirqs.CPU113.RCU
4408 ± 18% +1002.1% 48586 ± 2% softirqs.CPU113.SCHED
115993 ± 4% +28.5% 149079 ± 3% softirqs.CPU113.TIMER
7505 +176.5% 20750 ± 10% softirqs.CPU114.RCU
4487 ± 20% +975.5% 48261 ± 3% softirqs.CPU114.SCHED
113489 ± 5% +31.3% 149013 ± 3% softirqs.CPU114.TIMER
8298 ± 7% +226.3% 27079 ± 12% softirqs.CPU115.RCU
4717 ± 22% +922.3% 48222 ± 3% softirqs.CPU115.SCHED
114901 ± 7% +29.6% 148952 ± 3% softirqs.CPU115.TIMER
8690 ± 11% +189.9% 25193 ± 13% softirqs.CPU116.RCU
6062 ± 52% +696.5% 48286 ± 3% softirqs.CPU116.SCHED
116705 ± 5% +27.6% 148943 ± 3% softirqs.CPU116.TIMER
7344 ± 3% +308.3% 29985 ± 10% softirqs.CPU117.RCU
4906 ± 18% +891.5% 48641 ± 2% softirqs.CPU117.SCHED
119570 ± 3% +25.1% 149546 ± 3% softirqs.CPU117.TIMER
8308 ± 16% +257.9% 29740 ± 10% softirqs.CPU118.RCU
4247 ± 19% +1049.9% 48839 ± 3% softirqs.CPU118.SCHED
114869 ± 2% +29.8% 149092 ± 3% softirqs.CPU118.TIMER
7412 ± 4% +289.7% 28886 ± 13% softirqs.CPU119.RCU
4478 ± 19% +991.0% 48857 ± 2% softirqs.CPU119.SCHED
115546 ± 3% +28.9% 148969 ± 3% softirqs.CPU119.TIMER
8437 ± 5% +270.3% 31244 ± 14% softirqs.CPU12.RCU
4568 ± 17% +965.9% 48690 ± 3% softirqs.CPU12.SCHED
117286 ± 3% +28.2% 150319 ± 3% softirqs.CPU12.TIMER
7142 +246.2% 24724 ± 9% softirqs.CPU120.RCU
5216 ± 15% +831.3% 48584 ± 3% softirqs.CPU120.SCHED
120516 ± 2% +21.8% 146826 ± 3% softirqs.CPU120.TIMER
7344 +233.5% 24491 ± 12% softirqs.CPU121.RCU
5017 ± 16% +866.6% 48499 ± 3% softirqs.CPU121.SCHED
112664 +30.1% 146525 ± 3% softirqs.CPU121.TIMER
7127 ± 3% +248.7% 24849 ± 4% softirqs.CPU122.RCU
5069 ± 20% +855.7% 48446 ± 3% softirqs.CPU122.SCHED
110777 ± 2% +32.4% 146623 ± 3% softirqs.CPU122.TIMER
6913 ± 3% +251.8% 24317 ± 19% softirqs.CPU123.RCU
4821 ± 19% +905.4% 48472 ± 3% softirqs.CPU123.SCHED
113940 ± 3% +28.8% 146721 ± 3% softirqs.CPU123.TIMER
7413 ± 8% +186.1% 21208 ± 7% softirqs.CPU124.RCU
5084 ± 18% +853.0% 48452 ± 3% softirqs.CPU124.SCHED
112292 ± 3% +30.4% 146437 ± 3% softirqs.CPU124.TIMER
7001 +259.4% 25162 ± 11% softirqs.CPU125.RCU
4660 ± 20% +939.7% 48449 ± 3% softirqs.CPU125.SCHED
110974 ± 3% +32.0% 146502 ± 3% softirqs.CPU125.TIMER
7054 ± 3% +258.5% 25292 ± 11% softirqs.CPU126.RCU
4695 ± 20% +952.8% 49428 ± 5% softirqs.CPU126.SCHED
109427 ± 2% +33.9% 146473 ± 3% softirqs.CPU126.TIMER
7519 ± 13% +236.6% 25309 ± 12% softirqs.CPU127.RCU
4686 ± 17% +942.3% 48843 ± 2% softirqs.CPU127.SCHED
110453 ± 3% +32.6% 146466 ± 3% softirqs.CPU127.TIMER
7537 ± 2% +292.9% 29613 ± 10% softirqs.CPU128.RCU
4642 ± 19% +957.6% 49093 ± 2% softirqs.CPU128.SCHED
113447 ± 6% +29.3% 146683 ± 3% softirqs.CPU128.TIMER
7383 +220.5% 23663 ± 8% softirqs.CPU129.RCU
5140 ± 16% +841.7% 48404 ± 3% softirqs.CPU129.SCHED
112064 ± 4% +30.9% 146727 ± 3% softirqs.CPU129.TIMER
8271 +281.4% 31546 ± 10% softirqs.CPU13.RCU
4319 ± 18% +1025.7% 48619 ± 2% softirqs.CPU13.SCHED
115558 ± 3% +30.1% 150313 ± 3% softirqs.CPU13.TIMER
9284 ± 32% +163.8% 24497 ± 5% softirqs.CPU130.RCU
4846 ± 14% +907.1% 48801 ± 2% softirqs.CPU130.SCHED
110693 ± 3% +32.4% 146571 ± 3% softirqs.CPU130.TIMER
8910 ± 12% +172.2% 24258 ± 4% softirqs.CPU131.RCU
4546 ± 17% +960.5% 48215 ± 2% softirqs.CPU131.SCHED
111708 ± 6% +31.2% 146511 ± 3% softirqs.CPU131.TIMER
7775 ± 5% +269.6% 28736 ± 7% softirqs.CPU132.RCU
4850 ± 17% +891.5% 48088 ± 3% softirqs.CPU132.SCHED
110231 ± 2% +33.1% 146704 ± 3% softirqs.CPU132.TIMER
7503 +325.5% 31927 ± 18% softirqs.CPU133.RCU
4691 ± 21% +935.1% 48565 ± 3% softirqs.CPU133.SCHED
113664 ± 6% +29.3% 146928 ± 4% softirqs.CPU133.TIMER
7432 +325.3% 31610 ± 6% softirqs.CPU134.RCU
4683 ± 21% +949.3% 49144 softirqs.CPU134.SCHED
109879 ± 3% +33.7% 146878 ± 3% softirqs.CPU134.TIMER
7969 ± 10% +316.6% 33202 ± 13% softirqs.CPU135.RCU
4701 ± 18% +926.0% 48241 ± 4% softirqs.CPU135.SCHED
111038 ± 3% +32.5% 147096 ± 3% softirqs.CPU135.TIMER
7934 ± 7% +296.9% 31489 ± 13% softirqs.CPU136.RCU
4820 ± 19% +908.3% 48600 ± 3% softirqs.CPU136.SCHED
120001 +22.5% 147012 ± 3% softirqs.CPU136.TIMER
7547 +335.3% 32854 ± 10% softirqs.CPU137.RCU
4924 ± 18% +885.6% 48532 ± 3% softirqs.CPU137.SCHED
112158 ± 2% +30.8% 146735 ± 3% softirqs.CPU137.TIMER
7392 ± 3% +273.7% 27623 ± 14% softirqs.CPU138.RCU
4585 ± 18% +971.8% 49145 ± 6% softirqs.CPU138.SCHED
109277 ± 2% +34.2% 146650 ± 3% softirqs.CPU138.TIMER
7408 +320.6% 31160 ± 14% softirqs.CPU139.RCU
4870 ± 15% +888.2% 48125 ± 2% softirqs.CPU139.SCHED
112914 ± 2% +30.2% 147043 ± 3% softirqs.CPU139.TIMER
8838 ± 10% +251.3% 31050 ± 15% softirqs.CPU14.RCU
4509 ± 16% +985.0% 48926 ± 2% softirqs.CPU14.SCHED
7416 ± 2% +315.5% 30813 ± 15% softirqs.CPU140.RCU
4885 ± 14% +893.7% 48550 ± 2% softirqs.CPU140.SCHED
110820 ± 2% +32.5% 146828 ± 3% softirqs.CPU140.TIMER
7949 ± 2% +342.8% 35200 ± 11% softirqs.CPU141.RCU
5036 ± 18% +854.7% 48087 ± 3% softirqs.CPU141.SCHED
111494 ± 3% +31.8% 146915 ± 3% softirqs.CPU141.TIMER
7608 ± 7% +380.4% 36554 ± 19% softirqs.CPU142.RCU
5057 ± 14% +850.8% 48088 ± 2% softirqs.CPU142.SCHED
109201 ± 2% +34.6% 147026 ± 3% softirqs.CPU142.TIMER
7944 ± 5% +317.4% 33156 ± 9% softirqs.CPU143.RCU
5449 ± 18% +788.9% 48441 ± 2% softirqs.CPU143.SCHED
111781 +32.7% 148369 ± 5% softirqs.CPU143.TIMER
8688 ± 8% +249.4% 30359 ± 16% softirqs.CPU144.RCU
5771 ± 4% +734.9% 48185 ± 3% softirqs.CPU144.SCHED
114805 ± 4% +24.5% 142965 ± 7% softirqs.CPU144.TIMER
7682 ± 3% +296.6% 30464 ± 9% softirqs.CPU145.RCU
5223 ± 24% +823.5% 48238 ± 3% softirqs.CPU145.SCHED
112929 ± 4% +26.5% 142879 ± 7% softirqs.CPU145.TIMER
8453 ± 16% +275.8% 31769 ± 20% softirqs.CPU146.RCU
5384 ± 12% +794.8% 48175 ± 3% softirqs.CPU146.SCHED
112061 ± 4% +27.6% 143032 ± 7% softirqs.CPU146.TIMER
7282 ± 3% +204.5% 22178 softirqs.CPU147.RCU
5751 ± 16% +735.7% 48065 ± 3% softirqs.CPU147.SCHED
113998 ± 5% +25.2% 142718 ± 7% softirqs.CPU147.TIMER
7991 ± 14% +245.6% 27619 ± 9% softirqs.CPU148.RCU
5049 ± 10% +859.5% 48446 ± 3% softirqs.CPU148.SCHED
112513 ± 2% +27.4% 143287 ± 7% softirqs.CPU148.TIMER
7423 ± 4% +283.5% 28468 ± 13% softirqs.CPU149.RCU
4873 ± 13% +890.7% 48278 ± 3% softirqs.CPU149.SCHED
116811 ± 7% +22.6% 143202 ± 7% softirqs.CPU149.TIMER
8935 ± 12% +244.6% 30789 ± 16% softirqs.CPU15.RCU
4493 ± 18% +984.1% 48718 ± 2% softirqs.CPU15.SCHED
116513 ± 4% +29.4% 150736 ± 3% softirqs.CPU15.TIMER
9754 ± 28% +257.3% 34849 ± 22% softirqs.CPU150.RCU
5436 ± 10% +788.9% 48324 ± 3% softirqs.CPU150.SCHED
113317 ± 3% +26.1% 142932 ± 7% softirqs.CPU150.TIMER
10166 ± 24% +223.4% 32880 ± 17% softirqs.CPU151.RCU
5492 ± 13% +778.0% 48223 ± 3% softirqs.CPU151.SCHED
113078 ± 2% +26.4% 142925 ± 7% softirqs.CPU151.TIMER
7737 ± 5% +311.1% 31811 ± 13% softirqs.CPU152.RCU
5535 ± 13% +771.0% 48208 ± 3% softirqs.CPU152.SCHED
123248 ± 3% +16.0% 142932 ± 7% softirqs.CPU152.TIMER
8155 ± 4% +244.0% 28052 ± 9% softirqs.CPU153.RCU
5506 ± 11% +774.7% 48166 ± 2% softirqs.CPU153.SCHED
113467 +26.1% 143042 ± 7% softirqs.CPU153.TIMER
7994 ± 6% +273.8% 29886 ± 13% softirqs.CPU154.RCU
5483 ± 13% +778.3% 48158 ± 2% softirqs.CPU154.SCHED
113395 +25.9% 142768 ± 7% softirqs.CPU154.TIMER
8536 ± 8% +278.0% 32268 ± 21% softirqs.CPU155.RCU
6281 ± 41% +666.3% 48138 ± 2% softirqs.CPU155.SCHED
116548 ± 3% +22.6% 142853 ± 7% softirqs.CPU155.TIMER
10365 ± 22% +226.3% 33826 ± 16% softirqs.CPU156.RCU
5753 ± 9% +740.6% 48360 ± 3% softirqs.CPU156.SCHED
113785 +25.6% 142951 ± 7% softirqs.CPU156.TIMER
7614 ± 3% +345.8% 33942 ± 16% softirqs.CPU157.RCU
5330 ± 17% +804.2% 48195 ± 3% softirqs.CPU157.SCHED
112868 ± 2% +26.6% 142863 ± 7% softirqs.CPU157.TIMER
7423 ± 2% +320.0% 31179 ± 16% softirqs.CPU158.RCU
4845 ± 22% +899.0% 48400 ± 3% softirqs.CPU158.SCHED
110228 ± 3% +29.8% 143088 ± 7% softirqs.CPU158.TIMER
7477 ± 2% +353.1% 33875 ± 19% softirqs.CPU159.RCU
5177 ± 22% +829.5% 48124 ± 3% softirqs.CPU159.SCHED
112199 ± 3% +27.7% 143229 ± 7% softirqs.CPU159.TIMER
8514 ± 8% +232.1% 28271 ± 4% softirqs.CPU16.RCU
4428 ± 16% +999.9% 48708 ± 2% softirqs.CPU16.SCHED
119651 ± 7% +25.7% 150345 ± 3% softirqs.CPU16.TIMER
7324 ± 2% +262.2% 26525 ± 16% softirqs.CPU160.RCU
5263 ± 11% +815.3% 48171 ± 2% softirqs.CPU160.SCHED
114384 ± 5% +25.1% 143081 ± 7% softirqs.CPU160.TIMER
7029 ± 4% +238.6% 23799 ± 14% softirqs.CPU161.RCU
5406 ± 15% +792.5% 48255 ± 2% softirqs.CPU161.SCHED
112734 ± 3% +26.7% 142885 ± 7% softirqs.CPU161.TIMER
6863 +198.5% 20488 ± 23% softirqs.CPU162.RCU
6154 ± 26% +683.2% 48197 ± 2% softirqs.CPU162.SCHED
111608 ± 4% +28.0% 142856 ± 7% softirqs.CPU162.TIMER
7184 ± 5% +230.1% 23714 ± 15% softirqs.CPU163.RCU
5566 ± 25% +765.6% 48183 ± 3% softirqs.CPU163.SCHED
111617 ± 6% +28.4% 143336 ± 7% softirqs.CPU163.TIMER
6919 ± 3% +262.8% 25104 ± 23% softirqs.CPU164.RCU
4938 ± 14% +876.8% 48237 ± 3% softirqs.CPU164.SCHED
110667 ± 2% +29.2% 142929 ± 7% softirqs.CPU164.TIMER
9469 ± 32% +177.4% 26271 ± 18% softirqs.CPU165.RCU
5754 ± 21% +737.8% 48209 ± 3% softirqs.CPU165.SCHED
116835 ± 7% +22.2% 142716 ± 7% softirqs.CPU165.TIMER
7387 +230.3% 24403 ± 17% softirqs.CPU166.RCU
5249 ± 24% +823.5% 48479 ± 3% softirqs.CPU166.SCHED
111923 ± 3% +28.4% 143677 ± 7% softirqs.CPU166.TIMER
7347 ± 3% +209.9% 22769 ± 19% softirqs.CPU167.RCU
5998 ± 10% +743.8% 50616 ± 4% softirqs.CPU167.SCHED
113369 ± 2% +47.0% 166607 ± 13% softirqs.CPU167.TIMER
6882 ± 4% +309.1% 28155 ± 17% softirqs.CPU168.RCU
4980 ± 26% +870.9% 48356 ± 2% softirqs.CPU168.SCHED
7272 ± 9% +268.1% 26768 ± 4% softirqs.CPU169.RCU
4741 ± 27% +923.5% 48530 ± 2% softirqs.CPU169.SCHED
114848 +20.0% 137815 ± 2% softirqs.CPU169.TIMER
8518 ± 13% +233.0% 28368 ± 11% softirqs.CPU17.RCU
4357 ± 17% +1018.7% 48747 ± 2% softirqs.CPU17.SCHED
116709 ± 4% +28.7% 150247 ± 3% softirqs.CPU17.TIMER
6929 ± 5% +271.9% 25771 ± 21% softirqs.CPU170.RCU
4902 ± 24% +889.9% 48526 ± 3% softirqs.CPU170.SCHED
114066 ± 2% +20.6% 137529 ± 2% softirqs.CPU170.TIMER
6836 ± 9% +244.2% 23531 ± 5% softirqs.CPU171.RCU
5712 ± 40% +747.5% 48409 ± 3% softirqs.CPU171.SCHED
118712 ± 2% +16.5% 138274 ± 2% softirqs.CPU171.TIMER
7251 ± 14% +218.6% 23101 ± 12% softirqs.CPU172.RCU
4928 ± 28% +880.1% 48304 ± 3% softirqs.CPU172.SCHED
116380 ± 2% +18.4% 137767 ± 2% softirqs.CPU172.TIMER
6703 ± 3% +249.9% 23454 ± 12% softirqs.CPU173.RCU
4680 ± 20% +931.7% 48284 ± 3% softirqs.CPU173.SCHED
115375 ± 2% +18.9% 137182 ± 2% softirqs.CPU173.TIMER
7312 ± 10% +254.4% 25915 ± 12% softirqs.CPU174.RCU
5742 ± 21% +742.4% 48372 ± 3% softirqs.CPU174.SCHED
115023 +19.1% 137010 ± 2% softirqs.CPU174.TIMER
7802 ± 14% +219.9% 24963 ± 8% softirqs.CPU175.RCU
5756 ± 34% +739.7% 48339 ± 3% softirqs.CPU175.SCHED
114491 ± 3% +19.9% 137264 softirqs.CPU175.TIMER
7638 ± 4% +261.5% 27611 ± 11% softirqs.CPU176.RCU
5648 ± 30% +756.8% 48391 ± 3% softirqs.CPU176.SCHED
117714 ± 6% +17.4% 138207 softirqs.CPU176.TIMER
7239 ± 7% +267.4% 26593 ± 12% softirqs.CPU177.RCU
4851 ± 21% +897.5% 48387 ± 3% softirqs.CPU177.SCHED
114465 ± 4% +20.9% 138438 ± 2% softirqs.CPU177.TIMER
7388 ± 7% +266.1% 27048 ± 12% softirqs.CPU178.RCU
4612 ± 23% +951.7% 48506 ± 2% softirqs.CPU178.SCHED
113030 ± 4% +21.9% 137749 ± 2% softirqs.CPU178.TIMER
9363 ± 32% +183.7% 26563 ± 8% softirqs.CPU179.RCU
4634 ± 23% +946.8% 48512 ± 3% softirqs.CPU179.SCHED
113723 ± 6% +20.9% 137512 ± 2% softirqs.CPU179.TIMER
7637 ± 2% +180.7% 21438 ± 8% softirqs.CPU18.RCU
4370 ± 17% +1012.5% 48619 ± 2% softirqs.CPU18.SCHED
114134 ± 5% +31.6% 150173 ± 3% softirqs.CPU18.TIMER
7075 ± 6% +278.5% 26782 ± 15% softirqs.CPU180.RCU
5115 ± 31% +850.6% 48631 ± 2% softirqs.CPU180.SCHED
113470 ± 3% +21.5% 137820 ± 2% softirqs.CPU180.TIMER
7969 ± 7% +257.4% 28484 ± 11% softirqs.CPU181.RCU
5038 ± 27% +864.0% 48565 ± 3% softirqs.CPU181.SCHED
117362 ± 8% +17.9% 138399 ± 2% softirqs.CPU181.TIMER
9686 ± 31% +218.1% 30813 ± 7% softirqs.CPU182.RCU
4935 ± 25% +887.9% 48758 ± 2% softirqs.CPU182.SCHED
113111 ± 3% +24.3% 140577 ± 4% softirqs.CPU182.TIMER
8281 ± 13% +286.8% 32031 ± 11% softirqs.CPU183.RCU
5175 ± 25% +837.2% 48504 ± 2% softirqs.CPU183.SCHED
114201 ± 3% +25.1% 142879 ± 4% softirqs.CPU183.TIMER
7272 ± 8% +329.8% 31256 ± 12% softirqs.CPU184.RCU
4734 ± 27% +929.1% 48717 ± 3% softirqs.CPU184.SCHED
124859 ± 2% +15.0% 143647 ± 5% softirqs.CPU184.TIMER
7852 ± 16% +315.6% 32632 ± 12% softirqs.CPU185.RCU
5276 ± 31% +812.9% 48169 ± 3% softirqs.CPU185.SCHED
115513 ± 2% +21.5% 140306 ± 3% softirqs.CPU185.TIMER
7186 ± 5% +248.6% 25053 ± 8% softirqs.CPU186.RCU
4717 ± 25% +946.7% 49374 ± 5% softirqs.CPU186.SCHED
112750 +15.3% 130003 ± 7% softirqs.CPU186.TIMER
7396 ± 5% +331.7% 31928 ± 19% softirqs.CPU187.RCU
5141 ± 35% +840.2% 48337 ± 3% softirqs.CPU187.SCHED
116429 ± 3% +20.3% 140082 ± 4% softirqs.CPU187.TIMER
7255 ± 6% +337.5% 31743 ± 17% softirqs.CPU188.RCU
4817 ± 25% +909.5% 48627 ± 2% softirqs.CPU188.SCHED
113576 ± 2% +23.6% 140410 ± 3% softirqs.CPU188.TIMER
8051 ± 20% +329.6% 34587 ± 9% softirqs.CPU189.RCU
5067 ± 31% +854.5% 48367 ± 2% softirqs.CPU189.SCHED
118682 ± 7% +19.2% 141509 ± 5% softirqs.CPU189.TIMER
8293 ± 15% +224.3% 26893 ± 10% softirqs.CPU19.RCU
5756 ± 30% +745.9% 48697 ± 2% softirqs.CPU19.SCHED
117564 ± 6% +27.7% 150158 ± 3% softirqs.CPU19.TIMER
7967 ± 12% +282.6% 30479 ± 29% softirqs.CPU190.RCU
5185 ± 31% +850.8% 49304 ± 3% softirqs.CPU190.SCHED
8130 ± 10% +293.4% 31985 ± 5% softirqs.CPU191.RCU
5777 ± 25% +735.5% 48268 ± 3% softirqs.CPU191.SCHED
12457 ± 19% +162.2% 32665 ± 9% softirqs.CPU2.RCU
5769 ± 21% +750.7% 49077 ± 2% softirqs.CPU2.SCHED
122090 ± 6% +23.6% 150886 ± 3% softirqs.CPU2.TIMER
7776 +212.4% 24292 ± 12% softirqs.CPU20.RCU
4513 ± 21% +977.5% 48634 ± 3% softirqs.CPU20.SCHED
116230 ± 4% +29.1% 150103 ± 3% softirqs.CPU20.TIMER
7990 ± 5% +260.5% 28805 ± 14% softirqs.CPU21.RCU
4673 ± 21% +941.2% 48659 ± 2% softirqs.CPU21.SCHED
120108 ± 3% +25.2% 150344 ± 3% softirqs.CPU21.TIMER
7749 ± 2% +256.5% 27630 ± 10% softirqs.CPU22.RCU
4419 ± 17% +998.8% 48560 ± 2% softirqs.CPU22.SCHED
116038 ± 2% +29.4% 150158 ± 3% softirqs.CPU22.TIMER
7699 ± 3% +241.9% 26327 ± 15% softirqs.CPU23.RCU
4382 ± 17% +1014.0% 48824 ± 2% softirqs.CPU23.SCHED
116360 ± 3% +29.1% 150178 ± 3% softirqs.CPU23.TIMER
8535 ± 6% +220.1% 27324 ± 12% softirqs.CPU24.RCU
6228 ± 12% +689.1% 49148 ± 4% softirqs.CPU24.SCHED
122734 ± 4% +23.9% 152028 ± 5% softirqs.CPU24.TIMER
7845 ± 3% +226.6% 25620 ± 9% softirqs.CPU25.RCU
5458 ± 12% +800.7% 49161 ± 2% softirqs.CPU25.SCHED
113484 +30.6% 148219 ± 3% softirqs.CPU25.TIMER
7857 ± 3% +209.3% 24305 ± 8% softirqs.CPU26.RCU
5557 ± 17% +777.3% 48755 ± 2% softirqs.CPU26.SCHED
112105 ± 2% +31.7% 147657 ± 3% softirqs.CPU26.TIMER
7629 +193.1% 22357 ± 4% softirqs.CPU27.RCU
5225 ± 20% +832.8% 48740 ± 3% softirqs.CPU27.SCHED
115606 ± 2% +27.9% 147838 ± 3% softirqs.CPU27.TIMER
7453 +190.9% 21679 ± 5% softirqs.CPU28.RCU
5231 ± 27% +829.9% 48645 ± 2% softirqs.CPU28.SCHED
113111 ± 3% +30.5% 147617 ± 3% softirqs.CPU28.TIMER
8227 ± 13% +204.6% 25061 ± 8% softirqs.CPU29.RCU
4699 ± 18% +936.2% 48692 ± 3% softirqs.CPU29.SCHED
112205 ± 3% +31.6% 147611 ± 3% softirqs.CPU29.TIMER
8454 ± 3% +222.2% 27239 ± 13% softirqs.CPU3.RCU
5200 ± 8% +838.7% 48812 ± 2% softirqs.CPU3.SCHED
118231 ± 6% +27.7% 151030 ± 3% softirqs.CPU3.TIMER
8472 ± 6% +209.7% 26241 ± 5% softirqs.CPU30.RCU
4971 ± 19% +881.8% 48809 ± 3% softirqs.CPU30.SCHED
110839 ± 2% +33.3% 147716 ± 3% softirqs.CPU30.TIMER
7658 +240.2% 26052 ± 9% softirqs.CPU31.RCU
4753 ± 21% +925.0% 48724 ± 2% softirqs.CPU31.SCHED
111366 ± 3% +32.6% 147627 ± 3% softirqs.CPU31.TIMER
8106 ± 3% +273.1% 30243 ± 9% softirqs.CPU32.RCU
4789 ± 21% +919.2% 48811 ± 3% softirqs.CPU32.SCHED
114400 ± 5% +29.3% 147913 ± 3% softirqs.CPU32.TIMER
8347 ± 8% +197.4% 24825 ± 10% softirqs.CPU33.RCU
4801 ± 18% +913.2% 48651 ± 3% softirqs.CPU33.SCHED
112652 ± 4% +31.3% 147895 ± 3% softirqs.CPU33.TIMER
7897 +217.0% 25038 ± 7% softirqs.CPU34.RCU
4723 ± 20% +933.4% 48807 ± 3% softirqs.CPU34.SCHED
111436 ± 4% +32.5% 147687 ± 3% softirqs.CPU34.TIMER
7938 ± 3% +209.3% 24554 ± 8% softirqs.CPU35.RCU
4958 ± 16% +884.4% 48806 ± 2% softirqs.CPU35.SCHED
112994 ± 6% +30.9% 147953 ± 3% softirqs.CPU35.TIMER
9953 ± 29% +184.0% 28261 ± 6% softirqs.CPU36.RCU
4847 ± 16% +906.5% 48788 ± 2% softirqs.CPU36.SCHED
111042 ± 2% +33.3% 147970 ± 3% softirqs.CPU36.TIMER
8763 ± 10% +231.7% 29064 ± 9% softirqs.CPU37.RCU
4815 ± 20% +911.9% 48731 ± 2% softirqs.CPU37.SCHED
114715 ± 6% +28.9% 147851 ± 3% softirqs.CPU37.TIMER
8499 ± 10% +247.7% 29552 ± 10% softirqs.CPU38.RCU
4604 ± 20% +963.8% 48979 ± 3% softirqs.CPU38.SCHED
110837 ± 3% +33.5% 147999 ± 3% softirqs.CPU38.TIMER
8075 ± 3% +288.5% 31370 ± 9% softirqs.CPU39.RCU
4703 ± 18% +936.6% 48752 ± 3% softirqs.CPU39.SCHED
112023 ± 3% +32.3% 148222 ± 3% softirqs.CPU39.TIMER
8340 ± 4% +255.9% 29680 ± 8% softirqs.CPU4.RCU
4421 ± 20% +1018.6% 49455 ± 4% softirqs.CPU4.SCHED
116991 ± 3% +30.1% 152251 ± 4% softirqs.CPU4.TIMER
8904 ± 12% +276.3% 33501 ± 6% softirqs.CPU40.RCU
5110 ± 25% +856.0% 48856 ± 3% softirqs.CPU40.SCHED
121476 +22.0% 148245 ± 3% softirqs.CPU40.TIMER
8535 ± 11% +283.8% 32759 ± 7% softirqs.CPU41.RCU
5112 ± 22% +858.7% 49012 ± 2% softirqs.CPU41.SCHED
112998 ± 2% +31.2% 148284 ± 3% softirqs.CPU41.TIMER
7785 +259.5% 27989 ± 9% softirqs.CPU42.RCU
4777 ± 20% +925.3% 48979 ± 2% softirqs.CPU42.SCHED
110450 ± 2% +33.8% 147773 ± 3% softirqs.CPU42.TIMER
8601 ± 12% +253.0% 30359 ± 11% softirqs.CPU43.RCU
4713 ± 18% +934.8% 48775 ± 3% softirqs.CPU43.SCHED
113750 ± 2% +30.2% 148147 ± 3% softirqs.CPU43.TIMER
8557 ± 11% +248.0% 29777 ± 14% softirqs.CPU44.RCU
4718 ± 19% +931.1% 48655 ± 2% softirqs.CPU44.SCHED
111695 ± 3% +32.5% 147947 ± 3% softirqs.CPU44.TIMER
8036 ± 3% +305.0% 32549 ± 9% softirqs.CPU45.RCU
5037 ± 27% +868.5% 48787 ± 2% softirqs.CPU45.SCHED
112442 ± 3% +31.7% 148036 ± 3% softirqs.CPU45.TIMER
8837 ± 16% +254.6% 31334 ± 8% softirqs.CPU46.RCU
4713 ± 16% +933.2% 48696 ± 2% softirqs.CPU46.SCHED
110046 ± 2% +34.6% 148115 ± 3% softirqs.CPU46.TIMER
9020 ± 10% +274.1% 33742 ± 17% softirqs.CPU47.RCU
4849 ± 16% +905.6% 48769 ± 2% softirqs.CPU47.SCHED
111069 ± 3% +33.2% 147945 ± 3% softirqs.CPU47.TIMER
8587 ± 2% +281.8% 32790 ± 12% softirqs.CPU48.RCU
5845 ± 18% +724.8% 48214 ± 3% softirqs.CPU48.SCHED
115457 ± 5% +25.8% 145285 ± 5% softirqs.CPU48.TIMER
8842 ± 7% +263.2% 32113 ± 13% softirqs.CPU49.RCU
5633 ± 9% +760.6% 48481 ± 3% softirqs.CPU49.SCHED
114513 ± 3% +25.8% 144010 ± 7% softirqs.CPU49.TIMER
8496 +259.4% 30532 ± 10% softirqs.CPU5.RCU
4399 ± 22% +1010.2% 48843 ± 2% softirqs.CPU5.SCHED
119650 ± 3% +26.0% 150750 ± 3% softirqs.CPU5.TIMER
8790 ± 5% +246.9% 30492 ± 13% softirqs.CPU50.RCU
5649 ± 13% +757.6% 48449 ± 3% softirqs.CPU50.SCHED
113264 ± 4% +27.1% 144010 ± 7% softirqs.CPU50.TIMER
7773 ± 2% +204.7% 23684 ± 7% softirqs.CPU51.RCU
5285 ± 9% +818.2% 48524 ± 3% softirqs.CPU51.SCHED
114335 ± 5% +26.3% 144418 ± 7% softirqs.CPU51.TIMER
8135 ± 2% +261.5% 29407 ± 12% softirqs.CPU52.RCU
5840 ± 12% +732.2% 48607 ± 3% softirqs.CPU52.SCHED
114739 ± 2% +25.7% 144234 ± 7% softirqs.CPU52.TIMER
8400 ± 3% +248.9% 29311 ± 12% softirqs.CPU53.RCU
5956 ± 15% +711.1% 48314 ± 3% softirqs.CPU53.SCHED
117922 ± 7% +22.3% 144195 ± 7% softirqs.CPU53.TIMER
9034 ± 5% +264.7% 32945 ± 15% softirqs.CPU54.RCU
5766 ± 7% +738.9% 48377 ± 3% softirqs.CPU54.SCHED
114476 ± 3% +25.7% 143926 ± 7% softirqs.CPU54.TIMER
8630 ± 6% +270.5% 31977 ± 12% softirqs.CPU55.RCU
6089 ± 9% +694.4% 48373 ± 3% softirqs.CPU55.SCHED
114893 ± 2% +25.3% 143934 ± 7% softirqs.CPU55.TIMER
8490 ± 2% +308.2% 34662 ± 20% softirqs.CPU56.RCU
5440 ± 12% +788.7% 48347 ± 3% softirqs.CPU56.SCHED
123916 ± 3% +16.2% 143951 ± 7% softirqs.CPU56.TIMER
8248 ± 2% +227.7% 27029 ± 11% softirqs.CPU57.RCU
5545 ± 6% +773.5% 48443 ± 3% softirqs.CPU57.SCHED
114305 +26.0% 144046 ± 7% softirqs.CPU57.TIMER
10012 ± 18% +208.5% 30889 ± 14% softirqs.CPU58.RCU
7700 ± 15% +529.2% 48447 ± 3% softirqs.CPU58.SCHED
116577 +23.4% 143806 ± 7% softirqs.CPU58.TIMER
9061 ± 14% +239.2% 30741 ± 13% softirqs.CPU59.RCU
5836 ± 9% +729.5% 48409 ± 3% softirqs.CPU59.SCHED
117066 ± 3% +22.9% 143886 ± 7% softirqs.CPU59.TIMER
9288 ± 6% +276.5% 34968 ± 18% softirqs.CPU6.RCU
5917 ± 27% +725.8% 48867 ± 2% softirqs.CPU6.SCHED
118857 ± 4% +26.7% 150578 ± 3% softirqs.CPU6.TIMER
8562 ± 4% +262.2% 31010 ± 16% softirqs.CPU60.RCU
5669 ± 9% +753.5% 48386 ± 3% softirqs.CPU60.SCHED
114534 ± 2% +25.7% 143946 ± 7% softirqs.CPU60.TIMER
10364 ± 25% +213.2% 32457 ± 13% softirqs.CPU61.RCU
5474 ± 12% +785.2% 48459 ± 3% softirqs.CPU61.SCHED
114014 ± 2% +26.2% 143924 ± 7% softirqs.CPU61.TIMER
8886 ± 12% +270.6% 32929 ± 20% softirqs.CPU62.RCU
5813 ± 19% +734.1% 48491 ± 3% softirqs.CPU62.SCHED
112192 ± 3% +28.5% 144151 ± 7% softirqs.CPU62.TIMER
9061 ± 15% +243.1% 31091 ± 10% softirqs.CPU63.RCU
5386 ± 12% +799.8% 48465 ± 3% softirqs.CPU63.SCHED
113477 ± 2% +27.2% 144372 ± 7% softirqs.CPU63.TIMER
7875 ± 3% +274.4% 29487 ± 20% softirqs.CPU64.RCU
5497 ± 14% +784.5% 48626 ± 3% softirqs.CPU64.SCHED
115731 ± 5% +24.9% 144492 ± 8% softirqs.CPU64.TIMER
7571 ± 2% +244.7% 26096 ± 14% softirqs.CPU65.RCU
5287 ± 16% +822.8% 48789 ± 3% softirqs.CPU65.SCHED
113622 ± 3% +27.3% 144637 ± 8% softirqs.CPU65.TIMER
7699 ± 7% +191.8% 22468 ± 16% softirqs.CPU66.RCU
5245 ± 17% +822.7% 48400 ± 3% softirqs.CPU66.SCHED
111307 ± 4% +29.3% 143876 ± 7% softirqs.CPU66.TIMER
8634 ± 19% +222.0% 27805 ± 20% softirqs.CPU67.RCU
5450 ± 9% +789.7% 48496 ± 3% softirqs.CPU67.SCHED
113545 ± 6% +27.2% 144438 ± 7% softirqs.CPU67.TIMER
7607 ± 3% +220.5% 24382 ± 17% softirqs.CPU68.RCU
5689 ± 10% +751.4% 48436 ± 3% softirqs.CPU68.SCHED
112455 ± 2% +28.2% 144177 ± 7% softirqs.CPU68.TIMER
10390 ± 22% +161.5% 27172 ± 18% softirqs.CPU69.RCU
5380 ± 14% +801.0% 48472 ± 3% softirqs.CPU69.SCHED
117338 ± 7% +23.0% 144373 ± 7% softirqs.CPU69.TIMER
9265 ± 14% +227.8% 30371 ± 6% softirqs.CPU7.RCU
4348 ± 18% +1019.5% 48681 ± 3% softirqs.CPU7.SCHED
116997 ± 3% +28.5% 150366 ± 3% softirqs.CPU7.TIMER
7810 +252.7% 27550 ± 16% softirqs.CPU70.RCU
5883 ± 18% +727.9% 48708 ± 2% softirqs.CPU70.SCHED
113532 ± 3% +27.8% 145056 ± 7% softirqs.CPU70.TIMER
7357 +255.6% 26158 ± 23% softirqs.CPU71.RCU
5495 ± 10% +805.9% 49780 ± 2% softirqs.CPU71.SCHED
7568 +234.4% 25308 ± 6% softirqs.CPU72.RCU
5251 ± 26% +819.6% 48289 ± 2% softirqs.CPU72.SCHED
7701 ± 9% +217.9% 24483 ± 6% softirqs.CPU73.RCU
5471 ± 33% +790.3% 48706 ± 2% softirqs.CPU73.SCHED
116614 ± 2% +19.1% 138905 ± 2% softirqs.CPU73.TIMER
7164 +226.8% 23414 ± 9% softirqs.CPU74.RCU
4937 ± 30% +881.9% 48479 ± 3% softirqs.CPU74.SCHED
114790 ± 2% +20.6% 138414 ± 2% softirqs.CPU74.TIMER
7505 ± 4% +200.2% 22533 ± 4% softirqs.CPU75.RCU
5227 ± 29% +830.2% 48625 ± 3% softirqs.CPU75.SCHED
119181 ± 2% +17.3% 139849 softirqs.CPU75.TIMER
8043 ± 10% +181.7% 22660 ± 8% softirqs.CPU76.RCU
6006 ± 33% +705.9% 48403 ± 3% softirqs.CPU76.SCHED
118838 ± 2% +16.8% 138841 ± 2% softirqs.CPU76.TIMER
7382 +213.7% 23161 ± 11% softirqs.CPU77.RCU
5241 ± 34% +824.3% 48446 ± 2% softirqs.CPU77.SCHED
115996 ± 3% +19.9% 139058 ± 2% softirqs.CPU77.TIMER
7497 ± 5% +224.3% 24311 ± 9% softirqs.CPU78.RCU
5130 ± 29% +843.2% 48390 ± 3% softirqs.CPU78.SCHED
115552 ± 2% +19.6% 138172 ± 2% softirqs.CPU78.TIMER
8224 ± 13% +190.6% 23903 ± 8% softirqs.CPU79.RCU
6568 ± 24% +638.8% 48524 ± 3% softirqs.CPU79.SCHED
116694 ± 3% +18.6% 138454 softirqs.CPU79.TIMER
9440 ± 13% +222.8% 30473 ± 10% softirqs.CPU8.RCU
4499 ± 17% +978.3% 48517 ± 3% softirqs.CPU8.SCHED
127537 ± 4% +18.1% 150617 ± 3% softirqs.CPU8.TIMER
8514 ± 14% +232.9% 28341 ± 8% softirqs.CPU80.RCU
4987 ± 27% +874.9% 48619 ± 3% softirqs.CPU80.SCHED
117648 ± 5% +18.4% 139331 ± 2% softirqs.CPU80.TIMER
7760 ± 7% +261.1% 28025 ± 25% softirqs.CPU81.RCU
5303 ± 36% +824.2% 49017 ± 2% softirqs.CPU81.SCHED
115608 ± 4% +20.6% 139464 ± 2% softirqs.CPU81.TIMER
8213 ± 8% +216.4% 25990 ± 10% softirqs.CPU82.RCU
4916 ± 20% +888.8% 48616 ± 3% softirqs.CPU82.SCHED
114388 ± 4% +21.5% 138953 ± 2% softirqs.CPU82.TIMER
7855 ± 9% +231.2% 26015 ± 10% softirqs.CPU83.RCU
4928 ± 26% +882.8% 48441 ± 3% softirqs.CPU83.SCHED
115967 ± 6% +19.5% 138526 ± 2% softirqs.CPU83.TIMER
7602 ± 7% +243.8% 26139 ± 15% softirqs.CPU84.RCU
5005 ± 29% +868.2% 48459 ± 3% softirqs.CPU84.SCHED
114093 ± 3% +21.5% 138641 ± 2% softirqs.CPU84.TIMER
8666 ± 4% +231.7% 28747 ± 6% softirqs.CPU85.RCU
5445 ± 34% +794.5% 48705 ± 3% softirqs.CPU85.SCHED
118889 ± 7% +17.5% 139639 ± 2% softirqs.CPU85.TIMER
8191 ± 9% +257.0% 29242 ± 14% softirqs.CPU86.RCU
5463 ± 28% +788.2% 48523 ± 2% softirqs.CPU86.SCHED
114483 ± 3% +23.7% 141627 ± 4% softirqs.CPU86.TIMER
8565 ± 13% +267.5% 31477 ± 14% softirqs.CPU87.RCU
5120 ± 31% +850.9% 48686 ± 3% softirqs.CPU87.SCHED
115813 ± 3% +24.5% 144179 ± 5% softirqs.CPU87.TIMER
8229 ± 9% +309.1% 33664 ± 13% softirqs.CPU88.RCU
4954 ± 34% +884.6% 48779 ± 3% softirqs.CPU88.SCHED
126351 ± 2% +14.6% 144809 ± 6% softirqs.CPU88.TIMER
8746 ± 15% +266.9% 32090 ± 11% softirqs.CPU89.RCU
4996 ± 25% +870.8% 48510 ± 3% softirqs.CPU89.SCHED
116473 +20.7% 140634 ± 3% softirqs.CPU89.TIMER
8654 ± 3% +198.5% 25836 ± 8% softirqs.CPU9.RCU
4961 ± 23% +884.6% 48851 ± 3% softirqs.CPU9.SCHED
118596 ± 4% +26.9% 150466 ± 3% softirqs.CPU9.TIMER
8042 ± 8% +250.5% 28188 ± 8% softirqs.CPU90.RCU
4874 ± 24% +943.1% 50843 ± 9% softirqs.CPU90.SCHED
113975 ± 2% +42.2% 162078 ± 21% softirqs.CPU90.TIMER
8067 ± 5% +279.1% 30586 ± 14% softirqs.CPU91.RCU
4816 ± 26% +906.6% 48477 ± 3% softirqs.CPU91.SCHED
117357 ± 3% +20.4% 141288 ± 4% softirqs.CPU91.TIMER
8308 ± 8% +277.3% 31344 ± 20% softirqs.CPU92.RCU
5061 ± 37% +857.4% 48459 ± 2% softirqs.CPU92.SCHED
114749 ± 3% +23.2% 141421 ± 4% softirqs.CPU92.TIMER
7574 ± 6% +320.3% 31835 ± 12% softirqs.CPU93.RCU
4939 ± 31% +885.3% 48665 ± 3% softirqs.CPU93.SCHED
115341 ± 3% +24.3% 143322 ± 5% softirqs.CPU93.TIMER
8162 ± 7% +246.8% 28304 ± 15% softirqs.CPU94.RCU
5580 ± 35% +803.9% 50443 ± 5% softirqs.CPU94.SCHED
113683 ± 3% +44.4% 164136 ± 15% softirqs.CPU94.TIMER
7811 ± 17% +296.3% 30953 ± 12% softirqs.CPU95.RCU
5675 ± 36% +753.1% 48411 ± 3% softirqs.CPU95.SCHED
117129 ± 4% +20.2% 140811 ± 2% softirqs.CPU95.TIMER
8330 ± 8% +278.5% 31530 ± 6% softirqs.CPU96.RCU
5590 ± 23% +763.4% 48268 ± 2% softirqs.CPU96.SCHED
120276 ± 7% +24.3% 149515 ± 3% softirqs.CPU96.TIMER
8564 ± 12% +304.8% 34666 ± 24% softirqs.CPU97.RCU
4665 ± 17% +940.2% 48534 ± 3% softirqs.CPU97.SCHED
116820 ± 3% +30.7% 152641 ± 6% softirqs.CPU97.TIMER
7810 ± 5% +299.7% 31217 ± 14% softirqs.CPU98.RCU
4609 ± 23% +940.7% 47970 ± 2% softirqs.CPU98.SCHED
115185 ± 5% +28.2% 147682 ± 3% softirqs.CPU98.TIMER
10099 ± 25% +162.6% 26525 ± 10% softirqs.CPU99.RCU
4895 ± 19% +890.9% 48508 ± 2% softirqs.CPU99.SCHED
116792 ± 6% +28.1% 149556 ± 3% softirqs.CPU99.TIMER
1558965 ± 2% +252.5% 5495679 ± 11% softirqs.RCU
983493 ± 18% +849.1% 9334427 ± 3% softirqs.SCHED
22213581 ± 2% +25.7% 27918197 ± 3% softirqs.TIMER

191.33 +11.0% 212.33 ± 6% interrupts.113:PCI-MSI.1574915-edge.eth1-TxRx-3
201.67 ± 10% +518.5% 1247 ± 59% interrupts.117:PCI-MSI.1574919-edge.eth1-TxRx-7
793811 +8.9% 864335 ± 8% interrupts.CAL:Function_call_interrupts
7259 -84.3% 1139 ± 3% interrupts.CPU0.NMI:Non-maskable_interrupts
7259 -84.3% 1139 ± 3% interrupts.CPU0.PMI:Performance_monitoring_interrupts
2668 ± 22% +381.4% 12847 ± 6% interrupts.CPU0.RES:Rescheduling_interrupts
7302 -84.2% 1152 interrupts.CPU1.NMI:Non-maskable_interrupts
7302 -84.2% 1152 interrupts.CPU1.PMI:Performance_monitoring_interrupts
1939 ± 23% +641.0% 14373 ± 6% interrupts.CPU1.RES:Rescheduling_interrupts
7198 -83.1% 1216 interrupts.CPU10.NMI:Non-maskable_interrupts
7198 -83.1% 1216 interrupts.CPU10.PMI:Performance_monitoring_interrupts
1405 ± 27% +831.0% 13084 ± 12% interrupts.CPU10.RES:Rescheduling_interrupts
7286 -87.5% 914.00 ± 29% interrupts.CPU100.NMI:Non-maskable_interrupts
7286 -87.5% 914.00 ± 29% interrupts.CPU100.PMI:Performance_monitoring_interrupts
1897 ± 30% +616.7% 13599 ± 2% interrupts.CPU100.RES:Rescheduling_interrupts
7318 -87.3% 929.33 ± 25% interrupts.CPU101.NMI:Non-maskable_interrupts
7318 -87.3% 929.33 ± 25% interrupts.CPU101.PMI:Performance_monitoring_interrupts
1400 ± 38% +912.1% 14175 ± 13% interrupts.CPU101.RES:Rescheduling_interrupts
7204 ± 2% -86.7% 958.33 ± 28% interrupts.CPU102.NMI:Non-maskable_interrupts
7204 ± 2% -86.7% 958.33 ± 28% interrupts.CPU102.PMI:Performance_monitoring_interrupts
1406 ± 54% +1058.8% 16300 ± 11% interrupts.CPU102.RES:Rescheduling_interrupts
7190 -90.3% 698.00 ± 24% interrupts.CPU103.NMI:Non-maskable_interrupts
7190 -90.3% 698.00 ± 24% interrupts.CPU103.PMI:Performance_monitoring_interrupts
969.33 ± 43% +1298.9% 13560 ± 18% interrupts.CPU103.RES:Rescheduling_interrupts
7312 -87.9% 888.00 ± 26% interrupts.CPU104.NMI:Non-maskable_interrupts
7312 -87.9% 888.00 ± 26% interrupts.CPU104.PMI:Performance_monitoring_interrupts
797.67 ± 42% +1760.3% 14838 ± 14% interrupts.CPU104.RES:Rescheduling_interrupts
6064 ± 29% -84.3% 950.33 ± 25% interrupts.CPU105.NMI:Non-maskable_interrupts
6064 ± 29% -84.3% 950.33 ± 25% interrupts.CPU105.PMI:Performance_monitoring_interrupts
1189 ± 57% +1043.8% 13599 ± 16% interrupts.CPU105.RES:Rescheduling_interrupts
6075 ± 28% -84.0% 974.67 ± 26% interrupts.CPU106.NMI:Non-maskable_interrupts
6075 ± 28% -84.0% 974.67 ± 26% interrupts.CPU106.PMI:Performance_monitoring_interrupts
857.67 ± 23% +1388.8% 12769 ± 21% interrupts.CPU106.RES:Rescheduling_interrupts
6075 ± 28% -82.0% 1092 ± 3% interrupts.CPU107.NMI:Non-maskable_interrupts
6075 ± 28% -82.0% 1092 ± 3% interrupts.CPU107.PMI:Performance_monitoring_interrupts
1137 ± 29% +1055.0% 13132 ± 17% interrupts.CPU107.RES:Rescheduling_interrupts
6095 ± 27% -82.7% 1054 ± 5% interrupts.CPU108.NMI:Non-maskable_interrupts
6095 ± 27% -82.7% 1054 ± 5% interrupts.CPU108.PMI:Performance_monitoring_interrupts
1051 ± 76% +1106.2% 12680 ± 16% interrupts.CPU108.RES:Rescheduling_interrupts
6134 ± 27% -83.3% 1024 ± 11% interrupts.CPU109.NMI:Non-maskable_interrupts
6134 ± 27% -83.3% 1024 ± 11% interrupts.CPU109.PMI:Performance_monitoring_interrupts
636.67 ± 28% +1759.8% 11841 ± 11% interrupts.CPU109.RES:Rescheduling_interrupts
7174 ± 2% -85.2% 1060 ± 4% interrupts.CPU11.NMI:Non-maskable_interrupts
7174 ± 2% -85.2% 1060 ± 4% interrupts.CPU11.PMI:Performance_monitoring_interrupts
1504 ± 29% +856.7% 14392 ± 4% interrupts.CPU11.RES:Rescheduling_interrupts
6077 ± 27% -81.4% 1129 ± 5% interrupts.CPU110.NMI:Non-maskable_interrupts
6077 ± 27% -81.4% 1129 ± 5% interrupts.CPU110.PMI:Performance_monitoring_interrupts
1025 ± 38% +1039.5% 11680 ± 11% interrupts.CPU110.RES:Rescheduling_interrupts
6061 ± 28% -81.7% 1107 ± 7% interrupts.CPU111.NMI:Non-maskable_interrupts
6061 ± 28% -81.7% 1107 ± 7% interrupts.CPU111.PMI:Performance_monitoring_interrupts
1484 ± 74% +755.1% 12692 ± 12% interrupts.CPU111.RES:Rescheduling_interrupts
7271 -84.8% 1102 interrupts.CPU112.NMI:Non-maskable_interrupts
7271 -84.8% 1102 interrupts.CPU112.PMI:Performance_monitoring_interrupts
828.00 ± 33% +1478.5% 13070 ± 19% interrupts.CPU112.RES:Rescheduling_interrupts
7305 -84.1% 1164 ± 5% interrupts.CPU113.NMI:Non-maskable_interrupts
7305 -84.1% 1164 ± 5% interrupts.CPU113.PMI:Performance_monitoring_interrupts
865.67 ± 36% +1551.3% 14294 ± 18% interrupts.CPU113.RES:Rescheduling_interrupts
7329 -85.4% 1070 ± 6% interrupts.CPU114.NMI:Non-maskable_interrupts
7329 -85.4% 1070 ± 6% interrupts.CPU114.PMI:Performance_monitoring_interrupts
1089 ± 70% +1241.8% 14612 ± 11% interrupts.CPU114.RES:Rescheduling_interrupts
7320 -84.7% 1123 ± 6% interrupts.CPU115.NMI:Non-maskable_interrupts
7320 -84.7% 1123 ± 6% interrupts.CPU115.PMI:Performance_monitoring_interrupts
1119 ± 64% +1164.0% 14153 ± 6% interrupts.CPU115.RES:Rescheduling_interrupts
7287 -85.8% 1035 ± 11% interrupts.CPU116.NMI:Non-maskable_interrupts
7287 -85.8% 1035 ± 11% interrupts.CPU116.PMI:Performance_monitoring_interrupts
3227 ± 90% +330.6% 13896 ± 3% interrupts.CPU116.RES:Rescheduling_interrupts
7280 -84.9% 1100 ± 12% interrupts.CPU117.NMI:Non-maskable_interrupts
7280 -84.9% 1100 ± 12% interrupts.CPU117.PMI:Performance_monitoring_interrupts
1370 ± 52% +789.8% 12190 ± 8% interrupts.CPU117.RES:Rescheduling_interrupts
7315 -84.4% 1137 ± 6% interrupts.CPU118.NMI:Non-maskable_interrupts
7315 -84.4% 1137 ± 6% interrupts.CPU118.PMI:Performance_monitoring_interrupts
1062 ± 30% +1305.1% 14931 ± 13% interrupts.CPU118.RES:Rescheduling_interrupts
7242 -83.7% 1183 ± 2% interrupts.CPU119.NMI:Non-maskable_interrupts
7242 -83.7% 1183 ± 2% interrupts.CPU119.PMI:Performance_monitoring_interrupts
1468 ± 53% +1044.7% 16811 ± 11% interrupts.CPU119.RES:Rescheduling_interrupts
7255 -85.3% 1065 ± 3% interrupts.CPU12.NMI:Non-maskable_interrupts
7255 -85.3% 1065 ± 3% interrupts.CPU12.PMI:Performance_monitoring_interrupts
1509 ± 48% +832.5% 14078 ± 3% interrupts.CPU12.RES:Rescheduling_interrupts
7264 -83.7% 1182 ± 3% interrupts.CPU120.NMI:Non-maskable_interrupts
7264 -83.7% 1182 ± 3% interrupts.CPU120.PMI:Performance_monitoring_interrupts
1448 ± 30% +1019.4% 16209 ± 2% interrupts.CPU120.RES:Rescheduling_interrupts
7289 -83.9% 1171 ± 3% interrupts.CPU121.NMI:Non-maskable_interrupts
7289 -83.9% 1171 ± 3% interrupts.CPU121.PMI:Performance_monitoring_interrupts
1163 ± 17% +1188.8% 14993 ± 9% interrupts.CPU121.RES:Rescheduling_interrupts
7257 -84.1% 1154 ± 5% interrupts.CPU122.NMI:Non-maskable_interrupts
7257 -84.1% 1154 ± 5% interrupts.CPU122.PMI:Performance_monitoring_interrupts
1154 ± 49% +1297.7% 16129 ± 8% interrupts.CPU122.RES:Rescheduling_interrupts
7305 -84.4% 1141 ± 2% interrupts.CPU123.NMI:Non-maskable_interrupts
7305 -84.4% 1141 ± 2% interrupts.CPU123.PMI:Performance_monitoring_interrupts
872.67 ± 47% +1771.2% 16329 ± 8% interrupts.CPU123.RES:Rescheduling_interrupts
7258 -84.4% 1134 ± 7% interrupts.CPU124.NMI:Non-maskable_interrupts
7258 -84.4% 1134 ± 7% interrupts.CPU124.PMI:Performance_monitoring_interrupts
946.00 ± 2% +1623.3% 16302 ± 3% interrupts.CPU124.RES:Rescheduling_interrupts
7305 -83.9% 1175 ± 6% interrupts.CPU125.NMI:Non-maskable_interrupts
7305 -83.9% 1175 ± 6% interrupts.CPU125.PMI:Performance_monitoring_interrupts
849.00 ± 9% +1765.7% 15839 ± 5% interrupts.CPU125.RES:Rescheduling_interrupts
7292 -83.8% 1179 ± 5% interrupts.CPU126.NMI:Non-maskable_interrupts
7292 -83.8% 1179 ± 5% interrupts.CPU126.PMI:Performance_monitoring_interrupts
773.67 ± 31% +2167.4% 17542 ± 14% interrupts.CPU126.RES:Rescheduling_interrupts
7283 -83.8% 1177 ± 2% interrupts.CPU127.NMI:Non-maskable_interrupts
7283 -83.8% 1177 ± 2% interrupts.CPU127.PMI:Performance_monitoring_interrupts
852.33 ± 57% +1900.9% 17054 ± 4% interrupts.CPU127.RES:Rescheduling_interrupts
7329 -84.4% 1145 ± 2% interrupts.CPU128.NMI:Non-maskable_interrupts
7329 -84.4% 1145 ± 2% interrupts.CPU128.PMI:Performance_monitoring_interrupts
550.67 ± 30% +2955.1% 16823 ± 5% interrupts.CPU128.RES:Rescheduling_interrupts
7268 -83.7% 1183 ± 2% interrupts.CPU129.NMI:Non-maskable_interrupts
7268 -83.7% 1183 ± 2% interrupts.CPU129.PMI:Performance_monitoring_interrupts
1093 ± 34% +1493.0% 17412 ± 4% interrupts.CPU129.RES:Rescheduling_interrupts
7274 -86.0% 1021 ± 8% interrupts.CPU13.NMI:Non-maskable_interrupts
7274 -86.0% 1021 ± 8% interrupts.CPU13.PMI:Performance_monitoring_interrupts
1278 ± 26% +955.8% 13496 ± 5% interrupts.CPU13.RES:Rescheduling_interrupts
7269 -83.7% 1187 interrupts.CPU130.NMI:Non-maskable_interrupts
7269 -83.7% 1187 interrupts.CPU130.PMI:Performance_monitoring_interrupts
858.67 ± 27% +1861.3% 16840 ± 3% interrupts.CPU130.RES:Rescheduling_interrupts
7318 -83.8% 1185 ± 3% interrupts.CPU131.NMI:Non-maskable_interrupts
7318 -83.8% 1185 ± 3% interrupts.CPU131.PMI:Performance_monitoring_interrupts
542.67 ± 17% +2739.4% 15408 ± 3% interrupts.CPU131.RES:Rescheduling_interrupts
7250 -83.6% 1187 ± 6% interrupts.CPU132.NMI:Non-maskable_interrupts
7250 -83.6% 1187 ± 6% interrupts.CPU132.PMI:Performance_monitoring_interrupts
742.67 ± 34% +2000.3% 15598 ± 3% interrupts.CPU132.RES:Rescheduling_interrupts
7288 -84.5% 1128 ± 3% interrupts.CPU133.NMI:Non-maskable_interrupts
7288 -84.5% 1128 ± 3% interrupts.CPU133.PMI:Performance_monitoring_interrupts
749.67 ± 33% +2009.8% 15816 ± 4% interrupts.CPU133.RES:Rescheduling_interrupts
7309 -83.3% 1218 ± 4% interrupts.CPU134.NMI:Non-maskable_interrupts
7309 -83.3% 1218 ± 4% interrupts.CPU134.PMI:Performance_monitoring_interrupts
586.67 ± 37% +2688.9% 16361 ± 8% interrupts.CPU134.RES:Rescheduling_interrupts
6087 ± 28% -83.7% 991.33 ± 27% interrupts.CPU135.NMI:Non-maskable_interrupts
6087 ± 28% -83.7% 991.33 ± 27% interrupts.CPU135.PMI:Performance_monitoring_interrupts
608.67 +2544.4% 16095 ± 5% interrupts.CPU135.RES:Rescheduling_interrupts
6039 ± 28% -84.1% 959.67 ± 29% interrupts.CPU136.NMI:Non-maskable_interrupts
6039 ± 28% -84.1% 959.67 ± 29% interrupts.CPU136.PMI:Performance_monitoring_interrupts
771.67 ± 11% +1916.9% 15563 ± 5% interrupts.CPU136.RES:Rescheduling_interrupts
3894 ± 5% +18.7% 4624 ± 9% interrupts.CPU137.CAL:Function_call_interrupts
6103 ± 28% -84.3% 955.67 ± 25% interrupts.CPU137.NMI:Non-maskable_interrupts
6103 ± 28% -84.3% 955.67 ± 25% interrupts.CPU137.PMI:Performance_monitoring_interrupts
631.33 ± 4% +2458.0% 16149 ± 6% interrupts.CPU137.RES:Rescheduling_interrupts
6113 ± 28% -84.2% 965.00 ± 27% interrupts.CPU138.NMI:Non-maskable_interrupts
6113 ± 28% -84.2% 965.00 ± 27% interrupts.CPU138.PMI:Performance_monitoring_interrupts
600.67 ± 28% +2928.8% 18193 ± 23% interrupts.CPU138.RES:Rescheduling_interrupts
4882 ± 35% -80.9% 930.33 ± 27% interrupts.CPU139.NMI:Non-maskable_interrupts
4882 ± 35% -80.9% 930.33 ± 27% interrupts.CPU139.PMI:Performance_monitoring_interrupts
1290 ± 13% +1071.9% 15121 ± 6% interrupts.CPU139.RES:Rescheduling_interrupts
7293 -84.6% 1121 ± 4% interrupts.CPU14.NMI:Non-maskable_interrupts
7293 -84.6% 1121 ± 4% interrupts.CPU14.PMI:Performance_monitoring_interrupts
1335 ± 36% +1008.7% 14801 ± 9% interrupts.CPU14.RES:Rescheduling_interrupts
4918 ± 35% -79.8% 992.33 ± 32% interrupts.CPU140.NMI:Non-maskable_interrupts
4918 ± 35% -79.8% 992.33 ± 32% interrupts.CPU140.PMI:Performance_monitoring_interrupts
776.33 ± 31% +1971.6% 16082 ± 7% interrupts.CPU140.RES:Rescheduling_interrupts
4895 ± 36% -80.1% 974.67 ± 27% interrupts.CPU141.NMI:Non-maskable_interrupts
4895 ± 36% -80.1% 974.67 ± 27% interrupts.CPU141.PMI:Performance_monitoring_interrupts
709.67 ± 21% +2098.9% 15604 ± 8% interrupts.CPU141.RES:Rescheduling_interrupts
4905 ± 35% -75.9% 1181 ± 2% interrupts.CPU142.NMI:Non-maskable_interrupts
4905 ± 35% -75.9% 1181 ± 2% interrupts.CPU142.PMI:Performance_monitoring_interrupts
750.00 ± 36% +2060.0% 16200 ± 4% interrupts.CPU142.RES:Rescheduling_interrupts
4863 ± 35% -74.6% 1236 ± 3% interrupts.CPU143.NMI:Non-maskable_interrupts
4863 ± 35% -74.6% 1236 ± 3% interrupts.CPU143.PMI:Performance_monitoring_interrupts
946.67 ± 30% +1743.6% 17453 ± 5% interrupts.CPU143.RES:Rescheduling_interrupts
4878 ± 35% -77.3% 1109 ± 10% interrupts.CPU144.NMI:Non-maskable_interrupts
4878 ± 35% -77.3% 1109 ± 10% interrupts.CPU144.PMI:Performance_monitoring_interrupts
2157 ± 27% +625.5% 15651 ± 8% interrupts.CPU144.RES:Rescheduling_interrupts
4881 ± 35% -79.4% 1006 ± 17% interrupts.CPU145.NMI:Non-maskable_interrupts
4881 ± 35% -79.4% 1006 ± 17% interrupts.CPU145.PMI:Performance_monitoring_interrupts
1735 ± 17% +761.5% 14949 ± 15% interrupts.CPU145.RES:Rescheduling_interrupts
4872 ± 35% -79.7% 988.00 ± 22% interrupts.CPU146.NMI:Non-maskable_interrupts
4872 ± 35% -79.7% 988.00 ± 22% interrupts.CPU146.PMI:Performance_monitoring_interrupts
930.67 ± 35% +1528.5% 15155 ± 18% interrupts.CPU146.RES:Rescheduling_interrupts
4888 ± 36% -79.0% 1028 ± 20% interrupts.CPU147.NMI:Non-maskable_interrupts
4888 ± 36% -79.0% 1028 ± 20% interrupts.CPU147.PMI:Performance_monitoring_interrupts
1422 ± 38% +914.8% 14430 ± 20% interrupts.CPU147.RES:Rescheduling_interrupts
6101 ± 28% -83.1% 1030 ± 16% interrupts.CPU148.NMI:Non-maskable_interrupts
6101 ± 28% -83.1% 1030 ± 16% interrupts.CPU148.PMI:Performance_monitoring_interrupts
984.33 ± 33% +1272.2% 13507 ± 24% interrupts.CPU148.RES:Rescheduling_interrupts
5987 ± 31% -81.7% 1098 ± 9% interrupts.CPU149.NMI:Non-maskable_interrupts
5987 ± 31% -81.7% 1098 ± 9% interrupts.CPU149.PMI:Performance_monitoring_interrupts
1035 ± 40% +1275.7% 14243 ± 30% interrupts.CPU149.RES:Rescheduling_interrupts
7257 -87.1% 935.33 ± 33% interrupts.CPU15.NMI:Non-maskable_interrupts
7257 -87.1% 935.33 ± 33% interrupts.CPU15.PMI:Performance_monitoring_interrupts
1431 ± 33% +919.1% 14584 ± 6% interrupts.CPU15.RES:Rescheduling_interrupts
6050 ± 27% -80.1% 1206 ± 3% interrupts.CPU150.NMI:Non-maskable_interrupts
6050 ± 27% -80.1% 1206 ± 3% interrupts.CPU150.PMI:Performance_monitoring_interrupts
812.00 ± 32% +1520.9% 13161 ± 30% interrupts.CPU150.RES:Rescheduling_interrupts
6042 ± 28% -80.7% 1167 interrupts.CPU151.NMI:Non-maskable_interrupts
6042 ± 28% -80.7% 1167 interrupts.CPU151.PMI:Performance_monitoring_interrupts
1328 ± 47% +874.9% 12947 ± 35% interrupts.CPU151.RES:Rescheduling_interrupts
7330 -84.2% 1155 ± 4% interrupts.CPU152.NMI:Non-maskable_interrupts
7330 -84.2% 1155 ± 4% interrupts.CPU152.PMI:Performance_monitoring_interrupts
1014 ± 21% +1275.5% 13952 ± 28% interrupts.CPU152.RES:Rescheduling_interrupts
6062 ± 27% -81.7% 1108 ± 6% interrupts.CPU153.NMI:Non-maskable_interrupts
6062 ± 27% -81.7% 1108 ± 6% interrupts.CPU153.PMI:Performance_monitoring_interrupts
965.33 ± 25% +1333.0% 13833 ± 30% interrupts.CPU153.RES:Rescheduling_interrupts
6063 ± 28% -81.8% 1105 interrupts.CPU154.NMI:Non-maskable_interrupts
6063 ± 28% -81.8% 1105 interrupts.CPU154.PMI:Performance_monitoring_interrupts
1278 ± 26% +974.4% 13731 ± 22% interrupts.CPU154.RES:Rescheduling_interrupts
7280 -83.2% 1224 ± 2% interrupts.CPU155.NMI:Non-maskable_interrupts
7280 -83.2% 1224 ± 2% interrupts.CPU155.PMI:Performance_monitoring_interrupts
1301 ± 30% +943.9% 13581 ± 25% interrupts.CPU155.RES:Rescheduling_interrupts
6059 ± 27% -81.7% 1108 ± 5% interrupts.CPU156.NMI:Non-maskable_interrupts
6059 ± 27% -81.7% 1108 ± 5% interrupts.CPU156.PMI:Performance_monitoring_interrupts
1372 ± 24% +923.5% 14042 ± 24% interrupts.CPU156.RES:Rescheduling_interrupts
6101 ± 28% -81.3% 1143 ± 8% interrupts.CPU157.NMI:Non-maskable_interrupts
6101 ± 28% -81.3% 1143 ± 8% interrupts.CPU157.PMI:Performance_monitoring_interrupts
1124 ± 45% +1226.7% 14916 ± 10% interrupts.CPU157.RES:Rescheduling_interrupts
7288 -84.8% 1111 ± 5% interrupts.CPU158.NMI:Non-maskable_interrupts
7288 -84.8% 1111 ± 5% interrupts.CPU158.PMI:Performance_monitoring_interrupts
918.67 ± 20% +1509.9% 14789 ± 8% interrupts.CPU158.RES:Rescheduling_interrupts
6088 ± 28% -81.7% 1117 ± 7% interrupts.CPU159.NMI:Non-maskable_interrupts
6088 ± 28% -81.7% 1117 ± 7% interrupts.CPU159.PMI:Performance_monitoring_interrupts
981.67 ± 16% +1440.3% 15120 ± 10% interrupts.CPU159.RES:Rescheduling_interrupts
7259 -87.3% 922.00 ± 28% interrupts.CPU16.NMI:Non-maskable_interrupts
7259 -87.3% 922.00 ± 28% interrupts.CPU16.PMI:Performance_monitoring_interrupts
1683 ± 58% +800.4% 15160 ± 3% interrupts.CPU16.RES:Rescheduling_interrupts
6079 ± 28% -82.2% 1080 ± 11% interrupts.CPU160.NMI:Non-maskable_interrupts
6079 ± 28% -82.2% 1080 ± 11% interrupts.CPU160.PMI:Performance_monitoring_interrupts
1193 ± 60% +1098.5% 14298 ± 9% interrupts.CPU160.RES:Rescheduling_interrupts
6073 ± 28% -80.9% 1158 ± 3% interrupts.CPU161.NMI:Non-maskable_interrupts
6073 ± 28% -80.9% 1158 ± 3% interrupts.CPU161.PMI:Performance_monitoring_interrupts
1026 ± 39% +1264.0% 14004 ± 21% interrupts.CPU161.RES:Rescheduling_interrupts
5959 ± 30% -80.3% 1172 ± 4% interrupts.CPU162.NMI:Non-maskable_interrupts
5959 ± 30% -80.3% 1172 ± 4% interrupts.CPU162.PMI:Performance_monitoring_interrupts
1376 ± 81% +924.5% 14097 ± 15% interrupts.CPU162.RES:Rescheduling_interrupts
6051 ± 28% -80.2% 1197 ± 2% interrupts.CPU163.NMI:Non-maskable_interrupts
6051 ± 28% -80.2% 1197 ± 2% interrupts.CPU163.PMI:Performance_monitoring_interrupts
1536 ± 66% +806.6% 13931 ± 21% interrupts.CPU163.RES:Rescheduling_interrupts
6111 ± 27% -81.6% 1124 ± 5% interrupts.CPU164.NMI:Non-maskable_interrupts
6111 ± 27% -81.6% 1124 ± 5% interrupts.CPU164.PMI:Performance_monitoring_interrupts
1145 ± 20% +1160.7% 14438 ± 18% interrupts.CPU164.RES:Rescheduling_interrupts
6082 ± 28% -81.5% 1126 ± 13% interrupts.CPU165.NMI:Non-maskable_interrupts
6082 ± 28% -81.5% 1126 ± 13% interrupts.CPU165.PMI:Performance_monitoring_interrupts
1122 ± 11% +1264.0% 15309 ± 5% interrupts.CPU165.RES:Rescheduling_interrupts
7291 -85.4% 1061 ± 14% interrupts.CPU166.NMI:Non-maskable_interrupts
7291 -85.4% 1061 ± 14% interrupts.CPU166.PMI:Performance_monitoring_interrupts
1129 ± 33% +1256.9% 15328 ± 6% interrupts.CPU166.RES:Rescheduling_interrupts
7277 -85.2% 1073 ± 12% interrupts.CPU167.NMI:Non-maskable_interrupts
7277 -85.2% 1073 ± 12% interrupts.CPU167.PMI:Performance_monitoring_interrupts
932.67 ± 11% +1693.7% 16729 interrupts.CPU167.RES:Rescheduling_interrupts
7276 -84.8% 1109 ± 6% interrupts.CPU168.NMI:Non-maskable_interrupts
7276 -84.8% 1109 ± 6% interrupts.CPU168.PMI:Performance_monitoring_interrupts
1854 ± 45% +469.8% 10565 ± 13% interrupts.CPU168.RES:Rescheduling_interrupts
6089 ± 28% -83.5% 1007 ± 14% interrupts.CPU169.NMI:Non-maskable_interrupts
6089 ± 28% -83.5% 1007 ± 14% interrupts.CPU169.PMI:Performance_monitoring_interrupts
1246 ± 8% +792.3% 11124 ± 13% interrupts.CPU169.RES:Rescheduling_interrupts
7303 -86.6% 978.00 ± 36% interrupts.CPU17.NMI:Non-maskable_interrupts
7303 -86.6% 978.00 ± 36% interrupts.CPU17.PMI:Performance_monitoring_interrupts
1365 ± 43% +975.6% 14689 ± 2% interrupts.CPU17.RES:Rescheduling_interrupts
6072 ± 28% -83.4% 1008 ± 9% interrupts.CPU170.NMI:Non-maskable_interrupts
6072 ± 28% -83.4% 1008 ± 9% interrupts.CPU170.PMI:Performance_monitoring_interrupts
2102 ± 31% +411.8% 10757 ± 16% interrupts.CPU170.RES:Rescheduling_interrupts
6117 ± 28% -82.9% 1048 ± 7% interrupts.CPU171.NMI:Non-maskable_interrupts
6117 ± 28% -82.9% 1048 ± 7% interrupts.CPU171.PMI:Performance_monitoring_interrupts
2034 ± 82% +508.9% 12387 ± 17% interrupts.CPU171.RES:Rescheduling_interrupts
6075 ± 28% -82.2% 1083 ± 9% interrupts.CPU172.NMI:Non-maskable_interrupts
6075 ± 28% -82.2% 1083 ± 9% interrupts.CPU172.PMI:Performance_monitoring_interrupts
831.67 ± 19% +1419.1% 12634 ± 22% interrupts.CPU172.RES:Rescheduling_interrupts
7309 -83.9% 1179 ± 7% interrupts.CPU173.NMI:Non-maskable_interrupts
7309 -83.9% 1179 ± 7% interrupts.CPU173.PMI:Performance_monitoring_interrupts
781.67 ± 8% +1592.2% 13227 ± 16% interrupts.CPU173.RES:Rescheduling_interrupts
6099 ± 28% -83.6% 999.67 ± 7% interrupts.CPU174.NMI:Non-maskable_interrupts
6099 ± 28% -83.6% 999.67 ± 7% interrupts.CPU174.PMI:Performance_monitoring_interrupts
1308 ± 23% +878.9% 12803 ± 13% interrupts.CPU174.RES:Rescheduling_interrupts
6018 ± 27% -82.5% 1054 ± 10% interrupts.CPU175.NMI:Non-maskable_interrupts
6018 ± 27% -82.5% 1054 ± 10% interrupts.CPU175.PMI:Performance_monitoring_interrupts
1505 ± 37% +743.1% 12693 ± 20% interrupts.CPU175.RES:Rescheduling_interrupts
6091 ± 28% -85.5% 880.33 ± 27% interrupts.CPU176.NMI:Non-maskable_interrupts
6091 ± 28% -85.5% 880.33 ± 27% interrupts.CPU176.PMI:Performance_monitoring_interrupts
1395 ± 36% +765.7% 12079 ± 15% interrupts.CPU176.RES:Rescheduling_interrupts
6094 ± 28% -81.8% 1107 ± 7% interrupts.CPU177.NMI:Non-maskable_interrupts
6094 ± 28% -81.8% 1107 ± 7% interrupts.CPU177.PMI:Performance_monitoring_interrupts
908.67 ± 8% +1359.1% 13258 ± 2% interrupts.CPU177.RES:Rescheduling_interrupts
6094 ± 28% -84.5% 942.67 ± 26% interrupts.CPU178.NMI:Non-maskable_interrupts
6094 ± 28% -84.5% 942.67 ± 26% interrupts.CPU178.PMI:Performance_monitoring_interrupts
906.00 ± 14% +1329.9% 12955 ± 3% interrupts.CPU178.RES:Rescheduling_interrupts
6062 ± 28% -84.8% 922.67 ± 27% interrupts.CPU179.NMI:Non-maskable_interrupts
6062 ± 28% -84.8% 922.67 ± 27% interrupts.CPU179.PMI:Performance_monitoring_interrupts
1080 ± 17% +1102.4% 12989 ± 22% interrupts.CPU179.RES:Rescheduling_interrupts
6052 ± 28% -85.1% 899.00 ± 31% interrupts.CPU18.NMI:Non-maskable_interrupts
6052 ± 28% -85.1% 899.00 ± 31% interrupts.CPU18.PMI:Performance_monitoring_interrupts
1014 ± 42% +1345.5% 14657 ± 7% interrupts.CPU18.RES:Rescheduling_interrupts
6080 ± 28% -84.7% 931.00 ± 26% interrupts.CPU180.NMI:Non-maskable_interrupts
6080 ± 28% -84.7% 931.00 ± 26% interrupts.CPU180.PMI:Performance_monitoring_interrupts
1127 ± 37% +1029.1% 12729 ± 20% interrupts.CPU180.RES:Rescheduling_interrupts
7272 ± 2% -87.1% 937.67 ± 29% interrupts.CPU181.NMI:Non-maskable_interrupts
7272 ± 2% -87.1% 937.67 ± 29% interrupts.CPU181.PMI:Performance_monitoring_interrupts
1275 ± 43% +997.4% 13991 ± 7% interrupts.CPU181.RES:Rescheduling_interrupts
7327 -88.5% 845.00 ± 23% interrupts.CPU182.NMI:Non-maskable_interrupts
7327 -88.5% 845.00 ± 23% interrupts.CPU182.PMI:Performance_monitoring_interrupts
1254 ± 35% +1010.8% 13929 ± 9% interrupts.CPU182.RES:Rescheduling_interrupts
7291 -88.7% 820.33 ± 29% interrupts.CPU183.NMI:Non-maskable_interrupts
7291 -88.7% 820.33 ± 29% interrupts.CPU183.PMI:Performance_monitoring_interrupts
1051 ± 31% +1328.7% 15015 interrupts.CPU183.RES:Rescheduling_interrupts
6074 ± 28% -83.7% 992.00 ± 13% interrupts.CPU184.NMI:Non-maskable_interrupts
6074 ± 28% -83.7% 992.00 ± 13% interrupts.CPU184.PMI:Performance_monitoring_interrupts
1005 ± 60% +1199.3% 13062 ± 9% interrupts.CPU184.RES:Rescheduling_interrupts
6004 ± 27% -84.4% 936.33 ± 8% interrupts.CPU185.NMI:Non-maskable_interrupts
6004 ± 27% -84.4% 936.33 ± 8% interrupts.CPU185.PMI:Performance_monitoring_interrupts
1107 ± 43% +1051.6% 12747 ± 3% interrupts.CPU185.RES:Rescheduling_interrupts
6046 ± 28% -83.2% 1015 ± 6% interrupts.CPU186.NMI:Non-maskable_interrupts
6046 ± 28% -83.2% 1015 ± 6% interrupts.CPU186.PMI:Performance_monitoring_interrupts
868.00 ± 32% +1444.5% 13406 ± 12% interrupts.CPU186.RES:Rescheduling_interrupts
6108 ± 27% -81.2% 1150 interrupts.CPU187.NMI:Non-maskable_interrupts
6108 ± 27% -81.2% 1150 interrupts.CPU187.PMI:Performance_monitoring_interrupts
1337 ± 71% +872.9% 13014 ± 7% interrupts.CPU187.RES:Rescheduling_interrupts
6082 ± 27% -81.3% 1134 ± 6% interrupts.CPU188.NMI:Non-maskable_interrupts
6082 ± 27% -81.3% 1134 ± 6% interrupts.CPU188.PMI:Performance_monitoring_interrupts
935.33 ± 37% +1449.4% 14491 ± 5% interrupts.CPU188.RES:Rescheduling_interrupts
6073 ± 28% -82.4% 1067 ± 10% interrupts.CPU189.NMI:Non-maskable_interrupts
6073 ± 28% -82.4% 1067 ± 10% interrupts.CPU189.PMI:Performance_monitoring_interrupts
1099 ± 37% +946.5% 11500 ± 3% interrupts.CPU189.RES:Rescheduling_interrupts
6133 ± 27% -84.9% 928.67 ± 27% interrupts.CPU19.NMI:Non-maskable_interrupts
6133 ± 27% -84.9% 928.67 ± 27% interrupts.CPU19.PMI:Performance_monitoring_interrupts
1779 ± 57% +735.5% 14868 interrupts.CPU19.RES:Rescheduling_interrupts
6101 ± 28% -83.2% 1024 ± 10% interrupts.CPU190.NMI:Non-maskable_interrupts
6101 ± 28% -83.2% 1024 ± 10% interrupts.CPU190.PMI:Performance_monitoring_interrupts
969.67 ± 35% +1256.4% 13152 ± 8% interrupts.CPU190.RES:Rescheduling_interrupts
7289 -86.1% 1013 ± 14% interrupts.CPU191.NMI:Non-maskable_interrupts
7289 -86.1% 1013 ± 14% interrupts.CPU191.PMI:Performance_monitoring_interrupts
1422 ± 43% +826.9% 13187 ± 18% interrupts.CPU191.RES:Rescheduling_interrupts
7287 -83.3% 1213 ± 3% interrupts.CPU2.NMI:Non-maskable_interrupts
7287 -83.3% 1213 ± 3% interrupts.CPU2.PMI:Performance_monitoring_interrupts
3183 ± 17% +358.7% 14602 ± 12% interrupts.CPU2.RES:Rescheduling_interrupts
7298 -88.1% 869.00 ± 30% interrupts.CPU20.NMI:Non-maskable_interrupts
7298 -88.1% 869.00 ± 30% interrupts.CPU20.PMI:Performance_monitoring_interrupts
2198 ± 60% +542.1% 14118 ± 3% interrupts.CPU20.RES:Rescheduling_interrupts
6108 ± 28% -84.3% 956.00 ± 30% interrupts.CPU21.NMI:Non-maskable_interrupts
6108 ± 28% -84.3% 956.00 ± 30% interrupts.CPU21.PMI:Performance_monitoring_interrupts
1723 ± 10% +661.1% 13113 ± 8% interrupts.CPU21.RES:Rescheduling_interrupts
6043 ± 27% -81.3% 1130 ± 4% interrupts.CPU22.NMI:Non-maskable_interrupts
6043 ± 27% -81.3% 1130 ± 4% interrupts.CPU22.PMI:Performance_monitoring_interrupts
1400 ± 34% +880.2% 13726 ± 6% interrupts.CPU22.RES:Rescheduling_interrupts
6066 ± 28% -83.6% 993.00 ± 28% interrupts.CPU23.NMI:Non-maskable_interrupts
6066 ± 28% -83.6% 993.00 ± 28% interrupts.CPU23.PMI:Performance_monitoring_interrupts
1147 ± 38% +1142.2% 14256 ± 14% interrupts.CPU23.RES:Rescheduling_interrupts
6086 ± 28% -83.9% 979.00 ± 31% interrupts.CPU24.NMI:Non-maskable_interrupts
6086 ± 28% -83.9% 979.00 ± 31% interrupts.CPU24.PMI:Performance_monitoring_interrupts
2366 ± 22% +631.5% 17306 ± 5% interrupts.CPU24.RES:Rescheduling_interrupts
6019 ± 27% -83.4% 996.67 ± 27% interrupts.CPU25.NMI:Non-maskable_interrupts
6019 ± 27% -83.4% 996.67 ± 27% interrupts.CPU25.PMI:Performance_monitoring_interrupts
1788 ± 7% +857.7% 17126 ± 4% interrupts.CPU25.RES:Rescheduling_interrupts
6093 ± 28% -84.2% 963.33 ± 28% interrupts.CPU26.NMI:Non-maskable_interrupts
6093 ± 28% -84.2% 963.33 ± 28% interrupts.CPU26.PMI:Performance_monitoring_interrupts
1562 ± 23% +938.9% 16234 ± 6% interrupts.CPU26.RES:Rescheduling_interrupts
5952 ± 28% -80.8% 1143 interrupts.CPU27.NMI:Non-maskable_interrupts
5952 ± 28% -80.8% 1143 interrupts.CPU27.PMI:Performance_monitoring_interrupts
951.00 ± 36% +1575.6% 15935 ± 5% interrupts.CPU27.RES:Rescheduling_interrupts
7301 -84.4% 1141 ± 8% interrupts.CPU28.NMI:Non-maskable_interrupts
7301 -84.4% 1141 ± 8% interrupts.CPU28.PMI:Performance_monitoring_interrupts
937.00 ± 22% +1597.5% 15906 ± 6% interrupts.CPU28.RES:Rescheduling_interrupts
7324 -83.8% 1183 ± 4% interrupts.CPU29.NMI:Non-maskable_interrupts
7324 -83.8% 1183 ± 4% interrupts.CPU29.PMI:Performance_monitoring_interrupts
610.00 ± 14% +2541.5% 16113 ± 4% interrupts.CPU29.RES:Rescheduling_interrupts
191.33 +11.0% 212.33 ± 6% interrupts.CPU3.113:PCI-MSI.1574915-edge.eth1-TxRx-3
7166 -83.6% 1175 ± 4% interrupts.CPU3.NMI:Non-maskable_interrupts
7166 -83.6% 1175 ± 4% interrupts.CPU3.PMI:Performance_monitoring_interrupts
2797 ± 42% +341.4% 12349 ± 13% interrupts.CPU3.RES:Rescheduling_interrupts
7328 -83.9% 1179 ± 3% interrupts.CPU30.NMI:Non-maskable_interrupts
7328 -83.9% 1179 ± 3% interrupts.CPU30.PMI:Performance_monitoring_interrupts
857.67 ± 23% +1876.0% 16947 ± 2% interrupts.CPU30.RES:Rescheduling_interrupts
7266 -83.9% 1172 ± 3% interrupts.CPU31.NMI:Non-maskable_interrupts
7266 -83.9% 1172 ± 3% interrupts.CPU31.PMI:Performance_monitoring_interrupts
714.00 ± 24% +2123.1% 15872 ± 5% interrupts.CPU31.RES:Rescheduling_interrupts
7279 -84.0% 1165 interrupts.CPU32.NMI:Non-maskable_interrupts
7279 -84.0% 1165 interrupts.CPU32.PMI:Performance_monitoring_interrupts
951.67 ± 39% +1676.5% 16906 ± 6% interrupts.CPU32.RES:Rescheduling_interrupts
7279 -83.3% 1213 ± 2% interrupts.CPU33.NMI:Non-maskable_interrupts
7279 -83.3% 1213 ± 2% interrupts.CPU33.PMI:Performance_monitoring_interrupts
1076 ± 32% +1431.2% 16486 ± 4% interrupts.CPU33.RES:Rescheduling_interrupts
7331 -84.3% 1152 interrupts.CPU34.NMI:Non-maskable_interrupts
7331 -84.3% 1152 interrupts.CPU34.PMI:Performance_monitoring_interrupts
867.67 ± 12% +1778.1% 16295 ± 3% interrupts.CPU34.RES:Rescheduling_interrupts
7286 -86.2% 1003 ± 27% interrupts.CPU35.NMI:Non-maskable_interrupts
7286 -86.2% 1003 ± 27% interrupts.CPU35.PMI:Performance_monitoring_interrupts
1034 ± 16% +1494.6% 16488 ± 5% interrupts.CPU35.RES:Rescheduling_interrupts
6091 ± 28% -84.6% 935.33 ± 27% interrupts.CPU36.NMI:Non-maskable_interrupts
6091 ± 28% -84.6% 935.33 ± 27% interrupts.CPU36.PMI:Performance_monitoring_interrupts
736.00 ± 12% +2215.1% 17039 ± 5% interrupts.CPU36.RES:Rescheduling_interrupts
6096 ± 28% -84.6% 938.33 ± 23% interrupts.CPU37.NMI:Non-maskable_interrupts
6096 ± 28% -84.6% 938.33 ± 23% interrupts.CPU37.PMI:Performance_monitoring_interrupts
544.00 ± 32% +2868.1% 16146 ± 3% interrupts.CPU37.RES:Rescheduling_interrupts
6099 ± 27% -83.6% 999.33 ± 28% interrupts.CPU38.NMI:Non-maskable_interrupts
6099 ± 27% -83.6% 999.33 ± 28% interrupts.CPU38.PMI:Performance_monitoring_interrupts
504.67 ± 5% +3152.8% 16416 ± 3% interrupts.CPU38.RES:Rescheduling_interrupts
6087 ± 28% -83.9% 978.67 ± 29% interrupts.CPU39.NMI:Non-maskable_interrupts
6087 ± 28% -83.9% 978.67 ± 29% interrupts.CPU39.PMI:Performance_monitoring_interrupts
872.33 ± 14% +1823.0% 16775 ± 6% interrupts.CPU39.RES:Rescheduling_interrupts
7337 -85.1% 1093 ± 3% interrupts.CPU4.NMI:Non-maskable_interrupts
7337 -85.1% 1093 ± 3% interrupts.CPU4.PMI:Performance_monitoring_interrupts
1391 ± 23% +821.2% 12813 ± 18% interrupts.CPU4.RES:Rescheduling_interrupts
6145 ± 28% -84.1% 976.33 ± 26% interrupts.CPU40.NMI:Non-maskable_interrupts
6145 ± 28% -84.1% 976.33 ± 26% interrupts.CPU40.PMI:Performance_monitoring_interrupts
590.33 ± 20% +2758.0% 16872 ± 6% interrupts.CPU40.RES:Rescheduling_interrupts
6050 ± 27% -86.9% 791.33 ± 41% interrupts.CPU41.NMI:Non-maskable_interrupts
6050 ± 27% -86.9% 791.33 ± 41% interrupts.CPU41.PMI:Performance_monitoring_interrupts
1070 ± 29% +1460.1% 16693 ± 3% interrupts.CPU41.RES:Rescheduling_interrupts
6085 ± 28% -87.0% 788.33 ± 41% interrupts.CPU42.NMI:Non-maskable_interrupts
6085 ± 28% -87.0% 788.33 ± 41% interrupts.CPU42.PMI:Performance_monitoring_interrupts
767.33 ± 14% +2130.1% 17112 ± 4% interrupts.CPU42.RES:Rescheduling_interrupts
6087 ± 28% -87.5% 760.00 ± 36% interrupts.CPU43.NMI:Non-maskable_interrupts
6087 ± 28% -87.5% 760.00 ± 36% interrupts.CPU43.PMI:Performance_monitoring_interrupts
949.00 ± 40% +1601.2% 16144 ± 5% interrupts.CPU43.RES:Rescheduling_interrupts
6087 ± 27% -87.2% 780.67 ± 29% interrupts.CPU44.NMI:Non-maskable_interrupts
6087 ± 27% -87.2% 780.67 ± 29% interrupts.CPU44.PMI:Performance_monitoring_interrupts
829.00 ± 9% +1879.8% 16412 ± 4% interrupts.CPU44.RES:Rescheduling_interrupts
6068 ± 28% -83.3% 1011 ± 28% interrupts.CPU45.NMI:Non-maskable_interrupts
6068 ± 28% -83.3% 1011 ± 28% interrupts.CPU45.PMI:Performance_monitoring_interrupts
744.67 ± 17% +2052.0% 16025 interrupts.CPU45.RES:Rescheduling_interrupts
6084 ± 27% -84.0% 971.67 ± 28% interrupts.CPU46.NMI:Non-maskable_interrupts
6084 ± 27% -84.0% 971.67 ± 28% interrupts.CPU46.PMI:Performance_monitoring_interrupts
789.00 ± 29% +2001.5% 16581 ± 3% interrupts.CPU46.RES:Rescheduling_interrupts
7329 -85.9% 1031 ± 32% interrupts.CPU47.NMI:Non-maskable_interrupts
7329 -85.9% 1031 ± 32% interrupts.CPU47.PMI:Performance_monitoring_interrupts
1152 ± 23% +1300.1% 16128 ± 2% interrupts.CPU47.RES:Rescheduling_interrupts
4861 ± 35% -82.0% 873.33 ± 26% interrupts.CPU48.NMI:Non-maskable_interrupts
4861 ± 35% -82.0% 873.33 ± 26% interrupts.CPU48.PMI:Performance_monitoring_interrupts
1966 ± 8% +743.2% 16579 interrupts.CPU48.RES:Rescheduling_interrupts
4800 ± 37% -83.0% 818.33 ± 25% interrupts.CPU49.NMI:Non-maskable_interrupts
4800 ± 37% -83.0% 818.33 ± 25% interrupts.CPU49.PMI:Performance_monitoring_interrupts
2071 ± 16% +663.1% 15805 ± 8% interrupts.CPU49.RES:Rescheduling_interrupts
7268 -84.5% 1127 ± 13% interrupts.CPU5.NMI:Non-maskable_interrupts
7268 -84.5% 1127 ± 13% interrupts.CPU5.PMI:Performance_monitoring_interrupts
1947 ± 28% +611.2% 13852 ± 12% interrupts.CPU5.RES:Rescheduling_interrupts
4906 ± 35% -83.8% 794.33 ± 30% interrupts.CPU50.NMI:Non-maskable_interrupts
4906 ± 35% -83.8% 794.33 ± 30% interrupts.CPU50.PMI:Performance_monitoring_interrupts
1007 ± 29% +1518.4% 16302 ± 8% interrupts.CPU50.RES:Rescheduling_interrupts
4879 ± 34% -83.7% 793.33 ± 23% interrupts.CPU51.NMI:Non-maskable_interrupts
4879 ± 34% -83.7% 793.33 ± 23% interrupts.CPU51.PMI:Performance_monitoring_interrupts
1043 ± 35% +1353.4% 15164 ± 10% interrupts.CPU51.RES:Rescheduling_interrupts
4877 ± 34% -82.9% 835.00 ± 24% interrupts.CPU52.NMI:Non-maskable_interrupts
4877 ± 34% -82.9% 835.00 ± 24% interrupts.CPU52.PMI:Performance_monitoring_interrupts
1197 ± 27% +1185.0% 15389 ± 11% interrupts.CPU52.RES:Rescheduling_interrupts
4846 ± 35% -76.9% 1117 ± 7% interrupts.CPU53.NMI:Non-maskable_interrupts
4846 ± 35% -76.9% 1117 ± 7% interrupts.CPU53.PMI:Performance_monitoring_interrupts
1171 ± 49% +1123.8% 14330 ± 13% interrupts.CPU53.RES:Rescheduling_interrupts
4864 ± 35% -75.5% 1193 interrupts.CPU54.NMI:Non-maskable_interrupts
4864 ± 35% -75.5% 1193 interrupts.CPU54.PMI:Performance_monitoring_interrupts
1096 ± 32% +1246.1% 14753 ± 23% interrupts.CPU54.RES:Rescheduling_interrupts
6080 ± 28% -80.4% 1193 interrupts.CPU55.NMI:Non-maskable_interrupts
6080 ± 28% -80.4% 1193 interrupts.CPU55.PMI:Performance_monitoring_interrupts
1809 ± 80% +680.0% 14113 ± 31% interrupts.CPU55.RES:Rescheduling_interrupts
6081 ± 28% -80.7% 1174 interrupts.CPU56.NMI:Non-maskable_interrupts
6081 ± 28% -80.7% 1174 interrupts.CPU56.PMI:Performance_monitoring_interrupts
819.67 ± 40% +1599.0% 13926 ± 26% interrupts.CPU56.RES:Rescheduling_interrupts
6095 ± 28% -81.1% 1154 ± 8% interrupts.CPU57.NMI:Non-maskable_interrupts
6095 ± 28% -81.1% 1154 ± 8% interrupts.CPU57.PMI:Performance_monitoring_interrupts
1084 ± 65% +1301.4% 15191 ± 13% interrupts.CPU57.RES:Rescheduling_interrupts
6063 ± 28% -81.1% 1146 ± 4% interrupts.CPU58.NMI:Non-maskable_interrupts
6063 ± 28% -81.1% 1146 ± 4% interrupts.CPU58.PMI:Performance_monitoring_interrupts
2905 ± 70% +411.5% 14859 ± 19% interrupts.CPU58.RES:Rescheduling_interrupts
6085 ± 27% -80.8% 1168 interrupts.CPU59.NMI:Non-maskable_interrupts
6085 ± 27% -80.8% 1168 interrupts.CPU59.PMI:Performance_monitoring_interrupts
1656 ± 12% +797.3% 14862 ± 25% interrupts.CPU59.RES:Rescheduling_interrupts
7207 -84.2% 1141 ± 6% interrupts.CPU6.NMI:Non-maskable_interrupts
7207 -84.2% 1141 ± 6% interrupts.CPU6.PMI:Performance_monitoring_interrupts
1571 ± 46% +763.2% 13560 ± 17% interrupts.CPU6.RES:Rescheduling_interrupts
6087 ± 27% -82.0% 1095 ± 9% interrupts.CPU60.NMI:Non-maskable_interrupts
6087 ± 27% -82.0% 1095 ± 9% interrupts.CPU60.PMI:Performance_monitoring_interrupts
1489 ± 55% +858.3% 14275 ± 27% interrupts.CPU60.RES:Rescheduling_interrupts
6096 ± 27% -81.3% 1137 ± 7% interrupts.CPU61.NMI:Non-maskable_interrupts
6096 ± 27% -81.3% 1137 ± 7% interrupts.CPU61.PMI:Performance_monitoring_interrupts
938.33 ± 31% +1544.4% 15429 ± 13% interrupts.CPU61.RES:Rescheduling_interrupts
6095 ± 28% -81.8% 1110 ± 6% interrupts.CPU62.NMI:Non-maskable_interrupts
6095 ± 28% -81.8% 1110 ± 6% interrupts.CPU62.PMI:Performance_monitoring_interrupts
1148 ± 45% +1261.3% 15637 ± 7% interrupts.CPU62.RES:Rescheduling_interrupts
6079 ± 28% -81.0% 1153 ± 9% interrupts.CPU63.NMI:Non-maskable_interrupts
6079 ± 28% -81.0% 1153 ± 9% interrupts.CPU63.PMI:Performance_monitoring_interrupts
1052 ± 39% +1369.0% 15454 ± 8% interrupts.CPU63.RES:Rescheduling_interrupts
6066 ± 27% -82.8% 1042 ± 10% interrupts.CPU64.NMI:Non-maskable_interrupts
6066 ± 27% -82.8% 1042 ± 10% interrupts.CPU64.PMI:Performance_monitoring_interrupts
975.67 ± 44% +1509.5% 15703 ± 13% interrupts.CPU64.RES:Rescheduling_interrupts
7326 -84.1% 1164 interrupts.CPU65.NMI:Non-maskable_interrupts
7326 -84.1% 1164 interrupts.CPU65.PMI:Performance_monitoring_interrupts
1090 ± 45% +1254.6% 14765 ± 8% interrupts.CPU65.RES:Rescheduling_interrupts
7331 -83.4% 1213 ± 3% interrupts.CPU66.NMI:Non-maskable_interrupts
7331 -83.4% 1213 ± 3% interrupts.CPU66.PMI:Performance_monitoring_interrupts
893.67 ± 31% +1426.7% 13643 ± 26% interrupts.CPU66.RES:Rescheduling_interrupts
7247 -86.1% 1005 ± 28% interrupts.CPU67.NMI:Non-maskable_interrupts
7247 -86.1% 1005 ± 28% interrupts.CPU67.PMI:Performance_monitoring_interrupts
1590 ± 51% +760.9% 13691 ± 22% interrupts.CPU67.RES:Rescheduling_interrupts
7324 -87.3% 929.00 ± 26% interrupts.CPU68.NMI:Non-maskable_interrupts
7324 -87.3% 929.00 ± 26% interrupts.CPU68.PMI:Performance_monitoring_interrupts
1576 ± 33% +897.8% 15732 ± 8% interrupts.CPU68.RES:Rescheduling_interrupts
7300 -87.1% 940.33 ± 27% interrupts.CPU69.NMI:Non-maskable_interrupts
7300 -87.1% 940.33 ± 27% interrupts.CPU69.PMI:Performance_monitoring_interrupts
1569 ± 53% +841.6% 14774 ± 11% interrupts.CPU69.RES:Rescheduling_interrupts
201.67 ± 10% +518.5% 1247 ± 59% interrupts.CPU7.117:PCI-MSI.1574919-edge.eth1-TxRx-7
7298 -84.9% 1105 ± 7% interrupts.CPU7.NMI:Non-maskable_interrupts
7298 -84.9% 1105 ± 7% interrupts.CPU7.PMI:Performance_monitoring_interrupts
1055 ± 34% +1153.3% 13222 ± 16% interrupts.CPU7.RES:Rescheduling_interrupts
7315 -87.7% 896.67 ± 27% interrupts.CPU70.NMI:Non-maskable_interrupts
7315 -87.7% 896.67 ± 27% interrupts.CPU70.PMI:Performance_monitoring_interrupts
1354 ± 3% +1014.7% 15100 ± 8% interrupts.CPU70.RES:Rescheduling_interrupts
7205 -87.5% 902.67 ± 29% interrupts.CPU71.NMI:Non-maskable_interrupts
7205 -87.5% 902.67 ± 29% interrupts.CPU71.PMI:Performance_monitoring_interrupts
1142 ± 48% +1241.5% 15329 ± 12% interrupts.CPU71.RES:Rescheduling_interrupts
7318 -87.8% 892.00 ± 30% interrupts.CPU72.NMI:Non-maskable_interrupts
7318 -87.8% 892.00 ± 30% interrupts.CPU72.PMI:Performance_monitoring_interrupts
1860 ± 36% +606.7% 13145 ± 16% interrupts.CPU72.RES:Rescheduling_interrupts
7291 -88.2% 858.33 ± 37% interrupts.CPU73.NMI:Non-maskable_interrupts
7291 -88.2% 858.33 ± 37% interrupts.CPU73.PMI:Performance_monitoring_interrupts
1708 ± 29% +615.3% 12220 ± 12% interrupts.CPU73.RES:Rescheduling_interrupts
7304 -88.6% 830.67 ± 32% interrupts.CPU74.NMI:Non-maskable_interrupts
7304 -88.6% 830.67 ± 32% interrupts.CPU74.PMI:Performance_monitoring_interrupts
1379 ± 34% +923.1% 14115 ± 9% interrupts.CPU74.RES:Rescheduling_interrupts
7328 -88.0% 882.00 ± 30% interrupts.CPU75.NMI:Non-maskable_interrupts
7328 -88.0% 882.00 ± 30% interrupts.CPU75.PMI:Performance_monitoring_interrupts
1682 ± 58% +719.8% 13789 ± 11% interrupts.CPU75.RES:Rescheduling_interrupts
7287 -87.4% 916.00 ± 32% interrupts.CPU76.NMI:Non-maskable_interrupts
7287 -87.4% 916.00 ± 32% interrupts.CPU76.PMI:Performance_monitoring_interrupts
1693 ± 58% +734.5% 14133 ± 12% interrupts.CPU76.RES:Rescheduling_interrupts
7270 -83.7% 1183 ± 6% interrupts.CPU77.NMI:Non-maskable_interrupts
7270 -83.7% 1183 ± 6% interrupts.CPU77.PMI:Performance_monitoring_interrupts
1321 ± 53% +924.5% 13541 ± 8% interrupts.CPU77.RES:Rescheduling_interrupts
7305 -86.0% 1021 ± 10% interrupts.CPU78.NMI:Non-maskable_interrupts
7305 -86.0% 1021 ± 10% interrupts.CPU78.PMI:Performance_monitoring_interrupts
988.00 ± 33% +1179.9% 12645 ± 10% interrupts.CPU78.RES:Rescheduling_interrupts
7149 ± 3% -85.6% 1031 ± 13% interrupts.CPU79.NMI:Non-maskable_interrupts
7149 ± 3% -85.6% 1031 ± 13% interrupts.CPU79.PMI:Performance_monitoring_interrupts
2997 ± 84% +347.8% 13420 ± 5% interrupts.CPU79.RES:Rescheduling_interrupts
7278 -84.3% 1139 ± 3% interrupts.CPU8.NMI:Non-maskable_interrupts
7278 -84.3% 1139 ± 3% interrupts.CPU8.PMI:Performance_monitoring_interrupts
1382 ± 31% +780.7% 12171 ± 27% interrupts.CPU8.RES:Rescheduling_interrupts
7282 -85.4% 1059 ± 3% interrupts.CPU80.NMI:Non-maskable_interrupts
7282 -85.4% 1059 ± 3% interrupts.CPU80.PMI:Performance_monitoring_interrupts
1199 ± 52% +929.4% 12349 ± 16% interrupts.CPU80.RES:Rescheduling_interrupts
7311 -87.5% 917.00 ± 27% interrupts.CPU81.NMI:Non-maskable_interrupts
7311 -87.5% 917.00 ± 27% interrupts.CPU81.PMI:Performance_monitoring_interrupts
1425 ± 50% +823.9% 13169 ± 12% interrupts.CPU81.RES:Rescheduling_interrupts
7304 -84.0% 1167 ± 2% interrupts.CPU82.NMI:Non-maskable_interrupts
7304 -84.0% 1167 ± 2% interrupts.CPU82.PMI:Performance_monitoring_interrupts
1239 ± 10% +901.4% 12413 ± 6% interrupts.CPU82.RES:Rescheduling_interrupts
7265 -84.2% 1145 ± 3% interrupts.CPU83.NMI:Non-maskable_interrupts
7265 -84.2% 1145 ± 3% interrupts.CPU83.PMI:Performance_monitoring_interrupts
1049 ± 46% +1131.8% 12929 ± 11% interrupts.CPU83.RES:Rescheduling_interrupts
7285 -84.8% 1107 ± 7% interrupts.CPU84.NMI:Non-maskable_interrupts
7285 -84.8% 1107 ± 7% interrupts.CPU84.PMI:Performance_monitoring_interrupts
1135 ± 36% +1035.9% 12896 ± 10% interrupts.CPU84.RES:Rescheduling_interrupts
7309 -84.2% 1152 ± 4% interrupts.CPU85.NMI:Non-maskable_interrupts
7309 -84.2% 1152 ± 4% interrupts.CPU85.PMI:Performance_monitoring_interrupts
2439 ± 72% +438.4% 13134 ± 14% interrupts.CPU85.RES:Rescheduling_interrupts
7289 -85.7% 1043 ± 8% interrupts.CPU86.NMI:Non-maskable_interrupts
7289 -85.7% 1043 ± 8% interrupts.CPU86.PMI:Performance_monitoring_interrupts
1480 ± 52% +851.0% 14078 ± 5% interrupts.CPU86.RES:Rescheduling_interrupts
7329 -86.4% 995.33 ± 14% interrupts.CPU87.NMI:Non-maskable_interrupts
7329 -86.4% 995.33 ± 14% interrupts.CPU87.PMI:Performance_monitoring_interrupts
1237 ± 56% +1018.2% 13835 ± 5% interrupts.CPU87.RES:Rescheduling_interrupts
7289 -86.4% 993.33 ± 10% interrupts.CPU88.NMI:Non-maskable_interrupts
7289 -86.4% 993.33 ± 10% interrupts.CPU88.PMI:Performance_monitoring_interrupts
913.33 ± 48% +1335.0% 13106 ± 13% interrupts.CPU88.RES:Rescheduling_interrupts
7302 -87.2% 935.33 ± 7% interrupts.CPU89.NMI:Non-maskable_interrupts
7302 -87.2% 935.33 ± 7% interrupts.CPU89.PMI:Performance_monitoring_interrupts
1116 ± 41% +1056.7% 12917 ± 10% interrupts.CPU89.RES:Rescheduling_interrupts
7244 -83.6% 1185 ± 2% interrupts.CPU9.NMI:Non-maskable_interrupts
7244 -83.6% 1185 ± 2% interrupts.CPU9.PMI:Performance_monitoring_interrupts
1888 ± 61% +626.9% 13726 ± 21% interrupts.CPU9.RES:Rescheduling_interrupts
7308 -86.2% 1009 ± 4% interrupts.CPU90.NMI:Non-maskable_interrupts
7308 -86.2% 1009 ± 4% interrupts.CPU90.PMI:Performance_monitoring_interrupts
1346 ± 44% +931.6% 13885 ± 13% interrupts.CPU90.RES:Rescheduling_interrupts
7307 -87.3% 930.00 ± 27% interrupts.CPU91.NMI:Non-maskable_interrupts
7307 -87.3% 930.00 ± 27% interrupts.CPU91.PMI:Performance_monitoring_interrupts
1213 ± 24% +983.2% 13146 ± 14% interrupts.CPU91.RES:Rescheduling_interrupts
7213 -84.4% 1125 ± 4% interrupts.CPU92.NMI:Non-maskable_interrupts
7213 -84.4% 1125 ± 4% interrupts.CPU92.PMI:Performance_monitoring_interrupts
1004 ± 36% +1035.6% 11401 ± 13% interrupts.CPU92.RES:Rescheduling_interrupts
7290 -88.3% 853.67 ± 18% interrupts.CPU93.NMI:Non-maskable_interrupts
7290 -88.3% 853.67 ± 18% interrupts.CPU93.PMI:Performance_monitoring_interrupts
1451 ± 38% +731.7% 12074 ± 5% interrupts.CPU93.RES:Rescheduling_interrupts
7329 -88.2% 863.33 ± 20% interrupts.CPU94.NMI:Non-maskable_interrupts
7329 -88.2% 863.33 ± 20% interrupts.CPU94.PMI:Performance_monitoring_interrupts
1077 ± 29% +998.3% 11836 ± 12% interrupts.CPU94.RES:Rescheduling_interrupts
7315 -88.1% 873.33 ± 27% interrupts.CPU95.NMI:Non-maskable_interrupts
7315 -88.1% 873.33 ± 27% interrupts.CPU95.PMI:Performance_monitoring_interrupts
4406 ± 12% +112.5% 9364 ± 3% interrupts.CPU95.RES:Rescheduling_interrupts
7304 -87.2% 937.33 ± 29% interrupts.CPU96.NMI:Non-maskable_interrupts
7304 -87.2% 937.33 ± 29% interrupts.CPU96.PMI:Performance_monitoring_interrupts
2450 ± 64% +467.4% 13903 ± 6% interrupts.CPU96.RES:Rescheduling_interrupts
7240 -86.8% 959.33 ± 29% interrupts.CPU97.NMI:Non-maskable_interrupts
7240 -86.8% 959.33 ± 29% interrupts.CPU97.PMI:Performance_monitoring_interrupts
1772 ± 65% +682.0% 13857 ± 11% interrupts.CPU97.RES:Rescheduling_interrupts
7297 -86.6% 980.00 ± 26% interrupts.CPU98.NMI:Non-maskable_interrupts
7297 -86.6% 980.00 ± 26% interrupts.CPU98.PMI:Performance_monitoring_interrupts
2250 ± 40% +507.6% 13671 ± 13% interrupts.CPU98.RES:Rescheduling_interrupts
7256 -87.1% 938.67 ± 26% interrupts.CPU99.NMI:Non-maskable_interrupts
7256 -87.1% 938.67 ± 26% interrupts.CPU99.PMI:Performance_monitoring_interrupts
1624 ± 38% +848.4% 15405 ± 8% interrupts.CPU99.RES:Rescheduling_interrupts
354.00 ± 4% -98.4% 5.67 ± 16% interrupts.IWI:IRQ_work_interrupts
1270621 ± 3% -84.2% 200566 ± 2% interrupts.NMI:Non-maskable_interrupts
1270621 ± 3% -84.2% 200566 ± 2% interrupts.PMI:Performance_monitoring_interrupts
245359 ± 2% +1031.2% 2775515 ± 3% interrupts.RES:Rescheduling_interrupts
1455 ± 11% -66.7% 485.33 ± 29% interrupts.TLB:TLB_shootdowns
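As a reading aid for the %change column used throughout these tables (my reading of the lkp convention: change of the second commit's value relative to the first commit's value), the overall RES row above works out to:

\[ \frac{2775515 - 245359}{245359} \times 100\% \approx +1031.2\% \]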
[ASCII trend plots omitted here; point alignment was lost in the archive copy, so only the plot titles and legend are preserved:]
vm-scalability.time.user_time
vm-scalability.time.system_time
vm-scalability.time.percent_of_cpu_this_job_got
vm-scalability.time.minor_page_faults
vm-scalability.time.involuntary_context_switches
vm-scalability.throughput
vm-scalability.median
vm-scalability.workload
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
c81b289acb ("keys: Network namespace domain tag"): WARNING: CPU: 0 PID: 53 at lib/refcount.c:190 refcount_sub_and_test_checked
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git keys-namespace
commit c81b289acb226eff31b9163aa6d6f656dcaab63c
Author: David Howells <dhowells(a)redhat.com>
AuthorDate: Tue Feb 12 16:24:10 2019 +0000
Commit: David Howells <dhowells(a)redhat.com>
CommitDate: Wed Apr 24 16:45:21 2019 +0100
keys: Network namespace domain tag
Create key domain tags for network namespaces and make it possible to
automatically tag keys that are used by networked services (e.g. AF_RXRPC,
AFS, DNS) with the default network namespace if not set by the caller.
This allows keys with the same description but in different namespaces to
coexist within a keyring.
Signed-off-by: David Howells <dhowells(a)redhat.com>
cc: netdev(a)vger.kernel.org
cc: linux-nfs(a)vger.kernel.org
cc: linux-cifs(a)vger.kernel.org
cc: linux-afs(a)lists.infradead.org
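As background for the warning below, here is a minimal userspace sketch of the matching rule the commit message describes; all struct and function names are hypothetical stand-ins, not the kernel's actual types or API. Keys are looked up by the pair (domain tag, description), which is what lets identically described keys from different network namespaces coexist in one keyring.

/*
 * Illustrative userspace model only -- not kernel code.
 * Keys are matched on (domain_tag, description), so the same description
 * can exist once per network namespace within a single keyring.
 */
#include <stdio.h>
#include <string.h>

struct key_tag { int ns_id; };   /* stand-in for a per-netns domain tag */
struct key { struct key_tag *domain_tag; const char *description; };

static int key_matches(const struct key *k, const struct key_tag *tag,
                       const char *desc)
{
	return k->domain_tag == tag && strcmp(k->description, desc) == 0;
}

int main(void)
{
	struct key_tag init_net_tag  = { .ns_id = 1 };
	struct key_tag other_net_tag = { .ns_id = 2 };
	struct key dns_a = { &init_net_tag,  "dns_resolver;example.com" };
	struct key dns_b = { &other_net_tag, "dns_resolver;example.com" };

	/* Same description, different namespace tags: a lookup scoped to
	 * the first tag only sees the key carrying that tag. */
	printf("lookup with first tag: a=%d b=%d\n",
	       key_matches(&dns_a, &init_net_tag, "dns_resolver;example.com"),
	       key_matches(&dns_b, &init_net_tag, "dns_resolver;example.com"));
	return 0;
}

The refcount_t underflow reported below is hit in key_put_tag() via key_remove_domain() during cleanup_net(), which suggests such a tag's reference count is dropped more times than it was taken when the network namespace is torn down.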
157577b813 keys: Garbage collect keys for which the domain has been removed
c81b289acb keys: Network namespace domain tag
c6495469da keys: Pass the network namespace into request_key mechanism
+----------------------------------------------------------+------------+------------+------------+
| | 157577b813 | c81b289acb | c6495469da |
+----------------------------------------------------------+------------+------------+------------+
| boot_successes | 38 | 13 | 3 |
| boot_failures | 0 | 9 | 12 |
| WARNING:at_lib/refcount.c:#refcount_sub_and_test_checked | 0 | 8 | 12 |
| EIP:refcount_sub_and_test_checked | 0 | 8 | 12 |
| INFO:rcu_preempt_self-detected_stall_on_CPU | 0 | 1 | |
| INFO:rcu_preempt_detected_stalls_on_CPUs/tasks | 0 | 1 | |
| EIP:lapic_next_deadline | 0 | 1 | |
| BUG:kernel_hang_in_test_stage | 0 | 1 | |
+----------------------------------------------------------+------------+------------+------------+
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <lkp(a)intel.com>
[ 212.107120] _uid_ses (ptrval)
[ 212.107472] <== look_up_user_keyrings() = 0
[ 212.230629] trinity-c7 (1096) used greatest stack depth: 6104 bytes left
[ 212.259636] ------------[ cut here ]------------
[ 212.260144] refcount_t: underflow; use-after-free.
[ 212.260663] WARNING: CPU: 0 PID: 53 at lib/refcount.c:190 refcount_sub_and_test_checked+0x3b/0x50
[ 212.261728] CPU: 0 PID: 53 Comm: kworker/u4:4 Not tainted 5.1.0-rc2-00029-gc81b289 #42
[ 212.262566] Workqueue: netns cleanup_net
[ 212.263005] EIP: refcount_sub_and_test_checked+0x3b/0x50
[ 212.263572] Code: f9 72 0d f0 0f b1 0a 0f 94 c3 89 de 74 25 eb e6 80 3d f1 51 53 c2 00 75 14 68 a6 2f 23 c2 c6 05 f1 51 53 c2 01 e8 0c b5 cd ff <0f> 0b 58 31 f6 eb 08 eb 06 85 c9 74 fa eb f4 89 f0 5b 5e 5f c3 89
[ 212.265479] EAX: 00000026 EBX: dbba7d80 ECX: 00000001 EDX: 00000001
[ 212.266121] ESI: dd70dee4 EDI: 00000001 EBP: dbafc0f8 ESP: dd70def4
[ 212.266759] DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068 EFLAGS: 00010296
[ 212.267454] CR0: 80050033 CR2: b76a1cd4 CR3: 1bae7000 CR4: 001406d0
[ 212.268099] Call Trace:
[ 212.268360] ? key_put_tag+0xb/0x20
[ 212.268723] ? key_remove_domain+0x9/0x13
[ 212.269141] ? cleanup_net+0x1cf/0x214
[ 212.269529] ? process_one_work+0x256/0x408
[ 212.269960] ? worker_thread+0x19a/0x251
[ 212.274848] ? kthread+0xf2/0xf7
[ 212.275195] ? process_scheduled_works+0x1e/0x1e
[ 212.275669] ? __kthread_bind_mask+0x46/0x46
[ 212.276115] ? ret_from_fork+0x2e/0x38
[ 212.276503] ---[ end trace 2482c0b0f23fbe7e ]---
[ 213.097755] trinity-c3 (1118) used greatest stack depth: 5464 bytes left
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start c6495469da006656cc4c20b69758ff3daf7a4de1 6beff00b79ca0b5caf0ce6fb8e11f57311bd95f8 --
git bisect good d91158559b0c34e2f9660bd9487f52e2cb3b513a # 02:16 G 11 0 1 1 keys: Add a 'recurse' flag for keyring searches
git bisect good 9a856261d2e73fb50b09df76e290007705772c5e # 02:42 G 10 0 2 2 keys: Include target namespace in match criteria
git bisect good 157577b8138174350a176eb209f4d7f221daa289 # 02:59 G 10 0 0 0 keys: Garbage collect keys for which the domain has been removed
git bisect bad c81b289acb226eff31b9163aa6d6f656dcaab63c # 03:15 B 3 6 0 1 keys: Network namespace domain tag
# first bad commit: [c81b289acb226eff31b9163aa6d6f656dcaab63c] keys: Network namespace domain tag
git bisect good 157577b8138174350a176eb209f4d7f221daa289 # 03:22 G 37 0 0 0 keys: Garbage collect keys for which the domain has been removed
# extra tests with debug options
git bisect bad c81b289acb226eff31b9163aa6d6f656dcaab63c # 03:39 B 1 7 0 0 keys: Network namespace domain tag
# extra tests on HEAD of dhowells-fs/keys-namespace
git bisect bad c6495469da006656cc4c20b69758ff3daf7a4de1 # 03:39 B 1 12 0 0 keys: Pass the network namespace into request_key mechanism
# extra tests on tree/branch dhowells-fs/keys-namespace
git bisect bad c6495469da006656cc4c20b69758ff3daf7a4de1 # 03:51 B 1 12 0 0 keys: Pass the network namespace into request_key mechanism
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
[latencytop] b8cb2c5f73: hackbench.throughput 257.2% improvement
by kernel test robot
Greetings,
FYI, we noticed a 257.2% improvement of hackbench.throughput due to commit:
commit: b8cb2c5f73f5495c1acfc02104254bfb3dab7864 ("latencytop: to fit the LKP hackbench test")
git://bee.sh.intel.com/git/feng/linux.git master
in testcase: hackbench
on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory
with following parameters:
nr_threads: 1600%
mode: process
ipc: pipe
runtime: 240
size: 1024
ucode: 0xb00002e
cpufreq_governor: performance
test-description: Hackbench is both a benchmark and a stress test for the Linux kernel scheduler.
test-url: https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/sc...
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
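For orientation only: the job.yaml above drives hackbench with the parameters listed earlier. A rough standalone approximation (my assumption, not the official reproducer) is that nr_threads=1600% of the 88 hardware threads is about 1408 tasks, i.e. roughly 35 hackbench groups of 40 sender/receiver processes:

# Hypothetical approximation of the lkp job, not the official reproducer:
# pipe IPC, process mode (the default), 1024-byte messages, ~35 groups (~1400 tasks).
# The loop count is arbitrary; the lkp job is bounded by runtime (240s) instead.
hackbench -p -s 1024 -g 35 -l 100000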
=========================================================================================
compiler/cpufreq_governor/ipc/kconfig/mode/nr_threads/rootfs/runtime/size/tbox_group/testcase/ucode:
gcc-7/performance/pipe/x86_64-rhel-7.6-latency-stats/process/1600%/debian-x86_64-2018-04-03.cgz/240/1024/lkp-bdw-ep3b/hackbench/0xb00002e
commit:
5068a7fd91 ("latencytop: add a lazy mode for updating global data")
b8cb2c5f73 ("latencytop: to fit the LKP hackbench test")
5068a7fd9134e09e b8cb2c5f73f5495c1acfc021042
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :5 dmesg.WARNING:at_ip__mutex_lock/0x
%stddev %change %stddev
\ | \
556036 ± 3% +257.2% 1986052 hackbench.throughput
390.61 ± 2% -25.1% 292.64 hackbench.time.elapsed_time
390.61 ± 2% -25.1% 292.64 hackbench.time.elapsed_time.max
1.242e+08 ± 5% +175.6% 3.423e+08 ± 3% hackbench.time.involuntary_context_switches
3990132 +152.8% 10088851 hackbench.time.minor_page_faults
8337 -4.4% 7974 hackbench.time.percent_of_cpu_this_job_got
31561 ± 2% -33.1% 21109 hackbench.time.system_time
1005 ± 2% +121.7% 2227 hackbench.time.user_time
9.521e+08 ± 2% +156.1% 2.439e+09 hackbench.time.voluntary_context_switches
2.112e+08 +150.0% 5.28e+08 hackbench.workload
29.87 -7.1% 27.75 ± 3% boot-time.dhcp
0.79 -52.5% 0.38 ± 55% boot-time.smp_boot
87015937 ± 20% +90.4% 1.657e+08 ± 16% numa-numastat.node0.local_node
87037234 ± 20% +90.4% 1.657e+08 ± 16% numa-numastat.node0.numa_hit
25950 ± 2% -43.1% 14771 ± 67% numa-vmstat.node1.nr_shmem
35031 ± 14% -56.6% 15202 ± 25% numa-vmstat.node1.nr_slab_reclaimable
5.24 ± 7% +4.3 9.58 mpstat.cpu.all.idle%
92.30 -10.3 81.99 mpstat.cpu.all.sys%
2.46 +6.0 8.42 mpstat.cpu.all.usr%
139677 ± 14% -56.6% 60578 ± 25% numa-meminfo.node1.KReclaimable
139677 ± 14% -56.6% 60578 ± 25% numa-meminfo.node1.SReclaimable
103484 ± 2% -42.8% 59220 ± 67% numa-meminfo.node1.Shmem
1013190 ± 10% +167.8% 2713396 ± 16% cpuidle.C1.usage
865158 ±154% +311.8% 3562326 ± 48% cpuidle.C3.usage
344469 ± 41% +933.2% 3558957 ± 12% cpuidle.POLL.time
49377 ± 16% +3181.6% 1620372 ± 9% cpuidle.POLL.usage
5.25 ± 8% +82.9% 9.60 ± 5% vmstat.cpu.id
91.25 -11.2% 81.00 vmstat.cpu.sy
2208 ± 5% +22.3% 2699 ± 3% vmstat.procs.r
2741874 +244.2% 9437814 vmstat.system.cs
153098 ± 16% +49.1% 228336 vmstat.system.in
2649 -5.8% 2495 turbostat.Avg_MHz
1008672 ± 10% +168.5% 2708744 ± 16% turbostat.C1
0.06 ± 13% +0.0 0.10 ± 23% turbostat.C1%
863671 ±154% +312.4% 3561659 ± 48% turbostat.C3
1.03 ±161% +4.9 5.95 ± 53% turbostat.C3%
1.88 ± 27% +141.0% 4.53 ± 14% turbostat.CPU%c1
57.50 +27.0% 73.00 turbostat.CoreTmp
1.24 ± 18% +82.5% 2.26 ± 9% turbostat.Pkg%pc2
63.75 +20.2% 76.60 turbostat.PkgTmp
231.22 +15.0% 265.87 turbostat.PkgWatt
2716654 ± 2% -32.8% 1826135 meminfo.Active
2716534 ± 2% -32.8% 1826015 meminfo.Active(anon)
67545470 -30.0% 47286481 meminfo.Committed_AS
23404 -9.1% 21269 ± 2% meminfo.Inactive
215130 ± 5% -41.3% 126309 meminfo.KReclaimable
857741 -28.0% 617339 meminfo.KernelStack
10602204 -24.7% 7988540 meminfo.Memused
1957259 ± 2% -24.7% 1474443 meminfo.PageTables
215130 ± 5% -41.3% 126309 meminfo.SReclaimable
1584045 -12.2% 1390546 meminfo.SUnreclaim
123441 -23.7% 94160 ± 2% meminfo.Shmem
1799176 -15.7% 1516856 meminfo.Slab
37997 ± 3% +33.2% 50604 ± 3% meminfo.max_used_kB
677734 ± 2% -32.7% 455837 proc-vmstat.nr_active_anon
1370837 +4.7% 1435898 proc-vmstat.nr_dirty_background_threshold
2745031 +4.7% 2875178 proc-vmstat.nr_dirty_threshold
285662 -2.2% 279491 proc-vmstat.nr_file_pages
13817871 +4.7% 14468766 proc-vmstat.nr_free_pages
5721 ± 2% -8.8% 5216 ± 2% proc-vmstat.nr_inactive_anon
391.50 ± 17% -100.0% 0.00 proc-vmstat.nr_isolated_anon
855834 -27.9% 616702 proc-vmstat.nr_kernel_stack
6655 +3.8% 6905 proc-vmstat.nr_mapped
488180 ± 2% -24.7% 367707 proc-vmstat.nr_page_table_pages
30861 -24.3% 23369 ± 2% proc-vmstat.nr_shmem
53717 ± 4% -41.2% 31605 proc-vmstat.nr_slab_reclaimable
395342 -12.2% 347172 proc-vmstat.nr_slab_unreclaimable
677734 ± 2% -32.7% 455837 proc-vmstat.nr_zone_active_anon
5721 ± 2% -8.8% 5216 ± 2% proc-vmstat.nr_zone_inactive_anon
19353 ± 18% -94.8% 1007 ± 84% proc-vmstat.numa_hint_faults
4032 ± 21% -79.4% 831.60 ± 91% proc-vmstat.numa_hint_faults_local
1.652e+08 ± 6% +66.9% 2.756e+08 proc-vmstat.numa_hit
1.651e+08 ± 6% +66.9% 2.755e+08 proc-vmstat.numa_local
21771 ± 29% -96.0% 881.20 ±172% proc-vmstat.numa_pages_migrated
46522 ± 31% -76.0% 11172 ±125% proc-vmstat.numa_pte_updates
28924 -21.2% 22787 ± 2% proc-vmstat.pgactivate
1.659e+08 ± 6% +67.2% 2.773e+08 proc-vmstat.pgalloc_normal
4855295 +120.2% 10691421 proc-vmstat.pgfault
1.657e+08 ± 6% +67.2% 2.771e+08 proc-vmstat.pgfree
21771 ± 29% -96.0% 881.20 ±172% proc-vmstat.pgmigrate_success
257172 ± 5% -29.5% 181268 ± 3% slabinfo.Acpi-Namespace.active_objs
2795 ± 4% -35.1% 1815 ± 2% slabinfo.Acpi-Namespace.active_slabs
285214 ± 4% -35.1% 185176 ± 2% slabinfo.Acpi-Namespace.num_objs
2795 ± 4% -35.1% 1815 ± 2% slabinfo.Acpi-Namespace.num_slabs
891550 -22.4% 691527 slabinfo.anon_vma.active_objs
20172 -23.4% 15460 slabinfo.anon_vma.active_slabs
927947 -23.4% 711191 slabinfo.anon_vma.num_objs
20172 -23.4% 15460 slabinfo.anon_vma.num_slabs
1821274 -23.5% 1392571 slabinfo.anon_vma_chain.active_objs
29604 -24.6% 22326 slabinfo.anon_vma_chain.active_slabs
1894743 -24.6% 1428904 slabinfo.anon_vma_chain.num_objs
29604 -24.6% 22326 slabinfo.anon_vma_chain.num_slabs
234740 ± 6% -42.4% 135249 ± 3% slabinfo.dentry.active_objs
6158 ± 5% -45.7% 3346 ± 2% slabinfo.dentry.active_slabs
258675 ± 5% -45.7% 140561 ± 2% slabinfo.dentry.num_objs
6158 ± 5% -45.7% 3346 ± 2% slabinfo.dentry.num_slabs
70346 -15.4% 59482 slabinfo.files_cache.active_objs
1548 -11.8% 1366 slabinfo.files_cache.active_slabs
71264 -11.7% 62892 slabinfo.files_cache.num_objs
1548 -11.8% 1366 slabinfo.files_cache.num_slabs
75948 -17.3% 62834 slabinfo.filp.active_objs
2390 -14.1% 2053 slabinfo.filp.active_slabs
76500 -14.1% 65728 slabinfo.filp.num_objs
2390 -14.1% 2053 slabinfo.filp.num_slabs
4351 ± 3% -9.2% 3948 slabinfo.kmalloc-128.active_objs
4351 ± 3% -9.2% 3948 slabinfo.kmalloc-128.num_objs
36810 -15.9% 30965 slabinfo.kmalloc-1k.active_objs
1155 -10.2% 1037 slabinfo.kmalloc-1k.active_slabs
36970 -10.2% 33198 slabinfo.kmalloc-1k.num_objs
1155 -10.2% 1037 slabinfo.kmalloc-1k.num_slabs
197549 +21.2% 239526 slabinfo.kmalloc-32.active_objs
1549 +22.7% 1901 slabinfo.kmalloc-32.active_slabs
198367 +22.7% 243417 slabinfo.kmalloc-32.num_objs
1549 +22.7% 1901 slabinfo.kmalloc-32.num_slabs
120898 ± 2% +17.4% 141980 slabinfo.kmalloc-64.active_objs
1897 ± 2% +18.7% 2250 slabinfo.kmalloc-64.active_slabs
121439 ± 2% +18.6% 144070 slabinfo.kmalloc-64.num_objs
1897 ± 2% +18.7% 2250 slabinfo.kmalloc-64.num_slabs
64688 -11.7% 57106 slabinfo.kmalloc-96.active_objs
59832 -24.0% 45475 slabinfo.mm_struct.active_objs
2034 -18.2% 1664 slabinfo.mm_struct.active_slabs
61034 -18.1% 49957 slabinfo.mm_struct.num_objs
2034 -18.2% 1664 slabinfo.mm_struct.num_slabs
1102 +8.2% 1192 ± 4% slabinfo.proc_dir_entry.active_objs
1102 +8.2% 1192 ± 4% slabinfo.proc_dir_entry.num_objs
130148 ± 10% -75.7% 31638 ± 9% slabinfo.proc_inode_cache.active_objs
3030 ± 7% -68.8% 945.40 ± 7% slabinfo.proc_inode_cache.active_slabs
148497 ± 7% -68.8% 46347 ± 7% slabinfo.proc_inode_cache.num_objs
3030 ± 7% -68.8% 945.40 ± 7% slabinfo.proc_inode_cache.num_slabs
108452 +70.6% 185059 slabinfo.selinux_file_security.active_objs
423.25 +70.9% 723.40 slabinfo.selinux_file_security.active_slabs
108452 +70.9% 185359 slabinfo.selinux_file_security.num_objs
423.25 +70.9% 723.40 slabinfo.selinux_file_security.num_slabs
1467214 -27.3% 1067303 slabinfo.vm_area_struct.active_objs
36880 -26.0% 27300 slabinfo.vm_area_struct.active_slabs
1475248 -26.0% 1092027 slabinfo.vm_area_struct.num_objs
36880 -26.0% 27300 slabinfo.vm_area_struct.num_slabs
12.69 ± 2% +35.0% 17.13 ± 3% perf-stat.i.MPKI
1.805e+10 ± 2% +71.3% 3.092e+10 ± 2% perf-stat.i.branch-instructions
1.12 ± 2% +0.5 1.64 ± 5% perf-stat.i.branch-miss-rate%
1.568e+08 ± 2% +203.0% 4.753e+08 ± 2% perf-stat.i.branch-misses
7.72 ± 4% -4.8 2.90 ± 8% perf-stat.i.cache-miss-rate%
7.629e+08 ± 2% +188.7% 2.203e+09 ± 2% perf-stat.i.cache-references
2787192 +250.1% 9757996 perf-stat.i.context-switches
3.03 -44.7% 1.68 perf-stat.i.cpi
2.346e+11 -4.5% 2.241e+11 perf-stat.i.cpu-cycles
164729 ± 2% -10.6% 147259 ± 3% perf-stat.i.cpu-migrations
0.41 ± 3% +0.3 0.75 ± 2% perf-stat.i.dTLB-load-miss-rate%
83502821 ± 2% +277.6% 3.153e+08 ± 2% perf-stat.i.dTLB-load-misses
2.149e+10 ± 2% +99.7% 4.291e+10 ± 2% perf-stat.i.dTLB-loads
9498793 ± 13% +275.7% 35687992 ± 18% perf-stat.i.dTLB-store-misses
7.436e+09 ± 4% +251.3% 2.612e+10 ± 2% perf-stat.i.dTLB-stores
32575297 ± 3% +255.6% 1.158e+08 perf-stat.i.iTLB-load-misses
16421055 +235.2% 55047563 ± 2% perf-stat.i.iTLB-loads
7.961e+10 ± 2% +88.4% 1.5e+11 ± 2% perf-stat.i.instructions
2552 ± 2% -47.5% 1341 perf-stat.i.instructions-per-iTLB-miss
0.34 +88.9% 0.64 perf-stat.i.ipc
20873 ± 6% +75.7% 36668 perf-stat.i.minor-faults
185490 -3.8% 178358 perf-stat.i.msec
64.76 ± 7% -18.7 46.01 ± 4% perf-stat.i.node-load-miss-rate%
17314052 ± 4% -20.2% 13810294 ± 4% perf-stat.i.node-load-misses
11981238 ± 18% +53.5% 18395531 ± 6% perf-stat.i.node-loads
61.68 ± 2% -39.8 21.89 ± 3% perf-stat.i.node-store-miss-rate%
6290784 -78.4% 1356282 ± 6% perf-stat.i.node-store-misses
3661561 ± 8% +24.7% 4566070 ± 4% perf-stat.i.node-stores
21031 ± 6% +74.4% 36684 perf-stat.i.page-faults
9.50 +54.5% 14.67 perf-stat.overall.MPKI
0.86 +0.7 1.54 perf-stat.overall.branch-miss-rate%
8.22 ± 3% -5.3 2.90 ± 2% perf-stat.overall.cache-miss-rate%
2.91 -48.4% 1.50 perf-stat.overall.cpi
3728 ± 3% -5.4% 3528 ± 2% perf-stat.overall.cycles-between-cache-misses
0.37 ± 4% +0.4 0.72 ± 3% perf-stat.overall.dTLB-load-miss-rate%
2472 -47.6% 1296 perf-stat.overall.instructions-per-iTLB-miss
0.34 +93.9% 0.67 perf-stat.overall.ipc
54.58 ± 8% -12.0 42.60 ± 4% perf-stat.overall.node-load-miss-rate%
61.33 ± 3% -37.9 23.40 ± 3% perf-stat.overall.node-store-miss-rate%
147708 ± 2% -45.5% 80505 perf-stat.overall.path-length
1.803e+10 +65.9% 2.991e+10 perf-stat.ps.branch-instructions
1.558e+08 +195.0% 4.595e+08 perf-stat.ps.branch-misses
7.582e+08 +181.0% 2.131e+09 perf-stat.ps.cache-references
2757599 +243.2% 9463631 perf-stat.ps.context-switches
2.322e+11 -6.2% 2.179e+11 perf-stat.ps.cpu-cycles
80103958 ± 3% +278.7% 3.033e+08 ± 3% perf-stat.ps.dTLB-load-misses
2.159e+10 +92.6% 4.158e+10 perf-stat.ps.dTLB-loads
9192998 ± 16% +273.1% 34303273 ± 18% perf-stat.ps.dTLB-store-misses
7.593e+09 +233.3% 2.531e+10 perf-stat.ps.dTLB-stores
32286670 +246.9% 1.12e+08 perf-stat.ps.iTLB-load-misses
15092273 +251.8% 53090874 perf-stat.ps.iTLB-loads
7.981e+10 +81.9% 1.452e+11 perf-stat.ps.instructions
12326 ± 2% +193.9% 36223 perf-stat.ps.minor-faults
16779326 ± 5% -16.1% 14080942 ± 4% perf-stat.ps.node-load-misses
14073881 ± 14% +34.9% 18986313 ± 5% perf-stat.ps.node-loads
6260142 ± 2% -76.9% 1446791 ± 3% perf-stat.ps.node-store-misses
3950223 ± 5% +19.9% 4737459 ± 2% perf-stat.ps.node-stores
12327 ± 2% +193.8% 36223 perf-stat.ps.page-faults
3.12e+13 ± 2% +36.3% 4.251e+13 perf-stat.total.instructions
175009 ± 6% -37.0% 110268 ± 6% sched_debug.cfs_rq:/.exec_clock.avg
185724 ± 5% -36.5% 117951 ± 6% sched_debug.cfs_rq:/.exec_clock.max
165422 ± 7% -36.9% 104374 ± 7% sched_debug.cfs_rq:/.exec_clock.min
5325 ± 17% -40.3% 3182 ± 32% sched_debug.cfs_rq:/.exec_clock.stddev
12932 ± 13% -43.6% 7293 ± 51% sched_debug.cfs_rq:/.load.avg
2006 ± 50% -98.8% 24.96 ±200% sched_debug.cfs_rq:/.load.min
231.98 ± 10% +28.7% 298.61 ± 20% sched_debug.cfs_rq:/.load_avg.max
2.66 ± 21% -82.7% 0.46 ± 69% sched_debug.cfs_rq:/.load_avg.min
16472693 ± 6% -26.9% 12043125 ± 7% sched_debug.cfs_rq:/.min_vruntime.avg
13882498 ± 8% -44.5% 7705751 ± 9% sched_debug.cfs_rq:/.min_vruntime.min
0.79 ± 7% -60.6% 0.31 ± 49% sched_debug.cfs_rq:/.nr_running.avg
0.52 ± 38% -82.6% 0.09 ±123% sched_debug.cfs_rq:/.nr_running.min
0.10 ± 21% +118.1% 0.22 ± 13% sched_debug.cfs_rq:/.nr_running.stddev
109.03 ± 17% +77.8% 193.83 ± 20% sched_debug.cfs_rq:/.nr_spread_over.max
16.27 ± 18% +73.0% 28.15 ± 16% sched_debug.cfs_rq:/.nr_spread_over.stddev
9.52 ± 5% -54.9% 4.29 ± 53% sched_debug.cfs_rq:/.runnable_load_avg.avg
531.78 ± 25% -86.7% 70.80 ±144% sched_debug.cfs_rq:/.runnable_weight.min
1549642 ± 33% +120.6% 3417772 ± 36% sched_debug.cfs_rq:/.spread0.stddev
868.57 ± 7% -37.0% 547.58 ± 17% sched_debug.cfs_rq:/.util_avg.avg
1439 ± 2% -15.6% 1215 ± 4% sched_debug.cfs_rq:/.util_avg.max
473.42 ± 15% -81.0% 90.05 ± 60% sched_debug.cfs_rq:/.util_avg.min
175.55 ± 3% +81.8% 319.12 ± 14% sched_debug.cfs_rq:/.util_avg.stddev
60.58 ± 22% -76.3% 14.35 ± 70% sched_debug.cfs_rq:/.util_est_enqueued.min
349922 ± 10% +118.3% 763706 ± 9% sched_debug.cpu.avg_idle.avg
821079 ± 10% +21.8% 1000000 sched_debug.cpu.avg_idle.max
6016 ± 80% +1167.3% 76246 ± 52% sched_debug.cpu.avg_idle.min
234256 ± 4% -26.0% 173405 ± 3% sched_debug.cpu.clock.avg
242731 ± 4% -27.1% 177020 ± 3% sched_debug.cpu.clock.max
224857 ± 5% -25.1% 168370 ± 4% sched_debug.cpu.clock.min
5192 ± 18% -47.8% 2709 ± 41% sched_debug.cpu.clock.stddev
234256 ± 4% -26.0% 173405 ± 3% sched_debug.cpu.clock_task.avg
242731 ± 4% -27.1% 177020 ± 3% sched_debug.cpu.clock_task.max
224857 ± 5% -25.1% 168370 ± 4% sched_debug.cpu.clock_task.min
5192 ± 18% -47.8% 2709 ± 41% sched_debug.cpu.clock_task.stddev
10.34 ± 5% -61.1% 4.03 ± 57% sched_debug.cpu.cpu_load[0].avg
10.40 ± 4% -61.3% 4.02 ± 50% sched_debug.cpu.cpu_load[1].avg
0.93 ± 36% -94.6% 0.05 ±200% sched_debug.cpu.cpu_load[1].min
10.39 ± 3% -61.0% 4.05 ± 48% sched_debug.cpu.cpu_load[2].avg
1.42 ± 26% -96.5% 0.05 ±200% sched_debug.cpu.cpu_load[2].min
10.37 ± 2% -61.2% 4.02 ± 46% sched_debug.cpu.cpu_load[3].avg
1.42 ± 26% -96.5% 0.05 ±200% sched_debug.cpu.cpu_load[3].min
10.46 ± 2% -61.0% 4.08 ± 44% sched_debug.cpu.cpu_load[4].avg
1.42 ± 26% -96.5% 0.05 ±200% sched_debug.cpu.cpu_load[4].min
45459 ± 14% -69.4% 13897 ± 69% sched_debug.cpu.curr->pid.avg
74366 ± 4% -26.5% 54683 ± 13% sched_debug.cpu.curr->pid.max
19206 ± 3% -43.4% 10879 ± 21% sched_debug.cpu.curr->pid.stddev
3101 ± 18% -95.7% 132.38 ±122% sched_debug.cpu.load.min
0.01 ± 18% -47.7% 0.00 ± 41% sched_debug.cpu.next_balance.stddev
198658 ± 5% -30.6% 137857 ± 5% sched_debug.cpu.nr_load_updates.avg
209966 ± 5% -29.9% 147126 ± 5% sched_debug.cpu.nr_load_updates.max
187884 ± 6% -30.3% 130862 ± 6% sched_debug.cpu.nr_load_updates.min
5398 ± 17% -35.3% 3494 ± 26% sched_debug.cpu.nr_load_updates.stddev
0.95 ± 15% -71.6% 0.27 ± 93% sched_debug.cpu.nr_running.min
5696424 ± 7% +131.2% 13168364 ± 6% sched_debug.cpu.nr_switches.avg
6273689 ± 7% +147.1% 15502800 ± 9% sched_debug.cpu.nr_switches.max
5156030 ± 7% +111.1% 10883749 ± 5% sched_debug.cpu.nr_switches.min
300344 ± 13% +451.9% 1657678 ± 27% sched_debug.cpu.nr_switches.stddev
129.59 ± 21% -100.2% -0.32 sched_debug.cpu.nr_uninterruptible.avg
390.97 ± 7% -70.2% 116.32 ± 24% sched_debug.cpu.nr_uninterruptible.max
115.42 ± 21% -53.5% 53.70 ± 17% sched_debug.cpu.nr_uninterruptible.stddev
5696201 ± 7% +131.2% 13168207 ± 6% sched_debug.cpu.sched_count.avg
6272247 ± 7% +147.2% 15503080 ± 9% sched_debug.cpu.sched_count.max
5155034 ± 7% +111.1% 10883556 ± 5% sched_debug.cpu.sched_count.min
300477 ± 13% +451.5% 1657243 ± 27% sched_debug.cpu.sched_count.stddev
6107 ± 19% +239.1% 20712 ± 7% sched_debug.cpu.sched_goidle.avg
9905 ± 13% +316.6% 41265 ± 11% sched_debug.cpu.sched_goidle.max
1348 ± 14% +815.7% 12345 ± 29% sched_debug.cpu.sched_goidle.stddev
5129265 ± 6% +124.9% 11535890 ± 6% sched_debug.cpu.ttwu_count.avg
5601508 ± 7% +130.8% 12927764 ± 7% sched_debug.cpu.ttwu_count.max
4661720 ± 6% +110.2% 9797616 ± 5% sched_debug.cpu.ttwu_count.min
246866 ± 13% +292.8% 969756 ± 21% sched_debug.cpu.ttwu_count.stddev
4228910 ± 6% +136.7% 10010242 ± 6% sched_debug.cpu.ttwu_local.avg
4696055 ± 7% +138.4% 11195101 ± 7% sched_debug.cpu.ttwu_local.max
3805028 ± 6% +126.8% 8629206 ± 6% sched_debug.cpu.ttwu_local.min
232110 ± 15% +245.5% 801907 ± 21% sched_debug.cpu.ttwu_local.stddev
223805 ± 5% -25.0% 167878 ± 4% sched_debug.cpu_clk
220085 ± 5% -25.2% 164572 ± 4% sched_debug.ktime
224496 ± 5% -24.9% 168608 ± 4% sched_debug.sched_clk
13561 ± 7% -10.1% 12184 ± 8% softirqs.CPU0.SCHED
162063 ± 2% -27.6% 117276 softirqs.CPU0.TIMER
158636 ± 2% -27.7% 114649 softirqs.CPU1.TIMER
161043 ± 2% -27.3% 117021 softirqs.CPU10.TIMER
160708 ± 2% -22.1% 125247 ± 13% softirqs.CPU11.TIMER
160613 ± 2% -27.1% 117087 softirqs.CPU12.TIMER
160408 ± 2% -27.7% 115973 softirqs.CPU13.TIMER
51537 ± 2% +10.6% 57007 ± 6% softirqs.CPU14.RCU
160691 ± 2% -26.4% 118341 softirqs.CPU14.TIMER
160897 ± 3% -26.1% 118854 softirqs.CPU15.TIMER
161211 ± 2% -26.9% 117797 softirqs.CPU16.TIMER
160596 ± 2% -27.6% 116347 softirqs.CPU17.TIMER
160537 ± 2% -26.9% 117355 softirqs.CPU18.TIMER
160269 ± 2% -27.4% 116335 softirqs.CPU19.TIMER
162850 ± 3% -26.1% 120301 softirqs.CPU2.TIMER
161076 ± 2% -26.5% 118418 softirqs.CPU20.TIMER
161169 ± 2% -26.6% 118258 softirqs.CPU21.TIMER
165599 -26.8% 121172 softirqs.CPU22.TIMER
51450 +10.2% 56676 ± 2% softirqs.CPU23.RCU
160795 ± 2% -27.0% 117340 ± 2% softirqs.CPU23.TIMER
161225 ± 2% -27.1% 117464 softirqs.CPU24.TIMER
161283 ± 2% -28.0% 116172 ± 2% softirqs.CPU25.TIMER
161501 ± 2% -29.1% 114425 ± 2% softirqs.CPU26.TIMER
161444 ± 2% -29.0% 114656 softirqs.CPU27.TIMER
161066 ± 2% -28.5% 115140 softirqs.CPU28.TIMER
160971 ± 2% -28.0% 115921 softirqs.CPU29.TIMER
11934 ± 6% -21.7% 9345 ± 15% softirqs.CPU3.SCHED
163800 -29.2% 116021 softirqs.CPU3.TIMER
161266 ± 2% -29.0% 114553 softirqs.CPU30.TIMER
161071 ± 2% -28.1% 115829 softirqs.CPU31.TIMER
161140 ± 2% -28.7% 114860 softirqs.CPU32.TIMER
173137 ± 12% -33.4% 115361 softirqs.CPU33.TIMER
161181 ± 2% -28.3% 115642 ± 2% softirqs.CPU34.TIMER
161048 ± 2% -28.8% 114589 softirqs.CPU35.TIMER
161263 ± 2% -27.1% 117605 softirqs.CPU36.TIMER
51527 ± 2% +14.6% 59067 ± 4% softirqs.CPU37.RCU
161276 ± 2% -27.2% 117443 softirqs.CPU37.TIMER
51919 ± 2% +12.0% 58137 ± 5% softirqs.CPU38.RCU
161319 ± 2% -27.2% 117446 softirqs.CPU38.TIMER
161184 ± 2% -28.3% 115592 ± 2% softirqs.CPU39.TIMER
160994 ± 2% -27.7% 116431 softirqs.CPU4.TIMER
160831 ± 2% -28.2% 115487 softirqs.CPU40.TIMER
51461 ± 2% +9.1% 56120 ± 4% softirqs.CPU41.RCU
161129 ± 2% -28.8% 114765 softirqs.CPU41.TIMER
51423 +12.7% 57945 ± 5% softirqs.CPU42.RCU
161333 ± 2% -27.1% 117675 softirqs.CPU42.TIMER
161574 ± 2% -26.0% 119537 softirqs.CPU43.TIMER
160507 ± 2% -28.0% 115558 softirqs.CPU44.TIMER
164562 ± 2% -28.6% 117506 ± 2% softirqs.CPU45.TIMER
160581 ± 2% -27.9% 115768 softirqs.CPU46.TIMER
161179 ± 3% -27.6% 116769 softirqs.CPU47.TIMER
160548 ± 2% -27.6% 116232 softirqs.CPU48.TIMER
159244 ± 3% -26.9% 116423 softirqs.CPU49.TIMER
171792 ± 9% -32.1% 116668 softirqs.CPU5.TIMER
159964 ± 2% -27.0% 116703 softirqs.CPU50.TIMER
159959 ± 2% -27.5% 115960 softirqs.CPU51.TIMER
159970 ± 2% -26.2% 118079 ± 3% softirqs.CPU52.TIMER
160492 ± 2% -27.1% 116974 softirqs.CPU53.TIMER
160389 ± 2% -27.1% 116884 softirqs.CPU54.TIMER
160266 ± 2% -27.2% 116612 softirqs.CPU55.TIMER
160044 ± 2% -27.0% 116903 softirqs.CPU56.TIMER
160237 ± 2% -27.7% 115875 softirqs.CPU57.TIMER
175564 ± 14% -32.5% 118484 ± 2% softirqs.CPU58.TIMER
160778 ± 2% -26.3% 118551 softirqs.CPU59.TIMER
160333 ± 2% -27.6% 116103 softirqs.CPU6.TIMER
160659 ± 2% -26.5% 118052 softirqs.CPU60.TIMER
160498 ± 3% -27.3% 116754 softirqs.CPU61.TIMER
160524 ± 2% -27.3% 116658 softirqs.CPU62.TIMER
160235 ± 2% -27.6% 116018 softirqs.CPU63.TIMER
160753 ± 2% -26.7% 117884 softirqs.CPU64.TIMER
173353 ± 14% -32.2% 117447 softirqs.CPU65.TIMER
160532 ± 2% -27.9% 115767 ± 2% softirqs.CPU66.TIMER
160778 ± 2% -28.9% 114315 ± 2% softirqs.CPU67.TIMER
51076 ± 2% +16.6% 59560 ± 9% softirqs.CPU68.RCU
160432 ± 2% -28.1% 115274 ± 2% softirqs.CPU68.TIMER
50873 +10.0% 55947 ± 3% softirqs.CPU69.RCU
161360 ± 2% -27.7% 116695 softirqs.CPU7.TIMER
161036 ± 2% -28.7% 114751 ± 2% softirqs.CPU70.TIMER
160621 ± 2% -28.4% 114936 ± 2% softirqs.CPU71.TIMER
62521 ± 2% -11.2% 55530 ± 3% softirqs.CPU72.RCU
160327 ± 2% -28.0% 115369 ± 2% softirqs.CPU72.TIMER
51032 +11.7% 56993 ± 5% softirqs.CPU73.RCU
160838 ± 2% -28.2% 115459 softirqs.CPU73.TIMER
160889 ± 2% -28.6% 114888 softirqs.CPU74.TIMER
160723 ± 2% -27.9% 115851 ± 2% softirqs.CPU75.TIMER
160944 ± 2% -28.4% 115183 softirqs.CPU76.TIMER
160190 ± 2% -27.8% 115699 softirqs.CPU77.TIMER
161809 ± 2% -28.7% 115349 ± 2% softirqs.CPU78.TIMER
160883 ± 2% -28.7% 114661 ± 2% softirqs.CPU79.TIMER
160607 ± 2% -22.0% 125200 ± 13% softirqs.CPU8.TIMER
160907 ± 2% -27.2% 117177 softirqs.CPU80.TIMER
160882 ± 2% -27.1% 117299 softirqs.CPU81.TIMER
160472 ± 2% -26.8% 117483 softirqs.CPU82.TIMER
160596 ± 2% -27.7% 116095 softirqs.CPU83.TIMER
160475 ± 2% -28.3% 115094 softirqs.CPU84.TIMER
160597 ± 2% -28.3% 115128 ± 2% softirqs.CPU85.TIMER
160361 ± 2% -27.0% 117001 ± 2% softirqs.CPU86.TIMER
161004 ± 2% -16.0% 135184 ± 14% softirqs.CPU87.TIMER
160580 ± 2% -27.6% 116301 softirqs.CPU9.TIMER
14211218 ± 2% -27.5% 10297195 softirqs.TIMER
56.26 -56.3 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.ttwu_do_activate
55.41 -55.4 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__account_scheduler_latency.enqueue_entity.enqueue_task_fair
59.45 -46.7 12.76 perf-profile.calltrace.cycles-pp.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up
60.36 -44.4 15.98 perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.autoremove_wake_function
61.09 -42.1 18.98 perf-profile.calltrace.cycles-pp.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common
61.20 -41.8 19.38 perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
46.45 -22.3 24.20 perf-profile.calltrace.cycles-pp.__wake_up_common_lock.pipe_write.__vfs_write.vfs_write.ksys_write
30.28 -19.7 10.59 ± 4% perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_read
30.36 -19.6 10.79 ± 4% perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_read.__vfs_read
30.75 -19.6 11.19 ± 4% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.pipe_read.__vfs_read.vfs_read.ksys_read
30.38 -19.5 10.86 ± 4% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.pipe_read.__vfs_read.vfs_read
39.39 ± 2% -17.2 22.16 perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write
39.43 ± 2% -17.1 22.33 perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write.__vfs_write
39.56 ± 2% -16.7 22.83 perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.pipe_write.__vfs_write.vfs_write
52.93 -14.3 38.63 perf-profile.calltrace.cycles-pp.pipe_write.__vfs_write.vfs_write.ksys_write.do_syscall_64
53.13 -13.7 39.45 perf-profile.calltrace.cycles-pp.__vfs_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
9.60 ± 10% -9.6 0.00 perf-profile.calltrace.cycles-pp.__GI___libc_write
9.29 ± 10% -9.3 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__GI___libc_write
9.28 ± 10% -9.3 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write
9.22 ± 10% -9.2 0.00 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write
9.17 ± 10% -9.2 0.00 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write
38.85 -8.2 30.69 perf-profile.calltrace.cycles-pp.pipe_read.__vfs_read.vfs_read.ksys_read.do_syscall_64
7.74 ± 12% -7.7 0.00 perf-profile.calltrace.cycles-pp.__GI___libc_read
39.05 -7.7 31.39 perf-profile.calltrace.cycles-pp.__vfs_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.37 ± 12% -7.4 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__GI___libc_read
7.36 ± 12% -7.4 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read
7.29 ± 12% -7.3 0.00 perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read
7.23 ± 12% -7.2 0.00 perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read
6.50 ± 9% -5.9 0.56 ± 6% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__wake_up_common_lock.pipe_write.__vfs_write
6.58 ± 9% -5.7 0.85 ± 4% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__wake_up_common_lock.pipe_write.__vfs_write.vfs_write
5.32 ± 7% -4.6 0.75 ± 8% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.try_to_wake_up.autoremove_wake_function.__wake_up_common
5.35 ± 7% -4.5 0.85 ± 7% perf-profile.calltrace.cycles-pp._raw_spin_lock.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
3.90 ± 4% -3.2 0.73 ± 10% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.__schedule.schedule.pipe_wait
3.53 ± 6% -2.7 0.81 ± 9% perf-profile.calltrace.cycles-pp._raw_spin_lock.__schedule.schedule.pipe_wait.pipe_read
0.00 +0.5 0.53 ± 2% perf-profile.calltrace.cycles-pp.__kernel_text_address.unwind_get_return_address.__save_stack_trace.save_stack_trace_tsk.__account_scheduler_latency
0.00 +0.6 0.55 ± 2% perf-profile.calltrace.cycles-pp.avc_has_perm.file_has_perm.security_file_permission.vfs_read.ksys_read
0.00 +0.6 0.57 perf-profile.calltrace.cycles-pp.__enqueue_entity.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up
0.00 +0.6 0.57 ± 4% perf-profile.calltrace.cycles-pp.___perf_sw_event.__schedule.schedule.pipe_wait.pipe_read
0.00 +0.6 0.57 ± 6% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.exit_to_usermode_loop.do_syscall_64
0.00 +0.6 0.60 perf-profile.calltrace.cycles-pp.mutex_lock.pipe_read.__vfs_read.vfs_read.ksys_read
0.00 +0.6 0.61 ± 3% perf-profile.calltrace.cycles-pp.orc_find.unwind_next_frame.__save_stack_trace.save_stack_trace_tsk.__account_scheduler_latency
0.00 +0.6 0.61 ± 6% perf-profile.calltrace.cycles-pp.__fget_light.__fdget_pos.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.6 0.63 ± 2% perf-profile.calltrace.cycles-pp.atime_needs_update.touch_atime.pipe_read.__vfs_read.vfs_read
0.00 +0.6 0.64 ± 6% perf-profile.calltrace.cycles-pp.__fdget_pos.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.7 0.66 ± 7% perf-profile.calltrace.cycles-pp.switch_mm_irqs_off.__schedule.schedule.exit_to_usermode_loop.do_syscall_64
0.00 +0.7 0.66 ± 3% perf-profile.calltrace.cycles-pp.unwind_next_frame.__unwind_start.__save_stack_trace.save_stack_trace_tsk.__account_scheduler_latency
0.00 +0.7 0.69 ± 2% perf-profile.calltrace.cycles-pp.update_curr.dequeue_entity.dequeue_task_fair.__schedule.schedule
0.00 +0.7 0.69 ± 2% perf-profile.calltrace.cycles-pp.unwind_get_return_address.__save_stack_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity
0.00 +0.7 0.70 ± 6% perf-profile.calltrace.cycles-pp.__fget_light.__fdget_pos.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.7 0.70 perf-profile.calltrace.cycles-pp.native_write_msr
0.00 +0.7 0.72 ± 2% perf-profile.calltrace.cycles-pp.set_next_entity.pick_next_task_fair.__schedule.schedule.pipe_wait
0.00 +0.7 0.72 perf-profile.calltrace.cycles-pp.file_update_time.pipe_write.__vfs_write.vfs_write.ksys_write
0.00 +0.7 0.73 ± 5% perf-profile.calltrace.cycles-pp.__fdget_pos.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.7 0.75 ± 2% perf-profile.calltrace.cycles-pp.file_has_perm.security_file_permission.vfs_write.ksys_write.do_syscall_64
0.00 +0.8 0.77 perf-profile.calltrace.cycles-pp.reweight_entity.dequeue_task_fair.__schedule.schedule.pipe_wait
0.00 +0.8 0.78 ± 4% perf-profile.calltrace.cycles-pp.fsnotify.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.8 0.80 perf-profile.calltrace.cycles-pp.__switch_to_asm
0.00 +0.8 0.85 ± 2% perf-profile.calltrace.cycles-pp.save_stack_address.__save_stack_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity
0.00 +0.8 0.85 ± 2% perf-profile.calltrace.cycles-pp.file_has_perm.security_file_permission.vfs_read.ksys_read.do_syscall_64
0.00 +0.9 0.85 perf-profile.calltrace.cycles-pp.mutex_lock.pipe_write.__vfs_write.vfs_write.ksys_write
0.00 +0.9 0.87 perf-profile.calltrace.cycles-pp.reweight_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.autoremove_wake_function
0.00 +0.9 0.92 ± 3% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.pipe_wait.pipe_write
0.00 +0.9 0.94 ± 2% perf-profile.calltrace.cycles-pp.__orc_find.unwind_next_frame.__save_stack_trace.save_stack_trace_tsk.__account_scheduler_latency
0.00 +1.1 1.10 ± 18% perf-profile.calltrace.cycles-pp.__unwind_start.__save_stack_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity
0.00 +1.2 1.16 perf-profile.calltrace.cycles-pp.mutex_unlock.pipe_write.__vfs_write.vfs_write.ksys_write
0.00 +1.2 1.18 perf-profile.calltrace.cycles-pp.touch_atime.pipe_read.__vfs_read.vfs_read.ksys_read
0.00 +1.2 1.22 ± 2% perf-profile.calltrace.cycles-pp.selinux_file_permission.security_file_permission.vfs_write.ksys_write.do_syscall_64
0.00 +1.4 1.40 ± 4% perf-profile.calltrace.cycles-pp.switch_mm_irqs_off.__schedule.schedule.pipe_wait.pipe_write
0.00 +1.5 1.48 ± 3% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.pipe_wait.pipe_read
0.00 +1.5 1.52 ± 2% perf-profile.calltrace.cycles-pp.selinux_file_permission.security_file_permission.vfs_read.ksys_read.do_syscall_64
0.00 +1.5 1.54 perf-profile.calltrace.cycles-pp.load_new_mm_cr3.switch_mm_irqs_off.__schedule.schedule.pipe_wait
0.00 +1.6 1.59 ± 3% perf-profile.calltrace.cycles-pp.cpumask_next_wrap.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function
0.26 ±100% +1.7 1.94 perf-profile.calltrace.cycles-pp.copyin.copy_page_from_iter.pipe_write.__vfs_write.vfs_write
0.00 +1.7 1.71 ± 7% perf-profile.calltrace.cycles-pp.__schedule.schedule.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.7 1.73 perf-profile.calltrace.cycles-pp.__switch_to
0.00 +1.8 1.76 ± 7% perf-profile.calltrace.cycles-pp.schedule.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.8 1.80 perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyin.copy_page_from_iter.pipe_write.__vfs_write
0.00 +1.8 1.81 ± 7% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.9 1.93 ± 4% perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.pipe_wait.pipe_write
0.58 ± 2% +2.0 2.60 perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyout.copy_page_to_iter.pipe_read.__vfs_read
0.00 +2.1 2.09 perf-profile.calltrace.cycles-pp.switch_mm_irqs_off.__schedule.schedule.pipe_wait.pipe_read
0.61 ± 2% +2.1 2.72 perf-profile.calltrace.cycles-pp.copyout.copy_page_to_iter.pipe_read.__vfs_read.vfs_read
0.00 +2.1 2.14 perf-profile.calltrace.cycles-pp.security_file_permission.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.60 ± 2% +2.2 2.84 perf-profile.calltrace.cycles-pp.security_file_permission.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +2.3 2.34 perf-profile.calltrace.cycles-pp.dequeue_entity.dequeue_task_fair.__schedule.schedule.pipe_wait
0.77 ± 2% +2.5 3.25 perf-profile.calltrace.cycles-pp.copy_page_from_iter.pipe_write.__vfs_write.vfs_write.ksys_write
4.08 +2.6 6.65 ± 4% perf-profile.calltrace.cycles-pp.pipe_wait.pipe_write.__vfs_write.vfs_write.ksys_write
32.97 +2.9 35.88 perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.42 ± 57% +3.0 3.40 ± 2% perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.pipe_wait.pipe_read
0.96 ± 2% +3.3 4.25 perf-profile.calltrace.cycles-pp.copy_page_to_iter.pipe_read.__vfs_read.vfs_read.ksys_read
0.97 ± 4% +3.5 4.50 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret
2.09 ± 10% +3.6 5.71 ± 4% perf-profile.calltrace.cycles-pp.__schedule.schedule.pipe_wait.pipe_write.__vfs_write
2.14 ± 10% +3.7 5.84 ± 4% perf-profile.calltrace.cycles-pp.schedule.pipe_wait.pipe_write.__vfs_write.vfs_write
33.20 +3.7 36.93 perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.03 ± 4% +3.7 4.77 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64
5.60 ± 5% +4.4 10.04 ± 2% perf-profile.calltrace.cycles-pp.__schedule.schedule.pipe_wait.pipe_read.__vfs_read
5.65 ± 5% +4.6 10.25 ± 2% perf-profile.calltrace.cycles-pp.schedule.pipe_wait.pipe_read.__vfs_read.vfs_read
0.00 +5.3 5.27 ± 4% perf-profile.calltrace.cycles-pp.available_idle_cpu.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function
5.86 ± 5% +5.4 11.27 ± 2% perf-profile.calltrace.cycles-pp.pipe_wait.pipe_read.__vfs_read.vfs_read.ksys_read
1.30 ± 5% +5.7 7.01 perf-profile.calltrace.cycles-pp.unwind_next_frame.__save_stack_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity
79.09 ± 2% +6.0 85.12 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
79.21 ± 2% +6.6 85.78 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
1.49 ± 6% +7.3 8.78 ± 3% perf-profile.calltrace.cycles-pp.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common
1.71 ± 6% +8.0 9.76 ± 2% perf-profile.calltrace.cycles-pp.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
2.05 ± 4% +9.1 11.13 perf-profile.calltrace.cycles-pp.__save_stack_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair
2.17 ± 4% +9.5 11.70 perf-profile.calltrace.cycles-pp.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.ttwu_do_activate
747.25 ± 44% -73.7% 196.60 ± 13% interrupts.37:PCI-MSI.3145734-edge.eth0-TxRx-5
479.75 ± 2% -27.1% 349.80 interrupts.9:IO-APIC.9-fasteoi.acpi
197972 ± 6% +37.9% 272965 ± 2% interrupts.CAL:Function_call_interrupts
2297 ± 7% +36.5% 3137 ± 2% interrupts.CPU0.CAL:Function_call_interrupts
479.75 ± 2% -85.3% 70.40 ±200% interrupts.CPU1.9:IO-APIC.9-fasteoi.acpi
2273 ± 6% +35.7% 3083 ± 4% interrupts.CPU10.CAL:Function_call_interrupts
814.00 ± 23% -86.6% 109.40 ±200% interrupts.CPU11.32:PCI-MSI.3145729-edge.eth0-TxRx-0
2269 ± 5% +34.8% 3060 ± 5% interrupts.CPU11.CAL:Function_call_interrupts
83446 ± 17% +62.9% 135970 ± 20% interrupts.CPU11.RES:Rescheduling_interrupts
2250 ± 6% +38.9% 3125 ± 3% interrupts.CPU12.CAL:Function_call_interrupts
372.50 ± 41% -90.3% 36.20 ±200% interrupts.CPU13.34:PCI-MSI.3145731-edge.eth0-TxRx-2
2247 ± 4% +39.5% 3135 ± 2% interrupts.CPU13.CAL:Function_call_interrupts
79263 ± 25% +65.9% 131487 ± 22% interrupts.CPU13.RES:Rescheduling_interrupts
655.75 ± 65% -81.5% 121.40 ±200% interrupts.CPU14.35:PCI-MSI.3145732-edge.eth0-TxRx-3
2262 ± 6% +38.0% 3123 ± 2% interrupts.CPU14.CAL:Function_call_interrupts
502.75 ± 39% -91.1% 44.80 ±200% interrupts.CPU15.36:PCI-MSI.3145733-edge.eth0-TxRx-4
2194 ± 8% +41.8% 3112 ± 3% interrupts.CPU15.CAL:Function_call_interrupts
747.25 ± 44% -93.4% 49.60 ±200% interrupts.CPU16.37:PCI-MSI.3145734-edge.eth0-TxRx-5
2223 ± 5% +40.4% 3121 ± 2% interrupts.CPU16.CAL:Function_call_interrupts
80415 ± 17% +68.2% 135291 ± 30% interrupts.CPU16.RES:Rescheduling_interrupts
449.25 ± 83% -88.6% 51.20 ±200% interrupts.CPU17.38:PCI-MSI.3145735-edge.eth0-TxRx-6
2244 ± 7% +39.6% 3133 ± 2% interrupts.CPU17.CAL:Function_call_interrupts
86887 ± 17% +71.2% 148712 ± 28% interrupts.CPU17.RES:Rescheduling_interrupts
2264 ± 5% +34.7% 3049 ± 8% interrupts.CPU19.CAL:Function_call_interrupts
2306 ± 7% +35.9% 3133 ± 2% interrupts.CPU2.CAL:Function_call_interrupts
2269 ± 7% +36.1% 3089 ± 4% interrupts.CPU20.CAL:Function_call_interrupts
2259 ± 7% +38.1% 3119 ± 2% interrupts.CPU21.CAL:Function_call_interrupts
2260 ± 8% +38.8% 3137 ± 2% interrupts.CPU22.CAL:Function_call_interrupts
76602 ± 20% +182.1% 216109 ± 23% interrupts.CPU22.RES:Rescheduling_interrupts
2246 ± 6% +40.0% 3144 ± 2% interrupts.CPU23.CAL:Function_call_interrupts
65952 ± 25% +201.6% 198909 ± 17% interrupts.CPU23.RES:Rescheduling_interrupts
2262 ± 7% +38.3% 3128 ± 2% interrupts.CPU24.CAL:Function_call_interrupts
67151 ± 22% +211.4% 209090 ± 13% interrupts.CPU24.RES:Rescheduling_interrupts
2276 ± 6% +37.9% 3139 ± 2% interrupts.CPU25.CAL:Function_call_interrupts
63188 ± 34% +214.4% 198647 ± 16% interrupts.CPU25.RES:Rescheduling_interrupts
2217 ± 10% +42.1% 3151 ± 3% interrupts.CPU26.CAL:Function_call_interrupts
66428 ± 25% +173.8% 181852 ± 17% interrupts.CPU26.RES:Rescheduling_interrupts
2271 ± 6% +37.9% 3133 ± 2% interrupts.CPU27.CAL:Function_call_interrupts
67143 ± 22% +181.1% 188716 ± 16% interrupts.CPU27.RES:Rescheduling_interrupts
2293 ± 6% +36.7% 3134 ± 2% interrupts.CPU28.CAL:Function_call_interrupts
67974 ± 25% +195.1% 200577 ± 14% interrupts.CPU28.RES:Rescheduling_interrupts
2262 ± 7% +37.1% 3102 ± 2% interrupts.CPU29.CAL:Function_call_interrupts
69463 ± 35% +191.1% 202241 ± 23% interrupts.CPU29.RES:Rescheduling_interrupts
2281 ± 6% +37.6% 3139 ± 2% interrupts.CPU3.CAL:Function_call_interrupts
81877 ± 17% +56.1% 127829 ± 31% interrupts.CPU3.RES:Rescheduling_interrupts
2248 ± 8% +39.3% 3132 ± 2% interrupts.CPU30.CAL:Function_call_interrupts
67898 ± 28% +174.6% 186428 ± 15% interrupts.CPU30.RES:Rescheduling_interrupts
2274 ± 7% +34.8% 3065 ± 5% interrupts.CPU31.CAL:Function_call_interrupts
66611 ± 22% +195.4% 196777 ± 20% interrupts.CPU31.RES:Rescheduling_interrupts
2226 ± 9% +34.0% 2983 ± 6% interrupts.CPU32.CAL:Function_call_interrupts
68504 ± 14% +172.5% 186681 ± 15% interrupts.CPU32.RES:Rescheduling_interrupts
2181 ± 3% +42.6% 3111 ± 2% interrupts.CPU33.CAL:Function_call_interrupts
72211 ± 21% +175.7% 199106 ± 16% interrupts.CPU33.RES:Rescheduling_interrupts
2246 ± 7% +39.6% 3137 ± 3% interrupts.CPU34.CAL:Function_call_interrupts
61475 ± 25% +249.3% 214755 ± 14% interrupts.CPU34.RES:Rescheduling_interrupts
2284 ± 7% +33.5% 3048 ± 5% interrupts.CPU35.CAL:Function_call_interrupts
60734 ± 19% +234.7% 203299 ± 11% interrupts.CPU35.RES:Rescheduling_interrupts
2247 ± 5% +37.4% 3087 ± 2% interrupts.CPU36.CAL:Function_call_interrupts
65336 ± 36% +238.4% 221101 ± 16% interrupts.CPU36.RES:Rescheduling_interrupts
2285 ± 6% +35.5% 3095 ± 3% interrupts.CPU37.CAL:Function_call_interrupts
64031 ± 35% +185.1% 182555 ± 19% interrupts.CPU37.RES:Rescheduling_interrupts
2257 ± 7% +39.4% 3147 ± 3% interrupts.CPU38.CAL:Function_call_interrupts
64266 ± 27% +218.0% 204388 ± 20% interrupts.CPU38.RES:Rescheduling_interrupts
2254 ± 6% +38.3% 3117 ± 3% interrupts.CPU39.CAL:Function_call_interrupts
68995 ± 30% +168.4% 185151 ± 17% interrupts.CPU39.RES:Rescheduling_interrupts
2257 ± 6% +35.3% 3053 ± 6% interrupts.CPU4.CAL:Function_call_interrupts
2249 ± 7% +36.2% 3063 ± 2% interrupts.CPU40.CAL:Function_call_interrupts
63012 ± 31% +189.1% 182168 ± 20% interrupts.CPU40.RES:Rescheduling_interrupts
2238 ± 4% +40.8% 3151 ± 3% interrupts.CPU41.CAL:Function_call_interrupts
7854 -32.4% 5312 ± 34% interrupts.CPU41.NMI:Non-maskable_interrupts
7854 -32.4% 5312 ± 34% interrupts.CPU41.PMI:Performance_monitoring_interrupts
65417 ± 32% +188.0% 188433 ± 22% interrupts.CPU41.RES:Rescheduling_interrupts
2158 ± 5% +45.2% 3134 ± 3% interrupts.CPU42.CAL:Function_call_interrupts
66822 ± 29% +170.7% 180901 ± 17% interrupts.CPU42.RES:Rescheduling_interrupts
2221 ± 7% +35.9% 3017 ± 7% interrupts.CPU43.CAL:Function_call_interrupts
63373 ± 22% +215.6% 199988 ± 16% interrupts.CPU43.RES:Rescheduling_interrupts
2272 ± 6% +37.1% 3115 ± 3% interrupts.CPU44.CAL:Function_call_interrupts
2296 ± 6% +22.9% 2823 ± 14% interrupts.CPU45.CAL:Function_call_interrupts
2219 ± 9% +41.1% 3131 ± 3% interrupts.CPU46.CAL:Function_call_interrupts
76020 ± 26% +61.1% 122458 ± 22% interrupts.CPU46.RES:Rescheduling_interrupts
2264 ± 6% +38.6% 3136 ± 3% interrupts.CPU47.CAL:Function_call_interrupts
81164 ± 18% +58.7% 128808 ± 23% interrupts.CPU47.RES:Rescheduling_interrupts
2215 ± 8% +40.2% 3106 ± 3% interrupts.CPU48.CAL:Function_call_interrupts
77553 ± 19% +70.9% 132550 ± 22% interrupts.CPU48.RES:Rescheduling_interrupts
2211 ± 9% +42.2% 3144 ± 3% interrupts.CPU49.CAL:Function_call_interrupts
2209 ± 11% +41.5% 3125 ± 2% interrupts.CPU5.CAL:Function_call_interrupts
84336 ± 20% +62.4% 136987 ± 24% interrupts.CPU5.RES:Rescheduling_interrupts
2261 ± 5% +37.8% 3116 ± 2% interrupts.CPU50.CAL:Function_call_interrupts
80907 ± 21% +85.3% 149912 ± 17% interrupts.CPU50.RES:Rescheduling_interrupts
2269 ± 6% +37.8% 3126 ± 2% interrupts.CPU51.CAL:Function_call_interrupts
81326 ± 21% +64.3% 133624 ± 30% interrupts.CPU51.RES:Rescheduling_interrupts
2260 ± 6% +39.4% 3151 ± 3% interrupts.CPU52.CAL:Function_call_interrupts
2256 ± 6% +35.5% 3058 ± 7% interrupts.CPU53.CAL:Function_call_interrupts
2240 ± 5% +40.8% 3154 ± 3% interrupts.CPU54.CAL:Function_call_interrupts
2211 ± 8% +41.5% 3127 ± 2% interrupts.CPU55.CAL:Function_call_interrupts
80221 ± 19% +54.1% 123625 ± 31% interrupts.CPU55.RES:Rescheduling_interrupts
2250 ± 6% +39.6% 3142 ± 2% interrupts.CPU56.CAL:Function_call_interrupts
2211 ± 6% +41.1% 3120 ± 2% interrupts.CPU57.CAL:Function_call_interrupts
2259 ± 5% +38.1% 3121 ± 2% interrupts.CPU58.CAL:Function_call_interrupts
2256 ± 6% +39.1% 3138 ± 2% interrupts.CPU59.CAL:Function_call_interrupts
2262 ± 6% +38.4% 3131 ± 2% interrupts.CPU6.CAL:Function_call_interrupts
82896 ± 24% +71.2% 141899 ± 18% interrupts.CPU6.RES:Rescheduling_interrupts
2256 ± 5% +38.5% 3124 ± 2% interrupts.CPU60.CAL:Function_call_interrupts
80784 ± 12% +65.0% 133265 ± 24% interrupts.CPU60.RES:Rescheduling_interrupts
2279 ± 7% +38.0% 3145 ± 2% interrupts.CPU61.CAL:Function_call_interrupts
84830 ± 17% +60.6% 136247 ± 29% interrupts.CPU61.RES:Rescheduling_interrupts
2215 ± 4% +40.1% 3104 ± 5% interrupts.CPU62.CAL:Function_call_interrupts
2246 ± 5% +39.7% 3136 ± 2% interrupts.CPU63.CAL:Function_call_interrupts
2265 ± 6% +38.5% 3137 ± 2% interrupts.CPU64.CAL:Function_call_interrupts
86987 ± 10% +50.9% 131243 ± 26% interrupts.CPU64.RES:Rescheduling_interrupts
2234 ± 7% +41.1% 3151 ± 3% interrupts.CPU65.CAL:Function_call_interrupts
78629 ± 23% +71.5% 134864 ± 26% interrupts.CPU65.RES:Rescheduling_interrupts
2261 ± 7% +37.9% 3118 ± 2% interrupts.CPU66.CAL:Function_call_interrupts
62225 ± 31% +205.0% 189770 ± 17% interrupts.CPU66.RES:Rescheduling_interrupts
2182 ± 9% +44.1% 3145 ± 3% interrupts.CPU67.CAL:Function_call_interrupts
66528 ± 27% +192.1% 194354 ± 13% interrupts.CPU67.RES:Rescheduling_interrupts
2246 ± 6% +40.3% 3152 ± 2% interrupts.CPU68.CAL:Function_call_interrupts
61256 ± 26% +206.4% 187706 ± 16% interrupts.CPU68.RES:Rescheduling_interrupts
2277 ± 5% +38.2% 3147 ± 3% interrupts.CPU69.CAL:Function_call_interrupts
59444 ± 37% +223.0% 192005 ± 17% interrupts.CPU69.RES:Rescheduling_interrupts
2276 ± 6% +31.0% 2983 ± 12% interrupts.CPU7.CAL:Function_call_interrupts
82934 ± 21% +69.0% 140192 ± 30% interrupts.CPU7.RES:Rescheduling_interrupts
2217 ± 9% +40.6% 3118 ± 2% interrupts.CPU70.CAL:Function_call_interrupts
60661 ± 30% +188.8% 175171 ± 18% interrupts.CPU70.RES:Rescheduling_interrupts
2264 ± 6% +38.8% 3142 ± 2% interrupts.CPU71.CAL:Function_call_interrupts
6886 ± 24% -22.8% 5316 ± 35% interrupts.CPU71.NMI:Non-maskable_interrupts
6886 ± 24% -22.8% 5316 ± 35% interrupts.CPU71.PMI:Performance_monitoring_interrupts
71409 ± 15% +169.6% 192537 ± 13% interrupts.CPU71.RES:Rescheduling_interrupts
2262 ± 5% +38.0% 3121 ± 2% interrupts.CPU72.CAL:Function_call_interrupts
67503 ± 20% +223.0% 218026 ± 10% interrupts.CPU72.RES:Rescheduling_interrupts
2282 ± 7% +36.8% 3122 ± 4% interrupts.CPU73.CAL:Function_call_interrupts
77143 ± 26% +138.6% 184062 ± 19% interrupts.CPU73.RES:Rescheduling_interrupts
2218 ± 8% +41.7% 3143 ± 2% interrupts.CPU74.CAL:Function_call_interrupts
69660 ± 19% +188.1% 200672 ± 24% interrupts.CPU74.RES:Rescheduling_interrupts
2280 ± 6% +37.5% 3135 ± 2% interrupts.CPU75.CAL:Function_call_interrupts
68991 ± 26% +192.4% 201763 ± 23% interrupts.CPU75.RES:Rescheduling_interrupts
64969 ± 22% +206.7% 199282 ± 17% interrupts.CPU76.RES:Rescheduling_interrupts
2250 ± 7% +35.9% 3057 ± 6% interrupts.CPU77.CAL:Function_call_interrupts
63736 ± 26% +236.0% 214164 ± 17% interrupts.CPU77.RES:Rescheduling_interrupts
2247 ± 6% +39.6% 3138 ± 2% interrupts.CPU78.CAL:Function_call_interrupts
7843 -32.3% 5311 ± 34% interrupts.CPU78.NMI:Non-maskable_interrupts
7843 -32.3% 5311 ± 34% interrupts.CPU78.PMI:Performance_monitoring_interrupts
62767 ± 21% +223.0% 202760 ± 21% interrupts.CPU78.RES:Rescheduling_interrupts
2251 ± 6% +39.6% 3142 ± 2% interrupts.CPU79.CAL:Function_call_interrupts
7859 -32.4% 5315 ± 35% interrupts.CPU79.NMI:Non-maskable_interrupts
7859 -32.4% 5315 ± 35% interrupts.CPU79.PMI:Performance_monitoring_interrupts
60001 ± 31% +211.8% 187068 ± 14% interrupts.CPU79.RES:Rescheduling_interrupts
2277 ± 6% +35.4% 3084 ± 2% interrupts.CPU8.CAL:Function_call_interrupts
2205 ± 4% +40.1% 3089 ± 3% interrupts.CPU80.CAL:Function_call_interrupts
67508 ± 28% +189.3% 195321 ± 15% interrupts.CPU80.RES:Rescheduling_interrupts
2264 ± 6% +40.0% 3170 ± 2% interrupts.CPU81.CAL:Function_call_interrupts
63910 ± 19% +178.8% 178152 ± 22% interrupts.CPU81.RES:Rescheduling_interrupts
2291 ± 6% +36.4% 3125 ± 2% interrupts.CPU82.CAL:Function_call_interrupts
7854 -32.5% 5301 ± 35% interrupts.CPU82.NMI:Non-maskable_interrupts
7854 -32.5% 5301 ± 35% interrupts.CPU82.PMI:Performance_monitoring_interrupts
65050 ± 21% +195.7% 192366 ± 20% interrupts.CPU82.RES:Rescheduling_interrupts
2257 ± 7% +38.8% 3132 ± 2% interrupts.CPU83.CAL:Function_call_interrupts
67635 ± 35% +176.1% 186742 ± 20% interrupts.CPU83.RES:Rescheduling_interrupts
2262 ± 6% +38.2% 3126 ± 2% interrupts.CPU84.CAL:Function_call_interrupts
62825 ± 36% +202.7% 190141 ± 20% interrupts.CPU84.RES:Rescheduling_interrupts
2213 ± 2% +41.5% 3130 ± 2% interrupts.CPU85.CAL:Function_call_interrupts
63990 ± 31% +203.5% 194231 ± 15% interrupts.CPU85.RES:Rescheduling_interrupts
2258 ± 6% +39.2% 3144 ± 2% interrupts.CPU86.CAL:Function_call_interrupts
62782 ± 27% +173.8% 171923 ± 18% interrupts.CPU86.RES:Rescheduling_interrupts
2172 ± 8% +43.8% 3123 ± 3% interrupts.CPU87.CAL:Function_call_interrupts
60011 ± 25% +221.7% 193085 ± 20% interrupts.CPU87.RES:Rescheduling_interrupts
2189 ± 8% +42.6% 3122 ± 3% interrupts.CPU9.CAL:Function_call_interrupts
193.50 ± 2% -14.9% 164.60 ± 2% interrupts.IWI:IRQ_work_interrupts
6495389 ± 9% +119.9% 14284697 ± 2% interrupts.RES:Rescheduling_interrupts
93.50 ± 13% -52.5% 44.40 ± 27% interrupts.TLB:TLB_shootdowns
hackbench.throughput
2.2e+06 +-+---------------------------------------------------------------+
| O |
2e+06 O-+ O O O |
1.8e+06 +-+ |
| |
1.6e+06 +-+ |
1.4e+06 +-+ |
| |
1.2e+06 +-+ |
1e+06 +-+ |
| |
800000 +-+ |
600000 +-+ |
|.....+....+.....+....+.....+....+.....+....+.....+....+.....+....|
400000 +-+---------------------------------------------------------------+
hackbench.workload
5.5e+08 +-+---------------------------------------------------------------+
O O O O O |
5e+08 +-+ |
| |
4.5e+08 +-+ |
| |
4e+08 +-+ |
| |
3.5e+08 +-+ |
| |
3e+08 +-+ |
| |
2.5e+08 +-+ |
| |
2e+08 +-+---------------------------------------------------------------+
hackbench.time.user_time
2400 +-+------------------------------------------------------------------+
O O |
2200 +-+ O O O |
2000 +-+ |
| |
1800 +-+ |
| |
1600 +-+ |
| |
1400 +-+ |
1200 +-+ |
| |
1000 +-+...+.....+....+.....+.....+.....+....+.....+..... ...+....+.....|
| +.. |
800 +-+------------------------------------------------------------------+
hackbench.time.system_time
36000 +-+-----------------------------------------------------------------+
| |
34000 +-+ ...+..... |
32000 +-+...+....+.. +.... ...+..... ..+.....|
| +.. +.... ...+.. |
30000 +-+ +.....+.. |
| |
28000 +-+ |
| |
26000 +-+ |
24000 +-+ |
| |
22000 +-+ |
O O O O O |
20000 +-+-----------------------------------------------------------------+
hackbench.time.percent_of_cpu_this_job_got
8450 +-+------------------------------------------------------------------+
8400 +-+ ..+.. ...+.. |
|..... ...+.. .. .+..... ...+.. .. |
8350 +-+ +.. . ... +....+.. . ..+.....|
8300 +-+ +. +.. |
| |
8250 +-+ |
8200 +-+ |
8150 +-+ |
| |
8100 +-+ |
8050 +-+ |
| |
8000 O-+ O O O |
7950 +-+--------------O---------------------------------------------------+
hackbench.time.elapsed_time
420 +-+-------------------------------------------------------------------+
| ... . |
400 +-+...+.....+. +.... .+..... ...+.....|
| . ... +.... .+.. |
380 +-+ +. . .. |
| +.....+. |
360 +-+ |
| |
340 +-+ |
| |
320 +-+ |
| |
300 +-+ |
O O O O O |
280 +-+-------------------------------------------------------------------+
hackbench.time.elapsed_time.max
420 +-+-------------------------------------------------------------------+
| ... . |
400 +-+...+.....+. +.... .+..... ...+.....|
| . ... +.... .+.. |
380 +-+ +. . .. |
| +.....+. |
360 +-+ |
| |
340 +-+ |
| |
320 +-+ |
| |
300 +-+ |
O O O O O |
280 +-+-------------------------------------------------------------------+
hackbench.time.minor_page_faults
1.1e+07 +-+---------------------------------------------------------------+
| O O O |
1e+07 O-+ O |
9e+06 +-+ |
| |
8e+06 +-+ |
| |
7e+06 +-+ |
| |
6e+06 +-+ |
5e+06 +-+ |
| |
4e+06 +-+...+....+.....+....+.....+....+.....+....+.....+....+.....+....|
| |
3e+06 +-+---------------------------------------------------------------+
hackbench.time.voluntary_context_switches
2.6e+09 +-+---------------------------------------------------------------+
O O O O |
2.4e+09 +-+ O |
2.2e+09 +-+ |
| |
2e+09 +-+ |
1.8e+09 +-+ |
| |
1.6e+09 +-+ |
1.4e+09 +-+ |
| |
1.2e+09 +-+ |
1e+09 +-+ ..+.....+.... |
|.....+.. +.....+....+.....+....+.....+....+.....+....|
8e+08 +-+---------------------------------------------------------------+
hackbench.time.involuntary_context_switches
4e+08 +-+---------------------------------------------------------------+
| |
3.5e+08 O-+ O |
| O O |
| O |
3e+08 +-+ |
| |
2.5e+08 +-+ |
| |
2e+08 +-+ |
| |
| |
1.5e+08 +-+...+....+.....+....+..... ..+.....+.... |
|.. +.. +.....+....+.....+....|
1e+08 +-+---------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
457dddfe96: hackbench.throughput 262.7% improvement
by kernel test robot
Greetings,
FYI, we noticed a 262.7% improvement of hackbench.throughput due to commit:
commit: 457dddfe9640d12eb4c280ae65da8cb514092d53 ("simulate the mode == 2 case")
git://bee.sh.intel.com/git/feng/linux.git master
in testcase: hackbench
on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory
with following parameters:
nr_threads: 1600%
mode: process
ipc: pipe
runtime: 240
size: 1024
ucode: 0xb00002e
cpufreq_governor: performance
test-description: Hackbench is both a benchmark and a stress test for the Linux kernel scheduler.
test-url: https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/sc...
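For a quick sanity check outside the LKP harness, the parameters above map only roughly onto a stand-alone hackbench run. The command below is an assumption, not taken from job.yaml: nr_threads of 1600% on an 88-CPU machine is about 1408 tasks, which at hackbench's default 40 tasks per group (20 senders, 20 receivers) comes to roughly 35 groups, and the loop count is arbitrary because LKP drives the run by wall-clock runtime (240s) rather than a fixed loop count. Flags are those of the rt-tests/LTP hackbench; process mode is also its default.
# hypothetical stand-alone approximation of this job (not the attached job.yaml)
hackbench --process --pipe -g 35 -s 1024 -l 100000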
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
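For an explicit A/B check of the two commits compared below, rather than relying on the robot's estimate, one possible workflow (a sketch based on standard kernel build steps, not part of the attached job) is to build, install, and boot each kernel in turn and re-run the same job under it:
# hypothetical per-kernel pass; repeat once for 1847077bcb and once for 457dddfe96
git checkout 457dddfe96
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 -j"$(nproc)"
make modules_install install      # then reboot into this kernel
bin/lkp run job.yaml              # collect hackbench.throughput from the run's output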
=========================================================================================
compiler/cpufreq_governor/ipc/kconfig/mode/nr_threads/rootfs/runtime/size/tbox_group/testcase/ucode:
gcc-7/performance/pipe/x86_64-rhel-7.6-latency-stats/process/1600%/debian-x86_64-2018-04-03.cgz/240/1024/lkp-bdw-ep3b/hackbench/0xb00002e
commit:
1847077bcb ("fix one issue in latencytop.h")
457dddfe96 ("simulate the mode == 2 case")
1847077bcb8f9ac8 457dddfe9640d12eb4c280ae65d
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
2:4 -50% :5 dmesg.WARNING:at#for_ip_interrupt_entry/0x
:4 25% 1:5 dmesg.WARNING:at_ip__slab_free/0x
%stddev %change %stddev
\ | \
543723 ± 2% +262.7% 1972152 hackbench.throughput
399.40 ± 2% -26.3% 294.20 hackbench.time.elapsed_time
399.40 ± 2% -26.3% 294.20 hackbench.time.elapsed_time.max
1.31e+08 ± 2% +164.5% 3.466e+08 hackbench.time.involuntary_context_switches
3963040 +151.4% 9962004 hackbench.time.minor_page_faults
8316 -4.0% 7985 hackbench.time.percent_of_cpu_this_job_got
32203 ± 2% -34.0% 21267 hackbench.time.system_time
1016 +119.3% 2228 hackbench.time.user_time
9.718e+08 +152.2% 2.451e+09 hackbench.time.voluntary_context_switches
2.112e+08 +150.0% 5.28e+08 hackbench.workload
5.50 +3.9 9.39 mpstat.cpu.all.idle%
92.05 -9.8 82.21 mpstat.cpu.all.sys%
2.44 +5.9 8.38 mpstat.cpu.all.usr%
21091 ± 45% -54.6% 9572 ± 99% numa-vmstat.node0.nr_shmem
33179 ± 29% -51.0% 16246 ± 26% numa-vmstat.node0.nr_slab_reclaimable
329413 ± 69% -44.6% 182522 ± 94% numa-vmstat.node1.nr_active_anon
329413 ± 69% -44.6% 182522 ± 94% numa-vmstat.node1.nr_zone_active_anon
91.25 -11.2% 81.00 vmstat.cpu.sy
2116 ± 3% +30.5% 2761 vmstat.procs.r
2749115 +243.7% 9449560 vmstat.system.cs
189522 +20.8% 228852 vmstat.system.in
133046 ± 29% -51.2% 64883 ± 26% numa-meminfo.node0.KReclaimable
133046 ± 29% -51.2% 64883 ± 26% numa-meminfo.node0.SReclaimable
84670 ± 45% -54.5% 38530 ± 99% numa-meminfo.node0.Shmem
1317569 ± 69% -44.5% 731783 ± 94% numa-meminfo.node1.Active
1317508 ± 69% -44.5% 731731 ± 94% numa-meminfo.node1.Active(anon)
5208778 ± 75% -36.1% 3329598 ± 87% numa-meminfo.node1.MemUsed
1011468 ± 10% +163.4% 2663717 ± 9% cpuidle.C1.usage
19610782 ± 47% -34.8% 12784822 ± 5% cpuidle.C1E.time
1.173e+09 ± 18% -43.4% 6.641e+08 ± 17% cpuidle.C3.time
6.965e+08 ± 33% +141.9% 1.685e+09 ± 5% cpuidle.C6.time
1032568 ± 38% +136.4% 2441154 cpuidle.C6.usage
230036 ± 9% +1502.8% 3687119 ± 5% cpuidle.POLL.time
36376 ± 6% +4538.2% 1687232 ± 5% cpuidle.POLL.usage
2708607 -34.0% 1786872 meminfo.Active
2708487 -34.0% 1786752 meminfo.Active(anon)
67673499 -30.1% 47322963 meminfo.Committed_AS
23297 -9.1% 21179 meminfo.Inactive
22995 -9.2% 20886 meminfo.Inactive(anon)
211046 ± 8% -39.5% 127610 ± 2% meminfo.KReclaimable
859995 -28.1% 618064 meminfo.KernelStack
10609144 -24.9% 7966067 meminfo.Memused
1984214 ± 3% -23.4% 1519688 ± 2% meminfo.PageTables
211046 ± 8% -39.5% 127610 ± 2% meminfo.SReclaimable
1584627 -12.3% 1389422 meminfo.SUnreclaim
120730 -20.8% 95658 ± 4% meminfo.Shmem
1795674 -15.5% 1517033 meminfo.Slab
35795 ± 3% +38.6% 49616 ± 2% meminfo.max_used_kB
2643 -5.7% 2493 turbostat.Avg_MHz
1007223 ± 10% +164.1% 2659677 ± 9% turbostat.C1
0.06 ± 6% +0.0 0.09 ± 7% turbostat.C1%
3.33 ± 19% -0.8 2.55 ± 17% turbostat.C3%
1019101 ± 39% +138.5% 2430770 turbostat.C6
1.95 ± 33% +4.5 6.46 ± 6% turbostat.C6%
2.66 ± 8% +80.9% 4.82 ± 5% turbostat.CPU%c1
1.82 ± 16% -71.1% 0.53 ± 32% turbostat.CPU%c3
0.85 ± 36% +347.5% 3.80 ± 9% turbostat.CPU%c6
59.50 +23.7% 73.60 turbostat.CoreTmp
76121969 -10.9% 67831700 turbostat.IRQ
1.04 ± 9% +83.9% 1.91 ± 7% turbostat.Pkg%pc2
64.75 +19.5% 77.40 turbostat.PkgTmp
232.67 +14.8% 267.17 turbostat.PkgWatt
677381 -34.5% 443868 proc-vmstat.nr_active_anon
1370461 +4.9% 1437294 proc-vmstat.nr_dirty_background_threshold
2744277 +4.9% 2877990 proc-vmstat.nr_dirty_threshold
285017 -1.7% 280085 proc-vmstat.nr_file_pages
13814100 +4.8% 14482833 proc-vmstat.nr_free_pages
5744 -9.1% 5223 proc-vmstat.nr_inactive_anon
438.25 ± 4% -100.0% 0.00 proc-vmstat.nr_isolated_anon
858366 -28.6% 613295 proc-vmstat.nr_kernel_stack
6643 +5.9% 7036 proc-vmstat.nr_mapped
494492 ± 3% -23.8% 376889 proc-vmstat.nr_page_table_pages
30278 -20.7% 24010 ± 5% proc-vmstat.nr_shmem
52660 ± 8% -39.4% 31921 ± 2% proc-vmstat.nr_slab_reclaimable
395640 -12.6% 345975 proc-vmstat.nr_slab_unreclaimable
677381 -34.5% 443868 proc-vmstat.nr_zone_active_anon
5744 -9.1% 5223 proc-vmstat.nr_zone_inactive_anon
21895 ± 8% -88.9% 2424 ±108% proc-vmstat.numa_hint_faults
5245 ± 37% -80.9% 1004 ± 55% proc-vmstat.numa_hint_faults_local
1.604e+08 +70.3% 2.732e+08 proc-vmstat.numa_hit
1.604e+08 +70.3% 2.732e+08 proc-vmstat.numa_local
17957 ± 3% -84.1% 2847 ± 87% proc-vmstat.numa_pages_migrated
28334 -17.9% 23251 ± 6% proc-vmstat.pgactivate
1.612e+08 +70.6% 2.749e+08 proc-vmstat.pgalloc_normal
4848914 +118.0% 10572834 proc-vmstat.pgfault
1.609e+08 +70.8% 2.748e+08 proc-vmstat.pgfree
17957 ± 3% -84.1% 2847 ± 87% proc-vmstat.pgmigrate_success
252550 ± 9% -28.1% 181585 slabinfo.Acpi-Namespace.active_objs
2731 ± 9% -33.0% 1830 slabinfo.Acpi-Namespace.active_slabs
278625 ± 9% -33.0% 186696 slabinfo.Acpi-Namespace.num_objs
2731 ± 9% -33.0% 1830 slabinfo.Acpi-Namespace.num_slabs
890833 -23.1% 684748 slabinfo.anon_vma.active_objs
20208 -24.2% 15311 slabinfo.anon_vma.active_slabs
929620 -24.2% 704326 slabinfo.anon_vma.num_objs
20208 -24.2% 15311 slabinfo.anon_vma.num_slabs
1821379 -24.3% 1379596 slabinfo.anon_vma_chain.active_objs
29676 -25.5% 22122 slabinfo.anon_vma_chain.active_slabs
1899319 -25.5% 1415857 slabinfo.anon_vma_chain.num_objs
29676 -25.5% 22122 slabinfo.anon_vma_chain.num_slabs
230716 ± 8% -41.4% 135290 ± 2% slabinfo.dentry.active_objs
6050 ± 8% -44.3% 3368 slabinfo.dentry.active_slabs
254112 ± 8% -44.3% 141495 slabinfo.dentry.num_objs
6050 ± 8% -44.3% 3368 slabinfo.dentry.num_slabs
69940 -15.7% 58980 slabinfo.files_cache.active_objs
1540 -11.9% 1357 slabinfo.files_cache.active_slabs
70893 -11.9% 62474 slabinfo.files_cache.num_objs
1540 -11.9% 1357 slabinfo.files_cache.num_slabs
75703 -17.4% 62501 slabinfo.filp.active_objs
2383 -14.4% 2040 slabinfo.filp.active_slabs
76278 -14.4% 65299 slabinfo.filp.num_objs
2383 -14.4% 2040 slabinfo.filp.num_slabs
4403 -10.3% 3948 slabinfo.kmalloc-128.active_objs
4403 -10.3% 3948 slabinfo.kmalloc-128.num_objs
36787 -17.7% 30279 slabinfo.kmalloc-1k.active_objs
1152 -11.8% 1016 slabinfo.kmalloc-1k.active_slabs
36897 -11.8% 32537 slabinfo.kmalloc-1k.num_objs
1152 -11.8% 1016 slabinfo.kmalloc-1k.num_slabs
195931 +21.0% 236996 slabinfo.kmalloc-32.active_objs
1534 +22.7% 1882 slabinfo.kmalloc-32.active_slabs
196522 +22.6% 241010 slabinfo.kmalloc-32.num_objs
1534 +22.7% 1882 slabinfo.kmalloc-32.num_slabs
121535 +15.5% 140363 slabinfo.kmalloc-64.active_objs
1904 +16.8% 2225 slabinfo.kmalloc-64.active_slabs
121918 +16.8% 142441 slabinfo.kmalloc-64.num_objs
1904 +16.8% 2225 slabinfo.kmalloc-64.num_slabs
65393 -13.7% 56409 slabinfo.kmalloc-96.active_objs
59510 -24.4% 44969 slabinfo.mm_struct.active_objs
2027 -18.6% 1650 slabinfo.mm_struct.active_slabs
60832 -18.6% 49525 slabinfo.mm_struct.num_objs
2027 -18.6% 1650 slabinfo.mm_struct.num_slabs
126644 ± 15% -71.8% 35687 ± 11% slabinfo.proc_inode_cache.active_objs
2914 ± 14% -64.5% 1035 ± 6% slabinfo.proc_inode_cache.active_slabs
142845 ± 14% -64.4% 50783 ± 6% slabinfo.proc_inode_cache.num_objs
2914 ± 14% -64.5% 1035 ± 6% slabinfo.proc_inode_cache.num_slabs
107870 +70.8% 184229 slabinfo.selinux_file_security.active_objs
420.75 +71.4% 721.00 slabinfo.selinux_file_security.active_slabs
107870 +71.2% 184652 slabinfo.selinux_file_security.num_objs
420.75 +71.4% 721.00 slabinfo.selinux_file_security.num_slabs
1472207 -28.2% 1057731 slabinfo.vm_area_struct.active_objs
37002 -26.9% 27063 slabinfo.vm_area_struct.active_slabs
1480122 -26.9% 1082563 slabinfo.vm_area_struct.num_objs
37002 -26.9% 27063 slabinfo.vm_area_struct.num_slabs
12.39 ± 4% +42.5% 17.64 perf-stat.i.MPKI
1.811e+10 ± 2% +67.8% 3.039e+10 perf-stat.i.branch-instructions
1.06 ± 4% +0.7 1.76 perf-stat.i.branch-miss-rate%
1.56e+08 ± 3% +202.8% 4.722e+08 perf-stat.i.branch-misses
7.66 -4.7 2.92 ± 7% perf-stat.i.cache-miss-rate%
7.752e+08 ± 3% +179.7% 2.168e+09 perf-stat.i.cache-references
2804744 +245.1% 9679683 perf-stat.i.context-switches
3.05 -44.3% 1.70 perf-stat.i.cpi
2.352e+11 ± 2% -6.0% 2.211e+11 perf-stat.i.cpu-cycles
166412 ± 4% -10.4% 149175 perf-stat.i.cpu-migrations
0.43 ± 2% +0.3 0.73 ± 4% perf-stat.i.dTLB-load-miss-rate%
87203043 +250.1% 3.053e+08 ± 5% perf-stat.i.dTLB-load-misses
2.162e+10 ± 3% +95.2% 4.22e+10 perf-stat.i.dTLB-loads
9749893 ± 10% +209.1% 30134659 ± 8% perf-stat.i.dTLB-store-misses
7.574e+09 ± 5% +239.3% 2.57e+10 perf-stat.i.dTLB-stores
32841671 ± 5% +239.8% 1.116e+08 ± 2% perf-stat.i.iTLB-load-misses
16120761 ± 2% +232.5% 53605821 ± 3% perf-stat.i.iTLB-loads
7.999e+10 ± 2% +84.2% 1.473e+11 perf-stat.i.instructions
2549 ± 4% -47.1% 1348 perf-stat.i.instructions-per-iTLB-miss
0.34 +88.0% 0.63 perf-stat.i.ipc
19435 ± 2% +85.5% 36048 perf-stat.i.minor-faults
185161 -4.0% 177671 perf-stat.i.msec
69.91 ± 4% -22.7 47.22 ± 3% perf-stat.i.node-load-miss-rate%
18923802 ± 3% -25.5% 14097151 perf-stat.i.node-load-misses
10431088 ± 15% +69.7% 17700571 ± 6% perf-stat.i.node-loads
63.55 ± 2% -37.4 26.13 ± 4% perf-stat.i.node-store-miss-rate%
6558240 ± 2% -75.8% 1589874 ± 6% perf-stat.i.node-store-misses
3570446 ± 9% +18.6% 4235389 ± 2% perf-stat.i.node-stores
19574 ± 2% +84.2% 36064 perf-stat.i.page-faults
9.54 +54.2% 14.71 perf-stat.overall.MPKI
0.85 +0.7 1.55 perf-stat.overall.branch-miss-rate%
7.96 -5.1 2.87 perf-stat.overall.cache-miss-rate%
2.92 -48.3% 1.51 perf-stat.overall.cpi
3846 -7.3% 3565 perf-stat.overall.cycles-between-cache-misses
0.39 +0.3 0.71 ± 3% perf-stat.overall.dTLB-load-miss-rate%
2482 -46.8% 1321 perf-stat.overall.instructions-per-iTLB-miss
0.34 +93.6% 0.66 perf-stat.overall.ipc
59.96 ± 3% -15.8 44.15 ± 3% perf-stat.overall.node-load-miss-rate%
64.14 -36.5 27.61 ± 4% perf-stat.overall.node-store-miss-rate%
150275 -46.4% 80606 perf-stat.overall.path-length
1.796e+10 +65.8% 2.978e+10 perf-stat.ps.branch-instructions
1.526e+08 +203.1% 4.625e+08 perf-stat.ps.branch-misses
7.577e+08 +180.4% 2.125e+09 perf-stat.ps.cache-references
2765825 +242.5% 9472615 perf-stat.ps.context-switches
2.319e+11 -6.1% 2.177e+11 perf-stat.ps.cpu-cycles
83483589 +256.4% 2.975e+08 ± 3% perf-stat.ps.dTLB-load-misses
2.147e+10 +92.9% 4.141e+10 perf-stat.ps.dTLB-loads
9333767 ± 11% +215.3% 29431920 ± 9% perf-stat.ps.dTLB-store-misses
7.527e+09 +235.0% 2.522e+10 perf-stat.ps.dTLB-stores
32018894 ± 2% +241.3% 1.093e+08 perf-stat.ps.iTLB-load-misses
14830435 +253.3% 52394739 ± 3% perf-stat.ps.iTLB-loads
7.945e+10 +81.8% 1.444e+11 perf-stat.ps.instructions
12041 +195.9% 35634 perf-stat.ps.minor-faults
17807263 ± 4% -19.6% 14312257 perf-stat.ps.node-load-misses
11880418 ± 4% +52.6% 18131626 ± 4% perf-stat.ps.node-loads
6504641 -74.6% 1653489 ± 3% perf-stat.ps.node-store-misses
3636994 ± 2% +19.3% 4338925 ± 3% perf-stat.ps.node-stores
12042 +195.9% 35634 perf-stat.ps.page-faults
3.174e+13 +34.1% 4.256e+13 perf-stat.total.instructions
170729 ± 9% -34.0% 112728 sched_debug.cfs_rq:/.exec_clock.avg
180420 ± 9% -33.8% 119517 sched_debug.cfs_rq:/.exec_clock.max
161023 ± 11% -33.4% 107232 ± 2% sched_debug.cfs_rq:/.exec_clock.min
4974 ± 15% -47.8% 2597 ± 22% sched_debug.cfs_rq:/.exec_clock.stddev
12371 ± 9% -61.1% 4814 ± 24% sched_debug.cfs_rq:/.load.avg
2238 ± 28% -94.1% 132.80 ±200% sched_debug.cfs_rq:/.load.min
2.71 ± 11% -92.6% 0.20 ±126% sched_debug.cfs_rq:/.load_avg.min
16125496 ± 10% -23.5% 12334370 sched_debug.cfs_rq:/.min_vruntime.avg
13388339 ± 12% -45.5% 7293796 ± 3% sched_debug.cfs_rq:/.min_vruntime.min
1810600 ± 11% +138.4% 4316889 ± 3% sched_debug.cfs_rq:/.min_vruntime.stddev
0.84 -69.9% 0.25 ± 15% sched_debug.cfs_rq:/.nr_running.avg
0.64 ± 16% -87.5% 0.08 ±122% sched_debug.cfs_rq:/.nr_running.min
0.09 ± 17% +155.1% 0.22 ± 17% sched_debug.cfs_rq:/.nr_running.stddev
9.16 ± 3% -66.1% 3.10 ± 17% sched_debug.cfs_rq:/.runnable_load_avg.avg
10367 ± 10% -58.8% 4276 ± 29% sched_debug.cfs_rq:/.runnable_weight.avg
627.26 ± 15% -92.2% 48.92 ±132% sched_debug.cfs_rq:/.runnable_weight.min
1738262 ± 8% +148.0% 4310714 ± 2% sched_debug.cfs_rq:/.spread0.stddev
899.28 -42.3% 518.49 ± 6% sched_debug.cfs_rq:/.util_avg.avg
1436 ± 3% -21.1% 1133 ± 3% sched_debug.cfs_rq:/.util_avg.max
443.06 ± 2% -84.7% 67.64 ± 68% sched_debug.cfs_rq:/.util_avg.min
186.88 ± 2% +80.8% 337.86 ± 8% sched_debug.cfs_rq:/.util_avg.stddev
66.46 ± 31% -83.6% 10.92 ± 62% sched_debug.cfs_rq:/.util_est_enqueued.min
327010 ± 7% +147.0% 807580 ± 4% sched_debug.cpu.avg_idle.avg
777272 ± 11% +28.7% 1000000 sched_debug.cpu.avg_idle.max
5525 ± 52% +2922.6% 167024 ± 23% sched_debug.cpu.avg_idle.min
222720 ± 8% -23.6% 170237 sched_debug.cpu.clock.avg
229923 ± 7% -24.7% 173182 sched_debug.cpu.clock.max
213339 ± 8% -22.3% 165768 sched_debug.cpu.clock.min
4831 ± 17% -56.3% 2113 ± 25% sched_debug.cpu.clock.stddev
222720 ± 8% -23.6% 170237 sched_debug.cpu.clock_task.avg
229923 ± 7% -24.7% 173182 sched_debug.cpu.clock_task.max
213339 ± 8% -22.3% 165768 sched_debug.cpu.clock_task.min
4831 ± 17% -56.3% 2113 ± 25% sched_debug.cpu.clock_task.stddev
10.19 ± 3% -67.4% 3.32 ± 15% sched_debug.cpu.cpu_load[0].avg
0.32 ± 52% -100.0% 0.00 sched_debug.cpu.cpu_load[0].min
10.27 ± 3% -67.8% 3.31 ± 15% sched_debug.cpu.cpu_load[1].avg
1.37 ± 31% -97.1% 0.04 ±200% sched_debug.cpu.cpu_load[1].min
10.30 ± 3% -68.1% 3.28 ± 14% sched_debug.cpu.cpu_load[2].avg
1.79 ± 28% -95.5% 0.08 ±200% sched_debug.cpu.cpu_load[2].min
10.28 ± 2% -68.1% 3.27 ± 15% sched_debug.cpu.cpu_load[3].avg
1.86 ± 34% -93.5% 0.12 ±133% sched_debug.cpu.cpu_load[3].min
5.21 -30.0% 3.65 ± 23% sched_debug.cpu.cpu_load[3].stddev
10.34 ± 2% -66.8% 3.43 ± 16% sched_debug.cpu.cpu_load[4].avg
1.82 ± 33% -93.4% 0.12 ±133% sched_debug.cpu.cpu_load[4].min
45416 ± 8% -79.6% 9243 ± 32% sched_debug.cpu.curr->pid.avg
72536 ± 4% -27.0% 52940 ± 4% sched_debug.cpu.curr->pid.max
19602 ± 3% -44.2% 10933 ± 22% sched_debug.cpu.curr->pid.stddev
12310 ± 7% -60.1% 4905 ± 24% sched_debug.cpu.load.avg
2461 ± 19% -84.6% 380.28 ± 99% sched_debug.cpu.load.min
0.00 ± 17% -56.2% 0.00 ± 25% sched_debug.cpu.next_balance.stddev
194675 ± 9% -27.8% 140562 sched_debug.cpu.nr_load_updates.avg
203680 ± 8% -27.2% 148260 sched_debug.cpu.nr_load_updates.max
183328 ± 10% -27.0% 133837 sched_debug.cpu.nr_load_updates.min
4999 ± 17% -42.7% 2865 ± 18% sched_debug.cpu.nr_load_updates.stddev
0.87 ± 9% -76.9% 0.20 ± 63% sched_debug.cpu.nr_running.min
5566020 ± 9% +141.1% 13422406 sched_debug.cpu.nr_switches.avg
6162284 ± 8% +162.7% 16191166 sched_debug.cpu.nr_switches.max
5017464 ± 10% +115.6% 10817115 ± 3% sched_debug.cpu.nr_switches.min
310445 ± 15% +558.4% 2043922 ± 5% sched_debug.cpu.nr_switches.stddev
148.05 ± 15% -100.1% -0.18 sched_debug.cpu.nr_uninterruptible.avg
451.48 ± 11% -74.2% 116.52 ± 11% sched_debug.cpu.nr_uninterruptible.max
-71.80 +117.0% -155.84 sched_debug.cpu.nr_uninterruptible.min
109.64 ± 3% -44.5% 60.82 ± 10% sched_debug.cpu.nr_uninterruptible.stddev
5565902 ± 9% +141.2% 13422350 sched_debug.cpu.sched_count.avg
6160982 ± 8% +162.8% 16189899 sched_debug.cpu.sched_count.max
5016287 ± 10% +115.6% 10816531 ± 3% sched_debug.cpu.sched_count.min
310571 ± 15% +558.1% 2044025 ± 5% sched_debug.cpu.sched_count.stddev
5603 ± 19% +287.5% 21712 ± 5% sched_debug.cpu.sched_goidle.avg
8916 ± 19% +445.2% 48613 ± 5% sched_debug.cpu.sched_goidle.max
1150 ± 12% +1358.5% 16777 ± 7% sched_debug.cpu.sched_goidle.stddev
5002773 ± 9% +134.9% 11751944 sched_debug.cpu.ttwu_count.avg
5470146 ± 9% +144.8% 13388857 sched_debug.cpu.ttwu_count.max
4558654 ± 9% +118.2% 9946208 ± 3% sched_debug.cpu.ttwu_count.min
244605 ± 15% +371.1% 1152294 ± 9% sched_debug.cpu.ttwu_count.stddev
4105772 ± 9% +148.3% 10194990 sched_debug.cpu.ttwu_local.avg
4574699 ± 8% +153.2% 11584386 sched_debug.cpu.ttwu_local.max
3664225 ± 9% +139.4% 8771052 ± 3% sched_debug.cpu.ttwu_local.min
237313 ± 14% +292.9% 932406 ± 9% sched_debug.cpu.ttwu_local.stddev
212467 ± 9% -22.2% 165195 sched_debug.cpu_clk
208749 ± 9% -22.6% 161477 sched_debug.ktime
213154 ± 8% -22.2% 165883 sched_debug.sched_clk
165639 -28.9% 117850 softirqs.CPU0.TIMER
161054 -29.3% 113834 ± 2% softirqs.CPU1.TIMER
163593 -28.5% 117000 softirqs.CPU10.TIMER
163559 -29.1% 115993 softirqs.CPU11.TIMER
163555 -29.1% 116015 softirqs.CPU12.TIMER
163540 -29.0% 116172 softirqs.CPU13.TIMER
163327 -27.0% 119256 softirqs.CPU14.TIMER
163771 ± 3% -27.1% 119470 softirqs.CPU15.TIMER
163398 ± 2% -27.2% 118900 softirqs.CPU16.TIMER
162748 -28.4% 116554 softirqs.CPU17.TIMER
163150 ± 2% -27.8% 117766 softirqs.CPU18.TIMER
163584 -28.7% 116611 softirqs.CPU19.TIMER
164595 -28.9% 117082 softirqs.CPU2.TIMER
163195 -27.2% 118885 softirqs.CPU20.TIMER
52856 ± 3% +9.2% 57731 ± 3% softirqs.CPU22.RCU
167532 -27.0% 122216 softirqs.CPU22.TIMER
52118 ± 4% +8.5% 56558 ± 4% softirqs.CPU23.RCU
165689 -28.8% 117972 softirqs.CPU23.TIMER
165512 -28.1% 119075 softirqs.CPU24.TIMER
52144 ± 3% +10.1% 57418 ± 5% softirqs.CPU25.RCU
165465 -28.7% 117941 softirqs.CPU25.TIMER
52009 ± 3% +9.2% 56772 ± 4% softirqs.CPU26.RCU
165415 -28.4% 118479 softirqs.CPU26.TIMER
165063 -28.4% 118255 softirqs.CPU27.TIMER
51603 ± 4% +12.9% 58255 ± 6% softirqs.CPU28.RCU
165186 -28.8% 117673 softirqs.CPU28.TIMER
52043 ± 3% +8.7% 56583 ± 5% softirqs.CPU29.RCU
165375 -28.1% 118880 softirqs.CPU29.TIMER
162968 ± 2% -27.8% 117707 softirqs.CPU3.TIMER
165118 -28.5% 118012 ± 2% softirqs.CPU30.TIMER
165718 -28.7% 118161 ± 2% softirqs.CPU31.TIMER
165931 -29.0% 117880 ± 2% softirqs.CPU32.TIMER
165349 -28.6% 117987 softirqs.CPU33.TIMER
165179 -28.1% 118840 softirqs.CPU34.TIMER
52182 ± 3% +6.8% 55726 ± 5% softirqs.CPU35.RCU
165501 -28.8% 117768 softirqs.CPU35.TIMER
52511 ± 3% +8.0% 56710 ± 5% softirqs.CPU36.RCU
165479 -26.9% 120901 softirqs.CPU36.TIMER
52615 ± 3% +9.6% 57655 ± 6% softirqs.CPU37.RCU
165237 -26.8% 120995 softirqs.CPU37.TIMER
52595 ± 4% +8.9% 57296 ± 4% softirqs.CPU38.RCU
165282 -27.2% 120337 softirqs.CPU38.TIMER
165254 -28.4% 118268 softirqs.CPU39.TIMER
163854 -29.0% 116271 softirqs.CPU4.TIMER
52186 ± 4% +7.0% 55826 ± 5% softirqs.CPU40.RCU
165552 -28.7% 118021 softirqs.CPU40.TIMER
165362 -28.5% 118187 softirqs.CPU41.TIMER
52383 ± 3% +11.1% 58204 ± 6% softirqs.CPU42.RCU
164928 -27.2% 120070 softirqs.CPU42.TIMER
165258 -26.3% 121831 softirqs.CPU43.TIMER
166044 ± 2% -27.9% 119750 softirqs.CPU45.TIMER
163426 -29.1% 115827 softirqs.CPU46.TIMER
164291 -28.8% 116989 softirqs.CPU47.TIMER
163127 -28.6% 116400 ± 2% softirqs.CPU48.TIMER
162529 -28.6% 116098 softirqs.CPU49.TIMER
163930 ± 2% -24.1% 124405 ± 14% softirqs.CPU5.TIMER
162509 -28.4% 116423 softirqs.CPU50.TIMER
163187 -29.4% 115256 softirqs.CPU51.TIMER
162995 -28.9% 115879 softirqs.CPU52.TIMER
162917 -28.7% 116125 softirqs.CPU53.TIMER
162889 -29.0% 115681 ± 2% softirqs.CPU54.TIMER
162931 -29.2% 115313 softirqs.CPU55.TIMER
163094 -28.9% 115986 softirqs.CPU56.TIMER
162789 -28.5% 116352 softirqs.CPU57.TIMER
163074 -26.6% 119678 softirqs.CPU58.TIMER
189368 ± 14% -37.1% 119108 softirqs.CPU59.TIMER
11024 ± 8% -12.2% 9679 ± 16% softirqs.CPU6.SCHED
163755 -29.4% 115637 softirqs.CPU6.TIMER
162967 -26.4% 119917 ± 2% softirqs.CPU60.TIMER
162344 -28.6% 115864 softirqs.CPU61.TIMER
162979 -28.4% 116691 softirqs.CPU62.TIMER
162487 -28.6% 116024 softirqs.CPU63.TIMER
162551 -27.2% 118348 softirqs.CPU64.TIMER
187431 ± 12% -31.2% 128948 ± 12% softirqs.CPU65.TIMER
51804 ± 4% +7.7% 55774 ± 3% softirqs.CPU66.RCU
165329 -28.1% 118929 softirqs.CPU66.TIMER
165205 -28.9% 117399 softirqs.CPU67.TIMER
51725 ± 3% +10.3% 57049 ± 4% softirqs.CPU68.RCU
164825 -28.3% 118137 softirqs.CPU68.TIMER
164949 -28.4% 118116 softirqs.CPU69.TIMER
162860 -28.3% 116700 softirqs.CPU7.TIMER
51599 ± 3% +12.8% 58201 ± 9% softirqs.CPU70.RCU
164825 -28.6% 117719 softirqs.CPU70.TIMER
164984 -28.7% 117606 softirqs.CPU71.TIMER
51825 ± 4% +10.8% 57401 ± 7% softirqs.CPU72.RCU
165269 -28.5% 118112 softirqs.CPU72.TIMER
164821 -28.3% 118118 softirqs.CPU73.TIMER
165067 -28.7% 117655 softirqs.CPU74.TIMER
165057 -28.5% 118057 ± 2% softirqs.CPU75.TIMER
165267 -28.4% 118249 softirqs.CPU76.TIMER
165194 -28.7% 117767 softirqs.CPU77.TIMER
165332 -28.8% 117675 softirqs.CPU78.TIMER
166096 -28.9% 118037 softirqs.CPU79.TIMER
163232 -28.8% 116150 softirqs.CPU8.TIMER
165430 -27.1% 120522 softirqs.CPU80.TIMER
165423 -27.3% 120251 softirqs.CPU81.TIMER
164957 -27.3% 119948 softirqs.CPU82.TIMER
165288 -28.0% 118949 softirqs.CPU83.TIMER
165071 -28.9% 117422 softirqs.CPU84.TIMER
165234 -28.8% 117578 softirqs.CPU85.TIMER
164282 -27.1% 119812 softirqs.CPU86.TIMER
165205 -22.3% 128424 ± 12% softirqs.CPU87.TIMER
163320 -29.1% 115732 softirqs.CPU9.TIMER
14507070 -28.2% 10419594 softirqs.TIMER
56.37 -56.4 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.ttwu_do_activate
55.51 -55.5 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__account_scheduler_latency.enqueue_entity.enqueue_task_fair
59.57 -46.6 12.92 perf-profile.calltrace.cycles-pp.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up
60.47 -44.3 16.16 perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.autoremove_wake_function
61.22 -42.0 19.17 perf-profile.calltrace.cycles-pp.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common
61.33 -41.8 19.57 perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
46.45 -22.5 23.91 perf-profile.calltrace.cycles-pp.__wake_up_common_lock.pipe_write.__vfs_write.vfs_write.ksys_write
30.38 -19.8 10.57 ± 2% perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_read
30.46 -19.7 10.78 ± 2% perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_read.__vfs_read
30.85 -19.7 11.18 ± 2% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.pipe_read.__vfs_read.vfs_read.ksys_read
30.48 -19.6 10.86 ± 2% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.pipe_read.__vfs_read.vfs_read
39.62 -17.7 21.87 perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write
39.65 -17.6 22.05 perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write.__vfs_write
39.79 -17.2 22.54 perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.pipe_write.__vfs_write.vfs_write
52.89 -14.2 38.65 perf-profile.calltrace.cycles-pp.pipe_write.__vfs_write.vfs_write.ksys_write.do_syscall_64
53.10 -13.6 39.45 perf-profile.calltrace.cycles-pp.__vfs_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
9.16 ± 14% -9.2 0.00 perf-profile.calltrace.cycles-pp.__GI___libc_write
8.85 ± 14% -8.8 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__GI___libc_write
8.84 ± 14% -8.8 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write
8.77 ± 14% -8.8 0.00 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write
8.73 ± 14% -8.7 0.00 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write
38.89 -8.1 30.79 perf-profile.calltrace.cycles-pp.pipe_read.__vfs_read.vfs_read.ksys_read.do_syscall_64
39.08 -7.6 31.49 perf-profile.calltrace.cycles-pp.__vfs_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.37 ± 11% -7.4 0.00 perf-profile.calltrace.cycles-pp.__GI___libc_read
7.03 ± 11% -7.0 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__GI___libc_read
7.01 ± 11% -7.0 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read
6.94 ± 11% -6.9 0.00 perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read
6.89 ± 11% -6.9 0.00 perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read
6.25 -5.7 0.56 ± 5% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__wake_up_common_lock.pipe_write.__vfs_write
6.33 -5.5 0.85 ± 4% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__wake_up_common_lock.pipe_write.__vfs_write.vfs_write
5.33 ± 3% -4.5 0.81 ± 10% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.try_to_wake_up.autoremove_wake_function.__wake_up_common
5.37 ± 3% -4.5 0.91 ± 9% perf-profile.calltrace.cycles-pp._raw_spin_lock.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
3.75 ± 4% -3.0 0.78 ± 6% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.__schedule.schedule.pipe_wait
3.50 ± 5% -2.6 0.85 ± 6% perf-profile.calltrace.cycles-pp._raw_spin_lock.__schedule.schedule.pipe_wait.pipe_read
0.00 +0.5 0.52 perf-profile.calltrace.cycles-pp.ttwu_do_wakeup.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
0.00 +0.6 0.55 ± 4% perf-profile.calltrace.cycles-pp.avc_has_perm.file_has_perm.security_file_permission.vfs_read.ksys_read
0.00 +0.6 0.56 ± 2% perf-profile.calltrace.cycles-pp.__kernel_text_address.unwind_get_return_address.__save_stack_trace.save_stack_trace_tsk.__account_scheduler_latency
0.00 +0.6 0.57 ± 4% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.exit_to_usermode_loop.do_syscall_64
0.00 +0.6 0.58 ± 6% perf-profile.calltrace.cycles-pp.___perf_sw_event.__schedule.schedule.pipe_wait.pipe_read
0.00 +0.6 0.59 ± 2% perf-profile.calltrace.cycles-pp.__enqueue_entity.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up
0.00 +0.6 0.59 perf-profile.calltrace.cycles-pp.orc_find.unwind_next_frame.__save_stack_trace.save_stack_trace_tsk.__account_scheduler_latency
0.00 +0.6 0.61 perf-profile.calltrace.cycles-pp.mutex_lock.pipe_read.__vfs_read.vfs_read.ksys_read
0.00 +0.6 0.63 ± 6% perf-profile.calltrace.cycles-pp.atime_needs_update.touch_atime.pipe_read.__vfs_read.vfs_read
0.00 +0.6 0.64 ± 2% perf-profile.calltrace.cycles-pp.__fget_light.__fdget_pos.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.6 0.65 ± 4% perf-profile.calltrace.cycles-pp.switch_mm_irqs_off.__schedule.schedule.exit_to_usermode_loop.do_syscall_64
0.00 +0.7 0.67 ± 2% perf-profile.calltrace.cycles-pp.__fdget_pos.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.7 0.67 ± 2% perf-profile.calltrace.cycles-pp.unwind_next_frame.__unwind_start.__save_stack_trace.save_stack_trace_tsk.__account_scheduler_latency
0.00 +0.7 0.70 ± 2% perf-profile.calltrace.cycles-pp.update_curr.dequeue_entity.dequeue_task_fair.__schedule.schedule
0.00 +0.7 0.70 perf-profile.calltrace.cycles-pp.native_write_msr
0.00 +0.7 0.72 perf-profile.calltrace.cycles-pp.set_next_entity.pick_next_task_fair.__schedule.schedule.pipe_wait
0.00 +0.7 0.72 perf-profile.calltrace.cycles-pp.unwind_get_return_address.__save_stack_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity
0.00 +0.7 0.73 ± 2% perf-profile.calltrace.cycles-pp.file_update_time.pipe_write.__vfs_write.vfs_write.ksys_write
0.00 +0.7 0.74 ± 2% perf-profile.calltrace.cycles-pp.__fget_light.__fdget_pos.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.8 0.77 perf-profile.calltrace.cycles-pp.reweight_entity.dequeue_task_fair.__schedule.schedule.pipe_wait
0.00 +0.8 0.77 ± 2% perf-profile.calltrace.cycles-pp.__fdget_pos.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.8 0.77 ± 3% perf-profile.calltrace.cycles-pp.fsnotify.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.8 0.78 ± 3% perf-profile.calltrace.cycles-pp.file_has_perm.security_file_permission.vfs_write.ksys_write.do_syscall_64
0.00 +0.8 0.79 ± 2% perf-profile.calltrace.cycles-pp.__switch_to_asm
0.00 +0.8 0.85 perf-profile.calltrace.cycles-pp.save_stack_address.__save_stack_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity
0.00 +0.9 0.86 ± 3% perf-profile.calltrace.cycles-pp.file_has_perm.security_file_permission.vfs_read.ksys_read.do_syscall_64
0.00 +0.9 0.86 perf-profile.calltrace.cycles-pp.mutex_lock.pipe_write.__vfs_write.vfs_write.ksys_write
0.00 +0.9 0.87 ± 2% perf-profile.calltrace.cycles-pp.reweight_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.autoremove_wake_function
0.00 +0.9 0.92 ± 3% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.pipe_wait.pipe_write
0.00 +1.0 1.00 ± 14% perf-profile.calltrace.cycles-pp.mutex_unlock.pipe_write.__vfs_write.vfs_write.ksys_write
0.00 +1.0 1.01 perf-profile.calltrace.cycles-pp.__unwind_start.__save_stack_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity
0.00 +1.1 1.15 ± 4% perf-profile.calltrace.cycles-pp.touch_atime.pipe_read.__vfs_read.vfs_read.ksys_read
0.00 +1.2 1.22 perf-profile.calltrace.cycles-pp.selinux_file_permission.security_file_permission.vfs_write.ksys_write.do_syscall_64
0.00 +1.3 1.34 ± 15% perf-profile.calltrace.cycles-pp.__orc_find.unwind_next_frame.__save_stack_trace.save_stack_trace_tsk.__account_scheduler_latency
0.00 +1.4 1.41 ± 3% perf-profile.calltrace.cycles-pp.switch_mm_irqs_off.__schedule.schedule.pipe_wait.pipe_write
0.00 +1.5 1.48 perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.pipe_wait.pipe_read
0.00 +1.5 1.49 ± 2% perf-profile.calltrace.cycles-pp.cpumask_next_wrap.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function
0.00 +1.6 1.55 perf-profile.calltrace.cycles-pp.load_new_mm_cr3.switch_mm_irqs_off.__schedule.schedule.pipe_wait
0.00 +1.6 1.56 perf-profile.calltrace.cycles-pp.selinux_file_permission.security_file_permission.vfs_read.ksys_read.do_syscall_64
0.00 +1.7 1.68 ± 4% perf-profile.calltrace.cycles-pp.__schedule.schedule.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.7 1.71 perf-profile.calltrace.cycles-pp.__switch_to
0.00 +1.7 1.73 ± 4% perf-profile.calltrace.cycles-pp.schedule.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.8 1.78 ± 4% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.12 ±173% +1.8 1.94 ± 3% perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.pipe_wait.pipe_write
0.12 ±173% +2.1 2.18 perf-profile.calltrace.cycles-pp.security_file_permission.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.58 ± 3% +2.1 2.66 perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyout.copy_page_to_iter.pipe_read.__vfs_read
0.00 +2.1 2.08 perf-profile.calltrace.cycles-pp.switch_mm_irqs_off.__schedule.schedule.pipe_wait.pipe_read
0.00 +2.2 2.16 ± 11% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyin.copy_page_from_iter.pipe_write.__vfs_write
0.61 ± 3% +2.2 2.78 perf-profile.calltrace.cycles-pp.copyout.copy_page_to_iter.pipe_read.__vfs_read.vfs_read
0.13 ±173% +2.2 2.30 ± 10% perf-profile.calltrace.cycles-pp.copyin.copy_page_from_iter.pipe_write.__vfs_write.vfs_write
0.60 +2.3 2.88 perf-profile.calltrace.cycles-pp.security_file_permission.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +2.4 2.36 perf-profile.calltrace.cycles-pp.dequeue_entity.dequeue_task_fair.__schedule.schedule.pipe_wait
4.07 +2.6 6.70 ± 2% perf-profile.calltrace.cycles-pp.pipe_wait.pipe_write.__vfs_write.vfs_write.ksys_write
33.35 ± 2% +2.7 36.02 perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.78 ± 2% +2.8 3.62 ± 7% perf-profile.calltrace.cycles-pp.copy_page_from_iter.pipe_write.__vfs_write.vfs_write.ksys_write
0.54 ± 4% +2.9 3.42 perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.pipe_wait.pipe_read
0.95 ± 2% +3.3 4.29 perf-profile.calltrace.cycles-pp.copy_page_to_iter.pipe_read.__vfs_read.vfs_read.ksys_read
0.96 ± 2% +3.5 4.46 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret
33.57 ± 2% +3.5 37.11 perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.17 ± 9% +3.7 5.89 ± 2% perf-profile.calltrace.cycles-pp.schedule.pipe_wait.pipe_write.__vfs_write.vfs_write
1.02 ± 2% +3.7 4.76 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64
1.99 ± 2% +3.8 5.75 ± 2% perf-profile.calltrace.cycles-pp.__schedule.schedule.pipe_wait.pipe_write.__vfs_write
5.57 +4.5 10.09 perf-profile.calltrace.cycles-pp.__schedule.schedule.pipe_wait.pipe_read.__vfs_read
0.26 ±100% +4.6 4.83 perf-profile.calltrace.cycles-pp.available_idle_cpu.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function
5.62 +4.7 10.29 perf-profile.calltrace.cycles-pp.schedule.pipe_wait.pipe_read.__vfs_read.vfs_read
79.89 ± 2% +5.3 85.19 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.83 +5.5 11.30 perf-profile.calltrace.cycles-pp.pipe_wait.pipe_read.__vfs_read.vfs_read.ksys_read
1.32 ± 2% +5.8 7.11 perf-profile.calltrace.cycles-pp.unwind_next_frame.__save_stack_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity
80.02 ± 2% +5.8 85.85 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
1.60 ± 5% +6.6 8.23 perf-profile.calltrace.cycles-pp.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common
1.83 ± 4% +7.4 9.20 perf-profile.calltrace.cycles-pp.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
2.10 ± 2% +9.2 11.30 perf-profile.calltrace.cycles-pp.__save_stack_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair
2.23 ± 2% +9.6 11.86 perf-profile.calltrace.cycles-pp.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.ttwu_do_activate
484.75 ± 3% -25.8% 359.80 interrupts.9:IO-APIC.9-fasteoi.acpi
216226 +28.9% 278795 ± 2% interrupts.CAL:Function_call_interrupts
2508 +26.3% 3168 ± 4% interrupts.CPU0.CAL:Function_call_interrupts
766676 ± 2% -23.0% 590010 interrupts.CPU0.LOC:Local_timer_interrupts
484.75 ± 3% -25.8% 359.80 interrupts.CPU1.9:IO-APIC.9-fasteoi.acpi
2503 +26.1% 3156 ± 5% interrupts.CPU1.CAL:Function_call_interrupts
767187 -23.2% 589583 interrupts.CPU1.LOC:Local_timer_interrupts
80879 ± 17% +84.7% 149412 ± 41% interrupts.CPU1.RES:Rescheduling_interrupts
2480 +28.9% 3198 ± 2% interrupts.CPU10.CAL:Function_call_interrupts
768147 -23.2% 589792 interrupts.CPU10.LOC:Local_timer_interrupts
73746 ± 33% +118.7% 161303 ± 37% interrupts.CPU10.RES:Rescheduling_interrupts
2480 +28.9% 3195 ± 2% interrupts.CPU11.CAL:Function_call_interrupts
767449 -23.2% 589186 interrupts.CPU11.LOC:Local_timer_interrupts
77524 ± 23% +110.4% 163120 ± 35% interrupts.CPU11.RES:Rescheduling_interrupts
2251 ± 7% +35.4% 3047 ± 7% interrupts.CPU12.CAL:Function_call_interrupts
767333 -23.3% 588653 interrupts.CPU12.LOC:Local_timer_interrupts
80393 ± 28% +84.6% 148373 ± 34% interrupts.CPU12.RES:Rescheduling_interrupts
2488 +24.9% 3107 ± 7% interrupts.CPU13.CAL:Function_call_interrupts
767963 -23.4% 588388 interrupts.CPU13.LOC:Local_timer_interrupts
82141 ± 18% +77.0% 145385 ± 38% interrupts.CPU13.RES:Rescheduling_interrupts
2413 ± 3% +28.0% 3090 ± 7% interrupts.CPU14.CAL:Function_call_interrupts
767352 -23.2% 589052 interrupts.CPU14.LOC:Local_timer_interrupts
2378 ± 7% +34.2% 3191 ± 2% interrupts.CPU15.CAL:Function_call_interrupts
765562 ± 2% -23.0% 589760 interrupts.CPU15.LOC:Local_timer_interrupts
74456 ± 28% +90.5% 141821 ± 40% interrupts.CPU15.RES:Rescheduling_interrupts
2471 ± 2% +29.3% 3194 ± 2% interrupts.CPU16.CAL:Function_call_interrupts
767144 -23.1% 589838 interrupts.CPU16.LOC:Local_timer_interrupts
75517 ± 22% +119.3% 165598 ± 44% interrupts.CPU16.RES:Rescheduling_interrupts
2456 +30.8% 3212 ± 3% interrupts.CPU17.CAL:Function_call_interrupts
765370 -22.9% 589808 interrupts.CPU17.LOC:Local_timer_interrupts
76018 ± 18% +118.9% 166418 ± 33% interrupts.CPU17.RES:Rescheduling_interrupts
2518 ± 2% +22.6% 3087 ± 6% interrupts.CPU18.CAL:Function_call_interrupts
767083 ± 2% -23.1% 589705 interrupts.CPU18.LOC:Local_timer_interrupts
74261 ± 19% +114.0% 158916 ± 45% interrupts.CPU18.RES:Rescheduling_interrupts
2495 ± 2% +27.5% 3182 ± 2% interrupts.CPU19.CAL:Function_call_interrupts
767687 -23.2% 589591 interrupts.CPU19.LOC:Local_timer_interrupts
2469 ± 2% +30.0% 3210 ± 3% interrupts.CPU2.CAL:Function_call_interrupts
767713 -23.2% 589361 interrupts.CPU2.LOC:Local_timer_interrupts
79444 ± 26% +97.5% 156929 ± 28% interrupts.CPU2.RES:Rescheduling_interrupts
2474 ± 2% +28.8% 3187 ± 2% interrupts.CPU20.CAL:Function_call_interrupts
767513 -23.3% 588573 interrupts.CPU20.LOC:Local_timer_interrupts
82393 ± 23% +84.3% 151813 ± 39% interrupts.CPU20.RES:Rescheduling_interrupts
2460 ± 2% +27.9% 3146 ± 4% interrupts.CPU21.CAL:Function_call_interrupts
767448 ± 2% -23.2% 589459 interrupts.CPU21.LOC:Local_timer_interrupts
77670 ± 24% +98.9% 154455 ± 46% interrupts.CPU21.RES:Rescheduling_interrupts
2362 ± 3% +35.3% 3197 ± 2% interrupts.CPU22.CAL:Function_call_interrupts
767151 -23.1% 589887 interrupts.CPU22.LOC:Local_timer_interrupts
89532 ± 20% +107.6% 185861 ± 27% interrupts.CPU22.RES:Rescheduling_interrupts
2477 ± 2% +28.3% 3179 ± 2% interrupts.CPU23.CAL:Function_call_interrupts
768039 -23.2% 589599 interrupts.CPU23.LOC:Local_timer_interrupts
84262 ± 24% +103.0% 171026 ± 28% interrupts.CPU23.RES:Rescheduling_interrupts
2474 +28.9% 3188 ± 2% interrupts.CPU24.CAL:Function_call_interrupts
767979 -23.2% 589575 interrupts.CPU24.LOC:Local_timer_interrupts
88546 ± 30% +114.4% 189820 ± 25% interrupts.CPU24.RES:Rescheduling_interrupts
2457 +28.6% 3160 ± 2% interrupts.CPU25.CAL:Function_call_interrupts
768027 -23.2% 589632 interrupts.CPU25.LOC:Local_timer_interrupts
2430 ± 3% +30.8% 3178 ± 4% interrupts.CPU26.CAL:Function_call_interrupts
767738 -23.2% 589384 interrupts.CPU26.LOC:Local_timer_interrupts
85144 ± 36% +109.0% 177927 ± 34% interrupts.CPU26.RES:Rescheduling_interrupts
2468 ± 2% +29.5% 3197 ± 3% interrupts.CPU27.CAL:Function_call_interrupts
765596 -23.0% 589400 interrupts.CPU27.LOC:Local_timer_interrupts
79522 ± 30% +132.2% 184636 ± 25% interrupts.CPU27.RES:Rescheduling_interrupts
2450 ± 3% +30.2% 3190 ± 2% interrupts.CPU28.CAL:Function_call_interrupts
767534 -23.2% 589374 interrupts.CPU28.LOC:Local_timer_interrupts
80602 ± 33% +115.6% 173797 ± 33% interrupts.CPU28.RES:Rescheduling_interrupts
2465 ± 2% +29.4% 3188 ± 2% interrupts.CPU29.CAL:Function_call_interrupts
767228 -23.2% 589331 interrupts.CPU29.LOC:Local_timer_interrupts
84900 ± 30% +120.5% 187220 ± 29% interrupts.CPU29.RES:Rescheduling_interrupts
2486 +29.0% 3208 ± 3% interrupts.CPU3.CAL:Function_call_interrupts
767330 ± 2% -23.1% 589735 interrupts.CPU3.LOC:Local_timer_interrupts
81362 ± 18% +96.8% 160080 ± 49% interrupts.CPU3.RES:Rescheduling_interrupts
2391 ± 4% +32.3% 3163 ± 2% interrupts.CPU30.CAL:Function_call_interrupts
768161 -23.3% 589314 interrupts.CPU30.LOC:Local_timer_interrupts
89290 ± 26% +87.4% 167375 ± 27% interrupts.CPU30.RES:Rescheduling_interrupts
2384 ± 6% +33.6% 3185 ± 2% interrupts.CPU31.CAL:Function_call_interrupts
767815 -23.3% 589011 interrupts.CPU31.LOC:Local_timer_interrupts
85518 ± 24% +97.3% 168741 ± 36% interrupts.CPU31.RES:Rescheduling_interrupts
2403 +30.8% 3144 ± 4% interrupts.CPU32.CAL:Function_call_interrupts
766680 ± 2% -23.2% 588874 interrupts.CPU32.LOC:Local_timer_interrupts
2417 ± 4% +31.6% 3180 ± 3% interrupts.CPU33.CAL:Function_call_interrupts
766995 -23.2% 589408 interrupts.CPU33.LOC:Local_timer_interrupts
2451 ± 2% +30.4% 3195 ± 2% interrupts.CPU34.CAL:Function_call_interrupts
766847 -23.2% 589009 interrupts.CPU34.LOC:Local_timer_interrupts
80095 ± 35% +125.0% 180197 ± 37% interrupts.CPU34.RES:Rescheduling_interrupts
767752 -23.2% 589579 interrupts.CPU35.LOC:Local_timer_interrupts
86105 ± 27% +110.6% 181303 ± 24% interrupts.CPU35.RES:Rescheduling_interrupts
767303 -23.2% 589180 interrupts.CPU36.LOC:Local_timer_interrupts
2463 +29.9% 3199 ± 2% interrupts.CPU37.CAL:Function_call_interrupts
767239 -23.2% 589524 interrupts.CPU37.LOC:Local_timer_interrupts
88045 ± 23% +90.4% 167616 ± 31% interrupts.CPU37.RES:Rescheduling_interrupts
2429 ± 3% +26.9% 3083 ± 6% interrupts.CPU38.CAL:Function_call_interrupts
766954 -23.2% 588834 interrupts.CPU38.LOC:Local_timer_interrupts
2472 ± 2% +29.7% 3206 ± 2% interrupts.CPU39.CAL:Function_call_interrupts
768077 -23.3% 589426 interrupts.CPU39.LOC:Local_timer_interrupts
85215 ± 26% +109.1% 178204 ± 29% interrupts.CPU39.RES:Rescheduling_interrupts
2481 ± 2% +28.4% 3187 ± 3% interrupts.CPU4.CAL:Function_call_interrupts
768141 -23.2% 589734 interrupts.CPU4.LOC:Local_timer_interrupts
2467 ± 4% +28.2% 3163 ± 4% interrupts.CPU40.CAL:Function_call_interrupts
767833 -23.2% 589494 interrupts.CPU40.LOC:Local_timer_interrupts
80113 ± 33% +107.6% 166280 ± 32% interrupts.CPU40.RES:Rescheduling_interrupts
2454 +28.9% 3164 ± 2% interrupts.CPU41.CAL:Function_call_interrupts
768088 -23.2% 589632 interrupts.CPU41.LOC:Local_timer_interrupts
82482 ± 34% +101.5% 166200 ± 31% interrupts.CPU41.RES:Rescheduling_interrupts
2483 +28.6% 3192 ± 2% interrupts.CPU42.CAL:Function_call_interrupts
766748 -23.1% 589494 interrupts.CPU42.LOC:Local_timer_interrupts
80778 ± 34% +94.7% 157247 ± 36% interrupts.CPU42.RES:Rescheduling_interrupts
2419 ± 2% +30.9% 3166 ± 3% interrupts.CPU43.CAL:Function_call_interrupts
767709 -23.2% 589346 interrupts.CPU43.LOC:Local_timer_interrupts
84594 ± 22% +113.4% 180517 ± 35% interrupts.CPU43.RES:Rescheduling_interrupts
766732 ± 2% -23.2% 588492 interrupts.CPU44.LOC:Local_timer_interrupts
73070 ± 27% +105.0% 149793 ± 43% interrupts.CPU44.RES:Rescheduling_interrupts
2507 ± 3% +28.6% 3225 ± 2% interrupts.CPU45.CAL:Function_call_interrupts
767945 -23.3% 589325 interrupts.CPU45.LOC:Local_timer_interrupts
81340 ± 20% +96.2% 159568 ± 42% interrupts.CPU45.RES:Rescheduling_interrupts
2471 +29.3% 3196 ± 2% interrupts.CPU46.CAL:Function_call_interrupts
767772 -23.2% 589548 interrupts.CPU46.LOC:Local_timer_interrupts
80390 ± 13% +94.6% 156472 ± 41% interrupts.CPU46.RES:Rescheduling_interrupts
2500 ± 4% +27.7% 3193 ± 3% interrupts.CPU47.CAL:Function_call_interrupts
768250 -23.3% 589184 interrupts.CPU47.LOC:Local_timer_interrupts
78096 ± 23% +98.5% 155040 ± 43% interrupts.CPU47.RES:Rescheduling_interrupts
2452 ± 2% +31.6% 3226 ± 3% interrupts.CPU48.CAL:Function_call_interrupts
767713 -23.2% 589825 interrupts.CPU48.LOC:Local_timer_interrupts
2505 ± 2% +27.7% 3200 ± 3% interrupts.CPU49.CAL:Function_call_interrupts
767037 ± 2% -23.1% 589601 interrupts.CPU49.LOC:Local_timer_interrupts
75217 ± 26% +118.7% 164509 ± 42% interrupts.CPU49.RES:Rescheduling_interrupts
2516 +25.0% 3145 ± 6% interrupts.CPU5.CAL:Function_call_interrupts
767735 -23.3% 589103 interrupts.CPU5.LOC:Local_timer_interrupts
2486 +27.7% 3174 ± 2% interrupts.CPU50.CAL:Function_call_interrupts
765630 ± 2% -23.0% 589525 interrupts.CPU50.LOC:Local_timer_interrupts
85467 ± 14% +86.7% 159587 ± 44% interrupts.CPU50.RES:Rescheduling_interrupts
2444 ± 3% +31.0% 3201 ± 3% interrupts.CPU51.CAL:Function_call_interrupts
767550 -23.3% 589054 interrupts.CPU51.LOC:Local_timer_interrupts
80621 ± 26% +92.3% 155048 ± 22% interrupts.CPU51.RES:Rescheduling_interrupts
2482 ± 2% +29.0% 3202 ± 3% interrupts.CPU52.CAL:Function_call_interrupts
768274 -23.3% 589452 interrupts.CPU52.LOC:Local_timer_interrupts
78569 ± 25% +97.2% 154940 ± 37% interrupts.CPU52.RES:Rescheduling_interrupts
2514 ± 2% +21.6% 3057 ± 8% interrupts.CPU53.CAL:Function_call_interrupts
767962 -23.2% 589669 interrupts.CPU53.LOC:Local_timer_interrupts
79490 ± 21% +87.8% 149301 ± 30% interrupts.CPU53.RES:Rescheduling_interrupts
2512 ± 2% +27.8% 3211 ± 2% interrupts.CPU54.CAL:Function_call_interrupts
768068 -23.3% 589068 interrupts.CPU54.LOC:Local_timer_interrupts
75633 ± 25% +98.8% 150329 ± 25% interrupts.CPU54.RES:Rescheduling_interrupts
2498 ± 2% +28.6% 3212 ± 2% interrupts.CPU55.CAL:Function_call_interrupts
767453 ± 2% -23.2% 589670 interrupts.CPU55.LOC:Local_timer_interrupts
74966 ± 30% +127.2% 170302 ± 34% interrupts.CPU55.RES:Rescheduling_interrupts
767893 -23.3% 589146 interrupts.CPU56.LOC:Local_timer_interrupts
73819 ± 27% +110.2% 155180 ± 35% interrupts.CPU56.RES:Rescheduling_interrupts
2470 +25.2% 3093 ± 9% interrupts.CPU57.CAL:Function_call_interrupts
767993 -23.2% 589798 interrupts.CPU57.LOC:Local_timer_interrupts
72216 ± 32% +93.3% 139590 ± 36% interrupts.CPU57.RES:Rescheduling_interrupts
2388 ± 3% +31.0% 3129 ± 7% interrupts.CPU58.CAL:Function_call_interrupts
768178 -23.2% 589858 interrupts.CPU58.LOC:Local_timer_interrupts
2411 ± 4% +31.9% 3180 interrupts.CPU59.CAL:Function_call_interrupts
767760 -23.2% 589743 interrupts.CPU59.LOC:Local_timer_interrupts
2489 +28.5% 3198 ± 2% interrupts.CPU6.CAL:Function_call_interrupts
766850 -23.1% 589385 interrupts.CPU6.LOC:Local_timer_interrupts
84298 ± 17% +95.9% 165138 ± 32% interrupts.CPU6.RES:Rescheduling_interrupts
2414 ± 2% +32.1% 3189 ± 3% interrupts.CPU60.CAL:Function_call_interrupts
767733 ± 2% -23.2% 589635 interrupts.CPU60.LOC:Local_timer_interrupts
2474 +30.2% 3221 ± 2% interrupts.CPU61.CAL:Function_call_interrupts
766946 ± 2% -23.1% 589470 interrupts.CPU61.LOC:Local_timer_interrupts
80160 ± 11% +94.8% 156114 ± 34% interrupts.CPU61.RES:Rescheduling_interrupts
767604 -23.2% 589743 interrupts.CPU62.LOC:Local_timer_interrupts
75685 ± 26% +108.3% 157622 ± 42% interrupts.CPU62.RES:Rescheduling_interrupts
768292 -23.2% 589694 interrupts.CPU63.LOC:Local_timer_interrupts
2462 +30.7% 3218 ± 2% interrupts.CPU64.CAL:Function_call_interrupts
767556 -23.2% 589180 interrupts.CPU64.LOC:Local_timer_interrupts
2524 ± 2% +27.0% 3206 ± 2% interrupts.CPU65.CAL:Function_call_interrupts
767681 -23.2% 589560 interrupts.CPU65.LOC:Local_timer_interrupts
2444 +31.3% 3210 ± 2% interrupts.CPU66.CAL:Function_call_interrupts
767079 ± 2% -23.1% 589718 interrupts.CPU66.LOC:Local_timer_interrupts
84733 ± 30% +105.5% 174142 ± 32% interrupts.CPU66.RES:Rescheduling_interrupts
2420 ± 6% +32.4% 3204 ± 2% interrupts.CPU67.CAL:Function_call_interrupts
767496 -23.2% 589598 interrupts.CPU67.LOC:Local_timer_interrupts
82189 ± 27% +120.1% 180899 ± 35% interrupts.CPU67.RES:Rescheduling_interrupts
2426 +32.2% 3207 ± 2% interrupts.CPU68.CAL:Function_call_interrupts
768241 -23.3% 589575 interrupts.CPU68.LOC:Local_timer_interrupts
81351 ± 36% +109.8% 170663 ± 20% interrupts.CPU68.RES:Rescheduling_interrupts
2433 +32.0% 3211 ± 2% interrupts.CPU69.CAL:Function_call_interrupts
767196 -23.2% 589413 interrupts.CPU69.LOC:Local_timer_interrupts
95086 ± 23% +90.5% 181145 ± 31% interrupts.CPU69.RES:Rescheduling_interrupts
767898 -23.2% 589697 interrupts.CPU7.LOC:Local_timer_interrupts
82873 ± 17% +86.7% 154699 ± 30% interrupts.CPU7.RES:Rescheduling_interrupts
2430 ± 3% +32.0% 3207 ± 2% interrupts.CPU70.CAL:Function_call_interrupts
768128 -23.2% 589592 interrupts.CPU70.LOC:Local_timer_interrupts
2408 +33.2% 3209 ± 2% interrupts.CPU71.CAL:Function_call_interrupts
767014 ± 2% -23.1% 589492 interrupts.CPU71.LOC:Local_timer_interrupts
77179 ± 37% +142.9% 187454 ± 32% interrupts.CPU71.RES:Rescheduling_interrupts
2446 ± 3% +30.8% 3200 ± 2% interrupts.CPU72.CAL:Function_call_interrupts
768024 -23.3% 589213 interrupts.CPU72.LOC:Local_timer_interrupts
83641 ± 25% +105.6% 171961 ± 32% interrupts.CPU72.RES:Rescheduling_interrupts
2448 +30.2% 3187 ± 2% interrupts.CPU73.CAL:Function_call_interrupts
767985 -23.2% 589571 interrupts.CPU73.LOC:Local_timer_interrupts
88301 ± 28% +97.3% 174228 ± 34% interrupts.CPU73.RES:Rescheduling_interrupts
2401 ± 4% +32.9% 3191 ± 2% interrupts.CPU74.CAL:Function_call_interrupts
767803 -23.2% 589592 interrupts.CPU74.LOC:Local_timer_interrupts
88603 ± 28% +96.5% 174112 ± 38% interrupts.CPU74.RES:Rescheduling_interrupts
2399 +33.4% 3201 ± 2% interrupts.CPU75.CAL:Function_call_interrupts
767593 -23.2% 589189 interrupts.CPU75.LOC:Local_timer_interrupts
2461 +30.1% 3201 ± 2% interrupts.CPU76.CAL:Function_call_interrupts
767685 -23.2% 589532 interrupts.CPU76.LOC:Local_timer_interrupts
2423 +32.8% 3219 ± 2% interrupts.CPU77.CAL:Function_call_interrupts
767726 -23.4% 588421 interrupts.CPU77.LOC:Local_timer_interrupts
88940 ± 28% +101.9% 179532 ± 38% interrupts.CPU77.RES:Rescheduling_interrupts
2473 ± 4% +29.8% 3209 ± 3% interrupts.CPU78.CAL:Function_call_interrupts
767587 -23.2% 589550 interrupts.CPU78.LOC:Local_timer_interrupts
2453 ± 2% +31.3% 3222 ± 2% interrupts.CPU79.CAL:Function_call_interrupts
767974 -23.3% 589219 interrupts.CPU79.LOC:Local_timer_interrupts
85429 ± 23% +108.6% 178187 ± 33% interrupts.CPU79.RES:Rescheduling_interrupts
2490 ± 2% +28.6% 3202 ± 2% interrupts.CPU8.CAL:Function_call_interrupts
767732 -23.2% 589502 interrupts.CPU8.LOC:Local_timer_interrupts
79620 ± 20% +89.8% 151133 ± 37% interrupts.CPU8.RES:Rescheduling_interrupts
2446 ± 3% +31.7% 3223 ± 2% interrupts.CPU80.CAL:Function_call_interrupts
768373 -23.3% 589422 interrupts.CPU80.LOC:Local_timer_interrupts
81634 ± 30% +95.1% 159297 ± 33% interrupts.CPU80.RES:Rescheduling_interrupts
2460 +30.3% 3204 ± 2% interrupts.CPU81.CAL:Function_call_interrupts
767673 -23.4% 588388 interrupts.CPU81.LOC:Local_timer_interrupts
2386 +35.4% 3231 interrupts.CPU82.CAL:Function_call_interrupts
766944 ± 2% -23.2% 589125 interrupts.CPU82.LOC:Local_timer_interrupts
2492 ± 2% +29.8% 3233 ± 2% interrupts.CPU83.CAL:Function_call_interrupts
768486 -23.3% 589291 interrupts.CPU83.LOC:Local_timer_interrupts
2456 +31.6% 3231 ± 2% interrupts.CPU84.CAL:Function_call_interrupts
767668 -23.2% 589624 interrupts.CPU84.LOC:Local_timer_interrupts
82136 ± 26% +113.1% 175016 ± 32% interrupts.CPU84.RES:Rescheduling_interrupts
2461 ± 4% +30.4% 3210 ± 2% interrupts.CPU85.CAL:Function_call_interrupts
768260 -23.3% 589628 interrupts.CPU85.LOC:Local_timer_interrupts
2509 +27.6% 3201 ± 2% interrupts.CPU86.CAL:Function_call_interrupts
766683 -23.1% 589682 interrupts.CPU86.LOC:Local_timer_interrupts
82626 ± 35% +92.7% 159211 ± 34% interrupts.CPU86.RES:Rescheduling_interrupts
2508 ± 2% +27.2% 3189 ± 3% interrupts.CPU87.CAL:Function_call_interrupts
768214 -23.2% 589642 interrupts.CPU87.LOC:Local_timer_interrupts
84955 ± 28% +93.5% 164415 ± 32% interrupts.CPU87.RES:Rescheduling_interrupts
2484 +22.8% 3050 ± 9% interrupts.CPU9.CAL:Function_call_interrupts
767967 ± 2% -23.2% 589559 interrupts.CPU9.LOC:Local_timer_interrupts
4898 ± 34% +61.1% 7889 interrupts.CPU9.NMI:Non-maskable_interrupts
4898 ± 34% +61.1% 7889 interrupts.CPU9.PMI:Performance_monitoring_interrupts
187.25 ± 2% -9.3% 169.80 ± 5% interrupts.IWI:IRQ_work_interrupts
67542656 -23.2% 51868528 interrupts.LOC:Local_timer_interrupts
7150206 ± 10% +102.0% 14443293 ± 2% interrupts.RES:Rescheduling_interrupts
139.25 ± 36% -58.1% 58.40 ± 32% interrupts.TLB:TLB_shootdowns
hackbench.throughput
2.2e+06 +-+---------------------------------------------------------------+
| |
2e+06 O-+ O O O O O O O O O |
1.8e+06 +-+ |
| |
1.6e+06 +-+ |
1.4e+06 +-+ |
| |
1.2e+06 +-+ |
1e+06 +-+ |
| |
800000 +-+ |
600000 +-+ ...+..... |
|.....+.....+.....+.....+.. +.....+.....+.....+.....+.....|
400000 +-+---------------------------------------------------------------+
hackbench.workload
5.5e+08 +-+---------------------------------------------------------------+
O O O O O O O O O O |
5e+08 +-+ |
| |
4.5e+08 +-+ |
| |
4e+08 +-+ |
| |
3.5e+08 +-+ |
| |
3e+08 +-+ |
| |
2.5e+08 +-+ |
| |
2e+08 +-+---------------------------------------------------------------+
hackbench.time.user_time
2400 +-+------------------------------------------------------------------+
O O O O O O |
2200 +-+ O O O O |
2000 +-+ |
| |
1800 +-+ |
| |
1600 +-+ |
| |
1400 +-+ |
1200 +-+ |
| |
1000 +-+...+......+.....+.....+..... ...+.....+.....+.....+......+.....|
| +... |
800 +-+------------------------------------------------------------------+
hackbench.time.system_time
36000 +-+-----------------------------------------------------------------+
| |
34000 +-+ ...+..... +.... .+..... |
32000 +-+...+.....+... +.. .. . ... . |
| .. . +. +.....+.....|
30000 +-+ . .. |
| + |
28000 +-+ |
| |
26000 +-+ |
24000 +-+ |
| |
22000 +-+ |
O O O O O O O O O O |
20000 +-+-----------------------------------------------------------------+
hackbench.time.percent_of_cpu_this_job_got
8450 +-+------------------------------------------------------------------+
8400 +-+ ...+.. .+...... |
|..... ...+.. .. .. +..... |
8350 +-+ +... . .. +.....+.....+...... ...|
8300 +-+ + +.. |
8250 +-+ |
8200 +-+ |
| |
8150 +-+ |
8100 +-+ |
8050 +-+ |
8000 +-+ |
O O O O O O O O O |
7950 +-+ O |
7900 +-+------------------------------------------------------------------+
hackbench.time.elapsed_time
420 +-+-------------------------------------------------------------------+
| ... . +..... .+.... |
400 +-+...+......+. +.. + . ... . ...+.....|
| . + +. +... |
380 +-+ . + |
| .. + |
360 +-+ + |
| + |
340 +-+ |
| |
320 +-+ |
| |
300 +-+ O O |
O O O O O O O O |
280 +-+-------------------------------------------------------------------+
hackbench.time.elapsed_time.max
420 +-+-------------------------------------------------------------------+
| ... . +..... .+.... |
400 +-+...+......+. +.. + . ... . ...+.....|
| . + +. +... |
380 +-+ . + |
| .. + |
360 +-+ + |
| + |
340 +-+ |
| |
320 +-+ |
| |
300 +-+ O O |
O O O O O O O O |
280 +-+-------------------------------------------------------------------+
hackbench.time.minor_page_faults
1.1e+07 +-+---------------------------------------------------------------+
| O |
1e+07 O-+ O O O O O O O O |
9e+06 +-+ |
| |
8e+06 +-+ |
| |
7e+06 +-+ |
| |
6e+06 +-+ |
5e+06 +-+ |
| |
4e+06 +-+...+.....+.....+.....+.....+.....+.....+.....+.....+.....+.....|
| |
3e+06 +-+---------------------------------------------------------------+
hackbench.time.voluntary_context_switches
2.6e+09 +-+---------------------------------------------------------------+
O O O O O O O O O |
2.4e+09 +-+ O |
2.2e+09 +-+ |
| |
2e+09 +-+ |
1.8e+09 +-+ |
| |
1.6e+09 +-+ |
1.4e+09 +-+ |
| |
1.2e+09 +-+ |
1e+09 +-+ ...+.....+..... |
|.....+.. +.....+.....+.....+.....+.....+.....+.....|
8e+08 +-+---------------------------------------------------------------+
hackbench.time.involuntary_context_switches
4e+08 +-+---------------------------------------------------------------+
| |
3.5e+08 O-+ O O O O O |
| O O O O |
| |
3e+08 +-+ |
| |
2.5e+08 +-+ |
| |
2e+08 +-+ |
| |
| |
1.5e+08 +-+...+.....+.....+.....+.... .+..... ...+..... |
|.. . ... +.. +.....+.....|
1e+08 +-+---------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[cfg80211] 30f2a38b6c: hwsim.scan_multi_bssid_check_ie.fail
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 30f2a38b6c0a12d5298519c55d56c0a49563e382 ("cfg80211: don't skip multi-bssid index element")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: hwsim
with following parameters:
group: hwsim-13
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 4G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
+-------------------------------------------------+------------+------------+
| | c772f036eb | 30f2a38b6c |
+-------------------------------------------------+------------+------------+
| boot_successes | 0 | 1 |
| boot_failures | 4 | 7 |
| BUG:kernel_reboot-without-warning_in_test_stage | 4 | 7 |
| hwsim.scan_multi_bssid_check_ie.fail | 0 | 7 |
+-------------------------------------------------+------------+------------+
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
2019-04-20 20:07:06 ./run-tests.py scan_multi_bssid_check_ie
DEV: wlan0: 02:00:00:00:00:00
DEV: wlan1: 02:00:00:00:01:00
DEV: wlan2: 02:00:00:00:02:00
APDEV: wlan3
APDEV: wlan4
START scan_multi_bssid_check_ie 1/1
Test: Scan and check if nontransmitting BSS inherits IE from transmitting BSS
Starting AP wlan3
trans_bss beacon_ie: [0, 1, 3, 5, 71, 42, 45, 61, 50, 59, 221, 127]
nontrans_bss1 beacon_ie: [0, 1, 3, 5, 42, 45, 61, 50, 85, 59, 221, 127]
check IE failed
Traceback (most recent call last):
File "./run-tests.py", line 504, in main
t(dev, apdev)
File "/lkp/benchmarks/hwsim/tests/hwsim/test_scan.py", line 1704, in test_scan_multi_bssid_check_ie
raise Exception("check IE failed")
Exception: check IE failed
FAIL scan_multi_bssid_check_ie 0.529016 2019-04-20 20:07:07.780295
passed 0 test case(s)
skipped 0 test case(s)
failed tests: scan_multi_bssid_check_ie
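For reference, the failing check comes down to the two beacon_ie element-ID lists printed above: the nontransmitting BSS is expected to inherit elements from the transmitting BSS, and here the lists diverge. A minimal sketch of that comparison in Python (hypothetical variable names, not the actual hwsim test code; element-ID meanings assume standard IEEE 802.11 numbering):
# Hypothetical sketch (not the actual tests/hwsim/test_scan.py code): compare the
# element IDs reported for the transmitting and nontransmitting BSS in the log above.
trans_ids    = [0, 1, 3, 5, 71, 42, 45, 61, 50, 59, 221, 127]   # trans_bss beacon_ie
nontrans_ids = [0, 1, 3, 5, 42, 45, 61, 50, 85, 59, 221, 127]   # nontrans_bss1 beacon_ie
print(sorted(set(trans_ids) - set(nontrans_ids)))    # [71] only in the transmitting BSS
print(sorted(set(nontrans_ids) - set(trans_ids)))    # [85] only in the nontransmitting BSS
The 71/85 split lines up with the Multiple BSSID and Multiple BSSID-Index elements, which is consistent with the commit under test changing how the index element is carried over into the nontransmitted BSS entry.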
To reproduce:
# build kernel
cd linux
cp config-5.1.0-rc3-00703-g30f2a38 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 modules_prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 SHELL=/bin/bash
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
[block] 8a96a0e408: xfstests.generic.349.fail
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 8a96a0e408102fb7aa73d8aa0b5e2219cfd51e55 ("block: rewrite blk_bvec_map_sg to avoid a nth_page call")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: xfstests
with following parameters:
disk: 4HDD
fs: f2fs
test: generic-group4
test-description: xfstests is a regression test suite for xfs and other filesystems.
test-url: git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 4G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
2019-04-20 11:13:51 export TEST_DIR=/fs/vda
2019-04-20 11:13:51 export TEST_DEV=/dev/vda
2019-04-20 11:13:51 export FSTYP=f2fs
2019-04-20 11:13:51 export SCRATCH_MNT=/fs/scratch
2019-04-20 11:13:51 mkdir /fs/scratch -p
2019-04-20 11:13:51 export SCRATCH_DEV=/dev/vdd
2019-04-20 11:13:51 export MKFS_OPTIONS=-f
2019-04-20 11:13:51 sed "s:^:generic/:" /lkp/lkp/src/pack/xfstests-addon/tests/generic-group4 | grep -F -f merged_ignored_files
2019-04-20 11:13:51 sed "s:^:generic/:" /lkp/lkp/src/pack/xfstests-addon/tests/generic-group4 | grep -v -F -f merged_ignored_files
2019-04-20 11:13:51 ./check generic/349 generic/350 generic/352 generic/353 generic/354 generic/355 generic/356 generic/357 generic/358 generic/359 generic/360 generic/361 generic/362 generic/363 generic/364 generic/365 generic/366 generic/367 generic/368 generic/369 generic/370 generic/371 generic/372 generic/373 generic/374 generic/375 generic/376 generic/377 generic/378 generic/379 generic/380 generic/381 generic/382 generic/383 generic/384 generic/385
FSTYP -- f2fs
PLATFORM -- Linux/x86_64 vm-snb-4G-754 5.1.0-rc3-00083-g8a96a0e4
MKFS_OPTIONS -- -f /dev/vdd
MOUNT_OPTIONS -- -o acl,user_xattr /dev/vdd /fs/scratch
generic/349 - output mismatch (see /lkp/benchmarks/xfstests/results//generic/349.out.bad)
--- tests/generic/349.out 2019-04-17 19:35:39.000000000 +0800
+++ /lkp/benchmarks/xfstests/results//generic/349.out.bad 2019-04-20 11:14:02.972755404 +0800
@@ -4,8 +4,8 @@
Zero range without keep_size
Zero range past EOD
Check contents
-f0cb9070c098aa347f664bead3a219d9 SCSI_DEBUG_DEV
+cc742ba48530d0c2463fa34c9e8ed083 SCSI_DEBUG_DEV
Zero range to MAX_LFS_FILESIZE
Check contents
...
(Run 'diff -u /lkp/benchmarks/xfstests/tests/generic/349.out /lkp/benchmarks/xfstests/results//generic/349.out.bad' to see the entire diff)
generic/350 - output mismatch (see /lkp/benchmarks/xfstests/results//generic/350.out.bad)
--- tests/generic/350.out 2019-04-17 19:35:39.000000000 +0800
+++ /lkp/benchmarks/xfstests/results//generic/350.out.bad 2019-04-20 11:14:06.200755404 +0800
@@ -3,8 +3,8 @@
Zero punch
Punch range past EOD
Check contents
-8c6a3fd51601141b56eaebbab3746156 SCSI_DEBUG_DEV
+2fc1a274684fc1d3951f3612a0e58aee SCSI_DEBUG_DEV
Punch to MAX_LFS_FILESIZE
Check contents
...
(Run 'diff -u /lkp/benchmarks/xfstests/tests/generic/350.out /lkp/benchmarks/xfstests/results//generic/350.out.bad' to see the entire diff)
generic/352 [not run] Reflink not supported by scratch filesystem type: f2fs
generic/353 [not run] Reflink not supported by scratch filesystem type: f2fs
generic/354 9s
generic/355 3s
generic/356 [not run] Reflink not supported by scratch filesystem type: f2fs
generic/357 [not run] Reflink not supported by scratch filesystem type: f2fs
generic/358 [not run] Reflink not supported by scratch filesystem type: f2fs
generic/359 [not run] Reflink not supported by scratch filesystem type: f2fs
generic/360 2s
generic/361 3s
generic/362 [not run] this test requires richacl support on $SCRATCH_DEV
generic/363 [not run] this test requires richacl support on $SCRATCH_DEV
generic/364 [not run] this test requires richacl support on $SCRATCH_DEV
generic/365 [not run] this test requires richacl support on $SCRATCH_DEV
generic/366 [not run] this test requires richacl support on $SCRATCH_DEV
generic/367 [not run] this test requires richacl support on $SCRATCH_DEV
generic/368 [not run] this test requires richacl support on $SCRATCH_DEV
generic/369 [not run] this test requires richacl support on $SCRATCH_DEV
generic/370 [not run] this test requires richacl support on $SCRATCH_DEV
generic/371 24s
generic/372 [not run] Reflink not supported by scratch filesystem type: f2fs
generic/373 [not run] Reflink not supported by scratch filesystem type: f2fs
generic/374 [not run] Dedupe not supported by scratch filesystem type: f2fs
generic/375 3s
generic/376 3s
generic/377 7s
generic/378 3s
generic/379 [not run] disk quotas not supported by this filesystem type: f2fs
generic/380 [not run] disk quotas not supported by this filesystem type: f2fs
generic/381 [not run] disk quotas not supported by this filesystem type: f2fs
generic/382 [not run] disk quotas not supported by this filesystem type: f2fs
generic/383 [not run] disk quotas not supported by this filesystem type: f2fs
generic/384 [not run] disk quotas not supported by this filesystem type: f2fs
generic/385 [not run] disk quotas not supported by this filesystem type: f2fs
Ran: generic/349 generic/350 generic/352 generic/353 generic/354 generic/355 generic/356 generic/357 generic/358 generic/359 generic/360 generic/361 generic/362 generic/363 generic/364 generic/365 generic/366 generic/367 generic/368 generic/369 generic/370 generic/371 generic/372 generic/373 generic/374 generic/375 generic/376 generic/377 generic/378 generic/379 generic/380 generic/381 generic/382 generic/383 generic/384 generic/385
Not run: generic/352 generic/353 generic/356 generic/357 generic/358 generic/359 generic/362 generic/363 generic/364 generic/365 generic/366 generic/367 generic/368 generic/369 generic/370 generic/372 generic/373 generic/374 generic/379 generic/380 generic/381 generic/382 generic/383 generic/384 generic/385
Failures: generic/349 generic/350
Failed 2 of 36 tests
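Both failures are content-check mismatches: after the zero-range/punch operations the tests compute an md5 over the scsi_debug device, and with this commit the checksum no longer matches the golden output quoted above. When chasing this kind of mismatch by hand, hashing the device the same way is a quick first step; a minimal sketch in Python (hypothetical helper, not part of xfstests):
# Hypothetical helper (not part of xfstests): whole-device md5, the same kind of
# checksum the "Check contents" lines in the .out files compare.
import hashlib
import sys

def device_md5(path, chunk_size=1 << 20):
    h = hashlib.md5()
    with open(path, "rb") as f:
        while True:
            buf = f.read(chunk_size)
            if not buf:
                break
            h.update(buf)
    return h.hexdigest()

if __name__ == "__main__":
    # e.g. device_md5("/dev/sdX") for the scsi_debug device the tests set up
    print(device_md5(sys.argv[1]))
Comparing the hash before and after the offending zero-range operation (or against a known-good kernel) narrows down whether the affected range actually reads back as zeros.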
To reproduce:
# build kernel
cd linux
cp config-5.1.0-rc3-00083-g8a96a0e4 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 modules_prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 SHELL=/bin/bash
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
Thanks,
Rong Chen