[mm/slab_common] 8dbaaff323f: kernel BUG at mm/slab.c:2276!
by Huang Ying
FYI, we noticed the following changes on
git://git.cmpxchg.org/linux-mmotm.git master
commit 8dbaaff323f0621832a5594d100664584290699b ("mm/slab_common: support the slub_debug boot option on specific object size")
+------------------------------------------+------------+------------+
| | 511d85c267 | 8dbaaff323 |
+------------------------------------------+------------+------------+
| boot_successes | 19 | 0 |
| boot_failures | 1 | 10 |
| BUG:kernel_boot_crashed | 1 | |
| kernel_BUG_at_mm/slab.c | 0 | 10 |
| invalid_opcode | 0 | 10 |
| EIP_is_at__kmem_cache_create | 0 | 10 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 10 |
| backtrace:create_kmalloc_caches | 0 | 10 |
| backtrace:kmem_cache_init | 0 | 10 |
+------------------------------------------+------------+------------+
[ 0.000000] .text : 0x89800000 - 0x89d85d4e (5655 kB)
[ 0.000000] Checking if this processor honours the WP bit even in supervisor mode...Ok.
[ 0.000000] ------------[ cut here ]------------
[ 0.000000] kernel BUG at mm/slab.c:2276!
[ 0.000000] invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC
[ 0.000000] Modules linked in:
[ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.1.0-rc3-mm1-00255-ge55a381 #283
[ 0.000000] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 0.000000] task: 8a074b80 ti: 8a06e000 task.ti: 8a06e000
[ 0.000000] EIP: 0060:[<898dae2a>] EFLAGS: 00210087 CPU: 0
[ 0.000000] EIP is at __kmem_cache_create+0x2cb/0x307
[ 0.000000] EAX: 00000000 EBX: 00000002 ECX: 00000000 EDX: 00000000
[ 0.000000] ESI: 8009a0e0 EDI: 80002800 EBP: 8a06ff8c ESP: 8a06ff4c
[ 0.000000] DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068
[ 0.000000] CR0: 8005003b CR2: ffd16000 CR3: 0a2c5000 CR4: 00000690
[ 0.000000] Stack:
[ 0.000000] 00000000 00001002 00000001 00000800 0009a0e0 0000001f 00000000 00001000
[ 0.000000] 00000000 00000000 ffffffe0 00001000 00000020 8009a0e0 000000c0 89f7d56a
[ 0.000000] 8a06ffa0 8a108317 8009a0e0 89f7d56a 000000c0 8a06ffbc 8a108372 00002000
[ 0.000000] Call Trace:
[ 0.000000] [<8a108317>] create_boot_cache+0x2f/0x4f
[ 0.000000] [<8a108372>] create_kmalloc_cache+0x3b/0x67
[ 0.000000] [<8a1083e3>] create_kmalloc_caches+0x2b/0x51
[ 0.000000] [<8a1098f9>] kmem_cache_init+0xec/0xef
[ 0.000000] [<8a0f18d2>] start_kernel+0x1c0/0x387
[ 0.000000] [<8a0f12b7>] i386_start_kernel+0x85/0x89
[ 0.000000] Code: 76 30 89 46 2c 8b 45 ec 89 46 10 e8 fa 8a 15 00 85 ff 89 46 14 89 56 18 79 13 31 d2 89 d8 e8 7d 4c fe ff 83 f8 10 89 46 34 77 02 <0f> 0b 8b 55 d8 89 f0 e8 7c e7 49 00 89 c3 31 c0 85 db 74 20 89
[ 0.000000] EIP: [<898dae2a>] __kmem_cache_create+0x2cb/0x307 SS:ESP 0068:8a06ff4c
[ 0.000000] ---[ end trace 38833c3a95f5478f ]---
[ 0.000000] Kernel panic - not syncing: Fatal exception
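For context, the commit under test extends the `slub_debug` boot option. The long-standing syntax documented in Documentation/vm/slub.txt takes optional debug flags followed by a cache-name filter; a minimal sketch (the flag combination and cache name below are illustrative, not taken from this report):

```shell
# Illustrative slub_debug kernel command-line forms (values are examples):
#   F = sanity checks, Z = red zoning, P = poisoning
opts="slub_debug=FZP,kmalloc-128"   # enable debugging only for kmalloc-128
echo "$opts"
```

Such a string would be appended to the kernel command line (e.g. qemu's `-append`) alongside the other boot parameters.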
Thanks,
Ying Huang
_______________________________________________
LKP mailing list
LKP(a)linux.intel.com
[unisys] WARNING: CPU: 0 PID: 1 at fs/sysfs/dir.c:31 sysfs_warn_dup()
by Fengguang Wu
Hi Benjamin,
FYI, your patch triggered the warning below, though it is obviously not the root cause.
git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit 1b27feaa9d7bfd9e68c9f5e8a0039a223916c3bf
Author: Benjamin Romer <benjamin.romer(a)unisys.com>
AuthorDate: Tue May 5 18:37:08 2015 -0400
Commit: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
CommitDate: Fri May 8 15:27:32 2015 +0200
staging: unisys: fix visorbus Kconfig
Removing visorutil made it impossible to build visorbus. Remove the config
setting from the Kconfig so the module can be enabled in the build.
Signed-off-by: Benjamin Romer <benjamin.romer(a)unisys.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
+---------------------------------------------+------------+------------+---------------+
| | 53490b545c | 1b27feaa9d | next-20150514 |
+---------------------------------------------+------------+------------+---------------+
| boot_successes | 57 | 0 | 0 |
| boot_failures | 3 | 20 | 12 |
| BUG:kernel_boot_crashed | 2 | | |
| Unexpected_close,not_stopping_watchdog | 1 | | |
| WARNING:at_fs/sysfs/dir.c:#sysfs_warn_dup() | 0 | 20 | 12 |
| backtrace:sysfs_create_file_ns | 0 | 20 | 12 |
| backtrace:param_sysfs_init | 0 | 20 | 12 |
| backtrace:kernel_init_freeable | 0 | 20 | 12 |
+---------------------------------------------+------------+------------+---------------+
[ 0.119867] PCI: Using configuration type 1 for base access
[ 0.120977] ------------[ cut here ]------------
[ 0.121795] WARNING: CPU: 0 PID: 1 at fs/sysfs/dir.c:31 sysfs_warn_dup+0x5b/0x6b()
[ 0.123337] sysfs: cannot create duplicate filename '/module/visorbus/version'
[ 0.124602] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.1.0-rc2-00246-g1b27fea #103
[ 0.125928] 0000000000000009 ffff880000047d18 ffffffff899d045e ffffffff88f1148b
[ 0.127247] ffff880000047d68 ffff880000047d58 ffffffff88edd4e7 ffff880000047d68
[ 0.128590] ffffffff88fd4512 ffff880000019000 ffffffff89f5cf9f ffff8800001aa980
[ 0.129928] Call Trace:
[ 0.130007] [<ffffffff899d045e>] dump_stack+0x4c/0x65
[ 0.130902] [<ffffffff88f1148b>] ? console_unlock+0x32e/0x35d
[ 0.131923] [<ffffffff88edd4e7>] warn_slowpath_common+0x92/0xac
[ 0.132967] [<ffffffff88fd4512>] ? sysfs_warn_dup+0x5b/0x6b
[ 0.133341] [<ffffffff88edd55c>] warn_slowpath_fmt+0x41/0x43
[ 0.134942] [<ffffffff88fd19ab>] ? kernfs_path+0x48/0x53
[ 0.136435] [<ffffffff88fd4512>] sysfs_warn_dup+0x5b/0x6b
[ 0.136673] [<ffffffff88fd41ab>] sysfs_add_file_mode_ns+0xf3/0x175
[ 0.138398] [<ffffffff88fd4269>] sysfs_create_file_ns+0x27/0x29
[ 0.140008] [<ffffffff8a3c9daf>] param_sysfs_init+0x8d/0x2f7
[ 0.141606] [<ffffffff88f74a9f>] ? slab_free_hook+0x3e/0x46
[ 0.143163] [<ffffffff88f76469>] ? kfree+0x56/0xa7
[ 0.143341] [<ffffffff8a3c9d22>] ? locate_module_kobject+0xa9/0xa9
[ 0.145067] [<ffffffff8a3b3fbf>] do_one_initcall+0xf8/0x181
[ 0.146504] [<ffffffff8a3b4162>] kernel_init_freeable+0x11a/0x1a2
[ 0.146675] [<ffffffff899c749e>] ? rest_init+0xc5/0xc5
[ 0.147581] [<ffffffff899c74a7>] kernel_init+0x9/0xd5
[ 0.148473] [<ffffffff899d9b32>] ret_from_fork+0x42/0x70
[ 0.149401] [<ffffffff899c749e>] ? rest_init+0xc5/0xc5
[ 0.150013] ---[ end trace 2a5c9352f024102c ]---
git bisect start 6c6d91ac5dfb7a78ecbd9a715526da92355b748a 030bbdbf4c833bc69f502eae58498bc5572db736 --
git bisect good 774944164961ab183cc31e8f9e281e1ac2fdd47c # 05:54 20+ 0 Merge remote-tracking branch 'thermal/next'
git bisect good 1d5fce7535fa7e207cbbce8a41c7f7e18b5fa217 # 06:10 20+ 0 Merge remote-tracking branch 'clockevents/clockevents/next'
git bisect bad fe1a7d2078a594e90271dce61e230fe66b446a80 # 06:15 0- 6 next-20150511/target-updates
git bisect good 9dbedfe249c6a1e35deeef9bbf55f59c809778c8 # 06:20 20+ 0 Merge remote-tracking branch 'leds/for-next'
git bisect bad b1761b9aeb4e3c5a2f032e8997d0286159fe982f # 06:46 0- 2 Merge remote-tracking branch 'staging/staging-next'
git bisect good fedfdb4b49bc693cebd4b1e0fa9329b7df46a079 # 06:54 20+ 1 Merge remote-tracking branch 'tty/tty-next'
git bisect good b8ae62e67c4e3d7bbe96fbcaa1399359bc1783bb # 07:11 20+ 0 Merge remote-tracking branch 'usb/usb-next'
git bisect good 1452f37015dbb856572af16d37f23a552e95d494 # 07:34 20+ 0 staging: unisys: remove unused #define MAX_SERIAL_NUM
git bisect bad d53be924c8260266b72244c92399ec68d2eb8871 # 07:46 0- 1 staging: comedi: ni_stc.h: tidy up NI_M_CDIO_STATUS_REG bits
git bisect good 1038a6872802bb4a07f627162ff989bf49e2e5cc # 08:06 20+ 3 iio: magnetometer: support for lsm303dlh
git bisect bad 1b27feaa9d7bfd9e68c9f5e8a0039a223916c3bf # 08:17 0- 20 staging: unisys: fix visorbus Kconfig
git bisect good d24c3f07a6bae3b368681d25aa4eef380aa2e8f0 # 08:37 20+ 1 staging: unisys: memregion: {un, }mapit() are no longer used
git bisect good 434cbf28b5bc1e2694e6c097dba6f111fd61cafb # 08:51 20+ 2 staging: unisys: Finally remove the last remnants of memregion
git bisect good d5b3f1dccee4dedeff8e264b58fbfe90a6033ecf # 09:11 20+ 3 staging: unisys: move timskmod.h functionality
git bisect good c0a14641b1c6b9e85f94370ea3610ca486ca568f # 09:35 20+ 0 staging: unisys: remove timskmod.h and procobjecttree.h
git bisect good 53490b545cb07c9d2e1b36aa680afaf0d435aec8 # 10:10 20+ 0 staging: unisys: move periodic_work.c into the visorbus directory
# first bad commit: [1b27feaa9d7bfd9e68c9f5e8a0039a223916c3bf] staging: unisys: fix visorbus Kconfig
git bisect good 53490b545cb07c9d2e1b36aa680afaf0d435aec8 # 10:12 60+ 3 staging: unisys: move periodic_work.c into the visorbus directory
# extra tests with DEBUG_INFO
git bisect bad 1b27feaa9d7bfd9e68c9f5e8a0039a223916c3bf # 10:20 0- 31 staging: unisys: fix visorbus Kconfig
# extra tests on HEAD of next/master
git bisect bad 6c6d91ac5dfb7a78ecbd9a715526da92355b748a # 10:20 0- 12 Add linux-next specific files for 20150514
# extra tests on tree/branch next/master
git bisect bad 6c6d91ac5dfb7a78ecbd9a715526da92355b748a # 10:20 0- 12 Add linux-next specific files for 20150514
# extra tests with first bad commit reverted
# extra tests on tree/branch linus/master
git bisect good 110bc76729d448fdbcb5cdb63b83d9fd65ce5e26 # 10:26 60+ 2 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
# extra tests on tree/branch next/master
git bisect bad 6c6d91ac5dfb7a78ecbd9a715526da92355b748a # 10:26 0- 12 Add linux-next specific files for 20150514
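The `# first bad commit:` marker in bisect transcripts like the one above can be extracted mechanically; a minimal sketch (the `bisect.log` file name is hypothetical, and the here-document simulates a saved transcript):

```shell
# Extract the first-bad-commit hash from a saved bisect transcript.
cat > bisect.log <<'EOF'
git bisect good 53490b545cb07c9d2e1b36aa680afaf0d435aec8 # 10:10 20+ 0 staging: unisys: move periodic_work.c into the visorbus directory
# first bad commit: [1b27feaa9d7bfd9e68c9f5e8a0039a223916c3bf] staging: unisys: fix visorbus Kconfig
EOF

# Capture only the bracketed hash on the "# first bad commit:" line.
first_bad=$(sed -n 's/^# first bad commit: \[\([0-9a-f]*\)\].*/\1/p' bisect.log)
echo "$first_bad"
```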
This script may reproduce the error.
----------------------------------------------------------------------------
#!/bin/bash
kernel=$1
initrd=yocto-minimal-x86_64.cgz
wget --no-clobber https://github.com/fengguang/reproduce-kernel-bug/raw/master/initrd/$initrd
kvm=(
qemu-system-x86_64
-enable-kvm
-cpu Haswell,+smep,+smap
-kernel $kernel
-initrd $initrd
-m 256
-smp 1
-device e1000,netdev=net0
-netdev user,id=net0
-boot order=nc
-no-reboot
-watchdog i6300esb
-rtc base=localtime
-serial stdio
-display none
-monitor null
)
append=(
hung_task_panic=1
earlyprintk=ttyS0,115200
systemd.log_level=err
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
console=ttyS0,115200
console=tty0
vga=normal
root=/dev/ram0
rw
drbd.minor_count=8
)
"${kvm[@]}" --append "${append[*]}"
----------------------------------------------------------------------------
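The reproducer above relies on bash array expansion: `"${append[*]}"` joins every kernel parameter into a single space-separated string handed to qemu's `--append`, while `"${kvm[@]}"` keeps each qemu argument as a separate word. A minimal illustration of the join (parameter values abbreviated from the script above):

```shell
# "${array[*]}" joins elements with the first character of IFS (a space by default).
append=(
    panic=-1
    console=ttyS0,115200
    root=/dev/ram0
)
cmdline="${append[*]}"
echo "--append $cmdline"
```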
Thanks,
Fengguang
[proc] BUG: unable to handle kernel NULL pointer dereference at (null)
by Fengguang Wu
Hi Eric,
the 0day kernel testing robot got the dmesg below, and the first bad commit is
git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace.git for-testing
commit c13736a7d54593e4f2bd2ae981b071a89c830511
Author: Eric W. Biederman <ebiederm(a)xmission.com>
AuthorDate: Mon May 11 16:44:25 2015 -0500
Commit: Eric W. Biederman <ebiederm(a)xmission.com>
CommitDate: Wed May 13 22:14:33 2015 -0500
proc: Allow creating permanently empty directories.
Add a new function, proc_mk_empty_dir, that creates
a directory that cannot be added to.
Update the code to use make_empty_dir_inode when reporting
a permanently empty directory to the vfs.
Update the code to not allow adding to permanently empty directories.
Update /proc/openprom and /proc/fs/nfsd to be permanently empty directories.
Cc: stable(a)vger.kernel.org
Signed-off-by: "Eric W. Biederman" <ebiederm(a)xmission.com>
+----------------------------------------------------------------+------------+------------+------------+
| | de4e98aaa1 | c13736a7d5 | ab3bbb820f |
+----------------------------------------------------------------+------------+------------+------------+
| boot_successes | 246 | 24 | 5 |
| boot_failures | 8 | 39 | 8 |
| WDT_device_closed_unexpectedly.WDT_will_not_stop | 1 | | |
| BUG:kernel_early_crashed_without_any_printk_output | 1 | | |
| BUG:kernel_boot_crashed | 4 | | |
| IP-Config:Auto-configuration_of_network_failed | 2 | | |
| BUG:unable_to_handle_kernel | 0 | 38 | 7 |
| Oops | 0 | 31 | 8 |
| RIP:d_flags_for_inode | 0 | 29 | 8 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 38 | 8 |
| backtrace:SYSC_newfstatat | 0 | 33 | 6 |
| backtrace:SyS_newfstatat | 0 | 39 | 6 |
| Kernel_panic-not_syncing:Fatal(#exception | 0 | 1 | |
| RIP:#:[<#>][<ffffffff811ef_ce7>]d_flags_for_inode | 0 | 1 | |
| RIP:#:[<#>]#d)[on<ffdf#f48f[ffff811efce7>]d_flags_for_inode | 0 | 1 | |
| RIP:#:[<#>]t(##[d<#cf13f00f0_f4ffff811efce7>]d_flags_for_inode | 0 | 1 | |
| Oops:#[##]PREEMPT#S34M[P1:#:#] | 0 | 1 | |
| RIP:#:[<#>]#d[#)<ofn_ffdf#>]d_flags_for_inode | 0 | 1 | |
| WARNING:at_fs/proc/inode.c:#proc_get_inode() | 0 | 0 | 2 |
+----------------------------------------------------------------+------------+------------+------------+
[main] Setsockopt(1 2e 19cc000 4) on fd 194 [1:5:1]
[main] Setsockopt(10e 3 19cc000 4) on fd 196 [16:3:15]
[main] Setsockopt(1 1 19cc000 d6) on fd 197 [1:2:1]
[ 34.918644] BUG: unable to handle kernel NULL pointer dereference at (null)
[ 34.920027] IP: [<ffffffff811efce7>] d_flags_for_inode+0x3d/0xbe
[ 34.920029] PGD 9230067 PUD 91c9067 PMD 0
[ 34.920029] Oops: 0000 [#1] PREEMPT SMP
[ 34.920029] Modules linked in:
[ 34.920029] CPU: 0 PID: 377 Comm: trinity-main Tainted: G W 4.1.0-rc1-00007-gc13736a #1
[ 34.920029] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 34.920029] task: ffff880009194000 ti: ffff8800092ac000 task.ti: ffff8800092ac000
[ 34.920029] RIP: 0010:[<ffffffff811efce7>] [<ffffffff811efce7>] d_flags_for_inode+0x3d/0xbe
[ 34.920029] RSP: 0018:ffff8800092afb78 EFLAGS: 00010202
[ 34.920029] RAX: 0000000000200000 RBX: ffff88000e191d80 RCX: 0000000000000000
[ 34.920029] RDX: 8c6318c6318c0000 RSI: ffff88000a1b0ab8 RDI: ffff88000a1b0ab8
[ 34.920029] RBP: ffff8800092afb78 R08: 0000000000000002 R09: 00000000000ba05c
[ 34.920029] R10: 0000000000024c82 R11: ffff880009194b01 R12: ffff88000a1b0ab8
[ 34.920029] R13: ffff88000e191e10 R14: ffff8800092afce8 R15: ffff88000e129d80
[ 34.920029] FS: 00007f9fa949c700(0000) GS:ffff880011000000(0000) knlGS:0000000000000000
[ 34.920029] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 34.920029] CR2: 0000000000000000 CR3: 0000000009240000 CR4: 00000000000006b0
[ 34.920029] Stack:
[ 34.920029] ffff8800092afba8 ffffffff811f1e1c ffff88000a1b0ab8 ffff88000e191d80
[ 34.920029] ffff8800084e8ab8 ffff8800092afce8 ffff8800092afbc8 ffffffff811f21e8
[ 34.920029] ffff88000e191d80 ffff88000a1b0ab8 ffff8800092afbf8 ffffffff81254d6d
[ 34.920029] Call Trace:
[ 34.920029] [<ffffffff811f1e1c>] __d_instantiate+0x27/0x11f
[ 34.920029] [<ffffffff811f21e8>] d_instantiate+0x58/0x83
[ 34.920029] [<ffffffff81254d6d>] proc_lookup_de+0xc3/0x10d
[ 34.920029] [<ffffffff81254dd4>] proc_lookup+0x1d/0x26
[ 34.920029] [<ffffffff811e217d>] lookup_real+0x3c/0x6c
[ 34.920029] [<ffffffff811e2bbf>] __lookup_hash+0x43/0x51
[ 34.920029] [<ffffffff811e6834>] walk_component+0xc2/0x280
[ 34.920029] [<ffffffff811e75d5>] ? path_init+0x4b4/0x4ca
[ 34.920029] [<ffffffff811e6a35>] lookup_last+0x43/0x4c
[ 34.920029] [<ffffffff811e7a86>] path_lookupat+0x77/0x3fb
[ 34.920029] [<ffffffff811e84d7>] filename_lookup+0x25/0x80
[ 34.920029] [<ffffffff811e9ed0>] user_path_at_empty+0x75/0xc8
[ 34.920029] [<ffffffff810feb44>] ? lock_release+0x2c6/0x345
[ 34.920029] [<ffffffff811e9f36>] user_path_at+0x13/0x1c
[ 34.920029] [<ffffffff811db9bd>] vfs_fstatat+0x6a/0xd8
[ 34.920029] [<ffffffff811dbc13>] SYSC_newfstatat+0x1c/0x4e
[ 34.920029] [<ffffffff81172990>] ? context_tracking_user_exit+0x15/0x1e
[ 34.920029] [<ffffffff810159b1>] ? user_exit+0x27/0x30
[ 34.920029] [<ffffffff81016495>] ? syscall_trace_enter_phase1+0x7a/0x16d
[ 34.920029] [<ffffffff81432c98>] ? trace_hardirqs_on_thunk+0x17/0x19
[ 34.920029] [<ffffffff811dc2b5>] SyS_newfstatat+0x10/0x19
[ 34.920029] [<ffffffff816a37ae>] system_call_fastpath+0x12/0x76
[ 34.920029] Code: a6 00 00 00 8b 0f 66 8b 57 02 66 81 e1 00 f0 66 81 f9 00 40 75 2b f6 c2 02 b8 00 00 20 00 75 71 48 8b 4f 20 48 ff 05 a9 20 f2 01 <48> 83 39 00 74 5b 83 ca 02 48 ff 05 a1 20 f2 01 66 89 57 02 eb
[ 34.920029] RIP [<ffffffff811efce7>] d_flags_for_inode+0x3d/0xbe
[ 34.920029] RSP <ffff8800092afb78>
[ 34.920029] CR2: 0000000000000000
[ 34.920029] ---[ end trace e8397d0f1581dfed ]---
[ 34.920029] Kernel panic - not syncing: Fatal exception
git bisect start ab3bbb820f2c15ff6b0a16f767ecd1665791e708 7e96c1b0e0f495c5a7450dc4aa7c9a24ba4305bd --
git bisect good d5044ae0735305c58622d9985afe597c7dfd4dd3 # 14:42 63+ 0 fs: Add helper functions for permanently empty directories.
git bisect bad c13736a7d54593e4f2bd2ae981b071a89c830511 # 14:42 0- 39 proc: Allow creating permanently empty directories.
git bisect good de4e98aaa1c83ff99baedf65c1226e3e1475c303 # 14:51 63+ 1 sysctl: Allow creating permanently empty directories.
# first bad commit: [c13736a7d54593e4f2bd2ae981b071a89c830511] proc: Allow creating permanently empty directories.
git bisect good de4e98aaa1c83ff99baedf65c1226e3e1475c303 # 03:12 189+ 8 sysctl: Allow creating permanently empty directories.
# extra tests with DEBUG_INFO
git bisect good c13736a7d54593e4f2bd2ae981b071a89c830511 # 03:33 189+ 189 proc: Allow creating permanently empty directories.
# extra tests on HEAD of linux-devel/devel-lkp-nhm1-smoke-201505141302
# extra tests on tree/branch userns/for-testing
git bisect bad a524faf520600968e58bbc732063fccf2fdf9199 # 05:03 21- 16 mnt: Update fs_fully_visible to test for permanently empty directories
# extra tests on tree/branch linus/master
git bisect good 110bc76729d448fdbcb5cdb63b83d9fd65ce5e26 # 05:11 189+ 14 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
# extra tests on tree/branch next/master
git bisect good 6c6d91ac5dfb7a78ecbd9a715526da92355b748a # 05:39 189+ 10 Add linux-next specific files for 20150514
This script may reproduce the error.
----------------------------------------------------------------------------
#!/bin/bash
kernel=$1
initrd=quantal-core-x86_64.cgz
wget --no-clobber https://github.com/fengguang/reproduce-kernel-bug/raw/master/initrd/$initrd
kvm=(
qemu-system-x86_64
-enable-kvm
-cpu kvm64
-kernel $kernel
-initrd $initrd
-m 300
-smp 2
-device e1000,netdev=net0
-netdev user,id=net0
-boot order=nc
-no-reboot
-watchdog i6300esb
-rtc base=localtime
-serial stdio
-display none
-monitor null
)
append=(
hung_task_panic=1
earlyprintk=ttyS0,115200
systemd.log_level=err
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
console=ttyS0,115200
console=tty0
vga=normal
root=/dev/ram0
rw
drbd.minor_count=8
)
"${kvm[@]}" --append "${append[*]}"
----------------------------------------------------------------------------
Thanks,
Fengguang
[x86/smpboot] f5d6a52f511: BUG: kernel boot hang
by Huang Ying
FYI, we noticed the following changes on
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/apic
commit f5d6a52f511157c7476590532a23b5664b1ed877 ("x86/smpboot: Skip delays during SMP initialization similar to Xen")
+------------------------------------------------+------------+------------+
| | 19e3d60d49 | f5d6a52f51 |
+------------------------------------------------+------------+------------+
| boot_successes | 20 | 10 |
| boot_failures | 2 | 12 |
| IP-Config:Auto-configuration_of_network_failed | 2 | 2 |
| BUG:kernel_boot_hang | 0 | 10 |
+------------------------------------------------+------------+------------+
[ 0.000000] Initializing CPU#1
[ 1.586595] kvm-clock: cpu 1, msr 0:13fdf041, secondary cpu clock
BUG: kernel boot hang
Elapsed time: 305
qemu-system-i386 -enable-kvm -kernel /pkg/linux/i386-randconfig-c0-05111038/gcc-4.9/be67584d15684730aeed88cab355c5de8b0491fe/vmlinuz-4.1.0-rc3-01147-gbe67584 -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-kbuild-yocto-i386-3/rand_boot-1-yocto-minimal-i386.cgz-i386-randconfig-c0-05111038-be67584d15684730aeed88cab355c5de8b0491fe-1-20150512-31766-1fzr1qi.yaml ARCH=i386 kconfig=i386-randconfig-c0-05111038 branch=linux-devel/devel-cairo-smoke-201505120219 commit=be67584d15684730aeed88cab355c5de8b0491fe BOOT_IMAGE=/pkg/linux/i386-randconfig-c0-05111038/gcc-4.9/be67584d15684730aeed88cab355c5de8b0491fe/vmlinuz-4.1.0-rc3-01147-gbe67584 max_uptime=600 RESULT_ROOT=/result/boot/1/vm-kbuild-yocto-i386/yocto-minimal-i386.cgz/i386-randconfig-c0-05111038/gcc-4.9/be67584d15684730aeed88cab355c5de8b0491fe/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-kbuild-yocto-i386-3::dhcp drbd.minor_count=8' -initrd /fs/sdc1/initrd-vm-kbuild-yocto-i386-3 -m 320 -smp 2 -device e1000,netdev=net0 -netdev user,id=net0 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -drive file=/fs/sdc1/disk0-vm-kbuild-yocto-i386-3,media=disk,if=virtio -pidfile /dev/shm/kboot/pid-vm-kbuild-yocto-i386-3 -serial file:/dev/shm/kboot/serial-vm-kbuild-yocto-i386-3 -daemonize -display none -monitor null
Thanks,
Ying Huang
[mm/slab_common] fdbacded3ac: kernel BUG at mm/slab.c:2276!
by Huang Ying
FYI, we noticed the following changes on
git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit fdbacded3ac5e74b7d233b7af517fe1ba7f39fc8 ("mm/slab_common: support the slub_debug boot option on specific object size")
+------------------------------------------------------------------+------------+------------+
| | 40f716590c | fdbacded3a |
+------------------------------------------------------------------+------------+------------+
| boot_successes | 0 | 0 |
| boot_failures | 23 | 22 |
| Kernel_panic-not_syncing:Out_of_memory_and_no_killable_processes | 16 | |
| backtrace:__hid_register_driver | 1 | |
| backtrace:logi_djreceiver_driver_init | 1 | |
| backtrace:kernel_init_freeable | 3 | |
| BUG:kernel_boot_oversize | 1 | |
| Unexpected_close,not_stopping_watchdog | 1 | |
| Out_of_memory:Kill_process | 18 | |
| backtrace:do_execveat_common | 3 | |
| backtrace:compat_SyS_execve | 3 | |
| backtrace:do_fork | 9 | |
| backtrace:SyS_clone | 8 | |
| backtrace:vm_munmap | 1 | |
| backtrace:SyS_munmap | 1 | |
| backtrace:do_sys_open | 1 | |
| backtrace:compat_SyS_open | 1 | |
| backtrace:vm_mmap_pgoff | 1 | |
| backtrace:SyS_mmap_pgoff | 1 | |
| backtrace:vfs_read | 2 | |
| backtrace:SyS_read | 2 | |
| backtrace:register_netdev | 2 | |
| backtrace:rose_proto_init | 2 | |
| backtrace:pgd_alloc | 1 | |
| backtrace:mm_init | 1 | |
| backtrace:do_execve | 1 | |
| backtrace:SyS_execve | 1 | |
| backtrace:vfs_write | 1 | |
| backtrace:SyS_write | 1 | |
| kernel_BUG_at_mm/slab.c | 0 | 22 |
| invalid_opcode | 0 | 22 |
| RIP:__kmem_cache_create | 0 | 22 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 22 |
| backtrace:create_kmalloc_caches | 0 | 22 |
| backtrace:kmem_cache_init | 0 | 22 |
+------------------------------------------------------------------+------------+------------+
[ 0.000000] Calgary: Unable to locate Rio Grande table in EBDA - bailing!
[ 0.000000] Memory: 493836K/589304K available (31054K kernel code, 4505K rwdata, 15064K rodata, 4540K init, 17360K bss, 95468K reserved, 0K cma-reserved)
[ 0.000000] ------------[ cut here ]------------
[ 0.000000] kernel BUG at mm/slab.c:2276!
[ 0.000000] invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC
[ 0.000000] Modules linked in:
[ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.1.0-rc3-next-20150511 #140
[ 0.000000] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 0.000000] task: ffffffff8401b580 ti: ffffffff84000000 task.ti: ffffffff84000000
[ 0.000000] RIP: 0010:[<ffffffff811e958a>] [<ffffffff811e958a>] __kmem_cache_create+0x324/0x375
[ 0.000000] RSP: 0000:ffffffff84003e38 EFLAGS: 00010087
[ 0.000000] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000001
[ 0.000000] RDX: 0000000000000001 RSI: 0000000000000000 RDI: 0000000000000000
[ 0.000000] RBP: ffffffff84003e98 R08: 0000000000001000 R09: 0000000000000001
[ 0.000000] R10: 0000000000000040 R11: ffffffffffffffc0 R12: ffff880023006f80
[ 0.000000] R13: 0000000000000000 R14: 0000000080000000 R15: 0000000000000040
[ 0.000000] FS: 0000000000000000(0000) GS:ffff880023600000(0000) knlGS:0000000000000000
[ 0.000000] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 0.000000] CR2: ffff8800239ff000 CR3: 0000000004014000 CR4: 00000000000006b0
[ 0.000000] Stack:
[ 0.000000] ffffffff84003e48 0000000000000000 0000000080000000 0000000000001001
[ 0.000000] 0000000100000046 0000000000000001 0000000000008000 ffff880023006f80
[ 0.000000] 0000000000000060 ffffffff83a917c8 0000000000002000 0000ffffffff8464
[ 0.000000] Call Trace:
[ 0.000000] [<ffffffff84674896>] create_boot_cache+0x3c/0x66
[ 0.000000] [<ffffffff8467490f>] create_kmalloc_cache+0x4f/0x79
[ 0.000000] [<ffffffff84674998>] create_kmalloc_caches+0x44/0xe5
[ 0.000000] [<ffffffff846771af>] kmem_cache_init+0x155/0x159
[ 0.000000] [<ffffffff8111ebd1>] ? console_unlock+0x392/0x3c1
[ 0.000000] [<ffffffff84643d04>] start_kernel+0x214/0x46f
[ 0.000000] [<ffffffff84643120>] ? early_idt_handlers+0x120/0x120
[ 0.000000] [<ffffffff846434c6>] x86_64_start_reservations+0x2a/0x2c
[ 0.000000] [<ffffffff846435f5>] x86_64_start_kernel+0x12d/0x13c
[ 0.000000] Code: 44 24 2c e8 55 51 57 00 4d 85 f6 49 89 44 24 18 48 8b 4d c8 74 17 31 f6 48 89 cf e8 18 49 fd ff 48 83 f8 10 49 89 44 24 40 77 02 <0f> 0b 44 89 ee 4c 89 e7 e8 f8 d9 c2 01 89 c3 31 c0 85 db 74 2d
[ 0.000000] RIP [<ffffffff811e958a>] __kmem_cache_create+0x324/0x375
[ 0.000000] RSP <ffffffff84003e38>
[ 0.000000] ---[ end trace 068e58c69eab2901 ]---
[ 0.000000] Kernel panic - not syncing: Fatal exception
Thanks,
Ying Huang
[sched] eae3e9e8843: +36.8% pigz.throughput
by Huang Ying
FYI, we noticed the following changes on
git://bee.sh.intel.com/git/ydu19/linux rewrite-v7-on-4.1-rc1
commit eae3e9e8843146e7e1cc77bd943e5f8138b61314 ("sched: Rewrite per entity runnable load average tracking")
testcase/path_params/tbox_group: pigz/performance-100%-128K/lkp-nex06
40fa32019d8574cb eae3e9e8843146e7e1cc77bd94
---------------- --------------------------
2.683e+08 ± 0% +36.8% 3.67e+08 ± 0% pigz.throughput
18661 ± 0% -21.6% 14625 ± 0% pigz.time.user_time
2046066 ± 0% +176.7% 5661440 ± 0% pigz.time.voluntary_context_switches
174 ± 0% +40.9% 245 ± 0% pigz.time.system_time
904986 ± 1% -43.8% 508883 ± 0% pigz.time.involuntary_context_switches
6268 ± 0% -21.0% 4952 ± 0% pigz.time.percent_of_cpu_this_job_got
3382 ± 8% +118.6% 7392 ± 3% uptime.idle
33576 ± 0% -12.3% 29441 ± 1% meminfo.Shmem
1241653 ± 6% -10.0% 1117099 ± 0% softirqs.RCU
201843 ± 0% +97.2% 397952 ± 0% softirqs.SCHED
9772559 ± 0% -20.6% 7762431 ± 0% softirqs.TIMER
63 ± 0% -17.7% 52 ± 1% vmstat.procs.r
84867 ± 0% -20.0% 67874 ± 0% vmstat.system.in
13057 ± 0% +183.4% 37007 ± 0% vmstat.system.cs
904986 ± 1% -43.8% 508883 ± 0% time.involuntary_context_switches
6268 ± 0% -21.0% 4952 ± 0% time.percent_of_cpu_this_job_got
174 ± 0% +40.9% 245 ± 0% time.system_time
18661 ± 0% -21.6% 14625 ± 0% time.user_time
2046066 ± 0% +176.7% 5661440 ± 0% time.voluntary_context_switches
69.56 ± 0% +7.4% 74.67 ± 0% turbostat.%Busy
1335 ± 0% +24.6% 1663 ± 0% turbostat.Avg_MHz
1920 ± 0% +16.0% 2227 ± 0% turbostat.Bzy_MHz
29.76 ± 0% -44.5% 16.50 ± 1% turbostat.CPU%c1
0.69 ± 5% +1182.5% 8.82 ± 1% turbostat.CPU%c3
0.28 ± 9% +83.0% 0.51 ± 2% turbostat.Pkg%pc3
8379 ± 0% -12.2% 7360 ± 1% proc-vmstat.nr_shmem
26862047 ± 0% +36.0% 36537881 ± 0% proc-vmstat.numa_hit
26861982 ± 0% +36.0% 36537861 ± 0% proc-vmstat.numa_local
9403 ± 0% -12.6% 8221 ± 2% proc-vmstat.pgactivate
2116622 ± 7% +28.9% 2727789 ± 2% proc-vmstat.pgalloc_dma32
24828395 ± 0% +36.6% 33904973 ± 0% proc-vmstat.pgalloc_normal
26931165 ± 0% +36.0% 36619978 ± 0% proc-vmstat.pgfree
6608532 ± 7% +29.1% 8533790 ± 2% numa-numastat.node0.numa_hit
6607488 ± 7% +29.1% 8533243 ± 2% numa-numastat.node0.local_node
6949144 ± 10% +32.9% 9232380 ± 10% numa-numastat.node1.local_node
6950185 ± 10% +32.8% 9232396 ± 10% numa-numastat.node1.numa_hit
6887396 ± 17% +50.7% 10377491 ± 10% numa-numastat.node2.local_node
6889467 ± 17% +50.6% 10378080 ± 10% numa-numastat.node2.numa_hit
6427416 ± 16% +30.8% 8408092 ± 1% numa-numastat.node3.local_node
6429490 ± 16% +30.8% 8408663 ± 1% numa-numastat.node3.numa_hit
23149 ± 4% +100.3% 46357 ± 1% cpuidle.C1-NHM.usage
7161061 ± 6% +326.7% 30557423 ± 1% cpuidle.C1-NHM.time
100163 ± 4% +263.5% 364095 ± 1% cpuidle.C1E-NHM.usage
22529441 ± 3% +390.6% 1.105e+08 ± 1% cpuidle.C1E-NHM.time
3.774e+08 ± 3% +1024.6% 4.244e+09 ± 0% cpuidle.C3-NHM.time
669049 ± 2% +556.3% 4391062 ± 0% cpuidle.C3-NHM.usage
16 ± 15% +506.1% 100 ± 11% cpuidle.POLL.usage
770 ± 40% +544.8% 4966 ± 35% cpuidle.POLL.time
34535 ± 36% -41.8% 20095 ± 8% numa-meminfo.node0.Active(anon)
23133 ± 7% -12.8% 20180 ± 8% numa-meminfo.node0.AnonPages
13351 ± 4% -20.6% 10594 ± 9% numa-meminfo.node0.SReclaimable
28535 ± 5% +15.7% 33003 ± 9% numa-meminfo.node2.Slab
218579 ± 2% +13.3% 247589 ± 1% numa-meminfo.node2.MemUsed
118187 ± 2% +8.3% 127984 ± 9% numa-meminfo.node3.FilePages
226067 ± 4% +10.6% 249934 ± 5% numa-meminfo.node3.MemUsed
2447 ± 9% +18.8% 2908 ± 3% numa-meminfo.node3.KernelStack
5775 ± 7% -12.6% 5047 ± 8% numa-vmstat.node0.nr_anon_pages
3498916 ± 9% +24.1% 4343868 ± 2% numa-vmstat.node0.numa_local
3532759 ± 9% +24.0% 4379945 ± 2% numa-vmstat.node0.numa_hit
8629 ± 37% -41.8% 5025 ± 8% numa-vmstat.node0.nr_active_anon
3337 ± 4% -20.7% 2648 ± 9% numa-vmstat.node0.nr_slab_reclaimable
150 ± 20% -22.2% 116 ± 0% numa-vmstat.node0.nr_unevictable
150 ± 20% -22.2% 116 ± 0% numa-vmstat.node0.nr_mlock
6417 ± 32% +363.7% 29753 ± 41% numa-vmstat.node1.numa_other
3525752 ± 11% +32.4% 4669694 ± 8% numa-vmstat.node1.numa_hit
3519334 ± 11% +31.8% 4639940 ± 8% numa-vmstat.node1.numa_local
3389724 ± 16% +53.8% 5212082 ± 9% numa-vmstat.node2.numa_local
3429257 ± 16% +52.9% 5243860 ± 9% numa-vmstat.node2.numa_hit
3205283 ± 15% +32.0% 4231247 ± 3% numa-vmstat.node3.numa_local
29546 ± 2% +8.3% 31998 ± 9% numa-vmstat.node3.nr_file_pages
3244813 ± 15% +31.1% 4253113 ± 3% numa-vmstat.node3.numa_hit
227 ± 1% +139.2% 544 ± 0% latency_stats.avg.futex_wait_queue_me.futex_wait.do_futex.SyS_futex.system_call_fastpath
452 ± 1% -8.7% 413 ± 1% latency_stats.avg.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.system_call_fastpath
183 ± 4% +129.6% 421 ± 13% latency_stats.avg.call_rwsem_down_read_failed.__do_page_fault.do_page_fault.page_fault
3331 ± 0% -23.0% 2564 ± 3% latency_stats.avg.pipe_wait.wait_for_partner.fifo_open.do_dentry_open.vfs_open.do_last.path_openat.do_filp_open.do_sys_open.SyS_open.system_call_fastpath
56 ± 28% -34.2% 37 ± 20% latency_stats.avg.stop_two_cpus.migrate_swap.task_numa_migrate.numa_migrate_preferred.task_numa_fault.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
241 ± 8% -21.6% 189 ± 10% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_execute.rpc_run_task.nfs4_call_sync_sequence.[nfsv4]._nfs4_proc_access.[nfsv4].nfs4_proc_access.[nfsv4].nfs_do_access.nfs_permission.__inode_permission.inode_permission.may_open
207 ± 12% +263.9% 755 ± 26% latency_stats.avg.call_rwsem_down_write_failed.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.system_call_fastpath
927 ± 1% -29.0% 658 ± 1% latency_stats.avg.do_wait.SyS_wait4.system_call_fastpath
175 ± 1% -17.5% 145 ± 0% latency_stats.avg.pipe_wait.pipe_write.__vfs_write.vfs_write.SyS_write.system_call_fastpath
50 ± 11% +181.1% 141 ± 5% latency_stats.avg.call_rwsem_down_read_failed.__do_page_fault.do_page_fault.page_fault.copy_page_to_iter.pipe_read.__vfs_read.vfs_read.SyS_read.system_call_fastpath
35 ± 11% -100.0% 0 ± 0% latency_stats.avg.call_rwsem_down_write_failed.vma_link.mmap_region.do_mmap_pgoff.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.system_call_fastpath
92 ± 5% +47.3% 136 ± 7% latency_stats.avg.call_rwsem_down_write_failed.SyS_mprotect.system_call_fastpath
152 ± 12% +91.6% 291 ± 6% latency_stats.avg.pipe_read.__vfs_read.vfs_read.SyS_read.system_call_fastpath
22 ± 19% -42.7% 12 ± 40% latency_stats.avg.wait_on_page_bit_killable.__lock_page_or_retry.filemap_fault.__do_fault.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
10 ± 28% -83.7% 1 ± 24% latency_stats.hits.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.return_from_execve
1358060 ± 0% +253.5% 4800372 ± 0% latency_stats.hits.futex_wait_queue_me.futex_wait.do_futex.SyS_futex.system_call_fastpath
15211 ± 1% +52.3% 23161 ± 0% latency_stats.hits.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.system_call_fastpath
195 ± 6% -26.6% 143 ± 5% latency_stats.hits.call_rwsem_down_write_failed.vm_munmap.SyS_munmap.system_call_fastpath
941 ± 2% -41.3% 553 ± 5% latency_stats.hits.call_rwsem_down_read_failed.__do_page_fault.do_page_fault.page_fault.copy_page_to_iter.pipe_read.__vfs_read.vfs_read.SyS_read.system_call_fastpath
856 ± 21% +233.1% 2851 ± 6% latency_stats.hits.pipe_read.__vfs_read.vfs_read.SyS_read.system_call_fastpath
25 ± 29% +169.0% 67 ± 15% latency_stats.hits.pipe_write.__vfs_write.vfs_write.SyS_write.system_call_fastpath
9 ± 40% -84.2% 1 ± 33% latency_stats.hits.wait_on_page_bit_killable.__lock_page_or_retry.filemap_fault.__do_fault.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.clear_user.padzero.load_elf_binary
1313 ± 1% -12.4% 1151 ± 3% latency_stats.hits.call_rwsem_down_write_failed.SyS_mprotect.system_call_fastpath
4108 ± 0% +55.7% 6398 ± 0% latency_stats.hits.do_wait.SyS_wait4.system_call_fastpath
12638 ± 0% +27.0% 16053 ± 21% latency_stats.max.pipe_read.__vfs_read.vfs_read.SyS_read.system_call_fastpath
344 ± 10% -21.3% 271 ± 21% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_execute.rpc_run_task.nfs4_call_sync_sequence.[nfsv4]._nfs4_proc_access.[nfsv4].nfs4_proc_access.[nfsv4].nfs_do_access.nfs_permission.__inode_permission.inode_permission.may_open
2295 ± 25% +331.3% 9898 ± 23% latency_stats.max.call_rwsem_down_read_failed.__do_page_fault.do_page_fault.page_fault.copy_page_to_iter.pipe_read.__vfs_read.vfs_read.SyS_read.system_call_fastpath
51 ± 18% -100.0% 0 ± 0% latency_stats.max.call_rwsem_down_write_failed.vma_link.mmap_region.do_mmap_pgoff.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.system_call_fastpath
5538 ± 45% +120.8% 12228 ± 18% latency_stats.max.call_rwsem_down_read_failed.__do_page_fault.do_page_fault.page_fault
4086 ± 0% -12.3% 3582 ± 8% latency_stats.max.pipe_wait.wait_for_partner.fifo_open.do_dentry_open.vfs_open.do_last.path_openat.do_filp_open.do_sys_open.SyS_open.system_call_fastpath
3810831 ± 1% +10.7% 4218213 ± 1% latency_stats.sum.do_wait.SyS_wait4.system_call_fastpath
168212 ± 16% +274.5% 630000 ± 26% latency_stats.sum.call_rwsem_down_write_failed.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.system_call_fastpath
6891099 ± 2% +39.0% 9579806 ± 0% latency_stats.sum.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.system_call_fastpath
1207 ± 8% -21.7% 946 ± 10% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_execute.rpc_run_task.nfs4_call_sync_sequence.[nfsv4]._nfs4_proc_access.[nfsv4].nfs4_proc_access.[nfsv4].nfs_do_access.nfs_permission.__inode_permission.inode_permission.may_open
122162 ± 6% +28.5% 156995 ± 5% latency_stats.sum.call_rwsem_down_write_failed.SyS_mprotect.system_call_fastpath
33322 ± 0% -23.0% 25650 ± 3% latency_stats.sum.pipe_wait.wait_for_partner.fifo_open.do_dentry_open.vfs_open.do_last.path_openat.do_filp_open.do_sys_open.SyS_open.system_call_fastpath
128589 ± 19% +544.3% 828462 ± 3% latency_stats.sum.pipe_read.__vfs_read.vfs_read.SyS_read.system_call_fastpath
47841 ± 12% +63.3% 78146 ± 3% latency_stats.sum.call_rwsem_down_read_failed.__do_page_fault.do_page_fault.page_fault.copy_page_to_iter.pipe_read.__vfs_read.vfs_read.SyS_read.system_call_fastpath
20227 ± 15% +56.2% 31588 ± 11% latency_stats.sum.wait_on_page_bit.filemap_fdatawait_range.filemap_fdatawait.filemap_write_and_wait.nfs_wb_all.nfs_getattr.vfs_getattr_nosec.vfs_getattr.vfs_fstat.SYSC_newfstat.SyS_newfstat.system_call_fastpath
228 ± 34% -93.8% 14 ± 46% latency_stats.sum.wait_on_page_bit_killable.__lock_page_or_retry.filemap_fault.__do_fault.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.clear_user.padzero.load_elf_binary
1.137e+08 ± 1% -9.5% 1.029e+08 ± 0% latency_stats.sum.pipe_wait.pipe_write.__vfs_write.vfs_write.SyS_write.system_call_fastpath
3.1e+08 ± 1% +744.4% 2.617e+09 ± 0% latency_stats.sum.futex_wait_queue_me.futex_wait.do_futex.SyS_futex.system_call_fastpath
131 ± 48% -100.0% 0 ± 0% latency_stats.sum.call_rwsem_down_write_failed.vma_link.mmap_region.do_mmap_pgoff.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.system_call_fastpath
1581442 ± 5% +105.4% 3248689 ± 14% latency_stats.sum.call_rwsem_down_read_failed.__do_page_fault.do_page_fault.page_fault
0.41 ± 25% +196.3% 1.22 ± 2% perf-profile.cpu-cycles.__schedule.schedule.schedule_preempt_disabled.cpu_startup_entry.start_secondary
22.52 ± 5% -61.9% 8.57 ± 5% perf-profile.cpu-cycles.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer
0.54 ± 29% +356.0% 2.48 ± 6% perf-profile.cpu-cycles.__mutex_lock_slowpath.mutex_lock.pipe_wait.pipe_write.__vfs_write
0.54 ± 29% +356.0% 2.48 ± 6% perf-profile.cpu-cycles.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.pipe_wait.pipe_write
0.54 ± 13% +633.2% 3.98 ± 7% perf-profile.cpu-cycles.pick_next_task_fair.__schedule.schedule.futex_wait_queue_me.futex_wait
0.52 ± 28% +370.0% 2.47 ± 5% perf-profile.cpu-cycles.mutex_spin_on_owner.isra.4.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.pipe_wait
0.40 ± 11% +785.6% 3.54 ± 6% perf-profile.cpu-cycles.load_balance.pick_next_task_fair.__schedule.schedule.futex_wait_queue_me
3.99 ± 5% +62.1% 6.46 ± 2% perf-profile.cpu-cycles.enqueue_entity.enqueue_task_fair.enqueue_task.activate_task.ttwu_do_activate
1.67 ± 26% -47.8% 0.87 ± 20% perf-profile.cpu-cycles.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.30 ± 16% +270.2% 1.12 ± 8% perf-profile.cpu-cycles.tick_nohz_stop_sched_tick.__tick_nohz_idle_enter.tick_nohz_idle_enter.cpu_startup_entry.start_secondary
7.13 ± 11% -29.1% 5.05 ± 3% perf-profile.cpu-cycles.update_cfs_shares.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle
1.25 ± 19% +264.3% 4.54 ± 6% perf-profile.cpu-cycles.wake_futex.futex_wake.do_futex.sys_futex.system_call_fastpath
2.16 ± 28% +317.7% 9.01 ± 4% perf-profile.cpu-cycles.cpuidle_enter.cpu_startup_entry.start_secondary
1.94 ± 1% -32.0% 1.32 ± 2% perf-profile.cpu-cycles.__do_softirq.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
0.57 ± 26% +334.8% 2.50 ± 6% perf-profile.cpu-cycles.mutex_lock.pipe_wait.pipe_write.__vfs_write.vfs_write
0.36 ± 14% +279.9% 1.37 ± 7% perf-profile.cpu-cycles.__tick_nohz_idle_enter.tick_nohz_idle_enter.cpu_startup_entry.start_secondary
1.08 ± 9% -52.2% 0.52 ± 14% perf-profile.cpu-cycles.enqueue_task.activate_task.ttwu_do_activate.try_to_wake_up.default_wake_function
1.07 ± 9% -51.6% 0.52 ± 15% perf-profile.cpu-cycles.activate_task.ttwu_do_activate.try_to_wake_up.default_wake_function.autoremove_wake_function
32.23 ± 3% -53.5% 15.00 ± 3% perf-profile.cpu-cycles.update_process_times.tick_sched_handle.tick_sched_timer.__run_hrtimer.hrtimer_interrupt
32.57 ± 2% -53.2% 15.24 ± 3% perf-profile.cpu-cycles.tick_sched_handle.isra.18.tick_sched_timer.__run_hrtimer.hrtimer_interrupt.local_apic_timer_interrupt
1.06 ± 9% -36.8% 0.67 ± 10% perf-profile.cpu-cycles.account_process_tick.update_process_times.tick_sched_handle.tick_sched_timer.__run_hrtimer
33.62 ± 2% -52.8% 15.88 ± 3% perf-profile.cpu-cycles.tick_sched_timer.__run_hrtimer.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt
1.27 ± 7% -16.3% 1.06 ± 12% perf-profile.cpu-cycles.futex_requeue.do_futex.sys_futex.system_call_fastpath
28.27 ± 3% -55.3% 12.64 ± 4% perf-profile.cpu-cycles.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__run_hrtimer
35.73 ± 1% -52.2% 17.07 ± 3% perf-profile.cpu-cycles.__run_hrtimer.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
0.41 ± 12% +124.5% 0.92 ± 15% perf-profile.cpu-cycles.dequeue_entity.dequeue_task_fair.dequeue_task.deactivate_task.__schedule
39.41 ± 1% -52.4% 18.74 ± 2% perf-profile.cpu-cycles.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
12.73 ± 6% +11.6% 14.21 ± 2% perf-profile.cpu-cycles.vfs_write.sys_write.system_call_fastpath
39.95 ± 1% -52.4% 19.02 ± 3% perf-profile.cpu-cycles.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
45.98 ± 1% -51.2% 22.46 ± 2% perf-profile.cpu-cycles.apic_timer_interrupt
0.30 ± 21% +235.0% 1.01 ± 3% perf-profile.cpu-cycles.irq_exit.scheduler_ipi.smp_reschedule_interrupt.reschedule_interrupt.cpuidle_enter
1.81 ± 17% +104.3% 3.70 ± 3% perf-profile.cpu-cycles.enqueue_task_fair.enqueue_task.activate_task.ttwu_do_activate.sched_ttwu_pending
43.89 ± 1% -51.3% 21.38 ± 2% perf-profile.cpu-cycles.smp_apic_timer_interrupt.apic_timer_interrupt
1.66 ± 6% +322.1% 6.99 ± 5% perf-profile.cpu-cycles.futex_wait.do_futex.sys_futex.system_call_fastpath
2.99 ± 5% +48.1% 4.42 ± 2% perf-profile.cpu-cycles.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair
3.02 ± 4% +47.4% 4.45 ± 2% perf-profile.cpu-cycles.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.enqueue_task
0.72 ± 20% +210.0% 2.24 ± 7% perf-profile.cpu-cycles.ttwu_do_activate.constprop.88.try_to_wake_up.wake_up_state.wake_futex.futex_wake
3.29 ± 6% +50.0% 4.94 ± 3% perf-profile.cpu-cycles.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.enqueue_task.activate_task
1.11 ± 9% -47.1% 0.59 ± 11% perf-profile.cpu-cycles.clockevents_program_event.tick_program_event.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt
12.63 ± 6% +11.8% 14.12 ± 2% perf-profile.cpu-cycles.__vfs_write.vfs_write.sys_write.system_call_fastpath
1.81 ± 32% +354.8% 8.22 ± 4% perf-profile.cpu-cycles.cpuidle_enter_state.cpuidle_enter.cpu_startup_entry.start_secondary
1.12 ± 6% +171.3% 3.05 ± 5% perf-profile.cpu-cycles.pipe_wait.pipe_write.__vfs_write.vfs_write.sys_write
1.16 ± 7% -50.6% 0.57 ± 13% perf-profile.cpu-cycles.ttwu_do_activate.constprop.88.try_to_wake_up.default_wake_function.autoremove_wake_function.__wake_up_common
12.57 ± 6% +12.0% 14.08 ± 2% perf-profile.cpu-cycles.pipe_write.__vfs_write.vfs_write.sys_write.system_call_fastpath
0.40 ± 17% +157.0% 1.01 ± 11% perf-profile.cpu-cycles.deactivate_task.__schedule.schedule.futex_wait_queue_me.futex_wait
1.83 ± 21% -46.3% 0.98 ± 11% perf-profile.cpu-cycles.ktime_get_update_offsets_now.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
12.75 ± 6% +11.6% 14.24 ± 2% perf-profile.cpu-cycles.sys_write.system_call_fastpath
20.55 ± 2% +11.7% 22.95 ± 2% perf-profile.cpu-cycles.sys_read.system_call_fastpath
1.27 ± 12% +97.8% 2.50 ± 4% perf-profile.cpu-cycles.activate_task.ttwu_do_activate.try_to_wake_up.wake_up_state.wake_futex
1.61 ± 6% +43.6% 2.31 ± 6% perf-profile.cpu-cycles.__kernel_text_address.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency
20.50 ± 2% +11.8% 22.92 ± 2% perf-profile.cpu-cycles.vfs_read.sys_read.system_call_fastpath
1.27 ± 12% +97.8% 2.51 ± 4% perf-profile.cpu-cycles.enqueue_task.activate_task.ttwu_do_activate.try_to_wake_up.wake_up_state
0.27 ± 24% +229.0% 0.88 ± 14% perf-profile.cpu-cycles.cpuidle_select.cpu_startup_entry.start_secondary
1.19 ± 10% -47.4% 0.62 ± 9% perf-profile.cpu-cycles.tick_program_event.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
0.39 ± 16% +151.6% 0.99 ± 12% perf-profile.cpu-cycles.dequeue_task.deactivate_task.__schedule.schedule.futex_wait_queue_me
0.55 ± 7% +96.8% 1.07 ± 13% perf-profile.cpu-cycles.dequeue_task_fair.dequeue_task.deactivate_task.__schedule.schedule
20.22 ± 2% +12.0% 22.64 ± 2% perf-profile.cpu-cycles.pipe_read.__vfs_read.vfs_read.sys_read.system_call_fastpath
1.78 ± 14% -47.1% 0.94 ± 9% perf-profile.cpu-cycles.perf_event_task_tick.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer
1.33 ± 8% -43.8% 0.75 ± 15% perf-profile.cpu-cycles.autoremove_wake_function.__wake_up_common.__wake_up_sync_key.pipe_read.__vfs_read
1.39 ± 7% -43.4% 0.79 ± 15% perf-profile.cpu-cycles.__wake_up_sync_key.pipe_read.__vfs_read.vfs_read.sys_read
0.33 ± 48% +309.1% 1.35 ± 7% perf-profile.cpu-cycles.tick_broadcast_oneshot_control.intel_idle.cpuidle_enter_state.cpuidle_enter.cpu_startup_entry
1.32 ± 8% -43.7% 0.74 ± 15% perf-profile.cpu-cycles.default_wake_function.autoremove_wake_function.__wake_up_common.__wake_up_sync_key.pipe_read
2.37 ± 7% +25.1% 2.97 ± 1% perf-profile.cpu-cycles.enqueue_task_fair.enqueue_task.activate_task.ttwu_do_activate.try_to_wake_up
0.44 ± 25% +207.9% 1.36 ± 3% perf-profile.cpu-cycles.schedule.schedule_preempt_disabled.cpu_startup_entry.start_secondary
1.35 ± 8% -43.3% 0.77 ± 15% perf-profile.cpu-cycles.__wake_up_common.__wake_up_sync_key.pipe_read.__vfs_read.vfs_read
1.06 ± 6% -34.0% 0.70 ± 15% perf-profile.cpu-cycles.try_to_wake_up.wake_up_state.wake_futex.futex_requeue.do_futex
1.08 ± 7% -33.2% 0.72 ± 15% perf-profile.cpu-cycles.wake_up_state.wake_futex.futex_requeue.do_futex.sys_futex
0.30 ± 20% +222.7% 0.96 ± 2% perf-profile.cpu-cycles.__do_softirq.irq_exit.scheduler_ipi.smp_reschedule_interrupt.reschedule_interrupt
0.34 ± 22% +747.4% 2.90 ± 7% perf-profile.cpu-cycles.update_sd_lb_stats.find_busiest_group.load_balance.pick_next_task_fair.__schedule
4.81 ± 6% +185.2% 13.73 ± 5% perf-profile.cpu-cycles.sys_futex.system_call_fastpath
0.87 ± 10% -31.7% 0.59 ± 10% perf-profile.cpu-cycles.account_user_time.account_process_tick.update_process_times.tick_sched_handle.tick_sched_timer
2.75 ± 5% +41.7% 3.91 ± 2% perf-profile.cpu-cycles.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity
4.75 ± 6% +186.9% 13.62 ± 5% perf-profile.cpu-cycles.do_futex.sys_futex.system_call_fastpath
1.38 ± 25% +187.1% 3.95 ± 3% perf-profile.cpu-cycles.sched_ttwu_pending.cpu_startup_entry.start_secondary
1.46 ± 9% +44.7% 2.11 ± 4% perf-profile.cpu-cycles.anon_pipe_buf_release.pipe_read.__vfs_read.vfs_read.sys_read
0.85 ± 11% +51.8% 1.29 ± 5% perf-profile.cpu-cycles.free_hot_cold_page.put_page.anon_pipe_buf_release.pipe_read.__vfs_read
0.38 ± 16% +184.1% 1.07 ± 3% perf-profile.cpu-cycles.scheduler_ipi.smp_reschedule_interrupt.reschedule_interrupt.cpuidle_enter.cpu_startup_entry
20.31 ± 2% +12.2% 22.77 ± 2% perf-profile.cpu-cycles.__vfs_read.vfs_read.sys_read.system_call_fastpath
5.19 ± 23% +266.7% 19.04 ± 2% perf-profile.cpu-cycles.cpu_startup_entry.start_secondary
1.11 ± 11% +51.1% 1.67 ± 5% perf-profile.cpu-cycles.put_page.anon_pipe_buf_release.pipe_read.__vfs_read.vfs_read
5.23 ± 23% +268.5% 19.27 ± 2% perf-profile.cpu-cycles.start_secondary
0.45 ± 23% +209.5% 1.39 ± 3% perf-profile.cpu-cycles.schedule_preempt_disabled.cpu_startup_entry.start_secondary
1.20 ± 18% +252.6% 4.22 ± 7% perf-profile.cpu-cycles.try_to_wake_up.wake_up_state.wake_futex.futex_wake.do_futex
1.29 ± 25% +185.3% 3.69 ± 3% perf-profile.cpu-cycles.enqueue_task.activate_task.ttwu_do_activate.sched_ttwu_pending.cpu_startup_entry
1.76 ± 11% +252.5% 6.20 ± 3% perf-profile.cpu-cycles.mutex_spin_on_owner.isra.4.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.pipe_read
1.33 ± 8% +341.8% 5.89 ± 6% perf-profile.cpu-cycles.schedule.futex_wait_queue_me.futex_wait.do_futex.sys_futex
1.47 ± 9% -32.4% 1.00 ± 6% perf-profile.cpu-cycles.rcu_check_callbacks.update_process_times.tick_sched_handle.tick_sched_timer.__run_hrtimer
1.13 ± 5% -30.9% 0.78 ± 15% perf-profile.cpu-cycles.wake_futex.futex_requeue.do_futex.sys_futex.system_call_fastpath
1.35 ± 7% -42.8% 0.78 ± 13% perf-profile.cpu-cycles.try_to_wake_up.default_wake_function.autoremove_wake_function.__wake_up_common.__wake_up_sync_key
5.13 ± 8% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.update_cfs_rq_blocked_load.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle
2.60 ± 4% -34.7% 1.70 ± 5% perf-profile.cpu-cycles.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
1.21 ± 18% +255.4% 4.28 ± 7% perf-profile.cpu-cycles.wake_up_state.wake_futex.futex_wake.do_futex.sys_futex
0.32 ± 29% +332.3% 1.40 ± 10% perf-profile.cpu-cycles.tick_nohz_idle_exit.cpu_startup_entry.start_secondary
1.41 ± 6% +336.3% 6.16 ± 6% perf-profile.cpu-cycles.futex_wait_queue_me.futex_wait.do_futex.sys_futex.system_call_fastpath
13.78 ± 3% -13.5% 11.92 ± 3% perf-profile.cpu-cycles.copy_user_generic_string.copy_page_to_iter.pipe_read.__vfs_read.vfs_read
1.29 ± 9% +348.0% 5.79 ± 6% perf-profile.cpu-cycles.__schedule.schedule.futex_wait_queue_me.futex_wait.do_futex
38.58 ± 3% +32.7% 51.21 ± 0% perf-profile.cpu-cycles.system_call_fastpath
14.53 ± 4% -13.9% 12.51 ± 3% perf-profile.cpu-cycles.copy_page_to_iter.pipe_read.__vfs_read.vfs_read.sys_read
1.65 ± 29% +367.9% 7.70 ± 4% perf-profile.cpu-cycles.intel_idle.cpuidle_enter_state.cpuidle_enter.cpu_startup_entry.start_secondary
1.57 ± 16% +240.8% 5.35 ± 6% perf-profile.cpu-cycles.futex_wake.do_futex.sys_futex.system_call_fastpath
1.82 ± 11% +246.0% 6.31 ± 3% perf-profile.cpu-cycles.mutex_lock.pipe_read.__vfs_read.vfs_read.sys_read
1.32 ± 24% +185.4% 3.78 ± 3% perf-profile.cpu-cycles.ttwu_do_activate.constprop.88.sched_ttwu_pending.cpu_startup_entry.start_secondary
1.28 ± 25% +183.0% 3.63 ± 2% perf-profile.cpu-cycles.activate_task.ttwu_do_activate.sched_ttwu_pending.cpu_startup_entry.start_secondary
1.80 ± 10% +250.3% 6.30 ± 3% perf-profile.cpu-cycles.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.pipe_read.__vfs_read
1.80 ± 10% +248.9% 6.30 ± 3% perf-profile.cpu-cycles.__mutex_lock_slowpath.mutex_lock.pipe_read.__vfs_read.vfs_read
0.39 ± 16% +278.2% 1.48 ± 7% perf-profile.cpu-cycles.tick_nohz_idle_enter.cpu_startup_entry.start_secondary
0.36 ± 20% +791.6% 3.19 ± 6% perf-profile.cpu-cycles.find_busiest_group.load_balance.pick_next_task_fair.__schedule.schedule
7.79 ± 6% -10.4% 6.98 ± 2% perf-profile.cpu-cycles.copy_page_from_iter.pipe_write.__vfs_write.vfs_write.sys_write
14 ± 3% -27.6% 10 ± 4% sched_debug.cfs_rq[0]:/.runnable_load_avg
936 ± 1% -100.0% 0 ± 0% sched_debug.cfs_rq[0]:/.utilization_load_avg
14 ± 5% -26.8% 10 ± 4% sched_debug.cfs_rq[0]:/.load
4850 ± 9% -84.1% 769 ± 15% sched_debug.cfs_rq[0]:/.tg_load_avg
149666 ± 0% -17.3% 123710 ± 1% sched_debug.cfs_rq[0]:/.exec_clock
59 ± 35% -100.0% 0 ± 0% sched_debug.cfs_rq[10]:/.blocked_load_avg
14 ± 7% -34.5% 9 ± 5% sched_debug.cfs_rq[10]:/.load
951 ± 2% -100.0% 0 ± 0% sched_debug.cfs_rq[10]:/.utilization_load_avg
74 ± 27% -100.0% 0 ± 0% sched_debug.cfs_rq[10]:/.tg_load_contrib
8 ± 49% +100.0% 16 ± 23% sched_debug.cfs_rq[10]:/.nr_spread_over
4601 ± 11% -83.4% 764 ± 13% sched_debug.cfs_rq[10]:/.tg_load_avg
143527 ± 0% -19.5% 115574 ± 1% sched_debug.cfs_rq[10]:/.exec_clock
14 ± 5% -26.3% 10 ± 4% sched_debug.cfs_rq[10]:/.runnable_load_avg
121 ± 46% -100.0% 0 ± 0% sched_debug.cfs_rq[11]:/.tg_load_contrib
144746 ± 1% -20.4% 115216 ± 0% sched_debug.cfs_rq[11]:/.exec_clock
4597 ± 11% -83.5% 758 ± 12% sched_debug.cfs_rq[11]:/.tg_load_avg
965 ± 2% -100.0% 0 ± 0% sched_debug.cfs_rq[11]:/.utilization_load_avg
16 ± 11% -35.9% 10 ± 12% sched_debug.cfs_rq[11]:/.load
15 ± 12% -33.3% 10 ± 4% sched_debug.cfs_rq[11]:/.runnable_load_avg
14 ± 3% -24.6% 10 ± 4% sched_debug.cfs_rq[12]:/.runnable_load_avg
142635 ± 0% -18.8% 115855 ± 1% sched_debug.cfs_rq[12]:/.exec_clock
958 ± 3% -100.0% 0 ± 0% sched_debug.cfs_rq[12]:/.utilization_load_avg
14 ± 3% -32.8% 9 ± 11% sched_debug.cfs_rq[12]:/.load
4589 ± 12% -83.6% 752 ± 11% sched_debug.cfs_rq[12]:/.tg_load_avg
70 ± 47% -100.0% 0 ± 0% sched_debug.cfs_rq[12]:/.tg_load_contrib
984 ± 6% -100.0% 0 ± 0% sched_debug.cfs_rq[13]:/.utilization_load_avg
4551 ± 12% -83.5% 752 ± 11% sched_debug.cfs_rq[13]:/.tg_load_avg
15 ± 10% -30.0% 10 ± 8% sched_debug.cfs_rq[13]:/.runnable_load_avg
143643 ± 0% -19.4% 115845 ± 1% sched_debug.cfs_rq[13]:/.exec_clock
14 ± 10% -35.6% 9 ± 15% sched_debug.cfs_rq[13]:/.load
86 ± 39% -100.0% 0 ± 0% sched_debug.cfs_rq[14]:/.tg_load_contrib
4544 ± 11% -83.4% 753 ± 11% sched_debug.cfs_rq[14]:/.tg_load_avg
969 ± 5% -100.0% 0 ± 0% sched_debug.cfs_rq[14]:/.utilization_load_avg
142757 ± 0% -18.9% 115755 ± 1% sched_debug.cfs_rq[14]:/.exec_clock
70 ± 47% -100.0% 0 ± 0% sched_debug.cfs_rq[14]:/.blocked_load_avg
14 ± 10% -30.5% 10 ± 4% sched_debug.cfs_rq[14]:/.runnable_load_avg
15 ± 10% -35.5% 10 ± 14% sched_debug.cfs_rq[14]:/.load
15 ± 8% -34.4% 10 ± 10% sched_debug.cfs_rq[15]:/.runnable_load_avg
4529 ± 11% -83.4% 753 ± 11% sched_debug.cfs_rq[15]:/.tg_load_avg
142995 ± 0% -19.0% 115881 ± 1% sched_debug.cfs_rq[15]:/.exec_clock
961 ± 2% -100.0% 0 ± 0% sched_debug.cfs_rq[15]:/.utilization_load_avg
4515 ± 11% -83.3% 753 ± 11% sched_debug.cfs_rq[16]:/.tg_load_avg
16 ± 9% -35.9% 10 ± 4% sched_debug.cfs_rq[16]:/.runnable_load_avg
15 ± 9% -39.7% 9 ± 15% sched_debug.cfs_rq[16]:/.load
975 ± 4% -100.0% 0 ± 0% sched_debug.cfs_rq[16]:/.utilization_load_avg
142733 ± 0% -18.6% 116189 ± 1% sched_debug.cfs_rq[16]:/.exec_clock
14 ± 7% -33.3% 9 ± 9% sched_debug.cfs_rq[17]:/.load
93 ± 40% -100.0% 0 ± 0% sched_debug.cfs_rq[17]:/.tg_load_contrib
142654 ± 0% -18.4% 116363 ± 1% sched_debug.cfs_rq[17]:/.exec_clock
4483 ± 11% -83.2% 754 ± 11% sched_debug.cfs_rq[17]:/.tg_load_avg
14 ± 2% -30.5% 10 ± 4% sched_debug.cfs_rq[17]:/.runnable_load_avg
946 ± 3% -100.0% 0 ± 0% sched_debug.cfs_rq[17]:/.utilization_load_avg
78 ± 47% -100.0% 0 ± 0% sched_debug.cfs_rq[17]:/.blocked_load_avg
14 ± 5% -23.2% 10 ± 7% sched_debug.cfs_rq[18]:/.runnable_load_avg
4484 ± 10% -83.2% 754 ± 11% sched_debug.cfs_rq[18]:/.tg_load_avg
142729 ± 0% -19.0% 115549 ± 1% sched_debug.cfs_rq[18]:/.exec_clock
54 ± 48% -100.0% 0 ± 0% sched_debug.cfs_rq[18]:/.blocked_load_avg
69 ± 38% -100.0% 0 ± 0% sched_debug.cfs_rq[18]:/.tg_load_contrib
928 ± 5% -100.0% 0 ± 0% sched_debug.cfs_rq[18]:/.utilization_load_avg
14 ± 9% -35.1% 9 ± 15% sched_debug.cfs_rq[18]:/.load
15 ± 10% -33.9% 10 ± 4% sched_debug.cfs_rq[19]:/.runnable_load_avg
142749 ± 0% -19.2% 115271 ± 1% sched_debug.cfs_rq[19]:/.exec_clock
960 ± 3% -100.0% 0 ± 0% sched_debug.cfs_rq[19]:/.utilization_load_avg
83 ± 37% -100.0% 0 ± 0% sched_debug.cfs_rq[19]:/.tg_load_contrib
4479 ± 11% -83.1% 754 ± 11% sched_debug.cfs_rq[19]:/.tg_load_avg
14 ± 13% -32.2% 10 ± 10% sched_debug.cfs_rq[19]:/.load
68 ± 44% -100.0% 0 ± 0% sched_debug.cfs_rq[19]:/.blocked_load_avg
14 ± 5% -24.6% 10 ± 4% sched_debug.cfs_rq[1]:/.runnable_load_avg
14 ± 5% -24.6% 10 ± 13% sched_debug.cfs_rq[1]:/.load
4879 ± 10% -84.2% 769 ± 15% sched_debug.cfs_rq[1]:/.tg_load_avg
93 ± 38% -100.0% 0 ± 0% sched_debug.cfs_rq[1]:/.blocked_load_avg
107 ± 33% -100.0% 0 ± 0% sched_debug.cfs_rq[1]:/.tg_load_contrib
142952 ± 0% -18.9% 115989 ± 1% sched_debug.cfs_rq[1]:/.exec_clock
955 ± 3% -100.0% 0 ± 0% sched_debug.cfs_rq[1]:/.utilization_load_avg
91 ± 49% -100.0% 0 ± 0% sched_debug.cfs_rq[20]:/.tg_load_contrib
142991 ± 0% -19.3% 115464 ± 1% sched_debug.cfs_rq[20]:/.exec_clock
13 ± 3% -20.0% 11 ± 6% sched_debug.cfs_rq[20]:/.runnable_load_avg
14 ± 5% -35.7% 9 ± 7% sched_debug.cfs_rq[20]:/.load
962 ± 3% -100.0% 0 ± 0% sched_debug.cfs_rq[20]:/.utilization_load_avg
4474 ± 11% -83.1% 755 ± 11% sched_debug.cfs_rq[20]:/.tg_load_avg
142721 ± 0% -19.1% 115419 ± 1% sched_debug.cfs_rq[21]:/.exec_clock
14 ± 7% -28.6% 10 ± 12% sched_debug.cfs_rq[21]:/.load
84 ± 23% -100.0% 0 ± 0% sched_debug.cfs_rq[21]:/.tg_load_contrib
14 ± 7% -24.6% 10 ± 4% sched_debug.cfs_rq[21]:/.runnable_load_avg
4435 ± 11% -83.0% 752 ± 10% sched_debug.cfs_rq[21]:/.tg_load_avg
69 ± 27% -100.0% 0 ± 0% sched_debug.cfs_rq[21]:/.blocked_load_avg
945 ± 7% -100.0% 0 ± 0% sched_debug.cfs_rq[21]:/.utilization_load_avg
4424 ± 11% -83.0% 752 ± 10% sched_debug.cfs_rq[22]:/.tg_load_avg
142700 ± 0% -18.9% 115735 ± 0% sched_debug.cfs_rq[22]:/.exec_clock
14 ± 7% -33.3% 9 ± 5% sched_debug.cfs_rq[22]:/.load
14 ± 5% -24.6% 10 ± 4% sched_debug.cfs_rq[22]:/.runnable_load_avg
42 ± 35% -100.0% 0 ± 0% sched_debug.cfs_rq[22]:/.tg_load_contrib
970 ± 3% -100.0% 0 ± 0% sched_debug.cfs_rq[22]:/.utilization_load_avg
13 ± 6% -18.5% 11 ± 11% sched_debug.cfs_rq[23]:/.runnable_load_avg
924 ± 3% -100.0% 0 ± 0% sched_debug.cfs_rq[23]:/.utilization_load_avg
6 ± 30% +268.0% 23 ± 29% sched_debug.cfs_rq[23]:/.nr_spread_over
14 ± 5% -39.3% 8 ± 13% sched_debug.cfs_rq[23]:/.load
142864 ± 0% -18.7% 116159 ± 1% sched_debug.cfs_rq[23]:/.exec_clock
4410 ± 11% -82.9% 753 ± 10% sched_debug.cfs_rq[23]:/.tg_load_avg
14 ± 3% -24.1% 11 ± 0% sched_debug.cfs_rq[24]:/.runnable_load_avg
142964 ± 0% -18.9% 116008 ± 1% sched_debug.cfs_rq[24]:/.exec_clock
4386 ± 11% -82.8% 753 ± 10% sched_debug.cfs_rq[24]:/.tg_load_avg
118 ± 49% -100.0% 0 ± 0% sched_debug.cfs_rq[24]:/.blocked_load_avg
968 ± 2% -100.0% 0 ± 0% sched_debug.cfs_rq[24]:/.utilization_load_avg
134 ± 43% -100.0% 0 ± 0% sched_debug.cfs_rq[24]:/.tg_load_contrib
6 ± 46% +188.9% 19 ± 46% sched_debug.cfs_rq[24]:/.nr_spread_over
14 ± 3% -29.3% 10 ± 4% sched_debug.cfs_rq[25]:/.runnable_load_avg
142996 ± 0% -17.9% 117411 ± 2% sched_debug.cfs_rq[25]:/.exec_clock
4351 ± 11% -82.7% 752 ± 10% sched_debug.cfs_rq[25]:/.tg_load_avg
968 ± 4% -100.0% 0 ± 0% sched_debug.cfs_rq[25]:/.utilization_load_avg
62 ± 41% -100.0% 0 ± 0% sched_debug.cfs_rq[25]:/.tg_load_contrib
14 ± 5% -37.3% 9 ± 14% sched_debug.cfs_rq[25]:/.load
4319 ± 10% -82.6% 752 ± 10% sched_debug.cfs_rq[26]:/.tg_load_avg
14 ± 10% -31.0% 10 ± 10% sched_debug.cfs_rq[26]:/.load
944 ± 3% -100.0% 0 ± 0% sched_debug.cfs_rq[26]:/.utilization_load_avg
68 ± 45% -100.0% 0 ± 0% sched_debug.cfs_rq[26]:/.tg_load_contrib
142832 ± 0% -18.1% 117001 ± 2% sched_debug.cfs_rq[26]:/.exec_clock
14 ± 5% -29.3% 10 ± 4% sched_debug.cfs_rq[26]:/.runnable_load_avg
142765 ± 0% -18.6% 116191 ± 1% sched_debug.cfs_rq[27]:/.exec_clock
4280 ± 11% -82.4% 751 ± 10% sched_debug.cfs_rq[27]:/.tg_load_avg
14 ± 5% -30.5% 10 ± 4% sched_debug.cfs_rq[27]:/.runnable_load_avg
950 ± 3% -100.0% 0 ± 0% sched_debug.cfs_rq[27]:/.utilization_load_avg
14 ± 7% -28.1% 10 ± 4% sched_debug.cfs_rq[28]:/.runnable_load_avg
143159 ± 0% -18.6% 116593 ± 1% sched_debug.cfs_rq[28]:/.exec_clock
52 ± 46% -100.0% 0 ± 0% sched_debug.cfs_rq[28]:/.blocked_load_avg
947 ± 1% -100.0% 0 ± 0% sched_debug.cfs_rq[28]:/.utilization_load_avg
4275 ± 11% -82.4% 750 ± 10% sched_debug.cfs_rq[28]:/.tg_load_avg
15 ± 12% -38.3% 9 ± 14% sched_debug.cfs_rq[28]:/.load
67 ± 36% -100.0% 0 ± 0% sched_debug.cfs_rq[28]:/.tg_load_contrib
14 ± 5% -27.1% 10 ± 4% sched_debug.cfs_rq[29]:/.runnable_load_avg
97 ± 6% -100.0% 0 ± 0% sched_debug.cfs_rq[29]:/.blocked_load_avg
112 ± 6% -100.0% 0 ± 0% sched_debug.cfs_rq[29]:/.tg_load_contrib
956 ± 5% -100.0% 0 ± 0% sched_debug.cfs_rq[29]:/.utilization_load_avg
4205 ± 9% -82.1% 750 ± 10% sched_debug.cfs_rq[29]:/.tg_load_avg
13 ± 6% -23.6% 10 ± 19% sched_debug.cfs_rq[29]:/.load
142927 ± 0% -18.7% 116233 ± 1% sched_debug.cfs_rq[29]:/.exec_clock
4855 ± 10% -84.1% 770 ± 15% sched_debug.cfs_rq[2]:/.tg_load_avg
78 ± 34% -100.0% 0 ± 0% sched_debug.cfs_rq[2]:/.tg_load_contrib
64 ± 42% -100.0% 0 ± 0% sched_debug.cfs_rq[2]:/.blocked_load_avg
948 ± 5% -100.0% 0 ± 0% sched_debug.cfs_rq[2]:/.utilization_load_avg
142729 ± 0% -18.8% 115944 ± 1% sched_debug.cfs_rq[2]:/.exec_clock
4173 ± 7% -82.0% 750 ± 10% sched_debug.cfs_rq[30]:/.tg_load_avg
949 ± 2% -100.0% 0 ± 0% sched_debug.cfs_rq[30]:/.utilization_load_avg
14 ± 5% -42.1% 8 ± 13% sched_debug.cfs_rq[30]:/.load
142883 ± 0% -18.7% 116230 ± 1% sched_debug.cfs_rq[30]:/.exec_clock
14 ± 5% -25.0% 10 ± 4% sched_debug.cfs_rq[30]:/.runnable_load_avg
5 ± 18% +182.6% 16 ± 44% sched_debug.cfs_rq[31]:/.nr_spread_over
14 ± 5% -33.9% 9 ± 8% sched_debug.cfs_rq[31]:/.load
68 ± 29% -100.0% 0 ± 0% sched_debug.cfs_rq[31]:/.tg_load_contrib
945 ± 5% -100.0% 0 ± 0% sched_debug.cfs_rq[31]:/.utilization_load_avg
53 ± 36% -100.0% 0 ± 0% sched_debug.cfs_rq[31]:/.blocked_load_avg
14 ± 5% -29.8% 10 ± 7% sched_debug.cfs_rq[31]:/.runnable_load_avg
142780 ± 0% -18.6% 116266 ± 1% sched_debug.cfs_rq[31]:/.exec_clock
4148 ± 7% -81.9% 750 ± 10% sched_debug.cfs_rq[31]:/.tg_load_avg
4141 ± 7% -81.9% 750 ± 10% sched_debug.cfs_rq[32]:/.tg_load_avg
13 ± 6% -27.3% 10 ± 0% sched_debug.cfs_rq[32]:/.runnable_load_avg
9604126 ± 0% -13.2% 8340155 ± 1% sched_debug.cfs_rq[32]:/.min_vruntime
963 ± 3% -100.0% 0 ± 0% sched_debug.cfs_rq[32]:/.utilization_load_avg
14 ± 7% -39.3% 8 ± 17% sched_debug.cfs_rq[32]:/.load
142960 ± 0% -23.1% 109925 ± 1% sched_debug.cfs_rq[32]:/.exec_clock
968 ± 3% -100.0% 0 ± 0% sched_debug.cfs_rq[33]:/.utilization_load_avg
14 ± 3% -26.3% 10 ± 4% sched_debug.cfs_rq[33]:/.runnable_load_avg
13 ± 6% -38.2% 8 ± 13% sched_debug.cfs_rq[33]:/.load
143024 ± 0% -23.2% 109824 ± 1% sched_debug.cfs_rq[33]:/.exec_clock
4113 ± 7% -81.8% 750 ± 10% sched_debug.cfs_rq[33]:/.tg_load_avg
9591369 ± 0% -13.5% 8301137 ± 1% sched_debug.cfs_rq[33]:/.min_vruntime
142926 ± 0% -22.7% 110486 ± 1% sched_debug.cfs_rq[34]:/.exec_clock
9586520 ± 0% -12.5% 8385635 ± 1% sched_debug.cfs_rq[34]:/.min_vruntime
13 ± 3% -24.5% 10 ± 7% sched_debug.cfs_rq[34]:/.runnable_load_avg
4089 ± 7% -81.7% 750 ± 10% sched_debug.cfs_rq[34]:/.tg_load_avg
47 ± 44% -100.0% 0 ± 0% sched_debug.cfs_rq[34]:/.tg_load_contrib
942 ± 1% -100.0% 0 ± 0% sched_debug.cfs_rq[34]:/.utilization_load_avg
13 ± 3% -30.2% 9 ± 8% sched_debug.cfs_rq[34]:/.load
143196 ± 0% -23.3% 109775 ± 1% sched_debug.cfs_rq[35]:/.exec_clock
9593365 ± 0% -12.8% 8362919 ± 2% sched_debug.cfs_rq[35]:/.min_vruntime
14 ± 0% -26.8% 10 ± 4% sched_debug.cfs_rq[35]:/.runnable_load_avg
4092 ± 7% -81.7% 749 ± 10% sched_debug.cfs_rq[35]:/.tg_load_avg
944 ± 2% -100.0% 0 ± 0% sched_debug.cfs_rq[35]:/.utilization_load_avg
4083 ± 7% -81.6% 749 ± 10% sched_debug.cfs_rq[36]:/.tg_load_avg
142862 ± 0% -22.9% 110159 ± 0% sched_debug.cfs_rq[36]:/.exec_clock
934 ± 2% -100.0% 0 ± 0% sched_debug.cfs_rq[36]:/.utilization_load_avg
14 ± 11% -35.1% 9 ± 4% sched_debug.cfs_rq[36]:/.load
9584087 ± 0% -12.9% 8351358 ± 0% sched_debug.cfs_rq[36]:/.min_vruntime
13 ± 6% -30.9% 9 ± 5% sched_debug.cfs_rq[36]:/.runnable_load_avg
963 ± 2% -100.0% 0 ± 0% sched_debug.cfs_rq[37]:/.utilization_load_avg
4096 ± 8% -81.7% 748 ± 10% sched_debug.cfs_rq[37]:/.tg_load_avg
9581773 ± 0% -13.1% 8330912 ± 1% sched_debug.cfs_rq[37]:/.min_vruntime
142787 ± 0% -23.0% 109956 ± 1% sched_debug.cfs_rq[37]:/.exec_clock
14 ± 5% -28.6% 10 ± 7% sched_debug.cfs_rq[37]:/.runnable_load_avg
13 ± 3% -32.7% 9 ± 8% sched_debug.cfs_rq[37]:/.load
14 ± 3% -34.5% 9 ± 5% sched_debug.cfs_rq[38]:/.runnable_load_avg
959 ± 3% -100.0% 0 ± 0% sched_debug.cfs_rq[38]:/.utilization_load_avg
9602528 ± 0% -13.3% 8321364 ± 1% sched_debug.cfs_rq[38]:/.min_vruntime
59 ± 22% -100.0% 0 ± 0% sched_debug.cfs_rq[38]:/.tg_load_contrib
14 ± 3% -41.4% 8 ± 13% sched_debug.cfs_rq[38]:/.load
142952 ± 0% -23.2% 109718 ± 1% sched_debug.cfs_rq[38]:/.exec_clock
44 ± 30% -100.0% 0 ± 0% sched_debug.cfs_rq[38]:/.blocked_load_avg
4096 ± 10% -81.7% 748 ± 10% sched_debug.cfs_rq[38]:/.tg_load_avg
47 ± 41% -100.0% 0 ± 0% sched_debug.cfs_rq[39]:/.blocked_load_avg
963 ± 1% -100.0% 0 ± 0% sched_debug.cfs_rq[39]:/.utilization_load_avg
14 ± 0% -33.9% 9 ± 8% sched_debug.cfs_rq[39]:/.load
14 ± 3% -28.1% 10 ± 4% sched_debug.cfs_rq[39]:/.runnable_load_avg
62 ± 31% -100.0% 0 ± 0% sched_debug.cfs_rq[39]:/.tg_load_contrib
142892 ± 0% -22.9% 110233 ± 1% sched_debug.cfs_rq[39]:/.exec_clock
4089 ± 10% -81.7% 748 ± 9% sched_debug.cfs_rq[39]:/.tg_load_avg
9563215 ± 0% -12.6% 8356120 ± 1% sched_debug.cfs_rq[39]:/.min_vruntime
142636 ± 0% -18.5% 116193 ± 1% sched_debug.cfs_rq[3]:/.exec_clock
4779 ± 12% -83.9% 770 ± 15% sched_debug.cfs_rq[3]:/.tg_load_avg
14 ± 5% -28.6% 10 ± 7% sched_debug.cfs_rq[3]:/.load
36 ± 43% -100.0% 0 ± 0% sched_debug.cfs_rq[3]:/.tg_load_contrib
13 ± 3% -21.8% 10 ± 4% sched_debug.cfs_rq[3]:/.runnable_load_avg
938 ± 2% -100.0% 0 ± 0% sched_debug.cfs_rq[3]:/.utilization_load_avg
14 ± 5% -42.1% 8 ± 15% sched_debug.cfs_rq[40]:/.load
14 ± 7% -31.0% 10 ± 0% sched_debug.cfs_rq[40]:/.runnable_load_avg
9581440 ± 0% -12.3% 8398863 ± 1% sched_debug.cfs_rq[40]:/.min_vruntime
4068 ± 10% -81.6% 747 ± 9% sched_debug.cfs_rq[40]:/.tg_load_avg
979 ± 4% -100.0% 0 ± 0% sched_debug.cfs_rq[40]:/.utilization_load_avg
142879 ± 0% -22.5% 110697 ± 1% sched_debug.cfs_rq[40]:/.exec_clock
940 ± 3% -100.0% 0 ± 0% sched_debug.cfs_rq[41]:/.utilization_load_avg
142941 ± 0% -23.3% 109699 ± 1% sched_debug.cfs_rq[41]:/.exec_clock
30 ± 44% -100.0% 0 ± 0% sched_debug.cfs_rq[41]:/.tg_load_contrib
4072 ± 10% -81.7% 747 ± 9% sched_debug.cfs_rq[41]:/.tg_load_avg
13 ± 6% -38.2% 8 ± 17% sched_debug.cfs_rq[41]:/.load
9583957 ± 0% -13.1% 8332859 ± 1% sched_debug.cfs_rq[41]:/.min_vruntime
13 ± 3% -27.3% 10 ± 7% sched_debug.cfs_rq[41]:/.runnable_load_avg
15 ± 6% -46.7% 8 ± 23% sched_debug.cfs_rq[42]:/.load
984 ± 6% -100.0% 0 ± 0% sched_debug.cfs_rq[42]:/.utilization_load_avg
142998 ± 0% -22.7% 110471 ± 1% sched_debug.cfs_rq[42]:/.exec_clock
4043 ± 11% -81.5% 746 ± 9% sched_debug.cfs_rq[42]:/.tg_load_avg
15 ± 5% -34.4% 10 ± 0% sched_debug.cfs_rq[42]:/.runnable_load_avg
9592620 ± 0% -12.7% 8378446 ± 1% sched_debug.cfs_rq[42]:/.min_vruntime
4050 ± 12% -81.6% 746 ± 9% sched_debug.cfs_rq[43]:/.tg_load_avg
9586033 ± 0% -13.0% 8335645 ± 1% sched_debug.cfs_rq[43]:/.min_vruntime
14 ± 5% -31.0% 10 ± 0% sched_debug.cfs_rq[43]:/.runnable_load_avg
14 ± 5% -47.5% 7 ± 5% sched_debug.cfs_rq[43]:/.load
964 ± 1% -100.0% 0 ± 0% sched_debug.cfs_rq[43]:/.utilization_load_avg
143013 ± 0% -22.9% 110192 ± 1% sched_debug.cfs_rq[43]:/.exec_clock
961 ± 3% -100.0% 0 ± 0% sched_debug.cfs_rq[44]:/.utilization_load_avg
14 ± 5% -26.3% 10 ± 8% sched_debug.cfs_rq[44]:/.runnable_load_avg
14 ± 5% -40.4% 8 ± 21% sched_debug.cfs_rq[44]:/.load
142914 ± 0% -22.9% 110172 ± 1% sched_debug.cfs_rq[44]:/.exec_clock
9595791 ± 0% -12.9% 8354769 ± 1% sched_debug.cfs_rq[44]:/.min_vruntime
4051 ± 11% -81.6% 746 ± 9% sched_debug.cfs_rq[44]:/.tg_load_avg
9589932 ± 0% -13.0% 8344377 ± 1% sched_debug.cfs_rq[45]:/.min_vruntime
14 ± 5% -30.4% 9 ± 4% sched_debug.cfs_rq[45]:/.runnable_load_avg
143041 ± 0% -22.2% 111261 ± 3% sched_debug.cfs_rq[45]:/.exec_clock
46 ± 34% -100.0% 0 ± 0% sched_debug.cfs_rq[45]:/.tg_load_contrib
4056 ± 11% -81.6% 746 ± 9% sched_debug.cfs_rq[45]:/.tg_load_avg
14 ± 7% -41.4% 8 ± 10% sched_debug.cfs_rq[45]:/.load
942 ± 1% -100.0% 0 ± 0% sched_debug.cfs_rq[45]:/.utilization_load_avg
4056 ± 11% -81.6% 746 ± 9% sched_debug.cfs_rq[46]:/.tg_load_avg
9595201 ± 0% -13.2% 8331965 ± 1% sched_debug.cfs_rq[46]:/.min_vruntime
14 ± 3% -34.5% 9 ± 5% sched_debug.cfs_rq[46]:/.runnable_load_avg
62 ± 17% -100.0% 0 ± 0% sched_debug.cfs_rq[46]:/.blocked_load_avg
959 ± 1% -100.0% 0 ± 0% sched_debug.cfs_rq[46]:/.utilization_load_avg
77 ± 13% -100.0% 0 ± 0% sched_debug.cfs_rq[46]:/.tg_load_contrib
13 ± 3% -41.8% 8 ± 15% sched_debug.cfs_rq[46]:/.load
142914 ± 0% -23.0% 110055 ± 1% sched_debug.cfs_rq[46]:/.exec_clock
9579237 ± 0% -13.1% 8322157 ± 1% sched_debug.cfs_rq[47]:/.min_vruntime
4094 ± 12% -81.8% 747 ± 9% sched_debug.cfs_rq[47]:/.tg_load_avg
142989 ± 0% -23.0% 110124 ± 1% sched_debug.cfs_rq[47]:/.exec_clock
945 ± 5% -100.0% 0 ± 0% sched_debug.cfs_rq[47]:/.utilization_load_avg
13 ± 6% -38.9% 8 ± 10% sched_debug.cfs_rq[47]:/.load
14 ± 8% -30.4% 9 ± 4% sched_debug.cfs_rq[47]:/.runnable_load_avg
9583731 ± 0% -13.1% 8327758 ± 1% sched_debug.cfs_rq[48]:/.min_vruntime
142905 ± 0% -22.4% 110964 ± 1% sched_debug.cfs_rq[48]:/.exec_clock
13 ± 3% -24.1% 10 ± 4% sched_debug.cfs_rq[48]:/.runnable_load_avg
917 ± 4% -100.0% 0 ± 0% sched_debug.cfs_rq[48]:/.utilization_load_avg
13 ± 6% -30.2% 9 ± 20% sched_debug.cfs_rq[48]:/.load
51 ± 45% -100.0% 0 ± 0% sched_debug.cfs_rq[48]:/.tg_load_contrib
4063 ± 12% -81.6% 747 ± 9% sched_debug.cfs_rq[48]:/.tg_load_avg
13 ± 3% -29.1% 9 ± 8% sched_debug.cfs_rq[49]:/.runnable_load_avg
33 ± 42% -100.0% 0 ± 0% sched_debug.cfs_rq[49]:/.blocked_load_avg
4052 ± 12% -81.6% 746 ± 9% sched_debug.cfs_rq[49]:/.tg_load_avg
9603241 ± 0% -12.9% 8365083 ± 1% sched_debug.cfs_rq[49]:/.min_vruntime
13 ± 3% -35.2% 8 ± 14% sched_debug.cfs_rq[49]:/.load
47 ± 29% -100.0% 0 ± 0% sched_debug.cfs_rq[49]:/.tg_load_contrib
142992 ± 0% -23.1% 110022 ± 1% sched_debug.cfs_rq[49]:/.exec_clock
922 ± 6% -100.0% 0 ± 0% sched_debug.cfs_rq[49]:/.utilization_load_avg
142939 ± 0% -18.3% 116733 ± 1% sched_debug.cfs_rq[4]:/.exec_clock
949 ± 2% -100.0% 0 ± 0% sched_debug.cfs_rq[4]:/.utilization_load_avg
13 ± 3% -23.6% 10 ± 8% sched_debug.cfs_rq[4]:/.runnable_load_avg
60 ± 35% -100.0% 0 ± 0% sched_debug.cfs_rq[4]:/.blocked_load_avg
4703 ± 12% -83.6% 770 ± 15% sched_debug.cfs_rq[4]:/.tg_load_avg
13 ± 3% -25.5% 10 ± 8% sched_debug.cfs_rq[4]:/.load
74 ± 28% -100.0% 0 ± 0% sched_debug.cfs_rq[4]:/.tg_load_contrib
4043 ± 12% -81.5% 746 ± 9% sched_debug.cfs_rq[50]:/.tg_load_avg
14 ± 11% -40.4% 8 ± 17% sched_debug.cfs_rq[50]:/.load
52 ± 10% -100.0% 0 ± 0% sched_debug.cfs_rq[50]:/.tg_load_contrib
142825 ± 0% -23.5% 109276 ± 1% sched_debug.cfs_rq[50]:/.exec_clock
929 ± 2% -100.0% 0 ± 0% sched_debug.cfs_rq[50]:/.utilization_load_avg
37 ± 15% -100.0% 0 ± 0% sched_debug.cfs_rq[50]:/.blocked_load_avg
9575580 ± 0% -13.5% 8282955 ± 1% sched_debug.cfs_rq[50]:/.min_vruntime
14 ± 10% -34.5% 9 ± 5% sched_debug.cfs_rq[50]:/.runnable_load_avg
13 ± 3% -40.7% 8 ± 17% sched_debug.cfs_rq[51]:/.load
13 ± 3% -27.8% 9 ± 8% sched_debug.cfs_rq[51]:/.runnable_load_avg
9592803 ± 0% -12.9% 8356128 ± 1% sched_debug.cfs_rq[51]:/.min_vruntime
142861 ± 0% -22.9% 110133 ± 1% sched_debug.cfs_rq[51]:/.exec_clock
4055 ± 12% -81.6% 745 ± 9% sched_debug.cfs_rq[51]:/.tg_load_avg
923 ± 4% -100.0% 0 ± 0% sched_debug.cfs_rq[51]:/.utilization_load_avg
14 ± 5% -30.4% 9 ± 8% sched_debug.cfs_rq[52]:/.runnable_load_avg
4047 ± 12% -81.6% 745 ± 9% sched_debug.cfs_rq[52]:/.tg_load_avg
9583802 ± 0% -13.0% 8339035 ± 0% sched_debug.cfs_rq[52]:/.min_vruntime
945 ± 6% -100.0% 0 ± 0% sched_debug.cfs_rq[52]:/.utilization_load_avg
13 ± 6% -38.9% 8 ± 26% sched_debug.cfs_rq[52]:/.load
142934 ± 0% -23.0% 109993 ± 0% sched_debug.cfs_rq[52]:/.exec_clock
9600421 ± 0% -12.6% 8390692 ± 0% sched_debug.cfs_rq[53]:/.min_vruntime
14 ± 2% -33.9% 9 ± 4% sched_debug.cfs_rq[53]:/.runnable_load_avg
4041 ± 13% -81.6% 745 ± 9% sched_debug.cfs_rq[53]:/.tg_load_avg
38 ± 48% -100.0% 0 ± 0% sched_debug.cfs_rq[53]:/.tg_load_contrib
142968 ± 0% -22.7% 110490 ± 0% sched_debug.cfs_rq[53]:/.exec_clock
14 ± 3% -36.2% 9 ± 14% sched_debug.cfs_rq[53]:/.load
1002 ± 2% -100.0% 0 ± 0% sched_debug.cfs_rq[53]:/.utilization_load_avg
948 ± 2% -100.0% 0 ± 0% sched_debug.cfs_rq[54]:/.utilization_load_avg
13 ± 6% -30.9% 9 ± 5% sched_debug.cfs_rq[54]:/.runnable_load_avg
4023 ± 13% -81.5% 745 ± 9% sched_debug.cfs_rq[54]:/.tg_load_avg
83 ± 43% -100.0% 0 ± 0% sched_debug.cfs_rq[54]:/.tg_load_contrib
13 ± 6% -38.2% 8 ± 5% sched_debug.cfs_rq[54]:/.load
142953 ± 0% -23.0% 110018 ± 1% sched_debug.cfs_rq[54]:/.exec_clock
9597890 ± 0% -12.7% 8376139 ± 1% sched_debug.cfs_rq[54]:/.min_vruntime
13 ± 6% -34.5% 9 ± 7% sched_debug.cfs_rq[55]:/.load
142898 ± 0% -23.3% 109542 ± 0% sched_debug.cfs_rq[55]:/.exec_clock
13 ± 3% -27.8% 9 ± 4% sched_debug.cfs_rq[55]:/.runnable_load_avg
22 ± 41% -100.0% 0 ± 0% sched_debug.cfs_rq[55]:/.tg_load_contrib
9571172 ± 0% -13.2% 8310607 ± 0% sched_debug.cfs_rq[55]:/.min_vruntime
944 ± 3% -100.0% 0 ± 0% sched_debug.cfs_rq[55]:/.utilization_load_avg
4015 ± 13% -81.5% 744 ± 9% sched_debug.cfs_rq[55]:/.tg_load_avg
14 ± 8% -39.3% 8 ± 10% sched_debug.cfs_rq[56]:/.load
143089 ± 0% -22.9% 110304 ± 1% sched_debug.cfs_rq[56]:/.exec_clock
9598931 ± 0% -13.1% 8345381 ± 1% sched_debug.cfs_rq[56]:/.min_vruntime
960 ± 3% -100.0% 0 ± 0% sched_debug.cfs_rq[56]:/.utilization_load_avg
33 ± 42% -100.0% 0 ± 0% sched_debug.cfs_rq[56]:/.tg_load_contrib
14 ± 7% -30.4% 9 ± 4% sched_debug.cfs_rq[56]:/.runnable_load_avg
4026 ± 14% -81.5% 744 ± 9% sched_debug.cfs_rq[56]:/.tg_load_avg
9588434 ± 0% -13.1% 8336741 ± 1% sched_debug.cfs_rq[57]:/.min_vruntime
43 ± 39% -100.0% 0 ± 0% sched_debug.cfs_rq[57]:/.tg_load_contrib
936 ± 2% -100.0% 0 ± 0% sched_debug.cfs_rq[57]:/.utilization_load_avg
142957 ± 0% -23.2% 109850 ± 0% sched_debug.cfs_rq[57]:/.exec_clock
4019 ± 14% -81.5% 742 ± 9% sched_debug.cfs_rq[57]:/.tg_load_avg
13 ± 3% -30.9% 9 ± 5% sched_debug.cfs_rq[57]:/.runnable_load_avg
960 ± 3% -100.0% 0 ± 0% sched_debug.cfs_rq[58]:/.utilization_load_avg
14 ± 3% -31.0% 10 ± 0% sched_debug.cfs_rq[58]:/.runnable_load_avg
9594638 ± 0% -13.0% 8350105 ± 1% sched_debug.cfs_rq[58]:/.min_vruntime
57 ± 39% -100.0% 0 ± 0% sched_debug.cfs_rq[58]:/.tg_load_contrib
142941 ± 0% -22.8% 110420 ± 1% sched_debug.cfs_rq[58]:/.exec_clock
4022 ± 13% -81.5% 743 ± 9% sched_debug.cfs_rq[58]:/.tg_load_avg
14 ± 5% -41.4% 8 ± 13% sched_debug.cfs_rq[58]:/.load
14 ± 0% -28.6% 10 ± 7% sched_debug.cfs_rq[59]:/.runnable_load_avg
14 ± 0% -35.7% 9 ± 15% sched_debug.cfs_rq[59]:/.load
968 ± 1% -100.0% 0 ± 0% sched_debug.cfs_rq[59]:/.utilization_load_avg
53 ± 43% -100.0% 0 ± 0% sched_debug.cfs_rq[59]:/.tg_load_contrib
4016 ± 14% -81.5% 742 ± 9% sched_debug.cfs_rq[59]:/.tg_load_avg
143046 ± 0% -22.9% 110228 ± 1% sched_debug.cfs_rq[59]:/.exec_clock
9594718 ± 0% -12.9% 8358121 ± 1% sched_debug.cfs_rq[59]:/.min_vruntime
63 ± 47% -100.0% 0 ± 0% sched_debug.cfs_rq[5]:/.tg_load_contrib
920 ± 5% -100.0% 0 ± 0% sched_debug.cfs_rq[5]:/.utilization_load_avg
142540 ± 0% -18.9% 115610 ± 1% sched_debug.cfs_rq[5]:/.exec_clock
4696 ± 12% -83.6% 770 ± 15% sched_debug.cfs_rq[5]:/.tg_load_avg
14 ± 5% -26.8% 10 ± 4% sched_debug.cfs_rq[5]:/.runnable_load_avg
143056 ± 0% -22.8% 110429 ± 1% sched_debug.cfs_rq[60]:/.exec_clock
14 ± 5% -28.1% 10 ± 4% sched_debug.cfs_rq[60]:/.runnable_load_avg
9597950 ± 0% -12.8% 8366607 ± 0% sched_debug.cfs_rq[60]:/.min_vruntime
940 ± 5% -100.0% 0 ± 0% sched_debug.cfs_rq[60]:/.utilization_load_avg
14 ± 5% -43.9% 8 ± 8% sched_debug.cfs_rq[60]:/.load
3982 ± 15% -81.4% 742 ± 9% sched_debug.cfs_rq[60]:/.tg_load_avg
14 ± 3% -26.3% 10 ± 4% sched_debug.cfs_rq[61]:/.runnable_load_avg
951 ± 5% -100.0% 0 ± 0% sched_debug.cfs_rq[61]:/.utilization_load_avg
38 ± 36% -100.0% 0 ± 0% sched_debug.cfs_rq[61]:/.tg_load_contrib
9601058 ± 0% -13.0% 8352950 ± 1% sched_debug.cfs_rq[61]:/.min_vruntime
3969 ± 15% -81.3% 742 ± 9% sched_debug.cfs_rq[61]:/.tg_load_avg
143089 ± 0% -23.1% 110084 ± 1% sched_debug.cfs_rq[61]:/.exec_clock
14 ± 5% -39.3% 8 ± 19% sched_debug.cfs_rq[61]:/.load
9593570 ± 0% -12.6% 8383474 ± 1% sched_debug.cfs_rq[62]:/.min_vruntime
49 ± 26% -100.0% 0 ± 0% sched_debug.cfs_rq[62]:/.tg_load_contrib
920 ± 5% -100.0% 0 ± 0% sched_debug.cfs_rq[62]:/.utilization_load_avg
14 ± 5% -28.6% 10 ± 7% sched_debug.cfs_rq[62]:/.runnable_load_avg
34 ± 37% -100.0% 0 ± 0% sched_debug.cfs_rq[62]:/.blocked_load_avg
143017 ± 0% -22.8% 110436 ± 1% sched_debug.cfs_rq[62]:/.exec_clock
3967 ± 15% -81.3% 742 ± 9% sched_debug.cfs_rq[62]:/.tg_load_avg
14 ± 5% -39.3% 8 ± 10% sched_debug.cfs_rq[62]:/.load
14 ± 0% -37.5% 8 ± 4% sched_debug.cfs_rq[63]:/.load
930 ± 1% -100.0% 0 ± 0% sched_debug.cfs_rq[63]:/.utilization_load_avg
9570329 ± 0% -13.0% 8330510 ± 1% sched_debug.cfs_rq[63]:/.min_vruntime
3969 ± 15% -81.3% 741 ± 9% sched_debug.cfs_rq[63]:/.tg_load_avg
14 ± 0% -32.1% 9 ± 5% sched_debug.cfs_rq[63]:/.runnable_load_avg
143128 ± 0% -22.4% 111121 ± 1% sched_debug.cfs_rq[63]:/.exec_clock
61 ± 41% -100.0% 0 ± 0% sched_debug.cfs_rq[6]:/.tg_load_contrib
951 ± 3% -100.0% 0 ± 0% sched_debug.cfs_rq[6]:/.utilization_load_avg
15 ± 6% -35.0% 9 ± 19% sched_debug.cfs_rq[6]:/.load
4669 ± 12% -83.5% 770 ± 15% sched_debug.cfs_rq[6]:/.tg_load_avg
5 ± 41% +154.5% 14 ± 20% sched_debug.cfs_rq[6]:/.nr_spread_over
142485 ± 0% -18.8% 115703 ± 1% sched_debug.cfs_rq[6]:/.exec_clock
14 ± 7% -24.6% 10 ± 4% sched_debug.cfs_rq[6]:/.runnable_load_avg
4626 ± 11% -83.4% 767 ± 14% sched_debug.cfs_rq[7]:/.tg_load_avg
14 ± 3% -26.3% 10 ± 4% sched_debug.cfs_rq[7]:/.runnable_load_avg
974 ± 4% -100.0% 0 ± 0% sched_debug.cfs_rq[7]:/.utilization_load_avg
4 ± 46% +143.8% 9 ± 36% sched_debug.cfs_rq[7]:/.nr_spread_over
14 ± 3% -36.8% 9 ± 19% sched_debug.cfs_rq[7]:/.load
56 ± 45% -100.0% 0 ± 0% sched_debug.cfs_rq[7]:/.tg_load_contrib
142781 ± 0% -18.7% 116026 ± 1% sched_debug.cfs_rq[7]:/.exec_clock
162 ± 26% -100.0% 0 ± 0% sched_debug.cfs_rq[8]:/.blocked_load_avg
147352 ± 1% -21.3% 116014 ± 1% sched_debug.cfs_rq[8]:/.exec_clock
178 ± 23% -100.0% 0 ± 0% sched_debug.cfs_rq[8]:/.tg_load_contrib
4611 ± 11% -83.3% 768 ± 14% sched_debug.cfs_rq[8]:/.tg_load_avg
1007 ± 3% -100.0% 0 ± 0% sched_debug.cfs_rq[8]:/.utilization_load_avg
15 ± 4% -35.0% 9 ± 8% sched_debug.cfs_rq[8]:/.load
15 ± 5% -31.1% 10 ± 4% sched_debug.cfs_rq[8]:/.runnable_load_avg
111 ± 22% -100.0% 0 ± 0% sched_debug.cfs_rq[9]:/.tg_load_contrib
14 ± 3% -26.3% 10 ± 4% sched_debug.cfs_rq[9]:/.runnable_load_avg
14 ± 2% -28.8% 10 ± 8% sched_debug.cfs_rq[9]:/.load
4603 ± 11% -83.4% 764 ± 13% sched_debug.cfs_rq[9]:/.tg_load_avg
142730 ± 0% -18.9% 115809 ± 1% sched_debug.cfs_rq[9]:/.exec_clock
965 ± 1% -100.0% 0 ± 0% sched_debug.cfs_rq[9]:/.utilization_load_avg
14 ± 17% -48.3% 7 ± 14% sched_debug.cfs_rq[9]:/.nr_spread_over
96 ± 25% -100.0% 0 ± 0% sched_debug.cfs_rq[9]:/.blocked_load_avg
459950 ± 8% +26.5% 581670 ± 2% sched_debug.cpu#0.avg_idle
5114 ± 3% -15.0% 4345 ± 6% sched_debug.cpu#0.curr->pid
14 ± 5% -41.4% 8 ± 5% sched_debug.cpu#0.cpu_load[1]
54423 ± 2% +115.2% 117124 ± 2% sched_debug.cpu#0.sched_count
15 ± 5% -39.3% 9 ± 4% sched_debug.cpu#0.cpu_load[4]
14 ± 3% -38.6% 8 ± 4% sched_debug.cpu#0.cpu_load[0]
24243 ± 9% +135.3% 57041 ± 3% sched_debug.cpu#0.ttwu_count
8240 ± 6% +442.7% 44723 ± 3% sched_debug.cpu#0.sched_goidle
40005 ± 12% +155.6% 102241 ± 3% sched_debug.cpu#0.nr_switches
15 ± 6% -43.3% 8 ± 5% sched_debug.cpu#0.cpu_load[2]
15 ± 5% -44.4% 8 ± 4% sched_debug.cpu#0.cpu_load[3]
5278 ± 1% -15.1% 4479 ± 4% sched_debug.cpu#1.curr->pid
152407 ± 0% -9.3% 138247 ± 1% sched_debug.cpu#1.nr_load_updates
14 ± 5% -25.0% 10 ± 10% sched_debug.cpu#1.load
9011 ± 5% +397.4% 44820 ± 1% sched_debug.cpu#1.sched_goidle
15 ± 12% -38.3% 9 ± 4% sched_debug.cpu#1.cpu_load[0]
14 ± 3% -36.2% 9 ± 4% sched_debug.cpu#1.cpu_load[4]
419898 ± 5% +41.5% 593963 ± 3% sched_debug.cpu#1.avg_idle
14 ± 5% -39.0% 9 ± 0% sched_debug.cpu#1.cpu_load[3]
45611 ± 12% +138.9% 108972 ± 5% sched_debug.cpu#1.sched_count
22346 ± 14% +138.4% 53266 ± 0% sched_debug.cpu#1.ttwu_count
38054 ± 13% +166.6% 101445 ± 0% sched_debug.cpu#1.nr_switches
15 ± 10% -41.7% 8 ± 4% sched_debug.cpu#1.cpu_load[1]
15 ± 8% -41.0% 9 ± 0% sched_debug.cpu#1.cpu_load[2]
19623 ± 3% +155.1% 50061 ± 3% sched_debug.cpu#10.ttwu_count
14 ± 5% -34.5% 9 ± 5% sched_debug.cpu#10.cpu_load[4]
14 ± 5% -38.6% 8 ± 9% sched_debug.cpu#10.cpu_load[0]
14 ± 7% -34.5% 9 ± 5% sched_debug.cpu#10.load
5286 ± 1% -19.2% 4271 ± 2% sched_debug.cpu#10.curr->pid
33706 ± 8% +181.6% 94912 ± 3% sched_debug.cpu#10.nr_switches
14 ± 5% -37.9% 9 ± 7% sched_debug.cpu#10.cpu_load[3]
14 ± 5% -38.6% 8 ± 9% sched_debug.cpu#10.cpu_load[1]
151771 ± 0% -10.5% 135760 ± 1% sched_debug.cpu#10.nr_load_updates
39656 ± 9% +153.7% 100597 ± 5% sched_debug.cpu#10.sched_count
425666 ± 9% +28.1% 545278 ± 1% sched_debug.cpu#10.avg_idle
8362 ± 8% -27.7% 6044 ± 12% sched_debug.cpu#10.ttwu_local
7486 ± 17% +458.9% 41844 ± 2% sched_debug.cpu#10.sched_goidle
14 ± 5% -41.4% 8 ± 5% sched_debug.cpu#10.cpu_load[2]
15 ± 13% -33.9% 10 ± 10% sched_debug.cpu#11.load
15 ± 8% -46.0% 8 ± 5% sched_debug.cpu#11.cpu_load[0]
35740 ± 6% +168.3% 95897 ± 1% sched_debug.cpu#11.nr_switches
42550 ± 8% +133.0% 99139 ± 1% sched_debug.cpu#11.sched_count
15 ± 4% -40.0% 9 ± 7% sched_debug.cpu#11.cpu_load[3]
15 ± 0% -41.7% 8 ± 4% sched_debug.cpu#11.cpu_load[2]
9422 ± 9% -28.0% 6784 ± 11% sched_debug.cpu#11.ttwu_local
14 ± 5% -32.8% 9 ± 4% sched_debug.cpu#11.cpu_load[4]
152028 ± 0% -11.0% 135292 ± 1% sched_debug.cpu#11.nr_load_updates
5302 ± 1% -29.1% 3757 ± 14% sched_debug.cpu#11.curr->pid
420019 ± 6% +29.4% 543549 ± 9% sched_debug.cpu#11.avg_idle
15 ± 2% -42.6% 8 ± 9% sched_debug.cpu#11.cpu_load[1]
7298 ± 13% +471.9% 41743 ± 1% sched_debug.cpu#11.sched_goidle
20612 ± 8% +147.3% 50976 ± 2% sched_debug.cpu#11.ttwu_count
426702 ± 12% +39.0% 593273 ± 9% sched_debug.cpu#12.avg_idle
37593 ± 6% +163.0% 98855 ± 1% sched_debug.cpu#12.sched_count
34305 ± 7% +171.9% 93282 ± 2% sched_debug.cpu#12.nr_switches
7460 ± 12% +448.1% 40887 ± 2% sched_debug.cpu#12.sched_goidle
1 ± 0% -100.0% 0 ± 0% sched_debug.cpu#12.nr_running
14 ± 2% -35.6% 9 ± 5% sched_debug.cpu#12.cpu_load[4]
19137 ± 6% +163.6% 50437 ± 2% sched_debug.cpu#12.ttwu_count
151130 ± 0% -10.2% 135743 ± 1% sched_debug.cpu#12.nr_load_updates
14 ± 2% -37.3% 9 ± 4% sched_debug.cpu#12.cpu_load[3]
15 ± 4% -40.0% 9 ± 7% sched_debug.cpu#12.cpu_load[2]
15 ± 8% -44.4% 8 ± 9% sched_debug.cpu#12.cpu_load[0]
5203 ± 2% -17.9% 4273 ± 10% sched_debug.cpu#12.curr->pid
8695 ± 14% -27.6% 6297 ± 10% sched_debug.cpu#12.ttwu_local
15 ± 5% -41.0% 9 ± 7% sched_debug.cpu#12.cpu_load[1]
6727 ± 8% +516.7% 41486 ± 1% sched_debug.cpu#13.sched_goidle
18277 ± 15% +171.8% 49677 ± 1% sched_debug.cpu#13.ttwu_count
14 ± 3% -36.2% 9 ± 4% sched_debug.cpu#13.cpu_load[3]
40712 ± 21% +142.4% 98700 ± 4% sched_debug.cpu#13.sched_count
15 ± 9% -34.4% 10 ± 14% sched_debug.cpu#13.load
150856 ± 0% -10.0% 135840 ± 1% sched_debug.cpu#13.nr_load_updates
32486 ± 15% +188.1% 93585 ± 2% sched_debug.cpu#13.nr_switches
418274 ± 5% +38.7% 580336 ± 4% sched_debug.cpu#13.avg_idle
5263 ± 2% -29.2% 3727 ± 17% sched_debug.cpu#13.curr->pid
14 ± 5% -40.4% 8 ± 5% sched_debug.cpu#13.cpu_load[0]
14 ± 3% -34.5% 9 ± 5% sched_debug.cpu#13.cpu_load[4]
14 ± 5% -40.4% 8 ± 5% sched_debug.cpu#13.cpu_load[1]
14 ± 5% -36.8% 9 ± 0% sched_debug.cpu#13.cpu_load[2]
14 ± 5% -38.6% 8 ± 9% sched_debug.cpu#14.cpu_load[0]
36644 ± 6% +175.0% 100770 ± 2% sched_debug.cpu#14.sched_count
15 ± 19% -38.1% 9 ± 11% sched_debug.cpu#14.load
14 ± 3% -37.9% 9 ± 7% sched_debug.cpu#14.cpu_load[1]
150753 ± 0% -10.0% 135653 ± 1% sched_debug.cpu#14.nr_load_updates
32557 ± 8% +186.3% 93223 ± 0% sched_debug.cpu#14.nr_switches
6530 ± 6% +527.6% 40983 ± 0% sched_debug.cpu#14.sched_goidle
430654 ± 11% +35.1% 581977 ± 7% sched_debug.cpu#14.avg_idle
14 ± 3% -37.9% 9 ± 7% sched_debug.cpu#14.cpu_load[2]
7967 ± 12% -22.1% 6204 ± 6% sched_debug.cpu#14.ttwu_local
5162 ± 3% -25.8% 3831 ± 10% sched_debug.cpu#14.curr->pid
18131 ± 6% +175.2% 49892 ± 0% sched_debug.cpu#14.ttwu_count
14 ± 3% -36.2% 9 ± 4% sched_debug.cpu#14.cpu_load[3]
14 ± 3% -34.5% 9 ± 5% sched_debug.cpu#14.cpu_load[4]
32858 ± 9% +181.0% 92338 ± 2% sched_debug.cpu#15.nr_switches
14 ± 5% -35.6% 9 ± 5% sched_debug.cpu#15.cpu_load[3]
14 ± 7% -37.9% 9 ± 11% sched_debug.cpu#15.cpu_load[0]
8518 ± 19% -27.0% 6214 ± 5% sched_debug.cpu#15.ttwu_local
15 ± 4% -36.7% 9 ± 5% sched_debug.cpu#15.cpu_load[4]
14 ± 5% -37.3% 9 ± 8% sched_debug.cpu#15.cpu_load[1]
439603 ± 4% +33.6% 587095 ± 7% sched_debug.cpu#15.avg_idle
37374 ± 14% +172.7% 101927 ± 2% sched_debug.cpu#15.sched_count
150779 ± 0% -10.1% 135609 ± 1% sched_debug.cpu#15.nr_load_updates
18461 ± 10% +168.9% 49635 ± 3% sched_debug.cpu#15.ttwu_count
15 ± 13% -38.7% 9 ± 11% sched_debug.cpu#15.load
5087 ± 4% -21.3% 4005 ± 5% sched_debug.cpu#15.curr->pid
6623 ± 6% +511.6% 40513 ± 2% sched_debug.cpu#15.sched_goidle
14 ± 5% -37.3% 9 ± 8% sched_debug.cpu#15.cpu_load[2]
15 ± 4% -45.0% 8 ± 10% sched_debug.cpu#16.cpu_load[0]
151350 ± 0% -10.3% 135805 ± 1% sched_debug.cpu#16.nr_load_updates
7472 ± 8% +468.9% 42510 ± 2% sched_debug.cpu#16.sched_goidle
15 ± 4% -43.3% 8 ± 5% sched_debug.cpu#16.cpu_load[2]
15 ± 4% -43.3% 8 ± 5% sched_debug.cpu#16.cpu_load[3]
42078 ± 16% +172.3% 114559 ± 13% sched_debug.cpu#16.sched_count
18903 ± 5% +169.1% 50867 ± 1% sched_debug.cpu#16.ttwu_count
15 ± 4% -40.0% 9 ± 7% sched_debug.cpu#16.cpu_load[4]
33692 ± 4% +188.6% 97225 ± 1% sched_debug.cpu#16.nr_switches
5230 ± 3% -21.7% 4098 ± 12% sched_debug.cpu#16.curr->pid
14 ± 7% -44.1% 8 ± 10% sched_debug.cpu#16.cpu_load[1]
1 ± 0% -100.0% 0 ± 0% sched_debug.cpu#16.nr_running
388250 ± 4% +48.8% 577899 ± 7% sched_debug.cpu#16.avg_idle
8751 ± 11% -23.2% 6720 ± 9% sched_debug.cpu#16.ttwu_local
15 ± 4% -45.0% 8 ± 5% sched_debug.cpu#17.cpu_load[1]
14 ± 5% -39.0% 9 ± 0% sched_debug.cpu#17.cpu_load[3]
15 ± 4% -41.7% 8 ± 4% sched_debug.cpu#17.cpu_load[2]
31973 ± 4% +212.8% 100010 ± 2% sched_debug.cpu#17.nr_switches
5079 ± 4% -25.6% 3781 ± 9% sched_debug.cpu#17.curr->pid
14 ± 5% -37.3% 9 ± 4% sched_debug.cpu#17.cpu_load[4]
18052 ± 4% +185.4% 51514 ± 3% sched_debug.cpu#17.ttwu_count
14 ± 2% -44.1% 8 ± 5% sched_debug.cpu#17.cpu_load[0]
7214 ± 9% +504.5% 43608 ± 2% sched_debug.cpu#17.sched_goidle
39953 ± 17% +181.2% 112331 ± 6% sched_debug.cpu#17.sched_count
14 ± 7% -33.3% 9 ± 9% sched_debug.cpu#17.load
150798 ± 0% -10.0% 135723 ± 1% sched_debug.cpu#17.nr_load_updates
460177 ± 5% +24.5% 572863 ± 6% sched_debug.cpu#17.avg_idle
7557 ± 15% +463.6% 42592 ± 1% sched_debug.cpu#18.sched_goidle
151189 ± 0% -10.4% 135505 ± 1% sched_debug.cpu#18.nr_load_updates
14 ± 3% -42.1% 8 ± 5% sched_debug.cpu#18.cpu_load[0]
20909 ± 9% +146.8% 51603 ± 2% sched_debug.cpu#18.ttwu_count
41296 ± 14% +159.1% 107013 ± 4% sched_debug.cpu#18.sched_count
14 ± 3% -40.4% 8 ± 5% sched_debug.cpu#18.cpu_load[1]
14 ± 3% -38.6% 8 ± 4% sched_debug.cpu#18.cpu_load[2]
10219 ± 14% -33.2% 6825 ± 10% sched_debug.cpu#18.ttwu_local
14 ± 2% -37.3% 9 ± 4% sched_debug.cpu#18.cpu_load[4]
14 ± 2% -39.0% 9 ± 0% sched_debug.cpu#18.cpu_load[3]
5088 ± 3% -22.0% 3967 ± 3% sched_debug.cpu#18.curr->pid
425778 ± 11% +32.5% 564360 ± 4% sched_debug.cpu#18.avg_idle
37128 ± 10% +163.4% 97799 ± 1% sched_debug.cpu#18.nr_switches
19065 ± 13% +166.6% 50829 ± 3% sched_debug.cpu#19.ttwu_count
150656 ± 0% -10.2% 135240 ± 1% sched_debug.cpu#19.nr_load_updates
14 ± 14% -32.8% 9 ± 8% sched_debug.cpu#19.load
5122 ± 1% -25.9% 3793 ± 3% sched_debug.cpu#19.curr->pid
6661 ± 9% +538.9% 42558 ± 2% sched_debug.cpu#19.sched_goidle
14 ± 3% -37.9% 9 ± 7% sched_debug.cpu#19.cpu_load[3]
15 ± 8% -46.7% 8 ± 8% sched_debug.cpu#19.cpu_load[1]
48405 ± 38% +139.6% 115975 ± 9% sched_debug.cpu#19.sched_count
15 ± 9% -45.2% 8 ± 5% sched_debug.cpu#19.cpu_load[0]
414211 ± 18% +47.8% 612403 ± 7% sched_debug.cpu#19.avg_idle
33174 ± 16% +194.1% 97573 ± 3% sched_debug.cpu#19.nr_switches
14 ± 3% -37.9% 9 ± 7% sched_debug.cpu#19.cpu_load[4]
14 ± 5% -44.1% 8 ± 13% sched_debug.cpu#19.cpu_load[2]
446481 ± 6% +24.5% 555977 ± 3% sched_debug.cpu#2.avg_idle
49149 ± 23% +115.8% 106068 ± 7% sched_debug.cpu#2.sched_count
10586 ± 10% -25.4% 7902 ± 12% sched_debug.cpu#2.ttwu_local
14 ± 2% -35.6% 9 ± 21% sched_debug.cpu#2.cpu_load[1]
151563 ± 0% -9.0% 137974 ± 1% sched_debug.cpu#2.nr_load_updates
5165 ± 1% -24.7% 3887 ± 10% sched_debug.cpu#2.curr->pid
7552 ± 3% +469.0% 42974 ± 4% sched_debug.cpu#2.sched_goidle
15 ± 0% -36.7% 9 ± 21% sched_debug.cpu#2.cpu_load[0]
21396 ± 8% +145.3% 52491 ± 3% sched_debug.cpu#2.ttwu_count
14 ± 3% -33.3% 9 ± 21% sched_debug.cpu#2.cpu_load[2]
14 ± 3% -33.3% 9 ± 21% sched_debug.cpu#2.cpu_load[3]
38324 ± 9% +156.4% 98254 ± 3% sched_debug.cpu#2.nr_switches
151276 ± 0% -10.5% 135392 ± 1% sched_debug.cpu#20.nr_load_updates
18059 ± 8% +186.4% 51716 ± 1% sched_debug.cpu#20.ttwu_count
14 ± 5% -30.4% 9 ± 8% sched_debug.cpu#20.load
7481 ± 8% +466.1% 42357 ± 2% sched_debug.cpu#20.sched_goidle
444177 ± 20% +31.4% 583581 ± 4% sched_debug.cpu#20.avg_idle
34935 ± 17% +231.3% 115752 ± 6% sched_debug.cpu#20.sched_count
13 ± 3% -34.5% 9 ± 7% sched_debug.cpu#20.cpu_load[1]
31626 ± 7% +212.4% 98796 ± 2% sched_debug.cpu#20.nr_switches
14 ± 3% -34.5% 9 ± 5% sched_debug.cpu#20.cpu_load[3]
14 ± 0% -35.7% 9 ± 7% sched_debug.cpu#20.cpu_load[0]
14 ± 3% -32.8% 9 ± 4% sched_debug.cpu#20.cpu_load[4]
5213 ± 3% -27.4% 3782 ± 7% sched_debug.cpu#20.curr->pid
14 ± 5% -33.9% 9 ± 4% sched_debug.cpu#20.cpu_load[2]
14 ± 3% -33.3% 9 ± 5% sched_debug.cpu#21.cpu_load[3]
14 ± 7% -33.3% 9 ± 9% sched_debug.cpu#21.cpu_load[0]
14 ± 7% -33.3% 9 ± 5% sched_debug.cpu#21.cpu_load[1]
7015 ± 7% +490.2% 41403 ± 1% sched_debug.cpu#21.sched_goidle
32048 ± 16% +198.2% 95554 ± 2% sched_debug.cpu#21.nr_switches
5073 ± 5% -25.3% 3790 ± 6% sched_debug.cpu#21.curr->pid
14 ± 3% -31.0% 10 ± 0% sched_debug.cpu#21.cpu_load[4]
14 ± 3% -33.3% 9 ± 5% sched_debug.cpu#21.cpu_load[2]
39334 ± 17% +167.5% 105239 ± 8% sched_debug.cpu#21.sched_count
150872 ± 0% -10.5% 135099 ± 1% sched_debug.cpu#21.nr_load_updates
18112 ± 15% +180.9% 50883 ± 2% sched_debug.cpu#21.ttwu_count
439087 ± 7% +48.2% 650939 ± 5% sched_debug.cpu#21.avg_idle
5166 ± 2% -26.3% 3805 ± 8% sched_debug.cpu#22.curr->pid
14 ± 3% -38.6% 8 ± 9% sched_debug.cpu#22.cpu_load[2]
14 ± 3% -33.3% 9 ± 5% sched_debug.cpu#22.cpu_load[4]
14 ± 3% -35.1% 9 ± 4% sched_debug.cpu#22.cpu_load[3]
14 ± 7% -36.8% 9 ± 0% sched_debug.cpu#22.load
7428 ± 10% +458.5% 41491 ± 2% sched_debug.cpu#22.sched_goidle
14 ± 5% -35.7% 9 ± 7% sched_debug.cpu#22.cpu_load[0]
17236 ± 10% +197.8% 51335 ± 2% sched_debug.cpu#22.ttwu_count
40157 ± 29% +179.4% 112186 ± 7% sched_debug.cpu#22.sched_count
425732 ± 7% +42.6% 607224 ± 9% sched_debug.cpu#22.avg_idle
14 ± 5% -35.7% 9 ± 7% sched_debug.cpu#22.cpu_load[1]
31510 ± 9% +204.0% 95784 ± 2% sched_debug.cpu#22.nr_switches
151172 ± 0% -10.6% 135200 ± 0% sched_debug.cpu#22.nr_load_updates
6321 ± 6% +544.7% 40754 ± 2% sched_debug.cpu#23.sched_goidle
150554 ± 0% -10.0% 135467 ± 1% sched_debug.cpu#23.nr_load_updates
17495 ± 7% +184.8% 49824 ± 2% sched_debug.cpu#23.ttwu_count
13 ± 3% -36.4% 8 ± 4% sched_debug.cpu#23.cpu_load[2]
421307 ± 8% +34.1% 564941 ± 1% sched_debug.cpu#23.avg_idle
37049 ± 20% +217.2% 117530 ± 18% sched_debug.cpu#23.sched_count
13 ± 3% -38.2% 8 ± 5% sched_debug.cpu#23.cpu_load[1]
14 ± 5% -33.9% 9 ± 4% sched_debug.cpu#23.cpu_load[3]
13 ± 3% -34.5% 9 ± 0% sched_debug.cpu#23.cpu_load[0]
30682 ± 8% +208.4% 94634 ± 3% sched_debug.cpu#23.nr_switches
14 ± 5% -32.1% 9 ± 5% sched_debug.cpu#23.cpu_load[4]
4883 ± 6% -23.7% 3727 ± 11% sched_debug.cpu#23.curr->pid
14 ± 5% -39.3% 8 ± 13% sched_debug.cpu#23.load
5238 ± 3% -29.0% 3720 ± 15% sched_debug.cpu#24.curr->pid
6584 ± 3% +532.7% 41658 ± 3% sched_debug.cpu#24.sched_goidle
428673 ± 10% +40.8% 603741 ± 7% sched_debug.cpu#24.avg_idle
14 ± 3% -36.2% 9 ± 4% sched_debug.cpu#24.cpu_load[0]
39369 ± 8% +197.9% 117265 ± 12% sched_debug.cpu#24.sched_count
150374 ± 0% -10.1% 135251 ± 1% sched_debug.cpu#24.nr_load_updates
9006 ± 8% -32.2% 6104 ± 5% sched_debug.cpu#24.ttwu_local
14 ± 3% -31.0% 10 ± 0% sched_debug.cpu#24.cpu_load[4]
34540 ± 5% +173.1% 94315 ± 3% sched_debug.cpu#24.nr_switches
14 ± 3% -34.5% 9 ± 5% sched_debug.cpu#24.cpu_load[1]
19271 ± 2% +164.7% 51003 ± 2% sched_debug.cpu#24.ttwu_count
14 ± 3% -31.0% 10 ± 0% sched_debug.cpu#24.cpu_load[3]
14 ± 3% -34.5% 9 ± 5% sched_debug.cpu#24.cpu_load[2]
18407 ± 7% +177.0% 50983 ± 3% sched_debug.cpu#25.ttwu_count
5311 ± 0% -25.3% 3969 ± 9% sched_debug.cpu#25.curr->pid
15 ± 4% -41.7% 8 ± 14% sched_debug.cpu#25.cpu_load[1]
483320 ± 10% +17.9% 569719 ± 5% sched_debug.cpu#25.avg_idle
6536 ± 3% +542.2% 41973 ± 2% sched_debug.cpu#25.sched_goidle
32159 ± 7% +198.9% 96122 ± 3% sched_debug.cpu#25.nr_switches
14 ± 5% -37.3% 9 ± 14% sched_debug.cpu#25.load
14 ± 3% -39.7% 8 ± 14% sched_debug.cpu#25.cpu_load[0]
15 ± 4% -40.0% 9 ± 11% sched_debug.cpu#25.cpu_load[2]
40528 ± 7% +189.2% 117196 ± 5% sched_debug.cpu#25.sched_count
14 ± 2% -37.3% 9 ± 8% sched_debug.cpu#25.cpu_load[4]
15 ± 4% -40.0% 9 ± 11% sched_debug.cpu#25.cpu_load[3]
6505 ± 6% +545.5% 41988 ± 2% sched_debug.cpu#26.sched_goidle
32597 ± 8% +198.0% 97140 ± 5% sched_debug.cpu#26.nr_switches
5159 ± 1% -30.0% 3610 ± 11% sched_debug.cpu#26.curr->pid
18398 ± 9% +178.4% 51212 ± 5% sched_debug.cpu#26.ttwu_count
14 ± 5% -41.4% 8 ± 10% sched_debug.cpu#26.cpu_load[1]
14 ± 3% -40.4% 8 ± 10% sched_debug.cpu#26.cpu_load[2]
14 ± 5% -39.7% 8 ± 9% sched_debug.cpu#26.cpu_load[0]
14 ± 3% -36.8% 9 ± 7% sched_debug.cpu#26.cpu_load[3]
39155 ± 14% +180.9% 110006 ± 6% sched_debug.cpu#26.sched_count
14 ± 3% -36.2% 9 ± 11% sched_debug.cpu#26.cpu_load[4]
14 ± 10% -31.0% 10 ± 7% sched_debug.cpu#26.load
418972 ± 13% +39.4% 583946 ± 9% sched_debug.cpu#26.avg_idle
14 ± 3% -37.9% 9 ± 15% sched_debug.cpu#27.load
14 ± 3% -37.9% 9 ± 7% sched_debug.cpu#27.cpu_load[3]
34672 ± 10% +164.2% 91614 ± 2% sched_debug.cpu#27.nr_switches
14 ± 2% -39.0% 9 ± 7% sched_debug.cpu#27.cpu_load[0]
14 ± 3% -37.9% 9 ± 7% sched_debug.cpu#27.cpu_load[1]
9155 ± 15% -38.8% 5606 ± 8% sched_debug.cpu#27.ttwu_local
43896 ± 23% +158.1% 113312 ± 17% sched_debug.cpu#27.sched_count
14 ± 3% -36.8% 9 ± 7% sched_debug.cpu#27.cpu_load[2]
5086 ± 2% -21.0% 4019 ± 13% sched_debug.cpu#27.curr->pid
14 ± 2% -37.3% 9 ± 4% sched_debug.cpu#27.cpu_load[4]
150291 ± 0% -10.0% 135294 ± 1% sched_debug.cpu#27.nr_load_updates
6795 ± 6% +499.4% 40732 ± 1% sched_debug.cpu#27.sched_goidle
412373 ± 18% +42.2% 586421 ± 3% sched_debug.cpu#27.avg_idle
19738 ± 9% +151.6% 49671 ± 3% sched_debug.cpu#27.ttwu_count
6379 ± 6% +531.9% 40316 ± 1% sched_debug.cpu#28.sched_goidle
14 ± 7% -43.1% 8 ± 13% sched_debug.cpu#28.cpu_load[0]
30304 ± 11% +201.2% 91288 ± 3% sched_debug.cpu#28.nr_switches
14 ± 3% -42.1% 8 ± 5% sched_debug.cpu#28.cpu_load[2]
14 ± 5% -37.3% 9 ± 4% sched_debug.cpu#28.cpu_load[4]
7447 ± 19% -21.5% 5848 ± 16% sched_debug.cpu#28.ttwu_local
14 ± 7% -43.1% 8 ± 13% sched_debug.cpu#28.cpu_load[1]
150488 ± 0% -10.0% 135387 ± 1% sched_debug.cpu#28.nr_load_updates
15 ± 12% -40.0% 9 ± 17% sched_debug.cpu#28.load
47096 ± 34% +144.1% 114961 ± 13% sched_debug.cpu#28.sched_count
14 ± 3% -37.9% 9 ± 7% sched_debug.cpu#28.cpu_load[3]
17301 ± 13% +184.6% 49235 ± 3% sched_debug.cpu#28.ttwu_count
429678 ± 7% +31.0% 562755 ± 5% sched_debug.cpu#28.avg_idle
14 ± 3% -36.2% 9 ± 4% sched_debug.cpu#29.cpu_load[2]
150289 ± 0% -10.0% 135209 ± 1% sched_debug.cpu#29.nr_load_updates
5143 ± 2% -20.8% 4074 ± 18% sched_debug.cpu#29.curr->pid
6371 ± 4% +535.5% 40493 ± 3% sched_debug.cpu#29.sched_goidle
14 ± 5% -37.3% 9 ± 4% sched_debug.cpu#29.cpu_load[1]
15 ± 4% -38.3% 9 ± 4% sched_debug.cpu#29.cpu_load[3]
17989 ± 12% +175.7% 49598 ± 2% sched_debug.cpu#29.ttwu_count
14 ± 5% -39.0% 9 ± 0% sched_debug.cpu#29.cpu_load[0]
14 ± 5% -30.4% 9 ± 16% sched_debug.cpu#29.load
31759 ± 14% +188.3% 91569 ± 3% sched_debug.cpu#29.nr_switches
471255 ± 8% +30.0% 612851 ± 9% sched_debug.cpu#29.avg_idle
15 ± 2% -37.7% 9 ± 5% sched_debug.cpu#29.cpu_load[4]
37619 ± 19% +228.8% 123692 ± 12% sched_debug.cpu#29.sched_count
18968 ± 14% +172.7% 51724 ± 3% sched_debug.cpu#3.ttwu_count
39894 ± 18% +153.6% 101178 ± 8% sched_debug.cpu#3.sched_count
33554 ± 16% +186.1% 95988 ± 3% sched_debug.cpu#3.nr_switches
14 ± 3% -35.1% 9 ± 8% sched_debug.cpu#3.cpu_load[2]
14 ± 5% -33.9% 9 ± 8% sched_debug.cpu#3.cpu_load[1]
418310 ± 5% +37.9% 576750 ± 5% sched_debug.cpu#3.avg_idle
14 ± 0% -33.9% 9 ± 8% sched_debug.cpu#3.cpu_load[3]
5142 ± 2% -21.6% 4034 ± 6% sched_debug.cpu#3.curr->pid
7131 ± 5% +495.7% 42485 ± 3% sched_debug.cpu#3.sched_goidle
14 ± 0% -30.4% 9 ± 4% sched_debug.cpu#3.cpu_load[4]
14 ± 7% -35.1% 9 ± 15% sched_debug.cpu#3.cpu_load[0]
14 ± 5% -30.4% 9 ± 11% sched_debug.cpu#3.load
41476 ± 32% +142.2% 100464 ± 4% sched_debug.cpu#30.sched_count
14 ± 2% -37.3% 9 ± 4% sched_debug.cpu#30.cpu_load[4]
31708 ± 14% +185.0% 90376 ± 1% sched_debug.cpu#30.nr_switches
17674 ± 14% +179.3% 49362 ± 1% sched_debug.cpu#30.ttwu_count
150228 ± 0% -10.0% 135265 ± 1% sched_debug.cpu#30.nr_load_updates
14 ± 5% -43.9% 8 ± 0% sched_debug.cpu#30.cpu_load[1]
14 ± 5% -42.1% 8 ± 13% sched_debug.cpu#30.load
14 ± 5% -43.1% 8 ± 5% sched_debug.cpu#30.cpu_load[2]
6329 ± 7% +529.5% 39848 ± 1% sched_debug.cpu#30.sched_goidle
425532 ± 14% +43.5% 610674 ± 8% sched_debug.cpu#30.avg_idle
14 ± 5% -41.4% 8 ± 5% sched_debug.cpu#30.cpu_load[3]
5108 ± 4% -28.4% 3655 ± 15% sched_debug.cpu#30.curr->pid
14 ± 5% -43.9% 8 ± 0% sched_debug.cpu#30.cpu_load[0]
18864 ± 9% +163.9% 49790 ± 1% sched_debug.cpu#31.ttwu_count
433193 ± 3% +38.0% 598002 ± 6% sched_debug.cpu#31.avg_idle
14 ± 3% -38.6% 8 ± 9% sched_debug.cpu#31.cpu_load[4]
14 ± 5% -41.1% 8 ± 15% sched_debug.cpu#31.cpu_load[1]
5129 ± 2% -26.7% 3761 ± 12% sched_debug.cpu#31.curr->pid
33770 ± 12% +172.8% 92110 ± 1% sched_debug.cpu#31.nr_switches
150123 ± 0% -9.7% 135537 ± 1% sched_debug.cpu#31.nr_load_updates
14 ± 5% -41.1% 8 ± 10% sched_debug.cpu#31.cpu_load[0]
6565 ± 5% +508.8% 39972 ± 2% sched_debug.cpu#31.sched_goidle
14 ± 3% -40.4% 8 ± 13% sched_debug.cpu#31.cpu_load[3]
9070 ± 17% -25.3% 6771 ± 13% sched_debug.cpu#31.ttwu_local
14 ± 5% -41.1% 8 ± 15% sched_debug.cpu#31.cpu_load[2]
6227 ± 7% +428.4% 32902 ± 1% sched_debug.cpu#32.sched_goidle
14 ± 5% -41.1% 8 ± 17% sched_debug.cpu#32.cpu_load[1]
13 ± 3% -40.0% 8 ± 10% sched_debug.cpu#32.cpu_load[2]
15098 ± 11% +156.0% 38651 ± 2% sched_debug.cpu#32.ttwu_count
5247 ± 1% -32.3% 3550 ± 14% sched_debug.cpu#32.curr->pid
14 ± 7% -37.5% 8 ± 18% sched_debug.cpu#32.load
14 ± 0% -39.3% 8 ± 10% sched_debug.cpu#32.cpu_load[3]
150384 ± 0% -13.9% 129496 ± 1% sched_debug.cpu#32.nr_load_updates
26391 ± 11% +190.0% 76546 ± 1% sched_debug.cpu#32.nr_switches
375426 ± 9% +79.6% 674153 ± 3% sched_debug.cpu#32.avg_idle
14 ± 0% -35.7% 9 ± 7% sched_debug.cpu#32.cpu_load[4]
28730 ± 15% +174.6% 78888 ± 1% sched_debug.cpu#32.sched_count
13 ± 6% -41.8% 8 ± 17% sched_debug.cpu#32.cpu_load[0]
5267 ± 1% -35.6% 3392 ± 18% sched_debug.cpu#33.curr->pid
14 ± 3% -43.9% 8 ± 15% sched_debug.cpu#33.cpu_load[0]
15812 ± 10% +144.2% 38619 ± 3% sched_debug.cpu#33.ttwu_count
34631 ± 26% +127.0% 78608 ± 4% sched_debug.cpu#33.sched_count
14 ± 3% -36.2% 9 ± 8% sched_debug.cpu#33.cpu_load[4]
150320 ± 0% -14.1% 129155 ± 1% sched_debug.cpu#33.nr_load_updates
14 ± 5% -39.3% 8 ± 13% sched_debug.cpu#33.load
14 ± 2% -39.0% 9 ± 7% sched_debug.cpu#33.cpu_load[3]
14 ± 3% -41.4% 8 ± 13% sched_debug.cpu#33.cpu_load[2]
14 ± 3% -43.9% 8 ± 15% sched_debug.cpu#33.cpu_load[1]
27394 ± 10% +177.4% 75990 ± 3% sched_debug.cpu#33.nr_switches
377751 ± 6% +73.9% 656823 ± 4% sched_debug.cpu#33.avg_idle
6209 ± 3% +429.9% 32905 ± 2% sched_debug.cpu#33.sched_goidle
14 ± 0% -41.1% 8 ± 10% sched_debug.cpu#34.cpu_load[2]
13 ± 3% -39.6% 8 ± 8% sched_debug.cpu#34.cpu_load[0]
14 ± 0% -37.5% 8 ± 9% sched_debug.cpu#34.cpu_load[3]
30105 ± 18% +158.7% 77892 ± 3% sched_debug.cpu#34.nr_switches
150139 ± 0% -13.4% 130058 ± 1% sched_debug.cpu#34.nr_load_updates
36505 ± 26% +122.7% 81302 ± 2% sched_debug.cpu#34.sched_count
5149 ± 3% -29.8% 3612 ± 15% sched_debug.cpu#34.curr->pid
17483 ± 14% +124.1% 39182 ± 2% sched_debug.cpu#34.ttwu_count
427142 ± 13% +60.5% 685572 ± 2% sched_debug.cpu#34.avg_idle
6139 ± 9% +437.9% 33026 ± 3% sched_debug.cpu#34.sched_goidle
14 ± 0% -37.5% 8 ± 9% sched_debug.cpu#34.cpu_load[4]
13 ± 6% -33.3% 9 ± 7% sched_debug.cpu#34.load
13 ± 3% -40.0% 8 ± 10% sched_debug.cpu#34.cpu_load[1]
38213 ± 26% +125.8% 86267 ± 11% sched_debug.cpu#35.sched_count
5162 ± 2% -30.7% 3576 ± 21% sched_debug.cpu#35.curr->pid
150061 ± 0% -13.8% 129327 ± 1% sched_debug.cpu#35.nr_load_updates
14 ± 5% -42.9% 8 ± 12% sched_debug.cpu#35.cpu_load[1]
16579 ± 23% +130.0% 38131 ± 3% sched_debug.cpu#35.ttwu_count
14 ± 5% -37.5% 8 ± 9% sched_debug.cpu#35.cpu_load[0]
439468 ± 4% +53.6% 675096 ± 1% sched_debug.cpu#35.avg_idle
14 ± 0% -41.1% 8 ± 10% sched_debug.cpu#35.cpu_load[2]
14 ± 0% -39.3% 8 ± 5% sched_debug.cpu#35.cpu_load[3]
5768 ± 7% +474.6% 33143 ± 2% sched_debug.cpu#35.sched_goidle
29372 ± 24% +161.3% 76738 ± 2% sched_debug.cpu#35.nr_switches
14 ± 0% -37.5% 8 ± 4% sched_debug.cpu#35.cpu_load[4]
149993 ± 0% -13.6% 129644 ± 0% sched_debug.cpu#36.nr_load_updates
5201 ± 1% -26.1% 3843 ± 8% sched_debug.cpu#36.curr->pid
28801 ± 9% +172.8% 78566 ± 3% sched_debug.cpu#36.nr_switches
13 ± 6% -41.8% 8 ± 8% sched_debug.cpu#36.cpu_load[3]
13 ± 6% -41.8% 8 ± 8% sched_debug.cpu#36.cpu_load[2]
381368 ± 8% +70.1% 648745 ± 2% sched_debug.cpu#36.avg_idle
5943 ± 2% +456.7% 33086 ± 2% sched_debug.cpu#36.sched_goidle
14 ± 8% -33.9% 9 ± 11% sched_debug.cpu#36.load
14 ± 7% -38.6% 8 ± 9% sched_debug.cpu#36.cpu_load[4]
32433 ± 20% +155.2% 82772 ± 3% sched_debug.cpu#36.sched_count
13 ± 6% -41.8% 8 ± 8% sched_debug.cpu#36.cpu_load[0]
15835 ± 8% +148.5% 39344 ± 4% sched_debug.cpu#36.ttwu_count
13 ± 6% -41.8% 8 ± 8% sched_debug.cpu#36.cpu_load[1]
39427 ± 23% +103.9% 80377 ± 5% sched_debug.cpu#37.sched_count
28952 ± 1% +164.5% 76574 ± 3% sched_debug.cpu#37.nr_switches
5289 ± 1% -30.6% 3670 ± 2% sched_debug.cpu#37.curr->pid
13 ± 3% -37.0% 8 ± 10% sched_debug.cpu#37.cpu_load[3]
13 ± 3% -32.7% 9 ± 11% sched_debug.cpu#37.load
13 ± 6% -38.2% 8 ± 10% sched_debug.cpu#37.cpu_load[2]
416579 ± 11% +61.7% 673750 ± 1% sched_debug.cpu#37.avg_idle
13 ± 6% -38.2% 8 ± 10% sched_debug.cpu#37.cpu_load[1]
14 ± 0% -33.9% 9 ± 4% sched_debug.cpu#37.cpu_load[4]
15659 ± 3% +149.5% 39063 ± 4% sched_debug.cpu#37.ttwu_count
149915 ± 0% -13.6% 129537 ± 1% sched_debug.cpu#37.nr_load_updates
6300 ± 7% +419.9% 32753 ± 2% sched_debug.cpu#37.sched_goidle
13 ± 6% -38.2% 8 ± 10% sched_debug.cpu#37.cpu_load[0]
5944 ± 2% +454.9% 32984 ± 1% sched_debug.cpu#38.sched_goidle
5213 ± 2% -32.9% 3499 ± 16% sched_debug.cpu#38.curr->pid
14 ± 3% -43.1% 8 ± 5% sched_debug.cpu#38.cpu_load[3]
14 ± 3% -40.4% 8 ± 13% sched_debug.cpu#38.load
14 ± 3% -43.9% 8 ± 8% sched_debug.cpu#38.cpu_load[1]
33925 ± 40% +138.0% 80747 ± 2% sched_debug.cpu#38.sched_count
14 ± 3% -44.8% 8 ± 8% sched_debug.cpu#38.cpu_load[2]
150000 ± 0% -13.9% 129180 ± 1% sched_debug.cpu#38.nr_load_updates
437919 ± 6% +64.8% 721794 ± 5% sched_debug.cpu#38.avg_idle
14 ± 5% -42.1% 8 ± 10% sched_debug.cpu#38.cpu_load[0]
14 ± 5% -42.4% 8 ± 5% sched_debug.cpu#38.cpu_load[4]
25748 ± 6% +202.9% 77995 ± 1% sched_debug.cpu#38.nr_switches
14197 ± 8% +172.5% 38691 ± 1% sched_debug.cpu#38.ttwu_count
29596 ± 13% +159.9% 76923 ± 1% sched_debug.cpu#39.nr_switches
14 ± 0% -37.5% 8 ± 4% sched_debug.cpu#39.cpu_load[1]
14 ± 0% -37.5% 8 ± 4% sched_debug.cpu#39.cpu_load[2]
14 ± 3% -35.1% 9 ± 4% sched_debug.cpu#39.cpu_load[3]
149902 ± 0% -13.6% 129582 ± 1% sched_debug.cpu#39.nr_load_updates
31260 ± 11% +166.4% 83279 ± 3% sched_debug.cpu#39.sched_count
5882 ± 8% +449.3% 32310 ± 1% sched_debug.cpu#39.sched_goidle
5291 ± 1% -30.0% 3705 ± 8% sched_debug.cpu#39.curr->pid
16822 ± 10% +137.4% 39930 ± 3% sched_debug.cpu#39.ttwu_count
14 ± 0% -42.9% 8 ± 8% sched_debug.cpu#39.cpu_load[0]
14 ± 0% -35.7% 9 ± 7% sched_debug.cpu#39.load
420204 ± 7% +54.7% 649999 ± 2% sched_debug.cpu#39.avg_idle
14 ± 3% -35.1% 9 ± 4% sched_debug.cpu#39.cpu_load[4]
15 ± 10% -41.0% 9 ± 13% sched_debug.cpu#4.cpu_load[1]
436412 ± 12% +41.1% 615909 ± 5% sched_debug.cpu#4.avg_idle
15 ± 7% -40.3% 9 ± 11% sched_debug.cpu#4.cpu_load[3]
8027 ± 21% +443.1% 43600 ± 2% sched_debug.cpu#4.sched_goidle
14 ± 3% -35.1% 9 ± 11% sched_debug.cpu#4.cpu_load[0]
5160 ± 2% -20.9% 4079 ± 7% sched_debug.cpu#4.curr->pid
22071 ± 4% +138.0% 52524 ± 3% sched_debug.cpu#4.ttwu_count
47912 ± 8% +113.3% 102208 ± 1% sched_debug.cpu#4.sched_count
39644 ± 4% +147.9% 98279 ± 3% sched_debug.cpu#4.nr_switches
11544 ± 2% -27.3% 8397 ± 22% sched_debug.cpu#4.ttwu_local
15 ± 10% -41.0% 9 ± 13% sched_debug.cpu#4.cpu_load[2]
13 ± 3% -25.5% 10 ± 8% sched_debug.cpu#4.load
15 ± 7% -38.7% 9 ± 9% sched_debug.cpu#4.cpu_load[4]
403692 ± 12% +61.8% 653091 ± 5% sched_debug.cpu#40.avg_idle
14 ± 3% -36.8% 9 ± 7% sched_debug.cpu#40.cpu_load[4]
149865 ± 0% -13.1% 130283 ± 1% sched_debug.cpu#40.nr_load_updates
5812 ± 5% +474.9% 33412 ± 3% sched_debug.cpu#40.sched_goidle
15812 ± 10% +144.8% 38704 ± 4% sched_debug.cpu#40.ttwu_count
14 ± 3% -42.1% 8 ± 10% sched_debug.cpu#40.cpu_load[3]
14 ± 3% -42.1% 8 ± 10% sched_debug.cpu#40.cpu_load[2]
14 ± 5% -38.6% 8 ± 12% sched_debug.cpu#40.load
28338 ± 11% +173.6% 77530 ± 3% sched_debug.cpu#40.nr_switches
32984 ± 15% +143.5% 80306 ± 3% sched_debug.cpu#40.sched_count
14 ± 7% -43.1% 8 ± 10% sched_debug.cpu#40.cpu_load[0]
5223 ± 1% -28.0% 3762 ± 10% sched_debug.cpu#40.curr->pid
14 ± 7% -42.1% 8 ± 10% sched_debug.cpu#40.cpu_load[1]
14 ± 0% -37.5% 8 ± 9% sched_debug.cpu#41.cpu_load[4]
13 ± 6% -36.4% 8 ± 14% sched_debug.cpu#41.load
5946 ± 6% +464.4% 33561 ± 2% sched_debug.cpu#41.sched_goidle
5100 ± 3% -27.4% 3701 ± 12% sched_debug.cpu#41.curr->pid
32262 ± 16% +148.7% 80251 ± 3% sched_debug.cpu#41.sched_count
29197 ± 8% +167.5% 78115 ± 2% sched_debug.cpu#41.nr_switches
149918 ± 0% -13.7% 129319 ± 1% sched_debug.cpu#41.nr_load_updates
13 ± 3% -41.8% 8 ± 8% sched_debug.cpu#41.cpu_load[0]
13 ± 3% -41.8% 8 ± 8% sched_debug.cpu#41.cpu_load[1]
13 ± 3% -40.0% 8 ± 5% sched_debug.cpu#41.cpu_load[2]
16140 ± 8% +139.4% 38649 ± 4% sched_debug.cpu#41.ttwu_count
443969 ± 6% +53.8% 682912 ± 3% sched_debug.cpu#41.avg_idle
13 ± 3% -40.0% 8 ± 5% sched_debug.cpu#41.cpu_load[3]
15 ± 2% -45.9% 8 ± 10% sched_debug.cpu#42.cpu_load[0]
14 ± 3% -41.4% 8 ± 5% sched_debug.cpu#42.cpu_load[4]
15 ± 4% -48.3% 7 ± 24% sched_debug.cpu#42.load
5183 ± 3% -37.9% 3216 ± 21% sched_debug.cpu#42.curr->pid
15330 ± 5% +163.4% 40384 ± 2% sched_debug.cpu#42.ttwu_count
14 ± 5% -43.9% 8 ± 8% sched_debug.cpu#42.cpu_load[3]
5651 ± 3% +498.4% 33819 ± 1% sched_debug.cpu#42.sched_goidle
14 ± 3% -43.1% 8 ± 5% sched_debug.cpu#42.cpu_load[1]
31858 ± 13% +165.8% 84681 ± 2% sched_debug.cpu#42.sched_count
427660 ± 8% +65.9% 709402 ± 1% sched_debug.cpu#42.avg_idle
149880 ± 0% -13.3% 129968 ± 1% sched_debug.cpu#42.nr_load_updates
14 ± 3% -43.1% 8 ± 5% sched_debug.cpu#42.cpu_load[2]
26967 ± 4% +199.1% 80670 ± 2% sched_debug.cpu#42.nr_switches
149899 ± 0% -13.6% 129473 ± 1% sched_debug.cpu#43.nr_load_updates
14 ± 5% -47.5% 7 ± 5% sched_debug.cpu#43.load
14 ± 8% -47.5% 7 ± 10% sched_debug.cpu#43.cpu_load[1]
15334 ± 13% +153.6% 38892 ± 4% sched_debug.cpu#43.ttwu_count
28502 ± 15% +172.9% 77773 ± 1% sched_debug.cpu#43.nr_switches
6038 ± 9% +450.9% 33264 ± 1% sched_debug.cpu#43.sched_goidle
5310 ± 0% -36.4% 3380 ± 18% sched_debug.cpu#43.curr->pid
406693 ± 5% +67.1% 679394 ± 7% sched_debug.cpu#43.avg_idle
14 ± 8% -45.8% 8 ± 8% sched_debug.cpu#43.cpu_load[0]
14 ± 5% -39.7% 8 ± 9% sched_debug.cpu#43.cpu_load[4]
30032 ± 13% +163.2% 79035 ± 1% sched_debug.cpu#43.sched_count
14 ± 5% -41.4% 8 ± 5% sched_debug.cpu#43.cpu_load[3]
14 ± 5% -46.6% 7 ± 10% sched_debug.cpu#43.cpu_load[2]
16043 ± 12% +142.6% 38928 ± 2% sched_debug.cpu#44.ttwu_count
5236 ± 3% -31.7% 3576 ± 17% sched_debug.cpu#44.curr->pid
14 ± 3% -36.8% 9 ± 13% sched_debug.cpu#44.cpu_load[4]
30020 ± 11% +155.1% 76592 ± 2% sched_debug.cpu#44.nr_switches
6216 ± 2% +425.2% 32650 ± 1% sched_debug.cpu#44.sched_goidle
14 ± 5% -37.5% 8 ± 21% sched_debug.cpu#44.load
393059 ± 9% +77.3% 696766 ± 4% sched_debug.cpu#44.avg_idle
32223 ± 15% +151.2% 80928 ± 1% sched_debug.cpu#44.sched_count
14 ± 5% -41.4% 8 ± 17% sched_debug.cpu#44.cpu_load[0]
14 ± 5% -39.3% 8 ± 17% sched_debug.cpu#44.cpu_load[2]
14 ± 5% -40.4% 8 ± 17% sched_debug.cpu#44.cpu_load[1]
149852 ± 0% -13.6% 129432 ± 1% sched_debug.cpu#44.nr_load_updates
14 ± 3% -38.6% 8 ± 14% sched_debug.cpu#44.cpu_load[3]
14 ± 3% -40.4% 8 ± 5% sched_debug.cpu#45.cpu_load[4]
27127 ± 9% +189.0% 78406 ± 2% sched_debug.cpu#45.nr_switches
30870 ± 8% +180.3% 86533 ± 5% sched_debug.cpu#45.sched_count
15222 ± 8% +159.4% 39486 ± 4% sched_debug.cpu#45.ttwu_count
399822 ± 7% +64.7% 658696 ± 8% sched_debug.cpu#45.avg_idle
14 ± 3% -43.9% 8 ± 8% sched_debug.cpu#45.cpu_load[3]
149876 ± 0% -13.1% 130170 ± 2% sched_debug.cpu#45.nr_load_updates
5727 ± 4% +481.5% 33305 ± 0% sched_debug.cpu#45.sched_goidle
13 ± 6% -45.5% 7 ± 6% sched_debug.cpu#45.cpu_load[0]
14 ± 9% -38.6% 8 ± 9% sched_debug.cpu#45.load
5161 ± 1% -33.0% 3460 ± 12% sched_debug.cpu#45.curr->pid
13 ± 6% -43.6% 7 ± 5% sched_debug.cpu#45.cpu_load[1]
13 ± 6% -41.8% 8 ± 8% sched_debug.cpu#45.cpu_load[2]
5143 ± 4% -34.9% 3347 ± 19% sched_debug.cpu#46.curr->pid
5725 ± 4% +463.9% 32286 ± 1% sched_debug.cpu#46.sched_goidle
14 ± 3% -46.6% 7 ± 10% sched_debug.cpu#46.cpu_load[2]
14 ± 3% -39.7% 8 ± 9% sched_debug.cpu#46.cpu_load[4]
14 ± 3% -47.4% 7 ± 6% sched_debug.cpu#46.cpu_load[0]
14 ± 3% -44.8% 8 ± 12% sched_debug.cpu#46.cpu_load[3]
30713 ± 17% +165.0% 81393 ± 11% sched_debug.cpu#46.sched_count
27647 ± 10% +173.2% 75542 ± 2% sched_debug.cpu#46.nr_switches
14 ± 3% -47.4% 7 ± 6% sched_debug.cpu#46.cpu_load[1]
14823 ± 5% +158.6% 38328 ± 2% sched_debug.cpu#46.ttwu_count
402688 ± 9% +78.1% 717376 ± 2% sched_debug.cpu#46.avg_idle
149705 ± 0% -13.8% 129111 ± 1% sched_debug.cpu#46.nr_load_updates
27298 ± 6% +175.3% 75141 ± 3% sched_debug.cpu#47.nr_switches
13 ± 6% -38.9% 8 ± 10% sched_debug.cpu#47.load
14 ± 13% -45.8% 8 ± 0% sched_debug.cpu#47.cpu_load[2]
14 ± 7% -38.6% 8 ± 4% sched_debug.cpu#47.cpu_load[4]
5296 ± 1% -29.2% 3752 ± 7% sched_debug.cpu#47.curr->pid
5721 ± 6% +458.4% 31951 ± 1% sched_debug.cpu#47.sched_goidle
15555 ± 5% +150.9% 39035 ± 3% sched_debug.cpu#47.ttwu_count
15 ± 14% -49.2% 7 ± 5% sched_debug.cpu#47.cpu_load[1]
15 ± 19% -50.8% 7 ± 6% sched_debug.cpu#47.cpu_load[0]
427514 ± 3% +62.5% 694869 ± 5% sched_debug.cpu#47.avg_idle
30901 ± 9% +150.4% 77385 ± 4% sched_debug.cpu#47.sched_count
149712 ± 0% -13.8% 129072 ± 1% sched_debug.cpu#47.nr_load_updates
14 ± 10% -43.1% 8 ± 5% sched_debug.cpu#47.cpu_load[3]
5825 ± 9% +484.6% 34052 ± 2% sched_debug.cpu#48.sched_goidle
445509 ± 10% +55.6% 693060 ± 2% sched_debug.cpu#48.avg_idle
5144 ± 2% -27.6% 3724 ± 15% sched_debug.cpu#48.curr->pid
149663 ± 0% -13.2% 129942 ± 1% sched_debug.cpu#48.nr_load_updates
31091 ± 19% +162.8% 81718 ± 1% sched_debug.cpu#48.nr_switches
13 ± 3% -34.0% 8 ± 9% sched_debug.cpu#48.cpu_load[0]
37659 ± 9% +128.2% 85950 ± 2% sched_debug.cpu#48.sched_count
13 ± 6% -30.2% 9 ± 20% sched_debug.cpu#48.load
14 ± 5% -37.5% 8 ± 9% sched_debug.cpu#48.cpu_load[1]
14 ± 7% -38.6% 8 ± 9% sched_debug.cpu#48.cpu_load[2]
14 ± 5% -37.9% 9 ± 7% sched_debug.cpu#48.cpu_load[3]
14 ± 3% -35.1% 9 ± 4% sched_debug.cpu#48.cpu_load[4]
17375 ± 17% +135.1% 40852 ± 3% sched_debug.cpu#48.ttwu_count
14781 ± 16% +169.2% 39795 ± 2% sched_debug.cpu#49.ttwu_count
5723 ± 5% +499.1% 34288 ± 2% sched_debug.cpu#49.sched_goidle
13 ± 3% -33.3% 9 ± 13% sched_debug.cpu#49.load
14 ± 0% -39.3% 8 ± 5% sched_debug.cpu#49.cpu_load[4]
13 ± 3% -43.6% 7 ± 5% sched_debug.cpu#49.cpu_load[3]
29140 ± 19% +193.2% 85438 ± 5% sched_debug.cpu#49.sched_count
149680 ± 0% -13.5% 129415 ± 1% sched_debug.cpu#49.nr_load_updates
26451 ± 18% +205.1% 80693 ± 3% sched_debug.cpu#49.nr_switches
5140 ± 2% -26.7% 3767 ± 9% sched_debug.cpu#49.curr->pid
442201 ± 9% +59.5% 705228 ± 5% sched_debug.cpu#49.avg_idle
13 ± 3% -48.1% 7 ± 0% sched_debug.cpu#49.cpu_load[0]
13 ± 3% -46.3% 7 ± 5% sched_debug.cpu#49.cpu_load[2]
13 ± 3% -48.1% 7 ± 0% sched_debug.cpu#49.cpu_load[1]
393207 ± 8% +44.0% 566272 ± 12% sched_debug.cpu#5.avg_idle
14 ± 3% -40.4% 8 ± 5% sched_debug.cpu#5.cpu_load[2]
5202 ± 3% -27.5% 3773 ± 23% sched_debug.cpu#5.curr->pid
7245 ± 9% +487.3% 42556 ± 3% sched_debug.cpu#5.sched_goidle
14 ± 3% -43.1% 8 ± 13% sched_debug.cpu#5.cpu_load[0]
50662 ± 48% +101.7% 102190 ± 4% sched_debug.cpu#5.sched_count
8840 ± 4% -14.7% 7541 ± 11% sched_debug.cpu#5.ttwu_local
14 ± 8% -25.0% 10 ± 10% sched_debug.cpu#5.load
34132 ± 2% +181.9% 96203 ± 3% sched_debug.cpu#5.nr_switches
14 ± 3% -40.4% 8 ± 5% sched_debug.cpu#5.cpu_load[3]
19453 ± 1% +163.4% 51235 ± 2% sched_debug.cpu#5.ttwu_count
14 ± 3% -35.1% 9 ± 4% sched_debug.cpu#5.cpu_load[4]
14 ± 3% -42.1% 8 ± 13% sched_debug.cpu#5.cpu_load[1]
17176 ± 11% +126.2% 38854 ± 3% sched_debug.cpu#50.ttwu_count
6062 ± 6% +454.0% 33580 ± 2% sched_debug.cpu#50.sched_goidle
427537 ± 8% +70.6% 729408 ± 8% sched_debug.cpu#50.avg_idle
8338 ± 16% -26.7% 6110 ± 4% sched_debug.cpu#50.ttwu_local
14 ± 10% -46.6% 7 ± 14% sched_debug.cpu#50.cpu_load[0]
31212 ± 11% +150.5% 78189 ± 2% sched_debug.cpu#50.nr_switches
14 ± 13% -45.8% 8 ± 8% sched_debug.cpu#50.cpu_load[1]
15 ± 11% -46.7% 8 ± 8% sched_debug.cpu#50.cpu_load[2]
14 ± 8% -45.8% 8 ± 8% sched_debug.cpu#50.cpu_load[3]
34287 ± 13% +135.9% 80898 ± 2% sched_debug.cpu#50.sched_count
149585 ± 0% -14.2% 128408 ± 1% sched_debug.cpu#50.nr_load_updates
5077 ± 2% -31.0% 3505 ± 20% sched_debug.cpu#50.curr->pid
14 ± 5% -41.4% 8 ± 10% sched_debug.cpu#50.cpu_load[4]
13 ± 3% -46.3% 7 ± 11% sched_debug.cpu#51.cpu_load[0]
149563 ± 0% -13.7% 129131 ± 1% sched_debug.cpu#51.nr_load_updates
13 ± 6% -37.7% 8 ± 13% sched_debug.cpu#51.load
5961 ± 3% +456.4% 33169 ± 3% sched_debug.cpu#51.sched_goidle
14 ± 8% -50.8% 7 ± 11% sched_debug.cpu#51.cpu_load[1]
14805 ± 14% +169.2% 39856 ± 8% sched_debug.cpu#51.ttwu_count
446768 ± 9% +46.4% 654210 ± 3% sched_debug.cpu#51.avg_idle
27463 ± 13% +189.6% 79531 ± 7% sched_debug.cpu#51.nr_switches
5134 ± 2% -35.3% 3321 ± 18% sched_debug.cpu#51.curr->pid
14 ± 3% -42.1% 8 ± 10% sched_debug.cpu#51.cpu_load[4]
31453 ± 12% +165.8% 83619 ± 4% sched_debug.cpu#51.sched_count
14 ± 5% -44.8% 8 ± 8% sched_debug.cpu#51.cpu_load[3]
14 ± 8% -50.8% 7 ± 11% sched_debug.cpu#51.cpu_load[2]
13 ± 3% -41.8% 8 ± 15% sched_debug.cpu#52.cpu_load[1]
5140 ± 3% -28.5% 3677 ± 26% sched_debug.cpu#52.curr->pid
6165 ± 6% +427.5% 32524 ± 1% sched_debug.cpu#52.sched_goidle
17019 ± 18% +126.4% 38539 ± 4% sched_debug.cpu#52.ttwu_count
13 ± 3% -44.4% 7 ± 20% sched_debug.cpu#52.cpu_load[0]
35027 ± 21% +134.8% 82258 ± 8% sched_debug.cpu#52.sched_count
13 ± 3% -38.2% 8 ± 13% sched_debug.cpu#52.cpu_load[3]
13 ± 6% -38.9% 8 ± 26% sched_debug.cpu#52.load
149605 ± 0% -13.8% 128950 ± 0% sched_debug.cpu#52.nr_load_updates
31250 ± 16% +143.2% 76016 ± 3% sched_debug.cpu#52.nr_switches
13 ± 3% -41.8% 8 ± 15% sched_debug.cpu#52.cpu_load[2]
407007 ± 18% +73.1% 704364 ± 7% sched_debug.cpu#52.avg_idle
14 ± 0% -35.7% 9 ± 11% sched_debug.cpu#52.cpu_load[4]
14 ± 0% -39.3% 8 ± 5% sched_debug.cpu#53.cpu_load[2]
14336 ± 16% +174.3% 39327 ± 2% sched_debug.cpu#53.ttwu_count
149573 ± 0% -13.4% 129557 ± 0% sched_debug.cpu#53.nr_load_updates
14 ± 0% -39.3% 8 ± 5% sched_debug.cpu#53.cpu_load[3]
14 ± 0% -37.5% 8 ± 4% sched_debug.cpu#53.cpu_load[4]
26045 ± 17% +201.9% 78623 ± 4% sched_debug.cpu#53.nr_switches
5785 ± 9% +479.3% 33519 ± 3% sched_debug.cpu#53.sched_goidle
394131 ± 9% +71.8% 677092 ± 4% sched_debug.cpu#53.avg_idle
14 ± 3% -36.2% 9 ± 14% sched_debug.cpu#53.load
1 ± 0% -100.0% 0 ± 0% sched_debug.cpu#53.nr_running
5301 ± 1% -32.4% 3583 ± 22% sched_debug.cpu#53.curr->pid
14 ± 0% -39.3% 8 ± 5% sched_debug.cpu#53.cpu_load[1]
13 ± 3% -40.0% 8 ± 5% sched_debug.cpu#53.cpu_load[0]
27428 ± 19% +215.5% 86525 ± 11% sched_debug.cpu#53.sched_count
13 ± 6% -43.6% 7 ± 5% sched_debug.cpu#54.cpu_load[1]
5229 ± 0% -28.1% 3762 ± 4% sched_debug.cpu#54.curr->pid
149541 ± 0% -13.7% 129115 ± 1% sched_debug.cpu#54.nr_load_updates
5754 ± 5% +481.1% 33442 ± 2% sched_debug.cpu#54.sched_goidle
14 ± 5% -44.6% 7 ± 5% sched_debug.cpu#54.cpu_load[0]
14 ± 14% -41.4% 8 ± 5% sched_debug.cpu#54.load
14 ± 0% -44.6% 7 ± 5% sched_debug.cpu#54.cpu_load[3]
15157 ± 14% +154.4% 38560 ± 2% sched_debug.cpu#54.ttwu_count
14 ± 0% -41.1% 8 ± 5% sched_debug.cpu#54.cpu_load[4]
27067 ± 15% +186.7% 77604 ± 2% sched_debug.cpu#54.nr_switches
381381 ± 9% +87.1% 713657 ± 4% sched_debug.cpu#54.avg_idle
14 ± 0% -44.6% 7 ± 5% sched_debug.cpu#54.cpu_load[2]
33009 ± 30% +218.6% 105158 ± 35% sched_debug.cpu#54.sched_count
13 ± 6% -44.4% 7 ± 11% sched_debug.cpu#55.cpu_load[1]
5335 ± 6% +518.1% 32977 ± 4% sched_debug.cpu#55.sched_goidle
13 ± 8% -33.3% 9 ± 7% sched_debug.cpu#55.load
24372 ± 15% +215.9% 76995 ± 5% sched_debug.cpu#55.nr_switches
5093 ± 4% -21.1% 4016 ± 9% sched_debug.cpu#55.curr->pid
149428 ± 0% -14.0% 128500 ± 0% sched_debug.cpu#55.nr_load_updates
13723 ± 13% +181.8% 38667 ± 2% sched_debug.cpu#55.ttwu_count
13 ± 6% -46.3% 7 ± 17% sched_debug.cpu#55.cpu_load[0]
14 ± 5% -46.4% 7 ± 11% sched_debug.cpu#55.cpu_load[2]
14 ± 5% -42.9% 8 ± 8% sched_debug.cpu#55.cpu_load[3]
28348 ± 21% +190.3% 82293 ± 2% sched_debug.cpu#55.sched_count
431592 ± 5% +50.3% 648536 ± 8% sched_debug.cpu#55.avg_idle
14 ± 5% -37.5% 8 ± 9% sched_debug.cpu#55.cpu_load[4]
29536 ± 13% +232.5% 98216 ± 30% sched_debug.cpu#56.sched_count
149577 ± 0% -13.5% 129346 ± 1% sched_debug.cpu#56.nr_load_updates
14 ± 5% -39.3% 8 ± 10% sched_debug.cpu#56.cpu_load[4]
15441 ± 12% +158.8% 39963 ± 4% sched_debug.cpu#56.ttwu_count
14 ± 5% -41.1% 8 ± 10% sched_debug.cpu#56.cpu_load[3]
14 ± 5% -42.9% 8 ± 8% sched_debug.cpu#56.cpu_load[2]
14 ± 5% -42.9% 8 ± 8% sched_debug.cpu#56.cpu_load[0]
5178 ± 1% -27.9% 3734 ± 8% sched_debug.cpu#56.curr->pid
14 ± 5% -42.9% 8 ± 8% sched_debug.cpu#56.cpu_load[1]
5731 ± 3% +480.4% 33266 ± 3% sched_debug.cpu#56.sched_goidle
382742 ± 10% +80.1% 689145 ± 6% sched_debug.cpu#56.avg_idle
14 ± 8% -37.5% 8 ± 9% sched_debug.cpu#56.load
27782 ± 14% +183.1% 78659 ± 4% sched_debug.cpu#56.nr_switches
13 ± 3% -34.5% 9 ± 0% sched_debug.cpu#57.cpu_load[4]
13 ± 6% -43.6% 7 ± 14% sched_debug.cpu#57.load
14 ± 5% -42.9% 8 ± 0% sched_debug.cpu#57.cpu_load[0]
15431 ± 3% +156.5% 39587 ± 2% sched_debug.cpu#57.ttwu_count
149541 ± 0% -13.9% 128797 ± 1% sched_debug.cpu#57.nr_load_updates
6140 ± 4% +436.5% 32946 ± 2% sched_debug.cpu#57.sched_goidle
29782 ± 7% +177.5% 82641 ± 1% sched_debug.cpu#57.sched_count
14 ± 5% -44.6% 7 ± 5% sched_debug.cpu#57.cpu_load[1]
13 ± 3% -41.8% 8 ± 0% sched_debug.cpu#57.cpu_load[2]
27648 ± 5% +181.2% 77741 ± 1% sched_debug.cpu#57.nr_switches
5142 ± 1% -28.5% 3676 ± 13% sched_debug.cpu#57.curr->pid
13 ± 3% -38.2% 8 ± 5% sched_debug.cpu#57.cpu_load[3]
412300 ± 8% +63.2% 672757 ± 6% sched_debug.cpu#57.avg_idle
25198 ± 10% +209.5% 77988 ± 2% sched_debug.cpu#58.nr_switches
14 ± 5% -40.4% 8 ± 10% sched_debug.cpu#58.load
5766 ± 4% +471.7% 32966 ± 2% sched_debug.cpu#58.sched_goidle
5306 ± 1% -29.0% 3766 ± 8% sched_debug.cpu#58.curr->pid
14008 ± 11% +186.0% 40066 ± 4% sched_debug.cpu#58.ttwu_count
14 ± 3% -38.6% 8 ± 4% sched_debug.cpu#58.cpu_load[4]
149466 ± 0% -13.4% 129403 ± 1% sched_debug.cpu#58.nr_load_updates
14 ± 0% -41.1% 8 ± 5% sched_debug.cpu#58.cpu_load[1]
416949 ± 9% +60.8% 670547 ± 3% sched_debug.cpu#58.avg_idle
14 ± 0% -41.1% 8 ± 5% sched_debug.cpu#58.cpu_load[3]
27843 ± 16% +195.6% 82292 ± 1% sched_debug.cpu#58.sched_count
14 ± 3% -42.1% 8 ± 5% sched_debug.cpu#58.cpu_load[0]
14 ± 0% -41.1% 8 ± 5% sched_debug.cpu#58.cpu_load[2]
13 ± 3% -42.6% 7 ± 10% sched_debug.cpu#59.cpu_load[1]
5322 ± 0% -32.1% 3611 ± 11% sched_debug.cpu#59.curr->pid
385063 ± 11% +81.8% 699943 ± 3% sched_debug.cpu#59.avg_idle
149421 ± 0% -13.5% 129224 ± 1% sched_debug.cpu#59.nr_load_updates
24511 ± 9% +213.0% 76718 ± 2% sched_debug.cpu#59.nr_switches
14102 ± 13% +177.8% 39172 ± 1% sched_debug.cpu#59.ttwu_count
28477 ± 10% +180.7% 79943 ± 4% sched_debug.cpu#59.sched_count
13 ± 3% -42.6% 7 ± 5% sched_debug.cpu#59.cpu_load[0]
14 ± 0% -37.5% 8 ± 9% sched_debug.cpu#59.cpu_load[4]
14 ± 0% -35.7% 9 ± 15% sched_debug.cpu#59.load
5642 ± 6% +480.0% 32728 ± 1% sched_debug.cpu#59.sched_goidle
13 ± 3% -43.6% 7 ± 10% sched_debug.cpu#59.cpu_load[2]
14 ± 0% -41.1% 8 ± 5% sched_debug.cpu#59.cpu_load[3]
4986 ± 3% -27.6% 3609 ± 8% sched_debug.cpu#6.curr->pid
14 ± 5% -39.0% 9 ± 0% sched_debug.cpu#6.cpu_load[3]
9630 ± 11% -28.0% 6929 ± 9% sched_debug.cpu#6.ttwu_local
419211 ± 12% +40.3% 588042 ± 6% sched_debug.cpu#6.avg_idle
14 ± 3% -34.5% 9 ± 5% sched_debug.cpu#6.cpu_load[4]
20287 ± 12% +154.0% 51526 ± 2% sched_debug.cpu#6.ttwu_count
14 ± 8% -39.3% 8 ± 5% sched_debug.cpu#6.cpu_load[0]
46733 ± 31% +114.9% 100445 ± 2% sched_debug.cpu#6.sched_count
150916 ± 0% -9.8% 136187 ± 1% sched_debug.cpu#6.nr_load_updates
36120 ± 9% +162.9% 94974 ± 2% sched_debug.cpu#6.nr_switches
14 ± 8% -41.1% 8 ± 5% sched_debug.cpu#6.cpu_load[1]
7113 ± 9% +480.9% 41316 ± 2% sched_debug.cpu#6.sched_goidle
14 ± 5% -39.7% 8 ± 4% sched_debug.cpu#6.cpu_load[2]
14 ± 10% -49.2% 7 ± 6% sched_debug.cpu#60.load
439937 ± 10% +54.5% 679526 ± 5% sched_debug.cpu#60.avg_idle
14 ± 5% -39.7% 8 ± 9% sched_debug.cpu#60.cpu_load[4]
5179 ± 2% -35.5% 3342 ± 10% sched_debug.cpu#60.curr->pid
28843 ± 16% +167.8% 77229 ± 4% sched_debug.cpu#60.nr_switches
33067 ± 12% +247.8% 115018 ± 28% sched_debug.cpu#60.sched_count
14 ± 7% -45.6% 7 ± 10% sched_debug.cpu#60.cpu_load[2]
15944 ± 16% +148.5% 39619 ± 3% sched_debug.cpu#60.ttwu_count
14 ± 7% -43.9% 8 ± 8% sched_debug.cpu#60.cpu_load[3]
149390 ± 0% -13.5% 129257 ± 1% sched_debug.cpu#60.nr_load_updates
5661 ± 5% +476.4% 32633 ± 2% sched_debug.cpu#60.sched_goidle
14 ± 7% -49.1% 7 ± 5% sched_debug.cpu#60.cpu_load[1]
14 ± 7% -51.7% 7 ± 10% sched_debug.cpu#60.cpu_load[0]
2 ± 47% +172.7% 7 ± 46% sched_debug.cpu#60.nr_uninterruptible
14 ± 3% -37.9% 9 ± 7% sched_debug.cpu#61.cpu_load[4]
149440 ± 0% -13.8% 128815 ± 1% sched_debug.cpu#61.nr_load_updates
29614 ± 21% +172.4% 80671 ± 7% sched_debug.cpu#61.sched_count
14 ± 3% -43.1% 8 ± 10% sched_debug.cpu#61.cpu_load[3]
14 ± 3% -43.9% 8 ± 8% sched_debug.cpu#61.cpu_load[1]
13368 ± 10% +181.0% 37568 ± 2% sched_debug.cpu#61.ttwu_count
14 ± 3% -44.8% 8 ± 8% sched_debug.cpu#61.cpu_load[2]
5539 ± 4% +492.5% 32824 ± 1% sched_debug.cpu#61.sched_goidle
386906 ± 8% +76.3% 682133 ± 7% sched_debug.cpu#61.avg_idle
14 ± 3% -42.1% 8 ± 5% sched_debug.cpu#61.cpu_load[0]
5248 ± 2% -33.1% 3512 ± 8% sched_debug.cpu#61.curr->pid
14 ± 3% -38.6% 8 ± 21% sched_debug.cpu#61.load
23867 ± 10% +214.5% 75068 ± 1% sched_debug.cpu#61.nr_switches
25806 ± 16% +202.7% 78114 ± 0% sched_debug.cpu#62.nr_switches
14 ± 3% -43.9% 8 ± 8% sched_debug.cpu#62.cpu_load[2]
149356 ± 0% -13.5% 129229 ± 1% sched_debug.cpu#62.nr_load_updates
14 ± 7% -45.8% 8 ± 8% sched_debug.cpu#62.cpu_load[0]
13869 ± 13% +183.4% 39304 ± 0% sched_debug.cpu#62.ttwu_count
5710 ± 6% +477.2% 32961 ± 1% sched_debug.cpu#62.sched_goidle
14 ± 3% -44.8% 8 ± 8% sched_debug.cpu#62.cpu_load[1]
14 ± 5% -39.3% 8 ± 10% sched_debug.cpu#62.load
14 ± 3% -38.6% 8 ± 12% sched_debug.cpu#62.cpu_load[4]
4966 ± 5% -31.5% 3402 ± 7% sched_debug.cpu#62.curr->pid
378717 ± 15% +76.9% 669793 ± 4% sched_debug.cpu#62.avg_idle
14 ± 3% -42.1% 8 ± 13% sched_debug.cpu#62.cpu_load[3]
29176 ± 10% +189.4% 84428 ± 5% sched_debug.cpu#62.sched_count
5768 ± 4% +457.9% 32180 ± 3% sched_debug.cpu#63.sched_goidle
14 ± 3% -42.1% 8 ± 10% sched_debug.cpu#63.cpu_load[4]
14 ± 0% -44.6% 7 ± 10% sched_debug.cpu#63.cpu_load[0]
14 ± 0% -46.4% 7 ± 11% sched_debug.cpu#63.cpu_load[2]
5225 ± 0% -31.4% 3584 ± 8% sched_debug.cpu#63.curr->pid
30028 ± 12% +167.0% 80165 ± 5% sched_debug.cpu#63.sched_count
149417 ± 0% -13.6% 129103 ± 1% sched_debug.cpu#63.nr_load_updates
14 ± 0% -44.6% 7 ± 10% sched_debug.cpu#63.cpu_load[1]
14 ± 3% -43.9% 8 ± 8% sched_debug.cpu#63.cpu_load[3]
28749 ± 9% +160.3% 74821 ± 4% sched_debug.cpu#63.nr_switches
16124 ± 8% +139.4% 38596 ± 4% sched_debug.cpu#63.ttwu_count
381327 ± 11% +76.9% 674475 ± 1% sched_debug.cpu#63.avg_idle
13 ± 3% -36.4% 8 ± 4% sched_debug.cpu#63.load
5228 ± 2% -30.4% 3638 ± 18% sched_debug.cpu#7.curr->pid
32692 ± 14% +177.2% 90611 ± 2% sched_debug.cpu#7.nr_switches
14 ± 3% -39.7% 8 ± 4% sched_debug.cpu#7.cpu_load[3]
432214 ± 9% +31.0% 566127 ± 7% sched_debug.cpu#7.avg_idle
18548 ± 13% +164.5% 49069 ± 3% sched_debug.cpu#7.ttwu_count
14 ± 3% -35.1% 9 ± 8% sched_debug.cpu#7.cpu_load[0]
14 ± 3% -36.2% 9 ± 8% sched_debug.cpu#7.cpu_load[4]
8398 ± 21% -33.4% 5591 ± 4% sched_debug.cpu#7.ttwu_local
14 ± 3% -36.8% 9 ± 7% sched_debug.cpu#7.cpu_load[1]
14 ± 3% -36.8% 9 ± 19% sched_debug.cpu#7.load
151006 ± 0% -9.9% 136117 ± 1% sched_debug.cpu#7.nr_load_updates
14 ± 3% -39.7% 8 ± 4% sched_debug.cpu#7.cpu_load[2]
6755 ± 8% +495.5% 40224 ± 2% sched_debug.cpu#7.sched_goidle
44491 ± 27% +113.6% 95041 ± 1% sched_debug.cpu#7.sched_count
430043 ± 6% +28.0% 550405 ± 5% sched_debug.cpu#8.avg_idle
36918 ± 14% +155.5% 94310 ± 2% sched_debug.cpu#8.nr_switches
40673 ± 12% +150.6% 101925 ± 4% sched_debug.cpu#8.sched_count
15 ± 2% -37.7% 9 ± 5% sched_debug.cpu#8.cpu_load[3]
15 ± 5% -41.0% 9 ± 0% sched_debug.cpu#8.cpu_load[1]
9474 ± 15% -36.4% 6024 ± 8% sched_debug.cpu#8.ttwu_local
21205 ± 11% +135.4% 49914 ± 2% sched_debug.cpu#8.ttwu_count
1 ± 0% -100.0% 0 ± 0% sched_debug.cpu#8.nr_running
15 ± 3% -40.3% 9 ± 4% sched_debug.cpu#8.cpu_load[2]
5366 ± 0% -29.7% 3773 ± 8% sched_debug.cpu#8.curr->pid
15 ± 5% -39.3% 9 ± 4% sched_debug.cpu#8.cpu_load[0]
153610 ± 1% -11.4% 136085 ± 1% sched_debug.cpu#8.nr_load_updates
15 ± 4% -35.0% 9 ± 4% sched_debug.cpu#8.load
15 ± 2% -36.1% 9 ± 4% sched_debug.cpu#8.cpu_load[4]
6904 ± 9% +503.5% 41667 ± 1% sched_debug.cpu#8.sched_goidle
12279 ± 13% -42.9% 7013 ± 16% sched_debug.cpu#9.ttwu_local
422957 ± 10% +33.0% 562718 ± 10% sched_debug.cpu#9.avg_idle
14 ± 2% -33.9% 9 ± 4% sched_debug.cpu#9.cpu_load[4]
52631 ± 6% +91.3% 100677 ± 3% sched_debug.cpu#9.sched_count
14 ± 2% -27.1% 10 ± 7% sched_debug.cpu#9.load
7657 ± 11% +454.6% 42463 ± 1% sched_debug.cpu#9.sched_goidle
41670 ± 11% +135.0% 97935 ± 3% sched_debug.cpu#9.nr_switches
23117 ± 10% +121.6% 51221 ± 3% sched_debug.cpu#9.ttwu_count
14 ± 2% -40.7% 8 ± 4% sched_debug.cpu#9.cpu_load[2]
14 ± 2% -39.0% 9 ± 7% sched_debug.cpu#9.cpu_load[3]
14 ± 3% -38.6% 8 ± 4% sched_debug.cpu#9.cpu_load[0]
151315 ± 0% -10.2% 135912 ± 1% sched_debug.cpu#9.nr_load_updates
14 ± 3% -39.7% 8 ± 4% sched_debug.cpu#9.cpu_load[1]
5086 ± 5% -13.7% 4389 ± 7% sched_debug.cpu#9.curr->pid
lkp-nex06: Nehalem-EX
Memory: 64G
To reproduce:
apt-get install ruby
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/setup-local job.yaml # the job file attached in this email
bin/run-local job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
_______________________________________________
LKP mailing list
LKP@linux.intel.com
[x86/xsaves] Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000b00
by Fengguang Wu
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://github.com/hansendc/linux.git github-mpx
commit 4d90fc49c73730c09d7afd515f9c4e08d30229bd
Author: Dave Hansen <dave.hansen@intel.com>
AuthorDate: Thu May 7 16:03:56 2015 -0700
Commit: Dave Hansen <dave.hansen@intel.com>
CommitDate: Thu May 7 16:03:56 2015 -0700
x86/xsaves: Define and use user_xstate_size for xstate size in signal context
If "xsaves" is enabled, the kernel always uses the compact format of the
xsave area, but user space still uses the standard format. Thus, the xstate
size in the kernel's xsave area is smaller than the xstate size in the
user's xsave area. The xstate in the signal frame should be in standard
format so that the user's signal handler can access it.
In the no-"xsaves" case, the xsave areas in both user space and kernel
space are in standard format, so the user's and kernel's xstate sizes are
equal. In the "xsaves" case, the xsave area in user space is in standard
format while the xsave area in kernel space is in compact format, so the
kernel's xstate size is less than the user's xstate size.
So here is the problem: the kernel currently uses the kernel's xstate size
for the xstate size in the signal frame. This is not a problem in the
no-"xsaves" case, but it is an issue in the "xsaves" case because the
kernel's xstate size is smaller than the user's. When setting up the
signal math frame in alloc_mathframe(), the fpstate is in standard format,
but a smaller fpstate buffer is allocated in the signal frame for the
standard-format xstate. The kernel then saves only part of the xstate
registers into this smaller user fpstate buffer, and the user sees only
part of the xstate registers in the signal context. A similar issue
happens after returning from the signal handler: the kernel restores only
part of the xstate registers from the user's fpstate buffer in the signal
frame.
This patch defines and uses user_xstate_size for the xstate size in the
signal frame. It is read from the value returned in ebx by CPUID leaf
0x0D, subleaf 0x0, which is the size required by XSAVE to save all states
currently enabled in XCR0 in the legacy/standard format.
And in order to copy the kernel's xsave area in compact format to the user
xsave area in standard format, we use copy_to_user_xstate().
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Reviewed-by: Dave Hansen <dave.hansen@intel.com>
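The size mismatch described in the commit message above can be sketched numerically. The toy component offsets and sizes below are made-up illustrative values, not the real architectural xsave layout: the compact format packs extended states back-to-back, while the standard format places each at a fixed offset, so the compact (kernel) size comes out smaller than the standard (user) size. The expand helper mimics, purely conceptually, the job copy_to_user_xstate() does.

```python
# Illustrative sketch (NOT the real architectural layout): compare the
# compact-format and standard-format xsave area sizes, and expand a
# compact buffer into standard format.

LEGACY_AND_HEADER = 576  # 512-byte legacy region + 64-byte xsave header

# (standard_offset, size) per extended state component, keyed by bit
# number in XCR0 -- example values only.
COMPONENTS = {
    2: (576, 256),
    3: (832, 64),
    4: (896, 64),
}

def standard_size(enabled_bits):
    """Standard format: highest enabled fixed offset plus its size."""
    size = LEGACY_AND_HEADER
    for bit in enabled_bits:
        off, sz = COMPONENTS[bit]
        size = max(size, off + sz)
    return size

def compact_size(enabled_bits):
    """Compact format: enabled components packed back-to-back."""
    return LEGACY_AND_HEADER + sum(COMPONENTS[b][1] for b in sorted(enabled_bits))

def expand_to_standard(compact_buf, enabled_bits):
    """Conceptually what copy_to_user_xstate() does: move each packed
    component out to its fixed standard-format offset, leaving holes
    (here zero-filled) for disabled components."""
    out = bytearray(standard_size(enabled_bits))
    out[:LEGACY_AND_HEADER] = compact_buf[:LEGACY_AND_HEADER]
    src = LEGACY_AND_HEADER
    for bit in sorted(enabled_bits):
        off, sz = COMPONENTS[bit]
        out[off:off + sz] = compact_buf[src:src + sz]
        src += sz
    return bytes(out)
```

With components 2 and 4 enabled in this toy layout, compact_size() is 896 bytes versus 960 for standard_size(); allocating only the compact size for a standard-format signal frame would truncate the last component, which is the mismatch the patch addresses.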
+-----------------------------------------------------------+------------+------------+------------+
| | 1a745816a2 | 4d90fc49c7 | 4d90fc49c7 |
+-----------------------------------------------------------+------------+------------+------------+
| boot_successes | 114 | 31 | 31 |
| boot_failures | 9 | 83 | 83 |
| Unexpected_close,not_stopping_watchdog | 7 | 2 | 2 |
| IP-Config:Auto-configuration_of_network_failed | 2 | 2 | 2 |
| Kernel_panic-not_syncing:Attempted_to_kill_init!exitcode= | 0 | 79 | 79 |
| backtrace:do_group_exit | 0 | 79 | 79 |
| backtrace:SyS_exit_group | 0 | 79 | 79 |
+-----------------------------------------------------------+------------+------------+------------+
[ 1.092237] hostname (61) used greatest stack depth: 6960 bytes left
[ 1.093167] init[1]: segfault at 0 ip (null) sp bfaeb358 error 14 in init[80090000+2e000]
[ 1.095485] init (64) used greatest stack depth: 6888 bytes left
[ 1.096425] Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000b00
[ 1.096425]
[ 1.097012] CPU: 0 PID: 1 Comm: init Not tainted 4.1.0-rc2-00025-g4d90fc4 #5
[ 1.097012] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 1.097012] d082c000 d082c000 d0837f38 cfbd6173 d0837f50 cfbd52e7 cfe04080 d082c000
[ 1.097012] cfe04080 ca5a0a00 d0837f90 cf83c75e cfd57134 00000b00 ca5b2108 b766c058
[ 1.097012] 00000000 00000001 d082c000 ca5a0a00 ca5a0a60 d0837f7c d0837f7c 00000b00
[ 1.097012] Call Trace:
[ 1.097012] [<cfbd6173>] dump_stack+0x16/0x18
[ 1.097012] [<cfbd52e7>] panic+0x7c/0x182
[ 1.097012] [<cf83c75e>] do_exit+0x8fe/0x960
[ 1.097012] [<cf83c828>] do_group_exit+0x28/0x90
[ 1.097012] [<cf83c8a1>] SyS_exit_group+0x11/0x20
[ 1.097012] [<cfbdbf19>] sysenter_do_call+0x12/0x12
[ 1.097012] Kernel Offset: 0xe800000 from 0xc1000000 (relocation range: 0xc0000000-0xd33dffff)
Elapsed time: 5
git bisect start 4d90fc49c73730c09d7afd515f9c4e08d30229bd 5ebe6afaf0057ac3eaeb98defd5456894b446d22 --
git bisect good 542e399436e980f4166ed3c25c04b0029f701f70 # 15:29 24+ 4 x86: make is_64bit_mm() widely available
git bisect good d298e33060450c01ed2889759cff4fd580aebd46 # 15:33 24+ 2 x86, mpx: do not count MPX VMAs as neighbors when unmapping
git bisect good e08ff5ad4a7d9a83e931c96384be0ebbc8e33c0c # 15:39 24+ 2 x86-fpu-xsave_kernel_buffer_compacted
git bisect good b22e8c7c585150f97945d470d3bd9dbf253730f8 # 15:43 24+ 3 x86, fpu: cleanup save_user_xstate() types
git bisect good 1a745816a2d115124ab9a47181041836c1a248f6 # 15:45 24+ 7 x86/xsave.c: Fix xstate offsets and sizes enumeration
# first bad commit: [4d90fc49c73730c09d7afd515f9c4e08d30229bd] x86/xsaves: Define and use user_xstate_size for xstate size in signal context
git bisect good 1a745816a2d115124ab9a47181041836c1a248f6 # 15:47 72+ 9 x86/xsave.c: Fix xstate offsets and sizes enumeration
# extra tests with DEBUG_INFO
git bisect bad 4d90fc49c73730c09d7afd515f9c4e08d30229bd # 15:56 0- 66 x86/xsaves: Define and use user_xstate_size for xstate size in signal context
# extra tests on HEAD of linux-devel/devel-lkp-nex05-rand-201505081410
git bisect bad 095555a8d0b69cc3e00a5675666942efab7821d0 # 15:56 0- 8 0day head guard for 'devel-lkp-nex05-rand-201505081410'
# extra tests on tree/branch hansendc/github-mpx
git bisect bad 50db9eb40ae8e9da7f8b05c78e99418946ba0f7e # 16:41 0- 16 x86/fpu: always restore_xinit_state() when !use_eager_cpu()
# extra tests on tree/branch linus/master
git bisect good af6472881a6127ad075adf64e459d2905fbc8a5c # 16:47 122+ 5 Merge branch 'for-linus-4.1' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs
# extra tests on tree/branch next/master
git bisect good 675b3fb9606dc62afe1542b12f7b2ac3dbf753e5 # 16:50 122+ 8 Add linux-next specific files for 20150508
This script may reproduce the error.
----------------------------------------------------------------------------
#!/bin/bash
kernel=$1
initrd=quantal-core-i386.cgz
wget --no-clobber https://github.com/fengguang/reproduce-kernel-bug/raw/master/initrd/$initrd
kvm=(
qemu-system-x86_64
-enable-kvm
-cpu kvm64
-kernel $kernel
-initrd $initrd
-m 300
-smp 2
-device e1000,netdev=net0
-netdev user,id=net0
-boot order=nc
-no-reboot
-watchdog i6300esb
-rtc base=localtime
-serial stdio
-display none
-monitor null
)
append=(
hung_task_panic=1
earlyprintk=ttyS0,115200
systemd.log_level=err
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
console=ttyS0,115200
console=tty0
vga=normal
root=/dev/ram0
rw
drbd.minor_count=8
)
"${kvm[@]}" --append "${append[*]}"
----------------------------------------------------------------------------
Thanks,
Fengguang
[x86/xsave] WARNING: CPU: 0 PID: 1 at arch/x86/kernel/xsave.c:306 save_xstate_sig()
by Fengguang Wu
Hi Dave,
Just in case this information helps: this patch adds one more warning message.
https://github.com/hansendc/linux.git github-mpx
commit 3701f7533ba43e0aec12bf2dffd49855499fa524
Author: Dave Hansen <dave.hansen@intel.com>
AuthorDate: Thu May 7 16:03:57 2015 -0700
Commit: Dave Hansen <dave.hansen@intel.com>
CommitDate: Thu May 7 16:03:57 2015 -0700
x86/xsaves: Rename xstate_size to kernel_xstate_size to explicitely distinguish xstate size in kernel from user space
User space uses the standard-format xsave area, so the fpstate in the
signal frame should have the standard-format size.
To explicitly distinguish between the xstate size in kernel space and the
one in user space, we rename xstate_size to kernel_xstate_size. This patch
is not fixing a bug; it just makes the kernel code clearer.
So we define the xsave area sizes in two global variables:
kernel_xstate_size (previously xstate_size): the size of the xsave area
allocated in the kernel.
user_xstate_size: the size of the xsave area used by user space.
In the no-"xsaves" case, the xsave areas in both user space and kernel
space are in standard format, so kernel_xstate_size and user_xstate_size
are equal. In the "xsaves" case, the xsave area in user space is in
standard format while the xsave area in kernel space is in compact format,
so the kernel's xstate size is less than the user's xstate size.
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Reviewed-by: Dave Hansen <dave.hansen@intel.com>
The dmesg for the parent commit is attached, too, to help confirm whether this is a noise error.
+-----------------------------------------------------------+------------+------------+------------+
| | 4d90fc49c7 | 3701f7533b | 095555a8d0 |
+-----------------------------------------------------------+------------+------------+------------+
| boot_successes | 31 | 19 | 8 |
| boot_failures | 73 | 10 | 8 |
| Kernel_panic-not_syncing:Attempted_to_kill_init!exitcode= | 69 | 6 | 2 |
| backtrace:do_group_exit | 69 | 6 | 2 |
| backtrace:SyS_exit_group | 69 | 6 | 2 |
| Unexpected_close,not_stopping_watchdog | 2 | 1 | |
| IP-Config:Auto-configuration_of_network_failed | 2 | 2 | 2 |
| WARNING:at_arch/x86/kernel/xsave.c:#save_xstate_sig() | 0 | 7 | 6 |
+-----------------------------------------------------------+------------+------------+------------+
debug traps:
[ 3.025927] random: init urandom read with 27 bits of entropy available
[ 3.044079] hostname (61) used greatest stack depth: 6960 bytes left
[ 3.057396] ------------[ cut here ]------------
[ 3.058044] WARNING: CPU: 0 PID: 1 at arch/x86/kernel/xsave.c:306 save_xstate_sig+0x329/0x340()
[ 3.059684] mismatched xstate sizes
[ 3.060227] Modules linked in:
[ 3.060730] CPU: 0 PID: 1 Comm: init Not tainted 4.1.0-rc2-00026-g3701f75 #3
[ 3.061705] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 3.063165] d0837ea0 d0837ea0 d0837e5c cebd61c3 d0837e90 ce83a654 ced4e668 d0837ebc
[ 3.064519] 00000001 ced52944 00000132 ce80bff9 00000132 ce80bff9 bfade7d0 d082c000
[ 3.065826] bfade840 d0837ea8 ce83a6be 00000009 d0837ea0 ced4e668 d0837ebc d0837ed4
[ 3.067169] Call Trace:
[ 3.067538] [<cebd61c3>] dump_stack+0x16/0x18
[ 3.068184] [<ce83a654>] warn_slowpath_common+0x84/0xc0
[ 3.068886] [<ce80bff9>] ? save_xstate_sig+0x329/0x340
[ 3.076023] [<ce80bff9>] ? save_xstate_sig+0x329/0x340
[ 3.076797] [<ce83a6be>] warn_slowpath_fmt+0x2e/0x30
[ 3.077664] [<ce80bff9>] save_xstate_sig+0x329/0x340
[ 3.078381] [<ce802145>] do_signal+0x785/0xa20
[ 3.079034] [<ce8f6339>] ? vfs_read+0x69/0xf0
[ 3.079711] [<ce8f6c4d>] ? SyS_read+0x4d/0xa0
[ 3.080403] [<ce802418>] do_notify_resume+0x38/0x50
[ 3.081170] [<cebdc0a6>] work_notifysig+0x22/0x28
[ 3.081841] ---[ end trace 60313aabd21503a7 ]---
[ 3.111349] 99-trinity[64]: segfault at 0 ip (null) sp bfba8070 error 14 in bash[8048000+dc000]
git bisect start 095555a8d0b69cc3e00a5675666942efab7821d0 5ebe6afaf0057ac3eaeb98defd5456894b446d22 --
git bisect good 8e2f9f5c8055fd77a6973509ea4375e2028d2ded # 22:28 25+ 2 Merge 'mlin/block-generic-req' into devel-lkp-nex05-rand-201505081410
git bisect bad 0ffe556dae9ae5aa9adc7f5c50909ebcda7e92f6 # 22:44 0- 25 Merge 'hansendc/github-mpx' into devel-lkp-nex05-rand-201505081410
git bisect good d4400bcee070ddc90b8d74af691ee6c747c16b4f # 23:13 25+ 2 x86, mpx: do 32-bit-only cmpxchg for 32-bit apps
git bisect good d542d9b320141a1421b959585f2fc30174ead379 # 23:22 25+ 4 x86, fpu: xsave directly when using compacted buffer format
git bisect bad 3701f7533ba43e0aec12bf2dffd49855499fa524 # 23:26 0- 7 x86/xsaves: Rename xstate_size to kernel_xstate_size to explicitely distinguish xstate size in kernel from user space
git bisect good 1a745816a2d115124ab9a47181041836c1a248f6 # 23:31 25+ 5 x86/xsave.c: Fix xstate offsets and sizes enumeration
git bisect good 4d90fc49c73730c09d7afd515f9c4e08d30229bd # 23:36 25+ 5 x86/xsaves: Define and use user_xstate_size for xstate size in signal context
# first bad commit: [3701f7533ba43e0aec12bf2dffd49855499fa524] x86/xsaves: Rename xstate_size to kernel_xstate_size to explicitely distinguish xstate size in kernel from user space
git bisect good 4d90fc49c73730c09d7afd515f9c4e08d30229bd # 23:38 75+ 73 x86/xsaves: Define and use user_xstate_size for xstate size in signal context
# extra tests with DEBUG_INFO
git bisect bad 3701f7533ba43e0aec12bf2dffd49855499fa524 # 23:59 0- 29 x86/xsaves: Rename xstate_size to kernel_xstate_size to explicitely distinguish xstate size in kernel from user space
# extra tests on HEAD of linux-devel/devel-lkp-nex05-rand-201505081410
git bisect bad 095555a8d0b69cc3e00a5675666942efab7821d0 # 23:59 0- 8 0day head guard for 'devel-lkp-nex05-rand-201505081410'
# extra tests on tree/branch hansendc/github-mpx
git bisect bad 50db9eb40ae8e9da7f8b05c78e99418946ba0f7e # 00:46 64- 13 x86/fpu: always restore_xinit_state() when !use_eager_cpu()
# extra tests with first bad commit reverted
# extra tests on tree/branch linus/master
git bisect good 3e0283a53f7d2f2dae7bc4aa7f3104cb5988018f # 00:50 78+ 5 Merge tag 'pm+acpi-4.1-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
# extra tests on tree/branch next/master
git bisect good 675b3fb9606dc62afe1542b12f7b2ac3dbf753e5 # 01:32 78+ 5 Add linux-next specific files for 20150508
This script may reproduce the error.
----------------------------------------------------------------------------
#!/bin/bash
kernel=$1
initrd=quantal-core-i386.cgz
wget --no-clobber https://github.com/fengguang/reproduce-kernel-bug/raw/master/initrd/$initrd
kvm=(
qemu-system-x86_64
-enable-kvm
-cpu kvm64
-kernel $kernel
-initrd $initrd
-m 300
-smp 2
-device e1000,netdev=net0
-netdev user,id=net0
-boot order=nc
-no-reboot
-watchdog i6300esb
-rtc base=localtime
-serial stdio
-display none
-monitor null
)
append=(
hung_task_panic=1
earlyprintk=ttyS0,115200
systemd.log_level=err
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
console=ttyS0,115200
console=tty0
vga=normal
root=/dev/ram0
rw
drbd.minor_count=8
)
"${kvm[@]}" --append "${append[*]}"
----------------------------------------------------------------------------
Thanks,
Fengguang
[watchdog] 952c8813073: BUG: unable to handle kernel NULL pointer dereference at (null)
by Fengguang Wu
Hi Chris,
FYI, we noticed the below changes on
git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit 952c8813073ed240a610f8b44ecea9bdd60b62b4 ("watchdog: add watchdog_cpumask sysctl to assist nohz")
+------------------------------------------+------------+------------+
| | ef15be37bb | 952c881307 |
+------------------------------------------+------------+------------+
| boot_successes | 10 | 0 |
| boot_failures | 0 | 22 |
| BUG:unable_to_handle_kernel | 0 | 22 |
| Oops | 0 | 22 |
| RIP:find_first_bit | 0 | 22 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 22 |
| backtrace:lockup_detector_init | 0 | 22 |
| backtrace:kernel_init_freeable | 0 | 22 |
+------------------------------------------+------------+------------+
[ 0.073036] smpboot: CPU0: Intel QEMU Virtual CPU version 2.1.2 (fam: 06, model: 06, stepping: 03)
[ 0.075023] Performance Events: Broken PMU hardware detected, using software events only.
[ 0.077002] Failed to access perfctr msr (MSR c1 is 0)
[ 0.079378] BUG: unable to handle kernel NULL pointer dereference at (null)
[ 0.080000] IP: [<ffffffff81432289>] find_first_bit+0x9/0x60
[ 0.080000] PGD 0
[ 0.080000] Oops: 0000 [#1] SMP
[ 0.080000] Modules linked in:
[ 0.080000] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.1.0-rc2-next-20150506 #1
[ 0.080000] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 0.080000] task: ffff88013aba0000 ti: ffff88013aba8000 task.ti: ffff88013aba8000
[ 0.080000] RIP: 0010:[<ffffffff81432289>] [<ffffffff81432289>] find_first_bit+0x9/0x60
[ 0.080000] RSP: 0000:ffff88013ababe68 EFLAGS: 00010202
[ 0.080000] RAX: 00000000ee6b2800 RBX: 0000000000000004 RCX: 0000000000000018
[ 0.080000] RDX: ffffffff821d7294 RSI: 0000000000000004 RDI: 0000000000000000
[ 0.080000] RBP: ffff88013ababe68 R08: 0000000000000005 R09: 000000000000001f
[ 0.080000] R10: 0000000039d63d7d R11: 000000000000001a R12: 0000000000000000
[ 0.080000] R13: 00000000000027e0 R14: 0000000000000000 R15: 0000000000000000
[ 0.080000] FS: 0000000000000000(0000) GS:ffff88013fc00000(0000) knlGS:0000000000000000
[ 0.080000] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 0.080000] CR2: 0000000000000000 CR3: 0000000001cb7000 CR4: 00000000000006f0
[ 0.080000] Stack:
[ 0.080000] ffff88013ababe88 ffffffff81e8fee6 ffffffff821d7294 ffffffff81ff84a0
[ 0.080000] ffff88013ababf38 ffffffff81e690b8 0000000000000001 0000000000000000
[ 0.080000] 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[ 0.080000] Call Trace:
[ 0.080000] [<ffffffff81e8fee6>] lockup_detector_init+0x33/0x79
[ 0.080000] [<ffffffff81e690b8>] kernel_init_freeable+0x128/0x248
[ 0.080000] [<ffffffff818bc4c0>] ? rest_init+0x90/0x90
[ 0.080000] [<ffffffff818bc4ce>] kernel_init+0xe/0xf0
[ 0.080000] [<ffffffff818d1aa2>] ret_from_fork+0x42/0x70
[ 0.080000] [<ffffffff818bc4c0>] ? rest_init+0x90/0x90
[ 0.080000] Code: 66 90 48 83 c4 28 48 89 d8 5b 41 5c 41 5d 41 5e 41 5f 5d c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 55 48 85 f6 48 89 e5 74 4b <48> 8b 0f 48 83 c7 08 ba 40 00 00 00 48 85 c9 74 19 eb 21 0f 1f
[ 0.080000] RIP [<ffffffff81432289>] find_first_bit+0x9/0x60
[ 0.080000] RSP <ffff88013ababe68>
[ 0.080000] CR2: 0000000000000000
[ 0.080000] ---[ end trace afede291f6be7dba ]---
[ 0.080000] Kernel panic - not syncing: Fatal exception
Thanks,
Fengguang Wu
[sched] 2db34be8238: -7.0% aim7.jobs-per-min
by Huang Ying
FYI, we noticed the below changes on
git://bee.sh.intel.com/git/ydu19/linux for-lkp
commit 2db34be8238b94521a8985f66d2520e0492dcfba ("sched: Rewrite per entity runnable load average tracking")
testcase/path_params/tbox_group: aim7/performance-2000-fork_test/brickland3
72cc303b8e634656 2db34be8238b94521a8985f66d
---------------- --------------------------
%stddev %change %stddev
\ | \
24953 ± 1% -7.0% 23199 ± 1% aim7.jobs-per-min
41872 ± 38% +757.8% 359187 ± 13% aim7.time.involuntary_context_switches
228 ± 5% -11.0% 203 ± 3% aim7.time.user_time
90816951 ± 3% -6.9% 84529093 ± 5% aim7.time.voluntary_context_switches
481 ± 1% +7.6% 517 ± 1% aim7.time.elapsed_time
481 ± 1% +7.6% 517 ± 1% aim7.time.elapsed_time.max
751 ± 2% -7.8% 693 ± 1% pmeter.Average_Active_Power
25 ± 6% +237.3% 86 ± 7% vmstat.procs.r
388606 ± 4% -19.6% 312449 ± 6% vmstat.system.cs
41872 ± 38% +757.8% 359187 ± 13% time.involuntary_context_switches
228 ± 5% -11.0% 203 ± 3% time.user_time
275414 ± 11% -41.7% 160498 ± 14% softirqs.HRTIMER
5138484 ± 4% -14.7% 4385384 ± 2% softirqs.RCU
345303 ± 1% -9.9% 311201 ± 0% meminfo.Active
249510 ± 2% -13.4% 215994 ± 0% meminfo.Active(anon)
210600 ± 2% -16.4% 176076 ± 0% meminfo.AnonPages
248769 ± 1% -14.2% 213525 ± 2% meminfo.KernelStack
160700 ± 4% -14.3% 137788 ± 6% meminfo.PageTables
1.537e+08 ± 1% +37.9% 2.119e+08 ± 2% numa-numastat.node0.local_node
1.537e+08 ± 1% +37.9% 2.119e+08 ± 2% numa-numastat.node0.numa_hit
1 ± 24% +400.0% 8 ± 45% numa-numastat.node1.other_node
2 ± 36% +277.8% 8 ± 21% numa-numastat.node2.other_node
1.594e+09 ± 8% +124.3% 3.575e+09 ± 4% cpuidle.C1-IVT-4S.time
2954018 ± 15% +482.6% 17209296 ± 9% cpuidle.C1-IVT-4S.usage
1.34e+09 ± 11% -30.0% 9.384e+08 ± 6% cpuidle.C1E-IVT-4S.time
4.316e+09 ± 6% -53.6% 2.003e+09 ± 6% cpuidle.C3-IVT-4S.time
8127887 ± 11% -23.6% 6208689 ± 3% cpuidle.C3-IVT-4S.usage
48976675 ± 1% -34.8% 31946658 ± 2% cpuidle.C6-IVT-4S.usage
177 ± 6% -55.6% 78 ± 15% cpuidle.POLL.usage
52.89 ± 2% -24.1% 40.14 ± 1% turbostat.CPU%c1
4.65 ± 2% -41.9% 2.70 ± 10% turbostat.CPU%c3
18.38 ± 14% +88.6% 34.67 ± 2% turbostat.CPU%c6
283 ± 3% -13.6% 244 ± 1% turbostat.CorWatt
0.78 ± 9% +2400.0% 19.44 ± 1% turbostat.Pkg%pc2
354 ± 2% -11.2% 315 ± 1% turbostat.PkgWatt
93.83 ± 0% -7.5% 86.84 ± 0% turbostat.RAMWatt
62386 ± 2% -13.4% 54027 ± 0% proc-vmstat.nr_active_anon
52669 ± 2% -16.4% 44028 ± 0% proc-vmstat.nr_anon_pages
202 ± 24% -54.6% 92 ± 21% proc-vmstat.nr_dirtied
15558 ± 1% -14.2% 13356 ± 1% proc-vmstat.nr_kernel_stack
40156 ± 4% -14.2% 34441 ± 6% proc-vmstat.nr_page_table_pages
220 ± 15% -37.8% 136 ± 16% proc-vmstat.nr_written
6411013 ± 1% +42.5% 9133531 ± 2% proc-vmstat.pgalloc_dma32
71679 ± 2% +15.8% 83023 ± 2% numa-meminfo.node0.SUnreclaim
683976 ± 1% +20.6% 824678 ± 2% numa-meminfo.node0.MemUsed
92533 ± 1% +6.3% 98335 ± 4% numa-meminfo.node0.Inactive
803390 ± 10% -25.0% 602675 ± 9% numa-meminfo.node1.MemUsed
70282 ± 25% -42.4% 40515 ± 24% numa-meminfo.node1.KernelStack
8654 ± 44% -82.0% 1555 ± 41% numa-meminfo.node3.Shmem
8023 ± 44% -87.6% 995 ± 35% numa-meminfo.node3.Inactive(anon)
39389 ± 7% +98.6% 78242 ± 2% slabinfo.kmalloc-128.active_objs
39403 ± 7% +99.5% 78610 ± 3% slabinfo.kmalloc-128.num_objs
615 ± 7% +99.6% 1227 ± 3% slabinfo.kmalloc-128.active_slabs
615 ± 7% +99.6% 1227 ± 3% slabinfo.kmalloc-128.num_slabs
193739 ± 0% +10.6% 214199 ± 0% slabinfo.kmalloc-64.num_objs
190791 ± 0% +11.1% 212045 ± 0% slabinfo.kmalloc-64.active_objs
3026 ± 0% +10.6% 3346 ± 0% slabinfo.kmalloc-64.num_slabs
3026 ± 0% +10.6% 3346 ± 0% slabinfo.kmalloc-64.active_slabs
8593 ± 0% +10.5% 9491 ± 0% slabinfo.mm_struct.num_objs
8454 ± 0% +10.1% 9308 ± 0% slabinfo.mm_struct.active_objs
75192762 ± 1% +36.5% 1.026e+08 ± 3% numa-vmstat.node0.numa_hit
17922 ± 2% +15.8% 20759 ± 2% numa-vmstat.node0.nr_slab_unreclaimable
75161178 ± 1% +36.5% 1.026e+08 ± 3% numa-vmstat.node0.numa_local
4394 ± 25% -42.2% 2540 ± 25% numa-vmstat.node1.nr_kernel_stack
52426 ± 0% +59.3% 83495 ± 0% numa-vmstat.node1.numa_other
83295 ± 0% -36.7% 52715 ± 0% numa-vmstat.node2.numa_other
2005 ± 44% -87.6% 248 ± 35% numa-vmstat.node3.nr_inactive_anon
2163 ± 44% -82.0% 388 ± 41% numa-vmstat.node3.nr_shmem
1257390 ± 2% -99.8% 2133 ± 4% sched_debug.cfs_rq[0]:/.tg_load_avg
2115 ± 10% -100.0% 0 ± 0% sched_debug.cfs_rq[0]:/.blocked_load_avg
28699924 ± 5% -65.4% 9932690 ± 4% sched_debug.cfs_rq[0]:/.min_vruntime
51 ± 18% -100.0% 0 ± 0% sched_debug.cfs_rq[0]:/.utilization_load_avg
64335 ± 5% +39.5% 89744 ± 3% sched_debug.cfs_rq[0]:/.exec_clock
14 ± 37% -76.8% 3 ± 33% sched_debug.cfs_rq[0]:/.nr_spread_over
2132 ± 10% -100.0% 0 ± 0% sched_debug.cfs_rq[0]:/.tg_load_contrib
1297304 ± 2% -99.8% 1986 ± 2% sched_debug.cfs_rq[100]:/.tg_load_avg
99 ± 23% -100.0% 0 ± 0% sched_debug.cfs_rq[100]:/.utilization_load_avg
13597 ± 14% -100.0% 0 ± 0% sched_debug.cfs_rq[100]:/.tg_load_contrib
16843099 ± 8% -79.6% 3442369 ± 23% sched_debug.cfs_rq[100]:/.min_vruntime
13538 ± 14% -100.0% 0 ± 0% sched_debug.cfs_rq[100]:/.blocked_load_avg
13109 ± 14% -100.0% 0 ± 0% sched_debug.cfs_rq[101]:/.blocked_load_avg
122 ± 25% -100.0% 0 ± 0% sched_debug.cfs_rq[101]:/.utilization_load_avg
13193 ± 13% -100.0% 0 ± 0% sched_debug.cfs_rq[101]:/.tg_load_contrib
16907143 ± 7% -79.7% 3428806 ± 23% sched_debug.cfs_rq[101]:/.min_vruntime
1297648 ± 2% -99.8% 1986 ± 2% sched_debug.cfs_rq[101]:/.tg_load_avg
17037690 ± 8% -79.8% 3434272 ± 23% sched_debug.cfs_rq[102]:/.min_vruntime
14156 ± 14% -100.0% 0 ± 0% sched_debug.cfs_rq[102]:/.blocked_load_avg
121 ± 44% -100.0% 0 ± 0% sched_debug.cfs_rq[102]:/.utilization_load_avg
14250 ± 14% -100.0% 0 ± 0% sched_debug.cfs_rq[102]:/.tg_load_contrib
1297593 ± 2% -99.8% 1989 ± 3% sched_debug.cfs_rq[102]:/.tg_load_avg
14010 ± 15% -100.0% 0 ± 0% sched_debug.cfs_rq[103]:/.blocked_load_avg
14077 ± 15% -100.0% 0 ± 0% sched_debug.cfs_rq[103]:/.tg_load_contrib
84 ± 17% -100.0% 0 ± 0% sched_debug.cfs_rq[103]:/.utilization_load_avg
1297902 ± 2% -99.8% 1986 ± 3% sched_debug.cfs_rq[103]:/.tg_load_avg
17308420 ± 8% -80.2% 3425797 ± 23% sched_debug.cfs_rq[103]:/.min_vruntime
14535 ± 15% -100.0% 0 ± 0% sched_debug.cfs_rq[104]:/.blocked_load_avg
1297991 ± 2% -99.8% 1985 ± 3% sched_debug.cfs_rq[104]:/.tg_load_avg
17620551 ± 8% -80.6% 3410019 ± 23% sched_debug.cfs_rq[104]:/.min_vruntime
133 ± 20% -100.0% 0 ± 0% sched_debug.cfs_rq[104]:/.utilization_load_avg
14604 ± 16% -100.0% 0 ± 0% sched_debug.cfs_rq[104]:/.tg_load_contrib
1297262 ± 2% -99.8% 1981 ± 3% sched_debug.cfs_rq[105]:/.tg_load_avg
1 ± 33% +700.0% 12 ± 5% sched_debug.cfs_rq[105]:/.runnable_load_avg
15157 ± 11% -100.0% 0 ± 0% sched_debug.cfs_rq[105]:/.tg_load_contrib
115 ± 27% -100.0% 0 ± 0% sched_debug.cfs_rq[105]:/.utilization_load_avg
15063 ± 11% -100.0% 0 ± 0% sched_debug.cfs_rq[105]:/.blocked_load_avg
18220360 ± 2% -85.8% 2588613 ± 32% sched_debug.cfs_rq[105]:/.min_vruntime
17733781 ± 3% -85.4% 2583251 ± 32% sched_debug.cfs_rq[106]:/.min_vruntime
14034 ± 5% -100.0% 0 ± 0% sched_debug.cfs_rq[106]:/.tg_load_contrib
149 ± 46% -100.0% 0 ± 0% sched_debug.cfs_rq[106]:/.utilization_load_avg
1297519 ± 2% -99.8% 1980 ± 3% sched_debug.cfs_rq[106]:/.tg_load_avg
13919 ± 6% -100.0% 0 ± 0% sched_debug.cfs_rq[106]:/.blocked_load_avg
1297805 ± 2% -99.8% 1975 ± 3% sched_debug.cfs_rq[107]:/.tg_load_avg
163 ± 34% -100.0% 0 ± 0% sched_debug.cfs_rq[107]:/.utilization_load_avg
13335 ± 7% -100.0% 0 ± 0% sched_debug.cfs_rq[107]:/.tg_load_contrib
13224 ± 7% -100.0% 0 ± 0% sched_debug.cfs_rq[107]:/.blocked_load_avg
17429448 ± 3% -85.0% 2615721 ± 32% sched_debug.cfs_rq[107]:/.min_vruntime
13087 ± 6% -100.0% 0 ± 0% sched_debug.cfs_rq[108]:/.blocked_load_avg
1298273 ± 2% -99.8% 1973 ± 3% sched_debug.cfs_rq[108]:/.tg_load_avg
17139485 ± 3% -84.9% 2593242 ± 32% sched_debug.cfs_rq[108]:/.min_vruntime
13181 ± 6% -100.0% 0 ± 0% sched_debug.cfs_rq[108]:/.tg_load_contrib
126 ± 26% -100.0% 0 ± 0% sched_debug.cfs_rq[108]:/.utilization_load_avg
13576 ± 5% -100.0% 0 ± 0% sched_debug.cfs_rq[109]:/.tg_load_contrib
17013902 ± 2% -84.7% 2602111 ± 32% sched_debug.cfs_rq[109]:/.min_vruntime
13494 ± 6% -100.0% 0 ± 0% sched_debug.cfs_rq[109]:/.blocked_load_avg
1296843 ± 2% -99.8% 1970 ± 3% sched_debug.cfs_rq[109]:/.tg_load_avg
134 ± 46% -100.0% 0 ± 0% sched_debug.cfs_rq[109]:/.utilization_load_avg
53 ± 42% -100.0% 0 ± 0% sched_debug.cfs_rq[10]:/.utilization_load_avg
63989 ± 4% +36.6% 87389 ± 2% sched_debug.cfs_rq[10]:/.exec_clock
30520609 ± 4% -67.4% 9936406 ± 4% sched_debug.cfs_rq[10]:/.min_vruntime
6828 ± 6% -100.0% 0 ± 0% sched_debug.cfs_rq[10]:/.tg_load_contrib
1263785 ± 2% -99.8% 2110 ± 5% sched_debug.cfs_rq[10]:/.tg_load_avg
6726 ± 6% -100.0% 0 ± 0% sched_debug.cfs_rq[10]:/.blocked_load_avg
12571 ± 8% -100.0% 0 ± 0% sched_debug.cfs_rq[110]:/.tg_load_contrib
89 ± 37% -100.0% 0 ± 0% sched_debug.cfs_rq[110]:/.utilization_load_avg
12509 ± 8% -100.0% 0 ± 0% sched_debug.cfs_rq[110]:/.blocked_load_avg
16981688 ± 2% -84.6% 2622647 ± 33% sched_debug.cfs_rq[110]:/.min_vruntime
1298970 ± 2% -99.8% 1972 ± 3% sched_debug.cfs_rq[110]:/.tg_load_avg
13166 ± 10% -100.0% 0 ± 0% sched_debug.cfs_rq[111]:/.tg_load_contrib
16855401 ± 2% -84.5% 2607113 ± 33% sched_debug.cfs_rq[111]:/.min_vruntime
13072 ± 10% -100.0% 0 ± 0% sched_debug.cfs_rq[111]:/.blocked_load_avg
1298512 ± 2% -99.8% 1974 ± 3% sched_debug.cfs_rq[111]:/.tg_load_avg
100 ± 49% -100.0% 0 ± 0% sched_debug.cfs_rq[111]:/.utilization_load_avg
110 ± 20% -100.0% 0 ± 0% sched_debug.cfs_rq[112]:/.utilization_load_avg
12700 ± 8% -100.0% 0 ± 0% sched_debug.cfs_rq[112]:/.blocked_load_avg
16786802 ± 2% -84.4% 2620052 ± 33% sched_debug.cfs_rq[112]:/.min_vruntime
12786 ± 8% -100.0% 0 ± 0% sched_debug.cfs_rq[112]:/.tg_load_contrib
1299098 ± 2% -99.8% 1974 ± 3% sched_debug.cfs_rq[112]:/.tg_load_avg
12233 ± 8% -100.0% 0 ± 0% sched_debug.cfs_rq[113]:/.tg_load_contrib
12173 ± 8% -100.0% 0 ± 0% sched_debug.cfs_rq[113]:/.blocked_load_avg
101 ± 24% -100.0% 0 ± 0% sched_debug.cfs_rq[113]:/.utilization_load_avg
16747527 ± 2% -84.3% 2621005 ± 32% sched_debug.cfs_rq[113]:/.min_vruntime
1298913 ± 2% -99.8% 1971 ± 3% sched_debug.cfs_rq[113]:/.tg_load_avg
16710632 ± 2% -84.5% 2597564 ± 33% sched_debug.cfs_rq[114]:/.min_vruntime
13379 ± 7% -100.0% 0 ± 0% sched_debug.cfs_rq[114]:/.tg_load_contrib
13324 ± 7% -100.0% 0 ± 0% sched_debug.cfs_rq[114]:/.blocked_load_avg
1298550 ± 2% -99.8% 1965 ± 3% sched_debug.cfs_rq[114]:/.tg_load_avg
1298650 ± 2% -99.8% 1966 ± 3% sched_debug.cfs_rq[115]:/.tg_load_avg
16857312 ± 2% -84.6% 2589869 ± 32% sched_debug.cfs_rq[115]:/.min_vruntime
12773 ± 4% -100.0% 0 ± 0% sched_debug.cfs_rq[115]:/.tg_load_contrib
12701 ± 4% -100.0% 0 ± 0% sched_debug.cfs_rq[115]:/.blocked_load_avg
168 ± 36% -100.0% 0 ± 0% sched_debug.cfs_rq[116]:/.utilization_load_avg
12513 ± 12% -100.0% 0 ± 0% sched_debug.cfs_rq[116]:/.tg_load_contrib
16942146 ± 2% -84.6% 2612577 ± 32% sched_debug.cfs_rq[116]:/.min_vruntime
12422 ± 13% -100.0% 0 ± 0% sched_debug.cfs_rq[116]:/.blocked_load_avg
1296968 ± 2% -99.8% 1965 ± 3% sched_debug.cfs_rq[116]:/.tg_load_avg
13522 ± 9% -100.0% 0 ± 0% sched_debug.cfs_rq[117]:/.tg_load_contrib
13453 ± 9% -100.0% 0 ± 0% sched_debug.cfs_rq[117]:/.blocked_load_avg
17062474 ± 2% -84.8% 2588672 ± 33% sched_debug.cfs_rq[117]:/.min_vruntime
127 ± 35% -100.0% 0 ± 0% sched_debug.cfs_rq[117]:/.utilization_load_avg
1296448 ± 2% -99.8% 1964 ± 3% sched_debug.cfs_rq[117]:/.tg_load_avg
12634 ± 9% -100.0% 0 ± 0% sched_debug.cfs_rq[118]:/.blocked_load_avg
17268018 ± 2% -85.1% 2579101 ± 32% sched_debug.cfs_rq[118]:/.min_vruntime
95 ± 49% -100.0% 0 ± 0% sched_debug.cfs_rq[118]:/.utilization_load_avg
1296574 ± 2% -99.8% 1964 ± 3% sched_debug.cfs_rq[118]:/.tg_load_avg
12681 ± 9% -100.0% 0 ± 0% sched_debug.cfs_rq[118]:/.tg_load_contrib
17625296 ± 2% -85.4% 2574849 ± 32% sched_debug.cfs_rq[119]:/.min_vruntime
12767 ± 7% -100.0% 0 ± 0% sched_debug.cfs_rq[119]:/.blocked_load_avg
127 ± 30% -100.0% 0 ± 0% sched_debug.cfs_rq[119]:/.utilization_load_avg
12836 ± 7% -100.0% 0 ± 0% sched_debug.cfs_rq[119]:/.tg_load_contrib
1296763 ± 2% -99.8% 1966 ± 3% sched_debug.cfs_rq[119]:/.tg_load_avg
1 ± 0% +1250.0% 13 ± 8% sched_debug.cfs_rq[11]:/.runnable_load_avg
1264432 ± 2% -99.8% 2108 ± 5% sched_debug.cfs_rq[11]:/.tg_load_avg
53 ± 34% -100.0% 0 ± 0% sched_debug.cfs_rq[11]:/.utilization_load_avg
30345143 ± 4% -67.3% 9918910 ± 5% sched_debug.cfs_rq[11]:/.min_vruntime
63568 ± 4% +37.4% 87364 ± 3% sched_debug.cfs_rq[11]:/.exec_clock
7120 ± 5% -100.0% 0 ± 0% sched_debug.cfs_rq[11]:/.tg_load_contrib
7029 ± 5% -100.0% 0 ± 0% sched_debug.cfs_rq[11]:/.blocked_load_avg
63197 ± 4% +38.2% 87315 ± 3% sched_debug.cfs_rq[12]:/.exec_clock
48 ± 19% -100.0% 0 ± 0% sched_debug.cfs_rq[12]:/.utilization_load_avg
6997 ± 5% -100.0% 0 ± 0% sched_debug.cfs_rq[12]:/.blocked_load_avg
30157769 ± 4% -67.1% 9923363 ± 5% sched_debug.cfs_rq[12]:/.min_vruntime
1263051 ± 2% -99.8% 2106 ± 5% sched_debug.cfs_rq[12]:/.tg_load_avg
7084 ± 6% -100.0% 0 ± 0% sched_debug.cfs_rq[12]:/.tg_load_contrib
1717292 ± 22% -64.2% 615535 ± 25% sched_debug.cfs_rq[13]:/.max_vruntime
1717292 ± 22% -64.2% 615535 ± 25% sched_debug.cfs_rq[13]:/.MIN_vruntime
7336 ± 3% -100.0% 0 ± 0% sched_debug.cfs_rq[13]:/.tg_load_contrib
7243 ± 3% -100.0% 0 ± 0% sched_debug.cfs_rq[13]:/.blocked_load_avg
30048314 ± 4% -67.0% 9913403 ± 4% sched_debug.cfs_rq[13]:/.min_vruntime
1264142 ± 2% -99.8% 2100 ± 5% sched_debug.cfs_rq[13]:/.tg_load_avg
63053 ± 4% +38.9% 87610 ± 3% sched_debug.cfs_rq[13]:/.exec_clock
6776 ± 7% -100.0% 0 ± 0% sched_debug.cfs_rq[14]:/.tg_load_contrib
62893 ± 4% +39.2% 87531 ± 3% sched_debug.cfs_rq[14]:/.exec_clock
60 ± 16% -100.0% 0 ± 0% sched_debug.cfs_rq[14]:/.utilization_load_avg
29921296 ± 4% -66.9% 9894201 ± 4% sched_debug.cfs_rq[14]:/.min_vruntime
1262418 ± 2% -99.8% 2096 ± 5% sched_debug.cfs_rq[14]:/.tg_load_avg
6678 ± 7% -100.0% 0 ± 0% sched_debug.cfs_rq[14]:/.blocked_load_avg
1264287 ± 2% -99.8% 2092 ± 5% sched_debug.cfs_rq[15]:/.tg_load_avg
8277 ± 13% -100.0% 0 ± 0% sched_debug.cfs_rq[15]:/.tg_load_contrib
8162 ± 13% -100.0% 0 ± 0% sched_debug.cfs_rq[15]:/.blocked_load_avg
91 ± 31% -100.0% 0 ± 0% sched_debug.cfs_rq[16]:/.utilization_load_avg
1264502 ± 2% -99.8% 2090 ± 5% sched_debug.cfs_rq[16]:/.tg_load_avg
7647 ± 6% -100.0% 0 ± 0% sched_debug.cfs_rq[16]:/.blocked_load_avg
7737 ± 6% -100.0% 0 ± 0% sched_debug.cfs_rq[16]:/.tg_load_contrib
1 ± 33% +1066.7% 17 ± 14% sched_debug.cfs_rq[17]:/.runnable_load_avg
7953 ± 17% -100.0% 0 ± 0% sched_debug.cfs_rq[17]:/.blocked_load_avg
8045 ± 17% -100.0% 0 ± 0% sched_debug.cfs_rq[17]:/.tg_load_contrib
1266771 ± 2% -99.8% 2087 ± 5% sched_debug.cfs_rq[17]:/.tg_load_avg
8027 ± 13% -100.0% 0 ± 0% sched_debug.cfs_rq[18]:/.blocked_load_avg
69 ± 25% -100.0% 0 ± 0% sched_debug.cfs_rq[18]:/.utilization_load_avg
8149 ± 13% -100.0% 0 ± 0% sched_debug.cfs_rq[18]:/.tg_load_contrib
1266363 ± 2% -99.8% 2081 ± 5% sched_debug.cfs_rq[18]:/.tg_load_avg
7689 ± 9% -100.0% 0 ± 0% sched_debug.cfs_rq[19]:/.blocked_load_avg
1268542 ± 2% -99.8% 2076 ± 5% sched_debug.cfs_rq[19]:/.tg_load_avg
7821 ± 9% -100.0% 0 ± 0% sched_debug.cfs_rq[19]:/.tg_load_contrib
65966 ± 4% +32.2% 87214 ± 3% sched_debug.cfs_rq[1]:/.exec_clock
31421923 ± 4% -68.6% 9853570 ± 5% sched_debug.cfs_rq[1]:/.min_vruntime
8521 ± 9% -100.0% 0 ± 0% sched_debug.cfs_rq[1]:/.blocked_load_avg
58 ± 40% -100.0% 0 ± 0% sched_debug.cfs_rq[1]:/.utilization_load_avg
8629 ± 9% -100.0% 0 ± 0% sched_debug.cfs_rq[1]:/.tg_load_contrib
1 ± 34% +940.0% 13 ± 7% sched_debug.cfs_rq[1]:/.runnable_load_avg
1259904 ± 2% -99.8% 2129 ± 5% sched_debug.cfs_rq[1]:/.tg_load_avg
7903 ± 11% -100.0% 0 ± 0% sched_debug.cfs_rq[20]:/.tg_load_contrib
7788 ± 11% -100.0% 0 ± 0% sched_debug.cfs_rq[20]:/.blocked_load_avg
1 ± 0% +1650.0% 17 ± 18% sched_debug.cfs_rq[20]:/.runnable_load_avg
1268618 ± 2% -99.8% 2075 ± 5% sched_debug.cfs_rq[20]:/.tg_load_avg
70 ± 33% -100.0% 0 ± 0% sched_debug.cfs_rq[20]:/.utilization_load_avg
7483 ± 11% -100.0% 0 ± 0% sched_debug.cfs_rq[21]:/.blocked_load_avg
1267874 ± 2% -99.8% 2072 ± 5% sched_debug.cfs_rq[21]:/.tg_load_avg
7603 ± 11% -100.0% 0 ± 0% sched_debug.cfs_rq[21]:/.tg_load_contrib
7412 ± 14% -100.0% 0 ± 0% sched_debug.cfs_rq[22]:/.blocked_load_avg
1269455 ± 2% -99.8% 2067 ± 5% sched_debug.cfs_rq[22]:/.tg_load_avg
7486 ± 14% -100.0% 0 ± 0% sched_debug.cfs_rq[22]:/.tg_load_contrib
7603 ± 11% -100.0% 0 ± 0% sched_debug.cfs_rq[23]:/.tg_load_contrib
1268827 ± 2% -99.8% 2062 ± 5% sched_debug.cfs_rq[23]:/.tg_load_avg
7506 ± 11% -100.0% 0 ± 0% sched_debug.cfs_rq[23]:/.blocked_load_avg
38 ± 47% -100.0% 0 ± 0% sched_debug.cfs_rq[23]:/.utilization_load_avg
7408 ± 10% -100.0% 0 ± 0% sched_debug.cfs_rq[24]:/.blocked_load_avg
1 ± 34% +1420.0% 19 ± 18% sched_debug.cfs_rq[24]:/.runnable_load_avg
1272125 ± 2% -99.8% 2060 ± 5% sched_debug.cfs_rq[24]:/.tg_load_avg
7527 ± 10% -100.0% 0 ± 0% sched_debug.cfs_rq[24]:/.tg_load_contrib
8034 ± 6% -100.0% 0 ± 0% sched_debug.cfs_rq[25]:/.blocked_load_avg
8175 ± 6% -100.0% 0 ± 0% sched_debug.cfs_rq[25]:/.tg_load_contrib
32 ± 30% -100.0% 0 ± 0% sched_debug.cfs_rq[25]:/.utilization_load_avg
1 ± 0% +1725.0% 18 ± 18% sched_debug.cfs_rq[25]:/.runnable_load_avg
1273513 ± 2% -99.8% 2059 ± 5% sched_debug.cfs_rq[25]:/.tg_load_avg
1271660 ± 2% -99.8% 2055 ± 5% sched_debug.cfs_rq[26]:/.tg_load_avg
7386 ± 16% -100.0% 0 ± 0% sched_debug.cfs_rq[26]:/.blocked_load_avg
7496 ± 17% -100.0% 0 ± 0% sched_debug.cfs_rq[26]:/.tg_load_contrib
8025 ± 13% -100.0% 0 ± 0% sched_debug.cfs_rq[27]:/.tg_load_contrib
1270142 ± 2% -99.8% 2048 ± 5% sched_debug.cfs_rq[27]:/.tg_load_avg
7899 ± 13% -100.0% 0 ± 0% sched_debug.cfs_rq[27]:/.blocked_load_avg
7912 ± 7% -100.0% 0 ± 0% sched_debug.cfs_rq[28]:/.blocked_load_avg
96 ± 41% -100.0% 0 ± 0% sched_debug.cfs_rq[28]:/.utilization_load_avg
8021 ± 7% -100.0% 0 ± 0% sched_debug.cfs_rq[28]:/.tg_load_contrib
1272513 ± 2% -99.8% 2047 ± 5% sched_debug.cfs_rq[28]:/.tg_load_avg
7453 ± 9% -100.0% 0 ± 0% sched_debug.cfs_rq[29]:/.blocked_load_avg
7567 ± 9% -100.0% 0 ± 0% sched_debug.cfs_rq[29]:/.tg_load_contrib
1273454 ± 1% -99.8% 2045 ± 5% sched_debug.cfs_rq[29]:/.tg_load_avg
68132 ± 3% +29.9% 88514 ± 4% sched_debug.cfs_rq[2]:/.exec_clock
32561605 ± 3% -69.6% 9886080 ± 4% sched_debug.cfs_rq[2]:/.min_vruntime
7200 ± 5% -100.0% 0 ± 0% sched_debug.cfs_rq[2]:/.blocked_load_avg
1255791 ± 2% -99.8% 2129 ± 5% sched_debug.cfs_rq[2]:/.tg_load_avg
7297 ± 5% -100.0% 0 ± 0% sched_debug.cfs_rq[2]:/.tg_load_contrib
1 ± 0% +1325.0% 14 ± 12% sched_debug.cfs_rq[30]:/.runnable_load_avg
1274303 ± 1% -99.8% 2043 ± 5% sched_debug.cfs_rq[30]:/.tg_load_avg
60 ± 44% -100.0% 0 ± 0% sched_debug.cfs_rq[30]:/.utilization_load_avg
8222 ± 10% -100.0% 0 ± 0% sched_debug.cfs_rq[30]:/.blocked_load_avg
8350 ± 10% -100.0% 0 ± 0% sched_debug.cfs_rq[30]:/.tg_load_contrib
32370033 ± 6% -74.5% 8241237 ± 15% sched_debug.cfs_rq[30]:/.min_vruntime
1276736 ± 1% -99.8% 2041 ± 5% sched_debug.cfs_rq[31]:/.tg_load_avg
7796 ± 9% -100.0% 0 ± 0% sched_debug.cfs_rq[31]:/.tg_load_contrib
32444271 ± 6% -74.9% 8133972 ± 14% sched_debug.cfs_rq[31]:/.min_vruntime
7675 ± 9% -100.0% 0 ± 0% sched_debug.cfs_rq[31]:/.blocked_load_avg
49 ± 22% -100.0% 0 ± 0% sched_debug.cfs_rq[31]:/.utilization_load_avg
32025935 ± 7% -74.5% 8152920 ± 15% sched_debug.cfs_rq[32]:/.min_vruntime
7761 ± 10% -100.0% 0 ± 0% sched_debug.cfs_rq[32]:/.tg_load_contrib
7629 ± 10% -100.0% 0 ± 0% sched_debug.cfs_rq[32]:/.blocked_load_avg
1276111 ± 1% -99.8% 2034 ± 5% sched_debug.cfs_rq[32]:/.tg_load_avg
1276774 ± 1% -99.8% 2031 ± 5% sched_debug.cfs_rq[33]:/.tg_load_avg
31628083 ± 7% -74.2% 8144756 ± 15% sched_debug.cfs_rq[33]:/.min_vruntime
7696 ± 6% -100.0% 0 ± 0% sched_debug.cfs_rq[33]:/.blocked_load_avg
7826 ± 6% -100.0% 0 ± 0% sched_debug.cfs_rq[33]:/.tg_load_contrib
31244748 ± 7% -73.9% 8143865 ± 15% sched_debug.cfs_rq[34]:/.min_vruntime
7539 ± 13% -100.0% 0 ± 0% sched_debug.cfs_rq[34]:/.tg_load_contrib
45 ± 34% -100.0% 0 ± 0% sched_debug.cfs_rq[34]:/.utilization_load_avg
7436 ± 13% -100.0% 0 ± 0% sched_debug.cfs_rq[34]:/.blocked_load_avg
1277729 ± 1% -99.8% 2028 ± 5% sched_debug.cfs_rq[34]:/.tg_load_avg
1 ± 0% +1400.0% 15 ± 12% sched_debug.cfs_rq[34]:/.runnable_load_avg
59 ± 30% -100.0% 0 ± 0% sched_debug.cfs_rq[35]:/.utilization_load_avg
6769 ± 6% -100.0% 0 ± 0% sched_debug.cfs_rq[35]:/.blocked_load_avg
6867 ± 6% -100.0% 0 ± 0% sched_debug.cfs_rq[35]:/.tg_load_contrib
30835233 ± 8% -73.3% 8237780 ± 15% sched_debug.cfs_rq[35]:/.min_vruntime
1276250 ± 1% -99.8% 2023 ± 5% sched_debug.cfs_rq[35]:/.tg_load_avg
1276527 ± 1% -99.8% 2019 ± 5% sched_debug.cfs_rq[36]:/.tg_load_avg
7790 ± 15% -100.0% 0 ± 0% sched_debug.cfs_rq[36]:/.blocked_load_avg
91 ± 22% -100.0% 0 ± 0% sched_debug.cfs_rq[36]:/.utilization_load_avg
31206999 ± 6% -73.7% 8216155 ± 15% sched_debug.cfs_rq[36]:/.min_vruntime
7898 ± 15% -100.0% 0 ± 0% sched_debug.cfs_rq[36]:/.tg_load_contrib
1276979 ± 1% -99.8% 2018 ± 5% sched_debug.cfs_rq[37]:/.tg_load_avg
7495 ± 5% -100.0% 0 ± 0% sched_debug.cfs_rq[37]:/.tg_load_contrib
31284022 ± 6% -73.7% 8225683 ± 15% sched_debug.cfs_rq[37]:/.min_vruntime
7376 ± 5% -100.0% 0 ± 0% sched_debug.cfs_rq[37]:/.blocked_load_avg
1 ± 0% +1350.0% 14 ± 14% sched_debug.cfs_rq[38]:/.runnable_load_avg
7167 ± 3% -100.0% 0 ± 0% sched_debug.cfs_rq[38]:/.blocked_load_avg
60 ± 28% -100.0% 0 ± 0% sched_debug.cfs_rq[38]:/.utilization_load_avg
31287625 ± 5% -73.9% 8159876 ± 15% sched_debug.cfs_rq[38]:/.min_vruntime
7274 ± 4% -100.0% 0 ± 0% sched_debug.cfs_rq[38]:/.tg_load_contrib
1275869 ± 1% -99.8% 2017 ± 5% sched_debug.cfs_rq[38]:/.tg_load_avg
7581 ± 12% -100.0% 0 ± 0% sched_debug.cfs_rq[39]:/.blocked_load_avg
1036555 ± 32% -68.3% 328720 ± 48% sched_debug.cfs_rq[39]:/.MIN_vruntime
1036555 ± 32% -68.3% 328720 ± 48% sched_debug.cfs_rq[39]:/.max_vruntime
63 ± 33% -100.0% 0 ± 0% sched_debug.cfs_rq[39]:/.utilization_load_avg
7709 ± 12% -100.0% 0 ± 0% sched_debug.cfs_rq[39]:/.tg_load_contrib
1277209 ± 1% -99.8% 2012 ± 6% sched_debug.cfs_rq[39]:/.tg_load_avg
31179610 ± 5% -73.7% 8186405 ± 15% sched_debug.cfs_rq[39]:/.min_vruntime
67498 ± 4% +34.3% 90623 ± 2% sched_debug.cfs_rq[3]:/.exec_clock
7223 ± 6% -100.0% 0 ± 0% sched_debug.cfs_rq[3]:/.tg_load_contrib
32380079 ± 3% -69.4% 9900194 ± 5% sched_debug.cfs_rq[3]:/.min_vruntime
7110 ± 6% -100.0% 0 ± 0% sched_debug.cfs_rq[3]:/.blocked_load_avg
1256951 ± 2% -99.8% 2126 ± 5% sched_debug.cfs_rq[3]:/.tg_load_avg
1278998 ± 1% -99.8% 2008 ± 6% sched_debug.cfs_rq[40]:/.tg_load_avg
7455 ± 16% -100.0% 0 ± 0% sched_debug.cfs_rq[40]:/.blocked_load_avg
39 ± 43% -100.0% 0 ± 0% sched_debug.cfs_rq[40]:/.utilization_load_avg
7588 ± 17% -100.0% 0 ± 0% sched_debug.cfs_rq[40]:/.tg_load_contrib
31181443 ± 5% -73.8% 8166086 ± 15% sched_debug.cfs_rq[40]:/.min_vruntime
8071 ± 13% -100.0% 0 ± 0% sched_debug.cfs_rq[41]:/.tg_load_contrib
48 ± 48% -100.0% 0 ± 0% sched_debug.cfs_rq[41]:/.utilization_load_avg
31219849 ± 6% -74.0% 8128936 ± 15% sched_debug.cfs_rq[41]:/.min_vruntime
7959 ± 13% -100.0% 0 ± 0% sched_debug.cfs_rq[41]:/.blocked_load_avg
1279508 ± 1% -99.8% 2006 ± 6% sched_debug.cfs_rq[41]:/.tg_load_avg
0 ± 0% +Inf% 2 ± 44% sched_debug.cfs_rq[41]:/.load
1278063 ± 1% -99.8% 2003 ± 6% sched_debug.cfs_rq[42]:/.tg_load_avg
31280500 ± 6% -74.0% 8145420 ± 15% sched_debug.cfs_rq[42]:/.min_vruntime
7558 ± 15% -100.0% 0 ± 0% sched_debug.cfs_rq[42]:/.tg_load_contrib
7474 ± 15% -100.0% 0 ± 0% sched_debug.cfs_rq[42]:/.blocked_load_avg
1 ± 0% +1375.0% 14 ± 10% sched_debug.cfs_rq[43]:/.runnable_load_avg
1279939 ± 1% -99.8% 2004 ± 6% sched_debug.cfs_rq[43]:/.tg_load_avg
38 ± 46% -100.0% 0 ± 0% sched_debug.cfs_rq[43]:/.utilization_load_avg
7496 ± 13% -100.0% 0 ± 0% sched_debug.cfs_rq[43]:/.blocked_load_avg
31380609 ± 6% -74.0% 8147480 ± 15% sched_debug.cfs_rq[43]:/.min_vruntime
7601 ± 12% -100.0% 0 ± 0% sched_debug.cfs_rq[43]:/.tg_load_contrib
1279763 ± 1% -99.8% 2000 ± 6% sched_debug.cfs_rq[44]:/.tg_load_avg
31657973 ± 6% -74.3% 8141050 ± 15% sched_debug.cfs_rq[44]:/.min_vruntime
8355 ± 10% -100.0% 0 ± 0% sched_debug.cfs_rq[44]:/.tg_load_contrib
8250 ± 10% -100.0% 0 ± 0% sched_debug.cfs_rq[44]:/.blocked_load_avg
8589 ± 10% -100.0% 0 ± 0% sched_debug.cfs_rq[45]:/.tg_load_contrib
68 ± 45% -100.0% 0 ± 0% sched_debug.cfs_rq[45]:/.utilization_load_avg
1279921 ± 1% -99.8% 1999 ± 6% sched_debug.cfs_rq[45]:/.tg_load_avg
32303098 ± 2% -81.2% 6060450 ± 27% sched_debug.cfs_rq[45]:/.min_vruntime
8480 ± 10% -100.0% 0 ± 0% sched_debug.cfs_rq[45]:/.blocked_load_avg
32574894 ± 2% -81.5% 6025932 ± 28% sched_debug.cfs_rq[46]:/.min_vruntime
1282275 ± 1% -99.8% 2001 ± 6% sched_debug.cfs_rq[46]:/.tg_load_avg
7869 ± 5% -100.0% 0 ± 0% sched_debug.cfs_rq[46]:/.blocked_load_avg
7995 ± 5% -100.0% 0 ± 0% sched_debug.cfs_rq[46]:/.tg_load_contrib
1 ± 0% +1375.0% 14 ± 5% sched_debug.cfs_rq[46]:/.runnable_load_avg
32356280 ± 2% -81.5% 5995175 ± 29% sched_debug.cfs_rq[47]:/.min_vruntime
1281150 ± 2% -99.8% 2005 ± 6% sched_debug.cfs_rq[47]:/.tg_load_avg
7930 ± 14% -100.0% 0 ± 0% sched_debug.cfs_rq[47]:/.blocked_load_avg
76 ± 49% -100.0% 0 ± 0% sched_debug.cfs_rq[47]:/.utilization_load_avg
8081 ± 14% -100.0% 0 ± 0% sched_debug.cfs_rq[47]:/.tg_load_contrib
50 ± 18% -100.0% 0 ± 0% sched_debug.cfs_rq[48]:/.utilization_load_avg
7808 ± 5% -100.0% 0 ± 0% sched_debug.cfs_rq[48]:/.blocked_load_avg
7911 ± 5% -100.0% 0 ± 0% sched_debug.cfs_rq[48]:/.tg_load_contrib
1280329 ± 1% -99.8% 2002 ± 6% sched_debug.cfs_rq[48]:/.tg_load_avg
1 ± 0% +1450.0% 15 ± 7% sched_debug.cfs_rq[48]:/.runnable_load_avg
32077870 ± 2% -81.2% 6026082 ± 27% sched_debug.cfs_rq[48]:/.min_vruntime
8085 ± 4% -100.0% 0 ± 0% sched_debug.cfs_rq[49]:/.tg_load_contrib
31747791 ± 2% -81.1% 6009817 ± 28% sched_debug.cfs_rq[49]:/.min_vruntime
1281102 ± 1% -99.8% 1997 ± 6% sched_debug.cfs_rq[49]:/.tg_load_avg
7957 ± 4% -100.0% 0 ± 0% sched_debug.cfs_rq[49]:/.blocked_load_avg
67069 ± 4% +30.4% 87454 ± 3% sched_debug.cfs_rq[4]:/.exec_clock
31984181 ± 3% -68.9% 9948076 ± 5% sched_debug.cfs_rq[4]:/.min_vruntime
1258251 ± 2% -99.8% 2123 ± 5% sched_debug.cfs_rq[4]:/.tg_load_avg
7152 ± 4% -100.0% 0 ± 0% sched_debug.cfs_rq[4]:/.blocked_load_avg
1 ± 34% +960.0% 13 ± 6% sched_debug.cfs_rq[4]:/.runnable_load_avg
7264 ± 4% -100.0% 0 ± 0% sched_debug.cfs_rq[4]:/.tg_load_contrib
48 ± 34% -100.0% 0 ± 0% sched_debug.cfs_rq[4]:/.utilization_load_avg
1282542 ± 2% -99.8% 1993 ± 6% sched_debug.cfs_rq[50]:/.tg_load_avg
7716 ± 8% -100.0% 0 ± 0% sched_debug.cfs_rq[50]:/.tg_load_contrib
7592 ± 8% -100.0% 0 ± 0% sched_debug.cfs_rq[50]:/.blocked_load_avg
31620130 ± 2% -80.7% 6090204 ± 28% sched_debug.cfs_rq[50]:/.min_vruntime
7663 ± 10% -100.0% 0 ± 0% sched_debug.cfs_rq[51]:/.blocked_load_avg
31495201 ± 2% -80.8% 6058202 ± 28% sched_debug.cfs_rq[51]:/.min_vruntime
80 ± 26% -100.0% 0 ± 0% sched_debug.cfs_rq[51]:/.utilization_load_avg
7768 ± 10% -100.0% 0 ± 0% sched_debug.cfs_rq[51]:/.tg_load_contrib
1282028 ± 1% -99.8% 1987 ± 6% sched_debug.cfs_rq[51]:/.tg_load_avg
31275600 ± 2% -80.7% 6034367 ± 28% sched_debug.cfs_rq[52]:/.min_vruntime
7925 ± 16% -100.0% 0 ± 0% sched_debug.cfs_rq[52]:/.blocked_load_avg
1284572 ± 1% -99.8% 1990 ± 6% sched_debug.cfs_rq[52]:/.tg_load_avg
8036 ± 16% -100.0% 0 ± 0% sched_debug.cfs_rq[52]:/.tg_load_contrib
31176732 ± 2% -80.6% 6054590 ± 28% sched_debug.cfs_rq[53]:/.min_vruntime
1284665 ± 2% -99.8% 1988 ± 6% sched_debug.cfs_rq[53]:/.tg_load_avg
7571 ± 7% -100.0% 0 ± 0% sched_debug.cfs_rq[53]:/.tg_load_contrib
7469 ± 8% -100.0% 0 ± 0% sched_debug.cfs_rq[53]:/.blocked_load_avg
31105150 ± 2% -80.7% 5993395 ± 28% sched_debug.cfs_rq[54]:/.min_vruntime
7313 ± 7% -100.0% 0 ± 0% sched_debug.cfs_rq[54]:/.blocked_load_avg
1285119 ± 2% -99.8% 1984 ± 6% sched_debug.cfs_rq[54]:/.tg_load_avg
7424 ± 7% -100.0% 0 ± 0% sched_debug.cfs_rq[54]:/.tg_load_contrib
1286085 ± 2% -99.8% 1980 ± 6% sched_debug.cfs_rq[55]:/.tg_load_avg
8183 ± 8% -100.0% 0 ± 0% sched_debug.cfs_rq[55]:/.tg_load_contrib
8079 ± 8% -100.0% 0 ± 0% sched_debug.cfs_rq[55]:/.blocked_load_avg
31123445 ± 2% -80.8% 5975863 ± 28% sched_debug.cfs_rq[55]:/.min_vruntime
7561 ± 14% -100.0% 0 ± 0% sched_debug.cfs_rq[56]:/.tg_load_contrib
1285969 ± 2% -99.8% 1980 ± 6% sched_debug.cfs_rq[56]:/.tg_load_avg
31208163 ± 2% -80.9% 5968001 ± 28% sched_debug.cfs_rq[56]:/.min_vruntime
45 ± 46% -100.0% 0 ± 0% sched_debug.cfs_rq[56]:/.utilization_load_avg
7460 ± 14% -100.0% 0 ± 0% sched_debug.cfs_rq[56]:/.blocked_load_avg
1 ± 34% +1140.0% 15 ± 9% sched_debug.cfs_rq[56]:/.runnable_load_avg
76 ± 38% -100.0% 0 ± 0% sched_debug.cfs_rq[57]:/.utilization_load_avg
7834 ± 6% -100.0% 0 ± 0% sched_debug.cfs_rq[57]:/.blocked_load_avg
31282504 ± 2% -80.9% 5971017 ± 27% sched_debug.cfs_rq[57]:/.min_vruntime
7932 ± 6% -100.0% 0 ± 0% sched_debug.cfs_rq[57]:/.tg_load_contrib
1287860 ± 2% -99.8% 1982 ± 6% sched_debug.cfs_rq[57]:/.tg_load_avg
1 ± 47% +814.3% 16 ± 11% sched_debug.cfs_rq[57]:/.runnable_load_avg
8087 ± 9% -100.0% 0 ± 0% sched_debug.cfs_rq[58]:/.blocked_load_avg
8213 ± 9% -100.0% 0 ± 0% sched_debug.cfs_rq[58]:/.tg_load_contrib
31460262 ± 2% -80.9% 5999599 ± 28% sched_debug.cfs_rq[58]:/.min_vruntime
74 ± 40% -100.0% 0 ± 0% sched_debug.cfs_rq[58]:/.utilization_load_avg
1286949 ± 2% -99.8% 1985 ± 6% sched_debug.cfs_rq[58]:/.tg_load_avg
1288900 ± 2% -99.8% 1985 ± 6% sched_debug.cfs_rq[59]:/.tg_load_avg
82 ± 45% -100.0% 0 ± 0% sched_debug.cfs_rq[59]:/.utilization_load_avg
31665848 ± 2% -81.1% 5982858 ± 28% sched_debug.cfs_rq[59]:/.min_vruntime
1 ± 34% +1180.0% 16 ± 11% sched_debug.cfs_rq[59]:/.runnable_load_avg
8296 ± 11% -100.0% 0 ± 0% sched_debug.cfs_rq[59]:/.blocked_load_avg
8421 ± 11% -100.0% 0 ± 0% sched_debug.cfs_rq[59]:/.tg_load_contrib
66338 ± 3% +32.1% 87605 ± 3% sched_debug.cfs_rq[5]:/.exec_clock
1259776 ± 2% -99.8% 2121 ± 5% sched_debug.cfs_rq[5]:/.tg_load_avg
2903821 ± 15% -96.9% 89014 ± 43% sched_debug.cfs_rq[5]:/.spread0
7252 ± 5% -100.0% 0 ± 0% sched_debug.cfs_rq[5]:/.tg_load_contrib
59 ± 31% -100.0% 0 ± 0% sched_debug.cfs_rq[5]:/.utilization_load_avg
31604243 ± 4% -68.3% 10021931 ± 5% sched_debug.cfs_rq[5]:/.min_vruntime
7138 ± 5% -100.0% 0 ± 0% sched_debug.cfs_rq[5]:/.blocked_load_avg
1 ± 0% +1250.0% 13 ± 3% sched_debug.cfs_rq[5]:/.runnable_load_avg
17261898 ± 6% -75.8% 4174611 ± 3% sched_debug.cfs_rq[60]:/.min_vruntime
155 ± 23% -100.0% 0 ± 0% sched_debug.cfs_rq[60]:/.utilization_load_avg
14071 ± 7% -100.0% 0 ± 0% sched_debug.cfs_rq[60]:/.blocked_load_avg
1291042 ± 2% -99.8% 1987 ± 6% sched_debug.cfs_rq[60]:/.tg_load_avg
37257 ± 7% +19.8% 44627 ± 7% sched_debug.cfs_rq[60]:/.exec_clock
14143 ± 6% -100.0% 0 ± 0% sched_debug.cfs_rq[60]:/.tg_load_contrib
18970013 ± 5% -78.4% 4104573 ± 3% sched_debug.cfs_rq[61]:/.min_vruntime
173 ± 32% -100.0% 0 ± 0% sched_debug.cfs_rq[61]:/.utilization_load_avg
16260 ± 9% -100.0% 0 ± 0% sched_debug.cfs_rq[61]:/.tg_load_contrib
40852 ± 6% +11.5% 45550 ± 6% sched_debug.cfs_rq[61]:/.exec_clock
16158 ± 9% -100.0% 0 ± 0% sched_debug.cfs_rq[61]:/.blocked_load_avg
1290339 ± 2% -99.8% 1990 ± 6% sched_debug.cfs_rq[61]:/.tg_load_avg
38545 ± 6% +17.7% 45356 ± 7% sched_debug.cfs_rq[62]:/.exec_clock
15379 ± 3% -100.0% 0 ± 0% sched_debug.cfs_rq[62]:/.tg_load_contrib
15302 ± 3% -100.0% 0 ± 0% sched_debug.cfs_rq[62]:/.blocked_load_avg
17851602 ± 5% -76.9% 4122318 ± 3% sched_debug.cfs_rq[62]:/.min_vruntime
1289834 ± 2% -99.8% 1992 ± 6% sched_debug.cfs_rq[62]:/.tg_load_avg
37345 ± 5% +21.0% 45199 ± 8% sched_debug.cfs_rq[63]:/.exec_clock
17288069 ± 5% -76.3% 4100823 ± 2% sched_debug.cfs_rq[63]:/.min_vruntime
111 ± 36% -100.0% 0 ± 0% sched_debug.cfs_rq[63]:/.utilization_load_avg
14615 ± 2% -100.0% 0 ± 0% sched_debug.cfs_rq[63]:/.tg_load_contrib
14512 ± 2% -100.0% 0 ± 0% sched_debug.cfs_rq[63]:/.blocked_load_avg
1288713 ± 1% -99.8% 1988 ± 6% sched_debug.cfs_rq[63]:/.tg_load_avg
15023 ± 3% -100.0% 0 ± 0% sched_debug.cfs_rq[64]:/.blocked_load_avg
36714 ± 5% +23.2% 45215 ± 7% sched_debug.cfs_rq[64]:/.exec_clock
16943908 ± 5% -75.8% 4102781 ± 4% sched_debug.cfs_rq[64]:/.min_vruntime
15120 ± 3% -100.0% 0 ± 0% sched_debug.cfs_rq[64]:/.tg_load_contrib
1289203 ± 2% -99.8% 1983 ± 6% sched_debug.cfs_rq[64]:/.tg_load_avg
1288770 ± 1% -99.8% 1987 ± 5% sched_debug.cfs_rq[65]:/.tg_load_avg
14368 ± 6% -100.0% 0 ± 0% sched_debug.cfs_rq[65]:/.tg_load_contrib
36363 ± 5% +24.6% 45315 ± 7% sched_debug.cfs_rq[65]:/.exec_clock
16805184 ± 5% -75.4% 4127563 ± 3% sched_debug.cfs_rq[65]:/.min_vruntime
14276 ± 6% -100.0% 0 ± 0% sched_debug.cfs_rq[65]:/.blocked_load_avg
120 ± 46% -100.0% 0 ± 0% sched_debug.cfs_rq[65]:/.utilization_load_avg
35814 ± 5% +26.5% 45320 ± 7% sched_debug.cfs_rq[66]:/.exec_clock
16586490 ± 5% -75.2% 4110024 ± 4% sched_debug.cfs_rq[66]:/.min_vruntime
14003 ± 1% -100.0% 0 ± 0% sched_debug.cfs_rq[66]:/.blocked_load_avg
110 ± 20% -100.0% 0 ± 0% sched_debug.cfs_rq[66]:/.utilization_load_avg
14100 ± 1% -100.0% 0 ± 0% sched_debug.cfs_rq[66]:/.tg_load_contrib
1289222 ± 2% -99.8% 1987 ± 5% sched_debug.cfs_rq[66]:/.tg_load_avg
14693 ± 8% -100.0% 0 ± 0% sched_debug.cfs_rq[67]:/.tg_load_contrib
121 ± 18% -100.0% 0 ± 0% sched_debug.cfs_rq[67]:/.utilization_load_avg
14593 ± 8% -100.0% 0 ± 0% sched_debug.cfs_rq[67]:/.blocked_load_avg
36596 ± 7% +22.7% 44914 ± 7% sched_debug.cfs_rq[67]:/.exec_clock
1288991 ± 2% -99.8% 1990 ± 6% sched_debug.cfs_rq[67]:/.tg_load_avg
16933869 ± 7% -75.7% 4122680 ± 4% sched_debug.cfs_rq[67]:/.min_vruntime
35949 ± 6% +25.3% 45027 ± 7% sched_debug.cfs_rq[68]:/.exec_clock
133 ± 18% -100.0% 0 ± 0% sched_debug.cfs_rq[68]:/.utilization_load_avg
13595 ± 3% -100.0% 0 ± 0% sched_debug.cfs_rq[68]:/.tg_load_contrib
1291134 ± 1% -99.8% 1984 ± 5% sched_debug.cfs_rq[68]:/.tg_load_avg
16638068 ± 6% -75.4% 4099752 ± 5% sched_debug.cfs_rq[68]:/.min_vruntime
13504 ± 3% -100.0% 0 ± 0% sched_debug.cfs_rq[68]:/.blocked_load_avg
13378 ± 5% -100.0% 0 ± 0% sched_debug.cfs_rq[69]:/.tg_load_contrib
1287493 ± 1% -99.8% 1987 ± 5% sched_debug.cfs_rq[69]:/.tg_load_avg
13324 ± 5% -100.0% 0 ± 0% sched_debug.cfs_rq[69]:/.blocked_load_avg
0 ± 0% +Inf% 17 ± 48% sched_debug.cfs_rq[69]:/.load
16378018 ± 5% -74.9% 4104247 ± 4% sched_debug.cfs_rq[69]:/.min_vruntime
35482 ± 6% +26.9% 45034 ± 7% sched_debug.cfs_rq[69]:/.exec_clock
89 ± 34% -100.0% 0 ± 0% sched_debug.cfs_rq[69]:/.utilization_load_avg
65237 ± 4% +34.7% 87901 ± 3% sched_debug.cfs_rq[6]:/.exec_clock
6945 ± 9% -100.0% 0 ± 0% sched_debug.cfs_rq[6]:/.blocked_load_avg
1260608 ± 2% -99.8% 2120 ± 5% sched_debug.cfs_rq[6]:/.tg_load_avg
0 ± 0% +Inf% 3 ± 0% sched_debug.cfs_rq[6]:/.load
7046 ± 9% -100.0% 0 ± 0% sched_debug.cfs_rq[6]:/.tg_load_contrib
31032393 ± 4% -67.8% 9984141 ± 4% sched_debug.cfs_rq[6]:/.min_vruntime
16185513 ± 6% -74.7% 4088386 ± 4% sched_debug.cfs_rq[70]:/.min_vruntime
1289167 ± 1% -99.8% 1988 ± 5% sched_debug.cfs_rq[70]:/.tg_load_avg
14068 ± 1% -100.0% 0 ± 0% sched_debug.cfs_rq[70]:/.tg_load_contrib
74 ± 31% -100.0% 0 ± 0% sched_debug.cfs_rq[70]:/.utilization_load_avg
34993 ± 6% +28.5% 44959 ± 7% sched_debug.cfs_rq[70]:/.exec_clock
13985 ± 1% -100.0% 0 ± 0% sched_debug.cfs_rq[70]:/.blocked_load_avg
13431 ± 4% -100.0% 0 ± 0% sched_debug.cfs_rq[71]:/.tg_load_contrib
34807 ± 6% +29.8% 45176 ± 6% sched_debug.cfs_rq[71]:/.exec_clock
95 ± 39% -100.0% 0 ± 0% sched_debug.cfs_rq[71]:/.utilization_load_avg
1288733 ± 1% -99.8% 1993 ± 5% sched_debug.cfs_rq[71]:/.tg_load_avg
16070569 ± 6% -74.4% 4110886 ± 5% sched_debug.cfs_rq[71]:/.min_vruntime
13367 ± 4% -100.0% 0 ± 0% sched_debug.cfs_rq[71]:/.blocked_load_avg
90 ± 34% -100.0% 0 ± 0% sched_debug.cfs_rq[72]:/.utilization_load_avg
34683 ± 7% +31.7% 45667 ± 7% sched_debug.cfs_rq[72]:/.exec_clock
13512 ± 5% -100.0% 0 ± 0% sched_debug.cfs_rq[72]:/.blocked_load_avg
1286529 ± 1% -99.8% 1993 ± 5% sched_debug.cfs_rq[72]:/.tg_load_avg
15993091 ± 6% -74.4% 4093989 ± 4% sched_debug.cfs_rq[72]:/.min_vruntime
13576 ± 5% -100.0% 0 ± 0% sched_debug.cfs_rq[72]:/.tg_load_contrib
12533 ± 2% -100.0% 0 ± 0% sched_debug.cfs_rq[73]:/.tg_load_contrib
15963083 ± 6% -74.3% 4100627 ± 4% sched_debug.cfs_rq[73]:/.min_vruntime
34556 ± 7% +30.6% 45126 ± 7% sched_debug.cfs_rq[73]:/.exec_clock
12446 ± 2% -100.0% 0 ± 0% sched_debug.cfs_rq[73]:/.blocked_load_avg
1286434 ± 2% -99.8% 1995 ± 5% sched_debug.cfs_rq[73]:/.tg_load_avg
12633 ± 7% -100.0% 0 ± 0% sched_debug.cfs_rq[74]:/.blocked_load_avg
15968393 ± 6% -74.4% 4080116 ± 3% sched_debug.cfs_rq[74]:/.min_vruntime
1287548 ± 1% -99.8% 1999 ± 5% sched_debug.cfs_rq[74]:/.tg_load_avg
34553 ± 7% +30.1% 44965 ± 7% sched_debug.cfs_rq[74]:/.exec_clock
12692 ± 7% -100.0% 0 ± 0% sched_debug.cfs_rq[74]:/.tg_load_contrib
80 ± 21% -100.0% 0 ± 0% sched_debug.cfs_rq[74]:/.utilization_load_avg
15616 ± 8% -100.0% 0 ± 0% sched_debug.cfs_rq[75]:/.tg_load_contrib
15525 ± 8% -100.0% 0 ± 0% sched_debug.cfs_rq[75]:/.blocked_load_avg
1288073 ± 1% -99.8% 2003 ± 5% sched_debug.cfs_rq[75]:/.tg_load_avg
147 ± 20% -100.0% 0 ± 0% sched_debug.cfs_rq[75]:/.utilization_load_avg
110 ± 8% -100.0% 0 ± 0% sched_debug.cfs_rq[76]:/.utilization_load_avg
1287479 ± 1% -99.8% 1994 ± 5% sched_debug.cfs_rq[76]:/.tg_load_avg
15195 ± 13% -100.0% 0 ± 0% sched_debug.cfs_rq[76]:/.tg_load_contrib
1 ± 33% +833.3% 14 ± 15% sched_debug.cfs_rq[76]:/.runnable_load_avg
15107 ± 13% -100.0% 0 ± 0% sched_debug.cfs_rq[76]:/.blocked_load_avg
1287305 ± 1% -99.8% 1998 ± 5% sched_debug.cfs_rq[77]:/.tg_load_avg
137 ± 42% -100.0% 0 ± 0% sched_debug.cfs_rq[77]:/.utilization_load_avg
15832 ± 20% -100.0% 0 ± 0% sched_debug.cfs_rq[77]:/.blocked_load_avg
15911 ± 20% -100.0% 0 ± 0% sched_debug.cfs_rq[77]:/.tg_load_contrib
1286971 ± 1% -99.8% 2001 ± 5% sched_debug.cfs_rq[78]:/.tg_load_avg
14248 ± 14% -100.0% 0 ± 0% sched_debug.cfs_rq[78]:/.blocked_load_avg
14352 ± 14% -100.0% 0 ± 0% sched_debug.cfs_rq[78]:/.tg_load_contrib
111 ± 34% -100.0% 0 ± 0% sched_debug.cfs_rq[78]:/.utilization_load_avg
14411 ± 15% -100.0% 0 ± 0% sched_debug.cfs_rq[79]:/.tg_load_contrib
1288909 ± 1% -99.8% 2001 ± 5% sched_debug.cfs_rq[79]:/.tg_load_avg
75 ± 30% -100.0% 0 ± 0% sched_debug.cfs_rq[79]:/.utilization_load_avg
14338 ± 15% -100.0% 0 ± 0% sched_debug.cfs_rq[79]:/.blocked_load_avg
7923 ± 11% -100.0% 0 ± 0% sched_debug.cfs_rq[7]:/.tg_load_contrib
65375 ± 4% +34.1% 87657 ± 2% sched_debug.cfs_rq[7]:/.exec_clock
31157107 ± 4% -67.8% 10027453 ± 4% sched_debug.cfs_rq[7]:/.min_vruntime
1259259 ± 2% -99.8% 2118 ± 5% sched_debug.cfs_rq[7]:/.tg_load_avg
90 ± 25% -100.0% 0 ± 0% sched_debug.cfs_rq[7]:/.utilization_load_avg
7827 ± 11% -100.0% 0 ± 0% sched_debug.cfs_rq[7]:/.blocked_load_avg
1288387 ± 1% -99.8% 2003 ± 5% sched_debug.cfs_rq[80]:/.tg_load_avg
14455 ± 10% -100.0% 0 ± 0% sched_debug.cfs_rq[80]:/.blocked_load_avg
14541 ± 10% -100.0% 0 ± 0% sched_debug.cfs_rq[80]:/.tg_load_contrib
122 ± 35% -100.0% 0 ± 0% sched_debug.cfs_rq[80]:/.utilization_load_avg
1288733 ± 1% -99.8% 1994 ± 4% sched_debug.cfs_rq[81]:/.tg_load_avg
14921 ± 15% -100.0% 0 ± 0% sched_debug.cfs_rq[81]:/.tg_load_contrib
131 ± 16% -100.0% 0 ± 0% sched_debug.cfs_rq[81]:/.utilization_load_avg
14835 ± 15% -100.0% 0 ± 0% sched_debug.cfs_rq[81]:/.blocked_load_avg
13468 ± 10% -100.0% 0 ± 0% sched_debug.cfs_rq[82]:/.tg_load_contrib
116 ± 9% -100.0% 0 ± 0% sched_debug.cfs_rq[82]:/.utilization_load_avg
1 ± 34% +1020.0% 14 ± 13% sched_debug.cfs_rq[82]:/.runnable_load_avg
13402 ± 10% -100.0% 0 ± 0% sched_debug.cfs_rq[82]:/.blocked_load_avg
1289545 ± 2% -99.8% 1997 ± 4% sched_debug.cfs_rq[82]:/.tg_load_avg
1290465 ± 2% -99.8% 1990 ± 3% sched_debug.cfs_rq[83]:/.tg_load_avg
13372 ± 9% -100.0% 0 ± 0% sched_debug.cfs_rq[83]:/.blocked_load_avg
13427 ± 9% -100.0% 0 ± 0% sched_debug.cfs_rq[83]:/.tg_load_contrib
13994 ± 18% -100.0% 0 ± 0% sched_debug.cfs_rq[84]:/.tg_load_contrib
1288701 ± 2% -99.8% 1994 ± 3% sched_debug.cfs_rq[84]:/.tg_load_avg
13914 ± 18% -100.0% 0 ± 0% sched_debug.cfs_rq[84]:/.blocked_load_avg
1288295 ± 2% -99.8% 1997 ± 3% sched_debug.cfs_rq[85]:/.tg_load_avg
136 ± 25% -100.0% 0 ± 0% sched_debug.cfs_rq[85]:/.utilization_load_avg
14285 ± 12% -100.0% 0 ± 0% sched_debug.cfs_rq[85]:/.tg_load_contrib
14203 ± 12% -100.0% 0 ± 0% sched_debug.cfs_rq[85]:/.blocked_load_avg
14493 ± 16% -100.0% 0 ± 0% sched_debug.cfs_rq[86]:/.blocked_load_avg
1290351 ± 2% -99.8% 1999 ± 3% sched_debug.cfs_rq[86]:/.tg_load_avg
14582 ± 16% -100.0% 0 ± 0% sched_debug.cfs_rq[86]:/.tg_load_contrib
131 ± 25% -100.0% 0 ± 0% sched_debug.cfs_rq[86]:/.utilization_load_avg
1291995 ± 2% -99.8% 1995 ± 3% sched_debug.cfs_rq[87]:/.tg_load_avg
157 ± 14% -100.0% 0 ± 0% sched_debug.cfs_rq[87]:/.utilization_load_avg
13568 ± 19% -100.0% 0 ± 0% sched_debug.cfs_rq[87]:/.blocked_load_avg
13634 ± 19% -100.0% 0 ± 0% sched_debug.cfs_rq[87]:/.tg_load_contrib
13771 ± 14% -100.0% 0 ± 0% sched_debug.cfs_rq[88]:/.blocked_load_avg
1294612 ± 2% -99.8% 1999 ± 3% sched_debug.cfs_rq[88]:/.tg_load_avg
106 ± 37% -100.0% 0 ± 0% sched_debug.cfs_rq[88]:/.utilization_load_avg
13848 ± 14% -100.0% 0 ± 0% sched_debug.cfs_rq[88]:/.tg_load_contrib
14099 ± 12% -100.0% 0 ± 0% sched_debug.cfs_rq[89]:/.blocked_load_avg
14179 ± 12% -100.0% 0 ± 0% sched_debug.cfs_rq[89]:/.tg_load_contrib
1293496 ± 2% -99.8% 2000 ± 3% sched_debug.cfs_rq[89]:/.tg_load_avg
148 ± 30% -100.0% 0 ± 0% sched_debug.cfs_rq[89]:/.utilization_load_avg
53 ± 17% -100.0% 0 ± 0% sched_debug.cfs_rq[8]:/.utilization_load_avg
7008 ± 9% -100.0% 0 ± 0% sched_debug.cfs_rq[8]:/.tg_load_contrib
31125220 ± 4% -67.8% 10016250 ± 4% sched_debug.cfs_rq[8]:/.min_vruntime
6909 ± 9% -100.0% 0 ± 0% sched_debug.cfs_rq[8]:/.blocked_load_avg
65076 ± 4% +34.3% 87412 ± 3% sched_debug.cfs_rq[8]:/.exec_clock
1260552 ± 2% -99.8% 2116 ± 5% sched_debug.cfs_rq[8]:/.tg_load_avg
15070 ± 10% -100.0% 0 ± 0% sched_debug.cfs_rq[90]:/.tg_load_contrib
14993 ± 10% -100.0% 0 ± 0% sched_debug.cfs_rq[90]:/.blocked_load_avg
160 ± 25% -100.0% 0 ± 0% sched_debug.cfs_rq[90]:/.utilization_load_avg
18098859 ± 8% -80.9% 3458486 ± 22% sched_debug.cfs_rq[90]:/.min_vruntime
1293608 ± 2% -99.8% 2001 ± 3% sched_debug.cfs_rq[90]:/.tg_load_avg
1293253 ± 2% -99.8% 2000 ± 3% sched_debug.cfs_rq[91]:/.tg_load_avg
13313 ± 15% -100.0% 0 ± 0% sched_debug.cfs_rq[91]:/.blocked_load_avg
181 ± 31% -100.0% 0 ± 0% sched_debug.cfs_rq[91]:/.utilization_load_avg
1 ± 24% +600.0% 12 ± 12% sched_debug.cfs_rq[91]:/.runnable_load_avg
13398 ± 15% -100.0% 0 ± 0% sched_debug.cfs_rq[91]:/.tg_load_contrib
17441462 ± 8% -80.3% 3430296 ± 22% sched_debug.cfs_rq[91]:/.min_vruntime
106 ± 23% -100.0% 0 ± 0% sched_debug.cfs_rq[92]:/.utilization_load_avg
13076 ± 14% -100.0% 0 ± 0% sched_debug.cfs_rq[92]:/.blocked_load_avg
1291860 ± 2% -99.8% 1999 ± 3% sched_debug.cfs_rq[92]:/.tg_load_avg
13169 ± 14% -100.0% 0 ± 0% sched_debug.cfs_rq[92]:/.tg_load_contrib
17098211 ± 8% -79.7% 3463238 ± 23% sched_debug.cfs_rq[92]:/.min_vruntime
16849249 ± 9% -79.6% 3442554 ± 23% sched_debug.cfs_rq[93]:/.min_vruntime
1292421 ± 2% -99.8% 1997 ± 3% sched_debug.cfs_rq[93]:/.tg_load_avg
13756 ± 18% -100.0% 0 ± 0% sched_debug.cfs_rq[93]:/.tg_load_contrib
153 ± 38% -100.0% 0 ± 0% sched_debug.cfs_rq[93]:/.utilization_load_avg
13669 ± 18% -100.0% 0 ± 0% sched_debug.cfs_rq[93]:/.blocked_load_avg
16554257 ± 9% -79.2% 3435110 ± 23% sched_debug.cfs_rq[94]:/.min_vruntime
13283 ± 18% -100.0% 0 ± 0% sched_debug.cfs_rq[94]:/.tg_load_contrib
1293431 ± 2% -99.8% 1996 ± 3% sched_debug.cfs_rq[94]:/.tg_load_avg
13226 ± 18% -100.0% 0 ± 0% sched_debug.cfs_rq[94]:/.blocked_load_avg
13186 ± 12% -100.0% 0 ± 0% sched_debug.cfs_rq[95]:/.tg_load_contrib
121 ± 31% -100.0% 0 ± 0% sched_debug.cfs_rq[95]:/.utilization_load_avg
1294318 ± 2% -99.8% 1997 ± 3% sched_debug.cfs_rq[95]:/.tg_load_avg
13125 ± 12% -100.0% 0 ± 0% sched_debug.cfs_rq[95]:/.blocked_load_avg
16653435 ± 8% -79.2% 3471376 ± 22% sched_debug.cfs_rq[95]:/.min_vruntime
17113324 ± 7% -79.6% 3484867 ± 23% sched_debug.cfs_rq[96]:/.min_vruntime
137 ± 23% -100.0% 0 ± 0% sched_debug.cfs_rq[96]:/.utilization_load_avg
1293535 ± 2% -99.8% 1996 ± 3% sched_debug.cfs_rq[96]:/.tg_load_avg
14067 ± 11% -100.0% 0 ± 0% sched_debug.cfs_rq[96]:/.tg_load_contrib
13983 ± 11% -100.0% 0 ± 0% sched_debug.cfs_rq[96]:/.blocked_load_avg
1294425 ± 2% -99.8% 1993 ± 3% sched_debug.cfs_rq[97]:/.tg_load_avg
13713 ± 14% -100.0% 0 ± 0% sched_debug.cfs_rq[97]:/.tg_load_contrib
13603 ± 14% -100.0% 0 ± 0% sched_debug.cfs_rq[97]:/.blocked_load_avg
16807656 ± 7% -79.3% 3476910 ± 23% sched_debug.cfs_rq[97]:/.min_vruntime
121 ± 29% -100.0% 0 ± 0% sched_debug.cfs_rq[97]:/.utilization_load_avg
13394 ± 7% -100.0% 0 ± 0% sched_debug.cfs_rq[98]:/.blocked_load_avg
116 ± 37% -100.0% 0 ± 0% sched_debug.cfs_rq[98]:/.utilization_load_avg
1293736 ± 2% -99.8% 1985 ± 2% sched_debug.cfs_rq[98]:/.tg_load_avg
13487 ± 7% -100.0% 0 ± 0% sched_debug.cfs_rq[98]:/.tg_load_contrib
16790802 ± 7% -79.5% 3450066 ± 22% sched_debug.cfs_rq[98]:/.min_vruntime
16781556 ± 7% -79.5% 3432159 ± 23% sched_debug.cfs_rq[99]:/.min_vruntime
13144 ± 10% -100.0% 0 ± 0% sched_debug.cfs_rq[99]:/.tg_load_contrib
110 ± 29% -100.0% 0 ± 0% sched_debug.cfs_rq[99]:/.utilization_load_avg
13085 ± 10% -100.0% 0 ± 0% sched_debug.cfs_rq[99]:/.blocked_load_avg
1295505 ± 2% -99.8% 1980 ± 2% sched_debug.cfs_rq[99]:/.tg_load_avg
30828048 ± 4% -67.7% 9946390 ± 5% sched_debug.cfs_rq[9]:/.min_vruntime
64554 ± 4% +35.2% 87301 ± 2% sched_debug.cfs_rq[9]:/.exec_clock
7642 ± 7% -100.0% 0 ± 0% sched_debug.cfs_rq[9]:/.tg_load_contrib
0 ± 0% +Inf% 17 ± 47% sched_debug.cfs_rq[9]:/.load
1262693 ± 2% -99.8% 2113 ± 5% sched_debug.cfs_rq[9]:/.tg_load_avg
1 ± 24% +657.1% 13 ± 13% sched_debug.cfs_rq[9]:/.runnable_load_avg
7534 ± 7% -100.0% 0 ± 0% sched_debug.cfs_rq[9]:/.blocked_load_avg
2 ± 30% -54.5% 1 ± 34% sched_debug.cpu#0.cpu_load[4]
1067783 ± 3% +30.4% 1391905 ± 4% sched_debug.cpu#0.sched_count
1030847 ± 3% +29.9% 1338797 ± 4% sched_debug.cpu#0.nr_switches
500455 ± 3% +23.9% 619920 ± 5% sched_debug.cpu#0.sched_goidle
396349 ± 5% +44.4% 572387 ± 11% sched_debug.cpu#0.ttwu_count
8659 ± 7% +50.4% 13023 ± 18% sched_debug.cpu#0.ttwu_local
195437 ± 1% +11.3% 217539 ± 2% sched_debug.cpu#0.nr_load_updates
1102888 ± 2% +22.0% 1345056 ± 5% sched_debug.cpu#1.sched_count
421072 ± 4% +33.0% 560086 ± 11% sched_debug.cpu#1.ttwu_count
1080649 ± 3% +21.2% 1309912 ± 5% sched_debug.cpu#1.nr_switches
529946 ± 3% +14.4% 606226 ± 5% sched_debug.cpu#1.sched_goidle
5373 ± 9% +96.4% 10552 ± 23% sched_debug.cpu#1.ttwu_local
403029 ± 5% +39.0% 560131 ± 9% sched_debug.cpu#10.ttwu_count
1056794 ± 3% +24.0% 1310867 ± 4% sched_debug.cpu#10.nr_switches
4532 ± 13% +117.0% 9835 ± 27% sched_debug.cpu#10.ttwu_local
1077929 ± 3% +24.9% 1346113 ± 4% sched_debug.cpu#10.sched_count
517885 ± 3% +17.1% 606556 ± 4% sched_debug.cpu#10.sched_goidle
365298 ± 5% +131.0% 843923 ± 5% sched_debug.cpu#100.avg_idle
456436 ± 4% -34.6% 298496 ± 20% sched_debug.cpu#100.sched_count
159012 ± 1% -43.7% 89468 ± 12% sched_debug.cpu#100.nr_load_updates
10489 ± 14% +365.2% 48797 ± 39% sched_debug.cpu#100.nr_uninterruptible
1906 ± 10% +223.8% 6173 ± 25% sched_debug.cpu#100.ttwu_local
218937 ± 4% -60.1% 87357 ± 16% sched_debug.cpu#100.sched_goidle
449700 ± 4% -35.0% 292116 ± 20% sched_debug.cpu#100.nr_switches
14022 ± 13% -41.9% 8153 ± 49% sched_debug.cpu#101.curr->pid
1900 ± 11% +226.6% 6206 ± 24% sched_debug.cpu#101.ttwu_local
362789 ± 8% +136.1% 856706 ± 3% sched_debug.cpu#101.avg_idle
453763 ± 4% -35.8% 291315 ± 20% sched_debug.cpu#101.nr_switches
159260 ± 1% -44.0% 89177 ± 12% sched_debug.cpu#101.nr_load_updates
8 ± 40% -84.4% 1 ± 34% sched_debug.cpu#101.cpu_load[3]
220953 ± 4% -60.4% 87391 ± 17% sched_debug.cpu#101.sched_goidle
18 ± 38% -86.7% 2 ± 20% sched_debug.cpu#101.cpu_load[1]
11 ± 37% -84.8% 1 ± 24% sched_debug.cpu#101.cpu_load[2]
10581 ± 15% +354.5% 48092 ± 37% sched_debug.cpu#101.nr_uninterruptible
33 ± 32% -90.3% 3 ± 13% sched_debug.cpu#101.cpu_load[0]
460554 ± 4% -35.5% 297013 ± 20% sched_debug.cpu#101.sched_count
381257 ± 10% +126.7% 864487 ± 2% sched_debug.cpu#102.avg_idle
457120 ± 5% -36.7% 289202 ± 20% sched_debug.cpu#102.nr_switches
464020 ± 5% -36.4% 294946 ± 20% sched_debug.cpu#102.sched_count
159614 ± 1% -44.4% 88729 ± 12% sched_debug.cpu#102.nr_load_updates
222593 ± 4% -61.2% 86471 ± 16% sched_debug.cpu#102.sched_goidle
7 ± 24% -83.9% 1 ± 34% sched_debug.cpu#102.cpu_load[3]
1896 ± 12% +226.1% 6185 ± 26% sched_debug.cpu#102.ttwu_local
8 ± 39% -76.5% 2 ± 0% sched_debug.cpu#102.cpu_load[2]
10714 ± 14% +348.7% 48080 ± 38% sched_debug.cpu#102.nr_uninterruptible
160961 ± 1% -44.7% 88963 ± 12% sched_debug.cpu#103.nr_load_updates
18 ± 22% -86.7% 2 ± 34% sched_debug.cpu#103.cpu_load[1]
29 ± 30% -88.8% 3 ± 33% sched_debug.cpu#103.cpu_load[0]
350413 ± 4% +147.0% 865509 ± 3% sched_debug.cpu#103.avg_idle
474428 ± 4% -37.3% 297374 ± 20% sched_debug.cpu#103.sched_count
11193 ± 15% +337.1% 48929 ± 37% sched_debug.cpu#103.nr_uninterruptible
1963 ± 10% +216.7% 6217 ± 25% sched_debug.cpu#103.ttwu_local
467221 ± 4% -37.7% 290912 ± 20% sched_debug.cpu#103.nr_switches
9 ± 38% -86.5% 1 ± 34% sched_debug.cpu#103.cpu_load[3]
227615 ± 4% -61.9% 86663 ± 16% sched_debug.cpu#103.sched_goidle
13 ± 29% -84.9% 2 ± 35% sched_debug.cpu#103.cpu_load[2]
231531 ± 4% -62.6% 86600 ± 16% sched_debug.cpu#104.sched_goidle
357971 ± 12% +136.2% 845542 ± 5% sched_debug.cpu#104.avg_idle
1974 ± 13% +210.5% 6130 ± 25% sched_debug.cpu#104.ttwu_local
475014 ± 4% -38.6% 291706 ± 20% sched_debug.cpu#104.nr_switches
161873 ± 1% -45.0% 89005 ± 12% sched_debug.cpu#104.nr_load_updates
11493 ± 14% +328.7% 49274 ± 37% sched_debug.cpu#104.nr_uninterruptible
482004 ± 4% -38.3% 297512 ± 20% sched_debug.cpu#104.sched_count
376857 ± 1% +135.5% 887605 ± 3% sched_debug.cpu#105.avg_idle
479657 ± 5% -52.5% 228014 ± 27% sched_debug.cpu#105.nr_switches
233823 ± 5% -68.9% 72804 ± 22% sched_debug.cpu#105.sched_goidle
486675 ± 5% -52.2% 232670 ± 27% sched_debug.cpu#105.sched_count
11520 ± 21% +181.5% 32428 ± 45% sched_debug.cpu#105.nr_uninterruptible
7 ± 45% -82.8% 1 ± 34% sched_debug.cpu#105.cpu_load[3]
252967 ± 3% -43.0% 144291 ± 34% sched_debug.cpu#105.ttwu_count
164221 ± 0% -54.7% 74406 ± 18% sched_debug.cpu#105.nr_load_updates
16589 ± 42% -55.4% 7396 ± 35% sched_debug.cpu#106.curr->pid
468655 ± 6% -51.1% 229029 ± 26% sched_debug.cpu#106.nr_switches
162419 ± 0% -53.8% 75000 ± 16% sched_debug.cpu#106.nr_load_updates
475888 ± 6% -50.8% 233905 ± 26% sched_debug.cpu#106.sched_count
7 ± 45% -79.3% 1 ± 33% sched_debug.cpu#106.cpu_load[2]
246487 ± 3% -43.0% 140405 ± 36% sched_debug.cpu#106.ttwu_count
11120 ± 19% +192.1% 32481 ± 43% sched_debug.cpu#106.nr_uninterruptible
228269 ± 6% -67.9% 73274 ± 21% sched_debug.cpu#106.sched_goidle
406908 ± 7% +112.6% 865001 ± 7% sched_debug.cpu#106.avg_idle
14380 ± 31% -54.4% 6560 ± 46% sched_debug.cpu#107.curr->pid
370251 ± 8% +130.8% 854462 ± 5% sched_debug.cpu#107.avg_idle
161470 ± 1% -53.2% 75636 ± 16% sched_debug.cpu#107.nr_load_updates
460610 ± 5% -49.8% 231031 ± 26% sched_debug.cpu#107.nr_switches
18 ± 47% -86.7% 2 ± 20% sched_debug.cpu#107.cpu_load[1]
10829 ± 18% +202.1% 32719 ± 44% sched_debug.cpu#107.nr_uninterruptible
224331 ± 5% -67.0% 74073 ± 21% sched_debug.cpu#107.sched_goidle
8 ± 29% -88.6% 1 ± 0% sched_debug.cpu#107.cpu_load[4]
14 ± 34% -87.7% 1 ± 24% sched_debug.cpu#107.cpu_load[2]
242403 ± 3% -41.5% 141720 ± 36% sched_debug.cpu#107.ttwu_count
11 ± 32% -91.1% 1 ± 0% sched_debug.cpu#107.cpu_load[3]
468411 ± 6% -49.7% 235417 ± 26% sched_debug.cpu#107.sched_count
8 ± 21% -84.8% 1 ± 34% sched_debug.cpu#108.cpu_load[3]
380484 ± 3% +126.1% 860361 ± 4% sched_debug.cpu#108.avg_idle
1938 ± 11% +144.4% 4736 ± 34% sched_debug.cpu#108.ttwu_local
220306 ± 5% -66.9% 72929 ± 21% sched_debug.cpu#108.sched_goidle
9 ± 31% -81.6% 1 ± 24% sched_debug.cpu#108.cpu_load[2]
237165 ± 3% -39.2% 144275 ± 33% sched_debug.cpu#108.ttwu_count
10533 ± 18% +213.2% 32988 ± 44% sched_debug.cpu#108.nr_uninterruptible
459326 ± 5% -49.1% 233641 ± 27% sched_debug.cpu#108.sched_count
160159 ± 1% -53.4% 74576 ± 17% sched_debug.cpu#108.nr_load_updates
452591 ± 5% -49.4% 229189 ± 26% sched_debug.cpu#108.nr_switches
159796 ± 0% -53.5% 74381 ± 17% sched_debug.cpu#109.nr_load_updates
235027 ± 3% -40.9% 138966 ± 37% sched_debug.cpu#109.ttwu_count
9 ± 13% -83.3% 1 ± 33% sched_debug.cpu#109.cpu_load[2]
11 ± 17% -82.2% 2 ± 35% sched_debug.cpu#109.cpu_load[1]
377549 ± 9% +127.3% 858083 ± 5% sched_debug.cpu#109.avg_idle
219624 ± 4% -66.8% 72964 ± 22% sched_debug.cpu#109.sched_goidle
10595 ± 17% +212.3% 33088 ± 43% sched_debug.cpu#109.nr_uninterruptible
458219 ± 5% -49.0% 233915 ± 27% sched_debug.cpu#109.sched_count
7 ± 10% -83.9% 1 ± 34% sched_debug.cpu#109.cpu_load[3]
451458 ± 5% -49.3% 228945 ± 27% sched_debug.cpu#109.nr_switches
1074502 ± 3% +25.3% 1346508 ± 4% sched_debug.cpu#11.sched_count
1053435 ± 3% +24.5% 1311348 ± 4% sched_debug.cpu#11.nr_switches
516178 ± 3% +17.5% 606755 ± 4% sched_debug.cpu#11.sched_goidle
399968 ± 5% +39.7% 558833 ± 9% sched_debug.cpu#11.ttwu_count
314120 ± 5% +9.2% 342906 ± 4% sched_debug.cpu#11.avg_idle
4418 ± 12% +126.2% 9995 ± 29% sched_debug.cpu#11.ttwu_local
215980 ± 4% -66.2% 72935 ± 21% sched_debug.cpu#110.sched_goidle
450583 ± 5% -48.4% 232685 ± 26% sched_debug.cpu#110.sched_count
10218 ± 17% +217.1% 32403 ± 45% sched_debug.cpu#110.nr_uninterruptible
236411 ± 4% -40.1% 141614 ± 35% sched_debug.cpu#110.ttwu_count
1902 ± 11% +141.6% 4596 ± 36% sched_debug.cpu#110.ttwu_local
361093 ± 2% +142.4% 875352 ± 6% sched_debug.cpu#110.avg_idle
443846 ± 5% -48.6% 228167 ± 26% sched_debug.cpu#110.nr_switches
158751 ± 1% -52.9% 74729 ± 16% sched_debug.cpu#110.nr_load_updates
10236 ± 17% +224.9% 33257 ± 42% sched_debug.cpu#111.nr_uninterruptible
441970 ± 5% -48.2% 228980 ± 26% sched_debug.cpu#111.nr_switches
448690 ± 5% -48.0% 233442 ± 26% sched_debug.cpu#111.sched_count
7 ± 38% -87.1% 1 ± 0% sched_debug.cpu#111.cpu_load[3]
234647 ± 3% -39.7% 141577 ± 35% sched_debug.cpu#111.ttwu_count
158578 ± 0% -52.9% 74649 ± 17% sched_debug.cpu#111.nr_load_updates
362093 ± 8% +139.5% 867085 ± 7% sched_debug.cpu#111.avg_idle
214924 ± 5% -66.1% 72865 ± 22% sched_debug.cpu#111.sched_goidle
158157 ± 0% -52.9% 74489 ± 17% sched_debug.cpu#112.nr_load_updates
440397 ± 5% -48.0% 228865 ± 25% sched_debug.cpu#112.nr_switches
365887 ± 7% +133.1% 852963 ± 5% sched_debug.cpu#112.avg_idle
447069 ± 5% -47.7% 233692 ± 26% sched_debug.cpu#112.sched_count
1900 ± 10% +141.1% 4582 ± 37% sched_debug.cpu#112.ttwu_local
12 ± 31% -85.7% 1 ± 47% sched_debug.cpu#112.cpu_load[2]
214176 ± 5% -66.1% 72600 ± 21% sched_debug.cpu#112.sched_goidle
10159 ± 18% +228.7% 33392 ± 41% sched_debug.cpu#112.nr_uninterruptible
232979 ± 3% -39.8% 140142 ± 35% sched_debug.cpu#112.ttwu_count
232848 ± 3% -39.6% 140551 ± 36% sched_debug.cpu#113.ttwu_count
5 ± 40% -80.0% 1 ± 0% sched_debug.cpu#113.cpu_load[3]
447370 ± 5% -47.4% 235538 ± 26% sched_debug.cpu#113.sched_count
10181 ± 19% +226.5% 33244 ± 43% sched_debug.cpu#113.nr_uninterruptible
1878 ± 11% +155.2% 4793 ± 35% sched_debug.cpu#113.ttwu_local
214289 ± 5% -65.7% 73398 ± 21% sched_debug.cpu#113.sched_goidle
376183 ± 8% +130.9% 868488 ± 4% sched_debug.cpu#113.avg_idle
158046 ± 0% -52.6% 74970 ± 16% sched_debug.cpu#113.nr_load_updates
440528 ± 5% -47.8% 230099 ± 26% sched_debug.cpu#113.nr_switches
157984 ± 0% -52.8% 74585 ± 17% sched_debug.cpu#114.nr_load_updates
389341 ± 11% +126.6% 882130 ± 4% sched_debug.cpu#114.avg_idle
446361 ± 5% -47.6% 234059 ± 27% sched_debug.cpu#114.sched_count
10131 ± 18% +230.7% 33508 ± 44% sched_debug.cpu#114.nr_uninterruptible
1868 ± 11% +145.1% 4578 ± 37% sched_debug.cpu#114.ttwu_local
231072 ± 3% -39.7% 139256 ± 36% sched_debug.cpu#114.ttwu_count
439650 ± 5% -47.7% 229733 ± 26% sched_debug.cpu#114.nr_switches
213858 ± 4% -65.8% 73221 ± 21% sched_debug.cpu#114.sched_goidle
10257 ± 19% +223.6% 33189 ± 45% sched_debug.cpu#115.nr_uninterruptible
215333 ± 5% -66.1% 73048 ± 21% sched_debug.cpu#115.sched_goidle
158530 ± 0% -52.7% 74953 ± 16% sched_debug.cpu#115.nr_load_updates
373651 ± 3% +139.0% 892874 ± 6% sched_debug.cpu#115.avg_idle
1870 ± 11% +148.4% 4645 ± 37% sched_debug.cpu#115.ttwu_local
449557 ± 5% -47.8% 234775 ± 26% sched_debug.cpu#115.sched_count
235907 ± 3% -39.4% 142887 ± 36% sched_debug.cpu#115.ttwu_count
11 ± 48% -89.1% 1 ± 34% sched_debug.cpu#115.cpu_load[3]
442695 ± 5% -48.0% 230068 ± 26% sched_debug.cpu#115.nr_switches
377458 ± 5% +130.2% 869008 ± 6% sched_debug.cpu#116.avg_idle
158733 ± 0% -52.8% 74844 ± 16% sched_debug.cpu#116.nr_load_updates
217431 ± 5% -66.5% 72782 ± 21% sched_debug.cpu#116.sched_goidle
236350 ± 3% -39.2% 143794 ± 34% sched_debug.cpu#116.ttwu_count
11 ± 46% -88.6% 1 ± 34% sched_debug.cpu#116.cpu_load[3]
446682 ± 5% -48.6% 229763 ± 26% sched_debug.cpu#116.nr_switches
453332 ± 5% -48.2% 234604 ± 26% sched_debug.cpu#116.sched_count
23 ± 49% -88.2% 2 ± 30% sched_debug.cpu#116.cpu_load[1]
33 ± 30% -91.0% 3 ± 33% sched_debug.cpu#116.cpu_load[0]
15318 ± 5% -37.5% 9566 ± 37% sched_debug.cpu#116.curr->pid
10313 ± 18% +223.4% 33352 ± 43% sched_debug.cpu#116.nr_uninterruptible
219393 ± 5% -66.9% 72645 ± 21% sched_debug.cpu#117.sched_goidle
10507 ± 19% +217.5% 33362 ± 44% sched_debug.cpu#117.nr_uninterruptible
354700 ± 8% +147.6% 878317 ± 3% sched_debug.cpu#117.avg_idle
9 ± 46% -82.1% 1 ± 47% sched_debug.cpu#117.cpu_load[2]
159100 ± 0% -53.1% 74634 ± 17% sched_debug.cpu#117.nr_load_updates
240074 ± 3% -40.1% 143847 ± 35% sched_debug.cpu#117.ttwu_count
1926 ± 10% +144.1% 4702 ± 35% sched_debug.cpu#117.ttwu_local
0 ± 0% +Inf% 1 ± 0% sched_debug.cpu#117.nr_running
450850 ± 5% -49.1% 229588 ± 26% sched_debug.cpu#117.nr_switches
458037 ± 5% -48.9% 234019 ± 26% sched_debug.cpu#117.sched_count
363475 ± 2% +135.4% 855586 ± 6% sched_debug.cpu#118.avg_idle
241373 ± 3% -40.3% 144168 ± 35% sched_debug.cpu#118.ttwu_count
465668 ± 5% -49.7% 234055 ± 26% sched_debug.cpu#118.sched_count
458922 ± 5% -50.1% 229212 ± 25% sched_debug.cpu#118.nr_switches
10907 ± 19% +203.9% 33151 ± 43% sched_debug.cpu#118.nr_uninterruptible
160228 ± 0% -53.5% 74553 ± 16% sched_debug.cpu#118.nr_load_updates
223357 ± 5% -67.5% 72534 ± 20% sched_debug.cpu#118.sched_goidle
11282 ± 20% +191.8% 32926 ± 43% sched_debug.cpu#119.nr_uninterruptible
365829 ± 5% +139.7% 877017 ± 5% sched_debug.cpu#119.avg_idle
228345 ± 5% -68.3% 72484 ± 21% sched_debug.cpu#119.sched_goidle
1952 ± 8% +142.6% 4736 ± 36% sched_debug.cpu#119.ttwu_local
161450 ± 0% -54.0% 74215 ± 16% sched_debug.cpu#119.nr_load_updates
476317 ± 5% -50.9% 233762 ± 26% sched_debug.cpu#119.sched_count
468835 ± 5% -51.1% 229406 ± 26% sched_debug.cpu#119.nr_switches
245834 ± 2% -42.4% 141647 ± 36% sched_debug.cpu#119.ttwu_count
1069632 ± 3% +25.9% 1346347 ± 4% sched_debug.cpu#12.sched_count
4492 ± 10% +126.0% 10153 ± 25% sched_debug.cpu#12.ttwu_local
0 ± 0% +Inf% 2 ± 34% sched_debug.cpu#12.load
512665 ± 3% +18.4% 606896 ± 4% sched_debug.cpu#12.sched_goidle
1046593 ± 3% +25.4% 1312232 ± 4% sched_debug.cpu#12.nr_switches
398917 ± 5% +40.0% 558290 ± 9% sched_debug.cpu#12.ttwu_count
397758 ± 5% +41.8% 563965 ± 10% sched_debug.cpu#13.ttwu_count
4605 ± 10% +118.3% 10054 ± 29% sched_debug.cpu#13.ttwu_local
1066964 ± 3% +26.5% 1349215 ± 4% sched_debug.cpu#13.sched_count
512519 ± 3% +18.8% 608637 ± 4% sched_debug.cpu#13.sched_goidle
1046422 ± 3% +25.7% 1315144 ± 4% sched_debug.cpu#13.nr_switches
1063482 ± 3% +27.2% 1352562 ± 4% sched_debug.cpu#14.sched_count
1042319 ± 4% +26.4% 1317288 ± 4% sched_debug.cpu#14.nr_switches
2 ± 30% -54.5% 1 ± 34% sched_debug.cpu#14.cpu_load[4]
395972 ± 5% +43.1% 566604 ± 10% sched_debug.cpu#14.ttwu_count
4417 ± 12% +130.6% 10185 ± 33% sched_debug.cpu#14.ttwu_local
510493 ± 4% +19.4% 609573 ± 4% sched_debug.cpu#14.sched_goidle
316104 ± 11% +102.9% 641422 ± 30% sched_debug.cpu#15.avg_idle
5655 ± 26% +47.6% 8349 ± 4% sched_debug.cpu#16.ttwu_local
304298 ± 6% +109.3% 636995 ± 30% sched_debug.cpu#16.avg_idle
5317 ± 12% +46.5% 7791 ± 6% sched_debug.cpu#17.ttwu_local
303838 ± 6% +108.3% 632990 ± 32% sched_debug.cpu#17.avg_idle
5213 ± 17% +52.5% 7947 ± 8% sched_debug.cpu#18.ttwu_local
297224 ± 7% +114.0% 636143 ± 31% sched_debug.cpu#18.avg_idle
5325 ± 12% +59.1% 8475 ± 5% sched_debug.cpu#19.ttwu_local
322255 ± 10% +95.1% 628732 ± 33% sched_debug.cpu#19.avg_idle
1111501 ± 2% +17.9% 1310937 ± 5% sched_debug.cpu#2.nr_switches
4788 ± 13% +105.3% 9830 ± 25% sched_debug.cpu#2.ttwu_local
1131111 ± 2% +19.0% 1345850 ± 5% sched_debug.cpu#2.sched_count
545588 ± 2% +11.2% 606775 ± 5% sched_debug.cpu#2.sched_goidle
423779 ± 4% +31.3% 556589 ± 10% sched_debug.cpu#2.ttwu_count
291025 ± 6% +114.3% 623791 ± 31% sched_debug.cpu#20.avg_idle
4983 ± 5% +69.8% 8460 ± 7% sched_debug.cpu#20.ttwu_local
4751 ± 6% +84.1% 8748 ± 4% sched_debug.cpu#21.ttwu_local
318379 ± 2% +99.5% 635089 ± 33% sched_debug.cpu#21.avg_idle
4822 ± 7% +67.5% 8078 ± 8% sched_debug.cpu#22.ttwu_local
328942 ± 8% +89.1% 622099 ± 33% sched_debug.cpu#23.avg_idle
4753 ± 8% +70.6% 8108 ± 11% sched_debug.cpu#23.ttwu_local
5070 ± 4% +62.7% 8251 ± 9% sched_debug.cpu#24.ttwu_local
311654 ± 5% +101.4% 627764 ± 34% sched_debug.cpu#24.avg_idle
11 ± 36% -75.0% 2 ± 30% sched_debug.cpu#25.cpu_load[2]
8 ± 29% -65.6% 2 ± 30% sched_debug.cpu#25.cpu_load[3]
4735 ± 8% +66.4% 7878 ± 8% sched_debug.cpu#25.ttwu_local
334745 ± 4% +89.2% 633318 ± 32% sched_debug.cpu#25.avg_idle
312300 ± 9% +108.0% 649565 ± 31% sched_debug.cpu#26.avg_idle
3 ± 24% -64.3% 1 ± 34% sched_debug.cpu#26.cpu_load[4]
5139 ± 11% +51.4% 7782 ± 8% sched_debug.cpu#26.ttwu_local
1 ± 47% +100.0% 3 ± 14% sched_debug.cpu#27.cpu_load[1]
4972 ± 9% +57.5% 7828 ± 12% sched_debug.cpu#27.ttwu_local
339666 ± 13% +87.2% 635981 ± 31% sched_debug.cpu#27.avg_idle
1 ± 34% +140.0% 3 ± 23% sched_debug.cpu#27.cpu_load[0]
5120 ± 12% +59.9% 8189 ± 9% sched_debug.cpu#28.ttwu_local
326310 ± 5% +96.9% 642647 ± 31% sched_debug.cpu#28.avg_idle
5624 ± 18% +50.7% 8479 ± 6% sched_debug.cpu#29.ttwu_local
317361 ± 7% +95.0% 618915 ± 35% sched_debug.cpu#29.avg_idle
1 ± 0% +375.0% 4 ± 9% sched_debug.cpu#3.cpu_load[0]
543732 ± 2% +11.7% 607295 ± 5% sched_debug.cpu#3.sched_goidle
1133509 ± 2% +18.8% 1347015 ± 4% sched_debug.cpu#3.sched_count
419045 ± 5% +32.6% 555748 ± 10% sched_debug.cpu#3.ttwu_count
1108080 ± 2% +18.4% 1312225 ± 4% sched_debug.cpu#3.nr_switches
4586 ± 10% +113.2% 9777 ± 25% sched_debug.cpu#3.ttwu_local
4814 ± 13% +115.8% 10391 ± 20% sched_debug.cpu#30.ttwu_local
4956 ± 12% +116.4% 10725 ± 17% sched_debug.cpu#31.ttwu_local
4822 ± 13% +100.7% 9676 ± 14% sched_debug.cpu#32.ttwu_local
1 ± 0% +300.0% 4 ± 30% sched_debug.cpu#32.cpu_load[0]
3 ± 14% -64.3% 1 ± 34% sched_debug.cpu#33.cpu_load[4]
5034 ± 17% +93.5% 9741 ± 14% sched_debug.cpu#33.ttwu_local
1 ± 33% +133.3% 3 ± 14% sched_debug.cpu#34.cpu_load[2]
4908 ± 15% +96.4% 9637 ± 12% sched_debug.cpu#34.ttwu_local
5062 ± 14% +89.2% 9576 ± 16% sched_debug.cpu#35.ttwu_local
4738 ± 15% +108.7% 9892 ± 15% sched_debug.cpu#36.ttwu_local
315129 ± 9% +55.5% 490026 ± 17% sched_debug.cpu#37.avg_idle
4831 ± 15% +107.7% 10032 ± 12% sched_debug.cpu#37.ttwu_local
331506 ± 7% +40.9% 467107 ± 16% sched_debug.cpu#38.avg_idle
4738 ± 13% +125.1% 10668 ± 9% sched_debug.cpu#38.ttwu_local
4 ± 25% -58.8% 1 ± 24% sched_debug.cpu#39.cpu_load[4]
4788 ± 14% +108.3% 9975 ± 22% sched_debug.cpu#39.ttwu_local
541207 ± 2% +13.1% 611853 ± 5% sched_debug.cpu#4.sched_goidle
1103021 ± 2% +19.8% 1321365 ± 5% sched_debug.cpu#4.nr_switches
4589 ± 10% +111.5% 9707 ± 22% sched_debug.cpu#4.ttwu_local
1122810 ± 2% +20.8% 1355960 ± 5% sched_debug.cpu#4.sched_count
414454 ± 5% +32.3% 548289 ± 10% sched_debug.cpu#4.ttwu_count
4792 ± 16% +106.2% 9883 ± 19% sched_debug.cpu#40.ttwu_local
4816 ± 15% +110.7% 10147 ± 12% sched_debug.cpu#41.ttwu_local
4735 ± 15% +119.3% 10384 ± 23% sched_debug.cpu#42.ttwu_local
4982 ± 37% +74.9% 8713 ± 14% sched_debug.cpu#42.curr->pid
1 ± 34% +140.0% 3 ± 33% sched_debug.cpu#43.cpu_load[0]
4756 ± 14% +101.0% 9560 ± 17% sched_debug.cpu#43.ttwu_local
1 ± 34% +160.0% 3 ± 13% sched_debug.cpu#43.cpu_load[1]
322258 ± 11% +43.5% 462420 ± 16% sched_debug.cpu#43.avg_idle
325887 ± 4% +43.4% 467240 ± 18% sched_debug.cpu#44.avg_idle
4814 ± 14% +112.4% 10226 ± 11% sched_debug.cpu#44.ttwu_local
5017 ± 8% +72.0% 8628 ± 13% sched_debug.cpu#45.ttwu_local
323613 ± 9% +61.2% 521795 ± 24% sched_debug.cpu#45.avg_idle
212655 ± 0% -29.2% 150525 ± 21% sched_debug.cpu#45.nr_load_updates
4919 ± 6% +91.3% 9413 ± 12% sched_debug.cpu#46.ttwu_local
316524 ± 4% +61.2% 510277 ± 29% sched_debug.cpu#46.avg_idle
213391 ± 0% -29.7% 150024 ± 22% sched_debug.cpu#46.nr_load_updates
1 ± 34% +260.0% 4 ± 45% sched_debug.cpu#47.cpu_load[2]
213365 ± 0% -29.3% 150874 ± 21% sched_debug.cpu#47.nr_load_updates
4959 ± 5% +110.2% 10424 ± 12% sched_debug.cpu#47.ttwu_local
297977 ± 5% +70.9% 509246 ± 27% sched_debug.cpu#47.avg_idle
5010 ± 5% +87.0% 9370 ± 10% sched_debug.cpu#48.ttwu_local
295224 ± 14% +75.4% 517762 ± 20% sched_debug.cpu#48.avg_idle
212733 ± 0% -29.1% 150866 ± 21% sched_debug.cpu#48.nr_load_updates
1 ± 0% +400.0% 5 ± 34% sched_debug.cpu#48.cpu_load[0]
212347 ± 0% -29.2% 150357 ± 21% sched_debug.cpu#49.nr_load_updates
307183 ± 7% +65.7% 509139 ± 22% sched_debug.cpu#49.avg_idle
5076 ± 8% +70.0% 8630 ± 13% sched_debug.cpu#49.ttwu_local
5672 ± 47% -53.8% 2619 ± 33% sched_debug.cpu#49.curr->pid
1 ± 0% +325.0% 4 ± 10% sched_debug.cpu#5.cpu_load[1]
4557 ± 11% +112.8% 9695 ± 26% sched_debug.cpu#5.ttwu_local
5632 ± 45% +100.1% 11273 ± 22% sched_debug.cpu#5.curr->pid
1 ± 24% +128.6% 4 ± 0% sched_debug.cpu#5.cpu_load[2]
529333 ± 3% +14.2% 604409 ± 5% sched_debug.cpu#5.sched_goidle
1079377 ± 3% +21.0% 1306000 ± 5% sched_debug.cpu#5.nr_switches
413773 ± 4% +33.3% 551394 ± 9% sched_debug.cpu#5.ttwu_count
1099369 ± 3% +22.0% 1340945 ± 5% sched_debug.cpu#5.sched_count
338950 ± 12% +56.4% 530174 ± 22% sched_debug.cpu#50.avg_idle
4950 ± 5% +71.8% 8507 ± 12% sched_debug.cpu#50.ttwu_local
211756 ± 0% -29.1% 150035 ± 21% sched_debug.cpu#50.nr_load_updates
211395 ± 0% -28.7% 150623 ± 21% sched_debug.cpu#51.nr_load_updates
333949 ± 8% +54.1% 514474 ± 25% sched_debug.cpu#51.avg_idle
4856 ± 7% +85.0% 8985 ± 12% sched_debug.cpu#51.ttwu_local
211096 ± 0% -28.4% 151073 ± 21% sched_debug.cpu#52.nr_load_updates
4834 ± 7% +110.5% 10175 ± 9% sched_debug.cpu#52.ttwu_local
325681 ± 1% +60.7% 523453 ± 27% sched_debug.cpu#52.avg_idle
1 ± 0% +425.0% 5 ± 43% sched_debug.cpu#53.cpu_load[1]
210826 ± 0% -28.6% 150475 ± 21% sched_debug.cpu#53.nr_load_updates
322501 ± 11% +60.3% 516953 ± 22% sched_debug.cpu#53.avg_idle
4764 ± 8% +89.9% 9047 ± 14% sched_debug.cpu#53.ttwu_local
210701 ± 0% -28.8% 150099 ± 21% sched_debug.cpu#54.nr_load_updates
338710 ± 11% +55.1% 525412 ± 22% sched_debug.cpu#54.avg_idle
1 ± 0% +300.0% 4 ± 50% sched_debug.cpu#54.cpu_load[0]
4884 ± 5% +89.3% 9248 ± 12% sched_debug.cpu#54.ttwu_local
210894 ± 0% -28.7% 150271 ± 21% sched_debug.cpu#55.nr_load_updates
5071 ± 5% +84.2% 9341 ± 6% sched_debug.cpu#55.ttwu_local
325786 ± 2% +60.5% 522749 ± 20% sched_debug.cpu#55.avg_idle
4926 ± 6% +87.5% 9236 ± 17% sched_debug.cpu#56.ttwu_local
210935 ± 0% -28.7% 150334 ± 21% sched_debug.cpu#56.nr_load_updates
322812 ± 7% +64.7% 531744 ± 18% sched_debug.cpu#56.avg_idle
3 ± 39% -61.5% 1 ± 34% sched_debug.cpu#56.cpu_load[4]
4935 ± 7% +89.4% 9345 ± 12% sched_debug.cpu#57.ttwu_local
308906 ± 6% +68.8% 521531 ± 18% sched_debug.cpu#57.avg_idle
210966 ± 0% -28.8% 150297 ± 21% sched_debug.cpu#57.nr_load_updates
4881 ± 8% +89.1% 9229 ± 9% sched_debug.cpu#58.ttwu_local
211377 ± 0% -29.1% 149801 ± 21% sched_debug.cpu#58.nr_load_updates
313176 ± 7% +62.7% 509465 ± 21% sched_debug.cpu#58.avg_idle
332981 ± 7% +52.0% 506078 ± 23% sched_debug.cpu#59.avg_idle
4903 ± 6% +88.8% 9259 ± 16% sched_debug.cpu#59.ttwu_local
211798 ± 0% -29.2% 150043 ± 21% sched_debug.cpu#59.nr_load_updates
408039 ± 4% +35.0% 550837 ± 9% sched_debug.cpu#6.ttwu_count
522969 ± 3% +16.1% 607170 ± 5% sched_debug.cpu#6.sched_goidle
4818 ± 17% +103.6% 9809 ± 25% sched_debug.cpu#6.ttwu_local
1067408 ± 3% +22.9% 1311321 ± 4% sched_debug.cpu#6.nr_switches
1098484 ± 3% +22.6% 1346438 ± 4% sched_debug.cpu#6.sched_count
159254 ± 1% -37.4% 99737 ± 5% sched_debug.cpu#60.nr_load_updates
465823 ± 5% -29.8% 327041 ± 5% sched_debug.cpu#60.nr_switches
472826 ± 5% -29.4% 333802 ± 5% sched_debug.cpu#60.sched_count
4 ± 40% -72.2% 1 ± 34% sched_debug.cpu#60.cpu_load[4]
2046 ± 12% +254.3% 7252 ± 28% sched_debug.cpu#60.ttwu_local
226846 ± 5% -57.3% 96883 ± 6% sched_debug.cpu#60.sched_goidle
375332 ± 4% +112.6% 797875 ± 2% sched_debug.cpu#60.avg_idle
11749 ± 14% +348.3% 52676 ± 2% sched_debug.cpu#60.nr_uninterruptible
2192 ± 13% +228.1% 7192 ± 29% sched_debug.cpu#61.ttwu_local
13261 ± 13% +323.8% 56208 ± 3% sched_debug.cpu#61.nr_uninterruptible
258980 ± 3% -11.4% 229352 ± 7% sched_debug.cpu#61.ttwu_count
8 ± 24% -68.6% 2 ± 30% sched_debug.cpu#61.cpu_load[2]
170014 ± 1% -40.7% 100819 ± 5% sched_debug.cpu#61.nr_load_updates
526023 ± 4% -35.3% 340593 ± 5% sched_debug.cpu#61.sched_count
253250 ± 4% -61.4% 97676 ± 6% sched_debug.cpu#61.sched_goidle
8 ± 26% -78.1% 1 ± 24% sched_debug.cpu#61.cpu_load[3]
6 ± 28% -81.5% 1 ± 34% sched_debug.cpu#61.cpu_load[4]
351138 ± 4% +136.7% 831304 ± 4% sched_debug.cpu#61.avg_idle
518488 ± 4% -35.6% 333844 ± 5% sched_debug.cpu#61.nr_switches
166210 ± 0% -39.4% 100751 ± 5% sched_debug.cpu#62.nr_load_updates
4 ± 40% -78.9% 1 ± 0% sched_debug.cpu#62.cpu_load[4]
7 ± 38% -79.3% 1 ± 33% sched_debug.cpu#62.cpu_load[3]
487117 ± 4% -31.6% 332975 ± 5% sched_debug.cpu#62.nr_switches
10 ± 47% -76.2% 2 ± 20% sched_debug.cpu#62.cpu_load[2]
2007 ± 12% +257.3% 7174 ± 28% sched_debug.cpu#62.ttwu_local
351310 ± 5% +137.9% 835592 ± 2% sched_debug.cpu#62.avg_idle
494319 ± 4% -31.3% 339633 ± 5% sched_debug.cpu#62.sched_count
11740 ± 14% +376.8% 55979 ± 5% sched_debug.cpu#62.nr_uninterruptible
237774 ± 4% -59.0% 97456 ± 6% sched_debug.cpu#62.sched_goidle
163519 ± 0% -38.4% 100702 ± 5% sched_debug.cpu#63.nr_load_updates
19 ± 39% -77.6% 4 ± 30% sched_debug.cpu#63.cpu_load[0]
8 ± 23% -72.7% 2 ± 19% sched_debug.cpu#63.cpu_load[2]
5 ± 25% -82.6% 1 ± 0% sched_debug.cpu#63.cpu_load[4]
7 ± 24% -78.6% 1 ± 33% sched_debug.cpu#63.cpu_load[3]
11 ± 28% -69.6% 3 ± 14% sched_debug.cpu#63.cpu_load[1]
383806 ± 6% +107.3% 795485 ± 2% sched_debug.cpu#63.avg_idle
230985 ± 3% -57.7% 97730 ± 6% sched_debug.cpu#63.sched_goidle
473483 ± 4% -29.6% 333448 ± 5% sched_debug.cpu#63.nr_switches
11189 ± 12% +402.7% 56248 ± 4% sched_debug.cpu#63.nr_uninterruptible
1967 ± 9% +268.6% 7251 ± 30% sched_debug.cpu#63.ttwu_local
480528 ± 4% -29.0% 341011 ± 5% sched_debug.cpu#63.sched_count
360584 ± 1% +124.2% 808561 ± 5% sched_debug.cpu#64.avg_idle
6 ± 49% -84.6% 1 ± 0% sched_debug.cpu#64.cpu_load[4]
162032 ± 1% -37.9% 100543 ± 5% sched_debug.cpu#64.nr_load_updates
471634 ± 4% -27.6% 341321 ± 5% sched_debug.cpu#64.sched_count
464736 ± 4% -28.0% 334706 ± 5% sched_debug.cpu#64.nr_switches
1924 ± 10% +271.0% 7140 ± 28% sched_debug.cpu#64.ttwu_local
226602 ± 4% -56.8% 97801 ± 7% sched_debug.cpu#64.sched_goidle
10882 ± 13% +426.6% 57312 ± 2% sched_debug.cpu#64.nr_uninterruptible
454639 ± 4% -26.7% 333349 ± 5% sched_debug.cpu#65.nr_switches
461901 ± 4% -26.3% 340369 ± 6% sched_debug.cpu#65.sched_count
0 ± 0% +Inf% 1 ± 24% sched_debug.cpu#65.nr_running
10689 ± 12% +426.0% 56232 ± 3% sched_debug.cpu#65.nr_uninterruptible
369695 ± 3% +115.9% 798046 ± 4% sched_debug.cpu#65.avg_idle
5 ± 28% -81.0% 1 ± 0% sched_debug.cpu#65.cpu_load[4]
221512 ± 4% -55.9% 97706 ± 7% sched_debug.cpu#65.sched_goidle
160420 ± 1% -37.3% 100577 ± 5% sched_debug.cpu#65.nr_load_updates
5 ± 28% -65.2% 2 ± 0% sched_debug.cpu#65.cpu_load[3]
1926 ± 11% +275.3% 7228 ± 28% sched_debug.cpu#65.ttwu_local
5 ± 20% -72.7% 1 ± 33% sched_debug.cpu#66.cpu_load[3]
7 ± 15% -65.5% 2 ± 20% sched_debug.cpu#66.cpu_load[2]
1859 ± 9% +282.2% 7106 ± 29% sched_debug.cpu#66.ttwu_local
159053 ± 1% -36.6% 100774 ± 5% sched_debug.cpu#66.nr_load_updates
8 ± 29% -61.8% 3 ± 13% sched_debug.cpu#66.cpu_load[1]
457600 ± 4% -25.3% 341731 ± 5% sched_debug.cpu#66.sched_count
10564 ± 14% +438.2% 56855 ± 4% sched_debug.cpu#66.nr_uninterruptible
369343 ± 8% +118.8% 808095 ± 3% sched_debug.cpu#66.avg_idle
4 ± 30% -75.0% 1 ± 0% sched_debug.cpu#66.cpu_load[4]
219472 ± 3% -55.3% 98188 ± 6% sched_debug.cpu#66.sched_goidle
450502 ± 4% -25.7% 334770 ± 5% sched_debug.cpu#66.nr_switches
466476 ± 6% -27.1% 339837 ± 5% sched_debug.cpu#67.sched_count
160888 ± 2% -37.7% 100182 ± 5% sched_debug.cpu#67.nr_load_updates
5 ± 37% -69.6% 1 ± 24% sched_debug.cpu#67.cpu_load[3]
459640 ± 6% -27.6% 332819 ± 5% sched_debug.cpu#67.nr_switches
5 ± 24% -76.2% 1 ± 34% sched_debug.cpu#67.cpu_load[4]
10908 ± 17% +417.4% 56441 ± 3% sched_debug.cpu#67.nr_uninterruptible
1935 ± 9% +273.9% 7235 ± 30% sched_debug.cpu#67.ttwu_local
224021 ± 6% -56.4% 97632 ± 6% sched_debug.cpu#67.sched_goidle
349251 ± 6% +128.9% 799422 ± 1% sched_debug.cpu#67.avg_idle
0 ± 0% +Inf% 1 ± 34% sched_debug.cpu#67.nr_running
1907 ± 11% +280.5% 7256 ± 29% sched_debug.cpu#68.ttwu_local
219479 ± 4% -55.6% 97442 ± 6% sched_debug.cpu#68.sched_goidle
450587 ± 5% -26.0% 333323 ± 5% sched_debug.cpu#68.nr_switches
392113 ± 8% +108.3% 816905 ± 2% sched_debug.cpu#68.avg_idle
0 ± 0% +Inf% 1 ± 33% sched_debug.cpu#68.nr_running
5 ± 34% -81.0% 1 ± 0% sched_debug.cpu#68.cpu_load[4]
159530 ± 1% -37.2% 100191 ± 5% sched_debug.cpu#68.nr_load_updates
457458 ± 5% -25.7% 340032 ± 5% sched_debug.cpu#68.sched_count
10515 ± 14% +439.1% 56681 ± 3% sched_debug.cpu#68.nr_uninterruptible
10444 ± 14% +450.7% 57517 ± 1% sched_debug.cpu#69.nr_uninterruptible
446109 ± 4% -25.1% 334321 ± 5% sched_debug.cpu#69.nr_switches
452933 ± 4% -24.7% 340979 ± 5% sched_debug.cpu#69.sched_count
217204 ± 4% -55.1% 97621 ± 6% sched_debug.cpu#69.sched_goidle
158575 ± 1% -36.8% 100204 ± 5% sched_debug.cpu#69.nr_load_updates
351220 ± 4% +137.5% 834165 ± 3% sched_debug.cpu#69.avg_idle
1876 ± 11% +283.9% 7204 ± 28% sched_debug.cpu#69.ttwu_local
1091164 ± 3% +24.1% 1354466 ± 5% sched_debug.cpu#7.sched_count
523643 ± 3% +16.7% 611192 ± 5% sched_debug.cpu#7.sched_goidle
410239 ± 4% +34.6% 552357 ± 9% sched_debug.cpu#7.ttwu_count
4561 ± 10% +124.7% 10249 ± 21% sched_debug.cpu#7.ttwu_local
1068057 ± 3% +23.5% 1319002 ± 5% sched_debug.cpu#7.nr_switches
10279 ± 14% +448.2% 56356 ± 2% sched_debug.cpu#70.nr_uninterruptible
352820 ± 9% +134.6% 827636 ± 4% sched_debug.cpu#70.avg_idle
9783 ± 25% +48.0% 14480 ± 8% sched_debug.cpu#70.curr->pid
449808 ± 4% -24.4% 340071 ± 5% sched_debug.cpu#70.sched_count
215565 ± 4% -54.9% 97278 ± 6% sched_debug.cpu#70.sched_goidle
442796 ± 5% -24.7% 333367 ± 5% sched_debug.cpu#70.nr_switches
157886 ± 1% -36.6% 100081 ± 5% sched_debug.cpu#70.nr_load_updates
3 ± 31% -71.4% 1 ± 0% sched_debug.cpu#70.cpu_load[4]
0 ± 0% +Inf% 1 ± 0% sched_debug.cpu#70.nr_running
1847 ± 12% +290.8% 7217 ± 29% sched_debug.cpu#70.ttwu_local
10300 ± 15% +445.5% 56190 ± 2% sched_debug.cpu#71.nr_uninterruptible
440375 ± 5% -24.4% 332977 ± 4% sched_debug.cpu#71.nr_switches
1857 ± 12% +287.9% 7205 ± 30% sched_debug.cpu#71.ttwu_local
447123 ± 5% -24.0% 339605 ± 4% sched_debug.cpu#71.sched_count
372180 ± 6% +119.1% 815314 ± 2% sched_debug.cpu#71.avg_idle
214355 ± 4% -54.7% 97027 ± 6% sched_debug.cpu#71.sched_goidle
157564 ± 1% -36.4% 100194 ± 4% sched_debug.cpu#71.nr_load_updates
1848 ± 12% +290.7% 7220 ± 29% sched_debug.cpu#72.ttwu_local
9980 ± 32% +51.9% 15161 ± 35% sched_debug.cpu#72.curr->pid
7 ± 10% -78.6% 1 ± 33% sched_debug.cpu#72.cpu_load[4]
389711 ± 4% +112.4% 827764 ± 1% sched_debug.cpu#72.avg_idle
8 ± 24% -79.4% 1 ± 47% sched_debug.cpu#72.cpu_load[3]
439193 ± 5% -23.8% 334863 ± 5% sched_debug.cpu#72.nr_switches
445902 ± 5% -23.4% 341459 ± 5% sched_debug.cpu#72.sched_count
213663 ± 5% -54.6% 96938 ± 6% sched_debug.cpu#72.sched_goidle
157330 ± 1% -36.1% 100472 ± 5% sched_debug.cpu#72.nr_load_updates
10371 ± 15% +454.6% 57521 ± 2% sched_debug.cpu#72.nr_uninterruptible
7 ± 39% -65.5% 2 ± 20% sched_debug.cpu#73.cpu_load[2]
214049 ± 5% -54.6% 97108 ± 6% sched_debug.cpu#73.sched_goidle
10495 ± 16% +434.3% 56076 ± 3% sched_debug.cpu#73.nr_uninterruptible
1824 ± 14% +299.2% 7281 ± 30% sched_debug.cpu#73.ttwu_local
6 ± 45% -70.8% 1 ± 24% sched_debug.cpu#73.cpu_load[3]
440090 ± 6% -24.2% 333514 ± 4% sched_debug.cpu#73.nr_switches
398199 ± 4% +100.8% 799630 ± 2% sched_debug.cpu#73.avg_idle
446777 ± 6% -23.9% 339998 ± 4% sched_debug.cpu#73.sched_count
157615 ± 1% -36.4% 100184 ± 5% sched_debug.cpu#73.nr_load_updates
10541 ± 16% +435.4% 56435 ± 3% sched_debug.cpu#74.nr_uninterruptible
448101 ± 5% -24.4% 338746 ± 5% sched_debug.cpu#74.sched_count
394634 ± 4% +101.7% 796067 ± 1% sched_debug.cpu#74.avg_idle
17 ± 43% -77.9% 3 ± 22% sched_debug.cpu#74.cpu_load[1]
441455 ± 5% -24.7% 332301 ± 5% sched_debug.cpu#74.nr_switches
1844 ± 11% +291.8% 7227 ± 30% sched_debug.cpu#74.ttwu_local
157582 ± 1% -36.7% 99748 ± 5% sched_debug.cpu#74.nr_load_updates
9 ± 28% -81.6% 1 ± 24% sched_debug.cpu#74.cpu_load[3]
214781 ± 5% -55.1% 96404 ± 6% sched_debug.cpu#74.sched_goidle
7 ± 26% -82.1% 1 ± 34% sched_debug.cpu#74.cpu_load[4]
12 ± 35% -80.0% 2 ± 20% sched_debug.cpu#74.cpu_load[2]
164640 ± 1% -52.7% 77949 ± 26% sched_debug.cpu#75.nr_load_updates
361203 ± 5% +153.4% 915258 ± 5% sched_debug.cpu#75.avg_idle
483086 ± 1% -51.1% 236050 ± 37% sched_debug.cpu#75.sched_count
476102 ± 1% -51.3% 231994 ± 36% sched_debug.cpu#75.nr_switches
232077 ± 1% -68.4% 73385 ± 27% sched_debug.cpu#75.sched_goidle
475860 ± 1% -50.4% 235843 ± 37% sched_debug.cpu#76.sched_count
370986 ± 4% +140.8% 893396 ± 7% sched_debug.cpu#76.avg_idle
468899 ± 1% -50.6% 231822 ± 37% sched_debug.cpu#76.nr_switches
163074 ± 0% -52.4% 77553 ± 26% sched_debug.cpu#76.nr_load_updates
228349 ± 1% -68.1% 72873 ± 28% sched_debug.cpu#76.sched_goidle
0 ± 0% +Inf% 25 ± 38% sched_debug.cpu#76.load
363660 ± 1% +148.2% 902694 ± 4% sched_debug.cpu#77.avg_idle
476100 ± 6% -50.2% 236976 ± 37% sched_debug.cpu#77.sched_count
228629 ± 5% -68.1% 72878 ± 28% sched_debug.cpu#77.sched_goidle
163638 ± 3% -52.6% 77542 ± 27% sched_debug.cpu#77.nr_load_updates
469211 ± 6% -50.5% 232239 ± 37% sched_debug.cpu#77.nr_switches
460611 ± 4% -49.7% 231831 ± 37% sched_debug.cpu#78.nr_switches
467338 ± 4% -49.5% 235855 ± 37% sched_debug.cpu#78.sched_count
224421 ± 4% -67.6% 72672 ± 28% sched_debug.cpu#78.sched_goidle
381951 ± 13% +136.3% 902531 ± 6% sched_debug.cpu#78.avg_idle
162704 ± 2% -52.4% 77434 ± 27% sched_debug.cpu#78.nr_load_updates
221400 ± 3% -66.8% 73518 ± 28% sched_debug.cpu#79.sched_goidle
397648 ± 7% +127.9% 906260 ± 5% sched_debug.cpu#79.avg_idle
461046 ± 3% -48.4% 237777 ± 37% sched_debug.cpu#79.sched_count
454447 ± 3% -48.6% 233725 ± 37% sched_debug.cpu#79.nr_switches
161593 ± 2% -51.8% 77813 ± 26% sched_debug.cpu#79.nr_load_updates
4503 ± 11% +117.6% 9799 ± 25% sched_debug.cpu#8.ttwu_local
1064520 ± 3% +23.4% 1313774 ± 4% sched_debug.cpu#8.nr_switches
521824 ± 3% +16.6% 608464 ± 5% sched_debug.cpu#8.sched_goidle
407022 ± 5% +35.5% 551655 ± 9% sched_debug.cpu#8.ttwu_count
1084793 ± 3% +24.3% 1348477 ± 4% sched_debug.cpu#8.sched_count
449003 ± 4% -48.8% 230108 ± 37% sched_debug.cpu#80.nr_switches
218669 ± 4% -67.0% 72091 ± 28% sched_debug.cpu#80.sched_goidle
455752 ± 4% -48.3% 235485 ± 37% sched_debug.cpu#80.sched_count
355619 ± 11% +154.8% 906078 ± 5% sched_debug.cpu#80.avg_idle
160754 ± 2% -52.2% 76880 ± 27% sched_debug.cpu#80.nr_load_updates
216571 ± 4% -66.5% 72533 ± 29% sched_debug.cpu#81.sched_goidle
375573 ± 5% +137.9% 893511 ± 6% sched_debug.cpu#81.avg_idle
160219 ± 2% -51.7% 77329 ± 28% sched_debug.cpu#81.nr_load_updates
451643 ± 4% -47.6% 236789 ± 38% sched_debug.cpu#81.sched_count
444845 ± 4% -47.7% 232555 ± 37% sched_debug.cpu#81.nr_switches
215428 ± 4% -66.6% 71893 ± 29% sched_debug.cpu#82.sched_goidle
368902 ± 6% +152.6% 931727 ± 3% sched_debug.cpu#82.avg_idle
442822 ± 4% -48.0% 230123 ± 37% sched_debug.cpu#82.nr_switches
159684 ± 2% -52.1% 76475 ± 27% sched_debug.cpu#82.nr_load_updates
449549 ± 4% -47.8% 234604 ± 37% sched_debug.cpu#82.sched_count
159520 ± 2% -51.4% 77568 ± 27% sched_debug.cpu#83.nr_load_updates
449125 ± 4% -46.8% 239047 ± 38% sched_debug.cpu#83.sched_count
215386 ± 4% -66.1% 72964 ± 28% sched_debug.cpu#83.sched_goidle
385918 ± 6% +136.5% 912662 ± 5% sched_debug.cpu#83.avg_idle
442478 ± 4% -47.3% 233229 ± 37% sched_debug.cpu#83.nr_switches
449876 ± 4% -47.3% 236952 ± 37% sched_debug.cpu#84.sched_count
443343 ± 4% -47.6% 232453 ± 37% sched_debug.cpu#84.nr_switches
159522 ± 2% -51.7% 77037 ± 27% sched_debug.cpu#84.nr_load_updates
215677 ± 4% -66.3% 72592 ± 29% sched_debug.cpu#84.sched_goidle
364262 ± 13% +148.5% 905299 ± 5% sched_debug.cpu#84.avg_idle
444975 ± 3% -48.1% 230920 ± 37% sched_debug.cpu#85.nr_switches
216569 ± 3% -66.8% 71874 ± 29% sched_debug.cpu#85.sched_goidle
451627 ± 3% -48.0% 234975 ± 37% sched_debug.cpu#85.sched_count
387650 ± 7% +129.1% 888267 ± 7% sched_debug.cpu#85.avg_idle
159827 ± 2% -51.9% 76809 ± 27% sched_debug.cpu#85.nr_load_updates
398952 ± 16% +120.7% 880379 ± 7% sched_debug.cpu#86.avg_idle
447181 ± 3% -48.3% 231357 ± 37% sched_debug.cpu#86.nr_switches
159916 ± 2% -51.8% 77000 ± 27% sched_debug.cpu#86.nr_load_updates
453947 ± 3% -48.1% 235643 ± 37% sched_debug.cpu#86.sched_count
217599 ± 3% -67.1% 71651 ± 28% sched_debug.cpu#86.sched_goidle
160324 ± 2% -52.1% 76764 ± 27% sched_debug.cpu#87.nr_load_updates
374310 ± 6% +138.1% 891320 ± 6% sched_debug.cpu#87.avg_idle
220101 ± 3% -67.3% 72026 ± 28% sched_debug.cpu#87.sched_goidle
458878 ± 3% -48.7% 235540 ± 37% sched_debug.cpu#87.sched_count
452233 ± 3% -48.8% 231554 ± 37% sched_debug.cpu#87.nr_switches
160998 ± 1% -52.2% 76976 ± 27% sched_debug.cpu#88.nr_load_updates
388490 ± 10% +133.5% 907200 ± 6% sched_debug.cpu#88.avg_idle
464167 ± 3% -49.3% 235524 ± 37% sched_debug.cpu#88.sched_count
222658 ± 2% -67.8% 71589 ± 28% sched_debug.cpu#88.sched_goidle
457358 ± 3% -49.5% 230996 ± 37% sched_debug.cpu#88.nr_switches
373956 ± 11% +142.8% 908123 ± 4% sched_debug.cpu#89.avg_idle
227255 ± 2% -68.5% 71532 ± 29% sched_debug.cpu#89.sched_goidle
466727 ± 2% -50.5% 231136 ± 38% sched_debug.cpu#89.nr_switches
161927 ± 1% -52.7% 76598 ± 28% sched_debug.cpu#89.nr_load_updates
473645 ± 2% -50.3% 235205 ± 38% sched_debug.cpu#89.sched_count
1081031 ± 3% +25.0% 1351582 ± 4% sched_debug.cpu#9.sched_count
0 ± 0% +Inf% 17 ± 47% sched_debug.cpu#9.load
402463 ± 4% +36.8% 550573 ± 9% sched_debug.cpu#9.ttwu_count
518725 ± 3% +17.5% 609468 ± 4% sched_debug.cpu#9.sched_goidle
1058429 ± 3% +24.5% 1317308 ± 4% sched_debug.cpu#9.nr_switches
4530 ± 14% +118.6% 9905 ± 28% sched_debug.cpu#9.ttwu_local
330519 ± 11% +154.1% 839727 ± 6% sched_debug.cpu#90.avg_idle
247920 ± 7% -23.5% 189659 ± 21% sched_debug.cpu#90.ttwu_count
11641 ± 14% +310.1% 47738 ± 37% sched_debug.cpu#90.nr_uninterruptible
12 ± 26% -86.0% 1 ± 47% sched_debug.cpu#90.cpu_load[2]
9 ± 27% -84.2% 1 ± 33% sched_debug.cpu#90.cpu_load[3]
236121 ± 4% -62.6% 88204 ± 16% sched_debug.cpu#90.sched_goidle
492698 ± 4% -39.5% 297909 ± 20% sched_debug.cpu#90.sched_count
484160 ± 5% -39.8% 291626 ± 20% sched_debug.cpu#90.nr_switches
164750 ± 1% -45.4% 89920 ± 12% sched_debug.cpu#90.nr_load_updates
13 ± 36% -80.0% 2 ± 30% sched_debug.cpu#90.cpu_load[1]
2380 ± 22% +161.2% 6218 ± 25% sched_debug.cpu#90.ttwu_local
10 ± 22% -78.0% 2 ± 19% sched_debug.cpu#91.cpu_load[2]
6 ± 17% -84.6% 1 ± 0% sched_debug.cpu#91.cpu_load[4]
1958 ± 11% +216.4% 6195 ± 25% sched_debug.cpu#91.ttwu_local
466743 ± 4% -37.1% 293473 ± 20% sched_debug.cpu#91.nr_switches
11129 ± 13% +339.4% 48904 ± 38% sched_debug.cpu#91.nr_uninterruptible
11 ± 37% -73.3% 3 ± 0% sched_debug.cpu#91.cpu_load[1]
161859 ± 2% -44.3% 90199 ± 12% sched_debug.cpu#91.nr_load_updates
227465 ± 4% -61.2% 88334 ± 16% sched_debug.cpu#91.sched_goidle
8 ± 23% -78.1% 1 ± 24% sched_debug.cpu#91.cpu_load[3]
473602 ± 4% -36.6% 300206 ± 20% sched_debug.cpu#91.sched_count
365768 ± 8% +131.9% 848199 ± 5% sched_debug.cpu#91.avg_idle
464831 ± 5% -35.8% 298566 ± 20% sched_debug.cpu#92.sched_count
1909 ± 12% +225.2% 6209 ± 25% sched_debug.cpu#92.ttwu_local
6 ± 41% -84.0% 1 ± 0% sched_debug.cpu#92.cpu_load[4]
10722 ± 14% +354.4% 48718 ± 37% sched_debug.cpu#92.nr_uninterruptible
223142 ± 5% -60.5% 88183 ± 16% sched_debug.cpu#92.sched_goidle
160290 ± 2% -43.9% 89915 ± 12% sched_debug.cpu#92.nr_load_updates
379115 ± 7% +132.0% 879627 ± 2% sched_debug.cpu#92.avg_idle
458041 ± 5% -36.1% 292684 ± 20% sched_debug.cpu#92.nr_switches
159097 ± 2% -43.6% 89715 ± 12% sched_debug.cpu#93.nr_load_updates
363321 ± 9% +135.0% 853706 ± 4% sched_debug.cpu#93.avg_idle
9 ± 38% -66.7% 3 ± 13% sched_debug.cpu#93.cpu_load[1]
449508 ± 5% -35.0% 291986 ± 20% sched_debug.cpu#93.nr_switches
218847 ± 5% -59.9% 87722 ± 16% sched_debug.cpu#93.sched_goidle
456189 ± 5% -34.6% 298447 ± 20% sched_debug.cpu#93.sched_count
9 ± 20% -73.0% 2 ± 20% sched_debug.cpu#93.cpu_load[2]
8 ± 8% -78.1% 1 ± 24% sched_debug.cpu#93.cpu_load[3]
10526 ± 14% +364.2% 48867 ± 36% sched_debug.cpu#93.nr_uninterruptible
6 ± 7% -80.8% 1 ± 34% sched_debug.cpu#93.cpu_load[4]
1885 ± 12% +225.2% 6131 ± 25% sched_debug.cpu#93.ttwu_local
215555 ± 5% -59.0% 88450 ± 16% sched_debug.cpu#94.sched_goidle
157842 ± 2% -43.0% 90011 ± 12% sched_debug.cpu#94.nr_load_updates
10314 ± 15% +383.2% 49839 ± 37% sched_debug.cpu#94.nr_uninterruptible
373165 ± 12% +128.3% 851916 ± 2% sched_debug.cpu#94.avg_idle
2 ± 0% +87.5% 3 ± 22% sched_debug.cpu#94.cpu_load[0]
443064 ± 5% -33.5% 294760 ± 20% sched_debug.cpu#94.nr_switches
1869 ± 13% +231.5% 6198 ± 23% sched_debug.cpu#94.ttwu_local
449617 ± 5% -33.0% 301360 ± 20% sched_debug.cpu#94.sched_count
10349 ± 13% +364.1% 48028 ± 38% sched_debug.cpu#95.nr_uninterruptible
157490 ± 2% -43.3% 89361 ± 12% sched_debug.cpu#95.nr_load_updates
447749 ± 5% -33.6% 297481 ± 20% sched_debug.cpu#95.sched_count
440550 ± 5% -33.9% 291184 ± 20% sched_debug.cpu#95.nr_switches
380790 ± 12% +129.2% 872691 ± 4% sched_debug.cpu#95.avg_idle
7 ± 40% -83.9% 1 ± 34% sched_debug.cpu#95.cpu_load[3]
214335 ± 5% -59.1% 87594 ± 16% sched_debug.cpu#95.sched_goidle
1843 ± 12% +243.4% 6328 ± 22% sched_debug.cpu#95.ttwu_local
10609 ± 12% +352.8% 48034 ± 35% sched_debug.cpu#96.nr_uninterruptible
220349 ± 4% -60.3% 87478 ± 15% sched_debug.cpu#96.sched_goidle
459952 ± 4% -35.5% 296579 ± 19% sched_debug.cpu#96.sched_count
382794 ± 8% +121.5% 847879 ± 3% sched_debug.cpu#96.avg_idle
8 ± 40% -72.7% 2 ± 19% sched_debug.cpu#96.cpu_load[2]
159883 ± 1% -44.1% 89417 ± 12% sched_debug.cpu#96.nr_load_updates
452633 ± 4% -35.8% 290522 ± 19% sched_debug.cpu#96.nr_switches
1927 ± 13% +223.1% 6226 ± 23% sched_debug.cpu#96.ttwu_local
6 ± 36% -77.8% 1 ± 33% sched_debug.cpu#96.cpu_load[3]
158918 ± 1% -43.8% 89247 ± 12% sched_debug.cpu#97.nr_load_updates
7 ± 27% -83.3% 1 ± 34% sched_debug.cpu#97.cpu_load[4]
371984 ± 10% +126.9% 843866 ± 4% sched_debug.cpu#97.avg_idle
9 ± 42% -81.6% 1 ± 24% sched_debug.cpu#97.cpu_load[3]
1857 ± 11% +231.5% 6158 ± 24% sched_debug.cpu#97.ttwu_local
454841 ± 4% -34.8% 296555 ± 20% sched_debug.cpu#97.sched_count
448082 ± 4% -35.1% 290591 ± 20% sched_debug.cpu#97.nr_switches
218092 ± 4% -59.9% 87476 ± 16% sched_debug.cpu#97.sched_goidle
10490 ± 13% +357.9% 48042 ± 37% sched_debug.cpu#97.nr_uninterruptible
8 ± 26% -85.3% 1 ± 34% sched_debug.cpu#98.cpu_load[3]
363336 ± 8% +139.6% 870418 ± 2% sched_debug.cpu#98.avg_idle
217716 ± 4% -59.6% 88025 ± 15% sched_debug.cpu#98.sched_goidle
6 ± 25% -84.6% 1 ± 0% sched_debug.cpu#98.cpu_load[4]
454322 ± 4% -34.0% 299712 ± 19% sched_debug.cpu#98.sched_count
1891 ± 9% +231.0% 6262 ± 25% sched_debug.cpu#98.ttwu_local
447408 ± 4% -34.5% 293275 ± 20% sched_debug.cpu#98.nr_switches
10 ± 36% -80.5% 2 ± 0% sched_debug.cpu#98.cpu_load[2]
158876 ± 1% -43.4% 89862 ± 12% sched_debug.cpu#98.nr_load_updates
10482 ± 13% +370.9% 49366 ± 38% sched_debug.cpu#98.nr_uninterruptible
10469 ± 14% +370.8% 49296 ± 38% sched_debug.cpu#99.nr_uninterruptible
455622 ± 4% -34.3% 299384 ± 20% sched_debug.cpu#99.sched_count
218527 ± 4% -59.7% 88000 ± 16% sched_debug.cpu#99.sched_goidle
448998 ± 4% -34.6% 293470 ± 20% sched_debug.cpu#99.nr_switches
1912 ± 12% +226.2% 6237 ± 25% sched_debug.cpu#99.ttwu_local
379179 ± 11% +115.2% 815881 ± 3% sched_debug.cpu#99.avg_idle
9 ± 49% -81.1% 1 ± 24% sched_debug.cpu#99.cpu_load[3]
158830 ± 1% -43.5% 89753 ± 12% sched_debug.cpu#99.nr_load_updates
6 ± 39% -81.5% 1 ± 34% sched_debug.cpu#99.cpu_load[4]
brickland3: Brickland Ivy Bridge-EX
Memory: 512G
aim7.time.involuntary_context_switches
1.4e+06 ++----------------------------------------------------------------+
O |
1.2e+06 ++ O |
| O O O |
1e+06 ++ |
| |
800000 ++ |
| |
600000 ++ |
| |
400000 ++ O |
| O O O
200000 ++ |
| ....*
0 *+------*--------*-------*-------*-------*--------*-------*-------+
cpuidle.POLL.usage
300 ++--------------------------------------------------------------------+
| |
250 ++ ..*.... |
| .... ... |
*.. . ....*...... |
200 ++ *... .. ....*........*....... |
| *.... *........*
150 ++ |
| |
100 ++ |
| O O O
| O O |
50 O+ O |
| O O |
0 ++--------------------------------------------------------------------+
cpuidle.C1-IVT-4S.usage
2e+07 ++----------------------------------------------------------------+
1.8e+07 ++ O O |
| |
1.6e+07 ++ O O
1.4e+07 ++ |
O O |
1.2e+07 ++ O O O |
1e+07 ++ |
8e+06 ++ |
| |
6e+06 ++ |
4e+06 ++ ....*
| ....*.......*.......*.......*........*.......*... |
2e+06 *+......*.... |
0 ++----------------------------------------------------------------+
numa-vmstat.node1.numa_other
85000 ++-------------------------------------------------O----------------+
O O O O O O O O
80000 ++ |
| |
75000 ++ |
| |
70000 ++ |
| |
65000 ++ |
| |
60000 ++ |
| |
55000 ++ |
*........*.......*........*.......*........*.......*........*.......*
50000 ++------------------------------------------------------------------+
numa-vmstat.node2.numa_other
85000 ++------------------------------------------------------------------+
*........*.......*........*.......*........*.......*........*.......*
80000 ++ |
| |
75000 ++ |
| |
70000 ++ |
| |
65000 ++ |
| |
60000 ++ |
| |
55000 ++ |
O O O O O O O O O
50000 ++------------------------------------------------------------------+
proc-vmstat.nr_anon_pages
56000 ++------------------------------------------------------------------+
*........*..... ....*... |
54000 ++ .. ....*.......*.... ... ..*
52000 ++ *.... . .... |
| *........*. |
50000 ++ |
48000 ++ |
| |
46000 ++ |
44000 ++ O O O O
| |
42000 ++ |
40000 O+ O O O |
| O |
38000 ++------------------------------------------------------------------+
meminfo.AnonPages
220000 *+-----------------------------------------------------------------+
| *........ ....*.......*..... ..*
210000 ++ *.......*.... .. .... |
| *........*. |
200000 ++ |
| |
190000 ++ |
| |
180000 ++ |
| O O O O
170000 ++ |
O |
160000 ++ O O O O |
| |
150000 ++-----------------------------------------------------------------+
slabinfo.mm_struct.active_objs
11000 ++------------------------------------------------------------------+
| |
10500 ++ O |
O O O |
| |
10000 ++ O |
| |
9500 ++ O |
| O O O
9000 ++ |
| |
| |
8500 *+.......*.......*........*.......*........*.......*........*.......*
| |
8000 ++------------------------------------------------------------------+
slabinfo.mm_struct.num_objs
11000 ++------------------------------------------------------------------+
| O |
O O |
10500 ++ O |
| |
| O |
10000 ++ |
| |
9500 ++ O O |
| O O
| |
9000 ++ |
| |
*........*....... ....*........*.......*........*.......|
8500 ++---------------*--------*-----------------------------------------*
slabinfo.kmalloc-128.active_objs
100000 ++-----------------------------------------------------------------+
O O O |
90000 ++ O O |
| |
80000 ++ O O O
| O |
70000 ++ |
| |
60000 ++ |
| |
50000 ++ |
| ....*
40000 *+...... ....*....... ....*....... ....*........*... |
| *.... *.... *... |
30000 ++-----------------------------------------------------------------+
slabinfo.kmalloc-128.num_objs
100000 ++-----------------------------------------------------------------+
O O O O |
90000 ++ O |
| |
80000 ++ O O O
| O |
70000 ++ |
| |
60000 ++ |
| |
50000 ++ |
| ....*
40000 *+...... ....*....... ....*....... ....*........*... |
| *.... *.... *... |
30000 ++-----------------------------------------------------------------+
slabinfo.kmalloc-128.active_slabs
1600 ++-------------------------------------------------------------------+
1500 ++ O O |
O O O |
1400 ++ |
1300 ++ O |
1200 ++ O O O
1100 ++ |
| |
1000 ++ |
900 ++ |
800 ++ |
700 ++ |
*........ ....*....... ....*....... ....*
600 ++ *.......*........*.... *.... *.... |
500 ++-------------------------------------------------------------------+
slabinfo.kmalloc-128.num_slabs
1600 ++-------------------------------------------------------------------+
1500 ++ O O |
O O O |
1400 ++ |
1300 ++ O |
1200 ++ O O O
1100 ++ |
| |
1000 ++ |
900 ++ |
800 ++ |
700 ++ |
*........ ....*....... ....*....... ....*
600 ++ *.......*........*.... *.... *.... |
500 ++-------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:

        apt-get install ruby
        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/setup-local job.yaml # the job file is attached in this email
        bin/run-local job.yaml
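The steps above can be collected into one re-runnable script. This is a minimal sketch, not part of the original report: the `JOB`/`LKP_DIR` variables and the missing-file guard are illustrative additions, and it assumes the job.yaml attached to this email has been saved next to the script.

```shell
#!/bin/sh
# Sketch of the reproduction steps from the report above.
# Assumes job.yaml (attached to the original email) is in the
# current directory; JOB and LKP_DIR are illustrative names.

JOB="${1:-job.yaml}"                 # job file from the email
LKP_DIR="${LKP_DIR:-$PWD/lkp-tests}" # where to keep the harness

if [ -f "$JOB" ]; then
    # Fetch the lkp-tests harness once; reuse it on later runs.
    if [ ! -d "$LKP_DIR" ]; then
        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git "$LKP_DIR"
    fi
    cd "$LKP_DIR"
    bin/setup-local "$JOB"  # installs the benchmark's dependencies
    bin/run-local "$JOB"    # runs the job and collects results
else
    # Guard: without the attached job file there is nothing to run.
    echo "job file '$JOB' not found; save the attached job.yaml first" >&2
fi
```

Ruby must already be installed (`apt-get install ruby`), as in the original instructions.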
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
_______________________________________________
LKP mailing list
LKP@linux.intel.com