[futex] 52507cfaff: UBSAN:Undefined_behaviour_in_kernel/events/core.c
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-9):
commit: 52507cfaffe900c4d23931e9863bd54b4b980d65 ("futex: Replace PF_EXITPIDONE with a state")
https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable-rc.git linux-5.4.y
in testcase: boot
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the full log/backtrace):
+------------------------------------------------------------------+------------+------------+
| | 8012f98f92 | 52507cfaff |
+------------------------------------------------------------------+------------+------------+
| boot_successes | 0 | 0 |
| boot_failures | 14 | 16 |
| UBSAN:Undefined_behaviour_in_arch/x86/kernel.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_drivers/acpi/acpica/tbprint.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_drivers/acpi/acpica/tbutils.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_drivers/acpi/acpica/tbdata.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_drivers/acpi/acpica/tbxface.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_kernel/workqueue.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_drivers/acpi/acpica/nsaccess.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_drivers/acpi/acpica/utstring.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_lib/string.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_arch/x86/kernel/alternative.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_arch/x86/realmode/init.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_crypto/algapi.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_kernel/exit.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_kernel/sched/cputime.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_kernel/locking/lockdep.c | 14 | 16 |
| EIP:arch_local_irq_restore | 6 | 10 |
| UBSAN:Undefined_behaviour_in_drivers/acpi/acpica/tbxfload.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_drivers/acpi/acpica/dsmthdat.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_drivers/acpi/acpica/psargs.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_drivers/acpi/acpica/dsinit.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_drivers/acpi/acpica/evgpeinit.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_drivers/acpi/acpica/nsinit.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_drivers/acpi/acpica/evregion.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_drivers/acpi/acpica/utpredef.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_drivers/acpi/sysfs.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_drivers/acpi/acpica/rsxface.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_drivers/acpi/acpica/rsutils.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_drivers/acpi/acpica/rsaddr.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_drivers/acpi/acpica/rscalc.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_drivers/acpi/acpica/rsmisc.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_lib/zlib_inflate/inffast.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_fs/readdir.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_crypto/api.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_drivers/base/devres.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_include/linux/compiler.h | 14 | 16 |
| UBSAN:Undefined_behaviour_in_include/uapi/linux/swab.h | 14 | 16 |
| UBSAN:Undefined_behaviour_in_include/linux/unaligned/access_ok.h | 14 | 16 |
| UBSAN:Undefined_behaviour_in_net/ipv4/af_inet.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_include/net/ip.h | 14 | 16 |
| UBSAN:Undefined_behaviour_in_net/ipv4/ipconfig.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_net/ipv4/ip_input.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_net/ipv4/udp.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_kernel/signal.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_net/ipv6/ip6_offload.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_net/ipv6/ip6_input.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_include/net/ipv6.h | 14 | 16 |
| UBSAN:Undefined_behaviour_in_net/ipv6/route.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_net/ipv6/icmp.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_include/net/ip6_checksum.h | 14 | 16 |
| UBSAN:Undefined_behaviour_in_net/ipv6/ndisc.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_net/ipv6/addrconf_core.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_net/ipv6/addrconf.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_net/ipv6/ip6_fib.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_include/linux/skbuff.h | 14 | 16 |
| UBSAN:Undefined_behaviour_in_net/ipv4/tcp_offload.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_net/ipv4/tcp_ipv4.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_include/linux/tcp.h | 14 | 16 |
| UBSAN:Undefined_behaviour_in_include/net/dsfield.h | 14 | 16 |
| UBSAN:Undefined_behaviour_in_net/ipv4/tcp_input.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_kernel/module.c | 14 | 16 |
| UBSAN:Undefined_behaviour_in_net/ipv4/tcp_minisocks.c | 2 | |
| EIP:arch_local_irq_enable | 5 | 9 |
| EIP:delay_tsc | 4 | 7 |
| UBSAN:Undefined_behaviour_in_net/ipv4/ip_output.c | 1 | |
| EIP:restore_nameidata | 1 | |
| EIP:lock_is_held_type | 1 | 1 |
| EIP:do_exit | 1 | |
| EIP:native_safe_halt | 2 | 4 |
| UBSAN:Undefined_behaviour_in_net/ipv6/raw.c | 3 | 6 |
| UBSAN:Undefined_behaviour_in_include/net/ndisc.h | 3 | 6 |
| UBSAN:Undefined_behaviour_in_include/net/neighbour.h | 3 | 6 |
| UBSAN:Undefined_behaviour_in_net/ipv6/datagram.c | 3 | 6 |
| EIP:rcutorture_one_extend | 1 | |
| EIP:cache_alloc_debugcheck_after | 1 | |
| UBSAN:Undefined_behaviour_in_kernel/events/core.c | 0 | 16 |
| EIP:tcp_seq_start | 0 | 1 |
| EIP:check_poison_obj | 0 | 1 |
| EIP:assert_no_holes | 0 | 1 |
| EIP:torture_random | 0 | 1 |
| EIP:__legitimize_mnt | 0 | 1 |
| EIP:wait_for_random_bytes | 0 | 1 |
+------------------------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
[ 2.955817] ================================================================================
[ 2.956899] UBSAN: Undefined behaviour in kernel/events/core.c:11620:2
[ 2.957921] member access within misaligned address (ptrval) for type 'struct perf_event'
[ 2.958918] which requires 8 byte alignment
[ 2.959435] CPU: 0 PID: 13 Comm: cryptomgr_test Not tainted 5.4.0-00037-g52507cfaffe900 #1
[ 2.960440] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 2.961471] Call Trace:
[ 2.961790] dump_stack+0x16/0x18
[ 2.962218] ubsan_epilogue+0xb/0x32
[ 2.962677] ubsan_type_mismatch_common+0xb0/0x104
[ 2.963285] __ubsan_handle_type_mismatch_v1+0x2c/0x32
[ 2.963939] perf_event_exit_task+0x6c/0x5eb
[ 2.964502] ? rcu_read_lock_sched_held+0x20/0x40
[ 2.965095] ? fpu__drop+0x1b6/0x1c1
[ 2.965462] do_exit+0x9df/0x189d
[ 2.965462] ? kfree+0x15c/0x169
[ 2.965462] __module_put_and_exit+0xa/0xa
[ 2.965462] cryptomgr_test+0x40/0x40
[ 2.965462] kthread+0x1a8/0x1ad
[ 2.965462] ? crypto_alg_put+0x56/0x56
[ 2.965462] ? kthread_create_worker_on_cpu+0x17/0x17
[ 2.965462] ret_from_fork+0x19/0x24
[ 2.965462] ================================================================================
[ 2.965484] ================================================================================
[ 2.966503] UBSAN: Undefined behaviour in kernel/events/core.c:11620:2
[ 2.967528] member access within misaligned address (ptrval) for type 'struct perf_event'
[ 2.968554] which requires 8 byte alignment
[ 2.969082] CPU: 0 PID: 13 Comm: cryptomgr_test Not tainted 5.4.0-00037-g52507cfaffe900 #1
[ 2.970093] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 2.971131] Call Trace:
[ 2.971458] dump_stack+0x16/0x18
[ 2.971907] ubsan_epilogue+0xb/0x32
[ 2.972374] ubsan_type_mismatch_common+0xb0/0x104
[ 2.972971] __ubsan_handle_type_mismatch_v1+0x2c/0x32
[ 2.973611] perf_event_exit_task+0x85/0x5eb
[ 2.974150] ? rcu_read_lock_sched_held+0x20/0x40
[ 2.974740] ? fpu__drop+0x1b6/0x1c1
[ 2.975200] do_exit+0x9df/0x189d
[ 2.975462] ? kfree+0x15c/0x169
[ 2.975462] __module_put_and_exit+0xa/0xa
[ 2.975462] cryptomgr_test+0x40/0x40
[ 2.975462] kthread+0x1a8/0x1ad
[ 2.975462] ? crypto_alg_put+0x56/0x56
[ 2.975462] ? kthread_create_worker_on_cpu+0x17/0x17
[ 2.975462] ret_from_fork+0x19/0x24
[ 2.975462] ================================================================================
To reproduce:
# build kernel
cd linux
cp config-5.4.0-00037-g52507cfaffe900 .config
make HOSTCC=gcc-9 CC=gcc-9 ARCH=i386 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
[softirq] 56c21abbe6: will-it-scale.per_process_ops -9.1% regression
by kernel test robot
Greetings,
FYI, we noticed a -9.1% regression of will-it-scale.per_process_ops due to commit:
commit: 56c21abbe67ac004dcea51b34fe43e7542563967 ("[PATCH V7 4/4] softirq: Allow early break the softirq processing loop")
url: https://github.com/0day-ci/linux/commits/qianjun-kernel-gmail-com/Softirq...
base: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git fc4f28bb3daf3265d6bc5f73b497306985bb23ab
in testcase: will-it-scale
on test machine: 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
with following parameters:
nr_task: 50%
mode: process
test: unlink2
cpufreq_governor: performance
ucode: 0x5002f01
test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see if the testcase will scale. It builds both a process and threads based test in order to see any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/process/50%/debian-10.4-x86_64-20200603.cgz/lkp-csl-2ap3/unlink2/will-it-scale/0x5002f01
commit:
c5efd3f36b ("softirq: Rewrite softirq processing loop")
56c21abbe6 ("softirq: Allow early break the softirq processing loop")
c5efd3f36ba40e32 56c21abbe67ac004dcea51b34fe
---------------- ---------------------------
%stddev %change %stddev
\ | \
6755 -9.1% 6140 ± 3% will-it-scale.per_process_ops
648583 -9.1% 589512 ± 3% will-it-scale.workload
3.32 ± 16% +1.8 5.10 ± 11% mpstat.cpu.all.soft%
38.54 ± 2% +4.9% 40.41 boot-time.boot
6507 ± 3% +5.6% 6874 boot-time.idle
118524 ± 6% -15.7% 99945 ± 4% numa-meminfo.node2.SUnreclaim
154132 ± 12% -17.5% 127224 ± 5% numa-meminfo.node2.Slab
13238 ± 14% -27.7% 9571 ± 4% numa-meminfo.node3.Mapped
48.25 +3.6% 50.00 vmstat.cpu.id
50.75 -3.4% 49.00 vmstat.cpu.sy
4195 -39.2% 2549 vmstat.system.cs
38678 -4.2% 37037 proc-vmstat.nr_slab_reclaimable
119432 -8.4% 109385 proc-vmstat.nr_slab_unreclaimable
8384131 -56.1% 3678508 ± 4% proc-vmstat.numa_hit
8290771 -56.8% 3585163 ± 5% proc-vmstat.numa_local
37833465 -56.0% 16665578 ± 5% proc-vmstat.pgalloc_normal
37858309 -55.9% 16689966 ± 5% proc-vmstat.pgfree
63272 ± 2% +4.5% 66097 proc-vmstat.pgreuse
2026017 ± 3% -56.3% 885611 ± 6% numa-numastat.node0.local_node
2044640 ± 2% -55.5% 908866 ± 7% numa-numastat.node0.numa_hit
2062943 -55.6% 916257 ± 13% numa-numastat.node1.local_node
2081620 -55.2% 931844 ± 12% numa-numastat.node1.numa_hit
2050083 -61.1% 797809 ± 14% numa-numastat.node2.local_node
2078050 -60.1% 828876 ± 14% numa-numastat.node2.numa_hit
2118742 ± 2% -54.8% 958560 ± 7% numa-numastat.node3.local_node
2146693 ± 2% -54.3% 981924 ± 5% numa-numastat.node3.numa_hit
1501571 ± 4% -35.5% 969253 ± 15% numa-vmstat.node0.numa_hit
1428280 ± 4% -37.6% 891503 ± 19% numa-vmstat.node0.numa_local
1469916 ± 5% -36.2% 938480 ± 6% numa-vmstat.node1.numa_hit
1359600 ± 6% -36.3% 865689 ± 7% numa-vmstat.node1.numa_local
110316 ± 12% -34.0% 72791 ± 25% numa-vmstat.node1.numa_other
29574 ± 6% -15.6% 24956 ± 4% numa-vmstat.node2.nr_slab_unreclaimable
1532952 ± 5% -45.4% 837688 ± 11% numa-vmstat.node2.numa_hit
1430484 ± 7% -48.8% 732210 ± 15% numa-vmstat.node2.numa_local
3308 ± 11% -28.8% 2354 ± 4% numa-vmstat.node3.nr_mapped
1512439 ± 6% -39.8% 910203 ± 6% numa-vmstat.node3.numa_hit
1427122 ± 5% -44.3% 794761 ± 8% numa-vmstat.node3.numa_local
8843 -25.1% 6619 ± 11% slabinfo.eventpoll_pwq.active_objs
8843 -25.1% 6619 ± 11% slabinfo.eventpoll_pwq.num_objs
8974 -10.2% 8058 ± 4% slabinfo.files_cache.active_objs
8974 -10.2% 8058 ± 4% slabinfo.files_cache.num_objs
211888 -15.2% 179708 slabinfo.filp.active_objs
3320 -13.8% 2861 slabinfo.filp.active_slabs
212563 -13.8% 183174 slabinfo.filp.num_objs
3320 -13.8% 2861 slabinfo.filp.num_slabs
82554 -19.7% 66310 slabinfo.kmalloc-512.active_objs
1296 -19.4% 1045 slabinfo.kmalloc-512.active_slabs
83019 -19.4% 66940 slabinfo.kmalloc-512.num_objs
1296 -19.4% 1045 slabinfo.kmalloc-512.num_slabs
18192 -22.3% 14144 ± 7% slabinfo.pde_opener.active_objs
18192 -22.3% 14144 ± 7% slabinfo.pde_opener.num_objs
171254 -16.6% 142779 ± 2% slabinfo.shmem_inode_cache.active_objs
3728 -12.3% 3269 ± 2% slabinfo.shmem_inode_cache.active_slabs
171549 -12.3% 150412 ± 2% slabinfo.shmem_inode_cache.num_objs
3728 -12.3% 3269 ± 2% slabinfo.shmem_inode_cache.num_slabs
21010 +16.7% 24526 slabinfo.vmap_area.active_objs
21021 +16.7% 24541 slabinfo.vmap_area.num_objs
0.91 ± 22% +0.5 1.45 ± 28% perf-profile.calltrace.cycles-pp.__memcg_kmem_uncharge.drain_obj_stock.refill_obj_stock.kmem_cache_free.rcu_do_batch
0.91 ± 23% +0.5 1.45 ± 28% perf-profile.calltrace.cycles-pp.page_counter_uncharge.__memcg_kmem_uncharge.drain_obj_stock.refill_obj_stock.kmem_cache_free
0.88 ± 22% +0.5 1.43 ± 27% perf-profile.calltrace.cycles-pp.page_counter_cancel.page_counter_uncharge.__memcg_kmem_uncharge.drain_obj_stock.refill_obj_stock
0.78 ± 6% +0.7 1.47 ± 19% perf-profile.calltrace.cycles-pp.page_counter_cancel.page_counter_uncharge.drain_obj_stock.refill_obj_stock.kmem_cache_free
0.78 ± 6% +0.7 1.48 ± 18% perf-profile.calltrace.cycles-pp.page_counter_uncharge.drain_obj_stock.refill_obj_stock.kmem_cache_free.rcu_do_batch
2.58 ± 9% +1.0 3.55 ± 17% perf-profile.calltrace.cycles-pp.drain_obj_stock.refill_obj_stock.kmem_cache_free.rcu_do_batch.rcu_core
2.85 ± 5% +1.2 4.04 ± 16% perf-profile.calltrace.cycles-pp.refill_obj_stock.kmem_cache_free.rcu_do_batch.rcu_core.__softirqentry_text_start
0.00 +3.5 3.54 ± 13% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.native_queued_spin_lock_slowpath._raw_spin_lock.evict
0.00 +3.5 3.55 ± 13% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.native_queued_spin_lock_slowpath._raw_spin_lock.evict.do_unlinkat
0.00 +3.6 3.59 ± 13% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.native_queued_spin_lock_slowpath._raw_spin_lock.inode_sb_list_add
0.00 +3.6 3.59 ± 13% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.native_queued_spin_lock_slowpath._raw_spin_lock.inode_sb_list_add.new_inode
1.06 ± 23% +5.7 6.74 ± 14% perf-profile.calltrace.cycles-pp.kmem_cache_free.rcu_do_batch.rcu_core.__softirqentry_text_start.asm_call_on_stack
1.08 ± 23% +5.7 6.81 ± 14% perf-profile.calltrace.cycles-pp.rcu_do_batch.rcu_core.__softirqentry_text_start.asm_call_on_stack.do_softirq_own_stack
1.08 ± 23% +5.7 6.82 ± 14% perf-profile.calltrace.cycles-pp.rcu_core.__softirqentry_text_start.asm_call_on_stack.do_softirq_own_stack.irq_exit_rcu
1.08 ± 23% +5.7 6.82 ± 14% perf-profile.calltrace.cycles-pp.asm_call_on_stack.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
1.08 ± 23% +5.7 6.82 ± 14% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.asm_call_on_stack.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt
0.00 +6.8 6.82 ± 14% perf-profile.calltrace.cycles-pp.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.native_queued_spin_lock_slowpath
0.00 +6.8 6.83 ± 14% perf-profile.calltrace.cycles-pp.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.native_queued_spin_lock_slowpath._raw_spin_lock
0.60 ± 27% -0.3 0.33 ± 11% perf-profile.children.cycles-pp.kmem_cache_alloc
0.29 ± 21% -0.1 0.16 ± 9% perf-profile.children.cycles-pp.shmem_alloc_inode
0.32 ± 21% -0.1 0.23 ± 9% perf-profile.children.cycles-pp.alloc_empty_file
0.32 ± 21% -0.1 0.23 ± 9% perf-profile.children.cycles-pp.__alloc_file
0.18 ± 10% -0.1 0.10 ± 8% perf-profile.children.cycles-pp.obj_cgroup_charge
0.24 ± 25% -0.1 0.16 ± 11% perf-profile.children.cycles-pp.d_alloc_parallel
0.21 ± 29% -0.1 0.14 ± 13% perf-profile.children.cycles-pp.d_alloc
0.17 ± 36% -0.1 0.10 ± 14% perf-profile.children.cycles-pp.__d_alloc
0.13 ± 14% -0.1 0.07 ± 7% perf-profile.children.cycles-pp.page_counter_try_charge
0.14 ± 12% -0.1 0.08 ± 8% perf-profile.children.cycles-pp.__memcg_kmem_charge
0.16 ± 4% -0.0 0.12 ± 14% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
0.18 ± 26% +0.1 0.32 ± 20% perf-profile.children.cycles-pp.start_kernel
2.91 ± 9% +0.9 3.77 ± 18% perf-profile.children.cycles-pp.drain_obj_stock
3.23 ± 5% +1.1 4.30 ± 17% perf-profile.children.cycles-pp.refill_obj_stock
4.45 ± 21% +2.8 7.22 ± 14% perf-profile.children.cycles-pp.rcu_do_batch
4.45 ± 21% +2.8 7.22 ± 14% perf-profile.children.cycles-pp.rcu_core
4.45 ± 21% +2.8 7.23 ± 14% perf-profile.children.cycles-pp.__softirqentry_text_start
4.36 ± 21% +2.8 7.16 ± 14% perf-profile.children.cycles-pp.kmem_cache_free
1.58 ± 19% +5.6 7.21 ± 14% perf-profile.children.cycles-pp.irq_exit_rcu
1.58 ± 19% +5.6 7.20 ± 14% perf-profile.children.cycles-pp.do_softirq_own_stack
1.94 ± 17% +5.6 7.58 ± 13% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
1.92 ± 17% +5.6 7.55 ± 13% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
1.91 ± 17% +5.6 7.55 ± 13% perf-profile.children.cycles-pp.asm_call_on_stack
0.20 ± 77% -0.1 0.08 ± 10% perf-profile.self.cycles-pp.kmem_cache_alloc
0.11 ± 14% -0.1 0.06 ± 11% perf-profile.self.cycles-pp.page_counter_try_charge
0.11 ± 7% -0.0 0.08 ± 19% perf-profile.self.cycles-pp.__mod_memcg_state
0.34 ± 52% +0.3 0.66 ± 22% perf-profile.self.cycles-pp.drain_obj_stock
1.85 ± 8% +0.8 2.61 ± 18% perf-profile.self.cycles-pp.page_counter_cancel
1.00 ±110% +1.7 2.73 ± 48% perf-profile.self.cycles-pp.kmem_cache_free
7.77e+09 -6.4% 7.269e+09 perf-stat.i.branch-instructions
27098166 -8.5% 24797110 perf-stat.i.branch-misses
1.282e+08 ± 7% -14.5% 1.096e+08 ± 5% perf-stat.i.cache-misses
2.269e+08 ± 4% -11.2% 2.015e+08 ± 4% perf-stat.i.cache-references
4176 -39.8% 2514 perf-stat.i.context-switches
9.13 +4.6% 9.54 ± 2% perf-stat.i.cpi
3.066e+11 -2.5% 2.989e+11 perf-stat.i.cpu-cycles
438.98 -55.5% 195.51 perf-stat.i.cpu-migrations
1192843 ± 7% -21.3% 938565 ± 10% perf-stat.i.dTLB-load-misses
8.892e+09 -6.9% 8.277e+09 perf-stat.i.dTLB-loads
0.01 ± 2% -0.0 0.00 perf-stat.i.dTLB-store-miss-rate%
151589 ± 2% -37.1% 95330 ± 3% perf-stat.i.dTLB-store-misses
2.138e+09 -9.0% 1.946e+09 ± 3% perf-stat.i.dTLB-stores
15422532 ± 2% -7.6% 14254140 ± 3% perf-stat.i.iTLB-load-misses
4135008 ± 2% -11.1% 3674504 perf-stat.i.iTLB-loads
3.356e+10 -6.7% 3.13e+10 perf-stat.i.instructions
0.11 ± 2% -4.5% 0.11 ± 2% perf-stat.i.ipc
1.60 -2.5% 1.56 perf-stat.i.metric.GHz
0.83 ± 2% -35.6% 0.54 ± 2% perf-stat.i.metric.K/sec
99.36 -7.0% 92.37 perf-stat.i.metric.M/sec
89.17 +3.7 92.92 perf-stat.i.node-load-miss-rate%
3005728 ± 3% -44.0% 1684284 ± 6% perf-stat.i.node-loads
83.60 +10.4 94.04 perf-stat.i.node-store-miss-rate%
15867919 ± 3% -7.9% 14616711 ± 2% perf-stat.i.node-store-misses
3099149 ± 2% -70.5% 915605 ± 8% perf-stat.i.node-stores
9.14 +4.5% 9.55 ± 2% perf-stat.overall.cpi
0.01 ± 3% -0.0 0.00 perf-stat.overall.dTLB-store-miss-rate%
0.11 -4.3% 0.10 ± 2% perf-stat.overall.ipc
89.11 +3.8 92.91 perf-stat.overall.node-load-miss-rate%
83.54 +10.6 94.09 perf-stat.overall.node-store-miss-rate%
15568616 +2.6% 15978455 perf-stat.overall.path-length
7.743e+09 -6.4% 7.245e+09 perf-stat.ps.branch-instructions
26987276 -8.5% 24693465 perf-stat.ps.branch-misses
1.278e+08 ± 7% -14.5% 1.093e+08 ± 5% perf-stat.ps.cache-misses
2.261e+08 ± 4% -11.2% 2.008e+08 ± 4% perf-stat.ps.cache-references
4146 -39.8% 2495 perf-stat.ps.context-switches
3.056e+11 -2.5% 2.979e+11 perf-stat.ps.cpu-cycles
435.41 -55.5% 193.75 perf-stat.ps.cpu-migrations
1188975 ± 7% -21.2% 937304 ± 10% perf-stat.ps.dTLB-load-misses
8.86e+09 -6.9% 8.249e+09 perf-stat.ps.dTLB-loads
151390 ± 2% -37.1% 95183 ± 4% perf-stat.ps.dTLB-store-misses
2.131e+09 -9.0% 1.939e+09 ± 3% perf-stat.ps.dTLB-stores
15372733 ± 2% -7.6% 14205375 ± 3% perf-stat.ps.iTLB-load-misses
4119220 ± 2% -11.1% 3661007 perf-stat.ps.iTLB-loads
3.344e+10 -6.7% 3.12e+10 perf-stat.ps.instructions
3008402 ± 3% -44.1% 1681041 ± 6% perf-stat.ps.node-loads
15811834 ± 3% -7.9% 14568851 ± 2% perf-stat.ps.node-store-misses
3114607 ± 2% -70.6% 916394 ± 8% perf-stat.ps.node-stores
1.01e+13 -6.8% 9.414e+12 perf-stat.total.instructions
160152 ± 49% -98.5% 2403 ±173% sched_debug.cfs_rq:/.MIN_vruntime.avg
3681748 ± 19% -89.9% 371526 ±173% sched_debug.cfs_rq:/.MIN_vruntime.max
689554 ± 27% -95.7% 29687 ±173% sched_debug.cfs_rq:/.MIN_vruntime.stddev
76842 +56.2% 119995 ± 10% sched_debug.cfs_rq:/.exec_clock.max
71761 -91.6% 6024 ± 93% sched_debug.cfs_rq:/.exec_clock.min
824.90 ± 17% +3417.5% 29015 ± 24% sched_debug.cfs_rq:/.exec_clock.stddev
26728 ± 37% -71.8% 7545 ± 18% sched_debug.cfs_rq:/.load.avg
537605 ± 39% -73.0% 145015 ±154% sched_debug.cfs_rq:/.load.max
96450 ± 28% -85.1% 14357 ±115% sched_debug.cfs_rq:/.load.stddev
23.13 ± 10% -54.3% 10.56 ± 13% sched_debug.cfs_rq:/.load_avg.avg
44.84 ± 9% -46.1% 24.17 ± 24% sched_debug.cfs_rq:/.load_avg.stddev
160152 ± 49% -98.5% 2403 ±173% sched_debug.cfs_rq:/.max_vruntime.avg
3681748 ± 19% -89.9% 371526 ±173% sched_debug.cfs_rq:/.max_vruntime.max
689554 ± 27% -95.7% 29687 ±173% sched_debug.cfs_rq:/.max_vruntime.stddev
7548773 +63.1% 12309922 ± 10% sched_debug.cfs_rq:/.min_vruntime.max
6982694 -90.8% 642714 ± 89% sched_debug.cfs_rq:/.min_vruntime.min
82109 ± 16% +3517.1% 2970025 ± 24% sched_debug.cfs_rq:/.min_vruntime.stddev
0.46 ± 3% +35.3% 0.62 ± 4% sched_debug.cfs_rq:/.nr_running.avg
1.46 ± 9% -25.7% 1.08 ± 13% sched_debug.cfs_rq:/.nr_running.max
0.49 ± 2% -19.7% 0.39 ± 5% sched_debug.cfs_rq:/.nr_running.stddev
318.90 ± 19% -95.2% 15.34 ± 17% sched_debug.cfs_rq:/.nr_spread_over.avg
367.83 ± 17% -90.7% 34.06 ± 6% sched_debug.cfs_rq:/.nr_spread_over.max
273.04 ± 22% -99.2% 2.12 ± 67% sched_debug.cfs_rq:/.nr_spread_over.min
18.52 ± 4% -62.2% 6.99 ± 4% sched_debug.cfs_rq:/.nr_spread_over.stddev
476.90 +35.7% 647.33 ± 3% sched_debug.cfs_rq:/.runnable_avg.avg
477.76 -15.0% 406.02 ± 5% sched_debug.cfs_rq:/.runnable_avg.stddev
33460 ±280% +14366.1% 4840441 ± 23% sched_debug.cfs_rq:/.spread0.avg
278930 ± 43% +3632.2% 10410349 ± 18% sched_debug.cfs_rq:/.spread0.max
-287806 +336.7% -1256871 sched_debug.cfs_rq:/.spread0.min
82151 ± 17% +3515.3% 2970042 ± 24% sched_debug.cfs_rq:/.spread0.stddev
459.01 +40.6% 645.33 ± 3% sched_debug.cfs_rq:/.util_avg.avg
460.21 -12.1% 404.72 ± 5% sched_debug.cfs_rq:/.util_avg.stddev
192.14 +36.1% 261.56 ± 5% sched_debug.cfs_rq:/.util_est_enqueued.avg
211.12 -14.6% 180.28 ± 4% sched_debug.cfs_rq:/.util_est_enqueued.stddev
724.27 ± 4% +371.0% 3411 ± 13% sched_debug.cpu.clock_task.stddev
1.46 ± 9% -25.7% 1.08 ± 13% sched_debug.cpu.nr_running.max
4979 -26.8% 3642 ± 4% sched_debug.cpu.nr_switches.avg
3162 ± 4% -52.6% 1498 ± 12% sched_debug.cpu.nr_switches.min
2160 ± 4% +51.6% 3275 ± 17% sched_debug.cpu.nr_switches.stddev
3319 -41.7% 1936 ± 8% sched_debug.cpu.sched_count.avg
2269 -72.8% 617.17 ± 12% sched_debug.cpu.sched_count.min
1679 ± 6% +64.9% 2767 ± 21% sched_debug.cpu.sched_count.stddev
1109 -36.5% 704.97 ± 8% sched_debug.cpu.sched_goidle.avg
5848 ± 10% +122.7% 13026 ± 35% sched_debug.cpu.sched_goidle.max
596.08 ± 2% -95.1% 28.95 ± 57% sched_debug.cpu.sched_goidle.min
837.36 ± 6% +64.6% 1378 ± 21% sched_debug.cpu.sched_goidle.stddev
1477 -38.5% 908.81 ± 8% sched_debug.cpu.ttwu_count.avg
5591 ± 19% +156.9% 14361 ± 32% sched_debug.cpu.ttwu_count.max
977.79 -76.5% 229.52 ± 13% sched_debug.cpu.ttwu_count.min
801.84 ± 5% +82.3% 1461 ± 19% sched_debug.cpu.ttwu_count.stddev
1099 -50.4% 545.27 ± 9% sched_debug.cpu.ttwu_local.avg
1816 ± 3% +70.5% 3096 ± 11% sched_debug.cpu.ttwu_local.max
906.71 -75.8% 219.62 ± 12% sched_debug.cpu.ttwu_local.min
127.09 ± 4% +284.7% 488.96 ± 9% sched_debug.cpu.ttwu_local.stddev
43524 -48.7% 22324 ± 24% softirqs.CPU0.RCU
23136 +60.4% 37100 ± 8% softirqs.CPU0.SCHED
46628 +61.2% 75158 ± 22% softirqs.CPU100.RCU
45964 +72.7% 79378 ± 19% softirqs.CPU105.RCU
46416 +80.1% 83607 ± 25% softirqs.CPU106.RCU
46277 +88.9% 87437 ± 27% softirqs.CPU107.RCU
46217 +59.7% 73815 ± 24% softirqs.CPU108.RCU
46178 +75.1% 80839 ± 22% softirqs.CPU109.RCU
46619 +59.8% 74475 ± 15% softirqs.CPU110.RCU
46742 +75.2% 81873 ± 25% softirqs.CPU111.RCU
19962 ± 3% +17.6% 23468 ± 9% softirqs.CPU118.SCHED
19808 ± 3% +22.1% 24179 ± 12% softirqs.CPU119.SCHED
46640 +58.2% 73798 ± 31% softirqs.CPU120.RCU
19967 ± 2% +31.0% 26165 ± 15% softirqs.CPU121.SCHED
5284 ±123% -92.5% 397.50 ± 64% softirqs.CPU13.NET_RX
46584 +30.1% 60592 ± 15% softirqs.CPU133.RCU
46084 +60.5% 73956 ± 15% softirqs.CPU134.RCU
46262 +81.6% 83990 ± 23% softirqs.CPU136.RCU
46238 +65.8% 76651 ± 32% softirqs.CPU137.RCU
46338 +82.3% 84453 ± 18% softirqs.CPU138.RCU
46691 +91.9% 89596 ± 8% softirqs.CPU139.RCU
19859 ± 3% -25.2% 14846 ± 14% softirqs.CPU139.SCHED
46314 +31.9% 61070 ± 19% softirqs.CPU14.RCU
46095 +77.4% 81783 ± 16% softirqs.CPU140.RCU
46195 +48.9% 68768 ± 8% softirqs.CPU141.RCU
45915 +64.7% 75601 ± 24% softirqs.CPU142.RCU
46769 +53.8% 71950 ± 15% softirqs.CPU144.RCU
19920 ± 3% +30.9% 26067 ± 10% softirqs.CPU147.SCHED
19931 ± 2% +25.8% 25068 ± 11% softirqs.CPU150.SCHED
20194 ± 2% +16.3% 23479 ± 10% softirqs.CPU154.SCHED
19988 ± 2% +19.5% 23881 ± 9% softirqs.CPU156.SCHED
46965 +40.1% 65803 ± 25% softirqs.CPU16.RCU
46437 +34.6% 62487 ± 15% softirqs.CPU160.RCU
46825 +25.2% 58640 ± 12% softirqs.CPU161.RCU
20180 ± 3% +29.6% 26159 ± 16% softirqs.CPU162.SCHED
20142 ± 3% +33.3% 26851 ± 20% softirqs.CPU164.SCHED
46307 +33.2% 61681 ± 14% softirqs.CPU168.RCU
19982 ± 2% +34.5% 26871 ± 6% softirqs.CPU169.SCHED
20103 ± 2% +28.7% 25870 ± 16% softirqs.CPU170.SCHED
20275 ± 2% +24.2% 25181 ± 4% softirqs.CPU171.SCHED
20060 ± 3% +31.1% 26300 ± 10% softirqs.CPU176.SCHED
20049 ± 2% +26.3% 25326 ± 12% softirqs.CPU178.SCHED
46197 +42.5% 65832 ± 16% softirqs.CPU179.RCU
20152 ± 2% +49.5% 30130 ± 19% softirqs.CPU180.SCHED
20005 ± 3% +47.3% 29476 ± 18% softirqs.CPU182.SCHED
20065 ± 2% +19.3% 23931 ± 5% softirqs.CPU184.SCHED
20132 ± 2% +36.2% 27411 ± 24% softirqs.CPU186.SCHED
20106 ± 2% +32.5% 26640 ± 11% softirqs.CPU187.SCHED
20208 ± 3% +23.9% 25047 ± 20% softirqs.CPU188.SCHED
46604 +57.9% 73587 ± 21% softirqs.CPU19.RCU
45613 +89.5% 86446 ± 12% softirqs.CPU191.RCU
20084 ± 2% -22.5% 15565 ± 18% softirqs.CPU191.SCHED
46997 +37.2% 64467 ± 17% softirqs.CPU2.RCU
48025 ± 5% +67.9% 80646 ± 24% softirqs.CPU20.RCU
46556 +55.9% 72558 ± 19% softirqs.CPU21.RCU
46736 +67.6% 78318 ± 9% softirqs.CPU22.RCU
46587 +70.6% 79488 ± 15% softirqs.CPU23.RCU
46733 +85.3% 86598 ± 15% softirqs.CPU25.RCU
46749 +90.7% 89142 ± 20% softirqs.CPU28.RCU
46931 +71.3% 80401 ± 18% softirqs.CPU30.RCU
46684 +63.4% 76302 ± 21% softirqs.CPU31.RCU
47299 +71.3% 81014 ± 33% softirqs.CPU32.RCU
47053 +59.0% 74829 ± 12% softirqs.CPU37.RCU
19706 ± 2% +35.7% 26736 ± 10% softirqs.CPU43.SCHED
19748 ± 2% +21.2% 23937 ± 10% softirqs.CPU44.SCHED
47043 +42.6% 67077 ± 7% softirqs.CPU45.RCU
47237 +34.3% 63459 ± 17% softirqs.CPU48.RCU
46703 +65.3% 77189 ± 20% softirqs.CPU49.RCU
46930 +79.6% 84283 ± 10% softirqs.CPU51.RCU
19796 ± 3% -17.3% 16381 ± 7% softirqs.CPU51.SCHED
46865 +85.6% 86976 ± 27% softirqs.CPU52.RCU
47225 +76.6% 83421 ± 10% softirqs.CPU54.RCU
46585 +79.2% 83489 ± 22% softirqs.CPU55.RCU
46900 +73.8% 81535 ± 24% softirqs.CPU57.RCU
46909 +67.5% 78557 ± 12% softirqs.CPU58.RCU
46686 +55.6% 72626 ± 27% softirqs.CPU59.RCU
46598 +73.5% 80831 ± 13% softirqs.CPU60.RCU
46447 +67.0% 77581 ± 15% softirqs.CPU61.RCU
46565 +95.4% 90998 ± 13% softirqs.CPU62.RCU
46849 +55.3% 72757 ± 12% softirqs.CPU64.RCU
47140 +63.2% 76912 ± 10% softirqs.CPU65.RCU
46994 +98.4% 93224 ± 21% softirqs.CPU66.RCU
47098 +93.8% 91289 ± 21% softirqs.CPU68.RCU
19884 ± 2% -25.9% 14735 ± 36% softirqs.CPU68.SCHED
47227 +47.3% 69543 ± 24% softirqs.CPU70.RCU
47037 +55.1% 72953 ± 12% softirqs.CPU72.RCU
46823 +92.5% 90158 ± 6% softirqs.CPU73.RCU
19880 ± 3% -24.9% 14925 ± 10% softirqs.CPU73.SCHED
46779 +86.3% 87128 ± 16% softirqs.CPU74.RCU
46511 +78.9% 83214 ± 8% softirqs.CPU75.RCU
20115 ± 2% -16.8% 16738 ± 13% softirqs.CPU75.SCHED
47172 +86.1% 87766 ± 23% softirqs.CPU76.RCU
46545 +86.2% 86675 ± 21% softirqs.CPU77.RCU
46218 +83.2% 84657 ± 18% softirqs.CPU78.RCU
46684 +89.3% 88396 ± 13% softirqs.CPU80.RCU
19891 ± 2% -21.4% 15643 ± 22% softirqs.CPU80.SCHED
46495 +78.7% 83109 ± 16% softirqs.CPU82.RCU
46681 +47.0% 68611 ± 16% softirqs.CPU83.RCU
46863 +113.7% 100165 ± 21% softirqs.CPU84.RCU
46501 +82.7% 84976 ± 19% softirqs.CPU85.RCU
46438 +110.3% 97663 ± 20% softirqs.CPU86.RCU
20145 ± 2% -34.2% 13260 ± 45% softirqs.CPU86.SCHED
46477 +72.9% 80344 ± 7% softirqs.CPU88.RCU
46584 +88.7% 87890 ± 30% softirqs.CPU89.RCU
47011 +91.4% 89967 ± 27% softirqs.CPU90.RCU
46533 +90.2% 88483 ± 16% softirqs.CPU91.RCU
19809 ± 2% -22.4% 15380 ± 24% softirqs.CPU91.SCHED
46519 +80.5% 83983 ± 25% softirqs.CPU92.RCU
20033 ± 4% +15.8% 23197 ± 9% softirqs.CPU95.SCHED
46981 +141.8% 113612 ± 6% softirqs.CPU96.RCU
19901 ± 3% -60.5% 7864 ± 26% softirqs.CPU96.SCHED
46311 +52.8% 70742 ± 17% softirqs.CPU98.RCU
8941907 +45.2% 12983886 softirqs.RCU
74812 -12.4% 65558 softirqs.TIMER
9395 ±119% -93.2% 635.00 ± 86% interrupts.34:PCI-MSI.524292-edge.eth0-TxRx-3
6611 ± 9% -45.9% 3576 ± 34% interrupts.CPU0.NMI:Non-maskable_interrupts
6611 ± 9% -45.9% 3576 ± 34% interrupts.CPU0.PMI:Performance_monitoring_interrupts
512.25 ± 3% -72.8% 139.25 ± 34% interrupts.CPU0.RES:Rescheduling_interrupts
241.75 ± 14% -54.1% 111.00 ± 32% interrupts.CPU1.RES:Rescheduling_interrupts
228.50 ± 10% -58.8% 94.25 ± 25% interrupts.CPU10.RES:Rescheduling_interrupts
6502 ± 10% -43.7% 3660 ± 37% interrupts.CPU101.NMI:Non-maskable_interrupts
6502 ± 10% -43.7% 3660 ± 37% interrupts.CPU101.PMI:Performance_monitoring_interrupts
213.00 ± 3% -45.8% 115.50 ± 36% interrupts.CPU101.RES:Rescheduling_interrupts
230.75 ± 9% -46.9% 122.50 ± 29% interrupts.CPU104.RES:Rescheduling_interrupts
215.25 ± 4% -46.1% 116.00 ± 21% interrupts.CPU105.RES:Rescheduling_interrupts
210.00 ± 3% -36.3% 133.75 ± 18% interrupts.CPU106.RES:Rescheduling_interrupts
230.50 ± 15% -32.6% 155.25 ± 26% interrupts.CPU107.RES:Rescheduling_interrupts
214.25 -45.3% 117.25 ± 21% interrupts.CPU108.RES:Rescheduling_interrupts
908.25 ± 10% -11.5% 803.50 interrupts.CPU109.CAL:Function_call_interrupts
254.50 ± 17% -50.4% 126.25 ± 11% interrupts.CPU109.RES:Rescheduling_interrupts
212.50 ± 2% -62.4% 80.00 ± 38% interrupts.CPU11.RES:Rescheduling_interrupts
228.00 ± 14% -37.5% 142.50 ± 21% interrupts.CPU110.RES:Rescheduling_interrupts
210.75 ± 2% -44.8% 116.25 ± 17% interrupts.CPU112.RES:Rescheduling_interrupts
206.00 ± 2% -43.4% 116.50 ± 37% interrupts.CPU113.RES:Rescheduling_interrupts
6562 ± 11% -39.8% 3949 ± 30% interrupts.CPU114.NMI:Non-maskable_interrupts
6562 ± 11% -39.8% 3949 ± 30% interrupts.CPU114.PMI:Performance_monitoring_interrupts
214.00 ± 2% -43.1% 121.75 ± 23% interrupts.CPU114.RES:Rescheduling_interrupts
215.75 ± 3% -55.9% 95.25 ± 35% interrupts.CPU115.RES:Rescheduling_interrupts
6590 ± 9% -39.6% 3977 ± 30% interrupts.CPU116.NMI:Non-maskable_interrupts
6590 ± 9% -39.6% 3977 ± 30% interrupts.CPU116.PMI:Performance_monitoring_interrupts
219.00 -58.1% 91.75 ± 33% interrupts.CPU116.RES:Rescheduling_interrupts
6427 ± 11% -46.0% 3468 ± 37% interrupts.CPU117.NMI:Non-maskable_interrupts
6427 ± 11% -46.0% 3468 ± 37% interrupts.CPU117.PMI:Performance_monitoring_interrupts
239.50 ± 17% -54.0% 110.25 ± 26% interrupts.CPU117.RES:Rescheduling_interrupts
222.50 ± 11% -58.2% 93.00 ± 20% interrupts.CPU118.RES:Rescheduling_interrupts
236.75 ± 14% -61.1% 92.00 ± 26% interrupts.CPU119.RES:Rescheduling_interrupts
223.50 ± 12% -44.1% 125.00 ± 37% interrupts.CPU120.RES:Rescheduling_interrupts
215.75 ± 2% -59.7% 87.00 ± 46% interrupts.CPU121.RES:Rescheduling_interrupts
233.25 ± 9% -50.7% 115.00 ± 54% interrupts.CPU123.RES:Rescheduling_interrupts
232.50 ± 12% -67.4% 75.75 ± 49% interrupts.CPU124.RES:Rescheduling_interrupts
226.00 ± 5% -59.2% 92.25 ± 13% interrupts.CPU126.RES:Rescheduling_interrupts
9395 ±119% -93.2% 635.00 ± 86% interrupts.CPU13.34:PCI-MSI.524292-edge.eth0-TxRx-3
209.00 -52.3% 99.75 ± 30% interrupts.CPU13.RES:Rescheduling_interrupts
5697 ± 22% +45.1% 8264 ± 9% interrupts.CPU130.NMI:Non-maskable_interrupts
5697 ± 22% +45.1% 8264 ± 9% interrupts.CPU130.PMI:Performance_monitoring_interrupts
224.25 ± 2% -49.2% 114.00 ± 55% interrupts.CPU130.RES:Rescheduling_interrupts
218.25 ± 14% -54.1% 100.25 ± 56% interrupts.CPU131.RES:Rescheduling_interrupts
206.50 ± 3% -60.3% 82.00 ± 35% interrupts.CPU132.RES:Rescheduling_interrupts
5766 ± 22% +40.6% 8109 ± 9% interrupts.CPU133.NMI:Non-maskable_interrupts
5766 ± 22% +40.6% 8109 ± 9% interrupts.CPU133.PMI:Performance_monitoring_interrupts
223.75 ± 15% -60.1% 89.25 ± 10% interrupts.CPU133.RES:Rescheduling_interrupts
214.25 ± 4% -44.8% 118.25 ± 19% interrupts.CPU134.RES:Rescheduling_interrupts
206.75 ± 2% -40.6% 122.75 ± 32% interrupts.CPU136.RES:Rescheduling_interrupts
231.00 ± 16% -43.3% 131.00 ± 35% interrupts.CPU138.RES:Rescheduling_interrupts
221.00 ± 10% -35.4% 142.75 ± 14% interrupts.CPU139.RES:Rescheduling_interrupts
210.50 -39.1% 128.25 ± 24% interrupts.CPU140.RES:Rescheduling_interrupts
205.50 ± 5% -48.7% 105.50 ± 14% interrupts.CPU141.RES:Rescheduling_interrupts
6581 ± 10% +27.4% 8384 ± 7% interrupts.CPU142.NMI:Non-maskable_interrupts
6581 ± 10% +27.4% 8384 ± 7% interrupts.CPU142.PMI:Performance_monitoring_interrupts
202.00 ± 4% -35.6% 130.00 ± 33% interrupts.CPU142.RES:Rescheduling_interrupts
215.25 ± 12% -50.6% 106.25 ± 39% interrupts.CPU143.RES:Rescheduling_interrupts
214.00 ± 3% -45.2% 117.25 ± 15% interrupts.CPU144.RES:Rescheduling_interrupts
218.50 ± 11% -57.6% 92.75 ± 41% interrupts.CPU145.RES:Rescheduling_interrupts
6495 ± 11% +34.4% 8729 interrupts.CPU146.NMI:Non-maskable_interrupts
6495 ± 11% +34.4% 8729 interrupts.CPU146.PMI:Performance_monitoring_interrupts
206.00 ± 5% -53.6% 95.50 ± 31% interrupts.CPU146.RES:Rescheduling_interrupts
875.75 ± 6% -7.3% 811.50 interrupts.CPU147.CAL:Function_call_interrupts
222.50 ± 13% -64.3% 79.50 ± 29% interrupts.CPU147.RES:Rescheduling_interrupts
206.00 ± 5% -67.5% 67.00 ± 52% interrupts.CPU148.RES:Rescheduling_interrupts
226.75 ± 13% -67.8% 73.00 ± 29% interrupts.CPU149.RES:Rescheduling_interrupts
225.25 ± 13% -65.9% 76.75 ± 9% interrupts.CPU150.RES:Rescheduling_interrupts
215.75 ± 12% -67.0% 71.25 ± 37% interrupts.CPU151.RES:Rescheduling_interrupts
208.50 ± 3% -53.1% 97.75 ± 57% interrupts.CPU152.RES:Rescheduling_interrupts
199.00 ± 4% -63.9% 71.75 ± 48% interrupts.CPU153.RES:Rescheduling_interrupts
230.25 ± 9% -59.5% 93.25 ± 27% interrupts.CPU154.RES:Rescheduling_interrupts
159.00 ±164% -99.1% 1.50 ± 33% interrupts.CPU154.TLB:TLB_shootdowns
208.50 ± 14% -59.2% 85.00 ± 33% interrupts.CPU155.RES:Rescheduling_interrupts
6628 ± 9% -51.5% 3214 ± 48% interrupts.CPU156.NMI:Non-maskable_interrupts
6628 ± 9% -51.5% 3214 ± 48% interrupts.CPU156.PMI:Performance_monitoring_interrupts
213.50 ± 23% -65.0% 74.75 ± 36% interrupts.CPU156.RES:Rescheduling_interrupts
192.75 ± 5% -59.1% 78.75 ± 31% interrupts.CPU157.RES:Rescheduling_interrupts
195.50 ± 13% -68.3% 62.00 ± 29% interrupts.CPU158.RES:Rescheduling_interrupts
187.50 ± 12% -60.4% 74.25 ± 54% interrupts.CPU159.RES:Rescheduling_interrupts
214.50 ± 2% -45.7% 116.50 ± 24% interrupts.CPU16.RES:Rescheduling_interrupts
168.25 ± 15% -60.2% 67.00 ± 15% interrupts.CPU160.RES:Rescheduling_interrupts
142.75 ± 11% -54.5% 65.00 ± 29% interrupts.CPU161.RES:Rescheduling_interrupts
160.00 ± 26% -67.2% 52.50 ± 49% interrupts.CPU162.RES:Rescheduling_interrupts
124.00 ± 9% -46.8% 66.00 ± 39% interrupts.CPU163.RES:Rescheduling_interrupts
121.75 ± 18% -66.1% 41.25 ± 41% interrupts.CPU164.RES:Rescheduling_interrupts
109.25 ± 21% -59.7% 44.00 ± 37% interrupts.CPU165.RES:Rescheduling_interrupts
103.75 ± 19% -66.3% 35.00 ± 19% interrupts.CPU166.RES:Rescheduling_interrupts
80.25 ± 23% -69.2% 24.75 ± 29% interrupts.CPU168.RES:Rescheduling_interrupts
226.50 ± 9% -48.6% 116.50 ± 37% interrupts.CPU17.RES:Rescheduling_interrupts
81.50 ± 21% -84.0% 13.00 ± 62% interrupts.CPU173.RES:Rescheduling_interrupts
81.50 ± 34% -83.7% 13.25 ± 34% interrupts.CPU174.RES:Rescheduling_interrupts
894.50 ± 5% -9.2% 812.50 interrupts.CPU179.CAL:Function_call_interrupts
96.75 ± 33% -88.6% 11.00 ± 28% interrupts.CPU179.RES:Rescheduling_interrupts
206.75 ± 5% -47.8% 108.00 ± 38% interrupts.CPU18.RES:Rescheduling_interrupts
78.25 ± 26% -87.9% 9.50 ± 71% interrupts.CPU185.RES:Rescheduling_interrupts
901.50 ± 4% -7.8% 831.25 ± 3% interrupts.CPU187.CAL:Function_call_interrupts
79.75 ± 38% -85.3% 11.75 ± 54% interrupts.CPU187.RES:Rescheduling_interrupts
212.25 ± 4% -38.5% 130.50 ± 34% interrupts.CPU19.RES:Rescheduling_interrupts
86.25 ± 34% -90.4% 8.25 ± 31% interrupts.CPU190.RES:Rescheduling_interrupts
1058 ± 19% -22.5% 820.25 ± 2% interrupts.CPU2.CAL:Function_call_interrupts
214.00 ± 3% -41.8% 124.50 ± 17% interrupts.CPU2.RES:Rescheduling_interrupts
206.50 ± 5% -32.4% 139.50 ± 27% interrupts.CPU20.RES:Rescheduling_interrupts
205.00 ± 2% -40.9% 121.25 ± 31% interrupts.CPU21.RES:Rescheduling_interrupts
237.75 ± 17% -44.6% 131.75 ± 23% interrupts.CPU22.RES:Rescheduling_interrupts
208.25 ± 5% -36.1% 133.00 ± 29% interrupts.CPU23.RES:Rescheduling_interrupts
209.00 ± 4% -50.8% 102.75 ± 45% interrupts.CPU24.RES:Rescheduling_interrupts
212.00 -28.4% 151.75 ± 16% interrupts.CPU25.RES:Rescheduling_interrupts
209.25 ± 4% -44.0% 117.25 ± 48% interrupts.CPU26.RES:Rescheduling_interrupts
6511 ± 11% -29.5% 4593 ± 27% interrupts.CPU27.NMI:Non-maskable_interrupts
6511 ± 11% -29.5% 4593 ± 27% interrupts.CPU27.PMI:Performance_monitoring_interrupts
221.50 ± 13% -52.6% 105.00 ± 59% interrupts.CPU27.RES:Rescheduling_interrupts
200.75 ± 3% -26.0% 148.50 ± 24% interrupts.CPU28.RES:Rescheduling_interrupts
224.25 ± 13% -51.8% 108.00 ± 65% interrupts.CPU29.RES:Rescheduling_interrupts
215.50 -53.4% 100.50 ± 57% interrupts.CPU3.RES:Rescheduling_interrupts
205.00 ± 2% -33.4% 136.50 ± 20% interrupts.CPU30.RES:Rescheduling_interrupts
219.00 ± 14% -41.8% 127.50 ± 26% interrupts.CPU31.RES:Rescheduling_interrupts
210.50 ± 5% -34.1% 138.75 ± 36% interrupts.CPU33.RES:Rescheduling_interrupts
6547 ± 11% -38.3% 4039 ± 30% interrupts.CPU34.NMI:Non-maskable_interrupts
6547 ± 11% -38.3% 4039 ± 30% interrupts.CPU34.PMI:Performance_monitoring_interrupts
214.75 ± 18% -51.9% 103.25 ± 62% interrupts.CPU34.RES:Rescheduling_interrupts
210.00 ± 2% -43.6% 118.50 ± 51% interrupts.CPU35.RES:Rescheduling_interrupts
214.75 ± 2% -25.8% 159.25 ± 8% interrupts.CPU36.RES:Rescheduling_interrupts
6451 ± 11% -44.1% 3608 ± 44% interrupts.CPU37.NMI:Non-maskable_interrupts
6451 ± 11% -44.1% 3608 ± 44% interrupts.CPU37.PMI:Performance_monitoring_interrupts
205.50 ± 4% -35.9% 131.75 ± 18% interrupts.CPU37.RES:Rescheduling_interrupts
165.50 ±162% -99.1% 1.50 ±110% interrupts.CPU38.TLB:TLB_shootdowns
228.00 ± 15% -48.4% 117.75 ± 27% interrupts.CPU41.RES:Rescheduling_interrupts
234.75 ± 6% -52.1% 112.50 ± 31% interrupts.CPU42.RES:Rescheduling_interrupts
205.25 ± 6% -66.4% 69.00 ± 24% interrupts.CPU43.RES:Rescheduling_interrupts
207.75 ± 3% -59.2% 84.75 ± 30% interrupts.CPU44.RES:Rescheduling_interrupts
227.25 ± 10% -53.4% 106.00 ± 4% interrupts.CPU45.RES:Rescheduling_interrupts
4912 ± 34% -45.5% 2678 ± 11% interrupts.CPU46.NMI:Non-maskable_interrupts
4912 ± 34% -45.5% 2678 ± 11% interrupts.CPU46.PMI:Performance_monitoring_interrupts
211.50 -56.7% 91.50 ± 24% interrupts.CPU46.RES:Rescheduling_interrupts
5753 ± 29% -43.6% 3242 ± 20% interrupts.CPU47.NMI:Non-maskable_interrupts
5753 ± 29% -43.6% 3242 ± 20% interrupts.CPU47.PMI:Performance_monitoring_interrupts
206.00 ± 2% -39.1% 125.50 ± 17% interrupts.CPU47.RES:Rescheduling_interrupts
6443 ± 12% -49.8% 3234 ± 20% interrupts.CPU48.NMI:Non-maskable_interrupts
6443 ± 12% -49.8% 3234 ± 20% interrupts.CPU48.PMI:Performance_monitoring_interrupts
226.25 ± 12% -57.5% 96.25 ± 15% interrupts.CPU48.RES:Rescheduling_interrupts
994.00 ± 16% -16.9% 826.25 ± 4% interrupts.CPU49.CAL:Function_call_interrupts
267.00 ± 21% -53.7% 123.75 ± 26% interrupts.CPU49.RES:Rescheduling_interrupts
847.25 ± 6% +16.4% 986.50 ± 12% interrupts.CPU5.CAL:Function_call_interrupts
226.00 ± 12% -47.1% 119.50 ± 36% interrupts.CPU5.RES:Rescheduling_interrupts
6641 ± 10% -62.3% 2505 ± 24% interrupts.CPU50.NMI:Non-maskable_interrupts
6641 ± 10% -62.3% 2505 ± 24% interrupts.CPU50.PMI:Performance_monitoring_interrupts
209.00 -41.6% 122.00 ± 18% interrupts.CPU50.RES:Rescheduling_interrupts
933.00 ± 7% -13.4% 807.75 interrupts.CPU51.CAL:Function_call_interrupts
6631 ± 9% -40.3% 3961 ± 16% interrupts.CPU51.NMI:Non-maskable_interrupts
6631 ± 9% -40.3% 3961 ± 16% interrupts.CPU51.PMI:Performance_monitoring_interrupts
273.25 ± 15% -52.0% 131.25 ± 18% interrupts.CPU51.RES:Rescheduling_interrupts
891.75 ± 3% -10.0% 803.00 interrupts.CPU52.CAL:Function_call_interrupts
6554 ± 11% -44.9% 3610 ± 20% interrupts.CPU52.NMI:Non-maskable_interrupts
6554 ± 11% -44.9% 3610 ± 20% interrupts.CPU52.PMI:Performance_monitoring_interrupts
237.25 ± 12% -37.1% 149.25 ± 27% interrupts.CPU52.RES:Rescheduling_interrupts
209.50 ± 7% -31.3% 144.00 ± 26% interrupts.CPU53.RES:Rescheduling_interrupts
205.00 ± 4% -30.7% 142.00 ± 13% interrupts.CPU54.RES:Rescheduling_interrupts
229.00 ± 11% -38.1% 141.75 ± 28% interrupts.CPU55.RES:Rescheduling_interrupts
218.25 ± 6% -49.1% 111.00 ± 56% interrupts.CPU56.RES:Rescheduling_interrupts
885.75 ± 6% -9.1% 805.50 interrupts.CPU57.CAL:Function_call_interrupts
244.00 ± 13% -41.5% 142.75 ± 37% interrupts.CPU57.RES:Rescheduling_interrupts
246.50 ± 24% -52.7% 116.50 ± 23% interrupts.CPU58.RES:Rescheduling_interrupts
880.50 ± 4% -8.5% 805.50 interrupts.CPU59.CAL:Function_call_interrupts
235.00 ± 14% -46.1% 126.75 ± 29% interrupts.CPU59.RES:Rescheduling_interrupts
901.75 ± 5% -10.8% 804.75 interrupts.CPU60.CAL:Function_call_interrupts
243.50 ± 9% -45.1% 133.75 ± 26% interrupts.CPU60.RES:Rescheduling_interrupts
871.50 ± 3% -6.1% 818.00 ± 3% interrupts.CPU61.CAL:Function_call_interrupts
244.25 ± 15% -51.1% 119.50 ± 24% interrupts.CPU61.RES:Rescheduling_interrupts
228.75 ± 12% -36.0% 146.50 ± 19% interrupts.CPU62.RES:Rescheduling_interrupts
223.25 ± 11% -41.7% 130.25 ± 44% interrupts.CPU63.RES:Rescheduling_interrupts
228.00 ± 13% -44.6% 126.25 ± 15% interrupts.CPU64.RES:Rescheduling_interrupts
222.75 ± 12% -39.4% 135.00 ± 23% interrupts.CPU65.RES:Rescheduling_interrupts
930.50 ± 7% -13.4% 806.25 interrupts.CPU66.CAL:Function_call_interrupts
256.75 ± 17% -41.3% 150.75 ± 35% interrupts.CPU66.RES:Rescheduling_interrupts
226.00 ± 13% -59.0% 92.75 ± 69% interrupts.CPU67.RES:Rescheduling_interrupts
204.50 ± 3% -43.9% 114.75 ± 42% interrupts.CPU69.RES:Rescheduling_interrupts
231.50 ± 11% -48.2% 120.00 ± 22% interrupts.CPU70.RES:Rescheduling_interrupts
889.25 ± 5% -9.2% 807.00 interrupts.CPU71.CAL:Function_call_interrupts
233.50 ± 9% -55.5% 104.00 ± 45% interrupts.CPU71.RES:Rescheduling_interrupts
222.50 ± 2% -41.7% 129.75 ± 22% interrupts.CPU72.RES:Rescheduling_interrupts
214.50 ± 4% -27.9% 154.75 ± 20% interrupts.CPU73.RES:Rescheduling_interrupts
219.25 ± 3% -34.1% 144.50 ± 5% interrupts.CPU74.RES:Rescheduling_interrupts
214.00 -28.6% 152.75 ± 12% interrupts.CPU75.RES:Rescheduling_interrupts
225.50 ± 14% -35.6% 145.25 ± 21% interrupts.CPU76.RES:Rescheduling_interrupts
5705 ± 23% +47.1% 8393 ± 6% interrupts.CPU77.NMI:Non-maskable_interrupts
5705 ± 23% +47.1% 8393 ± 6% interrupts.CPU77.PMI:Performance_monitoring_interrupts
918.50 ± 5% -12.2% 806.75 interrupts.CPU79.CAL:Function_call_interrupts
243.50 ± 8% -62.3% 91.75 ± 63% interrupts.CPU79.RES:Rescheduling_interrupts
229.00 ± 12% -54.3% 104.75 ± 38% interrupts.CPU8.RES:Rescheduling_interrupts
953.75 ± 9% -12.2% 837.25 ± 4% interrupts.CPU80.CAL:Function_call_interrupts
208.00 -40.6% 123.50 ± 20% interrupts.CPU82.RES:Rescheduling_interrupts
927.25 ± 12% -9.2% 842.00 ± 8% interrupts.CPU83.CAL:Function_call_interrupts
231.75 ± 14% -47.7% 121.25 ± 27% interrupts.CPU83.RES:Rescheduling_interrupts
6546 ± 10% +31.5% 8607 ± 2% interrupts.CPU84.NMI:Non-maskable_interrupts
6546 ± 10% +31.5% 8607 ± 2% interrupts.CPU84.PMI:Performance_monitoring_interrupts
230.50 ± 13% -29.8% 161.75 ± 21% interrupts.CPU84.RES:Rescheduling_interrupts
224.00 ± 13% -49.6% 113.00 ± 49% interrupts.CPU87.RES:Rescheduling_interrupts
242.50 ± 20% -52.1% 116.25 ± 29% interrupts.CPU88.RES:Rescheduling_interrupts
5676 ± 23% +43.6% 8148 ± 12% interrupts.CPU89.NMI:Non-maskable_interrupts
5676 ± 23% +43.6% 8148 ± 12% interrupts.CPU89.PMI:Performance_monitoring_interrupts
210.00 ± 4% -46.5% 112.25 ± 30% interrupts.CPU9.RES:Rescheduling_interrupts
5725 ± 23% +51.7% 8683 interrupts.CPU90.NMI:Non-maskable_interrupts
5725 ± 23% +51.7% 8683 interrupts.CPU90.PMI:Performance_monitoring_interrupts
259.75 ± 23% -42.9% 148.25 ± 37% interrupts.CPU90.RES:Rescheduling_interrupts
211.75 ± 3% -35.1% 137.50 ± 33% interrupts.CPU91.RES:Rescheduling_interrupts
210.50 ± 3% -44.4% 117.00 ± 28% interrupts.CPU92.RES:Rescheduling_interrupts
228.25 ± 9% -61.1% 88.75 ± 34% interrupts.CPU93.RES:Rescheduling_interrupts
218.25 ± 4% -52.5% 103.75 ± 38% interrupts.CPU94.RES:Rescheduling_interrupts
216.50 ± 2% -80.7% 41.75 ± 73% interrupts.CPU95.RES:Rescheduling_interrupts
6560 ± 11% +33.0% 8725 interrupts.CPU96.NMI:Non-maskable_interrupts
6560 ± 11% +33.0% 8725 interrupts.CPU96.PMI:Performance_monitoring_interrupts
217.25 ± 3% -41.2% 127.75 ± 23% interrupts.CPU97.RES:Rescheduling_interrupts
215.75 ± 2% -48.9% 110.25 ± 29% interrupts.CPU98.RES:Rescheduling_interrupts
38197 -45.7% 20733 ± 6% interrupts.RES:Rescheduling_interrupts
will-it-scale.per_process_ops
7200 +--------------------------------------------------------------------+
| |
7000 |-+ +.. +.+ |
| : .+..+..+ + : + +. |
6800 |-+ : +.. + : + : + + .. +..|
| .+ .. : .+ : .+.. .+ + .+..+ |
6600 |.+ + + +. + +. +..+ |
| + .. |
6400 |-+ + O O O |
| O O O |
6200 |-+O O O |
| O O O O O O O |
6000 |-+ O |
| O |
5800 +--------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[mm] 698ac7610f: will-it-scale.per_thread_ops 8.2% improvement
by kernel test robot
Greetings,
FYI, we noticed an 8.2% improvement of will-it-scale.per_thread_ops due to commit:
commit: 698ac7610f7928ddfa44a0736e89d776579d8b82 ("[PATCH 1/5] mm: Introduce mm_struct.has_pinned")
url: https://github.com/0day-ci/linux/commits/Peter-Xu/mm-Break-COW-for-pinned...
base: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git bcf876870b95592b52519ed4aafcf9d95999bc9c
in testcase: will-it-scale
on test machine: 96 threads Intel(R) Xeon(R) CPU @ 2.30GHz with 128G memory
with following parameters:
nr_task: 100%
mode: thread
test: mmap2
cpufreq_governor: performance
ucode: 0x4002f01
test-description: Will It Scale takes a testcase and runs it with 1 through n parallel copies to see whether the testcase scales. It builds both a process-based and a thread-based variant of each test so that any differences between the two can be observed.
test-url: https://github.com/antonblanchard/will-it-scale
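The scaling methodology described above can be sketched in a few lines. This is a hypothetical, minimal re-implementation of the idea (not the actual will-it-scale code): run the same testcase in 1..n parallel workers for a fixed interval, then report total and per-process operations so that contention shows up as falling per-process throughput. The `worker` body here is only a stand-in for a real kernel-stressing testcase such as mmap2.

```python
import multiprocessing as mp
import time

DURATION = 0.2  # seconds per measurement; real benchmark runs are much longer


def worker(q):
    """One testcase instance: loop on a trivial operation and count iterations."""
    end = time.monotonic() + DURATION
    ops = 0
    while time.monotonic() < end:
        # Stand-in for the mmap2 testcase: allocate and touch a page-sized buffer.
        buf = bytearray(4096)
        buf[0] = 1
        ops += 1
    q.put(ops)


def run(nr_tasks):
    """Run nr_tasks parallel copies; return (total_ops, per_process_ops)."""
    q = mp.Queue()
    procs = [mp.Process(target=worker, args=(q,)) for _ in range(nr_tasks)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    counts = [q.get() for _ in procs]
    return sum(counts), sum(counts) / nr_tasks


if __name__ == "__main__":
    # A workload that scales keeps per_process_ops roughly flat as nr_task grows;
    # lock-bound workloads (like mmap2 on mmap_sem) show it dropping instead.
    for n in (1, 2, 4):
        total, per_proc = run(n)
        print(f"nr_task={n}: total_ops={total} per_process_ops={per_proc:.0f}")
```

In the report above, nr_task is 100% (one copy per CPU) and the metric compared between the two commits is the per-thread analogue of `per_process_ops`.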
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/thread/100%/debian-10.4-x86_64-20200603.cgz/lkp-csl-2sp4/mmap2/will-it-scale/0x4002f01
commit:
v5.8
698ac7610f ("mm: Introduce mm_struct.has_pinned")
v5.8 698ac7610f7928ddfa44a0736e8
---------------- ---------------------------
%stddev %change %stddev
\ | \
2003 +8.2% 2168 will-it-scale.per_thread_ops
192350 +8.3% 208245 will-it-scale.workload
2643 ± 33% -36.3% 1683 meminfo.Active(file)
3.88 ± 2% -0.6 3.25 ± 2% mpstat.cpu.all.idle%
0.00 ± 3% -0.0 0.00 ± 8% mpstat.cpu.all.iowait%
307629 ± 3% +10.5% 340075 ± 3% numa-numastat.node0.local_node
15503 ± 60% +60.2% 24839 ± 25% numa-numastat.node1.other_node
161670 -58.7% 66739 ± 4% vmstat.system.cs
209406 -3.9% 201176 vmstat.system.in
364.00 ± 23% -53.8% 168.00 slabinfo.pid_namespace.active_objs
364.00 ± 23% -53.8% 168.00 slabinfo.pid_namespace.num_objs
985.50 ± 11% +14.6% 1129 ± 8% slabinfo.task_group.active_objs
985.50 ± 11% +14.6% 1129 ± 8% slabinfo.task_group.num_objs
660.25 ± 33% -36.3% 420.25 proc-vmstat.nr_active_file
302055 +2.4% 309211 proc-vmstat.nr_file_pages
281010 +2.6% 288385 proc-vmstat.nr_unevictable
660.25 ± 33% -36.3% 420.25 proc-vmstat.nr_zone_active_file
281010 +2.6% 288385 proc-vmstat.nr_zone_unevictable
20640832 ± 16% +40.0% 28888446 ± 6% cpuidle.C1.time
1743036 ± 6% -54.5% 792376 ± 4% cpuidle.C1.usage
5.048e+08 ± 54% -98.3% 8642335 ± 2% cpuidle.C6.time
706531 ± 51% -94.9% 36224 cpuidle.C6.usage
38313880 -56.5% 16666274 ± 5% cpuidle.POLL.time
18289550 -59.1% 7488947 ± 5% cpuidle.POLL.usage
302.94 ± 5% -32.9% 203.13 ± 6% sched_debug.cfs_rq:/.exec_clock.stddev
31707 ± 6% -43.6% 17867 ± 6% sched_debug.cfs_rq:/.min_vruntime.stddev
0.77 ± 22% +41.1% 1.09 ± 4% sched_debug.cfs_rq:/.nr_spread_over.avg
163292 ± 15% -75.8% 39543 ± 24% sched_debug.cfs_rq:/.spread0.avg
220569 ± 13% -65.9% 75287 ± 16% sched_debug.cfs_rq:/.spread0.max
-1903 +1952.7% -39073 sched_debug.cfs_rq:/.spread0.min
31680 ± 6% -43.7% 17850 ± 6% sched_debug.cfs_rq:/.spread0.stddev
698820 ± 2% -28.4% 500526 ± 3% sched_debug.cpu.avg_idle.avg
1100275 ± 3% -7.2% 1020875 ± 3% sched_debug.cpu.avg_idle.max
250869 -58.4% 104239 ± 4% sched_debug.cpu.nr_switches.avg
766741 ± 25% -64.8% 269919 ± 10% sched_debug.cpu.nr_switches.max
111347 ± 16% -50.8% 54786 ± 11% sched_debug.cpu.nr_switches.min
108077 ± 11% -67.3% 35316 ± 8% sched_debug.cpu.nr_switches.stddev
262769 -59.1% 107346 ± 4% sched_debug.cpu.sched_count.avg
800567 ± 25% -65.9% 272755 ± 10% sched_debug.cpu.sched_count.max
115870 ± 15% -51.6% 56034 ± 11% sched_debug.cpu.sched_count.min
112678 ± 11% -67.8% 36309 ± 9% sched_debug.cpu.sched_count.stddev
122760 -59.2% 50082 ± 4% sched_debug.cpu.sched_goidle.avg
372289 ± 25% -65.9% 126854 ± 10% sched_debug.cpu.sched_goidle.max
53911 ± 15% -51.7% 26040 ± 11% sched_debug.cpu.sched_goidle.min
52986 ± 12% -67.7% 17092 ± 9% sched_debug.cpu.sched_goidle.stddev
138914 -59.1% 56816 ± 4% sched_debug.cpu.ttwu_count.avg
168089 -57.1% 72151 ± 4% sched_debug.cpu.ttwu_count.max
123046 -61.1% 47837 ± 4% sched_debug.cpu.ttwu_count.min
11828 ± 15% -39.8% 7114 ± 4% sched_debug.cpu.ttwu_count.stddev
3050 ± 6% -34.3% 2004 ± 6% sched_debug.cpu.ttwu_local.max
383.97 ± 6% -30.1% 268.54 ± 12% sched_debug.cpu.ttwu_local.stddev
51.21 -0.4 50.79 perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.down_write_killable.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
51.24 -0.4 50.82 perf-profile.calltrace.cycles-pp.down_write_killable.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.84 ± 8% -0.1 0.76 ± 4% perf-profile.calltrace.cycles-pp.secondary_startup_64
0.83 ± 8% -0.1 0.74 ± 4% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
0.83 ± 8% -0.1 0.74 ± 4% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
0.83 ± 8% -0.1 0.74 ± 4% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
47.27 +0.5 47.79 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__munmap
47.27 +0.5 47.79 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
47.26 +0.5 47.78 perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
47.25 +0.5 47.78 perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
47.28 +0.5 47.81 perf-profile.calltrace.cycles-pp.__munmap
46.75 +0.5 47.30 perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.down_write_killable.__vm_munmap.__x64_sys_munmap.do_syscall_64
46.24 +0.5 46.79 perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_write_slowpath.down_write_killable.__vm_munmap
46.77 +0.6 47.33 perf-profile.calltrace.cycles-pp.down_write_killable.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
46.70 +0.6 47.26 perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.down_write_killable.__vm_munmap.__x64_sys_munmap
0.84 ± 8% -0.1 0.76 ± 4% perf-profile.children.cycles-pp.secondary_startup_64
0.84 ± 8% -0.1 0.76 ± 4% perf-profile.children.cycles-pp.cpu_startup_entry
0.84 ± 8% -0.1 0.76 ± 4% perf-profile.children.cycles-pp.do_idle
0.83 ± 8% -0.1 0.74 ± 4% perf-profile.children.cycles-pp.start_secondary
0.11 ± 4% -0.1 0.04 ± 57% perf-profile.children.cycles-pp.__sched_text_start
0.14 ± 3% -0.1 0.08 ± 6% perf-profile.children.cycles-pp.rwsem_wake
0.10 ± 4% -0.0 0.06 ± 9% perf-profile.children.cycles-pp.wake_up_q
0.10 ± 4% -0.0 0.05 perf-profile.children.cycles-pp.try_to_wake_up
0.07 ± 7% -0.0 0.05 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.24 ± 2% +0.0 0.26 perf-profile.children.cycles-pp.mmap_region
0.42 ± 2% +0.0 0.45 perf-profile.children.cycles-pp.do_mmap
0.67 +0.0 0.71 ± 2% perf-profile.children.cycles-pp.rwsem_spin_on_owner
97.96 +0.1 98.09 perf-profile.children.cycles-pp.rwsem_down_write_slowpath
98.02 +0.1 98.14 perf-profile.children.cycles-pp.down_write_killable
97.86 +0.2 98.02 perf-profile.children.cycles-pp.rwsem_optimistic_spin
96.90 +0.2 97.07 perf-profile.children.cycles-pp.osq_lock
47.26 +0.5 47.78 perf-profile.children.cycles-pp.__x64_sys_munmap
47.28 +0.5 47.81 perf-profile.children.cycles-pp.__munmap
47.25 +0.5 47.78 perf-profile.children.cycles-pp.__vm_munmap
0.24 -0.0 0.19 ± 4% perf-profile.self.cycles-pp.rwsem_optimistic_spin
0.06 -0.0 0.03 ±100% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.09 +0.0 0.10 perf-profile.self.cycles-pp.find_vma
0.65 +0.0 0.70 ± 2% perf-profile.self.cycles-pp.rwsem_spin_on_owner
96.31 +0.2 96.53 perf-profile.self.cycles-pp.osq_lock
0.66 -21.0% 0.52 ± 2% perf-stat.i.MPKI
0.11 +0.0 0.11 perf-stat.i.branch-miss-rate%
14774053 +4.3% 15410076 perf-stat.i.branch-misses
41.17 +0.9 42.09 perf-stat.i.cache-miss-rate%
19999033 -18.7% 16264313 ± 3% perf-stat.i.cache-misses
48696879 -20.4% 38779964 ± 2% perf-stat.i.cache-references
162553 -58.9% 66790 ± 4% perf-stat.i.context-switches
157.36 -3.9% 151.20 perf-stat.i.cpu-migrations
14271 +23.9% 17676 ± 3% perf-stat.i.cycles-between-cache-misses
0.00 ± 8% +0.0 0.00 ± 4% perf-stat.i.dTLB-load-miss-rate%
155048 ± 5% +46.3% 226826 ± 4% perf-stat.i.dTLB-load-misses
0.00 ± 2% +0.0 0.01 perf-stat.i.dTLB-store-miss-rate%
16761 +19.2% 19972 ± 2% perf-stat.i.dTLB-store-misses
4.118e+08 -15.7% 3.47e+08 perf-stat.i.dTLB-stores
93.99 +1.4 95.42 perf-stat.i.iTLB-load-miss-rate%
4103535 ± 3% -6.7% 3827881 perf-stat.i.iTLB-load-misses
263239 ± 4% -28.6% 188037 ± 2% perf-stat.i.iTLB-loads
18498 ± 3% +7.5% 19885 perf-stat.i.instructions-per-iTLB-miss
0.31 +226.8% 1.01 ± 3% perf-stat.i.metric.K/sec
88.51 +1.8 90.30 perf-stat.i.node-load-miss-rate%
8104644 -13.5% 7009101 perf-stat.i.node-load-misses
1047577 -28.3% 751122 perf-stat.i.node-loads
3521008 -13.1% 3058055 perf-stat.i.node-store-misses
0.64 -20.6% 0.51 ± 2% perf-stat.overall.MPKI
0.10 +0.0 0.10 perf-stat.overall.branch-miss-rate%
41.07 +0.9 41.92 perf-stat.overall.cache-miss-rate%
14273 +23.7% 17662 ± 3% perf-stat.overall.cycles-between-cache-misses
0.00 ± 5% +0.0 0.00 ± 5% perf-stat.overall.dTLB-load-miss-rate%
0.00 ± 2% +0.0 0.01 ± 2% perf-stat.overall.dTLB-store-miss-rate%
93.95 +1.3 95.29 perf-stat.overall.iTLB-load-miss-rate%
18507 ± 3% +7.5% 19890 perf-stat.overall.instructions-per-iTLB-miss
88.55 +1.8 90.32 perf-stat.overall.node-load-miss-rate%
1.191e+08 -7.0% 1.107e+08 perf-stat.overall.path-length
14739033 +4.2% 15364552 perf-stat.ps.branch-misses
19930054 -18.7% 16209879 ± 3% perf-stat.ps.cache-misses
48535919 -20.3% 38662093 ± 2% perf-stat.ps.cache-references
161841 -58.9% 66477 ± 4% perf-stat.ps.context-switches
157.06 -3.9% 150.89 perf-stat.ps.cpu-migrations
154906 ± 5% +46.1% 226269 ± 5% perf-stat.ps.dTLB-load-misses
16730 +19.1% 19927 ± 2% perf-stat.ps.dTLB-store-misses
4.104e+08 -15.7% 3.459e+08 perf-stat.ps.dTLB-stores
4089203 ± 3% -6.7% 3814320 perf-stat.ps.iTLB-load-misses
263134 ± 4% -28.4% 188441 ± 2% perf-stat.ps.iTLB-loads
8076488 -13.5% 6984923 perf-stat.ps.node-load-misses
1044002 -28.3% 748425 perf-stat.ps.node-loads
3509068 -13.1% 3048190 perf-stat.ps.node-store-misses
2270 ±170% -98.9% 24.50 ±166% interrupts.36:PCI-MSI.31981569-edge.i40e-eth0-TxRx-0
3730764 ± 2% -62.0% 1416300 ± 4% interrupts.CAL:Function_call_interrupts
2270 ±170% -98.9% 24.25 ±168% interrupts.CPU0.36:PCI-MSI.31981569-edge.i40e-eth0-TxRx-0
111116 ± 29% -67.4% 36227 ± 8% interrupts.CPU0.CAL:Function_call_interrupts
11091 ± 38% -62.2% 4195 ± 12% interrupts.CPU0.RES:Rescheduling_interrupts
48001 ± 26% -53.7% 22228 ± 14% interrupts.CPU1.CAL:Function_call_interrupts
4189 ± 29% -43.2% 2378 ± 5% interrupts.CPU1.RES:Rescheduling_interrupts
34024 ± 32% -44.7% 18804 ± 4% interrupts.CPU10.CAL:Function_call_interrupts
26171 ± 19% -26.1% 19334 ± 3% interrupts.CPU11.CAL:Function_call_interrupts
2370 ± 13% -20.8% 1876 ± 18% interrupts.CPU11.RES:Rescheduling_interrupts
30568 ± 23% -45.9% 16537 ± 4% interrupts.CPU12.CAL:Function_call_interrupts
3094 ± 27% -46.9% 1643 ± 8% interrupts.CPU12.RES:Rescheduling_interrupts
514.50 ± 10% +35.6% 697.75 ± 16% interrupts.CPU12.TLB:TLB_shootdowns
29819 ± 32% -41.7% 17393 interrupts.CPU13.CAL:Function_call_interrupts
35364 ± 38% -49.6% 17819 ± 9% interrupts.CPU14.CAL:Function_call_interrupts
694.75 ± 11% +23.2% 856.00 ± 9% interrupts.CPU14.TLB:TLB_shootdowns
38361 ± 48% -49.8% 19273 ± 9% interrupts.CPU16.CAL:Function_call_interrupts
30069 ± 23% -31.6% 20559 ± 3% interrupts.CPU17.CAL:Function_call_interrupts
809.75 ± 7% -19.5% 651.75 ± 13% interrupts.CPU17.TLB:TLB_shootdowns
28245 ± 29% -33.1% 18894 ± 6% interrupts.CPU18.CAL:Function_call_interrupts
33560 ± 23% -48.2% 17369 ± 6% interrupts.CPU19.CAL:Function_call_interrupts
2863 ± 19% -40.3% 1709 ± 13% interrupts.CPU19.RES:Rescheduling_interrupts
47118 ± 32% -55.7% 20868 ± 3% interrupts.CPU2.CAL:Function_call_interrupts
3897 ± 38% -48.9% 1991 ± 7% interrupts.CPU2.RES:Rescheduling_interrupts
34735 ± 29% -41.7% 20246 ± 9% interrupts.CPU20.CAL:Function_call_interrupts
37232 ± 23% -46.6% 19883 ± 12% interrupts.CPU21.CAL:Function_call_interrupts
32345 ± 16% -38.6% 19845 ± 6% interrupts.CPU22.CAL:Function_call_interrupts
34083 ± 22% -43.4% 19301 ± 6% interrupts.CPU23.CAL:Function_call_interrupts
61308 ± 16% -76.3% 14529 ± 13% interrupts.CPU24.CAL:Function_call_interrupts
6610 ± 26% -76.3% 1568 ± 11% interrupts.CPU24.RES:Rescheduling_interrupts
51384 ± 32% -75.0% 12848 ± 12% interrupts.CPU25.CAL:Function_call_interrupts
4643 ± 39% -70.6% 1366 ± 9% interrupts.CPU25.RES:Rescheduling_interrupts
48788 ± 25% -71.7% 13826 ± 17% interrupts.CPU26.CAL:Function_call_interrupts
4076 ± 32% -70.5% 1203 ± 18% interrupts.CPU26.RES:Rescheduling_interrupts
45702 ± 14% -70.7% 13369 ± 12% interrupts.CPU27.CAL:Function_call_interrupts
3614 ± 21% -67.3% 1180 ± 16% interrupts.CPU27.RES:Rescheduling_interrupts
51216 ± 14% -71.6% 14546 ± 15% interrupts.CPU28.CAL:Function_call_interrupts
4395 ± 24% -67.9% 1410 ± 21% interrupts.CPU28.RES:Rescheduling_interrupts
614.25 ± 18% +33.3% 818.75 ± 17% interrupts.CPU28.TLB:TLB_shootdowns
44945 ± 23% -66.5% 15059 ± 14% interrupts.CPU29.CAL:Function_call_interrupts
3994 ± 34% -68.2% 1271 ± 10% interrupts.CPU29.RES:Rescheduling_interrupts
39154 ± 24% -41.6% 22857 ± 6% interrupts.CPU3.CAL:Function_call_interrupts
45674 ± 11% -68.3% 14470 ± 8% interrupts.CPU30.CAL:Function_call_interrupts
4097 ± 23% -68.8% 1278 ± 10% interrupts.CPU30.RES:Rescheduling_interrupts
51890 ± 13% -72.6% 14227 ± 16% interrupts.CPU31.CAL:Function_call_interrupts
4557 ± 26% -71.4% 1305 ± 21% interrupts.CPU31.RES:Rescheduling_interrupts
41324 ± 23% -76.0% 9933 ± 11% interrupts.CPU32.CAL:Function_call_interrupts
3284 ± 33% -73.4% 873.75 ± 15% interrupts.CPU32.RES:Rescheduling_interrupts
39758 ± 31% -74.5% 10120 ± 17% interrupts.CPU33.CAL:Function_call_interrupts
3373 ± 42% -74.2% 869.00 ± 15% interrupts.CPU33.RES:Rescheduling_interrupts
513.00 ± 27% +46.0% 748.75 ± 16% interrupts.CPU33.TLB:TLB_shootdowns
40015 ± 14% -72.8% 10885 ± 8% interrupts.CPU34.CAL:Function_call_interrupts
3402 ± 25% -68.2% 1080 ± 13% interrupts.CPU34.RES:Rescheduling_interrupts
635.25 ± 22% +49.3% 948.25 ± 13% interrupts.CPU34.TLB:TLB_shootdowns
45251 ± 19% -75.2% 11204 ± 17% interrupts.CPU35.CAL:Function_call_interrupts
3731 ± 31% -73.4% 992.50 ± 20% interrupts.CPU35.RES:Rescheduling_interrupts
43390 ± 11% -78.3% 9434 ± 15% interrupts.CPU36.CAL:Function_call_interrupts
3536 ± 23% -77.3% 803.75 ± 14% interrupts.CPU36.RES:Rescheduling_interrupts
39820 ± 11% -75.9% 9613 ± 10% interrupts.CPU37.CAL:Function_call_interrupts
2987 ± 21% -70.8% 873.25 ± 9% interrupts.CPU37.RES:Rescheduling_interrupts
42969 ± 32% -76.6% 10068 ± 17% interrupts.CPU38.CAL:Function_call_interrupts
3202 ± 36% -74.4% 818.50 ± 27% interrupts.CPU38.RES:Rescheduling_interrupts
35571 ± 16% -72.4% 9822 ± 9% interrupts.CPU39.CAL:Function_call_interrupts
2986 ± 24% -73.9% 778.75 ± 15% interrupts.CPU39.RES:Rescheduling_interrupts
45001 ± 21% -48.2% 23317 ± 7% interrupts.CPU4.CAL:Function_call_interrupts
3689 ± 24% -43.6% 2080 ± 2% interrupts.CPU4.RES:Rescheduling_interrupts
39302 ± 21% -73.4% 10463 ± 8% interrupts.CPU40.CAL:Function_call_interrupts
2968 ± 32% -71.4% 848.50 ± 18% interrupts.CPU40.RES:Rescheduling_interrupts
40826 ± 19% -75.3% 10070 ± 10% interrupts.CPU41.CAL:Function_call_interrupts
3321 ± 30% -70.9% 967.25 ± 10% interrupts.CPU41.RES:Rescheduling_interrupts
700.25 ± 18% +26.2% 883.75 ± 17% interrupts.CPU41.TLB:TLB_shootdowns
35368 ± 11% -73.7% 9308 ± 15% interrupts.CPU42.CAL:Function_call_interrupts
2839 ± 12% -70.3% 844.50 ± 13% interrupts.CPU42.RES:Rescheduling_interrupts
45459 ± 25% -78.7% 9687 ± 11% interrupts.CPU43.CAL:Function_call_interrupts
3703 ± 29% -74.1% 959.50 ± 16% interrupts.CPU43.RES:Rescheduling_interrupts
41495 ± 15% -77.1% 9522 ± 12% interrupts.CPU44.CAL:Function_call_interrupts
3153 ± 28% -75.0% 789.75 ± 15% interrupts.CPU44.RES:Rescheduling_interrupts
38501 ± 26% -72.5% 10601 ± 14% interrupts.CPU45.CAL:Function_call_interrupts
3024 ± 38% -73.8% 791.00 ± 19% interrupts.CPU45.RES:Rescheduling_interrupts
39083 ± 35% -73.6% 10323 ± 18% interrupts.CPU46.CAL:Function_call_interrupts
3173 ± 37% -73.9% 829.75 ± 24% interrupts.CPU46.RES:Rescheduling_interrupts
44486 ± 20% -75.3% 10968 ± 15% interrupts.CPU47.CAL:Function_call_interrupts
3773 ± 34% -76.4% 890.25 ± 24% interrupts.CPU47.RES:Rescheduling_interrupts
34967 ± 42% -51.0% 17117 ± 10% interrupts.CPU48.CAL:Function_call_interrupts
31969 ± 38% -51.7% 15432 ± 12% interrupts.CPU49.CAL:Function_call_interrupts
33786 ± 12% -29.2% 23918 ± 5% interrupts.CPU5.CAL:Function_call_interrupts
3014 ± 16% -33.2% 2012 ± 9% interrupts.CPU5.RES:Rescheduling_interrupts
30514 ± 29% -46.4% 16343 ± 6% interrupts.CPU51.CAL:Function_call_interrupts
34448 ± 26% -48.7% 17686 ± 4% interrupts.CPU52.CAL:Function_call_interrupts
2811 ± 34% -42.0% 1631 ± 4% interrupts.CPU52.RES:Rescheduling_interrupts
30848 ± 33% -47.9% 16059 ± 3% interrupts.CPU54.CAL:Function_call_interrupts
31017 ± 22% -52.7% 14676 ± 7% interrupts.CPU55.CAL:Function_call_interrupts
2501 ± 41% -41.8% 1455 ± 10% interrupts.CPU55.RES:Rescheduling_interrupts
28249 ± 23% -46.9% 14997 ± 10% interrupts.CPU56.CAL:Function_call_interrupts
2113 ± 18% -36.0% 1352 ± 15% interrupts.CPU56.RES:Rescheduling_interrupts
27658 ± 20% -49.3% 14034 ± 3% interrupts.CPU57.CAL:Function_call_interrupts
26559 ± 34% -40.6% 15778 ± 11% interrupts.CPU58.CAL:Function_call_interrupts
27984 ± 27% -39.9% 16815 ± 12% interrupts.CPU59.CAL:Function_call_interrupts
35098 ± 33% -37.5% 21921 interrupts.CPU6.CAL:Function_call_interrupts
3073 ± 37% -40.6% 1825 ± 8% interrupts.CPU6.RES:Rescheduling_interrupts
29248 ± 38% -48.2% 15149 ± 6% interrupts.CPU60.CAL:Function_call_interrupts
30880 ± 33% -52.3% 14722 ± 10% interrupts.CPU61.CAL:Function_call_interrupts
31218 ± 43% -51.5% 15152 ± 3% interrupts.CPU62.CAL:Function_call_interrupts
29210 ± 40% -46.5% 15627 ± 6% interrupts.CPU63.CAL:Function_call_interrupts
26813 ± 15% -39.0% 16343 ± 13% interrupts.CPU64.CAL:Function_call_interrupts
24791 ± 14% -32.6% 16704 ± 10% interrupts.CPU67.CAL:Function_call_interrupts
29638 ± 33% -42.9% 16914 ± 9% interrupts.CPU68.CAL:Function_call_interrupts
36247 ± 33% -48.5% 18670 ± 8% interrupts.CPU69.CAL:Function_call_interrupts
30379 ± 24% -30.6% 21096 ± 5% interrupts.CPU7.CAL:Function_call_interrupts
3027 ± 25% -32.0% 2059 ± 3% interrupts.CPU7.RES:Rescheduling_interrupts
31064 ± 25% -42.8% 17774 interrupts.CPU70.CAL:Function_call_interrupts
52949 ± 14% -77.0% 12198 ± 10% interrupts.CPU72.CAL:Function_call_interrupts
4057 ± 23% -75.7% 985.00 ± 8% interrupts.CPU72.RES:Rescheduling_interrupts
42694 ± 22% -73.6% 11281 ± 16% interrupts.CPU73.CAL:Function_call_interrupts
3318 ± 39% -74.3% 851.75 ± 16% interrupts.CPU73.RES:Rescheduling_interrupts
49143 ± 24% -76.1% 11756 ± 12% interrupts.CPU74.CAL:Function_call_interrupts
3606 ± 31% -73.8% 946.00 ± 12% interrupts.CPU74.RES:Rescheduling_interrupts
50587 ± 24% -72.5% 13930 ± 20% interrupts.CPU75.CAL:Function_call_interrupts
3655 ± 36% -71.1% 1056 ± 12% interrupts.CPU75.RES:Rescheduling_interrupts
57791 ± 21% -78.4% 12488 ± 11% interrupts.CPU76.CAL:Function_call_interrupts
5109 ± 37% -79.7% 1037 ± 20% interrupts.CPU76.RES:Rescheduling_interrupts
52455 ± 26% -75.4% 12922 ± 5% interrupts.CPU77.CAL:Function_call_interrupts
3997 ± 37% -73.9% 1043 ± 14% interrupts.CPU77.RES:Rescheduling_interrupts
49188 ± 21% -74.5% 12521 ± 10% interrupts.CPU78.CAL:Function_call_interrupts
3867 ± 42% -74.5% 986.25 ± 18% interrupts.CPU78.RES:Rescheduling_interrupts
45517 ± 19% -72.6% 12484 ± 19% interrupts.CPU79.CAL:Function_call_interrupts
3369 ± 34% -71.9% 946.25 ± 20% interrupts.CPU79.RES:Rescheduling_interrupts
30702 ± 22% -39.9% 18462 ± 9% interrupts.CPU8.CAL:Function_call_interrupts
2580 ± 28% -39.9% 1550 ± 10% interrupts.CPU8.RES:Rescheduling_interrupts
35561 ± 30% -69.8% 10728 ± 12% interrupts.CPU80.CAL:Function_call_interrupts
2675 ± 44% -70.6% 787.50 ± 7% interrupts.CPU80.RES:Rescheduling_interrupts
38762 ± 33% -73.3% 10349 ± 18% interrupts.CPU81.CAL:Function_call_interrupts
2892 ± 48% -70.5% 853.25 ± 20% interrupts.CPU81.RES:Rescheduling_interrupts
46500 ± 39% -80.2% 9203 ± 6% interrupts.CPU82.CAL:Function_call_interrupts
3726 ± 41% -83.3% 622.75 ± 12% interrupts.CPU82.RES:Rescheduling_interrupts
42125 ± 25% -76.0% 10103 ± 7% interrupts.CPU83.CAL:Function_call_interrupts
3275 ± 40% -75.4% 804.50 ± 6% interrupts.CPU83.RES:Rescheduling_interrupts
37359 ± 28% -74.7% 9436 ± 7% interrupts.CPU84.CAL:Function_call_interrupts
2762 ± 45% -71.1% 797.50 ± 17% interrupts.CPU84.RES:Rescheduling_interrupts
38900 ± 13% -76.2% 9272 ± 8% interrupts.CPU85.CAL:Function_call_interrupts
2704 ± 27% -77.0% 622.25 ± 10% interrupts.CPU85.RES:Rescheduling_interrupts
40662 ± 24% -77.2% 9274 ± 14% interrupts.CPU86.CAL:Function_call_interrupts
3139 ± 39% -79.5% 643.00 ± 28% interrupts.CPU86.RES:Rescheduling_interrupts
33538 ± 23% -71.7% 9484 ± 14% interrupts.CPU87.CAL:Function_call_interrupts
2406 ± 40% -73.5% 638.25 ± 21% interrupts.CPU87.RES:Rescheduling_interrupts
36240 ± 26% -73.8% 9499 ± 10% interrupts.CPU88.CAL:Function_call_interrupts
2450 ± 39% -70.5% 721.75 ± 5% interrupts.CPU88.RES:Rescheduling_interrupts
41267 ± 29% -77.1% 9463 ± 11% interrupts.CPU89.CAL:Function_call_interrupts
3286 ± 34% -73.2% 879.50 ± 17% interrupts.CPU89.RES:Rescheduling_interrupts
36038 ± 18% -50.6% 17796 ± 3% interrupts.CPU9.CAL:Function_call_interrupts
3140 ± 28% -48.3% 1622 ± 9% interrupts.CPU9.RES:Rescheduling_interrupts
38534 ± 25% -77.5% 8675 ± 9% interrupts.CPU90.CAL:Function_call_interrupts
3008 ± 27% -79.1% 629.50 ± 16% interrupts.CPU90.RES:Rescheduling_interrupts
38422 ± 29% -77.2% 8741 ± 14% interrupts.CPU91.CAL:Function_call_interrupts
3095 ± 51% -78.7% 658.75 ± 14% interrupts.CPU91.RES:Rescheduling_interrupts
38120 ± 45% -73.6% 10059 ± 10% interrupts.CPU92.CAL:Function_call_interrupts
2711 ± 61% -73.3% 722.75 ± 10% interrupts.CPU92.RES:Rescheduling_interrupts
37155 ± 19% -74.1% 9628 ± 12% interrupts.CPU93.CAL:Function_call_interrupts
2724 ± 32% -70.4% 806.00 ± 32% interrupts.CPU93.RES:Rescheduling_interrupts
43458 ± 15% -77.1% 9936 ± 10% interrupts.CPU94.CAL:Function_call_interrupts
2832 ± 25% -76.8% 655.75 ± 18% interrupts.CPU94.RES:Rescheduling_interrupts
54226 ± 22% -76.8% 12596 ± 17% interrupts.CPU95.CAL:Function_call_interrupts
4437 ± 37% -80.6% 860.50 ± 11% interrupts.CPU95.RES:Rescheduling_interrupts
302853 ± 2% -58.2% 126676 ± 2% interrupts.RES:Rescheduling_interrupts
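The summary line above is plain relative change between the two commit columns; a quick sketch of the arithmetic for the Rescheduling_interrupts total:

```python
# Recompute the -58.2% on the interrupts.RES summary line above from the
# parent and child column values as printed.
parent, child = 302853, 126676
change = (child - parent) / parent * 100  # relative change in percent
```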
will-it-scale.per_thread_ops
2300 +--------------------------------------------------------------------+
| |
2250 |-+ O O O O O |
2200 |-+ O O O O |
| O O O O O O O O |
2150 |-O O O O O O |
| O O O O |
2100 |-+ + |
| : + + +.. .+. |
2050 |.+ : + .+.. + : : + +. +.. + |
2000 |-+..+ + + : : + + + |
| : : + + |
1950 |-+ +.+.. +..+ |
| + |
1900 +--------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[mm/lru] f21e66f153: vm-scalability.throughput -9.1% regression
by kernel test robot
Greetings,
FYI, we noticed a -9.1% regression of vm-scalability.throughput due to commit:
commit: f21e66f15396d8cc12e5cd3fe0b9925900fd5d3a ("mm/lru: move lock into lru_note_cost")
https://github.com/alexshi/linux.git lruv19.3
in testcase: vm-scalability
on test machine: 104 threads Skylake with 192G memory
with the following parameters:
runtime: 300s
test: lru-file-readonce
cpufreq_governor: performance
ucode: 0x2006906
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen@intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/debian-10.4-x86_64-20200603.cgz/300s/lkp-skl-fpga01/lru-file-readonce/vm-scalability/0x2006906
commit:
03ad758126 ("mm/swap.c: fold vm event PGROTATED into pagevec_move_tail_fn")
f21e66f153 ("mm/lru: move lock into lru_note_cost")
03ad758126af753b f21e66f15396d8cc12e5cd3fe0b
---------------- ---------------------------
%stddev %change %stddev
\ | \
254145 -9.0% 231225 vm-scalability.median
26478241 -9.1% 24077000 vm-scalability.throughput
217.09 +7.6% 233.61 vm-scalability.time.elapsed_time
217.09 +7.6% 233.61 vm-scalability.time.elapsed_time.max
211139 +6.1% 224057 vm-scalability.time.involuntary_context_switches
7624 +2.2% 7792 vm-scalability.time.percent_of_cpu_this_job_got
15658 +10.9% 17361 vm-scalability.time.system_time
895.22 ± 6% -5.8% 843.12 ± 4% vm-scalability.time.user_time
0.13 ± 4% -0.0 0.12 ± 7% mpstat.cpu.all.soft%
4.02 ± 6% -0.5 3.52 ± 4% mpstat.cpu.all.usr%
16894 ± 5% +2338.9% 412019 ±164% cpuidle.C1.usage
32434 ± 2% +35.0% 43778 ± 18% cpuidle.POLL.time
9436 +18.3% 11162 ± 6% cpuidle.POLL.usage
44138 ± 2% +19.8% 52859 meminfo.Active
43471 ± 2% +19.7% 52036 meminfo.Active(anon)
33300 ± 3% -22.7% 25739 ± 8% meminfo.CmaFree
16146301 ± 8% +27.7% 20611319 ± 13% numa-numastat.node0.numa_foreign
16146301 ± 8% +27.7% 20611319 ± 13% numa-numastat.node1.numa_miss
16155502 ± 8% +27.7% 20629193 ± 13% numa-numastat.node1.other_node
25.00 -4.0% 24.00 vmstat.cpu.id
70.00 +2.5% 71.75 vmstat.cpu.sy
2895 -2.4% 2825 vmstat.system.cs
1884 ± 5% +9.6% 2065 ± 4% slabinfo.xfs_btree_cur.active_objs
1884 ± 5% +9.6% 2065 ± 4% slabinfo.xfs_btree_cur.num_objs
1560 ± 6% +10.2% 1720 ± 4% slabinfo.xfs_inode.active_objs
1560 ± 6% +10.2% 1720 ± 4% slabinfo.xfs_inode.num_objs
3454 ± 38% +91.0% 6599 ± 19% numa-meminfo.node0.Active
3181 ± 43% +95.7% 6226 ± 19% numa-meminfo.node0.Active(anon)
41146 ± 5% +13.3% 46637 ± 2% numa-meminfo.node1.Active
40752 ± 5% +13.3% 46186 numa-meminfo.node1.Active(anon)
14444025 ± 21% -25.1% 10818956 ± 2% numa-meminfo.node1.MemFree
9860 ± 17% +20.1% 11843 ± 4% softirqs.CPU0.SCHED
11533 ± 21% -20.9% 9119 ± 5% softirqs.CPU11.RCU
10826 ± 8% -13.0% 9415 ± 7% softirqs.CPU12.RCU
11576 ± 8% -14.4% 9906 ± 6% softirqs.CPU3.RCU
9096 ± 7% +31.4% 11952 ± 10% softirqs.CPU56.RCU
9900 ± 10% -9.8% 8933 ± 2% softirqs.CPU86.RCU
795.75 ± 43% +96.0% 1559 ± 19% numa-vmstat.node0.nr_active_anon
281.75 ± 5% -38.2% 174.25 ± 4% numa-vmstat.node0.nr_isolated_file
795.75 ± 43% +96.0% 1559 ± 19% numa-vmstat.node0.nr_zone_active_anon
8235195 ± 10% +31.5% 10825524 ± 15% numa-vmstat.node0.numa_foreign
10282 ± 5% +13.0% 11624 numa-vmstat.node1.nr_active_anon
8531 ± 4% -25.2% 6379 ± 4% numa-vmstat.node1.nr_free_cma
3661204 ± 22% -25.9% 2712587 ± 4% numa-vmstat.node1.nr_free_pages
283.75 ± 5% -35.3% 183.50 ± 2% numa-vmstat.node1.nr_isolated_file
10282 ± 5% +13.0% 11624 numa-vmstat.node1.nr_zone_active_anon
8236435 ± 10% +31.5% 10827046 ± 15% numa-vmstat.node1.numa_miss
8396469 ± 10% +30.6% 10967008 ± 14% numa-vmstat.node1.numa_other
12.22 ± 6% -32.5% 8.25 ± 4% sched_debug.cfs_rq:/.nr_spread_over.avg
20.76 ± 16% -29.4% 14.66 ± 6% sched_debug.cfs_rq:/.nr_spread_over.stddev
1984278 ± 15% -39.9% 1191650 ± 9% sched_debug.cpu.avg_idle.max
204652 ± 29% -36.9% 129207 ± 17% sched_debug.cpu.avg_idle.stddev
38274 ± 35% -60.0% 15301 ± 40% sched_debug.cpu.max_idle_balance_cost.stddev
6379 ± 10% -18.0% 5232 ± 2% sched_debug.cpu.sched_count.max
1219 -10.6% 1089 ± 3% sched_debug.cpu.sched_count.min
1773 ± 2% -11.7% 1565 sched_debug.cpu.sched_goidle.max
3033 ± 9% -15.3% 2570 ± 4% sched_debug.cpu.ttwu_count.max
547.08 -12.6% 478.08 ± 3% sched_debug.cpu.ttwu_count.min
2672 ± 13% -28.2% 1919 ± 16% sched_debug.cpu.ttwu_local.max
521.00 -12.7% 454.58 ± 2% sched_debug.cpu.ttwu_local.min
2006 ± 23% -35.8% 1287 ± 13% proc-vmstat.compact_daemon_wake
1600 ± 30% -63.1% 590.25 ± 34% proc-vmstat.kswapd_low_wmark_hit_quickly
10996 +18.8% 13064 ± 2% proc-vmstat.nr_active_anon
8416 ± 4% -24.1% 6388 ± 7% proc-vmstat.nr_free_cma
567.25 ± 5% -35.9% 363.50 proc-vmstat.nr_isolated_file
1640 +1.3% 1661 proc-vmstat.nr_page_table_pages
39487 +5.0% 41460 proc-vmstat.nr_shmem
738343 +1.2% 747370 proc-vmstat.nr_slab_reclaimable
10996 +18.8% 13064 ± 2% proc-vmstat.nr_zone_active_anon
35243008 ± 2% +24.8% 43974190 ± 6% proc-vmstat.numa_foreign
650.00 ± 41% +335.7% 2832 ± 90% proc-vmstat.numa_hint_faults_local
35243008 ± 2% +24.8% 43974190 ± 6% proc-vmstat.numa_miss
35278666 ± 2% +24.8% 44010305 ± 6% proc-vmstat.numa_other
2746 ± 12% -31.0% 1894 ± 7% proc-vmstat.pageoutrun
723574 +6.3% 768837 proc-vmstat.pgfault
52140 +6.7% 55636 proc-vmstat.pgreuse
5014763 -1.9% 4917778 proc-vmstat.workingset_nodereclaim
217.25 ± 32% -42.5% 125.00 ± 15% interrupts.CPU100.RES:Rescheduling_interrupts
252.25 ± 48% -51.9% 121.25 ± 22% interrupts.CPU102.RES:Rescheduling_interrupts
736.00 ± 37% -30.4% 512.25 ± 5% interrupts.CPU11.CAL:Function_call_interrupts
205.25 ± 8% +21.6% 249.50 ± 3% interrupts.CPU13.RES:Rescheduling_interrupts
579.75 ± 5% +17.9% 683.75 ± 7% interrupts.CPU2.CAL:Function_call_interrupts
203.50 ± 5% +19.0% 242.25 ± 12% interrupts.CPU21.RES:Rescheduling_interrupts
202.25 ± 12% +30.9% 264.75 ± 12% interrupts.CPU25.RES:Rescheduling_interrupts
253.50 ± 22% +52.2% 385.75 ± 35% interrupts.CPU3.RES:Rescheduling_interrupts
5341 ± 28% +45.0% 7745 ± 2% interrupts.CPU40.NMI:Non-maskable_interrupts
5341 ± 28% +45.0% 7745 ± 2% interrupts.CPU40.PMI:Performance_monitoring_interrupts
206.50 ± 4% +22.9% 253.75 ± 13% interrupts.CPU5.RES:Rescheduling_interrupts
333.50 +35.3% 451.25 ± 7% interrupts.CPU53.RES:Rescheduling_interrupts
282.25 ± 9% +41.5% 399.50 ± 17% interrupts.CPU54.RES:Rescheduling_interrupts
231.75 ± 9% +60.4% 371.75 ± 36% interrupts.CPU56.RES:Rescheduling_interrupts
5627 ± 31% +39.3% 7836 interrupts.CPU60.NMI:Non-maskable_interrupts
5627 ± 31% +39.3% 7836 interrupts.CPU60.PMI:Performance_monitoring_interrupts
197.25 ± 6% -21.2% 155.50 ± 3% interrupts.CPU76.RES:Rescheduling_interrupts
220.50 ± 14% -33.4% 146.75 ± 19% interrupts.CPU86.RES:Rescheduling_interrupts
393.75 ± 94% -63.8% 142.50 ± 19% interrupts.CPU87.RES:Rescheduling_interrupts
190.25 ± 20% -26.7% 139.50 ± 14% interrupts.CPU88.RES:Rescheduling_interrupts
344.75 ± 71% -60.8% 135.25 ± 19% interrupts.CPU89.RES:Rescheduling_interrupts
514.00 ± 3% +42.6% 732.75 ± 43% interrupts.CPU90.CAL:Function_call_interrupts
234.25 ± 40% -35.2% 151.75 ± 14% interrupts.CPU91.RES:Rescheduling_interrupts
208.50 ± 25% -37.6% 130.00 ± 8% interrupts.CPU92.RES:Rescheduling_interrupts
5614 ± 31% +39.8% 7849 interrupts.CPU94.NMI:Non-maskable_interrupts
5614 ± 31% +39.8% 7849 interrupts.CPU94.PMI:Performance_monitoring_interrupts
213.50 ± 43% -39.2% 129.75 ± 14% interrupts.CPU95.RES:Rescheduling_interrupts
149.75 ± 10% -23.9% 114.00 ± 10% interrupts.CPU96.RES:Rescheduling_interrupts
159.50 ± 17% -24.8% 120.00 ± 5% interrupts.CPU97.RES:Rescheduling_interrupts
5600 ± 31% +40.2% 7850 interrupts.CPU99.NMI:Non-maskable_interrupts
5600 ± 31% +40.2% 7850 interrupts.CPU99.PMI:Performance_monitoring_interrupts
364.00 ± 78% -67.9% 116.75 ± 13% interrupts.CPU99.RES:Rescheduling_interrupts
207.50 ± 3% +28.7% 267.00 ± 4% interrupts.IWI:IRQ_work_interrupts
8.072e+09 -5.6% 7.619e+09 perf-stat.i.branch-instructions
63803596 -5.8% 60104498 perf-stat.i.branch-misses
1.766e+08 -8.0% 1.624e+08 perf-stat.i.cache-misses
5.831e+08 -7.3% 5.407e+08 perf-stat.i.cache-references
4.49 +8.9% 4.89 perf-stat.i.cpi
2.166e+11 +1.3% 2.194e+11 perf-stat.i.cpu-cycles
155.22 -5.0% 147.42 perf-stat.i.cpu-migrations
1150 ± 2% +9.7% 1262 perf-stat.i.cycles-between-cache-misses
1.079e+10 -6.1% 1.013e+10 perf-stat.i.dTLB-loads
0.08 ± 3% -0.0 0.07 ± 4% perf-stat.i.dTLB-store-miss-rate%
6332212 ± 5% -25.1% 4744150 ± 4% perf-stat.i.dTLB-store-misses
5.587e+09 -7.3% 5.18e+09 perf-stat.i.dTLB-stores
9797172 -7.7% 9044671 perf-stat.i.iTLB-load-misses
3.991e+10 -6.0% 3.754e+10 perf-stat.i.instructions
0.38 ± 2% -6.0% 0.36 ± 2% perf-stat.i.ipc
0.55 ± 2% -9.5% 0.50 ± 2% perf-stat.i.major-faults
2.08 +1.3% 2.11 perf-stat.i.metric.GHz
241.51 -6.2% 226.45 perf-stat.i.metric.M/sec
3211 -1.6% 3159 perf-stat.i.minor-faults
7582078 +4.4% 7914778 perf-stat.i.node-load-misses
54668956 -10.1% 49167223 perf-stat.i.node-loads
17986536 -10.0% 16186020 perf-stat.i.node-stores
3211 -1.6% 3160 perf-stat.i.page-faults
5.42 +8.1% 5.86 perf-stat.overall.cpi
1225 +10.4% 1352 perf-stat.overall.cycles-between-cache-misses
0.11 ± 5% -0.0 0.09 ± 4% perf-stat.overall.dTLB-store-miss-rate%
4069 +1.9% 4144 perf-stat.overall.instructions-per-iTLB-miss
0.18 -7.5% 0.17 perf-stat.overall.ipc
12.26 +1.6 13.89 perf-stat.overall.node-load-miss-rate%
21.48 +2.0 23.50 ± 2% perf-stat.overall.node-store-miss-rate%
2021 +1.7% 2057 perf-stat.overall.path-length
8.064e+09 -5.2% 7.646e+09 perf-stat.ps.branch-instructions
63692635 -5.4% 60226577 perf-stat.ps.branch-misses
1.764e+08 -7.5% 1.632e+08 perf-stat.ps.cache-misses
5.83e+08 -6.8% 5.433e+08 perf-stat.ps.cache-references
2.162e+11 +2.1% 2.207e+11 perf-stat.ps.cpu-cycles
154.68 -4.8% 147.27 perf-stat.ps.cpu-migrations
1.079e+10 -5.7% 1.017e+10 perf-stat.ps.dTLB-loads
6277767 ± 5% -24.6% 4730694 ± 4% perf-stat.ps.dTLB-store-misses
5.585e+09 -6.9% 5.2e+09 perf-stat.ps.dTLB-stores
9801387 -7.2% 9090820 perf-stat.ps.iTLB-load-misses
3.988e+10 -5.5% 3.768e+10 perf-stat.ps.instructions
0.55 ± 2% -7.9% 0.50 ± 3% perf-stat.ps.major-faults
3197 -1.7% 3143 perf-stat.ps.minor-faults
7610500 +4.6% 7963493 perf-stat.ps.node-load-misses
54486912 -9.4% 49385987 perf-stat.ps.node-loads
17976202 -9.6% 16255865 perf-stat.ps.node-stores
3198 -1.7% 3144 perf-stat.ps.page-faults
8.684e+12 +1.7% 8.835e+12 perf-stat.total.instructions
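As a consistency check on the tables above: the %change column is plain relative change between the parent (03ad758126) and child (f21e66f153) columns, and the perf-stat.overall metrics follow directly from the per-second counters. A sketch of both, using the values as printed (the helper function is illustrative, not part of lkp-tests):

```python
# Recompute derived columns from the raw values reported above.
def pct_change(parent, child):
    return (child - parent) / parent * 100

# vm-scalability headline numbers (parent vs. child column).
median = pct_change(254145, 231225)          # reported as -9.0%
throughput = pct_change(26478241, 24077000)  # reported as -9.1%

# perf-stat.overall metrics from the perf-stat.ps counters (parent column).
cycles = 2.162e11        # perf-stat.ps.cpu-cycles
instructions = 3.988e10  # perf-stat.ps.instructions
cache_misses = 1.764e8   # perf-stat.ps.cache-misses

cpi = cycles / instructions       # reported perf-stat.overall.cpi: 5.42
ipc = instructions / cycles       # reported perf-stat.overall.ipc: 0.18
per_miss = cycles / cache_misses  # reported cycles-between-cache-misses: 1225
```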
34.78 ± 2% -18.8 15.96 ± 2% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.shrink_inactive_list.shrink_lruvec.shrink_node
34.63 ± 2% -18.6 15.99 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.shrink_inactive_list.shrink_lruvec.shrink_node.do_try_to_free_pages
8.36 -0.8 7.57 perf-profile.calltrace.cycles-pp.iomap_apply.iomap_readahead.read_pages.page_cache_readahead_unbounded.generic_file_buffered_read
8.38 -0.8 7.60 perf-profile.calltrace.cycles-pp.read_pages.page_cache_readahead_unbounded.generic_file_buffered_read.xfs_file_buffered_aio_read.xfs_file_read_iter
8.36 -0.8 7.59 perf-profile.calltrace.cycles-pp.iomap_readahead.read_pages.page_cache_readahead_unbounded.generic_file_buffered_read.xfs_file_buffered_aio_read
8.29 -0.8 7.53 perf-profile.calltrace.cycles-pp.iomap_readahead_actor.iomap_apply.iomap_readahead.read_pages.page_cache_readahead_unbounded
7.86 -0.7 7.12 perf-profile.calltrace.cycles-pp.iomap_readpage_actor.iomap_readahead_actor.iomap_apply.iomap_readahead.read_pages
1.33 ± 18% -0.5 0.85 ± 11% perf-profile.calltrace.cycles-pp.free_unref_page_list.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node
3.45 -0.5 2.98 ± 5% perf-profile.calltrace.cycles-pp.copy_page_to_iter.generic_file_buffered_read.xfs_file_buffered_aio_read.xfs_file_read_iter.new_sync_read
4.25 -0.4 3.87 perf-profile.calltrace.cycles-pp.memset_erms.iomap_readpage_actor.iomap_readahead_actor.iomap_apply.iomap_readahead
1.08 ± 18% -0.3 0.74 ± 12% perf-profile.calltrace.cycles-pp.free_pcppages_bulk.free_unref_page_list.shrink_page_list.shrink_inactive_list.shrink_lruvec
1.51 ± 9% -0.3 1.19 ± 13% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.page_cache_readahead_unbounded.generic_file_buffered_read.xfs_file_buffered_aio_read
1.38 ± 10% -0.3 1.07 ± 14% perf-profile.calltrace.cycles-pp.rmqueue.get_page_from_freelist.__alloc_pages_nodemask.page_cache_readahead_unbounded.generic_file_buffered_read
1.17 ± 12% -0.3 0.89 ± 15% perf-profile.calltrace.cycles-pp.rmqueue_bulk.rmqueue.get_page_from_freelist.__alloc_pages_nodemask.page_cache_readahead_unbounded
0.75 ± 5% -0.2 0.58 ± 6% perf-profile.calltrace.cycles-pp.workingset_eviction.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_lruvec
89.80 +1.7 91.46 perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
89.70 +1.7 91.37 perf-profile.calltrace.cycles-pp.xfs_file_read_iter.new_sync_read.vfs_read.ksys_read.do_syscall_64
89.43 +1.7 91.11 perf-profile.calltrace.cycles-pp.xfs_file_buffered_aio_read.xfs_file_read_iter.new_sync_read.vfs_read.ksys_read
89.12 +1.7 90.84 perf-profile.calltrace.cycles-pp.generic_file_buffered_read.xfs_file_buffered_aio_read.xfs_file_read_iter.new_sync_read.vfs_read
84.52 +2.2 86.72 perf-profile.calltrace.cycles-pp.page_cache_readahead_unbounded.generic_file_buffered_read.xfs_file_buffered_aio_read.xfs_file_read_iter.new_sync_read
48.53 +4.0 52.49 perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.page_cache_readahead_unbounded.generic_file_buffered_read.xfs_file_buffered_aio_read.xfs_file_read_iter
46.47 +4.3 50.78 perf-profile.calltrace.cycles-pp.__alloc_pages_slowpath.__alloc_pages_nodemask.page_cache_readahead_unbounded.generic_file_buffered_read.xfs_file_buffered_aio_read
46.37 +4.3 50.68 perf-profile.calltrace.cycles-pp.shrink_node.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask
46.37 +4.3 50.68 perf-profile.calltrace.cycles-pp.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.page_cache_readahead_unbounded
46.37 +4.3 50.69 perf-profile.calltrace.cycles-pp.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.page_cache_readahead_unbounded.generic_file_buffered_read
43.55 +4.3 47.88 perf-profile.calltrace.cycles-pp.shrink_lruvec.shrink_node.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath
43.52 +4.3 47.86 perf-profile.calltrace.cycles-pp.shrink_inactive_list.shrink_lruvec.shrink_node.do_try_to_free_pages.try_to_free_pages
0.00 +23.1 23.11 ± 2% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.lru_note_cost.shrink_inactive_list.shrink_lruvec
0.00 +23.2 23.15 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.lru_note_cost.shrink_inactive_list.shrink_lruvec.shrink_node
0.00 +23.3 23.26 ± 2% perf-profile.calltrace.cycles-pp.lru_note_cost.shrink_inactive_list.shrink_lruvec.shrink_node.do_try_to_free_pages
8.38 -0.8 7.59 perf-profile.children.cycles-pp.read_pages
8.36 -0.8 7.58 perf-profile.children.cycles-pp.iomap_readahead
8.36 -0.8 7.58 perf-profile.children.cycles-pp.iomap_apply
8.30 -0.8 7.53 perf-profile.children.cycles-pp.iomap_readahead_actor
7.88 -0.7 7.15 perf-profile.children.cycles-pp.iomap_readpage_actor
1.80 ± 22% -0.7 1.11 ± 10% perf-profile.children.cycles-pp._raw_spin_lock
0.78 ± 87% -0.7 0.11 ± 31% perf-profile.children.cycles-pp.secondary_startup_64
0.78 ± 87% -0.7 0.11 ± 31% perf-profile.children.cycles-pp.cpu_startup_entry
0.78 ± 87% -0.7 0.11 ± 31% perf-profile.children.cycles-pp.do_idle
0.78 ± 87% -0.7 0.11 ± 31% perf-profile.children.cycles-pp.cpuidle_enter
0.78 ± 87% -0.7 0.11 ± 31% perf-profile.children.cycles-pp.cpuidle_enter_state
0.78 ± 87% -0.7 0.11 ± 31% perf-profile.children.cycles-pp.intel_idle
0.77 ± 87% -0.7 0.11 ± 31% perf-profile.children.cycles-pp.start_secondary
1.47 ± 16% -0.5 0.99 ± 7% perf-profile.children.cycles-pp.free_pcppages_bulk
1.61 ± 14% -0.5 1.14 ± 5% perf-profile.children.cycles-pp.free_unref_page_list
4.29 -0.4 3.90 perf-profile.children.cycles-pp.memset_erms
3.46 -0.4 3.10 ± 2% perf-profile.children.cycles-pp.copy_page_to_iter
1.95 ± 8% -0.3 1.61 ± 4% perf-profile.children.cycles-pp.get_page_from_freelist
3.26 -0.3 2.93 ± 2% perf-profile.children.cycles-pp.copyout
3.25 -0.3 2.92 ± 2% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
3.30 -0.3 2.98 perf-profile.children.cycles-pp.iomap_set_range_uptodate
1.77 ± 9% -0.3 1.46 ± 5% perf-profile.children.cycles-pp.rmqueue
2.82 -0.3 2.51 perf-profile.children.cycles-pp.syscall_exit_to_user_mode
2.30 ± 2% -0.2 2.05 ± 3% perf-profile.children.cycles-pp.__add_to_page_cache_locked
0.97 ± 4% -0.2 0.76 ± 3% perf-profile.children.cycles-pp.workingset_eviction
1.57 -0.2 1.40 ± 2% perf-profile.children.cycles-pp.syscall_return_via_sysret
1.63 -0.2 1.47 ± 2% perf-profile.children.cycles-pp.entry_SYSCALL_64
1.06 ± 2% -0.1 0.92 ± 4% perf-profile.children.cycles-pp.mem_cgroup_charge
1.20 -0.1 1.08 ± 2% perf-profile.children.cycles-pp.__delete_from_page_cache
0.94 -0.1 0.84 perf-profile.children.cycles-pp.xas_store
0.91 ± 4% -0.1 0.82 ± 4% perf-profile.children.cycles-pp.xas_load
0.86 -0.1 0.77 ± 3% perf-profile.children.cycles-pp.ksys_write
0.46 -0.1 0.37 perf-profile.children.cycles-pp.workingset_age_nonresident
0.77 ± 2% -0.1 0.69 perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
0.52 ± 3% -0.1 0.44 ± 3% perf-profile.children.cycles-pp.__mod_memcg_state
0.78 -0.1 0.70 perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.61 ± 2% -0.1 0.54 ± 4% perf-profile.children.cycles-pp.security_file_permission
0.53 -0.1 0.46 perf-profile.children.cycles-pp.xas_create
0.67 -0.1 0.60 ± 4% perf-profile.children.cycles-pp.vfs_write
0.56 ± 3% -0.1 0.50 ± 3% perf-profile.children.cycles-pp.pagecache_get_page
0.54 ± 3% -0.1 0.48 ± 3% perf-profile.children.cycles-pp.find_get_entry
0.38 ± 3% -0.0 0.34 ± 5% perf-profile.children.cycles-pp.common_file_perm
0.45 ± 3% -0.0 0.41 ± 2% perf-profile.children.cycles-pp.xa_load
0.34 -0.0 0.30 ± 3% perf-profile.children.cycles-pp.try_charge
0.62 ± 2% -0.0 0.58 ± 2% perf-profile.children.cycles-pp.isolate_lru_pages
0.36 ± 3% -0.0 0.33 ± 2% perf-profile.children.cycles-pp.unaccount_page_cache_page
0.30 ± 2% -0.0 0.26 ± 4% perf-profile.children.cycles-pp.__count_memcg_events
0.24 ± 3% -0.0 0.20 ± 4% perf-profile.children.cycles-pp.mem_cgroup_charge_statistics
0.19 ± 6% -0.0 0.16 ± 2% perf-profile.children.cycles-pp.workingset_update_node
0.19 ± 3% -0.0 0.16 ± 4% perf-profile.children.cycles-pp.page_counter_try_charge
0.22 ± 4% -0.0 0.20 ± 9% perf-profile.children.cycles-pp.get_mem_cgroup_from_mm
0.23 -0.0 0.21 ± 5% perf-profile.children.cycles-pp.__fdget_pos
0.28 ± 2% -0.0 0.26 ± 4% perf-profile.children.cycles-pp.mem_cgroup_uncharge_list
0.20 ± 2% -0.0 0.17 ± 4% perf-profile.children.cycles-pp.__fget_light
0.24 ± 2% -0.0 0.22 ± 3% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.24 ± 2% -0.0 0.22 perf-profile.children.cycles-pp.__mod_lruvec_state
0.22 ± 3% -0.0 0.20 ± 5% perf-profile.children.cycles-pp.uncharge_batch
0.17 ± 2% -0.0 0.15 ± 4% perf-profile.children.cycles-pp.syscall_enter_from_user_mode
0.21 ± 3% -0.0 0.19 ± 3% perf-profile.children.cycles-pp.__mod_node_page_state
0.16 -0.0 0.14 ± 5% perf-profile.children.cycles-pp.xas_start
0.16 ± 5% -0.0 0.14 ± 6% perf-profile.children.cycles-pp.touch_atime
0.14 ± 3% -0.0 0.12 ± 3% perf-profile.children.cycles-pp.__list_add_valid
0.16 ± 2% -0.0 0.14 perf-profile.children.cycles-pp.xfs_ilock
0.14 ± 3% -0.0 0.12 perf-profile.children.cycles-pp.down_read
0.15 ± 3% -0.0 0.14 perf-profile.children.cycles-pp.___might_sleep
0.17 -0.0 0.16 ± 2% perf-profile.children.cycles-pp.unlock_page
0.08 ± 6% -0.0 0.06 ± 6% perf-profile.children.cycles-pp.aa_file_perm
0.12 -0.0 0.11 ± 4% perf-profile.children.cycles-pp.page_mapping
0.09 -0.0 0.08 ± 5% perf-profile.children.cycles-pp.task_tick_fair
94.88 +1.1 95.98 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
91.95 +1.4 93.36 perf-profile.children.cycles-pp.do_syscall_64
90.61 +1.6 92.17 perf-profile.children.cycles-pp.ksys_read
90.43 +1.6 92.02 perf-profile.children.cycles-pp.vfs_read
89.80 +1.7 91.47 perf-profile.children.cycles-pp.new_sync_read
89.70 +1.7 91.38 perf-profile.children.cycles-pp.xfs_file_read_iter
89.44 +1.7 91.12 perf-profile.children.cycles-pp.xfs_file_buffered_aio_read
89.13 +1.7 90.85 perf-profile.children.cycles-pp.generic_file_buffered_read
84.52 +2.2 86.73 perf-profile.children.cycles-pp.page_cache_readahead_unbounded
66.53 +3.8 70.36 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
48.64 +3.9 52.59 perf-profile.children.cycles-pp.__alloc_pages_nodemask
47.19 +4.2 51.42 perf-profile.children.cycles-pp.shrink_node
44.34 +4.2 48.58 perf-profile.children.cycles-pp.shrink_inactive_list
44.36 +4.2 48.61 perf-profile.children.cycles-pp.shrink_lruvec
46.56 +4.3 50.85 perf-profile.children.cycles-pp.__alloc_pages_slowpath
46.46 +4.3 50.76 perf-profile.children.cycles-pp.do_try_to_free_pages
46.46 +4.3 50.77 perf-profile.children.cycles-pp.try_to_free_pages
37.90 +4.4 42.34 perf-profile.children.cycles-pp._raw_spin_lock_irq
0.00 +23.6 23.60 ± 2% perf-profile.children.cycles-pp.lru_note_cost
0.78 ± 87% -0.7 0.11 ± 31% perf-profile.self.cycles-pp.intel_idle
4.25 -0.4 3.87 perf-profile.self.cycles-pp.memset_erms
3.22 -0.3 2.90 ± 2% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
3.25 -0.3 2.94 perf-profile.self.cycles-pp.iomap_set_range_uptodate
2.73 -0.3 2.44 perf-profile.self.cycles-pp.syscall_exit_to_user_mode
1.57 -0.2 1.40 ± 2% perf-profile.self.cycles-pp.syscall_return_via_sysret
1.49 -0.1 1.34 perf-profile.self.cycles-pp.entry_SYSCALL_64
0.51 ± 7% -0.1 0.39 ± 5% perf-profile.self.cycles-pp.workingset_eviction
0.46 -0.1 0.37 perf-profile.self.cycles-pp.workingset_age_nonresident
0.52 ± 4% -0.1 0.44 ± 2% perf-profile.self.cycles-pp.__mod_memcg_state
0.75 ± 4% -0.1 0.68 ± 4% perf-profile.self.cycles-pp.xas_load
0.45 -0.1 0.39 ± 2% perf-profile.self.cycles-pp.xas_create
0.28 -0.0 0.23 ± 5% perf-profile.self.cycles-pp.generic_file_buffered_read
0.30 ± 2% -0.0 0.26 ± 3% perf-profile.self.cycles-pp.__count_memcg_events
0.23 ± 5% -0.0 0.19 ± 3% perf-profile.self.cycles-pp.mem_cgroup_charge
0.27 ± 2% -0.0 0.24 ± 3% perf-profile.self.cycles-pp.rmqueue_bulk
0.29 ± 3% -0.0 0.26 perf-profile.self.cycles-pp.iomap_readpage_actor
0.46 ± 2% -0.0 0.43 perf-profile.self.cycles-pp.free_pcppages_bulk
0.22 ± 4% -0.0 0.20 ± 9% perf-profile.self.cycles-pp.get_mem_cgroup_from_mm
0.37 -0.0 0.34 ± 2% perf-profile.self.cycles-pp.shrink_page_list
0.32 ± 2% -0.0 0.29 perf-profile.self.cycles-pp.__pagevec_lru_add_fn
0.19 ± 2% -0.0 0.16 ± 5% perf-profile.self.cycles-pp.__fget_light
0.17 ± 2% -0.0 0.15 ± 2% perf-profile.self.cycles-pp.page_counter_try_charge
0.16 -0.0 0.14 ± 3% perf-profile.self.cycles-pp.syscall_enter_from_user_mode
0.31 -0.0 0.29 ± 2% perf-profile.self.cycles-pp.__add_to_page_cache_locked
0.13 ± 3% -0.0 0.11 ± 4% perf-profile.self.cycles-pp.vfs_read
0.21 ± 2% -0.0 0.19 ± 3% perf-profile.self.cycles-pp.__mod_node_page_state
0.15 ± 4% -0.0 0.13 ± 3% perf-profile.self.cycles-pp.___might_sleep
0.07 -0.0 0.06 ± 9% perf-profile.self.cycles-pp.ksys_write
0.18 ± 2% -0.0 0.17 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.11 ± 4% -0.0 0.10 perf-profile.self.cycles-pp.workingset_update_node
0.11 ± 4% -0.0 0.09 ± 4% perf-profile.self.cycles-pp.mark_page_accessed
0.07 -0.0 0.06 ± 7% perf-profile.self.cycles-pp.__might_sleep
0.12 -0.0 0.11 ± 4% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.06 -0.0 0.05 perf-profile.self.cycles-pp.__x64_sys_write
0.11 ± 3% +0.0 0.14 ± 3% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.lru_note_cost
66.53 +3.8 70.36 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
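The calltraces above show the child commit taking the lru lock twice per reclaim pass (once in shrink_inactive_list, once more in lru_note_cost) where the parent took it once. A minimal toy sketch of that pattern in plain Python threading -- not the kernel code -- just to make the extra serialization point per pass explicit:

```python
import threading

lock = threading.Lock()

def reclaim_pass(counter, split_note_cost):
    with lock:                  # stands in for the shrink_inactive_list section
        counter[0] += 1
    if split_note_cost:
        with lock:              # stands in for a separate lru_note_cost section
            counter[0] += 1

def run(passes, split):
    # Count lock acquisitions across a batch of concurrent "reclaim passes".
    counter = [0]
    threads = [threading.Thread(target=reclaim_pass, args=(counter, split))
               for _ in range(passes)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter[0]

parent_acquisitions = run(100, split=False)  # one lock section per pass
child_acquisitions = run(100, split=True)    # two lock sections per pass
```

With the critical section split, each pass contends for the same lock twice, which is consistent with the profile moving lock-slowpath cycles into lru_note_cost rather than eliminating them.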
vm-scalability.median
260000 +------------------------------------------------------------------+
|.+..+.+..+.+.+..+.+..+.+.+..+.+..+.+.+..+.+.+..+.+..+.+.+..+.+ |
250000 |-+ |
240000 |-+ |
| O |
230000 |-+ O O O O O O |
| |
220000 |-+ |
| |
210000 |-+ |
200000 |-+ |
| O O O O O O O O |
190000 |-+ O O O O O O O |
| O O O O O |
180000 +------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Thanks,
Rong Chen
[perf tools] 77b66fd551: perf-sanity-tests.'import_perf'_in_python.fail
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-9):
commit: 77b66fd55195289f5361af4f8a0978a3a90b9363 ("[PATCH 3/4] perf tools: Copy metric events properly when multiply cgroups")
url: https://github.com/0day-ci/linux/commits/Namhyung-Kim/perf-stat-Add-multi...
base: https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git 2cb5383b30d47c446ec7d884cd80f93ffcc31817
in testcase: perf-sanity-tests
version: perf-x86_64-34d4ddd359db-1_20200909
with following parameters:
perf_compiler: gcc
ucode: 0xdc
on test machine: 8 threads Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz with 16G memory
caused the changes below (please refer to the attached dmesg/kmsg for the full log/backtrace):
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen@intel.com>
2020-09-23 04:43:45 sudo /usr/src/perf_selftests-x86_64-rhel-8.3-77b66fd55195289f5361af4f8a0978a3a90b9363/tools/perf/perf test 1
1: vmlinux symtab matches kallsyms : Ok
2020-09-23 04:43:45 sudo /usr/src/perf_selftests-x86_64-rhel-8.3-77b66fd55195289f5361af4f8a0978a3a90b9363/tools/perf/perf test 2
2: Detect openat syscall event : Ok
2020-09-23 04:43:45 sudo /usr/src/perf_selftests-x86_64-rhel-8.3-77b66fd55195289f5361af4f8a0978a3a90b9363/tools/perf/perf test 3
3: Detect openat syscall event on all cpus : Ok
2020-09-23 04:43:45 sudo /usr/src/perf_selftests-x86_64-rhel-8.3-77b66fd55195289f5361af4f8a0978a3a90b9363/tools/perf/perf test 4
4: Read samples using the mmap interface : Ok
2020-09-23 04:43:45 sudo /usr/src/perf_selftests-x86_64-rhel-8.3-77b66fd55195289f5361af4f8a0978a3a90b9363/tools/perf/perf test 5
5: Test data source output : Ok
2020-09-23 04:43:45 sudo /usr/src/perf_selftests-x86_64-rhel-8.3-77b66fd55195289f5361af4f8a0978a3a90b9363/tools/perf/perf test 6
6: Parse event definition strings : Ok
2020-09-23 04:43:46 sudo /usr/src/perf_selftests-x86_64-rhel-8.3-77b66fd55195289f5361af4f8a0978a3a90b9363/tools/perf/perf test 7
7: Simple expression parser : Ok
2020-09-23 04:43:46 sudo /usr/src/perf_selftests-x86_64-rhel-8.3-77b66fd55195289f5361af4f8a0978a3a90b9363/tools/perf/perf test 8
8: PERF_RECORD_* events & perf_sample fields : Ok
2020-09-23 04:43:48 sudo /usr/src/perf_selftests-x86_64-rhel-8.3-77b66fd55195289f5361af4f8a0978a3a90b9363/tools/perf/perf test 9
9: Parse perf pmu format : Ok
2020-09-23 04:43:48 sudo /usr/src/perf_selftests-x86_64-rhel-8.3-77b66fd55195289f5361af4f8a0978a3a90b9363/tools/perf/perf test 10
10: PMU events :
10.1: PMU event table sanity : Ok
10.2: PMU event map aliases : Ok
10.3: Parsing of PMU event table metrics : Skip (some metrics failed)
10.4: Parsing of PMU event table metrics with fake PMUs : Ok
2020-09-23 04:43:48 sudo /usr/src/perf_selftests-x86_64-rhel-8.3-77b66fd55195289f5361af4f8a0978a3a90b9363/tools/perf/perf test 11
11: DSO data read : Ok
2020-09-23 04:43:48 sudo /usr/src/perf_selftests-x86_64-rhel-8.3-77b66fd55195289f5361af4f8a0978a3a90b9363/tools/perf/perf test 12
12: DSO data cache : Ok
2020-09-23 04:43:48 sudo /usr/src/perf_selftests-x86_64-rhel-8.3-77b66fd55195289f5361af4f8a0978a3a90b9363/tools/perf/perf test 13
13: DSO data reopen : Ok
2020-09-23 04:43:48 sudo /usr/src/perf_selftests-x86_64-rhel-8.3-77b66fd55195289f5361af4f8a0978a3a90b9363/tools/perf/perf test 14
14: Roundtrip evsel->name : Ok
2020-09-23 04:43:48 sudo /usr/src/perf_selftests-x86_64-rhel-8.3-77b66fd55195289f5361af4f8a0978a3a90b9363/tools/perf/perf test 15
15: Parse sched tracepoints fields : Ok
2020-09-23 04:43:48 sudo /usr/src/perf_selftests-x86_64-rhel-8.3-77b66fd55195289f5361af4f8a0978a3a90b9363/tools/perf/perf test 16
16: syscalls:sys_enter_openat event fields : Ok
2020-09-23 04:43:48 sudo /usr/src/perf_selftests-x86_64-rhel-8.3-77b66fd55195289f5361af4f8a0978a3a90b9363/tools/perf/perf test 17
17: Setup struct perf_event_attr : Ok
2020-09-23 04:43:54 sudo /usr/src/perf_selftests-x86_64-rhel-8.3-77b66fd55195289f5361af4f8a0978a3a90b9363/tools/perf/perf test 18
18: Match and link multiple hists : Ok
2020-09-23 04:43:54 sudo /usr/src/perf_selftests-x86_64-rhel-8.3-77b66fd55195289f5361af4f8a0978a3a90b9363/tools/perf/perf test 19
19: 'import perf' in python : FAILED!
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Rong Chen
[lib/scatterlist] 71606c3597: last_state.is_incomplete_run
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-9):
commit: 71606c35971cec8fec875951914bb54d94af108b ("[PATCH rdma-next v2 1/2] lib/scatterlist: Add support in dynamic allocation of SG table from pages")
url: https://github.com/0day-ci/linux/commits/Leon-Romanovsky/scatterlist-add-...
base: https://git.kernel.org/cgit/linux/kernel/git/rdma/rdma.git for-next
in testcase: phoronix-test-suite
version:
with following parameters:
test: clpeak-1.0.1
option_a: Global Memory Bandwidth
cpufreq_governor: performance
ucode: 0xdc
test-description: The Phoronix Test Suite is the most comprehensive testing and benchmarking platform available, providing an extensible framework to which new tests can be easily added.
test-url: http://www.phoronix-test-suite.com/
on test machine: 8 threads Intel(R) Core(TM) i7-6770HQ CPU @ 2.60GHz with 32G memory
caused the changes below (please refer to the attached dmesg/kmsg for the full log/backtrace):
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen@intel.com>
[ 22.388431] Kernel tests: Boot OK!
[ 22.388433]
[ 22.626333] /lkp/carel/src/bin/run-lkp
[ 22.626335]
[ 31.302985] RESULT_ROOT=/result/phoronix-test-suite/performance-Global_Memory_Bandwidth-clpeak-1.0.1-ucode=0xdc-monitor=71fc66a9/lkp-skl-nuc2/debian-x86_64-phoronix/x86_64-rhel-8.3/gcc-9/71606c35971cec8fec875951914bb54d94af108b/2
[ 31.302988]
[ 31.332075] job=/lkp/jobs/scheduled/lkp-skl-nuc2/phoronix-test-suite-performance-Global_Memory_Bandwidth-clpeak-1.0.1-ucode=0xdc-monitor=71fc66a9-debian-x86_64-phoronix-71606c35-20200925-23721-188s5ja-0.yaml
[ 31.332076]
[ 33.124259] result_service=inn:/result, RESULT_MNT=/inn/result, RESULT_ROOT=/inn/result/phoronix-test-suite/performance-Global_Memory_Bandwidth-clpeak-1.0.1-ucode=0xdc-monitor=71fc66a9/lkp-skl-nuc2/debian-x86_64-phoronix/x86_64-rhel-8.3/gcc-9/71606c35971cec8fec875951914bb54d94af108b/2
[ 33.124262]
[ 33.156070] mount.nfs: try 1 time... mount.nfs -o vers=3 inn:/result /inn/result
[ 33.156071]
[ 33.171554] run-job /lkp/jobs/scheduled/lkp-skl-nuc2/phoronix-test-suite-performance-Global_Memory_Bandwidth-clpeak-1.0.1-ucode=0xdc-monitor=71fc66a9-debian-x86_64-phoronix-71606c35-20200925-23721-188s5ja-0.yaml
[ 33.171555]
[ 35.140681] /usr/bin/wget -q --timeout=1800 --tries=1 --local-encoding=UTF-8 http://inn:80/~carel/cgi-bin/lkp-jobfile-append-var?job_file=/lkp/jobs/scheduled/lkp-skl-nuc2/phoronix-test-suite-performance-Global_Memory_Bandwidth-clpeak-1.0.1-ucode=0xdc-monitor=71fc66a9-debian-x86_64-phoronix-71606c35-20200925-23721-188s5ja-0.yaml&job_state=running -O /dev/null
[ 35.140684]
[ 35.179165] target ucode: 0xdc
[ 35.179166]
[ 35.186302] current_version: dc, target_version: dc
[ 35.186303]
[ 35.194877] 2020-09-25 07:30:44
[ 35.194878]
[ 35.202376] for cpu_dir in /sys/devices/system/cpu/cpu[0-9]*
[ 35.202377]
[ 35.211365] do
[ 35.211366]
[ 35.216823] online_file="$cpu_dir"/online
[ 35.216825]
[ 35.225783] [ -f "$online_file" ] && [ "$(cat "$online_file")" -eq 0 ] && continue
[ 35.225784]
[ 35.236886]
[ 35.240094] file="$cpu_dir"/cpufreq/scaling_governor
[ 35.240095]
[ 35.249739] [ -f "$file" ] && echo "performance" > "$file"
[ 35.249740]
[ 35.258941] done
[ 35.258942]
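The governor-setting loop echoed in the console log above can be reassembled into a runnable helper. This is a reconstruction for illustration only: the sysfs root is passed as a parameter here, whereas the logged run iterated /sys/devices/system/cpu directly (as root).

```shell
# Reconstruction of the cpufreq setup loop echoed in the log above.
# set_performance_governor [sysfs_cpu_root]
#   - skips CPUs whose "online" file reads 0
#   - writes "performance" into each remaining CPU's scaling_governor
set_performance_governor() {
    sys_cpu="${1:-/sys/devices/system/cpu}"
    for cpu_dir in "$sys_cpu"/cpu[0-9]*; do
        online_file="$cpu_dir"/online
        # cpu0 usually has no "online" file; treat a missing file as online.
        [ -f "$online_file" ] && [ "$(cat "$online_file")" -eq 0 ] && continue
        gov_file="$cpu_dir"/cpufreq/scaling_governor
        [ -f "$gov_file" ] && echo "performance" > "$gov_file"
    done
    return 0
}
```

Writing scaling_governor requires root; offline CPUs are skipped because their cpufreq directory may be absent or stale.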
[ 35.264063]
[ 35.266964] perf version 4.19.37
[ 35.266965]
[ 35.274101] PTS_SILENT_MODE=1
[ 35.274102]
[ 81.870358] IPMI BMC is not supported on this machine, skip bmc-watchdog setup!
[ 81.870360]
[ 122.717329] perf: interrupt took too long (2638 > 2500), lowering kernel.perf_event_max_sample_rate to 75000
[ 122.910327] perf: interrupt took too long (3385 > 3297), lowering kernel.perf_event_max_sample_rate to 59000
[ 123.859332] perf: interrupt took too long (4332 > 4231), lowering kernel.perf_event_max_sample_rate to 46000
[ 124.677330] perf: interrupt took too long (5501 > 5415), lowering kernel.perf_event_max_sample_rate to 36000
[ 125.706330] perf: interrupt took too long (6928 > 6876), lowering kernel.perf_event_max_sample_rate to 28000
[ 128.558329] perf: interrupt took too long (8674 > 8660), lowering kernel.perf_event_max_sample_rate to 23000
[ 230.738516] ------------[ cut here ]------------
[ 230.744364] kernel BUG at kernel/dma/mapping.c:186!
[ 230.750349] invalid opcode: 0000 [#1] SMP PTI
[ 230.755904] CPU: 6 PID: 4235 Comm: clpeak Tainted: G I 5.9.0-rc3-00097-g71606c35971cec #1
[ 230.766864] Hardware name: /NUC6i7KYB, BIOS KYSKLi70.86A.0041.2016.0817.1130 08/17/2016
[ 230.776355] RIP: 0010:dma_map_sg_attrs+0x3e/0x40
[ 230.782124] Code: 76 1d 0f 0b 48 8b 05 81 c1 dd 01 83 f9 02 77 f2 48 85 c0 75 0a e8 12 13 00 00 85 c0 78 0c c3 48 8b 40 30 e8 a4 d9 c8 00 eb f0 <0f> 0b 0f 1f 44 00 00 48 8b 87 40 02 00 00 48 85 c0 48 0f 44 05 49
[ 230.804046] RSP: 0018:ffffc9000307b9b8 EFLAGS: 00010282
[ 230.810547] RAX: 00000000bad55da1 RBX: ffff8888b66c2d00 RCX: 0000000000000000
[ 230.819149] RDX: 00000000bad55da1 RSI: ffff8888baf4d020 RDI: ffff8888be8c20b0
[ 230.819172] ------------[ cut here ]------------
[ 230.827838] RBP: ffff8888bad55a50 R08: 0000000000000130 R09: ffff8888baf4d020
[ 230.827839] R10: 0000000000000000 R11: 0000000000000001 R12: ffff8888bb347638
[ 230.827839] R13: ffff8888b66c2d00 R14: 0000000000001000 R15: ffff8888bad55a50
[ 230.827840] FS: 00007fc7f2960740(0000) GS:ffff8888bed80000(0000) knlGS:0000000000000000
[ 230.827841] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 230.827841] CR2: 00005579ebdbe000 CR3: 00000008bbc5a004 CR4: 00000000003706e0
[ 230.827842] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 230.827842] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 230.827842] Call Trace:
[ 230.827884] i915_gem_gtt_prepare_pages+0x55/0x80 [i915]
[ 230.833614] kernel BUG at kernel/dma/mapping.c:186!
[ 230.842130] __i915_gem_userptr_alloc_pages+0x72/0x140 [i915]
[ 230.842210] i915_gem_userptr_get_pages+0x14b/0x260 [i915]
[ 230.930488] __i915_gem_object_get_pages+0x55/0x60 [i915]
[ 230.936965] i915_vma_pin+0x439/0x740 [i915]
[ 230.942170] ? kmem_cache_alloc+0x3ff/0x480
[ 230.947196] eb_lookup_vmas+0x242/0x9a0 [i915]
[ 230.952615] i915_gem_do_execbuffer+0x269/0xd40 [i915]
[ 230.958708] ? iov_iter_copy_from_user_atomic+0xcb/0x380
[ 230.965010] ? ___perf_sw_event+0x102/0x140
[ 230.970108] i915_gem_execbuffer2_ioctl+0x23b/0x4a0 [i915]
[ 230.976730] ? page_add_new_anon_rmap+0xa3/0x200
[ 230.982331] ? i915_gem_execbuffer_ioctl+0x300/0x300 [i915]
[ 230.989012] drm_ioctl_kernel+0xaa/0x100 [drm]
[ 230.994412] drm_ioctl+0x20c/0x3a0 [drm]
[ 230.999325] ? i915_gem_execbuffer_ioctl+0x300/0x300 [i915]
[ 231.005861] ? handle_mm_fault+0x229/0x2a0
[ 231.010788] ? do_user_addr_fault+0x21b/0x400
[ 231.016037] __x64_sys_ioctl+0x83/0xb0
[ 231.020749] do_syscall_64+0x33/0x40
[ 231.025139] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 231.031147] RIP: 0033:0x7fc7f2ba33e7
[ 231.035638] Code: 00 00 90 48 8b 05 a9 7a 0c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 79 7a 0c 00 f7 d8 64 89 01 48
[ 231.057103] RSP: 002b:00007fff4c962b18 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[ 231.065859] RAX: ffffffffffffffda RBX: 00007fff4c962b60 RCX: 00007fc7f2ba33e7
[ 231.074066] RDX: 00007fff4c962b60 RSI: 0000000040406469 RDI: 0000000000000007
[ 231.082446] RBP: 0000000040406469 R08: 0000000000000000 R09: 0000000000000001
[ 231.090823] R10: 0000000000000050 R11: 0000000000000246 R12: 0000000000000002
[ 231.099063] R13: 00005579ebdbae00 R14: 00007fff4c962d90 R15: 00007fff4c962c40
[ 231.107356] Modules linked in: iptable_filter netconsole sg ip_tables overlay rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver btrfs blake2b_generic xor zstd_compress raid6_pq libcrc32c sd_mod t10_pi intel_rapl_msr intel_rapl_common x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel i915 kvm ahci irqbypass intel_gtt crct10dif_pclmul drm_kms_helper crc32_pclmul sdhci_pci crc32c_intel syscopyarea intel_wmi_thunderbolt libahci ghash_clmulni_intel sysfillrect cqhci ir_rc6_decoder rapl sysimgblt fb_sys_fops intel_cstate mei_me sdhci rc_rc6_mce drm libata mmc_core intel_uncore mei intel_pch_thermal joydev wmi nuvoton_cir rc_core video intel_pmc_core acpi_pad
[ 231.173027] invalid opcode: 0000 [#2] SMP PTI
[ 231.173041] ---[ end trace 7f1e0f1bbb700472 ]---
[ 231.178562] CPU: 7 PID: 314 Comm: kworker/u17:1 Tainted: G D I 5.9.0-rc3-00097-g71606c35971cec #1
[ 231.178563] Hardware name: /NUC6i7KYB, BIOS KYSKLi70.86A.0041.2016.0817.1130 08/17/2016
[ 231.178590] Workqueue: i915-userptr-acquire __i915_gem_userptr_get_pages_worker [i915]
[ 231.184376] RIP: 0010:dma_map_sg_attrs+0x3e/0x40
[ 231.195984] RIP: 0010:dma_map_sg_attrs+0x3e/0x40
[ 231.195985] Code: 76 1d 0f 0b 48 8b 05 81 c1 dd 01 83 f9 02 77 f2 48 85 c0 75 0a e8 12 13 00 00 85 c0 78 0c c3 48 8b 40 30 e8 a4 d9 c8 00 eb f0 <0f> 0b 0f 1f 44 00 00 48 8b 87 40 02 00 00 48 85 c0 48 0f 44 05 49
[ 231.195986] RSP: 0018:ffffc900004d3dc8 EFLAGS: 00010282
[ 231.195986] RAX: 00000000bb1f1d40 RBX: ffff8888b53fc600 RCX: 0000000000000000
[ 231.195987] RDX: 00000000bb1f1d40 RSI: ffff8888b6743000 RDI: ffff8888be8c20b0
[ 231.195987] RBP: ffff8888bb1f16d0 R08: 0000000000000130 R09: ffff88887f20e800
[ 231.195988] R10: 000000000003ffbf R11: 0000000000000230 R12: ffff88883f400000
[ 231.195988] R13: ffff8888b53fc600 R14: 0000000040000000 R15: ffff8888bb1f16d0
[ 231.195989] FS: 0000000000000000(0000) GS:ffff8888bedc0000(0000) knlGS:0000000000000000
[ 231.195991] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 231.195992] CR2: 00007fed8ec66000 CR3: 00000008bd60a002 CR4: 00000000003706e0
[ 231.205376] Code: 76 1d 0f 0b 48 8b 05 81 c1 dd 01 83 f9 02 77 f2 48 85 c0 75 0a e8 12 13 00 00 85 c0 78 0c c3 48 8b 40 30 e8 a4 d9 c8 00 eb f0 <0f> 0b 0f 1f 44 00 00 48 8b 87 40 02 00 00 48 85 c0 48 0f 44 05 49
[ 231.214554] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 231.214554] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 231.214554] Call Trace:
[ 231.214598] i915_gem_gtt_prepare_pages+0x55/0x80 [i915]
[ 231.214656] __i915_gem_userptr_alloc_pages+0x72/0x140 [i915]
[ 231.220612] RSP: 0018:ffffc9000307b9b8 EFLAGS: 00010282
[ 231.226543] __i915_gem_userptr_get_pages_worker+0x266/0x2a0 [i915]
[ 231.226546] process_one_work+0x1b5/0x3a0
[ 231.248132] RAX: 00000000bad55da1 RBX: ffff8888b66c2d00 RCX: 0000000000000000
[ 231.254619] ? process_one_work+0x3a0/0x3a0
[ 231.254620] worker_thread+0x50/0x3c0
[ 231.254621] ? process_one_work+0x3a0/0x3a0
[ 231.254623] kthread+0x114/0x160
[ 231.254625] ? kthread_park+0xa0/0xa0
[ 231.254626] ret_from_fork+0x22/0x30
[ 231.254627] Modules linked in: iptable_filter netconsole sg ip_tables overlay rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver btrfs blake2b_generic xor zstd_compress raid6_pq libcrc32c sd_mod t10_pi intel_rapl_msr intel_rapl_common x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel i915 kvm ahci irqbypass intel_gtt crct10dif_pclmul drm_kms_helper crc32_pclmul sdhci_pci crc32c_intel syscopyarea intel_wmi_thunderbolt libahci ghash_clmulni_intel sysfillrect cqhci ir_rc6_decoder rapl sysimgblt fb_sys_fops intel_cstate mei_me sdhci rc_rc6_mce drm libata mmc_core intel_uncore mei intel_pch_thermal joydev wmi nuvoton_cir rc_core video intel_pmc_core acpi_pad
[ 231.263144] RDX: 00000000bad55da1 RSI: ffff8888baf4d020 RDI: ffff8888be8c20b0
[ 231.271667] ---[ end trace 7f1e0f1bbb700473 ]---
[ 231.280128] RBP: ffff8888bad55a50 R08: 0000000000000130 R09: ffff8888baf4d020
[ 231.280130] R10: 0000000000000000 R11: 0000000000000001 R12: ffff8888bb347638
[ 231.288647] RIP: 0010:dma_map_sg_attrs+0x3e/0x40
[ 231.288648] Code: 76 1d 0f 0b 48 8b 05 81 c1 dd 01 83 f9 02 77 f2 48 85 c0 75 0a e8 12 13 00 00 85 c0 78 0c c3 48 8b 40 30 e8 a4 d9 c8 00 eb f0 <0f> 0b 0f 1f 44 00 00 48 8b 87 40 02 00 00 48 85 c0 48 0f 44 05 49
[ 231.288649] RSP: 0018:ffffc9000307b9b8 EFLAGS: 00010282
[ 231.297315] R13: ffff8888b66c2d00 R14: 0000000000001000 R15: ffff8888bad55a50
[ 231.297316] FS: 00007fc7f2960740(0000) GS:ffff8888bed80000(0000) knlGS:0000000000000000
[ 231.297318] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 231.306740] RAX: 00000000bad55da1 RBX: ffff8888b66c2d00 RCX: 0000000000000000
[ 231.306740] RDX: 00000000bad55da1 RSI: ffff8888baf4d020 RDI: ffff8888be8c20b0
[ 231.306741] RBP: ffff8888bad55a50 R08: 0000000000000130 R09: ffff8888baf4d020
[ 231.306741] R10: 0000000000000000 R11: 0000000000000001 R12: ffff8888bb347638
[ 231.306743] R13: ffff8888b66c2d00 R14: 0000000000001000 R15: ffff8888bad55a50
[ 231.313753] CR2: 00005579ebdbe000 CR3: 00000008bbc5a004 CR4: 00000000003706e0
[ 231.313754] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 231.313754] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 231.313755] Kernel panic - not syncing: Fatal exception
[ 231.322158] FS: 0000000000000000(0000) GS:ffff8888bedc0000(0000) knlGS:0000000000000000
[ 231.343770] Kernel Offset: disabled
[ 231.676344] ACPI MEMORY or I/O RESET_REG.
Thanks,
Rong Chen
[ext4] 4e8fc10115: fio.write_iops 330.6% improvement
by kernel test robot
Greetings,
FYI, we noticed a 330.6% improvement in fio.write_iops due to commit:
commit: 4e8fc10115a6978060fe8a90f6a3a05463fa0660 ("[PATCHv3 1/1] ext4: Optimize file overwrites")
url: https://github.com/0day-ci/linux/commits/Ritesh-Harjani/Optimize-ext4-fil...
base: https://git.kernel.org/cgit/linux/kernel/git/tytso/ext4.git dev
in testcase: fio-basic
on test machine: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 256G memory
with following parameters:
disk: 2pmem
fs: ext4
mount_option: dax
runtime: 200s
nr_task: 50%
time_based: tb
rw: write
bs: 4k
ioengine: sync
test_size: 200G
cpufreq_governor: performance
ucode: 0x5002f01
test-description: Fio is a tool that will spawn a number of threads or processes doing a particular type of I/O action as specified by the user.
test-url: https://github.com/axboe/fio
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/mount_option/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/time_based/ucode:
4k/gcc-9/performance/2pmem/ext4/sync/x86_64-rhel-8.3/dax/50%/debian-10.4-x86_64-20200603.cgz/200s/write/lkp-csl-2sp6/200G/fio-basic/tb/0x5002f01
commit:
27bc446e2d ("ext4: limit the length of per-inode prealloc list")
4e8fc10115 ("ext4: Optimize file overwrites")
27bc446e2def38db 4e8fc10115a6978060fe8a90f6a
---------------- ---------------------------
%stddev %change %stddev
\ | \
0.12 ±106% -0.1 0.01 fio.latency_100us%
51.38 ± 23% -48.5 2.85 ± 20% fio.latency_20us%
0.01 +16.6 16.64 ± 28% fio.latency_2us%
0.24 ±135% +54.7 54.89 ± 3% fio.latency_4us%
32.62 ± 18% -31.7 0.91 ± 15% fio.latency_50us%
14780 ± 3% -9.4% 13390 fio.time.involuntary_context_switches
9299 -7.0% 8647 fio.time.system_time
228.71 ± 4% +281.9% 873.42 ± 6% fio.time.user_time
23448 -6.5% 21915 fio.time.voluntary_context_switches
5.426e+08 ± 5% +330.6% 2.337e+09 ± 6% fio.workload
10597 ± 5% +330.6% 45638 ± 6% fio.write_bw_MBps
26944 ± 8% -76.8% 6240 ± 9% fio.write_clat_90%_us
30368 ± 8% -72.0% 8512 ± 11% fio.write_clat_95%_us
38016 ± 9% -49.0% 19392 ± 4% fio.write_clat_99%_us
17448 ± 5% -77.9% 3855 ± 7% fio.write_clat_mean_us
11052 ± 32% -68.3% 3502 ± 10% fio.write_clat_stddev
2713004 ± 5% +330.6% 11683335 ± 6% fio.write_iops
13639680 ± 7% +26.6% 17267712 ± 5% meminfo.DirectMap2M
2704 ± 97% +131.9% 6269 ± 26% numa-meminfo.node0.PageTables
676.50 ± 96% +131.1% 1563 ± 26% numa-vmstat.node0.nr_page_table_pages
48.36 -6.8% 45.09 iostat.cpu.system
1.21 ± 4% +271.5% 4.51 ± 6% iostat.cpu.user
0.74 ± 2% +0.1 0.81 ± 5% mpstat.cpu.all.irq%
1.22 ± 4% +3.3 4.55 ± 6% mpstat.cpu.all.usr%
541348 +1.4% 548949 proc-vmstat.nr_file_pages
245833 +2.9% 252840 proc-vmstat.nr_unevictable
245833 +2.9% 252840 proc-vmstat.nr_zone_unevictable
695285 ± 20% -12.6% 607417 ± 17% proc-vmstat.pgfree
601976 ± 2% +22.0% 734594 ± 2% sched_debug.cpu.avg_idle.avg
1001923 +9.0% 1092207 ± 5% sched_debug.cpu.avg_idle.max
372963 -25.8% 276657 ± 6% sched_debug.cpu.avg_idle.stddev
22130 ± 17% +36.2% 30133 ± 14% sched_debug.cpu.nr_switches.max
3374 ± 18% +28.5% 4336 ± 10% sched_debug.cpu.nr_switches.stddev
-47.00 -45.7% -25.50 sched_debug.cpu.nr_uninterruptible.min
2816 ± 21% +36.5% 3844 ± 13% sched_debug.cpu.sched_count.stddev
26.69 ± 13% -44.0% 14.94 ± 17% sched_debug.cpu.sched_goidle.min
1424 ± 21% +36.2% 1941 ± 13% sched_debug.cpu.sched_goidle.stddev
1411 ± 18% +31.9% 1861 ± 12% sched_debug.cpu.ttwu_count.stddev
15.42 ± 3% -82.8% 2.66 ± 8% perf-stat.i.MPKI
3.417e+09 ± 4% +239.7% 1.161e+10 ± 6% perf-stat.i.branch-instructions
0.72 -0.1 0.64 perf-stat.i.branch-miss-rate%
24883051 ± 3% +181.5% 70036819 ± 4% perf-stat.i.branch-misses
97563341 ± 12% -58.3% 40638724 ± 14% perf-stat.i.cache-misses
2.96e+08 ± 2% -48.4% 1.529e+08 ± 11% perf-stat.i.cache-references
7.06 ± 4% -70.7% 2.06 ± 5% perf-stat.i.cpi
1461 ± 14% +170.2% 3948 ± 19% perf-stat.i.cycles-between-cache-misses
6.17e+09 ± 4% +243.3% 2.119e+10 ± 6% perf-stat.i.dTLB-loads
0.00 ± 11% -0.0 0.00 ± 3% perf-stat.i.dTLB-store-miss-rate%
3.978e+09 ± 4% +257.1% 1.421e+10 ± 6% perf-stat.i.dTLB-stores
83.61 +7.2 90.82 perf-stat.i.iTLB-load-miss-rate%
25688726 ± 3% +126.2% 58108368 ± 5% perf-stat.i.iTLB-load-misses
4852201 +17.7% 5709608 ± 2% perf-stat.i.iTLB-loads
1.962e+10 ± 4% +243.4% 6.738e+10 ± 6% perf-stat.i.instructions
774.43 ± 2% +50.4% 1165 perf-stat.i.instructions-per-iTLB-miss
0.15 ± 4% +235.9% 0.51 ± 6% perf-stat.i.ipc
0.25 ± 2% +51.6% 0.37 ± 3% perf-stat.i.metric.K/sec
144.73 ± 4% +239.5% 491.37 ± 6% perf-stat.i.metric.M/sec
89.29 +2.6 91.93 perf-stat.i.node-load-miss-rate%
12691022 ± 8% -56.3% 5550053 ± 12% perf-stat.i.node-load-misses
1504953 ± 13% -64.4% 535348 ± 15% perf-stat.i.node-loads
9964107 ± 8% -58.8% 4108905 ± 17% perf-stat.i.node-store-misses
15.10 ± 3% -84.9% 2.28 ± 11% perf-stat.overall.MPKI
0.73 -0.1 0.60 perf-stat.overall.branch-miss-rate%
6.86 ± 4% -71.0% 1.99 ± 6% perf-stat.overall.cpi
1401 ± 13% +139.9% 3361 ± 14% perf-stat.overall.cycles-between-cache-misses
0.00 ± 30% -0.0 0.00 ± 45% perf-stat.overall.dTLB-load-miss-rate%
0.00 ± 22% -0.0 0.00 ± 4% perf-stat.overall.dTLB-store-miss-rate%
84.11 +6.9 91.02 perf-stat.overall.iTLB-load-miss-rate%
763.81 ± 2% +51.8% 1159 perf-stat.overall.instructions-per-iTLB-miss
0.15 ± 4% +245.0% 0.50 ± 6% perf-stat.overall.ipc
89.44 +1.8 91.23 perf-stat.overall.node-load-miss-rate%
7276 -20.3% 5801 perf-stat.overall.path-length
3.401e+09 ± 4% +239.6% 1.155e+10 ± 6% perf-stat.ps.branch-instructions
24776511 ± 3% +181.3% 69696643 ± 4% perf-stat.ps.branch-misses
97040508 ± 12% -58.3% 40436979 ± 14% perf-stat.ps.cache-misses
2.945e+08 ± 2% -48.3% 1.522e+08 ± 11% perf-stat.ps.cache-references
6.141e+09 ± 4% +243.2% 2.108e+10 ± 6% perf-stat.ps.dTLB-loads
3.959e+09 ± 4% +257.0% 1.414e+10 ± 6% perf-stat.ps.dTLB-stores
25562318 ± 3% +126.2% 57814503 ± 5% perf-stat.ps.iTLB-load-misses
4826722 +17.7% 5679789 ± 2% perf-stat.ps.iTLB-loads
1.953e+10 ± 4% +243.3% 6.704e+10 ± 6% perf-stat.ps.instructions
12624818 ± 8% -56.3% 5522769 ± 12% perf-stat.ps.node-load-misses
1497174 ± 13% -64.4% 532776 ± 15% perf-stat.ps.node-loads
9912289 ± 8% -58.8% 4087930 ± 17% perf-stat.ps.node-store-misses
3.947e+12 ± 4% +243.4% 1.355e+13 ± 6% perf-stat.total.instructions
290.75 ± 51% -78.1% 63.75 ±128% interrupts.CPU17.RES:Rescheduling_interrupts
6339 ± 25% -35.3% 4101 ± 52% interrupts.CPU19.NMI:Non-maskable_interrupts
6339 ± 25% -35.3% 4101 ± 52% interrupts.CPU19.PMI:Performance_monitoring_interrupts
166.00 ± 46% -91.6% 14.00 ± 72% interrupts.CPU2.RES:Rescheduling_interrupts
429.75 ± 2% +14.0% 490.00 ± 12% interrupts.CPU20.CAL:Function_call_interrupts
6339 ± 25% -35.3% 4100 ± 52% interrupts.CPU20.NMI:Non-maskable_interrupts
6339 ± 25% -35.3% 4100 ± 52% interrupts.CPU20.PMI:Performance_monitoring_interrupts
6338 ± 25% -31.1% 4364 ± 46% interrupts.CPU21.NMI:Non-maskable_interrupts
6338 ± 25% -31.1% 4364 ± 46% interrupts.CPU21.PMI:Performance_monitoring_interrupts
6339 ± 25% -50.8% 3121 ± 14% interrupts.CPU23.NMI:Non-maskable_interrupts
6339 ± 25% -50.8% 3121 ± 14% interrupts.CPU23.PMI:Performance_monitoring_interrupts
68.50 ± 54% +202.2% 207.00 interrupts.CPU24.RES:Rescheduling_interrupts
3328 ± 45% +76.5% 5876 ± 33% interrupts.CPU25.NMI:Non-maskable_interrupts
3328 ± 45% +76.5% 5876 ± 33% interrupts.CPU25.PMI:Performance_monitoring_interrupts
39.75 ± 79% +423.9% 208.25 ± 2% interrupts.CPU25.RES:Rescheduling_interrupts
1766 ±112% -75.2% 438.25 ± 4% interrupts.CPU27.CAL:Function_call_interrupts
82.75 ± 49% -64.0% 29.75 ±122% interrupts.CPU27.TLB:TLB_shootdowns
439.50 ± 2% +74.2% 765.50 ± 38% interrupts.CPU3.CAL:Function_call_interrupts
494.25 ± 5% -10.5% 442.25 ± 5% interrupts.CPU30.CAL:Function_call_interrupts
61.00 ±127% +230.7% 201.75 interrupts.CPU30.RES:Rescheduling_interrupts
56.50 ±140% +255.3% 200.75 interrupts.CPU31.RES:Rescheduling_interrupts
1633 ±123% -73.3% 435.50 ± 3% interrupts.CPU32.CAL:Function_call_interrupts
56.75 ±141% +252.4% 200.00 interrupts.CPU33.RES:Rescheduling_interrupts
56.75 ±139% +227.3% 185.75 ± 12% interrupts.CPU34.RES:Rescheduling_interrupts
56.50 ±142% +185.8% 161.50 ± 39% interrupts.CPU35.RES:Rescheduling_interrupts
79.75 ± 36% -56.4% 34.75 ± 91% interrupts.CPU36.TLB:TLB_shootdowns
65.25 ±117% +176.6% 180.50 ± 30% interrupts.CPU39.RES:Rescheduling_interrupts
78.50 ± 44% -54.1% 36.00 ± 83% interrupts.CPU39.TLB:TLB_shootdowns
62.25 ±120% +151.8% 156.75 ± 45% interrupts.CPU43.RES:Rescheduling_interrupts
86.00 ± 45% -54.4% 39.25 ± 97% interrupts.CPU43.TLB:TLB_shootdowns
487.50 ± 10% -10.8% 434.75 ± 3% interrupts.CPU44.CAL:Function_call_interrupts
93.00 ± 46% -64.5% 33.00 ±119% interrupts.CPU46.TLB:TLB_shootdowns
7330 ± 12% -41.4% 4293 ± 33% interrupts.CPU5.NMI:Non-maskable_interrupts
7330 ± 12% -41.4% 4293 ± 33% interrupts.CPU5.PMI:Performance_monitoring_interrupts
169.25 ± 36% -90.8% 15.50 ± 71% interrupts.CPU5.RES:Rescheduling_interrupts
3285 ± 45% +92.3% 6318 ± 25% interrupts.CPU57.NMI:Non-maskable_interrupts
3285 ± 45% +92.3% 6318 ± 25% interrupts.CPU57.PMI:Performance_monitoring_interrupts
7323 ± 12% -51.2% 3572 ± 34% interrupts.CPU6.NMI:Non-maskable_interrupts
7323 ± 12% -51.2% 3572 ± 34% interrupts.CPU6.PMI:Performance_monitoring_interrupts
32.50 ± 78% +580.0% 221.00 ±125% interrupts.CPU63.TLB:TLB_shootdowns
7323 ± 12% -41.5% 4286 ± 33% interrupts.CPU7.NMI:Non-maskable_interrupts
7323 ± 12% -41.5% 4286 ± 33% interrupts.CPU7.PMI:Performance_monitoring_interrupts
175.50 ± 27% -80.3% 34.50 ± 37% interrupts.CPU72.RES:Rescheduling_interrupts
93.25 ± 45% -57.1% 40.00 ±115% interrupts.CPU72.TLB:TLB_shootdowns
7868 -45.2% 4311 ± 32% interrupts.CPU73.NMI:Non-maskable_interrupts
7868 -45.2% 4311 ± 32% interrupts.CPU73.PMI:Performance_monitoring_interrupts
7330 ± 12% -41.4% 4297 ± 33% interrupts.CPU75.NMI:Non-maskable_interrupts
7330 ± 12% -41.4% 4297 ± 33% interrupts.CPU75.PMI:Performance_monitoring_interrupts
163.50 ± 41% -84.9% 24.75 ±127% interrupts.CPU77.RES:Rescheduling_interrupts
7324 ± 12% -41.4% 4294 ± 33% interrupts.CPU78.NMI:Non-maskable_interrupts
7324 ± 12% -41.4% 4294 ± 33% interrupts.CPU78.PMI:Performance_monitoring_interrupts
161.25 ± 45% -91.5% 13.75 ±109% interrupts.CPU80.RES:Rescheduling_interrupts
7325 ± 12% -41.5% 4287 ± 33% interrupts.CPU81.NMI:Non-maskable_interrupts
7325 ± 12% -41.5% 4287 ± 33% interrupts.CPU81.PMI:Performance_monitoring_interrupts
95.00 ± 50% -59.7% 38.25 ±117% interrupts.CPU92.TLB:TLB_shootdowns
8991 ±108% +161.3% 23491 ± 19% softirqs.CPU2.SCHED
67870 ± 5% +8.4% 73546 ± 2% softirqs.CPU2.TIMER
23244 ± 25% -88.7% 2626 softirqs.CPU24.SCHED
83405 ± 17% -23.4% 63886 ± 2% softirqs.CPU24.TIMER
23963 ± 12% -88.4% 2784 ± 2% softirqs.CPU25.SCHED
83623 ± 19% -23.5% 63968 ± 2% softirqs.CPU25.TIMER
4276 ± 5% +97.6% 8448 ± 13% softirqs.CPU26.RCU
14129 ± 74% -81.4% 2631 ± 4% softirqs.CPU26.SCHED
17203 ± 53% -70.0% 5163 ± 89% softirqs.CPU27.SCHED
70966 ± 5% -10.4% 63583 ± 5% softirqs.CPU27.TIMER
19121 ± 47% -74.6% 4863 ± 88% softirqs.CPU28.SCHED
72354 ± 6% -10.4% 64858 ± 2% softirqs.CPU29.TIMER
9275 ±101% +151.3% 23309 ± 19% softirqs.CPU3.SCHED
19928 ± 46% -84.7% 3042 ± 7% softirqs.CPU30.SCHED
72106 ± 7% -11.8% 63632 ± 2% softirqs.CPU30.TIMER
19845 ± 45% -84.7% 3030 ± 6% softirqs.CPU31.SCHED
72345 ± 6% -10.8% 64523 softirqs.CPU31.TIMER
19559 ± 47% -84.2% 3094 ± 8% softirqs.CPU32.SCHED
19689 ± 47% -83.0% 3352 ± 2% softirqs.CPU33.SCHED
71873 ± 7% -9.4% 65131 softirqs.CPU33.TIMER
16286 ± 48% -63.6% 5928 ± 76% softirqs.CPU34.SCHED
11784 ± 76% +118.7% 25776 softirqs.CPU4.SCHED
70606 ± 5% -9.8% 63713 softirqs.CPU48.TIMER
71122 ± 4% -10.2% 63890 ± 5% softirqs.CPU49.TIMER
8863 ±108% +190.0% 25702 softirqs.CPU5.SCHED
20026 ± 49% -87.1% 2587 ± 5% softirqs.CPU50.SCHED
70832 ± 4% -10.7% 63286 softirqs.CPU50.TIMER
18874 ± 50% -86.1% 2631 ± 4% softirqs.CPU51.SCHED
71694 ± 5% -13.7% 61847 ± 3% softirqs.CPU51.TIMER
17403 ± 56% -85.3% 2560 softirqs.CPU52.SCHED
71831 ± 8% -11.0% 63942 ± 3% softirqs.CPU52.TIMER
20860 ± 49% -87.1% 2689 ± 2% softirqs.CPU53.SCHED
81014 ± 19% -23.0% 62345 ± 2% softirqs.CPU53.TIMER
20180 ± 50% -87.7% 2480 ± 9% softirqs.CPU54.SCHED
71917 ± 5% -12.3% 63071 softirqs.CPU54.TIMER
74057 ± 12% -16.4% 61946 ± 2% softirqs.CPU55.TIMER
20135 ± 50% -86.8% 2667 ± 4% softirqs.CPU56.SCHED
73377 ± 7% -13.4% 63523 ± 3% softirqs.CPU56.TIMER
23019 ± 19% -64.3% 8226 ±118% softirqs.CPU57.SCHED
75540 ± 5% -14.6% 64485 ± 4% softirqs.CPU57.TIMER
20267 ± 49% -59.4% 8236 ±118% softirqs.CPU58.SCHED
72755 ± 7% -11.1% 64699 ± 3% softirqs.CPU58.TIMER
72871 ± 7% -10.9% 64896 ± 4% softirqs.CPU59.TIMER
8781 ±108% +192.7% 25703 softirqs.CPU6.SCHED
72683 ± 7% -10.9% 64778 ± 4% softirqs.CPU60.TIMER
72665 ± 8% -11.1% 64612 ± 4% softirqs.CPU61.TIMER
72308 ± 5% -10.1% 64991 ± 6% softirqs.CPU65.TIMER
20301 ± 49% -58.5% 8419 ±118% softirqs.CPU66.SCHED
11380 ± 79% +123.7% 25453 softirqs.CPU7.SCHED
4027 ± 5% +111.8% 8530 ± 32% softirqs.CPU71.RCU
5823 ± 96% +357.6% 26649 softirqs.CPU72.SCHED
2461 ± 12% +952.7% 25914 softirqs.CPU73.SCHED
8475 ±117% +176.7% 23452 ± 20% softirqs.CPU75.SCHED
8462 ±116% +178.9% 23601 ± 19% softirqs.CPU76.SCHED
8459 ±117% +211.7% 26366 ± 2% softirqs.CPU77.SCHED
8511 ±117% +205.5% 26002 ± 2% softirqs.CPU79.SCHED
8854 ±105% +186.2% 25341 ± 2% softirqs.CPU8.SCHED
8450 ±116% +215.1% 26629 ± 2% softirqs.CPU80.SCHED
8496 ±117% +206.5% 26038 softirqs.CPU81.SCHED
4144 ± 6% +83.5% 7603 ± 21% softirqs.CPU82.RCU
8429 ±117% +179.7% 23575 ± 18% softirqs.CPU82.SCHED
8393 ±117% +138.6% 20028 ± 30% softirqs.CPU84.SCHED
8422 ±116% +140.8% 20281 ± 28% softirqs.CPU92.SCHED
4021 ± 7% +93.4% 7778 ± 29% softirqs.CPU95.RCU
415214 +63.4% 678631 ± 6% softirqs.RCU
38.06 ± 7% -38.1 0.00 perf-profile.calltrace.cycles-pp.__ext4_journal_start_sb.ext4_iomap_begin.iomap_apply.dax_iomap_rw.ext4_file_write_iter
36.28 ± 7% -36.3 0.00 perf-profile.calltrace.cycles-pp.jbd2__journal_start.__ext4_journal_start_sb.ext4_iomap_begin.iomap_apply.dax_iomap_rw
36.07 ± 7% -36.1 0.00 perf-profile.calltrace.cycles-pp.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_iomap_begin.iomap_apply
63.15 ± 7% -31.9 31.29 ± 12% perf-profile.calltrace.cycles-pp.ext4_iomap_begin.iomap_apply.dax_iomap_rw.ext4_file_write_iter.new_sync_write
11.15 ± 9% -11.1 0.00 perf-profile.calltrace.cycles-pp.__ext4_journal_stop.ext4_iomap_begin.iomap_apply.dax_iomap_rw.ext4_file_write_iter
10.95 ± 9% -11.0 0.00 perf-profile.calltrace.cycles-pp.jbd2_journal_stop.__ext4_journal_stop.ext4_iomap_begin.iomap_apply.dax_iomap_rw
8.81 ± 7% -8.8 0.00 perf-profile.calltrace.cycles-pp.stop_this_handle.jbd2_journal_stop.__ext4_journal_stop.ext4_iomap_begin.iomap_apply
8.49 ± 6% -8.5 0.00 perf-profile.calltrace.cycles-pp.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_iomap_begin
5.93 ± 6% -5.9 0.00 perf-profile.calltrace.cycles-pp._raw_read_lock.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_iomap_begin
0.99 ± 9% +0.4 1.44 ± 19% perf-profile.calltrace.cycles-pp.ext4_write_checks.ext4_file_write_iter.new_sync_write.vfs_write.ksys_write
0.00 +1.0 0.96 ± 17% perf-profile.calltrace.cycles-pp.ext4_es_lookup_extent.ext4_map_blocks.ext4_iomap_begin.iomap_apply.dax_iomap_rw
0.00 +1.1 1.10 ± 20% perf-profile.calltrace.cycles-pp.__check_block_validity.ext4_map_blocks.ext4_iomap_begin.iomap_apply.dax_iomap_rw
0.00 +2.2 2.19 ± 17% perf-profile.calltrace.cycles-pp.ext4_map_blocks.ext4_iomap_begin.iomap_apply.dax_iomap_rw.ext4_file_write_iter
1.94 ± 16% +6.6 8.49 ± 13% perf-profile.calltrace.cycles-pp.__copy_user_nocache.__copy_user_flushcache._copy_from_iter_flushcache.dax_iomap_actor.iomap_apply
1.95 ± 16% +6.6 8.54 ± 13% perf-profile.calltrace.cycles-pp.__copy_user_flushcache._copy_from_iter_flushcache.dax_iomap_actor.iomap_apply.dax_iomap_rw
1.99 ± 16% +6.7 8.70 ± 13% perf-profile.calltrace.cycles-pp._copy_from_iter_flushcache.dax_iomap_actor.iomap_apply.dax_iomap_rw.ext4_file_write_iter
7.86 ± 11% +12.8 20.70 ± 13% perf-profile.calltrace.cycles-pp._raw_read_lock.jbd2_transaction_committed.ext4_set_iomap.ext4_iomap_begin.iomap_apply
1.73 ± 15% +13.7 15.42 ± 27% perf-profile.calltrace.cycles-pp.__srcu_read_unlock.dax_iomap_actor.iomap_apply.dax_iomap_rw.ext4_file_write_iter
12.86 ± 7% +14.8 27.69 ± 13% perf-profile.calltrace.cycles-pp.jbd2_transaction_committed.ext4_set_iomap.ext4_iomap_begin.iomap_apply.dax_iomap_rw
13.14 ± 7% +15.7 28.81 ± 13% perf-profile.calltrace.cycles-pp.ext4_set_iomap.ext4_iomap_begin.iomap_apply.dax_iomap_rw.ext4_file_write_iter
3.87 ± 14% +20.9 24.76 ± 20% perf-profile.calltrace.cycles-pp.dax_iomap_actor.iomap_apply.dax_iomap_rw.ext4_file_write_iter.new_sync_write
38.74 ± 7% -38.1 0.65 ± 8% perf-profile.children.cycles-pp.__ext4_journal_start_sb
36.93 ± 7% -36.3 0.61 ± 7% perf-profile.children.cycles-pp.jbd2__journal_start
36.73 ± 7% -36.1 0.60 ± 7% perf-profile.children.cycles-pp.start_this_handle
63.15 ± 7% -31.9 31.30 ± 12% perf-profile.children.cycles-pp.ext4_iomap_begin
11.21 ± 9% -11.2 0.01 ±173% perf-profile.children.cycles-pp.__ext4_journal_stop
11.01 ± 9% -11.0 0.01 ±173% perf-profile.children.cycles-pp.jbd2_journal_stop
8.83 ± 7% -8.8 0.00 perf-profile.children.cycles-pp.stop_this_handle
8.64 ± 7% -8.5 0.14 ± 8% perf-profile.children.cycles-pp.add_transaction_credits
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.timestamp_truncate
0.00 +0.1 0.06 ± 15% perf-profile.children.cycles-pp.pmem_dax_direct_access
0.00 +0.1 0.06 ± 14% perf-profile.children.cycles-pp.fsnotify_parent
0.00 +0.1 0.06 ± 11% perf-profile.children.cycles-pp.file_modified
0.00 +0.1 0.07 ± 12% perf-profile.children.cycles-pp.aa_file_perm
0.00 +0.1 0.07 ± 12% perf-profile.children.cycles-pp.apparmor_file_permission
0.00 +0.1 0.07 ± 15% perf-profile.children.cycles-pp.ktime_get_coarse_real_ts64
0.00 +0.1 0.08 ± 10% perf-profile.children.cycles-pp.__pmem_direct_access
0.00 +0.1 0.09 ± 9% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.00 +0.1 0.09 ± 7% perf-profile.children.cycles-pp.__might_sleep
0.00 +0.1 0.09 ± 13% perf-profile.children.cycles-pp._cond_resched
0.00 +0.1 0.10 ± 12% perf-profile.children.cycles-pp.___might_sleep
0.00 +0.1 0.12 ± 12% perf-profile.children.cycles-pp.fsnotify
0.04 ± 57% +0.1 0.18 ± 7% perf-profile.children.cycles-pp.__fdget_pos
0.00 +0.1 0.14 ± 7% perf-profile.children.cycles-pp.__fget_light
0.00 +0.2 0.15 ± 10% perf-profile.children.cycles-pp.up_write
0.01 ±173% +0.2 0.17 ± 6% perf-profile.children.cycles-pp.current_time
0.00 +0.2 0.16 ± 11% perf-profile.children.cycles-pp.dax_direct_access
0.06 ± 7% +0.2 0.23 ± 11% perf-profile.children.cycles-pp.__sb_start_write
0.00 +0.2 0.18 ± 72% perf-profile.children.cycles-pp.generic_write_checks
0.04 ± 57% +0.2 0.22 ± 8% perf-profile.children.cycles-pp.__srcu_read_lock
0.06 ± 7% +0.2 0.26 ± 11% perf-profile.children.cycles-pp.entry_SYSCALL_64
0.06 +0.2 0.26 ± 14% perf-profile.children.cycles-pp.common_file_perm
0.05 ± 9% +0.2 0.28 ± 11% perf-profile.children.cycles-pp.down_write
0.00 +0.2 0.23 ± 60% perf-profile.children.cycles-pp.ext4_generic_write_checks
0.09 ± 5% +0.3 0.34 ± 13% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.09 ± 5% +0.3 0.37 ± 14% perf-profile.children.cycles-pp.security_file_permission
0.10 ± 8% +0.4 0.54 ± 25% perf-profile.children.cycles-pp.ext4_inode_block_valid
0.99 ± 9% +0.4 1.44 ± 19% perf-profile.children.cycles-pp.ext4_write_checks
0.04 ± 57% +0.5 0.51 ± 31% perf-profile.children.cycles-pp.percpu_counter_add_batch
0.12 ±173% +0.5 0.65 ± 42% perf-profile.children.cycles-pp.start_kernel
0.17 ± 11% +0.8 0.96 ± 17% perf-profile.children.cycles-pp.ext4_es_lookup_extent
0.19 ± 14% +0.9 1.11 ± 20% perf-profile.children.cycles-pp.__check_block_validity
0.39 ± 12% +1.8 2.20 ± 17% perf-profile.children.cycles-pp.ext4_map_blocks
1.94 ± 16% +6.6 8.50 ± 13% perf-profile.children.cycles-pp.__copy_user_nocache
1.95 ± 16% +6.6 8.54 ± 13% perf-profile.children.cycles-pp.__copy_user_flushcache
1.99 ± 16% +6.7 8.70 ± 13% perf-profile.children.cycles-pp._copy_from_iter_flushcache
13.96 ± 9% +7.1 21.04 ± 13% perf-profile.children.cycles-pp._raw_read_lock
1.73 ± 15% +13.7 15.43 ± 27% perf-profile.children.cycles-pp.__srcu_read_unlock
12.87 ± 7% +14.8 27.70 ± 13% perf-profile.children.cycles-pp.jbd2_transaction_committed
13.15 ± 7% +15.7 28.82 ± 13% perf-profile.children.cycles-pp.ext4_set_iomap
3.88 ± 14% +20.9 24.78 ± 20% perf-profile.children.cycles-pp.dax_iomap_actor
21.95 ± 7% -21.6 0.35 ± 8% perf-profile.self.cycles-pp.start_this_handle
8.79 ± 7% -8.8 0.00 perf-profile.self.cycles-pp.stop_this_handle
8.60 ± 7% -8.5 0.14 ± 8% perf-profile.self.cycles-pp.add_transaction_credits
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
0.00 +0.1 0.06 ± 9% perf-profile.self.cycles-pp.current_time
0.00 +0.1 0.06 ± 11% perf-profile.self.cycles-pp.aa_file_perm
0.00 +0.1 0.06 ± 20% perf-profile.self.cycles-pp.apparmor_file_permission
0.00 +0.1 0.07 ± 20% perf-profile.self.cycles-pp.generic_write_checks
0.00 +0.1 0.07 ± 15% perf-profile.self.cycles-pp.ktime_get_coarse_real_ts64
0.00 +0.1 0.08 ± 6% perf-profile.self.cycles-pp.__might_sleep
0.00 +0.1 0.08 ± 10% perf-profile.self.cycles-pp.__pmem_direct_access
0.00 +0.1 0.08 ± 13% perf-profile.self.cycles-pp.__sb_start_write
0.00 +0.1 0.09 ± 13% perf-profile.self.cycles-pp.ksys_write
0.00 +0.1 0.10 ± 12% perf-profile.self.cycles-pp.___might_sleep
0.00 +0.1 0.11 ± 16% perf-profile.self.cycles-pp.dax_iomap_rw
0.00 +0.1 0.11 ± 11% perf-profile.self.cycles-pp.fsnotify
0.00 +0.1 0.12 ± 67% perf-profile.self.cycles-pp.file_update_time
0.00 +0.1 0.13 ± 8% perf-profile.self.cycles-pp.__fget_light
0.00 +0.1 0.13 ± 9% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.00 +0.1 0.14 ± 15% perf-profile.self.cycles-pp.ext4_map_blocks
0.00 +0.2 0.15 ± 12% perf-profile.self.cycles-pp._copy_from_iter_flushcache
0.04 ± 57% +0.2 0.19 ± 15% perf-profile.self.cycles-pp.common_file_perm
0.00 +0.2 0.15 ± 10% perf-profile.self.cycles-pp.up_write
0.00 +0.2 0.17 ± 10% perf-profile.self.cycles-pp.down_write
0.04 ± 57% +0.2 0.21 ± 10% perf-profile.self.cycles-pp.dax_iomap_actor
0.01 ±173% +0.2 0.20 ± 11% perf-profile.self.cycles-pp.vfs_write
0.00 +0.2 0.18 ± 15% perf-profile.self.cycles-pp.do_syscall_64
0.08 ± 5% +0.2 0.28 ± 8% perf-profile.self.cycles-pp.ext4_iomap_begin
0.06 ± 15% +0.2 0.25 ± 11% perf-profile.self.cycles-pp.ext4_es_lookup_extent
0.06 ± 7% +0.2 0.26 ± 11% perf-profile.self.cycles-pp.entry_SYSCALL_64
0.01 ±173% +0.2 0.22 ± 10% perf-profile.self.cycles-pp.__srcu_read_lock
0.09 ± 5% +0.3 0.34 ± 13% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.00 +0.3 0.31 ± 80% perf-profile.self.cycles-pp.new_sync_write
0.11 ± 7% +0.3 0.45 ± 9% perf-profile.self.cycles-pp.iomap_apply
0.04 ± 57% +0.4 0.47 ± 32% perf-profile.self.cycles-pp.percpu_counter_add_batch
0.10 ± 8% +0.4 0.53 ± 25% perf-profile.self.cycles-pp.ext4_inode_block_valid
0.25 ± 12% +0.5 0.70 ± 25% perf-profile.self.cycles-pp.ext4_file_write_iter
0.09 ± 27% +0.5 0.56 ± 21% perf-profile.self.cycles-pp.__check_block_validity
0.27 ± 18% +0.8 1.11 ± 28% perf-profile.self.cycles-pp.ext4_set_iomap
4.99 ± 6% +2.0 6.95 ± 14% perf-profile.self.cycles-pp.jbd2_transaction_committed
1.93 ± 16% +6.5 8.46 ± 13% perf-profile.self.cycles-pp.__copy_user_nocache
13.90 ± 9% +7.0 20.92 ± 13% perf-profile.self.cycles-pp._raw_read_lock
1.73 ± 15% +13.6 15.35 ± 27% perf-profile.self.cycles-pp.__srcu_read_unlock
fio.write_bw_MBps
60000 +-------------------------------------------------------------------+
55000 |-+ O |
| O O O |
50000 |-+ O O O O O O |
45000 |-+ O O O O O O O |
40000 |-O O O O O |
35000 |-+ |
| |
30000 |-+ |
25000 |-+ |
20000 |-+ |
15000 |-+ |
|.+..+.+.+.+..+.+.+.+..+.+.+. .+. .+..+.+.+.+..+.+.+. .+. .+.|
10000 |-+ +. + +..+ +.+. |
5000 +-------------------------------------------------------------------+
fio.write_iops
1.6e+07 +-----------------------------------------------------------------+
| O |
1.4e+07 |-+ |
| O O O O O |
1.2e+07 |-+ O O O O O |
| O O O O O O O O O O |
1e+07 |-+ O |
| |
8e+06 |-+ |
| |
6e+06 |-+ |
| |
4e+06 |-+ |
|.+.+..+.+.+.+.+..+.+.+.+.+..+.+.+.+.+..+.+.+.+.+..+.+.+.+.+..+.+.|
2e+06 +-----------------------------------------------------------------+
fio.write_clat_mean_us
20000 +-------------------------------------------------------------------+
| +.+.. |
18000 |-+ .+..+.+.+.. .+..+. + |
16000 |.+..+. .+.+..+. .+.+.. .+.+ +.+.+.+..+.+.+ + +.|
| + + + |
14000 |-+ |
12000 |-+ |
| |
10000 |-+ |
8000 |-+ |
| |
6000 |-+ |
4000 |-O O O O O O O O O |
| O O O O O O O O O O O O O |
2000 +-------------------------------------------------------------------+
fio.write_clat_90%_us
35000 +-------------------------------------------------------------------+
| |
30000 |-+ + .+. .+.. .+ + |
|.+.. +. .+ : + .+. .+. + .+. +.+.+.+. : : + .+.|
25000 |-+ +. + +. + : +..+ + + +. .. : : +. |
| + + + + |
20000 |-+ |
| |
15000 |-+ |
| |
10000 |-+ |
| O O O O O O O O O O |
5000 |-+ O O O O O O O O O O O O |
| |
0 +-------------------------------------------------------------------+
fio.write_clat_95%_us
40000 +-------------------------------------------------------------------+
| |
35000 |-+ .+. .+.. + |
| +. +. + +. + +. .+ :+ |
30000 |.+.. : +..+ : +.. + + + +.+.+ + +.+.+. : : +..+.|
| +. : + : + + + + : : |
25000 |-+ + + + + |
| |
20000 |-+ |
| |
15000 |-+ |
| |
10000 |-+ O O O O O O O O |
| O O O O O O O O O O O O O |
5000 +-------------------------------------------------------------------+
fio.latency_4us%
70 +----------------------------------------------------------------------+
| O |
60 |-+ O |
| O O O O O O O O O |
50 |-+ O O O O O O |
| O O O O |
40 |-+ O |
| |
30 |-+ |
| |
20 |-+ |
| |
10 |-+ |
| |
0 +----------------------------------------------------------------------+
fio.latency_50us%
45 +----------------------------------------------------------------------+
| + |
40 |-+ .+ :: |
35 |-+ + + +.+..+.+ .+ : : : .+ |
| + :+ + :: +. .. : +. .+. : : +. :|
30 |+++ : + + + : : : + : : + : : :|
25 |-+ + : + + : +.. : +..+.+ : : : |
| +.: + + + : : |
20 |-+ + + + |
15 |-+ |
| |
10 |-+ |
5 |-+ |
| |
0 +----------------------------------------------------------------------+
fio.workload
3e+09 +-----------------------------------------------------------------+
| |
| O O O |
2.5e+09 |-+ O O O O O |
| O O O O O O |
| O O O O O O O |
2e+09 |-+ |
| |
1.5e+09 |-+ |
| |
| |
1e+09 |-+ |
| |
|. .+..+.+.+. .+..+. .+. .+.. |
5e+08 +-----------------------------------------------------------------+
fio.time.user_time
1100 +--------------------------------------------------------------------+
| O |
1000 |-+ O O O O |
900 |-+ O O O O O |
| O O O O O O |
800 |-O O O O O |
700 |-+ |
| |
600 |-+ |
500 |-+ |
| |
400 |-+ + |
300 |-+.. + |
|.+ +.+..+.+.+.+..+.+.+..+. .+.+..+.+.+..+.+. .+.+. .+.|
200 +--------------------------------------------------------------------+
fio.time.system_time
9400 +--------------------------------------------------------------------+
9300 |-+ .+.+..+.+. .+.. .+.+.. |
|.+.. +.+..+.+.+.+..+.+.+..+ +.+..+.+.+..+.+ +.+ +.|
9200 |-+ + |
9100 |-+ + |
| |
9000 |-+ |
8900 |-+ |
8800 |-+ |
| O |
8700 |-O O O O O O O O O O |
8600 |-+ O O O O O |
| O O O O |
8500 |-+ O |
8400 +--------------------------------------------------------------------+
fio.time.voluntary_context_switches
24500 +-------------------------------------------------------------------+
| + + |
24000 |-+ + : : + |
|: + + +. + : : + + |
|: + + + .. +.+. .+. .+ + +. .+. + + |
23500 |-+ + +.+ +..+ + +.+.+. +.+.+..+ +.+..+.|
| |
23000 |-+ |
| |
22500 |-+ |
| O O |
| O O |
22000 |-+ O O O O O O O O |
| O O O O O O O O O O |
21500 +-------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[fuse] fcee216beb: last_state.is_incomplete_run
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-9):
commit: fcee216beb9c15c3e1466bb76575358415687c55 ("fuse: split fuse_mount off of fuse_conn")
https://git.kernel.org/cgit/linux/kernel/git/mszeredi/fuse.git submounts
in testcase: ltp
version: ltp-x86_64-14c1f76-1_20200922
with following parameters:
disk: 1HDD
fs: btrfs
test: syscalls_part1
ucode: 0x21
test-description: The LTP testsuite contains a collection of tools for testing the Linux kernel and related features.
test-url: http://linux-test-project.github.io/
on test machine: 8 threads Intel(R) Core(TM) i7-3770K CPU @ 3.50GHz with 16G memory
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen@intel.com>
user :notice: [ 299.536553] tst_test.c:1316: TINFO: Testing on btrfs
user :notice: [ 299.540470] tst_mkfs.c:90: TINFO: Formatting /dev/loop0 with btrfs opts='' extra opts=''
kern :info : [ 299.662209] BTRFS: device fsid a9d8404e-61b8-482f-a750-aefa4b5d83c9 devid 1 transid 5 /dev/loop0 scanned by mkfs.btrfs (18332)
kern :info : [ 299.664317] BTRFS info (device loop0): disk space caching is enabled
kern :info : [ 299.665487] BTRFS info (device loop0): has skinny extents
kern :info : [ 299.666593] BTRFS info (device loop0): flagging fs with big metadata feature
kern :info : [ 299.668479] BTRFS info (device loop0): enabling ssd optimizations
kern :info : [ 299.669656] BTRFS info (device loop0): checking UUID tree
user :notice: [ 299.672048] tst_test.c:1250: TINFO: Timeout per run is 0h 05m 00s
user :notice: [ 299.675226] fallocate04.c:82: TINFO: allocate '12288' bytes
user :notice: [ 299.678139] fallocate04.c:96: TPASS: test-case succeeded
user :notice: [ 299.738267] fallocate04.c:103: TINFO: read allocated file size '12288'
user :notice: [ 299.741591] fallocate04.c:104: TINFO: make a hole with FALLOC_FL_PUNCH_HOLE
user :notice: [ 299.745124] fallocate04.c:120: TINFO: check that file has a hole with lseek(,,SEEK_HOLE)
user :notice: [ 299.748286] fallocate04.c:137: TINFO: found a hole at '4096' offset
user :notice: [ 299.805365] fallocate04.c:143: TINFO: allocated file size before '12288' and after '8192'
user :notice: [ 299.808908] fallocate04.c:66: TINFO: reading the file, compare with expected buffer
user :notice: [ 299.811864] fallocate04.c:154: TPASS: test-case succeeded
user :notice: [ 299.815267] fallocate04.c:159: TINFO: zeroing file space with FALLOC_FL_ZERO_RANGE
user :notice: [ 299.880192] fallocate04.c:168: TINFO: read current allocated file size '8192'
user :notice: [ 299.946961] fallocate04.c:185: TINFO: allocated file size before '8192' and after '12288'
user :notice: [ 299.950504] fallocate04.c:66: TINFO: reading the file, compare with expected buffer
user :notice: [ 299.953448] fallocate04.c:196: TPASS: test-case succeeded
user :notice: [ 299.956996] fallocate04.c:201: TINFO: collapsing file space with FALLOC_FL_COLLAPSE_RANGE
user :notice: [ 300.005223] fallocate04.c:205: TINFO: read current allocated file size '12288'
user :notice: [ 300.008656] fallocate04.c:211: TCONF: FALLOC_FL_COLLAPSE_RANGE not supported
user :notice: [ 300.150766] tst_test.c:1316: TINFO: Testing on vfat
user :notice: [ 300.154811] tst_mkfs.c:90: TINFO: Formatting /dev/loop0 with vfat opts='' extra opts=''
user :notice: [ 300.158281] tst_test.c:1250: TINFO: Timeout per run is 0h 05m 00s
user :notice: [ 300.161538] fallocate04.c:82: TINFO: allocate '12288' bytes
user :notice: [ 300.164508] fallocate04.c:96: TPASS: test-case succeeded
user :notice: [ 300.196859] fallocate04.c:103: TINFO: read allocated file size '12288'
user :notice: [ 300.200159] fallocate04.c:104: TINFO: make a hole with FALLOC_FL_PUNCH_HOLE
user :notice: [ 300.203372] fallocate04.c:115: TCONF: FALLOC_FL_PUNCH_HOLE not supported
user :notice: [ 300.211529] tst_test.c:1316: TINFO: Testing on ntfs
user :notice: [ 300.215463] tst_mkfs.c:90: TINFO: Formatting /dev/loop0 with ntfs opts='' extra opts=''
user :notice: [ 300.228935] The partition start sector was not specified for /dev/loop0 and it could not be obtained automatically. It has been set to 0.
user :notice: [ 300.233947] The number of sectors per track was not specified for /dev/loop0 and it could not be obtained automatically. It has been set to 0.
user :notice: [ 300.238719] The number of heads was not specified for /dev/loop0 and it could not be obtained automatically. It has been set to 0.
user :notice: [ 300.243742] To boot from a device, Windows needs the 'partition start sector', the 'sectors per track' and the 'number of heads' to be set.
user :notice: [ 300.247203] Windows will not be able to boot from this device.
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Rong Chen
[mm/lru] 872102b4f7: fio.read_iops -28.8% regression
by kernel test robot
Greetings,
FYI, we noticed a -28.8% regression of fio.read_iops due to commit:
commit: 872102b4f76574cc393121c77282afa62f3d9cb1 ("mm/lru: replace pgdat lru_lock with lruvec lock")
https://github.com/alexshi/linux.git lruv19.3
in testcase: fio-basic
on test machine: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 256G memory
with following parameters:
disk: 2pmem
fs: xfs
runtime: 200s
nr_task: 50%
time_based: tb
rw: randread
bs: 2M
ioengine: mmap
test_size: 200G
cpufreq_governor: performance
ucode: 0x5002f01
test-description: Fio is a tool that will spawn a number of threads or processes doing a particular type of I/O action as specified by the user.
test-url: https://github.com/axboe/fio
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen@intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/time_based/ucode:
2M/gcc-9/performance/2pmem/xfs/mmap/x86_64-rhel-8.3/50%/debian-10.4-x86_64-20200603.cgz/200s/randread/lkp-csl-2sp6/200G/fio-basic/tb/0x5002f01
commit:
bd1f77b8f2 ("mm/vmscan: remove lruvec reget in move_pages_to_lru")
872102b4f7 ("mm/lru: replace pgdat lru_lock with lruvec lock")
bd1f77b8f24e5896 872102b4f76574cc393121c7728
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -14% 1:4 perf-profile.children.cycles-pp.error_entry
0:4 -4% 0:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
0.42 ± 5% -0.3 0.13 ± 12% fio.latency_1000us%
0.26 ± 37% +0.6 0.88 ± 34% fio.latency_100ms%
39.02 ± 8% +4.8 43.82 ± 6% fio.latency_10ms%
28.27 ± 4% -20.0 8.24 ± 13% fio.latency_20ms%
0.05 ± 34% +0.3 0.30 ± 48% fio.latency_250ms%
0.01 +0.0 0.02 ± 33% fio.latency_250us%
0.03 ± 34% -0.0 0.02 ± 11% fio.latency_2ms%
0.35 ± 6% +0.2 0.56 ± 7% fio.latency_500us%
3.90 ± 21% +18.0 21.89 ± 10% fio.latency_50ms%
10477 -28.8% 7464 fio.read_bw_MBps
17694720 +54.4% 27328512 ± 3% fio.read_clat_90%_us
19529728 ± 2% +57.4% 30736384 ± 3% fio.read_clat_95%_us
32047104 ± 13% +76.5% 56557568 ± 21% fio.read_clat_99%_us
8929025 +38.5% 12368382 fio.read_clat_mean_us
10546283 ± 11% +88.8% 19911890 ± 36% fio.read_clat_stddev
5238 -28.8% 3732 fio.read_iops
4.222e+09 -28.7% 3.009e+09 fio.time.file_system_inputs
5.28e+08 -28.7% 3.764e+08 fio.time.major_page_faults
8989 +2.5% 9210 fio.time.system_time
454.94 -36.3% 289.75 ± 2% fio.time.user_time
1048069 -28.8% 746564 fio.workload
48.96 +1.8% 49.83 iostat.cpu.system
2.35 -35.8% 1.51 ± 2% iostat.cpu.user
10335222 -28.6% 7374503 vmstat.io.bi
413252 -18.5% 336725 vmstat.system.in
1.28 -0.4 0.91 ± 2% mpstat.cpu.all.irq%
0.07 ± 3% -0.0 0.05 ± 7% mpstat.cpu.all.soft%
2.37 -0.8 1.52 ± 2% mpstat.cpu.all.usr%
2.016e+08 ± 4% -29.2% 1.427e+08 ± 7% numa-numastat.node0.local_node
60662601 ± 11% -24.3% 45932120 ± 14% numa-numastat.node0.numa_foreign
2.016e+08 ± 4% -29.2% 1.427e+08 ± 7% numa-numastat.node0.numa_hit
2.101e+08 ± 5% -31.8% 1.432e+08 ± 4% numa-numastat.node1.local_node
2.102e+08 ± 5% -31.8% 1.432e+08 ± 4% numa-numastat.node1.numa_hit
60662601 ± 11% -24.3% 45932120 ± 14% numa-numastat.node1.numa_miss
60678494 ± 11% -24.3% 45940212 ± 14% numa-numastat.node1.other_node
119.25 ± 3% +28.1% 152.75 ± 7% numa-vmstat.node0.nr_isolated_file
33216840 ± 7% -24.8% 24976772 ± 18% numa-vmstat.node0.numa_foreign
209321 ± 8% -21.3% 164755 ± 9% numa-vmstat.node0.workingset_activate_file
17212191 ± 4% -32.7% 11582203 ± 8% numa-vmstat.node0.workingset_refault_file
124.00 ± 9% +25.6% 155.75 ± 5% numa-vmstat.node1.nr_isolated_file
33218990 ± 7% -24.8% 24979161 ± 18% numa-vmstat.node1.numa_miss
33375795 ± 7% -24.9% 25068472 ± 18% numa-vmstat.node1.numa_other
231052 ± 7% -34.7% 150935 ± 5% numa-vmstat.node1.workingset_activate_file
18601963 ± 3% -36.4% 11831285 ± 3% numa-vmstat.node1.workingset_refault_file
13568 ± 14% +52.1% 20639 ± 20% softirqs.CPU15.SCHED
9404 ± 15% +76.1% 16560 ± 10% softirqs.CPU21.SCHED
14041 ± 14% -22.3% 10916 ± 9% softirqs.CPU27.SCHED
15935 ± 5% -12.0% 14019 ± 6% softirqs.CPU28.SCHED
12204 ± 18% +29.6% 15817 ± 12% softirqs.CPU29.SCHED
11446 ± 16% +32.0% 15107 ± 12% softirqs.CPU33.SCHED
13890 ± 12% -28.1% 9991 ± 8% softirqs.CPU38.SCHED
13263 ± 5% -18.4% 10824 ± 19% softirqs.CPU41.SCHED
10059 ± 4% +43.2% 14408 ± 11% softirqs.CPU43.SCHED
14747 ± 15% -29.9% 10345 ± 24% softirqs.CPU63.SCHED
18039 ± 4% -36.1% 11531 ± 23% softirqs.CPU69.SCHED
17966 ± 6% -29.3% 12702 ± 15% softirqs.CPU70.SCHED
15959 ± 6% +13.8% 18159 ± 6% softirqs.CPU75.SCHED
20.22 ±173% +1119.7% 246.63 ± 65% sched_debug.cfs_rq:/.MIN_vruntime.avg
1678 ±173% +613.7% 11976 ± 61% sched_debug.cfs_rq:/.MIN_vruntime.max
183.10 ±173% +787.5% 1624 ± 60% sched_debug.cfs_rq:/.MIN_vruntime.stddev
16899 ± 9% -44.7% 9341 ± 41% sched_debug.cfs_rq:/.exec_clock.min
20.22 ±173% +1119.7% 246.63 ± 65% sched_debug.cfs_rq:/.max_vruntime.avg
1678 ±173% +613.7% 11976 ± 61% sched_debug.cfs_rq:/.max_vruntime.max
183.10 ±173% +787.5% 1624 ± 60% sched_debug.cfs_rq:/.max_vruntime.stddev
1134 ± 2% +11.5% 1264 ± 5% sched_debug.cfs_rq:/.runnable_avg.max
858242 ± 2% -10.7% 766731 ± 5% sched_debug.cpu.avg_idle.avg
6.84 ± 15% -43.3% 3.88 ± 26% sched_debug.cpu.clock.stddev
4066 ± 10% -23.8% 3097 ± 22% sched_debug.cpu.nr_switches.min
6447 ± 5% -23.9% 4904 ± 18% sched_debug.cpu.sched_count.avg
2891 ± 9% -34.3% 1898 ± 29% sched_debug.cpu.sched_count.min
3146 ± 6% -24.2% 2386 ± 18% sched_debug.cpu.ttwu_count.avg
1411 ± 7% -41.4% 826.96 ± 36% sched_debug.cpu.ttwu_count.min
2382 ± 2% -23.8% 1814 ± 19% sched_debug.cpu.ttwu_local.avg
1008 ± 7% -40.2% 603.17 ± 35% sched_debug.cpu.ttwu_local.min
359785 -31.6% 246225 ± 2% proc-vmstat.allocstall_movable
2701 ± 4% -14.0% 2323 ± 5% proc-vmstat.allocstall_normal
2202 ± 11% -26.6% 1617 ± 13% proc-vmstat.compact_daemon_wake
6705 ± 6% -24.1% 5088 proc-vmstat.kswapd_low_wmark_hit_quickly
240.00 ± 5% +28.8% 309.00 ± 2% proc-vmstat.nr_isolated_file
4.112e+08 ± 4% -30.5% 2.857e+08 ± 5% proc-vmstat.numa_hit
4.112e+08 ± 4% -30.5% 2.856e+08 ± 5% proc-vmstat.numa_local
6734 ± 6% -24.2% 5106 proc-vmstat.pageoutrun
6861775 ± 3% -28.5% 4907919 ± 3% proc-vmstat.pgactivate
19735500 -28.7% 14067549 proc-vmstat.pgalloc_dma32
5.096e+08 -28.7% 3.634e+08 proc-vmstat.pgalloc_normal
7183439 ± 2% -29.9% 5035733 ± 3% proc-vmstat.pgdeactivate
1.057e+09 -28.7% 7.532e+08 proc-vmstat.pgfault
5.189e+08 -29.3% 3.67e+08 proc-vmstat.pgfree
5.277e+08 -28.8% 3.76e+08 proc-vmstat.pgmajfault
2.111e+09 -28.8% 1.504e+09 proc-vmstat.pgpgin
7183439 ± 2% -29.9% 5035737 ± 3% proc-vmstat.pgrefill
37787 -3.0% 36668 proc-vmstat.pgreuse
8.918e+08 -31.5% 6.106e+08 ± 2% proc-vmstat.pgscan_direct
1.046e+09 -29.1% 7.418e+08 proc-vmstat.pgscan_file
1.541e+08 ± 7% -14.8% 1.312e+08 ± 3% proc-vmstat.pgscan_kswapd
4.711e+08 -28.8% 3.354e+08 proc-vmstat.pgsteal_direct
5.176e+08 -29.3% 3.658e+08 proc-vmstat.pgsteal_file
46457566 ± 4% -34.6% 30400944 ± 5% proc-vmstat.pgsteal_kswapd
440965 ± 3% -28.5% 315113 ± 4% proc-vmstat.workingset_activate_file
304714 -1.6% 299813 proc-vmstat.workingset_nodes
35857549 -34.8% 23372298 ± 3% proc-vmstat.workingset_refault_file
10.38 -5.5% 9.82 ± 3% perf-stat.i.MPKI
1.241e+10 -21.2% 9.774e+09 perf-stat.i.branch-instructions
0.41 -0.0 0.37 ± 2% perf-stat.i.branch-miss-rate%
49513372 -27.1% 36076291 perf-stat.i.branch-misses
67.08 -2.8 64.24 ± 2% perf-stat.i.cache-miss-rate%
4.308e+08 -27.9% 3.107e+08 perf-stat.i.cache-misses
6.371e+08 -25.4% 4.755e+08 ± 3% perf-stat.i.cache-references
2.33 +35.1% 3.15 ± 2% perf-stat.i.cpi
120.83 ± 3% -6.6% 112.80 perf-stat.i.cpu-migrations
351.88 +62.6% 572.05 ± 10% perf-stat.i.cycles-between-cache-misses
0.40 ± 2% -0.1 0.35 ± 2% perf-stat.i.dTLB-load-miss-rate%
62688198 -30.5% 43544856 ± 3% perf-stat.i.dTLB-load-misses
1.528e+10 -22.5% 1.185e+10 perf-stat.i.dTLB-loads
4019498 ± 14% -34.4% 2637259 ± 13% perf-stat.i.dTLB-store-misses
7.703e+09 -28.2% 5.528e+09 perf-stat.i.dTLB-stores
90.30 -3.8 86.48 perf-stat.i.iTLB-load-miss-rate%
25562882 ± 2% -29.7% 17977001 ± 3% perf-stat.i.iTLB-load-misses
2565831 -4.8% 2442218 perf-stat.i.iTLB-loads
6.057e+10 -22.5% 4.695e+10 perf-stat.i.instructions
2442 ± 2% +16.1% 2834 ± 3% perf-stat.i.instructions-per-iTLB-miss
0.44 -22.6% 0.34 perf-stat.i.ipc
2614555 -28.6% 1868012 perf-stat.i.major-faults
376.79 -23.3% 288.98 perf-stat.i.metric.M/sec
18247098 ± 4% -15.5% 15416866 ± 10% perf-stat.i.node-load-misses
12188715 ± 5% -21.9% 9517663 ± 11% perf-stat.i.node-loads
32.20 ± 7% +6.6 38.77 ± 6% perf-stat.i.node-store-miss-rate%
66487979 ± 4% -30.6% 46172081 ± 4% perf-stat.i.node-stores
2617676 -28.5% 1871042 perf-stat.i.page-faults
0.40 -0.0 0.37 perf-stat.overall.branch-miss-rate%
67.62 -2.3 65.37 ± 2% perf-stat.overall.cache-miss-rate%
2.28 +29.8% 2.95 perf-stat.overall.cpi
319.99 +39.6% 446.80 perf-stat.overall.cycles-between-cache-misses
0.41 -0.0 0.37 ± 2% perf-stat.overall.dTLB-load-miss-rate%
90.88 -2.9 88.02 perf-stat.overall.iTLB-load-miss-rate%
2370 ± 2% +10.3% 2614 perf-stat.overall.instructions-per-iTLB-miss
0.44 -23.0% 0.34 perf-stat.overall.ipc
11639257 +8.6% 12642851 perf-stat.overall.path-length
1.234e+10 -21.3% 9.711e+09 perf-stat.ps.branch-instructions
49253788 -27.3% 35825754 perf-stat.ps.branch-misses
4.285e+08 -28.0% 3.085e+08 perf-stat.ps.cache-misses
6.338e+08 -25.5% 4.722e+08 ± 2% perf-stat.ps.cache-references
120.32 ± 3% -6.6% 112.36 perf-stat.ps.cpu-migrations
62357626 -30.7% 43228426 ± 3% perf-stat.ps.dTLB-load-misses
1.52e+10 -22.6% 1.177e+10 perf-stat.ps.dTLB-loads
3997870 ± 14% -34.5% 2618215 ± 13% perf-stat.ps.dTLB-store-misses
7.662e+09 -28.4% 5.49e+09 perf-stat.ps.dTLB-stores
25427984 ± 2% -29.8% 17850169 ± 3% perf-stat.ps.iTLB-load-misses
2552225 -4.9% 2427455 perf-stat.ps.iTLB-loads
6.026e+10 -22.6% 4.665e+10 perf-stat.ps.instructions
2600945 -28.7% 1854511 perf-stat.ps.major-faults
18149721 ± 4% -15.6% 15321554 ± 10% perf-stat.ps.node-load-misses
12120279 ± 5% -22.1% 9447482 ± 11% perf-stat.ps.node-loads
66140758 ± 4% -30.7% 45828565 ± 4% perf-stat.ps.node-stores
2604051 -28.7% 1857524 perf-stat.ps.page-faults
1.22e+13 -22.6% 9.438e+12 perf-stat.total.instructions
44341974 -35.2% 28724149 ± 2% interrupts.CAL:Function_call_interrupts
514510 ± 17% -45.3% 281430 ± 38% interrupts.CPU10.CAL:Function_call_interrupts
473.75 ± 17% -36.0% 303.25 ± 29% interrupts.CPU10.RES:Rescheduling_interrupts
619735 ± 17% -42.1% 358676 ± 38% interrupts.CPU10.TLB:TLB_shootdowns
469484 ± 17% -48.7% 240923 ± 35% interrupts.CPU11.CAL:Function_call_interrupts
564157 ± 18% -45.6% 306869 ± 35% interrupts.CPU11.TLB:TLB_shootdowns
531287 ± 20% -48.1% 275608 ± 32% interrupts.CPU13.CAL:Function_call_interrupts
637857 ± 20% -45.3% 348935 ± 32% interrupts.CPU13.TLB:TLB_shootdowns
499281 ± 19% -38.6% 306492 ± 34% interrupts.CPU14.CAL:Function_call_interrupts
475976 ± 21% -52.7% 225055 ± 27% interrupts.CPU15.CAL:Function_call_interrupts
572018 ± 21% -50.2% 284870 ± 27% interrupts.CPU15.TLB:TLB_shootdowns
555579 ± 15% -43.8% 312381 ± 36% interrupts.CPU17.CAL:Function_call_interrupts
570.50 ± 21% -38.9% 348.75 ± 24% interrupts.CPU17.RES:Rescheduling_interrupts
667865 ± 15% -40.4% 397854 ± 35% interrupts.CPU17.TLB:TLB_shootdowns
482779 ± 4% -39.7% 291301 ± 10% interrupts.CPU19.CAL:Function_call_interrupts
3857 ± 52% +88.5% 7269 ± 12% interrupts.CPU19.NMI:Non-maskable_interrupts
3857 ± 52% +88.5% 7269 ± 12% interrupts.CPU19.PMI:Performance_monitoring_interrupts
580664 ± 5% -36.2% 370383 ± 9% interrupts.CPU19.TLB:TLB_shootdowns
396252 ± 22% -45.3% 216801 ± 27% interrupts.CPU2.CAL:Function_call_interrupts
477915 ± 22% -42.3% 275805 ± 29% interrupts.CPU2.TLB:TLB_shootdowns
510608 ± 19% -37.5% 319356 ± 11% interrupts.CPU20.CAL:Function_call_interrupts
612915 ± 19% -33.6% 407139 ± 11% interrupts.CPU20.TLB:TLB_shootdowns
636706 ± 2% -63.9% 229901 ± 27% interrupts.CPU21.CAL:Function_call_interrupts
579.25 ± 17% -49.2% 294.50 ± 40% interrupts.CPU21.RES:Rescheduling_interrupts
766725 ± 2% -62.0% 291250 ± 27% interrupts.CPU21.TLB:TLB_shootdowns
549197 ± 15% -48.9% 280774 ± 25% interrupts.CPU22.CAL:Function_call_interrupts
660454 ± 15% -46.1% 355936 ± 25% interrupts.CPU22.TLB:TLB_shootdowns
523131 ± 13% -32.6% 352687 ± 20% interrupts.CPU23.CAL:Function_call_interrupts
630355 ± 14% -28.8% 448670 ± 21% interrupts.CPU23.TLB:TLB_shootdowns
525254 ± 23% -30.4% 365629 ± 19% interrupts.CPU24.CAL:Function_call_interrupts
7145 ± 7% -33.6% 4747 ± 27% interrupts.CPU24.NMI:Non-maskable_interrupts
7145 ± 7% -33.6% 4747 ± 27% interrupts.CPU24.PMI:Performance_monitoring_interrupts
627590 ± 23% -26.0% 464481 ± 20% interrupts.CPU24.TLB:TLB_shootdowns
581869 ± 4% -36.1% 372038 ± 24% interrupts.CPU26.CAL:Function_call_interrupts
694101 ± 4% -31.9% 472936 ± 25% interrupts.CPU26.TLB:TLB_shootdowns
538598 ± 5% -28.6% 384679 ± 12% interrupts.CPU27.CAL:Function_call_interrupts
642874 ± 4% -24.2% 487029 ± 12% interrupts.CPU27.TLB:TLB_shootdowns
424876 ± 10% -27.0% 310280 ± 10% interrupts.CPU28.CAL:Function_call_interrupts
506970 ± 10% -22.6% 392411 ± 10% interrupts.CPU28.TLB:TLB_shootdowns
584246 ± 16% -50.2% 291080 ± 20% interrupts.CPU29.CAL:Function_call_interrupts
697977 ± 16% -47.3% 368035 ± 20% interrupts.CPU29.TLB:TLB_shootdowns
455587 ± 30% -45.1% 249895 ± 24% interrupts.CPU3.CAL:Function_call_interrupts
3503 ± 19% +115.7% 7557 ± 4% interrupts.CPU3.NMI:Non-maskable_interrupts
3503 ± 19% +115.7% 7557 ± 4% interrupts.CPU3.PMI:Performance_monitoring_interrupts
546880 ± 29% -41.9% 317713 ± 24% interrupts.CPU3.TLB:TLB_shootdowns
487481 ± 7% -42.2% 281677 ± 31% interrupts.CPU30.CAL:Function_call_interrupts
580712 ± 8% -38.7% 355944 ± 31% interrupts.CPU30.TLB:TLB_shootdowns
589565 ± 7% -39.1% 358984 ± 7% interrupts.CPU31.CAL:Function_call_interrupts
704026 ± 7% -35.2% 455881 ± 7% interrupts.CPU31.TLB:TLB_shootdowns
596511 ± 14% -49.3% 302322 ± 19% interrupts.CPU33.CAL:Function_call_interrupts
710919 ± 14% -46.0% 383773 ± 19% interrupts.CPU33.TLB:TLB_shootdowns
616200 ± 9% -35.9% 395241 ± 5% interrupts.CPU36.CAL:Function_call_interrupts
734912 ± 9% -31.6% 502440 ± 5% interrupts.CPU36.TLB:TLB_shootdowns
368287 ± 37% -40.4% 219619 ± 55% interrupts.CPU4.CAL:Function_call_interrupts
495264 ± 17% -25.1% 370713 ± 11% interrupts.CPU40.CAL:Function_call_interrupts
530209 ± 4% -22.3% 411733 ± 14% interrupts.CPU41.CAL:Function_call_interrupts
634299 ± 4% -17.8% 521086 ± 14% interrupts.CPU41.TLB:TLB_shootdowns
659640 ± 5% -46.5% 352817 ± 32% interrupts.CPU42.CAL:Function_call_interrupts
787362 ± 5% -43.0% 448485 ± 32% interrupts.CPU42.TLB:TLB_shootdowns
671016 ± 5% -45.8% 363480 ± 22% interrupts.CPU43.CAL:Function_call_interrupts
802128 ± 5% -42.4% 461685 ± 21% interrupts.CPU43.TLB:TLB_shootdowns
539134 ± 17% -38.3% 332555 ± 11% interrupts.CPU44.CAL:Function_call_interrupts
643279 ± 17% -34.7% 420306 ± 11% interrupts.CPU44.TLB:TLB_shootdowns
565843 ± 20% -38.4% 348501 ± 14% interrupts.CPU45.CAL:Function_call_interrupts
6785 ± 13% -34.8% 4421 ± 25% interrupts.CPU45.NMI:Non-maskable_interrupts
6785 ± 13% -34.8% 4421 ± 25% interrupts.CPU45.PMI:Performance_monitoring_interrupts
675651 ± 20% -34.1% 445120 ± 14% interrupts.CPU45.TLB:TLB_shootdowns
477497 ± 11% -28.3% 342463 ± 25% interrupts.CPU46.CAL:Function_call_interrupts
519680 ± 12% -22.1% 405073 ± 11% interrupts.CPU47.CAL:Function_call_interrupts
536170 ± 19% -39.8% 323037 ± 31% interrupts.CPU48.CAL:Function_call_interrupts
645371 ± 20% -36.5% 409770 ± 31% interrupts.CPU48.TLB:TLB_shootdowns
515342 ± 18% -33.2% 344018 ± 23% interrupts.CPU49.CAL:Function_call_interrupts
620201 ± 18% -29.5% 437113 ± 22% interrupts.CPU49.TLB:TLB_shootdowns
400.50 ± 12% -25.1% 300.00 ± 31% interrupts.CPU5.RES:Rescheduling_interrupts
511171 ± 12% -27.5% 370713 ± 11% interrupts.CPU50.CAL:Function_call_interrupts
613726 ± 13% -23.1% 472041 ± 11% interrupts.CPU50.TLB:TLB_shootdowns
6910 ± 12% -44.7% 3822 ± 32% interrupts.CPU51.NMI:Non-maskable_interrupts
6910 ± 12% -44.7% 3822 ± 32% interrupts.CPU51.PMI:Performance_monitoring_interrupts
568787 ± 9% -50.9% 279244 ± 50% interrupts.CPU54.CAL:Function_call_interrupts
684976 ± 9% -48.5% 352897 ± 51% interrupts.CPU54.TLB:TLB_shootdowns
540470 ± 18% -37.9% 335598 ± 18% interrupts.CPU56.CAL:Function_call_interrupts
650946 ± 18% -34.5% 426094 ± 18% interrupts.CPU56.TLB:TLB_shootdowns
463422 ± 30% -28.5% 331155 ± 18% interrupts.CPU57.CAL:Function_call_interrupts
465710 ± 21% -34.5% 304855 ± 35% interrupts.CPU60.CAL:Function_call_interrupts
431669 ± 20% -31.5% 295792 ± 30% interrupts.CPU62.CAL:Function_call_interrupts
519565 ± 21% -27.6% 375978 ± 30% interrupts.CPU62.TLB:TLB_shootdowns
558890 ± 13% -50.6% 275911 ± 37% interrupts.CPU64.CAL:Function_call_interrupts
672817 ± 13% -48.0% 350089 ± 37% interrupts.CPU64.TLB:TLB_shootdowns
461340 ± 23% -41.2% 271069 ± 33% interrupts.CPU66.CAL:Function_call_interrupts
554909 ± 23% -38.2% 343145 ± 32% interrupts.CPU66.TLB:TLB_shootdowns
420859 ± 14% -26.5% 309465 ± 9% interrupts.CPU67.CAL:Function_call_interrupts
358128 ± 4% +26.3% 452209 ± 19% interrupts.CPU69.TLB:TLB_shootdowns
352962 ± 23% -46.3% 189365 ± 37% interrupts.CPU7.CAL:Function_call_interrupts
3338 ± 10% +83.2% 6115 ± 26% interrupts.CPU7.NMI:Non-maskable_interrupts
3338 ± 10% +83.2% 6115 ± 26% interrupts.CPU7.PMI:Performance_monitoring_interrupts
423895 ± 23% -43.4% 239764 ± 37% interrupts.CPU7.TLB:TLB_shootdowns
437126 ± 14% -48.1% 227085 ± 39% interrupts.CPU73.CAL:Function_call_interrupts
522730 ± 15% -45.2% 286383 ± 39% interrupts.CPU73.TLB:TLB_shootdowns
381752 ± 11% -47.5% 200594 ± 13% interrupts.CPU75.CAL:Function_call_interrupts
455929 ± 11% -44.1% 255086 ± 13% interrupts.CPU75.TLB:TLB_shootdowns
483577 ± 8% -40.8% 286436 ± 17% interrupts.CPU76.CAL:Function_call_interrupts
578150 ± 8% -37.3% 362412 ± 16% interrupts.CPU76.TLB:TLB_shootdowns
351522 ± 15% -32.7% 236646 ± 18% interrupts.CPU79.CAL:Function_call_interrupts
4402 ± 48% +64.4% 7238 ± 8% interrupts.CPU79.NMI:Non-maskable_interrupts
4402 ± 48% +64.4% 7238 ± 8% interrupts.CPU79.PMI:Performance_monitoring_interrupts
396881 ± 25% -37.6% 247478 ± 24% interrupts.CPU8.CAL:Function_call_interrupts
476809 ± 25% -34.0% 314587 ± 23% interrupts.CPU8.TLB:TLB_shootdowns
6710 ± 12% -34.3% 4410 ± 33% interrupts.CPU83.NMI:Non-maskable_interrupts
6710 ± 12% -34.3% 4410 ± 33% interrupts.CPU83.PMI:Performance_monitoring_interrupts
336621 ± 9% -33.4% 224135 ± 13% interrupts.CPU84.CAL:Function_call_interrupts
402608 ± 10% -29.7% 283099 ± 13% interrupts.CPU84.TLB:TLB_shootdowns
425327 ± 17% -54.8% 192392 ± 8% interrupts.CPU86.CAL:Function_call_interrupts
510345 ± 17% -52.1% 244641 ± 8% interrupts.CPU86.TLB:TLB_shootdowns
440103 ± 11% -54.7% 199220 ± 39% interrupts.CPU87.CAL:Function_call_interrupts
526646 ± 12% -52.2% 251847 ± 39% interrupts.CPU87.TLB:TLB_shootdowns
422519 ± 11% -51.1% 206404 ± 28% interrupts.CPU88.CAL:Function_call_interrupts
505320 ± 11% -48.1% 262433 ± 27% interrupts.CPU88.TLB:TLB_shootdowns
412331 ± 5% -55.5% 183678 ± 30% interrupts.CPU89.CAL:Function_call_interrupts
492101 ± 5% -52.7% 232936 ± 30% interrupts.CPU89.TLB:TLB_shootdowns
415473 ± 22% -33.7% 275568 ± 16% interrupts.CPU92.CAL:Function_call_interrupts
458023 ± 5% -47.2% 241982 ± 37% interrupts.CPU94.CAL:Function_call_interrupts
547827 ± 6% -44.2% 305484 ± 38% interrupts.CPU94.TLB:TLB_shootdowns
385795 ± 13% -50.6% 190645 ± 31% interrupts.CPU95.CAL:Function_call_interrupts
461756 ± 13% -48.2% 239372 ± 31% interrupts.CPU95.TLB:TLB_shootdowns
53152392 -31.4% 36456518 interrupts.TLB:TLB_shootdowns
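For reference, the percent-change column in the comparison rows above is simply (new - base) / base * 100, printed to one decimal place. A minimal shell sketch, using the interrupts.TLB:TLB_shootdowns totals from the row above (the column labels in the comments are only how I read the table, not something stated in the report):

```shell
# Reproduce the -31.4% delta from the TLB_shootdowns totals:
# (new - base) / base * 100, rounded to one decimal place.
awk 'BEGIN {
    base = 53152392   # left column of the comparison table
    new  = 36456518   # right column of the comparison table
    printf "%.1f\n", (new - base) / base * 100
}'
# prints: -31.4
```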
6.22 ± 27% -6.2 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__pagevec_lru_add.lru_cache_add.add_to_page_cache_lru.pagecache_get_page
6.18 ± 27% -6.2 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__pagevec_lru_add.lru_cache_add.add_to_page_cache_lru
13.69 ± 12% -4.9 8.79 ± 8% perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.lru_note_cost.shrink_inactive_list.shrink_lruvec.shrink_node
13.63 ± 12% -4.9 8.76 ± 8% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.lru_note_cost.shrink_inactive_list.shrink_lruvec
13.22 ± 12% -4.4 8.86 ± 8% perf-profile.calltrace.cycles-pp.lru_note_cost.shrink_inactive_list.shrink_lruvec.shrink_node.do_try_to_free_pages
7.34 ± 14% -3.4 3.90 ± 5% perf-profile.calltrace.cycles-pp.iomap_readpage.filemap_fault.__xfs_filemap_fault.__do_fault.do_fault
6.34 ± 15% -3.1 3.26 ± 5% perf-profile.calltrace.cycles-pp.submit_bio.iomap_readpage.filemap_fault.__xfs_filemap_fault.__do_fault
6.32 ± 14% -3.1 3.24 ± 5% perf-profile.calltrace.cycles-pp.submit_bio_noacct.submit_bio.iomap_readpage.filemap_fault.__xfs_filemap_fault
5.98 ± 15% -3.0 3.01 ± 5% perf-profile.calltrace.cycles-pp.pmem_submit_bio.submit_bio_noacct.submit_bio.iomap_readpage.filemap_fault
5.38 ± 15% -2.7 2.68 ± 5% perf-profile.calltrace.cycles-pp.pmem_do_read.pmem_submit_bio.submit_bio_noacct.submit_bio.iomap_readpage
5.30 ± 15% -2.7 2.64 ± 5% perf-profile.calltrace.cycles-pp.__memcpy_mcsafe.pmem_do_read.pmem_submit_bio.submit_bio_noacct.submit_bio
4.45 ± 13% -1.7 2.72 ± 8% perf-profile.calltrace.cycles-pp.xfs_filemap_map_pages.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
4.19 ± 13% -1.6 2.56 ± 8% perf-profile.calltrace.cycles-pp.filemap_map_pages.xfs_filemap_map_pages.do_fault.__handle_mm_fault.handle_mm_fault
7.67 ± 12% -1.2 6.47 ± 7% perf-profile.calltrace.cycles-pp.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node.do_try_to_free_pages
2.11 ± 15% -1.1 0.99 ± 9% perf-profile.calltrace.cycles-pp.__add_to_page_cache_locked.add_to_page_cache_lru.pagecache_get_page.filemap_fault.__xfs_filemap_fault
2.13 ± 10% -0.9 1.26 ± 8% perf-profile.calltrace.cycles-pp.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node
1.04 ± 12% -0.8 0.27 ±100% perf-profile.calltrace.cycles-pp.mem_cgroup_charge.__add_to_page_cache_locked.add_to_page_cache_lru.pagecache_get_page.filemap_fault
0.86 ± 8% -0.6 0.28 ±100% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_lruvec
0.72 ± 13% -0.4 0.28 ±100% perf-profile.calltrace.cycles-pp.try_to_unmap_one.rmap_walk_file.try_to_unmap.shrink_page_list.shrink_inactive_list
1.19 ± 11% -0.4 0.75 ± 7% perf-profile.calltrace.cycles-pp.page_referenced.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node
1.04 ± 11% -0.4 0.66 ± 7% perf-profile.calltrace.cycles-pp.rmap_walk_file.page_referenced.shrink_page_list.shrink_inactive_list.shrink_lruvec
0.98 ± 12% -0.4 0.62 ± 6% perf-profile.calltrace.cycles-pp.iomap_apply.iomap_readpage.filemap_fault.__xfs_filemap_fault.__do_fault
0.96 ± 14% -0.3 0.66 ± 8% perf-profile.calltrace.cycles-pp.try_to_unmap.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node
0.88 ± 13% -0.3 0.61 ± 9% perf-profile.calltrace.cycles-pp.rmap_walk_file.try_to_unmap.shrink_page_list.shrink_inactive_list.shrink_lruvec
0.28 ±100% +0.5 0.78 ± 16% perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.shrink_inactive_list.shrink_lruvec.shrink_node.balance_pgdat
2.21 ± 14% +1.0 3.16 ± 16% perf-profile.calltrace.cycles-pp.smp_call_function_many_cond.on_each_cpu_cond_mask.arch_tlbbatch_flush.try_to_unmap_flush.shrink_page_list
2.30 ± 14% +1.1 3.42 ± 18% perf-profile.calltrace.cycles-pp.on_each_cpu_cond_mask.arch_tlbbatch_flush.try_to_unmap_flush.shrink_page_list.shrink_inactive_list
2.30 ± 14% +1.1 3.43 ± 18% perf-profile.calltrace.cycles-pp.try_to_unmap_flush.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node
2.30 ± 14% +1.1 3.43 ± 18% perf-profile.calltrace.cycles-pp.arch_tlbbatch_flush.try_to_unmap_flush.shrink_page_list.shrink_inactive_list.shrink_lruvec
0.00 +4.4 4.37 ± 58% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.pagevec_lru_move_fn.deactivate_file_page
0.00 +4.4 4.39 ± 58% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.pagevec_lru_move_fn.deactivate_file_page.invalidate_mapping_pages
0.00 +4.4 4.39 ± 58% perf-profile.calltrace.cycles-pp.lock_page_lruvec_irqsave.pagevec_lru_move_fn.deactivate_file_page.invalidate_mapping_pages.generic_fadvise
7.02 ± 25% +4.5 11.48 ± 19% perf-profile.calltrace.cycles-pp.lru_cache_add.add_to_page_cache_lru.pagecache_get_page.filemap_fault.__xfs_filemap_fault
6.95 ± 25% +4.5 11.44 ± 19% perf-profile.calltrace.cycles-pp.__pagevec_lru_add.lru_cache_add.add_to_page_cache_lru.pagecache_get_page.filemap_fault
11.83 ± 10% +7.1 18.92 ± 7% perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.shrink_inactive_list.shrink_lruvec.shrink_node.do_try_to_free_pages
12.06 ± 12% +7.6 19.66 ± 7% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.shrink_inactive_list.shrink_lruvec.shrink_node
0.00 +10.7 10.72 ± 20% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.__pagevec_lru_add.lru_cache_add
0.00 +10.8 10.75 ± 20% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.__pagevec_lru_add.lru_cache_add.add_to_page_cache_lru
0.00 +10.8 10.75 ± 20% perf-profile.calltrace.cycles-pp.lock_page_lruvec_irqsave.__pagevec_lru_add.lru_cache_add.add_to_page_cache_lru.pagecache_get_page
13.96 ± 12% -4.6 9.33 ± 8% perf-profile.children.cycles-pp.lru_note_cost
7.35 ± 14% -3.4 3.90 ± 5% perf-profile.children.cycles-pp.iomap_readpage
6.34 ± 14% -3.1 3.26 ± 5% perf-profile.children.cycles-pp.submit_bio
6.32 ± 15% -3.1 3.25 ± 5% perf-profile.children.cycles-pp.submit_bio_noacct
5.98 ± 15% -3.0 3.01 ± 5% perf-profile.children.cycles-pp.pmem_submit_bio
5.38 ± 15% -2.7 2.68 ± 5% perf-profile.children.cycles-pp.pmem_do_read
5.32 ± 15% -2.7 2.65 ± 5% perf-profile.children.cycles-pp.__memcpy_mcsafe
4.45 ± 13% -1.7 2.72 ± 8% perf-profile.children.cycles-pp.xfs_filemap_map_pages
4.24 ± 13% -1.6 2.59 ± 8% perf-profile.children.cycles-pp.filemap_map_pages
2.12 ± 15% -1.1 1.00 ± 9% perf-profile.children.cycles-pp.__add_to_page_cache_locked
2.34 ± 10% -1.0 1.37 ± 7% perf-profile.children.cycles-pp.__remove_mapping
1.42 ± 21% -0.8 0.58 ± 7% perf-profile.children.cycles-pp.get_page_from_freelist
2.30 ± 13% -0.8 1.52 ± 7% perf-profile.children.cycles-pp.rmap_walk_file
1.13 ± 23% -0.7 0.43 ± 9% perf-profile.children.cycles-pp.rmqueue
1.04 ± 12% -0.6 0.49 ± 9% perf-profile.children.cycles-pp.mem_cgroup_charge
0.80 ± 26% -0.5 0.27 ± 11% perf-profile.children.cycles-pp.rmqueue_bulk
0.95 ± 14% -0.5 0.42 ± 13% perf-profile.children.cycles-pp.__count_memcg_events
1.44 ± 13% -0.5 0.93 ± 8% perf-profile.children.cycles-pp.page_referenced
1.00 ± 18% -0.5 0.49 ± 12% perf-profile.children.cycles-pp.alloc_set_pte
0.92 ± 20% -0.5 0.44 ± 15% perf-profile.children.cycles-pp._raw_spin_lock
0.98 ± 12% -0.4 0.55 ± 7% perf-profile.children.cycles-pp.__list_del_entry_valid
0.84 ± 18% -0.4 0.42 ± 7% perf-profile.children.cycles-pp.native_irq_return_iret
0.68 ± 21% -0.4 0.29 ± 17% perf-profile.children.cycles-pp.page_add_file_rmap
1.11 ± 13% -0.4 0.74 ± 9% perf-profile.children.cycles-pp.try_to_unmap
0.93 ± 16% -0.4 0.56 ± 19% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
0.98 ± 12% -0.4 0.62 ± 6% perf-profile.children.cycles-pp.iomap_apply
0.75 ± 14% -0.3 0.46 ± 9% perf-profile.children.cycles-pp.down_read
0.79 ± 12% -0.3 0.51 ± 10% perf-profile.children.cycles-pp.unlock_page
0.51 ± 13% -0.3 0.23 ± 5% perf-profile.children.cycles-pp.workingset_eviction
0.81 ± 13% -0.3 0.55 ± 9% perf-profile.children.cycles-pp.page_referenced_one
0.85 ± 12% -0.3 0.58 ± 9% perf-profile.children.cycles-pp.try_to_unmap_one
0.71 ± 13% -0.2 0.47 ± 10% perf-profile.children.cycles-pp.page_vma_mapped_walk
0.49 ± 13% -0.2 0.26 ± 14% perf-profile.children.cycles-pp.__mod_memcg_state
0.39 ± 12% -0.2 0.17 ± 13% perf-profile.children.cycles-pp.get_mem_cgroup_from_mm
0.60 ± 12% -0.2 0.41 ± 10% perf-profile.children.cycles-pp.__delete_from_page_cache
0.49 ± 13% -0.2 0.31 ± 9% perf-profile.children.cycles-pp.free_unref_page_list
0.36 ± 8% -0.2 0.18 ± 18% perf-profile.children.cycles-pp.shrink_slab
0.35 ± 16% -0.2 0.17 ± 8% perf-profile.children.cycles-pp.iomap_read_end_io
0.47 ± 15% -0.2 0.30 ± 6% perf-profile.children.cycles-pp.iomap_readpage_actor
0.41 ± 10% -0.2 0.24 ± 12% perf-profile.children.cycles-pp.up_read
0.51 ± 22% -0.2 0.35 ± 16% perf-profile.children.cycles-pp.free_pcppages_bulk
0.42 ± 9% -0.2 0.27 ± 9% perf-profile.children.cycles-pp.xfs_read_iomap_begin
0.50 ± 14% -0.1 0.35 ± 11% perf-profile.children.cycles-pp.asm_call_on_stack
0.29 ± 17% -0.1 0.15 ± 12% perf-profile.children.cycles-pp.__mod_lruvec_state
0.28 ± 18% -0.1 0.13 ± 3% perf-profile.children.cycles-pp.workingset_age_nonresident
0.25 ± 19% -0.1 0.12 ± 12% perf-profile.children.cycles-pp.__mod_node_page_state
0.24 ± 19% -0.1 0.11 ± 7% perf-profile.children.cycles-pp.iomap_set_range_uptodate
0.23 ± 39% -0.1 0.10 ± 54% perf-profile.children.cycles-pp.kmem_cache_alloc
0.31 ± 11% -0.1 0.18 ± 8% perf-profile.children.cycles-pp.xas_create
0.39 ± 16% -0.1 0.27 ± 11% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
0.33 ± 12% -0.1 0.22 ± 9% perf-profile.children.cycles-pp.sync_regs
0.34 ± 16% -0.1 0.23 ± 12% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
0.31 ± 14% -0.1 0.20 ± 7% perf-profile.children.cycles-pp.hrtimer_interrupt
0.31 ± 16% -0.1 0.20 ± 10% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
0.18 ± 12% -0.1 0.08 ± 16% perf-profile.children.cycles-pp.mem_cgroup_charge_statistics
0.27 ± 14% -0.1 0.17 ± 7% perf-profile.children.cycles-pp.asm_sysvec_call_function
0.29 ± 18% -0.1 0.20 ± 13% perf-profile.children.cycles-pp.xas_find
0.28 ± 10% -0.1 0.19 ± 9% perf-profile.children.cycles-pp.submit_bio_checks
0.27 ± 13% -0.1 0.17 ± 10% perf-profile.children.cycles-pp.xfs_ilock
0.21 ± 11% -0.1 0.12 ± 5% perf-profile.children.cycles-pp.try_charge
0.17 ± 13% -0.1 0.07 ± 11% perf-profile.children.cycles-pp.lock_page_memcg
0.23 ± 15% -0.1 0.15 ± 11% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.19 ± 18% -0.1 0.11 ± 14% perf-profile.children.cycles-pp.tick_sched_timer
0.22 ± 12% -0.1 0.15 ± 11% perf-profile.children.cycles-pp.___might_sleep
0.18 ± 18% -0.1 0.10 ± 12% perf-profile.children.cycles-pp.tick_sched_handle
0.18 ± 18% -0.1 0.10 ± 12% perf-profile.children.cycles-pp.update_process_times
0.20 ± 12% -0.1 0.12 ± 12% perf-profile.children.cycles-pp.xfs_iunlock
0.14 ± 8% -0.1 0.07 ± 17% perf-profile.children.cycles-pp.down_read_trylock
0.21 ± 17% -0.1 0.14 ± 5% perf-profile.children.cycles-pp.bio_alloc_bioset
0.17 ± 13% -0.1 0.10 ± 4% perf-profile.children.cycles-pp.flush_smp_call_function_queue
0.19 ± 14% -0.1 0.13 ± 11% perf-profile.children.cycles-pp.find_get_entry
0.18 ± 12% -0.1 0.12 ± 7% perf-profile.children.cycles-pp.sysvec_call_function
0.17 ± 13% -0.1 0.11 ± 7% perf-profile.children.cycles-pp.__sysvec_call_function
0.17 ± 14% -0.1 0.11 ± 14% perf-profile.children.cycles-pp._cond_resched
0.14 ± 20% -0.1 0.08 ± 5% perf-profile.children.cycles-pp.workingset_refault
0.16 ± 12% -0.1 0.10 ± 4% perf-profile.children.cycles-pp.__perf_sw_event
0.12 ± 9% -0.1 0.06 ± 6% perf-profile.children.cycles-pp.page_counter_try_charge
0.15 ± 14% -0.1 0.10 ± 11% perf-profile.children.cycles-pp.page_mapping
0.13 ± 18% -0.0 0.08 ± 10% perf-profile.children.cycles-pp.scheduler_tick
0.15 ± 11% -0.0 0.10 ± 15% perf-profile.children.cycles-pp.xfs_bmapi_read
0.14 ± 16% -0.0 0.09 perf-profile.children.cycles-pp.mempool_alloc
0.13 ± 14% -0.0 0.08 ± 15% perf-profile.children.cycles-pp.mem_cgroup_uncharge_list
0.12 ± 13% -0.0 0.08 ± 10% perf-profile.children.cycles-pp.__might_sleep
0.12 ± 8% -0.0 0.08 ± 10% perf-profile.children.cycles-pp.do_shrink_slab
0.10 ± 15% -0.0 0.05 ± 8% perf-profile.children.cycles-pp.task_tick_fair
0.08 ± 21% -0.0 0.04 ± 57% perf-profile.children.cycles-pp.bio_associate_blkg
0.11 ± 11% -0.0 0.07 ± 17% perf-profile.children.cycles-pp.xfs_ilock_for_iomap
0.11 ± 12% -0.0 0.07 ± 14% perf-profile.children.cycles-pp.blk_throtl_bio
0.08 ± 10% -0.0 0.04 ± 58% perf-profile.children.cycles-pp.count_shadow_nodes
0.12 ± 15% -0.0 0.08 ± 8% perf-profile.children.cycles-pp.___perf_sw_event
0.10 ± 17% -0.0 0.06 ± 14% perf-profile.children.cycles-pp.rcu_all_qs
0.10 ± 15% -0.0 0.06 ± 14% perf-profile.children.cycles-pp.uncharge_batch
0.12 ± 15% -0.0 0.08 ± 8% perf-profile.children.cycles-pp.ktime_get
0.09 ± 13% -0.0 0.06 ± 15% perf-profile.children.cycles-pp.PageHuge
0.04 ± 58% +0.0 0.07 ± 20% perf-profile.children.cycles-pp.lru_add_drain
0.04 ± 57% +0.0 0.07 ± 20% perf-profile.children.cycles-pp.lru_add_drain_cpu
0.08 ± 12% +0.0 0.12 ± 10% perf-profile.children.cycles-pp.__list_add_valid
0.10 ± 5% +0.1 0.15 ± 21% perf-profile.children.cycles-pp.smp_call_function_single
0.27 ± 11% +0.1 0.41 ± 8% perf-profile.children.cycles-pp.move_pages_to_lru
0.19 ± 39% +0.2 0.34 ± 20% perf-profile.children.cycles-pp.shrink_active_list
2.48 ± 14% +1.0 3.49 ± 12% perf-profile.children.cycles-pp.smp_call_function_many_cond
2.57 ± 14% +1.1 3.63 ± 12% perf-profile.children.cycles-pp.on_each_cpu_cond_mask
2.57 ± 14% +1.1 3.63 ± 12% perf-profile.children.cycles-pp.try_to_unmap_flush
2.57 ± 14% +1.1 3.63 ± 12% perf-profile.children.cycles-pp.arch_tlbbatch_flush
7.05 ± 25% +4.5 11.54 ± 19% perf-profile.children.cycles-pp.lru_cache_add
7.04 ± 25% +4.5 11.57 ± 19% perf-profile.children.cycles-pp.__pagevec_lru_add
8.11 ± 17% +7.7 15.81 ± 29% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
34.98 ± 12% +10.3 45.29 ± 11% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.00 +15.3 15.29 ± 30% perf-profile.children.cycles-pp.lock_page_lruvec_irqsave
5.26 ± 15% -2.6 2.62 ± 5% perf-profile.self.cycles-pp.__memcpy_mcsafe
2.41 ± 12% -0.8 1.57 ± 8% perf-profile.self.cycles-pp.filemap_map_pages
0.95 ± 14% -0.5 0.42 ± 13% perf-profile.self.cycles-pp.__count_memcg_events
0.84 ± 18% -0.4 0.42 ± 7% perf-profile.self.cycles-pp.native_irq_return_iret
0.97 ± 12% -0.4 0.55 ± 7% perf-profile.self.cycles-pp.__list_del_entry_valid
0.75 ± 12% -0.3 0.49 ± 11% perf-profile.self.cycles-pp.unlock_page
0.48 ± 13% -0.2 0.26 ± 13% perf-profile.self.cycles-pp.__mod_memcg_state
0.39 ± 13% -0.2 0.17 ± 14% perf-profile.self.cycles-pp.get_mem_cgroup_from_mm
0.53 ± 14% -0.2 0.32 ± 8% perf-profile.self.cycles-pp.down_read
0.36 ± 19% -0.2 0.16 ± 13% perf-profile.self.cycles-pp.__add_to_page_cache_locked
0.40 ± 10% -0.2 0.24 ± 12% perf-profile.self.cycles-pp.up_read
0.26 ± 27% -0.2 0.09 ± 20% perf-profile.self.cycles-pp.page_add_file_rmap
0.44 ± 13% -0.2 0.29 ± 9% perf-profile.self.cycles-pp.page_vma_mapped_walk
0.41 ± 13% -0.1 0.26 ± 10% perf-profile.self.cycles-pp.shrink_page_list
0.28 ± 18% -0.1 0.13 ± 3% perf-profile.self.cycles-pp.workingset_age_nonresident
0.25 ± 18% -0.1 0.11 ± 11% perf-profile.self.cycles-pp.__mod_node_page_state
0.24 ± 14% -0.1 0.11 ± 13% perf-profile.self.cycles-pp.mem_cgroup_charge
0.26 ± 15% -0.1 0.13 ± 9% perf-profile.self.cycles-pp.get_page_from_freelist
0.24 ± 21% -0.1 0.11 ± 9% perf-profile.self.cycles-pp.iomap_set_range_uptodate
0.36 ± 15% -0.1 0.24 ± 5% perf-profile.self.cycles-pp.try_to_unmap_one
0.24 ± 9% -0.1 0.12 ± 9% perf-profile.self.cycles-pp.workingset_eviction
0.33 ± 13% -0.1 0.21 ± 10% perf-profile.self.cycles-pp.sync_regs
0.29 ± 16% -0.1 0.18 ± 7% perf-profile.self.cycles-pp.__handle_mm_fault
0.27 ± 10% -0.1 0.16 ± 6% perf-profile.self.cycles-pp.xas_create
0.27 ± 12% -0.1 0.17 ± 7% perf-profile.self.cycles-pp._raw_spin_lock
0.17 ± 13% -0.1 0.07 ± 11% perf-profile.self.cycles-pp.lock_page_memcg
0.25 ± 12% -0.1 0.16 ± 12% perf-profile.self.cycles-pp.filemap_fault
0.23 ± 11% -0.1 0.14 ± 9% perf-profile.self.cycles-pp.alloc_set_pte
0.25 ± 12% -0.1 0.18 ± 10% perf-profile.self.cycles-pp.__remove_mapping
0.21 ± 13% -0.1 0.13 ± 11% perf-profile.self.cycles-pp.handle_mm_fault
0.21 ± 14% -0.1 0.14 ± 12% perf-profile.self.cycles-pp.free_pcppages_bulk
0.14 ± 7% -0.1 0.07 ± 17% perf-profile.self.cycles-pp.down_read_trylock
0.19 ± 14% -0.1 0.12 ± 12% perf-profile.self.cycles-pp.page_referenced_one
0.21 ± 11% -0.1 0.15 ± 11% perf-profile.self.cycles-pp.___might_sleep
0.14 ± 19% -0.1 0.07 ± 10% perf-profile.self.cycles-pp.rmqueue
0.15 ± 14% -0.1 0.09 ± 11% perf-profile.self.cycles-pp.page_mapping
0.08 ± 10% -0.1 0.03 ±100% perf-profile.self.cycles-pp.PageHuge
0.10 ± 10% -0.0 0.06 ± 9% perf-profile.self.cycles-pp.rmqueue_bulk
0.10 ± 10% -0.0 0.06 ± 7% perf-profile.self.cycles-pp.page_counter_try_charge
0.07 ± 20% -0.0 0.03 ±100% perf-profile.self.cycles-pp._cond_resched
0.11 ± 12% -0.0 0.07 ± 7% perf-profile.self.cycles-pp.__might_sleep
0.10 ± 10% -0.0 0.06 ± 11% perf-profile.self.cycles-pp.try_charge
0.08 ± 12% -0.0 0.04 ± 58% perf-profile.self.cycles-pp.do_user_addr_fault
0.10 ± 12% -0.0 0.07 ± 6% perf-profile.self.cycles-pp.iomap_readpage_actor
0.10 ± 10% -0.0 0.07 ± 17% perf-profile.self.cycles-pp.rmap_walk_file
0.09 ± 14% -0.0 0.05 ± 8% perf-profile.self.cycles-pp.kmem_cache_alloc
0.09 ± 13% -0.0 0.06 ± 9% perf-profile.self.cycles-pp.do_fault
0.09 ± 14% -0.0 0.06 ± 6% perf-profile.self.cycles-pp.___perf_sw_event
0.08 ± 15% -0.0 0.05 ± 8% perf-profile.self.cycles-pp.iomap_apply
0.08 ± 13% -0.0 0.06 ± 7% perf-profile.self.cycles-pp.pmem_submit_bio
0.07 ± 11% +0.0 0.12 ± 10% perf-profile.self.cycles-pp.__list_add_valid
0.09 ± 7% +0.1 0.15 ± 22% perf-profile.self.cycles-pp.smp_call_function_single
0.16 ± 11% +0.1 0.28 ± 7% perf-profile.self.cycles-pp.move_pages_to_lru
0.23 ± 17% +0.1 0.35 ± 19% perf-profile.self.cycles-pp.__pagevec_lru_add
0.33 ± 10% +0.1 0.46 ± 11% perf-profile.self.cycles-pp.isolate_lru_pages
2.35 ± 14% +1.0 3.39 ± 12% perf-profile.self.cycles-pp.smp_call_function_many_cond
34.97 ± 12% +10.3 45.28 ± 11% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
fio.read_bw_MBps
11500 +-------------------------------------------------------------------+
|. .+. +. .+.+. .+. .+. .+. + |
11000 |-+ +.+.+. + +.+.+.+ + +.+ + +.+.+.+. + + |
10500 |-+ + + +.+.+.+.+.+.+.|
| |
10000 |-+ |
9500 |-+ |
| |
9000 |-+ |
8500 |-+ |
| |
8000 |-+ |
7500 |-O O O O O O O O O O O O O O O O |
| O O O O O O O O O O |
7000 +-------------------------------------------------------------------+
fio.read_iops
6000 +--------------------------------------------------------------------+
| |
|. .+ +. .+.+. .+ .+. .+ + |
5500 |-+ + .+.+. + +.+.+.+ + + .+ + + .+.+.+. + + |
| + + +. + + +.+.+.+.+.+.+.|
| |
5000 |-+ |
| |
4500 |-+ |
| |
| |
4000 |-+ |
| O O O O O O O O O O O |
| O O O O O O O O O O O O O O O |
3500 +--------------------------------------------------------------------+
fio.read_clat_mean_us
1.3e+07 +----------------------------------------------------------------+
1.25e+07 |-+ O O O O O |
| O O O O O O O O O OO O O O O O O |
1.2e+07 |-O O O |
1.15e+07 |-+ |
| |
1.1e+07 |-+ |
1.05e+07 |-+ |
1e+07 |-+ |
| |
9.5e+06 |-+ |
9e+06 |-+ +. .+.+. .+.|
| .+.+. +. .+. .+. .+.+ .+.+. .+. + ++ + |
8.5e+06 |.+.+ + + + +.+.+.+.+ +.+.+ +.+ + |
8e+06 +----------------------------------------------------------------+
fio.read_clat_90__us
3e+07 +-----------------------------------------------------------------+
| O O O |
2.8e+07 |-+ O O O O O O O |
| O O O O O O O O O O O O O |
2.6e+07 |-+ O O |
| O |
2.4e+07 |-+ |
| |
2.2e+07 |-+ |
| |
2e+07 |-+ |
| |
1.8e+07 |-+ .+ +.+.+.+. .+.|
|.+. .+.+.+.+. .+ .+.+. .+. .+.+. .+. .+.+.+.+ :+ +.+ |
1.6e+07 +-----------------------------------------------------------------+
fio.read_clat_95__us
3.4e+07 +-----------------------------------------------------------------+
| O O |
3.2e+07 |-+ O O O O O O O O O O O O O O |
3e+07 |-O O O O O O O |
| O O |
2.8e+07 |-+ |
2.6e+07 |-+ |
| |
2.4e+07 |-+ |
2.2e+07 |-+ |
| |
2e+07 |-+ .+ .+ +.+. .+.+.|
1.8e+07 |.+ .+.+.+ + .+ .+. .+. .+.+.+. .+.+.+.+ :+ +.+.+ |
| +.+ + + +.+ +.+ +.+ + |
1.6e+07 +-----------------------------------------------------------------+
fio.latency_1000us_
0.5 +--------------------------------------------------------------------+
| .+. |
0.45 |-+ +.+. .+.+.+.+.+.+.+. +.+.. +.+. .+. .+.+ +.+. +|
|. + + +.+. + +.+. + + +.+ +. .+ |
0.4 |-+ + + + |
0.35 |-+ |
| |
0.3 |-+ |
| |
0.25 |-+ |
0.2 |-+ |
| O O O O |
0.15 |-+ O O O O O O O O O O |
| O O O O O O O O O O |
0.1 +--------------------------------------------------------------------+
fio.latency_20ms_
35 +----------------------------------------------------------------------+
| |
30 |.+.+.+. .+.. .+.+.+. .+.+.+.. .+.+.+. .+.+ + .+. |
| + +.+.+ +.+ +.+ + + + + .+..+ +. +|
| + + +.+ |
25 |-+ |
| |
20 |-+ |
| |
15 |-+ |
| |
| O O O |
10 |-O O O O O O O O O O O O O O O O |
| O O O O O O |
5 +----------------------------------------------------------------------+
fio.latency_50ms_
30 +----------------------------------------------------------------------+
| |
25 |-+ O |
| O O O O O O O |
| O O O O O O O O O O O |
20 |-+ O O O O O O |
| O |
15 |-+ |
| |
10 |-+ |
| |
| |
5 |-+ .+. .+. .+.+..+. .+.+.+.|
|.+.+.+.+.+. +.+.+.+.+.+.+.+.+.+..+.+.+.+.+.+.+.+ + + |
0 +----------------------------------------------------------------------+
fio.workload
1.15e+06 +----------------------------------------------------------------+
|. .+. +. .+.+. .+. +. .+. + |
1.1e+06 |-+ +.+.+ + +.+.+.+ + +.+ + +.+.+.+. + + |
1.05e+06 |-+ + + +.++.+.+.+.+.|
| |
1e+06 |-+ |
950000 |-+ |
| |
900000 |-+ |
850000 |-+ |
| |
800000 |-+ |
750000 |-O O O O O O O O O O OO O O O O O |
| O O O O O O O O O |
700000 +----------------------------------------------------------------+
fio.time.user_time
650 +---------------------------------------------------------------------+
| + + + |
600 |:: :: :: |
550 |:+: : : : : |
| : + + : +.+ : + |
500 |-+ : + + .+.+.. + : + : + + .+.. |
| +.+ +.+ +.+.+.+ +.+ +.+ +.+ +.+. |
450 |-+ +.+.+.+.+.+.|
| |
400 |-+ |
350 |-+ |
| O |
300 |-O O O O O O O O O O |
| O O O O O O O O O O O O O O O |
250 +---------------------------------------------------------------------+
fio.time.system_time
9250 +--------------------------------------------------------------------+
| O O O O O O O O O O O O |
9200 |-O O O O O O O O O O O O O |
9150 |-+ O |
| |
9100 |-+ |
9050 |-+ |
| |
9000 |-+ .+.+.+.+.+.+.+.|
8950 |-+ +.+ +.+. .+.+.+.+ +.+.. +.+ +.+. .+ |
| : + + +.+ + : : + + + |
8900 |-+: + + : +.+ : + |
8850 |++: + : + : |
| + + + |
8800 +--------------------------------------------------------------------+
fio.time.major_page_faults
6e+08 +-----------------------------------------------------------------+
| |
|. .+. +. .+.+. .+. .+. .+. + |
5.5e+08 |-+ +.+.+. + ++.+.+ + +.+ + +.+.+.+.+ + |
| + +.+.+.+.+.+.+.|
| |
5e+08 |-+ |
| |
4.5e+08 |-+ |
| |
| |
4e+08 |-+ |
| O O O O O OO O O O O O O O O O O O |
| O O O O O O O O |
3.5e+08 +-----------------------------------------------------------------+
fio.time.file_system_inputs
4.6e+09 +-----------------------------------------------------------------+
|.+.+. .+. + + +. .+.+.+.+.+. .+.+.+. .+.+. + |
4.4e+09 |-+ +.+ + + + +.+ +.+ + + |
4.2e+09 |-+ +.+.+.+.+.+.+.|
| |
4e+09 |-+ |
3.8e+09 |-+ |
| |
3.6e+09 |-+ |
3.4e+09 |-+ |
| |
3.2e+09 |-+ O |
3e+09 |-O O O O O O OO O O O O O O O O O OO |
| O O O O O O |
2.8e+09 +-----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
Re: [mm/lru] 44c86dfabf: fio.read_iops -11.8% regression
by Chen, Rong A
Hi Alex,
I have confirmed that setting 'cgroup_disable=memory' can recover the
performance.
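A quick way to confirm the boot parameter actually took effect (a hypothetical check, assuming a Linux system with the standard procfs files; not part of the original test setup):

```shell
# Check whether this boot disabled the memory controller via cgroup_disable=memory.
grep -o 'cgroup_disable=memory' /proc/cmdline || echo "memcg still enabled on this boot"
# /proc/cgroups reports enabled=0 for the memory controller once it is disabled.
[ -r /proc/cgroups ] && awk '$1 == "memory" { print "memory controller enabled flag:", $4 }' /proc/cgroups || true
```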
Best Regards,
Rong Chen
On 9/24/2020 7:24 AM, Alex Shi wrote:
> Hi Rong,
>
> Thanks a lot for testing!
>
> What I found is that the result could be recovered if memcg was disabled with the cmdline option 'cgroup_disable=memory' on the same commit/kernel.
> Is this applicable on your side?
>
> Thanks
> Alex
>
> On 2020/9/23 5:15 PM, kernel test robot wrote:
>> Greeting,
>>
>> FYI, we noticed a -11.8% regression of fio.read_iops due to commit:
>>
>>
>> commit: 44c86dfabfbbb310b5135c0e8b66b193b9d7e896 ("mm/lru: replace pgdat lru_lock with lruvec lock")
>> https://github.com/alexshi/linux.git lruv19.5