[af_unix] afd20b9290: stress-ng.sockdiag.ops_per_sec -26.3% regression
by kernel test robot
Greetings,
FYI, we noticed a -26.3% regression of stress-ng.sockdiag.ops_per_sec due to commit:
commit: afd20b9290e184c203fe22f2d6b80dc7127ba724 ("af_unix: Replace the big lock with small locks.")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: stress-ng
on test machine: 128 threads 2 sockets Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz with 128G memory
with the following parameters:
nr_threads: 100%
testtime: 60s
class: network
test: sockdiag
cpufreq_governor: performance
ucode: 0xd000280
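The pattern named in the commit title, replacing one table-wide lock with per-hash-bucket locks, can be sketched outside the kernel with a toy hash table. This is an illustrative model only, not the af_unix code; the bucket count and all names here are hypothetical:

```python
import threading

N_BUCKETS = 16  # hypothetical bucket count, not the af_unix value

class ShardedTable:
    """Toy hash table guarded by one small lock per bucket instead of one big lock."""
    def __init__(self):
        self.buckets = [[] for _ in range(N_BUCKETS)]
        # One lock per bucket: inserts into different buckets no longer contend.
        self.locks = [threading.Lock() for _ in range(N_BUCKETS)]

    def insert(self, key, value):
        b = hash(key) % N_BUCKETS
        with self.locks[b]:  # only this bucket is serialized
            self.buckets[b].append((key, value))

    def count(self):
        total = 0
        for b in range(N_BUCKETS):
            with self.locks[b]:
                total += len(self.buckets[b])
        return total

table = ShardedTable()
threads = [threading.Thread(target=lambda i=i: [table.insert((i, j), j) for j in range(100)])
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(table.count())  # 800: all 8 * 100 inserts land, with no global serialization
```

Note the trade-off the profile below illustrates: sharding removes contention on the table itself, but any remaining global serialization point (here, the sock_diag netlink mutex visible as osq_lock time) can then dominate.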
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <oliver.sang@intel.com>
Details are below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp directories to run from a clean state.
=========================================================================================
class/compiler/cpufreq_governor/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime/ucode:
network/gcc-9/performance/x86_64-rhel-8.3/100%/debian-10.4-x86_64-20200603.cgz/lkp-icl-2sp6/sockdiag/stress-ng/60s/0xd000280
commit:
e6b4b87389 ("af_unix: Save hash in sk_hash.")
afd20b9290 ("af_unix: Replace the big lock with small locks.")
e6b4b873896f0e92 afd20b9290e184c203fe22f2d6b
---------------- ---------------------------
%stddev %change %stddev
\ | \
3.129e+08 -26.3% 2.306e+08 stress-ng.sockdiag.ops
5214640 -26.3% 3842782 stress-ng.sockdiag.ops_per_sec
82895 -6.9% 77178 stress-ng.time.involuntary_context_switches
103737 -9.5% 93892 stress-ng.time.voluntary_context_switches
7067 -6.3% 6620 vmstat.system.cs
0.05 -0.0 0.04 ± 6% mpstat.cpu.all.soft%
0.13 ± 3% -0.0 0.12 ± 5% mpstat.cpu.all.usr%
1783836 ± 7% -21.6% 1397649 ± 12% numa-vmstat.node1.numa_hit
1689477 ± 8% -22.9% 1303128 ± 13% numa-vmstat.node1.numa_local
894897 ± 22% +46.6% 1312222 ± 11% turbostat.C1E
3.85 ± 55% +3.5 7.33 ± 10% turbostat.C1E%
2451882 ± 4% -24.3% 1855676 ± 2% numa-numastat.node0.local_node
2501404 ± 3% -23.8% 1905161 ± 3% numa-numastat.node0.numa_hit
2437526 -24.1% 1849165 ± 3% numa-numastat.node1.local_node
2503693 -23.5% 1915338 ± 3% numa-numastat.node1.numa_hit
7977 ± 19% -22.6% 6178 ± 8% softirqs.CPU2.RCU
7989 ± 25% -23.4% 6121 ± 3% softirqs.CPU25.RCU
8011 ± 24% -26.8% 5862 ± 3% softirqs.CPU8.RCU
890963 ± 3% -17.4% 735738 softirqs.RCU
74920 -3.6% 72233 proc-vmstat.nr_slab_unreclaimable
5007343 -23.7% 3821593 proc-vmstat.numa_hit
4891675 -24.2% 3705934 proc-vmstat.numa_local
5007443 -23.7% 3821701 proc-vmstat.pgalloc_normal
4796850 -24.7% 3610677 proc-vmstat.pgfree
0.71 ± 17% -41.1% 0.42 perf-stat.i.MPKI
0.12 ± 12% -0.0 0.10 ± 8% perf-stat.i.branch-miss-rate%
10044516 ± 13% -23.6% 7678759 ± 3% perf-stat.i.cache-misses
42758000 ± 6% -28.5% 30580693 perf-stat.i.cache-references
6920 -5.9% 6510 perf-stat.i.context-switches
571.08 ± 2% -13.4% 494.31 ± 2% perf-stat.i.cpu-migrations
39356 ± 12% +29.2% 50865 ± 3% perf-stat.i.cycles-between-cache-misses
0.01 ± 36% -0.0 0.00 ± 24% perf-stat.i.dTLB-load-miss-rate%
0.01 ± 23% -0.0 0.00 ± 14% perf-stat.i.dTLB-store-miss-rate%
8.447e+08 +27.0% 1.073e+09 perf-stat.i.dTLB-stores
13.36 -2.2% 13.07 perf-stat.i.major-faults
364.56 ± 9% -24.9% 273.60 perf-stat.i.metric.K/sec
350.63 +0.7% 353.23 perf-stat.i.metric.M/sec
87.88 +1.4 89.23 perf-stat.i.node-load-miss-rate%
1381985 ± 12% -27.7% 999393 ± 3% perf-stat.i.node-load-misses
198989 ± 6% -31.9% 135458 ± 4% perf-stat.i.node-loads
4305132 -27.4% 3124590 perf-stat.i.node-store-misses
581796 ± 5% -25.6% 432807 ± 3% perf-stat.i.node-stores
0.46 ± 5% -28.7% 0.33 perf-stat.overall.MPKI
39894 ± 12% +28.6% 51310 ± 3% perf-stat.overall.cycles-between-cache-misses
0.01 ± 22% -0.0 0.00 ± 12% perf-stat.overall.dTLB-store-miss-rate%
9916145 ± 13% -23.8% 7560589 ± 3% perf-stat.ps.cache-misses
42385546 ± 5% -28.7% 30225277 perf-stat.ps.cache-references
6786 -5.9% 6385 perf-stat.ps.context-switches
562.65 ± 2% -13.5% 486.73 ± 2% perf-stat.ps.cpu-migrations
8.314e+08 +26.8% 1.055e+09 perf-stat.ps.dTLB-stores
1359293 ± 11% -27.7% 982331 ± 3% perf-stat.ps.node-load-misses
205280 ± 6% -33.3% 136979 ± 5% perf-stat.ps.node-loads
4237942 -27.5% 3070934 perf-stat.ps.node-store-misses
585102 ± 5% -26.6% 429702 ± 3% perf-stat.ps.node-stores
5.844e+12 +0.9% 5.897e+12 perf-stat.total.instructions
99.26 +0.5 99.72 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.sendmsg
99.25 +0.5 99.72 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.sendmsg
99.25 +0.5 99.72 perf-profile.calltrace.cycles-pp.__sys_sendmsg.do_syscall_64.entry_SYSCALL_64_after_hwframe.sendmsg
99.26 +0.5 99.73 perf-profile.calltrace.cycles-pp.sendmsg
99.24 +0.5 99.71 perf-profile.calltrace.cycles-pp.____sys_sendmsg.___sys_sendmsg.__sys_sendmsg.do_syscall_64.entry_SYSCALL_64_after_hwframe
99.24 +0.5 99.71 perf-profile.calltrace.cycles-pp.sock_sendmsg.____sys_sendmsg.___sys_sendmsg.__sys_sendmsg.do_syscall_64
99.25 +0.5 99.72 perf-profile.calltrace.cycles-pp.___sys_sendmsg.__sys_sendmsg.do_syscall_64.entry_SYSCALL_64_after_hwframe.sendmsg
99.24 +0.5 99.71 perf-profile.calltrace.cycles-pp.netlink_sendmsg.sock_sendmsg.____sys_sendmsg.___sys_sendmsg.__sys_sendmsg
97.56 +0.5 98.04 perf-profile.calltrace.cycles-pp.osq_lock.__mutex_lock.sock_diag_rcv.netlink_unicast.netlink_sendmsg
99.22 +0.5 99.70 perf-profile.calltrace.cycles-pp.netlink_unicast.netlink_sendmsg.sock_sendmsg.____sys_sendmsg.___sys_sendmsg
99.19 +0.5 99.68 perf-profile.calltrace.cycles-pp.sock_diag_rcv.netlink_unicast.netlink_sendmsg.sock_sendmsg.____sys_sendmsg
98.41 +0.5 98.90 perf-profile.calltrace.cycles-pp.__mutex_lock.sock_diag_rcv.netlink_unicast.netlink_sendmsg.sock_sendmsg
0.48 -0.4 0.07 ± 5% perf-profile.children.cycles-pp.recvmsg
0.46 ± 2% -0.4 0.06 perf-profile.children.cycles-pp.___sys_recvmsg
0.47 ± 2% -0.4 0.07 ± 6% perf-profile.children.cycles-pp.__sys_recvmsg
0.45 -0.4 0.06 ± 9% perf-profile.children.cycles-pp.____sys_recvmsg
1.14 -0.4 0.76 perf-profile.children.cycles-pp.netlink_dump
1.09 -0.4 0.73 perf-profile.children.cycles-pp.unix_diag_dump
0.66 -0.3 0.37 ± 2% perf-profile.children.cycles-pp._raw_spin_lock
0.26 ± 2% -0.1 0.19 ± 2% perf-profile.children.cycles-pp.sk_diag_fill
0.07 ± 5% -0.0 0.04 ± 57% perf-profile.children.cycles-pp.__x64_sys_socket
0.07 ± 5% -0.0 0.04 ± 57% perf-profile.children.cycles-pp.__sys_socket
0.07 -0.0 0.04 ± 57% perf-profile.children.cycles-pp.__close
0.12 ± 4% -0.0 0.08 ± 5% perf-profile.children.cycles-pp.memset_erms
0.11 ± 4% -0.0 0.08 ± 5% perf-profile.children.cycles-pp.nla_put
0.08 ± 5% -0.0 0.06 perf-profile.children.cycles-pp.__nlmsg_put
0.08 ± 5% -0.0 0.05 ± 8% perf-profile.children.cycles-pp.__socket
0.08 -0.0 0.06 ± 7% perf-profile.children.cycles-pp.__nla_put
0.07 -0.0 0.05 perf-profile.children.cycles-pp.__nla_reserve
0.07 ± 5% -0.0 0.05 ± 8% perf-profile.children.cycles-pp.rcu_core
0.08 ± 5% -0.0 0.06 perf-profile.children.cycles-pp.__softirqentry_text_start
0.07 -0.0 0.05 ± 8% perf-profile.children.cycles-pp.rcu_do_batch
0.06 ± 7% -0.0 0.05 perf-profile.children.cycles-pp.sock_i_ino
99.89 +0.0 99.92 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
99.89 +0.0 99.92 perf-profile.children.cycles-pp.do_syscall_64
0.00 +0.1 0.08 perf-profile.children.cycles-pp.__raw_callee_save___native_queued_spin_unlock
99.26 +0.5 99.73 perf-profile.children.cycles-pp.sendmsg
99.25 +0.5 99.72 perf-profile.children.cycles-pp.__sys_sendmsg
99.25 +0.5 99.72 perf-profile.children.cycles-pp.___sys_sendmsg
99.24 +0.5 99.71 perf-profile.children.cycles-pp.____sys_sendmsg
99.24 +0.5 99.71 perf-profile.children.cycles-pp.sock_sendmsg
99.24 +0.5 99.71 perf-profile.children.cycles-pp.netlink_sendmsg
99.22 +0.5 99.70 perf-profile.children.cycles-pp.netlink_unicast
97.59 +0.5 98.08 perf-profile.children.cycles-pp.osq_lock
99.19 +0.5 99.68 perf-profile.children.cycles-pp.sock_diag_rcv
98.41 +0.5 98.90 perf-profile.children.cycles-pp.__mutex_lock
0.12 ± 5% -0.0 0.08 ± 5% perf-profile.self.cycles-pp.unix_diag_dump
0.11 -0.0 0.08 perf-profile.self.cycles-pp.memset_erms
0.00 +0.1 0.06 perf-profile.self.cycles-pp.__raw_callee_save___native_queued_spin_unlock
0.28 ± 5% +0.1 0.35 ± 2% perf-profile.self.cycles-pp._raw_spin_lock
97.23 +0.5 97.72 perf-profile.self.cycles-pp.osq_lock
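For reference, the %change column is the relative difference between the two commits' means, and the headline -26.3% follows directly from the ops and ops_per_sec rows above:

```python
def pct_change(base, patched):
    """Relative change of the patched mean vs. the base mean, in percent."""
    return (patched - base) / base * 100

# stress-ng.sockdiag.ops_per_sec means from the table above
print(round(pct_change(5214640, 3842782), 1))  # -26.3
```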
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/lkp@lists.01.org
Intel Corporation
Thanks,
Oliver Sang
[mm/readahead] 793917d997: fio.read_iops -18.8% regression
by kernel test robot
Greetings,
FYI, we noticed a -18.8% regression of fio.read_iops due to commit:
commit: 793917d997df2e432f3e9ac126e4482d68256d01 ("mm/readahead: Add large folio readahead")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: fio-basic
on test machine: 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 512G memory
with the following parameters:
disk: 2pmem
fs: xfs
runtime: 200s
nr_task: 50%
time_based: tb
rw: read
bs: 2M
ioengine: sync
test_size: 200G
cpufreq_governor: performance
ucode: 0x500320a
test-description: Fio is a tool that spawns a number of threads or processes performing a particular type of I/O action as specified by the user.
test-url: https://github.com/axboe/fio
In addition, the commit has a significant impact on the following tests:
+------------------+-------------------------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.throughput 241.0% improvement |
| test machine | 192 threads 4 sockets Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory |
| test parameters | cpufreq_governor=performance |
| | runtime=300s |
| | test=mmap-pread-seq |
| | ucode=0x500320a |
+------------------+-------------------------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.throughput 64.8% improvement |
| test machine | 192 threads 4 sockets Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory |
| test parameters | cpufreq_governor=performance |
| | runtime=300s |
| | test=mmap-pread-seq-mt |
| | ucode=0x500320a |
+------------------+-------------------------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.throughput 24.1% improvement |
| test machine | 48 threads 2 sockets Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz with 112G memory |
| test parameters | cpufreq_governor=performance |
| | runtime=300s |
| | test=migrate |
| | ucode=0x42e |
+------------------+-------------------------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.throughput 45.0% improvement |
| test machine | 192 threads 4 sockets Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory |
| test parameters | cpufreq_governor=performance |
| | runtime=300s |
| | test=lru-file-mmap-read |
| | ucode=0x500320a |
+------------------+-------------------------------------------------------------------------------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <oliver.sang@intel.com>
Details are below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp directories to run from a clean state.
=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/time_based/ucode:
2M/gcc-11/performance/2pmem/xfs/sync/x86_64-rhel-8.3/50%/debian-10.4-x86_64-20200603.cgz/200s/read/lkp-csl-2sp7/200G/fio-basic/tb/0x500320a
commit:
18788cfa23 ("mm: Support arbitrary THP sizes")
793917d997 ("mm/readahead: Add large folio readahead")
18788cfa23696774 793917d997df2e432f3e9ac126e
---------------- ---------------------------
%stddev %change %stddev
\ | \
0.57 ±204% -0.6 0.00 ±223% fio.latency_1000us%
16.62 ± 22% +24.7 41.29 ± 3% fio.latency_20ms%
1.64 ±103% -1.6 0.08 ±112% fio.latency_2ms%
24.33 ± 30% -23.0 1.35 ± 17% fio.latency_4ms%
0.10 ± 45% +0.2 0.33 ± 17% fio.latency_50ms%
13189 ± 7% -18.8% 10710 fio.read_bw_MBps
11643562 ± 6% +33.0% 15488341 fio.read_clat_90%_us
14265002 ± 5% +13.3% 16165546 fio.read_clat_95%_us
6493088 ± 5% +37.0% 8897953 fio.read_clat_mean_us
3583181 ± 6% +16.5% 4173165 ± 2% fio.read_clat_stddev
6594 ± 7% -18.8% 5355 fio.read_iops
5.417e+09 ± 6% -19.0% 4.388e+09 fio.time.file_system_inputs
36028 ± 2% -35.5% 23253 fio.time.involuntary_context_switches
31.69 ± 9% +42.8% 45.26 ± 4% fio.time.user_time
21307 -5.2% 20198 fio.time.voluntary_context_switches
1322167 ± 6% -19.0% 1071118 fio.workload
451.73 -2.9% 438.55 pmeter.Average_Active_Power
13277785 ± 7% -18.6% 10808063 vmstat.io.bi
2158 -14.8% 1839 vmstat.system.cs
1.72 ± 5% +0.5 2.20 mpstat.cpu.all.irq%
0.22 ± 13% +0.1 0.33 ± 3% mpstat.cpu.all.soft%
0.27 ± 12% +0.2 0.46 ± 3% mpstat.cpu.all.usr%
341045 -67.6% 110363 meminfo.KReclaimable
64800 ± 8% +21.6% 78827 ± 7% meminfo.Mapped
341045 -67.6% 110363 meminfo.SReclaimable
555633 -40.9% 328399 meminfo.Slab
0.06 ± 6% -67.6% 0.02 turbostat.IPC
13881 ± 56% -60.3% 5510 ± 62% turbostat.POLL
66.00 -3.0% 64.00 turbostat.PkgTmp
276.27 -2.7% 268.67 turbostat.PkgWatt
3.121e+08 ± 16% -99.4% 1987161 ± 5% numa-numastat.node0.local_node
1.059e+08 ± 12% -98.6% 1510071 ± 4% numa-numastat.node0.numa_foreign
3.113e+08 ± 16% -99.3% 2032533 ± 4% numa-numastat.node0.numa_hit
2.627e+08 ± 3% -99.0% 2626722 ± 3% numa-numastat.node1.local_node
2.621e+08 ± 3% -99.0% 2668324 ± 2% numa-numastat.node1.numa_hit
1.059e+08 ± 12% -98.6% 1510159 ± 4% numa-numastat.node1.numa_miss
1.061e+08 ± 12% -98.5% 1551712 ± 2% numa-numastat.node1.other_node
163410 ± 3% -64.4% 58180 ± 38% numa-meminfo.node0.KReclaimable
163410 ± 3% -64.4% 58180 ± 38% numa-meminfo.node0.SReclaimable
272215 ± 6% -34.0% 179749 ± 13% numa-meminfo.node0.Slab
64115515 ± 4% +11.0% 71186782 numa-meminfo.node1.FilePages
177499 ± 3% -70.6% 52181 ± 44% numa-meminfo.node1.KReclaimable
65243028 ± 4% +10.8% 72267036 numa-meminfo.node1.MemUsed
177499 ± 3% -70.6% 52181 ± 44% numa-meminfo.node1.SReclaimable
283269 ± 6% -47.5% 148640 ± 18% numa-meminfo.node1.Slab
931356 ± 2% +17.3% 1092480 sched_debug.cpu.avg_idle.avg
1444016 ± 8% +19.3% 1722423 ± 3% sched_debug.cpu.avg_idle.max
339528 ± 20% +70.6% 579295 ± 16% sched_debug.cpu.avg_idle.min
187998 ± 9% +47.3% 276979 ± 4% sched_debug.cpu.avg_idle.stddev
8.03 ± 42% +204.5% 24.45 ± 25% sched_debug.cpu.clock.stddev
516781 +11.8% 577621 sched_debug.cpu.max_idle_balance_cost.avg
705916 ± 7% +21.6% 858128 ± 3% sched_debug.cpu.max_idle_balance_cost.max
38577 ± 31% +150.1% 96487 ± 9% sched_debug.cpu.max_idle_balance_cost.stddev
0.00 ± 17% +122.4% 0.00 ± 49% sched_debug.cpu.next_balance.stddev
40844 ± 3% -64.4% 14543 ± 38% numa-vmstat.node0.nr_slab_reclaimable
1.059e+08 ± 12% -98.6% 1510071 ± 4% numa-vmstat.node0.numa_foreign
3.113e+08 ± 16% -99.3% 2032392 ± 4% numa-vmstat.node0.numa_hit
3.121e+08 ± 16% -99.4% 1987020 ± 5% numa-vmstat.node0.numa_local
3202 ±101% -98.8% 38.17 ± 60% numa-vmstat.node0.workingset_nodes
16031245 ± 4% +10.9% 17773973 numa-vmstat.node1.nr_file_pages
44374 ± 3% -70.6% 13045 ± 44% numa-vmstat.node1.nr_slab_reclaimable
2.62e+08 ± 3% -99.0% 2668367 ± 2% numa-vmstat.node1.numa_hit
2.626e+08 ± 3% -99.0% 2626765 ± 3% numa-vmstat.node1.numa_local
1.059e+08 ± 12% -98.6% 1510159 ± 4% numa-vmstat.node1.numa_miss
1.061e+08 ± 12% -98.5% 1551712 ± 2% numa-vmstat.node1.numa_other
299.83 ± 34% -95.6% 13.33 ± 74% numa-vmstat.node1.workingset_nodes
318998 ±113% -99.4% 1970 ±141% proc-vmstat.compact_daemon_free_scanned
193995 ± 28% +430.3% 1028832 ± 27% proc-vmstat.compact_daemon_migrate_scanned
42975962 ± 42% -95.3% 2030090 ± 41% proc-vmstat.compact_free_scanned
7148833 ± 16% -98.6% 97725 ± 18% proc-vmstat.compact_isolated
26398894 ± 11% -30.4% 18367268 ± 31% proc-vmstat.compact_migrate_scanned
9977 ± 2% -4.6% 9519 ± 3% proc-vmstat.nr_active_anon
26107465 +4.9% 27388404 proc-vmstat.nr_file_pages
50836830 -2.5% 49578796 proc-vmstat.nr_free_pages
25502602 +5.0% 26782320 proc-vmstat.nr_inactive_file
16509 ± 8% +20.7% 19932 ± 6% proc-vmstat.nr_mapped
85277 -67.6% 27589 proc-vmstat.nr_slab_reclaimable
53644 +1.6% 54503 proc-vmstat.nr_slab_unreclaimable
9977 ± 2% -4.6% 9519 ± 3% proc-vmstat.nr_zone_active_anon
25502515 +5.0% 26782297 proc-vmstat.nr_zone_inactive_file
1.059e+08 ± 12% -98.6% 1510071 ± 4% proc-vmstat.numa_foreign
5.734e+08 ± 10% -99.2% 4702875 ± 2% proc-vmstat.numa_hit
5.748e+08 ± 10% -99.2% 4615902 ± 2% proc-vmstat.numa_local
1.059e+08 ± 12% -98.6% 1510159 ± 4% proc-vmstat.numa_miss
1.062e+08 ± 12% -98.5% 1597130 ± 3% proc-vmstat.numa_other
6.695e+08 ± 6% -18.7% 5.444e+08 proc-vmstat.pgalloc_normal
6.592e+08 ± 6% -21.1% 5.201e+08 ± 2% proc-vmstat.pgfree
3598703 ± 16% -97.8% 80835 ± 11% proc-vmstat.pgmigrate_success
2.708e+09 ± 6% -19.0% 2.194e+09 proc-vmstat.pgpgin
13829 ± 86% -100.0% 1.00 ±100% proc-vmstat.pgrotated
29010332 ± 58% -85.8% 4115913 ± 55% proc-vmstat.pgscan_file
29010332 ± 58% -85.8% 4114473 ± 55% proc-vmstat.pgscan_kswapd
28974103 ± 58% -85.8% 4114957 ± 55% proc-vmstat.pgsteal_file
28974103 ± 58% -85.8% 4113517 ± 55% proc-vmstat.pgsteal_kswapd
3588 ± 96% -98.6% 51.50 ± 58% proc-vmstat.workingset_nodes
28.72 ± 3% +109.8% 60.27 perf-stat.i.MPKI
5.903e+09 ± 4% -68.8% 1.841e+09 perf-stat.i.branch-instructions
0.24 ± 3% +0.1 0.36 perf-stat.i.branch-miss-rate%
14069166 ± 5% -54.5% 6402039 perf-stat.i.branch-misses
85.42 +5.1 90.52 perf-stat.i.cache-miss-rate%
7.328e+08 ± 7% -20.1% 5.852e+08 perf-stat.i.cache-misses
8.581e+08 ± 7% -24.9% 6.444e+08 perf-stat.i.cache-references
2034 -16.3% 1703 perf-stat.i.context-switches
5.10 ± 5% +188.5% 14.72 perf-stat.i.cpi
230.67 ± 7% +28.4% 296.26 perf-stat.i.cycles-between-cache-misses
0.02 ± 14% -0.0 0.00 ± 12% perf-stat.i.dTLB-load-miss-rate%
1298538 ± 17% -93.1% 90201 ± 11% perf-stat.i.dTLB-load-misses
6.904e+09 ± 4% -71.4% 1.976e+09 perf-stat.i.dTLB-loads
0.02 ± 12% -0.0 0.00 ± 20% perf-stat.i.dTLB-store-miss-rate%
1002153 ± 17% -95.1% 49149 ± 20% perf-stat.i.dTLB-store-misses
4.381e+09 ± 6% -61.1% 1.703e+09 perf-stat.i.dTLB-stores
55.38 ± 2% -11.7 43.67 perf-stat.i.iTLB-load-miss-rate%
2015561 ± 7% -42.1% 1166138 perf-stat.i.iTLB-load-misses
3.049e+10 ± 4% -65.3% 1.057e+10 perf-stat.i.instructions
15085 ± 3% -40.6% 8965 ± 2% perf-stat.i.instructions-per-iTLB-miss
0.22 ± 4% -65.3% 0.08 perf-stat.i.ipc
14.43 +5.4% 15.21 perf-stat.i.major-faults
1115 ± 15% +37.6% 1534 perf-stat.i.metric.K/sec
190.01 ± 5% -65.5% 65.55 perf-stat.i.metric.M/sec
27.84 ± 14% +10.8 38.67 ± 2% perf-stat.i.node-load-miss-rate%
33175146 ± 4% +14.4% 37941465 ± 2% perf-stat.i.node-load-misses
25.76 ± 20% +12.6 38.33 ± 2% perf-stat.i.node-store-miss-rate%
35188134 ± 10% +35.5% 47664398 ± 2% perf-stat.i.node-store-misses
28.13 ± 3% +116.5% 60.90 perf-stat.overall.MPKI
0.24 ± 2% +0.1 0.35 perf-stat.overall.branch-miss-rate%
85.41 +5.4 90.82 perf-stat.overall.cache-miss-rate%
4.52 ± 4% +192.8% 13.24 perf-stat.overall.cpi
188.69 ± 8% +26.9% 239.44 perf-stat.overall.cycles-between-cache-misses
0.02 ± 14% -0.0 0.00 ± 11% perf-stat.overall.dTLB-load-miss-rate%
0.02 ± 13% -0.0 0.00 ± 21% perf-stat.overall.dTLB-store-miss-rate%
56.41 ± 2% -12.7 43.76 perf-stat.overall.iTLB-load-miss-rate%
15146 ± 3% -40.7% 8984 perf-stat.overall.instructions-per-iTLB-miss
0.22 ± 4% -65.9% 0.08 perf-stat.overall.ipc
23.53 ± 13% +7.7 31.24 ± 2% perf-stat.overall.node-load-miss-rate%
22.91 ± 19% +9.8 32.76 ± 2% perf-stat.overall.node-store-miss-rate%
4618032 ± 3% -57.6% 1956685 perf-stat.overall.path-length
5.845e+09 ± 4% -69.0% 1.811e+09 perf-stat.ps.branch-instructions
13926188 ± 5% -54.7% 6307104 perf-stat.ps.branch-misses
7.264e+08 ± 7% -20.8% 5.75e+08 perf-stat.ps.cache-misses
8.503e+08 ± 7% -25.5% 6.331e+08 perf-stat.ps.cache-references
2016 -16.4% 1685 perf-stat.ps.context-switches
1283289 ± 17% -93.1% 88877 ± 11% perf-stat.ps.dTLB-load-misses
6.836e+09 ± 4% -71.6% 1.944e+09 perf-stat.ps.dTLB-loads
989408 ± 17% -95.1% 48472 ± 20% perf-stat.ps.dTLB-store-misses
4.338e+09 ± 6% -61.4% 1.674e+09 perf-stat.ps.dTLB-stores
1997784 ± 7% -42.1% 1157224 perf-stat.ps.iTLB-load-misses
3.019e+10 ± 4% -65.6% 1.04e+10 perf-stat.ps.instructions
33161221 ± 4% +15.1% 38152076 ± 2% perf-stat.ps.node-load-misses
35138617 ± 10% +35.8% 47712507 ± 2% perf-stat.ps.node-store-misses
6.093e+12 ± 4% -65.6% 2.096e+12 perf-stat.total.instructions
40.24 ± 5% -40.2 0.00 perf-profile.calltrace.cycles-pp.page_cache_ra_unbounded.filemap_get_pages.filemap_read.xfs_file_buffered_read.xfs_file_read_iter
26.04 ± 4% -26.0 0.00 perf-profile.calltrace.cycles-pp.read_pages.page_cache_ra_unbounded.filemap_get_pages.filemap_read.xfs_file_buffered_read
26.03 ± 4% -26.0 0.00 perf-profile.calltrace.cycles-pp.iomap_readahead.read_pages.page_cache_ra_unbounded.filemap_get_pages.filemap_read
24.65 ± 5% -24.6 0.00 perf-profile.calltrace.cycles-pp.__submit_bio_noacct.iomap_readahead.read_pages.page_cache_ra_unbounded.filemap_get_pages
24.65 ± 5% -24.6 0.00 perf-profile.calltrace.cycles-pp.__submit_bio.__submit_bio_noacct.iomap_readahead.read_pages.page_cache_ra_unbounded
8.82 ± 25% -8.8 0.00 perf-profile.calltrace.cycles-pp.filemap_add_folio.page_cache_ra_unbounded.filemap_get_pages.filemap_read.xfs_file_buffered_read
7.78 ± 37% -7.8 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.posix_fadvise64
7.78 ± 37% -7.8 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.posix_fadvise64
7.78 ± 37% -7.8 0.00 perf-profile.calltrace.cycles-pp.posix_fadvise64
7.78 ± 37% -7.8 0.00 perf-profile.calltrace.cycles-pp.__x64_sys_fadvise64.do_syscall_64.entry_SYSCALL_64_after_hwframe.posix_fadvise64
7.78 ± 37% -7.8 0.00 perf-profile.calltrace.cycles-pp.ksys_fadvise64_64.__x64_sys_fadvise64.do_syscall_64.entry_SYSCALL_64_after_hwframe.posix_fadvise64
7.78 ± 37% -7.8 0.00 perf-profile.calltrace.cycles-pp.generic_fadvise.ksys_fadvise64_64.__x64_sys_fadvise64.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.78 ± 37% -7.8 0.00 perf-profile.calltrace.cycles-pp.invalidate_mapping_pagevec.generic_fadvise.ksys_fadvise64_64.__x64_sys_fadvise64.do_syscall_64
6.88 ± 34% -6.9 0.00 perf-profile.calltrace.cycles-pp.folio_add_lru.filemap_add_folio.page_cache_ra_unbounded.filemap_get_pages.filemap_read
6.82 ± 34% -6.8 0.00 perf-profile.calltrace.cycles-pp.__pagevec_lru_add.folio_add_lru.filemap_add_folio.page_cache_ra_unbounded.filemap_get_pages
6.09 ± 41% -6.1 0.00 perf-profile.calltrace.cycles-pp.__pagevec_release.invalidate_mapping_pagevec.generic_fadvise.ksys_fadvise64_64.__x64_sys_fadvise64
6.08 ± 41% -6.1 0.00 perf-profile.calltrace.cycles-pp.release_pages.__pagevec_release.invalidate_mapping_pagevec.generic_fadvise.ksys_fadvise64_64
0.00 +0.5 0.54 ± 2% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages.folio_alloc.page_cache_ra_order.filemap_get_pages
0.00 +0.6 0.58 ± 3% perf-profile.calltrace.cycles-pp.__alloc_pages.folio_alloc.page_cache_ra_order.filemap_get_pages.filemap_read
0.00 +0.6 0.60 ± 2% perf-profile.calltrace.cycles-pp.folio_alloc.page_cache_ra_order.filemap_get_pages.filemap_read.xfs_file_buffered_read
0.00 +1.0 0.98 perf-profile.calltrace.cycles-pp.page_cache_ra_order.filemap_get_pages.filemap_read.xfs_file_buffered_read.xfs_file_read_iter
64.14 ± 4% +7.3 71.44 perf-profile.calltrace.cycles-pp.filemap_read.xfs_file_buffered_read.xfs_file_read_iter.new_sync_read.vfs_read
64.15 ± 4% +7.3 71.45 perf-profile.calltrace.cycles-pp.xfs_file_buffered_read.xfs_file_read_iter.new_sync_read.vfs_read.ksys_read
64.16 ± 4% +7.3 71.47 perf-profile.calltrace.cycles-pp.xfs_file_read_iter.new_sync_read.vfs_read.ksys_read.do_syscall_64
64.16 ± 4% +7.3 71.47 perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
64.17 ± 4% +7.3 71.50 perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
64.19 ± 4% +7.3 71.52 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
64.18 ± 4% +7.3 71.51 perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
64.19 ± 4% +7.3 71.52 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
64.20 ± 4% +7.3 71.54 perf-profile.calltrace.cycles-pp.read
24.55 ± 3% +10.8 35.40 perf-profile.calltrace.cycles-pp.copy_mc_fragile.pmem_do_read.pmem_submit_bio.__submit_bio.__submit_bio_noacct
24.01 ± 5% +11.8 35.76 perf-profile.calltrace.cycles-pp.pmem_submit_bio.__submit_bio.__submit_bio_noacct.iomap_readahead.read_pages
23.92 ± 5% +11.8 35.70 perf-profile.calltrace.cycles-pp.pmem_do_read.pmem_submit_bio.__submit_bio.__submit_bio_noacct.iomap_readahead
19.40 ± 4% +13.5 32.92 perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyout.copy_page_to_iter.filemap_read.xfs_file_buffered_read
19.64 ± 4% +13.7 33.39 perf-profile.calltrace.cycles-pp.copyout.copy_page_to_iter.filemap_read.xfs_file_buffered_read.xfs_file_read_iter
19.77 ± 4% +14.0 33.75 perf-profile.calltrace.cycles-pp.copy_page_to_iter.filemap_read.xfs_file_buffered_read.xfs_file_read_iter.new_sync_read
0.00 +36.0 35.98 perf-profile.calltrace.cycles-pp.__submit_bio.__submit_bio_noacct.iomap_readahead.read_pages.filemap_get_pages
0.00 +36.0 35.99 perf-profile.calltrace.cycles-pp.__submit_bio_noacct.iomap_readahead.read_pages.filemap_get_pages.filemap_read
0.00 +36.3 36.34 perf-profile.calltrace.cycles-pp.iomap_readahead.read_pages.filemap_get_pages.filemap_read.xfs_file_buffered_read
0.00 +36.4 36.38 perf-profile.calltrace.cycles-pp.read_pages.filemap_get_pages.filemap_read.xfs_file_buffered_read.xfs_file_read_iter
40.24 ± 5% -40.2 0.00 perf-profile.children.cycles-pp.page_cache_ra_unbounded
13.34 ± 17% -13.2 0.15 ± 13% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
9.26 ± 30% -9.3 0.00 perf-profile.children.cycles-pp.folio_lruvec_lock_irqsave
9.34 ± 30% -9.1 0.22 ± 7% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
8.83 ± 25% -8.5 0.37 ± 4% perf-profile.children.cycles-pp.filemap_add_folio
7.78 ± 37% -7.4 0.39 ± 7% perf-profile.children.cycles-pp.posix_fadvise64
7.78 ± 37% -7.4 0.39 ± 7% perf-profile.children.cycles-pp.__x64_sys_fadvise64
7.78 ± 37% -7.4 0.39 ± 7% perf-profile.children.cycles-pp.ksys_fadvise64_64
7.78 ± 37% -7.4 0.39 ± 7% perf-profile.children.cycles-pp.generic_fadvise
7.78 ± 37% -7.4 0.39 ± 7% perf-profile.children.cycles-pp.invalidate_mapping_pagevec
6.90 ± 34% -6.8 0.08 ± 8% perf-profile.children.cycles-pp.folio_add_lru
6.85 ± 34% -6.8 0.08 ± 8% perf-profile.children.cycles-pp.__pagevec_lru_add
6.20 ± 40% -6.0 0.18 ± 9% perf-profile.children.cycles-pp.release_pages
6.09 ± 41% -5.9 0.17 ± 10% perf-profile.children.cycles-pp.__pagevec_release
5.05 ± 14% -4.5 0.60 ± 2% perf-profile.children.cycles-pp.folio_alloc
5.01 ± 14% -4.4 0.60 ± 2% perf-profile.children.cycles-pp.__alloc_pages
4.88 ± 14% -4.3 0.56 ± 2% perf-profile.children.cycles-pp.get_page_from_freelist
4.74 ± 15% -4.2 0.51 ± 3% perf-profile.children.cycles-pp.rmqueue
1.93 ± 14% -1.6 0.29 ± 3% perf-profile.children.cycles-pp.__filemap_add_folio
1.07 ± 38% -0.9 0.19 ± 6% perf-profile.children.cycles-pp.iomap_readpage_iter
0.98 ± 24% -0.8 0.17 ± 7% perf-profile.children.cycles-pp.__mem_cgroup_charge
0.81 ± 7% -0.7 0.11 ± 5% perf-profile.children.cycles-pp.filemap_get_read_batch
0.75 ± 18% -0.7 0.08 ± 6% perf-profile.children.cycles-pp.__list_del_entry_valid
0.69 ± 10% -0.6 0.04 ± 45% perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.75 ± 41% -0.6 0.20 ± 10% perf-profile.children.cycles-pp.remove_mapping
0.74 ± 41% -0.5 0.20 ± 10% perf-profile.children.cycles-pp.__remove_mapping
0.64 ± 33% -0.5 0.12 ± 7% perf-profile.children.cycles-pp.charge_memcg
0.58 ± 20% -0.5 0.06 ± 7% perf-profile.children.cycles-pp.__mod_lruvec_page_state
0.58 ± 5% -0.5 0.07 ± 6% perf-profile.children.cycles-pp.iomap_read_end_io
0.58 ± 11% -0.5 0.08 ± 4% perf-profile.children.cycles-pp.xas_load
0.44 ± 48% -0.4 0.09 ± 10% perf-profile.children.cycles-pp.try_charge_memcg
0.32 ± 52% -0.2 0.08 ± 12% perf-profile.children.cycles-pp.page_counter_try_charge
0.23 ± 20% -0.2 0.06 ± 7% perf-profile.children.cycles-pp.__mod_lruvec_state
0.20 ± 19% -0.1 0.06 ± 7% perf-profile.children.cycles-pp.__mod_node_page_state
0.14 ± 6% -0.1 0.06 ± 8% perf-profile.children.cycles-pp.kmem_cache_alloc
0.05 ± 74% +0.0 0.10 ± 11% perf-profile.children.cycles-pp.generic_file_write_iter
0.01 ±223% +0.0 0.06 ± 11% perf-profile.children.cycles-pp.__x64_sys_execve
0.01 ±223% +0.0 0.06 ± 11% perf-profile.children.cycles-pp.do_execveat_common
0.01 ±223% +0.0 0.06 ± 11% perf-profile.children.cycles-pp.execve
0.00 +0.1 0.06 ± 13% perf-profile.children.cycles-pp.mempool_alloc
0.05 ± 76% +0.1 0.11 ± 13% perf-profile.children.cycles-pp.new_sync_write
0.00 +0.1 0.06 ± 11% perf-profile.children.cycles-pp.find_busiest_group
0.00 +0.1 0.06 ± 11% perf-profile.children.cycles-pp.update_sd_lb_stats
0.05 ± 76% +0.1 0.11 ± 12% perf-profile.children.cycles-pp.vfs_write
0.05 ± 76% +0.1 0.11 ± 11% perf-profile.children.cycles-pp.ksys_write
0.02 ±141% +0.1 0.08 ± 9% perf-profile.children.cycles-pp.exc_page_fault
0.00 +0.1 0.06 ± 6% perf-profile.children.cycles-pp.bio_free
0.00 +0.1 0.06 ± 17% perf-profile.children.cycles-pp.schedule
0.06 ± 76% +0.1 0.12 ± 11% perf-profile.children.cycles-pp.write
0.06 ± 11% +0.1 0.12 ± 46% perf-profile.children.cycles-pp.__might_resched
0.02 ±141% +0.1 0.08 ± 7% perf-profile.children.cycles-pp.asm_exc_page_fault
0.01 ±223% +0.1 0.07 ± 9% perf-profile.children.cycles-pp.handle_mm_fault
0.00 +0.1 0.06 ± 7% perf-profile.children.cycles-pp.load_balance
0.00 +0.1 0.07 ± 11% perf-profile.children.cycles-pp.bio_alloc_bioset
0.00 +0.1 0.07 ± 7% perf-profile.children.cycles-pp.__might_fault
0.00 +0.1 0.07 ± 11% perf-profile.children.cycles-pp.__handle_mm_fault
0.01 ±223% +0.1 0.08 ± 9% perf-profile.children.cycles-pp.do_user_addr_fault
0.00 +0.1 0.07 ± 8% perf-profile.children.cycles-pp.__kmalloc
0.03 ± 70% +0.1 0.10 ± 10% perf-profile.children.cycles-pp.iomap_iter
0.00 +0.1 0.08 ± 12% perf-profile.children.cycles-pp.__free_pages_ok
0.00 +0.1 0.08 ± 9% perf-profile.children.cycles-pp.iomap_page_create
0.00 +0.1 0.08 ± 13% perf-profile.children.cycles-pp.__schedule
0.01 ±223% +0.1 0.09 ± 11% perf-profile.children.cycles-pp.xfs_read_iomap_begin
0.00 +0.2 0.16 ± 11% perf-profile.children.cycles-pp.__cond_resched
0.00 +0.2 0.18 ± 9% perf-profile.children.cycles-pp.folio_mapped
0.00 +1.0 0.98 perf-profile.children.cycles-pp.page_cache_ra_order
64.15 ± 4% +7.3 71.45 perf-profile.children.cycles-pp.xfs_file_buffered_read
64.14 ± 4% +7.3 71.44 perf-profile.children.cycles-pp.filemap_read
64.16 ± 4% +7.3 71.47 perf-profile.children.cycles-pp.xfs_file_read_iter
64.18 ± 4% +7.3 71.50 perf-profile.children.cycles-pp.new_sync_read
64.20 ± 4% +7.3 71.54 perf-profile.children.cycles-pp.vfs_read
64.21 ± 4% +7.4 71.56 perf-profile.children.cycles-pp.ksys_read
64.23 ± 4% +7.4 71.60 perf-profile.children.cycles-pp.read
26.03 ± 4% +10.3 36.34 perf-profile.children.cycles-pp.iomap_readahead
26.04 ± 4% +10.3 36.39 perf-profile.children.cycles-pp.read_pages
25.55 ± 4% +10.4 35.98 perf-profile.children.cycles-pp.__submit_bio
25.55 ± 4% +10.4 35.99 perf-profile.children.cycles-pp.__submit_bio_noacct
24.68 ± 4% +10.9 35.54 perf-profile.children.cycles-pp.copy_mc_fragile
24.89 ± 4% +10.9 35.77 perf-profile.children.cycles-pp.pmem_submit_bio
24.80 ± 4% +10.9 35.70 perf-profile.children.cycles-pp.pmem_do_read
19.64 ± 4% +13.8 33.39 perf-profile.children.cycles-pp.copyout
19.64 ± 4% +13.8 33.42 perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
19.78 ± 4% +14.0 33.77 perf-profile.children.cycles-pp.copy_page_to_iter
13.34 ± 17% -13.2 0.15 ± 13% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.74 ± 18% -0.7 0.08 ± 6% perf-profile.self.cycles-pp.__list_del_entry_valid
0.52 ± 11% -0.5 0.06 ± 6% perf-profile.self.cycles-pp.xas_load
0.36 ± 3% -0.3 0.10 ± 6% perf-profile.self.cycles-pp.filemap_read
0.25 ± 52% -0.2 0.06 ± 11% perf-profile.self.cycles-pp.page_counter_try_charge
0.20 ± 19% -0.1 0.06 ± 7% perf-profile.self.cycles-pp.__mod_node_page_state
0.17 ± 30% -0.1 0.07 ± 7% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.11 ± 6% +0.0 0.16 ± 2% perf-profile.self.cycles-pp.pmem_do_read
0.00 +0.1 0.06 ± 6% perf-profile.self.cycles-pp.kmem_cache_free
0.00 +0.1 0.13 ± 10% perf-profile.self.cycles-pp.__cond_resched
0.14 ± 6% +0.2 0.31 ± 2% perf-profile.self.cycles-pp.rmqueue
0.00 +0.2 0.18 ± 9% perf-profile.self.cycles-pp.folio_mapped
24.40 ± 4% +10.8 35.18 perf-profile.self.cycles-pp.copy_mc_fragile
19.38 ± 4% +13.7 33.04 perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
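For readers unfamiliar with the perf-profile tables above: the middle column is not a relative %change but the absolute difference in cycle share, in percentage points, between the two commits. A minimal sketch (the helper name is illustrative, not part of lkp-tests), checked against two rows from the table above:

```python
# Illustrative helper (not lkp-tests code): the delta column in the
# perf-profile tables is the new cycle share minus the old cycle share,
# expressed in percentage points.

def profile_delta(old_pct: float, new_pct: float) -> float:
    """Percentage-point delta between the old and new cycle-share columns."""
    return new_pct - old_pct

# Rows from the perf-profile.self table above:
#   copy_mc_fragile:                 24.40 -> 35.18, shown as +10.8
#   copy_user_enhanced_fast_string:  19.38 -> 33.04, shown as +13.7
```

So a symbol can "gain" points either by getting slower itself or simply because other symbols (here, the lock slowpath) shrank.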
***************************************************************************************************
lkp-csl-2ap4: 192 threads 4 sockets Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-11/performance/x86_64-rhel-8.3/debian-10.4-x86_64-20200603.cgz/300s/lkp-csl-2ap4/mmap-pread-seq/vm-scalability/0x500320a
commit:
18788cfa23 ("mm: Support arbitrary THP sizes")
793917d997 ("mm/readahead: Add large folio readahead")
18788cfa23696774 793917d997df2e432f3e9ac126e
---------------- ---------------------------
%stddev %change %stddev
\ | \
46.87 ± 10% -97.9% 0.96 ± 4% vm-scalability.free_time
191595 ± 2% +241.0% 653394 ± 13% vm-scalability.median
0.19 ± 20% +1.2 1.37 ± 20% vm-scalability.stddev%
36786240 ± 2% +241.0% 1.255e+08 ± 13% vm-scalability.throughput
180.40 -77.2% 41.08 ± 12% vm-scalability.time.elapsed_time
180.40 -77.2% 41.08 ± 12% vm-scalability.time.elapsed_time.max
84631 -66.5% 28382 ± 21% vm-scalability.time.involuntary_context_switches
31087438 ± 5% -46.7% 16568180 ± 28% vm-scalability.time.major_page_faults
1.614e+08 -46.5% 86356129 ± 7% vm-scalability.time.minor_page_faults
18216 -3.2% 17637 vm-scalability.time.percent_of_cpu_this_job_got
28597 -86.4% 3901 ± 19% vm-scalability.time.system_time
4268 -21.8% 3336 ± 2% vm-scalability.time.user_time
46509474 ± 8% -45.0% 25597276 ± 33% vm-scalability.time.voluntary_context_switches
227.97 -60.7% 89.66 ± 3% uptime.boot
1.48e+09 ± 5% -45.4% 8.087e+08 ± 9% cpuidle..time
44196685 ± 8% -40.6% 26236295 ± 32% cpuidle..usage
4036916 ± 40% -82.9% 691729 ± 7% numa-numastat.node2.local_node
4117121 ± 39% -81.4% 766207 ± 7% numa-numastat.node2.numa_hit
1861867 ± 84% -62.8% 692093 ± 6% numa-numastat.node3.local_node
1942475 ± 80% -59.9% 778947 ± 5% numa-numastat.node3.numa_hit
3.67 ± 4% +4.2 7.85 ± 4% mpstat.cpu.all.idle%
0.88 ± 17% +1.0 1.90 ± 35% mpstat.cpu.all.iowait%
0.88 +0.2 1.06 mpstat.cpu.all.irq%
0.14 ± 10% +0.1 0.21 ± 11% mpstat.cpu.all.soft%
82.08 -34.6 47.48 ± 8% mpstat.cpu.all.sys%
12.35 +29.2 41.51 ± 10% mpstat.cpu.all.usr%
4.00 +175.0% 11.00 ± 7% vmstat.cpu.id
81.67 -44.1% 45.67 ± 8% vmstat.cpu.sy
12.00 +227.8% 39.33 ± 10% vmstat.cpu.us
16429426 -22.0% 12818057 vmstat.memory.cache
182.33 -10.4% 163.33 vmstat.procs.r
509931 ± 9% +119.5% 1119546 ± 25% vmstat.system.cs
2966302 ± 8% -89.8% 303438 ± 23% meminfo.Active
20863 ± 33% -74.0% 5428 meminfo.Active(anon)
2945438 ± 8% -89.9% 298009 ± 24% meminfo.Active(file)
180004 ± 2% -49.7% 90484 ± 5% meminfo.AnonHugePages
16278300 -20.6% 12927130 meminfo.Cached
24189299 -18.6% 19701239 meminfo.Memused
5160427 -23.6% 3940730 meminfo.PageTables
153951 ± 2% -26.3% 113466 meminfo.Shmem
24513374 -14.7% 20907068 meminfo.max_used_kB
2755746 ± 8% -72.3% 762359 ± 59% turbostat.C1
4033941 ± 8% -58.5% 1674613 ± 32% turbostat.C1E
2.54 ± 7% +2.8 5.35 ± 4% turbostat.C1E%
0.22 ± 40% +1.3 1.53 ± 95% turbostat.C6%
2.82 ± 4% +139.6% 6.76 ± 17% turbostat.CPU%c1
0.07 ± 11% +304.8% 0.28 ± 3% turbostat.CPU%c6
0.05 +9800.0% 4.95 ±136% turbostat.IPC
79897140 -76.2% 19045918 ± 13% turbostat.IRQ
37324544 ± 8% -36.7% 23632422 ± 33% turbostat.POLL
1.37 ± 9% +1.4 2.78 ± 31% turbostat.POLL%
51.33 +2.6% 52.67 turbostat.PkgTmp
254.83 +22.1% 311.09 ± 2% turbostat.PkgWatt
137924 ± 9% -49.1% 70157 ± 20% numa-meminfo.node0.AnonHugePages
355569 ± 8% +649.5% 2665047 ±108% numa-meminfo.node0.Inactive
1305569 ± 2% -23.2% 1003009 numa-meminfo.node0.PageTables
1287418 ± 2% -21.9% 1005960 numa-meminfo.node1.PageTables
1758487 ± 54% -92.5% 132350 ± 85% numa-meminfo.node2.Active
1757440 ± 54% -92.5% 131820 ± 86% numa-meminfo.node2.Active(file)
27560 ± 45% +38.2% 38100 ± 13% numa-meminfo.node2.AnonPages
8204369 ± 46% -64.0% 2956327 ± 76% numa-meminfo.node2.FilePages
37157 ± 20% -38.1% 22997 ± 40% numa-meminfo.node2.KReclaimable
10086474 ± 38% -54.6% 4583950 ± 48% numa-meminfo.node2.MemUsed
1265170 -21.8% 988848 numa-meminfo.node2.PageTables
37157 ± 20% -38.1% 22997 ± 40% numa-meminfo.node2.SReclaimable
78887 ±141% -100.0% 12.00 ±118% numa-meminfo.node2.Unevictable
14660 ± 37% -99.2% 124.33 ± 24% numa-meminfo.node3.Active(anon)
28679 ±119% -95.6% 1269 ± 5% numa-meminfo.node3.AnonPages
76530 ± 69% -96.5% 2646 ± 18% numa-meminfo.node3.AnonPages.max
51139 ± 71% -89.1% 5573 ± 83% numa-meminfo.node3.Inactive(anon)
1286708 ± 2% -22.9% 991696 numa-meminfo.node3.PageTables
37154 ± 10% -91.5% 3148 ±129% numa-meminfo.node3.Shmem
323291 -23.7% 246709 numa-vmstat.node0.nr_page_table_pages
318784 ± 2% -22.3% 247722 ± 3% numa-vmstat.node1.nr_page_table_pages
427728 ± 53% -92.1% 33763 ±101% numa-vmstat.node2.nr_active_file
6907 ± 45% +37.8% 9516 ± 13% numa-vmstat.node2.nr_anon_pages
313337 -22.3% 243536 ± 2% numa-vmstat.node2.nr_page_table_pages
9253 ± 20% -37.7% 5769 ± 40% numa-vmstat.node2.nr_slab_reclaimable
19721 ±141% -100.0% 3.00 ±118% numa-vmstat.node2.nr_unevictable
427713 ± 53% -92.1% 33773 ±101% numa-vmstat.node2.nr_zone_active_file
19721 ±141% -100.0% 3.00 ±118% numa-vmstat.node2.nr_zone_unevictable
4117235 ± 39% -81.4% 765901 ± 7% numa-vmstat.node2.numa_hit
4037029 ± 40% -82.9% 691422 ± 7% numa-vmstat.node2.numa_local
3813 ± 37% -99.2% 30.67 ± 24% numa-vmstat.node3.nr_active_anon
7118 ±119% -95.5% 319.67 ± 4% numa-vmstat.node3.nr_anon_pages
12533 ± 71% -88.9% 1394 ± 82% numa-vmstat.node3.nr_inactive_anon
318650 ± 2% -23.4% 244204 ± 2% numa-vmstat.node3.nr_page_table_pages
9236 ± 11% -91.5% 786.67 ±129% numa-vmstat.node3.nr_shmem
3813 ± 37% -99.2% 30.67 ± 24% numa-vmstat.node3.nr_zone_active_anon
12533 ± 71% -88.9% 1394 ± 82% numa-vmstat.node3.nr_zone_inactive_anon
1942945 ± 80% -59.9% 779066 ± 5% numa-vmstat.node3.numa_hit
1862337 ± 84% -62.8% 692212 ± 6% numa-vmstat.node3.numa_local
5229 ± 31% -74.1% 1356 proc-vmstat.nr_active_anon
734568 ± 8% -86.7% 98029 ± 20% proc-vmstat.nr_active_file
78115 -2.6% 76120 proc-vmstat.nr_anon_pages
4067457 -19.8% 3262058 proc-vmstat.nr_file_pages
43384997 +2.5% 44462783 proc-vmstat.nr_free_pages
111285 -7.1% 103410 proc-vmstat.nr_inactive_anon
2728035 -5.8% 2569881 proc-vmstat.nr_inactive_file
2741812 -4.6% 2616807 proc-vmstat.nr_mapped
1288800 ± 2% -22.7% 996178 proc-vmstat.nr_page_table_pages
38433 ± 2% -26.2% 28368 proc-vmstat.nr_shmem
41758 -6.1% 39207 proc-vmstat.nr_slab_reclaimable
79625 -2.7% 77448 proc-vmstat.nr_slab_unreclaimable
5229 ± 31% -74.1% 1356 proc-vmstat.nr_zone_active_anon
734568 ± 8% -86.7% 98029 ± 20% proc-vmstat.nr_zone_active_file
111285 -7.1% 103410 proc-vmstat.nr_zone_inactive_anon
2728035 -5.8% 2569881 proc-vmstat.nr_zone_inactive_file
29202 ± 11% -29.6% 20544 ± 20% proc-vmstat.numa_hint_faults
8906395 -63.8% 3221607 ± 2% proc-vmstat.numa_hit
8647246 -65.8% 2961022 ± 2% proc-vmstat.numa_local
33133 ± 4% -79.5% 6790 ± 29% proc-vmstat.numa_pages_migrated
115948 ± 9% -56.1% 50895 ± 29% proc-vmstat.numa_pte_updates
8910054 -1.5% 8776609 proc-vmstat.pgalloc_normal
2.246e+08 -46.6% 1.199e+08 ± 13% proc-vmstat.pgfault
8615822 -1.7% 8473592 proc-vmstat.pgfree
1799 ± 10% -99.8% 4.00 ± 35% proc-vmstat.pgmajfault
33133 ± 4% -79.5% 6790 ± 29% proc-vmstat.pgmigrate_success
2108 -1.3% 2081 proc-vmstat.pgpgout
53952 -63.2% 19870 ± 4% proc-vmstat.pgreuse
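The %change column throughout these tables is the relative change of the new commit's mean against the parent commit's mean (the ± figures are per-commit stddev%). A minimal sketch of that formula, checked against two proc-vmstat rows above (the function name is illustrative, not part of lkp-tests):

```python
# Illustrative helper (not lkp-tests code): the %change column is the
# relative change of the new mean versus the old (parent-commit) mean.

def pct_change(old: float, new: float) -> float:
    """Relative change in percent, as shown in the %change column."""
    return (new - old) / old * 100.0

# Rows from proc-vmstat above:
#   numa_hit:    8906395 -> 3221607, shown as -63.8%
#   pgmajfault:  1799    -> 4.00,    shown as -99.8%
```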
68595 ± 40% -99.9% 67.08 ±141% sched_debug.cfs_rq:/.MIN_vruntime.avg
5806823 ± 32% -99.9% 7023 ±141% sched_debug.cfs_rq:/.MIN_vruntime.max
620191 ± 35% -99.9% 655.14 ±141% sched_debug.cfs_rq:/.MIN_vruntime.stddev
0.65 ± 15% -89.8% 0.07 ± 9% sched_debug.cfs_rq:/.h_nr_running.avg
1.64 ± 6% -39.0% 1.00 sched_debug.cfs_rq:/.h_nr_running.max
0.22 ± 5% +12.5% 0.25 ± 4% sched_debug.cfs_rq:/.h_nr_running.stddev
5552 ± 12% -69.5% 1695 ± 5% sched_debug.cfs_rq:/.load.avg
13.57 ± 19% +34.1% 18.20 ± 2% sched_debug.cfs_rq:/.load_avg.avg
66.62 ± 42% +62.5% 108.26 ± 2% sched_debug.cfs_rq:/.load_avg.stddev
68595 ± 40% -99.9% 67.08 ±141% sched_debug.cfs_rq:/.max_vruntime.avg
5806823 ± 32% -99.9% 7023 ±141% sched_debug.cfs_rq:/.max_vruntime.max
620191 ± 35% -99.9% 655.14 ±141% sched_debug.cfs_rq:/.max_vruntime.stddev
13104524 ± 19% -99.9% 15496 ± 30% sched_debug.cfs_rq:/.min_vruntime.avg
13419952 ± 18% -99.7% 38089 ± 18% sched_debug.cfs_rq:/.min_vruntime.max
11781278 ± 16% -100.0% 4704 ± 36% sched_debug.cfs_rq:/.min_vruntime.min
239246 ± 22% -97.9% 5113 ± 11% sched_debug.cfs_rq:/.min_vruntime.stddev
0.64 ± 15% -89.6% 0.07 ± 9% sched_debug.cfs_rq:/.nr_running.avg
0.20 ± 11% +26.5% 0.25 ± 4% sched_debug.cfs_rq:/.nr_running.stddev
3.29 ± 17% +58.9% 5.23 ± 3% sched_debug.cfs_rq:/.removed.load_avg.avg
363.44 ± 29% +175.1% 1000 ± 3% sched_debug.cfs_rq:/.removed.load_avg.max
33.82 ± 22% +113.2% 72.10 ± 3% sched_debug.cfs_rq:/.removed.load_avg.stddev
180.33 ± 24% +79.1% 323.00 ± 36% sched_debug.cfs_rq:/.removed.runnable_avg.max
13.81 ± 30% +68.6% 23.29 ± 36% sched_debug.cfs_rq:/.removed.runnable_avg.stddev
180.33 ± 24% +79.1% 323.00 ± 36% sched_debug.cfs_rq:/.removed.util_avg.max
13.81 ± 30% +68.6% 23.29 ± 36% sched_debug.cfs_rq:/.removed.util_avg.stddev
603.68 ± 16% -78.0% 132.63 sched_debug.cfs_rq:/.runnable_avg.avg
66.03 ± 51% -100.0% 0.00 sched_debug.cfs_rq:/.runnable_avg.min
165.10 ± 13% +65.5% 273.26 ± 2% sched_debug.cfs_rq:/.runnable_avg.stddev
-173513 -93.7% -10949 sched_debug.cfs_rq:/.spread0.avg
145970 ± 52% -92.0% 11642 ± 76% sched_debug.cfs_rq:/.spread0.max
-1500915 -98.6% -21741 sched_debug.cfs_rq:/.spread0.min
240582 ± 22% -97.9% 5113 ± 11% sched_debug.cfs_rq:/.spread0.stddev
597.31 ± 16% -77.8% 132.59 sched_debug.cfs_rq:/.util_avg.avg
65.36 ± 51% -100.0% 0.00 sched_debug.cfs_rq:/.util_avg.min
159.11 ± 15% +71.7% 273.19 ± 2% sched_debug.cfs_rq:/.util_avg.stddev
527.10 ± 16% -97.8% 11.48 ± 5% sched_debug.cfs_rq:/.util_est_enqueued.avg
1208 ± 13% -31.5% 828.00 sched_debug.cfs_rq:/.util_est_enqueued.max
144.31 ± 11% -45.9% 78.03 sched_debug.cfs_rq:/.util_est_enqueued.stddev
339313 ± 25% +142.8% 823971 sched_debug.cpu.avg_idle.avg
935254 ± 7% +21.5% 1136425 ± 8% sched_debug.cpu.avg_idle.max
22494 ± 99% -93.8% 1391 ± 20% sched_debug.cpu.avg_idle.min
146018 ± 28% +77.2% 258706 ± 2% sched_debug.cpu.avg_idle.stddev
126960 ± 13% -62.4% 47674 ± 2% sched_debug.cpu.clock.avg
127006 ± 13% -62.5% 47684 ± 2% sched_debug.cpu.clock.max
126910 ± 13% -62.4% 47665 ± 2% sched_debug.cpu.clock.min
26.99 ± 13% -77.9% 5.98 ± 2% sched_debug.cpu.clock.stddev
126018 ± 13% -62.3% 47544 ± 2% sched_debug.cpu.clock_task.avg
126163 ± 13% -62.2% 47659 ± 2% sched_debug.cpu.clock_task.max
118809 ± 11% -67.8% 38275 ± 2% sched_debug.cpu.clock_task.min
4965 ± 16% -93.4% 327.73 ± 17% sched_debug.cpu.curr->pid.avg
8887 ± 2% -40.1% 5324 sched_debug.cpu.curr->pid.max
0.00 ± 14% -73.5% 0.00 ± 4% sched_debug.cpu.next_balance.stddev
0.64 ± 15% -89.5% 0.07 ± 16% sched_debug.cpu.nr_running.avg
1.64 ± 6% -39.0% 1.00 sched_debug.cpu.nr_running.max
262669 ± 6% -99.1% 2383 ± 2% sched_debug.cpu.nr_switches.avg
339518 ± 7% -94.9% 17226 ± 23% sched_debug.cpu.nr_switches.max
133856 ± 14% -99.5% 608.33 ± 23% sched_debug.cpu.nr_switches.min
73154 ± 15% -96.4% 2659 ± 10% sched_debug.cpu.nr_switches.stddev
2.33e+09 +12.3% 2.617e+09 ± 5% sched_debug.cpu.nr_uninterruptible.avg
126908 ± 13% -62.4% 47668 ± 2% sched_debug.cpu_clk
125894 ± 13% -62.9% 46652 ± 2% sched_debug.ktime
129346 ± 10% -62.8% 48145 ± 2% sched_debug.sched_clk
12.39 ± 8% -57.1% 5.31 perf-stat.i.MPKI
3.154e+10 ± 2% +307.2% 1.284e+11 ± 11% perf-stat.i.branch-instructions
0.16 ± 4% -0.1 0.05 ± 14% perf-stat.i.branch-miss-rate%
39015811 ± 4% +46.0% 56949389 ± 11% perf-stat.i.branch-misses
27.47 -15.6 11.85 ± 9% perf-stat.i.cache-miss-rate%
1.869e+08 +27.5% 2.382e+08 ± 2% perf-stat.i.cache-misses
6.927e+08 +197.5% 2.06e+09 ± 7% perf-stat.i.cache-references
532481 ± 9% +129.5% 1221898 ± 26% perf-stat.i.context-switches
11.00 ± 10% -86.7% 1.46 ± 11% perf-stat.i.cpi
5.805e+11 -1.8% 5.703e+11 perf-stat.i.cpu-cycles
1227 ± 11% +159.7% 3188 ± 26% perf-stat.i.cpu-migrations
3324 -27.0% 2426 ± 2% perf-stat.i.cycles-between-cache-misses
0.02 ± 2% -0.0 0.01 ± 7% perf-stat.i.dTLB-load-miss-rate%
6130936 +109.1% 12821616 ± 2% perf-stat.i.dTLB-load-misses
2.569e+10 ± 2% +299.8% 1.027e+11 ± 11% perf-stat.i.dTLB-loads
0.02 ± 3% -0.0 0.02 ± 7% perf-stat.i.dTLB-store-miss-rate%
1267107 ± 3% +135.4% 2983372 ± 3% perf-stat.i.dTLB-store-misses
4.635e+09 ± 2% +270.6% 1.718e+10 ± 10% perf-stat.i.dTLB-stores
6815222 ± 3% +53.9% 10485254 ± 16% perf-stat.i.iTLB-load-misses
9.9e+10 ± 2% +299.0% 3.95e+11 ± 10% perf-stat.i.instructions
13347 +197.8% 39746 ± 28% perf-stat.i.instructions-per-iTLB-miss
0.17 ± 2% +303.5% 0.70 ± 11% perf-stat.i.ipc
177593 ± 6% +123.8% 397415 ± 20% perf-stat.i.major-faults
3.02 -1.7% 2.97 perf-stat.i.metric.GHz
325.59 ± 2% +300.2% 1303 ± 11% perf-stat.i.metric.M/sec
926874 ± 2% +129.5% 2126907 ± 3% perf-stat.i.minor-faults
95.92 -1.3 94.58 perf-stat.i.node-load-miss-rate%
1082968 ± 3% +42.9% 1547522 ± 4% perf-stat.i.node-loads
98.43 -1.8 96.63 perf-stat.i.node-store-miss-rate%
135314 ± 21% +108.5% 282144 ± 23% perf-stat.i.node-stores
1104467 ± 3% +128.6% 2524323 ± 3% perf-stat.i.page-faults
7.13 -26.5% 5.24 ± 3% perf-stat.overall.MPKI
0.12 ± 3% -0.1 0.05 ± 20% perf-stat.overall.branch-miss-rate%
27.42 -15.8 11.66 ± 9% perf-stat.overall.cache-miss-rate%
6.00 -75.6% 1.46 ± 11% perf-stat.overall.cpi
3065 -22.0% 2389 ± 2% perf-stat.overall.cycles-between-cache-misses
0.02 -0.0 0.01 ± 8% perf-stat.overall.dTLB-load-miss-rate%
0.03 -0.0 0.02 ± 7% perf-stat.overall.dTLB-store-miss-rate%
14472 +172.7% 39462 ± 28% perf-stat.overall.instructions-per-iTLB-miss
0.17 +316.0% 0.69 ± 12% perf-stat.overall.ipc
98.95 -1.7 97.30 perf-stat.overall.node-store-miss-rate%
3605 -8.3% 3305 perf-stat.overall.path-length
3.058e+10 +308.8% 1.25e+11 ± 10% perf-stat.ps.branch-instructions
37933954 ± 4% +46.3% 55497860 ± 11% perf-stat.ps.branch-misses
1.88e+08 +23.7% 2.325e+08 ± 2% perf-stat.ps.cache-misses
6.856e+08 +192.8% 2.008e+09 ± 7% perf-stat.ps.cache-references
514435 ± 8% +131.2% 1189347 ± 26% perf-stat.ps.context-switches
190639 -1.9% 187097 perf-stat.ps.cpu-clock
5.764e+11 -3.7% 5.553e+11 perf-stat.ps.cpu-cycles
1188 ± 11% +162.8% 3124 ± 26% perf-stat.ps.cpu-migrations
6014890 +107.4% 12475043 ± 2% perf-stat.ps.dTLB-load-misses
2.494e+10 +300.9% 9.998e+10 ± 10% perf-stat.ps.dTLB-loads
1226138 +136.8% 2902969 ± 2% perf-stat.ps.dTLB-store-misses
4.503e+09 +271.6% 1.673e+10 ± 9% perf-stat.ps.dTLB-stores
6644280 ± 2% +53.6% 10202828 ± 16% perf-stat.ps.iTLB-load-misses
9.61e+10 +300.1% 3.845e+11 ± 10% perf-stat.ps.instructions
171562 ± 6% +125.4% 386762 ± 21% perf-stat.ps.major-faults
895259 +131.2% 2069711 ± 3% perf-stat.ps.minor-faults
1077758 ± 2% +39.9% 1508290 ± 4% perf-stat.ps.node-loads
12361604 ± 3% -15.9% 10395011 ± 15% perf-stat.ps.node-store-misses
131225 ± 22% +109.1% 274420 ± 23% perf-stat.ps.node-stores
1066822 ± 2% +130.3% 2456474 ± 2% perf-stat.ps.page-faults
190639 -1.9% 187097 perf-stat.ps.task-clock
1.742e+13 -8.3% 1.597e+13 perf-stat.total.instructions
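The perf-stat.overall metrics are derived from the raw counters reported above; a quick sanity-check sketch using the old-kernel .ps rows. Note this is an inference from the numbers, not a statement about lkp internals: MPKI here only matches if it is computed from cache-references rather than cache-misses, and since the table values are rounded, the checks can only be approximate.

```python
# Sanity-check sketch (not lkp-tests code): reproduce the derived
# perf-stat.overall metrics from the rounded .ps counters above
# (old-kernel column).

cycles       = 5.764e11  # perf-stat.ps.cpu-cycles
instructions = 9.61e10   # perf-stat.ps.instructions
cache_refs   = 6.856e8   # perf-stat.ps.cache-references
cache_misses = 1.88e8    # perf-stat.ps.cache-misses

cpi  = cycles / instructions             # ~6.00 (perf-stat.overall.cpi)
ipc  = instructions / cycles             # ~0.17 (perf-stat.overall.ipc)
mpki = cache_refs / instructions * 1000  # ~7.13 (perf-stat.overall.MPKI)
cbcm = cycles / cache_misses             # ~3065 (cycles-between-cache-misses)
```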
99.45 -99.4 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.munmap
99.45 -99.4 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
99.45 -99.4 0.00 perf-profile.calltrace.cycles-pp.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
99.45 -99.4 0.00 perf-profile.calltrace.cycles-pp.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
99.45 -99.4 0.00 perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
99.45 -99.4 0.00 perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
99.45 -99.4 0.00 perf-profile.calltrace.cycles-pp.munmap
99.44 -99.4 0.00 perf-profile.calltrace.cycles-pp.unmap_vmas.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
99.44 -99.4 0.00 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.__do_munmap.__vm_munmap
99.44 -99.2 0.20 ±141% perf-profile.calltrace.cycles-pp.zap_pmd_range.unmap_page_range.unmap_vmas.unmap_region.__do_munmap
99.42 -99.2 0.20 ±141% perf-profile.calltrace.cycles-pp.zap_pte_range.zap_pmd_range.unmap_page_range.unmap_vmas.unmap_region
80.45 -80.4 0.00 perf-profile.calltrace.cycles-pp.folio_mark_accessed.zap_pte_range.zap_pmd_range.unmap_page_range.unmap_vmas
50.41 ± 2% -50.4 0.00 perf-profile.calltrace.cycles-pp.workingset_activation.folio_mark_accessed.zap_pte_range.zap_pmd_range.unmap_page_range
49.47 ± 2% -49.5 0.00 perf-profile.calltrace.cycles-pp.workingset_age_nonresident.workingset_activation.folio_mark_accessed.zap_pte_range.zap_pmd_range
25.91 ± 3% -25.9 0.00 perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.folio_mark_accessed.zap_pte_range.zap_pmd_range.unmap_page_range
24.60 ± 3% -24.6 0.00 perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.pagevec_lru_move_fn.folio_mark_accessed.zap_pte_range.zap_pmd_range
24.59 ± 3% -24.6 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.pagevec_lru_move_fn.folio_mark_accessed.zap_pte_range
24.53 ± 3% -24.5 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.pagevec_lru_move_fn.folio_mark_accessed
8.21 ± 4% -8.2 0.00 perf-profile.calltrace.cycles-pp.tlb_flush_mmu.zap_pte_range.zap_pmd_range.unmap_page_range.unmap_vmas
6.14 ± 4% -6.1 0.00 perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu.zap_pte_range.zap_pmd_range.unmap_page_range
5.40 ± 4% -5.4 0.00 perf-profile.calltrace.cycles-pp.page_remove_rmap.zap_pte_range.zap_pmd_range.unmap_page_range.unmap_vmas
0.00 +0.7 0.69 ± 13% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__close
0.00 +0.7 0.69 ± 13% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__close
0.00 +0.7 0.69 ± 13% perf-profile.calltrace.cycles-pp.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.__close
0.00 +0.7 0.69 ± 13% perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.__close
0.00 +0.7 0.69 ± 13% perf-profile.calltrace.cycles-pp.task_state.proc_pid_status.proc_single_show.seq_read_iter.seq_read
0.00 +0.9 0.89 ± 25% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
0.00 +0.9 0.89 ± 25% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt
0.00 +0.9 0.92 ± 33% perf-profile.calltrace.cycles-pp.io_serial_out.uart_console_write.serial8250_console_write.call_console_drivers.console_unlock
0.00 +0.9 0.92 ± 33% perf-profile.calltrace.cycles-pp._free_event.perf_event_release_kernel.perf_release.__fput.task_work_run
0.00 +0.9 0.92 ± 33% perf-profile.calltrace.cycles-pp.fork
0.00 +1.0 1.01 ± 15% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.00 +1.0 1.01 ± 15% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt
0.00 +1.1 1.08 ± 39% perf-profile.calltrace.cycles-pp.delay_tsc.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write
0.00 +1.1 1.09 ± 46% perf-profile.calltrace.cycles-pp.__close
0.00 +1.2 1.17 ± 34% perf-profile.calltrace.cycles-pp.load_elf_binary.search_binary_handler.exec_binprm.bprm_execve.do_execveat_common
0.00 +1.2 1.21 ± 35% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyin.copy_page_from_iter_atomic.generic_perform_write.__generic_file_write_iter
0.00 +1.2 1.21 ± 35% perf-profile.calltrace.cycles-pp.copy_page_from_iter_atomic.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write
0.00 +1.2 1.21 ± 35% perf-profile.calltrace.cycles-pp.copyin.copy_page_from_iter_atomic.generic_perform_write.__generic_file_write_iter.generic_file_write_iter
0.00 +1.2 1.24 ± 69% perf-profile.calltrace.cycles-pp.__fput.task_work_run.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode
0.00 +1.3 1.25 ± 4% perf-profile.calltrace.cycles-pp.sys_imageblit.drm_fbdev_fb_imageblit.bit_putcs.fbcon_putcs.fbcon_redraw
0.00 +1.3 1.25 ± 4% perf-profile.calltrace.cycles-pp.con_scroll.lf.vt_console_print.call_console_drivers.console_unlock
0.00 +1.3 1.25 ± 4% perf-profile.calltrace.cycles-pp.vt_console_print.call_console_drivers.console_unlock.vprintk_emit.devkmsg_emit
0.00 +1.3 1.25 ± 4% perf-profile.calltrace.cycles-pp.lf.vt_console_print.call_console_drivers.console_unlock.vprintk_emit
0.00 +1.3 1.25 ± 4% perf-profile.calltrace.cycles-pp.fbcon_scroll.con_scroll.lf.vt_console_print.call_console_drivers
0.00 +1.3 1.25 ± 4% perf-profile.calltrace.cycles-pp.fbcon_redraw.fbcon_scroll.con_scroll.lf.vt_console_print
0.00 +1.3 1.25 ± 4% perf-profile.calltrace.cycles-pp.fbcon_putcs.fbcon_redraw.fbcon_scroll.con_scroll.lf
0.00 +1.3 1.25 ± 4% perf-profile.calltrace.cycles-pp.bit_putcs.fbcon_putcs.fbcon_redraw.fbcon_scroll.con_scroll
0.00 +1.3 1.25 ± 4% perf-profile.calltrace.cycles-pp.drm_fbdev_fb_imageblit.bit_putcs.fbcon_putcs.fbcon_redraw.fbcon_scroll
0.00 +1.3 1.27 ± 44% perf-profile.calltrace.cycles-pp.seq_read_iter.proc_reg_read_iter.new_sync_read.vfs_read.ksys_read
0.00 +1.3 1.27 ± 44% perf-profile.calltrace.cycles-pp.proc_reg_read_iter.new_sync_read.vfs_read.ksys_read.do_syscall_64
0.00 +1.3 1.28 ± 20% perf-profile.calltrace.cycles-pp.search_binary_handler.exec_binprm.bprm_execve.do_execveat_common.__x64_sys_execve
0.00 +1.3 1.28 ± 20% perf-profile.calltrace.cycles-pp.exec_binprm.bprm_execve.do_execveat_common.__x64_sys_execve.do_syscall_64
0.00 +1.4 1.38 ± 76% perf-profile.calltrace.cycles-pp.task_work_run.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64
0.00 +1.5 1.50 ± 14% perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.00 +1.5 1.50 ± 14% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
0.00 +1.6 1.55 ± 41% perf-profile.calltrace.cycles-pp.bprm_execve.do_execveat_common.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.6 1.64 ± 45% perf-profile.calltrace.cycles-pp.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.6 1.64 ± 45% perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.6 1.64 ± 45% perf-profile.calltrace.cycles-pp.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.7 1.68 ± 19% perf-profile.calltrace.cycles-pp.fnmatch
0.00 +1.7 1.68 ± 46% perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.7 1.69 ± 49% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.execve
0.00 +1.7 1.69 ± 49% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
0.00 +1.7 1.69 ± 49% perf-profile.calltrace.cycles-pp.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
0.00 +1.7 1.69 ± 49% perf-profile.calltrace.cycles-pp.do_execveat_common.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
0.00 +1.7 1.69 ± 49% perf-profile.calltrace.cycles-pp.execve
0.00 +1.7 1.71 ± 65% perf-profile.calltrace.cycles-pp.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe._dl_catch_error
0.00 +1.7 1.71 ± 65% perf-profile.calltrace.cycles-pp.do_sys_openat2.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe._dl_catch_error
0.00 +1.9 1.88 ± 26% perf-profile.calltrace.cycles-pp.proc_pid_status.proc_single_show.seq_read_iter.seq_read.vfs_read
0.00 +1.9 1.88 ± 26% perf-profile.calltrace.cycles-pp.proc_single_show.seq_read_iter.seq_read.vfs_read.ksys_read
0.00 +2.1 2.10 ± 29% perf-profile.calltrace.cycles-pp.seq_read_iter.seq_read.vfs_read.ksys_read.do_syscall_64
0.00 +2.1 2.10 ± 29% perf-profile.calltrace.cycles-pp.seq_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +2.2 2.23 ± 46% perf-profile.calltrace.cycles-pp.smp_call_function_single.event_function_call.perf_event_release_kernel.perf_release.__fput
0.00 +2.3 2.30 ± 57% perf-profile.calltrace.cycles-pp.update_sg_lb_stats.update_sd_lb_stats.find_busiest_group.load_balance.newidle_balance
0.00 +2.3 2.30 ± 57% perf-profile.calltrace.cycles-pp.find_busiest_group.load_balance.newidle_balance.pick_next_task_fair.__schedule
0.00 +2.3 2.30 ± 57% perf-profile.calltrace.cycles-pp.update_sd_lb_stats.find_busiest_group.load_balance.newidle_balance.pick_next_task_fair
0.00 +2.3 2.33 ±103% perf-profile.calltrace.cycles-pp.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +2.4 2.43 ± 40% perf-profile.calltrace.cycles-pp.event_function_call.perf_event_release_kernel.perf_release.__fput.task_work_run
0.00 +2.6 2.60 ± 76% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
0.00 +2.6 2.60 ± 76% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +2.7 2.67 ± 16% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
0.00 +2.7 2.67 ± 16% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call
0.00 +2.7 2.69 ± 59% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe._dl_catch_error
0.00 +2.7 2.69 ± 59% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe._dl_catch_error
0.00 +2.9 2.93 ± 57% perf-profile.calltrace.cycles-pp._dl_catch_error
0.00 +3.7 3.66 ± 49% perf-profile.calltrace.cycles-pp.generic_file_write_iter.new_sync_write.vfs_write.ksys_write.do_syscall_64
0.00 +3.7 3.66 ± 49% perf-profile.calltrace.cycles-pp.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write.ksys_write
0.00 +3.7 3.66 ± 49% perf-profile.calltrace.cycles-pp.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write
0.00 +3.7 3.67 ± 56% perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_openat2.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +3.7 3.67 ± 56% perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_openat2.__x64_sys_openat.do_syscall_64
0.00 +3.7 3.68 ± 27% perf-profile.calltrace.cycles-pp.perf_release.__fput.task_work_run.do_exit.do_group_exit
0.00 +3.7 3.68 ± 27% perf-profile.calltrace.cycles-pp.perf_event_release_kernel.perf_release.__fput.task_work_run.do_exit
0.00 +3.8 3.78 ± 17% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
0.00 +3.8 3.78 ± 17% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
0.00 +3.8 3.78 ± 17% perf-profile.calltrace.cycles-pp.read
0.00 +3.8 3.78 ± 17% perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
0.00 +3.8 3.78 ± 17% perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
0.00 +4.2 4.22 ± 34% perf-profile.calltrace.cycles-pp.task_work_run.do_exit.do_group_exit.get_signal.arch_do_signal_or_restart
0.00 +4.2 4.22 ± 34% perf-profile.calltrace.cycles-pp.__fput.task_work_run.do_exit.do_group_exit.get_signal
0.00 +4.6 4.58 ± 34% perf-profile.calltrace.cycles-pp.do_group_exit.get_signal.arch_do_signal_or_restart.exit_to_user_mode_loop.exit_to_user_mode_prepare
0.00 +4.6 4.58 ± 34% perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.get_signal.arch_do_signal_or_restart.exit_to_user_mode_loop
0.00 +6.7 6.66 ± 45% perf-profile.calltrace.cycles-pp.io_serial_in.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write
0.00 +7.7 7.74 ± 43% perf-profile.calltrace.cycles-pp.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write.call_console_drivers
0.00 +7.7 7.74 ± 43% perf-profile.calltrace.cycles-pp.serial8250_console_putchar.uart_console_write.serial8250_console_write.call_console_drivers.console_unlock
0.00 +8.0 8.04 ± 75% perf-profile.calltrace.cycles-pp.memcpy_toio.drm_fb_helper_damage_blit.drm_fb_helper_damage_work.process_one_work.worker_thread
0.00 +8.1 8.15 ± 76% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
0.00 +8.1 8.15 ± 76% perf-profile.calltrace.cycles-pp.drm_fb_helper_damage_work.process_one_work.worker_thread.kthread.ret_from_fork
0.00 +8.1 8.15 ± 76% perf-profile.calltrace.cycles-pp.drm_fb_helper_damage_blit.drm_fb_helper_damage_work.process_one_work.worker_thread.kthread
0.00 +8.6 8.62 ± 68% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
0.00 +8.7 8.66 ± 41% perf-profile.calltrace.cycles-pp.uart_console_write.serial8250_console_write.call_console_drivers.console_unlock.vprintk_emit
0.00 +8.9 8.87 ± 67% perf-profile.calltrace.cycles-pp.ret_from_fork
0.00 +8.9 8.87 ± 67% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
0.00 +9.5 9.47 ± 39% perf-profile.calltrace.cycles-pp.serial8250_console_write.call_console_drivers.console_unlock.vprintk_emit.devkmsg_emit
0.00 +10.7 10.72 ± 35% perf-profile.calltrace.cycles-pp.console_unlock.vprintk_emit.devkmsg_emit.devkmsg_write.cold.new_sync_write
0.00 +10.7 10.72 ± 35% perf-profile.calltrace.cycles-pp.call_console_drivers.console_unlock.vprintk_emit.devkmsg_emit.devkmsg_write.cold
0.00 +16.3 16.26 ± 19% perf-profile.calltrace.cycles-pp.devkmsg_write.cold.new_sync_write.vfs_write.ksys_write.do_syscall_64
0.00 +16.3 16.26 ± 19% perf-profile.calltrace.cycles-pp.devkmsg_emit.devkmsg_write.cold.new_sync_write.vfs_write.ksys_write
0.00 +16.3 16.26 ± 19% perf-profile.calltrace.cycles-pp.vprintk_emit.devkmsg_emit.devkmsg_write.cold.new_sync_write.vfs_write
0.00 +20.0 20.03 ± 19% perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +20.1 20.14 ± 20% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.00 +20.3 20.27 ± 19% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
0.00 +20.3 20.27 ± 19% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.00 +20.3 20.27 ± 19% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.00 +20.3 20.27 ± 19% perf-profile.calltrace.cycles-pp.write
0.00 +34.4 34.37 ± 18% perf-profile.calltrace.cycles-pp.mwait_idle_with_hints.intel_idle.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call
0.00 +34.8 34.84 ± 18% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
0.00 +39.3 39.35 ± 19% perf-profile.calltrace.cycles-pp.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry.secondary_startup_64_no_verify
0.00 +39.3 39.35 ± 19% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry
0.00 +39.9 39.90 ± 19% perf-profile.calltrace.cycles-pp.cpuidle_idle_call.do_idle.cpu_startup_entry.secondary_startup_64_no_verify
0.00 +41.4 41.39 ± 16% perf-profile.calltrace.cycles-pp.cpu_startup_entry.secondary_startup_64_no_verify
0.00 +41.4 41.39 ± 16% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.secondary_startup_64_no_verify
0.00 +41.5 41.53 ± 16% perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
99.45 -99.4 0.00 perf-profile.children.cycles-pp.munmap
99.46 -99.3 0.14 ±141% perf-profile.children.cycles-pp.__vm_munmap
99.45 -99.3 0.14 ±141% perf-profile.children.cycles-pp.__x64_sys_munmap
99.47 -99.3 0.20 ±141% perf-profile.children.cycles-pp.unmap_region
99.47 -98.9 0.61 ± 82% perf-profile.children.cycles-pp.__do_munmap
99.45 -98.4 1.08 ± 39% perf-profile.children.cycles-pp.zap_pte_range
99.45 -98.4 1.08 ± 39% perf-profile.children.cycles-pp.unmap_page_range
99.45 -98.4 1.08 ± 39% perf-profile.children.cycles-pp.zap_pmd_range
99.45 -98.2 1.28 ± 20% perf-profile.children.cycles-pp.unmap_vmas
80.46 -80.5 0.00 perf-profile.children.cycles-pp.folio_mark_accessed
99.61 -62.0 37.61 ± 9% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
99.61 -62.0 37.61 ± 9% perf-profile.children.cycles-pp.do_syscall_64
50.42 ± 2% -50.4 0.00 perf-profile.children.cycles-pp.workingset_activation
49.67 ± 2% -49.7 0.00 perf-profile.children.cycles-pp.workingset_age_nonresident
26.03 ± 3% -26.0 0.00 perf-profile.children.cycles-pp.pagevec_lru_move_fn
24.79 ± 3% -24.8 0.00 perf-profile.children.cycles-pp.folio_lruvec_lock_irqsave
24.71 ± 3% -24.7 0.00 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
24.81 ± 3% -24.7 0.11 ±141% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
8.21 ± 4% -8.2 0.00 perf-profile.children.cycles-pp.tlb_flush_mmu
6.58 ± 3% -6.5 0.11 ±141% perf-profile.children.cycles-pp.release_pages
5.42 ± 4% -4.9 0.52 ± 99% perf-profile.children.cycles-pp.page_remove_rmap
0.00 +0.4 0.45 ± 25% perf-profile.children.cycles-pp._raw_spin_trylock
0.00 +0.4 0.45 ± 25% perf-profile.children.cycles-pp.__irq_exit_rcu
0.00 +0.4 0.45 ± 25% perf-profile.children.cycles-pp.setlocale
0.00 +0.4 0.45 ± 25% perf-profile.children.cycles-pp.__alloc_pages
0.00 +0.4 0.45 ± 25% perf-profile.children.cycles-pp.__softirqentry_text_start
0.00 +0.4 0.45 ± 25% perf-profile.children.cycles-pp.do_read_fault
0.00 +0.4 0.45 ± 25% perf-profile.children.cycles-pp.finish_fault
0.00 +0.4 0.45 ± 25% perf-profile.children.cycles-pp.kfree
0.00 +0.4 0.45 ± 25% perf-profile.children.cycles-pp.rebalance_domains
0.00 +0.4 0.45 ± 25% perf-profile.children.cycles-pp.single_release
0.10 ± 4% +0.5 0.58 ± 34% perf-profile.children.cycles-pp.scheduler_tick
0.00 +0.6 0.56 ± 19% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
0.00 +0.6 0.56 ± 19% perf-profile.children.cycles-pp.tick_nohz_next_event
0.00 +0.6 0.56 ± 19% perf-profile.children.cycles-pp.dup_mm
0.00 +0.6 0.56 ± 19% perf-profile.children.cycles-pp.dup_mmap
0.00 +0.6 0.56 ± 19% perf-profile.children.cycles-pp.filename_lookup
0.00 +0.6 0.56 ± 19% perf-profile.children.cycles-pp.menu_select
0.00 +0.6 0.56 ± 19% perf-profile.children.cycles-pp.path_lookupat
0.00 +0.6 0.56 ± 19% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.00 +0.6 0.56 ± 19% perf-profile.children.cycles-pp.user_path_at_empty
0.00 +0.6 0.58 ± 34% perf-profile.children.cycles-pp.page_counter_uncharge
0.00 +0.6 0.58 ± 34% perf-profile.children.cycles-pp.do_fault
0.00 +0.6 0.58 ± 34% perf-profile.children.cycles-pp.memcg_slab_post_alloc_hook
0.00 +0.6 0.58 ± 34% perf-profile.children.cycles-pp.mod_objcg_state
0.00 +0.6 0.58 ± 34% perf-profile.children.cycles-pp.obj_cgroup_uncharge_pages
0.00 +0.7 0.67 ± 36% perf-profile.children.cycles-pp.__do_sys_clone
0.00 +0.7 0.67 ± 36% perf-profile.children.cycles-pp.kernel_clone
0.00 +0.7 0.67 ± 36% perf-profile.children.cycles-pp.copy_process
0.00 +0.7 0.67 ± 36% perf-profile.children.cycles-pp.sw_perf_event_destroy
0.00 +0.7 0.69 ± 13% perf-profile.children.cycles-pp.dput
0.00 +0.7 0.69 ± 13% perf-profile.children.cycles-pp.__do_sys_newstat
0.00 +0.7 0.69 ± 13% perf-profile.children.cycles-pp.task_state
0.00 +0.7 0.69 ± 13% perf-profile.children.cycles-pp.vfs_statx
0.00 +0.7 0.72 ± 52% perf-profile.children.cycles-pp.__cond_resched
0.00 +0.7 0.72 ± 52% perf-profile.children.cycles-pp.memcg_slab_free_hook
0.13 +0.8 0.89 ± 25% perf-profile.children.cycles-pp.update_process_times
0.13 +0.8 0.89 ± 25% perf-profile.children.cycles-pp.tick_sched_handle
0.24 +0.8 1.01 ± 15% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.00 +0.9 0.86 ± 65% perf-profile.children.cycles-pp.kmem_cache_alloc
0.14 +0.9 1.01 ± 15% perf-profile.children.cycles-pp.tick_sched_timer
0.00 +0.9 0.89 ± 25% perf-profile.children.cycles-pp.__close
0.00 +0.9 0.92 ± 33% perf-profile.children.cycles-pp._free_event
0.00 +0.9 0.92 ± 45% perf-profile.children.cycles-pp.begin_new_exec
0.00 +0.9 0.92 ± 45% perf-profile.children.cycles-pp.exec_mmap
0.00 +1.0 1.03 ± 44% perf-profile.children.cycles-pp.io_serial_out
0.00 +1.1 1.07 ± 53% perf-profile.children.cycles-pp.shmem_getpage_gfp
0.00 +1.1 1.07 ± 53% perf-profile.children.cycles-pp.shmem_write_begin
0.00 +1.1 1.08 ± 39% perf-profile.children.cycles-pp.delay_tsc
0.00 +1.1 1.12 ± 19% perf-profile.children.cycles-pp.fork
0.02 ±141% +1.2 1.17 ± 34% perf-profile.children.cycles-pp.load_elf_binary
0.00 +1.2 1.18 ± 49% perf-profile.children.cycles-pp.vsnprintf
0.00 +1.2 1.18 ± 49% perf-profile.children.cycles-pp.seq_printf
0.00 +1.2 1.21 ± 35% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
0.00 +1.2 1.21 ± 35% perf-profile.children.cycles-pp.copy_page_from_iter_atomic
0.00 +1.2 1.21 ± 35% perf-profile.children.cycles-pp.copyin
0.05 +1.2 1.28 ± 20% perf-profile.children.cycles-pp._raw_spin_lock
0.00 +1.3 1.25 ± 4% perf-profile.children.cycles-pp.sys_imageblit
0.00 +1.3 1.25 ± 4% perf-profile.children.cycles-pp.con_scroll
0.00 +1.3 1.25 ± 4% perf-profile.children.cycles-pp.vt_console_print
0.00 +1.3 1.25 ± 4% perf-profile.children.cycles-pp.lf
0.00 +1.3 1.25 ± 4% perf-profile.children.cycles-pp.fbcon_scroll
0.00 +1.3 1.25 ± 4% perf-profile.children.cycles-pp.fbcon_redraw
0.00 +1.3 1.25 ± 4% perf-profile.children.cycles-pp.fbcon_putcs
0.00 +1.3 1.25 ± 4% perf-profile.children.cycles-pp.bit_putcs
0.00 +1.3 1.25 ± 4% perf-profile.children.cycles-pp.drm_fbdev_fb_imageblit
0.02 ±141% +1.3 1.28 ± 20% perf-profile.children.cycles-pp.search_binary_handler
0.02 ±141% +1.3 1.28 ± 20% perf-profile.children.cycles-pp.exec_binprm
0.00 +1.3 1.27 ± 44% perf-profile.children.cycles-pp.proc_reg_read_iter
0.34 +1.3 1.64 ± 21% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
0.33 ± 2% +1.3 1.64 ± 21% perf-profile.children.cycles-pp.hrtimer_interrupt
0.02 ±141% +1.5 1.55 ± 41% perf-profile.children.cycles-pp.bprm_execve
0.00 +1.6 1.61 ± 20% perf-profile.children.cycles-pp.handle_mm_fault
0.00 +1.6 1.61 ± 20% perf-profile.children.cycles-pp.__handle_mm_fault
0.02 ±141% +1.6 1.64 ± 45% perf-profile.children.cycles-pp.__x64_sys_exit_group
0.02 ±141% +1.7 1.69 ± 49% perf-profile.children.cycles-pp.__x64_sys_execve
0.02 ±141% +1.7 1.69 ± 49% perf-profile.children.cycles-pp.do_execveat_common
0.02 ±141% +1.7 1.69 ± 49% perf-profile.children.cycles-pp.execve
0.00 +1.7 1.68 ± 19% perf-profile.children.cycles-pp.fnmatch
0.00 +1.7 1.68 ± 46% perf-profile.children.cycles-pp.new_sync_read
0.00 +1.7 1.72 ± 80% perf-profile.children.cycles-pp.poll_idle
0.00 +1.8 1.76 ± 98% perf-profile.children.cycles-pp.__libc_start_main
0.00 +1.8 1.76 ± 98% perf-profile.children.cycles-pp.main
0.00 +1.8 1.76 ± 98% perf-profile.children.cycles-pp.run_builtin
0.00 +1.8 1.76 ± 98% perf-profile.children.cycles-pp.cmd_record
0.00 +1.8 1.76 ± 98% perf-profile.children.cycles-pp.__cmd_record
0.03 ±141% +1.8 1.80 ± 42% perf-profile.children.cycles-pp.mmput
0.03 ±141% +1.8 1.80 ± 42% perf-profile.children.cycles-pp.exit_mmap
0.00 +1.9 1.88 ± 26% perf-profile.children.cycles-pp.proc_pid_status
0.00 +1.9 1.88 ± 26% perf-profile.children.cycles-pp.proc_single_show
0.00 +2.0 2.01 ± 15% perf-profile.children.cycles-pp.exc_page_fault
0.00 +2.0 2.01 ± 15% perf-profile.children.cycles-pp.do_user_addr_fault
0.00 +2.1 2.10 ± 29% perf-profile.children.cycles-pp.seq_read
0.00 +2.2 2.23 ± 46% perf-profile.children.cycles-pp.smp_call_function_single
0.00 +2.4 2.43 ± 40% perf-profile.children.cycles-pp.event_function_call
0.36 +2.4 2.81 ± 22% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
0.00 +2.6 2.58 ± 65% perf-profile.children.cycles-pp.update_sg_lb_stats
0.00 +2.6 2.58 ± 65% perf-profile.children.cycles-pp.find_busiest_group
0.00 +2.6 2.58 ± 65% perf-profile.children.cycles-pp.update_sd_lb_stats
0.76 ± 5% +2.6 3.39 ± 23% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
0.00 +2.6 2.65 ± 82% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
0.00 +2.8 2.77 ± 23% perf-profile.children.cycles-pp.asm_exc_page_fault
0.00 +2.9 2.91 ± 61% perf-profile.children.cycles-pp.load_balance
0.00 +3.0 3.00 ± 46% perf-profile.children.cycles-pp.pick_next_task_fair
0.00 +3.0 3.02 ± 55% perf-profile.children.cycles-pp.newidle_balance
0.02 ±141% +3.3 3.29 ± 52% perf-profile.children.cycles-pp._dl_catch_error
0.02 ±141% +3.6 3.66 ± 49% perf-profile.children.cycles-pp.generic_file_write_iter
0.02 ±141% +3.6 3.66 ± 49% perf-profile.children.cycles-pp.__generic_file_write_iter
0.02 ±141% +3.6 3.66 ± 49% perf-profile.children.cycles-pp.generic_perform_write
0.00 +3.6 3.65 ± 19% perf-profile.children.cycles-pp.seq_read_iter
0.00 +3.7 3.67 ± 45% perf-profile.children.cycles-pp.__schedule
0.00 +3.7 3.68 ± 56% perf-profile.children.cycles-pp.do_filp_open
0.00 +3.7 3.68 ± 56% perf-profile.children.cycles-pp.path_openat
0.00 +3.7 3.68 ± 27% perf-profile.children.cycles-pp.perf_release
0.00 +3.7 3.68 ± 27% perf-profile.children.cycles-pp.perf_event_release_kernel
0.00 +3.8 3.78 ± 17% perf-profile.children.cycles-pp.ksys_read
0.00 +3.8 3.78 ± 17% perf-profile.children.cycles-pp.vfs_read
0.00 +3.9 3.92 ± 17% perf-profile.children.cycles-pp.read
0.00 +3.9 3.92 ± 56% perf-profile.children.cycles-pp.__x64_sys_openat
0.00 +3.9 3.92 ± 56% perf-profile.children.cycles-pp.do_sys_openat2
0.00 +4.6 4.58 ± 34% perf-profile.children.cycles-pp.arch_do_signal_or_restart
0.00 +4.6 4.58 ± 34% perf-profile.children.cycles-pp.get_signal
0.00 +5.5 5.46 ± 15% perf-profile.children.cycles-pp.__fput
0.00 +5.7 5.71 ± 16% perf-profile.children.cycles-pp.task_work_run
0.00 +6.1 6.07 ± 19% perf-profile.children.cycles-pp.exit_to_user_mode_loop
0.02 ±141% +6.2 6.22 ± 13% perf-profile.children.cycles-pp.do_group_exit
0.02 ±141% +6.2 6.22 ± 13% perf-profile.children.cycles-pp.do_exit
0.00 +6.3 6.27 ± 15% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.00 +7.4 7.36 ± 41% perf-profile.children.cycles-pp.io_serial_in
0.00 +7.7 7.74 ± 43% perf-profile.children.cycles-pp.serial8250_console_putchar
0.00 +8.1 8.15 ± 76% perf-profile.children.cycles-pp.process_one_work
0.00 +8.1 8.15 ± 76% perf-profile.children.cycles-pp.drm_fb_helper_damage_work
0.00 +8.1 8.15 ± 76% perf-profile.children.cycles-pp.drm_fb_helper_damage_blit
0.00 +8.1 8.15 ± 76% perf-profile.children.cycles-pp.memcpy_toio
0.00 +8.3 8.33 ± 40% perf-profile.children.cycles-pp.wait_for_xmitr
0.00 +8.6 8.62 ± 68% perf-profile.children.cycles-pp.worker_thread
0.00 +8.7 8.66 ± 41% perf-profile.children.cycles-pp.uart_console_write
0.02 ±141% +8.9 8.87 ± 67% perf-profile.children.cycles-pp.ret_from_fork
0.02 ±141% +8.9 8.87 ± 67% perf-profile.children.cycles-pp.kthread
0.00 +9.5 9.47 ± 39% perf-profile.children.cycles-pp.serial8250_console_write
0.00 +10.7 10.72 ± 35% perf-profile.children.cycles-pp.console_unlock
0.00 +10.7 10.72 ± 35% perf-profile.children.cycles-pp.call_console_drivers
0.00 +16.3 16.26 ± 19% perf-profile.children.cycles-pp.devkmsg_write.cold
0.00 +16.3 16.26 ± 19% perf-profile.children.cycles-pp.devkmsg_emit
0.00 +16.3 16.26 ± 19% perf-profile.children.cycles-pp.vprintk_emit
0.02 ±141% +20.0 20.03 ± 19% perf-profile.children.cycles-pp.new_sync_write
0.02 ±141% +20.1 20.14 ± 20% perf-profile.children.cycles-pp.vfs_write
0.03 ±141% +20.2 20.27 ± 19% perf-profile.children.cycles-pp.ksys_write
0.02 ±141% +20.2 20.27 ± 19% perf-profile.children.cycles-pp.write
0.32 ± 37% +34.7 34.98 ± 18% perf-profile.children.cycles-pp.intel_idle
0.32 ± 37% +34.7 34.98 ± 18% perf-profile.children.cycles-pp.mwait_idle_with_hints
0.33 ± 36% +39.2 39.48 ± 19% perf-profile.children.cycles-pp.cpuidle_enter
0.33 ± 36% +39.2 39.48 ± 19% perf-profile.children.cycles-pp.cpuidle_enter_state
0.33 ± 35% +39.7 40.04 ± 19% perf-profile.children.cycles-pp.cpuidle_idle_call
0.33 ± 35% +41.2 41.53 ± 16% perf-profile.children.cycles-pp.secondary_startup_64_no_verify
0.33 ± 35% +41.2 41.53 ± 16% perf-profile.children.cycles-pp.cpu_startup_entry
0.33 ± 35% +41.2 41.53 ± 16% perf-profile.children.cycles-pp.do_idle
49.51 ± 2% -49.5 0.00 perf-profile.self.cycles-pp.workingset_age_nonresident
24.71 ± 3% -24.7 0.00 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
6.52 ± 3% -6.5 0.00 perf-profile.self.cycles-pp.release_pages
5.28 ± 6% -4.9 0.36 ± 76% perf-profile.self.cycles-pp.zap_pte_range
0.00 +0.4 0.45 ± 25% perf-profile.self.cycles-pp.proc_pid_status
0.00 +0.4 0.45 ± 25% perf-profile.self.cycles-pp._raw_spin_trylock
0.00 +0.4 0.45 ± 25% perf-profile.self.cycles-pp.page_counter_uncharge
0.00 +0.6 0.56 ± 19% perf-profile.self.cycles-pp.__mod_memcg_lruvec_state
0.00 +1.0 1.03 ± 44% perf-profile.self.cycles-pp.io_serial_out
0.00 +1.1 1.08 ± 39% perf-profile.self.cycles-pp.delay_tsc
0.00 +1.2 1.21 ± 35% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
0.03 ± 70% +1.2 1.28 ± 20% perf-profile.self.cycles-pp._raw_spin_lock
0.00 +1.3 1.25 ± 4% perf-profile.self.cycles-pp.sys_imageblit
0.00 +1.7 1.68 ± 19% perf-profile.self.cycles-pp.fnmatch
0.00 +1.7 1.72 ± 80% perf-profile.self.cycles-pp.poll_idle
0.00 +2.1 2.12 ± 41% perf-profile.self.cycles-pp.smp_call_function_single
0.00 +2.4 2.44 ± 61% perf-profile.self.cycles-pp.update_sg_lb_stats
0.00 +7.4 7.36 ± 41% perf-profile.self.cycles-pp.io_serial_in
0.00 +8.1 8.15 ± 76% perf-profile.self.cycles-pp.memcpy_toio
0.32 ± 37% +34.7 34.98 ± 18% perf-profile.self.cycles-pp.mwait_idle_with_hints
***************************************************************************************************
lkp-csl-2ap4: 192 threads 4 sockets Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/debian-10.4-x86_64-20200603.cgz/300s/lkp-csl-2ap4/mmap-pread-seq-mt/vm-scalability/0x500320a
commit:
18788cfa23 ("mm: Support arbitrary THP sizes")
793917d997 ("mm/readahead: Add large folio readahead")
18788cfa23696774 793917d997df2e432f3e9ac126e
---------------- ---------------------------
%stddev %change %stddev
\ | \
47.76 ± 21% +120.9% 105.50 ± 18% vm-scalability.free_time
381902 +67.5% 639832 ± 9% vm-scalability.median
72780899 +64.8% 1.199e+08 ± 6% vm-scalability.throughput
461.08 +35.5% 624.74 vm-scalability.time.elapsed_time
461.08 +35.5% 624.74 vm-scalability.time.elapsed_time.max
140148 ± 7% +46.2% 204907 ± 13% vm-scalability.time.involuntary_context_switches
1.039e+10 +149.1% 2.588e+10 ± 5% vm-scalability.time.maximum_resident_set_size
3.901e+08 +35.8% 5.298e+08 ± 11% vm-scalability.time.minor_page_faults
11965 -27.0% 8731 ± 3% vm-scalability.time.percent_of_cpu_this_job_got
45372 -31.5% 31070 ± 13% vm-scalability.time.system_time
9799 +139.3% 23452 ± 20% vm-scalability.time.user_time
2.327e+10 +67.3% 3.895e+10 ± 3% vm-scalability.workload
3.17e+10 +97.0% 6.247e+10 ± 4% cpuidle..time
1.364e+08 ± 2% +52.6% 2.081e+08 ± 18% cpuidle..usage
511.67 +31.7% 673.93 uptime.boot
39763 +75.9% 69937 ± 3% uptime.idle
1650179 ± 6% +108.4% 3439319 ± 13% numa-numastat.node0.local_node
1714274 ± 7% +104.0% 3497283 ± 12% numa-numastat.node0.numa_hit
80709 ± 10% +1568.8% 1346869 ± 50% numa-numastat.node1.other_node
51833 ± 45% +3011.6% 1612852 ± 72% numa-numastat.node2.other_node
35.43 +16.4 51.83 ± 2% mpstat.cpu.all.idle%
1.15 +0.2 1.32 ± 3% mpstat.cpu.all.irq%
0.14 ± 2% -0.0 0.12 ± 12% mpstat.cpu.all.soft%
51.46 -25.4 26.05 ± 10% mpstat.cpu.all.sys%
11.22 +8.7 19.95 ± 21% mpstat.cpu.all.usr%
35.33 +44.3% 51.00 vmstat.cpu.id
11.00 +72.7% 19.00 ± 22% vmstat.cpu.us
39267912 +186.2% 1.124e+08 vmstat.memory.cache
1.463e+08 -61.9% 55690119 ± 3% vmstat.memory.free
121.33 -25.8% 90.00 ± 3% vmstat.procs.r
2004 -25.7% 1489 ± 2% turbostat.Avg_MHz
65.34 -16.4 48.90 ± 2% turbostat.Busy%
4391194 ± 3% -40.7% 2602910 ± 48% turbostat.C1
65661757 +90.5% 1.251e+08 ± 4% turbostat.C1E
33.68 +15.8 49.47 ± 3% turbostat.C1E%
34.63 +47.3% 51.01 ± 2% turbostat.CPU%c1
0.07 ± 7% +135.0% 0.16 ± 7% turbostat.IPC
2.636e+08 ± 7% +64.2% 4.327e+08 ± 41% turbostat.IRQ
206.94 -1.8% 203.27 turbostat.PkgWatt
18398499 +233.3% 61329324 ± 14% meminfo.Active
434070 ± 10% -41.0% 256027 ± 30% meminfo.Active(anon)
17964429 +240.0% 61073297 ± 14% meminfo.Active(file)
39197349 +185.9% 1.121e+08 meminfo.Cached
2720032 ± 3% -12.0% 2394908 meminfo.Committed_AS
18649788 +160.4% 48571839 ± 21% meminfo.Inactive
592314 ± 10% -23.6% 452265 ± 14% meminfo.Inactive(anon)
18057473 +166.5% 48119574 ± 21% meminfo.Inactive(file)
219940 +73.3% 381070 meminfo.KReclaimable
36119017 +201.2% 1.088e+08 meminfo.Mapped
1.812e+08 -9.5% 1.639e+08 meminfo.MemAvailable
1.461e+08 -61.8% 55768647 ± 3% meminfo.MemFree
51577716 +175.2% 1.419e+08 meminfo.Memused
10114995 +171.9% 27506491 ± 2% meminfo.PageTables
219940 +73.3% 381070 meminfo.SReclaimable
312390 +13.2% 353621 ± 2% meminfo.SUnreclaim
734611 ± 13% -41.7% 427960 ± 8% meminfo.Shmem
532331 +38.0% 734691 meminfo.Slab
55437706 +182.4% 1.566e+08 meminfo.max_used_kB
270259 ±112% +1573.3% 4522215 ± 16% numa-vmstat.node0.nr_active_file
946393 ± 35% +663.5% 7225435 ± 16% numa-vmstat.node0.nr_file_pages
10541034 ± 3% -69.1% 3255834 ± 25% numa-vmstat.node0.nr_free_pages
98102 ± 70% +2211.2% 2267326 ± 32% numa-vmstat.node0.nr_inactive_file
381079 ± 92% +1678.2% 6776300 ± 21% numa-vmstat.node0.nr_mapped
643599 ± 3% +153.2% 1629683 ± 22% numa-vmstat.node0.nr_page_table_pages
20984 ± 12% +51.4% 31778 ± 15% numa-vmstat.node0.nr_slab_reclaimable
270259 ±112% +1573.3% 4522216 ± 16% numa-vmstat.node0.nr_zone_active_file
98101 ± 70% +2211.2% 2267302 ± 32% numa-vmstat.node0.nr_zone_inactive_file
1713996 ± 7% +104.0% 3496544 ± 12% numa-vmstat.node0.numa_hit
1649901 ± 6% +108.4% 3438581 ± 13% numa-vmstat.node0.numa_local
708884 ±122% +466.2% 4013857 ± 7% numa-vmstat.node1.nr_active_file
1335766 ±119% +354.3% 6068220 ± 19% numa-vmstat.node1.nr_file_pages
10293220 ± 15% -58.1% 4317437 ± 20% numa-vmstat.node1.nr_free_pages
622687 ±117% +224.6% 2021489 ± 45% numa-vmstat.node1.nr_inactive_file
1331303 ±119% +348.9% 5976082 ± 20% numa-vmstat.node1.nr_mapped
642435 +189.7% 1860950 ± 16% numa-vmstat.node1.nr_page_table_pages
6047 ± 66% +206.0% 18505 ± 14% numa-vmstat.node1.nr_slab_reclaimable
16156 ± 3% +29.6% 20936 ± 11% numa-vmstat.node1.nr_slab_unreclaimable
708884 ±122% +466.2% 4013857 ± 7% numa-vmstat.node1.nr_zone_active_file
622684 ±117% +224.6% 2021474 ± 45% numa-vmstat.node1.nr_zone_inactive_file
80709 ± 10% +1568.8% 1346867 ± 50% numa-vmstat.node1.numa_other
629284 ± 7% +212.3% 1965297 ± 36% numa-vmstat.node2.nr_page_table_pages
51833 ± 45% +3011.6% 1612851 ± 72% numa-vmstat.node2.numa_other
76205 ± 42% -77.4% 17185 ±135% numa-vmstat.node3.nr_active_anon
2340058 ± 55% +93.0% 4516830 ± 11% numa-vmstat.node3.nr_active_file
27102 ± 87% -88.7% 3069 ± 50% numa-vmstat.node3.nr_anon_pages
4825949 ± 61% +73.4% 8366128 ± 6% numa-vmstat.node3.nr_file_pages
6778041 ± 42% -63.2% 2495049 ± 11% numa-vmstat.node3.nr_free_pages
61462 ± 36% -90.9% 5604 ± 6% numa-vmstat.node3.nr_inactive_anon
9125 ± 24% -25.3% 6816 ± 7% numa-vmstat.node3.nr_kernel_stack
614483 ± 6% +124.8% 1381443 ± 22% numa-vmstat.node3.nr_page_table_pages
110543 ± 34% -82.1% 19753 ±119% numa-vmstat.node3.nr_shmem
16802 ± 18% +54.9% 26023 ± 18% numa-vmstat.node3.nr_slab_reclaimable
76205 ± 42% -77.4% 17185 ±135% numa-vmstat.node3.nr_zone_active_anon
2340060 ± 55% +93.0% 4516833 ± 11% numa-vmstat.node3.nr_zone_active_file
61462 ± 36% -90.9% 5606 ± 6% numa-vmstat.node3.nr_zone_inactive_anon
1395481 ± 71% +251.5% 4905621 ± 17% proc-vmstat.compact_free_scanned
33074 ± 70% +419.4% 171779 ± 22% proc-vmstat.compact_isolated
202053 ± 73% +9922.0% 20249857 ± 36% proc-vmstat.compact_migrate_scanned
108289 ± 9% -40.8% 64062 ± 30% proc-vmstat.nr_active_anon
4508307 +239.1% 15287070 ± 14% proc-vmstat.nr_active_file
4529195 -9.4% 4101452 proc-vmstat.nr_dirty_background_threshold
9069466 -9.4% 8212934 proc-vmstat.nr_dirty_threshold
9806867 +185.6% 28009127 proc-vmstat.nr_file_pages
36527728 -61.8% 13964856 ± 3% proc-vmstat.nr_free_pages
147970 ± 10% -23.8% 112719 ± 14% proc-vmstat.nr_inactive_anon
4504742 +166.5% 12005145 ± 21% proc-vmstat.nr_inactive_file
9.67 ± 70% +1544.8% 159.00 ± 73% proc-vmstat.nr_isolated_file
32470 -2.3% 31717 proc-vmstat.nr_kernel_stack
9038764 +200.8% 27190039 proc-vmstat.nr_mapped
2528396 +171.3% 6860572 proc-vmstat.nr_page_table_pages
183340 ± 13% -41.8% 106760 ± 8% proc-vmstat.nr_shmem
55004 +73.2% 95262 proc-vmstat.nr_slab_reclaimable
78097 +13.2% 88396 ± 2% proc-vmstat.nr_slab_unreclaimable
108289 ± 9% -40.8% 64062 ± 30% proc-vmstat.nr_zone_active_anon
4508309 +239.1% 15287080 ± 14% proc-vmstat.nr_zone_active_file
147971 ± 10% -23.8% 112721 ± 14% proc-vmstat.nr_zone_inactive_anon
4504748 +166.5% 12005031 ± 21% proc-vmstat.nr_zone_inactive_file
1399962 ± 70% +205.1% 4271516 ± 15% proc-vmstat.numa_foreign
187918 ± 10% -26.4% 138295 ± 13% proc-vmstat.numa_hint_faults_local
20021857 ± 5% -27.1% 14586662 ± 7% proc-vmstat.numa_hit
19764621 ± 5% -27.5% 14337667 ± 7% proc-vmstat.numa_local
1400006 ± 70% +205.2% 4273115 ± 15% proc-vmstat.numa_miss
1661389 ± 59% +172.8% 4532571 ± 14% proc-vmstat.numa_other
13859568 +153.3% 35111344 proc-vmstat.pgactivate
0.00 +3.9e+107% 393020 ± 3% proc-vmstat.pgalloc_dma32
21429150 +145.6% 52620266 ± 4% proc-vmstat.pgalloc_normal
289208 ± 74% +3004.1% 8977408 ± 51% proc-vmstat.pgdeactivate
5.274e+08 +35.0% 7.12e+08 ± 17% proc-vmstat.pgfault
21431234 +147.7% 53085222 ± 4% proc-vmstat.pgfree
6523 ± 8% -78.2% 1419 ±122% proc-vmstat.pgmajfault
51107 ± 27% +105.8% 105175 ± 15% proc-vmstat.pgmigrate_success
289208 ± 74% +3004.1% 8977408 ± 51% proc-vmstat.pgrefill
114636 +24.7% 142954 ± 2% proc-vmstat.pgreuse
256146 ± 80% +10403.4% 26904076 ± 47% proc-vmstat.pgscan_file
256146 ± 80% +2517.8% 6705410 ± 27% proc-vmstat.pgscan_kswapd
1706 ±141% +4493.7% 78398 ± 94% proc-vmstat.slabs_scanned
5.68 -9.8% 5.12 ± 7% perf-stat.i.MPKI
2.635e+10 +89.2% 4.987e+10 ± 2% perf-stat.i.branch-instructions
29446064 ± 2% -26.8% 21568998 ± 20% perf-stat.i.branch-misses
41.05 +8.4 49.42 ± 3% perf-stat.i.cache-miss-rate%
4.891e+08 +66.5% 8.145e+08 ± 4% perf-stat.i.cache-references
3.18 -31.9% 2.17 ± 14% perf-stat.i.cpi
3.57e+11 -26.4% 2.627e+11 ± 3% perf-stat.i.cpu-cycles
2046 ± 2% -38.8% 1252 ± 7% perf-stat.i.cycles-between-cache-misses
0.02 ± 4% -0.0 0.01 ± 13% perf-stat.i.dTLB-load-miss-rate%
2.143e+10 +87.7% 4.022e+10 ± 2% perf-stat.i.dTLB-loads
0.02 -0.0 0.01 ± 5% perf-stat.i.dTLB-store-miss-rate%
4.017e+09 +71.6% 6.894e+09 ± 3% perf-stat.i.dTLB-stores
67.19 -6.8 60.42 ± 2% perf-stat.i.iTLB-load-miss-rate%
8.244e+10 +87.3% 1.544e+11 ± 2% perf-stat.i.instructions
11508 +64.7% 18958 ± 20% perf-stat.i.instructions-per-iTLB-miss
0.46 +71.5% 0.78 ± 3% perf-stat.i.ipc
1.86 -26.5% 1.37 ± 3% perf-stat.i.metric.GHz
272.17 +86.8% 508.48 ± 2% perf-stat.i.metric.M/sec
89.61 -4.2 85.41 ± 2% perf-stat.i.node-load-miss-rate%
93.39 -1.1 92.31 perf-stat.i.node-store-miss-rate%
5.94 -11.1% 5.27 ± 2% perf-stat.overall.MPKI
0.11 -0.1 0.04 ± 18% perf-stat.overall.branch-miss-rate%
24.75 -8.3 16.49 ± 3% perf-stat.overall.cache-miss-rate%
4.34 -58.3% 1.81 ± 7% perf-stat.overall.cpi
2957 -29.4% 2088 ± 8% perf-stat.overall.cycles-between-cache-misses
0.03 ± 4% -0.0 0.02 ± 18% perf-stat.overall.dTLB-load-miss-rate%
0.03 -0.0 0.02 ± 6% perf-stat.overall.dTLB-store-miss-rate%
79.19 -4.8 74.43 ± 5% perf-stat.overall.iTLB-load-miss-rate%
14468 +118.3% 31586 ± 23% perf-stat.overall.instructions-per-iTLB-miss
0.23 +140.9% 0.55 ± 7% perf-stat.overall.ipc
1753 +44.2% 2527 ± 3% perf-stat.overall.path-length
2.825e+10 +80.0% 5.086e+10 ± 4% perf-stat.ps.branch-instructions
31369620 -29.5% 22107780 ± 22% perf-stat.ps.branch-misses
5.237e+08 +58.7% 8.311e+08 ± 7% perf-stat.ps.cache-references
3.833e+11 -25.9% 2.842e+11 ± 2% perf-stat.ps.cpu-cycles
2.292e+10 +78.8% 4.1e+10 ± 5% perf-stat.ps.dTLB-loads
4.264e+09 +64.4% 7.011e+09 ± 6% perf-stat.ps.dTLB-stores
8.823e+10 +78.4% 1.574e+11 ± 5% perf-stat.ps.instructions
18972299 -23.4% 14538789 ± 27% perf-stat.ps.node-load-misses
1093618 ± 5% -7.4% 1012942 ± 3% perf-stat.ps.node-loads
7576747 ± 5% -20.8% 5997202 ± 26% perf-stat.ps.node-store-misses
4.081e+13 +141.5% 9.854e+13 ± 6% perf-stat.total.instructions
1078861 ±112% +1578.0% 18103185 ± 16% numa-meminfo.node0.Active
1073730 ±112% +1574.7% 17982022 ± 17% numa-meminfo.node0.Active(file)
3776230 ± 34% +665.8% 28918288 ± 16% numa-meminfo.node0.FilePages
510515 ± 29% +1738.2% 9384093 ± 29% numa-meminfo.node0.Inactive
390425 ± 70% +2253.6% 9188935 ± 31% numa-meminfo.node0.Inactive(file)
83922 ± 12% +51.5% 127156 ± 15% numa-meminfo.node0.KReclaimable
1513602 ± 92% +1691.7% 27118547 ± 21% numa-meminfo.node0.Mapped
42176849 ± 3% -69.2% 12973556 ± 24% numa-meminfo.node0.MemFree
6979665 ± 18% +418.4% 36182958 ± 8% numa-meminfo.node0.MemUsed
2570494 ± 3% +154.8% 6550818 ± 23% numa-meminfo.node0.PageTables
83922 ± 12% +51.5% 127156 ± 15% numa-meminfo.node0.SReclaimable
181192 ± 10% +28.8% 233452 ± 12% numa-meminfo.node0.Slab
2821085 ±121% +465.7% 15958433 ± 8% numa-meminfo.node1.Active
2812387 ±122% +467.1% 15948621 ± 8% numa-meminfo.node1.Active(file)
46712 ± 79% +167.4% 124913 ± 51% numa-meminfo.node1.AnonPages.max
5323354 ±119% +355.5% 24250501 ± 19% numa-meminfo.node1.FilePages
2511909 ±116% +228.8% 8260033 ± 43% numa-meminfo.node1.Inactive
2494074 ±117% +227.6% 8170385 ± 45% numa-meminfo.node1.Inactive(file)
24145 ± 66% +206.4% 73980 ± 14% numa-meminfo.node1.KReclaimable
5303377 ±119% +350.4% 23887574 ± 20% numa-meminfo.node1.Mapped
41195841 ± 15% -58.1% 17270949 ± 20% numa-meminfo.node1.MemFree
8338805 ± 76% +286.9% 32263698 ± 10% numa-meminfo.node1.MemUsed
2565988 +190.9% 7463727 ± 16% numa-meminfo.node1.PageTables
24145 ± 66% +206.4% 73980 ± 14% numa-meminfo.node1.SReclaimable
64627 ± 3% +29.6% 83746 ± 11% numa-meminfo.node1.SUnreclaim
88774 ± 20% +77.7% 157727 ± 11% numa-meminfo.node1.Slab
2513412 ± 7% +212.9% 7864995 ± 36% numa-meminfo.node2.PageTables
115780 ± 29% +36.6% 158133 ± 12% numa-meminfo.node2.Slab
9580292 ± 54% +88.0% 18009449 ± 11% numa-meminfo.node3.Active
305023 ± 43% -77.4% 68934 ±135% numa-meminfo.node3.Active(anon)
9275268 ± 55% +93.4% 17940514 ± 10% numa-meminfo.node3.Active(file)
108407 ± 87% -88.6% 12400 ± 51% numa-meminfo.node3.AnonPages
160985 ± 61% -73.4% 42806 ± 76% numa-meminfo.node3.AnonPages.max
19239019 ± 61% +74.0% 33471217 ± 6% numa-meminfo.node3.FilePages
246214 ± 36% -90.8% 22559 ± 6% numa-meminfo.node3.Inactive(anon)
67061 ± 18% +55.3% 104124 ± 18% numa-meminfo.node3.KReclaimable
9127 ± 24% -25.3% 6820 ± 8% numa-meminfo.node3.KernelStack
27179894 ± 42% -63.4% 9952841 ± 10% numa-meminfo.node3.MemFree
22309393 ± 51% +77.2% 39536446 ± 2% numa-meminfo.node3.MemUsed
2454785 ± 6% +125.9% 5545631 ± 22% numa-meminfo.node3.PageTables
67061 ± 18% +55.3% 104124 ± 18% numa-meminfo.node3.SReclaimable
442746 ± 34% -82.1% 79224 ±119% numa-meminfo.node3.Shmem
146622 ± 3% +26.3% 185228 ± 9% numa-meminfo.node3.Slab
0.69 ± 5% -28.1% 0.49 ± 5% sched_debug.cfs_rq:/.h_nr_running.avg
1.64 ± 6% -10.5% 1.47 sched_debug.cfs_rq:/.h_nr_running.max
0.20 ± 2% +15.6% 0.24 ± 8% sched_debug.cfs_rq:/.h_nr_running.stddev
6824 ± 8% +196.8% 20253 ± 6% sched_debug.cfs_rq:/.load.avg
229906 ± 3% +129.8% 528372 ± 9% sched_debug.cfs_rq:/.load.max
21576 ± 14% +260.6% 77803 ± 5% sched_debug.cfs_rq:/.load.stddev
9.61 ± 13% +125.1% 21.63 ± 9% sched_debug.cfs_rq:/.load_avg.avg
338.23 ± 10% +90.2% 643.24 ± 7% sched_debug.cfs_rq:/.load_avg.max
1.93 ± 20% -50.2% 0.96 ± 51% sched_debug.cfs_rq:/.load_avg.min
35.19 ± 16% +148.2% 87.36 ± 4% sched_debug.cfs_rq:/.load_avg.stddev
28561269 ± 4% +23.7% 35334125 ± 5% sched_debug.cfs_rq:/.min_vruntime.avg
29300278 ± 4% +24.1% 36374870 ± 5% sched_debug.cfs_rq:/.min_vruntime.max
25701358 ± 3% +10.0% 28264060 ± 8% sched_debug.cfs_rq:/.min_vruntime.min
520966 ± 10% +117.7% 1134026 ± 46% sched_debug.cfs_rq:/.min_vruntime.stddev
0.68 ± 5% -28.2% 0.48 ± 6% sched_debug.cfs_rq:/.nr_running.avg
0.18 ± 2% +19.6% 0.21 ± 5% sched_debug.cfs_rq:/.nr_running.stddev
137.38 ± 5% -30.0% 96.19 ± 4% sched_debug.cfs_rq:/.removed.load_avg.max
54.41 ± 14% -34.3% 35.73 ± 37% sched_debug.cfs_rq:/.removed.runnable_avg.max
54.41 ± 14% -34.3% 35.73 ± 37% sched_debug.cfs_rq:/.removed.util_avg.max
617.96 ± 5% -24.3% 467.92 ± 2% sched_debug.cfs_rq:/.runnable_avg.avg
1519 ± 3% -8.0% 1397 sched_debug.cfs_rq:/.runnable_avg.max
208.55 ± 11% -33.1% 139.44 ± 22% sched_debug.cfs_rq:/.runnable_avg.min
140.06 ± 3% +32.9% 186.10 ± 7% sched_debug.cfs_rq:/.runnable_avg.stddev
590586 ± 38% +96.2% 1158625 ± 14% sched_debug.cfs_rq:/.spread0.max
-3015844 +130.5% -6950269 sched_debug.cfs_rq:/.spread0.min
521714 ± 10% +117.4% 1134421 ± 46% sched_debug.cfs_rq:/.spread0.stddev
609.27 ± 5% -24.4% 460.70 ± 2% sched_debug.cfs_rq:/.util_avg.avg
1444 ± 4% -7.9% 1330 sched_debug.cfs_rq:/.util_avg.max
130.58 ± 3% +35.1% 176.43 ± 7% sched_debug.cfs_rq:/.util_avg.stddev
572.34 ± 5% -31.2% 393.53 ± 6% sched_debug.cfs_rq:/.util_est_enqueued.avg
1316 ± 9% -18.0% 1078 sched_debug.cfs_rq:/.util_est_enqueued.max
310775 ± 11% +89.7% 589408 ± 15% sched_debug.cpu.avg_idle.avg
239814 ± 5% +41.3% 338862 ± 4% sched_debug.cpu.clock.avg
239856 ± 5% +41.3% 338900 ± 4% sched_debug.cpu.clock.max
239768 ± 5% +41.3% 338817 ± 4% sched_debug.cpu.clock.min
237458 ± 5% +40.9% 334653 ± 4% sched_debug.cpu.clock_task.avg
237726 ± 5% +41.1% 335330 ± 4% sched_debug.cpu.clock_task.max
227513 ± 5% +42.8% 324812 ± 4% sched_debug.cpu.clock_task.min
5295 ± 6% -31.7% 3615 ± 6% sched_debug.cpu.curr->pid.avg
13058 ± 5% +24.9% 16312 ± 2% sched_debug.cpu.curr->pid.max
1468 +14.2% 1678 ± 4% sched_debug.cpu.curr->pid.stddev
0.68 ± 6% -32.1% 0.46 ± 6% sched_debug.cpu.nr_running.avg
2.408e+09 -9.4% 2.182e+09 ± 6% sched_debug.cpu.nr_uninterruptible.avg
239765 ± 5% +41.3% 338814 ± 4% sched_debug.cpu_clk
238754 ± 5% +41.5% 337801 ± 4% sched_debug.ktime
240246 ± 5% +41.2% 339291 ± 4% sched_debug.sched_clk
54.77 -39.6 15.17 ± 4% perf-profile.calltrace.cycles-pp.filemap_map_pages.xfs_filemap_map_pages.do_fault.__handle_mm_fault.handle_mm_fault
43.84 -38.2 5.61 ± 3% perf-profile.calltrace.cycles-pp.next_uptodate_page.filemap_map_pages.xfs_filemap_map_pages.do_fault.__handle_mm_fault
69.95 -37.8 32.18 ± 3% perf-profile.calltrace.cycles-pp.xfs_filemap_map_pages.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
75.67 -35.5 40.16 ± 7% perf-profile.calltrace.cycles-pp.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
76.91 -32.6 44.30 ± 7% perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
77.31 -32.5 44.81 ± 7% perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
78.89 -25.4 53.45 ± 8% perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
78.91 -25.2 53.70 ± 8% perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault.do_access
79.15 -25.1 54.07 ± 8% perf-profile.calltrace.cycles-pp.asm_exc_page_fault.do_access
88.17 -15.5 72.70 ± 3% perf-profile.calltrace.cycles-pp.do_access
1.25 ± 3% -0.9 0.37 ± 70% perf-profile.calltrace.cycles-pp.folio_wake_bit.filemap_map_pages.xfs_filemap_map_pages.do_fault.__handle_mm_fault
1.43 ± 2% -0.2 1.20 perf-profile.calltrace.cycles-pp.page_add_file_rmap.do_set_pte.filemap_map_pages.xfs_filemap_map_pages.do_fault
1.48 ± 2% -0.1 1.36 ± 2% perf-profile.calltrace.cycles-pp.do_set_pte.filemap_map_pages.xfs_filemap_map_pages.do_fault.__handle_mm_fault
4.18 +1.1 5.30 ± 2% perf-profile.calltrace.cycles-pp.xas_load.xas_find.filemap_map_pages.xfs_filemap_map_pages.do_fault
4.19 +1.1 5.32 ± 2% perf-profile.calltrace.cycles-pp.xas_find.filemap_map_pages.xfs_filemap_map_pages.do_fault.__handle_mm_fault
6.57 +1.1 7.70 ± 4% perf-profile.calltrace.cycles-pp.xfs_iunlock.xfs_filemap_map_pages.do_fault.__handle_mm_fault.handle_mm_fault
4.03 +1.1 5.16 ± 2% perf-profile.calltrace.cycles-pp.xas_start.xas_load.xas_find.filemap_map_pages.xfs_filemap_map_pages
6.30 +1.2 7.52 ± 4% perf-profile.calltrace.cycles-pp.up_read.xfs_iunlock.xfs_filemap_map_pages.do_fault.__handle_mm_fault
0.17 ±141% +2.4 2.62 ± 8% perf-profile.calltrace.cycles-pp.up_read.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
0.74 ± 21% +3.8 4.52 ± 18% perf-profile.calltrace.cycles-pp.down_read_trylock.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
12.32 +22.6 34.87 ± 11% perf-profile.calltrace.cycles-pp.do_rw_once
54.79 -39.6 15.19 ± 4% perf-profile.children.cycles-pp.filemap_map_pages
44.39 -38.8 5.64 ± 3% perf-profile.children.cycles-pp.next_uptodate_page
69.96 -37.8 32.18 ± 3% perf-profile.children.cycles-pp.xfs_filemap_map_pages
75.69 -35.5 40.17 ± 7% perf-profile.children.cycles-pp.do_fault
76.93 -32.6 44.32 ± 7% perf-profile.children.cycles-pp.__handle_mm_fault
77.35 -32.4 44.94 ± 7% perf-profile.children.cycles-pp.handle_mm_fault
78.91 -25.5 53.46 ± 8% perf-profile.children.cycles-pp.do_user_addr_fault
78.93 -25.2 53.71 ± 8% perf-profile.children.cycles-pp.exc_page_fault
79.18 -25.1 54.10 ± 8% perf-profile.children.cycles-pp.asm_exc_page_fault
90.13 -16.3 73.82 ± 4% perf-profile.children.cycles-pp.do_access
1.75 -1.6 0.11 ± 25% perf-profile.children.cycles-pp.folio_unlock
1.96 ± 3% -1.1 0.88 ± 15% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
1.39 ± 4% -0.7 0.74 ± 34% perf-profile.children.cycles-pp.folio_wake_bit
0.69 ± 2% -0.6 0.11 ± 11% perf-profile.children.cycles-pp.PageHeadHuge
0.48 -0.4 0.03 ± 70% perf-profile.children.cycles-pp.filemap_add_folio
0.71 ± 9% -0.4 0.28 ± 56% perf-profile.children.cycles-pp.intel_idle
0.86 ± 2% -0.4 0.45 ± 31% perf-profile.children.cycles-pp.__wake_up_common
0.80 ± 3% -0.4 0.41 ± 31% perf-profile.children.cycles-pp.wake_page_function
0.77 ± 3% -0.4 0.39 ± 30% perf-profile.children.cycles-pp.try_to_wake_up
0.93 ± 9% -0.3 0.62 ± 13% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
0.54 ± 6% -0.2 0.31 ± 37% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.46 ± 9% -0.2 0.26 ± 43% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.35 ± 3% -0.2 0.17 ± 29% perf-profile.children.cycles-pp.ttwu_do_activate
1.49 ± 2% -0.2 1.32 perf-profile.children.cycles-pp.page_add_file_rmap
0.61 ± 8% -0.2 0.43 ± 15% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
0.60 ± 9% -0.2 0.43 ± 15% perf-profile.children.cycles-pp.hrtimer_interrupt
0.58 ± 3% -0.2 0.41 ± 32% perf-profile.children.cycles-pp.schedule_idle
0.30 ± 2% -0.2 0.14 ± 31% perf-profile.children.cycles-pp.enqueue_task_fair
0.27 ± 4% -0.1 0.12 ± 36% perf-profile.children.cycles-pp.pick_next_task_fair
0.27 ± 3% -0.1 0.14 ± 30% perf-profile.children.cycles-pp.dequeue_task_fair
0.29 ± 15% -0.1 0.17 ± 14% perf-profile.children.cycles-pp.irq_exit_rcu
0.22 ± 2% -0.1 0.11 ± 30% perf-profile.children.cycles-pp.enqueue_entity
0.39 ± 11% -0.1 0.28 ± 16% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.22 ± 6% -0.1 0.11 ± 17% perf-profile.children.cycles-pp.update_load_avg
0.23 ± 3% -0.1 0.12 ± 29% perf-profile.children.cycles-pp.dequeue_entity
0.21 ± 2% -0.1 0.10 ± 19% perf-profile.children.cycles-pp.read_pages
0.31 ± 17% -0.1 0.21 ± 17% perf-profile.children.cycles-pp.update_process_times
0.20 ± 2% -0.1 0.10 ± 19% perf-profile.children.cycles-pp.iomap_readahead
0.24 ± 17% -0.1 0.14 ± 15% perf-profile.children.cycles-pp.__softirqentry_text_start
0.32 ± 18% -0.1 0.23 ± 16% perf-profile.children.cycles-pp.tick_sched_timer
0.31 ± 17% -0.1 0.21 ± 17% perf-profile.children.cycles-pp.tick_sched_handle
0.26 ± 6% -0.1 0.17 ± 38% perf-profile.children.cycles-pp._raw_spin_lock_irq
0.14 ± 5% -0.1 0.06 ± 13% perf-profile.children.cycles-pp.asm_sysvec_call_function
0.12 ± 8% -0.1 0.04 ± 73% perf-profile.children.cycles-pp.newidle_balance
0.21 ± 16% -0.1 0.14 ± 17% perf-profile.children.cycles-pp.scheduler_tick
0.16 ± 16% -0.1 0.09 ± 10% perf-profile.children.cycles-pp.rebalance_domains
0.11 ± 11% -0.1 0.04 ± 70% perf-profile.children.cycles-pp.irqtime_account_irq
0.10 ± 4% -0.1 0.04 ± 70% perf-profile.children.cycles-pp.select_task_rq_fair
0.10 ± 8% -0.1 0.04 ± 71% perf-profile.children.cycles-pp.set_next_entity
0.13 ± 7% -0.1 0.07 ± 23% perf-profile.children.cycles-pp.update_curr
0.14 ± 13% -0.1 0.08 ± 12% perf-profile.children.cycles-pp._raw_spin_trylock
0.16 ± 17% -0.1 0.10 ± 19% perf-profile.children.cycles-pp.task_tick_fair
1.54 ± 2% -0.1 1.48 perf-profile.children.cycles-pp.do_set_pte
0.12 -0.1 0.07 ± 70% perf-profile.children.cycles-pp.switch_mm_irqs_off
0.10 ± 4% -0.0 0.05 ± 70% perf-profile.children.cycles-pp.perf_trace_sched_wakeup_template
0.14 ± 5% -0.0 0.09 ± 33% perf-profile.children.cycles-pp.ttwu_queue_wakelist
0.11 ± 11% -0.0 0.07 ± 11% perf-profile.children.cycles-pp.folio_memcg_lock
0.09 ± 9% -0.0 0.05 ± 71% perf-profile.children.cycles-pp.update_rq_clock
0.18 -0.0 0.16 ± 3% perf-profile.children.cycles-pp.sync_regs
0.21 -0.0 0.19 ± 2% perf-profile.children.cycles-pp.error_entry
0.07 +0.0 0.08 perf-profile.children.cycles-pp.___perf_sw_event
0.07 +0.0 0.11 perf-profile.children.cycles-pp.__perf_sw_event
0.02 ±141% +0.0 0.06 ± 7% perf-profile.children.cycles-pp.__memcg_kmem_charge_page
0.00 +0.1 0.05 perf-profile.children.cycles-pp.unlock_page_memcg
0.00 +0.1 0.07 ± 23% perf-profile.children.cycles-pp.memset_erms
0.00 +0.1 0.07 ± 11% perf-profile.children.cycles-pp.ondemand_readahead
0.00 +0.1 0.07 perf-profile.children.cycles-pp.pmd_install
0.12 ± 4% +0.1 0.21 ± 27% perf-profile.children.cycles-pp.finish_fault
0.35 +0.2 0.50 ± 6% perf-profile.children.cycles-pp._raw_spin_lock
0.24 ± 6% +0.3 0.51 ± 20% perf-profile.children.cycles-pp.native_irq_return_iret
0.38 ± 9% +0.5 0.87 ± 36% perf-profile.children.cycles-pp.finish_task_switch
0.17 ± 29% +0.5 0.71 ± 62% perf-profile.children.cycles-pp.find_vma
8.29 ± 4% +1.1 9.36 ± 9% perf-profile.children.cycles-pp.down_read
5.10 +1.1 6.22 ± 5% perf-profile.children.cycles-pp.xas_start
6.58 +1.1 7.71 ± 4% perf-profile.children.cycles-pp.xfs_iunlock
5.20 +1.2 6.36 ± 5% perf-profile.children.cycles-pp.xas_load
4.20 +1.2 5.39 ± 2% perf-profile.children.cycles-pp.xas_find
6.94 +3.6 10.58 ± 6% perf-profile.children.cycles-pp.up_read
0.74 ± 21% +3.8 4.53 ± 18% perf-profile.children.cycles-pp.down_read_trylock
10.62 +23.3 33.93 ± 13% perf-profile.children.cycles-pp.do_rw_once
43.93 -38.4 5.53 ± 3% perf-profile.self.cycles-pp.next_uptodate_page
1.73 -1.6 0.10 ± 29% perf-profile.self.cycles-pp.folio_unlock
0.68 -0.6 0.10 ± 14% perf-profile.self.cycles-pp.PageHeadHuge
0.70 ± 9% -0.4 0.28 ± 56% perf-profile.self.cycles-pp.intel_idle
1.32 -0.2 1.12 ± 2% perf-profile.self.cycles-pp.page_add_file_rmap
0.46 ± 9% -0.2 0.25 ± 46% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.13 ± 9% -0.1 0.05 ± 8% perf-profile.self.cycles-pp.__count_memcg_events
0.17 ± 4% -0.1 0.09 ± 33% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.09 ± 18% -0.1 0.03 ± 70% perf-profile.self.cycles-pp.irqtime_account_irq
0.14 ± 13% -0.1 0.08 ± 12% perf-profile.self.cycles-pp._raw_spin_trylock
0.11 ± 11% -0.0 0.07 ± 7% perf-profile.self.cycles-pp.folio_memcg_lock
0.11 ± 4% -0.0 0.07 ± 70% perf-profile.self.cycles-pp.switch_mm_irqs_off
0.08 ± 5% -0.0 0.04 ± 71% perf-profile.self.cycles-pp.enqueue_entity
0.07 ± 7% -0.0 0.03 ± 70% perf-profile.self.cycles-pp.update_rq_clock
0.18 -0.0 0.15 ± 3% perf-profile.self.cycles-pp.sync_regs
0.00 +0.1 0.07 ± 23% perf-profile.self.cycles-pp.memset_erms
0.00 +0.1 0.09 ± 18% perf-profile.self.cycles-pp.xas_find
0.03 ± 70% +0.1 0.16 ± 7% perf-profile.self.cycles-pp.do_set_pte
0.33 +0.2 0.49 ± 5% perf-profile.self.cycles-pp._raw_spin_lock
0.00 +0.2 0.25 ± 22% perf-profile.self.cycles-pp.exc_page_fault
0.24 ± 6% +0.3 0.51 ± 20% perf-profile.self.cycles-pp.native_irq_return_iret
0.28 ± 7% +0.3 0.57 ± 35% perf-profile.self.cycles-pp.__schedule
0.25 ± 10% +0.5 0.75 ± 37% perf-profile.self.cycles-pp.finish_task_switch
0.87 ± 3% +1.0 1.91 ± 14% perf-profile.self.cycles-pp.filemap_map_pages
8.19 ± 4% +1.1 9.27 ± 9% perf-profile.self.cycles-pp.down_read
2.01 ± 2% +1.1 3.13 ± 23% perf-profile.self.cycles-pp.filemap_fault
5.05 +1.1 6.18 ± 5% perf-profile.self.cycles-pp.xas_start
1.21 ± 22% +2.9 4.08 ± 11% perf-profile.self.cycles-pp.__handle_mm_fault
6.87 +3.6 10.51 ± 6% perf-profile.self.cycles-pp.up_read
0.73 ± 21% +3.8 4.50 ± 18% perf-profile.self.cycles-pp.down_read_trylock
9.43 +4.7 14.14 ± 6% perf-profile.self.cycles-pp.do_access
7.58 +20.1 27.67 ± 16% perf-profile.self.cycles-pp.do_rw_once
***************************************************************************************************
lkp-ivb-2ep1: 48 threads 2 sockets Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz with 112G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/debian-10.4-x86_64-20200603.cgz/300s/lkp-ivb-2ep1/migrate/vm-scalability/0x42e
commit:
18788cfa23 ("mm: Support arbitrary THP sizes")
793917d997 ("mm/readahead: Add large folio readahead")
18788cfa23696774 793917d997df2e432f3e9ac126e
---------------- ---------------------------
%stddev %change %stddev
\ | \
1279697 ± 2% +24.1% 1588230 vm-scalability.median
1279697 ± 2% +24.1% 1588230 vm-scalability.throughput
627.91 -1.0% 621.31 vm-scalability.time.elapsed_time
627.91 -1.0% 621.31 vm-scalability.time.elapsed_time.max
6.24 ± 5% -1.0 5.22 ± 5% turbostat.Busy%
33945569 ± 20% +44.6% 49088231 ± 17% turbostat.C6
3642839 -96.6% 122845 numa-numastat.node0.interleave_hit
12102273 ± 14% -27.0% 8838437 ± 13% numa-numastat.node0.numa_hit
3665673 -96.7% 122772 numa-numastat.node1.interleave_hit
18826459 ± 8% -24.4% 14235613 ± 7% numa-numastat.node1.numa_hit
40872 ± 6% +67564.4% 27656338 meminfo.Active
217.20 +1.3e+07% 27613029 meminfo.Active(file)
28240893 -97.1% 806140 meminfo.Inactive
27927778 -98.2% 491393 ± 2% meminfo.Inactive(file)
627080 -15.9% 527421 meminfo.Mapped
56525 ± 12% -12.9% 49208 ± 4% sched_debug.cfs_rq:/.min_vruntime.max
64.98 ± 31% -61.2% 25.18 ± 83% sched_debug.cfs_rq:/.removed.runnable_avg.max
12.77 ± 25% -64.1% 4.58 ± 86% sched_debug.cfs_rq:/.removed.runnable_avg.stddev
55.55 ± 24% -54.7% 25.18 ± 83% sched_debug.cfs_rq:/.removed.util_avg.max
11.42 ± 19% -59.9% 4.58 ± 86% sched_debug.cfs_rq:/.removed.util_avg.stddev
24966 ± 36% -39.8% 15027 ± 29% sched_debug.cfs_rq:/.spread0.max
6277 ± 55% +2.2e+05% 13807856 numa-meminfo.node0.Active
128.00 +1.1e+07% 13801812 numa-meminfo.node0.Active(file)
14256136 -96.4% 508062 ± 6% numa-meminfo.node0.Inactive
13970347 -98.2% 245790 ± 2% numa-meminfo.node0.Inactive(file)
329516 -15.4% 278703 numa-meminfo.node0.Mapped
34559 ± 14% +39944.8% 13839251 numa-meminfo.node1.Active
89.00 ± 2% +1.6e+07% 13801956 numa-meminfo.node1.Active(file)
13998562 -97.9% 298543 ± 10% numa-meminfo.node1.Inactive
13971170 -98.2% 245871 numa-meminfo.node1.Inactive(file)
297635 -16.3% 249066 ± 2% numa-meminfo.node1.Mapped
53.60 +1.3e+07% 6902228 proc-vmstat.nr_active_file
6981859 -98.2% 122915 proc-vmstat.nr_inactive_file
156878 -15.7% 132188 proc-vmstat.nr_mapped
26257 +7.1% 28123 proc-vmstat.nr_slab_unreclaimable
53.60 +1.3e+07% 6902228 proc-vmstat.nr_zone_active_file
6981859 -98.2% 122915 proc-vmstat.nr_zone_inactive_file
30931182 -25.4% 23076379 ± 3% proc-vmstat.numa_hit
7308513 -96.6% 245617 proc-vmstat.numa_interleave
24103602 ± 2% -18.4% 19671160 ± 3% proc-vmstat.numa_local
6826723 ± 9% -50.1% 3404877 ± 12% proc-vmstat.numa_other
66066 +10919.7% 7280270 proc-vmstat.pgactivate
32.00 +1.1e+07% 3451480 numa-vmstat.node0.nr_active_file
3491772 -98.2% 61446 numa-vmstat.node0.nr_inactive_file
82425 -15.2% 69909 numa-vmstat.node0.nr_mapped
32.00 +1.1e+07% 3451480 numa-vmstat.node0.nr_zone_active_file
3491772 -98.2% 61446 numa-vmstat.node0.nr_zone_inactive_file
12101625 ± 14% -27.0% 8837704 ± 13% numa-vmstat.node0.numa_hit
3642839 -96.6% 122845 numa-vmstat.node0.numa_interleave
21.60 ± 2% +1.6e+07% 3451528 numa-vmstat.node1.nr_active_file
3491978 -98.2% 61461 numa-vmstat.node1.nr_inactive_file
74233 -16.1% 62260 numa-vmstat.node1.nr_mapped
21.60 ± 2% +1.6e+07% 3451528 numa-vmstat.node1.nr_zone_active_file
3491978 -98.2% 61461 numa-vmstat.node1.nr_zone_inactive_file
18825701 ± 8% -24.4% 14234723 ± 7% numa-vmstat.node1.numa_hit
3665673 -96.7% 122772 numa-vmstat.node1.numa_interleave
40.20 ± 7% +28.4% 51.60 ± 3% perf-stat.i.MPKI
4.844e+08 -4.1% 4.644e+08 perf-stat.i.branch-instructions
9.15 ± 5% +1.5 10.66 perf-stat.i.branch-miss-rate%
19.69 ± 5% +2.0 21.69 ± 5% perf-stat.i.cache-miss-rate%
8672757 ± 11% +28.2% 11114804 ± 11% perf-stat.i.cache-misses
49731408 ± 4% +12.4% 55875231 ± 5% perf-stat.i.cache-references
765.32 ± 9% -22.8% 590.45 ± 4% perf-stat.i.cycles-between-cache-misses
1.15 ± 5% +0.2 1.30 ± 2% perf-stat.i.dTLB-load-miss-rate%
0.18 ± 2% +0.0 0.20 perf-stat.i.dTLB-store-miss-rate%
811798 ± 7% +18.5% 962255 ± 8% perf-stat.i.dTLB-store-misses
90.13 +2.5 92.64 perf-stat.i.iTLB-load-miss-rate%
162262 ± 8% -21.6% 127248 ± 16% perf-stat.i.iTLB-loads
2.198e+09 -4.5% 2.099e+09 perf-stat.i.instructions
374.33 ± 6% +21.9% 456.27 ± 10% perf-stat.i.metric.K/sec
898067 ± 4% +17.5% 1055190 ± 3% perf-stat.i.node-stores
22.64 ± 6% +17.5% 26.62 ± 5% perf-stat.overall.MPKI
17.39 ± 6% +2.4 19.83 ± 6% perf-stat.overall.cache-miss-rate%
731.03 ± 8% -22.9% 563.68 ± 5% perf-stat.overall.cycles-between-cache-misses
0.17 ± 3% +0.0 0.19 ± 2% perf-stat.overall.dTLB-store-miss-rate%
90.01 +1.8 91.84 perf-stat.overall.iTLB-load-miss-rate%
34.37 -3.0 31.41 perf-stat.overall.node-store-miss-rate%
42785 -5.6% 40397 perf-stat.overall.path-length
4.838e+08 -4.1% 4.638e+08 perf-stat.ps.branch-instructions
8662103 ± 11% +28.2% 11100817 ± 11% perf-stat.ps.cache-misses
49649551 ± 4% +12.3% 55779056 ± 5% perf-stat.ps.cache-references
810507 ± 7% +18.5% 960597 ± 8% perf-stat.ps.dTLB-store-misses
162062 ± 8% -21.6% 127036 ± 16% perf-stat.ps.iTLB-loads
2.195e+09 -4.5% 2.096e+09 perf-stat.ps.instructions
898053 ± 4% +17.5% 1055430 ± 3% perf-stat.ps.node-stores
1.389e+12 -5.6% 1.311e+12 perf-stat.total.instructions
1.48 ± 8% -0.4 1.04 ± 4% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
1.69 ± 19% -0.4 1.29 ± 8% perf-profile.calltrace.cycles-pp.fork
1.08 ± 12% -0.4 0.69 ± 9% perf-profile.calltrace.cycles-pp.drm_atomic_helper_commit.drm_atomic_helper_dirtyfb.drm_fb_helper_damage_work.process_one_work.worker_thread
1.08 ± 12% -0.4 0.69 ± 9% perf-profile.calltrace.cycles-pp.commit_tail.drm_atomic_helper_commit.drm_atomic_helper_dirtyfb.drm_fb_helper_damage_work.process_one_work
1.08 ± 12% -0.4 0.69 ± 9% perf-profile.calltrace.cycles-pp.drm_atomic_helper_commit_tail.commit_tail.drm_atomic_helper_commit.drm_atomic_helper_dirtyfb.drm_fb_helper_damage_work
1.08 ± 12% -0.4 0.69 ± 9% perf-profile.calltrace.cycles-pp.drm_atomic_helper_commit_planes.drm_atomic_helper_commit_tail.commit_tail.drm_atomic_helper_commit.drm_atomic_helper_dirtyfb
1.08 ± 12% -0.4 0.69 ± 9% perf-profile.calltrace.cycles-pp.mgag200_simple_display_pipe_update.drm_atomic_helper_commit_planes.drm_atomic_helper_commit_tail.commit_tail.drm_atomic_helper_commit
1.08 ± 12% -0.4 0.69 ± 9% perf-profile.calltrace.cycles-pp.mgag200_handle_damage.mgag200_simple_display_pipe_update.drm_atomic_helper_commit_planes.drm_atomic_helper_commit_tail.commit_tail
1.08 ± 12% -0.4 0.69 ± 9% perf-profile.calltrace.cycles-pp.drm_atomic_helper_dirtyfb.drm_fb_helper_damage_work.process_one_work.worker_thread.kthread
1.08 ± 12% -0.4 0.69 ± 9% perf-profile.calltrace.cycles-pp.drm_fb_memcpy_toio.mgag200_handle_damage.mgag200_simple_display_pipe_update.drm_atomic_helper_commit_planes.drm_atomic_helper_commit_tail
1.02 ± 13% -0.4 0.64 ± 11% perf-profile.calltrace.cycles-pp.memcpy_toio.drm_fb_memcpy_toio.mgag200_handle_damage.mgag200_simple_display_pipe_update.drm_atomic_helper_commit_planes
1.09 ± 11% -0.4 0.74 ± 5% perf-profile.calltrace.cycles-pp.drm_fb_helper_damage_work.process_one_work.worker_thread.kthread.ret_from_fork
0.59 ± 5% +0.1 0.71 ± 9% perf-profile.calltrace.cycles-pp.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.57 ± 6% +0.1 0.69 ± 8% perf-profile.calltrace.cycles-pp.tick_irq_enter.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
1.48 ± 8% -0.4 1.04 ± 4% perf-profile.children.cycles-pp.process_one_work
1.72 ± 19% -0.4 1.30 ± 8% perf-profile.children.cycles-pp.fork
1.08 ± 12% -0.4 0.69 ± 9% perf-profile.children.cycles-pp.drm_atomic_helper_commit
1.08 ± 12% -0.4 0.69 ± 9% perf-profile.children.cycles-pp.commit_tail
1.08 ± 12% -0.4 0.69 ± 9% perf-profile.children.cycles-pp.drm_atomic_helper_commit_tail
1.08 ± 12% -0.4 0.69 ± 9% perf-profile.children.cycles-pp.drm_atomic_helper_commit_planes
1.08 ± 12% -0.4 0.69 ± 9% perf-profile.children.cycles-pp.mgag200_simple_display_pipe_update
1.08 ± 12% -0.4 0.69 ± 9% perf-profile.children.cycles-pp.mgag200_handle_damage
1.08 ± 12% -0.4 0.69 ± 9% perf-profile.children.cycles-pp.drm_atomic_helper_dirtyfb
1.08 ± 12% -0.4 0.69 ± 9% perf-profile.children.cycles-pp.memcpy_toio
1.08 ± 12% -0.4 0.69 ± 9% perf-profile.children.cycles-pp.drm_fb_memcpy_toio
1.09 ± 11% -0.4 0.74 ± 5% perf-profile.children.cycles-pp.drm_fb_helper_damage_work
0.45 ± 19% -0.1 0.31 ± 19% perf-profile.children.cycles-pp.finish_task_switch
0.32 ± 19% -0.1 0.20 ± 15% perf-profile.children.cycles-pp.perf_iterate_sb
0.50 ± 12% -0.1 0.40 ± 8% perf-profile.children.cycles-pp.native_irq_return_iret
0.40 ± 14% -0.1 0.31 ± 11% perf-profile.children.cycles-pp.get_page_from_freelist
0.35 ± 12% -0.1 0.27 ± 16% perf-profile.children.cycles-pp.perf_pmu_sched_task
0.30 ± 15% -0.1 0.22 ± 13% perf-profile.children.cycles-pp.__perf_pmu_sched_task
0.16 ± 8% -0.1 0.11 ± 18% perf-profile.children.cycles-pp.__close
0.18 ± 21% -0.1 0.13 ± 9% perf-profile.children.cycles-pp.__get_vm_area_node
0.11 ± 20% -0.0 0.08 ± 15% perf-profile.children.cycles-pp.__put_user_nocheck_4
0.13 ± 14% -0.0 0.10 ± 7% perf-profile.children.cycles-pp.flush_tlb_func
0.34 ± 12% +0.1 0.44 ± 11% perf-profile.children.cycles-pp.timerqueue_del
0.61 ± 6% +0.1 0.72 ± 8% perf-profile.children.cycles-pp.irq_enter_rcu
0.58 ± 6% +0.1 0.70 ± 8% perf-profile.children.cycles-pp.tick_irq_enter
0.42 ± 12% +0.1 0.54 ± 11% perf-profile.children.cycles-pp.__remove_hrtimer
0.60 ± 48% -0.3 0.31 ± 15% perf-profile.self.cycles-pp.rcu_sched_clock_irq
0.85 ± 10% -0.2 0.64 ± 10% perf-profile.self.cycles-pp.memcpy_toio
0.49 ± 11% -0.1 0.40 ± 8% perf-profile.self.cycles-pp.native_irq_return_iret
0.25 ± 16% -0.1 0.18 ± 22% perf-profile.self.cycles-pp._dl_catch_error
0.09 ± 13% -0.0 0.05 ± 51% perf-profile.self.cycles-pp.asm_sysvec_apic_timer_interrupt
0.15 ± 5% -0.0 0.12 ± 18% perf-profile.self.cycles-pp.update_load_avg
0.11 ± 15% +0.1 0.16 ± 18% perf-profile.self.cycles-pp.timerqueue_del
***************************************************************************************************
lkp-csl-2ap4: 192 threads 4 sockets Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/debian-10.4-x86_64-20200603.cgz/300s/lkp-csl-2ap4/lru-file-mmap-read/vm-scalability/0x500320a
commit:
18788cfa23 ("mm: Support arbitrary THP sizes")
793917d997 ("mm/readahead: Add large folio readahead")
18788cfa23696774 793917d997df2e432f3e9ac126e
---------------- ---------------------------
%stddev %change %stddev
\ | \
0.10 ± 3% +118.3% 0.22 ± 16% vm-scalability.free_time
101518 ± 2% +42.3% 144417 ± 3% vm-scalability.median
411.17 ± 36% +892.2 1303 ± 17% vm-scalability.stddev%
19593900 +45.0% 28415043 ± 3% vm-scalability.throughput
301.57 -34.4% 197.69 ± 2% vm-scalability.time.elapsed_time
301.57 -34.4% 197.69 ± 2% vm-scalability.time.elapsed_time.max
640209 ± 4% -19.7% 514308 ± 10% vm-scalability.time.involuntary_context_switches
15114 +5.2% 15894 vm-scalability.time.percent_of_cpu_this_job_got
42292 -35.8% 27163 ± 3% vm-scalability.time.system_time
3288 ± 4% +29.7% 4264 ± 5% vm-scalability.time.user_time
1.085e+10 ± 3% -51.9% 5.216e+09 ± 7% cpuidle..time
21205215 ± 12% -48.5% 10910479 ± 5% cpuidle..usage
349.92 -29.7% 246.17 uptime.boot
18808 -30.0% 13160 ± 4% uptime.idle
5.17 ± 7% +109.7% 10.83 ± 3% vmstat.cpu.us
21348646 -41.2% 12548500 ± 5% vmstat.memory.free
5766 ± 4% +23.5% 7120 ± 8% vmstat.system.cs
2.715e+08 ± 2% -43.2% 1.542e+08 ± 6% turbostat.IRQ
33367 ± 25% -64.5% 11839 ± 20% turbostat.POLL
43.50 ± 2% +11.1% 48.33 ± 2% turbostat.PkgTmp
252.04 +6.8% 269.14 turbostat.PkgWatt
18.99 ± 2% -5.4 13.63 ± 8% mpstat.cpu.all.idle%
0.00 ± 31% +0.0 0.01 ±106% mpstat.cpu.all.iowait%
1.08 ± 3% +0.7 1.75 ± 7% mpstat.cpu.all.irq%
0.07 ± 3% +0.1 0.14 ± 6% mpstat.cpu.all.soft%
5.82 ± 5% +5.4 11.22 ± 4% mpstat.cpu.all.usr%
165900 ± 4% +7920.0% 13305211 ± 12% meminfo.Active
160338 ± 4% -63.9% 57831 ± 15% meminfo.Active(anon)
5561 ± 11% +2.4e+05% 13247380 ± 12% meminfo.Active(file)
880618 ± 7% -24.3% 666574 ± 2% meminfo.Committed_AS
569221 ± 11% -22.3% 442282 ± 2% meminfo.Inactive(anon)
23000987 ± 3% -46.2% 12372721 ± 4% meminfo.MemFree
388894 +16.7% 453885 ± 2% meminfo.SUnreclaim
422287 ± 14% -55.5% 188091 ± 9% meminfo.Shmem
2.278e+08 ± 3% -95.1% 11200440 ± 15% numa-numastat.node0.local_node
40532177 ± 14% -90.8% 3741196 ± 21% numa-numastat.node0.numa_foreign
2.275e+08 ± 3% -95.1% 11245827 ± 15% numa-numastat.node0.numa_hit
29443860 ± 13% -83.7% 4811729 ± 27% numa-numastat.node0.numa_miss
29476169 ± 13% -83.5% 4859208 ± 27% numa-numastat.node0.other_node
2.303e+08 ± 3% -95.4% 10517523 ± 16% numa-numastat.node1.local_node
37621005 ± 13% -88.6% 4292186 ± 19% numa-numastat.node1.numa_foreign
2.299e+08 ± 3% -95.4% 10603099 ± 16% numa-numastat.node1.numa_hit
41991213 ± 19% -90.5% 3972773 ± 19% numa-numastat.node1.numa_miss
42068264 ± 19% -90.4% 4057461 ± 19% numa-numastat.node1.other_node
2.349e+08 ± 3% -95.2% 11355001 ± 20% numa-numastat.node2.local_node
34589661 ± 21% -88.3% 4047359 ± 16% numa-numastat.node2.numa_foreign
2.345e+08 ± 3% -95.1% 11421655 ± 20% numa-numastat.node2.numa_hit
41161372 ± 19% -91.1% 3677838 ± 19% numa-numastat.node2.numa_miss
41262019 ± 19% -90.9% 3744731 ± 19% numa-numastat.node2.other_node
2.429e+08 -95.5% 10974126 ± 17% numa-numastat.node3.local_node
31729047 ± 13% -86.9% 4152899 ± 24% numa-numastat.node3.numa_foreign
2.425e+08 -95.4% 11034785 ± 17% numa-numastat.node3.numa_hit
31871419 ± 7% -88.2% 3769387 ± 24% numa-numastat.node3.numa_miss
31964737 ± 7% -88.0% 3833705 ± 23% numa-numastat.node3.other_node
18558 ± 21% +16974.0% 3168645 ± 14% numa-meminfo.node0.Active
736.50 ±109% +4.3e+05% 3151953 ± 15% numa-meminfo.node0.Active(file)
34210000 +9.8% 37546409 numa-meminfo.node0.Mapped
5885498 ± 5% -46.0% 3176670 ± 4% numa-meminfo.node0.MemFree
17451 ± 43% +18303.0% 3211606 ± 17% numa-meminfo.node1.Active
13751 ± 59% -64.0% 4950 ± 54% numa-meminfo.node1.Active(anon)
3698 ± 33% +86597.6% 3206655 ± 17% numa-meminfo.node1.Active(file)
86098 ± 84% -81.2% 16195 ±100% numa-meminfo.node1.Inactive(anon)
36059221 +11.3% 40136100 numa-meminfo.node1.Mapped
6127422 ± 6% -48.4% 3164279 ± 5% numa-meminfo.node1.MemFree
79357 ± 69% -90.4% 7595 ± 68% numa-meminfo.node1.Shmem
5745 ± 20% +63238.1% 3639196 ± 20% numa-meminfo.node2.Active
432.83 ±111% +8.4e+05% 3634730 ± 20% numa-meminfo.node2.Active(file)
5992877 ± 3% -47.5% 3148114 ± 5% numa-meminfo.node2.MemFree
82044 ± 2% +30.5% 107052 ± 14% numa-meminfo.node2.SUnreclaim
127440 ± 8% +2819.6% 3720728 ± 14% numa-meminfo.node3.Active
126792 ± 7% -74.1% 32868 ± 32% numa-meminfo.node3.Active(anon)
647.17 ± 88% +5.7e+05% 3687859 ± 14% numa-meminfo.node3.Active(file)
35603773 +10.6% 39372854 numa-meminfo.node3.Mapped
6060136 ± 4% -47.8% 3163708 ± 6% numa-meminfo.node3.MemFree
174654 ± 21% -55.6% 77594 ± 59% numa-meminfo.node3.Shmem
29.60 ± 15% +20.6% 35.71 ± 9% sched_debug.cfs_rq:/.load_avg.avg
23054096 ± 12% -46.9% 12234921 ± 16% sched_debug.cfs_rq:/.min_vruntime.avg
23762670 ± 12% -46.6% 12697185 ± 16% sched_debug.cfs_rq:/.min_vruntime.max
19343278 ± 15% -61.2% 7508563 ± 23% sched_debug.cfs_rq:/.min_vruntime.min
1484122 ± 52% -67.1% 487970 ± 52% sched_debug.cfs_rq:/.spread0.max
768.96 ± 6% -33.8% 509.26 ± 11% sched_debug.cfs_rq:/.util_est_enqueued.avg
186.50 ± 14% +68.2% 313.61 ± 11% sched_debug.cfs_rq:/.util_est_enqueued.stddev
836130 ± 4% +12.9% 943827 ± 3% sched_debug.cpu.avg_idle.avg
1477455 ± 18% +82.9% 2702269 ± 24% sched_debug.cpu.avg_idle.max
143256 ± 15% +91.3% 274043 ± 25% sched_debug.cpu.avg_idle.min
168476 ± 10% -33.2% 112500 ± 11% sched_debug.cpu.clock.avg
168594 ± 10% -33.2% 112546 ± 11% sched_debug.cpu.clock.max
168368 ± 10% -33.2% 112436 ± 11% sched_debug.cpu.clock.min
167148 ± 10% -33.5% 111146 ± 10% sched_debug.cpu.clock_task.avg
167428 ± 10% -33.4% 111447 ± 10% sched_debug.cpu.clock_task.max
157510 ± 10% -34.9% 102572 ± 11% sched_debug.cpu.clock_task.min
10904 ± 6% -13.6% 9420 ± 4% sched_debug.cpu.curr->pid.max
779426 ± 22% +69.3% 1319881 ± 25% sched_debug.cpu.max_idle_balance_cost.max
22416 ± 61% +282.2% 85670 ± 44% sched_debug.cpu.max_idle_balance_cost.stddev
6379 ± 6% -31.4% 4375 ± 9% sched_debug.cpu.nr_switches.avg
3704 ± 7% -43.7% 2086 ± 15% sched_debug.cpu.nr_switches.min
168334 ± 10% -33.2% 112430 ± 11% sched_debug.cpu_clk
167321 ± 10% -33.4% 111418 ± 11% sched_debug.ktime
168815 ± 10% -32.5% 113880 ± 10% sched_debug.sched_clk
2616 ± 9% +46.3% 3827 ± 14% proc-vmstat.allocstall_normal
2156058 ± 15% +4474.3% 98624936 ± 55% proc-vmstat.compact_daemon_migrate_scanned
20.00 ± 38% +2.4e+05% 48248 ± 40% proc-vmstat.compact_fail
8500358 ± 8% -73.3% 2268228 ± 46% proc-vmstat.compact_isolated
10223131 ± 8% +3370.9% 3.548e+08 ± 61% proc-vmstat.compact_migrate_scanned
27.00 ± 46% +9.8e+05% 265302 ± 36% proc-vmstat.compact_stall
7.00 ± 73% +3.1e+06% 217053 ± 36% proc-vmstat.compact_success
823.33 ± 4% +1193.8% 10652 ± 56% proc-vmstat.kswapd_low_wmark_hit_quickly
41044 ± 4% -64.6% 14549 ± 15% proc-vmstat.nr_active_anon
1387 ± 11% +2.4e+05% 3353262 ± 12% proc-vmstat.nr_active_file
41328974 +6.4% 43963911 proc-vmstat.nr_file_pages
5852009 ± 2% -46.9% 3109416 ± 4% proc-vmstat.nr_free_pages
142808 ± 11% -22.5% 110708 ± 2% proc-vmstat.nr_inactive_anon
1075 ± 3% -32.2% 728.67 ± 10% proc-vmstat.nr_isolated_file
35690 +2.4% 36540 proc-vmstat.nr_kernel_stack
35775593 +9.8% 39287167 proc-vmstat.nr_mapped
869101 +5.6% 917415 ± 2% proc-vmstat.nr_page_table_pages
107130 ± 15% -55.9% 47235 ± 9% proc-vmstat.nr_shmem
97259 +16.7% 113507 ± 2% proc-vmstat.nr_slab_unreclaimable
41046 ± 4% -64.5% 14554 ± 15% proc-vmstat.nr_zone_active_anon
1387 ± 11% +2.4e+05% 3353349 ± 12% proc-vmstat.nr_zone_active_file
142851 ± 11% -22.5% 110765 ± 2% proc-vmstat.nr_zone_inactive_anon
1.445e+08 ± 14% -88.8% 16233642 ± 19% proc-vmstat.numa_foreign
80928 ± 55% -74.9% 20329 ± 20% proc-vmstat.numa_hint_faults
40260 ± 38% -71.4% 11516 ± 32% proc-vmstat.numa_hint_faults_local
9.344e+08 ± 2% -95.3% 44308201 ± 16% proc-vmstat.numa_hit
9.359e+08 ± 2% -95.3% 44049923 ± 16% proc-vmstat.numa_local
1.445e+08 ± 14% -88.8% 16231729 ± 19% proc-vmstat.numa_miss
1.448e+08 ± 14% -88.6% 16495107 ± 19% proc-vmstat.numa_other
411268 ± 14% -51.4% 199735 ± 35% proc-vmstat.numa_pte_updates
2547 ± 2% +339.4% 11194 ± 53% proc-vmstat.pageoutrun
254135 ± 15% +14971.1% 38301088 ± 5% proc-vmstat.pgactivate
8098345 +12.0% 9067933 ± 4% proc-vmstat.pgalloc_dma32
4244635 ± 8% -73.7% 1116248 ± 46% proc-vmstat.pgmigrate_success
2415 -1.5% 2380 proc-vmstat.pgpgout
86990 ± 2% -23.8% 66253 ± 2% proc-vmstat.pgreuse
1.85e+09 -24.6% 1.396e+09 ± 7% proc-vmstat.pgscan_direct
2175 ± 16% -94.3% 124.67 ± 32% proc-vmstat.pgscan_direct_throttle
1.965e+08 ± 16% +239.4% 6.671e+08 ± 14% proc-vmstat.pgscan_kswapd
9.921e+08 -4.0% 9.527e+08 proc-vmstat.pgsteal_direct
34748892 ± 2% +116.7% 75306059 ± 7% proc-vmstat.pgsteal_kswapd
4957012 -8.8% 4522105 ± 2% proc-vmstat.workingset_nodereclaim
4020 ± 17% -59.1% 1644 ± 74% proc-vmstat.workingset_refault_file
183.17 ±109% +4.5e+05% 821995 ± 14% numa-vmstat.node0.nr_active_file
1519607 ± 6% -45.2% 832457 ± 4% numa-vmstat.node0.nr_free_pages
8464904 +10.0% 9312159 numa-vmstat.node0.nr_mapped
183.17 ±109% +4.5e+05% 822014 ± 14% numa-vmstat.node0.nr_zone_active_file
40532177 ± 14% -90.8% 3741196 ± 21% numa-vmstat.node0.numa_foreign
2.275e+08 ± 3% -95.1% 11245235 ± 15% numa-vmstat.node0.numa_hit
2.278e+08 ± 3% -95.1% 11199848 ± 15% numa-vmstat.node0.numa_local
29443860 ± 13% -83.7% 4811729 ± 27% numa-vmstat.node0.numa_miss
29476169 ± 13% -83.5% 4859208 ± 27% numa-vmstat.node0.numa_other
3499 ± 59% -63.4% 1280 ± 53% numa-vmstat.node1.nr_active_anon
922.83 ± 33% +90476.4% 835869 ± 17% numa-vmstat.node1.nr_active_file
1586446 ± 5% -47.8% 827706 ± 5% numa-vmstat.node1.nr_free_pages
21322 ± 84% -81.0% 4042 ±100% numa-vmstat.node1.nr_inactive_anon
254.17 ± 5% -24.0% 193.17 ± 8% numa-vmstat.node1.nr_isolated_file
8919519 +11.6% 9957601 numa-vmstat.node1.nr_mapped
19735 ± 68% -90.1% 1947 ± 67% numa-vmstat.node1.nr_shmem
3500 ± 59% -63.4% 1281 ± 53% numa-vmstat.node1.nr_zone_active_anon
923.00 ± 33% +90460.5% 835873 ± 17% numa-vmstat.node1.nr_zone_active_file
21334 ± 83% -81.0% 4056 ±100% numa-vmstat.node1.nr_zone_inactive_anon
37621005 ± 13% -88.6% 4292186 ± 19% numa-vmstat.node1.numa_foreign
2.299e+08 ± 3% -95.4% 10603128 ± 16% numa-vmstat.node1.numa_hit
2.303e+08 ± 3% -95.4% 10517553 ± 16% numa-vmstat.node1.numa_local
41991213 ± 19% -90.5% 3972773 ± 19% numa-vmstat.node1.numa_miss
42068264 ± 19% -90.4% 4057461 ± 19% numa-vmstat.node1.numa_other
1268948 ± 4% -17.2% 1051134 ± 9% numa-vmstat.node1.workingset_nodereclaim
3123 ± 50% -84.8% 476.17 ±183% numa-vmstat.node1.workingset_refault_file
107.83 ±111% +8.8e+05% 944107 ± 20% numa-vmstat.node2.nr_active_file
1554237 ± 2% -46.9% 826010 ± 5% numa-vmstat.node2.nr_free_pages
277.50 ± 3% -45.2% 152.17 ± 11% numa-vmstat.node2.nr_isolated_file
20516 ± 2% +30.5% 26769 ± 14% numa-vmstat.node2.nr_slab_unreclaimable
107.83 ±111% +8.8e+05% 944108 ± 20% numa-vmstat.node2.nr_zone_active_file
34589661 ± 21% -88.3% 4047359 ± 16% numa-vmstat.node2.numa_foreign
2.345e+08 ± 3% -95.1% 11421811 ± 20% numa-vmstat.node2.numa_hit
2.349e+08 ± 3% -95.2% 11355157 ± 20% numa-vmstat.node2.numa_local
41161372 ± 19% -91.1% 3677838 ± 19% numa-vmstat.node2.numa_miss
41262019 ± 19% -90.9% 3744731 ± 19% numa-vmstat.node2.numa_other
1263735 ± 6% -10.7% 1128659 ± 4% numa-vmstat.node2.workingset_nodereclaim
31933 ± 7% -73.0% 8623 ± 34% numa-vmstat.node3.nr_active_anon
161.00 ± 88% +6e+05% 961368 ± 14% numa-vmstat.node3.nr_active_file
1568641 ± 4% -47.2% 828497 ± 5% numa-vmstat.node3.nr_free_pages
264.00 ± 9% -44.3% 147.00 ± 14% numa-vmstat.node3.nr_isolated_file
8802026 +10.9% 9759037 numa-vmstat.node3.nr_mapped
43822 ± 20% -54.4% 20003 ± 57% numa-vmstat.node3.nr_shmem
31933 ± 7% -73.0% 8625 ± 34% numa-vmstat.node3.nr_zone_active_anon
161.00 ± 88% +6e+05% 961370 ± 14% numa-vmstat.node3.nr_zone_active_file
31729047 ± 13% -86.9% 4152899 ± 24% numa-vmstat.node3.numa_foreign
2.425e+08 -95.5% 11033782 ± 17% numa-vmstat.node3.numa_hit
2.429e+08 -95.5% 10973124 ± 17% numa-vmstat.node3.numa_local
31871419 ± 7% -88.2% 3769387 ± 24% numa-vmstat.node3.numa_miss
31964738 ± 7% -88.0% 3833705 ± 23% numa-vmstat.node3.numa_other
2.978e+10 +13.3% 3.375e+10 perf-stat.i.branch-instructions
32180693 ± 4% -14.4% 27541092 ± 7% perf-stat.i.branch-misses
5.15e+08 ± 3% +19.6% 6.16e+08 ± 4% perf-stat.i.cache-references
5528 ± 4% +30.5% 7216 ± 10% perf-stat.i.context-switches
4.07 +6.5% 4.33 ± 3% perf-stat.i.cpi
4.639e+11 +8.0% 5.009e+11 perf-stat.i.cpu-cycles
270.77 +9.2% 295.80 ± 4% perf-stat.i.cpu-migrations
14735665 ± 5% -32.1% 9999207 ± 8% perf-stat.i.dTLB-load-misses
2.788e+10 +4.1% 2.903e+10 perf-stat.i.dTLB-loads
0.01 ± 29% +0.0 0.02 ± 10% perf-stat.i.dTLB-store-miss-rate%
838683 ± 5% +22.9% 1030872 ± 3% perf-stat.i.dTLB-store-misses
6.544e+09 -30.2% 4.568e+09 ± 2% perf-stat.i.dTLB-stores
3574432 +25.9% 4499739 ± 6% perf-stat.i.iTLB-load-misses
1.084e+11 +4.2% 1.13e+11 perf-stat.i.instructions
29122 -15.1% 24732 ± 6% perf-stat.i.instructions-per-iTLB-miss
0.35 ± 4% -20.1% 0.28 ± 6% perf-stat.i.ipc
111142 +51.4% 168291 ± 2% perf-stat.i.major-faults
2.39 +7.8% 2.58 perf-stat.i.metric.GHz
332.60 +5.1% 349.53 perf-stat.i.metric.M/sec
219873 ± 2% +51.0% 331983 ± 2% perf-stat.i.minor-faults
18812619 ± 3% +50.5% 28318976 ± 10% perf-stat.i.node-load-misses
4668466 ± 6% +72.7% 8060213 ± 11% perf-stat.i.node-loads
60.86 +19.9 80.75 perf-stat.i.node-store-miss-rate%
11184776 ± 3% -70.5% 3297078 ± 4% perf-stat.i.node-stores
331016 +51.1% 500275 ± 2% perf-stat.i.page-faults
4.70 +15.5% 5.43 ± 3% perf-stat.overall.MPKI
0.11 ± 3% -0.0 0.08 ± 7% perf-stat.overall.branch-miss-rate%
31.24 -1.6 29.60 ± 2% perf-stat.overall.cache-miss-rate%
0.05 ± 5% -0.0 0.03 ± 7% perf-stat.overall.dTLB-load-miss-rate%
0.01 ± 3% +0.0 0.02 ± 3% perf-stat.overall.dTLB-store-miss-rate%
30846 -17.5% 25451 ± 6% perf-stat.overall.instructions-per-iTLB-miss
54.45 +26.4 80.82 perf-stat.overall.node-store-miss-rate%
6878 -32.1% 4673 perf-stat.overall.path-length
3.01e+10 +12.7% 3.391e+10 perf-stat.ps.branch-instructions
32017180 ± 4% -15.6% 27019688 ± 7% perf-stat.ps.branch-misses
5.163e+08 ± 2% +19.4% 6.167e+08 ± 4% perf-stat.ps.cache-references
5664 ± 4% +23.3% 6986 ± 9% perf-stat.ps.context-switches
4.801e+11 +5.8% 5.079e+11 perf-stat.ps.cpu-cycles
269.63 +6.4% 286.97 ± 4% perf-stat.ps.cpu-migrations
15001073 ± 5% -32.9% 10065925 ± 8% perf-stat.ps.dTLB-load-misses
2.823e+10 +3.4% 2.918e+10 perf-stat.ps.dTLB-loads
840470 ± 4% +22.8% 1031964 ± 3% perf-stat.ps.dTLB-store-misses
6.581e+09 -30.6% 4.57e+09 ± 2% perf-stat.ps.dTLB-stores
3558560 +25.8% 4478004 ± 6% perf-stat.ps.iTLB-load-misses
1.098e+11 +3.4% 1.135e+11 perf-stat.ps.instructions
110858 +52.3% 168813 ± 2% perf-stat.ps.major-faults
219045 +51.8% 332439 ± 2% perf-stat.ps.minor-faults
18998997 ± 3% +49.3% 28360434 ± 10% perf-stat.ps.node-load-misses
4608425 ± 6% +71.4% 7899142 ± 10% perf-stat.ps.node-loads
11135414 ± 3% -70.6% 3278444 ± 4% perf-stat.ps.node-stores
329904 +51.9% 501252 ± 2% perf-stat.ps.page-faults
3.324e+13 -32.1% 2.258e+13 perf-stat.total.instructions
88.08 -62.2 25.91 ± 39% perf-profile.calltrace.cycles-pp.page_cache_ra_unbounded.filemap_fault.__xfs_filemap_fault.__do_fault.do_fault
61.68 ± 3% -36.0 25.68 ± 39% perf-profile.calltrace.cycles-pp.folio_alloc.page_cache_ra_unbounded.filemap_fault.__xfs_filemap_fault.__do_fault
61.64 ± 3% -36.0 25.68 ± 39% perf-profile.calltrace.cycles-pp.__alloc_pages.folio_alloc.page_cache_ra_unbounded.filemap_fault.__xfs_filemap_fault
55.80 ± 5% -30.1 25.65 ± 39% perf-profile.calltrace.cycles-pp.__alloc_pages_slowpath.__alloc_pages.folio_alloc.page_cache_ra_unbounded.filemap_fault
55.22 ± 5% -29.6 25.59 ± 39% perf-profile.calltrace.cycles-pp.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages.folio_alloc.page_cache_ra_unbounded
20.78 ± 11% -20.8 0.00 perf-profile.calltrace.cycles-pp.filemap_add_folio.page_cache_ra_unbounded.filemap_fault.__xfs_filemap_fault.__do_fault
52.64 ± 5% -19.5 33.18 ± 8% perf-profile.calltrace.cycles-pp.shrink_inactive_list.shrink_lruvec.shrink_node.do_try_to_free_pages.try_to_free_pages
52.68 ± 5% -19.5 33.23 ± 8% perf-profile.calltrace.cycles-pp.shrink_lruvec.shrink_node.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath
18.43 ± 12% -18.4 0.00 perf-profile.calltrace.cycles-pp.folio_add_lru.filemap_add_folio.page_cache_ra_unbounded.filemap_fault.__xfs_filemap_fault
18.39 ± 12% -18.4 0.00 perf-profile.calltrace.cycles-pp.__pagevec_lru_add.folio_add_lru.filemap_add_folio.page_cache_ra_unbounded.filemap_fault
17.74 ± 12% -17.7 0.00 perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.__pagevec_lru_add.folio_add_lru.filemap_add_folio.page_cache_ra_unbounded
55.38 ± 6% -14.3 41.11 ± 16% perf-profile.calltrace.cycles-pp.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages.folio_alloc
56.25 ± 6% -14.3 41.99 ± 17% perf-profile.calltrace.cycles-pp.shrink_node.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages
25.36 ± 10% -12.3 13.07 ± 18% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.shrink_inactive_list.shrink_lruvec.shrink_node
25.22 ± 9% -12.1 13.12 ± 18% perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.shrink_inactive_list.shrink_lruvec.shrink_node.do_try_to_free_pages
88.37 -11.6 76.74 ± 5% perf-profile.calltrace.cycles-pp.filemap_fault.__xfs_filemap_fault.__do_fault.do_fault.__handle_mm_fault
88.37 -11.6 76.75 ± 5% perf-profile.calltrace.cycles-pp.__do_fault.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
88.37 -11.6 76.75 ± 5% perf-profile.calltrace.cycles-pp.__xfs_filemap_fault.__do_fault.do_fault.__handle_mm_fault.handle_mm_fault
90.28 -11.1 79.20 ± 5% perf-profile.calltrace.cycles-pp.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
90.32 -11.0 79.34 ± 5% perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
90.45 -10.9 79.57 ± 5% perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
90.50 -10.8 79.66 ± 4% perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
90.50 -10.8 79.66 ± 4% perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault.do_access
90.56 -10.8 79.74 ± 4% perf-profile.calltrace.cycles-pp.asm_exc_page_fault.do_access
94.36 -8.1 86.27 ± 4% perf-profile.calltrace.cycles-pp.do_access
17.70 ± 12% -5.9 11.76 ± 14% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.__pagevec_lru_add.folio_add_lru
17.73 ± 12% -5.9 11.84 ± 14% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.__pagevec_lru_add.folio_add_lru.filemap_add_folio
5.77 ± 25% -5.8 0.00 perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages.folio_alloc.page_cache_ra_unbounded.filemap_fault
5.51 ± 9% -5.5 0.00 perf-profile.calltrace.cycles-pp.read_pages.page_cache_ra_unbounded.filemap_fault.__xfs_filemap_fault.__do_fault
5.50 ± 8% -5.5 0.00 perf-profile.calltrace.cycles-pp.iomap_readahead.read_pages.page_cache_ra_unbounded.filemap_fault.__xfs_filemap_fault
5.46 ± 27% -5.5 0.00 perf-profile.calltrace.cycles-pp.rmqueue_bulk.get_page_from_freelist.__alloc_pages.folio_alloc.page_cache_ra_unbounded
5.29 ± 9% -5.3 0.00 perf-profile.calltrace.cycles-pp.iomap_readpage_iter.iomap_readahead.read_pages.page_cache_ra_unbounded.filemap_fault
5.12 ± 28% -5.1 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock.rmqueue_bulk.get_page_from_freelist.__alloc_pages.folio_alloc
5.12 ± 28% -5.1 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.rmqueue_bulk.get_page_from_freelist.__alloc_pages
11.37 ± 9% -5.0 6.38 ± 16% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.lru_note_cost.shrink_inactive_list.shrink_lruvec
11.41 ± 9% -5.0 6.44 ± 16% perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.lru_note_cost.shrink_inactive_list.shrink_lruvec.shrink_node
5.37 ± 13% -4.9 0.47 ± 70% perf-profile.calltrace.cycles-pp.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node
11.50 ± 9% -4.9 6.60 ± 16% perf-profile.calltrace.cycles-pp.lru_note_cost.shrink_inactive_list.shrink_lruvec.shrink_node.do_try_to_free_pages
0.76 ± 8% +0.5 1.28 ± 20% perf-profile.calltrace.cycles-pp.filemap_map_pages.xfs_filemap_map_pages.do_fault.__handle_mm_fault.handle_mm_fault
0.79 ± 7% +0.5 1.33 ± 19% perf-profile.calltrace.cycles-pp.xfs_filemap_map_pages.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
1.40 ± 9% +0.5 1.95 ± 8% perf-profile.calltrace.cycles-pp.shrink_lruvec.shrink_node.balance_pgdat.kswapd.kthread
1.40 ± 9% +0.5 1.95 ± 8% perf-profile.calltrace.cycles-pp.shrink_inactive_list.shrink_lruvec.shrink_node.balance_pgdat.kswapd
1.43 ± 9% +0.6 2.00 ± 8% perf-profile.calltrace.cycles-pp.shrink_node.balance_pgdat.kswapd.kthread.ret_from_fork
1.43 ± 9% +0.6 2.00 ± 8% perf-profile.calltrace.cycles-pp.balance_pgdat.kswapd.kthread.ret_from_fork
1.43 ± 9% +0.6 2.02 ± 8% perf-profile.calltrace.cycles-pp.kswapd.kthread.ret_from_fork
0.27 ±100% +0.7 0.96 ± 23% perf-profile.calltrace.cycles-pp.page_add_file_rmap.do_set_pte.filemap_map_pages.xfs_filemap_map_pages.do_fault
0.28 ±100% +0.7 1.00 ± 23% perf-profile.calltrace.cycles-pp.do_set_pte.filemap_map_pages.xfs_filemap_map_pages.do_fault.__handle_mm_fault
0.00 +0.7 0.74 ± 26% perf-profile.calltrace.cycles-pp.__mod_lruvec_page_state.page_add_file_rmap.do_set_pte.filemap_map_pages.xfs_filemap_map_pages
0.56 ± 46% +0.8 1.35 ± 10% perf-profile.calltrace.cycles-pp.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node.balance_pgdat
0.00 +1.0 1.00 ± 25% perf-profile.calltrace.cycles-pp.try_charge_memcg.charge_memcg.__mem_cgroup_charge.__filemap_add_folio.filemap_add_folio
0.00 +1.0 1.04 ± 21% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_slowpath.__alloc_pages
0.00 +1.0 1.04 ± 22% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_slowpath.__alloc_pages.folio_alloc
0.00 +1.1 1.15 ± 26% perf-profile.calltrace.cycles-pp.charge_memcg.__mem_cgroup_charge.__filemap_add_folio.filemap_add_folio.ondemand_readahead
0.00 +1.2 1.17 ± 21% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_slowpath.__alloc_pages.folio_alloc.ondemand_readahead
0.00 +1.2 1.19 ± 58% perf-profile.calltrace.cycles-pp.uncharge_batch.__mem_cgroup_uncharge.free_compound_page.shrink_page_list.shrink_inactive_list
1.76 ± 9% +1.3 3.02 ± 15% perf-profile.calltrace.cycles-pp.ret_from_fork
1.76 ± 9% +1.3 3.02 ± 15% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
0.00 +1.3 1.32 ± 29% perf-profile.calltrace.cycles-pp.__mem_cgroup_charge.__filemap_add_folio.filemap_add_folio.ondemand_readahead.filemap_fault
0.00 +1.4 1.45 ± 54% perf-profile.calltrace.cycles-pp.__mem_cgroup_uncharge.free_compound_page.shrink_page_list.shrink_inactive_list.shrink_lruvec
0.00 +1.5 1.45 ± 54% perf-profile.calltrace.cycles-pp.free_compound_page.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node
0.00 +1.5 1.50 ± 81% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__free_pages_ok.shrink_page_list.shrink_inactive_list
0.00 +1.5 1.53 ± 80% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__free_pages_ok.shrink_page_list.shrink_inactive_list.shrink_lruvec
0.00 +1.6 1.61 ± 77% perf-profile.calltrace.cycles-pp.__free_pages_ok.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node
0.00 +1.7 1.74 ± 26% perf-profile.calltrace.cycles-pp.__filemap_add_folio.filemap_add_folio.ondemand_readahead.filemap_fault.__xfs_filemap_fault
4.68 ± 5% +2.2 6.89 ± 15% perf-profile.calltrace.cycles-pp.do_rw_once
0.00 +3.5 3.49 ± 35% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.memset_erms.iomap_readpage_iter.iomap_readahead.read_pages
0.00 +6.7 6.71 ± 14% perf-profile.calltrace.cycles-pp.memset_erms.iomap_readpage_iter.iomap_readahead.read_pages.filemap_fault
0.00 +9.0 8.97 ± 13% perf-profile.calltrace.cycles-pp.iomap_readpage_iter.iomap_readahead.read_pages.filemap_fault.__xfs_filemap_fault
0.00 +9.1 9.05 ± 32% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages.folio_alloc
0.00 +9.1 9.10 ± 12% perf-profile.calltrace.cycles-pp.iomap_readahead.read_pages.filemap_fault.__xfs_filemap_fault.__do_fault
0.00 +9.1 9.10 ± 12% perf-profile.calltrace.cycles-pp.read_pages.filemap_fault.__xfs_filemap_fault.__do_fault.do_fault
0.00 +9.1 9.10 ± 31% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages.folio_alloc.ondemand_readahead
0.00 +9.7 9.72 ± 31% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages.folio_alloc.ondemand_readahead.filemap_fault
0.00 +11.8 11.84 ± 14% perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.__pagevec_lru_add.folio_add_lru.filemap_add_folio.ondemand_readahead
0.00 +12.4 12.38 ± 14% perf-profile.calltrace.cycles-pp.__pagevec_lru_add.folio_add_lru.filemap_add_folio.ondemand_readahead.filemap_fault
0.00 +12.4 12.39 ± 14% perf-profile.calltrace.cycles-pp.folio_add_lru.filemap_add_folio.ondemand_readahead.filemap_fault.__xfs_filemap_fault
0.00 +14.1 14.13 ± 14% perf-profile.calltrace.cycles-pp.filemap_add_folio.ondemand_readahead.filemap_fault.__xfs_filemap_fault.__do_fault
0.00 +15.5 15.54 ± 44% perf-profile.calltrace.cycles-pp.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages.folio_alloc.ondemand_readahead
0.00 +17.8 17.76 ± 42% perf-profile.calltrace.cycles-pp.__alloc_pages_slowpath.__alloc_pages.folio_alloc.ondemand_readahead.filemap_fault
0.00 +27.5 27.50 ± 36% perf-profile.calltrace.cycles-pp.__alloc_pages.folio_alloc.ondemand_readahead.filemap_fault.__xfs_filemap_fault
0.00 +27.5 27.51 ± 36% perf-profile.calltrace.cycles-pp.folio_alloc.ondemand_readahead.filemap_fault.__xfs_filemap_fault.__do_fault
0.00 +41.7 41.69 ± 23% perf-profile.calltrace.cycles-pp.ondemand_readahead.filemap_fault.__xfs_filemap_fault.__do_fault.do_fault
88.08 -62.1 25.98 ± 39% perf-profile.children.cycles-pp.page_cache_ra_unbounded
54.48 ± 5% -19.0 35.44 ± 8% perf-profile.children.cycles-pp.shrink_inactive_list
54.50 ± 5% -19.0 35.47 ± 8% perf-profile.children.cycles-pp.shrink_lruvec
56.66 ± 6% -14.4 42.30 ± 17% perf-profile.children.cycles-pp.do_try_to_free_pages
56.67 ± 6% -14.4 42.31 ± 17% perf-profile.children.cycles-pp.try_to_free_pages
58.09 ± 6% -13.8 44.30 ± 16% perf-profile.children.cycles-pp.shrink_node
68.98 ± 2% -13.6 55.40 ± 10% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
57.28 ± 6% -12.7 44.62 ± 15% perf-profile.children.cycles-pp.__alloc_pages_slowpath
88.37 -11.5 76.88 ± 5% perf-profile.children.cycles-pp.filemap_fault
88.37 -11.5 76.89 ± 5% perf-profile.children.cycles-pp.__do_fault
88.37 -11.5 76.89 ± 5% perf-profile.children.cycles-pp.__xfs_filemap_fault
90.49 -11.0 79.46 ± 5% perf-profile.children.cycles-pp.__handle_mm_fault
90.30 -10.9 79.36 ± 5% perf-profile.children.cycles-pp.do_fault
90.63 -10.8 79.78 ± 5% perf-profile.children.cycles-pp.handle_mm_fault
90.64 -10.8 79.86 ± 5% perf-profile.children.cycles-pp.do_user_addr_fault
90.64 -10.8 79.88 ± 5% perf-profile.children.cycles-pp.exc_page_fault
90.70 -10.7 79.96 ± 5% perf-profile.children.cycles-pp.asm_exc_page_fault
10.20 ± 18% -8.7 1.50 ± 17% perf-profile.children.cycles-pp._raw_spin_lock
94.68 -7.5 87.16 ± 4% perf-profile.children.cycles-pp.do_access
20.78 ± 11% -6.4 14.34 ± 14% perf-profile.children.cycles-pp.filemap_add_folio
6.28 ± 24% -5.9 0.36 ± 42% perf-profile.children.cycles-pp.rmqueue_bulk
18.47 ± 12% -5.9 12.58 ± 14% perf-profile.children.cycles-pp.folio_add_lru
18.50 ± 12% -5.8 12.65 ± 14% perf-profile.children.cycles-pp.__pagevec_lru_add
17.85 ± 12% -5.6 12.29 ± 14% perf-profile.children.cycles-pp.folio_lruvec_lock_irqsave
12.08 ± 9% -5.1 7.00 ± 16% perf-profile.children.cycles-pp.lru_note_cost
5.56 ± 13% -4.6 0.96 ± 16% perf-profile.children.cycles-pp.__remove_mapping
2.26 ± 8% -2.1 0.20 ± 29% perf-profile.children.cycles-pp.iomap_set_range_uptodate
1.27 ± 13% -1.2 0.12 ± 19% perf-profile.children.cycles-pp.workingset_eviction
0.71 -0.6 0.16 ± 12% perf-profile.children.cycles-pp.__list_del_entry_valid
0.61 ± 12% -0.5 0.13 ± 57% perf-profile.children.cycles-pp.get_mem_cgroup_from_mm
0.96 ± 6% -0.5 0.49 ± 23% perf-profile.children.cycles-pp.free_pcppages_bulk
0.46 ± 10% -0.4 0.04 ± 75% perf-profile.children.cycles-pp.workingset_age_nonresident
0.65 ± 5% -0.4 0.28 ± 12% perf-profile.children.cycles-pp.isolate_lru_pages
0.96 ± 6% -0.3 0.64 ± 14% perf-profile.children.cycles-pp.folio_referenced
0.26 ± 5% -0.2 0.08 ± 12% perf-profile.children.cycles-pp.__free_one_page
0.24 ± 5% -0.1 0.10 ± 9% perf-profile.children.cycles-pp.down_read
0.20 ± 18% -0.1 0.08 ± 24% perf-profile.children.cycles-pp.move_pages_to_lru
0.14 ± 4% -0.1 0.02 ± 99% perf-profile.children.cycles-pp.__might_resched
0.20 ± 6% -0.1 0.08 ± 13% perf-profile.children.cycles-pp.xas_load
0.15 ± 42% -0.1 0.04 ±104% perf-profile.children.cycles-pp.alloc_pages_vma
0.30 ± 11% -0.1 0.21 ± 19% perf-profile.children.cycles-pp.xas_create
0.20 ± 7% -0.1 0.14 ± 23% perf-profile.children.cycles-pp.filemap_unaccount_folio
0.14 ± 4% -0.1 0.08 ± 17% perf-profile.children.cycles-pp.next_uptodate_page
0.08 ± 6% -0.0 0.04 ± 45% perf-profile.children.cycles-pp.__list_add_valid
0.06 ± 13% +0.0 0.08 ± 13% perf-profile.children.cycles-pp.count_shadow_nodes
0.05 ± 7% +0.0 0.09 ± 14% perf-profile.children.cycles-pp.__mod_zone_page_state
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.update_load_avg
0.06 ± 9% +0.1 0.11 ± 22% perf-profile.children.cycles-pp.release_pages
0.00 +0.1 0.06 ± 16% perf-profile.children.cycles-pp.__intel_pmu_enable_all
0.00 +0.1 0.06 ± 14% perf-profile.children.cycles-pp.flush_tlb_func
0.00 +0.1 0.07 ± 37% perf-profile.children.cycles-pp.__sysvec_call_function_single
0.02 ±141% +0.1 0.09 ± 10% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.00 +0.1 0.07 ± 38% perf-profile.children.cycles-pp.sysvec_call_function_single
0.00 +0.1 0.08 ± 16% perf-profile.children.cycles-pp.iomap_releasepage
0.01 ±223% +0.1 0.09 ± 22% perf-profile.children.cycles-pp.irq_exit_rcu
0.00 +0.1 0.08 ± 20% perf-profile.children.cycles-pp.try_to_release_page
0.00 +0.1 0.08 ± 20% perf-profile.children.cycles-pp.filemap_release_folio
0.08 ± 4% +0.1 0.17 ± 15% perf-profile.children.cycles-pp.task_tick_fair
0.00 +0.1 0.09 ± 17% perf-profile.children.cycles-pp.__mod_lruvec_kmem_state
0.18 ± 4% +0.1 0.28 ± 14% perf-profile.children.cycles-pp.__mod_lruvec_state
0.11 ± 6% +0.1 0.22 ± 16% perf-profile.children.cycles-pp.scheduler_tick
0.16 ± 4% +0.1 0.27 ± 14% perf-profile.children.cycles-pp.__mod_node_page_state
0.08 ± 54% +0.1 0.20 ± 44% perf-profile.children.cycles-pp.__softirqentry_text_start
0.00 +0.1 0.13 ± 7% perf-profile.children.cycles-pp.folio_mapcount
0.15 ± 6% +0.1 0.30 ± 17% perf-profile.children.cycles-pp.tick_sched_handle
0.15 ± 4% +0.1 0.30 ± 17% perf-profile.children.cycles-pp.update_process_times
0.16 ± 5% +0.2 0.31 ± 17% perf-profile.children.cycles-pp.tick_sched_timer
0.06 ± 52% +0.2 0.26 ± 52% perf-profile.children.cycles-pp.workingset_update_node
0.05 ± 76% +0.2 0.25 ± 54% perf-profile.children.cycles-pp.list_lru_add
0.22 ± 4% +0.2 0.43 ± 16% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.00 +0.2 0.21 ± 61% perf-profile.children.cycles-pp.pagevec_lru_move_fn
0.46 ± 4% +0.2 0.67 ± 19% perf-profile.children.cycles-pp.xas_store
0.00 +0.2 0.22 ± 47% perf-profile.children.cycles-pp.uncharge_folio
0.00 +0.2 0.22 ± 61% perf-profile.children.cycles-pp.folio_mark_accessed
0.12 ± 10% +0.2 0.34 ± 13% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.00 +0.2 0.24 ± 52% perf-profile.children.cycles-pp.asm_sysvec_call_function_single
0.30 ± 3% +0.2 0.55 ± 16% perf-profile.children.cycles-pp.hrtimer_interrupt
0.31 ± 3% +0.2 0.56 ± 16% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
0.18 ± 7% +0.3 0.46 ± 13% perf-profile.children.cycles-pp.native_irq_return_iret
0.34 ± 4% +0.3 0.65 ± 15% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
0.24 ± 8% +0.3 0.55 ± 25% perf-profile.children.cycles-pp.__count_memcg_events
0.14 ± 37% +0.3 0.47 ± 24% perf-profile.children.cycles-pp.drain_local_pages_wq
0.14 ± 37% +0.3 0.47 ± 24% perf-profile.children.cycles-pp.drain_pages_zone
0.15 ± 36% +0.4 0.51 ± 25% perf-profile.children.cycles-pp.process_one_work
0.15 ± 34% +0.4 0.52 ± 25% perf-profile.children.cycles-pp.worker_thread
0.00 +0.4 0.39 ± 63% perf-profile.children.cycles-pp.unmap_vmas
0.00 +0.4 0.39 ± 63% perf-profile.children.cycles-pp.unmap_page_range
0.00 +0.4 0.39 ± 63% perf-profile.children.cycles-pp.zap_pte_range
0.00 +0.4 0.40 ± 63% perf-profile.children.cycles-pp.munmap
0.00 +0.4 0.41 ± 63% perf-profile.children.cycles-pp.__x64_sys_munmap
0.00 +0.4 0.41 ± 62% perf-profile.children.cycles-pp.__do_munmap
0.00 +0.4 0.41 ± 62% perf-profile.children.cycles-pp.unmap_region
0.00 +0.4 0.41 ± 62% perf-profile.children.cycles-pp.__vm_munmap
0.50 ± 10% +0.5 0.97 ± 23% perf-profile.children.cycles-pp.page_add_file_rmap
0.51 ± 9% +0.5 1.00 ± 23% perf-profile.children.cycles-pp.do_set_pte
0.77 ± 7% +0.5 1.30 ± 19% perf-profile.children.cycles-pp.filemap_map_pages
1.75 ± 2% +0.5 2.28 ± 18% perf-profile.children.cycles-pp.rmap_walk_file
0.79 ± 7% +0.5 1.33 ± 19% perf-profile.children.cycles-pp.xfs_filemap_map_pages
0.24 ± 16% +0.5 0.78 ± 25% perf-profile.children.cycles-pp.page_counter_try_charge
1.43 ± 9% +0.6 2.00 ± 8% perf-profile.children.cycles-pp.balance_pgdat
1.43 ± 9% +0.6 2.02 ± 8% perf-profile.children.cycles-pp.kswapd
0.18 ± 16% +0.6 0.79 ± 39% perf-profile.children.cycles-pp.page_counter_cancel
0.32 ± 14% +0.7 1.01 ± 24% perf-profile.children.cycles-pp.try_charge_memcg
0.08 ± 14% +0.7 0.78 ± 38% perf-profile.children.cycles-pp.propagate_protected_usage
0.00 +0.7 0.72 ± 49% perf-profile.children.cycles-pp.free_transhuge_page
0.87 ± 4% +0.7 1.62 ± 23% perf-profile.children.cycles-pp.try_to_unmap
1.25 ± 9% +0.9 2.12 ± 28% perf-profile.children.cycles-pp.__mod_lruvec_page_state
0.70 ± 4% +0.9 1.60 ± 23% perf-profile.children.cycles-pp.try_to_unmap_one
0.24 ± 7% +1.0 1.21 ± 29% perf-profile.children.cycles-pp.page_remove_rmap
0.21 ± 14% +1.1 1.34 ± 40% perf-profile.children.cycles-pp.page_counter_uncharge
1.80 ± 10% +1.2 3.02 ± 14% perf-profile.children.cycles-pp.ret_from_fork
1.76 ± 9% +1.3 3.02 ± 15% perf-profile.children.cycles-pp.kthread
0.23 ± 11% +1.3 1.57 ± 42% perf-profile.children.cycles-pp.uncharge_batch
0.00 +1.7 1.72 ± 43% perf-profile.children.cycles-pp.free_compound_page
0.00 +1.7 1.75 ± 43% perf-profile.children.cycles-pp.__mem_cgroup_uncharge
0.00 +2.1 2.08 ± 64% perf-profile.children.cycles-pp.__free_pages_ok
0.46 ± 5% +2.1 2.61 ± 28% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
5.50 ± 8% +3.7 9.17 ± 12% perf-profile.children.cycles-pp.iomap_readahead
5.51 ± 9% +3.7 9.18 ± 12% perf-profile.children.cycles-pp.read_pages
5.32 ± 9% +3.7 9.04 ± 13% perf-profile.children.cycles-pp.iomap_readpage_iter
2.98 ± 10% +5.7 8.69 ± 13% perf-profile.children.cycles-pp.memset_erms
17.88 ± 12% +8.1 25.98 ± 16% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.00 +41.7 41.74 ± 23% perf-profile.children.cycles-pp.ondemand_readahead
68.94 ± 2% -13.5 55.40 ± 10% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
2.20 ± 8% -2.1 0.07 ± 21% perf-profile.self.cycles-pp.iomap_set_range_uptodate
0.80 ± 14% -0.7 0.07 ± 16% perf-profile.self.cycles-pp.workingset_eviction
0.70 -0.5 0.16 ± 12% perf-profile.self.cycles-pp.__list_del_entry_valid
0.60 ± 11% -0.5 0.13 ± 57% perf-profile.self.cycles-pp.get_mem_cgroup_from_mm
0.46 ± 10% -0.4 0.04 ± 75% perf-profile.self.cycles-pp.workingset_age_nonresident
0.28 ± 13% -0.2 0.08 ± 75% perf-profile.self.cycles-pp.charge_memcg
0.24 ± 3% -0.2 0.08 ± 11% perf-profile.self.cycles-pp.shrink_page_list
0.19 ± 12% -0.1 0.04 ±118% perf-profile.self.cycles-pp.__mem_cgroup_charge
0.23 ± 5% -0.1 0.09 ± 10% perf-profile.self.cycles-pp._raw_spin_lock
0.22 ± 5% -0.1 0.12 ± 10% perf-profile.self.cycles-pp.isolate_lru_pages
0.16 ± 4% -0.1 0.05 ± 45% perf-profile.self.cycles-pp.xas_load
0.20 ± 3% -0.1 0.10 ± 9% perf-profile.self.cycles-pp.xas_create
0.19 ± 5% -0.1 0.10 ± 21% perf-profile.self.cycles-pp.__pagevec_lru_add
0.17 ± 5% -0.1 0.08 ± 12% perf-profile.self.cycles-pp.down_read
0.30 ± 4% -0.1 0.21 ± 7% perf-profile.self.cycles-pp.page_vma_mapped_walk
0.14 ± 5% -0.1 0.07 ± 14% perf-profile.self.cycles-pp.next_uptodate_page
0.07 -0.0 0.04 ± 45% perf-profile.self.cycles-pp.__list_add_valid
0.05 ± 7% +0.0 0.09 ± 14% perf-profile.self.cycles-pp.__mod_zone_page_state
0.06 ± 6% +0.0 0.09 ± 13% perf-profile.self.cycles-pp.xas_store
0.02 ±141% +0.1 0.07 ± 12% perf-profile.self.cycles-pp.count_shadow_nodes
0.00 +0.1 0.06 ± 16% perf-profile.self.cycles-pp.__intel_pmu_enable_all
0.00 +0.1 0.06 ± 11% perf-profile.self.cycles-pp.page_remove_rmap
0.01 ±223% +0.1 0.08 ± 25% perf-profile.self.cycles-pp.release_pages
0.06 ± 8% +0.1 0.14 ± 14% perf-profile.self.cycles-pp.filemap_map_pages
0.06 ± 9% +0.1 0.16 ± 21% perf-profile.self.cycles-pp.lru_note_cost
0.00 +0.1 0.12 ± 14% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.15 ± 6% +0.1 0.27 ± 14% perf-profile.self.cycles-pp.__mod_node_page_state
0.00 +0.1 0.13 ± 7% perf-profile.self.cycles-pp.folio_mapcount
0.00 +0.1 0.14 ± 46% perf-profile.self.cycles-pp.uncharge_batch
0.00 +0.2 0.15 ± 19% perf-profile.self.cycles-pp.page_add_file_rmap
0.08 ± 12% +0.2 0.22 ± 24% perf-profile.self.cycles-pp.try_charge_memcg
0.21 ± 7% +0.2 0.37 ± 18% perf-profile.self.cycles-pp.__mod_lruvec_page_state
0.00 +0.2 0.22 ± 47% perf-profile.self.cycles-pp.uncharge_folio
0.20 ± 8% +0.3 0.46 ± 27% perf-profile.self.cycles-pp.__count_memcg_events
0.18 ± 7% +0.3 0.46 ± 13% perf-profile.self.cycles-pp.native_irq_return_iret
0.04 ± 44% +0.4 0.40 ± 27% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.19 ± 16% +0.4 0.56 ± 24% perf-profile.self.cycles-pp.page_counter_try_charge
0.18 ± 15% +0.6 0.78 ± 38% perf-profile.self.cycles-pp.page_counter_cancel
0.08 ± 14% +0.7 0.77 ± 38% perf-profile.self.cycles-pp.propagate_protected_usage
3.37 ± 8% +2.8 6.19 ± 16% perf-profile.self.cycles-pp.do_access
2.95 ± 9% +5.5 8.47 ± 13% perf-profile.self.cycles-pp.memset_erms
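The %change column in the tables above appears to follow the usual (after − before) / before convention. A minimal sketch, using values from the numa-vmstat and netperf rows in this report (the helper name `pct_change` is ours, not part of lkp):

```python
def pct_change(before, after):
    """Relative change in percent, matching the %change column in lkp reports."""
    return (after - before) / before * 100.0

# numa-vmstat.node0.numa_hit: 2.275e+08 -> 11245235, reported as -95.1%
print(round(pct_change(2.275e8, 11245235), 1))  # -95.1

# netperf.Throughput_Mbps: 120956 -> 99177, reported as -18.0%
print(round(pct_change(120956, 99177), 1))  # -18.0
```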
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://01.org/lkp
[mm/page_alloc] f26b3fa046: netperf.Throughput_Mbps -18.0% regression
by kernel test robot
(Please note: we previously reported
"[mm/page_alloc] 39907a939a: netperf.Throughput_Mbps -18.1% regression"
on
https://lore.kernel.org/all/[email protected]/
while the commit was on a development branch.
We still observe a similar regression now that the commit is on mainline, and we also
observe a 13.2% improvement on another netperf subtest,
so we are reporting again for information.)
Greeting,
FYI, we noticed a -18.0% regression of netperf.Throughput_Mbps due to commit:
commit: f26b3fa046116a7dedcaafe30083402113941451 ("mm/page_alloc: limit number of high-order pages on PCP during bulk free")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: netperf
on test machine: 128 threads 2 sockets Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz with 128G memory
with following parameters:
ip: ipv4
runtime: 300s
nr_threads: 1
cluster: cs-localhost
test: UDP_STREAM
cpufreq_governor: performance
ucode: 0xd000331
test-description: Netperf is a benchmark that can be used to measure various aspects of networking performance.
test-url: http://www.netperf.org/netperf/
In addition to that, the commit also has significant impact on the following tests:
+------------------+-------------------------------------------------------------------------------------+
| testcase: change | netperf: netperf.Throughput_Mbps 13.2% improvement |
| test machine | 128 threads 2 sockets Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz with 128G memory |
| test parameters | cluster=cs-localhost |
| | cpufreq_governor=performance |
| | ip=ipv4 |
| | nr_threads=25% |
| | runtime=300s |
| | send_size=10K |
| | test=SCTP_STREAM_MANY |
| | ucode=0xd000331 |
+------------------+-------------------------------------------------------------------------------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <oliver.sang(a)intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# If you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp directories to run from a clean state.
=========================================================================================
cluster/compiler/cpufreq_governor/ip/kconfig/nr_threads/rootfs/runtime/tbox_group/test/testcase/ucode:
cs-localhost/gcc-11/performance/ipv4/x86_64-rhel-8.3/1/debian-10.4-x86_64-20200603.cgz/300s/lkp-icl-2sp4/UDP_STREAM/netperf/0xd000331
commit:
8b10b465d0 ("mm/page_alloc: free pages in a single pass during bulk free")
f26b3fa046 ("mm/page_alloc: limit number of high-order pages on PCP during bulk free")
8b10b465d0e18b00 f26b3fa046116a7dedcaafe3008
---------------- ---------------------------
%stddev %change %stddev
\ | \
120956 ± 2% -18.0% 99177 netperf.Throughput_Mbps
120956 ± 2% -18.0% 99177 netperf.Throughput_total_Mbps
90.83 -2.0% 89.00 netperf.time.percent_of_cpu_this_job_got
69242552 ± 2% -18.0% 56775058 netperf.workload
29460 ± 2% +25.7% 37044 meminfo.Shmem
96933 ±198% +9094.3% 8912386 ± 7% turbostat.POLL
1746 ± 2% +6694.6% 118678 ± 3% vmstat.system.cs
293357 ± 7% -21.2% 231238 ± 17% sched_debug.cfs_rq:/.min_vruntime.max
269394 ± 8% -23.6% 205870 ± 17% sched_debug.cfs_rq:/.spread0.max
239945 ± 64% -99.5% 1108 ± 2% sched_debug.cpu.avg_idle.min
122694 ± 18% +26.2% 154895 ± 6% sched_debug.cpu.avg_idle.stddev
4705 ± 2% +2916.4% 141948 ± 3% sched_debug.cpu.nr_switches.avg
65447 ± 3% +9997.6% 6608655 ± 13% sched_debug.cpu.nr_switches.max
8178 ± 3% +9737.7% 804544 ± 5% sched_debug.cpu.nr_switches.stddev
250093 ± 8% +15.0% 287675 ± 7% perf-stat.i.cache-misses
1674 ± 2% +7043.4% 119598 ± 3% perf-stat.i.context-switches
3127 +1.8% 3183 perf-stat.i.minor-faults
7495 ± 24% +76.9% 13260 ± 6% perf-stat.i.node-loads
3128 +1.8% 3184 perf-stat.i.page-faults
0.05 ± 7% +0.0 0.06 ± 11% perf-stat.overall.cache-miss-rate%
45827 ± 6% -13.7% 39529 ± 10% perf-stat.overall.cycles-between-cache-misses
87.75 ± 3% -7.5 80.29 ± 2% perf-stat.overall.node-load-miss-rate%
18242 ± 5% +22.8% 22395 ± 2% perf-stat.overall.path-length
249180 ± 8% +15.0% 286678 ± 7% perf-stat.ps.cache-misses
1668 ± 2% +7044.3% 119200 ± 3% perf-stat.ps.context-switches
3114 +1.8% 3170 perf-stat.ps.minor-faults
7465 ± 24% +77.0% 13213 ± 6% perf-stat.ps.node-loads
3115 +1.8% 3171 perf-stat.ps.page-faults
2640 ± 3% -3.7% 2541 proc-vmstat.nr_active_anon
71813 +2.8% 73854 proc-vmstat.nr_inactive_anon
9669 +2.7% 9930 proc-vmstat.nr_mapped
7368 ± 2% +25.7% 9262 proc-vmstat.nr_shmem
2640 ± 3% -3.7% 2541 proc-vmstat.nr_zone_active_anon
71813 +2.8% 73854 proc-vmstat.nr_zone_inactive_anon
419.83 ±190% +1461.8% 6556 ± 15% proc-vmstat.numa_hint_faults
380.83 ±212% +1374.0% 5613 ± 3% proc-vmstat.numa_hint_faults_local
1.336e+08 -13.8% 1.152e+08 ± 2% proc-vmstat.numa_hit
1.337e+08 -13.6% 1.156e+08 ± 2% proc-vmstat.numa_local
8502 ± 97% +311.4% 34976 ± 9% proc-vmstat.numa_pte_updates
7931 +1121.7% 96900 ± 4% proc-vmstat.pgactivate
1.33e+08 -14.0% 1.144e+08 proc-vmstat.pgalloc_normal
1060109 +1.2% 1073035 proc-vmstat.pgfault
1.33e+08 -14.0% 1.144e+08 proc-vmstat.pgfree
1.26 ± 19% +0.6 1.81 ± 17% perf-profile.calltrace.cycles-pp.udp_unicast_rcv_skb.__udp4_lib_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish.__netif_receive_skb_one_core
1.18 ± 19% +0.6 1.79 ± 17% perf-profile.calltrace.cycles-pp.udp_queue_rcv_one_skb.udp_unicast_rcv_skb.__udp4_lib_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish
0.00 +0.7 0.69 ± 21% perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.sock_def_readable
0.00 +0.7 0.70 ± 21% perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.sock_def_readable.__udp_enqueue_schedule_skb
0.00 +0.7 0.71 ± 16% perf-profile.calltrace.cycles-pp.free_pcppages_bulk.free_unref_page.skb_release_data.__consume_stateless_skb.udp_recvmsg
0.00 +0.8 0.82 ± 22% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.sock_def_readable.__udp_enqueue_schedule_skb.udp_queue_rcv_one_skb
0.00 +0.8 0.83 ± 23% perf-profile.calltrace.cycles-pp.__schedule.schedule.schedule_timeout.__skb_wait_for_more_packets.__skb_recv_udp
0.00 +0.8 0.84 ± 22% perf-profile.calltrace.cycles-pp.__schedule.schedule_idle.do_idle.cpu_startup_entry.secondary_startup_64_no_verify
0.00 +0.9 0.86 ± 23% perf-profile.calltrace.cycles-pp.schedule.schedule_timeout.__skb_wait_for_more_packets.__skb_recv_udp.udp_recvmsg
0.00 +0.9 0.87 ± 22% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.sock_def_readable.__udp_enqueue_schedule_skb.udp_queue_rcv_one_skb.udp_unicast_rcv_skb
0.00 +0.9 0.87 ± 22% perf-profile.calltrace.cycles-pp.schedule_idle.do_idle.cpu_startup_entry.secondary_startup_64_no_verify
0.00 +0.9 0.88 ± 23% perf-profile.calltrace.cycles-pp.schedule_timeout.__skb_wait_for_more_packets.__skb_recv_udp.udp_recvmsg.inet_recvmsg
0.00 +1.0 0.97 ± 24% perf-profile.calltrace.cycles-pp.sock_def_readable.__udp_enqueue_schedule_skb.udp_queue_rcv_one_skb.udp_unicast_rcv_skb.__udp4_lib_rcv
0.20 ±142% +1.1 1.33 ± 19% perf-profile.calltrace.cycles-pp.__udp_enqueue_schedule_skb.udp_queue_rcv_one_skb.udp_unicast_rcv_skb.__udp4_lib_rcv.ip_protocol_deliver_rcu
0.00 +1.2 1.19 ± 21% perf-profile.calltrace.cycles-pp.__skb_wait_for_more_packets.__skb_recv_udp.udp_recvmsg.inet_recvmsg.__sys_recvfrom
0.42 ± 71% +1.6 2.06 ± 20% perf-profile.calltrace.cycles-pp.__skb_recv_udp.udp_recvmsg.inet_recvmsg.__sys_recvfrom.__x64_sys_recvfrom
0.41 ± 19% -0.3 0.15 ± 21% perf-profile.children.cycles-pp.udp_rmem_release
0.65 ± 16% -0.2 0.45 ± 14% perf-profile.children.cycles-pp.kfree
0.44 ± 14% -0.2 0.26 ± 15% perf-profile.children.cycles-pp.__slab_free
0.58 ± 8% -0.2 0.42 ± 18% perf-profile.children.cycles-pp.free_pcp_prepare
0.17 ± 13% -0.1 0.07 ± 15% perf-profile.children.cycles-pp.free_unref_page_commit
0.24 ± 5% -0.1 0.18 ± 19% perf-profile.children.cycles-pp.kmem_cache_free
0.21 ± 14% -0.1 0.16 ± 17% perf-profile.children.cycles-pp.send_data
0.10 ± 15% +0.0 0.15 ± 8% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
0.00 +0.1 0.06 ± 13% perf-profile.children.cycles-pp.finish_wait
0.00 +0.1 0.06 ± 13% perf-profile.children.cycles-pp.__update_load_avg_se
0.00 +0.1 0.06 ± 23% perf-profile.children.cycles-pp.ttwu_do_wakeup
0.00 +0.1 0.06 ± 11% perf-profile.children.cycles-pp.switch_mm_irqs_off
0.02 ±141% +0.1 0.08 ± 10% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.10 ± 4% +0.1 0.16 ± 19% perf-profile.children.cycles-pp.__list_add_valid
0.00 +0.1 0.08 ± 14% perf-profile.children.cycles-pp.flush_smp_call_function_queue
0.00 +0.1 0.08 ± 37% perf-profile.children.cycles-pp.nohz_run_idle_balance
0.01 ±223% +0.1 0.09 ± 21% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.21 ± 11% +0.1 0.29 ± 16% perf-profile.children.cycles-pp.skb_set_owner_w
0.08 ± 11% +0.1 0.17 ± 23% perf-profile.children.cycles-pp.memcg_slab_free_hook
0.00 +0.1 0.08 ± 36% perf-profile.children.cycles-pp.tick_nohz_idle_enter
0.00 +0.1 0.08 ± 19% perf-profile.children.cycles-pp.prepare_task_switch
0.00 +0.1 0.10 ± 34% perf-profile.children.cycles-pp.prepare_to_wait_exclusive
0.07 ± 48% +0.1 0.20 ± 20% perf-profile.children.cycles-pp._raw_spin_lock_bh
0.73 ± 6% +0.1 0.86 ± 9% perf-profile.children.cycles-pp._raw_spin_lock
0.00 +0.1 0.14 ± 34% perf-profile.children.cycles-pp.set_next_entity
0.07 ± 18% +0.2 0.22 ± 26% perf-profile.children.cycles-pp.__zone_watermark_ok
0.00 +0.2 0.17 ± 11% perf-profile.children.cycles-pp.enqueue_entity
0.00 +0.2 0.17 ± 29% perf-profile.children.cycles-pp.update_load_avg
0.00 +0.2 0.18 ± 21% perf-profile.children.cycles-pp.__switch_to
0.00 +0.2 0.19 ± 22% perf-profile.children.cycles-pp.sched_ttwu_pending
0.00 +0.2 0.19 ± 21% perf-profile.children.cycles-pp.ttwu_queue_wakelist
0.27 ± 34% +0.2 0.47 ± 15% perf-profile.children.cycles-pp.update_rq_clock
0.00 +0.2 0.19 ± 20% perf-profile.children.cycles-pp.update_curr
0.00 +0.2 0.21 ± 7% perf-profile.children.cycles-pp.enqueue_task_fair
0.00 +0.2 0.22 ± 6% perf-profile.children.cycles-pp.ttwu_do_activate
0.00 +0.2 0.24 ± 25% perf-profile.children.cycles-pp.pick_next_task_fair
0.28 ± 14% +0.2 0.52 ± 8% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.00 +0.3 0.26 ± 17% perf-profile.children.cycles-pp.__sysvec_call_function_single
0.00 +0.3 0.32 ± 17% perf-profile.children.cycles-pp.sysvec_call_function_single
0.39 ± 14% +0.3 0.72 ± 16% perf-profile.children.cycles-pp.free_pcppages_bulk
0.00 +0.3 0.33 ± 26% perf-profile.children.cycles-pp.dequeue_entity
0.00 +0.4 0.37 ± 24% perf-profile.children.cycles-pp.dequeue_task_fair
0.00 +0.4 0.41 ± 25% perf-profile.children.cycles-pp.finish_task_switch
0.00 +0.5 0.50 ± 19% perf-profile.children.cycles-pp.asm_sysvec_call_function_single
1.26 ± 19% +0.6 1.81 ± 17% perf-profile.children.cycles-pp.udp_unicast_rcv_skb
1.19 ± 19% +0.6 1.80 ± 17% perf-profile.children.cycles-pp.udp_queue_rcv_one_skb
0.00 +0.7 0.71 ± 21% perf-profile.children.cycles-pp.autoremove_wake_function
0.00 +0.7 0.71 ± 20% perf-profile.children.cycles-pp.try_to_wake_up
0.00 +0.8 0.83 ± 21% perf-profile.children.cycles-pp.__wake_up_common
0.49 ± 17% +0.8 1.34 ± 19% perf-profile.children.cycles-pp.__udp_enqueue_schedule_skb
0.00 +0.9 0.87 ± 22% perf-profile.children.cycles-pp.schedule_idle
0.00 +0.9 0.88 ± 22% perf-profile.children.cycles-pp.__wake_up_common_lock
0.00 +0.9 0.89 ± 23% perf-profile.children.cycles-pp.schedule_timeout
0.01 ±223% +0.9 0.90 ± 22% perf-profile.children.cycles-pp.schedule
0.03 ±100% +0.9 0.97 ± 24% perf-profile.children.cycles-pp.sock_def_readable
0.00 +1.2 1.19 ± 21% perf-profile.children.cycles-pp.__skb_wait_for_more_packets
0.57 ± 19% +1.5 2.08 ± 20% perf-profile.children.cycles-pp.__skb_recv_udp
0.05 ± 47% +1.7 1.73 ± 22% perf-profile.children.cycles-pp.__schedule
0.24 ± 21% -0.2 0.03 ±100% perf-profile.self.cycles-pp.udp_rmem_release
0.44 ± 14% -0.2 0.26 ± 16% perf-profile.self.cycles-pp.__slab_free
0.58 ± 8% -0.2 0.42 ± 18% perf-profile.self.cycles-pp.free_pcp_prepare
0.29 ± 19% -0.1 0.16 ± 10% perf-profile.self.cycles-pp.kfree
0.28 ± 17% -0.1 0.18 ± 18% perf-profile.self.cycles-pp.udp_recvmsg
0.13 ± 13% -0.1 0.05 ± 45% perf-profile.self.cycles-pp.free_unref_page_commit
0.24 ± 21% -0.1 0.17 ± 17% perf-profile.self.cycles-pp.send_omni_inner
0.08 ± 10% -0.0 0.02 ± 99% perf-profile.self.cycles-pp.kmem_cache_free
0.10 ± 19% -0.0 0.06 ± 52% perf-profile.self.cycles-pp.__dev_queue_xmit
0.12 ± 15% -0.0 0.09 ± 18% perf-profile.self.cycles-pp.__cgroup_bpf_run_filter_skb
0.06 ± 9% +0.1 0.12 ± 18% perf-profile.self.cycles-pp.free_pcppages_bulk
0.00 +0.1 0.06 ± 14% perf-profile.self.cycles-pp.flush_smp_call_function_queue
0.03 ±100% +0.1 0.10 ± 42% perf-profile.self.cycles-pp.sock_def_readable
0.08 ± 8% +0.1 0.15 ± 18% perf-profile.self.cycles-pp.__list_add_valid
0.00 +0.1 0.08 ± 22% perf-profile.self.cycles-pp.update_curr
0.00 +0.1 0.08 ± 20% perf-profile.self.cycles-pp.enqueue_entity
0.01 ±223% +0.1 0.09 ± 20% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.21 ± 11% +0.1 0.29 ± 16% perf-profile.self.cycles-pp.skb_set_owner_w
0.08 ± 12% +0.1 0.16 ± 22% perf-profile.self.cycles-pp.memcg_slab_free_hook
0.00 +0.1 0.09 ± 33% perf-profile.self.cycles-pp.set_next_entity
0.00 +0.1 0.12 ± 31% perf-profile.self.cycles-pp.__wake_up_common
0.07 ± 50% +0.1 0.19 ± 21% perf-profile.self.cycles-pp._raw_spin_lock_bh
0.00 +0.1 0.12 ± 16% perf-profile.self.cycles-pp.try_to_wake_up
0.04 ±101% +0.1 0.17 ± 11% perf-profile.self.cycles-pp.update_rq_clock
0.00 +0.2 0.15 ± 17% perf-profile.self.cycles-pp.__skb_wait_for_more_packets
0.00 +0.2 0.15 ± 26% perf-profile.self.cycles-pp.finish_task_switch
0.17 ± 14% +0.2 0.33 ± 24% perf-profile.self.cycles-pp.skb_release_data
0.04 ± 72% +0.2 0.22 ± 26% perf-profile.self.cycles-pp.__zone_watermark_ok
0.00 +0.2 0.18 ± 21% perf-profile.self.cycles-pp.__switch_to
0.26 ± 16% +0.2 0.48 ± 8% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.00 +0.3 0.27 ± 23% perf-profile.self.cycles-pp.__schedule
0.04 ± 71% +0.4 0.45 ± 21% perf-profile.self.cycles-pp.__skb_recv_udp
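For reference, the %change column in the comparison tables above is the relative delta between the two commits' mean values, rounded to one decimal place. A minimal sketch (the function name is illustrative, not part of lkp):

```python
def pct_change(old: float, new: float) -> float:
    """Relative change between two means, as printed in the %change column."""
    return round((new - old) / old * 100, 1)

# Throughput_Mbps row above: 120956 -> 99177
print(pct_change(120956, 99177))  # -18.0
```

Rows whose displayed means are themselves rounded per-second averages (e.g. context-switches) may differ from this recomputation by a fraction of a percent.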
***************************************************************************************************
lkp-icl-2sp4: 128 threads 2 sockets Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz with 128G memory
=========================================================================================
cluster/compiler/cpufreq_governor/ip/kconfig/nr_threads/rootfs/runtime/send_size/tbox_group/test/testcase/ucode:
cs-localhost/gcc-11/performance/ipv4/x86_64-rhel-8.3/25%/debian-10.4-x86_64-20200603.cgz/300s/10K/lkp-icl-2sp4/SCTP_STREAM_MANY/netperf/0xd000331
commit:
8b10b465d0 ("mm/page_alloc: free pages in a single pass during bulk free")
f26b3fa046 ("mm/page_alloc: limit number of high-order pages on PCP during bulk free")
8b10b465d0e18b00 f26b3fa046116a7dedcaafe3008
---------------- ---------------------------
%stddev %change %stddev
\ | \
14785 ± 2% +13.2% 16740 netperf.Throughput_Mbps
473143 ± 2% +13.2% 535690 netperf.Throughput_total_Mbps
17542 ± 6% +24.5% 21835 ± 2% netperf.time.involuntary_context_switches
1342 ± 3% +19.2% 1600 netperf.time.percent_of_cpu_this_job_got
3935 ± 3% +19.4% 4698 netperf.time.system_time
110.45 ± 3% +15.2% 127.26 netperf.time.user_time
199875 ± 5% -36.6% 126767 ± 9% netperf.time.voluntary_context_switches
1.733e+09 ± 2% +13.2% 1.962e+09 netperf.workload
3.48 ± 3% +0.5 3.94 mpstat.cpu.all.soft%
16.79 ± 3% +3.4 20.23 mpstat.cpu.all.sys%
0.68 ± 3% +0.1 0.78 mpstat.cpu.all.usr%
27.83 ± 5% +22.8% 34.17 ± 3% vmstat.procs.r
3208349 ± 2% +12.1% 3596993 vmstat.system.cs
263494 +3.6% 273026 vmstat.system.in
1.101e+09 ± 6% +16.4% 1.282e+09 ± 3% numa-numastat.node0.local_node
1.1e+09 ± 6% +16.4% 1.28e+09 ± 3% numa-numastat.node0.numa_hit
1.151e+09 ± 4% +10.2% 1.269e+09 ± 2% numa-numastat.node1.local_node
1.149e+09 ± 4% +10.2% 1.265e+09 ± 2% numa-numastat.node1.numa_hit
1.1e+09 ± 6% +16.4% 1.28e+09 ± 3% numa-vmstat.node0.numa_hit
1.101e+09 ± 6% +16.4% 1.282e+09 ± 3% numa-vmstat.node0.numa_local
1.149e+09 ± 4% +10.2% 1.265e+09 ± 2% numa-vmstat.node1.numa_hit
1.151e+09 ± 4% +10.2% 1.269e+09 ± 2% numa-vmstat.node1.numa_local
953763 ± 18% +33.6% 1273973 ± 8% meminfo.Active
953603 ± 18% +33.6% 1273684 ± 8% meminfo.Active(anon)
1450710 ± 13% +23.9% 1797564 ± 6% meminfo.Committed_AS
484102 ± 18% +32.7% 642218 ± 9% meminfo.Mapped
983413 ± 18% +34.8% 1326115 ± 8% meminfo.Shmem
812.50 ± 2% +16.5% 946.17 turbostat.Avg_MHz
24.64 ± 2% +4.1 28.73 turbostat.Busy%
4.704e+08 ± 2% +11.0% 5.219e+08 turbostat.C1
5.57 ± 2% +0.7 6.26 turbostat.C1%
0.37 ± 10% -16.1% 0.31 turbostat.IPC
1004055 ± 2% +11.4% 1118247 turbostat.POLL
0.02 +0.0 0.03 turbostat.POLL%
416.33 ± 4% +7.5% 447.50 turbostat.PkgWatt
238335 ± 18% +33.4% 317828 ± 8% proc-vmstat.nr_active_anon
811128 ± 5% +10.5% 896520 ± 3% proc-vmstat.nr_file_pages
80808 ± 2% +7.4% 86814 ± 3% proc-vmstat.nr_inactive_anon
120937 ± 19% +34.1% 162233 ± 9% proc-vmstat.nr_mapped
1938 ± 2% +4.7% 2029 ± 2% proc-vmstat.nr_page_table_pages
245826 ± 18% +34.6% 330998 ± 8% proc-vmstat.nr_shmem
238335 ± 18% +33.4% 317828 ± 8% proc-vmstat.nr_zone_active_anon
80808 ± 2% +7.4% 86814 ± 3% proc-vmstat.nr_zone_inactive_anon
2.248e+09 ± 2% +13.2% 2.545e+09 proc-vmstat.numa_hit
2.253e+09 ± 2% +13.2% 2.551e+09 proc-vmstat.numa_local
260577 ± 15% +33.6% 348260 ± 11% proc-vmstat.pgactivate
5.944e+09 ± 2% +13.2% 6.73e+09 proc-vmstat.pgalloc_normal
1579108 ± 2% +3.8% 1638994 proc-vmstat.pgfault
5.944e+09 ± 2% +13.2% 6.73e+09 proc-vmstat.pgfree
850785 ± 19% +64.9% 1403095 ± 8% sched_debug.cfs_rq:/.MIN_vruntime.max
110144 ± 17% +78.2% 196314 ± 20% sched_debug.cfs_rq:/.MIN_vruntime.stddev
0.26 ± 15% +25.6% 0.33 ± 12% sched_debug.cfs_rq:/.h_nr_running.avg
36930 ± 6% -19.2% 29847 ± 4% sched_debug.cfs_rq:/.load.max
13805 ± 8% -14.6% 11792 ± 3% sched_debug.cfs_rq:/.load.stddev
850785 ± 19% +64.9% 1403095 ± 8% sched_debug.cfs_rq:/.max_vruntime.max
110144 ± 17% +78.2% 196314 ± 20% sched_debug.cfs_rq:/.max_vruntime.stddev
803157 ± 9% +44.1% 1157345 ± 10% sched_debug.cfs_rq:/.min_vruntime.avg
1328522 ± 10% +31.4% 1746141 ± 9% sched_debug.cfs_rq:/.min_vruntime.max
349319 ± 17% +85.6% 648499 ± 17% sched_debug.cfs_rq:/.min_vruntime.min
209093 ± 8% +17.5% 245777 ± 8% sched_debug.cfs_rq:/.min_vruntime.stddev
279.98 ± 11% +24.1% 347.54 ± 7% sched_debug.cfs_rq:/.runnable_avg.avg
209084 ± 8% +17.5% 245769 ± 8% sched_debug.cfs_rq:/.spread0.stddev
279.78 ± 11% +24.2% 347.36 ± 7% sched_debug.cfs_rq:/.util_avg.avg
183.66 ± 15% +29.2% 237.36 ± 10% sched_debug.cfs_rq:/.util_est_enqueued.avg
1276 ± 11% +19.6% 1526 ± 5% sched_debug.cpu.curr->pid.avg
0.21 ± 10% +18.4% 0.25 ± 4% sched_debug.cpu.nr_running.avg
26.69 -0.8% 26.49 perf-stat.i.MPKI
1.96e+10 ± 2% +13.0% 2.215e+10 perf-stat.i.branch-instructions
1.257e+08 ± 2% +13.2% 1.423e+08 ± 2% perf-stat.i.branch-misses
2.672e+09 ± 2% +12.2% 2.997e+09 perf-stat.i.cache-references
3236739 ± 2% +12.2% 3630735 perf-stat.i.context-switches
1.10 +2.9% 1.13 perf-stat.i.cpi
1.099e+11 ± 2% +16.3% 1.278e+11 perf-stat.i.cpu-cycles
216.09 ± 3% +9.1% 235.65 ± 2% perf-stat.i.cpu-migrations
2.893e+10 ± 2% +13.1% 3.271e+10 perf-stat.i.dTLB-loads
0.01 ± 7% -0.0 0.00 ± 38% perf-stat.i.dTLB-store-miss-rate%
1218982 ± 5% -66.4% 409240 ± 39% perf-stat.i.dTLB-store-misses
1.715e+10 ± 2% +13.1% 1.939e+10 perf-stat.i.dTLB-stores
1.002e+11 ± 2% +13.0% 1.132e+11 perf-stat.i.instructions
0.91 -2.7% 0.89 perf-stat.i.ipc
0.86 ± 2% +16.3% 1.00 perf-stat.i.metric.GHz
533.97 ± 2% +13.0% 603.48 perf-stat.i.metric.M/sec
4845 ± 2% +4.4% 5058 perf-stat.i.minor-faults
106011 ± 17% -43.2% 60257 ± 28% perf-stat.i.node-loads
56.37 ± 10% +12.1 68.44 ± 7% perf-stat.i.node-store-miss-rate%
1300772 ± 13% -31.9% 886088 ± 31% perf-stat.i.node-stores
4846 ± 2% +4.4% 5059 perf-stat.i.page-faults
26.68 -0.8% 26.47 perf-stat.overall.MPKI
1.10 +2.9% 1.13 perf-stat.overall.cpi
0.01 ± 6% -0.0 0.00 ± 39% perf-stat.overall.dTLB-store-miss-rate%
0.91 -2.8% 0.89 perf-stat.overall.ipc
1.953e+10 ± 2% +13.0% 2.207e+10 perf-stat.ps.branch-instructions
1.252e+08 ± 2% +13.2% 1.418e+08 ± 2% perf-stat.ps.branch-misses
2.662e+09 ± 2% +12.2% 2.986e+09 perf-stat.ps.cache-references
3224941 ± 2% +12.2% 3617930 perf-stat.ps.context-switches
1.095e+11 ± 2% +16.3% 1.273e+11 perf-stat.ps.cpu-cycles
215.47 ± 3% +9.1% 235.06 ± 2% perf-stat.ps.cpu-migrations
2.882e+10 ± 2% +13.1% 3.259e+10 perf-stat.ps.dTLB-loads
1214485 ± 5% -66.4% 407878 ± 39% perf-stat.ps.dTLB-store-misses
1.709e+10 ± 2% +13.1% 1.932e+10 perf-stat.ps.dTLB-stores
9.982e+10 ± 2% +13.0% 1.128e+11 perf-stat.ps.instructions
4823 ± 3% +4.4% 5034 perf-stat.ps.minor-faults
105655 ± 17% -43.2% 59979 ± 28% perf-stat.ps.node-loads
1296468 ± 13% -31.9% 882954 ± 31% perf-stat.ps.node-stores
4824 ± 3% +4.4% 5035 perf-stat.ps.page-faults
3.017e+13 ± 2% +13.1% 3.411e+13 perf-stat.total.instructions
22.12 ± 7% -3.1 19.06 ± 4% perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
18.84 ± 9% -3.0 15.83 ± 5% perf-profile.calltrace.cycles-pp.cpuidle_idle_call.do_idle.cpu_startup_entry.secondary_startup_64_no_verify
21.94 ± 7% -3.0 18.95 ± 4% perf-profile.calltrace.cycles-pp.cpu_startup_entry.secondary_startup_64_no_verify
21.86 ± 7% -3.0 18.88 ± 4% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.secondary_startup_64_no_verify
17.27 ± 8% -2.8 14.47 ± 5% perf-profile.calltrace.cycles-pp.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry.secondary_startup_64_no_verify
17.04 ± 8% -2.7 14.32 ± 5% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry
14.40 ± 5% -1.8 12.57 ± 3% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
14.23 ± 5% -1.8 12.40 ± 3% perf-profile.calltrace.cycles-pp.mwait_idle_with_hints.intel_idle.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call
1.80 ± 37% -0.8 1.02 ± 31% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
1.59 ± 37% -0.7 0.89 ± 32% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call
0.55 ± 3% -0.3 0.26 ±100% perf-profile.calltrace.cycles-pp.sctp_chunkify._sctp_make_chunk.sctp_make_datafrag_empty.sctp_datamsg_from_user.sctp_sendmsg_to_asoc
0.66 ± 12% +0.2 0.90 ± 12% perf-profile.calltrace.cycles-pp.__free_pages_ok.skb_release_data.kfree_skb_reason.sctp_recvmsg.inet_recvmsg
4.04 ± 3% +0.3 4.38 ± 2% perf-profile.calltrace.cycles-pp.sctp_outq_sack.sctp_cmd_interpreter.sctp_do_sm.sctp_assoc_bh_rcv.sctp_backlog_rcv
3.98 ± 3% +0.3 4.32 ± 3% perf-profile.calltrace.cycles-pp.sctp_make_datafrag_empty.sctp_datamsg_from_user.sctp_sendmsg_to_asoc.sctp_sendmsg.sock_sendmsg
0.88 ± 7% +0.4 1.24 ± 6% perf-profile.calltrace.cycles-pp.free_unref_page.skb_release_data.consume_skb.sctp_chunk_put.sctp_outq_sack
0.78 ± 6% +0.4 1.13 ± 4% perf-profile.calltrace.cycles-pp.kmem_cache_free.sctp_recvmsg.inet_recvmsg.____sys_recvmsg.___sys_recvmsg
3.34 ± 3% +0.4 3.70 ± 3% perf-profile.calltrace.cycles-pp._sctp_make_chunk.sctp_make_datafrag_empty.sctp_datamsg_from_user.sctp_sendmsg_to_asoc.sctp_sendmsg
2.11 ± 3% +0.4 2.49 ± 3% perf-profile.calltrace.cycles-pp.consume_skb.sctp_chunk_put.sctp_outq_sack.sctp_cmd_interpreter.sctp_do_sm
2.71 ± 3% +0.4 3.09 ± 3% perf-profile.calltrace.cycles-pp.sctp_chunk_put.sctp_outq_sack.sctp_cmd_interpreter.sctp_do_sm.sctp_assoc_bh_rcv
1.36 ± 5% +0.4 1.75 ± 4% perf-profile.calltrace.cycles-pp.skb_release_data.consume_skb.sctp_chunk_put.sctp_outq_sack.sctp_cmd_interpreter
0.47 ± 45% +0.4 0.87 ± 4% perf-profile.calltrace.cycles-pp.__slab_free.kmem_cache_free.sctp_recvmsg.inet_recvmsg.____sys_recvmsg
1.54 ± 6% +0.4 1.94 ± 6% perf-profile.calltrace.cycles-pp.kmalloc_reserve.__alloc_skb.sctp_packet_transmit.sctp_outq_flush.sctp_cmd_interpreter
1.52 ± 6% +0.4 1.92 ± 7% perf-profile.calltrace.cycles-pp.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb.sctp_packet_transmit.sctp_outq_flush
1.88 ± 5% +0.4 2.28 ± 5% perf-profile.calltrace.cycles-pp.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb._sctp_make_chunk.sctp_make_datafrag_empty
1.50 ± 6% +0.4 1.90 ± 7% perf-profile.calltrace.cycles-pp.kmalloc_large_node.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb.sctp_packet_transmit
1.82 ± 5% +0.4 2.22 ± 5% perf-profile.calltrace.cycles-pp.kmalloc_large_node.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb._sctp_make_chunk
1.92 ± 5% +0.4 2.33 ± 5% perf-profile.calltrace.cycles-pp.kmalloc_reserve.__alloc_skb._sctp_make_chunk.sctp_make_datafrag_empty.sctp_datamsg_from_user
2.66 ± 3% +0.4 3.06 ± 4% perf-profile.calltrace.cycles-pp.__alloc_skb._sctp_make_chunk.sctp_make_datafrag_empty.sctp_datamsg_from_user.sctp_sendmsg_to_asoc
1.78 ± 6% +0.4 2.20 ± 6% perf-profile.calltrace.cycles-pp.__alloc_skb.sctp_packet_transmit.sctp_outq_flush.sctp_cmd_interpreter.sctp_do_sm
1.36 ± 9% +0.4 1.79 ± 9% perf-profile.calltrace.cycles-pp.rmqueue.get_page_from_freelist.__alloc_pages.kmalloc_large_node.__kmalloc_node_track_caller
7.79 ± 2% +0.7 8.46 perf-profile.calltrace.cycles-pp.sctp_packet_pack.sctp_packet_transmit.sctp_outq_flush.sctp_cmd_interpreter.sctp_do_sm
7.41 ± 2% +0.7 8.10 perf-profile.calltrace.cycles-pp.memcpy_erms.sctp_packet_pack.sctp_packet_transmit.sctp_outq_flush.sctp_cmd_interpreter
0.00 +0.7 0.74 ± 16% perf-profile.calltrace.cycles-pp._raw_spin_lock.free_pcppages_bulk.free_unref_page.skb_release_data.kfree_skb_reason
0.00 +0.7 0.74 ± 10% perf-profile.calltrace.cycles-pp.free_unref_page_commit.free_unref_page.skb_release_data.consume_skb.sctp_chunk_put
9.46 +0.8 10.22 perf-profile.calltrace.cycles-pp.sctp_do_sm.sctp_primitive_SEND.sctp_sendmsg_to_asoc.sctp_sendmsg.sock_sendmsg
9.63 +0.8 10.40 perf-profile.calltrace.cycles-pp.sctp_cmd_interpreter.sctp_do_sm.sctp_primitive_SEND.sctp_sendmsg_to_asoc.sctp_sendmsg
13.53 ± 2% +0.8 14.30 ± 2% perf-profile.calltrace.cycles-pp.sctp_do_sm.sctp_assoc_bh_rcv.sctp_backlog_rcv.__release_sock.release_sock
13.82 ± 2% +0.8 14.60 ± 2% perf-profile.calltrace.cycles-pp.sctp_assoc_bh_rcv.sctp_backlog_rcv.__release_sock.release_sock.sctp_sendmsg
13.84 ± 2% +0.8 14.62 ± 2% perf-profile.calltrace.cycles-pp.sctp_cmd_interpreter.sctp_do_sm.sctp_assoc_bh_rcv.sctp_backlog_rcv.__release_sock
10.83 ± 2% +0.8 11.64 perf-profile.calltrace.cycles-pp.sctp_packet_transmit.sctp_outq_flush.sctp_cmd_interpreter.sctp_do_sm.sctp_primitive_SEND
12.82 ± 2% +0.9 13.68 ± 2% perf-profile.calltrace.cycles-pp.sctp_outq_flush.sctp_cmd_interpreter.sctp_do_sm.sctp_primitive_SEND.sctp_sendmsg_to_asoc
14.77 +1.0 15.74 perf-profile.calltrace.cycles-pp.sctp_primitive_SEND.sctp_sendmsg_to_asoc.sctp_sendmsg.sock_sendmsg.____sys_sendmsg
0.00 +1.0 0.96 ± 14% perf-profile.calltrace.cycles-pp.free_pcppages_bulk.free_unref_page.skb_release_data.kfree_skb_reason.sctp_recvmsg
2.05 ± 7% +1.0 3.02 ± 9% perf-profile.calltrace.cycles-pp.kfree_skb_reason.sctp_recvmsg.inet_recvmsg.____sys_recvmsg.___sys_recvmsg
1.40 ± 8% +1.0 2.44 ± 10% perf-profile.calltrace.cycles-pp.skb_release_data.kfree_skb_reason.sctp_recvmsg.inet_recvmsg.____sys_recvmsg
0.00 +1.1 1.10 ± 12% perf-profile.calltrace.cycles-pp.free_unref_page.skb_release_data.kfree_skb_reason.sctp_recvmsg.inet_recvmsg
25.15 +1.2 26.32 perf-profile.calltrace.cycles-pp.sctp_sendmsg_to_asoc.sctp_sendmsg.sock_sendmsg.____sys_sendmsg.___sys_sendmsg
2.08 ± 6% +1.2 3.30 ± 6% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages.kmalloc_large_node.__kmalloc_node_track_caller.kmalloc_reserve
2.48 ± 6% +1.3 3.75 ± 6% perf-profile.calltrace.cycles-pp.__alloc_pages.kmalloc_large_node.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb
49.05 ± 2% +1.7 50.72 perf-profile.calltrace.cycles-pp.__sys_sendmsg.do_syscall_64.entry_SYSCALL_64_after_hwframe.sendmsg.main
49.47 ± 2% +1.7 51.21 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.sendmsg.main.__libc_start_main
49.17 ± 2% +1.7 50.91 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.sendmsg.main.__libc_start_main
51.24 ± 2% +1.8 52.98 perf-profile.calltrace.cycles-pp.__libc_start_main
50.11 ± 2% +1.8 51.86 perf-profile.calltrace.cycles-pp.sendmsg.main.__libc_start_main
50.99 ± 2% +1.8 52.74 perf-profile.calltrace.cycles-pp.main.__libc_start_main
45.54 +2.0 47.52 perf-profile.calltrace.cycles-pp.sock_sendmsg.____sys_sendmsg.___sys_sendmsg.__sys_sendmsg.do_syscall_64
45.10 +2.0 47.08 perf-profile.calltrace.cycles-pp.sctp_sendmsg.sock_sendmsg.____sys_sendmsg.___sys_sendmsg.__sys_sendmsg
47.29 +2.0 49.27 perf-profile.calltrace.cycles-pp.____sys_sendmsg.___sys_sendmsg.__sys_sendmsg.do_syscall_64.entry_SYSCALL_64_after_hwframe
49.20 +2.0 51.18 perf-profile.calltrace.cycles-pp.___sys_sendmsg.__sys_sendmsg.do_syscall_64.entry_SYSCALL_64_after_hwframe.sendmsg
19.03 ± 9% -3.1 15.95 ± 5% perf-profile.children.cycles-pp.cpuidle_idle_call
22.08 ± 7% -3.1 19.03 ± 4% perf-profile.children.cycles-pp.do_idle
22.12 ± 7% -3.1 19.06 ± 4% perf-profile.children.cycles-pp.secondary_startup_64_no_verify
22.12 ± 7% -3.1 19.06 ± 4% perf-profile.children.cycles-pp.cpu_startup_entry
17.42 ± 8% -2.9 14.56 ± 5% perf-profile.children.cycles-pp.cpuidle_enter
17.37 ± 8% -2.9 14.52 ± 5% perf-profile.children.cycles-pp.cpuidle_enter_state
14.52 ± 5% -1.9 12.64 ± 3% perf-profile.children.cycles-pp.intel_idle
14.43 ± 5% -1.9 12.55 ± 3% perf-profile.children.cycles-pp.mwait_idle_with_hints
2.25 ± 27% -0.8 1.46 ± 21% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
1.88 ± 29% -0.7 1.18 ± 23% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
1.19 ± 33% -0.5 0.73 ± 25% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
1.16 ± 32% -0.4 0.72 ± 24% perf-profile.children.cycles-pp.hrtimer_interrupt
0.76 ± 45% -0.3 0.44 ± 37% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.52 ± 44% -0.2 0.31 ± 34% perf-profile.children.cycles-pp.tick_sched_timer
0.48 ± 47% -0.2 0.28 ± 36% perf-profile.children.cycles-pp.tick_sched_handle
0.44 ± 44% -0.2 0.27 ± 32% perf-profile.children.cycles-pp.update_process_times
0.32 ± 35% -0.1 0.20 ± 28% perf-profile.children.cycles-pp.__irq_exit_rcu
0.24 ± 35% -0.1 0.16 ± 24% perf-profile.children.cycles-pp.scheduler_tick
0.24 ± 12% -0.1 0.18 ± 19% perf-profile.children.cycles-pp.clockevents_program_event
0.15 ± 23% -0.1 0.10 ± 18% perf-profile.children.cycles-pp.rebalance_domains
0.80 ± 2% -0.0 0.76 perf-profile.children.cycles-pp.sctp_chunkify
0.10 ± 33% -0.0 0.06 ± 21% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.40 ± 6% -0.0 0.36 ± 2% perf-profile.children.cycles-pp.native_sched_clock
0.47 ± 5% -0.0 0.43 ± 2% perf-profile.children.cycles-pp.sched_clock_cpu
0.12 ± 13% -0.0 0.08 ± 7% perf-profile.children.cycles-pp.native_irq_return_iret
0.55 -0.0 0.52 ± 2% perf-profile.children.cycles-pp.sctp_chunk_free
0.34 ± 6% -0.0 0.31 ± 2% perf-profile.children.cycles-pp.rcu_idle_exit
0.08 ± 11% -0.0 0.06 ± 11% perf-profile.children.cycles-pp.lapic_next_deadline
0.33 ± 2% +0.0 0.34 perf-profile.children.cycles-pp.loopback_xmit
0.36 +0.0 0.38 perf-profile.children.cycles-pp.xmit_one
0.12 ± 4% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.__build_skb_around
0.89 ± 2% +0.0 0.93 perf-profile.children.cycles-pp.enqueue_task_fair
0.93 ± 2% +0.0 0.96 perf-profile.children.cycles-pp.ttwu_do_activate
0.44 ± 5% +0.0 0.48 ± 2% perf-profile.children.cycles-pp.__mod_node_page_state
0.22 ± 5% +0.1 0.28 ± 6% perf-profile.children.cycles-pp.rmqueue_bulk
0.44 ± 2% +0.1 0.53 perf-profile.children.cycles-pp.__list_add_valid
2.68 ± 2% +0.1 2.77 perf-profile.children.cycles-pp.try_to_wake_up
2.69 ± 2% +0.1 2.79 perf-profile.children.cycles-pp.autoremove_wake_function
0.29 ± 7% +0.1 0.40 ± 6% perf-profile.children.cycles-pp.__free_one_page
2.98 +0.1 3.09 perf-profile.children.cycles-pp.__wake_up_common
3.45 ± 2% +0.1 3.56 perf-profile.children.cycles-pp.sctp_data_ready
3.07 +0.1 3.18 perf-profile.children.cycles-pp.__wake_up_common_lock
3.64 ± 2% +0.1 3.76 perf-profile.children.cycles-pp.sctp_ulpq_tail_event
0.39 ± 3% +0.2 0.56 ± 3% perf-profile.children.cycles-pp.__zone_watermark_ok
0.67 ± 12% +0.2 0.91 ± 12% perf-profile.children.cycles-pp.__free_pages_ok
0.50 ± 11% +0.3 0.79 ± 8% perf-profile.children.cycles-pp.free_unref_page_commit
0.95 ± 5% +0.3 1.27 ± 4% perf-profile.children.cycles-pp.__slab_free
2.47 ± 3% +0.4 2.83 ± 2% perf-profile.children.cycles-pp.kmem_cache_free
4.13 ± 2% +0.4 4.50 ± 2% perf-profile.children.cycles-pp.sctp_outq_sack
4.06 ± 2% +0.4 4.43 ± 3% perf-profile.children.cycles-pp.sctp_make_datafrag_empty
3.67 ± 2% +0.4 4.06 ± 3% perf-profile.children.cycles-pp._sctp_make_chunk
4.37 ± 2% +0.4 4.76 ± 2% perf-profile.children.cycles-pp.sctp_chunk_put
2.80 ± 2% +0.4 3.20 ± 2% perf-profile.children.cycles-pp.consume_skb
1.22 ± 12% +0.5 1.71 ± 9% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
1.67 ± 8% +0.6 2.22 ± 8% perf-profile.children.cycles-pp.rmqueue
8.31 +0.7 9.02 perf-profile.children.cycles-pp.sctp_packet_pack
7.64 +0.7 8.38 perf-profile.children.cycles-pp.memcpy_erms
0.82 +0.8 1.60 ± 8% perf-profile.children.cycles-pp._raw_spin_lock
2.60 ± 5% +0.8 3.42 ± 6% perf-profile.children.cycles-pp.get_page_from_freelist
3.02 ± 5% +0.8 3.86 ± 6% perf-profile.children.cycles-pp.__alloc_pages
0.13 ± 15% +0.8 0.97 ± 14% perf-profile.children.cycles-pp.free_pcppages_bulk
3.56 ± 5% +0.8 4.40 ± 5% perf-profile.children.cycles-pp.__kmalloc_node_track_caller
3.39 ± 5% +0.8 4.24 ± 6% perf-profile.children.cycles-pp.kmalloc_large_node
3.62 ± 4% +0.9 4.48 ± 5% perf-profile.children.cycles-pp.kmalloc_reserve
15.00 +0.9 15.86 perf-profile.children.cycles-pp.sctp_primitive_SEND
4.83 ± 3% +0.9 5.70 ± 4% perf-profile.children.cycles-pp.__alloc_skb
2.08 ± 7% +1.0 3.04 ± 9% perf-profile.children.cycles-pp.kfree_skb_reason
0.56 ± 23% +1.0 1.56 ± 19% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
1.22 ± 6% +1.2 2.38 ± 8% perf-profile.children.cycles-pp.free_unref_page
27.46 +1.2 28.68 perf-profile.children.cycles-pp.sctp_outq_flush
25.20 +1.2 26.44 perf-profile.children.cycles-pp.sctp_sendmsg_to_asoc
24.22 +1.3 25.47 perf-profile.children.cycles-pp.sctp_packet_transmit
2.95 ± 6% +1.4 4.38 ± 7% perf-profile.children.cycles-pp.skb_release_data
32.36 +1.6 33.92 perf-profile.children.cycles-pp.sctp_do_sm
32.11 +1.6 33.68 perf-profile.children.cycles-pp.sctp_cmd_interpreter
51.46 ± 2% +1.7 53.15 perf-profile.children.cycles-pp.main
51.24 ± 2% +1.8 52.98 perf-profile.children.cycles-pp.__libc_start_main
45.47 +2.0 47.44 perf-profile.children.cycles-pp.sctp_sendmsg
45.56 +2.0 47.53 perf-profile.children.cycles-pp.sock_sendmsg
47.32 +2.0 49.30 perf-profile.children.cycles-pp.____sys_sendmsg
49.23 +2.0 51.22 perf-profile.children.cycles-pp.___sys_sendmsg
49.56 +2.0 51.57 perf-profile.children.cycles-pp.__sys_sendmsg
51.10 +2.0 53.14 perf-profile.children.cycles-pp.sendmsg
73.82 ± 2% +3.0 76.86 perf-profile.children.cycles-pp.do_syscall_64
74.28 ± 2% +3.0 77.32 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
14.36 ± 5% -1.9 12.48 ± 3% perf-profile.self.cycles-pp.mwait_idle_with_hints
0.48 ± 21% -0.1 0.34 ± 13% perf-profile.self.cycles-pp.cpuidle_enter_state
0.39 ± 5% -0.0 0.35 ± 2% perf-profile.self.cycles-pp.native_sched_clock
0.12 ± 13% -0.0 0.08 ± 7% perf-profile.self.cycles-pp.native_irq_return_iret
0.08 ± 11% -0.0 0.06 ± 9% perf-profile.self.cycles-pp.lapic_next_deadline
0.38 ± 3% -0.0 0.35 ± 2% perf-profile.self.cycles-pp.sctp_packet_pack
0.50 ± 2% -0.0 0.48 ± 2% perf-profile.self.cycles-pp.__might_sleep
0.23 -0.0 0.22 ± 2% perf-profile.self.cycles-pp.do_idle
0.07 +0.0 0.09 ± 12% perf-profile.self.cycles-pp.syscall_exit_to_user_mode
0.20 ± 3% +0.0 0.22 ± 4% perf-profile.self.cycles-pp.enqueue_task_fair
0.02 ±141% +0.0 0.06 ± 9% perf-profile.self.cycles-pp.poll_idle
1.13 +0.0 1.17 perf-profile.self.cycles-pp.kmem_cache_free
0.43 ± 6% +0.0 0.47 ± 2% perf-profile.self.cycles-pp.__mod_node_page_state
0.37 ± 2% +0.1 0.45 ± 2% perf-profile.self.cycles-pp.__list_add_valid
0.84 ± 6% +0.1 0.92 ± 2% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.51 ± 2% +0.1 0.61 ± 4% perf-profile.self.cycles-pp.get_page_from_freelist
0.38 ± 2% +0.2 0.54 ± 4% perf-profile.self.cycles-pp.__zone_watermark_ok
0.76 +0.2 0.96 perf-profile.self.cycles-pp._raw_spin_lock
0.79 ± 8% +0.3 1.08 ± 7% perf-profile.self.cycles-pp.rmqueue
0.44 ± 12% +0.3 0.75 ± 9% perf-profile.self.cycles-pp.free_unref_page_commit
0.93 ± 5% +0.3 1.26 ± 4% perf-profile.self.cycles-pp.__slab_free
7.61 +0.7 8.35 perf-profile.self.cycles-pp.memcpy_erms
0.56 ± 23% +1.0 1.55 ± 19% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://01.org/lkp
[io_uring] 584b0180f0: phoronix-test-suite.fio.SequentialWrite.IO_uring.Yes.Yes.1MB.DefaultTestDirectory.mb_s -10.2% regression
by kernel test robot
Greetings,
FYI, we noticed a -10.2% regression of phoronix-test-suite.fio.SequentialWrite.IO_uring.Yes.Yes.1MB.DefaultTestDirectory.mb_s due to commit:
commit: 584b0180f0f4d67d7145950fe68c625f06c88b10 ("io_uring: move read/write file prep state into actual opcode handler")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: phoronix-test-suite
on test machine: 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 512G memory
with following parameters:
test: fio-1.14.1
option_a: Sequential Write
option_b: IO_uring
option_c: Yes
option_d: Yes
option_e: 1MB
option_f: Default Test Directory
cpufreq_governor: performance
ucode: 0x500320a
test-description: The Phoronix Test Suite is the most comprehensive testing and benchmarking platform available, providing an extensible framework to which new tests can easily be added.
test-url: http://www.phoronix-test-suite.com/
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <oliver.sang(a)intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp directories to run from a clean state.
=========================================================================================
compiler/cpufreq_governor/kconfig/option_a/option_b/option_c/option_d/option_e/option_f/rootfs/tbox_group/test/testcase/ucode:
gcc-11/performance/x86_64-rhel-8.3/Sequential Write/IO_uring/Yes/Yes/1MB/Default Test Directory/debian-x86_64-phoronix/lkp-csl-2sp7/fio-1.14.1/phoronix-test-suite/0x500320a
commit:
a3e4bc23d5 ("io_uring: defer splice/tee file validity check until command issue")
584b0180f0 ("io_uring: move read/write file prep state into actual opcode handler")
a3e4bc23d5470b2b 584b0180f0f4d67d7145950fe68
---------------- ---------------------------
%stddev %change %stddev
\ | \
1081 -10.2% 971.00 phoronix-test-suite.fio.SequentialWrite.IO_uring.Yes.Yes.1MB.DefaultTestDirectory.iops
1084 -10.2% 974.67 phoronix-test-suite.fio.SequentialWrite.IO_uring.Yes.Yes.1MB.DefaultTestDirectory.mb_s
118.42 +132.0% 274.70 ± 55% phoronix-test-suite.time.elapsed_time
118.42 +132.0% 274.70 ± 55% phoronix-test-suite.time.elapsed_time.max
1317 ± 19% +921.5% 13458 ± 53% phoronix-test-suite.time.involuntary_context_switches
185595 +23.8% 229715 ± 17% phoronix-test-suite.time.minor_page_faults
68.33 +2031.5% 1456 ± 3% phoronix-test-suite.time.percent_of_cpu_this_job_got
58.62 +6771.2% 4028 ± 58% phoronix-test-suite.time.system_time
244.97 +10.1% 269.72 pmeter.Average_Active_Power
1655992 ± 78% +1356.2% 24114501 ± 50% numa-numastat.node1.local_node
1662758 ± 78% +1374.2% 24512157 ± 50% numa-numastat.node1.numa_hit
958669 +10.8% 1062574 ± 7% meminfo.Active
843569 +12.0% 945049 ± 8% meminfo.Active(anon)
61229 ± 3% -60.2% 24352 ± 21% meminfo.Writeback
96.73 -14.0 82.71 mpstat.cpu.all.idle%
1.75 ± 15% -0.3 1.43 ± 16% mpstat.cpu.all.irq%
0.80 +14.4 15.19 ± 3% mpstat.cpu.all.sys%
0.33 -0.0 0.28 ± 11% mpstat.cpu.all.usr%
258678 ± 7% -37.0% 162850 ± 17% numa-meminfo.node0.Dirty
56023 ± 12% -85.5% 8096 ± 50% numa-meminfo.node0.Writeback
2978 ± 44% +350.6% 13421 ± 82% numa-meminfo.node1.Active(anon)
1.83 ± 73% +8.1e+06% 148245 ± 22% numa-meminfo.node1.Dirty
96.33 -14.4% 82.50 vmstat.cpu.id
746.67 ± 2% -40.8% 441.83 ± 53% vmstat.io.bi
1.00 +1350.0% 14.50 ± 15% vmstat.procs.r
187622 ± 3% +2.6% 192531 vmstat.system.in
79.33 ± 9% +490.3% 468.33 ± 3% turbostat.Avg_MHz
4.77 ± 15% +13.3 18.11 ± 5% turbostat.Busy%
1688 ± 7% +53.6% 2593 ± 2% turbostat.Bzy_MHz
13881106 ± 34% +181.8% 39121066 ± 64% turbostat.C1E
22867090 ± 3% +134.6% 53644530 ± 54% turbostat.IRQ
49.00 +5.4% 51.67 ± 2% turbostat.PkgTmp
120.78 +16.5% 140.68 turbostat.PkgWatt
64647 ± 7% -36.9% 40763 ± 17% numa-vmstat.node0.nr_dirty
13891 ± 12% -85.1% 2064 ± 50% numa-vmstat.node0.nr_writeback
78537 ± 6% -45.5% 42771 ± 17% numa-vmstat.node0.nr_zone_write_pending
744.50 ± 44% +350.7% 3355 ± 82% numa-vmstat.node1.nr_active_anon
6.00 ± 52% +3.6e+08% 21761586 ± 53% numa-vmstat.node1.nr_dirtied
0.00 +3.7e+106% 37065 ± 22% numa-vmstat.node1.nr_dirty
5.50 ± 48% +3.9e+08% 21697670 ± 53% numa-vmstat.node1.nr_written
744.50 ± 44% +350.7% 3355 ± 82% numa-vmstat.node1.nr_zone_active_anon
0.00 +4e+106% 40201 ± 22% numa-vmstat.node1.nr_zone_write_pending
1662586 ± 78% +1374.3% 24512265 ± 50% numa-vmstat.node1.numa_hit
1655820 ± 78% +1356.4% 24114609 ± 50% numa-vmstat.node1.numa_local
17009 ± 36% +3730.9% 651618 ± 64% sched_debug.cfs_rq:/.min_vruntime.avg
33445 ± 30% +2211.0% 772918 ± 59% sched_debug.cfs_rq:/.min_vruntime.max
11326 ± 45% +4758.2% 550285 ± 70% sched_debug.cfs_rq:/.min_vruntime.min
3907 ± 21% +2044.3% 83779 ± 38% sched_debug.cfs_rq:/.min_vruntime.stddev
91.33 ± 33% +55.1% 141.65 ± 37% sched_debug.cfs_rq:/.runnable_avg.avg
7037 ± 70% +823.4% 64981 ± 67% sched_debug.cfs_rq:/.spread0.max
-15244 +934.3% -157672 sched_debug.cfs_rq:/.spread0.min
3930 ± 21% +2031.5% 83782 ± 38% sched_debug.cfs_rq:/.spread0.stddev
90.43 ± 34% +53.1% 138.43 ± 37% sched_debug.cfs_rq:/.util_avg.avg
135.84 ± 21% +529.3% 854.87 ± 67% sched_debug.cpu.curr->pid.avg
1000 ± 10% +376.5% 4767 ± 59% sched_debug.cpu.nr_switches.min
210892 +12.0% 236262 ± 8% proc-vmstat.nr_active_anon
18617 +7.7% 20053 proc-vmstat.nr_kernel_stack
53538 +3.3% 55319 proc-vmstat.nr_slab_unreclaimable
15260 ± 4% -60.1% 6094 ± 21% proc-vmstat.nr_writeback
210892 +12.0% 236262 ± 8% proc-vmstat.nr_zone_active_anon
9868 ± 31% +227.2% 32291 ± 60% proc-vmstat.numa_hint_faults
9609 ± 28% +156.6% 24657 ± 62% proc-vmstat.numa_hint_faults_local
416.00 ± 8% +259.4% 1495 ± 54% proc-vmstat.numa_huge_pte_updates
259.00 ±156% +40441.1% 105001 ± 46% proc-vmstat.numa_pages_migrated
230799 ± 7% +252.9% 814389 ± 53% proc-vmstat.numa_pte_updates
292996 +7.3% 314293 proc-vmstat.pgactivate
867707 +73.7% 1507465 ± 39% proc-vmstat.pgfault
259.00 ±156% +40441.1% 105001 ± 46% proc-vmstat.pgmigrate_success
311.33 ± 7% +2155.2% 7021 ± 65% proc-vmstat.pgrotated
30.29 ± 42% -73.8% 7.93 ±110% perf-stat.i.MPKI
7.08e+08 +283.6% 2.716e+09 perf-stat.i.branch-instructions
2.91 ± 44% -1.9 1.04 ± 83% perf-stat.i.branch-miss-rate%
34699379 -10.6% 31027580 ± 3% perf-stat.i.cache-misses
6.802e+09 ± 10% +554.9% 4.455e+10 ± 3% perf-stat.i.cpu-cycles
33.73 ± 25% +1103.3% 405.85 ± 5% perf-stat.i.cpu-migrations
1102 ± 33% +132.4% 2562 ± 22% perf-stat.i.cycles-between-cache-misses
0.22 ± 50% -0.2 0.07 ±137% perf-stat.i.dTLB-load-miss-rate%
9.67e+08 ± 4% +270.6% 3.584e+09 ± 2% perf-stat.i.dTLB-loads
4.994e+08 ± 2% -6.6% 4.664e+08 ± 2% perf-stat.i.dTLB-stores
3.503e+09 +284.6% 1.347e+10 perf-stat.i.instructions
3126 ± 5% +327.7% 13371 ± 7% perf-stat.i.instructions-per-iTLB-miss
0.52 ± 9% -38.4% 0.32 ± 8% perf-stat.i.ipc
70851 ± 10% +554.6% 463814 ± 3% perf-stat.i.metric.GHz
23543856 +202.1% 71118618 perf-stat.i.metric.M/sec
24.45 ± 5% +17.6 42.05 ± 5% perf-stat.i.node-load-miss-rate%
154141 ± 4% +1027.3% 1737599 ± 11% perf-stat.i.node-load-misses
6780078 -41.7% 3950788 ± 5% perf-stat.i.node-loads
19.06 ± 16% +32.5 51.53 ± 7% perf-stat.i.node-store-miss-rate%
54242 ± 17% +4415.1% 2449115 ± 15% perf-stat.i.node-store-misses
5188725 -41.8% 3018300 ± 12% perf-stat.i.node-stores
20.79 ± 28% -81.7% 3.80 ± 31% perf-stat.overall.MPKI
2.06 ± 28% -1.8 0.31 ± 39% perf-stat.overall.branch-miss-rate%
1.94 ± 10% +70.2% 3.31 ± 2% perf-stat.overall.cpi
195.90 ± 8% +632.9% 1435 perf-stat.overall.cycles-between-cache-misses
0.10 ± 48% -0.1 0.01 ±116% perf-stat.overall.dTLB-load-miss-rate%
3154 ± 6% +306.3% 12813 ± 8% perf-stat.overall.instructions-per-iTLB-miss
0.52 ± 11% -41.9% 0.30 ± 2% perf-stat.overall.ipc
2.23 ± 4% +28.3 30.55 ± 10% perf-stat.overall.node-load-miss-rate%
1.04 ± 17% +43.8 44.81 ± 15% perf-stat.overall.node-store-miss-rate%
7.02e+08 +284.8% 2.702e+09 perf-stat.ps.branch-instructions
34381260 -10.2% 30861100 ± 3% perf-stat.ps.cache-misses
6.746e+09 ± 10% +556.9% 4.431e+10 ± 3% perf-stat.ps.cpu-cycles
33.43 ± 25% +1107.4% 403.66 ± 5% perf-stat.ps.cpu-migrations
9.587e+08 ± 4% +271.8% 3.565e+09 ± 2% perf-stat.ps.dTLB-loads
4.951e+08 ± 2% -6.3% 4.64e+08 ± 2% perf-stat.ps.dTLB-stores
3.473e+09 +285.9% 1.34e+10 perf-stat.ps.instructions
152903 ± 4% +1030.3% 1728344 ± 11% perf-stat.ps.node-load-misses
6717519 -41.5% 3929544 ± 5% perf-stat.ps.node-loads
53840 ± 17% +4424.7% 2436140 ± 15% perf-stat.ps.node-store-misses
5140848 -41.6% 3001971 ± 12% perf-stat.ps.node-stores
4.129e+11 +801.9% 3.724e+12 ± 56% perf-stat.total.instructions
61.17 ± 3% -49.8 11.37 ± 16% perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
60.62 ± 3% -49.4 11.26 ± 16% perf-profile.calltrace.cycles-pp.cpu_startup_entry.secondary_startup_64_no_verify
60.58 ± 3% -49.3 11.26 ± 16% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.secondary_startup_64_no_verify
59.76 ± 3% -48.6 11.19 ± 16% perf-profile.calltrace.cycles-pp.cpuidle_idle_call.do_idle.cpu_startup_entry.secondary_startup_64_no_verify
55.62 ± 2% -44.8 10.84 ± 17% perf-profile.calltrace.cycles-pp.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry.secondary_startup_64_no_verify
55.13 ± 3% -44.4 10.73 ± 17% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry
41.00 ± 4% -31.5 9.54 ± 20% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
40.57 ± 3% -31.0 9.53 ± 20% perf-profile.calltrace.cycles-pp.mwait_idle_with_hints.intel_idle.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call
13.53 ± 4% -12.4 1.10 ± 9% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
14.04 ± 21% -12.1 1.96 ± 4% perf-profile.calltrace.cycles-pp.generic_perform_write.ext4_buffered_write_iter.io_write.io_issue_sqe.io_wq_submit_work
13.80 ± 10% -12.1 1.73 ± 9% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
10.98 ± 5% -10.0 0.98 ± 9% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call
11.01 ± 9% -9.6 1.41 ± 11% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
10.97 ± 9% -9.6 1.40 ± 11% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
10.78 ± 9% -9.4 1.38 ± 11% perf-profile.calltrace.cycles-pp.loop_process_work.process_one_work.worker_thread.kthread.ret_from_fork
10.60 ± 10% -9.2 1.35 ± 11% perf-profile.calltrace.cycles-pp.lo_write_simple.loop_process_work.process_one_work.worker_thread.kthread
10.32 ± 9% -9.0 1.31 ± 12% perf-profile.calltrace.cycles-pp.do_iter_write.lo_write_simple.loop_process_work.process_one_work.worker_thread
10.00 ± 9% -8.7 1.28 ± 12% perf-profile.calltrace.cycles-pp.do_iter_readv_writev.do_iter_write.lo_write_simple.loop_process_work.process_one_work
9.93 ± 9% -8.7 1.28 ± 12% perf-profile.calltrace.cycles-pp.generic_file_write_iter.do_iter_readv_writev.do_iter_write.lo_write_simple.loop_process_work
9.59 ± 9% -8.4 1.24 ± 13% perf-profile.calltrace.cycles-pp.__generic_file_write_iter.generic_file_write_iter.do_iter_readv_writev.do_iter_write.lo_write_simple
9.41 ± 9% -8.2 1.22 ± 13% perf-profile.calltrace.cycles-pp.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.do_iter_readv_writev.do_iter_write
9.02 ± 8% -8.1 0.95 ± 7% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
9.02 ± 8% -8.1 0.95 ± 7% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
8.21 ± 8% -7.3 0.88 ± 7% perf-profile.calltrace.cycles-pp.__x64_sys_fadvise64.do_syscall_64.entry_SYSCALL_64_after_hwframe
8.21 ± 8% -7.3 0.88 ± 7% perf-profile.calltrace.cycles-pp.ksys_fadvise64_64.__x64_sys_fadvise64.do_syscall_64.entry_SYSCALL_64_after_hwframe
8.21 ± 8% -7.3 0.88 ± 7% perf-profile.calltrace.cycles-pp.generic_fadvise.ksys_fadvise64_64.__x64_sys_fadvise64.do_syscall_64.entry_SYSCALL_64_after_hwframe
8.16 ± 9% -7.1 1.07 ± 14% perf-profile.calltrace.cycles-pp.copy_page_from_iter_atomic.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.do_iter_readv_writev
7.36 ± 6% -6.7 0.62 ± 9% perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
7.17 ± 7% -6.6 0.62 ± 9% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
5.18 ± 21% -4.5 0.72 ± 6% perf-profile.calltrace.cycles-pp.copy_page_from_iter_atomic.generic_perform_write.ext4_buffered_write_iter.io_write.io_issue_sqe
5.08 ± 21% -4.4 0.70 ± 5% perf-profile.calltrace.cycles-pp.copyin.copy_page_from_iter_atomic.generic_perform_write.ext4_buffered_write_iter.io_write
5.05 ± 21% -4.4 0.70 ± 6% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyin.copy_page_from_iter_atomic.generic_perform_write.ext4_buffered_write_iter
4.69 ± 8% -4.3 0.36 ± 70% perf-profile.calltrace.cycles-pp.invalidate_mapping_pagevec.generic_fadvise.ksys_fadvise64_64.__x64_sys_fadvise64.do_syscall_64
5.02 ± 21% -4.3 0.75 ± 4% perf-profile.calltrace.cycles-pp.ext4_da_write_begin.generic_perform_write.ext4_buffered_write_iter.io_write.io_issue_sqe
0.00 +0.6 0.61 ± 2% perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_down_write_slowpath.ext4_buffered_write_iter.io_write.io_issue_sqe
0.00 +2.0 2.03 ± 2% perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_optimistic_spin.rwsem_down_write_slowpath.ext4_buffered_write_iter.io_write
28.19 ± 7% +59.2 87.44 ± 2% perf-profile.calltrace.cycles-pp.ret_from_fork
14.40 ± 21% +71.3 85.71 ± 2% perf-profile.calltrace.cycles-pp.io_wqe_worker.ret_from_fork
14.25 ± 21% +71.4 85.64 ± 2% perf-profile.calltrace.cycles-pp.io_worker_handle_work.io_wqe_worker.ret_from_fork
14.23 ± 21% +71.4 85.63 ± 2% perf-profile.calltrace.cycles-pp.io_issue_sqe.io_wq_submit_work.io_worker_handle_work.io_wqe_worker.ret_from_fork
14.23 ± 21% +71.4 85.64 ± 2% perf-profile.calltrace.cycles-pp.io_wq_submit_work.io_worker_handle_work.io_wqe_worker.ret_from_fork
14.22 ± 21% +71.4 85.63 ± 2% perf-profile.calltrace.cycles-pp.io_write.io_issue_sqe.io_wq_submit_work.io_worker_handle_work.io_wqe_worker
14.10 ± 21% +71.5 85.62 ± 2% perf-profile.calltrace.cycles-pp.ext4_buffered_write_iter.io_write.io_issue_sqe.io_wq_submit_work.io_worker_handle_work
0.00 +80.9 80.92 ± 2% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_write_slowpath.ext4_buffered_write_iter.io_write
0.00 +83.0 82.99 ± 2% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.ext4_buffered_write_iter.io_write.io_issue_sqe
0.00 +83.6 83.63 ± 2% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.ext4_buffered_write_iter.io_write.io_issue_sqe.io_wq_submit_work
61.17 ± 3% -49.8 11.37 ± 16% perf-profile.children.cycles-pp.secondary_startup_64_no_verify
61.17 ± 3% -49.8 11.37 ± 16% perf-profile.children.cycles-pp.cpu_startup_entry
61.17 ± 3% -49.8 11.37 ± 16% perf-profile.children.cycles-pp.do_idle
60.34 ± 3% -49.0 11.30 ± 16% perf-profile.children.cycles-pp.cpuidle_idle_call
56.14 ± 3% -45.2 10.95 ± 17% perf-profile.children.cycles-pp.cpuidle_enter
56.08 ± 3% -45.1 10.94 ± 17% perf-profile.children.cycles-pp.cpuidle_enter_state
41.25 ± 4% -31.6 9.64 ± 20% perf-profile.children.cycles-pp.intel_idle
41.16 ± 4% -31.5 9.64 ± 20% perf-profile.children.cycles-pp.mwait_idle_with_hints
23.64 ± 10% -20.4 3.22 ± 5% perf-profile.children.cycles-pp.generic_perform_write
13.80 ± 10% -12.1 1.73 ± 9% perf-profile.children.cycles-pp.kthread
13.48 ± 8% -11.8 1.72 ± 9% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
13.41 ± 5% -11.6 1.80 ± 9% perf-profile.children.cycles-pp.copy_page_from_iter_atomic
11.71 ± 6% -10.2 1.55 ± 9% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
11.01 ± 9% -9.6 1.41 ± 11% perf-profile.children.cycles-pp.worker_thread
10.97 ± 9% -9.6 1.40 ± 11% perf-profile.children.cycles-pp.process_one_work
10.78 ± 9% -9.4 1.38 ± 11% perf-profile.children.cycles-pp.loop_process_work
10.61 ± 10% -9.3 1.35 ± 12% perf-profile.children.cycles-pp.lo_write_simple
10.19 ± 7% -9.1 1.12 ± 5% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
10.18 ± 7% -9.1 1.12 ± 5% perf-profile.children.cycles-pp.do_syscall_64
10.32 ± 9% -9.0 1.32 ± 12% perf-profile.children.cycles-pp.do_iter_write
10.08 ± 9% -8.8 1.32 ± 12% perf-profile.children.cycles-pp.generic_file_write_iter
10.00 ± 9% -8.7 1.28 ± 12% perf-profile.children.cycles-pp.do_iter_readv_writev
9.75 ± 9% -8.5 1.28 ± 13% perf-profile.children.cycles-pp.__generic_file_write_iter
8.21 ± 8% -7.3 0.88 ± 7% perf-profile.children.cycles-pp.__x64_sys_fadvise64
8.21 ± 8% -7.3 0.88 ± 7% perf-profile.children.cycles-pp.ksys_fadvise64_64
8.21 ± 8% -7.3 0.88 ± 7% perf-profile.children.cycles-pp.generic_fadvise
7.93 ± 6% -6.8 1.16 ± 10% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
7.76 ± 7% -6.6 1.15 ± 10% perf-profile.children.cycles-pp.hrtimer_interrupt
5.14 ± 21% -4.4 0.72 ± 5% perf-profile.children.cycles-pp.copyin
5.13 ± 21% -4.4 0.72 ± 5% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
5.02 ± 21% -4.3 0.75 ± 4% perf-profile.children.cycles-pp.ext4_da_write_begin
4.70 ± 8% -4.2 0.52 ± 9% perf-profile.children.cycles-pp.invalidate_mapping_pagevec
4.60 ± 10% -3.8 0.80 ± 15% perf-profile.children.cycles-pp.__hrtimer_run_queues
3.98 ± 9% -3.5 0.44 ± 7% perf-profile.children.cycles-pp.__softirqentry_text_start
3.54 ± 13% -3.2 0.30 ± 13% perf-profile.children.cycles-pp.menu_select
3.71 ± 15% -3.2 0.54 ± 3% perf-profile.children.cycles-pp.pagecache_get_page
3.51 ± 9% -3.2 0.36 ± 10% perf-profile.children.cycles-pp.__filemap_fdatawrite_range
3.51 ± 9% -3.2 0.36 ± 10% perf-profile.children.cycles-pp.filemap_fdatawrite_wbc
3.51 ± 9% -3.1 0.37 ± 6% perf-profile.children.cycles-pp.do_writepages
3.51 ± 9% -3.1 0.37 ± 6% perf-profile.children.cycles-pp.ext4_writepages
3.50 ± 9% -3.1 0.37 ± 6% perf-profile.children.cycles-pp.mpage_prepare_extent_to_map
3.67 ± 16% -3.1 0.54 ± 3% perf-profile.children.cycles-pp.__filemap_get_folio
2.87 ± 10% -2.6 0.29 ± 11% perf-profile.children.cycles-pp.mpage_process_page_bufs
3.08 ± 11% -2.4 0.66 ± 15% perf-profile.children.cycles-pp.tick_sched_timer
2.71 ± 52% -2.4 0.30 ± 23% perf-profile.children.cycles-pp.ktime_get
2.70 ± 10% -2.4 0.30 ± 5% perf-profile.children.cycles-pp.smpboot_thread_fn
2.66 ± 21% -2.3 0.33 ± 3% perf-profile.children.cycles-pp.generic_write_end
2.61 ± 11% -2.3 0.29 ± 6% perf-profile.children.cycles-pp.run_ksoftirqd
2.57 ± 11% -2.3 0.28 ± 7% perf-profile.children.cycles-pp.blk_complete_reqs
2.56 ± 11% -2.3 0.28 ± 7% perf-profile.children.cycles-pp.blk_mq_end_request
2.56 ± 11% -2.3 0.28 ± 7% perf-profile.children.cycles-pp.blk_update_request
2.54 ± 11% -2.3 0.28 ± 7% perf-profile.children.cycles-pp.ext4_end_bio
2.54 ± 11% -2.3 0.28 ± 7% perf-profile.children.cycles-pp.ext4_finish_bio
2.45 ± 9% -2.2 0.24 ± 9% perf-profile.children.cycles-pp.mpage_submit_page
2.48 ± 21% -2.2 0.30 ± 2% perf-profile.children.cycles-pp.__block_commit_write
2.40 ± 15% -1.8 0.58 ± 14% perf-profile.children.cycles-pp.tick_sched_handle
2.06 ± 34% -1.8 0.24 ± 15% perf-profile.children.cycles-pp.clockevents_program_event
2.26 ± 13% -1.7 0.57 ± 13% perf-profile.children.cycles-pp.update_process_times
1.80 ± 9% -1.6 0.17 ± 8% perf-profile.children.cycles-pp.ext4_bio_write_page
1.78 ± 11% -1.6 0.18 ± 7% perf-profile.children.cycles-pp.folio_end_writeback
1.86 ± 21% -1.6 0.28 ± 6% perf-profile.children.cycles-pp.filemap_add_folio
1.78 ± 14% -1.6 0.20 ± 13% perf-profile.children.cycles-pp.__irq_exit_rcu
1.80 ± 22% -1.5 0.26 ± 6% perf-profile.children.cycles-pp.ext4_block_write_begin
1.69 ± 12% -1.5 0.20 ± 17% perf-profile.children.cycles-pp.mapping_evict_folio
1.63 ± 22% -1.5 0.14 ± 12% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
1.68 ± 12% -1.5 0.20 ± 17% perf-profile.children.cycles-pp.filemap_release_folio
1.55 ± 10% -1.4 0.16 ± 7% perf-profile.children.cycles-pp.__folio_end_writeback
1.50 ± 6% -1.3 0.15 ± 5% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
1.41 ± 11% -1.3 0.14 ± 8% perf-profile.children.cycles-pp.remove_mapping
1.38 ± 11% -1.2 0.14 ± 7% perf-profile.children.cycles-pp.__remove_mapping
1.28 ± 14% -1.1 0.14 ± 10% perf-profile.children.cycles-pp.release_pages
1.24 ± 27% -1.1 0.11 ± 17% perf-profile.children.cycles-pp.tick_nohz_next_event
1.26 ± 21% -1.1 0.19 ± 5% perf-profile.children.cycles-pp.__filemap_add_folio
1.26 ± 9% -1.1 0.19 ± 4% perf-profile.children.cycles-pp.__schedule
1.19 ± 14% -1.1 0.13 ± 13% perf-profile.children.cycles-pp.__pagevec_release
1.15 ± 15% -1.0 0.13 ± 25% perf-profile.children.cycles-pp.try_to_free_buffers
1.11 ± 16% -1.0 0.10 ± 15% perf-profile.children.cycles-pp.__folio_start_writeback
1.17 ± 24% -1.0 0.18 ± 8% perf-profile.children.cycles-pp.create_empty_buffers
1.35 ± 8% -1.0 0.40 ± 13% perf-profile.children.cycles-pp.perf_tp_event
1.42 ± 13% -0.9 0.47 ± 13% perf-profile.children.cycles-pp.scheduler_tick
1.05 ± 7% -0.9 0.12 ± 12% perf-profile.children.cycles-pp.__mod_lruvec_page_state
1.31 ± 8% -0.9 0.38 ± 13% perf-profile.children.cycles-pp.perf_event_output_forward
1.31 ± 8% -0.9 0.39 ± 13% perf-profile.children.cycles-pp.__perf_event_overflow
1.06 ± 6% -0.9 0.14 ± 9% perf-profile.children.cycles-pp.xas_load
1.17 ± 8% -0.8 0.34 ± 14% perf-profile.children.cycles-pp.perf_prepare_sample
0.96 ± 22% -0.8 0.14 ± 6% perf-profile.children.cycles-pp.mark_buffer_dirty
0.94 ± 21% -0.8 0.15 ± 6% perf-profile.children.cycles-pp.folio_alloc
1.12 ± 8% -0.8 0.32 ± 15% perf-profile.children.cycles-pp.perf_callchain
1.11 ± 8% -0.8 0.32 ± 15% perf-profile.children.cycles-pp.get_perf_callchain
0.92 ± 19% -0.8 0.15 ± 5% perf-profile.children.cycles-pp.__alloc_pages
0.90 ± 18% -0.8 0.14 ± 9% perf-profile.children.cycles-pp.fault_in_iov_iter_readable
0.86 ± 10% -0.8 0.09 ± 11% perf-profile.children.cycles-pp._raw_spin_lock
0.89 ± 27% -0.7 0.14 ± 7% perf-profile.children.cycles-pp.alloc_page_buffers
0.80 ± 10% -0.7 0.07 ± 15% perf-profile.children.cycles-pp.irq_work_run_list
0.81 ± 17% -0.7 0.07 ± 21% perf-profile.children.cycles-pp.irq_enter_rcu
0.86 ± 18% -0.7 0.13 ± 9% perf-profile.children.cycles-pp.fault_in_readable
0.88 ± 8% -0.7 0.16 ± 4% perf-profile.children.cycles-pp.schedule
0.84 ± 29% -0.7 0.14 ± 9% perf-profile.children.cycles-pp.kmem_cache_alloc
0.78 ± 9% -0.7 0.07 ± 16% perf-profile.children.cycles-pp.asm_sysvec_irq_work
0.78 ± 9% -0.7 0.07 ± 16% perf-profile.children.cycles-pp.sysvec_irq_work
0.78 ± 9% -0.7 0.07 ± 16% perf-profile.children.cycles-pp.__sysvec_irq_work
0.78 ± 9% -0.7 0.07 ± 16% perf-profile.children.cycles-pp.irq_work_single
0.78 ± 9% -0.7 0.07 ± 16% perf-profile.children.cycles-pp.irq_work_run
0.78 ± 9% -0.7 0.07 ± 16% perf-profile.children.cycles-pp._printk
0.78 ± 9% -0.7 0.07 ± 16% perf-profile.children.cycles-pp.vprintk_emit
0.78 ± 9% -0.7 0.07 ± 16% perf-profile.children.cycles-pp.console_unlock
0.78 ± 9% -0.7 0.07 ± 16% perf-profile.children.cycles-pp.call_console_drivers
0.77 ± 19% -0.7 0.06 ± 47% perf-profile.children.cycles-pp.tick_irq_enter
0.84 ± 29% -0.7 0.14 ± 9% perf-profile.children.cycles-pp.alloc_buffer_head
0.80 ± 16% -0.7 0.10 ± 10% perf-profile.children.cycles-pp.shmem_write_begin
0.78 ± 24% -0.7 0.09 ± 30% perf-profile.children.cycles-pp.kmem_cache_free
0.75 ± 10% -0.7 0.06 ± 11% perf-profile.children.cycles-pp.serial8250_console_write
0.75 ± 10% -0.7 0.06 ± 11% perf-profile.children.cycles-pp.uart_console_write
0.77 ± 14% -0.7 0.09 ± 8% perf-profile.children.cycles-pp.lapic_next_deadline
0.76 ± 9% -0.7 0.08 ± 12% perf-profile.children.cycles-pp.__filemap_remove_folio
0.76 ± 17% -0.7 0.10 ± 10% perf-profile.children.cycles-pp.shmem_getpage_gfp
0.72 ± 55% -0.7 0.06 ± 47% perf-profile.children.cycles-pp.tick_nohz_irq_exit
0.72 ± 9% -0.7 0.06 ± 11% perf-profile.children.cycles-pp.wait_for_xmitr
0.72 ± 10% -0.7 0.06 ± 11% perf-profile.children.cycles-pp.serial8250_console_putchar
0.73 ± 23% -0.6 0.08 ± 29% perf-profile.children.cycles-pp.free_buffer_head
0.70 ± 19% -0.6 0.06 ± 19% perf-profile.children.cycles-pp.rebalance_domains
0.69 ± 14% -0.6 0.06 ± 17% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.73 ± 21% -0.6 0.12 ± 6% perf-profile.children.cycles-pp.get_page_from_freelist
0.71 ± 8% -0.6 0.09 ± 5% perf-profile.children.cycles-pp.native_irq_return_iret
0.64 ± 21% -0.6 0.06 ± 14% perf-profile.children.cycles-pp.free_unref_page_list
0.67 ± 23% -0.6 0.10 ± 10% perf-profile.children.cycles-pp.__folio_mark_dirty
0.59 ± 8% -0.6 0.02 ± 99% perf-profile.children.cycles-pp.io_serial_in
0.62 ± 15% -0.6 0.06 ± 11% perf-profile.children.cycles-pp.folio_clear_dirty_for_io
0.62 ± 14% -0.6 0.06 ± 11% perf-profile.children.cycles-pp.sched_clock_cpu
0.60 ± 20% -0.5 0.09 ± 7% perf-profile.children.cycles-pp.folio_add_lru
0.54 ± 16% -0.5 0.06 ± 9% perf-profile.children.cycles-pp.native_sched_clock
0.50 ± 7% -0.5 0.04 ± 71% perf-profile.children.cycles-pp.read_tsc
0.50 ± 10% -0.4 0.06 ± 6% perf-profile.children.cycles-pp.__might_resched
0.55 ± 38% -0.4 0.11 ± 25% perf-profile.children.cycles-pp.start_kernel
0.50 ± 17% -0.4 0.07 ± 10% perf-profile.children.cycles-pp.xas_store
0.49 ± 11% -0.4 0.06 ± 11% perf-profile.children.cycles-pp.__mod_lruvec_state
0.51 ± 19% -0.4 0.08 ± 8% perf-profile.children.cycles-pp.__pagevec_lru_add
0.54 ± 18% -0.4 0.11 ± 11% perf-profile.children.cycles-pp.load_balance
0.47 ± 49% -0.4 0.05 ± 46% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
0.50 ± 21% -0.4 0.08 ± 5% perf-profile.children.cycles-pp.rmqueue
0.46 ± 14% -0.4 0.04 ± 44% perf-profile.children.cycles-pp.irqtime_account_irq
0.47 ± 22% -0.4 0.06 ± 7% perf-profile.children.cycles-pp.ext4_da_get_block_prep
0.43 ± 34% -0.4 0.03 ±102% perf-profile.children.cycles-pp.memcg_slab_free_hook
0.46 ± 8% -0.4 0.06 ± 17% perf-profile.children.cycles-pp.jbd2_journal_try_to_free_buffers
0.47 ± 12% -0.4 0.08 ± 13% perf-profile.children.cycles-pp.perf_callchain_user
0.44 ± 7% -0.4 0.06 ± 8% perf-profile.children.cycles-pp.ksys_read
0.62 ± 7% -0.4 0.24 ± 18% perf-profile.children.cycles-pp.perf_callchain_kernel
0.44 ± 7% -0.4 0.06 ± 8% perf-profile.children.cycles-pp.vfs_read
0.43 ± 8% -0.4 0.06 ± 13% perf-profile.children.cycles-pp.jbd2_journal_grab_journal_head
0.39 ± 7% -0.4 0.03 ±100% perf-profile.children.cycles-pp.free_pcppages_bulk
0.38 ± 8% -0.4 0.02 ± 99% perf-profile.children.cycles-pp._raw_spin_lock_irq
0.40 ± 13% -0.4 0.05 ± 7% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
0.43 ± 9% -0.3 0.08 ± 8% perf-profile.children.cycles-pp.try_to_wake_up
0.38 ± 10% -0.3 0.04 ± 44% perf-profile.children.cycles-pp.__mod_node_page_state
0.41 ± 13% -0.3 0.07 ± 15% perf-profile.children.cycles-pp.__get_user_nocheck_8
0.40 ± 20% -0.3 0.08 ± 11% perf-profile.children.cycles-pp.find_busiest_group
0.65 ± 9% -0.3 0.33 ± 17% perf-profile.children.cycles-pp.update_curr
0.51 ± 7% -0.3 0.19 ± 17% perf-profile.children.cycles-pp.unwind_next_frame
0.38 ± 21% -0.3 0.06 ± 9% perf-profile.children.cycles-pp.__mem_cgroup_charge
0.37 ± 20% -0.3 0.06 ± 9% perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.63 ± 9% -0.3 0.32 ± 17% perf-profile.children.cycles-pp.perf_trace_sched_stat_runtime
0.38 ± 17% -0.3 0.08 ± 12% perf-profile.children.cycles-pp.update_sd_lb_stats
0.38 ± 6% -0.3 0.08 ± 8% perf-profile.children.cycles-pp.__libc_start_main
0.32 ± 12% -0.3 0.02 ± 99% perf-profile.children.cycles-pp.read
0.32 ± 21% -0.3 0.04 ± 45% perf-profile.children.cycles-pp.rmqueue_bulk
0.31 ± 42% -0.3 0.04 ± 71% perf-profile.children.cycles-pp.memcg_slab_post_alloc_hook
0.26 ± 25% -0.2 0.02 ± 99% perf-profile.children.cycles-pp.folio_account_dirtied
0.29 ± 20% -0.2 0.07 ± 14% perf-profile.children.cycles-pp.update_sg_lb_stats
0.27 ± 7% -0.2 0.08 ± 7% perf-profile.children.cycles-pp.asm_exc_page_fault
0.24 ± 10% -0.2 0.08 ± 22% perf-profile.children.cycles-pp.__unwind_start
0.18 ± 16% -0.2 0.03 ±100% perf-profile.children.cycles-pp.__orc_find
0.16 ± 16% -0.1 0.04 ± 71% perf-profile.children.cycles-pp.ksys_write
0.16 ± 16% -0.1 0.04 ± 71% perf-profile.children.cycles-pp.vfs_write
0.16 ± 16% -0.1 0.04 ± 71% perf-profile.children.cycles-pp.new_sync_write
0.15 ± 17% -0.1 0.02 ± 99% perf-profile.children.cycles-pp.__libc_write
0.17 ± 18% -0.1 0.07 ± 6% perf-profile.children.cycles-pp.schedule_timeout
0.15 ± 24% -0.1 0.09 ± 13% perf-profile.children.cycles-pp.pick_next_task_fair
0.00 +2.6 2.64 ± 2% perf-profile.children.cycles-pp.rwsem_spin_on_owner
28.20 ± 7% +59.2 87.44 ± 2% perf-profile.children.cycles-pp.ret_from_fork
14.40 ± 21% +71.3 85.71 ± 2% perf-profile.children.cycles-pp.io_wqe_worker
14.25 ± 21% +71.4 85.64 ± 2% perf-profile.children.cycles-pp.io_worker_handle_work
14.23 ± 21% +71.4 85.63 ± 2% perf-profile.children.cycles-pp.io_issue_sqe
14.23 ± 21% +71.4 85.64 ± 2% perf-profile.children.cycles-pp.io_wq_submit_work
14.22 ± 21% +71.4 85.63 ± 2% perf-profile.children.cycles-pp.io_write
14.10 ± 21% +71.5 85.62 ± 2% perf-profile.children.cycles-pp.ext4_buffered_write_iter
0.00 +80.9 80.95 ± 2% perf-profile.children.cycles-pp.osq_lock
0.00 +83.0 83.00 ± 2% perf-profile.children.cycles-pp.rwsem_optimistic_spin
0.00 +83.6 83.63 ± 2% perf-profile.children.cycles-pp.rwsem_down_write_slowpath
40.90 ± 3% -31.3 9.63 ± 20% perf-profile.self.cycles-pp.mwait_idle_with_hints
8.22 ± 9% -7.1 1.08 ± 14% perf-profile.self.cycles-pp.copy_page_from_iter_atomic
5.02 ± 20% -4.3 0.71 ± 5% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
2.31 ± 61% -2.0 0.26 ± 25% perf-profile.self.cycles-pp.ktime_get
1.89 ± 12% -1.7 0.17 ± 8% perf-profile.self.cycles-pp.cpuidle_enter_state
1.64 ± 15% -1.5 0.14 ± 20% perf-profile.self.cycles-pp.menu_select
1.42 ± 20% -1.3 0.15 ± 7% perf-profile.self.cycles-pp.__block_commit_write
1.26 ± 7% -1.1 0.14 ± 6% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.86 ± 5% -0.7 0.12 ± 13% perf-profile.self.cycles-pp.xas_load
0.83 ± 17% -0.7 0.12 ± 10% perf-profile.self.cycles-pp.fault_in_readable
0.77 ± 14% -0.7 0.09 ± 8% perf-profile.self.cycles-pp.lapic_next_deadline
0.69 ± 9% -0.6 0.08 ± 16% perf-profile.self.cycles-pp._raw_spin_lock
0.70 ± 8% -0.6 0.09 ± 5% perf-profile.self.cycles-pp.native_irq_return_iret
0.59 ± 8% -0.6 0.02 ± 99% perf-profile.self.cycles-pp.io_serial_in
0.50 ± 6% -0.5 0.03 ±100% perf-profile.self.cycles-pp.read_tsc
0.51 ± 16% -0.5 0.06 ± 9% perf-profile.self.cycles-pp.native_sched_clock
0.46 ± 10% -0.4 0.06 ± 8% perf-profile.self.cycles-pp.__might_resched
0.43 ± 11% -0.4 0.03 ± 70% perf-profile.self.cycles-pp.ext4_bio_write_page
0.42 ± 8% -0.4 0.05 ± 8% perf-profile.self.cycles-pp.jbd2_journal_grab_journal_head
0.38 ± 11% -0.3 0.03 ± 70% perf-profile.self.cycles-pp.__mod_node_page_state
0.36 ± 15% -0.3 0.02 ± 99% perf-profile.self.cycles-pp.release_pages
0.37 ± 62% -0.3 0.05 ± 45% perf-profile.self.cycles-pp.ktime_get_update_offsets_now
0.21 ± 19% -0.2 0.06 ± 8% perf-profile.self.cycles-pp.update_sg_lb_stats
0.18 ± 16% -0.2 0.03 ±100% perf-profile.self.cycles-pp.__orc_find
0.19 ± 8% -0.1 0.08 ± 15% perf-profile.self.cycles-pp.unwind_next_frame
0.00 +2.6 2.62 ± 2% perf-profile.self.cycles-pp.rwsem_spin_on_owner
0.00 +80.4 80.42 ± 2% perf-profile.self.cycles-pp.osq_lock
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://01.org/lkp
[printk] 8e27473211: hwsim.ap_ft_pmf_bip_cmac_128.fail
by kernel test robot
(Please note: we reported
"[printk] 8e27473211: hwsim.ap_ft_pmf_bip_gmac_256.fail"
at
https://lore.kernel.org/all/[email protected]/
while this commit was on linux-next/master.
Now that the commit is in mainline, we noticed other hwsim test
failures, while the ap_ft_pmf_bip_gmac_256 problem still exists.
FYI)
Greeting,
FYI, we noticed the following commit (built with gcc-11):
commit: 8e274732115f63c1d09136284431b3555bd5cc56 ("printk: extend console_lock for per-console locking")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: hwsim
version: hwsim-x86_64-717e5d7-1_20220411
with following parameters:
test: group-18
ucode: 0xec
on test machine: 4 threads Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz with 32G memory
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <oliver.sang(a)intel.com>
Please note: besides ap_ft_pmf_bip_cmac_128, we also noticed two other
tests in this group that fail on this commit but pass on its parent.
09c5ba0aa2fcfdad 8e274732115f63c1d0913628443
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:6 50% 3:3 hwsim.ap_ft_eap_cui.fail
:6 50% 3:3 hwsim.ap_ft_many.fail
:6 50% 3:3 hwsim.ap_ft_pmf_bip_cmac_128.fail
2022-05-28 14:39:16 ./run-tests.py ap_ft_eap_cui
DEV: wlan0: 02:00:00:00:00:00
DEV: wlan1: 02:00:00:00:01:00
DEV: wlan2: 02:00:00:00:02:00
APDEV: wlan3
APDEV: wlan4
START ap_ft_eap_cui 1/1
Test: WPA2-EAP-FT AP with CUI
Starting AP wlan3
Starting AP wlan4
Connect to first AP
Connect STA wlan0 to AP
Roam to the second AP
Roam back to the first AP
Roaming association rejected
Traceback (most recent call last):
File "./run-tests.py", line 533, in main
t(dev, apdev)
File "/lkp/benchmarks/hwsim/tests/hwsim/test_ap_ft.py", line 1519, in test_ap_ft_eap_cui
generic_ap_ft_eap(dev, apdev, vlan=False, cui=True)
File "/lkp/benchmarks/hwsim/tests/hwsim/test_ap_ft.py", line 1487, in generic_ap_ft_eap
conndev=conndev, only_one_way=only_one_way)
File "/lkp/benchmarks/hwsim/tests/hwsim/test_ap_ft.py", line 273, in run_roams
dev.roam(ap1['bssid'])
File "/lkp/benchmarks/hwsim/tests/hwsim/wpasupplicant.py", line 1235, in roam
raise Exception("Roaming association rejected")
Exception: Roaming association rejected
FAIL ap_ft_eap_cui 2.019561 2022-05-28 14:39:18.963140
passed 0 test case(s)
skipped 0 test case(s)
failed tests: ap_ft_eap_cui
2022-05-28 14:39:18 ./run-tests.py ap_ft_many
DEV: wlan0: 02:00:00:00:00:00
DEV: wlan1: 02:00:00:00:01:00
DEV: wlan2: 02:00:00:00:02:00
APDEV: wlan3
APDEV: wlan4
START ap_ft_many 1/1
Test: WPA2-PSK-FT AP multiple times
Starting AP wlan3
Starting AP wlan4
Connect to first AP
Connect STA wlan0 to AP
Roam to the second AP
Roam back to the first AP
Roaming association rejected
Traceback (most recent call last):
File "./run-tests.py", line 533, in main
t(dev, apdev)
File "/lkp/benchmarks/hwsim/tests/hwsim/test_ap_ft.py", line 437, in test_ap_ft_many
run_roams(dev[0], apdev, hapd0, hapd1, ssid, passphrase, roams=50)
File "/lkp/benchmarks/hwsim/tests/hwsim/test_ap_ft.py", line 273, in run_roams
dev.roam(ap1['bssid'])
File "/lkp/benchmarks/hwsim/tests/hwsim/wpasupplicant.py", line 1235, in roam
raise Exception("Roaming association rejected")
Exception: Roaming association rejected
FAIL ap_ft_many 1.922176 2022-05-28 14:39:21.058942
passed 0 test case(s)
skipped 0 test case(s)
failed tests: ap_ft_many
...
2022-05-28 14:39:22 ./run-tests.py ap_ft_pmf_bip_cmac_128
DEV: wlan0: 02:00:00:00:00:00
DEV: wlan1: 02:00:00:00:01:00
DEV: wlan2: 02:00:00:00:02:00
APDEV: wlan3
APDEV: wlan4
START ap_ft_pmf_bip_cmac_128 1/1
Test: WPA2-PSK-FT AP with PMF/BIP-CMAC-128
Starting AP wlan3
Starting AP wlan4
Connect to first AP
Connect STA wlan0 to AP
Roam to the second AP
Roam back to the first AP
Roaming association rejected
Traceback (most recent call last):
File "./run-tests.py", line 533, in main
t(dev, apdev)
File "/lkp/benchmarks/hwsim/tests/hwsim/test_ap_ft.py", line 549, in test_ap_ft_pmf_bip_cmac_128
run_ap_ft_pmf_bip(dev, apdev, "AES-128-CMAC")
File "/lkp/benchmarks/hwsim/tests/hwsim/test_ap_ft.py", line 580, in run_ap_ft_pmf_bip
group_mgmt=cipher)
File "/lkp/benchmarks/hwsim/tests/hwsim/test_ap_ft.py", line 273, in run_roams
dev.roam(ap1['bssid'])
File "/lkp/benchmarks/hwsim/tests/hwsim/wpasupplicant.py", line 1235, in roam
raise Exception("Roaming association rejected")
Exception: Roaming association rejected
FAIL ap_ft_pmf_bip_cmac_128 1.981388 2022-05-28 14:39:24.770954
passed 0 test case(s)
skipped 0 test case(s)
failed tests: ap_ft_pmf_bip_cmac_128
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp dirs to run from a clean state.
--
0-DAY CI Kernel Test Service
https://01.org/lkp
[selftests/kselftest] 3c89247543: kernel-selftests.vm.make_fail
by kernel test robot
Greeting,
FYI, we noticed the following commit (built with gcc-11):
commit: 3c89247543c0ca796c2bf7c3c15ba82d6f5d2c47 ("[PATCH v1] selftests/kselftest: Make failed tests exit with 1")
url: https://github.com/intel-lab-lkp/linux/commits/Micka-l-Sala-n/selftests-k...
patch link: https://lore.kernel.org/lkml/[email protected]
in testcase: kernel-selftests
version: kernel-selftests-x86_64-8d3977ef-1_20220523
with following parameters:
sc_nr_hugepages: 2
group: vm
ucode: 0xc2
test-description: The kernel contains a set of "self tests" under the tools/testing/selftests/ directory. These are intended to be small unit tests to exercise individual code paths in the kernel.
test-url: https://www.kernel.org/doc/Documentation/kselftest.txt
on test machine: 20 threads 1 socket Comet Lake with 16G memory
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <oliver.sang(a)intel.com>
KERNEL SELFTESTS: linux_headers_dir is /usr/src/linux-headers-x86_64-rhel-8.3-kselftests-3c89247543c0ca796c2bf7c3c15ba82d6f5d2c47
2022-05-29 09:56:50 ln -sf /usr/bin/clang
2022-05-29 09:56:50 ln -sf /usr/bin/llc
2022-05-29 09:56:51 sed -i s/default_timeout=45/default_timeout=300/ kselftest/runner.sh
LKP WARN miss config CONFIG_MEM_SOFT_DIRTY= of vm/config
2022-05-29 09:56:51 make -C vm
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-3c89247543c0ca796c2bf7c3c15ba82d6f5d2c47/tools/testing/selftests/vm'
/bin/sh ./check_config.sh gcc
make --no-builtin-rules ARCH=x86 -C ../../../.. headers_install
make[1]: Entering directory '/usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-3c89247543c0ca796c2bf7c3c15ba82d6f5d2c47'
....
ok 4 selftests: vm: run_vmtests.sh # SKIP
make: *** [../lib.mk:94: run_tests] Error 1
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-3c89247543c0ca796c2bf7c3c15ba82d6f5d2c47/tools/testing/selftests/vm'
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp dirs to run from a clean state.
--
0-DAY CI Kernel Test Service
https://01.org/lkp
[net] d5a42de8bd: BUG:unable_to_handle_page_fault_for_address
by kernel test robot
Greeting,
FYI, we noticed the following commit (built with gcc-11):
commit: d5a42de8bdbe25081f07b801d8b35f4d75a791f4 ("net: Add a second bind table hashed by port and address")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: netperf
version: netperf-x86_64-2.7-0_20220502
with following parameters:
ip: ipv4
runtime: 300s
nr_threads: 16
cluster: cs-localhost
test: TCP_CRR
cpufreq_governor: performance
ucode: 0x7002402
test-description: Netperf is a benchmark that can be used to measure various aspects of networking performance.
test-url: http://www.netperf.org/netperf/
on test machine: 144 threads 4 sockets Intel(R) Xeon(R) Gold 5318H CPU @ 2.50GHz with 128G memory
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <oliver.sang(a)intel.com>
[ 54.916314][ T6840] BUG: unable to handle page fault for address: 00007f9251155fe0
[ 54.924136][ T6840] #PF: supervisor read access in kernel mode
[ 54.930148][ T6840] #PF: error_code(0x0000) - not-present page
[ 54.936113][ T6840] PGD 104c3a067 P4D 104c3a067 PUD 0
[ 54.941381][ T6840] Oops: 0000 [#1] SMP NOPTI
[ 54.945874][ T6840] CPU: 92 PID: 6840 Comm: netperf Not tainted 5.18.0-rc7-01831-gd5a42de8bdbe #1
[ 54.954875][ T6840] RIP: 0010:inet_bind2_bucket_find (include/net/net_namespace.h:361 include/net/inet_hashtables.h:116 net/ipv4/inet_hashtables.c:765 net/ipv4/inet_hashtables.c:819)
[ 54.960937][ T6840] Code: 57 ff 21 d0 49 8b 55 30 48 8d 34 c2 48 8b 06 48 85 c0 75 10 eb 45 48 39 d3 74 57 48 8b 40 20 48 85 c0 74 37 48 83 e8 20 74 31 <48> 8b 10 66 83 fd 0a 75 e3 48 39 d3 75 e3 66 3b 48 0c 75 dd 44 3b
All code
========
0: 57 push %rdi
1: ff 21 jmpq *(%rcx)
3: d0 49 8b rorb -0x75(%rcx)
6: 55 push %rbp
7: 30 48 8d xor %cl,-0x73(%rax)
a: 34 c2 xor $0xc2,%al
c: 48 8b 06 mov (%rsi),%rax
f: 48 85 c0 test %rax,%rax
12: 75 10 jne 0x24
14: eb 45 jmp 0x5b
16: 48 39 d3 cmp %rdx,%rbx
19: 74 57 je 0x72
1b: 48 8b 40 20 mov 0x20(%rax),%rax
1f: 48 85 c0 test %rax,%rax
22: 74 37 je 0x5b
24: 48 83 e8 20 sub $0x20,%rax
28: 74 31 je 0x5b
2a:* 48 8b 10 mov (%rax),%rdx <-- trapping instruction
2d: 66 83 fd 0a cmp $0xa,%bp
31: 75 e3 jne 0x16
33: 48 39 d3 cmp %rdx,%rbx
36: 75 e3 jne 0x1b
38: 66 3b 48 0c cmp 0xc(%rax),%cx
3c: 75 dd jne 0x1b
3e: 44 rex.R
3f: 3b .byte 0x3b
Code starting with the faulting instruction
===========================================
0: 48 8b 10 mov (%rax),%rdx
3: 66 83 fd 0a cmp $0xa,%bp
7: 75 e3 jne 0xffffffffffffffec
9: 48 39 d3 cmp %rdx,%rbx
c: 75 e3 jne 0xfffffffffffffff1
e: 66 3b 48 0c cmp 0xc(%rax),%cx
12: 75 dd jne 0xfffffffffffffff1
14: 44 rex.R
15: 3b .byte 0x3b
[ 54.980691][ T6840] RSP: 0018:ffffc90023997d98 EFLAGS: 00010202
[ 54.986781][ T6840] RAX: 00007f9251155fe0 RBX: ffffffff837ac900 RCX: 000000000000de4b
[ 54.994782][ T6840] RDX: ffffffff82201160 RSI: ffffc90009e23078 RDI: 0000000000010000
[ 55.002785][ T6840] RBP: 0000000000000002 R08: 000000000000de4b R09: ffffc90023997df0
[ 55.010785][ T6840] R10: 0000000000000000 R11: 0000000000000000 R12: ffff8888957808c0
[ 55.018781][ T6840] R13: ffffffff837af540 R14: 0000000000000000 R15: ffffc90023997df0
[ 55.026784][ T6840] FS: 00007fb1e688d740(0000) GS:ffff88905f900000(0000) knlGS:0000000000000000
[ 55.035748][ T6840] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 55.042367][ T6840] CR2: 00007f9251155fe0 CR3: 000000017a86c001 CR4: 00000000007706e0
[ 55.050381][ T6840] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 55.058397][ T6840] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 55.066417][ T6840] PKRU: 55555554
[ 55.070014][ T6840] Call Trace:
[ 55.073349][ T6840] <TASK>
[ 55.076340][ T6840] inet_csk_get_port (net/ipv4/inet_connection_sock.c:492)
[ 55.081331][ T6840] __inet_bind (net/ipv4/af_inet.c:525)
[ 55.085801][ T6840] __sys_bind (net/socket.c:1744)
[ 55.090093][ T6840] ? __sys_setsockopt (include/linux/file.h:32 net/socket.c:2231)
[ 55.095074][ T6840] __x64_sys_bind (net/socket.c:1755 net/socket.c:1753 net/socket.c:1753)
[ 55.099616][ T6840] do_syscall_64 (arch/x86/entry/common.c:50 arch/x86/entry/common.c:80)
[ 55.104070][ T6840] entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:115)
[ 55.110000][ T6840] RIP: 0033:0x7fb1e6989ec7
[ 55.114459][ T6840] Code: ff ff ff ff c3 48 8b 15 c7 ff 0b 00 f7 d8 64 89 02 b8 ff ff ff ff eb ba 66 2e 0f 1f 84 00 00 00 00 00 90 b8 31 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 99 ff 0b 00 f7 d8 64 89 01 48
All code
========
0: ff (bad)
1: ff (bad)
2: ff (bad)
3: ff c3 inc %ebx
5: 48 8b 15 c7 ff 0b 00 mov 0xbffc7(%rip),%rdx # 0xbffd3
c: f7 d8 neg %eax
e: 64 89 02 mov %eax,%fs:(%rdx)
11: b8 ff ff ff ff mov $0xffffffff,%eax
16: eb ba jmp 0xffffffffffffffd2
18: 66 2e 0f 1f 84 00 00 nopw %cs:0x0(%rax,%rax,1)
1f: 00 00 00
22: 90 nop
23: b8 31 00 00 00 mov $0x31,%eax
28: 0f 05 syscall
2a:* 48 3d 01 f0 ff ff cmp $0xfffffffffffff001,%rax <-- trapping instruction
30: 73 01 jae 0x33
32: c3 retq
33: 48 8b 0d 99 ff 0b 00 mov 0xbff99(%rip),%rcx # 0xbffd3
3a: f7 d8 neg %eax
3c: 64 89 01 mov %eax,%fs:(%rcx)
3f: 48 rex.W
Code starting with the faulting instruction
===========================================
0: 48 3d 01 f0 ff ff cmp $0xfffffffffffff001,%rax
6: 73 01 jae 0x9
8: c3 retq
9: 48 8b 0d 99 ff 0b 00 mov 0xbff99(%rip),%rcx # 0xbffa9
10: f7 d8 neg %eax
12: 64 89 01 mov %eax,%fs:(%rcx)
15: 48 rex.W
[ 55.134300][ T6840] RSP: 002b:00007ffea9914708 EFLAGS: 00000246 ORIG_RAX: 0000000000000031
[ 55.142774][ T6840] RAX: ffffffffffffffda RBX: 000055ede299b508 RCX: 00007fb1e6989ec7
[ 55.150812][ T6840] RDX: 0000000000000010 RSI: 000055ede404b350 RDI: 0000000000000006
[ 55.158856][ T6840] RBP: 00007ffea9914760 R08: 0000000000000004 R09: 0000000000000000
[ 55.166900][ T6840] R10: 00007ffea9914730 R11: 0000000000000246 R12: 000055ede299b4d8
[ 55.174944][ T6840] R13: 00007ffea9914ac0 R14: 0000000000000000 R15: 0000000000000000
[ 55.182998][ T6840] </TASK>
[ 55.186105][ T6840] Modules linked in: binfmt_misc btrfs blake2b_generic xor raid6_pq intel_rapl_msr zstd_compress ipmi_ssif intel_rapl_common libcrc32c ast drm_vram_helper drm_ttm_helper ttm skx_edac nfit drm_kms_helper nvme libnvdimm x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel rapl intel_cstate nvme_core syscopyarea t10_pi sysfillrect ahci sysimgblt crc64_rocksoft_generic fb_sys_fops libahci mei_me crc64_rocksoft intel_uncore ioatdma drm crc64 mei intel_pch_thermal acpi_ipmi libata joydev dca wmi ipmi_si ipmi_devintf ipmi_msghandler acpi_pad acpi_power_meter ip_tables
[ 55.245877][ T6840] CR2: 00007f9251155fe0
[ 55.250154][ T6840] ---[ end trace 0000000000000000 ]---
[ 55.287285][ T6840] RIP: 0010:inet_bind2_bucket_find (include/net/net_namespace.h:361 include/net/inet_hashtables.h:116 net/ipv4/inet_hashtables.c:765 net/ipv4/inet_hashtables.c:819)
[ 55.293481][ T6840] Code: 57 ff 21 d0 49 8b 55 30 48 8d 34 c2 48 8b 06 48 85 c0 75 10 eb 45 48 39 d3 74 57 48 8b 40 20 48 85 c0 74 37 48 83 e8 20 74 31 <48> 8b 10 66 83 fd 0a 75 e3 48 39 d3 75 e3 66 3b 48 0c 75 dd 44 3b
All code
========
0: 57 push %rdi
1: ff 21 jmpq *(%rcx)
3: d0 49 8b rorb -0x75(%rcx)
6: 55 push %rbp
7: 30 48 8d xor %cl,-0x73(%rax)
a: 34 c2 xor $0xc2,%al
c: 48 8b 06 mov (%rsi),%rax
f: 48 85 c0 test %rax,%rax
12: 75 10 jne 0x24
14: eb 45 jmp 0x5b
16: 48 39 d3 cmp %rdx,%rbx
19: 74 57 je 0x72
1b: 48 8b 40 20 mov 0x20(%rax),%rax
1f: 48 85 c0 test %rax,%rax
22: 74 37 je 0x5b
24: 48 83 e8 20 sub $0x20,%rax
28: 74 31 je 0x5b
2a:* 48 8b 10 mov (%rax),%rdx <-- trapping instruction
2d: 66 83 fd 0a cmp $0xa,%bp
31: 75 e3 jne 0x16
33: 48 39 d3 cmp %rdx,%rbx
36: 75 e3 jne 0x1b
38: 66 3b 48 0c cmp 0xc(%rax),%cx
3c: 75 dd jne 0x1b
3e: 44 rex.R
3f: 3b .byte 0x3b
Code starting with the faulting instruction
===========================================
0: 48 8b 10 mov (%rax),%rdx
3: 66 83 fd 0a cmp $0xa,%bp
7: 75 e3 jne 0xffffffffffffffec
9: 48 39 d3 cmp %rdx,%rbx
c: 75 e3 jne 0xfffffffffffffff1
e: 66 3b 48 0c cmp 0xc(%rax),%cx
12: 75 dd jne 0xfffffffffffffff1
14: 44 rex.R
15: 3b .byte 0x3b
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp dirs to run from a clean state.
--
0-DAY CI Kernel Test Service
https://01.org/lkp