[iomap] f1c8b8b2e6: leaking-addresses.proc.printk_formats._3.bios0_pages0_startsector0x0bi_vcnt0_bi_size0
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-9):
commit: f1c8b8b2e616895aa0f5be4e53d4cd1ffa751001 ("iomap: Address soft lockup in iomap_finish_ioend()")
url: https://github.com/0day-ci/linux/commits/UPDATE-20220111-073805/trondmy-k...
in testcase: leaking-addresses
version: leaking-addresses-x86_64-4f19048-1_20220117
with the following parameters:
ucode: 0x28
on test machine: 8 threads 1 socket Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz with 16G memory
caused the below changes (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <oliver.sang@intel.com>
2022-01-18 03:18:41 ./leaking_addresses.pl --output-raw result/scan.out
2022-01-18 03:19:04 ./leaking_addresses.pl --input-raw result/scan.out --squash-by-filename
Total number of results from scan (incl dmesg): 166437
dmesg output:
[ 2.162691] mapped IOAPIC to ffffffffff5fb000 (fec00000)
Results squashed by filename (excl dmesg). Displaying [<number of results> <filename>], <example result>
...
[163 printk_formats] 0xffffffff83b6a2c0 : "3. bios 0, pages 0, start sector 0x0 bi_vcnt 0, bi_size 0"
...
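For reference (not part of the original report), the two log lines above are the standard two-step leaking_addresses.pl workflow; the sketch below uses only the options shown in the log, and result/scan.out is simply the path from the log:
# step 1: scan the running system and write the raw hits to a file
./leaking_addresses.pl --output-raw result/scan.out
# step 2: post-process the saved scan, grouping the hits by filename
./leaking_addresses.pl --input-raw result/scan.out --squash-by-filename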
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp directories to run from a clean state.
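For convenience, the reproduce steps above can be collected into a single script. This is only a sketch: it assumes the attached job.yaml has been saved into the lkp-tests checkout, and generated-yaml-file is a placeholder for whatever yaml file split-job actually emits for this job:
#!/bin/sh
set -e
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml             # install the job's dependencies (job.yaml is the email attachment)
bin/lkp split-job --compatible job.yaml   # generate the yaml file(s) for lkp run
sudo bin/lkp run generated-yaml-file      # substitute the yaml file produced by the previous step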
---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation
Thanks,
Oliver Sang
[sched/fair] 8b4e74ccb5: stress-ng.sem.ops_per_sec 12.0% improvement
by kernel test robot
Greetings,
FYI, we noticed a 12.0% improvement of stress-ng.sem.ops_per_sec due to commit:
commit: 8b4e74ccb582797f6f0b0a50372ebd9fd2372a27 ("sched/fair: Fix detection of per-CPU kthreads waking a task")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: stress-ng
on test machine: 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 512G memory
with the following parameters:
nr_threads: 100%
testtime: 60s
sc_pid_max: 4194304
class: scheduler
test: sem
cpufreq_governor: performance
ucode: 0x5003102
Details are below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp directories to run from a clean state.
=========================================================================================
class/compiler/cpufreq_governor/kconfig/nr_threads/rootfs/sc_pid_max/tbox_group/test/testcase/testtime/ucode:
scheduler/gcc-9/performance/x86_64-rhel-8.3/100%/debian-10.4-x86_64-20200603.cgz/4194304/lkp-csl-2sp7/sem/stress-ng/60s/0x5003102
commit:
8c92606ab8 ("sched/cpuacct: Make user/system times in cpuacct.stat more precise")
8b4e74ccb5 ("sched/fair: Fix detection of per-CPU kthreads waking a task")
8c92606ab81086db 8b4e74ccb582797f6f0b0a50372
---------------- ---------------------------
%stddev %change %stddev
\ | \
4.467e+08 +12.0% 5.003e+08 ± 2% stress-ng.sem.ops
7445762 +12.0% 8337795 ± 2% stress-ng.sem.ops_per_sec
44238040 ± 3% -41.7% 25806960 ± 6% stress-ng.time.involuntary_context_switches
19795 +14.1% 22585 ± 2% stress-ng.time.minor_page_faults
1114 +61.3% 1798 ± 4% stress-ng.time.percent_of_cpu_this_job_got
514.41 ± 2% +43.5% 738.12 ± 3% stress-ng.time.system_time
177.69 ± 2% +112.8% 378.12 ± 17% stress-ng.time.user_time
2.234e+08 +11.9% 2.5e+08 ± 2% stress-ng.time.voluntary_context_switches
1.344e+08 +64.3% 2.209e+08 ± 3% cpuidle..usage
6.74 ± 4% +7.4 14.13 ± 8% mpstat.cpu.all.irq%
0.11 ± 6% -0.0 0.10 ± 4% mpstat.cpu.all.soft%
23961 ± 6% +14.9% 27543 ± 7% meminfo.Active
21572 ± 7% +17.5% 25338 ± 7% meminfo.Active(anon)
34792 ± 4% +13.6% 39509 ± 4% meminfo.Shmem
21170 ± 7% +13.8% 24094 ± 8% numa-meminfo.node1.Active
20286 ± 8% +17.4% 23826 ± 8% numa-meminfo.node1.Active(anon)
25568 ± 19% +33.0% 34003 ± 12% numa-meminfo.node1.Shmem
5074 ± 8% +16.7% 5924 ± 7% numa-vmstat.node1.nr_active_anon
6395 ± 19% +32.7% 8489 ± 12% numa-vmstat.node1.nr_shmem
5074 ± 8% +16.7% 5924 ± 7% numa-vmstat.node1.nr_zone_active_anon
18.14 ± 12% +63.3% 29.62 ± 13% vmstat.procs.r
5829450 ± 3% +22.4% 7135732 ± 4% vmstat.system.cs
1569944 ± 2% -7.9% 1446325 ± 3% vmstat.system.in
856.00 ± 2% +14.8% 982.38 ± 6% turbostat.Avg_MHz
30.83 ± 2% +4.5 35.35 ± 6% turbostat.Busy%
54314337 ± 2% +258.7% 1.948e+08 ± 3% turbostat.C1
20.89 ± 3% +37.2 58.10 ± 4% turbostat.C1%
78271316 -73.8% 20522940 ± 6% turbostat.C1E
51.84 ± 3% -33.5 18.33 ± 20% turbostat.C1E%
151.14 -3.0% 146.68 turbostat.RAMWatt
5378 ± 8% +17.8% 6335 ± 7% proc-vmstat.nr_active_anon
10448 +2.5% 10708 proc-vmstat.nr_mapped
8686 ± 5% +13.8% 9883 ± 4% proc-vmstat.nr_shmem
5378 ± 8% +17.8% 6335 ± 7% proc-vmstat.nr_zone_active_anon
5980 ± 2% +42.9% 8548 ± 6% proc-vmstat.numa_hint_faults
4552 ± 2% +45.7% 6633 ± 7% proc-vmstat.numa_hint_faults_local
242021 ± 2% +55.3% 375758 ± 8% proc-vmstat.numa_pte_updates
11361 ± 4% +33.2% 15138 ± 4% proc-vmstat.pgactivate
15.52 ± 2% +27.3% 19.76 ± 5% perf-stat.i.MPKI
8.682e+09 ± 3% -8.4% 7.954e+09 ± 4% perf-stat.i.branch-instructions
2.00 ± 2% -0.3 1.69 ± 3% perf-stat.i.branch-miss-rate%
1.742e+08 ± 2% -23.7% 1.329e+08 ± 5% perf-stat.i.branch-misses
9.01 ± 13% -6.3 2.75 ± 16% perf-stat.i.cache-miss-rate%
63223447 ± 16% -73.9% 16490736 ± 30% perf-stat.i.cache-misses
6124406 ± 3% +22.8% 7522839 ± 4% perf-stat.i.context-switches
1.95 ± 3% +33.1% 2.59 ± 5% perf-stat.i.cpi
8.583e+10 +18.4% 1.016e+11 ± 6% perf-stat.i.cpu-cycles
777439 ± 3% +212.1% 2426371 ± 4% perf-stat.i.cpu-migrations
1661 ± 9% +290.2% 6484 ± 18% perf-stat.i.cycles-between-cache-misses
0.03 ± 18% +0.1 0.08 ± 19% perf-stat.i.dTLB-load-miss-rate%
4643099 ± 6% +122.7% 10338382 ± 17% perf-stat.i.dTLB-load-misses
1.401e+10 ± 2% -18.1% 1.148e+10 ± 4% perf-stat.i.dTLB-loads
0.01 ± 10% +0.0 0.03 ± 7% perf-stat.i.dTLB-store-miss-rate%
1004339 ± 3% +127.6% 2285509 ± 6% perf-stat.i.dTLB-store-misses
7.573e+09 ± 3% -7.5% 7.004e+09 ± 4% perf-stat.i.dTLB-stores
47522496 ± 3% +22.5% 58208580 ± 5% perf-stat.i.iTLB-load-misses
80073958 ± 3% +15.4% 92424903 ± 4% perf-stat.i.iTLB-loads
4.505e+10 ± 3% -13.0% 3.918e+10 ± 4% perf-stat.i.instructions
1114 ± 7% -21.0% 879.54 ± 6% perf-stat.i.instructions-per-iTLB-miss
0.52 ± 2% -24.3% 0.40 ± 4% perf-stat.i.ipc
0.89 +18.4% 1.06 ± 6% perf-stat.i.metric.GHz
1442 ± 5% -81.2% 270.89 ± 7% perf-stat.i.metric.K/sec
322.94 ± 2% -11.8% 284.86 ± 4% perf-stat.i.metric.M/sec
37742898 ± 17% -82.3% 6663637 ± 16% perf-stat.i.node-load-misses
1795169 ± 28% -81.5% 332344 ± 41% perf-stat.i.node-loads
9826294 ± 16% -54.8% 4439792 ± 13% perf-stat.i.node-store-misses
16.34 ± 3% +28.2% 20.94 ± 2% perf-stat.overall.MPKI
2.01 ± 2% -0.3 1.67 perf-stat.overall.branch-miss-rate%
8.61 ± 17% -6.6 1.99 ± 24% perf-stat.overall.cache-miss-rate%
1.91 ± 2% +36.2% 2.60 ± 5% perf-stat.overall.cpi
1396 ± 17% +365.9% 6504 ± 18% perf-stat.overall.cycles-between-cache-misses
0.03 ± 7% +0.1 0.09 ± 18% perf-stat.overall.dTLB-load-miss-rate%
0.01 ± 4% +0.0 0.03 ± 6% perf-stat.overall.dTLB-store-miss-rate%
37.24 +1.4 38.64 perf-stat.overall.iTLB-load-miss-rate%
948.40 -29.0% 673.67 ± 3% perf-stat.overall.instructions-per-iTLB-miss
0.52 ± 2% -26.4% 0.39 ± 5% perf-stat.overall.ipc
8.551e+09 ± 3% -8.4% 7.834e+09 ± 4% perf-stat.ps.branch-instructions
1.716e+08 ± 2% -23.7% 1.31e+08 ± 5% perf-stat.ps.branch-misses
62268255 ± 16% -73.9% 16242861 ± 30% perf-stat.ps.cache-misses
6031679 ± 3% +22.9% 7410005 ± 4% perf-stat.ps.context-switches
8.453e+10 +18.4% 1.001e+11 ± 6% perf-stat.ps.cpu-cycles
765671 ± 3% +212.1% 2389979 ± 4% perf-stat.ps.cpu-migrations
4572862 ± 6% +122.7% 10182919 ± 17% perf-stat.ps.dTLB-load-misses
1.38e+10 ± 2% -18.1% 1.131e+10 ± 4% perf-stat.ps.dTLB-loads
989144 ± 3% +127.6% 2251210 ± 6% perf-stat.ps.dTLB-store-misses
7.459e+09 ± 3% -7.5% 6.899e+09 ± 4% perf-stat.ps.dTLB-stores
46803103 ± 3% +22.5% 57335214 ± 5% perf-stat.ps.iTLB-load-misses
78861849 ± 3% +15.4% 91038604 ± 4% perf-stat.ps.iTLB-loads
4.437e+10 ± 3% -13.0% 3.859e+10 ± 4% perf-stat.ps.instructions
37172598 ± 17% -82.3% 6563822 ± 16% perf-stat.ps.node-load-misses
1768067 ± 28% -81.5% 327355 ± 41% perf-stat.ps.node-loads
9677815 ± 16% -54.8% 4373330 ± 13% perf-stat.ps.node-store-misses
2.969e+12 -12.6% 2.596e+12 ± 2% perf-stat.total.instructions
21644 ± 7% -24.9% 16265 ± 5% softirqs.CPU0.SCHED
18130 ± 5% -31.1% 12498 ± 5% softirqs.CPU1.SCHED
16233 ± 14% -32.3% 10986 ± 5% softirqs.CPU10.SCHED
16783 ± 6% -33.3% 11197 ± 6% softirqs.CPU11.SCHED
17251 ± 7% -34.8% 11249 ± 7% softirqs.CPU12.SCHED
17471 ± 8% -35.1% 11336 ± 6% softirqs.CPU13.SCHED
16679 ± 5% -33.1% 11152 ± 5% softirqs.CPU14.SCHED
16761 ± 6% -35.4% 10824 ± 5% softirqs.CPU15.SCHED
17013 ± 7% -34.3% 11183 ± 6% softirqs.CPU16.SCHED
16918 ± 9% -33.5% 11253 ± 7% softirqs.CPU17.SCHED
16822 ± 9% -33.8% 11140 ± 5% softirqs.CPU18.SCHED
16936 ± 4% -35.0% 11016 ± 4% softirqs.CPU19.SCHED
17552 ± 7% -33.6% 11659 ± 5% softirqs.CPU2.SCHED
16941 ± 6% -34.3% 11138 ± 5% softirqs.CPU20.SCHED
16867 ± 5% -34.7% 11007 ± 6% softirqs.CPU21.SCHED
16719 ± 6% -34.6% 10928 ± 5% softirqs.CPU22.SCHED
16841 ± 6% -33.4% 11213 ± 2% softirqs.CPU23.SCHED
16985 ± 4% -35.5% 10959 ± 5% softirqs.CPU24.SCHED
16663 ± 6% -36.0% 10657 ± 5% softirqs.CPU25.SCHED
16596 ± 5% -35.1% 10772 ± 5% softirqs.CPU26.SCHED
16552 ± 5% -35.2% 10721 ± 5% softirqs.CPU27.SCHED
16422 ± 3% -34.5% 10754 ± 7% softirqs.CPU28.SCHED
16722 ± 5% -33.5% 11124 ± 7% softirqs.CPU29.SCHED
17412 ± 7% -33.6% 11565 ± 4% softirqs.CPU3.SCHED
16516 ± 3% -35.9% 10582 ± 5% softirqs.CPU30.SCHED
16570 ± 5% -36.3% 10548 ± 5% softirqs.CPU31.SCHED
16596 ± 4% -35.6% 10696 ± 5% softirqs.CPU32.SCHED
16922 ± 6% -36.6% 10731 ± 3% softirqs.CPU33.SCHED
16821 ± 4% -36.8% 10638 ± 6% softirqs.CPU34.SCHED
16569 ± 4% -34.1% 10914 ± 6% softirqs.CPU35.SCHED
16457 ± 4% -34.8% 10735 ± 6% softirqs.CPU36.SCHED
16429 ± 5% -34.5% 10754 ± 9% softirqs.CPU37.SCHED
16584 ± 5% -34.6% 10851 ± 5% softirqs.CPU38.SCHED
16548 ± 5% -35.2% 10731 ± 6% softirqs.CPU39.SCHED
17058 ± 7% -34.9% 11110 ± 3% softirqs.CPU4.SCHED
16577 ± 4% -34.0% 10940 ± 6% softirqs.CPU40.SCHED
16629 ± 4% -34.6% 10880 ± 6% softirqs.CPU41.SCHED
16515 ± 4% -35.7% 10626 ± 8% softirqs.CPU42.SCHED
16464 ± 5% -35.1% 10691 ± 6% softirqs.CPU43.SCHED
16571 ± 5% -35.4% 10711 ± 5% softirqs.CPU44.SCHED
17292 ± 5% -37.6% 10796 ± 4% softirqs.CPU45.SCHED
16664 ± 5% -35.4% 10759 ± 5% softirqs.CPU46.SCHED
16522 ± 5% -34.4% 10836 ± 4% softirqs.CPU47.SCHED
16448 ± 6% -35.5% 10616 ± 3% softirqs.CPU48.SCHED
16346 ± 7% -36.4% 10394 ± 4% softirqs.CPU49.SCHED
16985 ± 5% -35.3% 10989 ± 5% softirqs.CPU5.SCHED
16987 ± 5% -35.5% 10956 ± 7% softirqs.CPU50.SCHED
16814 ± 6% -35.0% 10934 ± 4% softirqs.CPU51.SCHED
17242 ± 9% -35.7% 11094 ± 8% softirqs.CPU52.SCHED
17045 ± 7% -35.5% 11001 ± 7% softirqs.CPU53.SCHED
16772 ± 5% -34.8% 10941 ± 5% softirqs.CPU54.SCHED
16750 ± 5% -34.9% 10900 ± 3% softirqs.CPU55.SCHED
16707 ± 6% -32.9% 11213 ± 7% softirqs.CPU56.SCHED
16896 ± 6% -34.5% 11067 ± 3% softirqs.CPU57.SCHED
16776 ± 5% -31.8% 11443 ± 9% softirqs.CPU58.SCHED
16629 ± 7% -32.8% 11180 ± 4% softirqs.CPU59.SCHED
17009 ± 7% -33.5% 11314 ± 7% softirqs.CPU6.SCHED
17329 ± 6% -36.6% 10989 ± 5% softirqs.CPU60.SCHED
16812 ± 9% -33.4% 11190 ± 5% softirqs.CPU61.SCHED
16819 ± 6% -34.1% 11091 ± 4% softirqs.CPU62.SCHED
17183 ± 10% -34.1% 11332 ± 4% softirqs.CPU63.SCHED
16738 ± 7% -32.3% 11338 ± 6% softirqs.CPU64.SCHED
16944 ± 7% -34.6% 11076 ± 6% softirqs.CPU65.SCHED
16760 ± 7% -33.1% 11216 ± 6% softirqs.CPU66.SCHED
16922 ± 7% -36.2% 10797 ± 6% softirqs.CPU67.SCHED
16820 ± 6% -33.9% 11113 ± 3% softirqs.CPU68.SCHED
16979 ± 7% -33.3% 11320 ± 10% softirqs.CPU69.SCHED
17352 ± 11% -37.4% 10865 ± 5% softirqs.CPU7.SCHED
17142 ± 6% -32.3% 11611 ± 6% softirqs.CPU70.SCHED
17104 ± 8% -35.0% 11112 ± 3% softirqs.CPU71.SCHED
16454 ± 6% -35.2% 10667 ± 5% softirqs.CPU73.SCHED
16722 ± 5% -33.4% 11140 ± 8% softirqs.CPU74.SCHED
16625 ± 5% -36.8% 10510 ± 5% softirqs.CPU75.SCHED
16673 ± 4% -35.2% 10803 ± 4% softirqs.CPU76.SCHED
16979 ± 8% -36.2% 10841 ± 5% softirqs.CPU77.SCHED
16656 ± 4% -35.9% 10685 ± 6% softirqs.CPU78.SCHED
16334 ± 5% -35.2% 10579 ± 5% softirqs.CPU79.SCHED
16846 ± 6% -32.4% 11380 ± 14% softirqs.CPU8.SCHED
16553 ± 5% -34.2% 10886 ± 4% softirqs.CPU80.SCHED
16619 ± 4% -34.5% 10885 ± 7% softirqs.CPU81.SCHED
16319 ± 5% -33.8% 10811 ± 6% softirqs.CPU82.SCHED
16729 ± 3% -34.6% 10939 ± 4% softirqs.CPU83.SCHED
16582 ± 5% -35.4% 10708 ± 4% softirqs.CPU84.SCHED
16404 ± 6% -34.6% 10729 ± 5% softirqs.CPU85.SCHED
16723 ± 4% -35.4% 10804 ± 5% softirqs.CPU86.SCHED
16754 ± 6% -35.9% 10732 ± 3% softirqs.CPU87.SCHED
16596 ± 3% -33.1% 11104 ± 8% softirqs.CPU88.SCHED
16847 ± 4% -35.2% 10918 ± 6% softirqs.CPU89.SCHED
17517 ± 7% -38.3% 10800 ± 5% softirqs.CPU9.SCHED
16389 ± 4% -35.9% 10502 ± 5% softirqs.CPU90.SCHED
16495 ± 5% -34.7% 10772 ± 3% softirqs.CPU91.SCHED
16486 ± 5% -35.2% 10689 ± 4% softirqs.CPU92.SCHED
16718 ± 5% -35.6% 10769 ± 4% softirqs.CPU93.SCHED
16798 ± 6% -36.4% 10677 ± 6% softirqs.CPU94.SCHED
16290 ± 6% -33.9% 10763 ± 6% softirqs.CPU95.SCHED
1615774 ± 5% -34.6% 1057271 ± 4% softirqs.SCHED
18.03 ± 8% -17.3 0.70 ± 19% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.do_nanosleep.hrtimer_nanosleep
17.66 ± 8% -17.2 0.45 ± 61% perf-profile.calltrace.cycles-pp.newidle_balance.pick_next_task_fair.__schedule.schedule.do_nanosleep
30.78 ± 4% -16.7 14.11 perf-profile.calltrace.cycles-pp.__x64_sys_nanosleep.do_syscall_64.entry_SYSCALL_64_after_hwframe.__nanosleep
27.75 ± 5% -16.6 11.11 perf-profile.calltrace.cycles-pp.schedule.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
27.63 ± 5% -16.6 11.03 perf-profile.calltrace.cycles-pp.__schedule.schedule.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep
29.85 ± 4% -16.6 13.28 perf-profile.calltrace.cycles-pp.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64.entry_SYSCALL_64_after_hwframe
30.21 ± 4% -16.6 13.64 perf-profile.calltrace.cycles-pp.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64.entry_SYSCALL_64_after_hwframe.__nanosleep
32.01 ± 4% -16.0 16.04 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__nanosleep
33.80 ± 3% -16.0 17.83 ± 3% perf-profile.calltrace.cycles-pp.__nanosleep
31.75 ± 4% -15.9 15.81 ± 3% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__nanosleep
15.86 ± 9% -15.8 0.07 ±264% perf-profile.calltrace.cycles-pp.load_balance.newidle_balance.pick_next_task_fair.__schedule.schedule
11.07 ± 9% -11.1 0.00 perf-profile.calltrace.cycles-pp.find_busiest_group.load_balance.newidle_balance.pick_next_task_fair.__schedule
10.84 ± 9% -10.8 0.00 perf-profile.calltrace.cycles-pp.update_sd_lb_stats.find_busiest_group.load_balance.newidle_balance.pick_next_task_fair
5.55 ± 7% -5.4 0.15 ±173% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
3.42 -1.0 2.43 perf-profile.calltrace.cycles-pp.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe.__sched_yield
2.36 ± 2% -0.4 1.97 perf-profile.calltrace.cycles-pp.schedule.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe.__sched_yield
2.30 ± 2% -0.4 1.93 perf-profile.calltrace.cycles-pp.__schedule.schedule.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.02 ± 5% -0.2 0.82 ± 7% [email protected]@GLIBC_2.2.5
1.45 ± 3% -0.2 1.26 ± 3% perf-profile.calltrace.cycles-pp.hrtimer_start_range_ns.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
0.96 ± 4% -0.1 0.84 ± 2% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.__x64_sys_sched_yield.do_syscall_64
0.85 ± 5% -0.1 0.74 ± 5% perf-profile.calltrace.cycles-pp.semaphore_posix_thrash
1.13 ± 5% +0.3 1.39 ± 3% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
0.91 ± 5% +0.3 1.19 ± 5% perf-profile.calltrace.cycles-pp.switch_mm_irqs_off.__schedule.schedule_idle.do_idle.cpu_startup_entry
0.64 ± 3% +0.3 0.92 ± 2% perf-profile.calltrace.cycles-pp.update_load_avg.dequeue_entity.dequeue_task_fair.__schedule.schedule
1.06 ± 4% +0.4 1.43 perf-profile.calltrace.cycles-pp.finish_task_switch.__schedule.schedule.do_nanosleep.hrtimer_nanosleep
0.15 ±158% +0.5 0.63 ± 2% perf-profile.calltrace.cycles-pp.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry.start_secondary
0.75 ± 4% +0.5 1.27 ± 3% perf-profile.calltrace.cycles-pp.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.__nanosleep
0.72 ± 4% +0.5 1.24 ± 3% perf-profile.calltrace.cycles-pp.switch_fpu_return.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.34 ± 87% +0.5 0.88 ± 6% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.finish_task_switch.__schedule.schedule_idle
0.38 ± 63% +0.5 0.93 ± 3% perf-profile.calltrace.cycles-pp.restore_fpregs_from_fpstate.switch_fpu_return.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64
0.35 ± 87% +0.5 0.90 ± 6% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.finish_task_switch.__schedule.schedule_idle.do_idle
0.39 ± 87% +0.6 0.95 ± 6% perf-profile.calltrace.cycles-pp.finish_task_switch.__schedule.schedule_idle.do_idle.cpu_startup_entry
0.00 +0.6 0.62 ± 5% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.finish_task_switch.__schedule.schedule
0.00 +0.6 0.64 ± 5% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.finish_task_switch.__schedule.schedule.do_nanosleep
0.00 +0.7 0.74 ± 4% perf-profile.calltrace.cycles-pp.__switch_to_asm
0.83 ± 4% +0.7 1.58 ± 39% perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.__nanosleep
0.00 +0.9 0.85 ± 8% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.__schedule.schedule_idle.do_idle
0.00 +0.9 0.91 ± 3% perf-profile.calltrace.cycles-pp.select_idle_sibling.select_task_rq_fair.try_to_wake_up.hrtimer_wakeup.__hrtimer_run_queues
0.15 ±158% +0.9 1.08 ± 3% perf-profile.calltrace.cycles-pp.select_task_rq_fair.try_to_wake_up.hrtimer_wakeup.__hrtimer_run_queues.hrtimer_interrupt
0.00 +1.0 1.02 ± 7% perf-profile.calltrace.cycles-pp._raw_spin_lock.__schedule.schedule_idle.do_idle.cpu_startup_entry
0.24 ±115% +1.2 1.48 ± 6% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.finish_task_switch
0.24 ±115% +1.2 1.49 ± 5% perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.finish_task_switch.__schedule
0.00 +1.3 1.31 ± 2% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.poll_idle
0.00 +1.3 1.32 ± 2% perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.poll_idle.cpuidle_enter_state
5.26 ± 2% +1.3 6.58 ± 3% perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.do_nanosleep.hrtimer_nanosleep
0.00 +1.3 1.35 ± 2% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.poll_idle.cpuidle_enter_state.cpuidle_enter
0.00 +1.4 1.38 ± 2% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.poll_idle.cpuidle_enter_state.cpuidle_enter.do_idle
0.00 +1.4 1.44 ± 7% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.try_to_wake_up.hrtimer_wakeup.__hrtimer_run_queues
0.00 +1.5 1.50 ± 2% perf-profile.calltrace.cycles-pp.poll_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
0.07 ±244% +1.6 1.68 ± 6% perf-profile.calltrace.cycles-pp._raw_spin_lock.try_to_wake_up.hrtimer_wakeup.__hrtimer_run_queues.hrtimer_interrupt
0.80 ± 4% +1.7 2.47 ± 2% perf-profile.calltrace.cycles-pp.update_load_avg.set_next_entity.pick_next_task_fair.__schedule.schedule_idle
1.19 ± 6% +1.7 2.90 ± 5% perf-profile.calltrace.cycles-pp.update_cfs_group.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up
1.07 ± 3% +2.0 3.07 perf-profile.calltrace.cycles-pp.set_next_entity.pick_next_task_fair.__schedule.schedule_idle.do_idle
1.73 ± 7% +2.0 3.77 ± 6% perf-profile.calltrace.cycles-pp.update_cfs_group.dequeue_entity.dequeue_task_fair.__schedule.schedule
1.26 ± 3% +2.1 3.37 perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule_idle.do_idle.cpu_startup_entry
0.67 ± 2% +2.2 2.90 ± 3% perf-profile.calltrace.cycles-pp.update_load_avg.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up
3.53 ± 3% +2.3 5.84 ± 3% perf-profile.calltrace.cycles-pp.dequeue_entity.dequeue_task_fair.__schedule.schedule.do_nanosleep
0.00 +2.4 2.37 ± 7% perf-profile.calltrace.cycles-pp.set_task_cpu.try_to_wake_up.hrtimer_wakeup.__hrtimer_run_queues.hrtimer_interrupt
4.87 ± 2% +3.5 8.36 ± 3% perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.hrtimer_wakeup.__hrtimer_run_queues.hrtimer_interrupt
4.82 ± 2% +3.5 8.32 ± 3% perf-profile.calltrace.cycles-pp.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.hrtimer_wakeup.__hrtimer_run_queues
3.29 ± 2% +3.9 7.22 ± 2% perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.hrtimer_wakeup
4.36 ± 3% +4.0 8.35 perf-profile.calltrace.cycles-pp.__schedule.schedule_idle.do_idle.cpu_startup_entry.start_secondary
4.43 ± 4% +4.0 8.42 perf-profile.calltrace.cycles-pp.schedule_idle.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
9.17 ± 3% +8.4 17.56 ± 2% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
8.37 ± 3% +8.6 16.93 ± 3% perf-profile.calltrace.cycles-pp.hrtimer_wakeup.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt
8.30 ± 3% +8.6 16.87 ± 3% perf-profile.calltrace.cycles-pp.try_to_wake_up.hrtimer_wakeup.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt
34.81 ± 2% +9.5 44.26 perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
5.24 +10.2 15.42 ± 2% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.intel_idle
5.28 +10.2 15.48 ± 2% perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.intel_idle.cpuidle_enter_state
5.63 +10.3 15.95 ± 2% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.intel_idle.cpuidle_enter_state.cpuidle_enter
49.71 +11.7 61.43 perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
49.29 +12.1 61.38 perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
56.41 +16.5 72.91 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
56.45 +16.5 72.96 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
56.46 +16.5 72.99 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
57.04 +16.6 73.67 perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
21.90 ± 2% +23.6 45.54 perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle
17.67 ± 8% -17.1 0.55 ± 24% perf-profile.children.cycles-pp.newidle_balance
30.12 ± 5% -17.0 13.09 perf-profile.children.cycles-pp.schedule
35.92 ± 3% -16.9 19.05 ± 6% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
35.57 ± 3% -16.8 18.76 ± 6% perf-profile.children.cycles-pp.do_syscall_64
30.79 ± 4% -16.7 14.12 perf-profile.children.cycles-pp.__x64_sys_nanosleep
29.87 ± 4% -16.6 13.29 perf-profile.children.cycles-pp.do_nanosleep
30.22 ± 4% -16.6 13.64 perf-profile.children.cycles-pp.hrtimer_nanosleep
33.93 ± 3% -16.0 17.95 ± 3% perf-profile.children.cycles-pp.__nanosleep
15.94 ± 9% -15.5 0.42 ± 26% perf-profile.children.cycles-pp.load_balance
20.33 ± 7% -15.3 4.99 ± 2% perf-profile.children.cycles-pp.pick_next_task_fair
34.42 ± 4% -13.0 21.45 perf-profile.children.cycles-pp.__schedule
11.12 ± 9% -10.9 0.27 ± 31% perf-profile.children.cycles-pp.find_busiest_group
10.93 ± 9% -10.7 0.26 ± 30% perf-profile.children.cycles-pp.update_sd_lb_stats
2.33 ± 17% -2.3 0.06 ± 9% perf-profile.children.cycles-pp.raw_spin_rq_lock_nested
1.54 ± 12% -1.5 0.05 ± 40% perf-profile.children.cycles-pp.idle_cpu
5.40 ± 8% -1.2 4.21 ± 5% perf-profile.children.cycles-pp._raw_spin_lock
3.42 -1.0 2.43 perf-profile.children.cycles-pp.__x64_sys_sched_yield
1.09 ± 8% -1.0 0.13 ± 11% perf-profile.children.cycles-pp.update_blocked_averages
0.81 ± 8% -0.7 0.11 ± 6% perf-profile.children.cycles-pp._find_next_bit
1.00 ± 3% -0.6 0.41 ± 3% perf-profile.children.cycles-pp.do_sched_yield
0.92 ± 4% -0.4 0.50 ± 3% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
1.03 ± 5% -0.3 0.72 ± 3% perf-profile.children.cycles-pp.clockevents_program_event
1.72 ± 5% -0.3 1.46 ± 4% perf-profile.children.cycles-pp.switch_mm_irqs_off
0.90 ± 3% -0.2 0.66 ± 4% perf-profile.children.cycles-pp.sched_clock_cpu
0.33 ± 4% -0.2 0.11 ± 8% perf-profile.children.cycles-pp.yield_task_fair
0.75 ± 4% -0.2 0.54 ± 3% perf-profile.children.cycles-pp.lapic_next_deadline
1.31 ± 2% -0.2 1.10 ± 4% perf-profile.children.cycles-pp.update_rq_clock
1.02 ± 4% -0.2 0.82 ± 7% [email protected]@GLIBC_2.2.5
1.46 ± 3% -0.2 1.27 ± 3% perf-profile.children.cycles-pp.hrtimer_start_range_ns
0.78 ± 3% -0.2 0.58 ± 4% perf-profile.children.cycles-pp.native_sched_clock
0.48 ± 4% -0.2 0.30 ± 3% perf-profile.children.cycles-pp.reweight_entity
0.65 ± 6% -0.1 0.52 ± 4% perf-profile.children.cycles-pp.ktime_get
0.27 ± 4% -0.1 0.14 ± 6% perf-profile.children.cycles-pp.irq_enter_rcu
0.42 ± 4% -0.1 0.29 ± 2% perf-profile.children.cycles-pp.migrate_task_rq_fair
0.23 ± 7% -0.1 0.11 ± 6% perf-profile.children.cycles-pp.place_entity
0.92 ± 4% -0.1 0.80 ± 4% perf-profile.children.cycles-pp.semaphore_posix_thrash
0.38 ± 6% -0.1 0.27 ± 5% perf-profile.children.cycles-pp.irq_exit_rcu
0.23 ± 4% -0.1 0.12 ± 6% perf-profile.children.cycles-pp.tick_irq_enter
0.59 ± 3% -0.1 0.49 ± 4% perf-profile.children.cycles-pp.__update_load_avg_se
0.37 ± 3% -0.1 0.28 ± 3% perf-profile.children.cycles-pp.native_irq_return_iret
0.18 ± 3% -0.1 0.10 ± 7% perf-profile.children.cycles-pp.update_irq_load_avg
0.81 ± 5% -0.1 0.72 ± 4% perf-profile.children.cycles-pp.load_new_mm_cr3
0.42 ± 5% -0.1 0.35 ± 4% perf-profile.children.cycles-pp.read_tsc
0.37 ± 5% -0.1 0.30 ± 4% perf-profile.children.cycles-pp.save_fpregs_to_fpstate
0.34 ± 4% -0.1 0.27 ± 4% perf-profile.children.cycles-pp.pick_next_entity
0.21 ± 9% -0.1 0.15 ± 5% perf-profile.children.cycles-pp.__softirqentry_text_start
0.19 ± 4% -0.1 0.13 ± 4% perf-profile.children.cycles-pp.put_prev_entity
0.25 ± 2% -0.1 0.20 ± 4% perf-profile.children.cycles-pp.tick_sched_timer
0.11 ± 3% -0.1 0.06 ± 9% perf-profile.children.cycles-pp.rebalance_domains
0.19 ± 2% -0.1 0.14 ± 2% perf-profile.children.cycles-pp.irqtime_account_irq
0.10 ± 6% -0.1 0.04 ± 38% perf-profile.children.cycles-pp.clear_buddies
0.48 ± 3% -0.1 0.43 ± 4% [email protected]@GLIBC_2.2.5
0.23 ± 3% -0.1 0.18 ± 6% perf-profile.children.cycles-pp.tick_sched_handle
0.41 ± 6% -0.0 0.36 ± 2% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.23 ± 4% -0.0 0.18 ± 6% perf-profile.children.cycles-pp.update_process_times
0.26 ± 7% -0.0 0.22 ± 4% perf-profile.children.cycles-pp.__calc_delta
0.09 ± 11% -0.0 0.04 ± 37% perf-profile.children.cycles-pp.__list_add_valid
0.40 ± 5% -0.0 0.35 ± 3% perf-profile.children.cycles-pp.get_timespec64
0.09 ± 17% -0.0 0.05 ± 38% perf-profile.children.cycles-pp.perf_trace_buf_alloc
0.13 ± 4% -0.0 0.09 perf-profile.children.cycles-pp.scheduler_tick
0.34 ± 5% -0.0 0.30 ± 3% perf-profile.children.cycles-pp._copy_from_user
0.07 ± 10% -0.0 0.04 ± 57% perf-profile.children.cycles-pp.syscall_exit_to_user_mode_prepare
0.15 ± 5% -0.0 0.12 ± 4% perf-profile.children.cycles-pp.__might_fault
0.15 ± 6% -0.0 0.13 ± 6% perf-profile.children.cycles-pp.rb_erase
0.09 ± 7% -0.0 0.07 ± 6% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
0.11 ± 4% -0.0 0.08 ± 10% perf-profile.children.cycles-pp.__list_del_entry_valid
0.13 ± 7% -0.0 0.10 ± 7% perf-profile.children.cycles-pp.perf_trace_sched_stat_runtime
0.10 ± 5% -0.0 0.08 ± 5% perf-profile.children.cycles-pp.__enqueue_entity
0.08 -0.0 0.06 ± 6% perf-profile.children.cycles-pp.hrtimer_update_next_event
0.09 ± 5% -0.0 0.07 ± 5% perf-profile.children.cycles-pp.perf_trace_buf_update
0.07 ± 6% +0.0 0.08 ± 5% perf-profile.children.cycles-pp.irqentry_exit
0.08 ± 6% +0.0 0.09 ± 5% perf-profile.children.cycles-pp.put_prev_task_fair
0.06 ± 7% +0.0 0.08 ± 5% perf-profile.children.cycles-pp.hrtimer_reprogram
0.09 ± 5% +0.0 0.11 ± 8% perf-profile.children.cycles-pp.syscall_enter_from_user_mode
0.09 ± 5% +0.0 0.12 ± 7% perf-profile.children.cycles-pp.rcu_eqs_exit
0.12 ± 8% +0.0 0.15 ± 3% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.09 ± 3% +0.0 0.12 ± 5% perf-profile.children.cycles-pp.hrtimer_get_next_event
0.12 ± 7% +0.0 0.15 ± 6% perf-profile.children.cycles-pp.hrtimer_next_event_without
0.07 ± 6% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.call_cpuidle
0.01 ±158% +0.0 0.06 ± 7% perf-profile.children.cycles-pp.switch_ldt
0.28 ± 6% +0.0 0.33 ± 4% perf-profile.children.cycles-pp.enqueue_hrtimer
0.25 ± 5% +0.0 0.30 ± 4% perf-profile.children.cycles-pp.timerqueue_add
0.14 ± 5% +0.0 0.19 ± 5% perf-profile.children.cycles-pp.rcu_idle_exit
0.11 ± 4% +0.1 0.17 ± 5% perf-profile.children.cycles-pp.rcu_dynticks_inc
0.16 ± 6% +0.1 0.22 ± 2% perf-profile.children.cycles-pp.tick_nohz_idle_enter
0.00 +0.1 0.05 ± 9% perf-profile.children.cycles-pp.rcu_needs_cpu
0.19 ± 6% +0.1 0.25 ± 3% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.00 +0.1 0.06 ± 5% perf-profile.children.cycles-pp.irq_work_needs_cpu
0.00 +0.1 0.06 ± 10% perf-profile.children.cycles-pp.menu_reflect
0.01 ±244% +0.1 0.08 ± 7% perf-profile.children.cycles-pp.cpumask_next
0.08 ± 5% +0.1 0.19 ± 3% perf-profile.children.cycles-pp.remove_entity_load_avg
0.13 +0.1 0.24 ± 5% perf-profile.children.cycles-pp.update_ts_time_stats
0.32 ± 5% +0.1 0.44 ± 2% perf-profile.children.cycles-pp.tick_nohz_next_event
0.06 ± 7% +0.1 0.17 ± 4% perf-profile.children.cycles-pp.attach_entity_load_avg
0.12 ± 2% +0.1 0.24 ± 4% perf-profile.children.cycles-pp.nr_iowait_cpu
0.08 ± 12% +0.1 0.22 ± 5% perf-profile.children.cycles-pp.cpus_share_cache
0.31 ± 6% +0.1 0.44 ± 8% perf-profile.children.cycles-pp.shim_nanosleep_uint64
0.29 ± 4% +0.2 0.44 ± 3% perf-profile.children.cycles-pp.check_preempt_curr
0.39 ± 6% +0.2 0.55 ± 3% perf-profile.children.cycles-pp.available_idle_cpu
0.48 ± 6% +0.2 0.64 ± 2% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.17 ± 5% +0.2 0.35 ± 3% perf-profile.children.cycles-pp.resched_curr
0.28 ± 5% +0.2 0.47 ± 3% perf-profile.children.cycles-pp.ttwu_do_wakeup
0.13 ± 6% +0.2 0.36 ± 3% perf-profile.children.cycles-pp.tick_nohz_idle_exit
0.13 ± 3% +0.2 0.37 ± 4% perf-profile.children.cycles-pp.hrtimer_active
0.14 ± 5% +0.2 0.38 ± 3% perf-profile.children.cycles-pp.hrtimer_try_to_cancel
1.14 ± 5% +0.3 1.41 ± 3% perf-profile.children.cycles-pp.menu_select
0.93 ± 4% +0.3 1.21 ± 3% perf-profile.children.cycles-pp.__switch_to_asm
0.63 ± 3% +0.3 0.92 ± 4% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.39 ± 9% +0.3 0.72 ± 5% perf-profile.children.cycles-pp.select_idle_cpu
0.83 ± 4% +0.3 1.17 ± 3% perf-profile.children.cycles-pp.__switch_to
0.59 ± 4% +0.4 0.96 ± 3% perf-profile.children.cycles-pp.restore_fpregs_from_fpstate
1.01 ± 7% +0.4 1.43 ± 2% perf-profile.children.cycles-pp.select_task_rq_fair
0.90 ± 3% +0.4 1.35 ± 3% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.83 ± 4% +0.5 1.28 ± 3% perf-profile.children.cycles-pp.switch_fpu_return
0.75 ± 8% +0.5 1.22 ± 2% perf-profile.children.cycles-pp.select_idle_sibling
1.75 ± 10% +0.8 2.53 ± 2% perf-profile.children.cycles-pp.finish_task_switch
5.74 ± 2% +0.9 6.62 ± 3% perf-profile.children.cycles-pp.dequeue_task_fair
1.02 ± 4% +0.9 1.91 ± 64% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
0.36 ± 16% +1.2 1.55 ± 2% perf-profile.children.cycles-pp.poll_idle
1.33 ± 3% +1.9 3.21 perf-profile.children.cycles-pp.set_next_entity
1.09 ± 6% +2.1 3.20 ± 7% perf-profile.children.cycles-pp.set_task_cpu
3.73 ± 2% +2.2 5.88 ± 3% perf-profile.children.cycles-pp.dequeue_entity
5.35 ± 7% +3.2 8.57 ± 6% perf-profile.children.cycles-pp.update_cfs_group
3.93 ± 2% +3.7 7.61 ± 2% perf-profile.children.cycles-pp.update_load_avg
6.11 ± 2% +3.8 9.90 ± 3% perf-profile.children.cycles-pp.enqueue_task_fair
4.48 ± 4% +4.0 8.50 perf-profile.children.cycles-pp.schedule_idle
4.41 ± 3% +4.3 8.69 ± 3% perf-profile.children.cycles-pp.enqueue_entity
5.56 ± 2% +4.3 9.90 ± 3% perf-profile.children.cycles-pp.ttwu_do_activate
12.47 +8.2 20.66 ± 2% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
11.62 +8.5 20.08 ± 2% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
11.52 +8.5 20.01 ± 2% perf-profile.children.cycles-pp.hrtimer_interrupt
10.54 +8.7 19.28 ± 3% perf-profile.children.cycles-pp.__hrtimer_run_queues
9.76 +8.8 18.58 ± 3% perf-profile.children.cycles-pp.hrtimer_wakeup
9.70 +8.8 18.54 ± 3% perf-profile.children.cycles-pp.try_to_wake_up
50.22 +11.8 62.01 perf-profile.children.cycles-pp.cpuidle_enter
50.19 +11.8 61.99 perf-profile.children.cycles-pp.cpuidle_enter_state
21.23 +14.6 35.80 perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
43.34 ± 2% +15.9 59.29 perf-profile.children.cycles-pp.intel_idle
56.46 +16.5 72.99 perf-profile.children.cycles-pp.start_secondary
57.01 +16.6 73.63 perf-profile.children.cycles-pp.do_idle
57.04 +16.6 73.67 perf-profile.children.cycles-pp.secondary_startup_64_no_verify
57.04 +16.6 73.67 perf-profile.children.cycles-pp.cpu_startup_entry
8.53 ± 9% -8.3 0.20 ± 31% perf-profile.self.cycles-pp.update_sd_lb_stats
1.51 ± 12% -1.5 0.05 ± 40% perf-profile.self.cycles-pp.idle_cpu
2.05 ± 4% -0.6 1.40 perf-profile.self.cycles-pp._raw_spin_lock
0.74 ± 8% -0.6 0.10 ± 9% perf-profile.self.cycles-pp._find_next_bit
0.63 ± 5% -0.4 0.18 ± 6% perf-profile.self.cycles-pp.cpuidle_enter_state
0.46 ± 21% -0.4 0.04 ± 81% perf-profile.self.cycles-pp.update_blocked_averages
0.84 ± 4% -0.4 0.48 ± 2% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.38 ± 11% -0.3 0.11 ± 5% perf-profile.self.cycles-pp.newidle_balance
0.41 ± 10% -0.2 0.19 ± 4% perf-profile.self.cycles-pp.dequeue_task_fair
0.88 ± 5% -0.2 0.66 ± 7% [email protected]@GLIBC_2.2.5
0.75 ± 4% -0.2 0.54 ± 3% perf-profile.self.cycles-pp.lapic_next_deadline
0.75 ± 3% -0.2 0.56 ± 3% perf-profile.self.cycles-pp.native_sched_clock
0.90 ± 5% -0.2 0.73 ± 4% perf-profile.self.cycles-pp.switch_mm_irqs_off
0.86 ± 5% -0.1 0.71 ± 5% perf-profile.self.cycles-pp.semaphore_posix_thrash
0.58 ± 9% -0.1 0.44 ± 3% perf-profile.self.cycles-pp.enqueue_task_fair
0.41 ± 3% -0.1 0.27 ± 3% perf-profile.self.cycles-pp.reweight_entity
0.58 ± 2% -0.1 0.48 ± 4% perf-profile.self.cycles-pp.__update_load_avg_se
0.37 ± 2% -0.1 0.28 ± 3% perf-profile.self.cycles-pp.native_irq_return_iret
0.18 ± 4% -0.1 0.09 ± 4% perf-profile.self.cycles-pp.update_irq_load_avg
0.81 ± 5% -0.1 0.72 ± 4% perf-profile.self.cycles-pp.load_new_mm_cr3
0.42 ± 4% -0.1 0.34 ± 5% perf-profile.self.cycles-pp.read_tsc
0.37 ± 4% -0.1 0.30 ± 4% perf-profile.self.cycles-pp.save_fpregs_to_fpstate
0.16 ± 7% -0.1 0.09 ± 6% perf-profile.self.cycles-pp.place_entity
0.27 ± 10% -0.1 0.20 ± 5% perf-profile.self.cycles-pp.ktime_get
0.44 ± 4% -0.1 0.37 ± 4% [email protected]@GLIBC_2.2.5
0.18 ± 6% -0.1 0.12 ± 4% perf-profile.self.cycles-pp.__x64_sys_nanosleep
0.08 ± 10% -0.1 0.02 ±100% perf-profile.self.cycles-pp.__list_add_valid
0.56 ± 3% -0.1 0.50 ± 4% perf-profile.self.cycles-pp.enqueue_entity
0.35 ± 5% -0.1 0.30 ± 5% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.40 ± 6% -0.1 0.35 ± 3% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.13 ± 7% -0.1 0.08 ± 8% perf-profile.self.cycles-pp.schedule
0.30 ± 4% -0.0 0.25 ± 4% perf-profile.self.cycles-pp.pick_next_entity
0.11 ± 4% -0.0 0.06 ± 10% perf-profile.self.cycles-pp.sched_clock_cpu
0.11 ± 18% -0.0 0.06 ± 10% perf-profile.self.cycles-pp.hrtimer_interrupt
0.31 ± 4% -0.0 0.27 ± 4% perf-profile.self.cycles-pp.pick_next_task_fair
0.26 ± 8% -0.0 0.21 ± 3% perf-profile.self.cycles-pp.__calc_delta
0.25 ± 4% -0.0 0.21 ± 5% perf-profile.self.cycles-pp.select_task_rq_fair
0.19 ± 6% -0.0 0.16 ± 4% perf-profile.self.cycles-pp.__sched_yield
0.07 ± 4% -0.0 0.04 ± 57% perf-profile.self.cycles-pp.irqtime_account_irq
0.12 ± 7% -0.0 0.10 ± 7% perf-profile.self.cycles-pp.perf_trace_sched_stat_runtime
0.09 ± 7% -0.0 0.07 ± 7% perf-profile.self.cycles-pp.__enqueue_entity
0.07 ± 7% -0.0 0.04 ± 37% perf-profile.self.cycles-pp.clockevents_program_event
0.10 ± 4% -0.0 0.08 ± 8% perf-profile.self.cycles-pp.__list_del_entry_valid
0.06 ± 5% -0.0 0.04 ± 37% perf-profile.self.cycles-pp.get_timespec64
0.09 ± 8% -0.0 0.07 ± 5% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.08 ± 6% -0.0 0.06 ± 7% perf-profile.self.cycles-pp.put_prev_entity
0.06 ± 7% +0.0 0.08 ± 7% perf-profile.self.cycles-pp.hrtimer_reprogram
0.09 ± 12% +0.0 0.12 ± 5% perf-profile.self.cycles-pp.select_idle_sibling
0.07 ± 8% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.call_cpuidle
0.17 ± 4% +0.0 0.21 ± 4% perf-profile.self.cycles-pp.timerqueue_add
0.01 ±158% +0.0 0.06 ± 9% perf-profile.self.cycles-pp.switch_ldt
0.11 ± 6% +0.1 0.16 ± 6% perf-profile.self.cycles-pp.rcu_dynticks_inc
0.00 +0.1 0.05 perf-profile.self.cycles-pp.tick_nohz_idle_exit
0.01 ±244% +0.1 0.06 ± 7% perf-profile.self.cycles-pp.rcu_idle_exit
0.00 +0.1 0.05 ± 9% perf-profile.self.cycles-pp.irq_work_needs_cpu
0.11 ± 8% +0.1 0.17 ± 6% perf-profile.self.cycles-pp.select_idle_cpu
0.00 +0.1 0.08 ± 9% perf-profile.self.cycles-pp.migrate_task_rq_fair
0.22 ± 5% +0.1 0.32 ± 4% perf-profile.self.cycles-pp.switch_fpu_return
0.06 ± 16% +0.1 0.16 ± 5% perf-profile.self.cycles-pp.poll_idle
0.49 ± 5% +0.1 0.59 ± 5% perf-profile.self.cycles-pp.menu_select
0.29 ± 5% +0.1 0.41 ± 8% perf-profile.self.cycles-pp.shim_nanosleep_uint64
0.06 ± 7% +0.1 0.17 ± 3% perf-profile.self.cycles-pp.attach_entity_load_avg
0.12 ± 4% +0.1 0.24 ± 6% perf-profile.self.cycles-pp.nr_iowait_cpu
0.08 ± 13% +0.1 0.22 ± 5% perf-profile.self.cycles-pp.cpus_share_cache
0.41 ± 5% +0.1 0.55 ± 4% perf-profile.self.cycles-pp.do_idle
0.39 ± 7% +0.2 0.55 ± 2% perf-profile.self.cycles-pp.available_idle_cpu
0.16 ± 5% +0.2 0.34 ± 3% perf-profile.self.cycles-pp.resched_curr
0.12 ± 5% +0.2 0.33 ± 4% perf-profile.self.cycles-pp.hrtimer_active
0.25 ± 6% +0.3 0.52 ± 4% perf-profile.self.cycles-pp.set_next_entity
0.93 ± 4% +0.3 1.20 ± 3% perf-profile.self.cycles-pp.__switch_to_asm
0.62 ± 3% +0.3 0.91 ± 4% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.81 ± 4% +0.3 1.16 ± 3% perf-profile.self.cycles-pp.__switch_to
0.59 ± 4% +0.4 0.96 ± 3% perf-profile.self.cycles-pp.restore_fpregs_from_fpstate
0.66 ± 7% +2.2 2.88 ± 8% perf-profile.self.cycles-pp.set_task_cpu
5.34 ± 7% +3.2 8.56 ± 6% perf-profile.self.cycles-pp.update_cfs_group
2.42 ± 3% +3.6 5.99 ± 3% perf-profile.self.cycles-pp.update_load_avg
37.45 ± 2% +5.4 42.82 ± 2% perf-profile.self.cycles-pp.intel_idle
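For reference (not part of the report), every %change value in the comparison above is (new - old) / old between the two commit columns; a minimal shell sketch recomputing the headline +12.0% from the stress-ng.sem.ops_per_sec row:
# old = 7445762 (commit 8c92606ab8), new = 8337795 (commit 8b4e74ccb5)
awk 'BEGIN { old = 7445762; new = 8337795; printf("%+.1f%%\n", 100 * (new - old) / old) }'
# prints +12.0%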
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation
Thanks,
Oliver Sang
[net] 91a760b269: stress-ng.sockfd.ops_per_sec 11.1% improvement
by kernel test robot
Greetings,
FYI, we noticed an 11.1% improvement of stress-ng.sockfd.ops_per_sec due to commit:
commit: 91a760b26926265a60c77ddf016529bcf3e17a04 ("net: bpf: Handle return value of BPF_CGROUP_RUN_PROG_INET{4,6}_POST_BIND()")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: stress-ng
on test machine: 128 threads 2 sockets Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz with 128G memory
with the following parameters:
nr_threads: 100%
testtime: 60s
class: network
test: sockfd
cpufreq_governor: performance
ucode: 0xd000280
Details are below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp directories to run from a clean state.
=========================================================================================
class/compiler/cpufreq_governor/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime/ucode:
network/gcc-9/performance/x86_64-rhel-8.3/100%/debian-10.4-x86_64-20200603.cgz/lkp-icl-2sp6/sockfd/stress-ng/60s/0xd000280
commit:
44bab87d8c ("bpf/selftests: Test bpf_d_path on rdonly_mem.")
91a760b269 ("net: bpf: Handle return value of BPF_CGROUP_RUN_PROG_INET{4,6}_POST_BIND()")
44bab87d8ca6f054 91a760b26926265a60c77ddf016
---------------- ---------------------------
%stddev %change %stddev
\ | \
62606757 +11.1% 69543502 stress-ng.sockfd.ops
1042700 +11.1% 1158219 stress-ng.sockfd.ops_per_sec
39649320 ± 3% +16.9% 46347365 ± 2% stress-ng.time.involuntary_context_switches
40863134 ± 3% +16.6% 47645007 ± 2% stress-ng.time.voluntary_context_switches
1218291 ± 7% +15.0% 1400874 ± 3% vmstat.system.cs
1531051 +8.7% 1664219 proc-vmstat.numa_hit
1415336 +9.4% 1548835 proc-vmstat.numa_local
1533329 +8.7% 1666172 proc-vmstat.pgalloc_normal
1312762 +10.1% 1445416 proc-vmstat.pgfree
3.91 ± 9% +44.6% 5.65 ± 23% perf-stat.i.MPKI
0.38 ± 6% +0.2 0.58 ± 23% perf-stat.i.branch-miss-rate%
34907752 ± 7% +15.9% 40452634 ± 2% perf-stat.i.branch-misses
58903061 ± 4% +7.5% 63345388 perf-stat.i.cache-misses
1.816e+08 ± 5% +11.9% 2.032e+08 ± 2% perf-stat.i.cache-references
1269255 ± 7% +15.6% 1467498 ± 3% perf-stat.i.context-switches
1256 ± 2% +17.2% 1473 ± 3% perf-stat.i.cpu-migrations
6575 -6.6% 6139 perf-stat.i.cycles-between-cache-misses
0.01 ± 44% +0.0 0.03 ± 48% perf-stat.i.dTLB-load-miss-rate%
4.035e+09 ± 5% +10.9% 4.475e+09 ± 2% perf-stat.i.dTLB-stores
255.11 ± 3% +10.2% 281.05 perf-stat.i.metric.K/sec
12204006 ± 4% +8.3% 13217107 ± 2% perf-stat.i.node-load-misses
1942843 ± 4% +13.3% 2201701 ± 2% perf-stat.i.node-loads
12401363 ± 4% +10.3% 13678576 perf-stat.i.node-store-misses
3.74 +7.5% 4.02 perf-stat.overall.MPKI
0.31 ± 2% +0.0 0.35 ± 2% perf-stat.overall.branch-miss-rate%
32.41 -1.2 31.18 perf-stat.overall.cache-miss-rate%
8.01 -4.5% 7.65 perf-stat.overall.cpi
6612 -7.6% 6108 perf-stat.overall.cycles-between-cache-misses
0.01 ± 9% +0.0 0.01 ± 6% perf-stat.overall.dTLB-load-miss-rate%
0.12 +4.7% 0.13 perf-stat.overall.ipc
85.41 +1.6 87.00 perf-stat.overall.node-store-miss-rate%
34371265 ± 6% +15.9% 39833821 ± 2% perf-stat.ps.branch-misses
58139497 ± 3% +7.6% 62545220 perf-stat.ps.cache-misses
1.795e+08 ± 4% +11.8% 2.007e+08 ± 2% perf-stat.ps.cache-references
1251886 ± 7% +15.6% 1447062 ± 3% perf-stat.ps.context-switches
1239 ± 2% +17.4% 1454 ± 3% perf-stat.ps.cpu-migrations
1200950 ± 12% +21.5% 1459523 ± 7% perf-stat.ps.dTLB-load-misses
3.98e+09 ± 5% +10.9% 4.415e+09 ± 2% perf-stat.ps.dTLB-stores
12039917 ± 3% +8.3% 13043647 perf-stat.ps.node-load-misses
1927780 ± 4% +13.3% 2184856 ± 2% perf-stat.ps.node-loads
12234839 ± 4% +10.3% 13498939 perf-stat.ps.node-store-misses
3.127e+12 +5.4% 3.295e+12 perf-stat.total.instructions
19250 ± 4% +17.0% 22532 ± 8% softirqs.CPU1.RCU
19195 ± 5% +10.7% 21248 ± 2% softirqs.CPU10.RCU
18204 ± 2% +14.3% 20809 softirqs.CPU100.RCU
18565 ± 4% +12.6% 20897 ± 2% softirqs.CPU102.RCU
18396 ± 3% +12.7% 20728 softirqs.CPU103.RCU
18343 ± 2% +14.3% 20957 ± 2% softirqs.CPU104.RCU
18219 +13.4% 20666 ± 5% softirqs.CPU105.RCU
18599 ± 4% +13.8% 21171 ± 2% softirqs.CPU107.RCU
18222 ± 3% +13.7% 20714 softirqs.CPU108.RCU
18416 ± 2% +11.3% 20496 ± 4% softirqs.CPU109.RCU
18865 ± 2% +9.5% 20655 ± 3% softirqs.CPU11.RCU
18319 +12.3% 20565 ± 2% softirqs.CPU110.RCU
18314 ± 2% +12.8% 20663 ± 3% softirqs.CPU111.RCU
18326 +13.2% 20745 ± 3% softirqs.CPU113.RCU
18515 ± 3% +12.0% 20733 ± 3% softirqs.CPU115.RCU
18080 +12.4% 20330 ± 3% softirqs.CPU116.RCU
18384 +11.4% 20477 ± 4% softirqs.CPU117.RCU
18310 ± 5% +10.3% 20189 ± 3% softirqs.CPU118.RCU
18515 ± 2% +12.5% 20822 ± 2% softirqs.CPU119.RCU
18642 ± 3% +11.9% 20852 softirqs.CPU120.RCU
18121 ± 3% +12.9% 20463 ± 2% softirqs.CPU121.RCU
18926 ± 3% +9.5% 20733 ± 4% softirqs.CPU122.RCU
18274 ± 3% +12.5% 20555 ± 3% softirqs.CPU123.RCU
18230 ± 5% +13.8% 20740 ± 4% softirqs.CPU124.RCU
18462 ± 2% +12.8% 20823 ± 5% softirqs.CPU125.RCU
17987 ± 2% +15.0% 20685 ± 5% softirqs.CPU126.RCU
16788 ± 2% +14.8% 19269 ± 3% softirqs.CPU127.RCU
18544 ± 2% +16.6% 21622 ± 7% softirqs.CPU14.RCU
18921 ± 2% +10.0% 20808 ± 3% softirqs.CPU15.RCU
18674 ± 2% +10.9% 20708 ± 3% softirqs.CPU16.RCU
18471 +12.9% 20855 softirqs.CPU17.RCU
18661 ± 3% +13.0% 21093 ± 4% softirqs.CPU18.RCU
18942 ± 2% +15.1% 21812 ± 5% softirqs.CPU19.RCU
18757 ± 2% +10.4% 20706 ± 2% softirqs.CPU20.RCU
18686 ± 3% +13.5% 21208 ± 2% softirqs.CPU21.RCU
18959 ± 2% +11.8% 21193 softirqs.CPU22.RCU
18882 ± 2% +12.0% 21141 ± 3% softirqs.CPU24.RCU
18758 ± 2% +12.5% 21100 ± 2% softirqs.CPU25.RCU
18486 ± 4% +15.8% 21406 ± 3% softirqs.CPU26.RCU
18597 +11.0% 20651 ± 4% softirqs.CPU28.RCU
19322 +13.3% 21895 ± 9% softirqs.CPU3.RCU
18805 ± 2% +11.9% 21052 ± 5% softirqs.CPU30.RCU
18642 ± 2% +13.0% 21063 ± 2% softirqs.CPU32.RCU
18504 ± 2% +12.8% 20863 softirqs.CPU33.RCU
18648 ± 3% +11.2% 20730 ± 2% softirqs.CPU34.RCU
18778 +11.6% 20951 ± 3% softirqs.CPU35.RCU
18595 ± 3% +13.9% 21182 ± 6% softirqs.CPU36.RCU
18541 ± 3% +14.1% 21148 ± 4% softirqs.CPU37.RCU
18482 ± 2% +12.0% 20693 ± 3% softirqs.CPU38.RCU
18078 ± 2% +17.6% 21255 ± 4% softirqs.CPU39.RCU
19108 ± 2% +14.9% 21960 ± 4% softirqs.CPU4.RCU
18751 +11.4% 20890 ± 2% softirqs.CPU41.RCU
18632 ± 2% +13.0% 21055 ± 3% softirqs.CPU42.RCU
18300 ± 2% +13.9% 20850 ± 3% softirqs.CPU43.RCU
18647 ± 2% +13.4% 21140 ± 4% softirqs.CPU44.RCU
18327 +14.3% 20940 ± 3% softirqs.CPU45.RCU
18272 ± 3% +14.3% 20882 ± 2% softirqs.CPU46.RCU
18359 ± 2% +13.7% 20874 softirqs.CPU47.RCU
18550 ± 2% +11.7% 20718 ± 3% softirqs.CPU48.RCU
18418 ± 2% +11.7% 20576 ± 3% softirqs.CPU49.RCU
18753 ± 4% +13.2% 21229 ± 3% softirqs.CPU5.RCU
18450 +11.4% 20545 ± 2% softirqs.CPU50.RCU
18442 ± 3% +12.1% 20673 ± 2% softirqs.CPU51.RCU
18402 ± 2% +13.6% 20896 ± 5% softirqs.CPU53.RCU
18664 ± 2% +18.9% 22195 ± 10% softirqs.CPU54.RCU
18502 ± 2% +10.2% 20389 ± 2% softirqs.CPU55.RCU
18421 ± 3% +12.8% 20787 ± 3% softirqs.CPU56.RCU
18544 ± 3% +12.0% 20763 ± 3% softirqs.CPU57.RCU
17891 +16.1% 20770 ± 3% softirqs.CPU58.RCU
18934 +15.4% 21842 ± 8% softirqs.CPU6.RCU
18276 ± 2% +16.3% 21257 ± 3% softirqs.CPU61.RCU
19277 ± 3% +10.8% 21367 softirqs.CPU62.RCU
18512 ± 2% +14.0% 21096 softirqs.CPU63.RCU
18081 +15.9% 20948 ± 3% softirqs.CPU66.RCU
18550 ± 3% +12.3% 20826 softirqs.CPU67.RCU
18111 +17.7% 21322 softirqs.CPU68.RCU
19092 ± 3% +12.0% 21383 ± 3% softirqs.CPU69.RCU
18334 ± 3% +16.8% 21415 ± 2% softirqs.CPU7.RCU
18770 ± 5% +11.7% 20972 ± 3% softirqs.CPU70.RCU
18500 ± 3% +13.2% 20946 ± 3% softirqs.CPU72.RCU
18748 ± 3% +12.8% 21142 ± 3% softirqs.CPU73.RCU
19066 ± 3% +11.5% 21261 softirqs.CPU74.RCU
18789 ± 4% +12.3% 21098 ± 3% softirqs.CPU75.RCU
18516 +14.7% 21244 ± 3% softirqs.CPU76.RCU
18825 ± 2% +12.1% 21105 ± 3% softirqs.CPU78.RCU
18320 +16.9% 21423 ± 3% softirqs.CPU79.RCU
18691 ± 2% +16.8% 21828 ± 7% softirqs.CPU8.RCU
18657 ± 4% +12.8% 21040 softirqs.CPU80.RCU
18694 ± 2% +10.0% 20569 ± 2% softirqs.CPU81.RCU
18670 ± 2% +12.8% 21058 ± 3% softirqs.CPU82.RCU
18548 ± 2% +13.8% 21103 ± 2% softirqs.CPU84.RCU
18485 ± 2% +14.3% 21125 ± 3% softirqs.CPU85.RCU
18412 ± 3% +12.3% 20680 ± 3% softirqs.CPU86.RCU
18635 ± 2% +12.9% 21041 ± 4% softirqs.CPU87.RCU
18272 ± 2% +15.9% 21186 ± 2% softirqs.CPU88.RCU
17968 ± 3% +14.5% 20566 ± 3% softirqs.CPU89.RCU
18656 ± 3% +14.5% 21365 ± 4% softirqs.CPU9.RCU
18629 ± 3% +13.0% 21059 softirqs.CPU90.RCU
18415 ± 2% +13.7% 20940 ± 2% softirqs.CPU91.RCU
18711 ± 3% +13.6% 21252 ± 3% softirqs.CPU92.RCU
18206 ± 4% +16.0% 21113 softirqs.CPU93.RCU
18633 ± 2% +15.6% 21541 ± 2% softirqs.CPU94.RCU
18756 ± 5% +11.1% 20839 ± 3% softirqs.CPU96.RCU
18441 ± 2% +13.4% 20913 ± 2% softirqs.CPU98.RCU
18316 ± 2% +14.2% 20914 softirqs.CPU99.RCU
2383757 +12.6% 2683774 ± 2% softirqs.RCU
46.59 -1.0 45.57 perf-profile.calltrace.cycles-pp._raw_spin_lock.unix_inflight.unix_attach_fds.unix_scm_to_skb.unix_stream_sendmsg
46.45 -1.0 45.43 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.unix_inflight.unix_attach_fds.unix_scm_to_skb
46.66 -1.0 45.66 perf-profile.calltrace.cycles-pp.unix_inflight.unix_attach_fds.unix_scm_to_skb.unix_stream_sendmsg.sock_sendmsg
46.96 -0.9 46.07 perf-profile.calltrace.cycles-pp.unix_scm_to_skb.unix_stream_sendmsg.sock_sendmsg.____sys_sendmsg.___sys_sendmsg
46.96 -0.9 46.06 perf-profile.calltrace.cycles-pp.unix_attach_fds.unix_scm_to_skb.unix_stream_sendmsg.sock_sendmsg.____sys_sendmsg
46.06 -0.7 45.34 perf-profile.calltrace.cycles-pp._raw_spin_lock.unix_notinflight.unix_detach_fds.unix_stream_read_generic.unix_stream_recvmsg
45.90 -0.7 45.20 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.unix_notinflight.unix_detach_fds.unix_stream_read_generic
46.14 -0.7 45.45 perf-profile.calltrace.cycles-pp.unix_notinflight.unix_detach_fds.unix_stream_read_generic.unix_stream_recvmsg.____sys_recvmsg
46.15 -0.7 45.46 perf-profile.calltrace.cycles-pp.unix_detach_fds.unix_stream_read_generic.unix_stream_recvmsg.____sys_recvmsg.___sys_recvmsg
48.81 -0.5 48.33 perf-profile.calltrace.cycles-pp.unix_stream_sendmsg.sock_sendmsg.____sys_sendmsg.___sys_sendmsg.__sys_sendmsg
47.52 -0.5 47.04 perf-profile.calltrace.cycles-pp.unix_stream_read_generic.unix_stream_recvmsg.____sys_recvmsg.___sys_recvmsg.__sys_recvmsg
47.54 -0.5 47.06 perf-profile.calltrace.cycles-pp.unix_stream_recvmsg.____sys_recvmsg.___sys_recvmsg.__sys_recvmsg.do_syscall_64
47.65 -0.4 47.21 perf-profile.calltrace.cycles-pp.____sys_recvmsg.___sys_recvmsg.__sys_recvmsg.do_syscall_64.entry_SYSCALL_64_after_hwframe
47.74 -0.4 47.31 perf-profile.calltrace.cycles-pp.___sys_recvmsg.__sys_recvmsg.do_syscall_64.entry_SYSCALL_64_after_hwframe.recvmsg
47.76 -0.4 47.34 perf-profile.calltrace.cycles-pp.__sys_recvmsg.do_syscall_64.entry_SYSCALL_64_after_hwframe.recvmsg.stress_run
47.81 -0.4 47.39 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.recvmsg.stress_run
47.83 -0.4 47.41 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.recvmsg.stress_run
47.87 -0.4 47.46 perf-profile.calltrace.cycles-pp.recvmsg.stress_run
49.00 -0.4 48.58 perf-profile.calltrace.cycles-pp.sock_sendmsg.____sys_sendmsg.___sys_sendmsg.__sys_sendmsg.do_syscall_64
49.07 -0.4 48.67 perf-profile.calltrace.cycles-pp.____sys_sendmsg.___sys_sendmsg.__sys_sendmsg.do_syscall_64.entry_SYSCALL_64_after_hwframe
49.23 -0.4 48.86 perf-profile.calltrace.cycles-pp.___sys_sendmsg.__sys_sendmsg.do_syscall_64.entry_SYSCALL_64_after_hwframe.sendmsg
49.26 -0.4 48.90 perf-profile.calltrace.cycles-pp.__sys_sendmsg.do_syscall_64.entry_SYSCALL_64_after_hwframe.sendmsg.stress_run
49.45 -0.3 49.15 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.sendmsg.stress_run
49.47 -0.3 49.18 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.sendmsg.stress_run
49.55 -0.3 49.27 perf-profile.calltrace.cycles-pp.sendmsg.stress_run
99.59 -0.1 99.51 perf-profile.calltrace.cycles-pp.stress_run
0.51 +0.1 0.61 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__close.stress_run
0.52 +0.1 0.62 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__close.stress_run
0.58 +0.1 0.69 perf-profile.calltrace.cycles-pp.__close.stress_run
0.53 ± 2% +0.2 0.70 perf-profile.calltrace.cycles-pp.__scm_send.unix_stream_sendmsg.sock_sendmsg.____sys_sendmsg.___sys_sendmsg
0.70 ± 2% +0.3 1.00 perf-profile.calltrace.cycles-pp.do_open.path_openat.do_filp_open.do_sys_openat2.do_sys_open
1.24 +0.4 1.68 perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.23 +0.4 1.67 perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64
1.37 +0.5 1.84 perf-profile.calltrace.cycles-pp.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
1.39 +0.5 1.86 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64.stress_run
1.37 +0.5 1.85 perf-profile.calltrace.cycles-pp.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64.stress_run
1.39 +0.5 1.87 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.open64.stress_run
1.43 +0.5 1.91 perf-profile.calltrace.cycles-pp.open64.stress_run
0.00 +0.6 0.58 ± 2% perf-profile.calltrace.cycles-pp.do_dentry_open.do_open.path_openat.do_filp_open.do_sys_openat2
92.42 -1.7 90.73 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
92.84 -1.7 91.16 perf-profile.children.cycles-pp._raw_spin_lock
46.66 -1.0 45.66 perf-profile.children.cycles-pp.unix_inflight
46.96 -0.9 46.07 perf-profile.children.cycles-pp.unix_scm_to_skb
46.96 -0.9 46.07 perf-profile.children.cycles-pp.unix_attach_fds
46.14 -0.7 45.45 perf-profile.children.cycles-pp.unix_notinflight
46.15 -0.7 45.46 perf-profile.children.cycles-pp.unix_detach_fds
48.82 -0.5 48.34 perf-profile.children.cycles-pp.unix_stream_sendmsg
47.59 -0.5 47.13 perf-profile.children.cycles-pp.unix_stream_recvmsg
47.58 -0.5 47.12 perf-profile.children.cycles-pp.unix_stream_read_generic
47.71 -0.4 47.28 perf-profile.children.cycles-pp.____sys_recvmsg
47.80 -0.4 47.38 perf-profile.children.cycles-pp.___sys_recvmsg
47.82 -0.4 47.41 perf-profile.children.cycles-pp.__sys_recvmsg
49.00 -0.4 48.59 perf-profile.children.cycles-pp.sock_sendmsg
49.07 -0.4 48.68 perf-profile.children.cycles-pp.____sys_sendmsg
47.99 -0.4 47.62 perf-profile.children.cycles-pp.recvmsg
49.23 -0.4 48.87 perf-profile.children.cycles-pp.___sys_sendmsg
49.26 -0.4 48.90 perf-profile.children.cycles-pp.__sys_sendmsg
49.68 -0.2 49.44 perf-profile.children.cycles-pp.sendmsg
99.33 -0.1 99.22 perf-profile.children.cycles-pp.do_syscall_64
99.38 -0.1 99.28 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
99.59 -0.1 99.51 perf-profile.children.cycles-pp.stress_run
0.22 ± 2% -0.0 0.20 perf-profile.children.cycles-pp.skb_unlink
0.10 +0.0 0.11 perf-profile.children.cycles-pp.copy_user_generic_unrolled
0.05 +0.0 0.06 perf-profile.children.cycles-pp.switch_fpu_return
0.07 +0.0 0.08 perf-profile.children.cycles-pp.recvmsg_copy_msghdr
0.07 +0.0 0.08 perf-profile.children.cycles-pp.__skb_datagram_iter
0.09 +0.0 0.10 perf-profile.children.cycles-pp.ioctl
0.06 +0.0 0.07 perf-profile.children.cycles-pp.kmem_cache_alloc_node
0.05 +0.0 0.06 ± 6% perf-profile.children.cycles-pp.dequeue_entity
0.13 ± 5% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.rcu_core
0.07 ± 6% +0.0 0.09 ± 8% perf-profile.children.cycles-pp.task_tick_fair
0.08 ± 4% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.link_path_walk
0.12 ± 5% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.ret_from_fork
0.12 ± 5% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.kthread
0.12 ± 3% +0.0 0.13 ± 2% perf-profile.children.cycles-pp.__x64_sys_close
0.11 ± 3% +0.0 0.13 ± 3% perf-profile.children.cycles-pp.kmem_cache_free
0.06 ± 7% +0.0 0.08 ± 8% perf-profile.children.cycles-pp.kobject_put
0.08 ± 6% +0.0 0.09 perf-profile.children.cycles-pp.kmalloc_reserve
0.09 ± 5% +0.0 0.11 perf-profile.children.cycles-pp.__copy_msghdr_from_user
0.07 ± 6% +0.0 0.09 perf-profile.children.cycles-pp.getname_flags
0.07 ± 5% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.__kmalloc_node_track_caller
0.07 ± 7% +0.0 0.08 ± 5% perf-profile.children.cycles-pp.cdev_put
0.06 ± 9% +0.0 0.07 ± 5% perf-profile.children.cycles-pp.enqueue_entity
0.08 ± 5% +0.0 0.10 perf-profile.children.cycles-pp.iovec_from_user
0.28 ± 3% +0.0 0.30 ± 2% perf-profile.children.cycles-pp.hrtimer_interrupt
0.14 ± 6% +0.0 0.16 ± 3% perf-profile.children.cycles-pp.tick_sched_handle
0.10 +0.0 0.12 ± 3% perf-profile.children.cycles-pp.__import_iovec
0.11 +0.0 0.13 ± 2% perf-profile.children.cycles-pp.__entry_text_start
0.08 ± 4% +0.0 0.10 perf-profile.children.cycles-pp.__check_object_size
0.06 ± 7% +0.0 0.08 ± 5% perf-profile.children.cycles-pp.security_file_free
0.11 ± 4% +0.0 0.13 ± 3% perf-profile.children.cycles-pp.update_load_avg
0.11 ± 4% +0.0 0.13 ± 3% perf-profile.children.cycles-pp.import_iovec
0.06 +0.0 0.08 perf-profile.children.cycles-pp.apparmor_file_free_security
0.17 ± 4% +0.0 0.19 ± 3% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.12 +0.0 0.14 perf-profile.children.cycles-pp.syscall_return_via_sysret
0.09 ± 4% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.common_file_perm
0.06 ± 6% +0.0 0.08 ± 5% perf-profile.children.cycles-pp.select_idle_cpu
0.06 ± 9% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.lockref_get
0.12 ± 3% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.__might_resched
0.09 ± 5% +0.0 0.12 ± 4% perf-profile.children.cycles-pp.security_file_receive
0.14 ± 2% +0.0 0.16 ± 2% perf-profile.children.cycles-pp.kmem_cache_alloc
0.10 ± 7% +0.0 0.12 ± 5% perf-profile.children.cycles-pp.__legitimize_path
0.08 ± 9% +0.0 0.10 ± 3% perf-profile.children.cycles-pp.update_curr
0.09 ± 8% +0.0 0.11 ± 6% perf-profile.children.cycles-pp.lockref_get_not_dead
0.08 ± 4% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.load_new_mm_cr3
0.10 ± 4% +0.0 0.13 ± 5% perf-profile.children.cycles-pp.complete_walk
0.14 ± 2% +0.0 0.17 perf-profile.children.cycles-pp.sendmsg_copy_msghdr
0.09 ± 5% +0.0 0.12 ± 4% perf-profile.children.cycles-pp.pick_next_task_fair
0.10 ± 8% +0.0 0.13 ± 4% perf-profile.children.cycles-pp.ttwu_do_activate
0.08 +0.0 0.11 ± 3% perf-profile.children.cycles-pp.sock_recvmsg
0.08 +0.0 0.11 ± 3% perf-profile.children.cycles-pp.security_socket_recvmsg
0.16 ± 3% +0.0 0.18 ± 2% perf-profile.children.cycles-pp.__receive_fd
0.10 ± 5% +0.0 0.12 ± 4% perf-profile.children.cycles-pp.select_idle_sibling
0.10 ± 6% +0.0 0.13 ± 5% perf-profile.children.cycles-pp.enqueue_task_fair
0.11 ± 6% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.dequeue_task_fair
0.11 ± 4% +0.0 0.14 ± 6% perf-profile.children.cycles-pp.select_task_rq_fair
0.10 ± 5% +0.0 0.13 ± 5% perf-profile.children.cycles-pp.try_to_unlazy
0.09 ± 5% +0.0 0.13 ± 3% perf-profile.children.cycles-pp.terminate_walk
0.08 ± 5% +0.0 0.12 ± 8% perf-profile.children.cycles-pp.propagate_protected_usage
0.18 ± 2% +0.0 0.21 ± 3% perf-profile.children.cycles-pp.sock_alloc_send_pskb
0.15 ± 3% +0.0 0.19 perf-profile.children.cycles-pp.__alloc_skb
0.11 ± 3% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.switch_mm_irqs_off
0.19 +0.0 0.22 ± 2% perf-profile.children.cycles-pp._copy_from_user
0.16 ± 3% +0.0 0.19 ± 3% perf-profile.children.cycles-pp.alloc_skb_with_frags
0.22 ± 3% +0.0 0.26 perf-profile.children.cycles-pp.scm_detach_fds
0.21 +0.0 0.25 perf-profile.children.cycles-pp.copy_msghdr_from_user
0.02 ±141% +0.0 0.06 ± 8% perf-profile.children.cycles-pp.cdev_get
0.14 ± 5% +0.0 0.18 ± 2% perf-profile.children.cycles-pp.page_counter_cancel
0.09 ± 4% +0.0 0.14 ± 2% perf-profile.children.cycles-pp.aa_get_task_label
0.01 ±223% +0.0 0.06 ± 9% perf-profile.children.cycles-pp.__switch_to
0.13 ± 2% +0.0 0.18 ± 2% perf-profile.children.cycles-pp.apparmor_file_alloc_security
0.00 +0.1 0.05 perf-profile.children.cycles-pp.kobject_get_unless_zero
0.00 +0.1 0.05 perf-profile.children.cycles-pp.__update_load_avg_se
0.00 +0.1 0.05 perf-profile.children.cycles-pp.__kmalloc_track_caller
0.17 ± 2% +0.1 0.22 perf-profile.children.cycles-pp.security_file_alloc
0.17 ± 5% +0.1 0.22 ± 4% perf-profile.children.cycles-pp.lockref_put_or_lock
0.00 +0.1 0.06 ± 9% perf-profile.children.cycles-pp.available_idle_cpu
0.19 ± 5% +0.1 0.25 ± 3% perf-profile.children.cycles-pp.dput
0.18 ± 4% +0.1 0.24 perf-profile.children.cycles-pp.page_counter_uncharge
0.14 ± 6% +0.1 0.20 ± 4% perf-profile.children.cycles-pp.chrdev_open
0.18 ± 2% +0.1 0.24 perf-profile.children.cycles-pp.scm_fp_dup
0.20 ± 4% +0.1 0.26 perf-profile.children.cycles-pp.obj_cgroup_uncharge_pages
0.18 ± 2% +0.1 0.24 perf-profile.children.cycles-pp.security_socket_sendmsg
0.20 ± 2% +0.1 0.27 ± 3% perf-profile.children.cycles-pp.page_counter_charge
0.22 ± 2% +0.1 0.29 ± 4% perf-profile.children.cycles-pp.obj_cgroup_charge_pages
0.32 ± 2% +0.1 0.39 ± 2% perf-profile.children.cycles-pp.__fput
0.26 +0.1 0.34 ± 2% perf-profile.children.cycles-pp.kmem_cache_alloc_trace
0.25 ± 2% +0.1 0.32 ± 3% perf-profile.children.cycles-pp.obj_cgroup_charge
0.28 ± 2% +0.1 0.36 perf-profile.children.cycles-pp.kfree
0.36 ± 2% +0.1 0.44 ± 2% perf-profile.children.cycles-pp.task_work_run
0.16 ± 3% +0.1 0.24 perf-profile.children.cycles-pp.ima_file_check
0.15 +0.1 0.23 ± 2% perf-profile.children.cycles-pp.security_task_getsecid_subj
0.15 ± 3% +0.1 0.23 perf-profile.children.cycles-pp.apparmor_task_getsecid
0.32 +0.1 0.40 perf-profile.children.cycles-pp.alloc_empty_file
0.31 +0.1 0.40 perf-profile.children.cycles-pp.__alloc_file
0.18 ± 2% +0.1 0.26 ± 2% perf-profile.children.cycles-pp.security_file_open
0.24 ± 2% +0.1 0.33 perf-profile.children.cycles-pp.aa_sk_perm
0.17 ± 3% +0.1 0.26 ± 4% perf-profile.children.cycles-pp.apparmor_file_open
0.32 ± 4% +0.1 0.42 ± 2% perf-profile.children.cycles-pp.schedule_timeout
0.32 ± 4% +0.1 0.42 ± 3% perf-profile.children.cycles-pp.__wake_up_common
0.31 ± 5% +0.1 0.41 ± 4% perf-profile.children.cycles-pp.try_to_wake_up
0.32 ± 5% +0.1 0.41 ± 3% perf-profile.children.cycles-pp.autoremove_wake_function
0.33 ± 4% +0.1 0.43 ± 3% perf-profile.children.cycles-pp.__wake_up_common_lock
0.34 ± 5% +0.1 0.44 ± 4% perf-profile.children.cycles-pp.sock_def_readable
0.60 +0.1 0.72 perf-profile.children.cycles-pp.__close
0.41 +0.1 0.53 perf-profile.children.cycles-pp.refcount_dec_not_one
0.42 +0.1 0.54 perf-profile.children.cycles-pp.free_uid
0.42 +0.1 0.54 perf-profile.children.cycles-pp.refcount_dec_and_lock_irqsave
0.44 +0.1 0.57 perf-profile.children.cycles-pp.__scm_destroy
0.48 ± 4% +0.1 0.62 ± 2% perf-profile.children.cycles-pp.__schedule
0.48 ± 4% +0.1 0.63 ± 2% perf-profile.children.cycles-pp.schedule
0.62 ± 2% +0.2 0.77 perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.65 ± 2% +0.2 0.81 perf-profile.children.cycles-pp.syscall_exit_to_user_mode
0.53 ± 2% +0.2 0.70 perf-profile.children.cycles-pp.__scm_send
0.40 ± 2% +0.2 0.58 ± 2% perf-profile.children.cycles-pp.do_dentry_open
0.70 ± 2% +0.3 1.00 perf-profile.children.cycles-pp.do_open
1.24 +0.4 1.68 perf-profile.children.cycles-pp.do_filp_open
1.23 +0.4 1.68 perf-profile.children.cycles-pp.path_openat
1.38 +0.5 1.85 perf-profile.children.cycles-pp.do_sys_open
1.37 +0.5 1.84 perf-profile.children.cycles-pp.do_sys_openat2
1.44 +0.5 1.93 perf-profile.children.cycles-pp.open64
92.10 -1.7 90.39 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.09 +0.0 0.10 ± 3% perf-profile.self.cycles-pp.copy_user_generic_unrolled
0.05 +0.0 0.06 ± 6% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.07 +0.0 0.08 ± 5% perf-profile.self.cycles-pp.kfree
0.06 +0.0 0.07 ± 6% perf-profile.self.cycles-pp.unix_stream_read_generic
0.07 +0.0 0.08 ± 5% perf-profile.self.cycles-pp.unix_notinflight
0.06 ± 7% +0.0 0.08 ± 8% perf-profile.self.cycles-pp.kobject_put
0.06 ± 7% +0.0 0.08 perf-profile.self.cycles-pp.__alloc_file
0.12 +0.0 0.14 ± 2% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.42 +0.0 0.44 perf-profile.self.cycles-pp._raw_spin_lock
0.06 ± 7% +0.0 0.08 ± 5% perf-profile.self.cycles-pp.__schedule
0.06 +0.0 0.08 perf-profile.self.cycles-pp.apparmor_file_free_security
0.07 ± 7% +0.0 0.09 ± 4% perf-profile.self.cycles-pp.unix_inflight
0.06 ± 9% +0.0 0.08 ± 6% perf-profile.self.cycles-pp.lockref_get
0.11 +0.0 0.13 ± 2% perf-profile.self.cycles-pp.__might_resched
0.08 ± 4% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.common_file_perm
0.09 ± 8% +0.0 0.11 ± 6% perf-profile.self.cycles-pp.lockref_get_not_dead
0.08 ± 4% +0.0 0.11 ± 4% perf-profile.self.cycles-pp.load_new_mm_cr3
0.08 ± 4% +0.0 0.12 ± 5% perf-profile.self.cycles-pp.propagate_protected_usage
0.05 ± 8% +0.0 0.09 perf-profile.self.cycles-pp.apparmor_task_getsecid
0.14 ± 5% +0.0 0.18 ± 2% perf-profile.self.cycles-pp.page_counter_cancel
0.12 +0.0 0.16 ± 2% perf-profile.self.cycles-pp.unix_attach_fds
0.13 ± 3% +0.0 0.17 ± 2% perf-profile.self.cycles-pp.apparmor_file_alloc_security
0.09 +0.0 0.14 ± 3% perf-profile.self.cycles-pp.aa_get_task_label
0.16 ± 3% +0.0 0.20 ± 2% perf-profile.self.cycles-pp.page_counter_charge
0.17 ± 5% +0.0 0.22 ± 5% perf-profile.self.cycles-pp.lockref_put_or_lock
0.00 +0.1 0.05 perf-profile.self.cycles-pp.__check_object_size
0.00 +0.1 0.05 ± 7% perf-profile.self.cycles-pp.__switch_to
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.available_idle_cpu
0.13 ± 3% +0.1 0.19 ± 2% perf-profile.self.cycles-pp.scm_fp_dup
0.16 +0.1 0.22 perf-profile.self.cycles-pp.__scm_send
0.22 +0.1 0.31 perf-profile.self.cycles-pp.aa_sk_perm
0.17 ± 2% +0.1 0.26 ± 3% perf-profile.self.cycles-pp.apparmor_file_open
0.41 +0.1 0.53 perf-profile.self.cycles-pp.refcount_dec_not_one
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation
Thanks,
Oliver Sang
[sched/fair] c9c342d546: fxmark.ssd_xfs_DWOM_72_bufferedio.works/sec -58.3% regression
by kernel test robot
Greeting,
FYI, we noticed a -58.3% regression of fxmark.ssd_xfs_DWOM_72_bufferedio.works/sec due to commit:
commit: c9c342d5463edc3fbf808db485570b1803b6eefe ("sched/fair: Relax update_min_vruntime()")
https://git.kernel.org/cgit/linux/kernel/git/peterz/queue.git sched/wip.migrate
in testcase: fxmark
on test machine: 24 threads 1 sockets Intel Atom(R) P5362 processor with 64G memory
with following parameters:
disk: 1SSD
media: ssd
test: DWOM
fstype: xfs
directio: bufferedio
cpufreq_governor: performance
ucode: 0x9c02000e
test-description: FxMark is a filesystem benchmark that tests multicore scalability.
test-url: https://github.com/sslab-gatech/fxmark
In addition to that, the commit also has significant impact on the following tests:
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <oliver.sang(a)intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove ~/.lkp and /lkp dir to run from a clean state.
=========================================================================================
compiler/cpufreq_governor/directio/disk/fstype/kconfig/media/rootfs/tbox_group/test/testcase/ucode:
gcc-9/performance/bufferedio/1SSD/xfs/x86_64-rhel-8.3/ssd/debian-10.4-x86_64-20200603.cgz/lkp-snr-a1/DWOM/fxmark/0x9c02000e
commit:
820ba6bb9a ("sched/fair: Cleanup remnants")
c9c342d546 ("sched/fair: Relax update_min_vruntime()")
820ba6bb9a300844 c9c342d5463edc3fbf808db4855
---------------- ---------------------------
%stddev %change %stddev
\ | \
4.96 ± 4% +10.0% 5.45 fxmark.ssd_xfs_DWOM_18_bufferedio.idle_sec
0.55 ± 4% +10.0% 0.60 fxmark.ssd_xfs_DWOM_18_bufferedio.idle_util
133.18 +20.3% 160.20 fxmark.ssd_xfs_DWOM_36_bufferedio.idle_sec
11.09 +20.3% 13.34 fxmark.ssd_xfs_DWOM_36_bufferedio.idle_util
0.12 ± 3% +18.4% 0.14 ± 3% fxmark.ssd_xfs_DWOM_36_bufferedio.softirq_sec
0.01 ± 3% +18.4% 0.01 ± 3% fxmark.ssd_xfs_DWOM_36_bufferedio.softirq_util
3.55 ± 8% -34.3% 2.33 ± 2% fxmark.ssd_xfs_DWOM_36_bufferedio.user_sec
0.30 ± 8% -34.3% 0.19 ± 2% fxmark.ssd_xfs_DWOM_36_bufferedio.user_util
35454648 -35.5% 22862699 ± 3% fxmark.ssd_xfs_DWOM_36_bufferedio.works
709129 -35.5% 457216 ± 3% fxmark.ssd_xfs_DWOM_36_bufferedio.works/sec
166.30 +30.1% 216.34 fxmark.ssd_xfs_DWOM_54_bufferedio.idle_sec
13.84 +30.3% 18.03 fxmark.ssd_xfs_DWOM_54_bufferedio.idle_util
0.17 ± 4% +23.9% 0.21 ± 2% fxmark.ssd_xfs_DWOM_54_bufferedio.softirq_sec
0.01 ± 4% +24.1% 0.02 ± 2% fxmark.ssd_xfs_DWOM_54_bufferedio.softirq_util
5.51 ± 3% -30.0% 3.86 ± 2% fxmark.ssd_xfs_DWOM_54_bufferedio.user_sec
0.46 ± 3% -29.9% 0.32 ± 2% fxmark.ssd_xfs_DWOM_54_bufferedio.user_util
33852041 -54.6% 15384759 ± 5% fxmark.ssd_xfs_DWOM_54_bufferedio.works
676729 -54.5% 307684 ± 5% fxmark.ssd_xfs_DWOM_54_bufferedio.works/sec
105.93 ± 2% +26.6% 134.12 ± 2% fxmark.ssd_xfs_DWOM_72_bufferedio.idle_sec
8.82 ± 2% +26.7% 11.18 ± 2% fxmark.ssd_xfs_DWOM_72_bufferedio.idle_util
0.12 ± 6% +28.6% 0.16 ± 5% fxmark.ssd_xfs_DWOM_72_bufferedio.softirq_sec
0.01 ± 6% +28.7% 0.01 ± 5% fxmark.ssd_xfs_DWOM_72_bufferedio.softirq_util
4.04 ± 27% -56.3% 1.76 ± 5% fxmark.ssd_xfs_DWOM_72_bufferedio.user_sec
0.34 ± 27% -56.3% 0.15 ± 5% fxmark.ssd_xfs_DWOM_72_bufferedio.user_util
34555856 -58.3% 14412150 ± 2% fxmark.ssd_xfs_DWOM_72_bufferedio.works
691075 -58.3% 288206 ± 2% fxmark.ssd_xfs_DWOM_72_bufferedio.works/sec
11772443 ± 15% -22.3% 9151522 ± 12% meminfo.DirectMap2M
24.31 +4.0% 25.29 iostat.cpu.idle
64.69 -1.5% 63.73 iostat.cpu.system
9374 +12.4% 10536 ± 2% softirqs.CPU10.SCHED
22278 ± 8% +16.8% 26015 ± 8% softirqs.CPU2.RCU
13550 ± 4% +14.5% 15516 ± 4% softirqs.CPU2.SCHED
5242 ± 28% +108.2% 10914 ± 26% softirqs.CPU2.TIMER
7428 +26.5% 9395 ± 11% softirqs.CPU21.SCHED
7816 ± 5% +16.4% 9100 ± 7% softirqs.CPU23.SCHED
21565 ± 7% +15.5% 24901 ± 7% softirqs.CPU3.RCU
16713 ± 13% +24.5% 20810 ± 9% softirqs.CPU6.RCU
1.12 -28.2% 0.80 perf-stat.i.MPKI
4.072e+08 -16.2% 3.412e+08 perf-stat.i.branch-instructions
1721600 -6.6% 1607846 perf-stat.i.branch-misses
3.14 ± 4% +0.3 3.46 ± 5% perf-stat.i.cache-miss-rate%
160539 +2.2% 164008 perf-stat.i.cache-misses
5610372 -45.4% 3063912 perf-stat.i.cache-references
2.87 ± 2% +9.7% 3.14 perf-stat.i.cpi
14045 -1.3% 13869 perf-stat.i.cpu-clock
1.473e+10 -6.8% 1.373e+10 perf-stat.i.cpu-cycles
102.90 +44.1% 148.29 perf-stat.i.cpu-migrations
432428 ± 2% -14.9% 368000 ± 2% perf-stat.i.cycles-between-cache-misses
89452 ± 30% -48.6% 46000 ± 11% perf-stat.i.dTLB-load-misses
6.016e+08 -23.4% 4.61e+08 perf-stat.i.dTLB-loads
2.036e+08 -44.1% 1.139e+08 perf-stat.i.dTLB-stores
2.014e+09 -18.3% 1.645e+09 perf-stat.i.instructions
0.61 -6.8% 0.57 perf-stat.i.metric.GHz
232.82 -44.7% 128.85 perf-stat.i.metric.K/sec
50.52 -24.5% 38.15 perf-stat.i.metric.M/sec
14045 -1.3% 13869 perf-stat.i.task-clock
2.79 -33.2% 1.86 perf-stat.overall.MPKI
0.42 ± 2% +0.0 0.45 perf-stat.overall.branch-miss-rate%
2.86 +2.3 5.13 perf-stat.overall.cache-miss-rate%
7.31 +14.5% 8.37 perf-stat.overall.cpi
91845 -4.4% 87771 perf-stat.overall.cycles-between-cache-misses
0.01 ± 30% -0.0 0.01 ± 10% perf-stat.overall.dTLB-load-miss-rate%
0.14 -12.6% 0.12 perf-stat.overall.ipc
4.098e+08 -13.0% 3.565e+08 perf-stat.ps.branch-instructions
1730353 ± 2% -7.1% 1607986 perf-stat.ps.branch-misses
5646350 -43.4% 3196134 perf-stat.ps.cache-references
1.482e+10 -2.9% 1.438e+10 perf-stat.ps.cpu-cycles
103.39 +49.8% 154.85 perf-stat.ps.cpu-migrations
90043 ± 30% -47.4% 47379 ± 11% perf-stat.ps.dTLB-load-misses
6.055e+08 -20.3% 4.823e+08 perf-stat.ps.dTLB-loads
2.049e+08 -42.1% 1.186e+08 perf-stat.ps.dTLB-stores
2.027e+09 -15.2% 1.719e+09 perf-stat.ps.instructions
1.017e+12 -15.2% 8.62e+11 perf-stat.total.instructions
3.39 -1.0 2.34 ± 2% perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_down_write_slowpath.xfs_ilock.xfs_file_buffered_write.new_sync_write
2.75 -1.0 1.78 perf-profile.calltrace.cycles-pp.iomap_file_buffered_write.xfs_file_buffered_write.new_sync_write.vfs_write.ksys_pwrite64
99.16 -0.3 98.82 perf-profile.calltrace.cycles-pp.__libc_pwrite
0.86 ± 2% -0.3 0.56 perf-profile.calltrace.cycles-pp.iomap_iter.iomap_file_buffered_write.xfs_file_buffered_write.new_sync_write.vfs_write
98.88 -0.3 98.62 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__libc_pwrite
98.84 -0.2 98.60 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_pwrite
98.77 -0.2 98.54 perf-profile.calltrace.cycles-pp.ksys_pwrite64.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_pwrite
98.70 -0.2 98.50 perf-profile.calltrace.cycles-pp.vfs_write.ksys_pwrite64.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_pwrite
98.53 -0.2 98.37 perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_pwrite64.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.20 -0.1 1.14 ± 3% perf-profile.calltrace.cycles-pp.xfs_iunlock.xfs_file_buffered_write.new_sync_write.vfs_write.ksys_pwrite64
0.00 +0.5 0.54 ± 2% perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
0.00 +0.6 0.58 ± 3% perf-profile.calltrace.cycles-pp.try_to_wake_up.wake_up_q.rwsem_wake.xfs_iunlock.xfs_file_buffered_write
0.00 +0.6 0.62 ± 3% perf-profile.calltrace.cycles-pp.wake_up_q.rwsem_wake.xfs_iunlock.xfs_file_buffered_write.new_sync_write
93.68 +1.2 94.84 perf-profile.calltrace.cycles-pp.xfs_ilock.xfs_file_buffered_write.new_sync_write.vfs_write.ksys_pwrite64
93.27 +1.2 94.50 perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.xfs_ilock.xfs_file_buffered_write.new_sync_write.vfs_write
3.39 -1.0 2.36 ± 2% perf-profile.children.cycles-pp.rwsem_spin_on_owner
2.76 -1.0 1.79 perf-profile.children.cycles-pp.iomap_file_buffered_write
99.25 -0.4 98.87 perf-profile.children.cycles-pp.__libc_pwrite
0.86 ± 3% -0.3 0.56 perf-profile.children.cycles-pp.iomap_iter
0.74 ± 2% -0.3 0.48 ± 2% perf-profile.children.cycles-pp.down_write
0.64 ± 7% -0.2 0.39 ± 7% perf-profile.children.cycles-pp.iomap_write_begin
98.77 -0.2 98.54 perf-profile.children.cycles-pp.ksys_pwrite64
0.63 ± 4% -0.2 0.41 ± 2% perf-profile.children.cycles-pp.xfs_buffered_write_iomap_begin
0.66 -0.2 0.45 ± 4% perf-profile.children.cycles-pp.copy_page_from_iter_atomic
0.61 -0.2 0.42 ± 5% perf-profile.children.cycles-pp.xfs_file_write_checks
98.76 -0.2 98.58 perf-profile.children.cycles-pp.vfs_write
0.60 -0.2 0.41 ± 4% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
0.60 -0.2 0.42 ± 5% perf-profile.children.cycles-pp.copyin
99.07 -0.2 98.90 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
99.03 -0.2 98.87 perf-profile.children.cycles-pp.do_syscall_64
98.59 -0.1 98.45 perf-profile.children.cycles-pp.new_sync_write
0.28 ± 7% -0.1 0.17 ± 10% perf-profile.children.cycles-pp.pagecache_get_page
0.34 ± 4% -0.1 0.23 ± 4% perf-profile.children.cycles-pp.osq_unlock
0.27 ± 9% -0.1 0.16 ± 9% perf-profile.children.cycles-pp.__filemap_get_folio
0.27 ± 3% -0.1 0.17 ± 7% perf-profile.children.cycles-pp.rwsem_mark_wake
0.31 ± 2% -0.1 0.22 ± 5% perf-profile.children.cycles-pp.up_write
0.25 ± 3% -0.1 0.16 ± 9% perf-profile.children.cycles-pp.wake_q_add
0.25 ± 5% -0.1 0.17 ± 4% perf-profile.children.cycles-pp.file_update_time
0.21 ± 8% -0.1 0.13 ± 7% perf-profile.children.cycles-pp.iomap_page_create
0.22 -0.1 0.14 ± 5% perf-profile.children.cycles-pp.iomap_write_end
1.25 -0.1 1.18 ± 2% perf-profile.children.cycles-pp.xfs_iunlock
0.16 ± 3% -0.1 0.11 ± 4% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.15 ± 5% -0.0 0.11 ± 4% perf-profile.children.cycles-pp.__entry_text_start
0.15 ± 5% -0.0 0.11 ± 7% perf-profile.children.cycles-pp.fault_in_iov_iter_readable
0.13 ± 3% -0.0 0.09 ± 4% perf-profile.children.cycles-pp.fault_in_readable
0.11 ± 6% -0.0 0.07 ± 10% perf-profile.children.cycles-pp.__might_resched
0.09 ± 15% -0.0 0.06 ± 9% perf-profile.children.cycles-pp.xas_load
0.09 ± 7% -0.0 0.06 ± 9% perf-profile.children.cycles-pp.main_work
0.08 ± 6% -0.0 0.04 ± 58% perf-profile.children.cycles-pp.common_file_perm
0.11 ± 3% -0.0 0.08 ± 5% perf-profile.children.cycles-pp.__get_user_nocheck_1
0.08 ± 5% -0.0 0.06 ± 9% perf-profile.children.cycles-pp.__set_page_dirty_nobuffers
0.09 ± 7% -0.0 0.06 ± 17% perf-profile.children.cycles-pp.security_file_permission
0.06 +0.0 0.08 ± 10% perf-profile.children.cycles-pp.__get_user_nocheck_8
0.06 +0.0 0.08 ± 15% perf-profile.children.cycles-pp.perf_callchain_user
0.04 ± 57% +0.0 0.06 ± 13% perf-profile.children.cycles-pp.perf_output_sample
0.08 ± 6% +0.0 0.10 ± 10% perf-profile.children.cycles-pp.perf_session__process_user_event
0.06 ± 20% +0.0 0.09 ± 15% perf-profile.children.cycles-pp.machines__deliver_event
0.06 ± 11% +0.0 0.10 ± 9% perf-profile.children.cycles-pp.ksys_read
0.10 ± 15% +0.0 0.13 ± 12% perf-profile.children.cycles-pp.perf_session__deliver_event
0.09 ± 7% +0.0 0.13 ± 3% perf-profile.children.cycles-pp.__ordered_events__flush
0.06 ± 11% +0.0 0.10 ± 8% perf-profile.children.cycles-pp.vfs_read
0.08 ± 19% +0.0 0.12 ± 3% perf-profile.children.cycles-pp.__unwind_start
0.03 ±102% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.irq_exit_rcu
0.01 ±173% +0.0 0.06 ± 11% perf-profile.children.cycles-pp.__libc_read
0.42 ± 7% +0.0 0.47 ± 2% perf-profile.children.cycles-pp.update_curr
0.00 +0.1 0.05 perf-profile.children.cycles-pp.kernel_text_address
0.39 ± 7% +0.1 0.44 perf-profile.children.cycles-pp.perf_trace_sched_stat_runtime
0.01 ±173% +0.1 0.06 ± 17% perf-profile.children.cycles-pp.seq_read
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.memcpy_erms
0.00 +0.1 0.06 ± 9% perf-profile.children.cycles-pp.pick_next_task_fair
0.01 ±173% +0.1 0.07 ± 12% perf-profile.children.cycles-pp.seq_read_iter
0.00 +0.1 0.06 ± 15% perf-profile.children.cycles-pp.dequeue_entity
0.69 ± 5% +0.1 0.75 perf-profile.children.cycles-pp.hrtimer_interrupt
0.00 +0.1 0.06 ± 14% perf-profile.children.cycles-pp.proc_reg_read
0.00 +0.1 0.06 ± 11% perf-profile.children.cycles-pp.build_id__mark_dso_hit
0.00 +0.1 0.06 ± 11% perf-profile.children.cycles-pp.dequeue_task_fair
0.72 ± 6% +0.1 0.78 perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
0.12 ± 16% +0.1 0.18 ± 12% perf-profile.children.cycles-pp.process_simple
0.22 ± 11% +0.1 0.28 ± 4% perf-profile.children.cycles-pp.unwind_next_frame
0.00 +0.1 0.07 ± 6% perf-profile.children.cycles-pp.load_balance
0.49 ± 2% +0.1 0.56 ± 3% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.14 ± 16% +0.1 0.21 ± 10% perf-profile.children.cycles-pp.record__finish_output
0.14 ± 16% +0.1 0.21 ± 10% perf-profile.children.cycles-pp.perf_session__process_events
0.15 ± 10% +0.1 0.23 ± 9% perf-profile.children.cycles-pp.cmd_record
0.16 ± 13% +0.1 0.23 ± 9% perf-profile.children.cycles-pp.__libc_start_main
0.16 ± 13% +0.1 0.23 ± 9% perf-profile.children.cycles-pp.main
0.16 ± 13% +0.1 0.23 ± 9% perf-profile.children.cycles-pp.run_builtin
0.15 ± 12% +0.1 0.23 ± 9% perf-profile.children.cycles-pp.cmd_sched
0.00 +0.1 0.08 ± 8% perf-profile.children.cycles-pp.perf_trace_sched_switch
0.00 +0.1 0.08 ± 8% perf-profile.children.cycles-pp.perf_trace_sched_wakeup_template
0.28 ± 9% +0.1 0.37 ± 2% perf-profile.children.cycles-pp.perf_callchain_kernel
0.07 ± 6% +0.1 0.17 ± 9% perf-profile.children.cycles-pp.schedule
0.36 ± 6% +0.1 0.46 ± 4% perf-profile.children.cycles-pp.get_perf_callchain
0.36 ± 6% +0.1 0.47 ± 4% perf-profile.children.cycles-pp.perf_callchain
0.79 ± 4% +0.1 0.90 perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
0.85 ± 4% +0.1 0.96 perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
0.11 ± 9% +0.1 0.23 ± 7% perf-profile.children.cycles-pp.intel_idle
0.38 ± 7% +0.1 0.49 ± 3% perf-profile.children.cycles-pp.perf_prepare_sample
0.08 ± 5% +0.1 0.22 ± 5% perf-profile.children.cycles-pp.__schedule
0.44 ± 5% +0.1 0.57 perf-profile.children.cycles-pp.perf_swevent_overflow
0.44 ± 5% +0.1 0.57 perf-profile.children.cycles-pp.__perf_event_overflow
0.43 ± 6% +0.1 0.57 ± 2% perf-profile.children.cycles-pp.perf_event_output_forward
0.45 ± 5% +0.1 0.59 perf-profile.children.cycles-pp.perf_tp_event
0.44 ± 2% +0.2 0.62 ± 4% perf-profile.children.cycles-pp.wake_up_q
0.40 ± 2% +0.2 0.60 ± 3% perf-profile.children.cycles-pp.try_to_wake_up
0.23 ± 5% +0.2 0.44 ± 3% perf-profile.children.cycles-pp.cpuidle_enter
0.23 ± 5% +0.2 0.44 ± 3% perf-profile.children.cycles-pp.cpuidle_enter_state
0.10 ± 7% +0.2 0.32 ± 2% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.27 ± 6% +0.2 0.52 ± 2% perf-profile.children.cycles-pp.start_secondary
0.28 ± 4% +0.3 0.54 ± 2% perf-profile.children.cycles-pp.secondary_startup_64_no_verify
0.28 ± 4% +0.3 0.54 ± 2% perf-profile.children.cycles-pp.cpu_startup_entry
0.28 ± 4% +0.3 0.54 ± 2% perf-profile.children.cycles-pp.do_idle
94.08 +1.0 95.09 perf-profile.children.cycles-pp.xfs_ilock
93.29 +1.3 94.58 perf-profile.children.cycles-pp.rwsem_down_write_slowpath
3.34 -1.1 2.25 perf-profile.self.cycles-pp.rwsem_spin_on_owner
0.61 ± 2% -0.2 0.40 ± 4% perf-profile.self.cycles-pp.down_write
0.59 ± 2% -0.2 0.41 ± 4% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
0.40 ± 3% -0.1 0.26 ± 4% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.34 ± 4% -0.1 0.23 ± 6% perf-profile.self.cycles-pp.osq_unlock
0.31 ± 3% -0.1 0.22 ± 4% perf-profile.self.cycles-pp.up_write
0.25 ± 3% -0.1 0.16 ± 9% perf-profile.self.cycles-pp.wake_q_add
0.21 ± 8% -0.1 0.13 ± 7% perf-profile.self.cycles-pp.iomap_page_create
0.22 ± 3% -0.1 0.15 ± 2% perf-profile.self.cycles-pp.iomap_iter
0.19 -0.1 0.12 ± 3% perf-profile.self.cycles-pp.iomap_file_buffered_write
0.21 ± 5% -0.1 0.14 ± 5% perf-profile.self.cycles-pp.file_update_time
0.20 ± 8% -0.1 0.14 ± 10% perf-profile.self.cycles-pp.new_sync_write
0.16 ± 3% -0.1 0.11 ± 4% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.14 ± 7% -0.1 0.08 ± 13% perf-profile.self.cycles-pp.__filemap_get_folio
0.14 ± 7% -0.1 0.10 ± 11% perf-profile.self.cycles-pp.iomap_write_begin
0.07 ± 11% -0.0 0.03 ±100% perf-profile.self.cycles-pp.main_work
0.11 ± 3% -0.0 0.07 ± 14% perf-profile.self.cycles-pp.__libc_pwrite
0.11 ± 6% -0.0 0.07 ± 10% perf-profile.self.cycles-pp.__might_resched
0.07 ± 5% -0.0 0.04 ± 57% perf-profile.self.cycles-pp.xfs_buffered_write_iomap_begin
0.10 ± 4% -0.0 0.07 perf-profile.self.cycles-pp.__get_user_nocheck_1
0.08 ± 8% -0.0 0.06 ± 11% perf-profile.self.cycles-pp.__entry_text_start
0.07 ± 10% +0.0 0.09 ± 4% perf-profile.self.cycles-pp.unwind_next_frame
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.memcpy_erms
0.11 ± 9% +0.1 0.23 ± 7% perf-profile.self.cycles-pp.intel_idle
0.10 ± 7% +0.2 0.32 ± 2% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.99 ± 4% +2.6 3.55 ± 3% perf-profile.self.cycles-pp.rwsem_down_write_slowpath
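(For reference, the derived perf-stat "overall" rows above follow directly from the raw per-second counters: overall cpi = cpu-cycles / instructions, i.e. roughly 1.482e+10 / 2.027e+09 ≈ 7.31 on the parent and 1.438e+10 / 1.719e+09 ≈ 8.37 with the patch; and overall MPKI here appears to be cache-references per thousand instructions, since 5646350 / 2.027e+09 * 1000 ≈ 2.79 and 3196134 / 1.719e+09 * 1000 ≈ 1.86 match the reported values.)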
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation
Thanks,
Oliver Sang
[md] 0c031fd37f: kernel_BUG_at_drivers/md/raid#.c
by kernel test robot
Greeting,
FYI, we noticed the following commit (built with gcc-9):
commit: 0c031fd37f69deb0cd8c43bbfcfccd62ebd7e952 ("md: Move alloc/free acct bioset in to personality")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: mdadm-selftests
version: mdadm-selftests-x86_64-5f41845-1_20220116
with following parameters:
disk: 1HDD
test_prefix: 18
ucode: 0x28
on test machine: 8 threads 1 sockets Intel(R) Core(TM) i7-4790T CPU @ 2.70GHz with 16G memory
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <oliver.sang(a)intel.com>
[ 116.534267][ T1574] kernel BUG at drivers/md/raid10.c:928!
[ 116.539866][ T1574] invalid opcode: 0000 [#1] SMP KASAN PTI
[ 116.545453][ T1574] CPU: 2 PID: 1574 Comm: md126_resync Not tainted 5.16.0-rc3-00104-g0c031fd37f69 #1
[ 116.554700][ T1574] Hardware name: Gigabyte Technology Co., Ltd. Z97X-UD5H/Z97X-UD5H, BIOS F9 04/21/2015
[ 116.564232][ T1574] RIP: 0010:raise_barrier (drivers/md/raid10.c:928 (discriminator 3)) raid10
[ 116.570273][ T1574] Code: 07 83 c0 03 38 d0 7c 04 84 d2 75 63 8b 8b ec 00 00 00 85 c9 74 14 48 8d ab dc 00 00 00 48 89 ef e8 0d 8d f1 c2 e9 1e fd ff ff <0f> 0b 4c 89 e7 e8 be 2e 1e c1 e9 61 fe ff ff 48 8b 7c 24 08 e8 af
All code
========
0: 07 (bad)
1: 83 c0 03 add $0x3,%eax
4: 38 d0 cmp %dl,%al
6: 7c 04 jl 0xc
8: 84 d2 test %dl,%dl
a: 75 63 jne 0x6f
c: 8b 8b ec 00 00 00 mov 0xec(%rbx),%ecx
12: 85 c9 test %ecx,%ecx
14: 74 14 je 0x2a
16: 48 8d ab dc 00 00 00 lea 0xdc(%rbx),%rbp
1d: 48 89 ef mov %rbp,%rdi
20: e8 0d 8d f1 c2 callq 0xffffffffc2f18d32
25: e9 1e fd ff ff jmpq 0xfffffffffffffd48
2a:* 0f 0b ud2 <-- trapping instruction
2c: 4c 89 e7 mov %r12,%rdi
2f: e8 be 2e 1e c1 callq 0xffffffffc11e2ef2
34: e9 61 fe ff ff jmpq 0xfffffffffffffe9a
39: 48 8b 7c 24 08 mov 0x8(%rsp),%rdi
3e: e8 .byte 0xe8
3f: af scas %es:(%rdi),%eax
Code starting with the faulting instruction
===========================================
0: 0f 0b ud2
2: 4c 89 e7 mov %r12,%rdi
5: e8 be 2e 1e c1 callq 0xffffffffc11e2ec8
a: e9 61 fe ff ff jmpq 0xfffffffffffffe70
f: 48 8b 7c 24 08 mov 0x8(%rsp),%rdi
14: e8 .byte 0xe8
15: af scas %es:(%rdi),%eax
[ 116.589800][ T1574] RSP: 0018:ffffc900037e7808 EFLAGS: 00010246
[ 116.595830][ T1574] RAX: 0000000000000007 RBX: ffff88840ba6b800 RCX: 0000000000000000
[ 116.603708][ T1574] RDX: 0000000000000000 RSI: ffff88840ba6b8ec RDI: ffff88840ba6b800
[ 116.611577][ T1574] RBP: ffff8884180f0000 R08: 0000000000000001 R09: ffffed10360e2304
[ 116.619439][ T1574] R10: ffff8881b071181f R11: ffffed10360e2303 R12: dffffc0000000000
[ 116.627316][ T1574] R13: 0000000000000001 R14: 0000000000000003 R15: dffffc0000000000
[ 116.635186][ T1574] FS: 0000000000000000(0000) GS:ffff8883a9500000(0000) knlGS:0000000000000000
[ 116.644009][ T1574] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 116.650491][ T1574] CR2: 000055d240e17038 CR3: 000000041de14004 CR4: 00000000001706e0
[ 116.658360][ T1574] Call Trace:
[ 116.661531][ T1574] <TASK>
[ 116.664339][ T1574] ? mempool_init_node (mm/mempool.c:190)
[ 116.669315][ T1574] ? raid10_end_read_request (drivers/md/raid10.c:927) raid10
[ 116.675624][ T1574] ? memset (mm/kasan/shadow.c:44)
[ 116.679610][ T1574] ? bio_reset (arch/x86/include/asm/atomic.h:41 include/linux/atomic/atomic-instrumented.h:42 block/bio.c:307)
[ 116.683746][ T1574] ? raid10_alloc_init_r10buf (drivers/md/raid10.c:3164) raid10
[ 116.690141][ T1574] raid10_sync_request (include/linux/instrumented.h:86 include/linux/atomic/atomic-instrumented.h:41 drivers/md/raid10.c:3453) raid10
[ 116.696097][ T1574] ? raid10_run (drivers/md/raid10.c:3246) raid10
[ 116.701443][ T1574] ? queue_work_on (kernel/workqueue.c:1548)
[ 116.705914][ T1574] md_do_sync.cold (drivers/md/md.c:8943)
[ 116.710619][ T1574] ? ret_from_fork (arch/x86/entry/entry_64.S:301)
[ 116.715101][ T1574] ? md_seq_show (drivers/md/md.c:8691)
[ 116.719745][ T1574] ? newidle_balance (kernel/sched/fair.c:10129 kernel/sched/fair.c:10140 kernel/sched/fair.c:10902)
[ 116.724575][ T1574] ? dequeue_entity (kernel/sched/fair.c:4379)
[ 116.729313][ T1574] ? __x64_sys_rt_sigpending (kernel/signal.c:4055)
[ 116.734835][ T1574] ? __switch_to (arch/x86/include/asm/bitops.h:55 include/asm-generic/bitops/instrumented-atomic.h:29 include/linux/thread_info.h:89 arch/x86/include/asm/fpu/sched.h:65 arch/x86/kernel/process_64.c:622)
[ 116.739314][ T1574] ? __switch_to_asm (arch/x86/entry/entry_64.S:254)
[ 116.743969][ T1574] md_thread (drivers/md/md.c:7900)
[ 116.748096][ T1574] ? bb_store (drivers/md/md.c:7884)
[ 116.752141][ T1574] ? _raw_read_unlock_irqrestore (kernel/locking/spinlock.c:161)
[ 116.757835][ T1574] ? _raw_read_unlock_irqrestore (kernel/locking/spinlock.c:161)
[ 116.763538][ T1574] ? __kthread_parkme (arch/x86/include/asm/bitops.h:207 (discriminator 4) include/asm-generic/bitops/instrumented-non-atomic.h:135 (discriminator 4) kernel/kthread.c:249 (discriminator 4))
[ 116.768358][ T1574] ? schedule (arch/x86/include/asm/bitops.h:207 (discriminator 1) include/asm-generic/bitops/instrumented-non-atomic.h:135 (discriminator 1) include/linux/thread_info.h:118 (discriminator 1) include/linux/sched.h:2120 (discriminator 1) kernel/sched/core.c:6328 (discriminator 1))
[ 116.772482][ T1574] ? bb_store (drivers/md/md.c:7884)
[ 116.776514][ T1574] kthread (kernel/kthread.c:327)
[ 116.780446][ T1574] ? set_kthread_struct (kernel/kthread.c:272)
[ 116.785535][ T1574] ret_from_fork (arch/x86/entry/entry_64.S:301)
[ 116.789833][ T1574] </TASK>
[ 116.792745][ T1574] Modules linked in: multipath loop raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx raid10 raid1 raid0 netconsole btrfs ipmi_devintf ipmi_msghandler blake2b_generic xor raid6_pq zstd_compress libcrc32c intel_rapl_msr intel_rapl_common sd_mod t10_pi sg x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel i915 intel_gtt rapl ttm intel_cstate drm_kms_helper ahci syscopyarea sysfillrect libahci sysimgblt fb_sys_fops mei_me intel_uncore drm libata mxm_wmi mei video wmi acpi_pad ip_tables
[ 116.845910][ T1574] ---[ end trace ae936c3b92a0bada ]---
[ 116.851474][ T1574] RIP: 0010:raise_barrier (drivers/md/raid10.c:928 (discriminator 3)) raid10
[ 116.857548][ T1574] Code: 07 83 c0 03 38 d0 7c 04 84 d2 75 63 8b 8b ec 00 00 00 85 c9 74 14 48 8d ab dc 00 00 00 48 89 ef e8 0d 8d f1 c2 e9 1e fd ff ff <0f> 0b 4c 89 e7 e8 be 2e 1e c1 e9 61 fe ff ff 48 8b 7c 24 08 e8 af
All code
========
0: 07 (bad)
1: 83 c0 03 add $0x3,%eax
4: 38 d0 cmp %dl,%al
6: 7c 04 jl 0xc
8: 84 d2 test %dl,%dl
a: 75 63 jne 0x6f
c: 8b 8b ec 00 00 00 mov 0xec(%rbx),%ecx
12: 85 c9 test %ecx,%ecx
14: 74 14 je 0x2a
16: 48 8d ab dc 00 00 00 lea 0xdc(%rbx),%rbp
1d: 48 89 ef mov %rbp,%rdi
20: e8 0d 8d f1 c2 callq 0xffffffffc2f18d32
25: e9 1e fd ff ff jmpq 0xfffffffffffffd48
2a:* 0f 0b ud2 <-- trapping instruction
2c: 4c 89 e7 mov %r12,%rdi
2f: e8 be 2e 1e c1 callq 0xffffffffc11e2ef2
34: e9 61 fe ff ff jmpq 0xfffffffffffffe9a
39: 48 8b 7c 24 08 mov 0x8(%rsp),%rdi
3e: e8 .byte 0xe8
3f: af scas %es:(%rdi),%eax
Code starting with the faulting instruction
===========================================
0: 0f 0b ud2
2: 4c 89 e7 mov %r12,%rdi
5: e8 be 2e 1e c1 callq 0xffffffffc11e2ec8
a: e9 61 fe ff ff jmpq 0xfffffffffffffe70
f: 48 8b 7c 24 08 mov 0x8(%rsp),%rdi
14: e8 .byte 0xe8
15: af scas %es:(%rdi),%eax
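(The "All code" / "Code starting with the faulting instruction" dumps above are in the format emitted by the kernel's scripts/decodecode helper. A minimal sketch of regenerating such a dump locally — assuming the raw oops text has been saved to a file named oops.txt, which is only an example name:
# run from a kernel source tree; reads the oops (including its "Code:" line) from stdin
./scripts/decodecode < oops.txt
)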
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove ~/.lkp and /lkp dir to run from a clean state.
---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation
Thanks,
Oliver Sang
[btrfs] 09d40ccb5b: xfstests.btrfs.048.fail
by kernel test robot
Greeting,
FYI, we noticed the following commit (built with gcc-9):
commit: 09d40ccb5bf450e6ffb2f0bdac7d3a1ff4eb1917 ("[PATCH 3/5] btrfs: Convert compression description strings into system flags.")
url: https://github.com/0day-ci/linux/commits/Li-Zhang/btrfs-Cleanup-BTRFS_INO...
base: https://git.kernel.org/cgit/linux/kernel/git/kdave/linux.git for-next
patch link: https://lore.kernel.org/linux-btrfs/1642323009-1953-1-git-send-email-zhan...
in testcase: xfstests
version: xfstests-x86_64-972d710-1_20220117
with following parameters:
disk: 6HDD
fs: btrfs
test: btrfs-group-04
ucode: 0x28
test-description: xfstests is a regression test suite for xfs and other filesystems.
test-url: git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
on test machine: 8 threads 1 sockets Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz with 8G memory
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <oliver.sang(a)intel.com>
2022-01-18 21:31:27 export TEST_DIR=/fs/sda1
2022-01-18 21:31:27 export TEST_DEV=/dev/sda1
2022-01-18 21:31:27 export FSTYP=btrfs
2022-01-18 21:31:27 export SCRATCH_MNT=/fs/scratch
2022-01-18 21:31:27 mkdir /fs/scratch -p
2022-01-18 21:31:27 export SCRATCH_DEV_POOL="/dev/sda2 /dev/sda3 /dev/sda4 /dev/sda5 /dev/sda6"
2022-01-18 21:31:27 sed "s:^:btrfs/:" //lkp/benchmarks/xfstests/tests/btrfs-group-04
2022-01-18 21:31:27 ./check btrfs/040 btrfs/041 btrfs/042 btrfs/043 btrfs/044 btrfs/045 btrfs/046 btrfs/047 btrfs/048 btrfs/049
FSTYP -- btrfs
PLATFORM -- Linux/x86_64 lkp-hsw-d01 5.16.0-rc8-00129-g09d40ccb5bf4 #1 SMP Wed Jan 19 04:47:51 CST 2022
MKFS_OPTIONS -- /dev/sda2
MOUNT_OPTIONS -- /dev/sda2 /fs/scratch
btrfs/040 2s
btrfs/041 2s
btrfs/042 4s
btrfs/043 1s
btrfs/044 2s
btrfs/045 2s
btrfs/046 9s
btrfs/047 1s
btrfs/048 - output mismatch (see /lkp/benchmarks/xfstests/results//btrfs/048.out.bad)
--- tests/btrfs/048.out 2022-01-17 16:52:03.000000000 +0000
+++ /lkp/benchmarks/xfstests/results//btrfs/048.out.bad 2022-01-18 21:31:55.278101334 +0000
@@ -35,17 +35,11 @@
***
compression=lzo
***
-compression=lzo
***
-compression=lzo
***
...
(Run 'diff -u /lkp/benchmarks/xfstests/tests/btrfs/048.out /lkp/benchmarks/xfstests/results//btrfs/048.out.bad' to see the entire diff)
btrfs/049 13s
Ran: btrfs/040 btrfs/041 btrfs/042 btrfs/043 btrfs/044 btrfs/045 btrfs/046 btrfs/047 btrfs/048 btrfs/049
Failures: btrfs/048
Failed 1 of 10 tests
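(To iterate on just the failing test rather than the whole group, it can be re-run by itself with the same exports shown above; a minimal sketch, assuming the xfstests checkout is at /lkp/benchmarks/xfstests as in this log:
# TEST_DEV, SCRATCH_DEV_POOL, etc. already exported as above
cd /lkp/benchmarks/xfstests
./check btrfs/048
)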
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove ~/.lkp and /lkp dir to run from a clean state.
---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation
Thanks,
Oliver Sang
[PCI/MSI] 9fb9eb4b59: BUG:KASAN:use-after-free_in__pci_enable_msi_range
by kernel test robot
Greeting,
FYI, we noticed the following commit (built with gcc-9):
commit: 9fb9eb4b59acc607e978288c96ac7efa917153d4 ("PCI/MSI: Let core code free MSI descriptors")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: ltp
version: ltp-x86_64-14c1f76-1_20211225
with following parameters:
test: numa
ucode: 0x42e
test-description: The LTP testsuite contains a collection of tools for testing the Linux kernel and related features.
test-url: http://linux-test-project.github.io/
on test machine: 48 threads 2 sockets Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz with 112G memory
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <oliver.sang(a)intel.com>
[ 17.860629][ T306] BUG: KASAN: use-after-free in __pci_enable_msi_range (drivers/pci/msi/msi.h:36 drivers/pci/msi/msi.c:475 drivers/pci/msi/msi.c:907)
[ 17.860629][ T306] Read of size 2 at addr ffff888f49b2ee5c by task kworker/0:2/306
[ 17.860629][ T306]
[ 17.860629][ T306] CPU: 0 PID: 306 Comm: kworker/0:2 Not tainted 5.16.0-rc5-00073-g9fb9eb4b59ac #1
[ 17.860629][ T306] Hardware name: Intel Corporation S2600WP/S2600WP, BIOS SE5C600.86B.02.02.0002.122320131210 12/23/2013
[ 17.860629][ T306] Workqueue: events work_for_cpu_fn
[ 17.860629][ T306] Call Trace:
[ 17.860629][ T306] <TASK>
[ 17.860629][ T306] dump_stack_lvl (lib/dump_stack.c:107)
[ 17.860629][ T306] print_address_description+0x21/0x140
[ 17.860629][ T306] ? __pci_enable_msi_range (drivers/pci/msi/msi.h:36 drivers/pci/msi/msi.c:475 drivers/pci/msi/msi.c:907)
[ 17.860629][ T306] kasan_report.cold (mm/kasan/report.c:434 mm/kasan/report.c:450)
[ 17.860629][ T306] ? __pci_enable_msi_range (drivers/pci/msi/msi.h:36 drivers/pci/msi/msi.c:475 drivers/pci/msi/msi.c:907)
[ 17.860629][ T306] __pci_enable_msi_range (drivers/pci/msi/msi.h:36 drivers/pci/msi/msi.c:475 drivers/pci/msi/msi.c:907)
[ 17.860629][ T306] pci_alloc_irq_vectors_affinity (drivers/pci/msi/msi.c:1031)
[ 17.860629][ T306] ? pci_enable_msix_range (drivers/pci/msi/msi.c:1010)
[ 17.860629][ T306] ? pci_address_to_pio+0x40/0x40
[ 17.860629][ T306] pcie_port_device_register (include/linux/pci.h:1882 drivers/pci/pcie/portdrv_core.c:107 drivers/pci/pcie/portdrv_core.c:178 drivers/pci/pcie/portdrv_core.c:353)
[ 17.860629][ T306] ? pcie_port_service_unregister (drivers/pci/pcie/portdrv_core.c:316)
[ 17.860629][ T306] ? dequeue_entity (kernel/sched/fair.c:4379)
[ 17.860629][ T306] ? _raw_read_unlock_irqrestore (kernel/locking/spinlock.c:161)
[ 17.860629][ T306] ? __switch_to (arch/x86/include/asm/bitops.h:55 include/asm-generic/bitops/instrumented-atomic.h:29 include/linux/thread_info.h:89 arch/x86/include/asm/fpu/sched.h:65 arch/x86/kernel/process_64.c:622)
[ 17.860629][ T306] ? pcie_portdrv_remove (drivers/pci/pcie/portdrv_pci.c:103)
[ 17.860629][ T306] pcie_portdrv_probe (drivers/pci/pcie/portdrv_pci.c:117)
[ 17.860629][ T306] ? pcie_portdrv_remove (drivers/pci/pcie/portdrv_pci.c:103)
[ 17.860629][ T306] local_pci_probe (drivers/pci/pci-driver.c:323)
[ 17.860629][ T306] ? pci_device_shutdown (drivers/pci/pci-driver.c:305)
[ 17.860629][ T306] work_for_cpu_fn (kernel/workqueue.c:5194)
[ 17.860629][ T306] process_one_work (arch/x86/include/asm/jump_label.h:27 include/linux/jump_label.h:212 include/trace/events/workqueue.h:108 kernel/workqueue.c:2303)
[ 17.860629][ T306] worker_thread (include/linux/list.h:284 kernel/workqueue.c:2358 kernel/workqueue.c:2450)
[ 17.860629][ T306] ? __kthread_parkme (arch/x86/include/asm/bitops.h:207 (discriminator 4) include/asm-generic/bitops/instrumented-non-atomic.h:135 (discriminator 4) kernel/kthread.c:249 (discriminator 4))
[ 17.860629][ T306] ? schedule (arch/x86/include/asm/bitops.h:207 (discriminator 1) include/asm-generic/bitops/instrumented-non-atomic.h:135 (discriminator 1) include/linux/thread_info.h:118 (discriminator 1) include/linux/sched.h:2120 (discriminator 1) kernel/sched/core.c:6328 (discriminator 1))
[ 17.860629][ T306] ? process_one_work (kernel/workqueue.c:2388)
[ 17.860629][ T306] ? process_one_work (kernel/workqueue.c:2388)
[ 17.860629][ T306] kthread (kernel/kthread.c:327)
[ 17.860629][ T306] ? set_kthread_struct (kernel/kthread.c:272)
[ 17.860629][ T306] ret_from_fork (arch/x86/entry/entry_64.S:301)
[ 17.860629][ T306] </TASK>
[ 17.860629][ T306]
[ 17.860629][ T306] Allocated by task 306:
[ 17.860629][ T306] kasan_save_stack (mm/kasan/common.c:38)
[ 17.860629][ T306] __kasan_kmalloc (mm/kasan/common.c:46 mm/kasan/common.c:434 mm/kasan/common.c:513 mm/kasan/common.c:522)
[ 17.860629][ T306] alloc_msi_entry (include/linux/slab.h:590 include/linux/slab.h:724 kernel/irq/msi.c:38)
[ 17.860629][ T306] msi_add_msi_desc (kernel/irq/msi.c:76)
[ 17.860629][ T306] msi_setup_msi_desc (drivers/pci/msi/msi.c:367)
[ 17.860629][ T306] __pci_enable_msi_range (drivers/pci/msi/msi.c:449 drivers/pci/msi/msi.c:907)
[ 17.860629][ T306] pci_alloc_irq_vectors_affinity (drivers/pci/msi/msi.c:1031)
[ 17.860629][ T306] pcie_port_device_register (include/linux/pci.h:1882 drivers/pci/pcie/portdrv_core.c:107 drivers/pci/pcie/portdrv_core.c:178 drivers/pci/pcie/portdrv_core.c:353)
[ 17.860629][ T306] pcie_portdrv_probe (drivers/pci/pcie/portdrv_pci.c:117)
[ 17.860629][ T306] local_pci_probe (drivers/pci/pci-driver.c:323)
[ 17.860629][ T306] work_for_cpu_fn (kernel/workqueue.c:5194)
[ 17.860629][ T306] process_one_work (arch/x86/include/asm/jump_label.h:27 include/linux/jump_label.h:212 include/trace/events/workqueue.h:108 kernel/workqueue.c:2303)
[ 17.860629][ T306] worker_thread (include/linux/list.h:284 kernel/workqueue.c:2358 kernel/workqueue.c:2450)
[ 17.860629][ T306] kthread (kernel/kthread.c:327)
[ 17.860629][ T306] ret_from_fork (arch/x86/entry/entry_64.S:301)
[ 17.860629][ T306]
[ 17.860629][ T306] Freed by task 306:
[ 17.860629][ T306] kasan_save_stack (mm/kasan/common.c:38)
[ 17.860629][ T306] kasan_set_track (mm/kasan/common.c:46)
[ 17.860629][ T306] kasan_set_free_info (mm/kasan/generic.c:372)
[ 17.860629][ T306] __kasan_slab_free (mm/kasan/common.c:368 mm/kasan/common.c:328 mm/kasan/common.c:374)
[ 17.860629][ T306] kfree (mm/slub.c:1749 mm/slub.c:3513 mm/slub.c:4561)
[ 17.860629][ T306] msi_free_msi_descs_range (kernel/irq/msi.c:136 (discriminator 2))
[ 17.860629][ T306] msi_domain_alloc_irqs_descs_locked (kernel/irq/msi.c:958)
[ 17.860629][ T306] __pci_enable_msi_range (drivers/pci/msi/msi.c:459 drivers/pci/msi/msi.c:907)
[ 17.860629][ T306] pci_alloc_irq_vectors_affinity (drivers/pci/msi/msi.c:1031)
[ 17.860629][ T306] pcie_port_device_register (include/linux/pci.h:1882 drivers/pci/pcie/portdrv_core.c:107 drivers/pci/pcie/portdrv_core.c:178 drivers/pci/pcie/portdrv_core.c:353)
[ 17.860629][ T306] pcie_portdrv_probe (drivers/pci/pcie/portdrv_pci.c:117)
[ 17.860629][ T306] local_pci_probe (drivers/pci/pci-driver.c:323)
[ 17.860629][ T306] work_for_cpu_fn (kernel/workqueue.c:5194)
[ 17.860629][ T306] process_one_work (arch/x86/include/asm/jump_label.h:27 include/linux/jump_label.h:212 include/trace/events/workqueue.h:108 kernel/workqueue.c:2303)
[ 17.860629][ T306] worker_thread (include/linux/list.h:284 kernel/workqueue.c:2358 kernel/workqueue.c:2450)
[ 17.860629][ T306] kthread (kernel/kthread.c:327)
[ 17.860629][ T306] ret_from_fork (arch/x86/entry/entry_64.S:301)
[ 17.860629][ T306]
[ 17.860629][ T306] The buggy address belongs to the object at ffff888f49b2ee00
[ 17.860629][ T306] which belongs to the cache kmalloc-128 of size 128
[ 17.860629][ T306] The buggy address is located 92 bytes inside of
[ 17.860629][ T306] 128-byte region [ffff888f49b2ee00, ffff888f49b2ee80)
[ 17.860629][ T306] The buggy address belongs to the page:
[ 17.860629][ T306] page:000000000287bdee refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0xf49b2e
[ 17.860629][ T306] head:000000000287bdee order:1 compound_mapcount:0
[ 17.860629][ T306] flags: 0x57ffffc0010200(slab|head|node=1|zone=2|lastcpupid=0x1fffff)
[ 17.860629][ T306] raw: 0057ffffc0010200 0000000000000000 dead000000000122 ffff88810004c8c0
[ 17.860629][ T306] raw: 0000000000000000 0000000080200020 00000001ffffffff 0000000000000000
[ 17.860629][ T306] page dumped because: kasan: bad access detected
[ 17.860629][ T306]
[ 17.860629][ T306] Memory state around the buggy address:
[ 17.860629][ T306] ffff888f49b2ed00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 17.860629][ T306] ffff888f49b2ed80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 17.860629][ T306] >ffff888f49b2ee00: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[ 17.860629][ T306] ^
[ 17.860629][ T306] ffff888f49b2ee80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 17.860629][ T306] ffff888f49b2ef00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 17.860629][ T306] ==================================================================
[ 17.860629][ T306] Disabling lock debugging due to kernel taint
[ 18.438903][ T306] pcieport 0000:00:01.0: PME: Signaling with IRQ 25
[ 18.448863][ T306] pcieport 0000:00:02.0: PME: Signaling with IRQ 26
[ 18.458334][ T306] IOAPIC[0]: Preconfigured routing entry (0-16 -> IRQ 16 Level:1 ActiveLow:1)
[ 18.468383][ T306] pcieport 0000:00:03.0: PME: Signaling with IRQ 27
[ 18.478073][ T306] pcieport 0000:00:11.0: PME: Signaling with IRQ 28
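(The report above comes from a KASAN-instrumented build. A minimal sketch of enabling the relevant options with the kernel's scripts/config helper before building — an assumption about a typical local build flow, not the exact 0-day configuration:
# run from the kernel source tree against an existing .config
./scripts/config -e KASAN -e KASAN_GENERIC
make olddefconfig
)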
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove ~/.lkp and /lkp dir to run from a clean state.
---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation
Thanks,
Oliver Sang
[scsi] 96b6b4c6b1: xfstests.generic.351.fail
by kernel test robot
Greeting,
FYI, we noticed the following commit (built with gcc-9):
commit: 96b6b4c6b150e8cc17224c6af0c6d3c890d2ccc5 ("scsi: sd: Move WRITE_ZEROES configuration to a separate function")
https://git.kernel.org/cgit/linux/kernel/git/mkp/linux.git 5.18/discovery
in testcase: xfstests
version: xfstests-x86_64-972d710-1_20220117
with following parameters:
disk: 4HDD
fs: xfs
test: generic-group-17
ucode: 0xe2
test-description: xfstests is a regression test suite for xfs and other filesystems.
test-url: git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
on test machine: 4 threads Intel(R) Xeon(R) CPU E3-1225 v5 @ 3.30GHz with 16G memory
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <oliver.sang(a)intel.com>
2022-01-18 12:58:35 export TEST_DIR=/fs/sda1
2022-01-18 12:58:35 export TEST_DEV=/dev/sda1
2022-01-18 12:58:35 export FSTYP=xfs
2022-01-18 12:58:35 export SCRATCH_MNT=/fs/scratch
2022-01-18 12:58:35 mkdir /fs/scratch -p
2022-01-18 12:58:35 export SCRATCH_DEV=/dev/sda4
2022-01-18 12:58:35 export SCRATCH_LOGDEV=/dev/sda2
2022-01-18 12:58:35 sed "s:^:generic/:" //lkp/benchmarks/xfstests/tests/generic-group-17
2022-01-18 12:58:35 ./check generic/340 generic/341 generic/342 generic/343 generic/344 generic/345 generic/346 generic/347 generic/348 generic/349 generic/350 generic/351 generic/352 generic/353 generic/354 generic/355 generic/356 generic/357 generic/358 generic/359
FSTYP -- xfs (debug)
PLATFORM -- Linux/x86_64 lkp-skl-d06 5.16.0-rc1-00109-g96b6b4c6b150 #1 SMP Tue Jan 18 20:17:16 CST 2022
MKFS_OPTIONS -- -f /dev/sda4
MOUNT_OPTIONS -- /dev/sda4 /fs/scratch
generic/340 5s
generic/341 4s
generic/342 5s
generic/343 4s
generic/344 6s
generic/345 6s
generic/346 5s
generic/347 104s
generic/348 4s
generic/349 2s
generic/350 1s
generic/351 - output mismatch (see /lkp/benchmarks/xfstests/results//generic/351.out.bad)
--- tests/generic/351.out 2022-01-17 16:52:03.000000000 +0000
+++ /lkp/benchmarks/xfstests/results//generic/351.out.bad 2022-01-18 13:01:07.416336311 +0000
@@ -25,7 +25,7 @@
Destroy device
Create w/o unmap or writesame and format
Zero punch, no fallback available
-fallocate: Operation not supported
+fallocate: Remote I/O error
Zero range, write fallback
Check contents
...
(Run 'diff -u /lkp/benchmarks/xfstests/tests/generic/351.out /lkp/benchmarks/xfstests/results//generic/351.out.bad' to see the entire diff)
generic/352 [not run] Reflink not supported by scratch filesystem type: xfs
generic/353 [not run] Reflink not supported by scratch filesystem type: xfs
generic/354 6s
generic/355 2s
generic/356 [not run] Reflink not supported by scratch filesystem type: xfs
generic/357 [not run] Reflink not supported by scratch filesystem type: xfs
generic/358 [not run] Reflink not supported by scratch filesystem type: xfs
generic/359 [not run] Reflink not supported by scratch filesystem type: xfs
Ran: generic/340 generic/341 generic/342 generic/343 generic/344 generic/345 generic/346 generic/347 generic/348 generic/349 generic/350 generic/351 generic/352 generic/353 generic/354 generic/355 generic/356 generic/357 generic/358 generic/359
Not run: generic/352 generic/353 generic/356 generic/357 generic/358 generic/359
Failures: generic/351
Failed 1 of 20 tests
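(The mismatch above is in the zero-range step: with no fallback available the test expects the fallocate call to fail with "Operation not supported", but it now fails with "Remote I/O error". A minimal sketch of that kind of zero-range request from userspace, using util-linux fallocate(1) against a hypothetical device node standing in for the scsi_debug device the test sets up — an illustration only, not the test's actual code:
# /dev/sdX is a placeholder for the scsi_debug-backed device
fallocate --zero-range --offset 0 --length 65536 /dev/sdX
)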
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove ~/.lkp and /lkp dir to run from a clean state.
---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation
Thanks,
Oliver Sang
[genirq/msi] [confidence: ] cd6cf06590: stack_segment:#[##]
by kernel test robot
(
please note that we previously reported:
"[genirq/msi] cf24208bdb: RIP:_raw_spin_lock_irqsave"
on https://lists.01.org/hyperkitty/list/[email protected]/thread/E63WJZCXP327...
when this commit was in:
commit: cf24208bdbd0a9e0238c2514a10c49a610e26ee5 ("genirq/msi: Convert storage to xarray")
https://git.kernel.org/cgit/linux/kernel/git/tglx/devel.git msi
we report it again here as a reminder that the issue still exists on mainline
)
Greeting,
FYI, we noticed the following commit (built with gcc-9):
commit: cd6cf06590b9792340dceaa285138777f3cc4d90 ("genirq/msi: Convert storage to xarray")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: ltp
version: ltp-x86_64-14c1f76-1_20211218
with following parameters:
test: numa
ucode: 0x42e
test-description: The LTP testsuite contains a collection of tools for testing the Linux kernel and related features.
test-url: http://linux-test-project.github.io/
on test machine: 48 threads 2 sockets Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz with 112G memory
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <oliver.sang(a)intel.com>
(
Note: the parent commit's dmesg also contains a similar splat:
[ 15.367201][ T306] ==================================================================
[ 15.368136][ T306] BUG: KASAN: use-after-free in __pci_enable_msi_range+0x618/0x640
[ 15.368136][ T306] Read of size 2 at addr ffff888109acb464 by task kworker/0:2/306
but it is not followed by
"stack segment: 0000 [#1] "
or
"RIP: 0010:_raw_spin_lock_irqsave"
and the ltp tests can continue
)
[ 20.017068][ T306] BUG: KASAN: use-after-free in __pci_enable_msi_range (drivers/pci/msi/msi.h:36 drivers/pci/msi/msi.c:474 drivers/pci/msi/msi.c:905)
[ 20.017068][ T306] Read of size 2 at addr ffff888f47a57854 by task kworker/0:2/306
[ 20.017068][ T306]
[ 20.017068][ T306] CPU: 0 PID: 306 Comm: kworker/0:2 Not tainted 5.16.0-rc5-00095-gcd6cf06590b9 #1
[ 20.017068][ T306] Hardware name: Intel Corporation S2600WP/S2600WP, BIOS SE5C600.86B.02.02.0002.122320131210 12/23/2013
[ 20.017068][ T306] Workqueue: events work_for_cpu_fn
[ 20.017068][ T306] Call Trace:
[ 20.017068][ T306] <TASK>
[ 20.017068][ T306] dump_stack_lvl (lib/dump_stack.c:107)
[ 20.017068][ T306] print_address_description+0x21/0x140
[ 20.017068][ T306] ? __pci_enable_msi_range (drivers/pci/msi/msi.h:36 drivers/pci/msi/msi.c:474 drivers/pci/msi/msi.c:905)
[ 20.017068][ T306] kasan_report.cold (mm/kasan/report.c:434 mm/kasan/report.c:450)
[ 20.017068][ T306] ? msi_domain_alloc_irqs_descs_locked (kernel/irq/msi.c:938)
[ 20.017068][ T306] ? __pci_enable_msi_range (drivers/pci/msi/msi.h:36 drivers/pci/msi/msi.c:474 drivers/pci/msi/msi.c:905)
[ 20.017068][ T306] __pci_enable_msi_range (drivers/pci/msi/msi.h:36 drivers/pci/msi/msi.c:474 drivers/pci/msi/msi.c:905)
[ 20.017068][ T306] pci_alloc_irq_vectors_affinity (drivers/pci/msi/msi.c:1029)
[ 20.017068][ T306] ? pci_enable_msix_range (drivers/pci/msi/msi.c:1008)
[ 20.017068][ T306] ? pci_address_to_pio+0x40/0x40
[ 20.017068][ T306] pcie_port_device_register (include/linux/pci.h:1882 drivers/pci/pcie/portdrv_core.c:107 drivers/pci/pcie/portdrv_core.c:178 drivers/pci/pcie/portdrv_core.c:353)
[ 20.017068][ T306] ? pcie_port_service_unregister (drivers/pci/pcie/portdrv_core.c:316)
[ 20.017068][ T306] ? dequeue_entity (kernel/sched/fair.c:4379)
[ 20.017068][ T306] ? _raw_read_unlock_irqrestore (kernel/locking/spinlock.c:161)
[ 20.017068][ T306] ? __switch_to (arch/x86/include/asm/bitops.h:55 include/asm-generic/bitops/instrumented-atomic.h:29 include/linux/thread_info.h:89 arch/x86/include/asm/fpu/sched.h:65 arch/x86/kernel/process_64.c:622)
[ 20.017068][ T306] ? pcie_portdrv_remove (drivers/pci/pcie/portdrv_pci.c:103)
[ 20.017068][ T306] pcie_portdrv_probe (drivers/pci/pcie/portdrv_pci.c:117)
[ 20.017068][ T306] ? pcie_portdrv_remove (drivers/pci/pcie/portdrv_pci.c:103)
[ 20.017068][ T306] local_pci_probe (drivers/pci/pci-driver.c:323)
[ 20.017068][ T306] ? pci_device_shutdown (drivers/pci/pci-driver.c:305)
[ 20.017068][ T306] work_for_cpu_fn (kernel/workqueue.c:5194)
[ 20.017068][ T306] process_one_work (arch/x86/include/asm/jump_label.h:27 include/linux/jump_label.h:212 include/trace/events/workqueue.h:108 kernel/workqueue.c:2303)
[ 20.017068][ T306] worker_thread (include/linux/list.h:284 kernel/workqueue.c:2358 kernel/workqueue.c:2450)
[ 20.017068][ T306] ? __kthread_parkme (arch/x86/include/asm/bitops.h:207 (discriminator 4) include/asm-generic/bitops/instrumented-non-atomic.h:135 (discriminator 4) kernel/kthread.c:249 (discriminator 4))
[ 20.017068][ T306] ? schedule (arch/x86/include/asm/bitops.h:207 (discriminator 1) include/asm-generic/bitops/instrumented-non-atomic.h:135 (discriminator 1) include/linux/thread_info.h:118 (discriminator 1) include/linux/sched.h:2120 (discriminator 1) kernel/sched/core.c:6328 (discriminator 1))
[ 20.017068][ T306] ? process_one_work (kernel/workqueue.c:2388)
[ 20.017068][ T306] ? process_one_work (kernel/workqueue.c:2388)
[ 20.017068][ T306] kthread (kernel/kthread.c:327)
[ 20.017068][ T306] ? set_kthread_struct (kernel/kthread.c:272)
[ 20.017068][ T306] ret_from_fork (arch/x86/entry/entry_64.S:301)
[ 20.017068][ T306] </TASK>
[ 20.017068][ T306]
[ 20.017068][ T306] Allocated by task 306:
[ 20.017068][ T306] kasan_save_stack (mm/kasan/common.c:38)
[ 20.017068][ T306] __kasan_kmalloc (mm/kasan/common.c:46 mm/kasan/common.c:434 mm/kasan/common.c:513 mm/kasan/common.c:522)
[ 20.017068][ T306] msi_add_msi_desc (include/linux/slab.h:590 include/linux/slab.h:724 kernel/irq/msi.c:38 kernel/irq/msi.c:85)
[ 20.017068][ T306] msi_setup_msi_desc (drivers/pci/msi/msi.c:366)
[ 20.017068][ T306] __pci_enable_msi_range (drivers/pci/msi/msi.c:448 drivers/pci/msi/msi.c:905)
[ 20.017068][ T306] pci_alloc_irq_vectors_affinity (drivers/pci/msi/msi.c:1029)
[ 20.017068][ T306] pcie_port_device_register (include/linux/pci.h:1882 drivers/pci/pcie/portdrv_core.c:107 drivers/pci/pcie/portdrv_core.c:178 drivers/pci/pcie/portdrv_core.c:353)
[ 20.017068][ T306] pcie_portdrv_probe (drivers/pci/pcie/portdrv_pci.c:117)
[ 20.017068][ T306] local_pci_probe (drivers/pci/pci-driver.c:323)
[ 20.017068][ T306] work_for_cpu_fn (kernel/workqueue.c:5194)
[ 20.017068][ T306] process_one_work (arch/x86/include/asm/jump_label.h:27 include/linux/jump_label.h:212 include/trace/events/workqueue.h:108 kernel/workqueue.c:2303)
[ 20.017068][ T306] worker_thread (include/linux/list.h:284 kernel/workqueue.c:2358 kernel/workqueue.c:2450)
[ 20.017068][ T306] kthread (kernel/kthread.c:327)
[ 20.017068][ T306] ret_from_fork (arch/x86/entry/entry_64.S:301)
[ 20.017068][ T306]
[ 20.017068][ T306] Freed by task 306:
[ 20.017068][ T306] kasan_save_stack (mm/kasan/common.c:38)
[ 20.017068][ T306] kasan_set_track (mm/kasan/common.c:46)
[ 20.017068][ T306] kasan_set_free_info (mm/kasan/generic.c:372)
[ 20.017068][ T306] __kasan_slab_free (mm/kasan/common.c:368 mm/kasan/common.c:328 mm/kasan/common.c:374)
[ 20.017068][ T306] kfree (mm/slub.c:1749 mm/slub.c:3513 mm/slub.c:4561)
[ 20.017068][ T306] msi_free_msi_descs_range (kernel/irq/msi.c:59 kernel/irq/msi.c:160)
[ 20.017068][ T306] msi_domain_alloc_irqs_descs_locked (kernel/irq/msi.c:940)
[ 20.017068][ T306] __pci_enable_msi_range (drivers/pci/msi/msi.c:458 drivers/pci/msi/msi.c:905)
[ 20.017068][ T306] pci_alloc_irq_vectors_affinity (drivers/pci/msi/msi.c:1029)
[ 20.017068][ T306] pcie_port_device_register (include/linux/pci.h:1882 drivers/pci/pcie/portdrv_core.c:107 drivers/pci/pcie/portdrv_core.c:178 drivers/pci/pcie/portdrv_core.c:353)
[ 20.017068][ T306] pcie_portdrv_probe (drivers/pci/pcie/portdrv_pci.c:117)
[ 20.017068][ T306] local_pci_probe (drivers/pci/pci-driver.c:323)
[ 20.017068][ T306] work_for_cpu_fn (kernel/workqueue.c:5194)
[ 20.017068][ T306] process_one_work (arch/x86/include/asm/jump_label.h:27 include/linux/jump_label.h:212 include/trace/events/workqueue.h:108 kernel/workqueue.c:2303)
[ 20.017068][ T306] worker_thread (include/linux/list.h:284 kernel/workqueue.c:2358 kernel/workqueue.c:2450)
[ 20.017068][ T306] kthread (kernel/kthread.c:327)
[ 20.017068][ T306] ret_from_fork (arch/x86/entry/entry_64.S:301)
[ 20.017068][ T306]
[ 20.017068][ T306] The buggy address belongs to the object at ffff888f47a57800
[ 20.017068][ T306] which belongs to the cache kmalloc-128 of size 128
[ 20.017068][ T306] The buggy address is located 84 bytes inside of
[ 20.017068][ T306] 128-byte region [ffff888f47a57800, ffff888f47a57880)
[ 20.017068][ T306] The buggy address belongs to the page:
[ 20.017068][ T306] page:00000000821cb941 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0xf47a56
[ 20.017068][ T306] head:00000000821cb941 order:1 compound_mapcount:0
[ 20.017068][ T306] flags: 0x57ffffc0010200(slab|head|node=1|zone=2|lastcpupid=0x1fffff)
[ 20.017068][ T306] raw: 0057ffffc0010200 0000000000000000 dead000000000122 ffff88810004c8c0
[ 20.017068][ T306] raw: 0000000000000000 0000000080200020 00000001ffffffff 0000000000000000
[ 20.017068][ T306] page dumped because: kasan: bad access detected
[ 20.017068][ T306]
[ 20.017068][ T306] Memory state around the buggy address:
[ 20.017068][ T306] ffff888f47a57700: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
[ 20.017068][ T306] ffff888f47a57780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 20.017068][ T306] >ffff888f47a57800: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[ 20.017068][ T306] ^
[ 20.017068][ T306] ffff888f47a57880: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 20.017068][ T306] ffff888f47a57900: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 20.017068][ T306] ==================================================================
[ 20.017068][ T306] Disabling lock debugging due to kernel taint
[ 20.616644][ T306] stack segment: 0000 [#1] SMP KASAN PTI
[ 20.617620][ T306] CPU: 0 PID: 306 Comm: kworker/0:2 Tainted: G B 5.16.0-rc5-00095-gcd6cf06590b9 #1
[ 20.617620][ T306] Hardware name: Intel Corporation S2600WP/S2600WP, BIOS SE5C600.86B.02.02.0002.122320131210 12/23/2013
[ 20.617620][ T306] Workqueue: events work_for_cpu_fn
[ 20.617620][ T306] RIP: 0010:_raw_spin_lock_irqsave (arch/x86/include/asm/atomic.h:202 include/linux/atomic/atomic-instrumented.h:513 include/asm-generic/qspinlock.h:82 include/linux/spinlock.h:185 include/linux/spinlock_api_smp.h:111 kernel/locking/spinlock.c:162)
[ 20.617620][ T306] Code: be 04 00 00 00 c7 44 24 20 00 00 00 00 e8 88 c0 2c fe be 04 00 00 00 48 8d 7c 24 20 e8 79 c0 2c fe ba 01 00 00 00 8b 44 24 20 <f0> 0f b1 55 00 75 2e 48 b8 00 00 00 00 00 fc ff df 48 c7 04 03 00
All code
========
0: be 04 00 00 00 mov $0x4,%esi
5: c7 44 24 20 00 00 00 movl $0x0,0x20(%rsp)
c: 00
d: e8 88 c0 2c fe callq 0xfffffffffe2cc09a
12: be 04 00 00 00 mov $0x4,%esi
17: 48 8d 7c 24 20 lea 0x20(%rsp),%rdi
1c: e8 79 c0 2c fe callq 0xfffffffffe2cc09a
21: ba 01 00 00 00 mov $0x1,%edx
26: 8b 44 24 20 mov 0x20(%rsp),%eax
2a:* f0 0f b1 55 00 lock cmpxchg %edx,0x0(%rbp) <-- trapping instruction
2f: 75 2e jne 0x5f
31: 48 b8 00 00 00 00 00 movabs $0xdffffc0000000000,%rax
38: fc ff df
3b: 48 rex.W
3c: c7 .byte 0xc7
3d: 04 03 add $0x3,%al
...
Code starting with the faulting instruction
===========================================
0: f0 0f b1 55 00 lock cmpxchg %edx,0x0(%rbp)
5: 75 2e jne 0x35
7: 48 b8 00 00 00 00 00 movabs $0xdffffc0000000000,%rax
e: fc ff df
11: 48 rex.W
12: c7 .byte 0xc7
13: 04 03 add $0x3,%al
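A note on the report above: the Allocated-by / Freed-by / read stacks describe a textbook error-path use-after-free. The MSI descriptor is allocated via msi_setup_msi_desc(), freed by msi_free_msi_descs_range() from inside msi_domain_alloc_irqs_descs_locked() (apparently on its cleanup path), and then read again in __pci_enable_msi_range() after that call returns. The following is a purely illustrative user-space sketch of that pattern, with hypothetical names; it is deliberately not the real drivers/pci/msi code:

#include <stdio.h>
#include <stdlib.h>

struct msi_desc_sketch {
        int nvec;                       /* field read after the object is freed */
};

/* Allocate and initialise a descriptor ("Allocated by task 306"). */
static struct msi_desc_sketch *setup_desc(int nvec)
{
        struct msi_desc_sketch *d = malloc(sizeof(*d));

        if (d)
                d->nvec = nvec;
        return d;
}

/* Pretend the IRQ allocation fails and tears the descriptor down on the
 * error path ("Freed by task 306"). */
static int alloc_irqs(struct msi_desc_sketch *d)
{
        free(d);
        return -1;
}

int main(void)
{
        struct msi_desc_sketch *d = setup_desc(4);

        if (!d)
                return 1;

        if (alloc_irqs(d) < 0) {
                /* BUG: the callee already freed 'd'; this read is the
                 * use-after-free that KASAN flags above. */
                printf("allocation failed, nvec was %d\n", d->nvec);
        }
        return 0;
}

Built with -fsanitize=address, the final printf() is reported as a heap-use-after-free with matching allocation and free stacks, which mirrors the KASAN output above.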
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp directories to run from a clean state.
---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation
Thanks,
Oliver Sang
[security] d3b04a4398: WARNING:at_crypto/kdf_sp800108.c:#crypto_kdf108_init
by kernel test robot
(Please note that we previously reported this commit as
"[security] d3b04a4398: WARNING:at_crypto/kdf_sp800108.c:#crypto_kdf108_init"
while it was on linux-next/master:
https://lists.01.org/hyperkitty/list/[email protected]/thread/5B23YXI7UOBE...
where a solution was discussed.
Since the commit is now in mainline and the issue still exists, we are reporting it
again as a reminder.)
Greetings,
FYI, we noticed the following commit (built with clang-14):
commit: d3b04a4398fe8022c9ca4b5ac6ab08059334b180 ("security: DH - use KDF implementation from crypto API")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: boot
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 16G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <oliver.sang(a)intel.com>
[ 42.753085][ T1] WARNING: CPU: 1 PID: 1 at crypto/kdf_sp800108.c:138 crypto_kdf108_init (crypto/kdf_sp800108.c:136)
[ 42.754665][ T1] Modules linked in:
[ 42.755366][ T1] CPU: 1 PID: 1 Comm: swapper/0 Not tainted 5.16.0-rc1-00049-gd3b04a4398fe #2
[ 42.756752][ T1] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 42.758199][ T1] RIP: 0010:crypto_kdf108_init (crypto/kdf_sp800108.c:136)
[ 42.759125][ T1] Code: 89 de 48 83 c6 08 48 89 df e8 18 02 6d fc 4c 89 f7 e8 50 4b fd fb 85 ed 74 41 90 48 c7 c7 a0 9d 3b a8 89 ee e8 fd 15 b5 fb 90 <0f> 0b 90 90 89 e8 5b 41 5e 41 5f 5d c3 48 c7 c7 a0 9e 3b a8 48 c7
All code
========
0: 89 de mov %ebx,%esi
2: 48 83 c6 08 add $0x8,%rsi
6: 48 89 df mov %rbx,%rdi
9: e8 18 02 6d fc callq 0xfffffffffc6d0226
e: 4c 89 f7 mov %r14,%rdi
11: e8 50 4b fd fb callq 0xfffffffffbfd4b66
16: 85 ed test %ebp,%ebp
18: 74 41 je 0x5b
1a: 90 nop
1b: 48 c7 c7 a0 9d 3b a8 mov $0xffffffffa83b9da0,%rdi
22: 89 ee mov %ebp,%esi
24: e8 fd 15 b5 fb callq 0xfffffffffbb51626
29: 90 nop
2a:* 0f 0b ud2 <-- trapping instruction
2c: 90 nop
2d: 90 nop
2e: 89 e8 mov %ebp,%eax
30: 5b pop %rbx
31: 41 5e pop %r14
33: 41 5f pop %r15
35: 5d pop %rbp
36: c3 retq
37: 48 c7 c7 a0 9e 3b a8 mov $0xffffffffa83b9ea0,%rdi
3e: 48 rex.W
3f: c7 .byte 0xc7
Code starting with the faulting instruction
===========================================
0: 0f 0b ud2
2: 90 nop
3: 90 nop
4: 89 e8 mov %ebp,%eax
6: 5b pop %rbx
7: 41 5e pop %r14
9: 41 5f pop %r15
b: 5d pop %rbp
c: c3 retq
d: 48 c7 c7 a0 9e 3b a8 mov $0xffffffffa83b9ea0,%rdi
14: 48 rex.W
15: c7 .byte 0xc7
[ 42.762103][ T1] RSP: 0000:ffffc9000001fce8 EFLAGS: 00010286
[ 42.763114][ T1] RAX: 000000000000003a RBX: 0000000000000001 RCX: ffffffffa8cf6c80
[ 42.764320][ T1] RDX: 0000000000000001 RSI: 0000000000000008 RDI: ffffc9000001fa68
[ 42.765555][ T1] RBP: 00000000fffffff4 R08: dffffc0000000000 R09: fffff52000003f4e
[ 42.766791][ T1] R10: 0000000000000000 R11: dffff12000003f4f R12: ffffffffa9f515c4
[ 42.768018][ T1] R13: 0000000080000000 R14: ffff88811cc81810 R15: dffffc0000000000
[ 42.769296][ T1] FS: 0000000000000000(0000) GS:ffff8883af100000(0000) knlGS:0000000000000000
[ 42.770717][ T1] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 42.771785][ T1] CR2: 0000000000000000 CR3: 000000027a416000 CR4: 00000000000406a0
[ 42.773108][ T1] Call Trace:
[ 42.773668][ T1] <TASK>
[ 42.774210][ T1] do_one_initcall (init/main.c:1297)
[ 42.775218][ T1] ? ca_keys_setup (crypto/kdf_sp800108.c:127)
[ 42.775989][ T1] ? rcu_read_lock_sched_held (include/linux/lockdep.h:? kernel/rcu/update.c:125)
[ 42.776935][ T1] do_initcall_level (init/main.c:1369)
[ 42.777736][ T1] do_initcalls (init/main.c:1383)
[ 42.778474][ T1] kernel_init_freeable (init/main.c:1614)
[ 42.779311][ T1] ? rest_init (init/main.c:1491)
[ 42.780126][ T1] kernel_init (init/main.c:1501)
[ 42.780816][ T1] ? rest_init (init/main.c:1491)
[ 42.781551][ T1] ret_from_fork (??:?)
[ 42.782285][ T1] </TASK>
[ 42.782774][ T1] irq event stamp: 377787
[ 42.783453][ T1] hardirqs last enabled at (377795): __up_console_sem (arch/x86/include/asm/irqflags.h:22 arch/x86/include/asm/irqflags.h:70 arch/x86/include/asm/irqflags.h:132 kernel/printk/printk.c:255)
[ 42.784998][ T1] hardirqs last disabled at (377804): __up_console_sem (kernel/printk/printk.c:253)
[ 42.786520][ T1] softirqs last enabled at (377700): __do_softirq (arch/x86/include/asm/preempt.h:27 kernel/softirq.c:402 kernel/softirq.c:587)
[ 42.788096][ T1] softirqs last disabled at (377677): __irq_exit_rcu (kernel/softirq.c:? kernel/softirq.c:636)
[ 42.789665][ T1] ---[ end trace b31286580039568f ]---
[ 42.796879][ T1] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 248)
[ 42.802438][ T1] io scheduler mq-deadline registered
[ 42.806384][ T1] crc32: CRC_LE_BITS = 32, CRC_BE BITS = 32
[ 42.807361][ T1] crc32: self tests passed, processed 225944 bytes in 503210 nsec
[ 42.809123][ T1] crc32c: CRC_LE_BITS = 32
[ 42.809830][ T1] crc32c: self tests passed, processed 225944 bytes in 245030 nsec
[ 42.845225][ T1] crc32_combine: 8373 self tests passed
[ 42.876255][ T1] crc32c_combine: 8373 self tests passed
[ 42.901332][ T1] gpio_winbond: chip ID at 2e is ffff
[ 42.902303][ T1] gpio_winbond: not an our chip
[ 42.903183][ T1] gpio_winbond: chip ID at 4e is ffff
[ 42.904047][ T1] gpio_winbond: not an our chip
[ 42.946447][ T1] IPMI message handler: version 39.2
[ 42.948023][ T1] ipmi_si: IPMI System Interface driver
[ 42.951967][ T1] ipmi_si: Unable to find any System Interface(s)
[ 42.953042][ T1] ipmi_ssif: IPMI SSIF Interface driver
[ 44.060392][ T1] N_HDLC line discipline registered with maxframe=4096
[ 44.061589][ T1] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
[ 44.063981][ T1] serial 00:05: GPIO lookup for consumer rs485-term
[ 44.065202][ T1] serial 00:05: using ACPI for GPIO lookup
[ 44.066193][ T1] acpi PNP0501:00: GPIO: looking up rs485-term-gpios
[ 44.067284][ T1] acpi PNP0501:00: GPIO: looking up rs485-term-gpio
[ 44.068360][ T1] serial 00:05: using lookup tables for GPIO lookup
[ 44.069524][ T1] serial 00:05: No GPIO consumer rs485-term found
[ 44.106838][ T1] 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
[ 44.117852][ T1] serial 00:06: GPIO lookup for consumer rs485-term
[ 44.119004][ T1] serial 00:06: using ACPI for GPIO lookup
[ 44.119929][ T1] acpi PNP0501:01: GPIO: looking up rs485-term-gpios
[ 44.120977][ T1] acpi PNP0501:01: GPIO: looking up rs485-term-gpio
[ 44.122053][ T1] serial 00:06: using lookup tables for GPIO lookup
[ 44.123113][ T1] serial 00:06: No GPIO consumer rs485-term found
[ 44.154749][ T1] 00:06: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
[ 44.182277][ T1] telclk_interrupt = 0xf non-mcpbl0010 hw.
[ 44.207166][ T1] _warn_unseeded_randomness: 7 callbacks suppressed
[ 44.207179][ T1] random: get_random_u64 called from cache_random_seq_create+0x5e/0x140 with crng_init=0
[ 44.216194][ T1] hp_sw: device handler registered
[ 44.295846][ T2] random: get_random_u64 called from dup_task_struct+0x59a/0x900 with crng_init=0
[ 44.302804][ T1] scsi_debug:sdebug_driver_probe: scsi_debug: trim poll_queues to 0. poll_q/nr_hw = (0/1)
[ 44.304482][ T1] scsi host0: scsi_debug: version 0190 [20200710]
[ 44.304482][ T1] dev_size_mb=8, opts=0x0, submit_queues=1, statistics=0
[ 44.306922][ T1] random: get_random_u64 called from cache_random_seq_create+0x5e/0x140 with crng_init=0
[ 44.327167][ T1] scsi 0:0:0:0: Direct-Access Linux scsi_debug 0190 PQ: 0 ANSI: 7
[ 44.337441][ T1] scsi 0:0:0:0: Attached scsi generic sg0 type 0
[ 44.379479][ T1] Rounding down aligned max_sectors from 4294967295 to 4294967288
[ 44.385389][ T1] db_root: cannot open: /etc/target
[ 44.392443][ T1] platform physmap-flash.0: failed to claim resource 0: [mem 0x08000000-0x07ffffff]
[ 44.395648][ T1] e1000: Intel(R) PRO/1000 Network Driver
[ 44.396621][ T1] e1000: Copyright (c) 1999-2006 Intel Corporation.
[ 50.061744][ T1] ACPI: _SB_.LNKC: Enabled at IRQ 11
[ 50.528966][ T1] e1000 0000:00:03.0 eth0: (PCI:33MHz:32-bit) 52:54:00:12:34:56
[ 50.530547][ T1] e1000 0000:00:03.0 eth0: Intel(R) PRO/1000 Network Connection
[ 50.535580][ T1] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
[ 50.541874][ T1] serio: i8042 KBD port at 0x60,0x64 irq 1
[ 50.543195][ T1] serio: i8042 AUX port at 0x60,0x64 irq 12
[ 50.551639][ T1] mousedev: PS/2 mouse device common for all mice
[ 50.557528][ T26] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
[ 50.575247][ T26] evbug: Connected device: input0 (AT Translated Set 2 keyboard at isa0060/serio0/input0)
[ 50.585299][ T1] rtc-test rtc-test.0: char device (0:0)
[ 50.586268][ T1] rtc-test rtc-test.0: registered as rtc0
[ 50.587402][ T1] rtc-test rtc-test.0: setting system clock to 2022-01-17T23:31:28 UTC (1642462288)
[ 50.593215][ T1] rtc-test rtc-test.1: char device (0:1)
[ 50.594249][ T1] rtc-test rtc-test.1: registered as rtc1
[ 50.597134][ T1] rtc-test rtc-test.2: char device (0:2)
[ 50.598045][ T1] rtc-test rtc-test.2: registered as rtc2
[ 50.602016][ T1] pps pps0: new PPS source ktimer
[ 50.602939][ T1] pps pps0: ktimer PPS source registered
[ 50.604273][ T1] Driver for 1-wire Dallas network protocol.
[ 50.608496][ T1] __power_supply_register: Expected proper parent device for 'test_ac'
[ 50.611397][ T1] __power_supply_register: Expected proper parent device for 'test_battery'
[ 50.617810][ T1] __power_supply_register: Expected proper parent device for 'test_usb'
[ 50.628083][ T1] Driver 'corsair-psu' was unable to register with bus_type 'hid' because the bus was not initialized.
[ 50.647132][ T1] cpu5wdt: init success
[ 50.648266][ T1] w83877f_wdt: cannot register miscdev on minor=130 (err=-16)
[ 50.649465][ T1] w83977f_wdt: driver v1.00
[ 50.650233][ T1] w83977f_wdt: cannot register miscdev on minor=130 (err=-16)
[ 50.651537][ T1] machzwd: MachZ ZF-Logic Watchdog driver initializing
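For context on the warning itself: the backtrace shows it being raised from an initcall (do_one_initcall -> crypto_kdf108_init), i.e. an init-time self-test signalling a failure through WARN rather than stopping the boot, and the ud2 in the decoded Code section above is how WARN traps on x86. Below is a minimal user-space sketch of that reporting pattern; the names and the assumed failure cause are illustrative only and do not come from the actual crypto/kdf_sp800108.c:

#include <stdio.h>

/* Stand-in for the KDF self-test. Here we simply assume it fails, e.g.
 * because a transform it depends on is unavailable in the configuration
 * under test (an assumption, not a confirmed cause). */
static int kdf_selftest(void)
{
        return -2;
}

/* Stand-in for crypto_kdf108_init(): run the self-test at init time and
 * complain loudly on failure. In the kernel the complaint is a WARN(),
 * which is what produces the "WARNING: ... crypto_kdf108_init" splat. */
static int kdf_init(void)
{
        int ret = kdf_selftest();

        if (ret)
                fprintf(stderr, "alg: KDF self-test failed (rc=%d)\n", ret);
        return ret;
}

int main(void)
{
        /* Stand-in for do_one_initcall() invoking the init routine. */
        return kdf_init() ? 1 : 0;
}

The sketch only shows the reporting pattern; the actual cause of the self-test failure on this configuration is what the discussion linked above was about.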
To reproduce:
# build kernel
cd linux
cp config-5.16.0-rc1-00049-gd3b04a4398fe .config
make HOSTCC=clang-14 CC=clang-14 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=clang-14 CC=clang-14 ARCH=x86_64 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp directories to run from a clean state.
---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation
Thanks,
Oliver Sang