[af_unix] afd20b9290: stress-ng.sockdiag.ops_per_sec -26.3% regression
by kernel test robot
Greetings,
FYI, we noticed a -26.3% regression of stress-ng.sockdiag.ops_per_sec due to commit:
commit: afd20b9290e184c203fe22f2d6b80dc7127ba724 ("af_unix: Replace the big lock with small locks.")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: stress-ng
on test machine: 128 threads 2 sockets Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz with 128G memory
with the following parameters:
nr_threads: 100%
testtime: 60s
class: network
test: sockdiag
cpufreq_governor: performance
ucode: 0xd000280
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <oliver.sang(a)intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp directories to run from a clean state.
=========================================================================================
class/compiler/cpufreq_governor/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime/ucode:
network/gcc-9/performance/x86_64-rhel-8.3/100%/debian-10.4-x86_64-20200603.cgz/lkp-icl-2sp6/sockdiag/stress-ng/60s/0xd000280
commit:
e6b4b87389 ("af_unix: Save hash in sk_hash.")
afd20b9290 ("af_unix: Replace the big lock with small locks.")
e6b4b873896f0e92 afd20b9290e184c203fe22f2d6b
---------------- ---------------------------
%stddev %change %stddev
\ | \
3.129e+08 -26.3% 2.306e+08 stress-ng.sockdiag.ops
5214640 -26.3% 3842782 stress-ng.sockdiag.ops_per_sec
82895 -6.9% 77178 stress-ng.time.involuntary_context_switches
103737 -9.5% 93892 stress-ng.time.voluntary_context_switches
7067 -6.3% 6620 vmstat.system.cs
0.05 -0.0 0.04 ± 6% mpstat.cpu.all.soft%
0.13 ± 3% -0.0 0.12 ± 5% mpstat.cpu.all.usr%
1783836 ± 7% -21.6% 1397649 ± 12% numa-vmstat.node1.numa_hit
1689477 ± 8% -22.9% 1303128 ± 13% numa-vmstat.node1.numa_local
894897 ± 22% +46.6% 1312222 ± 11% turbostat.C1E
3.85 ± 55% +3.5 7.33 ± 10% turbostat.C1E%
2451882 ± 4% -24.3% 1855676 ± 2% numa-numastat.node0.local_node
2501404 ± 3% -23.8% 1905161 ± 3% numa-numastat.node0.numa_hit
2437526 -24.1% 1849165 ± 3% numa-numastat.node1.local_node
2503693 -23.5% 1915338 ± 3% numa-numastat.node1.numa_hit
7977 ± 19% -22.6% 6178 ± 8% softirqs.CPU2.RCU
7989 ± 25% -23.4% 6121 ± 3% softirqs.CPU25.RCU
8011 ± 24% -26.8% 5862 ± 3% softirqs.CPU8.RCU
890963 ± 3% -17.4% 735738 softirqs.RCU
74920 -3.6% 72233 proc-vmstat.nr_slab_unreclaimable
5007343 -23.7% 3821593 proc-vmstat.numa_hit
4891675 -24.2% 3705934 proc-vmstat.numa_local
5007443 -23.7% 3821701 proc-vmstat.pgalloc_normal
4796850 -24.7% 3610677 proc-vmstat.pgfree
0.71 ± 17% -41.1% 0.42 perf-stat.i.MPKI
0.12 ± 12% -0.0 0.10 ± 8% perf-stat.i.branch-miss-rate%
10044516 ± 13% -23.6% 7678759 ± 3% perf-stat.i.cache-misses
42758000 ± 6% -28.5% 30580693 perf-stat.i.cache-references
6920 -5.9% 6510 perf-stat.i.context-switches
571.08 ± 2% -13.4% 494.31 ± 2% perf-stat.i.cpu-migrations
39356 ± 12% +29.2% 50865 ± 3% perf-stat.i.cycles-between-cache-misses
0.01 ± 36% -0.0 0.00 ± 24% perf-stat.i.dTLB-load-miss-rate%
0.01 ± 23% -0.0 0.00 ± 14% perf-stat.i.dTLB-store-miss-rate%
8.447e+08 +27.0% 1.073e+09 perf-stat.i.dTLB-stores
13.36 -2.2% 13.07 perf-stat.i.major-faults
364.56 ± 9% -24.9% 273.60 perf-stat.i.metric.K/sec
350.63 +0.7% 353.23 perf-stat.i.metric.M/sec
87.88 +1.4 89.23 perf-stat.i.node-load-miss-rate%
1381985 ± 12% -27.7% 999393 ± 3% perf-stat.i.node-load-misses
198989 ± 6% -31.9% 135458 ± 4% perf-stat.i.node-loads
4305132 -27.4% 3124590 perf-stat.i.node-store-misses
581796 ± 5% -25.6% 432807 ± 3% perf-stat.i.node-stores
0.46 ± 5% -28.7% 0.33 perf-stat.overall.MPKI
39894 ± 12% +28.6% 51310 ± 3% perf-stat.overall.cycles-between-cache-misses
0.01 ± 22% -0.0 0.00 ± 12% perf-stat.overall.dTLB-store-miss-rate%
9916145 ± 13% -23.8% 7560589 ± 3% perf-stat.ps.cache-misses
42385546 ± 5% -28.7% 30225277 perf-stat.ps.cache-references
6786 -5.9% 6385 perf-stat.ps.context-switches
562.65 ± 2% -13.5% 486.73 ± 2% perf-stat.ps.cpu-migrations
8.314e+08 +26.8% 1.055e+09 perf-stat.ps.dTLB-stores
1359293 ± 11% -27.7% 982331 ± 3% perf-stat.ps.node-load-misses
205280 ± 6% -33.3% 136979 ± 5% perf-stat.ps.node-loads
4237942 -27.5% 3070934 perf-stat.ps.node-store-misses
585102 ± 5% -26.6% 429702 ± 3% perf-stat.ps.node-stores
5.844e+12 +0.9% 5.897e+12 perf-stat.total.instructions
99.26 +0.5 99.72 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.sendmsg
99.25 +0.5 99.72 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.sendmsg
99.25 +0.5 99.72 perf-profile.calltrace.cycles-pp.__sys_sendmsg.do_syscall_64.entry_SYSCALL_64_after_hwframe.sendmsg
99.26 +0.5 99.73 perf-profile.calltrace.cycles-pp.sendmsg
99.24 +0.5 99.71 perf-profile.calltrace.cycles-pp.____sys_sendmsg.___sys_sendmsg.__sys_sendmsg.do_syscall_64.entry_SYSCALL_64_after_hwframe
99.24 +0.5 99.71 perf-profile.calltrace.cycles-pp.sock_sendmsg.____sys_sendmsg.___sys_sendmsg.__sys_sendmsg.do_syscall_64
99.25 +0.5 99.72 perf-profile.calltrace.cycles-pp.___sys_sendmsg.__sys_sendmsg.do_syscall_64.entry_SYSCALL_64_after_hwframe.sendmsg
99.24 +0.5 99.71 perf-profile.calltrace.cycles-pp.netlink_sendmsg.sock_sendmsg.____sys_sendmsg.___sys_sendmsg.__sys_sendmsg
97.56 +0.5 98.04 perf-profile.calltrace.cycles-pp.osq_lock.__mutex_lock.sock_diag_rcv.netlink_unicast.netlink_sendmsg
99.22 +0.5 99.70 perf-profile.calltrace.cycles-pp.netlink_unicast.netlink_sendmsg.sock_sendmsg.____sys_sendmsg.___sys_sendmsg
99.19 +0.5 99.68 perf-profile.calltrace.cycles-pp.sock_diag_rcv.netlink_unicast.netlink_sendmsg.sock_sendmsg.____sys_sendmsg
98.41 +0.5 98.90 perf-profile.calltrace.cycles-pp.__mutex_lock.sock_diag_rcv.netlink_unicast.netlink_sendmsg.sock_sendmsg
0.48 -0.4 0.07 ± 5% perf-profile.children.cycles-pp.recvmsg
0.46 ± 2% -0.4 0.06 perf-profile.children.cycles-pp.___sys_recvmsg
0.47 ± 2% -0.4 0.07 ± 6% perf-profile.children.cycles-pp.__sys_recvmsg
0.45 -0.4 0.06 ± 9% perf-profile.children.cycles-pp.____sys_recvmsg
1.14 -0.4 0.76 perf-profile.children.cycles-pp.netlink_dump
1.09 -0.4 0.73 perf-profile.children.cycles-pp.unix_diag_dump
0.66 -0.3 0.37 ± 2% perf-profile.children.cycles-pp._raw_spin_lock
0.26 ± 2% -0.1 0.19 ± 2% perf-profile.children.cycles-pp.sk_diag_fill
0.07 ± 5% -0.0 0.04 ± 57% perf-profile.children.cycles-pp.__x64_sys_socket
0.07 ± 5% -0.0 0.04 ± 57% perf-profile.children.cycles-pp.__sys_socket
0.07 -0.0 0.04 ± 57% perf-profile.children.cycles-pp.__close
0.12 ± 4% -0.0 0.08 ± 5% perf-profile.children.cycles-pp.memset_erms
0.11 ± 4% -0.0 0.08 ± 5% perf-profile.children.cycles-pp.nla_put
0.08 ± 5% -0.0 0.06 perf-profile.children.cycles-pp.__nlmsg_put
0.08 ± 5% -0.0 0.05 ± 8% perf-profile.children.cycles-pp.__socket
0.08 -0.0 0.06 ± 7% perf-profile.children.cycles-pp.__nla_put
0.07 -0.0 0.05 perf-profile.children.cycles-pp.__nla_reserve
0.07 ± 5% -0.0 0.05 ± 8% perf-profile.children.cycles-pp.rcu_core
0.08 ± 5% -0.0 0.06 perf-profile.children.cycles-pp.__softirqentry_text_start
0.07 -0.0 0.05 ± 8% perf-profile.children.cycles-pp.rcu_do_batch
0.06 ± 7% -0.0 0.05 perf-profile.children.cycles-pp.sock_i_ino
99.89 +0.0 99.92 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
99.89 +0.0 99.92 perf-profile.children.cycles-pp.do_syscall_64
0.00 +0.1 0.08 perf-profile.children.cycles-pp.__raw_callee_save___native_queued_spin_unlock
99.26 +0.5 99.73 perf-profile.children.cycles-pp.sendmsg
99.25 +0.5 99.72 perf-profile.children.cycles-pp.__sys_sendmsg
99.25 +0.5 99.72 perf-profile.children.cycles-pp.___sys_sendmsg
99.24 +0.5 99.71 perf-profile.children.cycles-pp.____sys_sendmsg
99.24 +0.5 99.71 perf-profile.children.cycles-pp.sock_sendmsg
99.24 +0.5 99.71 perf-profile.children.cycles-pp.netlink_sendmsg
99.22 +0.5 99.70 perf-profile.children.cycles-pp.netlink_unicast
97.59 +0.5 98.08 perf-profile.children.cycles-pp.osq_lock
99.19 +0.5 99.68 perf-profile.children.cycles-pp.sock_diag_rcv
98.41 +0.5 98.90 perf-profile.children.cycles-pp.__mutex_lock
0.12 ± 5% -0.0 0.08 ± 5% perf-profile.self.cycles-pp.unix_diag_dump
0.11 -0.0 0.08 perf-profile.self.cycles-pp.memset_erms
0.00 +0.1 0.06 perf-profile.self.cycles-pp.__raw_callee_save___native_queued_spin_unlock
0.28 ± 5% +0.1 0.35 ± 2% perf-profile.self.cycles-pp._raw_spin_lock
97.23 +0.5 97.72 perf-profile.self.cycles-pp.osq_lock
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected]
Intel Corporation
Thanks,
Oliver Sang
[memcg] 0f12156dff: will-it-scale.per_process_ops -33.6% regression
by kernel test robot
Greetings,
FYI, we noticed a -33.6% regression of will-it-scale.per_process_ops due to commit:
commit: 0f12156dff2862ac54235fc72703f18770769042 ("memcg: enable accounting for file lock caches")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: will-it-scale
on test machine: 104 threads 2 sockets Skylake with 192G memory
with the following parameters:
nr_task: 50%
mode: process
test: lock1
cpufreq_governor: performance
ucode: 0x2006a0a
test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see if the testcase will scale. It builds both a process and threads based test in order to see any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale
In addition, the commit also has a significant impact on the following test:
+------------------+---------------------------------------------------------------------------------+
| testcase: change | will-it-scale: will-it-scale.per_process_ops -40.9% regression |
| test machine | 128 threads 2 sockets Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz with 256G memory |
| test parameters | cpufreq_governor=performance |
| | mode=process |
| | nr_task=50% |
| | test=lock1 |
| | ucode=0xd000280 |
+------------------+---------------------------------------------------------------------------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <oliver.sang(a)intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
bin/lkp run generated-yaml-file
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/process/50%/debian-10.4-x86_64-20200603.cgz/lkp-skl-fpga01/lock1/will-it-scale/0x2006a0a
commit:
b655843444 ("memcg: enable accounting for pollfd and select bits arrays")
0f12156dff ("memcg: enable accounting for file lock caches")
b655843444152c0a 0f12156dff2862ac54235fc7270
---------------- ---------------------------
%stddev %change %stddev
\ | \
65855410 -33.6% 43722413 ± 5% will-it-scale.52.processes
1266449 -33.6% 840815 ± 5% will-it-scale.per_process_ops
65855410 -33.6% 43722413 ± 5% will-it-scale.workload
141099 ± 3% +8.2% 152680 ± 2% meminfo.Active
140875 ± 3% +8.2% 152456 ± 2% meminfo.Active(anon)
28.79 +6.7 35.47 mpstat.cpu.all.sys%
20.49 -6.7 13.76 ± 5% mpstat.cpu.all.usr%
138801 ± 3% +8.4% 150456 ± 2% numa-meminfo.node1.Active
138689 ± 3% +8.4% 150381 ± 2% numa-meminfo.node1.Active(anon)
34681 ± 3% +8.3% 37570 ± 2% numa-vmstat.node1.nr_active_anon
34681 ± 3% +8.3% 37570 ± 2% numa-vmstat.node1.nr_zone_active_anon
4480 ± 3% -85.9% 632.67 ± 12% slabinfo.Acpi-Parse.active_objs
4480 ± 3% -85.9% 632.67 ± 12% slabinfo.Acpi-Parse.num_objs
2957456 ±214% -96.5% 104939 ± 5% turbostat.C1
103.99 +5.8% 110.04 turbostat.RAMWatt
19.83 -34.5% 13.00 ± 4% vmstat.cpu.us
2487 ± 2% +4.4% 2596 vmstat.system.cs
35219 ± 3% +8.2% 38114 ± 2% proc-vmstat.nr_active_anon
10977 +1.8% 11171 proc-vmstat.nr_mapped
40640 ± 3% +7.6% 43745 ± 2% proc-vmstat.nr_shmem
35219 ± 3% +8.2% 38114 ± 2% proc-vmstat.nr_zone_active_anon
53061 ± 3% +6.5% 56488 ± 3% proc-vmstat.pgactivate
0.02 ± 19% -24.0% 0.01 ± 18% perf-sched.sch_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
9.26 ± 21% -39.8% 5.58 ± 14% perf-sched.wait_and_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
172.83 ± 11% -29.4% 122.00 ± 5% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.kmem_cache_alloc.locks_alloc_lock.fcntl_setlk
127.50 ± 4% +47.8% 188.50 ± 7% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.kmem_cache_alloc.locks_alloc_lock.posix_lock_inode
1080 ± 19% +66.6% 1799 ± 13% perf-sched.wait_and_delay.count.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
7204 ± 3% -12.0% 6337 perf-sched.wait_and_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
7204 ± 3% -12.0% 6337 perf-sched.wait_and_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read
7209 ± 3% -12.0% 6342 perf-sched.wait_and_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
6825 ± 4% -11.5% 6036 perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
9.26 ± 21% -39.8% 5.57 ± 14% perf-sched.wait_time.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
7204 ± 3% -12.0% 6337 perf-sched.wait_time.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
7204 ± 3% -12.0% 6337 perf-sched.wait_time.max.ms.do_syslog.part.0.kmsg_read.vfs_read
0.05 ± 7% +751.3% 0.40 ±180% perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
7209 ± 3% -12.0% 6342 perf-sched.wait_time.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
6825 ± 4% -11.5% 6036 perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
827.67 ± 9% -16.3% 692.50 ± 4% interrupts.CPU13.CAL:Function_call_interrupts
3623 ± 54% +66.8% 6045 ± 33% interrupts.CPU26.NMI:Non-maskable_interrupts
3623 ± 54% +66.8% 6045 ± 33% interrupts.CPU26.PMI:Performance_monitoring_interrupts
239.50 ± 18% -33.1% 160.33 ± 38% interrupts.CPU41.RES:Rescheduling_interrupts
261.33 ± 18% -54.4% 119.17 ± 49% interrupts.CPU42.RES:Rescheduling_interrupts
7894 -30.2% 5511 ± 37% interrupts.CPU45.NMI:Non-maskable_interrupts
7894 -30.2% 5511 ± 37% interrupts.CPU45.PMI:Performance_monitoring_interrupts
3374 ± 40% +90.6% 6432 ± 23% interrupts.CPU58.NMI:Non-maskable_interrupts
3374 ± 40% +90.6% 6432 ± 23% interrupts.CPU58.PMI:Performance_monitoring_interrupts
272.83 ± 8% -36.4% 173.50 ± 40% interrupts.CPU6.RES:Rescheduling_interrupts
2890 ± 37% +61.0% 4652 ± 37% interrupts.CPU76.NMI:Non-maskable_interrupts
2890 ± 37% +61.0% 4652 ± 37% interrupts.CPU76.PMI:Performance_monitoring_interrupts
272.67 ± 9% -29.0% 193.67 ± 27% interrupts.CPU81.RES:Rescheduling_interrupts
3475 ± 47% +109.3% 7273 ± 12% interrupts.CPU93.NMI:Non-maskable_interrupts
3475 ± 47% +109.3% 7273 ± 12% interrupts.CPU93.PMI:Performance_monitoring_interrupts
3054 ± 37% +91.7% 5857 ± 38% interrupts.CPU96.NMI:Non-maskable_interrupts
3054 ± 37% +91.7% 5857 ± 38% interrupts.CPU96.PMI:Performance_monitoring_interrupts
0.05 ± 5% +1601.9% 0.79 ± 20% perf-stat.i.MPKI
12.52 ± 2% +20.9 33.42 ± 4% perf-stat.i.cache-miss-rate%
647128 ± 2% +4855.0% 32065121 ± 8% perf-stat.i.cache-misses
5130146 ± 2% +1777.3% 96305803 ± 9% perf-stat.i.cache-references
2302 +6.1% 2443 perf-stat.i.context-switches
259518 ± 2% -98.1% 4827 ± 6% perf-stat.i.cycles-between-cache-misses
0.17 -0.1 0.11 perf-stat.i.dTLB-load-miss-rate%
65644478 -33.6% 43563819 ± 5% perf-stat.i.dTLB-load-misses
51892 -27.4% 37675 ± 3% perf-stat.i.dTLB-store-misses
68181011 -33.4% 45405593 ± 5% perf-stat.i.iTLB-load-misses
1933 +58.2% 3058 ± 3% perf-stat.i.instructions-per-iTLB-miss
81.87 ± 12% +782.2% 722.30 ± 16% perf-stat.i.metric.K/sec
126326 ± 3% +6119.4% 7856755 ± 2% perf-stat.i.node-load-misses
26893 ± 6% +3382.5% 936567 ± 3% perf-stat.i.node-loads
87.04 +12.8 99.79 perf-stat.i.node-store-miss-rate%
33838 ± 3% +28351.8% 9627739 perf-stat.i.node-store-misses
0.04 ± 2% +1709.7% 0.71 ± 14% perf-stat.overall.MPKI
0.52 -0.0 0.49 ± 3% perf-stat.overall.branch-miss-rate%
12.65 ± 2% +20.7 33.35 ± 4% perf-stat.overall.cache-miss-rate%
222708 ± 2% -98.0% 4560 ± 8% perf-stat.overall.cycles-between-cache-misses
0.17 -0.1 0.10 perf-stat.overall.dTLB-load-miss-rate%
0.00 -0.0 0.00 ± 2% perf-stat.overall.dTLB-store-miss-rate%
1929 +56.0% 3010 perf-stat.overall.instructions-per-iTLB-miss
82.22 +7.1 89.34 perf-stat.overall.node-load-miss-rate%
78.75 +21.2 99.91 perf-stat.overall.node-store-miss-rate%
603681 +56.8% 946452 perf-stat.overall.path-length
649626 ± 2% +4816.5% 31938893 ± 8% perf-stat.ps.cache-misses
5137402 ± 2% +1767.5% 95941923 ± 9% perf-stat.ps.cache-references
2313 +5.8% 2448 perf-stat.ps.context-switches
65424262 -33.6% 43419292 ± 5% perf-stat.ps.dTLB-load-misses
51749 -27.4% 37573 ± 3% perf-stat.ps.dTLB-store-misses
67951468 -33.4% 45252180 ± 5% perf-stat.ps.iTLB-load-misses
126213 ± 3% +6102.7% 7828694 ± 2% perf-stat.ps.node-load-misses
27281 ± 6% +3321.9% 933553 ± 3% perf-stat.ps.node-loads
33783 ± 3% +28302.8% 9595407 perf-stat.ps.node-store-misses
15207 ± 10% -34.0% 10039 ± 12% softirqs.CPU0.RCU
16968 ± 17% -36.3% 10810 ± 19% softirqs.CPU1.RCU
17526 ± 19% -36.4% 11142 ± 14% softirqs.CPU10.RCU
17232 ± 16% -48.1% 8946 ± 13% softirqs.CPU100.RCU
15770 ± 16% -43.7% 8874 ± 28% softirqs.CPU101.RCU
14396 ± 15% -44.6% 7969 ± 13% softirqs.CPU102.RCU
18909 ± 5% -46.4% 10130 ± 17% softirqs.CPU103.RCU
17822 ± 23% -45.3% 9744 ± 22% softirqs.CPU11.RCU
18056 ± 22% -52.5% 8582 ± 22% softirqs.CPU12.RCU
14576 ± 17% -33.3% 9723 ± 15% softirqs.CPU13.RCU
18836 ± 15% -47.1% 9961 ± 8% softirqs.CPU14.RCU
20542 ± 17% -54.3% 9394 ± 11% softirqs.CPU15.RCU
17485 ± 19% -42.0% 10141 ± 19% softirqs.CPU16.RCU
17387 ± 17% -37.8% 10807 ± 13% softirqs.CPU17.RCU
18473 ± 15% -49.9% 9250 ± 18% softirqs.CPU18.RCU
19232 ± 19% -44.1% 10751 ± 10% softirqs.CPU19.RCU
17052 ± 21% -42.9% 9744 ± 22% softirqs.CPU2.RCU
17330 ± 17% -43.5% 9797 ± 16% softirqs.CPU20.RCU
18624 ± 15% -43.9% 10445 ± 24% softirqs.CPU21.RCU
18206 ± 18% -44.4% 10123 ± 26% softirqs.CPU22.RCU
18047 ± 12% -44.5% 10022 ± 23% softirqs.CPU23.RCU
19351 ± 18% -47.7% 10115 ± 21% softirqs.CPU24.RCU
18629 ± 17% -51.4% 9057 ± 18% softirqs.CPU25.RCU
15355 ± 14% -44.8% 8480 ± 13% softirqs.CPU26.RCU
15129 ± 14% -39.7% 9117 ± 16% softirqs.CPU27.RCU
14744 ± 10% -39.0% 8996 ± 21% softirqs.CPU28.RCU
13973 ± 11% -39.4% 8470 ± 18% softirqs.CPU29.RCU
35779 ± 10% -27.5% 25930 ± 30% softirqs.CPU29.SCHED
18703 ± 16% -43.4% 10577 ± 19% softirqs.CPU3.RCU
17000 ± 19% -40.8% 10057 ± 21% softirqs.CPU30.RCU
18602 ± 16% -40.6% 11040 ± 15% softirqs.CPU31.RCU
17242 ± 23% -44.0% 9662 ± 19% softirqs.CPU32.RCU
17841 ± 15% -43.1% 10144 ± 21% softirqs.CPU33.RCU
17867 ± 14% -44.6% 9890 ± 21% softirqs.CPU34.RCU
19083 ± 16% -46.7% 10177 ± 15% softirqs.CPU35.RCU
19161 ± 11% -49.8% 9616 ± 24% softirqs.CPU36.RCU
19335 ± 15% -47.7% 10106 ± 16% softirqs.CPU37.RCU
20460 ± 11% -54.3% 9341 ± 17% softirqs.CPU38.RCU
10641 ± 58% +117.0% 23087 ± 43% softirqs.CPU38.SCHED
20165 ± 8% -53.5% 9381 ± 17% softirqs.CPU39.RCU
20158 ± 9% -47.3% 10630 ± 20% softirqs.CPU4.RCU
18503 ± 20% -40.7% 10980 ± 20% softirqs.CPU40.RCU
18917 ± 11% -51.1% 9241 ± 17% softirqs.CPU41.RCU
19989 ± 9% -57.3% 8544 ± 12% softirqs.CPU42.RCU
10393 ± 58% +155.8% 26585 ± 30% softirqs.CPU42.SCHED
16499 ± 11% -45.5% 8991 ± 19% softirqs.CPU43.RCU
18599 ± 16% -49.3% 9433 ± 24% softirqs.CPU44.RCU
20721 ± 8% -56.4% 9035 ± 14% softirqs.CPU45.RCU
6210 ± 26% +213.4% 19464 ± 52% softirqs.CPU45.SCHED
16912 ± 21% -43.6% 9547 ± 24% softirqs.CPU46.RCU
19085 ± 14% -49.9% 9571 ± 15% softirqs.CPU47.RCU
16524 ± 21% -46.7% 8815 ± 20% softirqs.CPU48.RCU
18590 ± 13% -52.0% 8921 ± 11% softirqs.CPU49.RCU
19473 ± 15% -47.6% 10198 ± 12% softirqs.CPU5.RCU
19807 ± 16% -51.7% 9574 ± 20% softirqs.CPU50.RCU
14621 ± 7% -44.6% 8100 ± 14% softirqs.CPU51.RCU
21579 ± 9% -49.4% 10911 ± 16% softirqs.CPU52.RCU
17758 ± 25% -50.6% 8778 ± 25% softirqs.CPU53.RCU
18495 ± 15% -47.1% 9776 ± 18% softirqs.CPU54.RCU
16729 ± 20% -46.4% 8963 ± 20% softirqs.CPU55.RCU
15031 ± 10% -44.4% 8358 ± 18% softirqs.CPU56.RCU
15721 ± 17% -42.2% 9080 ± 30% softirqs.CPU57.RCU
14757 ± 3% -32.1% 10022 ± 23% softirqs.CPU58.RCU
37022 ± 9% -41.4% 21679 ± 41% softirqs.CPU58.SCHED
16077 ± 6% -41.1% 9466 ± 32% softirqs.CPU59.RCU
20064 ± 11% -54.4% 9150 ± 18% softirqs.CPU6.RCU
14811 ± 18% -46.8% 7880 ± 15% softirqs.CPU60.RCU
15191 ± 8% -49.8% 7623 ± 14% softirqs.CPU61.RCU
16592 ± 13% -49.0% 8455 ± 12% softirqs.CPU62.RCU
15392 ± 21% -44.3% 8566 ± 17% softirqs.CPU63.RCU
18606 ± 16% -52.5% 8837 ± 19% softirqs.CPU65.RCU
14770 ± 22% -40.4% 8802 ± 29% softirqs.CPU66.RCU
14465 ± 11% -32.3% 9797 ± 24% softirqs.CPU67.RCU
16993 ± 16% -48.1% 8818 ± 12% softirqs.CPU68.RCU
18034 ± 14% -52.9% 8497 ± 18% softirqs.CPU69.RCU
18628 ± 16% -47.9% 9700 ± 18% softirqs.CPU7.RCU
16564 ± 17% -42.4% 9545 ± 15% softirqs.CPU70.RCU
16333 ± 19% -45.3% 8933 ± 25% softirqs.CPU71.RCU
17985 ± 19% -52.0% 8624 ± 25% softirqs.CPU72.RCU
16575 ± 17% -50.7% 8172 ± 15% softirqs.CPU73.RCU
17243 ± 18% -48.5% 8875 ± 15% softirqs.CPU74.RCU
17293 ± 12% -44.8% 9547 ± 15% softirqs.CPU75.RCU
16269 ± 20% -46.7% 8673 ± 24% softirqs.CPU76.RCU
17326 ± 18% -45.8% 9383 ± 22% softirqs.CPU77.RCU
18826 ± 18% -50.6% 9292 ± 24% softirqs.CPU78.RCU
19893 ± 8% -52.4% 9460 ± 18% softirqs.CPU79.RCU
18812 ± 19% -43.6% 10616 ± 19% softirqs.CPU8.RCU
20001 ± 12% -53.0% 9393 ± 14% softirqs.CPU80.RCU
20756 ± 5% -53.9% 9562 ± 18% softirqs.CPU81.RCU
18840 ± 15% -53.5% 8761 ± 17% softirqs.CPU82.RCU
17159 ± 21% -52.3% 8189 ± 20% softirqs.CPU83.RCU
18671 ± 16% -52.4% 8881 ± 19% softirqs.CPU84.RCU
18929 ± 11% -55.5% 8421 ± 21% softirqs.CPU85.RCU
17559 ± 15% -51.0% 8604 ± 13% softirqs.CPU86.RCU
17343 ± 14% -53.0% 8154 ± 18% softirqs.CPU87.RCU
15481 ± 13% -43.6% 8727 ± 14% softirqs.CPU88.RCU
16364 ± 14% -43.9% 9187 ± 24% softirqs.CPU89.RCU
18492 ± 19% -43.5% 10441 ± 14% softirqs.CPU9.RCU
14056 ± 7% -38.3% 8666 ± 21% softirqs.CPU90.RCU
14189 ± 6% -37.9% 8815 ± 24% softirqs.CPU91.RCU
15154 ± 14% -48.1% 7867 ± 15% softirqs.CPU92.RCU
15589 ± 14% -42.4% 8979 ± 25% softirqs.CPU93.RCU
14492 ± 10% -34.4% 9504 ± 19% softirqs.CPU94.RCU
34198 ± 16% -52.1% 16384 ± 44% softirqs.CPU94.SCHED
17203 ± 7% -46.2% 9249 ± 16% softirqs.CPU95.RCU
15367 ± 11% -44.0% 8601 ± 12% softirqs.CPU96.RCU
14328 ± 13% -37.8% 8911 ± 27% softirqs.CPU97.RCU
16134 ± 16% -46.3% 8672 ± 19% softirqs.CPU98.RCU
15282 ± 17% -44.5% 8486 ± 18% softirqs.CPU99.RCU
1808149 ± 6% -46.3% 971467 ± 15% softirqs.RCU
36250 ± 5% +36.3% 49404 ± 2% softirqs.TIMER
17.47 ± 9% -5.9 11.61 ± 9% perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fcntl64
9.44 ± 9% -3.2 6.23 ± 9% perf-profile.calltrace.cycles-pp.__entry_text_start.__libc_fcntl64
8.57 ± 9% -2.9 5.64 ± 9% perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.__libc_fcntl64
2.27 ± 9% -1.0 1.27 ± 29% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_safe_stack.__libc_fcntl64
2.68 ± 9% -1.0 1.70 ± 9% perf-profile.calltrace.cycles-pp.memset_erms.kmem_cache_alloc.locks_alloc_lock.posix_lock_inode.do_lock_file_wait
2.65 ± 8% -0.9 1.78 ± 9% perf-profile.calltrace.cycles-pp.security_file_lock.do_lock_file_wait.fcntl_setlk.do_fcntl.__x64_sys_fcntl
2.52 ± 9% -0.9 1.65 ± 9% perf-profile.calltrace.cycles-pp._copy_from_user.do_fcntl.__x64_sys_fcntl.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.80 ± 8% -0.5 0.28 ±100% perf-profile.calltrace.cycles-pp._raw_spin_lock.posix_lock_inode.do_lock_file_wait.fcntl_setlk.do_fcntl
1.34 ± 9% -0.5 0.83 ± 9% perf-profile.calltrace.cycles-pp.memset_erms.kmem_cache_alloc.locks_alloc_lock.fcntl_setlk.do_fcntl
1.62 ± 9% -0.5 1.12 ± 10% perf-profile.calltrace.cycles-pp.common_file_perm.security_file_lock.do_lock_file_wait.fcntl_setlk.do_fcntl
0.82 ± 10% -0.4 0.38 ± 71% perf-profile.calltrace.cycles-pp.syscall_enter_from_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fcntl64
0.66 ± 11% -0.4 0.28 ±100% perf-profile.calltrace.cycles-pp.locks_delete_lock_ctx.posix_lock_inode.do_lock_file_wait.fcntl_setlk.do_fcntl
1.04 ± 9% -0.3 0.69 ± 9% perf-profile.calltrace.cycles-pp.copy_user_generic_unrolled._copy_from_user.do_fcntl.__x64_sys_fcntl.do_syscall_64
0.86 ± 9% -0.3 0.59 ± 8% perf-profile.calltrace.cycles-pp.__fget_light.__x64_sys_fcntl.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fcntl64
0.00 +0.6 0.59 ± 11% perf-profile.calltrace.cycles-pp.mod_objcg_state.kmem_cache_alloc.locks_alloc_lock.posix_lock_inode.do_lock_file_wait
1.06 ± 9% +0.9 1.92 ± 12% perf-profile.calltrace.cycles-pp.locks_dispose_list.posix_lock_inode.do_lock_file_wait.fcntl_setlk.do_fcntl
0.00 +0.9 0.92 ± 10% perf-profile.calltrace.cycles-pp.get_obj_cgroup_from_current.kmem_cache_alloc.locks_alloc_lock.fcntl_setlk.do_fcntl
0.67 ± 9% +1.0 1.68 ± 14% perf-profile.calltrace.cycles-pp.kmem_cache_free.locks_dispose_list.posix_lock_inode.do_lock_file_wait.fcntl_setlk
0.00 +1.4 1.38 ± 8% perf-profile.calltrace.cycles-pp.get_obj_cgroup_from_current.kmem_cache_alloc.locks_alloc_lock.posix_lock_inode.do_lock_file_wait
0.00 +1.8 1.79 ± 59% perf-profile.calltrace.cycles-pp.page_counter_cancel.page_counter_uncharge.obj_cgroup_uncharge_pages.kmem_cache_free.fcntl_setlk
0.00 +2.2 2.18 ± 58% perf-profile.calltrace.cycles-pp.page_counter_uncharge.obj_cgroup_uncharge_pages.kmem_cache_free.fcntl_setlk.do_fcntl
0.00 +2.2 2.21 ± 58% perf-profile.calltrace.cycles-pp.obj_cgroup_uncharge_pages.kmem_cache_free.fcntl_setlk.do_fcntl.__x64_sys_fcntl
0.00 +2.4 2.35 ± 43% perf-profile.calltrace.cycles-pp.page_counter_cancel.page_counter_uncharge.obj_cgroup_uncharge_pages.kmem_cache_free.posix_lock_inode
0.00 +2.8 2.80 ± 59% perf-profile.calltrace.cycles-pp.obj_cgroup_charge_pages.obj_cgroup_charge.kmem_cache_alloc.locks_alloc_lock.fcntl_setlk
0.00 +2.8 2.84 ± 41% perf-profile.calltrace.cycles-pp.page_counter_uncharge.obj_cgroup_uncharge_pages.kmem_cache_free.posix_lock_inode.do_lock_file_wait
0.00 +2.9 2.86 ± 41% perf-profile.calltrace.cycles-pp.obj_cgroup_uncharge_pages.kmem_cache_free.posix_lock_inode.do_lock_file_wait.fcntl_setlk
1.10 ± 9% +2.9 3.98 ± 32% perf-profile.calltrace.cycles-pp.kmem_cache_free.fcntl_setlk.do_fcntl.__x64_sys_fcntl.do_syscall_64
0.00 +3.0 2.99 ± 55% perf-profile.calltrace.cycles-pp.obj_cgroup_charge.kmem_cache_alloc.locks_alloc_lock.fcntl_setlk.do_fcntl
1.69 ± 9% +3.8 5.49 ± 21% perf-profile.calltrace.cycles-pp.kmem_cache_free.posix_lock_inode.do_lock_file_wait.fcntl_setlk.do_fcntl
3.39 ± 8% +4.0 7.42 ± 22% perf-profile.calltrace.cycles-pp.locks_alloc_lock.fcntl_setlk.do_fcntl.__x64_sys_fcntl.do_syscall_64
3.00 ± 8% +4.2 7.17 ± 23% perf-profile.calltrace.cycles-pp.kmem_cache_alloc.locks_alloc_lock.fcntl_setlk.do_fcntl.__x64_sys_fcntl
0.00 +4.6 4.61 ± 29% perf-profile.calltrace.cycles-pp.obj_cgroup_charge_pages.obj_cgroup_charge.kmem_cache_alloc.locks_alloc_lock.posix_lock_inode
0.00 +5.0 5.00 ± 27% perf-profile.calltrace.cycles-pp.obj_cgroup_charge.kmem_cache_alloc.locks_alloc_lock.posix_lock_inode.do_lock_file_wait
6.48 ± 9% +6.3 12.76 ± 12% perf-profile.calltrace.cycles-pp.locks_alloc_lock.posix_lock_inode.do_lock_file_wait.fcntl_setlk.do_fcntl
5.71 ± 9% +6.5 12.21 ± 13% perf-profile.calltrace.cycles-pp.kmem_cache_alloc.locks_alloc_lock.posix_lock_inode.do_lock_file_wait.fcntl_setlk
0.00 +7.3 7.35 ± 35% perf-profile.calltrace.cycles-pp.page_counter_try_charge.obj_cgroup_charge_pages.obj_cgroup_charge.kmem_cache_alloc.locks_alloc_lock
17.27 ± 9% +8.5 25.78 ± 11% perf-profile.calltrace.cycles-pp.do_lock_file_wait.fcntl_setlk.do_fcntl.__x64_sys_fcntl.do_syscall_64
13.80 ± 9% +9.6 23.35 ± 12% perf-profile.calltrace.cycles-pp.posix_lock_inode.do_lock_file_wait.fcntl_setlk.do_fcntl.__x64_sys_fcntl
28.29 ± 9% +13.3 41.58 ± 12% perf-profile.calltrace.cycles-pp.__x64_sys_fcntl.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fcntl64
26.73 ± 9% +13.8 40.53 ± 12% perf-profile.calltrace.cycles-pp.do_fcntl.__x64_sys_fcntl.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fcntl64
23.74 ± 9% +14.8 38.56 ± 13% perf-profile.calltrace.cycles-pp.fcntl_setlk.do_fcntl.__x64_sys_fcntl.do_syscall_64.entry_SYSCALL_64_after_hwframe
17.55 ± 9% -5.9 11.68 ± 9% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
9.74 ± 9% -3.3 6.41 ± 9% perf-profile.children.cycles-pp.syscall_return_via_sysret
9.02 ± 8% -3.1 5.96 ± 9% perf-profile.children.cycles-pp.__entry_text_start
4.02 ± 9% -1.5 2.53 ± 9% perf-profile.children.cycles-pp.memset_erms
2.58 ± 9% -0.9 1.69 ± 9% perf-profile.children.cycles-pp._copy_from_user
2.68 ± 8% -0.9 1.80 ± 9% perf-profile.children.cycles-pp.security_file_lock
2.02 ± 7% -0.7 1.33 ± 9% perf-profile.children.cycles-pp._raw_spin_lock
1.40 ± 8% -0.6 0.85 ± 10% perf-profile.children.cycles-pp.___might_sleep
1.67 ± 9% -0.5 1.16 ± 10% perf-profile.children.cycles-pp.common_file_perm
1.48 ± 9% -0.5 0.98 ± 9% perf-profile.children.cycles-pp.entry_SYSCALL_64_safe_stack
1.23 ± 9% -0.4 0.81 ± 9% perf-profile.children.cycles-pp.copy_user_generic_unrolled
1.04 ± 9% -0.4 0.67 ± 9% perf-profile.children.cycles-pp.__might_sleep
0.86 ± 10% -0.3 0.56 ± 9% perf-profile.children.cycles-pp.syscall_enter_from_user_mode
0.86 ± 9% -0.3 0.59 ± 8% perf-profile.children.cycles-pp.__fget_light
0.76 ± 7% -0.3 0.50 ± 8% perf-profile.children.cycles-pp.apparmor_file_lock
0.77 ± 8% -0.3 0.51 ± 9% perf-profile.children.cycles-pp.__cond_resched
0.59 ± 8% -0.2 0.38 ± 9% perf-profile.children.cycles-pp.__might_fault
0.66 ± 9% -0.2 0.46 ± 9% perf-profile.children.cycles-pp.locks_release_private
0.56 ± 9% -0.2 0.36 ± 9% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
0.58 ± 9% -0.2 0.44 ± 10% perf-profile.children.cycles-pp.locks_insert_lock_ctx
0.69 ± 11% -0.1 0.54 ± 9% perf-profile.children.cycles-pp.locks_delete_lock_ctx
0.56 ± 10% -0.1 0.44 ± 8% perf-profile.children.cycles-pp.locks_unlink_lock_ctx
0.38 ± 8% -0.1 0.26 ± 10% perf-profile.children.cycles-pp.rcu_all_qs
0.33 ± 11% -0.1 0.21 ± 9% perf-profile.children.cycles-pp.flock64_to_posix_lock
0.35 ± 8% -0.1 0.24 ± 10% perf-profile.children.cycles-pp.aa_file_perm
0.27 ± 21% -0.1 0.16 ± 9% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.23 ± 9% -0.1 0.13 ± 8% perf-profile.children.cycles-pp.__init_waitqueue_head
0.31 ± 8% -0.1 0.21 ± 9% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.24 ± 9% -0.1 0.17 ± 6% perf-profile.children.cycles-pp.testcase
0.19 ± 11% -0.1 0.12 ± 10% perf-profile.children.cycles-pp.should_failslab
0.15 ± 10% -0.1 0.09 ± 10% [email protected]
0.16 ± 7% -0.1 0.10 ± 14% perf-profile.children.cycles-pp.syscall_exit_to_user_mode_prepare
0.13 ± 9% -0.0 0.08 ± 5% perf-profile.children.cycles-pp.security_file_fcntl
0.12 ± 12% -0.0 0.08 ± 8% perf-profile.children.cycles-pp.__list_del_entry_valid
0.14 ± 9% -0.0 0.10 ± 10% perf-profile.children.cycles-pp.locks_get_lock_context
0.08 ± 10% -0.0 0.03 ± 70% perf-profile.children.cycles-pp.vfs_lock_file
0.13 ± 8% -0.0 0.10 ± 7% perf-profile.children.cycles-pp.locks_delete_block
0.10 ± 10% -0.0 0.06 ± 14% perf-profile.children.cycles-pp.__list_add_valid
0.10 ± 10% -0.0 0.06 ± 7% perf-profile.children.cycles-pp.memset
0.00 +0.1 0.08 ± 9% perf-profile.children.cycles-pp.obj_cgroup_uncharge
0.00 +0.2 0.21 ± 18% perf-profile.children.cycles-pp.mem_cgroup_from_task
0.00 +0.6 0.55 ± 9% perf-profile.children.cycles-pp.rcu_read_unlock_strict
0.00 +0.6 0.60 ± 9% perf-profile.children.cycles-pp.refill_obj_stock
1.10 ± 9% +0.9 1.96 ± 12% perf-profile.children.cycles-pp.locks_dispose_list
0.00 +1.7 1.67 ± 10% perf-profile.children.cycles-pp.mod_objcg_state
0.00 +2.4 2.42 ± 35% perf-profile.children.cycles-pp.propagate_protected_usage
0.00 +2.5 2.47 ± 9% perf-profile.children.cycles-pp.get_obj_cgroup_from_current
0.00 +4.7 4.74 ± 36% perf-profile.children.cycles-pp.page_counter_cancel
0.00 +5.8 5.77 ± 35% perf-profile.children.cycles-pp.page_counter_uncharge
0.00 +5.8 5.82 ± 35% perf-profile.children.cycles-pp.obj_cgroup_uncharge_pages
0.00 +7.4 7.36 ± 35% perf-profile.children.cycles-pp.page_counter_try_charge
0.00 +7.4 7.42 ± 35% perf-profile.children.cycles-pp.obj_cgroup_charge_pages
3.47 ± 9% +7.8 11.27 ± 18% perf-profile.children.cycles-pp.kmem_cache_free
0.00 +8.1 8.05 ± 32% perf-profile.children.cycles-pp.obj_cgroup_charge
17.35 ± 9% +8.5 25.81 ± 11% perf-profile.children.cycles-pp.do_lock_file_wait
14.01 ± 9% +9.5 23.56 ± 12% perf-profile.children.cycles-pp.posix_lock_inode
10.00 ± 8% +10.2 20.24 ± 14% perf-profile.children.cycles-pp.locks_alloc_lock
9.00 ± 8% +10.7 19.67 ± 14% perf-profile.children.cycles-pp.kmem_cache_alloc
28.35 ± 9% +13.3 41.61 ± 12% perf-profile.children.cycles-pp.__x64_sys_fcntl
26.89 ± 9% +13.7 40.63 ± 12% perf-profile.children.cycles-pp.do_fcntl
23.92 ± 9% +14.7 38.67 ± 13% perf-profile.children.cycles-pp.fcntl_setlk
17.12 ± 9% -5.7 11.40 ± 9% perf-profile.self.cycles-pp.syscall_exit_to_user_mode
9.72 ± 9% -3.3 6.39 ± 9% perf-profile.self.cycles-pp.syscall_return_via_sysret
7.81 ± 8% -2.6 5.16 ± 9% perf-profile.self.cycles-pp.__entry_text_start
3.90 ± 9% -1.4 2.45 ± 9% perf-profile.self.cycles-pp.memset_erms
1.97 ± 7% -0.7 1.32 ± 10% perf-profile.self.cycles-pp._raw_spin_lock
1.78 ± 9% -0.6 1.21 ± 9% perf-profile.self.cycles-pp.__libc_fcntl64
1.34 ± 8% -0.5 0.82 ± 11% perf-profile.self.cycles-pp.___might_sleep
1.35 ± 9% -0.4 0.94 ± 10% perf-profile.self.cycles-pp.common_file_perm
1.19 ± 9% -0.4 0.79 ± 9% perf-profile.self.cycles-pp.copy_user_generic_unrolled
1.30 ± 8% -0.3 0.96 ± 8% perf-profile.self.cycles-pp.posix_lock_inode
0.91 ± 9% -0.3 0.58 ± 9% perf-profile.self.cycles-pp.__might_sleep
1.08 ± 9% -0.3 0.76 ± 8% perf-profile.self.cycles-pp.fcntl_setlk
0.74 ± 8% -0.3 0.43 ± 10% perf-profile.self.cycles-pp.locks_alloc_lock
0.79 ± 9% -0.3 0.51 ± 10% perf-profile.self.cycles-pp.syscall_enter_from_user_mode
0.76 ± 9% -0.3 0.49 ± 6% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.83 ± 9% -0.3 0.56 ± 8% perf-profile.self.cycles-pp.__fget_light
0.70 ± 9% -0.2 0.46 ± 8% perf-profile.self.cycles-pp.entry_SYSCALL_64_safe_stack
0.66 ± 7% -0.2 0.43 ± 7% perf-profile.self.cycles-pp.apparmor_file_lock
0.64 ± 9% -0.2 0.42 ± 9% perf-profile.self.cycles-pp.locks_release_private
0.52 ± 9% -0.2 0.34 ± 9% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
0.47 ± 10% -0.2 0.30 ± 9% perf-profile.self.cycles-pp.do_lock_file_wait
0.54 ± 9% -0.2 0.38 ± 9% perf-profile.self.cycles-pp.do_fcntl
0.46 ± 9% -0.2 0.30 ± 8% perf-profile.self.cycles-pp.__x64_sys_fcntl
0.40 ± 11% -0.2 0.25 ± 11% perf-profile.self.cycles-pp.do_syscall_64
0.33 ± 11% -0.1 0.19 ± 10% perf-profile.self.cycles-pp.flock64_to_posix_lock
0.38 ± 9% -0.1 0.25 ± 9% perf-profile.self.cycles-pp.__cond_resched
0.25 ± 7% -0.1 0.14 ± 10% perf-profile.self.cycles-pp.locks_dispose_list
0.24 ± 9% -0.1 0.13 ± 8% perf-profile.self.cycles-pp.testcase
0.28 ± 7% -0.1 0.19 ± 11% perf-profile.self.cycles-pp.aa_file_perm
0.22 ± 24% -0.1 0.13 ± 8% perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
0.28 ± 8% -0.1 0.18 ± 10% perf-profile.self.cycles-pp.exit_to_user_mode_prepare
0.25 ± 9% -0.1 0.16 ± 10% perf-profile.self.cycles-pp._copy_from_user
0.26 ± 8% -0.1 0.17 ± 10% perf-profile.self.cycles-pp.rcu_all_qs
0.18 ± 9% -0.1 0.10 ± 10% perf-profile.self.cycles-pp.__init_waitqueue_head
0.15 ± 8% -0.1 0.07 ± 14% perf-profile.self.cycles-pp.syscall_exit_to_user_mode_prepare
0.14 ± 9% -0.1 0.08 ± 11% perf-profi[email protected]
0.12 ± 12% -0.1 0.07 ± 7% perf-profile.self.cycles-pp.__list_del_entry_valid
0.18 ± 12% -0.1 0.13 ± 8% perf-profile.self.cycles-pp.security_file_lock
0.08 ± 12% -0.0 0.03 ±100% perf-profile.self.cycles-pp.__list_add_valid
0.12 ± 9% -0.0 0.08 ± 11% perf-profile.self.cycles-pp.locks_get_lock_context
0.10 ± 10% -0.0 0.06 ± 11% perf-profile.self.cycles-pp.__might_fault
0.10 ± 10% -0.0 0.06 ± 7% perf-profile.self.cycles-pp.memset
0.10 ± 8% -0.0 0.06 ± 7% perf-profile.self.cycles-pp.security_file_fcntl
0.11 ± 8% -0.0 0.08 ± 8% perf-profile.self.cycles-pp.locks_delete_block
0.00 +0.1 0.06 ± 7% perf-profile.self.cycles-pp.obj_cgroup_uncharge
0.00 +0.2 0.16 ± 24% perf-profile.self.cycles-pp.mem_cgroup_from_task
0.00 +0.3 0.34 ± 9% perf-profile.self.cycles-pp.rcu_read_unlock_strict
0.00 +0.6 0.56 ± 9% perf-profile.self.cycles-pp.obj_cgroup_charge
0.00 +0.6 0.58 ± 10% perf-profile.self.cycles-pp.refill_obj_stock
3.14 ± 8% +1.3 4.46 ± 10% perf-profile.self.cycles-pp.kmem_cache_alloc
0.00 +1.7 1.65 ± 10% perf-profile.self.cycles-pp.mod_objcg_state
0.00 +2.2 2.17 ± 9% perf-profile.self.cycles-pp.get_obj_cgroup_from_current
0.00 +2.4 2.40 ± 35% perf-profile.self.cycles-pp.propagate_protected_usage
0.00 +4.7 4.70 ± 36% perf-profile.self.cycles-pp.page_counter_cancel
0.00 +5.9 5.92 ± 35% perf-profile.self.cycles-pp.page_counter_try_charge
will-it-scale.52.processes
7e+07 +-----------------------------------------------------------------+
| .+ + + + + |
6.5e+07 |+++ ++++++.++++ +++.+ +++++.+++++++.++++ ++.++ +++++.+++++++ |
| |
6e+07 |-+ |
| |
5.5e+07 |-+ |
| |
5e+07 |-+ O |
| OO O O OOO OO O O OOO O |
4.5e+07 |O+O OOOO O O O O OOO O OOOOO O O O O OO O OOOO O |
| O O O OO O O O|
4e+07 |-+ O O |
| |
3.5e+07 +-----------------------------------------------------------------+
will-it-scale.per_process_ops
1.3e+06 +-----------------------------------------------------------------+
|+++.+++++++.++++++++.+++++++.+++++++.+++++++.++++++++.+++++++ |
1.2e+06 |-+ |
| |
| |
1.1e+06 |-+ |
| |
1e+06 |-+ |
| O |
900000 |-+ O O O O O O O O O O |
|O O OOO O O OO O O O O O O OOOO O O O OO O O OOOO |
| O O O O OO OO O OO O|
800000 |-O O O O |
| |
700000 +-----------------------------------------------------------------+
will-it-scale.workload
7e+07 +-----------------------------------------------------------------+
| .+ + + + + |
6.5e+07 |+++ ++++++.++++ +++.+ +++++.+++++++.++++ ++.++ +++++.+++++++ |
| |
6e+07 |-+ |
| |
5.5e+07 |-+ |
| |
5e+07 |-+ O |
| OO O O OOO OO O O OOO O |
4.5e+07 |O+O OOOO O O O O OOO O OOOOO O O O O OO O OOOO O |
| O O O OO O O O|
4e+07 |-+ O O |
| |
3.5e+07 +-----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-icl-2sp2: 128 threads 2 sockets Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz with 256G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/process/50%/debian-10.4-x86_64-20200603.cgz/lkp-icl-2sp2/lock1/will-it-scale/0xd000280
commit:
b655843444 ("memcg: enable accounting for pollfd and select bits arrays")
0f12156dff ("memcg: enable accounting for file lock caches")
b655843444152c0a 0f12156dff2862ac54235fc7270
---------------- ---------------------------
%stddev %change %stddev
\ | \
1.785e+08 -40.9% 1.055e+08 ± 2% will-it-scale.64.processes
2788911 -40.9% 1649196 ± 2% will-it-scale.per_process_ops
1.785e+08 -40.9% 1.055e+08 ± 2% will-it-scale.workload
366571 ± 7% +15.2% 422206 ± 9% numa-numastat.node0.local_node
2807 +2.4% 2874 vmstat.system.cs
0.02 ± 2% -0.0 0.02 ± 2% mpstat.cpu.all.soft%
5.55 -2.4 3.15 ± 7% mpstat.cpu.all.usr%
5364 ± 2% -85.5% 778.67 ± 13% slabinfo.Acpi-Parse.active_objs
5364 ± 2% -85.5% 778.67 ± 13% slabinfo.Acpi-Parse.num_objs
49038 +5.1% 51541 proc-vmstat.nr_active_anon
55209 +5.1% 58037 proc-vmstat.nr_shmem
49038 +5.1% 51541 proc-vmstat.nr_zone_active_anon
183.83 ± 28% +320.7% 773.33 ±101% interrupts.154:IR-PCI-MSI.25690117-edge.eth0-TxRx-5
836.50 ± 10% +46.3% 1223 ± 30% interrupts.CPU103.CAL:Function_call_interrupts
1348 ± 83% +208.2% 4155 ± 50% interrupts.CPU2.RES:Rescheduling_interrupts
1089 ± 77% +158.6% 2818 ± 53% interrupts.CPU4.RES:Rescheduling_interrupts
183.83 ± 28% +320.7% 773.33 ±101% interrupts.CPU5.154:IR-PCI-MSI.25690117-edge.eth0-TxRx-5
806.17 ± 14% +31.2% 1058 ± 30% interrupts.CPU55.CAL:Function_call_interrupts
723.17 ± 74% +417.0% 3738 ± 82% interrupts.CPU55.RES:Rescheduling_interrupts
4847 ± 94% -97.1% 138.17 ± 19% interrupts.CPU86.NMI:Non-maskable_interrupts
4847 ± 94% -97.1% 138.17 ± 19% interrupts.CPU86.PMI:Performance_monitoring_interrupts
8069 ± 63% -80.7% 1561 ±200% interrupts.CPU99.NMI:Non-maskable_interrupts
8069 ± 63% -80.7% 1561 ±200% interrupts.CPU99.PMI:Performance_monitoring_interrupts
2345 -12.0% 2063 perf-sched.wait_and_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
2345 -12.0% 2063 perf-sched.wait_and_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read
262.68 ± 7% -42.8% 150.13 ± 12% perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
561.17 ± 9% -16.5% 468.33 ± 5% perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
183.33 ± 7% +43.2% 262.50 ± 8% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.kmem_cache_alloc.locks_alloc_lock.posix_lock_inode
113.33 ± 11% +78.7% 202.50 ± 4% perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
7031 -12.0% 6185 perf-sched.wait_and_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
7031 -12.0% 6185 perf-sched.wait_and_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read
7035 -12.0% 6189 perf-sched.wait_and_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
2345 -12.0% 2063 perf-sched.wait_time.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
2345 -12.0% 2063 perf-sched.wait_time.avg.ms.do_syslog.part.0.kmsg_read.vfs_read
2.50 ± 7% -47.6% 1.31 ± 18% perf-sched.wait_time.avg.ms.rcu_gp_kthread.kthread.ret_from_fork
262.68 ± 7% -42.8% 150.13 ± 12% perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
7031 -12.0% 6185 perf-sched.wait_time.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
7031 -12.0% 6185 perf-sched.wait_time.max.ms.do_syslog.part.0.kmsg_read.vfs_read
7035 -12.0% 6189 perf-sched.wait_time.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
0.04 ± 22% +1112.5% 0.50 ±128% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.posix_lock_inode.do_lock_file_wait.fcntl_setlk
0.02 ± 3% +1574.6% 0.27 ± 6% perf-stat.i.MPKI
7.076e+10 -6.5% 6.615e+10 ± 2% perf-stat.i.branch-instructions
0.05 +0.0 0.08 ± 5% perf-stat.i.branch-miss-rate%
35955313 ± 2% +45.6% 52354202 ± 5% perf-stat.i.branch-misses
10.31 +18.0 28.33 ± 8% perf-stat.i.cache-miss-rate%
492492 +4777.7% 24022051 ± 4% perf-stat.i.cache-misses
4620608 +1768.8% 86349315 ± 4% perf-stat.i.cache-references
2651 +2.5% 2716 perf-stat.i.context-switches
0.47 +8.5% 0.51 ± 2% perf-stat.i.cpi
425867 -98.4% 6992 ± 5% perf-stat.i.cycles-between-cache-misses
0.00 ± 4% +0.0 0.00 ± 11% perf-stat.i.dTLB-load-miss-rate%
1.03e+11 -3.8% 9.907e+10 ± 2% perf-stat.i.dTLB-loads
0.00 +0.0 0.00 ± 8% perf-stat.i.dTLB-store-miss-rate%
6.534e+10 -11.3% 5.796e+10 ± 2% perf-stat.i.dTLB-stores
3.516e+11 -7.2% 3.263e+11 ± 2% perf-stat.i.instructions
2.11 -7.1% 1.96 ± 2% perf-stat.i.ipc
37.50 ± 2% +2090.1% 821.35 ± 3% perf-stat.i.metric.K/sec
1867 -6.7% 1743 ± 2% perf-stat.i.metric.M/sec
104632 ± 2% +6707.0% 7122311 ± 3% perf-stat.i.node-load-misses
14377 ± 21% +3209.4% 475820 ± 29% perf-stat.i.node-loads
45.30 ± 16% +51.5 96.80 perf-stat.i.node-store-miss-rate%
48602 ± 14% +22917.3% 11186859 ± 6% perf-stat.i.node-store-misses
65851 ± 9% +432.6% 350742 ± 9% perf-stat.i.node-stores
0.01 +1902.5% 0.27 ± 6% perf-stat.overall.MPKI
0.05 +0.0 0.08 ± 6% perf-stat.overall.branch-miss-rate%
10.65 +17.3 27.92 ± 8% perf-stat.overall.cache-miss-rate%
0.47 +7.7% 0.51 ± 2% perf-stat.overall.cpi
336119 -97.9% 6940 ± 4% perf-stat.overall.cycles-between-cache-misses
0.00 ± 4% +0.0 0.00 ± 10% perf-stat.overall.dTLB-load-miss-rate%
0.00 +0.0 0.00 ± 4% perf-stat.overall.dTLB-store-miss-rate%
2.11 -7.1% 1.96 ± 2% perf-stat.overall.ipc
87.73 ± 2% +6.0 93.76 perf-stat.overall.node-load-miss-rate%
42.34 ± 13% +54.6 96.95 perf-stat.overall.node-store-miss-rate%
594415 +56.9% 932831 perf-stat.overall.path-length
7.05e+10 -6.5% 6.592e+10 ± 2% perf-stat.ps.branch-instructions
35867377 +45.6% 52233576 ± 5% perf-stat.ps.branch-misses
493933 +4747.8% 23944727 ± 4% perf-stat.ps.cache-misses
4639810 +1756.0% 86115077 ± 4% perf-stat.ps.cache-references
2653 +2.6% 2721 perf-stat.ps.context-switches
1.027e+11 -3.8% 9.872e+10 ± 2% perf-stat.ps.dTLB-loads
6.51e+10 -11.3% 5.775e+10 ± 2% perf-stat.ps.dTLB-stores
3.504e+11 -7.2% 3.252e+11 ± 2% perf-stat.ps.instructions
104869 ± 2% +6668.1% 7097622 ± 3% perf-stat.ps.node-load-misses
14698 ± 21% +3134.0% 475371 ± 29% perf-stat.ps.node-loads
48497 ± 14% +22887.1% 11148161 ± 6% perf-stat.ps.node-store-misses
65971 ± 9% +430.7% 350139 ± 8% perf-stat.ps.node-stores
1.061e+14 -7.2% 9.845e+13 ± 2% perf-stat.total.instructions
11738 ± 11% -32.5% 7926 ± 19% softirqs.CPU1.RCU
12768 ± 12% -39.4% 7741 ± 11% softirqs.CPU10.RCU
8853 ± 10% -33.9% 5851 ± 8% softirqs.CPU100.RCU
9695 ± 6% -40.1% 5811 ± 5% softirqs.CPU101.RCU
9277 ± 7% -38.0% 5752 ± 7% softirqs.CPU102.RCU
9585 ± 8% -33.5% 6376 ± 17% softirqs.CPU103.RCU
9838 ± 7% -40.5% 5857 ± 9% softirqs.CPU104.RCU
9315 ± 8% -40.6% 5529 ± 7% softirqs.CPU105.RCU
9661 ± 10% -40.5% 5749 ± 5% softirqs.CPU106.RCU
9531 ± 6% -35.6% 6138 ± 7% softirqs.CPU108.RCU
9472 ± 7% -34.4% 6210 ± 16% softirqs.CPU109.RCU
11502 ± 13% -41.5% 6726 ± 14% softirqs.CPU11.RCU
9003 ± 11% -34.3% 5916 ± 12% softirqs.CPU111.RCU
9739 ± 7% -36.5% 6187 ± 10% softirqs.CPU113.RCU
9453 ± 9% -34.2% 6220 ± 13% softirqs.CPU114.RCU
9891 ± 7% -31.3% 6798 ± 18% softirqs.CPU115.RCU
10008 ± 9% -39.1% 6090 ± 12% softirqs.CPU116.RCU
10132 ± 17% -41.7% 5912 ± 15% softirqs.CPU117.RCU
9765 ± 8% -37.5% 6103 ± 8% softirqs.CPU118.RCU
8960 ± 11% -33.9% 5923 ± 8% softirqs.CPU119.RCU
10910 ± 17% -34.0% 7197 ± 6% softirqs.CPU12.RCU
9577 ± 6% -37.6% 5973 ± 12% softirqs.CPU120.RCU
9565 ± 10% -42.9% 5457 ± 7% softirqs.CPU121.RCU
9137 ± 10% -38.7% 5604 ± 11% softirqs.CPU124.RCU
9418 ± 13% -34.3% 6186 ± 13% softirqs.CPU125.RCU
9151 ± 15% -37.2% 5745 ± 11% softirqs.CPU126.RCU
10800 ± 7% -34.7% 7057 ± 10% softirqs.CPU127.RCU
9815 ± 9% -29.1% 6958 ± 6% softirqs.CPU13.RCU
11463 ± 15% -39.2% 6967 ± 12% softirqs.CPU14.RCU
10757 ± 15% -37.6% 6716 ± 10% softirqs.CPU15.RCU
10809 ± 17% -38.5% 6648 ± 16% softirqs.CPU16.RCU
11169 ± 14% -42.4% 6435 ± 13% softirqs.CPU17.RCU
12215 ± 11% -40.1% 7319 ± 13% softirqs.CPU19.RCU
11500 ± 12% -39.2% 6996 ± 10% softirqs.CPU2.RCU
11027 ± 18% -32.7% 7416 ± 9% softirqs.CPU20.RCU
11155 ± 17% -39.5% 6745 ± 5% softirqs.CPU21.RCU
11083 ± 16% -32.8% 7451 ± 12% softirqs.CPU22.RCU
11841 ± 15% -44.9% 6523 ± 13% softirqs.CPU23.RCU
11415 ± 16% -38.7% 6997 ± 15% softirqs.CPU24.RCU
11325 ± 5% -37.7% 7061 ± 8% softirqs.CPU25.RCU
12111 ± 11% -44.5% 6715 ± 11% softirqs.CPU26.RCU
11627 ± 11% -43.1% 6613 ± 11% softirqs.CPU27.RCU
11113 ± 3% -40.1% 6658 ± 10% softirqs.CPU28.RCU
10333 ± 13% -33.2% 6907 ± 8% softirqs.CPU29.RCU
12191 ± 13% -42.5% 7009 ± 13% softirqs.CPU3.RCU
10265 ± 11% -31.7% 7012 ± 9% softirqs.CPU30.RCU
11440 ± 13% -39.6% 6910 ± 17% softirqs.CPU31.RCU
10389 ± 12% -33.8% 6874 ± 11% softirqs.CPU32.RCU
11224 ± 10% -37.3% 7032 ± 19% softirqs.CPU33.RCU
10886 ± 13% -32.8% 7317 ± 16% softirqs.CPU34.RCU
10588 ± 9% -32.7% 7125 ± 6% softirqs.CPU35.RCU
9719 ± 16% -31.8% 6625 ± 14% softirqs.CPU36.RCU
11099 ± 12% -33.2% 7414 ± 12% softirqs.CPU37.RCU
10338 ± 9% -25.6% 7690 ± 14% softirqs.CPU38.RCU
10238 ± 11% -27.1% 7461 ± 9% softirqs.CPU39.RCU
11660 ± 15% -35.5% 7516 ± 12% softirqs.CPU4.RCU
10552 ± 13% -31.6% 7220 ± 12% softirqs.CPU40.RCU
10552 ± 10% -31.1% 7273 ± 7% softirqs.CPU41.RCU
10103 ± 13% -31.8% 6891 ± 9% softirqs.CPU42.RCU
10974 ± 8% -36.9% 6921 ± 10% softirqs.CPU43.RCU
10730 ± 13% -37.2% 6739 ± 17% softirqs.CPU44.RCU
11205 ± 12% -35.4% 7233 ± 11% softirqs.CPU45.RCU
11136 ± 12% -39.4% 6751 ± 12% softirqs.CPU46.RCU
10649 ± 11% -33.6% 7073 ± 8% softirqs.CPU47.RCU
9988 ± 10% -36.5% 6346 ± 8% softirqs.CPU48.RCU
9801 ± 9% -36.9% 6180 ± 6% softirqs.CPU49.RCU
12102 ± 9% -37.5% 7563 ± 17% softirqs.CPU5.RCU
9884 ± 12% -36.2% 6303 ± 7% softirqs.CPU50.RCU
9347 ± 9% -34.8% 6096 ± 7% softirqs.CPU51.RCU
10164 ± 11% -35.3% 6575 ± 7% softirqs.CPU52.RCU
10518 ± 15% -37.2% 6609 ± 10% softirqs.CPU53.RCU
9857 ± 12% -32.5% 6655 ± 5% softirqs.CPU54.RCU
10795 ± 13% -44.0% 6047 ± 5% softirqs.CPU55.RCU
10397 ± 12% -38.9% 6350 ± 7% softirqs.CPU56.RCU
9513 ± 6% -30.5% 6615 ± 8% softirqs.CPU57.RCU
10400 ± 10% -37.6% 6492 ± 5% softirqs.CPU58.RCU
10434 ± 8% -39.1% 6351 ± 9% softirqs.CPU59.RCU
12055 ± 14% -32.8% 8102 ± 22% softirqs.CPU6.RCU
10580 ± 12% -40.1% 6340 ± 11% softirqs.CPU60.RCU
10099 ± 11% -41.4% 5915 ± 7% softirqs.CPU61.RCU
10021 ± 10% -35.8% 6436 ± 3% softirqs.CPU62.RCU
9856 ± 7% -31.3% 6774 ± 9% softirqs.CPU64.RCU
9963 ± 12% -36.0% 6372 ± 8% softirqs.CPU65.RCU
9664 ± 8% -30.1% 6752 ± 5% softirqs.CPU66.RCU
9673 ± 5% -33.2% 6463 ± 10% softirqs.CPU67.RCU
9910 ± 8% -36.4% 6307 ± 6% softirqs.CPU68.RCU
9589 ± 5% -34.2% 6310 ± 7% softirqs.CPU69.RCU
11733 ± 9% -42.0% 6809 ± 12% softirqs.CPU7.RCU
9603 ± 42% +105.6% 19745 ± 32% softirqs.CPU7.SCHED
9881 ± 9% -34.0% 6521 ± 7% softirqs.CPU70.RCU
9620 ± 6% -33.2% 6424 ± 2% softirqs.CPU71.RCU
35921 ± 13% -39.4% 21764 ± 30% softirqs.CPU71.SCHED
10659 ± 13% -38.8% 6524 ± 7% softirqs.CPU72.RCU
9917 ± 7% -38.9% 6055 ± 7% softirqs.CPU73.RCU
9572 ± 7% -40.0% 5741 ± 9% softirqs.CPU74.RCU
9683 ± 9% -36.2% 6176 ± 6% softirqs.CPU75.RCU
10099 ± 7% -38.4% 6220 ± 10% softirqs.CPU76.RCU
9726 ± 5% -28.6% 6946 ± 14% softirqs.CPU78.RCU
9463 ± 6% -37.1% 5956 ± 12% softirqs.CPU79.RCU
10789 ± 12% -34.1% 7105 ± 20% softirqs.CPU8.RCU
9917 ± 7% -35.1% 6439 ± 11% softirqs.CPU80.RCU
9821 ± 4% -34.5% 6430 ± 3% softirqs.CPU81.RCU
9938 ± 12% -38.6% 6098 ± 10% softirqs.CPU82.RCU
9838 ± 4% -35.2% 6370 ± 7% softirqs.CPU83.RCU
10401 ± 8% -40.1% 6229 ± 6% softirqs.CPU84.RCU
10051 ± 6% -36.4% 6396 ± 9% softirqs.CPU85.RCU
9792 ± 4% -35.3% 6339 ± 14% softirqs.CPU86.RCU
10059 ± 9% -36.3% 6408 ± 8% softirqs.CPU87.RCU
9972 ± 8% -39.2% 6059 ± 3% softirqs.CPU88.RCU
9447 ± 9% -30.8% 6534 ± 12% softirqs.CPU89.RCU
11511 ± 24% -37.0% 7246 ± 11% softirqs.CPU9.RCU
9563 ± 6% -30.9% 6612 ± 6% softirqs.CPU90.RCU
36244 ± 11% -29.7% 25496 ± 25% softirqs.CPU90.SCHED
9671 ± 8% -34.4% 6340 ± 7% softirqs.CPU91.RCU
9848 ± 7% -33.7% 6531 ± 5% softirqs.CPU92.RCU
9578 ± 8% -32.4% 6479 ± 5% softirqs.CPU93.RCU
9519 ± 5% -36.3% 6064 ± 7% softirqs.CPU94.RCU
9480 ± 9% -31.0% 6544 ± 12% softirqs.CPU95.RCU
9025 ± 8% -35.2% 5845 ± 5% softirqs.CPU96.RCU
9656 ± 19% -39.8% 5814 ± 8% softirqs.CPU97.RCU
10008 ± 7% -39.4% 6064 ± 9% softirqs.CPU98.RCU
9619 ± 6% -41.1% 5661 ± 3% softirqs.CPU99.RCU
1308899 ± 3% -36.2% 835466 ± 3% softirqs.RCU
47568 ± 2% +20.6% 57361 ± 4% softirqs.TIMER
10.02 ± 3% -4.9 5.08 ± 18% perf-profile.calltrace.cycles-pp.__entry_text_start.__libc_fcntl64
8.20 -3.8 4.38 ± 6% perf-profile.calltrace.cycles-pp._copy_from_user.do_fcntl.__x64_sys_fcntl.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.28 -2.7 2.58 ± 5% perf-profile.calltrace.cycles-pp.memset_erms.kmem_cache_alloc.locks_alloc_lock.posix_lock_inode.do_lock_file_wait
4.78 -2.3 2.48 ± 7% perf-profile.calltrace.cycles-pp.security_file_lock.do_lock_file_wait.fcntl_setlk.do_fcntl.__x64_sys_fcntl
3.68 -1.8 1.88 ± 7% perf-profile.calltrace.cycles-pp.common_file_perm.security_file_lock.do_lock_file_wait.fcntl_setlk.do_fcntl
3.36 -1.4 1.93 ± 6% perf-profile.calltrace.cycles-pp.copy_user_generic_unrolled._copy_from_user.do_fcntl.__x64_sys_fcntl.do_syscall_64
2.98 -1.2 1.75 ± 6% perf-profile.calltrace.cycles-pp.__fget_light.__x64_sys_fcntl.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fcntl64
2.51 -1.2 1.30 ± 7% perf-profile.calltrace.cycles-pp.locks_delete_lock_ctx.posix_lock_inode.do_lock_file_wait.fcntl_setlk.do_fcntl
2.49 -1.1 1.36 ± 6% perf-profile.calltrace.cycles-pp._raw_spin_lock.posix_lock_inode.do_lock_file_wait.fcntl_setlk.do_fcntl
2.02 -1.0 1.05 ± 6% perf-profile.calltrace.cycles-pp.locks_unlink_lock_ctx.locks_delete_lock_ctx.posix_lock_inode.do_lock_file_wait.fcntl_setlk
2.02 -1.0 1.06 ± 7% perf-profile.calltrace.cycles-pp.locks_insert_lock_ctx.posix_lock_inode.do_lock_file_wait.fcntl_setlk.do_fcntl
1.14 -0.8 0.37 ± 70% perf-profile.calltrace.cycles-pp.locks_release_private.posix_lock_inode.do_lock_file_wait.fcntl_setlk.do_fcntl
1.70 -0.7 0.98 ± 7% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string._copy_from_user.do_fcntl.__x64_sys_fcntl.do_syscall_64
1.24 -0.6 0.64 ± 7% perf-profile.calltrace.cycles-pp._raw_spin_lock.locks_insert_lock_ctx.posix_lock_inode.do_lock_file_wait.fcntl_setlk
2.15 -0.6 1.57 ± 8% perf-profile.calltrace.cycles-pp.memset_erms.kmem_cache_alloc.locks_alloc_lock.fcntl_setlk.do_fcntl
1.34 -0.6 0.77 ± 6% perf-profile.calltrace.cycles-pp.___might_sleep.kmem_cache_alloc.locks_alloc_lock.fcntl_setlk.do_fcntl
1.32 -0.6 0.75 ± 7% perf-profile.calltrace.cycles-pp.__might_fault._copy_from_user.do_fcntl.__x64_sys_fcntl.do_syscall_64
1.22 -0.5 0.68 ± 6% perf-profile.calltrace.cycles-pp._raw_spin_lock.locks_unlink_lock_ctx.locks_delete_lock_ctx.posix_lock_inode.do_lock_file_wait
1.21 -0.5 0.68 ± 7% perf-profile.calltrace.cycles-pp.aa_file_perm.common_file_perm.security_file_lock.do_lock_file_wait.fcntl_setlk
1.41 -0.5 0.90 ± 9% perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fcntl64
1.21 -0.5 0.70 ± 6% perf-profile.calltrace.cycles-pp.syscall_enter_from_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fcntl64
1.08 -0.5 0.61 ± 7% perf-profile.calltrace.cycles-pp.flock64_to_posix_lock.fcntl_setlk.do_fcntl.__x64_sys_fcntl.do_syscall_64
0.76 -0.4 0.35 ± 70% perf-profile.calltrace.cycles-pp.___might_sleep.kmem_cache_alloc.locks_alloc_lock.posix_lock_inode.do_lock_file_wait
1.27 -0.4 0.86 ± 5% perf-profile.calltrace.cycles-pp.__might_sleep.kmem_cache_alloc.locks_alloc_lock.posix_lock_inode.do_lock_file_wait
1.04 -0.4 0.66 ± 10% perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.__libc_fcntl64
0.76 -0.1 0.63 ± 5% perf-profile.calltrace.cycles-pp._raw_spin_lock.fcntl_setlk.do_fcntl.__x64_sys_fcntl.do_syscall_64
97.66 +0.4 98.07 perf-profile.calltrace.cycles-pp.__libc_fcntl64
0.00 +0.5 0.54 ± 4% perf-profile.calltrace.cycles-pp.refill_obj_stock.kmem_cache_free.fcntl_setlk.do_fcntl.__x64_sys_fcntl
0.00 +0.8 0.83 ± 7% perf-profile.calltrace.cycles-pp.mod_objcg_state.kmem_cache_free.fcntl_setlk.do_fcntl.__x64_sys_fcntl
0.00 +0.9 0.90 ± 7% perf-profile.calltrace.cycles-pp.mod_objcg_state.kmem_cache_alloc.locks_alloc_lock.fcntl_setlk.do_fcntl
0.00 +0.9 0.92 ± 5% perf-profile.calltrace.cycles-pp.refill_obj_stock.kmem_cache_free.posix_lock_inode.do_lock_file_wait.fcntl_setlk
0.00 +1.3 1.26 ± 7% perf-profile.calltrace.cycles-pp.mod_objcg_state.kmem_cache_free.posix_lock_inode.do_lock_file_wait.fcntl_setlk
2.15 +1.4 3.54 ± 14% perf-profile.calltrace.cycles-pp.locks_dispose_list.posix_lock_inode.do_lock_file_wait.fcntl_setlk.do_fcntl
0.00 +1.6 1.59 ± 11% perf-profile.calltrace.cycles-pp.get_obj_cgroup_from_current.kmem_cache_alloc.locks_alloc_lock.fcntl_setlk.do_fcntl
1.14 +1.8 2.90 ± 18% perf-profile.calltrace.cycles-pp.kmem_cache_free.locks_dispose_list.posix_lock_inode.do_lock_file_wait.fcntl_setlk
0.00 +1.8 1.83 ± 6% perf-profile.calltrace.cycles-pp.mod_objcg_state.kmem_cache_alloc.locks_alloc_lock.posix_lock_inode.do_lock_file_wait
0.00 +2.1 2.13 ± 29% perf-profile.calltrace.cycles-pp.propagate_protected_usage.page_counter_try_charge.obj_cgroup_charge_pages.obj_cgroup_charge.kmem_cache_alloc
0.00 +2.2 2.16 ± 29% perf-profile.calltrace.cycles-pp.page_counter_cancel.page_counter_uncharge.obj_cgroup_uncharge_pages.kmem_cache_free.fcntl_setlk
0.00 +2.4 2.42 ± 7% perf-profile.calltrace.cycles-pp.get_obj_cgroup_from_current.kmem_cache_alloc.locks_alloc_lock.posix_lock_inode.do_lock_file_wait
0.00 +2.6 2.60 ± 28% perf-profile.calltrace.cycles-pp.page_counter_uncharge.obj_cgroup_uncharge_pages.kmem_cache_free.fcntl_setlk.do_fcntl
0.00 +2.7 2.66 ± 27% perf-profile.calltrace.cycles-pp.obj_cgroup_uncharge_pages.kmem_cache_free.fcntl_setlk.do_fcntl.__x64_sys_fcntl
0.00 +3.0 3.04 ± 37% perf-profile.calltrace.cycles-pp.page_counter_cancel.page_counter_uncharge.obj_cgroup_uncharge_pages.kmem_cache_free.posix_lock_inode
2.86 ± 2% +3.6 6.42 ± 10% perf-profile.calltrace.cycles-pp.kmem_cache_free.fcntl_setlk.do_fcntl.__x64_sys_fcntl.do_syscall_64
0.00 +3.6 3.59 ± 33% perf-profile.calltrace.cycles-pp.obj_cgroup_charge_pages.obj_cgroup_charge.kmem_cache_alloc.locks_alloc_lock.fcntl_setlk
0.00 +3.6 3.65 ± 36% perf-profile.calltrace.cycles-pp.page_counter_uncharge.obj_cgroup_uncharge_pages.kmem_cache_free.posix_lock_inode.do_lock_file_wait
0.00 +3.7 3.72 ± 36% perf-profile.calltrace.cycles-pp.obj_cgroup_uncharge_pages.kmem_cache_free.posix_lock_inode.do_lock_file_wait.fcntl_setlk
0.00 +4.3 4.29 ± 27% perf-profile.calltrace.cycles-pp.obj_cgroup_charge.kmem_cache_alloc.locks_alloc_lock.fcntl_setlk.do_fcntl
3.76 +5.7 9.47 ± 12% perf-profile.calltrace.cycles-pp.kmem_cache_free.posix_lock_inode.do_lock_file_wait.fcntl_setlk.do_fcntl
8.70 +6.2 14.88 ± 6% perf-profile.calltrace.cycles-pp.locks_alloc_lock.fcntl_setlk.do_fcntl.__x64_sys_fcntl.do_syscall_64
7.40 +6.3 13.74 ± 7% perf-profile.calltrace.cycles-pp.kmem_cache_alloc.locks_alloc_lock.fcntl_setlk.do_fcntl.__x64_sys_fcntl
83.98 +6.8 90.74 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__libc_fcntl64
82.41 +7.4 89.82 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fcntl64
0.00 +7.8 7.75 ± 29% perf-profile.calltrace.cycles-pp.obj_cgroup_charge_pages.obj_cgroup_charge.kmem_cache_alloc.locks_alloc_lock.posix_lock_inode
46.22 +7.9 54.10 ± 3% perf-profile.calltrace.cycles-pp.do_lock_file_wait.fcntl_setlk.do_fcntl.__x64_sys_fcntl.do_syscall_64
78.68 +8.9 87.56 perf-profile.calltrace.cycles-pp.__x64_sys_fcntl.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fcntl64
0.00 +9.2 9.15 ± 23% perf-profile.calltrace.cycles-pp.obj_cgroup_charge.kmem_cache_alloc.locks_alloc_lock.posix_lock_inode.do_lock_file_wait
73.64 +11.0 84.60 perf-profile.calltrace.cycles-pp.do_fcntl.__x64_sys_fcntl.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fcntl64
0.00 +11.2 11.21 ± 25% perf-profile.calltrace.cycles-pp.page_counter_try_charge.obj_cgroup_charge_pages.obj_cgroup_charge.kmem_cache_alloc.locks_alloc_lock
38.04 +11.7 49.70 ± 4% perf-profile.calltrace.cycles-pp.posix_lock_inode.do_lock_file_wait.fcntl_setlk.do_fcntl.__x64_sys_fcntl
16.21 +11.8 28.04 ± 5% perf-profile.calltrace.cycles-pp.locks_alloc_lock.posix_lock_inode.do_lock_file_wait.fcntl_setlk.do_fcntl
13.60 +12.1 25.71 ± 6% perf-profile.calltrace.cycles-pp.kmem_cache_alloc.locks_alloc_lock.posix_lock_inode.do_lock_file_wait.fcntl_setlk
63.80 +15.2 78.96 perf-profile.calltrace.cycles-pp.fcntl_setlk.do_fcntl.__x64_sys_fcntl.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.76 -3.4 4.33 ± 6% perf-profile.children.cycles-pp.memset_erms
8.05 -3.4 4.62 ± 7% perf-profile.children.cycles-pp._copy_from_user
5.70 ± 2% -2.8 2.89 ± 18% perf-profile.children.cycles-pp.__entry_text_start
6.00 -2.5 3.47 ± 6% perf-profile.children.cycles-pp._raw_spin_lock
5.10 -2.4 2.66 ± 6% perf-profile.children.cycles-pp.security_file_lock
5.00 -2.2 2.75 ± 8% perf-profile.children.cycles-pp.syscall_return_via_sysret
3.90 -1.9 2.01 ± 7% perf-profile.children.cycles-pp.common_file_perm
3.94 -1.7 2.28 ± 6% perf-profile.children.cycles-pp.copy_user_generic_unrolled
3.74 -1.7 2.08 ± 5% perf-profile.children.cycles-pp.___might_sleep
3.95 -1.6 2.40 ± 6% perf-profile.children.cycles-pp.__might_sleep
2.72 -1.3 1.41 ± 6% perf-profile.children.cycles-pp.locks_delete_lock_ctx
2.58 -1.3 1.29 ± 5% perf-profile.children.cycles-pp.locks_release_private
3.10 -1.3 1.81 ± 6% perf-profile.children.cycles-pp.__fget_light
2.18 -1.0 1.14 ± 6% perf-profile.children.cycles-pp.locks_unlink_lock_ctx
2.18 -1.0 1.15 ± 7% perf-profile.children.cycles-pp.locks_insert_lock_ctx
2.03 -0.9 1.17 ± 7% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
2.19 -0.8 1.34 ± 5% perf-profile.children.cycles-pp.__cond_resched
1.54 -0.7 0.86 ± 7% perf-profile.children.cycles-pp.aa_file_perm
1.54 -0.7 0.87 ± 7% perf-profile.children.cycles-pp.__might_fault
1.68 -0.6 1.07 ± 9% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
1.32 -0.6 0.76 ± 6% perf-profile.children.cycles-pp.syscall_enter_from_user_mode
1.19 -0.5 0.67 ± 6% perf-profile.children.cycles-pp.flock64_to_posix_lock
1.10 -0.4 0.65 ± 5% perf-profile.children.cycles-pp.rcu_all_qs
0.81 -0.4 0.45 ± 7% perf-profile.children.cycles-pp.entry_SYSCALL_64_safe_stack
0.94 -0.3 0.60 ± 6% perf-profile.children.cycles-pp.__init_waitqueue_head
0.74 -0.3 0.41 ± 4% perf-profile.children.cycles-pp.locks_delete_block
0.75 -0.3 0.42 ± 5% perf-profile.children.cycles-pp.locks_get_lock_context
0.70 -0.3 0.39 ± 7% perf-profile.children.cycles-pp.locks_copy_lock
0.53 -0.3 0.23 ± 5% perf-profile.children.cycles-pp.__list_del_entry_valid
0.78 ± 2% -0.3 0.49 ± 9% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.61 -0.3 0.35 ± 8% perf-profile.children.cycles-pp.security_file_fcntl
0.60 -0.2 0.35 ± 11% perf-profile.children.cycles-pp.memset
0.53 -0.2 0.29 ± 6% perf-profile.children.cycles-pp.apparmor_file_lock
0.43 ± 2% -0.2 0.24 ± 7% perf-profile.children.cycles-pp.__list_add_valid
0.43 ± 2% -0.2 0.25 ± 7% perf-profile.children.cycles-pp.vfs_lock_file
0.50 -0.2 0.33 ± 8% perf-profile.children.cycles-pp.testcase
0.37 -0.2 0.21 ± 6% perf-profile.children.cycles-pp.locks_copy_conflock
0.32 ± 2% -0.1 0.18 ± 7% perf-profile.children.cycles-pp.__fdget_raw
0.36 -0.1 0.24 ± 7% perf-profile.children.cycles-pp.syscall_exit_to_user_mode_prepare
0.22 -0.1 0.12 ± 6% perf-profile.children.cycles-pp.locks_move_blocks
0.41 -0.1 0.32 ± 6% perf-profile.children.cycles-pp.should_failslab
0.20 ± 2% -0.1 0.13 ± 10% [email protected]
0.00 +0.1 0.08 ± 6% perf-profile.children.cycles-pp.get_mem_cgroup_from_objcg
0.00 +0.1 0.08 ± 11% perf-profile.children.cycles-pp.refill_stock
98.10 +0.1 98.23 perf-profile.children.cycles-pp.__libc_fcntl64
0.00 +0.3 0.30 ± 6% perf-profile.children.cycles-pp.obj_cgroup_uncharge
0.00 +0.5 0.53 ± 7% perf-profile.children.cycles-pp.mem_cgroup_from_task
2.41 +1.3 3.69 ± 14% perf-profile.children.cycles-pp.locks_dispose_list
0.00 +2.0 1.99 ± 5% perf-profile.children.cycles-pp.refill_obj_stock
0.32 ± 2% +2.1 2.40 ± 6% perf-profile.children.cycles-pp.rcu_read_unlock_strict
0.00 +3.4 3.39 ± 21% perf-profile.children.cycles-pp.propagate_protected_usage
0.00 +4.9 4.90 ± 8% perf-profile.children.cycles-pp.get_obj_cgroup_from_current
0.00 +5.6 5.60 ± 6% perf-profile.children.cycles-pp.mod_objcg_state
0.00 +5.9 5.94 ± 26% perf-profile.children.cycles-pp.page_counter_cancel
84.29 +6.7 90.98 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
0.00 +7.1 7.12 ± 24% perf-profile.children.cycles-pp.page_counter_uncharge
83.10 +7.2 90.30 perf-profile.children.cycles-pp.do_syscall_64
0.00 +7.3 7.27 ± 24% perf-profile.children.cycles-pp.obj_cgroup_uncharge_pages
46.87 +7.6 54.45 ± 3% perf-profile.children.cycles-pp.do_lock_file_wait
79.26 +8.6 87.90 perf-profile.children.cycles-pp.__x64_sys_fcntl
74.02 +10.8 84.84 perf-profile.children.cycles-pp.do_fcntl
39.59 +11.0 50.57 ± 4% perf-profile.children.cycles-pp.posix_lock_inode
0.00 +11.2 11.23 ± 25% perf-profile.children.cycles-pp.page_counter_try_charge
0.00 +11.4 11.37 ± 24% perf-profile.children.cycles-pp.obj_cgroup_charge_pages
8.04 +11.5 19.54 ± 6% perf-profile.children.cycles-pp.kmem_cache_free
0.00 +13.6 13.63 ± 19% perf-profile.children.cycles-pp.obj_cgroup_charge
64.58 +14.8 79.39 perf-profile.children.cycles-pp.fcntl_setlk
25.73 +17.7 43.45 ± 2% perf-profile.children.cycles-pp.locks_alloc_lock
22.71 +18.8 41.55 ± 3% perf-profile.children.cycles-pp.kmem_cache_alloc
7.42 -3.3 4.14 ± 6% perf-profile.self.cycles-pp.memset_erms
6.55 -3.2 3.35 ± 8% perf-profile.self.cycles-pp.__libc_fcntl64
5.72 -2.4 3.30 ± 6% perf-profile.self.cycles-pp._raw_spin_lock
5.14 -2.4 2.76 ± 5% perf-profile.self.cycles-pp.posix_lock_inode
5.00 -2.2 2.75 ± 8% perf-profile.self.cycles-pp.syscall_return_via_sysret
3.49 -1.7 1.77 ± 4% perf-profile.self.cycles-pp.fcntl_setlk
3.61 -1.5 2.09 ± 6% perf-profile.self.cycles-pp.copy_user_generic_unrolled
3.20 -1.4 1.77 ± 5% perf-profile.self.cycles-pp.___might_sleep
2.97 -1.2 1.74 ± 6% perf-profile.self.cycles-pp.__fget_light
2.35 -1.2 1.14 ± 7% perf-profile.self.cycles-pp.common_file_perm
3.02 -1.2 1.84 ± 6% perf-profile.self.cycles-pp.__might_sleep
2.25 -1.1 1.14 ± 5% perf-profile.self.cycles-pp.locks_release_private
2.38 -0.9 1.47 ± 8% perf-profile.self.cycles-pp.locks_alloc_lock
1.80 -0.8 1.04 ± 7% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
1.39 -0.7 0.71 ± 21% perf-profile.self.cycles-pp.__entry_text_start
1.61 -0.7 0.94 ± 7% perf-profile.self.cycles-pp.__x64_sys_fcntl
1.60 -0.6 0.95 ± 8% perf-profile.self.cycles-pp.do_fcntl
1.43 -0.6 0.79 ± 6% perf-profile.self.cycles-pp.do_lock_file_wait
1.40 -0.6 0.81 ± 7% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
1.31 -0.6 0.73 ± 7% perf-profile.self.cycles-pp.aa_file_perm
1.16 -0.5 0.67 ± 6% perf-profile.self.cycles-pp.syscall_enter_from_user_mode
1.08 -0.5 0.60 ± 6% perf-profile.self.cycles-pp.flock64_to_posix_lock
0.97 -0.4 0.56 ± 6% perf-profile.self.cycles-pp._copy_from_user
1.02 -0.4 0.61 ± 8% perf-profile.self.cycles-pp.do_syscall_64
1.04 -0.4 0.65 ± 4% perf-profile.self.cycles-pp.__cond_resched
0.81 -0.4 0.45 ± 7% perf-profile.self.cycles-pp.entry_SYSCALL_64_safe_stack
0.76 -0.4 0.42 ± 5% perf-profile.self.cycles-pp.security_file_lock
0.71 -0.3 0.38 ± 8% perf-profile.self.cycles-pp.locks_insert_lock_ctx
0.64 -0.3 0.32 ± 6% perf-profile.self.cycles-pp.locks_unlink_lock_ctx
0.75 ± 2% -0.3 0.44 ± 6% perf-profile.self.cycles-pp.rcu_all_qs
0.70 -0.3 0.42 ± 10% perf-profile.self.cycles-pp.exit_to_user_mode_prepare
0.58 ± 2% -0.3 0.32 ± 5% perf-profile.self.cycles-pp.locks_dispose_list
0.43 -0.3 0.18 ± 7% perf-profile.self.cycles-pp.__list_del_entry_valid
0.52 -0.2 0.29 ± 5% perf-profile.self.cycles-pp.locks_delete_block
0.53 -0.2 0.30 ± 5% perf-profile.self.cycles-pp.locks_get_lock_context
0.63 ± 2% -0.2 0.42 ± 5% perf-profile.self.cycles-pp.__init_waitqueue_head
0.43 -0.2 0.23 ± 9% perf-profile.self.cycles-pp.locks_delete_lock_ctx
0.56 ± 2% -0.2 0.36 ± 11% perf-profile.self.cycles-pp.syscall_exit_to_user_mode
0.43 -0.2 0.25 ± 9% perf-profile.self.cycles-pp.security_file_fcntl
0.32 ± 2% -0.2 0.17 ± 7% perf-profile.self.cycles-pp.__list_add_valid
0.33 -0.2 0.18 ± 9% perf-profile.self.cycles-pp.locks_copy_lock
0.33 -0.1 0.18 ± 7% perf-profile.self.cycles-pp.__might_fault
0.42 ± 2% -0.1 0.27 ± 9% perf-profile.self.cycles-pp.testcase
0.33 ± 2% -0.1 0.18 ± 5% perf-profile.self.cycles-pp.vfs_lock_file
0.32 -0.1 0.18 ± 7% perf-profile.self.cycles-pp.apparmor_file_lock
0.32 ± 2% -0.1 0.18 ± 6% perf-profile.self.cycles-pp.locks_copy_conflock
0.30 ± 3% -0.1 0.18 ± 10% perf-profile.self.cycles-pp.memset
0.26 ± 3% -0.1 0.18 ± 7% perf-profile.self.cycles-pp.syscall_exit_to_user_mode_prepare
0.17 ± 2% -0.1 0.09 ± 7% perf-profile.self.cycles-pp.locks_move_blocks
0.11 ± 4% -0.0 0.06 ± 6% perf-profile.self.cycles-pp.__fdget_raw
0.20 ± 3% -0.0 0.17 ± 6% perf-profile.self.cycles-pp.should_failslab
0.10 ± 4% -0.0 0.07 ± 14% [email protected]
0.00 +0.1 0.06 ± 6% perf-profile.self.cycles-pp.get_mem_cgroup_from_objcg
0.00 +0.1 0.08 ± 11% perf-profile.self.cycles-pp.refill_stock
0.00 +0.1 0.14 ± 10% perf-profile.self.cycles-pp.obj_cgroup_uncharge
0.00 +0.4 0.36 ± 7% perf-profile.self.cycles-pp.mem_cgroup_from_task
0.11 ± 3% +0.8 0.92 ± 5% perf-profile.self.cycles-pp.rcu_read_unlock_strict
0.00 +1.8 1.85 ± 5% perf-profile.self.cycles-pp.refill_obj_stock
0.00 +2.0 2.05 ± 7% perf-profile.self.cycles-pp.obj_cgroup_charge
9.35 +3.0 12.32 ± 4% perf-profile.self.cycles-pp.kmem_cache_alloc
0.00 +3.4 3.36 ± 21% perf-profile.self.cycles-pp.propagate_protected_usage
0.00 +4.0 4.01 ± 8% perf-profile.self.cycles-pp.get_obj_cgroup_from_current
0.00 +5.2 5.23 ± 7% perf-profile.self.cycles-pp.mod_objcg_state
0.00 +5.9 5.90 ± 26% perf-profile.self.cycles-pp.page_counter_cancel
0.00 +9.0 8.97 ± 25% perf-profile.self.cycles-pp.page_counter_try_charge
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation
Thanks,
Oliver Sang
[xfs] 6191cf3ad5: stress-ng.rename.ops_per_sec -73.5% regression
by kernel test robot
Greeting,
FYI, we noticed a -73.5% regression of stress-ng.rename.ops_per_sec due to commit:
commit: 6191cf3ad59fda5901160633fef8e41b064a5246 ("xfs: flush inodegc workqueue tasks before cancel")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: stress-ng
on test machine: 96 threads 2 sockets Ice Lake with 256G memory
with following parameters:
nr_threads: 10%
disk: 1HDD
testtime: 60s
fs: xfs
class: filesystem
test: rename
cpufreq_governor: performance
ucode: 0xb000280
In addition to that, the commit also has significant impact on the following tests:
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <oliver.sang(a)intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove ~/.lkp and /lkp dir to run from a clean state.
=========================================================================================
class/compiler/cpufreq_governor/disk/fs/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime/ucode:
filesystem/gcc-9/performance/1HDD/xfs/x86_64-rhel-8.3/10%/debian-10.4-x86_64-20200603.cgz/lkp-icl-2sp1/rename/stress-ng/60s/0xb000280
commit:
a8e422af69 ("xfs: remove unused xfs_ioctl32.h declarations")
6191cf3ad5 ("xfs: flush inodegc workqueue tasks before cancel")
a8e422af69613300 6191cf3ad59fda5901160633fef
---------------- ---------------------------
%stddev %change %stddev
\ | \
13809323 -73.5% 3655053 ± 2% stress-ng.rename.ops
230150 -73.5% 60915 ± 2% stress-ng.rename.ops_per_sec
68641 ± 6% -71.5% 19530 ± 18% stress-ng.time.involuntary_context_switches
762.40 +1.7% 775.00 stress-ng.time.percent_of_cpu_this_job_got
463.07 +3.3% 478.34 stress-ng.time.system_time
10.61 ± 5% -71.3% 3.04 ± 3% stress-ng.time.user_time
152587 ± 6% -72.6% 41861 ± 14% stress-ng.time.voluntary_context_switches
280117 ± 13% -22.7% 216667 ± 19% numa-numastat.node0.numa_hit
10272 ± 4% -48.7% 5273 ± 6% vmstat.system.cs
0.06 ± 21% -0.0 0.04 ± 4% mpstat.cpu.all.soft%
0.20 ± 3% -0.1 0.07 ± 2% mpstat.cpu.all.usr%
2703 ± 4% +5.6% 2854 ± 3% proc-vmstat.nr_active_anon
7988 ± 2% +2.5% 8185 proc-vmstat.nr_shmem
2703 ± 4% +5.6% 2854 ± 3% proc-vmstat.nr_zone_active_anon
522094 -10.7% 466068 proc-vmstat.numa_hit
436762 ± 2% -13.2% 379232 proc-vmstat.numa_local
4853 ± 6% +12.1% 5438 ± 3% proc-vmstat.pgactivate
525596 -10.4% 470724 proc-vmstat.pgalloc_normal
476799 -11.2% 423300 proc-vmstat.pgfree
61932 -2.6% 60328 proc-vmstat.pgpgout
3.60 ± 40% +102.9% 7.30 perf-stat.i.MPKI
4.774e+09 -71.1% 1.381e+09 perf-stat.i.branch-instructions
16114311 ± 32% -72.6% 4418605 ± 4% perf-stat.i.branch-misses
30.00 ± 24% +8.4 38.44 ± 4% perf-stat.i.cache-miss-rate%
20101026 ± 3% -8.7% 18358375 ± 3% perf-stat.i.cache-misses
73234568 ± 37% -34.7% 47830132 ± 2% perf-stat.i.cache-references
10261 ± 4% -50.5% 5079 ± 7% perf-stat.i.context-switches
1.29 ± 3% +227.7% 4.23 perf-stat.i.cpi
123.06 -8.1% 113.09 perf-stat.i.cpu-migrations
1474 ± 4% +9.8% 1619 ± 2% perf-stat.i.cycles-between-cache-misses
7.692e+11 ±199% -100.0% 202244 ± 46% perf-stat.i.dTLB-load-misses
5.182e+09 -70.1% 1.551e+09 perf-stat.i.dTLB-loads
107292 ± 95% -66.8% 35623 ± 23% perf-stat.i.dTLB-store-misses
2.564e+09 -76.9% 5.931e+08 ± 2% perf-stat.i.dTLB-stores
2.194e+10 -70.1% 6.557e+09 perf-stat.i.instructions
0.78 ± 2% -68.4% 0.25 perf-stat.i.ipc
130.67 -71.9% 36.71 perf-stat.i.metric.M/sec
93.21 +4.0 97.26 perf-stat.i.node-load-miss-rate%
3910237 ± 5% +52.1% 5948845 ± 3% perf-stat.i.node-load-misses
215305 ± 35% -69.9% 64873 ± 4% perf-stat.i.node-loads
55.52 ± 4% +14.4 69.90 perf-stat.i.node-store-miss-rate%
3175249 ± 9% +151.9% 7999504 ± 3% perf-stat.i.node-store-misses
2444045 ± 4% +32.3% 3234491 ± 2% perf-stat.i.node-stores
3.33 ± 36% +119.4% 7.29 perf-stat.overall.MPKI
30.29 ± 25% +8.1 38.42 ± 4% perf-stat.overall.cache-miss-rate%
1.28 ± 2% +232.8% 4.26 perf-stat.overall.cpi
1399 ± 5% +8.8% 1522 ± 3% perf-stat.overall.cycles-between-cache-misses
0.78 ± 2% -70.0% 0.23 perf-stat.overall.ipc
94.87 +4.0 98.92 perf-stat.overall.node-load-miss-rate%
56.43 ± 4% +14.8 71.21 perf-stat.overall.node-store-miss-rate%
4.699e+09 -71.1% 1.359e+09 perf-stat.ps.branch-instructions
15853475 ± 32% -72.6% 4344260 ± 4% perf-stat.ps.branch-misses
19790250 ± 3% -8.7% 18072625 ± 3% perf-stat.ps.cache-misses
72066501 ± 37% -34.7% 47079258 ± 2% perf-stat.ps.cache-references
10100 ± 4% -50.5% 4999 ± 7% perf-stat.ps.context-switches
120.98 -8.1% 111.24 perf-stat.ps.cpu-migrations
7.486e+11 ±199% -100.0% 198976 ± 46% perf-stat.ps.dTLB-load-misses
5.102e+09 -70.1% 1.526e+09 perf-stat.ps.dTLB-loads
105425 ± 95% -66.8% 34991 ± 23% perf-stat.ps.dTLB-store-misses
2.524e+09 -76.9% 5.837e+08 ± 2% perf-stat.ps.dTLB-stores
2.16e+10 -70.1% 6.454e+09 perf-stat.ps.instructions
3850372 ± 5% +52.1% 5857127 ± 3% perf-stat.ps.node-load-misses
211823 ± 35% -69.9% 63770 ± 4% perf-stat.ps.node-loads
3125462 ± 9% +152.0% 7875415 ± 3% perf-stat.ps.node-store-misses
2404750 ± 4% +32.4% 3183125 ± 2% perf-stat.ps.node-stores
1.364e+12 -70.1% 4.079e+11 perf-stat.total.instructions
18334 ± 40% -45.8% 9942 ± 27% softirqs.CPU0.RCU
25205 ± 59% -74.2% 6513 ± 35% softirqs.CPU1.RCU
11337 ± 24% -64.5% 4025 ± 16% softirqs.CPU10.RCU
3467 ±128% -93.0% 243.40 ± 45% softirqs.CPU11.NET_RX
9325 ± 16% -53.9% 4303 ± 31% softirqs.CPU12.RCU
18711 ± 69% -76.9% 4323 ± 42% softirqs.CPU14.RCU
11424 ± 24% -68.3% 3616 ± 23% softirqs.CPU16.RCU
10296 ± 10% -64.4% 3663 ± 15% softirqs.CPU17.RCU
16686 ± 80% -73.0% 4497 ± 48% softirqs.CPU18.RCU
22801 ± 52% -75.3% 5620 ± 51% softirqs.CPU19.RCU
19856 ± 66% -76.0% 4775 ± 32% softirqs.CPU2.RCU
17692 ± 63% -67.3% 5787 ± 51% softirqs.CPU23.RCU
10098 ± 11% -24.2% 7656 ± 19% softirqs.CPU24.SCHED
28140 ± 38% -79.2% 5848 ± 77% softirqs.CPU26.RCU
25221 ± 51% -76.7% 5889 ± 64% softirqs.CPU27.RCU
19503 ± 56% -61.0% 7614 ± 53% softirqs.CPU28.RCU
21536 ± 32% -85.6% 3093 ± 13% softirqs.CPU29.RCU
22529 ± 40% -70.4% 6667 ± 64% softirqs.CPU3.RCU
16623 ± 48% -81.0% 3155 ± 14% softirqs.CPU30.RCU
12194 ± 36% -72.9% 3299 ± 3% softirqs.CPU31.RCU
14779 ± 66% -68.1% 4710 ± 41% softirqs.CPU35.RCU
15014 ± 48% -78.2% 3276 ± 13% softirqs.CPU39.RCU
13379 ± 53% -73.9% 3493 ± 28% softirqs.CPU41.RCU
13192 ± 44% -70.0% 3964 ± 45% softirqs.CPU42.RCU
8842 ± 27% -55.8% 3907 ± 44% softirqs.CPU44.RCU
10412 ± 21% -61.1% 4054 ± 24% softirqs.CPU47.RCU
24935 ± 43% -67.6% 8068 ± 75% softirqs.CPU49.RCU
14355 ± 23% -63.9% 5179 ± 77% softirqs.CPU5.RCU
24979 ± 74% -83.7% 4082 ± 28% softirqs.CPU50.RCU
16596 ± 35% -72.4% 4583 ± 37% softirqs.CPU51.RCU
14924 ± 37% -69.3% 4587 ± 45% softirqs.CPU52.RCU
10496 ± 17% -67.7% 3391 ± 23% softirqs.CPU54.RCU
11523 ± 27% -63.5% 4210 ± 37% softirqs.CPU58.RCU
10856 ± 23% -69.6% 3300 ± 14% softirqs.CPU6.RCU
9567 ± 12% -55.3% 4278 ± 34% softirqs.CPU60.RCU
9623 ± 13% -65.3% 3339 ± 20% softirqs.CPU61.RCU
17635 ± 52% -76.6% 4123 ± 38% softirqs.CPU62.RCU
11383 ± 23% -67.1% 3748 ± 16% softirqs.CPU64.RCU
12070 ± 16% -66.9% 3989 ± 9% softirqs.CPU65.RCU
13676 ± 50% -66.3% 4608 ± 54% softirqs.CPU66.RCU
25554 ± 51% -80.7% 4932 ± 38% softirqs.CPU67.RCU
19115 ± 76% -68.3% 6051 ± 34% softirqs.CPU68.RCU
22577 ± 40% -74.8% 5681 ± 50% softirqs.CPU72.RCU
28142 ± 51% -78.7% 6003 ± 56% softirqs.CPU73.RCU
25500 ± 37% -86.6% 3423 ± 28% softirqs.CPU74.RCU
17326 ± 38% -73.1% 4669 ± 36% softirqs.CPU75.RCU
20427 ± 44% -76.4% 4825 ± 34% softirqs.CPU76.RCU
22440 ± 35% -86.0% 3148 ± 14% softirqs.CPU77.RCU
15102 ± 31% -79.1% 3155 ± 12% softirqs.CPU78.RCU
12795 ± 27% -75.0% 3204 ± 11% softirqs.CPU79.RCU
15211 ± 66% -75.0% 3805 ± 23% softirqs.CPU83.RCU
17019 ± 73% -81.4% 3162 ± 30% softirqs.CPU86.RCU
14440 ± 49% -78.2% 3154 ± 21% softirqs.CPU87.RCU
15354 ± 60% -74.2% 3957 ± 61% softirqs.CPU88.RCU
16994 ± 65% -79.9% 3411 ± 32% softirqs.CPU89.RCU
13048 ± 47% -69.2% 4016 ± 59% softirqs.CPU90.RCU
13869 ± 57% -75.8% 3360 ± 29% softirqs.CPU91.RCU
11657 ± 57% -71.4% 3328 ± 32% softirqs.CPU92.RCU
9599 ± 20% -56.5% 4175 ± 43% softirqs.CPU93.RCU
9690 ± 12% -69.3% 2974 ± 15% softirqs.CPU94.RCU
9935 ± 13% -56.1% 4358 ± 48% softirqs.CPU95.RCU
1526333 ± 5% -67.9% 489535 ± 7% softirqs.RCU
59.46 ± 3% -56.6 2.84 ± 4% perf-profile.calltrace.cycles-pp.rename
59.31 ± 3% -56.5 2.80 ± 4% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.rename
59.28 ± 3% -56.5 2.80 ± 4% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.rename
59.20 ± 3% -56.4 2.78 ± 4% perf-profile.calltrace.cycles-pp.__x64_sys_rename.do_syscall_64.entry_SYSCALL_64_after_hwframe.rename
58.75 ± 3% -56.1 2.66 ± 4% perf-profile.calltrace.cycles-pp.do_renameat2.__x64_sys_rename.do_syscall_64.entry_SYSCALL_64_after_hwframe.rename
47.67 ± 3% -47.7 0.00 perf-profile.calltrace.cycles-pp.lock_rename.do_renameat2.__x64_sys_rename.do_syscall_64.entry_SYSCALL_64_after_hwframe
46.96 ± 3% -47.0 0.00 perf-profile.calltrace.cycles-pp.__mutex_lock.lock_rename.do_renameat2.__x64_sys_rename.do_syscall_64
36.08 ± 3% -36.1 0.00 perf-profile.calltrace.cycles-pp.osq_lock.__mutex_lock.lock_rename.do_renameat2.__x64_sys_rename
11.08 ± 9% -11.1 0.00 perf-profile.calltrace.cycles-pp.xfs_inodegc_flush.xfs_fs_statfs.statfs_by_dentry.vfs_statfs.user_statfs
9.71 ± 2% -9.7 0.00 perf-profile.calltrace.cycles-pp.mutex_spin_on_owner.__mutex_lock.lock_rename.do_renameat2.__x64_sys_rename
11.05 ± 4% -8.6 2.43 ± 4% perf-profile.calltrace.cycles-pp.__percpu_counter_sum.xfs_fs_statfs.statfs_by_dentry.vfs_statfs.user_statfs
6.60 ± 15% -6.6 0.00 perf-profile.calltrace.cycles-pp.__flush_work.xfs_inodegc_flush.xfs_fs_statfs.statfs_by_dentry.vfs_statfs
7.71 ± 2% -5.9 1.86 ± 4% perf-profile.calltrace.cycles-pp.vfs_rename.do_renameat2.__x64_sys_rename.do_syscall_64.entry_SYSCALL_64_after_hwframe
6.47 ± 2% -4.9 1.53 ± 4% perf-profile.calltrace.cycles-pp.xfs_vn_rename.vfs_rename.do_renameat2.__x64_sys_rename.do_syscall_64
6.43 ± 2% -4.9 1.53 ± 4% perf-profile.calltrace.cycles-pp.xfs_rename.xfs_vn_rename.vfs_rename.do_renameat2.__x64_sys_rename
3.87 ± 2% -2.9 0.97 ± 23% perf-profile.calltrace.cycles-pp.cpumask_next.__percpu_counter_sum.xfs_fs_statfs.statfs_by_dentry.vfs_statfs
2.49 ± 3% -1.9 0.62 ± 3% perf-profile.calltrace.cycles-pp.__xfs_trans_commit.xfs_rename.xfs_vn_rename.vfs_rename.do_renameat2
2.28 ± 2% -1.7 0.59 ± 4% perf-profile.calltrace.cycles-pp.xlog_cil_commit.__xfs_trans_commit.xfs_rename.xfs_vn_rename.vfs_rename
0.00 +9.1 9.15 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.flush_workqueue_prep_pwqs.flush_workqueue.xfs_fs_statfs.statfs_by_dentry
0.00 +10.1 10.14 ± 2% perf-profile.calltrace.cycles-pp.mutex_spin_on_owner.__mutex_lock.flush_workqueue.xfs_fs_statfs.statfs_by_dentry
0.00 +10.2 10.21 ± 2% perf-profile.calltrace.cycles-pp.flush_workqueue_prep_pwqs.flush_workqueue.xfs_fs_statfs.statfs_by_dentry.vfs_statfs
13.53 ± 2% +27.9 41.41 perf-profile.calltrace.cycles-pp.statvfs64
13.47 ± 2% +27.9 41.39 perf-profile.calltrace.cycles-pp.__statfs.statvfs64
13.44 ± 2% +27.9 41.38 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__statfs.statvfs64
13.44 ± 2% +27.9 41.38 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__statfs.statvfs64
13.41 ± 2% +28.0 41.37 perf-profile.calltrace.cycles-pp.__do_sys_statfs.do_syscall_64.entry_SYSCALL_64_after_hwframe.__statfs.statvfs64
13.41 +30.4 43.83 perf-profile.calltrace.cycles-pp.__statfs
13.21 +30.6 43.78 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__statfs
13.18 +30.6 43.77 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__statfs
13.09 +30.6 43.74 perf-profile.calltrace.cycles-pp.__do_sys_statfs.do_syscall_64.entry_SYSCALL_64_after_hwframe.__statfs
26.34 +58.7 85.07 perf-profile.calltrace.cycles-pp.user_statfs.__do_sys_statfs.do_syscall_64.entry_SYSCALL_64_after_hwframe.__statfs
0.00 +59.8 59.78 perf-profile.calltrace.cycles-pp.osq_lock.__mutex_lock.flush_workqueue.xfs_fs_statfs.statfs_by_dentry
23.44 ± 2% +61.0 84.49 perf-profile.calltrace.cycles-pp.vfs_statfs.user_statfs.__do_sys_statfs.do_syscall_64.entry_SYSCALL_64_after_hwframe
23.40 ± 2% +61.1 84.47 perf-profile.calltrace.cycles-pp.statfs_by_dentry.vfs_statfs.user_statfs.__do_sys_statfs.do_syscall_64
23.25 ± 2% +61.2 84.44 perf-profile.calltrace.cycles-pp.xfs_fs_statfs.statfs_by_dentry.vfs_statfs.user_statfs.__do_sys_statfs
0.00 +70.5 70.49 perf-profile.calltrace.cycles-pp.__mutex_lock.flush_workqueue.xfs_fs_statfs.statfs_by_dentry.vfs_statfs
0.00 +81.1 81.11 perf-profile.calltrace.cycles-pp.flush_workqueue.xfs_fs_statfs.statfs_by_dentry.vfs_statfs.user_statfs
59.50 ± 3% -56.6 2.85 ± 4% perf-profile.children.cycles-pp.rename
59.21 ± 3% -56.4 2.78 ± 4% perf-profile.children.cycles-pp.__x64_sys_rename
58.78 ± 3% -56.1 2.67 ± 4% perf-profile.children.cycles-pp.do_renameat2
47.67 ± 3% -47.6 0.10 ± 21% perf-profile.children.cycles-pp.lock_rename
11.20 ± 9% -10.5 0.75 ± 4% perf-profile.children.cycles-pp.xfs_inodegc_flush
11.50 ± 4% -9.0 2.55 ± 4% perf-profile.children.cycles-pp.__percpu_counter_sum
7.15 ± 14% -7.2 0.00 perf-profile.children.cycles-pp.__flush_work
7.73 ± 2% -5.9 1.87 ± 4% perf-profile.children.cycles-pp.vfs_rename
7.34 -5.6 1.75 ± 4% perf-profile.children.cycles-pp.cpumask_next
6.47 ± 2% -4.9 1.54 ± 4% perf-profile.children.cycles-pp.xfs_vn_rename
6.45 ± 2% -4.9 1.54 ± 4% perf-profile.children.cycles-pp.xfs_rename
4.65 -3.5 1.12 ± 3% perf-profile.children.cycles-pp._find_next_bit
2.71 ± 2% -2.0 0.75 ± 4% perf-profile.children.cycles-pp.xfs_inodegc_queue_all
2.51 ± 2% -1.9 0.63 ± 4% perf-profile.children.cycles-pp.__xfs_trans_commit
2.12 ± 7% -1.7 0.39 ± 4% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
2.31 ± 2% -1.7 0.61 ± 4% perf-profile.children.cycles-pp.xlog_cil_commit
2.08 ± 3% -1.7 0.41 perf-profile.children.cycles-pp._raw_spin_lock
1.78 ± 6% -1.5 0.30 ± 3% perf-profile.children.cycles-pp.user_path_at_empty
1.40 ± 3% -1.3 0.10 ± 9% perf-profile.children.cycles-pp.__might_resched
1.66 ± 5% -1.3 0.39 ± 10% perf-profile.children.cycles-pp.dput
1.33 ± 7% -1.1 0.18 ± 4% perf-profile.children.cycles-pp.filename_lookup
1.44 ± 4% -1.1 0.32 ± 7% perf-profile.children.cycles-pp.xfs_trans_alloc
1.29 ± 7% -1.1 0.17 ± 4% perf-profile.children.cycles-pp.path_lookupat
1.35 ± 4% -1.1 0.29 ± 3% perf-profile.children.cycles-pp.__lookup_hash
1.28 ± 3% -1.0 0.29 ± 7% perf-profile.children.cycles-pp.xfs_trans_reserve
0.99 ± 4% -1.0 0.04 ± 50% perf-profile.children.cycles-pp.__cond_resched
1.01 ± 9% -0.9 0.10 ± 7% perf-profile.children.cycles-pp.complete_walk
1.16 ± 6% -0.9 0.27 ± 12% perf-profile.children.cycles-pp.lockref_put_return
0.97 ± 10% -0.9 0.10 ± 10% perf-profile.children.cycles-pp.try_to_unlazy
0.92 ± 10% -0.8 0.08 ± 9% perf-profile.children.cycles-pp.__legitimize_path
0.90 ± 4% -0.8 0.07 ± 9% perf-profile.children.cycles-pp.__might_sleep
1.09 ± 5% -0.8 0.27 ± 13% perf-profile.children.cycles-pp.path_put
0.97 ± 6% -0.8 0.19 ± 9% perf-profile.children.cycles-pp.xfs_log_reserve
0.83 ± 11% -0.8 0.05 ± 9% perf-profile.children.cycles-pp.lockref_get_not_dead
0.98 ± 5% -0.7 0.28 ± 4% perf-profile.children.cycles-pp.xlog_cil_insert_items
0.88 ± 3% -0.6 0.23 ± 5% perf-profile.children.cycles-pp.filename_parentat
0.83 ± 2% -0.6 0.22 ± 6% perf-profile.children.cycles-pp.path_parentat
0.84 -0.6 0.23 ± 4% perf-profile.children.cycles-pp.xfs_trans_log_inode
0.80 -0.6 0.22 ± 6% perf-profile.children.cycles-pp.getname_flags
0.66 ± 2% -0.5 0.15 ± 7% perf-profile.children.cycles-pp.kmem_cache_alloc
0.70 ± 2% -0.5 0.19 ± 8% perf-profile.children.cycles-pp.xfs_dir_createname
0.65 ± 3% -0.5 0.17 ± 13% perf-profile.children.cycles-pp.__vsnprintf_chk
0.59 ± 8% -0.5 0.11 ± 7% perf-profile.children.cycles-pp.xfs_vn_lookup
0.60 ± 4% -0.4 0.17 ± 4% perf-profile.children.cycles-pp.link_path_walk
0.54 ± 2% -0.4 0.15 ± 7% perf-profile.children.cycles-pp.strncpy_from_user
0.54 ± 4% -0.4 0.15 ± 10% perf-profile.children.cycles-pp.xfs_dir2_sf_addname
0.53 ± 6% -0.4 0.14 ± 6% perf-profile.children.cycles-pp.d_move
0.53 ± 2% -0.4 0.16 ± 4% perf-profile.children.cycles-pp.xfs_log_ticket_ungrant
0.49 ± 5% -0.4 0.12 ± 11% perf-profile.children.cycles-pp.vfprintf
1.22 ± 38% -0.3 0.87 ± 3% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.47 ± 5% -0.3 0.14 ± 4% perf-profile.children.cycles-pp.xfs_inode_item_format
0.42 ± 4% -0.3 0.10 ± 11% perf-profile.children.cycles-pp.path_init
0.76 ± 11% -0.3 0.46 ± 5% perf-profile.children.cycles-pp.__softirqentry_text_start
0.40 ± 4% -0.3 0.10 ± 11% perf-profile.children.cycles-pp.lookup_dcache
0.39 ± 4% -0.3 0.10 ± 9% perf-profile.children.cycles-pp.d_lookup
0.39 ± 12% -0.3 0.10 ± 8% perf-profile.children.cycles-pp.xfs_mod_fdblocks
0.36 ± 13% -0.3 0.09 ± 13% perf-profile.children.cycles-pp.xfs_lookup
0.35 ± 3% -0.3 0.08 ± 5% perf-profile.children.cycles-pp.d_alloc
0.35 ± 13% -0.3 0.09 ± 13% perf-profile.children.cycles-pp.xfs_dir_lookup
0.76 ± 19% -0.3 0.50 ± 7% perf-profile.children.cycles-pp.irq_exit_rcu
0.34 ± 6% -0.3 0.09 perf-profile.children.cycles-pp.xfs_dir_removename
0.72 ± 43% -0.2 0.48 ± 4% perf-profile.children.cycles-pp.tick_sched_handle
0.32 ± 5% -0.2 0.08 ± 12% perf-profile.children.cycles-pp._IO_default_xsputn
0.31 ± 10% -0.2 0.08 ± 7% perf-profile.children.cycles-pp.rcu_core
0.28 ± 3% -0.2 0.05 perf-profile.children.cycles-pp.__d_alloc
0.31 ± 6% -0.2 0.08 ± 5% perf-profile.children.cycles-pp.inode_permission
0.69 ± 40% -0.2 0.47 ± 3% perf-profile.children.cycles-pp.update_process_times
0.31 ± 6% -0.2 0.10 ± 9% perf-profile.children.cycles-pp.xlog_grant_add_space
0.28 ± 13% -0.2 0.07 ± 10% perf-profile.children.cycles-pp.down_read
0.24 ± 7% -0.2 0.07 ± 11% perf-profile.children.cycles-pp.__d_move
0.29 ± 10% -0.2 0.12 ± 8% perf-profile.children.cycles-pp.kthread
0.21 ± 7% -0.2 0.05 ± 50% perf-profile.children.cycles-pp.rcu_do_batch
0.29 ± 10% -0.2 0.13 ± 6% perf-profile.children.cycles-pp.ret_from_fork
0.19 ± 6% -0.2 0.03 ± 81% perf-profile.children.cycles-pp.xlog_space_left
0.19 ± 7% -0.2 0.03 ± 82% perf-profile.children.cycles-pp.xlog_grant_push_ail
0.23 ± 6% -0.2 0.07 ± 7% perf-profile.children.cycles-pp.kmem_cache_free
0.21 ± 4% -0.2 0.05 ± 9% perf-profile.children.cycles-pp.__kmalloc
0.19 ± 6% -0.2 0.03 ± 82% perf-profile.children.cycles-pp.xlog_grant_push_threshold
0.22 ± 4% -0.2 0.06 ± 12% perf-profile.children.cycles-pp.__check_object_size
0.21 ± 4% -0.2 0.06 ± 6% perf-profile.children.cycles-pp.kmem_alloc
0.21 ± 9% -0.2 0.06 ± 6% perf-profile.children.cycles-pp.xfs_dir2_sf_removename
0.19 ± 6% -0.1 0.05 perf-profile.children.cycles-pp.make_kuid
0.20 ± 7% -0.1 0.05 ± 9% perf-profile.children.cycles-pp.map_id_range_down
0.19 ± 6% -0.1 0.05 ± 9% perf-profile.children.cycles-pp.xfs_lock_inodes
0.29 ± 7% -0.1 0.15 ± 12% perf-profile.children.cycles-pp.mutex_unlock
0.24 ± 8% -0.1 0.10 ± 9% perf-profile.children.cycles-pp.__d_lookup
0.17 ± 7% -0.1 0.03 ± 81% perf-profile.children.cycles-pp.__entry_text_start
0.20 ± 5% -0.1 0.07 ± 12% perf-profile.children.cycles-pp.__schedule
0.19 ± 7% -0.1 0.05 ± 7% perf-profile.children.cycles-pp.generic_permission
0.18 ± 9% -0.1 0.04 ± 51% perf-profile.children.cycles-pp.schedule
0.18 ± 6% -0.1 0.05 ± 52% perf-profile.children.cycles-pp.fsnotify_get_cookie
0.35 ± 4% -0.1 0.23 ± 6% perf-profile.children.cycles-pp.mutex_lock
0.17 ± 6% -0.1 0.05 perf-profile.children.cycles-pp.syscall_return_via_sysret
0.40 ± 24% -0.1 0.29 ± 3% perf-profile.children.cycles-pp.scheduler_tick
0.23 ± 9% -0.1 0.12 ± 8% perf-profile.children.cycles-pp.load_balance
0.20 ± 5% -0.1 0.10 ± 7% perf-profile.children.cycles-pp.find_busiest_group
0.19 ± 5% -0.1 0.10 ± 7% perf-profile.children.cycles-pp.update_sd_lb_stats
0.16 ± 34% -0.1 0.09 ± 11% perf-profile.children.cycles-pp.percpu_counter_add_batch
0.12 ± 61% -0.1 0.05 ± 52% perf-profile.children.cycles-pp.rcu_sched_clock_irq
0.17 ± 11% -0.1 0.10 ± 11% perf-profile.children.cycles-pp.osq_unlock
0.13 ± 18% -0.1 0.06 ± 56% perf-profile.children.cycles-pp.start_kernel
0.10 ± 28% -0.1 0.04 ± 50% perf-profile.children.cycles-pp.process_one_work
0.12 ± 26% -0.1 0.07 ± 12% perf-profile.children.cycles-pp.worker_thread
0.12 ± 17% -0.0 0.07 ± 6% perf-profile.children.cycles-pp.update_rq_clock
0.07 ± 31% -0.0 0.03 ± 81% perf-profile.children.cycles-pp.update_blocked_averages
0.07 ± 37% -0.0 0.04 ± 50% perf-profile.children.cycles-pp.run_rebalance_domains
0.00 +0.1 0.08 ± 15% perf-profile.children.cycles-pp._IO_setb
9.72 ± 2% +0.4 10.16 perf-profile.children.cycles-pp.mutex_spin_on_owner
86.17 ± 2% +2.0 88.15 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
86.13 ± 2% +2.0 88.13 perf-profile.children.cycles-pp.do_syscall_64
2.35 ± 40% +6.9 9.23 ± 2% perf-profile.children.cycles-pp._raw_spin_lock_irq
0.00 +10.3 10.26 ± 2% perf-profile.children.cycles-pp.flush_workqueue_prep_pwqs
46.97 ± 3% +23.5 70.52 perf-profile.children.cycles-pp.__mutex_lock
36.10 ± 3% +23.7 59.81 perf-profile.children.cycles-pp.osq_lock
13.54 ± 2% +27.9 41.41 perf-profile.children.cycles-pp.statvfs64
26.95 +58.3 85.24 perf-profile.children.cycles-pp.__statfs
26.51 +58.6 85.12 perf-profile.children.cycles-pp.__do_sys_statfs
26.34 +58.7 85.07 perf-profile.children.cycles-pp.user_statfs
23.44 ± 2% +61.0 84.49 perf-profile.children.cycles-pp.vfs_statfs
23.40 ± 2% +61.1 84.47 perf-profile.children.cycles-pp.statfs_by_dentry
23.26 ± 2% +61.2 84.44 perf-profile.children.cycles-pp.xfs_fs_statfs
0.00 +81.1 81.12 perf-profile.children.cycles-pp.flush_workqueue
5.07 ± 6% -4.0 1.04 ± 4% perf-profile.self.cycles-pp.__percpu_counter_sum
3.96 -3.0 0.96 ± 3% perf-profile.self.cycles-pp._find_next_bit
2.75 ± 2% -2.1 0.65 ± 6% perf-profile.self.cycles-pp.cpumask_next
2.04 ± 3% -1.6 0.40 perf-profile.self.cycles-pp._raw_spin_lock
1.28 ± 3% -1.2 0.10 ± 14% perf-profile.self.cycles-pp.__might_resched
1.44 ± 6% -1.1 0.38 ± 4% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
1.15 ± 6% -0.9 0.27 ± 12% perf-profile.self.cycles-pp.lockref_put_return
1.21 ± 2% -0.9 0.34 ± 5% perf-profile.self.cycles-pp.xfs_inodegc_queue_all
0.82 ± 11% -0.8 0.05 ± 9% perf-profile.self.cycles-pp.lockref_get_not_dead
0.73 ± 5% -0.7 0.06 ± 6% perf-profile.self.cycles-pp.__might_sleep
0.62 ± 4% -0.4 0.17 ± 8% perf-profile.self.cycles-pp.xfs_trans_log_inode
0.86 ± 2% -0.4 0.43 ± 3% perf-profile.self.cycles-pp.__mutex_lock
0.52 ± 2% -0.4 0.16 ± 4% perf-profile.self.cycles-pp.xfs_log_ticket_ungrant
0.47 ± 5% -0.4 0.11 ± 10% perf-profile.self.cycles-pp.vfprintf
0.41 ± 4% -0.3 0.10 ± 10% perf-profile.self.cycles-pp.path_init
0.30 ± 6% -0.2 0.09 ± 8% perf-profile.self.cycles-pp.xfs_inode_item_format
0.31 ± 6% -0.2 0.10 ± 9% perf-profile.self.cycles-pp.xlog_grant_add_space
0.28 ± 6% -0.2 0.08 ± 12% perf-profile.self.cycles-pp._IO_default_xsputn
0.26 ± 2% -0.2 0.07 ± 15% perf-profile.self.cycles-pp.strncpy_from_user
0.24 ± 14% -0.2 0.06 ± 12% perf-profile.self.cycles-pp.down_read
0.23 ± 3% -0.2 0.06 ± 12% perf-profile.self.cycles-pp.kmem_cache_alloc
0.22 ± 3% -0.2 0.06 ± 15% perf-profile.self.cycles-pp.link_path_walk
0.19 ± 5% -0.2 0.03 ± 81% perf-profile.self.cycles-pp.xlog_space_left
0.29 ± 6% -0.1 0.15 ± 12% perf-profile.self.cycles-pp.mutex_unlock
0.18 ± 5% -0.1 0.05 ± 52% perf-profile.self.cycles-pp.fsnotify_get_cookie
0.18 ± 6% -0.1 0.05 perf-profile.self.cycles-pp.map_id_range_down
0.22 ± 8% -0.1 0.09 ± 12% perf-profile.self.cycles-pp.__d_lookup
0.17 ± 6% -0.1 0.05 perf-profile.self.cycles-pp.syscall_return_via_sysret
0.33 ± 4% -0.1 0.21 ± 4% perf-profile.self.cycles-pp.mutex_lock
0.17 ± 11% -0.1 0.10 ± 11% perf-profile.self.cycles-pp.osq_unlock
0.14 ± 7% -0.1 0.07 ± 9% perf-profile.self.cycles-pp.update_sd_lb_stats
0.15 ± 34% -0.1 0.09 ± 11% perf-profile.self.cycles-pp.percpu_counter_add_batch
0.13 ± 8% -0.0 0.11 ± 8% perf-profile.self.cycles-pp.native_sched_clock
0.00 +0.1 0.08 ± 12% perf-profile.self.cycles-pp._IO_setb
9.67 ± 2% +0.5 10.13 ± 2% perf-profile.self.cycles-pp.mutex_spin_on_owner
0.00 +1.1 1.07 ± 4% perf-profile.self.cycles-pp.flush_workqueue_prep_pwqs
2.33 ± 40% +6.9 9.18 ± 2% perf-profile.self.cycles-pp._raw_spin_lock_irq
35.94 ± 3% +23.7 59.65 perf-profile.self.cycles-pp.osq_lock
Thanks,
Oliver Sang
[PATCH v2 0/4] Fix softlockup when adding inotify watch
by Stephen Brennan
Hello,
I'd like to request re-testing this series on LKP, before posting my v2
more broadly. The 0-day bot detected a warning on my v1 posting[1]. I
believe v2 should address the warning but I'm still working on
reproducing it via the information included in the email. According to
the LKP website [2] it's possible to request testing by directly mailing
this list, so I wanted to try that in case I can't reproduce the
original.
[1]: https://lore.kernel.org/linux-fsdevel/[email protected]
[2]: https://01.org/lkp/documentation/0-day-test-service
Thanks,
Stephen
(series summary below)
When a system with large amounts of memory has several millions of
negative dentries in a single directory, a softlockup can occur while
adding an inotify watch:
watchdog: BUG: soft lockup - CPU#20 stuck for 9s! [inotifywait:9528]
CPU: 20 PID: 9528 Comm: inotifywait Kdump: loaded Not tainted 5.16.0-rc4.20211208.el8uek.rc1.x86_64 #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.4.1 12/03/2020
RIP: 0010:__fsnotify_update_child_dentry_flags+0xad/0x120
Call Trace:
<TASK>
fsnotify_add_mark_locked+0x113/0x160
inotify_new_watch+0x130/0x190
inotify_update_watch+0x11a/0x140
__x64_sys_inotify_add_watch+0xef/0x140
do_syscall_64+0x3b/0x90
entry_SYSCALL_64_after_hwframe+0x44/0xae
v1 of this series can be found at:
https://lore.kernel.org/linux-fsdevel/20211214005337.161885-1-stephen.s.b...
v1 itself is based on a prior series:
https://lore.kernel.org/linux-fsdevel/1611235185-1685-1-git-send-email-ga...
The strategy employed by this series is to move negative dentries to the
end of the d_subdirs list, and mark them with a flag as "tail negative".
Then, readers of the d_subdirs list, which are only interested in
positive dentries, can stop reading once they reach the first tail
negative dentry. By applying this patch, I'm able to avoid the above
softlockup caused by 200 million negative dentries on my test system.
Inotify watches are set up nearly instantly.
Previously, Al expressed concerns about:
1. Possible memory corruption due to use of lock_parent() in
sweep_negative(), see patch 01 for fix.
2. The previous patch didn't catch all ways a negative dentry could
become positive (d_add, d_instantiate_new), see patch 01.
3. The previous series contained a new negative dentry limit, which
capped the negative dentry count at around 3 per hash bucket. I've
dropped this patch from the series.
Patches 2-4 are unmodified from the previous posting.
Konstantin Khlebnikov (3):
fsnotify: stop walking child dentries if remaining tail is negative
dcache: add action D_WALK_SKIP_SIBLINGS to d_walk()
dcache: stop walking siblings if remaining dentries all negative
Stephen Brennan (1):
dcache: sweep cached negative dentries to the end of list of siblings
fs/dcache.c | 96 ++++++++++++++++++++++++++++++++++++++++--
fs/libfs.c | 3 ++
fs/notify/fsnotify.c | 6 ++-
include/linux/dcache.h | 6 +++
4 files changed, 106 insertions(+), 5 deletions(-)
--
2.30.2
4 months, 2 weeks
[ocfs2] c42ff46f97: sysctl_table_check_failed
by kernel test robot
(please note: we previously reported "[ocfs2] 46e33fd45a: sysctl_table_check_failed"
at https://lists.01.org/hyperkitty/list/[email protected]/thread/KQ2F6TPJWMDV...
when this commit was on:
commit: 46e33fd45a52bf03769906e64d8a8a1ab317777d ("ocfs2: simplify subdirectory
registration with register_sysctl()")
https://git.kernel.org/cgit/linux/kernel/git/mcgrof/linux-next.git 20211118-sysctl-cleanups-set-04
We resend this report as a reminder that the issue still exists on mainline.)
Greetings,
FYI, we noticed the following commit (built with clang-14):
commit: c42ff46f97c1c25577a84fbfb111710d25a129e0 ("ocfs2: simplify subdirectory registration with register_sysctl()")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: boot
on test machine: qemu-system-x86_64 -enable-kvm -cpu Icelake-Server -smp 4 -m 16G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
+---------------------------+------------+------------+
| | e99f5e7479 | c42ff46f97 |
+---------------------------+------------+------------+
| sysctl_table_check_failed | 0 | 10 |
+---------------------------+------------+------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <oliver.sang(a)intel.com>
[ 187.975448][ T1] ksmbd: The ksmbd server is experimental, use at your own risk.
[ 187.977238][ T1] QNX4 filesystem 0.2.3 registered.
[ 187.979872][ T1] orangefs_debugfs_init: called with debug mask: :none: :0:
[ 187.981449][ T1] orangefs_init: module version upstream loaded
[ 187.983277][ T1] JFS: nTxBlock = 8192, nTxLock = 65536
[ 188.000617][ T1] sysctl table check failed: fs/ocfs2/nm Not a file
[ 188.001528][ T1] sysctl table check failed: fs/ocfs2/nm No proc_handler
[ 188.002483][ T1] sysctl table check failed: fs/ocfs2/nm bogus .mode 0555
[ 188.003437][ T1] CPU: 0 PID: 1 Comm: swapper Not tainted 5.16.0-11534-gc42ff46f97c1 #1 843f34bc11d03a69a557abb9c2a31a40f9711bc8
[ 188.005020][ T1] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 188.006289][ T1] Call Trace:
[ 188.006805][ T1] <TASK>
[ 188.007273][ T1] dump_stack_lvl (??:?)
[ 188.007965][ T1] dump_stack (??:?)
[ 188.008592][ T1] __register_sysctl_table (??:?)
[ 188.009381][ T1] ? ocfs2_stack_glue_init (stackglue.c:?)
To reproduce:
# build kernel
cd linux
cp config-5.16.0-11534-gc42ff46f97c1 .config
make HOSTCC=clang-14 CC=clang-14 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=clang-14 CC=clang-14 ARCH=x86_64 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
# if you come across any failure that blocks the test,
# please remove ~/.lkp and /lkp dir to run from a clean state.
---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation
Thanks,
Oliver Sang
5 months
[tools headers UAPI] e2bcbd7769: kernel-selftests.ir.make_fail
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-9):
commit: e2bcbd7769ee8f05e1b3d10848aace98973844e4 ("tools headers UAPI: remove stale lirc.h")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: kernel-selftests
version: kernel-selftests-x86_64-db530529-1_20220124
with following parameters:
group: group-01
ucode: 0xe2
test-description: The kernel contains a set of "self tests" under the tools/testing/selftests/ directory. These are intended to be small unit tests to exercise individual code paths in the kernel.
test-url: https://www.kernel.org/doc/Documentation/kselftest.txt
on test machine: 8 threads Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz with 16G memory
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <oliver.sang(a)intel.com>
2022-01-27 18:57:29 make -C ir
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-e2bcbd7769ee8f05e1b3d10848aace98973844e4/tools/testing/selftests/ir'
gcc -Wall -O2 -I../../../include/uapi ir_loopback.c -o /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-e2bcbd7769ee8f05e1b3d10848aace98973844e4/tools/testing/selftests/ir/ir_loopback
ir_loopback.c: In function ‘main’:
ir_loopback.c:147:20: error: ‘RC_PROTO_RCMM32’ undeclared (first use in this function); did you mean ‘RC_PROTO_RC6_MCE’?
if (rc_proto == RC_PROTO_RCMM32 &&
^~~~~~~~~~~~~~~
RC_PROTO_RC6_MCE
ir_loopback.c:147:20: note: each undeclared identifier is reported only once for each function it appears in
make: *** [../lib.mk:146: /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-e2bcbd7769ee8f05e1b3d10848aace98973844e4/tools/testing/selftests/ir/ir_loopback] Error 1
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-e2bcbd7769ee8f05e1b3d10848aace98973844e4/tools/testing/selftests/ir'
2022-01-27 18:57:29 make run_tests -C ir
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-e2bcbd7769ee8f05e1b3d10848aace98973844e4/tools/testing/selftests/ir'
gcc -Wall -O2 -I../../../include/uapi ir_loopback.c -o /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-e2bcbd7769ee8f05e1b3d10848aace98973844e4/tools/testing/selftests/ir/ir_loopback
ir_loopback.c: In function ‘main’:
ir_loopback.c:147:20: error: ‘RC_PROTO_RCMM32’ undeclared (first use in this function); did you mean ‘RC_PROTO_RC6_MCE’?
if (rc_proto == RC_PROTO_RCMM32 &&
^~~~~~~~~~~~~~~
RC_PROTO_RC6_MCE
ir_loopback.c:147:20: note: each undeclared identifier is reported only once for each function it appears in
make: *** [../lib.mk:146: /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-e2bcbd7769ee8f05e1b3d10848aace98973844e4/tools/testing/selftests/ir/ir_loopback] Error 1
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-e2bcbd7769ee8f05e1b3d10848aace98973844e4/tools/testing/selftests/ir'
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove ~/.lkp and /lkp dir to run from a clean state.
---
Thanks,
Oliver Sang
5 months
[fs/exec] 80bd5afdd8: xfstests.generic.633.fail
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-9):
commit: 80bd5afdd8568e41fc3a75c695bb179e0d9eee4d ("[PATCH v3] fs/exec: require argv[0] presence in do_execveat_common()")
url: https://github.com/0day-ci/linux/commits/Ariadne-Conill/fs-exec-require-a...
base: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git 2c271fe77d52a0555161926c232cd5bc07178b39
patch link: https://lore.kernel.org/lkml/[email protected]
in testcase: xfstests
version: xfstests-x86_64-972d710-1_20220127
with following parameters:
disk: 4HDD
fs: f2fs
test: generic-group-31
ucode: 0xe2
test-description: xfstests is a regression test suite for xfs and other file systems.
test-url: git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
on test machine: 4 threads Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz with 32G memory
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <oliver.sang(a)intel.com>
user :warn : [ 208.077271] run fstests generic/633 at 2022-01-30 04:50:49
kern :warn : [ 208.529090] Attempted to run process '/dev/fd/5/file1' with NULL argv
user :notice: [ 208.806756] generic/633 [failed, exit status 1]- output mismatch (see /lkp/benchmarks/xfstests/results//generic/633.out.bad)
user :notice: [ 208.826454] --- tests/generic/633.out 2022-01-27 11:54:16.000000000 +0000
user :notice: [ 208.842458] +++ /lkp/benchmarks/xfstests/results//generic/633.out.bad 2022-01-30 04:50:49.769538285 +0000
user :notice: [ 208.859622] @@ -1,2 +1,4 @@
user :warn : [ 208.860623] run fstests generic/634 at 2022-01-30 04:50:49
user :notice: [ 208.866037] QA output created by 633
user :notice: [ 208.889262] Silence is golden
user :notice: [ 208.901240] +idmapped-mounts.c: 3608: setid_binaries - Invalid argument - failure: sys_execveat
user :notice: [ 208.918907] +idmapped-mounts.c: 13953: run_test - Success - failure: setid binaries on regular mounts
user :notice: [ 208.935261] ...
user :notice: [ 208.949376] (Run 'diff -u /lkp/benchmarks/xfstests/tests/generic/633.out /lkp/benchmarks/xfstests/results//generic/633.out.bad' to see the entire diff)
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove ~/.lkp and /lkp dir to run from a clean state.
---
Thanks,
Oliver Sang
5 months
[io_uring] 811b398582: WARNING:possible_recursive_locking_detected
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with clang-14):
commit: 811b3985828e422a3759cf07a848fa75c17c1db4 ("io_uring: support for user allocated memory for rings/sqes")
https://github.com/ammarfaizi2/linux-block axboe/linux-block/perf-wip
in testcase: trinity
version: trinity-static-x86_64-x86_64-1c734c75-1_2020-01-06
with following parameters:
runtime: 300s
group: group-02
test-description: Trinity is a linux system call fuzz tester.
test-url: http://codemonkey.org.uk/projects/trinity/
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 16G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <oliver.sang(a)intel.com>
[ 411.985606][ T3987] WARNING: possible recursive locking detected
[ 411.986603][ T3987] 5.17.0-rc1-00119-g811b3985828e #1 Not tainted
[ 411.987466][ T3987] --------------------------------------------
[ 411.988512][ T3987] trinity-c2/3987 is trying to acquire lock:
[ 411.989352][ T3987] ffff888103bc1160 (&mm->mmap_lock#2){++++}-{3:3}, at: internal_get_user_pages_fast (gup.c:?)
[ 411.991790][ T3987]
[ 411.991790][ T3987] but task is already holding lock:
[ 411.992859][ T3987] ffff888103bc1160 (&mm->mmap_lock#2){++++}-{3:3}, at: __io_uaddr_map (io_uring.c:?)
[ 411.994141][ T3987]
[ 411.994141][ T3987] other info that might help us debug this:
[ 411.995262][ T3987] Possible unsafe locking scenario:
[ 411.995262][ T3987]
[ 411.996391][ T3987] CPU0
[ 411.996947][ T3987] ----
[ 411.997487][ T3987] lock(&mm->mmap_lock#2);
[ 411.998169][ T3987] lock(&mm->mmap_lock#2);
[ 411.998857][ T3987]
[ 411.998857][ T3987] *** DEADLOCK ***
[ 411.998857][ T3987]
[ 412.000128][ T3987] May be due to missing lock nesting notation
[ 412.000128][ T3987]
[ 412.001283][ T3987] 1 lock held by trinity-c2/3987:
[ 412.002016][ T3987] #0: ffff888103bc1160 (&mm->mmap_lock#2){++++}-{3:3}, at: __io_uaddr_map (io_uring.c:?)
[ 412.003336][ T3987]
[ 412.003336][ T3987] stack backtrace:
[ 412.004261][ T3987] CPU: 0 PID: 3987 Comm: trinity-c2 Not tainted 5.17.0-rc1-00119-g811b3985828e #1
[ 412.005522][ T3987] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 412.006766][ T3987] Call Trace:
[ 412.007338][ T3987] <TASK>
[ 412.007880][ T3987] validate_chain (lockdep.c:?)
[ 412.008646][ T3987] ? validate_chain (lockdep.c:?)
[ 412.009361][ T3987] ? __io_uaddr_map (io_uring.c:?)
[ 412.012103][ T3987] ? __se_sys_io_uring_setup (io_uring.c:?)
[ 412.012912][ T3987] ? do_syscall_64 (??:?)
[ 412.013651][ T3987] ? entry_SYSCALL_64_after_hwframe (??:?)
[ 412.014519][ T3987] ? mark_lock (lockdep.c:?)
[ 412.015181][ T3987] __lock_acquire (lockdep.c:?)
[ 412.015875][ T3987] lock_acquire (??:?)
[ 412.016538][ T3987] ? internal_get_user_pages_fast (gup.c:?)
[ 412.017441][ T3987] internal_get_user_pages_fast (gup.c:?)
[ 412.018272][ T3987] ? internal_get_user_pages_fast (gup.c:?)
[ 412.019119][ T3987] ? pin_user_pages_fast (??:?)
[ 412.019856][ T3987] __io_uaddr_map (io_uring.c:?)
[ 412.020561][ T3987] io_allocate_scq_urings (io_uring.c:?)
[ 412.021346][ T3987] io_uring_create (io_uring.c:?)
[ 412.022084][ T3987] __se_sys_io_uring_setup (io_uring.c:?)
[ 412.022885][ T3987] do_syscall_64 (??:?)
[ 412.023559][ T3987] entry_SYSCALL_64_after_hwframe (??:?)
[ 412.024393][ T3987] RIP: 0033:0x463519
[ 412.025011][ T3987] Code: 00 f3 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 db 59 00 00 c3 66 2e 0f 1f 84 00 00 00 00
All code
========
0: 00 f3 add %dh,%bl
2: c3 retq
3: 66 2e 0f 1f 84 00 00 nopw %cs:0x0(%rax,%rax,1)
a: 00 00 00
d: 0f 1f 40 00 nopl 0x0(%rax)
11: 48 89 f8 mov %rdi,%rax
14: 48 89 f7 mov %rsi,%rdi
17: 48 89 d6 mov %rdx,%rsi
1a: 48 89 ca mov %rcx,%rdx
1d: 4d 89 c2 mov %r8,%r10
20: 4d 89 c8 mov %r9,%r8
23: 4c 8b 4c 24 08 mov 0x8(%rsp),%r9
28: 0f 05 syscall
2a:* 48 3d 01 f0 ff ff cmp $0xfffffffffffff001,%rax <-- trapping instruction
30: 0f 83 db 59 00 00 jae 0x5a11
36: c3 retq
37: 66 data16
38: 2e cs
39: 0f .byte 0xf
3a: 1f (bad)
3b: 84 00 test %al,(%rax)
3d: 00 00 add %al,(%rax)
...
Code starting with the faulting instruction
===========================================
0: 48 3d 01 f0 ff ff cmp $0xfffffffffffff001,%rax
6: 0f 83 db 59 00 00 jae 0x59e7
c: c3 retq
d: 66 data16
e: 2e cs
f: 0f .byte 0xf
10: 1f (bad)
11: 84 00 test %al,(%rax)
13: 00 00 add %al,(%rax)
...
[ 412.027422][ T3987] RSP: 002b:00007fff36077c88 EFLAGS: 00000246 ORIG_RAX: 00000000000001a9
[ 412.028649][ T3987] RAX: ffffffffffffffda RBX: 00000000000001a9 RCX: 0000000000463519
[ 412.029783][ T3987] RDX: 000000a437863b79 RSI: 0000000000000004 RDI: 00000000a4a4a4a4
[ 412.030907][ T3987] RBP: 00007faea05c0000 R08: fffffffffffffff6 R09: 004af9db521a5050
[ 412.032066][ T3987] R10: 00000000fafafafa R11: 0000000000000246 R12: 0000000000000002
[ 412.033184][ T3987] R13: 00007faea05c0058 R14: 000000000109a850 R15: 00007faea05c0000
[ 412.034309][ T3987] </TASK>
[ 624.729797][ T417] sysrq: Emergency Sync
[ 624.730827][ T10] Emergency Sync complete
[ 624.731705][ T417] sysrq: Resetting
Kboot worker: lkp-worker53
Elapsed time: 660
kvm=(
qemu-system-x86_64
-enable-kvm
-cpu SandyBridge
-kernel $kernel
-initrd initrd-vm-snb-45.cgz
-m 16384
-smp 2
-device e1000,netdev=net0
-netdev user,id=net0,hostfwd=tcp::32032-:22
-boot order=nc
-no-reboot
-watchdog i6300esb
-watchdog-action debug
-rtc base=localtime
-serial stdio
-display none
-monitor null
)
append=(
ip=::::vm-snb-45::dhcp
root=/dev/ram0
RESULT_ROOT=/result/trinity/group-02-300s/vm-snb/yocto-x86_64-minimal-20190520.cgz/x86_64-randconfig-a015-20220124/clang-14/811b3985828e422a3759cf07a848fa75c17c1db4/9
BOOT_IMAGE=/pkg/linux/x86_64-randconfig-a015-20220124/clang-14/811b3985828e422a3759cf07a848fa75c17c1db4/vmlinuz-5.17.0-rc1-00119-g811b3985828e
branch=ammarfaizi2-block/axboe/linux-block/perf-wip
job=/job-script
user=lkp
ARCH=x86_64
kconfig=x86_64-randconfig-a015-20220124
commit=811b3985828e422a3759cf07a848fa75c17c1db4
vmalloc=128M
initramfs_async=0
page_owner=on
max_uptime=2100
result_service=tmpfs
selinux=0
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
net.ifnames=0
printk.devkmsg=on
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
To reproduce:
# build kernel
cd linux
cp config-5.17.0-rc1-00119-g811b3985828e .config
make HOSTCC=clang-14 CC=clang-14 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=clang-14 CC=clang-14 ARCH=x86_64 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
# if you come across any failure that blocks the test,
# please remove ~/.lkp and /lkp dir to run from a clean state.
---
Thanks,
Oliver Sang
5 months