[mm/demotion] 8ebccd60c2: BUG:sleeping_function_called_from_invalid_context_at_mm/compaction.c
by kernel test robot
Greetings,
FYI, we noticed the following issue on the commit below (built with gcc-11):
commit: 8ebccd60c2db6beefef2f39b05a95024be0c39eb ("[RFC PATCH v4 3/7] mm/demotion: Build demotion targets based on explicit memory tiers")
url: https://github.com/intel-lab-lkp/linux/commits/Aneesh-Kumar-K-V/mm-demoti...
base: https://git.kernel.org/cgit/linux/kernel/git/gregkh/driver-core.git b232b02bf3c205b13a26dcec08e53baddd8e59ed
patch link: https://lore.kernel.org/linux-mm/[email protected]
in testcase: boot
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 16G
caused the following changes (please refer to the attached dmesg/kmsg for the full log/backtrace):
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <oliver.sang@intel.com>
[ 2.576581][ T1] debug_vm_pgtable: [debug_vm_pgtable ]: Validating architecture page table helpers
[ 2.584367][ T1] BUG: sleeping function called from invalid context at mm/compaction.c:540
[ 2.585275][ T1] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 1, name: swapper/0
[ 2.586166][ T1] preempt_count: 1, expected: 0
[ 2.586668][ T1] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.18.0-rc5-00059-g8ebccd60c2db #1
[ 2.587562][ T1] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.0-debian-1.16.0-4 04/01/2014
[ 2.588577][ T1] Call Trace:
[ 2.588948][ T1] <TASK>
[ 2.589284][ T1] dump_stack_lvl+0x34/0x44
[ 2.589765][ T1] __might_resched+0x134/0x149
[ 2.590253][ T1] isolate_freepages_block+0xe6/0x2d3
[ 2.590794][ T1] isolate_freepages_range+0xc5/0x118
[ 2.591342][ T1] alloc_contig_range+0x2dd/0x350
[ 2.591858][ T1] ? alloc_contig_pages+0x170/0x194
[ 2.592384][ T1] alloc_contig_pages+0x170/0x194
[ 2.592896][ T1] init_args+0x3d0/0x44e
[ 2.593345][ T1] ? init_args+0x44e/0x44e
[ 2.593816][ T1] debug_vm_pgtable+0x46/0x809
[ 2.594312][ T1] ? alloc_inode+0x37/0x8e
[ 2.594774][ T1] ? init_args+0x44e/0x44e
[ 2.595235][ T1] do_one_initcall+0x83/0x187
[ 2.595729][ T1] do_initcalls+0xc6/0xdf
[ 2.596190][ T1] kernel_init_freeable+0x10d/0x13c
[ 2.596721][ T1] ? rest_init+0xcd/0xcd
[ 2.597170][ T1] kernel_init+0x16/0x11a
[ 2.597636][ T1] ret_from_fork+0x22/0x30
[ 2.598097][ T1] </TASK>
[ 2.626547][ T1] ------------[ cut here ]------------
[ 2.627157][ T1] initcall debug_vm_pgtable+0x0/0x809 returned with preemption imbalance
[ 2.628019][ T1] WARNING: CPU: 0 PID: 1 at init/main.c:1311 do_one_initcall+0x140/0x187
[ 2.628863][ T1] Modules linked in:
[ 2.629280][ T1] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G W 5.18.0-rc5-00059-g8ebccd60c2db #1
[ 2.630295][ T1] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.0-debian-1.16.0-4 04/01/2014
[ 2.631306][ T1] RIP: 0010:do_one_initcall+0x140/0x187
[ 2.631867][ T1] Code: 00 00 48 c7 c6 ca b6 2c 82 48 89 e7 e8 80 ca 44 00 fb 80 3c 24 00 74 14 48 89 e2 48 89 ee 48 c7 c7 df b6 2c 82 e8 b3 d6 a2 00 <0f> 0b 48 8b 44 24 40 65 48 2b 04 25 28 00 00 00 74 05 e8 d8 cd a4
[ 2.633713][ T1] RSP: 0000:ffffc90000013ea8 EFLAGS: 00010286
[ 2.634312][ T1] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000003
[ 2.635123][ T1] RDX: 0000000000000216 RSI: 0000000000000001 RDI: 0000000000000001
[ 2.635932][ T1] RBP: ffffffff82f3b694 R08: 0000000000000000 R09: 0000000000000019
[ 2.636735][ T1] R10: 0000000000000000 R11: 0000000074696e69 R12: 0000000000000000
[ 2.637538][ T1] R13: ffff88810cba0000 R14: 0000000000000000 R15: 0000000000000000
[ 2.638353][ T1] FS: 0000000000000000(0000) GS:ffff88842fc00000(0000) knlGS:0000000000000000
[ 2.639253][ T1] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 2.639901][ T1] CR2: ffff88843ffff000 CR3: 0000000002612000 CR4: 00000000000406f0
[ 2.640711][ T1] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 2.641526][ T1] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 2.642341][ T1] Call Trace:
[ 2.642707][ T1] <TASK>
[ 2.643051][ T1] do_initcalls+0xc6/0xdf
[ 2.643512][ T1] kernel_init_freeable+0x10d/0x13c
[ 2.644045][ T1] ? rest_init+0xcd/0xcd
[ 2.644498][ T1] kernel_init+0x16/0x11a
[ 2.644956][ T1] ret_from_fork+0x22/0x30
[ 2.645417][ T1] </TASK>
[ 2.645764][ T1] ---[ end trace 0000000000000000 ]---
To reproduce:
# build kernel
cd linux
cp config-5.18.0-rc5-00059-g8ebccd60c2db .config
make HOSTCC=gcc-11 CC=gcc-11 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=gcc-11 CC=gcc-11 ARCH=x86_64 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp dirs to run from a clean state.
--
0-DAY CI Kernel Test Service
https://01.org/lkp
[xfs] 55a3d6bbc5: BUG:KASAN:use-after-free_in_xfs_attr3_node_inactive[xfs]
by kernel test robot
(Please note: we previously reported
"[xfs] 55a3d6bbc5: aim7.jobs-per-min 19.8% improvement",
but have now noticed a functional issue.)
Greetings,
FYI, we noticed the following issue on the commit below (built with gcc-11):
commit: 55a3d6bbc5cc34a8e5aeb7ea5645a72cafddef2b ("[PATCH 1/2] xfs: bound maximum wait time for inodegc work")
url: https://github.com/intel-lab-lkp/linux/commits/Dave-Chinner/xfs-non-block...
base: https://git.kernel.org/cgit/fs/xfs/xfs-linux.git for-next
patch link: https://lore.kernel.org/linux-xfs/[email protected]
in testcase: xfstests
version: xfstests-x86_64-48c5dbb-1_20220523
with following parameters:
disk: 4HDD
fs: xfs
test: xfs-group-43
ucode: 0x21
test-description: xfstests is a regression test suite for xfs and other file systems.
test-url: git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
on test machine: 4 threads 1 sockets Intel(R) Core(TM) i3-3220 CPU @ 3.30GHz with 8G memory
caused the following changes (please refer to the attached dmesg/kmsg for the full log/backtrace):
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <oliver.sang@intel.com>
[ 439.394273][ T16] ==================================================================
[ 439.394411][ T16] BUG: KASAN: use-after-free in xfs_attr3_node_inactive+0x63c/0x900 [xfs]
[ 439.394716][ T16] Read of size 4 at addr ffff88817a448844 by task kworker/0:1/16
[ 439.394849][ T16]
[ 439.394897][ T16] CPU: 0 PID: 16 Comm: kworker/0:1 Not tainted 5.18.0-rc2-00158-g55a3d6bbc5cc #1
[ 439.395052][ T16] Hardware name: Hewlett-Packard p6-1451cx/2ADA, BIOS 8.15 02/05/2013
[ 439.395191][ T16] Workqueue: xfs-inodegc/sdb4 xfs_inodegc_worker [xfs]
[ 439.395460][ T16] Call Trace:
[ 439.395648][ T16] <TASK>
[ 439.395706][ T16] ? xfs_attr3_node_inactive+0x63c/0x900 [xfs]
[ 439.395948][ T16] dump_stack_lvl+0x34/0x44
[ 439.396033][ T16] print_address_description+0x1f/0x200
[ 439.396150][ T16] ? xfs_attr3_node_inactive+0x63c/0x900 [xfs]
[ 439.396387][ T16] print_report.cold+0x55/0x22c
[ 439.396479][ T16] ? _raw_spin_lock_irqsave+0x87/0x100
[ 439.396577][ T16] kasan_report+0xab/0x140
[ 439.396658][ T16] ? xfs_attr3_node_inactive+0x63c/0x900 [xfs]
[ 439.396892][ T16] xfs_attr3_node_inactive+0x63c/0x900 [xfs]
[ 439.397121][ T16] ? xfs_buf_set_ref+0x6c/0xc0 [xfs]
[ 439.397337][ T16] ? xfs_attr3_leaf_inactive+0x440/0x440 [xfs]
[ 439.397568][ T16] ? common_interrupt+0x17/0xc0
[ 439.397658][ T16] ? asm_common_interrupt+0x1e/0x40
[ 439.397751][ T16] ? xfs_trans_buf_set_type+0x91/0x200 [xfs]
[ 439.397985][ T16] ? xfs_trans_buf_set_type+0xc3/0x200 [xfs]
[ 439.398218][ T16] xfs_attr3_root_inactive+0x1a0/0x500 [xfs]
[ 439.398650][ T16] ? xfs_attr3_node_inactive+0x900/0x900 [xfs]
[ 439.398875][ T16] ? xfs_trans_alloc+0x325/0x780 [xfs]
[ 439.399098][ T16] xfs_attr_inactive+0x479/0x580 [xfs]
[ 439.399312][ T16] ? xfs_attr3_root_inactive+0x500/0x500 [xfs]
[ 439.399534][ T16] ? _raw_spin_lock+0x81/0x100
[ 439.399622][ T16] ? _raw_write_lock_irq+0x100/0x100
[ 439.399717][ T16] xfs_inactive+0x542/0x700 [xfs]
[ 439.400037][ T16] xfs_inodegc_worker+0x176/0x380 [xfs]
[ 439.400377][ T16] process_one_work+0x689/0x1040
[ 439.400481][ T16] worker_thread+0x5b3/0xf00
[ 439.400579][ T16] ? process_one_work+0x1040/0x1040
[ 439.400684][ T16] kthread+0x292/0x340
[ 439.400771][ T16] ? kthread_complete_and_exit+0x40/0x40
[ 439.400878][ T16] ret_from_fork+0x22/0x30
[ 439.400962][ T16] </TASK>
[ 439.401020][ T16]
[ 439.401065][ T16] Allocated by task 16:
[ 439.401141][ T16] kasan_save_stack+0x1e/0x40
[ 439.401226][ T16] __kasan_slab_alloc+0x66/0x80
[ 439.401313][ T16] kmem_cache_alloc+0x13c/0x300
[ 439.401400][ T16] _xfs_buf_alloc+0x61/0xd80 [xfs]
[ 439.401620][ T16] xfs_buf_get_map+0x12a/0xac0 [xfs]
[ 439.401831][ T16] xfs_buf_read_map+0xb7/0x980 [xfs]
[ 439.402042][ T16] xfs_trans_read_buf_map+0x441/0xb00 [xfs]
[ 439.402271][ T16] xfs_da_read_buf+0x1ce/0x2c0 [xfs]
[ 439.402474][ T16] xfs_da3_node_read+0x23/0x80 [xfs]
[ 439.402674][ T16] xfs_attr3_root_inactive+0xbf/0x500 [xfs]
[ 439.402891][ T16] xfs_attr_inactive+0x479/0x580 [xfs]
[ 439.403101][ T16] xfs_inactive+0x542/0x700 [xfs]
[ 439.403309][ T16] xfs_inodegc_worker+0x176/0x380 [xfs]
[ 439.403525][ T16] process_one_work+0x689/0x1040
[ 439.403615][ T16] worker_thread+0x5b3/0xf00
[ 439.403697][ T16] kthread+0x292/0x340
[ 439.403771][ T16] ret_from_fork+0x22/0x30
[ 439.403852][ T16]
[ 439.404243][ T16] Freed by task 16:
[ 439.404313][ T16] kasan_save_stack+0x1e/0x40
[ 439.404398][ T16] kasan_set_track+0x21/0x40
[ 439.404482][ T16] kasan_set_free_info+0x20/0x40
[ 439.404571][ T16] __kasan_slab_free+0x108/0x180
[ 439.404659][ T16] kmem_cache_free+0xb5/0x380
[ 439.404743][ T16] xfs_buf_rele+0x5d0/0xa00 [xfs]
[ 439.404963][ T16] xfs_attr3_node_inactive+0x1e2/0x900 [xfs]
[ 439.405288][ T16] xfs_attr3_root_inactive+0x1a0/0x500 [xfs]
[ 439.405632][ T16] xfs_attr_inactive+0x479/0x580 [xfs]
[ 439.405925][ T16] xfs_inactive+0x542/0x700 [xfs]
[ 439.406135][ T16] xfs_inodegc_worker+0x176/0x380 [xfs]
[ 439.406350][ T16] process_one_work+0x689/0x1040
[ 439.406440][ T16] worker_thread+0x5b3/0xf00
[ 439.406524][ T16] kthread+0x292/0x340
[ 439.406598][ T16] ret_from_fork+0x22/0x30
[ 439.406679][ T16]
[ 439.406724][ T16] Last potentially related work creation:
[ 439.406822][ T16] kasan_save_stack+0x1e/0x40
[ 439.406907][ T16] __kasan_record_aux_stack+0x96/0xc0
[ 439.407001][ T16] insert_work+0x4a/0x340
[ 439.407079][ T16] __queue_work+0x515/0xd40
[ 439.407160][ T16] queue_work_on+0x48/0x80
[ 439.407240][ T16] xfs_buf_bio_end_io+0x272/0x380 [xfs]
[ 439.407456][ T16] blk_update_request+0x2be/0xe80
[ 439.407553][ T16] scsi_end_request+0x71/0x600
[ 439.407641][ T16] scsi_io_completion+0x126/0xb00
[ 439.407731][ T16] blk_complete_reqs+0xaa/0x100
[ 439.407824][ T16] __do_softirq+0x1a2/0x5f7
[ 439.407916][ T16]
[ 439.407962][ T16] Second to last potentially related work creation:
[ 439.408083][ T16] kasan_save_stack+0x1e/0x40
[ 439.408184][ T16] __kasan_record_aux_stack+0x96/0xc0
[ 439.408294][ T16] insert_work+0x4a/0x340
[ 439.408381][ T16] __queue_work+0x515/0xd40
[ 439.408466][ T16] queue_work_on+0x48/0x80
[ 439.408546][ T16] xfs_buf_bio_end_io+0x272/0x380 [xfs]
[ 439.408773][ T16] blk_update_request+0x2be/0xe80
[ 439.408865][ T16] scsi_end_request+0x71/0x600
[ 439.408951][ T16] scsi_io_completion+0x126/0xb00
[ 439.409040][ T16] blk_complete_reqs+0xaa/0x100
[ 439.409127][ T16] __do_softirq+0x1a2/0x5f7
[ 439.409209][ T16]
[ 439.409254][ T16] The buggy address belongs to the object at ffff88817a448700
[ 439.409254][ T16] which belongs to the cache xfs_buf of size 360
[ 439.409486][ T16] The buggy address is located 324 bytes inside of
[ 439.409486][ T16] 360-byte region [ffff88817a448700, ffff88817a448868)
[ 439.409708][ T16]
[ 439.409754][ T16] The buggy address belongs to the physical page:
[ 439.409863][ T16] page:000000009a495195 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x17a448
[ 439.410036][ T16] head:000000009a495195 order:1 compound_mapcount:0 compound_pincount:0
[ 439.410175][ T16] flags: 0x17ffffc0010200(slab|head|node=0|zone=2|lastcpupid=0x1fffff)
[ 439.410318][ T16] raw: 0017ffffc0010200 dead000000000100 dead000000000122 ffff888134c91400
[ 439.410466][ T16] raw: 0000000000000000 0000000080120012 00000001ffffffff 0000000000000000
[ 439.410609][ T16] page dumped because: kasan: bad access detected
[ 439.410718][ T16]
[ 439.410763][ T16] Memory state around the buggy address:
[ 439.410860][ T16] ffff88817a448700: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[ 439.410996][ T16] ffff88817a448780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[ 439.411133][ T16] >ffff88817a448800: fb fb fb fb fb fb fb fb fb fb fb fb fb fc fc fc
[ 439.411268][ T16] ^
[ 439.411375][ T16] ffff88817a448880: fc fc fc fc fc fc fc fc fb fb fb fb fb fb fb fb
[ 439.411515][ T16] ffff88817a448900: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[ 439.411650][ T16] ==================================================================
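The report above has the classic use-after-free shape: the "Freed by task 16" stack shows the buffer being freed under xfs_buf_rele() inside xfs_attr3_node_inactive(), while the main stack shows the same function later reading 4 bytes inside the freed 360-byte xfs_buf object. As a hedged, illustrative kernel-style sketch (not the actual xfs code; `bp` and the field read are stand-ins):

```c
/*
 * Illustrative sketch only.  The release drops the last reference,
 * so the xfs_buf slab object goes back to the allocator; any later
 * dereference of bp (or of node data cached from it) is the
 * use-after-free that KASAN flags.
 */
xfs_buf_rele(bp);        /* last ref dropped: object freed to slab  */
/* ... later in the same function ... */
error = bp->b_error;     /* 4-byte read of freed memory (KASAN UAF) */
```

The report's offset (324 bytes into the 360-byte cache object) simply identifies which field of the freed xfs_buf was read.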
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp dirs to run from a clean state.
[selftests net] edae34a3ed: kernel-selftests.net.make_fail
by kernel test robot
Greetings,
FYI, we noticed the following issue on the commit below (built with gcc-11):
commit: edae34a3ed9293b5077dddf9e51a3d86c95dc76a ("selftests net: add UDP GRO fraglist + bpf self-tests")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: kernel-selftests
version: kernel-selftests-x86_64-8d3977ef-1_20220523
with following parameters:
group: net
ucode: 0xec
test-description: The kernel contains a set of "self tests" under the tools/testing/selftests/ directory. These are intended to be small unit tests to exercise individual code paths in the kernel.
test-url: https://www.kernel.org/doc/Documentation/kselftest.txt
on test machine: 8 threads Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz with 28G memory
caused the following changes (please refer to the attached dmesg/kmsg for the full log/backtrace):
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <oliver.sang@intel.com>
clang -O2 -target bpf -c bpf/nat6to4.c -I../../bpf -I../../../../../usr/include/ -o /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-edae34a3ed9293b5077dddf9e51a3d86c95dc76a/tools/testing/selftests/net/bpf/nat6to4.o
bpf/nat6to4.c:43:10: fatal error: 'bpf/bpf_helpers.h' file not found
#include <bpf/bpf_helpers.h>
^~~~~~~~~~~~~~~~~~~
1 error generated.
make: *** [bpf/Makefile:11: /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-edae34a3ed9293b5077dddf9e51a3d86c95dc76a/tools/testing/selftests/net/bpf/nat6to4.o] Error 1
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-edae34a3ed9293b5077dddf9e51a3d86c95dc76a/tools/testing/selftests/net'
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp dirs to run from a clean state.
[KVM] 317463a437: kvm-unit-tests.apic-split.fail
by kernel test robot
Greetings,
FYI, we noticed the following issue on the commit below (built with gcc-11):
commit: 317463a437c161c16c8316f55a03cec8d027d373 ("[PATCH v2 2/2] KVM: Inject #GP on invalid writes to x2APIC registers")
url: https://github.com/intel-lab-lkp/linux/commits/Venkatesh-Srinivas/KVM-Inj...
base: https://git.kernel.org/cgit/virt/kvm/kvm.git master
patch link: https://lore.kernel.org/kvm/[email protected]
in testcase: kvm-unit-tests
version: kvm-unit-tests-x86_64-1a4529c-1_20220412
with following parameters:
ucode: 0x28
on test machine: 8 threads 1 sockets Intel(R) Core(TM) i7-4790 v3 @ 3.60GHz with 6G memory
caused the following changes (please refer to the attached dmesg/kmsg for the full log/backtrace):
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <oliver.sang@intel.com>
Please note: besides 'apic-split', we also noticed that the 'apic' test fails on
this commit but passes on its parent.
2022-05-26 18:33:47 ./run_tests.sh
FAIL apic-split
PASS ioapic-split (19 tests)
FAIL apic
...
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp dirs to run from a clean state.
[sched/numa] 5278ba412f: unixbench.score -2.9% regression
by kernel test robot
Greetings,
FYI, we noticed a -2.9% regression of unixbench.score due to commit:
commit: 5278ba412faff8402e318ad20ab762cc9ba7a801 ("[PATCH 3/4] sched/numa: Apply imbalance limitations consistently")
url: https://github.com/intel-lab-lkp/linux/commits/Mel-Gorman/Mitigate-incons...
base: https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git 991d8d8142cad94f9c5c05db25e67fa83d6f772a
patch link: https://lore.kernel.org/lkml/[email protected]
in testcase: unixbench
on test machine: 128 threads 2 sockets Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz with 256G memory
with following parameters:
runtime: 300s
nr_task: 1
test: shell1
cpufreq_governor: performance
ucode: 0xd000331
test-description: UnixBench is the original BYTE UNIX benchmark suite; it aims to test the performance of Unix-like systems.
test-url: https://github.com/kdlucas/byte-unixbench
In addition to that, the commit also has significant impact on the following tests:
+------------------+---------------------------------------------------------------------------------+
| testcase: change | unixbench: unixbench.score -11.1% regression |
| test machine | 128 threads 2 sockets Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz with 256G memory |
| test parameters | cpufreq_governor=performance |
| | nr_task=1 |
| | runtime=300s |
| | test=shell8 |
| | ucode=0xd000331 |
+------------------+---------------------------------------------------------------------------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <oliver.sang@intel.com>
Details are as follows:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp dirs to run from a clean state.
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-11/performance/x86_64-rhel-8.3/1/debian-10.4-x86_64-20200603.cgz/300s/lkp-icl-2sp2/shell1/unixbench/0xd000331
commit:
626db23ac9 ("sched/numa: Do not swap tasks between nodes when spare capacity is available")
5278ba412f ("sched/numa: Apply imbalance limitations consistently")
626db23ac968c13b 5278ba412faff8402e318ad20ab
---------------- ---------------------------
%stddev %change %stddev
\ | \
2705 -2.9% 2625 unixbench.score
71105 ± 5% -13.4% 61547 ± 4% unixbench.time.involuntary_context_switches
287.33 ± 10% +312.2% 1184 ± 6% unixbench.time.major_page_faults
1.232e+08 -2.9% 1.196e+08 unixbench.time.minor_page_faults
143.00 +1.4% 145.00 unixbench.time.percent_of_cpu_this_job_got
383.13 +7.7% 412.62 unixbench.time.system_time
521.90 -3.4% 504.36 unixbench.time.user_time
3400459 -3.6% 3278744 unixbench.time.voluntary_context_switches
7226950 -2.9% 7015844 unixbench.workload
30.49 +1.5% 30.96 turbostat.RAMWatt
16050 -6.6% 14986 vmstat.system.cs
1293860 ± 6% +4158.8% 55103032 ± 60% numa-numastat.node0.local_node
1344815 ± 5% +3999.6% 55131871 ± 60% numa-numastat.node0.numa_hit
89313120 -63.9% 32257305 ±103% numa-numastat.node1.local_node
89365596 -63.8% 32337261 ±102% numa-numastat.node1.numa_hit
66387 +30.1% 86365 ± 6% meminfo.Active
66201 +30.2% 86181 ± 7% meminfo.Active(anon)
12721152 ± 10% -27.2% 9261056 ± 7% meminfo.DirectMap2M
48790 +10.7% 53987 ± 2% meminfo.Mapped
92524 +25.1% 115764 ± 5% meminfo.Shmem
18726 ± 8% -15.7% 15785 ± 8% sched_debug.cfs_rq:/.min_vruntime.stddev
87.63 ± 5% -11.0% 77.99 ± 4% sched_debug.cfs_rq:/.runnable_avg.avg
618.08 ± 2% +21.9% 753.36 ± 11% sched_debug.cfs_rq:/.runnable_avg.max
10444 ± 28% -376.2% -28845 sched_debug.cfs_rq:/.spread0.avg
60208 ± 4% -77.6% 13496 ± 74% sched_debug.cfs_rq:/.spread0.max
-8539 +479.7% -49504 sched_debug.cfs_rq:/.spread0.min
18727 ± 8% -15.7% 15785 ± 8% sched_debug.cfs_rq:/.spread0.stddev
87.61 ± 5% -11.0% 77.96 ± 4% sched_debug.cfs_rq:/.util_avg.avg
617.98 ± 2% +21.9% 753.33 ± 11% sched_debug.cfs_rq:/.util_avg.max
0.00 ± 12% -24.5% 0.00 ± 23% sched_debug.cpu.next_balance.stddev
161071 ± 4% -22.7% 124508 ± 10% sched_debug.cpu.nr_switches.max
44072 ± 6% -25.6% 32790 ± 3% sched_debug.cpu.nr_switches.stddev
184614 ± 6% -58.9% 75886 ± 28% numa-meminfo.node0.AnonHugePages
227436 ± 5% -50.8% 111893 ± 18% numa-meminfo.node0.AnonPages
250375 ± 6% -39.2% 152234 ± 18% numa-meminfo.node0.AnonPages.max
239538 ± 4% -50.1% 119562 ± 18% numa-meminfo.node0.Inactive
239385 ± 4% -50.1% 119488 ± 18% numa-meminfo.node0.Inactive(anon)
17347 ± 20% -33.1% 11605 ± 35% numa-meminfo.node0.Shmem
61415 ± 2% +35.5% 83195 ± 9% numa-meminfo.node1.Active
61415 ± 2% +35.3% 83107 ± 9% numa-meminfo.node1.Active(anon)
15111 ± 77% +724.0% 124518 ± 17% numa-meminfo.node1.AnonHugePages
48170 ± 26% +252.5% 169822 ± 12% numa-meminfo.node1.AnonPages
977352 +19.6% 1168644 ± 5% numa-meminfo.node1.AnonPages.max
62321 ± 14% +205.7% 190528 ± 11% numa-meminfo.node1.Inactive
62321 ± 14% +205.6% 190446 ± 11% numa-meminfo.node1.Inactive(anon)
75390 ± 4% +38.5% 104414 ± 8% numa-meminfo.node1.Shmem
56856 ± 5% -50.8% 27966 ± 18% numa-vmstat.node0.nr_anon_pages
59844 ± 4% -50.1% 29863 ± 18% numa-vmstat.node0.nr_inactive_anon
4336 ± 20% -33.3% 2894 ± 35% numa-vmstat.node0.nr_shmem
59844 ± 4% -50.1% 29862 ± 18% numa-vmstat.node0.nr_zone_inactive_anon
1344749 ± 5% +3999.7% 55131256 ± 60% numa-vmstat.node0.numa_hit
1293794 ± 6% +4159.0% 55102416 ± 60% numa-vmstat.node0.numa_local
15374 ± 2% +35.3% 20797 ± 9% numa-vmstat.node1.nr_active_anon
11720 ± 27% +258.8% 42056 ± 12% numa-vmstat.node1.nr_anon_pages
15192 ± 14% +210.3% 47137 ± 11% numa-vmstat.node1.nr_inactive_anon
18800 ± 4% +38.6% 26056 ± 8% numa-vmstat.node1.nr_shmem
15374 ± 2% +35.3% 20797 ± 9% numa-vmstat.node1.nr_zone_active_anon
15192 ± 14% +210.3% 47137 ± 11% numa-vmstat.node1.nr_zone_inactive_anon
89364368 -63.8% 32336320 ±102% numa-vmstat.node1.numa_hit
89311892 -63.9% 32256364 ±103% numa-vmstat.node1.numa_local
16551 +30.1% 21527 ± 7% proc-vmstat.nr_active_anon
68924 +2.2% 70416 proc-vmstat.nr_anon_pages
75477 +2.7% 77526 proc-vmstat.nr_inactive_anon
12470 +10.3% 13749 ± 2% proc-vmstat.nr_mapped
23215 +25.1% 29034 ± 5% proc-vmstat.nr_shmem
16551 +30.1% 21527 ± 7% proc-vmstat.nr_zone_active_anon
75477 +2.7% 77526 proc-vmstat.nr_zone_inactive_anon
242786 +41.2% 342858 ± 16% proc-vmstat.numa_hint_faults
241277 +41.0% 340196 ± 16% proc-vmstat.numa_hint_faults_local
90713416 -3.6% 87455312 proc-vmstat.numa_hit
90609984 -3.6% 87346517 proc-vmstat.numa_local
3339 ± 58% +412.8% 17123 ± 58% proc-vmstat.numa_pages_migrated
343034 ± 3% +49.8% 513708 ± 14% proc-vmstat.numa_pte_updates
86449 +86.6% 161358 ± 2% proc-vmstat.pgactivate
90707968 -3.6% 87448830 proc-vmstat.pgalloc_normal
1.257e+08 -2.7% 1.223e+08 proc-vmstat.pgfault
90721230 -3.6% 87454560 proc-vmstat.pgfree
3339 ± 58% +412.8% 17123 ± 58% proc-vmstat.pgmigrate_success
6816403 -2.2% 6668991 proc-vmstat.pgreuse
3800 -3.2% 3677 proc-vmstat.thp_fault_alloc
1597074 -2.9% 1550242 proc-vmstat.unevictable_pgs_culled
0.84 ± 7% -0.2 0.69 ± 10% perf-profile.calltrace.cycles-pp.ret_from_fork
0.81 ± 7% -0.1 0.66 ± 10% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
0.54 ± 7% +0.1 0.66 ± 12% perf-profile.calltrace.cycles-pp.next_uptodate_page.filemap_map_pages.do_read_fault.do_fault.__handle_mm_fault
0.79 ± 7% +0.1 0.92 ± 7% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__open64_nocancel.setlocale
0.78 ± 7% +0.1 0.91 ± 7% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__open64_nocancel.setlocale
0.76 ± 8% +0.1 0.88 ± 7% perf-profile.calltrace.cycles-pp.do_sys_openat2.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe.__open64_nocancel
0.76 ± 8% +0.1 0.90 ± 8% perf-profile.calltrace.cycles-pp.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe.__open64_nocancel.setlocale
0.83 ± 7% +0.1 0.96 ± 7% perf-profile.calltrace.cycles-pp.__open64_nocancel.setlocale
0.81 ± 7% -0.1 0.66 ± 10% perf-profile.children.cycles-pp.kthread
1.10 ± 6% -0.1 0.97 ± 8% perf-profile.children.cycles-pp.ret_from_fork
0.31 ± 10% -0.1 0.25 ± 7% perf-profile.children.cycles-pp.newidle_balance
0.04 ± 45% +0.0 0.08 ± 20% perf-profile.children.cycles-pp.folio_add_lru
0.08 ± 20% +0.0 0.12 ± 12% perf-profile.children.cycles-pp.touch_atime
0.05 ± 47% +0.0 0.08 ± 23% perf-profile.children.cycles-pp.apparmor_file_free_security
0.04 ± 71% +0.0 0.07 ± 11% perf-profile.children.cycles-pp.perf_output_copy
0.29 ± 3% +0.0 0.33 ± 8% perf-profile.children.cycles-pp.page_counter_charge
0.17 ± 9% +0.0 0.21 ± 9% perf-profile.children.cycles-pp.__anon_vma_prepare
0.14 ± 8% +0.0 0.18 ± 16% perf-profile.children.cycles-pp.apparmor_file_alloc_security
0.46 ± 4% +0.0 0.51 ± 7% perf-profile.children.cycles-pp.vfs_read
0.03 ±100% +0.1 0.08 ± 19% perf-profile.children.cycles-pp.propagate_protected_usage
0.20 ± 15% +0.1 0.25 ± 4% perf-profile.children.cycles-pp.copy_string_kernel
0.00 +0.1 0.06 ± 13% perf-profile.children.cycles-pp.__mark_inode_dirty
0.27 ± 5% +0.1 0.33 ± 6% perf-profile.children.cycles-pp.vma_interval_tree_remove
0.20 ± 6% +0.1 0.26 ± 5% perf-profile.children.cycles-pp.up_write
0.00 +0.1 0.06 ± 6% perf-profile.children.cycles-pp.rmqueue_bulk
0.36 ± 7% +0.1 0.43 ± 8% perf-profile.children.cycles-pp.get_page_from_freelist
0.34 ± 7% +0.1 0.41 ± 3% perf-profile.children.cycles-pp.unlink_file_vma
0.31 ± 8% +0.1 0.40 ± 10% perf-profile.children.cycles-pp.__slab_free
0.34 ± 10% +0.1 0.44 ± 7% perf-profile.children.cycles-pp.ksys_write
0.72 ± 3% +0.1 0.82 ± 5% perf-profile.children.cycles-pp.__split_vma
0.33 ± 10% +0.1 0.44 ± 8% perf-profile.children.cycles-pp.vfs_write
0.32 ± 10% +0.1 0.42 ± 9% perf-profile.children.cycles-pp.new_sync_write
1.05 ± 4% +0.2 1.29 ± 8% perf-profile.children.cycles-pp.next_uptodate_page
1.41 ± 4% -0.3 1.10 ± 23% perf-profile.self.cycles-pp.menu_select
0.09 ± 10% -0.0 0.06 ± 11% perf-profile.self.cycles-pp.vm_normal_page
0.02 ±141% +0.1 0.07 ± 21% perf-profile.self.cycles-pp.propagate_protected_usage
0.26 ± 5% +0.1 0.32 ± 6% perf-profile.self.cycles-pp.vma_interval_tree_remove
0.20 ± 8% +0.1 0.26 ± 3% perf-profile.self.cycles-pp.up_write
0.52 ± 6% +0.1 0.59 ± 8% perf-profile.self.cycles-pp.vma_interval_tree_insert
0.30 ± 10% +0.1 0.38 ± 10% perf-profile.self.cycles-pp.__slab_free
1.00 ± 4% +0.2 1.24 ± 8% perf-profile.self.cycles-pp.next_uptodate_page
9.50 +3.3% 9.82 perf-stat.i.MPKI
1.467e+09 -3.1% 1.421e+09 perf-stat.i.branch-instructions
25259322 -2.8% 24564546 perf-stat.i.branch-misses
4.64 +3.8 8.40 perf-stat.i.cache-miss-rate%
3128761 +84.4% 5768597 perf-stat.i.cache-misses
16064 -6.6% 15000 perf-stat.i.context-switches
1.28 ± 2% +4.9% 1.35 ± 2% perf-stat.i.cpi
147.24 +456.0% 818.73 perf-stat.i.cpu-migrations
3222 ± 3% -37.4% 2017 ± 2% perf-stat.i.cycles-between-cache-misses
0.03 +0.0 0.04 perf-stat.i.dTLB-load-miss-rate%
646123 ± 2% +10.8% 716013 perf-stat.i.dTLB-load-misses
1.866e+09 -2.7% 1.816e+09 perf-stat.i.dTLB-loads
988430 -1.7% 971311 perf-stat.i.dTLB-store-misses
1.078e+09 -2.4% 1.052e+09 perf-stat.i.dTLB-stores
7.095e+09 -3.1% 6.873e+09 perf-stat.i.instructions
0.89 ± 6% +162.7% 2.35 ± 5% perf-stat.i.major-faults
34.46 -2.8% 33.52 perf-stat.i.metric.M/sec
195370 -2.7% 190058 perf-stat.i.minor-faults
85.14 +3.7 88.89 perf-stat.i.node-load-miss-rate%
504104 ± 2% +121.9% 1118698 perf-stat.i.node-load-misses
104059 ± 3% +44.8% 150652 ± 4% perf-stat.i.node-loads
5.36 ± 18% +19.3 24.63 ± 4% perf-stat.i.node-store-miss-rate%
45098 ± 16% +733.8% 376029 ± 4% perf-stat.i.node-store-misses
937285 +26.6% 1186448 ± 2% perf-stat.i.node-stores
195371 -2.7% 190061 perf-stat.i.page-faults
9.64 +2.9% 9.91 perf-stat.overall.MPKI
4.59 +3.9 8.47 perf-stat.overall.cache-miss-rate%
1.20 ± 2% +4.6% 1.25 ± 2% perf-stat.overall.cpi
2709 ± 2% -44.9% 1493 ± 2% perf-stat.overall.cycles-between-cache-misses
0.03 ± 2% +0.0 0.04 perf-stat.overall.dTLB-load-miss-rate%
82.64 +5.4 88.05 perf-stat.overall.node-load-miss-rate%
4.60 ± 14% +19.4 24.05 ± 4% perf-stat.overall.node-store-miss-rate%
1.465e+09 -3.1% 1.42e+09 perf-stat.ps.branch-instructions
25223900 -2.7% 24532257 perf-stat.ps.branch-misses
3133964 +84.0% 5765238 perf-stat.ps.cache-misses
16037 -6.6% 14976 perf-stat.ps.context-switches
146.96 +455.6% 816.47 perf-stat.ps.cpu-migrations
645888 ± 2% +10.8% 715648 perf-stat.ps.dTLB-load-misses
1.864e+09 -2.7% 1.814e+09 perf-stat.ps.dTLB-loads
987054 -1.7% 970089 perf-stat.ps.dTLB-store-misses
1.077e+09 -2.4% 1.051e+09 perf-stat.ps.dTLB-stores
7.086e+09 -3.1% 6.866e+09 perf-stat.ps.instructions
0.89 ± 6% +162.4% 2.34 ± 5% perf-stat.ps.major-faults
195074 -2.7% 189793 perf-stat.ps.minor-faults
503993 +121.6% 1117027 perf-stat.ps.node-load-misses
105826 ± 3% +43.3% 151624 ± 4% perf-stat.ps.node-loads
45328 ± 16% +728.3% 375435 ± 4% perf-stat.ps.node-store-misses
937188 +26.5% 1185959 ± 2% perf-stat.ps.node-stores
195075 -2.7% 189796 perf-stat.ps.page-faults
4.472e+12 -3.1% 4.335e+12 perf-stat.total.instructions
***************************************************************************************************
lkp-icl-2sp2: 128 threads 2 sockets Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz with 256G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-11/performance/x86_64-rhel-8.3/1/debian-10.4-x86_64-20200603.cgz/300s/lkp-icl-2sp2/shell8/unixbench/0xd000331
commit:
626db23ac9 ("sched/numa: Do not swap tasks between nodes when spare capacity is available")
5278ba412f ("sched/numa: Apply imbalance limitations consistently")
626db23ac968c13b 5278ba412faff8402e318ad20ab
---------------- ---------------------------
%stddev %change %stddev
\ | \
9459 -11.1% 8410 unixbench.score
49295 ± 2% -13.0% 42902 unixbench.time.involuntary_context_switches
1616 ± 3% +147.0% 3993 ± 4% unixbench.time.major_page_faults
45966048 -11.0% 40896241 unixbench.time.minor_page_faults
173.46 +18.9% 206.17 unixbench.time.system_time
190.29 -16.7% 158.42 unixbench.time.user_time
1307674 -12.3% 1146302 unixbench.time.voluntary_context_switches
358509 -11.3% 317922 unixbench.workload
31.04 +6.0% 32.92 turbostat.RAMWatt
53649 -10.2% 48196 vmstat.system.cs
0.14 ± 3% +0.0 0.16 mpstat.cpu.all.soft%
1.63 -0.2 1.45 mpstat.cpu.all.usr%
0.34 ±223% +2.3 2.67 ± 83% perf-profile.calltrace.cycles-pp.do_execveat_common.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.34 ±223% +2.4 2.71 ± 81% perf-profile.calltrace.cycles-pp.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
12953258 ± 7% -29.1% 9177429 ± 7% meminfo.DirectMap2M
22474 +13.6% 25533 ± 2% meminfo.KernelStack
7838 +156.0% 20065 ± 20% meminfo.PageTables
62596 ± 16% -39.1% 38109 ± 36% numa-vmstat.node0.nr_inactive_anon
12227 ± 6% +11.5% 13631 ± 4% numa-vmstat.node0.nr_kernel_stack
1163 ± 18% +134.4% 2726 ± 18% numa-vmstat.node0.nr_page_table_pages
62596 ± 16% -39.1% 38109 ± 36% numa-vmstat.node0.nr_zone_inactive_anon
14241 ± 67% +184.6% 40525 ± 36% numa-vmstat.node1.nr_anon_pages
14899 ± 69% +182.7% 42126 ± 33% numa-vmstat.node1.nr_inactive_anon
10252 ± 7% +16.7% 11959 ± 6% numa-vmstat.node1.nr_kernel_stack
809.17 ± 25% +202.7% 2449 ± 21% numa-vmstat.node1.nr_page_table_pages
14899 ± 69% +182.7% 42126 ± 33% numa-vmstat.node1.nr_zone_inactive_anon
244098 ± 15% -39.8% 146934 ± 40% numa-meminfo.node0.AnonPages.max
250449 ± 16% -39.3% 152036 ± 37% numa-meminfo.node0.Inactive
250301 ± 16% -39.3% 151965 ± 37% numa-meminfo.node0.Inactive(anon)
4630 ± 18% +127.3% 10522 ± 23% numa-meminfo.node0.PageTables
56826 ± 67% +184.7% 161757 ± 35% numa-meminfo.node1.AnonPages
64784 ± 57% +166.9% 172884 ± 33% numa-meminfo.node1.AnonPages.max
59459 ± 70% +183.1% 168330 ± 33% numa-meminfo.node1.Inactive
59459 ± 70% +182.8% 168177 ± 33% numa-meminfo.node1.Inactive(anon)
10234 ± 7% +16.8% 11949 ± 6% numa-meminfo.node1.KernelStack
3189 ± 24% +203.1% 9665 ± 22% numa-meminfo.node1.PageTables
73176 +3.2% 75515 proc-vmstat.nr_anon_pages
77439 +3.4% 80074 proc-vmstat.nr_inactive_anon
22484 +13.6% 25550 ± 2% proc-vmstat.nr_kernel_stack
1964 +159.7% 5101 ± 19% proc-vmstat.nr_page_table_pages
77439 +3.4% 80074 proc-vmstat.nr_zone_inactive_anon
33321568 -11.1% 29636150 proc-vmstat.numa_hit
33204884 -11.1% 29522839 proc-vmstat.numa_local
2826 ± 7% +450.7% 15565 proc-vmstat.pgactivate
33314916 -11.1% 29632022 proc-vmstat.pgalloc_normal
46272301 -10.9% 41208227 proc-vmstat.pgfault
33127507 -11.1% 29444660 proc-vmstat.pgfree
2594250 -11.2% 2304883 proc-vmstat.pgreuse
1499 -10.6% 1340 proc-vmstat.thp_fault_alloc
635894 -11.1% 565409 proc-vmstat.unevictable_pgs_culled
10.69 +4.3% 11.15 perf-stat.i.MPKI
4.972e+09 -10.5% 4.448e+09 perf-stat.i.branch-instructions
1.78 +0.0 1.81 perf-stat.i.branch-miss-rate%
87366205 -9.0% 79542506 perf-stat.i.branch-misses
4.39 +10.2 14.62 perf-stat.i.cache-miss-rate%
10832039 +238.1% 36622848 perf-stat.i.cache-misses
2.643e+08 -6.7% 2.466e+08 perf-stat.i.cache-references
55032 -10.2% 49392 perf-stat.i.context-switches
0.84 +11.6% 0.94 perf-stat.i.cpi
1083 ± 4% +90.1% 2058 perf-stat.i.cpu-migrations
1976 -61.1% 769.37 ± 3% perf-stat.i.cycles-between-cache-misses
0.04 ± 4% +0.0 0.04 perf-stat.i.dTLB-load-miss-rate%
6.279e+09 -10.2% 5.638e+09 perf-stat.i.dTLB-loads
3629603 -10.3% 3257478 perf-stat.i.dTLB-store-misses
3.662e+09 -10.2% 3.289e+09 perf-stat.i.dTLB-stores
2.408e+10 -10.5% 2.154e+10 perf-stat.i.instructions
1.19 -10.6% 1.06 perf-stat.i.ipc
25.60 ± 3% +145.3% 62.80 ± 4% perf-stat.i.major-faults
102.69 +76.3% 181.04 perf-stat.i.metric.K/sec
118.55 -10.3% 106.39 perf-stat.i.metric.M/sec
710240 -11.0% 632294 perf-stat.i.minor-faults
67.54 +24.9 92.47 perf-stat.i.node-load-miss-rate%
823765 ± 2% +788.1% 7315469 perf-stat.i.node-load-misses
399301 ± 3% +31.5% 525162 perf-stat.i.node-loads
5.32 ± 6% +34.6 39.92 perf-stat.i.node-store-miss-rate%
193239 ± 6% +1687.7% 3454580 perf-stat.i.node-store-misses
4318690 +18.6% 5123428 perf-stat.i.node-stores
710266 -11.0% 632357 perf-stat.i.page-faults
10.98 +4.2% 11.44 perf-stat.overall.MPKI
1.76 +0.0 1.79 perf-stat.overall.branch-miss-rate%
4.10 +10.8 14.85 perf-stat.overall.cache-miss-rate%
0.83 +12.2% 0.94 perf-stat.overall.cpi
1853 -70.3% 550.23 perf-stat.overall.cycles-between-cache-misses
0.04 ± 4% +0.0 0.05 perf-stat.overall.dTLB-load-miss-rate%
1.20 -10.9% 1.07 perf-stat.overall.ipc
67.35 +25.9 93.30 perf-stat.overall.node-load-miss-rate%
4.28 ± 6% +36.0 40.27 perf-stat.overall.node-store-miss-rate%
4.894e+09 -10.5% 4.379e+09 perf-stat.ps.branch-instructions
86000604 -9.0% 78300554 perf-stat.ps.branch-misses
10662516 +238.1% 36050907 perf-stat.ps.cache-misses
2.602e+08 -6.7% 2.427e+08 perf-stat.ps.cache-references
54171 -10.2% 48620 perf-stat.ps.context-switches
1066 ± 4% +90.1% 2026 perf-stat.ps.cpu-migrations
6.181e+09 -10.2% 5.55e+09 perf-stat.ps.dTLB-loads
3572776 -10.2% 3206610 perf-stat.ps.dTLB-store-misses
3.604e+09 -10.2% 3.237e+09 perf-stat.ps.dTLB-stores
2.37e+10 -10.5% 2.121e+10 perf-stat.ps.instructions
25.20 ± 3% +145.3% 61.82 ± 4% perf-stat.ps.major-faults
699120 -11.0% 622421 perf-stat.ps.minor-faults
810873 ± 2% +788.1% 7201233 perf-stat.ps.node-load-misses
393054 ± 3% +31.5% 516951 perf-stat.ps.node-loads
190213 ± 6% +1687.8% 3400631 perf-stat.ps.node-store-misses
4251087 +18.6% 5043399 perf-stat.ps.node-stores
699145 -11.0% 622482 perf-stat.ps.page-faults
1.523e+12 -10.5% 1.364e+12 perf-stat.total.instructions
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://01.org/lkp
[cpufreq] a6cb305191: kernel_BUG_at_drivers/cpufreq/cpufreq.c
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-11):
commit: a6cb305191dd85350290fd66aeea8e62e33562f9 ("[PATCH 2/3] cpufreq: Panic if policy is active in cpufreq_policy_free()")
url: https://github.com/intel-lab-lkp/linux/commits/Viresh-Kumar/cpufreq-Minor...
base: https://git.kernel.org/cgit/linux/kernel/git/rafael/linux-pm.git linux-next
patch link: https://lore.kernel.org/linux-pm/8c3d50faf8811e86136fb3f9c459e43fc3c50bc0...
in testcase: boot
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 16G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+------------------------------------------+------------+------------+
| | a1f8da428b | a6cb305191 |
+------------------------------------------+------------+------------+
| boot_successes | 15 | 0 |
| boot_failures | 0 | 13 |
| kernel_BUG_at_drivers/cpufreq/cpufreq.c | 0 | 13 |
| invalid_opcode:#[##] | 0 | 13 |
| RIP:cpufreq_policy_free | 0 | 13 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 13 |
+------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <oliver.sang(a)intel.com>
[ 10.343913][ T200] kernel BUG at drivers/cpufreq/cpufreq.c:1291!
[ 10.345025][ T200] invalid opcode: 0000 [#1] SMP PTI
[ 10.345958][ T200] CPU: 1 PID: 200 Comm: systemd-udevd Not tainted 5.18.0-02127-ga6cb305191dd #1
[ 10.347496][ T200] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.0-debian-1.16.0-4 04/01/2014
[ 10.349232][ T200] RIP: 0010:cpufreq_policy_free (drivers/cpufreq/cpufreq.c:1291 (discriminator 1))
[ 10.349244][ T200] Code: e6 ff ff 48 8b 7d 10 e8 bc 47 c5 ff 48 8b 7d 08 e8 b3 47 c5 ff 48 8b 7d 00 e8 aa 47 c5 ff 48 89 ef 5b 5d 41 5c e9 de 09 97 ff <0f> 0b 66 66 2e 0f 1f 84 00 00 00 00 00 66 66 2e 0f 1f 84 00 00 00
All code
========
0: e6 ff out %al,$0xff
2: ff 48 8b decl -0x75(%rax)
5: 7d 10 jge 0x17
7: e8 bc 47 c5 ff callq 0xffffffffffc547c8
c: 48 8b 7d 08 mov 0x8(%rbp),%rdi
10: e8 b3 47 c5 ff callq 0xffffffffffc547c8
15: 48 8b 7d 00 mov 0x0(%rbp),%rdi
19: e8 aa 47 c5 ff callq 0xffffffffffc547c8
1e: 48 89 ef mov %rbp,%rdi
21: 5b pop %rbx
22: 5d pop %rbp
23: 41 5c pop %r12
25: e9 de 09 97 ff jmpq 0xffffffffff970a08
2a:* 0f 0b ud2 <-- trapping instruction
2c: 66 66 2e 0f 1f 84 00 data16 nopw %cs:0x0(%rax,%rax,1)
33: 00 00 00 00
37: 66 data16
38: 66 data16
39: 2e cs
3a: 0f .byte 0xf
3b: 1f (bad)
3c: 84 00 test %al,(%rax)
...
Code starting with the faulting instruction
===========================================
0: 0f 0b ud2
2: 66 66 2e 0f 1f 84 00 data16 nopw %cs:0x0(%rax,%rax,1)
9: 00 00 00 00
d: 66 data16
e: 66 data16
f: 2e cs
10: 0f .byte 0xf
11: 1f (bad)
12: 84 00 test %al,(%rax)
...
[ 10.349247][ T200] RSP: 0018:ffffbdf4405abca0 EFLAGS: 00010293
[ 10.349251][ T200] RAX: 0000000000000000 RBX: 0000000000000002 RCX: 0000000000000000
[ 10.349253][ T200] RDX: 0000000000000000 RSI: 0000000000000002 RDI: ffff96e640375618
[ 10.349255][ T200] RBP: ffff96e6c5775c00 R08: ffffbdf4405aba24 R09: 0000000000000002
[ 10.349257][ T200] R10: 00000000fffffffb R11: ffffddf43fc029d0 R12: 00000000fffffffb
[ 10.349259][ T200] R13: 0000000000000000 R14: ffff96e6c5775dd0 R15: 0000000000000001
[ 10.349261][ T200] FS: 0000000000000000(0000) GS:ffff96e96fd00000(0063) knlGS:00000000f7bca800
[ 10.349267][ T200] CS: 0010 DS: 002b ES: 002b CR0: 0000000080050033
[ 10.349269][ T200] CR2: 0000000056e2c164 CR3: 000000042fcd2000 CR4: 00000000000406e0
[ 10.349271][ T200] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 10.349273][ T200] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 10.349275][ T200] Call Trace:
[ 10.349304][ T200] <TASK>
[ 10.349308][ T200] cpufreq_online (drivers/cpufreq/cpufreq.c:1562)
[ 10.349314][ T200] cpufreq_add_dev (drivers/cpufreq/cpufreq.c:1578)
[ 10.349318][ T200] subsys_interface_register (drivers/base/bus.c:1036)
[ 10.349328][ T200] ? 0xffffffffc0071000
[ 10.349330][ T200] cpufreq_register_driver (drivers/cpufreq/cpufreq.c:2872)
[ 10.349333][ T200] acpi_cpufreq_init (drivers/cpufreq/acpi-cpufreq.c:286) acpi_cpufreq
[ 10.349342][ T200] do_one_initcall (init/main.c:1295)
[ 10.349348][ T200] ? __cond_resched (kernel/sched/core.c:8181)
[ 10.349354][ T200] ? kmem_cache_alloc_trace (mm/slub.c:3219 mm/slub.c:3225 mm/slub.c:3256)
[ 10.349362][ T200] do_init_module (kernel/module.c:3731)
[ 10.349366][ T200] __do_sys_finit_module (kernel/module.c:4222)
[ 10.349369][ T200] __do_fast_syscall_32 (arch/x86/entry/common.c:112 arch/x86/entry/common.c:178)
[ 10.349376][ T200] do_fast_syscall_32 (arch/x86/entry/common.c:203)
[ 10.349379][ T200] entry_SYSENTER_compat_after_hwframe (arch/x86/entry/entry_64_compat.S:117)
[ 10.349385][ T200] RIP: 0023:0xf7f01549
[ 10.349388][ T200] Code: 03 74 c0 01 10 05 03 74 b8 01 10 06 03 74 b4 01 10 07 03 74 b0 01 10 08 03 74 d8 01 00 00 00 00 00 51 52 55 89 e5 0f 34 cd 80 <5d> 5a 59 c3 90 90 90 90 8d b4 26 00 00 00 00 8d b4 26 00 00 00 00
All code
========
0: 03 74 c0 01 add 0x1(%rax,%rax,8),%esi
4: 10 05 03 74 b8 01 adc %al,0x1b87403(%rip) # 0x1b8740d
a: 10 06 adc %al,(%rsi)
c: 03 74 b4 01 add 0x1(%rsp,%rsi,4),%esi
10: 10 07 adc %al,(%rdi)
12: 03 74 b0 01 add 0x1(%rax,%rsi,4),%esi
16: 10 08 adc %cl,(%rax)
18: 03 74 d8 01 add 0x1(%rax,%rbx,8),%esi
1c: 00 00 add %al,(%rax)
1e: 00 00 add %al,(%rax)
20: 00 51 52 add %dl,0x52(%rcx)
23: 55 push %rbp
24: 89 e5 mov %esp,%ebp
26: 0f 34 sysenter
28: cd 80 int $0x80
2a:* 5d pop %rbp <-- trapping instruction
2b: 5a pop %rdx
2c: 59 pop %rcx
2d: c3 retq
2e: 90 nop
2f: 90 nop
30: 90 nop
31: 90 nop
32: 8d b4 26 00 00 00 00 lea 0x0(%rsi,%riz,1),%esi
39: 8d b4 26 00 00 00 00 lea 0x0(%rsi,%riz,1),%esi
Code starting with the faulting instruction
===========================================
0: 5d pop %rbp
1: 5a pop %rdx
2: 59 pop %rcx
3: c3 retq
4: 90 nop
5: 90 nop
6: 90 nop
7: 90 nop
8: 8d b4 26 00 00 00 00 lea 0x0(%rsi,%riz,1),%esi
f: 8d b4 26 00 00 00 00 lea 0x0(%rsi,%riz,1),%esi
To reproduce:
# build kernel
cd linux
cp config-5.18.0-02127-ga6cb305191dd .config
make HOSTCC=gcc-11 CC=gcc-11 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=gcc-11 CC=gcc-11 ARCH=x86_64 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
# if you come across any failure that blocks the test,
# please remove ~/.lkp and /lkp dir to run from a clean state.
--
0-DAY CI Kernel Test Service
https://01.org/lkp
[xfs] 55a3d6bbc5: aim7.jobs-per-min 19.8% improvement
by kernel test robot
Greetings,
FYI, we noticed a 19.8% improvement in aim7.jobs-per-min due to commit:
commit: 55a3d6bbc5cc34a8e5aeb7ea5645a72cafddef2b ("[PATCH 1/2] xfs: bound maximum wait time for inodegc work")
url: https://github.com/intel-lab-lkp/linux/commits/Dave-Chinner/xfs-non-block...
base: https://git.kernel.org/cgit/fs/xfs/xfs-linux.git for-next
patch link: https://lore.kernel.org/linux-xfs/[email protected]
in testcase: aim7
on test machine: 144 threads 4 sockets Intel(R) Xeon(R) Gold 5318H CPU @ 2.50GHz with 128G memory
with following parameters:
disk: 1BRD_48G
fs: xfs
test: disk_rw
load: 3000
cpufreq_governor: performance
ucode: 0x7002402
test-description: AIM7 is a traditional UNIX system-level benchmark suite used to test and measure the performance of a multiuser system.
test-url: https://sourceforge.net/projects/aimbench/files/aim-suite7/
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove ~/.lkp and /lkp dir to run from a clean state.
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/rootfs/tbox_group/test/testcase/ucode:
gcc-11/performance/1BRD_48G/xfs/x86_64-rhel-8.3/3000/debian-10.4-x86_64-20200603.cgz/lkp-cpl-4sp1/disk_rw/aim7/0x7002402
commit:
ab6a8d3f1a ("Merge branch 'guilt/xfs-5.19-misc-3' into xfs-5.19-for-next")
55a3d6bbc5 ("xfs: bound maximum wait time for inodegc work")
ab6a8d3f1a2a85de 55a3d6bbc5cc34a8e5aeb7ea564
---------------- ---------------------------
%stddev %change %stddev
\ | \
403537 +19.8% 483615 aim7.jobs-per-min
44.91 -16.7% 37.42 aim7.time.elapsed_time
44.91 -16.7% 37.42 aim7.time.elapsed_time.max
26113 ± 2% +201.4% 78708 ± 16% aim7.time.involuntary_context_switches
2848 -2.1% 2787 aim7.time.maximum_resident_set_size
1036 -35.0% 673.56 aim7.time.system_time
631532 -31.7% 431436 aim7.time.voluntary_context_switches
65148 -28.5% 46596 ± 31% numa-numastat.node3.other_node
5.487e+09 -12.9% 4.779e+09 cpuidle..time
12110694 -20.4% 9645343 ± 17% cpuidle..usage
82.99 +1.5% 84.25 iostat.cpu.idle
16.38 -8.7% 14.95 iostat.cpu.system
24.67 ± 8% -26.4% 18.17 ± 4% vmstat.procs.r
42006 +20.9% 50773 vmstat.system.cs
26178 ± 2% -32.4% 17695 ± 4% meminfo.Active
26016 ± 2% -32.6% 17522 ± 4% meminfo.Active(anon)
49632 ± 7% -20.5% 39475 ± 2% meminfo.Shmem
1.04 ± 8% +0.2 1.20 ± 11% mpstat.cpu.all.irq%
0.12 ± 5% +0.0 0.14 ± 9% mpstat.cpu.all.soft%
15.86 -1.6 14.29 mpstat.cpu.all.sys%
0.64 +0.2 0.81 ± 2% mpstat.cpu.all.usr%
581.00 -17.4% 480.00 turbostat.Avg_MHz
18.30 -2.8 15.50 ± 2% turbostat.Busy%
3181 -2.4% 3103 turbostat.Bzy_MHz
0.19 ± 2% +36.2% 0.26 ± 2% turbostat.IPC
14124132 -22.5% 10948641 ± 16% turbostat.IRQ
371.48 -1.6% 365.70 turbostat.PkgWatt
120505 ± 70% +99.8% 240737 ± 29% numa-meminfo.node0.AnonPages
132911 ± 64% +92.1% 255280 ± 30% numa-meminfo.node0.AnonPages.max
132341 ± 60% +89.9% 251379 ± 28% numa-meminfo.node0.Inactive(anon)
14263 ±112% +105.9% 29369 ± 73% numa-meminfo.node0.KernelStack
30681 ± 40% -41.1% 18067 ± 31% numa-meminfo.node2.KReclaimable
30681 ± 40% -41.1% 18067 ± 31% numa-meminfo.node2.SReclaimable
21106 ± 3% -34.1% 13904 ± 12% numa-meminfo.node3.Active
21106 ± 3% -34.1% 13904 ± 12% numa-meminfo.node3.Active(anon)
26276 ± 15% -29.3% 18581 ± 13% numa-meminfo.node3.Shmem
30111 ± 70% +100.0% 60231 ± 29% numa-vmstat.node0.nr_anon_pages
33069 ± 60% +90.2% 62893 ± 28% numa-vmstat.node0.nr_inactive_anon
14242 ±112% +106.6% 29422 ± 73% numa-vmstat.node0.nr_kernel_stack
33069 ± 60% +90.2% 62892 ± 28% numa-vmstat.node0.nr_zone_inactive_anon
7674 ± 40% -41.1% 4523 ± 32% numa-vmstat.node2.nr_slab_reclaimable
5251 ± 3% -35.6% 3381 ± 12% numa-vmstat.node3.nr_active_anon
6583 ± 14% -30.2% 4597 ± 13% numa-vmstat.node3.nr_shmem
5251 ± 3% -35.6% 3381 ± 12% numa-vmstat.node3.nr_zone_active_anon
65140 -28.5% 46596 ± 31% numa-vmstat.node3.numa_other
6504 ± 2% -33.2% 4347 ± 4% proc-vmstat.nr_active_anon
270275 -8.2% 248155 proc-vmstat.nr_dirty
868712 -2.8% 844011 proc-vmstat.nr_file_pages
270401 -8.2% 248295 proc-vmstat.nr_inactive_file
12408 ± 7% -20.4% 9879 ± 3% proc-vmstat.nr_shmem
39345 -2.9% 38186 proc-vmstat.nr_slab_reclaimable
107105 -1.9% 105036 proc-vmstat.nr_slab_unreclaimable
6504 ± 2% -33.2% 4347 ± 4% proc-vmstat.nr_zone_active_anon
270402 -8.2% 248294 proc-vmstat.nr_zone_inactive_file
270276 -8.2% 248155 proc-vmstat.nr_zone_write_pending
11886 ± 3% -14.5% 10162 ± 4% proc-vmstat.pgactivate
578250 -6.5% 540908 proc-vmstat.pgfault
32358 ± 3% -6.8% 30168 ± 2% proc-vmstat.pgreuse
1585 -1.3% 1564 proc-vmstat.unevictable_pgs_culled
4.63 ±119% +185.3% 13.21 ± 83% perf-stat.i.MPKI
1.126e+10 +11.6% 1.256e+10 perf-stat.i.branch-instructions
1.02 ± 57% +0.9 1.88 ± 56% perf-stat.i.branch-miss-rate%
54201742 ± 4% +29.0% 69896187 ± 5% perf-stat.i.branch-misses
36.01 ± 3% -6.6 29.42 ± 12% perf-stat.i.cache-miss-rate%
34680496 +10.2% 38229820 ± 2% perf-stat.i.cache-misses
85947764 ± 10% +38.4% 1.19e+08 ± 14% perf-stat.i.cache-references
42770 +21.1% 51798 perf-stat.i.context-switches
8.358e+10 -18.7% 6.794e+10 perf-stat.i.cpu-cycles
1.634e+10 +10.4% 1.804e+10 perf-stat.i.dTLB-loads
0.01 ±150% +0.0 0.04 ± 73% perf-stat.i.dTLB-store-miss-rate%
134798 ± 26% +46.9% 198054 ± 27% perf-stat.i.dTLB-store-misses
8.912e+09 +15.1% 1.026e+10 perf-stat.i.dTLB-stores
36040275 ± 3% +24.5% 44863056 ± 4% perf-stat.i.iTLB-load-misses
5.647e+10 +11.2% 6.278e+10 perf-stat.i.instructions
0.61 ± 2% +24.9% 0.76 perf-stat.i.ipc
0.58 -18.7% 0.47 perf-stat.i.metric.GHz
253.65 +12.0% 284.09 perf-stat.i.metric.M/sec
9922 +11.7% 11084 perf-stat.i.minor-faults
6990890 +18.7% 8300960 ± 4% perf-stat.i.node-load-misses
3326179 +18.7% 3949195 perf-stat.i.node-loads
3443940 +11.1% 3825016 ± 2% perf-stat.i.node-store-misses
7324856 +10.5% 8090736 perf-stat.i.node-stores
9958 +11.7% 11124 perf-stat.i.page-faults
1.52 ± 10% +23.9% 1.88 ± 13% perf-stat.overall.MPKI
0.48 ± 4% +0.1 0.55 ± 4% perf-stat.overall.branch-miss-rate%
1.48 -27.0% 1.08 perf-stat.overall.cpi
2409 -26.3% 1774 ± 3% perf-stat.overall.cycles-between-cache-misses
86.34 +1.9 88.25 perf-stat.overall.iTLB-load-miss-rate%
1568 ± 3% -10.7% 1400 ± 3% perf-stat.overall.instructions-per-iTLB-miss
0.68 +37.0% 0.93 perf-stat.overall.ipc
1.108e+10 +12.2% 1.243e+10 perf-stat.ps.branch-instructions
53180193 ± 4% +29.3% 68763152 ± 4% perf-stat.ps.branch-misses
34126941 +10.9% 37833475 ± 2% perf-stat.ps.cache-misses
84414160 ± 10% +38.5% 1.169e+08 ± 13% perf-stat.ps.cache-references
42090 +21.9% 51290 perf-stat.ps.context-switches
8.222e+10 -18.4% 6.71e+10 perf-stat.ps.cpu-cycles
1.608e+10 +11.1% 1.787e+10 perf-stat.ps.dTLB-loads
131907 ± 26% +45.3% 191712 ± 26% perf-stat.ps.dTLB-store-misses
8.77e+09 +15.8% 1.016e+10 perf-stat.ps.dTLB-stores
35474759 ± 3% +25.3% 44446876 ± 4% perf-stat.ps.iTLB-load-misses
5.556e+10 +11.8% 6.214e+10 perf-stat.ps.instructions
9585 +9.4% 10490 perf-stat.ps.minor-faults
6879195 +19.6% 8227277 ± 4% perf-stat.ps.node-load-misses
3270811 +19.5% 3907209 perf-stat.ps.node-loads
3389704 +11.8% 3790016 ± 2% perf-stat.ps.node-store-misses
7209880 +11.1% 8013315 perf-stat.ps.node-stores
9619 +9.4% 10528 perf-stat.ps.page-faults
2.547e+12 -6.7% 2.375e+12 perf-stat.total.instructions
15.20 ± 5% -11.9 3.34 ± 4% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_write_slowpath.do_unlinkat.__x64_sys_unlink
15.82 ± 5% -11.6 4.20 ± 4% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.do_unlinkat.__x64_sys_unlink.do_syscall_64
16.02 ± 5% -11.5 4.56 ± 4% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.do_unlinkat.__x64_sys_unlink.do_syscall_64.entry_SYSCALL_64_after_hwframe
14.72 ± 5% -11.4 3.36 ± 4% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_write_slowpath.open_last_lookups.path_openat
16.58 ± 5% -11.3 5.30 ± 4% perf-profile.calltrace.cycles-pp.__x64_sys_unlink.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
16.58 ± 5% -11.3 5.30 ± 4% perf-profile.calltrace.cycles-pp.do_unlinkat.__x64_sys_unlink.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
16.59 ± 5% -11.3 5.32 ± 4% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.unlink
16.59 ± 5% -11.3 5.32 ± 4% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
16.60 ± 5% -11.3 5.33 ± 4% perf-profile.calltrace.cycles-pp.unlink
15.32 ± 5% -11.1 4.20 ± 4% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.open_last_lookups.path_openat.do_filp_open
15.53 ± 5% -11.0 4.57 ± 3% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.open_last_lookups.path_openat.do_filp_open.do_sys_openat2
16.44 ± 5% -10.8 5.66 ± 4% perf-profile.calltrace.cycles-pp.open_last_lookups.path_openat.do_filp_open.do_sys_openat2.__x64_sys_creat
16.48 ± 5% -10.8 5.71 ± 4% perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_openat2.__x64_sys_creat.do_syscall_64
16.48 ± 5% -10.8 5.71 ± 4% perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_openat2.__x64_sys_creat.do_syscall_64.entry_SYSCALL_64_after_hwframe
16.49 ± 5% -10.8 5.73 ± 4% perf-profile.calltrace.cycles-pp.do_sys_openat2.__x64_sys_creat.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat64
16.50 ± 5% -10.8 5.74 ± 4% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat64
16.49 ± 5% -10.8 5.73 ± 4% perf-profile.calltrace.cycles-pp.__x64_sys_creat.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat64
16.50 ± 5% -10.8 5.74 ± 4% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.creat64
16.51 ± 5% -10.8 5.76 ± 4% perf-profile.calltrace.cycles-pp.creat64
0.86 ± 7% +0.2 1.01 ± 8% perf-profile.calltrace.cycles-pp.lookup_open.open_last_lookups.path_openat.do_filp_open.do_sys_openat2
0.70 ± 4% +0.2 0.86 ± 11% perf-profile.calltrace.cycles-pp.xfs_inactive_ifree.xfs_inactive.xfs_inodegc_worker.process_one_work.worker_thread
0.58 ± 4% +0.2 0.82 ± 4% perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_optimistic_spin.rwsem_down_write_slowpath.open_last_lookups.path_openat
0.59 ± 5% +0.2 0.84 ± 5% perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_optimistic_spin.rwsem_down_write_slowpath.do_unlinkat.__x64_sys_unlink
0.54 ± 4% +0.3 0.79 ± 3% perf-profile.calltrace.cycles-pp.xas_load.__filemap_get_folio.iomap_write_begin.iomap_write_iter.iomap_file_buffered_write
0.53 ± 4% +0.3 0.79 ± 6% perf-profile.calltrace.cycles-pp.__alloc_pages.folio_alloc.__filemap_get_folio.iomap_write_begin.iomap_write_iter
0.55 ± 3% +0.3 0.84 ± 4% perf-profile.calltrace.cycles-pp.down_write.xfs_ilock.xfs_file_buffered_write.new_sync_write.vfs_write
0.57 ± 5% +0.3 0.86 ± 5% perf-profile.calltrace.cycles-pp.apparmor_file_permission.security_file_permission.vfs_write.ksys_write.do_syscall_64
0.61 ± 4% +0.3 0.92 ± 5% perf-profile.calltrace.cycles-pp.folio_alloc.__filemap_get_folio.iomap_write_begin.iomap_write_iter.iomap_file_buffered_write
0.64 ± 4% +0.3 0.97 ± 6% perf-profile.calltrace.cycles-pp.__folio_mark_dirty.filemap_dirty_folio.iomap_write_end.iomap_write_iter.iomap_file_buffered_write
0.66 ± 5% +0.3 1.00 ± 4% perf-profile.calltrace.cycles-pp.security_file_permission.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.71 ± 4% +0.3 1.05 ± 3% perf-profile.calltrace.cycles-pp.down_write.xfs_ilock.xfs_buffered_write_iomap_begin.iomap_iter.iomap_file_buffered_write
0.68 ± 4% +0.3 1.03 ± 4% perf-profile.calltrace.cycles-pp.xfs_ilock.xfs_file_buffered_write.new_sync_write.vfs_write.ksys_write
0.78 ± 4% +0.4 1.14 ± 6% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.llseek
0.72 ± 4% +0.4 1.09 ± 5% perf-profile.calltrace.cycles-pp.fault_in_readable.fault_in_iov_iter_readable.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write
0.96 ± 4% +0.4 1.34 ± 5% perf-profile.calltrace.cycles-pp.memset_erms.zero_user_segments.__iomap_write_begin.iomap_write_begin.iomap_write_iter
0.98 ± 2% +0.4 1.37 ± 5% perf-profile.calltrace.cycles-pp.__filemap_add_folio.filemap_add_folio.__filemap_get_folio.iomap_write_begin.iomap_write_iter
1.00 ± 4% +0.4 1.40 ± 5% perf-profile.calltrace.cycles-pp.zero_user_segments.__iomap_write_begin.iomap_write_begin.iomap_write_iter.iomap_file_buffered_write
0.84 ± 4% +0.4 1.25 ± 4% perf-profile.calltrace.cycles-pp.xfs_ilock.xfs_buffered_write_iomap_begin.iomap_iter.iomap_file_buffered_write.xfs_file_buffered_write
0.92 ± 4% +0.4 1.34 ± 5% perf-profile.calltrace.cycles-pp.__entry_text_start.write
0.87 ± 4% +0.4 1.31 ± 4% perf-profile.calltrace.cycles-pp.fault_in_iov_iter_readable.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write.new_sync_write
0.91 ± 4% +0.4 1.35 ± 6% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.llseek
0.79 ± 5% +0.5 1.24 ± 7% perf-profile.calltrace.cycles-pp.release_pages.__pagevec_release.truncate_inode_pages_range.evict.__dentry_kill
0.81 ± 4% +0.5 1.27 ± 7% perf-profile.calltrace.cycles-pp.__pagevec_release.truncate_inode_pages_range.evict.__dentry_kill.dentry_kill
0.26 ±100% +0.5 0.75 ± 5% perf-profile.calltrace.cycles-pp.ksys_lseek.do_syscall_64.entry_SYSCALL_64_after_hwframe.llseek
0.27 ±100% +0.5 0.77 ± 5% perf-profile.calltrace.cycles-pp.balance_dirty_pages_ratelimited.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write.new_sync_write
0.17 ±141% +0.5 0.69 ± 3% perf-profile.calltrace.cycles-pp.xfs_file_write_iter.new_sync_write.vfs_write.ksys_write.do_syscall_64
0.80 ± 5% +0.5 1.32 ± 46% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call
0.09 ±223% +0.5 0.62 ± 6% perf-profile.calltrace.cycles-pp.xfs_remove.xfs_vn_unlink.vfs_unlink.do_unlinkat.__x64_sys_unlink
0.09 ±223% +0.5 0.62 ± 6% perf-profile.calltrace.cycles-pp.xfs_vn_unlink.vfs_unlink.do_unlinkat.__x64_sys_unlink.do_syscall_64
0.86 ± 6% +0.5 1.40 ± 44% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
0.09 ±223% +0.5 0.64 ± 6% perf-profile.calltrace.cycles-pp.vfs_unlink.do_unlinkat.__x64_sys_unlink.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.08 ±223% +0.6 0.64 ± 12% perf-profile.calltrace.cycles-pp.xfs_ifree.xfs_inactive_ifree.xfs_inactive.xfs_inodegc_worker.process_one_work
0.17 ±141% +0.6 0.74 ± 5% perf-profile.calltrace.cycles-pp.folio_add_lru.filemap_add_folio.__filemap_get_folio.iomap_write_begin.iomap_write_iter
0.00 +0.6 0.57 ± 6% perf-profile.calltrace.cycles-pp.truncate_cleanup_folio.truncate_inode_pages_range.evict.__dentry_kill.dentry_kill
0.00 +0.6 0.58 ± 7% perf-profile.calltrace.cycles-pp.xfs_buffered_write_iomap_end.iomap_iter.iomap_file_buffered_write.xfs_file_buffered_write.new_sync_write
0.08 ±223% +0.6 0.67 ± 5% perf-profile.calltrace.cycles-pp.__entry_text_start.llseek
0.18 ±141% +0.6 0.77 ± 5% perf-profile.calltrace.cycles-pp.xfs_break_layouts.xfs_file_write_checks.xfs_file_buffered_write.new_sync_write.vfs_write
0.00 +0.6 0.62 ± 5% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages.folio_alloc.__filemap_get_folio.iomap_write_begin
0.00 +0.6 0.62 ± 16% perf-profile.calltrace.cycles-pp.xfs_inactive_truncate.xfs_inactive.xfs_inodegc_worker.process_one_work.worker_thread
0.00 +0.6 0.62 ± 4% perf-profile.calltrace.cycles-pp.__mem_cgroup_charge.__filemap_add_folio.filemap_add_folio.__filemap_get_folio.iomap_write_begin
0.00 +0.6 0.63 ± 5% perf-profile.calltrace.cycles-pp.xfs_iunlock.xfs_file_buffered_write.new_sync_write.vfs_write.ksys_write
1.22 ± 4% +0.6 1.85 ± 4% perf-profile.calltrace.cycles-pp.filemap_dirty_folio.iomap_write_end.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write
0.00 +0.6 0.64 ± 5% perf-profile.calltrace.cycles-pp.disk_rw
0.00 +0.6 0.65 ± 4% perf-profile.calltrace.cycles-pp.xfs_iunlock.xfs_buffered_write_iomap_begin.iomap_iter.iomap_file_buffered_write.xfs_file_buffered_write
0.08 ±223% +0.6 0.73 ± 40% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
0.08 ±223% +0.7 0.74 ± 40% perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
1.48 ± 3% +0.7 2.14 ± 5% perf-profile.calltrace.cycles-pp.filemap_add_folio.__filemap_get_folio.iomap_write_begin.iomap_write_iter.iomap_file_buffered_write
0.00 +0.7 0.66 ± 5% perf-profile.calltrace.cycles-pp.__pagevec_lru_add.folio_add_lru.filemap_add_folio.__filemap_get_folio.iomap_write_begin
0.00 +0.7 0.67 ± 3% perf-profile.calltrace.cycles-pp.xlog_cil_commit.__xfs_trans_commit.xfs_vn_update_time.file_update_time.xfs_file_write_checks
1.65 ± 5% +0.7 2.34 ± 6% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyin.copy_page_from_iter_atomic.iomap_write_iter.iomap_file_buffered_write
0.00 +0.7 0.70 ± 6% perf-profile.calltrace.cycles-pp.folio_account_dirtied.__folio_mark_dirty.filemap_dirty_folio.iomap_write_end.iomap_write_iter
0.00 +0.7 0.71 ± 3% perf-profile.calltrace.cycles-pp.__xfs_trans_commit.xfs_vn_update_time.file_update_time.xfs_file_write_checks.xfs_file_buffered_write
0.75 ± 5% +0.7 1.48 ± 11% perf-profile.calltrace.cycles-pp.xfs_inactive.xfs_inodegc_worker.process_one_work.worker_thread.kthread
0.76 ± 5% +0.7 1.50 ± 10% perf-profile.calltrace.cycles-pp.xfs_inodegc_worker.process_one_work.worker_thread.kthread.ret_from_fork
0.78 ± 4% +0.7 1.53 ± 10% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
1.81 ± 4% +0.8 2.57 ± 6% perf-profile.calltrace.cycles-pp.copyin.copy_page_from_iter_atomic.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write
0.84 ± 12% +0.8 1.61 ± 5% perf-profile.calltrace.cycles-pp.file_update_time.xfs_file_write_checks.xfs_file_buffered_write.new_sync_write.vfs_write
0.78 ± 4% +0.8 1.58 ± 10% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
0.79 ± 4% +0.8 1.61 ± 10% perf-profile.calltrace.cycles-pp.ret_from_fork
0.79 ± 4% +0.8 1.61 ± 10% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
0.10 ±223% +0.9 0.98 ± 6% perf-profile.calltrace.cycles-pp.xfs_vn_update_time.file_update_time.xfs_file_write_checks.xfs_file_buffered_write.new_sync_write
1.87 ± 4% +0.9 2.78 ± 5% perf-profile.calltrace.cycles-pp.llseek
1.76 ± 6% +0.9 2.68 ± 6% perf-profile.calltrace.cycles-pp.truncate_inode_pages_range.evict.__dentry_kill.dentry_kill.dput
1.78 ± 6% +0.9 2.70 ± 6% perf-profile.calltrace.cycles-pp.evict.__dentry_kill.dentry_kill.dput.__fput
1.81 ± 6% +1.0 2.76 ± 6% perf-profile.calltrace.cycles-pp.dentry_kill.dput.__fput.task_work_run.exit_to_user_mode_loop
1.80 ± 6% +1.0 2.76 ± 6% perf-profile.calltrace.cycles-pp.__dentry_kill.dentry_kill.dput.__fput.task_work_run
1.82 ± 6% +1.0 2.79 ± 6% perf-profile.calltrace.cycles-pp.dput.__fput.task_work_run.exit_to_user_mode_loop.exit_to_user_mode_prepare
2.26 ± 4% +1.0 3.25 ± 5% perf-profile.calltrace.cycles-pp.copy_page_from_iter_atomic.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write.new_sync_write
1.90 ± 6% +1.0 2.91 ± 6% perf-profile.calltrace.cycles-pp.task_work_run.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64
1.90 ± 6% +1.0 2.90 ± 6% perf-profile.calltrace.cycles-pp.__fput.task_work_run.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode
1.91 ± 6% +1.0 2.92 ± 6% perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.__close
1.91 ± 6% +1.0 2.92 ± 6% perf-profile.calltrace.cycles-pp.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.__close
1.90 ± 6% +1.0 2.92 ± 6% perf-profile.calltrace.cycles-pp.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.92 ± 6% +1.0 2.93 ± 6% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__close
1.92 ± 6% +1.0 2.94 ± 6% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__close
1.92 ± 6% +1.0 2.94 ± 6% perf-profile.calltrace.cycles-pp.__close
1.97 ± 5% +1.3 3.28 ± 4% perf-profile.calltrace.cycles-pp.xfs_file_write_checks.xfs_file_buffered_write.new_sync_write.vfs_write.ksys_write
3.32 ± 4% +1.4 4.67 ± 4% perf-profile.calltrace.cycles-pp.__iomap_write_begin.iomap_write_begin.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write
2.94 ± 4% +1.5 4.42 ± 4% perf-profile.calltrace.cycles-pp.xfs_buffered_write_iomap_begin.iomap_iter.iomap_file_buffered_write.xfs_file_buffered_write.new_sync_write
4.40 ± 4% +2.1 6.47 ± 5% perf-profile.calltrace.cycles-pp.__filemap_get_folio.iomap_write_begin.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write
4.48 ± 3% +2.3 6.74 ± 4% perf-profile.calltrace.cycles-pp.iomap_iter.iomap_file_buffered_write.xfs_file_buffered_write.new_sync_write.vfs_write
6.36 ± 4% +2.8 9.20 ± 4% perf-profile.calltrace.cycles-pp.iomap_write_end.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write.new_sync_write
8.14 ± 3% +3.6 11.77 ± 4% perf-profile.calltrace.cycles-pp.iomap_write_begin.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write.new_sync_write
19.08 ± 4% +8.6 27.69 ± 4% perf-profile.calltrace.cycles-pp.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write.new_sync_write.vfs_write
24.25 ± 4% +11.2 35.45 ± 4% perf-profile.calltrace.cycles-pp.iomap_file_buffered_write.xfs_file_buffered_write.new_sync_write.vfs_write.ksys_write
27.82 ± 3% +13.3 41.14 ± 4% perf-profile.calltrace.cycles-pp.xfs_file_buffered_write.new_sync_write.vfs_write.ksys_write.do_syscall_64
28.69 ± 4% +13.7 42.44 ± 4% perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
30.64 ± 3% +14.8 45.43 ± 4% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
31.21 ± 3% +15.1 46.28 ± 4% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
31.66 ± 3% +15.3 46.95 ± 4% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
31.92 ± 3% +15.4 47.35 ± 4% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
33.93 ± 4% +16.8 50.75 ± 4% perf-profile.calltrace.cycles-pp.write
29.92 ± 5% -23.2 6.71 ± 4% perf-profile.children.cycles-pp.osq_lock
31.14 ± 5% -22.7 8.41 ± 4% perf-profile.children.cycles-pp.rwsem_optimistic_spin
31.55 ± 5% -22.4 9.12 ± 4% perf-profile.children.cycles-pp.rwsem_down_write_slowpath
16.58 ± 5% -11.3 5.30 ± 4% perf-profile.children.cycles-pp.__x64_sys_unlink
16.58 ± 5% -11.3 5.30 ± 4% perf-profile.children.cycles-pp.do_unlinkat
16.61 ± 5% -11.3 5.34 ± 4% perf-profile.children.cycles-pp.unlink
16.44 ± 5% -10.8 5.66 ± 4% perf-profile.children.cycles-pp.open_last_lookups
16.49 ± 5% -10.8 5.73 ± 4% perf-profile.children.cycles-pp.__x64_sys_creat
16.51 ± 5% -10.8 5.76 ± 4% perf-profile.children.cycles-pp.path_openat
16.51 ± 5% -10.8 5.76 ± 4% perf-profile.children.cycles-pp.creat64
16.51 ± 5% -10.8 5.76 ± 4% perf-profile.children.cycles-pp.do_filp_open
16.54 ± 5% -10.7 5.80 ± 4% perf-profile.children.cycles-pp.do_sys_openat2
67.60 ± 4% -5.3 62.29 ± 4% perf-profile.children.cycles-pp.do_syscall_64
68.00 ± 4% -5.1 62.90 ± 4% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
0.62 ± 6% -0.2 0.44 ± 12% perf-profile.children.cycles-pp.xfs_check_agi_freecount
0.34 ± 8% -0.1 0.22 ± 14% perf-profile.children.cycles-pp.xfs_inobt_get_rec
0.29 ± 8% -0.1 0.17 ± 10% perf-profile.children.cycles-pp.xfs_btree_check_sblock
0.22 ± 8% -0.1 0.14 ± 10% perf-profile.children.cycles-pp.__xfs_btree_check_sblock
0.35 ± 7% -0.1 0.27 ± 9% perf-profile.children.cycles-pp.xfs_dialloc_ag
0.20 ± 7% -0.1 0.12 ± 8% perf-profile.children.cycles-pp.xfs_btree_get_rec
0.20 ± 7% -0.1 0.12 ± 11% perf-profile.children.cycles-pp.xfs_btree_increment
0.39 ± 7% -0.1 0.32 ± 8% perf-profile.children.cycles-pp.xfs_dialloc
0.04 ± 45% +0.0 0.07 ± 11% perf-profile.children.cycles-pp.up
0.05 ± 7% +0.0 0.08 ± 10% perf-profile.children.cycles-pp.balance_dirty_pages
0.06 ± 8% +0.0 0.08 ± 10% perf-profile.children.cycles-pp.__x64_sys_write
0.06 ± 6% +0.0 0.08 ± 11% perf-profile.children.cycles-pp.xfs_btree_read_buf_block
0.06 ± 11% +0.0 0.08 ± 5% perf-profile.children.cycles-pp.cgroup_rstat_updated
0.07 ± 12% +0.0 0.10 ± 4% perf-profile.children.cycles-pp.xfs_bmapi_reserve_delalloc
0.05 ± 7% +0.0 0.08 ± 12% perf-profile.children.cycles-pp.free_unref_page_commit
0.05 ± 8% +0.0 0.08 ± 7% perf-profile.children.cycles-pp.rw_verify_area
0.06 ± 11% +0.0 0.09 ± 6% perf-profile.children.cycles-pp.xas_create
0.04 ± 45% +0.0 0.07 ± 8% perf-profile.children.cycles-pp.iomap_adjust_read_range
0.05 ± 13% +0.0 0.08 ± 4% perf-profile.children.cycles-pp.xfs_dir3_data_check
0.05 ± 13% +0.0 0.08 ± 4% perf-profile.children.cycles-pp.__xfs_dir3_data_check
0.05 ± 7% +0.0 0.08 ± 12% perf-profile.children.cycles-pp.xas_clear_mark
0.06 ± 7% +0.0 0.09 ± 9% perf-profile.children.cycles-pp.xfs_free_eofblocks
0.06 ± 13% +0.0 0.08 ± 8% perf-profile.children.cycles-pp.iov_iter_init
0.06 ± 9% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.xfs_dir2_leafn_lookup_for_entry
0.06 ± 11% +0.0 0.09 ± 10% perf-profile.children.cycles-pp.iomap_iter_done
0.04 ± 71% +0.0 0.06 ± 7% perf-profile.children.cycles-pp.__x64_sys_openat
0.06 +0.0 0.09 ± 6% perf-profile.children.cycles-pp.folio_memcg_unlock
0.07 ± 11% +0.0 0.10 ± 6% perf-profile.children.cycles-pp.xfs_vn_lookup
0.04 ± 45% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.kmem_cache_alloc
0.06 ± 11% +0.0 0.09 ± 5% perf-profile.children.cycles-pp.xfs_lookup
0.06 ± 11% +0.0 0.09 ± 5% perf-profile.children.cycles-pp.xfs_dir_lookup
0.04 ± 45% +0.0 0.08 ± 14% perf-profile.children.cycles-pp.wake_up_q
0.06 ± 6% +0.0 0.09 ± 9% perf-profile.children.cycles-pp.PageHeadHuge
0.06 ± 9% +0.0 0.09 ± 5% perf-profile.children.cycles-pp.__list_add_valid
0.06 ± 11% +0.0 0.10 ± 5% perf-profile.children.cycles-pp.syscall_exit_to_user_mode_prepare
0.05 ± 47% +0.0 0.08 ± 4% perf-profile.children.cycles-pp.xfs_dir2_node_addname_int
0.08 ± 10% +0.0 0.11 ± 11% perf-profile.children.cycles-pp.__xa_set_mark
0.06 ± 9% +0.0 0.10 ± 5% perf-profile.children.cycles-pp.__mark_inode_dirty
0.07 ± 5% +0.0 0.10 ± 7% perf-profile.children.cycles-pp.mem_cgroup_charge_statistics
0.07 ± 11% +0.0 0.10 ± 9% perf-profile.children.cycles-pp.xfs_release
0.09 ± 6% +0.0 0.13 ± 12% perf-profile.children.cycles-pp._xfs_trans_bjoin
0.06 ± 11% +0.0 0.10 ± 10% perf-profile.children.cycles-pp.xfs_btree_lookup_get_block
0.04 ± 71% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.xfs_dir2_node_lookup
0.04 ± 71% +0.0 0.07 ± 6% perf-profile.children.cycles-pp.open64
0.07 ± 6% +0.0 0.11 ± 11% perf-profile.children.cycles-pp.alloc_pages
0.10 ± 5% +0.0 0.14 ± 9% perf-profile.children.cycles-pp.xfs_btree_lookup
0.10 ± 13% +0.0 0.14 ± 7% perf-profile.children.cycles-pp.get_mem_cgroup_from_mm
0.09 ± 14% +0.0 0.13 ± 5% perf-profile.children.cycles-pp.xfs_dir2_node_addname
0.06 ± 11% +0.0 0.11 ± 10% perf-profile.children.cycles-pp.rwsem_wake
0.02 ± 99% +0.0 0.07 ± 10% perf-profile.children.cycles-pp.xas_find
0.08 ± 8% +0.0 0.13 ± 5% perf-profile.children.cycles-pp.xfs_get_extsz_hint
0.03 ± 70% +0.0 0.08 ± 11% perf-profile.children.cycles-pp.xfs_isilocked
0.09 ± 11% +0.0 0.14 ± 5% perf-profile.children.cycles-pp.xfs_dir_createname
0.02 ±141% +0.0 0.06 ± 14% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.10 ± 4% +0.0 0.15 ± 7% perf-profile.children.cycles-pp.node_dirty_ok
0.01 ±223% +0.0 0.06 ± 11% perf-profile.children.cycles-pp.mem_cgroup_track_foreign_dirty_slowpath
0.11 ± 8% +0.1 0.16 ± 11% perf-profile.children.cycles-pp.memcpy_erms
0.00 +0.1 0.05 perf-profile.children.cycles-pp.xas_alloc
0.08 ± 5% +0.1 0.14 ± 11% perf-profile.children.cycles-pp._raw_spin_lock_irq
0.00 +0.1 0.05 ± 7% perf-profile.children.cycles-pp.uncharge_folio
0.11 ± 6% +0.1 0.16 ± 9% perf-profile.children.cycles-pp.filemap_unaccount_folio
0.01 ±223% +0.1 0.06 ± 17% perf-profile.children.cycles-pp.generic_file_llseek_size
0.01 ±223% +0.1 0.06 ± 6% perf-profile.children.cycles-pp.xlog_grant_push_ail
0.01 ±223% +0.1 0.06 ± 6% perf-profile.children.cycles-pp.xlog_grant_push_threshold
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.folio_mapping
0.07 ± 10% +0.1 0.13 ± 22% perf-profile.children.cycles-pp.kmem_cache_free
0.02 ± 99% +0.1 0.08 ± 21% perf-profile.children.cycles-pp.memcg_slab_free_hook
0.00 +0.1 0.06 ± 9% perf-profile.children.cycles-pp.schedule_idle
0.12 ± 11% +0.1 0.18 ± 11% perf-profile.children.cycles-pp.__free_one_page
0.00 +0.1 0.06 ± 8% perf-profile.children.cycles-pp.mem_cgroup_update_lru_size
0.00 +0.1 0.06 ± 8% perf-profile.children.cycles-pp.memcg_check_events
0.00 +0.1 0.06 ± 8% perf-profile.children.cycles-pp.filemap_free_folio
0.13 ± 8% +0.1 0.18 ± 5% perf-profile.children.cycles-pp.xfs_da_read_buf
0.10 ± 9% +0.1 0.16 ± 8% perf-profile.children.cycles-pp.file_remove_privs
0.12 ± 5% +0.1 0.18 ± 5% perf-profile.children.cycles-pp.aa_file_perm
0.00 +0.1 0.06 ± 11% perf-profile.children.cycles-pp.down_read
0.01 ±223% +0.1 0.07 ± 11% perf-profile.children.cycles-pp.xfs_dir2_leafn_remove
0.01 ±223% +0.1 0.07 ± 7% perf-profile.children.cycles-pp.xlog_space_left
0.12 ± 5% +0.1 0.18 ± 8% perf-profile.children.cycles-pp.rmqueue_bulk
0.00 +0.1 0.06 perf-profile.children.cycles-pp.kmem_cache_alloc_lru
0.08 ± 6% +0.1 0.14 ± 12% perf-profile.children.cycles-pp.folio_lruvec_lock_irqsave
0.02 ±141% +0.1 0.08 ± 20% perf-profile.children.cycles-pp.propagate_protected_usage
0.15 ± 2% +0.1 0.21 ± 3% perf-profile.children.cycles-pp.generic_write_check_limits
0.12 ± 4% +0.1 0.18 ± 10% perf-profile.children.cycles-pp.page_counter_try_charge
0.14 ± 6% +0.1 0.20 ± 3% perf-profile.children.cycles-pp.folio_memcg_lock
0.13 ± 6% +0.1 0.19 ± 5% perf-profile.children.cycles-pp.iomap_page_create
0.07 ± 12% +0.1 0.14 ± 25% perf-profile.children.cycles-pp.rcu_do_batch
0.00 +0.1 0.07 ± 14% perf-profile.children.cycles-pp.idle_cpu
0.00 +0.1 0.07 ± 18% perf-profile.children.cycles-pp.update_rq_clock
0.14 ± 7% +0.1 0.21 ± 4% perf-profile.children.cycles-pp.xfs_dir2_node_removename
0.15 ± 8% +0.1 0.22 ± 4% perf-profile.children.cycles-pp.xfs_dir_removename
0.15 ± 5% +0.1 0.22 ± 5% perf-profile.children.cycles-pp.xfs_iread_extents
0.14 ± 7% +0.1 0.22 ± 5% perf-profile.children.cycles-pp.file_modified
0.11 ± 17% +0.1 0.19 ± 27% perf-profile.children.cycles-pp.rcu_core
0.15 ± 9% +0.1 0.22 ± 7% perf-profile.children.cycles-pp.find_lock_entries
0.15 ± 9% +0.1 0.23 ± 4% perf-profile.children.cycles-pp.xfs_da3_node_lookup_int
0.16 ± 12% +0.1 0.23 ± 12% perf-profile.children.cycles-pp.free_pcppages_bulk
0.17 ± 4% +0.1 0.25 ± 8% perf-profile.children.cycles-pp.__mod_node_page_state
0.08 ± 12% +0.1 0.16 ± 25% perf-profile.children.cycles-pp.try_to_wake_up
0.18 ± 2% +0.1 0.26 ± 4% perf-profile.children.cycles-pp.xas_start
0.17 ± 6% +0.1 0.25 ± 8% perf-profile.children.cycles-pp.xfs_file_llseek
0.16 ± 6% +0.1 0.25 ± 8% perf-profile.children.cycles-pp.inode_to_bdi
0.18 ± 6% +0.1 0.28 ± 6% perf-profile.children.cycles-pp.syscall_enter_from_user_mode
0.18 ± 5% +0.1 0.27 ± 5% perf-profile.children.cycles-pp.rcu_all_qs
0.17 ± 4% +0.1 0.26 ± 7% perf-profile.children.cycles-pp.try_charge_memcg
0.20 ± 5% +0.1 0.29 ± 8% perf-profile.children.cycles-pp.__mod_lruvec_state
0.06 ± 13% +0.1 0.16 ± 27% perf-profile.children.cycles-pp.xfs_log_ticket_ungrant
0.21 ± 5% +0.1 0.32 ± 5% perf-profile.children.cycles-pp.entry_SYSCALL_64_safe_stack
0.22 ± 11% +0.1 0.33 ± 10% perf-profile.children.cycles-pp.current_time
0.24 ± 6% +0.1 0.36 ± 6% perf-profile.children.cycles-pp.__list_del_entry_valid
0.27 ± 4% +0.1 0.38 ± 4% perf-profile.children.cycles-pp.xfs_errortag_test
0.25 ± 7% +0.1 0.38 ± 6% perf-profile.children.cycles-pp.folio_account_cleaned
0.28 ± 4% +0.1 0.41 ± 5% perf-profile.children.cycles-pp.rmqueue
0.24 ± 6% +0.1 0.37 ± 4% perf-profile.children.cycles-pp.xfs_bmbt_to_iomap
0.27 ± 8% +0.1 0.40 ± 10% perf-profile.children.cycles-pp.free_unref_page_list
0.30 ± 3% +0.1 0.43 ± 4% perf-profile.children.cycles-pp.generic_write_checks
0.49 ± 4% +0.1 0.62 ± 6% perf-profile.children.cycles-pp.xfs_vn_unlink
0.32 ± 5% +0.1 0.46 ± 5% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
0.49 ± 5% +0.1 0.62 ± 6% perf-profile.children.cycles-pp.xfs_remove
0.30 ± 6% +0.1 0.44 ± 8% perf-profile.children.cycles-pp.xas_store
0.50 ± 5% +0.1 0.64 ± 6% perf-profile.children.cycles-pp.vfs_unlink
0.22 ± 3% +0.1 0.36 ± 9% perf-profile.children.cycles-pp.uncharge_batch
0.20 ± 5% +0.1 0.35 ± 9% perf-profile.children.cycles-pp.page_counter_uncharge
0.08 ± 32% +0.1 0.23 ± 10% perf-profile.children.cycles-pp.xlog_grant_add_space
0.30 ± 2% +0.2 0.45 ± 5% perf-profile.children.cycles-pp.charge_memcg
0.86 ± 7% +0.2 1.01 ± 8% perf-profile.children.cycles-pp.lookup_open
0.31 ± 5% +0.2 0.47 ± 2% perf-profile.children.cycles-pp.folio_unlock
0.22 ± 5% +0.2 0.38 ± 8% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.70 ± 4% +0.2 0.86 ± 11% perf-profile.children.cycles-pp.xfs_inactive_ifree
0.32 ± 6% +0.2 0.48 ± 6% perf-profile.children.cycles-pp.xfs_break_leased_layouts
0.33 ± 8% +0.2 0.50 ± 6% perf-profile.children.cycles-pp.__folio_cancel_dirty
0.32 ± 2% +0.2 0.49 ± 7% perf-profile.children.cycles-pp.xfs_iext_lookup_extent
0.47 ± 4% +0.2 0.64 ± 12% perf-profile.children.cycles-pp.xfs_ifree
0.25 ± 4% +0.2 0.42 ± 8% perf-profile.children.cycles-pp.__mem_cgroup_uncharge_list
0.32 ± 4% +0.2 0.49 ± 5% perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.39 ± 6% +0.2 0.56 ± 13% perf-profile.children.cycles-pp.xfs_difree
0.36 ± 7% +0.2 0.54 ± 8% perf-profile.children.cycles-pp.delete_from_page_cache_batch
0.46 ± 4% +0.2 0.63 ± 6% perf-profile.children.cycles-pp.__mod_lruvec_page_state
0.33 ± 8% +0.2 0.51 ± 9% perf-profile.children.cycles-pp.percpu_counter_add_batch
0.38 ± 3% +0.2 0.57 ± 4% perf-profile.children.cycles-pp.__might_sleep
0.38 ± 7% +0.2 0.58 ± 6% perf-profile.children.cycles-pp.truncate_cleanup_folio
0.06 +0.2 0.26 ± 14% perf-profile.children.cycles-pp.__down_common
0.05 ± 8% +0.2 0.25 ± 14% perf-profile.children.cycles-pp.schedule_timeout
0.06 +0.2 0.26 ± 13% perf-profile.children.cycles-pp.down
0.06 +0.2 0.26 ± 14% perf-profile.children.cycles-pp.xfs_buf_lock
0.42 ± 4% +0.2 0.62 ± 4% perf-profile.children.cycles-pp.get_page_from_freelist
0.42 ± 4% +0.2 0.62 ± 4% perf-profile.children.cycles-pp.__mem_cgroup_charge
0.42 ± 6% +0.2 0.63 ± 4% perf-profile.children.cycles-pp.__cond_resched
0.46 ± 5% +0.2 0.67 ± 5% perf-profile.children.cycles-pp.__fget_light
0.47 ± 6% +0.2 0.69 ± 3% perf-profile.children.cycles-pp.xfs_file_write_iter
0.39 ± 3% +0.2 0.61 ± 7% perf-profile.children.cycles-pp.xfs_buffered_write_iomap_end
0.29 ± 12% +0.2 0.51 ± 44% perf-profile.children.cycles-pp.__softirqentry_text_start
0.32 ± 11% +0.2 0.55 ± 42% perf-profile.children.cycles-pp.__irq_exit_rcu
0.48 ± 4% +0.2 0.71 ± 5% perf-profile.children.cycles-pp.disk_rw
0.09 ± 4% +0.2 0.32 ± 14% perf-profile.children.cycles-pp.xfs_ialloc_read_agi
0.44 ± 4% +0.2 0.68 ± 5% perf-profile.children.cycles-pp.__pagevec_lru_add
0.12 ± 3% +0.2 0.36 ± 13% perf-profile.children.cycles-pp.xfs_read_agi
0.46 ± 4% +0.2 0.71 ± 6% perf-profile.children.cycles-pp.folio_account_dirtied
0.13 ± 33% +0.2 0.38 ± 13% perf-profile.children.cycles-pp.xfs_log_reserve
0.08 ± 4% +0.2 0.33 ± 13% perf-profile.children.cycles-pp.update_sg_lb_stats
0.14 ± 31% +0.3 0.39 ± 14% perf-profile.children.cycles-pp.xfs_trans_reserve
0.48 ± 4% +0.3 0.74 ± 5% perf-profile.children.cycles-pp.folio_add_lru
0.16 ± 30% +0.3 0.41 ± 13% perf-profile.children.cycles-pp.xfs_trans_alloc
0.19 ± 4% +0.3 0.45 ± 10% perf-profile.children.cycles-pp.xfs_buf_find
0.09 ± 6% +0.3 0.35 ± 14% perf-profile.children.cycles-pp.update_sd_lb_stats
0.00 +0.3 0.26 ± 18% perf-profile.children.cycles-pp.xfs_trans_roll
0.53 ± 4% +0.3 0.79 ± 5% perf-profile.children.cycles-pp.ksys_lseek
0.09 ± 6% +0.3 0.35 ± 14% perf-profile.children.cycles-pp.find_busiest_group
0.00 +0.3 0.26 ± 17% perf-profile.children.cycles-pp.xfs_defer_trans_roll
0.22 ± 4% +0.3 0.48 ± 10% perf-profile.children.cycles-pp.xfs_buf_read_map
0.00 +0.3 0.26 ± 17% perf-profile.children.cycles-pp.xfs_defer_finish
0.20 ± 4% +0.3 0.46 ± 10% perf-profile.children.cycles-pp.xfs_buf_get_map
0.53 ± 5% +0.3 0.80 ± 5% perf-profile.children.cycles-pp.xfs_break_layouts
0.53 ± 4% +0.3 0.80 ± 5% perf-profile.children.cycles-pp.__alloc_pages
0.54 ± 5% +0.3 0.81 ± 5% perf-profile.children.cycles-pp.balance_dirty_pages_ratelimited
0.02 ±142% +0.3 0.29 ± 16% perf-profile.children.cycles-pp.xfs_itruncate_extents_flags
0.08 ± 4% +0.3 0.36 ± 16% perf-profile.children.cycles-pp.newidle_balance
0.59 ± 5% +0.3 0.86 ± 4% perf-profile.children.cycles-pp.__fdget_pos
0.10 ± 3% +0.3 0.39 ± 15% perf-profile.children.cycles-pp.load_balance
0.09 ± 6% +0.3 0.38 ± 15% perf-profile.children.cycles-pp.pick_next_task_fair
0.58 ± 4% +0.3 0.88 ± 4% perf-profile.children.cycles-pp.apparmor_file_permission
0.32 ± 4% +0.3 0.62 ± 10% perf-profile.children.cycles-pp.xfs_trans_read_buf_map
0.62 ± 4% +0.3 0.92 ± 6% perf-profile.children.cycles-pp.folio_alloc
0.67 ± 5% +0.3 0.98 ± 4% perf-profile.children.cycles-pp.xas_load
0.14 ± 2% +0.3 0.46 ± 13% perf-profile.children.cycles-pp.schedule
0.64 ± 4% +0.3 0.97 ± 6% perf-profile.children.cycles-pp.__folio_mark_dirty
0.62 ± 4% +0.3 0.96 ± 5% perf-profile.children.cycles-pp.up_write
0.67 ± 4% +0.3 1.02 ± 4% perf-profile.children.cycles-pp.security_file_permission
0.18 ± 2% +0.3 0.52 ± 12% perf-profile.children.cycles-pp.__schedule
0.75 ± 4% +0.4 1.13 ± 5% perf-profile.children.cycles-pp.fault_in_readable
0.98 ± 4% +0.4 1.36 ± 5% perf-profile.children.cycles-pp.memset_erms
0.99 ± 2% +0.4 1.39 ± 5% perf-profile.children.cycles-pp.__filemap_add_folio
1.00 ± 4% +0.4 1.40 ± 5% perf-profile.children.cycles-pp.zero_user_segments
0.89 ± 4% +0.4 1.34 ± 4% perf-profile.children.cycles-pp.fault_in_iov_iter_readable
0.86 ± 4% +0.5 1.31 ± 4% perf-profile.children.cycles-pp.xfs_iunlock
0.81 ± 4% +0.5 1.27 ± 7% perf-profile.children.cycles-pp.__pagevec_release
0.85 ± 5% +0.5 1.32 ± 7% perf-profile.children.cycles-pp.release_pages
1.01 ± 5% +0.5 1.49 ± 4% perf-profile.children.cycles-pp.__might_resched
1.15 ± 5% +0.5 1.67 ± 4% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.05 ± 8% +0.6 0.62 ± 16% perf-profile.children.cycles-pp.xfs_inactive_truncate
0.41 ± 20% +0.6 0.98 ± 6% perf-profile.children.cycles-pp.xfs_vn_update_time
1.24 ± 4% +0.6 1.88 ± 4% perf-profile.children.cycles-pp.filemap_dirty_folio
1.38 ± 5% +0.6 2.01 ± 5% perf-profile.children.cycles-pp.__entry_text_start
1.49 ± 3% +0.7 2.15 ± 5% perf-profile.children.cycles-pp.filemap_add_folio
1.35 ± 4% +0.7 2.03 ± 3% perf-profile.children.cycles-pp.down_write
1.47 ± 4% +0.7 2.16 ± 4% perf-profile.children.cycles-pp.rwsem_spin_on_owner
0.75 ± 5% +0.7 1.48 ± 11% perf-profile.children.cycles-pp.xfs_inactive
1.76 ± 5% +0.7 2.49 ± 6% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
0.76 ± 5% +0.7 1.50 ± 10% perf-profile.children.cycles-pp.xfs_inodegc_worker
0.78 ± 4% +0.7 1.53 ± 10% perf-profile.children.cycles-pp.process_one_work
1.82 ± 4% +0.8 2.58 ± 6% perf-profile.children.cycles-pp.copyin
0.66 ± 4% +0.8 1.42 ± 9% perf-profile.children.cycles-pp.xlog_cil_insert_items
0.84 ± 12% +0.8 1.62 ± 5% perf-profile.children.cycles-pp.file_update_time
1.55 ± 3% +0.8 2.32 ± 4% perf-profile.children.cycles-pp.xfs_ilock
0.78 ± 4% +0.8 1.58 ± 10% perf-profile.children.cycles-pp.worker_thread
0.79 ± 4% +0.8 1.61 ± 10% perf-profile.children.cycles-pp.ret_from_fork
0.79 ± 4% +0.8 1.61 ± 10% perf-profile.children.cycles-pp.kthread
0.23 ± 6% +0.8 1.05 ± 7% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.35 ± 4% +0.9 1.24 ± 7% perf-profile.children.cycles-pp._raw_spin_lock
1.76 ± 6% +0.9 2.68 ± 6% perf-profile.children.cycles-pp.truncate_inode_pages_range
1.78 ± 6% +0.9 2.70 ± 6% perf-profile.children.cycles-pp.evict
1.81 ± 6% +1.0 2.76 ± 6% perf-profile.children.cycles-pp.dentry_kill
1.80 ± 6% +1.0 2.76 ± 6% perf-profile.children.cycles-pp.__dentry_kill
1.84 ± 5% +1.0 2.81 ± 6% perf-profile.children.cycles-pp.dput
0.94 ± 4% +1.0 1.92 ± 6% perf-profile.children.cycles-pp.xlog_cil_commit
2.27 ± 5% +1.0 3.26 ± 5% perf-profile.children.cycles-pp.copy_page_from_iter_atomic
1.90 ± 6% +1.0 2.91 ± 6% perf-profile.children.cycles-pp.task_work_run
1.90 ± 6% +1.0 2.90 ± 6% perf-profile.children.cycles-pp.__fput
1.91 ± 6% +1.0 2.92 ± 6% perf-profile.children.cycles-pp.exit_to_user_mode_loop
1.92 ± 6% +1.0 2.94 ± 6% perf-profile.children.cycles-pp.__close
0.97 ± 4% +1.0 1.99 ± 6% perf-profile.children.cycles-pp.__xfs_trans_commit
2.13 ± 4% +1.0 3.16 ± 5% perf-profile.children.cycles-pp.llseek
2.04 ± 6% +1.1 3.13 ± 6% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
2.21 ± 6% +1.2 3.38 ± 6% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
2.00 ± 5% +1.3 3.33 ± 4% perf-profile.children.cycles-pp.xfs_file_write_checks
3.33 ± 4% +1.4 4.70 ± 4% perf-profile.children.cycles-pp.__iomap_write_begin
3.00 ± 4% +1.5 4.50 ± 4% perf-profile.children.cycles-pp.xfs_buffered_write_iomap_begin
4.47 ± 4% +2.1 6.57 ± 5% perf-profile.children.cycles-pp.__filemap_get_folio
4.50 ± 3% +2.3 6.78 ± 4% perf-profile.children.cycles-pp.iomap_iter
6.38 ± 4% +2.9 9.23 ± 4% perf-profile.children.cycles-pp.iomap_write_end
8.16 ± 4% +3.6 11.79 ± 4% perf-profile.children.cycles-pp.iomap_write_begin
19.12 ± 4% +8.6 27.75 ± 4% perf-profile.children.cycles-pp.iomap_write_iter
24.29 ± 4% +11.2 35.50 ± 4% perf-profile.children.cycles-pp.iomap_file_buffered_write
27.86 ± 3% +13.3 41.20 ± 4% perf-profile.children.cycles-pp.xfs_file_buffered_write
28.72 ± 4% +13.7 42.46 ± 4% perf-profile.children.cycles-pp.new_sync_write
30.67 ± 3% +14.8 45.48 ± 4% perf-profile.children.cycles-pp.vfs_write
31.23 ± 3% +15.1 46.30 ± 4% perf-profile.children.cycles-pp.ksys_write
34.29 ± 4% +16.5 50.82 ± 4% perf-profile.children.cycles-pp.write
29.66 ± 5% -23.1 6.60 ± 4% perf-profile.self.cycles-pp.osq_lock
0.20 ± 9% -0.1 0.12 ± 10% perf-profile.self.cycles-pp.__xfs_btree_check_sblock
0.13 ± 4% -0.1 0.07 ± 6% perf-profile.self.cycles-pp.xfs_buf_item_format_segment
0.09 ± 7% -0.0 0.04 ± 72% perf-profile.self.cycles-pp.xfs_inobt_get_rec
0.05 +0.0 0.08 ± 6% perf-profile.self.cycles-pp.rw_verify_area
0.06 ± 11% +0.0 0.08 ± 5% perf-profile.self.cycles-pp.__folio_cancel_dirty
0.06 ± 9% +0.0 0.08 ± 5% perf-profile.self.cycles-pp.folio_memcg_unlock
0.05 +0.0 0.08 ± 11% perf-profile.self.cycles-pp.alloc_pages
0.04 ± 45% +0.0 0.07 ± 9% perf-profile.self.cycles-pp.folio_add_lru
0.05 ± 7% +0.0 0.08 perf-profile.self.cycles-pp.try_charge_memcg
0.06 ± 6% +0.0 0.09 ± 9% perf-profile.self.cycles-pp.folio_account_cleaned
0.06 ± 6% +0.0 0.09 ± 10% perf-profile.self.cycles-pp.mem_cgroup_charge_statistics
0.05 ± 8% +0.0 0.08 ± 5% perf-profile.self.cycles-pp.__mark_inode_dirty
0.06 ± 11% +0.0 0.09 ± 7% perf-profile.self.cycles-pp.__list_add_valid
0.06 ± 6% +0.0 0.09 ± 7% perf-profile.self.cycles-pp.copyin
0.04 ± 45% +0.0 0.07 ± 6% perf-profile.self.cycles-pp.iomap_iter_done
0.07 ± 9% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.rmqueue
0.04 ± 45% +0.0 0.08 ± 6% perf-profile.self.cycles-pp.cgroup_rstat_updated
0.04 ± 45% +0.0 0.08 ± 9% perf-profile.self.cycles-pp.syscall_exit_to_user_mode_prepare
0.04 ± 45% +0.0 0.08 ± 9% perf-profile.self.cycles-pp.xas_clear_mark
0.06 ± 7% +0.0 0.10 ± 11% perf-profile.self.cycles-pp.__alloc_pages
0.07 ± 8% +0.0 0.10 ± 9% perf-profile.self.cycles-pp.node_dirty_ok
0.10 ± 13% +0.0 0.14 ± 8% perf-profile.self.cycles-pp.get_mem_cgroup_from_mm
0.07 ± 10% +0.0 0.11 ± 8% perf-profile.self.cycles-pp.xfs_get_extsz_hint
0.02 ± 99% +0.0 0.07 ± 11% perf-profile.self.cycles-pp.get_page_from_freelist
0.03 ± 70% +0.0 0.08 ± 10% perf-profile.self.cycles-pp.PageHeadHuge
0.11 ± 8% +0.0 0.15 ± 11% perf-profile.self.cycles-pp.memcpy_erms
0.07 ± 9% +0.0 0.12 ± 8% perf-profile.self.cycles-pp.folio_account_dirtied
0.03 ±100% +0.0 0.07 ± 8% perf-profile.self.cycles-pp.iomap_adjust_read_range
0.09 ± 13% +0.0 0.14 ± 11% perf-profile.self.cycles-pp.security_file_permission
0.07 ± 6% +0.0 0.12 ± 9% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.01 ±223% +0.0 0.06 ± 8% perf-profile.self.cycles-pp.__mod_zone_page_state
0.02 ±141% +0.0 0.06 ± 11% perf-profile.self.cycles-pp.delete_from_page_cache_batch
0.01 ±223% +0.0 0.06 ± 11% perf-profile.self.cycles-pp.__mod_lruvec_state
0.12 ± 5% +0.1 0.17 ± 7% perf-profile.self.cycles-pp.__filemap_add_folio
0.00 +0.1 0.05 ± 7% perf-profile.self.cycles-pp.uncharge_folio
0.10 ± 10% +0.1 0.15 ± 7% perf-profile.self.cycles-pp.file_remove_privs
0.02 ±141% +0.1 0.07 ± 5% perf-profile.self.cycles-pp.iov_iter_init
0.11 ± 12% +0.1 0.17 ± 13% perf-profile.self.cycles-pp.__free_one_page
0.10 ± 4% +0.1 0.16 ± 6% perf-profile.self.cycles-pp.aa_file_perm
0.01 ±223% +0.1 0.06 ± 14% perf-profile.self.cycles-pp._xfs_trans_bjoin
0.01 ±223% +0.1 0.06 ± 17% perf-profile.self.cycles-pp.generic_file_llseek_size
0.01 ±223% +0.1 0.06 ± 11% perf-profile.self.cycles-pp.__x64_sys_write
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.mem_cgroup_update_lru_size
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.charge_memcg
0.13 ± 5% +0.1 0.18 ± 3% perf-profile.self.cycles-pp.folio_memcg_lock
0.10 ± 7% +0.1 0.16 ± 8% perf-profile.self.cycles-pp.page_counter_try_charge
0.00 +0.1 0.06 ± 9% perf-profile.self.cycles-pp.mem_cgroup_track_foreign_dirty_slowpath
0.00 +0.1 0.06 ± 9% perf-profile.self.cycles-pp.filemap_free_folio
0.10 ± 5% +0.1 0.16 ± 6% perf-profile.self.cycles-pp.ksys_lseek
0.11 ± 6% +0.1 0.17 ± 5% perf-profile.self.cycles-pp.iomap_page_create
0.13 ± 8% +0.1 0.19 ± 3% perf-profile.self.cycles-pp.__fdget_pos
0.11 ± 9% +0.1 0.17 ± 7% perf-profile.self.cycles-pp.find_lock_entries
0.01 ±223% +0.1 0.07 ± 7% perf-profile.self.cycles-pp.xlog_space_left
0.02 ±141% +0.1 0.08 ± 16% perf-profile.self.cycles-pp.propagate_protected_usage
0.14 ± 4% +0.1 0.20 ± 4% perf-profile.self.cycles-pp.generic_write_check_limits
0.00 +0.1 0.06 ± 9% perf-profile.self.cycles-pp.xfs_isilocked
0.12 ± 3% +0.1 0.18 ± 6% perf-profile.self.cycles-pp.syscall_exit_to_user_mode
0.10 ± 11% +0.1 0.16 ± 8% perf-profile.self.cycles-pp.exit_to_user_mode_prepare
0.00 +0.1 0.06 ± 11% perf-profile.self.cycles-pp.truncate_cleanup_folio
0.14 ± 6% +0.1 0.21 ± 9% perf-profile.self.cycles-pp.xas_store
0.00 +0.1 0.07 ± 14% perf-profile.self.cycles-pp.idle_cpu
0.15 ± 5% +0.1 0.21 ± 5% perf-profile.self.cycles-pp.fault_in_iov_iter_readable
0.13 ± 5% +0.1 0.20 ± 10% perf-profile.self.cycles-pp.inode_to_bdi
0.00 +0.1 0.07 ± 10% perf-profile.self.cycles-pp.free_unref_page_list
0.13 ± 3% +0.1 0.20 ± 3% perf-profile.self.cycles-pp.rcu_all_qs
0.16 ± 2% +0.1 0.23 ± 4% perf-profile.self.cycles-pp.xas_start
0.16 ± 5% +0.1 0.24 ± 4% perf-profile.self.cycles-pp.do_syscall_64
0.14 ± 5% +0.1 0.22 ± 6% perf-profile.self.cycles-pp.xfs_iread_extents
0.16 ± 3% +0.1 0.23 ± 7% perf-profile.self.cycles-pp.generic_write_checks
0.16 ± 7% +0.1 0.24 ± 7% perf-profile.self.cycles-pp.xfs_break_layouts
0.17 ± 6% +0.1 0.24 ± 7% perf-profile.self.cycles-pp.__mod_node_page_state
0.17 ± 4% +0.1 0.24 ± 6% perf-profile.self.cycles-pp.syscall_enter_from_user_mode
0.15 ± 5% +0.1 0.23 ± 8% perf-profile.self.cycles-pp.current_time
0.16 ± 6% +0.1 0.24 ± 8% perf-profile.self.cycles-pp.xfs_file_llseek
0.15 ± 8% +0.1 0.24 ± 8% perf-profile.self.cycles-pp.release_pages
0.03 ±100% +0.1 0.11 ± 6% perf-profile.self.cycles-pp.xfs_log_ticket_ungrant
0.17 ± 4% +0.1 0.26 ± 5% perf-profile.self.cycles-pp.__pagevec_lru_add_fn
0.20 ± 4% +0.1 0.29 ± 4% perf-profile.self.cycles-pp.xfs_file_write_checks
0.19 ± 12% +0.1 0.29 ± 12% perf-profile.self.cycles-pp.ksys_write
0.20 ± 4% +0.1 0.30 ± 5% perf-profile.self.cycles-pp.file_update_time
0.20 ± 3% +0.1 0.30 ± 6% perf-profile.self.cycles-pp.xfs_ilock
0.18 ± 5% +0.1 0.28 ± 8% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.24 ± 3% +0.1 0.34 ± 4% perf-profile.self.cycles-pp.xfs_errortag_test
0.21 ± 5% +0.1 0.32 ± 5% perf-profile.self.cycles-pp.entry_SYSCALL_64_safe_stack
0.22 ± 10% +0.1 0.32 ± 4% perf-profile.self.cycles-pp.__cond_resched
0.28 ± 4% +0.1 0.39 ± 6% perf-profile.self.cycles-pp.__mod_memcg_lruvec_state
0.18 ± 7% +0.1 0.29 ± 8% perf-profile.self.cycles-pp._raw_spin_lock
0.24 ± 6% +0.1 0.36 ± 6% perf-profile.self.cycles-pp.__list_del_entry_valid
0.25 ± 6% +0.1 0.37 ± 4% perf-profile.self.cycles-pp.xfs_iunlock
0.17 ± 5% +0.1 0.29 ± 8% perf-profile.self.cycles-pp.page_counter_uncharge
0.24 ± 6% +0.1 0.37 ± 4% perf-profile.self.cycles-pp.xfs_bmbt_to_iomap
0.26 ± 5% +0.1 0.40 ± 9% perf-profile.self.cycles-pp.xfs_buffered_write_iomap_end
0.28 ± 9% +0.1 0.42 ± 10% perf-profile.self.cycles-pp.percpu_counter_add_batch
0.30 ± 5% +0.1 0.45 ± 3% perf-profile.self.cycles-pp.folio_unlock
0.08 ± 32% +0.1 0.23 ± 9% perf-profile.self.cycles-pp.xlog_grant_add_space
0.33 ± 3% +0.2 0.48 ± 5% perf-profile.self.cycles-pp.llseek
0.32 ± 3% +0.2 0.47 ± 7% perf-profile.self.cycles-pp.xfs_iext_lookup_extent
0.32 ± 5% +0.2 0.48 ± 5% perf-profile.self.cycles-pp.xfs_break_leased_layouts
0.32 ± 4% +0.2 0.48 ± 5% perf-profile.self.cycles-pp.__might_sleep
0.30 ± 4% +0.2 0.46 ± 9% perf-profile.self.cycles-pp.new_sync_write
0.36 ± 5% +0.2 0.54 ± 5% perf-profile.self.cycles-pp.iomap_write_begin
0.36 ± 4% +0.2 0.54 ± 4% perf-profile.self.cycles-pp.filemap_dirty_folio
0.06 ± 6% +0.2 0.26 ± 14% perf-profile.self.cycles-pp.update_sg_lb_stats
0.41 ± 7% +0.2 0.61 ± 5% perf-profile.self.cycles-pp.balance_dirty_pages_ratelimited
0.43 ± 4% +0.2 0.63 ± 6% perf-profile.self.cycles-pp.disk_rw
0.44 ± 6% +0.2 0.64 ± 5% perf-profile.self.cycles-pp.__fget_light
0.42 ± 4% +0.2 0.62 ± 5% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.47 ± 6% +0.2 0.68 ± 3% perf-profile.self.cycles-pp.xfs_file_write_iter
0.45 ± 5% +0.2 0.66 ± 5% perf-profile.self.cycles-pp.copy_page_from_iter_atomic
0.46 ± 4% +0.2 0.69 ± 6% perf-profile.self.cycles-pp.xfs_file_buffered_write
0.46 ± 5% +0.2 0.69 ± 4% perf-profile.self.cycles-pp.apparmor_file_permission
0.50 ± 6% +0.2 0.74 ± 4% perf-profile.self.cycles-pp.xas_load
0.52 ± 6% +0.3 0.77 ± 6% perf-profile.self.cycles-pp.iomap_write_iter
0.62 ± 5% +0.3 0.88 ± 6% perf-profile.self.cycles-pp.__entry_text_start
0.59 ± 4% +0.3 0.92 ± 5% perf-profile.self.cycles-pp.up_write
0.69 ± 4% +0.3 1.02 ± 5% perf-profile.self.cycles-pp.iomap_file_buffered_write
0.66 ± 5% +0.3 1.00 ± 4% perf-profile.self.cycles-pp.down_write
0.73 ± 7% +0.4 1.10 ± 5% perf-profile.self.cycles-pp.write
0.73 ± 3% +0.4 1.10 ± 4% perf-profile.self.cycles-pp.fault_in_readable
0.97 ± 4% +0.4 1.35 ± 5% perf-profile.self.cycles-pp.memset_erms
0.79 ± 5% +0.4 1.17 ± 4% perf-profile.self.cycles-pp.xfs_buffered_write_iomap_begin
1.00 ± 5% +0.5 1.47 ± 4% perf-profile.self.cycles-pp.__might_resched
0.93 ± 3% +0.5 1.44 ± 6% perf-profile.self.cycles-pp.vfs_write
1.13 ± 5% +0.5 1.65 ± 4% perf-profile.self.cycles-pp.syscall_return_via_sysret
1.06 ± 3% +0.5 1.58 ± 5% perf-profile.self.cycles-pp.iomap_iter
1.45 ± 3% +0.7 2.13 ± 5% perf-profile.self.cycles-pp.rwsem_spin_on_owner
1.49 ± 4% +0.7 2.20 ± 5% perf-profile.self.cycles-pp.__filemap_get_folio
1.74 ± 5% +0.7 2.46 ± 6% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
0.23 ± 6% +0.8 1.05 ± 7% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
2.14 ± 3% +0.9 3.02 ± 4% perf-profile.self.cycles-pp.__iomap_write_begin
4.82 ± 4% +2.1 6.89 ± 5% perf-profile.self.cycles-pp.iomap_write_end
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://01.org/lkp
[xfs] 1e3a7e46a4: stress-ng.rename.ops_per_sec 248.5% improvement
by kernel test robot
Greetings,
FYI, we noticed a 248.5% improvement of stress-ng.rename.ops_per_sec due to commit:
commit: 1e3a7e46a47e6a53068fb9bee539daf78105fb4b ("[PATCH 2/2] xfs: introduce xfs_inodegc_push()")
url: https://github.com/intel-lab-lkp/linux/commits/Dave-Chinner/xfs-non-block...
base: https://git.kernel.org/cgit/fs/xfs/xfs-linux.git for-next
patch link: https://lore.kernel.org/linux-xfs/[email protected]
in testcase: stress-ng
on test machine: 96 threads 2 sockets Ice Lake with 256G memory
with the following parameters:
nr_threads: 10%
disk: 1HDD
testtime: 60s
fs: xfs
class: filesystem
test: rename
cpufreq_governor: performance
ucode: 0xb000280
Details are as follows:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp directories to run from a clean state.
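As a quick sanity check on the headline figures in the results table below (a toy calculation, not part of the lkp tooling): ops_per_sec is just ops divided by the 60 s testtime, and the %change column is new/old - 1. The helper names below are illustrative only:

```c
/* Toy sanity check, not part of the lkp tooling: relate
 * stress-ng.rename.ops, the 60 s testtime, and the %change column. */
static double ops_per_sec(double ops, double testtime_s)
{
    return ops / testtime_s;
}

static double percent_change(double old_val, double new_val)
{
    return (new_val / old_val - 1.0) * 100.0;
}
```

Both columns are averages over repeated runs, so the division only matches the table to within a few ops/s.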
=========================================================================================
class/compiler/cpufreq_governor/disk/fs/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime/ucode:
filesystem/gcc-11/performance/1HDD/xfs/x86_64-rhel-8.3/10%/debian-10.4-x86_64-20200603.cgz/lkp-icl-2sp1/rename/stress-ng/60s/0xb000280
commit:
55a3d6bbc5 ("xfs: bound maximum wait time for inodegc work")
1e3a7e46a4 ("xfs: introduce xfs_inodegc_push()")
55a3d6bbc5cc34a8 1e3a7e46a47e6a53068fb9bee53
---------------- ---------------------------
%stddev %change %stddev
\ | \
3794914 ± 3% +248.5% 13224917 ± 3% stress-ng.rename.ops
63247 ± 3% +248.5% 220409 ± 3% stress-ng.rename.ops_per_sec
24480 ± 16% +140.8% 58958 ± 8% stress-ng.time.involuntary_context_switches
772.83 -1.3% 762.83 stress-ng.time.percent_of_cpu_this_job_got
476.74 -2.6% 464.54 stress-ng.time.system_time
57698 ± 20% +88.2% 108591 ± 5% stress-ng.time.voluntary_context_switches
8.67 ± 2% -4.0% 8.32 ± 2% iostat.cpu.system
0.08 ± 8% +0.1 0.18 ± 2% mpstat.cpu.all.usr%
209407 ± 19% +28.4% 268870 ± 13% numa-numastat.node0.numa_hit
208845 ± 19% +28.6% 268487 ± 13% numa-vmstat.node0.numa_hit
1338 ± 20% -26.0% 990.58 ± 18% sched_debug.cfs_rq:/.load_avg.max
5276 ± 7% +54.6% 8157 ± 3% vmstat.system.cs
305.17 ± 4% -4.3% 292.17 ± 3% turbostat.Avg_MHz
0.08 ± 8% +129.8% 0.18 ± 4% turbostat.IPC
28098 +1.3% 28452 proc-vmstat.nr_slab_reclaimable
453184 +18.1% 535149 ± 2% proc-vmstat.numa_hit
366298 +22.4% 448457 ± 3% proc-vmstat.numa_local
453048 +18.2% 535346 ± 2% proc-vmstat.pgalloc_normal
408545 +19.9% 489665 ± 3% proc-vmstat.pgfree
54437 ± 13% +14.3% 62206 proc-vmstat.pgpgout
13.24 ± 39% -66.4% 4.44 ± 42% perf-stat.i.MPKI
1.407e+09 +119.0% 3.081e+09 ± 3% perf-stat.i.branch-instructions
5119 ± 8% +58.0% 8088 ± 3% perf-stat.i.context-switches
4.28 ± 4% -56.5% 1.86 ± 3% perf-stat.i.cpi
114.97 +5.2% 120.93 perf-stat.i.cpu-migrations
1.597e+09 +112.2% 3.389e+09 ± 3% perf-stat.i.dTLB-loads
6.079e+08 ± 2% +149.1% 1.514e+09 ± 3% perf-stat.i.dTLB-stores
6.755e+09 +122.1% 1.501e+10 ± 3% perf-stat.i.instructions
0.24 ± 5% +124.5% 0.54 ± 3% perf-stat.i.ipc
38.24 +118.0% 83.38 ± 3% perf-stat.i.metric.M/sec
96.95 -3.4 93.53 perf-stat.i.node-load-miss-rate%
6021058 ± 6% -37.3% 3773142 ± 6% perf-stat.i.node-load-misses
64998 ± 3% +167.7% 173987 ± 5% perf-stat.i.node-loads
69.73 -21.2 48.54 ± 2% perf-stat.i.node-store-miss-rate%
8124437 ± 7% -72.6% 2224691 ± 6% perf-stat.i.node-store-misses
3277470 ± 8% -29.4% 2312551 ± 9% perf-stat.i.node-stores
12.93 ± 39% -67.1% 4.26 ± 39% perf-stat.overall.MPKI
4.32 ± 4% -56.7% 1.87 ± 3% perf-stat.overall.cpi
0.03 ± 71% -0.0 0.01 ±106% perf-stat.overall.dTLB-store-miss-rate%
0.23 ± 4% +131.0% 0.54 ± 3% perf-stat.overall.ipc
98.93 -3.4 95.58 perf-stat.overall.node-load-miss-rate%
71.27 -22.2 49.08 ± 3% perf-stat.overall.node-store-miss-rate%
1.384e+09 +119.0% 3.032e+09 ± 3% perf-stat.ps.branch-instructions
5037 ± 8% +58.1% 7962 ± 3% perf-stat.ps.context-switches
113.00 +5.3% 118.99 perf-stat.ps.cpu-migrations
1.571e+09 +112.3% 3.336e+09 ± 3% perf-stat.ps.dTLB-loads
5.98e+08 ± 2% +149.2% 1.49e+09 ± 3% perf-stat.ps.dTLB-stores
6.647e+09 ± 2% +122.2% 1.477e+10 ± 3% perf-stat.ps.instructions
5927285 ± 6% -37.3% 3714015 ± 6% perf-stat.ps.node-load-misses
63838 ± 3% +168.2% 171208 ± 5% perf-stat.ps.node-loads
7996485 ± 7% -72.6% 2189406 ± 6% perf-stat.ps.node-store-misses
3224465 ± 8% -29.4% 2275720 ± 9% perf-stat.ps.node-stores
4.194e+11 +123.2% 9.362e+11 ± 3% perf-stat.total.instructions
79.30 -79.3 0.00 perf-profile.calltrace.cycles-pp.flush_workqueue.xfs_fs_statfs.statfs_by_dentry.user_statfs.__do_sys_statfs
69.15 ± 2% -69.1 0.00 perf-profile.calltrace.cycles-pp.__mutex_lock.flush_workqueue.xfs_fs_statfs.statfs_by_dentry.user_statfs
82.68 ± 2% -68.7 14.01 ± 3% perf-profile.calltrace.cycles-pp.xfs_fs_statfs.statfs_by_dentry.user_statfs.__do_sys_statfs.do_syscall_64
82.70 ± 2% -68.6 14.14 ± 3% perf-profile.calltrace.cycles-pp.statfs_by_dentry.user_statfs.__do_sys_statfs.do_syscall_64.entry_SYSCALL_64_after_hwframe
83.29 ± 2% -66.6 16.68 ± 3% perf-profile.calltrace.cycles-pp.user_statfs.__do_sys_statfs.do_syscall_64.entry_SYSCALL_64_after_hwframe.__statfs
58.94 ± 2% -58.9 0.00 perf-profile.calltrace.cycles-pp.osq_lock.__mutex_lock.flush_workqueue.xfs_fs_statfs.statfs_by_dentry
42.74 ± 2% -34.6 8.19 ± 3% perf-profile.calltrace.cycles-pp.__do_sys_statfs.do_syscall_64.entry_SYSCALL_64_after_hwframe.__statfs
42.77 ± 2% -34.5 8.29 ± 3% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__statfs
42.78 ± 2% -34.5 8.33 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__statfs
42.83 ± 2% -34.3 8.53 ± 3% perf-profile.calltrace.cycles-pp.__statfs
40.60 ± 2% -32.0 8.64 ± 3% perf-profile.calltrace.cycles-pp.__do_sys_statfs.do_syscall_64.entry_SYSCALL_64_after_hwframe.__statfs.statvfs64
40.60 ± 2% -31.9 8.67 ± 3% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__statfs.statvfs64
40.60 ± 2% -31.9 8.68 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__statfs.statvfs64
40.62 ± 2% -31.9 8.70 ± 3% perf-profile.calltrace.cycles-pp.__statfs.statvfs64
40.63 ± 2% -31.9 8.75 ± 3% perf-profile.calltrace.cycles-pp.statvfs64
9.72 ± 2% -9.7 0.00 perf-profile.calltrace.cycles-pp.flush_workqueue_prep_pwqs.flush_workqueue.xfs_fs_statfs.statfs_by_dentry.user_statfs
9.64 ± 3% -9.6 0.00 perf-profile.calltrace.cycles-pp.mutex_spin_on_owner.__mutex_lock.flush_workqueue.xfs_fs_statfs.statfs_by_dentry
8.67 ± 2% -8.7 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.flush_workqueue_prep_pwqs.flush_workqueue.xfs_fs_statfs.statfs_by_dentry
0.00 +0.6 0.57 ± 6% perf-profile.calltrace.cycles-pp.try_to_unlazy.complete_walk.path_lookupat.filename_lookup.user_path_at_empty
0.00 +0.6 0.58 ± 6% perf-profile.calltrace.cycles-pp.complete_walk.path_lookupat.filename_lookup.user_path_at_empty.user_statfs
0.00 +0.6 0.58 ± 6% perf-profile.calltrace.cycles-pp.__vsnprintf_chk
0.00 +0.6 0.59 ± 6% perf-profile.calltrace.cycles-pp.xfs_vn_lookup.__lookup_hash.do_renameat2.__x64_sys_rename.do_syscall_64
0.00 +0.6 0.59 ± 7% perf-profile.calltrace.cycles-pp.xfs_trans_ijoin.xfs_rename.xfs_vn_rename.vfs_rename.do_renameat2
0.00 +0.6 0.61 ± 4% perf-profile.calltrace.cycles-pp.xfs_log_ticket_ungrant.xlog_cil_commit.__xfs_trans_commit.xfs_rename.xfs_vn_rename
0.00 +0.7 0.69 ± 5% perf-profile.calltrace.cycles-pp.xfs_dir_createname.xfs_rename.xfs_vn_rename.vfs_rename.do_renameat2
0.00 +0.7 0.72 ± 4% perf-profile.calltrace.cycles-pp.path_lookupat.filename_lookup.user_path_at_empty.user_statfs.__do_sys_statfs
0.00 +0.7 0.72 ± 3% perf-profile.calltrace.cycles-pp.path_parentat.filename_parentat.do_renameat2.__x64_sys_rename.do_syscall_64
0.00 +0.7 0.74 ± 4% perf-profile.calltrace.cycles-pp.filename_lookup.user_path_at_empty.user_statfs.__do_sys_statfs.do_syscall_64
0.00 +0.8 0.76 ± 3% perf-profile.calltrace.cycles-pp.filename_parentat.do_renameat2.__x64_sys_rename.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.0 0.97 ± 12% perf-profile.calltrace.cycles-pp.xfs_log_reserve.xfs_trans_reserve.xfs_trans_alloc.xfs_rename.xfs_vn_rename
0.00 +1.0 1.00 ± 4% perf-profile.calltrace.cycles-pp.xlog_cil_insert_items.xlog_cil_commit.__xfs_trans_commit.xfs_rename.xfs_vn_rename
0.00 +1.3 1.28 ± 5% perf-profile.calltrace.cycles-pp.xfs_trans_reserve.xfs_trans_alloc.xfs_rename.xfs_vn_rename.vfs_rename
0.00 +1.3 1.28 ± 4% perf-profile.calltrace.cycles-pp.cpumask_next.xfs_inodegc_queue_all.xfs_fs_statfs.statfs_by_dentry.user_statfs
0.00 +1.4 1.42 ± 5% perf-profile.calltrace.cycles-pp.xfs_trans_alloc.xfs_rename.xfs_vn_rename.vfs_rename.do_renameat2
0.00 +1.4 1.44 ± 4% perf-profile.calltrace.cycles-pp.__lookup_hash.do_renameat2.__x64_sys_rename.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.6 1.58 ± 5% perf-profile.calltrace.cycles-pp.user_path_at_empty.user_statfs.__do_sys_statfs.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.6 1.63 ± 6% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__percpu_counter_sum.xfs_fs_statfs.statfs_by_dentry.user_statfs
0.60 ± 4% +1.8 2.43 ± 4% perf-profile.calltrace.cycles-pp.xlog_cil_commit.__xfs_trans_commit.xfs_rename.xfs_vn_rename.vfs_rename
0.63 ± 5% +2.0 2.66 ± 3% perf-profile.calltrace.cycles-pp.__xfs_trans_commit.xfs_rename.xfs_vn_rename.vfs_rename.do_renameat2
0.00 +2.5 2.48 ± 3% perf-profile.calltrace.cycles-pp._find_next_bit.cpumask_next.__percpu_counter_sum.xfs_fs_statfs.statfs_by_dentry
0.00 +2.7 2.70 ± 5% perf-profile.calltrace.cycles-pp.xfs_inodegc_queue_all.xfs_fs_statfs.statfs_by_dentry.user_statfs.__do_sys_statfs
0.83 ± 37% +3.0 3.82 ± 3% perf-profile.calltrace.cycles-pp.cpumask_next.__percpu_counter_sum.xfs_fs_statfs.statfs_by_dentry.user_statfs
1.51 ± 6% +5.0 6.53 ± 3% perf-profile.calltrace.cycles-pp.xfs_rename.xfs_vn_rename.vfs_rename.do_renameat2.__x64_sys_rename
1.52 ± 6% +5.0 6.56 ± 3% perf-profile.calltrace.cycles-pp.xfs_vn_rename.vfs_rename.do_renameat2.__x64_sys_rename.do_syscall_64
1.82 ± 6% +5.9 7.70 ± 3% perf-profile.calltrace.cycles-pp.vfs_rename.do_renameat2.__x64_sys_rename.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.40 ± 5% +7.9 10.26 ± 3% perf-profile.calltrace.cycles-pp.__percpu_counter_sum.xfs_fs_statfs.statfs_by_dentry.user_statfs.__do_sys_statfs
0.00 +9.7 9.70 ± 2% perf-profile.calltrace.cycles-pp.mutex_spin_on_owner.__mutex_lock.lock_rename.do_renameat2.__x64_sys_rename
0.00 +45.9 45.92 ± 2% perf-profile.calltrace.cycles-pp.osq_lock.__mutex_lock.lock_rename.do_renameat2.__x64_sys_rename
0.00 +56.7 56.70 ± 2% perf-profile.calltrace.cycles-pp.__mutex_lock.lock_rename.do_renameat2.__x64_sys_rename.do_syscall_64
0.00 +57.4 57.37 ± 2% perf-profile.calltrace.cycles-pp.lock_rename.do_renameat2.__x64_sys_rename.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.64 ± 5% +65.7 68.37 ± 2% perf-profile.calltrace.cycles-pp.do_renameat2.__x64_sys_rename.do_syscall_64.entry_SYSCALL_64_after_hwframe.rename
2.76 ± 5% +66.1 68.83 ± 2% perf-profile.calltrace.cycles-pp.__x64_sys_rename.do_syscall_64.entry_SYSCALL_64_after_hwframe.rename
2.78 ± 5% +66.1 68.91 ± 2% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.rename
2.79 ± 5% +66.1 68.94 ± 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.rename
2.83 ± 5% +66.3 69.09 ± 2% perf-profile.calltrace.cycles-pp.rename
79.30 -79.3 0.00 perf-profile.children.cycles-pp.flush_workqueue
82.68 ± 2% -68.7 14.02 ± 3% perf-profile.children.cycles-pp.xfs_fs_statfs
82.70 ± 2% -68.6 14.15 ± 3% perf-profile.children.cycles-pp.statfs_by_dentry
83.29 ± 2% -66.6 16.69 ± 3% perf-profile.children.cycles-pp.user_statfs
83.34 ± 2% -66.5 16.84 ± 3% perf-profile.children.cycles-pp.__do_sys_statfs
83.47 ± 2% -66.2 17.30 ± 3% perf-profile.children.cycles-pp.__statfs
40.64 ± 2% -31.9 8.76 ± 3% perf-profile.children.cycles-pp.statvfs64
58.96 ± 2% -13.0 45.95 ± 2% perf-profile.children.cycles-pp.osq_lock
69.17 ± 2% -12.5 56.70 ± 2% perf-profile.children.cycles-pp.__mutex_lock
9.78 ± 2% -9.8 0.00 perf-profile.children.cycles-pp.flush_workqueue_prep_pwqs
8.74 ± 2% -8.7 0.02 ±141% perf-profile.children.cycles-pp._raw_spin_lock_irq
0.08 ± 17% +0.0 0.12 ± 17% perf-profile.children.cycles-pp.update_sg_lb_stats
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.__list_del_entry_valid
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.xfs_ilock
0.00 +0.1 0.06 ± 9% perf-profile.children.cycles-pp.mnt_want_write
0.00 +0.1 0.06 ± 9% perf-profile.children.cycles-pp.rcu_segcblist_enqueue
0.00 +0.1 0.06 ± 9% perf-profile.children.cycles-pp.apparmor_path_rename
0.00 +0.1 0.06 ± 9% perf-profile.children.cycles-pp.xfs_sort_for_rename
0.00 +0.1 0.06 perf-profile.children.cycles-pp.xfs_idata_realloc
0.00 +0.1 0.06 ± 9% perf-profile.children.cycles-pp.lookup_fast
0.00 +0.1 0.06 ± 11% perf-profile.children.cycles-pp.xfs_trans_ichgtime
0.00 +0.1 0.06 ± 6% perf-profile.children.cycles-pp.stress_temp_hash_truncate
0.06 ± 48% +0.1 0.12 ± 18% perf-profile.children.cycles-pp.schedule
0.00 +0.1 0.06 ± 14% perf-profile.children.cycles-pp.note_gp_changes
0.00 +0.1 0.07 ± 11% perf-profile.children.cycles-pp.map_id_up
0.00 +0.1 0.07 ± 7% perf-profile.children.cycles-pp.security_path_rename
0.00 +0.1 0.07 ± 11% perf-profile.children.cycles-pp.xfs_da_hashname
0.02 ± 99% +0.1 0.10 ± 15% perf-profile.children.cycles-pp.pick_next_task_fair
0.00 +0.1 0.07 ± 23% perf-profile.children.cycles-pp.xfs_ilock_data_map_shared
0.00 +0.1 0.07 ± 8% perf-profile.children.cycles-pp.stress_rename
0.07 ± 49% +0.1 0.14 ± 13% perf-profile.children.cycles-pp.__schedule
0.25 ± 7% +0.1 0.32 ± 3% perf-profile.children.cycles-pp.mutex_lock
0.00 +0.1 0.07 ± 9% perf-profile.children.cycles-pp.___d_drop
0.00 +0.1 0.07 ± 12% perf-profile.children.cycles-pp.__check_heap_object
0.00 +0.1 0.07 ± 6% perf-profile.children.cycles-pp.xlog_cil_alloc_shadow_bufs
0.00 +0.1 0.08 ± 20% perf-profile.children.cycles-pp.xlog_prepare_iovec
0.00 +0.1 0.08 ± 12% perf-profile.children.cycles-pp.mod_objcg_state
0.08 ± 14% +0.1 0.17 ± 9% perf-profile.children.cycles-pp.osq_unlock
0.00 +0.1 0.08 ± 10% perf-profile.children.cycles-pp.xfs_trans_unreserve_and_mod_sb
0.00 +0.1 0.09 ± 14% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
0.00 +0.1 0.09 ± 17% perf-profile.children.cycles-pp.__legitimize_mnt
0.00 +0.1 0.09 ± 4% perf-profile.children.cycles-pp.__might_fault
0.00 +0.1 0.09 ± 18% perf-profile.children.cycles-pp.newidle_balance
0.16 ± 12% +0.1 0.25 ± 6% perf-profile.children.cycles-pp.mutex_unlock
0.00 +0.1 0.10 ± 5% perf-profile.children.cycles-pp.kfree
0.00 +0.1 0.10 ± 7% perf-profile.children.cycles-pp.xfs_iunlock
0.02 ±141% +0.1 0.11 ± 15% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
0.01 ±223% +0.1 0.12 ± 8% perf-profile.children.cycles-pp._copy_to_user
0.08 ± 6% +0.1 0.18 ± 8% perf-profile.children.cycles-pp.__d_lookup
0.02 ±141% +0.1 0.12 ± 3% perf-profile.children.cycles-pp.walk_component
0.00 +0.1 0.11 ± 6% perf-profile.children.cycles-pp.generic_permission
0.02 ±141% +0.1 0.13 ± 30% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.00 +0.1 0.11 ± 9% perf-profile.children.cycles-pp.xfs_inode_item_format_data_fork
0.00 +0.1 0.12 ± 6% perf-profile.children.cycles-pp.xfs_dir2_sf_addname_easy
0.06 ± 9% +0.1 0.18 ± 3% perf-profile.children.cycles-pp.inode_permission
0.06 ± 13% +0.1 0.18 ± 6% perf-profile.children.cycles-pp.fsnotify_get_cookie
0.00 +0.1 0.12 ± 8% perf-profile.children.cycles-pp.rcu_all_qs
0.00 +0.1 0.12 ± 4% perf-profile.children.cycles-pp.up_write
0.00 +0.1 0.12 ± 16% perf-profile.children.cycles-pp.memcg_slab_post_alloc_hook
0.01 ±223% +0.1 0.14 ± 6% perf-profile.children.cycles-pp.do_statfs_native
0.15 ± 21% +0.1 0.28 ± 8% perf-profile.children.cycles-pp.ret_from_fork
0.15 ± 22% +0.1 0.28 ± 8% perf-profile.children.cycles-pp.kthread
0.06 ± 13% +0.1 0.19 ± 2% perf-profile.children.cycles-pp.fsnotify_move
0.03 ± 70% +0.1 0.16 ± 5% perf-profile.children.cycles-pp.__entry_text_start
0.00 +0.1 0.13 ± 3% perf-profile.children.cycles-pp.run_ksoftirqd
0.05 ± 46% +0.1 0.18 ± 8% perf-profile.children.cycles-pp.xfs_dir2_sf_removename
0.01 ±223% +0.1 0.14 ± 3% perf-profile.children.cycles-pp.smpboot_thread_fn
0.00 +0.1 0.14 ± 9% perf-profile.children.cycles-pp.xfs_trans_free
0.01 ±223% +0.1 0.15 ± 7% perf-profile.children.cycles-pp.memset_erms
0.06 ± 9% +0.1 0.20 ± 7% perf-profile.children.cycles-pp.xfs_lock_inodes
0.02 ±141% +0.2 0.17 ± 9% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.05 ± 46% +0.2 0.20 ± 10% perf-profile.children.cycles-pp.memcg_slab_free_hook
0.05 ± 45% +0.2 0.20 ± 5% perf-profile.children.cycles-pp.__list_add_valid
0.06 ± 17% +0.2 0.22 ± 6% perf-profile.children.cycles-pp.__check_object_size
0.04 ± 71% +0.2 0.20 ± 12% perf-profile.children.cycles-pp.xlog_space_left
0.04 ± 71% +0.2 0.20 ± 11% perf-profile.children.cycles-pp.xlog_grant_push_threshold
0.07 ± 16% +0.2 0.23 ± 7% perf-profile.children.cycles-pp.__d_move
0.04 ± 71% +0.2 0.20 ± 11% perf-profile.children.cycles-pp.xlog_grant_push_ail
0.00 +0.2 0.16 ± 5% perf-profile.children.cycles-pp.call_rcu
0.00 +0.2 0.16 ± 5% perf-profile.children.cycles-pp.slab_pre_alloc_hook
0.06 ± 11% +0.2 0.22 ± 7% perf-profile.children.cycles-pp.rcu_do_batch
0.06 ± 7% +0.2 0.24 ± 7% perf-profile.children.cycles-pp.xfs_inode_to_log_dinode
0.06 ± 11% +0.2 0.24 ± 6% perf-profile.children.cycles-pp.kmem_alloc
0.55 ± 12% +0.2 0.74 ± 9% perf-profile.children.cycles-pp.__softirqentry_text_start
0.00 +0.2 0.19 ± 6% perf-profile.children.cycles-pp.__percpu_counter_compare
0.05 ± 45% +0.2 0.24 ± 11% perf-profile.children.cycles-pp.__cond_resched
0.11 ± 14% +0.2 0.31 ± 8% perf-profile.children.cycles-pp.rcu_core
0.00 +0.2 0.20 ± 6% perf-profile.children.cycles-pp.__d_add
0.00 +0.2 0.20 ± 5% perf-profile.children.cycles-pp.__d_rehash
0.05 ± 45% +0.2 0.25 ± 10% perf-profile.children.cycles-pp.__kmalloc
0.09 ± 13% +0.2 0.29 ± 6% perf-profile.children.cycles-pp._IO_default_xsputn
0.00 +0.2 0.21 ± 5% perf-profile.children.cycles-pp.d_splice_alias
0.04 ± 44% +0.2 0.26 ± 3% perf-profile.children.cycles-pp.dentry_kill
0.09 ± 5% +0.2 0.31 ± 3% perf-profile.children.cycles-pp.xlog_grant_add_space
0.08 ± 10% +0.2 0.30 ± 11% perf-profile.children.cycles-pp.down_read
0.00 +0.2 0.22 ± 4% perf-profile.children.cycles-pp.__dentry_kill
0.06 ± 17% +0.2 0.30 ± 6% perf-profile.children.cycles-pp.kmem_cache_alloc_lru
0.10 ± 7% +0.2 0.34 ± 6% perf-profile.children.cycles-pp.__might_resched
0.10 ± 15% +0.2 0.34 ± 6% perf-profile.children.cycles-pp.xfs_dir_removename
0.10 ± 24% +0.3 0.35 ± 19% perf-profile.children.cycles-pp.xfs_mod_freecounter
0.07 ± 9% +0.3 0.33 ± 8% perf-profile.children.cycles-pp.__might_sleep
0.09 ± 7% +0.3 0.36 ± 7% perf-profile.children.cycles-pp.xfs_dir_lookup
0.00 +0.3 0.27 ± 4% perf-profile.children.cycles-pp.xfs_trans_del_item
0.09 ± 4% +0.3 0.36 ± 6% perf-profile.children.cycles-pp.xfs_lookup
0.08 ± 4% +0.3 0.35 ± 4% perf-profile.children.cycles-pp.d_lookup
0.11 ± 13% +0.3 0.38 ± 4% perf-profile.children.cycles-pp.path_init
0.11 ± 14% +0.3 0.39 ± 6% perf-profile.children.cycles-pp.kmem_cache_free
0.08 ± 4% +0.3 0.36 ± 4% perf-profile.children.cycles-pp.lookup_dcache
0.14 ± 10% +0.3 0.46 ± 8% perf-profile.children.cycles-pp.vfprintf
0.07 ± 9% +0.3 0.40 ± 8% perf-profile.children.cycles-pp.__d_alloc
0.15 ± 8% +0.3 0.50 ± 3% perf-profile.children.cycles-pp.link_path_walk
0.13 ± 12% +0.4 0.48 ± 6% perf-profile.children.cycles-pp.xfs_inode_item_format
0.14 ± 11% +0.4 0.50 ± 6% perf-profile.children.cycles-pp.d_move
0.15 ± 8% +0.4 0.51 ± 8% perf-profile.children.cycles-pp.xfs_dir2_sf_addname
0.15 ± 11% +0.4 0.52 ± 3% perf-profile.children.cycles-pp.strncpy_from_user
0.10 ± 11% +0.4 0.48 ± 8% perf-profile.children.cycles-pp.d_alloc
0.12 ± 9% +0.4 0.51 ± 7% perf-profile.children.cycles-pp.kmem_cache_alloc
0.15 ± 9% +0.4 0.54 ± 6% perf-profile.children.cycles-pp.xlog_cil_insert_format_items
0.02 ±141% +0.4 0.42 ± 3% perf-profile.children.cycles-pp.down_write
0.02 ±223% +0.4 0.42 ± 25% perf-profile.children.cycles-pp.xlog_ticket_alloc
0.18 ± 9% +0.4 0.60 ± 7% perf-profile.children.cycles-pp.__vsnprintf_chk
0.15 ± 9% +0.5 0.61 ± 4% perf-profile.children.cycles-pp.xfs_log_ticket_ungrant
0.12 ± 6% +0.5 0.60 ± 6% perf-profile.children.cycles-pp.xfs_vn_lookup
0.19 ± 8% +0.5 0.70 ± 5% perf-profile.children.cycles-pp.xfs_dir_createname
0.22 ± 5% +0.5 0.73 ± 3% perf-profile.children.cycles-pp.path_parentat
0.23 ± 5% +0.5 0.77 ± 4% perf-profile.children.cycles-pp.filename_parentat
0.21 ± 10% +0.6 0.78 ± 6% perf-profile.children.cycles-pp.xfs_trans_log_inode
0.00 +0.6 0.58 ± 11% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.24 ± 7% +0.6 0.82 ± 4% perf-profile.children.cycles-pp.getname_flags
0.00 +0.6 0.59 ± 7% perf-profile.children.cycles-pp.xfs_trans_ijoin
0.05 ± 8% +0.6 0.65 ± 6% perf-profile.children.cycles-pp.lockref_get_not_dead
0.25 ± 5% +0.6 0.90 ± 5% perf-profile.children.cycles-pp.path_put
0.08 ± 5% +0.7 0.75 ± 7% perf-profile.children.cycles-pp.__legitimize_path
0.10 ± 3% +0.7 0.81 ± 6% perf-profile.children.cycles-pp.try_to_unlazy
0.24 ± 4% +0.7 0.97 ± 6% perf-profile.children.cycles-pp.lockref_put_return
0.28 ± 5% +0.7 1.01 ± 4% perf-profile.children.cycles-pp.xlog_cil_insert_items
0.10 ± 4% +0.7 0.84 ± 6% perf-profile.children.cycles-pp.complete_walk
0.19 ± 21% +0.8 0.98 ± 12% perf-profile.children.cycles-pp.xfs_log_reserve
0.17 ± 6% +0.9 1.07 ± 4% perf-profile.children.cycles-pp.path_lookupat
0.18 ± 7% +0.9 1.10 ± 4% perf-profile.children.cycles-pp.filename_lookup
0.28 ± 6% +1.0 1.28 ± 5% perf-profile.children.cycles-pp.xfs_trans_reserve
0.32 ± 6% +1.1 1.42 ± 5% perf-profile.children.cycles-pp.xfs_trans_alloc
0.37 ± 3% +1.1 1.47 ± 4% perf-profile.children.cycles-pp.dput
0.30 ± 4% +1.2 1.45 ± 5% perf-profile.children.cycles-pp.__lookup_hash
0.32 ± 5% +1.3 1.58 ± 5% perf-profile.children.cycles-pp.user_path_at_empty
0.39 ± 6% +1.3 1.73 ± 6% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.42 ± 7% +1.5 1.91 ± 4% perf-profile.children.cycles-pp._raw_spin_lock
0.60 ± 4% +1.8 2.44 ± 3% perf-profile.children.cycles-pp.xlog_cil_commit
0.64 ± 5% +2.0 2.68 ± 3% perf-profile.children.cycles-pp.__xfs_trans_commit
0.78 ± 8% +2.1 2.85 ± 4% perf-profile.children.cycles-pp.xfs_inodegc_queue_all
1.13 ± 6% +2.8 3.93 ± 3% perf-profile.children.cycles-pp._find_next_bit
1.74 ± 7% +4.5 6.22 ± 3% perf-profile.children.cycles-pp.cpumask_next
1.51 ± 6% +5.0 6.55 ± 3% perf-profile.children.cycles-pp.xfs_rename
1.52 ± 6% +5.0 6.57 ± 3% perf-profile.children.cycles-pp.xfs_vn_rename
1.82 ± 6% +5.9 7.71 ± 3% perf-profile.children.cycles-pp.vfs_rename
2.53 ± 5% +8.2 10.72 ± 3% perf-profile.children.cycles-pp.__percpu_counter_sum
0.11 ± 12% +57.3 57.38 ± 2% perf-profile.children.cycles-pp.lock_rename
2.65 ± 5% +65.8 68.40 ± 2% perf-profile.children.cycles-pp.do_renameat2
2.76 ± 5% +66.1 68.83 ± 2% perf-profile.children.cycles-pp.__x64_sys_rename
2.84 ± 5% +66.3 69.13 ± 2% perf-profile.children.cycles-pp.rename
58.78 ± 2% -13.0 45.77 ± 2% perf-profile.self.cycles-pp.osq_lock
8.70 ± 2% -8.7 0.02 ±141% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.06 ± 13% +0.0 0.09 ± 15% perf-profile.self.cycles-pp.update_sg_lb_stats
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.rcu_segcblist_enqueue
0.00 +0.1 0.05 ± 13% perf-profile.self.cycles-pp.__vsnprintf_chk
0.00 +0.1 0.06 perf-profile.self.cycles-pp.do_syscall_64
0.00 +0.1 0.06 ± 9% perf-profile.self.cycles-pp.__entry_text_start
0.00 +0.1 0.06 perf-profile.self.cycles-pp.stress_temp_hash_truncate
0.00 +0.1 0.06 ± 11% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.00 +0.1 0.06 ± 11% perf-profile.self.cycles-pp.map_id_up
0.00 +0.1 0.06 ± 7% perf-profile.self.cycles-pp.do_renameat2
0.00 +0.1 0.06 ± 7% perf-profile.self.cycles-pp.kfree
0.00 +0.1 0.06 ± 7% perf-profile.self.cycles-pp.stress_rename
0.00 +0.1 0.06 ± 14% perf-profile.self.cycles-pp.rename
0.00 +0.1 0.06 ± 11% perf-profile.self.cycles-pp.xfs_da_hashname
0.00 +0.1 0.06 ± 7% perf-profile.self.cycles-pp.___d_drop
0.00 +0.1 0.07 ± 13% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.00 +0.1 0.07 ± 10% perf-profile.self.cycles-pp.__check_heap_object
0.23 ± 7% +0.1 0.30 ± 3% perf-profile.self.cycles-pp.mutex_lock
0.00 +0.1 0.07 ± 11% perf-profile.self.cycles-pp.inode_permission
0.00 +0.1 0.07 ± 21% perf-profile.self.cycles-pp.xlog_prepare_iovec
0.05 ± 47% +0.1 0.12 ± 10% perf-profile.self.cycles-pp.xfs_fs_statfs
0.00 +0.1 0.08 ± 10% perf-profile.self.cycles-pp.getname_flags
0.00 +0.1 0.08 ± 10% perf-profile.self.cycles-pp.xfs_inode_item_format
0.00 +0.1 0.08 ± 14% perf-profile.self.cycles-pp.dput
0.00 +0.1 0.08 ± 11% perf-profile.self.cycles-pp.xfs_rename
0.00 +0.1 0.08 ± 17% perf-profile.self.cycles-pp.syscall_exit_to_user_mode
0.00 +0.1 0.08 ± 12% perf-profile.self.cycles-pp.rcu_all_qs
0.00 +0.1 0.08 ± 14% perf-profile.self.cycles-pp.__legitimize_mnt
0.08 ± 14% +0.1 0.17 ± 9% perf-profile.self.cycles-pp.osq_unlock
0.00 +0.1 0.08 ± 16% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
0.00 +0.1 0.08 ± 8% perf-profile.self.cycles-pp.xfs_dir2_sf_addname_easy
0.00 +0.1 0.08 ± 16% perf-profile.self.cycles-pp.memcg_slab_post_alloc_hook
0.00 +0.1 0.08 ± 5% perf-profile.self.cycles-pp.xfs_dir_lookup
0.00 +0.1 0.08 ± 11% perf-profile.self.cycles-pp.__kmalloc
0.00 +0.1 0.09 ± 12% perf-profile.self.cycles-pp.__check_object_size
0.07 +0.1 0.16 ± 9% perf-profile.self.cycles-pp.__d_lookup
0.00 +0.1 0.09 ± 12% perf-profile.self.cycles-pp.call_rcu
0.16 ± 12% +0.1 0.25 ± 6% perf-profile.self.cycles-pp.mutex_unlock
0.00 +0.1 0.09 ± 9% perf-profile.self.cycles-pp.__statfs
0.00 +0.1 0.10 ± 5% perf-profile.self.cycles-pp.generic_permission
0.00 +0.1 0.11 ± 10% perf-profile.self.cycles-pp.xfs_trans_ijoin
0.00 +0.1 0.11 ± 13% perf-profile.self.cycles-pp.xfs_lock_inodes
0.00 +0.1 0.12 ± 14% perf-profile.self.cycles-pp.memcg_slab_free_hook
0.00 +0.1 0.12 ± 9% perf-profile.self.cycles-pp.xfs_inode_to_log_dinode
0.06 ± 13% +0.1 0.17 ± 4% perf-profile.self.cycles-pp.fsnotify_get_cookie
0.00 +0.1 0.12 ± 5% perf-profile.self.cycles-pp.up_write
0.00 +0.1 0.12 ± 9% perf-profile.self.cycles-pp.statfs_by_dentry
0.00 +0.1 0.12 ± 15% perf-profile.self.cycles-pp.__cond_resched
0.00 +0.1 0.13 ± 9% perf-profile.self.cycles-pp.xfs_trans_free
0.06 ± 11% +0.1 0.20 ± 12% perf-profile.self.cycles-pp.kmem_cache_alloc
0.02 ±141% +0.1 0.15 ± 3% perf-profile.self.cycles-pp.kmem_cache_free
0.01 ±223% +0.1 0.15 ± 5% perf-profile.self.cycles-pp.memset_erms
0.00 +0.1 0.14 ± 7% perf-profile.self.cycles-pp.vfs_rename
0.04 ± 45% +0.1 0.19 ± 5% perf-profile.self.cycles-pp.__list_add_valid
0.02 ±141% +0.1 0.17 ± 10% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.04 ± 71% +0.2 0.19 ± 12% perf-profile.self.cycles-pp.xlog_space_left
0.06 ± 7% +0.2 0.22 ± 5% perf-profile.self.cycles-pp.link_path_walk
0.00 +0.2 0.17 ± 4% perf-profile.self.cycles-pp.d_lookup
0.06 ± 14% +0.2 0.24 ± 6% perf-profile.self.cycles-pp.strncpy_from_user
0.07 ± 9% +0.2 0.25 ± 11% perf-profile.self.cycles-pp.down_read
0.08 ± 14% +0.2 0.26 ± 6% perf-profile.self.cycles-pp._IO_default_xsputn
0.00 +0.2 0.19 ± 6% perf-profile.self.cycles-pp.__percpu_counter_compare
0.00 +0.2 0.20 ± 5% perf-profile.self.cycles-pp.__d_rehash
0.09 ± 6% +0.2 0.31 ± 3% perf-profile.self.cycles-pp.xlog_grant_add_space
0.09 ± 5% +0.2 0.32 ± 7% perf-profile.self.cycles-pp.__might_resched
0.06 ± 11% +0.2 0.30 ± 8% perf-profile.self.cycles-pp.__might_sleep
0.00 +0.3 0.26 ± 4% perf-profile.self.cycles-pp.xfs_trans_del_item
0.10 ± 11% +0.3 0.38 ± 4% perf-profile.self.cycles-pp.path_init
0.14 ± 9% +0.3 0.43 ± 7% perf-profile.self.cycles-pp.vfprintf
0.00 +0.3 0.30 ± 2% perf-profile.self.cycles-pp.down_write
0.43 ± 6% +0.4 0.79 ± 2% perf-profile.self.cycles-pp.__mutex_lock
0.17 ± 11% +0.4 0.62 ± 6% perf-profile.self.cycles-pp.xfs_trans_log_inode
0.15 ± 8% +0.4 0.60 ± 4% perf-profile.self.cycles-pp.xfs_log_ticket_ungrant
0.00 +0.6 0.57 ± 10% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.05 +0.6 0.64 ± 7% perf-profile.self.cycles-pp.lockref_get_not_dead
0.24 ± 4% +0.7 0.95 ± 6% perf-profile.self.cycles-pp.lockref_put_return
0.38 ± 6% +0.8 1.16 ± 6% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.37 ± 7% +1.0 1.40 ± 7% perf-profile.self.cycles-pp.xfs_inodegc_queue_all
0.40 ± 7% +1.5 1.87 ± 4% perf-profile.self.cycles-pp._raw_spin_lock
0.63 ± 8% +1.7 2.28 ± 3% perf-profile.self.cycles-pp.cpumask_next
0.97 ± 6% +2.4 3.38 ± 3% perf-profile.self.cycles-pp._find_next_bit
1.05 ± 4% +3.7 4.76 ± 5% perf-profile.self.cycles-pp.__percpu_counter_sum
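Reading the profile: before the patch, xfs_fs_statfs spent ~79% of cycles under flush_workqueue (and the mutex guarding it); afterwards that call is gone and contention moves to lock_rename, the expected serialization point for a single-directory rename workload. A minimal user-space model of the behavioural difference, assuming the patch replaces a blocking inodegc flush with a non-blocking push in the statfs path (the struct and function names here are hypothetical, not kernel code):

```c
#include <stdbool.h>

/* Toy model (not kernel code): 'pending' counts queued inode GC
 * items, 'scheduled' means a worker has been kicked. */
struct gc_queue {
    int  pending;
    bool scheduled;
};

/* Old statfs path: block until all pending GC work is done, so the
 * free-space counters are exact (stands in for flush_workqueue()). */
static void statfs_flush(struct gc_queue *q)
{
    while (q->pending)
        q->pending--;        /* simulates waiting on the worker */
}

/* New statfs path: just make sure a worker is scheduled and return
 * immediately with slightly stale counters (stands in for queue_work()). */
static void statfs_push(struct gc_queue *q)
{
    if (q->pending)
        q->scheduled = true; /* no waiting, no flush mutex held */
}
```

In this model the old path makes every statfs caller wait behind inode GC, while the new path returns at once, which is consistent with the rename syscalls (rather than statfs) now dominating the profile.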
--
0-DAY CI Kernel Test Service
https://01.org/lkp
[mac80211] 13d5c8ebd3: hwsim.dpp_pfs_errors.fail
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-11):
commit: 13d5c8ebd318e7c7258a675a44068d3a750d801f ("[PATCH v2 3/3] mac80211: refactor some key code")
url: https://github.com/intel-lab-lkp/linux/commits/Johannes-Berg/mac80211_hws...
base: https://git.kernel.org/cgit/linux/kernel/git/wireless/wireless-next.git main
patch link: https://lore.kernel.org/linux-wireless/20220519232721.6febb5d5b82b.I3e8b3...
in testcase: hwsim
version: hwsim-x86_64-717e5d7-1_20220411
with the following parameters:
test: group-07
ucode: 0x21
on test machine: 8 threads 1 sockets Intel(R) Core(TM) i7-3770K CPU @ 3.50GHz with 16G memory
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <oliver.sang(a)intel.com>
Please note: besides dpp_pfs_errors, we also found that test
wpa2_ocv_sta_override_sa_query_req failed on this commit but passed on its
parent.
2022-05-25 14:27:44 ./run-tests.py dpp_pfs_errors
DEV: wlan0: 02:00:00:00:00:00
DEV: wlan1: 02:00:00:00:01:00
DEV: wlan2: 02:00:00:00:02:00
APDEV: wlan3
APDEV: wlan4
START dpp_pfs_errors 1/1
Test: DPP PFS error cases
Starting AP wlan3
Connect STA wlan0 to AP
Connection timed out
Traceback (most recent call last):
File "./run-tests.py", line 533, in main
t(dev, apdev)
File "/lkp/benchmarks/hwsim/tests/hwsim/test_dpp.py", line 6553, in test_dpp_pfs_errors
dpp_netaccesskey=params1_sta_netaccesskey)
File "/lkp/benchmarks/hwsim/tests/hwsim/wpasupplicant.py", line 1140, in connect
self.connect_network(id)
File "/lkp/benchmarks/hwsim/tests/hwsim/wpasupplicant.py", line 500, in connect_network
self.wait_connected(timeout=timeout)
File "/lkp/benchmarks/hwsim/tests/hwsim/wpasupplicant.py", line 1411, in wait_connected
raise Exception(error)
Exception: Connection timed out
FAIL dpp_pfs_errors 10.14996 2022-05-25 14:27:54.649706
passed 0 test case(s)
skipped 0 test case(s)
failed tests: dpp_pfs_errors
...
2022-05-25 14:28:40 ./run-tests.py wpa2_ocv_sta_override_sa_query_req
DEV: wlan0: 02:00:00:00:00:00
DEV: wlan1: 02:00:00:00:01:00
DEV: wlan2: 02:00:00:00:02:00
APDEV: wlan3
APDEV: wlan4
START wpa2_ocv_sta_override_sa_query_req 1/1
Test: OCV on 2.4 GHz and STA override SA Query Request
Starting AP wlan3
Connect STA wlan0 to AP
Connection timed out
Traceback (most recent call last):
File "./run-tests.py", line 533, in main
t(dev, apdev)
File "/lkp/benchmarks/hwsim/tests/hwsim/test_ocv.py", line 943, in test_wpa2_ocv_sta_override_sa_query_req
ieee80211w="2")
File "/lkp/benchmarks/hwsim/tests/hwsim/wpasupplicant.py", line 1140, in connect
self.connect_network(id)
File "/lkp/benchmarks/hwsim/tests/hwsim/wpasupplicant.py", line 500, in connect_network
self.wait_connected(timeout=timeout)
File "/lkp/benchmarks/hwsim/tests/hwsim/wpasupplicant.py", line 1411, in wait_connected
raise Exception(error)
Exception: Connection timed out
FAIL wpa2_ocv_sta_override_sa_query_req 10.129819 2022-05-25 14:28:50.410355
passed 0 test case(s)
skipped 0 test case(s)
failed tests: wpa2_ocv_sta_override_sa_query_req
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp dirs to run from a clean state.
--
0-DAY CI Kernel Test Service
https://01.org/lkp
[mm] 6adb0a02c2: WARNING:suspicious_RCU_usage
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-11):
commit: 6adb0a02c27c8811bee9783451ee25155baf490e ("[PATCH] mm: memcontrol: add the mempolicy interface for cgroup v2.")
url: https://github.com/intel-lab-lkp/linux/commits/hezhongkun/mm-memcontrol-a...
base: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git 143a6252e1b8ab424b4b293512a97cca7295c182
patch link: https://lore.kernel.org/lkml/[email protected]
in testcase: boot
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 16G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <oliver.sang(a)intel.com>
[ 1.775514][ T2] WARNING: suspicious RCU usage
[ 1.776115][ T2] 5.18.0-01158-g6adb0a02c27c #10 Not tainted
[ 1.776513][ T2] -----------------------------
[ 1.777133][ T2] include/linux/cgroup.h:495 suspicious rcu_dereference_check() usage!
[ 1.777513][ T2]
[ 1.777513][ T2] other info that might help us debug this:
[ 1.777513][ T2]
[ 1.778513][ T2]
[ 1.778513][ T2] rcu_scheduler_active = 1, debug_locks = 1
[ 1.779493][ T2] no locks held by kthreadd/2.
[ 1.779514][ T2]
[ 1.779514][ T2] stack backtrace:
[ 1.780272][ T2] CPU: 0 PID: 2 Comm: kthreadd Not tainted 5.18.0-01158-g6adb0a02c27c #10
[ 1.780509][ T2] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.0-debian-1.16.0-4 04/01/2014
[ 1.780509][ T2] Call Trace:
[ 1.780509][ T2] <TASK>
[ 1.780509][ T2] dump_stack_lvl (kbuild/src/x86_64-2/lib/dump_stack.c:107 (discriminator 4))
[ 1.780509][ T2] mem_cgroup_from_task (kbuild/src/x86_64-2/include/linux/cgroup.h:495 kbuild/src/x86_64-2/mm/memcontrol.c:909)
[ 1.780509][ T2] get_cgrp_or_task_policy (kbuild/src/x86_64-2/mm/mempolicy.c:184)
[ 1.780509][ T2] alloc_pages (kbuild/src/x86_64-2/mm/mempolicy.c:2280)
[ 1.780509][ T2] allocate_slab (kbuild/src/x86_64-2/mm/slub.c:1799 kbuild/src/x86_64-2/mm/slub.c:1944)
[ 1.780509][ T2] ___slab_alloc (kbuild/src/x86_64-2/mm/slub.c:3005)
[ 1.780509][ T2] ? dup_task_struct (kbuild/src/x86_64-2/kernel/fork.c:172 kbuild/src/x86_64-2/kernel/fork.c:971)
[ 1.780509][ T2] kmem_cache_alloc_node (kbuild/src/x86_64-2/mm/slub.c:3092 kbuild/src/x86_64-2/mm/slub.c:3183 kbuild/src/x86_64-2/mm/slub.c:3267)
[ 1.780509][ T2] dup_task_struct (kbuild/src/x86_64-2/kernel/fork.c:172 kbuild/src/x86_64-2/kernel/fork.c:971)
[ 1.780509][ T2] ? trace_hardirqs_on (kbuild/src/x86_64-2/kernel/trace/trace_preemptirq.c:50 (discriminator 22))
[ 1.780509][ T2] copy_process (kbuild/src/x86_64-2/kernel/fork.c:2073)
[ 1.780509][ T2] ? alloc_chain_hlocks (kbuild/src/x86_64-2/kernel/locking/lockdep.c:3455)
[ 1.780509][ T2] ? add_chain_cache (kbuild/src/x86_64-2/kernel/locking/lockdep.c:3664)
[ 1.780509][ T2] ? __lock_acquire (kbuild/src/x86_64-2/kernel/locking/lockdep.c:5029)
[ 1.780509][ T2] ? __cleanup_sighand (kbuild/src/x86_64-2/kernel/fork.c:1982)
[ 1.780509][ T2] ? finish_task_switch+0x20f/0x900
[ 1.780509][ T2] ? check_prev_add (kbuild/src/x86_64-2/kernel/locking/lockdep.c:3759)
[ 1.780509][ T2] ? __lock_release (kbuild/src/x86_64-2/kernel/locking/lockdep.c:5317)
[ 1.780509][ T2] kernel_clone (kbuild/src/x86_64-2/kernel/fork.c:2644)
[ 1.780509][ T2] ? create_io_thread (kbuild/src/x86_64-2/kernel/fork.c:2604)
[ 1.780509][ T2] ? __lock_acquire (kbuild/src/x86_64-2/kernel/locking/lockdep.c:5029)
[ 1.780509][ T2] ? finish_task_switch+0x214/0x900
[ 1.780509][ T2] ? find_held_lock (kbuild/src/x86_64-2/kernel/locking/lockdep.c:5132)
[ 1.780509][ T2] kernel_thread (kbuild/src/x86_64-2/kernel/fork.c:2687)
[ 1.780509][ T2] ? __ia32_sys_clone3 (kbuild/src/x86_64-2/kernel/fork.c:2687)
[ 1.780509][ T2] ? lock_downgrade (kbuild/src/x86_64-2/kernel/locking/lockdep.c:5293)
[ 1.780509][ T2] ? kthread_complete_and_exit (kbuild/src/x86_64-2/kernel/kthread.c:331)
[ 1.780509][ T2] ? kthreadd (kbuild/src/x86_64-2/kernel/kthread.c:396 kbuild/src/x86_64-2/kernel/kthread.c:745)
[ 1.780509][ T2] ? do_raw_spin_unlock (kbuild/src/x86_64-2/arch/x86/include/asm/atomic.h:29 kbuild/src/x86_64-2/include/linux/atomic/atomic-instrumented.h:28 kbuild/src/x86_64-2/include/asm-generic/qspinlock.h:28 kbuild/src/x86_64-2/kernel/locking/spinlock_debug.c:100 kbuild/src/x86_64-2/kernel/locking/spinlock_debug.c:140)
[ 1.780509][ T2] kthreadd (kbuild/src/x86_64-2/kernel/kthread.c:400 kbuild/src/x86_64-2/kernel/kthread.c:745)
[ 1.780509][ T2] ? kthread_is_per_cpu (kbuild/src/x86_64-2/kernel/kthread.c:718)
[ 1.780509][ T2] ret_from_fork (kbuild/src/x86_64-2/arch/x86/entry/entry_64.S:308)
[ 1.780509][ T2] </TASK>
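The warning fires at include/linux/cgroup.h:495 because task_css() uses rcu_dereference_check(), which requires an RCU read-side critical section (or one of the accepted locks) to be held; here mem_cgroup_from_task() is reached from kthreadd with no locks held. Below is a minimal sketch of the usual fix pattern, in kernel-style C. It is not a drop-in patch: the function name get_cgrp_or_task_policy() is taken from the call trace, while the memcg->mempolicy field and the surrounding logic are assumptions based on the patch description.

```c
/*
 * Sketch only (assumed helper shape, not the actual patch code):
 * wrap the memcg lookup in rcu_read_lock()/rcu_read_unlock() so that
 * rcu_dereference_check() inside task_css() sees rcu_read_lock_held()
 * as true and the lockdep splat goes away.
 */
static struct mempolicy *get_cgrp_or_task_policy(struct task_struct *p)
{
	struct mem_cgroup *memcg;
	struct mempolicy *pol = NULL;

	rcu_read_lock();
	memcg = mem_cgroup_from_task(p);	/* calls task_css() internally */
	if (memcg)
		pol = READ_ONCE(memcg->mempolicy);	/* hypothetical field */
	/*
	 * Real code would need to take a reference (e.g. mpol_get())
	 * before dropping the RCU read lock, so the policy cannot be
	 * freed while the caller still uses it.
	 */
	rcu_read_unlock();

	return pol;
}
```

An alternative, if the caller already holds a lock that pins the cgroup, is to pass that lockdep condition into rcu_dereference_check() instead of taking the RCU read lock here.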
[ 1.781590][ T1] cblist_init_generic: Setting adjustable number of callback queues.
[ 1.782518][ T1] cblist_init_generic: Setting shift to 1 and lim to 1.
[ 1.783730][ T1] cblist_init_generic: Setting shift to 1 and lim to 1.
[ 1.784646][ T1] Running RCU-tasks wait API self tests
[ 1.785657][ T1] Performance Events: unsupported p6 CPU model 42 no PMU driver, software events only.
[ 1.787556][ T1] rcu: Hierarchical SRCU implementation.
[ 1.791308][ T1] NMI watchdog: Perf NMI watchdog permanently disabled
[ 1.792109][ T1] smp: Bringing up secondary CPUs ...
[ 1.793634][ T1] x86: Booting SMP configuration:
[ 1.794300][ T1] .... node #0, CPUs: #1
[ 0.090644][ T0] masked ExtINT on CPU#1
[ 1.797615][ T1] smp: Brought up 1 node, 2 CPUs
[ 1.798527][ T1] smpboot: Max logical packages: 1
[ 1.799519][ T1] smpboot: Total of 2 processors activated (8380.31 BogoMIPS)
[ 1.802552][ T11] Callback from call_rcu_tasks_trace() invoked.
[ 1.898728][ T10] Callback from call_rcu_tasks_rude() invoked.
[ 1.998585][ T22] node 0 deferred pages initialised in 196ms
[ 2.099652][ T1] allocated 268435456 bytes of page_ext
[ 2.100769][ T1] Node 0, zone DMA: page owner found early allocated 0 pages
[ 2.106388][ T1] Node 0, zone DMA32: page owner found early allocated 0 pages
[ 2.143231][ T1] Node 0, zone Normal: page owner found early allocated 66872 pages
[ 2.145828][ T1] devtmpfs: initialized
[ 2.147626][ T1] x86/mm: Memory block size: 128MB
[ 2.195988][ T1] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
[ 2.197567][ T1] futex hash table entries: 512 (order: 4, 65536 bytes, linear)
[ 2.199426][ T1] pinctrl core: initialized pinctrl subsystem
[ 2.212984][ T1] NET: Registered PF_NETLINK/PF_ROUTE protocol family
[ 2.217521][ T1] audit: initializing netlink subsys (disabled)
[ 2.219652][ T27] audit: type=2000 audit(1653397015.364:1): state=initialized audit_enabled=0 res=1
[ 2.222174][ T1] thermal_sys: Registered thermal governor 'fair_share'
[ 2.222184][ T1] thermal_sys: Registered thermal governor 'bang_bang'
[ 2.222529][ T1] thermal_sys: Registered thermal governor 'step_wise'
[ 2.223522][ T1] thermal_sys: Registered thermal governor 'user_space'
[ 2.224756][ T1] cpuidle: using governor menu
[ 2.227738][ T1] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[ 2.229834][ T1] PCI: Using configuration type 1 for base access
[ 2.279233][ T1] kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
[ 2.281673][ T1] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
[ 2.285568][ T1] cryptd: max_cpu_qlen set to 1000
[ 2.291676][ T1] ACPI: Added _OSI(Module Device)
[ 2.292523][ T1] ACPI: Added _OSI(Processor Device)
[ 2.293523][ T1] ACPI: Added _OSI(3.0 _SCP Extensions)
[ 2.294523][ T1] ACPI: Added _OSI(Processor Aggregator Device)
[ 2.295547][ T1] ACPI: Added _OSI(Linux-Dell-Video)
[ 2.296535][ T1] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
[ 2.297539][ T1] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
[ 2.347738][ T1] ACPI: 1 ACPI AML tables successfully acquired and loaded
[ 2.363724][ T1] ACPI: Interpreter enabled
[ 2.364811][ T1] ACPI: PM: (supports S0 S3 S4 S5)
[ 2.365567][ T1] ACPI: Using IOAPIC for interrupt routing
[ 2.366799][ T1] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[ 2.370916][ T1] ACPI: Enabled 2 GPEs in block 00 to 0F
To reproduce:
# build kernel
cd linux
cp config-5.18.0-01158-g6adb0a02c27c .config
make HOSTCC=gcc-11 CC=gcc-11 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=gcc-11 CC=gcc-11 ARCH=x86_64 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp dirs to run from a clean state.
--
0-DAY CI Kernel Test Service
https://01.org/lkp